
RESEARCH PROBLEMS, HYPOTHESES

& Operational Definitions

A.  Identifying a Research Problem

     A problem is an unanswered question.

     The problem selected for study depends in part upon the researcher's
          Interests,
          Skills,
          Inventiveness,
          Creativity,
          Resources available.

Sources determining research problems:
     1.  Observation of life events and asking how and/or why they occur.

     2.  Brainstorming, in which two or more people generate ideas for research together.

     3.  Theoretical Predictions - regarding what will emerge if a particular explanatory           system (paradigm) is valid.

     4.  Developments in Technology make possible two types of study:
          1)  studies that investigate old problems in new ways, usually through increases in resolution;
          2)  advances that generate new problems, e.g. computers leading to studies in artificial intelligence.
          a.  Computer Data Analysis now permits efficient accumulation, management, and analysis of huge amounts of information not previously possible.
          b.  Computer Control permits the precise application of stimuli, instructions, and apparatus control, in addition to recording, storing, and analyzing data.
          c.  Computer Simulations can be utilized to model psychological and other processes.


Knowledge of Research Literature

     Science is based upon the research literature.  Without it, every researcher would have to start from zero.  Familiarity with the literature in a discipline helps clarify and select problems and defines what it means to be a professional.

  Problems can be generated by:
     a.  Identifying gaps (lacunae) in existing knowledge.  We know about "A", and we know about "C".  What about "B"?

     b.  Encountering contradictory findings, e.g. research on extrasensory perception.

     c.  Replication of previously published research results.  "Do you believe everything you read?"

Searching the Research Literature

     Psychological Abstracts
     PsycINFO
     World Wide Web - the NIH's National Library of Medicine is clearly the best resource for searching topics in the health sciences, including Psychology.  www.nlm.nih.gov - go to the PubMed page.

B.  Hypotheses - A hypothesis is a statement which, if true, solves the problem.  It is a tentative explanation that formulates the problem so it can be studied systematically.

     1.  Research Hypotheses specify a possible relationship between different aspects of           the problem,  i.e. between the IV and the DV.

      2.  Research Hypotheses are assessed by two criteria:

          a.  Does the hypothesis state a relationship between the variables?  It should also serve to narrow the problem down to specific variables and/or contexts.

          b.  Is the hypothesis testable?  Careful wording is important, and the terms should be definable (operationally), observable, and measurable.

C.  Null Hypotheses - (not to be confused with research hypotheses)


     A null hypothesis is a special type of hypothesis associated with statistical analysis.
     Traditionally, a null hypothesis states, for the problem at hand, "No relationship exists between the variables."
     It is the common wisdom that is assumed (by experts or the population at large) to be true, and which the experimenter sets out to attack and prove wrong.

     It is what would be expected, either by logic or by experience (prejudice?).
          E.g. "A fair coin tossed 100 times will show an equal number of heads and tails."
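A minimal sketch of that coin-toss null hypothesis, in Python (illustrative only; the 10-head tolerance and the 10,000 simulated runs are arbitrary choices for the example):

import random

def toss_fair_coin(n_tosses=100):
    """Simulate n_tosses of a fair coin; return the number of heads."""
    return sum(random.random() < 0.5 for _ in range(n_tosses))

# Under the null hypothesis the coin is fair, so heads and tails are equally
# likely.  Simulating many runs shows how far a fair coin typically strays
# from the expected 50 heads in 100 tosses.
runs = [toss_fair_coin() for _ in range(10_000)]
far_from_fifty = sum(abs(heads - 50) >= 10 for heads in runs)

print(f"Mean heads per 100 tosses: {sum(runs) / len(runs):.1f}")
print(f"Share of fair-coin runs with 60 or more, or 40 or fewer, heads: {far_from_fifty / len(runs):.3f}")

An observed result that would be rare for a fair coin is the kind of evidence used to reject the null hypothesis.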

OPERATIONAL DEFINITIONS, VARIABLES, AND CONSTRUCTS

Good definitions in science are used so others can better understand research methods, findings and interpretations.

A.  Dictionary definitions provide only a set of synonymous terms for a term which may have several possible referents (i.e. limits).

B.     A term may refer to direct or indirect observation of the world.

1.     e.g. "reaction time" = the amount of time from the onset of a stimulus (teacher giving an assignment) to the action (student starting the assignment).

2.     Or, "learning" (which cannot be seen) can be inferred from performance, such as answering questions or carrying out tasks.

C.     A term can also be defined according to a theory (a system of interrelated meanings, each supporting the other), e.g.:
1.     "reinforcement" = a satisfying state of affairs.
2.     "punishment" = an annoying state of affairs.
Each term mirrors and supports the other.

D.     Factual/Conceptual vs. Operational Definitions.

1.     A Factual/Conceptual definition is a dictionary-type definition that uses another term or set of terms synonymous with the term being defined.  Thus:

a.  a "reinforcer" = a satisfying stimulus, or a "satisfier" = a reinforcer.

b.  The problem is that both of the above are roughly equivalent, i.e. ambiguous and circular.

1)     Is a reinforcer always a satisfier, and vice versa?  Or does calling a satisfier a reinforcer make it any clearer what it really is?


2.     Operational definitions were made popular by the physicist Percy Bridgman, prompted by Einstein's theory of relativity (e.g. "time" is very difficult to define).

a.  An operational definition establishes a term by describing the set of manipulations necessary to create the presence of the object, or by describing the measuring operations that identify the term's presence.

3.     There are two types of operational definitions.

a.  Experimental operational definitions describe how a term's referents are manipulated.  E.g. "hunger" = length of time without food.  Then someone without food for 24 hours would be "hungrier" than someone without food for 12 hours.

And an experimental operational definition of "chocolate cake" = the recipe to make it.

b.  Measured operational definitions describe how referents of a term are measured.

     e.g. "hunger" = the amount of food (by weight, volume, or calorie content) consumed.

     "Chocolate cake" - a description of the flavor., texture, appearance & other properties of the cake.

4.     One or the other is used in a research report.

 a.  Here's a hypothesis: "Students learn more if classes are short, rather than long."  What is "long", and what is "short"?

     1)     Manipulatory operational definition:

          "short class" = one lasting less than 50 minutes.
          "long class" = one lasting more than 50 minutes.

2)     Measurement operational definitions:

          "short class" = one ending before squirming begins.
          "long class" = one still in session when half the class is squirming or looking out the window.
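A minimal sketch in Python of how the two kinds of operational definition above pin the terms down; the 50-minute cutoff and the squirming criterion come straight from the example, and the function names are made up for illustration:

# Manipulatory (experimental) operational definition:
# the researcher creates a short or long class by setting its duration.
def class_condition_by_duration(duration_minutes):
    """Classify a class as 'short' or 'long' by its scheduled length."""
    # Classes of exactly 50 minutes are grouped with "long" here; the
    # original definition leaves that boundary case open.
    return "short" if duration_minutes < 50 else "long"

# Measurement operational definition:
# the class is classified by an observable student behavior.
def class_condition_by_squirming(fraction_squirming):
    """Classify a class by how many students are squirming or looking away."""
    return "long" if fraction_squirming >= 0.5 else "short"

print(class_condition_by_duration(40))        # short
print(class_condition_by_squirming(0.7))      # long

Either definition makes the hypothesis testable, because "short" and "long" now refer to something observable and measurable.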


E.     Advantages of operational definitions:

     1.     They make the research methodology used clearer to the reader.
     2.     They confine statements to things either directly or indirectly observable, i.e. empirical.
     3.     They help ensure good communication by specifying how terms are used.

F.     Limitations of operational definitions:

1.     No operational definition can fully define a phenomenon.

2.     Any operational definition can be used, but using definitions similar to those already in the literature, and consistent with their historical reference, avoids an initial struggle over terms.

3.     What about “brave” = leaving the house in the morning without a cup of coffee.  

     What's wrong here?

http://ceci.uprm.edu/~ephoebus/id98.htm (Dec.13,2008)


Introduction to RESEARCH DESIGNS  (not complete)

The design of an experiment is its general structure, or the experimenter's plan for testing the hypothesis.  It is not its specific content (i.e. it does not depend on the types of IVs or DVs under study).

The design of an experiment is decided mainly on the basis of three factors:

     (1) the number of independent variables in the hypothesis(es),
     (2) the number of treatment conditions needed to make a fair test of the hypothesis, and
     (3) whether the same or different subjects are used in each of the treatment conditions.

A basic assumption behind each experimental design is that subjects in the experiment are typical of the population they represent.


     Researchers use a variety of procedures to obtain the most representative samples; ideally, they use random samples, in which each member of the population has an equal chance of being selected for the experiment.

     Random samples can be obtained through some form of probability sampling, through which the odds of any individual being selected are known or can be calculated. Simple random sampling, stratified random sampling, and cluster sampling are the most frequent examples of this approach.
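A minimal sketch in Python of simple random sampling and stratified random sampling as described above; the population, the strata, and the 10% sampling fraction are invented purely for illustration:

import random

population = [f"student_{i}" for i in range(1, 501)]   # hypothetical sampling frame

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=50)

# Stratified random sampling: sample separately within known subgroups
# (strata), here an assumed 300/200 split by year of study.
strata = {
    "first_year": population[:300],
    "upper_year": population[300:],
}
stratified_sample = []
for stratum_name, members in strata.items():
    # Sample each stratum in proportion to its size (10% here).
    stratified_sample += random.sample(members, k=len(members) // 10)

print(len(simple_sample), len(stratified_sample))   # 50 50

Cluster sampling would instead draw whole groups (e.g. entire class sections) at random rather than individual members.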

      For practical reasons, nonprobability samples are often used.  Quota samples and accidental samples are the most common examples.  Researchers must be extremely cautious in generalizing their results from samples in which subjects were not chosen at random.

      After subjects have been selected for the experiment, they are assigned to treatment conditions in various ways: e.g. randomly, by using whole classes, or by using groups of friends.
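A minimal sketch in Python of random assignment of an already-selected sample to two treatment conditions (the participant labels are hypothetical):

import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participants

# Shuffle, then split in half: each participant has an equal chance of
# ending up in either condition.
random.shuffle(participants)
half = len(participants) // 2
experimental_group = participants[:half]
control_group = participants[half:]

print("Experimental:", experimental_group)
print("Control:     ", control_group)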

How about random selection of people going into the Student Center?  How would you describe such a sample?

BASIC DESIGNS

     Several methods of classifying research designs can be used: single- vs. multi-variate, between- vs. within-subjects, etc.

1. Between-Subjects designs are those in which different subjects take part in each condition of the experiment.  Examples of these types are Independent (or Random) Groups and Factorial designs.
     The name comes from the fact that we draw conclusions from between-subjects experiments by making comparisons between the behavior of different (independent) groups of subjects.

     We will look at two kinds of two-group between-subjects designs:  two independent (or random) groups and matched groups.  

NOTE****[Craig & Metz classify Matched Groups as one type of Blocked Designs. The other is Repeated Measures]****


  The Independent Groups design is used when one independent variable must be tested at two treatment levels or values.

  Usually one of the treatment conditions is a control condition in which the subjects receive the zero value of the independent variable. The other condition is an experimental condition in which the subjects are given some nonzero value of the independent variable.

Diagram of Independent Groups Design:  

                 Treatment Group     Control Group

The Independent Groups design is based on the assumption that subjects are selected randomly from the population and randomly assigned to conditions.

When we use an independent groups design, we assume that randomization was successful.  That is, we must assume that the groups are equivalent (in all important respects) before application of the independent variable.  We then apply the independent variable and measure the difference between our groups.

BUT WATCH OUT:  If treatment groups were initially different from each other on a variable related to the dependent variable of the experiment, the results will probably be confounded (contaminated).

     Sometimes, however, especially when the total number of subjects is small, we do not want to rely on randomization. Even with random assignment, sometimes treatment groups start out being different from each other in important ways. This is called sampling error.

     These differences can affect the dependent variable, and we may not be able to separate the effects of the independent variable from the effects of the initial differences between the groups.  This is the essence of confounding.

One-Way Analysis of Variance (ANOVA) Designs (AKA One-Way Factorial) are similar to Independent Groups designs but utilize more than two groups.

     Diagram of One-Way ANOVA Designs:

          Condition A     Condition B     Condition C     ...     Condition n
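A minimal sketch in Python of the data layout for an independent-groups / one-way design; the scores are fabricated purely to show the structure, and a real analysis would use a t test or a one-way ANOVA from a statistics package (e.g. scipy.stats.f_oneway):

from statistics import mean

# One score per subject; different subjects serve in each condition.
scores = {
    "Condition A": [12, 15, 14, 10, 13],   # illustrative values only
    "Condition B": [18, 17, 20, 16, 19],
    "Condition C": [11, 9, 12, 10, 8],
}

for condition, values in scores.items():
    print(f"{condition}: mean = {mean(values):.1f}  (n = {len(values)})")

# The statistical question is whether these group means differ by more
# than sampling error alone would predict.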


2.  Matched Groups Designs {really matched subjects} are another type of Between-Subjects design.

     Instead of relying on randomization, we would use the matched groups approach.

      In a matched groups design, we select a variable that is highly related (correlated) to the dependent variable and measure subjects on that variable.  (e.g.  using IQ in a study comparing teaching methods.)

For a two-matched group experiment, we form pairs of subjects having similar scores on the matching variable and then randomly assign one member of each pair to the experimental condition, and the other member is placed in the control condition.

     [A good example of  matched group designs are Twin Studies, which match subjects based on their genetic makeup;  e.g. identical vs fraternal twins].

     Matching is advantageous because we can increase the probability that our groups start out the same, at least on variables that we think matter.

     However, there are also disadvantages:   we do not always know what is best to use as our matching variable.

     Because of the statistical tests used for matched groups, we want to be sure that our matching variable is really related to our dependent variable. If it is not, we will have less chance of showing whether the independent variable had an effect.

     Diagram of a Matched Groups Design:

     Scores on Matching Variable:          Treatment A          Treatment B

          S1   98   --> to Group A
          S2   96   --> to Group B
          S3   89        .
          S4   87        .
          S5   86        .
          S6   71        .
          S7   68        .
          S8   65        .
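A minimal sketch in Python of the matching-then-random-assignment procedure diagrammed above: subjects are ordered on the matching variable, adjacent pairs are formed, and one member of each pair is assigned at random to each condition (the scores echo the diagram):

import random

# Subjects and their scores on the matching variable (e.g. IQ).
subjects = {"S1": 98, "S2": 96, "S3": 89, "S4": 87,
            "S5": 86, "S6": 71, "S7": 68, "S8": 65}

# Order by the matching variable and form adjacent pairs.
ordered = sorted(subjects, key=subjects.get, reverse=True)
pairs = [ordered[i:i + 2] for i in range(0, len(ordered), 2)]

treatment_a, treatment_b = [], []
for pair in pairs:
    random.shuffle(pair)            # one member of each pair to each group
    treatment_a.append(pair[0])
    treatment_b.append(pair[1])

print("Treatment A:", treatment_a)
print("Treatment B:", treatment_b)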

3.   [Craig & Metz classify Repeated Measures Designs and Matched Group designs together as Blocked Designs]     Repeated Measures designs measure the dependent variable on the same subjects under different treatment conditions.  (E.g. Before vs After)

     Diagram of Repeated Measures Designs:


          Condition A     Condition B     ...     Condition n

4.  Multivariate Designs -- also called Factorial Designs:

     These apply more than one independent variable in the same experiment.

     The simplest form of a Factorial Design is the 2 X 2, i.e. two IVs with two levels each.

Diagram of a 2 X 2 Factorial Design:

                            IV 1
                    Level A   |   Level B
     IV 2  Level A            |
           Level B            |

OR

                           Sound
                    Level A   |   Level B
   Gender  Male               |
           Female             |
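A minimal sketch in Python that enumerates the four cells of the 2 X 2 design diagrammed above, using the Sound x Gender example:

from itertools import product

iv1 = ("Sound level A", "Sound level B")   # first independent variable
iv2 = ("Male", "Female")                   # second independent variable

# Every combination of the two IV levels is one cell of the factorial design.
cells = list(product(iv1, iv2))
for sound, gender in cells:
    print(f"{sound} / {gender}")

print(len(cells), "treatment conditions in a 2 X 2 design")   # 4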


5.  Single Subject Designs (Developed by B.F. Skinner)

6.  Covariate Designs (i.e. Statistical Control)


PREPARING A PROPOSAL & CONDUCTING A STUDY: A CHECKLIST

     All the essentials for planning and conducting a study have been covered. This checklist reviews what should be considered before conducting a study, the steps to follow when conducting a study, the steps to follow after a study is completed, and a list of errors commonly made by students at each step in the research process.

1. Selection and Statement of a Research Problem

     As indicated previously, the first step in conducting research is specifying the problem to be investigated. Research problems may be identified in a variety of ways:
     (1) observation of everyday events,
     (2) brainstorming or think sessions,
     (3) technological developments,
     (4) gaps in existing knowledge,
     (5) contradictory findings, and


     (6) replication of published research.

     Errors Commonly Made by Students
     1.     Choice of a problem that cannot be answered by conducting research.
     2.     Failure to specify the problem, that is, collecting data without asking a research question beforehand.
     3.     Statement of a problem in such broad terms that any results could be interpreted as being related (or unrelated) to the problem.
     4.     Statement of a problem in such narrow terms that answers apply only to specific cases.
     5.     Choosing a problem that is beyond the student's ability to evaluate.
     6.     Failing to search the literature thoroughly; the problem may already be answered.

2. Formation and Statement of Hypotheses

     After a research problem is identified, the next step is to formulate a hypothesis. Since a hypothesis is a statement of a relationship between variables, it must be properly stated. The hypothesis suggests the particular relationship between variables and it narrows the problem to one that is specific and researchable.  This makes the specification of the independent and dependent variables relatively easy.
     In addition, a hypothesis should be:
     (1) testable,
     (2) stated in a quantified or quantifiable form,
     (3) as simple as possible while still offering a solution to the problem, and
     (4) as general as possible.

     Errors Commonly Made by Students

     1.     Formulating a research hypothesis that cannot be tested.
     2.     Failure to review the literature adequately to determine whether the research hypothesis is reasonable in light of existing information.
     3.     Statement of the research hypothesis in such a way that it cannot be tested.
     4.     Statement of a hypothesis that is too specific.
     5.     Statement of a hypothesis that is too general.

3.  Definition of Variables

     In order for the investigator to observe whether the hypothesized relationships between variables exist, these variables and any other terms used must be clearly defined. The process of definition begins by stating problems and hypotheses; nevertheless, the researcher should define at this point all terms not already defined. Definition of the variables in research allows everyone (both the researcher and any reader of the research report) to know what is being studied and facilitates interpretation of the results.

     Errors Commonly Made by Students
     1.     Failure to define terms or variables operationally.
     2.     Failure to define terms or variables in a manner that is consistent with the research literature.
     3.     Failure to define terms or variables so that they are consistent with the way they are operationally measured in the study.
     4.     Failure to define terms or variables at all.

4. Specification of the Population, Selection of Sample, and Assignment of Individuals to Groups

     Before conducting a study, the investigator must make clear exactly what population is being studied and how a participant or sample of participants will be chosen from the population. Is the whole population to be studied?  Will a sample be randomly selected from the population? Will the sample be one participant or many? If a group is used, will the sample consist of volunteers? After a sample is chosen, the investigator must specify how participants will be assigned to groups. Will they be randomly assigned, or do they already make up groups? These decisions will help determine, among other things, the generalizations that can be made, the design to be used, the data collection procedures that are possible,


and the statistical analysis that will be employed.

     Errors Commonly Made by Students
     1.     Narrow specification of a population, which limits generalizations to that population rather than to a broader one.
     2.     Vague specification of a population that makes sample selection difficult.
     3.     Selection of a sample in a manner that is inappropriate for the intended design (for example, using a nonrandom sample for a randomized groups design).
     4.     Assignment of participants to groups in a manner that is inappropriate for the intended design (for example, randomly assigning participants to groups in a randomized blocks design).

5.  Selection of a Research Design

     The next step is the selection of an appropriate research design. The design chosen should best allow the researcher to investigate the research problem for the population under consideration. Much of this selection may already have taken place, that is, the decisions that have already been made may eliminate some designs and make others desirable.

     To use the method selection tree, first answer the question "Is the sample being studied a group or one participant?"  Suppose the answer is a group.
     The next question is "Are the participants randomly assigned to conditions?"  Assume the answer is yes.
     Then, "Does the experimenter have functional control over the independent variable(s)?"  If the answer is no, stop; data collection procedures for this combination of factors have not been discussed.
     If the answer is yes, next ask, "Are the participants blocked in conditions?"  Suppose the answer is yes.

     Then, "How many variables are being studied?" If there is only one variable, then use a randomized blocks design; if there is more than one variable, use a factorial randomized blocks design. Other designs or research strategies are chosen in a similar fashion; begin at the top of the tree and follow the appropriate route, asking and answering each question in turn.

     Errors Commonly Made by Students

     1.     Failure to consider what design is being used.
     2.     Use of an experimental design when participants cannot be randomly assigned to experimental conditions.
     3.     Use of an experimental design when the investigator cannot manipulate the levels of the independent variables.
     4.     Use of inappropriate blocking procedures.
     5.     Selection of designs whose requirements cannot be met, for example, using a nonequivalent control groups design when no control group is available.

6.  Description of Research Methods

     The description of the research methods includes the control techniques to be used, the methods for administering the independent variables and measuring the dependent variables, and the step-by-step procedures that will be followed.

     Selection of Control Techniques

     After all of the preceding steps have been completed, the researcher should choose the control techniques to be used. In order to answer as clearly as possible the questions that have been raised, the researcher must exercise control over the conditions of the experiment. Independent variables may be controlled by manipulation or selection.

     Dependent variable control mainly involves accurate and valid measurement.

     Control of extraneous variables includes elimination, constancy of conditions, balancing,


counterbalancing, randomization, and statistical control. Elimination refers to control of a variable by removing its influence from a study. Constancy of conditions refers to holding a variable constant so that its influence will be the same throughout the study. Balancing is often used whenever a variable can neither be held constant nor eliminated and requires that the effects of an extraneous variable be balanced or have an equal opportunity to affect all conditions. Counterbalancing is used to control order effects whenever a within-subjects design is used. Randomization is a key control procedure of experimental research and is used to help assure that there are no systematic relationships between levels of the independent and extraneous variables.

     A control procedure used in addition to or in place of the other experimental control procedures is statistical control. This procedure involves the statistical assessment of the effects of one extraneous variable and the subsequent removal of that influence from the statistical analysis.
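A minimal sketch in Python of full counterbalancing as described above: in a within-subjects design, every possible order of the conditions is used, and participants are divided equally among the orders (the condition labels are placeholders):

from itertools import permutations

conditions = ("A", "B", "C")

# Full counterbalancing: every possible order of the conditions is used,
# so order effects are spread evenly across conditions.
orders = list(permutations(conditions))
for i, order in enumerate(orders, start=1):
    print(f"Order {i}: {' -> '.join(order)}")

# With 3 conditions there are 6 orders, so participants would be assigned
# to the orders in multiples of 6.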

7.  Selection of Measurement Procedures

     Of great importance is the proper and accurate measurement of the values of the dependent variable. Without accurate measurements, any relationships between independent and dependent variables may be missed. Since psychologists usually measure psychological dimensions of reality, measurement can be complex. There are no rulers for measuring a variable such as willingness to work. The proper selection and use of psychophysical methods or psychological scaling and measurement procedures are of great importance in any psychological investigation.

Step-by-Step Procedure

     Finally, a step-by-step procedure for carrying out the study should be written. If every researcher would put in writing the specific steps to be followed in conducting an investigation, then fewer errors would occur!

Errors Commonly Made by Students
1.     Failure to use sufficient control procedures.
2.     Failure to use correct control procedures.
3.     Failure to use adequate measurement techniques.
4.     Failure to plan the data collection procedures in enough detail to avoid errors.
5.     Failure to train or properly train the people who will collect the data.
6.     Failure to establish a recording system for information which will be needed when the report is written.

8.  Administration and Observation of the Levels of the Independent Variables and Observation and Measurement of the Levels of the Dependent Variables

     After the preliminary work has been completed, the study can be conducted. If all of the preparation for the study has been carried out, the conduct of the study is easy and mostly mechanical.

     Errors Commonly Made by Students
     1.     Initiation of data collection procedures prior to planning.
     2.     Failure to follow the procedures that have been established.
     3.     Failure to constantly assess the progress of the data collection so that appropriate modifications can be made.
     4.     Failure to practice the data collection procedures before the initiation of the study.

9.  Comparison of Groups or Individuals

     After the data are collected, the investigator must assess the relationships found between the independent and dependent variables. Some of these assessments are often based on statistical analyses.


     Errors Commonly Made by Students
     1.     Failure to ascertain whether data collection procedures were followed.
     2.     Incorrect description or display of data.
     3.     Use of incorrect statistical procedures.
     4.     Failure to check arithmetic computations at least once before accepting statistical analysis as accurate.
     5.     Failure to analyze the data relevant to the research hypothesis.

10.  Reporting the Study

     Every investigator has an obligation to report the results of the studies that have been conducted.

     Errors Commonly Made by Students
     1.     Failure to use APA writing and editorial style (for example, incomplete references).
     2.     Plagiarism.
     3.     Incomplete description of studies in literature review.
     4.     Incomplete description of data collection procedures.
     5.     Incomplete description of results.
     6.     Failure to separate results and discussion sections of a report.
     7.     Failure to use tables and figures.
     8.     Incomplete description of the research hypothesis.
     9.     Failure to include references.