TRANSCRIPT
Common Pitfalls in Research & Writing
Presentation for faculty and students at Qatar University
Gordon P. Brooks, Professor, Educational Research & Evaluation
Ohio University
March 2015
My Experience
• Quantitative/statistical methodologist
• 150+ dissertation committees
– Chaired 12, "methodologist" on most
• 10 master's thesis committees, chaired 1
• Reviewer for 11 journals
– 2 editorial boards
– Reviewer for 3 organizations over 14 years
• Presented/published over 80 academic papers
Common Pitfalls in Research & Writing
• Research Writing is intimately connected to Research Design and Analysis
• That is, if you haven’t done it correctly, you can’t possibly write about it the way you should
• We’ll focus on what to report rather than how to report, and what to report is closely connected to strong design principles
Common Pitfalls in Research & Writing
• And because I’m a quantitative methodologist, we’ll focus on a more quantitative (not qualitative) approach to research
• We’ll focus on good practices rather than the negative (i.e., common pitfalls or problems)
• We’ll talk generally --- you’ll need to pay attention to the best design practices and the publication style manual used in your field
But here are some things to avoid…
• A statistician is a professional who diligently collects facts and data and then carefully draws confusions about them.
• Statisticians use data the way a fatigued man uses lamp-posts, for support rather than illumination.
(both quotations have unknown/multiple authors)
Triumvirate of Evils in Quantitative Research
• Sampling (representative)
• Instrumentation (data)
• Over-reaching Conclusions (design)
• Analytical concerns (conclusion)
• Connection to theory/literature (theoretical)
(there were just 3 when I started)
Writing about Research
• Traditionally 5 sections in 2 parts
• Proposal
– Introduction to the Research Problem
– Literature Review
– Methods & Research Design
• Final Dissertation adds
– Analysis & Results
– Conclusions & Discussion
Writing about Research
• In journal articles, the introduction and literature review tend to be combined and are much shorter than in dissertations and theses
• In academic writing, the proposal is typically designed for the student to assure the advisor and committee they can do the work
Writing about Research
• Often, the proposal is seen as a "contract" between the student and the faculty about what work will be done
• This requires that strong effort be put forth in creating the proposal, so that it covers all the expected requirements for the research
Writing about Research
• There's an old public speaking axiom:
– Tell them what you will tell them
– Tell them
– Tell them that you told them
• In academic research, it can be amended:
– Tell them what you will do (proposal)
– Do it (data collection and analysis)
– Then tell them that you did it (defense)
Common Pitfalls in Research & Writing
• Theoretical Validity (significance)
• Representative Validity (sampling)
• Data Validity (instrumentation)
• Design Validity (over-reaching conclusions)
• Conclusion Validity (analytical concerns)
• Theoretical Validity (significance)
see Brooks, G. P. (2011). Qualitative experimentation, local generalizability, and other oxymoronic opportunities for educated researchers. Mid-Western Educational Researcher, 24(4), 10-20. {http://www.mwera.org/MWER/archives.html}
Theoretical Validity
• Does the research make sense?
• Are there reasons to connect the variables?
• Are there reasons to study the population?
• What are the conceptual and theoretical definitions?
• Why is the research important?
• Significance of the Study (research is about theory not data)
Representative Validity
• How many, sampled how, from where?
• Why does the sample matter when we are interested in the population?
• Do the cases serve their purpose for being included in qualitative research?
• Is the case study interesting or useful?
• Generalizability and Transferability (external validity & delimitations)
Data Validity
• Were the data collected from the people or places with the needed information?
• Do the operational definitions match the theoretical or conceptual definitions?
• Are there concerns with trustworthiness or credibility in qualitative research?
• Construct or Measurement Validity (psychometric validity & reliability)
Design Validity
• Are elements of a causal argument controlled in the design: (a) predictive/temporal relationship, (b) ruling out alternative explanations (confounding variables), (c) theoretical understanding, and (d) replication?
• What conclusions are legitimate based on your design? All designs can be flawed, all designs can be useful.
• Internal validity & limitations
Conclusion Validity
• What impact do violations of statistical assumptions, outliers or extreme values, range restriction, “fishing” have on results?
• What impact do potential errors (e.g., Type I, II, III, VI) have on conclusions?
• How large are the effects or relationships?
• Be true to your data AND design (statistical conclusion validity)
Triumvirate of Evils in Quantitative Research
Said in a more traditional way…
• External Validity (representative)
• Construct/measurement Validity (data)
• Internal Validity (design)
• Statistical Conclusion Validity (conclusion)
• "So What?" (theoretical)
Triumvirate of Evils in Quantitative Research
• Note that there are entire separate courses dedicated to the topics addressed by these "validity" terms (and often several, like introductory and advanced)
– Design
– Measurement
– Analysis
• We can't possibly cover it all… but we'll do what we can
Common Pitfalls in Research Design & Writing
Said in another way…
• Theoretical Validity (the right reasons)
• Design Validity (the right methods)
• Representative Validity (the right sources)
• Data Validity (the right information)
• Conclusion Validity (the right answers)
• Theoretical Validity (the right conclusions)
Standards for Reporting on Empirical Social Science Research
• Problem Formulation (theoretical)
• Design & Logic (design)
• Sources of Evidence (representative & data)
• Measurement & Classification (data)
• Analysis & Interpretation (conclusion)
• Generalization & Ethics (representative & design)
AERA (2006) Educational Researcher, 35(6), 33-40
Research is a Rhetorical Process
• Because cause requires argument, I say that research is a rhetorical process
– "of, relating to, or concerned with the art of speaking or writing formally and effectively especially as a way to persuade or influence people" (http://www.merriam-webster.com/dictionary/)
Research is a Rhetorical Process
We need to convince ourselves and our audience we have found something useful
• Bradford Hill's (1965) criteria for minimal conditions to provide adequate evidence of a causal relationship: (a) theoretical plausibility, (b) strength of association, (c) temporality, (d) consistency, (e) coherence, (f) specificity, (g) gradient/dose-response, (h) experimental evidence, and (i) analogy, to consider alternate explanations.
But we don't necessarily need to do it all at once
I lie* to my students (repeatedly) when I congratulate them
• Courses/Qualifying Exams are the hardest part
• Dissertation Proposal is hardest
• Data Collection is hardest
• Data Analysis is hardest (statistics!)
• Drawing Conclusions is hardest
• Dissertation Defense is the hardest part
*a "little" lie, a "fish story"
The Research Question is the hardest part
Finagle's Rule
• To study a subject best, understand it thoroughly before you start
– (Know the literature well because it informs every aspect of the research process: introduction, lit review, methods, results, and conclusions)
– Murphy's Law: "Whatever can go wrong will go wrong"
On the Creative Emergence of Research Problems & Questions
Youtube: Daniel Simons, "selective attention test" & "The Monkey Business Illusion"
On the Emergence of Research Problems & Questions
• Why do we do research?
– Rigorous attempt to answer questions
– Share our answer with others
• Our instinctual reaction in answer to a question is not always correct
– Will 2 balls hit the ground at the same time even if one weighs more than the other?
• Skepticism is good and indeed lies at the heart of the research process
On the Emergence of Research Problems & Questions
• Research topic
– General field of study, background to the study
• Research problem
– An issue that must be addressed
• Statement of the Purpose
– What you plan to study within that field, related to that problem
• Research question
– Specific question to be answered by the research
On the Emergence of Research Questions
A good Research Question…
– Excites your passion, contributes to the field
– Is creative, unique, adaptive, and/or synthetic
– Is valuable, manageable, and affordable (sometimes we need to break large questions into smaller ones)
– Is doable ethically, logistically, skillfully
– Is relevant---has an answer to the "so what" question
– Does not rely only on methodology we know
– Does NOT depend on statistical significance
On the Emergence of Research Questions
• What problem needs an answer?
• Where are the gaps/contradictions in our knowledge?
• What assumptions can be changed?
• Can it be applied differently? Any new tech?
• Is the world different now? Any new ideas?
• Any new theories/measurements for old ideas?
• Do we dare upset folks with controversy?
• Can we adapt from other fields?
• Can it be newly qualitative or quantitative?
On the Emergence of Research Questions
• Research questions should generally not include the word “should”
• "Should" questions are best answered by policy-makers or decision-makers after they review the relevant research
– Researchers rarely make decisions for others
– Researchers want to remain objective rather than advocate for political positions
On the Emergence of Research Questions
• Before planning to do research, attempt to answer the research question using a literature review
– If the research question has been answered, then research can focus on something else, perhaps still strongly related to the original question
– If part of the research question has been answered, the research can be refined and focused to address the remaining unknowns
On the Emergence of Research Questions
• Need to be knowledgeable in the field/literature
• Want broad experience in related fields
• Use creativity techniques to think about possible questions (e.g., brainstorming)
• Don't ask questions based only on the analytical methods you know how to use
– We tend to ask questions we know how to ask
– We tend to ask questions we know how to answer
On the Emergence of Research Questions
• Once you've decided on your final research question, clearly write the question so that it can be answered empirically (using data you will collect or data that already exists)
• You want to learn something useful from your research no matter what answer the results give you
– Not every answer will change the world, but every answer should be useful
On the Emergence of Research Questions
• Decide/describe what type of research it will be:
– Basic, applied, evaluation, action research
– Safety, efficacy, or effectiveness research
– Descriptive, exploratory, or confirmatory
– Quantitative, qualitative, or mixed methods
– Primary or secondary data analysis
– Replication or extension research
– Prospective, retrospective, longitudinal, cross-sectional
On the Emergence of Research Questions
• The research question should contain:
– The variables of interest
– The relationship to be studied among the variables
– The population of interest
– Any relevant, specific context for the study
• e.g., laboratory, timing, materials
On the Emergence of Research Questions
Summary
• Your introduction should ultimately explain clearly
– What you want to learn
– Why you want to learn it
– Why others should care about it
– What we already know about it (e.g., the literature's "Greatest Hits")
Theoretical Validity
• An Introduction also usually includes
– Significance of the Study
– Definitions
• Use these terms consistently throughout
– Research hypotheses
– Delimitations
• Scope of the research, controlled by researcher
– Limitations
• Elements of study design the researcher cannot control, but can try to manage
Theoretical Validity -> Literature Review
• While developing your research questions, you should always be considering theoretical validity
• You use the literature review, in part, to build the theoretical rationale/case, often called a theoretical framework
– Based on existing theories, previous empirical research, or perhaps connections you see among them
• But not all theories can be combined (e.g., ontology, epistemology)
The "Unending Conversation" Metaphor (Burke, 1941/1973)
• "Imagine that you enter a parlor. You come late. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to pause and tell you exactly what it is about. In fact, the discussion had already begun long before any of them got there, so that no one present is qualified to retrace for you all the steps that had gone before. You listen for a while, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer him; another comes to your defense; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending upon the quality of your ally's assistance. However, the discussion is interminable. The hour grows late, you must depart. And you do depart, with the discussion still vigorously in progress."
– Kenneth Burke (1941/1973) "The Philosophy of Literary Form"
Reviewing the Literature
• Include support for why you are studying your variables and including other relevant variables (perhaps for statistical control)
• Include support for why you are studying particular relationships among your variables (or differences among populations)
• Include support for why you are excluding variables from your study
– Why you don't need to include them
Reviewing the Literature
Johnson's Irony
• Master the literature in your topic area so well that you don't need to read it when you review it
– Corollary 1: Review the forest not the trees (but cite 125-150 trees)
• Don't miss the "big picture" by focusing on the parts
– Corollary 2: Synthesize it into new knowledge
• "State of the Art"
Reviewing the Literature
Brooks' Observation
• Half of what you read for your research will be irrelevant or unworthy, despite having the perfect title or abstract
– Corollary 1: Age alone does not make something irrelevant (i.e., don't ignore older references, but focus on the newer)
– Corollary 2: Even apparently irrelevant literature can potentially be relevant
Reviewing the Literature
• Code the resources you read for later use, especially quotes
– Some things will be useful in introduction
– Some things will be useful in methods
– Some things will be useful in conclusions
• Include both theoretical and empirical literature
• All claims of knowledge need citations
Reviewing the Literature
• Synthesize literature, identify gaps/deficiencies
• Organize thematically using your variables, population, and connections among variables
– There may be sections that can be organized differently (e.g., chronologically)
– Don't do "he wrote this" and "she wrote that" and "they wrote this and that"
• It is your literature review, so own it (and avoid plagiarism)
Reviewing the Literature
• You want to present the current thinking in the field, but to understand current thinking we often need to understand original thinking
• Why do we want to cite older resources?
– Over time, interpretations of older work change
• Telephone message game, message changes from beginning to end
– Don't rely on the interpretations of others, sometimes they read it differently than you
Reviewing the Literature
• For development, some people like to use
– Outlines
– Table of contents
– Thought or concept maps
• For organization, some like to use
– Funnel (variables, population, relationships)
• Start general and move to the more specific
– Use Headings, Transitions, and Section Summaries
• Tell us what you will tell us, tell us, tell us you told us
Reviewing the Literature
• The goal is to inform about the extant knowledge about your variables that informs your research with your chosen population
– Critical review of the literature, identifying strengths and weaknesses in the arguments
– Summarize all sides of the issue, not just literature that supports your views
– How did the field get to this point? History matters
– Who are the experts?
Reviewing the Literature
• Critical review (analysis) of the literature
• Not all research is created equal
– Determine which is better as evidence
– Which have more accurate effect sizes?
• Determine why studies are contradictory
– Were they different populations or research designs?
– Were different measurements or analyses used?
• Try to determine why gaps & deficiencies exist
Reviewing the Literature
• Authors can reach conclusions for many reasons
– Don't just say that someone concluded something
• What was the basis for conclusions made in the literature?
– Were they speculations?
– Were they based on theory?
– Were they based on personal experience or belief?
– Were they based on empirical research?
Reviewing the Literature
• Synthesize all relevant literature
– Integrate work from other perspectives & disciplines
• For example, most fields study education
– Ignoring education literature makes no sense
– But for education scholars to ignore the literature in other fields equally makes no sense
• For example, many fields study leadership
– Easier now to find literature across disciplines with computerized searching
Reviewing the Literature
• Includes methodological information like
– How your variables/constructs are measured
– What are the common instruments/scales
– History and validity of the measures you'll use
– How your treatment levels are chosen
– What designs are common in the field
– What analyses are used or considered acceptable
Reviewing the Literature
• The goal may also be to develop research hypotheses as expected answers to your research questions
– The literature may lead you to an expectation of what the relationship between variables should be, based on theory and previous research
– But sometimes it doesn't, which would lead to exploratory research
“How to Grade a Dissertation” (Lovitts, 2005)
• In 2003-04, 272 faculty members in 74 departments across 10 disciplines (4 science, 3 social science, 3 humanities) at 9 research universities
• The faculty members had 6,129 combined years of experience, had advised approximately 3,470 dissertations, and had sat on about 9,890 dissertation committees
• The average focus group participant had been a professor for 22 years, advised 13 dissertations, and served on 36 dissertation committees.
“Making Faculty Expectations more Transparent” (Lovitts, 2005)
• Focus groups were asked to characterize dissertations and 6 components at 4 different quality levels (outstanding, very good, acceptable, and unacceptable)
• Faculty members said they often make holistic judgments about the quality --- no mental checklist of items against which they assess a dissertation
• But results show that faculty members do make quality judgments that can be made explicit
– Barbara E. Lovitts (2005, Nov-Dec). How to Grade a Dissertation. Academe, 91(6), 18-23.
Dissertation Rubrics (many found, mostly similar)
(www.georgiahealth.edu/gradstudies/Rubric-PhD.MS.6-16.11.doc)
Outstanding ~ Theoretical Validity (Lovitts, 2005)
• Is original and significant, ambitious, brilliant, clear, clever, coherent, compelling, concise, creative, elegant, engaging, exciting, interesting, insightful, persuasive, sophisticated, surprising, and thoughtful
• Is very well written and organized
• Is synthetic and interdisciplinary
• Connects components in a seamless way
• Exhibits mature, independent thinking
• Has a point of view & a strong, confident, independent, and authoritative voice
Outstanding ~ Theoretical Validity (Lovitts, 2005)
• Asks new questions or addresses an important question or problem
• Clearly states the problem & why it is important
• Displays a deep understanding of a massive amount of complicated literature
• Exhibits command & authority over the material
• Argument is focused, logical, rigorous, & sustained
• Is theoretically sophisticated and shows a deep understanding of theory
• Is thoroughly researched
Very Good ~ Theoretical Validity (Lovitts, 2005)
• Is solid, well written and organized
• Has some original ideas, insights, and observations, but is less original, significant, ambitious, interesting, and exciting than the outstanding category
• Has a good question or problem that tends to be small and traditional
• Is the next step in a research program (good normal science)
Acceptable ~ Theoretical Validity (Lovitts, 2005)
• Is not very original or significant
• Is not interesting, exciting, or surprising
• Displays little creativity, imagination, or insight
• Writing is pedestrian and plodding
• Has a weak structure and organization
• Is narrow in scope
• Has a question or problem that is not exciting—is often highly derivative or an extension of the adviser's work
• Displays a narrow understanding of the field
Acceptable ~ Theoretical Validity (Lovitts, 2005)
• Reviews the literature adequately—knows the literature but is not critical of it or does not discuss what is important
• Can sustain an argument, but the argument is not imaginative, complex, or convincing
• Demonstrates understanding of theory at a simple level, and theory is minimally to competently applied to the problem
Unacceptable ~ Theoretical Validity (Lovitts, 2005)
• Is poorly written, has spelling and grammatical errors, has a sloppy presentation, contains errors or mistakes
• Plagiarizes or deliberately misreads or misuses sources
• Does not understand basic concepts, processes, or conventions of the discipline
• Lacks careful thought
Unacceptable (Lovitts, 2005)
• Looks at a question or problem that is trivial, weak, unoriginal, or already solved
• Does not understand or misses relevant literature
• Has a weak, inconsistent, self-contradictory, unconvincing, or invalid argument
• Does not handle theory well, or theory is missing or wrong
Lovitts (2005) Component 1: Introduction
The introduction
• Includes a problem statement
• Makes clear the research question to be addressed
• Describes the motivation for the study
• Describes the context in which the question arises
• Summarizes the dissertation's findings
• Discusses the importance of the findings
• Provides a roadmap for readers
Lovitts (2005) Component 2: Literature Review
The review
• Is comprehensive and up to date
• Shows a command/understanding of the literature
• Contextualizes the problem
• Includes a discussion of the literature that is selective, synthetic, analytical, and thematic
Lovitts (2005) Component 3: Theory
The theory that is applied or developed
• Is appropriate
• Is logically interpreted
• Is well understood
• Aligns with the question at hand
• Shows comprehension of the theory's strengths and limitations
Not-so-Random Transition…
I can’t go a whole presentation without a single statistical formula. Here are a couple of my favorites:
Representative Validity
• How well do the participants or cases represent the population you are studying? (population external validity)
• How well does the setting of the research represent the environment of the population? (ecological external validity)
• How well does the timing of the study represent the time desired? (temporal external validity)
Representative Validity
• How well can you describe the participants and setting and time for generalizability or transferability or utility?
• Even in qualitative research, do they represent extreme, typical, or negative cases with purposeful sampling strategies?
• Do the people you have chosen represent all aspects of information required for the case study you are performing?
Representative Validity
• Do they actually have the information you need in order to answer your research question?
• Are you collecting information at the right level (i.e., unit of analysis)?
• In most circumstances in quantitative research, random sampling is our best chance to ensure representativeness
Representative Validity
• In quantitative research, there are potentially at least 5 levels of representativeness required for generalizability:
– Target Population
– Accessible Population (sampling frame)
– Sample
– Participants / Respondents
– Cases Analyzed
Representative Validity
• Provide delimitations of your Target Population
– To whom will your results generalize?
• Create strict Inclusion criteria
– Who is in your target population
• Create strict Exclusion criteria
– Who is not
• Use screening techniques or questions to include and exclude cases
Representative Validity
• How well does your Accessible Population (or Sampling Frame) represent the Target Population?
• You need an actual list of population members in order to ensure an equal probability of all possible samples (sampling error)
– Describe where and how you obtained your list
• Internet search, organization, membership list
• Can you actually access the Accessible Population? (e.g., gaining entry)
Representative Validity
• How well does your Sample represent the Accessible Population? (coverage error)
• Ideally, an equal probability random sampling method was used to create the sample
• Note, the researcher decides whom to include in the sample
– that is, the sample comprises those cases from a population that the researcher will request to participate in the study
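To make the "equal probability" idea concrete, here is a minimal sketch of simple random sampling from a sampling frame. The membership list and sizes are hypothetical, invented for illustration:

```python
import random

random.seed(42)  # fixed seed only so the sketch is reproducible

# Hypothetical sampling frame: a membership list for the accessible population
sampling_frame = [f"member_{i:03d}" for i in range(500)]

# Simple random sampling without replacement: every member (and every
# possible sample of 60) has the same probability of selection
sample = random.sample(sampling_frame, k=60)

print(len(sample))  # 60 cases the researcher will invite to participate
```

Note that this requires the actual list the slide mentions; without a complete frame, no software trick can make the draw an equal-probability sample.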
Representative Validity
• How did you select the sample, & how many?
• When did you stop collecting data?
• How did you recruit participants?
– Incentives, extra credit for class, employer
• How & how many times were they contacted?
• Who contacted them or collected data? Why?
– Internet, Research assistant, Intermediary
– Possibility an intermediary restricted participation?
Representative Validity
• We inappropriately use the word sample also for participants
• How well do your respondents or volunteers---your actual Participants---represent the Sample? (non-response error)
• Note, the participants decide whether they are included in the study
– There is no sampling method used to determine who actually participates
Representative Validity
• Collect thoughtful and meaningful demographic characteristics for your participants
– Gender, socioeconomic status, ethnicity
– Perhaps other studied variables as well
• Collect information that can be compared to known characteristics of the population
• Although not desirable to take this approach, every sample represents some broader group
– Probably shouldn't call it a population, though
Representative Validity
• How is your Response Rate calculated?
– Proportion of Accessible Population
– Proportion of Sample who participated
– Proportion of cases with necessary data
• How did people who chose to participate differ from those who chose not to participate?
– Thoughtful demographics
– Thoughtful follow-up (maybe even try a sample of non-respondents…?)
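The three denominators give very different response rates, which is why the calculation must be reported. A minimal sketch with hypothetical counts:

```python
# Hypothetical counts for one study (invented for illustration)
accessible_population = 1200   # names on the sampling frame
sample_invited = 400           # cases the researcher asked to participate
returned = 150                 # surveys returned
usable = 120                   # surveys with complete, usable data

print(f"Of accessible population: {returned / accessible_population:.1%}")  # 12.5%
print(f"Of sample invited:        {returned / sample_invited:.1%}")         # 37.5%
print(f"With necessary data:      {usable / sample_invited:.1%}")           # 30.0%
```

The same study could honestly report 12.5%, 37.5%, or 30.0%, so stating which denominator was used matters as much as the number itself.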
Representative Validity
• How well do the cases with data for the variables actually included in a particular analysis (cases analyzed) represent all Participants, regardless of missing data?
• How were missing data handled?
• Articles commonly say:
– 150 surveys were returned
– 120 surveys had usable (or complete) data
– Then report 90 degrees of freedom for an analysis
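The gap between surveys returned and cases analyzed usually comes from dropping cases with missing values on the analyzed variables (listwise deletion). A minimal sketch with hypothetical records:

```python
# Hypothetical returned surveys; None marks a missing answer
records = [
    {"x": 4, "y": 2},
    {"x": 5, "y": None},   # missing y -> dropped from an x-y analysis
    {"x": None, "y": 3},   # missing x -> dropped
    {"x": 2, "y": 1},
]

# Listwise deletion: keep only cases complete on the analyzed variables
analyzed = [r for r in records if None not in r.values()]
print(len(records), "returned ->", len(analyzed), "cases analyzed")
```

Reporting should describe the cases analyzed, not just the surveys returned, since each analysis may use a different subset.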
Representative Validity
• There are a number of reasons we should describe our participants (or the cases we actually analyze), rather than a sample:
– Non-response or volunteer bias
– Non-random sampling (e.g., convenience)
– "Damaged" random sample (doesn't always work)
– Convenient populations
– Transferability (a perhaps useful qualitative idea)
Representative Validity
• Often our problem is not with the sample but with the population we are studying
• Instead of an Accessible Population that represents the Target Population well, we choose a convenient population
• My suggestion for local generalizability is to draw a truly representative sample from that convenient population
Representative Validity
• We can rarely hope to truly represent a large target population, so let’s represent the smaller, accessible convenient population well instead of not representing a population at all with a convenient sample
• We definitely don’t want convenient samples from convenient populations
Representative Validity
• We can then have confidence that our results will at least have small-scale external validity (e.g., within our local/convenient population)
– Maybe call it purposeful random sampling, or a quantitative case study
• Then we use the ideas of transferability and meta-analysis to make sense of these local results
Representative Validity
• What is the impact of the timing of data collection?
– Longitudinal data collection
– Do seasons or times of year matter?
– Does time of day matter?
• Was your setting natural or artificial?
– How realistic was the laboratory setting?
• How much attrition occurred?
• Do you need additional permissions for access?
Lovitts (2005)
• Lovitts didn’t really discuss much about methodology at the same level of analysis as I will be discussing, so we’ll wait to look at all her methodological points at one time later
Some evidence that statistics can be fun
Data Validity
• How well do your data represent what you purport they represent (e.g., construct validity, scope, context, saturation)?
• What psychometric, credibility, accuracy, or trustworthiness evidence can you provide?
• Do you have evidence that the data accurately represent perceptions, facts, or attitudes?
Data Validity
• Creating an instrument
– Need validity studies (not just pilot studies)
– Collect reliability and validity information
– Item analysis & revision
– Consider the impact of order
• Similar types of items
• More sensitive items
• More important variables
– Difficulty collecting some data (e.g., income, age)
Data Validity
• Basic creation of questionnaires and questions
– Loaded questions (impartiality)
– Double-barreled questions (clarity)
– How clear were your instructions or directions?
– Don't know options
– How difficult was the task?
• How many times last week vs. how many times last year
• Do they need to look up information (and do they)
– We hope that each respondent is answering the "same" question
Data Validity
• Evidence of Construct Validity
– Content validity
• Table of specifications
• Expert judgment
• Theoretical support
• Mixed methods approaches (e.g., using interviews)
– Structural validity
– Criterion-related validity
– Face validity
Data Validity
• Evidence of Construct Validity
– Convergent validity
– Discriminant validity
– Multi-trait, Multi-method Matrix
– Multiple measures of the same variables
– Single-item measures (versus scales)
• Complex phenomena, unreliability
• May be appropriate for simple, factual questions
– Disparate populations & intervention evidence
93
Data Validity
• What are you measuring? (e.g., attitudes, perceptions, behaviors, practices, ability)
– Providing validity evidence that two attitude scales are correlated may be helpful, but it won’t necessarily provide evidence that the attitude scale measures behaviors or abilities
– Self-report data are always suspect
• E.g., are you good in math?
94
Data Validity
• Evidence of Reliability
– Test-retest
– Alternate forms
– Internal consistency
– Inter-rater reliability
– Inter-coder agreement
– Item analysis
95
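As an aside for the quantitatively inclined: internal consistency from the list above is most often reported as Cronbach’s alpha. A minimal sketch of the calculation follows; the item scores are invented for illustration only.

```python
# Cronbach's alpha: an internal-consistency estimate of reliability.
# Illustrative sketch; the item scores below are invented.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding respondents' scores."""
    k = len(item_scores)
    sum_item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

items = [
    [4, 5, 3, 4, 2],  # item 1, five respondents
    [4, 4, 3, 5, 2],  # item 2
    [5, 5, 2, 4, 3],  # item 3
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Remember, per the later slide: this is evidence about the reliability of these scores, not of the scale in general.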
Data Validity
• Response sets
– Acquiescence (people like to say yes and agree)
– Social Desirability (people like to look good)
– Extreme responding (reverse coding)
– Moderate responding (remove middle)
– Sponsor Bias (what they think you want to hear)
– Consistency (answer new questions based on previous answers, rather than each independently)
96
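The reverse coding mentioned above has a simple mechanical side: before analysis, reverse-keyed items must be recoded so that higher scores mean the same thing on every item. A minimal sketch; the 1–5 scale and the responses are assumptions for illustration.

```python
# Reverse-code negatively keyed Likert items so that, after recoding,
# higher scores point the same direction on every item.
# Sketch assumes a 1-to-5 response scale (an invented example).

def reverse_code(score, scale_min=1, scale_max=5):
    """Map 1<->5, 2<->4, and so on, for a reverse-keyed item."""
    return scale_min + scale_max - score

raw = [1, 2, 5, 4]  # invented responses to one reverse-keyed item
print([reverse_code(s) for s in raw])  # [5, 4, 1, 2]
```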
Data Validity
• Use of existing instruments
– Get permission
– Did they perform validity studies (not just pilot studies)?
– Provide previous evidence of scale use as part of an argument for using the existing instrument
– Collect your own evidence to verify your data are useful for your own analyses
• Validity and reliability of scores, not scales
97
Data Validity
• Adaptation of existing instruments
– Get permission!
– Impact of changing item wording
• Change “leader” to “supervisor”
– Adding or removing items
• Less related to previous evidence
– Changing language
– Cultural adaptation
– Need new validity study process
98
Data Validity
• Do your respondents or resources have the information required to answer the questions?
– Do they ask a subordinate to respond?
– What’s the best source for the data?
• Self-report vs. other-report
– Self-report items that shouldn’t be
• Are you a good student? A good leader?
• Maybe other, better information exists
99
Data Validity
• How were data collected?
– In person
– In a classroom
– Internet (web site vs. email)
– Postal mail
– Telephone
• Is there any reason to worry that the data collection method has an impact on the data itself?
100
Data Validity
• Qualitative data
– Can you be sure what they tell you is correct?
• Do you need facts or perceptions?
– How accurate is their information?
– How trustworthy are their answers?
– Do they have the information you need?
– Corroboration
– Triangulation
101
Design Validity
“To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of.”
(Fisher, 1938)
“You can't fix by analysis what you bungled by design.”
(Light, Singer, & Willett, 1990, preface)
102
Design Validity
• Are we confident that evidence we obtain from our design can be used to make the knowledge claims we hope to make by answering our research questions?
• How strong a causal argument can you make (i.e., internal validity)?
103
Design Validity
• Take plenty of methodology courses: statistics, qualitative methods, research design, measurement, evaluation…
• Always pay attention to Validity, no matter what form or what it’s called
• Learn research design well, a variety of types of research and modes of inquiry (e.g., experimental, correlational, qualitative)
104
Design Validity
• Research Design is not intended to be an impediment, but rather it is a foundation developed over time to help provide us confidence that our evidence can be used to make the knowledge claims we want to make
• Recall that research is a rhetorical process (even if you find “the truth” you need to convince both yourself and others)
105
Design Validity
• We need to protect ourselves from ourselves
• There are too many ways a researcher can influence the results
• Recall that we often start a research project with an answer in mind (the research hypothesis)
• We use strong research design to keep us from influencing the results inadvertently
106
Design Validity
• All research designs can be flawed
• All research designs can be useful
• Choose a research design that will answer your research question(s)… and then answer them
• There is no perfect study… make justifiable decisions and report them
• Always consider ethical issues, get IRB approval
107
Design Validity
• Procedures should be clear and transparent enough that if someone else needed to do it, they would know how (like a recipe for research)
• Said another way, other researchers need to be able to perfectly replicate your research
• Defend your decision to choose the methods and design, but don’t defend the design itself
– Qualitative researchers no longer need to explain why it’s acceptable to do qualitative research
108
Design Validity
Basic designs
• Just collect data
– Observational
– Correlational
• Collect data after manipulation or across traits
– Non-Experimental
– Quasi-Experimental
– Experimental
109
Design Validity
• Different designs allow us to control threats to internal validity differently
• Some give us no control
• Some give us strong control
– For example, adding random assignment and control groups gives us strong comparisons
• Different design elements help us control threats to internal validity
110
Design Validity
• Pay attention to Threats to Internal Validity
– Maturation
– History
– Testing
– Instrumentation
– Selection Bias
– Attrition
– Regression
– Contamination
– Rivalry
– Demoralization
– Experimenter bias
111
Design Validity
• Observation (O)
• Treatment then Observation
– X O
• Observation then Treatment then Observation
– O1 X O2
• Treatment then Observation with Control Group
– X O
– C O
112
Design Validity
• Random Assignment before Treatment then Observation with Control Group
– R X O
– R C O
• Random Assignment before Observation then Treatment then Observation with Control Group
– R O1 X O2
– R O1 C O2
113
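The R in the designs above, random assignment, is easy to implement in software. A minimal sketch; the participant IDs and the fixed seed are invented (the seed simply makes the assignment reproducible for your records).

```python
# Randomly assign participants to conditions, as in the R (randomized)
# designs above. Sketch: the IDs and seed below are invented examples.
import random

def randomly_assign(participants, groups=("treatment", "control"), seed=None):
    rng = random.Random(seed)      # seeded for a reproducible record
    shuffled = list(participants)
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        # deal the shuffled participants out like cards, so group
        # sizes stay as equal as possible
        assignment[groups[i % len(groups)]].append(person)
    return assignment

ids = [f"P{i:02d}" for i in range(1, 21)]
groups = randomly_assign(ids, seed=2015)
print({g: len(members) for g, members in groups.items()})
```

Reporting the procedure (and the seed) is one way to make the assignment transparent and replicable, as the earlier slides recommend.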
Design Validity
• Note that in many of these cases, the data can be collected all at the same time, or can be collected sequentially
• In order to achieve assurance of temporal order, the treatment should precede the observation
• In order to control many threats, treatment and control must occur simultaneously across groups (not sequentially)
114
Design Validity
• Clearly indicate in “experimental” designs what interventions, treatments, and/or manipulations were implemented (or what traits compared)
• Report how potential threats to validity were controlled (and what could not be controlled)
• Report whether random assignment was used
• Describe whether you will use a between-subjects or within-subjects design
115
Design Validity
Pay attention to sample size as a design element
• Larger samples can but may not represent the population better (representativeness is key)
• Sample size matters for Statistical Power
• Sample size matters for the accuracy of our estimates (e.g., confidence intervals)
• The larger the sample, the better the results should cross-validate and remain stable
116
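The accuracy point can be made concrete with the familiar large-sample formula for a 95% confidence interval around a mean: half-width is roughly 1.96 times the standard deviation divided by the square root of n, so quadrupling the sample halves the interval. A minimal sketch, with an invented standard deviation of 15:

```python
# Large-sample 95% CI half-width for a mean: z * sd / sqrt(n).
# The standard deviation of 15 is invented for illustration.
from math import sqrt

def ci_half_width(sd, n, z=1.96):
    """Approximate half-width of a 95% confidence interval for a mean."""
    return z * sd / sqrt(n)

for n in (25, 100, 400):
    print(f"n = {n:3d}: mean +/- {ci_half_width(15, n):.2f}")
```

Notice the diminishing return: each halving of the interval costs four times the sample, which is one reason to plan sample size before, not after, data collection.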
Design Validity
Use a pilot study
• To make sure manipulations and equipment will work as you expect
• To make sure data can be collected as desired
• To make sure assistants know how to perform their roles
• To make sure you have estimated time and costs and effort reasonably well
117
Design Validity
Whenever possible
• Use random assignment
• Use blind and double-blind interventions
• Acknowledge ethical issues
• Describe strengths & acknowledge weaknesses of the design
– Provide rationale for the methodology
118
Design Validity
Decide on your methods of control
• “Third variable” problems (covariates)
• Interactions among predictors
• Sometimes a third variable is responsible for an outcome on the dependent variable
• Control potentially confounding variables by including them in data collection or design
– If left uncontrolled, they are called extraneous
119
Design Validity
Decide on your methods of control
• Control through design
– Control, comparison, or placebo groups
– Matching or repeated measures
– Variance reduction, homogeneous samples
• Statistical control (covariance, blocking)
– Consider and/or include all essential variables
– Maybe literature helps you rule some out
120
Lovitts (2005)
Outstanding
• Has a brilliant research design
• Uses or develops new tools, methods, approaches, or types of analyses
• Has rich data from multiple sources
Very Good
• Includes well-executed research
• Demonstrates technical competence
• Uses appropriate (standard) theory, methods, and techniques
Acceptable
• Uses standard methods
Unacceptable
• Relies on inappropriate or incorrect methods
• Has data that are flawed, wrong, false, fudged, or misinterpreted
121
Lovitts (2005) Component 4: Methods
The research design and methods applied/developed are
• Appropriate
• Described in detail
• In alignment with the question addressed and the theory used
In addition, the author demonstrates
• An understanding of the methods’ advantages and disadvantages
• How to use the methods
122
The Wonderful, Descriptive World of Hans Rosling
123
Conclusion Validity
• What conclusions can you reach about your variables within your populations?
• What evidence supports your claims—whether statistical, analytical, thematic?
• What rationale justifies your implications and recommendations?
• Do you have multiple sources to support your claims (e.g., triangulation, replication)?
124
Conclusion Validity
• We want to know how strong our conclusions can be based on the information we have in our data (which is based on other validities to some extent)
• “All models are wrong; but some models are useful.” (George Box, 1979)
– We almost always have model specification errors
• How useful is your model?
125
Conclusion Validity
• Descriptive statistics help us understand the data before running statistical tests
– For example, measurement statistics to verify that we can even run inferential statistics on these data in a meaningful way
– Understanding the underlying subpopulations
– Testing assumptions
• But we can also ask significant, informative, descriptive Research Questions
126
Conclusion Validity
• “Data don’t speak to strangers” (V. M. Conley)
• “Telling the story” of the data
– borrowed from qualitative research
• Describe your participants
– recall representative validity
– Compare sample to population
– Provide information for transferability
127
Conclusion Validity
• “Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.” (Tukey, 1962) (e.g., log scale)
128
Conclusion Validity
• 42.7% of all statistics are made up on the spot. (Steven Wright, American comedian)
• 62.4% of all statistics are made up on the spot. (http://www.math.temple.edu/~paulos/)
129
Conclusion Validity
• 62.4% of all statistics are made up on the spot. (http://www.math.temple.edu/~paulos/)
• 53.14159265359% of all statistics are inappropriately precise (I just made this up on the spot, but it is true for some percentage)
New speculation suggests that the number may actually be closer to 62.381528%
130
Conclusion Validity
As Tukey (1977, p. vii) said:“Once upon a time, statisticians only explored. Then they learned to confirm exactly—to confirm a few things exactly, each under very specific circumstances. As they emphasized exact confirmation, their techniques inevitably became less flexible… Anything to which a confirmatory procedure was not explicitly attached was decried as ‘mere descriptive statistics’, no matter how much we learned from it… Today, exploratory and confirmatory can—and should—proceed side by side.”
131
Conclusion Validity
• Tukey (1977, p. vii) also said: “We can no longer get along without confirmatory data analysis. But we need not start with it.”
• The simplest, correct answer is often the best and most persuasive
– Which means that the simplest statistical analysis that will correctly provide the results you need may be preferred (e.g., EFA vs. CFA, t test vs. SEM)
132
Conclusion Validity
• Run correct analyses correctly
– Were appropriate interactions tested?
– Were appropriate control techniques used?
– Were appropriate units of analysis used?
• Ecological fallacy, aggregation or disaggregation
– How were missing values handled?
– Avoid Type VI error (an analysis that answers a different question than your research question)
133
Conclusion Validity
• Perform the correct analyses
• Test the statistical assumptions
• Report any other assumptions you’ve made
• Check for outliers and extreme values
• Consider cross-validation of results
• Consider carefully appropriate “post hoc” tests
– Multiple comparisons
– Testing regression coefficients
134
Conclusion Validity
• After describing your data, perform the tests required to answer your research questions
• Answer research questions completely, even if it requires additional post hoc or supplementary analyses (recommend future research when you don’t have data available for such analyses)
• Include supplemental analyses to explore your data completely, but be clear that these are supplemental analyses
135
Conclusion Validity
• Run supplemental analyses that permit examination of other interesting, meaningful relationships that exist in the data
– We can almost always find a reason for results after the fact, even if we didn’t expect them
– These should be used only for descriptive purposes to recommend future research
– These questions are not falsifiable after you have the data, and you may know the answer before you start (either consciously or subconsciously)
136
Conclusion Validity
• Effect sizes are always desirable (we can “buy” statistical significance)
• Confidence intervals provide both effect size and statistical significance information
• Be careful using exploratory analyses to confirm and confirmatory analyses to explore
137
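Since we can “buy” statistical significance with a large enough sample, effect sizes tell the reader whether a difference matters. One widely reported effect size for a two-group comparison is Cohen’s d, the standardized mean difference. A minimal sketch; the scores below are invented for illustration.

```python
# Cohen's d: standardized mean difference using the pooled sample SD.
# Illustrative sketch; the group scores below are invented.
from math import sqrt
from statistics import mean, variance

def cohens_d(group1, group2):
    """Mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * variance(group1) +
                  (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled_var)

treatment = [82, 88, 75, 90, 85, 79]  # invented scores
control = [70, 74, 68, 80, 72, 76]
print(f"d = {cohens_d(treatment, control):.2f}")
```

A confidence interval around d (or around the raw mean difference) then carries both the effect-size and the significance information in one number pair, as the slide notes.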
Conclusion Validity
• Report your statistics well
– Correct values
– Degrees of freedom
– p value & decision criterion (level of significance)
– Directionality of alternative hypotheses
• Use tables and graphs well
– Some information is better in a table or a graph
– Sometimes results are better written in text
138
Conclusion Validity
• Interpret analyses correctly
– In results, summarize statistical results by explaining what they mean descriptively and in relation to hypotheses
– In conclusions, interpret statistical results by what they mean in relation to research questions
– What was the impact of outliers and extreme values?
– Don’t misinterpret p values or confidence intervals
– Remember that all conclusions are probabilistic, not truths or certainties
139
Conclusion Validity
• Interpret analyses correctly
– Don’t over-interpret
– Don’t go farther down the causal conclusion path than your design will allow
– Report any unusual analyses or tactics used
• Trimming the data
– Connect all conclusions to results; do not go beyond what the data will allow (no extrapolation, be careful with interpolation)
140
Conclusion Validity
• Interpret statistical significance very carefully
– Do not read inappropriate information from p values
• If doing statistical significance testing, we must always pay attention to potential errors
– Type I
– Type II (and Statistical Power, therefore sample size)
– Type III
– Multiple hypothesis testing accumulates Type I errors (data “fishing”)
141
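The accumulation of Type I errors across tests is easy to quantify: with k independent tests each at level alpha, the probability of at least one false positive is 1 − (1 − alpha)^k. A minimal sketch, using the conventional alpha of .05 and a simple Bonferroni adjustment:

```python
# Familywise Type I error rate: the chance of at least one false
# positive grows quickly as independent tests accumulate.
# Sketch assumes independent tests at the conventional alpha = .05.

def familywise_error(alpha: float, num_tests: int) -> float:
    """P(at least one Type I error) across independent tests."""
    return 1 - (1 - alpha) ** num_tests

def bonferroni_alpha(alpha: float, num_tests: int) -> float:
    """Per-test level that keeps the familywise rate at or below alpha."""
    return alpha / num_tests

for k in (1, 5, 10, 20):
    print(f"{k:2d} tests: familywise rate = {familywise_error(0.05, k):.3f}")
```

With just 10 tests at .05, the familywise rate already exceeds .40, which is why multiple-comparison corrections matter.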
Conclusion Validity
• Consider Performing “Sensitivity” Analyses
– What if something major changes in your data?
• e.g., more data at one extreme or the other
– What if something minor changes?
– What if you analyzed the data differently?
– What if you ran robust statistics even if not needed?
– What if you don’t assume normality?
– What if you use nonparametric analyses?
– What if you use more sophisticated analyses?
142
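One simple sensitivity analysis in that spirit is to re-estimate a result with robust alternatives and see whether the story changes. The sketch below, with an invented data set containing one extreme value, compares the ordinary mean with the median and a trimmed mean:

```python
# Sensitivity of a location estimate to one outlier: compare the
# ordinary mean with robust alternatives. The data are invented,
# with one deliberately extreme value (9.8) at the end.
from statistics import mean, median

scores = [3.1, 3.4, 2.9, 3.2, 3.0, 3.3, 9.8]

def trimmed_mean(data, trim=1):
    """Mean after dropping `trim` values from each end of the sorted data."""
    ordered = sorted(data)
    return mean(ordered[trim:len(ordered) - trim])

print(f"mean         = {mean(scores):.2f}")
print(f"median       = {median(scores):.2f}")
print(f"trimmed mean = {trimmed_mean(scores):.2f}")
```

If the mean and the robust estimates disagree this much, the conclusion rests heavily on one observation, and the reader deserves to know that.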
Lovitts (2005)
Outstanding
• Analysis is comprehensive, complete, sophisticated, and convincing
• Results are significant
Very Good
• Demonstrates technical competence
• Uses appropriate (standard) theory, methods, and techniques
• Obtains solid, expected results or answers
• Misses opportunities to completely explore interesting issues and connections
Acceptable
• Uses standard methods
• Has an unsophisticated analysis that does not explore all possibilities and misses connections
• Has predictable results that are not exciting
Unacceptable
• Relies on inappropriate or incorrect methods
• Has data that are flawed, wrong, false, fudged, or misinterpreted
• Has wrong, inappropriate, incoherent, or confused analysis
• Includes results that are obvious, already known, unexplained, or misinterpreted
143
Lovitts (2005) Component 5: Results or Analysis
The analysis
• Is appropriate
• Aligns with the question and hypotheses raised
• Shows sophistication
• Is iterative
In addition, the amount and quality of data or information is
• Sufficient, well presented, intelligently interpreted
The author also cogently expresses
• The insights gained from the study
• The study’s limitations
144
The Return of Theoretical Validity
145
The Return of Theoretical Validity
• Why is the study significant? (i.e., important)
• What can we learn in terms of transferable or generalizable knowledge from this study?
• Is there theoretical support for variables & models specified, relationships found, and conclusions reached?
– Research is all about the theory
• Are causal arguments plausible?
146
Theoretical Validity Returns
• Provide recommendations for policy, practice, and future research that are grounded in your results
• Interpret results correctly (e.g., describe non-significant results the right way: “not statistically significant”)
• Avoid causal language unless it’s really okay
• Avoid too much reliance on self-report as fact
• Do not perform new analyses in discussion
147
Theoretical Validity 2
• Address limitations that arose during the study that could not be handled perfectly well
• Be objective and avoid confirmation bias (finding what you wanted to find)
• Answer your research questions
• Integrate findings (e.g., meta-inference)
• Why are your findings important?
• Discuss the implications of your results
148
Conclusion Validity
• Discussion should not simply reiterate results
• Discuss how well the study answered the research questions
• Did the statistical model you tested fit the data?
• Discuss alternative interpretations of the results or different models that might have been assessed
• Disclose potential conflicts of interest
149
Theoretical Validity: The Sequel
• Place your results in the context of the literature
• Interpret new and unanticipated relationships among variables
– See hidden and unexpected findings
• There’s an art to the science
– The more artistic researchers see things the rest of us don’t, perhaps based on their understanding of the theory, variables, and/or population
150
Theoretical Validity: The Sequel
• Provide a statement of support (or not) for any hypotheses provided
• If the hypotheses were not supported, consider post hoc explanations
• If the hypotheses were supported, how strong were the effect sizes?
151
Theoretical Validity: The Sequel
• Discuss what difficulties were faced in performing the research (some may be limitations)
– Some limitations are known before the study starts (e.g., self-report data)
– Some limitations arise during the study; sometimes they can be handled adequately, sometimes not
– Recognize the weaknesses (and strengths) of your own study
152
Lovitts (2005)
Outstanding
• Conclusion ties the whole thing together
• Is publishable in top-tier journals
• Is of interest to a larger community and changes the way people think
• Pushes the discipline’s boundaries and opens new areas for research
Very Good
• Makes a modest contribution to the field but does not open it up
Acceptable
• Makes a small contribution
Unacceptable
• Has unsupported or exaggerated interpretation
• Does not make a contribution
153
Lovitts (2005) Component 6: Discussion or Conclusion
The conclusion
• Summarizes the findings
• Provides perspective on them
• Refers back to the introduction
• Ties everything together
• Discusses the study’s strengths and weaknesses
• Discusses implications and applications for the discipline
• Discusses future directions for research
154
Summary
• Where or from whom will you acquire the data for your qualitative problem-finding study? We suggest five data sources: (a) Self, (b) Literature, (c) Theory, (d) Advocates, and (e) Skeptics. Regarding the self as a source of data, Crowl (1993) notes that "One of the most fertile sources of research ideas is you" (p. 27). For example, what experiences have you had that are pertinent to your topical area? Also, what are your thoughts and opinions concerning this topic? With respect to the literature, "...there is no substitute for knowing the territory..." (Krathwohl, 1993, p. 75). Being intimately familiar with the professional literature in your general area of interest is a necessary prerequisite to problem identification. A researcher can also work deductively by starting with a theory. Consequences, or predictions that might be observable in practice and that would serve to either confirm or deny a theory, can be investigated. Finally, both advocates and skeptics should be consulted. These individuals will serve as your key informants in the qualitative problem finding-creating effort. Advocates have carefully considered perspectives on the topic of interest and may have unique insights or a number of unresolved issues they may be willing to share with you. If you really want to understand the weaknesses of an idea, however, discuss it with someone who is critical of the notion or who holds an opposing viewpoint.
• Assumptions- What assumptions are being made? Tacit or overt? Are they reasonable? Who is making them? What are their consequences?
• Challenges- To what extent are aspects of this area open to challenge? By whom? Are they political? What are competing viewpoints?
• Importance- Why is this area important---which parts? Who studies here? Are the questions more relevant to theory or practice?
• What's Hot- What is the current thinking? Have the newest ideas shown to be effective and practical? Where is the unfinished work?
• What's Cool- What is the "other side" of current thinking? What is presently unpopular? What does older lit say about current thinking?
• Provocative- Provocative statements, uncharted areas, unverified findings, and interesting ideas can also be sources of RQ’s
• History- How did we get to this current state of knowledge? Where were we? Can we predict where we may go?
• Ideal/Goal- What is the ideal situation or goal? Is this ideal reasonable? What are pitfalls regarding reaching the goals?
• Extremes- Take the notions and recommendations to their logical (or not) extremes. Is this extreme desirable? Are there dangers?
• Controversy- What is current controversy in this area? Where are areas of conflict? What are tacit agreements? Who gains/loses?
• Contradictions- What is contradicting what? Who is contradicting whom? What is the quality of the positions?
• The Gaps- Why are there gaps?
Just Kidding…
155
Summary
• Provide evidence and support for claims and assertions
• Write in a formal and scientific style
• Write clearly using headings, logical organization, topic sentences, and transitions
• Use consistent wording and variable names
• Follow your publication manual for reporting statistics and citing references
• You can do many things in research as long as you report them---and ideally justify them
156
Summary
• Write concisely, but don’t oversimplify so much that what you say is wrong
• Don’t exaggerate your results or conclusions, but promote your legitimate findings (don’t bury the lead)
• Create a strong research design and follow it
• Expand your methodological horizons---we do what we know how to do, we ask what we know how to ask
• Tell the reader what you will tell them, tell them, and then tell them you told them
• Tell the story of your research
157
Summary
• Be able to describe your research in a paragraph
– “elevator talk”
– “explain it to your grandmother”
• Good writing, proper citation, & format matter
– Follow your publication manual
– Mundane things like margins, page numbers, font
• Find a peer support group or writing group
– review each other’s work
– “dissertation camp”
158
Thank you