Research Design, Methodology, and Analysis – Validity & Research Design (Part 1) · 2020.12.14 · Dr. Tan Teck Kiang


  • Research Design, Methodology, and Analysis – Validity & Research Design (Part 1)

    DR. TAN TECK KIANG

  • Objectives

    (1) Provide NUS researchers with useful research methodology to carry out their research

    (2) Provide an up-to-date educational methodology to help researchers apply for research funds

    (3) Support NUS staff to initiate research projects and activities

    (4) Provide ALSET research activities and publications

    (5) Help researchers to use the ALSET Data Lake (ADL) for carrying out research

    (6) Provide guidelines and information to carry out data analytics for analysing research projects

  • Contents – Research Design, Methodology, and Analysis (Part 1)

    1. Validity
       • Internal Validity
       • External Validity
       • Statistical Conclusion Validity
       • Construct Validity

    2. Introduce a Few Research Designs
       • Types of Research Designs
       • Random Selection and Assignment
       • 4 Quasi- / Non-Experimental Designs
       • 4 Experimental / Randomized Designs
       • Failure of Random Assignment
       • Benefits of Using Quasi-Experiments

  • Four Types of Validity Questions

    Statistical Conclusion Validity – Is there a relationship between cause and effect?
    Internal Validity – Is the relationship causal?
    Construct Validity – Can we generalize to the constructs?
    External Validity – Can we generalize to other persons, places, and times?

    Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Skokie, IL: Rand McNally.

  • Internal Validity

    The degree of confidence that the causal relationship being tested is trustworthy and not influenced by other factors or variables.

    Intervention → causes → Outcome

    Internal validity means that you have evidence that what you did in the study (i.e., the intervention) caused what you observed (i.e., the outcome) to happen. It is the intervention, not an alternative cause or the procedure, that causes the change.

  • Why is Internal Validity Important?

    • Researchers often conduct research to determine cause-and-effect relationships.

    • If a study shows a high degree of internal validity, there is strong evidence of causality. Otherwise, for a study that has low internal validity, there is little or no evidence of causality.

    • Reviewers examining a research grant proposal look for evidence of internal validity.

    • Publication reviewers also examine internal validity, to judge whether the study was properly planned so that a valid statistical conclusion can be drawn that changes in the independent variable caused the observed changes in the dependent variable.

  • 7 Threats to Internal Validity – Campbell and Stanley

    1. Testing Effect
    2. History
    3. Maturation
    4. Instrumentation
    5. Statistical Regression
    6. Attrition (Experimental Mortality)
    7. Selection Biases

  • Testing Effect (Practice Effect)

    Example: a depression score measured at pre-test, post-test, and follow-up. Questions may become familiar when tested repeatedly, and therefore easier, so if scores improve from pre-test to post-test it could be a practice effect rather than a treatment effect: repeated quizzing or testing improves retention.

    How to avoid the testing effect? Use a control group, to control for the learning curve of taking the test repeatedly.

  • History
    Events or experiences outside the study impact the program. Example: a treatment running from Week 0 to Week 4; an outside event at Week 2 (the slide's example is news about Catherine Zeta-Jones) can change outcomes independently of the treatment.

  • Maturation
    Change in participants due to mental or physical maturing. Example: pre-school children who take breakfast improve in reading over the 6 months between pre-test and post-test, but the change may be due to growing older, not the intervention. Remedies: a shorter time between pre- and post-test; have a control group.

  • Instrumentation
    Changes in the way a test or other measuring instrument is calibrated that could account for the results of a research study. Examples:

    • Pre-test: on-line survey; post-test: telephone survey
    • Pre-test: easy assessment; post-test: difficult assessment
    • Pre-test: self-report questionnaire; post-test: open interview with a new questionnaire

    Remedy: consistency – use a standard inventory.

  • Regression to the Mean
    The tendency for scores on a post-test to be closer to the mean, especially for those who were at the extreme ends (low or high) of the continuum of scores at pre-test. With no regression, an extreme group's post-test mean equals its pre-test mean; with regression to the mean, the post-test mean moves from the pre-test extreme toward the population mean.
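This tendency can be demonstrated with a quick simulation (an illustrative sketch, not from the slides): each participant's true ability is fixed, but every test adds independent measurement noise, so a group selected for extreme pre-test scores drifts back toward the population mean at post-test.

```python
import random

random.seed(0)
N = 10_000
# True ability plus independent measurement noise at each test
true = [random.gauss(100, 10) for _ in range(N)]
pre = [t + random.gauss(0, 10) for t in true]
post = [t + random.gauss(0, 10) for t in true]

# Select the top 5% on the pre-test (an "extreme" group)
cutoff = sorted(pre)[int(0.95 * N)]
extreme = [i for i in range(N) if pre[i] >= cutoff]

pre_mean = sum(pre[i] for i in extreme) / len(extreme)
post_mean = sum(post[i] for i in extreme) / len(extreme)
print(f"pre-test mean of extreme group:  {pre_mean:.1f}")
print(f"post-test mean of extreme group: {post_mean:.1f}")
```

No treatment is applied here at all, yet the extreme group's post-test mean falls back toward 100, which is exactly why a pre-post gain in an extreme-scoring sample is ambiguous without a control group.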

  • Attrition / Experimental Mortality
    Non-random dropout through the course of the study.

    Homogeneous attrition: attrition rates are equal across experimental conditions – a threat to external validity.
    Heterogeneous attrition: attrition rates differ across experimental conditions – a threat to internal validity.

    Example 1 – 50% attrition, equal across cells:

    Planned sample              Eventual sample
    Group  Control  Treatment   Group  Control  Treatment
    Young  20       20          Young  10       10
    Old    20       20          Old    10       10

    Example 2 – attrition rates differ:

    Planned sample              Eventual sample
    Group  Control  Treatment   Group  Control  Treatment
    Young  20       20          Young  9        15
    Old    20       20          Old    12       12

    Remedy: a shorter time between pre- and post-test.
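The second table's problem can be checked mechanically. This sketch (illustrative only; the dictionaries just encode the slide's example) computes per-cell attrition rates and flags heterogeneous attrition, the internal-validity threat.

```python
planned = {("Young", "Control"): 20, ("Young", "Treatment"): 20,
           ("Old", "Control"): 20, ("Old", "Treatment"): 20}
eventual = {("Young", "Control"): 9, ("Young", "Treatment"): 15,
            ("Old", "Control"): 12, ("Old", "Treatment"): 12}

def attrition_rates(planned, eventual):
    # Fraction of each planned cell that dropped out before the end
    return {cell: 1 - eventual[cell] / planned[cell] for cell in planned}

rates = attrition_rates(planned, eventual)
for cell, r in sorted(rates.items()):
    print(cell, f"{r:.0%}")

# Heterogeneous attrition: rates differ across cells -> internal validity threat
hetero = len({round(r, 4) for r in rates.values()}) > 1
print("Differential attrition detected:", hetero)
```

Running the same check on Example 1 would yield a single 50% rate for every cell, so `hetero` would be `False` there.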

  • Selection Threat
    Biases resulting in differential selection of respondents for the control and experimental groups: any factor other than the program that leads to post-test differences between groups. Subjects who self-select into the experimental and control groups affect the dependent variable.

    Example: in a motivation program, motivated persons join the experimental group and obtain high motivation scores, while unmotivated persons end up in the control group with low scores.

  • Categorizing Internal Validity Threats

    Participant associated: Maturation, Attrition, Selection
    Measurement associated: Testing, Instrumentation, Regression to the Mean
    Outside source: History

  • External Validity

    The extent to which results from a study can be applied (generalized) to other situations, people, groups, settings, events, and time periods.

    2 main external validities:
    • Population validity – how well can research on a sample be generalized to the population as a whole?
    • Ecological validity – are your study results generalizable across different settings? Do they reflect reality?

    Internal validity concerns truth in the study; external validity concerns truth in real life.

    https://www.statisticshowto.com/sample/
    https://www.statisticshowto.com/what-is-a-population/

  • Example: a sample of NUS final-year students.
    Population validity: generalizing to all NUS students.
    Ecological validity: generalizing to NTU first-year students or to other countries.

  • Real-life situation example: whether COVID-19 findings generalize across humidity and temperature conditions.

  • Population Hierarchy, Sample, Inclusion and Exclusion Criteria

    Total population → targeted population → experimentally accessible population → sample → assignment to treatment group and control group. Inclusion and exclusion criteria define who enters at each stage.

    Example: all NUS students; included if they have a student email address and did not express not to participate.

  • Inclusion and Exclusion Criteria

    Breast cancer research (chemotherapy):
    • Inclusion criteria: postmenopausal women between the ages of 45 and 75 who have been diagnosed with Stage II breast cancer.
    • Exclusion criteria: failing the kidney function test.

    Ethnographic research:
    • Inclusion criteria: individuals living near the Amazonian forest; minority women.

  • Construct Validity
    Construct validity is about establishing valid operational measures for the concepts being studied.

    Construct: Socio-Economic Status
    Operational definitions: parents' income; housing type; have a maid?; own a car?

    Construct: Academic Achievement
    Operational definitions: final-year grade; honours degree?; continuous assessment

  • Statistical Conclusion Validity
    Statistical conclusion validity (SCV) examines the extent to which conclusions derived using a statistical procedure are valid. It refers to the accuracy of statistical conclusions regarding the relationship between or among the variables of interest under study.

    Cook and Campbell (1979) – 3 aspects of validity from SCV:
    (1) Enough statistical power to detect an effect
    (2) The risk of claiming an effect that does not actually exist (Type I error: false positive)
    (3) Confidently estimating the size of the effect

    Type  Plain English   Statistical interpretation (H0: no relationship)
    I     False positive  Reject the null when it is true
    II    False negative  Do not reject the null when it is not true
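Power and Type I error can be made concrete with a small Monte Carlo sketch (illustrative only; the 1.96 cutoff is the large-sample normal approximation rather than an exact t critical value, and the effect sizes and sample sizes are made up).

```python
import random
import statistics as st

random.seed(1)

def welch_t(x, y):
    # Welch-style t statistic; |t| > 1.96 serves as a rough rejection cutoff
    vx, vy = st.variance(x), st.variance(y)
    return (st.mean(x) - st.mean(y)) / ((vx / len(x) + vy / len(y)) ** 0.5)

def rejection_rate(effect, n, reps=2000):
    """Fraction of simulated two-group studies that reject H0 (no difference).
    With effect=0 this estimates the Type I error rate; with effect>0
    it estimates power (1 - Type II error rate)."""
    hits = 0
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [random.gauss(effect, 1) for _ in range(n)]
        if abs(welch_t(x, y)) > 1.96:
            hits += 1
    return hits / reps

alpha = rejection_rate(0.0, 50)       # Type I error rate, near the nominal 0.05
power_50 = rejection_rate(0.5, 50)    # power at effect d = 0.5, n = 50 per group
power_100 = rejection_rate(0.5, 100)  # same effect, larger sample -> more power
print(alpha, power_50, power_100)
```

Doubling the per-group sample size raises power noticeably while leaving the Type I error rate at its nominal level, which is the trade-off that sample size planning (aspect 1 above) controls.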

  • Strategies for Statistical Conclusion Validity

    1. Good design – connect the design to the expected statistical analysis
    2. Sample size planning – control the Type II error rate (sufficient sample size)
    3. Correct use of statistical analysis
    4. Check for violations of statistical assumptions
    5. Be aware of restriction of range
    6. Avoid fishing – stress on theory
    7. Avoid poor reliability of treatment implementation
    8. Use reliable measures and validated measurement

  • Correct Use of Statistical Analysis – Assumptions of the t-test

    1. Gaussianity – population distributions are normal
    2. Independence – samples are independent and randomly selected
    3. Homoscedasticity – population variances are equal

    Violation of assumptions: failure to meet any of these fundamental assumptions can result in increased Type I error, reduced statistical power, or both.
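The equal-variance assumption is the easiest of the three to screen. The sketch below (a rule-of-thumb illustration, not from the slides; the 4x threshold is a common informal guideline, and the data are invented) compares sample variances and flags a doubtful assumption, in which case Welch's t-test is the usual fallback.

```python
import statistics as st

def check_equal_variance(x, y, ratio_limit=4.0):
    """Rough screen for the homoscedasticity assumption of the pooled
    t-test: if the larger sample variance exceeds the smaller by more
    than ratio_limit, equal population variances are doubtful."""
    vx, vy = st.variance(x), st.variance(y)
    ratio = max(vx, vy) / min(vx, vy)
    return ratio, ratio <= ratio_limit

a = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1]   # tightly clustered scores
b = [3.0, 7.0, 1.5, 8.5, 2.0, 9.0]   # widely spread scores
ratio, ok = check_equal_variance(a, b)
print(f"variance ratio = {ratio:.1f}, equal-variance plausible: {ok}")
```

A formal alternative would be Levene's test, but the ratio screen already shows the point: with spreads this different, a pooled-variance t-test risks an inflated Type I error.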

  • Research Design

    • Research design is a comprehensive plan for data collection in an empirical research project.
    • It is a "blueprint" for empirical research aimed at answering specific research questions or testing specific hypotheses, and it specifies at least three processes:
      1. The data collection process
      2. The instrument development process
      3. The sampling process

  • Type of Research Designs

    Is random assignment used?
    • Yes → Randomized or True Experiment
    • No → Is there a control group or multiple measures?
      • Yes → Quasi-Experiment
      • No → Non-Experiment
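The decision tree above reduces to a two-question rule, which can be encoded directly (function and parameter names are mine, not from the slides):

```python
def classify_design(random_assignment: bool,
                    control_group_or_multiple_measures: bool) -> str:
    """Classify a study design by the slide's two yes/no questions."""
    if random_assignment:
        return "Randomized or True Experiment"
    if control_group_or_multiple_measures:
        return "Quasi-Experiment"
    return "Non-Experiment"

print(classify_design(True, True))    # Randomized or True Experiment
print(classify_design(False, True))   # Quasi-Experiment
print(classify_design(False, False))  # Non-Experiment
```

Note that once random assignment is used, the presence of a control group no longer changes the classification: the first question dominates.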

  • Non-Experiment, Quasi-Experiment, and Randomized (True) Experiment

    • Non-experiment – no comparison group (no control group): measure outcomes before and after for participants only.
    • Quasi-experiment – with comparison group (control and treatment): measure outcomes before and after for participants and non-participants.
    • Randomized or true experiment – randomized participants: measure outcomes before and after for participants and non-participants, and randomize participants into treatment and control.

  • Random Selection and Assignment

    Random selection: the process of randomly selecting individuals from a population to be involved in a study (population → sample).

    Random assignment: the process of randomly assigning the individuals in a study to either a treatment group or a control group (sample → groups).
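Both processes are one-liners with a pseudorandom generator; a minimal sketch (the population, sample size, and group sizes are invented for illustration):

```python
import random

random.seed(42)

population = [f"student_{i}" for i in range(1000)]

# Random selection: draw a simple random sample from the population
sample = random.sample(population, 40)

# Random assignment: shuffle the sample, then split it into two groups
random.shuffle(sample)
treatment, control = sample[:20], sample[20:]

print(len(treatment), len(control))  # 20 20
```

Random selection supports generalizing to the population (external validity); random assignment equalizes the groups in expectation, which is what supports the causal claim (internal validity).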

  • Notation

    Notation  Description
    T         Intervention or Program
    C         Control
    O         Observation (Data Collection Point)
    RA        Random Allocation (Assignment)

    Example:
    RA  O1  T  O2   (experimental group)
    RA  O3  C  O4   (control group)

    Participants are randomly allocated to the experimental (T) and control (C) groups, with 4 data collection points.

  • 4 Quasi- / Non-Experimental Designs

    (1) One-Group Post-Test Only Design:
        T  O1
    (2) Post-Test Only Comparison Group Design:
        T  O1
        C  O2
    (3) One-Group Pre-Test and Post-Test Design:
        O1  T  O2
    (4) Pre-Test and Post-Test Non-Equivalent Group Design:
        O11  T  O12
        O21  C  O22

    (T = intervention or program, C = control, O = observation)

  • 4 Experimental / Randomized Designs

    (1) Two-Group Post-Test Only Design:
        RA  T  O1
        RA  C  O2
    (2) Two-Group Pre-Test Post-Test Design:
        RA  O11  T  O12
        RA  O21  C  O22
    (3) Two-Group Pre-Test Post-Test Follow-Up Design:
        RA  O11  T  O12  O13
        RA  O21  C  O22  O23
    (4) Retrospective Pre-Test and Post-Test Group Design:
        RA  O11  O12  T  O13
        RA  O21  O22  C  O23

    (T = intervention or program, C = control, O = observation, RA = random allocation)

  • Failure of Random Assignment

    1. Ethical and practical reasons
    2. Sample size too small
       • People with particular characteristics appear in the treatment group but not in the control group merely by chance.

    • Fraley and Vazire (2014): 6 major social-personality psychology journals, 2006 to 2010 – 104
    • Sassenberg & Ditrich (2019): top social psychology journals, 2018 – 195

  • How to Mitigate Chance Differences?

    1. Re-randomize
    2. Use de-confounding techniques
    3. Matching
    4. Stratified analysis
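Matching (item 3) can be sketched as greedy nearest-neighbour pairing on a single covariate such as a propensity score. This is an illustration of the general idea, not the slides' specific method; the scores are invented.

```python
def nearest_neighbor_match(treated, controls):
    """Greedy 1:1 matching on one covariate (e.g., a propensity score):
    each treated unit is paired with the closest still-unused control."""
    available = list(controls)
    pairs = []
    for t in treated:
        best = min(available, key=lambda c: abs(c - t))
        available.remove(best)       # each control is used at most once
        pairs.append((t, best))
    return pairs

pairs = nearest_neighbor_match([0.30, 0.70], [0.10, 0.32, 0.68, 0.90])
print(pairs)  # [(0.3, 0.32), (0.7, 0.68)]
```

Greedy matching is order-dependent; optimal matching or calipers (a maximum allowed distance) are common refinements when chance imbalance is severe.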

  • Benefits of Using Quasi-Experiments (Grant & Wall, 2009)

    1. Strengthening causal inferences when random assignment and controlled manipulation are not feasible
    2. Building better theories of time and temporal progression
    3. Minimizing or avoiding ethical dilemmas of harm, inequity, paternalism, and deception
    4. Facilitating collaboration with practitioners
    5. Using context to explain conflicting findings

    Grant, A. M., & Wall, T. D. (2009). The neglected science and art of quasi-experimentation: Why-to, when-to, and how-to advice for organizational researchers. Organizational Research Methods, 12(4), 653-686.

  • References – Quasi-Experiment (Journal of Clinical Epidemiology [JCE], 2017 – 13 Papers)

    1. Barnighausen, T., Rottingen, J.-A., Rockers, P., Shemilt, I., & Tugwell, P. (2017). Quasi-experimental study designs series – paper 1: Introduction: two historical lineages. JCE, 89, 4-11.

    2. Geldsetzer, P., & Fawzi, W. (2017). Quasi-experimental study designs series – paper 2: Complementary approaches to advancing global health knowledge. JCE, 89, 12-16.

    3. Frenk, J., & Gomez-Dantes, O. (2017). Quasi-experimental study designs series – paper 3: Systematic generation of evidence through public policy evaluation. JCE, 89, 17-20.

    4. Barnighausen, T., Tugwell, P., Rottingen, J.-A., Shemilt, I., Rockers, P., et al. (2017). Quasi-experimental study designs series – paper 4: Uses and value. JCE, 89, 21-29.

    5. Reeves, B. C., Wells, G. A., & Waddington, H. (2017). Quasi-experimental study designs series – paper 5: A checklist for classifying studies evaluating the effects on health interventions – a taxonomy without labels. JCE, 89, 30-42.

    6. Waddington, H., Aloe, A. M., Becker, B. J., Djimeu, E. W., Hombrados, J. G., Tugwell, P., Wells, G., & Reeves, B. (2017). Quasi-experimental study designs series – paper 6: Risk of bias assessment. JCE, 89, 43-52.

    7. Barnighausen, T., Oldenburg, C., Tugwell, P., Bommer, C., et al. (2017). Quasi-experimental study designs series – paper 7: Assessing the assumptions. JCE, 89, 53-66.

    8. Glanville, J., Eyers, J., Jones, A. M., Shemilt, I., Wang, G., Johansen, M., Fiander, M., & Rothstein, H. (2017). Quasi-experimental study designs series – paper 8: Identifying quasi-experimental studies to inform systematic reviews. JCE, 89, 67-76.

    9. Aloe, A. M., Becker, B. J., Duvendack, M., Valentine, J. C., Shemilt, I., & Waddington, H. (2017). Quasi-experimental study designs series – paper 9: Collecting data from quasi-experimental studies. JCE, 89, 77-83.

    10. Becker, B. J., Aloe, A. M., Duvendack, M., Stanley, T. D., Valentine, J. C., Fretheim, A., & Tugwell, P. (2017). Quasi-experimental study designs series – paper 10: Synthesizing evidence for effects collected from quasi-experimental studies presents surmountable challenges. JCE, 89, 84-91.

    11. Lavis, J. N., Barnighausen, T., & El-Jardali, F. (2017). Quasi-experimental study designs series – paper 11: Supporting the production and use of health systems research syntheses that draw on quasi-experimental study designs. JCE, 89, 92-97.

    12. Rockers, P. C., Tugwell, P., Grimshaw, J., Oliver, S., Atun, R., et al. (2017). Quasi-experimental study designs series – paper 12: Strengthening global capacity for evidence synthesis of quasi-experimental health systems research. JCE, 89, 98-105.

    13. Rockers, P. C., Tugwell, P., Rottingen, J.-A., & Barnighausen, T. (2017). Quasi-experimental study designs series – paper 13: Realizing the full potential of quasi-experiments for health research. JCE, 89, 106-110.

  • HAVE A MINUTE? HELP US IMPROVE
    bit.ly/RU2020_Sem1

    TAN Teck Kiang (Dr) – [email protected]
    NG Magdeline (Dr) – [email protected]
