Slides to accompany Weathington, Cunningham & Pittenger (2010), Chapter 11: Between-Subjects Designs


TRANSCRIPT

Page 1

Slides to accompany Weathington, Cunningham & Pittenger (2010),

Chapter 11: Between-Subjects Designs

Page 2

Objectives

• t-test for independent groups

• Hypothesis testing

• Interpreting t and p

• Statistical power

Page 3

t-test for Independent Groups

• Basic inferential statistic

• Ratio of two measures of variability:

t = (difference between the two group means) / (standard error of the difference between the group means)

• Allows us to consider effect, relative to error

Page 4

Standard Error of the Difference between Means
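The formula itself is presumably shown as an image on this slide. As a minimal sketch (not the book's notation), here is the pooled-variance form commonly used with independent-groups t-tests, with made-up scores, computing both the standard error of the difference and the resulting t-ratio; NumPy is assumed to be available:

```python
import numpy as np

# Made-up scores for two independent groups (illustration only).
group1 = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
group2 = np.array([10.0, 9.0, 12.0, 8.0, 11.0])
n1, n2 = len(group1), len(group2)

# Pooled variance: weighted average of the two sample variances (ddof=1).
sp2 = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)

# Standard error of the difference between the two group means.
se_diff = np.sqrt(sp2 * (1 / n1 + 1 / n2))

# t-ratio: difference between means divided by its standard error.
t = (group1.mean() - group2.mean()) / se_diff
print(f"SE of difference = {se_diff:.3f}, t = {t:.3f}")
```

With SciPy installed, stats.ttest_ind(group1, group2) should reproduce the same t, since its default test also pools the two variances.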

Page 5

t-test

• Larger |t-ratio| = greater difference between means

• Based on this we can decide whether to reject Ho

– Usually Ho: µ1 = µ2

• Sampling error may account for some difference, but when t is “large” enough…

Page 6

Hypothesis Testing: t-tests

• Based on estimates of probability

• When α = .05, there is a 5% chance of rejecting Ho when we should not (Type I error)

– See Figure 11.2 (each tail = 2.5%)

– Region of rejection

• If t falls within the shaded ranges, we reject Ho because probability is so low
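A quick sketch of finding the two-tailed region of rejection with software rather than a table; it assumes SciPy is available, and the df and observed t are invented for illustration:

```python
from scipy import stats

alpha = 0.05        # two-tailed significance level
df = 18             # illustrative df (n1 + n2 - 2 for an independent-groups t-test)

# Each tail holds alpha/2 = 2.5% of the t distribution (cf. Figure 11.2).
t_critical = stats.t.ppf(1 - alpha / 2, df)

t_observed = 2.40   # made-up observed t-ratio
print(f"t_critical = ±{t_critical:.3f}")           # about ±2.101 for df = 18
print("reject Ho:", abs(t_observed) > t_critical)
```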

Page 7

Figure 11.2

Page 8

Hypothesis Testing Steps

1. State Ho and H1

– Before collecting or examining the data

2. Identify appropriate statistical test(s)

– Based on hypotheses

– Often multiple approaches are possible

– Depends on how well data meet the assumptions of specific statistical tests

Page 9

Hypothesis Testing Steps

3. Set the significance level (α)

– α = p(Type I error)

• Risk of false alarm

• You control

– β = p(Type II error)

• Risk of a miss

• Careful: making α too small (“overcontrolling”) increases β

Page 10

Hypothesis Testing Steps

4. Determine the significance of the t-ratio

– Use the appropriate table in Appendix B, the df for the test, and your selected alpha (α) level to determine t-critical

– If your observed |t-ratio| > t-critical, reject Ho

– If your observed p-level is less than α you can also reject Ho
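A minimal sketch of this decision rule in code, assuming SciPy; the group scores are invented, and both comparisons (critical value and p-value) reach the same decision:

```python
import numpy as np
from scipy import stats

alpha = 0.05
group1 = np.array([12.0, 15.0, 11.0, 14.0, 13.0])   # made-up data
group2 = np.array([10.0, 9.0, 12.0, 8.0, 11.0])

# Equal-variance (pooled) independent-groups t-test.
t_obs, p_value = stats.ttest_ind(group1, group2)

df = len(group1) + len(group2) - 2
t_critical = stats.t.ppf(1 - alpha / 2, df)

print("reject Ho via critical value:", abs(t_obs) > t_critical)
print("reject Ho via p-value:       ", p_value < alpha)
```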

Page 11

Hypothesis Testing Steps

5. Interpreting t-ratio

– Is it statistically significant?

– Is it practically/clinically significant?

• Does the effect size matter, really?

• Book mentions d-statistic

Page 12

Hypothesis Testing Steps

5. Interpreting t-ratio

– Magnitude of the effect

• Degree of variance accounted for by the IV

• Omega squared = % of variance in the DV accounted for by the IV (see the sketch below)

– Is there cause and effect?

• Typically requires manipulated IV, randomized assignment, and careful pre- / post- design
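The d-statistic and omega squared mentioned above can be computed directly from the data. A sketch assuming the commonly used formulas d = (M1 − M2) / pooled SD and ω² = (t² − 1) / (t² + n1 + n2 − 1), again with made-up scores and SciPy/NumPy:

```python
import numpy as np
from scipy import stats

group1 = np.array([12.0, 15.0, 11.0, 14.0, 13.0])   # made-up data
group2 = np.array([10.0, 9.0, 12.0, 8.0, 11.0])
n1, n2 = len(group1), len(group2)

t_obs, _ = stats.ttest_ind(group1, group2)

# d-statistic: mean difference in pooled-standard-deviation units.
sp = np.sqrt(((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1))
             / (n1 + n2 - 2))
d = (group1.mean() - group2.mean()) / sp

# Omega squared: estimated proportion of DV variance accounted for by the IV.
omega_sq = (t_obs**2 - 1) / (t_obs**2 + n1 + n2 - 1)

print(f"d = {d:.2f}, omega squared = {omega_sq:.2f}")
```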

Page 13

Correct Interpretation of t and p

• If you have a significant t-ratio:

= statistically significant difference between two groups

= IV affects DV

= probability of a Type I error is α

Page 14

Errors in p Interpretation

• Changing α after analyzing the data

– Unethical

– We cannot use p to alter α

• Kills your chances of limiting Type I error risk

• p only estimates the probability of obtaining results at least as extreme as yours if the null hypothesis is true, and it is computed from sample statistics, which is not the case for α (set before the data are collected)

Page 15

Errors in p Interpretation

• Stating that p = odds-against chance

– p = .05 does not mean that the probability of results due to chance was 5% or less

– p is not the probability of committing a Type I error

– Recommended interpretation:

• If p is small enough, I reject the null hypothesis in favor of the alternative hypothesis.

Page 16

Errors in p Interpretation

• Assuming p = probability that H1 is true (i.e., that the results are “valid”)

– p does not confirm the validity of H1

– Smaller p values do not indicate a more important relationship between IV and DV

• Effect size estimates are required for this

Page 17

Errors in p Interpretation

• Assuming p = probability of replicating results

– The probability of replicating the result (rejecting Ho again) is not given by the obtained p-value

• A new statistic, p-rep, is getting some attention for this purpose (see Killeen, 2005)

Page 18

Statistical Tests & Power

• β = p(Type II error) or p(miss)

• 1 – β = p(correctly rejecting a false Ho) = power

• Four main factors influence statistical power
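Those four factors are walked through on the next slides. As a rough sketch of how power can be computed rather than read from a table (this assumes the statsmodels package; the effect size and group size are illustrative):

```python
from statsmodels.stats.power import TTestIndPower

# Power (1 - beta) of an independent-groups t-test for a medium effect
# (d = 0.5) with 30 participants per group at a two-tailed alpha of .05.
power = TTestIndPower().power(effect_size=0.5, nobs1=30, alpha=0.05,
                              ratio=1.0, alternative='two-sided')
print(f"power = {power:.2f}")   # roughly .47 under these assumptions
```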

Page 19

Power: Difference between µ

• Power increases when the difference between the µs of the two populations is greater

Page 20

Power: Sample Size

• Issue of how well a statistic estimates the population parameter (Fig. 10.5)

• Larger N → smaller SEM

• As SEM decreases, the overlap of the sampling distributions for the two populations decreases → power increases (illustrated below)

• Don’t forget about cost
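A tiny illustration of the N–SEM relationship noted above, using an arbitrary sample standard deviation and NumPy:

```python
import numpy as np

s = 10.0                        # arbitrary sample standard deviation
for n in (10, 40, 160, 640):
    sem = s / np.sqrt(n)        # standard error of the mean shrinks as N grows
    print(f"N = {n:4d}  SEM = {sem:.2f}")
```

Quadrupling N only halves the SEM, which is where the cost concern comes in.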

Page 21: Objectives

Power: Variability in Data

• Lots of variability → greater variance in the sampling distribution and greater overlap of the two distributions

• Reducing variability reduces SEM → overlap decreases → power goes up

• Techniques: Use homogeneous samples, reliable measurements

Page 22

Power: α

• Smaller α → lower Type I error probability → lower power

• As p(Type I) decreases, p(Type II) increases (see Figure 11.6)

• As α increases, power increases

– Enlarges the region of rejection
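A short sketch of this α–power trade-off, holding effect size and sample size fixed (statsmodels again; all numbers are illustrative):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.01, 0.05, 0.10):
    p = analysis.power(effect_size=0.5, nobs1=30, alpha=alpha,
                       alternative='two-sided')
    print(f"alpha = {alpha:.2f}  power = {p:.2f}")   # power rises with alpha
```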

Page 23

Page 24

Estimating Sample Size

• Based on power

• Tables in Appendix B can give you sample-size estimates for t-ratios

– Effect size appears as a sub-heading

• Cost / feasibility considerations

• Remember that sample size is not the only influence on statistical power
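As a software counterpart to the Appendix B tables, a sketch of solving for the per-group N needed to reach a chosen power level (assumes statsmodels; the effect size, power, and α are illustrative):

```python
import math
from statsmodels.stats.power import TTestIndPower

# Per-group N needed for 80% power to detect a medium effect (d = 0.5)
# with a two-tailed alpha of .05.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80,
                                          alpha=0.05, ratio=1.0,
                                          alternative='two-sided')
print(math.ceil(n_per_group))   # about 64 per group
```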

Page 25

What is Next?

• Instructor to provide details