Bivariate Linear Regression
Linear Function
• Y = a + bX + e
Sources of Error
• Error in the measurement of X and/or Y, or in the manipulation of X.
• The influence upon Y of variables other than X (extraneous variables), including variables that interact with X.
• Any nonlinear influence of X upon Y.
The Regression Line
• When r² < 1, predicted Y regresses towards mean Y.
• Least Squares Criterion: choose a and b to minimize Σ(Y − Ŷ)²
• Ŷ = a + bX
• In standardized form: Ẑy = r·Zx
Our Beer and Burger Data

Subject    X      Y      Ŷ      (Y − Ŷ)   (Y − Ŷ)²   (Ŷ − Ȳ)²
1          5      8      9.2     -1.2       1.44      10.24
2          4     10      7.6      2.4       5.76       2.56
3          3      4      6.0     -2.0       4.00       0.00
4          2      6      4.4      1.6       2.56       2.56
5          1      2      2.8     -0.8       0.64      10.24
Sum       15     30     30.0      0.0      14.40      25.60
Mean       3      6
St. Dev.   1.581  3.162
• b = SSCP / SSx = COVxy / s²x = r(sy / sx) = .8(3.162 / 1.581) = 1.6
• a = Ȳ − bX̄ = 6 − 1.6(3) = 1.2
• SSy = (n − 1)s²y = 4(3.162)² = 40
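A minimal Python sketch of these computations, using the five beer/burger cases from the table above (the language and variable names are my own, not part of the slides):

```python
# Slope and intercept for the burger (X) and beer (Y) data, computed by hand.
X = [5, 4, 3, 2, 1]          # burgers
Y = [8, 10, 4, 6, 2]         # beers
n = len(X)

mean_x = sum(X) / n                                            # 3
mean_y = sum(Y) / n                                            # 6
ss_x = sum((x - mean_x) ** 2 for x in X)                       # SSx = 10
ss_y = sum((y - mean_y) ** 2 for y in Y)                       # SSy = 40
sscp = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y))  # SSCP = 16

b = sscp / ss_x              # 16 / 10 = 1.6
a = mean_y - b * mean_x      # 6 - 1.6(3) = 1.2
print(round(b, 2), round(a, 2), ss_y)   # 1.6 1.2 40.0
```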
Pearson r is a Standardized Slope
• Pearson r is the number of standard deviations that predicted Y changes for each one standard deviation change in X.
• b = r(sy / sx); with standardized scores (sy = sx = 1), the slope is simply r.
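The same slope obtained from r and the two standard deviations, as a quick sketch (values taken from the table above):

```python
# The slope is r times the ratio of the standard deviations.
r, s_y, s_x = 0.8, 3.162, 1.581      # from the beer/burger table
b = r * s_y / s_x                    # .8 * 2.0 = 1.6
print(round(b, 2))                   # 1.6
```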
Error Variance
• What is SSE if r = 0? Then Ŷ = Ȳ for every case, so SSE = Σ(Y − Ȳ)² = SSy.
• If r² > 0, we can do better than just predicting that each Yi is mean Y:
• SSE = Σ(Y − Ŷ)² = (1 − r²)SSy = (1 − .64)(40) = 14.4
• MSE = SSE / (n − 2) = 14.4 / 3 = 4.8
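A quick numeric check of SSE and MSE for these data (a sketch, not part of the slides):

```python
# Error sum of squares and mean square error for the n = 5 data.
r2, ss_y, n = 0.64, 40.0, 5
sse = (1 - r2) * ss_y                # (1 - .64)(40) = 14.4
mse = sse / (n - 2)                  # 14.4 / 3 = 4.8
print(round(sse, 1), round(mse, 1))  # 14.4 4.8
# If r were 0, every prediction would be mean Y and SSE would equal SSy (40).
```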
Standard Error of Estimate
• Get back to the original units of measurement.
• s(Y·Ŷ) = √MSE = sy·√[(1 − r²)(n − 1) / (n − 2)]
• s(Y·Ŷ) = √4.8 = 2.191
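The standard error of estimate is just the square root of MSE (sketch):

```python
import math

mse = 4.8
s_est = math.sqrt(mse)       # standard error of estimate, in beers
print(round(s_est, 3))       # 2.191
```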
Regression Variance
• Variance in Y “due to” X
• p is the number of predictors; p = 1 for bivariate regression.
• SSregr = Σ(Ŷ − Ȳ)² = SSy − SSE = r²·SSy = 40 − 14.4 = 25.6
• MSregr = SSregr / p
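The regression sum of squares, computed two equivalent ways (sketch):

```python
# Regression sum of squares for the n = 5 data.
ss_y, sse, r2, p = 40.0, 14.4, 0.64, 1
ss_regr = ss_y - sse                 # 40 - 14.4 = 25.6
ss_regr_alt = r2 * ss_y              # .64(40) = 25.6
ms_regr = ss_regr / p                # 25.6
print(ss_regr, ss_regr_alt, ms_regr) # 25.6 25.6 25.6
```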
Coefficient of Determination
• The proportion of variance in Y explained by the linear model.
• r² = SSregr / SSy
Coefficient of Alienation
• The proportion of variance in Y that is not explained by the linear model.
• 1 − r² = SSerror / SSy
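Both proportions follow directly from the sums of squares (sketch):

```python
ss_regr, ss_error, ss_y = 25.6, 14.4, 40.0
r2 = ss_regr / ss_y                        # coefficient of determination, .64
alienation = ss_error / ss_y               # coefficient of alienation, .36
print(round(r2, 2), round(alienation, 2))  # 0.64 0.36
```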
Testing Hypotheses
• H₀: b = 0
• F = t²
• One-tailed p from F = two-tailed p from t
• t = √(MSregr / MSE), df = n − 2
• F = MSregr / MSE, df = p, n − p − 1
Source Table
• MStotal is nothing more than the sample variance of the dependent variable Y. It is usually omitted from the table.

Source       SS     df   MS     F      p
Regression   25.6    1   25.6   5.33   .104
Error        14.4    3    4.8
Total        40.0    4   10.0
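A sketch that rebuilds this table's F and p values with scipy (scipy is my choice of tool here, not something the slides use):

```python
from scipy import stats

ss_regr, ss_error = 25.6, 14.4
df_regr, df_error = 1, 3

ms_regr = ss_regr / df_regr              # 25.6
mse = ss_error / df_error                # 4.8
F = ms_regr / mse                        # 5.33
p = stats.f.sf(F, df_regr, df_error)     # upper-tail p, about .104
t = F ** 0.5                             # 2.31, since F = t**2
print(round(F, 2), round(p, 3), round(t, 2))
```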
Power Analysis
• Using Steiger & Fouladi’s R2.exe
Power = 13%
G*Power = 15%
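A rough sketch of the same power computation using scipy's noncentral F distribution; the noncentrality convention here (λ = f²·N, as in G*Power) is an assumption, so the result should land near the 13–15% reported above but is an approximation rather than a reproduction of R2.exe:

```python
from scipy import stats

n, p, rho = 5, 1, 0.5                    # N = 5, one predictor, large effect rho = .5
f2 = rho**2 / (1 - rho**2)               # Cohen's f-squared = 1/3
ncp = f2 * n                             # noncentrality parameter, about 1.67
df1, df2 = p, n - p - 1                  # 1 and 3
f_crit = stats.f.ppf(0.95, df1, df2)     # critical F at alpha = .05
power = stats.ncf.sf(f_crit, df1, df2, ncp)
print(round(power, 2))                   # roughly .13 to .15
```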
Summary Statement
The linear regression between my friends' burger consumption and their beer consumption fell short of statistical significance, r = .8, predicted beers = 1.2 + 1.6(burgers), F(1, 3) = 5.33, p = .10. The most likely explanation of the nonsignificant result is that it represents a Type II error. Given our small sample size (N = 5), power was only 13% even for a large effect (ρ = .5).
The Regression Line is Similar to a Mean
Subject    X      Y      Ŷ      (Y − Ŷ)   (Y − Ŷ)²   (Ŷ − Ȳ)²
1          5      8      9.2     -1.2       1.44      10.24
2          4     10      7.6      2.4       5.76       2.56
3          3      4      6.0     -2.0       4.00       0.00
4          2      6      4.4      1.6       2.56       2.56
5          1      2      2.8     -0.8       0.64      10.24
Sum       15     30     30.0      0.0      14.40      25.60
Mean       3      6
St. Dev.   1.581  3.162

• Σ(Y − Ŷ) = 0
• Σ(Y − Ŷ)² = SSerror
• Σ(Ŷ − Ȳ)² = SSregression
Increase n to 10
• Same value of F
• r² = SSregression / SStotal = 25.6 / 64.0 = .4 (down from .64)

Source       SS     df   MS     F      p
Regression   25.6    1   25.6   5.33   .05
Error        38.4    8    4.8
Total        64.0    9    7.1
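Recomputing r², F, and p from this table (a sketch):

```python
from scipy import stats

ss_regr, ss_error = 25.6, 38.4
ss_total = ss_regr + ss_error            # 64.0
df_regr, df_error = 1, 8                 # p = 1 predictor, n = 10

r2 = ss_regr / ss_total                  # .4, down from .64
F = (ss_regr / df_regr) / (ss_error / df_error)   # 25.6 / 4.8 = 5.33, unchanged
p = stats.f.sf(F, df_regr, df_error)     # about .05
print(round(r2, 2), round(F, 2), round(p, 3))
```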
Power Analysis
Power = 33%
G*Power = 36%
Increase n to 10
A .05 criterion of statistical significance was employed for all tests. An a priori power analysis indicated that my sample size (N = 10) would yield power of only 33% even for a large effect (ρ = .5). Despite this low power, the analysis yielded a statistically significant result. Among my friends, beer consumption increased significantly with burger consumption, r = .632, predicted beers = 1.2 + 1.6(burgers), F(1, 8) = 5.33, p = .05.
Testing Directional Hypotheses
• H₀: b ≤ 0, H₁: b > 0
• For F, one-tailed p = .05
• Half-tailed p = .025
• P(A ∩ B) = P(A)·P(B) = .5(.05) = .025
• t = r√(n − 2) / √(1 − r²) = .632√8 / √(1 − .4) = 2.31, one-tailed p = .025
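A sketch of this directional test in Python (scipy assumed):

```python
import math
from scipy import stats

r, n = 0.632, 10
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)    # about 2.31
p_one_tailed = stats.t.sf(t, df=n - 2)            # about .025
print(round(t, 2), round(p_one_tailed, 3))
```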
Assumptions
• To test H₀: b = 0 or to construct a CI:
• Homoscedasticity across Y|X
• Normality of Y|X
• Normality of Y ignoring X
• No assumptions about X
• No assumptions for descriptive statistics (not using t or F)
Placing Confidence Limits on Predicted Values of Mean Y|X
• To predict the mean value of Y for all subjects who have some particular score on X:
• CI = Ŷ ± t · s(Y·Ŷ) · √(1/n + (X − X̄)² / SSx)
Placing Confidence Limits on Predicted Values of Individual Y|X
• CI = Ŷ ± t · s(Y·Ŷ) · √(1 + 1/n + (X − X̄)² / SSx)
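A sketch of both interval computations for the n = 5 data; the predictor value X = 4 is an illustrative choice of mine, not from the slides. Note that the half-width grows as X moves away from X̄, which is what makes the bands on the next slide look bowed.

```python
import math
from scipy import stats

n, mean_x, ss_x = 5, 3.0, 10.0
a, b = 1.2, 1.6
s_est = math.sqrt(4.8)                   # standard error of estimate, 2.191
t_crit = stats.t.ppf(0.975, df=n - 2)    # 95% limits, df = 3

x = 4                                    # illustrative burger count
y_hat = a + b * x                        # predicted beers = 7.6

# Mean Y|X:       y_hat +/- t * s * sqrt(1/n + (x - mean_x)^2 / SSx)
half_mean = t_crit * s_est * math.sqrt(1/n + (x - mean_x)**2 / ss_x)
# Individual Y|X: same, but with an extra 1 under the radical
half_indiv = t_crit * s_est * math.sqrt(1 + 1/n + (x - mean_x)**2 / ss_x)

print(y_hat, (round(y_hat - half_mean, 2), round(y_hat + half_mean, 2)))
print(y_hat, (round(y_hat - half_indiv, 2), round(y_hat + half_indiv, 2)))
```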
Bowed Confidence Intervals
Testing Other Hypotheses
• Is the correlation between X and Y the same in one population as in another?
• Is the slope for predicting Y from X the same in one population as in another?
• Is the intercept for predicting Y from X the same in one population as in another?
• Two populations can differ on r but not slope, or on slope but not r.
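The slides do not show the machinery for these comparisons, but one common approach to the first question (comparing two independent correlations) is Fisher's r-to-z test; here is a sketch with made-up sample values:

```python
import math
from scipy import stats

def fisher_z(r):
    """Fisher's r-to-z transformation."""
    return 0.5 * math.log((1 + r) / (1 - r))

# Hypothetical sample correlations and sizes from two independent populations.
r1, n1 = 0.80, 30
r2, n2 = 0.40, 40

z = (fisher_z(r1) - fisher_z(r2)) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
p = 2 * stats.norm.sf(abs(z))            # two-tailed p
print(round(z, 2), round(p, 3))
```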