MADE: WHAT IF SOME OLS ASSUMPTIONS ARE NOT FULFILLED?


TRANSCRIPT

Page 1

WHAT IF SOME OLS ASSUMPTIONS ARE NOT FULFILLED?

Page 2

QUIZ ONCE AGAIN

• What are the main OLS assumptions?
1. On average right
2. Linear
3. Predicting variables and error term uncorrelated
4. No serial correlation in error term
5. Homoscedasticity
+ Normality of error term

Page 3

QUIZ ONCE AGAIN

• Do we know the error term?
• Do we know the coefficients?
• How can we know whether all the assumptions are fulfilled?
1. On average right => ???
2. Linearity => ???
3. X and ε uncorrelated => ???
4. ε serially uncorrelated => ???
5. ε homoscedastic => ???

Page 4

FUNCTIONAL FORM

• We assumed a certain functional form.

• What if in reality the relation exists, but has a different form?

• Any function can be approximated by a Taylor expansion. Consequently, the model can be augmented with higher-order terms.

• So what we need to test is that the γ's are all zero in the augmented equation (see the sketch below).
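A plausible form of the augmented equation (the single regressor x and the cutoff m are my assumptions, chosen for readability):

\[
y_i = \beta_0 + \beta_1 x_i + \gamma_2 x_i^2 + \gamma_3 x_i^3 + \dots + \gamma_m x_i^m + \varepsilon_i ,
\qquad H_0:\ \gamma_2 = \gamma_3 = \dots = \gamma_m = 0 .
\]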

Page 5

FUNCTIONAL FORM – cont.

• This is easy, because we already know how to do it.

• But there could be a lot of these terms (time consuming).

• Ramsey came up with the idea that if the γ's are all zero, then nonlinear functions of the fitted y's should add no explanatory power to the model.

• So instead of testing whether the γ's are all zero, we test whether the coefficients on powers of the fitted y's differ from zero.

• RESET TEST (a minimal sketch follows this list):
– Run your model.
– Find the fitted y's.
– Run a model with your Xβ and the powers of the fitted y's.
– Test the hypothesis that the coefficients on the powers of the fitted y's are zero.
– If you reject the null (i.e. that they are all zero), you have a functional misspecification problem => you cannot say whether your b's are correct estimates of the β's.
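A minimal sketch of the RESET procedure in Python with statsmodels; the DataFrame df and the column names y, x1, x2 are hypothetical, not from the lecture.

    import statsmodels.api as sm

    X = sm.add_constant(df[["x1", "x2"]])
    base = sm.OLS(df["y"], X).fit()      # 1. run your model
    yhat = base.fittedvalues             # 2. find the fitted y's

    X_aug = X.copy()                     # 3. add powers of the fitted y's
    X_aug["yhat2"] = yhat ** 2
    X_aug["yhat3"] = yhat ** 3
    aug = sm.OLS(df["y"], X_aug).fit()

    # 4. test that the coefficients on the powers of the fitted y's are zero
    print(aug.f_test("yhat2 = 0, yhat3 = 0"))   # small p-value => misspecification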

Page 6

NORMALITY OF THE ERROR

• We typically assume that the ε's have a normal distribution N(0, σ²).

• This helps us to derive the t and F distributions.

• Although we never know the ε's, we get the e's, which should be consistent with the ε's (so the same distribution).

• We can test whether the distribution of the e's is far from normal.

• Jarque-Bera (a minimal sketch follows this list):
– Check the skewness and kurtosis.
– Compare them to the values for the normal distribution.
– The null says that they are alike; if you reject the null, you reject normality of the residuals.

• Does that hurt?
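A minimal sketch of a Jarque-Bera check on the residuals, continuing the hypothetical df and fitted model `base` from the RESET sketch above:

    from statsmodels.stats.stattools import jarque_bera

    jb_stat, jb_pvalue, skew, kurt = jarque_bera(base.resid)
    print(skew, kurt)          # compare with 0 and 3, the normal-distribution values
    print(jb_stat, jb_pvalue)  # small p-value => reject normality of the residuals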

Page 7

STABILITY OF THE PARAMETERS

• We typically assume that the β's do not depend on the size of the X's (in other words, the relation between the X's and y's is stable).

• However, we can actually have several subsamples in which the relation differs.

• Then what? Look at your "dots" (the scatter plot), see if such a thing occurs, and run a Chow estimation (just as in LAB); a minimal sketch follows.
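A minimal sketch of a Chow test for a single known break point (an illustration, not necessarily the exact LAB procedure; df, X and the break point are hypothetical, reusing the names from the RESET sketch):

    import scipy.stats as st
    import statsmodels.api as sm

    def chow_test(y, X, split):
        """F-test that the coefficients are equal before and after observation `split`."""
        k = X.shape[1]
        rss_pooled = sm.OLS(y, X).fit().ssr
        rss_1 = sm.OLS(y[:split], X[:split]).fit().ssr
        rss_2 = sm.OLS(y[split:], X[split:]).fit().ssr
        dof2 = len(y) - 2 * k
        f = ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / dof2)
        return f, st.f.sf(f, k, dof2)

    f_stat, p_value = chow_test(df["y"].to_numpy(), X.to_numpy(), split=50)
    print(f_stat, p_value)   # small p-value => the coefficients are not stable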

Page 8

NO AUTOCORRELATION (AT LEAST NOT SERIAL)

• We also assume that subsequent error terms are independent of each other.

• Assume that this does not hold, so that the errors are serially correlated (an AR(1) form is sketched below).

• What then?
– Our estimators are still unbiased.
– We can also show they are consistent.
– But the problem occurs with the estimators of the variance of the estimators (which we need to test the significance hypotheses).
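One common way to write the violated assumption is an AR(1) error process (the exact equation used on the slide is an assumption here), together with the usual unbiasedness argument:

\[
\varepsilon_t = \rho\,\varepsilon_{t-1} + u_t, \qquad |\rho| < 1, \quad u_t \ \text{white noise},
\]
\[
E(b) = \beta + (X'X)^{-1}X'E(\varepsilon) = \beta .
\]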

Page 9

NO AUTOCORRELATION (AT LEAST NOT SERIAL)

• Consequently, our estimators of the standard errors are incorrect.

• We cannot trust our t-statistics any more!

KEEP THAT IN MIND,

WE’LL COME BACK TO IT IN A SECOND

Page 10

HOW DO WE GET AUTOCORRELATION?

• What we need in the error term is white noise

Page 11

HOW DO WE GET AUTOCORRELATION?

• Positive autocorrelation (rare changes of signs)

Page 12

HOW DO WE GET AUTOCORRELATION?

• Negative autocorrelation (frequent changes of signs)

Page 13

HOW DO WE GET AUTOCORRELATION?

• Model misspecification can give it to you for free

Page 14

NO HETEROSCEDASTICITY

• We also assume that the size of the error terms does not depend on the size of the X's.

• Assume that this does not hold, so that the error variance differs across observations (sketched below).

• What then?
– Our estimators are still unbiased.
– We can also show they are consistent.
– But the problem occurs with the estimators of the variance of the estimators (which we need to test the significance hypotheses).
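A standard way to write the heteroscedastic case (the exact notation on the slide is an assumption here):

\[
\operatorname{Var}(\varepsilon_i) = \sigma_i^2 , \qquad \sigma_i^2 \ \text{not constant across } i
\quad (\text{e.g. } \sigma_i^2 = \sigma^2 f(z_i)) .
\]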

Page 15

HOW DO WE GET HETEROSCEDASTICITY?

• What we need is error terms whose size is independent of the SIZE of the X's.

Page 16

HOW DOES THE V MATRIX LOOK?

• We know that under the classical assumptions the error covariance matrix is Var(ε) = σ²I; under the violations it becomes Var(ε) = σ²V (a sketch of V follows).

• This holds both for heteroscedasticity and for autocorrelation.

• But aren't there any differences?
– Heteroscedasticity is about the diagonal (the values along the diagonal differ, whereas they should all be the same).
– Autocorrelation is about what happens outside the diagonal (the off-diagonal elements should be zero, but they deviate from that).
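A sketch of the two cases, using the Var(ε) = σ²V notation; the specific matrices below are standard textbook forms, not necessarily the ones shown on the slide:

\[
\text{heteroscedasticity: } \operatorname{Var}(\varepsilon) =
\begin{pmatrix}
\sigma_1^2 & 0 & \cdots & 0\\
0 & \sigma_2^2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \sigma_n^2
\end{pmatrix},
\qquad
\text{autocorrelation: } \operatorname{Var}(\varepsilon) = \sigma^2
\begin{pmatrix}
1 & \rho_1 & \cdots & \rho_{n-1}\\
\rho_1 & 1 & \cdots & \rho_{n-2}\\
\vdots & \vdots & \ddots & \vdots\\
\rho_{n-1} & \rho_{n-2} & \cdots & 1
\end{pmatrix}.
\]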

Page 17

Testing for hetero

• Breusch-Pagan approach (a minimal sketch follows this list):
– The alternative hypothesis assumes that σ²_i = σ²·f(z_i), where f(.) is continuous.
– Run your model yi = xiβ + εi.
– Run the regression of the squared residuals (e²) on any set of variables (x, y, whatever).
– Use n·R² from this regression (it has a χ² distribution with p dof, where p is the no. of variables in the auxiliary regression).
– Test:
• H0: no heteroscedasticity of this form (does not say NO heteroscedasticity in general!)
• H1: heteroscedasticity of the assumed form
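A minimal sketch using statsmodels' built-in Breusch-Pagan test, with the original regressors as the z variables (hypothetical names, continuing the earlier sketches):

    from statsmodels.stats.diagnostic import het_breuschpagan

    lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(base.resid, X)
    print(lm_stat, lm_pvalue)   # small p-value => reject H0 => heteroscedasticity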

Page 18

Testing for hetero

• White approach (a minimal sketch follows this list):
– Heteroscedasticity occurs because some interrelations between the x's are not accounted for.
– Run your model yi = xiβ + εi.
– Run the regression of the squared residuals (e²) on the x's, their squares and all their cross-products (interactions).
– Use n·R² from this regression (it has a χ² distribution with dof equal to the number of terms in the auxiliary regression, excluding the constant).
– Test:
• H0: no heteroscedasticity of this form (does not say NO heteroscedasticity in general! this form is rather general, though)
• H1: heteroscedasticity of the assumed form
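A minimal sketch using statsmodels' built-in White test (hypothetical names, as above):

    from statsmodels.stats.diagnostic import het_white

    lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(base.resid, X)
    print(lm_stat, lm_pvalue)   # small p-value => reject H0 => heteroscedasticity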

Page 19

Testing for auto

• Durbin-Watson approach:
– The alternative hypothesis states that there is autocorrelation of order 1 (the two closest ε's are correlated).
– Run your model yi = xiβ + εi.
– Get your residuals e.
– Compute the statistic on these residuals (see the formula sketched below).
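The standard Durbin-Watson statistic on the residuals e_t (assumed here to be the formula the slide showed):

\[
DW = \frac{\sum_{t=2}^{T}\left(e_t - e_{t-1}\right)^2}{\sum_{t=1}^{T} e_t^2}
\;\approx\; 2\,(1 - \hat\rho) .
\]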

Page 20

Testing for auto

• Durbin-Watson approach continued (a minimal sketch of computing the statistic follows):
– If there is no (or weak) autocorrelation, the last term would be equal (or close) to 0, so the whole statistic would be (close to) 2.
– IF DW < 2 (the colours refer to the decision chart on the slide):
• blue: positive autocorrelation, green: inconclusive, no colour: no autocorrelation
– IF DW > 2:
• blue: negative autocorrelation, green: inconclusive, no colour: no autocorrelation
– YOU CAN USE IT EVEN ON SMALL SAMPLES
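A minimal sketch of computing the statistic with statsmodels (hypothetical names, continuing the earlier sketches):

    from statsmodels.stats.stattools import durbin_watson

    dw = durbin_watson(base.resid)
    # near 2: no autocorrelation; below 2: positive; above 2: negative
    # (compare with the d_L and d_U bounds from the Durbin-Watson tables)
    print(dw)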

Page 21

Testing for auto

• Breusch-Godfrey approach (a minimal sketch follows this list):
– The alternative: there is autocorrelation of order s (the s closest ε's are correlated).
– Run your model yi = xiβ + εi.
– Get your residuals e.
– Run the auxiliary regression on the lagged residuals, of the form: e_t = xγ + λ1 e_{t-1} + λ2 e_{t-2} + … + λs e_{t-s}
– Test the hypothesis that your λ's are all zero.
– The nice part is that T·R² (where T is the number of observations) of this auxiliary regression allows you to test this as a joint hypothesis with a χ² distribution with s dof, where s is the no. of lags you take into account.
– MUCH NICER THAN DW, BUT REQUIRES BIG SAMPLES!
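A minimal sketch using statsmodels' built-in Breusch-Godfrey test (hypothetical names; `base` is the fitted OLS model from the earlier sketches, and 2 lags is an arbitrary choice):

    from statsmodels.stats.diagnostic import acorr_breusch_godfrey

    lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(base, nlags=2)
    print(lm_stat, lm_pvalue)   # small p-value => reject H0 => serial correlation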

Page 22

CONCLUSIONS ABOUT AUTO AND HETERO

• What they both mean is that you can no longer trust the estimates of the standard errors.

• You can still trust the estimators of your model, but you cannot test whether they are non-zero (no valid hypothesis testing).

• If you have autocorrelation but a very big sample, you are asymptotically OK, so you need not worry.

• A big sample does not help for heteroscedasticity, though.

• In a small sample autocorrelation cannot be eliminated either.

• What we have as a response is the GENERALISED LEAST SQUARES estimator => GLS is the same as OLS, only it helps to overcome the misestimation of the standard errors (a minimal sketch follows).

• If there are no problems with auto and hetero, GLS is less efficient than OLS (do not overuse it!)
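A minimal sketch of a GLS estimation in statsmodels (hypothetical names, continuing the earlier sketches; `sigma` stands for an assumed n x n error covariance matrix, which in practice has to be estimated):

    import statsmodels.api as sm

    gls_res = sm.GLS(df["y"], X, sigma=sigma).fit()
    print(gls_res.summary())   # coefficients similar to OLS, corrected standard errors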