Chapter 9: Dynamic Models, Autocorrelation and Forecasting
ECON 6002 Econometrics, Memorial University of Newfoundland
Adapted from Vera Tabakova’s notes
9.1 Introduction
9.2 Lags in the Error Term: Autocorrelation
9.3 Estimating an AR(1) Error Model
9.4 Testing for Autocorrelation
9.5 An Introduction to Forecasting: Autoregressive Models
9.6 Finite Distributed Lags
9.7 Autoregressive Distributed Lag Models
Figure 9.1
The model so far assumes that the observations are not correlated with one another.
This is believable if one has drawn a random sample, but less likely if one has drawn observations sequentially in time.
Time series observations, which are drawn at regular intervals, usually embody a structure where time is an important component.
Principles of Econometrics, 3rd Edition
If we cannot completely model this structure in the regression function itself, then the remainder spills over into the unobserved component of the statistical model (its error) and this causes the errors to be correlated with one another.
Three ways to view the dynamics:

$y_t = f(x_t, x_{t-1}, x_{t-2}, \ldots)$ (lagged explanatory variables)

$y_t = f(y_{t-1}, x_t)$ (a lagged dependent variable)

$y_t = f(x_t) + e_t, \quad e_t = f(e_{t-1})$ (serially correlated errors)
Figure 9.2(a) Time Series of a Stationary Variable
Assume stationarity for now.
Figure 9.2(b) Time Series of a Nonstationary Variable that is ‘Slow Turning’ or ‘Wandering’
Figure 9.2(c) Time Series of a Nonstationary Variable that ‘Trends’
9.2.1 Area Response Model for Sugar Cane
$\ln(A) = \beta_1 + \beta_2 \ln(P)$

$\ln(A_t) = \beta_1 + \beta_2 \ln(P_t) + e_t$

$y_t = \beta_1 + \beta_2 x_t + e_t$

$e_t = \rho e_{t-1} + v_t$
$y_t = \beta_1 + \beta_2 x_t + e_t$

$e_t = \rho e_{t-1} + v_t$

$E(v_t) = 0, \quad \operatorname{var}(v_t) = \sigma_v^2, \quad \operatorname{cov}(v_t, v_s) = 0 \text{ for } t \neq s$

Assume also $-1 < \rho < 1$.
$E(e_t) = 0$ (OK)

$\operatorname{var}(e_t) = \sigma_e^2 = \dfrac{\sigma_v^2}{1 - \rho^2}$ (OK, constant)

$\operatorname{cov}(e_t, e_{t-k}) = \sigma_e^2 \rho^k \neq 0$ for $k > 0$ (not zero!)
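These AR(1) moment results can be checked by simulation; a minimal sketch in Python (the parameter values are illustrative, not from the text):

```python
import numpy as np

# Simulate the AR(1) error process e_t = rho*e_{t-1} + v_t and check
# that var(e_t) matches the formula sigma_v^2 / (1 - rho^2).
rng = np.random.default_rng(42)
rho, sigma_v, T = 0.5, 1.0, 200_000

v = rng.normal(0.0, sigma_v, T)
e = np.empty(T)
e[0] = v[0]
for t in range(1, T):
    e[t] = rho * e[t - 1] + v[t]

print(e.var())                     # sample variance
print(sigma_v**2 / (1 - rho**2))   # theoretical value
```

With a long simulated series the two numbers agree closely.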
$\operatorname{corr}(e_t, e_{t-k}) = \dfrac{\operatorname{cov}(e_t, e_{t-k})}{\sqrt{\operatorname{var}(e_t)\operatorname{var}(e_{t-k})}} = \dfrac{\rho^k \sigma_e^2}{\sigma_e^2} = \rho^k$

$\operatorname{corr}(e_t, e_{t-1}) = \rho$, aka the autocorrelation coefficient.

For the sugar cane example, least squares gives:

$\hat y_t = 3.893 + .776\, x_t$
(se) $\;(.061)\;\;(.277)$
Figure 9.3 Least Squares Residuals Plotted Against Time
The sample correlation between any $x$ and $y$ is

$r_{xy} = \dfrac{\widehat{\operatorname{cov}}(x, y)}{\sqrt{\widehat{\operatorname{var}}(x)\,\widehat{\operatorname{var}}(y)}} = \dfrac{\sum_{t=1}^{T}(x_t - \bar{x})(y_t - \bar{y})}{\sqrt{\sum_{t=1}^{T}(x_t - \bar{x})^2 \sum_{t=1}^{T}(y_t - \bar{y})^2}}$

Applied to the least squares residuals, which have zero mean and constant variance, the formula simplifies:

$r_1 = \dfrac{\widehat{\operatorname{cov}}(\hat e_t, \hat e_{t-1})}{\widehat{\operatorname{var}}(\hat e_t)} = \dfrac{\sum_{t=2}^{T} \hat e_t \hat e_{t-1}}{\sum_{t=1}^{T} \hat e_t^2}$
The existence of AR(1) errors implies:
The least squares estimator is still a linear and unbiased estimator, but it is no longer best: there is another estimator with a smaller variance.
The standard errors usually computed for the least squares estimator are incorrect. Confidence intervals and hypothesis tests that use these standard errors may be misleading.
As with heteroskedastic errors, you can salvage OLS when your data are autocorrelated.
Now you can use an estimator of standard errors that is robust to both heteroskedasticity and autocorrelation proposed by Newey and West.
This estimator is sometimes called HAC, which stands for heteroskedasticity and autocorrelation consistent.
HAC is not as automatic as the heteroskedasticity consistent (HC) estimator.
With autocorrelation you have to specify how far away in time the autocorrelation is likely to be significant.
The autocorrelated errors over the chosen time window are averaged in the computation of the HAC standard errors; you have to specify how many periods to average over and how much weight to assign each residual in that average.
That weighting scheme is called a kernel and the number of errors to average is called the bandwidth.
Usually, you choose a method of averaging (Bartlett kernel or Parzen kernel) and a bandwidth (nw1, nw2 or some integer). Often your software defaults to the Bartlett kernel and a bandwidth computed based on the sample size, N.
Trade-off: Larger bandwidths reduce bias (good) as well as precision (bad). Smaller bandwidths exclude more relevant autocorrelations (and hence have more bias), but use more observations to increase precision (smaller variance).
Choose a bandwidth large enough to contain the largest autocorrelations. The choice will ultimately depend on the frequency of observation and the length of time it takes for your system to adjust to shocks.
Sugar cane example
The two sets of standard errors, along with the estimated equation, are:

$\hat y_t = 3.893 + .776\, x_t$
(se) $\;(.061)\;\;(.277)$ ('incorrect' se's)
(se) $\;(.062)\;\;(.378)$ ('correct' se's)

The 95% confidence intervals for $\beta_2$ are:

$(.211,\ 1.340)$ (incorrect)
$(.006,\ 1.546)$ (correct)
$y_t = \beta_1 + \beta_2 x_t + e_t$

$e_t = \rho e_{t-1} + v_t$

Substituting the AR(1) process for the error gives

$y_t = \beta_1 + \beta_2 x_t + \rho e_{t-1} + v_t$

where the lagged error is

$e_{t-1} = y_{t-1} - \beta_1 - \beta_2 x_{t-1}$
$e_{t-1} = y_{t-1} - \beta_1 - \beta_2 x_{t-1}$

$y_t = \beta_1(1 - \rho) + \beta_2 x_t + \rho y_{t-1} - \rho\beta_2 x_{t-1} + v_t$

For the sugar cane example, nonlinear least squares gives

$\widehat{\ln(A_t)} = 3.899 + .888\, \ln(P_t), \qquad \hat\rho = .422$
(se) $\;(.092)\quad(.259)\quad(.166)$

Now this transformed nonlinear model has errors that are uncorrelated over time.
It can be shown that nonlinear least squares estimation of (9.24) is equivalent to using an iterative generalized least squares estimator called the Cochrane-Orcutt procedure. Details are provided in Appendix 9A.
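The iteration can be sketched directly with numpy; this is an illustrative implementation of the Cochrane-Orcutt idea, not the textbook's code:

```python
import numpy as np

def cochrane_orcutt(y, x, tol=1e-8, max_iter=100):
    """Iterative GLS for y_t = b1 + b2*x_t + e_t with AR(1) errors.

    Alternates between (i) estimating rho from the current residuals and
    (ii) re-estimating b1, b2 on quasi-differenced data."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # start from OLS
    rho = 0.0
    for _ in range(max_iter):
        e = y - X @ b
        rho_new = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
        # Quasi-differenced regression:
        # y_t - rho*y_{t-1} on (1 - rho) and (x_t - rho*x_{t-1})
        ys = y[1:] - rho_new * y[:-1]
        Xs = np.column_stack([(1 - rho_new) * np.ones(len(ys)),
                              x[1:] - rho_new * x[:-1]])
        b_new = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        converged = abs(rho_new - rho) < tol
        b, rho = b_new, rho_new
        if converged:
            break
    return b, rho
```

On simulated data with known parameters, the procedure recovers both the regression coefficients and the autocorrelation parameter.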
The nonlinear least squares estimator only requires that the errors be stable (not necessarily stationary).
Other methods commonly used make stronger demands on the data, namely that the errors be covariance stationary.
Furthermore, the nonlinear least squares estimator gives you an unconditional estimate of the autocorrelation parameter, $\rho$, and yields a simple t-test of the hypothesis of no serial correlation.
Monte Carlo studies show that it performs well in small samples as well.
But nonlinear least squares requires more computational power than linear estimation, though this is not much of a constraint these days.
Nonlinear least squares (and other nonlinear estimators) use numerical methods rather than analytical ones to find the minimum of your sum of squared errors objective function. The routines that do this are iterative.
The substituted model can be written as a more general model:

$y_t = \beta_1(1-\rho) + \beta_2 x_t - \rho\beta_2 x_{t-1} + \rho y_{t-1} + v_t$

$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \theta_1 y_{t-1} + v_t$

where $\delta = \beta_1(1-\rho)$, $\delta_0 = \beta_2$, $\delta_1 = -\rho\beta_2$, $\theta_1 = \rho$.

$\hat y_t = 2.366 + .777\, x_t - .611\, x_{t-1} + .404\, y_{t-1}$
(se) $\;(.656)\;\;(.280)\;\;\;\;(.297)\;\;\;\;\;\;(.167)$

We should then test the above restrictions.
9.4.1 Residual Correlogram

$H_0: \rho = 0 \qquad H_1: \rho \neq 0$

$z = \sqrt{T}\, r_1 \sim N(0, 1)$

$z = \sqrt{34} \times .404 = 2.36 > 1.96$
9.4.1 Residual Correlogram (more practically…)

The autocorrelation at lag $k$ is

$\rho_k = \dfrac{\operatorname{cov}(e_t, e_{t-k})}{\operatorname{var}(e_t)} = \dfrac{E(e_t e_{t-k})}{E(e_t^2)}$

The sample autocorrelation $r_k$ is significantly different from zero at the 5% level if

$r_k > \dfrac{1.96}{\sqrt{T}} \quad \text{or} \quad r_k < -\dfrac{1.96}{\sqrt{T}}$
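The significance check for each lag of the correlogram can be sketched as follows (illustrative code, not tied to the sugar cane data):

```python
import numpy as np

def correlogram(e, max_lag=12):
    """Sample autocorrelations r_k = sum(e_t*e_{t-k}) / sum(e_t^2),
    flagged as significant when |r_k| exceeds 1.96/sqrt(T)."""
    e = np.asarray(e) - np.mean(e)
    T = len(e)
    bound = 1.96 / np.sqrt(T)
    out = []
    for k in range(1, max_lag + 1):
        r_k = (e[k:] @ e[:-k]) / (e @ e)
        out.append((k, r_k, abs(r_k) > bound))
    return out
```

For residuals generated by an AR(1) process the lag-1 autocorrelation is flagged as significant, while for white noise most lags fall inside the bounds.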
Figure 9.4 Correlogram for Least Squares Residuals from Sugar Cane Example

$y_t = \beta_1 + \beta_2 x_t + e_t$

$y_t = \beta_1(1-\rho) + \beta_2 x_t + \rho y_{t-1} - \rho\beta_2 x_{t-1} + v_t$

For this nonlinear model, then, the residuals should be uncorrelated.
Figure 9.5 Correlogram for Nonlinear Least Squares Residuals from Sugar Cane Example
The other way to determine whether or not your residuals are autocorrelated is to use an LM (Lagrange multiplier) test. For autocorrelation, this test is based on an auxiliary regression where lagged OLS residuals are added to the original regression equation. If the coefficient on the lagged residual is significant then you conclude that the model is autocorrelated.
We can derive the auxiliary regression for the LM test. Start from the model with the AR(1) error substituted in, replacing $e_{t-1}$ with the least squares residual $\hat e_{t-1}$:

$y_t = \beta_1 + \beta_2 x_t + \rho \hat e_{t-1} + v_t$

Testing $\rho = 0$ here gives $t = 2.439$, $F = 5.949$, $p$-value $= .021$.

Writing $y_t = b_1 + b_2 x_t + \hat e_t$ for the fitted OLS decomposition:

$b_1 + b_2 x_t + \hat e_t = \beta_1 + \beta_2 x_t + \rho \hat e_{t-1} + v_t$
$\hat e_t = (\beta_1 - b_1) + (\beta_2 - b_2) x_t + \rho \hat e_{t-1} + v_t$

$\hat e_t = \gamma_1 + \gamma_2 x_t + \rho \hat e_{t-1} + v_t$

$LM = T \times R^2 = 34 \times .16101 = 5.474$

The residuals are centered around zero, so the power of this test comes from the lagged error, which is what we care about.
$\hat e_t = \gamma_1 + \gamma_2 x_t + \rho \hat e_{t-1} + v_t$

$LM = T \times R^2 = 34 \times .16101 = 5.474$

This was the Breusch-Godfrey LM test for autocorrelation, and it is distributed chi-squared.

In STATA:
regress la lp
estat bgodfrey
$y_t = \delta + \theta_1 y_{t-1} + \theta_2 y_{t-2} + \cdots + \theta_p y_{t-p} + v_t$

For the inflation example, the dependent variable is

$y_t = 100\left[\ln(CPI_t) - \ln(CPI_{t-1})\right] \approx 100 \times \dfrac{CPI_t - CPI_{t-1}}{CPI_{t-1}}$

$\widehat{INFLN}_t = .1883 + .3733\, INFLN_{t-1} - .2179\, INFLN_{t-2} + .1013\, INFLN_{t-3}$
(se) $\;(.0253)\ (.0615)\ (.0645)\ (.0613)$

Adding lagged values of $y$ can serve to eliminate the error autocorrelation.
In general, $p$ should be large enough so that $v_t$ is white noise.
Figure 9.6 Correlogram for Least Squares Residuals from AR(3) Model for Inflation
Helps decide how many lags to include in the model.
$y_t = \delta + \theta_1 y_{t-1} + \theta_2 y_{t-2} + \theta_3 y_{t-3} + v_t$

$y_{T+1} = \delta + \theta_1 y_T + \theta_2 y_{T-1} + \theta_3 y_{T-2} + v_{T+1}$

$\hat y_{T+1} = \hat\delta + \hat\theta_1 y_T + \hat\theta_2 y_{T-1} + \hat\theta_3 y_{T-2} = .1883 + .3733(.4468) - .2179(.5988) + .1013(.3510) = .2602$
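The recursion can be sketched in Python. The coefficient signs below (negative second lag) are the ones that reproduce the reported forecasts of .2602 and .2487 from the AR(3) inflation estimates, and the last three observations are those used in the example:

```python
import numpy as np

# AR(3) forecasts computed recursively: the (T+2)-step forecast uses the
# (T+1)-step forecast in place of the unobserved future value.
delta = 0.1883
theta = np.array([0.3733, -0.2179, 0.1013])   # lags 1, 2, 3
hist = [0.3510, 0.5988, 0.4468]               # y_{T-2}, y_{T-1}, y_T

forecasts = []
for _ in range(2):
    y_hat = delta + theta @ np.array(hist[::-1])  # most recent value first
    forecasts.append(round(y_hat, 4))
    hist.append(y_hat)
    hist.pop(0)
print(forecasts)
```

The two elements match the reported values to within rounding.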
$\hat y_{T+2} = \hat\delta + \hat\theta_1 \hat y_{T+1} + \hat\theta_2 y_T + \hat\theta_3 y_{T-1} = .1883 + .3733(.2602) - .2179(.4468) + .1013(.5988) = .2487$

The one-step forecast error is

$u_1 = y_{T+1} - \hat y_{T+1} = (\delta - \hat\delta) + (\theta_1 - \hat\theta_1) y_T + (\theta_2 - \hat\theta_2) y_{T-1} + (\theta_3 - \hat\theta_3) y_{T-2} + v_{T+1}$
Ignoring estimation error (treating the coefficients as known), the forecast errors are:

$u_1 = v_{T+1}$

$u_2 = \theta_1 (y_{T+1} - \hat y_{T+1}) + v_{T+2} = \theta_1 u_1 + v_{T+2} = \theta_1 v_{T+1} + v_{T+2}$

$u_3 = \theta_1 u_2 + \theta_2 u_1 + v_{T+3} = (\theta_1^2 + \theta_2)\, v_{T+1} + \theta_1 v_{T+2} + v_{T+3}$
$\operatorname{var}(u_1) = \sigma_1^2 = \sigma_v^2$

$\operatorname{var}(u_2) = \sigma_2^2 = \sigma_v^2 (1 + \theta_1^2)$

$\operatorname{var}(u_3) = \sigma_3^2 = \sigma_v^2 \left[(\theta_1^2 + \theta_2)^2 + \theta_1^2 + 1\right]$

A 95% forecast interval is

$\left(\hat y_{T+j} - 1.96\, \hat\sigma_j,\ \ \hat y_{T+j} + 1.96\, \hat\sigma_j\right)$
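These variances follow a general pattern: the $j$-step forecast error is a moving average of future $v$'s whose weights obey an AR-type recursion. A sketch of that computation (parameter values are hypothetical):

```python
import numpy as np

def forecast_se(theta, sigma_v, horizon):
    """Std. errors of AR(p) forecast errors via the psi-weight recursion:
    psi_0 = 1, psi_j = theta_1*psi_{j-1} + ... + theta_p*psi_{j-p};
    then var(u_j) = sigma_v^2 * (psi_0^2 + ... + psi_{j-1}^2)."""
    p = len(theta)
    psi = [1.0]
    for j in range(1, horizon):
        psi.append(sum(theta[i] * psi[j - 1 - i]
                       for i in range(min(p, j))))
    return [sigma_v * np.sqrt(sum(w**2 for w in psi[:j + 1]))
            for j in range(horizon)]

ses = forecast_se([0.5, 0.2], 1.0, 3)   # hypothetical theta_1, theta_2
print(ses)
```

For $j = 1, 2, 3$ the squared values reproduce the three variance formulas above; a 95% interval is then $\hat y_{T+j} \pm 1.96\, \hat\sigma_j$.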
This model is just a generalization of the ones previously discussed.
In this model you include lags of the dependent variable (autoregressive) and the contemporaneous and lagged values of independent variables as regressors (distributed lags).
The acronym is ARDL(p,q) where p is the maximum autoregressive lag and q is the maximum distributed lag.
$y_t = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \cdots + \beta_q x_{t-q} + v_t, \qquad t = q+1, \ldots, T$

The lag weights are the marginal effects:

$\beta_s = \dfrac{\partial E(y_t)}{\partial x_{t-s}}$

For the inflation example, the explanatory variable is wage growth:

$x_t = 100\left[\ln(WAGE_t) - \ln(WAGE_{t-1})\right] \approx 100 \times \dfrac{WAGE_t - WAGE_{t-1}}{WAGE_{t-1}}$
$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \cdots + \delta_q x_{t-q} + \theta_1 y_{t-1} + \cdots + \theta_p y_{t-p} + v_t$

$y_t = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \beta_3 x_{t-3} + \cdots + e_t = \alpha + \sum_{s=0}^{\infty} \beta_s x_{t-s} + e_t$
Figure 9.7 Correlogram for Least Squares Residuals from Finite Distributed Lag Model
$\widehat{INFLN}_t = .0989 + .1149\, PCWAGE_t + .0377\, PCWAGE_{t-1} + .0593\, PCWAGE_{t-2} + .2361\, PCWAGE_{t-3} + .3536\, INFLN_{t-1} - .1976\, INFLN_{t-2}$
(se) $\;(.0288)\ (.0761)\ (.0812)\ (.0812)\ (.0829)\ (.0604)\ (.0604)$
Figure 9.8 Correlogram for Least Squares Residuals from Autoregressive Distributed Lag Model
![Page 52: Chapter 9](https://reader036.vdocument.in/reader036/viewer/2022081419/56812ac2550346895d8e8f1c/html5/thumbnails/52.jpg)
$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \delta_2 x_{t-2} + \delta_3 x_{t-3} + \theta_1 y_{t-1} + \theta_2 y_{t-2} + v_t$

$\hat\beta_0 = \hat\delta_0 = .1149$

$\hat\beta_1 = \hat\theta_1 \hat\beta_0 + \hat\delta_1 = .3536(.1149) + .0377 = .0784$

$\hat\beta_2 = \hat\theta_1 \hat\beta_1 + \hat\theta_2 \hat\beta_0 + \hat\delta_2 = .0643$

$\hat\beta_3 = \hat\theta_1 \hat\beta_2 + \hat\theta_2 \hat\beta_1 + \hat\delta_3 = .2434$

$\hat\beta_4 = \hat\theta_1 \hat\beta_3 + \hat\theta_2 \hat\beta_2 = .0734$
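The weight recursion can be sketched as follows, using the ARDL estimates from the example; the second autoregressive coefficient is entered with a negative sign, which is what reproduces the reported weights:

```python
# Distributed-lag weights implied by an ARDL model, via
# beta_s = theta_1*beta_{s-1} + theta_2*beta_{s-2} + delta_s
# (delta_s = 0 once the finite distributed lags are exhausted).
delta = [0.1149, 0.0377, 0.0593, 0.2361]   # delta_0 ... delta_3
theta = [0.3536, -0.1976]                  # theta_1, theta_2

beta = []
for s in range(8):
    b = delta[s] if s < len(delta) else 0.0
    if s >= 1:
        b += theta[0] * beta[s - 1]
    if s >= 2:
        b += theta[1] * beta[s - 2]
    beta.append(round(b, 4))
print(beta[:5])
```

The first five weights agree with the hand computation above to within rounding.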
Figure 9.9 Distributed Lag Weights for Autoregressive Distributed Lag Model
$(9A.1)\quad y_t = \beta_1 + \beta_2 x_t + e_t, \qquad e_t = \rho e_{t-1} + v_t$

$(9A.2)\quad y_t = \beta_1(1-\rho) + \beta_2 x_t + \rho y_{t-1} - \beta_2 \rho\, x_{t-1} + v_t$

Rearranging:

$y_t - \rho y_{t-1} = \beta_1(1-\rho) + \beta_2(x_t - \rho x_{t-1}) + v_t$

with transformed variables $y_t^* = y_t - \rho y_{t-1}$, $x_{t1}^* = 1-\rho$, $x_{t2}^* = x_t - \rho x_{t-1}$.
$(9A.3)\quad y_t - \rho y_{t-1} = \beta_1(1-\rho) + \beta_2(x_t - \rho x_{t-1}) + v_t$

$(9A.4)\quad y_t^* = \beta_1 x_{t1}^* + \beta_2 x_{t2}^* + v_t$
The first observation requires separate treatment:

$y_1 = \beta_1 + \beta_2 x_1 + e_1$

Multiplying through by $\sqrt{1-\rho^2}$:

$(9A.5)\quad \sqrt{1-\rho^2}\, y_1 = \sqrt{1-\rho^2}\, \beta_1 + \sqrt{1-\rho^2}\, \beta_2 x_1 + \sqrt{1-\rho^2}\, e_1$

$(9A.6)\quad y_1^* = \beta_1 x_{11}^* + \beta_2 x_{12}^* + e_1^*$

where $y_1^* = \sqrt{1-\rho^2}\, y_1$, $x_{11}^* = \sqrt{1-\rho^2}$, $x_{12}^* = \sqrt{1-\rho^2}\, x_1$, and $e_1^* = \sqrt{1-\rho^2}\, e_1$.
$\operatorname{var}(e_1^*) = (1-\rho^2)\operatorname{var}(e_1) = (1-\rho^2)\, \dfrac{\sigma_v^2}{1-\rho^2} = \sigma_v^2$
The Durbin-Watson test of $H_0: \rho = 0$ against $H_1: \rho > 0$ uses

$(9B.1)\quad d = \dfrac{\sum_{t=2}^{T} (\hat e_t - \hat e_{t-1})^2}{\sum_{t=1}^{T} \hat e_t^2}$
$(9B.2)\quad d = \dfrac{\sum_{t=2}^{T} \hat e_t^2 + \sum_{t=2}^{T} \hat e_{t-1}^2 - 2\sum_{t=2}^{T} \hat e_t \hat e_{t-1}}{\sum_{t=1}^{T} \hat e_t^2} = \dfrac{\sum_{t=2}^{T} \hat e_t^2}{\sum_{t=1}^{T} \hat e_t^2} + \dfrac{\sum_{t=2}^{T} \hat e_{t-1}^2}{\sum_{t=1}^{T} \hat e_t^2} - \dfrac{2\sum_{t=2}^{T} \hat e_t \hat e_{t-1}}{\sum_{t=1}^{T} \hat e_t^2} \approx 1 + 1 - 2 r_1$
$(9B.3)\quad d \approx 2(1 - r_1)$

Reject $H_0$ if $d \leq d_c$, where $d_c$ is the appropriate critical value.
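Computing $d$ and checking the approximation $d \approx 2(1 - r_1)$ is straightforward (illustrative code on simulated residuals):

```python
import numpy as np

def durbin_watson(e):
    """Durbin-Watson statistic d = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    e = np.asarray(e)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# For uncorrelated residuals r1 is near 0, so d should be near 2.
rng = np.random.default_rng(7)
e = rng.normal(size=2000)
d = durbin_watson(e)
r1 = (e[1:] @ e[:-1]) / (e @ e)
print(d, 2 * (1 - r1))
```

Values of $d$ well below 2 signal positive autocorrelation.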
Figure 9A.1:
Figure 9A.2:
The Durbin-Watson bounds test:

if $d < d_{Lc}$, reject $H_0: \rho = 0$ and accept $H_1: \rho > 0$;
if $d > d_{Uc}$, do not reject $H_0: \rho = 0$;
if $d_{Lc} \leq d \leq d_{Uc}$, the test is inconclusive.
$y_t = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \beta_3 x_{t-3} + \cdots + e_t = \alpha + \sum_{s=0}^{\infty} \beta_s x_{t-s} + e_t$

$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \cdots + \delta_q x_{t-q} + \theta_1 y_{t-1} + \cdots + \theta_p y_{t-p} + v_t$
$(9C.1)\quad y_t = \delta + \delta_0 x_t + \theta_1 y_{t-1} + v_t$

$(9C.2)\quad y_{t-1} = \delta + \delta_0 x_{t-1} + \theta_1 y_{t-2} + v_{t-1}$

Substituting (9C.2) into (9C.1) (error terms suppressed for clarity):

$y_t = \delta + \delta_0 x_t + \theta_1(\delta + \delta_0 x_{t-1} + \theta_1 y_{t-2}) = \delta(1 + \theta_1) + \delta_0 x_t + \theta_1 \delta_0 x_{t-1} + \theta_1^2 y_{t-2}$
Repeating the substitution:

$(9C.3)\quad y_t = \delta(1 + \theta_1 + \theta_1^2) + \delta_0 x_t + \theta_1 \delta_0 x_{t-1} + \theta_1^2 \delta_0 x_{t-2} + \theta_1^3 y_{t-3}$

After $j$ substitutions:

$y_t = \delta(1 + \theta_1 + \cdots + \theta_1^j) + \delta_0 x_t + \theta_1 \delta_0 x_{t-1} + \cdots + \theta_1^j \delta_0 x_{t-j} + \theta_1^{j+1} y_{t-(j+1)} = \delta \sum_{s=0}^{j} \theta_1^s + \delta_0 \sum_{s=0}^{j} \theta_1^s x_{t-s} + \theta_1^{j+1} y_{t-(j+1)}$
Letting $j \to \infty$ (with $|\theta_1| < 1$, so that $\theta_1^{j+1} y_{t-(j+1)} \to 0$):

$(9C.4)\quad y_t = \dfrac{\delta}{1 - \theta_1} + \delta_0 \sum_{s=0}^{\infty} \theta_1^s x_{t-s} + e_t$
Comparing with the infinite distributed lag model $y_t = \alpha + \sum_{s=0}^{\infty} \beta_s x_{t-s} + e_t$:

$\beta_s = \theta_1^s \delta_0$

$\alpha = \delta(1 + \theta_1 + \theta_1^2 + \cdots) = \dfrac{\delta}{1 - \theta_1}$
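So in the ARDL(1,0) case the implied lag weights decline geometrically; a quick numeric check (the values of $\delta$, $\delta_0$, $\theta_1$ are hypothetical):

```python
# Infinite distributed-lag weights implied by
# y_t = delta + delta0*x_t + theta1*y_{t-1} + v_t:
#   beta_s = theta1**s * delta0,  alpha = delta / (1 - theta1).
delta, delta0, theta1 = 1.0, 0.5, 0.8

betas = [delta0 * theta1**s for s in range(50)]
alpha = delta / (1 - theta1)
print(betas[:4])   # approximately 0.5, 0.4, 0.32, 0.256
print(alpha)       # approximately 5.0
print(sum(betas))  # approaches delta0/(1-theta1) = 2.5
```

The total (long-run) multiplier is the sum of the weights, $\delta_0/(1-\theta_1)$.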
$(9C.5)\quad y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \delta_2 x_{t-2} + \delta_3 x_{t-3} + \theta_1 y_{t-1} + \theta_2 y_{t-2} + v_t$

$(9C.6)\quad \beta_0 = \delta_0$
$\beta_1 = \theta_1 \beta_0 + \delta_1$
$\beta_2 = \theta_1 \beta_1 + \theta_2 \beta_0 + \delta_2$
$\beta_3 = \theta_1 \beta_2 + \theta_2 \beta_1 + \delta_3$
$\beta_4 = \theta_1 \beta_3 + \theta_2 \beta_2$
$\beta_s = \theta_1 \beta_{s-1} + \theta_2 \beta_{s-2} \quad \text{for } s \geq 4$
A simple moving-average forecast:

$(9D.1)\quad \hat y_{T+1} = \dfrac{y_T + y_{T-1} + y_{T-2}}{3}$

Exponential smoothing instead weights recent observations more heavily:

$\hat y_{T+1} = \alpha y_T + \alpha(1-\alpha) y_{T-1} + \alpha(1-\alpha)^2 y_{T-2} + \cdots$

The weights sum to one:

$\alpha + \alpha(1-\alpha) + \alpha(1-\alpha)^2 + \alpha(1-\alpha)^3 + \cdots = 1$

which leads to the recursive form

$(9D.2)\quad \hat y_{T+1} = \alpha y_T + (1 - \alpha)\hat y_T$
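The recursive form (9D.2) makes the computation a one-line update; a sketch (the series and the value of α are illustrative):

```python
def exp_smooth_forecast(y, alpha):
    """One-step-ahead exponential smoothing:
    y_hat_{t+1} = alpha*y_t + (1 - alpha)*y_hat_t,
    initialized with the first observation."""
    y_hat = y[0]
    for obs in y:
        y_hat = alpha * obs + (1 - alpha) * y_hat
    return y_hat

series = [10.0, 12.0, 11.0, 13.0, 12.5]
print(exp_smooth_forecast(series, alpha=0.8))
```

Larger values of α track the series more closely; smaller values smooth more heavily, as Figure 9A.3 illustrates.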
Figure 9A.3: Exponential Smoothing Forecasts for two alternative values of α