
Autocorrelation and the AR(1) Process

Hun Myoung Park

© Jeeshim and KUCC625 (7/18/2006)   http://www.masil.org   http://www.joomok.org

This document discusses autocorrelation (or serial correlation) in linear regression models, with a focus on the first-order autoregressive process, AR(1). This document is largely based on Greene (2003).

1. Defining Autocorrelation

Autocorrelation occurs in time-series data more often than in cross-sectional data. Autocorrelation (also called autoregression or serial correlation) results from a violation of the nonautocorrelation assumption that each disturbance is uncorrelated with every other disturbance.

1.1 Stationarity and Autocorrelation

In the presence of autocorrelation,

    E(ε_t|X) = E(ε_s|X) = 0   and   Var(ε_t|X) = Var(ε_s|X) = σ²,   but   Cov(ε_t, ε_s|X) ≠ 0 for t ≠ s.

The distribution of disturbances is then said to be covariance stationary or weakly stationary.¹

E(εε′) ≠ σ²I; rather, E(εε′) = σ²Ω, a full, positive definite matrix with a constant σ² on the diagonal. Since Ω_ts is a function of |t−s|, but not of t or s alone (the stationarity assumption), the covariance between observations t and s is a finite function of |t−s|, the distance apart in time of the observations. The autocovariance is defined as

    γ_s = Cov(ε_t, ε_{t−s}|X) = Cov(ε_{t+s}, ε_t|X) = σ²Ω_{t,t−s} = σ²Ω_{t+s,t},   and   γ_0 = σ²Ω_{tt} = σ².

Autocorrelation is the correlation between ε_t and ε_{t−s}:

    Corr(ε_t, ε_{t−s}|X) = Cov(ε_t, ε_{t−s}|X) / √[Var(ε_t|X)·Var(ε_{t−s}|X)] = γ_s/γ_0 = ρ_s.

1.2 Autoregression and AR(p)

A typical autoregression model AR(p) is

    y_t = μ + φ_1 y_{t−1} + φ_2 y_{t−2} + … + φ_p y_{t−p} + ε_t,   or

¹ Strong stationarity requires that the whole joint distribution is the same over the time periods.


    (1 − φ_1 B − φ_2 B² − … − φ_p B^p) y_t = μ + ε_t,   or simply   Φ(B) y_t = μ + ε_t,

where B denotes the backward shift (lag) operator. The first-order autoregressive AR(1) process is structured so that the influence of a given disturbance fades as it recedes into the more distant past but vanishes only asymptotically.

    y_t = μ + φ_1 y_{t−1} + ε_t = μ + φ_1(μ + φ_1 y_{t−2} + ε_{t−1}) + ε_t = μ + φ_1 μ + φ_1² y_{t−2} + φ_1 ε_{t−1} + ε_t

Alternatively, ε_t = ρε_{t−1} + u_t. In contrast, the first-order moving-average MA(1) process has a short memory: ε_t = u_t − λu_{t−1}. Interestingly, AR(1) can be written in MA(∞) form. In ε_t = ρε_{t−1} + u_t, where E(u_t) = 0, E(u_t²) = σ_u², and Cov(u_t, u_s) = 0 if t ≠ s, repeated substitution ends up with

    ε_t = u_t + ρu_{t−1} + ρ²u_{t−2} + …

Each disturbance embodies the entire past history of the u's, with the most recent observations receiving greater weight than those in the distant past. The variance and covariance of disturbances are

    Var(ε_t) = σ_u²(1 + ρ² + ρ⁴ + …) = σ_u² / (1 − ρ²)

    Cov(ε_t, ε_{t−1}) = E(ε_t ε_{t−1}) = E[(u_t + ρε_{t−1})ε_{t−1}] = ρ·Var(ε_{t−1}) = ρσ_u² / (1 − ρ²)
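This geometric decay is easy to check by simulation. The following is a minimal STATA sketch (not part of the original text; all names are illustrative): it builds an AR(1) disturbance with ρ = 0.7 and σ_u = 1 and verifies that the sample standard deviation of e approaches √[1/(1 − 0.7²)] ≈ 1.4.

* Minimal simulation sketch: AR(1) disturbance with rho = .7, sigma_u = 1.
clear
set obs 20000
set seed 123
gen t = _n
tsset t
gen u = invnorm(uniform())        // u ~ N(0,1); rnormal() in newer STATA versions
gen e = u in 1
replace e = 0.7*l.e + u in 2/l    // e_t = rho*e_{t-1} + u_t, built observation by observation
sum e                             // Std. Dev. should be close to 1.4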

2. Causes and Consequences of Autocorrelation

Autocorrelation may result from a problem in the (linear) functional form assumption, from omitted relevant explanatory variables (often lagged dependent variables), or from measurement errors that are themselves autocorrelated. In practice, specification errors (ignoring relevant variables) appear to be most critical.

Like heteroscedasticity, autocorrelation leaves the OLS (ordinary least squares) parameter estimates inefficient, and the conventional estimated variances of those estimates are biased; technically speaking, σ̂² is biased (underestimated). However, the OLS parameter estimates themselves remain unbiased and consistent. In short, OLS is no longer BLUE.

3. Detecting Autocorrelation

This section considers several test statistics, including the Breusch-Godfrey LM, Box-Pierce Q, Ljung-Box Q′, Durbin-Watson d, and Durbin h.


3.1 Lagrange Multiplier Test for AR(p)

Breusch (1978) and Godfrey (1978) developed a Lagrange multiplier test that can be applied to pth-order autoregressive processes; this test is therefore more general than the D-W d and Durbin h. The null hypothesis is a model without lagged dependent variables, ρ_1 = ρ_2 = … = ρ_p = 0.²

The LM test consists of several steps. First, regress Y on the Xs to get residuals. Compute lagged residuals up to the pth order, replacing missing values of the lagged residuals with zeros. Regress e_t on the Xs and e_{t−1}, e_{t−2}, …, e_{t−p} to get R². Finally, compute the LM statistic using this R² and the number of observations T used in the model:³

    LM = TR² ~ χ²(p)

This statistic follows the chi-squared distribution with p degrees of freedom. The Breusch-Godfrey LM is preferred to the other test statistics.

² This model is viewed as a restricted model, whereas the full or unrestricted model has the p lagged dependent variables.
³ Since missing values in the lagged residuals are filled with zeros, the number of observations used in the model is the same as that in the original model.
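As a compact illustration of these steps, the following STATA sketch computes the LM statistic for p = 2 on a hypothetical tsset data set with dependent variable y and regressors x1 and x2 (all names illustrative, not from the original text); section 5.1.2 below works through the same steps with real data.

* Generic LM sketch for p = 2; assumes tsset time-series data.
quietly regress y x1 x2
predict e, residuals
gen e1 = l1.e
gen e2 = l2.e
replace e1 = 0 if e1 == .       // fill missing lagged residuals with zeros
replace e2 = 0 if e2 == .
quietly regress e x1 x2 e1 e2   // auxiliary regression on the Xs and lagged residuals
local lm = e(N)*e(r2)           // LM = T*R^2
display "LM = " `lm' "   p-value = " chi2tail(2, `lm')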

3.2 Q and Q′ Tests for AR(p)

Box and Pierce (1970) developed the Q test, which is asymptotically equivalent to the Breusch-Godfrey LM test. The Box-Pierce Q has a chi-squared distribution with p degrees of freedom. The Q statistic is

    Q = T Σ_{j=1}^{p} r_j² ~ χ²(p),   where   r_p = [Σ_{t=p+1}^{T} e_t e_{t−p}] / [Σ_{t=1}^{T} e_t²].

First, regress Y on the Xs to get residuals and compute lagged residuals up to the pth order. Compute the individual r_p's using the e_t² and e_t e_{t−p}. Finally, plug the r_p's into the formula to compute the Box-Pierce Q.

Ljung and Box (1979) refined the Box-Pierce Q test to get Q′; you may use the information obtained above. The Ljung-Box Q′ also follows the chi-squared distribution with p degrees of freedom:

    Q′ = T(T+2) Σ_{j=1}^{p} r_j²/(T−j) ~ χ²(p).
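The loop below is a generic STATA sketch of both computations for arbitrary p (here p = 4), assuming a tsset data set and a residual variable e from a prior regression (names illustrative, not from the original text); section 5.1.3 works through the p = 1 case with real data.

* Generic Q and Q' sketch; assumes tsset data and residuals in e.
local p = 4
gen e2_q = e^2
quietly sum e2_q
local den = r(sum)                 // sum of e_t^2 over t = 1..T
local T = r(N)
local Q = 0
local Qp = 0
forvalues j = 1/`p' {
    quietly gen ee_`j' = e*l`j'.e
    quietly sum ee_`j'
    local r`j' = r(sum)/`den'                           // r_j
    local Q  = `Q'  + `T'*(`r`j''^2)                    // Box-Pierce term
    local Qp = `Qp' + `T'*(`T'+2)*(`r`j''^2)/(`T'-`j')  // Ljung-Box term
}
display "Box-Pierce Q = " `Q'  "   p-value = " chi2tail(`p', `Q')
display "Ljung-Box Q' = " `Qp' "   p-value = " chi2tail(`p', `Qp')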

3.3 Durbin-Watson d for AR(1)

The Durbin-Watson (D-W) test is based on the principle that if the true disturbances are autocorrelated, this fact will be revealed through the autocorrelations of the least squares residuals (Durbin and Watson 1950, 1951, 1971). The null hypothesis is that the disturbances are not autocorrelated, ρ = 0. The test statistic is


    d = [Σ_{t=2}^{T} (e_t − e_{t−1})²] / [Σ_{t=1}^{T} e_t²]

Since d ≈ 2(1 − r), where r is the first-order autocorrelation of the residuals, d near 2 indicates no autocorrelation, d near 0 positive autocorrelation, and d near 4 negative autocorrelation. From the Durbin-Watson statistic table (with T and k), we get the following decision criteria:

    Range of d            Decision
    0 ≤ d < d_L           Reject H0 (ρ > 0)
    d_L ≤ d ≤ d_U         Inconclusive
    d_U < d < 4−d_U       Do not reject H0 (H0: ρ = 0)
    4−d_U ≤ d ≤ 4−d_L     Inconclusive
    4−d_L < d ≤ 4         Reject H0 (ρ < 0)


3.4 Durbin h for AR(1)

When the model contains a lagged dependent variable, the D-W d is biased toward 2, so Durbin (1970) suggests the h statistic instead:

    h = r √[T / (1 − T·s²_lag)],

where r is the first-order autocorrelation of the residuals and s²_lag is the square of the standard error of the parameter estimator of the lagged dependent variable. s²_lag is also an element in the diagonal of the variance-covariance matrix. The h statistic is approximately normally distributed with zero mean and unit variance. Note that the h test is a one-tailed test.

3.5 Software Issue

Table 3.1 summarizes the procedures and commands that conduct AR(1) tests. The STATA .durbina command produces a chi-squared statistic, which is different from the z score.

Table 3.1 Comparison of Computing Test Statistics
              SAS 9.3                  STATA 9.2                    LIMDEP 8.0
    B-G LM    AUTOREG /GODFREY=1       .bgodfrey, lags(p)           -
    D-W d     REG /DW; AUTOREG /DW=1   .dwstat (.estat dwatson)     Regress
    Durbin h  AUTOREG /LAGDV= DW=1     .durbina (.estat durbinalt)  -
    B-P Q     -                        -                            -
    L-B Q′    -                        -                            -
    * Box-Pierce Q and Ljung-Box Q′ are not supported by these statistical packages.
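The following STATA sketch mirrors the h computation for a hypothetical tsset data set with dependent variable y and regressor x (names illustrative, not from the original text); section 5.1.5 works through a real example. The statistic is valid only when T·s²_lag < 1.

* Generic Durbin h sketch; assumes tsset data.
quietly regress y l.y x
local T = e(N)
matrix V = e(V)
local s2b = V[1,1]                   // estimated variance of the L.y coefficient
predict e, residuals
quietly regress e l.e, noconstant    // first-order residual autocorrelation r
local r = _b[L.e]
local h = `r'*sqrt(`T'/(1 - `T'*`s2b'))
display "h = " `h' "   one-tailed p-value = " 1 - normal(`h')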

4. Correcting Autocorrelation

The autoregressive error model corrects autocorrelation. If ρ is known, you may take the generalized least squares (GLS) approach. Otherwise, you have to estimate by feasible generalized least squares (FGLS). If an autoregressive error model also suffers from heteroscedasticity, you may try the generalized autoregressive conditional heteroscedasticity (GARCH) model.

4.1 Generalized Least Squares

If Ω is known, the generalized least squares estimator is

    β̂ = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y,

which is consistent. The variance of the parameter estimates is Var(β̂) = σ²(X′Ω⁻¹X)⁻¹. GLS needs transformation of the dependent variable, the independent variables, and the intercept. See the following.

Since, in AR(1), the transformation matrix is

    P = [ √(1−ρ²)   0    0   …   0
           −ρ       1    0   …   0
            0      −ρ    1   …   0
            …
            0       0    …  −ρ   1 ],

the transformed data are

    Y* = PY = [ √(1−ρ²) y_1
                y_2 − ρy_1
                …
                y_T − ρy_{T−1} ],

    X* = PX = [ √(1−ρ²)   √(1−ρ²)x_{11}      …   √(1−ρ²)x_{k1}
                1−ρ       x_{12} − ρx_{11}   …   x_{k2} − ρx_{k1}
                …
                1−ρ       x_{1T} − ρx_{1,T−1} …  x_{kT} − ρx_{k,T−1} ].
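Row t (t = 2, …, T) of the transformed system is simply the quasi-differenced regression, which shows why the intercept column of X* becomes 1 − ρ (a worked form of the transformation above, with β_0 denoting the intercept):

    y_t − ρy_{t−1} = β_0(1 − ρ) + β_1(x_{1t} − ρx_{1,t−1}) + … + β_k(x_{kt} − ρx_{k,t−1}) + u_t,

where u_t = ε_t − ρε_{t−1} satisfies the classical assumptions.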


Then, regress Y* on X*. In SAS and STATA, the intercept should be suppressed.
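As a minimal STATA sketch of this transformation (assuming a tsset data set with dependent variable y, a single regressor x, and a ρ estimate stored in the local macro rho; all names illustrative — sections 6.1.2 and 6.1.3 work this out with the real data):

* Prais-Winsten style transformation and GLS regression.
gen Ty = sqrt(1 - `rho'^2)*y if _n == 1
replace Ty = y - `rho'*l.y if _n > 1
gen Tx = sqrt(1 - `rho'^2)*x if _n == 1
replace Tx = x - `rho'*l.x if _n > 1
gen Tcons = sqrt(1 - `rho'^2) if _n == 1     // transformed intercept column
replace Tcons = 1 - `rho' if _n > 1
regress Ty Tx Tcons, noconstant              // intercept suppressed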

4.2 Feasible Generalized Least Squares

In the real world ρ is often unknown, so feasible generalized least squares (FGLS) is more practical than GLS. FGLS begins with estimating ρ. The following estimators are commonly used:

    r = [Σ_{t=2}^{T} e_t e_{t−1}] / [Σ_{t=1}^{T} e_t²],

    r = 1 − DW/2,

and Theil's (1971) adjusted estimator r_adj = r(T − K)/T.

Once you compute r, again perform the data transformation; do not forget to transform the intercept term. The Prais-Winsten (1954) and Cochrane-Orcutt (1949) FGLS can each be either two-step or iterative. The Prais-Winsten estimator uses all observations, while the Cochrane-Orcutt FGLS ignores the first observation. The Prais-Winsten FGLS is often called the Yule-Walker method, as in SAS (METHOD=YW). You may iterate the procedure to get more satisfactory output.

SAS provides two-step, iterative two-step, and maximum likelihood methods, while STATA supports the first two; instead, STATA allows researchers to use various ρ estimators. In addition to two-step, iterative two-step, and maximum likelihood methods, LIMDEP supports Hatanaka's (1974) model for autocorrelation with a lagged dependent variable, which is asymptotically equivalent to the maximum likelihood model.

4.3 Software Issue

Table 4.1 summarizes the estimation methods supported in each statistical package.

Table 4.1 Comparison of Estimation Methods
                     SAS 9.3          STATA 9.2                         LIMDEP 8.0
    OLS              REG              .regress                          Regress
    2-step P-W       AUTOREG /YW      .prais, twostep                   Regress;AR1;Maxit=1;Rho=
    2-step C-O       -                .prais, corc twostep              Regress;AR1;Maxit=1;Alg=C;Rho=
    2-step P-W (dw)  -                .prais, rhotype(dw) twostep       Regress;AR1;Maxit=1
    2-step C-O (dw)  -                .prais, rhotype(dw) corc twostep  Regress;AR1;Alg=Corc;Maxit=1
    Iterative P-W    AUTOREG /ITYW    .prais                            Regress;AR1;
    Iterative C-O    -                .prais, corc                      Regress;AR1;Alg=Corc
    MLE              AUTOREG /ML      -                                 Regress;AR1;Alg=MLE
    Two-stage (IV)   -                -                                 2SLS;Inst;AR1;Hatanaka
    GARCH            AUTOREG /GARCH   .arch, garch(p)                   Regress;Model=Garch(p,q,1)

* The default ρ types are the autocorrelation coefficient in SAS, the residual-regression-based ρ in STATA, and the D-W d-based ρ in LIMDEP.


5. Example: Detecting Autocorrelation

This section illustrates how to detect autocorrelation using STATA and SAS.

5.1 STATA

STATA has the .dwstat command for the D-W d and .durbina for the Durbin h-type test; both are postestimation commands of .regress. The .durbina produces a chi-squared statistic instead of a z score and returns a slightly different p-value.

5.1.1 Data Preparation

The data are downloaded from Greene's web page at http://pages.stern.nyu.edu/~wgreene. This data set for the U.S. gasoline market, 1960-1995, is drawn from the Economic Report of the President: 1996, Council of Economic Advisors, 1996. The variables included are:

    G   = Total U.S. gasoline consumption, computed as total expenditure divided by price index
    Pg  = Price index for gasoline
    Y   = Per capita disposable income
    Pnc = Price index for new cars
    Puc = Price index for used cars
    Ppt = Price index for public transportation
    Pd  = Aggregate price index for consumer durables
    Pn  = Aggregate price index for consumer nondurables
    Ps  = Aggregate price index for consumer services
    Pop = U.S. total population in millions

. infile Year G Pg Y Pnc Puc Ppt Pd Pn Ps Pop ///
    using http://pages.stern.nyu.edu/~wgreene/Text/tables/TableF2-2.txt, clear
. drop if Year==.
. tsset Year
        time variable:  Year, 1960 to 1995
. gen lnG=ln(G/Pop)
. gen lnPg=ln(Pg)
. gen lnI=ln(Y)
. gen lnPnc=ln(Pnc)
. gen lnPuc=ln(Puc)
. sum ln*

    Variable |  Obs        Mean    Std. Dev.        Min        Max
-------------+--------------------------------------------------
         lnG |   36   -.0037086    .1516908  -.3358192   .1602927
        lnPg |   36    .6740943     .604228  -.0899247    1.41318
         lnI |   36    9.110928    .2048051   8.705497   9.387147
       lnPnc |   36    .4431982    .3794222  -.0090407   1.034962
       lnPuc |   36    .6636122    .6301064  -.1791266   1.653263

. global OLS "lnG lnPg lnI lnPnc lnPuc"           // OLS
. global OLS2 "lnG l1.lnG lnPg lnI lnPnc lnPuc"   // OLS for Durbin h
. global K=5


. regress $OLS

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =  176.71
       Model |  .771516959     4   .19287924      Prob > F      =  0.0000
    Residual |  .033836907    31  .001091513      R-squared     =  0.9580
-------------+------------------------------      Adj R-squared =  0.9526
       Total |  .805353866    35   .02301011      Root MSE      =  .03304

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.0590955   .0324849    -1.82   0.079     -.125349     .007158
         lnI |     1.3734   .0756277    18.16   0.000     1.219156    1.527643
       lnPnc |  -.1267969   .1269934    -1.00   0.326    -.3858017    .1322079
       lnPuc |  -.1187082   .0813371    -1.46   0.154    -.2845962    .0471799
       _cons |  -12.34184   .6748946   -18.29   0.000     -13.7183   -10.96539
------------------------------------------------------------------------------

. predict e, residuals
. gen e_2=e^2
. gen ee1=e*l.e
. gen e1_2=l.e^2
. gen e_e1_2=(e-l.e)^2
. list e l.e e_2 ee1 e1_2 e_e1_2 in 1/5

     +-------------------------------------------------------------------+
     |         e        L.e        e_2        ee1       e1_2     e_e1_2  |
     |-------------------------------------------------------------------|
  1. |  .0338152          .   .0011435          .          .          .  |
  2. |  .0160893   .0338152   .0002589   .0005441   .0011435   .0003142  |
  3. |   .019306   .0160893   .0003727   .0003106   .0002589   .0000103  |
  4. |  .0146887    .019306   .0002158   .0002836   .0003727   .0000213  |
  5. | -.0187137   .0146887   .0003502  -.0002749   .0002158   .0011157  |
     +-------------------------------------------------------------------+

. tabstat e e_2 ee1 e1_2 e_e1_2, stat(n sum mean) save

   stats |         e       e_2       ee1      e1_2    e_e1_2
---------+--------------------------------------------------
       N |        36        36        35        35        35
     sum | -3.73e-09  .0338369  .0228194  .0334066  .0204612
    mean | -1.03e-10  .0009399   .000652  .0009545  .0005846
------------------------------------------------------------

. matrix sum=r(StatTotal)
. local s_e_2 = sum[2,2]       //.03383691
. local s_ee1 = sum[2,3]       //.02281944
. local s_e1_2 = sum[2,4]      //.03340659
. local s_e_e1_2 = sum[2,5]    //.02046115
. global T = sum[1,1]          //36

5.1.2 Breusch-Godfrey LM Test

Unlike the Durbin-Watson d test, the Breusch-Godfrey Lagrange multiplier test can be applied to general AR(p) processes. The STATA .bgodfrey, a postestimation command, computes the statistic up to the pth order. Compare the four LM statistics with those in 5.2.2.

. quietly regress $OLS
(output is skipped)
. bgodfrey, lags(1)

Breusch-Godfrey LM test for autocorrelation
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2


-------------+-------------------------------------------------------------
       1     |        16.835                1                   0.0000
---------------------------------------------------------------------------
                        H0: no serial correlation

. bgodfrey, lags(2)

Breusch-Godfrey LM test for autocorrelation
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2
-------------+-------------------------------------------------------------
       2     |        20.825                2                   0.0000
---------------------------------------------------------------------------
                        H0: no serial correlation

. bgodfrey, lags(3)

Breusch-Godfrey LM test for autocorrelation
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2
-------------+-------------------------------------------------------------
       3     |        20.994                3                   0.0001
---------------------------------------------------------------------------
                        H0: no serial correlation

. bgodfrey, lags($lag)    // equivalent to .estat bgodfrey, lags()

Breusch-Godfrey LM test for autocorrelation
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2
-------------+-------------------------------------------------------------
       4     |        21.536                4                   0.0002
---------------------------------------------------------------------------
                        H0: no serial correlation

In order to manually compute the LM statistic, create the p lagged residuals and regress the residuals on the independent variables and the lagged residuals. Do not forget to fill the missing values in the lagged residuals with zeros.

. gen e1=l1.e
. replace e1=0 if e1==.

Now run the OLS to get R².

. regress e lnPg lnI e1

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  3,    32) =    9.22
       Model |  .015685468     3  .005228489      Prob > F      =  0.0002
    Residual |  .018151439    32  .000567232      R-squared     =  0.4636
-------------+------------------------------      Adj R-squared =  0.4133
       Total |  .033836907    35  .000966769      Root MSE      =  .02382

------------------------------------------------------------------------------
           e |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.0055195   .0156354    -0.35   0.726    -.0373678    .0263289
         lnI |   .0108632   .0460709     0.24   0.815    -.0829801    .1047065
          e1 |   .6873731   .1307147     5.26   0.000      .421116    .9536301
       _cons |  -.0956494   .4102647    -0.23   0.817    -.9313312    .7400324
------------------------------------------------------------------------------

The LM statistic is T*R², which follows a chi-squared distribution with p degrees of freedom.

. local r2=e(r2)
. local lm = ($T)*`r2'
. disp `lm'    // LM statistic


16.688194
. disp chi2tail(1, `lm')    // p-value
.00004405

5.1.3 Q and Q′ Tests

Now, consider the Box-Pierce Q and Ljung-Box Q′ statistics. First compute the individual autocorrelation coefficients up to the pth order.

. regress lnPg lnI lnPnc lnPuc e1    // for AR(1)

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =   88.47
       Model |  11.7490172     4  2.93725431      Prob > F      =  0.0000
    Residual |  1.02918525    31  .033199524      R-squared     =  0.9195
-------------+------------------------------      Adj R-squared =  0.9091
       Total |  12.7782025    35  .365091499      Root MSE      =  .18221

------------------------------------------------------------------------------
        lnPg |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         lnI |   .5279324   .4061863     1.30   0.203    -.3004899    1.356355
       lnPnc |   .9524263   .6809687     1.40   0.172    -.4364186    2.341271
       lnPuc |   .1862664   .4491529     0.41   0.681     -.729787     1.10232
          e1 |   .3949087   1.001913     0.39   0.696    -1.648507    2.438324
       _cons |  -4.681809   3.626033    -1.29   0.206    -12.07715    2.713534
------------------------------------------------------------------------------

. gen ee=e^2
. gen ee1=e*e1
. tabstat ee ee1, stat(n sum mean) save    // for AR(1)

   stats |        ee       ee1
---------+--------------------
       N |        36        36
     sum |  .0338369  .0228194
    mean |  .0009399  .0006339
------------------------------

. matrix sum=r(StatTotal)
. local r1=sum[2,2]/sum[2,1]
. local Q = $T*(`r1'^2)    // for AR(1)
. disp `Q'
16.373108
. disp chi2tail(1,`Q')    // for AR(1)
.00005202
. local Q1 = $T*($T+2)*(`r1'^2/($T-1))    // for AR(1)
. disp `Q1'
17.776517
. disp chi2tail(1,`Q1')    // for AR(1)
.00002484

SAS and STATA do not have an option or command to compute Q or Q′.

    5.1.4 Durbin-Watson d Test


First, let us compute the D-W d manually to make sure it is identical to the statistic provided by STATA. Note that .dwstat and .estat dwatson are equivalent.

. local dw= `s_e_e1_2'/`s_e_2'    // .60469933
. dwstat    // equivalent to .estat dwatson

Durbin-Watson d-statistic(  5,    36) = .6046993

. local rho=1-`dw'/2    // DW based rho: rhotype(dw)  .69765033
. local rho=`s_ee1'/`s_e_2'    // Autocorrelation rho: rhotype(tscorr)  .67439496

5.1.5 Durbin h Test

Finally, let us compute the Durbin h for a model with a lagged dependent variable. Note that Durbin h is not a two-tailed test, but a one-tailed test.

. regress $OLS2

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  5,    29) =  276.54
       Model |  .680487618     5  .136097524      Prob > F      =  0.0000
    Residual |  .014272173    29  .000492144      R-squared     =  0.9795
-------------+------------------------------      Adj R-squared =  0.9759
       Total |  .694759791    34  .020434112      Root MSE      =  .02218

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         lnG |
         L1. |   .6877655   .1139042     6.04   0.000     .4548052    .9207257
        lnPg |  -.0939412   .0225678    -4.16   0.000    -.1400975     -.047785
         lnI |   .4312204   .1692678     2.55   0.016     .0850289    .7774119
       lnPnc |  -.2909653   .0927684    -3.14   0.004    -.4806981    -.1012325
       lnPuc |   .1737385   .0717074     2.42   0.022     .0270803    .3203967
       _cons |  -3.844963   1.524568    -2.52   0.017    -6.963055    -.7268714
------------------------------------------------------------------------------

. matrix list e(V)

symmetric e(V)[6,6]
               L.lnG        lnPg         lnI       lnPnc       lnPuc       _cons
L.lnG     .01297417
lnPg     -.00065925    .0005093
lnI      -.01830652   .00067796   .02865159
lnPnc    -.00199992  -.00036491    .0033396   .00860598
lnPuc     .00496849  -.00033304  -.00791344  -.00550537   .00514196
_cons     .16504766  -.00614887  -.25805664  -.03035274   .07142527   2.3243077

. matrix V = e(V)
. local v_lag = V[1,1]    // variance of coefficient of the lagged DV
. predict e, residuals
. gen e_2=e^2
. gen ee1=e*l.e
. list e l.e e_2 ee1 in 1/5

     +----------------------------------------------+
     |         e        L.e        e_2        ee1   |
     |----------------------------------------------|
  1. |         .          .          .          .   |
  2. |  .0065173          .   .0000425          .   |
  3. |  .0107834   .0065173   .0001163   .0000703   |
  4. | -.0018847   .0107834   3.55e-06  -.0000203   |


  5. |  -.010278  -.0018847   .0001056   .0000194   |
     +----------------------------------------------+

. tabstat e e_2 ee1, stat(n sum mean) save

   stats |         e       e_2       ee1
---------+------------------------------
       N |        35        35        34
     sum | -1.86e-09  .0142722  .0017067
    mean | -5.32e-11  .0004078  .0000502
----------------------------------------

. matrix sum=r(StatTotal)
. local s_e_2 = sum[2,2]    //.0142722
. local s_ee1 = sum[2,3]    //.0017067
. global T = sum[1,1]       // 35
. local rho=`s_ee1'/`s_e_2'    // Autocorrelation rho: rhotype(tscorr)
. disp `rho'
.11958521
. disp $T*`v_lag'    // to check that T*s2_lag < 1
.4540958
. local h = `rho'*sqrt($T/(1-$T*`v_lag'))
. disp `h'
.95753197
. disp 1-norm(`h')
.16914941

The SAS AUTOREG procedure returns the same Durbin h, .9575 (see 5.2.2). Using the alternative term 1 − DW/2 will give you a quite different statistic, largely because this sample is not sufficiently large.

. dwstat    // equivalent to .estat dwatson

Durbin-Watson d-statistic(  6,    35) = 1.743835

. matrix dw = r(dw)
. local dw = dw[1,1]    // dw
. local h2 = (1-`dw'/2)*sqrt($T/(1-$T*`v_lag'))
. disp `h2'
1.0255711
. disp 1-norm(`h2')
.15254689

Let us run either .durbina or .estat durbinalt to conduct Durbin's alternative test, which produces a chi-squared statistic whose p-value is different from that of h2 above.

. durbina    // equivalent to .estat durbinalt

Durbin's alternative test for autocorrelation
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2
-------------+-------------------------------------------------------------
       1     |         0.660                1                   0.4164
---------------------------------------------------------------------------
                        H0: no serial correlation

5.2 SAS REG and AUTOREG Procedures


In SAS, you may use the REG procedure of SAS/STAT and the AUTOREG procedure of SAS/ETS. REG computes the D-W d statistic, while AUTOREG produces both the D-W d and Durbin h statistics.

    5.2.1 SAS REG Procedure

The /DW option in the REG procedure computes the D-W d statistic.

PROC REG DATA=masil.gasoline;
   MODEL lnG = lnPg lnI lnPnc lnPuc /DW;
RUN;

The REG Procedure
Model: MODEL1
Dependent Variable: lnG

Number of Observations Read    36
Number of Observations Used    36

Analysis of Variance
                              Sum of        Mean
Source             DF        Squares      Square     F Value    Pr > F
Model               4        0.77152     0.19288      176.71    <.0001
...

Parameter Estimates
                     Parameter    Standard
Variable      DF      Estimate       Error     t Value    Pr > |t|
Intercept      1     -12.34184     0.67489      -18.29      <.0001
...


5.2.2 SAS AUTOREG Procedure

PROC AUTOREG DATA=masil.gasoline;
   MODEL lnG = lnPg lnI lnPnc lnPuc /DW=1 GODFREY=4;
RUN;

The AUTOREG Procedure
Dependent Variable    lnG

Ordinary Least Squares Estimates
SSE               0.03383691    DFE                    31
MSE                  0.00109    Root MSE          0.03304
SBC               -130.82883    AIC            -138.74642
Regress R-Square      0.9580    Total R-Square     0.9580
Durbin-Watson         0.6047

Godfrey's Serial Correlation Test
Alternative            LM    Pr > LM
AR(1)             16.8353     <.0001
...


...
                               Standard
Variable      DF    Estimate      Error    t Value
lnG1           1      0.6878     0.1139       6.04
...


6. Correcting Autocorrelation: Feasible Generalized Least Squares

This section illustrates methods to correct autocorrelation using the Prais-Winsten FGLS (feasible generalized least squares) and the Cochrane-Orcutt FGLS. If ρ is known, you may try GLS (generalized least squares).

6.1 FGLS in STATA

This section illustrates how to estimate FGLS in STATA. See section 5 for the description of the data set used.

6.1.1 Data Preparation

. infile Year G Pg Y Pnc Puc Ppt Pd Pn Ps Pop ///
    using http://pages.stern.nyu.edu/~wgreene/Text/tables/TableF2-2.txt, clear
. drop if Year==.
. tsset Year
        time variable:  Year, 1960 to 1995
. gen lnG=ln(G/Pop)
. gen lnPg=ln(Pg)
. gen lnI=ln(Y)
. gen lnPnc=ln(Pnc)
. gen lnPuc=ln(Puc)
. sum ln*    // summary statistics

    Variable |  Obs        Mean    Std. Dev.        Min        Max
-------------+--------------------------------------------------
         lnG |   36   -.0037086    .1516908  -.3358192   .1602927
        lnPg |   36    .6740943     .604228  -.0899247    1.41318
         lnI |   36    9.110928    .2048051   8.705497   9.387147
       lnPnc |   36    .4431982    .3794222  -.0090407   1.034962
       lnPuc |   36    .6636122    .6301064  -.1791266   1.653263

. global OLS "lnG lnPg lnI lnPnc lnPuc"
. regress $OLS

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =  176.71
       Model |  .771516959     4   .19287924      Prob > F      =  0.0000
    Residual |  .033836907    31  .001091513      R-squared     =  0.9580
-------------+------------------------------      Adj R-squared =  0.9526
       Total |  .805353866    35   .02301011      Root MSE      =  .03304

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.0590955   .0324849    -1.82   0.079     -.125349     .007158
         lnI |     1.3734   .0756277    18.16   0.000     1.219156    1.527643
       lnPnc |  -.1267969   .1269934    -1.00   0.326    -.3858017    .1322079
       lnPuc |  -.1187082   .0813371    -1.46   0.154    -.2845962    .0471799
       _cons |  -12.34184   .6748946   -18.29   0.000     -13.7183   -10.96539
------------------------------------------------------------------------------

. predict e, residuals
. gen e1=l.e
. gen e_2=e^2
. gen ee1=e*e1
. gen e1_2=e1^2
. gen e_e1_2=(e-e1)^2


. tabstat e e1 e_2 ee1 e1_2 e_e1_2, stat(n sum mean) save

   stats |         e        e1       e_2       ee1      e1_2    e_e1_2
---------+------------------------------------------------------------
       N |        36        35        36        35        35        35
     sum | -3.73e-09   .020744  .0338369  .0228194  .0334066  .0204612
    mean | -1.03e-10  .0005927  .0009399   .000652  .0009545  .0005846
----------------------------------------------------------------------

. matrix sum=r(StatTotal)
. local s_e_2 = sum[2,3]       //.03383691
. local s_ee1 = sum[2,4]       //.02281944
. local s_e1_2 = sum[2,5]      //.03340659
. local s_e_e1_2 = sum[2,6]    //.02046115
. local T = sum[1,1]           //36

6.1.2 Computing the Autocorrelation Coefficient

There are various ways of estimating the autocorrelation parameter ρ. Autoregressive error models to correct autocorrelation depend on the ρ estimator and on the estimation method, such as the iterative and maximum likelihood methods. ρ is often estimated using the autocorrelation formula and the Durbin-Watson d.

. local rho=`s_ee1'/`s_e_2'    // Autocorrelation rho: rhotype(tscorr)  .67439496
. dwstat    // DW d

Durbin-Watson d-statistic(  5,    36) = .6046993

. local dw= `s_e_e1_2'/`s_e_2'    // DW d  .60469933
. local rho = 1-`dw'/2    // DW based rho: rhotype(dw)  .69765033

In addition to tscorr (autocorrelation coefficient) and dw (D-W d-based ρ), STATA provides the theil (adjustment of the autocorrelation coefficient), nagar (adjustment of the D-W d-based coefficient), regress (the default option, the coefficient of regressing e_t on e_{t−1} without an intercept), and freg (the coefficient of regressing e_t on e_{t+1} without an intercept) options.

. local rho=`s_ee1'/`s_e_2'*($T-$K)/($T)    //Theil rho: rhotype(theil)  .58072899
. local rho = ((1-`dw'/2)*$T^2+$K^2)/($T^2-$K^2)    // Nagar  .73104236
. regress e e1, noc    // for the rho based on regression on the lagged residuals

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  1,    34) =   30.98
       Model |  .015587547     1  .015587547      Prob > F      =  0.0000
    Residual |  .017105895    34  .000503115      R-squared     =  0.4768
-------------+------------------------------      Adj R-squared =  0.4614
       Total |  .032693442    35  .000934098      Root MSE      =  .02243

------------------------------------------------------------------------------
           e |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          e1 |   .6830819   .1227206     5.57   0.000     .4336837    .9324801
------------------------------------------------------------------------------

. matrix b1 = e(b)
. local rho = b1[1,1]
. disp `rho'    //.68308194
. gen e0=e[_n+1]
(1 missing value generated)


. quietly regress e e0, noc    // rho based on regression with the lead (t+1) residuals
. matrix b2 = e(b)
. local rho = b2[1,1]    //.69798216

Once a ρ estimator is determined, the variables and the intercept need to be transformed using it.

. foreach var of global OLS {
  2.   gen T`var' = sqrt(1-`rho'^2)*`var' if (_n==1)
  3.   replace T`var' = `var'-`rho'*`var'[_n-1] if (_n !=1)
  4. }
. gen Intercept = sqrt(1-`rho'^2) if (_n==1)
. replace Intercept = 1 -`rho' if (_n !=1)

Alternatively, you may explicitly transform the data variable by variable as follows.

. gen TlnG = sqrt(1-`rho'^2)*lnG if (_n==1)
. replace TlnG = lnG-`rho'*lnG[_n-1] if (_n !=1)
. gen TlnPg = sqrt(1-`rho'^2)*lnPg if (_n==1)
. replace TlnPg = lnPg-`rho'*lnPg[_n-1] if (_n !=1)
. gen TlnI = sqrt(1-`rho'^2)*lnI if (_n==1)
. replace TlnI = lnI-`rho'*lnI[_n-1] if (_n !=1)
. gen TlnPnc = sqrt(1-`rho'^2)*lnPnc if (_n==1)
. replace TlnPnc = lnPnc-`rho'*lnPnc[_n-1] if (_n !=1)
. gen TlnPuc = sqrt(1-`rho'^2)*lnPuc if (_n==1)
. replace TlnPuc = lnPuc-`rho'*lnPuc[_n-1] if (_n !=1)

6.1.3 Prais-Winsten FGLS

Now you are ready to fit the Prais-Winsten FGLS with the transformed data. The intercept should be suppressed in the OLS; thus, the F test and R² are not reliable. Let us first use the ρ estimator computed from the autocorrelation formula.

. local rho=`s_ee1'/`s_e_2'    // Autocorrelation rho: rhotype(tscorr)
(data transformation is skipped)
. regress TlnG TlnPg TlnI TlnPnc TlnPuc Intercept, noconst    // Prais-Winsten FGLS

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  5,    31) =   57.39
       Model |  .133792898     5   .02675858      Prob > F      =  0.0000
    Residual |  .014453725    31  .000466249      R-squared     =  0.9025
-------------+------------------------------      Adj R-squared =  0.8868
       Total |  .148246623    36  .004117962      Root MSE      =  .02159

------------------------------------------------------------------------------
        TlnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       TlnPg |  -.1463802    .037076    -3.95   0.000    -.2219972    -.0707631
        TlnI |   1.278264   .1054467    12.12   0.000     1.063204     1.493324
      TlnPnc |  -.0398843   .1276357    -0.31   0.757     -.300199     .2204304
      TlnPuc |  -.0669309   .0766292    -0.87   0.389    -.2232173     .0893554
   Intercept |  -11.49075   .9390551   -12.24   0.000    -13.40597    -9.575537
------------------------------------------------------------------------------


The .prais command by default fits the Prais-Winsten FGLS. Use the rhotype(tscorr) option to specify the type of ρ estimator. The twostep option stops iteration after the first iteration. The output is the same as the above; compare the model SS, the model degrees of freedom, and the F statistic with those above.

. prais $OLS, rhotype(tscorr) twostep    // Autocorrelation

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6744

Prais-Winsten AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =   71.50
       Model |  .133347816     4  .033336954      Prob > F      =  0.0000
    Residual |  .014453725    31  .000466249      R-squared     =  0.9022
-------------+------------------------------      Adj R-squared =  0.8896
       Total |  .147801541    35  .004222901      Root MSE      =  .02159

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1463803    .037076    -3.95   0.000    -.2219974    -.0707632
         lnI |   1.278263   .1054466    12.12   0.000     1.063203     1.493323
       lnPnc |  -.0398849   .1276357    -0.31   0.757    -.3001995     .2204298
       lnPuc |  -.0669303   .0766292    -0.87   0.389    -.2232166      .089356
       _cons |  -11.49075   .9390546   -12.24   0.000    -13.40596    -9.575531
-------------+----------------------------------------------------------------
         rho |    .674395
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.110699

Let us use the D-W d-based estimator (ρ = 1 − d/2).

. local rho = 1-`dw'/2    // DW based rho: rhotype(dw)
(data transformation is skipped)
. regress TlnG TlnPg TlnI TlnPnc TlnPuc Intercept, noconst    // Prais-Winsten FGLS

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  5,    31) =   54.03
       Model |  .122550721     5  .024510144      Prob > F      =  0.0000
    Residual |  .014062158    31  .000453618      R-squared     =  0.8971
-------------+------------------------------      Adj R-squared =  0.8805
       Total |  .136612879    36  .003794802      Root MSE      =   .0213

------------------------------------------------------------------------------
        TlnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       TlnPg |  -.1523066   .0370525    -4.11   0.000    -.2278756    -.0767376
        TlnI |   1.266635    .107309    11.80   0.000     1.047777     1.485493
      TlnPnc |  -.0308443   .1271973    -0.24   0.810    -.2902649     .2285764
      TlnPuc |  -.0638014   .0758518    -0.84   0.407    -.2185021     .0908993
   Intercept |   -11.3873    .955492   -11.92   0.000    -13.33604    -9.438561
------------------------------------------------------------------------------

The rhotype(dw) option uses the D-W d-based estimator when estimating autoregressive error models.

. prais $OLS, rhotype(dw) twostep

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6977


Prais-Winsten AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =   67.24
       Model |  .122007581     4  .030501895      Prob > F      =  0.0000
    Residual |  .014062161    31  .000453618      R-squared     =  0.8967
-------------+------------------------------      Adj R-squared =  0.8833
       Total |  .136069743    35  .003887707      Root MSE      =   .0213

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1523067   .0370525    -4.11   0.000    -.2278757    -.0767377
         lnI |   1.266636   .1073091    11.80   0.000     1.047778     1.485494
       lnPnc |  -.0308446   .1271973    -0.24   0.810    -.2902653     .2285761
       lnPuc |  -.0638011   .0758518    -0.84   0.407    -.2185019     .0908996
       _cons |  -11.38731   .9554926   -11.92   0.000    -13.33605    -9.438566
-------------+----------------------------------------------------------------
         rho |   .6976503
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.137768

The following example uses Theil's estimator, which adjusts the autocorrelation coefficient.

. local rho=`s_ee1'/`s_e_2'*($T-$K)/($T)    //Theil rho: rhotype(theil)  .58072899
. prais $OLS, rhotype(theil) twostep    // Theil rho

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.5781

Prais-Winsten AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =   89.96
       Model |  .187934725     4  .046983681      Prob > F      =  0.0000
    Residual |  .016189637    31  .000522246      R-squared     =  0.9207
-------------+------------------------------      Adj R-squared =  0.9105
       Total |  .204124362    35  .005832125      Root MSE      =  .02285

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1235462   .0367836    -3.36   0.002    -.1985669    -.0485254
         lnI |    1.31413   .0982142    13.38   0.000     1.113821     1.514439
       lnPnc |  -.0700476   .1289092    -0.54   0.591    -.3329597     .1928645
       lnPuc |  -.0792208   .0791098    -1.00   0.324    -.2405664     .0821247
       _cons |  -11.81037   .8751567   -13.50   0.000    -13.59526    -10.02547
-------------+----------------------------------------------------------------
         rho |   .5780528
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.010610

The following uses Nagar's adjustment of the D-W d-based estimator.

. local rho = ((1-`dw'/2)*$T^2+$K^2)/($T^2-$K^2)    // Nagar  .73104236
. prais $OLS, rhotype(nagar) twostep

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.7462

Prais-Winsten AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =   58.73


       Model |  .100642676     4  .025160669      Prob > F      =  0.0000
    Residual |  .013280162    31  .000428392      R-squared     =  0.8834
-------------+------------------------------      Adj R-squared =  0.8684
       Total |  .113922839    35  .003254938      Root MSE      =   .0207

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1649401   .0368649    -4.47   0.000    -.2401266    -.0897536
         lnI |   1.237828   .1112792    11.12   0.000     1.010872     1.464783
       lnPnc |  -.0096368   .1261748    -0.08   0.940     -.266972     .2476984
       lnPuc |  -.0571432   .0740091    -0.77   0.446    -.2080859     .0937994
       _cons |  -11.13132   .9905196   -11.24   0.000     -13.1515    -9.111143
-------------+----------------------------------------------------------------
         rho |   .7461546
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.199445

The following uses the default type of ρ estimator, which is obtained by regressing e_t on e_{t−1} without the intercept.

. prais $OLS, rhotype(regress) twostep    // default

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6831

Prais-Winsten AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =   69.90
       Model |  .129028315     4  .032257079      Prob > F      =  0.0000
    Residual |  .014306248    31  .000461492      R-squared     =  0.9002
-------------+------------------------------      Adj R-squared =  0.8873
       Total |  .143334562    35  .004095273      Root MSE      =  .02148

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1485802   .0370721    -4.01   0.000    -.2241892    -.0729712
         lnI |   1.274075   .1061383    12.00   0.000     1.057605     1.490546
       lnPnc |  -.0365926    .127477    -0.29   0.776    -.2965837     .2233986
       lnPuc |  -.0657675   .0763471    -0.86   0.396    -.2214784     .0899434
       _cons |  -11.45348   .9451596   -12.12   0.000    -13.38115    -9.525816
-------------+----------------------------------------------------------------
         rho |   .6830819
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.120645

The following uses the ρ estimator obtained by regressing e_t on e_{t+1} without the intercept.

. prais $OLS, rhotype(freg) twostep

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6980

Prais-Winsten AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =   67.18
       Model |  .121850923     4  .030462731      Prob > F      =  0.0000
    Residual |  .014056649    31   .00045344      R-squared     =  0.8966
-------------+------------------------------      Adj R-squared =  0.8832
       Total |  .135907573    35  .003883074      Root MSE      =  .02129

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]


-------------+----------------------------------------------------------------
        lnPg |  -.1523921   .0370518    -4.11   0.000    -.2279598    -.0768244
         lnI |    1.26646   .1073359    11.80   0.000     1.047547     1.485373
       lnPnc |  -.0307104   .1271908    -0.24   0.811    -.2901177     .2286969
       lnPuc |  -.0637561   .0758402    -0.84   0.407    -.2184332     .0909209
       _cons |  -11.38574   .9557293   -11.91   0.000    -13.33497    -9.436521
-------------+----------------------------------------------------------------
         rho |   .6979822
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.138165

    6.1.4 Cochrane-Orcutt FGLS

Like the Prais-Winsten FGLS, the Cochrane-Orcutt FGLS runs OLS on the transformed data. Unlike the Prais-Winsten, the Cochrane-Orcutt ignores the first observation. Let us begin with the Cochrane-Orcutt FGLS using the autocorrelation coefficient.

. regress TlnG TlnPg TlnI TlnPnc TlnPuc Intercept if _n > 1, noconst

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  5,    30) =   31.05
       Model |  .073993944     5  .014798789      Prob > F      =  0.0000
    Residual |  .014299108    30  .000476637      R-squared     =  0.8380
-------------+------------------------------      Adj R-squared =  0.8111
       Total |  .088293052    35  .002522659      Root MSE      =  .02183

------------------------------------------------------------------------------
        TlnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       TlnPg |   -.142636   .0380588    -3.75   0.001    -.2203625    -.0649094
        TlnI |   1.329594   .1396031     9.52   0.000     1.044487     1.614702
      TlnPnc |  -.0793608   .1464852    -0.54   0.592    -.3785234     .2198018
      TlnPuc |  -.0561649   .0797507    -0.70   0.487    -.2190375     .1067078
   Intercept |  -11.95372   1.249882    -9.56   0.000    -14.50632    -9.401116
------------------------------------------------------------------------------

The .prais command has the corc option to estimate the Cochrane-Orcutt FGLS.

. prais $OLS, rhotype(tscorr) twostep corc

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6744

Cochrane-Orcutt AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  4,    30) =   36.74
       Model |    .0700521     4  .017513025      Prob > F      =  0.0000
    Residual |  .014299112    30  .000476637      R-squared     =  0.8305
-------------+------------------------------      Adj R-squared =  0.8079
       Total |  .084351212    34  .002480918      Root MSE      =  .02183

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1426362   .0380589    -3.75   0.001    -.2203627    -.0649096
         lnI |   1.329593    .139603     9.52   0.000     1.044486       1.6147
       lnPnc |   -.079361   .1464852    -0.54   0.592    -.3785237     .2198016
       lnPuc |  -.0561643   .0797507    -0.70   0.487     -.219037     .1067084
       _cons |   -11.9537   1.249881    -9.56   0.000     -14.5063    -9.401107
-------------+----------------------------------------------------------------
         rho |    .674395
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699


Durbin-Watson statistic (transformed) 1.125520

The following two outputs use the D-W d-based estimator.

. regress TlnG TlnPg TlnI TlnPnc TlnPuc Intercept if _n > 1, noconst

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  5,    30) =   28.41
       Model |  .066189092     5  .013237818      Prob > F      =  0.0000
    Residual |  .013979013    30  .000465967      R-squared     =  0.8256
-------------+------------------------------      Adj R-squared =  0.7966
       Total |  .080168105    35  .002290517      Root MSE      =  .02159

------------------------------------------------------------------------------
        TlnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       TlnPg |  -.1492824   .0382297    -3.90   0.000     -.227358    -.0712069
        TlnI |   1.307018   .1448034     9.03   0.000      1.01129     1.602746
      TlnPnc |  -.0599178   .1461395    -0.41   0.685    -.3583746     .2385389
      TlnPuc |  -.0563603   .0788697    -0.71   0.480    -.2174338     .1047132
   Intercept |  -11.75192    1.29727    -9.06   0.000     -14.4013    -9.102544
------------------------------------------------------------------------------

. prais $OLS, rhotype(dw) twostep corc

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6977

Cochrane-Orcutt AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  4,    30) =   33.33
       Model |  .062119363     4  .015529841      Prob > F      =  0.0000
    Residual |  .013979017    30  .000465967      R-squared     =  0.8163
-------------+------------------------------      Adj R-squared =  0.7918
       Total |   .07609838    34  .002238188      Root MSE      =  .02159

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1492826   .0382297    -3.90   0.000    -.2273581     -.071207
         lnI |   1.307019   .1448035     9.03   0.000      1.01129     1.602747
       lnPnc |   -.059918   .1461396    -0.41   0.685    -.3583748     .2385388
       lnPuc |    -.05636   .0788697    -0.71   0.480    -.2174336     .1047135
       _cons |  -11.75193   1.297271    -9.06   0.000    -14.40131    -9.102547
-------------+----------------------------------------------------------------
         rho |   .6976503
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.140131

The following estimates use other ρ estimators, such as Theil's; pay attention to the rhotype() option.

. prais $OLS, rhotype(theil) twostep corc

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.5781

Cochrane-Orcutt AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  4,    30) =   53.70
       Model |  .111987045     4  .027996761      Prob > F      =  0.0000
    Residual |   .01564161    30  .000521387      R-squared     =  0.8774
-------------+------------------------------      Adj R-squared =  0.8611
       Total |  .127628655    34  .003753784      Root MSE      =  .02283

------------------------------------------------------------------------------


         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |   -.118986   .0370215    -3.21   0.003     -.194594    -.0433779
         lnI |   1.385947   .1205699    11.49   0.000      1.13971     1.632183
       lnPnc |   -.140429   .1459554    -0.96   0.344    -.4385097     .1576518
       lnPuc |  -.0554635   .0823714    -0.67   0.506    -.2236882     .1127613
       _cons |  -12.45608   1.077645   -11.56   0.000    -14.65693    -10.25524
-------------+----------------------------------------------------------------
         rho |   .5780528
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.059270

. prais $OLS, rhotype(nagar) twostep corc

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.7462

Cochrane-Orcutt AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  4,    30) =   27.31
       Model |   .04835238     4  .012088095      Prob > F      =  0.0000
    Residual |  .013278094    30  .000442603      R-squared     =  0.7846
-------------+------------------------------      Adj R-squared =  0.7558
       Total |  .061630474    34  .001812661      Root MSE      =  .02104

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1643651   .0384036    -4.28   0.000    -.2427958    -.0859344
         lnI |   1.245164   .1559178     7.99   0.000     .9267376     1.563591
       lnPnc |  -.0141688   .1443708    -0.10   0.922    -.3090133     .2806758
       lnPuc |  -.0561394   .0766464    -0.73   0.470    -.2126722     .1003934
       _cons |  -11.19776   1.399397    -8.00   0.000    -14.05572    -8.339815
-------------+----------------------------------------------------------------
         rho |   .7461546
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.174443

. prais $OLS, rhotype(regress) twostep corc    // default

Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6831

Cochrane-Orcutt AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  4,    30) =   35.43
       Model |  .066988349     4  .016747087      Prob > F      =  0.0000
    Residual |  .014180226    30  .000472674      R-squared     =  0.8253
-------------+------------------------------      Adj R-squared =  0.8020
       Total |  .081168575    34  .002387311      Root MSE      =  .02174

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1450741   .0381281    -3.80   0.001     -.222942    -.0672062
         lnI |   1.321657   .1415266     9.34   0.000     1.032621     1.610692
       lnPnc |    -.07231   .1463868    -0.49   0.625    -.3712718     .2266517
       lnPuc |  -.0562499   .0794347    -0.71   0.484    -.2184772     .1059774
       _cons |   -11.8828   1.267388    -9.38   0.000    -14.47115    -9.294444
-------------+----------------------------------------------------------------
         rho |   .6830819
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.130955

. prais $OLS, rhotype(freg) twostep corc


Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.6980

Cochrane-Orcutt AR(1) regression -- twostep estimates

      Source |       SS       df       MS          Number of obs =      35
-------------+------------------------------      F(  4,    30) =   33.28
       Model |  .062012395     4  .015503099      Prob > F      =  0.0000
    Residual |    .0139744    30  .000465813      R-squared     =  0.8161
-------------+------------------------------      Adj R-squared =  0.7916
       Total |  .075986795    34  .002234906      Root MSE      =  .02158

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.1493802   .0382318    -3.91   0.000      -.22746    -.0713004
         lnI |   1.306665   .1448787     9.02   0.000     1.010783     1.602547
       lnPnc |  -.0596277   .1461326    -0.41   0.686    -.3580704     .2388149
       lnPuc |  -.0563619   .0788564    -0.71   0.480    -.2174081     .1046842
       _cons |  -11.74877   1.297958    -9.05   0.000    -14.39955    -9.097981
-------------+----------------------------------------------------------------
         rho |   .6979822
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.140342

6.1.5 Iterative Prais-Winsten and Cochrane-Orcutt FGLS

STATA provides the iterative two-step estimation method for the Prais-Winsten and Cochrane-Orcutt FGLS.

. prais $OLS, rhotype(tscorr)    // Iterative Prais-Winsten FGLS

Iteration 0:   rho = 0.0000
Iteration 1:   rho = 0.6744
Iteration 2:   rho = 0.8361
Iteration 3:   rho = 0.9030
Iteration 4:   rho = 0.9273
Iteration 5:   rho = 0.9366
Iteration 6:   rho = 0.9403
Iteration 7:   rho = 0.9419
Iteration 8:   rho = 0.9426
Iteration 9:   rho = 0.9428
Iteration 10:  rho = 0.9430
Iteration 11:  rho = 0.9430
Iteration 12:  rho = 0.9430
Iteration 13:  rho = 0.9431
Iteration 14:  rho = 0.9431
Iteration 15:  rho = 0.9431
Iteration 16:  rho = 0.9431

Prais-Winsten AR(1) regression -- iterated estimates

      Source |       SS       df       MS          Number of obs =      36
-------------+------------------------------      F(  4,    31) =   31.14
       Model |  .044594423     4  .011148606      Prob > F      =  0.0000
    Residual |  .011097443    31  .000357982      R-squared     =  0.8007
-------------+------------------------------      Adj R-squared =  0.7750
       Total |  .055691865    35  .001591196      Root MSE      =  .01892

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.2101637   .0347875    -6.04   0.000    -.2811133     -.139214
         lnI |   1.071587   .1288525     8.32   0.000     .8087905     1.334383
       lnPnc |   .0939725   .1252189     0.75   0.459     -.161413     .3493581
       lnPuc |  -.0341095   .0653817    -0.52   0.606    -.1674564     .0992375
       _cons |  -9.666983   1.148614    -8.42   0.000     -12.0096    -7.324369

  • Jeeshim and KUCC625 (7/18/2006) Autocorrelation and the AR(1) Process: 26

    http://www.masil.org http://www.joomok.org

-------------+----------------------------------------------------------------
         rho |   .9430583
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.531091

The following example is the iterative Cochrane-Orcutt FGLS.

. prais $OLS, rhotype(tscorr) corc   // Iterative Cochrane-Orcutt FGLS

Iteration 0:   rho = 0.0000
Iteration 1:   rho = 0.6744
Iteration 2:   rho = 0.8080
Iteration 3:   rho = 0.9037
Iteration 4:   rho = 0.9235
Iteration 5:   rho = 0.9279
Iteration 6:   rho = 0.9294
Iteration 7:   rho = 0.9300
Iteration 8:   rho = 0.9301
Iteration 9:   rho = 0.9302
Iteration 10:  rho = 0.9303
Iteration 11:  rho = 0.9303
Iteration 12:  rho = 0.9303
Iteration 13:  rho = 0.9303
Iteration 14:  rho = 0.9303

Cochrane-Orcutt AR(1) regression -- iterated estimates

      Source |       SS       df       MS              Number of obs =      35
-------------+------------------------------           F(  4,    30) =   21.15
       Model |   .029891322     4  .007472831          Prob > F      =  0.0000
    Residual |    .01060038    30  .000353346          R-squared     =  0.7382
-------------+------------------------------           Adj R-squared =  0.7033
       Total |   .040491702    34  .001190932          Root MSE      =    .0188

------------------------------------------------------------------------------
         lnG |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        lnPg |  -.2223182    .036462    -6.10   0.000    -.2967836   -.1478527
         lnI |   .8847412   .2033351     4.35   0.000     .4694755    1.300007
       lnPnc |    .091974   .1237493     0.74   0.463    -.1607557    .3447038
       lnPuc |  -.0422291   .0655293    -0.64   0.524    -.1760577    .0915996
       _cons |  -7.865689   1.897604    -4.15   0.000    -11.74111   -3.990264
-------------+----------------------------------------------------------------
         rho |   .9302665
------------------------------------------------------------------------------
Durbin-Watson statistic (original)    0.604699
Durbin-Watson statistic (transformed) 1.515506
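Conceptually, the iteration alternates between estimating rho from the current residuals and rerunning OLS on the quasi-differenced data until rho stops changing. Below is a minimal numpy sketch of the Cochrane-Orcutt version; it is my own illustration, not Stata's code, and it uses the residual-regression estimator of rho (substitute the tscorr formula to mimic rhotype(tscorr)).

# Sketch of the iterative Cochrane-Orcutt FGLS loop that prais runs
# when the twostep option is omitted.
import numpy as np

def cochrane_orcutt(y, X, tol=1e-6, max_iter=100):
    """X includes a column of ones; returns (beta, rho, n_iter)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # Iteration 0: OLS, rho = 0
    rho_old = 0.0
    for it in range(1, max_iter + 1):
        e = y - X @ beta
        rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])    # regress e_t on e_(t-1)
        # quasi-difference the data, dropping the first observation (C-O)
        y_s = y[1:] - rho * y[:-1]
        X_s = X[1:] - rho * X[:-1]
        beta, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
        if abs(rho - rho_old) < tol:                  # rho has converged
            return beta, rho, it
        rho_old = rho
    return beta, rho, max_iter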

6.2 FGLS in SAS

SAS supports both the (iterative) two-step Prais-Winsten and maximum likelihood algorithms.
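For intuition about the second algorithm: exact ML for the AR(1) error model maximizes the likelihood built from the Prais-Winsten (all n observations) transformed sum of squares plus the Jacobian term (1/2)ln(1 - rho^2). The following is a minimal sketch of that objective, assuming numpy and scipy are available; it concentrates beta and sigma^2 out of the likelihood and searches over rho, and it illustrates the criterion rather than SAS's actual implementation.

# Sketch of exact ML for the AR(1) error model via the concentrated
# log-likelihood in rho. Illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

def pw_transform(y, X, rho):
    """Prais-Winsten transform: obs 1 is kept, scaled by sqrt(1 - rho^2)."""
    w = np.sqrt(1 - rho ** 2)
    y_s = np.concatenate(([w * y[0]], y[1:] - rho * y[:-1]))
    X_s = np.vstack((w * X[0], X[1:] - rho * X[:-1]))
    return y_s, X_s

def ar1_mle(y, X):
    n = len(y)
    def neg_loglik(rho):  # concentrated over beta and sigma^2, constants dropped
        y_s, X_s = pw_transform(y, X, rho)
        beta, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
        sse = np.sum((y_s - X_s @ beta) ** 2)
        return (n / 2) * np.log(sse / n) - 0.5 * np.log(1 - rho ** 2)
    rho = minimize_scalar(neg_loglik, bounds=(-0.999, 0.999), method="bounded").x
    y_s, X_s = pw_transform(y, X, rho)
    beta, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
    return beta, rho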

    6.2.1 Two-step Prais-Winsten Estimation

Once the variables are transformed, run OLS with the intercept suppressed in the REG procedure; a sketch of the transformation follows. SAS by default uses the autocorrelation coefficient as the estimator of rho.
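For concreteness, here is a minimal Python sketch of that transformation (my own illustration), assuming rho has already been estimated as .674395, the autocorrelation coefficient used in the output below. Prais-Winsten keeps the first observation, rescaled; Cochrane-Orcutt would simply drop it.

import numpy as np

def prais_winsten_transform(x, rho):
    """x*_1 = sqrt(1 - rho^2) * x_1 and x*_t = x_t - rho * x_(t-1)."""
    return np.concatenate(([np.sqrt(1 - rho ** 2) * x[0]],
                           x[1:] - rho * x[:-1]))

# e.g., TlnG = prais_winsten_transform(lnG, 0.674395). The regressor named
# "Intercept" in the MODEL statement is the transformed column of ones,
# which is why the literal intercept is suppressed with /NOINT.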

PROC REG DATA=masil.gasoline;
   MODEL TlnG = Intercept TlnPg TlnI TlnPnc TlnPuc /NOINT;
RUN;

The REG Procedure
Model: MODEL1
Dependent Variable: TlnG


Number of Observations Read          36
Number of Observations Used          36

NOTE: No intercept in model. R-Square is redefined.

                        Analysis of Variance

                                Sum of          Mean
Source                DF       Squares        Square    F Value    Pr > F
Model                  5       0.13379       0.02676      57.39    <.0001

                        Parameter Estimates

                     Parameter      Standard
Variable      DF      Estimate         Error    t Value    Pr > |t|
Intercept      1     -11.49075       0.93906     -12.24


                      Estimates of Autocorrelations

Lag   Covariance   Correlation   -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
  0     0.000940      1.000000   |                    |********************|
  1     0.000634      0.674395   |                    |*************       |

                      Preliminary MSE      0.000512

                Estimates of Autoregressive Parameters

                                   Standard
        Lag      Coefficient          Error      t Value
          1        -0.674395       0.134807        -5.00

                        Yule-Walker Estimates

SSE               0.01445373    DFE                    30
MSE                0.0004818    Root MSE          0.02195
SBC               -157.26029    AIC            -166.7614
Regress R-Square      0.9019    Total R-Square     0.9821
Durbin-Watson         1.1161

                                Standard     Approx
Variable      DF    Estimate       Error    t Value    Pr > |t|
Intercept      1    -11.4907      0.9546     -12.04


                      Estimates of Autocorrelations

Lag   Covariance   Correlation   -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
  0     0.000940      1.000000   |                    |********************|
  1     0.000634      0.674395   |                    |*************       |

                      Preliminary MSE      0.000512

                Estimates of Autoregressive Parameters

                                   Standard
        Lag      Coefficient          Error      t Value
          1        -0.674395       0.134807        -5.00

Algorithm converged.

                        Yule-Walker Estimates

SSE               0.01109864    DFE                    30
MSE                0.0003700    Root MSE          0.01923
SBC               -165.18245    AIC            -174.68356
Regress R-Square      0.8093    Total R-Square     0.9862
Durbin-Watson         1.4670

                                Standard     Approx
Variable      DF    Estimate       Error    t Value    Pr > |t|
Intercept      1     -9.6703      1.1670      -8.29


    Intercept 1 -12.3418 0.6749 -18.29


6.3 FGLS in LIMDEP

6.3.1 Using Two-Step P-W and C-O Estimations

In LIMDEP, the Regress command has the Ar1 option for fitting autocorrelated error models. First, estimate OLS without considering autocorrelation. Note that LIMDEP by default reports the D-W d based estimator of rho, .69765 (see the derivation after the output below).

--> REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC$

+-----------------------------------------------------------------------+
| Ordinary    least squares regression    Weighting variable = none     |
| Dep. var. = LNG      Mean=  -.3708600000E-02, S.D.=   .1516908393     |
| Model size: Observations =      36, Parameters =   5, Deg.Fr.=     31 |
| Residuals:  Sum of squares= .3383693649E-01, Std.Dev.=       .03304   |
| Fit:        R-squared=  .957985, Adjusted R-squared =        .95256   |
| Model test: F[  4,     31] =  176.71,   Prob value =        .00000    |
| Diagnostic: Log-L =    74.3732, Restricted(b=0) Log-L =    17.3181    |
|             LogAmemiyaPrCrt.=   -6.690, Akaike Info. Crt.=   -3.854   |
| Autocorrel: Durbin-Watson Statistic =   .60470,   Rho =     .69765    |
+-----------------------------------------------------------------------+
+---------+--------------+----------------+--------+---------+----------+
|Variable | Coefficient  | Standard Error |t-ratio |P[|T|>t] | Mean of X|
+---------+--------------+----------------+--------+---------+----------+
 Constant  -12.34185146      .67489522     -18.287   .0000
 LNPG      -.5909591880E-01  .32484970E-01  -1.819   .0786    .67409429
 LNI        1.373400354      .75627733E-01  18.160   .0000    9.1109277
 LNPNC     -.1267972409      .12699351       -.998   .3258    .44319819
 LNPUC     -.1187078514      .81337098E-01  -1.459   .1545    .66361220
(Note: E+nn or E-nn means multiply by 10 to + or -nn power.)
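The default rho comes straight from the Durbin-Watson statistic reported in this output: rho = 1 - d/2 = 1 - .60470/2 = .69765.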

LIMDEP by default uses the iterative Prais-Winsten algorithm and the D-W d based rho. The Maxit=1 option is needed to fit the two-step model without iteration, and the Rho= option specifies an estimator other than the D-W d based rho. Let us get the two-step Prais-Winsten estimates using the autocorrelation coefficient, .67439496.

--> REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC;Ar1;Maxit=1;Rho=.67439496$

+---------------------------------------------+
| AR(1) Model:   e(t) = rho * e(t-1) + u(t)   |
| Initial value of rho        =        .67439 |
| RHO fixed at this value. No iterations done |
| Method = Prais - Winsten                    |
| Iter=  1, SS=    .014, Log-L=     89.380783 |
| Final value of Rho          =       .674395 |
| Iter=  1, SS=    .014, Log-L=     89.380783 |
| Durbin-Watson:  e(t)        =       .294576 |
| Std. Deviation: e(t)        =       .029244 |
| Std. Deviation: u(t)        =       .021593 |
| Durbin-Watson:  u(t)        =      1.097155 |
| Autocorrelation: u(t)       =       .451423 |
| N[0,1] used for significance levels         |
+---------------------------------------------+
+---------+--------------+----------------+--------+---------+----------+
|Variable | Coefficient  | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X|
+---------+--------------+----------------+--------+---------+----------+
 Constant  -11.49076580      .93905346     -12.237   .0000
 LNPG      -.1463812539      .37075971E-01  -3.948   .0001    .67409429
 LNI        1.278265561      .10544651      12.122   .0000    9.1109277
 LNPNC     -.3988606680E-01  .12763530       -.313   .7547    .44319819
 LNPUC     -.6692949378E-01  .76628962E-01   -.873   .3824    .66361220
 RHO        .6743949600      .12480744       5.403   .0000
(Note: E+nn or E-nn means multiply by 10 to + or -nn power.)


In order to get the Cochrane-Orcutt estimates, add the Alg=Corc option.

--> REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC;Ar1;Alg=Corc;Maxit=1;Rho=.67439496$

+---------------------------------------------+
| AR(1) Model:   e(t) = rho * e(t-1) + u(t)   |
| Initial value of rho        =        .67439 |
| RHO fixed at this value. No iterations done |
| Method = Cochrane - Orcutt                  |
| Iter=  1, SS=    .014, Log-L=     89.544219 |
| Final value of Rho          =       .674395 |
| Iter=  1, SS=    .014, Log-L=     89.544219 |
| Durbin-Watson:  e(t)        =       .329030 |
| Std. Deviation: e(t)        =       .029568 |
| Std. Deviation: u(t)        =       .021832 |
| Durbin-Watson:  u(t)        =      1.149271 |
| Autocorrelation: u(t)       =       .425364 |
| N[0,1] used for significance levels         |
+---------------------------------------------+
+---------+--------------+----------------+--------+---------+----------+
|Variable | Coefficient  | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X|
+---------+--------------+----------------+--------+---------+----------+
 Constant  -11.95373176     1.2498799       -9.564   .0000
 LNPG      -.1426370959      .38058750E-01  -3.748   .0002    .67409429
 LNI        1.329596006      .13960285       9.524   .0000    9.1109277
 LNPNC     -.7936272661E-01  .14648472       -.542   .5880    .44319819
 LNPUC     -.5616334926E-01  .79750447E-01   -.704   .4813    .66361220
 RHO        .6743949600      .12480744       5.403   .0000
(Note: E+nn or E-nn means multiply by 10 to + or -nn power.)

These outputs are identical to what STATA produces. Now let us use the D-W d based estimator instead.

--> REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC;Ar1;Maxit=1$

+---------------------------------------------+
| AR(1) Model:   e(t) = rho * e(t-1) + u(t)   |
| Initial value of rho        =        .69765 |
| Maximum iterations          =             1 |
| Method = Prais - Winsten                    |
| Iter=  1, SS=    .014, Log-L=     89.845024 |
| Final value of Rho          =       .862570 |
| Iter=  1, SS=    .014, Log-L=     89.845024 |
| Durbin-Watson:  e(t)        =       .274861 |
| Std. Deviation: e(t)        =       .042097 |
| Std. Deviation: u(t)        =       .021298 |
| Durbin-Watson:  u(t)        =      1.125203 |
| Autocorrelation: u(t)       =       .437399 |
| N[0,1] used for significance levels         |
+---------------------------------------------+
+---------+--------------+----------------+--------+---------+----------+
|Variable | Coefficient  | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X|
+---------+--------------+----------------+--------+---------+----------+
 Constant  -11.38732107      .95549227     -11.918   .0000
 LNPG      -.1523079877      .37052367E-01  -4.111   .0000    .67409429
 LNI        1.266637673      .10730907      11.804   .0000    9.1109277
 LNPNC     -.3084534704E-01  .12719693       -.243   .8084    .44319819
 LNPUC     -.6380017910E-01  .75851491E-01   -.841   .4003    .66361220
 RHO        .8625695910      .85519204E-01  10.086   .0000
(Note: E+nn or E-nn means multiply by 10 to + or -nn power.)

--> REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC;Ar1;Alg=Corc;Maxit=1$

+---------------------------------------------+
| AR(1) Model:   e(t) = rho * e(t-1) + u(t)   |
| Initial value of rho        =        .69765 |
| Maximum iterations          =             1 |
| Method = Cochrane - Orcutt                  |
| Iter=  1, SS=    .014, Log-L=     89.951767 |
| Final value of Rho          =       .848906 |
| Iter=  1, SS=    .014, Log-L=     89.951767 |
| Durbin-Watson:  e(t)        =       .302189 |
| Std. Deviation: e(t)        =       .040841 |
| Std. Deviation: u(t)        =       .021586 |
| Durbin-Watson:  u(t)        =      1.163595 |
| Autocorrelation: u(t)       =       .418203 |
| N[0,1] used for significance levels         |
+---------------------------------------------+
+---------+--------------+----------------+--------+---------+----------+
|Variable | Coefficient  | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X|
+---------+--------------+----------------+--------+---------+----------+
 Constant  -11.75194466     1.2972716       -9.059   .0000
 LNPG      -.1492838663      .38229644E-01  -3.905   .0001    .67409429
 LNI        1.307020529      .14480358       9.026   .0000    9.1109277
 LNPNC     -.5991877953E-01  .14613907       -.410   .6818    .44319819
 LNPUC     -.5635909286E-01  .78869431E-01   -.715   .4749    .66361220
 RHO        .8489056293      .89340318E-01   9.502   .0000
(Note: E+nn or E-nn means multiply by 10 to + or -nn power.)

    6.3.2 Using Iterative Two-Step P-W and C-O Estimations

Now, drop the Maxit=1 option to fit the iterative two-step Prais-Winsten and Cochrane-Orcutt models. LIMDEP produces estimates and standard errors that are slightly different from those of STATA, and it needs fewer iterations to reach convergence.

--> REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC;Ar1$

+---------------------------------------------+
| AR(1) Model:   e(t) = rho * e(t-1) + u(t)   |
| Initial value of rho        =        .69765 |
| Maximum iterations          =           100 |
| Method = Prais - Winsten                    |
| Iter=  1, SS=    .014, Log-L=     89.845024 |
| Iter=  2, SS=    .012, Log-L=     92.830874 |
| Iter=  3, SS=    .011, Log-L=     93.363107 |
| Iter=  4, SS=    .011, Log-L=     93.338255 |
| Iter=  5, SS=    .011, Log-L=     93.305255 |
| Iter=  6, SS=    .011, Log-L=     93.292051 |
| Final value of Rho          =       .951197 |
| Iter=  6, SS=    .011, Log-L=     93.292051 |
| Durbin-Watson:  e(t)        =       .097606 |
| Std. Deviation: e(t)        =       .061276 |
| Std. Deviation: u(t)        =       .018909 |
| Durbin-Watson:  u(t)        =      1.555377 |
| Autocorrelation: u(t)       =       .222311 |
| N[0,1] used for significance levels         |
+---------------------------------------------+
+---------+--------------+----------------+--------+---------+----------+
|Variable | Coefficient  | Standard Error |b/St.Er.|P[|Z|>z] | Mean of X|
+---------+--------------+----------------+--------+---------+----------+
 Constant  -9.618417704     1.1586795       -8.301   .0000
 LNPG      -.2112382834      .34724610E-01  -6.083   .0000    .67409429
 LNI        1.065917444      .12991909       8.204   .0000    9.1109277
 LNPNC      .9701687217E-01  .12555264        .773   .4397    .44319819
 LNPUC     -.3367703716E-01  .65148131E-01   -.517   .6052    .66361220
 RHO        .9511970777      .52160225E-01  18.236   .0000
(Note: E+nn or E-nn means multiply by 10 to + or -nn power.)

--> REGRESS;Lhs=LNG;Rhs=ONE,LNPG,LNI,LNPNC,LNPUC;Ar1;Alg=Corc$
