APPLYING REGRESSION USING EVIEWS (WITH COMMANDS)



    APPLIED ECONOMICS RESEARCH CENTRE

    2013-14

    ECONOMETRICS 2

    TERM PAPER

    SAFIA ASLAM


CONTENTS

Chapter 8

Multiple Regression Analysis: The Problem of Inference

Restricted Least Squares: Testing Linear Equality Restrictions

Wald Test

Testing for Structural or Parameter Stability of Regression Models

O The Chow Test

Computer Application using EViews

Chapter 10

Multicollinearity: What Happens If the Regressors Are Correlated?

The Nature of Multicollinearity

Sources of Multicollinearity

Practical Consequences of Multicollinearity

Detection of Multicollinearity

O High R2 but Few Significant t-Ratios

O High Pair-wise Correlations among Regressors

O Examination of Partial Correlations

O Auxiliary Regressions

O Tolerance and Variance Inflation Factor

Remedial Measures

O Do Nothing, or

O Follow Some Rules of Thumb:

O A Priori Information

O Combining Cross-Sectional and Time Series Data

O Dropping a Variable(s) and Specification Bias

O Transformation of Variables

O First Difference Form

O Ratio Transformation


O Additional or New Data

O Computer Application using EViews

Chapter 12

Autocorrelation: What Happens If the Error Terms Are Correlated?

The Nature of Autocorrelation

Sources of Autocorrelation

O Inertia

O Specification Bias: Excluded Variables Case

O Specification Bias: Incorrect Functional Form

O Cobweb Phenomenon

O Lags

O Manipulation of Data

O Data Transformation

Practical Consequences of Autocorrelation


Detection of Autocorrelation

O Graphical Method

O The Runs Test

O Durbin-Watson d Test

O A General Test of Autocorrelation: The Breusch-Godfrey (BG) Test

Remedial Measures of Autocorrelation

O Newey-West Method

O Generalized Least-Squares (GLS) Method

Correcting for (Pure) Autocorrelation: The Method of Generalized Least Squares (GLS)

O When ρ Is Known

O When ρ Is Not Known:

O The First-Difference Method

O Berenblutt-Webb Test

O ρ Based on Durbin-Watson d Statistic

O ρ Estimated from the Residuals

O Theil-Nagar ρ Estimate Based on d Statistic

O Estimating ρ: The Cochrane-Orcutt (C-O) Iterative Procedure

O Estimating ρ: Durbin's Two-Step Method

O The Durbin h Statistic

Computer Application using EViews

Chapter 11

Heteroscedasticity: What Happens If the Error Variance Is Nonconstant?

The Nature of Heteroscedasticity

Sources of Heteroscedasticity

O Error-Learning Models

O As Data Collecting Techniques Improve, σi2 Is Likely to Decrease

O Outliers

O Other Sources of Heteroscedasticity:

O Incorrect Data Transformation (e.g., Ratio or First Difference Transformations)

O Incorrect Functional Form (e.g., Linear versus Log-Linear Models)

Practical Consequences of Heteroscedasticity

Detection of Heteroscedasticity

Informal Methods


O Nature of the Problem

O Graphical Method

Formal Methods

O Park Test

O Glejser Test

O Goldfeld-Quandt Test

O White's General Heteroscedasticity Test

O Koenker-Bassett (KB) Test

Remedial Measures

1. When σi2 Is Known: The Method of Weighted Least Squares

2. When σi2 Is Not Known:

O White's Heteroscedasticity-Consistent Variances and Standard Errors

O Plausible Assumptions about the Heteroscedasticity Pattern:

Assumption 1: The Error Variance Is Proportional to Xi2

Assumption 2: The Error Variance Is Proportional to Xi (The Square Root Transformation)

Assumption 3: The Error Variance Is Proportional to the Square of the Mean Value of Y

Assumption 4: A Log Transformation

3. Computer Application using EViews

Chapters 18 to 20

Simultaneous Regression Models

1. The Nature of Simultaneous-Equation Models

2. The Identification Problem

3. Rules for Identification

O The Order Condition of Identifiability

O The Rank Condition of Identifiability

Hausman Specification Test

The Method of Indirect Least Squares (ILS): A Just Identified Equation

The Method of Two-Stage Least Squares (2SLS): An Overidentified Equation

The Method of Three-Stage Least Squares (3SLS) Using EViews

The Granger Test

Computer Application using EViews

Chapter 9

Dummy Variable Regression Models

The Nature of Dummy Variables

Caution in the Use of Dummy Variables

Analysis of Variance (ANOVA) Models

Analysis of Covariance (ANCOVA) Models

Computer Application using EViews


    CHAPTER 8

    MULTIPLE REGRESSION ANALYSIS: THE PROBLEM OF

    INFERENCE

    RESTRICTED LEAST SQUARES: TESTING LINEAR EQUALITY RESTRICTIONS

WALD TEST: The Wald test is used to test the validity of linear restrictions imposed on the parameters of a model. There are occasions where economic theory suggests that the coefficients in a regression model satisfy some linear equality restrictions.

For instance, consider the Cobb-Douglas production function

Yi = β1 Ki^β2 Li^β3 e^ui

where Yi = output, Li = labor input, and Ki = capital input.

Written in log form, the equation becomes

ln Yi = β0 + β2 ln Ki + β3 ln Li + ui    (8.1)

where β0 = ln β1. Now if there are constant returns to scale (an equiproportional change in output for an equiproportional change in the inputs), economic theory suggests that

β2 + β3 = 1

This is an example of a linear equality restriction.

    TESTING LINEAR RESTRICTIONS IN EVIEWS:

Step 1: First regress the model, equation (8.1).

Quick → Estimate Equation → write: log(Y) C log(K) log(L)
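The same estimation can also be issued from the EViews command window; a minimal sketch, assuming the workfile contains series named Y, K, and L, and using eq81 as a hypothetical equation name:

    ' estimate the log-form Cobb-Douglas regression (8.1)
    equation eq81.ls log(y) c log(k) log(l)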

    Dependent Variable: LOG(Y)

    Method: Least Squares

    Date: 08/21/13 Time: 13:41


    Sample: 1960 2012

    Included observations: 53

Variable    Coefficient    Std. Error    t-Statistic    Prob.

    C 10.94926 0.420685 26.02720 0.0000

    LOG(K) 3.734688 0.138653 26.93548 0.0000

    LOG(L) -1.271933 0.214719 -5.923718 0.0000

    R-squared 0.996592 Mean dependent var 18.36495

    Adjusted R-squared 0.996456 S.D. dependent var 0.432456

    S.E. of regression 0.025746 Akaike info criterion -4.426138

    Sum squared resid 0.033143 Schwarz criterion -4.314612

    Log likelihood 120.2927 F-statistic 7310.653

    Durbin-Watson stat 0.054050 Prob(F-statistic) 0.000000

Step 2: With equation (8.1) estimated, in the regression window:

View → Coefficient Tests → Wald - Coefficient Restrictions → write: C(2) + C(3) = 1
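From the command window, the same restriction can be tested directly on the stored equation object; a sketch, assuming the hypothetical name eq81 from Step 1:

    ' Wald test of the linear restriction C(2) + C(3) = 1
    eq81.wald c(2) + c(3) = 1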

    Wald Test:

    Equation: Untitled

    Test Statistic Value df Probability

    F-statistic 319.2578 (1, 50) 0.0000

    Chi-square 319.2578 1 0.0000

    Null Hypothesis Summary:


    Normalized Restriction (= 0) Value Std. Err.

    -1 + C(2) + C(3) 1.462755 0.081865

    Restrictions are linear in coefficients.

    WALD TEST MANUALLY (USING EVIEWS):

Unrestricted model: ln Yi = β0 + β2 ln Ki + β3 ln Li + ui

    STEP 1:

First regress equation (8.1):

Quick → Estimate Equation → write: log(Y) C log(K) log(L)

and obtain RSSUR.

    Dependent Variable: LOG(Y)

    Method: Least Squares

    Date: 08/21/13 Time: 13:41

    Sample: 1960 2012

    Included observations: 53

Variable    Coefficient    Std. Error    t-Statistic    Prob.

    C 10.94926 0.420685 26.02720 0.0000

    LOG(K) 3.734688 0.138653 26.93548 0.0000

    LOG(L) -1.271933 0.214719 -5.923718 0.0000

    R-squared 0.996592 Mean dependent var 18.36495

    Adjusted R-squared 0.996456 S.D. dependent var 0.432456


    S.E. of regression 0.025746 Akaike info criterion -4.426138

    Sum squared resid 0.033143 Schwarz criterion -4.314612

    Log likelihood 120.2927 F-statistic 7310.653

    Durbin-Watson stat 0.054050 Prob(F-statistic) 0.000000

Restricted model: ln(Yi/Li) = β0 + β2 ln(Ki/Li) + ui    (8.2)

    STEP 2:

First regress equation (8.2):

Quick → Estimate Equation → write: log(Y/L) C log(K/L)

and obtain RSSR.

    Dependent Variable: LOG(Y/L)

    Method: Least Squares

    Date: 08/21/13 Time: 13:48

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 18.42422 0.119150 154.6300 0.0000

    LOG(K/L) 5.936631 0.170981 34.72092 0.0000

    R-squared 0.959412 Mean dependent var 14.30042

    Adjusted R-squared 0.958617 S.D. dependent var 0.340546

    S.E. of regression 0.069277 Akaike info criterion -2.464402

    Sum squared resid 0.244764 Schwarz criterion -2.390052

    Log likelihood 67.30666 F-statistic 1205.542


    Durbin-Watson stat 0.035469 Prob(F-statistic) 0.000000

STEP 3:

Apply the RSS version of the F test:

F = [(RSSR − RSSUR)/m] / [RSSUR/(n − k)]

and compare it with the critical F value at m and (n − k) degrees of freedom; if |Fcal| > |Fcritical|, the restriction is rejected (invalid), and vice versa.

F = [(0.244764 − 0.033143)/1] / [0.033143/(53 − 3)]

F = 319.69

The F statistic is significant at the 1% level, so the linear restriction (constant returns to scale) is rejected.
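The same arithmetic can be scripted so that the F statistic is computed from the two stored equations; a sketch, assuming hypothetical object names eq_ur and eq_r, and assuming the @ssr and @regobs data members (sum of squared residuals and number of observations):

    ' RSS version of the F test: m = 1 restriction, k = 3 estimated coefficients
    equation eq_ur.ls log(y) c log(k) log(l)
    equation eq_r.ls log(y/l) c log(k/l)
    scalar fstat = ((eq_r.@ssr - eq_ur.@ssr)/1)/(eq_ur.@ssr/(eq_ur.@regobs - 3))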

TESTING FOR STRUCTURAL OR PARAMETER STABILITY OF REGRESSION MODELS: THE CHOW TEST

    When we use a regression model involving time series data, it may happen that there is a

    structural change in the relationship between the regressand Y and the regressors. By structural

    change, we mean that the values of the parameters of the model do not remain the same

    through the entire time period.

USING EVIEWS:

Step 1: First regress the model, equation (8.3):

Quick → Estimate Equation → write: SAV C YD

    Dependent Variable: SAV

    Method: Least Squares


    Date: 08/21/13 Time: 14:08

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -15.73656 1.331322 -11.82025 0.0000

    YD 0.771388 0.022667 34.03118 0.0000

    R-squared 0.957821 Mean dependent var 29.38251

    Adjusted R-squared 0.956994 S.D. dependent var 4.246259

    S.E. of regression 0.880589 Akaike info criterion 2.620554

    Sum squared resid 39.54728 Schwarz criterion 2.694904

    Log likelihood -67.44468 F-statistic 1158.121

    Durbin-Watson stat 0.024164 Prob(F-statistic) 0.000000

Step 2: With equation (8.3) estimated, in the regression window: View → Stability Tests → Chow Breakpoint Test

Write the breakpoint year in the box: 1980
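Equivalently, from the command window; a sketch assuming the series names SAV and YD and a hypothetical equation name eq83:

    ' estimate the savings function and run the Chow breakpoint test at 1980
    equation eq83.ls sav c yd
    eq83.chow 1980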

    Chow Breakpoint Test: 1980

    F-statistic 303.2648 Probability 0.000000

    Log likelihood ratio 137.4620 Probability 0.000000

The F statistic is significant at the 1% level, so the null hypothesis of parameter stability is rejected: there is a structural break at 1980.


    MODEL

    DETERMINANTS OF LIFE EXPECTANCY IN PAKISTAN:

Y = β1 + β2X1 + β3X2 + β4X3 + β5X4 + β6X5 + u

    X1: POPULATION

    X2: GDP

X3: UNEMPLOYMENT

    X4: URBAN POPULATION

    X5: HEALTH EXPENDITURE

    REGRESSION RESULTS:

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 13:57

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -5.519635 1.350874 -4.085972 0.0002

    X1 -1.20E-07 9.29E-09 -12.86161 0.0000

    X2 -1.61E-11 2.45E-12 -6.580235 0.0000

    X3 -0.215865 0.085649 -2.520339 0.0152

    X4 2.668870 0.070082 38.08228 0.0000

    X5 -0.163192 0.129846 -1.256813 0.2150

    R-squared 0.997288 Mean dependent var 58.49076

    Adjusted R-squared 0.997000 S.D. dependent var 5.387357


    S.E. of regression 0.295091 Akaike info criterion 0.503204

    Sum squared resid 4.092694 Schwarz criterion 0.726256

    Log likelihood -7.334900 F-statistic 3456.958

    Durbin-Watson stat 0.196325 Prob(F-statistic) 0.000000

    CHAPTER 10

    MULTICOLLINEARITY: WHAT HAPPENS IF THE

    REGRESSORS ARE CORRELATED?

    THE NATURE OF MULTICOLLINEARITY

The term multicollinearity is due to Ragnar Frisch. Originally it meant the existence of a perfect, or exact, linear relationship among some or all explanatory variables of a regression model:

λ1X1 + λ2X2 + ··· + λkXk = 0    (10.1.1)

λ1X1 + λ2X2 + ··· + λkXk + vi = 0    (10.1.2)

where λ1, λ2, ..., λk are constants not all zero simultaneously and vi is a stochastic error term.

    Why does the classical linear regression model assume that there is no multicollinearity among

    the Xs? The reasoning is this: If multicollinearity is perfect in the sense of (10.1.1), the

    regression coefficients of the X variables are indeterminate and their standard errors are

    infinite. If multicollinearity is less than perfect, as in (10.1.2), the regression coefficients,

    although determinate, possess large standard errors (in relation to the coefficients themselves),

    which means the coefficients cannot be estimated with great precision or accuracy.


    SOURCES OF MULTICOLLINEARITY

    There are several sources of multicollinearity. As Montgomery and Peck note, multicollinearity

    may be due to the following factors;

1. The data collection method employed, for example, sampling over a limited range of the values taken by the regressors in the population.

2. Constraints on the model or in the population being sampled. For example, in the regression of electricity consumption on income (X2) and house size (X3), there is a physical constraint in the population in that families with higher incomes generally have larger homes than families with lower incomes.

3. Model specification, for example, adding polynomial terms to a regression model, especially when the range of the X variable is small.

4. An overdetermined model. This happens when the model has more explanatory variables than the number of observations. This could happen in medical research, where there may be a small number of patients about whom information is collected on a large number of variables.

An additional reason for multicollinearity, especially in time series data, may be that the regressors included in the model share a common trend, that is, they all increase or decrease over time.

    PRACTICAL CONSEQUENCES OF MULTICOLLINEARITY

In cases of near or high multicollinearity, one is likely to encounter the following consequences:

a) Although BLUE, the OLS estimators have large variances and covariances, making precise estimation difficult.

b) Because of consequence (a), the confidence intervals tend to be much wider, leading to the acceptance of the zero null hypothesis (i.e., that the true population coefficient is zero) more readily.

c) Also because of consequence (a), the t-ratios of one or more coefficients tend to be statistically insignificant.

d) Although the t-ratios of one or more coefficients are statistically insignificant, R2, the overall measure of goodness of fit, can be very high.

e) The OLS estimators and their standard errors can be sensitive to small changes in the data.

DOING REGRESSION USING EVIEWS FOR MULTICOLLINEARITY

    DETECTION METHODS:


1. HIGH R2 BUT FEW SIGNIFICANT t-RATIOS:

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

STEP:

First regress the above equation:

Open the file containing data → Quick → Estimate Equation → write: Y C X1 X2 X3 X4 X5

and check the R2 and t-ratios.
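A command-window equivalent, using eq10 as a hypothetical equation name for the full model that the diagnostics below refer back to:

    ' full regression of life expectancy on the five regressors
    equation eq10.ls y c x1 x2 x3 x4 x5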

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 13:57

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -5.519635 1.350874 -4.085972 0.0002

    X1 -1.20E-07 9.29E-09 -12.86161 0.0000

    X2 -1.61E-11 2.45E-12 -6.580235 0.0000

    X3 -0.215865 0.085649 -2.520339 0.0152

    X4 2.668870 0.070082 38.08228 0.0000

    X5 -0.163192 0.129846 -1.256813 0.2150

    R-squared 0.997288 Mean dependent var 58.49076

    Adjusted R-squared 0.997000 S.D. dependent var 5.387357

    S.E. of regression 0.295091 Akaike info criterion 0.503204

    Sum squared resid 4.092694 Schwarz criterion 0.726256

    Log likelihood -7.334900 F-statistic 3456.958


    Durbin-Watson stat 0.196325 Prob(F-statistic) 0.000000

The regression results show that R2 = 0.997288 (very high). X1, X2, X3, and X4 are significant, while X5 is insignificant. A high R2 together with an insignificant regressor suggests that multicollinearity may be present.

2. HIGH PAIR-WISE CORRELATION AMONG REGRESSORS:

STEP:

First open the file containing data → Quick → Group Statistics → Correlations

write: X1 X2 X3 X4 X5
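The same matrix can be produced from the command window; a sketch using g1 as a hypothetical group name:

    ' pairwise correlation matrix of the regressors
    group g1 x1 x2 x3 x4 x5
    g1.cor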

    X1 X2 X3 X4 X5

    X1 1.000000 0.898710 0.900580 0.988885 -0.071599

    X2 0.898710 1.000000 0.713399 0.863979 -0.222654

    X3 0.900580 0.713399 1.000000 0.902906 -0.159905

    X4 0.988885 0.863979 0.902906 1.000000 -0.053507

    X5 -0.071599 -0.222654 -0.159905 -0.053507 1.000000

The correlation matrix shows that X1 and X4 are very highly correlated (0.99); the pairwise correlations among X1, X2, X3, and X4 are all above 0.7.

3. EXAMINATION OF PARTIAL CORRELATIONS:

STEP:

First regress the simple equations. Open the file containing data → Quick → Estimate Equation → write: Y C X1, Y C X2, Y C X3, Y C X4, Y C X5 individually, and compare each r2 with the R2 of the overall regression.

    REGRESS Y ON X1:


    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 14:02

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 46.13325 0.681312 67.71239 0.0000

    X1 1.20E-07 6.11E-09 19.57344 0.0000

    R-squared 0.882521 Mean dependent var 58.49076

Overall R2 = 0.997288

    REGRESS Y ON X2:

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 14:03

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 54.61037 0.659574 82.79642 0.0000

    X2 7.42E-11 8.68E-12 8.554942 0.0000


    R-squared 0.589329 Mean dependent var 58.49076

Overall R2 = 0.997288

    REGRESS Y ON X3:

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 14:03

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 47.82694 0.889198 53.78662 0.0000

    X3 3.221181 0.245909 13.09909 0.0000

    R-squared 0.770875 Mean dependent var 58.49076

Overall R2 = 0.997288

    REGRESS Y ON X4:

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 14:03

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.


    C 22.00695 1.082998 20.32039 0.0000

    X4 1.241685 0.036487 34.03118 0.0000

    R-squared 0.957821 Mean dependent var 58.49076

Overall R2 = 0.997288

    REGRESS Y ON X5:

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 14:04

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 58.94157 5.968745 9.875036 0.0000

    X5 -0.146728 1.927387 -0.076128 0.9396

    R-squared 0.000114 Mean dependent var 58.49076

Overall R2 = 0.997288

Each individual r2 is well below the overall R2, so by this comparison the data do not signal troublesome multicollinearity.

4. AUXILIARY REGRESSIONS:

STEP:

Regress each regressor on the remaining regressors. Open the file containing data → Quick → Estimate Equation → write: X1 C X2 X3 X4 X5, X2 C X1 X3 X4 X5, and so on for X3, X4, and X5, individually, and compare each auxiliary R2 with the overall R2.


    REGRESS X1 ON X2 X3 X4 X5:

    Dependent Variable: X1

    Method: Least Squares

    Date: 08/19/13 Time: 14:06

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -1.22E+08 11339153 -10.78366 0.0000

    X2 0.000186 2.70E-05 6.900419 0.0000

    X3 4481648. 1162156. 3.856322 0.0003

    X4 6366302. 583130.7 10.91745 0.0000

    X5 4542256. 1906829. 2.382100 0.0212

    R-squared 0.989170 Mean dependent var 1.03E+08

Overall R2 = 0.997288

    REGRESS X2 ON X1 X3 X4 X5:

    Dependent Variable: X2

    Method: Least Squares

    Date: 08/19/13 Time: 14:07

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.


    C 1.84E+11 7.50E+10 2.451252 0.0179

    X1 2677.158 387.9703 6.900419 0.0000

    X3 -2.16E+10 3.97E+09 -5.426820 0.0000

    X4 -8.45E+09 3.94E+09 -2.141991 0.0373

    X5 -2.88E+10 6.42E+09 -4.489499 0.0000

    R-squared 0.910163 Mean dependent var 5.23E+10

Overall R2 = 0.997288

    REGRESS X3 ON X1 X2 X4 X5:

    Dependent Variable: X3

    Method: Least Squares

    Date: 08/19/13 Time: 14:07

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 1.445271 2.266943 0.637542 0.5268

    X1 5.28E-08 1.37E-08 3.856322 0.0003

    X2 -1.76E-11 3.25E-12 -5.426820 0.0000

    X4 -0.011486 0.118091 -0.097265 0.9229

    X5 -0.757352 0.189557 -3.995372 0.0002

    R-squared 0.894134 Mean dependent var 3.310530


Overall R2 = 0.997288

    REGRESS X4 ON X1 X2 X3 X5:

    Dependent Variable: X4

    Method: Least Squares

    Date: 08/19/13 Time: 14:08

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 18.57003 0.745923 24.89536 0.0000

    X1 1.12E-07 1.03E-08 10.91745 0.0000

    X2 -1.03E-11 4.82E-12 -2.141991 0.0373

    X3 -0.017156 0.176382 -0.097265 0.9229

    X5 -0.051514 0.267322 -0.192705 0.8480

    R-squared 0.981090 Mean dependent var 29.38251

Overall R2 = 0.997288

    REGRESS X5 ON X1 X2 X3 X4:

    Dependent Variable: X5


    Method: Least Squares

    Date: 08/19/13 Time: 14:08

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 2.736547 1.448764 1.888884 0.0650

    X1 2.33E-08 9.77E-09 2.382100 0.0212

    X2 -1.03E-11 2.29E-12 -4.489499 0.0000

    X3 -0.329525 0.082477 -3.995372 0.0002

    X4 -0.015007 0.077873 -0.192705 0.8480

    R-squared 0.351577 Mean dependent var 3.072447

Overall R2 = 0.997288

Each auxiliary R2 is below the overall R2, so by Klein's rule of thumb the multicollinearity is not troublesome.
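The contents also list tolerance and the variance inflation factor, and both follow directly from these auxiliary regressions: VIFj = 1/(1 − Rj2) and TOLj = 1 − Rj2. A sketch, assuming the auxiliary regression for X1 is stored under the hypothetical name eq_aux1 and assuming the @r2 data member:

    ' VIF and tolerance for X1 from its auxiliary regression
    equation eq_aux1.ls x1 c x2 x3 x4 x5
    scalar vif_x1 = 1/(1 - eq_aux1.@r2)
    scalar tol_x1 = 1 - eq_aux1.@r2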

    REMEDIAL MEASURES:

A. DROPPING A VARIABLE(S) AND SPECIFICATION BIAS:

STEP:

First regress equation (10.1) without X1 (assuming it causes the multicollinearity):

Open the file containing data → Quick → Estimate Equation → write: Y C X2 X3 X4 X5

and check the R2 and t-ratios.

    Dependent Variable: Y


    Method: Least Squares

    Date: 08/19/13 Time: 14:12

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 9.097915 1.536073 5.922841 0.0000

    X2 -3.84E-11 3.65E-12 -10.50413 0.0000

    X3 -0.751619 0.157433 -4.774216 0.0000

    X4 1.907816 0.078995 24.15124 0.0000

    X5 -0.706191 0.258311 -2.733880 0.0087

    R-squared 0.987744 Mean dependent var 58.49076

    Adjusted R-squared 0.986722 S.D. dependent var 5.387357

    S.E. of regression 0.620775 Akaike info criterion 1.973891

    Sum squared resid 18.49733 Schwarz criterion 2.159768

    Log likelihood -47.30811 F-statistic 967.0998

    Durbin-Watson stat 0.340734 Prob(F-statistic) 0.000000

After dropping X1, all remaining t-ratios are significant, suggesting that the multicollinearity has been alleviated.

B. TRANSFORMATION OF VARIABLES:

Here the model is transformed into first-difference form, regressing D(Y) on D(X1), ..., D(X5).

    Dependent Variable: D(Y)

    Method: Least Squares

    Date: 08/19/13 Time: 14:18


    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 1.080539 0.091881 11.76017 0.0000

    D(X1) -2.20E-07 1.15E-08 -19.15005 0.0000

    D(X2) -4.13E-12 1.19E-12 -3.461294 0.0012

    D(X3) 0.002617 0.020385 0.128386 0.8984

    D(X4) -0.482934 0.278996 -1.730969 0.0902

    D(X5) 0.009456 0.021666 0.436466 0.6645

    R-squared 0.922727 Mean dependent var 0.362255

    Adjusted R-squared 0.914328 S.D. dependent var 0.179894

    S.E. of regression 0.052655 Akaike info criterion -2.941961

    Sum squared resid 0.127535 Schwarz criterion -2.716817

    Log likelihood 82.49098 F-statistic 109.8591

    Durbin-Watson stat 1.059673 Prob(F-statistic) 0.000000

C. ADDITIONAL OR NEW DATA.


CHAPTER 12

AUTOCORRELATION: WHAT HAPPENS IF THE ERROR TERMS ARE CORRELATED?

    THE NATURE OF AUTOCORRELATION

The term autocorrelation may be defined as correlation between members of series of observations ordered in time [as in time series data] or space [as in cross-sectional data]. In the regression context, the classical linear regression model assumes that such autocorrelation does not exist in the disturbances ui.

Tintner defines autocorrelation as lag correlation of a given series with itself, lagged by a number of time units, whereas he reserves the term serial correlation for lag correlation between two different series.

    SOURCES OF AUTOCORRELATION

1. Inertia. A salient feature of most economic time series is inertia, or sluggishness. As is well known, time series such as GNP, price indexes, production, employment, and unemployment exhibit (business) cycles.

2. Specification Bias: Excluded Variables Case

3. Specification Bias: Incorrect Functional Form

4. Cobweb Phenomenon

5. Lags

6. Manipulation of Data: Another source of manipulation is interpolation or extrapolation of data.

7. Data Transformation


    PRACTICAL CONSEQUENCES OF AUTOCORRELATION

In the presence of autocorrelation the usual OLS estimators, although linear, unbiased, and asymptotically (i.e., in large samples) normally distributed, are no longer minimum variance among all linear unbiased estimators. In short, they are not efficient relative to other linear unbiased estimators. Put differently, they may not be BLUE. As a result, the usual t, F, and χ2 tests may not be valid.

DOING REGRESSION USING EVIEWS FOR AUTOCORRELATION

    DETECTION METHOD:

    1. GRAPHICAL METHOD:

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation
write: X C Y

Step 2: Obtain the residuals
From the estimated equation result window: Proc → Make Residual Series → (name) → OK.

Step 3: Plot the residuals against their own lag and inspect the pattern for autocorrelation.
From Step 2 → Quick → Graph → Scatter
write: r1(-1) r1
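The three steps can be scripted; a sketch keeping the bivariate specification used above, with eq12g and r1 as hypothetical names and assuming the makeresids proc and the scat graph command:

    ' estimate, save residuals, and plot residuals against their own lag
    equation eq12g.ls x c y
    eq12g.makeresids r1
    scat r1(-1) r1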


The scatter of rt against rt-1 shows a clear upward pattern: the residuals are serially (and positively) correlated.

2. RUNS TEST:

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation
write: Y C X1 X2 X3 X4 X5

Step 2: Obtain the residuals
From the estimated equation result window: Proc → Make Residual Series → (name) → OK.

Step 3: Count the runs by looking at the changing signs of the residuals, then compute the mean and variance of the number of runs and the corresponding confidence interval, and check whether the observed number of runs lies inside it.

(- - - - - - -, + + + + + + + + + +, - - - - - - - - - - - - - -, +, - - -, + + + + + + + + + +, - - - - -, + +)

R = 8

N1 = 23

N2 = 29

E(R) = 2N1N2/N + 1

= 1334/52 + 1

= 26.654

VARIANCE: σR2 = 2N1N2(2N1N2 − N) / [N2(N − 1)]

= (1334)(1282) / [(2704)(51)]


= 1710188/137904

= 12.401, so σR = √12.401 = 3.522

If the null hypothesis of randomness is sustainable, following the properties of the normal distribution, we should expect that

Prob[E(R) − 1.96σR ≤ R ≤ E(R) + 1.96σR] = 0.95

Prob[26.654 − 1.96(3.522) ≤ R ≤ 26.654 + 1.96(3.522)], i.e., Prob[19.75 ≤ R ≤ 33.56]

The observed number of runs, R = 8, lies outside this interval, so we reject the null hypothesis of randomness: the residuals are (positively) autocorrelated, in line with the graphical method above.
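These quantities can also be computed as EViews scalars; a sketch (names hypothetical, @sqrt is the square-root function):

    ' runs-test mean, variance, and 95% confidence bounds
    scalar n1 = 23
    scalar n2 = 29
    scalar n = n1 + n2
    scalar er = 2*n1*n2/n + 1
    scalar vr = 2*n1*n2*(2*n1*n2 - n)/(n^2*(n - 1))
    scalar rlow = er - 1.96*@sqrt(vr)
    scalar rhigh = er + 1.96*@sqrt(vr)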

3. DURBIN-WATSON d TEST:

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation
write: Y C X1 X2 X3 X4 X5

The Durbin-Watson d statistic is reported in the regression output.

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 13:57

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.


    C -5.519635 1.350874 -4.085972 0.0002

    X1 -1.20E-07 9.29E-09 -12.86161 0.0000

    X2 -1.61E-11 2.45E-12 -6.580235 0.0000

    X3 -0.215865 0.085649 -2.520339 0.0152

    X4 2.668870 0.070082 38.08228 0.0000

    X5 -0.163192 0.129846 -1.256813 0.2150

    R-squared 0.997288 Mean dependent var 58.49076

    Adjusted R-squared 0.997000 S.D. dependent var 5.387357

    S.E. of regression 0.295091 Akaike info criterion 0.503204

    Sum squared resid 4.092694 Schwarz criterion 0.726256

    Log likelihood -7.334900 F-statistic 3456.958

    Durbin-Watson stat 0.196325 Prob(F-statistic) 0.000000

4. A GENERAL TEST OF AUTOCORRELATION: THE BREUSCH-GODFREY (BG) TEST

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation → Equation Specification window
write: Y C X1 X2 X3 X4 X5


Step 2: From Step 1, in the estimated equation window: View → Residual Tests → Serial Correlation LM Test (here with 6 lags).
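The same test can be issued on the equation object from the command window; a sketch with the hypothetical name eq12 and the six lagged residuals used below:

    ' Breusch-Godfrey serial correlation LM test with 6 lags
    equation eq12.ls y c x1 x2 x3 x4 x5
    eq12.auto(6)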

    Breusch-Godfrey Serial Correlation LM Test:

    F-statistic 28.96111 Probability 0.000000

    Obs*R-squared 42.88204 Probability 0.000000

    Test Equation:

    Dependent Variable: RESID

    Method: Least Squares

    Date: 08/19/13 Time: 14:44

Presample missing value lagged residuals set to zero.

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.511900 0.658397 -0.777494 0.4413

    X1 -5.69E-09 4.60E-09 -1.238405 0.2226

    X2 1.71E-12 1.19E-12 1.433777 0.1592

    X3 0.035050 0.040369 0.868241 0.3903

    X4 0.027397 0.034699 0.789556 0.4343

    X5 0.030173 0.062743 0.480893 0.6331

    RESID(-1) 0.772369 0.154615 4.995436 0.0000

    RESID(-2) 0.109866 0.201820 0.544377 0.5891

    RESID(-3) 0.067084 0.201378 0.333126 0.7407

    RESID(-4) -0.015117 0.211764 -0.071384 0.9434

    RESID(-5) 0.029055 0.218522 0.132961 0.8949

    RESID(-6) -0.316267 0.171752 -1.841416 0.0728

    R-squared 0.809095 Mean dependent var -2.50E-15


    Adjusted R-squared 0.757877 S.D. dependent var 0.280545

    S.E. of regression 0.138045 Akaike info criterion -0.926361

    Sum squared resid 0.781315 Schwarz criterion -0.480257

    Log likelihood 36.54857 F-statistic 15.79697

    Durbin-Watson stat 1.489135 Prob(F-statistic) 0.000000

REMEDIAL MEASURES:

1. NEWEY-WEST CONSISTENT STANDARD ERRORS METHOD:

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

Step: Regress the model
Open the file containing data → Quick → Estimate Equation → Equation Specification window
write: Y C X1 X2 X3 X4 X5

In the Equation Specification window → Options → Estimation Options window → tick Heteroskedasticity-consistent covariance → Newey-West → click OK.
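From the command line, the HAC correction can be requested as an estimation option; a sketch, assuming the ls option letter n selects Newey-West standard errors in this version of EViews (an assumption worth checking against the command reference):

    ' OLS with Newey-West HAC standard errors
    equation eqnw.ls(n) y c x1 x2 x3 x4 x5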

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/21/13 Time: 11:22

    Sample: 1960 2012

    Included observations: 53

    Newey-West HAC Standard Errors & Covariance (lag truncation=3)

    Variable Coefficient Std. Error t-Statistic Prob.


    C -5.519635 2.390986 -2.308518 0.0254

    X1 -1.20E-07 1.43E-08 -8.372859 0.0000

    X2 -1.61E-11 2.39E-12 -6.749603 0.0000

    X3 -0.215865 0.081656 -2.643598 0.0111

    X4 2.668870 0.133068 20.05638 0.0000

    X5 -0.163192 0.136790 -1.193012 0.2389

    R-squared 0.997288 Mean dependent var 58.49076

    Adjusted R-squared 0.997000 S.D. dependent var 5.387357

    S.E. of regression 0.295091 Akaike info criterion 0.503204

    Sum squared resid 4.092694 Schwarz criterion 0.726256

    Log likelihood -7.334900 F-statistic 3456.958

    Durbin-Watson stat 0.196325 Prob(F-statistic) 0.000000

Note that only the standard errors of the coefficients are corrected for autocorrelation; the coefficient estimates themselves are unchanged.

2. GENERALIZED LEAST SQUARES (GLS) METHOD:

When ρ is known: We use that ρ to estimate the generalized difference equation by OLS; the transformed results are then reliable and free of the autocorrelation problem:

(Yt − ρYt−1) = β1(1 − ρ) + β2(Xt − ρXt−1) + εt

When ρ is unknown:

ρ based on the Durbin-Watson d statistic:

ρ ≈ 1 − d/2

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation


write: Y C X1 X2 X3 X4 X5

and take the Durbin-Watson statistic from the output to calculate ρ.

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 13:57

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -5.519635 1.350874 -4.085972 0.0002

    X1 -1.20E-07 9.29E-09 -12.86161 0.0000

    X2 -1.61E-11 2.45E-12 -6.580235 0.0000

    X3 -0.215865 0.085649 -2.520339 0.0152

    X4 2.668870 0.070082 38.08228 0.0000

    X5 -0.163192 0.129846 -1.256813 0.2150

    R-squared 0.997288 Mean dependent var 58.49076

    Adjusted R-squared 0.997000 S.D. dependent var 5.387357

    S.E. of regression 0.295091 Akaike info criterion 0.503204

    Sum squared resid 4.092694 Schwarz criterion 0.726256

    Log likelihood -7.334900 F-statistic 3456.958

    Durbin-Watson stat 0.196325 Prob(F-statistic) 0.000000

ρ ≈ 1 − 0.196325/2 = 0.9018
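The same value can be pulled from the stored equation programmatically; a sketch assuming the hypothetical name eq12 and the @dw data member:

    ' rho implied by the Durbin-Watson statistic
    scalar rho = 1 - eq12.@dw/2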


Step 2: Regress the transformed equation
Open the file containing data → Quick → Estimate Equation
write: Y-0.9018*D(Y) 1-0.9018 X1-0.9018*D(X1) ... X5-0.9018*D(X5)
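Note that the textbook generalized-difference transform subtracts ρ times the lagged value, Yt − ρYt−1, rather than the first difference; a command sketch of that form (hypothetical object name, untested, with all five regressors written out) would be:

    ' generalized difference equation with rho = 0.9018, using lags rather than differences
    equation eqgd.ls y-0.9018*y(-1) 1-0.9018 x1-0.9018*x1(-1) x2-0.9018*x2(-1) x3-0.9018*x3(-1) x4-0.9018*x4(-1) x5-0.9018*x5(-1)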

    Dependent Variable: Y-0.9018*D(Y)

    Method: Least Squares

    Date: 08/21/13 Time: 11:30

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    1-0.9018 -54.11289 13.94341 -3.880893 0.0003

    X1-0.9018*D(X1) -1.17E-07 9.85E-09 -11.87465 0.0000

    X2-0.9018*D(X2) -1.71E-11 2.88E-12 -5.928562 0.0000

    X3-0.9018*D(X3) -0.235757 0.089920 -2.621862 0.0118

    X4-0.9018*D(X4) 2.659116 0.071015 37.44442 0.0000

    X5-0.9018*D(X5) -0.185270 0.137451 -1.347904 0.1843

    R-squared 0.997261 Mean dependent var 58.39192

    Adjusted R-squared 0.996964 S.D. dependent var 5.332351

    S.E. of regression 0.293827 Akaike info criterion 0.496515

    Sum squared resid 3.971373 Schwarz criterion 0.721658

    Log likelihood -6.909378 Durbin-Watson stat 0.187082


ρ estimated from the residuals:

ût = ρût−1 + vt

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation
write: Y C X1 X2 X3 X4 X5
Obtain the residuals: from the estimated equation result window, Proc → Make Residual Series → (name) → OK.

Step 2: Regress the residuals on their own lag to get the ρ used to transform the model into the generalized difference equation
write: R1 R1(-1)
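A command sketch of this step, with hypothetical names, which also stores the estimated ρ for the next transformation:

    ' regress residuals on their own lag (no constant) and keep the slope as rho
    equation eqrho.ls r1 r1(-1)
    scalar rho2 = eqrho.@coefs(1)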

    Dependent Variable: R1

    Method: Least Squares

    Date: 08/21/13 Time: 11:32

    Sample (adjusted): 1962 2012

    Included observations: 51 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    D(R1) 0.356889 0.311125 1.147091 0.2568

    R-squared 0.024186 Mean dependent var 0.010390

    Adjusted R-squared 0.024186 S.D. dependent var 0.271481

    S.E. of regression 0.268178 Akaike info criterion 0.225078

    Sum squared resid 3.595959 Schwarz criterion 0.262957

    Log likelihood -4.739484 Durbin-Watson stat 0.133401

    Step 3: Regress the equation


Open the file containing data → Quick → Estimate Equation
write: Y-0.3568*D(Y) 1-0.3568 X1-0.3568*D(X1) ... X5-0.3568*D(X5)

    Dependent Variable: Y-.3568*D(Y)

    Method: Least Squares

    Date: 08/21/13 Time: 11:35

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    1-.3568 -7.471330 2.116689 -3.529725 0.0010

    X1-.3568*D(X1) -1.14E-07 9.48E-09 -12.06953 0.0000

    X2-.3568*D(X2) -1.71E-11 2.64E-12 -6.461472 0.0000

    X3-.3568*D(X3) -0.247995 0.090371 -2.744203 0.0086

    X4-.3568*D(X4) 2.635871 0.070181 37.55831 0.0000

    X5-.3568*D(X5) -0.198586 0.146473 -1.355786 0.1818

    R-squared 0.997367 Mean dependent var 58.58935

    Adjusted R-squared 0.997081 S.D. dependent var 5.237607

    S.E. of regression 0.282978 Akaike info criterion 0.421274

    Sum squared resid 3.683530 Schwarz criterion 0.646418

    Log likelihood -4.953129 Durbin-Watson stat 0.146685

ρ from Durbin's two-step method:

Step 1: Regress the equation suggested by Durbin for his two-step method
Open the file containing data → Quick → Estimate Equation
write: Y C X1 D(X1) X2 D(X2) X3 D(X3) X4 D(X4) X5 D(X5) D(Y)

The coefficient of Yt−1 is the estimated ρ.

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/21/13 Time: 11:39

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C -17.98627 3.065724 -5.866892 0.0000

    X1 -1.47E-07 1.22E-08 -12.00310 0.0000

    D(X1) -3.78E-07 2.41E-07 -1.570401 0.1242

    X2 -2.07E-11 2.75E-12 -7.517914 0.0000

    D(X2) 4.08E-12 8.47E-12 0.481797 0.6326

    X3 -0.213495 0.091318 -2.337915 0.0245

    D(X3) 0.176647 0.106567 1.657618 0.1052

    X4 3.177549 0.142231 22.34072 0.0000

    D(X4) -0.389698 1.727353 -0.225604 0.8227

    X5 -0.037026 0.130887 -0.282886 0.7787

    D(X5) 0.074601 0.113507 0.657232 0.5148

    D(Y) 3.333366 1.153424 2.889976 0.0062

    R-squared 0.998433 Mean dependent var 58.71860

    Adjusted R-squared 0.998002 S.D. dependent var 5.175647

    S.E. of regression 0.231364 Akaike info criterion 0.109524

    Sum squared resid 2.141171 Schwarz criterion 0.559812

    Log likelihood 9.152363 F-statistic 2316.511

    Durbin-Watson stat 0.453925 Prob(F-statistic) 0.000000


ρ = 3.3333

Step 2: Regress equation (12.4)
Open the file containing data → Quick → Estimate Equation
write: Y-3.3333*D(Y) 1-3.3333 X1-3.3333*D(X1) ... X5-3.3333*D(X5)

    Dependent Variable: Y-3.333*D(Y)

    Method: Least Squares

    Date: 08/21/13 Time: 11:43

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    1-3.333 3.158922 0.687286 4.596224 0.0000

    X1-3.333*D(X1) -1.40E-07 1.09E-08 -12.87266 0.0000

    X2-3.333*D(X2) -9.24E-12 3.84E-12 -2.405610 0.0202

    X3-3.333*D(X3) -0.060601 0.055325 -1.095376 0.2791

    X4-3.333*D(X4) 2.759547 0.086447 31.92176 0.0000

    X5-3.333*D(X5) -0.032140 0.061097 -0.526048 0.6014

    R-squared 0.994743 Mean dependent var 57.51121

    Adjusted R-squared 0.994172 S.D. dependent var 5.756341

    S.E. of regression 0.439444 Akaike info criterion 1.301554

    Sum squared resid 8.883108 Schwarz criterion 1.526698

    Log likelihood -27.84041 Durbin-Watson stat 0.369519


BERENBLUTT-WEBB TEST: g STATISTIC FOR TESTING ρ = 1

To test the hypothesis that ρ = 1, the test statistic used is the g statistic, defined as follows:

g = Σet2 / Σût2

where et are the OLS residuals from the first-difference regression and ût are the OLS residuals from the levels regression.

Step 1: Regress the levels equation
Open the file containing data → Quick → Estimate Equation
write: Y C X1 X2 X3 X4 X5
Obtain the residual sum of squares, Σût2.

Step 2: Regress the first-difference equation
Open the file containing data → Quick → Estimate Equation
write: D(Y) C D(X1) D(X2) D(X3) D(X4) D(X5)
Obtain the residual sum of squares, Σet2.
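With both equations stored, g is one line of arithmetic; a sketch with hypothetical names eq_lev and eq_dif and the @ssr data member:

    ' Berenblutt-Webb g statistic from the two residual sums of squares
    equation eq_lev.ls y c x1 x2 x3 x4 x5
    equation eq_dif.ls d(y) c d(x1) d(x2) d(x3) d(x4) d(x5)
    scalar g = eq_dif.@ssr / eq_lev.@ssr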

Dependent Variable: Y
Method: Least Squares
Date: 08/21/13 Time: 16:27
Sample: 1960 2012
Included observations: 53

Variable Coefficient Std. Error t-Statistic Prob.

C -5.519635 1.350874 -4.085972 0.0002
X1 -1.20E-07 9.29E-09 -12.86161 0.0000
X2 -1.61E-11 2.45E-12 -6.580235 0.0000
X3 -0.215865 0.085649 -2.520339 0.0152
X4 2.668870 0.070082 38.08228 0.0000
X5 -0.163192 0.129846 -1.256813 0.2150

R-squared 0.997288 Mean dependent var 58.49076
Adjusted R-squared 0.997000 S.D. dependent var 5.387357
S.E. of regression 0.295091 Akaike info criterion 0.503204
Sum squared resid 4.092694 Schwarz criterion 0.726256

Dependent Variable: D(Y)
Method: Least Squares
Date: 08/21/13 Time: 16:29
Sample (adjusted): 1961 2012
Included observations: 52 after adjustments

Variable Coefficient Std. Error t-Statistic Prob.

C 1.080539 0.091881 11.76017 0.0000
D(X1) -2.20E-07 1.15E-08 -19.15005 0.0000
D(X2) -4.13E-12 1.19E-12 -3.461294 0.0012
D(X3) 0.002617 0.020385 0.128386 0.8984
D(X4) -0.482934 0.278996 -1.730969 0.0902
D(X5) 0.009456 0.021666 0.436466 0.6645

R-squared 0.922727 Mean dependent var 0.362255
Adjusted R-squared 0.914328 S.D. dependent var 0.179894
S.E. of regression 0.052655 Akaike info criterion -2.941961
Sum squared resid 0.127535 Schwarz criterion -2.716817

g = 0.127535 / 4.092694 = 0.0311


We find that dL = 1.364 and dU = 1.590 (5 percent level).

Since the g statistic lies in the 0 to dL range, we do not reject the hypothesis that ρ = 1, and we can take first differences.

    CHAPTER 11

    HETEROSCEDASTICITY: WHAT HAPPENS IF THE ERROR

    VARIANCE IS NONCONSTANT?

    THE NATURE OF HETEROSCEDASTICITY

A critical assumption of the classical linear regression model is that the disturbances ui all have the same variance, σ2. If this assumption is not satisfied, there is heteroscedasticity:

E(ui2) = σi2

Notice the subscript on σi2, which reminds us that the conditional variances of ui (= conditional variances of Yi) are no longer constant.

    SOURCES OF HETEROSCEDASTICITY


1. Following the error-learning models, as people learn, their errors of behavior become smaller over time. In this case, σi2 is expected to decrease.

2. As data collecting techniques improve, σi2 is likely to decrease.

3. Heteroscedasticity can also arise as a result of the presence of outliers. An outlying observation, or outlier, is an observation that is much different (either very small or very large) in relation to the other observations in the sample.

4. Another source of heteroscedasticity arises from violating Assumption 9 of the CLRM, namely, that the regression model is correctly specified.

5. Another source of heteroscedasticity is skewness in the distribution of one or more regressors included in the model.

6. Other sources of heteroscedasticity: as David Hendry notes, heteroscedasticity can also arise because of (1) incorrect data transformation (e.g., ratio or first difference transformations) and (2) incorrect functional form (e.g., linear versus log-linear models).

    PRACTICAL CONSEQUENCES OF HETEROSCEDASTICITY

In the presence of heteroscedasticity the usual OLS estimators, although linear, unbiased, and asymptotically (i.e., in large samples) normally distributed, are no longer minimum variance among all linear unbiased estimators. In short, they are not efficient relative to other linear unbiased estimators. Put differently, they may not be BLUE. As a result, the usual t, F, and χ2 tests may not be valid.

DOING REGRESSION USING EVIEWS FOR HETEROSCEDASTICITY


    DETECTION OF HETEROSCEDASTICITY:

1. GRAPHICAL METHOD:

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation
write: Y C X1 X2 X3 X4 X5

Step 2: Obtain the residuals
From the estimated equation result window: Proc → Make Residual Series → (name) → OK.

Step 3: Plot the squared residuals against each of X1, X2, X3, X4, X5 (and against the fitted Y) individually and inspect the pattern for heteroscedasticity.
From Step 2 → Quick → Graph → Scatter
write: X1 R1^2, X2 R1^2, X3 R1^2, X4 R1^2, X5 R1^2


The scatter plots show a systematic pattern in the squared residuals, indicating heteroscedasticity.

    2. FORMAL METHODS:

    a. PARK TEST:


MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation
write: Y C X1 X2 X3 X4 X5

and obtain the residuals from the estimated equation result window: Proc → Make Residual Series → (name) → OK.

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 13:57

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -5.519635 1.350874 -4.085972 0.0002

    X1 -1.20E-07 9.29E-09 -12.86161 0.0000

    X2 -1.61E-11 2.45E-12 -6.580235 0.0000

    X3 -0.215865 0.085649 -2.520339 0.0152

    X4 2.668870 0.070082 38.08228 0.0000

    X5 -0.163192 0.129846 -1.256813 0.2150

    R-squared 0.997288 Mean dependent var 58.49076

    Adjusted R-squared 0.997000 S.D. dependent var 5.387357

    S.E. of regression 0.295091 Akaike info criterion 0.503204

    Sum squared resid 4.092694 Schwarz criterion 0.726256


    Log likelihood -7.334900 F-statistic 3456.958

    Durbin-Watson stat 0.196325 Prob(F-statistic) 0.000000

ln ûi2 = β1 + β2 ln Xi + vi    (11.1)

Step 2: Regress equation (11.1) suggested by Park.
Open the file containing data → Quick → Estimate Equation
write:
log(r1^2) c log(x1)
log(r1^2) c log(x2)
log(r1^2) c log(x3)
log(r1^2) c log(x4)
log(r1^2) c log(x5)

and check the significance of the coefficient of the explanatory variable.
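The five Park regressions can be scripted in sequence; a sketch with hypothetical equation names, assuming r1 holds the Step 1 residuals:

    ' Park test: log squared residuals on the log of each regressor
    equation eqpark1.ls log(r1^2) c log(x1)
    equation eqpark2.ls log(r1^2) c log(x2)
    equation eqpark3.ls log(r1^2) c log(x3)
    equation eqpark4.ls log(r1^2) c log(x4)
    equation eqpark5.ls log(r1^2) c log(x5)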

    Dependent Variable: LOG(R1^2)

    Method: Least Squares

    Date: 08/21/13 Time: 12:07

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 39.65786 10.25252 3.868109 0.0003

    LOG(X1) -2.361010 0.557693 -4.233531 0.0001


    Dependent Variable: LOG(R1^2)

    Method: Least Squares

    Date: 08/21/13 Time: 12:07

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 17.17790 5.018112 3.423181 0.0012

    LOG(X2) -0.866555 0.207702 -4.172101 0.0001

    Dependent Variable: LOG(R1^2)

    Method: Least Squares

    Date: 08/21/13 Time: 12:07

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C -1.586677 0.639137 -2.482532 0.0164

    LOG(X3) -1.935249 0.532505 -3.634238 0.0007

    Dependent Variable: LOG(R1^2)

    Method: Least Squares

    Date: 08/21/13 Time: 12:08

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments


    Variable Coefficient Std. Error t-Statistic Prob.

    C 19.37869 5.598606 3.461342 0.0011

    LOG(X4) -6.848214 1.657294 -4.132166 0.0001

    Dependent Variable: LOG(R1^2)

    Method: Least Squares

    Date: 08/21/13 Time: 12:08

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C -5.142800 2.376507 -2.164016 0.0353

    LOG(X5) 1.264550 2.120873 0.596240 0.5537

If β2 turns out to be statistically significant at the 1% level, it suggests that heteroscedasticity is present; here that is the case for X1 through X4, but not for X5.

b. GLEJSER TEST:

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

Step 1: Regress the model
Open the file containing data → Quick → Estimate Equation
write: Y C X1 X2 X3 X4 X5

and obtain the residuals from the estimated equation result window: Proc → Make Residual Series → (name) → OK.

|ûi| = β1 + β2Xi + vi    (11.1a)

Step 2: Regress equation (11.1a) suggested by Glejser.
Open the file containing data → Quick → Estimate Equation
write:
abs(r1) c x1
abs(r1) c x2
abs(r1) c x3
abs(r1) c x4
abs(r1) c x5

and check the significance of the coefficient of the explanatory variable.

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:16

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.462368 0.051704 8.942531 0.0000

    X1 -2.31E-09 4.60E-10 -5.020498 0.0000

    Dependent Variable: ABS(R1)


    Method: Least Squares

    Date: 08/21/13 Time: 12:16

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.290397 0.029296 9.912631 0.0000

    X2 -1.30E-12 3.82E-13 -3.408431 0.0013

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:16

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.423221 0.050627 8.359624 0.0000

    X3 -0.060295 0.013883 -4.343235 0.0001

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:17

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.914931 0.137492 6.654407 0.0000


    X4 -0.023500 0.004612 -5.094806 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:17

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.123370 0.185112 0.666459 0.5082

    X5 0.031866 0.059835 0.532554 0.5967

If β2 turns out to be statistically significant at the 1% level, it suggests that heteroscedasticity is present; here that holds for X1 through X4, but not for X5.

|ûi| = β1 + β2√Xi + vi    (11.1b)

Step 2: Regress equation (11.1b) suggested by Glejser.
Open the file containing data → Quick → Estimate Equation
write:
abs(r1) c x1^0.5
abs(r1) c x2^0.5
abs(r1) c x3^0.5
abs(r1) c x4^0.5
abs(r1) c x5^0.5

and check the significance of the coefficient of the explanatory variable.

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:18

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.697759 0.093902 7.430750 0.0000

    X1^.5 -4.76E-05 9.19E-06 -5.181553 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:18

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.383647 0.041604 9.221475 0.0000

    X2^.5 -8.01E-07 1.80E-07 -4.443441 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:18

    Sample (adjusted): 1961 2012


    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.633219 0.090155 7.023649 0.0000

    X3^.5 -0.230573 0.049249 -4.681793 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:18

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 1.599442 0.269902 5.926016 0.0000

    X4^.5 -0.254295 0.049674 -5.119270 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:19

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.004676 0.366416 0.012761 0.9899

    X5^.5 0.123829 0.209160 0.592029 0.5565


If β2 turns out to be statistically significant at the 1% level, it would suggest that heteroscedasticity is present in the data in each case.

|ûi| = β1 + β2 (1/Xi) + vi   (11.1c)

Step 2: Regress equation (11.1c) suggested by Glejser.

Open the file containing data → Quick → Estimate Equation

    Write:

    abs(r1) c 1/x1

    abs(r1) c 1/x2

    abs(r1) c 1/x3

    abs(r1) c 1/x4

    abs(r1) c 1/x5

Then check the significance of the coefficient on the explanatory variable.

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:20

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.010629 0.048537 -0.218995 0.8275

    1/X1 20352326 3925383. 5.184800 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares


    Date: 08/21/13 Time: 12:20

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.135861 0.026436 5.139347 0.0000

    1/X2 1.42E+09 2.98E+08 4.761296 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:20

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.009848 0.044574 0.220930 0.8260

    1/X3 0.577436 0.110386 5.231046 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:20

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.451516 0.132770 -3.400733 0.0013


    1/X4 19.46190 3.801832 5.119086 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:21

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.357066 0.180871 1.974150 0.0539

    1/X5 -0.410269 0.541472 -0.757693 0.4522

If β2 turns out to be statistically significant at the 1% level, it would suggest that heteroscedasticity is present in the data in each case.

|ûi| = β1 + β2 (1/√Xi) + vi   (11.1d)

Step 2: Regress equation (11.1d) suggested by Glejser.

Open the file containing data → Quick → Estimate Equation

    Write:

    abs(r1) c 1/x1^0.5

    abs(r1) c 1/x2^0.5

    abs(r1) c 1/x3^0.5

    abs(r1) c 1/x4^0.5

    abs(r1) c 1/x5^0.5

Then check the significance of the coefficient on the explanatory variable.


    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:22

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.245656 0.090575 -2.712173 0.0091

    1/X1^0.5 4471.569 848.7235 5.268582 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:22

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.047878 0.037971 1.260883 0.2132

    1/X2^0.5 25715.18 4894.805 5.253565 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:22


    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.199286 0.083843 -2.376886 0.0213

    1/X3^0.5 0.713478 0.138597 5.147859 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/21/13 Time: 12:22

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C -1.135887 0.265139 -4.284128 0.0001

    1/X4^0.5 7.318108 1.426138 5.131418 0.0000

    Dependent Variable: ABS(R1)

    Method: Least Squares

    Date: 08/22/13 Time: 13:12

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.475984 0.362154 1.314313 0.1947

    1/X5^0.5 -0.443667 0.629237 -0.705088 0.4840


If β2 turns out to be statistically significant at the 1% level, it would suggest that heteroscedasticity is present in the data in each case.
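For convenience, the four Glejser regressions can also be run for every regressor with a small EViews program rather than typing each specification by hand. This is a minimal sketch, assuming the residual series is named r1 and that the equation names eq_a_* through eq_d_* are free to use (hypothetical names):

' loop over the regressors and estimate the four Glejser forms for each
for %x x1 x2 x3 x4 x5
  equation eq_a_{%x}.ls abs(r1) c {%x}
  equation eq_b_{%x}.ls abs(r1) c {%x}^0.5
  equation eq_c_{%x}.ls abs(r1) c 1/{%x}
  equation eq_d_{%x}.ls abs(r1) c 1/{%x}^0.5
next

In each estimated equation, the t-statistic on the slope coefficient plays the same role as in the manual runs above.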

c. WHITE'S GENERAL HETEROSCEDASTICITY TEST:

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

    Step 1: Regress the Model

Open the file containing data → Quick → Estimate Equation

    Write:

Y C X1 X2 X3 X4 X5

Step 2: Apply White's general heteroscedasticity test.

In the equation estimated in Step 1 → View → Residual Tests →

    White Heteroskedasticity Test (cross terms)

    or

    White Heteroskedasticity Test (no cross term)
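For reference, the no-cross-term version of the test is equivalent to an auxiliary regression of the squared residuals on the regressors and their squares, from which n*R² is computed. A minimal manual sketch of that auxiliary specification, assuming the residual series is named r1:

r1^2 c x1 x2 x3 x4 x5 x1^2 x2^2 x3^2 x4^2 x5^2

The cross-term version adds the pairwise products of the regressors to this specification.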

    White Heteroskedasticity Test: no cross term

    F-statistic 5.982381 Probability 0.000014

    Obs*R-squared 31.13871 Probability 0.000557

Since n*R² (= 31.13871) exceeds the critical chi-square value (= 3.94) at the 5% level of significance, the conclusion is that heteroscedasticity is present.


    White Heteroskedasticity Test: cross term

    F-statistic 3.782162 Probability 0.000405

    Obs*R-squared 37.24425 Probability 0.010937

Since n*R² (= 37.24425) exceeds the critical chi-square value (= 10.85) at the 5% level of significance, the conclusion is that heteroscedasticity is present.

d. KOENKER-BASSETT (KB) TEST:

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

    Step 1: Regress the Model

Open the file containing data → Quick → Estimate Equation

    Write:

Y C X1 X2 X3 X4 X5

ûi² = α1 + α2Ŷi² + vi   (11.6a)

Step 2: Regress equation (11.6a).

Open the file containing data → Quick → Estimate Equation

Write:

    R1^2 C (Y_CAP)^2

where Ŷi are the estimated values from model (11.6), saved here as the series Y_CAP. The null hypothesis is that α2 = 0. If it is not rejected, one may conclude that there is no heteroscedasticity.

The null hypothesis can be tested by the usual t test or the F test.
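A minimal command-window sketch of the whole KB procedure, assuming the equation names eq01 and eq_kb and the series name y_cap are free to use (hypothetical names):

' estimate the original model and store it as an equation object
equation eq01.ls y c x1 x2 x3 x4 x5
' save the fitted values and the residuals
eq01.fit y_cap
eq01.makeresids r1
' KB auxiliary regression of squared residuals on squared fitted values
equation eq_kb.ls r1^2 c (y_cap)^2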


    Dependent Variable: R1^2

    Method: Least Squares

    Date: 08/21/13 Time: 12:44

    Sample (adjusted): 1961 2012

    Included observations: 52 after adjustments

    Variable Coefficient Std. Error t-Statistic Prob.

    C 0.414590 0.072768 5.697453 0.0000

    (Y_CAP)^2 -9.74E-05 2.07E-05 -4.714133 0.0000

    R-squared 0.307700 Mean dependent var 0.076373

    Adjusted R-squared 0.293854 S.D. dependent var 0.104297

    S.E. of regression 0.087643 Akaike info criterion -1.993383

    Sum squared resid 0.384066 Schwarz criterion -1.918336

Log likelihood 53.82797 F-statistic 22.22305

Durbin-Watson stat 0.407608 Prob(F-statistic) 0.000020

Here the coefficient on (Y_CAP)^2 is significant (t = -4.71, p = 0.0000), so the null hypothesis α2 = 0 is rejected and heteroscedasticity is present.

    REMEDIAL MEASURES:

WHEN σi² IS KNOWN: THE METHOD OF WEIGHTED LEAST SQUARES:

If σi² is known, the most straightforward method of correcting heteroscedasticity is by means of weighted least squares, for the estimators thus obtained are BLUE.

Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

Yi/σi = β1(1/σi) + β2(X1i/σi) + β3(X2i/σi) + β4(X3i/σi) + β5(X4i/σi) + β6(X5i/σi) + ui/σi   WEIGHTED LEAST SQUARES

where σi are the standard deviations of the error terms.
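If the σi were available as a data series, the weighted regression could be entered in the same way as the other transformed models in this chapter. A minimal sketch, assuming the standard deviations are stored in a hypothetical series named sig (note that there is no C term; β1 is the coefficient on 1/sig):

y/sig 1/sig x1/sig x2/sig x3/sig x4/sig x5/sig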


WHEN σi² IS NOT KNOWN: White's Heteroscedasticity-Consistent Variances and Standard Errors.

MODEL: Yi = β1 + β2X1i + β3X2i + β4X3i + β5X4i + β6X5i + ui

    Step 1: Regress the Model

Open the file containing data → Quick → Estimate Equation

    Write:

Y C X1 X2 X3 X4 X5

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/19/13 Time: 13:57

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -5.519635 1.350874 -4.085972 0.0002

    X1 -1.20E-07 9.29E-09 -12.86161 0.0000

    X2 -1.61E-11 2.45E-12 -6.580235 0.0000

    X3 -0.215865 0.085649 -2.520339 0.0152

    X4 2.668870 0.070082 38.08228 0.0000

    X5 -0.163192 0.129846 -1.256813 0.2150


    Step 2: Regress the Model

Open the file containing data → Quick → Estimate Equation

    Write:

Y C X1 X2 X3 X4 X5

In the equation specification window → Options → in the estimation options window, tick Heteroskedasticity-consistent covariance (White) → click OK.

    Dependent Variable: Y

    Method: Least Squares

    Date: 08/22/13 Time: 14:14

    Sample: 1960 2012

    Included observations: 53

    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    C -5.519635 1.416856 -3.895694 0.0003

    X1 -1.20E-07 8.29E-09 -14.42540 0.0000

    X2 -1.61E-11 1.64E-12 -9.811624 0.0000

    X3 -0.215865 0.056197 -3.841198 0.0004

    X4 2.668870 0.078266 34.09986 0.0000

    X5 -0.163192 0.106943 -1.525967 0.1337

Comparing the two outputs, we can see that under heteroscedasticity the OLS standard errors of the slope coefficients of X1, X2, X3, and X5 were overestimated, while those of X4 and the intercept were underestimated. After applying White's heteroscedasticity-consistent variances and standard errors as the remedial procedure, the reported standard errors are corrected for heteroscedasticity.
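The same White-corrected estimation can also be run from the command window. This is a minimal sketch, assuming an EViews version that supports the cov=white option on the ls command (older versions use ls(h) instead) and that the equation name eq_hc is free (hypothetical name):

equation eq_hc.ls(cov=white) y c x1 x2 x3 x4 x5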

Plausible Assumptions about the Heteroscedasticity Pattern:


Assumption 1: The error variance is proportional to Xi²:

E(ui²) = σ²Xi²

We may transform the original model by dividing it through by Xi.

Transforming using X1:

Yi/X1i = β1(1/X1i) + β2 + β3(X2i/X1i) + β4(X3i/X1i) + β5(X4i/X1i) + β6(X5i/X1i) + vi   (11.7a) TRANSFORMED MODEL

where vi = ui/X1i is the transformed disturbance term.
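To see why this transformation works, check the variance of the transformed disturbance vi = ui/X1i under Assumption 1:

E(vi²) = E(ui²)/X1i² = σ²X1i²/X1i² = σ²

so the transformed disturbance is homoscedastic and OLS can be applied to (11.7a).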

    Step 1: Regress the equation (11.7a)

Open the file containing data → Quick → Estimate Equation → equation specification window

    Write: Y/X1 C 1/X1 X2/X1 X3/X1 X4/X1 X5/X1

    Dependent Variable: Y/X1

    Method: Least Squares

    Date: 08/22/13 Time: 14:31

    Sample: 1960 2012

    Included observations: 53

    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    C -1.35E-07 1.30E-08 -10.36097 0.0000

    1/X1 -8.939349 1.641135 -5.447055 0.0000

    X2/X1 -1.65E-11 3.01E-12 -5.480641 0.0000

    X3/X1 -0.360089 0.097640 -3.687939 0.0006

    X4/X1 2.859217 0.091651 31.19696 0.0000

    X5/X1 -0.206358 0.123891 -1.665644 0.1024

    R-squared 0.999641 Mean dependent var 6.51E-07


    Adjusted R-squared 0.999603 S.D. dependent var 2.18E-07

    S.E. of regression 4.34E-09 Akaike info criterion -35.56657

    Sum squared resid 8.85E-16 Schwarz criterion -35.34352

    Log likelihood 948.5141 F-statistic 26205.57

    Durbin-Watson stat 0.188567 Prob(F-statistic) 0.000000

    Similarly for X2 X3 X4 and X5:

    Dependent Variable: Y/X2

    Method: Least Squares

    Date: 08/22/13 Time: 14:33

    Sample: 1960 2012

    Included observations: 53

    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    C -1.60E-11 1.20E-11 -1.337315 0.1876

    X1/X2 -1.80E-07 2.77E-08 -6.486804 0.0000

    1/X2 -15.75433 2.283040 -6.900594 0.0000

    X3/X2 -0.400503 0.187234 -2.139050 0.0377

    X4/X2 3.241575 0.140842 23.01573 0.0000

    X5/X2 -0.141550 0.103679 -1.365271 0.1787

    R-squared 0.999966 Mean dependent var 3.39E-09

    Adjusted R-squared 0.999962 S.D. dependent var 3.44E-09

    S.E. of regression 2.12E-11 Akaike info criterion -46.21324

    Sum squared resid 2.11E-20 Schwarz criterion -45.99019

    Log likelihood 1230.651 F-statistic 274036.8


    Durbin-Watson stat 0.158014 Prob(F-statistic) 0.000000

    Dependent Variable: Y/X3

    Method: Least Squares

    Date: 08/22/13 Time: 14:34

    Sample: 1960 2012

    Included observations: 53

    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.308845 0.117205 -2.635089 0.0114

    X1/X3 -1.49E-07 1.46E-08 -10.18503 0.0000

    X2/X3 -1.40E-11 3.08E-12 -4.564873 0.0000

    1/X3 -10.39333 1.716490 -6.054987 0.0000

    X4/X3 2.948408 0.095104 31.00182 0.0000

    X5/X3 -0.215436 0.120129 -1.793374 0.0793

    R-squared 0.999735 Mean dependent var 20.99458

    Adjusted R-squared 0.999707 S.D. dependent var 8.062351

    S.E. of regression 0.137977 Akaike info criterion -1.017182

    Sum squared resid 0.894776 Schwarz criterion -0.794130

    Log likelihood 32.95532 F-statistic 35499.75

    Durbin-Watson stat 0.192584 Prob(F-statistic) 0.000000


    Dependent Variable: Y/X4

    Method: Least Squares

    Date: 08/22/13 Time: 14:35

    Sample: 1960 2012

    Included observations: 53

    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    C 2.734611 0.080073 34.15149 0.0000

    X1/X4 -1.25E-07 9.49E-09 -13.13093 0.0000

    X2/X4 -1.64E-11 1.99E-12 -8.252539 0.0000

    X3/X4 -0.255739 0.068636 -3.726028 0.0005

    1/X4 -6.702676 1.430325 -4.686120 0.0000

    X5/X4 -0.189060 0.115224 -1.640812 0.1075

    R-squared 0.990734 Mean dependent var 2.006037

    Adjusted R-squared 0.989748 S.D. dependent var 0.111929

    S.E. of regression 0.011333 Akaike info criterion -6.015897

    Sum squared resid 0.006037 Schwarz criterion -5.792845

    Log likelihood 165.4213 F-statistic 1005.009

    Durbin-Watson stat 0.185140 Prob(F-statistic) 0.000000

    Dependent Variable: Y/X5

    Method: Least Squares

    Date: 08/22/13 Time: 14:35

    Sample: 1960 2012


    Included observations: 53

    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.107515 0.114543 -0.938644 0.3527

    X1/X5 -1.19E-07 7.82E-09 -15.21492 0.0000

    X2/X5 -1.56E-11 1.60E-12 -9.744259 0.0000

    X3/X5 -0.213828 0.057516 -3.717711 0.0005

    X4/X5 2.657934 0.074921 35.47637 0.0000

    1/X5 -5.458239 1.338748 -4.077122 0.0002

    R-squared 0.999139 Mean dependent var 19.34730

    Adjusted R-squared 0.999047 S.D. dependent var 3.065848

    S.E. of regression 0.094632 Akaike info criterion -1.771378

    Sum squared resid 0.420893 Schwarz criterion -1.548326

    Log likelihood 52.94151 F-statistic 10906.54

    Durbin-Watson stat 0.202398 Prob(F-statistic) 0.000000

Notice that in the transformed regression the intercept term β2 is the slope coefficient of X1 in the original equation, and the slope coefficient β1 is the intercept term of the original model. Therefore, to get back to the original model we have to multiply the estimated transformed model by X1i.

Assumption 2: The error variance is proportional to Xi (the square root transformation):

E(ui²) = σ²Xi

Transforming using X1:

Yi/√X1i = β1(1/√X1i) + β2√X1i + β3(X2i/√X1i) + β4(X3i/√X1i) + β5(X4i/√X1i) + β6(X5i/√X1i) + vi   (11.8a) TRANSFORMED MODEL

    Step 1: Regress the equation (11.8a)

Open the file containing data → Quick → Estimate Equation → equation specification window

    Write:

    Y/x1^0.5 c 1/x1^0.5 x2/x1^0.5 x3/x1^0.5 x4/x1^0.5 x5/x1^0.5

    Dependent Variable: Y/X1^0.5

    Method: Least Squares

    Date: 08/22/13 Time: 14:42

    Sample: 1960 2012

    Included observations: 53

    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.003423 0.000293 -11.69658 0.0000

    1/X1^0.5 -3.483622 1.212010 -2.874252 0.0061

    X2/X1^0.5 -2.95E-11 1.99E-12 -14.88087 0.0000

    X3/X1^0.5 -0.356717 0.062820 -5.678385 0.0000

    X4/X1^0.5 3.378221 0.131543 25.68143 0.0000

    X5/X1^0.5 -0.166805 0.123406 -1.351681 0.1829

    R-squared 0.998082 Mean dependent var 0.006034

    Adjusted R-squared 0.997878 S.D. dependent var 0.000748

    S.E. of regression 3.45E-05 Akaike info criterion -17.60661

    Sum squared resid 5.58E-08 Schwarz criterion -17.38355

    Log likelihood 472.5751 F-statistic 4892.212

    Durbin-Watson stat 0.287508 Prob(F-statistic) 0.000000


Similarly for X2, X3, X4, and X5.

Note an important feature of the transformed model: it has no intercept term. Therefore, one has to use the regression-through-the-origin model to estimate β1 and β2. Having run (11.8), one can get back to the original model simply by multiplying the transformed model by √Xi; a sketch of the through-the-origin specification follows.
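This is a minimal sketch, using the same series names as above (note that C is dropped and the √X1 term enters explicitly):

y/x1^0.5 1/x1^0.5 x1^0.5 x2/x1^0.5 x3/x1^0.5 x4/x1^0.5 x5/x1^0.5

Here the coefficient on 1/x1^0.5 estimates β1 and the coefficient on x1^0.5 estimates β2.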

Assumption 3: The error variance is proportional to the square of the mean value of Y:

E(ui²) = σ²[E(Yi)]²

The equation postulates that the variance of ui is proportional to the square of the expected value of Y. Therefore, we transform the original equation as follows:

Yi/E(Yi) = β1(1/E(Yi)) + β2(X1i/E(Yi)) + β3(X2i/E(Yi)) + β4(X3i/E(Yi)) + β5(X4i/E(Yi)) + β6(X5i/E(Yi)) + vi   (11.9)

The transformation (11.9) is, however, not operational, because E(Yi) depends on the β's, which are unknown. Of course, we know Ŷi, which is an estimator of E(Yi) (saved here as the series Y_CAP).

Transforming using Ŷi:

Yi/Ŷi = β1(1/Ŷi) + β2(X1i/Ŷi) + β3(X2i/Ŷi) + β4(X3i/Ŷi) + β5(X4i/Ŷi) + β6(X5i/Ŷi) + vi

Step: Regress the transformed equation.

Open the file containing data → Quick → Estimate Equation → equation specification window

    Write: Y/Y_CAP 1/Y_CAP X1/Y_CAP X2/Y_CAP X3/Y_CAP X4/Y_CAP X5/Y_CAP

    Dependent Variable: Y/Y_CAP

    Method: Least Squares

    Date: 08/22/13 Time: 14:55

    Sample: 1960 2012

    Included observations: 53


    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    1/Y_CAP -6.465594 1.412492 -4.577437 0.0000

    X1/Y_CAP -1.24E-07 9.12E-09 -13.58257 0.0000

    X2/Y_CAP -1.62E-11 1.85E-12 -8.803458 0.0000

    X3/Y_CAP -0.237086 0.063967 -3.706393 0.0006

    X4/Y_CAP 2.721034 0.078917 34.47970 0.0000

    X5/Y_CAP -0.182424 0.112227 -1.625497 0.1107

    R-squared 0.013722 Mean dependent var 0.999988

    Adjusted R-squared -0.091201 S.D. dependent var 0.005202

    S.E. of regression 0.005434 Akaike info criterion -7.486055

    Sum squared resid 0.001388 Schwarz criterion -7.263003

    Log likelihood 204.3804 Durbin-Watson stat 0.186775

Assumption 4: A log transformation such as

ln Yi = β1 + β2 ln Xi + ui

very often reduces heteroscedasticity.

Log transforming:

ln Yi = β1 + β2 ln X1i + β3 ln X2i + β4 ln X3i + β5 ln X4i + β6 ln X5i + ui

Step: Regress the log-transformed equation.

Open the file containing data → Quick → Estimate Equation → equation specification window

    Write: log(y) c log(x1) log(x2) log(x3) log(x4) log(x5)


    Dependent Variable: LOG(Y)

    Method: Least Squares

    Date: 08/22/13 Time: 15:00

    Sample: 1960 2012

    Included observations: 53

    White Heteroskedasticity-Consistent Standard Errors & Covariance

    Variable Coefficient Std. Error t-Statistic Prob.

    C 5.341832 0.443418 12.04694 0.0000

    LOG(X1) -0.397072 0.050292 -7.895290 0.0000

    LOG(X2) -0.040926 0.010557 -3.876748 0.0003

    LOG(X3) 0.020821 0.008782 2.370920 0.0219

    LOG(X4) 2.058282 0.171616 11.99351 0.0000

    LOG(X5) 0.037678 0.012541 3.004484 0.0043

    R-squared 0.988388 Mean dependent var 4.064502

    Adjusted R-squared 0.987152 S.D. dependent var 0.095517

    S.E. of regression 0.010827 Akaike info criterion -6.107355

    Sum squared resid 0.005509 Schwarz criterion -5.884303

    Log likelihood 167.8449 F-statistic 800.0900

    Durbin-Watson stat 0.393330 Prob(F-statistic) 0.000000

This result arises because the log transformation compresses the scales in which the variables are measured, thereby reducing a tenfold difference between two values to roughly a twofold difference (for example, ln 100 ≈ 4.61 is only twice ln 10 ≈ 2.30).

To conclude our discussion of the remedial measures, we reemphasize that all the transformations discussed previously are ad hoc; we are essentially speculating about the nature of σi². Which of these transformations will work depends on the nature of the problem and the severity of heteroscedasticity.

    CHAPTERS 18 TO 20

    SIMULTANEOUS REGRESSION MODELS

    THE NATURE OF SIMULTANEOUS-EQUATION MODELS

In contrast to single-equation models, simultaneous-equation models involve more than one dependent, or endogenous, variable, necessitating as many equations as there are endogenous variables. A unique feature of simultaneous-equation models is that the endogenous variable (i.e., regressand) in one equation may appear as an explanatory variable (i.e., regressor) in another equation of the system.

Y1i = β10 + β12Y2i + γ11X1i + u1i   (18.1)

Y2i = β20 + β21Y1i + γ21X1i + u2i   (18.2)

where Y1 and Y2 are mutually dependent, or endogenous, variables, X1 is an exogenous variable, and u1 and u2 are the stochastic disturbance terms; the variables Y1 and Y2 are both stochastic.

    Therefore, unless it can be shown that the stochastic explanatory variable Y2 in (18.1) is

distributed independently of u1 and the stochastic explanatory variable Y1 in (18.2) is

    distributed independently of u2, application of the classical OLS to these equations individually

    will lead to inconsistent estimates.


    As a consequence, such an endogenous explanatory variable becomes stochastic and is usually

    correlated with the disturbance term of the equation in which it appears as an explanatory

    variable.

    In this situation the classical OLS method may not be applied because the estimators thus

    obtained are not consistent, that is, they do not converge to their true population values no

    matter how large the sample size. Since simultaneous-equation models are used frequently,

    especially in econometric models, alternative estimating techniques have been developed by

    various authors.

DOING REGRESSION USING EVIEWS: SIMULTANEOUS MODELS

MODEL: Yi = β1 + β2X1i + β3X2i + β4X4i + β5X5i + u1i

X1i = α1 + α2Yi + α3X2i + α4X3i + u2i

where Yi and X1i are mutually dependent, or endogenous, variables; X2i, X3i, X4i, and X5i are exogenous variables; and u1i and u2i are the stochastic disturbance terms. The variables Yi and X1i are both stochastic. Therefore, unless it can be shown that the stochastic explanatory variable X1i in the first equation is distributed independently of u1i and the stochastic explanatory variable Yi in the second equation is distributed independently of u2i, applying the classical OLS to these equations individually will lead to inconsistent estimates.

    THE IDENTIFICATION PROBLEM

    RULES FOR IDENTIFICATION

    THE ORDER CONDITION OF IDENTIFIABILITY

A necessary (but not sufficient) condition of identification, known as the order condition, may be stated in two different but equivalent ways as follows (the necessary as well as sufficient condition of identification will be presented shortly):

M - K ≥ G - 1

where

M = number of variables in the simultaneous model

K = number of variables in the specific equation

G = number of endogenous variables (equations)

FOR EQUATION 1:

6 - 5 ≥ 2 - 1

1 ≥ 1

FOR EQUATION 2:

6 - 4 ≥ 2 - 1

2 > 1

Hence equation 1 is just identified and equation 2 is over identified. Since equation 1 is just identified, it can be estimated using either ILS or 2SLS; equation 2, being over identified, is estimated using the 2SLS method.
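As a sketch of 2SLS estimation from the EViews command window, assuming the equation name eq_tsls is free (hypothetical name); in the TSLS specification the instrument list, here all the exogenous variables, follows the @ sign:

equation eq_tsls.tsls y c x1 x2 x4 x5 @ c x2 x3 x4 x5

Equation 2 can be estimated analogously with the specification x1 c y x2 x3 @ c x2 x3 x4 x5.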

THE RANK CONDITION FOR IDENTIFIABILITY:

Here we do not apply the rank condition, as the system consists of only two simultaneous equations.

    HAUSMAN SPECIFICATION TEST

It is also called the test for the endogeneity, or simultaneity, problem.

EVIEWS:

Step 1: Regress the reduced form for Y: run Y on the exogenous variables X2, X3, X4, and X5.

Open the file containing data → Quick → Estimate Equation


    Write:

Y C X2 X3 X4 X5

Then obtain the residuals from the estimated equation: in the result window → Proc → Make Residual Series → (name) → OK.
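A minimal command-window equivalent of this step, assuming the equation name eq_rf is free (hypothetical name):

' reduced-form regression of Y on the exogenous variables
equation eq_rf.ls y c x2 x3 x4 x5
' save the reduced-form residuals as r1
eq_rf.makeresids r1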

    Dependent Variable: Y

    Method: Least Squares

    Date: 09/04/13 Time: 12:58

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C 9.097915 1.536073 5.922841 0.0000

    X2 -3.84E-11 3.65E-12 -10.50413 0.0000

    X3 -0.751619 0.157433 -4.774216 0.0000

    X4 1.907816 0.078995 24.15124 0.0000

    X5 -0.706191 0.258311 -2.733880 0.0087

    R-squared 0.987744 Mean dependent var 58.49076

    Adjusted R-squared 0.986722 S.D. dependent var 5.387357

    S.E. of regression 0.620775 Akaike info criterion 1.973891

    Sum squared resid 18.49733 Schwarz criterion 2.159768

    Log likelihood -47.30811 F-statistic 967.0998

    Durbin-Watson stat 0.340734 Prob(F-statistic) 0.000000

Step 2: Regress equation 2, using the residuals from Step 1 as an additional explanatory variable.


Open the file containing data → Quick → Estimate Equation

    Write:

    X1 C Y X2 X3 R1

Now check the significance of the coefficient on this residual series; if its t-statistic exceeds the critical value, we can say that the two equations are simultaneous.

    Dependent Variable: X1

    Method: Least Squares

    Date: 09/04/13 Time: 13:01

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -1.53E+08 10008321 -15.31155 0.0000

    Y 3794853. 203298.8 18.66638 0.0000

    X2 0.000291 1.31E-05 22.24633 0.0000

    X3 5855713. 671827.1 8.716101 0.0000

    RESID01 -10309118 797193.2 -12.93177 0.0000

    R-squared 0.994332 Mean dependent var 1.03E+08

    Adjusted R-squared 0.993859 S.D. dependent var 42305993

    S.E. of regression 3315248. Akaike info criterion 32.95555

    Sum squared resid 5.28E+14 Schwarz criterion 33.14143

    Log likelihood -868.3221 F-statistic 2104.972

    Durbin-Watson stat 0.634807 Prob(F-statistic) 0.000000

Since the residual is significant at the 1% level of significance, we can conclude that both equations are simultaneous.


Similarly, the test can be run the other way: repeat Step 1, this time regressing X1 on the exogenous variables and saving the residuals.

Step 2: Regress equation 1, using the residuals from Step 1 as an additional explanatory variable.

Open the file containing data → Quick → Estimate Equation

    Write:

    y c x1 x2 x4 x5 r1

Again check the significance of the coefficient on this residual series; if its t-statistic exceeds the critical value, we can say that the two equations are simultaneous.

    Dependent Variable: Y

    Method: Least Squares

    Date: 09/04/13 Time: 13:05

    Sample: 1960 2012

    Included observations: 53

    Variable Coefficient Std. Error t-Statistic Prob.

    C -0.811293 1.899331 -0.427146 0.6712

    X1 -9.78E-08 1.21E-08 -8.112440 0.0000

    X2 -1.70E-11 2.24E-12 -7.562896 0.0000

    X4 2.397581 0.101523 23.61622 0.0000

    X5 -0.049479 0.107277 -0.461224 0.6468

    RESID01 0.362655 0.101890 3.559284 0.0009

    R-squared 0.997575 Mean dependent var 58.49076

    Adjusted R-squared 0.997317 S.D. dependent var 5.387357

    S.E. of regression 0.279035 Akaike info criterion 0.391313

    Sum squared resid 3.659450