UNCLASSIFIED                                    AFIT-CI-80-50

AIR FORCE INSTITUTE OF TECHNOLOGY, WRIGHT-PATTERSON AFB, OH

A MONTE CARLO INVESTIGATION OF ECONOMETRIC MODELS WITH FIXED AND STOCHASTIC REGRESSORS WHERE THE ERROR TERMS ARE AR(1) AND MA(1)

M. S. Anselmi, 1980

SECURITY CLASSIFICATION OF THIS PAGE

REPORT DOCUMENTATION PAGE

1. REPORT NUMBER: AFIT-CI-80-50

4. TITLE: A Monte Carlo Investigation of Econometric Models with Fixed and Stochastic Regressors Where the Error Terms are AR(1) and MA(1)

5. TYPE OF REPORT & PERIOD COVERED: THESIS/DISSERTATION

7. AUTHOR(s): Capt Michael Stephen Anselmi

9. PERFORMING ORGANIZATION NAME AND ADDRESS: AFIT STUDENT AT: University of Colorado

11. CONTROLLING OFFICE NAME AND ADDRESS: AFIT/NR, WPAFB OH 45433

12. REPORT DATE: 1980

13. NUMBER OF PAGES: 97

15. SECURITY CLASS. (of this report): UNCLASSIFIED

16. DISTRIBUTION STATEMENT (of this Report): APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED

18. SUPPLEMENTARY NOTES: APPROVED FOR PUBLIC RELEASE: IAW AFR 190-17. Fredric C. Lynch, Major, USAF, Director of Public Affairs, Air Force Institute of Technology (ATC), Wright-Patterson AFB, OH 45433

19. KEY WORDS (Continue on reverse side if necessary and identify by block number)

20. ABSTRACT (Continue on reverse side if necessary and identify by block number): ATTACHED

DD FORM 1473, EDITION OF 1 NOV 65 IS OBSOLETE          UNCLASSIFIED

ABSTRACT

Anselmi, Michael Stephen. Captain, USAF. Ph.D., Economics, 1980, University of Colorado. 97 pages.

A MONTE CARLO INVESTIGATION OF ECONOMETRIC MODELS WITH FIXED AND STOCHASTIC REGRESSORS WHERE THE ERROR TERMS ARE AR(1) AND MA(1)

A Monte Carlo study of four specialized regression procedures

is reported. Two of the procedures are designed for a simple linear

econometric model while the other two procedures are designed for a

single period lagged endogenous variable and single exogenous

variable econometric model. For the simple linear model, the small

sample properties of the Pesaran procedure, designed for an MA(1)

error term, and the Beach-MacKinnon procedure, designed for an AR(1)

error term, are compared against the small sample properties of OLS

and against those of the Prais-Winston procedure. For the lagged

endogenous model, the small sample properties of the Zellner-Geisel

procedure, designed for an MA(1) error term, and the Wallis procedure,

designed for an AR(1) error term, are likewise compared against the

small sample properties of OLS and of Prais-Winston. In addition, the

power of the Durbin-Watson d test is analyzed for both models and the

Durbin h and the McNown tests are analyzed for the lagged endogenous

model. For the simple linear model, the Prais-Winston transformation

is only performed if the Durbin-Watson d test indicates the hypothesis

of no autocorrelation should be rejected. For the lagged endogenous

model, the transformation is only performed if the McNown test


indicates the hypothesis should be rejected. Finally, to analyze the

effects of misspecification, the two specialized procedures for each

model are applied to the wrong error structure.

PRIMARY SOURCES

Beach, C. M., and J. G. MacKinnon. "A Maximum Likelihood Procedure for Regressions with Autocorrelated Errors." Econometrica, 46 (January, 1978), 51-58.

Durbin, J. "Testing for Serial Correlation in Least-Squares Regression When Some of the Regressors are Lagged Dependent Variables." Econometrica, 38 (May, 1970), 410-421.

Hendry, D. F., and P. K. Trivedi. "Maximum Likelihood Estimation of Difference Equations with Moving Average Errors: A Simulation Study." Review of Economic Studies, 39 (April, 1972), 117-146.

McNown, R. A Test for Autocorrelation in Models with Lagged Dependent Variables. Colorado Discussion Paper No. 113. Boulder: Department of Economics, University of Colorado, 1977. Soon to be published in Review of Economics and Statistics.

Maddala, G. S., and A. S. Rao. "Tests for Serial Correlation in Regression Models with Lagged Dependent Variables and Serially Correlated Errors." Econometrica, 41 (July, 1973), 761-774.

Nicholls, D. F., A. R. Pagan, and R. D. Terrell. "The Estimation and Use of Models with Moving Average Disturbance Terms: A Survey." International Economic Review, 16 (February, 1975), 113-134.

Pesaran, M. H. "Exact Maximum Likelihood Estimation of a Regression Equation with a First-Order Moving-Average Error." Review of Economic Studies, 40 (October, 1973), 529-536.

Prais, S. J., and C. B. Winston. Trend Estimators and Serial Correlation. Cowles Commission Discussion Paper No. 383. Chicago, 1954.

Rao, P., and Z. Griliches. "Small Sample Properties of Several Two-Stage Regression Methods in the Context of Autocorrelated Errors." Journal of the American Statistical Association, 64 (March, 1969), 253-272.

A MONTE CARLO INVESTIGATION OF ECONOMETRIC MODELS WITH FIXED AND

STOCHASTIC REGRESSORS WHERE THE ERROR TERMS ARE AR(1) AND MA(1)

by

Michael Stephen Anselmi

B.A., University of Wyoming, 1968

M.S., University of Wyoming, 1969

M.A., University of Colorado, 1979

A thesis submitted to the Faculty of the Graduate

School of the University of Colorado in partial

fulfillment of the requirements for the degree of

Doctor of Philosophy

Department of Economics

1980

This Thesis for the Doctor of Philosophy Degree by

Michael Stephen Anselmi

has been approved for the

Department of

Economics

by


,J. Malcolm Dowling

Frank Hsiao

Date: April 9, 1980

Anselmi, Michael Stephen (Ph.D., Economics)

A Monte Carlo Investigation of Econometric Models with Fixed and

Stochastic Regressors Where the Error Terms are AR(1) and MA(1)

Thesis directed by Professor J. Malcolm Dowling

A Monte Carlo study of four specialized regression procedures

is reported. Two of the procedures are designed for a simple linear

econometric model while the other two procedures are designed for a

single period lagged endogenous variable and single exogenous

variable econometric model. For the simple linear model, the small

sample properties of the Pesaran procedure, designed for an MA(1)

error term, and the Beach-MacKinnon procedure, designed for an AR(1)

error term, are compared against the small sample properties of OLS

and against those of the Prais-Winston procedure. For the lagged

endogenous model, the small sample properties of the Zellner-Geisel

procedure, designed for an MA(1) error term, and the Wallis procedure,

designed for an AR(1) error term, are likewise compared against the

small sample properties of OLS and of Prais-Winston. In addition, the

power of the Durbin-Watson d test is analyzed for both models and the

Durbin h and the McNown tests are analyzed for the lagged endogenous

model. For the simple linear model, the Prais-Winston transformation

is only performed if the Durbin-Watson d test indicates the hypothesis

of no autocorrelation should be rejected. For the lagged endogenous

model, the transformation is only performed if the McNown test

indicates the hypothesis should be rejected. Finally, to analyze the

effects of misspecification, the two specialized procedures for each

model are applied to the wrong error structure. The Pesaran

procedure was superior to OLS and Prais-Winston in the estimation of


the coefficient of the exogenous variable but inferior in the

estimation of the coefficient of autocorrelation. It did remove the

autocorrelation under both error structures. The Beach-MacKinnon

procedure demonstrated results similar to those of Pesaran. For both

MA(1) and AR(1) autocorrelation, the Zellner-Geisel procedure was far

superior to Prais-Winston and better than OLS for large values of the

coefficient of autocorrelation. It did appear to remove MA(1) auto-

correlation from the residuals but was not very successful in removing

AR(1) autocorrelation. The Wallis procedure was inferior to OLS for

both error structures. In certain circumstances, the Wallis procedure

provided better estimates than Prais-Winston. In terms of removing

autocorrelation, the Wallis procedure did not do an acceptable job for

either error structure. All three tests for autocorrelation proved to

be nearly equal in power when the error term was AR(1). When the

error structure was MA(1), the Durbin-Watson d test exhibited

remarkable power. For the lagged endogenous model, the d test

outperformed the Durbin h test which outperformed the McNown test

when the error term was MA(1).

This abstract is approved as to form and content. I recommend its publication.

Faculty member in charge of thesis


ACKNOWLEDGMENTS

A project like this cannot be accomplished without the help of

others. I would first like to thank the United States Air Force and

the Air Force Institute of Technology (AFIT) who sponsored my degree

program and paid my full active duty salary as well as my tuition.

Without them I would not have been able to afford it. I wish to thank

my chairman, Malcolm Dowling, whose guidance and inspiration were

instrumental in the completion of my program. My family was a source

of strength over the many months of studying and test taking. My

wife, Carlene, and my daughters, Christine and Crystal, supported me

throughout my program even during the periods of stress. I will be

forever grateful for their patience and understanding. The staff of

the Economics Department and of the Bureau of Economic Research were

always more than willing to do what they could and their cheerful

attitude never failed to brighten up my day. Finally, I thank God

for the skill and talents he gave me and I thank Him for giving me

this opportunity to add to the body of economic knowledge.

TABLE OF CONTENTS

CHAPTER                                                           PAGE

I. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . .  1
     Model I . . . . . . . . . . . . . . . . . . . . . . . . . . .  3
       Model I with an AR(1) Error Structure . . . . . . . . . . .  3
       Model I with an MA(1) Error Structure . . . . . . . . . . .  4
     Model II  . . . . . . . . . . . . . . . . . . . . . . . . . .  4
       Model II with an AR(1) Error Structure  . . . . . . . . . .  5
       Model II with an MA(1) Error Structure  . . . . . . . . . .  5
     Testing for Autocorrelation . . . . . . . . . . . . . . . . .  5
     Effects of Misspecification . . . . . . . . . . . . . . . . .  7

II. MODEL I ESTIMATION . . . . . . . . . . . . . . . . . . . . . .  9
     The Beach-MacKinnon Procedure for Model I-AR  . . . . . . . .  9
     The Pesaran Procedure for Model I-MA  . . . . . . . . . . . . 11
     The Monte Carlo Procedure . . . . . . . . . . . . . . . . . . 12
       The Procedure in General  . . . . . . . . . . . . . . . . . 12
       Input Parameter Values  . . . . . . . . . . . . . . . . . . 14
       Comparison Statistics Collected and Analyzed  . . . . . . . 14

III. MODEL II ESTIMATION . . . . . . . . . . . . . . . . . . . . . 16
     The Wallis Three-Step Procedure for Model II-AR . . . . . . . 16
     The Zellner-Geisel Procedure for Model II-MA  . . . . . . . . 18
     The Monte Carlo Procedure . . . . . . . . . . . . . . . . . . 19
       The Procedure in General  . . . . . . . . . . . . . . . . . 19
       Input Parameter Values  . . . . . . . . . . . . . . . . . . 21
       Comparison Statistics Collected and Analyzed  . . . . . . . 22

IV. RESULTS AND CONCLUSIONS  . . . . . . . . . . . . . . . . . . . 23
     Comparison Statistic Results for Model I  . . . . . . . . . . 24
       Pesaran Procedure on MA(1)  . . . . . . . . . . . . . . . . 25
       Beach-MacKinnon Procedure on AR(1)  . . . . . . . . . . . . 27
       Beach-MacKinnon Procedure on MA(1)  . . . . . . . . . . . . 28
       Pesaran Procedure on AR(1)  . . . . . . . . . . . . . . . . 29
     Comparison Statistics Results for Model II  . . . . . . . . . 31
       Zellner-Geisel Procedure on MA(1) . . . . . . . . . . . . . 31
       Wallis Procedure on AR(1) . . . . . . . . . . . . . . . . . 33
       Wallis Procedure on MA(1) . . . . . . . . . . . . . . . . . 35
       Zellner-Geisel Procedure on AR(1) . . . . . . . . . . . . . 36
     Autocorrelation Test Results for Model I  . . . . . . . . . . 37
       MA(1) Error Structure . . . . . . . . . . . . . . . . . . . 37
       AR(1) Error Structure . . . . . . . . . . . . . . . . . . . 38
     Autocorrelation Test Results for Model II . . . . . . . . . . 39
       MA(1) Error Structure . . . . . . . . . . . . . . . . . . . 39
       AR(1) Error Structure . . . . . . . . . . . . . . . . . . . 41
     Conclusions and Recommendations . . . . . . . . . . . . . . . 43
       Pesaran Procedure . . . . . . . . . . . . . . . . . . . . . 43
       Beach-MacKinnon Procedure . . . . . . . . . . . . . . . . . 44
       Zellner-Geisel Procedure  . . . . . . . . . . . . . . . . . 44
       Wallis Three-Step Procedure . . . . . . . . . . . . . . . . 45
       Durbin-Watson d Test  . . . . . . . . . . . . . . . . . . . 45
       McNown Test . . . . . . . . . . . . . . . . . . . . . . . . 46
       Durbin h Test . . . . . . . . . . . . . . . . . . . . . . . 46

SELECTED BIBLIOGRAPHY  . . . . . . . . . . . . . . . . . . . . . . 82

LIST OF TABLES

TABLE                                                             PAGE

I. ESTIMATION PROCEDURES CORRECTLY APPLIED TO MODELS AND ERROR STRUCTURES . . . 6

II. ESTIMATION PROCEDURES INCORRECTLY APPLIED TO MODELS AND ERROR STRUCTURES . . . 8

III. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL I-MA, PESARAN VERSUS OLS . . . 48

IV. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL I-MA, PESARAN VERSUS PRAIS-WINSTON . . . 49

V. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL I-AR, BEACH-MACKINNON VERSUS OLS . . . 50

VI. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL I-AR, BEACH-MACKINNON VERSUS PRAIS-WINSTON . . . 51

VII. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL I-MA, BEACH-MACKINNON VERSUS OLS . . . 52

VIII. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL I-MA, BEACH-MACKINNON VERSUS PRAIS-WINSTON . . . 53

IX. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL I-AR, PESARAN VERSUS OLS . . . 54

X. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL I-AR, PESARAN VERSUS PRAIS-WINSTON . . . 55

XI. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL II-MA, ZELLNER-GEISEL VERSUS OLS . . . 56

XII. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL II-MA, ZELLNER-GEISEL VERSUS PRAIS-WINSTON . . . 57

XIII. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL II-AR, WALLIS VERSUS OLS . . . 58

XIV. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL II-AR, WALLIS VERSUS PRAIS-WINSTON . . . 61

XV. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL II-MA, WALLIS VERSUS OLS . . . 64

XVI. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL II-MA, WALLIS VERSUS PRAIS-WINSTON . . . 67

XVII. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL II-AR, ZELLNER-GEISEL VERSUS OLS . . . 70

XVIII. PERCENTAGE DIFFERENCES FOR SELECTED COMPARISON STATISTICS FOR MODEL II-AR, ZELLNER-GEISEL VERSUS PRAIS-WINSTON . . . 71

XIX. PERCENTAGE OF TIME THE HYPOTHESIS OF NO AUTOCORRELATION WAS ACCEPTED AND REJECTED BY THE DURBIN-WATSON D TEST FOR MODEL I-MA . . . 72

XX. PERCENTAGE OF TIME THE HYPOTHESIS OF NO AUTOCORRELATION WAS ACCEPTED AND REJECTED BY THE DURBIN-WATSON D TEST FOR MODEL I-AR . . . 73

XXI. PERCENTAGE OF TIME THE HYPOTHESIS OF NO AUTOCORRELATION WAS ACCEPTED AND REJECTED FOR THE ZELLNER-GEISEL RUNS ON MODEL II-MA . . . 74

XXII. PERCENTAGE OF TIME THE HYPOTHESIS OF NO AUTOCORRELATION WAS ACCEPTED AND REJECTED FOR THE WALLIS RUNS ON MODEL II-MA . . . 75

XXIII. PERCENTAGE OF TIME THE HYPOTHESIS OF NO AUTOCORRELATION WAS ACCEPTED AND REJECTED FOR THE ZELLNER-GEISEL RUNS ON MODEL II-AR . . . 78

XXIV. PERCENTAGE OF TIME THE HYPOTHESIS OF NO AUTOCORRELATION WAS ACCEPTED AND REJECTED FOR THE WALLIS RUNS ON MODEL II-AR . . . 79

CHAPTER I

INTRODUCTION

In numerous recent investigations in the literature, error

structures in the linear regression model other than the first-order

autoregressive or Markov model have appeared. In particular, a

moving average (MA) error structure has been applied in many different

areas: Trivedi (1970;1973) and Burrows and Godfrey (1973) have used

an MA error structure in their analysis of inventory; in consumption

dos Santos (1972) and Zellner and Geisel (1970) applied an MA error

structure; Hess (1973) applied it to durable goods demand; Rowley

and Wilton (1973) to wage determination; and Pesaran (1973) to

investment.

Despite this, many textbooks and applied researchers continue to

place emphasis on an autoregressive (AR) error structure as the only

alternative to the usual assumption of residual independence. Of the

many available econometrics textbooks, only a few devote any space to

a discussion of autoregressive-moving average (ARMA) error structures


and there is almost no mention of how these structures may arise in

applied econometric investigations. For example, in discussing

the familiar Koyck model (1954), several textbooks derive a reduced

1See for example, Pindyck and Rubinfeld (1976).


form equation similar to the following:

Y_t = λY_{t-1} + β(1 - λ)X_t + u_t                      (1)

where

u_t = v_t - λv_{t-1}.                                   (2)

If v_t is serially independent, u_t is autocorrelated and follows a first-order moving average (MA(1)) structure. Nevertheless, the

autocorrelation function of the MA(1) structure is seldom derived

or estimation methods explored.
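The MA(1) structure in (2) implies a lag-1 autocorrelation of -λ/(1 + λ²) and zero autocorrelation at all longer lags. A minimal simulation check of this (written in modern Python rather than the FORTRAN used in this study; function names are illustrative):

```python
import random

def ma1_series(lam, T, seed=0):
    """Simulate u_t = v_t - lam * v_{t-1} with v_t ~ N(0, 1)."""
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in range(T + 1)]
    return [v[t] - lam * v[t - 1] for t in range(1, T + 1)]

def acf(u, k):
    """Sample autocorrelation of u at lag k, about the sample mean."""
    T = len(u)
    m = sum(u) / T
    c0 = sum((x - m) ** 2 for x in u) / T
    ck = sum((u[t] - m) * (u[t + k] - m) for t in range(T - k)) / T
    return ck / c0

lam = 0.8
u = ma1_series(lam, 200_000)
print(acf(u, 1), -lam / (1 + lam ** 2))  # both near -0.488
print(acf(u, 2))                         # near zero: the ACF cuts off after lag 1
```

The sharp cutoff after lag 1 is what distinguishes an MA(1) error from the geometrically decaying autocorrelation of an AR(1) error.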

There are, of course, good reasons for the popularity of the

AR(1) error structure. The Durbin-Watson d test (1950;1951) and

Cochrane-Orcutt (1949) procedures have been known for many years and

are built into most canned regression packages while testing and

correction procedures for the more general class of ARMA models

have only recently been developed. Moreover, efficient estimation

of these alternative error structure models typically requires the

use of maximum likelihood or nonlinear least squares computer routines

sufficiently complicated to dissuade researchers who find it so easy to

use the canned routines.

In addition to the limited discussion and use of alternative

error structures, Nicholls, Pagan and Terrell in a recent survey

article (1975) report a ". . . paucity of small sample studies . . . ."

Most of the Monte Carlo studies they found were in terms of a pure

MA time series with no independent variable. A conspicuous

exception to this would be the work of Hendry and Trivedi (1972).

In an attempt to learn more about the small sample properties of

models with an MA(1) error structure, I have analyzed via Monte Carlo

techniques two simple econometric models using both AR(1) and MA(1)

error structures.

Model I

Model I is a simple linear, two variable econometric model,

Y_t = βX_t + u_t.                                       (3)

If u_t ~ NID(0, σ²), the ordinary least squares (OLS) estimate of β is unbiased, consistent and efficient. If u_t is autocorrelated, the OLS estimate of β is still unbiased but asymptotically inefficient.

Model I with an AR(1) Error Structure

If Ut in (3) follows a first-order autoregressive structure,

u_t = ρu_{t-1} + v_t                                    (4)

where -1 < ρ < 1 and v_t ~ NID(0, σ_v²), then a maximum likelihood (ML) procedure developed by Beach and MacKinnon (1978) will provide estimates for β and ρ that are asymptotically efficient.²

I compared the small sample results of the Beach and MacKinnon

procedure to OLS; the Beach-MacKinnon procedure is described in detail

in Chapter II and the results are presented in Chapter IV.

I will henceforth refer to this model cum error structure as

Model I-AR, that is Model I with an AR(1) error structure. I will

also refer to the Beach-MacKinnon procedure as the B-M procedure.

²Hildreth and Dent (1974) also have a procedure that provides efficient estimates but the Beach-MacKinnon procedure was more recent and it appears to use fewer computer resources.


Model I with an MA(1) Error Structure

If ut in (3) follows a first-order moving average structure,

u_t = v_t - λv_{t-1}                                    (5)

where -1 < λ < 1 and v_t ~ NID(0, σ_v²), then an ML procedure developed by Pesaran (1973) will provide estimates for β and λ that are asymptotically efficient.

I likewise compared the small sample results of the Pesaran

procedure to OLS; the Pesaran procedure is described in detail in

Chapter II and the results of my comparison are presented in Chapter

IV.

Henceforth, I will refer to this model as Model I-MA and to

the Pesaran procedure as PES.

Model II

Model II is a single-period lagged endogenous variable and

single exogenous variable econometric model,

Y_t = μY_{t-1} + βX_t + u_t.                            (6)

If u_t ~ NID(0, σ_u²), OLS will provide consistent estimates for μ and β. If, however, u_t is autocorrelated, OLS will provide biased, inconsistent and inefficient estimates. Additionally, Griliches (1961) and others have shown that as long as μ is positive, OLS will contain an upward bias in the estimate of μ.

Model II with an AR(1) Error Structure

If u_t in (6) follows the AR(1) error structure given by (4),

then a generalized least squares (GLS) procedure developed by

Wallis (1967) provides consistent but not fully efficient estimates of μ and β.

I compared the small sample results of the Wallis procedure to

OLS; the Wallis procedure is described in detail in Chapter III and

the results are presented in Chapter IV.

Henceforth, this model will be referred to as Model II-AR and

the Wallis procedure will be referred to as the WAL procedure.

Model II with an MA(1) Error Structure

If ut in (6) follows the MA(1) error structure given by (5),

then an ML procedure developed by Zellner and Geisel (1970) provides

consistent and efficient estimates of μ and β.

I compared the small sample results of the Zellner-Geisel

procedure to OLS; like the WAL procedure, the Zellner-Geisel procedure

is described in detail in Chapter III and the results are presented

in Chapter IV.

Model II with an MA(1) error structure will be referred to as

Model II-MA and I will refer to the Zellner-Geisel procedure as the

Z-G procedure.

Table I summarizes the models, error structures and procedures

as they are correctly applied to each other.

Testing for Autocorrelation

In conjunction with the comparisons of the four different

procedures and OLS, statistics on three different tests for

TABLE I

ESTIMATION PROCEDURES CORRECTLY APPLIED TO MODELS AND ERROR STRUCTURES

                                          Model
Error Structure                I: Y_t = βX_t + u_t    II: Y_t = μY_{t-1} + βX_t + u_t

AR(1): u_t = ρu_{t-1} + v_t    Beach-MacKinnon        Wallis

MA(1): u_t = v_t - λv_{t-1}    Pesaran                Zellner-Geisel

autocorrelation were collected. All three tests were designed to

detect the presence of AR autocorrelation and their performance when

the autocorrelation was MA(1) was monitored.

For Model I, a Durbin-Watson d test (1950;1951) was performed

after both the OLS regression and either a PES or B-M regression. Since both PES and B-M were designed to remove autocorrelation, one would expect the d test to indicate the presence of autocorrelation after OLS but not after PES or B-M. In addition, if

the d test indicated the presence of autocorrelation after an OLS

regression, I performed a Prais-Winston transformation and regression

procedure (1954) after which I did another d test.³ The Prais-Winston

or P-W procedure results could then be compared to the OLS and either

³I chose to use the Prais-Winston transformation and regression procedure over the Cochrane-Orcutt procedure because Prais-Winston retains the first observation and Cochrane-Orcutt does not. In addition, both Rao and Griliches (1969) and Spitzer (1979) have found the Prais-Winston procedure to be one of the best two-stage procedures.


PES or B-M results. I present the outcome of these comparisons in

Chapter IV.

Similarly for Model II, a Durbin-Watson d test was performed

after both the OLS and either Z-G or WAL regressions. But, since the

Durbin-Watson d test was not designed to detect autocorrelation in

models with lagged dependent variables, a Durbin h test (1970) and

a McNown test (1977) were also performed after each OLS regression.

Additionally, a Durbin h test was performed after a WAL regression.

If the McNown test indicated the presence of autocorrelation after an

OLS regression, a P-W type transformation was performed on the data

and another OLS regression was performed on the transformed data.

The outcome of the comparisons of the three tests for autocorrelation

and of the comparisons of the small sample properties of the various

regression procedures is presented in Chapter IV.
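Both the d and h statistics are simple functions of the regression residuals. A sketch of how they are computed (Python as a stand-in for the study's FORTRAN; in practice d is compared against the tabulated bounds and h against the standard normal):

```python
import math

def durbin_watson(e):
    """Durbin-Watson d = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(x * x for x in e)

def durbin_h(e, T, var_lag_coef):
    """Durbin's h for models with a lagged dependent variable.

    rho is approximated by 1 - d/2; var_lag_coef is the estimated
    variance of the lagged-Y coefficient.  h is undefined when
    T * var_lag_coef >= 1.
    """
    rho_hat = 1.0 - durbin_watson(e) / 2.0
    denom = 1.0 - T * var_lag_coef
    if denom <= 0.0:
        return None
    return rho_hat * math.sqrt(T / denom)

# d lies near 2 when residuals are uncorrelated, near 0 under strong
# positive autocorrelation and near 4 under strong negative autocorrelation:
alternating = [1.0, -1.0] * 10
print(durbin_watson(alternating))   # 3.8
```

The fallback case in `durbin_h` is why the McNown test is attractive here: h simply cannot be computed whenever T times the coefficient variance reaches one.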

Effects of Misspecification

I was also interested in what the outcome might be if one of

the four specialized procedures was applied to the wrong error

structure. For example, how well would the PES procedure remove AR(1)

autocorrelation when it was designed to handle MA(1) autocorrelation

and how well would it do estimating p when it was designed to estimate

X? This might occur if an applied researcher assumed or specified

that his error term was MA(1) and used PES when in actuality the error term was AR(1).

4See Durbin (1970).

Table II shows how all four specialized procedures were

incorrectly applied to error structures. All of the properties and

autocorrelation test results that were compared for the correctly

applied procedures were also compared for the incorrectly applied

ones. These results are also summarized in Chapter IV.

TABLE II

ESTIMATION PROCEDURES INCORRECTLY APPLIED TO MODELS AND ERROR STRUCTURES

                                          Model
Error Structure                I: Y_t = βX_t + u_t    II: Y_t = μY_{t-1} + βX_t + u_t

AR(1): u_t = ρu_{t-1} + v_t    Pesaran                Zellner-Geisel

MA(1): u_t = v_t - λv_{t-1}    Beach-MacKinnon        Wallis

CHAPTER II

MODEL I ESTIMATION

Model I is a simple linear, two variable econometric model,

Y_t = βX_t + u_t.                                       (1)

The Beach-MacKinnon (1978) or B-M procedure is used to estimate this

model when u_t follows an AR(1) error structure and the Pesaran (1973) or PES procedure is used to estimate this model when u_t follows an

MA(1) error structure.

In this chapter, I will explain each procedure in detail and I

will discuss the Monte Carlo procedure I followed in evaluating these

two procedures against OLS and Prais-Winston or P-W. The results of

the evaluation are presented in Chapter IV.

The Beach-MacKinnon Procedure for Model I-AR

The B-M procedure is an iterative technique which alternates

between the estimation of β and ρ until the estimates of each become

arbitrarily close to the previous iteration's estimates.

In matrix terms, the estimate of β, b, is obtained as follows:

b = (Z'Z)⁻¹Z'w                                          (2)

where

Z = QX                                                  (3)

w = QY                                                  (4)

and

    | (1 - r²)^(1/2)   0    0   . . .    0 |
    |      -r          1    0   . . .    0 |
Q = |       0         -r    1   . . .    0 |             (5)
    |       .                .             |
    |       0    . . .      0   -r       1 |

For the first iteration, the value of r, the estimate of ρ, is assumed to be zero.

To estimate ρ, first calculate the residuals using the previous iteration's estimate of β and the original X and Y data, that is,

e_t = Y_t - bX_t.                                       (6)

Then r is taken as the root in (-1, 1) of the cubic r³ + a₁r² + a₂r + a₃ = 0, calculated as

r = -2·[(a₁² - 3a₂)/9]^(1/2)·cos(φ/3 + π/3) - a₁/3      (7)

where

φ = cos⁻¹[(2a₁³ - 9a₁a₂ + 27a₃) / (2(a₁² - 3a₂)^(3/2))]         (8)

a₁ = -(T - 2)·Σe_t·e_{t-1} / [(T - 1)(Σe²_{t-1} - e²_1)]        (9)

a₂ = [(T - 1)e²_1 - T·Σe²_{t-1} - Σe²_t] / [(T - 1)(Σe²_{t-1} - e²_1)]      (10)

a₃ = T·Σe_t·e_{t-1} / [(T - 1)(Σe²_{t-1} - e²_1)].      (11)

T represents the number of observations and the summations run from t = 2 to t = T.
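The β-step of eqs. (2)-(5) is an ordinary regression on transformed data; only the first row of Q differs from simple quasi-differencing. A sketch of that step for Model I's single-regressor case (modern Python rather than the study's FORTRAN; the names are illustrative):

```python
import math

def q_transform(x, r):
    """Apply the transformation Q of eq. (5): the first observation is
    scaled by sqrt(1 - r^2) and the rest are quasi-differenced."""
    out = [math.sqrt(1.0 - r * r) * x[0]]
    out += [x[t] - r * x[t - 1] for t in range(1, len(x))]
    return out

def beta_step(X, Y, r):
    """The beta-update b = (Z'Z)^{-1} Z'w of eqs. (2)-(4); with a single
    regressor this reduces to a ratio of sums."""
    Z = q_transform(X, r)
    w = q_transform(Y, r)
    return sum(z * v for z, v in zip(Z, w)) / sum(z * z for z in Z)

X = [1.0, 2.0, 3.0, 4.0]
Y = [2.0, 4.0, 6.0, 8.0]        # noiseless data with beta = 2
print(beta_step(X, Y, 0.0))     # r = 0 reduces the step to OLS: 2.0
print(beta_step(X, Y, 0.5))     # Q is linear, so still exactly 2.0
```

Alternating this step with the ρ-update above, each using the other's latest estimate, is the iteration the B-M procedure runs to convergence.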


The B-M procedure was programmed in FORTRAN to run on a Control

Data Corporation (CDC) 6400 computer. This program is available

from the author upon request. The maximum number of iterations was

set at ten as Beach and MacKinnon (1978,p.53) suggested the average

number of iterations should run between four and seven to achieve

five-digit accuracy.

The Pesaran Procedure for Model I-MA

The PES procedure applies the simple and well-known iterative

method of False Position¹ to the first-order condition of the

concentrated log likelihood function

Σ(λ + cos(tπ/(T + 1)))·w_t⁻¹ · Σw_t⁻¹e_t² - T·Σ(λ + cos(tπ/(T + 1)))·w_t⁻²·e_t² = 0     (12)

where

b = (Z'WZ)⁻¹Z'WFy,                                      (13)

e = Fy - Zb,                                            (14)

Z = FX,                                                 (15)

    | f_11  f_12  . . .  f_1T |
F = | f_21  f_22  . . .  f_2T |                         (16)
    |  .     .            .   |
    | f_T1  f_T2  . . .  f_TT |

f_ij = (2/(T + 1))^(1/2)·sin(ijπ/(T + 1))               (17)

w_t = λ² + 2λ·cos(tπ/(T + 1)) + 1                       (18)

and W is the inverse of the diagonal matrix with w_1, w_2, . . ., w_T

¹See Hamming (1971, pp. 45-48).


as its diagonal elements. T represents the number of observations and

the summations run from t = 1 to t = T.
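Equations (16)-(18) work because the sine-transform matrix F diagonalizes the MA(1) error covariance, with the w_t as the resulting eigenvalues. Which w_t pairs with which row of F depends on an index convention, so the check below compares them as sorted sets. A small numerical verification (Python, for convenience):

```python
import math

def sine_transform(T):
    """The matrix F of eqs. (16)-(17): f_ij = sqrt(2/(T+1)) sin(ij*pi/(T+1))."""
    c = math.sqrt(2.0 / (T + 1))
    return [[c * math.sin(i * j * math.pi / (T + 1)) for j in range(1, T + 1)]
            for i in range(1, T + 1)]

def ma1_cov(lam, T):
    """Covariance of u_t = v_t - lam*v_{t-1}, up to sigma_v^2:
    1 + lam^2 on the diagonal, -lam on the first off-diagonals."""
    S = [[0.0] * T for _ in range(T)]
    for i in range(T):
        S[i][i] = 1.0 + lam * lam
        if i + 1 < T:
            S[i][i + 1] = S[i + 1][i] = -lam
    return S

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

T, lam = 6, 0.6
F = sine_transform(T)                        # F is symmetric, so F' = F
D = matmul(matmul(F, ma1_cov(lam, T)), F)    # F Sigma F'
off = max(abs(D[i][j]) for i in range(T) for j in range(T) if i != j)
eigs = sorted(D[i][i] for i in range(T))
w = sorted(1 + lam * lam + 2 * lam * math.cos(t * math.pi / (T + 1))
           for t in range(1, T + 1))
print(off < 1e-9)                            # True: F diagonalizes Sigma
print(all(abs(a - b) < 1e-9 for a, b in zip(eigs, w)))  # True
```

Because F and the w_t depend only on T and λ, the transformation costs no matrix inversion inside the False Position loop; only λ is updated at each iteration.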

Dr. Pesaran graciously provided a copy of his own FORTRAN

program which was modified slightly in order to incorporate his logic

into my Monte Carlo program.² Pesaran's own cut-offs were used, that

is, the program iterates up to a maximum of thirty times unless the

difference between successive estimates of β and λ is less than 10⁻⁴.

The Monte Carlo Procedure

The Procedure in General

After calculating an independent data series of the appropriate

length using

X_t = θX_{t-1} + ε_t                                    (19)

where θ = 0.8, X_0 = 0.0 and ε_t ~ NID(0, 1), an error series, u_t, was calculated under either an AR(1) or an MA(1) error structure. I used the internal pseudo uniform random number generator provided by the FORTRAN compiler on the CDC 6400 computer³ and a normal approximation

²Pesaran is planning to publish his programs under the title Dynamic Regression: Theory and Algorithms. See Pesaran (1976).

³To validate the series of random numbers generated by the CDC random number generator, twenty-five of the random number seeds were chosen at random from the 280 seeds required to complete this Monte Carlo investigation. These twenty-five seeds were then used to initiate a uniform random number series utilizing the pseudo uniform random number generator provided by the International Mathematical and Statistical Library (IMSL). A chi-square goodness-of-fit test was run on each of the series initiated by the twenty-five seeds. Fourteen times out of the twenty-five, the IMSL series had a better fit than CDC and eleven times out of twenty-five, CDC had a better series. Additionally, the Monte Carlo runs were re-run using the IMSL generator and the results compared against those obtained when


routine obtained from H. M. Wagner (1975, p. 934) to generate the normal

random numbers. A different random number seed drawn from a table of

random digits was used for each run to insure different random number

series.
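The data-generation step can be sketched as follows (Python rather than the original FORTRAN; the sum-of-twelve-uniforms trick is a classic normal approximation of the era and is only an illustration, since Wagner's actual routine is not reproduced here):

```python
import random

def normal_from_uniforms(rng):
    """Approximate a N(0, 1) draw by summing twelve Uniform(0, 1) draws
    and subtracting 6 (mean 0, variance 1 by the central limit theorem).
    Illustrative only; it need not match Wagner's routine."""
    return sum(rng.random() for _ in range(12)) - 6.0

def make_x_series(T, theta=0.8, seed=12345):
    """Generate X_t = theta * X_{t-1} + eps_t of eq. (19), with X_0 = 0.
    The fixed seed stands in for a seed drawn from a table of random digits."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(T):
        x = theta * x + normal_from_uniforms(rng)
        series.append(x)
    return series

xs = make_x_series(50)

# sanity check on the approximation: mean near 0, variance near 1
rng = random.Random(7)
draws = [normal_from_uniforms(rng) for _ in range(20_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The u_t series would be built from the same normal generator via (4) or (5), after which Y_t follows from (1).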

Using these X_t and u_t series, the dependent variable, Y_t, series was calculated using (1). The X_t and Y_t series then became

the data to be regressed. First an OLS regression was calculated,

the estimate of β saved, an estimate of ρ was calculated and saved and a Durbin-Watson d test was performed using an α of 0.05. If the

hypothesis of no autocorrelation was rejected, a P-W transformation

and estimation procedure was performed, the new estimates of β and ρ

saved and a new Durbin-Watson d test performed. If, following the

OLS regression, the hypothesis of no autocorrelation was not rejected,

the P-W estimates for this replication were taken to be the same as

those of the OLS regression. Following this, either B-M or PES

estimates were calculated and saved and another Durbin-Watson d test

performed.

At first, the computer runs were made for the correctly

specified error structure, that is, B-M estimates were obtained for

Model I-AR and PES estimates were obtained for Model I-MA. Then to

check the effects of misspecification, different computer runs were

made for the misspecified models: B-M estimates were obtained for Model I-MA and PES estimates for Model I-AR.

the CDC generator was used. No major change in the results was observed; the differences in the results were about the same as could be expected if a different random number seed was used with the same random number generator.


This general Monte Carlo procedure was replicated 100 times for

each set of input parameter values. This provided for each model and

for each set of input parameter values, 100 OLS estimates, 100 B-M or

FS estimates and 100 P-'W estimates as well as the results of the

three Durbin-Watson d tests. The number of times the Durbin-Watscn d

test proved inconclusive was collected along with the number of times

the hypothesis of no autocorrelation was accepted and rejected. The

comparison statistics, discussed later in this chapter, were

calculated on the 100 estimates collected on each run.

Input Parameter Values

The input parameter values were determined after examining

recent Monte Carlo studies by Beach and MacKinnon (1978), Rao and

Griliches (1969), Maddala and Rao (1973), Spitzer (1979), Kenkel

(1975) and Hendry and Trivedi (1972).

Sample sizes of 20 and 50 were selected. The independent variable coefficient, β, was set at 1.0 for all trials. The standard deviation of vt, σv, was set at 20 when T = 20 and at 40 when T = 50. Ten different values of λ and ρ were tried: -.99, -.8, -.6, -.3, -.1, .1, .3, .6, .8 and .99. There were then twenty different sets of input parameter values for both B-M and PES runs.

Comparison Statistics Collected and Analyzed

The following statistics for the OLS, B-M or PES and P-W estimates were collected: the sample mean, the sample variance, the average absolute bias, the largest and smallest absolute biases, the mean squared error (MSE) and the Euclidian distance à la Hendry and


Trivedi (1972,p.122), that is

E.D. = ( Σ (p̂i - pi)² )^(1/2)                                 (20)

where pi represents the true value of the ith parameter to be estimated, p̂i represents its estimate and i runs from i = 1 to i = the number of parameters to be estimated--two in the case of Model I. The Euclidian distance gives a measure of "aggregate" bias.
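In code, the per-run comparison statistics and the Euclidian distance (20) might be computed as follows (a small sketch; the function names and demo inputs are my own):

```python
def comparison_stats(estimates, true_value):
    """Sample mean, sample variance, average absolute bias and MSE
    of one parameter's estimates across the 100 replications."""
    n = len(estimates)
    mean = sum(estimates) / n
    var = sum((e - mean) ** 2 for e in estimates) / (n - 1)
    avg_abs_bias = sum(abs(e - true_value) for e in estimates) / n
    mse = sum((e - true_value) ** 2 for e in estimates) / n
    return mean, var, avg_abs_bias, mse

def euclidian_distance(estimates, true_values):
    """Equation (20): the 'aggregate' bias over all estimated parameters."""
    return sum((p_hat - p) ** 2
               for p_hat, p in zip(estimates, true_values)) ** 0.5

stats_demo = comparison_stats([1.0, 2.0, 3.0], 2.0)
ed_demo = euclidian_distance([4.0, 6.0], [1.0, 2.0])   # sqrt(9 + 16) = 5
```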

These statistics as well as the results of the Durbin-Watson d

tests were then output at the completion of a computer run on a

particular set of input parameter values and a particular model. The

consolidated results for all runs are presented in Chapter IV along

with a comparison of my results with the outcome of other Monte Carlo

studies in the literature.

CHAPTER III

MODEL II ESTIMATION

Model II includes a single-period lagged endogenous variable and a single exogenous variable

Yt = β1Yt-1 + β2Xt + ut.                                      (1)

This model was originally derived by Koyck (1954) from an infinite

geometric lag model which arises when the adaptive expectations or

partial adjustment models are used. The Zellner-Geisel (1970) or

Z-G procedure was developed to estimate this model when the error

term, ut, follows an MA(1) structure. Wallis (1967) developed an

estimation procedure to handle the AR(1) error structure; I will use

WAL when referring to this procedure.

As in the last chapter, first I will describe each estimation

procedure separately and then I will outline the Monte Carlo

procedure I used to analyze the small sample properties of these

two procedures compared to those of OLS and Prais-Winsten (P-W). The

results of this analysis are in Chapter IV.

The Wallis Three-Step Procedure for Model II-AR

The Wallis procedure is basically a generalized least squares

(GLS) technique that Wallis himself separated into three steps.1

1 This procedure is not to be confused with three-pass least


First, estimate the coefficients of Model II via OLS but

substitute Xt_1 as an instrument for Yt-1. Second, using the

residuals from this regression, that is,

et = Yt - b1Yt-1 - b2Xt,                                      (2)

calculate an estimate of ρ making a correction for bias2

r = Σ(et et-1) / Σ(et²) + 2/T                                 (3)

where T represents the number of observations, the summation in the numerator runs from t = 2 to t = T and the summation in the denominator runs from t = 1 to t = T. Third, use this estimate of ρ

to calculate the matrix

        [ 1        r        r     ...  r^(T-1) ]
        [ r        1        r     ...  r^(T-2) ]
    Ω = [ .        .        .          .       ]              (4)
        [ r^(T-1)  r^(T-2)  ...        1       ]

which is then used to obtain the GLS estimates of β1 and β2

b = (X'Ω⁻¹X)⁻¹X'Ω⁻¹y.                                         (5)

squares. In fact, Wallis wrote his article introducing his procedure in response to a previous paper by Taylor and Wilson (1964) on three-pass least squares.

2In general, the correction for bias is k/T, where k is the number of parameters to be estimated.

The WAL procedure requires the least amount of computer

resources as it does not iterate nor search; only two least squares

matrix inversions are required for each replication.
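The three steps described above can be sketched in pure Python as follows. This is my own illustration, not the author's FORTRAN: step 1 is read as an instrumental-variables fit, and step 3 exploits the standard equivalence between GLS with the AR(1) matrix (4) and a Prais-Winsten-style quasi-difference transformation instead of inverting the T×T matrix directly.

```python
import random

def solve2(a11, a12, a21, a22, c1, c2):
    """Solve the 2x2 system [[a11, a12], [a21, a22]] b = [c1, c2]."""
    det = a11 * a22 - a12 * a21
    return (c1 * a22 - a12 * c2) / det, (a11 * c2 - a21 * c1) / det

def iv2(z1, z2, x1, x2, y):
    """Two-regressor instrumental-variables fit: solve Z'X b = Z'y.
    With z1 == x1 and z2 == x2 this reduces to ordinary least squares."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return solve2(dot(z1, x1), dot(z1, x2), dot(z2, x1), dot(z2, x2),
                  dot(z1, y), dot(z2, y))

def wallis(y, x):
    """Wallis three-step estimates for Y_t = b1*Y_{t-1} + b2*X_t + u_t."""
    T = len(y)
    Y, Ylag, X, Xlag = y[1:], y[:-1], x[1:], x[:-1]
    # Step 1: estimate the model with X_{t-1} as an instrument for Y_{t-1}
    b1, b2 = iv2(Xlag, X, Ylag, X, Y)
    # Step 2: residuals (2) and the bias-corrected estimate of rho, (3)
    e = [Y[i] - b1 * Ylag[i] - b2 * X[i] for i in range(T - 1)]
    r = (sum(e[i] * e[i - 1] for i in range(1, T - 1))
         / sum(v * v for v in e) + 2.0 / T)
    # Step 3: GLS (5) via the equivalent quasi-difference transformation
    s = (1.0 - min(r * r, 0.9999)) ** 0.5
    pw = lambda u: [s * u[0]] + [u[i] - r * u[i - 1] for i in range(1, len(u))]
    return iv2(pw(Ylag), pw(X), pw(Ylag), pw(X), pw(Y)) + (r,)

# Demo: recover b1 = 0.5, b2 = 1.0 from lightly noised synthetic data
_rng = random.Random(0)
xs = [_rng.gauss(0.0, 1.0) for _ in range(300)]
ys = [0.0]
for t in range(1, 300):
    ys.append(0.5 * ys[-1] + xs[t] + _rng.gauss(0.0, 0.05))
b1_hat, b2_hat, r_hat = wallis(ys, xs)
```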

The Zellner-Geisel Procedure for Model II-MA

The Zellner-Geisel procedure is a maximum likelihood (ML)

technique that requires that Model II first be transformed. After

recognizing that in this model β1 is really λ and after defining

wt = Yt - vt,                                                 (6)

you can obtain the transformed model3

Yt = w0λ^t + β2(Xt + λXt-1 + λ²Xt-2 + ... + λ^(t-1)X1) + vt.  (7)

Using OLS on (7), search over various values of λ from zero to one looking for the minimum residual sum of squares. The program which implements the Z-G procedure performs a grid search of λ for values between zero and one and finds to five-digit accuracy the set of estimates that yield the minimum residual sum of squares.
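The transformation and grid search might be sketched like this (my own illustration; the demo parameter values are arbitrary, and a coarse 0.001 grid stands in for the five-digit search):

```python
import random

def koyck_regressors(x, lam):
    """Regressors of the transformed model (7): lam^t and
    X_t + lam*X_{t-1} + lam^2*X_{t-2} + ... + lam^(t-1)*X_1."""
    z1, z2, p, acc = [], [], 1.0, 0.0
    for xt in x:
        p *= lam
        acc = xt + lam * acc
        z1.append(p)
        z2.append(acc)
    return z1, z2

def zg_search(y, x, step=0.001):
    """Zellner-Geisel grid search (sketch): OLS on (7) for each
    candidate lambda in (0, 1), keeping the minimum-RSS fit."""
    def ols2(z1, z2, target):
        a11 = sum(a * a for a in z1)
        a12 = sum(a * b for a, b in zip(z1, z2))
        a22 = sum(b * b for b in z2)
        c1 = sum(a * t for a, t in zip(z1, target))
        c2 = sum(b * t for b, t in zip(z2, target))
        det = a11 * a22 - a12 * a12
        w0 = (c1 * a22 - a12 * c2) / det
        b2 = (a11 * c2 - a12 * c1) / det
        rss = sum((t - w0 * a - b2 * b) ** 2
                  for a, b, t in zip(z1, z2, target))
        return w0, b2, rss
    best, lam = None, step
    while lam < 1.0:
        w0, b2, rss = ols2(*koyck_regressors(x, lam), y)
        if best is None or rss < best[3]:
            best = (lam, w0, b2, rss)
        lam = round(lam + step, 6)
    return best  # (lambda, w0, b2, rss)

# Demo: data generated with lambda = 0.6, w0 = 2.0, beta2 = 1.0
_rng = random.Random(1)
xs = [_rng.gauss(0.0, 1.0) for _ in range(100)]
w, ys = 2.0, []
for xt in xs:
    w = 0.6 * w + xt                # w_t = lam*w_{t-1} + b2*x_t
    ys.append(w + _rng.gauss(0.0, 0.01))
lam_hat, w0_hat, b2_hat, _rss = zg_search(ys, xs)
```

A coarse-then-fine refinement of the same search would reach the five-digit accuracy used in the study without stepping through 10^5 grid points.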

Both procedures, WAL and Z-G, were programmed in FORTRAN to run on a Control Data Corporation (CDC) 6400 computer. Both programs are

available from the author upon request.

3See Johnston (1972,pp.313-315) for the derivation.


The Monte Carlo Procedure

The Procedure in General

The data generation for Model II is exactly the same as for Model I except that after the calculation of the Xt series and the appropriate ut series, Model II was used to generate the Yt series,

Yt = β1Yt-1 + β2Xt + ut                                       (8)

where Y0 = 200 for all trials.
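As a sketch of this generation step (the Xt process and the parameter values below are illustrative placeholders of mine; the MA(1) sign convention ut = vt - λvt-1 follows the form used in the study):

```python
import random

def gen_model2(b1, b2, x, errors, y0=200.0):
    """Generate Y_t = b1*Y_{t-1} + b2*X_t + u_t, eq. (8), with Y_0 = 200."""
    y, prev = [], y0
    for xt, ut in zip(x, errors):
        prev = b1 * prev + b2 * xt + ut
        y.append(prev)
    return y

def ar1_errors(T, rho, sigma_v, rng):
    """u_t = rho*u_{t-1} + v_t, started from the stationary distribution."""
    u = [rng.gauss(0.0, sigma_v) / (1.0 - rho * rho) ** 0.5]
    for _ in range(1, T):
        u.append(rho * u[-1] + rng.gauss(0.0, sigma_v))
    return u

def ma1_errors(T, lam, sigma_v, rng):
    """u_t = v_t - lam*v_{t-1}, the MA(1) form used in the study."""
    v = [rng.gauss(0.0, sigma_v) for _ in range(T + 1)]
    return [v[t + 1] - lam * v[t] for t in range(T)]

_rng = random.Random(42)
xs = [_rng.gauss(0.0, 10.0) for _ in range(50)]   # illustrative X_t series
ys_ar = gen_model2(0.4, 1.0, xs, ar1_errors(50, 0.6, 1.0, _rng))
ys_ma = gen_model2(0.4, 1.0, xs, ma1_errors(50, 0.6, 1.0, _rng))
```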

As with Model I, an OLS regression was performed first on the

generated data and the estimates were saved. Using the residuals

from this regression, a Durbin-Watson d test was performed as well as

a Durbin h test and a McNown test.

Durbin (1970) and others recognized that the Durbin-Watson d

test is not applicable when there are lagged endogenous variables

as regressors in the model. In fact, Griliches (1961) notes that the

d statistic is biased toward 2.0 or biased toward the acceptance of

the hypothesis of no autocorrelation. Both Durbin (1970) and

McNown (1977) have developed alternate tests to be used in place of

the d test.

Durbin suggests the use of his h test, that is calculate

h = r·sqrt( T / (1 - T·V(b1)) )                               (9)

where T is the sample size, r is an estimate of ρ based on the OLS regression residuals and V(b1) is the estimated variance of the coefficient of Yt-1. The h statistic is used as a standard normal deviate to test the hypothesis of ρ = 0. If T·V(b1) > 1, the h test is


not applicable and Durbin suggests an alternate test for this

situation.

If the h test is inapplicable, the following regression should be performed on the sample residuals using OLS:

et = a1et-1 + a2Yt-1 + a3Xt + εt                              (10)

where

et = Yt - b1Yt-1 - b2Xt                                       (11)

and t runs from t = 2 to t = T. The test for autocorrelation of ut in (1) consists in testing the significance of the estimate of a1 in (10).4

McNown proposed a similar test to be used in place of the Durbin-Watson d for lagged endogenous variable models. First perform an OLS regression of the following model:

Yt = a1Yt-1 + a2Yt-2 + a3Xt + a4Xt-1 + εt                     (12)

where t runs from t = 3 to t = T. To test the hypothesis of no autocorrelation of ut in (1), perform a significance test on the estimate of a4 in (12).
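A minimal sketch of the h statistic (9) and its applicability check (V(b1) denotes the estimated variance of the OLS coefficient on Yt-1; the fallback to the auxiliary regression (10) is only noted in a comment):

```python
def durbin_h(r, T, var_b1):
    """Durbin's h statistic, eq. (9); returns None when T*V(b1) >= 1,
    in which case the auxiliary regression test (10) must be used."""
    if T * var_b1 >= 1.0:
        return None
    return r * (T / (1.0 - T * var_b1)) ** 0.5

h_ok = durbin_h(0.3, 50, 0.01)     # applicable: 50*0.01 = 0.5 < 1
h_bad = durbin_h(0.3, 50, 0.02)    # 50*0.02 = 1: h is undefined
```

The value h_ok would be compared against a standard normal critical value to test ρ = 0; the McNown alternative (12) is likewise a significance test on a single coefficient of an auxiliary OLS regression.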

If, based on the outcome of the McNown test, the hypothesis of

no autocorrelation of ut in (1) was rejected, a Prais-Winsten (P-W) transformation was performed on the data and new estimates of β1, β2 and λ or ρ were calculated and saved.

4See Durbin (1970).

5Out of 200,000 sets of generated data for Model II, the Durbin h test was inapplicable only twelve times.



Finally, either a Z-G or WAL procedure regression was performed,

the estimates saved and a Durbin-Watson d test was performed using the

residuals from this regression. If the WAL procedure was used, a

Durbin h test was performed in addition to the d test.

As with Model I, the Monte Carlo procedure just outlined was

replicated 100 times for each set of input parameters and for each

error structure. Also each procedure, Z-G and WAL, was applied not

only to the error structure for which it was developed, but also to

the incorrect error structure.

Input Parameter Values

As stated in Chapter II, the values of the input parameters

were determined after a review of the recent literature. Sample sizes

of 20 and 50 were tried as with Model I. The coefficient of the exogenous variable, β2, was set at 1.0 for all trials. The coefficient of the lagged endogenous variable, β1, was set at either 0.4 or 0.8 if the WAL procedure was being run; if a Z-G procedure was being run, β1 was set equal to the input value of λ or ρ.

The standard deviation of vt, σv, was calculated based on signal-to-noise ratios of either 10 or 100. Maddala and Rao (1973,p.769) derived the white noise variance, σv², from the signal-to-noise ratio, G, when the error structure was AR(1) obtaining the following:

σv² = β2²σx² · [(1 + θβ1)/(1 - θβ1)] · [(1 - ρβ1)/(1 + ρβ1)] · [(1 - ρ²)/G]   (13)

where θ = 0.8 and σx² = 10.0 throughout all trials. The corresponding


formula for an MA(1) error structure is as follows:

σv² = β2²σx² · [(1 + θβ1)/(1 - θβ1)] · [1/(1 + λ² - 2λβ1)] · (1/G)            (14)

where θ = 0.8 and σx² = 10.0 throughout all trials.

Ten different values of λ and ρ were tried: -.99, -.8, -.6, -.3, -.1, .1, .3, .6, .8 and .99. However, because the Z-G procedure searches only between zero and one for the value of the autocorrelation parameter, only positive values of λ and ρ were attempted for the Z-G runs.

The combination of all these input parameter values represents

twenty different sets for Z-G runs and eighty different sets for

WAL runs.

Comparison Statistics Collected and Analyzed

The same comparison statistics were collected and analyzed for

Model II as were collected and analyzed for Model I. The results of

my analysis are presented and discussed in the next chapter along

with comparisons of my results to those found in the literature.

CHAPTER IV

RESULTS AND CONCLUSIONS

In this chapter, the estimated parameter comparison statistics

results are presented before the results of the autocorrelation

detection tests. Within each section of results, Model I will be

discussed before Model II. But before proceeding to the results, an explanation of Table III through Table XVIII is required.

Table III through Table XVIII contain the percentage differences between the estimated parameter comparison statistics of two regression procedures for two or three parameters. For each parameter that was estimated, the percentage differences for bias, variance and mean squared error (MSE) are displayed along with the percentage difference for the Euclidian distance (E.D.).1 Each row of

a table represents a different set of input parameter values.

The percentage difference was calculated using this formula:

Percentage Difference = [(Pb - Pa)/Pb]·100                    (1)

where Pb represents the estimated parameter comparison statistic for the second procedure listed in the title of the table and Pa

1Although the largest absolute bias and the smallest absolute bias statistics were collected and analyzed, they added little or nothing to the understanding of the problem. Consequently, these results are not included in the tables.


represents the estimated parameter comparison statistic for the first

procedure listed. For example, the first procedure listed in Table

III is Pesaran and OLS is the second.

The sign of the percentage difference gives a qualitative

comparison between the two procedures and the absolute value of the

percentage difference gives a quantitative comparison between the two.

If the percentage difference is negative, the estimated parameter

comparison statistic for the second procedure was less than that for

the first procedure and the larger the absolute value of the

percentage difference, the greater the difference between the two

procedures. On the other hand, if the percentage difference is

positive, the first procedure was better than the second, but a word

of caution is in order in analyzing the quantitative difference

because the percentage difference is not symmetric about zero. When

the percentage difference is positive, the largest value possible is

100 due to the way in which it is calculated. A percentage difference

of 50 is comparable to a negative percentage difference of -100; a

percentage difference of 75 is comparable to a negative percentage

difference of -300; 88 is comparable to -700; 90 to -900; 95 to -1300

through -2100; 98 to -3900 through -5900; and a percentage difference

of 100 is comparable to any negative percentage difference less than

-19,900.
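Taking the percentage difference as 100·(Pb - Pa)/Pb, the equivalences quoted above can be verified numerically (a small illustration; the explicit formula is my reading of the definition):

```python
def pct_diff(pa, pb):
    """Percentage difference between the first (pa) and second (pb)
    procedure's comparison statistic: positive when pa < pb, capped
    at 100 from above but unbounded from below."""
    return 100.0 * (pb - pa) / pb

d50 = pct_diff(1.0, 2.0)        # second statistic twice the first
d_neg100 = pct_diff(2.0, 1.0)   # first statistic twice the second
d75 = pct_diff(1.0, 4.0)        # ratio of four, one direction
d_neg300 = pct_diff(4.0, 1.0)   # ratio of four, the other direction
```

A two-to-one ratio between procedures thus appears as +50 on one side of zero but as -100 on the other, which is exactly the asymmetry cautioned about above.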

Comparison Statistic Results for Model I

Since Model 1 is a simple linear regression model, OLS estimates

will be unbiased but not asymptotically efficient when the error term

is autoccrrelated. We are therefore mcst interested in the small


sample variance but we cannot overlook the bias in these small

samples. The results for PES and B-M applied to their correct error

structure will be discussed before the results of the incorrectly

applied error structures.

Pesaran Procedure on MA(1)

The results on the Pesaran procedure are mixed but in general both OLS and P-W seemed to be slightly better. Looking at Table III and Table IV, more negative percentage differences can be seen than positive ones. The percentage differences for λ are nearly all negative meaning that OLS and P-W were almost always better than PES; this is quite surprising considering that PES is truly estimating λ while OLS and P-W both estimated ρ as a proxy for λ. If PES had any edge over OLS and P-W on the estimation of λ, it would be in the area of λ's variance when λ was large, either negative or positive.

PES showed up better in the estimation of β where for both large positive and negative values of λ, PES had some rather large positive percentage differences.

The most disturbing result of the PES runs on Model I-MA was that all three procedures estimated the wrong sign on λ for all twenty sets of input parameter values.2 OLS and P-W were really estimating ρ and that may well be the reason for these two procedures obtaining the wrong sign, but the PES procedure is designed to estimate λ and the fact that it estimated the wrong sign makes this procedure suspect.

2This result cannot be seen from the tables.


Since the PES procedure uses the False Position algorithm as mentioned in Chapter II and this algorithm approaches the root of the function from both sides of zero, it is possible that the algorithm may not work and therefore the PES procedure cannot be performed. The program that performed the PES procedure kept track of the number of times it was impossible to perform the PES procedure because of the failure of the algorithm. In no case on the PES runs on Model I-MA did the algorithm experience a failure.

Likewise, the P-W procedure requires the taking of a square root

and applying it to the first observation. Therefore, it is quite

possible that the value under the square root radical could be nega-

tive making it impossible to perform P-W. All four procedure programs

kept track of the number of times it was impossible to perform P-W due

to a negative value under the square root radical. During the PES

runs on Model I-MA, this problem never occurred.

As mentioned in Chapter II, Pesaran's own cut-offs of thirty

iterations or parameter differences of less than 10^-4 were used. The PES procedure reached the thirty iteration cut-off rarely when T was 50 but when T was 20, the thirty iteration cut-off was reached frequently with low values of λ and almost every time with values of λ near ±1.

Some reluctance should accompany the use of the PES procedure to estimate λ and β for Model I-MA. A better alternative appears to be that of P-W since on all comparison statistics it was better than OLS. The P-W estimate of ρ would have to be used as the estimate for λ but as can be seen from the tables, the P-W estimate of ρ does not appear to be a poor proxy for λ.


Beach-MacKinnon Procedure on AR(1)

Rao and Griliches (1969) found that non-linear maximum likelihood procedures like Beach-MacKinnon are no improvement over simpler two-stage procedures like Prais-Winsten in samples of the same size. But Spitzer (1979) in trying to duplicate the work of Rao and Griliches came to the opposite conclusion. My results tend to favor Spitzer.

Looking at Table V, the percentage differences for ρ are rather small and for large positive and negative values of ρ, the B-M variance is less than that for OLS. Bias favors OLS when ρ is negative but B-M had the lower bias for positive values of ρ. The results for MSE are very close to those for variance.

As for β, the percentage differences are larger than for ρ and all three comparison statistics seem to indicate that B-M is better than OLS when ρ is large. The Euclidian distance also tends to favor B-M when ρ is near ±1, especially for positive values.

Table VI compares B-M to P-W and the results are generally the same as for OLS except that the percentage differences tend to be larger, especially for the estimation of β. For negative ρ, there is no clear winner on the bias of β.

A surprising outcome of the B-M runs on Model I-AR was that OLS tended to do a better job in terms of Euclidian distance than P-W. In terms of the other comparison statistics, P-W did a better job than OLS on the estimation of β; OLS came out better on the estimation of ρ. OLS had some very large sample variances on the estimate of ρ especially when |ρ| was large and this accounted for the P-W showing.


Due to negative values under the square root radical, P-W estimates could not be calculated when |ρ| = .99 on several sets of data although for far less than half of the number of sets of data generated and for far fewer times when T was 50.

Since there are two different square roots to be taken in the B-M procedure--see (5) and (7) in Chapter II--and there is the chance that the procedure might try to take the square root of a negative number, it is possible that the B-M procedure cannot be performed. The B-M procedure program kept track of the number of times it was impossible to perform a B-M procedure due to negative values under the square root radical. In no case was it impossible to calculate B-M estimates on Model I-AR.

The ten iteration cut-off was reached only rarely but when it was, it was more likely that the input value of ρ was near zero.

In general, the B-M procedure appears to be quite reliable for the estimation of Model I-AR.

Beach-MacKinnon Procedure on MA(1)

The Beach-MacKinnon procedure does a job nearly comparable to OLS on the estimation of λ but it is inferior to P-W on the estimation of λ. The estimation of β tends to favor B-M over both OLS and P-W.

Table VII3 presents the results of B-M versus OLS and although there are many negative percentage differences for the three comparison statistics for λ, only a few are in double digits and many are zero. So technically OLS is the winner but not by much. As for

3Since B-M is really trying to estimate ρ, all three procedures are estimating ρ as a proxy for λ; hence the ρ/λ label in Table VII and Table VIII.


the estimation of β, the percentage differences are larger and many of them are positive indicating that B-M was better than OLS. The bias column seems to be sprinkled with negative values in a random fashion but for positive and large negative values of λ, B-M had the edge over OLS. The Euclidian distance, which is an "aggregate" bias, indicates that there was little difference between B-M and OLS.

Table VIII has far more negative percentage differences and the values are much larger especially for λ and the Euclidian distance. The results on β are roughly the same as for B-M versus OLS.

At no time was it impossible to do a B-M procedure or a P-W procedure because of a negative value under the square root radical. The average number of iterations required for a B-M regression was slightly higher with the smaller sample size and the average fell as the value of λ grew. The ten iteration cut-off was reached more often with a negative λ than with a positive one.

As with the PES runs on Model I-MA, P-W came out the clear winner against OLS; all comparison statistics favored P-W.

Again, however, all three procedures estimated the wrong sign on λ for all possible sets of data. I believe the reason for the wrong sign estimate is that the form of the MA(1) error structure given in Chapter I has a -λ whereas the form of the AR(1) error structure--(4) in Chapter I--has a positive ρ. In other words, the regression procedures are trying to estimate -λ.

Pesaran Procedure on AR(1)

The Pesaran procedure turns out to be clearly inferior to OLS in terms of the estimation of ρ; as for the comparison of PES and OLS


on β and for PES versus P-W, the results are mixed. These results are displayed in Table IX and in Table X.

On the estimation of β, the Pesaran procedure appears to have an edge over OLS in terms of variance and MSE when ρ is near one; in the cases of bias and Euclidian distance, PES seemed better than OLS when ρ was positive.

Comparing PES to P-W, you can see almost the opposite results to PES versus OLS. PES was better than P-W in the estimation of ρ for large values of ρ; this was also true for Euclidian distance. In terms of the estimation of β, P-W appeared to be better than PES when ρ > 0.6.

In results not tabulated, PES and OLS almost consistently overestimated ρ when T was set at 50. At no time was it impossible to perform a PES procedure but when |ρ| = .99, the P-W procedure couldn't be performed 10-20 times out of 100 due to negative values under the square root radical. The thirty iteration cut-off on the PES procedure was reached more often for large values of ρ and for T = 20.

Comparing OLS and P-W, P-W was only slightly better than OLS on the variance of both ρ and β. On bias and MSE, P-W was generally better than OLS on the estimation of β, while OLS was better on ρ. In addition, OLS was generally better than P-W in terms of Euclidian distance.

The fact that P-W did not prove superior to OLS on Model I-AR still concerns me and I am unable to account for it. Both Rao and Griliches (1969) and Spitzer (1979) found P-W to be one of the best two-stage procedures but I am disappointed in its performance when the error structure was AR(1). However, P-W did seem to remove the AR(1)


autocorrelation as will be shown later in this chapter and it did

seem to do a better job than OLS when the error structure was MA(1).

Before ending this section on Model I, I think it would be instructive to repeat the results concerning whether it was possible to perform a P-W procedure, remembering that if (1 - r²) < 0, a P-W transformation cannot be performed. In no case under an MA(1) error structure was it impossible to perform P-W and under an AR(1) error structure only large values of ρ caused the value under the square root radical to become negative and thereby make it impossible to perform P-W. This suggests that if a P-W procedure can't be performed, it is quite likely that the error structure is not MA(1).
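The feasibility check amounts to the following sketch (the transform scales the first observation by sqrt(1 - r²) and quasi-differences the rest; the function name is mine):

```python
def pw_transform(series, r):
    """Prais-Winsten transform of one data series; returns None when
    1 - r**2 < 0 and the first observation cannot be rescaled."""
    if 1.0 - r * r < 0.0:
        return None
    s = (1.0 - r * r) ** 0.5
    return [s * series[0]] + [series[t] - r * series[t - 1]
                              for t in range(1, len(series))]

ok = pw_transform([1.0, 2.0, 3.0], 0.5)    # |r| <= 1: transform exists
bad = pw_transform([1.0, 2.0, 3.0], 1.2)   # |r| > 1: P-W impossible
```

An estimate of ρ with |r| > 1, which the text reports only for large true ρ under AR(1), is exactly the case where the transform fails.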

Comparison Statistics Results for Model II

Model II is a single-period lagged endogenous variable and single exogenous variable econometric model. If the error term is autocorrelated, OLS will provide biased, inconsistent and inefficient estimates. The small sample bias, therefore, is the comparison statistic of most interest but the other comparison statistics provide noteworthy information as well. The results for Z-G and WAL applied to their correct error structures will be discussed first, then the results of the incorrectly applied error structures will be presented.

Zellner-Geisel Procedure on MA(1)

The Zellner-Geisel procedure would appear to be the procedure of choice when the value of λ is near one. Table XI displays mostly positive percentage differences when λ > 0.6 and Table XII is nearly all positive percentage differences. In addition, the percentage



differences arrayed in Table XII, which compares Z-G to P-W, are

almost all 90 or above indicating that Z-G was clearly better than

P-W based on the outcome of the McNown test.

The large negative percentage differences for the estimation of λ in Table XI for low values of λ are quite consistent with Morrison (1970). He found that OLS was good for small λ and small signal-to-noise ratios but that it degenerated rapidly as λ and the signal-to-noise ratio got larger. He also felt that maximum likelihood techniques gave the best estimates.

Morrison reported that the degeneration of the OLS estimates as λ and the signal-to-noise ratio increased was due to the fact that the OLS estimates displayed a strongly negative bias. OLS did provide estimates of λ that almost consistently underestimated the true value of λ. However, OLS did not consistently underestimate β2. P-W did underestimate λ more often than not and P-W far underestimated β2 in all cases although the bias was less as λ and the signal-to-noise increased. Morrison's results and mine do conflict with Griliches (1961) who concluded that as long as β1 in Model II is positive, OLS will overestimate it. Under an AR(1) error structure, OLS overestimated β1 during the Z-G runs eighteen times out of twenty but during the WAL runs on both MA(1) and AR(1) error structures, OLS more often than not underestimated β1.

As could be discerned from the above discussion, OLS was

clearly superior to P-W; this superiority was evident in all

comparison statistics. In other non-tabulated results, the bias,

4Recall that for Model II-MA, β1 and λ are the same.


variance, MSE and the Euclidian distance statistics tended to fall for

both parameter estimates and for all procedures as the signal-to-noise

ratio increased. This last result is not unexpected.

In his analysis of distributed lag estimators, Sargent (1968)

concluded that there was not much difference between special

procedures and OLS. My results tend to point to a different

conclusion: Z-G is a useful procedure especially when λ is large.

Wallis Procedure on AR(1)

OLS proved to be better than WAL for all three parameter estimates for all values of ρ and β1 except when ρ > 0.3 and β1 = 0.8. WAL proved better than P-W for positive values of ρ.

The results of the comparison between WAL and OLS are provided

in Table XIII which had to be continued on two additional pages

because of the eighty possible sets of input parameter values. The

results for WAL versus P-W can be found in Table XIV which is also

three pages long.

As can be seen in Table XIII, the large negative percentage differences for the comparison statistics for the estimate of ρ and for the Euclidian distance when the value of ρ is negative indicate that OLS was better than WAL. Although not all the comparison statistics for the estimates of β1 and β2 have negative percentage differences nor are the differences as great as those for the estimate of ρ, most of the percentage differences are negative. For negative values of ρ, OLS is the choice over WAL.

The OLS superiority continues through all positive values of ρ, although for values of ρ > 0.3, more and more of the percentage


differences are positive indicating that WAL was better than OLS. But notice that even for large positive values of ρ, when β1 = 0.4, the percentage differences for the estimate of ρ are negative, favoring OLS over WAL.

As for WAL versus P-W, WAL is favored on the estimate of β1 for nearly every set of input parameter values. WAL is also favored on the estimate of β2 for values of ρ > -0.6 but WAL is favored over P-W on the estimate of ρ only for ρ > 0.3.

In non-tabulated results, WAL overestimated ρ when ρ < 0.6; WAL underestimated β1 sixty times out of eighty and nearly every time that β1 = 0.4. Additionally, WAL estimated a positive ρ twenty-six times out of forty when ρ was negative but every time the true ρ was positive, WAL estimated it as positive. P-W underestimated β1 in nearly every case but both OLS and P-W always obtained the correct sign on the estimate of ρ.

Concerning OLS versus P-W, in all but the variance of the estimate of ρ, OLS was better than P-W. The variance of the estimate of ρ favored P-W slightly over OLS for mid-range values of ρ, but for large positive and negative values of ρ, OLS was better than P-W even for the variance of the estimate of ρ.

Although the signal-to-noise ratio had no appreciable effect on the estimate of ρ, the bias, variance and MSE for the estimates of β1 and β2 tended to fall for all procedures when the value of the signal-to-noise ratio went from 10 to 100.



Wallis Procedure on MA(1)

The results for this misspecified model are not very clear-cut. Both the bias on the estimate of λ and the Euclidian distance favor OLS over WAL for negative values of λ but they favor WAL over OLS for positive λ. Otherwise, the results are mixed.

Table XV and Table XVI provide the results on the WAL runs on Model II-MA. The value of β1 had an important effect on the results. When β1 = 0.4, WAL is favored over OLS either by an outright positive percentage difference or by a reduction in the size of the negative percentage difference compared to when β1 = 0.8. This result is most marked when λ is negative and can be seen in the comparison statistics for all three parameter estimates as well as for the Euclidian distance.

Comparing WAL and P-W, WAL comes out the clear choice in terms of the estimates of β1 and β2 as nearly every percentage difference for these two parameter estimates is 95 or above. As for the estimate of λ, the variance tended to favor P-W through all values of λ but both the bias and the MSE favored P-W for negative values of λ while WAL was better for positive values of λ. The Euclidian distance statistic favored P-W for λ < -0.3 but favored WAL for λ > -0.6.

As was the case with Model I-MA, the OLS and P-W procedures estimated the wrong sign on λ in all cases for the WAL runs on Model II-MA. The WAL procedure in all but nine cases estimated a positive λ; the nine cases where WAL estimated a negative λ occurred when the true value of λ was > 0.3 and when T was 50.

The OLS versus P-W comparison yielded mixed results. The bias, variance and MSE on the estimate of λ favored P-W but OLS was almost consistently better than P-W on the estimation of β1 and β2. As far


as the Euclidian distance, OLS tended to be better than P-W for mid-range values of λ but for large positive and negative values of λ, P-W was generally better than OLS.

The signal-to-noise ratio had an effect on the β's once again. The bias, variance and MSE on the estimates of the β's tended to become smaller as the signal-to-noise ratio increased.

Zellner-Geisel Procedure on AR(1)

This misspecified error structure model yielded quite clear

results. OLS did a better job than Z-G and Z-G was generally much

better than P-W.

Table XVII provides the results of the comparison between Z-G and OLS and there are far more negative percentage differences than positive ones. The verdict is not quite unanimous but I would choose OLS over Z-G.

Table XVIII provides the results of the comparison between Z-G
and P-W. There are some very large positive percentage differences in
this table, but the largest negative percentage differences I found in
the eight different sets of runs I made can be seen under the variance
and MSE columns for β₂ when ρ = .99. The Z-G procedure calculated
estimates for β₂ when ρ was large that yielded enormous variances;
for example, when T was 50, the signal-to-noise ratio was 10 and ρ
was .99, the Z-G procedure obtained a sample variance for the estimate
of β₂ that was equal to 1145.33 as compared to OLS's 44.29 and P-W's
1.96. The exact reason for this is unknown, but it may well be that
the interaction between the values of the input parameters and the
grid search on λ yielded some local instead of global minima.
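The local-versus-global-minimum hazard of a grid search can be made concrete with a toy sketch. The criterion function below is entirely hypothetical (it is not the thesis's likelihood), chosen only so that a narrow global basin falls between the points of a coarse grid:

```python
import math

# Toy illustration (not the thesis's program): a grid search over a
# moving-average parameter minimizes some residual criterion, but a
# coarse grid can settle on a local rather than the global minimum.
def grid_search(criterion, grid):
    """Return the grid point with the smallest criterion value."""
    return min(grid, key=criterion)

# Hypothetical criterion with a broad local minimum near -0.5 and a
# narrow global minimum near 0.65 (values chosen for illustration only).
def criterion(lam):
    return (1.0
            - 0.9 * math.exp(-((lam - 0.65) / 0.01) ** 2)
            - 0.5 * math.exp(-((lam + 0.5) / 0.3) ** 2))

coarse = [i / 10 for i in range(-9, 10)]    # step 0.1
fine = [i / 100 for i in range(-99, 100)]   # step 0.01

print(grid_search(criterion, coarse))  # lands in the local basin: -0.5
print(grid_search(criterion, fine))    # finds the global minimum: 0.65
```

The coarse search steps straight over the narrow global basin, which is the kind of behavior that could produce the inflated variances noted above.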

37

In non-tabulated results, both OLS and P-W almost consistently
overestimated ρ, while P-W almost consistently overestimated β₂ as
well. OLS was generally better than P-W in all comparison statistics
for both parameter estimates. Once again, as the signal-to-noise
ratio increased, the bias, variance and MSE for both parameter
estimates tended to fall.

Autocorrelation Test Results for Model I

MA(1) Error Structure

The hypothesis tested by the Durbin-Watson d test, namely
H₀: ρ = 0, does not strictly apply to the case where the error
structure is MA(1), but since the d statistic is calculated and
printed by most canned regression programs, the outcome of the use of
this test under the MA(1) error structure is quite remarkable.
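For reference, the d statistic itself is computed directly from the OLS residuals. A minimal sketch of the standard formula (illustrative only, not the program used in this investigation):

```python
# Sketch of the Durbin-Watson d statistic from OLS residuals:
#   d = sum_{t=2}^{T} (e_t - e_{t-1})^2 / sum_{t=1}^{T} e_t^2.
# d is near 2 when there is no first-order autocorrelation, falls
# toward 0 under positive and rises toward 4 under negative
# autocorrelation.
def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# Smoothly trending residuals push d below 2; alternating residuals
# (negative autocorrelation) push it above 2.
print(durbin_watson([1.0, 1.0, 1.0, -1.0, -1.0, -1.0]))  # 0.666...
print(durbin_watson([1.0, -1.0, 1.0, -1.0, 1.0, -1.0]))  # 3.333...
```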

Table XIX arrays the percentage of time the hypothesis of no AR(1)
autocorrelation was accepted and rejected for the various input
parameter value sets for both the PES and B-M runs on Model I-MA.

Even though the d test is not applicable to an MA(1) error
structure, Table XIX exhibits a typical power function. At values of
λ near 1.0 and -1.0, the hypothesis was accepted after an OLS
regression only a few times, especially when T = 50. At values of λ
near zero, the hypothesis was accepted the majority of the time.

5The number of times the Durbin-Watson d test proved to be
inconclusive was collected but was not tabulated. The inconclusive
percentage can be obtained by adding together the percentages of
acceptances and rejections and subtracting the total from 100.

This power of the d test to detect MA(1) autocorrelation has
been reported before. Smith (1976) found the d test was satisfactory
and uniformly better than Durbin's periodogram test (1969), Geary's sign
test (1970) and Schmidt's d₁ + d₂ test (1972). Also, Blattberg (1973)
found the power of the Durbin-Watson d test quite satisfactory for
MA(1) autocorrelation.

Even though both Fitts (1973) and Godfrey (1978) have developed
tests for MA(1) autocorrelation, it is not poor policy to rely on the
d test. Fomby and Guilkey (1978) suggested that for AR(1) autocorre-
lation detection an α level of 0.50 or better should be used. If a
higher α level were used for testing for MA(1) autocorrelation, the d
test would likely be quite adequate.

The Pesaran procedure was developed to handle Model I-MA and,
looking at Table XIX, this procedure has apparently removed the MA(1)
autocorrelation from the residuals, as can be seen by the acceptance
percentages being at least 88. But there is hardly any difference
between the PES and B-M columns of the table; that is, the B-M
procedure seemed also to remove the MA(1) autocorrelation. In
addition, both of these procedures appeared to do a better job than
Prais-Winsten at removing the autocorrelation.
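The Prais-Winsten correction referred to throughout is a quasi-differencing of the data by the estimated ρ, keeping the first observation (scaled) rather than dropping it as Cochrane-Orcutt does. A minimal sketch of the standard transformation (not this investigation's implementation):

```python
import math

def prais_winsten_transform(y, rho):
    """Transform a series for AR(1) errors with estimated rho.
    The first observation is kept, scaled by sqrt(1 - rho^2); the rest
    are quasi-differenced: y*_t = y_t - rho * y_{t-1}.  (Cochrane-Orcutt
    would simply drop the first observation instead.)"""
    out = [math.sqrt(1.0 - rho ** 2) * y[0]]
    out += [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    return out

# Hypothetical data for illustration only.
print(prais_winsten_transform([2.0, 3.0, 4.0], 0.5))  # [1.732..., 2.0, 2.5]
```

The same transformation is applied to each regressor (and a column of ones for the intercept), after which OLS is run on the transformed data.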

AR(1) Error Structure

The results for the Durbin-Watson d test for Model I-AR are
given in Table XX, which was arrayed exactly like Table XIX to
facilitate comparison of the two tables.

As can be seen by looking at the Beach-MacKinnon or right-hand
side of Table XX, a typical power function was obtained when the d
test was applied after an OLS regression: low acceptance/high
rejection for values of ρ near 1.0 and -1.0 and high acceptance/low
rejection percentages for ρ values near zero. As for the B-M and
P-W columns, the expected result is also displayed. Since both B-M
and P-W are designed to handle AR(1) autocorrelation, the d test
acceptance rate should be quite high regardless of the value of ρ.
This is what can be seen, but B-M seemed to do a better job than P-W,
especially when ρ = .99.

As is evident from the acceptance and rejection percentages
under the PES column, with high negative and positive values of ρ
the PES procedure did not do an acceptable job of removing AR(1)
autocorrelation.6

Autocorrelation Test Results for Model II

MA(1) Error Structure

Again with the understanding that these tests were not designed
to detect MA(1) autocorrelation, Tables XXI and XXII show the results
of using the Durbin-Watson d, the McNown and the Durbin h tests on
MA(1) autocorrelation. Table XXI shows the percentage of acceptances
and rejections for the Z-G runs on Model II-MA; Table XXII shows the
same for the WAL runs.

The McNown test administered after an OLS regression appears to
be unreliable for small samples. It did a better job when T = 50 and

6The zero acceptance/zero rejection for P-W under the Pesaran
runs for T = 20 and ρ = -0.1 and the triple dashes under the Beach-
MacKinnon runs are due to the fact that a P-W procedure was performed
only if there was a rejection by the Durbin-Watson d test.


for the larger signal-to-noise ratio--this result is especially clear
in Table XXII--but both the h and d tests appear to have been more
powerful. The McNown test also exhibits an interesting anomaly when
λ nears the value of one in Table XXI. The percentage of rejections
tends to show an increasing trend and the percentage of acceptances
a decreasing trend as λ increases from a value of 0.1, but these trends
are markedly broken when the value of λ = .99. This anomaly shows up
for the Z-G runs but it is absent in the WAL runs. It appears again,
interestingly enough, in the Z-G runs on Model II-AR to be discussed
later in this chapter. Why it appears in the Z-G runs and not in the
WAL runs is unclear to me; the McNown test was administered only
after an OLS regression, and since the data generation was the same
for either the Z-G or WAL runs for a given error structure, the
anomaly should have appeared under both runs. Neither the Z-G nor the
WAL procedure should have had any effect on the outcome of the McNown
test.

The Durbin h test administered after an OLS regression appears
to be reliable; it is more powerful for the larger sample size, this
being especially evident for |λ| > 0.3 in both tables.

The Durbin-Watson d test again appears reasonably powerful in
detecting MA(1) autocorrelation. If you consider the percentage of
inconclusives--not arrayed in the tables--as rejections, the d test
would be more powerful than Durbin's h.

Looking at the d test results after a Z-G regression, the Z-G
procedure seemed to do a passable job at removing the MA(1)
autocorrelation. Most of the rejection percentages are below five
and all are below ten. And in all cases the acceptance percentages


are higher after a Z-G procedure regression than after an OLS
regression. The d test results after a WAL procedure regression
indicate that the WAL procedure did not do an acceptable job of
removing MA(1) autocorrelation. In fact, the results seem to parallel
rather closely the outcome of the d test following an OLS regression
and indicate, if anything, that the WAL procedure tended to create
autocorrelation rather than remove it.

The h test administered after a WAL procedure regression also
indicates that the WAL procedure not only failed to remove the MA(1)
autocorrelation but also increased the chance of rejecting the
hypothesis of no autocorrelation when λ < 0.

The value of γ did not appear to affect the outcome of the
tests for autocorrelation, and other than the effect mentioned on the
McNown test, the signal-to-noise ratio did not seem to play a part
in the outcome of the tests either.

AR(1) Error Structure

The results of the autocorrelation tests for Model II-AR are
arrayed in Tables XXIII and XXIV; Table XXIII contains the results
for the Z-G runs and Table XXIV, the results for the WAL runs.

The McNown test and the Durbin h test were designed specifically
to detect AR(1) autocorrelation in a lagged endogenous model and
therefore they should be more powerful than the Durbin-Watson d test,
which Durbin himself (1970) said is not applicable to the lagged
endogenous case. Yet the Durbin-Watson d test exhibited generally
fewer acceptances than either the McNown or the h test, especially for
the smaller sample size. Otherwise, the percentages of acceptance
and rejection for the d test and the h test are very nearly the same,
indicating that both were equally powerful.
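Durbin's h statistic compared here combines the d statistic with the estimated variance of the coefficient on the lagged dependent variable. A sketch of the standard formula (not the thesis's program; the numerical inputs below are hypothetical), including the well-known case where h cannot be computed:

```python
import math

def durbin_h(d, T, var_lag_coef):
    """Durbin's h from the Durbin-Watson d, sample size T, and the
    estimated variance of the coefficient on the lagged dependent
    variable:  h = r * sqrt(T / (1 - T * var)), with r = 1 - d/2.
    h is undefined (None here) when T * var >= 1."""
    r = 1.0 - d / 2.0          # implied first-order residual correlation
    denom = 1.0 - T * var_lag_coef
    if denom <= 0.0:
        return None            # h cannot be computed in this case
    return r * math.sqrt(T / denom)

# Hypothetical values: d = 1.5 (mild positive correlation), T = 50,
# estimated Var(b) = 0.01 on the lagged-dependent-variable coefficient.
print(durbin_h(1.5, 50, 0.01))  # 2.5
```

Under H₀, h is asymptotically standard normal, so a value of 2.5 would reject no-autocorrelation at conventional levels.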

The fact that my results indicate that the Durbin-Watson d test
and Durbin's h test are about equal in power conflicts with previous
results of Kenkel (1974; 1975; 1976), Park (1975; 1976) and Spencer
(1975). Kenkel and Park battled each other in the literature over
the power of the h and d tests; Kenkel found that d was better than h
and Park countered that h was better than d. Spencer sided with
Kenkel; he found evidence of a serious small sample bias in the h test
in that it tended to detect serial correlation when none existed. My
results, as pointed out above, indicate that if there is a bias, the
h test tended toward a higher level of acceptance than the d test
when T = 20.

The anomaly mentioned earlier with respect to the McNown test
can be seen in Table XXIII. For values of ρ < 0.3, the McNown test
results paralleled those of the d and h tests, but when the value of
ρ = .99, there was a sharp reversal of the trend of acceptances and
rejections. As you can see by looking at Table XXIV, the McNown
test does not exhibit this behavior during the WAL runs. Again I am
at a loss to explain this. However, with a ρ value so close to 1.0,
it is entirely possible that unexpected things might happen during
the GLS regression of (12) in Chapter III.

Neither the Z-G procedure nor the WAL procedure was very
successful at removing the AR(1) autocorrelation even though the
WAL procedure was specifically designed for it. The Durbin-Watson d
test exhibits rather high levels of rejection after a Z-G run, and
both the Durbin-Watson d test and the Durbin h test show high levels
of rejection after a WAL procedure, especially when ρ is close to one.

Even though the signal-to-noise ratio did not seem to
appreciably affect the results, the value of γ did have an
interesting effect. For |ρ| > 0.3, when γ went from 0.4 to 0.8, the
level of acceptances for the d test rose and the level of rejections
fell.

Conclusions and Recommendations

As is true of any Monte Carlo investigation, the results should

not be generalized to any great extent. Strictly speaking, the

results are only valid for the exact input parameter values tried and

for the exact error structures used. The reader is warned that

applying the results of this investigation to values of parameters

or error structures not actually tried should be done cautiously.
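For concreteness, the two error structures examined throughout can be generated by the standard recursions. A minimal modern sketch (illustrative only, not the generator actually used in this investigation):

```python
import random

def ar1_errors(T, rho, seed=0):
    """AR(1) errors: u_t = rho * u_{t-1} + e_t, with e_t ~ N(0, 1)."""
    rng = random.Random(seed)
    u, out = 0.0, []
    for _ in range(T):
        u = rho * u + rng.gauss(0.0, 1.0)
        out.append(u)
    return out

def ma1_errors(T, lam, seed=0):
    """MA(1) errors: u_t = e_t + lam * e_{t-1}, with e_t ~ N(0, 1)."""
    rng = random.Random(seed)
    prev_e, out = 0.0, []
    for _ in range(T):
        e = rng.gauss(0.0, 1.0)
        out.append(e + lam * prev_e)
        prev_e = e
    return out

# With rho = lam = 0 both structures reduce to the same white noise.
print(ar1_errors(3, 0.0) == ma1_errors(3, 0.0))  # True
```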

That said, I will now summarize the results of this Monte Carlo

investigation.

Pesaran Procedure

The Pesaran procedure was developed to handle Model I-MA, but in
general both OLS and P-W were better than PES on the estimation of λ
but worse on the estimation of β. The Durbin-Watson d test seemed to
indicate that this procedure removed the MA(1) autocorrelation. On
AR(1) autocorrelation, this procedure displayed the same properties
except that with high values of ρ, it did not remove the AR(1)
autocorrelation.


Use of P-W is recommended over PES; the estimate of ρ would be
used as a proxy for λ.

Beach-MacKinnon Procedure

The Beach-MacKinnon procedure was developed to handle Model I-AR
and the data seemed to point toward its superiority over both OLS and
P-W. Under an MA(1) error structure, the results are not as clear,
however. None of the procedures--OLS, P-W, or B-M--was able to
estimate the correct sign on λ, so caution is advised here. B-M did
slightly better on the estimate of λ in terms of its variance and MSE,
but both OLS and P-W were better in terms of the absolute bias of the
estimate of λ. B-M also appeared to remove both MA(1) and AR(1)
autocorrelation.

The use of B-M is recommended over both OLS and P-W unless
theoretically the sign on the estimate of ρ is doubted. In that case,
analyze the autocovariances of the disturbances of an OLS regression
to determine if the error structure might be MA.7 If the disturbance
appears to be MA(1), use of P-W is recommended.
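The identification logic behind this recommendation is the textbook one: the autocorrelation function of an MA(1) process cuts off after lag 1, while that of an AR(1) process decays geometrically. A small sketch (illustrative only; `sample_autocorr` is a hypothetical helper, not a routine from this investigation):

```python
def sample_autocorr(x, lag):
    """Sample autocorrelation of the series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    ck = sum((x[t] - mean) * (x[t + lag] - mean)
             for t in range(n - lag)) / n
    return ck / c0

# Theoretical fingerprints one looks for in the residual correlogram:
# MA(1):  rho_1 = lam / (1 + lam**2),  rho_k = 0 for k >= 2.
# AR(1):  rho_k = rho**k, decaying geometrically.
lam, rho = 0.5, 0.5
print(lam / (1 + lam ** 2))            # MA(1) lag-1 value: 0.4
print([rho ** k for k in (1, 2, 3)])   # AR(1): [0.5, 0.25, 0.125]
```

Comparing the sample autocorrelations of the OLS residuals against these two patterns is the kind of analysis the recommendation refers to.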

Zellner-Geisel Procedure

The Zellner-Geisel procedure was designed for Model II-MA and
the results indicated that it was far superior to P-W and better than
OLS for large values of λ. Furthermore, it seemed to do a passable
job at removing the MA(1) autocorrelation. On AR(1) autocorrelation,
this procedure holds its own against OLS, especially for large values
of ρ in terms of its bias. The Z-G procedure was clearly better than

7See Box and Jenkins (1976) or Pindyck and Rubinfeld (1976) for
information on how to analyze autocovariance functions.


P-W based on the outcome of the McNown test. However, the Z-G

procedure did not appear to remove AR(1) autocorrelation.

It is recommended that the Z-G procedure be performed and a

Durbin-Watson d test on the residuals be accomplished. If the

hypothesis of no autocorrelation is rejected, then look at the

autocovariance function of the residuals. If an MA(1) error structure
seems appropriate and the value of λ > 0.6, use the Z-G procedure
estimates with some confidence. If, however, an AR(1) error
structure seems more appropriate, or an MA(1) error structure with a
small value for λ, use of OLS on the data is recommended.
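The recommendation above can be paraphrased as a small decision rule. The function and its arguments below are purely illustrative (only the 0.6 threshold and the branch outcomes come from the text):

```python
def choose_procedure(d_rejects, error_structure, lam_estimate):
    """Illustrative paraphrase of the recommendation: run Z-G, apply
    the Durbin-Watson d test to its residuals, and fall back to OLS
    unless the residuals look MA(1) with a large lambda."""
    if not d_rejects:
        return "Z-G"                 # keep the Zellner-Geisel estimates
    if error_structure == "MA(1)" and lam_estimate > 0.6:
        return "Z-G"                 # MA(1) with large lambda: trust Z-G
    return "OLS"                     # AR(1), or MA(1) with small lambda

print(choose_procedure(True, "MA(1)", 0.8))   # Z-G
print(choose_procedure(True, "AR(1)", 0.8))   # OLS
```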

Wallis Three-Step Procedure

The Wallis procedure was designed for Model II-AR. The results
were not favorable. OLS was better than WAL in terms of bias,
variance, MSE and Euclidean distance. This was true even when the
error structure was MA(1). Additionally, based on the results of the
Durbin-Watson d test, the WAL procedure did not do an acceptable job
removing either MA(1) or AR(1) autocorrelation.

In certain circumstances WAL performed better than P-W based on
the outcome of the McNown test, but use of OLS is recommended when an
AR(1) error structure has been specified. Try, also, either the
Iteration on a Serial Correlation Parameter procedure discussed by
Sargent (1968) or Klein's Nonlinear Maximum Likelihood procedure
(1958). Sargent found these two methods were the best in his study.

Durbin-Watson d Test

The Durbin-Watson d test proved to be quite powerful at
detecting the presence of not only an AR(1) error structure but also
an MA(1) error structure. It was more powerful than the Durbin h test
when the error structure was MA(1) and the model was Model II-MA.
And it was at least as good if not better than Durbin's h for
Model II with an AR(1) error structure.

It is indeed fortuitous that most canned regression packages

calculate and output the Durbin-Watson d statistic and its continued

use is recommended.

McNown Test

In small samples, the McNown test appeared to be roughly as

powerful as the Durbin-Watson d test when the error structure was

AR(1). It was not very reliable when the error structure was MA(1).
Additionally, the P-W estimates for Model II--calculated only if the
McNown test indicated rejection of the no autocorrelation hypothesis--
tended to be inferior to the OLS, Z-G and WAL estimates. Re-running

the Model II computer runs basing the P-W estimates on the outcome of

the Durbin-Watson d test or the Durbin h test is suggested as a

possible extension of this investigation.

Use of the McNown test on small samples is not recommended; use

the Durbin-Watson d test instead.

Durbin h Test

The Durbin h test seemed to be adequate in detecting both AR(1)
and MA(1) autocorrelation. But the Durbin-Watson d test, even though
theoretically it should not be applied when lagged endogenous
variables are used as regressors, performed at least as well if not
better than the h test on AR(1) data and outperformed the h test on
MA(1) data.


Because of the questionable nature of Durbin's h test in

recent studies and because the Durbin-Watson d test could detect both

AR(1) and MA(1) autocorrelation in small samples, use of the d test

over the h test is recommended.

[Tables XVII through XXIV, which array the comparison statistics and
the percentages of acceptances and rejections discussed above, are
illegible in this copy.]

Aigner, D. J. (1971) "A Compendium on Estimation of the
Autoregressive-Moving Average Model from Time Series Data."
International Economic Review, 12, 348-371.

Aldrich, J. (1978) "An Alternative Derivation of Durbin's h
Statistic." Econometrica, 46, 1493-1494.

Beach, C. M. and J. G. MacKinnon. (1978) "A Maximum Likelihood
Procedure for Regression with Autocorrelated Errors."
Econometrica, 46, 51-58.

Blattberg, R. C. (1973) "Evaluation of the Power of the Durbin-
Watson Statistic for Non-First Order Serial Correlation
Alternatives." Review of Economics and Statistics, 55, 508-515.

Box, G. E. P. and G. M. Jenkins. (1976) Time Series Analysis:
Forecasting and Control. Revised edition. San Francisco:
Holden-Day.

Burrows, P. and L. Godfrey. (1973) "Identifying and Estimating the
Parameters of a Symmetrical Model of Inventory Investment."
Applied Economics, 5, 193-197.

Cochrane, D. and G. H. Orcutt. (1949) "Application of Least-Squares
Regression to Relationships Containing Auto-correlated Error
Terms." Journal of the American Statistical Association, 44,
32-61.

Dent, W. and A.-S. Min. (1978) "A Monte Carlo Study of
Autoregressive Integrated Moving Average Processes." Journal
of Econometrics, 7, 23-55.

Durbin, J. (1959) "Efficient Estimation of Parameters in Moving-
Average Models." Biometrika, 46, 306-316.

__________. (1969) "Tests for Serial Correlation in Regression
Analysis Based on the Periodogram of Least-Squares Residuals."
Biometrika, 56, 1-15.

__________. (1970) "Testing for Serial Correlation in Least-Squares
Regression When Some of the Regressors are Lagged Dependent
Variables." Econometrica, 38, 410-421.

Durbin, J. and G. S. Watson. (1951) "Testing for Serial Correlation
in Least Squares Regression II." Biometrika, 38, 159-178.

Fitts, J. (1973) "Testing for Autocorrelation in the Autoregressive
Moving Average Error Model." Journal of Econometrics, 1,
363-376.

Fomby, T. B. and D. K. Guilkey. (1978) "On Choosing the Optimal
Level of Significance for the Durbin-Watson Test and the
Bayesian Alternative." Journal of Econometrics, 8, 203-213.

Geary, R. C. (1970) "Relative Efficiency of Count of Sign Changes
for Assessing Residual Autoregression in Least Squares
Regression." Biometrika, 57, 123-127.

Godfrey, L. G. (1978a) "A Note on the Use of Durbin's h Test When
the Equation Is Estimated by Instrumental Variables."
Econometrica, 46, 225-228.

__________. (1978b) "Testing Against General Autoregressive and
Moving Average Error Models When the Regressors Include Lagged
Dependent Variables." Econometrica, 46, 1293-1301.

Hendry, D. F. and P. K. Trivedi. (1972) "Maximum Likelihood
Estimation of Difference Equations with Moving Average Errors:
A Simulation Study." Review of Economic Studies, 39, 117-145.

[One or more reference entries are illegible in this copy.]


Kenkel, J. L. (1974) "Some Small Sample Properties of Durbin'sTests for Serial Correlation in Regressicn Models ContainingLagged Dependent Variables." Econometrica, 42, 763-769.

________. (1975) "Small Sample Tests for Serial Correlation in Models Containing Lagged Dependent Variables." Review of Economics and Statistics, 57, 383-386.

________. (1976) "Comment on the Small-Sample Power of Durbin's h-Test." Journal of the American Statistical Association, 71, 96-97.

Klein, L. R. (1958) "The Estimation of Distributed Lags." Econometrica, 26, 553-565.

Koyck, L. M. (1954) Distributed Lags and Investment Analysis. Amsterdam: North-Holland.

McNown, R. (1977) A Test for Autocorrelation in Models with Lagged Dependent Variables. Colorado Discussion Paper No. 113. Boulder: Department of Economics, University of Colorado.

Maddala, G. S. (1977) Econometrics. New York: McGraw-Hill.

Maddala, G. S. and A. S. Rao. (1971) "Maximum Likelihood Estimation of Solow's and Jorgenson's Distributed Lag Models." Review of Economics and Statistics, 53, 80-88.

________. (1973) "Tests for Serial Correlation in Regression Models with Lagged Dependent Variables and Serially Correlated Errors." Econometrica, 41, 761-774.

Morrison, J. L., Jr. (1970) "Small Sample Properties of Selected Distributed Lag Estimators." International Economic Review, 11, 13-23.

Nelson, C. R. (1974) "The First-Order Moving Average Process: Identification, Estimation and Prediction." Journal of Econometrics, 2, 121-141.

Nerlove, M. and K. F. Wallis. (1966) "Use of the Durbin-Watson Statistic in Inappropriate Situations." Econometrica, 34, 235-238.

Nicholls, D. F., A. R. Pagan and R. D. Terrell. (1975) "The Estimation and Use of Models with Moving Average Disturbance Terms: A Survey." International Economic Review, 16, 113-134.

Park, S. (1975) "On the Small-Sample Power of Durbin's h Test." Journal of the American Statistical Association, 70, 60-63.

________. (1976) "Rejoinder." Journal of the American Statistical Association, 71, 97-98.

Pesaran, M. H. (1973) "Exact Maximum Likelihood Estimation of a Regression Equation with a First-Order Moving-Average Error." Review of Economic Studies, 40, 529-536.

________. (1976) AR, ARMA, DL1 and DL2 Programmes for Small Sample Estimation of Dynamic Economic Models. Cambridge: Department of Applied Economics, University of Cambridge. Soon to be published as Dynamic Regression: Theory and Algorithms.

Pindyck, R. S. and D. L. Rubinfeld. (1976) Econometric Models and Economic Forecasts. New York: McGraw-Hill.

Prais, S. J. and C. B. Winsten. (1954) Trend Estimators and Serial Correlation. Cowles Commission Discussion Paper No. 383. Chicago.

Rao, C. R. (1965) Linear Statistical Inference and Its Applications.New York: Wiley.

Rao, P. and Z. Griliches. (1969) "Small Sample Properties of Several Two-Stage Regression Methods in the Context of Autocorrelated Errors." Journal of the American Statistical Association, 64, 253-272.

Rowley, J. C. R. and D. A. Wilton. (1973) "Quarterly Models of Wage Determination: Some New Efficient Estimates." American Economic Review, 63, 380-388.

dos Santos, J. G. (1972) "Estimating the Durability of Consumers' Durable Goods." Review of Economics and Statistics, 54, 475-477.

Sargent, T. J. (1968) "Some Evidence on the Small Sample Properties of Distributed Lag Estimators in the Presence of Autocorrelated Disturbances." Review of Economics and Statistics, 50, 87-95.

Schmidt, P. (1972) "A Generalization of the Durbin-Watson Test."Australian Economic Papers, 11, 203-209.

Sims, C. A. (1974) "Distributed Lags." Frontiers of Quantitative Economics, Volume II. Edited by M. D. Intriligator and D. A. Kendrick. Amsterdam: North-Holland, 281-332.

Smith, V. K. (1976) "The Estimated Power of Several Tests for Autocorrelation with Non-First-Order Alternatives." Journal of the American Statistical Association, 71, 879-883.

Sowey, E. R. (1973) "A Classified Bibliography of Monte Carlo Studies in Econometrics." Journal of Econometrics, 1, 377-395.


Spencer, B. G. (1975) "The Small Sample Bias of Durbin's Tests for Serial Correlation When One of the Regressors is the Lagged Dependent Variable and the Null Hypothesis is True." Journal of Econometrics, 3, 249-254.

Spitzer, J. J. (1979) "Small Sample Properties of Nonlinear Least Squares and Maximum Likelihood Estimators in the Context of Autocorrelated Errors." Journal of the American Statistical Association, 74, 41-47.

Taylor, L. D. and T. A. Wilson. (1964) "Three Pass Least Squares: A Method for Estimating Models with Lagged Dependent Variables." Review of Economics and Statistics, 46, 329-346.

Tillman, J. A. (1975) "The Power of the Durbin-Watson Test." Econometrica, 43, 959-973.

Trivedi, P. K. (1970) "Inventory Behavior in U. K. Manufacturing 1956-67." Review of Economic Studies, 37, 517-536.

________. (1973) "Retail Inventory Investment Behavior." Journal of Econometrics, 1, 61-80.

Wagner, H. M. (1975) Principles of Operations Research with Applications to Managerial Decisions. 2nd edition. New Jersey: Prentice-Hall.

Wallis, K. F. (1967) "Lagged Dependent Variables and Serially Correlated Errors: A Reappraisal of Three-Pass Least-Squares." Review of Economics and Statistics, 49, 555-567.

Zellner, A. and M. Geisel. (1970) "Analysis of Distributed Lag Models with Applications to Consumption Function Estimation." Econometrica, 38, 865-888.