


Journal of Economics and Sustainable Development, ISSN 2222-1700 (Paper), ISSN 2222-2855 (Online), Vol. 3, No. 13, 2012 (www.iiste.org)

Prediction Using Estimators of Linear Regression Model with Autocorrelated Error Terms and Correlated Stochastic Uniform Regressors

Kayode Ayinde 1*, K. A. Olasemi 2, O. E. Olubusoye 2

1. Department of Statistics, Ladoke Akintola University of Technology, P.M.B. 4000, Ogbomoso, Oyo State, Nigeria.

2. Department of Statistics, University of Ibadan, Oyo State, Nigeria.

* E-mail: [email protected] or [email protected]

Abstract

Prediction remains one of the fundamental reasons for regression analysis. However, the Classical Linear Regression Model is formulated under some assumptions which are not always satisfied, especially in business, economic and social sciences, leading to the development of many estimators. This work, therefore, attempts to examine the performances of the Ordinary Least Square estimator (OLS), the Cochrane-Orcutt estimator (COR), the Maximum Likelihood estimator (ML) and the estimators based on Principal Component analysis (PC) in prediction of the linear regression model under violations of the assumptions of non-stochastic regressors and of independent regressors and error terms. With stochastic uniform variables as regressors, Monte Carlo experiments were conducted over the levels of autocorrelation (ρ), correlation between regressors (multicollinearity, λ) and sample sizes, and the best estimators for prediction purposes are identified using the goodness-of-fit statistics of the estimators. Results show that the performances of COR and ML at each level of multicollinearity over the levels of autocorrelation have a convex-like pattern while those of OLS, PR1 and PR2 are concave-like. Also, as the level of multicollinearity increases, the estimators, especially COR and ML, perform much better at all the levels of autocorrelation. Furthermore, results show that except when the sample size is small (n = 10), the performances of the COR and ML estimators are generally best and almost the same, even though at low levels of autocorrelation the PC estimator either performs better than or competes with the best estimator when λ ≤ -0.49 or λ ≥ 0.6. When the sample size is small (n = 10), the COR estimator is best except when the autocorrelation level is low and λ ≤ -0.4 or λ ≥ 0.2; at these instances, the PR2 estimator is best. Moreover, at low levels of autocorrelation in all the sample sizes, the OLS estimator competes with the best estimator at all the levels of multicollinearity.

Keywords: Prediction, Estimators, Linear Regression Model, Autocorrelation, Multicollinearity

1.0 Introduction

The linear regression model is probably the most widely used statistical technique for solving functional relationship problems among variables. It helps to explain observations of a dependent variable, y, with observed values of one or more independent variables, X1, X2, ..., Xp. In an attempt to explain the dependent variable, prediction of its values often becomes very essential and necessary. However, the linear regression model is formulated under some basic assumptions. Among these assumptions are that the regressors are fixed (non-stochastic) and independent, and that the error terms are independent. Consequently, various methods of estimation of the model parameters have been developed.

The assumption of non-stochastic regressors is not always satisfied, especially in business, economics and the social sciences, because the regressors are often generated by stochastic processes beyond the researchers' control. Many authors, including Neter and Wasserman (1974), Fomby et al. (1984) and Maddala (2002), have given situations and instances where these assumptions may be violated and have also discussed their consequences for the Ordinary Least Square (OLS) estimator when it is used to estimate the model parameters. When regressors are stochastic and independent of the error terms, the OLS estimator is still unbiased and has minimum variance even though it is not the Best Linear Unbiased Estimator (BLUE). Also, the traditional hypothesis testing remains valid if the error terms are further assumed to be normal; however, modification is required in the area of the confidence interval and the power of the test calculated for each sample.

The violation of the assumption of independent regressors leads to multicollinearity, as is often found in business and economics data. With strongly interrelated regressors, the interpretation given to the regression coefficients may no longer be valid because the assumption under which the regression model is built has been violated. Although the estimates of the regression coefficients provided by the OLS estimator are still unbiased as long as multicollinearity is not perfect, the regression coefficients have large sampling errors, which affect both the inference and the forecasting based on the model (Chatterjee et al., 2000).




Various methods have been developed to estimate the model parameters when multicollinearity is present in a data set. These estimators include the Ridge Regression estimator developed by Hoerl (1962) and Hoerl and Kennard (1970), estimators based on Principal Component Regression suggested by Massy (1965), Marquardt (1970), Bock, Yancey and Judge (1973), and Naes and Marten (1988), and the method of Partial Least Squares developed by Herman Wold in the 1960s (Helland, 1988; Helland, 1990; Phatak and Jony, 1997).

When all the assumptions of the Classical Linear Regression Model hold except that the error terms are not homoscedastic (i.e., it is no longer true that E(UU′) = σ²Iₙ) but are heteroscedastic (i.e., E(UU′) = σ²Ω), the resulting model is the Generalized Least Squares (GLS) model. Aitken (1935) has shown that the GLS estimator β̂ = (X′Ω⁻¹X)⁻¹X′Ω⁻¹Y is efficient among the class of linear unbiased estimators of β, with variance-covariance matrix V(β̂) = σ²(X′Ω⁻¹X)⁻¹, where Ω is assumed to be known. However, Ω is not always known; it is often replaced by an estimate Ω̂ to obtain what is known as the Feasible GLS estimator.
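For illustration, the Aitken formula translates directly into code. The following is a minimal Python sketch (ours, not part of the original paper; X, y, Omega and sigma2 are hypothetical inputs):

    import numpy as np

    def gls_estimator(X, y, Omega, sigma2=1.0):
        # Aitken (1935): beta_hat = (X' Omega^-1 X)^-1 X' Omega^-1 y,
        # with V(beta_hat) = sigma2 * (X' Omega^-1 X)^-1, Omega known.
        Omega_inv = np.linalg.inv(Omega)
        A = np.linalg.inv(X.T @ Omega_inv @ X)
        beta_hat = A @ (X.T @ Omega_inv @ y)
        return beta_hat, sigma2 * A

Replacing Omega by an estimate in the same formula gives a Feasible GLS estimator.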

When the assumption of independence of the error terms is violated, as is often the case in time-series data, the problem of autocorrelation arises. Several authors have worked on this violation, especially in terms of parameter estimation of the linear regression model when the error term follows an autoregressive process of order one. The OLS estimator is inefficient even though unbiased; its predicted values are also inefficient, and the sampling variances of the autocorrelated error terms are known to be underestimated, causing the t and the F tests to be invalid (Johnston, 1984; Fomby et al., 1984; Chatterjee et al., 2000; Maddala, 2002). To compensate for the loss of efficiency, several feasible GLS estimators have been developed. These include the estimators provided by Cochrane and Orcutt (1949), Prais and Winsten (1954), Hildreth and Lu (1960), Durbin (1960), Theil (1971), the Maximum Likelihood and Maximum Likelihood Grid estimators (Beach and MacKinnon, 1978), and Thornton (1982). Chipman (1979), Kramer (1980), Kleiber (2001), Iyaniwura and Nwabueze (2004), Nwabueze (2005a, b, c), Ayinde and Ipinyomi (2007) and many other authors have not only observed the asymptotic equivalence of these estimators but have also noted that their performances and efficiency depend on the structure of the regressors used. Rao and Griliches (1969) did one of the earliest Monte Carlo investigations of the small-sample properties of several two-stage regression methods in the context of autocorrelated errors. Other recent works on these estimators and the violations of the assumptions of the classical linear regression model include those of Ayinde and Oyejola (2007), Ayinde (2007a, b), Ayinde and Olaomi (2008), Ayinde (2008), and Ayinde and Iyaniwura (2008).
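To make one of these feasible GLS estimators concrete, here is a minimal Python sketch of the Cochrane-Orcutt (1949) iteration for AR(1) errors (our illustration, not the authors' TSP implementation; the function and variable names are hypothetical):

    import numpy as np

    def cochrane_orcutt(X, y, tol=1e-6, max_iter=100):
        # X must include a leading column of ones (the intercept).
        beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
        rho = 0.0
        for _ in range(max_iter):
            e = y - X @ beta                          # current residuals
            rho_new = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])  # AR(1) estimate
            # quasi-difference the data, dropping the first observation
            y_star = y[1:] - rho_new * y[:-1]
            X_star = X[1:] - rho_new * X[:-1]
            beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
            if abs(rho_new - rho) < tol:
                break
            rho = rho_new
        return beta, rho

Because the column of ones is quasi-differenced along with the regressors, the returned intercept estimates β0 itself and needs no rescaling.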

In spite of these several works on these estimators, none has actually examined their performance in prediction. Very fundamentally, prediction is one of the basic reasons for regression analysis. Therefore, this paper does not only examine the predictive potential of some of these estimators but also does so under some violations of the assumptions of the regression model, making the model much closer to reality.

2.0 Materials and Methods

Consider the linear regression model of the form:

Y_t = β0 + β1X1t + β2X2t + β3X3t + u_t        (1)

where u_t is the autocorrelated error term with innovations ε_t ~ N(0, σ²), t = 1, 2, ..., n, and the regressors X_i, i = 1, 2, 3, are stochastic and correlated.

For the Monte Carlo simulation study, the parameters of equation (1) were specified and fixed as β0 = 4, β1 = 2.5, β2 = 1.8 and β3 = 0.6. Sixteen (16) levels of multicollinearity among the independent variables were specified as λ = λ(x12) = λ(x13) = λ(x23) = -0.49, -0.4, -0.3, ..., 0.8, 0.9, 0.99, and twenty-one (21) levels of autocorrelation were specified as ρ = -0.99, -0.9, -0.8, ..., 0.8, 0.9, 0.99. Furthermore, the experiment was replicated 1000 times (R = 1000) under six (6) levels of sample size (n = 10, 15, 20, 30, 50, 100).

The correlated stochastic uniform regressors were generated by first using the equations provided by Ayinde (2007a) and Ayinde and Adegboye (2010) to generate normally distributed random variables with a specified intercorrelation matrix P. The equations give:

X1 = μ1 + σ1Z1
X2 = μ2 + ρ12σ2Z1 + m22Z2        (2)
X3 = μ3 + ρ13σ3Z1 + m23Z2 + n33Z3

where Z_i ~ N(0, 1), i = 1, 2, 3, and the constants m22, m23 and n33 are functions of the prescribed standard deviations and intercorrelations (their explicit forms are given in Ayinde and Adegboye, 2010). (The intercorrelation matrix has to be positive definite; hence, the correlations among the independent variables were taken as prescribed earlier.) In the study, we assumed X_i ~ N(0, 1), i = 1, 2, 3.



We further utilized the property of random variables that applying the cumulative distribution function of the Normal distribution produces U(0, 1) variates without affecting the correlation among the variables (Schumann, 2009) to generate the correlated stochastic uniform regressors.
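This two-step construction can be sketched compactly: equations (2) amount to multiplying independent N(0, 1) draws by the Cholesky factor of the prescribed intercorrelation matrix, and the Normal CDF then maps the results to U(0, 1). A Python illustration for the equicorrelated case used in this study (our sketch; strictly, the CDF transform preserves the rank correlation exactly, while the Pearson correlation of the uniforms is only approximately λ):

    import numpy as np
    from scipy.stats import norm

    def correlated_uniform_regressors(n, lam, rng):
        # Equicorrelation matrix of the three regressors; it is positive
        # definite only for lam > -0.5, which is consistent with the
        # study's grid starting at lambda = -0.49.
        P = np.array([[1.0, lam, lam],
                      [lam, 1.0, lam],
                      [lam, lam, 1.0]])
        L = np.linalg.cholesky(P)            # equations (2) in matrix form
        Z = rng.standard_normal((n, 3))      # independent N(0, 1) draws
        X_normal = Z @ L.T                   # correlated N(0, 1) regressors
        return norm.cdf(X_normal)            # correlated U(0, 1) regressors

    rng = np.random.default_rng(2012)
    U = correlated_uniform_regressors(50, 0.6, rng)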

The error terms were generated using one of the distributional properties of the autocorrelated AR(1) error term, namely u_t ~ N(0, σ²/(1 - ρ²)), and the AR(1) equation, as follows:

u1 = ε1/√(1 - ρ²)        (3)

ut = ρut-1 + εt,  t = 2, 3, 4, ..., n        (4)
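A direct transcription of equations (3) and (4) (an illustrative Python sketch; σ = 1 is an assumed value here):

    import numpy as np

    def ar1_errors(n, rho, sigma=1.0, rng=None):
        # Stationary AR(1) errors: u_1 ~ N(0, sigma^2/(1 - rho^2)) as in
        # eq. (3); u_t = rho*u_{t-1} + eps_t as in eq. (4).
        rng = rng or np.random.default_rng()
        eps = rng.normal(0.0, sigma, n)      # innovations eps_t ~ N(0, sigma^2)
        u = np.empty(n)
        u[0] = eps[0] / np.sqrt(1.0 - rho ** 2)
        for t in range(1, n):
            u[t] = rho * u[t - 1] + eps[t]
        return u

    # One replication of equation (1), with U as generated above:
    # y = 4.0 + 2.5*U[:, 0] + 1.8*U[:, 1] + 0.6*U[:, 2] + ar1_errors(len(U), 0.9)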

Since some of these estimators have now been incorporated into the Time Series Processor (TSP 5.0, 2005) software, a computer program was written using the software to estimate the Adjusted Coefficient of Determination (Adjusted R²) of the model under the Ordinary Least Square (OLS) estimator, the Cochrane-Orcutt (COR) estimator, the Maximum Likelihood (ML) estimator and the estimators based on Principal Component analysis. The Adjusted Coefficient of Determination of the model was averaged over the number of replications, i.e.

Adjusted R² (averaged) = (1/R) Σ_{i=1}^{R} (Adjusted R²)_i        (5)
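Equation (5) is a simple average of the per-replication Adjusted R² values. For concreteness (our sketch; y_rep and y_hat_rep are hypothetical arrays of simulated and fitted responses):

    import numpy as np

    def adjusted_r2(y, y_hat, p):
        # Adjusted coefficient of determination; p = number of regressors
        # excluding the intercept (p = 3 in this study).
        n = len(y)
        rss = np.sum((y - y_hat) ** 2)
        tss = np.sum((y - np.mean(y)) ** 2)
        r2 = 1.0 - rss / tss
        return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

    # Equation (5): average over the R = 1000 replications.
    # r2_bar = np.mean([adjusted_r2(y_rep[i], y_hat_rep[i], 3) for i in range(R)])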

The two possible PCs (PC1 and PC2) of the Principal Component Analysis were used. Each provides its separate Adjusted Coefficient of Determination. An estimator is best if its Adjusted Coefficient of Determination is closest to unity.
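The PC-based fits can be sketched as principal component regression. Assuming here that PC1 and PC2 denote fits based on the first one and the first two principal components respectively (a Python illustration, not the paper's TSP code):

    import numpy as np

    def pcr_fitted_values(X, y, k):
        # Principal component regression keeping the first k components
        # (k = 1 and k = 2 would give the PC1- and PC2-based fits).
        Xc = X - X.mean(axis=0)                       # centre the regressors
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        T = Xc @ Vt[:k].T                             # component scores
        G = np.column_stack([np.ones(len(y)), T])     # intercept plus scores
        gamma = np.linalg.lstsq(G, y, rcond=None)[0]  # regress y on the scores
        return G @ gamma                              # fitted values of y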

3.0 Results and Discussion

The full summary of the simulated results of each estimator at the different levels of sample size, multicollinearity and autocorrelation is contained in the work of Olasemi (2011). The graphical representations of the results when n = 10, 15, 20, 30, 50 and 100 are respectively presented in Figures 1, 2, 3, 4, 5 and 6.


(Figures 1 - 6: the averaged Adjusted Coefficient of Determination of each estimator, plotted over the levels of autocorrelation at each level of multicollinearity, for n = 10, 15, 20, 30, 50 and 100 respectively.)


From these figures, it can be generally observed that the performances of COR and ML at each level of multicollinearity over the levels of autocorrelation have a convex-like pattern while those of OLS, PR1 and PR2 are concave-like. Also, as the level of multicollinearity increases, the estimators perform much better at all the levels of autocorrelation, even though the values of their averaged adjusted coefficient of determination are not high at low and moderate levels of autocorrelation except when λ ≥ 0.6; at these instances, the COR and ML estimators are much better at all the levels of autocorrelation. Furthermore, except when the sample size is small (n = 10), the performances of the COR and ML estimators are generally best and almost the same, even though at low levels of autocorrelation the PR2 estimator either performs better than or competes with the best estimator when λ ≤ -0.49 or λ ≥ 0.6. When the sample size is small (n = 10), the COR estimator is best except when the autocorrelation level is low and λ ≤ -0.4 or λ ≥ 0.2; at these instances, the PR2 estimator is best. Moreover, at low levels of autocorrelation in all the sample sizes, the OLS estimator competes with the best estimator at all the levels of multicollinearity.

Very specifically, in terms of identification of the best estimator, Tables 1, 2, 3, 4, 5 and 6 respectively summarize the best estimator for prediction at all the levels of autocorrelation and multicollinearity when the sample size is 10, 15, 20, 30, 50 and 100.

Table 1: The Best Estimator for Prediction at different levels of Multicollinearity and Autocorrelation when n = 10. (Each cell names the best of the COR, ML, PC2 and PR2 estimators at one combination of autocorrelation ρ (rows, -0.99 to 0.99) and multicollinearity λ (columns, -0.49 to 0.99); the pattern is summarized in the text below.)

From Table 1, when n = 10, the COR estimator is best for prediction at all levels of multicollinearity, especially when ρ ≤ -0.4 or ρ ≥ 0.7. When -0.2 ≤ ρ ≤ 0.5 and λ ≤ -0.4 or λ ≥ 0.2, the PR2 estimator is best. The ML estimator is only best when -0.2 ≤ ρ ≤ 0.5 and -0.3 ≤ λ ≤ 0.1.

Table 2: The Best Estimator for Prediction at different levels of Multicollinearity and Autocorrelation when n = 15. (Entries as in Table 1.)



From Table 2, it can be seen that the COR estimator is generally best for prediction at all levels of multicollinearity and autocorrelation except when 0 ≤ ρ ≤ 0.3 and λ ≤ -0.49 or λ ≥ 0.6, where the PC2 estimator is best, and occasionally when the autocorrelation level is low and the multicollinearity level is very high or tends to unity, where the PR2 estimator is best.

Table 3: The Best Estimator for Prediction at different levels of Multicollinearity and Autocorrelation when n = 20. (Entries as in Table 1.)

When n = 20, as revealed in Table 3, the COR estimator is best except at -0.2 ≤ ρ ≤ 0.3. At these instances, the best estimator is PC2 when λ ≤ -0.49 or λ ≥ 0.7; otherwise, the best estimator is often ML and sparsely COR, even though the two of them compete very favorably well (see Figure 3).

Table 4: The Best Estimator for Prediction at different levels of Multicollinearity and Autocorrelation when n = 30. (Entries as in Table 1.)



From Table 4, it can be observed that the best estimator for prediction when n = 30 is generally COR except when -0.3 ≤ ρ ≤ 0.3. At these instances, the estimator based on PC2 is best when 0.1 ≤ ρ ≤ 0.2 and λ ≥ 0.7; otherwise, the best estimator is often ML and sparsely COR, even though the two of them still compete very favorably well (see Figure 4).

Table 5: The Best Estimator for Prediction at different levels of Multicollinearity and Autocorrelation when n = 50. (Entries as in Table 1.)

From Table 5, when n = 50, the best estimator for prediction is still COR except when 0 ≤ ρ ≤ 0.1 and λ ≥ 0.9, and when -0.1 ≤ ρ ≤ 0.2 and λ ≤ -0.49. At the former instances the estimator based on PC2 is best, while ML is best at the latter.

Table 6: The Best Estimator for Prediction at different levels of Multicollinearity and Autocorrelation when n = 100. (Entries as in Table 1.)


When the sample size is large (n = 100), the best estimator is generally COR except when 0 ≤ ρ ≤ 0.1 and λ ≥ 0.9. At these instances, the estimator based on PC2 is best.

4.0 Conclusions

The effects of two major problems, multicollinearity and autocorrelation, on the predictive ability of the OLS, COR, ML and PC estimators of the linear regression model have been jointly examined in this paper. Results reveal the pattern of performances of COR and ML at each level of multicollinearity over the levels of autocorrelation to be generally and evidently convex, especially when n ≥ 30 and λ < 0, while that of OLS and PC is generally concave. Moreover, the COR and ML estimators perform equivalently and better, and their performances become much better as multicollinearity increases. The COR estimator is generally the best estimator for prediction except at high levels of multicollinearity and low levels of autocorrelation; at these instances, the PC estimator is either best or competes with the COR estimator. Moreover, when the sample size is small (n = 10) and the multicollinearity level is not high, the OLS estimator is best at low levels of autocorrelation whereas the ML is best at moderate levels of autocorrelation.

References

Aitken, A. C. (1935): On least square and linear combinations oSociety, Edinburgh, 55: 42 – 48.

Ayinde, K. (2007). Equations to generate normal variates with desired Journal of Statistics and System, 2 (2), 99

Ayinde, K. (2007). A Comparative Study of the Performances of the OLS and Some GLS Estimators When Stochastic Regressors Are both Collinear and Correlated with Error

Ayinde, K. (2008). Performances of Some Estimators of Linear Model when Stochastic Regressors are correlated with Autocorrelated Error Terms.

Ayinde, K. and Adegboye, O. S. (2010)specified intercorrelation. Journal of Mathematical Sciences, 21(2), 183

Ayinde, K. and Ipinyomi, R. A. (2007)normally distributed regressors are stochastic,.

Ayinde, K. and Iyaniwura, J.O. (2008). fixed and stochastic regressors. Global Journal of Pure and Applied Sciences, 14(3)

Ayinde, K. and Oyejola, B. A. (2007). stochastic regressors are correlated with error terms

Beach, C. M. and Mackinnon, J. S. (1978)errors. Econometrica, 46, 51 – 57.

Bock, M. E., Yancey, T. A. and Judgeregression. Journal of the American Statistical Association, 60, 234

Chartterjee, S., Hadi, A.S. and Price,Publication,.John Wiley and Sons..

Chipman, J. S. Efficiency of Least Squares Estimation of Linear Trend when residuals are autocorrelated, Econometrica, 47(1979),115 -127.

Cocharane, D. and Orcutt, G.H. (1949)error terms. Journal of American Statistical Association, 44,32

Durbin, J (1960). Estimation of Parameters in Time series RegreSociety B, 22, 139 -153.

Formy, T. B., Hill, R. C. and Johnson,Berlin, Heidelberg, London, Paris, Tokyo

Helland, I. S. (1988). On the structure of partial least squSimulations and Computations, 17, 581

Helland, I. S. (1990). Partial least squares regre17, 97 – 114.

Hoerl, A. E. (1962). Application of ridge analysis to

Journal of Economics and Sustainable Development 2855 (Online)

126

When the sample size is large (n =100), the best estimator is generally COR except when . At these instances, the estimator based on using PC2 is best.

The effect of two major problems, Multicollinearity and autocorrelation, on the predictive ability of the OLS, COR, ML and PC estimators of linear regression model has been jointly examined in this paper. the pattern of performances of COR and ML at each level of multicollinearity over the levels of autocorrelation to be generally and evidently convex especially when 30≥n and 0<λ while that of OLS and PC is generally concave. Moreover, the COR and ML estimators perform equivalently and betterperformances become much better as multicollinearity increases. The COR estimator is generally the best estimator for prediction except at high level of multicollinearity and low levels of autocorrelation. At these

s either best or competes with the COR estimator. Moreover when the sample size is small (n=10) and multicollinearity level is not high, the OLS estimator is best at low level of autocorrelation whereas the ML is best at moderate levels of autocorrelation.

References

Aitken, A. C. (1935). On least squares and linear combinations of observations. Proceedings of the Royal Statistical Society, 55, 42–48.
Ayinde, K. (2007). Equations to generate normal variates with desired intercorrelation matrix. International Journal of Statistics and Systems, 2(2), 99–111.
Ayinde, K. (2007). A comparative study of the performances of the OLS and some GLS estimators when stochastic regressors are both collinear and correlated with error terms. Journal of Mathematics and Statistics, 3(4), 196–200.
Ayinde, K. (2008). Performances of some estimators of linear model when stochastic regressors are correlated with autocorrelated error terms. European Journal of Scientific Research, 20(3), 558–571.
Ayinde, K. (2010). Equations for generating normally distributed random variables with […]. Journal of Mathematical Sciences, 21(2), 183–203.
Ayinde, K. (2007). A comparative study of the OLS and some GLS estimators when the regressors are stochastic. Trends in Applied Sciences Research, 2(4), 354–359.
Ayinde, K. and Iyaniwura, J. O. (2008). A comparative study of the performances of some estimators of linear model with […]. Global Journal of Pure and Applied Sciences, 14(3), 363–369.
Ayinde, K. and Oyejola, B. A. (2007). A comparative study of performances of OLS and some GLS estimators when stochastic regressors are correlated with error terms. Research Journal of Applied Sciences, 2, 215–220.
Beach, C. M. and MacKinnon, J. G. (1978). A maximum likelihood procedure for regression with autocorrelated errors. Econometrica, 46, 51–58.
Bock, M. E., Yancey, T. A. and Judge, G. G. (1973). Statistical consequences of preliminary test estimation in regression. Journal of the American Statistical Association, 60, 234–246.
Chatterjee, S., Hadi, A. S. and Price, B. (2000). Regression Analysis by Example, 3rd Edition. A Wiley-Interscience Publication, John Wiley and Sons, New York.
Chipman, J. S. (1979). Efficiency of least squares estimation of linear trend when residuals are autocorrelated. Econometrica, 47, 115–128.
Cochrane, D. and Orcutt, G. H. (1949). Application of least squares regression to relationships containing autocorrelated error terms. Journal of the American Statistical Association, 44, 32–61.
Durbin, J. (1960). Estimation of parameters in time-series regression models. Journal of the Royal Statistical Society, Series B, 22, 139–153.
Fomby, T. B., Hill, R. C. and Johnson, S. R. (1984). Advanced Econometric Methods. Springer-Verlag, New York, Berlin, Heidelberg, London, Paris, Tokyo.
Helland, I. S. (1988). On the structure of partial least squares regression. Communications in Statistics – Simulation and Computation, 17, 581–607.
Helland, I. S. (1990). Partial least squares regression and statistical models. Scandinavian Journal of Statistics, 17, 97–114.
Hoerl, A. E. (1962). Application of ridge analysis to regression problems. Chemical Engineering Progress, 58, 54–59.
Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: biased estimation for non-orthogonal problems. Technometrics, 8, 27–51.
Hildreth, C. and Lu, J. Y. (1960). Demand relationships with autocorrelated disturbances. Michigan State University, Agricultural Experiment Statistical Bulletin 276, East Lansing, Michigan.
Iyaniwura, J. O. and Nwabueze, J. C. (2004). Estimators of linear model with autocorrelated error terms and trended independent variable. Journal of Nigeria Statistical Association, 17, 20–28.
Johnston, J. (1984). Econometric Methods, 3rd Edition. McGraw-Hill, New York.
Kramer, W. (1980). Finite sample efficiency of OLS in linear regression model with autocorrelated errors. Journal of the American Statistical Association, 81, 150–154.
Kleiber, C. (2001). Finite sample efficiency of OLS in linear regression models with long memory disturbances. Economics Letters, 72, 131–136.
Maddala, G. S. (2002). Introduction to Econometrics, 3rd Edition. John Wiley and Sons Limited, England.
Marquardt, D. W. (1970). Generalized inverse, ridge regression, biased linear estimation and non-linear estimation. Technometrics, 12, 591–612.
Massy, W. F. (1965). Principal component regression in exploratory statistical research. Journal of the American Statistical Association, 60, 234–246.
Naes, T. and Marten, H. (1988). Principal component regression in NIR analysis: viewpoints, background details and selection of components. Journal of Chemometrics, 2, 155–167.
Neter, J. and Wasserman, W. (1974). Applied Linear Statistical Models. Richard D. Irwin, Inc.
Nwabueze, J. C. (2005a). Performances of estimators of linear model with auto-correlated error terms when the independent variable is normal. Journal of Nigerian Association of Mathematical Physics, 9, 379–384.
Nwabueze, J. C. (2005b). Performances of estimators of linear model with auto-correlated error terms with exponential independent variable. Journal of Nigerian Association of Mathematical Physics, 9, 385–388.
Nwabueze, J. C. (2005c). Performances of estimators of linear model with auto-correlated error terms when the independent variable is autoregressive. Global Journal of Pure and Applied Sciences, 11, 131–135.
Olasemi, K. A. (2011). Estimators of Linear Regression Model with Autocorrelated Error Terms and Correlated Stochastic Uniform Regressors. Unpublished M.Sc. thesis, University of Ibadan, Nigeria.
Prais, S. J. and Winsten, C. B. (1954). Trend estimators and serial correlation. Unpublished Cowles Commission Discussion Paper, Chicago.
Phatak, A. and de Jong, S. (1997). The geometry of partial least squares. Journal of Chemometrics, 11, 311–338.
Rao, P. and Griliches, Z. (1969). Small sample properties of several two-stage regression methods in the context of autocorrelated error. Journal of the American Statistical Association, 64, 251–272.
Schumann, E. (2009). Generating correlated uniform variates. http://comisef.wikidot.com/tutorial:correlateduniformvariates
Theil, H. (1971). Principles of Econometrics. John Wiley and Sons, New York.
Thornton, D. L. (1982). The appropriate autocorrelation transformation when the autocorrelation process has a finite past. Federal Reserve Bank of St. Louis, 82–102.
