
Page 1

Time Series Forecasting (Part II)

Duong Tuan Anh

Faculty of Computer Science and Engineering

September 2011

Page 2

Outline

Stationary and nonstationary processes
Autocorrelation function
Autoregressive (AR) models
Moving average (MA) models
ARMA models
ARIMA models
Estimating and checking ARIMA models (Box-Jenkins methodology)

Page 3

Stochastic Processes

The time series in this part are all based on an important assumption – that the series to be forecasted has been generated by a stochastic process.

We assume that the values X1, X2, …, XT in the series are drawn randomly from a probability distribution. In modeling such a process, we try to describe the characteristics of its randomness.

We could assume that the observed series is drawn from a set of random variables. These random variables can be denoted by {X_t, t ∈ T}, where T is the set of time indices.

Page 4

Stationary and Nonstationary Processes
We want to know whether or not the underlying stochastic process that generated the time series is invariant with respect to time.

If the characteristics of the stochastic process change over time, i.e., if the process is nonstationary, it will be difficult to represent the time series by a simple algebraic model.

If the stochastic process is fixed in time, i.e., if it is stationary, then one can model the process via an equation with fixed coefficients that can be estimated from past data.

The models described here represent stochastic processes that are assumed to be in equilibrium about a constant mean level. The probability of a given fluctuation of the process from that mean level is assumed to be the same at any point in time.

Page 5

Stationary processes
Mathematically, a stochastic process is called stationary if its first moment and second moment are fixed and do not change over time. The first moment is the mean, E[X_t], and the second moment is the covariance between X_t and X_{t+k}.
Covariance computed on the same random process is called auto-covariance. The variance of a process, Var[X_t], is a special case of auto-covariance with lag k = 0.

Therefore, a process is called stationary if:
Mean: E[X_t] = μ < ∞ for all t   (3.1)
Variance: Var[X_t] = σ² < ∞ for all t   (3.2)
Auto-covariance depends only on the lag k:
γ_k = Cov[X_t, X_{t+k}] = E[(X_t − μ)(X_{t+k} − μ)]   (3.3)
The set of all auto-covariance coefficients {γ_k}, k = 0, 1, 2, … forms the autocovariance function of the process. Note that γ_0 = σ².

Page 6

Autocorrelation Function

The autocovariance function is normalized to form the set of all autocorrelation coefficients {ρ_k}, k = 0, 1, 2, …, known as the autocorrelation function (ACF), by the formula:
ρ_k = γ_k / γ_0
That means ρ_0 = 1.

Page 7

Sample autocorrelation function
If a process is stationary, its probability distribution p(X_t) is the same at any time point, and its shape (or at least some of its properties) can be inferred by looking at a histogram of the observed values X_1, X_2, …, X_T.

In practice, an estimate of the mean can be computed from the sample mean of the series:
X̄ = (1/T) Σ_{t=1}^{T} X_t
An estimate of the variance can be computed from the formula of the sample variance:
σ̂² = (1/T) Σ_{t=1}^{T} (X_t − X̄)²

Page 8

Sample autocorrelation function
Similarly, an estimate of the autocorrelation function can be computed from the sample autocorrelation function:
r_k = Σ_{t=1}^{T−k} (X_t − X̄)(X_{t+k} − X̄) / Σ_{t=1}^{T} (X_t − X̄)²

It’s easy to see from their definitions that both the theoretical and estimated autocorrelation functions are symmetrical, i.e., the correlation for a positive displacement is the same as that for a negative displacement, so that:
ρ_k = ρ_{−k}
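As a concrete illustration of these estimators, here is a minimal Python/NumPy sketch of the sample mean, sample variance, and sample autocorrelation function r_k (the function name sample_acf and the 1/T variance convention are choices of this sketch, not taken from the slides):

import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_0..r_max_lag of a one-dimensional series."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    x_bar = x.mean()                             # sample mean
    dev = x - x_bar
    c0 = np.sum(dev ** 2) / T                    # sample variance (lag-0 autocovariance)
    r = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        c_k = np.sum(dev[:T - k] * dev[k:]) / T  # sample autocovariance at lag k
        r[k] = c_k / c0                          # normalize: r_k = c_k / c_0
    return r

# Example: for white noise, all r_k with k > 0 should be close to zero.
rng = np.random.default_rng(0)
print(sample_acf(rng.normal(size=500), max_lag=5))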

Page 9

Figure 1 shows a time series and its autocorrelation function: the readings from a chemical process together with their sample autocorrelation function.
The plot of an autocorrelation function is also called a correlogram.

Page 10

How to check stationarity of a time series
Stationarity of a time series can be identified intuitively from the plot of the series, based on two properties:
- The values fluctuate around a fixed mean in the long term.
- The variance of the data does not change over time.

Implicitly, in the condition of stationarity, we assume that the first two moments of Xt are finite.

Another way to decide whether a series is stationary is looking at a plot of the autocorrelation function.

Page 11

If the autocorrelation function falls off as k, the number of lags, increases, then the series is stationary. In this case, we have to solve the two following problems:
(a) determine whether a particular value of the sample autocorrelation function is close enough to zero to permit assuming that the true value ρ_k of the autocorrelation function is equal to 0;
(b) test whether all the values of the autocorrelation function are equal to 0 (if they are, we know that we are dealing with white noise).
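A minimal sketch of these two checks, using the standard rule of thumb that under the null hypothesis a sample autocorrelation is approximately normal with standard deviation 1/√T, so values within about ±2/√T can be treated as zero (the helper sample_acf is the sketch given earlier; the function name here is illustrative):

import numpy as np

def significant_lags(r, T):
    """Lags k >= 1 whose sample autocorrelation exceeds ~2/sqrt(T) in magnitude."""
    bound = 2.0 / np.sqrt(T)
    return [k for k in range(1, len(r)) if abs(r[k]) > bound]

# Usage sketch:
#   r = sample_acf(x, max_lag=20)
#   significant_lags(r, len(x)) == []   -> compatible with white noise
#   r dying out as k grows              -> series can be treated as stationary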

Page 12

Autoregressive Models (AR)
In the autoregressive process of order p, the current observation Y_t is generated by a weighted average of past observations going back p periods, together with a random disturbance in the current period.
A time series is an autoregressive process of order p (AR(p)) if each observation Y_t can be written as the following equation:
Y_t = φ_1 Y_{t-1} + φ_2 Y_{t-2} + … + φ_p Y_{t-p} + δ + ε_t   (1)
where {ε_t} is a white noise process with mean E[ε_t] = 0, variance σ_ε² and covariance γ_k = 0 for k ≠ 0, and δ is a constant term which relates to the mean of the stochastic process. Note that δ can be zero.

Page 13

White noise
A time series {ε_t} is called a white noise if it is a sequence of independent and identically distributed random variables with finite mean and variance.
In particular, if it is normally distributed with mean zero and variance σ_ε², the series is called Gaussian white noise. For a white noise series, all the autocorrelations (at nonzero lags) are zero.

White noise processes may not occur very commonly, but weighted sums of a white noise process can provide a good representation of processes that are nonwhite.

Page 14

First order Autoregressive Models – AR(1)
Consider the first order autoregressive model AR(1):
y_t = φ y_{t-1} + ε_t   (2)
The constant term can be omitted without loss of generality. If the expected value of y_t is denoted by μ_t, then
μ_t = φ μ_{t-1}
From this equation, we get μ_t = φ^N μ_{t-N} if the autoregressive process starts at time −N. Since the time series is stationary (i.e., it has no trend), we have |φ| < 1. If the process begins in the infinite past, then μ_t = 0 for all t.
Squaring both sides of (2) and taking expected values yields the equation:
E(y_t²) = φ² E(y_{t-1}²) + E(ε_t²) + 2φ E(y_{t-1} ε_t)
or σ_y² = φ² σ_y² + σ_ε², where σ_y² = Var(y_t) and σ_ε² = Var(ε_t).

Page 15

AR(1)
If the series is stationary with zero mean, then
σ_y² = σ_ε² / (1 − φ²)
To obtain the autocovariance coefficients, go back and multiply (2) by y_{t-1} and take expected values. That yields:
E(y_t y_{t-1}) = φ E(y_{t-1}²) + E(y_{t-1} ε_t)   (3)
Let us denote σ_y² by γ_0 and the autocovariance coefficients by γ_1, γ_2, …, γ_m. Equation (3) can be rewritten as γ_1 = φ γ_0, and in general
γ_N = φ^N γ_0,  N ≥ 0.
The larger the value of φ (assuming it is between 0 and 1), the smoother y_t will be. If φ < 0, the series will exhibit a jagged pattern; technically, it will be less smooth than a pure white noise series.
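These AR(1) results are easy to check with a small simulation; a minimal sketch (the values φ = 0.7 and σ_ε = 1 are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(1)
phi, T = 0.7, 200_000
eps = rng.normal(size=T)                 # white noise with sigma_eps = 1
y = np.empty(T)
y[0] = eps[0]
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]       # y_t = phi * y_{t-1} + eps_t

print(y.var(), 1.0 / (1 - phi ** 2))     # sample variance vs sigma_eps^2 / (1 - phi^2)
gamma1 = np.mean(y[:-1] * y[1:])         # sample autocovariance at lag 1 (mean is ~0 here)
print(gamma1, phi * y.var())             # should agree with gamma_1 = phi * gamma_0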

Page 16

AR(2)
AR(2) has the form:
y_t = φ_1 y_{t-1} + φ_2 y_{t-2} + ε_t   (4)
If both sides of equation (4) are multiplied by y_t and expected values are taken, we get:
γ_0 = φ_1 γ_1 + φ_2 γ_2 + σ_ε²
Multiplying (4) by y_{t-1} and y_{t-2} and taking expected values yields:
γ_1 = φ_1 γ_0 + φ_2 γ_1
γ_2 = φ_1 γ_1 + φ_2 γ_0
The last two equations can be restated in terms of the autocorrelation coefficients ρ_1 and ρ_2 as:
ρ_1 = φ_1 + φ_2 ρ_1   (5)
ρ_2 = φ_1 ρ_1 + φ_2   (6)
These are called the Yule-Walker equations.

Page 17

Yule-Walker Equations

Suppose we have the sample autocorrelation function for a time series which was generated by an AR(2) process. We could then measure ρ_1 and ρ_2 and substitute these numbers into the Yule-Walker equations. We would then have two algebraic equations which could be solved simultaneously for the two unknowns φ_1 and φ_2.
Thus, we can use the Yule-Walker equations to obtain estimates of the AR parameters φ_1 and φ_2.
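In matrix form, (5) and (6) read [ρ_1, ρ_2]ᵀ = [[1, ρ_1], [ρ_1, 1]] [φ_1, φ_2]ᵀ, so given measured values they can be solved directly; a minimal sketch (the numbers r1 = 0.8 and r2 = 0.5 are made up for illustration):

import numpy as np

r1, r2 = 0.8, 0.5                        # measured sample autocorrelations at lags 1 and 2
R = np.array([[1.0, r1],
              [r1, 1.0]])                # Yule-Walker coefficient matrix
phi1, phi2 = np.linalg.solve(R, np.array([r1, r2]))
print(phi1, phi2)                        # estimates of the AR(2) parameters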

Page 18

AR(p) Model: Partial Autocorrelation Function
One problem in constructing autoregressive models is identifying the order p of the underlying process.
For moving average models this is less of a problem, since if the process is of order q, the sample autocorrelations should all be close to zero for lags greater than q.

Although some information about the order of an AR process can be obtained from the oscillatory behavior of the sample ACF, much more information can be obtained from the partial autocorrelation function (PACF).

To understand what the PACF is and how it can be used, let us first consider the covariances and autocorrelation function for the AR of order p.

Page 19

Partial Autocorrelation Function
First, notice that the covariance with displacement k is determined from:
γ_k = E[y_{t-k} (φ_1 y_{t-1} + φ_2 y_{t-2} + … + φ_p y_{t-p} + ε_t)]   (7)
Now letting k = 0, 1, …, p, we obtain the following p + 1 difference equations that can be solved simultaneously for γ_0, γ_1, …, γ_p:
γ_0 = φ_1 γ_1 + φ_2 γ_2 + … + φ_p γ_p + σ_ε²
γ_1 = φ_1 γ_0 + φ_2 γ_1 + … + φ_p γ_{p-1}
…   (8)
γ_p = φ_1 γ_{p-1} + φ_2 γ_{p-2} + … + φ_p γ_0
For displacements k > p the covariances are determined from:
γ_k = φ_1 γ_{k-1} + φ_2 γ_{k-2} + … + φ_p γ_{k-p}   (9)
Now, by dividing the left-hand and right-hand sides of the equations in (8) by γ_0, we can derive a set of p equations that together determine the first p values of the ACF:

Page 20

ρ_1 = φ_1 + φ_2 ρ_1 + … + φ_p ρ_{p-1}
ρ_2 = φ_1 ρ_1 + φ_2 + … + φ_p ρ_{p-2}
…   (10)
ρ_p = φ_1 ρ_{p-1} + φ_2 ρ_{p-2} + … + φ_p
For displacements k > p we have, from Eq. (9):
ρ_k = φ_1 ρ_{k-1} + φ_2 ρ_{k-2} + … + φ_p ρ_{k-p}   (11)
The equations in (10) are the Yule-Walker equations. If ρ_1, ρ_2, …, ρ_p are known, then the equations can be solved for φ_1, φ_2, …, φ_p, and vice versa.
Unfortunately, solution of the Yule-Walker equations requires knowledge of p, the order of the AR process. Therefore, we solve the Yule-Walker equations for successive values of p.
In other words, suppose we begin by hypothesizing that p = 1; then Eqs. (10) reduce to ρ_1 = φ_1 or, using the sample autocorrelations, r_1 = φ̂_1. Thus, if the calculated value of φ̂_1 is significantly different from 0, we know that the AR process is at least of order 1. Let us denote this value φ̂_1 by a_1.

Page 21

Now let us consider the hypothesis that p = 2 and solve Eqs. (10) for p = 2. Doing this gives us a new set of estimates φ̂_1 and φ̂_2. If φ̂_2 is significantly different from 0, we can conclude that the AR process is at least of order 2, while if φ̂_2 is approximately 0, we can conclude that p = 1. Let us denote the value φ̂_2 by a_2.
We now repeat this process for successive values of p. For p = 3 we obtain an estimate φ̂_3, which we denote by a_3, and so on. We call the series a_1, a_2, a_3, … the partial autocorrelation function (PACF) and note that we can infer the order of the autoregressive process from its behavior.
In particular, if the true order of the process is p, we should observe that a_j ≈ 0 for j > p. In other words, for an AR(p) series, the sample PACF cuts off at lag p.
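This construction translates directly into code: for each trial order m, solve the order-m Yule-Walker system built from the sample autocorrelations and keep the last coefficient as a_m. A minimal sketch (it assumes r is an array of sample autocorrelations with r[0] = 1, e.g. from the sample_acf sketch earlier):

import numpy as np

def pacf_from_acf(r, max_order):
    """a_m = last Yule-Walker coefficient of a fitted AR(m), for m = 1..max_order."""
    a = []
    for m in range(1, max_order + 1):
        R = np.array([[r[abs(i - j)] for j in range(m)] for i in range(m)])
        rhs = np.array([r[k] for k in range(1, m + 1)])
        phi_hat = np.linalg.solve(R, rhs)     # solve the order-m Yule-Walker equations
        a.append(phi_hat[-1])                 # partial autocorrelation at lag m
    return np.array(a)

# For an AR(p) series, the values a_{p+1}, a_{p+2}, ... should be close to zero.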

Page 22

Let us look at the second order autoregressive process AR(2): y_t = 1.69 y_{t-1} − 0.8 y_{t-2} + ε_t
The figure shows the graph of 120 observations on a series generated by the AR(2) process y_t = 1.69 y_{t-1} − 0.8 y_{t-2} + ε_t, together with the theoretical and empirical ACFs (middle) and the theoretical and empirical PACFs (bottom). The theoretical values correspond to the solid bars.

Page 23

How to check whether a_j is nonzero
To test whether a particular a_j is zero, we can use the fact that it is approximately normally distributed, with mean zero and variance 1/T (T is the number of data points in the time series).
Hence, we can check whether it is statistically significant at, say, the 5 percent level by determining whether it exceeds 2/√T in magnitude.

Page 24

How to estimate parameters of AR(p)
For an AR(p) process, the difference equation for its autocorrelation function is given by:
ρ_k = φ_1 ρ_{k-1} + φ_2 ρ_{k-2} + … + φ_p ρ_{k-p}
We can rewrite this equation as a set of p simultaneous linear equations relating the parameters φ_1, …, φ_p to ρ_1, …, ρ_p:
ρ_1 = φ_1 + φ_2 ρ_1 + … + φ_p ρ_{p-1}
ρ_2 = φ_1 ρ_1 + φ_2 + … + φ_p ρ_{p-2}
…
ρ_p = φ_1 ρ_{p-1} + φ_2 ρ_{p-2} + … + φ_p
Using these Yule-Walker equations to solve for the parameters φ_1, …, φ_p in terms of the estimated values of the autocorrelation function, we arrive at estimates of the parameters φ_1, φ_2, …, φ_p.
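Once the order p has been chosen, the same system gives the parameter estimates; a minimal sketch (r is assumed to hold the sample autocorrelations r[0] = 1, r[1], …, r[p]):

import numpy as np

def yule_walker_ar(r, p):
    """Estimate phi_1..phi_p from the sample autocorrelations r[0..p]."""
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    rhs = np.array([r[k] for k in range(1, p + 1)])
    return np.linalg.solve(R, rhs)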

Page 25

Moving Average (MA) Models
A time series is called a moving average process of order q (MA(q)) if each observation can be written as the following equation:
y_t = μ + ε_t − θ_1 ε_{t-1} − θ_2 ε_{t-2} − … − θ_q ε_{t-q}   (3.10)
where the random disturbance component {ε_t} is a white noise process with zero mean, constant variance σ_ε², and auto-covariance γ_k = 0 for k ≠ 0.
White noise processes may not occur very commonly, but weighted sums of a white noise process can provide a good representation of processes that are non-white noise.
So, in the MA(q) model each observation y_t is generated by a weighted average of random disturbances going back q periods.

Page 26

Lag operator
We can rewrite equation (3.10) in the form:
y_t = μ + θ(B) ε_t
where θ(B) = 1 − θ_1 B − θ_2 B² − … − θ_q B^q
is a polynomial of order q in B, and B is the lag operator (backward shift operator), which is used to describe a lag in time:
B ε_t = ε_{t-1}
The mean of the moving average process is independent of time, since E[y_t] = μ and E[ε_t] = 0.
Each ε_t is assumed to be generated by the same white noise process, so that E[ε_t] = 0, the variance is σ_ε², and the covariance γ_k = 0 for k ≠ 0.

Page 27

Stationary MA(q)
Let us now look at the variance, denoted by γ_0, of the moving average process of order q:
Var[y_t] = γ_0 = E[(y_t − μ)²]
 = E(ε_t² + θ_1² ε_{t-1}² + … + θ_q² ε_{t-q}² − 2θ_1 ε_t ε_{t-1} − …)
 = σ_ε² (1 + θ_1² + θ_2² + … + θ_q²)
From the above equation, we see that if MA(q) is the realization of a stationary random process, it must satisfy the following condition:
1 + θ_1² + θ_2² + … + θ_q² < ∞
This result is trivial, since we have only a finite number of θ_i and thus their sum is finite.

Page 28

ACF of a stationary MA(q)

However, the assumption of a fixed number of θ_i can be considered an approximation to a more general model. A complete model of most random processes would require an infinite number of lagged disturbance terms (and their corresponding weights).
Then, as q, the order of the MA process, becomes infinitely large, we must require that 1 + θ_1² + θ_2² + … + θ_q² converge to ensure the stationarity of the MA process.
Convergence will usually occur if the θ_i become smaller as i becomes larger. We will see later that if the process is stationary and represented by an MA model of order q, we expect the autocorrelation function ρ_k to become smaller as k becomes larger.
This is consistent with the result of the previous section that one indicator of stationarity is an ACF that approaches 0.

Page 29

MA(1)
We begin with the moving average process of order 1. The process is denoted by MA(1), and its equation is:
y_t = μ + ε_t − θ_1 ε_{t-1}
This process has mean μ and variance γ_0 = σ_ε² (1 + θ_1²).
Now let us derive the covariance for a one-lag displacement, γ_1:
γ_1 = E[(y_t − μ)(y_{t-1} − μ)] = E[(ε_t − θ_1 ε_{t-1})(ε_{t-1} − θ_1 ε_{t-2})] = −θ_1 σ_ε²
In general we can determine the covariance for a k-lag displacement to be
γ_k = E[(ε_t − θ_1 ε_{t-1})(ε_{t-k} − θ_1 ε_{t-k-1})] = 0 for k > 1
Thus the MA(1) process has a covariance of 0 when the displacement is more than one period. We can now determine the autocorrelation function for the MA(1) process:
ρ_k = γ_k / γ_0 = −θ_1 / (1 + θ_1²) for k = 1
ρ_k = 0 for k > 1

Page 30

MA(2)
Now let us examine the moving average process of order 2. The process is denoted by MA(2), and its equation is:
y_t = μ + ε_t − θ_1 ε_{t-1} − θ_2 ε_{t-2}
This process has mean μ, variance γ_0 = σ_ε² (1 + θ_1² + θ_2²), and covariances given by
γ_1 = E[(ε_t − θ_1 ε_{t-1} − θ_2 ε_{t-2})(ε_{t-1} − θ_1 ε_{t-2} − θ_2 ε_{t-3})] = −θ_1 σ_ε² + θ_1 θ_2 σ_ε² = −θ_1 (1 − θ_2) σ_ε²
γ_2 = E[(ε_t − θ_1 ε_{t-1} − θ_2 ε_{t-2})(ε_{t-2} − θ_1 ε_{t-3} − θ_2 ε_{t-4})] = −θ_2 σ_ε²
and γ_k = 0 for k > 2
The MA(2) process has a memory of exactly two periods, so that the value of y_t is influenced only by events that took place in the current period, one period back, and two periods back.

Page 31

MA(q)
The MA process of order q has a memory of exactly q periods. The autocorrelation function of the MA process of order q is given by:
ρ_k = 1 if k = 0
ρ_k = (−θ_k + θ_1 θ_{k+1} + … + θ_{q−k} θ_q) / (1 + θ_1² + θ_2² + … + θ_q²) if k = 1, 2, …, q
ρ_k = 0 if k > q
So we can see why the sample ACF can be useful in specifying the order of a moving average process.
An MA(q) series is only linearly related to its first q lagged values and hence is a "finite memory" model.
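A direct transcription of this formula; a minimal sketch (theta holds θ_1, …, θ_q under the sign convention y_t = μ + ε_t − θ_1 ε_{t-1} − … − θ_q ε_{t-q}):

import numpy as np

def ma_acf(theta, max_lag):
    """Theoretical ACF of an MA(q) process with coefficients theta_1..theta_q."""
    theta = np.asarray(theta, dtype=float)
    q = len(theta)
    denom = 1.0 + np.sum(theta ** 2)
    rho = [1.0]                                  # rho_0 = 1
    for k in range(1, max_lag + 1):
        if k > q:
            rho.append(0.0)                      # memory of exactly q periods
        else:
            num = -theta[k - 1] + np.sum(theta[:q - k] * theta[k:])
            rho.append(num / denom)
    return np.array(rho)

print(ma_acf([0.5], 3))    # MA(1): rho_1 = -0.5 / 1.25 = -0.4, then zeros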

Page 32

An example of a second-order MA process might be: y_t = ε_t + 0.9 ε_{t-1} + 0.8 ε_{t-2}
Figure 2. The graph of 120 observations on a series generated by the MA(2) process y_t = ε_t + 0.9 ε_{t-1} + 0.8 ε_{t-2}, together with the theoretical and empirical ACFs (bottom) and the theoretical and empirical PACFs (middle). The theoretical values correspond to the solid bars.

Page 33

Durbin’s method for estimating MA(q)
Given the MA(q) model with the equation:
y_t = ε_t − θ_1 ε_{t-1} − θ_2 ε_{t-2} − … − θ_q ε_{t-q}
the method for estimating MA(q) consists of two steps.
Step 1: Fit an AR model of order m > q to {y_t}. Once m has been specified, the estimated AR parameters {φ̂_k} (k = 1, …, m) can be obtained via the Yule-Walker estimator. Hence estimates {ε̂_t} of the noise sequence {ε_t} can be derived, using the equation:
y_t = φ̂_1 y_{t-1} + φ̂_2 y_{t-2} + … + φ̂_m y_{t-m} + ε_t
Step 2: Using {ε̂_t}, we can write:
y_t − ε̂_t = −θ_1 ε̂_{t-1} − θ_2 ε̂_{t-2} − … − θ_q ε̂_{t-q}, for t = 0, …, N − 1.

Page 34

Durbin’s method (cont.)

From this, the estimated values {θ̂_k} of {θ_k} can be obtained by solving the resulting system of equations.

The order m can be selected via the AIC or BIC. However, a more expedient rule for selecting m is m = 2q.
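A minimal NumPy sketch of the two-step idea, reusing the sample_acf and yule_walker_ar helpers sketched earlier and an ordinary least-squares fit for the second step (all helper names are this sketch's own, not from the slides):

import numpy as np

def durbin_ma(y, q, m=None):
    """Two-step Durbin estimate of the MA(q) parameters theta_1..theta_q."""
    y = np.asarray(y, dtype=float)
    m = 2 * q if m is None else m                     # expedient rule: m = 2q
    # Step 1: fit a long AR(m) by Yule-Walker and form the residuals eps_hat.
    phi_hat = yule_walker_ar(sample_acf(y, m), m)
    eps_hat = np.array([y[t] - phi_hat @ y[t - m:t][::-1] for t in range(m, len(y))])
    # Step 2: regress (y_t - eps_hat_t) on eps_hat_{t-1}..eps_hat_{t-q}.
    lhs = y[m + q:] - eps_hat[q:]
    X = np.column_stack([eps_hat[q - k: len(eps_hat) - k] for k in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, lhs, rcond=None)
    return -beta          # theta_k under the convention y_t = eps_t - sum_k theta_k * eps_{t-k}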

The term Moving Average is historical and should not be confused with the moving average smoothing procedures.

Page 35

Summary
A brief summary of AR and MA models is in order. We have the following properties:
- For MA models, the ACF is useful in specifying the order because the ACF cuts off at lag q for an MA(q) series.
- For AR models, the PACF is useful in order determination because the PACF cuts off at lag p for an AR(p) process.
- An MA series is always stationary, but for an AR series to be stationary, all of its characteristic roots must be less than 1 in modulus.
- For a stationary series, the multistep-ahead forecasts converge to the mean of the series and the variances of the forecast errors converge to the variance of the series.

Page 36

ARMA Models

Many stationary random processes cannot be modeled as purely MA or as purely AR, since they have the qualities of both types of processes.

The logical extension of the models MA and AR is the mixed autoregressive – moving average process of order (p, q). We denote this process as ARMA(p,q) and represent it by:

y_t = φ_1 y_{t-1} + φ_2 y_{t-2} + … + φ_p y_{t-p} + δ + ε_t − θ_1 ε_{t-1} − θ_2 ε_{t-2} − … − θ_q ε_{t-q}

Why bother with the mixed model? The answer is parsimony: there are fewer parameters to be estimated.

Note: In practice, the values of p and q each rarely exceed 2.

Page 37

Summary on AR, MA and ARMA

Model | ACF | PACF
AR(p) | Die out | Cut off after the order p of the process
MA(q) | Cut off after the order q of the process | Die out
ARMA(p,q) | Die out | Die out

In this context, “Die out” means “tend to zero gradually” and “Cut off” means “disappear” or “is zero”.

Page 38

ARIMA Models
In practice, many of the time series we will work with are nonstationary, so that the characteristics of the underlying stochastic process change over time.

Now we construct models for those nonstationary series which can be transformed into stationary series by differencing one or more times.

We say that y_t is homogeneous nonstationary of order d if
w_t = Δ^d y_t   (3.32)
is stationary. Here Δ denotes differencing, i.e.,
Δy_t = y_t − y_{t-1}
Δ²y_t = Δy_t − Δy_{t-1}
After differencing the time series to obtain a stationary series w_t, we can model w_t as an ARMA process.
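Differencing is straightforward in code; a minimal sketch using NumPy (np.diff with n = d applies Δ d times; the example series is made up):

import numpy as np

y = np.array([1.0, 1.4, 2.1, 2.9, 4.2, 5.8])   # illustrative nonstationary series
w1 = np.diff(y, n=1)    # first difference:  Delta y_t   = y_t - y_{t-1}
w2 = np.diff(y, n=2)    # second difference: Delta^2 y_t = Delta y_t - Delta y_{t-1}
# An ARMA(p, q) model is then fitted to the differenced series w_t.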

Page 39

ARIMA models (cont.)
If w_t = Δ^d y_t and w_t is an ARMA(p,q) process, then we say that y_t is an integrated autoregressive-moving average process of order (p, d, q), or simply ARIMA(p, d, q).
We can write the equation for the ARIMA(p,d,q) process, using the lag operator (backward shift operator), as:
φ(B) Δ^d y_t = θ(B) ε_t   (3.33)
with φ(B) = 1 − φ_1 B − φ_2 B² − … − φ_p B^p
and θ(B) = 1 − θ_1 B − θ_2 B² − … − θ_q B^q
We call φ(B) the autoregressive operator and θ(B) the moving average operator.
ARIMA models are a class of linear models capable of representing stationary as well as nonstationary time series.
Note that ARIMA(p,0,q) = ARMA(p,q).

Page 40

Estimating and checking ARIMA models
We have seen that any homogeneous nonstationary time series can be modeled as an ARIMA process of order (p,d,q).

The practical problem is to choose the most appropriate values for p, d, and q, that is, to specify the ARIMA model.

This problem is partly resolved by examining both the autocorrelation function (ACF) and the partial auto-correlation function (PACF) for the time series of concern.

Page 41

The process of determining an ARIMA model consists of the following steps:
Check whether the time series is stationary. If it is not stationary, determine d, the number of times that the series must be differenced to produce a stationary series.
After d is determined, we have to find possible values for p and q:
- For an MA(q) model, the ACF will cut off after the order q of the process, while the PACF will die out very soon.
- For an AR(p) model, the ACF will die out very soon, while the PACF will cut off after the order p of the process.
If both p and q are non-zero, it is difficult to determine the exact orders of the AR and MA parts. Therefore, we apply an iterative approach called the Box-Jenkins methodology. This model-building methodology involves a cycle consisting of the three stages of model selection (identification), model estimation, and model checking.

Page 42

Box-Jenkins methodology
The cycle might have to be repeated several times, and at the end there might be more than one model of the same time series. The Box-Jenkins methodology uses an iterative approach as follows:

An initial model is selected from a general class of ARIMA models, based on an examination of the time series and an examination of its autocorrelations for several time lags.

The chosen model is then checked against the historical data to see whether it accurately describes the series: the model fits well if the residuals are generally small, randomly distributed, and contain no useful information.

If the specified model is not satisfactory, the process is repeated using a new model designed to improve on the original one.

Once a satisfactory model is found, it can be used for forecasting.
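As one possible illustration of this cycle, here is a hedged sketch using the statsmodels package; the candidate order grid, the AIC-based selection, and the ±2/√T residual check are choices of this sketch, not steps prescribed by the slides:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def box_jenkins_cycle(y, d, max_p=2, max_q=2):
    """Fit small ARIMA(p, d, q) candidates, keep the lowest-AIC model,
    then check whether its residuals look like white noise."""
    best = None
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            res = ARIMA(y, order=(p, d, q)).fit()        # estimation stage
            if best is None or res.aic < best[0]:
                best = (res.aic, (p, d, q), res)
    aic, order, res = best
    resid = res.resid                                     # checking stage
    T = len(resid)
    r = [np.corrcoef(resid[:-k], resid[k:])[0, 1] for k in range(1, 11)]
    adequate = all(abs(v) < 2.0 / np.sqrt(T) for v in r)
    return order, aic, adequate    # if not adequate, respecify and repeat the cycle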

Page 43

Notes on model selection (identification)
The process of choosing the optimal (p, d, q) in an ARIMA model is known as model selection (or identification).

Hannan & Rissanen have suggested the following 3-step procedure:
1. Determine the maximum length of lag for an AR model.
2. Use the AIC (Akaike Information Criterion) to determine the length of lag in an AR model.
3. Use the SC (Schwarz Criterion) to determine the lengths of lags for a mixed ARMA model.

Page 44

AIC and SC
With the AIC, k is chosen to minimize a criterion based on the sum of squared residuals Σ ε̂_t², where p is the maximum lag considered and T is the number of observations.
With the SC, k is chosen to minimize a similar criterion that penalizes the number of parameters more heavily.
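Commonly used forms of these two criteria (the Gaussian-likelihood versions, stated here for reference; they may differ by constants from the exact expressions used on the slide) can be computed as follows, with RSS denoting Σ ε̂_t² and k the number of estimated parameters:

import numpy as np

def aic(rss, k, T):
    # Akaike Information Criterion (Gaussian form): T * ln(RSS / T) + 2k
    return T * np.log(rss / T) + 2 * k

def sc(rss, k, T):
    # Schwarz Criterion (BIC): T * ln(RSS / T) + k * ln(T)
    return T * np.log(rss / T) + k * np.log(T)

# The lag length (or the ARMA orders) is chosen to minimize the criterion value.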

Page 45

Estimation of ARMA models
Whenever an AR(1) or higher order process is used, a nonlinear estimation procedure is often utilized. This procedure is an optimization algorithm that attempts to minimize the sum of squared residuals
S = Σ ε_t²
through an iterative procedure. The same situation applies to ARMA models.

In using the common nonlinear algorithms, the answer that is obtained may differ depending on the initial guesses for the parameter values.

Any nonlinear algorithm could produce an incorrect answer for two reasons:
- It could reach a local optimum.
- It could fail to converge at all.

Page 46

Estimation of ARMA models using software packages
Parameter estimation of ARMA models can be performed automatically by sophisticated software packages.
In some software packages, the user may have the choice of estimation method and can choose the most appropriate method based on the problem specifications.
Software packages for time series analysis and forecasting include: SPSS, SAS, Minitab, R, EViews and S-Plus.

Page 47

Model Checking
After a time-series model has been specified and its parameters have been estimated, one must test whether the original specification was correct. The process of model checking involves two steps.

1. The autocorrelation function for the simulated series can be compared with the sample autocorrelation function of the original series. If the two autocorrelation functions seem very different, one needs to re-specify the model.

2. If the two are not markedly different, one can analyze the residuals of the model. Remember that we have assumed that the random error terms ε_t in the actual process are normally distributed and independent. Then, if the model has been specified correctly, the residuals ε̂_t should resemble a white noise process.

Page 48

Note: Before using the model for forecasting, it must be checked for adequacy. Basically, a model is adequate if the residuals cannot be used to improve the forecasts, i.e.,

- The residuals should be random and normally distributed.
- The individual residual autocorrelations should be small. Significant residual autocorrelations at low lags or seasonal lags suggest that the model is inadequate.
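A minimal sketch of the second check (the lag range and the ±2/√T bound are the usual rule-of-thumb choices, not values given on the slide):

import numpy as np

def residuals_look_adequate(resid, max_lag=20):
    """True if every residual autocorrelation up to max_lag lies within ±2/sqrt(T)."""
    resid = np.asarray(resid, dtype=float)
    resid = resid - resid.mean()
    T = len(resid)
    c0 = np.sum(resid ** 2) / T
    bound = 2.0 / np.sqrt(T)
    for k in range(1, max_lag + 1):
        r_k = np.sum(resid[:-k] * resid[k:]) / (T * c0)
        if abs(r_k) > bound:
            return False       # significant residual autocorrelation: model inadequate
    return True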

Page 49

References
J. E. Hanke & D. W. Wichern, Business Forecasting, 8th Edition, Pearson Prentice Hall, 2005.
M. K. Evans, Practical Business Forecasting, Blackwell Publishers, 2001.
F. X. Diebold, Elements of Forecasting, 4th Edition, Thomson South-Western, 2007.
R. S. Pindyck & D. L. Rubinfeld, Econometric Models and Economic Forecasts, 3rd Edition, McGraw-Hill, 1991.
R. S. Tsay, Analysis of Financial Time Series, 2nd Edition, Wiley, 2005.

Page 50

Appendix: Parameter estimation of AR(p) by the least squares method
For an AR(p) model, the least squares method, which starts with the (p+1)-th observation, is often used to estimate the parameters. Specifically, conditioning on the first p observations, we have:
y_t = φ_1 y_{t-1} + φ_2 y_{t-2} + … + φ_p y_{t-p} + a_t,   t = p+1, …, T
which is in the form of a multiple linear regression and can be estimated by the least squares method. Denote the estimate of φ_i by φ̂_i. The fitted model is
ŷ_t = φ̂_1 y_{t-1} + φ̂_2 y_{t-2} + … + φ̂_p y_{t-p}
and the associated residual is â_t = y_t − ŷ_t. The series {â_t} is called the residual series, from which we obtain an estimate of the variance of the residuals, σ̂_a².
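A minimal NumPy sketch of this conditional least squares fit (the helper name fit_ar_ls and the no-intercept choice are this sketch's assumptions):

import numpy as np

def fit_ar_ls(y, p):
    """Conditional least squares estimate of an AR(p) model without intercept,
    using observations t = p+1, ..., T as the regression sample."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([y[p - k: len(y) - k] for k in range(1, p + 1)])  # columns y_{t-1}..y_{t-p}
    target = y[p:]                                                        # y_t
    phi_hat, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ phi_hat        # residual series a_hat_t = y_t - y_hat_t
    return phi_hat, resid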