

Functional Coefficient Regression Models for Nonlinear

Time Series: A Polynomial Spline Approach

JIANHUA Z. HUANG

University of Pennsylvania

HAIPENG SHEN

University of North Carolina at Chapel Hill

ABSTRACT. We propose a global smoothing method based on polynomial splines for the es-

timation of functional coefficient regression models for nonlinear time series. Consistency and

rate of convergence results are given to support the proposed estimation method. Methods for

automatic selection of the threshold variable and significant variables (or lags) are discussed.

The estimated model is used to produce multi-step-ahead forecasts, including interval fore-

casts and density forecasts. The methodology is illustrated by simulations and two real data

examples.

Key words: forecasting, functional autoregressive model, nonparametric regression, threshold autoregressive

model, varying coefficient model.

Running Heading: Functional coefficient models with splines.

1 Introduction

For many real time series data, nonlinear models are more appropriate than linear models for accurately

describing the dynamics of the series and making multi-step-ahead forecasts (see, for example, Tong, 1990,

Franses & van Dijk, 2000). Recently, nonparametric regression techniques have found important applications

in nonlinear time series analysis (Tjøstheim & Auestad, 1994, Tong, 1995, Härdle, Lütkepohl & Chen, 1997).

Although the nonparametric approach is appealing, its application usually requires an unrealistically large

sample size when more than two lagged variables (or exogenous variables) are involved in the model (the

so-called “curse of dimensionality”). To overcome the curse of dimensionality, it is necessary to impose some

structure on the nonparametric model.

A useful structured nonparametric model that still allows appreciable flexibility is the functional coefficient regression model, described as follows. Let $\{Y_t, X_t, U_t\}_{-\infty}^{\infty}$ be jointly strictly stationary processes, with $X_t = (X_{t1}, \dots, X_{td})$ taking values in $\mathbb{R}^d$ and $U_t$ in $\mathbb{R}$. Let $E(Y_t^2) < \infty$. The multivariate regression function is defined as

$$f(x, u) = E(Y_t \mid X_t = x, U_t = u).$$

In a pure time series context, both Xt and Ut consist of some lagged values of Yt. The functional coefficient

regression model requires that the regression function has the form

$$f(x, u) = \sum_{j=1}^{d} a_j(u)\, x_j, \qquad (1)$$

where the $a_j(\cdot)$'s are measurable functions from $\mathbb{R}$ to $\mathbb{R}$ and $x = (x_1, \dots, x_d)^T$. Since $U_t \in \mathbb{R}$, only one-dimensional smoothing is needed in estimating model (1).

The functional coefficient regression model extends several familiar nonlinear time series models such

as the exponential autoregressive (EXPAR) model of Haggan & Ozaki (1981) and Ozaki (1982), threshold

autoregressive (TAR) model of Tong (1990), and functional autoregressive (FAR) model of Chen & Tsay

(1993); see Cai, Fan & Yao (2000) for more discussion. We borrow terminology from TAR and call Ut

the threshold variable. The formulation adopted in this paper allows both Ut and Xt in (1) to contain

exogenous variables. Functional coefficient models (or varying-coefficient models) have received much

attention recently, but most of the work has focused on i.i.d. data or longitudinal data (Hastie & Tibshirani,

1993, Hoover, Rice, Wu & Yang, 1998, Wu, Chiang & Hoover, 1998, Fan & Zhang, 1999, Fan & Zhang, 2000,

Chiang, Rice & Wu, 2001, Huang, Wu & Zhou, 2002).

The local polynomial method (Fan & Gijbels, 1996) has been previously applied to the functional coef-

ficient time series regression models. Chen & Tsay (1993) proposed an iterative algorithm in the spirit of

local constant fitting to estimate the coefficient functions. Cai et al. (2000) and Chen & Liu (2001) used

the local linear method for estimation, where the same smoothing parameter (bandwidth) was employed

for all coefficient functions. The focus of these papers has been on the theoretical and descriptive aspects

of the model and on the estimation of coefficient functions and hypothesis testing. The important issue

of multi-step-ahead forecasting using functional coefficient models has not been well studied. For exam-

ple, Cai et al. (2000) carefully considered only one-step-ahead forecasting. Moreover, use of a single

smoothing parameter in the local linear method could be inadequate if the coefficient functions have different

smoothness.

In this paper, we propose a global smoothing method based on polynomial splines for the estimation of

functional coefficient regression models for nonlinear time series. Different coefficient functions are allowed

to have different smoothing parameters. We establish consistency and rate of convergence results to give

support to the proposed estimation method. Methods for automatic selection of the threshold variable and

significant variables (or lags) are discussed. Moreover, we provide a method to produce multi-step-ahead

forecasts, including point forecasts, interval forecasts, and density forecasts, using the estimated model.

There have been many applications of the local polynomial method to nonlinear time series modeling

in the literature. In addition to the references mentioned above, we list Truong & Stone (1992), Truong

(1994), Tjøstheim & Auestad (1994), Yang, Härdle & Nielsen (1999), Cai & Masry (2000), Tschernig

& Yang (2000), to name just a few. We demonstrate in this paper that global smoothing provides

attractive alternative to local smoothing in nonlinear time series analysis. The attraction of the spline based

global smoothing is that it is closely related to parametric models and thus standard methods for parametric

models can be extended to nonparametric settings. However, it is not clear from the literature why the

spline method should work as a flexible nonparametric method for nonlinear time series. In particular, the

asymptotic theory for parametric models does not apply. There has been substantial recent development of

asymptotic theory for the spline method for i.i.d. data (see for example, Stone, 1994, Huang, 1998, 2001).

The consistency and rate of convergence results for the spline estimators developed in this paper confirm

that the spline method really works in a time series context.

One appealing feature of the spline method proposed in this paper is that it yields a fitted model with

a parsimonious explicit expression. This turns out to be an advantage over the existing local polynomial

method. We can simulate a time series from the fitted spline models and thereby conveniently produce multi-

step-ahead forecasts based on the simulated data. In contrast, direct implementation of the simulation-

based forecasting method using local polynomial smoothing can be computationally expensive, and extra

care needs to be taken in order to relieve the computational burden (see section 3).

The rest of the paper is organized as follows. Section 2 introduces the proposed spline based global

smoothing method, discusses the consistency and rates of convergence of the spline estimates, and provides

some implementation details, such as knot placement, knot number selection, and determination of the

threshold variable and significant variables. Section 3 proposes a method for multi-step-ahead forecasting

using the fitted functional coefficient model. Some results of a simulation study are reported in section 4.

Two real data examples, US GNP and Dutch Guilder-US dollar exchange rate time series, are used in

sections 5 and 6 respectively to illustrate the proposed method and potential usefulness of the functional

coefficient model. Some concluding remarks are given in section 7. All technical proofs are relegated to the

Appendix.

2 Spline estimation

In this section we describe our estimation method using polynomial splines. Our method involves approxi-

mating the coefficient functions aj(·)’s by polynomial splines. Consistency and rates of convergence of the

spline estimators are developed. Some implementation details are also discussed.

2.1 Identification of the coefficient functions

We first discuss the identifiability of the coefficient functions in model (1). We say that the coefficient

functions in the functional coefficient model (1) are identifiable if $f(x, u) = \sum_{j=1}^{d} a_j^{(1)}(u) x_j \equiv \sum_{j=1}^{d} a_j^{(2)}(u) x_j$ implies that $a_j^{(1)}(u) = a_j^{(2)}(u)$ for a.e. $u$, $j = 1, \dots, d$.


We assume that $E(X_t X_t^T \mid U_t = u)$ is positive definite for a.e. $u$. Under this assumption, the coefficient functions in model (1) are identifiable. To see why, denote $a(u) = (a_1(u), \dots, a_d(u))^T$. Then

$$E\bigg[\Big\{\sum_{j=1}^{d} a_j(U_t) X_{tj}\Big\}^2 \,\Big|\, U_t = u\bigg] = a^T(u)\, E\big(X_t X_t^T \mid U_t = u\big)\, a(u). \qquad (2)$$

If $\sum_{j=1}^{d} a_j(u) x_j \equiv 0$, then $E[\{\sum_j a_j(U_t) X_{tj}\}^2] = 0$, and thus $E[\{\sum_j a_j(U_t) X_{tj}\}^2 \mid U_t = u] = 0$ for a.e. $u$. Therefore, it follows from (2) and the positive definiteness of $E(X_t X_t^T \mid U_t = u)$ that $a_j(u) = 0$ a.e. $u$, $j = 1, \dots, d$.
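To make the identifiability condition concrete, here is a small numeric illustration (ours, not from the paper; for simplicity the coefficients are held constant in $u$): when the conditional second-moment matrix is singular, for instance with perfectly collinear regressors, distinct coefficient vectors produce the same regression function, so the positive-definiteness assumption is what rules this out.

```python
import numpy as np

# If E(X_t X_t^T | U_t = u) is singular, here because X_t2 = X_t1 exactly,
# two different coefficient pairs give identical regression functions,
# so the model would not be identifiable.
rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
X = np.column_stack([x1, x1])            # perfectly collinear columns, rank 1
f_a = 1.0 * X[:, 0] + 0.0 * X[:, 1]      # coefficients (a1, a2) = (1, 0)
f_b = 0.3 * X[:, 0] + 0.7 * X[:, 1]      # coefficients (a1, a2) = (0.3, 0.7)
print(np.allclose(f_a, f_b))  # True: two coefficient vectors, one f
```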

2.2 Spline approximation and least squares

Polynomial splines are piecewise polynomials with the polynomial pieces joining together smoothly at a set

of interior knot points. Precisely, a (polynomial) spline of degree l ≥ 0 on an interval U with knot sequence

ξ0 < ξ1 < · · · < ξM+1, where ξ0 and ξM+1 are the two end points of U , is a function that is a polynomial

of degree l on each of the intervals [ξm, ξm+1), 0 ≤ m ≤ M − 1, and [ξM , ξM+1], and globally has l − 1

continuous derivatives for l ≥ 1. A piecewise constant function, linear spline, quadratic spline and cubic

spline correspond to l = 0, 1, 2, 3 respectively. The collection of spline functions of a particular degree and

knot sequence forms a linear function space and it is easy to construct a convenient basis for it. For example,

the space of splines of degree three with knot sequence $\xi_0, \dots, \xi_{M+1}$ forms a linear space of dimension

$M + 4$. The truncated power basis for this space is $1, x, x^2, x^3, (x - \xi_1)_+^3, \dots, (x - \xi_M)_+^3$. A basis with better

numerical properties is the B-spline basis. See de Boor (1978) and Schumaker (1981) for a comprehensive

account of spline functions.
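As a concrete illustration of the truncated power basis just described, the following sketch (ours; as noted above, the B-spline basis is preferable numerically) evaluates the basis of a cubic spline space at a vector of points.

```python
import numpy as np

def truncated_power_basis(u, interior_knots, degree=3):
    """Evaluate the truncated power basis 1, u, ..., u^l, (u - xi_m)_+^l
    of a degree-l spline space at the points in u.
    Returns a (len(u), degree + 1 + M) basis matrix."""
    u = np.asarray(u, dtype=float)
    cols = [u**p for p in range(degree + 1)]             # polynomial part
    for xi in interior_knots:                            # one truncated power per knot
        cols.append(np.clip(u - xi, 0.0, None)**degree)
    return np.column_stack(cols)

# A cubic spline space with M = 2 interior knots has dimension M + 4 = 6.
B = truncated_power_basis(np.linspace(0, 1, 50), interior_knots=[1/3, 2/3])
print(B.shape)  # (50, 6)
```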

The success of the proposed method relies on the good approximation properties of polynomial splines.

Suppose that in (1) the coefficient function $a_j$, $j = 1, \dots, d$, is smooth. Then it can be approximated well by a spline function $a_j^*$ in the sense that $\sup_{u \in \mathcal{U}} |a_j^*(u) - a_j(u)| \to 0$ as the number of knots of the spline goes to infinity (de Boor, 1978, Schumaker, 1981). Thus, there is a set of basis functions $B_{js}(\cdot)$ (e.g., B-splines) and constants $\beta_{js}^*$, $s = 1, \dots, K_j$, such that

$$a_j(u) \approx a_j^*(u) = \sum_{s=1}^{K_j} \beta_{js}^* B_{js}(u). \qquad (3)$$

Then, we can approximate (1) by

$$f(x, u) \approx \sum_{j=1}^{d} \Big\{ \sum_{s=1}^{K_j} \beta_{js}^* B_{js}(u) \Big\} x_j,$$

and estimate the $\beta_{js}^*$'s by minimizing

$$\ell(\boldsymbol{\beta}) = \sum_{t=1}^{n} \bigg( Y_t - \sum_{j=1}^{d} \Big\{ \sum_{s=1}^{K_j} \beta_{js} B_{js}(U_t) \Big\} X_{tj} \bigg)^2, \qquad (4)$$


with respect to $\boldsymbol{\beta}$, where $\boldsymbol{\beta} = (\boldsymbol{\beta}_1^T, \dots, \boldsymbol{\beta}_d^T)^T$ and $\boldsymbol{\beta}_j = (\beta_{j1}, \dots, \beta_{jK_j})^T$. Assume that (4) can be uniquely minimized and denote its minimizer by $\hat{\boldsymbol{\beta}} = (\hat{\boldsymbol{\beta}}_1^T, \dots, \hat{\boldsymbol{\beta}}_d^T)^T$, with $\hat{\boldsymbol{\beta}}_j = (\hat{\beta}_{j1}, \dots, \hat{\beta}_{jK_j})^T$ for $j = 1, \dots, d$. Then $a_j$ is estimated by $\hat{a}_j(u) = \sum_{s=1}^{K_j} \hat{\beta}_{js} B_{js}(u)$ for $j = 1, \dots, d$. We refer to the $\hat{a}_j(\cdot)$'s as the least squares spline estimates.
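A minimal sketch of this least squares procedure (our illustration, not the paper's code; a truncated power basis stands in for the B-spline basis, and all function names are ours). The design matrix stacks, for each $j$, the columns $B_{js}(U_t) X_{tj}$, and (4) is then an ordinary least squares problem in $\boldsymbol{\beta}$.

```python
import numpy as np

def spline_basis(u, knots, degree=3):
    # Truncated power basis for a degree-3 spline with the given interior knots
    # (used for self-containment; B-splines are numerically preferable).
    u = np.asarray(u, dtype=float)
    cols = [u**p for p in range(degree + 1)]
    cols += [np.clip(u - xi, 0.0, None)**degree for xi in knots]
    return np.column_stack(cols)

def fit_functional_coefficients(Y, X, U, knots):
    """Minimize (4) by least squares; returns u -> (a_1(u), ..., a_d(u))."""
    B = spline_basis(U, knots)                       # n x K basis matrix
    n, d = X.shape
    # Design matrix: for each j, columns B_s(U_t) * X_tj.
    D = np.hstack([B * X[:, [j]] for j in range(d)])
    beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
    K = B.shape[1]
    def a_hat(u):
        Bu = spline_basis(np.atleast_1d(u), knots)
        return np.column_stack([Bu @ beta[j*K:(j+1)*K] for j in range(d)])
    return a_hat

# Toy check: data with constant coefficients a1 = 0.5, a2 = -0.3 should be
# recovered approximately, since constants lie in every spline space.
rng = np.random.default_rng(0)
n = 500
U = rng.uniform(-1, 1, n)
X = rng.normal(size=(n, 2))
Y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.05 * rng.normal(size=n)
a_hat = fit_functional_coefficients(Y, X, U, knots=[-0.5, 0.0, 0.5])
print(a_hat(0.0))  # close to [[0.5, -0.3]]
```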

The idea of using basis expansions can be applied more generally to other basis systems for function

approximation such as polynomial bases and Fourier bases. We focus in this paper on B-splines because of

the good approximation properties of splines and the good numerical properties of the B-spline basis. When

B-splines are used, the number of terms Kj in the approximation (3) depends on the number of knots and

the order of the B-splines. We discuss in section 2.4.2 how to select Kj , or the number of knots, using the

data. Note that different Kj ’s are allowed for different aj ’s. This provides flexibility when different aj ’s have

different smoothness.

2.3 Consistency and Rates of Convergence

To provide some theoretical support of the proposed method, we establish in this section the consistency

and convergence rates of the spline estimates. For the data generating process, we assume that

$$Y_t = \sum_{j=1}^{d} a_j(U_t) X_{tj} + \varepsilon_t, \qquad t = 1, \dots, n,$$

where $\varepsilon_t$ is independent of $U_{t'}$, $X_{t'j}$, $j = 1, \dots, d$, $t' \le t$, and $\varepsilon_{t'}$, $t' < t$; $E(\varepsilon_t) = 0$; and $\mathrm{var}(\varepsilon_t) \le C$ for some constant $C$ (the noise errors can be heteroscedastic). When $U_t$ and $X_t = (X_{t1}, \dots, X_{td})$ consist of lagged values of $Y_t$, this is the FAR model considered by Chen & Tsay (1993).

We focus on the performance of the spline estimates on a compact interval $\mathcal{C}$. Let $\|a\|_2 = \{\int_{\mathcal{C}} a^2(t)\, dt\}^{1/2}$ be the $L_2$-norm of a square integrable function $a(\cdot)$ on $\mathcal{C}$. We say that an estimate $\hat{a}_j$ is consistent in estimating $a_j$ if $\lim_n \|\hat{a}_j - a_j\|_2 = 0$ in probability.

For clarity in presentation, we now represent the spline estimate in a function space notation. Let Gj be

a space of polynomial splines on C with a fixed degree and knots having bounded mesh ratio (that is, the

ratios of the differences between consecutive knots are bounded away from zero and infinity uniformly in n).

Then the spline estimates $\hat{a}_j$ are given by

$$\{\hat{a}_j, j = 1, \dots, d\} = \arg\min_{g_j \in G_j,\, j = 1, \dots, d} \sum_{t=1}^{n} \Big\{ Y_t - \sum_{j=1}^{d} g_j(U_t) X_{tj} \Big\}^2 I(U_t \in \mathcal{C}).$$

This is essentially the same as (4) but in a function space notation (assuming that Bjs, s = 1, . . . ,Kj , is

a basis of Gj). Here, we employ a weighting function in the least squares criterion to screen off extreme

observations, following a common practice in nonparametric time series (see, for example, Tjøstheim &

Auestad, 1994).

Let $K_n = \max_{1 \le j \le d} K_j$. Set $\rho_{n,j} = \inf_{g \in G_j} \|g - a_j\|_2$ and $\rho_n = \max_{1 \le j \le d} \rho_{n,j}$.


Theorem 1. Suppose Conditions (i)–(v) in the Appendix hold. Then

$$\|\hat{a}_j - a_j\|_2^2 = O_P\Big(\frac{K_n}{n} + \rho_n^2\Big), \qquad j = 1, \dots, d.$$

In particular, if $\rho_n = o(1)$, then $\hat{a}_j$ is consistent in estimating $a_j$; that is, $\|\hat{a}_j - a_j\|_2 = o_P(1)$, $j = 1, \dots, d$.

The rate of convergence result in this theorem parallels that for i.i.d. data (Stone, Hansen, Kooperberg & Truong, 1997, Huang, 1998, 2001). Here $K_n$ measures the sizes of the estimation spaces $G_j$. The quantity $\rho_n$ measures the size of the approximation error, and its magnitude is determined by the smoothness of the $a_j$'s and the dimension of the spline spaces $G_j$. For example, if the $a_j$, $j = 1, \dots, d$, have bounded second derivatives, then $\rho_n = O(K_n^{-2})$ (see theorem 7.2 in chapter 7 of DeVore & Lorentz, 1993). In this case, the rate of convergence of $\hat{a}_j$ to $a_j$ is $K_n/n + K_n^{-4}$. In particular, if $K_n$ increases with the sample size and is of the same order as $n^{1/5}$ (precisely, $K_n/n^{1/5}$ is bounded away from 0 and infinity), then the rate of convergence is $n^{-4/5}$.
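A quick numeric check of this trade-off (ours, purely illustrative): minimizing the bound $K/n + K^{-4}$ over $K$ indeed selects a $K$ of order $n^{1/5}$.

```python
import numpy as np

def bound(K, n):
    # Upper bound K/n + K^{-4} from the theorem when the a_j's have
    # bounded second derivatives (so rho_n = O(K^{-2})).
    return K / n + K**-4.0

n = 10**6
Ks = np.arange(2, 200)
K_best = Ks[np.argmin(bound(Ks, n))]
print(K_best, round(n**0.2))  # both of order n^{1/5}
```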

Note that in this paper we do not model the form of heteroscedasticity. Our estimate can be viewed as

a quasi-likelihood estimate and the consistency and rate of convergence results hold even in the presence

of heteroscedastic errors. (However, the multi-step-ahead forecasting method discussed in section 3 does require homoscedastic errors.) In principle, one could model the conditional variance function $\sigma^2(x, u) = \mathrm{var}(Y_t \mid X_t = x, U_t = u)$ nonparametrically and improve the efficiency. Implementation of this idea is, however, beyond the scope of this paper.

2.4 Implementation

In this subsection we discuss some implementation details of the proposed spline method, including knot

placement, data-driven knot number selection, and automatic selection of the threshold variable and signif-

icant variables (or lags).

2.4.1 Knot positions

For simplicity, we consider only two ways to place the knots. For a given number of knots, the knots can be

placed evenly such that the distances between any two adjacent knots are the same (referred to as equally

spaced knots). Alternatively, the knots can be placed at the sample quantiles of the threshold variable so

that there are about the same number of observed values of the threshold variables between any two adjacent

knots (referred to as quantile knots). The number of knots is selected using the data (see section 2.4.2).

When the threshold variable consists of lagged values of the time series, we observe in simulations that

the results based on quantile knots wiggle much more (and are thus less satisfactory) than those based on equally

spaced knots. That is because the observed values of the threshold variable (which are the values of the time

series itself) do not spread evenly and are very sparse near the boundaries so that the model selection criteria

tend to choose too many knots in the middle of the data range if quantile knots are used. We recommend

using equally spaced knots.
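The two placements can be sketched as follows (a minimal illustration; the sample, knot count, and variable names are our choices). With a heavy-tailed threshold variable, the quantile knots crowd into the middle of the data range, matching the behaviour described above.

```python
import numpy as np

# Equally spaced knots vs quantile knots for a threshold variable u.
rng = np.random.default_rng(0)
u = rng.standard_t(df=3, size=500)       # heavy-tailed, like lagged series values
M = 5                                     # number of interior knots
equal_knots = np.linspace(u.min(), u.max(), M + 2)[1:-1]
quantile_knots = np.quantile(u, np.linspace(0, 1, M + 2)[1:-1])
print(np.round(equal_knots, 2))
print(np.round(quantile_knots, 2))        # concentrated near the centre of the data
```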


2.4.2 Selection of the number of knots

The numbers of knots serve as the smoothing parameters, playing the same role as bandwidths in the local

linear method. Although a subjective smoothing parameter may be determined by examining the estimated

curves or plots of the residuals, an automatic procedure for selecting Kj from the data is of practical

interest. The numbers of knots are chosen to minimize an appropriate criterion function. We will consider

four criterion functions: AIC (Akaike, 1974), AICC (Hurvich & Tsai, 1989), BIC (Schwarz, 1978), and MCV (modified cross-validation; Cai et al., 2000). Let $n$ denote the number of terms on the right of (4), $p = \sum_j K_j$ be the number of parameters to be estimated, and $\mathrm{RSS} = \ell(\hat{\boldsymbol{\beta}})$ be the residual sum of squares in minimizing (4). The first three criteria are defined as

$$\mathrm{AIC} = \log(\mathrm{RSS}/n) + 2p/n, \qquad \mathrm{AICC} = \mathrm{AIC} + \frac{2(p+1)(p+2)}{n(n-p-2)}, \qquad \mathrm{BIC} = \log(\mathrm{RSS}/n) + \log(n)\, p/n.$$
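The three closed-form criteria are simple to compute once RSS is available; a small sketch (function name ours) also shows BIC's heavier penalty on extra parameters once $\log(n) > 2$.

```python
import numpy as np

def knot_selection_criteria(rss, n, p):
    """AIC, AICC and BIC as defined above, for a fit with residual sum of
    squares rss, sample size n and p = sum_j K_j estimated parameters."""
    aic = np.log(rss / n) + 2 * p / n
    aicc = aic + 2 * (p + 1) * (p + 2) / (n * (n - p - 2))
    bic = np.log(rss / n) + np.log(n) * p / n
    return {"AIC": aic, "AICC": aicc, "BIC": bic}

# Doubling p raises BIC more than AIC for n = 400, since log(400) > 2:
c_small = knot_selection_criteria(rss=10.0, n=400, p=8)
c_large = knot_selection_criteria(rss=10.0, n=400, p=16)
print(c_large["BIC"] - c_small["BIC"] > c_large["AIC"] - c_small["AIC"])  # True
```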

The MCV criterion can be regarded as a modified multi-fold cross-validation criterion that is attentive to

the structure of time series data. Let $m$ and $Q$ be two given positive integers with $n > mQ$. Use $Q$ sub-series of lengths $n - qm$ ($q = 1, \dots, Q$) to estimate the unknown coefficient functions $a_j$ and to compute the one-step forecasting errors for the next section of the time series of length $m$ based on the estimated models. The MCV criterion function is $\mathrm{AMS} = \sum_{q=1}^{Q} \mathrm{AMS}_q$, where, for $q = 1, \dots, Q$,

$$\mathrm{AMS}_q = \frac{1}{m} \sum_{t=n-qm+1}^{n-qm+m} \bigg( Y_t - \sum_{j=1}^{d} \Big\{ \sum_{s=1}^{K_j} \hat{\beta}_{js}^{(q)} B_{js}(U_t) \Big\} X_{tj} \bigg)^2,$$

and the $\{\hat{\beta}_{js}^{(q)}\}$ are computed from the sample $\{(Y_t, U_t, X_t),\ 1 \le t \le n - qm\}$. Cai et al. (2000) suggest using $m = [0.1n]$ and $Q = 4$.

In our simulation studies (some results will be reported in section 4), we find that AIC and AICC behave

very similarly and both perform better than BIC and MCV. We shall use AIC in our real data examples.

2.4.3 Selection of the threshold variable and significant variables

It is important to choose the number of terms in (1) and an appropriate threshold variable U in applying

functional coefficient regression models. Knowledge on physical background of the data may help. When no

prior information is available, a data-driven method is desirable.

To fix ideas, consider the functional coefficient autoregressive model

$$Y_t = a_1(Y_{t-i_0}) Y_{t-i_1} + \cdots + a_d(Y_{t-i_0}) Y_{t-i_d} + \varepsilon_t, \qquad i_0 > 0,\ 0 < i_1 < \cdots < i_d.$$

Extension to include exogenous variables is straightforward. This is a special case of (1) with $U_t = Y_{t-i_0}$ and $X_t = (X_{t1}, \dots, X_{td})^T = (Y_{t-i_1}, Y_{t-i_2}, \dots, Y_{t-i_d})^T$. We assume $\max(i_0, i_1, \dots, i_d) \le p_{\max}$ for some constant $p_{\max} > 0$. We call $i_0$ the threshold lag and $i_1, \dots, i_d$ the significant lags. To select the threshold lag and


significant lags, we minimize a criterion such as AIC (AICC, BIC or MCV can also be used). A stepwise

procedure is employed to expedite computation.

Specifically, consider the following class of candidate models

$$Y_t = \sum_{j \in S_d} a_j(Y_{t-d}) Y_{t-j} + \varepsilon_t, \qquad 1 \le d \le p_{\max},\ S_d \subset \{1, \dots, p_{\max}\}. \qquad (5)$$

For a given candidate threshold lag $d$, we decide on an optimal subset $S_d^*$ of significant lags by stepwise addition followed by stepwise deletion. In the addition stage, we add one significant lag at a time, choosing among all candidate lags not yet in the model by minimizing the MSE (mean squared error); the addition process stops once the number of selected lags equals a pre-specified number $q_{\max} \le p_{\max}$. In the deletion stage, we delete one lag at a time from the collection of lags selected in the addition stage, again by minimizing the MSE, until no lagged variable is left in the model. After the stepwise addition and deletion, we obtain a sequence of subsets of lag indices, and the one that minimizes AIC is chosen as $S_d^*$. The final model is determined by the pair $\{d, S_d^*\}$ that produces the smallest AIC. If $p_{\max}$ is not too big, we can let $q_{\max} = p_{\max}$.
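The addition and deletion stages can be sketched as follows (our simplification: candidate models are scored with constant-coefficient least squares fits rather than the full spline fit of section 2.2, the outer loop over candidate threshold lags $d$ is omitted, and all names are ours).

```python
import numpy as np

def subset_mse(y, p_max, lags):
    """MSE of a least squares fit of y_t on {y_{t-j} : j in lags}. For brevity
    the coefficients here are constants; in the paper's procedure each
    candidate model would be the spline fit of section 2.2."""
    t = np.arange(p_max, len(y))
    if not lags:
        return np.mean(y[t] ** 2)
    X = np.column_stack([y[t - j] for j in lags])
    coef, *_ = np.linalg.lstsq(X, y[t], rcond=None)
    return np.mean((y[t] - X @ coef) ** 2)

def stepwise_lags(y, p_max, q_max):
    """Stepwise addition then deletion of significant lags; returns the
    sequence of visited subsets, among which AIC would then pick S*_d."""
    selected, path = [], []
    while len(selected) < q_max:                     # addition stage
        best = min((l for l in range(1, p_max + 1) if l not in selected),
                   key=lambda l: subset_mse(y, p_max, selected + [l]))
        selected = selected + [best]
        path.append(list(selected))
    while selected:                                  # deletion stage
        drop = min(selected, key=lambda l: subset_mse(
            y, p_max, [m for m in selected if m != l]))
        selected = [m for m in selected if m != drop]
        path.append(list(selected))
    return path

rng = np.random.default_rng(1)
y = np.zeros(300)
for t in range(2, 300):                              # AR(2) data for a quick check
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + 0.1 * rng.normal()
path = stepwise_lags(y, p_max=4, q_max=4)
print(path)
```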

3 Forecasting

Forecasting is an important objective in time series analysis. Constructing multi-step-ahead forecasts for

nonlinear time series models is considerably more difficult than for linear models, and exact analytical

solutions are not available. We propose a simulation based method to make multi-step-ahead forecasts using

the functional coefficient models. One advantage of the simulation based method is that it automatically

provides interval forecasts and density forecasts in addition to point forecasts.

Specifically, consider an observed time series $\{y_1, \dots, y_T\}$ whose dynamics follow a functional coefficient model with homoscedastic errors. Suppose we want to forecast $y_{T+k}$ for some lead time $k \ge 1$. For the mean squared prediction error criterion, the optimal predictor of $y_{T+k}$ is given by the conditional mean

$$\hat{y}_{T+k} = E(y_{T+k} \mid \mathcal{F}_T),$$

where $\mathcal{F}_T$ is the information set available up to time $T$. Using the fitted model recursively, we generate $M$ series $\{y_{T+1,m}, \dots, y_{T+k,m}\}$, $1 \le m \le M$, that continue the observed series $y_1, \dots, y_T$. The error term in the

model can be generated by randomly sampling the residuals from the fitted model with replacement (this is essentially the idea of bootstrapping; see Efron & Tibshirani, 1993). The $k$-step-ahead point forecast is then given by the sample mean of the simulated values $\{y_{T+k,1}, \dots, y_{T+k,M}\}$. Before resampling, it is advisable to perform residual diagnostics to check that the residuals show no serial correlation or strong evidence of heteroscedasticity.

The simulated series can also be used to construct interval forecasts and density forecasts. To be specific, for $0 < \beta < 1$, let $y_{T+k}^{\beta}$ denote the $100\beta\%$ sample quantile of $\{y_{T+k,1}, \dots, y_{T+k,M}\}$. The $k$-step-ahead forecast interval with confidence level $1 - \alpha$ is given by $[y_{T+k}^{\alpha/2}, y_{T+k}^{1-\alpha/2}]$. The histogram of the simulated data $\{y_{T+k,1}, \dots, y_{T+k,M}\}$ gives a preliminary $k$-step-ahead density forecast; alternatively, these simulated data can be smoothed using any density estimation procedure to produce a smooth forecast density.
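The whole simulation scheme can be sketched as follows (our illustration; the one-step map `step` stands in for the fitted functional coefficient model, and the toy AR(1) map and residual set at the end are hypothetical).

```python
import numpy as np

def simulate_forecasts(y, step, residuals, k, M=5000, rng=None):
    """Simulation-based k-step-ahead forecasts: extend the observed series y
    M times using the fitted one-step map `step` plus residuals resampled
    with replacement, then summarize the k-th step across the M paths."""
    rng = np.random.default_rng(rng)
    sims = np.empty((M, k))
    for m in range(M):
        hist = list(y)
        for h in range(k):
            eps = residuals[rng.integers(len(residuals))]   # bootstrap an error
            hist.append(step(hist) + eps)                   # one recursive step
            sims[m, h] = hist[-1]
    point = sims[:, k - 1].mean()                           # point forecast
    lo, hi = np.quantile(sims[:, k - 1], [0.025, 0.975])    # 95% interval forecast
    return point, (lo, hi), sims[:, k - 1]                  # draws for a density forecast

# Toy illustration with a hypothetical fitted AR(1) map y_t = 0.5 y_{t-1}:
y = [1.0]
resid = np.array([-0.1, 0.0, 0.1])                          # centred fitted residuals
point, interval, draws = simulate_forecasts(
    y, lambda h: 0.5 * h[-1], resid, k=2, M=2000, rng=0)
print(point)  # near 0.25 = 0.5**2 * y_T
```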

The forecasting method described above can in principle be used with any nonparametric estimates as

long as one can generate random samples from the fitted model. Note that the spline estimates are specified

by several parameters in basis expansions. The explicit expression of the spline estimate makes it convenient

to generate random samples from the fitted model. Therefore, we propose to implement the above forecasting

method using the spline estimates to make multi-step-ahead forecasts.

On the other hand, direct implementation with the local linear smoothing method can be computationally

expensive. The local linear smoothing method does not offer a parsimonious explicit expression of the fitted

model. It requires re-fitting at every point where the fitted coefficient functions need to be evaluated.

This could cause computational problems, since a large number of time series must be simulated when

implementing the simulation-based forecasting method (5000 series are simulated in our implementation

with the spline method). Fast computation of the local linear estimator using binning has been proposed

in the literature (see for example, Fan & Gijbels, 1996). It is possible to adapt the binning technique to

yield fast implementation of the simulation based forecasting method based on the local linear smoothing.

However, issues remain in choosing the bin size and in quantifying the impact of the errors caused by binning,

especially the cumulative effect of such errors in the multi-step forecasting context. Further development of

the binning method in the time series context would be interesting but beyond the scope of this paper.

In actual implementation of the simulation based forecasting method, we have applied a certain truncation

rule when the value of the threshold variable to be plugged into the model is outside the range of its historical

values. If this happens at the beginning of the simulated path, we replace the value of the threshold variable

by its closest historical value; if this happens later in the path, we discard the whole simulated series. The

reason for doing this is that the spline estimates of the coefficient functions outside the range of the historical

data are not reliable. The percentage of discarded series out of the 5000 simulated series is about 10% in

analyzing the US GNP data in section 5 and none in analyzing the Dutch guilder exchange rate data in

section 6.

4 Simulations

Monte Carlo simulations have been conducted to evaluate the finite sample performance of the proposed

spline estimator and to compare the four criteria in section 2.4.2 for selecting the number of knots in spline

estimation. We find that the spline estimates with AIC or AICC selected number of knots work the best.

We present in this section results from one simulated example.

We need a criterion to gauge the performance of estimators. For an estimator $\hat{a}_j(\cdot)$ of $a_j(\cdot)$, define the square-Root of Average Squared Error (RASE) as

$$\mathrm{RASE}_j = \bigg[ n_{\mathrm{grid}}^{-1} \sum_{k=1}^{n_{\mathrm{grid}}} \{\hat{a}_j(u_k) - a_j(u_k)\}^2 \bigg]^{1/2}, \qquad (6)$$

where {uk, k = 1, ..., ngrid} are grid points chosen to be equally spaced in a certain interval within the range

of data. In order to make the RASE values comparable from simulation to simulation, we need to choose a

common interval to hold the grid points. However, because of the time series nature of the data, the range

of the data may vary substantially from simulation to simulation, so some care needs to be taken. For the

example below, the interval to hold the grid points in (6) is specified as follows: the left boundary is chosen

to be the maximum of the 2.5 percentiles (100 of them) of the 100 data sets, and the right boundary is

the minimum of the 97.5 percentiles of the data sets. Note that our interval for holding the grid points is

different from that used in Cai et al. (2000).

Example 1. Consider an EXPAR model (Haggan & Ozaki, 1981, Ozaki, 1982, Cai et al., 2000):

$$Y_t = a_1(Y_{t-1}) Y_{t-1} + a_2(Y_{t-1}) Y_{t-2} + \varepsilon_t, \qquad (7)$$

where $a_1(u) = 0.138 + (0.316 + 0.982u)e^{-3.89u^2}$, $a_2(u) = -0.437 - (0.659 + 1.260u)e^{-3.89u^2}$, and the $\{\varepsilon_t\}$ are i.i.d. $N(0, 0.2^2)$. In each simulation run, a time series of length 400 is drawn from this model. The simulation is

replicated 100 times.
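The data generating process of Example 1 is easy to reproduce (a sketch; the burn-in length and seed are our choices, not the paper's).

```python
import numpy as np

def simulate_expar(n, sigma=0.2, burn=100, rng=None):
    """Draw a series of length n from the EXPAR model (7) of Example 1."""
    rng = np.random.default_rng(rng)
    a1 = lambda u: 0.138 + (0.316 + 0.982 * u) * np.exp(-3.89 * u**2)
    a2 = lambda u: -0.437 - (0.659 + 1.260 * u) * np.exp(-3.89 * u**2)
    y = np.zeros(n + burn)
    for t in range(2, n + burn):
        u = y[t - 1]                                  # threshold variable U_t = Y_{t-1}
        y[t] = a1(u) * y[t - 1] + a2(u) * y[t - 2] + sigma * rng.normal()
    return y[burn:]                                   # discard burn-in

y = simulate_expar(400, rng=0)
print(y.shape)  # (400,)
```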

For our spline estimators, we used quadratic splines with knots equally spaced between two boundary

knots (the spline functions are defined using extrapolation outside the boundary knots). Since the range of

the data varies substantially among the simulation runs, the boundary knots were not placed at the data

range. Instead, the two boundary knots were placed correspondingly at the 0.5 and 99.5 percentiles of the

threshold variable Yt−1 for a given data set, resulting in relatively stable boundary knot positions among

the simulation runs. Moreover, such placement of boundary knots can also help to enhance the numerical

stability when a large number of knots is used. For each simulated data set, the optimal number of knots

was selected in the range from 2 to 10 using the four criteria in section 2.4.2. Two boundary knots were

included when counting the knot number. As mentioned above, different optimal knot numbers were allowed

for coefficient functions a1(·) and a2(·) in order to adapt to possible different smoothness.

The summary statistics of RASEs of a1(·) and a2(·) for the spline fits with AIC, AICC, BIC and MCV

selected knot numbers are reported in Table 1. The number of grid points in calculating RASE is 240. The

performances of AIC and AICC are similar and better than those of BIC and MCV. For the MCV criterion, we used m = 40 and Q = 4 as in Cai et al. (2000), which gave the largest mean RASEs overall. We also observed that MCV is sensitive to the choice of m and Q; other choices of m and Q were tried and did not change our conclusion. Graphical tools have also been used to evaluate the fits. We observed that, while the spline fit with the AIC- or AICC-selected knot number gives nearly unbiased estimates, BIC and MCV yield estimates with larger biases around the modes of the unknown functions. The variances of the estimates are comparable among the methods, with MCV having a slightly larger variance.


Put Table 1 here.

We have also applied the spline method to other simulated examples and results have been compared

with those of the local linear method suggested in Cai et al. (2000) (detailed results not shown here to save

space). We found that the local linear method usually gave estimates with larger bias than the spline method, especially around the modes; moreover, the spline fits are usually smoother, while the local linear fits more often show spurious features. We also found that, when the coefficient functions have different degrees of smoothness, the proposed spline method outperforms the local linear estimator, which uses only a single smoothing parameter; this demonstrates the benefit of allowing different smoothing parameters for different coefficient functions in the proposed method.

5 Application to US GNP

We consider the series {zt} of quarterly US real GNP (in 1982 dollars) from the first quarter of 1947 to the

first quarter of 1991, a total of 177 observations. This empirical example is of considerable interest in its own right, following the papers of Tiao & Tsay (1994) and Potter (1995). The data are obtained from the

Citibase database and are seasonally adjusted. Prior to analysis, we took logarithms and first differences of the data and multiplied the result by 100; that is, we analyze the transformed series $y_t = 100\log(z_t/z_{t-1})$, which has 176 data points. The goal is to model the dynamic behavior of

US GNP and make out-of-sample predictions. A traditional and simple modeling approach is to fit a linear

autoregressive (AR) model to the series $\{y_t\}$. However, the econometrics literature provides evidence of nonlinearity in US GNP and proposes nonlinear models (see, for example, Potter, 1995). Here we illustrate the proposed method by fitting a functional coefficient model and performing multi-step-ahead (out-of-sample) forecasts based on the fitted model. We observe strong evidence of the forecasting superiority of the nonlinear functional coefficient model over linear AR models, which suggests that there is nonlinearity in US GNP that cannot be captured by linear autoregressions.
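The log-difference transform used for this series is a one-liner; the GNP levels below are hypothetical, for illustration only.

```python
import math

def growth_rate(z):
    # y_t = 100 * log(z_t / z_{t-1}): quarter-over-quarter growth in percent
    return [100.0 * math.log(b / a) for a, b in zip(z, z[1:])]

gnp = [1000.0, 1010.0, 1005.0, 1020.0]   # hypothetical GNP levels
y = growth_rate(gnp)                      # 3 growth rates from 4 levels
```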

In order to compare the out-of-sample forecasting performance of different models, we split the series

{yt} into two subseries. We take the subseries (y1, . . . , y164) as the training set to choose the best AR model

and the best functional coefficient (FC) model. Then we calculate the postsample multi-step-ahead forecast

errors on the last 12 observations (y165, . . . , y176).

Using AIC to select the order of linear AR models, an AR(3) model is chosen and the (least squares)

fitted model is Yt = 0.508 + 0.342Yt−1 + 0.178Yt−2 − 0.148Yt−3. Standard diagnostics reveal that this model

fits the data well.
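A linear AR(p) model of the kind selected here can be fitted by ordinary least squares on lagged values. The following self-contained sketch (pure Python, our own illustration) recovers the coefficients of a synthetic AR(2); it is not the authors' fitting code.

```python
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting for the normal equations
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_ar(y, p):
    # least-squares fit of Y_t = c + a_1 Y_{t-1} + ... + a_p Y_{t-p}
    rows = [[1.0] + [y[t - j] for j in range(1, p + 1)] for t in range(p, len(y))]
    resp = y[p:]
    k = p + 1
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * v for r, v in zip(rows, resp)) for i in range(k)]
    return solve(XtX, Xty)   # [c, a_1, ..., a_p]

rng = random.Random(3)
y = [0.0, 0.0]
for _ in range(1000):
    y.append(0.5 + 0.3 * y[-1] + 0.2 * y[-2] + rng.gauss(0.0, 0.2))
coefs = fit_ar(y[2:], 2)
```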

For the functional coefficient model, we apply the method in section 2.4.3 to select the threshold lag and

significant lags. Consider the class of models (5) with pmax = 4. We select a model specified by the pair

{d, Sd} that minimizes AIC in a stepwise procedure. When fitting any of the models in (5) using our spline

method, we use quadratic splines with knots equally spaced between two boundary knots that are placed at


the 1% and 99% sample quantiles of the threshold variable. For computational simplicity, the same fixed

number of knots is used for all coefficient functions in the model selection process. Different numbers of

knots are tried and the results are quite stable. For the number of knots being 2 to 5, the following model

is always selected:

$$Y_t = a_1(Y_{t-2})Y_{t-1} + a_2(Y_{t-2})Y_{t-2} + \varepsilon_t. \qquad (8)$$

The AIC values for the selected models are reported in Table 2. The smallest AIC is achieved when the

number of knots equals 3. Diagnostics also suggest that the model is adequate.
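Selecting the knot number then amounts to minimizing AIC over the candidate fits. The AIC values below are those reported in Table 2; the `aic_ls` helper shows one common least-squares form of AIC and is not necessarily the paper's exact definition (which is given in its section 2.4.2, not reproduced here).

```python
import math

def aic_ls(rss, n, q):
    # one common least-squares form: log(RSS/n) + 2q/n, where q is the
    # number of estimated coefficients (an illustrative stand-in for the
    # paper's section 2.4.2 definition)
    return math.log(rss / n) + 2.0 * q / n

# AIC values by number of knots, as reported in Table 2
aic_by_knots = {2: 0.1137, 3: 0.0821, 4: 0.1059, 5: 0.0993, 6: 0.1211}
best_knots = min(aic_by_knots, key=aic_by_knots.get)
```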

Put Table 2 Here.

The above AR and FC models (with 3 knots) fitted using the data {y1, · · · , y164} are employed to make

out-of-sample forecasts for one- to 12-steps ahead. To implement the simulation based forecasting method

for the FC model (section 3), 5000 series are simulated using the fitted model (i.e., M = 5000); each series

has length 12 and starts from the last few observations in the training data. To simulate the error terms in

the FC model, we sample with replacement from the centered residuals. (Residual diagnostics do not show

evidence of heteroscedasticity.) We have also tried to use noncentered residuals to simulate the error terms

and got similar results. The (point) forecasts are compared with the actual observed values and the absolute

prediction error (APE) is computed. Table 3 shows the relative APE of forecasts for the AR(3) model and

the FC model, using the AR(3) model as a benchmark. It is clear that the FC model outperforms the AR(3)

model at all forecast steps except two (steps 2 and 4).
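The simulation-based forecasting scheme (simulate many future paths from the fitted model, resampling the centered residuals with replacement) can be sketched as follows. The constant coefficient functions and the tiny residual set are toy stand-ins for the fitted quantities, and M is reduced for speed.

```python
import random

def fc_forecast_paths(history, a1, a2, residuals, horizon=12, M=500, seed=1):
    # simulate M paths of Y_t = a1(Y_{t-2})Y_{t-1} + a2(Y_{t-2})Y_{t-2} + eps,
    # drawing eps with replacement from the centered residuals
    rng = random.Random(seed)
    mean_r = sum(residuals) / len(residuals)
    centered = [r - mean_r for r in residuals]
    paths = []
    for _ in range(M):
        y = list(history[-2:])            # start from the last observations
        for _ in range(horizon):
            u = y[-2]                     # threshold variable Y_{t-2}
            y.append(a1(u) * y[-1] + a2(u) * y[-2] + rng.choice(centered))
        paths.append(y[2:])
    return paths

# toy stand-ins for the fitted coefficient functions (illustration only)
paths = fc_forecast_paths([0.5, 0.8], lambda u: 0.3, lambda u: 0.1,
                          residuals=[-0.2, -0.1, 0.0, 0.1, 0.2], M=200)
point = [sum(p[h] for p in paths) / len(paths) for h in range(12)]
```

The point forecast at each horizon is the average over the simulated paths; the same paths feed the interval and density forecasts discussed below.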

Following a suggestion by one referee, we also fitted an artificial neural network (ANN) model (see Franses

& van Dijk, 2000, Chapter 5) to the data. The ANN model, which uses lag 1 and lag 2 variables as inputs,

has a single hidden layer with 4 hidden units and allows a direct link from the inputs to the output. The

AIC criterion was used to select the number of hidden units (from 1 to 4) and significant lags (from 1 to 4)

in the model. A stepwise procedure similar to what is described in section 2.4.3 was implemented to select

the lags. Using the bootstrap method, we computed one- to 12-steps ahead forecasts from the ANN model.

The results are summarized in Table 3. It appears that the ANN model was outperformed by the FC model

in 10 out of the 12 steps.

Put Table 3 Here.

As explained in section 3, we can use the 5000 simulated series to generate postsample multi-step-ahead

interval forecasts and density forecasts. Figure 1 plots the 95% forecast intervals along with the actual

growth rates for the 12 quarters in the postsample period. We see that all the observed values are in the

corresponding forecast intervals. Multi-step density forecasts are given in Figure 2. We can also make

probabilistic statements using the simulated series. For example, we can answer questions like, “What’s the

chance of a positive growth rate in the next quarter? in 5 quarters?” Table 4 gives the estimated probability

for a positive growth rate for each of the next 12 quarters.

Put Figure 1 Here.

Put Figure 2 Here.


Put Table 4 Here.

In the above comparison the forecast performance is evaluated only on one series. Following Tiao &

Tsay (1994), we now apply a rolling procedure to assess the average forecast performance of the linear AR

model and the nonlinear FC model over many series. To be specific, consider 60 subseries (y1, · · · , yT ), T =

105, · · · , 164. For each subseries, we fit an AR model and a FC model with the order of the AR model and

the threshold lag and significant lags of the FC model being selected using the data. The AIC is used as

the model selection criterion. When fitting the FC model, we use 3 equally spaced knots with the boundary

knots at the 1% and 99% sample quantiles of the threshold variable. The result of the model selection is

quite stable; AR(3) or AR(4) is always selected for all the subseries, and the FC model always chooses yt−2

as the threshold variable and yt−1, yt−2 as the significant variables.

Then, for each subseries, multi-step-ahead postsample forecasts up to twelve steps are computed based

on the selected models. We use 5000 simulated series when implementing the forecasting procedure given

in section 3. Mean squared prediction error (MSPE) is calculated for each forecasting step. Table 5 shows

the relative MSPE of forecasts for the AR model and the FC model, using the AR model as a benchmark.

We find gains from using the nonlinear FC model at short forecasting horizons. At 2-, 3-, 4-, 5-steps ahead,

the gains over the AR model are 13.1%, 9.8%, 8.7%, 6.2%, respectively. At longer horizons there is no clear

difference between the FC and AR models. A similar empirical forecasting exercise has been done on the

same data set in the literature using the self-exciting threshold autoregressive (SETAR) model. Clements

& Smith (1997) observe that the SETAR model yields inferior forecasts to the AR model for horizons up to

four-steps ahead. At 1-, 2-, 3-, 4-steps ahead, the ratios of MSPE of SETAR to that of AR are approximately

1.02, 1.01, 1.07, 1.02 (see Fig. 2. of Clements and Smith 1997, page 473). Table 5 also reports the results of a

similar rolling forecasting exercise for ANN models. (The ANN models are selected for each subseries using

AIC and a stepwise procedure similar to that described in section 2.4.3. In calculating the MSPEs for the

ANN models, we have deleted three series where the ANN models produce very high MSPEs.) It appears

that the FC models outperform the ANN models in 11 out of the 12 steps.
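The rolling-origin evaluation has a simple generic skeleton. Here `fit` and `forecast` are toy placeholders (a training-mean model) standing in for the AR/FC/ANN fitting and forecasting routines:

```python
def rolling_mspe(series, fit, forecast, start, end, horizon=12):
    # refit on y_1..y_T for each origin T = start..end, forecast up to
    # `horizon` steps, and average squared errors per step over all origins
    sq = [[] for _ in range(horizon)]
    for T in range(start, end + 1):
        model = fit(series[:T])
        preds = forecast(model, series[:T], horizon)
        for h in range(horizon):
            if T + h < len(series):
                sq[h].append((series[T + h] - preds[h]) ** 2)
    return [sum(e) / len(e) for e in sq if e]

# toy models: "fit" returns the training mean, forecasts repeat it
series = [float(i % 5) for i in range(130)]
mspe = rolling_mspe(series,
                    fit=lambda y: sum(y) / len(y),
                    forecast=lambda m, y, h: [m] * h,
                    start=105, end=118)
```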

Put Table 5 Here.

6 Application to Dutch Guilder-US Dollar Exchange Rate

As the second example, we consider the Dutch guilder-US dollar exchange rate, expressed as the number of

units of the Dutch guilder per US dollar. The sample period runs from 2 January 1980 until 31 December

1997. We analyze the series on a weekly basis, in which case we use observations recorded on Wednesdays.

(There are 10 observations on Wednesdays in the sample period that are not available. In these cases we use

the observation from the following Thursday, and if the Thursday observation is also unavailable, we use the

observation from the preceding Tuesday.) This series was used extensively in Franses & van Dijk (2000) to

illustrate applications of nonlinear time series models.


As an illustration of our method, we consider the following functional coefficient model:

$$Y_t = a_1(V_{t-1})Y_{t-1} + a_2(V_{t-1})Y_{t-2} + \varepsilon_t, \qquad (9)$$

where $Y_t = 100\log(P_t/P_{t-1})$ ($P_t$ denotes the exchange rate at time period $t$) is the logarithmic return measured in percent, and $V_{t-1} = \frac{1}{4}\sum_{j=1}^{4}|Y_{t-j}|$ is the average absolute return over the last 4 weeks, which can be considered a measure of volatility. This model is similar to the two-regime Smooth Transition

AR (STAR) model considered in Chapter 3 of Franses & van Dijk (2000), except that in the STAR model,

the coefficient functions have a particular parametric form. We fitted Model (9) to the data for the years

1980–1989 using the proposed spline method. Quadratic splines with three equally spaced knots were used

where the boundary knots were placed at the 1% and 99% sample quantiles of the threshold variable Vt−1.

Using the method described in section 3, we computed 1- to 5-step forecasts from the fitted model for the

years 1990–1997. The ratios of the MSPE and MedSPE (median SPE) criteria, relative to an AR(2) model

which is used as the benchmark linear model, are given in Table 6. We also present results for the STAR

model (Franses & van Dijk, 2000, Table 3.12) and an ANN model in Table 6. The ANN model, which uses

Yt−1, Yt−2, and Vt−1 as input variables, has one hidden layer with 5 hidden units and allows a direct link

from the inputs to the output. The number of hidden units was selected by AIC. The bootstrap method

was also used to generate forecasts for the STAR and ANN models. It is seen from the table that the FC

model in general outperforms the STAR model and the ANN model. Compared with the benchmark linear model in terms of MSPE and MedSPE, however, the nonlinear models offer little out-of-sample forecasting gain in this example.
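The return and volatility constructions for model (9) are straightforward. A sketch with hypothetical weekly rates; note that we compute $V_{t-1}$ as the average of the last four absolute returns (the 1/4 factor implied by the text's description).

```python
import math

def log_returns(prices):
    # Y_t = 100 * log(P_t / P_{t-1})
    return [100.0 * math.log(b / a) for a, b in zip(prices, prices[1:])]

def volatility(y, t):
    # V_{t-1} = (1/4) * sum_{j=1..4} |Y_{t-j}|: average absolute return
    # over the last four weeks, used as the threshold variable
    return sum(abs(y[t - j]) for j in range(1, 5)) / 4.0

rates = [2.0, 2.02, 1.99, 2.01, 2.00, 2.03]   # hypothetical weekly rates
y = log_returns(rates)
v = volatility(y, 4)   # threshold value used when predicting y[4]
```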

Put Table 6 Here.

We also compared the one-step-ahead density forecasts for the years 1990–1997 based on the AR, FC

and ANN models fitted using the data from the years 1980–1989. The approach we used to evaluate the

density forecasts is based on Dawid (1984) and Diebold, Gunther & Tay (1998) and was applied in, for

example, Clements & Smith (2000). Let $\{z_t\}_{t=T+1}^{T+n}$ denote the probability integral transforms of the actual realizations $\{y_t\}$ of the variables over the forecast period with respect to the forecast densities $p_t(y_t)$, that is, $z_t = \int_{-\infty}^{y_t} p_t(u)\,du$, $t = T+1, \ldots, T+n$. When the forecast density corresponds to the true predictive density (given by the data generating process), the sequence of variates $\{z_t\}_{t=T+1}^{T+n}$ is i.i.d. $U(0,1)$. Hence the idea is to evaluate the forecast density by assessing whether the $\{z_t\}$ series departs from the i.i.d. uniform

distribution. We used the Kolmogorov-Smirnov (KS) test for the i.i.d. uniform hypothesis. Let $z_t^*$ denote the inverse normal CDF transformation of $z_t$; since the $z_t$'s are i.i.d. $U(0,1)$ under the null, the $z_t^*$'s are i.i.d. standard normal variates. Berkowitz (2001) argues that more powerful tools can be applied to testing a null

of iid N(0, 1) compared to one of iid uniformity and proposes a one-degree of freedom test of independence

against a first order autoregressive structure, as well as a three-degree of freedom test of zero-mean, unit

variance and independence. These two tests were also implemented in our forecast comparison. We also

conducted the Ljung-Box test of serial correlation with 25 lags for the $\{z_t^*\}$ series and for the absolute values of $\{z_t^*\}$. The results are summarized in Table 7. Neither of Berkowitz's tests is statistically significant for any model. At the 5% significance level, the Kolmogorov-Smirnov test reveals a departure of $\{z_t\}$ from i.i.d. $U(0,1)$ for the AR model, while the Ljung-Box test on the absolute values of $\{z_t^*\}$ finds serial correlation

for the AR and ANN models. The results suggest a comparatively good performance of density forecasts

based on the FC model, at least in terms of the statistical tests considered above.
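The PIT-based evaluation can be sketched as follows: compute $z_t$ as the empirical CDF of the simulated forecast values at each realization, test uniformity with a KS statistic, and map to $z_t^*$ via the inverse normal CDF (clamped away from 0 and 1 to keep the transform finite). The Gaussian toy data are our own illustration, not the exchange-rate series.

```python
import random
from statistics import NormalDist

def pit(actuals, sims):
    # z_t: empirical CDF of the simulated forecast values at the realization
    return [sum(s <= a for s in sim) / len(sim) for a, sim in zip(actuals, sims)]

def ks_uniform(z):
    # one-sample Kolmogorov-Smirnov statistic against U(0,1)
    zs = sorted(z)
    n = len(zs)
    return max(max((i + 1) / n - v, v - i / n) for i, v in enumerate(zs))

rng = random.Random(2)
actuals = [rng.gauss(0.0, 1.0) for _ in range(200)]
sims = [[rng.gauss(0.0, 1.0) for _ in range(1000)] for _ in actuals]
z = pit(actuals, sims)
ks = ks_uniform(z)
# inverse normal CDF transform, clamped so inv_cdf stays in its open domain
z_star = [NormalDist().inv_cdf(min(max(v, 1e-6), 1 - 1e-6)) for v in z]
```

Because the forecast density here is the true data-generating density, the KS statistic should be small; serial-correlation tests on $z^*_t$ and $|z^*_t|$ would then probe independence.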

Put Table 7 Here.

7 Concluding remarks

In this paper we propose a global smoothing method using spline functions for estimation of functional

coefficient models for nonlinear time series. Splines with equally spaced knots are used and the methodology

works well in simulations and real data examples. It would be interesting to consider free-knot splines in

the current context, i.e., letting the data choose the knot positions as in MARS (Friedman, 1991, Lewis

& Stevens, 1991). See Stone, Hansen, Kooperberg & Truong (1997) and Hansen & Kooperberg (2002) for

recent reviews of the free-knot methodology. Some asymptotic theory for free-knot splines for i.i.d. data has

been developed in Stone & Huang (2002). We have focused on the case where the threshold variable Ut

is one-dimensional. When Ut is multi-dimensional, the proposed method is conceptually applicable, where

tensor product splines or general multivariate splines can be used to approximate the coefficient functions

aj ’s. However, actual implementation may require substantial further development, and models with Ut

having large dimensionality are often not practically useful due to the “curse of dimensionality”.

Our basis approximation method can be used to handle vector time series by replacing the objective

function (4) by a generalized variance of the vector-valued residuals (we thank one referee for pointing this

out to us). Specifically, suppose there are $L$ time series generated according to

$$Y_{tl} = \sum_{j=1}^{d_l} a_{lj}(U_{tl})X_{tlj} + \varepsilon_{tl}, \qquad t = 1, \ldots, n, \; l = 1, \ldots, L,$$

where the $a_{lj}(\cdot)$'s are unknown functions and the $\varepsilon_{tl}$'s are mean-0 errors. The $U_{tl}$'s and $X_{tlj}$'s could include lagged values from all series and exogenous predictors. To fit this model, we could approximate each $a_{lj}(\cdot)$ by a basis expansion $a_{lj}(u) \approx \sum_{s=1}^{L_{lj}} \beta_{ljs}B_{ljs}(u) = \mathbf{B}_{lj}^T(u)\boldsymbol\beta_{lj}$, where $\mathbf{B}_{lj} = (B_{lj1}, \ldots, B_{ljL_{lj}})^T$ and $\boldsymbol\beta_{lj} = (\beta_{lj1}, \ldots, \beta_{ljL_{lj}})^T$. Denote $\boldsymbol\beta_l = (\boldsymbol\beta_{l1}^T, \ldots, \boldsymbol\beta_{ld_l}^T)^T$, $\boldsymbol\beta = (\boldsymbol\beta_1^T, \ldots, \boldsymbol\beta_L^T)^T$, $\mathbf{U}_{tl} = (X_{tl1}\mathbf{B}_{l1}^T(U_{tl}), \ldots, X_{tld_l}\mathbf{B}_{ld_l}^T(U_{tl}))^T$, and $\mathbf{U}_t = (\mathbf{U}_{t1}^T, \ldots, \mathbf{U}_{tL}^T)^T$. We then minimize with respect to $\boldsymbol\beta$ the generalized variance

$$\ell(\boldsymbol\beta) = \det\Big[\frac{1}{n}\sum_{t=1}^{n}(\mathbf{Y}_t - \mathbf{U}_t\boldsymbol\beta)(\mathbf{Y}_t - \mathbf{U}_t\boldsymbol\beta)^T\Big]$$

and thus get an estimate of $a_{lj}(\cdot)$ through the basis expansions. We shall explore the practical applications of this multivariate extension in future research. It is worth pointing out that our basis approximation method applies generally to other structured nonparametric models for multivariate time series, such as additive models, where unknown nonlinear transformations of the predictors enter the right-hand side of the model equation additively.


Acknowledgment

We wish to thank the editor, an associate editor, two referees, and Paul Shaman for helpful comments. We

also wish to thank Zongwu Cai for providing us his S-PLUS codes for implementing the local linear method

of Cai et al. (2000).

References

Akaike, H. (1974). A new look at the statistical model identification. IEEE Trans. Automat. Control 19,

716–723.

Berkowitz, J. (2001). Testing density forecasts, with applications to risk management. J. Bus. and Econom.

Statist. 19, 465–474.

Bosq, D. (1998). Nonparametric statistics for stochastic processes: estimation and prediction, 2nd Ed.

Springer-Verlag, Berlin.

Cai, Z., Fan, J. & Yao, Q. (2000). Functional-coefficient regression models for nonlinear time series. J.

Amer. Statist. Assoc. 95, 941–956.

Cai, Z. & Masry, E. (2000). Nonparametric estimation of additive nonlinear ARX time series: Local linear

fitting and projections. Econom. Theory 16, 465–501.

Chen, R. & Liu, L.-M. (2001). Functional coefficient autoregressive models: Estimation and tests of

hypotheses. J. Time Ser. Anal. 22, 151–173.

Chen, R. & Tsay, R. S. (1993). Functional coefficient autoregressive models. J. Amer. Statist. Assoc. 88,

298–308.

Chiang, C.-T., Rice, J. & Wu, C. (2001). Smoothing spline estimation for varying coefficient models with

repeatedly measured dependent variables. J. Amer. Statist. Assoc. 96, 605–619.

Clements, M. P. & Smith, J. (1997). The performance of alternative forecasting methods for SETAR

models. International Journal of Forecasting 13, 463–475.

Clements, M. P. & Smith, J. (2000). Evaluating the forecast densities of linear and non-linear models:

applications to output growth and unemployment. Journal of Forecasting 19, 255–276.

de Boor, C. (1978). A Practical Guide to Splines. Springer, New York.

Dawid, A. P. (1984). Statistical theory: the prequential approach. J. R. Stat. Soc. Ser. A 147, 278–292.

DeVore, R. A. & Lorentz, G. G. (1993). Constructive Approximation. Springer-Verlag, Berlin Heidelberg.


Diebold, F. X., Gunther, T. A. & Tay, A. S. (1998). Evaluating density forecasts: with applications to

financial risk management. International Economic Review 39 863–883.

Efron, B. & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: Chapman and Hall.

Fan, J. & Gijbels, I. (1996). Local Polynomial Modeling and Its Applications. Chapman and Hall, New

York.

Fan, J. & Zhang, W. (1999). Statistical estimation in varying coefficient models. Ann. Statist. 27,

1491–1518.

Fan, J. & Zhang, J.-T. (2000). Two-step estimation of functional linear models with applications to

longitudinal data. J. R. Stat. Soc. Ser. B 62, 303–322.

Franses, P. H. & van Dijk, D. (2000). Nonlinear Time Series Models in Empirical Finance. Cambridge

University Press, Cambridge.

Friedman, J. H. (1991). Multivariate adaptive regression splines (with discussion). Ann. Statist. 19, 1–141.

Hansen, M. H. & Kooperberg, C. (2002). Spline adaptation in extended linear models (with discussion).

Statist. Sci. 17, 2–51.

Hastie, T. J. & Tibshirani, R. J. (1993). Varying-coefficient models (with discussion). J. R. Stat. Soc. Ser.

B 55, 757–796.

Haggan, V. & Ozaki, T. (1981). Modeling nonlinear vibrations using an amplitude-dependent autoregressive

time series model. Biometrika 68, 189–196.

Härdle, W., Lütkepohl, H. & Chen, R. (1997). A review of nonparametric time series analysis. International

Statistical Review 65, 49–72.

Hoover, D. R., Rice, J. A., Wu, C. O. & Yang, L.-P. (1998). Nonparametric smoothing estimates of

time-varying coefficient models with longitudinal data. Biometrika 85, 809–822.

Huang, J. Z. (1998). Projection estimation in multiple regression with applications to functional ANOVA

models. Ann. Statist. 26, 242–272.

Huang, J. Z. (2001). Concave extended linear modeling: A theoretical synthesis. Statist. Sinica 11,

173–197.

Huang, J. Z., Wu, C. O. & Zhou, L. (2002). Varying-coefficient models and basis function approximations

for the analysis of repeated measurements. Biometrika 89, 111–128.

Hurvich, C. M. & Tsai, C-L. (1989). Regression and Time Series Model Selection in Small Samples.

Biometrika 76, 297–307.


Lewis, P. A. W. & Stevens, J. G. (1991). Nonlinear modeling of time series using multivariate adaptive

regression splines (MARS). J. Amer. Statist. Assoc. 86, 864–877.

Ozaki, T. (1982). The statistical analysis of perturbed limit cycle processes using nonlinear time series

models. J. of Time Ser. Anal. 3, 29–41.

Potter, S. M. (1995). A Nonlinear Approach to US GNP. J. Appl. Econometrics 10, 109–125.

Schwarz, G. (1978). Estimating the dimension of a model. Ann. Statist. 6, 461–464.

Schumaker, L. L. (1981). Spline Functions: Basic Theory. Wiley, New York.

Stone, C. J. (1994). The use of polynomial splines and their tensor products in multivariate function

estimation (with discussion). Ann. Statist. 22, 118–184.

Stone, C. J., Hansen, M., Kooperberg, C. & Truong, Y. (1997). Polynomial splines and their tensor

products in extended linear modeling (with discussion). Ann. Statist. 25, 1371–1470.

Stone, C. J. & Huang, J. Z. (2002). Free knot splines in concave extended linear modeling. J. Statist.

Plann. and Inference 108, 219–253.

Tiao, G. C. & Tsay, R. S. (1994). Some Advances in Non-linear and Adaptive Modelling in Time-series.

Journal of Forecasting 13, 109–131.

Tjøstheim, D. & Auestad, B. H. (1994). Nonparametric identification of nonlinear time series: Projections.

J. Amer. Statist. Assoc. 89, 1398–1409.

Tong, H. (1990). Nonlinear Time Series: A Dynamical System Approach. Oxford University Press, Oxford.

Tong, H. (1995). A personal overview of non-linear time series analysis from a chaos perspective (with

discussion). Scand. J. of Statist. 22, 399–445.

Truong, Y. & Stone, C. J. (1992). Nonparametric function estimation involving time series. Ann. Statist.

20, 77–97.

Truong, Y. (1994). Nonparametric time series regression. Ann. Inst. Statist. Math. 46, 279–293.

Tschernig, R. & Yang, L. J. (2000). Nonparametric lag selection for time series. J. Time Ser. Anal. 21,

457–487.

Yang, L. J., Härdle, W. & Nielsen, J. P. (1999). Nonparametric autoregression with multiplicative volatility

and additive mean. J. Time Ser. Anal. 20, 579–604.

Wu, C. O., Chiang, C.-T. & Hoover, D. R. (1998). Asymptotic confidence regions for kernel smoothing of

a varying-coefficient model with longitudinal data. J. Amer. Statist. Assoc. 93, 1388–1402.


Jianhua Huang, The Wharton School, Statistics Department, 400 Jon M. Huntsman Hall, 3730 Walnut

Street, Philadelphia, PA 19104-6340 (USA), Email: [email protected]

Appendix

In the appendix, we give the proof of theorem 1. For two sequences of positive numbers $a_n$ and $b_n$, we write $a_n \lesssim b_n$ if $a_n/b_n$ is uniformly bounded, and $a_n \asymp b_n$ if $a_n \lesssim b_n$ and $b_n \lesssim a_n$.

We impose the following conditions.

Conditions:

(i) The marginal density of $U_t$ is bounded away from 0 and infinity uniformly on $\mathcal{C}$.

(ii) The eigenvalues of $E(\mathbf{X}_t\mathbf{X}_t^T \mid U_t = u)$ are uniformly bounded away from 0 and infinity for all $u \in \mathcal{C}$.

(iii) $K_n \asymp n^r$, $0 < r < 1$.

(iv) The process $\{Y_t, \mathbf{X}_t, U_t\}_{-\infty}^{\infty}$ is jointly strictly stationary. The $\alpha$-mixing coefficient $\alpha(t)$ of $\{\mathbf{X}_t, U_t, Y_t\}$ satisfies $\alpha(t) \le Ct^{-\alpha}$ for $\alpha > (5/2)r/(1-r)$.

(v) For some sufficiently large $m > 0$, $E(|X_{tj}|^m) < \infty$, $j = 1, \ldots, d$.

Condition (ii) is necessary for identification of the coefficient functions in our functional coefficient model.

Other conditions are commonly used in the literature.

From the function space notation in section 2.3, we see that the spline estimates $\hat a_j$ are uniquely determined by the function spaces $G_j$. Different sets of basis functions can be used to span the spaces $G_j$ and thus give the same estimates $\hat a_j$. We employ the B-spline basis in our proofs for convenience, but the results do not depend on the choice of basis.

For $j = 1, \ldots, d$, set $B_{js} = K_j^{1/2}N_{js}$, $s = 1, \ldots, K_j$, where the $N_{js}$ are B-splines as defined in DeVore & Lorentz (1993, chapter 5). There are positive constants $M_1$ and $M_2$ such that

$$M_1|\boldsymbol\beta_j|^2 \le \int \Big\{\sum_s \beta_{js}B_{js}(u)\Big\}^2\,du \le M_2|\boldsymbol\beta_j|^2, \qquad (10)$$

where $\boldsymbol\beta_j = (\beta_{j1}, \ldots, \beta_{jK_j})^T$ (see theorem 4.2 of chapter 5 of DeVore & Lorentz, 1993).

We need the following two lemmas, whose proofs are given after the proof of theorem 1.

Lemma 1.
$$\sup_{a_j \in G_j,\, j = 1, \ldots, d}\;\Bigg|\frac{\frac{1}{n}\sum_t\big\{\sum_j a_j(U_t)X_{tj}\big\}^2}{E\big\{\sum_j a_j(U_t)X_{tj}\big\}^2} - 1\Bigg| = o_P(1).$$

Let $\mathbf{X}$ be the $(n \times \sum_j K_j)$-matrix whose $t$-th row has entries $B_{js}(U_t)X_{tj}$, $s = 1, \ldots, K_j$, $j = 1, \ldots, d$.

Lemma 2. There is an interval $[M_1, M_2]$ with $0 < M_1 < M_2$ such that
$$P\Big\{\text{all the eigenvalues of } \tfrac{1}{n}\mathbf{X}^T\mathbf{X} \text{ fall in } [M_1, M_2]\Big\} \to 1.$$


Proof of theorem 1. Denote $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$. Then $\hat{\boldsymbol\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}$. Note that $\hat a_j(u) = \sum_s \hat\beta_{js}B_{js}(u)$, where the $\hat\beta_{js}$ are components of $\hat{\boldsymbol\beta}$. Set $\bar{\mathbf{Y}} = (\bar Y_1, \ldots, \bar Y_n)^T$ with $\bar Y_t = \sum_j a_j(U_t)X_{tj}$ and define $\bar{\boldsymbol\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\bar{\mathbf{Y}}$. Write $\bar{\boldsymbol\beta} = (\bar{\boldsymbol\beta}_1^T, \ldots, \bar{\boldsymbol\beta}_d^T)^T$, $\bar{\boldsymbol\beta}_j = (\bar\beta_{j1}, \ldots, \bar\beta_{jK_j})^T$, $j = 1, \ldots, d$, and let $\bar a_j(u) = \sum_s \bar\beta_{js}B_{js}(u)$. In the following we evaluate first the magnitude of $\|\hat a_j - \bar a_j\|_2$ and then that of $\|\bar a_j - a_j\|_2$.

Denote $E = (\varepsilon_1, \ldots, \varepsilon_n)^T$. Then $\hat{\boldsymbol\beta} - \bar{\boldsymbol\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T E$. Since $\varepsilon_t$ has mean 0 and is independent of $U_{t'}$, $X_{t'j}$, $j = 1, \ldots, d$, $t' \le t$, and of $\varepsilon_{t'}$, $t' < t$, we obtain by conditioning that
$$E\{B_{js}(U_t)X_{tj}\varepsilon_t\,B_{js}(U_{t'})X_{t'j}\varepsilon_{t'}\} = 0, \qquad t' \neq t.$$
Hence
$$E(E^T\mathbf{X}\mathbf{X}^T E) = E\Big[\sum_j\sum_s\Big\{\sum_t B_{js}(U_t)X_{tj}\varepsilon_t\Big\}^2\Big] = \sum_j\sum_s\sum_t E\{B_{js}^2(U_t)X_{tj}^2\varepsilon_t^2\} \lesssim n\sum_j K_j.$$
Therefore $E^T\mathbf{X}\mathbf{X}^T E = O_P(n\sum_j K_j)$. Using lemma 2, we have that
$$|\hat{\boldsymbol\beta} - \bar{\boldsymbol\beta}|^2 = E^T\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T E \lesssim \frac{1}{n^2}\,E^T\mathbf{X}\mathbf{X}^T E = O_P\Big(\frac{\sum_j K_j}{n}\Big),$$
which together with (10) yields
$$\sum_j \|\hat a_j - \bar a_j\|_2^2 \asymp |\hat{\boldsymbol\beta} - \bar{\boldsymbol\beta}|^2 = O_P\Big(\frac{\sum_j K_j}{n}\Big).$$

Let $a_j^* \in G_j$ be such that $\|a_j^* - a_j\|_2 = \inf_{g \in G_j}\|g - a_j\|_2 = \rho_{n,j}$. Write $a_j^*(u) = \sum_s \beta_{js}^*B_{js}(u)$, and let $\boldsymbol\beta^*$ be the $\sum_j K_j$-dimensional vector with entries $\beta_{js}^*$, $s = 1, \ldots, K_j$, $j = 1, \ldots, d$. By (10) and lemma 2, with probability tending to 1,
$$\sum_j \|\bar a_j - a_j^*\|_2^2 \asymp |\bar{\boldsymbol\beta} - \boldsymbol\beta^*|^2 \asymp \frac{1}{n}(\bar{\boldsymbol\beta} - \boldsymbol\beta^*)^T\mathbf{X}^T\mathbf{X}(\bar{\boldsymbol\beta} - \boldsymbol\beta^*).$$
Since $\mathbf{X}\bar{\boldsymbol\beta} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\bar{\mathbf{Y}}$ is an orthogonal projection,
$$\frac{1}{n}(\bar{\boldsymbol\beta} - \boldsymbol\beta^*)^T\mathbf{X}^T\mathbf{X}(\bar{\boldsymbol\beta} - \boldsymbol\beta^*) \le \frac{1}{n}\,|\bar{\mathbf{Y}} - \mathbf{X}\boldsymbol\beta^*|^2 = \frac{1}{n}\sum_t\Big[\sum_j\{a_j(U_t) - a_j^*(U_t)\}X_{tj}\Big]^2.$$
On the other hand, by conditions (i) and (ii),
$$E\Big[\sum_j\{a_j(U_t) - a_j^*(U_t)\}X_{tj}\Big]^2 \lesssim \sum_j\|a_j - a_j^*\|_2^2,$$
and thus
$$\frac{1}{n}\sum_t\Big[\sum_j\{a_j(U_t) - a_j^*(U_t)\}X_{tj}\Big]^2 = O_P\Big(\sum_j\|a_j - a_j^*\|_2^2\Big).$$
Consequently,
$$\sum_j\|\bar a_j - a_j^*\|_2^2 = O_P\Big(\sum_j\|a_j - a_j^*\|_2^2\Big) = O_P\Big(\sum_j \rho_{n,j}^2\Big).$$
The proof of theorem 1 is complete.

Proof of lemma 1. For a stationary time series Z1, . . . , Zn, . . . , denote En(Z·) = (1/n)∑

t Zt and E(Z·) =

E(Zt). For aj ∈ Gj and aj′ ∈ Gj′ , 1 ≤ j, j′ ≤ d, write aj =∑

s βjsBjs and aj′ =∑

s′ βj′s′Bj′s′ . Fix η > 0.

If

|(En − E){Bjs(U·)Bj′s′(U·)X·jX·j′}| ≤ η, s = 1, . . . ,Kj , s′ = 1, . . . ,Kj′ , (11)

then

|(En − E){aj(U·)aj′(U·)X·jX·j′}| =∣∣∣∣ ∑

s

∑s′

βjsβj′s′(En − E){Bjs(U·)Bj′s′(U·)X·jX·j′}∣∣∣∣

≤ η∑

s

∑s′

|βjs||βj′s′ |Is,s′ ,

where Is,s′ equals 1 if the supports of Bjs and Bj′s′ overlap and 0 otherwise. By the properties of the

B-splines, there is a constant C1 such that∑

s Is,s′ ≤ C1 and∑

s′ Is,s′ ≤ C1. It follows from the Cauchy-

Schwarz inequality and (10) that

∑s

∑s′

|βjs||βj′s′ |Is,s′ ≤∑

s

|βjs|{∑

s′β2

j′s′Is,s′}1/2

C1/21

≤{∑

s

β2js

}1/2{∑s

∑s′

β2j′s′Is,s′

}1/2

C1/21

≤ C1

{∑s

β2js

}1/2{∑s′

β2j′s′

}1/2

≤ C2‖aj‖2‖aj′‖2.

Thus, (11) implies that |(En − E){aj(U·)aj′(U·)X·jX·j′}| ≤ ηC2‖aj‖2‖aj′‖2. Consequently,

I = P{

supaj∈Gj ,aj′∈Gj′

|(En − E){aj(U·)aj′(U·)X·jX·j′}|‖aj‖2‖aj′‖2

> η}

≤∑

s

∑s′

Is,s′P{|(En − E){Bjs(U·)Bj′s′(U·)X·jX·j′}| >

η

C2

}.

Let Xtj = XtjI(|Xtj | ≤ nδ) for some δ > 0 and define Xtj′ similarly. Note that

P{Xtj �= Xtj for some t = 1, . . . , n} ≤∑

t

P (|Xtj | > nδ) ≤ E|Xtj |mnmδ−1

→ 0,

provided m > δ−1. Note also that∑

s

∑s′ Is,s′ � Kn � nr. Thus

I � nr maxs,s′

P{|(En − E){Bjs(U·)Bj′s′(U·)X·jX·j′}| >

η

C2

}.

Applying theorem 1.4 of Bosq (1998) (with q = nγ for 0 < γ < 1), we obtain that, for k ≥ 3,

I � nr{n1−γ exp(−nγ−(r+4δ)) + nγ+k(r+2δ)/(2k+1)−2α(1−γ)k/(2k+1)},

21

Page 22: Functional Coefficient Regression Models for …jianhua/paper/HuangShen04.pdfFunctional Coefficient Regression Models for Nonlinear Time Series: A Polynomial Spline Approach JIANHUA

where α is the parameter in the upper bound of the α-mixing coefficients (see Condition (iii)). Since

limδ→0,γ→r,k→∞{r + γ + k(r + 2δ)/(2k + 1)}/{2(1− γ)k/(2k + 1)} = (5/2)r/(1− r), for α > (5/2)r/(1− r),

we can always find δ > 0, γ > 0, and k ≥ 3 satisfying 1 > γ > r +4δ and r + γ + k(r +2δ)/(2k +1)− 2α(1−γ)k/(2k + 1) < 0. Hence, I → 0 for such a choice of δ, γ, and k.

Observe that, if $|(E_n - E)\{a_j(U_\cdot) a_{j'}(U_\cdot) X_{\cdot j} X_{\cdot j'}\}| \le \eta \|a_j\|_2 \|a_{j'}\|_2$, $j, j' = 1, \dots, d$, then
$$
\Big| (E_n - E)\Big\{ \sum_j a_j(U_\cdot) X_{\cdot j} \Big\}^2 \Big|
\le \eta \sum_j \sum_{j'} \|a_j\|_2 \|a_{j'}\|_2
= \eta \Big\{ \sum_j \|a_j\|_2 \Big\}^2
\lesssim \eta \sum_j \|a_j\|_2^2.
$$
Therefore, the desired result follows from the fact that $I \to 0$.

Proof of lemma 2. Let $\boldsymbol\beta = (\boldsymbol\beta_1^T, \dots, \boldsymbol\beta_d^T)^T$, $\boldsymbol\beta_j = (\beta_{j1}, \dots, \beta_{jK_j})^T$, $j = 1, \dots, d$. It follows from lemma 1 that
$$
\frac{1}{n} \boldsymbol\beta^T \mathbf X^T \mathbf X \boldsymbol\beta
= \frac{1}{n} \sum_t \Big\{ \sum_j a_j(U_t; \boldsymbol\beta) X_{tj} \Big\}^2
\asymp E\Big\{ \sum_j a_j(U_t; \boldsymbol\beta) X_{tj} \Big\}^2,
$$
where $a_j(u; \boldsymbol\beta) = \sum_s \beta_{js} B_{js}(u)$. By (2), Conditions (i) and (ii), and (10),
$$
E\Big\{ \sum_j a_j(U_t; \boldsymbol\beta) X_{tj} \Big\}^2 \asymp \sum_j \|a_j(\cdot; \boldsymbol\beta)\|_2^2 \asymp |\boldsymbol\beta|^2.
$$
Thus, $(1/n) \boldsymbol\beta^T \mathbf X^T \mathbf X \boldsymbol\beta \asymp |\boldsymbol\beta|^2$ holds uniformly for all $\boldsymbol\beta$, which yields the desired result.
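Lemma 2 says the eigenvalues of $(1/n)\mathbf X^T \mathbf X$ are bounded away from 0 and $\infty$, so the spline least-squares problem is well conditioned. A minimal numerical sketch of this norm equivalence, under simplifying assumptions not from the paper: degree-0 B-splines (scaled bin indicators), a deterministic uniform design for $U_t$, and the covariate $X_{t1} \equiv 1$:

```python
import numpy as np

# Degree-0 B-splines on [0, 1): K indicator functions on equal bins,
# scaled by sqrt(K) so each basis function has unit L2 norm.
n, K = 1000, 10
U = (np.arange(n) + 0.5) / n                # deterministic uniform design
bins = np.minimum((U * K).astype(int), K - 1)
X = np.zeros((n, K))
X[np.arange(n), bins] = np.sqrt(K)          # normalized indicator basis

G = X.T @ X / n                             # (1/n) X^T X
eigs = np.linalg.eigvalsh(G)
print(eigs.min(), eigs.max())               # both близко -> see test
```

With this balanced design each bin receives exactly $n/K$ points, so $(1/n)\mathbf X^T \mathbf X$ is the identity and $(1/n)\boldsymbol\beta^T \mathbf X^T \mathbf X \boldsymbol\beta = |\boldsymbol\beta|^2$ exactly; with random $U_t$ the eigenvalues would only be bounded above and below with high probability, as in the lemma.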

Table 1: Mean (SE) of RASEs for the Spline Estimates of a1 and a2 with Different Criteria for Selecting the Number of Knots.

Function   AIC             AICC            BIC             MCV
a1         0.077 (0.0021)  0.077 (0.0021)  0.086 (0.0021)  0.098 (0.0028)
a2         0.072 (0.0019)  0.072 (0.0018)  0.080 (0.0021)  0.080 (0.0026)

Table 2: Results on Model Selection for US GNP: Functional Coefficient Models.

# of knots   AIC      d   Sd
2            0.1137   2   1,2
3            0.0821   2   1,2
4            0.1059   2   1,2
5            0.0993   2   1,2
6            0.1211   1   1,4


Table 3: Absolute Prediction Error (APE) of Multi-step-ahead Forecasts: One Series. The first row is the APE of the AR(3) model; the second and third rows are the relative APEs of the FC and ANN models, using the AR(3) model as benchmark.

forecast horizon  1      2      3      4      5      6      7      8      9      10     11     12
AR(3) APE         0.177  0.208  0.134  0.113  0.388  0.372  0.721  0.388  0.702  0.451  1.208  1.459
FC/AR(3)          0.821  1.240  0.969  1.542  0.643  0.565  0.711  0.468  0.697  0.510  0.801  0.844
ANN/AR(3)         0.104  1.423  1.044  0.605  1.046  0.989  1.016  1.018  1.040  1.113  1.074  1.131

Table 4: Estimated Probability of a Positive Growth Rate.

period       1     2     3     4     5     6     7     8     9     10    11    12
probability  0.89  0.84  0.78  0.76  0.73  0.72  0.72  0.70  0.71  0.70  0.69  0.69

Table 5: Mean Squared Prediction Error (MSPE) of Multi-step Forecasts: 60 Series. The first row is the MSPE of the AR model; the second and third rows are the relative MSPEs of the FC and ANN models, using the AR model as benchmark.

forecast horizon  1      2      3      4      5      6      7      8      9      10     11     12
AR MSPE           1.064  1.181  1.254  1.229  1.224  1.145  1.045  0.891  0.896  0.890  0.905  0.934
FC/AR             0.999  0.869  0.902  0.913  0.938  0.955  1.001  1.024  1.013  1.005  0.991  0.963
ANN/AR            1.114  0.970  0.976  0.966  1.000  1.046  0.775  1.138  1.228  1.230  1.555  2.866

Table 6: Forecast evaluation of models for weekly returns on the Dutch guilder exchange rate. Reported are the relative MSPEs and MedSPEs of the FC, STAR and ANN models, using an AR(2) model as benchmark.

                  MSPE                               MedSPE
forecast horizon  1      2      3      4      5      1      2      3      4      5
FC/AR             1.034  1.012  0.997  0.997  1.000  1.024  1.062  0.988  0.975  0.978
STAR/AR           1.032  1.025  1.023  1.009  0.999  0.985  1.099  1.056  1.090  1.043
ANN/AR            1.535  1.276  1.107  1.037  0.999  1.183  1.115  1.105  1.091  1.025


Table 7: Density forecast evaluation of models for weekly returns on the Dutch guilder exchange rate. Reported are the p-values for five tests. For the Berkowitz tests, "Indep." tests for independence and "Joint" for zero mean, unit variance and independence. For the Ljung-Box tests, "Orig." means the test is performed on the original z*_t series and "Abs." means on the absolute values of z*_t.

       Kolmogorov-Smirnov  Berkowitz        Ljung-Box
                           Indep.   Joint   Orig.   Abs.
AR     0.016               0.881    0.550   0.168   1.84e-4
FC     0.058               0.955    0.784   0.118   0.057
ANN    0.181               0.588    1.000   0.130   3.46e-10


Figure 1: One- to 12-step-ahead Interval Forecasts for US GNP. The solid line is the actual growth rate of the last 12 quarters; the dotted lines are the lower and upper limits of the bootstrapped 95% multi-step-ahead forecast intervals obtained from the fitted FC model.


[Figure 2: twelve histogram panels, titled Step 1 through Step 12; each panel's horizontal axis is labelled "Prediction".]

Figure 2: One- to 12-step-ahead Density Forecasts for US GNP. The vertical highlighted lines are the actual growth rates; the histograms are the multi-step-ahead density forecasts based on the fitted FC model.
