bootstrapping stationary invertible varma models in echelon … · 2011-04-20 · is rather...

74
Bootstrapping stationary invertible VARMA models in echelon form: A simulation evidence 1 Tarek Jouini 2 Universit´ e de Montr´ eal September 2000 Compiled: February 14, 2008 1 This work was supported by the Canada Research Chair Program (Chair in Econometrics, Universit´ e de Montr´ eal), the Alexander-von-Humboldt Foundation (Germany), the Canadian Network of Centres of Excellence [program on Mathematics of Information Technology and Complex Systems (MITACS)], the Canada Council for the Arts (Killam Fellowship), the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, and the Fonds FCAR (Government of Qu´ ebec). 2 Centre interuniversitaire de recherche en analyse des organisations (CIRANO), Centre interuniversitaire de recherche en ´ economie quantitative (CIREQ), and D´ epartement de sciences ´ economiques, Universit´ e de Montr´ eal. Mailing address: epartement de sciences ´ economiques, Universit´ e de Montr´ eal, C.P. 6128 succursale Centre-ville, Montr´ eal, Qu´ ebec, Canada H3C 3J7. TEL.: (514) 343-6111, ext. 1814. E-mail: [email protected]

Upload: others

Post on 27-Apr-2020

7 views

Category:

Documents


0 download

TRANSCRIPT

Page 1: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Bootstrapping stationary invertible VARMA models inechelon form: A simulation evidence1

Tarek Jouini2

Universite de Montreal

September 2000Compiled: February 14, 2008

1 This work was supported by the Canada Research Chair Program (Chair in Econometrics,Universite de Montreal), the Alexander-von-Humboldt Foundation (Germany), the Canadian Networkof Centres of Excellence [program onMathematics of Information Technology and Complex Systems(MITACS)], the Canada Council for the Arts (Killam Fellowship), the Natural Sciences and EngineeringResearch Council of Canada, the Social Sciences and Humanities Research Council of Canada, and theFonds FCAR (Government of Quebec).

2 Centre interuniversitaire de recherche en analyse des organisations (CIRANO), Centreinteruniversitaire de recherche eneconomie quantitative (CIREQ), and Departement de scienceseconomiques, Universite de Montreal. Mailing address: Departement de scienceseconomiques,Universite de Montreal, C.P. 6128 succursale Centre-ville, Montreal, Quebec, Canada H3C 3J7. TEL.:(514) 343-6111, ext. 1814. E-mail: [email protected]

Page 2: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Abstract

In this paper we propose bootstrapping Hannan and Rissanen (1982) estimators instationary invertible VARMA models, with known order(p, q). Although we considerbootstrapping such models under the echelon form parameterization, the results we deriveherein remain valid to other alternative identification issues. In particular, we shall exploitthe theoretical developments stated in Dufour and Jouini (2005) and Dufour and Jouini(2008) to establish the asymptotic validity of bootstrapping such models for parametric andnonparametric bootstrap methods to approximating the joint distribution of the echelon formVARMA parameter estimates. The finite sample accuracy of our proposed method is evaluatedthrough a Monte Carlo (MC) simulation, specifically, by further studying the echelon formVARMA parameter confidence interval empirical coverage rates.

Keywords: Echelon form; VARMA models; parametric bootstrap; nonparametricbootstrap; MC simulation; confidence interval; empirical coverage rates.

i

Page 3: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Contents

1 Introduction 1

2 Framework 32.1 Echelon form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42.2 Regularity assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

3 Bootstrapping VARMA models 73.1 Resampling scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83.2 Asymptotic validity of bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . 11

4 Simulation study 20

5 Conclusion 24

A Appendix: Proofs 25

B Appendix: Simulation model design 50

ii

Page 4: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

1 Introduction

In recent years, bootstrap techniques has been widely recommended for applied econometricresearch [see Hall (1992) and Efron and Tibshirani (1993)]. New important theoretical andempirical developments has been established for bootstrapping time series. Several newparametric and nonparametric bootstrap procedures has been proposed for univariate andmultivariate time series analysis. In particular, bootstrapping method provides a popular way toperform more reliable finite sample inference than conventional asymptotics. Especially, wheninference is conducted with statistics whose distributions are either nontractable, nonstandard orgoverned by nuisance parameters. Further, the use of such techniques tends to provide powerfultools for establishing reliable small-sample inference in VAR and VARMA models. Since insuch models, nuisance parameter distributional problems may exist even in moderate and largesample sizes.

However, whether parametric or nonparametric method should be used depends on theparametric structure of the data generating process (DGP), the stationarity feature of the data,as well as the i.i.d. assumption made (or not) about the innovations characterizing such DGP.For instance, in the case the parametric structure of the DGP is assumed to be known, theobvious way is to incorporate such that structure into the resampling algorithm. Empiricalevidence has shown that such a way of proceeding preserves the nature of dependence betweenthe data and yields a good approximation to the joint distribution of the parameter estimates[see Efron and Tibshirani (1993)]. In contrast, when the parametric structure of the processis rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” ofKunsch (1989) and the stationary bootstrap “blocks of blocks” of Politis and Romano (1992),as many other procedures, look more adequate, especially, with highly dependent data.

In the univariate case, Stoffer and Wall (1991) have proposed the bootstrap technique as amethod for assessing the precision of Gaussian maximum likelihood estimates of the linearstate-space models parameters. They have shown that bootstrapping innovations within atime-invariant stable system, provides asymptotically consistent standard errors. For highlydependent data Berkowitz and Kilian (2000) have pointed out that the success of the “blockwisebootstrap” procedure is sensitive to the block size considered. They have proposed a data-basedblock size selection procedure for an AR

(2)

process, as an example, and discussed the choiceof bandwidth selection criteria for frequency domain bootstraps. In addition to Caner andKilian (1999) whom stressed the size distortions caused by conventional asymptotic criticalvalues for stationary tests of long-run purchasing power parity under the recent float, Inoueand Kilian (1999a) have shown that in many cases of interest for applied work, the standardbootstrap algorithm for unrestricted autoregressions remains valid for processes with exact unitroots; no pre-tests are required, at least asymptotically, and applied researchers may proceedas in the stationary case. They also shown, from a simulation study, that in many, but notall circumstances, the bootstrap distribution closely approximates both the exact finite-sampledistribution for the integrated and the near-integrated processes.

To identifying univariate ARMA(p, q

)models using the vector autocorrelation approach,

Paparoditis and Streitberg (1991) have established the asymptotic validity of the bootstrappingthe vector autocorrelation estimates. Similarly, Kreiss and Franke (1992) have developedan asymptotic theory for bootstrapping stationary stochastic processes of autoregressive

1

Page 5: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

moving-averageARMA type, with known order(p, q

). For this purpose they gave a proof

of the asymptotic validity of the nonparametric bootstrap proposal applied to M-estimators.Further, Berkowitz, Birgean and Kilian (1999) gave a benchmark for the relative accuracyof several nonparametric resampling algorithms based on ARMA representations of fourmacroeconomic time series. For each algorithm such as the “autoregressive sieve bootstrap”,the “blocks of blocks bootstrap”, the “stationary bootstrap”, the “Pre-Withened block bootstrap”and the “Cholesky-factor bootstrap”, the effective coverage accuracy of impulse response andspectral density bootstrap confidence intervals for standard sample size are evaluated. Theyfound that the autoregressive sieve approach based on the encompassing model is the mostaccurate. Moreover, they have stressed the importance of the choice of the lag order for theaccuracy of the AR sieve bootstrap method.

In the multivariate case, Hall and Horowitz (1996) have shown the ability of thenonparametric bootstrap method, namely the “blocks of blocks bootstrap”, in improving thecritical values of tests based on generalized method of moments (GMM) such as the test ofoveridentifying restrictions (J test) andt tests. In recent work adapting bootstrap techniques tostationary invertible VARMA models with finite order

(p, q

)and i.i.d. innovations, Paparoditis

(1996) has shown, based on Lewis and Reinsel (1985) asymptotic distributional results forthe Yule-Walker estimates corresponding to a truncated VAR

(∞), the asymptotic validity of

parametric and nonparametric bootstrapping methods to approximating the joint distribution ofthe growing set of estimators in the autoregressive as well as the moving average representationsof the process. Similarly, using Paparodits’s results, Inoue and Kilian (1999) have establishedthe asymptotic validity of bootstrapping for inference on orthogonalized impulse responses andvariance decomposition in VAR(∞) models. Kilian and Demiroglu (1997) have proposed theuse of bootstrap critical values to improve small-sample performance of the Bera-Jarque testfor normality in autoregressions. They have stressed the ability of such a tool in eliminatingsize-distortion tests for a wide range of univariate and multivariate finite order AR models. Theyhave also highlighted the higher power of the bootstrap test against a variety of non-Gaussianalternatives including GARCH innovations.

The small-sample confidence intervals properties for impulse response functions has beenconsidered in several studies. For instance, through Monte Carlo (MC) simulation study using awide range of bivariate models, Kilian (1998) have shown that in small samples, bias-correctedbootstrap intervals tend to be more accurate than the delta method intervals, standard bootstrapintervals and MC integration intervals, and hence, may imply economic interpretations of thedata that are substantively different from standard methods. He has also underlined the fact thatsuch procedure holds for VAR models estimated in levels, as deviations from a linear trend andin first difference, as well as random walk processes and cointegrated processes estimated inlevels. Moreover, Kilian (1998a) have pointed out from MC evidence the deterioration of thecoverage accuracy of the impulse response confidence intervals when the structural VAR modelsare characterized by fat-tailed or skewed innovations compared to the Gaussian ones. In suchdepartures from normality he has stressed the superiority of the nonparametric bootstrap methodover the parametric one in achieving accuracy, that is allowing levels close to the correspondingnominal level. From another work, Kilian (1998b) have shown that percentile-t intervalsbased on asymptotic pivots tend to behave erratically in small samples and may be much lessaccurate than bootstrap intervals based on nonpivotal statistics. His argument is that in finite

2

Page 6: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

samples these statistics are not even approximately pivotal and as a result, Edgeworth expansionarguments for pivotal statistics do not apply. This finding was supported by a simulationstudy showing that bootstrap intervals can be very accurate in the absence of asymptoticrefinements, and that there are a huge difference in coverage accuracy among asymptoticallyequivalent intervals that cannot be explained by Edgeworth expansion arguments. In recentwork using Bonferroni prediction interval, Kim (2001) has shown, from a MC simulationusing a stationary, unit root, and near-unit-root multivariate autoregressive processes, that thebootstrap-after-bootstrap method provides a superior small-sample alternative to asymptotic andstandard bootstrap prediction intervals. In testing linear restrictions on cointegrating vectors inmaximum likelihood cointegration analysis Gredenhoff and Jacobson (2001) have demonstratedthat a parametric bootstrap likelihood ratio test for cointegration frequently results in a nearlyexactα-level test. Moreover, using an extensive experimental design based on empirical modelsthey have outlined the complexity of the model in affecting the degree of size distortion withgiven sample size.

Since VAR models still receive increasing attention in building inference using bootstraptechniques with considerable additional refinements and given the fact that such models arespecial case of the VARMA models, it seems more convenient to consider developing newmethods for bootstrapping the latter as they are more pasimonious. Which tends to deliver moreaccurate inference (tests and confidence sets) and more precise forecasts than VAR models,as expected. This also can show up even more whenever newer additional refinements intobootstrapping could be available and easily adapted to VARMA models. Furthermore, it isworthnoting that since bootstrapping techniques are simulation-based techniques, it would bebetter for the VARMA case to conceive developing such procedures using simple estimationmethods rather than nonlinear ones such as the maximum likelihood (ML) method, however.Indeed, the latter is too heavy to implement and often may not converge, especially in persistent,highly parameterized or big systems. To that end, we propose bootstrapping VARMA modelsusing the simple linear estimation methods developed in Dufour and Jouini (2005) and Dufourand Jouini (2008) as they are considered as important extensions of the Hannan and Rissanen(1982) recursive linear estimation method to the multivariate case of ARMA models, under theechelon form identification.

The paper proceeds as follows. In section 2 we consider VARMA models under the echelonform identification issue which constitutes the more parsimonious representation that ensuresthe unique parametrization of such models. Section 3 establishes the asymptotic validity ofbootstrapping stationary invertible VARMA models for both parametric and nonparametriccases as well, using recursive linear estimation methods provided in Dufour and Jouini (2005)and Dufour and Jouini (2008). The behavior of the method in finite sample is then evaluatedfrom a simulation study in section 4. Finally, section 5 concludes. The proofs of lemmas,propositions and theorems derived in the sequel are given in appendix A.

2 Framework

In this paper, we consider ak-dimensional stationary invertible stochastic processes ofautoregressive moving-average (VARMA) type. We first present these models under the echelon

3

Page 7: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

form identification issue where we assume the dynamic indices or the Kronecker indicesas given, by means known or consistently estimated. Then formulate the basic regularityassumptions we shall consider in the sequel.

2.1 Echelon form

Letyt : t ∈ Z

be ak-dimensional random vector process following a stationary invertible

VARMA model. Then, for any vector of Kronecker indices say(p1, . . . , pk

)′, where thepi’s(

i = 1, ....k)

are non-negative integers, the echelon form1 VARMA representation is given by

Φ(L

)yt = µΦ + Θ

(L

)ut , (2.1)

for all t, whereyt =(y1,t, . . . , yk,t

)′, Φ

(L

)= Φ0−

∑pi=1 ΦiL

i andΘ(L

)= Θ0+

∑pj=1 ΘjL

j ,with L denoting the back shift operator,p = max

(p1, . . . , pk

), and Φi and Θj are the

autoregressive and moving averagek × k coefficient matrices, respectively, whereΘ0 = Φ0,with Φ0 be a lower-triangular matrix whose diagonal elements are all equal to one. Furthermore,µΦ = Φ

(1)µy with µy = E

(yt

)andΦ

(1)

= Φ0 −∑p

i=1 Φi, andut : t ∈ Z

is a (secondorder) white noise process, i.e., WN

(0, Σu

), whereΣu is ak × k positive definite symmetric

matrix. In the above representation (2.1) the operatorsφlm

(L

)andθlm

(L

)on any given rowl

of Φ(L

)andΘ

(L

)have the same degreepl and must satisfy

φlm

(L

)= 1−

p∑i=1

φll,iLi if l = m,

= −p∑

i=pl−plm+1φlm,iL

i if l 6= m,(2.2)

θlm

(L

)=

p∑

j=0

θlm,jLj , with Θ0 = Φ0, (2.3)

for l, m = 1, . . . , k, where

plm = min(pl + 1, pm

)for l ≥ m,

= min(pl, pm

)for l < m,

(2.4)

withΦ

(L

)=

[φlm

(L

)]l,m=1,...,k

andΘ(L

)=

[θlm

(L

)]l,m=1,...,k

. (2.5)

This echelon form parameterization of VARMA models ensures the uniqueness of left-coprimeoperatorsΦ

(L

)andΘ

(L

). Moreover, the implied stationarity and invertibility constraints in

(2.1) are such thatdetΦ

(z) 6= 0 anddet

Θ

(z) 6= 0 for all |z| ≤ 1, wherez stays for a

1Among other identifiable parameterizations, such as the final equations form, the echelon form has beenpreferred for parsimony and gain efficiency criteria. For proofs of the uniqueness of the echelon form and otheridentification conditions, the reader should consult to Hannan (1969, 1970, 1976, 1979), Deistler and Hannan (1981)and Hannan and Deistler (1988) and Lutkepohl (1991, Chapter 7).

4

Page 8: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

complex number,Φ(z)

= Φ0 −∑p

i=1 Φizi andΘ

(z)

= Θ0 +∑p

j=1 Θjzj . Further, the model

(2.1) has the following infinite autoregressive and moving average representations:

yt = µΠ +∞∑

τ=1

Πτyt−τ + ut , (2.6)

yt = µy +∞∑

v=0

Ψvut−v , (2.7)

for all t = 1, . . . , T , where

Π(z)

= Ik −∞∑

τ=1

Πτzτ andΨ

(z)

= Ik +∞∑

v=1

Ψvzv , (2.8)

with Π(z)

= Θ(z)−1Φ

(z), Ψ

(z)

= Φ(z)−1Θ

(z), det

Π

(z) 6= 0 anddet

Ψ

(z) 6= 0

for all |z| ≤ 1, andµΠ = Π(1)µy with Π

(1)

= Ik −∑∞

τ=1 Πτ . Moreover, we can findreal constantsC > 0 and ρ ∈ (

0, 1)

such that∥∥Πτ

∥∥ ≤ Cρτ and∥∥Ψv

∥∥ ≤ Cρv, hence∑∞τ=1

∥∥Πτ

∥∥ < ∞ and∑∞

v=1

∥∥Ψv

∥∥ < ∞,where‖.‖ stays for the Schur’s norm [see Horn

and Johnson (1985, section 5.6)],i.e.∥∥M

∥∥2 = tr[M ′M

]for any matrixM . Let also

Λ(η)(

z)

=∞∑

τ=0

Λτ

(η)zτ = Θ

(z)−1

, (2.9)

then by invertibility∥∥Λτ

(η)∥∥ ≤ Cρτ so that

∑∞τ=0

∥∥Λτ

(η)∥∥ < ∞. Now, setvt = yt − ut.

Then

vt = Φ−10

[µΦ +

p∑

i=1

Φiyt−i +p∑

j=1

Θjut−j

], (2.10)

where it can be easily seen that,vt is uncorrelated with the error termut. Therefore (2.1) canbe rewritten as

yt = µΦ +(Ik − Φ0

)vt +

p∑

i=1

Φiyt−i +p∑

j=1

Θjut−j + ut. (2.11)

Setting

β = vec[µΦ, Ik − Φ0, Φ1, . . . ,Φp, Θ1, . . . ,Θp

], (2.12)

Xt =[1, v′t, y

′t−1, . . . , y

′t−p, u

′t−1, . . . , u

′t−p

]′(2.13)

whereβ andXt are respectively(k2h + k

) × 1 and(kh + 1

) × 1 vectors, withh = 2p + 1.Then under the echelon form restrictions (2.1) through (2.4), the representation (2.11) can bealso written as

yt =[X ′

t ⊗ Ik

]Rη + ut, (2.14)

whereR is a(k2h + k

) × rp full rank columns matrix formed byrp-distinct selected vectorsfrom the identity matrix of order

(k2h + k

)such thatR′R = Irp , β = Rη with η is arp × 1

5

Page 9: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

vector of free varying parameters andrp <(k2h+k

). It is straightforward to note here that the

echelon form identification stated above, ensures that thek × rp matrixR′[Xt ⊗ Ik

]has a non

singular covariance matrix. Hence

rankR′[ΓX ⊗ Ik

]R

= rp (2.15)

whereΓX = E(XtX

′t

). Now, settingy =

[y′1, . . . , y

′T

]′, X =

[X1, . . . , XT

]and u =[

u′1, . . . , u′T

]′, then the corresponding stacked form of (2.14) associated with the hole sample is

such thaty =

[X ′ ⊗ Ik

]Rη + u, (2.16)

where[X ′⊗Ik

]R is akT ×rp matrix. In the sequel, we shall assume that the process is regular

with continuous distribution. So the assumption

rank[

X ′ ⊗ Ik

]R

= rp with probability1 (2.17)

must hold.

2.2 Regularity assumptions

Further assumptions on the innovation process and the truncation lag of the first stepautoregression are needed in order to establish the consistency as well as the asymptoticdistribution of the linear estimators defined below. For that we state the assumptions we shallconsider in the sequel.

Assumption 2.1 The vectorsut, t ∈ Z, are independent and identically distributed(i.i.d.) withmean zero, covariance matrixΣu and continuous distribution.

Assumption 2.2 There is a finite constantm4 such that, for all1 ≤ i, j, r, s ≤ k,

E |ui,tuj,tur,tus,t| ≤ m4 < ∞ , for all t.

Assumption 2.3 nT is a function ofT such that

nT →∞ andn2T /T → 0 asT →∞ (2.18)

and, for somec > 0 and0 < δ < 1/2,

nT ≥ cT δ for T sufficiently large. (2.19)

Assumption 2.4 The coefficients of the autoregressive (2.6) representation

n1/2T

∞∑

τ=nT +1

‖Πτ‖ → 0 asT →∞ . (2.20)

6

Page 10: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Assumption 2.5 nT is a function ofT such that

nT →∞ andn3T /T → 0 asT →∞ (2.21)

and, for somec > 0 and0 < δ < 1/3,

nT ≥ cT δ for T sufficiently large. (2.22)

Assumption2.1 means that we have a strong VARMA process, while Assumption2.2 onmoments of order four will ensure the empirical autocovariances of the process have finitevariances. Assumption2.4 characterizes the rate of decay of autoregressive coefficients inrelation withnT . Moreover, assumption2.3 implies thatnT goes to infinity at a rate slowerthanT 1/3; for example, the assumption is satisfied ifnT = cT δ with 0 < δ ≤ δ < 1/3.

Although the above assumptions are sufficient to show consistency of the two-step linearestimators, another assumption is need to show that the asymptotic normality of the distributionwhich is shown, also, unaffected by the use of estimated innovations.

Assumption 2.6 nT is a function ofT such that

nT →∞ andn4T /T → 0 asT →∞ . (2.23)

The latter assumption means thatnT goes to infinity at a rate slower thanT 1/4; for example,it is satisfied ifnT = cT δ with 0 < δ ≤ δ < 1/4. It is easy to see that the condition (2.23)entails (2.18). Finally, it is worthwhile to note that (2.20) holds for VARMA processes whenevernT = cT δ with c > 0 andδ > 0, i.e.

T δ∞∑

τ=nT +1

‖Πτ‖ → 0 asT →∞ , for all δ > 0 . (2.24)

This is easy to see from the exponential decay property of VARMA processes.

3 Bootstrapping VARMA models

In this section we consider bootstrapping stationary invertible VARMA models in echelon formusing recursive linear estimation procedure. Our approach remains valid to other alternativeidentification issues, however. In particular, we shall use the bootstrap based residualsprocedure. Such technique consists on generating pseudo-observations

y∗1, ..., y

∗T

, also called

bootstrap replicates, by first resampling ani.i.d. sampleu∗t : t = −b + 1, . . . T

, with b ∈ N(

b ≥ p), according to the assumptions made about the theoretical distributionF of the true

residualsut : t ∈ Z

. For instance, in lack of any prior assumptions onF , the seriesu∗t can beobtained as ani.i.d. sample drawn from the empirical distribution functionFT that puts massT−1 for each of the centred estimated residualsut. In revanche, whenF is assumed to belong toa finite dimensional parametric family of distributions, i.e.,F ∈ F (., η) : η ∈ ℵ ⊂ Rc, thenu∗t can be generated as ani.i.d. sample from the parametric distributionFT (; , η) whereη staysfor a consistent estimator ofη.

7

Page 11: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

3.1 Resampling scheme

Now, let ut

(η)

be an estimator of the true residualsut, whereη is a√

T -consistent estimatorof η. Also, assume that such an estimator is the third-step linear estimator described in Dufourand Jouini (2008), then it can be expressed as

ut

(η)

=t−1∑

τ=0

Λτ

(η)[

Φ0yt−τ −p∑

i=1

Φiyt−i−τ − µΦ

], t = 1, . . . , T (3.1)

or equivalently, as

ut

(η)

= yt − Φ−10

[µΦ +

p∑

i=1

Φiyt−i +p∑

j=1

Θjut−j

(η)]

, (3.2)

initiating withut

(η)

= 0 for t ≤ p. Also, setting the centered residualsut

(η)

= ut

(η)−uT

(η),

with uT

(η)

= T−1∑T

t=1 ut

(η). Then we have the following two theorems.

Theorem 3.1 LetX1 andX2 be twok-dimensional random vectors such thatX1 ∼ FT

(η)

and

X2 ∼ FT , whereFT

(η)

andFT are the empirical distributions ofut

(η)

andut, t = 1, . . . , T ,respectively. Then

E∥∥X1 −X2

∥∥2 = Op

(T−1

).

Let G = G(B)be the probability measure spaceG on theσ−algebraB such that

∫ ∥∥X∥∥2

dG < ∞ (3.3)

for any real valued random vectorX. Then the distance between any two distributionsG1 andG2 (∈ G), measured by the Mallow’s metric is defined as

d2

(G1, G2

)= inf

[E(X1 −X2

)2] 1

2, (3.4)

where(X ′

1, X′2

)′are real valued random vectors, andG1 = L(

X1

)andG2 = L(

X2

)are

the probability measures ofX1 andX2, respectively. Now, letF be the theoretical or truedistribution ofut. Then using theorem 2.3 and lemma 8.4 of Bickel and Freedman (1981) westate the following result.

Theorem 3.2 Sinceη is a√

T -consistent estimator ofη we show that

d2

(FT

(η), F

)−→ 0 in probability.

Note that by lemma 8.3 of Bickel and Freedman (1981) the convergence in thed2 metricimplies also the convergence of second-order moments. That isΣu

(η)

converges toΣu, inprobability, where

Σu

(η)

=1T

T∑

t=1

ut

(η)ut

(η)′

. (3.5)

8

Page 12: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Therefore, theorem3.2 justifies the parametric resampling scheme ofi.i.d. residuals, for

instanceu∗t = u∗t(η), from F

(; , η, Σu

(η))

, asymptotically. Now, setting

Σ∗u(η)

= E[u∗t

(η)u∗t

(η)′]

. (3.6)

Then we have the following proposition.

Proposition 3.1 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the representation (2.1). Let alsoη be a

√T -consistent estimator ofη. Then, by theorem

3.2we have ∥∥∥Σ∗u(η)− Σu

∥∥∥ = Op

(T−1/2

).

Note that in case of Gaussian errors the residualsu∗t

(η)

: −b + 1 ≤ t ≤ T

can be

identically and independently drawn randomly from the normal distributionN[0, Σ∗u

(η)]

with

Σ∗u(η)

= Σu

(η). The bootstrap replicates

y∗t : t = 1, . . . , T

or pseudo-data can then be

easily computed recursively, using

y∗t = Φ−10

[µΦ +

p∑

i=1

Φiy∗t−i +

p∑

j=0

Θju∗t−j

(η)]

, (3.7)

or alternatively

y∗t = µy +t+b−1∑

v=0

Ψv

(η)u∗t−v

(η), (3.8)

whereµy =[Φ

(1)]−1

µΦ with Φ(1)

= Φ0 −∑p

i=1 Φi, and theΨv (η)’s are the estimatedcoefficients associated with the infinite-order moving average representation of the process,such thatΨv

(η)

= Ik if v = 0, and

Ψv

(η)

= Φ−10

[ min(v,p

)∑

i=1

Ψv−i

(η)Φi + Θv

], v = 1, 2, . . . , (3.9)

Now, let Π(nT

)=

[µΠ

(nT

), Π1

(nT

), . . . , ΠnT

(nT

)]be the multivariate least squares

estimator of the stacked autoregressive coefficient matricesΠ(nT

)=

[µΠ

(nT

), Π1, . . . , ΠnT

]associated with the long autoregression truncated up to the lag-ordernT . Moreover, let

ut

(nT

)= yt − µΠ

(nT

)−nT∑

τ=1

Πτ

(nT

)yt−τ (3.10)

and

et

(nT

)= yt − µΦ −

(Ik − Φ0

)vt

(nT

)−p∑

i=1

Φiyt−i −p∑

j=1

Θj ut−j

(nT

)(3.11)

9

Page 13: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

with vt

(nT

)= yt − ut

(nT

), or equivalently

ut

(η)

=t−1∑

τ=0

Λτ

(η)[

Φ0yt−τ −p∑

i=1

Φiyt−i−τ − µΦ

], (3.12)

or

ut

(η)

= yt − Φ−10

[µΦ +

p∑

i=1

Φiyt−i +p∑

j=1

Θjut−j

(η)]

, (3.13)

initiating with ut

(η)

= 0 for t ≤ p, t = 1, . . . , T , be the first and second stages residualsestimates obtained from the observed data, respectively. Let also the centered residualsut

(nT

)= ut

(nT

)− uT

(nT

), et

(nT

)= et

(nT

)− eT

(nT

)andut

(η)

= ut

(η)− uT

(η), with

uT

(nT

)= T−1

∑Tt=1 ut

(nT

), eT

(nT

)= T−1

∑Tt=1 et

(nT

)anduT

(η)

= T−1∑T

t=1 ut

(η),

respectively. Further, setting

Σu

(nT

)=

1T

T∑

t=1

ut

(nT

)ut

(nT

)′, Σe

(nT

)=

1T

T∑

t=1

et

(nT

)et

(nT

)′(3.14)

and

Σu

(η)

=1T

T∑

t=1

ut

(η)ut

(η)′

. (3.15)

Then it is worthnoting, here, that when resampling thei.i.d. residualsu∗t(η)

we areautomatically resamplingi.i.d. residuals

u∗t

(nT

): −b + 1 ≤ t ≤ T

,e∗t

(nT

): −b + 1 ≤

t ≤ T

andu∗t

(η)

: −b+1 ≤ t ≤ T

, whereu∗t(nT

)ande∗t

(nT

)andu∗t

(η)

are the first-stepand two-step resampled residuals, respectively, withη being the

√T -consistent two-step linear

estimator forη. This is roughly obvious since these residuals are sequentially determined alongthe three consecutive steps of the linear estimation procedure proposed in Dufour and Jouini(2008) and herein. Now, setting

Σ∗u(nT

)= E

[u∗t

(nT

)u∗t

(nT

)′], Σ∗e

(nT

)= E

[e∗t

(nT

)e∗t

(nT

)′](3.16)

andΣ∗u

(η)

= E[u∗t

(η)u∗t

(η)′]

. (3.17)

Then, likewise, we can state the following theorems and proposition.

Theorem 3.3 Let X1 andX2 be twok-dimensional random vectors such thatX1 ∼ FT andX2 ∼ FT , with FT designating the empirical distribution ofut, t = 1, . . . , T . Then under theassumptions2.1 to 2.4we have

E∥∥X1 −X2

∥∥2 =

Op

(n2

T T−1)

if FT = FT

(u(nT

)),

Op

(n2

T T−1)

if FT = FT

(e(nT

))and

Op

(T−1

)if FT = FT

(η),

whereFT

(u(nT

)), FT

(e(nT

))and FT

(η)

stay for the empirical distributions of the centeredestimated residuals corresponding to the first and second stages linear estimation procedure,respectively.

10

Page 14: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Theorem 3.4 Under the assumption2.3we have

d2

(FT , F

)−→ 0 in probability asnT , T →∞,

for all cases whereFT designatesFT

(u(nT

)), FT

(e(nT

))or FT

(η)

.

Proposition 3.2 LetYt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the representation (2.1). Then, by theorems3.3 and 3.4, the assumptions2.1 to 2.4, wehave

∥∥∥Σ∗u(nT

)− Σu

∥∥∥ =∥∥∥Σ∗u

(nT

)−1 − Σ−1u

∥∥∥ = Op

(nT T−1/2

)∥∥∥Σ∗e

(nT

)− Σu

∥∥∥ =∥∥∥Σ∗e

(nT

)−1 − Σ−1u

∥∥∥ = Op

(nT T−1/2

)

and ∥∥∥Σ∗u(η)− Σu

∥∥∥ =∥∥∥Σ∗u

(η)−1 − Σ−1

u

∥∥∥ = Op

(T−1/2

).

In the following we provide a proof of the asymptotic validity to bootstrapping stationaryinvertible echelon form VARMA models. In particular, we shall consider bootstrapping suchmodels using the three-step recursive linear estimation method2 developed in Dufour and Jouini(2008).

3.2 Asymptotic validity of bootstrap

Now, lety∗t : −b + 1 ≤ t ≤ T ; b ≥ nT

be the pseudo-data replication obtained from

the resampling scheme described above. Setting alsoY ∗t

(nT

)=

[1, y∗′t−1, . . . , y

∗′t−nT

]′. Then

the bootstrap version of the multivariate least squares or Yule-Walker estimates of the stackedautoregressive coefficient matricesΠ

(nT

)areΠ∗

(nT

)=

[µ∗Π

(nT

), Π∗1

(nT

), . . . , Π∗nT

(nT

)],

such thatΠ∗

(nT

)= Γy∗

(nT

)ΓY ∗

(nT

)−1, (3.18)

where

Γy∗(nT

)= T−1

T∑

t=1

y∗t Y∗t

(nT

)′and ΓY ∗

(nT

)= T−1

T∑

t=1

Y ∗t

(nT

)Y ∗

t

(nT

)′. (3.19)

Now, let the norm‖A‖1, for any given matrixA, be simply the largest eigenvalue ofA′A, bymeans ∥∥A

∥∥1

= supx 6=0

∥∥Ax∥∥

∥∥x∥∥

. (3.20)

Therefore, we have the following proposition.

2Such a way of proceeding is quite appealing for practical use since bootstrapping VARMA models usingnonlinear estimation methods might be inconceivable to simulation-based techniques, even for small systems withfew parameters, or nonpersistent stationary invertibles VARMA models.

11

Page 15: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Proposition 3.3 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the pure autoregressive representation given by (2.6) truncated up to the lag ordernT .Then, by theorems3.1and3.2, proposition3.1, the assumptions2.1 to 2.3, and conditional onyt : t = 1, . . . , T

, we have:

∥∥∥ΓY ∗(nT

)−1 − ΓY

(nT

)−1∥∥∥

1= Op

(nT T−1/2

), (3.21)

whereΓY

(nT

)= E

[Yt

(nT

)Yt

(nT

)′], with Yt

(nT

)=

[1, y′t−1, . . . , y

′t−nT

]′.

Hence the following theorem provides the rate of convergence between the autoregressivecoefficient matrices estimates associated with a long autoregression truncated up to the lag ordernT , and their bootstrap analogous.

Theorem 3.5 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the pure autoregressive representation given by (2.6) truncated up to the lag ordernT .Then by Theorems3.1, 3.2, 3.3 and3.4, Propositions3.1 and3.3, assumptions2.1 to 2.5, andconditional on

yt : t = 1, . . . , T

, we have:

∥∥∥Π∗(nT

)− Π(nT

)∥∥∥ = Op

(n

3/2T /T 1/2

). (3.22)

Now, let l(nT

)be a sequence of vectors of size

(k2nT + k

) × 1, such that0 < m1 ≤∥∥l(nT

)∥∥ ≤ m2 < ∞. Further, setting

SY ∗(nT

)= T 1/2l

(nT

)′vec[ΩY ∗

(nT

)ΓY ∗

(nT

)−1], (3.23)

SY ∗(nT

)= T 1/2l

(nT

)′vec[ΩY ∗

(nT

)ΓY ∗

(nT

)−1]

(3.24)

andSY

(nT

)= T 1/2l

(nT

)′vec[ΩY

(nT

)ΓY

(nT

)−1], (3.25)

whereΩY ∗(nT

)= T−1

∑Tt=1 u∗t

(nT

)Y ∗

t

(nT

)′, ΩY

(nT

)= T−1

∑Tt=1 utYt

(nT

)′and finally

ΓY ∗(nT

)= E

[Y ∗

t

(nT

)Y ∗

t

(nT

)′]. Then, we have the following theorems.

Theorem 3.6 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic process withthe pure autoregressive representation given by (2.6) truncated up to the lag ordernT . Then byassumptions2.1 to 2.5, and conditional on

yt : t = 1, . . . , T

, we have:

∣∣∣SY ∗(nT

)− SY ∗(nT

)∣∣∣ = Op

(n

3/2T /T 1/2

). (3.26)

Theorem 3.7 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic process withthe pure autoregressive representation given by (2.6) truncated up to the lag ordernT . Then byassumptions2.1 to 2.5, and conditional on

yt : t = 1, . . . , T

, we have:

d2

L

(SY ∗

(nT

)∣∣ y1, . . . , yT

),L

(SY

(nT

))→ 0 in probability, (3.27)

where the notationL(.)

stays for the distribution function.

12

Page 16: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

The above theorem shows that Mallow’s distance between the conditional distribution ofSY ∗

(nT

)and the distribution ofSY

(nT

)converges in probability to zero, asymptotically.

Further, using the fact that convergence in thed2 metric implies also weak convergence ofthe corresponding random variables we state that, conditional on

yt : t = 1, . . . , T

SY ∗(nT

) ⇒ N[0, l

(nT

)′QY

(nT

)l(nT

)]in probability, (3.28)

since it has been shown in Lewis and Reinsel (1985) that

SY

(nT

) ⇒ N[0, l

(nT

)′QY

(nT

)l(nT

)]in probability, (3.29)

where“ ⇒ ” stays for weak convergence and

QY

(nT

)= ΓY

(nT

)−1 ⊗ Σu. (3.30)

Therefore, by Theorem3.6, we establish in the following theorem the asymptotic validityof bootstrapping the first stage Hannan-Rissanen estimates developed in Dufour and Jouini(2005c).

Theorem 3.8 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic process withthe pure autoregressive representation given by (2.6) truncated up to the lag ordernT . Thenby assumptions2.1 to 2.5, and conditional on

yt : t = 1, . . . , T

, we have the following

asymptotic distributional result:

L(√

T l(nT

)′vec[Π∗

(nT

)− Π(nT

)]∣∣∣y1, . . . , yT

)⇒ N

[0, l

(nT

)′QY

(nT

)l(nT

)]. (3.31)

Furthermore, let

Σ∗u(nT

)= T−1

T∑

t=1

u∗t(nT

)u∗t

(nT

)′, (3.32)

where

u∗t(nT

)= y∗t − µ∗Π

(nT

)−nT∑

τ=1

Π∗τ(nT

)y∗t−τ . (3.33)

Then we have the following proposition.

Proposition 3.4 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the pure autoregressive representation given by (2.6) truncated up to the lag ordernT .Then by assumptions2.1 to 2.6, and conditional on

yt : t = 1, . . . , T

, we have:

∥∥∥Σ∗u(nT

)− Σu

∥∥∥ =∥∥∥Σ∗u

(nT

)−1 − Σ−1u

∥∥∥ = Op

(n2

T /T 1/2). (3.34)

It is worthnoting to show from the resampling scheme described above using lemma 4.2 ofDufour and Jouini (2005c), that

y∗t = Φ−10

[µΦ +

p∑

i=1

Φiy∗t−i +

p∑

j=1

Θju∗t−j

(η)]

+ u∗t(η)

(3.35)

13

Page 17: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

and then

y∗t = µΦ +(Ik − Φ0

)v∗t

(nT

)+

p∑

i=1

Φiy∗t−i +

p∑

j=1

Θj u∗t−j

(nT

)+ e∗t

(nT

), (3.36)

wherev∗t(nT

)= y∗t − u∗t

(nT

), with

e∗t(nT

)= u∗t

(nT

)+

p∑

j=0

Θj

[u∗t−j

(η)− u∗t−j

(nT

)]. (3.37)

If further, we defineX∗t

(nT

)=

[1, v∗t

(nT

)′, y∗′t−1, . . . , y

∗′t−p, u

∗t−1

(nT

)′, . . . , u∗t−p

(nT

)′]′, then

(3.46) can easily be written as

y∗t =[X∗

t

(nT

)′ ⊗ Ik

]Rη + e∗t

(nT

), (3.38)

so that, the second step bootstrap analogous generalized least squares (GLS) estimatorsη∗ of ηare such that

η∗ = η + Q∗X

(nT

)Ω∗X

(nT

), (3.39)

where

Q∗X

(nT

)=

R′Υ∗

X

(nT

)R

−1(3.40)

and

Ω∗X(nT

)= T−1

T∑

t=1

R′[X∗

t

(nT

)⊗ Ik

]Σ∗u

(nT

)−1e∗t

(nT

), (3.41)

with Υ∗X

(nT

)= Γ∗X

(nT

) ⊗ Σ∗u(nT

)−1andΓ∗X

(nT

)= T−1

∑Tt=1 X∗

t

(nT

)X∗

t

(nT

)′. Now,

let

Q∗X =

R′Υ∗

XR−1

and Ω∗X = T−1T∑

t=1

R′[X∗

t

(η)⊗ Ik

]Σ∗u

(η)−1

u∗t(η)

(3.42)

be the bootstrap analogous of

QX =R′ΥXR

−1and ΩX = T−1

T∑

t=1

R′[Xt ⊗ Ik

]Σu

−1ut, (3.43)

respectively, withΥX = ΓX ⊗ Σu−1 and Υ∗

X = Γ∗X(η) ⊗ Σ∗u

(η)−1

, where specifically,

Γ∗X(η)

= E[X∗

t

(η)X∗

t

(η)′]

, with X∗t

(η)

=[1, v∗t

(η)′

, y∗′t−1, ..., y∗′t−p, u

∗t−1

(η)′

, ..., u∗t−p

(η)′]′

andv∗t(η)

= y∗t − u∗t(η). Then we have the following proposition.

Proposition 3.5 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.4, and conditional on

yt : t = 1, . . . , T

,

we have:∥∥∥Q∗X −QX

∥∥∥1

= Op

(T−1/2

)and

∥∥∥Q∗X

(nT

)−QX

∥∥∥1

= Op

(nT /T 1/2

). (3.44)

14

Page 18: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Given the results derived in the above proposition, we establish in the following theoremthe rate of convergence of the second-stage bootstrap estimates.

Theorem 3.9 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic process withthe VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2,Proposition3.1, the assumptions2.1 to 2.4, and conditional on

yt : t = 1, . . . , T

, we have:

∥∥η∗ − η∥∥ = Op

(T−1/2

). (3.45)

Now, setting

S∗X(nT

)= T 1/2Q∗

X

(nT

)Ω∗X

(nT

), S∗X = T 1/2Q∗

XΩ∗X and SX = T 1/2QXΩX . (3.46)

Then in the next theorem we establish the following asymptotic equivalence.

Theorem 3.10 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.6, and conditional on

yt : t = 1, . . . , T

,

we have: ∥∥∥S∗X(nT

)− S∗X∥∥∥ = op

(1). (3.47)

Moreover, the next theorem shows that Mallow’s distance between the conditionaldistribution of the bootstrap statisticS∗X given the realization

y1, . . . , yT

, and the distribution

of the statisticSX vanishes asymptotically.

Theorem 3.11 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.6, and conditional on

yt : t = 1, . . . , T

,

we have:

d2

L

(SX

),L

(S∗X

∣∣ y1, . . . , yT

)→ 0 in probability. (3.48)

The above theorem implies that conditional on the realizationy1, . . . , yT

the statistic

SX∗ has the Gaussian distributionN(0, QX

), asymptotically, since by central limit theorem it

has been shown in Dufour and Jouini (2008) thatSX converges in distribution to the GaussiandistributionN

(0, QX

), asymptotically. Hence, the asymptotic validity of bootstrapping the

two-step linear estimates can be easily established in the following theorem.

Theorem 3.12 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.6 and conditional on

yt : t = 1, . . . , T

,

we have the following asymptotic distribution of the bootstrap estimatesη∗ :

L(T 1/2

(η∗ − η

)∣∣y1, . . . , yT

)⇒ N

[0, QX

]in probability. (3.49)

Now, recall first that

e∗t(nT

)= y∗t −

[X∗

t

(nT

)′ ⊗ Ik

]Rη∗ (3.50)

15

Page 19: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

or equivalently

e∗t(nT

)= u∗t

(nT

)+

p∑

j=0

Θ∗j

[u∗t−j

(η∗

)− u∗t−j

(nT

)], (3.51)

so that the estimated residualsu∗t−j

(η∗

)can be easily either deduced and computed recursively

such as

u∗t(η∗

)= Φ∗−1

0 e∗t(nT

)+

(Ik− Φ∗−1

0

)u∗t

(nT

)+

p∑

j=1

Φ∗−10 Θ∗

j

[u∗t−j

(nT

)−u∗t−j

(η∗

)], (3.52)

or

u∗t(η∗

)= y∗t − Φ∗−1

0

[µ∗Φ +

p∑

i=1

Φ∗i y∗t−i +

p∑

j=1

Θ∗ju∗t−j

(η∗

)], (3.53)

initiating with u∗t(η∗

)= 0, t ≤ p, or alternatively as

u∗t(η∗

)=

t−1∑

τ=0

Λτ

(η∗

)[Φ∗0y

∗t−τ −

p∑

i=1

Φ∗i y∗t−i−τ − µ∗Φ

]. (3.54)

Then, setting

Σ∗u(η∗

)=

1T

T∑

t=1

u∗t(η∗

)u∗t

(η∗

)′. (3.55)

Therefore, we have the following proposition.

Proposition 3.6 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.4, and conditional on

yt : t = 1, . . . , T

,

we have: ∥∥∥Σ∗u(η∗

)− Σu

∥∥∥ =∥∥∥Σ∗u

(η∗

)−1 − Σ−1u

∥∥∥ = Op

(T−1/2

). (3.56)

Now, setting X∗t

(η∗

)=

[1, v∗t

(η∗

)′, y∗′t−1, . . . , y

∗′t−p, u

∗t−1

(η∗

)′, . . . , u∗t−p

(η∗

)′]′with

v∗t(η∗

)= y∗t − u∗t

(η∗

). Then, using again lemma 4.1 of Dufour and Jouini (2008) we get

u∗t(η∗

)− u∗t(η)

= −Z∗t

(η∗, η

)′(η∗ − η

), (3.57)

where

Z∗t

(η∗, η

)=

t−1∑

τ=0

R′[X∗

t−τ

(η∗

)⊗ Λτ

(η)′]

. (3.58)

Hence the bootstrap version of the third-step linear estimatorη developed in Dufour and Jouini(2008), sayη∗ is such that

η∗ = η∗ + Q∗X

(η∗

)Ω∗X

(η∗

), (3.59)

16

Page 20: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

where

Q∗X

(η∗

)=

T−1

T∑

t=1

Z∗t(η∗

)Σ∗u

(η∗

)−1Z∗t

(η∗

)′−1and (3.60)

Ω∗X(η∗

)= T−1

T∑

t=1

Z∗t(η∗

)Σ∗u

(η∗

)−1u∗t

(η∗

)(3.61)

are the bootstrap analogous of

QX

(η)

=

T−1T∑

t=1

Zt

(η)Σu

(η)−1

Zt

(η)′−1

andΩX

(η)

= T−1T∑

t=1

Zt

(η)Σu

(η)−1

ut

(η),

(3.62)respectively, with

Z∗t(η∗

)=

t−1∑

τ=0

R′[X∗

t−τ

(η∗

)⊗ Λτ

(η∗

)′](3.63)

as being the bootstrap version of

Zt

(η)

=t−1∑

τ=0

R′[Xt−τ

(η)⊗ Λτ

(η)′]

, (3.64)

whereXt

(η)

=[1, vt

(η)′

, y′t−1, . . . , y′t−p, ut−1

(η)′

, . . . , ut−p

(η)′]′

. More precisely, such anestimator can be easily computed by running the GLS estimation procedure using the weightmatrix Σ∗u

(η∗

), on the linear regression model

w∗t(η∗

)= Z∗t

(η∗

)′η + z∗

(η∗, η

)(3.65)

which is a variant of (3.69) that could be obtained after some manipulations, with

w∗t(η∗

)= u∗t

(η∗

)+ Z∗t

(η∗

)′η∗ (3.66)

andz∗t

(η∗, η

)= u∗t

(η)

+[(

η∗)− Z∗t

(η∗, η

)]′(η∗ − η

). (3.67)

In particular, it is worth mentioning that such a way of bootstrapping stationary invertibleVARMA parameters corresponds in fact to bootstrapping the scoring method in case ofGaussian errors, which turns out to be quite appealing in practice from the time efficiencyviewpoint, namely in heavy and/or persistent models. Setting further

Q∗X

(η∗

)=

T−1

T∑

t=1

Z∗t

(η∗, η

)Σ∗u

(η∗

)−1Z∗t

(η∗, η

)′−1, (3.68)

Ω∗X(η∗

)= T−1

T∑

t=1

Z∗t

(η∗, η

)Σ∗u

(η∗

)−1u∗t

(η∗

)and (3.69)

Ω•∗X(η∗

)= T−1

T∑

t=1

Z∗t

(η∗, η

)Σ∗u

(η∗

)−1u∗t

(η). (3.70)

17

Page 21: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Then, we can easily show that (3.71) can also be expressed equivalently as

η∗ − η = Q∗X

(η∗

)Ω∗X

(η∗

)+ Q∗

X

(η∗

)[Ω•∗X

(η∗

)− Ω∗X(η∗

)]. (3.71)

Similarly, let

Q∗X

(η)

=

E[Z∗t

(η)Σ∗u

(η)−1

Z∗t(η)′]−1

and Ω∗X(η)

= T−1T∑

t=1

Z∗t(η)Σ∗u

(η)−1

u∗t(η)

(3.72)be the bootstrap analogous of

QX

(η)

=

E[Zt

(η)Σ−1

u Zt

(η)′]−1

and ΩX

(η)

= T−1T∑

t=1

Zt

(η)Σ−1

u ut , (3.73)

respectively, where

Z∗t(η)

=t−1∑

τ=0

R′[X∗

t−τ

(η)⊗ Λτ

(η)′]

and Zt

(η)

=∞∑

τ=0

R′[Xt−τ ⊗ Λτ

(η)′]

. (3.74)

ThenQ∗X

(η)

andQX

(η)

can be alternatively expressed as

Q∗X

(η)

=

R′Υ∗X

(η)R

−1and QX

(η)

=

R′ΥX

(η)R

−1, (3.75)

respectively, with

Υ∗X

(η)

=t−1∑

τ1=0

t−1∑

τ2=0

[Γ∗X

(τ1 − τ2, η

)⊗ Λτ1

(η)′Σ∗u

(η)−1Λτ2

(η)]

(3.76)

and

ΥX

(η)

=∞∑

τ1=0

∞∑

τ2=0

[ΓX

(τ1 − τ2

)⊗ Λτ1

(η)′Σ−1

u Λτ2

(η)]

, (3.77)

whereΓ∗X(τ2 − τ1, η

)= E

[X∗

t+τ1

(η)X∗

t+τ2

(η)′]

andΓX

(τ2 − τ1

)= E

[Xt+τ1X

′t+τ2

], with

Γ∗X(0, η

)= Γ∗X

(η)

andΓX

(0)

= ΓX . Consequently, we have the following equivalences.

Proposition 3.7 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.4, and conditional on

yt : t = 1, . . . , T

,

we have:∥∥∥Q∗

X

(η)−QX

(η)∥∥∥

1=

∥∥∥Q∗X

(η∗

)−QX

(η)∥∥∥

1=

∥∥∥Q∗X

(η∗

)− Q∗X

(η∗

)∥∥∥1

= Op

(T−1/2

).

(3.78)

Moreover, the next theorem states the rate of convergence between the third stage recursivelinear estimates and their bootstrap analogous.

18

Page 22: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Theorem 3.13 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.4, and conditional on

yt : t = 1, . . . , T

,

we have: ∥∥η∗ − η∥∥ = Op

(T−1/2

). (3.79)

Further, setting

S∗X(η∗

)= T 1/2

Q∗

X

(η∗

)Ω∗X

(η∗

)+ Q∗

X

(η∗

)[Ω•∗X

(η∗

)− Ω∗X(η∗

)], (3.80)

S∗X(η)

= T 1/2Q∗X

(η)Ω∗X

(η)

and SX

(η)

= T 1/2QX

(η)ΩX

(η). (3.81)

Then we have the following theorem.

Theorem 3.14 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.6, and conditional on

yt : t = 1, . . . , T

,

we have: ∥∥∥S∗X(η∗

)− S∗X(η)∥∥∥ = Op

(T−1/2

). (3.82)

Furthermore, the next theorem shows that Mallow’s distance between the conditionaldistribution ofS∗X

(η)

given(y1, . . . , yT

)and that ofSX

(η)

converges to zero in probability.

Theorem 3.15 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.6, and conditional on

yt : t = 1, . . . , T

,

we have:

d2

L

(SX

(η))

,L(S∗X

(η)∣∣ y1, . . . , yT

)→ 0 in probability . (3.83)

In view of the above theorem and given that convergence ind2 metric implies alsoweak convergence of the corresponding random variables we can easily show, conditional

on(y1, . . . , yT

), thatS∗X

(η) ⇒ N

[0, QX

(η)]

in probability. Hence the following theorem

provides the theoretical justification in establishing the asymptotic validity of bootstrappingstationary invertible VARMA models using the recursive linear estimation methods developedin Dufour and Jouini (2005c).

Theorem 3.16 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.6 and conditional on

yt : t = 1, . . . , T

,

we have the following

L(T 1/2

(η∗ − η

)∣∣y1, . . . , yT

)⇒ N

[0, QX

(η)]

in probability. (3.84)

The third-stage bootstrap version residuals, sayu∗t(η∗

)could then be easily obtained using

u∗t(η∗

)=

t−1∑

τ=0

Λτ

(η∗

)[Φ∗0y

∗t−τ −

p∑

i=1

Φ∗i y∗t−i−τ − µ∗Φ

], (3.85)

19

Page 23: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

or equivalently from

u∗t(η∗

)= y∗t − Φ∗−1

0

[µ∗Φ +

p∑

i=1

Φ∗i y∗t−i +

p∑

j=1

Θ∗ju∗t−j

(η∗

)], (3.86)

initiating with u∗t(η∗

)= 0, t ≤ p. Finally, setting

Σ∗u(η∗

)=

1T

T∑

t=1

u∗t(η∗

)u∗t

(η∗

)′(3.87)

we get the following proposition.

Proposition 3.8 Letyt : t ∈ Z

be ak-dimensional stationary invertible stochastic processwith the VARMA representation in echelon form given by (2.1)-(2.5). Then, by Theorems3.1and3.2, Proposition3.1, the assumptions2.1 to 2.4, and conditional on

yt : t = 1, . . . , T

,

we have: ∥∥∥Σ∗u(η∗

)− Σu

∥∥∥ = Op

(T−1/2

). (3.88)

4 Simulation study

In this section, we evaluate the performance of our proposed methods for bootstrappingVARMA models through a simulation model design with different sample sizes (T = 100, 200,300, 400, 500, 1000). The simulated model is a stationary invertible bivariate echelon formVARMA process with Kronecker indices

(2, 0

), generated using (2.1) with Gaussian errors3.

All the results we are reporting here are based on 1000 trials for the MC simulation and aboutthe same number of replications for parametric and non parametric bootstrap methods as well.These results are obtained using GAUSS random number generator (version 3.2.37) with thefirst 100 simulated pseudo data being dropped to avoid numerical problems due to initializationeffects. In all replications, we allow the long-autoregression-lag-ordernT associated withthe first stage-estimation procedure to lie betweenln

(T

)and

√T . That is, be estimated

consistently, following Dufour and Jouini (2008), by minimizing the information criteria:

Cr(nT

)= ln

(det

Σu

(nT

))+ c1k

2 nT

T 1/2

(1 + c1k

2 nT

T 1/2

)and

Cr∗(nT

)= ln

(det

Σ∗u

(nT

))+ c1k

2 nT

T 1/2

(1 + c1k

2 nT

T 1/2

),

whereCr∗(nT

)stays for the parametric (or nonparametric) bootstrap analogous ofCr

(nT

),

with c1 = 0.10√

2/k andΣ∗u(nT

)be the parametric (or nonparametric) bootstrap analogous of

the first-stage error covariance matrix estimatorΣu

(nT

). Convergence criteria have also been

considered along with the estimation procedure in order to obtain stationary invertible VARMAmodels. These criteria are

ln(

det

Σu

(η))

< ln(

det

Σu

(nT

))+

c0√T

and (4.1)

ln(

det

Σu

(η))

< ln(

det

Σe

(nT

))(4.2)

3A complete description of the simulated model is given in appendix B.

20

Page 24: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

in the first and second stages of the estimation procedure, respectively, for the MC simulation,and

ln(

det

Σ∗u(η∗

))< ln

(det

Σ∗u

(nT

))+

c0√T

and (4.3)

ln(

det

Σ∗u(η∗

))< ln

(det

Σ∗e

(nT

))(4.4)

in the first and second4 stages estimation, respectively, for the parametric (or nonparametric)bootstrap method, andc0 is a positive constant sufficiently small. If consistency is not met thenincreasing and thereafter decreasing lags are considered in the first-stage long-autoregression.If further, these criteria are still not met, then the data is dropped and replaced5 with a new one.

Tables 1 through 4 provide a comparison study on the finite sample properties [mean,average-deviation (Avg-Dev), median, 5th and 95th percentiles, length of confidence intervalwith nominal of 90 percent (L90%), root mean square errors (RMSE) and standard-deviation(Std-Dev)] of the echelon form VARMA parameter estimates associated with a MC simulationand two versions of one typical bootstrap approximation. These two versions are, namely, theparametric and non parametric bootstrap parameter replicates that are obtained from one typicaltrial parameter estimates with the same set of simulated residuals. The inspection of these tablesreveals pronounced similarities in the finite sample distribution of the echelon form VARMAparameters across the various methods. In particular, table 1 shows that the average-deviation,the RMSE and the standard-deviation associated with the second-stage estimates of the truemodel, obtained by MC simulation with sample size of 200 observations, are about the samenumber of digits compared to their corresponding parametric and nonparametric bootstrapanalogous generated with the typical estimated model. Moreover, the parameter confidencesets with nominal level of 90% corresponding to each method have almost the same length.Which assumes that the finite sample parametric and nonparametric bootstrap distributions,though generated with one typical estimation of the true model, draw the same patterns asthe finite sample MC simulation distribution of the true parameters. Furthermore, since thethree-step estimates6 constitute refinements to those obtained in the second stage of the linearestimation procedure, these similarities get increasingly more pronounced and even more withgrowing sample sizes as shown in table 2 and 4 compared to tables 1 and 3. This should not besurprising since these parameters are consistently estimated.

Further, to get well informed on the finite sample properties of the echelon form VARMAparameter distributions related to all three different methods we plot histograms based onstudentized statistic values of the parameter estimates [see figures 1 through 6 for samples of200 observations]. These statistics are, respectively,

T (ηi

)=

√T

(ηi − ηi

)

se(ηi

) and T (ηi

)=

√T

(ηi − ηi

)

se(ηi

) , i = 1, . . . , 9,

4HereΣ∗e(nT

)= T−1 ∑T

t=1 e∗t(nT

)e∗t

(nT

)′, with e∗t

(nT

)as specified in (3.37).

5The frequency of repacement over the 1000 of trials considered for the MC simulation and Bootstrap methodsis too small for each case.

6The latters have been shown in Dufour and Jouini (2005c) identical to the one step-iteration of the score in caseof gaussian errors, hence more efficient than the two-step estimators.

21

Page 25: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

for the two-step and three-step estimates associated with the MC simulation, and

T ∗(η∗i)

=

√T

(η∗i − ηi

)

se∗(η∗i

) and T ∗(η∗i)

=

√T

(η∗i − ηi

)

se∗(η∗i

) , i = 1, . . . , 9,

for the second stage and third stage parametric (or nonparametric) bootstrap estimates,respectively, wherese

(ηi

), se

(ηi

), se∗

(η∗i

)and se∗

(η∗i

)are the respective square roots of

the i-th diagonal terms of the estimated covariance matricesQX

(nT

), QX

(η), Q∗

X

(nT

)and

Q∗X

(η∗

), where

QX

(nT

)=

R′ΥX

(nT

)R

−1with

ΥX

(nT

)= ΓX

(nT

)⊗ Σu

(nT

)−1and ΓX

(nT

)= T−1

T∑

t=1

Xt

(nT

)Xt

(nT

)′.

The plots of the simulated histogram of the studentized second stage estimates related to allproposed methods (MC simulation, parametric bootstrap and nonparametric bootstrap) showsome similarities [see 1, 3 and 5]. These similarities show up also for the case of the studentizedthird stage estimates, since we report the same pattern [see 2, 4 and 6]. In particular, each ofthese histograms exhibits a high probability mass around zero and no fat tails. But this does notmean in any sense that these finite sample approximations are symmetric. For better comparisonbetween these well shaped histograms we have considered curve versions of these by relatingthe midpoints associated with the clusters used in individual histograms. Figures 7, 8 and 9show that it is always worth to consider one more linear regression since the studentized thirdstage estimates provides improvements to the accuracy of the small sample probability densityfunction (pdf) approximation, compared to the studentized two-step estimates. See for examplethe diagrams ofµΦ,1 (or Drift[1]) and at a lower level those related toφ11,1 (or Phi1[1,1]).A further comparison between these small sample pdf approximations across MC simulation,parametric bootstrap and nonparametric bootstrap methods, as shown in figures 10 and 11 forthe two-step and the three-step studentized estimates, respectively, reveals that all of these smallsample pdf approximations mimic as close as possible each others for either stage levels of thestudentized estimates. Of course such similarities in the distributional behavior are expected tobe more and more pronounced with increasing sample sizes.

Obviously, at the estimation level of these models one should be questioned, on theindividual significance of the parameter estimates in finite sample framework. Or eventually,whether we are or not providing estimates that are close to the true values of the model’sparameters. Which is also equivalent to providing a consistent set of estimates that containsthe true parameter value with a given confidence level. To this end standard methods suggestedclassical confidence sets based on critical values derived from asymptotic approximations.Specifically, the followings confidence intervals

CIA

(i)

=[ηi − t1−α/2T

−1/2se(ηi

), ηi − tα/2T

−1/2se(ηi

)]and

CIA

(i)

=[ηi − t1−α/2T

−1/2se(ηi

), ηi − tα/2T

−1/2se(ηi

)]

22

Page 26: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

have nominal levels or probabilities1 − α of containing the true value of the parameterηi. Moreover, the standard theory assumes, here, symmetric intervals around the parameterestimatesηi andηi ast1−α/2 = −tα/2. However, since these asymptotic critical values shouldrather be used for large samples bootstrap methods suggested instead the use of their finitesample analogous, sayt∗i,α/2 andt∗i,1−α/2, andt∗i,α/2 andt∗i,1−α/2, (i = 1, . . . , 9), for the secondand third stages estimates, respectively. These can be easily obtained, first, by bootstrapping thestudentized estimates then solving for these bootstrap critical values so that

Pr[T ∗(η∗i

) ≤ t∗i,α/2

]= Pr

[T ∗(η∗i

) ≥ t∗i,1−α/2

]= α/2 and

Pr[T ∗(η∗i

) ≤ t∗i,α/2

]= Pr

[T ∗(η∗i

) ≥ t∗i,1−α/2

]= α/2.

Therefore, getting

CI∗B

(i)

=[ηi − t∗i,1−α/2T

−1/2se(ηi

), ηi − t∗i,α/2T

−1/2se(ηi

)]and

CI∗B

(i)

=[ηi − t∗i,1−α/2T

−1/2se(ηi

), ηi − t∗i,α/2T

−1/2se(ηi

)]

as the two-sided1−α level bootstrap confidence intervals for the individual parameter estimatesassociated with the second stage and third stage of the linear estimation procedure, respectively.To investigate the finite sample accuracy of all these type of confidence sets, we proposed sofor, in containing the true values of the model’s parameters, we need to study their empiricalcoverage rates through an intensive MC simulation. To this end, let firstαL be the empiricalprobability or frequency that the true value of the parameter is not covered by the confidenceinterval from the left side. Likewise, defineαU as the one you can get from the right side. Thenthe overall empirical coverage rate of these confidence sets should be equal to, say,αCI =1− αL− αU . Of course, one would expectαL + αU almost equal toα and henceαCI ' 1−α,in finite sample, but notαL ' αU ' α/2 since there no reason that these confidence sets mustbe symmetric in finite sample. The latter statement must show up asymptotically.

Tables 5 and 6 provide, for different sample sizes T = 100, 200, 3000, 400, 500, and1000, the empirical coverage ratesαCI of the echelon form VARMA parameter confidenceintervals with nominal level of 95 percent, as well as the left and the right exclusion empiricalprobabilitiesαL andαU under standard asymptotics, parametric bootstrap and nonparametricbootstrap methods. From these tables we can easily see that, irrespective to the sample size,the bootstrapt-critical value based confidence intervals provide empirical coverage rates closeto the nominal level with both parametric and non parametric bootstrap methods to all stagesestimation procedure for all parameters and sample sizes. Which is of course not true forthe classical confidence sets based on asymptotic critical values. For the latter, although theempirical coverage rates have improved notably with the third stage estimation, as we reportonly coverage rates of90.4 and88.4 percent forθ11,1 andθ11,2, respectively, for T = 100, wecan easily highlight the poor coverage rates associated with the second stage estimates for someof the model parameters. In particular, with T=100, onlyθ12,1 has a coverage rate of about94.2 percent (' 95%), the remainder parameters have a coverage rate between67.2 and91.5percent. These rates are improving with growing samples but not to all parameters since westill report poor coverage rates forµΦ,1 andφ11,1 for all the sample sizes considered here. This

23

Page 27: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

simulation study has shown that classical confidence sets fail to provide good coverage ratesfor the echelon form VARMA parameters. While the parametric and nonparametric bootstrapmethods outperform the standard theory by providing empirical coverage rates of the parameterconfidence sets close to the nominal level. This may be closely related to the combination of theuse of a consistent bootstrap estimation procedure that we proposed here for the echelon formVARMA models, and the use of the finite sample bootstrap critical values. For example table 7provides a sample of these critical values associated with a confidence set of nominal level 95%for T = 200.

5 Conclusion

In this paper we have provided theoretical developments to bootstrapping stationary invertibleechelon form VARMA model with known dynamic or Kronecker indices. Such finding isgeneral and remains valid to other VARMA identification issues as well as univariate ARMAmodels. The asymptotic validity of bootstrapping such models has been established forparametric as well nonparametric methods and rests mainly on the recursive linear estimationtechniques proposed in Dufour and Jouini (2008). Which turns to be quite interesting fromthe computing-time viewpoint, especially, when bootstrapping these models for small sampleinference purpose. Since, we might avoid implementing simulation-based inference techniquesbased on nonlinear estimation procedures such as the ML and its related problems that mayoccur, namely, with the initial guess and the possibly non convergence of the maximizationalgorithm in high dimensional and/or persistent models. Obviously, these benefits may showup more when bootstrapping the robust two-step linear estimators developed in Dufour andJouini (2005) since the bootstrap theoretical results stated herein could be easily viewed as ageneralization to the latters.

The performance of our proposed bootstrap methods have been evaluated through asimulation model design. Simulation results have shown how accurate are the parametersbootstrap (parametric and nonparametric) distributions in approximating the finite sample MCsimulation distributions associated the true model parameters. Moreover, the two variants of ourbootstrap proposal have provided simulation results quite similar. Further, a comparative studyon the finite sample empirical coverage rates of the echelon form parameters confidence setshave shown the superiority of the bootstrap-t–critical value based confidence intervals, with bothvariants, over the classical or standard ones based on asymptotic theory. These bootstrap-t-basedconfidence sets provide empirical coverage rates close to the nominal level irrespective to thesample size for all parameters. Which is not true using the standard approximations. Of course,more elaborated inference such as Granger causality testing or impulse response functions(IRFs) confidence intervals set up for policy-maker decisions, using bootstrap methods, couldbe highly interesting for applied econometricians within the VARMA model framework, sincelatter tend to be accurate compared to VAR models. Furthermore, this paper offers an easyand simple platform to built more sophisticated dynamic inference in VARMA models as themaximized MC tests developed by Dufour and Jouini (2006) for VAR models in finite sample.Namely, when the bootstrap fails. In addition, one might eventually consider inference inunivariate as well multivariate GARCH models by adapting these techniques.

24

Page 28: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

A Appendix: Proofs

PROOF OFTHEOREM 3.1 Fix X1,t = ut (η) andX2,t = ut. Then

E∥∥∥X1 −X2

∥∥∥2

= T−1T∑

t=1

∥∥∥ut(η)− ut

∥∥∥2 ≤ 2T−1

T∑

t=1

∥∥∥ut(η)− ut

∥∥∥2

+ 2∥∥∥uT

(η)∥∥∥

2, (A.5)

where∥∥∥uT

(η)∥∥∥

2= Op

(T−1

). Moreover, using lemma 4.2 of Dufour and Jouini (2005c) we can easily show that

∥∥∥ut(η)− ut

∥∥∥2 ≤ 2

∥∥∥ut(η)− ut

(η)∥∥∥

2+ 2

∥∥∥ut(η)− ut

∥∥∥2,

≤ 4

∥∥∥∥t−1∑

τ=0

[Xt−τ

(η)′ ⊗ Λτ

(η)− Λτ

(η)]∥∥∥∥

2

+

∥∥∥∥t−1∑

τ=0

[Xt−τ

(η)′ ⊗ Λτ

(η)]∥∥∥∥

2 ∥∥∥R

∥∥∥2∥∥∥η − η

∥∥∥2

+2

∥∥∥∥∞∑

τ=t

Λτ(η)[

Φ0yat−τ −

p∑

i=1

Φiyat−i−τ

]∥∥∥∥2

, (A.6)

whereyat = yt − µy . In addition, note that

∥∥∥∥t−1∑

τ=0

[Xt−τ

(η)′ ⊗ Λτ

(η)]∥∥∥∥

2

≤( t−1∑

τ=0

∥∥∥Xt−τ

(η)′ ⊗ Λτ

(η)∥∥∥

)2

. (A.7)

with[Xt−τ

(η)′ ⊗ Λτ

(η)]

=[Xt−τ

(η)′ ⊗ Ik

][1⊗ Λτ

(η)]

. Then

∥∥∥ut(η)− ut

∥∥∥2 ≤ 4

(k

t−1∑

τ=0

∥∥∥Xt−τ

(η)∥∥∥

2)( t−1∑

τ=0

∥∥∥Λτ(η)− Λτ

(η)∥∥∥

2)

+

(k

t−1∑

τ=0

∥∥∥Xt−τ

(η)∥∥∥

2)( t−1∑

τ=0

∥∥∥Λτ(η)∥∥∥

2) ∥∥∥R

∥∥∥2∥∥∥η − η

∥∥∥2

+2

( ∞∑

τ=t

∥∥∥Λτ(η)∥∥∥

2)( ∞∑

τ=t

∥∥∥Φ0yat−τ −

p∑

i=1

Φiyat−i−τ

∥∥∥2)

= Op(T−2

)+ Op

(T−1

)+ op

(1), (A.8)

by lemma 4.1 of Dufour and Jouini (2005c). Consequently, we get

E∥∥∥X1 −X2

∥∥∥2

= Op(T−1

)(A.9)

and then, asT →∞ ,

E∥∥∥X1 −X2

∥∥∥2

= op(1). (A.10)

PROOF OFTHEOREM 3.2 Applying the triangular inequality to the Mallow’s distance betweenFT

(η)

andF , we have

d2

(FT

(η), F

)≤ d2

(FT

(η), FT

)+ d2

(FT , F

), (A.11)

whereF stays for the empirical distribution of the true residualsut for t = 1, . . . T . Further,

d2

(FT

(η), F

)≤ d2

(FT

(η), FT

)+ op

(1), (A.12)

25

Page 29: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

sinced2

(FT

(η), F

)−→ 0 in probability almost every where asT −→ ∞. Moreover, by the Mallow’s distance between

FT

(η)

andFT we have

d2

(FT

(η), FT

)= inf

E∥∥X1 −X2

∥∥21/2 ≤

E∥∥X1 −X2

∥∥21/2

. (A.13)

Hence

d2

(FT

(η), FT

)= Op

(T−1

)(A.14)

and thereby

d2

(FT

(η), F

)= Op

(T−1

). (A.15)

Therefore, asT →∞ we get

d2

(FT

(η), F

)= op

(1). (A.16)

PROOF OFPROPOSITION 3.1 Let Σu(T

)= T−1

∑Tt=1 utu′t be the empirical variance-covariance matrix of the true

residualsut. Then, applying the triangular inequality to Schur’s norm, we get

∥∥∥Σ∗u(η)− Σu

∥∥∥ ≤∥∥∥Σ∗u

(η)− Σu

(T

)∥∥∥ +∥∥∥Σu

(T

)− Σu

∥∥∥, (A.17)

with∥∥∥Σu

(T

)− Σu

∥∥∥ = Op

(T−1/2

). Thus

∥∥∥Σ∗u(η)− Σu

∥∥∥ ≤∥∥∥Σ∗u

(η)− Σu

(T

)∥∥∥ + op(1), (A.18)

where

Σ∗u(η)

= E[u∗t

(η)u∗t

(η)′]

= Σu(η)

= T−1T∑

t=1

ut(η)ut

(η)′

. (A.19)

Therefore, it follows that

∥∥∥Σ∗u(η)− Σu

∥∥∥ ≤ T−1T∑

t=1

∥∥∥ut(η)ut

(η)′ − utu

′t

∥∥∥ + op(1)

≤ T−1T∑

t=1

∥∥∥ut(η)− ut

∥∥∥∥∥∥ut

(η)∥∥∥ +

∥∥∥ut

∥∥∥∥∥∥ut

(η)− ut

∥∥∥

+ op(1), (A.20)

where by theorem3.1 ∥∥∥ut(η)− ut

∥∥∥2

= Op

(T−1

). (A.21)

Hence, ∥∥∥Σ∗u(η)− Σu

∥∥∥ = Op

(T−1/2

), (A.22)

and thereafter, ∥∥∥Σ∗u(η)− Σu

∥∥∥ = op(1). (A.23)

PROOF OFTHEOREM 3.3 Let firstX1,t = ut(nT

)andX2,t = ut. Then

E∥∥∥X1 −X2

∥∥∥2

= T−1T∑

t=1

∥∥∥ut(nT

)− ut

∥∥∥2 ≤ 2T−1

T∑

t=1

∥∥∥ut(nT

)− ut

∥∥∥2

+ 2∥∥∥uT

(nT

)∥∥∥2, (A.24)

26

Page 30: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

with∥∥∥uT

(nT

)∥∥∥2

= Op(T−1

)and

∥∥∥ut(nT

)− ut

∥∥∥2 ≤ 2

∥∥∥ut(nT

)− ut(nT

)∥∥∥2

+∥∥∥ut

(nT

)− ut

∥∥∥2

≤ 2∥∥∥Π

(nT

)−Π(nT

)∥∥∥2∥∥∥Yt

(nT

)∥∥∥2

+ 2

∥∥∥∥∞∑

τ=nT +1

Πτ yat−τ

∥∥∥∥2

≤ Op

(n2T

T

)+ Op

(k)( ∞∑

τ=nT +1

∥∥Πτ

∥∥)2

= Op

(n2T

T

)+ op

(1), (A.25)

where Π(nT

)=

[µΠ

(nT

), Π1

(nT

), . . . , ΠnT

(nT

)]are the multivariate least squares or Yule-Walker estimates of the

truncated stacked matrixΠ(nT

)=

[µΠ

(nT

), Π1, . . . , ΠnT

]up to the lag ordernT , Yt

(nT

)=

[1, y′t−1, . . . , y′t−nT

]′

andyat = yt − µy . Then using Theorem 3.1 of Dufour and Jouini (2005c), and assumptions2.1 to 2.4we get

E∥∥∥X1 −X2

∥∥∥2

= Op

(n2T

T

). (A.26)

Now, let ratherX1,t = et(nT

). Then

E∥∥∥X1 −X2

∥∥∥2

= T−1T∑

t=1

∥∥∥et(nT

)− ut

∥∥∥2 ≤ 2T−1

T∑

t=1

∥∥∥et(nT

)− ut

∥∥∥2

+ 2∥∥∥eT

(nT

)∥∥∥2, (A.27)

with∥∥∥eT

(nT

)∥∥∥2

= Op(T−1

)and

∥∥et(nT

)− ut

∥∥ =∥∥∥[X′

t ⊗ Ik

]Rη − [

Xt(nT

)′ ⊗ Ik

]Rη

∥∥∥

≤∥∥Ik

∥∥∥∥R∥∥[∥∥Xt

∥∥∥∥η − η∥∥ +

∥∥∥Xt − Xt(nT

)∥∥∥∥∥η

∥∥]

= Op

( nT

T 1/2

), (A.28)

since it has been shown in Theorem 3.3 of Dufour and Jouini (2005c) that the second step linear estimatorη is√

T -consistent of

η, and also ∥∥∥Xt − Xt(nT

)∥∥∥ = Op

( nT

T 1/2

). (A.29)

Hence, by assumptions2.1 to 2.4we get

E∥∥∥X1 −X2

∥∥∥2

= Op

(n2T

T

). (A.30)

Finally, consider the case whereX1,t = ut(η). Then

E∥∥∥X1 −X2

∥∥∥2

= T−1T∑

t=1

∥∥∥ut(η)− ut

∥∥∥2 ≤ 2T−1

T∑

t=1

∥∥∥ut(η)− ut

∥∥∥2

+ 2∥∥∥uT

(η)∥∥∥

2(A.31)

where∥∥∥uT

(η)∥∥∥ = Op

(T−1

)under the assumptions2.1 to 2.4. Further, we have

∥∥∥ut(η)− ut

∥∥∥2 ≤ 2

∥∥∥ut(η)− ut

(η)∥∥∥

2+ 2

∥∥∥ut(η)− ut

∥∥∥2

≤ 4

∥∥∥∥t−1∑

τ=0

[Xt−τ

(η)′ ⊗ Λτ

(η)− Λτ

(η)]∥∥∥∥

2

+

∥∥∥∥t−1∑

τ=0

[Xt−τ

(η)′ ⊗ Λτ

(η)]∥∥∥∥

2

∥∥R∥∥2∥∥η − η

∥∥2

+2

∥∥∥∥∞∑

τ=t

Λτ(η)[

Φ0yat−τ −

p∑

i=1

Φiyat−i−τ

]∥∥∥∥2

, (A.32)

27

Page 31: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

where ∥∥∥∥t−1∑

τ=0

[Xt−τ

(η)′ ⊗ Λτ

(η)]∥∥∥∥

2

≤( t−1∑

τ=0

∥∥∥Xt−τ

(η)′ ⊗ Λτ

(η)∥∥∥

)2

, (A.33)

with[Xt−τ

(η)′ ⊗ Λτ

(η)]

=[Xt−τ

(η)′ ⊗ Ik

][I1 ⊗ Λτ

(η)]

. Therefore,

∥∥∥ut(η)− ut

∥∥∥2 ≤ 4

(k

t−1∑

τ=0

∥∥∥Xt−τ

(η)∥∥∥

2)( t−1∑

τ=0

∥∥∥Λτ(η)− Λτ

(η)∥∥∥

2)

+

(k

t−1∑

τ=0

∥∥∥Xt−τ

(η)∥∥∥

2)( t−1∑

τ=0

∥∥∥Λτ(η)∥∥∥

2)

∥∥R∥∥2∥∥η − η

∥∥2

+2

( ∞∑

τ=t

∥∥∥Λτ(η)∥∥∥

2)( ∞∑

τ=t

∥∥∥Φ0Yt−τ −p∑

i=1

ΦiYt−i−τ

∥∥∥2)

= Op

(T−1

)+ Op

(T−2

)+ op

(1), (A.34)

under the assumptions2.1 to 2.4. Hence, as a result we get

E∥∥∥X1 −X2

∥∥∥2

= Op

(T−1

). (A.35)

PROOF OFTHEOREM 3.4 The proof of this theorem is analogous to that of theorem3.2but under the assumptions2.1 to

2.4.

PROOF OFPROPOSITION3.2 By theorems3.3and3.4, using the same arguments as in proposition3.1, the statements of

the present proposition can be also easily established under the assumptions2.1 to 2.4.

PROOF OFPROPOSITION3.3 Recall first thaty∗t = µy +∑t+b−1

v=0 Ψv(η)u∗t−v

(η)

andyt = µy +∑∞

v=0 Ψv(η)ut−v .

Then for ease of notation, letu∗t = u∗t(η), Ψv = Ψv

(η)

andΨv = Ψv(η). In addition, letY ∗1,t

(nT

)=

[y∗′t−1, . . . , y∗′t−nT

]′

andY1,t

(nT

)=

[y′t−1, . . . , y′t−nT

]′. Further, setting

ΓY ∗1(nT

)= T−1

T∑

t=1

Y ∗1t

(nT

)Y ∗1t

(nT

)′ and ΓY1

(nT

)= E

[Y1t

(nT

)Y1t

(nT

)′]. (A.36)

Then it can be easily seen that

∥∥∥ΓY ∗(nT

)− ΓY

(nT

)∥∥∥2

=∥∥∥ΓY ∗1

(nT

)− ΓY1

(nT

)∥∥∥2

+ 2

nT∑

i=1

∥∥∥T−1T∑

t=1

y∗t−i − E(yt−i

)∥∥∥2, (A.37)

with

∥∥∥T−1T∑

t=1

y∗t−i − E(yt−i

)∥∥∥ ≤ ∥∥µy − µy

∥∥ +

t+b−1−i∑

v=0

∥∥Ψv −Ψv

∥∥ +∥∥Ψv

∥∥∥∥∥T−1

T∑

t=1

u∗t−i−v

∥∥∥

= Op

(T−1/2

)(A.38)

since∥∥µy − µy

∥∥ = Op(T−1/2

),∥∥∥T−1

∑Tt=1 u∗t−i−v

∥∥∥ = Op(T−1/2

)and by lemma 4.1 of Dufour and Jouini (2005c) we

have∑t+b−1−i

v=0

∥∥Ψv −Ψv

∥∥ = Op(T−1/2

). Hence,

nT∑

i=1

∥∥∥T−1T∑

t=1

y∗t−i − E(yt−i

)∥∥∥2

= Op

(nT

T

). (A.39)

28

Page 32: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Now, letΓY ∗1(nT

)(l)

= T−1∑T

t=1 y∗t−iy∗′t−i+l andΓY1

(nT

)(l)

= E[y∗t−iy

∗′t−i+l

], wherel = −nT +1, . . . , nT − 1 and

i = 1, . . . , nT . Then we can easily show that

ΓY ∗1(nT

)(l)− ΓY1

(nT

)(l)

= µyµ′y +

T+b−1−i∑

v=0

Ψv

[T−1

T∑

t=1

u∗t−i−v

]µ′y + µy

[T−1

T∑

t=1

u∗t−i−v

]′Ψ′v

− µyµ′y

+

T+b−1−i∑

v=0

Ψv

[T−1

T∑

t=1

u∗t−i−vu∗′t−i−v

]Ψ′v+l −ΨvΣuΨ′v+l

+

T+b−1−i∑

κ=1

T+b−1−i−κ∑

v=0

Ψv

[T−1

T∑

t=1

u∗t−i−vu∗′t−i−κ−v

]Ψ′v+κ+l

+

T+b−1−i+l∑

κ=1

T+b−1−i+l−κ∑

v=0

Ψv+κ−l

[T−1

T∑

t=1

u∗t−i+l−κ−vu∗′t−i+l−v

]Ψ′v

+∞∑

v=T+b−i

ΨvΣuΨ′v+l

= T0 + T1 + T2 + T3 + T4, (A.40)

with∥∥T4

∥∥ ≤ ∑∞v=T+b−i

∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψ′v+l

∥∥ = o(1),

∥∥T0

∥∥ ≤∥∥µyµ′y − µyµ′y

∥∥ + 2

T+b−1−i∑

v=0

∥∥∥Ψv

[T−1

T∑

t=1

u∗t−i−v

]µ′y

∥∥∥ = Op(T−1/2

), (A.41)

∥∥T2

∥∥ ≤T+b−1−i∑

κ=1

T+b−1−i−κ∑

v=0

∥∥Ψv

∥∥∥∥∥∥T−1

T∑

t=1

u∗t−i−vu∗′t−i−κ−v

∥∥∥∥∥∥Ψv+κ+l

∥∥

= Op(T−1/2

)(A.42)

and

∥∥T3

∥∥ ≤T+b−1−i+l∑

κ=1

T+b−1−i+l−κ∑

v=0

∥∥Ψv+κ−l

∥∥∥∥∥∥T−1

T∑

t=1

u∗t−i+l−κ−vu∗′t−i+l−v

∥∥∥∥∥∥Ψv

∥∥

= Op(T−1/2

), (A.43)

since∥∥∥T−1

∑Tt=1 u∗t−i−vu∗′t−i−κ−v

∥∥∥ and∥∥∥T−1

∑Tt=1 u∗t−i+l−κ−vu∗′t−i+l−v

∥∥∥ are bothOp(T−1/2

). For the termT1 we

have

∥∥T1

∥∥ ≤∥∥∥∥

T+b−1−i∑

v=0

Ψv

[T−1

T∑

t=1

u∗t−i−vu∗′t−i−v

]Ψ′v+l −ΨvΣuΨ′v+l

∥∥∥∥

≤T+b−1−i∑

v=0

∥∥Ψv −Ψv

∥∥∥∥∥∥T−1

T∑

t=1

u∗t−i−vu∗′t−i−v

∥∥∥∥∥∥Ψv+l

∥∥

+

T+b−1−i∑

v=0

∥∥Ψv

∥∥∥∥∥∥T−1

T∑

t=1

u∗t−i−vu∗′t−i−v − Σu

∥∥∥∥∥∥Ψv+l

∥∥

+

T+b−1−i∑

v=0

∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv −Ψv+l

∥∥

= S1 + S2 + S3. (A.44)

By lemma 4.1 of Dufour and Jouini (2005c) we haveS3 = Op(T−1/2

). Note also that

∥∥∥∥T−1T∑

t=1

u∗t−i−vu∗′t−i−v − Σu

∥∥∥∥ ≤∥∥∥∥T−1

T∑

t=1

u∗t−i−vu∗′t−i−v − Σ∗u(η)∥∥∥∥ +

∥∥∥Σ∗u(η)− Σu

∥∥∥, (A.45)

29

Page 33: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

where∥∥∥T−1

∑Tt=1 u∗t−i−vu∗′t−i−v − Σ∗u

(η)∥∥∥ = Op

(T−1/2

). Then by proposition3.1 we getS2 = Op

(T−1/2

)since∥∥∥Σ∗u

(η)−Σu

∥∥∥ = Op(T−1/2

). Therefore, by lemma 4.1 of Dufour and Jouini (2005c) we can easily haveS1 = Op

(T−1/2

),

then∥∥T1

∥∥ = Op(T−1/2

)and finally,

∥∥∥ΓY ∗1(nT

)(l)− ΓY1

(nT

)(l)∥∥∥ = Op

(T−1/2

). Hence, it follows that

∥∥∥ΓY ∗1(nT

)− ΓY1

(nT

)∥∥∥ = Op

( nT

T 1/2

)(A.46)

and then ∥∥∥ΓY ∗(nT

)− ΓY

(nT

)∥∥∥ = Op

( nT

T 1/2

)(A.47)

Further, we can easily show that

∥∥∥ΓY ∗(nT

)−1 − ΓY

(nT

)−1∥∥∥ ≤

∥∥∥ΓY ∗(nT

)−1∥∥∥∥∥∥ΓY ∗

(nT

)− ΓY

(nT

)∥∥∥∥∥∥ΓY

(nT

)−1∥∥∥, (A.48)

with ∥∥∥ΓY ∗(nT

)−1∥∥∥ ≤

∥∥∥ΓY ∗(nT

)−1 − ΓY

(nT

)−1∥∥∥ +

∥∥∥ΓY

(nT

)−1∥∥∥, (A.49)

where∥∥∥ΓY

(nT

)−1∥∥∥ is uniformly bounded by a positive constant for allnT [see Berk (1974, Page 491)]. Therefore, since∥∥∥ΓY ∗

(nT

)− ΓY

(nT

)∥∥∥∥∥∥ΓY

(nT

)−1∥∥∥ < 1 is an event whose probability converges to one asT goes to infinity, we get

∥∥∥ΓY ∗(nT

)−1 − ΓY

(nT

)−1∥∥∥ ≤

∥∥∥ΓY ∗(nT

)− ΓY

(nT

)∥∥∥∥∥∥ΓY

(nT

)−1∥∥∥2

1−∥∥∥ΓY ∗

(nT

)− ΓY

(nT

)∥∥∥∥∥∥ΓY

(nT

)−1∥∥∥

= Op

( nT

T 1/2

). (A.50)

Moreover, we have ∥∥∥ΓY ∗(nT

)−1 − ΓY

(nT

)−1∥∥∥1≤

∥∥∥ΓY ∗(nT

)−1 − ΓY

(nT

)−1∥∥∥. (A.51)

Hence ∥∥∥ΓY ∗(nT

)−1 − ΓY

(nT

)−1∥∥∥1

= Op

( nT

T 1/2

)(A.52)

and by assumption2.3we get∥∥∥ΓY ∗

(nT

)− ΓY

(nT

)∥∥∥1

= op(1).

PROOF OFTHEOREM 3.5 Setting

Γ∗1(T

)= T−1

T∑

t=1

[u∗t

(nT

)− ut

]Y ∗t

(nT

)′ (A.53)

and

Γ∗2(T

)= T−1

T∑

t=1

utY∗t

(nT

)′. (A.54)

Then it can be shown that

∥∥∥Π∗(nT

)− Π(nT

)∥∥∥ ≤∥∥∥Γ∗1

(T

)∥∥∥ +∥∥∥Γ∗2

(T

)∥∥∥∥∥∥ΓY ∗

(nT

)−1∥∥∥1, (A.55)

sincey∗t = Π(nT

)Y ∗t

(nT

)+ u∗t

(nT

), by the resampling scheme. In particular, we have

E∥∥∥Γ∗1

(T

)∥∥∥ ≤ T−1T∑

t=1

E∥∥∥[u∗t

(nT

)− ut

]Y ∗t

(nT

)′∥∥∥

E∥∥∥u∗t

(nT

)− ut

∥∥∥21/2

E∥∥∥Y ∗t

(nT

)∥∥∥21/2

= Op

( n3/2T

T 1/2

)+ Op

(n

1/2T

) ∞∑

τ=nT +1

∥∥Πτ

∥∥, (A.56)

30

Page 34: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

sinceE∥∥∥Y ∗t

(nT

)∥∥∥2

= nT tr[ΓY ∗

(nT

)(0)]

andE∥∥∥u∗t

(nT

) − ut

∥∥∥2

= Op(n2

T T−1)

+ Op(1)( ∑∞

τ=nT +1

∥∥Πτ

∥∥)2

.

Hence, under the assumptions2.4 we have∥∥∥Γ∗1

(T

)∥∥∥ = Op

(n

3/2T /T 1/2

). Moreover, under the assumption2.5 we get∥∥∥Γ∗1

(T

)∥∥∥ → 0 in probability. Now, sinceut andY ∗t(nT

)are independent we have

E∥∥∥Γ∗2

(T

)∥∥∥2

= T−2T∑

t=nT +1

E(u′tut

)E[Y ∗t

(nT

)′Y ∗t

(nT

)]

= T−1tr[Σu

]tr

[ΓY ∗

(nT

)(0)]

= T−1tr[Σu

]1 + nT tr

[Γy∗

(0)]

= Op

(nT

T

), (A.57)

whereΓy∗(l)

= E(y∗t y∗′t+l

)for anyl. Hence

∥∥∥Γ∗2(T

)∥∥∥ → 0 in probability asnT , T →∞. Which completes the proof.

PROOF OFTHEOREM 3.6 Note that

∣∣∣SY ∗(nT

)− SY ∗(nT

)∣∣∣ =

∣∣∣∣T 1/2l(nT

)′vec

[T−1

T∑

t=1

u∗t(nT

)Y ∗t

(nT

)′(

ΓY ∗(nT

)−1 − ΓY ∗(nT

)−1)]∣∣∣∣

≤ T 1/2∥∥∥l

(nT

)∥∥∥∥∥∥∥

T−1T∑

t=1

u∗t(nT

)Y ∗t

(nT

)′(

ΓY ∗(nT

)−1 − ΓY ∗(nT

)−1)∥∥∥∥

≤ T 1/2m1/22

∥∥∥∥T−1T∑

t=1

u∗t(nT

)Y ∗t

(nT

)′∥∥∥∥∥∥∥ΓY ∗

(nT

)−1 − ΓY ∗(nT

)−1∥∥∥1, (A.58)

where, by independence betweenu∗t(nT

)andY ∗t

(nT

)we have

E

∥∥∥∥T−1T∑

t=1

u∗t(nT

)Y ∗t

(nT

)′∥∥∥∥2

= T−2T∑

t=nT +1

E[u∗t

(nT

)′u∗t

(nT

)]E[Y ∗t

(nT

)′Y ∗t

(nT

)]

= T−1 tr[Σ∗u

(nT

)]1 + nT tr

[Γy∗

(0)]

= OP

(nT

T

). (A.59)

In the other side, by triangular inequality we know that

∥∥∥ΓY ∗(nT

)−1 − ΓY ∗(nT

)−1∥∥∥1≤

∥∥∥ΓY ∗(nT

)−1 − ΓY

(nT

)−1∥∥∥1

+∥∥∥ΓY

(nT

)−1 − ΓY ∗(nT

)−1∥∥∥1. (A.60)

Therefore, to complete the proof we have to identify the order of probability of∥∥∥ΓY

(nT

)−1 − ΓY ∗(nT

)−1∥∥∥1. To do so, recall

first that ∥∥∥ΓY

(nT

)− ΓY ∗(nT

)∥∥∥2

=∥∥∥ΓY1

(nT

)− ΓY ∗1(nT

)∥∥∥2

+ 2

nT∑

i=1

∥∥∥E(yt−i

)− E(y∗t−i

)∥∥∥2, (A.61)

where ∥∥∥E(yt−i

)− E(y∗t−i

)∥∥∥2

=∥∥µy − µy

∥∥2= Op

(T−1

)(A.62)

andΓY ∗1(nT

)= E

[Y ∗1t

(nT

)Y ∗1t

(nT

)′], with

ΓY1

(nT

)(l)− ΓY ∗1

(nT

)(l)

=

t+b−1−i∑

v=0

ΨvΣuΨ′v+l − ΨvΣ∗u

(η)Ψ′v+l

+∞∑

v=t+b−i

ΨvΣuΨ′v+l

+(µyµ′y − µyµ′y

)

= T1 + T2 + T3, (A.63)

31

Page 35: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

for l = −nT + 1, ..., nT − 1, where∥∥T2

∥∥ = o(1),

∥∥T3

∥∥ ≤∥∥µy − µy

∥∥∥∥µy

∥∥ +∥∥µy

∥∥∥∥µy − µy

∥∥ = Op(T−1/2

)(A.64)

and

∥∥T1

∥∥ ≤∥∥∥∥

t+b−1−i∑

v=0

ΨvΣ∗u

(η)Ψ′v+l −ΨvΣuΨ′v+l

∥∥∥∥

≤t+b−1−i∑

v=0

∥∥Ψv −Ψv

∥∥∥∥∥Σ∗u

(η)∥∥∥

∥∥Ψv+l

∥∥ +

t+b−1−i∑

v=0

∥∥Ψv

∥∥∥∥∥Σ∗u

(η)− Σu

∥∥∥∥∥Ψv+l

∥∥

+

t+b−1−i∑

v=0

∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv −Ψv+l

∥∥

= Op(T−1/2

). (A.65)

Hence∥∥∥ΓY1

(nT

) − ΓY ∗1(nT

)∥∥∥ = Op

(nT T−1/2

)and then

∥∥∥ΓY

(nT

) − ΓY ∗(nT

)∥∥∥ = Op

(nT T−1/2

). Consequently,

we have∥∥∥ΓY

(nT

)−1 − ΓY ∗(nT

)−1∥∥∥ = Op

(nT T−1/2

)and then

∥∥∥ΓY

(nT

)−1 − ΓY ∗(nT

)−1∥∥∥1

= Op

(nT T−1/2

).

Finally, we get∣∣∣SY ∗

(nT

) − SY ∗(nT

)∣∣∣ = Op

(n

3/2T T−1/2

)and by the assumption that

n3T

T→ 0 asT, nT → ∞ we show

that∣∣∣SY ∗

(nT

)− SY ∗(nT

)∣∣∣ = op(1).

PROOF OFTHEOREM 3.7 Recall first that

∣∣∣SY ∗(nT

)− SY

(nT

)∣∣∣2 ≤ 2

∣∣w1

∣∣2 + 2∣∣w2

∣∣2, (A.66)

where

E∣∣w1

∣∣2 = E

∣∣∣∣T 1/2l(nT

)′vec

[T−1

T∑

t=1

u∗t(nT

)Y ∗t

(nT

)′

ΓY ∗(nT

)−1 − ΓY

(nT

)−1]∣∣∣∣

2

≤ T−1∥∥∥l

(nT

)∥∥∥2E

∣∣∣∣vec

[ T∑

t=1

u∗t(nT

)Y ∗t

(nT

)′

ΓY ∗(nT

)−1 − ΓY

(nT

)−1]∣∣∣∣

2

≤ T−1m22 E

∥∥∥∥T∑

t=1

u∗t(nT

)Y ∗t

(nT

)′∥∥∥∥2∥∥∥ΓY ∗

(nT

)−1 − ΓY

(nT

)−1∥∥∥2

1

= m22 tr

[Σ∗u

(nT

)]1 + nT tr

[Γy∗

(0)]∥∥∥Γ∗Y

(nT

)−1 − ΓY

(nT

)−1∥∥∥2

1

= Op

(n3T

T

). (A.67)

sinceu∗t(nT

)andY ∗t

(nT

)are independent. Therefore, under the assumption that

n3T

T→ 0 asT, nT →∞ we can easily show

thatE∣∣w1

∣∣2 = op(1). In the other hand, we have

E∣∣w2

∣∣2 = E

∣∣∣∣T 1/2l(nT

)′vec

[T−1

T∑

t=1

[u∗t

(nT

)Y ∗t

(nT

)′ − utYt(nT

)]ΓY

(nT

)−1]∣∣∣∣

2

≤ T−1m22E

∥∥∥∥T∑

t=1

[u∗t

(nT

)Y ∗t

(nT

)′ − utYt(nT

)]∥∥∥∥2∥∥∥ΓY

(nT

)−1∥∥∥2

1

≤ 2 T−1m22

∥∥∥ΓY

(nT

)−1∥∥∥2

1

S1 + S2

, (A.68)

where

S1 = E

∥∥∥∥T∑

t=1

[u∗t

(nT

)Y ∗t

(nT

)′ − utYt(nT

)′]∥∥∥∥2

(A.69)

32

Page 36: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

and

S2 = E

∥∥∥∥T∑

t=1

ut

[Yt

(nT

)− Yt(nT

)]′∥∥∥∥2

, (A.70)

with Yt(nT

)=

[1, y′t−1, . . . , y′t−nT

]′ and such thatyt−i = µy +∑t+b−1

v=0 Ψvut−i−v for i = 1, ..., nT , Setting also

Y1,t

(nT

)=

[y′t−1, . . . , y′t−nT

]′ andUt(nT

)=

[u′t−1, . . . , u′t−nT

]′. Then

Y1,t

(nT

)= inT µy +

∞∑

v=0

[InT ⊗Ψv

]Ut−v

(nT

)(A.71)

and

Y1,t

(nT

)= inT µy +

t+b−1∑

v=0

[InT ⊗ Ψv

]Ut−v

(nT

), (A.72)

whereinT is a vector of sizenT with all its components equal to one. Hence, by independence betweenut andYt(nT

)−Yt(nT

),

and since ∥∥∥Yt(nT

)− Yt(nT

)∥∥∥2

=∥∥∥Y1,t

(nT

)− Y1,t

(nT

)∥∥∥2, (A.73)

we have:

S2 = tr[Σu

] T∑

t=1

E∥∥∥Y1,t

(nT

)− Y1,t

(nT

)∥∥∥2

≤ tr[Σu

] T∑

t=1

2E

∥∥∥inT

(µy − µy

)∥∥∥2

+ 4E

∥∥∥∥t+b−1∑

v=0

(InT ⊗ [

Ψv −Ψv])

Ut−v

(nT

)∥∥∥∥2

+4E

∥∥∥∥∞∑

v=t+b

(InT ⊗Ψv

)Ut−v

(nT

)∥∥∥∥2

= 2 tr[Σu

](

Op(nT

)+ 2 nT

T∑

t=1

t+b−1∑

v=0

tr[(

Ψv −Ψv)′(

Ψv −Ψv)Σu

]+

∞∑

v=t+b

tr[Ψ′vΨvΣu

])

≤ 2 tr[Σu

](

Op(nT

)+ 2 nT tr

[Σu

] T∑

t=1

t+b−1∑

v=0

∥∥Ψv −Ψv

∥∥2+

∞∑

v=t+b

∥∥Ψv

∥∥2)

= Op(nT

). (A.74)

since

E∥∥∥inT

(µy − µy

)∥∥∥2

= Op(nT T−1

). (A.75)

In the other side, we have

S1 ≤ 2 E

∥∥∥∥T∑

t=1

u∗t(nT

)[Y ∗t

(nT

)− Yt(nT

)]′∥∥∥∥2

+ 2 E

∥∥∥∥T∑

t=1

[u∗t

(nT

)− ut

]Yt

(nT

)′∥∥∥∥2

. (A.76)

Moreover, using the fact thatu∗t(nT

)andY ∗t

(nT

)− Yt(nT

)are independent and so areu∗t

(nT

)− ut andYt(nT

), we have:

S1 ≤ 2T∑

t=1

E∥∥∥u∗t

(nT

)∥∥∥2E∥∥∥Y ∗t

(nT

)− Yt(nT

)∥∥∥2

+ 2T∑

t=1

E∥∥∥u∗t

(nT

)− ut

∥∥∥2E∥∥∥Yt

(nT

)∥∥∥2

= 2T∑

t=1

E∥∥∥u∗t

(nT

)∥∥∥2E∥∥∥Y ∗1,t

(nT

)− Y1,t

(nT

)∥∥∥2

+ 2T∑

t=1

E∥∥∥u∗t

(nT

)− ut

∥∥∥2E∥∥∥Yt

(nT

)∥∥∥2

(A.77)

since ∥∥∥Y ∗t(nT

)− Yt(nT

)∥∥∥2

=∥∥∥Y ∗1,t

(nT

)− Y1,t

(nT

)∥∥∥2. (A.78)

33

Page 37: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Similarly, settingU∗t(nT

)=

[u∗t−1

(η)′

, . . . , u∗t−nT

(η)′]′ we can easily show that

Y ∗1,t

(nT

)= inT µy +

t+b−1∑

v=0

[InT ⊗ Ψv

]U∗t−v

(nT

). (A.79)

Hence, we have

S1 ≤ 2 tr[Σ∗u

(nT

)] T∑

t=1

E

∥∥∥∥t+b−1∑

v=0

(InT ⊗ Ψv

)[U∗t

(nT

)− Ut(nT

)]∥∥∥∥2

+ 2 tr[ΓY

(nT

)] T∑

t=1

E∥∥∥u∗t

(nT

)− ut

∥∥∥2

≤ 2 nT tr[Σ∗u

(nT

)] T∑

t=1

t+b−1∑

v=0

∥∥Ψv

∥∥2tr

[E(

u∗t−v

(η)− ut−v

)(u∗t−v

(η)− ut−v

)]+ Op

(n3

T

)

= 2 nT tr[Σ∗u

(nT

)] T∑

t=1

E∥∥∥u∗t

(η)− ut

∥∥∥2

t+b−1∑

v=0

∥∥Ψv

∥∥2+ Op

(n3

T

)

= Op(nT

)+ Op

(n3

T

)= Op

(n3

T

). (A.80)

sincetr[ΓY

(nT

)]= 1 + tr

[ΓY1

(nT

)]= 1 + nT tr

[ΓY1

(nT

)(0)]

= Op(nT

), E

∥∥∥u∗t(nT

) − ut

∥∥∥2

= Op(n2

T T−1)

andE∥∥∥u∗t

(η) − ut

∥∥∥2

= Op(T−1

), whereΓY

(nT

)= E

[Yt

(nT

)Yt

(nT

)′], ΓY1

(nT

)= E

[Y1,t

(nT

)Y1,t

(nT

)′] and

ΓY1

(nT

)(0)

= E(yty′t

). Therefore,E

∣∣w2

∣∣2 = Op

(n3

T T−1)

and therebyE∣∣∣SY ∗

(nT

) − SY

(nT

)∣∣∣2

= Op

(n3

T T−1)

.

Then under the assumption2.5we get

E∣∣∣SY ∗

(nT

)− SY

(nT

)∣∣∣2

= op(1). (A.81)

PROOF OFPROPOSITION3.4 Let Σ∗T(nT

)= T−1

∑Tt=1 u∗t

(nT

)u∗t

(nT

)′. Then

∥∥∥Σ∗u(nT

)− Σu

∥∥∥ ≤∥∥∥Σ∗u

(nT

)− Σ∗u(nT

)∥∥∥ +∥∥∥Σ∗u

(nT

)− Σu

∥∥∥

≤∥∥∥Σ∗u

(nT

)− Σ∗T(nT

)∥∥∥ +∥∥∥Σ∗T

(nT

)− Σ∗u(nT

)∥∥∥ + Op

( nT

T 1/2

), (A.82)

with∥∥∥Σ∗T

(nT

)− Σ∗u(nT

)∥∥∥ = Op(T−1/2

)and

∥∥∥Σ∗u(nT

)− Σ∗T(nT

)∥∥∥ ≤ T−1T∑

t=1

∥∥∥u∗t(nT

)− u∗t(nT

)∥∥∥∥∥∥u∗t

(nT

)∥∥∥ +∥∥∥u∗t

(nT

)∥∥∥∥∥∥u∗t

(nT

)− u∗t(nT

)∥∥∥

, (A.83)

where ∥∥∥u∗t(nT

)− u∗t(nT

)∥∥∥2 ≤

∥∥∥Π(nT

)− Π∗(nT

)∥∥∥2∥∥∥Y ∗t

(nT

)∥∥∥2

= Op

(n4T

T

). (A.84)

Hence∥∥∥Σ∗u

(nT

)− Σ∗u(nT

)∥∥∥ = Op

(n2

T T−1/2)

and then

∥∥∥Σ∗u(nT

)− Σu

∥∥∥ = Op

( n2T

T 1/2

). (A.85)

Therefore, under the assumptionn4

TT→ 0 asT, nT → ∞ we get

∥∥∥Σ∗u(nT

) − Σu

∥∥∥ = op(1)

and similarly as in the proof of

proposition3.3we can easily show that∥∥∥Σ∗u

(nT

)−1−Σ−1u

∥∥∥ = Op

(n2

T T−1/2)

and then∥∥∥Σ∗u

(nT

)−1−Σ−1u

∥∥∥ = op(1)

as

T, nT →∞.

PROOF OFPROPOSITION 3.5 Recall first thatyt = µy+∑∞

v=0 Ψvut−v andy∗t = µy +∑t+b−1

v=0 Ψvu∗t−v , where

Ψv = Ψv(η)

andu∗t = u∗t(η)

are used for ease of notation. Then we have

∥∥∥Q∗−1X −Q−1

X

∥∥∥2

1≤

∥∥∥Q∗−1X −Q−1

X

∥∥∥2 ≤ ∥∥R

∥∥4∥∥∥Υ∗X −ΥX

∥∥∥2

(A.86)

34

Page 38: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

where ∥∥∥Υ∗X −ΥX

∥∥∥2 ≤ 2

∥∥∥Γ∗X(η)− ΓX

∥∥∥2∥∥∥Σ∗u

(η)−1

∥∥∥2

+ 2∥∥ΓX

∥∥2∥∥∥Σ∗u

(η)−1 − Σ−1

u

∥∥∥2. (A.87)

Now let X1,t = 1, X2,t = vt, X3,t =[y′t−1, . . . , y′t−p

]′, X4,t =[u′t−1, . . . , u′t−p

]′, X∗1,t = 1, X∗

2,t = v∗t(η),

X∗3,t =

[y∗′t−1, . . . , y∗′t−p

]′ andX∗4,t =

[u∗t−1

(η)′

, . . . , u∗t−p

(η)′]′. ThenXi,t andX∗

i,t, i = 1, 2, 3, 4 are vectors of size

hi, respectively, withh1 = 1, h2 = k andh3 = h4 = kp. Setting∆Xij= Γ∗Xij

− ΓXijwith Γ∗Xij

= E[X∗

i,tX∗′j,t

]and

ΓXij= E

[Xi,tX

′j,t

], we can easily show that

∥∥∥Γ∗X(η)− ΓX

∥∥∥2

=4∑

i=1

∥∥∆Xii

∥∥2+ 2

4∑

i=2

i−1∑

j=1

∥∥∆Xij

∥∥2, (A.88)

since∥∥∥∆Xij

∥∥∥2

=∥∥∥∆Xji

∥∥∥2. In particular, note that

∥∥∆X11

∥∥ = 0. Further, we have∥∥∆X22

∥∥ =∥∥Γ∗X22

− ΓX22

∥∥ with

Γ∗X22= µyµ′y +

∑t+b−1v=1 ΨvΣ∗uΨ′v andΓX22 = µyµ′y +

∑∞v=1 ΨvΣuΨ′v , whereΣ∗u = Σ∗u

(η)

for ease of notation. Then

we get

∥∥∆X22

∥∥ ≤∥∥µy − µy

∥∥∥∥µy

∥∥ +∥∥µy

∥∥∥∥µy − µy

∥∥

+

t+b−1∑

v=1

∥∥Ψv −Ψv

∥∥∥∥Σ∗u∥∥∥∥Ψv

∥∥ +∥∥Ψv

∥∥∥∥Σ∗u − Σu

∥∥∥∥Ψv

∥∥ +∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv −Ψv

∥∥

+∞∑

v=t+b

∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv

∥∥. (A.89)

Hence, by lemma 4.1 and theorem 4.1 of Dufour and Jouini (2005c), and proposition3.1we get∥∥∆X22

∥∥ = Op(T−1/2

)since

∑∞v=t+b

∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv

∥∥ = o(1). Thus

∥∥∆X22

∥∥2= Op

(T−1

). Now, examining the term∆X33 , we have

∥∥∆X33

(l)∥∥ =

∥∥Γ∗X33

(l)− ΓX33

(l)∥∥

≤ ∥∥µy − µy

∥∥∥∥µy

∥∥ +∥∥µy

∥∥∥∥µy − µy

∥∥

+

t+b−1−i∑

v=0

∥∥Ψv −Ψv

∥∥∥∥Σ∗u∥∥∥∥Ψv+l

∥∥ +∥∥Ψv

∥∥∥∥Σ∗u − Σu

∥∥∥∥Ψv+l

∥∥ +∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv+l −Ψv+l

∥∥

+∞∑

v=t+b−i

∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv+l

∥∥, (A.90)

for l = −p + 1, . . . , p − 1 and i = 1, . . . , p. Therefore, using the same arguments as for (A.84) we can easily show that∥∥∆X33

(l)∥∥ = Op

(T−1/2

)since

∑∞v=t+b−i

∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv+l

∥∥ = o(1). Hence,

∥∥∆X33

(l)∥∥2

= Op(T−1

). Moreover,

we have∥∥∆X44

∥∥2= p

∥∥Σ∗u(η)− Σu

∥∥2= Op

(T−1

). (A.91)

Also, by the above developments it is easy to show that

∥∥∆X21

∥∥2=

∥∥∆X31

∥∥2= Op

(T−1

)and

∥∥∆X41

∥∥2= op

(1). (A.92)

Further, we have

∥∥∆X32

∥∥2 ≤ 2p[∥∥µy − µy

∥∥∥∥µy

∥∥ +∥∥µy

∥∥∥∥µy − µy

∥∥]2

+2

p∑

i=1

[ t+b−1−i∑

v=0

∥∥Ψv −Ψv

∥∥∥∥Σ∗u∥∥∥∥Ψv+i

∥∥ +∥∥Ψv

∥∥∥∥Σ∗u − Σu

∥∥∥∥Ψv+i

∥∥

+∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv+i −Ψv+i

∥∥

+∞∑

v=t+b−i

∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv+i

∥∥]2

= Op(T−1

), (A.93)

35

Page 39: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

and∥∥∆X42

∥∥2 ≤ 2

p∑

j=1

∥∥Σ∗u − Σu

∥∥2∥∥Ψj

∥∥2+

∥∥Σu

∥∥2∥∥Ψj −Ψj

∥∥2

= Op(T−1

). (A.94)

Furthermore, since∆X43 is a lower triangular matrix, we have

∥∥∆X43

∥∥2 ≤ 2

p−1∑

i=0

(p− i

) ∥∥Σ∗u − Σu

∥∥2∥∥Ψj

∥∥2+

∥∥Σu

∥∥2∥∥Ψj −Ψj

∥∥2

= Op(T−1

). (A.95)

Hence,∥∥Γ∗X

(η)− ΓX

∥∥2= Op

(T−1

),∥∥Υ∗X −ΥX

∥∥2= Op

(T−1

)and

∥∥Q∗−1X −Q−1

X

∥∥2

1= Op

(T−1

). Consequently, we

can easily show that∥∥Q∗X − QX

∥∥2

1= Op

(T−1

). Moreover, asT goes to infinity we have

∥∥Q∗X − QX

∥∥2

1= op

(1). Now,

consider the termQ∗X(nT

)−1 −Q−1X then

∥∥∥Q∗X(nT

)−1 −Q−1X

∥∥∥2

1≤

∥∥∥Q∗X(nT

)−1 −Q−1X

∥∥∥2

≤ 2∥∥R

∥∥4∥∥∥Γ∗X

(nT

)− ΓX

∥∥∥2∥∥∥Σ∗u

(nT

)−1∥∥∥2

+∥∥∥ΓX

∥∥∥2∥∥∥Σ∗u

(nT

)−1 − Σ−1u

∥∥∥2

, (A.96)

where∥∥Σ∗u

(nT

)−1 − Σ−1u

∥∥2= Op

(n2

T T−1)

by proposition3.4, and

∥∥∥Γ∗X(nT

)− ΓX

∥∥∥2

=∥∥∥T−1

T∑

t=1

X∗t

(nT

)X∗

t

(nT

)′ − E(XtX

′t

)∥∥∥2

≤ 2∥∥T1

∥∥2+

∥∥T2

∥∥2, (A.97)

whereT1 = T−1∑T

t=1

[X∗

t

(nT

)X∗

t

(nT

)′ −XtX′t

]andT2 = T−1

∑Tt=1 XtX′

t − E(XtX′

t

), with

∥∥T2

∥∥2= Op

(T−1

)

and

∥∥T1

∥∥ ≤ T−1T∑

t=1

∥∥∥X∗t

(nT

)X∗

t

(nT

)′ −XtX′t

∥∥∥

≤ T−1T∑

t=1

∥∥∥[X∗

t

(nT

)−Xt]X∗

t

(nT

)′∥∥∥ +∥∥∥Xt

[X∗

t

(nT

)−Xt]′∥∥∥

≤ T−1T∑

t=1

∥∥∥X∗t

(nT

)−Xt

∥∥∥∥∥∥X∗

t

(nT

)∥∥∥ +∥∥∥Xt

∥∥∥∥∥∥X∗

t

(nT

)−Xt

∥∥∥

. (A.98)

Using theorems3.1 to 3.4and proposition3.4we can easily show that

∥∥∥X∗t

(nT

)−Xt

∥∥∥2

=∥∥∥[y∗t − yt

]+

[ut − u∗t

(nT

)]∥∥∥2

+

p∑

i=1

∥∥∥y∗t−i − yt−i

∥∥∥2

+

p∑

j=1

∥∥∥u∗t−j

(nT

)− ut−j

∥∥∥2

≤ (2 + p

)∥∥∥y∗t − yt

∥∥∥2

+∥∥∥u∗t

(nT

)− ut

∥∥∥2

=(2 + p

)Op

( 1

T

)+ Op

(n2T

T

)

= Op

(n2T

T

). (A.99)

Hence,∥∥X∗

t

(nT

) − Xt

∥∥ = Op(nT T−1/2

)and then

∥∥Γ∗X(nT

) − ΓX

∥∥2= Op

(n2

T T−1). Therefore, it follows that∥∥Q∗X

(nT

)−1 −Q−1X

∥∥2

1= Op

(n2

T T−1), and thereafter

∥∥Q∗X(nT

)−QX

∥∥2

1= Op

(n2

T T−1). Finally, under the assumption

2.3, we get∥∥Q∗X

(nT

)−QX

∥∥2

1= op

(1).

PROOF OFTHEOREM 3.9 Note thatη∗ − η = Q∗X(nT

)Ω∗X

(nT

). Then

∥∥η∗ − η∥∥ ≤

∥∥∥Q∗X(nT

)∥∥∥1

∥∥∥Ω∗X(nT

)− ΩX

∥∥∥ +∥∥∥Q∗X

(nT

)−QX

∥∥∥1

∥∥ΩX

∥∥ +∥∥QX

∥∥1

∥∥ΩX

∥∥ , (A.100)

36

Page 40: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

where,∥∥QX

∥∥1

= Op(1),∥∥ΩX

∥∥ = Op(T−1/2

)and by proposition3.5,

∥∥∥Q∗X(nT

)−QX

∥∥∥1

= Op

( nT

T 1/2

)and

∥∥∥Q∗X(nT

)∥∥∥1

= Op(1)

+ Op

( nT

T 1/2

). (A.101)

Now, consider the term∥∥Ω∗X

(nT

)− ΩX

∥∥. Then it can be easily seen that

∥∥∥Ω∗X(nT

)− ΩX

∥∥∥ ≤ ∥∥R∥∥∥∥∥Σ∗u

(nT

)−1∥∥∥∥∥∥W ∗1

X

(nT

)∥∥∥ +∥∥∥W ∗2

X

(nT

)∥∥∥ +∥∥∥W ∗3

X

(nT

)∥∥∥

+∥∥R

∥∥∥∥∥Σ∗u

(nT

)−1 − Σ−1u

∥∥∥∥∥WX

∥∥ , (A.102)

where

WX =1

T

T∑

t=1

utXt′ , W ∗1

X

(nT

) 1

T

T∑

t=1

[e∗t

(nT

)− ut][

X∗t

(nT

)−Xt]′ , (A.103)

W ∗2X

(nT

)=

1

T

T∑

t=1

[e∗t

(nT

)− ut]Xt′ and W ∗3

X

(nT

)=

1

T

T∑

t=1

ut[X∗

t

(nT

)−Xt]′ . (A.104)

By proposition3.4, we have

∥∥∥Σ∗u(nT

)−1∥∥∥ = Op

(1)

+ Op

( nT

T 1/2

)and

∥∥∥Σ∗u(nT

)−1 − Σ−1u

∥∥∥ = Op

( nT

T 1/2

). (A.105)

Moreover, by independence betweenut andX∗t

(nT

)−Xt, we have

E∥∥∥W ∗3

X

(nT

)∥∥∥2

= Op

(n2T

T 2

), (A.106)

and then∥∥∥W ∗3

X

(nT

)∥∥∥ = Op

(nTT

). In addition, we have

∥∥∥W ∗1X

(nT

)∥∥∥ ≤

1

T

T∑

t=1

∥∥e∗t(nT

)− ut

∥∥21/2

1

T

T∑

t=1

∥∥∥X∗t

(nT

)−Xt

∥∥∥21/2

= Op

(n2T

T

)(A.107)

since it can be easily seen using theorems3.3and3.4, and proposition3.2 that

∥∥e∗t(nT

)− ut

∥∥ = Op

( nT

T 1/2

)and

∥∥∥X∗t

(nT

)−Xt

∥∥∥ = Op

( nT

T 1/2

). (A.108)

Now consider the termW ∗2X

(nT

), then proceeding with the same way as in the proof of theorem 3.3 in Dufour and Jouini

(2005c), we can easily see by independence betweene∗t(nT

)− ut andXt that

∥∥∥W ∗2X

(nT

)∥∥∥2

=∥∥∥ 1

T

T∑

t=1

[e∗t

(nT

)− ut]Xt

∥∥∥ = Op

(n2T

T 2

)(A.109)

Finally, since by independence betweenut andXt we have∥∥∥WX

∥∥∥ = Op

(T−1/2

), it follows that

∥∥∥Ω∗X(nT

)− ΩX

∥∥∥ = Op

(n2T

T

). (A.110)

Hence, we get∥∥η∗ − η

∥∥ = Op

( 1

T 1/2

)+ Op

(n2T

T

), (A.111)

and in view of the assumption2.3, we conclude that∥∥η∗ − η

∥∥ = op(1).

37

Page 41: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

PROOF OFTHEOREM 3.10 Recall that and

∥∥∥S∗X(nT

)− S∗X∥∥∥ = T 1/2

∥∥∥Q∗X(nT

)Ω∗X

(nT

)−Q∗XΩ∗X∥∥∥,

≤ T 1/2

∥∥∥Q∗X(nT

)∥∥∥1

∥∥∥Ω∗X(nT

)− Ω∗X∥∥∥ +

∥∥∥Q∗X(nT

)−Q∗X∥∥∥1

∥∥∥Ω∗X∥∥∥

(A.112)

where by proposition3.5

∥∥∥Q∗X(nT

)−Q∗X∥∥∥1

≤∥∥∥Q∗X

(nT

)−QX

∥∥∥1

+∥∥∥QX −Q∗X

∥∥∥1

= Op

( nT

T 1/2

)+ Op

( 1

T 1/2

)= Op

( nT

T 1/2

). (A.113)

Moreover, we have ∥∥∥Ω∗X(nT

)− Ω∗X∥∥∥ ≤

∥∥∥Ω∗X(nT

)− ΩX

∥∥∥ +∥∥∥ΩX − Ω∗X

∥∥∥ (A.114)

with∥∥∥Ω∗X

(nT

)− ΩX

∥∥∥ = Op

(n2

TT

). Now, using the fact that

∥∥vec[A

]∥∥ =∥∥A

∥∥ for any matrixA, we van easily show that

∥∥∥Ω∗X − ΩX

∥∥∥ ≤ ∥∥R∥∥∥∥Σ∗u

(η)−1∥∥

∥∥∥W 1∗X

(η)∥∥∥ +

∥∥∥W 2∗X

(η)∥∥∥ +

∥∥∥W 3∗X

(η)∥∥∥

+∥∥R

∥∥∥∥∥Σ∗u

(η)−1 − Σ−1

u

∥∥∥∥∥∥WX

∥∥∥, (A.115)

where

W 1∗X

(η)

=1

T

T∑

t=1

[u∗t

(η)− ut

][X∗

t

(η)−Xt

]′, (A.116)

W 2∗X

(η)

=1

T

T∑

t=1

[u∗t

(η)− ut

]Xt′ and W 3∗

X

(η)

=1

T

T∑

t=1

ut

[X∗

t

(η)−Xt

]′, (A.117)

with∥∥WX

∥∥ = Op(T−1/2

). In particular, since it can be easily seen that

∥∥u∗t(η)− ut

∥∥ = Op

(T−1/2

)and

∥∥∥X∗t

(η)−Xt

∥∥∥ = Op

(T−1/2

), (A.118)

we can easily show that

∥∥∥W 1∗X

(η)∥∥∥ ≤

T−1

T∑

t=1

∥∥∥u∗t(η)− ut

∥∥∥21/2

T−1T∑

t=1

∥∥∥X∗t

(η)−Xt

∥∥∥21/2

= Op

(T−1

). (A.119)

Further, by independence betweenu∗t(η)− ut andXt in one side and betweenut andX∗

t

(η)−Xt in the other side, we have

E∥∥∥W 2∗

X

(η)∥∥∥

2= Op

(T−2

)and E

∥∥∥W 2∗X

(η)∥∥∥

2= Op

(T−2

). (A.120)

Hence, we get ∥∥∥Ω∗X − ΩX

∥∥∥ = Op

(T−1

)(A.121)

since ∥∥∥Σ∗u(η)−1 − Σ−1

u

∥∥∥ = Op

(T−1/2

). (A.122)

Then, we have ∥∥∥Ω∗X(nT

)− Ω∗X∥∥∥ = Op

(n2T

T

)and

∥∥∥Ω∗X∥∥∥ = Op

(T−1/2

). (A.123)

Consequently, we get ∥∥∥S∗X(nT

)− S∗X∥∥∥ = Op

( n2T

T 1/2

)(A.124)

38

Page 42: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Moreover, under the assumption2.6we have

∥∥∥S∗X(nT

)− S∗X∥∥∥ = op

(1). (A.125)

PROOF OFTHEOREM 3.11 Note first that

E∥∥∥S∗X − SX

∥∥∥2

= T E∥∥∥Q∗XΩ∗X −QXΩX

∥∥∥2,

≤ 2T∥∥∥Q∗X −QX

∥∥∥2

1E∥∥∥Ω∗X

∥∥∥2

+ 2T∥∥∥Q∗X

∥∥∥2

1E∥∥∥Ω∗X − ΩX

∥∥∥2

(A.126)

where by proposition3.5we have

∥∥∥Q∗X −QX

∥∥∥2

1= Op

(T−1

)and

∥∥∥Q∗X∥∥∥2

1= Op

(1). (A.127)

Moreover, by the above theorem, we have

E∥∥∥Ω∗X − ΩX

∥∥∥ = Op

(T−2

)and E

∥∥∥Ω∗X∥∥∥2

= Op

(T−1

). (A.128)

Hence

E∥∥∥S∗X − SX

∥∥∥2

= Op

(T−1

). (A.129)

Then asT goes to infinity

E∥∥∥S∗X − SX

∥∥∥2

= op(1). (A.130)

PROOF OFPROPOSITION3.6 Let first

Σ∗u(T

)=

1

T

T∑

t=1

u∗t(η)u∗t

(η)′

. (A.131)

Then we can easily see that

∥∥∥Σ∗u(η∗

)− Σu

∥∥∥ ≤∥∥∥Σ∗u

(η∗

)− Σ∗u(T

)∥∥∥ +∥∥∥Σ∗u

(T

)− Σ∗u(η)∥∥∥ +

∥∥∥Σ∗u(η)− Σu

∥∥∥, (A.132)

with ∥∥∥Σ∗u(T

)− Σ∗u(η)∥∥∥ =

∥∥∥Σ∗u(η)− Σu

∥∥∥ = Op(T−1/2

), (A.133)

and

∥∥∥Σ∗u(η∗

)− Σ∗u(T

)∥∥∥ ≤ 1

T

T∑

t=1

∥∥∥u∗t(η∗

)− u∗t(η)∥∥∥

∥∥∥u∗t(η∗

)∥∥∥ +∥∥∥u∗t

(η)∥∥∥

∥∥∥u∗t(η∗

)− u∗t(η)∥∥∥

. (A.134)

SettingΦµ(p)

=[− µΦ, Φ0,−Φ1, . . . ,−Φp

]andY ∗t

(p)

=[1, y∗t , y∗t−1, . . . , y∗t−p

]. Then we can easily show that

u∗t(η)

=

t−1∑

τ=0

Λτ(η)Φµ

(p)Y ∗t−τ

(p). (A.135)

Moreover, by Lemma 4.2 of Dufour and Jouini (2005c) we have

u∗t(η)

=

t−1∑

τ=0

Λτ(η)Φµ

(p)Y ∗t−τ

(p)

and u∗t(η∗

)=

t−1∑

τ=0

Λτ(η∗

)Φ∗µ

(p)Y ∗t−τ

(p)

(A.136)

39

Page 43: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

whereΦµ(p)

=[− µΦ, Φ0,−Φ1, . . . ,−Φp

]andΦ∗µ

(p)

=[− µ∗Φ, Φ∗0,−Φ∗1, . . . ,−Φ∗p

]. Hence, it can be easily seen that

∥∥∥u∗t(η∗

)− u∗t(η)∥∥∥

2 ≤∥∥∥∥

t−1∑

τ=0

[Λτ

(η∗

)Φ∗µ

(p)− Λτ

(η)Φµ

(p)]

Y ∗t−τ

(p)∥∥∥∥

2

≤ 2

∥∥∥∥t−1∑

τ=0

[Λτ

(η∗

)− Λτ(η)]

Φ∗µ(p)Y ∗t−τ

(p)∥∥∥∥

2

+ 2

∥∥∥∥t−1∑

τ=0

Λτ(η)[

Φ∗µ(p)− Φµ

(p)]

Y ∗t−τ

(p)∥∥∥∥

2

(A.137)

and then

E∥∥∥u∗t

(η∗

)− u∗t(η)∥∥∥

2 ≤ 2∥∥∥Φ∗µ

(p)∥∥∥

2t−1∑

τ1=0

t−1∑

τ2=0

∥∥∥Λτ1

(η∗

)− Λτ1

(η)∥∥∥

∥∥∥Λτ2

(η∗

)− Λτ2

(η)∥∥∥

∥∥∥Γ∗Y(τ2 − τ1, p

)∥∥∥

+2∥∥∥Φ∗µ

(p)− Φµ

(p)∥∥∥

2t−1∑

τ1=0

t−1∑

τ2=0

∥∥∥Λτ1

(η)∥∥∥

∥∥∥Λτ2

(η)∥∥∥

∥∥∥Γ∗Y(τ2 − τ1, p

)∥∥∥, (A.138)

whereΓ∗Y(τ, p

)= E

[Y ∗t

(p)Y ∗t+τ

(p)′]

, with Γ∗Y(0, p

)= Γ∗Y

(p). Further, for some constant%, such that0 < ρ < % < 1, it

can be easily shown that ∥∥∥Γ∗Y(τ , p

)∥∥∥ ≤ %

∣∣τ∣∣∥∥∥Γ∗Y(p)∥∥∥ ≤

∥∥∥Γ∗Y(p)∥∥∥, (A.139)

with∥∥Γ∗Y

(p)∥∥ ≤ ∥∥Γ∗Y

(p) − ΓY

(p)∥∥ +

∥∥ΓY

(p)∥∥ and where as in proposition3.3

∥∥Γ∗Y(p) − ΓY

(p)∥∥ = Op

(pT−1/2

).

Therefore, we conclude that∥∥Γ∗Y

(p)∥∥ is uniformly bounded, and then

E∥∥∥u∗t

(η∗

)− u∗t(η)∥∥∥

2 ≤ 2∥∥∥Γ∗Y

(p)∥∥∥

∥∥∥Φ∗µ(p)∥∥∥

2( t−1∑

τ=0

∥∥∥Λτ(η∗

)− Λτ(η)∥∥∥

)2

+∥∥∥Φ∗µ

(p)− Φµ

(p)∥∥∥

2( t−1∑

τ=0

∥∥∥Λτ(η)∥∥∥

)2, (A.140)

which isOp(T−1

)by lemma 4.1 of Dufour and Jouini (2005c) and theorem3.9. Hence

∥∥∥Σ∗u(η∗

)− Σ∗u(T

)∥∥∥ = Op

(T−1/2

). (A.141)

Finally, we get

∥∥∥Σ∗u(η∗

)− Σu

∥∥∥ = Op

(T−1/2

)and then

∥∥∥Σ∗u(η∗

)−1 − Σ−1u

∥∥∥ = Op

(T−1/2

). (A.142)

PROOF OFPROPOSITION3.7 For ease of notation, let firstΛτ = Λτ(η), Λτ = Λτ

(η), Λ∗τ = Λτ

(η∗

), X∗

t = X∗t

(η),

X∗t = X∗

t

(η∗

)andΣ∗u = Σ∗u

(η). Then

∥∥∥Q∗X(η)−1 −QX

(η)−1

∥∥∥2

1≤

∥∥∥Q∗X(η)−1 −QX

(η)−1

∥∥∥2 ≤ ∥∥R

∥∥4∥∥∥Υ∗X

(η)−ΥX

(η)∥∥∥

2, (A.143)

where

Υ∗X(η)

= E

(t−1∑

τ=0

[X∗

t−τ ⊗ Λ′τ])

Σ∗−1u

(t−1∑

τ=0

[X∗

t−τ ⊗ Λ′τ])′

and (A.144)

ΥX

(η)

= E

( ∞∑

τ=0

[Xt−τ ⊗ Λ′τ

])Σ−1

u

( ∞∑

τ=0

[Xt−τ ⊗ Λ′τ

])′. (A.145)

40

Page 44: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

As before, defineX1,t = 1, X2,t = vt, X3,t =[y′t−1, . . . , y′t−p

]′, X4,t =[u′t−1, . . . , u′t−p

]′, X∗1,t = 1, X∗

2,t = v∗t(η),

X∗3,t =

[y∗′t−1, . . . , y∗′t−p

]′ andX∗4,t =

[u∗t−1

(η)′

, . . . , u∗t−p

(η)′]′. Then fori = 2, 3, 4, Xi,t andX∗

i,t are(hik

2 × 1)

vectors, respectively, withh2 = 1andh3 = h4 = p. Moreover, we have

∥∥∥Υ∗X(η)−ΥX

(η)∥∥∥

2=

4∑

i=1

∥∥Υ∗Xii

(η)−ΥXii

(η)∥∥2

+ 24∑

i=2

i−1∑

j=1

∥∥Υ∗Xij

(η)−ΥXij

(η)∥∥2

, (A.146)

where

Υ∗Xij

(η)

= E

(t−1∑

τ=0

[X∗

i,t−τ ⊗ Λ′τ])

Σ∗−1u

(t−1∑

τ=0

[X∗

j,t−τ ⊗ Λ′τ])′

and (A.147)

ΥXij

(η)

= E

( ∞∑

τ=0

[Xi,t−τ ⊗ Λ′τ

])Σ−1

u

( ∞∑

τ=0

[Xj,t−τ ⊗ Λ′τ

])′. (A.148)

In particular, we have

Υ∗X11

(η)

=

t−1∑

τ=0

Λ′τΣ∗−1u Λτ +

t−1∑

l=1

t−1−l∑

τ=0

Λ′τ+lΣ

∗−1u Λτ + Λ′τΣ∗−1

u Λτ+l

and (A.149)

ΥX11

(η)

=∞∑

τ=0

Λ′τΣ−1u Λτ +

∞∑

l=1

∞∑

τ=0

Λ′τ+lΣ

−1u Λτ + Λ′τΣ−1

u Λτ+l

(A.150)

Therefore, we have ∥∥∥Υ∗X11

(η)−ΥX11

(η)∥∥∥ ≤

∥∥A1

∥∥ +∥∥B1

∥∥ + 2∥∥A2

∥∥ +∥∥B2

∥∥

, (A.151)

with

∥∥A1

∥∥ =

∥∥∥∥t−1∑

τ=0

(Λ′τΣ∗−1

u Λτ − Λ′τΣ−1u Λτ

)∥∥∥∥

≤t−1∑

τ=0

∥∥Λτ − Λτ

∥∥∥∥Σ∗−1u

∥∥∥∥Λτ

∥∥ +∥∥Λτ

∥∥∥∥Σ∗−1u − Σ−1

u

∥∥∥∥Λτ

∥∥ +∥∥Λτ

∥∥∥∥Σ−1u

∥∥∥∥Λτ − Λτ

∥∥

, (A.152)

∥∥A2

∥∥ =

∥∥∥∥∥t−1∑

l=1

t−1−l∑

τ=0

(Λ′τ+lΣ

∗−1u Λτ − Λ′τ+lΣ

−1u Λτ

)∥∥∥∥∥

≤t−1∑

l=1

t−1−l∑

τ=0

∥∥Λτ+l − Λτ+l

∥∥∥∥Σ∗−1u

∥∥∥∥Λτ

∥∥ +∥∥Λτ+l

∥∥∥∥Σ∗−1u − Σ−1

u

∥∥∥∥Λτ

∥∥ +∥∥Λτ+l

∥∥∥∥Σ−1u

∥∥∥∥Λτ − Λτ

∥∥

(A.153)

∥∥B1

∥∥ =

∥∥∥∥∥∞∑

τ=t

Λ′τΣ−1u Λτ

∥∥∥∥∥ ≤∞∑

τ=t

∥∥Λτ

∥∥∥∥Σ−1u

∥∥∥∥Λτ

∥∥ = o(1)

(A.154)

and

∥∥B2

∥∥ =

∥∥∥∥∥t−1∑

l=1

∞∑

τ=t−l

Λ′τ+lΣ−1u Λτ +

∞∑

l=t

∞∑

τ=0

Λ′τ+lΣ−1u Λττ

∥∥∥∥∥

≤t−1∑

l=1

∞∑

τ=t−l

∥∥Λτ+l

∥∥∥∥Σ−1u

∥∥∥∥Λτ

∥∥ +∞∑

l=t

∞∑

τ=0

∥∥Λτ+l

∥∥∥∥Σ−1u

∥∥∥∥Λτ

∥∥ = o(1)

(A.155)

where by lemma 4.1of Dufour and Jouini (2005c)∥∥A1

∥∥ =∥∥A2

∥∥ = Op(T−1/2

)and then

∥∥Υ∗X11

(η) − ΥX11

(η)∥∥. Now,

recall thatyt = µy +∑∞

v=0 Ψvut−v andy∗t =∑t+b−1

v=0 Ψvu∗t−v , with b ∈ N (b ≥ p

), and where, for ease of notation,Ψv

41

Page 45: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

andu∗t−v stay forΨv(η)

andu∗t−v

(η), respectively. Then,

Υ∗X22

(η)

= µyµ′y ⊗Υ∗X11

(η)

+

t−1∑

τ=0

t+b−1−τ∑

v=1

[ΨvΣ∗uΨ′v ⊗ Λ′τΣ∗−1

u Λτ

]

+

t−1∑

l=1

t−1−l∑

τ=0

t+b−1−l−τ∑

v=1

[Ψv+lΣ

∗uΨ′v ⊗ Λ′τΣ∗−1

u Λτ+l

]+

[ΨvΣ∗uΨ′v+l ⊗ Λ′τ+lΣ

∗−1u Λτ

]

(A.156)

and

ΥX22

(η)

= µyµ′y ⊗ΥX11

(η)

+∞∑

τ=0

∞∑

v=1

[ΨvΣuΨ′v ⊗ Λ′τΣ−1

u Λτ

]

+∞∑

l=1

∞∑

τ=0

∞∑

v=1

[Ψv+lΣuΨ′v ⊗ Λ′τΣ−1

u Λτ+l

]+

[ΨvΣuΨ′v+l ⊗ Λ′τ+lΣ

−1u Λτ

]. (A.157)

Then∥∥∥Υ∗X22

(η)−ΥX22

(η)∥∥∥ ≤

∥∥∥Υ∗X11

(η)∥∥∥

∥∥µy

∥∥∥∥µy − µy

∥∥ +∥∥µy − µy

∥∥∥∥µy

∥∥

+∥∥µyµ′y

∥∥∥∥∥Υ∗X11

(η)−ΥX11

(η)∥∥∥

+∥∥C1

∥∥ +∥∥D1

∥∥ + 2∥∥C2

∥∥ +∥∥D2

∥∥

, (A.158)

where by lemma 4.1 of Dufour and Jouini (2005c), theorems3.1and3.2, and proposition3.1, we have

∥∥C1

∥∥ ≤t−1∑

τ=0

t+b−1−τ∑

v=1

∥∥Λτ

∥∥2∥∥Σ∗−1u

∥∥(∥∥Ψv −Ψv

∥∥∥∥Σ∗u∥∥∥∥Ψv

∥∥ +∥∥Ψv

∥∥∥∥Σ∗u − Σu

∥∥∥∥Ψv

∥∥

+∥∥Ψv

∥∥∥∥Σu

∥∥∥∥Ψv −Ψv

∥∥)

+∥∥Ψv

∥∥2∥∥Σu

∥∥(∥∥Λτ − Λτ

∥∥∥∥Σ∗−1u

∥∥∥∥Λτ

∥∥

+∥∥Λτ

∥∥∥∥Σ∗−1u − Σ−1

u

∥∥∥∥Λτ

∥∥ +∥∥Λτ

∥∥∥∥Σ−1u

∥∥∥∥Λτ − Λτ

∥∥)

= Op

(T−1/2

), (A.159)

∥∥C2

∥∥ ≤t−1∑

l=1

t−1−l∑

τ=0

t+b−1−l−τ∑

v=1

∥∥Λτ

∥∥∥∥Σ∗−1u

∥∥∥∥Λτ+l

∥∥(∥∥Ψv+l −Ψv+l

∥∥∥∥Σ∗u∥∥∥∥Ψv

∥∥ +∥∥Ψv+l

∥∥∥∥Σ∗u − Σu

∥∥∥∥Ψv

∥∥

+∥∥Ψv+l

∥∥∥∥Σu

∥∥∥∥Ψv −Ψv

∥∥)

+∥∥Ψv+l

∥∥∥∥Σu

∥∥∥∥Ψv

∥∥(∥∥Λτ − Λτ

∥∥∥∥Σ∗−1u

∥∥∥∥Λτ+l

∥∥

+∥∥Λτ

∥∥∥∥Σ∗−1u − Σ−1

u

∥∥∥∥Λτ+l

∥∥ +∥∥Λτ

∥∥∥∥Σ−1u

∥∥∥∥Λτ+l − Λτ+l

∥∥)

= Op

(T−1/2

), (A.160)

∥∥D1

∥∥ ≤ ∥∥Σu

∥∥∥∥Σ−1u

∥∥

t−1∑

τ=0

∞∑

v=t+b−τ

∥∥Ψv

∥∥2∥∥Λτ

∥∥2+

∞∑

τ=t

∞∑

v=1

∥∥Ψv

∥∥2∥∥Λτ

∥∥2

= o

(1)

(A.161)

and

∥∥D2

∥∥ ≤ ∥∥Σu

∥∥∥∥Σ−1u

∥∥

t−1∑

l=1

t−1−l∑

τ=0

∞∑

v=t+b−l−τ

∥∥Ψv+l

∥∥∥∥Ψv

∥∥∥∥Λτ

∥∥∥∥Λτ+l

∥∥

+

t−1∑

l=1

∞∑

τ=t−l

∞∑

v=1

∥∥Ψv+l

∥∥∥∥Ψv

∥∥∥∥Λτ

∥∥∥∥Λτ+l

∥∥ +∞∑

l=t

∞∑

τ=0

∞∑

v=1

∥∥Ψv+l

∥∥∥∥Ψv

∥∥∥∥Λτ

∥∥∥∥Λτ+l

∥∥

= o(1)

(A.162)

Hence,∥∥Υ∗X22

(η) − ΥX22

(η)∥∥2

= Op(T−1

). Similarly, it can be easily seen that

∥∥Υ∗Xii

(η) − ΥXii

(η)∥∥2

= Op(T−1

)

for i = 3, 4, and also∥∥Υ∗Xij

(η) − ΥXij

(η)∥∥2

= Op(T−1

)for i = 2, . . . , 4, andj = 1, . . . , i − 1, following the same

arguments. Thus∥∥Υ∗X

(η)−ΥX

(η)∥∥2

= Op(T−1

)and then

∥∥Q∗X(η)−1 −QX

(η)−1∥∥

1= Op

(T−1/2

). Thereby, we get

∥∥∥Q∗X(η)−QX

(η)∥∥∥

1= Op

(T−1/2

). (A.163)

42

Page 46: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Now, setting

QX

(T

)=

T−1

T∑

t=1

Zt(η)Σ−1

u Zt(η)′−1

. (A.164)

Then∥∥∥Q∗X

(η∗

)−1 −QX

(η)−1

∥∥∥1≤

∥∥∥Q∗X

(η∗

)−1 −QX

(T

)−1∥∥∥1

+∥∥∥QX

(T

)−1 −QX

(η)−1

∥∥∥1, (A.165)

where∥∥QX

(T

)−1 −QX

(η)−1∥∥

1= Op

(T−1/2

)and

∥∥∥Q∗X

(η∗

)−1 −QX

(T

)−1∥∥∥1≤

∥∥∥Q∗X

(η∗

)−1 −QX

(T

)−1∥∥∥ ≤

∥∥A1

∥∥ +∥∥A2

∥∥ +∥∥A3

∥∥, (A.166)

with

A1 = T−1T∑

t=1

[Z∗t

(η∗, η

)− Zt(η)]

Σ∗u(η∗

)−1Z∗t

(η∗, η

)′ (A.167)

A2 = T−1T∑

t=1

Zt(η)[

Σ∗u(η∗

)−1 − Σ−1u

]Z∗t

(η∗, η

)′ and (A.168)

A3 = T−1T∑

t=1

Zt(η)Σ−1

u

[Z∗t

(η∗, η

)− Zt(η)]′

. (A.169)

In particular, we have

∥∥A1

∥∥ ≤(

T−1T∑

t=1

∥∥∥Z∗t

(η∗, η

)− Zt(η)∥∥∥

2)1/2∥∥∥Σ∗u

(η∗

)−1 − Σ−1u

∥∥∥(

T−1T∑

t=1

∥∥∥Z∗t

(η∗, η

)− Zt(η)∥∥∥

2)1/2

+

(T−1

T∑

t=1

∥∥∥Z∗t

(η∗, η

)− Zt(η)∥∥∥

2)1/2∥∥∥Σ∗u

(η∗

)−1 − Σ−1u

∥∥∥(

T−1T∑

t=1

∥∥∥Zt(η)∥∥∥

2)1/2

+

(T−1

T∑

t=1

∥∥∥Z∗t

(η∗, η

)− Zt(η)∥∥∥

2)1/2∥∥Σ−1

u

∥∥(

T−1T∑

t=1

∥∥∥Z∗t

(η∗, η

)− Zt(η)∥∥∥

2)1/2

+

(T−1

T∑

t=1

∥∥∥Z∗t

(η∗, η

)− Zt(η)∥∥∥

2)1/2∥∥Σ−1

u

∥∥(

T−1T∑

t=1

∥∥∥Zt(η)∥∥∥

2)1/2

, (A.170)

where

T−1T∑

t=1

∥∥∥Z∗t

(η∗, η

)− Zt(η)∥∥∥

2 ≤ 2∥∥R

∥∥2T−1

T∑

t=1

2

∥∥∥∥t−1∑

τ=0

[X∗

t−τ

(η∗

)−Xt−τ

]⊗ Λτ

(η)′

∥∥∥∥2

+

∥∥∥∥t−1∑

τ=0

Xt−τ ⊗[Λτ

(η)′ − Λτ

(η)′]

∥∥∥∥2

+

∥∥∥∥∞∑

τ=t

[Xt−τ ⊗ Λτ

(η)′]

∥∥∥∥2

= 2∥∥R

∥∥2T−1

T∑

t=1

2

∥∥∥X∗t

(η∗

)−Xt

∥∥∥2∥∥∥∥

T−t∑

τ=0

Λτ(η)∥∥∥∥

2

+∥∥Xt

∥∥2∥∥∥∥

T−t∑

τ=0

[Λτ

(η)− Λτ

(η)]∥∥∥∥

2

+

∥∥∥∥∞∑

τ=T−t+1

[Xt−τ ⊗ Λτ

(η)′]

∥∥∥∥2

,

(A.171)

with T−1∑T

t=1

∥∥ ∑∞τ=T−t+1

[Xt−τ ⊗ Λτ

(η)′]∥∥2

= o(1)

and

∥∥∥X∗t

(η∗

)−Xt

∥∥∥2

=∥∥∥v∗t

(η∗

)− vt

∥∥∥2

+

p∑

i=1

∥∥y∗t−i − yt−i

∥∥2+

p∑

j=1

∥∥∥u∗t−j

(η∗

)− ut−j

∥∥∥2

≤ 2∥∥y∗t − yt

∥∥2+ 2

∥∥∥u∗t(η∗

)− ut

∥∥∥2

+

p∑

i=1

∥∥y∗t−i − yt−i

∥∥2+

p∑

j=1

∥∥∥u∗t−j

(η∗

)− ut−j

∥∥∥2

= Op

(T−1

)(A.172)

43

Page 47: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

since∥∥y∗t − yt

∥∥ = Op(T−1/2

)by the resampling scheme and

∥∥u∗t(η∗

) − ut

∥∥ = Op(T−1/2

)by lemma 4.1 of Dufour and

Jouini (2005c) and theorem3.3. Using again lemma 4.1 of Dufour and Jouini (2005c), we get

T−1T∑

t=1

∥∥∥Z∗t

(η∗, η

)− Zt(η)∥∥∥

2= Op

(T−1

)(A.173)

and then∥∥A1

∥∥ = Op(T−1/2

), by proposition3.6. Likewise, it can be easily shown that

∥∥A2

∥∥ and∥∥A3

∥∥ are bothOp(T−1/2

).

Hence∥∥∥Q∗X

(η∗

)−1 −QX

(T

)−1∥∥∥1

= Op

(T−1/2

)and then

∥∥∥Q∗X

(η∗

)−1 −QX

(η)−1

∥∥∥1

= Op

(T−1/2

), (A.174)

Thereafter, we get∥∥∥Q∗X

(η∗

)−QX

(η)∥∥∥

1= Op

(T−1/2

)and consequently

∥∥∥Q∗X

(η∗

)−QX

(η)∥∥∥

1= op

(1), (A.175)

under the assumption2.6. Finally, note that∥∥∥Q∗X

(η∗

)−QX

(η)∥∥∥

1≤

∥∥∥Q∗X(η∗

)−QX

(T

)∥∥∥1

+∥∥∥QX

(T

)−QX

(η)∥∥∥

1, (A.176)

where ∥∥∥Q∗X(η∗

)−1 −QX

(T

)−1∥∥∥1≤

∥∥∥Q∗X(η∗

)−1 −QX

(T

)−1∥∥∥ ≤

∥∥B1

∥∥ +∥∥B2

∥∥ +∥∥B3

∥∥, (A.177)

with

B1 = T−1T∑

t=1

[Z∗t

(η∗

)− Zt(η)]

Σ∗u(η∗

)−1Z∗t

(η∗

)′ (A.178)

B2 = T−1T∑

t=1

Zt(η)[

Σ∗u(η∗

)−1 − Σ−1u

]Z∗t

(η∗

)′ and (A.179)

B3 = T−1T∑

t=1

Zt(η)Σ−1

u

[Z∗t

(η∗

)− Zt(η)]′

. (A.180)

Specifically, we have

∥∥B1

∥∥ ≤(

T−1T∑

t=1

∥∥∥Z∗t(η∗

)− Zt(η)∥∥∥

2)1/2∥∥∥Σ∗u

(η∗

)−1 − Σ−1u

∥∥∥(

T−1T∑

t=1

∥∥∥Z∗t(η∗

)− Zt(η)∥∥∥

2)1/2

+

(T−1

T∑

t=1

∥∥∥Z∗t(η∗

)− Zt(η)∥∥∥

2)1/2∥∥∥Σ∗u

(η∗

)−1 − Σ−1u

∥∥∥(

T−1T∑

t=1

∥∥∥Zt(η)∥∥∥

2)1/2

+

(T−1

T∑

t=1

∥∥∥Z∗t(η∗

)− Zt(η)∥∥∥

2)1/2∥∥Σ−1

u

∥∥(

T−1T∑

t=1

∥∥∥Z∗t(η∗

)− Zt(η)∥∥∥

2)1/2

+

(T−1

T∑

t=1

∥∥∥Z∗t(η∗

)− Zt(η)∥∥∥

2)1/2∥∥Σ−1

u

∥∥(

T−1T∑

t=1

∥∥∥Zt(η)∥∥∥

2)1/2

, (A.181)

with

T−1T∑

t=1

∥∥∥Z∗t(η∗

)− Zt(η)∥∥∥

2 ≤ 2∥∥R

∥∥2T−1

T∑

t=1

2

∥∥∥∥t−1∑

τ=0

[X∗

t−τ

(η∗

)−Xt−τ

]⊗ Λτ

(η∗

)′∥∥∥∥2

+

∥∥∥∥t−1∑

τ=0

Xt−τ ⊗[Λτ

(η∗

)′ − Λτ(η)′]

∥∥∥∥2

+

∥∥∥∥∞∑

τ=t

[Xt−τ ⊗ Λτ

(η)′]

∥∥∥∥2

= 2∥∥R

∥∥2T−1

T∑

t=1

2

∥∥∥X∗t

(η∗

)−Xt

∥∥∥2∥∥∥∥

T−t∑

τ=0

Λτ(η∗

)∥∥∥∥2

+∥∥Xt

∥∥2∥∥∥∥

T−t∑

τ=0

[Λτ

(η∗

)− Λτ(η)]∥∥∥∥

2

+

∥∥∥∥∞∑

τ=T−t+1

[Xt−τ ⊗ Λτ

(η)′]

∥∥∥∥2

, (A.182)

44

Page 48: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

where∥∥X∗

t

(η∗

)−Xt

∥∥ = Op(T−1/2

)as previously shown, and

∥∥∥Λτ(η∗

)− Λτ(η)∥∥∥ ≤

∥∥∥Λτ(η∗

)− Λτ(η)∥∥∥ +

∥∥∥Λτ(η)− Λτ

(η)∥∥∥ = Op

(T−1/2

)(A.183)

using lemma 4.1 of Dufour and Jouini (2005c). It follows that

T−1T∑

t=1

∥∥∥Z∗t(η∗

)− Zt(η)∥∥∥

2= Op

(T−1

)(A.184)

and then∥∥B1

∥∥ = Op(T−1/2

). Similarly, it can be shown that

∥∥B2

∥∥ =∥∥B3

∥∥ = Op(T−1/2

). Hence

∥∥∥Q∗X(η∗

)−1 −QX

(T

)−1∥∥∥1

= Op

(T−1/2

)and then

∥∥∥Q∗X(η∗

)−QX

(T

)∥∥∥1

= Op

(T−1/2

). (A.185)

Finally, we have

∥∥∥Q∗X(η∗

)−QX

(η)∥∥∥

1= Op

(T−1/2

)and then

∥∥∥Q∗X(η∗

)− Q∗X

(η∗

)∥∥∥1

= Op

(T−1/2

)(A.186)

since ∥∥∥Q∗X(η∗

)− Q∗X

(η∗

)∥∥∥1≤

∥∥∥Q∗X(η∗

)−QX

(η)∥∥∥

1+

∥∥∥QX

(η)− Q∗X

(η∗

)∥∥∥1. (A.187)

Consequently,under the assumption2.6, we get

∥∥∥Q∗X(η∗

)−QX

(η)∥∥∥

1= op

(1)

and∥∥∥Q∗X

(η∗

)− Q∗X

(η∗

)∥∥∥1

= op(1). (A.188)

PROOF OFTHEOREM 3.13 Recall that

η∗ − η = Q∗X(η∗

)Ω∗X

(η∗

)+ Q∗X

(η∗

)[Ω•∗X

(η∗

)− Ω∗X

(η∗

)](A.189)

Then

∥∥η∗ − η∥∥ ≤

∥∥∥QX

(η)∥∥∥

1

∥∥∥ΩX

(η)∥∥∥ +

∥∥∥QX

(η)∥∥∥

1

∥∥∥Ω•∗X

(η∗

)− ΩX

(η)∥∥∥ +

∥∥∥Q∗X

(η∗

)−QX

(η)∥∥∥

1

∥∥∥Ω•∗X

(η∗

)∥∥∥

+∥∥∥Q∗X

(η∗

)− Q∗X

(η∗

)∥∥∥1

∥∥∥Ω∗X(η∗

)∥∥∥ +∥∥∥Q∗X

(η∗

)∥∥∥1

∥∥∥Ω∗X(η∗

)− Ω∗X

(η∗

)∥∥∥ (A.190)

where∥∥Q∗X

(η∗

) − QX

(η)∥∥

1= Op

(T−1/2

)and

∥∥Q∗X(η∗

) − Q∗X

(η∗

)∥∥1

= Op(T−1/2

)by the above proposition, and∥∥QX

(η)∥∥

1= Op

(1)

and∥∥ΩX

(η)∥∥ = Op

(T−1/2

)as shown in Dufour and Jouini (2005c). So to complete the proof we only

need to determine the probability orders of∥∥Ω•∗X

(η∗

) − ΩX

(η)∥∥ and

∥∥Ω∗X(η∗

) − Ω∗X

(η∗

)∥∥, respectively. In particular, we

have

∥∥∥Ω•∗X

(η∗

)− ΩX

(η)∥∥∥ =

∥∥∥∥∥T−1T∑

t=1

[Z∗t

(η∗, η

)Σ∗u

(η∗

)−1u∗t

(η)− Zt

(η)Σ−1

u ut

]∥∥∥∥∥

≤∥∥∥∥∥T−1

T∑

t=1

Zt(η)Σ−1

u

[u∗t

(η)− ut

]∥∥∥∥∥ +

∥∥∥∥∥T−1T∑

t=1

Zt(η)[

Σ∗u(η∗

)−1 − Σ−1u

]u∗t

(η)∥∥∥∥∥

+

∥∥∥∥∥T−1T∑

t=1

[Z∗t

(η∗, η

)− Zt(η)]

Σ∗u(η∗

)−1u∗t

(η)∥∥∥∥∥, (A.191)

with

∥∥∥∥∥T−1T∑

t=1

[Z∗t

(η∗, η

)− Zt(η)]

Σ∗u(η∗

)−1u∗t

(η)∥∥∥∥∥2

≤ ∥∥R∥∥2

2∥∥B1

∥∥2+ 4

∥∥B2

∥∥2+ 4

∥∥B3

∥∥2

, (A.192)

45

Page 49: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

where

∥∥B1

∥∥2=

∥∥∥∥∥T−1T∑

t=1

([X∗

t

(η∗

)−Xt

]⊗

T−t∑

τ=0

Λτ(η)′

)Σ∗u

(η∗

)−1u∗t

(η)∥∥∥∥∥2

≤ 2∥∥B11

∥∥2+ 2

∥∥B12

∥∥2, (A.193)

with

∥∥B11

∥∥2=

∥∥∥∥∥T−1T∑

t=1

([X∗

t

(η∗

)−Xt

]⊗

T−t∑

τ=0

Λτ(η)′

)Σ∗u

(η∗

)−1[u∗t

(η)− ut

]∥∥∥∥∥2

T−1T∑

t=1

∥∥∥X∗t

(η∗

)−Xt

∥∥∥2(

T−t∑

τ=0

∥∥∥Λτ(η)∥∥∥

)2∥∥∥Σ∗u(η∗

)∥∥∥2

T−1T∑

t=1

∥∥∥u∗t(η)− ut

∥∥∥2

= Op

(T−2

),

(A.194)

since∥∥X∗

t

(η∗

)−Xt

∥∥2 and∥∥u∗t

(η)− ut

∥∥2 are bothOp(T−1

)and

∑T−tτ=0

∥∥Λτ(η)∥∥ is Op

(1), and

E∥∥B12

∥∥2= E

∥∥∥∥∥T−1T∑

t=1

([X∗

t

(η∗

)−Xt

]⊗

T−t∑

τ=0

Λτ(η)′

)Σ∗u

(η∗

)−1ut

∥∥∥∥∥2

= T−2T∑

t=1

E∥∥∥X∗

t

(η∗

)−Xt

∥∥∥2∥∥∥∥∥

T−t∑

τ=0

Λτ(η)∥∥∥∥∥2∥∥∥Σ∗u

(η∗

)−1∥∥∥2tr

[Σu

]= Op

(T−2

), (A.195)

by independence betweenX∗t

(η∗

)−Xt andut. Then∥∥B1

∥∥2= Op

(T−2

). Similarly, we can easily show that

E∥∥B2

∥∥2= E

∥∥∥∥∥T−1T∑

t=1

(Xt ⊗

T−t∑

τ=0

[Λτ

(η)− Λτ

(η)]′)

Σ∗u(η∗

)−1u∗t

(η)∥∥∥∥∥2

= Op

(T−2

), (A.196)

sinceXt andu∗t(η)

are independent. Further, we show that

E∥∥B3

∥∥2= E

∥∥∥∥∥T−1T∑

t=1

(Xt ⊗

∞∑

τ=T−t+1

Λτ(η)′

)Σ∗u

(η∗

)−1u∗t

(η)∥∥∥∥∥2

= Op(T−2

). (A.197)

Hence ∥∥∥∥∥T−1T∑

t=1

[Z∗t

(η∗, η

)− Zt(η)]

Σ∗u(η∗

)−1u∗t

(η)∥∥∥∥∥ = Op

(T−1

). (A.198)

Likewise, we easily show that

∥∥∥∥∥T−1T∑

t=1

Zt(η)[

Σ∗u(η∗

)−1 − Σ−1u

]u∗t

(η)∥∥∥∥∥ =

∥∥∥∥∥T−1T∑

t=1

Zt(η)Σ−1

u

[u∗t

(η)− ut

]∥∥∥∥∥ = Op

(T−1

)(A.199)

and then ∥∥∥Ω•∗X

(η∗

)− ΩX

(η)∥∥∥ = Op

(T−1

). (A.200)

Again, using the same arguments as above, we get

∥∥∥Ω∗X(η∗

)− Ω∗X

(η∗

)∥∥∥ =

∥∥∥∥∥T−1T∑

t=1

[Z∗t

(η∗

)− Z∗t

(η∗, η

)]Σ∗u

(η∗

)−1u∗t

(η∗

)∥∥∥∥∥ = Op

(T−1

). (A.201)

Further, we can easily show ∥∥∥Ω∗X

(η∗

)− ΩX

(η)∥∥∥ = Op

(T−1

), (A.202)

46

Page 50: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

so then

∥∥∥Ω∗X(η∗

)∥∥∥ ≤∥∥∥Ω∗X

(η∗

)∥∥∥ +∥∥∥Ω∗X

(η∗

)− Ω∗X

(η∗

)∥∥∥

≤∥∥∥ΩX

(η)∥∥∥ +

∥∥∥Ω∗X

(η∗

)− ΩX

(η)∥∥∥ +

∥∥∥Ω∗X(η∗

)− Ω∗X

(η∗

)∥∥∥ = Op

(T−1/2

). (A.203)

Therefore, we get ∥∥η∗ − η∥∥ = Op

(T−1/2

)and then

∥∥η∗ − η∥∥ = op

(1), (A.204)

under under the assumption2.6.

PROOF OFTHEOREM 3.14 Note that

∥∥∥S∗X(η∗

)− S∗X(η)∥∥∥ ≤ T 1/2

∥∥∥Q∗X(η∗

)− Q∗X

(η∗

)∥∥∥1

∥∥∥Ω∗X(η∗

)∥∥∥ +∥∥∥Q∗X

(η∗

)∥∥∥1

∥∥∥Ω∗X(η∗

)− Ω∗X

(η∗

)∥∥∥

+∥∥∥Q∗X

(η∗

)−Q∗X(η)∥∥∥

1

∥∥∥Ω•∗X

(η∗

)∥∥∥ +∥∥∥Q∗X

(η)∥∥∥

1

∥∥∥Ω•∗X

(η∗

)− Ω∗X(η)∥∥∥

(A.205)

where, by proposition3.7,∥∥Q∗X

(η∗

)− Q∗X

(η∗

)∥∥1

and∥∥Q∗X

(η∗

)−Q∗X(η)∥∥

1are bothOp

(T−1/2

), and

∥∥Q∗X

(η∗

)∥∥1

and∥∥Q∗X(η)∥∥

1are bothOp

(1), since

∥∥Q∗X

(η∗

)−Q∗X(η)∥∥

1≤ ∥∥Q∗X

(η∗

)−QX

(η)∥∥

1+

∥∥Q∗X(η)−QX

(η)∥∥

1. (A.206)

Further, by theorem3.13,∥∥Ω∗X

(η∗

)∥∥ and∥∥Ω•∗X

(η∗

)∥∥ are bothOp(T−1/2

), while

∥∥Ω∗X(η∗

) − Ω∗X

(η∗

)∥∥ is Op(T−1

).

Furthermore, we have

∥∥∥Ω•∗X

(η∗

)− Ω∗X(η)∥∥∥ ≤

∥∥∥Ω•∗X

(η∗

)− ΩX

(η)∥∥∥ +

∥∥∥Ω∗X(η)− ΩX

(η)∥∥∥, (A.207)

with∥∥Ω•∗X

(η∗

)− ΩX

(η)∥∥ = Op

(T−1

)and

E∥∥∥Ω∗X

(η)− ΩX

(η)∥∥∥

2 ≤∥∥R

∥∥

16 E

∥∥∥∥∥T−1T∑

t=1

(T−t∑

τ=0

Λτ(η))′

Σ∗u(η)−1

u∗t(η)[

X∗t

(η)−Xt

]′∥∥∥∥∥2

+16E

∥∥∥∥∥T−1T∑

t=1

(T−t∑

τ=0

[Λτ

(η)− Λτ

(η)]

)′Σ∗u

(η)−1

u∗t(η)X′

t

∥∥∥∥∥2

+8E

∥∥∥∥∥T−1T∑

t=1

(T−t∑

τ=0

Λτ(η))′[

Σ∗u(η)−1 − Σ−1

u

]u∗t

(η)X′

t

∥∥∥∥∥2

+4E

∥∥∥∥∥T−1T∑

t=1

(T−t∑

τ=0

Λτ(η))′

Σ−1u

[u∗t

(η)− ut

]X′

t

∥∥∥∥∥2

+2E

∥∥∥∥∥T−1T∑

t=1

( ∞∑

τ=T−t+1

Λτ(η))′

Σ−1u utX

′t

∥∥∥∥∥2

, (A.208)

which is Op(T−2

)by mutual independence betweenut, Xt, u∗t

(η), X∗

t

(η), u∗t

(η) − ut and X∗

t

(η) − Xt and u∗t

(η).

Therefore, we get ∥∥∥Ω•∗X

(η∗

)− Ω∗X(η)∥∥∥ = Op

(T−1/2

)(A.209)

and then ∥∥∥S∗X(η∗

)− S∗X(η)∥∥∥ = Op

(T−1/2

). (A.210)

47

Page 51: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

PROOF OFTHEOREM 3.15 The Mallow’s distance between the conditional distribution ofS∗X(η)

ony1, . . . , yT

and

the distribution ofSX

(η), based on the Schur’s norm, is given by

d2

L

(SX

(η))

,L(S∗X

(η)∣∣ y1, . . . , yT

)= inf

E∥∥∥S∗X

(η)− SX

(η)∥∥∥

21/2

E∥∥∥S∗X

(η)− SX

(η)∥∥∥

21/2

(A.211)

with

E∥∥∥S∗X

(η)− SX

(η)∥∥∥

2 ≤ 2 T

∥∥∥Q∗X(η)∥∥∥

2

1E∥∥∥Ω∗X

(η)− ΩX

(η)∥∥∥

2+

∥∥∥Q∗X(η)−QX

(η)∥∥∥

2

1E∥∥∥ΩX

(η)∥∥∥

2

= Op

(T−1

)(A.212)

using proposition3.7, the fact thatE∥∥ΩX

(η)∥∥2 is Op

(T−1

), and also thatE

∥∥Ω∗X(η)−ΩX

(η)∥∥2 is Op

(T−2

)as shown above

in theorem3.14. Hence

d2

L

(SX

(η))

,L(S∗X

(η)∣∣ y1, . . . , yT

)= Op

(T−1/2

). (A.213)

and then

d2

L

(SX

(η))

,L(S∗X

(η)∣∣ y1, . . . , yT

)= op

(1)

as T →∞. (A.214)

Therefore, sinceSX

(η) ⇒ N

[0, QX

(η)]

asymptotically, we establish the following asymptotic distributional result

L(S∗X

(η)∣∣ y1, . . . , yT

)⇒ N

[0, QX

(η)]

in probability. (A.215)

PROOF OFPROPOSITION3.8 Again, let first

Σ∗u(T

)=

1

T

T∑

t=1

u∗t(η)u∗t

(η)′

. (A.216)

Then ∥∥∥Σ∗u(η∗

)− Σu

∥∥∥ ≤∥∥∥Σ∗u

(η∗

)− Σ∗u(T

)∥∥∥ +∥∥∥Σ∗u

(T

)− Σ∗u(η)∥∥∥ +

∥∥∥Σ∗u(η)− Σu

∥∥∥, (A.217)

whereΣ∗u(η)

= E[u∗t

(η)u∗t

(η)′],

∥∥∥Σ∗u(T

)− Σ∗u(η)∥∥∥ =

∥∥∥Σ∗u(η)− Σu

∥∥∥ = Op(T−1/2

), (A.218)

and

∥∥∥Σ∗u(η∗

)− Σ∗u(T

)∥∥∥ ≤ 1

T

T∑

t=1

∥∥∥u∗t(η∗

)− u∗t(η)∥∥∥

∥∥∥u∗t(η∗

)∥∥∥ +∥∥∥u∗t

(η)∥∥∥

∥∥∥u∗t(η∗

)− u∗t(η)∥∥∥

. (A.219)

Now, settingΦµ(p)

=[ − µΦ, Φ0,−Φ1, . . . ,−Φp

]andΦ∗µ

(p)

=[ − µ∗Φ, Φ∗0,−Φ∗1, . . . ,−Φ∗p

]. Then it can be easily seen

that

u∗t(η)

=

t−1∑

τ=0

Λτ(η)Φµ

(p)Y ∗t−τ

(p)

and u∗t(η∗

)=

t−1∑

τ=0

Λτ(η∗

)Φ∗µ

(p)Y ∗t−τ

(p). (A.220)

whereY ∗t(p)

=[1, y∗t , y∗t−1, . . . , y∗t−p

]. Hence

∥∥∥u∗t(η∗

)− u∗t(η)∥∥∥

2 ≤∥∥∥∥

t−1∑

τ=0

[Λτ

(η∗

)Φ∗µ

(p)− Λτ

(η)Φµ

(p)]

Y ∗t−τ

(p)∥∥∥∥

2

≤ 2

∥∥∥∥t−1∑

τ=0

[Λτ

(η∗

)− Λτ(η)]

Φ∗µ(p)Y ∗t−τ

(p)∥∥∥∥

2

+ 2

∥∥∥∥t−1∑

τ=0

Λτ(η)[

Φ∗µ(p)− Φµ

(p)]

Y ∗t−τ

(p)∥∥∥∥

2

(A.221)

48

Page 52: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

and then

E∥∥∥u∗t

(η∗

)− u∗t(η)∥∥∥

2 ≤ 2∥∥∥Φ∗µ

(p)∥∥∥

2t−1∑

τ1=0

t−1∑

τ2=0

∥∥∥Λτ1

(η∗

)− Λτ1

(η)∥∥∥

∥∥∥Λτ2

(η∗

)− Λτ2

(η)∥∥∥

∥∥∥Γ∗Y(τ2 − τ1, p

)∥∥∥

+2∥∥∥Φ∗µ

(p)− Φµ

(p)∥∥∥

2t−1∑

τ1=0

t−1∑

τ2=0

∥∥∥Λτ1

(η)∥∥∥

∥∥∥Λτ2

(η)∥∥∥

∥∥∥Γ∗Y(τ2 − τ1, p

)∥∥∥, (A.222)

whereΓ∗Y(τ , p

)as previously defined withΓ∗Y

(0, p

)= Γ∗Y

(p), and where for some constant%, such that0 < ρ < % < 1, we

easily show that ∥∥∥Γ∗Y(τ , p

)∥∥∥ ≤ %

∣∣τ∣∣∥∥∥Γ∗Y(p)∥∥∥ ≤

∥∥∥Γ∗Y(p)∥∥∥, (A.223)

with∥∥Γ∗Y

(p)∥∥ ≤ ∥∥Γ∗Y

(p)− ΓY

(p)∥∥ +

∥∥ΓY

(p)∥∥ and such that

∥∥Γ∗Y(p)− ΓY

(p)∥∥ = Op

(pT−1/2

), as in proposition3.3.

Therefore, since∥∥Γ∗Y

(p)∥∥ is uniformly bounded we conclude that

E∥∥∥u∗t

(η∗

)− u∗t(η)∥∥∥

2 ≤ 2∥∥∥Γ∗Y

(p)∥∥∥

∥∥∥Φ∗µ(p)∥∥∥

2( t−1∑

τ=0

∥∥∥Λτ(η∗

)− Λτ(η)∥∥∥

)2

+∥∥∥Φ∗µ

(p)− Φµ

(p)∥∥∥

2( t−1∑

τ=0

∥∥∥Λτ(η)∥∥∥

)2= Op

(T−1

), (A.224)

by lemma 4.1 of Dufour and Jouini (2005c) and theorem3.13. Therefore, we have

∥∥∥Σ∗u(η∗

)− Σ∗u(T

)∥∥∥ = Op

(T−1/2

)and finally,

∥∥∥Σ∗u(η∗

)− Σu

∥∥∥ = Op

(T−1/2

). (A.225)

49

Page 53: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

B Appendix: Simulation model design

The bivariate stationary invertible VARMA model considered in the simulation study, has thefollowing echelon form representation

yt = µΦ +(I2 − Φ0

)vt + Φ1yt−1 + Φ2yt−2 + Θ1ut−1 + Θ2ut−2 + ut

whereI2 represents the identity matrix of order two andvt = yt − ut, with ut = Cuεt, whereεt : t ∈ Z

are i.i.d Gaussian errors[0, I2

]andCu stays for the Cholesky decomposition

matrix of Σu = E(utu

′t

)such thatΣu = CuC ′

u. The restricted autoregressive and moving

average coefficient matrices, implied by the vector of Kronecker indices(p1 = 2, p2 = 0

)′, are

such that

Φ0 =[

1 0φ21,0 1

], Φ1 =

[φ11,1 0

0 0

], Φ2 =

[φ11,2 0

0 φ21,0

]

Θ1 =[

θ11,1 θ12,1

0 0

]and Θ2 =

[θ11,2 θ12,2

0 0

]

The chosen values for the Cholesky matrix as well as the model’s constant term are

Cu =[

0.7 0−0.2 0.5

]and µΦ =

[µΦ,1

µΦ,2

]=

[00

].

Moreover, the following model’s parameters values, sayφ21,0 = −0.2, φ11,1 = 0.8,φ11,2 = −0.97, θ11,1 = 0.4, θ12,1 = 0.6, θ11,2 = 0.57 and θ12,2 = −0.9, ensurethat stationarity and invertibility conditions be satisfied. Of course, other values for theseparameters might also ensure these stability constraints. The eigen values associated withthe autoregressive and moving average parts of the present model are all conjugate and are0.4+-0.9i and -0.26+-0.556780278i, respectively. The implied Euclidean norms are 0.98488578and 0.62449980, respectively. Hence the model is highly persistent.

50

Page 54: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Table 1: Second stage parameter estimates of echelon form VARMA model with Kroneckerindices (2,0) and sample of 200 observations.

Panel (A): MC Simulation

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 0.0000 -0.0027 0.0027 -0.0042 -0.1688 0.1705 0.3393 0.1016 0.1016µΦ,2 0.0000 0.0023 0.0023 0.0030 -0.0687 0.0732 0.1419 0.0431 0.0431φ21,0 -0.2000 -0.1992 0.0007 -0.1997 -0.2202 -0.1748 0.0454 0.0135 0.0134φ11,1 0.8000 0.7968 0.0031 0.7990 0.7562 0.8302 0.0740 0.0241 0.0239φ11,2 -0.9700 -0.9602 0.0097 -0.9645 -0.9869 -0.9196 0.0673 0.0243 0.0222θ11,1 0.4000 0.3927 0.0072 0.3972 0.2595 0.5048 0.2453 0.0753 0.0750θ12,1 0.6000 0.5956 0.0043 0.5977 0.4425 0.7417 0.2992 0.0913 0.0912θ11,2 0.5700 0.5577 0.0122 0.5580 0.4164 0.6943 0.2779 0.0869 0.0861θ12,2 -0.9000 -0.9059 0.0059 -0.8990 -1.0840 -0.7388 0.3452 0.1070 0.1069

Panel (B): Parametric Bootstrap

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 -0.0967 -0.0958 0.0008 -0.0970 -0.2461 0.0565 0.3026 0.0907 0.0907µΦ,2 0.0299 0.0292 0.0007 0.0286 -0.0417 0.0960 0.1377 0.0425 0.0425φ21,0 -0.1717 -0.1710 0.0006 -0.1718 -0.1928 -0.1464 0.0464 0.0139 0.0139φ11,1 0.8071 0.8040 0.0031 0.8057 0.7639 0.8396 0.0757 0.0230 0.0228φ11,2 -0.9704 -0.9605 0.0098 -0.9649 -0.9880 -0.9190 0.0690 0.0245 0.0224θ11,1 0.4412 0.4261 0.0151 0.4256 0.3003 0.5414 0.2411 0.0752 0.0736θ12,1 0.6243 0.6119 0.0124 0.6090 0.4830 0.7507 0.2677 0.0824 0.0815θ11,2 0.6172 0.6011 0.0161 0.5997 0.4488 0.7489 0.3001 0.0929 0.0915θ12,2 -0.8365 -0.8461 0.0095 -0.8494 -0.9926 -0.6997 0.2929 0.0916 0.0911

Panel (C): Non Parametric Bootstrap

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 -0.0967 -0.0998 0.0031 -0.1016 -0.2524 0.0501 0.3025 0.0921 0.0921µΦ,2 0.0299 0.0297 0.0002 0.0292 -0.0430 0.1003 0.1433 0.0438 0.0438φ21,0 -0.1717 -0.1711 0.0005 -0.1723 -0.1919 -0.1475 0.0444 0.0137 0.0137φ11,1 0.8071 0.8041 0.0029 0.8069 0.7635 0.8360 0.0725 0.0228 0.0226φ11,2 -0.9704 -0.9606 0.0097 -0.9641 -0.9864 -0.9207 0.0657 0.0231 0.0209θ11,1 0.4412 0.4305 0.0107 0.4336 0.3039 0.5505 0.2466 0.0768 0.0761θ12,1 0.6243 0.6211 0.0032 0.6210 0.4865 0.7549 0.2684 0.0821 0.0820θ11,2 0.6172 0.5996 0.0176 0.6002 0.4525 0.7409 0.2884 0.0890 0.0873θ12,2 -0.8365 -0.8453 0.0087 -0.8469 -0.9988 -0.6909 0.3079 0.0946 0.0942

Note – These estimates are obtained from 1000 replications. The eigen values of the model are all conjugate 0.4+-0.9i(0.98488578) and -0.26+-0.556780278i (0.624499800) for the autoregressive and moving average parts, respectively.

51

Page 55: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Table 2: Third stage parameter estimates of echelon form VARMA model with Kroneckerindices (2,0) and sample of 200 observations.

Panel (A): MC Simulation

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 0.0000 -0.0027 0.0027 -0.0021 -0.1716 0.1705 0.3421 0.1032 0.1031µΦ,2 0.0000 0.0022 0.0022 0.0030 -0.0688 0.0742 0.1430 0.0435 0.0435φ21,0 -0.2000 -0.1986 0.0013 -0.1992 -0.2196 -0.1745 0.0451 0.0137 0.0136φ11,1 0.8000 0.7977 0.0022 0.7980 0.7670 0.8270 0.0600 0.0186 0.0185φ11,2 -0.9700 -0.9633 0.0066 -0.9672 -0.9888 -0.9278 0.0610 0.0212 0.0202θ11,1 0.4000 0.3946 0.0053 0.3973 0.2844 0.4970 0.2126 0.0648 0.0646θ12,1 0.6000 0.5967 0.0032 0.5995 0.4684 0.7251 0.2567 0.0787 0.0786θ11,2 0.5700 0.5602 0.0097 0.5602 0.4547 0.6671 0.2124 0.0669 0.0662θ12,2 -0.9000 -0.8996 0.0003 -0.8977 -1.0413 -0.7652 0.2761 0.0843 0.0843

Panel (B): Parametric Bootstrap

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 -0.0967 -0.0965 0.0001 -0.0984 -0.2461 0.0549 0.3010 0.0920 0.0920µΦ,2 0.0299 0.0296 0.0003 0.0291 -0.0420 0.0982 0.1402 0.0429 0.0429φ21,0 -0.1717 -0.1696 0.0021 -0.1708 -0.1924 -0.1447 0.0477 0.0145 0.0143φ11,1 0.8071 0.8052 0.0019 0.8067 0.7750 0.8324 0.0574 0.0175 0.0174φ11,2 -0.9704 -0.9640 0.0063 -0.9676 -0.9904 -0.9250 0.0654 0.0217 0.0208θ11,1 0.4412 0.4301 0.0111 0.4316 0.3234 0.5308 0.2074 0.0645 0.0635θ12,1 0.6243 0.6140 0.0102 0.6122 0.5006 0.7321 0.2315 0.0712 0.0705θ11,2 0.6172 0.6089 0.0082 0.6084 0.5056 0.7170 0.2114 0.0663 0.0658θ12,2 -0.8365 -0.8395 0.0029 -0.8389 -0.9474 -0.7230 0.2244 0.0693 0.0693

Panel (C): Non Parametric Bootstrap

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 -0.0967 -0.1009 0.0042 -0.1019 -0.2527 0.0492 0.3019 0.0927 0.0926µΦ,2 0.0299 0.0302 0.0002 0.0299 -0.0430 0.1011 0.1441 0.0438 0.0438φ21,0 -0.1717 -0.1696 0.0020 -0.1706 -0.1901 -0.1447 0.0454 0.0142 0.0140φ11,1 0.8071 0.8055 0.0016 0.8060 0.7744 0.8322 0.0578 0.0179 0.0178φ11,2 -0.9704 -0.9642 0.0061 -0.9684 -0.9891 -0.9295 0.0596 0.0201 0.0191θ11,1 0.4412 0.4348 0.0064 0.4382 0.3258 0.5377 0.2119 0.0650 0.0647θ12,1 0.6243 0.6214 0.0029 0.6234 0.5040 0.7291 0.2251 0.0678 0.0677θ11,2 0.6172 0.6044 0.0128 0.6049 0.4958 0.7086 0.2128 0.0671 0.0659θ12,2 -0.8365 -0.8422 0.0056 -0.8444 -0.9544 -0.7270 0.2274 0.0703 0.0700

Note – These estimates are obtained from 1000 replications. The eigen values of the model are all conjugate 0.4+-0.9i(0.98488578) and -0.26+-0.556780278i (0.624499800) for the autoregressive and moving average parts, respectively.

52

Page 56: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Table 3: Second stage parameter estimates of echelon form VARMA model with Kroneckerindices (2,0) and sample of 500 observations.

Panel (A): MC Simulation

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 0.0000 0.0006 0.0006 0.0026 -0.1093 0.1116 0.2209 0.0667 0.0667µΦ,2 0.0000 -0.0006 0.0006 -0.0006 -0.0419 0.0434 0.0853 0.0264 0.0264φ21,0 -0.2000 -0.1991 0.0008 -0.1996 -0.2111 -0.1863 0.0248 0.0078 0.0078φ11,1 0.8000 0.7989 0.0010 0.7999 0.7731 0.8205 0.0474 0.0143 0.0142φ11,2 -0.9700 -0.9666 0.0033 -0.9681 -0.9821 -0.9457 0.0364 0.0117 0.0112θ11,1 0.4000 0.3959 0.0040 0.3964 0.3226 0.4657 0.1431 0.0437 0.0435θ12,1 0.6000 0.5962 0.0037 0.5957 0.5072 0.6867 0.1795 0.0542 0.0541θ11,2 0.5700 0.5669 0.0030 0.5676 0.4793 0.6486 0.1693 0.0513 0.0513θ12,2 -0.9000 -0.9023 0.0023 -0.9046 -1.0082 -0.7936 0.2146 0.0652 0.0652

Panel (B): Parametric Bootstrap

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 -0.0158 -0.0121 0.0036 -0.0107 -0.1241 0.0960 0.2201 0.0673 0.0672µΦ,2 0.0251 0.0242 0.0008 0.0233 -0.0231 0.0719 0.0950 0.0286 0.0286φ21,0 -0.1979 -0.1971 0.0007 -0.1972 -0.2093 -0.1840 0.0253 0.0079 0.0079φ11,1 0.7798 0.7793 0.0005 0.7804 0.7566 0.7994 0.0428 0.0131 0.0131φ11,2 -0.9729 -0.9688 0.0041 -0.9706 -0.9840 -0.9484 0.0356 0.0117 0.0109θ11,1 0.3809 0.3762 0.0046 0.3767 0.3042 0.4471 0.1429 0.0436 0.0433θ12,1 0.5693 0.5666 0.0027 0.5668 0.4769 0.6586 0.1817 0.0551 0.0551θ11,2 0.6312 0.6232 0.0079 0.6226 0.5408 0.7015 0.1607 0.0496 0.0489θ12,2 -0.8522 -0.8536 0.0014 -0.8511 -0.9561 -0.7577 0.1984 0.0620 0.0620

Panel (C): Non Parametric Bootstrap

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 -0.0158 -0.0138 0.0019 -0.0166 -0.1284 0.1014 0.2298 0.0693 0.0693µΦ,2 0.0251 0.0255 0.0004 0.0257 -0.0220 0.0721 0.0941 0.0286 0.0286φ21,0 -0.1979 -0.1972 0.0006 -0.1971 -0.2105 -0.1840 0.3945 0.0080 0.0080φ11,1 0.7798 0.7787 0.0011 0.7798 0.7557 0.7975 0.0418 0.0126 0.0126φ11,2 -0.9729 -0.9684 0.0045 -0.9707 -0.9841 -0.9475 0.0366 0.0125 0.0117θ11,1 0.3809 0.3777 0.0031 0.3774 0.3041 0.4497 0.1456 0.0445 0.0444θ12,1 0.5693 0.5666 0.0026 0.5646 0.4755 0.6600 0.1845 0.0551 0.0550θ11,2 0.6312 0.6257 0.0054 0.6267 0.5440 0.7039 0.1599 0.0490 0.0487θ12,2 -0.8522 -0.8533 0.0011 -0.8526 -0.9553 -0.7500 0.2053 0.0626 0.0626

Note – These estimates are obtained from 1000 replications. The eigen values of the model are all conjugate 0.4+-0.9i(0.98488578) and -0.26+-0.556780278i (0.624499800) for the autoregressive and moving average parts, respectively.

53

Page 57: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Table 4: Third stage parameter estimates of echelon form VARMA model with Kroneckerindices (2,0) and sample of 500 observations.

Panel (A): MC Simulation

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 0.0000 0.0005 0.0005 0.0020 -0.1140 0.1139 0.2279 0.0671 0.0671µΦ,2 0.0000 -0.0005 0.0005 -0.0006 -0.0430 0.0445 0.0875 0.0265 0.0265φ21,0 -0.2000 -0.1990 0.0009 -0.1996 -0.2109 -0.1861 0.0248 0.0078 0.0078φ11,1 0.8000 0.7997 0.0002 0.7994 0.7808 0.8167 0.0359 0.0105 0.0105φ11,2 -0.9700 -0.9675 0.0024 -0.9692 -0.9824 -0.9485 0.0339 0.0109 0.0106θ11,1 0.4000 0.3971 0.0028 0.3977 0.3321 0.4589 0.1268 0.0383 0.0382θ12,1 0.6000 0.5973 0.0026 0.5985 0.5158 0.6734 0.1576 0.0480 0.0479θ11,2 0.5700 0.5670 0.0029 0.5670 0.4978 0.6335 0.1357 0.0400 0.0399θ12,2 -0.9000 -0.9015 0.0015 -0.9025 -0.9870 -0.8181 0.1689 0.0505 0.0505

Panel (B): Parametric Bootstrap

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 -0.0158 -0.0124 0.0033 -0.0110 -0.1240 0.0964 0.2204 0.0679 0.0678µΦ,2 0.0251 0.0246 0.0004 0.0237 -0.0225 0.0740 0.0965 0.0287 0.0287φ21,0 -0.1979 -0.1969 0.0009 -0.1971 -0.2090 -0.1838 0.0252 0.0080 0.0080φ11,1 0.7798 0.7799 0.0000 0.7802 0.7625 0.7958 0.0333 0.0098 0.0098φ11,2 -0.9729 -0.9700 0.0028 -0.9717 -0.9849 -0.9502 0.0347 0.0109 0.0105θ11,1 0.3809 0.3782 0.0027 0.3767 0.3177 0.4402 0.1225 0.0368 0.0367θ12,1 0.5693 0.5671 0.0022 0.5680 0.4851 0.6432 0.1581 0.0483 0.0483θ11,2 0.6312 0.6265 0.0046 0.6259 0.5626 0.6908 0.1282 0.0390 0.0387θ12,2 -0.8522 -0.8519 0.0002 -0.8530 -0.9343 -0.7726 0.1617 0.0498 0.0498

Panel (C): Non Parametric Bootstrap

Coeff Value Mean Avg-Dev Median 5th-p 95th-p L 90% RMSE Std-Dev

µΦ,1 -0.0158 -0.0142 0.0015 -0.0164 -0.1296 0.1043 0.2339 0.0698 0.0698µΦ,2 0.0251 0.0259 0.0008 0.0258 -0.0221 0.0717 0.0938 0.0287 0.0287φ21,0 -0.1979 -0.1970 0.0009 -0.1969 -0.2106 -0.1833 0.0273 0.0081 0.0081φ11,1 0.7798 0.7793 0.0005 0.7794 0.7625 0.7961 0.0336 0.0099 0.0099φ11,2 -0.9729 -0.9697 0.0032 -0.9717 -0.9846 -0.9486 0.360 0.0117 0.0112θ11,1 0.3809 0.3796 0.0012 0.3787 0.3190 0.4456 0.1266 0.0384 0.0384θ12,1 0.5693 0.5686 0.0006 0.5678 0.4926 0.6500 0.1574 0.0473 0.0472θ11,2 0.6312 0.6276 0.0036 0.6281 0.5663 0.6875 0.1212 0.0371 0.0370θ12,2 -0.8522 -0.8525 0.0002 -0.8520 -0.9298 -0.7730 0.1568 0.0473 0.0473

Note – These estimates are obtained from 1000 replications. The eigen values of the model are all conjugate 0.4+-0.9i(0.98488578) and -0.26+-0.556780278i (0.624499800) for the autoregressive and moving average parts, respectively.

54

Page 58: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

1:E

mpi

rical

dist

ribut

ion

ofst

uden

tized

stat

istic

sas

soci

ated

with

the

seco

ndst

age

estim

ates

obta

ined

from

MC

sim

ulat

ion.

Not

e–

The

sehi

stog

ram

sar

eob

tain

edus

ing

1000

tria

lsw

ith20

0ob

serv

atio

nsa

mpl

esi

ze.

Eac

htr

ialc

onsi

sts

ofsi

mul

atin

gth

eps

eudo

data

with

the

true

mod

elth

enes

timat

ing

itsre

late

dpa

ram

eter

s.

55

Page 59: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

2:E

mpi

rical

dist

ribut

ion

ofst

uden

tized

stat

istic

sas

soci

ated

with

the

third

stag

ees

timat

esob

tain

edfr

omM

Csi

mul

atio

n.

Not

e–

The

sehi

stog

ram

sar

eob

tain

edus

ing

1000

tria

lsw

ith20

0ob

serv

atio

nsa

mpl

esi

ze.

Eac

htr

ialc

onsi

sts

ofsi

mul

atin

gth

eps

eudo

data

with

the

true

mod

elth

enes

timat

ing

itsre

late

dpa

ram

eter

s.

56

Page 60: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

3:E

mpi

rical

dist

ribut

ion

ofst

uden

tized

stat

istic

sas

soci

ated

with

the

seco

ndst

age

estim

ates

usin

gpa

ram

etric

boot

stra

pm

etho

d.

Not

e–

The

sehi

stog

ram

sar

eob

tain

edus

ing

1000

repl

icat

ions

with

200

obse

rvat

ion

sam

ple

size

.E

ach

tria

lcon

sist

sof

sim

ulat

ing

the

pseu

doda

taw

ithth

ees

timat

edm

odel

then

estim

atin

gits

rela

ted

para

met

ers.

57

Page 61: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

4:E

mpi

rical

dist

ribut

ion

ofst

uden

tized

stat

istic

sas

soci

ated

with

the

third

stag

ees

timat

esus

ing

para

met

ricbo

otst

rap

met

hod.

Not

e–

The

sehi

stog

ram

sar

eob

tain

edus

ing

1000

repl

icat

ions

with

200

obse

rvat

ion

sam

ple

size

.E

ach

tria

lcon

sist

sof

sim

ulat

ing

the

pseu

doda

taw

ithth

ees

timat

edm

odel

then

estim

atin

gits

rela

ted

para

met

ers.

58

Page 62: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

5:E

mpi

rical

dist

ribut

ion

ofst

uden

tized

stat

istic

sas

soci

ated

with

the

seco

ndst

age

estim

ates

usin

gno

npar

amet

ricbo

otst

rap

met

hod.

Not

e–

The

sehi

stog

ram

sar

eob

tain

edus

ing

1000

repl

icat

ions

with

200

obse

rvat

ion

sam

ple

size

.E

ach

tria

lcon

sist

sof

sim

ulat

ing

the

pseu

doda

taw

ithth

ees

timat

edm

odel

then

estim

atin

gits

rela

ted

para

met

ers.

59

Page 63: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

6:E

mpi

rical

dist

ribut

ion

ofst

uden

tized

stat

istic

sas

soci

ated

with

the

third

stag

ees

timat

esus

ing

nonp

aram

etric

boot

stra

pm

etho

d.

Not

e–

The

sehi

stog

ram

sar

eob

tain

edus

ing

1000

repl

icat

ions

with

200

obse

rvat

ion

sam

ple

size

.E

ach

tria

lcon

sist

sof

sim

ulat

ing

the

pseu

doda

taw

ithth

ees

timat

edm

odel

then

estim

atin

gits

rela

ted

para

met

ers.

60

Page 64: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

7:M

Csi

mul

atio

nof

the

empi

rical

prob

abili

tyde

nsity

func

tions

ofth

eof

stud

entiz

edst

atis

tics

asso

ciat

edw

ithth

eec

helo

nfo

rmVA

RM

Apa

ram

eter

estim

ates

.

Not

e–

The

sefin

itesa

mpl

edi

strib

utio

nsar

eob

tain

edfr

om10

00tr

ials

with

200

obse

rvat

ion

sam

ple

size

.S

uch

appr

oxim

atio

nsar

eob

tain

edby

rela

ting

the

mid

poin

tsas

soci

ated

with

the

diffe

rent

clus

ters

ofth

ehi

stog

ram

s,re

spec

tivel

y.T

heso

lidlin

ere

fers

toth

ese

cond

stag

ees

timat

esw

hile

the

dash

edon

ere

fers

toth

eth

irdst

age

estim

ates

.

61

Page 65: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

8:E

mpi

rical

prob

abili

tyde

nsity

func

tions

ofth

eof

stud

entiz

edst

atis

tics

asso

ciat

edw

ithth

epa

ram

etric

boot

stra

pec

helo

nfo

rmVA

RM

Apa

ram

eter

estim

ates

.

Not

e–

The

sefin

itesa

mpl

edi

strib

utio

nsar

eob

tain

edfr

om10

00tr

ials

with

200

obse

rvat

ion

sam

ple

size

.S

uch

appr

oxim

atio

nsar

eob

tain

edby

rela

ting

the

mid

poin

tsas

soci

ated

with

the

diffe

rent

clus

ters

ofth

ehi

stog

ram

s,re

spec

tivel

y.T

heso

lidlin

ere

fers

toth

ese

cond

stag

ees

timat

esw

hile

the

dash

edon

ere

fers

toth

eth

irdst

age

estim

ates

.

62

Page 66: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

9:E

mpi

rical

prob

abili

tyde

nsity

func

tions

ofth

eof

stud

entiz

edst

atis

tics

asso

ciat

edw

ithth

eno

npar

amet

ricbo

otst

rap

eche

lon

form

VAR

MA

para

met

eres

timat

es.

Not

e–

The

sefin

itesa

mpl

edi

strib

utio

nsar

eob

tain

edfr

om10

00tr

ials

with

200

obse

rvat

ion

sam

ple

size

.S

uch

appr

oxim

atio

nsar

eob

tain

edby

rela

ting

the

mid

poin

tsas

soci

ated

with

the

diffe

rent

clus

ters

ofth

ehi

stog

ram

s,re

spec

tivel

y.T

heso

lidlin

ere

fers

toth

ese

cond

stag

ees

timat

esw

hile

the

dash

edon

ere

fers

toth

eth

irdst

age

estim

ates

.

63

Page 67: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

10:

Em

piric

alpr

obab

ility

dens

ityfu

nctio

nsof

the

stud

entiz

edse

cond

stag

eec

helo

nfo

rmVA

RM

Apa

ram

eter

estim

ates

:A

com

paris

onbe

twee

nth

eth

rem

etho

ds.

Not

e–

The

sefin

itesa

mpl

edi

strib

utio

nsar

eob

tain

edfr

om10

00tr

ials

with

200

obse

rvat

ion

sam

ple

size

.S

uch

appr

oxim

atio

nsar

eob

tain

edby

rela

ting

the

mid

poin

tsas

soci

ated

with

the

diffe

rent

clus

ters

ofth

ehi

stog

ram

s,re

spec

tivel

y.T

heso

lidlin

ere

fers

toth

eM

Csi

mul

atio

npr

oced

ure

and

the

dash

edlin

ere

fers

topa

ram

etric

boot

stra

pm

etho

dw

hile

the

dotte

dan

dda

shed

one

stay

sfo

rth

eno

npar

amet

ricbo

otst

rap

tech

niqu

e.

64

Page 68: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Fig

ure

11:

Em

piric

alpr

obab

ility

dens

ityfu

nctio

nsof

the

stud

entiz

edth

irdst

age

eche

lon

form

VAR

MA

para

met

eres

timat

es:

Aco

mpa

rison

betw

een

the

thre

met

hods

.

Not

e–

The

sefin

itesa

mpl

edi

strib

utio

nsar

eob

tain

edfr

om10

00tr

ials

with

200

obse

rvat

ion

sam

ple

size

.S

uch

appr

oxim

atio

nsar

eob

tain

edby

rela

ting

the

mid

poin

tsas

soci

ated

with

the

diffe

rent

clus

ters

ofth

ehi

stog

ram

s,re

spec

tivel

y.T

heso

lidlin

ere

fers

toth

eM

Csi

mul

atio

npr

oced

ure

and

the

dash

edlin

ere

fers

topa

ram

etric

boot

stra

pm

etho

dw

hile

the

dotte

dan

dda

shed

one

stay

sfo

rth

eno

npar

amet

ricbo

otst

rap

tech

niqu

e.

65

Page 69: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Tabl

e5:

Com

para

tive

resu

ltson

confi

denc

ein

terv

alem

piric

alco

vera

gera

tes

ofec

helo

nfo

rmVA

RM

Apa

ram

eter

sat

ano

min

alle

velo

f95

perc

ent,

asso

ciat

edw

ithM

Csi

mul

atio

n,pa

ram

etric

boot

stra

pan

dno

npa

ram

etric

boot

stra

pm

etho

ds,r

espe

ctiv

ely.

Se

con

dS

tage

Est

ima

tes

Th

irdS

tage

Est

ima

tes

MC

Sim

ulat

ion

PA

Boo

tstr

apN

PB

oots

trap

MC

Sim

ulat

ion

PA

Boo

tstr

apN

PB

oots

trap

TC

oeff

αL

αC

CI

αU

αL

αC

CI

αU

αL

αC

CI

αU

100

µΦ

,116

.367

.216

.53.

793

.92.

43.

594

.12.

44.

891

.33.

92.

994

.22.

93.

494

.62.

Φ,2

3.9

91.0

5.1

3.5

93.4

3.1

2.5

93.6

3.9

3.1

93.0

3.9

3.0

93.8

3.2

2.2

93.9

3.9

φ21,0

5.2

90.9

3.9

3.3

93.2

3.5

4.1

92.5

3.4

3.7

92.1

4.2

2.8

94.2

3.0

3.9

93.0

3.1

φ11,1

6.0

88.3

5.7

1.3

96.7

2.0

1.9

95.3

2.8

2.7

94.0

3.3

1.2

96.9

1.9

2.1

94.6

3.3

φ11,2

7.8

91.5

0.7

1.9

96.5

1.6

2.1

96.9

1.0

4.9

93.4

1.7

3.0

95.1

1.9

3.4

95.6

1.0

θ 11,1

1.7

90.6

7.7

2.1

95.1

2.8

2.3

94.5

3.2

2.9

90.4

6.7

2.0

95.0

3.0

2.5

94.0

3.5

θ 12,1

2.0

94.2

3.8

2.6

95.6

1.8

2.8

95.2

2.0

2.7

91.5

5.8

2.8

94.6

2.6

2.8

93.7

3.5

θ 11,2

3.0

88.0

9.0

3.4

95.7

0.9

2.2

96.1

1.7

4.6

88.4

7.0

4.1

94.2

1.7

3.1

94.8

2.1

θ 12,2

4.3

89.7

6.0

2.0

95.6

2.4

3.0

94.5

2.5

3.1

93.0

3.9

2.5

95.6

1.9

2.5

95.4

2.1

200

µΦ

,115

.068

.516

.52.

195

.32.

62.

495

.81.

82.

694

.62.

82.

195

.22.

72.

195

.92.

Φ,2

4.7

91.6

3.7

4.1

93.9

2.0

3.2

95.5

1.3

3.2

94.4

2.4

3.5

95.3

1.2

3.2

95.6

1.2

φ21,0

5.0

91.6

3.4

4.1

92.8

3.1

5.4

91.5

3.1

3.1

94.0

2.9

4.3

93.0

2.7

5.1

92.7

2.2

φ11,1

6.7

87.1

6.2

2.3

96.2

1.5

2.6

95.0

2.4

2.8

94.9

2.3

2.3

95.5

2.2

2.5

95.3

2.2

φ11,2

7.0

91.2

1.8

1.6

96.4

2.0

1.8

96.3

1.9

2.7

95.7

1.6

2.6

95.1

2.3

2.5

95.2

2.3

θ 11,1

1.7

93.9

4.4

3.1

94.4

2.5

2.6

95.3

2.1

2.0

94.1

3.9

3.0

94.9

2.1

2.7

95.2

2.1

θ 12,1

1.8

96.2

2.0

2.2

96.3

1.5

2.2

95.8

2.0

2.3

94.7

3.0

1.5

97.0

1.5

2.8

95.3

1.9

θ 11,2

3.2

91.3

5.5

2.1

95.9

2.0

3.4

94.3

2.3

2.9

93.6

3.5

1.5

95.4

3.1

2.4

95.6

2.0

θ 12,2

2.5

93.2

4.3

3.1

94.8

2.1

1.9

96.0

2.1

2.9

94.9

2.2

2.9

95.0

2.1

1.8

96.6

1.6

300

µΦ

,116

.666

.716

.63.

695

.11.

33.

195

.51.

43.

395

.21.

53.

994

.71.

44.

094

.51.

Φ,2

4.4

92.4

3.2

3.1

94.4

2.5

3.1

93.3

3.6

2.8

94.9

2.3

2.6

94.3

3.1

2.6

94.2

3.2

φ21,0

2.9

93.9

3.2

2.7

96.3

1.0

2.5

95.8

1.7

1.4

96.7

1.9

2.3

96.6

1.1

2.0

96.2

1.8

φ11,1

6.9

85.6

7.5

2.3

95.4

2.3

2.3

94.6

3.1

2.7

95.1

2.2

1.3

96.4

2.3

1.8

96.3

1.9

φ11,2

5.5

92.5

2.0

2.4

95.7

1.9

1.3

96.8

1.9

2.9

95.8

1.3

2.9

94.5

2.6

2.0

96.2

1.8

θ 11,1

1.8

95.0

3.2

2.7

96.0

1.3

3.7

94.1

2.2

1.9

94.8

3.3

2.0

95.2

2.8

2.4

95.6

2.0

θ 12,1

1.3

96.9

1.8

2.2

96.0

1.8

1.7

96.6

1.7

2.9

94.7

2.4

3.3

94.3

2.4

2.5

95.6

1.9

θ 11,2

3.0

93.1

3.9

3.2

95.6

1.2

3.4

95.4

1.2

3.6

93.8

2.6

3.6

94.0

2.4

4.0

94.8

1.2

θ 12,2

2.7

93.4

3.9

3.0

94.3

2.7

2.7

94.9

2.4

2.7

94.6

2.7

2.7

94.6

2.7

2.1

96.1

1.8

Not

e–

The

seem

piric

alle

vels

are

obta

ined

over

1000

repl

icat

ions

for

each

met

hod.

PA

stay

sfo

rpa

ram

etric

met

hod

whi

leN

Pre

fers

tono

npar

amet

ricm

etho

d.

66

Page 70: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Tabl

e6:

Com

para

tive

resu

ltson

confi

denc

ein

terv

alem

piric

alco

vera

gera

tes

ofec

helo

nfo

rmVA

RM

Apa

ram

eter

sat

ano

min

alle

velo

f95

perc

ent,

asso

ciat

edw

ithM

Csi

mul

atio

n,pa

ram

etric

boot

stra

pan

dno

npa

ram

etric

boot

stra

pm

etho

ds,r

espe

ctiv

ely

(con

tinue

d).

Se

con

dS

tage

Est

ima

tes

Th

irdS

tage

Est

ima

tes

MC

Sim

ulat

ion

PA

Boo

tstr

apN

PB

oots

trap

MC

Sim

ulat

ion

PA

Boo

tstr

apN

PB

oots

trap

TC

oeff

αL

αC

CI

αU

αL

αC

CI

αU

αL

αC

CI

αU

400

µΦ

,116

.466

.517

.11.

895

.92.

32.

895

.02.

22.

395

.32.

41.

696

.02.

42.

395

.81.

Φ,2

2.6

92.7

4.7

1.8

95.4

2.8

2.1

95.8

2.1

1.9

94.8

3.3

1.5

95.5

3.0

1.9

96.2

1.9

φ21,0

3.5

92.8

3.7

3.1

95.2

1.7

3.0

95.6

1.4

2.4

94.9

2.7

3.0

95.3

1.7

3.1

95.8

1.1

φ11,1

5.9

87.3

6.8

1.7

97.3

1.0

2.4

96.9

0.7

2.1

95.7

2.2

2.1

96.1

1.8

2.6

95.5

1.9

φ11,2

6.3

91.3

2.4

1.5

96.5

2.0

1.5

96.4

2.1

3.1

95.2

1.7

2.2

95.7

2.1

2.1

95.8

2.1

θ 11,1

1.0

97.0

2.0

1.8

96.7

1.5

1.3

96.4

2.3

1.6

95.9

2.5

2.0

96.5

1.5

2.0

95.8

2.2

θ 12,1

1.2

97.3

1.5

2.3

94.7

3.0

2.1

96.1

1.8

2.6

95.1

2.3

3.2

93.6

3.2

3.2

95.1

1.7

θ 11,2

2.7

93.5

3.8

2.7

94.5

2.8

2.9

94.6

2.5

2.1

95.2

2.7

2.1

95.7

2.2

2.3

95.6

2.1

θ 12,2

2.2

94.7

3.1

2.0

96.2

1.8

1.9

95.5

2.6

2.9

95.3

1.8

3.2

94.8

2.0

2.9

95.3

1.8

500

µΦ

,117

.765

.217

.13.

093

.83.

21.

895

.13.

12.

595

.02.

52.

694

.92.

52.

295

.12.

Φ,2

3.6

92.9

3.5

1.9

95.4

2.7

2.1

95.4

2.5

2.2

95.4

2.4

1.7

95.7

2.6

1.7

95.9

2.4

φ21,0

2.5

93.7

3.8

1.6

96.1

2.3

0.9

96.1

3.0

1.1

96.1

2.8

1.2

96.8

2.0

0.8

96.5

2.7

φ11,1

7.7

84.1

8.2

2.5

93.4

4.1

4.3

92.1

3.6

2.8

94.9

2.3

2.7

94.3

3.0

2.7

95.6

1.7

φ11,2

4.8

92.8

2.4

1.6

95.0

3.4

1.9

94.6

3.5

2.0

96.1

1.9

0.9

96.5

2.6

1.0

95.9

3.1

θ 11,1

1.1

96.1

2.8

1.9

95.0

3.1

1.4

95.6

3.0

1.5

96.0

2.5

1.4

96.3

2.3

1.3

96.7

2.0

θ 12,1

0.9

97.3

1.8

1.9

95.3

2.8

1.5

95.7

2.8

1.8

95.3

2.9

2.1

95.8

2.1

1.6

95.5

2.9

θ 11,2

2.8

92.5

4.7

4.3

92.9

2.8

3.7

92.7

3.6

2.5

94.3

3.2

2.4

95.0

2.6

2.5

94.4

3.1

θ 12,2

3.4

92.5

4.1

4.3

93.4

2.3

2.9

93.0

4.1

3.1

94.3

2.6

1.8

95.8

2.4

2.4

94.5

3.1

1000

µΦ

,117

.665

.117

.33.

194

.12.

83.

094

.03.

02.

694

.82.

63.

294

.22.

62.

994

.42.

Φ,2

4.3

91.7

4.0

3.7

93.9

2.4

3.4

94.4

2.2

2.7

94.6

2.7

4.0

93.8

2.2

3.5

94.5

2.0

φ21,0

3.2

92.4

4.4

3.4

94.2

2.4

2.9

94.2

2.9

2.5

95.1

2.4

3.5

94.3

2.2

2.6

94.7

2.7

φ11,1

6.8

86.2

7.0

2.2

95.7

2.1

2.5

95.3

2.2

1.8

95.2

3.0

1.9

94.7

3.4

1.3

95.6

3.1

φ11,2

4.4

92.9

2.7

2.4

94.7

2.9

2.1

96.3

1.6

2.3

96.5

1.2

3.2

94.4

2.4

2.7

95.0

2.3

θ 11,1

1.5

96.4

2.1

2.6

95.9

1.5

2.4

95.5

2.1

1.7

95.8

2.5

2.0

96.3

1.7

1.9

95.8

2.3

θ 12,1

1.4

96.8

1.8

3.3

94.7

2.0

2.5

95.4

2.1

2.3

95.1

2.6

2.5

94.9

2.6

2.9

94.5

2.6

θ 11,2

2.5

94.5

3.0

3.7

94.7

1.6

2.3

96.5

1.2

2.0

95.6

2.4

2.8

95.2

2.0

1.9

96.1

2.0

θ 12,2

2.4

94.5

3.1

2.1

96.3

1.6

3.3

95.1

1.6

2.1

96.0

1.9

2.9

95.5

1.6

3.9

94.2

1.9

Not

e–

The

seem

piric

alle

vels

are

obta

ined

over

1000

repl

icat

ions

for

each

met

hod.

PA

stay

sfo

rpa

ram

etric

met

hod

whi

leN

Pre

fers

tono

npar

amet

ricm

etho

d.

67

Page 71: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Table 7: Example of finite sample bootstrap t-critical values for two sided echelon formVARMA parameter confidence sets at a nominal level of 95 percent with sample of 200observations

Two-step estimates Three-step estimatesPA Bootstrap NP Bootstrap PA Bootstrap NP Bootstrap

Coeff t∗0.025 t∗0.975 t∗0.025 t∗0.975 t∗0.025 t∗0.975 t∗0.025 t∗0.975

µΦ,1 -3.8991 4.1272 -4.2830 3.9534 -2.0801 2.1728 -2.1731 2.0720µΦ,2 -2.2746 2.1323 -2.4419 2.2051 -2.1845 1.9666 -2.1636 2.0055φ21,0 -2.1049 2.0178 -1.9349 2.0210 -1.8388 2.0320 -1.7464 2.1508φ11,1 -2.7714 2.5877 -2.6178 2.5239 -2.0226 2.0852 -2.0182 2.0096φ11,2 -1.8776 2.6804 -1.9538 2.6213 -1.8462 2.0655 -1.8305 2.0761θ11,1 -2.2425 1.6976 -2.3462 1.7879 -2.2370 1.9044 -2.2106 1.9573θ12,1 -2.0887 1.8100 -1.9807 1.8236 -2.2812 2.1323 -2.1791 1.8843θ11,2 -2.6892 2.1519 -2.5144 1.9812 -2.0630 2.2781 -2.3277 2.0573θ12,2 -2.3413 1.9019 -2.3454 2.1825 -2.0148 2.0386 -2.1580 2.1860

Note – These bootstrap empirical critical values are computed from 1000 replications for each method. PA stays forparametric method while NP refers to nonparametric method.

.

68

Page 72: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

References

Berk, K. N. (1974), ‘Consistent autoregressive spectral estimates’,The Annals of Statistics2(3), 489–502.

Berkowitz, J., Birgean, I. and Kilian, L. (1999), On the finite-sample accuracy of nonparametricresampling algorithms for economic time series, Technical Report, Finance andEconomics Discussion Series, Broad of Governors of the Federal Reserve System (U.S.) 04, Department of Economics, University of Michigan, Ann Arbor, MI 48109-1220,[email protected].

Berkowitz, J. and Kilian, L. (2000), ‘Recent developments in bootstrapping time series’,Econometric Reviews19(1), 1–48.

Bickel, P. J. and Freedman, D. A. (1981), ‘Some asymptotic theory for the bootstrap’,TheAnnals of Statistics9(6), 1196–1217.

Caner, M. and Kilian, L. (1999), Size distortions of tests of the null hypothesis ofstationarity: Evidence and implications for the PPP debate, Technical report, Departmentof Economics, University of Michigan, Ann Arbor, MI 48109-1220, [email protected].

Deistler, M. and Hannan, E. J. (1981), ‘Some properties of the parametrization of ARMAsystems with unknown order’,Journal of Multivariate Analysis11, 474–484.

Dufour, J.-M. and Jouini, T. (2005), Asymptotic distribution of a simple linear estimator forVARMA models in echelon form,in P. Duchesne and B. Remillard, eds, ‘StatisticalModeling and Analysis for Complex Data Problems’, Springer, New York, chapter 11,pp. 209–240.

Dufour, J.-M. and Jouini, T. (2005c),Simplified Order Selection and Efficient Linear Estimationfor VARMA Models with a Macroeconomic Application, Universite de Montreal.

Dufour, J.-M. and Jouini, T. (2006), ‘Finite-sample simulation-based inference in VAR modelswith application to granger causality testing’,Journal of Econometrics135, 229–254.

Dufour, J.-M. and Jouini, T. (2008),Simplified Order Selection and Efficient Linear Estimationfor VARMA Models with a Macroeconomic Application, Universite de Montreal.

Efron, B. and Tibshirani, R. J. (1993),An Introduction to Bootstrap, Chapman and Hall, Inc,USA.

Gredenhoff, M. and Jacobson, T. (2001), ‘Bootstrap testing linear restrictions on cointegratingvectors’,Journal of Business and Economic Statistics19(1), 63–72.

Hall, P. (1992),The Bootstrap and Edgeworth Expansion, Springer-Verlag, New York, Inc.

Hall, P. and Horowitz, J. L. (1996), ‘Bootstrap critical values for tests based ongeneralized-method-moments estimators’,Econometrica64(4), 891–916.

69

Page 73: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Hannan, E. J. (1969), ‘The identification of vector mixed autoregressive-moving averagesystems’,Biometrika57, 223–225.

Hannan, E. J. (1970),Multiple Time Series, Wiley, New York.

Hannan, E. J. (1976), ‘The identification and parameterization of ARMAX and state spaceforms’, Econometrica44(4), 713–723.

Hannan, E. J. (1979),The Statistical Theory of Linear Systems, Vol. 2 of Developments inStatistics, krishnaiah, p. r. edn, Academic Press, New York, chapter 2, pp. 83–121.

Hannan, E. J. and Deistler, M. (1988),The Statistical Theory of Linear Systems, John Wiley &Sons, New York.

Hannan, E. J. and Rissanen, J. (1982), ‘Recursive estimation of mixed autoregressive-movingaverage order’,Biometrika69(1), 81–94. Printed in Great Britain.

Inoue, A. and Kilian, L. (1999), Bootstrapping smooth functions of slope parameters andinnovation variances invar (∞) models, Technical report, Department of Economics,University of Michigan, Ann Arbor, MI 48109-1220, [email protected].

Inoue, A. and Kilian, L. (1999a), Bootstrapping autoregressive process with possible unit roots,Technical report, Department of Economics, University of Michigan, Ann Arbor, MI48109-1220, [email protected].

Kilian, L. (1998), ‘Small-sample confidence intervals for impulse response functions’,TheReview of Economics and Statistics80(2), 218–230.

Kilian, L. (1998a), ‘Confidence intervals for impulse responses under departures fromnormality’, Econometric Reviews17(1), 1–29.

Kilian, L. (1998b), Pitfalls in constructing bootstrap confidence intervals for asymtoticallypivotal statistics, Working Papers, Michigan-Center for Research on Economic andSocial Theory 04, Department of Economics, University of Michigan, Ann Arbor, MI48109-1220, [email protected].

Kilian, L. and Demiroglu, U. (1997), Residual-based bootstrap tests for normality inautoregresssions, Working Paper Series 14, Department of Economics, University ofMichigan, Ann Arbor, MI 48109-1220, [email protected].

Kim, J. H. (2001), ‘Bootstrap-after-bootstrap prediction intervals for autoregressive models’,Journal of Business and Economic Statistics19(1), 117–128.

Kreiss, J.-P. and Franke, J. (1992), ‘Bootstrapping stationary autoregressive moving-averagemodels’,Journal of Time Series Analysis13(4), 297–317.

Kunsch, H. R. (1989), ‘The jackknife and the bootstrap for general stationary observations’,The Annals of Statistics17(3), 1217–1241.

70

Page 74: Bootstrapping stationary invertible VARMA models in echelon … · 2011-04-20 · is rather unknown, nonparametric bootstrap techniques such as the “blockwise bootstrap” of K¨unsch

Lewis, R. and Reinsel, G. C. (1985), ‘Prediction of multivariate time series by autoregressivemodel fitting’,Journal of Multivariate Analysis16, 393–411.

Lutkepohl, H. (1991),Introduction to Multiple Time Series Analysis, Springer Verlag, NewYork.

Paparoditis, E. (1996), ‘Bootstrapping autoregressive and moving average parameter estimatesof infinite order vector autoregressive process’,Journal of Multivariate Analysis57(0034), 277–296.

Paparoditis, E. and Streitberg, B. (1991), ‘Order identification statistics in stationaryautoregressive moving-average models: Vector autocorrelations and the bootstrap’,Journal of Time Series Analysis13(5), 415–434.

Politis, D. N. and Romano, J. P. (1992), ‘A general resampling scheme for triangular arrays ofα-mixing random variables with application to the problem of spectral density estimation’,The Annals of Statistics20, 1985–2007.

Stoffer, D. S. and Wall, K. D. (1991), ‘Bootstrapping state-space models: Gaussianmaximum likelihood estimation and the kalman filter’,Journal of the American StatisticalAssociation86(416), 1024–1033.

71