
Consistent testing for a constant copula under strong mixing based on the tapered block multiplier technique

    Martin Ruppert

    November 4, 2011

    Abstract

Considering multivariate strongly mixing time series, nonparametric tests for a constant copula with specified or unspecified change point (candidate) are derived; the tests are consistent against general alternatives. A tapered block multiplier technique based on serially dependent multiplier random variables is provided to estimate p-values of the test statistics. Size and power of the tests in finite samples are evaluated with Monte Carlo simulations.

Key words: Change point test; Copula; Empirical copula process; Nonparametric estimation; Time series; Strong mixing; Multiplier central limit theorem.

    AMS 2000 subject class.: Primary 62G05, 62G10, 60F05, Secondary 60G15, 62E20.

Graduate School of Risk Management and Department of Economic and Social Statistics, University of Cologne, Albertus-Magnus-Platz, 50923 Köln, Germany; Email: [email protected], Tel: +49 (0) 221 4706656, Fax: +49 (0) 221 4705074.


    1. Introduction

Over the last decade, copulas have become a standard tool in modern risk management. The copula of a continuous random vector is a function which uniquely determines the dependence structure linking the marginal distribution functions. Copulas play a pivotal role for, e.g., measuring multivariate association [see 45], pricing multivariate options [see 48], and allocating financial assets [see 34]. The latter two references emphasize that time variation of copulas has an important impact on financial engineering applications.

Evidence for time-varying dependence structures can indirectly be drawn from functionals of the copula, e.g., Spearman's $\rho$, as suggested by Gaißer et al. [20] and Wied et al. [51]. Investigating time variation of the copula itself, Busetti and Harvey [8] consider a nonparametric quantile-based test for a constant copula. Semiparametric tests for time variation of the parameter within a prespecified family of one-parameter copulas are proposed by Dias and Embrechts [15] and Giacomini et al. [23]. Guegan and Zhang [24] combine tests for constancy of the copula (on a given set of vectors on its domain), the copula family, and the parameter. The assumption of independent and identically distributed pseudo-observations

is generally made in the latter references. With respect to financial time series, the estimation of a GARCH model represents a frequently chosen option to approximate this assumption using the residuals obtained after GARCH filtration. The effect of replacing unobserved innovations by estimated residuals, however, has to be taken into account; therefore, specific techniques for residuals are required [cf., e.g., 11]. Exploring this approach, Rémillard [36] investigates a nonparametric change point test for the copula of residuals in stochastic volatility models. Avoiding the need to specify any parametric model, Fermanian and Scaillet [19] consider purely nonparametric estimation of copulas for time series under strict stationarity and strong mixing conditions on the multivariate process. A recent generalization of this framework is proposed by van Kampen and Wied [50], who assume the univariate processes to be strictly stationary but relax the assumption of a constant copula and suggest a quantile-based test for a constant copula under strong mixing assumptions.

We introduce nonparametric Cramér-von Mises, Kuiper, and Kolmogorov-Smirnov tests for a constant copula under strong mixing assumptions. The tests extend those for time-constant quantiles by assessing constancy of the copula on its domain; in consequence, they are consistent under general alternatives. Depending on the object of investigation, tests with a specified or unspecified change point (candidate) are introduced. Whereas the former setting requires a hypothesis on the change point location, it allows us to relax the assumption of strictly stationary univariate processes. P-values of the tests are estimated based on a generalization of the multiplier bootstrap technique introduced in Rémillard and Scaillet [38] to the case of strongly mixing time series. The idea is comparable to block bootstrap methods; however, instead of sampling blocks with replacement, we generate blocks of serially dependent multiplier random variables. For a general introduction to the latter idea, we refer to Bühlmann [6] and Paparoditis and Politis [32].

This paper is organized as follows: in Section 2, we discuss convergence of the empirical copula process under strong mixing. A result of Doukhan et al. [17] is generalized to establish the asymptotic behavior of the empirical copula process under nonrestrictive smoothness assumptions based on serially dependent observations. Furthermore, a tapered block multiplier bootstrap technique for inference on the weak limit of the empirical copula process is derived and assessed in finite samples. Tests for a constant copula with specified or unspecified change point (candidate) relying on this technique are established in Section 3.


    2. Nonparametric inference based on serially dependent observations

As a basis for the tests introduced in the next section, the result of Segers [46] on the asymptotic behavior of the empirical copula process under nonrestrictive smoothness assumptions is generalized to enable its applicability to serially dependent observations. Furthermore, we introduce a multiplier-based resampling method for this particular setting, establish its asymptotic behavior, and investigate its performance in finite samples.

    2.1. Asymptotic theory

Consider a vector-valued process $(X_j)_{j\in\mathbb{Z}}$ with $X_j=(X_{j,1},\dots,X_{j,d})$ taking values in $\mathbb{R}^d$. Let $F_i$ be the distribution function of $X_{j,i}$ for all $j\in\mathbb{Z}$, $i=1,\dots,d$, and let $F$ be the joint distribution of $X_j$ for all $j\in\mathbb{Z}$. Assume that all marginal distribution functions are continuous. Then, according to Sklar's Theorem [47], there exists a unique copula $C$ such that $F(x_1,\dots,x_d)=C(F_1(x_1),\dots,F_d(x_d))$ for all $(x_1,\dots,x_d)\in\mathbb{R}^d$. The $\sigma$-fields generated by $X_j$, $j\le t$, and $X_j$, $j\ge t$, are denoted by $\mathcal{F}_t=\sigma\{X_j,\ j\le t\}$ and $\mathcal{F}^t=\sigma\{X_j,\ j\ge t\}$, respectively. We define
$$\alpha\big(\mathcal{F}_s,\mathcal{F}^{s+r}\big) = \sup_{A\in\mathcal{F}_s,\,B\in\mathcal{F}^{s+r}}\big|P(A\cap B)-P(A)P(B)\big|.$$
The strong- (or $\alpha$-) mixing coefficient $\alpha_X$ corresponding to the process $(X_j)_{j\in\mathbb{Z}}$ is given by $\alpha_X(r)=\sup_{s\ge 0}\alpha(\mathcal{F}_s,\mathcal{F}^{s+r})$. The process $(X_j)_{j\in\mathbb{Z}}$ is said to be strongly mixing if $\alpha_X(r)\to 0$ for $r\to\infty$. This type of weak dependence covers a broad range of time-series models. Consider the following examples, cf. Doukhan [16] and Carrasco and Chen [10]:

Example 1. i) AR(1) processes $(X_j)_{j\in\mathbb{Z}}$ given by
$$X_j = \phi X_{j-1} + \epsilon_j,$$
where $(\epsilon_j)_{j\in\mathbb{Z}}$ is a sequence of independent and identically distributed continuous innovations with mean zero. For $|\phi|<1$, the process is strictly stationary and strongly mixing with exponential decay of $\alpha_X(r)$.

ii) GARCH(1, 1) processes $(X_j)_{j\in\mathbb{Z}}$,
$$X_j = \sigma_j\epsilon_j,\qquad \sigma_j^2 = \omega + \beta\,\sigma_{j-1}^2 + \gamma\,X_{j-1}^2,\qquad(1)$$
where $(\epsilon_j)_{j\in\mathbb{Z}}$ is a sequence of independent and identically distributed continuous innovations, independent of $\sigma_0^2$, with mean zero and variance one. For $\beta+\gamma<1$, the process is strictly stationary and strongly mixing with exponential decay of $\alpha_X(r)$.

Let $X_1,\dots,X_n$ denote a sample from $(X_j)_{j\in\mathbb{Z}}$. A simple nonparametric estimator of the unknown copula $C$ is given by the empirical copula, which is first considered by Rüschendorf [43] and Deheuvels [13]. Depending on whether the marginal distribution functions are assumed to be known or unknown, we define
$$C_n(u) := \frac{1}{n}\sum_{j=1}^{n}\prod_{i=1}^{d}\mathbf{1}\{U_{j,i}\le u_i\}\quad\text{for all } u\in[0,1]^d,$$
$$\hat C_n(u) := \frac{1}{n}\sum_{j=1}^{n}\prod_{i=1}^{d}\mathbf{1}\{\hat U_{j,i}\le u_i\}\quad\text{for all } u\in[0,1]^d,\qquad(2)$$


with observations $U_{j,i}=F_i(X_{j,i})$ and pseudo-observations $\hat U_{j,i}=\hat F_i(X_{j,i})$ for all $j=1,\dots,n$ and $i=1,\dots,d$, where $\hat F_i(x)=\frac{1}{n}\sum_{j=1}^{n}\mathbf{1}\{X_{j,i}\le x\}$ for all $x\in\mathbb{R}$. Unless otherwise noted, the marginal distribution functions are assumed to be unknown and the estimator $\hat C_n$ is used. In addition to the practical relevance of this assumption, Genest and Segers [22] prove that pseudo-observations $\hat U_{j,i}=\hat F_i(X_{j,i})$ permit more efficient inference on the copula than observations $U_{j,i}$ for a broad class of copulas.

Doukhan et al. [17] investigate dependent observations and establish the asymptotic behavior of the empirical copula process, defined by $\sqrt{n}\{\hat C_n - C\}$, assuming the copula to possess continuous partial derivatives on $[0,1]^d$. Segers [46] points out that many popular families of copulas (e.g., the Gaussian, Clayton, and Gumbel families) do not satisfy the assumption of continuous first partial derivatives on $[0,1]^d$. He establishes the asymptotic behavior of the empirical copula for serially independent observations under the weaker condition
$$D_iC(u)\ \text{exists and is continuous on}\ \big\{u\in[0,1]^d : u_i\in(0,1)\big\}\quad\text{for all } i=1,\dots,d.\qquad(3)$$
Under Condition (3), the domain of the partial derivatives can be extended to $u\in[0,1]^d$ by
$$D_iC(u) = \begin{cases}\lim_{h\to 0}\dfrac{C(u+he_i)-C(u)}{h} & \text{for all } u\in[0,1]^d,\ 0<u_i<1,\\[6pt] \limsup_{h\downarrow 0}\dfrac{C(u+he_i)}{h} & \text{for all } u\in[0,1]^d,\ u_i=0,\\[6pt] \limsup_{h\downarrow 0}\dfrac{C(u)-C(u-he_i)}{h} & \text{for all } u\in[0,1]^d,\ u_i=1,\end{cases}\qquad(4)$$
and for all $i=1,\dots,d$, where $e_i$ denotes the $i$-th column of the $d\times d$ identity matrix. A result of Bücher [4] permits establishing the asymptotic behavior of the empirical copula process in the case of serially dependent observations. The following Theorem is based on nonrestrictive smoothness assumptions and mild assumptions on the strong mixing rate:

Theorem 1. Consider observations $X_1,\dots,X_n$, drawn from a strictly stationary process $(X_j)_{j\in\mathbb{Z}}$ satisfying the strong mixing condition $\alpha_X(r)=O(r^{-a})$ for some $a>1$. If $C$ satisfies Condition (3), then the empirical copula process converges weakly in the space of uniformly bounded functions on $[0,1]^d$ equipped with the uniform metric, $\big(\ell^\infty([0,1]^d),\|\cdot\|_\infty\big)$:
$$\sqrt{n}\big\{\hat C_n(u) - C(u)\big\}\ \overset{w}{\rightsquigarrow}\ \mathbb{G}_C(u),$$
where $\mathbb{G}_C$ represents a Gaussian process given by
$$\mathbb{G}_C(u) = \mathbb{B}_C(u) - \sum_{i=1}^{d}D_iC(u)\,\mathbb{B}_C\big(u^{(i)}\big)\quad\text{for all } u\in[0,1]^d.\qquad(5)$$
The vector $u^{(i)}$ denotes the vector where all coordinates, except the $i$-th coordinate of $u$, are replaced by 1. The process $\mathbb{B}_C$ is a tight centered Gaussian process on $[0,1]^d$ with covariance function
$$\mathrm{Cov}\big(\mathbb{B}_C(u),\mathbb{B}_C(v)\big) = \sum_{j\in\mathbb{Z}}\mathrm{Cov}\big(\mathbf{1}\{U_0\le u\},\,\mathbf{1}\{U_j\le v\}\big)\quad\text{for all } u,v\in[0,1]^d.\qquad(6)$$

The proof is given in Section 5. Notice that the covariance structure as given in Equation (6) depends on the entire process $(X_j)_{j\in\mathbb{Z}}$. If the marginal distribution functions $F_i$, $i=1,\dots,d$, are known, then the limiting process reduces to $\mathbb{B}_C$. In this particular case, Condition (3) is not necessary to establish weak convergence of the empirical copula process.
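To make the estimators of Equation (2) concrete, the following short sketch (our own illustration, not code from the paper; all function names are ours) computes pseudo-observations via the marginal empirical distribution functions and evaluates the empirical copula $\hat C_n$ at a point $u$, assuming a sample without ties.

```python
import numpy as np

def pseudo_observations(x):
    """Rank-transform each margin: U_hat[j, i] = F_hat_i(X[j, i]) = rank / n (no ties assumed)."""
    n = x.shape[0]
    return (np.argsort(np.argsort(x, axis=0), axis=0) + 1.0) / n

def empirical_copula(u_hat, u):
    """C_hat_n(u) = (1/n) * sum_j prod_i 1{U_hat[j, i] <= u_i}."""
    return float(np.mean(np.all(u_hat <= np.asarray(u, dtype=float), axis=1)))

# toy example: a bivariate sample with positive dependence
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 2))
x = np.column_stack([z[:, 0], 0.7 * z[:, 0] + 0.3 * z[:, 1]])
u_hat = pseudo_observations(x)
print(empirical_copula(u_hat, (0.5, 0.5)))  # close to C(0.5, 0.5) of the underlying copula
```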


    2.2. Resampling techniques

In this section, a generalized multiplier bootstrap technique is introduced which is applicable in the case of serially dependent observations. Moreover, a generalized asymptotic result is obtained for the (moving) block bootstrap, which serves as a benchmark for the new technique in the subsequent finite sample assessment.

Fermanian et al. [18] investigate the empirical copula process for independent and identically distributed observations $X_1,\dots,X_n$ and prove consistency of the nonparametric bootstrap method which is based on sampling with replacement from $X_1,\dots,X_n$. We denote a bootstrap sample by $X^B_1,\dots,X^B_n$ and define
$$C^B_n(u) := \frac{1}{n}\sum_{j=1}^{n}\mathbf{1}\{\hat U^B_j\le u\}\quad\text{for all } u\in[0,1]^d,\qquad \hat U^B_{j,i} := \frac{1}{n}\sum_{k=1}^{n}\mathbf{1}\{X^B_{k,i}\le X^B_{j,i}\}$$
for all $j=1,\dots,n$ and $i=1,\dots,d$. Notice that the bootstrap empirical copula can equivalently be expressed based on multinomially $(n, n^{-1},\dots,n^{-1})$ distributed random variables $W=(W_1,\dots,W_n)$:
$$C^W_n(u) := \frac{1}{n}\sum_{j=1}^{n}W_j\,\mathbf{1}\{\hat U^W_j\le u\}\quad\text{for all } u\in[0,1]^d,\qquad \hat U^W_{j,i} := \frac{1}{n}\sum_{k=1}^{n}W_k\,\mathbf{1}\{X_{k,i}\le X_{j,i}\}$$
for all $j=1,\dots,n$ and $i=1,\dots,d$.

Modes of convergence which allow us to investigate weak convergence of the empirical copula process conditional on an observed sample are considered next. The multiplier random variables represent the remaining source of stochastic influence in this conditional setting. The bootstrap empirical copula process converges weakly conditional on $X_1,\dots,X_n$ in probability in $\big(\ell^\infty([0,1]^d),\|\cdot\|_\infty\big)$ if the following two criteria are satisfied, see van der Vaart and Wellner [49], Section 3.9.3, Kosorok [30], Section 2.2.3, and Bücher and Dette [5]:
$$\sup_{h\in\mathrm{BL}_1(\ell^\infty([0,1]^d))}\Big|E_W\,h\big(\sqrt{n}\{C^W_n(u)-\hat C_n(u)\}\big) - E\,h\big(\mathbb{G}_C(u)\big)\Big|\ \overset{P}{\to}\ 0,\qquad(7)$$
where $E_W$ denotes expectation with respect to $W$ conditional on $X_1,\dots,X_n$. Furthermore,
$$E_W\,h\big(\sqrt{n}\{C^W_n(u)-\hat C_n(u)\}\big)^{*} - E_W\,h\big(\sqrt{n}\{C^W_n(u)-\hat C_n(u)\}\big)_{*}\ \overset{P}{\to}\ 0.\qquad(8)$$
The function $h$ is assumed to be uniformly bounded with Lipschitz norm bounded by one, i.e., $h\in\mathrm{BL}_1(\ell^\infty([0,1]^d))$, the set of all functions $f:\ell^\infty([0,1]^d)\to\mathbb{R}$ with $\|f\|_\infty\le 1$ and $|f(\beta)-f(\gamma)|\le\|\beta-\gamma\|_\infty$ for all $\beta,\gamma\in\ell^\infty([0,1]^d)$. Moreover, $h(\cdot)^{*}$ and $h(\cdot)_{*}$ denote the measurable majorant and minorant with respect to the joint data (i.e., $X_1,\dots,X_n$ and $W$). In the case of independent and identically distributed observations satisfying Condition (3), validity of criteria (7) and (8) can be proven [see 18, 4]. Hence, the bootstrap empirical copula process converges weakly conditional on $X_1,\dots,X_n$ in probability, which is denoted by
$$\sqrt{n}\big\{C^W_n(u) - \hat C_n(u)\big\}\ \overset{P}{\underset{W}{\rightsquigarrow}}\ \mathbb{G}_C(u).$$


Weak convergence conditional on $X_1,\dots,X_n$ almost surely is defined analogously.

Whereas the bootstrap is consistent for independent and identically distributed samples, consistency generally fails for serially dependent samples. In consequence, a block bootstrap method is proposed by Künsch [31]. Given the sample $X_1,\dots,X_n$, the block bootstrap method requires blocks of size $l_B=l_B(n)$, with $l_B(n)\to\infty$ as $n\to\infty$ and $l_B(n)=o(n)$, consisting of consecutive observations
$$B_{h,l_B} = \{X_{h+1},\dots,X_{h+l_B}\}\quad\text{for all } h=0,\dots,n-l_B.$$
We assume $n=k\,l_B$ (else the last block is truncated) and simulate $H_1,\dots,H_k$ independent and uniformly distributed random variables on $\{0,\dots,n-l_B\}$. The block bootstrap sample is given by the observations of the $k$ blocks $B_{H_1,l_B},\dots,B_{H_k,l_B}$, i.e.,
$$X^B_1 = X_{H_1+1},\ \dots,\ X^B_{l_B} = X_{H_1+l_B},\ X^B_{l_B+1} = X_{H_2+1},\ \dots,\ X^B_n = X_{H_k+l_B}.$$
Denote the block bootstrap empirical copula by $C^B_n(u)$. Its asymptotic behavior is given next:

Theorem 2. Consider observations $X_1,\dots,X_n$, drawn from a strictly stationary process $(X_j)_{j\in\mathbb{Z}}$ satisfying $\sum_{r=1}^{\infty}(r+1)^{16(d+1)}\,\alpha_X(r)<\infty$. Assume that $l_B(n)=O(n^{1/2-\varepsilon})$ for $0<\varepsilon<1/2$. If $C$ satisfies Condition (3), then the block bootstrap empirical copula process converges weakly conditional on $X_1,\dots,X_n$ in probability in $\big(\ell^\infty([0,1]^d),\|\cdot\|_\infty\big)$:
$$\sqrt{n}\big\{C^B_n(u) - \hat C_n(u)\big\}\ \overset{P}{\underset{H}{\rightsquigarrow}}\ \mathbb{G}_C(u).$$

The proof is given in Section 5. The previous theorem weakens the smoothness assumptions of a result obtained by Gaißer et al. [21]; it is derived by means of an asymptotic result on the block bootstrap for general distribution functions established by Bühlmann [6], Theorem 3.1, and a representation of the copula as a composition of functions [18, 4]. Based on bootstrap samples $s=1,\dots,S$, a set of block bootstrap realizations to estimate the asymptotic behavior of the empirical copula process is obtained by
$$\hat{\mathbb{G}}^{B(s)}_{C,n}(u) = \sqrt{n}\big\{C^{B(s)}_n(u) - \hat C_n(u)\big\}\quad\text{for all } u\in[0,1]^d.$$
A process related to the bootstrap empirical copula process can be formulated if both the assumption of multinomially distributed random variables is dropped and the marginal distribution functions are left unaltered during the resampling procedure. Consider independent and identically distributed multiplier random variables $\xi_1,\dots,\xi_n$ with finite positive mean and variance, additionally satisfying $\|\xi_j\|_{2,1} := \int_0^\infty\sqrt{P(|\xi_j|>x)}\,dx<\infty$ for all $j=1,\dots,n$ (where the last condition is slightly stronger than that of a finite second moment). Replacing the multinomial multiplier random variables $W_1,\dots,W_n$ by $\xi_1/\bar\xi,\dots,\xi_n/\bar\xi$ (ensuring realizations having arithmetic mean one) yields the multiplier (empirical copula) process, which converges weakly conditional on $X_1,\dots,X_n$ in probability in $\big(\ell^\infty([0,1]^d),\|\cdot\|_\infty\big)$, see Bücher and Dette [5]:
$$\sqrt{n}\bigg(\frac{1}{n}\sum_{j=1}^{n}\frac{\xi_j}{\bar\xi}\,\mathbf{1}\{\hat U_j\le u\} - \hat C_n(u)\bigg)\ \overset{P}{\underset{\xi}{\rightsquigarrow}}\ \mathbb{B}_C(u).$$
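As a minimal illustration of the multiplier process just displayed (our own sketch for the serially independent case; the choice of standard exponential multipliers, which are positive with mean and variance one, is ours), one replicate can be generated as follows.

```python
import numpy as np

def multiplier_replicate(u_hat, grid, rng):
    """One draw of sqrt(n) * ((1/n) * sum_j (xi_j / xi_bar) * 1{U_hat_j <= u} - C_hat_n(u))
    on the rows of grid; the xi_j are i.i.d. standard exponential (positive, mean 1, variance 1)."""
    n = u_hat.shape[0]
    xi = rng.exponential(1.0, size=n)
    w = xi / xi.mean()                                            # arithmetic mean one
    ind = np.all(u_hat[:, None, :] <= grid[None, :, :], axis=2)   # n x m indicator matrix
    return np.sqrt(n) * (w @ ind / n - ind.mean(axis=0))
```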


For general considerations of multiplier empirical processes, we refer to the monographs of van der Vaart and Wellner [49] and Kosorok [30]. An interesting property of the multiplier process is its convergence to an independent copy of $\mathbb{B}_C$, both for known as well as for unknown marginal distribution functions. The process is introduced by Scaillet [44] in a bivariate context; a general multivariate version and its unconditional weak convergence are investigated by Rémillard and Scaillet [38]. Bücher and Dette [5] find that the multiplier technique yields more precise results than the nonparametric bootstrap in mean as well as in mean squared error when estimating the asymptotic covariance of the empirical copula process based on independent and identically distributed samples. Hence, a generalization of the multiplier technique is considered next. It is motivated by the fact that this technique is inconsistent when applied to serially dependent samples. Inoue [27] develops a block multiplier process for general distribution functions based on dependent data in which the same multiplier random variable is used for a block of observations. We focus on copulas and consider a refinement of this technique. More precisely, a tapered block multiplier (empirical copula) process is introduced based on the work of Bühlmann [6], Chapter 3.3, and Paparoditis and Politis [32]: the main idea is to consider a sample $\xi_1,\dots,\xi_n$ from a process $(\xi_j)_{j\in\mathbb{Z}}$ of serially dependent tapered block multiplier random variables, satisfying:

A1 $(\xi_j)_{j\in\mathbb{Z}}$ is independent of the observation process $(X_j)_{j\in\mathbb{Z}}$.

A2 $(\xi_j)_{j\in\mathbb{Z}}$ is a positive $c\,l(n)$-dependent process, i.e., for fixed $j\in\mathbb{Z}$, $\xi_j$ is independent of $\xi_{j+h}$ for all $|h|\ge c\,l(n)$, where $c$ is a constant and $l(n)\to\infty$ as $n\to\infty$ while $l(n)=o(n)$.

A3 $(\xi_j)_{j\in\mathbb{Z}}$ is strictly stationary. For all $j,h\in\mathbb{Z}$, assume $E[\xi_j]=\mu>0$ and $\mathrm{Cov}[\xi_j,\xi_{j+h}]=\sigma^2\,v(h/l(n))$, where $v$ is a function symmetric about zero; without loss of generality, we consider $\mu=1$ and $v(0)=1$. All central moments of $\xi_j$ are supposed to be bounded given the sample size $n$.

Weak convergence of the tapered block multiplier process conditional on a sample $X_1,\dots,X_n$ almost surely is established in the following theorem:

Theorem 3. Consider observations $X_1,\dots,X_n$, drawn from a strictly stationary process $(X_j)_{j\in\mathbb{Z}}$ satisfying $\sum_{r=1}^{\infty}(r+1)^{c}\,\alpha_X(r)<\infty$, where $c=\max\{8d+12,\ 2/\varepsilon+1\}$. Let the tapered block multiplier process $(\xi_j)_{j\in\mathbb{Z}}$ satisfy A1, A2, A3 with block length $l(n)\to\infty$, where $l(n)=O(n^{1/2-\varepsilon})$ for $0<\varepsilon<1/2$. The tapered block multiplier empirical copula process converges weakly conditional on $X_1,\dots,X_n$ almost surely in $\big(\ell^\infty([0,1]^d),\|\cdot\|_\infty\big)$:
$$\sqrt{n}\bigg(\frac{1}{n}\sum_{j=1}^{n}\frac{\xi_j}{\bar\xi}\,\mathbf{1}\{\hat U_j\le u\} - \hat C_n(u)\bigg)\ \overset{a.s.}{\underset{\xi}{\rightsquigarrow}}\ \mathbb{B}^M_C(u),$$
where $\mathbb{B}^M_C(u)$ is an independent copy of $\mathbb{B}_C(u)$.

The proof is given in Section 5.

Remark 1. The multiplier random variables can as well be assumed to be centered around zero [cf. 30, Proof of Theorem 2.6]. Define $\xi^0_j := \xi_j - \mu$ and let $\mathbb{B}^{M}_{C,n}(u) := \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\big(\xi_j/\bar\xi - 1\big)\mathbf{1}\{\hat U_j\le u\}$ denote the tapered block multiplier process above. Then
$$\mathbb{B}^{M,0}_{C,n}(u) = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\big(\xi^0_j - \bar\xi^0\big)\,\mathbf{1}\{\hat U_j\le u\} = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\big(\xi_j - \bar\xi\big)\,\mathbf{1}\{\hat U_j\le u\} = \bar\xi\,\mathbb{B}^{M}_{C,n}(u)$$
for all $u\in[0,1]^d$. This is an asymptotically equivalent form of the above tapered block multiplier process:
$$\sup_{u\in[0,1]^d}\big|\mathbb{B}^{M,0}_{C,n}(u) - \mathbb{B}^{M}_{C,n}(u)\big| = \sup_{u\in[0,1]^d}\big|\big(\bar\xi-1\big)\,\mathbb{B}^{M}_{C,n}(u)\big|\ \overset{P}{\to}\ 0,$$
since $\mathbb{B}^{M}_{C,n}(u)$ tends to a tight centered Gaussian limit. The assumption of centered multiplier random variables is abbreviated as A3b in the following.

There are numerous ways to define tapered block multiplier processes $(\xi_j)_{j\in\mathbb{Z}}$ satisfying the above assumptions; in the following, a basic version having uniform weights and a refined version with triangular weights are investigated and compared:

Example 2. A simple form of the tapered block multiplier random variables can be defined based on moving average processes. Consider the function $\kappa_1$ which assigns uniform weights given by
$$\kappa_1(h) := \begin{cases}\dfrac{1}{2l(n)-1} & \text{for all } |h| < l(n),\\[4pt] 0 & \text{else.}\end{cases}$$
Note that $\kappa_1$ is a discrete kernel, i.e., it is symmetric about zero and $\sum_{h\in\mathbb{Z}}\kappa_1(h)=1$. The tapered block multiplier process is defined by
$$\xi_j = \sum_{h=-\infty}^{\infty}\kappa_1(h)\,w_{j+h}\quad\text{for all } j\in\mathbb{Z},\qquad(9)$$
where $(w_j)_{j\in\mathbb{Z}}$ is an independent and identically distributed sequence of, e.g., Gamma$(q,q)$ random variables with $q := 1/[2l(n)-1]$. The expectation of $\xi_j$ is then given by $E[\xi_j]=1$, its variance by $\mathrm{Var}[\xi_j]=1$ for all $j\in\mathbb{Z}$. For all $j\in\mathbb{Z}$ and $|h|<2l(n)-1$, direct calculations further yield the covariance function $\mathrm{Cov}(\xi_j,\xi_{j+h}) = \{2l(n)-1-|h|\}/\{2l(n)-1\}$, which decreases linearly as $h$ increases in absolute value. The resulting sequence $(\xi_j)_{j\in\mathbb{Z}}$ satisfies A1, A2, and A3. Exploring Remark 1, tapered block multiplier random variables can as well be defined based on sequences $(w_j)_{j\in\mathbb{Z}}$ of, e.g., Rademacher-type random variables $w_j$ characterized by $P(w_j=1/\sqrt{q})=P(w_j=-1/\sqrt{q})=0.5$ or Normal random variables $w_j\sim N(0,1/q)$. In either one of these two cases, the resulting sequence $(\xi_j)_{j\in\mathbb{Z}}$ satisfies A1, A2, and A3b. Figure 1 shows the kernel function $\kappa_1$ and simulated trajectories of Rademacher-type tapered block multiplier random variables.
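A possible simulation of the tapered block multiplier sequence of Example 2 is sketched below (our own illustration; the function name is ours): draw i.i.d. Gamma$(q,q)$ innovations and form the moving average of Equation (9) with the uniform kernel $\kappa_1$.

```python
import numpy as np

def tapered_multipliers_uniform(n, l, rng):
    """Tapered block multipliers xi_j = sum_{|h| < l} w_{j+h} / (2l - 1),
    with w ~ Gamma(q, q), q = 1 / (2l - 1), so that E[xi] = 1 and Var[xi] = 1."""
    q = 1.0 / (2 * l - 1)
    # pad by l - 1 on both sides so every xi_j has a full window of innovations
    w = rng.gamma(shape=q, scale=1.0 / q, size=n + 2 * (l - 1))
    kernel = np.full(2 * l - 1, 1.0 / (2 * l - 1))
    return np.convolve(w, kernel, mode="valid")   # length n, (2l - 1)-dependent sequence

rng = np.random.default_rng(1)
xi = tapered_multipliers_uniform(10_000, l=3, rng=rng)
print(xi.mean(), xi.var())   # both close to 1
```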

Example 3. Following Bühlmann [6], let us define the kernel function by
$$\kappa_2(h) := \max\big\{0,\ \{1-|h|/l(n)\}/l(n)\big\}$$
for all $h\in\mathbb{Z}$. The tapered multiplier process $(\xi_j)_{j\in\mathbb{Z}}$ follows Equation (9), where $(w_j)_{j\in\mathbb{Z}}$ is an independent and identically distributed sequence of Gamma$(q,q)$ random variables with $q = 2/\{3l(n)\} + 1/\{3l(n)^3\}$. The expectation of $\xi_j$ is given by
$$E[\xi_j] = \frac{1}{l(n)} + 2\sum_{h=1}^{l(n)}\frac{1}{l(n)}\Big(1-\frac{h}{l(n)}\Big) = 1.$$


Figure 1: Tapered block multiplier Monte Carlo simulation. Kernel function $\kappa_1(h)$ (left) and simulated trajectories of Rademacher-type tapered block multiplier random variables $\xi_1,\dots,\xi_{100}$ (right) with block length $l(n)=3$ (solid line) and $l(n)=6$ (dashed line), respectively.

Figure 2: Tapered block multiplier Monte Carlo simulation. Kernel function $\kappa_2(h)$ (left) and simulated trajectories of Rademacher-type tapered block multiplier random variables $\xi_1,\dots,\xi_{100}$ (right) with block length $l(n)=3$ (solid line) and $l(n)=6$ (dashed line), respectively.

For the variance, direct calculations yield
$$\mathrm{Var}[\xi_j] = \bigg[\frac{1}{l(n)^2} + 2\sum_{h=1}^{l(n)}\frac{\{l(n)-h\}^2}{l(n)^4}\bigg]\,\mathrm{Var}[w_\cdot] = \bigg[\frac{2}{3l(n)} + \frac{1}{3l(n)^3}\bigg]\,\mathrm{Var}[w_\cdot] = 1$$
for all $j\in\mathbb{Z}$. For any $j\in\mathbb{Z}$ and $|h|<2l(n)-1$, the covariance function $\mathrm{Cov}(\xi_j,\xi_{j+h})$ can be described by a parabola centered at zero and opening downward [for details, see 6, Section 6.2]. The resulting sequence $(\xi_j)_{j\in\mathbb{Z}}$ satisfies A1, A2, and A3. Figure 2 provides an illustration of the kernel function $\kappa_2$ as well as simulated trajectories of Rademacher-type tapered block multiplier random variables; in this setting, $(\xi_j)_{j\in\mathbb{Z}}$ satisfies A1, A2, and A3b. Notice the smoothing which is driven by the choice of kernel function and the block length $l(n)$. This effect can be further explored using more sophisticated kernel functions, e.g., with bell shape; this is left for further research.
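The triangular kernel of Example 3 plugs into the same moving-average construction; the following sketch (ours, reusing Gamma innovations) builds the corresponding multipliers and can be used to check numerically that mean and variance are close to one.

```python
import numpy as np

def tapered_multipliers_triangular(n, l, rng):
    """xi_j = sum_h kappa2(h) * w_{j+h}, kappa2(h) = max(0, (1 - |h|/l) / l),
    with w ~ Gamma(q, q), q = 2/(3l) + 1/(3l**3), so E[xi] = 1 and Var[xi] = 1."""
    q = 2.0 / (3 * l) + 1.0 / (3 * l ** 3)
    h = np.arange(-(l - 1), l)                      # kappa2 vanishes for |h| >= l
    kernel = (1.0 - np.abs(h) / l) / l
    w = rng.gamma(shape=q, scale=1.0 / q, size=n + 2 * (l - 1))
    return np.convolve(w, kernel, mode="valid")

rng = np.random.default_rng(2)
xi = tapered_multipliers_triangular(10_000, l=3, rng=rng)
print(xi.mean(), xi.var())   # both close to 1
```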

Given observations $X_1,\dots,X_n$ of a strictly stationary process $(X_j)_{j\in\mathbb{Z}}$ satisfying the assumptions of Theorem 3 and further satisfying Condition (3), estimation of Equation (5) requires three steps: consider $s=1,\dots,S$ samples $\xi^{(s)}_1,\dots,\xi^{(s)}_n$ from a tapered block multiplier process $(\xi_j)_{j\in\mathbb{Z}}$ satisfying A1, A2, A3. A set of copies of the tight centered Gaussian process $\mathbb{B}_C$ is obtained by
$$\hat{\mathbb{B}}^{M(s)}_{C,n}(u) = \frac{1}{\sqrt{n}}\sum_{j=1}^{n}\bigg(\frac{\xi^{(s)}_j}{\bar\xi^{(s)}}-1\bigg)\mathbf{1}\{\hat U_j\le u\}\quad\text{for all } u\in[0,1]^d,\ s=1,\dots,S.\qquad(10)$$
The required adjustments for tapered block multiplier random variables satisfying A1, A2, A3b are easily deduced from Remark 1. Finite differencing yields a nonparametric estimator of the first-order partial derivatives $D_iC(u)$:
$$\widehat{D_iC}(u) = \begin{cases}\dfrac{\hat C_n(u+he_i)-\hat C_n(u-he_i)}{2h} & \text{for all } u\in[0,1]^d,\ h\le u_i\le 1-h,\\[6pt] \dfrac{\hat C_n(u+2he_i)}{2h} & \text{for all } u\in[0,1]^d,\ 0\le u_i<h,\\[6pt] \dfrac{\hat C_n(u)-\hat C_n(u-2he_i)}{2h} & \text{for all } u\in[0,1]^d,\ 1-h<u_i\le 1,\end{cases}\qquad(11)$$
where $h=1/\sqrt{n}$ and $e_i$ denotes the $i$-th column of the $d\times d$ identity matrix [see 46, 4]. Combining Equations (10) and (11), we obtain
$$\hat{\mathbb{G}}^{M(s)}_{C,n}(u) = \hat{\mathbb{B}}^{M(s)}_{C,n}(u) - \sum_{i=1}^{d}\widehat{D_iC}(u)\,\hat{\mathbb{B}}^{M(s)}_{C,n}\big(u^{(i)}\big)\quad\text{for all } u\in[0,1]^d,\ s=1,\dots,S.\qquad(12)$$
An application of Segers [46], Proposition 3.2, proves that $\hat{\mathbb{G}}^{M(s)}_{C,n}(u)$ converges weakly to an independent copy of the limiting process $\mathbb{G}_C$ for all $s=1,\dots,S$.

    2.3. Finite sample behavior

Having established the asymptotic theory, we evaluate and compare the finite sample properties of the (moving) block bootstrap and the introduced tapered block multiplier technique when estimating the limiting covariance of the empirical copula process in MC simulations. The results of this section complement those of Bücher and Dette [5] and Bücher [4] on bootstrap approximations for the empirical copula process based on independent and identically distributed observations.

We first simulate independent and identically distributed observations from the bivariate Clayton copula given by
$$C^{Cl}_{\theta}(u_1,u_2) = \big(u_1^{-\theta}+u_2^{-\theta}-1\big)^{-1/\theta},\quad \theta>0,\qquad(13)$$
for $\theta\in\{1,4\}$, i.e., Kendall's $\tau=\theta/(\theta+2)\in\{1/3,2/3\}$; as a second family of copulas, we consider the bivariate family of Gumbel copulas
$$C^{Gu}_{\theta}(u_1,u_2) = \exp\Big(-\big[\{-\ln(u_1)\}^{\theta} + \{-\ln(u_2)\}^{\theta}\big]^{1/\theta}\Big),\quad \theta\ge 1,\qquad(14)$$
for $\theta\in\{1.5,3\}$, i.e., Kendall's $\tau=1-1/\theta\in\{1/3,2/3\}$. In order to assess the performance of the two methods when applied to serially dependent data, two examples of strongly mixing time series are considered (cf. Example 1). Firstly, the class of AR(1) processes: we assume that a sample of $n$ independent random variates
$$U_j = (U_{j,1},U_{j,2}),\quad j=1,\dots,n,\qquad(15)$$


is drawn from one of the aforementioned copulas. Define
$$\epsilon_j = \big(\Phi^{-1}(U_{j,1}),\ \Phi^{-1}(U_{j,2})\big)\quad\text{for all } j=1,\dots,n.\qquad(16)$$
We obtain a sample $X_1,\dots,X_n$ of an AR(1) process having Normal residuals by the initialization $X_1=\epsilon_1$ and recursive calculation of
$$X_j = \phi X_{j-1} + \epsilon_j\quad\text{for all } j=2,\dots,n.\qquad(17)$$
The initially chosen coefficient of the lagged variable is $\phi=0.5$.

For comparison, we also investigate observations from bivariate copula-GARCH(1, 1) processes having a specified static copula [see 33]. Based on the MC simulation of $\epsilon_j$ for all $j=1,\dots,n$ as given in Equation (16), heteroscedastic standard deviations $\sigma_{j,i}$, $i=1,2$, are obtained by initializing
$$\sigma_{0,i} = \sqrt{\frac{\omega_i}{1-\beta_i-\gamma_i}}\quad\text{for } i=1,2,$$
using the unconditional GARCH(1, 1) standard deviation, and by iterative calculation of the general process given in Equation (1) with the following parameterizations:
$$X_{j,1} = \sigma_{j,1}\epsilon_{j,1},\qquad \sigma^2_{j,1} = 0.012 + 0.919\,\sigma^2_{j-1,1} + 0.072\,X^2_{j-1,1},\qquad(18)$$
$$X_{j,2} = \sigma_{j,2}\epsilon_{j,2},\qquad \sigma^2_{j,2} = 0.037 + 0.868\,\sigma^2_{j-1,2} + 0.115\,X^2_{j-1,2},\qquad(19)$$
for all $j=1,\dots,n$. The considered coefficients are estimated by Jondeau et al. [28] to model the volatility of S&P 500 and DAX daily (log-)returns in an empirical application, which shows the practical relevance of this specific parameter choice.
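The data-generating processes of this Monte Carlo study can be sketched as follows (our own illustration): the Clayton sampler uses the standard conditional-inversion algorithm, the GARCH recursions follow our reading of Equations (18)-(19) in which the second coefficient multiplies the lagged variance and the third the lagged squared observation, and the recursion is started from the unconditional variance.

```python
import numpy as np
from scipy.stats import norm

def clayton_sample(n, theta, rng):
    """Clayton copula sample via conditional inversion (standard algorithm)."""
    u1, v = rng.uniform(size=n), rng.uniform(size=n)
    u2 = (u1 ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return np.column_stack([u1, u2])

def ar1_filter(eps, phi):
    """AR(1) recursion X_j = phi * X_{j-1} + eps_j with X_1 = eps_1, cf. Eq. (17)."""
    x = np.empty_like(eps)
    x[0] = eps[0]
    for j in range(1, len(eps)):
        x[j] = phi * x[j - 1] + eps[j]
    return x

def garch11_filter(eps, omega, beta, gamma):
    """GARCH(1, 1): X_j = sigma_j * eps_j, sigma_j^2 = omega + beta*sigma_{j-1}^2 + gamma*X_{j-1}^2."""
    x = np.empty_like(eps)
    sigma2 = omega / (1.0 - beta - gamma)          # unconditional variance as starting value
    for j in range(len(eps)):
        x[j] = np.sqrt(sigma2) * eps[j]
        sigma2 = omega + beta * sigma2 + gamma * x[j] ** 2
    return x

rng = np.random.default_rng(3)
u = clayton_sample(200, theta=1.0, rng=rng)        # Eq. (15): copula sample
eps = norm.ppf(u)                                  # Eq. (16): Normal innovations
x_ar = np.column_stack([ar1_filter(eps[:, i], phi=0.5) for i in range(2)])
x_garch = np.column_stack([
    garch11_filter(eps[:, 0], 0.012, 0.919, 0.072),   # Eq. (18)
    garch11_filter(eps[:, 1], 0.037, 0.868, 0.115),   # Eq. (19)
])
```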

In the case of independent observations, the theoretical covariance $\mathrm{Cov}(\mathbb{G}_C(u),\mathbb{G}_C(v))$, calculated at $u=v\in\{(1/3,1/3),(2/3,1/3),(1/3,2/3),(2/3,2/3)\}$, serves as a benchmark. However, the theoretical covariance is unknown after the transformations carried out to generate dependent observations: though any chosen copula is invariant under componentwise application of the (strictly monotonic) inverse standard Normal distribution function, the transformations required to obtain samples from AR(1) or GARCH(1, 1) processes may not leave the copula invariant. The following Lemma provides an alternative benchmark based on consistent estimation of the unknown theoretical covariance structure:

Lemma 1. Consider a sample $X_1,\dots,X_N$. Assume that $N\to\infty$ and choose $n$ such that $n(N)\to\infty$ as well as $n(N)=o(N)$. Under the assumptions of Theorem 1, a consistent estimator for $\mathrm{Cov}(\mathbb{G}_C(u),\mathbb{G}_C(v))$ is provided by
$$\widehat{\mathrm{Cov}}\big(\mathbb{G}_C(u),\mathbb{G}_C(v)\big) := \mathrm{Cov}\Big(\sqrt{n}\big\{\hat C_n(u)-\hat C_N(u)\big\},\ \sqrt{n}\big\{\hat C_n(v)-\hat C_N(v)\big\}\Big)\quad\text{for all } u,v\in[0,1]^d.$$

The proof is given in Section 5. We apply Lemma 1 in $10^6$ MC replications with $n=1{,}000$ and $N=10^6$ to provide an approximation of the true covariance.
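A sketch of how the benchmark of Lemma 1 could be approximated (our own illustration, with far fewer replications than the paper uses): simulate replications of a long sample of length $N$, evaluate $\sqrt{n}\{\hat C_n - \hat C_N\}$ at the points of interest using the first $n$ observations, and take empirical covariances across replications. The argument simulate_sample is a user-supplied data generator such as the one sketched above.

```python
import numpy as np

def _empirical_copula(x, points):
    """Empirical copula of the sample x (margins estimated by ranks) at each row of points."""
    points = np.asarray(points, dtype=float)
    m = len(x)
    u_hat = (np.argsort(np.argsort(x, axis=0), axis=0) + 1.0) / m
    return np.mean(np.all(u_hat[:, None, :] <= points[None, :, :], axis=2), axis=0)

def covariance_benchmark(simulate_sample, points, n, N, reps, rng):
    """Monte Carlo approximation of Cov(G_C(u), G_C(v)) via Lemma 1: the empirical covariance
    of sqrt(n) * {C_hat_n - C_hat_N}, evaluated at the given points, across replications."""
    vals = np.empty((reps, len(points)))
    for r in range(reps):
        x = simulate_sample(N, rng)                      # one long sample, shape (N, d)
        vals[r] = np.sqrt(n) * (_empirical_copula(x[:n], points)
                                - _empirical_copula(x, points))
    return np.cov(vals, rowvar=False)
```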

In practice, it is not possible to iterate MC simulations; the limiting tight centered Gaussian process $\mathbb{G}_C$ is instead estimated conditional on a sample $X_1,\dots,X_n$. Therefore, we apply


Table 1: Mean and MSE (·10⁴) Monte Carlo results. I.i.d. and AR(1) settings, sample size n = 100 and 1,000 Monte Carlo replications. For each replication, we perform S = 2,000 tapered block multiplier (Mi) repetitions with Normal multiplier random variables, kernel function κi, i = 1, 2, block length lM = 3, and block bootstrap (B) repetitions with block length lB = 5.

(u1, u2)               (1/3, 1/3)       (1/3, 2/3)       (2/3, 1/3)       (2/3, 2/3)
                       Mean    MSE      Mean    MSE      Mean    MSE      Mean    MSE

i.i.d. setting
Clayton     True       0.0486           0.0338           0.0338           0.0508
(θ = 1)     Approx.    0.0487           0.0338           0.0338           0.0508
            M2         0.0496  1.6323   0.0344  1.2449   0.0345  1.3456   0.0528  1.6220
            M1         0.0494  1.7949   0.0342  1.2934   0.0343  1.4128   0.0524  1.7910
            B          0.0599  2.8286   0.0432  2.4103   0.0429  2.1375   0.0643  3.1538
Clayton     True       0.0254           0.0042           0.0042           0.0389
(θ = 4)     Approx.    0.0255           0.0042           0.0042           0.0390
            M2         0.0259  0.9785   0.0051  0.3715   0.0048  0.3662   0.0407  1.6324
            M1         0.0257  1.0104   0.0050  0.3656   0.0048  0.3673   0.0404  1.7048
            B          0.0383  2.5207   0.0100  0.7222   0.0097  0.6662   0.0533  3.7255
Gumbel      True       0.0493           0.0336           0.0336           0.0484
(θ = 1.5)   Approx.    0.0493           0.0335           0.0335           0.0485
            M2         0.0514  1.3914   0.0346  1.2657   0.0340  1.2583   0.0497  1.4946
            M1         0.0510  1.5530   0.0344  1.3390   0.0338  1.3275   0.0495  1.6948
            B          0.0616  2.8334   0.0429  2.1366   0.0432  2.2273   0.0620  3.2134
Gumbel      True       0.0336           0.0058           0.0058           0.0293
(θ = 3)     Approx.    0.0335           0.0058           0.0058           0.0294
            M2         0.0355  1.1359   0.0064  0.4427   0.0063  0.3851   0.0307  0.9819
            M1         0.0353  1.1991   0.0063  0.4409   0.0062  0.3832   0.0306  1.0261
            B          0.0470  2.7885   0.0120  0.8396   0.0122  0.8959   0.0437  2.9355

AR(1) setting with φ = 0.5
Clayton     Approx.    0.0599           0.0408           0.0409           0.0629
(θ = 1)     M2         0.0602  3.1797   0.0394  2.4919   0.0398  2.5903   0.0625  2.6297
            M1         0.0598  3.5305   0.0391  2.6090   0.0396  2.7410   0.0620  2.9826
            B          0.0699  3.4783   0.0496  2.9893   0.0492  2.8473   0.0761  4.0660
Clayton     Approx.    0.0329           0.0064           0.0064           0.0432
(θ = 4)     M2         0.0347  2.0017   0.0071  0.6040   0.0072  0.6666   0.0460  2.8656
            M1         0.0344  2.0672   0.0071  0.5869   0.0072  0.6583   0.0458  3.0571
            B          0.0472  3.5289   0.0132  1.0990   0.0128  0.9658   0.0585  4.2965
Gumbel      Approx.    0.0617           0.0408           0.0409           0.0605
(θ = 1.5)   M2         0.0631  3.0587   0.0406  2.7512   0.0398  2.4187   0.0600  3.0433
            M1         0.0626  3.4252   0.0402  2.8739   0.0395  2.4920   0.0594  3.3824
            B          0.0735  3.7410   0.0501  3.2025   0.0502  3.1150   0.0735  4.0521
Gumbel      Approx.    0.0385           0.0061           0.0061           0.0345
(θ = 3)     M2         0.0425  2.5441   0.0069  0.5841   0.0070  0.5796   0.0375  2.1518
            M1         0.0422  2.7239   0.0069  0.5706   0.0070  0.5836   0.0373  2.2381
            B          0.0535  3.8803   0.0127  1.0720   0.0125  1.0526   0.0507  4.1894


Table 2: Mean and MSE (·10⁴) Monte Carlo results. I.i.d. and AR(1) settings, sample size n = 200 and 1,000 Monte Carlo replications. For each replication, we perform S = 2,000 tapered block multiplier (Mi) repetitions with Normal multiplier random variables, kernel function κi, i = 1, 2, block length lM = 4, and block bootstrap (B) repetitions with block length lB = 7.

(u1, u2)               (1/3, 1/3)       (1/3, 2/3)       (2/3, 1/3)       (2/3, 2/3)
                       Mean    MSE      Mean    MSE      Mean    MSE      Mean    MSE

i.i.d. setting
Clayton     True       0.0486           0.0338           0.0338           0.0508
(θ = 1)     Approx.    0.0487           0.0338           0.0338           0.0508
            M2         0.0494  1.2579   0.0346  0.8432   0.0343  0.8423   0.0522  1.0987
            M1         0.0490  1.3749   0.0345  0.8880   0.0341  0.8719   0.0519  1.2074
            B          0.0562  1.7155   0.0402  1.2154   0.0395  1.2279   0.0593  1.8137
Clayton     True       0.0254           0.0042           0.0042           0.0389
(θ = 4)     Approx.    0.0255           0.0042           0.0042           0.0390
            M2         0.0261  0.5811   0.0044  0.1889   0.0048  0.2027   0.0390  0.9932
            M1         0.0260  0.6024   0.0044  0.1876   0.0048  0.2008   0.0388  1.0702
            B          0.0347  1.6381   0.0076  0.3197   0.0073  0.2908   0.0489  2.1425
Gumbel      True       0.0493           0.0336           0.0336           0.0484
(θ = 1.5)   Approx.    0.0493           0.0335           0.0335           0.0485
            M2         0.0512  1.1513   0.0347  0.8157   0.0347  0.8113   0.0504  1.1697
            M1         0.0511  1.2909   0.0345  0.8653   0.0346  0.8692   0.0503  1.2811
            B          0.0577  1.7178   0.0402  1.2233   0.0403  1.1701   0.0574  1.8245
Gumbel      True       0.0336           0.0058           0.0058           0.0293
(θ = 3)     Approx.    0.0335           0.0058           0.0058           0.0294
            M2         0.0361  0.8340   0.0067  0.2577   0.0063  0.2505   0.0320  0.8678
            M1         0.0359  0.9107   0.0067  0.2576   0.0062  0.2444   0.0318  0.9078
            B          0.0435  1.7448   0.0095  0.3917   0.0094  0.3804   0.0388  1.5320

AR(1) setting with φ = 0.5
Clayton     Approx.    0.0599           0.0408           0.0409           0.0629
(θ = 1)     M2         0.0615  2.3213   0.0413  1.8999   0.0414  1.9235   0.0646  2.2534
            M1         0.0608  2.5553   0.0408  1.9594   0.0410  2.0094   0.0638  2.4889
            B          0.0676  2.6206   0.0468  1.9802   0.0472  2.0278   0.0715  2.5494
Clayton     Approx.    0.0329           0.0064           0.0064           0.0432
(θ = 4)     M2         0.0344  1.2685   0.0074  0.3890   0.0073  0.4108   0.0460  1.7936
            M1         0.0341  1.3034   0.0073  0.3824   0.0072  0.4053   0.0455  1.8566
            B          0.0432  2.1693   0.0105  0.5427   0.0101  0.4901   0.0546  2.8808
Gumbel      Approx.    0.0617           0.0408           0.0409           0.0605
(θ = 1.5)   M2         0.0649  2.2833   0.0432  2.1528   0.0433  1.9800   0.0646  3.0536
            M1         0.0639  2.4663   0.0426  2.1943   0.0428  2.0367   0.0638  3.2590
            B          0.0703  2.5133   0.0468  1.8163   0.0467  1.7087   0.0693  2.5655
Gumbel      Approx.    0.0385           0.0061           0.0061           0.0345
(θ = 3)     M2         0.0430  1.9156   0.0079  0.4884   0.0074  0.3950   0.0405  2.5846
            M1         0.0426  1.9784   0.0078  0.4806   0.0073  0.3858   0.0400  2.6083
            B          0.0492  2.2152   0.0102  0.5375   0.0099  0.4832   0.0455  2.1534


Table 3: Mean and MSE (·10⁴) Monte Carlo results. AR(1) and GARCH(1, 1) settings, sample size n = 200 and 1,000 Monte Carlo replications. For each replication, we perform S = 2,000 tapered block multiplier (Mi) repetitions with Normal multiplier random variables, kernel function κi, i = 1, 2, block length lM = 4, and block bootstrap (B) repetitions with block length lB = 7.

(u1, u2)               (1/3, 1/3)       (1/3, 2/3)       (2/3, 1/3)       (2/3, 2/3)
                       Mean    MSE      Mean    MSE      Mean    MSE      Mean    MSE

AR(1) setting with φ = 0.25
Clayton     Approx.    0.0506           0.0350           0.0350           0.0530
(θ = 1)     M2         0.0521  1.5319   0.0360  1.0527   0.0361  1.1258   0.0545  1.4689
            M1         0.0517  1.6574   0.0357  1.1003   0.0358  1.1662   0.0541  1.6083
            B          0.0591  2.0394   0.0415  1.4084   0.0417  1.5256   0.0624  2.1027
Clayton     Approx.    0.0272           0.0048           0.0048           0.0395
(θ = 4)     M2         0.0285  0.8366   0.0052  0.2290   0.0052  0.2208   0.0413  1.3315
            M1         0.0283  0.8792   0.0051  0.2273   0.0052  0.2200   0.0410  1.3858
            B          0.0375  1.9937   0.0084  0.3726   0.0085  0.3646   0.0495  2.1996
Gumbel      Approx.    0.0518           0.0349           0.0348           0.0507
(θ = 1.5)   M2         0.0549  1.4150   0.0373  1.1973   0.0377  1.2459   0.0550  2.0067
            M1         0.0545  1.5154   0.0370  1.2339   0.0373  1.2871   0.0545  2.0766
            B          0.0608  2.0500   0.0415  1.3234   0.0418  1.4093   0.0608  2.3118
Gumbel      Approx.    0.0346           0.0058           0.0058           0.0304
(θ = 3)     M2         0.0386  1.1686   0.0070  0.3079   0.0068  0.2979   0.0350  1.5135
            M1         0.0384  1.2224   0.0070  0.3052   0.0067  0.2892   0.0347  1.5157
            B          0.0447  1.8434   0.0095  0.3873   0.0097  0.4303   0.0409  1.8797

GARCH(1, 1) setting
Clayton     Approx.    0.0479           0.0340           0.0340           0.0516
(θ = 1)     M2         0.0491  1.1144   0.0347  0.8485   0.0343  0.8279   0.0520  1.0958
            M1         0.0486  1.3579   0.0339  0.9021   0.0338  0.8100   0.0515  1.2013
            B          0.0567  2.0156   0.0403  1.1765   0.0403  1.2556   0.0600  1.8542
Clayton     Approx.    0.0252           0.0055           0.0056           0.0403
(θ = 4)     M2         0.0259  0.5054   0.0051  0.1979   0.0053  0.2301   0.0399  1.0431
            M1         0.0258  0.6429   0.0052  0.2199   0.0051  0.2217   0.0390  1.0959
            B          0.0345  1.5252   0.0081  0.2764   0.0081  0.2921   0.0484  1.8359
Gumbel      Approx.    0.0500           0.0339           0.0339           0.0482
(θ = 1.5)   M2         0.0516  1.0480   0.0356  0.8486   0.0354  0.8175   0.0511  1.2451
            M1         0.0516  1.2235   0.0351  0.9582   0.0352  0.8848   0.0503  1.2774
            B          0.0575  1.5928   0.0402  1.2395   0.0403  1.2346   0.0574  1.9198
Gumbel      Approx.    0.0341           0.0074           0.0074           0.0291
(θ = 3)     M2         0.0362  0.9284   0.0073  0.2941   0.0072  0.2587   0.0321  0.9133
            M1         0.0366  0.9782   0.0073  0.2786   0.0071  0.2888   0.0320  0.9819
            B          0.0435  1.6811   0.0103  0.3607   0.0101  0.3390   0.0390  1.6878


the tapered block multiplier technique and the block bootstrap method. For detailed discussions on the block length of the block bootstrap, we refer to Künsch [31] as well as Bühlmann and Künsch [7]. In the present setting we choose $l_B(100)=5$ and $l_B(200)=7$; this choice corresponds to $l_B(n)=1.25\,n^{1/3}$, which satisfies the assumptions of the asymptotic theory. The tapered block multiplier technique is assessed based on a sequence $(w_j)_{j\in\mathbb{Z}}$ of Normal random variables as introduced in Examples 2 and 3. Moreover, serial dependence in the tapered block multiplier random variables is generated either on the basis of the uniform weighting scheme represented by the kernel function $\kappa_1$ or the triangular weighting scheme represented by the kernel function $\kappa_2$. The block length is set to $l_M(n)=1.1\,n^{1/4}$, hence $l_M(100)=3$ and $l_M(200)=4$, meaning that both methods yield $2l_M$-dependent blocks. Both methods are used to estimate the sample covariance of the block bootstrap- or tapered block multiplier-based $\hat{\mathbb{G}}^{(s)}_C(u)$ and $\hat{\mathbb{G}}^{(s)}_C(v)$ based on $s=1,\dots,S=2{,}000$ resampling repetitions for the given set of vectors $u$ and $v$. We perform 1,000 MC replications and report the mean and mean squared error (MSE) of each method.

Tables 1, 2, and 3 show results for samples $X_1,\dots,X_n$ of size $n=100$ and $n=200$ based on AR(1) and GARCH(1, 1) processes. The approximation $\widehat{\mathrm{Cov}}(\mathbb{G}_C(u),\mathbb{G}_C(v))$ works well and can hence be used as a benchmark in the case of serially dependent observations. MC results based on independent and identically distributed samples indicate that the tapered block multiplier outperforms the block bootstrap in mean and MSE of estimation. The general applicability of these resampling methods, however, comes at the price of an increased mean squared error [in comparison to the multiplier or bootstrap with block length $l=1$ as investigated in 5]. Hence, we suggest testing for serial independence of continuous multivariate time series, as introduced by Kojadinovic and Yan [29], to investigate which method is appropriate. In the case of serially dependent observations, resampling results indicate that the tapered block multiplier yields more precise results in mean and mean squared error than the block bootstrap (which tends to overestimate) for the considered choices of the temporal dependence structure, the kernel function, the copula, and the parameter. Regarding the choice of the kernel function, mean results for $\kappa_1$ and $\kappa_2$ are similar, whereas $\kappa_2$ yields slightly better results in mean squared error. Additional MC simulations are given in Ruppert [42]: if the multiplier or bootstrap methods for independent observations are incorrectly applied to dependent observations, i.e., $l_B=l_M=1$, then their results do not reflect the changed dependence structure adequately. Results based on Normal, Gamma, and Rademacher-type sequences $(w_j)_{j\in\mathbb{Z}}$ indicate that different distributions used to simulate the multiplier random variables lead to similar results. To ease comparison of the next section with the work of Rémillard and Scaillet [38], we use Normal multiplier random variables in the following.

    3. Testing for a constant copula

Considering strongly mixing multivariate processes $(X_j)_{j\in\mathbb{Z}}$, nonparametric tests for a constant copula with specified or unspecified change point candidate(s), consistent against general alternatives, are introduced and assessed in finite samples.

    3.1. Specified change point candidate

The specification of a change point candidate can, for instance, have an economic motivation: Patton [33] investigates a change in the parameters of the dependence structure between various exchange rates following the introduction of the euro on the 1st of January 1999. Focusing


on stock returns, multivariate association between major S&P global sector indices before and after the bankruptcy of Lehman Brothers Inc. on the 15th of September 2008 is assessed in Gaißer et al. [21] and Ruppert [42]. Whereas these references investigate change points in functionals of the copula, the copula itself is the focus of this study. This approach permits analyzing changes in the structure of association even if a functional thereof, such as a measure of multivariate association, is invariant.

Suppose we observe a sample $X_1,\dots,X_n$ of a process $(X_j)_{j\in\mathbb{Z}}$. Constancy of the structure of association is initially investigated in the case of a specified change point candidate indexed by $\lfloor\lambda n\rfloor$ for $\lambda\in[0,1]$:

$$H_0:\ U_j \sim C_1\ \text{for all } j=1,\dots,n,$$
$$H_1:\ U_j \sim \begin{cases} C_1 & \text{for all } j=1,\dots,\lfloor\lambda n\rfloor,\\ C_2 & \text{for all } j=\lfloor\lambda n\rfloor+1,\dots,n,\end{cases}$$
where $C_1$ and $C_2$ are assumed to differ on a non-empty subset of $[0,1]^d$. To test for a change point in the structure of association after observation $\lfloor\lambda n\rfloor < n$, we split the sample into two subsamples, $X_1,\dots,X_{\lfloor\lambda n\rfloor}$ and $X_{\lfloor\lambda n\rfloor+1},\dots,X_n$. Assuming the marginal distribution functions to be unknown and constant in each subsample, we (separately) estimate $\hat U_1,\dots,\hat U_{\lfloor\lambda n\rfloor}$ and $\hat U_{\lfloor\lambda n\rfloor+1},\dots,\hat U_n$ with empirical copulas $\hat C_{\lfloor\lambda n\rfloor}$ and $\hat C_{n-\lfloor\lambda n\rfloor}$, respectively. The test statistic is defined by

$$T_n(\lambda) = \int_{[0,1]^d}\frac{\lfloor\lambda n\rfloor\,(n-\lfloor\lambda n\rfloor)}{n}\Big\{\hat C_{\lfloor\lambda n\rfloor}(u) - \hat C_{n-\lfloor\lambda n\rfloor}(u)\Big\}^2\,du\qquad(20)$$
and can be calculated explicitly; for details we refer to Rémillard and Scaillet [38]. These authors introduce a test for equality between two copulas which is applicable in the case of no serial dependence. Weak convergence of $T_n(\lambda)$ under strong mixing is established in the following:

    following:

Theorem 4. Consider observations $X_1,\dots,X_n$, drawn from a process $(X_j)_{j\in\mathbb{Z}}$ satisfying the strong mixing condition $\alpha_X(r)=O(r^{-a})$ for some $a>1$. Further assume a specified change point candidate indexed by $\lfloor\lambda n\rfloor$ for $\lambda\in[0,1]$ such that $U_j\sim C_1$, $X_{j,i}\sim F_{1,i}$ for all $j=1,\dots,\lfloor\lambda n\rfloor$, $i=1,\dots,d$ and $U_j\sim C_2$, $X_{j,i}\sim F_{2,i}$ for all $j=\lfloor\lambda n\rfloor+1,\dots,n$, $i=1,\dots,d$. Suppose that $C_1$ and $C_2$ satisfy Condition (3). Under the null hypothesis $C_1=C_2$, the test statistic $T_n(\lambda)$ converges weakly:
$$T_n(\lambda)\ \overset{w}{\rightsquigarrow}\ T(\lambda) = \int_{[0,1]^d}\Big\{\sqrt{1-\lambda}\,\mathbb{G}_{C_1}(u) - \sqrt{\lambda}\,\mathbb{G}_{C_2}(u)\Big\}^2\,du,$$
where $\mathbb{G}_{C_1}$ and $\mathbb{G}_{C_2}$ represent dependent, identically distributed Gaussian processes.

The proof is given in Section 5. Notice that if there exists a subset $I\subseteq[0,1]^d$ such that
$$\int_{I}\big\{C_1(u)-C_2(u)\big\}^2\,du > 0,$$
then $T_n(\lambda)\to\infty$ in probability under $H_1$. The limiting law of the test statistic depends on the unknown copulas $C_1$ before and $C_2$ after the change point candidate. To estimate p-values of the test statistic, we use the tapered block multiplier technique described above.
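The following sketch (our own illustration) computes the statistic of Equation (20) on a uniform grid over $[0,1]^d$, rather than via the explicit Cramér-von Mises formula of Rémillard and Scaillet [38], and estimates a p-value by combining tapered block multiplier replicates of the two subsample processes according to the weak limit in Theorem 4; it is one plausible implementation of the resampling idea, not necessarily the exact procedure of Corollary 1 below. The helpers pseudo_observations, tapered_multipliers_uniform, multiplier_replicate_gc, and _empirical_copula are those from the earlier listings.

```python
import numpy as np

def tn_lambda(x, lam, grid):
    """Approximate T_n(lambda), Eq. (20), by averaging over a uniform grid on [0,1]^d."""
    grid = np.asarray(grid, dtype=float)
    n = len(x)
    k = int(np.floor(lam * n))
    c1 = _empirical_copula(x[:k], grid)       # empirical copula of the first subsample
    c2 = _empirical_copula(x[k:], grid)       # ... and of the second subsample
    return k * (n - k) / n * np.mean((c1 - c2) ** 2)

def p_value_specified(x, lam, grid, n_rep, l, rng):
    """Multiplier p-value in the spirit of Eq. (22): share of simulated statistics under
    the null hypothesis that exceed the observed T_n(lambda)."""
    grid = np.asarray(grid, dtype=float)
    n = len(x)
    k = int(np.floor(lam * n))
    t_obs = tn_lambda(x, lam, grid)
    t_sim = np.empty(n_rep)
    for s in range(n_rep):
        # tapered block multiplier replicates of the two subsample processes
        g1 = multiplier_replicate_gc(pseudo_observations(x[:k]), grid,
                                     tapered_multipliers_uniform(k, l, rng))
        g2 = multiplier_replicate_gc(pseudo_observations(x[k:]), grid,
                                     tapered_multipliers_uniform(n - k, l, rng))
        # combine according to the weak limit in Theorem 4
        t_sim[s] = np.mean((np.sqrt(1 - lam) * g1 - np.sqrt(lam) * g2) ** 2)
    return float(np.mean(t_sim > t_obs))
```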


Corollary 1. Consider observations $X_1,\dots,X_{\lfloor\lambda n\rfloor}$ and $X_{\lfloor\lambda n\rfloor+1},\dots,X_n$ drawn from a process $(X_j)_{j\in\mathbb{Z}}$. Assume that the process satisfies the strong mixing assumptions of Theorem 3. Let $\lfloor\lambda n\rfloor$ for $\lambda\in[0,1]$ denote a specified change point candidate such that $U_j\sim C_1$, $X_{j,i}\sim F_{1,i}$ for all $j=1,\dots,\lfloor\lambda n\rfloor$, $i=1,\dots,d$ and $U_j\sim C_2$, $X_{j,i}\sim F_{2,i}$ for all $j=\lfloor\lambda n\rfloor+1,\dots,n$, $i=1,\dots,d$. Suppose that $C_1$ and $C_2$ satisfy Condition (3). For $s=1,\dots,S$, let $\xi^{(s)}_1,\dots,\xi^{(s)}_n$ denote samples of a tapered block multiplier process $(\xi_j)_{j\in\mathbb{Z}}$ satisfying A1, A2, A3(b) with block length $l(n)\to\infty$, where $l(n)=O(n^{1/2-\varepsilon})$ for $0<\varepsilon<1/2$. Then, under the null hypothesis $C_1=C_2$, the tapered block multiplier replicates $T^{(s)}_n(\lambda)$ of the test statistic converge weakly conditional on $X_1,\dots,X_n$ to independent copies of $T(\lambda)$, and the p-value of the test can be estimated by
$$\hat p\big(T_n(\lambda)\big) = \frac{1}{S}\sum_{s=1}^{S}\mathbf{1}\big\{T^{(s)}_n(\lambda) > T_n(\lambda)\big\}.\qquad(22)$$
Hence, p-values are estimated by counting the number of cases in which the simulated test statistic based on the tapered block multiplier method exceeds the observed one.

Finite sample properties. Size and power of the test in finite samples are assessed in a simulation study. We apply the MC algorithm introduced in Section 2.3 to generate samples of size $n=100$ or $n=200$ from bivariate processes $(X_j)_{j\in\mathbb{Z}}$. As a base scenario, serially independent observations are simulated. Moreover, we consider observations from strictly stationary AR(1) processes with autoregressive coefficient $\phi\in\{0.25,0.5\}$ and GARCH(1, 1) processes which are parameterized as in Equations (18) and (19). The univariate processes are either linked by a Clayton or a Gumbel copula. The change point after observation $\lfloor\lambda n\rfloor=\lfloor n/2\rfloor$ (if present) only affects the parameter $\theta$ within each family: the copula $C_1$ is parameterized such that $\tau_1=0.2$, the copula $C_2$ such that $\tau_2=0.2,\dots,0.9$. A set of $S=2{,}000$ Normal tapered block multiplier random variables is simulated, where $l_M(100)=3$ and $l_M(200)=4$ are chosen for the block length.

Results of 1,000 MC replications based on $n=100$ and $n=200$ observations are shown in Tables 4 and 5, respectively. The test based on the tapered block multiplier technique with kernel function $\kappa_2$ leads to a rejection quota under the null hypothesis which is close


Table 4: Size and power of the test for a constant copula with a specified change point candidate. Results are based on 1,000 Monte Carlo replications, n = 100, S = 2,000 tapered block multiplier repetitions, kernel function κ2, and an asymptotic significance level of 5%.

τ2                0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9

i.i.d. setting
Clayton  l = 1    0.036  0.110  0.295  0.612  0.881  0.983  1.000  1.000
         l = 3    0.050  0.114  0.315  0.578  0.877  0.976  1.000  1.000
Gumbel   l = 1    0.040  0.093  0.236  0.569  0.840  0.976  0.998  1.000
         l = 3    0.063  0.110  0.276  0.594  0.866  0.983  1.000  1.000

GARCH(1, 1) setting
Clayton  l = 1    0.037  0.106  0.298  0.598  0.868  0.977  1.000  1.000
         l = 3    0.047  0.120  0.303  0.588  0.876  0.978  0.999  1.000
Gumbel   l = 1    0.043  0.089  0.246  0.573  0.827  0.978  0.999  1.000
         l = 3    0.065  0.124  0.285  0.569  0.847  0.980  1.000  1.000

AR(1) setting with φ = 0.25
Clayton  l = 1    0.051  0.115  0.308  0.592  0.849  0.969  0.999  1.000
         l = 3    0.047  0.111  0.292  0.547  0.836  0.968  0.998  1.000
Gumbel   l = 1    0.053  0.109  0.257  0.550  0.836  0.975  0.998  1.000
         l = 3    0.066  0.105  0.254  0.568  0.818  0.964  1.000  1.000

AR(1) setting with φ = 0.5
Clayton  l = 1    0.086  0.154  0.313  0.549  0.798  0.928  0.985  1.000
         l = 3    0.078  0.117  0.236  0.462  0.730  0.868  0.986  0.999
Gumbel   l = 1    0.100  0.172  0.285  0.541  0.816  0.956  0.998  1.000
         l = 3    0.077  0.109  0.218  0.482  0.722  0.907  0.994  0.999

to the chosen theoretical asymptotic size of 5% in all considered settings; comparing the results for $n=100$ and $n=200$, we observe that the approximation of the asymptotic size based on the tapered block multiplier improves in precision with increased sample size. The tapered block multiplier-based test also performs well under the alternative hypothesis, and its power increases with the difference $\tau_2-\tau_1$ between the considered values of Kendall's $\tau$. The power of the test under the alternative hypothesis is best in the case of no serial dependence, as is shown in Table 5. If serial dependence is present in the sample, then more observations are required to reach the power of the test in the case of serially independent observations. For comparison, we also show the results if the test assuming independent observations (i.e., the test based on the multiplier technique with block length $l=1$) is erroneously applied to the simulated dependent observations. The effects of different types of dependent observations differ largely in the finite sample simulations considered: GARCH(1, 1) processes do not show a strong impact, whereas AR(1) processes lead to considerable distortions, in particular regarding the size of the test. Results indicate that the test overrejects if temporal dependence is not taken into account; the observed size of the test in these cases can be more than twice the specified asymptotic size. For comparison, results for $n=200$ and kernel function $\kappa_1$ are shown in Ruppert [42]. The obtained results indicate that the uniform kernel function $\kappa_1$ leads to a less conservative testing procedure, since the rejection quota is slightly higher both under the null hypothesis and under the alternative. Due to the fact that the size of the test is approximated more accurately based on the kernel function $\kappa_2$, its use is recommended.


Table 5: Size and power of the test for a constant copula with a specified change point candidate. Results are based on 1,000 Monte Carlo replications, n = 200, S = 2,000 tapered block multiplier repetitions, kernel function κ2, and an asymptotic significance level of 5%.

τ2                0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9

i.i.d. setting
Clayton  l = 1    0.047  0.172  0.524  0.908  0.993  1.000  1.000  1.000
         l = 4    0.063  0.164  0.552  0.905  0.989  1.000  1.000  1.000
Gumbel   l = 1    0.043  0.162  0.525  0.877  0.991  1.000  1.000  1.000
         l = 4    0.055  0.169  0.535  0.895  0.996  1.000  1.000  1.000

GARCH(1, 1) setting
Clayton  l = 1    0.040  0.169  0.503  0.894  0.994  1.000  1.000  1.000
         l = 4    0.056  0.160  0.541  0.903  0.992  1.000  1.000  1.000
Gumbel   l = 1    0.046  0.154  0.498  0.873  0.992  1.000  1.000  1.000
         l = 4    0.057  0.174  0.496  0.899  0.994  1.000  1.000  1.000

AR(1) setting with φ = 0.25
Clayton  l = 1    0.052  0.180  0.521  0.866  0.989  1.000  1.000  1.000
         l = 4    0.057  0.149  0.497  0.867  0.988  1.000  1.000  1.000
Gumbel   l = 1    0.047  0.180  0.515  0.872  0.989  1.000  1.000  1.000
         l = 4    0.050  0.136  0.490  0.855  0.992  1.000  1.000  1.000

AR(1) setting with φ = 0.5
Clayton  l = 1    0.107  0.237  0.523  0.813  0.975  0.999  1.000  1.000
         l = 4    0.058  0.137  0.396  0.748  0.957  0.998  1.000  1.000
Gumbel   l = 1    0.122  0.227  0.499  0.825  0.979  1.000  1.000  1.000
         l = 4    0.059  0.123  0.401  0.770  0.958  0.998  1.000  1.000

    3.2. The general case: unspecified change point candidate

The assumption of a change point candidate at a specified location is relaxed in the following. Intuitively, testing with unspecified change point candidate(s) is less restrictive, but a trade-off is to be made: the tests introduced in this section neither require conditions on the partial derivatives of the underlying copula(s) nor the specification of change point candidate(s), yet they are based on the assumption of strictly stationary univariate processes, i.e., $X_{j,i}\sim F_i$ for all $j\in\mathbb{Z}$ and $i=1,\dots,d$. The motivation for this test setting is that only for a subset of the change points documented in empirical studies can a priori hypotheses, such as triggering economic events, be found [see, e.g., 15]. Even if a triggering event exists, its start (and end) often are subject to uncertainty: Rodriguez [41] studies changes in dependence structures of stock returns during periods of turmoil, considering data framing the East Asian crisis in 1997 as well as the Mexican devaluation in 1994, whereas no change point candidate is given a priori. These objects of investigation are well suited for nonparametric methods, which offer the important advantage that their results do not depend on model assumptions. For a general introduction to change point problems of this type, we refer to the monographs by Csörgő and Horváth [12] and, with particular emphasis on nonparametric methods, to Brodsky and Darkhovsky [3].

Let $X_1,\dots,X_n$ denote a sample of a process $(X_j)_{j\in\mathbb{Z}}$ with strictly stationary univariate


margins, i.e., $X_{j,i}\sim F_i$ for all $j\in\mathbb{Z}$ and $i=1,\dots,d$. We establish tests for the null hypothesis of a constant copula versus the alternative that there exist $P$ unspecified change points $\lambda_1<\dots<\lambda_P\in[0,1]$, formally
$$H_0:\ U_j\sim C_1\ \text{for all } j=1,\dots,n,$$
$$H_1:\ \text{there exist } 0=\lambda_0<\lambda_1<\dots<\lambda_P<\lambda_{P+1}=1 \text{ such that } U_j\sim C_p\ \text{for all } j=\lfloor\lambda_{p-1}n\rfloor+1,\dots,\lfloor\lambda_p n\rfloor \text{ and } p=1,\dots,P+1,$$
where, under the alternative hypothesis, $C_1,\dots,C_{P+1}$ are assumed to differ on a non-empty subset of $[0,1]^d$. In this setting, we estimate the pseudo-observations $\hat U_1,\dots,\hat U_n$ based on $X_1,\dots,X_n$. For any change point candidate $\lambda\in[0,1]$, we split the pseudo-observations into two subsamples $\hat U_1,\dots,\hat U_{\lfloor\lambda n\rfloor}$ and $\hat U_{\lfloor\lambda n\rfloor+1},\dots,\hat U_n$. The following test statistics are based on a comparison of the resulting empirical copulas:
$$S_n(\lambda,u) := \frac{\lfloor\lambda n\rfloor\,(n-\lfloor\lambda n\rfloor)}{n^{3/2}}\Big\{\hat C_{\lfloor\lambda n\rfloor}(u) - \hat C_{n-\lfloor\lambda n\rfloor}(u)\Big\}\quad\text{for all } u\in[0,1]^d.$$
The functional used to define the test statistic in Equation (20) of the previous section is thus multiplied by the weight function $\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)/n^2$, which assigns less weight to change point candidates close to the sample's boundaries. Define $Z_n := \{1/n,\dots,(n-1)/n\}$. We consider three alternative test statistics which pick the most extreme realization within the set $Z_n$ of change point candidates:

$$T_{1,n} = \max_{\lambda\in Z_n}\int_{[0,1]^d}S_n(\lambda,u)^2\,d\hat C_n(u),\qquad(23)$$
$$T_{2,n} = \max_{\lambda\in Z_n}\bigg\{\max_{u\in\{\hat U_j\}_{j=1,\dots,n}}S_n(\lambda,u) - \min_{u\in\{\hat U_j\}_{j=1,\dots,n}}S_n(\lambda,u)\bigg\},\qquad(24)$$
$$T_{3,n} = \max_{\lambda\in Z_n}\ \max_{u\in\{\hat U_j\}_{j=1,\dots,n}}\big|S_n(\lambda,u)\big|,\qquad(25)$$
which are the maximally selected Cramér-von Mises (CvM), Kuiper (K), and Kolmogorov-Smirnov (KS) statistics, respectively. We refer to Horváth and Shao [26] for an investigation of these statistics in a univariate context based on an independent and identically distributed sample; $T_{3,n}$ is investigated in Inoue [27] for general multivariate distribution functions under strong mixing conditions as well as in Rémillard [36] with an application to the copula of GARCH residuals.
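A brute-force sketch of the three statistics follows (our own illustration; it reuses pseudo_observations from the first listing, scans all candidates in $Z_n$, and therefore costs on the order of $n^2 d$ operations).

```python
import numpy as np

def sn(u_hat, k, grid):
    """S_n(lambda, u) for lambda = k/n, based on pseudo-observations of the full sample."""
    n = len(u_hat)
    ind = np.all(u_hat[:, None, :] <= grid[None, :, :], axis=2)   # n x m indicator matrix
    return k * (n - k) / n ** 1.5 * (ind[:k].mean(axis=0) - ind[k:].mean(axis=0))

def change_point_statistics(x):
    """Maximally selected CvM, Kuiper and Kolmogorov-Smirnov statistics, Eqs. (23)-(25),
    with S_n evaluated at the pseudo-observations themselves."""
    u_hat = pseudo_observations(x)              # helper from the first listing
    n = len(u_hat)
    t1 = t2 = t3 = -np.inf
    for k in range(1, n):                       # candidates lambda in Z_n = {1/n, ..., (n-1)/n}
        s = sn(u_hat, k, u_hat)
        t1 = max(t1, np.mean(s ** 2))           # CvM: integral with respect to dC_hat_n
        t2 = max(t2, s.max() - s.min())         # Kuiper
        t3 = max(t3, np.abs(s).max())           # Kolmogorov-Smirnov
    return t1, t2, t3
```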

Under the null hypothesis of a constant copula, i.e., $C(u)=C_p(u)$ for all $p=1,\dots,P+1$, notice the following relation between $S_n(\lambda,u)$ and a linear combination of the sequential and the standard empirical process, more precisely a $(d+1)$-time-parameter tied-down empirical copula process [see Section 2.6 in 12, 26, 36]:
$$S_n(\lambda,u) = \frac{\lfloor\lambda n\rfloor\,(n-\lfloor\lambda n\rfloor)}{n^{3/2}}\Big\{\hat C_{\lfloor\lambda n\rfloor}(u) - \hat C_{n-\lfloor\lambda n\rfloor}(u)\Big\} = \frac{1}{\sqrt{n}}\Bigg[\sum_{j=1}^{\lfloor\lambda n\rfloor}\Big(\mathbf{1}\{\hat U_j\le u\}-C(u)\Big) - \frac{\lfloor\lambda n\rfloor}{n}\sum_{j=1}^{n}\Big(\mathbf{1}\{\hat U_j\le u\}-C(u)\Big)\Bigg],\qquad(26)$$
for all $\lambda\in[0,1]$ and $u\in[0,1]^d$. Equation (26) is the pivotal element to derive the asymptotic behavior of $S_n(\lambda,u)$ under the null hypothesis, which is given next.


[...]

where the limit is an independent copy of $\mathbb{B}_C(\lambda,u) - \lambda\,\mathbb{B}_C(1,u)$. Under the alternative hypothesis, weak convergence conditional on $X_1,\dots,X_n$ almost surely in $\big(\ell^\infty([0,1]^{d+1}),\|\cdot\|_\infty\big)$ to a tight limit holds.

The proof is given in Section 5. Remarkably, the latter result is only valid if centered multiplier random variables are applied, i.e., if assumption A3b is satisfied. An application of the continuous mapping theorem proves consistency of the tapered block multiplier-based tests. The p-values of the test statistics are estimated as shown in Equation (22).

For simplicity, the change point location is assessed under the assumption that there is at most one change point. In this case, the alternative hypothesis can as well be formulated as
$$H_{1b}:\ \exists\,\lambda\in[0,1] \text{ such that } U_j\sim\begin{cases}C_1 & \text{for all } j=1,\dots,\lfloor\lambda n\rfloor,\\ C_2 & \text{for all } j=\lfloor\lambda n\rfloor+1,\dots,n,\end{cases}$$
where $C_1$ and $C_2$ are assumed to differ on a non-empty subset of $[0,1]^d$. An estimator $\hat\lambda_{i,n}$, $i=1,2,3$, for the location of the change point is obtained by replacing the max functions by arg max functions in Equations (23), (24), and (25). For ease of exposition, the superindex $i$ is dropped in the following if no explicit reference to the functional is required. Given a (not necessarily correct) change point estimator $\hat\lambda_n$, the empirical copula of $X_1,\dots,X_{\lfloor\hat\lambda_n n\rfloor}$ is an estimator of the unknown mixture distribution given by [for an analogous estimator related to general distribution functions, see 9]
$$C_{n,1}(u) = \mathbf{1}\{\hat\lambda_n\le\lambda\}\,C_1(u) + \mathbf{1}\{\hat\lambda_n>\lambda\}\,\frac{1}{\hat\lambda_n}\Big[\lambda\,C_1(u) + \big(\hat\lambda_n-\lambda\big)\,C_2(u)\Big],\qquad(28)$$
for all $u\in[0,1]^d$. The latter coincides with $C_1$ if and only if the change point is estimated correctly. On the other hand, the empirical copula of $X_{\lfloor\hat\lambda_n n\rfloor+1},\dots,X_n$ is an estimator of the unknown mixture distribution given by
$$C_{n,2}(u) = \mathbf{1}\{\hat\lambda_n\le\lambda\}\,\frac{1}{1-\hat\lambda_n}\Big[\big(\lambda-\hat\lambda_n\big)\,C_1(u) + (1-\lambda)\,C_2(u)\Big] + \mathbf{1}\{\hat\lambda_n>\lambda\}\,C_2(u),\qquad(29)$$
for all $u\in[0,1]^d$. The latter coincides with $C_2$ if and only if the change point is estimated correctly. Consistency of $\hat\lambda_n$ follows from consistency of the empirical copula and the fact that the difference of the two mixture distributions given in Equations (28) and (29) is maximal in the case $\hat\lambda_n=\lambda$. Bai [1] iteratively applies the setting considered above to test for multiple breaks (one at a time), indicating a direction of future research to estimate locations of multiple change points in the dependence structure.

Finite sample properties. Size and power of the tests for a constant copula are shown in Tables 6 and 7. The former shows results for $n = 400$ observations from serially independent as well as strictly stationary AR(1) processes with autoregressive coefficient $0.25$ in each dimension. The alternative hypothesis $H_{1b}$ of at most one unspecified change point is considered. If present, the change point is located after observation $\lfloor\lambda n\rfloor = n/2$ and only affects the parameter within the investigated Clayton or Gumbel families.
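As a minimal illustration of the serially independent part of this design, the following sketch draws a $d$-dimensional sample whose Clayton copula parameter changes after observation $n/2$, using the Marshall-Olkin (gamma frailty) representation of the Clayton copula. The parameter values and the restriction to the independent Clayton case are assumptions of the example; the AR(1) and Gumbel designs and the entries of Tables 6 and 7 are not reproduced here.

```python
def clayton_sample(n, d, theta, rng):
    """Marshall-Olkin (gamma frailty) sampler for the Clayton copula, theta > 0."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))   # gamma frailty
    e = rng.exponential(size=(n, d))                           # unit exponentials
    return (1.0 + e / v) ** (-1.0 / theta)                     # uniform margins

def sample_with_break(n, d, theta1, theta2, rng):
    """Serially independent sample with one break in the Clayton parameter at n // 2."""
    return np.vstack([clayton_sample(n // 2, d, theta1, rng),
                      clayton_sample(n - n // 2, d, theta2, rng)])

rng = np.random.default_rng(1)
x = sample_with_break(400, 2, 1.0, 3.0, rng)                   # hypothetical parameter values
print(t3_statistic(x), change_point_estimate(x))               # reuses the earlier sketches
```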


The introduced tests for a constant copula offer several starting points for further research. For example, Inoue [27] investigates nonparametric change point tests for the joint distribution of strongly mixing random vectors and finds that the observed size of the test heavily depends on the choice of the block length $l$ in the resampling procedure. For different types of serially dependent observations, e.g., AR(1) processes with a larger autoregressive coefficient or GARCH(1,1) processes, it is of interest to investigate the optimal choice of the block length for the tapered block multiplier-based test with unspecified change point candidate. Moreover, test statistics based on different functionals offer potential for improvements: e.g., the Cramér-von Mises functional introduced by Rémillard and Scaillet [38] led to strong results in the case of a specified change point candidate. Though challenging from a computational point of view, an application of this functional to the case of unspecified change point candidate(s) is of interest, as the functional yields very powerful tests.

    4. Conclusion

Consistent tests for constancy of the copula with specified or unspecified change point candidate are introduced. We observe a trade-off in the assumptions required for testing: if a change point candidate is specified, then the test is consistent whether or not there is a simultaneous change point in the marginal distribution function(s). If the change point candidate(s) are unspecified, then strict stationarity of the marginal distribution functions is required; this assumption, in turn, makes it possible to drop continuity assumptions on the partial derivatives of the underlying copula(s). The tests are shown to behave well in size and power when applied to various types of dependent observations. P-values of the tests are estimated using a tapered block multiplier technique based on serially dependent multiplier random variables; the latter is shown to perform better than the block bootstrap in mean and mean squared error when estimating the asymptotic covariance structure of the empirical copula process in various settings.

References

[1] Bai, J. (1997): Estimating multiple breaks one at a time, Econom. Theory, 13(3), pp. 315–352.

[2] Billingsley, P. (1995): Probability and Measure, John Wiley & Sons.

[3] Brodsky, B. E., Darkhovsky, B. S. (1993): Nonparametric Methods in Change Point Problems, Kluwer Academic Publishers.

[4] Bücher, A. (2011): Statistical Inference for Copulas and Extremes, Ph.D. thesis, Ruhr-Universität Bochum.

[5] Bücher, A., Dette, H. (2010): A note on bootstrap approximations for the empirical copula process, Statist. Probab. Lett., 80(23-24), pp. 1925–1932.

[6] Bühlmann, P. (1993): The blockwise bootstrap in time series and empirical processes, Ph.D. thesis, ETH Zürich, Diss. ETH No. 10354.

[7] Bühlmann, P., Künsch, H. R. (1999): Block length selection in the bootstrap for time series, Comput. Statist. Data Anal., 31(3), pp. 295–310.

[8] Busetti, F., Harvey, A. (2010): When is a copula constant? A test for changing relationships, J. Financ. Econ., 9(1), pp. 106–131.


[9] Carlstein, E. (1988): Nonparametric change-point estimation, Ann. Statist., 16(1), pp. 188–197.

[10] Carrasco, M., Chen, X. (2002): Mixing and moment properties of various GARCH and stochastic volatility models, Econom. Theory, 18(1), pp. 17–39.

[11] Chen, X., Fan, Y. (2006): Estimation and model selection of semiparametric copula-based multivariate dynamic models under copula misspecification, Journal of Econometrics, 135(1-2), pp. 125–154.

[12] Csörgő, M., Horváth, L. (1997): Limit theorems in change-point analysis, John Wiley & Sons.

[13] Deheuvels, P. (1979): La fonction de dépendance empirique et ses propriétés: un test non paramétrique d'indépendance, Académie Royale de Belgique. Bulletin de la Classe des Sciences (5th series), 65(6), pp. 274–292.

[14] Dhompongsa, S. (1984): A note on the almost sure approximation of the empirical process of weakly dependent random variables, Yokohama Math. J., 32(1-2), pp. 113–121.

[15] Dias, A., Embrechts, P. (2009): Testing for structural changes in exchange rates dependence beyond linear correlation, Eur. J. Finance, 15(7), pp. 619–637.

[16] Doukhan, P. (1994): Mixing: properties and examples, Lecture Notes in Statistics, Springer-Verlag, New York.

[17] Doukhan, P., Fermanian, J.-D., Lang, G. (2009): An empirical central limit theorem with applications to copulas under weak dependence, Stat. Infer. Stoch. Process., 12(1), pp. 65–87.

[18] Fermanian, J.-D., Radulovic, D., Wegkamp, M. (2004): Weak convergence of empirical copula processes, Bernoulli, 10(5), pp. 847–860.

[19] Fermanian, J.-D., Scaillet, O. (2003): Nonparametric estimation of copulas for time series, J. Risk, 5(4), pp. 25–54.

[20] Gaißer, S., Memmel, C., Schmidt, R., Wehn, C. (2009): Time dynamic and hierarchical dependence modelling of an aggregated portfolio of trading books - a multivariate nonparametric approach, Deutsche Bundesbank Discussion Paper, Series 2: Banking and Financial Studies 07/2009.

[21] Gaißer, S., Ruppert, M., Schmid, F. (2010): A multivariate version of Hoeffding's Phi-Square, J. Multivariate Anal., 101(10), pp. 2571–2586.

[22] Genest, C., Segers, J. (2010): On the covariance of the asymptotic empirical copula process, J. Multivariate Anal., 101(8), pp. 1837–1845.

[23] Giacomini, E., Härdle, W. K., Spokoiny, V. (2009): Inhomogeneous dependency modeling with time varying copulae, J. Bus. Econ. Stat., 27(2), pp. 224–234.

[24] Guegan, D., Zhang, J. (2010): Change analysis of dynamic copula for measuring dependence in multivariate financial data, Quant. Finance, 10(4), pp. 421–430.


[25] Hall, P., Heyde, C. (1980): Martingale limit theory and its application, Academic Press, New York.

[26] Horváth, L., Shao, Q.-M. (2007): Limit theorems for permutations of empirical processes with applications to change point analysis, Stoch. Proc. Appl., 117(12), pp. 1870–1888.

[27] Inoue, A. (2001): Testing for distributional change in time series, Econom. Theory, 17(1), pp. 156–187.

[28] Jondeau, E., Poon, S.-H., Rockinger, M. (2007): Financial Modeling Under Non-Gaussian Distributions, Springer, London.

[29] Kojadinovic, I., Yan, J. (2011): Tests of serial independence for continuous multivariate time series based on a Möbius decomposition of the independence empirical copula process, Ann. Inst. Statist. Math., 63(2), pp. 347–373.

[30] Kosorok, M. R. (2008): Introduction to empirical processes and semiparametric inference, Springer.

[31] Künsch, H. R. (1989): The jackknife and the bootstrap for general stationary observations, Ann. Statist., 17(3), pp. 1217–1241.

[32] Paparoditis, E., Politis, D. N. (2001): Tapered block bootstrap, Biometrika, 88(4), pp. 1105–1119.

[33] Patton, A. J. (2002): Applications of copula theory in financial econometrics, Ph.D. thesis, University of California, San Diego.

[34] Patton, A. J. (2004): On the out-of-sample importance of skewness and asymmetric dependence for asset allocation, J. Financ. Econ., 2(1), pp. 130–168.

[35] Philipp, W., Pinzur, L. (1980): Almost sure approximation theorems for the multivariate empirical process, Z. Wahrscheinlichkeitstheorie verw. Gebiete, 54(1), pp. 1–13.

[36] Rémillard, B. (2010): Goodness-of-fit tests for copulas of multivariate time series, Tech. rep., HEC Montréal.

[37] Rémillard, B., Scaillet, O. (2006): Testing for equality between two copulas, Tech. Rep. G-2006-31, Les Cahiers du GERAD.

[38] Rémillard, B., Scaillet, O. (2009): Testing for equality between two copulas, J. Multivariate Anal., 100(3), pp. 377–386.

[39] Rio, E. (1993): Covariance inequalities for strongly mixing processes, Ann. Inst. H. Poincaré Sect. B, 29(4), pp. 587–597.

[40] Rio, E. (2000): Théorie Asymptotique des Processus Aléatoires Faiblement Dépendants, Springer.

[41] Rodriguez, J. C. (2006): Measuring financial contagion: A copula approach, J. Empirical Finance, 14(3), pp. 401–423.

[42] Ruppert, M. (2011): Contributions to Static and Time-Varying Copula-based Modeling of Multivariate Association, Ph.D. thesis, University of Cologne.


[43] Rüschendorf, L. (1976): Asymptotic distributions of multivariate rank order statistics, Ann. Statist., 4(5), pp. 912–923.

[44] Scaillet, O. (2005): A Kolmogorov-Smirnov type test for positive quadrant dependence, Canad. J. Statist., 33(3), pp. 415–427.

[45] Schmid, F., Schmidt, R., Blumentritt, T., Gaißer, S., Ruppert, M. (2010): Copula-based measures of multivariate association, in: Jaworski, P., Durante, F., Härdle, W., Rychlik, T. (eds.), Copula theory and its applications - Proceedings of the Workshop held in Warsaw, 25-26 September 2009, pp. 209–235, Springer, Berlin Heidelberg.

[46] Segers, J. (2011): Weak convergence of empirical copula processes under nonrestrictive smoothness assumptions, Bernoulli, forthcoming.

[47] Sklar, A. (1959): Fonctions de répartition à n dimensions et leurs marges, Publications de l'Institut de Statistique de l'Université de Paris, 8, pp. 229–231.

[48] van den Goorbergh, R. W. J., Genest, C., Werker, B. J. M. (2005): Bivariate option pricing using dynamic copula models, Insur. Math. Econ., 37(1), pp. 101–114.

[49] van der Vaart, A. W., Wellner, J. A. (1996): Weak convergence and empirical processes, Springer-Verlag, New York.

[50] van Kampen, M., Wied, D. (2010): A nonparametric constancy test for copulas under mixing conditions, Tech. Rep. 36/10, TU Dortmund, SFB 823.

[51] Wied, D., Dehling, H., van Kampen, M., Vogel, D. (2011): A fluctuation test for constant Spearman's rho, Tech. Rep. 16/11, TU Dortmund, SFB 823.

[52] Zari, T. (2010): Contribution à l'étude du processus empirique de copule, Ph.D. thesis, Université Paris 6.

    5. Appendix: Proofs of the results

Proof of Theorem 1. The proof is established as in Gaißer et al. [21], Proof of Theorem 4, while applying a result on Hadamard-differentiability under nonrestrictive smoothness assumptions obtained by Bücher [4]. Consider integral transformations $U_{j,i} := F_i(X_{j,i})$ for all $j = 1,\dots,n$, $i = 1,\dots,d$, and denote their joint empirical distribution function by $F_n$. As exposed in detail by Fermanian et al. [18], Lemma 1, integral transformations allow us to simplify the exposition while the obtained asymptotic results remain valid for general continuous marginal distribution functions. Under the strong mixing condition $\alpha_X(r) = O(r^{-a})$ for some $a > 1$, Rio [40] proves weak convergence of the empirical process in the space $\ell^\infty([0,1]^d)$:

\[
\sqrt{n}\bigl(F_n(u) - F(u)\bigr) \rightsquigarrow \mathbb{B}_F(u).
\]

Notice that the copula can be obtained by a mapping $\Phi$:

\[
\Phi: \mathbb{D} \to \ell^\infty([0,1]^d), \qquad F \mapsto \Phi(F) := F\bigl(F_1^{-1},\dots,F_d^{-1}\bigr).
\]

Bücher [4], Lemma 2.6, establishes the following result, which is pivotal to conclude the proof:


Lemma 2. Assume that $F$ satisfies Condition (3). Then $\Phi$ is Hadamard-differentiable at $C$ tangentially to

\[
\mathbb{D}_0 := \bigl\{ D \in C\bigl([0,1]^d\bigr) \;\big|\; D \text{ is grounded and } D(1,\dots,1)=0 \bigr\}.
\]

The derivative at $F$ in $D \in \mathbb{D}_0$ is represented by

\[
\Phi'_F(D)(u) = D(u) - \sum_{i=1}^{d} D_iF(u)\, D\bigl(u^{(i)}\bigr), \quad \text{for all } u\in[0,1]^d,
\]

whereas $D_iF(u)$, $i = 1,\dots,d$, is defined on the basis of Equation (4).

Hence, an application of the functional delta method yields weak convergence of the transformed empirical process in $(\ell^\infty([0,1]^d), \|\cdot\|_\infty)$:

\[
\sqrt{n}\bigl(\Phi(F_n)(u) - \Phi(F)(u)\bigr) \rightsquigarrow \Phi'_F(\mathbb{B}_F)(u).
\]

To conclude the proof, notice that

\[
\sup_{u\in[0,1]^d}\bigl|\Phi(F_n)(u) - C_n(u)\bigr| = O\bigl(n^{-1}\bigr).
\]

Proof of Theorem 2. Bühlmann [6], proof of Theorem 3.1, considers integral transformations $U_{j,i} := F_i(X_{j,i})$ for all $j = 1,\dots,n$ and $i = 1,\dots,d$ and proves the bootstrapped empirical process to converge weakly conditional on $X_1,\dots,X_n$ almost surely in the space $D([0,1]^d)$ of càdlàg functions equipped with the uniform metric $\|\cdot\|_\infty$:

\[
\sqrt{n}\bigl(F^B_n(u) - F_n(u)\bigr) \rightsquigarrow_{a.s.} \mathbb{B}^H_F(u).
\]

Based on Hadamard-differentiability of the map $\Phi$ as established in the proof of Theorem 1, an application of the functional delta method for the bootstrap [see, e.g., 49] yields weak convergence of the transformed empirical process conditional on $X_1,\dots,X_n$ in probability in $(\ell^\infty([0,1]^d), \|\cdot\|_\infty)$:

\[
\sqrt{n}\bigl(\Phi(F^B_n)(u) - \Phi(F_n)(u)\bigr) \rightsquigarrow_{P} \bigl(\Phi'_F(\mathbb{B}_F)\bigr)^H(u).
\]

To conclude the proof, recall that the empirical copula as defined in Equation (2) and the map $\Phi$ share the same asymptotic behavior.

Proof of Theorem 3. Based on integral transformations $U_{j,i} := F_i(X_{j,i})$ for all $j = 1,\dots,n$ and $i = 1,\dots,d$, Bühlmann [6], proof of Theorem 3.2, establishes weak convergence of the tapered block empirical process conditional on a sample $X_1,\dots,X_n$ almost surely in $(D([0,1]^d), \|\cdot\|_\infty)$:

\[
\sqrt{n}\Bigl(\frac{1}{n}\sum_{j=1}^{n}\xi_j\,\mathbf{1}_{\{U_j\le u\}} - C_n(u)\Bigr) \rightsquigarrow_{a.s.} \mathbb{B}^M_C(u),
\]


under assumptions A1, A2, A3, provided that the process is strongly mixing with the given rate. Notice that the tapered multiplier empirical copula process as well as its limit are right-continuous with left limits and hence reside in $D([0,1]^d)$. Functions in $D([0,1]^d)$ are defined on the closed set $[0,1]^d$ and are bounded in consequence, which implies $D([0,1]^d)\subset\ell^\infty([0,1]^d)$. Following Lemma 7.8 in Kosorok [30], convergence in $(D([0,1]^d), \|\cdot\|_\infty)$ is then equivalent to convergence in $(\ell^\infty([0,1]^d), \|\cdot\|_\infty)$ (and more generally in any other function space of which $D([0,1]^d)$ is a subset and which further contains the tapered multiplier empirical copula process as well as its limit).

It remains to prove that the limiting behavior of the tapered block multiplier process for independent observations is unchanged if we assume the marginal distribution functions to be unknown. Using an argument of Rémillard and Scaillet [38], consider the following relation between the tapered block multiplier process in the case of known and unknown marginal distribution functions:

\[
\mathbb{B}^M_{C,n}(u) = \sqrt{n}\Bigl(\frac{1}{n}\sum_{j=1}^{n}\xi_j\prod_{i=1}^{d}\mathbf{1}_{\{U_{j,i}\le u_i\}} - C_n(u)\Bigr)
= \sqrt{n}\Bigl(\frac{1}{n}\sum_{j=1}^{n}\xi_j\prod_{i=1}^{d}\mathbf{1}_{\{\hat U_{j,i}\le F_{n,i}(F_i^{-1}(u_i))\}} - \hat C_n\bigl(F_{n,1}(F_1^{-1}(u_1)),\dots,F_{n,d}(F_d^{-1}(u_d))\bigr)\Bigr)
= \hat{\mathbb{B}}^M_{C,n}\bigl(F_{n,1}(F_1^{-1}(u_1)),\dots,F_{n,d}(F_d^{-1}(u_d))\bigr),
\]

where $\hat{\mathbb{B}}^M_{C,n}$ denotes the tapered block multiplier process in the case that the marginal distribution functions as well as the copula are unknown. Under the given set of assumptions, we have in particular

\[
\sup_{u\in[0,1]^d}\bigl|\hat C_n(u) - C(u)\bigr| \longrightarrow 0.
\]

Consider $u^{(i)} = (1,\dots,1,u_i,1,\dots,1)$. It follows that

\[
\sup_{u_i\in[0,1]}\Bigl|\frac{1}{n}\sum_{j=1}^{n}\mathbf{1}_{\{X_{j,i}\le F_i^{-1}(u_i)\}} - u_i\Bigr| = \sup_{u_i\in[0,1]}\bigl|F_{n,i}(F_i^{-1}(u_i)) - u_i\bigr| \longrightarrow 0
\]

for all $i = 1,\dots,d$ as $n\to\infty$. We conclude that $(\mathbb{B}_{C,n},\mathbb{B}^M_{C,n}) \rightsquigarrow (\mathbb{B}_C,\mathbb{B}^M_C)$ in $\ell^\infty([0,1]^d)\times\ell^\infty([0,1]^d)$.

Proof of Lemma 1. Consider

\[
\sqrt{n}\bigl(C_n(u) - C_N(u)\bigr) = \sqrt{n}\bigl(C_n(u) - C(u)\bigr) - \sqrt{n}\bigl(C_N(u) - C(u)\bigr)
= \sqrt{n}\bigl(C_n(u) - C(u)\bigr) - \frac{\sqrt{n}}{\sqrt{N}}\,\sqrt{N}\bigl(C_N(u) - C(u)\bigr) \rightsquigarrow \mathbb{G}_C(u),
\]

since the factor $\sqrt{n}/\sqrt{N}$ tends to zero for $n(N)\to\infty$ and $n(N) = o(N)$. Following Theorem 1, $\sqrt{N}\bigl(C_N(u) - C(u)\bigr)$ converges to a tight centered Gaussian process in $(\ell^\infty([0,1]^d), \|\cdot\|_\infty)$. The result is derived by an application of Slutsky's theorem and consistent (covariance) estimation.


Proof of Theorem 4. With the given assumptions, the asymptotic behavior of each empirical copula process is derived in Theorem 1:

\[
\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u) := \sqrt{\lfloor\lambda n\rfloor}\,\bigl(C_{\lfloor\lambda n\rfloor}(u)-C_1(u)\bigr) \rightsquigarrow \mathbb{G}_{C_1}(u) := \mathbb{B}_{C_1}(u) - \sum_{i=1}^{d}D_iC_1(u)\,\mathbb{B}_{C_1}\bigl(u^{(i)}\bigr)
\]

in $(\ell^\infty([0,1]^d), \|\cdot\|_\infty)$. Analogously, we have

\[
\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(v) := \sqrt{n-\lfloor\lambda n\rfloor}\,\bigl(C_{n-\lfloor\lambda n\rfloor}(v)-C_2(v)\bigr) \rightsquigarrow \mathbb{G}_{C_2}(v) := \mathbb{B}_{C_2}(v) - \sum_{i=1}^{d}D_iC_2(v)\,\mathbb{B}_{C_2}\bigl(v^{(i)}\bigr)
\]

in $(\ell^\infty([0,1]^d), \|\cdot\|_\infty)$. If a joint, hence $2d$-dimensional, mean zero limiting Gaussian process $(\mathbb{G}_{C_1}(u), \mathbb{G}_{C_2}(v))$ exists, then a complete characterization can be obtained based on its covariance function [cf. 49, Appendix A.2]. Hence, it remains to prove that the empirical covariance matrix converges to a well-defined limit. To ease exposition, indices within the sample $X_1,\dots,X_n$ are shifted by $\lfloor\lambda n\rfloor$ to locate the change point candidate at zero. The covariance matrix is given by

\[
\lim_{n\to\infty}\begin{pmatrix}
\operatorname{Cov}\bigl(\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u),\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(v)\bigr) & \operatorname{Cov}\bigl(\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u),\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\bigr)\\[2pt]
\operatorname{Cov}\bigl(\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(v),\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u)\bigr) & \operatorname{Cov}\bigl(\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(u),\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\bigr)
\end{pmatrix}
\]

for all $u,v\in[0,1]^d$ if the limit exists, whereas

\begin{align}
\operatorname{Cov}\bigl(\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u),\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(v)\bigr) &:= \frac{1}{\lfloor\lambda n\rfloor}\sum_{i=-\lfloor\lambda n\rfloor+1}^{0}\ \sum_{j=-\lfloor\lambda n\rfloor+1}^{0}\operatorname{Cov}\bigl(\mathbf{1}_{\{U_i\le u\}},\mathbf{1}_{\{U_j\le v\}}\bigr), \tag{30}\\
\operatorname{Cov}\bigl(\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u),\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\bigr) &:= \frac{1}{\sqrt{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}}\sum_{i=-\lfloor\lambda n\rfloor+1}^{0}\ \sum_{j=1}^{n-\lfloor\lambda n\rfloor}\operatorname{Cov}\bigl(\mathbf{1}_{\{U_i\le u\}},\mathbf{1}_{\{U_j\le v\}}\bigr), \tag{31}\\
\operatorname{Cov}\bigl(\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(v),\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u)\bigr) &:= \frac{1}{\sqrt{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}}\sum_{i=1}^{n-\lfloor\lambda n\rfloor}\ \sum_{j=-\lfloor\lambda n\rfloor+1}^{0}\operatorname{Cov}\bigl(\mathbf{1}_{\{U_i\le v\}},\mathbf{1}_{\{U_j\le u\}}\bigr), \tag{32}\\
\operatorname{Cov}\bigl(\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(u),\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\bigr) &:= \frac{1}{n-\lfloor\lambda n\rfloor}\sum_{i=1}^{n-\lfloor\lambda n\rfloor}\ \sum_{j=1}^{n-\lfloor\lambda n\rfloor}\operatorname{Cov}\bigl(\mathbf{1}_{\{U_i\le u\}},\mathbf{1}_{\{U_j\le v\}}\bigr). \tag{33}
\end{align}

Convergence of the series in Equations (30) and (33) follows from Theorem 1; furthermore, the limiting variances are equal (under the null hypothesis) and the double sum in this representation can be simplified [see 39] to reconcile the result of Theorem 1. Equations (31) and (32) coincide by symmetry and converge absolutely, as

\[
\lim_{n\to\infty}\Bigl|\operatorname{Cov}\bigl(\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u),\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\bigr)\Bigr| \le \lim_{n\to\infty}\frac{4}{\sqrt{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}}\sum_{i=-\lfloor\lambda n\rfloor+1}^{0}\ \sum_{j=1}^{n-\lfloor\lambda n\rfloor}\alpha_X(|j-i|),
\]

cf. Inoue [27], proof of Theorem 2.1, and Hall and Heyde [25], Theorem A5. Direct calculations and an application of the Cauchy condensation test to the generalized harmonic series yield

\[
\lim_{n\to\infty}\frac{4}{\sqrt{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}}\sum_{i=-\lfloor\lambda n\rfloor+1}^{0}\ \sum_{j=1}^{n-\lfloor\lambda n\rfloor}\alpha_X(|j-i|) < 4\sum_{i=1}^{\infty}\alpha_X(i) < 4\sum_{i=1}^{\infty}i^{-a} < \infty, \tag{34}
\]

based on strong mixing with polynomial rate $\alpha_X(r) = O(r^{-a})$ for some $a > 1$. Absolute convergence of Equations (31) and (32) follows from the comparison test for infinite series with respect to the series given in Equation (34). Notice that (under the null hypothesis)

\[
\sqrt{\frac{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}{n}}\,\bigl(C_{\lfloor\lambda n\rfloor}(u)-C_{n-\lfloor\lambda n\rfloor}(u)\bigr) = \sqrt{\frac{n-\lfloor\lambda n\rfloor}{n}}\,\mathbb{G}_{C_1,\lfloor\lambda n\rfloor}(u) - \sqrt{\frac{\lfloor\lambda n\rfloor}{n}}\,\mathbb{G}_{C_2,n-\lfloor\lambda n\rfloor}(u)
\]

for all $u\in[0,1]^d$. An application of the continuous mapping theorem and Slutsky's theorem yields

\[
T_n(\lambda) \rightsquigarrow T(\lambda) = \int_{[0,1]^d}\Bigl(\sqrt{1-\lambda}\,\mathbb{G}_{C_1}(u) - \sqrt{\lambda}\,\mathbb{G}_{C_2}(u)\Bigr)^2\,du.
\]

Proof of Corollary 1. Assume a sample $X_1,\dots,X_n$ of $(X_j)_{j\in\mathbb{Z}}$ with specified change point candidate $\lfloor\lambda n\rfloor$ for $\lambda\in[0,1]$ such that $U_i\sim C_1$ for all $i = 1,\dots,\lfloor\lambda n\rfloor$ and $U_i\sim C_2$ for all $i = \lfloor\lambda n\rfloor+1,\dots,n$. Based on pseudo-observations $\hat U_1,\dots,\hat U_{\lfloor\lambda n\rfloor}$ and $\hat U_{\lfloor\lambda n\rfloor+1},\dots,\hat U_n$ (which are estimated separately for each subsample), consider

\[
\sqrt{n}\Bigl[\frac{\lfloor\lambda n\rfloor}{n}\bigl(C_{\lfloor\lambda n\rfloor}(u)-C_1(u)\bigr) + \frac{n-\lfloor\lambda n\rfloor}{n}\bigl(C_{n-\lfloor\lambda n\rfloor}(u)-C_2(u)\bigr)\Bigr]
\]

for all $u\in[0,1]^d$. If $C_1$ and $C_2$ satisfy Condition (3), then weak convergence of the latter linear combination follows by an application of Slutsky's theorem and the proof of Theorem 4. Merging the sums involved in the two empirical copulas yields

\begin{align}
&\sqrt{n}\Bigl[\frac{\lfloor\lambda n\rfloor}{n}\bigl(C_{\lfloor\lambda n\rfloor}(u)-C_1(u)\bigr) + \frac{n-\lfloor\lambda n\rfloor}{n}\bigl(C_{n-\lfloor\lambda n\rfloor}(u)-C_2(u)\bigr)\Bigr] \tag{35}\\
&\quad= \sqrt{n}\Bigl(\frac{1}{n}\sum_{j=1}^{n}\mathbf{1}_{\{\hat U_j\le u\}} - \frac{\lfloor\lambda n\rfloor}{n}C_1(u) - \frac{n-\lfloor\lambda n\rfloor}{n}C_2(u)\Bigr)\notag\\
&\quad= \sqrt{n}\Bigl(\frac{1}{n}\sum_{j=1}^{n}\bigl(\mathbf{1}_{\{\hat U_j\le u\}} - C_{mix}(u)\bigr)\Bigr), \tag{36}
\end{align}

for all $u\in[0,1]^d$, where, asymptotically equivalent, $C_{mix}(u) := \lambda\,C_1(u) + (1-\lambda)\,C_2(u)$. Due to separate estimation of pseudo-observations in each subsample, ties occur with positive probability $P_n(\hat U_{j_1,i} = \hat U_{j_2,i}) > 0$ for $j_1\in\{1,\dots,\lfloor\lambda n\rfloor\}$ and $j_2\in\{\lfloor\lambda n\rfloor+1,\dots,n\}$ in finite samples. Asymptotically, we have $\lim_{n\to\infty}P_n(\hat U_{j_1,i} = \hat U_{j_2,i}) = 0$. Hence, Equation (36) can be estimated on the basis of the tapered block multiplier approach as given in Theorem 3 and Equation (12), respectively. The Corollary follows as Equation (35) reconciles Equation (21) up to a rescaling of deterministic factors and an application of the continuous mapping theorem.

Proof of Theorem 5. As observed in Equation (26), under the null hypothesis of a constant copula, weak convergence of $S_n(\lambda,u)$ can be derived based on the asymptotic behavior of the sequential empirical process. Under the given assumptions, consider

\[
\mathbb{B}_{C,n}(\lambda,u) := \frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor\lambda n\rfloor}\bigl(\mathbf{1}_{\{U_j\le u\}}-C(u)\bigr), \qquad
\mathbb{G}_{C,n}(\lambda,u) := \frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor\lambda n\rfloor}\bigl(\mathbf{1}_{\{\hat U_j\le u\}}-C(u)\bigr)
\]

for all $(\lambda,u)\in[0,1]^{d+1}$. Philipp and Pinzur [35] prove convergence of $\mathbb{B}_{C,n}(\lambda,u)$: there exists $\kappa > 0$, depending on the dimension and the strong mixing rate $\alpha_X(r) = O(r^{-4d(1+\varepsilon)})$ for some $0 < \varepsilon \le 1/4$, such that

\[
\sup_{0\le\lambda\le 1}\ \sup_{u\in[0,1]^d}\ \Bigl|\mathbb{B}_{C,n}(\lambda,u) - \frac{1}{\sqrt{n}}\mathbb{B}_C(n\lambda,u)\Bigr| = O\bigl((\log n)^{-\kappa}\bigr)
\]

almost surely in $(\ell^\infty([0,1]^{d+1}), \|\cdot\|_\infty)$, where $\mathbb{B}_C(\lambda,u)$ is a $C$-Kiefer process with covariance

\[
\operatorname{Cov}\bigl(\mathbb{B}_C(\lambda_1,u),\mathbb{B}_C(\lambda_2,v)\bigr) = \min(\lambda_1,\lambda_2)\sum_{j\in\mathbb{Z}}\operatorname{Cov}\bigl(\mathbf{1}_{\{U_0\le u\}},\mathbf{1}_{\{U_j\le v\}}\bigr).
\]

Notice that, following the work of Dhompongsa [14], an improvement of the considered strong mixing condition is possible; this improvement, however, is not relevant for the time-series applications investigated in this paper.

It remains to prove weak convergence in the case of unknown marginal distribution functions. Note that

\[
\mathbb{B}_{C,n}(\lambda,u) = \frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor\lambda n\rfloor}\Bigl(\prod_{i=1}^{d}\mathbf{1}_{\{\hat U_{j,i}\le F_{n,i}(F_i^{-1}(u_i))\}} - C(u)\Bigr)
= \mathbb{G}_{C,n}\bigl(\lambda, F_{n,1}(F_1^{-1}(u_1)),\dots,F_{n,d}(F_d^{-1}(u_d))\bigr) + \frac{\lfloor\lambda n\rfloor}{\sqrt{n}}\Bigl(C\bigl(F_{n,1}(F_1^{-1}(u_1)),\dots,F_{n,d}(F_d^{-1}(u_d))\bigr) - C(u)\Bigr), \tag{37}
\]

whereas weak convergence of the sequential empirical process $\mathbb{B}_{C,n}(\lambda,u)$ is established and weak convergence of Equation (37) can be proven by an application of the functional delta method and Slutsky's theorem [see 4, and references therein]. More precisely, if the partial derivatives $D_iC(u)$ of the copula exist and satisfy Condition (3), then

\[
\mathbb{G}_{C,n}(\lambda,u) \rightsquigarrow \mathbb{B}_C(\lambda,u) - \lambda\sum_{i=1}^{d}D_iC(u)\,\mathbb{B}_C\bigl(1,u^{(i)}\bigr) \tag{38}
\]

in $(\ell^\infty([0,1]^{d+1}), \|\cdot\|_\infty)$. Weak convergence of the $(d+1)$-time parameter tied down empirical copula process given in Equation (26) holds, since

\[
S_n(\lambda,u) = \mathbb{G}_{C,n}(\lambda,u) - \frac{\lfloor\lambda n\rfloor}{n}\,\mathbb{G}_{C,n}(1,u)
= \mathbb{B}_{C,n}(\lambda,u) - \frac{\lfloor\lambda n\rfloor}{n}\,\mathbb{B}_{C,n}(1,u) + \sqrt{n}\,\frac{\lfloor\lambda n\rfloor}{n}\Bigl(C\bigl(F_{n,1}(F_1^{-1}(u_1)),\dots,F_{n,d}(F_d^{-1}(u_d))\bigr) - C(u)\Bigr)
\rightsquigarrow \mathbb{B}_C(\lambda,u) - \lambda\,\mathbb{B}_C(1,u)
\]

in $(\ell^\infty([0,1]^{d+1}), \|\cdot\|_\infty)$. Notice that Equation (38), and in particular continuity of the partial derivatives, is not required for this result [cf. the work of 36 in the context of GARCH residuals]. Convergence of the test statistics $T^1_n$, $T^2_n$, and $T^3_n$ follows by an application of the continuous mapping theorem.

Proof of Corollary 2. Under the null hypothesis, weak convergence conditional on $X_1,\dots,X_n$ almost surely follows by combining the results of Theorems 3 and 5. The strong mixing assumptions of the former theorem on conditional weak convergence of the tapered block multiplier empirical copula process are relevant for this proof since they imply those of the latter Theorem 5. Under the alternative hypothesis, there exist $P$ change point candidates such that

\[
0 = \lambda_0 < \lambda_1 < \dots < \lambda_P < \lambda_{P+1} = 1,
\]

whereas $U_j\sim C_p$ for all $j = \lfloor\lambda_{p-1}n\rfloor+1,\dots,\lfloor\lambda_p n\rfloor$ and $p\in\mathcal{P} := \{1,\dots,P+1\}$. For any given $\lambda\in[0,1]$, define

\[
p_\lambda := \begin{cases} 0 & \text{for } 0\le\lambda<\lambda_1,\\[2pt] \arg\max_{p\in\mathcal{P}}\ p\,\mathbf{1}_{\{\lambda_p<\lambda\}} & \text{for } \lambda_1\le\lambda\le 1, \end{cases}
\]

which yields the index of the maximal change point strictly dominated by $\lambda$. Consider the following linear combination of copulas:

\[
L_n(\lambda,u) := \sum_{p=1}^{p_\lambda}\frac{\lfloor\lambda_p n\rfloor-\lfloor\lambda_{p-1}n\rfloor}{n}\,C_p(u) + \frac{\lfloor\lambda n\rfloor-\lfloor\lambda_{p_\lambda}n\rfloor}{n}\,C_{p_\lambda+1}(u)
\longrightarrow \sum_{p=1}^{p_\lambda}\bigl(\lambda_p-\lambda_{p-1}\bigr)\,C_p(u) + \bigl(\lambda-\lambda_{p_\lambda}\bigr)\,C_{p_\lambda+1}(u) =: L(\lambda,u)
\]

for all $(\lambda,u)\in[0,1]^{d+1}$. Consider a sample $X_1,\dots,X_n$ of a process $(X_j)_{j\in\mathbb{Z}}$. Given knowledge of the constant marginal distribution functions $F_i$, $i = 1,\dots,d$, the empirical copula
