
Extremes (2012) 15:89–107
DOI 10.1007/s10687-011-0128-8

Simulation of Brown–Resnick processes

Marco Oesting · Zakhar Kabluchko · Martin Schlather

Received: 18 January 2010 / Revised: 30 September 2010 / Accepted: 15 February 2011 / Published online: 15 March 2011
© The Author(s) 2011. This article is published with open access at Springerlink.com

Abstract Brown–Resnick processes form a flexible class of stationary max-stable processes based on Gaussian random fields. With regard to applications, fast and accurate simulation of these processes is an important issue. In fact, Brown–Resnick processes that are generated by a dissipative flow do not allow for good finite approximations using the definition of the processes. On large intervals we get either huge approximation errors or very long operating times. Looking for solutions of this problem, we give different representations of the Brown–Resnick processes, including random shifting and a mixed moving maxima representation, and derive various kinds of finite approximations that can be used for simulation purposes. Furthermore, error bounds are calculated in the case of the original process by Brown and Resnick (J Appl Probab 14(4):732–739, 1977). For a one-parametric class of Brown–Resnick processes based on the fractional Brownian motion we perform a simulation study and compare the results of the different methods concerning their approximation quality. The presented simulation techniques turn out to provide remarkable improvements.

Keywords Error estimate · Extremes · Gaussian process · Max-stable process · Poisson point process

AMS 2000 Subject Classifications 60G70 · 60G10 · 68U20

M. Oesting (B) · M. Schlather
Institut für Mathematische Stochastik, Georg-August-Universität Göttingen, Goldschmidtstr. 7, 37077 Göttingen, Germany
e-mail: [email protected]

Z. Kabluchko
Institut für Stochastik, Universität Ulm, Helmholtzstr. 18, 89069 Ulm, Germany


1 Introduction

Stochastically continuous max-stable processes have been entirely characterized by de Haan (1984). Based on this approach many models for stationary max-stable processes have been considered. For instance, Smith (1990) introduced “rainfall-storm” models like the Gaussian and t extreme value processes. Further models are given in Schlather (2002) and de Haan and Pereira (2006), see also de Haan and Ferreira (2006).

We will focus on a class of stationary max-stable processes that has been introduced by Brown and Resnick (1977), further investigated by Falk et al. (2004), and generalized by Kabluchko et al. (2009). This class is notable, as a subclass also occurs as the limit of maxima of independent copies of stationary and appropriately scaled Gaussian random fields (Kabluchko et al. 2009) and, in a modified form, as the limit of empirical distribution functions (Kabluchko 2009a). Ergodic properties of the processes are studied by Kabluchko (2009b) and Wang and Stoev (2009), who discuss the decomposition into conservative and dissipative components; ergodicity and mixing properties are investigated by Kabluchko and Schlather (2009). Finally, Buishand et al. (2008) and de Haan and Zhou (2008) used Brown–Resnick processes for modelling spatial rainfall.

This paper is organized as follows: in Section 2 we introduce the class of Brown–Resnick processes. Equivalent representations of these processes based on random shifts are presented in Section 3. Section 4 deals with those Brown–Resnick processes which are generated by a dissipative flow and discusses further representations for them. All these different representations lead to different kinds of finite approximations, introduced in Section 5. Error estimates for these approximations are given in Section 6 for the original process of Brown and Resnick (1977). In Section 7, we compare the quality of different approximations by means of a simulation study.

Here, we restrict ourselves to max-stable processes with Gumbel margins. Fréchet and Weibull margins are obtained by marginal transformation.

2 Basics

A stochastic process $\{Z(t),\, t \in \mathbb{R}^d\}$ with Gumbel margins is called max-stable if the processes $\{\max_{i=1,\dots,n} Z_i(t) - \log n,\, t \in \mathbb{R}^d\}$ and $\{Z(t),\, t \in \mathbb{R}^d\}$ have the same law for any $n \in \mathbb{N}$, where $\{Z_i(t),\, t \in \mathbb{R}^d\}$ are independent copies of $\{Z(t),\, t \in \mathbb{R}^d\}$.

Let $\{W(t),\, t \in \mathbb{R}^d\}$ be a Gaussian process with stationary increments, that is, the law of $\{W(t + h) - W(h),\, t \in \mathbb{R}^d\}$ does not depend on the choice of $h \in \mathbb{R}^d$. For any second-order process $W(\cdot)$ with stationary increments the variogram $\gamma(\cdot)$ (see Chilès and Delfiner 1999) is defined by

$$\gamma(h) = \mathbb{E}\big(W(h) - W(0)\big)^2, \qquad h \in \mathbb{R}^d.$$

If $W(0) = 0$, then $\sigma^2(t) = \gamma(t)$, where $\sigma^2(t)$ denotes the variance $\operatorname{Var}(W(t))$. We recall the construction of stationary max-stable processes, which has been introduced by Brown and Resnick (1977) in the case of $W(\cdot)$ being a Brownian motion.


Theorem 1 (Kabluchko et al. 2009) Let $\{W_i(t),\, t \in \mathbb{R}^d\}$, $i \in \mathbb{N}$, be independent copies of a Gaussian random field $\{W(t),\, t \in \mathbb{R}^d\}$ with zero mean and variance $\sigma^2(\cdot)$. Independently, let $\sum_{i \in \mathbb{N}} \delta_{X_i}$ be a Poisson point process on $\mathbb{R}$ with intensity $\exp(-x)\,dx$.

1. The process $\{Z(t),\, t \in \mathbb{R}^d\}$, defined by

$$Z(t) = \max_{i \in \mathbb{N}} \left( X_i + W_i(t) - \frac{\sigma^2(t)}{2} \right),$$

is a max-stable process with standard Gumbel margins.
2. If, additionally, $W(\cdot)$ has stationary increments, then the process $Z(\cdot)$ is stationary and its law only depends on the variogram $\gamma(\cdot)$ of $W(\cdot)$. The process $Z(\cdot)$ is called the Brown–Resnick process associated to the variogram $\gamma(\cdot)$.
3. Moreover, under the assumptions above, $\sum_{i \in \mathbb{N}} \delta_{X_i + W_i(\cdot) - \sigma^2(\cdot)/2}$ is a translation-invariant Poisson point process on $\mathbb{R}^{\mathbb{R}^d}$.
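The construction in Theorem 1 translates directly into a simulation recipe. The following Python sketch (the paper's own study uses R; all names here are illustrative) approximates the process for $d = 1$ with $W$ a two-sided standard Brownian motion, so that $\sigma^2(t) = |t|$; the points $X_i$ are generated as $X_i = -\log R_i$ with $R_i$ the partial sums of i.i.d. unit-rate exponential variables (cf. Section 5).

```python
import numpy as np

def bm_paths(t, k, rng):
    """k independent two-sided standard Brownian motions on the increasing
    grid t (which is assumed to contain 0), pinned so that W(0) = 0."""
    steps = rng.normal(0.0, np.sqrt(np.diff(t)), size=(k, len(t) - 1))
    w = np.concatenate([np.zeros((k, 1)), np.cumsum(steps, axis=1)], axis=1)
    i0 = np.searchsorted(t, 0.0)
    return w - w[:, [i0]]

def brown_resnick_approx(t, k, rng):
    """Z^(k)(t) = max_{i<=k} (X_i + W_i(t) - sigma^2(t)/2), with
    sigma^2(t) = |t| and X_1 > X_2 > ... the largest k points of a
    Poisson point process with intensity exp(-x) dx."""
    x = -np.log(np.cumsum(rng.exponential(size=k)))
    return np.max(x[:, None] + bm_paths(t, k, rng) - np.abs(t) / 2.0, axis=0)

rng = np.random.default_rng(0)
t = np.linspace(-5.0, 5.0, 101)
z = brown_resnick_approx(t, k=1000, rng=rng)
```

Since $W_i(0) = 0$ and the $X_i$ are decreasing, $Z^{(k)}(0) = X_1 = -\log E_1$ with $E_1$ standard exponential, i.e. the margin at $t = 0$ is exactly standard Gumbel for every $k \ge 1$; away from the origin the truncation error discussed in Section 6 appears.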

3 Random shifts

Figure 1 shows that a finite approximation of the Brown–Resnick process based on the definition turns out to appear non-stationary on large intervals if the equation

$$\lim_{\|t\| \to \infty} \left( W(t) - \frac{\sigma^2(t)}{2} \right) = -\infty \quad \mathbb{P}\text{-a.s.} \tag{1}$$

holds. Therefore, we seek equivalent representations of Brown–Resnick processes that avoid this drawback. A first possibility is to include some “random shifting” into the construction.

Fig. 1 Five finite approximations of the original Brown–Resnick process, each based on the largest 1,000 values of the underlying Poisson point process.

Theorem 2 Let $W_i(\cdot)$, $i \in \mathbb{N}$, be independent copies of a Gaussian random field $\{W(t),\, t \in \mathbb{R}^d\}$ with zero mean, stationary increments and variance $\sigma^2(\cdot)$, and let $Q$ be a probability measure on $\mathbb{R}^d$. Independently of the $W_i(\cdot)$, let $\sum_{i \in \mathbb{N}} \delta_{(X_i, H_i)}$ be a Poisson point process on $\mathbb{R} \times \mathbb{R}^d$ with intensity measure $\exp(-x)\,dx \times Q(dh)$. Then,

$$\tilde Z(t) = \max_{i \in \mathbb{N}} \left( X_i + W_i(t - H_i) - \frac{\sigma^2(t - H_i)}{2} \right), \qquad t \in \mathbb{R}^d,$$

is a Brown–Resnick process associated to the variogram $\gamma(\cdot)$, i.e. $\tilde Z \stackrel{d}{=} Z$.

Proof Let $t_1, \dots, t_m \in \mathbb{R}^d$, $y_1, \dots, y_m \in \mathbb{R}$ and $m \in \mathbb{N}$ be arbitrary, and let $P_{t_1,\dots,t_m}$ be the law of the random vector $(W(t_1), \dots, W(t_m))$. Then, $\Phi = \sum_{i \in \mathbb{N}} \delta_{(X_i, H_i, W_i)}$ is a Poisson point process on $\mathbb{R} \times \mathbb{R}^d \times \mathbb{R}^{\mathbb{R}^d}$ with intensity measure $\exp(-x)\,dx \times Q(dh) \times P_W(dw)$, and

$$\tilde Z(t_1) \le y_1, \dots, \tilde Z(t_m) \le y_m \iff \Phi\left(\left\{ (x, h, w) \in \mathbb{R} \times \mathbb{R}^d \times \mathbb{R}^{\mathbb{R}^d} :\; x > \min_{k=1,\dots,m} \left( y_k - w(t_k - h) + \frac{\sigma^2(t_k - h)}{2} \right) \right\}\right) = 0.$$

Thus, we obtain

$$-\log \mathbb{P}\big( \tilde Z(t_1) \le y_1, \dots, \tilde Z(t_m) \le y_m \big) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^m} \int_{\min_{k=1,\dots,m} \left( y_k - w_k + \frac{\sigma^2(t_k - h)}{2} \right)}^{\infty} \exp(-x)\,dx \; P_{t_1 - h, \dots, t_m - h}(dw_1, \dots, dw_m)\, Q(dh)$$
$$= \int_{\mathbb{R}^d} -\log\big( \mathbb{P}(Z(t_1 - h) \le y_1, \dots, Z(t_m - h) \le y_m) \big)\, Q(dh).$$

Due to the stationarity of $Z(\cdot)$, the right-hand side equals

$$-\log \mathbb{P}\big( Z(t_1) \le y_1, \dots, Z(t_m) \le y_m \big). \qquad \square$$

This theorem can be used for representing Brown–Resnick processes in many different ways. Here, we give two corollaries as applications.


Corollary 3 Let $W(\cdot)$ be as in Theorem 1, $W_i^{(j)} \sim_{\text{i.i.d.}} W$, $i \in \mathbb{N}$, $j = 1, \dots, n$. Independently of the $W_i^{(j)}$, $i \in \mathbb{N}$, let $\Phi^{(j)} = \sum_i \delta_{X_i^{(j)}}$, $j = 1, \dots, n$, be independent Poisson point processes on $\mathbb{R}$ with intensity measure $n^{-1} \exp(-x)\,dx$, and let $h_1, \dots, h_n \in \mathbb{R}^d$. Then,

$$Z_1(t) = \max_{j=1,\dots,n} \max_{i \in \mathbb{N}} \left( X_i^{(j)} + W_i^{(j)}(t - h_j) - \frac{\sigma^2(t - h_j)}{2} \right), \qquad t \in \mathbb{R}^d,$$

is a Brown–Resnick process associated to the variogram $\gamma(\cdot)$, i.e. $Z_1 \stackrel{d}{=} Z$.

Proof Note that the superposition $\sum_{j=1}^n \sum_{i \in \mathbb{N}} \delta_{(X_i^{(j)}, h_j)}$ is a Poisson point process on $\mathbb{R} \times \mathbb{R}^d$ with intensity measure $\exp(-x)\,dx \times \big( \frac{1}{n} \sum_{j=1}^n \delta_{h_j} \big)$, and apply Theorem 2. $\square$

Corollary 4 Let the $W_i(\cdot)$ be as in Theorem 1, and let $I \subset \mathbb{R}^d$ be a finite cuboid. Independently of the $W_i$, let $\Phi = \sum \delta_{(X_i, H_i)}$ be a Poisson point process on $\mathbb{R} \times I$ with intensity measure $\exp(-x)\,dx \times |I|^{-1}\,dh$. Then, $Z \stackrel{d}{=} Z_2$, where

$$Z_2(t) = \max_{i \in \mathbb{N}} \left( X_i + W_i(t - H_i) - \frac{\sigma^2(t - H_i)}{2} \right), \qquad t \in \mathbb{R}^d.$$

Proof With $Q(dh) = |I|^{-1} \mathbf{1}_{h \in I}\,dh$ the assertion follows from Theorem 2. $\square$

4 Mixed moving maxima representation

The notion of max-stable processes generated by non-singular flows has been introduced by de Haan and Pickands (1986); further results on the representations of max-stable processes have been obtained in Kabluchko (2009b) and Wang and Stoev (2009) by transferring some work of Rosinski (1995) on SαS-processes.

Kabluchko et al. (2009, Theorem 14) showed that a Brown–Resnick process is generated by a dissipative flow if Eq. 1 holds. In the case $d = 1$ condition (1) is satisfied if $\liminf_{t \to \infty} \gamma(t)/\log t > 8$.

Using the stationarity criterion from the third part of Theorem 1, we will be able to provide equivalent representations of Brown–Resnick processes given by the following theorems.

Theorem 5 Let $W_i^{(j)}$, $i \in \mathbb{N}$, $j \in \mathbb{Z}^d$, be independent copies of a Gaussian random field $W(\cdot)$ with continuous sample paths, stationary increments, zero mean, variance $\sigma^2(\cdot)$ and variogram $\gamma(\cdot)$ on $\mathbb{R}^d$. Furthermore, let

$$T_i^{(j)} = \inf\left( \operatorname{arg\,sup}_{t \in \mathbb{R}^d} \left( W_i^{(j)}(t) - \frac{\sigma^2(t)}{2} \right) \right),$$

where the “inf” is understood in the lexicographic sense if $d > 1$. We assume Eq. 1, so that $T_i^{(j)}$ is well-defined a.s.

Independently of the $W_i^{(j)}$, let $\Phi^{(j)} = \sum_{i \in \mathbb{N}} \delta_{X_i^{(j)}}$, $j \in \mathbb{Z}^d$, be independent Poisson point processes on $\mathbb{R}$ with intensity measure $m^{-d} \exp(-x)\,dx$ for some $m \in \mathbb{N}$. Furthermore, let $p > 0$. Then, $Z_3 \stackrel{d}{=} Z$, where

$$Z_3(t) = \max_{j \in \mathbb{Z}^d} \max_{\substack{i \in \mathbb{N} \\ T_i^{(j)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d}} \left( X_i^{(j)} + W_i^{(j)}(t - pj) - \frac{\sigma^2(t - pj)}{2} \right), \qquad t \in \mathbb{R}^d,$$

is a Brown–Resnick process associated to the variogram $\gamma(\cdot)$.

Proof Let $\mathcal{C}$ be the $\sigma$-algebra on $C(\mathbb{R}^d)$ generated by the sets

$$C_{t_1,\dots,t_m}(B) = \{ f \in C(\mathbb{R}^d) : (f(t_1), \dots, f(t_m)) \in B \}$$

with $t_1, \dots, t_m \in \mathbb{R}^d$, $m \in \mathbb{N}$ and $B \in \mathcal{B}^m$, where $\mathcal{B}^m$ is the Borel $\sigma$-algebra of $\mathbb{R}^m$. Endowed with the topology of uniform convergence on compact sets, $C(\mathbb{R}^d)$ becomes a Polish space and the $\sigma$-algebra $\mathcal{C}$ coincides with the Borel $\sigma$-algebra generated by this topology.

We define $\xi_i^{(j)}(t) = W_i^{(j)}(t) - \sigma^2(t)/2$. Because of condition (1) each $T_i^{(j)}$ is finite $\mathbb{P}$-a.s. and $M_i^{(j)} = \sup_{t \in \mathbb{R}^d} \big( X_i^{(j)} + \xi_i^{(j)}(t) \big)$ is well-defined. The mapping

$$\Psi : C(\mathbb{R}^d) \to \mathbb{R}^d \times C(\mathbb{R}^d), \qquad X_i^{(j)} + \xi_i^{(j)}(\cdot) \mapsto \big( T_i^{(j)},\; X_i^{(j)} + \xi_i^{(j)}(\cdot) \big),$$

is measurable since $\sup_{x \in \mathbb{R}^d} \xi_i^{(j)}(x) = \sup_{x \in \mathbb{Q}^d} \xi_i^{(j)}(x)$ and $T_i^{(j)}$ is the first root of $\xi_i^{(j)} - \sup(\xi_i^{(j)})$. Therefore, the mapping theorem for Poisson point processes (Kingman 1993) yields that $\sum_{i \in \mathbb{N}} \delta_{(T_i^{(j)},\, X_i^{(j)} + \xi_i^{(j)}(\cdot))}$ is a Poisson point process with intensity measure

$$\Lambda(A) = \int_{\mathbb{R}} \frac{1}{m^d} \exp(-x)\, P_W\big( x + \xi \in \Psi^{-1}(A) \big)\, dx, \qquad A \in \mathcal{B}^d \times \mathcal{C},$$

where $P_W$ is the law of the process $W(\cdot)$. Now we define $U_t : C(\mathbb{R}^d) \to C(\mathbb{R}^d)$, $f(\cdot) \mapsto f(\cdot - t)$, and $V_t : \mathbb{R}^d \times C(\mathbb{R}^d) \to \mathbb{R}^d \times C(\mathbb{R}^d)$, $(x, f(\cdot)) \mapsto (x + t, f(\cdot - t))$, as translations by $t \in \mathbb{R}^d$. Then we obtain

$$(\Psi \circ U_t)\big( X_i^{(j)} + \xi_i^{(j)}(\cdot) \big) = \big( T_i^{(j)} + t,\; X_i^{(j)} + \xi_i^{(j)}(\cdot - t) \big) = (V_t \circ \Psi)\big( X_i^{(j)} + \xi_i^{(j)}(\cdot) \big).$$

The intensity measure of the Poisson point process $\sum \delta_{X_i^{(j)} + \xi_i^{(j)}(\cdot)}$ is translation invariant (with respect to $U_t$) by Theorem 1. Because $\Psi$ commutes with the translation operators, $\Lambda$ is translation invariant (with respect to $V_t$) as well.

Simulation of Brown–Resnick processes 95

Thus, for any $j \in \mathbb{Z}^d$, we obtain

$$\max_{\substack{i \in \mathbb{N} \\ T_i^{(j)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d}} \big( X_i^{(j)} + \xi_i^{(j)}(\cdot - pj) \big) \;\stackrel{d}{=}\; \max_{\substack{i \in \mathbb{N} \\ T_i^{(j)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d + pj}} \big( X_i^{(j)} + \xi_i^{(j)}(\cdot) \big). \tag{2}$$

Now we consider each side of Eq. 2 separately. For different $j \in \mathbb{Z}^d$ we get stochastically independent processes. This yields

$$Z_3(\cdot) = \max_{j \in \mathbb{Z}^d} \max_{\substack{i \in \mathbb{N} \\ T_i^{(j)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d}} \big( X_i^{(j)} + \xi_i^{(j)}(\cdot - pj) \big) \;\stackrel{d}{=}\; \max_{j \in \mathbb{Z}^d} \max_{\substack{i \in \mathbb{N} \\ T_i^{(j)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d + pj}} \big( X_i^{(j)} + \xi_i^{(j)}(\cdot) \big).$$

Furthermore, by replacing $T_i^{(j)}$, $\xi_i^{(j)}$, and $X_i^{(j)}$ by $T_i^{(j \bmod m)}$, $\xi_i^{(j \bmod m)}$, and $X_i^{(j \bmod m)}$, respectively, we obtain

$$\max_{\substack{i \in \mathbb{N} \\ T_i^{(j)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d + pj}} \big( X_i^{(j)} + \xi_i^{(j)}(\cdot) \big) \;\stackrel{d}{=}\; \max_{\substack{i \in \mathbb{N} \\ T_i^{(j \bmod m)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d + pj}} \big( X_i^{(j \bmod m)} + \xi_i^{(j \bmod m)}(\cdot) \big),$$

where “mod” is understood as a componentwise operation. For $j_1 \equiv j_2 \bmod m$, $j_1 \ne j_2$, we have

$$\big( (-mp/2,\, mp/2]^d + pj_1 \big) \cap \big( (-mp/2,\, mp/2]^d + pj_2 \big) = \emptyset,$$

which guarantees that processes $\xi_i^{(j_1 \bmod m)}$ with $T_i^{(j_1 \bmod m)} \in (-mp/2, mp/2]^d + pj_1$ and $\xi_i^{(j_2 \bmod m)}$ with $T_i^{(j_2 \bmod m)} \in (-mp/2, mp/2]^d + pj_2$ are independent. By these considerations we get

$$Z_3(\cdot) \;\stackrel{d}{=}\; \max_{j \in \mathbb{Z}^d} \max_{\substack{i \in \mathbb{N} \\ T_i^{(j \bmod m)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d + pj}} \big( X_i^{(j \bmod m)} + \xi_i^{(j \bmod m)}(\cdot) \big)$$
$$\stackrel{d}{=}\; \max_{k \in \{0,\dots,m-1\}^d} \max_{\substack{j \in \mathbb{Z}^d \\ j \bmod m \equiv k}} \max_{\substack{i \in \mathbb{N} \\ T_i^{(k)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d + pj}} \big( X_i^{(k)} + \xi_i^{(k)}(\cdot) \big)$$
$$= \max_{k \in \{0,\dots,m-1\}^d} \max_{i \in \mathbb{N}} \big( X_i^{(k)} + \xi_i^{(k)}(\cdot) \big) \;\stackrel{d}{=}\; Z(\cdot).$$

96 M. Oesting et al.

The last step is based on the fact that $\sum_{k \in \{0,\dots,m-1\}^d} \sum_{i \in \mathbb{N}} \delta_{X_i^{(k)}}$ is a Poisson point process with intensity measure $\sum_{k \in \{0,\dots,m-1\}^d} m^{-d} \exp(-x)\,dx = \exp(-x)\,dx$. $\square$

By Kabluchko (2009b), condition (1) holds only if $Z(\cdot)$ has a mixed moving maxima representation. In order to construct such a representation we repeat results from the proof of Theorem 14 in Kabluchko et al. (2009).

Theorem 6 Let $\{W_i(t),\, t \in \mathbb{R}^d\}$, $i \in \mathbb{N}$, be independent copies of a Gaussian random field $\{W(t),\, t \in \mathbb{R}^d\}$ with continuous sample paths, stationary increments, zero mean, variance $\sigma^2(\cdot)$ and variogram $\gamma(\cdot)$ on $\mathbb{R}^d$. We assume that condition (1) is satisfied. Furthermore, let $T_i = \inf\big( \operatorname{arg\,sup}_{t \in \mathbb{R}^d} \big( W_i(t) - \frac{\sigma^2(t)}{2} \big) \big)$, $M_i = \sup_{t \in \mathbb{R}^d} \big( W_i(t) - \frac{\sigma^2(t)}{2} \big)$ and $F_i(\cdot) = W_i(\cdot + T_i) - \frac{\sigma^2(\cdot + T_i)}{2} - M_i$.

Independently of the $W_i$, let $\sum_{i \in \mathbb{N}} \delta_{X_i}$ be a Poisson point process with intensity measure $\exp(-x)\,dx$. Then, the random measure $\sum_{i \in \mathbb{N}} \delta_{(T_i,\, X_i + M_i,\, F_i)}$ defines a Poisson point process on $\mathbb{R}^d \times \mathbb{R} \times C(\mathbb{R}^d)$ with intensity measure $\lambda^* dt \times e^{-y} dy \times Q(dF)$ for some $\lambda^* > 0$ and a probability measure $Q$ on $C(\mathbb{R}^d)$. Furthermore, let $\sum_{i \in \mathbb{N}} \delta_{(S_i, U_i)}$ be a Poisson point process on $\mathbb{R}^d \times \mathbb{R}$ with intensity measure $\lambda^* dt\, e^{-u} du$ and $F_i \sim_{\text{i.i.d.}} Q$. Then, we have $Z_4 \stackrel{d}{=} Z$ for

$$Z_4(t) = \max_{i \in \mathbb{N}} \big( U_i + F_i(t - S_i) \big), \qquad t \in \mathbb{R}^d.$$

Proof The first part is shown in the proof of Theorem 14 in Kabluchko et al. (2009). For the second part note that $\sum_{i \in \mathbb{N}} \delta_{(T_i,\, X_i + M_i,\, F_i)}$ and $\sum_{i \in \mathbb{N}} \delta_{(S_i,\, U_i,\, F_i)}$ are Poisson point processes on $\mathbb{R}^d \times \mathbb{R} \times C(\mathbb{R}^d)$ with the same intensity measure. Furthermore, we have $Z(\cdot) = \max_{i \in \mathbb{N}} \Psi(T_i, X_i + M_i, F_i)$ and $Z_4(\cdot) = \max_{i \in \mathbb{N}} \Psi(S_i, U_i, F_i)$ with the transformation

$$\Psi : \mathbb{R}^d \times \mathbb{R} \times C(\mathbb{R}^d) \to C(\mathbb{R}^d), \qquad (s, u, f) \mapsto u + f(\cdot - s). \qquad \square$$

Remark 7 A similar result holds if we consider all the processes from Theorem 6 restricted to $p\mathbb{Z}^d$, $p > 0$, instead of $\mathbb{R}^d$. Then, for

$$T_i^{(p)} = \inf\left( \operatorname{arg\,sup}_{t \in p\mathbb{Z}^d} \left( W_i(t) - \frac{\sigma^2(t)}{2} \right) \right), \qquad M_i^{(p)} = \sup_{t \in p\mathbb{Z}^d} \left( W_i(t) - \frac{\sigma^2(t)}{2} \right),$$

and

$$F_i^{(p)}(\cdot) = W_i\big( \cdot + T_i^{(p)} \big) - \frac{\sigma^2\big( \cdot + T_i^{(p)} \big)}{2} - M_i^{(p)}, \qquad t \in p\mathbb{Z}^d,$$

the random measure $\sum_{i \in \mathbb{N}} \delta_{(T_i^{(p)},\, X_i + M_i^{(p)},\, F_i^{(p)})}$ defines a Poisson point process on $p\mathbb{Z}^d \times \mathbb{R} \times \mathbb{R}^{p\mathbb{Z}^d}$ with intensity measure $\lambda^{(p)} p^d \delta_t \times e^{-u} du \times Q^{(p)}(dF)$ for some $\lambda^{(p)} > 0$ and some probability measure $Q^{(p)}$. An equivalent representation $Z_4$ of $Z$ can be given analogously to Theorem 6. Even more easily, all the other results from Sections 2, 3 and 4 can be transferred to processes on a lattice.

For approximating $Z$ via representation $Z_4$ the law $Q$ is needed explicitly. Note that, in general, $Q$ is not the law of $W(\cdot + T) - \sigma^2(\cdot + T)/2 - M$ (and $Q^{(p)}$ is not the law of $W(\cdot + T^{(p)}) - \sigma^2(\cdot + T^{(p)})/2 - M^{(p)}$). If we assume $W(0) = 0$ and restrict ourselves to processes on a lattice $p\mathbb{Z}^d$, we get the following result.

Theorem 8 Let $W(\cdot)$ be as in Theorem 6 and

$$T^{(p)} = \inf\left( \operatorname{arg\,sup}_{t \in p\mathbb{Z}^d} \left( W(t) - \frac{\sigma^2(t)}{2} \right) \right).$$

Furthermore, assume $W(0) = 0$. Then, $Q^{(p)}$ is the law of

$$W(\cdot) - \frac{\sigma^2(\cdot)}{2} \;\Big|\; T^{(p)} = 0.$$

Proof Let $A \in \mathcal{B}(\mathbb{R}^{p\mathbb{Z}^d})$ and $V \in \mathcal{B}(\mathbb{R})$ such that

$$0 < \int_V e^{-x}\,dx < \infty.$$

Furthermore, let $\Phi = \sum_{i \in \mathbb{N}} \delta_{(T_i^{(p)},\, X_i + M_i^{(p)},\, F_i^{(p)})}$ be the Poisson point process on $p\mathbb{Z}^d \times \mathbb{R} \times \mathbb{R}^{p\mathbb{Z}^d}$ with the notations from Remark 7. Then, we have

$$Q^{(p)}(A) = \mathbb{P}\big( \Phi(\{0\} \times V \times A) = 1 \mid \Phi(\{0\} \times V \times \mathbb{R}^{p\mathbb{Z}^d}) = 1 \big), \tag{3}$$

since the intensity measure of $\Phi$ is a product measure. Since we may assume that the points $(T_i^{(p)},\, X_i + M_i^{(p)},\, F_i^{(p)})$ are numbered such that the sequence $(X_i)_{i \in \mathbb{N}}$ is decreasing (cf. Section 5), we get

$$\mathbb{P}\big( \Phi(\{0\} \times V \times A) = 1 \mid \Phi(\{0\} \times V \times \mathbb{R}^{p\mathbb{Z}^d}) = 1 \big)$$
$$= \sum_{i \in \mathbb{N}} \mathbb{P}\Big( T_i^{(p)} = 0,\; X_i + M_i^{(p)} \in V \;\Big|\; \#\big\{ i : \big( T_i^{(p)},\, X_i + M_i^{(p)} \big) \in \{0\} \times V \big\} = 1 \Big)$$
$$\qquad \cdot\, \mathbb{P}\Big( F_i^{(p)} \in A \;\Big|\; T_i^{(p)} = 0,\; X_i + M_i^{(p)} \in V,\; \#\big\{ i : \big( T_i^{(p)},\, X_i + M_i^{(p)} \big) \in \{0\} \times V \big\} = 1 \Big) \tag{4}$$

98 M. Oesting et al.

with

$$\mathbb{P}\big( F_i^{(p)} \in A \mid T_i^{(p)} = 0,\; X_i + M_i^{(p)} \in V,\; \#\{ i : (T_i^{(p)},\, X_i + M_i^{(p)}) \in \{0\} \times V \} = 1 \big)$$
$$= \mathbb{P}\big( F_i^{(p)} \in A \mid T_i^{(p)} = 0,\; X_i \in V,\; (T_j^{(p)},\, X_j + M_j^{(p)}) \notin \{0\} \times V \;\forall\, j \ne i \big)$$
$$= \mathbb{P}\big( F_i^{(p)} \in A \mid T_i^{(p)} = 0 \big),$$

where we use the fact that $W_i$ is independent of $X_i$, $X_j$ and $W_j$ for all $j \ne i$. Employing Eqs. 3, 4, and

$$\sum_{i \in \mathbb{N}} \mathbb{P}\big( T_i^{(p)} = 0,\; X_i + M_i^{(p)} \in V \mid \Phi(\{0\} \times V \times \mathbb{R}^{p\mathbb{Z}^d}) = 1 \big) = 1,$$

we get

$$Q^{(p)}(A) = \mathbb{P}\big( F_i^{(p)} \in A \mid T_i^{(p)} = 0 \big) = \mathbb{P}\left( W(\cdot) - \frac{\sigma^2(\cdot)}{2} \in A \;\Big|\; T^{(p)} = 0 \right)$$

for all $A \in \mathcal{B}(\mathbb{R}^{p\mathbb{Z}^d})$. $\square$

Remark 9 Let $\Phi$ be defined as in the proof of Theorem 8. Considering the intensity $\lambda^{(p)} p^d$ of the restriction of $\Phi$ to the set $\{0\} \times [0, \infty) \times C(\mathbb{R}^d)$ we get the equality $\lambda^{(p)} p^d = \mathbb{P}(T^{(p)} = 0)$.

Using only the assumptions of Theorem 6, $Q$ can be described as the law of $F_i$ conditional on $X_i + M_i$ and $T_i$. Let $\Phi = \sum_{i \in \mathbb{N}} \delta_{(T_i,\, X_i + M_i,\, F_i)}$ and $E \in \mathcal{B}(\mathbb{R}^d \times \mathbb{R})$ such that $\int_E e^{-x}\,(dt \times dx) \in (0, \infty)$. Furthermore, let $N = \Phi(E \times C(\mathbb{R}^d))$ and $i_1, \dots, i_N$ such that $(T_{i_k},\, X_{i_k} + M_{i_k}) \in E$ for $k = 1, \dots, N$. By $G_1, \dots, G_N$ we denote a random permutation of $F_{i_1}, \dots, F_{i_N}$.

Theorem 10 Conditional on $N = n$, the processes $G_1, \dots, G_n$ are i.i.d. with law $Q$.

Proof We have to prove that all finite-dimensional margins of $G_1, \dots, G_n$ are products of one-dimensional margins with law $Q$. By decomposing the sets of $\mathcal{C}$ and renumbering the indices it suffices to prove that $\mathbb{P}(G_1 \in A_1, \dots, G_{n_1} \in A_1,\, G_{n_1+1} \in A_2, \dots, G_{n_1 + n_2 + \dots + n_l} \in A_l \mid N = n)$ equals $\prod_{i=1}^l Q(A_i)^{n_i}$ for pairwise disjoint sets $A_1, \dots, A_l \in \mathcal{C}$ and $n_1, \dots, n_l \in \mathbb{N}$ with $n_1 + \dots + n_l \le n$. Let $m = n_1 + \dots + n_l$ and $A = \bigcup_{i=1}^l A_i$. Then, we have

$$\mathbb{P}\big( G_1 \in A_1, \dots, G_{n_1} \in A_1,\, G_{n_1+1} \in A_2, \dots, G_m \in A_l \mid N = n \big)$$
$$= \sum_{\substack{k_1 \ge n_1, \dots, k_l \ge n_l \\ k_1 + \dots + k_l \le n}} \mathbb{P}\left( G_1 \in A_1, \dots, G_m \in A_l \;\middle|\; \bigcap_{j=1}^{l} \{ \Phi(E \times A_j) = k_j \},\; N = n \right) \cdot \mathbb{P}\big( \Phi(E \times A_j) = k_j,\; j = 1, \dots, l \mid N = n \big)$$
$$= \sum_{\substack{k_1 \ge n_1, \dots, k_l \ge n_l \\ k_1 + \dots + k_l \le n}} \frac{k_1}{n} \cdots \frac{k_1 - n_1 + 1}{n - n_1 + 1} \cdot \frac{k_2}{n - n_1} \cdots \frac{k_l - n_l + 1}{n - m + 1} \cdot \binom{n}{k_1, \dots, k_l,\; n - k_1 - \dots - k_l} \cdot Q(A_1)^{k_1} \cdots Q(A_l)^{k_l} \cdot (1 - Q(A))^{n - k_1 - \dots - k_l}$$
$$= \prod_{i=1}^{l} Q(A_i)^{n_i}. \qquad \square$$

5 Finite approximations

Let $Y_1, Y_2, \dots$ be independent exponentially distributed random variables with parameter $\lambda > 0$ and define $R_n = \sum_{i=1}^n Y_i$ for $n \in \mathbb{N}$. Then, $\sum_{i \in \mathbb{N}} \delta_{R_i}$ is a Poisson point process on $(0, \infty)$ with intensity $\lambda$. Applying the mapping theorem (Kingman 1993) we get that $\sum_{i \in \mathbb{N}} \delta_{-\log R_i}$ is a Poisson point process on $\mathbb{R}$ with intensity measure $\lambda \exp(-x)\,dx$, and the sequence $(X_i)_{i \in \mathbb{N}}$ with $X_i = -\log R_i$ is monotonically decreasing.

For simplicity we will only consider approximations of $Z$ on a symmetric cuboid $[-b, b]$, $b \in \mathbb{R}^d$, based on the definition of $Z$ and the representations $Z_1$, $Z_2$, $Z_3$, and $Z_4$, respectively.

1. For the Poisson point process $\Phi = \sum_{i \in \mathbb{N}} \delta_{X_i}$ with intensity measure $\exp(-x)\,dx$ and independent copies $\{W_i(t),\, t \in [-b, b]\}$ of a Gaussian process with stationary increments, let

$$Z^{(k)}(t) = \max_{i=1,\dots,k} \big( X_i + W_i(t) - \sigma^2(t)/2 \big), \qquad t \in [-b, b],\; k \in \mathbb{N}.$$

2. Let $\{X_i^{(j)}\}_{i \in \mathbb{N}}$, $j = 1, \dots, n$, be decreasing sequences of points of the Poisson point processes $\sum_{i \in \mathbb{N}} \delta_{X_i^{(j)}}$ with intensity measure $n^{-1} \exp(-x)\,dx$, and let $\{W_i^{(j)}(t),\, t \in [-b - h_j, b - h_j]\}$ be independent copies of Gaussian processes. Then, for $k \in \mathbb{N}$, let

$$Z_1^{(k)}(t) = \max_{j=1,\dots,n} \max_{i=1,\dots,k} \left( X_i^{(j)} + W_i^{(j)}(t - h_j) - \frac{\sigma^2(t - h_j)}{2} \right), \qquad t \in [-b, b].$$

3. Let $\sum_{i \in \mathbb{N}} \delta_{(X_i, H_i)}$ be a Poisson point process on $I \times \mathbb{R}$ with intensity measure $|I|^{-1}\,dh \times \exp(-x)\,dx$. For each $i \in \mathbb{N}$, let $\{W_i(t),\, t \in [-b - I_{\max}, b - I_{\min}]\}$ be independent copies of a Gaussian process, where $I_{\min}$ and $I_{\max}$ are the lower and upper end points of $I$, and let

$$Z_2^{(k)}(t) = \max_{i=1,\dots,k} \big( X_i + W_i(t - H_i) - \sigma^2(t - H_i)/2 \big), \qquad t \in [-b, b],\; k \in \mathbb{N}.$$

4. For $j_{\min} \le j \le j_{\max} \in \mathbb{Z}^d$, let $\{X_i^{(j)}\}_{i \in \mathbb{N}}$ be descending sequences of points of the Poisson point processes $\sum_{i \in \mathbb{N}} \delta_{X_i^{(j)}}$ with intensity measure $m^{-d} \exp(-x)\,dx$. We assume $pj_{\min} < -b < b < pj_{\max}$. Furthermore, we have independent copies $\{W_i^{(j)}(t),\, t \in [-b - pj, b - pj]\}$ of a Gaussian process and define $T_i^{(j)} = \inf\big( \operatorname{arg\,sup} \big( W_i^{(j)}(t) - \frac{\sigma^2(t)}{2} \big) \big)$. For $k \in \mathbb{N}$, $t \in [-b, b]$, let

$$Z_3^{(k)}(t) = \max_{j = j_{\min}, \dots, j_{\max}} \max_{\substack{i = 1, \dots, k \\ T_i^{(j)} \in (-\frac{m}{2}p,\, \frac{m}{2}p]^d}} \left( X_i^{(j)} + W_i^{(j)}(t - pj) - \frac{\sigma^2(t - pj)}{2} \right).$$

5. Let $I$ be a finite interval in $\mathbb{R}^d$ with $[-b, b] \subset I$. Let $(U_i, S_i)$ be descending in $U_i$ such that $\sum_{i \in \mathbb{N}} \delta_{(U_i, S_i)}$ is a Poisson point process on $\mathbb{R} \times I$ with intensity measure $\lambda^* \exp(-u)\,du \times ds$, i.e. $U_1 \ge U_2 \ge U_3 \ge \dots$. For each $i \in \mathbb{N}$, let $F_i$ be an independent sample path with law $Q$. Then, for $k \in \mathbb{N}$, $t \in [-b, b]$, let

$$Z_4^{(k)}(t) = \max_{i=1,\dots,k} \big( U_i + F_i(t - S_i) \big).$$

This construction is illustrated by Fig. 2. For simulating $F_i \sim_{\text{i.i.d.}} Q$ or the discretized version $F_i^{(p)} \sim_{\text{i.i.d.}} Q^{(p)}$ we can either use Theorem 8 or Theorem 10. In the first case we simulate independent copies $W_j(\cdot)$ of $W(\cdot)$ and reject all those processes with $T_j^{(p)} \ne 0$. In the second case we simulate a Brown–Resnick process (e.g. using the standard approximation method) and use all those processes $F_j$ with $(X_j + M_j, T_j) \in E$ in a random order. We have to choose $E$ carefully so that we have to simulate as few processes $W_j(\cdot)$ as possible to get a realisation of $F_i$.


Fig. 2 Construction of one sample path of the original Brown–Resnick process based on the representation $Z(\cdot) = Z_4(\cdot)$. The circles mark the points $(U_i, S_i)$. The resulting process is displayed by the bold line.

The constants $\lambda^*$ or $\lambda^{(p)} p^d$ can be estimated by counting $\#\{i \in \mathbb{N} : X_i > 0,\; T_i \in [0, 1]^d\}$ or $\#\{i \in \mathbb{N} : X_i > 0,\; T_i^{(p)} = 0\}$ (i.e. estimating $\mathbb{P}(T^{(p)} = 0)$), respectively.
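The estimation of $\mathbb{P}(T^{(p)} = 0)$ can be sketched as follows for $d = 1$ and $W$ a standard Brownian motion ($\sigma^2(t) = |t|$). The lattice is truncated to a finite window, which is an additional approximation; under condition (1) the maximum of $W(t) - |t|/2$ lies far away only with small probability. All names are illustrative.

```python
import numpy as np

def prob_argmax_at_zero(p, half_width, n_sim, rng):
    """Monte Carlo estimate of P(T^(p) = 0), where
    T^(p) = inf argmax_{t in p*Z} (W(t) - |t|/2) and the lattice p*Z is
    truncated to [-n*p, n*p] with n = floor(half_width / p)."""
    n = int(half_width / p)
    grid = p * np.arange(-n, n + 1)
    i0 = n                               # index of t = 0
    hits = 0
    for _ in range(n_sim):
        steps = rng.normal(0.0, np.sqrt(p), size=2 * n)
        w = np.concatenate([[0.0], np.cumsum(steps)])
        w -= w[i0]                       # pin W(0) = 0
        if np.argmax(w - np.abs(grid) / 2.0) == i0:
            hits += 1
    return hits / n_sim

rng = np.random.default_rng(3)
est = prob_argmax_at_zero(p=0.5, half_width=15.0, n_sim=2000, rng=rng)
```

The estimate can be cross-checked against the bounds quoted after Proposition 12, $(1 - \exp(-p/2))^2/4 \le \mathbb{P}(T^{(p)} = 0) \le 1$; a coarser lattice leaves fewer competing lattice points, so the probability grows with $p$.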

6 Error estimates

In this section we assume that $\{W(t),\, t \in \mathbb{R}\}$ is a one-dimensional standard Brownian motion. Furthermore, we consider a symmetric interval $[-b, b]$ on which the simulation is performed.

In order to estimate the error for the approximation of $Z(\cdot)$ by $Z^{(k)}(\cdot)$, we define the random variable $C_k = \inf_{t \in [-b, b]} \big( Z^{(k)}(t) + \sigma^2(t)/2 \big)$. Then, we get the following result.

Proposition 11 (Oesting 2010, Proposition 3.8) Let $x < c < 0$. Then,

$$\mathbb{P}\big( Z^{(k)}(t) \ne Z(t) \text{ for some } t \in [-b, b] \mid X_k \le x,\; C_k > c \big) \le 4 e^{-c}\, \frac{b}{c - x}\, \exp\left( \frac{b}{2} \right) \left( 1 - \Phi\left( \frac{c - x - b}{\sqrt{b}} \right) \right). \tag{5}$$

Furthermore, we have

$$\mathbb{P}(C_k \le c) \le 1 - \Big( 1 - \exp\big( -\exp\big( -\tfrac{c}{2} \big) \big) \Big) \cdot \left( 2\Phi\left( -\frac{c}{2\sqrt{b}} \right) - 1 \right)^2 \tag{6}$$

102 M. Oesting et al.

and

$$\mathbb{P}(X_k > x) \le \frac{\exp(-(k-1)x)}{(k-1)!}\, \big( 1 - \exp(-\exp(-x)) \big). \tag{7}$$

The unconditional error probability $\mathbb{P}\big( Z^{(k)}(t) \ne Z(t) \text{ for some } t \in [-b, b] \big)$ can be bounded by the sum of the right-hand sides of Eqs. 5, 6 and 7.

If the right-hand sides of Eqs. 5, 6 and 7 tend to 0 for $k \to \infty$, e.g. for $c = c(k) = -\log\log(\log(k)/2)$ and $x = x(k) = -\log(k)/2$, we get

$$\lim_{k \to \infty} \mathbb{P}\big( Z^{(k)}(t) \ne Z(t) \text{ for some } t \in [-b, b] \big) = 0.$$

Similar results hold for $Z_1^{(k)}(\cdot)$ and $Z_2^{(k)}(\cdot)$, cf. Oesting (2010). In particular, these bounds have the same asymptotic behaviour.

To give bounds for $Z_4(\cdot)$, we consider the standard Brownian motion restricted to the lattice $p\mathbb{Z}$ for some $p > 0$. Here, we also have to adjust the Poisson point process $\sum_{i \in \mathbb{N}} \delta_{(U_i, S_i)}$ to the lattice and to restrict its second component to some finite interval $[-v, v]$. Let $\sum_{i \in \mathbb{N}} \delta_{(Y_i^{(p)}, R_i)}$ be a Poisson point process on $\mathbb{R} \times [-v, v]$ with intensity measure $\lambda^{(p)} p \exp(-y)\,dy \times \sum_{k \in \mathbb{Z} \cap [-v/p, v/p]} \delta_{pk}(ds)$. Then, we get an approximation of $Z_4$ by

$$Z_4^{(k,v,p)}(t) = \max_{i=1,\dots,k} \big( Y_i^{(p)} + F_i(t - R_i) \big)$$

for $t \in [-b, b] \cap p\mathbb{Z}$, where $F_i \sim_{\text{i.i.d.}} Q^{(p)}$. For the following result we use that $Q^{(p)}$ is the law of $W(t) - \frac{|t|}{2} \mid T^{(p)} = 0$, where $W$ is a standard Brownian motion and $T^{(p)} = \inf\big( \operatorname{arg\,sup}_{t \in p\mathbb{Z}} (W(t) - |t|/2) \big)$.

Proposition 12 (Oesting 2010, Proposition 3.17) Let $b, v \in p\mathbb{N}$ with $v - b \ge 16$ and $b \ge 1$. Furthermore, let $C_k = \min_{t \in [-b, b] \cap p\mathbb{Z}} Z_4^{(k,v,p)}(t)$ and $c < 0$. Then, we have

$$\mathbb{P}\big( Z_4^{(k,v,p)}(t) \ne Z_4(t) \text{ for some } t \in [-b, b] \cap p\mathbb{Z} \mid C_k > c,\; Y_k^{(p)} \le c \big) \le \frac{64\, \lambda^{(p)} e^{-c}}{(1 - \exp(-p/2))^2} \left( \exp\left( -\frac{\sqrt{v - b}}{2} \right) + 7 \sqrt{b^3} \exp\left( -\frac{v - b}{48 b} \right) \right). \tag{8}$$

Simulation of Brown–Resnick processes 103

Furthermore, it holds that

$$\mathbb{P}(C_k \le c) \le 1 - \Big( 1 - \exp\big( -\lambda^{(p)}(2b + p)\, e^{-c/2} \big) \Big) \cdot \left( 1 - \frac{\big( 2(v - b)\lambda^{(p)} \big)^k \exp\big( -\frac{(k-1)c}{2} \big)}{(k-1)!\,\big( 1 - \exp\big( -e^{-(v-b)\lambda^{(p)} c} \big) \big)} \right)$$
$$\cdot \left( 1 - \frac{4}{1 - e^{-p/2}} \left( 1 - \Phi\left( -\frac{c}{\sqrt{8b}} - \frac{\sqrt{b}}{2} \right) - \exp(-c/2) \left( 1 - \Phi\left( -\frac{c}{\sqrt{8b}} + \frac{\sqrt{b}}{2} \right) \right) \right)^2 \right) \tag{9}$$

and

$$\mathbb{P}\big( Y_k^{(p)} > c \big) \le \frac{\big( (2v + p)\lambda^{(p)} \big)^k \exp(-(k-1)c)}{(k-1)!\,\big( 1 - \exp\big( -e^{-(2v+p)\lambda^{(p)} c} \big) \big)}. \tag{10}$$

The unconditional error probability $\mathbb{P}\big( Z_4^{(k,v,p)}(t) \ne Z_4(t) \text{ for some } t \in [-b, b] \big)$ can be bounded by the sum of the right-hand sides of Eqs. 8, 9 and 10.

If we choose $v = v(k) = k^{1/4}$ and $c = c(k) = -\log(k)/4$, for example, again all the error bounds given in Proposition 12 tend to 0 as $k \to \infty$.

Here, the unconditional bound is given for fixed $k \in \mathbb{N}$. On the other hand, we can also use a stopping rule similar to Schlather (2002) and consider $Z_4^{(k)}$ with $k \in \mathbb{N}$ such that $C_k > Y_k^{(p)}$. Then, the error can be assessed by the bound for the first probability, which is independent of $k$.

Note that the upper bounds given in Proposition 12 can be made explicit by employing $\lambda^{(p)} = \mathbb{P}(T^{(p)} = 0)/p$ (cf. Remark 9) and using further bounds like $(1 - \exp(-p/2))^2/4 \le \mathbb{P}(T^{(p)} = 0) \le 1$. Similar bounds can be given for $Z_3^{(k)}$, cf. Oesting (2010).

7 Simulation study

In order to compare the different simulation techniques described in Section 5 we perform a simulation study on $\mathbb{R}^1$ using the software environment R (Ihaka and Gentleman 1996; R Development Core Team 2009). We consider symmetric intervals $[-b, b]$ with $b \in \{1, 2, 5, 10, 20, 30, 50\}$ and the variogram $\gamma(h) = 2|h|^\alpha$ for $\alpha \in \{0.2, 0.4, 0.6, \dots, 1.8\}$. This means that condition (1) holds for all these Brown–Resnick processes. We always consider the process on a grid of mesh width 0.1.

In order to get a fair criterion for the comparison of the different methods we have fixed the number $q$ of simulated sample paths of $W(\cdot)$ on $[-b, b]$ per realization of the Brown–Resnick process, $q = 100$, 500 and 2,500. Simulation techniques for $W$ are given in Lantuéjoul (2002) and Schlather (2009). In order to approximate $Z_3$ and $Z_4$ the paths $W(\cdot)$ have to be computed on an interval larger than $[-b, b]$. Here, we assume that the computing time depends linearly on the length of this interval and modify the number of simulated sample paths to have an approximately equal computing time for all the approximations. We repeat any simulation of a Brown–Resnick process $N = 5{,}000$ times and call this a run.

For ease, we will call the approximation of $Z_i$ “method $i$” ($i = 1, 2, 3, 4$) and the approximation of $Z$ “method 0”.

Applying method 1, we have to choose $h_1, \dots, h_n$ depending on $b$. It seems reasonable to distribute $h_1, \dots, h_n$ equally on $[-b, b]$. Furthermore, the distance $\Delta h = h_2 - h_1$ should neither be too large, because we want to cover the interval with good approximations, nor too small, in order to get a method distinct from method 2. Here, depending on $b$, we choose some $n \in \{5, 10, 20, 50\}$ such that $0.5 \le \Delta h \le 2$.

In order to approximate $Z_2(\cdot)$, let $Q$ be the uniform distribution on $[-b, b]$. When approximating $Z_3(\cdot)$, we always set $p$ as the mesh width, i.e. $p = 0.1$, and set $j_{\max} = 200$ for $b = 1, 2$, $j_{\max} = 250$ for $b = 5$ and $j_{\max} = 10b + 300$ otherwise. Note that for the choice of $j_{\max}$ (and also for the choice of $v$ in method 4) not its absolute value, but the difference $pj_{\max} - b$ is important. In practice, a difference of 30 (or larger) provides very good results. On the other hand, one should be aware of the fact that increasing $j_{\max}$ is quite expensive in terms of computing time if $k$ is fixed. We also varied the intensity parameter $m$ and got best results for $m = 31$.

In case of method 4 we set $v = 20$ for $b = 1, 2$, $v = 25$ for $b = 5$ and $v = b + 30$ otherwise. Here, the choice of the set $E$ is crucial. A large domain of $E$ requires the simulation of low values of $X_i$, involving high computational costs. A very small domain, however, leads to a high rejection rate. We choose $E$ in the following way. Let $\Phi = \sum_{i \in \mathbb{N}} \delta_{(X_i, T_i, X_i + M_i)}$ be a Poisson point process on $\mathbb{R} \times \mathbb{R}^d \times \mathbb{R}$ with intensity measure $\Lambda$. From Theorem 6 it is known that $\Lambda(\mathbb{R} \times E) = \lambda^* \int_E e^{-y}\,dt \times dy$. We compare this to $\Lambda([x_0, \infty) \times E)$ for some fixed $x_0 \in \mathbb{R}$. The latter can be easily estimated by simulation. Then, the probability of drawing $F_i$ incorrectly when restricting our simulation to processes with $X_j > x_0$, conditional on $\Phi(\mathbb{R} \times E) > 0$, is bounded by $1 - \exp(-\Lambda((-\infty, x_0) \times E))$. For our simulation study we choose $x_0 = -2$ and approximate the area of highest intensity with cubes.

As already mentioned before, for methods 3 and 4 the (location of the) maximum of the Gaussian process is needed. To this end, we simulate the Gaussian process on a larger interval, which is at least of length $pj_{\max} + b$ or $v + b$, respectively, and take the maximum of the process restricted to this area, which implies additional errors. Note that we do not get any additional error by discretization, as the equivalent representations $Z_3(\cdot)$ and $Z_4(\cdot)$ also hold for Brown–Resnick processes restricted to a lattice (cf. Remark 7).

As a measure of approximation quality we took the largest distance between the empirical cumulative distribution function of the approximated process at the interval bounds and the standard Gumbel distribution function. This is motivated by the fact that we expect the largest deviations from the original process at the interval bounds. Both bounds are taken into account in order to get a lower volatility of the results: for independent realisations $Z^{(k)}_{i,1}(\cdot), \ldots, Z^{(k)}_{i,N}(\cdot)$ let $Z^{(k)}_{i,(1)}(t), \ldots, Z^{(k)}_{i,(N)}(t)$ be the order statistics at location $t \in p\mathbb{Z}$. Then, we define the deviation of approximation as

$$\varepsilon = \frac{1}{2} \max_{j=1,\ldots,N} \left| \exp\!\left(-\exp\!\left(-Z^{(k)}_{i,(j)}(-b)\right)\right) - \frac{j}{N} \right| + \frac{1}{2} \max_{j=1,\ldots,N} \left| \exp\!\left(-\exp\!\left(-Z^{(k)}_{i,(j)}(b)\right)\right) - \frac{j}{N} \right|.$$
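A minimal numerical sketch of this quantity (function and argument names are ours; the simulated values at ∓b are assumed to be held in plain lists):

```python
import math

def deviation_of_approximation(z_at_minus_b, z_at_plus_b):
    """Deviation epsilon: mean over both interval bounds of the
    Kolmogorov-type distance between the order statistics, mapped
    through the standard Gumbel cdf exp(-exp(-z)), and the grid j/N."""
    def half(values):
        order_stats = sorted(values)  # Z_{(1)} <= ... <= Z_{(N)}
        n = len(order_stats)
        return max(abs(math.exp(-math.exp(-z)) - (j + 1) / n)
                   for j, z in enumerate(order_stats))
    return 0.5 * half(z_at_minus_b) + 0.5 * half(z_at_plus_b)
```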

We perform all the simulations up to 50 times. After each run of all methods, we calculate the p-values for pairwise t-tests between the different methods, assuming that ε is normally distributed. We stop simulating a method whenever p < 0.005. Figure 3 depicts, for each pair (b, α), the methods that have not been rejected after 50 repetitions.

In general, methods 0, 1 and 2 perform best if α or b is small. If both are large, method 4 is the best one. The area where method 4 performs best increases for growing q. For large q, methods 0 and 2 show the same behaviour; if q is small, there is a sharper distinction between these methods. Method 0 provides better results for small b, method 2 for small α. Method 3 only works well if q gets large; then, we get the best results for large b.

Fig. 3 Methods providing best results depending on the interval bound b and variogram parameter α, using q = 100 (left), 500 (middle), and 2,500 (right) simulated sample paths of W(·) on [−b, b] per realization of the Brown–Resnick process.

Fig. 4 Finite approximations of the original Brown–Resnick process; five realizations of $Z_2^{(1000)}(\cdot)$ (left) and $Z_4^{(1000)}(\cdot)$ (right).

The typical behaviour for large b, α and q is shown in Figs. 1 and 4 for the standard Brownian motion. The development of the deviation ε for growing q and different b and α is shown in Table 1. More generally, there are the following recommendations concerning the choice of methods in practice: if the variogram value evaluated at the diameter of the simulated area is low, then use the original definition; simulation by random shifting is also appropriate if an imprecise simulation is sufficient; if the variogram tends to infinity and its value evaluated at the diameter of the simulated window is high, then the mixed moving maxima representation is best.

Table 1 The mean deviation of approximation for different (α, b, q)

         α = 0.4, b = 30        α = 1, b = 10          α = 1.6, b = 2
q        100    500    2,500    100    500    2,500    100    500    2,500
ε0       0.148  0.066  0.027    0.662  0.514  0.367    0.085  0.030  0.014
ε1       0.153  0.064  0.026    0.429  0.345  0.281    0.137  0.101  0.065
ε2       0.135  0.063  0.030    0.423  0.339  0.280    0.129  0.084  0.046
ε3       0.816  0.472  0.023    0.833  0.493  0.026    0.848  0.519  0.025
ε4       0.379  0.099  0.066    0.514  0.076  0.014    0.357  0.045  0.012

By εi we denote the deviation of approximation by method i, i = 0, ..., 4.


Acknowledgments The research of Marco Oesting was supported by the German Research Foundation DFG through the Graduiertenkolleg 1023 Identification in Mathematical Models: Synergy of Stochastic and Numerical Methods, University of Göttingen, in form of a scholarship. The authors are grateful to the referees for numerous valuable suggestions improving this article. We also thank Anthony C. Davison for his comments on simulations on a lattice and Sebastian Engelke for helpful discussions concerning the mixed moving maxima representation.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

Brown, B.M., Resnick, S.I.: Extreme values of independent stochastic processes. J. Appl. Probab. 14(4), 732–739 (1977)

Buishand, T., de Haan, L., Zhou, C.: On spatial extremes: with applications to a rainfall problem. Ann. Appl. Stat. 2(2), 624–642 (2008)

Chilès, J.P., Delfiner, P.: Geostatistics: Modeling Spatial Uncertainty. Wiley Series in Probability and Statistics: Applied Probability and Statistics. Wiley, New York (1999)

de Haan, L.: A spectral representation for max-stable processes. Ann. Probab. 12(4), 1194–1204 (1984)

de Haan, L., Ferreira, A.: Extreme Value Theory: An Introduction. Springer Series in Operations Research and Financial Engineering. Springer, New York (2006)

de Haan, L., Pereira, T.T.: Spatial extremes: models for the stationary case. Ann. Stat. 34(1), 146–168 (2006)

de Haan, L., Pickands, J.: Stationary min-stable stochastic processes. Probab. Theory Relat. Fields 72(4), 477–492 (1986)

de Haan, L., Zhou, C.: On extreme value analysis of a spatial process. REVSTAT 6(1), 71–81 (2008)

Falk, M., Hüsler, J., Reiss, R.D.: Laws of Small Numbers: Extremes and Rare Events, second, revised and extended edition. Birkhäuser, Basel (2004)

Ihaka, R., Gentleman, R.: R: a language for data analysis and graphics. J. Comput. Graph. Stat. 5(3), 299–314 (1996)

Kabluchko, Z.: Extremes of space-time Gaussian processes. Stoch. Process. Their Appl. 119(11), 3962–3980 (2009a)

Kabluchko, Z.: Spectral representations of sum- and max-stable processes. Extremes 12(4), 401–424 (2009b)

Kabluchko, Z., Schlather, M.: Ergodic properties of max-infinitely divisible processes. Stoch. Process. Their Appl. 120(3), 281–295 (2010)

Kabluchko, Z., Schlather, M., de Haan, L.: Stationary max-stable fields associated to negative definite functions. Ann. Probab. 37(5), 2042–2065 (2009)

Kingman, J.F.C.: Poisson Processes. Oxford Studies in Probability, vol. 3. The Clarendon Press, Oxford University Press, New York (1993)

Lantuéjoul, C.: Geostatistical Simulation: Models and Algorithms. Springer, Berlin (2002)

Oesting, M.: Simulationsverfahren für Brown-Resnick-Prozesse. Master's thesis, Georg-August-Universität Göttingen. http://arxiv.org/abs/0911.4389v2 (2010)

R Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.r-project.org (2009)

Rosinski, J.: On the structure of stationary stable processes. Ann. Probab. 23(3), 1163–1187 (1995)

Schlather, M.: Models for stationary max-stable random fields. Extremes 5(1), 33–44 (2002)

Schlather, M.: RandomFields: Simulation and Analysis of Random Fields. R package version 2.0.43. http://cran.r-project.org/package=RandomFields (2011)

Smith, R.L.: Max-Stable Processes and Spatial Extremes. Technical report, University of Surrey (1990)

Wang, Y., Stoev, S.A.: On the structure and representations of max-stable processes. Adv. Appl. Probab. 42(3), 855–877 (2010)