Research Collection
Doctoral Thesis
Wavelet Galerkin schemes for option pricing in multidimensional Lévy models
Author(s): Winter, Christoph
Publication Date: 2009
Permanent Link: https://doi.org/10.3929/ethz-a-005773237
Rights / License: In Copyright - Non-Commercial Use Permitted
Diss. ETH No. 18221
Wavelet Galerkin schemes for option pricing
in multidimensional Lévy models
A dissertation submitted to
ETH Zurich
for the degree of
Doctor of Sciences
presented by
CHRISTOPH WINTER
Dipl. Math. TU München
MSci. Math. Virginia Tech
born June 29, 1979
citizen of Germany
accepted on the recommendation of
Prof. Dr. Christoph Schwab, ETH Zurich, examiner
Prof. Dr. Rama Cont, Columbia University, co-examiner
Prof. Dr. Tobias von Petersdorff, University of Maryland, co-examiner
2009
Acknowledgments
First, I would like to thank my advisor Prof. Christoph Schwab for all his support during the work on my thesis. He gave me not only helpful inspirations, but also the freedom to realize my own ideas. I also owe thanks to Prof. Rama Cont and Prof. Tobias von Petersdorff, who kindly agreed to be co-examiners of this thesis.
Special thanks go to my two colleagues, Dr. Norbert Hilber and Dr. Nils Reich, both members of the Computational Methods for Quantitative Finance group. Our numerous discussions as well as their friendship contributed a great deal toward my thesis, not to mention making my assistantship at ETH Zurich most enjoyable. For similar reasons, I would also like to thank the members of the Seminar for Applied Mathematics, in particular my colleagues Alexey, Bastian, Christian, Claude, Daniel, Gisela, Harish, Holger, Imran, Kersten, Marcel, Martina, Miro, Oleg, Paolo, Roman and Sohrab.
But most of all I want to thank my family: Marcia, Hans, Tobias, Sebastian and Sarah. Without them none of this would be possible.
Christoph Winter
Abstract
We consider a wavelet Galerkin scheme for solving partial integrodifferential equations where sparse tensor product spaces are applied for the discretization to reduce the complexity in the number of degrees of freedom. The resulting matrices are dense, since the jump operator is non-local. Therefore, wavelet compression methods are used to decrease the number of non-zero matrix entries.
We focus on algorithmic details of the scheme, in particular on the numerical integration of the matrix coefficients. Since the multidimensional Lévy densities have singularities at the origin and on the axes, variable order composite Gauss quadrature formulas are employed. We show that the quadrature rule leads to exponential convergence for Lévy densities which are piecewise analytic. Using a hierarchical data structure, an adaptive numerical scheme is developed which computes each matrix entry with a given accuracy. The accuracy is chosen by an a-priori numerical analysis of the scheme such that the solution of the perturbed problem still converges at the optimal rate.
We give numerical examples. In particular, the regularization of the multidimensional Lévy measure is considered where small jumps are either neglected or approximated by an artificial Brownian motion. We study and compare the impact of these approximations on various financial contracts in multidimensional Lévy market models.
Zusammenfassung
We consider a wavelet Galerkin scheme for solving partial integrodifferential equations in which sparse tensor products are used for the discretization in order to reduce the complexity in the number of degrees of freedom. The resulting matrices are dense, since the jump operator is non-local. Therefore, wavelet compression methods are used to reduce the number of non-zero entries.

We concentrate on the algorithmic details of the scheme, in particular on the numerical integration of the matrix coefficients. Since the multidimensional Lévy densities are singular at the origin and on the axes, composite Gauss quadrature formulas of varying order are used. We show that these quadrature rules converge exponentially for Lévy densities which are piecewise analytic. Based on a hierarchical data structure, an adaptive numerical scheme can be developed which computes each matrix entry to a given accuracy. This accuracy is chosen such that the solution of the perturbed problem still converges at the optimal rate.

We give numerical examples. In particular, the regularization of the multidimensional Lévy measure is considered, in which small jumps are either neglected or approximated by an artificial Brownian motion. We study and compare the influence of these approximations on various financial derivatives in multidimensional Lévy market models.
Contents
1 Preliminaries
  1.1 Notation
  1.2 Function spaces
  1.3 Finite element method
    1.3.1 Variational formulation
    1.3.2 Space discretization
    1.3.3 Matrix formulation
    1.3.4 Time discretization
    1.3.5 Convergence rates

2 Multidimensional Lévy models
  2.1 Lévy processes
  2.2 Lévy copulas
  2.3 Lévy models
    2.3.1 Stable processes
    2.3.2 Subordinated Brownian motion
    2.3.3 Lévy copula models
    2.3.4 Admissible models
  2.4 Properties of the Lévy measure

3 Option pricing
  3.1 Partial integrodifferential equation
  3.2 Variational formulation
  3.3 Localization

4 Wavelet basis
  4.1 Wavelets
    4.1.1 Spline wavelets on the interval
    4.1.2 Sparse tensor product space
  4.2 Wavelet discretization
  4.3 Wavelet compression of the Lévy measure
  4.4 Multilevel preconditioning

5 Composite Gauss quadrature rules
  5.1 Gauss-Legendre quadrature
  5.2 Composite Gauss quadrature

6 Computational scheme
  6.1 Hierarchical data structure
    6.1.1 Element tree
    6.1.2 Compression pattern
  6.2 Matrix computation
  6.3 Numerical integration
  6.4 Adaptive strategy

7 Model sensitivities and Greeks
  7.1 Sensitivity with respect to model parameters
  7.2 Sensitivity with respect to solution arguments

8 Impact of approximations of small jumps
  8.1 Gaussian approximation
  8.2 Basket options
  8.3 Barrier options

References

Curriculum Vitae
Introduction
Over the last years financial models with jumps, and especially Lévy models, have increased tremendously in popularity. By now it is well established that Lévy models are more suitable for capturing market fluctuations than the classical Black-Scholes model [6]; see, e.g., Cont and Tankov [16] and Schoutens [61] for an overview and empirical justification. The number of financial models with jumps is growing steadily; for the most popular as well as some recent examples we refer to [2, 9, 10, 26, 38, 41, 42, 61, 65]. However, even in the Black-Scholes setting, closed form solutions to derivative pricing problems are often unavailable or not easily computable. Furthermore, in models with jumps one cannot generally construct analytic solutions, not even for the pricing of plain European vanilla options. Therefore, numerical methods for option pricing have been analyzed by many authors and several techniques have been developed to obtain efficient pricing algorithms. Especially in models with jumps, numerical challenges occur which have given rise to a number of innovative numerical tools. In this work we shall focus on the so-called wavelet method for asset pricing in multidimensional Lévy models.
Consider a basket of d ≥ 1 risky assets whose log returns X_t = (X_t^1, ..., X_t^d)^⊤ ∈ R^d at time t > 0 are modeled by a Lévy process X = {X_t : t ≥ 0} with state space R^d. By the fundamental theorem of asset pricing (see, e.g., [24]), arbitrage-free prices u of European contingent claims on such baskets with "reasonable" payoffs g(·) and maturity T are given by the conditional expectation

u(t, x) = E( e^{−r(T−t)} g(X_T) | X_t = x ).   (0.1)
Here, the expectation is taken with respect to an a-priori chosen martingale measure equivalent to the historical measure (see, e.g., [23, 25] for some measure selection criteria).
As is well known [34, Chapter 4], the family {T_t}_{t≥0} of maps T_t : g(·) → u(t, ·) is a one-parameter semigroup. We denote by A its associated infinitesimal generator, i.e.,

Au := lim_{t→0+} (1/t)(T_t u − u),   (0.2)

for all functions u ∈ D(A) in the domain

D(A) := { u ∈ C_∞(R^d) : lim_{t→0+} (1/t)(T_t u − u) exists as a strong limit },
where C_∞(R^d) is the space of continuous functions vanishing at infinity (see, e.g., [34]). Sufficiently smooth value functions u in (0.1) can be obtained as solutions of a partial integrodifferential equation (PIDE), the Kolmogorov equation

∂_t u + Au − ru = 0,   (0.3)
where A is the infinitesimal generator of the process X defined by (0.2). Among several possible notions of solution (classical, variational and viscosity solutions, to name the most frequently employed), we opt for variational solutions, which are the basis for variational discretization methods such as finite element discretizations. To convert (0.3) into variational form, we formally integrate against a test function v and obtain (assuming r = 0 for convenience)

(∂_t u, v) + E(u, v) = 0,   (0.4)

where E(u, v) = (Au, v). Here, the bilinear expression E(u, v) denotes the extension of the L²(R^d) inner product (Au, v) corresponding to X from u, v ∈ C_0^∞(R^d) by continuity to the domain D(E). For the class of Lévy processes considered, we show in this work that E(·, ·) is in fact a Dirichlet form.
In the univariate case, i.e., for a Lévy process X with state space R, equations (0.3), (0.4) and methods for their numerical solution have been studied by several authors; see, e.g., [8, 17, 43, 45] and the references therein. The numerical methods investigated were either finite difference methods [8, 17] approximating viscosity solutions or variational methods [43, 45] approximating weak (or variational) solutions. Both solution concepts coincide for sufficiently smooth solutions, but the resulting numerical schemes have essentially different properties. In [28], the univariate variational setting was extended to d > 1 dimensions for pure jump processes built from 1-homogeneous Lévy copulas and univariate marginal Lévy processes with symmetric tempered stable margins. The domain of the infinitesimal generator A was characterized and it was shown that the corresponding variational problem is well-posed. The multivariate, nonsymmetric case was studied in [54], i.e., when the univariate marginal Lévy processes are tempered stable, but with possibly nonsymmetric margins. Furthermore, analytical results were provided which are required for an efficient numerical implementation of (0.4). Under these models, option pricing using Fourier methods as in [11] is generally not possible since the characteristic functions are not given in closed form.
Following [28, 54] we consider a wavelet Galerkin scheme where sparse tensor product spaces are applied for the discretization to reduce the complexity in the number of degrees of freedom from O(h^{−d}) to O(h^{−1} |log h|^{d−1}). Here, h denotes the mesh width of the finite element discretization. The resulting matrices are dense since the jump operator is non-local. Therefore, wavelet compression methods are used to reduce the number of non-zero matrix entries. We focus on algorithmic details of the scheme, in particular on the numerical integration of the matrix coefficients. Since the multidimensional Lévy densities have singularities at the origin and on the axes, variable order composite Gauss quadrature formulas are employed.
The outline is as follows. We first introduce some notation in Chapter 1 and state the function spaces which are used in this work. The basic finite element method is briefly explained. In Chapter 2 we recall essential definitions of Lévy processes and Lévy copulas. Several examples of multivariate Lévy models are given and important properties of the Lévy measure are proved, in particular the so-called sector condition. In Chapter 3 we derive the partial integrodifferential equation corresponding to the option pricing problem. We show that the variational formulation has a unique solution in an anisotropic Sobolev space. Furthermore, the unbounded log-price domain is localized to a bounded domain and an error bound for the localization error is obtained. In Chapter 4 wavelets are explained and the localized problem is discretized in the log-price domain on a sparse tensor product space. A wavelet compression strategy for the resulting stiffness matrix is given which yields essentially the optimal complexity of non-zero matrix entries. Additionally, a multilevel preconditioner is presented. In Chapter 5 a composite quadrature is derived which combines elementary Gauss quadrature formulas on subdomains decreasing geometrically towards the singular support of the Lévy measure. We show that the quadrature rule leads to exponential convergence for Lévy densities which are piecewise analytic. The computational scheme is explained in Chapter 6. Using a hierarchical data structure, an adaptive numerical scheme is developed which computes each matrix entry with a given accuracy. The accuracy is chosen by an a-priori numerical analysis of the scheme such that the solution of the perturbed problem still converges at the optimal rate. In Chapter 7 we show how to compute model sensitivities, in particular the so-called Greeks. The solution of the sensitivity problem has the same convergence rate as the solution of the initial option pricing problem.

Finally, in Chapter 8 numerical examples are given. In particular, the regularization of the multidimensional Lévy measure ν is considered where small jumps are either neglected or approximated by an artificial Brownian motion. We study and compare the impact of these approximations on various financial contracts in multidimensional Lévy market models.
1 Preliminaries
We first set some notational conventions to limit subsequent interruptions. Since the numerical analysis requires tools from functional analysis, in particular Sobolev spaces, we also state several function spaces which are used throughout this work. Furthermore, we briefly explain the basic finite element method for parabolic partial differential equations. Here, the choice of the basis functions is crucial for efficient computation. It is shown in Chapter 4 that choosing a wavelet basis provides several advantages.
1.1 Notation
Let D be a non-empty open subset of R^d. If a function u : D → R is sufficiently smooth, we denote the partial derivatives of u by ∂^n u = ∂_1^{n_1} ··· ∂_d^{n_d} u, where n = (n_1, ..., n_d) ∈ N^d is a multiindex. The order of the partial derivative is given by |n| = Σ_{i=1}^d n_i. The Euclidean norm of x ∈ R^d is denoted by |x| and the Borel σ-algebra of R^d is given by B(R^d). We set R̄ = (−∞, ∞] and write a ≤ b for a, b ∈ R^d if a_j ≤ b_j, j = 1, ..., d. In this case (a, b] is a half-open interval (a, b] = (a_1, b_1] × ··· × (a_d, b_d], and correspondingly for the closed interval [a, b]. Throughout, we write x ≲ y to express that the scalar x is bounded by a constant multiple of y, i.e., there exists a c > 0 such that x ≤ c y. Correspondingly, x ∼ y means x ≲ y and y ≲ x. We denote the indicator function of the set B ⊂ R^d by 1_B : R^d → {0, 1}.

For a non-empty set I ⊂ {1, ..., d} we define its complement by I^c = {1, ..., d} \ I. We set x_I = (x_i)_{i∈I} and use the notation

x + y_I = z ∈ R^d  with  z_i = x_i if i ∉ I,  z_i = x_i + y_i else,

for x ∈ R^d, y ∈ R^{|I|}. Furthermore, ∂_I = ∂_{i_1} ··· ∂_{i_k} for I = {i_1, ..., i_k}, k ∈ {1, ..., d}.
A tensor product for matrices is given by the Kronecker product. For matrices B ∈ R^{m×n}, C ∈ R^{l×k} or vectors y ∈ R^m, z ∈ R^l, their Kronecker product A = B ⊗ C ∈ R^{ml×nk}, x = y ⊗ z ∈ R^{ml} is defined by (see [29, Section 4.5.5])

A = [ B_{11} C  B_{12} C  ···  B_{1n} C
      B_{21} C  B_{22} C  ···  B_{2n} C
        ⋮          ⋮       ⋱      ⋮
      B_{m1} C  B_{m2} C  ···  B_{mn} C ],    x = ( y_1 z, y_2 z, ..., y_m z )^⊤.
To simplify notation we also define the componentwise vector multiplication for y, z ∈ R^n by x = y .∗ z ∈ R^n, where x_j = y_j z_j, j = 1, ..., n.
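Both products introduced above are available directly in NumPy; a minimal sketch with small hypothetical matrices (the sizes and entries are illustrative, not from the text):

```python
import numpy as np

# Kronecker product A = B ⊗ C and componentwise product x = y .* z.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # B in R^{2x2}
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # C in R^{2x2}

A = np.kron(B, C)                   # A in R^{4x4}; block (i,j) equals B[i,j] * C
assert A.shape == (4, 4)
assert np.allclose(A[:2, :2], B[0, 0] * C)   # top-left block is B_{11} C

y = np.array([1.0, 2.0, 3.0])
z = np.array([4.0, 5.0, 6.0])
x = y * z                           # componentwise product, written y .* z above
assert np.allclose(x, [4.0, 10.0, 18.0])
```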
1.2 Function spaces
The numerical analysis and the variational formulation (0.4) require tools from functional analysis, in particular Sobolev spaces. For any integer n ∈ N we define

C^n(D) = { u : ∂^n u exists and is continuous on D for |n| ≤ n }

and set C^∞(D) = ⋂_{n≥0} C^n(D). The support of u is denoted by supp u, and we define C_0^n(D), C_0^∞(D) consisting of all functions u ∈ C^n(D), C^∞(D) with compact support supp u ⋐ D.
We denote by H^n(R^d), n ∈ N, the usual Sobolev space consisting of all functions in L²(R^d) with partial derivatives up to order n in L²(R^d), with norm

‖u‖²_{H^n(R^d)} := Σ_{|n|≤n} ‖∂^n u‖²_{L²(R^d)}.

These spaces can naturally be extended to isotropic Sobolev spaces H^s(R^d) for non-integer s ≥ 0 as the spaces of all S^∗-functions with finite norm

‖u‖²_{H^s(R^d)} := ∫_{R^d} (1 + |ξ|)^{2s} |û(ξ)|² dξ,
where û is the Fourier transform of u. Similarly, we can define anisotropic Sobolev spaces H^s(R^d) with norm

‖u‖²_{H^s(R^d)} := ∫_{R^d} Σ_{j=1}^d (1 + ξ_j²)^{s_j} |û(ξ)|² dξ,   (1.1)

for any multiindex s ≥ 0. It is useful to notice that by [50, Section 9.2] the anisotropic Sobolev spaces admit an intersection structure and we have

H^s(R^d) = ⋂_{j=1}^d H_j^{s_j}(R^d)  and  ‖u‖²_{H^s(R^d)} ∼ Σ_{j=1}^d ‖u‖²_{H_j^{s_j}},   (1.2)

with

‖u‖_{H_j^{s_j}(R^d)} = ‖ (1 + ξ_j²)^{s_j/2} û ‖_{L²(R^d)}.
Furthermore, we have a mixed Sobolev space H^n(R^d) = H^{n_1}(R) ⊗ ··· ⊗ H^{n_d}(R), n ∈ N^d, with the norm

‖u‖²_{H^n(R^d)} := Σ_{0≤m_i≤n_i} ‖∂^m u‖²_{L²(R^d)}.   (1.3)
For any s ≥ 0 we define H^s(R^d) by interpolation. Finally, we define the space

H̃^s(D) := { u|_D : u ∈ H^s(R^d), u|_{R^d\D} = 0 }.   (1.4)

For s_j + 1/2 ∉ N, j = 1, ..., d, the space H̃^s(D) coincides with H_0^s(D), the closure of C_0^∞(D) with respect to the norm of H^s(D) [46, Theorem 3.33].
1.3 Finite element method
We briefly explain the basic finite element method for parabolic partial differential equations, where we opt for variational solutions. Therefore, we first state the abstract variational formulation and then describe the space and time discretization. Convergence rates for the finite element approximation are also given. For more details we refer to the monographs [27, 66].
1.3.1 Variational formulation
Let V ⊂ H be two Hilbert spaces with continuous, dense embedding. We identify H with its dual H^∗ and obtain the triplet

V ⊂ H ≡ H^∗ ⊂ V^∗.   (1.5)

In this Gelfand triplet we consider a bilinear mapping E : V × V → R. For f ∈ L²((0, T); V^∗) and u_0 ∈ H, consider the following abstract problem:

Find u ∈ L²((0, T); V) ∩ H¹((0, T); V^∗) such that

⟨∂_t u, v⟩_{V^∗,V} + E(u, v) = ⟨f, v⟩_{V^∗,V},  ∀v ∈ V, a.e. in (0, T),   (1.6)
u(0) = u_0.
The bilinear form E(·, ·) can also be a Dirichlet form with domain D(E).
Definition 1.3.1. Let (E, D(E)) be a closed form on L²(R^d), i.e., D(E) is a Hilbert space with inner product E_1^{sym}(·, ·) := E^{sym}(·, ·) + ⟨·, ·⟩ and E(·, ·) is continuous with respect to E_1^{sym}(·, ·), i.e.,

|E(u, v)| ≲ (E_1^{sym}(u, u))^{1/2} (E_1^{sym}(v, v))^{1/2},  ∀u, v ∈ D(E).

The form (E, D(E)) is called a Dirichlet form if for all u ∈ D(E) it follows that u⁺ ∧ 1 ∈ D(E) and

E(u + (u⁺ ∧ 1), u − (u⁺ ∧ 1)) ≥ 0,
E(u − (u⁺ ∧ 1), u + (u⁺ ∧ 1)) ≥ 0.

Here u⁺ = u ∨ 0 denotes the positive part of the function u.
The following theorem provides criteria for the existence and uniqueness of the solution u of (1.6).

Theorem 1.3.2. Assume the bilinear form E(·, ·) satisfies the following properties: there exist constants C_1, C_2 > 0 and C_3 ≥ 0 such that for all u, v ∈ V there holds

|E(u, v)| ≤ C_1 ‖u‖_V ‖v‖_V,   (1.7)
E(u, u) ≥ C_2 ‖u‖²_V − C_3 ‖u‖²_H.   (1.8)
Then, the abstract parabolic problem (1.6) admits a unique solution.
Proof. See, e.g., [40, Theorem 4.1].
Remark 1.3.3. It is always possible to modify (1.8) such that C_3 = 0 by the substitution ũ = e^{−C_3 t} u. Therefore, we assume throughout that E is coercive, i.e., C_3 = 0.

For the discretization we use the method of lines, where (1.6) is first discretized only in space to obtain a system of coupled ordinary differential equations (ODEs). Second, a time stepping scheme is applied to solve the ODEs.
1.3.2 Space discretization
Let {V_h} be a one-parameter family of subspaces V_h ⊂ V with finite dimension N_h = dim V_h < ∞. For each fixed t ∈ (0, T) we approximate the solution u(t, x) of (1.6) by a function u_h(t) ∈ V_h. Furthermore, let u_{h,0} ∈ V_h be an approximation of u_0. Then, the semidiscrete form of (1.6) is the initial value problem:

Find u_h ∈ C¹([0, T]; V_h) such that

⟨∂_t u_h, v_h⟩_H + E(u_h, v_h) = ⟨f, v_h⟩_{V^∗,V},  ∀v_h ∈ V_h, t ∈ (0, T),   (1.9)
u_h(0) = u_{h,0},

for the approximate solution function u_h(t) : [0, T] → V_h. Let V_h be generated by a finite element basis Φ_h := {φ_{h,k} : k ∈ Δ_h} with index set Δ_h = {1, ..., N_h}. Efficient computation depends on the choice of the basis functions φ_{h,k}. If the operator A is local, we can use, e.g., the so-called hat functions as explained in Examples 1.3.4 and 1.3.5. For non-local operators, wavelets, which are explained in Chapter 4, provide several advantages.
Example 1.3.4. Let D = [0, 1] and consider V = H¹([0, 1]). The interval is partitioned into a uniform mesh T_h with mesh width h, given by the equidistant nodes x_{h,k} := kh with k = 0, ..., N_h + 1, where N_h = 1/h − 1. We define V_h as the space of piecewise polynomials of degree p − 1 ∈ N on the mesh T_h which vanish at the endpoints. For piecewise linear continuous functions, p = 2, V_h is generated by the so-called hat functions φ_{h,k}(x) = max(1 − |x − x_{h,k}|/h, 0), k = 1, ..., N_h.
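A minimal sketch of this hat basis (the mesh width h = 1/4 is an illustrative assumption), checking the nodal interpolation property φ_{h,k}(x_{h,j}) = δ_{kj}:

```python
import numpy as np

def hat(x, k, h):
    """Hat function centered at node x_{h,k} = k*h on a uniform mesh."""
    return np.maximum(1.0 - np.abs(x - k * h) / h, 0.0)

h = 0.25
Nh = int(round(1.0 / h)) - 1       # N_h = 1/h - 1 = 3 interior nodes

# Nodal interpolation property: phi_{h,k}(x_{h,j}) = delta_{kj}.
nodes = np.array([j * h for j in range(1, Nh + 1)])
for k in range(1, Nh + 1):
    assert np.allclose(hat(nodes, k, h), np.eye(Nh)[k - 1])
```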
Example 1.3.5. Let D = [0, 1]^d and consider V = H¹([0, 1]^d). Denote by V_h the approximation space of H¹([0, 1]) as in Example 1.3.4. We define the subspace V_h ⊂ V with mesh width h as the tensor product of the one-dimensional spaces

V_h := ⊗_{1≤i≤d} V_h = V_h ⊗ ··· ⊗ V_h.

The multidimensional hat functions φ_{h,k} = φ_{h,k_1} ··· φ_{h,k_d}, k_i = 1, ..., N_h^{1/d}, i = 1, ..., d, are a basis of the space V_h, where N_h = (1/h − 1)^d.
1.3.3 Matrix formulation
We write u_h ∈ V_h in terms of the basis functions of Φ_h, u_h(t, x) = Σ_{k∈Δ_h} u_{h,k}(t) φ_{h,k}(x), and obtain the matrix form of the semidiscretization (1.9):

Find the coefficient vector u_h = (u_{h,k})_{k∈Δ_h} ∈ C¹([0, T]; R^{N_h}) such that

M (d/dt) u_h(t) + A u_h(t) = f(t),  t ∈ (0, T),   (1.10)
u_h(0) = u_{h,0},

where u_{h,0} denotes the coefficient vector of u_{h,0}, f(t) = (f_{k'}(t)) has the entries f_{k'}(t) = ⟨f, φ_{h,k'}⟩, and M = (M_{k',k}), A = (A_{k',k}) are the mass and stiffness matrices with respect to Φ_h,

M_{k',k} = ⟨φ_{h,k}, φ_{h,k'}⟩,  A_{k',k} = E(φ_{h,k}, φ_{h,k'}),   (1.11)

with k, k' ∈ Δ_h.
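For the hat-function basis of Example 1.3.4 the matrices (1.11) can be written down in closed form when E(u, v) = ∫ u'v' dx (the Laplacian, taken here as a simple local stand-in for the pricing operator; the mesh width is an illustrative assumption). A sketch of the assembly:

```python
import numpy as np

h = 0.25
Nh = int(round(1.0 / h)) - 1       # 3 interior hat functions

# Tridiagonal mass and stiffness matrices for piecewise linear hats on a
# uniform mesh: M_{kk} = 2h/3, M_{k,k±1} = h/6; A_{kk} = 2/h, A_{k,k±1} = -1/h.
M = np.zeros((Nh, Nh))
A = np.zeros((Nh, Nh))
for k in range(Nh):
    M[k, k] = 2.0 * h / 3.0        # ∫ phi_k^2 dx
    A[k, k] = 2.0 / h              # ∫ (phi_k')^2 dx
    if k + 1 < Nh:
        M[k, k + 1] = M[k + 1, k] = h / 6.0    # ∫ phi_k phi_{k+1} dx
        A[k, k + 1] = A[k + 1, k] = -1.0 / h   # ∫ phi_k' phi_{k+1}' dx

assert np.allclose(M, M.T) and np.allclose(A, A.T)
# Interior row sums of M recover ∫ phi_k dx = h (partition of unity).
assert np.allclose(M.sum(axis=1)[1:-1], h)
```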
1.3.4 Time discretization
There are various time-stepping methods to approximate the solution of the ODEs (1.10). Here, we only use the θ-scheme. One can also apply finite elements for the time discretization as in [32, 60], where an hp-discontinuous Galerkin method is used. It yields exponential convergence rates instead of only the algebraic ones of the θ-scheme.

We consider a uniform grid with time step Δt = T/M and time points t_m = m Δt, m = 0, ..., M, M ∈ N. Applying the θ-scheme we obtain the fully discrete form:

Find u_h^m ∈ V_h such that for m = 0, ..., M − 1

⟨Δt^{−1}(u_h^{m+1} − u_h^m), v_h⟩_H + E(u_h^{m+θ}, v_h) = ⟨f^{m+θ}, v_h⟩_{V^∗,V},  ∀v_h ∈ V_h,   (1.12)
u_h^0 = u_{h,0},

where u_h^{m+θ} = θ u_h(t_{m+1}) + (1 − θ) u_h(t_m) and f^{m+θ} = θ f(t_{m+1}) + (1 − θ) f(t_m).
We can again write (1.12) in matrix notation:

Find u_h^{m+1} ∈ R^{N_h} such that for m = 0, ..., M − 1

Δt^{−1} M (u_h^{m+1} − u_h^m) + θ A u_h^{m+1} + (1 − θ) A u_h^m = f^{m+θ},   (1.13)
u_h^0 = u_{h,0}.

For θ = 1/2, the scheme in (1.12), (1.13) coincides with the popular Crank-Nicolson scheme.
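A minimal sketch of the θ-scheme (1.13) in NumPy, using the 1D mass and stiffness matrices of piecewise linear hats for the heat equation as a stand-in for the pricing operator (f = 0; mesh width, horizon and step count are illustrative assumptions):

```python
import numpy as np

h, T, Mt, theta = 0.25, 1.0, 20, 0.5      # theta = 1/2: Crank-Nicolson
Nh = int(round(1.0 / h)) - 1
dt = T / Mt

# Tridiagonal 1D mass and stiffness matrices for hat functions.
M = (np.diag([2*h/3.0]*Nh) + np.diag([h/6.0]*(Nh-1), 1)
     + np.diag([h/6.0]*(Nh-1), -1))
A = (np.diag([2.0/h]*Nh) + np.diag([-1.0/h]*(Nh-1), 1)
     + np.diag([-1.0/h]*(Nh-1), -1))

u = np.ones(Nh)                           # coefficient vector u_{h,0}
lhs = M / dt + theta * A                  # factor once, reuse every step
for m in range(Mt):
    rhs = (M / dt - (1.0 - theta) * A) @ u
    u = np.linalg.solve(lhs, rhs)

# The homogeneous heat equation dissipates energy, so the norm decreases.
assert np.linalg.norm(u) < np.linalg.norm(np.ones(Nh))
```

In practice one would factor `lhs` once (e.g. with a sparse LU) since the same system matrix is solved at every time step.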
1.3.5 Convergence rates
We want to determine the accuracy of the approximations (1.9) and (1.12). Toward this end, let D ⊂ R^d be bounded with Lipschitz boundary ∂D and assume V = H̃^r(D), r ≥ 0, and H = L²(D). Let T_h be a regular conforming partition of D into simplices with uniform mesh width h, and let V_h ⊂ V have approximation order p, i.e., contain all polynomials of degree p − 1. It is well known in classical finite element approximation theory (see, e.g., [14]) that for u ∈ H̃^s(D) with r ≤ s ≤ p

inf_{u_h∈V_h} ‖u − u_h‖_V ≲ h^{s−r} ‖u‖_{H̃^s(D)}.   (1.14)
We have the following error estimate for the semidiscrete problem.
Theorem 1.3.6. Let u, u_h be the solutions of (1.6), (1.9) with V, H and V_h as defined above. Assume u ∈ C¹([0, T]; H̃^s(D)). Then, for r ≤ s ≤ p,

‖u − u_h‖_{L²([0,T];V)} ≤ C(u) ( ‖u_0 − u_{h,0}‖_H + h^{s−r} ),

with a constant C(u) > 0 depending on u.
Proof. See [27, Theorem 6.14].
Similarly, for the fully discrete problem, where the Crank-Nicolson scheme is used for the time discretization, we get the following result.
Theorem 1.3.7. Let u, u_h^m be the solutions of (1.6), (1.12) with V, H and V_h as defined above. Furthermore, let θ = 1/2 and assume u ∈ C¹([0, T]; H̃^s(D)) ∩ C³([0, T]; V^∗) and ‖u_0 − u_{h,0}‖_H ≲ h^s. Then, for r ≤ s ≤ p,

‖u^M − u_h^M‖²_{L²(D)} + Δt Σ_{m=0}^{M−1} ‖u^{m+1/2} − u_h^{m+1/2}‖²_V ≤ C(u) ( Δt⁴ + h^{2(s−r)} ),

with a constant C(u) > 0 depending on higher space and time derivatives of u.
Proof. See [33, Theorem 3.3].
Remark 1.3.8. For rough initial data u_0 ∉ H^s(D) we only have u(t) ∈ H^s(D) for t > 0. To compensate for the time singularity at t = 0 we need nonuniform time steps as considered, e.g., in [44, 59]. For the Euler or Crank-Nicolson scheme, algebraically graded meshes again yield optimal convergence rates [59, Remark 3.11].
As seen in the two previous theorems, we obtain for piecewise linear finite elements, p = 2, the L²-convergence rate O(h²) in space, provided the solution u is smooth enough. Here, we expressed the convergence rate in terms of the mesh width h. Writing it in terms of the degrees of freedom N_h = dim V_h with h = O(N_h^{−1/d}), the convergence rate decreases with the dimension, which is called the "curse of dimension". In Chapter 4 we show that using sparse tensor product spaces we can again obtain the optimal convergence rate up to logarithmic terms.
2 Multidimensional Lévy models
In this chapter we give several examples of multidimensional Lévy models, including Lévy copula models, and show important properties of the Lévy measure. Lévy copulas were first introduced by Tankov [65] and were further developed by Kallsen and Tankov [37]. Since the law of a Lévy process X is time-homogeneous, it is completely characterized by its characteristic triplet (Q, ν, γ). The drift γ has no effect on the dependence structure between the components of X. The dependence structure of the Brownian motion part of X is given by its covariance matrix Q. For purposes of financial modeling it remains to specify a parametric dependence structure of the purely discontinuous part of X, which can be done by using Lévy copulas.
2.1 Lévy processes
We start by recalling essential definitions and properties of Lévy processes. For more details we refer to the monographs [16, 52, 57]. Since Lévy processes can have discontinuous sample paths, we need the class of càdlàg functions.
Definition 2.1.1. A function f : [0, T] → R^d is said to be càdlàg if it is right-continuous with left limits, i.e., for each t ∈ [0, T] the limits

f(t−) = lim_{s↑t} f(s),  f(t+) = lim_{s↓t} f(s)

exist and f(t) = f(t+).
Let (Ω, F, P) be a complete probability space with filtration F = {F_t : 0 ≤ t ≤ ∞}. We assume (Ω, F, F, P) satisfies the usual hypotheses (see [52]). A stochastic process X = {X_t : t ≥ 0} is said to be adapted if X_t ∈ F_t, i.e., is F_t-measurable, for each t.
Definition 2.1.2. An adapted, càdlàg stochastic process X = {X_t : t ≥ 0} on (Ω, F, P) with values in R^d such that X_0 = 0 is called a Lévy process if it has the following properties:

1. Independent increments: X_t − X_s is independent of F_s, 0 ≤ s < t < ∞.

2. Stationary increments: X_t − X_s has the same distribution as X_{t−s}, 0 ≤ s < t < ∞.

3. Stochastic continuity: lim_{t→s} X_t = X_s, where the limit is taken in probability.
In what follows we denote by X^i, i = 1, ..., d, the coordinate projections of X = (X^1, ..., X^d)^⊤ ∈ R^d. We can associate to X = {X_t : t ∈ [0, T]} a random measure J_X on [0, T] × R^d,

J_X(ω, ·) = Σ_{t∈[0,T], ΔX_t≠0} δ_{(t, ΔX_t)},

which is called the jump measure. For any measurable subset B ⊂ R^d, J_X([0, t] × B) then counts the number of jumps of X occurring between 0 and t whose amplitude belongs to B. The intensity of J_X is given by the Lévy measure.
Definition 2.1.3. Let X be a Lévy process with state space R^d. The measure ν on R^d defined by

ν(B) = E( #{t ∈ [0, 1] : ΔX_t ≠ 0, ΔX_t ∈ B} ),  B ∈ B(R^d),

is called the Lévy measure of X. ν(B) is the expected number, per unit time, of jumps whose size belongs to B.
The Lévy measure satisfies ∫_{R^d} 1 ∧ |z|² ν(dz) < ∞. Using the Lévy–Itô decomposition we see that every Lévy process is uniquely defined by a drift vector γ, a positive semidefinite matrix Q ∈ R_sym^{d×d} and the Lévy measure ν. The triplet (Q, ν, γ) is the characteristic triplet of the process X.
Theorem 2.1.4 (Lévy–Itô decomposition). Let X be a Lévy process with state space R^d and ν its Lévy measure. Then, there exist a vector γ and a d-dimensional Brownian motion W with covariance matrix Q such that

X_t = γt + W_t + ∫_0^t ∫_{|x|≥1} x J_X(ds, dx) + lim_{ε↓0} ∫_0^t ∫_{ε≤|x|≤1} x ( J_X(ds, dx) − ν(dx) ds )

    = γt + W_t + Σ_{0≤s≤t} ΔX_s 1_{|ΔX_s|≥1} + lim_{ε↓0} ∫_0^t ∫_{ε≤|x|≤1} x ( J_X(ds, dx) − ν(dx) ds ),   (2.1)

where J_X is the jump measure of X.
Proof. See [57, Theorem 19.2].
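The decomposition (2.1) can be illustrated by simulating a finite-activity Lévy path in d = 1: drift plus Brownian motion plus compound Poisson jumps, where the compensation of small jumps is not needed because the jump intensity is finite. All parameter values below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Drift gamma, Brownian variance Q, jump intensity lam, jump std delta.
rng = np.random.default_rng(0)
gamma, Q, lam, delta = 0.1, 0.04, 2.0, 0.5
T, n = 1.0, 1000
dt = T / n

dW = rng.normal(0.0, np.sqrt(Q * dt), n)                 # Brownian increments
dN = rng.poisson(lam * dt, n)                            # jumps per time step
jumps = np.array([rng.normal(0.0, delta, int(k)).sum() for k in dN])

X = np.cumsum(gamma * dt + dW + jumps)                   # Levy path on the grid
Xc = np.cumsum(gamma * dt + dW)                          # continuous part only

assert np.isfinite(X).all()
# Removing the continuous part leaves exactly the accumulated jumps.
assert np.allclose(X - Xc, np.cumsum(jumps))
```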
The characteristic triplet can also be derived from the Lévy–Khinchin representation.
Theorem 2.1.5 (Lévy–Khinchin representation). Let X be a Lévy process with state space R^d and characteristic triplet (Q, ν, γ). Then for t ≥ 0,

E( e^{i⟨ξ,X_t⟩} ) = e^{−tψ(ξ)},  ξ ∈ R^d,

with

ψ(ξ) = −i⟨γ, ξ⟩ + (1/2)⟨ξ, Qξ⟩ + ∫_{R^d} ( 1 − e^{i⟨ξ,z⟩} + i⟨ξ, z⟩ 1_{|z|≤1} ) ν(dz).   (2.2)
Proof. See [57, Theorem 8.1] or [34, Theorem 3.7.7].
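A small numerical check of (2.2), under illustrative assumptions: d = 1, γ = 0, Q = 0, and a compound Poisson process with symmetric Gaussian jump sizes, so the truncation term integrates to zero and ψ(ξ) = ∫ (1 − cos(ξz)) ν(dz) = λ(1 − e^{−δ²ξ²/2}) in closed form:

```python
import numpy as np

lam, delta = 2.0, 0.5
# Levy density nu(z) = lam * Gaussian(0, delta^2), tabulated on a fine grid.
z = np.linspace(-10.0, 10.0, 200001)
dz = z[1] - z[0]
nu = lam * np.exp(-z**2 / (2.0 * delta**2)) / (delta * np.sqrt(2.0 * np.pi))

for xi in (0.5, 1.0, 3.0):
    # By symmetry the imaginary parts cancel, leaving the cosine integral.
    psi_quad = np.sum((1.0 - np.cos(xi * z)) * nu) * dz
    psi_exact = lam * (1.0 - np.exp(-delta**2 * xi**2 / 2.0))
    assert abs(psi_quad - psi_exact) < 1e-5
```

The agreement confirms the representation for this finite-activity example; the composite quadratures of Chapter 5 are designed for the singular infinite-activity densities where such a uniform grid would fail.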
Remark 2.1.6. The characteristic exponent ψ(ξ) of X is a continuous, negative definite function. Based on (2.2), it is well known (see, e.g., [34]) that the infinitesimal generator A in (0.2) corresponding to the Lévy process X is a pseudodifferential operator acting on u ∈ C_0^∞(R^d) by the oscillatory integral

(Au)(x) = (2π)^{−d} ∫_{R^d} e^{i⟨ξ,x⟩} ψ(ξ) û(ξ) dξ,   (2.3)

where û(ξ) = (2π)^{−d} ∫ e^{−i⟨ξ,z⟩} u(z) dz denotes the Fourier transform of u.
Note that in (2.2) the integral with respect to the Levy measure exists, since the integrand is bounded outside of any neighborhood of $0$ and
\[ 1 - e^{i\langle \xi, z\rangle} + i\langle \xi, z\rangle\, \mathbb{1}_{\{|z| \leq 1\}} = O(|z|^2) \quad \text{as } |z| \to 0. \]
There are, however, many other ways to obtain an integrable integrand. We could, for example, replace $\mathbb{1}_{\{|z| \leq 1\}}$ by any bounded measurable function $f : \mathbb{R}^d \to \mathbb{R}$ satisfying $f(z) = 1 + O(|z|)$ as $|z| \to 0$ and $f(z) = O(1/|z|)$ as $|z| \to \infty$. Different choices of $f$ do not affect $Q$ and $\nu$, but $\gamma$ depends on the choice of the truncation function. If the Levy measure satisfies $\int_{|z| \leq 1} |z|\, \nu(dz) < \infty$, we can use the zero function as $f$ and get
\[ \psi(\xi) = -i\langle \gamma_0, \xi\rangle + \frac{1}{2}\langle \xi, Q\xi\rangle + \int_{\mathbb{R}^d} \big(1 - e^{i\langle \xi, z\rangle}\big)\, \nu(dz), \]
with $\gamma_0 \in \mathbb{R}^d$. If the Levy measure satisfies $\int_{|z| > 1} |z|\, \nu(dz) < \infty$, then, letting $f$ be the constant function $1$, we obtain
\[ \psi(\xi) = -i\langle \gamma_c, \xi\rangle + \frac{1}{2}\langle \xi, Q\xi\rangle + \int_{\mathbb{R}^d} \big(1 - e^{i\langle \xi, z\rangle} + i\langle \xi, z\rangle\big)\, \nu(dz), \tag{2.4} \]
with triplet $(Q, \nu, \gamma_c)$, where $\gamma_c$ is called the center of $X$ since $\mathbb{E}(X_t) = \gamma_c t$. We use the representation (2.4) instead of (2.2) throughout this work, but omit the subscript $c$ for simplicity. The requirement $\int_{|z| \leq 1} |z|\, \nu(dz) < \infty$ singles out a special class of Levy processes.
Proposition 2.1.7. A Levy process is of finite variation if and only if its characteristic triplet $(Q, \nu, \gamma)$ satisfies
\[ Q = 0 \quad \text{and} \quad \int_{|z| < 1} |z|\, \nu(dz) < \infty. \]
Proof. See [16, Proposition 3.9].
Coordinate projections of Levy processes are again Levy processes.
Proposition 2.1.8. Let $X = (X^1, \dots, X^d)^\top$ be a Levy process with state space $\mathbb{R}^d$ and characteristic triplet $(Q, \nu, \gamma)$. Assume $\int_{|z| > 1} |z|\, \nu(dz) < \infty$. Then the marginal processes $X^j$, $j = 1, \dots, d$, are again Levy processes with characteristic triplets $(Q_{jj}, \nu_j, \gamma_j)$, where the marginal Levy measures are defined by
\[ \nu_j(B) := \nu\big(\{x \in \mathbb{R}^d : x_j \in B\} \setminus \{0\}\big), \quad \forall B \in \mathcal{B}(\mathbb{R}), \quad j = 1, \dots, d. \]
Proof. See [57, Proposition 11.10].
No-arbitrage considerations require the exponentials of Levy processes employed in mathematical finance to be martingales. The following result gives sufficient conditions on the characteristic triplet to ensure this.
Lemma 2.1.9. Let $X$ be a Levy process with state space $\mathbb{R}^d$ and characteristic triplet $(Q, \nu, \gamma)$. Assume $\int_{|z| > 1} |z|\, \nu(dz) < \infty$ and $\int_{|z| > 1} e^{z_j}\, \nu_j(dz) < \infty$, $j = 1, \dots, d$. Then $e^{X^j}$, $j = 1, \dots, d$, are martingales with respect to the filtration $\mathcal{F}$ of $X$ if and only if
\[ \frac{Q_{jj}}{2} + \gamma_j + \int_{\mathbb{R}} \big(e^{z_j} - 1 - z_j\big)\, \nu_j(dz) = 0, \quad j = 1, \dots, d. \]
Proof. Using the independent and stationary increments property we obtain, for $0 \leq t < s$,
\[ \mathbb{E}\big(e^{X^j_s} \,\big|\, \mathcal{F}_t\big) = \mathbb{E}\big(e^{X^j_t + X^j_s - X^j_t} \,\big|\, \mathcal{F}_t\big) = e^{X^j_t}\, \mathbb{E}\big(e^{X^j_s - X^j_t}\big) = e^{X^j_t}\, \mathbb{E}\big(e^{X^j_{s-t}}\big) = e^{X^j_t}\, e^{(t-s)\psi(-ie_j)}. \]
Therefore, setting $\psi(-ie_j) = 0$, $j = 1, \dots, d$, and using the Levy-Khinchin formula (2.4) yields
\[ \frac{Q_{jj}}{2} + \gamma_j + \int_{\mathbb{R}^d} \big(e^{z_j} - 1 - z_j\big)\, \nu(dz) = 0, \quad j = 1, \dots, d. \]
The result follows with the definition of the marginal Levy measure $\nu_j$ given in Proposition 2.1.8.
Remark 2.1.10. Lemma 2.1.9 also holds for general semimartingales as shown in [51].
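As a numerical illustration of Lemma 2.1.9 (this sketch is not part of the thesis; all parameter values are illustrative): for a Merton-type jump part with Levy measure $\nu(dz) = \lambda\, \mathcal{N}(m, s^2)(z)\, dz$, the integral $\int_{\mathbb{R}} (e^z - 1 - z)\, \nu(dz)$ has the closed form $\lambda(e^{m + s^2/2} - 1 - m)$, so the drift $\gamma_j$ solving the martingale condition can be computed by quadrature and cross-checked.

```python
import numpy as np

def jump_integral(lam, m, s):
    """Quadrature approximation of int_R (e^z - 1 - z) nu(dz) for the
    Merton-type Levy measure nu(dz) = lam * N(m, s^2)(z) dz."""
    z = np.linspace(-10.0, 10.0, 400001)
    dz = z[1] - z[0]
    dens = lam * np.exp(-(z - m) ** 2 / (2.0 * s ** 2)) / (s * np.sqrt(2.0 * np.pi))
    return np.sum((np.exp(z) - 1.0 - z) * dens) * dz

def martingale_drift(Qjj, lam, m, s):
    """gamma_j solving Q_jj/2 + gamma_j + int (e^z - 1 - z) nu_j(dz) = 0."""
    return -0.5 * Qjj - jump_integral(lam, m, s)

lam, m, s = 0.5, -0.1, 0.2
closed_form = lam * (np.exp(m + 0.5 * s ** 2) - 1.0 - m)  # exact value of the integral
```

The quadrature value agrees with the closed form to high accuracy, and the resulting drift makes $e^{X^j}$ a martingale in this model.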
2.2 Levy copulas
We first give some definitions. For a function $F : S \to \mathbb{R}$, $S \subset \mathbb{R}^d$, the $F$-volume of $(a, b]$, $a, b \in S$, is defined by
\[ V_F\big((a, b]\big) := \sum_{u \in \{a_1, b_1\} \times \dots \times \{a_d, b_d\}} (-1)^{N(u)}\, F(u), \]
where $N(u) = \#\{k : u_k = a_k\}$.
Definition 2.2.1. A function $F : S \to \mathbb{R}$, $S \subset \mathbb{R}^d$, is called $d$-increasing if $V_F((a, b]) \geq 0$ for all $a, b \in S$ with $a \leq b$ and $(a, b] \subset S$.
Examples of $d$-increasing functions are distribution functions of random vectors $X \in \mathbb{R}^d$, $F(x_1, \dots, x_d) = \mathbb{P}[X^1 \leq x_1, \dots, X^d \leq x_d]$, or, more generally,
\[ F(x_1, \dots, x_d) = \mu\big((-\infty, x_1] \times \dots \times (-\infty, x_d]\big), \]
where $\mu$ is a finite measure on $\mathcal{B}(\mathbb{R}^d)$. Such an $F$ is clearly $d$-increasing, since the $F$-volume is just $V_F((a, b]) = \mu((a, b])$ for every $a \leq b$. For modeling dependence structures, margins play an important role.
Definition 2.2.2. Let $F : \overline{\mathbb{R}}^d \to \mathbb{R}$ be a $d$-increasing function which satisfies $F(u) = 0$ if $u_i = 0$ for at least one $i \in \{1, \dots, d\}$. For any nonempty index set $I \subset \{1, \dots, d\}$ the $I$-margin of $F$ is the function $F^I : \overline{\mathbb{R}}^{|I|} \to \mathbb{R}$,
\[ F^I(u_I) := \lim_{a \to \infty} \sum_{(u_j)_{j \in I^c} \in \{-a, \infty\}^{|I^c|}} F(u_1, \dots, u_d) \prod_{j \in I^c} \operatorname{sgn} u_j. \]
Since the Levy measure is a measure on B(Rd), it is possible to define a suitable notionof a copula. However, one has to take into account that the Levy measure is possiblyinfinite at the origin.
Definition 2.2.3. A function $F : \overline{\mathbb{R}}^d \to \overline{\mathbb{R}}$ is called a Levy copula if

1. $F(u_1, \dots, u_d) \neq \infty$ for $(u_1, \dots, u_d) \neq (\infty, \dots, \infty)$,
2. $F(u_1, \dots, u_d) = 0$ if $u_i = 0$ for at least one $i \in \{1, \dots, d\}$,
3. $F$ is $d$-increasing,
4. $F^{\{i\}}(u) = u$ for any $i \in \{1, \dots, d\}$, $u \in \mathbb{R}$.
Levy copulas have properties analogous to those of ordinary copulas (see, e.g., [49] for an introduction to ordinary copulas).
Lemma 2.2.4. Let $F$ be a Levy copula. Then
\[ 0 \leq \prod_{j=1}^d \operatorname{sgn}(u_j)\, F(u_1, \dots, u_d) \leq \min\{|u_1|, \dots, |u_d|\} \quad \forall u \in \mathbb{R}^d, \]
and $\prod_{j=1}^d \operatorname{sgn}(u_j)\, F(u)$ is nondecreasing in the absolute value of each argument $|u_j|$. Furthermore, Levy copulas are Lipschitz continuous, i.e.,
\[ |F(v_1, \dots, v_d) - F(u_1, \dots, u_d)| \leq \sum_{i=1}^d |v_i - u_i| \quad \forall u, v \in \mathbb{R}^d. \]
Proof. Let $u \in \mathbb{R}^d$ with $u_1 \geq 0$, and let $0 \leq a_1 \leq u_1$. Set $b_1 = u_1$, and for $2 \leq j \leq d$ set $a_j = 0$, $b_j = u_j$ if $u_j \geq 0$, and $a_j = u_j$, $b_j = 0$ otherwise. Since $F$ is $d$-increasing and grounded,
\[ V_F\big((a, b]\big) = \sum_{v \in \{a_1, b_1\} \times \dots \times \{a_d, b_d\}} (-1)^{N(v)}\, F(v) = \prod_{j=2}^d \operatorname{sgn}(u_j)\, F(u_1, u_2, \dots, u_d) - \prod_{j=2}^d \operatorname{sgn}(u_j)\, F(a_1, u_2, \dots, u_d) \geq 0. \]
The case $u_1 < 0$ is treated similarly. Choosing $a_1 = 0$ gives the lower bound.
Let $I = \{i\} \subset \{1, \dots, d\}$. Then
\[
\prod_{j=1}^d \operatorname{sgn}(u_j)\, F(u_1, \dots, u_i, \dots, u_d)
\leq \operatorname{sgn}(u_i) \lim_{n \to \infty} \prod_{j \in I^c} \operatorname{sgn}(u_j)\, F(n \operatorname{sgn} u_1, \dots, u_i, \dots, n \operatorname{sgn} u_d)
\]
\[
\leq \operatorname{sgn}(u_i) \lim_{n \to \infty} \sum_{(v_j)_{j \in I^c} \in \{-n, \infty\}^{|I^c|}} \prod_{j \in I^c} \operatorname{sgn}(v_j)\, F(v_1, \dots, u_i, \dots, v_d)
= \operatorname{sgn}(u_i)\, F^{\{i\}}(u_i) = |u_i|.
\]
Since $i \in \{1, \dots, d\}$ is arbitrary, we obtain the upper bound. Lipschitz continuity is shown in [37, Lemma 3.2].
We also need tail integrals of Levy processes.
Definition 2.2.5. Let $X$ be a Levy process with state space $\mathbb{R}^d$ and Levy measure $\nu$. The tail integral of $X$ is the function $U : (\mathbb{R} \setminus \{0\})^d \to \mathbb{R}$ given by
\[ U(x_1, \dots, x_d) = \prod_{j=1}^d \operatorname{sgn}(x_j)\, \nu\Big(\prod_{j=1}^d I(x_j)\Big), \]
where
\[ I(x) = \begin{cases} (x, \infty) & \text{for } x \geq 0, \\ (-\infty, x] & \text{for } x < 0. \end{cases} \]
Furthermore, for nonempty $I \subset \{1, \dots, d\}$ the $I$-marginal tail integral $U^I$ of $X$ is the tail integral of the process $X^I := (X^i)_{i \in I}$.
The next result, [37, Theorem 3.6], shows that essentially any Levy process $X = (X^1, \dots, X^d)^\top$ can be built from univariate marginal processes $X^j$, $j = 1, \dots, d$, and a Levy copula. It can be viewed as a version of Sklar's theorem for Levy copulas.
Theorem 2.2.6 (Sklar's theorem for Levy copulas). For any Levy process $X$ with state space $\mathbb{R}^d$ there exists a Levy copula $F$ such that the tail integrals of $X$ satisfy
\[ U^I(x_I) = F^I\big((U_i(x_i))_{i \in I}\big), \tag{2.5} \]
for any nonempty $I \subset \{1, \dots, d\}$ and any $x_I \in (\mathbb{R} \setminus \{0\})^{|I|}$. The Levy copula $F$ is unique on $\prod_{i=1}^d \operatorname{Ran} U_i$.

Conversely, let $F$ be a $d$-dimensional Levy copula and $U_i$, $i = 1, \dots, d$, tail integrals of univariate Levy processes. Then there exists a $d$-dimensional Levy process $X$ such that its components have tail integrals $U_i$ and its marginal tail integrals satisfy (2.5). The Levy measure $\nu$ of $X$ is uniquely determined by $F$ and $U_i$, $i = 1, \dots, d$.
Using partial integration we can write the multidimensional Levy measure in terms ofthe Levy copula.
Lemma 2.2.7. Let $f \in C^\infty(\mathbb{R}^d)$ be bounded and vanish on a neighborhood of the origin. Furthermore, let $X$ be a $d$-dimensional Levy process with Levy measure $\nu$, Levy copula $F$ and marginal Levy measures $\nu_j$, $j = 1, \dots, d$. Then
\[
\int_{\mathbb{R}^d} f(z)\, \nu(dz) = \sum_{j=1}^d \int_{\mathbb{R}} f(0 + z_j)\, \nu_j(dz_j) + \sum_{j=2}^d \sum_{\substack{|I| = j \\ I_1 < \dots < I_j}} \int_{\mathbb{R}^j} \partial_I f(0 + z_I)\, F^I\big((U_k(z_k))_{k \in I}\big)\, dz_I, \tag{2.6}
\]
where $0 + z_I$ denotes the vector whose components with index in $I$ equal those of $z_I$ and whose remaining components vanish.
Proof. We proceed by induction with respect to the dimension d:
For $d = 1$, integration by parts yields
\[
\int_0^\infty f(z)\, \nu(dz) = -\lim_{b \to \infty} f(b)\, \nu(I(b)) + \lim_{a \to 0+} f(a)\, \nu(I(a)) + \int_0^\infty \partial_1 f(z)\, \nu(I(z))\, dz,
\]
\[
\int_{-\infty}^0 f(z)\, \nu(dz) = \lim_{a \to 0-} f(a)\, \nu(I(a)) - \lim_{b \to -\infty} f(b)\, \nu(I(b)) - \int_{-\infty}^0 \partial_1 f(z)\, \nu(I(z))\, dz,
\]
and, since $f$ is bounded,
\[
\int_{\mathbb{R}} f(z)\, \nu(dz) = f(0) \lim_{a \to 0+} \big(\nu(I(a)) + \nu(I(-a))\big) + \int_{\mathbb{R}} \partial_1 f(z)\, \operatorname{sgn}(z)\, \nu(I(z))\, dz.
\]
Abusing notation, we write
\[ \nu(\mathbb{R}) := \lim_{a \to 0+} \big(\nu(I(a)) + \nu(I(-a))\big). \]
With $f$ vanishing on a neighborhood of $0$ we therefore find $f(0)\, \nu(\mathbb{R}) = 0$.
For the multidimensional case, i.e., for $d > 1$, we use the Levy measure of $X^I$, which is given by
\[ \nu^I(B) := \nu\big(\{x \in \mathbb{R}^d : (x_i)_{i \in I} \in B\} \setminus \{0\}\big), \quad \forall B \in \mathcal{B}\big(\mathbb{R}^{|I|}\big). \]
We show by induction with respect to the dimension $d$ that
\[
\int_{\mathbb{R}^d} f(z)\, \nu(dz) = f(0, \dots, 0)\, \nu(\mathbb{R}, \dots, \mathbb{R}) + \sum_{i=1}^d \int_{\mathbb{R}} \partial_i f(0, \dots, z_i, \dots, 0)\, \operatorname{sgn}(z_i)\, \nu_i(I(z_i))\, dz_i
+ \sum_{i=2}^d \sum_{\substack{|I| = i \\ I_1 < \dots < I_i}} \int_{\mathbb{R}^i} \partial_I f(0 + z_I) \prod_{j \in I} \operatorname{sgn}(z_j)\, \nu^I\Big(\prod_{j \in I} I(z_j)\Big)\, dz_I.
\]
With $f(0, \dots, 0)\, \nu(\mathbb{R}, \dots, \mathbb{R}) = 0$, the definition of the tail integrals and Theorem 2.2.6 then give the required result.
For the induction step $d - 1 \to d$, integration by parts and the induction hypothesis yield
\[
\int_{\mathbb{R}^d} f(z)\, \nu(dz) = \int_{\mathbb{R}^{d-1}} \int_{\mathbb{R}} f(z', z_d)\, \nu(dz', dz_d)
= \int_{\mathbb{R}^{d-1}} f(z', 0)\, \nu(dz', \mathbb{R}) + \int_{\mathbb{R}^{d-1}} \int_{\mathbb{R}} \partial_d f(z', z_d)\, \operatorname{sgn}(z_d)\, \nu\big(dz', I(z_d)\big)\, dz_d
\]
\[
= f(0, \dots, 0)\, \nu(\mathbb{R}, \dots, \mathbb{R})
+ \sum_{i=1}^{d-1} \int_{\mathbb{R}} \partial_i f(0, \dots, z_i, \dots, 0)\, \operatorname{sgn}(z_i)\, \nu_i(I(z_i))\, dz_i
+ \sum_{i=2}^{d-1} \sum_{\substack{|I| = i \\ I_1 < \dots < I_i}} \int_{\mathbb{R}^i} \partial_I f(0 + z_I) \prod_{j \in I} \operatorname{sgn}(z_j)\, \nu^I\Big(\prod_{j \in I} I(z_j)\Big)\, dz_I
\]
\[
+ \int_{\mathbb{R}} \partial_d f(0, \dots, 0, z_d)\, \operatorname{sgn}(z_d)\, \nu\big(\mathbb{R}, \dots, \mathbb{R}, I(z_d)\big)\, dz_d
+ \sum_{i=1}^{d-1} \int_{\mathbb{R}} \int_{\mathbb{R}} \partial_i \partial_d f(0, \dots, z_i, \dots, 0, z_d)\, \operatorname{sgn}(z_i) \operatorname{sgn}(z_d)\, \nu^{\{i, d\}}\big(I(z_i), I(z_d)\big)\, dz_i\, dz_d
\]
\[
+ \sum_{i=2}^{d-1} \sum_{\substack{|I| = i \\ I_1 < \dots < I_i}} \int_{\mathbb{R}^i} \int_{\mathbb{R}} \partial_I \partial_d f(z_{I \cup \{d\}}) \prod_{j \in I \cup \{d\}} \operatorname{sgn}(z_j)\, \nu^{I \cup \{d\}}\Big(\prod_{j \in I \cup \{d\}} I(z_j)\Big)\, dz_I\, dz_d,
\]
which is the claimed result.
Remark 2.2.8. The boundedness assumption on f in Lemma 2.2.7 can be weakened tocertain unbounded f ∈ Cd(Rd), if the Levy measure ν decays sufficiently fast.
Using Lemma 2.2.7 we immediately obtain
Corollary 2.2.9. Let $X = (X^1, \dots, X^d)^\top$ be a $d$-dimensional Levy process with characteristic triplet $(0, \nu, \gamma)$. Then
\[ \operatorname{Cov}(X^i, X^j) = \int_{\mathbb{R}^d} z_i z_j\, \nu(dz) = \int_{\mathbb{R}^2} F^{\{i, j\}}\big(U_i(z_i), U_j(z_j)\big)\, dz_i\, dz_j, \quad \forall i \neq j, \]
where $F$ is the Levy copula from Theorem 2.2.6.
We conclude with examples of Levy copulas.
Example 2.2.10. Examples of Levy copulas are:

1. The independence Levy copula
\[ F(u_1, \dots, u_d) = \sum_{i=1}^d u_i \prod_{j \neq i} \mathbb{1}_{\{\infty\}}(u_j). \tag{2.7} \]
2. The complete dependence Levy copula
\[ F(u_1, \dots, u_d) = \min\{|u_1|, \dots, |u_d|\}\, \mathbb{1}_K(u_1, \dots, u_d) \prod_{j=1}^d \operatorname{sgn} u_j, \tag{2.8} \]
where $K := \{x \in \mathbb{R}^d : \operatorname{sgn}(x_1) = \dots = \operatorname{sgn}(x_d)\}$.
3. The Clayton Levy copula
\[ F(u_1, \dots, u_d) = 2^{2-d} \Big(\sum_{i=1}^d |u_i|^{-\vartheta}\Big)^{-\frac{1}{\vartheta}} \big(\eta\, \mathbb{1}_{\{u_1 \cdots u_d \geq 0\}} - (1 - \eta)\, \mathbb{1}_{\{u_1 \cdots u_d \leq 0\}}\big), \tag{2.9} \]
where $\vartheta > 0$ and $\eta \in [0, 1]$. For $\eta = 1$ and $\vartheta \to 0$, $F$ converges to the independence Levy copula; for $\eta = 1$ and $\vartheta \to \infty$, to the complete dependence Levy copula. In Figure 2.1 the Clayton copula in $d = 2$ for $\vartheta = 0.5, 1.5$ and $\eta = 1$ is plotted. We include the upper bound $\min\{|u_1|, |u_2|\}$ and additionally give the corresponding contour plot.
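The limiting behavior of the Clayton Levy copula and the bounds of Lemma 2.2.4 can be verified numerically. The following sketch (not part of the thesis; parameter values are illustrative) evaluates (2.9):

```python
import numpy as np

def clayton(u, theta, eta=1.0):
    """Clayton Levy copula (2.9); u is a d-vector with nonzero entries."""
    u = np.asarray(u, dtype=float)
    d = u.size
    core = 2.0 ** (2 - d) * np.sum(np.abs(u) ** (-theta)) ** (-1.0 / theta)
    prod = np.prod(u)
    # eta-weighted sign factor from (2.9)
    return core * (eta * (prod >= 0.0) - (1.0 - eta) * (prod <= 0.0))
```

For large $\vartheta$ the value approaches the complete-dependence bound $\min\{|u_1|, |u_2|\}$, and sending one argument to $\infty$ recovers the margin property $F^{\{i\}}(u) = u$.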
An important class of Levy copulas is that of so-called 1-homogeneous copulas.
Definition 2.2.11. A Levy copula is called 1-homogeneous if for any r > 0 there holds
F (ru1, . . . , rud) = rF (u1, . . . , ud),
for all (u1, . . . , ud)> ∈ Rd.
For further details and examples of Levy copulas, we refer to [28, 37].
Figure 2.1: Clayton copula (2.9) in d = 2 for ϑ = 0.5 (top) and ϑ = 1.5 (bottom)
2.3 Levy models
Financial models with jumps fall into two categories: jump-diffusion models have a nonzero Gaussian component and a jump part which is a compound Poisson process with finitely many jumps in every time interval. An example of such a model is the Merton jump-diffusion model with Gaussian jumps [47]. Infinite activity models, on the other hand, have an infinite number of jumps in every time interval of positive length. A Brownian motion component is not necessary for infinite activity models, since the dynamics of the jumps are already rich enough to generate nontrivial small-time behavior [9]. For a more detailed comparison of the two modelling approaches see, e.g., [16]. We give several examples of multidimensional Levy models with infinite activity.
2.3.1 Stable processes
If $\{X_t : t \geq 0\}$ is a Brownian motion on $\mathbb{R}^d$, then for any $r > 0$ the process $\{X_{rt} : t \geq 0\}$ is identical in law to the process $\{r^{1/2} X_t : t \geq 0\}$ [57, Theorem 5.4]. This property is called selfsimilarity with index $2$. There are many selfsimilar Levy processes other than Brownian motion, the so-called stable processes.
Definition 2.3.1. Let $0 < \alpha < 2$. A Levy process $X = \{X_t : t \geq 0\}$ with state space $\mathbb{R}^d$ is called $\alpha$-stable if the distribution $\mu$ of $X$ at $t = 1$ is $\alpha$-stable, i.e., for any $r > 0$ there exists $c \in \mathbb{R}^d$ such that
\[ \widehat{\mu}(z)^r = \widehat{\mu}\big(r^{\frac{1}{\alpha}} z\big)\, e^{i\langle c, z\rangle}, \]
where $\widehat{\mu}$ denotes the characteristic function of $\mu$.
It is shown in [57, Theorem 14.3] that a Levy process with characteristic triplet $(Q, \nu, \gamma)$ has an $\alpha$-stable distribution if and only if $Q = 0$ and there is a finite measure $\lambda$ on the unit sphere $S = \{x \in \mathbb{R}^d : |x| = 1\}$ such that
\[ \nu(B) = \int_S \lambda(d\xi) \int_0^\infty \mathbb{1}_B(r\xi)\, \frac{1}{r^{1+\alpha}}\, dr, \quad B \in \mathcal{B}(\mathbb{R}^d). \]
A simple example of an $\alpha$-stable Levy process on $\mathbb{R}^d$ is given by the Levy measure
\[ \nu(dz) = \sum_{j=1}^{2^d} c_j\, |z|^{-d-\alpha}\, \mathbb{1}_{Q_j}\, dz, \tag{2.10} \]
where $c_j \geq 0$, $\sum_{j=1}^{2^d} c_j > 0$ and $Q_j$ denotes the $j$-th orthant. Note that for $d = 1$ this is the only possible form of an $\alpha$-stable Levy measure. The corresponding marginal processes $X^i$, $i = 1, \dots, d$, of $X$ are again $\alpha$-stable processes in $\mathbb{R}$ with Levy measures $\nu_i(dz) = c_i |z|^{-1-\alpha}\, dz$, where the $c_i$ depend on $\alpha$, $d$ and $c_j$, $j = 1, \dots, 2^d$.
For $d > 1$ the notion of stable processes can be extended by using nonsingular matrices for scaling [63].
Definition 2.3.2. Let $Q \in \mathbb{R}^{d \times d}$ be a matrix with positive eigenvalues. A Levy process $X = \{X_t : t \geq 0\}$ with state space $\mathbb{R}^d$ is called $Q$-stable if for any $r > 0$ there exists $c \in \mathbb{R}^d$ such that the distribution $\mu$ of $X$ at $t = 1$ satisfies
\[ \widehat{\mu}(z)^r = \widehat{\mu}\big(r^{Q^\top} z\big)\, e^{i\langle c, z\rangle}, \]
where $r^Q = \sum_{n=0}^\infty (n!)^{-1} (\log r)^n Q^n$.
For $Q = \operatorname{diag}(1/\alpha, \dots, 1/\alpha)$, $0 < \alpha < 2$, we again obtain $\alpha$-stable processes. An extension of (isotropic) $\alpha$-stable processes is given by anisotropic $\alpha$-stable processes for an $\alpha = (\alpha_1, \dots, \alpha_d)$ with $0 < \alpha_i < 2$, $i = 1, \dots, d$.
Definition 2.3.3. Let $0 < \alpha_i < 2$, $i = 1, \dots, d$, and $Q = \operatorname{diag}\{\alpha_i^{-1} : i = 1, \dots, d\}$. A Levy process $X = \{X_t : t \geq 0\}$ with state space $\mathbb{R}^d$ is called $\alpha$-stable if the distribution $\mu$ of $X$ at $t = 1$ is $Q$-stable, i.e., for any $r > 0$ there exists $c \in \mathbb{R}^d$ such that
\[ \widehat{\mu}(z)^r = \widehat{\mu}\big(r^{\frac{1}{\alpha_1}} z_1, \dots, r^{\frac{1}{\alpha_d}} z_d\big)\, e^{i\langle c, z\rangle}. \tag{2.11} \]
Since the characteristic function satisfies $\widehat{\mu}(z) = e^{-\psi(z)}$, it follows from (2.11) that the characteristic exponent of an $\alpha$-stable process satisfies, for any $r > 0$,
\[ \Re\psi\big(r^{\frac{1}{\alpha_1}} \xi_1, \dots, r^{\frac{1}{\alpha_d}} \xi_d\big) = r\, \Re\psi(\xi), \quad \forall \xi \in \mathbb{R}^d. \tag{2.12} \]
We assume that the Levy measure $\nu$ has a Levy density $k$, i.e., $\nu(dz) = k(z)\, dz$, and we define the symmetric part of the Levy density by $k^{\mathrm{sym}}(z) = (k(z) + k(-z))/2$. Similar to [28] we obtain for $Q = 0$ that
\[
\Re\psi\big(r^{\frac{1}{\alpha_1}} \xi_1, \dots, r^{\frac{1}{\alpha_d}} \xi_d\big) = \int_{\mathbb{R}^d} \Big(1 - \cos\Big(\sum_{i=1}^d r^{\frac{1}{\alpha_i}} \xi_i z_i\Big)\Big)\, k^{\mathrm{sym}}(z)\, dz
= \int_{\mathbb{R}^d} \big(1 - \cos\langle \xi, z\rangle\big)\, k^{\mathrm{sym}}\big(r^{-\frac{1}{\alpha_1}} z_1, \dots, r^{-\frac{1}{\alpha_d}} z_d\big)\, r^{-\frac{1}{\alpha_1} - \dots - \frac{1}{\alpha_d}}\, dz.
\]
Now, using (2.12), the Levy density has to satisfy
\[ k^{\mathrm{sym}}\big(r^{-\frac{1}{\alpha_1}} z_1, \dots, r^{-\frac{1}{\alpha_d}} z_d\big) = r^{1 + \frac{1}{\alpha_1} + \dots + \frac{1}{\alpha_d}}\, k^{\mathrm{sym}}(z_1, \dots, z_d). \tag{2.13} \]
A simple example of an $\alpha$-stable Levy process on $\mathbb{R}^d$ is given by the Levy measure
\[ \nu(dz) = \sum_{j=1}^{2^d} c_j \Big(\sum_{i=1}^d |z_i|^{\alpha_i}\Big)^{-1 - \frac{1}{\alpha_1} - \dots - \frac{1}{\alpha_d}} \mathbb{1}_{Q_j}\, dz, \tag{2.14} \]
where $c_j \geq 0$, $\sum_{j=1}^{2^d} c_j > 0$. The corresponding marginal processes $X^i$, $i = 1, \dots, d$, of $X$ are again $\alpha$-stable processes in $\mathbb{R}$ with Levy measures $\nu_i(dz) = c_i |z|^{-1-\alpha_i}\, dz$, where the $c_i$ depend on $d$, $\alpha$ and $c_j$, $j = 1, \dots, 2^d$. We plot the density (2.14) for $d = 2$, $\alpha = (0.5, 1.2)$ and $c_j = 1$, $j = 1, \dots, 4$, in Figure 2.2.
Figure 2.2: Anisotropic α-stable density in d = 2 for α = (0.5, 1.2) and correspondingcontour plot
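The anisotropic homogeneity relation (2.13) holds exactly for the density (2.14); the following sketch (not from the thesis; values chosen for illustration) checks it numerically for $d = 2$, $\alpha = (0.5, 1.2)$:

```python
import numpy as np

alpha = np.array([0.5, 1.2])

def k_density(z):
    """Anisotropic alpha-stable Levy density (2.14) with c_j = 1."""
    z = np.asarray(z, dtype=float)
    return np.sum(np.abs(z) ** alpha) ** (-1.0 - np.sum(1.0 / alpha))

def scaling_defect(z, r):
    """Residual of the homogeneity relation (2.13); zero for (2.14)."""
    z = np.asarray(z, dtype=float)
    zs = r ** (-1.0 / alpha) * z
    return abs(k_density(zs) - r ** (1.0 + np.sum(1.0 / alpha)) * k_density(z))
```

Since $|z_i r^{-1/\alpha_i}|^{\alpha_i} = r^{-1}|z_i|^{\alpha_i}$, the sum inside the density rescales by $r^{-1}$ and the exponent produces exactly the factor $r^{1 + 1/\alpha_1 + 1/\alpha_2}$.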
2.3.2 Subordinated Brownian motion
A popular class of processes is obtained by subordination of a Brownian motion with drift. For $d > 1$ there are two possibilities. Using a one-dimensional increasing process, or subordinator, $G = \{G_t : t \geq 0\}$, the resulting process is given by
\[ X^i_t = W^i_{G_t} + \theta_i G_t, \quad \theta_i \in \mathbb{R}, \quad t \in [0, T], \]
for $i = 1, \dots, d$, where $W = (W^1, \dots, W^d)^\top$ is a vector of $d$ Brownian motions with covariance matrix $Q = (\sigma_i \sigma_j \rho_{ij})_{1 \leq i,j \leq d}$. Here, $\sigma_i^2$, $i = 1, \dots, d$, is the variance of the one-dimensional Brownian motion $W^i$, and $\rho_{ij}$ the correlation of the Brownian motions $W^i$ and $W^j$. But we can also use a $d$-dimensional Levy process $G = (G^1, \dots, G^d)^\top$ which is increasing in each coordinate to obtain
\[ X^i_t = W^i_{G^i_t} + \theta_i G^i_t, \quad \theta_i \in \mathbb{R}, \quad t \in [0, T], \]
for $i = 1, \dots, d$.
for i = 1, . . . , d. This is called multivariate subordination and was introduced in [3].
As an example we use a gamma process as one-dimensional subordinator to obtain a multidimensional variance gamma process [41]. As in the one-dimensional case [42] we consider a gamma process $G$ with Levy density $k_G(s) = e^{-\frac{s}{\vartheta}} (\vartheta s)^{-1}\, \mathbb{1}_{\{s > 0\}}$. Then, using [57, Theorem 30.1], the Levy measure of $X$ is given for $B \in \mathcal{B}(\mathbb{R}^d)$ by
\[
\nu(B) = \int_B \int_0^\infty (2\pi)^{-\frac{d}{2}} (\det Q)^{-\frac{1}{2}}\, s^{-\frac{d}{2}}\, e^{-\langle z - \theta s,\, Q^{-1}(z - \theta s)\rangle/(2s)}\, e^{-\frac{s}{\vartheta}} (\vartheta s)^{-1}\, ds\, dz
\]
\[
= \int_B (2\pi)^{-\frac{d}{2}} \vartheta^{-1} (\det Q)^{-\frac{1}{2}}\, e^{\langle \theta,\, \frac{Q^{-1} + Q^{-\top}}{2} z\rangle} \int_0^\infty s^{-\frac{d}{2} - 1}\, e^{-\frac{\langle z, Q^{-1} z\rangle}{2s} - \big(\frac{\langle \theta, Q^{-1}\theta\rangle}{2} + \frac{1}{\vartheta}\big) s}\, ds\, dz
= \int_B (2\pi)^{-\frac{d}{2}} \vartheta^{-1} (\det Q)^{-\frac{1}{2}}\, e^{\langle \theta,\, \frac{Q^{-1} + Q^{-\top}}{2} z\rangle} \int_0^\infty s^{-\frac{d}{2} - 1}\, e^{-\beta \frac{1}{s} - \gamma s}\, ds\, dz,
\]
where $\beta = \langle z, Q^{-1} z\rangle/2$ and $\gamma = \langle \theta, Q^{-1}\theta\rangle/2 + 1/\vartheta$. Using [30, Formula 3.471.9] to evaluate the inner integral, we obtain the Levy measure
\[ \nu(dz) = 2\, (2\pi)^{-\frac{d}{2}} \vartheta^{-1} (\det Q)^{-\frac{1}{2}}\, e^{\langle \theta,\, \frac{Q^{-1} + Q^{-\top}}{2} z\rangle} \Big(\frac{\beta}{\gamma}\Big)^{-d/4} K_{-d/2}\big(2\sqrt{\beta\gamma}\big)\, dz, \tag{2.15} \]
where $K_{-d/2}(\xi)$ is the modified Bessel function of the second kind. For small $\xi$ we have $K_{-d/2}(\xi) \sim \xi^{-d/2}$ and therefore $\nu(dz) \sim \langle z, Q^{-1} z\rangle^{-d/2}\, dz \sim |z|^{-d}\, dz$, since $Q > 0$. The marginal processes $X^i$, $i = 1, \dots, d$, of $X$ are variance gamma processes on $\mathbb{R}$ with Levy measures
\[ \nu_i(dz) = \vartheta^{-1}\, e^{\theta_i z/\sigma_i^2}\, e^{-\sqrt{2/\vartheta + \theta_i^2/\sigma_i^2}\, |z|/\sigma_i}\, |z|^{-1}\, dz. \]
We plot the density (2.15) for $d = 2$, $\theta = (-0.1, -0.2)$, $\sigma = (0.3, 0.4)$, $\rho_{12} = 0.5$ and $\vartheta = 1$ in Figure 2.3.
Figure 2.3: Variance gamma density in d = 2 and corresponding contour plot
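The subordination construction above translates directly into a sampling scheme. The following Monte Carlo sketch (not part of the thesis; it reuses the Figure 2.3 parameters) draws the bivariate variance gamma increment $X_T = W_{G_T} + \theta G_T$ with a common gamma subordinator normalized so that $\mathbb{E}[G_t] = t$:

```python
import numpy as np

rng = np.random.default_rng(0)

def vg_sample(T, n, theta, sigma, rho, vartheta):
    """Sample X_T = W_{G_T} + theta * G_T for a bivariate Brownian motion with
    Var(W^i_1) = sigma_i^2, Corr(W^1, W^2) = rho, and a gamma subordinator G
    with E[G_t] = t and Var(G_t) = vartheta * t."""
    G = rng.gamma(shape=T / vartheta, scale=vartheta, size=n)      # G_T
    cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
    L = np.linalg.cholesky(cov)
    Z = rng.standard_normal((n, 2))
    W = np.sqrt(G)[:, None] * (Z @ L.T)                            # W_{G_T}
    return W + np.outer(G, theta)

X = vg_sample(T=1.0, n=200_000, theta=np.array([-0.1, -0.2]),
              sigma=np.array([0.3, 0.4]), rho=0.5, vartheta=1.0)
```

Since $\mathbb{E}[G_T] = T$, the sample means of the components should be close to $\theta_i T$.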
21
2 Multidimensional Levy models
2.3.3 Levy copula models
Levy copulas $F$ allow parametric constructions of multivariate jump densities from univariate ones. Let $U_1, \dots, U_d$ be one-dimensional tail integrals with Levy densities $k_1, \dots, k_d$, and let $F$ be a Levy copula such that $\partial_1 \dots \partial_d F$ exists in the sense of distributions. Then
\[ k(x_1, \dots, x_d) = \partial_1 \dots \partial_d F\big|_{\xi_1 = U_1(x_1), \dots, \xi_d = U_d(x_d)}\, k_1(x_1) \cdots k_d(x_d) \tag{2.16} \]
is the jump density of a $d$-variate Levy measure with marginal Levy densities $k_1, \dots, k_d$. For example, we can use the Clayton Levy copula (2.9),
\[ F(u_1, \dots, u_d) = 2^{2-d} \Big(\sum_{i=1}^d |u_i|^{-\vartheta}\Big)^{-\frac{1}{\vartheta}} \big(\eta\, \mathbb{1}_{\{u_1 \cdots u_d \geq 0\}} - (1 - \eta)\, \mathbb{1}_{\{u_1 \cdots u_d \leq 0\}}\big), \]
where $\vartheta > 0$, $\eta \in [0, 1]$, and consider $\alpha$-stable marginal Levy densities $k_i(z) = |z|^{-1-\alpha_i}$, $0 < \alpha_i < 2$, $i = 1, \dots, d$. This leads to the $d$-dimensional Levy density
\[ k(z) = 2^{2-d} \prod_{i=1}^d \big(1 + (i - 1)\vartheta\big)\, \alpha_i^{\vartheta + 1}\, |z_i|^{\alpha_i \vartheta - 1} \Big(\sum_{i=1}^d \alpha_i^\vartheta\, |z_i|^{\alpha_i \vartheta}\Big)^{-\frac{1}{\vartheta} - d} \cdot \big(\eta\, \mathbb{1}_{\{z_1 \cdots z_d \geq 0\}} + (1 - \eta)\, \mathbb{1}_{\{z_1 \cdots z_d \leq 0\}}\big). \tag{2.17} \]
Note that this is again an anisotropic α-stable process as shown in [28]. We plot thedensity (2.17) for d = 2, ϑ = 0.5, η = 0.5 and α = (0.5, 1.2) in Figure 2.4.
Figure 2.4: Anisotropic α-stable Levy copula density (2.17) in d = 2 for α = (0.5, 1.2)and corresponding contour plot
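The closed form (2.17) can be cross-checked against the construction (2.16). The sketch below (not part of the thesis; `theta_c` denotes the Clayton parameter $\vartheta$, and we restrict to $d = 2$, $\eta = 1$ and the positive quadrant, where the mixed derivative of (2.9) is elementary):

```python
import numpy as np

alpha = np.array([0.5, 1.2])
theta_c = 0.5   # Clayton parameter (vartheta in (2.9)); eta = 1 here

def U(i, z):
    """Tail integral of k_i(z) = |z|^{-1-alpha_i} on z > 0: U_i(z) = z^{-alpha_i}/alpha_i."""
    return z ** (-alpha[i]) / alpha[i]

def k_copula(z1, z2):
    """Density via (2.16): the mixed derivative of the d = 2 Clayton copula on
    u1, u2 > 0 is (1+theta)(u1*u2)^{-theta-1} (u1^{-theta}+u2^{-theta})^{-1/theta-2}."""
    u1, u2 = U(0, z1), U(1, z2)
    dF = (1.0 + theta_c) * (u1 * u2) ** (-theta_c - 1.0) \
        * (u1 ** -theta_c + u2 ** -theta_c) ** (-1.0 / theta_c - 2.0)
    return dF * z1 ** (-1.0 - alpha[0]) * z2 ** (-1.0 - alpha[1])

def k_closed(z1, z2):
    """Closed form (2.17) for d = 2, eta = 1, z1, z2 > 0."""
    pref = (1.0 + theta_c) * np.prod(alpha ** (theta_c + 1.0)) \
        * z1 ** (alpha[0] * theta_c - 1.0) * z2 ** (alpha[1] * theta_c - 1.0)
    s = alpha[0] ** theta_c * z1 ** (alpha[0] * theta_c) \
        + alpha[1] ** theta_c * z2 ** (alpha[1] * theta_c)
    return pref * s ** (-1.0 / theta_c - 2.0)
```

Substituting $|U_i(z_i)|^{-\vartheta} = \alpha_i^\vartheta |z_i|^{\alpha_i \vartheta}$ shows that the two expressions agree identically, which the assertions confirm pointwise.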
2.3.4 Admissible models
We make the following assumptions on our models.
Assumption 2.3.4. Let X be a d-dimensional Levy process with characteristic triplet(Q, ν, γ), Levy density k and marginal Levy densities ki, i = 1, . . . , d.
i) There are constants $\beta_i^- > 0$, $\beta_i^+ > 0$, $i = 1, \dots, d$, such that
\[ k_i(z) \lesssim \begin{cases} e^{-\beta_i^- |z|}, & z < -1, \\ e^{-\beta_i^+ z}, & z > 1. \end{cases} \tag{2.18} \]

ii) Furthermore, we assume there exists an $\alpha$-stable process $X^0$ with Levy density $k^0$ such that
\[ k(z) \lesssim k^0(z), \quad 0 < |z| < 1. \tag{2.19} \]

iii) If $Q$ is not positive definite, we assume additionally that
\[ k^{\mathrm{sym}}(z) \gtrsim k^{0,\mathrm{sym}}(z), \quad 0 < |z| < 1. \tag{2.20} \]

iv) Finally, we require that the density $k$ is real analytic outside $\{z_i = 0\}$, $i = 1, \dots, d$, i.e.,
\[ |\partial^n k(z)| \lesssim C^{|n|}\, |n|!\, \|z\|_\infty^{-\alpha} \prod_{i=1}^d |z_i|^{-n_i - 1}, \quad \forall z_i \neq 0, \tag{2.21} \]
for some $C > 0$, $\alpha = \|\alpha\|_\infty$ and any multiindex $n = (n_1, \dots, n_d) \in \mathbb{N}_0^d$.
For $d = 1$ the assumptions (2.18)–(2.21) coincide with the assumptions (A1)–(A4) in [45, Section 3.2]. There it is shown that these are satisfied by a wide range of processes, including the generalized hyperbolic, Meixner and tempered stable processes. In the case that the marginal processes $X^i$, $i = 1, \dots, d$, are independent, (2.18)–(2.21) need only hold for the corresponding one-dimensional marginal densities $k_i$, $i = 1, \dots, d$. If one wants to enforce a dependence structure given by a Levy copula, the assumptions can also be stated in terms of the corresponding Levy copula and marginal densities.
Assumption 2.3.5. Let X be a d-dimensional Levy process with characteristic triplet(Q, ν, γ), Levy copula F , marginal Levy densities ki with tail integrals Ui, i = 1, . . . , d.
i) There are constants $\beta_i^- > 0$, $\beta_i^+ > 0$, $i = 1, \dots, d$, such that
\[ k_i(z) \lesssim \begin{cases} e^{-\beta_i^- |z|}, & z < -1, \\ e^{-\beta_i^+ z}, & z > 1. \end{cases} \tag{2.22} \]

ii) Furthermore, there exist a 1-homogeneous Levy copula $F^0$ and $\alpha$-stable densities $k_i^0$ with tail integrals $U_i^0$, $i = 1, \dots, d$, such that
\[ k_i(z) \sim k_i^0(z), \quad 0 < |z| < 1, \quad i = 1, \dots, d, \tag{2.23} \]
\[ \partial_1 \dots \partial_d F(U(z)) \sim \partial_1 \dots \partial_d F^0(U^0(z)), \quad 0 < |z| < 1. \tag{2.24} \]

iii) Finally, we require that
\[ |\partial^n k_i(z)| \lesssim C^n\, n!\, |z|^{-\alpha - n - 1}, \quad \forall z \neq 0, \quad i = 1, \dots, d, \quad n \in \mathbb{N}_0, \tag{2.25} \]
\[ |\partial^n F(u)| \lesssim C^{|n|}\, |n|!\, \min\{|u_1|, \dots, |u_d|\} \prod_{i=1}^d |u_i|^{-n_i}, \quad \forall u \in \mathbb{R}^d, \quad n \in \mathbb{N}_0^d, \tag{2.26} \]
for some $C > 0$ and $\alpha = \|\alpha\|_\infty$.
Any process with a Levy copula and marginal densities satisfying Assumption 2.3.5 alsosatisfies Assumption 2.3.4.
Proposition 2.3.6. Let X be a d-dimensional Levy process with characteristic triplet(Q, ν, γ), Levy copula F and marginal Levy densities ki, i = 1, . . . , d satisfying Assump-tion 2.3.5. Then, the Levy density k of X satisfies Assumption 2.3.4.
Proof. The assumption (2.22) of semiheavy tails obviously implies (2.18). It is shown in [28, Theorem 3.4] that a Levy process with a 1-homogeneous Levy copula and $\alpha$-stable margins is $\alpha$-stable. Therefore, assumptions (2.19) and (2.20) follow from (2.16), (2.23) and (2.24). To show (2.21) we employ, for $n \in \mathbb{N}_0$, the formula of Faa di Bruno [55],
\[ \partial^n f(g(z)) = \sum \frac{n!}{m_1! \cdots m_n!}\, (\partial^m f)(g(z)) \Big(\frac{\partial g(z)}{1!}\Big)^{m_1} \cdots \Big(\frac{\partial^n g(z)}{n!}\Big)^{m_n}, \]
where $m = m_1 + \dots + m_n$ and the sum is over all $m_1, \dots, m_n \geq 0$ with $m_1 + 2m_2 + \dots + n m_n = n$. Since, by (2.25), the marginal tail integrals satisfy
\[ |\partial^n U_i(z)| \lesssim C^n\, n!\, |z|^{-\alpha - n}, \quad \forall z \neq 0, \quad i = 1, \dots, d, \quad n \in \mathbb{N}_0, \]
we obtain for the composite function $(\partial_1 \dots \partial_d F \circ U)(z)$
\[
|\partial_i^n (\partial_1 \dots \partial_d F(U(z)))| = \Big| \sum \frac{n!}{m_1! \cdots m_n!}\, (\partial_i^m \partial_1 \dots \partial_d F)(U(z)) \Big(\frac{\partial U_i(z_i)}{1!}\Big)^{m_1} \cdots \Big(\frac{\partial^n U_i(z_i)}{n!}\Big)^{m_n} \Big|
\]
\[
\lesssim \sum C_1^n\, n!\, \frac{m!}{m_1! \cdots m_n!}\, \|z\|_\infty^{-\alpha} \prod_{j=1}^d |z_j|^{\alpha}\, |z_i|^{\alpha m}\, |z_i|^{-\alpha m_1 - m_1} \cdots |z_i|^{-\alpha m_n - n m_n}
\lesssim C_2^n\, n!\, \|z\|_\infty^{-\alpha}\, |z_i|^{-n} \prod_{j=1}^d |z_j|^{\alpha}.
\]
Using the Leibniz rule leads to
\[
|\partial_i^n k(z)| = \Big| \partial_i^n \Big(\partial_1 \dots \partial_d F(U(z))\, k_1(z_1) \cdots k_d(z_d)\Big) \Big|
= \Big| \sum_{j=0}^n \frac{n!}{j!\,(n-j)!}\, \partial_i^j \big(\partial_1 \dots \partial_d F(U(z))\big)\, \partial_i^{n-j} k_i(z_i) \prod_{m=1, m \neq i}^d k_m(z_m) \Big|
\]
\[
\lesssim C_3^n\, n! \sum_{j=0}^n \|z\|_\infty^{-\alpha}\, |z_i|^{-j} \prod_{l=1}^d |z_l|^{\alpha}\, |z_i|^{-\alpha - (n - j) - 1} \prod_{m=1, m \neq i}^d |z_m|^{-\alpha - 1}
\lesssim C_4^n\, n!\, \|z\|_\infty^{-\alpha}\, |z_i|^{-n} \prod_{m=1}^d |z_m|^{-1}.
\]
Proposition 2.3.7. Let $X$ be a $d$-dimensional Levy process with Clayton Levy copula
\[ F(u_1, \dots, u_d) = 2^{2-d} \Big(\sum_{i=1}^d |u_i|^{-\vartheta}\Big)^{-\frac{1}{\vartheta}} \big(\eta\, \mathbb{1}_{\{u_1 \cdots u_d \geq 0\}} - (1 - \eta)\, \mathbb{1}_{\{u_1 \cdots u_d \leq 0\}}\big), \]
where $\vartheta > 0$, $\eta \in [0, 1]$, together with tempered stable marginal densities [7, 9]
\[ k_i(z) = c_i^-\, \frac{e^{-\beta_i^- |z|}}{|z|^{1 + \alpha_i}}\, \mathbb{1}_{\{z < 0\}} + c_i^+\, \frac{e^{-\beta_i^+ z}}{z^{1 + \alpha_i}}\, \mathbb{1}_{\{z > 0\}}, \quad i = 1, \dots, d, \]
with $c_i^+, c_i^- \geq 0$, $c_i^+ + c_i^- > 0$, $\beta_i^+, \beta_i^- > 0$ and $0 < \alpha_i < 2$, $i = 1, \dots, d$. Then Assumption 2.3.5 is satisfied.
Proof. Equations (2.22) and (2.23) obviously hold. Equation (2.24) is also considered in [54]; there this property was called equivalence preserving and was shown to hold for the Clayton Levy copula, which is 1-homogeneous, i.e., $F^0 = F$. Now consider the function $f(z) = z^\alpha$ with $\alpha = \max\{\alpha_1, \dots, \alpha_d\}$. It is straightforward to see that $f$ satisfies, for $n \in \mathbb{N}_0$,
\[ |\partial^n f(z)| = |\alpha| \cdots |\alpha - n + 1|\, |z|^{\alpha - n} \leq (\lceil |\alpha| \rceil + n)!\, |z|^{\alpha - n} \lesssim C^n\, n!\, |z|^{\alpha - n}, \]
with any $C > 1$. Using the Leibniz formula yields
\[ \Big| \partial^n \Big(\frac{e^{-\beta |z|}}{|z|^{1 + \alpha}}\Big) \Big| = \Big| \sum_{j=0}^n \binom{n}{j}\, \partial^j e^{-\beta |z|}\, \partial^{n-j} |z|^{-1 - \alpha} \Big| \lesssim C^n\, n!\, |z|^{-\alpha - 1 - n}, \]
for $z \neq 0$, which yields (2.25). To prove (2.26) we again apply the formula of Faa di Bruno as in the proof of Proposition 2.3.6, with $f(x) = x^{-\frac{1}{\vartheta}}$ and $g(z) = \sum_{j=1}^d |z_j|^{-\vartheta}$:
\[
|\partial_i^n f(g(z))| = \Big| \sum \frac{n!}{m_1! \cdots m_n!}\, (\partial^m f)(g(z)) \Big(\frac{\partial_i g(z)}{1!}\Big)^{m_1} \cdots \Big(\frac{\partial_i^n g(z)}{n!}\Big)^{m_n} \Big|
\lesssim C_1^n\, n! \sum \frac{n!}{m_1! \cdots m_n!} \Big(\sum_{j=1}^d |z_j|^{-\vartheta}\Big)^{-\frac{1}{\vartheta} - m} |z_i|^{-\vartheta m - n}
\lesssim C_2^n\, n!\, \min\{|z_1|, \dots, |z_d|\}\, |z_i|^{-n}.
\]
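The assumptions (2.22)–(2.23) for a tempered stable margin can be probed numerically. The sketch below (not part of the thesis; all constants are arbitrary illustrative choices) checks the semiheavy tails and the $\alpha$-stable behavior near the origin:

```python
import numpy as np

alpha_i, beta_m, beta_p, c_m, c_p = 0.7, 3.0, 4.0, 1.0, 1.0

def k_ts(z):
    """Tempered stable marginal density k_i from Proposition 2.3.7."""
    z = np.asarray(z, dtype=float)
    expfac = np.where(z < 0.0, c_m * np.exp(-beta_m * np.abs(z)),
                      c_p * np.exp(-beta_p * z))
    return expfac * np.abs(z) ** (-1.0 - alpha_i)

def k_stable(z):
    """alpha-stable comparison density k_i^0 appearing in (2.23)."""
    return np.abs(np.asarray(z, dtype=float)) ** (-1.0 - alpha_i)
```

Near zero the ratio $k_i/k_i^0$ tends to $c_i^\pm$ (here $1$), while for $|z| > 1$ the density is dominated by the exponential tails of (2.22).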
2.4 Properties of the Levy measure
In the present section we follow [54] and verify properties of Levy measures correspondingto Levy processes X with state space Rd satisfying Assumption 2.3.4. For the (in generalnonsymmetric) bilinear form E(·, ·) corresponding to the generator A of X, we prove theso-called sector condition:
\[ \exists C > 0 : \quad |\Im\psi(\xi)| \leq C\, \Re\psi(\xi), \quad \text{for all } \xi \in \mathbb{R}^d. \tag{2.27} \]
Due to a classical result of Berg and Forst [4] (see also [34, Chapter 4.7]) the sectorcondition together with the translation invariance of X, implies that E(·, ·) is a nonsym-metric Dirichlet form. For Levy processes, the sector condition also makes an explicitcharacterization of the domains D(A) and D(E) of A and E(·, ·) in terms of anisotropicSobolev spaces possible.
First, we show that the tails of the multivariate Levy processes decay exponentially fastprovided the one-dimensional tails decay exponentially.
Proposition 2.4.1. Let $X$ be a Levy process with state space $\mathbb{R}^d$ and Levy measure $\nu$ such that all marginal measures $\nu_i$ satisfy (2.18). Then the Levy measure $\nu$ also decays exponentially,
\[ \int_{|z| > 1} e^{\eta_i(z)}\, \nu(dz) < \infty, \quad \text{with } \eta_i(z) = \big(\mu_i^+ \mathbb{1}_{\{z_i > 0\}} + \mu_i^- \mathbb{1}_{\{z_i < 0\}}\big)\, |z_i|, \quad i = 1, \dots, d, \]
where $0 < \mu_i^- < \beta_i^-$ and $0 < \mu_i^+ < \beta_i^+$, $i = 1, \dots, d$.

Proof. With Proposition 2.1.8 we obtain
\[ \int_{|z| > 1} e^{\mu_i |z_i|}\, \nu(dz) \lesssim \int_{|z_i| > 1} e^{\mu_i |z_i|}\, \nu_i(dz_i) < \infty. \]
The following proposition provides an upper bound for $|\psi(\xi)|$ and hence for $|\Im\psi(\xi)|$.
Proposition 2.4.2. Let $X$ be a Levy process with state space $\mathbb{R}^d$, characteristic triplet $(Q, \nu, \gamma)$ and characteristic exponent $\psi$. Assume $Q = 0$, $\gamma_i = 0$, $i = 1, \dots, d$, and that the Levy measure $\nu$ satisfies (2.19) with $\alpha = (\alpha_1, \dots, \alpha_d)$. Then there exists $C > 0$ such that for all $\xi \in \mathbb{R}^d$ with $\|\xi\|_\infty > 1$,
\[ |\psi(\xi)| \leq C \sum_{j=1}^d |\xi_j|^{\alpha_j}. \tag{2.28} \]
Proof. For notational simplicity and w.l.o.g. we assume only positive jumps. We distinguish between $\alpha_i$ smaller and larger than $1$. Let $0 \leq j \leq d$ be such that
\[ \alpha_1, \dots, \alpha_j < 1, \qquad 1 \leq \alpha_{j+1}, \dots, \alpha_d < 2. \]
Then the characteristic exponent $\psi$ of $X$ can be written as
\[ \psi(\xi) = \int_{\mathbb{R}^d_{\geq 0}} \Big(1 - e^{i\langle \xi, z\rangle} + \sum_{k=j+1}^d i \xi_k z_k\, \mathbb{1}_{\{|z| \leq 1\}}\Big)\, \nu(dz) + i \sum_{k=1}^j \gamma_k \xi_k. \]
We assume that we can set $\gamma_k = 0$, $k = 1, \dots, j$. With $B = [0, \frac{1}{d|\xi_1|}] \times \dots \times [0, \frac{1}{d|\xi_d|}]$ we obtain for all $\xi \in \mathbb{R}^d$
\[
|\psi(\xi)| \lesssim \int_{[0,1]^d} \Big| e^{i\langle \xi, z\rangle} - 1 - \sum_{k=j+1}^d i \xi_k z_k \Big|\, \nu^0(dz) + 1
\lesssim \int_B \Big| e^{i\langle \xi, z\rangle} - 1 - \sum_{k=j+1}^d i \xi_k z_k \Big|\, \nu^0(dz) + \int_{[0,1]^d \setminus B} \Big(1 + \sum_{k=j+1}^d |\xi_k z_k|\Big)\, \nu^0(dz) + 1.
\]
Since the marginal densities $k_i^0$ of $k^0$ are again $\alpha$-stable, we have for the first term
\[
\int_B \Big| e^{i\langle \xi, z\rangle} - 1 - \sum_{k=j+1}^d i \xi_k z_k \Big|\, \nu^0(dz)
\lesssim \int_B \Big(\sum_{k=1}^j |\xi_k z_k| + \sum_{k=j+1}^d \xi_k^2 z_k^2\Big)\, \nu^0(dz)
\lesssim \sum_{k=1}^j \int_0^{\frac{1}{|\xi_k|}} |\xi_k z_k|\, \nu_k^0(dz_k) + \sum_{k=j+1}^d \int_0^{\frac{1}{|\xi_k|}} \xi_k^2 z_k^2\, \nu_k^0(dz_k)
\]
\[
\lesssim \sum_{k=1}^j \int_0^{\frac{1}{|\xi_k|}} |\xi_k z_k|\, \frac{1}{z_k^{\alpha_k + 1}}\, dz_k + \sum_{k=j+1}^d \int_0^{\frac{1}{|\xi_k|}} \xi_k^2 z_k^2\, \frac{1}{z_k^{\alpha_k + 1}}\, dz_k
\lesssim \sum_{k=1}^d |\xi_k|^{\alpha_k}.
\]
For the second term, note that if $z \in [0,1]^d \setminus B$ has components $z_k$ satisfying $z_k \leq \frac{1}{d|\xi_k|}$, $k = j+1, \dots, d$, there exists $l_k$ such that $z_{l_k} \geq \frac{1}{d|\xi_{l_k}|}$. Then,
\[
\int_{[0,1]^d \setminus B} \Big(1 + \sum_{k=j+1}^d |\xi_k z_k|\Big)\, \nu^0(dz)
\leq \sum_{k=j+1}^d \int_{-\infty}^\infty \cdots \int_{\frac{1}{d|\xi_k|}}^1 \cdots \int_{-\infty}^\infty \big(1 + |\xi_k z_k|\big)\, \nu^0(dz)
+ \sum_{k=j+1}^d \int_{-\infty}^\infty \cdots \int_0^{\frac{1}{d|\xi_k|}} \cdots \int_{\frac{1}{d|\xi_{l_k}|}}^1 \cdots \int_{-\infty}^\infty \big(1 + |\xi_k z_k|\big)\, \nu^0(dz)
\]
\[
\leq \sum_{k=j+1}^d \int_{\frac{1}{d|\xi_k|}}^1 \big(1 + |\xi_k z_k|\big)\, \nu_k^0(dz_k)
+ \sum_{k=j+1}^d \int_{-\infty}^\infty \cdots \int_0^{\frac{1}{d|\xi_k|}} \cdots \int_{\frac{1}{d|\xi_{l_k}|}}^1 \cdots \int_{-\infty}^\infty \Big(1 + \frac{1}{d}\Big)\, \nu^0(dz)
\]
\[
\lesssim 1 + \sum_{k=j+1}^d |\xi_k|^{\alpha_k} + \sum_{k=j+1}^d |\xi_k| + \sum_{k=j+1}^d \int_{\frac{1}{d|\xi_{l_k}|}}^1 \nu_{l_k}^0(dz_{l_k})
\lesssim 1 + \sum_{k=1}^d |\xi_k|^{\alpha_k} + \sum_{k=j+1}^d |\xi_k|.
\]
Therefore, we obtain $|\psi(\xi)| \lesssim 1 + \sum_{k=1}^d |\xi_k|^{\alpha_k}$ for all $\xi \in \mathbb{R}^d$, and thus for $\|\xi\|_\infty > 1$ the upper bound $|\psi(\xi)| \lesssim \sum_{k=1}^d |\xi_k|^{\alpha_k}$.
In order to prove (2.27), we also require a lower bound on $\Re\psi(\xi)$.
Proposition 2.4.3. Let $X$ be a Levy process with state space $\mathbb{R}^d$, characteristic triplet $(Q, \nu, \gamma)$ and characteristic exponent $\psi$. Assume $Q = 0$, $\gamma_i = 0$, $i = 1, \dots, d$, and that the Levy measure $\nu$ satisfies (2.20) with $\alpha = (\alpha_1, \dots, \alpha_d)$. Then there exists $C > 0$ such that for $\|\xi\|_\infty$ sufficiently large,
\[ |\Re\psi(\xi)| \geq C \sum_{j=1}^d |\xi_j|^{\alpha_j}. \tag{2.29} \]
Proof. Since $k^{0,\mathrm{sym}}$ is $\alpha$-stable it satisfies (2.13), i.e.,
\[ k^{0,\mathrm{sym}}\big(r^{-\frac{1}{\alpha_1}} x_1, \dots, r^{-\frac{1}{\alpha_d}} x_d\big) = r^{1 + \frac{1}{\alpha_1} + \dots + \frac{1}{\alpha_d}}\, k^{0,\mathrm{sym}}(x_1, \dots, x_d), \]
for all $r > 0$ and $x \in \mathbb{R}^d$ with $x_i \neq 0$. Herewith, using [28, Theorem 3.3], one obtains that $\psi^0(\xi) := \int_{\mathbb{R}^d} (1 - \cos\langle \xi, x\rangle)\, k^{0,\mathrm{sym}}(x)\, dx$ is an anisotropic distance function of order $(1/\alpha_1, \dots, 1/\alpha_d)$. Since all anisotropic distance functions of the same order are equivalent (cf., e.g., [21, Lemma 2.2]), there exists some constant $C_1 > 0$ such that
\[ \psi^0(\xi) \geq C_1 \big(|\xi_1|^{\alpha_1} + \dots + |\xi_d|^{\alpha_d}\big), \quad \text{for all } \xi \in \mathbb{R}^d. \]
Hence, with (2.20) there exists $C_2 > 0$ such that
\[
|\Re\psi(\xi)| = \int_{\mathbb{R}^d} \big(1 - \cos\langle \xi, x\rangle\big)\, k^{\mathrm{sym}}(x)\, dx \geq C_2 \int_{B_1(0)} \big(1 - \cos\langle \xi, x\rangle\big)\, k^{0,\mathrm{sym}}(x)\, dx
\geq C_2\, \psi^0(\xi) - C_2 \int_{\mathbb{R}^d \setminus B_1(0)} \big(1 - \cos\langle \xi, x\rangle\big)\, k^{0,\mathrm{sym}}(x)\, dx
\]
\[
\geq C_2\, \psi^0(\xi) - 2 C_2 \int_{\mathbb{R}^d \setminus B_1(0)} k^{0,\mathrm{sym}}(x)\, dx
\geq C_2\, \psi^0(\xi) - C_3 \geq C_2 C_1 \sum_{i=1}^d |\xi_i|^{\alpha_i} - C_3.
\]
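As a rough numerical check of the two-sided bound $\Re\psi(\xi) \asymp \sum_j |\xi_j|^{\alpha_j}$ (Propositions 2.4.2 and 2.4.3) in $d = 1$: the sketch below (not part of the thesis; truncation limits are ad hoc) approximates $\Re\psi$ for the symmetric $\alpha$-stable density $|z|^{-1-\alpha}$ by truncated quadrature. Exact scaling would give $\Re\psi(4)/\Re\psi(1) = 4^\alpha = 2$ for $\alpha = 1/2$.

```python
import numpy as np

alpha = 0.5

def re_psi(xi, zmax=1000.0, n=2_000_000):
    """Quadrature approximation of Re psi(xi) = int_R (1 - cos(xi*z)) |z|^{-1-alpha} dz,
    using symmetry and truncating the integral at |z| = zmax."""
    z = np.linspace(1e-4, zmax, n)
    dz = z[1] - z[0]
    return 2.0 * np.sum((1.0 - np.cos(xi * z)) * z ** (-1.0 - alpha)) * dz

r = re_psi(4.0) / re_psi(1.0)   # exact scaling would give 4**alpha = 2
```

The truncation introduces a small additive error, so the computed ratio only approximates the exact value $2$.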
Since ψ is continuous we immediately obtain the sector condition.
Theorem 2.4.4. Let $X$ be a Levy process with state space $\mathbb{R}^d$, characteristic triplet $(Q, \nu, \gamma)$ and characteristic exponent $\psi$. Assume either $Q > 0$, or $Q = 0$, $\gamma_i = 0$, $i = 1, \dots, d$, and the Levy measure $\nu$ satisfies (2.19) and (2.20). Then
\[ |\Im\psi(\xi)| \lesssim |\Re\psi(\xi)|, \quad \forall \xi \in \mathbb{R}^d. \]

Proof. For $Q = 0$ the result follows with Propositions 2.4.2 and 2.4.3. For $Q > 0$ there exists a $C_1 > 0$ such that for all $\xi \in \mathbb{R}^d$
\[ |\Re\psi(\xi)| = \frac{1}{2}\langle \xi, Q\xi\rangle + \int_{\mathbb{R}^d} \big(1 - \cos\langle \xi, z\rangle\big)\, \nu(dz) \geq C_1 \sum_{j=1}^d \xi_j^2, \tag{2.30} \]
and for $\|\xi\|_\infty > 1$ there exists $C_2 > 0$ such that
\[ |\psi(\xi)| \leq |\langle \gamma, \xi\rangle| + \langle \xi, Q\xi\rangle + \int_{\mathbb{R}^d} \big| e^{i\langle \xi, z\rangle} - 1 - i\langle \xi, z\rangle\, \mathbb{1}_{\{|z| \leq 1\}} \big|\, \nu(dz) \leq C_2 \sum_{j=1}^d |\xi_j|^2. \tag{2.31} \]
The result follows from the continuity of $\psi$.
3 Option pricing
In this chapter we derive the partial integrodifferential equation corresponding to the option pricing problem. It is shown that the variational formulation has a unique solution in an anisotropic Sobolev space. Furthermore, the unbounded log-price domain is localized to a bounded domain and the error incurred by the truncation is estimated. Throughout, we assume the risk-neutral dynamics of $d \geq 1$ assets are given by
\[ S^i_t = S^i_0\, e^{rt + X^i_t}, \quad i = 1, \dots, d, \]
where $X$ is a $d$-dimensional Levy process with characteristic triplet $(Q, \nu_{\mathbb{Q}}, \gamma)$ under a risk-neutral measure $\mathbb{Q}$ such that $e^{X^i}$ is a martingale with respect to the canonical filtration $\mathcal{F}^0_t := \sigma(X_\tau, \tau \leq t)$, $t \geq 0$, of the multivariate process $X$. As shown in Lemma 2.1.9, this martingale condition implies $\int_{|z| > 1} e^{z_i}\, \nu_{\mathbb{Q}}(dz) < \infty$, $i = 1, \dots, d$. This condition holds for semiheavy tails satisfying (2.18) with $\beta_i^+ > 1$, $i = 1, \dots, d$, as shown in Proposition 2.4.1. We assume in the following that $\mathbb{Q}$ was fixed by some procedure (see, e.g., [12, 24, 36]) and drop the subscript $\mathbb{Q}$.
3.1 Partial integrodifferential equation
We consider a European option with maturity $T < \infty$ and payoff $g(S_T)$, which is assumed to be Lipschitz. An arbitrage-free value $V(t, s)$ of this option is given by
\[ V(t, s) = \mathbb{E}\big(e^{-r(T-t)}\, g(S_T) \,\big|\, S_t = s\big). \tag{3.1} \]
It can be characterized as the solution of a PIDE.
Theorem 3.1.1. Let $X$ be a Levy process with state space $\mathbb{R}^d$ and characteristic triplet $(Q, \nu, \gamma)$. Assume that the function $V(t, s)$ in (3.1) satisfies
\[ V(t, s) \in C^{1,2}\big((0, T) \times \mathbb{R}^d_{>0}\big) \cap C^0\big([0, T] \times \mathbb{R}^d_{\geq 0}\big). \]
Then $V(t, s)$ is a classical solution of the backward Kolmogorov equation
\[
\partial_t V(t, s) + \frac{1}{2} \sum_{i,j=1}^d s_i s_j Q_{ij}\, \partial_{s_i s_j} V + r \sum_{i=1}^d s_i\, \partial_{s_i} V(t, s) - r V(t, s)
+ \int_{\mathbb{R}^d} \Big( V(t, s e^z) - V(t, s) - \sum_{i=1}^d s_i \big(e^{z_i} - 1\big)\, \partial_{s_i} V(t, s) \Big)\, \nu(dz) = 0, \tag{3.2}
\]
on $(0, T) \times \mathbb{R}^d_{>0}$, where $V(t, s e^z) := V(t, s_1 e^{z_1}, \dots, s_d e^{z_d})$, and the terminal condition is given by
\[ V(T, s) = g(s), \quad \forall s \in \mathbb{R}^d_{\geq 0}. \tag{3.3} \]
Proof. We first calculate the risk-neutral dynamics of $S^i_t$. Let $\Sigma = (\Sigma_{ij})_{1 \leq i,j \leq d}$ be given such that $\Sigma \Sigma^\top = Q$, and denote by $\tilde{J}(dt, dz) = J(dt, dz) - \nu(dz)\, dt$ the compensated jump measure. With the Ito formula for multidimensional Levy processes and the Levy-Ito decomposition we obtain
\[
dS^i_t = r S^i_t\, dt + S^i_{t-}\, dX^i_t + \frac{1}{2} Q_{ii} S^i_t\, dt + S^i_{t-}\, e^{\Delta X^i_t} - S^i_{t-} - \Delta X^i_t\, S^i_{t-}
\]
\[
= r S^i_t\, dt + S^i_{t-} \gamma_i\, dt + S^i_{t-} \sum_{k=1}^d \Sigma_{ik}\, dW^k_t + \int_{|z| < 1} S^i_{t-}\, z_i\, \tilde{J}(dt, dz) + \frac{1}{2} Q_{ii} S^i_t\, dt
+ S^i_{t-}\Big(e^{\Delta X^i_t} - 1 \underbrace{- \Delta X^i_t + \Delta X^i_t\, \mathbb{1}_{\{|\Delta X_t| \geq 1\}}}_{= -\Delta X^i_t \mathbb{1}_{\{|\Delta X_t| < 1\}}}\Big)
\]
\[
= r S^i_t\, dt + S^i_{t-} \gamma_i\, dt + S^i_{t-} \sum_{k=1}^d \Sigma_{ik}\, dW^k_t + \frac{1}{2} Q_{ii} S^i_t\, dt
+ \int_{\mathbb{R}^d} S^i_{t-} \big(e^{z_i} - 1\big)\, \tilde{J}(dt, dz) + \int_{\mathbb{R}^d} S^i_{t-} \big(e^{z_i} - 1 - z_i\, \mathbb{1}_{\{|z| < 1\}}\big)\, \nu(dz)\, dt.
\]
Since $e^{X^i}$ is a martingale, we have
\[ dS^i_t = r S^i_t\, dt + S^i_{t-} \sum_{k=1}^d \Sigma_{ik}\, dW^k_t + \int_{\mathbb{R}^d} S^i_{t-} \big(e^{z_i} - 1\big)\, \tilde{J}(dt, dz). \]
We now apply the Itô formula for semimartingales [35, Theorem 4.57] to the discounted value $e^{-rt}V_t$. We denote by $[X,Y]^c_t$ the continuous part of the quadratic covariation $[X,Y]_t = X_tY_t - X_0Y_0 - \int_0^t X_{\tau-}\,dY_\tau - \int_0^t Y_{\tau-}\,dX_\tau$. Then, we calculate
$$d(e^{-rt}V_t) = -re^{-rt}V\,dt + e^{-rt}\Big( \partial_t V(t,S_t)\,dt + \sum_{i=1}^d \partial_{s_i}V(t,S_{t-})\,dS^i_t + \frac12\sum_{i,j=1}^d \partial_{s_is_j}V(t,S_{t-})\,d[S^i,S^j]^c_t$$
$$+ V(t,S_{t-}e^{\Delta X_t}) - V(t,S_{t-}) - \sum_{i=1}^d S^i_{t-}\big(e^{\Delta X^i_t}-1\big)\,\partial_{s_i}V(t,S_{t-})\Big) = a(t)\,dt + dM_t,$$
where
$$a(t) = -re^{-rt}V + e^{-rt}\Big( \partial_t V + \sum_{i=1}^d \partial_{s_i}V\,rS^i_{t-} + \frac12\sum_{i,j=1}^d \mathcal{Q}_{ij}\,S^i_{t-}S^j_{t-}\,\partial_{s_is_j}V + \int_{\mathbb{R}^d}\Big( V(t,S_{t-}e^z) - V(t,S_{t-}) - \sum_{i=1}^d S^i_{t-}\,(e^{z_i}-1)\,\partial_{s_i}V(t,S_{t-})\Big)\nu(dz)\Big),$$
$$dM_t = e^{-rt}\Big( \sum_{i=1}^d \partial_{s_i}V(t,S_{t-})\,S^i_{t-}\sum_{k=1}^d \Sigma_{ik}\,dW^k_t + \int_{\mathbb{R}^d}\big( V(t,S_{t-}e^z) - V(t,S_{t-})\big)\,J(dt,dz)\Big).$$
Since $g$ is Lipschitz, $V$ is also Lipschitz with respect to $s$, and $\partial_{s_i}V$ is bounded for $i=1,\dots,d$. With
$$\mathbb{E}\Big( \int_0^T\!\int_{\mathbb{R}^d}\big( V(t,S_{t-}e^z) - V(t,S_{t-})\big)^2\,\nu(dz)\,dt\Big) \lesssim \mathbb{E}\Big( \int_0^T\!\int_{\mathbb{R}^d}\sum_{i=1}^d (S^i_{t-})^2\,(e^{2z_i}+1)\,\nu(dz)\,dt\Big) \lesssim \sum_{i=1}^d \int_{\mathbb{R}} (e^{2z_i}+1)\,\nu_i(dz_i)\ \mathbb{E}\Big(\int_0^T (S^i_{t-})^2\,dt\Big) < \infty,$$
and
$$\mathbb{E}\Big( \int_0^T (S^i_{t-})^2\,|\partial_{s_i}V(t,S_{t-})|\,dt\Big) \lesssim \mathbb{E}\Big( \int_0^T (S^i_{t-})^2\,dt\Big) < \infty,$$
for $i=1,\dots,d$, $M_t$ is a square-integrable martingale by [16, Proposition 8.6]. Therefore, $e^{-rt}V_t - M_t$ is a martingale, and since $e^{-rt}V_t - M_t = \int_0^t a(\tau)\,d\tau$ is also a continuous process with bounded variation, we have $a(t) = 0$ almost surely by [16, Proposition 8.9]. This yields the desired PIDE (3.2).
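As a sanity check on the probabilistic representation (3.1), the pure-diffusion special case ($d = 1$, $\nu = 0$) can be evaluated by Monte Carlo and compared with the closed-form Black-Scholes price. The following sketch uses illustrative parameters (not taken from the text) and the martingale normalization $X_t = -\sigma^2 t/2 + \sigma W_t$, so that $e^{X}$ is a martingale as required above.

```python
import numpy as np
from math import erf, exp, log, sqrt

def bs_call(s0, K, r, sigma, T):
    """Closed-form Black-Scholes call price (the nu = 0 special case)."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(s0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return s0 * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(s0, K, r, sigma, T, n=400000, seed=0):
    """Monte Carlo evaluation of (3.1): discounted expected payoff under
    S_T = s0 * exp(rT + X_T) with X_T = -sigma^2 T/2 + sigma W_T."""
    rng = np.random.default_rng(seed)
    X = -0.5 * sigma**2 * T + sigma * sqrt(T) * rng.standard_normal(n)
    ST = s0 * np.exp(r * T + X)
    return exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
```

The two values agree up to the Monte Carlo sampling error, which is consistent with $V(t,s)$ in (3.1) solving the (in this case purely differential) backward equation (3.2).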
The PIDE (3.2) can further be transformed into a simpler form:
Corollary 3.1.2. Let $X$ be a Lévy process with state space $\mathbb{R}^d$ and characteristic triplet $(\mathcal{Q},\nu,\gamma)$, with marginal Lévy measures $\nu_i$, $i=1,\dots,d$, satisfying (2.18) with $\beta^+_i > 1$, $\beta^-_i > 0$, $i=1,\dots,d$. Furthermore, let
$$u(\tau,x) = e^{r\tau}\,V\big(T-\tau,\ e^{x_1-(\gamma_1+r)\tau},\dots,e^{x_d-(\gamma_d+r)\tau}\big), \tag{3.4}$$
where
$$\gamma_i = -\frac{\mathcal{Q}_{ii}}{2} - \int_{\mathbb{R}} (e^{z_i}-1-z_i)\,\nu_i(dz_i).$$
Then, $u$ satisfies the PIDE
$$\partial_\tau u + \mathcal{A}^{\mathrm{BS}}[u] + \mathcal{A}^{\mathrm{J}}[u] = 0 \tag{3.5}$$
in $(0,T)\times\mathbb{R}^d$ with initial condition $u(0,x) := u_0$. The differential operator $\mathcal{A}^{\mathrm{BS}}$ is defined for $\varphi\in C^2_0(\mathbb{R}^d)$ by
$$\mathcal{A}^{\mathrm{BS}}[\varphi] = -\frac12\sum_{i,j=1}^d \mathcal{Q}_{ij}\,\partial_{x_ix_j}\varphi, \tag{3.6}$$
and the integrodifferential operator $\mathcal{A}^{\mathrm{J}}$ is given by
$$\mathcal{A}^{\mathrm{J}}[\varphi] = -\int_{\mathbb{R}^d}\big( \varphi(x+z) - \varphi(x) - z\cdot\nabla_x\varphi(x)\big)\,\nu(dz). \tag{3.7}$$
The initial condition is given by
$$u_0 = g(e^x) := g(e^{x_1},\dots,e^{x_d}). \tag{3.8}$$
Proof. To obtain constant coefficients we set $x_i = \log s_i$ in (3.2). Furthermore, we change to time to maturity $\tau = T - t$ and set $u(\tau,x) = V(T-\tau, e^{x_1},\dots,e^{x_d})$. The resulting differential operator is given by
$$\mathcal{A}^{\mathrm{BS}}[\varphi] = -\frac12\sum_{i,j=1}^d \mathcal{Q}_{ij}\,\partial_{ij}\varphi + \sum_{i=1}^d \Big(\frac{\mathcal{Q}_{ii}}{2} - r\Big)\partial_i\varphi + r\varphi, \quad \varphi\in C^2_0(\mathbb{R}^d),$$
and the integrodifferential operator by
$$\mathcal{A}^{\mathrm{J}}[\varphi] = -\int_{\mathbb{R}^d}\Big( \varphi(x+z) - \varphi(x) - \sum_{i=1}^d (e^{z_i}-1)\,\partial_i\varphi(x)\Big)\nu(dz), \quad \varphi\in C^2_0(\mathbb{R}^d).$$
The interest rate $r$ can be set to zero by the transformation
$$u(\tau,x) \mapsto e^{-r\tau}\,u(\tau, x + r\tau).$$
Furthermore, the integrodifferential operator can be rewritten as
$$\mathcal{A}^{\mathrm{J}}[\varphi] = -\int_{\mathbb{R}^d}\big( \varphi(x+z) - \varphi(x) - z\cdot\nabla_x\varphi(x)\big)\nu(dz) + \gamma^{\mathrm{J}}\cdot\nabla_x\varphi(x),$$
where the coefficients of the drift vector $\gamma^{\mathrm{J}}$ are given by
$$\gamma^{\mathrm{J}}_i = \int_{\mathbb{R}} (e^{z_i}-1-z_i)\,\nu_i(dz_i), \quad i=1,\dots,d.$$
We remove the drift in the integrodifferential and in the diffusion operator by the change of variables
$$u(\tau,x) \mapsto u(\tau, x_1+\gamma_1\tau,\dots,x_d+\gamma_d\tau),$$
with $\gamma_i$ as defined in the statement of the corollary.
We next derive the PIDE for knock-out barrier options (see, e.g., [16, Section 12.1.2] for the one-dimensional case). The prices of corresponding knock-in and other barrier contracts with the same barrier can then be obtained using superposition and linearity arguments (see, e.g., [7, Section 6]). Let $D\subset\mathbb{R}^d_{\ge 0}$ be an open subset and let $\tau_D = \inf\{t\ge 0\,|\,X_t\in D^c\}$ be the first hitting time of the complement $D^c = \mathbb{R}^d\setminus D$ by $X$. Then, the price of a knock-out barrier option with payoff $g$ is given by
$$V_D(t,s) = \mathbb{E}\big( e^{-r(T-t)}\,g(S_T)\,\mathbf{1}_{\{T<\tau_D\}}\ \big|\ S_t = s\big). \tag{3.9}$$
If VD is sufficiently smooth, it can again be computed as the solution of a PIDE.
Theorem 3.1.3. Assume $V_D(t,s)$ in (3.9) satisfies
$$V_D(t,s) \in C^{1,2}\big((0,T)\times\mathbb{R}^d_{>0}\big) \cap C^0\big([0,T]\times\mathbb{R}^d_{\ge 0}\big). \tag{3.10}$$
Then, $V_D(t,s)$ satisfies the following PIDE:
$$\partial_t V_D(t,s) + \frac12\sum_{i,j=1}^d s_is_j\mathcal{Q}_{ij}\,\partial_{s_is_j}V_D + r\sum_{i=1}^d s_i\,\partial_{s_i}V_D(t,s) - rV_D(t,s)$$
$$+ \int_{\mathbb{R}^d}\Big( V_D(t,se^z) - V_D(t,s) - \sum_{i=1}^d s_i\,(e^{z_i}-1)\,\partial_{s_i}V_D(t,s)\Big)\nu(dz) = 0 \tag{3.11}$$
on $(0,T)\times D$, where the terminal condition is given by
$$V_D(T,s) = g(s), \quad \forall s\in D, \tag{3.12}$$
and the "boundary" condition reads
$$V_D(t,s) = 0 \quad \text{for all } (t,s)\in(0,T)\times D^c. \tag{3.13}$$
Proof. Define the truncated payoff $\bar g(s) := g(s)\,\mathbf{1}_{\{s\in D\}}$ and consider the European vanilla-type price function
$$\bar V(t,s) = \mathbb{E}\big( e^{-r(T-t)}\,\bar g(S_{T\wedge\tau_D})\ \big|\ S_t = s\big).$$
Since $S$ is a strong Markov process, there holds $V_D(t,S_t) = \bar V(t,S_t)$ for all $t\le T\wedge\tau_D$. Thus, applying the Itô formula as in the proof of Theorem 3.1.1, one obtains that $V_D$ satisfies (3.11) on $(0,T)\times D$. By definition, $V_D(t,S_t) = 0$ for all $(t,S_t)\in(0,T)\times D^c$.
Remark 3.1.4. Note that, in contrast to plain European vanilla contracts, the price $V_D$ of a barrier contract does not satisfy the smoothness condition (3.10) for general Lévy models. The validity of (3.10) can, however, be shown in case the process $X$ admits a non-vanishing diffusion component, i.e., $\mathcal{Q} > 0$. Also for market models satisfying the ACP condition of [57, Definition 41.11], Theorem 3.1.3 can be shown to hold, see [7].
3.2 Variational formulation
For $u, v\in C^\infty_0(\mathbb{R}^d)$ we associate with $\mathcal{A}^{\mathrm{BS}}$ the bilinear form
$$\mathcal{E}^{\mathrm{BS}}(u,v) = \frac12\sum_{i,j=1}^d \mathcal{Q}_{ij}\int_{\mathbb{R}^d} \partial_i u(x)\,\partial_j v(x)\,dx. \tag{3.14}$$
To the jump part $\mathcal{A}^{\mathrm{J}}$ we associate the canonical jump bilinear form
$$\mathcal{E}^{\mathrm{C}}_{\mathrm{J}}(u,v) = -\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\big( u(x+z) - u(x) - z\cdot\nabla_x u(x)\big)\,v(x)\,dx\,\nu(dz), \tag{3.15}$$
and set $\mathcal{E}(u,v) = \mathcal{E}^{\mathrm{BS}}(u,v) + \mathcal{E}^{\mathrm{C}}_{\mathrm{J}}(u,v)$. We can now formulate the abstract problem (1.6) for European contracts with Dirichlet form $(\mathcal{E},\mathcal{D}(\mathcal{E}))$, $\mathcal{V} = \mathcal{D}(\mathcal{E})$ and $\mathcal{H} = L^2(\mathbb{R}^d)$:

Find $u\in L^2((0,T);\mathcal{D}(\mathcal{E}))\cap H^1((0,T);\mathcal{D}(\mathcal{E})^*)$ such that
$$\langle\partial_\tau u, v\rangle_{\mathcal{D}(\mathcal{E})^*,\mathcal{D}(\mathcal{E})} + \mathcal{E}(u,v) = 0, \quad \tau\in(0,T),\ \forall v\in\mathcal{D}(\mathcal{E}), \tag{3.16}$$
$$u(0) = u_0,$$
where $u_0$ is defined as in (3.8).
Remark 3.2.1. We require that $u_0\in\mathcal{H} = L^2(\mathbb{R}^d)$, which implies a growth condition on the payoff $g$. In Section 3.3 below we reformulate the problem on a bounded domain, where this condition can be weakened. The growth condition is explicitly given by (3.19).
The well-posedness of (3.16) is ensured by
Theorem 3.2.2. Let $X$ be a Lévy process with state space $\mathbb{R}^d$, characteristic triplet $(\mathcal{Q},\nu,\gamma)$ and Dirichlet form $\mathcal{E}(\cdot,\cdot)$. Assume either $\mathcal{Q} > 0$, or $\mathcal{Q} = 0$, $\gamma_i = 0$, $i=1,\dots,d$, and the Lévy measure $\nu$ satisfies (2.19), (2.20) with $\alpha = (\alpha_1,\dots,\alpha_d)$. Then the variational equation (3.16) with $u_0\in L^2(\mathbb{R}^d)$ admits a unique solution in $\mathcal{D}(\mathcal{E})$. For $\mathcal{Q} > 0$ there holds $\mathcal{D}(\mathcal{E}) = H^1(\mathbb{R}^d)$, and for $\mathcal{Q} = 0$ one obtains the anisotropic Sobolev space $\mathcal{D}(\mathcal{E}) = H^{\alpha/2}(\mathbb{R}^d)$ as defined in (1.1).
Proof. Since a Lévy process $X$ is stationary, its infinitesimal generator is translation invariant. As Theorem 2.4.4 shows, the characteristic exponent $\psi$ of $X$ satisfies the sector condition (2.27). Therefore, the bilinear form $\mathcal{E}(u,v)$ is a Dirichlet form and, by [34, Example 4.7.32], it can be written as
$$|\mathcal{E}(u,v)| = (2\pi)^d\,\Big|\int_{\mathbb{R}^d}\psi(\xi)\,\hat u(\xi)\,\overline{\hat v(\xi)}\,d\xi\Big|.$$
According to Theorem 1.3.2, for existence and uniqueness of a solution of (3.16) we need to show that $\mathcal{E}(\cdot,\cdot)$ satisfies the continuity condition (1.7) and the Gårding inequality (1.8).
At first, consider the case $\mathcal{Q}\equiv 0$. By Propositions 2.4.2 and 2.4.3, there exist constants $C_1, C_2, C_3 > 0$ such that
$$\Re\psi(\xi) \ge C_1\sum_{j=1}^d |\xi_j|^{\alpha_j} - C_2, \qquad |\psi(\xi)| \le C_3\Big(\sum_{j=1}^d |\xi_j|^{\alpha_j} + 1\Big) \quad \text{for all } \xi\in\mathbb{R}^d. \tag{3.17}$$
The continuity of $\mathcal{E}(\cdot,\cdot)$ follows from
$$|\mathcal{E}(u,v)| = \Big|\int_{\mathbb{R}^d}\psi(\xi)\,\hat u(\xi)\,\overline{\hat v(\xi)}\,d\xi\Big| \le C_3\int_{\mathbb{R}^d}\Big(1+\sum_{i=1}^d |\xi_i|^{\alpha_i}\Big)\,\big|\hat u(\xi)\hat v(\xi)\big|\,d\xi \lesssim \int_{\mathbb{R}^d}\sum_{i=1}^d \big(1+|\xi_i|^2\big)^{\alpha_i/2}\,\big|\hat u(\xi)\hat v(\xi)\big|\,d\xi$$
$$\le \Big(\int_{\mathbb{R}^d}\sum_{i=1}^d \big(1+|\xi_i|^2\big)^{\alpha_i/2}|\hat u(\xi)|^2\,d\xi\Big)^{1/2}\Big(\int_{\mathbb{R}^d}\sum_{i=1}^d \big(1+|\xi_i|^2\big)^{\alpha_i/2}|\hat v(\xi)|^2\,d\xi\Big)^{1/2} \lesssim \|u\|_{H^{\alpha/2}(\mathbb{R}^d)}\,\|v\|_{H^{\alpha/2}(\mathbb{R}^d)},$$
where we used the fact that there exists a $c > 0$ such that
$$0 < c \le \frac{\sum_{i=1}^d (1+|\xi_i|^2)^{\alpha_i/2}}{1+\sum_{i=1}^d |\xi_i|^{\alpha_i}} \le \frac1c < \infty, \quad \forall\xi\in\mathbb{R}^d.$$
Furthermore, to prove the Gårding inequality one computes
$$\mathcal{E}(u,u) = \int_{\mathbb{R}^d}\Re\psi(\xi)\,|\hat u(\xi)|^2\,d\xi = \int_{\mathbb{R}^d}\big(C_1+C_2+\Re\psi(\xi)\big)\,|\hat u(\xi)|^2\,d\xi - (C_1+C_2)\int_{\mathbb{R}^d}|\hat u(\xi)|^2\,d\xi,$$
and
$$\int_{\mathbb{R}^d}\big(C_1+C_2+\Re\psi(\xi)\big)\,|\hat u(\xi)|^2\,d\xi \ge C_1\int_{\mathbb{R}^d}\Big(1+\sum_{i=1}^d |\xi_i|^{\alpha_i}\Big)|\hat u(\xi)|^2\,d\xi \gtrsim \int_{\mathbb{R}^d}\sum_{i=1}^d \big(1+|\xi_i|^2\big)^{\alpha_i/2}|\hat u(\xi)|^2\,d\xi.$$
Theorem 1.3.2 thus yields existence and uniqueness of a solution $u\in H^{\alpha/2}(\mathbb{R}^d)$.
If $\mathcal{Q} > 0$, one obtains the required results using the same arguments: with (2.30) and (2.31), instead of (3.17) there holds
$$\Re\psi(\xi) \gtrsim \sum_{j=1}^d |\xi_j|^2, \qquad |\psi(\xi)| \lesssim \sum_{j=1}^d |\xi_j|^2, \quad \text{for all } \|\xi\|_\infty > 1,$$
and the result follows as above.
Remark 3.2.3. We omitted the partially degenerate case $\mathcal{Q}\ne 0$ but $\mathcal{Q}\not> 0$ in Theorem 3.2.2. Here, the domain $\mathcal{D}(\mathcal{E})$ can be obtained by writing
$$\mathcal{Q} = (\sigma_i\sigma_j\rho_{ij})_{1\le i,j\le d},$$
where $\rho_{ij}$ is the correlation of the Brownian motions $W^i$ and $W^j$. Suppose $\sigma_i = 0$ for all $i\in I\subset\{1,\dots,d\}$ and $\sigma_j > 0$ for all $j\notin I$. Using the intersection structure (1.2) one obtains
$$\mathcal{D}(\mathcal{E}) = \bigcap_{i\in I} H^{\alpha_i/2}_i(\mathbb{R}^d) \cap \bigcap_{j\notin I} H^1_j(\mathbb{R}^d).$$
Remark 3.2.4. Theorem 3.2.2 was also obtained in $d = 1$ in [45]. For $d > 1$, Theorem 3.2.2 was proved in [28] for symmetric tempered stable margins.
We convert the canonical form $\mathcal{E}^{\mathrm{C}}_{\mathrm{J}}(\cdot,\cdot)$ of (3.15) into the integrated jump form $\mathcal{E}^{\mathrm{J}}(\cdot,\cdot)$ using Lemma 2.2.7,
$$\mathcal{E}^{\mathrm{J}}(u,v) = -\sum_{i=1}^d \int_{\mathbb{R}}\int_{\mathbb{R}^d}\big( u(x+z_i) - u(x) - z_i\,\partial_i u(x)\big)\,v(x)\,k_i(z_i)\,dx\,dz_i - \sum_{i=2}^d \sum_{\substack{|I|=i\\ I_1<\cdots<I_i}} \int_{\mathbb{R}^i}\int_{\mathbb{R}^d} \partial_I u(x+z_I)\,v(x)\,U_I(z_I)\,dx\,dz_I. \tag{3.18}$$
The next result states that for the integrals in (3.18) to exist it is sufficient that $u\in H^1(\mathbb{R}^d) = H^1(\mathbb{R})\otimes\cdots\otimes H^1(\mathbb{R})$ and that $u$ has compact support. Note that tensor products of one-dimensional continuous, piecewise linear finite element basis functions satisfy these requirements.
Proposition 3.2.5. Let $u, v\in H^1(\mathbb{R}^d)$ and suppose $u, v$ have compact support. Then $|\mathcal{E}^{\mathrm{J}}(u,v)| < \infty$.

Proof. For $u, v\in H^1(\mathbb{R}^d)$ with compact support there holds
$$\Big|\int_{\mathbb{R}^d}\big( u(x+z_i) - u(x) - z_i\,\partial_i u(x)\big)\,v(x)\,dx\Big| \lesssim z_i^2\,\|u\|_{H^1(\mathbb{R}^d)}\,\|v\|_{H^1(\mathbb{R}^d)}, \quad i=1,\dots,d.$$
With
$$\Big|\int_{\mathbb{R}^d}\partial_I u(x+z_I)\,v(x)\,dx\Big| \le \|u\|_{H^1(\mathbb{R}^d)}\,\|v\|_{H^1(\mathbb{R}^d)} \quad \forall z\in\mathbb{R}^d,\ I\subset\{1,\dots,d\},$$
and
$$\Big|\int_{\mathbb{R}^{|I|}} U_I(z_I)\,dz_I\Big| < \infty \quad \forall I\subset\{1,\dots,d\},$$
we obtain the asserted result.
3.3 Localization
In this section we show how one may localize the unbounded log-price domain $\mathbb{R}^d$ in (3.5) to a bounded domain $D$ at the expense of a so-called localization error. To analyze the error introduced by this localization on the option price, we require the following polynomial growth condition on the payoff function: there exists some $q\ge 1$ such that
$$g(s) \lesssim \Big(\sum_{i=1}^d s_i + 1\Big)^q \quad \text{for all } s\in\mathbb{R}^d_{\ge 0}. \tag{3.19}$$
This condition is satisfied by all standard multi-asset options, e.g., basket, rainbow, spread and power options.
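As an illustration, the growth condition (3.19) for a concrete payoff can be checked numerically by sampling; the helper below is hypothetical (fixed to $d = 2$ and an assumed strike of 100) and simply tests $g(s) \le C\,(\sum_i s_i + 1)^q$ on random points of $\mathbb{R}^2_{\ge 0}$.

```python
import numpy as np

def growth_ok(payoff, q, C, trials=10000, smax=1e6, seed=0):
    """Sample check of the polynomial growth condition (3.19):
    payoff(s) <= C * (s_1 + s_2 + 1)**q at random points s >= 0."""
    rng = np.random.default_rng(seed)
    s = rng.uniform(0.0, smax, size=(trials, 2))
    g = payoff(s)
    bound = C * (s.sum(axis=1) + 1.0)**q
    return bool(np.all(g <= bound))
```

For a basket call $(\tfrac12 s_1 + \tfrac12 s_2 - K)^+$ the check passes with $q = 1$, and for a power payoff $((s_1 - K)^+)^2$ with $q = 2$, in line with the claim above.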
The unbounded domain $\mathbb{R}^d$ of the log-price $x = \log s$ is truncated to a bounded domain $D_R = [-R,R]^d$, $R > 0$. In terms of financial modeling, this corresponds to approximating the solution $V$ of the problem (3.2) by a barrier option $V_R$ which solves the problem (3.11). In log-price, the European barrier option is given by
$$u_R(t,x) = \mathbb{E}\big( g(e^{X_T})\,\mathbf{1}_{\{T<\tau_{D_R}\}}\ \big|\ X_t = x\big),$$
where, for notational convenience, we have set $r = 0$. We show that if the probability density $p_t$ of the Lévy process has semiheavy tails, the solution of the localized problem converges pointwise exponentially to the solution of the original problem.
Lemma 3.3.1. Let $X = \{X_t : t\ge 0\}$ be a Lévy process with state space $\mathbb{R}^d$ and Lévy measure $\nu$ such that the marginal measures $\nu_i$ satisfy (2.18). Then, the probability density $p_t(x)$, $t > 0$, of the process $X$ decays exponentially, uniformly in $t$:
$$\int_{\mathbb{R}^d} e^{\eta_i(x)}\,p_t(x)\,dx < \infty, \quad \text{with } \eta_i(x) = \big(\mu^+_i\,\mathbf{1}_{\{x_i>0\}} + \mu^-_i\,\mathbf{1}_{\{x_i<0\}}\big)\,|x_i|, \tag{3.20}$$
and $0 < \mu^-_i < \beta^-_i$, $0 < \mu^+_i < \beta^+_i$, $i=1,\dots,d$.

Proof. Using [57, Theorem 25.3], we know that (3.20) holds if and only if
$$\int_{|z|>1} e^{\eta_i(z)}\,\nu(dz) < \infty.$$
The result (3.20) then follows from Proposition 2.4.1.
Theorem 3.3.2. Suppose the payoff function $g : \mathbb{R}^d\to\mathbb{R}$ satisfies (3.19). Let $X$ be a Lévy process with state space $\mathbb{R}^d$ and Lévy measure $\nu$ such that the marginal measures $\nu_i$ satisfy (2.18) with $\beta^+_i > q$, $\beta^-_i > q$, $i=1,\dots,d$, with $q$ as in (3.19). Then,
$$|u(t,x) - u_R(t,x)| \lesssim e^{-\gamma_1 R + \gamma_2\|x\|_\infty},$$
with $0 < \gamma_1 < \min_i\min(\beta^+_i,\beta^-_i) - q$ and $\gamma_2 = \gamma_1 + q$.
Proof. Let $\eta_i(x)$ be as in (3.20) and $M_T = \sup_{\tau\in[t,T]}\|X_\tau\|_\infty$. Then, with (3.19),
$$|u(t,x) - u_R(t,x)| \le \mathbb{E}\big( g(e^{X_T})\,\mathbf{1}_{\{T\ge\tau_{D_R}\}}\ \big|\ X_t = x\big) \lesssim \mathbb{E}\big( e^{qM_T}\,\mathbf{1}_{\{M_T>R\}}\ \big|\ X_t = x\big).$$
Using [57, Theorem 25.18] it suffices to show that
$$\mathbb{E}\big( e^{q\|X_T\|_\infty}\,\mathbf{1}_{\{\|X_T\|_\infty>R\}}\ \big|\ X_t = x\big) = \int_{\mathbb{R}^d} e^{q\|z+x\|_\infty}\,\mathbf{1}_{\{\|z+x\|_\infty>R\}}\,p_{T-t}(z)\,dz$$
$$\lesssim e^{q\|x\|_\infty}\sum_{i=1}^d \int_{\mathbb{R}^d} e^{q|z_i|}\,e^{-\eta_i(z)}\,\mathbf{1}_{\{\|z+x\|_\infty>R\}}\,e^{\eta_i(z)}\,p_{T-t}(z)\,dz$$
$$\lesssim e^{q\|x\|_\infty}\sum_{i=1}^d \int_{\mathbb{R}^d} e^{-(\min_j\min(\mu^+_j,\mu^-_j)-q)(R-\|x\|_\infty)}\,e^{\eta_i(z)}\,p_{T-t}(z)\,dz$$
$$\lesssim e^{-\gamma_1 R + \gamma_2\|x\|_\infty}\sum_{i=1}^d \int_{\mathbb{R}^d} e^{\eta_i(z)}\,p_{T-t}(z)\,dz.$$
The result follows with (3.20).
Remark 3.3.3. In d = 1 a similar proof is given in [17].
For any function $u$ with support in $D_R$ we denote by $\tilde u$ its extension by zero to all of $\mathbb{R}^d$ and define
$$\mathcal{E}_R(u,v) = \mathcal{E}(\tilde u,\tilde v).$$
Thus, we obtain continuity and a Gårding inequality for $\mathcal{E}_R(u,v)$ on the domain $\mathcal{D}(\mathcal{E}_R) = \widetilde H^{\alpha/2}(D_R)\subset\mathcal{D}(\mathcal{E})$ as defined in (1.4). Now we can restate the problem (3.16) on the bounded domain:
Find $u_R\in L^2((0,T);\mathcal{D}(\mathcal{E}_R))\cap H^1((0,T);\mathcal{D}(\mathcal{E}_R)^*)$ such that
$$\langle\partial_\tau u_R, v\rangle_{\mathcal{D}(\mathcal{E}_R)^*,\mathcal{D}(\mathcal{E}_R)} + \mathcal{E}_R(u_R,v) = 0, \quad \tau\in(0,T),\ \forall v\in\mathcal{D}(\mathcal{E}_R), \tag{3.21}$$
$$u_R(0) = u_0|_{D_R}.$$
By Theorem 3.2.2, the problem (3.21) is well-posed, i.e., there exists a unique solution $u_R\in L^2(0,T;\mathcal{D}(\mathcal{E}_R))\cap C^0([0,T];L^2(D_R))$, which can now be approximated by a finite element Galerkin scheme.
4 Wavelet basis
Straightforward application of standard finite element schemes to calculate the stiffness matrix $\mathbf{A} = (\mathcal{E}(\phi_{h,k},\phi_{h,k'}))_{k,k'\in\Delta_h}$ in (1.11), as explained in Section 1.3, is inefficient for two reasons. For high-dimensional models we face the "curse of dimension": the number of degrees of freedom on a tensor product finite element mesh of uniform mesh width $h$ in dimension $d$ grows like $O(h^{-d})$ as $h\to 0$. For jump models, the non-locality of the underlying operator implies that the standard finite element stiffness matrix $\mathbf{A}$ has $O(h^{-2d})$ non-zero entries as $h\to 0$, which is impractical even for a single asset with small mesh width.

As we show here, spline wavelets overcome both problems while remaining easy to compute. In particular, choosing wavelet bases has three main advantages. First, we can break the curse of dimension using sparse tensor products to obtain essentially dimension-independent complexity. Second, wavelets allow a multiscale compression of the jump measure of $X$; the complexity of jump models can then asymptotically be reduced to Black-Scholes complexity. Finally, we show that wavelets provide norm equivalences in fractional order spaces, which lead to efficient preconditioning even for pure jump operators.
4.1 Wavelets
We start by explaining wavelets in one dimension, following the construction described in [19]. The $d$-variate bases are obtained by a tensor product construction.
4.1.1 Spline wavelets on the interval
The one-dimensional interval $D = [0,1]$ is partitioned into an equidistant mesh $\mathcal{T}_\ell$ with mesh width $h_\ell = 2^{-\ell}$, $\ell\in\mathbb{N}$. We define $V_\ell$ as the space of piecewise polynomials of degree $p-1\in\mathbb{N}$ on the mesh $\mathcal{T}_\ell$ which vanish at the endpoints, and set $N_\ell = \dim V_\ell = O(2^\ell)$. The spaces $V_\ell$ are nested, $V_\ell\subset V_{\ell+1}$, and generated by single-scale bases $\Phi_\ell := \{\phi_{\ell,k} : k\in\Delta_\ell\}$ with suitable index sets $\Delta_\ell$. Here, we change notation and write $\phi_{\ell,k}$ instead of $\phi_{h_\ell,k}$ for simplicity. We assume that the basis functions $\phi_{\ell,k}\in\Phi_\ell$, $\ell\in\mathbb{N}$, have compact support of size $|\operatorname{supp}\phi_{\ell,k}|\lesssim 2^{-\ell}$ and are normalized in $L^2$, $\|\phi_{\ell,k}\|_{L^2([0,1])} = 1$. The approximation order of $\Phi_\ell$ is $p$.
In addition, we associate with $\Phi_\ell$ a dual basis $\widetilde\Phi_\ell := \{\tilde\phi_{\ell,k} : k\in\Delta_\ell\}$, i.e., one has $\langle\phi_{\ell,k},\tilde\phi_{\ell,k'}\rangle = \delta_{k,k'}$, $k,k'\in\Delta_\ell$. The approximation order of $\widetilde\Phi_\ell$ is denoted by $\tilde p$, and we assume $p\le\tilde p$.
Given the single-scale basis $\Phi_\ell$, we can construct a biorthogonal complement or wavelet basis $\Psi_\ell = \{\psi_{\ell,k} : k\in\nabla_\ell\}$, $\widetilde\Psi_\ell = \{\tilde\psi_{\ell,k} : k\in\nabla_\ell\}$ with $\nabla_\ell = \Delta_{\ell+1}\setminus\Delta_\ell$ such that
$$V_{\ell+1} = V_\ell\oplus W_\ell, \quad \widetilde V_{\ell+1} = \widetilde V_\ell\oplus\widetilde W_\ell, \quad \ell\in\mathbb{N},$$
and
$$V_\ell = W_0\oplus\cdots\oplus W_{\ell-1}, \quad \ell\in\mathbb{N}, \tag{4.1}$$
where the increment spaces $W_\ell$, $\widetilde W_\ell$ are the span of $\Psi_\ell$, $\widetilde\Psi_\ell$ for $\ell > 0$, and $W_0 := V_1$, $\widetilde W_0 := \widetilde V_1$. We suppose the wavelets $\psi_{\ell,k}$ have compact support $|\operatorname{supp}\psi_{\ell,k}|\lesssim 2^{-\ell}$ and are normalized in $L^2([0,1])$.
Any function $u\in V_{L+1}$ has the representation
$$u = \sum_{\ell=0}^L\sum_{k\in\nabla_\ell} u_{\ell,k}\,\psi_{\ell,k} = \sum_{\ell=0}^L\sum_{k\in\nabla_\ell} \langle u,\tilde\psi_{\ell,k}\rangle\,\psi_{\ell,k}.$$
For $u\in\widetilde H^s([0,1])$, $0\le s\le p$, one obtains an infinite series
$$u = \sum_{\ell=0}^\infty\sum_{k\in\nabla_\ell} u_{\ell,k}\,\psi_{\ell,k}, \tag{4.2}$$
which converges in $\widetilde H^s([0,1])$. There holds the norm equivalence
$$\|u\|^2_{\widetilde H^s([0,1])} \lesssim \sum_{\ell=0}^\infty\sum_{k\in\nabla_\ell} 2^{2s\ell}\,|u_{\ell,k}|^2 \lesssim \|u\|^2_{\widetilde H^s([0,1])}, \quad 0\le s < p - 1/2. \tag{4.3}$$
Example 4.1.1. We give an example of a wavelet basis for $\widetilde H^s([0,1])$, $0\le s < 3/2$, using piecewise linear continuous functions, $p = 2$, on $[0,1]$ vanishing at the endpoints. The mesh $\mathcal{T}_\ell$ is defined by the nodes $x_{\ell,k} := k\,2^{-\ell-1}$, $k = 0,\dots,2^{\ell+1}$. Let $N_\ell = 2^{\ell+1}-1$ and $c_\ell := \sqrt3\cdot 2^{\ell/2-1}$, $\ell\in\mathbb{N}_0$. We define the wavelets $\psi_{\ell,k}$ for level $\ell\in\mathbb{N}_0$, $k = 1,\dots,2^\ell$. For $\ell = 0$ we have $N_0 = 1$ and $\psi_{0,1}$ is the hat function with value $2c_0$ at $x_{0,1} = 1/2$. For $\ell\ge 1$, the wavelet $\psi_{\ell,1}$ has the values $\psi_{\ell,1}(x_{\ell,1}) = 2c_\ell$, $\psi_{\ell,1}(x_{\ell,2}) = -c_\ell$ and zero at all other nodes. The wavelet $\psi_{\ell,2^\ell}$ has the values $\psi_{\ell,2^\ell}(x_{\ell,N_\ell}) = 2c_\ell$, $\psi_{\ell,2^\ell}(x_{\ell,N_\ell-1}) = -c_\ell$ and zero at all other nodes. The wavelet $\psi_{\ell,k}$ with $1 < k < 2^\ell$ has the values $\psi_{\ell,k}(x_{\ell,2k-2}) = -c_\ell$, $\psi_{\ell,k}(x_{\ell,2k-1}) = 2c_\ell$, $\psi_{\ell,k}(x_{\ell,2k}) = -c_\ell$ and zero at all other nodes. For $\ell = 0,\dots,3$ these wavelets are plotted in Figure 4.1 for the space $V_4$ and its decomposition.
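The construction of Example 4.1.1 is straightforward to implement. The sketch below (not the thesis code) returns the nodal values of the wavelets $\psi_{\ell,k}$ on the level-$\ell$ mesh; since the wavelets are piecewise linear, their $L^2$-normalization can be confirmed exactly via the element-wise identity $\int_a^b (\text{linear})^2 = \frac h3(v_a^2 + v_a v_b + v_b^2)$.

```python
import numpy as np

def hat_wavelets(level):
    """Nodal values (on the mesh nodes x_{l,j} = j*2^-(l+1)) of the
    piecewise linear wavelets psi_{l,k}, k = 1..2^l, of Example 4.1.1."""
    n = 2**(level + 1) + 1               # number of nodes of T_level
    c = np.sqrt(3) * 2**(level / 2 - 1)  # normalization constant c_l
    waves = []
    for k in range(1, 2**level + 1):
        v = np.zeros(n)
        if level == 0:
            v[1] = 2 * c                 # single interior node x = 1/2
        elif k == 1:
            v[1], v[2] = 2 * c, -c       # left boundary wavelet
        elif k == 2**level:
            v[n - 2], v[n - 3] = 2 * c, -c   # right boundary wavelet
        else:
            v[2*k - 2], v[2*k - 1], v[2*k] = -c, 2 * c, -c  # interior
        waves.append(v)
    return waves
```

Each returned wavelet vanishes at the endpoints and has $L^2([0,1])$-norm exactly 1, matching the normalization assumed above.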
Figure 4.1: Single-scale space $V_L$ and its decomposition into multiscale wavelet spaces $W_\ell$ for $L = 4$ (shown: $\phi_{L,3}$ and the wavelets $\psi_{0,1}$, $\psi_{1,1}$, $\psi_{2,2}$, $\psi_{3,6}$)
4.1.2 Sparse tensor product space
In $D = [0,1]^d$, $d > 1$, we define as in Example 1.3.5 the subspace $\mathcal{V}_{L+1}$ as the full tensor product of the one-dimensional spaces, $\mathcal{V}_{L+1} := \bigotimes_{1\le i\le d} V_{L+1}$, which can be written as
$$\mathcal{V}_{L+1} = \operatorname{span}\{\psi_{\ell,k} : 0\le\ell_i\le L,\ k_i\in\nabla_{\ell_i},\ i=1,\dots,d\},$$
with basis functions $\psi_{\ell,k} = \psi_{\ell_1,k_1}\cdots\psi_{\ell_d,k_d}$, $0\le\ell_i\le L$, $k_i\in\nabla_{\ell_i}$, $i=1,\dots,d$. Using (4.1) we can write $\mathcal{V}_{L+1}$ again in terms of increment spaces
$$\mathcal{V}_{L+1} = \bigoplus_{0\le\ell_i\le L} W_{\ell_1}\otimes\cdots\otimes W_{\ell_d}.$$
Therefore, together with (4.2), any function $u\in L^2([0,1]^d)$ has the series representation
$$u = \sum_{\ell_1,\dots,\ell_d=0}^\infty\ \sum_{k_i\in\nabla_{\ell_i}} u_{\ell,k}\,\psi_{\ell,k}.$$
Using the norm equivalences (4.3) and the intersection structure (1.2) we obtain
$$\|u\|^2_{\widetilde H^s([0,1]^d)} \lesssim \sum_{\ell_1,\dots,\ell_d=0}^\infty\ \sum_{k_i\in\nabla_{\ell_i}} \big(2^{2s_1\ell_1} + \cdots + 2^{2s_d\ell_d}\big)\,|u_{\ell,k}|^2 \lesssim \|u\|^2_{\widetilde H^s([0,1]^d)}, \tag{4.4}$$
for $0\le s_i\le p - 1/2$, $i=1,\dots,d$.
Remark 4.1.2. To obtain a multilevel preconditioner we only need these norm equivalences for $\widetilde H^{\alpha/2}([0,1]^d)$, i.e., $0\le s_i = \alpha_i/2\le 1$, $i=1,\dots,d$. Therefore, $p = 2$ is sufficient.
The space $\mathcal{V}_L$ has $O(2^{Ld})$ degrees of freedom, which grow exponentially with increasing dimension $d$. To avoid this "curse of dimension" we introduce the sparse tensor product space
$$\widehat{\mathcal{V}}_{L+1} := \operatorname{span}\{\psi_{\ell,k} : 0\le\ell_1+\cdots+\ell_d\le L,\ k_i\in\nabla_{\ell_i},\ i=1,\dots,d\} = \bigoplus_{0\le\ell_1+\cdots+\ell_d\le L} W_{\ell_1}\otimes\cdots\otimes W_{\ell_d}.$$
The difference between the tensor product space $\mathcal{V}_L$ and the sparse tensor product space $\widehat{\mathcal{V}}_L$ is shown in Figure 4.2 for level $L = 3$, using wavelets as described in Example 4.1.1.
Figure 4.2: Tensor product (left) and sparse tensor product (right) for d = 2
As $L\to\infty$ we have $N = \dim(\mathcal{V}_{L+1}) = O(2^{dL})$ and $\widehat N = \dim(\widehat{\mathcal{V}}_{L+1}) = O(L^{d-1}\,2^L)$, i.e., the spaces $\widehat{\mathcal{V}}_L$ have considerably smaller dimension than $\mathcal{V}_L$. On the other hand, they have similar approximation properties to $\mathcal{V}_L$, provided the function to be approximated is sufficiently smooth. As shown in [68], for $u\in\mathcal{H}^s(D)$ with $0\le r < p-1/2$, $r\le s\le p$, there holds
$$\inf_{u_L\in\widehat{\mathcal{V}}_L}\|u - u_L\|_{\widetilde H^r} \lesssim \begin{cases} h^{s-r}\,|\log h|^{\frac{d-1}{2}} & \text{if } r = 0,\ s = p,\\ h^{s-r} & \text{else.}\end{cases} \tag{4.5}$$
Note that we can also state the approximation rate in terms of the level index $L$, i.e., $h^{s-r} = 2^{-L(s-r)}$ and $|\log h|\lesssim L$.
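The dimension counts $N = O(2^{dL})$ versus $\widehat N = O(L^{d-1}2^L)$ are easy to reproduce by enumerating the level multi-indices; the sketch below assumes $2^\ell$ wavelets per one-dimensional level, as in Example 4.1.1.

```python
from itertools import product

def dims(d, L, n_level=lambda l: 2**l):
    """Degrees of freedom of the full tensor product space (all levels
    0 <= l_i <= L) and of the sparse one (l_1 + ... + l_d <= L),
    assuming n_level(l) wavelets per one-dimensional level l."""
    full = sparse = 0
    for levels in product(range(L + 1), repeat=d):
        n = 1
        for l in levels:
            n *= n_level(l)          # tensor product of increment spaces
        full += n
        if sum(levels) <= L:         # sparse tensor product restriction
            sparse += n
    return full, sparse
```

For $d = 1$ the two counts coincide, while already for $d = 2$, $L = 3$ the sparse space has 49 instead of 225 degrees of freedom.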
4.2 Wavelet discretization
We cast the variational formulation on the bounded domain (3.21) into the matrix form (1.13) for the finite-dimensional subspace $\widehat{\mathcal{V}}_{L+1}$.
Here, we use the integrated jump form (3.18) and integrate the first sum by parts:
$$-\int_{\mathbb{R}}\int_{\mathbb{R}^d}\big( u(x+z_i) - u(x) - z_i\,\partial_i u(x)\big)\,v(x)\,k_i(z_i)\,dx\,dz_i = -\int_{\mathbb{R}}\int_{\mathbb{R}^d}\partial^2_i u(x+z_i)\,v(x)\,k^{-2}_i(z_i)\,dx\,dz_i = \int_{\mathbb{R}}\int_{\mathbb{R}^d}\partial_i u(x+z_i)\,\partial_i v(x)\,k^{-2}_i(z_i)\,dx\,dz_i,$$
for $i = 1,\dots,d$, where $k^{-2}(x) = \operatorname{sgn}(x)\int_{I(x)} U(z)\,dz$ is the second antiderivative of $k$ vanishing at $\pm\infty$. Therefore, the jump part of the bilinear form can be written as
$$\mathcal{E}^{\mathrm{J}}(u,v) = \sum_{i=1}^d \int_{\mathbb{R}}\int_{\mathbb{R}^d}\partial_i u(x+z_i)\,\partial_i v(x)\,k^{-2}_i(z_i)\,dx\,dz_i - \sum_{i=2}^d\sum_{\substack{|I|=i\\ I_1<\cdots<I_i}}\int_{\mathbb{R}^i}\int_{\mathbb{R}^d}\partial_I u(x+z_I)\,v(x)\,U_I(z_I)\,dx\,dz_I.$$
Using the basis $\psi_{\ell,k} = \psi_{\ell_1,k_1}\cdots\psi_{\ell_d,k_d}$, $0\le\ell_1+\cdots+\ell_d\le L$, $k_i\in\nabla_{\ell_i}$, of $\widehat{\mathcal{V}}_{L+1}$, we need to compute the stiffness matrix for the diffusion part,
$$\mathbf{A}^{\mathrm{BS}}_{(\ell',k'),(\ell,k)} = \mathcal{E}^{\mathrm{BS}}(\psi_{\ell,k},\psi_{\ell',k'}) = \sum_{i,j=1}^d \frac{\mathcal{Q}_{ij}}{2}\int_{D_R}\partial_i\psi_{\ell,k}\,\partial_j\psi_{\ell',k'}\,dx,$$
and for the jump part,
$$\mathbf{A}^{\mathrm{J}}_{(\ell',k'),(\ell,k)} = \sum_{i=1}^d \int_{\mathbb{R}}\int_{D_R}\partial_i\psi_{\ell,k}(x+z_i)\,\partial_i\psi_{\ell',k'}(x)\,k^{-2}_i(z_i)\,dx\,dz_i - \sum_{i=2}^d\sum_{\substack{|I|=i\\ I_1<\cdots<I_i}}\int_{\mathbb{R}^i}\int_{D_R}\partial_I\psi_{\ell,k}(x+z_I)\,\psi_{\ell',k'}(x)\,U_I(z_I)\,dx\,dz_I.$$
We define the one-dimensional mass matrix $\mathbf{M}^i$, stiffness matrix $\mathbf{S}^i$ and cross matrix $\mathbf{C}^i$ by
$$\mathbf{M}^i_{(\ell',k'),(\ell,k)} := \int_{-R}^R \psi_{\ell,k}\,\psi_{\ell',k'}\,dx, \qquad \mathbf{S}^i_{(\ell',k'),(\ell,k)} := \int_{-R}^R \psi'_{\ell,k}\,\psi'_{\ell',k'}\,dx, \qquad \mathbf{C}^i_{(\ell',k'),(\ell,k)} := \int_{-R}^R \psi'_{\ell,k}\,\psi_{\ell',k'}\,dx, \tag{4.6}$$
for $0\le\ell\le L$, $k\in\nabla_\ell$. Then, we can write an entry of the diffusion stiffness matrix as
$$\mathbf{A}^{\mathrm{BS}}_{(\ell',k'),(\ell,k)} = \sum_{i=1}^d \frac{\mathcal{Q}_{ii}}{2}\,\mathbf{S}^i_{(\ell'_i,k'_i),(\ell_i,k_i)}\prod_{j\ne i}\mathbf{M}^j_{(\ell'_j,k'_j),(\ell_j,k_j)} - \sum_{i<j}\mathcal{Q}_{ij}\,\mathbf{C}^i_{(\ell'_i,k'_i),(\ell_i,k_i)}\,\mathbf{C}^j_{(\ell'_j,k'_j),(\ell_j,k_j)}\prod_{r\ne i,j}\mathbf{M}^r_{(\ell'_r,k'_r),(\ell_r,k_r)}.$$
Let $\mathbf{S}^i_{\ell',\ell}$ denote the block matrix with entries $(\mathbf{S}^i_{(\ell',k'),(\ell,k)})_{k'\in\nabla_{\ell'},\,k\in\nabla_\ell}$. We use the same notation when we refer to the matrix of the same size as $\mathbf{S}^i$ with zero entries except for the block $\mathbf{S}^i_{\ell',\ell}$. With this convention, $\mathbf{S}^i$ can be written as
$$\mathbf{S}^i = \sum_{0\le\ell',\ell\le L}\mathbf{S}^i_{\ell',\ell}.$$
The full and the sparse tensor product of two matrices with multilevel structure are defined as
$$\mathbf{S}^i\otimes\mathbf{M}^j = \sum_{\substack{0\le\ell'_i,\ell_i\le L\\ 0\le\ell'_j,\ell_j\le L}}\mathbf{S}^i_{\ell'_i,\ell_i}\otimes\mathbf{M}^j_{\ell'_j,\ell_j}, \qquad \mathbf{S}^i\,\widehat\otimes\,\mathbf{M}^j = \sum_{\substack{0\le\ell_i+\ell_j\le L\\ 0\le\ell'_i+\ell'_j\le L}}\mathbf{S}^i_{\ell'_i,\ell_i}\otimes\mathbf{M}^j_{\ell'_j,\ell_j},$$
respectively. Therefore, the stiffness matrix $\widehat{\mathbf{A}}^{\mathrm{BS}}$ can be computed as a $d$-fold iterated sparse tensor product of one-dimensional matrices:
$$\widehat{\mathbf{A}}^{\mathrm{BS}} = \sum_{i=1}^d \frac{\mathcal{Q}_{ii}}{2}\bigotimes_{1\le j\le i-1}\mathbf{M}^j\,\widehat\otimes\,\mathbf{S}^i\,\widehat\otimes\bigotimes_{i+1\le j\le d}\mathbf{M}^j - \sum_{i<j}\mathcal{Q}_{ij}\bigotimes_{1\le r\le i-1}\mathbf{M}^r\,\widehat\otimes\,\mathbf{C}^i\,\widehat\otimes\bigotimes_{i+1\le r\le j-1}\mathbf{M}^r\,\widehat\otimes\,\mathbf{C}^j\,\widehat\otimes\bigotimes_{j+1\le r\le d}\mathbf{M}^r.$$
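The Kronecker structure of $\mathbf{A}^{\mathrm{BS}}$ can be illustrated in $d = 2$ using the full (rather than sparse) tensor product and a standard single-scale hat basis in place of the wavelet basis — the block structure of the assembly is the same. The matrices below are the usual P1 finite element mass, stiffness and cross matrices on a uniform mesh; this is a sketch, not the thesis implementation.

```python
import numpy as np

def fe1d(n, h):
    """1D P1 mass (M), stiffness (S) and cross (C) matrices on a uniform
    mesh with n interior nodes, mesh width h, homogeneous Dirichlet BC."""
    M = h / 6 * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    S = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    C = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))   # int psi_l' psi_k dx
    return M, S, C

def diffusion_stiffness_2d(Q, n, h):
    """Assemble the d = 2 diffusion stiffness matrix from 1D factors,
    mirroring the Kronecker structure of A^BS in the text:
    A = Q11/2 S (x) M + Q22/2 M (x) S - Q12 C (x) C."""
    M, S, C = fe1d(n, h)
    A = 0.5 * Q[0, 0] * np.kron(S, M) + 0.5 * Q[1, 1] * np.kron(M, S)
    A -= Q[0, 1] * np.kron(C, C)
    return A
```

As a check, for $\mathcal{Q} = I$ the discrete energy $u^\top\mathbf{A}u$ of the nodal interpolant of $\sin(\pi x)\sin(\pi y)$ approximates $\frac12\int_{[0,1]^2}|\nabla u|^2\,dx = \pi^2/4$.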
Additionally, for the jump part we define
$$\mathbf{A}^i_{(\ell',k'),(\ell,k)} := -\int_{\mathbb{R}}\int_{-R}^R \psi'_{\ell,k}(x+z)\,\psi'_{\ell',k'}(x)\,k^{-2}_i(z)\,dx\,dz,$$
$$\mathbf{A}^I_{(\ell'_I,k'_I),(\ell_I,k_I)} := \int_{\mathbb{R}^{|I|}}\int_{[-R,R]^{|I|}}\partial_I\psi_{\ell_I,k_I}(x+z)\,\psi_{\ell'_I,k'_I}(x)\,U_I(z)\,dx\,dz, \tag{4.7}$$
where $\ell_I = (\ell_i)_{i\in I}$, $0\le\ell_i\le L$, $k_I = (k_i)_{i\in I}$, $k_i\in\nabla_{\ell_i}$, $I\subset\{1,\dots,d\}$, $|I| > 1$, and write the jump stiffness matrix as
$$\mathbf{A}^{\mathrm{J}}_{(\ell',k'),(\ell,k)} = -\sum_{i=1}^d\sum_{\substack{|I|=i\\ I_1<\cdots<I_i}}\mathbf{A}^I_{(\ell'_I,k'_I),(\ell_I,k_I)}\prod_{j\in I^c}\mathbf{M}^j_{(\ell'_j,k'_j),(\ell_j,k_j)}.$$
As in the diffusion case, we can then compute the jump stiffness matrix $\widehat{\mathbf{A}}^{\mathrm{J}}$ as a sparse tensor product using the matrices $\mathbf{A}^I$ and $\mathbf{M}^j$. Applying the $\theta$-scheme in time, we can write the problem (3.21) in fully discrete matrix form, similar to (1.13):

Find $u^{m+1}_L\in\mathbb{R}^{\widehat N}$ such that for $m = 0,\dots,M-1$
$$\Delta t^{-1}\,\mathbf{M}\,(u^{m+1}_L - u^m_L) + \theta\,\mathbf{A}\,u^{m+1}_L + (1-\theta)\,\mathbf{A}\,u^m_L = 0, \qquad u^0_L = u_{L,0}, \tag{4.8}$$
with matrix $\mathbf{A} = \widehat{\mathbf{A}}^{\mathrm{BS}} + \widehat{\mathbf{A}}^{\mathrm{J}}$, solution $u^m_L = \sum_{0\le|\ell|\le L}\sum_{k_i\in\nabla_{\ell_i}} u^m_{\ell,k}\,\psi_{\ell,k}$ and number of degrees of freedom $\widehat N = \dim(\widehat{\mathcal{V}}_{L+1}) = O(2^L L^{d-1})$. Since the Black-Scholes operator is a local operator, there are only $O(2^L L^{d-1})$ non-zero entries in $\widehat{\mathbf{A}}^{\mathrm{BS}}$. For the jump part, however, the stiffness matrix is in general densely populated. Using wavelet compression we can reduce the number of non-zero entries in $\widehat{\mathbf{A}}^{\mathrm{J}}$ to $O(2^L L^{2(d-1)})$.
4.3 Wavelet compression of the Levy measure
Wavelet compression for isotropic domains has been studied extensively by various authors, e.g., [20, 19, 31, 69]. It is shown there that compression yields asymptotically optimal complexity (on not necessarily tensor product domains) in the sense that the number of non-zero entries in the resulting matrices grows linearly with the number of degrees of freedom. These results are extended to anisotropic spaces on sparse tensor product spaces in [53].
To define the compression scheme we need some notation. Consider tensor product wavelets $\psi_{\ell,k} = \psi_{\ell_1,k_1}\otimes\cdots\otimes\psi_{\ell_d,k_d}$, $\psi_{\ell',k'} = \psi_{\ell'_1,k'_1}\otimes\cdots\otimes\psi_{\ell'_d,k'_d}$. The distance of the supports in the $i$-th coordinate direction is denoted by
$$\delta_{x_i} := \operatorname{dist}\big\{\operatorname{supp}\psi_{\ell_i,k_i},\ \operatorname{supp}\psi_{\ell'_i,k'_i}\big\},$$
for $i=1,\dots,d$, and the distance of the singular supports by
$$\delta^{\mathrm{sing}}_{x_i} := \begin{cases}\operatorname{dist}\big\{\operatorname{singsupp}\psi_{\ell_i,k_i},\ \operatorname{supp}\psi_{\ell'_i,k'_i}\big\} & \text{if } \ell_i\le\ell'_i,\\ \operatorname{dist}\big\{\operatorname{supp}\psi_{\ell_i,k_i},\ \operatorname{singsupp}\psi_{\ell'_i,k'_i}\big\} & \text{else.}\end{cases}$$
Let $0 < \alpha < p - \tfrac12$ and define
$$\widetilde L_{\ell,\ell'} := \begin{cases} L(p-\alpha/2) - p\,|\ell| & \text{if } p(L-|\ell|)\ge\tfrac\alpha2(L-|\ell|_\infty),\\ -\tfrac\alpha2\,|\ell|_\infty & \text{else,}\end{cases} \;+\; \begin{cases} L(p-\alpha/2) - p\,|\ell'| & \text{if } p(L-|\ell'|)\ge\tfrac\alpha2(L-|\ell'|_\infty),\\ -\tfrac\alpha2\,|\ell'|_\infty & \text{else,}\end{cases} \tag{4.9}$$
and $m_i := \ell_i + \ell'_i - 2\min\{\ell_i,\ell'_i\}$. Furthermore, we define the index sets $I^c_{\ell,\ell'}, I_{\ell,\ell'}\subset\{1,\dots,d\}$ by
$$I^c_{\ell,\ell'} = \big\{ i\in\{1,\dots,d\} : \delta_{x_i} > 2^{-\min\{\ell_i,\ell'_i\}}\big\}, \qquad I_{\ell,\ell'} = \{1,\dots,d\}\setminus I^c_{\ell,\ell'}, \tag{4.10}$$
and set
$$\beta^i_{\ell,\ell'} = \widetilde L_{\ell,\ell'} - p(\ell_i+\ell'_i) + \alpha\sum_{j\ne i}\min\{\ell_j,\ell'_j\} + \frac12\sum_{j\in I_{\ell,\ell'}} m_j - p\sum_{j\in I^c_{\ell,\ell'}\setminus\{i\}} m_j,$$
$$\widetilde\beta^i_{\ell,\ell'} = \widetilde L_{\ell,\ell'} - p\max\{\ell_i,\ell'_i\} + \alpha\sum_{j\ne i}\min\{\ell_j,\ell'_j\} + \frac12\sum_{j\in I_{\ell,\ell'}\setminus\{i\}} m_j - p\sum_{j\in I^c_{\ell,\ell'}} m_j. \tag{4.11}$$
47
4 Wavelet basis
The cut-off parameters are now defined by
$$B^i_{\ell,\ell'} = a\,\max\big\{ 2^{-\min\{\ell_i,\ell'_i\}},\ 2^{\beta^i_{\ell,\ell'}/(2\tilde p+\alpha)}\big\}, \quad a > 0,$$
$$\widetilde B^i_{\ell,\ell'} = a'\,\max\big\{ 2^{-\max\{\ell_i,\ell'_i\}},\ 2^{\widetilde\beta^i_{\ell,\ell'}/(\tilde p+\alpha)}\big\}, \quad a' > 0.$$
The compression scheme is based on the fact that the matrix entries $\mathbf{A}_{(\ell',k'),(\ell,k)} = \mathcal{E}(\psi_{\ell,k},\psi_{\ell',k'})$ can be estimated a priori and therefore neglected if they are smaller than some cut-off parameter. There are two reasons for an entry to be omitted: either the distance of the supports $\operatorname{supp}\psi_{\ell_i,k_i}$ and $\operatorname{supp}\psi_{\ell'_i,k'_i}$, or the distance of the singular supports, is large enough for some $i\in\{1,\dots,d\}$.
Theorem 4.3.1. Let $X$ be a Lévy process with state space $\mathbb{R}^d$, characteristic triplet $(\mathcal{Q},\nu,\gamma)$ and Dirichlet form $\mathcal{E}(\cdot,\cdot)$. Assume $\mathcal{Q} > 0$ and that the Lévy density $k$ satisfies (2.21) with $0 < \alpha < p - 1/2$. Define the compression scheme by
$$\widetilde{\mathbf{A}}_{(\ell',k'),(\ell,k)} = \begin{cases} 0 & \text{if } \exists i\in I^c_{\ell,\ell'} :\ \delta_{x_i} > B^i_{\ell,\ell'},\\ 0 & \text{if } \exists i\in I_{\ell,\ell'} :\ \delta^{\mathrm{sing}}_{x_i} > \widetilde B^i_{\ell,\ell'},\\ \mathbf{A}_{(\ell',k'),(\ell,k)} & \text{else.}\end{cases}$$
If $\tilde p > 2dp - (d+1)\alpha$ and $\alpha\le 2/d$, the number of non-zero entries of the compressed matrix $\widetilde{\mathbf{A}}$ is $O(2^L L^{2(d-1)})$.

Proof. See [53, Theorem 4.6.3].
Remark 4.3.2. Here, we only stated the isotropic case $\alpha_1 = \cdots = \alpha_d = \alpha$, i.e., in each direction the same compression is used. Although we still get asymptotically optimal complexity, the number of matrix entries can be reduced further using anisotropic compression. The corresponding compression scheme is defined in [53].
We give an example of the matrix compression.

Example 4.3.3. Let $a = 1$, $a' = 1$, $p = 2$, $\tilde p = 2$, $\alpha = 0.5$ and $L = 7$. The corresponding compression scheme is plotted in Figure 4.3 for $d = 1, 2, 3$. Zero entries due to the first compression are left white, zero entries due to the second compression are colored red, and non-zero entries are colored blue regardless of their size. For $d = 1$ there are 18% non-zero entries, for $d = 2$ it is 35%, and for $d = 3$ we have 52%.
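The mechanism behind the first cut-off rule can be illustrated in $d = 1$. The sketch below uses the hat wavelets of Example 4.1.1 with approximate supports $[(k-1)2^{-\ell}, (k+1)2^{-\ell}]\cap[0,1]$ and the $d = 1$ reduction $\beta_{\ell,\ell'} = 2L(p-\alpha/2) - 2p(\ell+\ell')$ of (4.9)/(4.11); it illustrates the a-priori sparsity pattern, not the exact scheme of [53].

```python
def compressed_pattern(L, a=1.0, alpha=0.5, p=2, p_dual=2):
    """Schematic d = 1 illustration of the first compression rule: an
    entry coupling wavelets (l,k) and (l',k') is kept only if the
    distance of their (approximate) supports is at most B_{l,l'}.
    Returns (kept entries, total entries)."""
    wavelets = [(l, k) for l in range(L + 1) for k in range(1, 2**l + 1)]
    kept = 0
    for l, k in wavelets:
        for lp, kp in wavelets:
            # approximate supports on [0, 1]
            s0, s1 = max(0.0, (k - 1) * 2.0**-l), min(1.0, (k + 1) * 2.0**-l)
            t0, t1 = max(0.0, (kp - 1) * 2.0**-lp), min(1.0, (kp + 1) * 2.0**-lp)
            dist = max(0.0, max(s0, t0) - min(s1, t1))
            # d = 1 reduction of (4.9)/(4.11) and the cut-off B
            beta = 2 * L * (p - alpha / 2) - 2 * p * (l + lp)
            B = a * max(2.0**-min(l, lp), 2.0**(beta / (2 * p_dual + alpha)))
            if dist <= B:
                kept += 1
    return kept, len(wavelets)**2
```

The fraction of retained entries decreases with the level $L$: coarse-level blocks are kept entirely while the dominant fine-level blocks become increasingly band-like, which is the source of the $O(2^L L^{2(d-1)})$ bound.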
We now consider the fully discrete (i.e., space and time) problem in matrix form (4.8), where we replace the matrix $\mathbf{A}$ by the compressed matrix $\widetilde{\mathbf{A}}$:

Find $u^{m+1}_L\in\mathbb{R}^{\widehat N}$ such that for $m = 0,\dots,M-1$
$$\Delta t^{-1}\,\mathbf{M}\,(u^{m+1}_L - u^m_L) + \theta\,\widetilde{\mathbf{A}}\,u^{m+1}_L + (1-\theta)\,\widetilde{\mathbf{A}}\,u^m_L = 0, \qquad u^0_L = u_{L,0}. \tag{4.12}$$
There exists a unique solution $u^m_L$ of the perturbed scheme (4.12), and the solution converges at the optimal rate.
Figure 4.3: Wavelet compression of the Levy measure for level L = 7 in d = 1, 2, 3
Theorem 4.3.4. Let $X$ be a Lévy process with state space $\mathbb{R}^d$ satisfying Assumption 2.3.4. Consider $\widetilde{\mathbf{A}}$ as given in Theorem 4.3.1 and let all assumptions of Theorem 4.3.1 hold. Then, there exists a unique solution $u^m_L$ of the perturbed $\theta$-scheme (4.12). Furthermore, if $u\in C^1([0,T],\mathcal{H}^p(D_R))\cap C^3([0,T],\mathcal{V}^*)$ and the approximation $u_{L,0}\in\widehat{\mathcal{V}}_{L+1}$ of the initial data $u_0$ is quasi-optimal in $L^2(D_R)$, then for $\theta = 1/2$
$$\big\|u^M - u^M_L\big\|^2_{L^2(D_R)} + \Delta t\sum_{m=0}^{M-1}\big\|u^{m+1/2} - u^{m+1/2}_L\big\|^2_{\mathcal{V}} \le C(u)\,\big(\Delta t^4 + 2^{-2L(p-\alpha/2)}\big),$$
where $u$ is the solution of (3.21) and the constant $C(u) > 0$ depends on higher space and time derivatives of $u$.

Proof. See [53, Theorem 2.2.3, Theorem 3.3.8].
Remark 4.3.5. As already noted in Remark 1.3.8, for rough initial data we need to use nonuniform time steps to obtain optimal convergence rates. Furthermore, for barrier contracts the solution may not be smooth at the barrier $\partial D$, as indicated in Remark 3.1.4. Nonuniform mesh widths in space can be applied to again obtain optimal convergence rates.
These convergence rates are shown in the next example. We only look at independentmargins because here one can obtain an exact solution with which to compare the finiteelement solution.
Example 4.3.6. Let $d = 2$ and consider two independent tempered stable marginal densities
$$k_i(z) = c_i\,\frac{e^{-\beta^-_i|z|}}{|z|^{1+\alpha_i}}\,\mathbf{1}_{\{z<0\}} + c_i\,\frac{e^{-\beta^+_i z}}{z^{1+\alpha_i}}\,\mathbf{1}_{\{z>0\}}, \quad i = 1,2.$$
We solve the elliptic problem
$$\mathcal{A}[u] = f \quad \text{on } \Omega = [0,1]^2, \tag{4.13}$$
where $f$ is chosen such that the exact solution is
$$u(x) = \begin{cases} (x_1^2 - 2x_1^3 + x_1^4)(x_2^2 - 2x_2^3 + x_2^4) & \text{if } x\in\Omega,\\ 0 & \text{else.}\end{cases}$$
We set the model parameters $c_1 = c_2 = 1$, $\beta^-_1 = 10$, $\beta^+_1 = 15$, $\beta^-_2 = 9$, $\beta^+_2 = 16$, $\alpha_1 = 0.5$, $\alpha_2 = 0.7$, and the compression parameters $a = 1$, $a' = 1$, $p = 2$, $\tilde p = 2$. For $L = 8$, the absolute values of the entries of the stiffness matrix $\mathbf{A}$ and the compressed matrix $\widetilde{\mathbf{A}}$ are shown in Figure 4.4. Here, large entries are colored red. For the stiffness matrix, blue entries are small but non-zero, whereas for the compressed matrix, blue entries are zero due to either the first or the second compression. One clearly sees that the compression scheme neglects small entries.
Figure 4.4: Stiffness matrix A (left) and compressed matrix A (right) for level L = 8
We solve problem (4.13) for various mesh widths $h_L = 2^{-L}$ and plot the convergence rate in Figure 4.5. To compare the rates, we also solved the problem on the full grid. The left plot shows that the sparse grid attains (up to logarithmic terms) the same rate as the full grid, and that the compression scheme preserves the convergence rate. To better show the advantage of the sparse grid, we additionally plot the convergence rate in terms of degrees of freedom. For the full grid we have $N = O(2^{2L})$, and for the sparse grid $\widehat N = O(L\,2^L)$. The convergence rate on the full grid exhibits the "curse of dimension", whereas on the sparse grid we still obtain the optimal rate (up to logarithmic terms).
Since, in general, the matrix entries $\mathbf{A}_{(\ell',k'),(\ell,k)}$ cannot be computed exactly, we need to approximate them with a numerical quadrature rule. To retain the optimal order of convergence, we require a certain accuracy.
Theorem 4.3.7. Consider $\widetilde{\mathbf{A}}$ as given in Theorem 4.3.1 and let $\overline{\mathbf{A}}$ be a perturbed matrix such that
$$\big|(\widetilde{\mathbf{A}} - \overline{\mathbf{A}})_{(\ell',k'),(\ell,k)}\big| \lesssim \varepsilon_{\ell,\ell'}, \quad \text{with } \varepsilon_{\ell,\ell'} \lesssim 2^{-(|\ell|+|\ell'|)/2}\,2^{-\widetilde L_{\ell,\ell'}}. \tag{4.14}$$
Then, Theorem 4.3.4 still holds with $\overline{\mathbf{A}}$ instead of $\widetilde{\mathbf{A}}$.
Figure 4.5: Convergence rate of the wavelet discretization in terms of the mesh width $h$ (left, slope $s = 2.0$) and in terms of degrees of freedom (right, slopes $s = -1.0$ and $s = -2.0$), for the full grid, the sparse grid and the compressed sparse grid
Proof. We need to show that the error satisfies $\|\mathbf{S}\|_2 \lesssim 2^{-\widetilde L_{\ell,\ell'}}$, as shown in [53, Theorem 2.5.2]. Let $\mathbf{S} = \big|\widetilde{\mathbf{A}}_{\ell',\ell} - \overline{\mathbf{A}}_{\ell',\ell}\big|$, where $\mathbf{A}_{\ell',\ell}$ is the block matrix with entries $(\mathbf{A}_{(\ell',k'),(\ell,k)})_{k'_i\in\nabla_{\ell'_i},\,k_i\in\nabla_{\ell_i}}$. Estimating for each row (or column) the sum over all entries yields
$$\sum_{k_1\in\nabla_{\ell_1}}\cdots\sum_{k_d\in\nabla_{\ell_d}}\mathbf{S}_{k',k} \lesssim 2^{|\ell|}\,\varepsilon_{\ell,\ell'} \qquad \text{and} \qquad \sum_{k'_1\in\nabla_{\ell'_1}}\cdots\sum_{k'_d\in\nabla_{\ell'_d}}\mathbf{S}_{k',k} \lesssim 2^{|\ell'|}\,\varepsilon_{\ell,\ell'}.$$
We can rewrite this as
$$\sum_{k_1\in\nabla_{\ell_1}}\cdots\sum_{k_d\in\nabla_{\ell_d}} w_k\,\mathbf{S}_{k',k} \lesssim w_{k'}\,2^{(|\ell|+|\ell'|)/2}\,\varepsilon_{\ell,\ell'}, \qquad \sum_{k'_1\in\nabla_{\ell'_1}}\cdots\sum_{k'_d\in\nabla_{\ell'_d}} w_{k'}\,\mathbf{S}_{k',k} \lesssim w_k\,2^{(|\ell|+|\ell'|)/2}\,\varepsilon_{\ell,\ell'},$$
with weights $w_k = 2^{(|\ell'|-|\ell|)/4}$ and $w_{k'} = 2^{(|\ell|-|\ell'|)/4}$. Using the Schur lemma [48, Lemme 4] we obtain the required result.
For the computation we use sparse tensor products to obtain $\overline{\mathbf{A}}$. Theorem 4.3.7 still holds:
Corollary 4.3.8. If the perturbed matrix $\overline{\mathbf{A}}$ satisfies
$$\big\|\widetilde{\mathbf{A}}_{\ell',\ell} - \overline{\mathbf{A}}_{\ell',\ell}\big\|_2 \lesssim 2^{-\widetilde L_{\ell,\ell'}},$$
with $\ell, \ell'\in\mathbb{N}^{d-1}$, then
$$\big\|\widetilde{\mathbf{A}}_{\ell',\ell}\otimes\mathbf{M}^d_{\ell'_d,\ell_d} - \overline{\mathbf{A}}_{\ell',\ell}\otimes\mathbf{M}^d_{\ell'_d,\ell_d}\big\|_2 \lesssim 2^{-\widetilde L_{(\ell,\ell_d),(\ell',\ell'_d)}}.$$

Proof. Follows immediately, since $2^{-\widetilde L_{(\ell,\ell_d),(\ell',\ell'_d)}} \ge 2^{-\widetilde L_{\ell,\ell'}}$.
4.4 Multilevel preconditioning
At each time step $m = 0,\dots,M-1$ we have to solve a linear system
$$\big(\mathbf{M} + \theta\Delta t\,\mathbf{A}\big)\,u^{m+1}_L = \big(\mathbf{M} - (1-\theta)\Delta t\,\mathbf{A}\big)\,u^m_L.$$
For an iterative solution of these systems, $\mathbf{B}u = b$, we use multilevel preconditioning. The preconditioner is obtained from the wavelet norm equivalences. With (4.4) for $s = 0$, we have for every $u\in\widehat{\mathcal{V}}_{L+1}$ with coefficient vector $\underline u\in\mathbb{R}^{\widehat N}$ that
$$\langle\underline u,\underline u\rangle \lesssim \langle\underline u,\mathbf{M}\underline u\rangle \lesssim \langle\underline u,\underline u\rangle.$$
Denote by $\mathbf{D}_{\mathbf{A}}$ the diagonal matrix with entries $2^{\alpha_1\ell_1} + \cdots + 2^{\alpha_d\ell_d}$ for an index corresponding to level $\ell = (\ell_1,\dots,\ell_d)$. Then, Theorem 4.3.4 and (4.4) for $s_i = \alpha_i/2$, $i=1,\dots,d$, imply that
$$\langle\underline u,\mathbf{D}_{\mathbf{A}}\underline u\rangle \lesssim \langle\underline u,\mathbf{A}\underline u\rangle \lesssim \langle\underline u,\mathbf{D}_{\mathbf{A}}\underline u\rangle.$$
Thus, we have $\langle\underline u,\mathbf{D}\underline u\rangle \lesssim \langle\underline u,\mathbf{B}\underline u\rangle \lesssim \langle\underline u,\mathbf{D}\underline u\rangle$ with the diagonal matrix $\mathbf{D} = \mathbf{I} + \theta\Delta t\,\mathbf{D}_{\mathbf{A}}$. Written in terms of $\tilde u = \mathbf{D}^{1/2}\underline u$ we finally obtain
$$|\tilde u|^2 \lesssim \langle\tilde u,\mathbf{D}^{-1/2}\mathbf{B}\mathbf{D}^{-1/2}\tilde u\rangle \lesssim |\tilde u|^2.$$
The linear system $\widetilde{\mathbf{B}}\tilde u = \tilde b$ with preconditioned matrix $\widetilde{\mathbf{B}} = \mathbf{D}^{-1/2}\mathbf{B}\mathbf{D}^{-1/2}$ and right-hand side $\tilde b = \mathbf{D}^{-1/2}b$ can be solved with GMRES [56] in a number of steps independent of the level index $L$.
Lemma 4.4.1. For the linear system $\widetilde{\mathbf{B}}\underline{\tilde u} = \underline{\tilde b}$, let $\underline{\tilde u}_j$ denote the iterate obtained by the GMRES method with initial guess $\underline{\tilde u}_0$. There is a constant $0 < r < 1$ independent of $L$ and $\Delta t$ such that
\[
\bigl|\underline{\tilde b} - \widetilde{\mathbf{B}}\underline{\tilde u}_j\bigr| \lesssim r^j\, \bigl|\underline{\tilde b} - \widetilde{\mathbf{B}}\underline{\tilde u}_0\bigr| .
\]
Proof. See [28].
Example 4.4.2. Let $d = 2$ and consider two independent tempered stable marginal densities as in Example 4.3.6. We compute the price of a basket option, $g(s_1,s_2) = (\tfrac12 s_1 + \tfrac12 s_2 - K)^+$, with maturity $T = 0.5$, strike $K = 100$ and interest rate $r = 0.01$. We set $c_1 = c_2 = 1$, $\beta_1^- = 10$, $\beta_1^+ = 15$, $\beta_2^- = 9$, $\beta_2^+ = 16$, $\alpha_1 = 0.5$, $\alpha_2 = 0.7$ and compute the maximum number of GMRES iterations for $m = 0,\ldots,M-1$, where $\Delta t = 0.005$. The values are shown in Table 4.1.
Level L 3 4 5 6 7 8
Max. Iterations 6 7 7 8 8 10
Table 4.1: Number of GMRES iterations
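The level-independent iteration counts of Table 4.1 can be reproduced qualitatively with a small sketch. Everything below is illustrative and not the thesis code: a synthetic one-dimensional hierarchy stands in for the wavelet basis, the matrix $\mathbf{B} = \mathbf{M} + \theta\Delta t\,\mathbf{A}$ is mocked by a diagonally dominant matrix whose diagonal grows like $2^{\alpha\ell}$, and SciPy's GMRES plays the role of the solver of [56].

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
alpha, theta, dt, L = 1.5, 0.5, 0.005, 8

# one level index per basis function: level l contributes 2**l wavelets (d = 1)
levels = np.concatenate([np.full(2**l, l) for l in range(L + 1)])
N = levels.size
dA = 2.0 ** (alpha * levels)                       # diagonal of D_A
off = 0.05 * np.sqrt(dA[:-1] * dA[1:])             # mock off-diagonal coupling
A = diags([off, dA, off], [-1, 0, 1])
B = diags(np.ones(N)) + theta * dt * A             # B = M + theta*dt*A with M = I

d = 1.0 + theta * dt * dA                          # diagonal of D = I + theta*dt*D_A
precond = LinearOperator((N, N), matvec=lambda v: v / d)

b = rng.standard_normal(N)
iters = []
x, info = gmres(B, b, M=precond, callback=lambda r: iters.append(r),
                callback_type='pr_norm')
```

With the diagonal preconditioner `precond` the iteration count stays bounded as $L$ grows, mirroring Lemma 4.4.1; dropping `M=precond` makes it grow with the level.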
5 Composite Gauss quadrature rules
We have to evaluate integrals $\int_{[-1,1]^{|I|}} f(z_I)\,U_I(z)\,\mathrm{d}z$ for $I \subset \{1,\ldots,d\}$, as seen in the last section. The tail integrals $U_I(z)$ have a singularity at the origin and possibly on each axis, as shown in Example 2.3.3. Therefore, we cannot use standard quadrature rules, since their accuracy depends on the smoothness of the integrand. Instead, we use a composite Gauss quadrature rule proposed in [62]: elementary Gauss quadrature formulas of varying orders are combined on subdomains whose size decreases geometrically towards the singular support of the integrand. Multidimensional quadrature rules are obtained by tensor products of one-dimensional quadrature formulas. We start by recalling error estimates for the basic Gauss-Legendre quadrature rules.
5.1 Gauss-Legendre quadrature
For a given function $f \in C([0,1])$ we set $I^{[0,1]}f := \int_0^1 f(s)\,\mathrm{d}s$ and denote the $g$-point Gauss-Legendre integration rule on $[0,1]$ by $Q^{[0,1]}_g f := \sum_{j=1}^g \omega_{g,j}\, f(\xi_{g,j})$. If $f \in C^{2g}([0,1])$ we obtain the following error estimate (see, e.g., [22])
\[
\bigl|E^{[0,1]}_g f\bigr| := \bigl|I^{[0,1]}f - Q^{[0,1]}_g f\bigr|
= \frac{(g!)^4}{(2g+1)\,[(2g)!]^3}\,\bigl|f^{(2g)}(\xi)\bigr| , \qquad \xi \in [0,1] .
\]
We use the Stirling formula $g! \sim \sqrt{2\pi g}\, g^g e^{-g}$ to obtain the estimate
\[
\bigl|E^{[0,1]}_g f\bigr| \lesssim \frac{2^{-4g}}{(2g)!}\, \max_{\xi\in[0,1]} \bigl|f^{(2g)}(\xi)\bigr| . \tag{5.1}
\]
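The super-exponential decay predicted by (5.1) is easy to observe numerically. The following check is my own illustration (NumPy's `leggauss` supplies the nodes and weights on $[-1,1]$, which are mapped to $[0,1]$); it integrates the entire function $\cos$:

```python
import numpy as np

def gauss_01(f, g):
    """g-point Gauss-Legendre approximation of the integral of f over [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(g)
    # map nodes from [-1, 1] to [0, 1]; the Jacobian contributes the factor 1/2
    return 0.5 * np.dot(w, f(0.5 * (x + 1.0)))

exact = np.sin(1.0)                       # integral of cos over [0, 1]
errs = [abs(gauss_01(np.cos, g) - exact) for g in (2, 4, 8)]
```

Already $g = 4$ gives roughly ten correct digits; the decay is governed by the factor $2^{-4g}/(2g)!$ in (5.1).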
On $[0,1]^d$ we approximate the integral $I^{[0,1]^d}f := \bigotimes_{1\le i\le d} I^{[0,1]} f = \int_{[0,1]^d} f(s)\,\mathrm{d}s$, for $f \in C([0,1]^d)$, by a tensor product Gauss-Legendre quadrature rule
\[
Q^{[0,1]^d}_g f := \bigotimes_{1\le i\le d} Q^{[0,1]}_g f
= \sum_{j_1,\ldots,j_d=1}^{g} \prod_{i=1}^{d} \omega_{g,j_i}\, f(\xi_{g,j_1},\ldots,\xi_{g,j_d}) ,
\]
and obtain the following error bound.
Lemma 5.1.1. If $f \in C^{2g}([0,1]^d)$, the quadrature error $E^{[0,1]^d}_g f := I^{[0,1]^d}f - Q^{[0,1]^d}_g f$ is bounded by
\[
\bigl|E^{[0,1]^d}_g f\bigr| \lesssim \frac{2^{-4g}}{(2g)!} \sum_{i=1}^{d} \max_{\xi\in[0,1]^d} \bigl|\partial_i^{2g} f(\xi)\bigr| . \tag{5.2}
\]
Proof. We prove this lemma by induction over the dimension $d$. With (5.1) it is true for $d = 1$. For $d > 1$ we have
\[
\begin{aligned}
\bigl|E^{[0,1]^d}_g f\bigr|
&= \Bigl|\Bigl(\bigotimes_{1\le i\le d} I^{[0,1]} - \bigotimes_{1\le i\le d} Q^{[0,1]}_g\Bigr) f\Bigr| \\
&= \Bigl|\Bigl(\bigotimes_{1\le i\le d} I^{[0,1]}
- \bigotimes_{1\le i\le d-1} I^{[0,1]} \otimes Q^{[0,1]}_g
+ \bigotimes_{1\le i\le d-1} I^{[0,1]} \otimes Q^{[0,1]}_g
- \bigotimes_{1\le i\le d} Q^{[0,1]}_g\Bigr) f\Bigr| \\
&= \Bigl|\bigotimes_{1\le i\le d-1} I^{[0,1]} \otimes \bigl(I^{[0,1]} - Q^{[0,1]}_g\bigr) f
+ \Bigl(\bigotimes_{1\le i\le d-1} I^{[0,1]} - \bigotimes_{1\le i\le d-1} Q^{[0,1]}_g\Bigr) \otimes Q^{[0,1]}_g f\Bigr| \\
&\lesssim \frac{2^{-4g}}{(2g)!}\, \max_{\xi\in[0,1]^d} \bigl|\partial_d^{2g} f(\xi)\bigr|
+ \frac{2^{-4g}}{(2g)!} \sum_{i=1}^{d-1} \max_{\xi\in[0,1]^d} \bigl|\partial_i^{2g} f(\xi)\bigr| .
\end{aligned}
\]
For our analysis we consider a class of functions which have singularities at the origin and on the axes.

Assumption 5.1.2. Let $f \in L^1([0,1]^d)$. There exist $0 < \alpha < d$, $\alpha \notin \mathbb{N}$, and $C_f > 0$ such that for all $k \in \mathbb{N}_0$, $i = 1,\ldots,d$,
\[
\bigl|\partial_i^k f(\xi)\bigr| \lesssim k!\, C_f^k\, \|\xi\|_\infty^{-\alpha}\, \xi_i^{-k} , \qquad \forall\, \xi \in (0,1)^d . \tag{5.3}
\]
Equation (5.3) is satisfied by all tail integrals corresponding to a Lévy process which satisfies Assumption 2.3.4, in particular (2.18) and (2.19). We introduce the notation $I^{[0,1]}f_{\xi_i} := \int_0^1 f(\xi_1,\ldots,s_i,\ldots,\xi_d)\,\mathrm{d}s_i$, where we integrate only over the $i$-th dimension, $i \in \{1,\ldots,d\}$. Similarly, $Q^{[0,1]}_g f_{\xi_i}$ and $E^{[0,1]}_g f_{\xi_i} := I^{[0,1]}f_{\xi_i} - Q^{[0,1]}_g f_{\xi_i}$. We can now state the basic error estimates on rectangular domains.
Proposition 5.1.3. Let $i \in \{1,\ldots,d\}$, let $[a,b]$ with $0 \le a \le b \le 1$ be an interval and set $h = b - a$. Assume $f$ satisfies (5.3) and set $I = \{1,\ldots,d\}\setminus\{i\}$. Then
\[
\bigl|E^{[a,b]}_g f_{\xi_i}\bigr| \lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, h \Bigl(\frac{C_f h}{4a}\Bigr)^{2g} a^{-\frac{\alpha}{d}} ,
\qquad \text{for } a > 0,\ \xi_I \in (0,1)^{d-1} , \tag{5.4}
\]
\[
\bigl|E^{[a,b]}_0 f_{\xi_i}\bigr| \lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, h^{1-\frac{\alpha}{d}} ,
\qquad \text{for } a = 0,\ \xi_I \in (0,1)^{d-1} . \tag{5.5}
\]
Proof. Consider the transformation $\varphi : [0,1] \to [a,b]$, $\varphi(\xi) = a + h\xi$. Then, with $I^{[a,b]}f_{\xi_i} = I^{[0,1]}(f_{\xi_i} \circ \varphi)\, h$ and $\partial_i^k f(\xi_1,\ldots,\varphi(\xi_i),\ldots,\xi_d)$ picking up a factor $h^k$, we get (5.4) by
\[
\bigl|E^{[a,b]}_g f_{\xi_i}\bigr| = h\,\bigl|E^{[0,1]}_g (f_{\xi_i} \circ \varphi)\bigr|
\lesssim h\,\frac{2^{-4g}}{(2g)!}\, \max_{\xi_i\in[a,b]} \bigl|\partial_i^{2g} f(\xi)\bigr|\, h^{2g}
\lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, h \Bigl(\frac{C_f h}{4a}\Bigr)^{2g} a^{-\frac{\alpha}{d}} .
\]
With $|f| \lesssim \|\xi\|_\infty^{-\alpha}$ one obtains (5.5) since
\[
\bigl|E^{[0,h]}_0 f_{\xi_i}\bigr|
= \Bigl|\int_{[0,h]} f(\xi_1,\ldots,s_i,\ldots,\xi_d)\,\mathrm{d}s_i\Bigr|
\lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}} \Bigl|\int_{[0,h]} s_i^{-\frac{\alpha}{d}}\,\mathrm{d}s_i\Bigr|
\lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, h^{1-\frac{\alpha}{d}} .
\]
5.2 Composite Gauss quadrature
On $[0,1]$ a geometric partition is given by $0 < \sigma^n < \sigma^{n-1} < \ldots < \sigma < 1$ for $n \in \mathbb{N}$, $\sigma \in (0,1)$. We denote the subdomains by $\Lambda_j := [\sigma^{n+1-j}, \sigma^{n-j}]$, $j = 1,\ldots,n$, and $\Lambda_0 := [0,\sigma^n]$. Given a linear degree vector $q \in \mathbb{N}^n$, $q_j = \lceil \mu j \rceil$ with slope $\mu > 0$, we use on each subdomain $\Lambda_j$, $j = 1,\ldots,n$, a Gauss quadrature of degree $q_j$, and place no quadrature points in $\Lambda_0$. The subdomains and the quadrature points are plotted in Figure 5.1 for the grading factor $\sigma = 0.3$, $n = 4$ refinements and a linear degree vector with slope $\mu = 1$.
Figure 5.1: Composite Gauss quadrature in d = 1
The composite Gauss quadrature rule in the $i$-th direction is defined by
\[
Q^{n,q}_\sigma f_{\xi_i} = \sum_{j=1}^{n} Q^{\Lambda_j}_{q_j} f_{\xi_i} , \qquad i \in \{1,\ldots,d\} , \tag{5.6}
\]
and converges exponentially.
Theorem 5.2.1. Let $i \in \{1,\ldots,d\}$ and let $f$ satisfy (5.3). Consider
\[
\sigma \in (0,1) \quad \text{such that} \quad w = \frac{C_f(1-\sigma)}{4\sigma} < 1 , \tag{5.7}
\]
and a linear degree vector $q = (q_1,\ldots,q_n)$,
\[
q_j = \lceil \mu j \rceil , \quad \text{with slope} \quad \mu > \frac{(1-\frac{\alpha}{d})\ln\sigma}{2\ln w} . \tag{5.8}
\]
Then,
\[
\bigl|I^{[0,1]}f_{\xi_i} - Q^{n,q}_\sigma f_{\xi_i}\bigr| \lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, \sigma^{n(1-\frac{\alpha}{d})} . \tag{5.9}
\]
Proof. On each $\Lambda_j$, $j = 1,\ldots,n$, we have the following estimate using (5.4) with $a = \sigma^{n+1-j}$ and $h = \sigma^{n-j}(1-\sigma)$:
\[
\bigl|E^{\Lambda_j}_g f_{\xi_i}\bigr|
\lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, h \Bigl(\frac{C_f(1-\sigma)}{4\sigma}\Bigr)^{2g} \sigma^{-(n+1-j)\frac{\alpha}{d}}
\lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, w^{2g}\, \sigma^{(n-j)(1-\frac{\alpha}{d})} .
\]
Summing over all subdomains $j = 1,\ldots,n$ yields
\[
\sum_{j=1}^{n} \bigl|E^{\Lambda_j}_{q_j} f_{\xi_i}\bigr|
\lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}} \sum_{j=1}^{n} w^{2q_j}\, \sigma^{(n-j)(1-\frac{\alpha}{d})}
\lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, \sigma^{n(1-\frac{\alpha}{d})} \sum_{j=1}^{\infty} \bigl( w^{2\mu}\, \sigma^{\frac{\alpha}{d}-1} \bigr)^j .
\]
The last sum converges since $\mu > \frac{(1-\frac{\alpha}{d})\ln\sigma}{2\ln w}$. We neglected the subdomain $\Lambda_0$ in the composite Gauss quadrature. Using (5.5) we have
\[
\bigl|E^{[0,\sigma^n]}_0 f_{\xi_i}\bigr| \lesssim \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, \sigma^{n(1-\frac{\alpha}{d})} .
\]
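The construction can be sketched in a few lines for the model singularity $f(x) = x^{-1/2}$ on $(0,1]$ (so $\alpha = 1/2$, $C_f = 1$, and (5.7) holds for $\sigma = 0.3$). This is my own illustration of the rule (5.6), not the thesis code; the innermost cell $\Lambda_0$ is skipped exactly as in the proof above.

```python
import numpy as np

def composite_gauss(f, sigma=0.3, n=10, mu=1.0):
    """Graded composite Gauss quadrature of f over (0, 1]: a Gauss rule of
    degree ceil(mu*j) on Lambda_j = [sigma**(n+1-j), sigma**(n-j)], and no
    quadrature points on Lambda_0 = [0, sigma**n]."""
    total = 0.0
    for j in range(1, n + 1):
        a, b = sigma ** (n + 1 - j), sigma ** (n - j)
        q = int(np.ceil(mu * j))
        x, w = np.polynomial.legendre.leggauss(q)
        t = 0.5 * (b - a) * (x + 1.0) + a        # map nodes to [a, b]
        total += 0.5 * (b - a) * np.dot(w, f(t))
    return total

f = lambda t: t ** -0.5                          # integral over (0, 1] equals 2
err10 = abs(composite_gauss(f, n=10) - 2.0)
err25 = abs(composite_gauss(f, n=25) - 2.0)
```

The error decays like $\sigma^{n(1-\alpha)}$, in line with (5.9): going from $n = 10$ to $n = 25$ refinements gains several digits at a modest increase in points.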
Remark 5.2.2. Condition (5.7) is suboptimal. Using [62, Theorem 4.1] or [13, Proposition 2.8] one can obtain exponential convergence for any $\sigma \in (0,1)$.
We define the composite Gauss quadrature on $[0,1]^d$ by the tensor product of one-dimensional composite Gauss quadrature rules, $Q^{n,(q_1,\ldots,q_d)}_\sigma f = \bigotimes_{1\le i\le d} Q^{n,q_i}_\sigma f_{\xi_i}$. The subdomains and the quadrature points on $[0,1]^d$ are plotted in Figure 5.2 for the grading factor $\sigma = 0.3$, $n = 4$ refinements and linear degree vectors with slope $\mu_1 = \cdots = \mu_d = 1$ in $d = 2,3$.

The composite Gauss quadrature rule converges exponentially with respect to the number $N$ of Gauss points.
Theorem 5.2.3. Let $f$ satisfy (5.3). Consider a grading factor $\sigma \in (0,1)$ satisfying (5.7) and linear degree vectors $(q_1,\ldots,q_d)$ satisfying (5.8). Then there exists a $\gamma > 0$ such that the quadrature error decays exponentially,
\[
\bigl|I^{[0,1]^d}f - Q^{n,(q_1,\ldots,q_d)}_\sigma f\bigr| \lesssim e^{-\gamma \sqrt[2d]{N}} .
\]
[Figure 5.2: Composite Gauss quadrature in $d = 2, 3$.]
Proof. We prove this theorem in two steps.

1. As in the proof of Lemma 5.1.1 we prove
\[
\bigl|I^{[0,1]^d}f - Q^{n,(q_1,\ldots,q_d)}_\sigma f\bigr| \lesssim e^{-\gamma n}
\]
by induction over the dimension $d$. With (5.9) it is true for $d = 1$. For $d > 1$ we have with (5.9)
\[
\begin{aligned}
\bigl|I^{[0,1]^d}f - Q^{n,(q_1,\ldots,q_d)}_\sigma f\bigr|
&= \Bigl|\Bigl(\bigotimes_{1\le i\le d} I^{[0,1]} - \bigotimes_{1\le i\le d} Q^{n,q_i}_\sigma\Bigr) f\Bigr| \\
&= \Bigl|\bigotimes_{1\le i\le d-1} I^{[0,1]} \otimes \bigl(I^{[0,1]} - Q^{n,q_d}_\sigma\bigr) f
+ \Bigl(\bigotimes_{1\le i\le d-1} I^{[0,1]} - \bigotimes_{1\le i\le d-1} Q^{n,(q_1,\ldots,q_{d-1})}_\sigma\Bigr) \otimes Q^{n,q_d}_\sigma f\Bigr| \\
&\lesssim \int_{[0,1]^{d-1}} \|\xi_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\,\mathrm{d}\xi_I\; \sigma^{n(1-\frac{\alpha}{d})}
+ e^{-\gamma n} \sum_{j=1}^{n} \sum_{m=1}^{q_{d,j}} \omega_{j,m}\, \xi_{j,m}^{-\frac{\alpha}{d}}
\;\lesssim\; e^{-\tilde\gamma n} .
\end{aligned}
\]

2. Let $\mu_1 = \max\{\mu_1,\ldots,\mu_d\}$. We estimate the number of quadrature points by
\[
N \le \Bigl(\sum_{j=1}^{n} q_{j,1}\Bigr)^d \lesssim \Bigl(\sum_{j=1}^{n} j\Bigr)^d \lesssim n^{2d} .
\]
We give a numerical example which shows the exponential convergence of the composite Gauss quadrature formula.

Example 5.2.4. Consider the function $f(x) = \bigl(\sum_{i=1}^d x_i^{\beta_i\vartheta}\bigr)^{-1/\vartheta}$ on the domain $[0,1]^d$ for $\alpha = \beta_1 = \ldots = \beta_d = 0.5$. We apply a composite Gauss quadrature formula with grading factor $\sigma = 0.2$ and linear degree vectors with slope $\mu_1 = \ldots = \mu_d = 0.5$. For $\vartheta = 0.5$ the relative quadrature error $\bigl|I^{[0,1]^d}f - Q^{n,(q_1,\ldots,q_d)}_\sigma f\bigr| / \bigl|I^{[0,1]^d}f\bigr|$ versus $\sqrt[2d]{N}$ is plotted in logarithmic scale in Figure 5.3. Additionally, we also plot the relative error for $d = 2$ and various $\sigma$. As already seen in the proof of Theorem 5.2.3, the convergence rate depends on $1 - \alpha/d$, which increases in $d$.
[Figure 5.3: Exponential convergence of the composite Gauss quadrature (relative error versus $N^{1/2d}$) for $\vartheta = 0.5$, $\sigma = 0.2$ and $d = 2,3$ (left), and for $d = 2$ and various $\sigma \in \{0.01, 0.02, 0.05, 0.1, 0.2, 0.3\}$ (right).]
6 Computational scheme
As seen in Chapter 4 we need to compute matrix entries of the type
\[
B_{(\ell',k'),(\ell,k)} = \int_{\mathbb{R}^d} \int_{D_R} \partial_1\cdots\partial_d\, \psi_{\ell,k}(x+z)\, \psi_{\ell',k'}(x)\, \kappa(z)\,\mathrm{d}x\,\mathrm{d}z , \tag{6.1}
\]
where the kernel $\kappa$ satisfies (5.3), i.e.,
\[
\bigl|\partial_i^k \kappa(z)\bigr| \lesssim k!\, C_f^k\, \|z\|_\infty^{-\alpha}\, z_i^{-k} , \qquad \forall\, z \in \mathbb{R}^d,\ k \in \mathbb{N}_0,\ i = 1,\ldots,d ,
\]
for $0 < \alpha < d$, $\alpha \notin \mathbb{N}$ and $C_f > 0$. Introducing a new variable $y = x + z$ we can write the integral (6.1) as
\[
B_{(\ell',k'),(\ell,k)} = \int_{\Sigma_{\ell,k}} \int_{\Sigma_{\ell',k'}} \partial_1\cdots\partial_d\, \psi_{\ell,k}(y)\, \psi_{\ell',k'}(x)\, \kappa(y-x)\,\mathrm{d}y\,\mathrm{d}x , \tag{6.2}
\]
where $\Sigma_{\ell,k} = \operatorname{supp}\psi_{\ell,k}$. Similar equations have been studied for boundary element methods, although only in the isotropic setting. Several schemes have been developed to solve these problems in dimension $d \le 3$; see [31, 58, 67] and the references therein. We adapt these methods to the anisotropic case for $d \ge 1$. Throughout this section we consider wavelets as described in Example 4.1.1, which are piecewise linear.
6.1 Hierarchical data structure
For an efficient implementation of the compression scheme it is necessary to have a hierarchical data structure. Therefore, we introduce a hierarchical element tree up to a given level $L \in \mathbb{N}$.
6.1.1 Element tree
We start with $D_{(0,\ldots,0),(1,\ldots,1)} = D_R$ as the first generation. On the $\ell$-th generation we consider the elements $D_{\ell,k}$ where the multiindices are given by $\ell = (\ell_1,\ldots,\ell_d)$, $\ell_i = 0,\ldots,\ell$, $i = 1,\ldots,d$, with $\|\ell\|_\infty = \ell$, and $k = (k_1,\ldots,k_d)$, $k_i = 1,\ldots,2^{\ell_i}$, $i = 1,\ldots,d$. Each element $D_{\ell,k}$ has sons $D_{\ell+\tilde\ell,\tilde k}$, where $|\ell + \tilde\ell| \le L$,
\[
\tilde\ell_i = \begin{cases} 0 & \text{if } \ell_i \ne \ell , \\ \in \{0,1\} & \text{if } \ell_i = \ell , \end{cases}
\qquad i = 1,\ldots,d , \quad \text{with } \|\tilde\ell\|_\infty = 1 ,
\]
and $\tilde k_i = 2^{\tilde\ell_i}(k_i - 1) + 1, \ldots, 2^{\tilde\ell_i} k_i$, $i = 1,\ldots,d$. Since there exists a bijective mapping which indexes each element $D_{\ell,k}$ uniquely by an integer $\lambda$, we write shortly $D_\lambda = D_{\ell,k}$ and set $|\lambda| = |\ell|$, $\|\lambda\|_\infty = \|\ell\|_\infty$.
Example 6.1.1. To illustrate this data structure we let $d = 2$, $L = 2$ and plot the corresponding elements in Figure 6.1. The initial element $D_1$ has 8 sons, $D_\lambda$, $\lambda = 2,\ldots,9$. The element $D_2$ has only 2 sons, $D_{10}$, $D_{11}$, and so on. Note that the elements $D_\lambda$, $\lambda = 6,\ldots,9$, do not have any sons due to the sparse tensor product spaces.
Figure 6.1: Hierarchical element tree for d = 2 and L = 2
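The son enumeration can be reconstructed in a few lines (my own sketch of the rules of Section 6.1.1, not the thesis data structure). For $d = 2$, $L = 2$ it reproduces the counts of Example 6.1.1: the root has 8 sons, a level-$(1,0)$ element has 2 sons, the level-$(1,1)$ elements have none, and there are 17 elements in total.

```python
from itertools import product

def sons(ell, k, L):
    """Sons of the element D_{ell,k}: the level is raised by one only in
    directions attaining the maximal level, subject to |ell + dl| <= L."""
    d, m = len(ell), max(ell)
    result = []
    choices = [(0, 1) if ell[i] == m else (0,) for i in range(d)]
    for dl in product(*choices):
        # need ||dl||_inf = 1 and the sparse truncation |ell + dl| <= L
        if max(dl) != 1 or sum(ell) + sum(dl) > L:
            continue
        ranges = [range(2**dl[i] * (k[i] - 1) + 1, 2**dl[i] * k[i] + 1)
                  for i in range(d)]
        for kk in product(*ranges):
            result.append((tuple(l + s for l, s in zip(ell, dl)), kk))
    return result

def tree(L, d=2):
    """All elements of the hierarchical element tree up to level L."""
    root = ((0,) * d, (1,) * d)
    elems, frontier = [root], [root]
    while frontier:
        frontier = [s for e in frontier for s in sons(e[0], e[1], L)]
        elems += frontier
    return elems
```

The truncation `sum(ell) + sum(dl) > L` is what prunes the tree to the sparse tensor product structure visible in Figure 6.1.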
Similar to the standard single-scale finite element method, we do not compute the matrix entry (6.2) directly over $\operatorname{supp}\psi_{\ell,k}$, since $\psi_{\ell,k}$ is not smooth. Instead, we decompose $\Sigma_{\ell,k}$ into a set of elements $\bigcup D_\lambda$ such that $\psi_{\ell,k}|_{D_\lambda}$ is smooth. More precisely, consider the set
\[
\mathcal{L}_{\ell,k} = \bigl\{ D_{\ell+1,\tilde k} : \tilde k_i = \max\{2(k_i-1),1\},\ldots,\min\{2k_i+1,\, 2^{\ell_i+1}\},\ i = 1,\ldots,d \bigr\} .
\]
Then,
\[
\Sigma_{\ell,k} = \operatorname{supp}\psi_{\ell,k} = \bigcup_{D_\lambda\in\mathcal{L}_{\ell,k}} D_\lambda ,
\qquad
\Sigma^{\mathrm{sing}}_{\ell,k} = \operatorname{sing\,supp}\psi_{\ell,k} = \bigcup_{D_\lambda\in\mathcal{L}_{\ell,k}} \partial D_\lambda ,
\]
and
\[
\psi_{\ell,k}(x)\big|_{D_\lambda} = \sum_{n_1,\ldots,n_d=1}^{2} 2^{|\ell|/2}\, \omega_{\ell,k,n,\lambda}\, \phi_n(\varphi_\lambda^{-1}(x)) , \qquad D_\lambda \in \mathcal{L}_{\ell,k} , \tag{6.3}
\]
with weights $\omega_{\ell,k,n,\lambda} = \prod_{i=1}^d \omega_{\ell_i,k_i,n_i,\lambda}$, shape functions $\phi_n(z) = \prod_{i=1}^d \phi_{n_i}(z_i)$ and diffeomorphism $\varphi_\lambda : [0,1]^d \to D_\lambda$. The one-dimensional weights follow immediately from Example 4.1.1 and the one-dimensional shape functions are $\phi_1(z) = 1-z$, $\phi_2(z) = z$.
6.1.2 Compression pattern

To set up the compression scheme we need to check the distance criteria $\delta_{x_i} > B^i_{\ell,\ell'}$, $i \in I^c_{\ell,\ell'}$, and $\delta^{\mathrm{sing}}_{x_i} > B^i_{\ell,\ell'}$, $i \in I_{\ell,\ell'}$. Checking these criteria for each matrix coefficient would require $O(N^2)$ operations. For an efficient computation we exploit the tree structure described above. We denote by $\sigma_{\ell_i,k_i} = \operatorname{supp}\psi_{\ell_i,k_i}$, $\sigma^{\mathrm{sing}}_{\ell_i,k_i} = \operatorname{sing\,supp}\psi_{\ell_i,k_i}$, $i = 1,\ldots,d$, and define a wavelet tree as follows.

Definition 6.1.2. The wavelet $\psi_{\tilde\ell,\mathrm{son}}$ is the son of $\psi_{\ell,\mathrm{father}}$ if $\sigma_{\tilde\ell_i,\mathrm{son}} \subseteq \sigma_{\ell_i,\mathrm{father}}$, $i = 1,\ldots,d$, and there exists $i \in \{1,\ldots,d\}$ such that $\tilde\ell_i = \ell_i + 1$.

Then, the following lemmas hold.

Lemma 6.1.3. Let $\operatorname{dist}\{\sigma_{\ell_i,\mathrm{father}}, \sigma_{\ell'_i,\mathrm{father}}\} > B^i_{\ell,\ell'}$ for $i \in I^c_{\ell,\ell'}$, and let $\sigma_{\ell_i+1,\mathrm{son}} \subseteq \sigma_{\ell_i,\mathrm{father}}$, $\sigma_{\ell'_i+1,\mathrm{son}} \subseteq \sigma_{\ell'_i,\mathrm{father}}$. Then $\operatorname{dist}\{\sigma_{\ell_i+1,\mathrm{son}}, \sigma_{\ell'_i,\mathrm{father}}\} > B^i_{\tilde\ell,\ell'}$ and $\operatorname{dist}\{\sigma_{\ell_i+1,\mathrm{son}}, \sigma_{\ell'_i+1,\mathrm{son}}\} > B^i_{\tilde\ell,\tilde\ell'}$, where $\tilde\ell = (\ell_1,\ldots,\ell_i+1,\ldots,\ell_d)$ and $\tilde\ell' = (\ell'_1,\ldots,\ell'_i+1,\ldots,\ell'_d)$.

Proof. The result follows from $B^i_{\ell,\ell'} \ge B^i_{\tilde\ell,\ell'} \ge B^i_{\tilde\ell,\tilde\ell'}$.

Lemma 6.1.4. Let $\operatorname{dist}\{\sigma_{\ell_i,\mathrm{father}}, \sigma^{\mathrm{sing}}_{\ell'_i,k'_i}\} > B^i_{\ell,\ell'}$ for $i \in I_{\ell,\ell'}$, $\ell_i > \ell'_i$, and let $\sigma_{\ell_i+1,\mathrm{son}} \subseteq \sigma_{\ell_i,\mathrm{father}}$. Then $\operatorname{dist}\{\sigma_{\ell_i+1,\mathrm{son}}, \sigma^{\mathrm{sing}}_{\ell'_i,k'_i}\} > B^i_{\tilde\ell,\ell'}$, where $\tilde\ell = (\ell_1,\ldots,\ell_i+1,\ldots,\ell_d)$ and $\tilde\ell' = (\ell'_1,\ldots,\ell'_i+1,\ldots,\ell'_d)$.

Proof. The result follows from $B^i_{\ell,\ell'} \ge B^i_{\tilde\ell,\ell'}$.

Remark 6.1.5. Similar results for different wavelets in $d = 2$ are given in [31].

Using Lemmas 6.1.3 and 6.1.4, we only have to check the distance criteria for coefficients which have a non-zero father. The number of operations for setting up the compression scheme is then of log-linear complexity $O(2^L L^{2(d-1)})$.
6.2 Matrix computation
Replacing the wavelets in (6.2) by the element representation (6.3) leads to
\[
B_{(\ell',k'),(\ell,k)} = \sum_{D_\lambda\in\mathcal{L}_{\ell,k}} \sum_{D_{\lambda'}\in\mathcal{L}_{\ell',k'}} \sum_{n_1,\ldots,n_d=1}^{2} \sum_{n'_1,\ldots,n'_d=1}^{2} \omega_{\ell,k,n,\lambda}\, \omega_{\ell',k',n',\lambda'}\, Q_{(\lambda,n),(\lambda',n')} ,
\]
with
\[
Q_{(\lambda,n),(\lambda',n')} = 2^{|\lambda|/2 + |\lambda'|/2} \int_{D_\lambda} \int_{D_{\lambda'}} \partial_1\cdots\partial_d\, \phi_n(\varphi_\lambda^{-1}(y))\, \phi_{n'}(\varphi_{\lambda'}^{-1}(x))\, \kappa(y-x)\,\mathrm{d}y\,\mathrm{d}x ,
\]
or, in terms of the reference interval,
\[
Q_{(\lambda,n),(\lambda',n')} = (-1)^{|n|}\, 2^{|\lambda|/2 + |\lambda'|/2} \prod_{i=1}^{d} h'_i \int_{[0,1]^d} \int_{[0,1]^d} \phi_{n'}(x)\, \kappa_{\lambda,\lambda'}(x,y)\,\mathrm{d}y\,\mathrm{d}x , \tag{6.4}
\]
where $h'_i = R\, 2^{-\ell'_i}$, $i = 1,\ldots,d$, and $\kappa_{\lambda,\lambda'}(x,y) = \kappa(\varphi_\lambda(y) - \varphi_{\lambda'}(x))$. Therefore, computing the matrix entries reduces to computing the element-element interactions $Q_{(\lambda,n),(\lambda',n')}$.

We can again use the hierarchical data structure to obtain an entry of a father element from the son elements. For example, for a father element $D_{\mathrm{father}} = D_{\ell,k}$ with the two sons $D_{\mathrm{son1}} = D_{(\ell_1,\ldots,\ell_i+1,\ldots,\ell_d),(k_1,\ldots,2k_i-1,\ldots,k_d)}$ and $D_{\mathrm{son2}} = D_{(\ell_1,\ldots,\ell_i+1,\ldots,\ell_d),(k_1,\ldots,2k_i,\ldots,k_d)}$, we get
\[
Q_{(\mathrm{father},n),(\lambda',n')} = \bigl( Q_{(\mathrm{son1},n),(\lambda',n')} + Q_{(\mathrm{son2},n),(\lambda',n')} \bigr)\, 2^{-3/2} . \tag{6.5}
\]
Similarly,
\[
Q_{(\lambda,n),(\mathrm{father},(n'_1,\ldots,1,\ldots,n'_d))}
= \bigl( Q_{(\lambda,n),(\mathrm{son1},(n'_1,\ldots,1,\ldots,n'_d))} + Q_{(\lambda,n),(\mathrm{son1},(n'_1,\ldots,2,\ldots,n'_d))}/2 + Q_{(\lambda,n),(\mathrm{son2},(n'_1,\ldots,1,\ldots,n'_d))}/2 \bigr)\, 2^{-1/2} . \tag{6.6}
\]
6.3 Numerical integration
Consider $\ell, k, \ell', k' \in \mathbb{N}^d$, the corresponding $\lambda, \lambda'$, fix $n, n'$ and introduce the notation $\delta_i = \operatorname{dist}\{D^i_\lambda, D^i_{\lambda'}\}$, where $D_\lambda = D^1_\lambda \times \cdots \times D^d_\lambda$. Let $\varepsilon > 0$, $i \in \{1,\ldots,d\}$, and set $I = \{1,\ldots,d\}\setminus\{i\}$ and $z = y - x$. We distinguish several cases: the integrand $\kappa_{\lambda,\lambda'}(x,y)$ is non-singular in $y_i - x_i$, i.e., $\delta_i > 0$; the elements are identical, $D_\lambda = D_{\lambda'}$; or the elements share a common vertex.

1. Let $\delta_i > C_f \max\{h_i, h'_i\}/4$. Consider
\[
g \gtrsim \frac{\ln\varepsilon + \frac{\alpha}{d}\ln\delta_i + (|\lambda'|/2 - |\lambda|/2)\ln 2}{2\ln w} ,
\qquad
g' \gtrsim \frac{\ln\varepsilon + (\frac{\alpha}{d} - 1)\ln\delta_i + (|\lambda'|/2 - |\lambda|/2 - \ell'_i)\ln 2}{2\ln w'} \tag{6.7}
\]
numbers of Gauss points, where $w = \frac{h_i C_f}{4\delta_i}$ and $w' = \frac{h'_i C_f}{4\delta_i}$. Furthermore, let the standard Gauss quadrature points and weights on $[0,1]$ be given by $\xi_g, \omega_g \in \mathbb{R}^g$. Then, we define quadrature points $\xi^i \in \mathbb{R}^{gg'}$ and weights $\omega^i \in \mathbb{R}^{gg'}$ by
\[
\xi^i = \xi^\lambda_g \otimes \mathbf{1}_{g'} - \mathbf{1}_g \otimes \xi^{\lambda'}_{g'} ,
\qquad
\omega^i = 2^{\ell_i/2 + \ell'_i/2}\, h_i^{-1}\, \phi_{n'_i}(\mathbf{1}_g \otimes \xi_{g'}) \mathbin{.\!*} \omega^\lambda_g \otimes \omega^{\lambda'}_{g'} , \tag{6.8}
\]
where $\mathbf{1}_g = (1,\ldots,1)^\top \in \mathbb{R}^g$, $\xi^\lambda_{g,j} = (k_i - 1)h_i + h_i\,\xi_{g,j}$ and $\omega^\lambda_{g,j} = h_i\,\omega_{g,j}$, with the analogous definitions for $\lambda'$.
2. Let $\delta_i = 0$, $\ell_i = \ell'_i$ and $k_i = k'_i$. Consider
\[
n \gtrsim \frac{\ln\varepsilon + (|\lambda'|/2 - |\lambda|/2 - \ell_i)\ln 2}{(1 - \frac{\alpha}{d})\ln(h_i\,\sigma)} \tag{6.9}
\]
refinements for the composite Gauss quadrature, with $\sigma, q$ satisfying (5.7), (5.8). Furthermore, let the composite Gauss quadrature points and weights on $[0,1]$ be given by $\xi_n, \omega_n \in \mathbb{R}^N$. Then, we define quadrature points $\xi^i \in \mathbb{R}^{2N}$ and weights $\omega^i \in \mathbb{R}^{2N}$ by
\[
\begin{aligned}
(\xi^i_j)_{1\le j\le N} &= h_i\,\xi_n , \qquad (\xi^i_j)_{N+1\le j\le 2N} = -h_i\,\xi_n ,\\
(\omega^i_j)_{1\le j\le N} &= h_i \int_0^1 \phi_{n'_i}(\xi_n + x(1-\xi_n))\,\mathrm{d}x \mathbin{.\!*} (1-\xi_n) \mathbin{.\!*} \omega_n ,\\
(\omega^i_j)_{N+1\le j\le 2N} &= h_i \int_0^1 \phi_{n'_i}(x(1-\xi_n))\,\mathrm{d}x \mathbin{.\!*} (1-\xi_n) \mathbin{.\!*} \omega_n .
\end{aligned} \tag{6.10}
\]
3. Let $\delta_i = 0$, $\ell_i = \ell'_i$ and $k_i = k'_i - 1$. Consider
\[
g \gtrsim \frac{\ln\varepsilon + \frac{\alpha}{d}\ln h_i + (|\lambda'|/2 - |\lambda|/2)\ln 2}{2\ln w} ,
\qquad
n \gtrsim \frac{\ln\varepsilon + (|\lambda'|/2 - |\lambda|/2 - \ell_i)\ln 2}{(2 - \frac{\alpha}{d})\ln(h_i\,\sigma)} \tag{6.11}
\]
numbers of Gauss points and refinements, respectively. We define quadrature points $\xi^i \in \mathbb{R}^{g+N}$ and weights $\omega^i \in \mathbb{R}^{g+N}$ by
\[
\begin{aligned}
(\xi^i_j)_{1\le j\le g} &= h_i + h_i\,\xi_g , \qquad (\xi^i_j)_{g+1\le j\le g+N} = h_i\,\xi_n ,\\
(\omega^i_j)_{1\le j\le g} &= h_i \int_0^1 \phi_{n'_i}(\xi_g + x(1-\xi_g))\,\mathrm{d}x \mathbin{.\!*} (1-\xi_g) \mathbin{.\!*} \omega_g ,\\
(\omega^i_j)_{g+1\le j\le g+N} &= h_i \int_0^1 \phi_{n'_i}(x\,\xi_n)\,\mathrm{d}x \mathbin{.\!*} \xi_n \mathbin{.\!*} \omega_n .
\end{aligned} \tag{6.12}
\]
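The structure behind case 1 is a plain tensor Gauss rule evaluated on the difference grid $\xi^\lambda_g \otimes \mathbf{1} - \mathbf{1} \otimes \xi^{\lambda'}_{g'}$: since the kernel depends on $y - x$ only, it needs to be evaluated only at these $gg'$ difference points. A minimal check of this Kronecker construction (my own illustration with the smooth kernel $\kappa(z) = 1/z$ and two disjoint intervals; the wavelet weights and shape functions are omitted):

```python
import numpy as np

g = 12
xg, wg = np.polynomial.legendre.leggauss(g)

def mapped(a, b):
    """Gauss nodes and weights transported from [-1, 1] to [a, b]."""
    return 0.5 * (b - a) * (xg + 1.0) + a, 0.5 * (b - a) * wg

ty, wy = mapped(2.0, 3.0)          # element D_lambda  (y-direction)
tx, wx = mapped(0.0, 1.0)          # element D_lambda' (x-direction)

# difference grid and product weights, in the spirit of (6.8)
z = np.kron(ty, np.ones(g)) - np.kron(np.ones(g), tx)
w = np.kron(wy, wx)
approx = np.dot(w, 1.0 / z)        # integral of 1/(y - x) over [2,3] x [0,1]

exact = 3.0 * np.log(3.0) - 4.0 * np.log(2.0)
```

Because the two intervals are well separated, the integrand is analytic and the tensor Gauss rule converges to machine precision with a handful of points per direction.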
Using a tensor product quadrature formula we have the following error estimate.
Theorem 6.3.1. Assume that the kernel $\kappa$ satisfies (5.3). Consider $\varepsilon > 0$ and assume for each $i = 1,\ldots,d$ either $\delta_i > C_f \max\{h_i, h'_i\}/4$, or $\delta_i = 0$, $\ell_i = \ell'_i$, $k_i = k'_i$, or $\delta_i = 0$, $\ell_i = \ell'_i$, $k_i = k'_i - 1$. Define the $d$-dimensional quadrature points and weights by
\[
\underline{\xi}^i = \bigotimes_{1\le j\le i-1} \mathbf{1}_j \otimes \xi^i \otimes \bigotimes_{i+1\le j\le d} \mathbf{1}_j ,
\qquad
\omega = \bigotimes_{1\le j\le d} \omega^j ,
\]
where the one-dimensional quadrature points and weights $\xi^i, \omega^i$, $i = 1,\ldots,d$, are given by (6.8), (6.10) or (6.12). Then, we obtain
\[
\Bigl| 2^{|\lambda|/2 + |\lambda'|/2} \prod_{i=1}^{d} h'_i \int_{[0,1]^d} \int_{[0,1]^d} \phi_{n'}(x)\, \kappa_{\lambda,\lambda'}(x,y)\,\mathrm{d}y\,\mathrm{d}x
- \bigl\langle \omega,\, \kappa(\underline{\xi}^1,\ldots,\underline{\xi}^d) \bigr\rangle \Bigr| \lesssim \varepsilon .
\]
Proof. We again distinguish three cases.

1. Let $\delta_i > C_f \max\{h_i, h'_i\}/4$ and define $f(x,y) = 2^{|\lambda|/2 - |\lambda'|/2}\,\phi_{n'}(x)\,\kappa_{\lambda,\lambda'}(x,y)$. Using the standard product rule,
\[
\partial_{x_i}^n f(x,y) = \partial_{x_i}^n \kappa_{\lambda,\lambda'}(x,y)\,\phi_{n'}(x) + n\,\partial_{x_i}^{n-1} \kappa_{\lambda,\lambda'}(x,y)\,\partial_i\phi_{n'}(x) ,
\]
there holds for $h_i = R\,2^{-\ell_i}$,
\[
\bigl|\partial_{y_i}^n f(x,y)\bigr| \lesssim 2^{|\lambda|/2 - |\lambda'|/2}\, n!\, (h_i C_f)^n\, \delta_i^{-\frac{\alpha}{d}-n}\, \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}} , \qquad n \in \mathbb{N}_0 ,
\]
and for $\delta_i \gtrsim h'_i$ and $h'_i = R\,2^{-\ell'_i}$,
\[
\bigl|\partial_{x_i}^n f(x,y)\bigr| \lesssim 2^{|\lambda|/2 - |\lambda'|/2}\, n!\, (h'_i C_f)^{n-1}\, \delta_i^{-\frac{\alpha}{d}-n+1}\, \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}} , \qquad n \in \mathbb{N}_0 .
\]
Therefore, we obtain, similarly to (5.4),
\[
\bigl|E^{[0,1]^2}_{g,g'} f_{\hat x_i,\hat y_i}\bigr|
\lesssim 2^{|\lambda|/2 - |\lambda'|/2}\, \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}}
\Biggl( \Bigl(\frac{h_i C_f}{4\delta_i}\Bigr)^{2g} \delta_i^{-\frac{\alpha}{d}}
+ \Bigl(\frac{h'_i C_f}{4\delta_i}\Bigr)^{2g'} \delta_i^{-\frac{\alpha}{d}+1}\, 2^{\ell'_i} \Biggr) .
\]
Choosing the numbers of Gauss points according to (6.7) we have
\[
\bigl|E^{[0,1]^2}_{g,g'} f_{\hat x_i,\hat y_i}\bigr| \lesssim \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, \varepsilon .
\]
2. Let $\delta_i = 0$, $\ell_i = \ell'_i$ and $k_i = k'_i$. The integrand $\kappa_{\lambda,\lambda'}(x,y)$ is singular on the diagonal $x_i = y_i$. We first transform this singularity to the axis. Let $\kappa_i(s-t) = \kappa(z_1,\ldots,h_i(s-t),\ldots,z_d)$ and consider the integral
\[
I = \int_{[0,1]} \int_{[0,1]} \phi(s)\,\psi(t)\,\kappa_i(s-t)\,\mathrm{d}s\,\mathrm{d}t .
\]
Introducing the variable $z = s - t$ and splitting the integral yields
\[
I = -\int_{[0,1]} \int_{s}^{s-1} \phi(s)\,\psi(s-z)\,\kappa_i(z)\,\mathrm{d}z\,\mathrm{d}s
= \int_{[0,1]} \int_{s-1}^{0} \phi(s)\,\psi(s-z)\,\kappa_i(z)\,\mathrm{d}z\,\mathrm{d}s
+ \int_{[0,1]} \int_{0}^{s} \phi(s)\,\psi(s-z)\,\kappa_i(z)\,\mathrm{d}z\,\mathrm{d}s .
\]
With $x = s - z$, $y = -z$ we have
\[
\int_{[0,1]} \int_{s-1}^{0} \phi(s)\,\psi(s-z)\,\kappa_i(z)\,\mathrm{d}z\,\mathrm{d}s
= \int_{[0,1]} \int_{0}^{x} \phi(x-y)\,\psi(x)\,\kappa_i(-y)\,\mathrm{d}y\,\mathrm{d}x ,
\]
and therefore,
\[
I = \int_{[0,1]} \int_{0}^{x} \phi(x)\,\psi(x-y)\,\kappa_i(y)\,\mathrm{d}y\,\mathrm{d}x
+ \int_{[0,1]} \int_{0}^{x} \phi(x-y)\,\psi(x)\,\kappa_i(-y)\,\mathrm{d}y\,\mathrm{d}x .
\]
Finally, setting $x = \xi + \eta(1-\xi)$, $y = \xi$, we obtain
\[
\begin{aligned}
I &= \int_{[0,1]} \int_{[0,1]} \phi(\xi + \eta(1-\xi))\,\psi(\eta(1-\xi))\,\kappa_i(\xi)\,(1-\xi)\,\mathrm{d}\xi\,\mathrm{d}\eta \\
&\quad + \int_{[0,1]} \int_{[0,1]} \phi(\eta(1-\xi))\,\psi(\xi + \eta(1-\xi))\,\kappa_i(-\xi)\,(1-\xi)\,\mathrm{d}\xi\,\mathrm{d}\eta . \tag{6.13}
\end{aligned}
\]
The function
\[
f(x,y) = 2^{|\lambda|/2 - |\lambda'|/2}\,(1-y_i)\bigl( \phi_{n'_i}(y_i + x_i(1-y_i))\,\kappa_i(y_i) + \phi_{n'_i}(x_i(1-y_i))\,\kappa_i(-y_i) \bigr)
\]
has a singularity at $y_i = 0$ and satisfies (5.3) with respect to $y_i$, i.e.,
\[
\bigl|\partial_{\hat y_i}^k f(x,y)\bigr| \lesssim 2^{|\lambda|/2 - |\lambda'|/2}\, k!\,(h_i C_f)^k\,(h_i y_i)^{-\frac{\alpha}{d}-k}\, \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}} , \qquad k \in \mathbb{N}_0 .
\]
The integrand $f$ is polynomial in $x_i$ and can be integrated exactly. Thus, similarly to Theorem 5.2.1, we obtain
\[
\bigl|I^{[0,1]^2} f_{\hat y_i} - Q^{n,q}_{h_i\sigma} f_{\hat y_i}\bigr|
\lesssim 2^{|\lambda|/2 - |\lambda'|/2}\, 2^{\ell_i}\, \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, (h_i\,\sigma)^{n(1-\frac{\alpha}{d})} ,
\]
where $\sigma, q$ satisfy (5.7), (5.8). Choosing the number of refinements according to (6.9) we have
\[
\bigl|I^{[0,1]^2} f_{\hat y_i} - Q^{n,q}_{h_i\sigma} f_{\hat y_i}\bigr| \lesssim \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\, \varepsilon .
\]
3. Let $\delta_i = 0$, $\ell_i = \ell'_i$ and $k_i = k'_i - 1$. Similarly to the case of identical elements we have $\kappa_i(s+t) = \kappa(z_1,\ldots,h_i(s+t),\ldots,z_d)$ and transform the integral
\[
I = \int_{[0,1]} \int_{[0,1]} \phi(s)\,\psi(t)\,\kappa_i(s+t)\,\mathrm{d}s\,\mathrm{d}t
\]
into
\[
\begin{aligned}
I &= \int_{[0,1]} \int_{[0,1]} \phi(\xi + \eta(1-\xi))\,\psi(1 + \eta(\xi-1))\,\kappa_i(\xi+1)\,(1-\xi)\,\mathrm{d}\xi\,\mathrm{d}\eta \\
&\quad + \int_{[0,1]} \int_{[0,1]} \phi(\eta\xi)\,\psi(\xi(1-\eta))\,\kappa_i(\xi)\,\xi\,\mathrm{d}\xi\,\mathrm{d}\eta . \tag{6.14}
\end{aligned}
\]
The function
\[
f(x,y) = 2^{|\lambda|/2 - |\lambda'|/2}\,\phi_i(y_i + x_i(1-y_i))\,\kappa_i(y_i+1)\,(1-y_i)
\]
can be integrated exactly in the $x_i$ direction and has no singularity in $y_i$, i.e.,
\[
\bigl|\partial_{\hat y_i}^k f(x,y)\bigr| \lesssim 2^{|\lambda|/2 - |\lambda'|/2}\, k!\,(h_i C_f)^k\, h_i^{-\frac{\alpha}{d}-k}\, \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}} , \qquad k \in \mathbb{N}_0 .
\]
The function
\[
f(x,y) = 2^{|\lambda|/2 - |\lambda'|/2}\,\phi(x_i y_i)\,\kappa_i(y_i)\,y_i
\]
can again be integrated exactly in the $x_i$ direction and has a singularity in $y_i$, i.e.,
\[
\bigl|\partial_{\hat y_i}^k f(x,y)\bigr| \lesssim 2^{|\lambda|/2 - |\lambda'|/2}\, k!\,(h_i C_f)^k\,(h_i y_i)^{-\frac{\alpha}{d}-k+1}\, \|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}} , \qquad k \in \mathbb{N}_0 .
\]
Choosing the numbers of Gauss points and refinements according to (6.11), we again obtain an error in the $i$-th direction of order $\|z_I\|_\infty^{-\alpha+\frac{\alpha}{d}}\,\varepsilon$.

Finally, tensorization arguments as in Theorem 5.2.3 yield the required result.
6.4 Adaptive strategy
As proposed in [31], we define an adaptive strategy to compute the element-element interactions $Q_{(\lambda,n),(\lambda',n')}$ with the precision $\varepsilon_{\ell,\ell'}$ given by (4.14).

We loop over the dimensions $i = 1,\ldots,d$. For each $i$ we do:

1. Starting point. If $\delta_i > C_f \max\{h_i, h'_i\}/4$ we define quadrature points in the $i$-th direction according to (6.7). Else, if $\delta_i = 0$, $\ell_i = \ell'_i$ and $k_i = k'_i$, or $k_i = k'_i - 1$, or $k'_i = k_i - 1$, we define quadrature points according to (6.9) or (6.11). Otherwise go to item 2 if $\ell_i > \ell'_i$, item 3 if $\ell'_i > \ell_i$ and item 4 if $\ell_i = \ell'_i$.

2. Case $\ell_i > \ell'_i$. Replace the larger element $D_{\lambda'}$ by its two sons and compute the associated element-element interactions with precision $2^{-3/2}\varepsilon_{\ell,\ell'}$ according to item 1. The desired element-element interaction is calculated via formula (6.6).

3. Case $\ell'_i > \ell_i$. Replace the larger element $D_\lambda$ by its two sons and compute the associated element-element interactions with precision $2^{-1/2}\varepsilon_{\ell,\ell'}$ according to item 1. The desired element-element interaction is calculated via formula (6.5).

4. Case $\ell_i = \ell'_i$. Replace both elements $D_\lambda$ and $D_{\lambda'}$ by their two sons each and compute the associated element-element interactions with precision $\varepsilon_{\ell,\ell'}$ according to item 1. The desired element-element interaction is calculated via formulas (6.5) and (6.6).
Note that using this strategy we only have to compute element-element interactions for which Theorem 6.3.1 holds. The next lemma shows that the algorithm stops after at most $O(\|\ell - \ell'\|_\infty)$ steps.

Lemma 6.4.1. Let $i \in \{1,\ldots,d\}$. The following statements concerning the computation of the element-element interaction by the above algorithm are valid:

1. The given element-element interaction is subdivided into at most $O(|\ell_i - \ell'_i|)$ interactions $Q_{(\hat\lambda,n),(\hat\lambda',n')}$, where $\hat\ell_i \ge \ell_i$, $\hat\ell'_i \ge \ell'_i$.

2. If $\ell_i \le \ell'_i$, there holds $\ell_i \le \hat\ell_i \le \hat\ell'_i \sim \ell'_i$. The analogous result holds if $\ell'_i \le \ell_i$.

3. On a fixed level $\hat\ell_i$ and $\hat\ell'_i$ the number of directly computed as well as subdivided element-element interactions is $O(1)$.
Proof. See [31, Lemma 9.7].
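A toy recursion (my own illustration; element indices, precisions and the distance criterion are stripped away) makes statement 1 concrete: in each direction the coarser side is refined until the levels match, so the subdivision count in one direction is $|\ell_i - \ell'_i|$.

```python
def subdivisions(l, lp):
    """Subdivision steps in one direction until the interaction can be
    computed directly (toy model of items 2-4 of the adaptive strategy)."""
    if l == lp:
        return 0                                 # item 1: compute directly
    if l > lp:
        return 1 + subdivisions(l, lp + 1)       # refine the coarser D_lambda'
    return 1 + subdivisions(l + 1, lp)           # refine the coarser D_lambda
```

For example, an interaction between levels 7 and 3 is resolved after 4 subdivisions, matching the $O(|\ell_i - \ell'_i|)$ bound.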
Now, with formulas (6.5), (6.6), Lemma 6.4.1 and Theorem 6.3.1, it follows that the proposed quadrature algorithm computes the desired element-element interactions with a precision that stays proportional to $\varepsilon_{\ell,\ell'}$.

Corollary 6.4.2. Let $X$ be a Lévy process with state space $\mathbb{R}^d$ and characteristic triplet $(\mathcal{Q},\nu,\gamma)$. Assume the Lévy density $k(z)$ satisfies (2.21), i.e. is real analytic outside of $\{z_i = 0\}$, $i = 1,\ldots,d$. Let $\varepsilon_{\ell,\ell'}$ be given by (4.14). Then, the number of quadrature points to compute an entry $A_{(\ell',k'),(\ell,k)}$ is at most $O(L^{2d})$, and the overall cost to compute the stiffness matrix $\mathbf{A}$ is at most of log-linear complexity $O(2^L L^{4d-2})$.

Proof. We have for the one-dimensional Gauss points in (6.8) $g, g' \lesssim L$, for the refinements in (6.10) $n \lesssim L$, and for the quadrature points and refinements in (6.12) again $g, n \lesssim L$. Therefore, we need at most $O(L^2)$ quadrature points in each direction $i = 1,\ldots,d$.
7 Model sensitivities and Greeks
Calculating price sensitivities is a central modeling and computational task in risk management and hedging. We distinguish two classes: sensitivities of the price $V$ in (3.1) to variations of a model parameter, like the Greek Vega $\partial_\sigma V$, and sensitivities of $V$ to variations of the state space, such as the Greek Delta $\partial_S V$.
7.1 Sensitivity with respect to model parameters
Suppose the market model, and hence the operator $\mathcal{A} = \mathcal{A}_{BS} + \mathcal{A}_J$ in (3.5), depends on some model parameter $\eta$. We want to calculate the sensitivity of the solution $u$ of (3.5) with respect to $\eta$. To this end, we write $u(\eta_0)$ for a fixed realization $\eta_0$ of $\eta$ in order to emphasize the dependence of $u$ on $\eta_0$ in (3.5). Typical examples are the Greeks Vega ($\partial_\sigma u$), Rho ($\partial_r u$) and Vomma ($\partial_{\sigma\sigma} u$). Other sensitivities, less commonly used in the financial community, are the sensitivity of the price with respect to the jump intensity or to the order of the process that models the underlying. We show that the finite element approximation to such sensitivities satisfies again the scheme (1.12), with a right-hand side $f^{m+\theta}$ which depends on the approximation $u^{m+\theta}_h$ of the pricing function $u$. We also show that the approximations of these sensitivities converge at the same rate as $u_h$.
Definition 7.1.1. Let $X$ be a Lévy process with state space $\mathbb{R}^d$ and characteristic triplet $(\mathcal{Q},\nu,\gamma)$. We call $X$ a parametric Lévy model with admissible parameter set $S_\eta$ if

(i) for all $\eta \in S_\eta$, $X$ is a Lévy process on a filtered probability space $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$;

(ii) the mapping $S_\eta \ni \eta \mapsto (\mathcal{Q},\nu,\gamma)$ is infinitely differentiable.
Let $C$ be a Banach space over a domain $D \subset \mathbb{R}^d$. $C$ is the space of parameters or coefficients in the operator $\mathcal{A}$, and $S_\eta \subseteq C$ is the set of admissible coefficients. We denote by $u(\eta_0)$ the unique solution to (1.6) and introduce the derivative of $u(\eta_0)$ with respect to $\eta_0 \in S_\eta$ as the mapping $D_{\eta_0}u(\eta_0) : C \to \mathcal{V}$,
\[
\tilde u(\delta\eta) := D_{\eta_0}u(\eta_0)(\delta\eta) := \lim_{s\to 0^+} \frac{1}{s}\bigl( u(\eta_0 + s\,\delta\eta) - u(\eta_0) \bigr) , \qquad \delta\eta \in C .
\]
We also introduce the derivative of $\mathcal{A}(\eta_0)$ with respect to $\eta_0 \in S_\eta$,
\[
\widetilde{\mathcal{A}}(\delta\eta)\varphi := D_{\eta_0}\mathcal{A}(\eta_0)(\delta\eta)\varphi := \lim_{s\to 0^+} \frac{1}{s}\bigl( \mathcal{A}(\eta_0 + s\,\delta\eta)\varphi - \mathcal{A}(\eta_0)\varphi \bigr) , \qquad \varphi \in \mathcal{V},\ \delta\eta \in C .
\]
We assume that $\widetilde{\mathcal{A}}(\delta\eta) \in \mathcal{L}(\mathcal{V},\mathcal{V}^*)$, with $\mathcal{V}$ being a real and separable Hilbert space satisfying
\[
\widetilde{\mathcal{V}} \subseteq \mathcal{V} \overset{d}{\hookrightarrow} \mathcal{H} \cong \mathcal{H}^* \overset{d}{\hookrightarrow} \mathcal{V}^* \subseteq \widetilde{\mathcal{V}}^* .
\]
We further assume that there exists a real and separable Hilbert space $\widetilde{\mathcal{V}} \subseteq \mathcal{V}$ such that $\mathcal{A}v \in \mathcal{V}^*$ for all $v \in \widetilde{\mathcal{V}}$. We have the following relation between $D_{\eta_0}u(\eta_0)(\delta\eta)$ and $u$.
Lemma 7.1.2. Let $\widetilde{\mathcal{A}}(\delta\eta) \in \mathcal{L}(\widetilde{\mathcal{V}},\mathcal{V}^*)$ for all $\delta\eta \in C$, and let $u(\eta_0) : (0,T] \to \widetilde{\mathcal{V}}$, $\eta_0 \in S_\eta$, be the unique solution to
\[
\partial_t u(\eta_0) + \mathcal{A}(\eta_0)u(\eta_0) = 0 \quad \text{in } (0,T)\times\mathbb{R}^d , \tag{7.1}
\]
\[
u(\eta_0)(0,\cdot) = g \quad \text{in } \mathbb{R}^d . \tag{7.2}
\]
Then $\tilde u(\delta\eta)$ solves
\[
\partial_t \tilde u(\delta\eta) + \mathcal{A}(\eta_0)\tilde u(\delta\eta) = -\widetilde{\mathcal{A}}(\delta\eta)u(\eta_0) \quad \text{in } (0,T)\times\mathbb{R}^d , \tag{7.3}
\]
\[
\tilde u(\delta\eta)(0,\cdot) = 0 \quad \text{in } \mathbb{R}^d . \tag{7.4}
\]

Proof. Since $u(\eta_0)(0) = g$ does not depend on $\eta_0$, its derivative with respect to $\eta$ is $0$. Now let $\eta_s := \eta_0 + s\,\delta\eta$, $s > 0$, $\delta\eta \in C$. Subtract equation (7.1) from the equation $\partial_t u(\eta_s)(t) + \mathcal{A}(\eta_s)u(\eta_s)(t) = 0$ and divide by $s$ to obtain
\[
\partial_t \frac{1}{s}\bigl( u(\eta_s)(t) - u(\eta_0)(t) \bigr)
+ \frac{1}{s}\bigl( \mathcal{A}(\eta_s) - \mathcal{A}(\eta_0) \bigr) u(\eta_s)(t)
+ \mathcal{A}(\eta_0)\,\frac{1}{s}\bigl( u(\eta_s)(t) - u(\eta_0)(t) \bigr) = 0 .
\]
Taking $\lim_{s\to 0^+}$ gives equation (7.3).
We associate to the operator $\widetilde{\mathcal{A}}(\delta\eta)$ the Dirichlet form $\widetilde{\mathcal{E}}(\delta\eta;\cdot,\cdot) : \widetilde{\mathcal{V}} \times \mathcal{V} \to \mathbb{R}$, given by $\widetilde{\mathcal{E}}(\delta\eta;u,v) = \bigl(\widetilde{\mathcal{A}}(\delta\eta)u, v\bigr)$. The variational formulation of (7.3)-(7.4) reads: find $\tilde u(\delta\eta) \in L^2((0,T);\mathcal{V}) \cap H^1((0,T);\mathcal{V}^*)$ such that
\[
\langle \partial_t \tilde u(\delta\eta), v\rangle + \mathcal{E}(\eta_0; \tilde u(\delta\eta), v) = -\widetilde{\mathcal{E}}(\delta\eta; u(\eta_0), v) , \qquad \forall v \in \mathcal{V},\ \text{a.e. in } (0,T) , \tag{7.5}
\]
\[
\tilde u(\delta\eta)(0) = 0 .
\]
Note that (7.5) has a unique solution $\tilde u(\delta\eta) \in \mathcal{V}$ due to the assumptions on $\mathcal{E}(\eta_0;\cdot,\cdot)$, $\widetilde{\mathcal{A}}$ and $u(\eta_0) \in \widetilde{\mathcal{V}}$. As in (1.12), the fully discrete form is given by: find $\tilde u^m_h \in V_h$ such that for $m = 0,\ldots,M-1$
\[
\langle \Delta t^{-1}(\tilde u^{m+1}_h - \tilde u^m_h), v_h\rangle_{\mathcal{H}} + \mathcal{E}(\eta_0; \tilde u^{m+\theta}_h, v_h) = -\widetilde{\mathcal{E}}(\delta\eta; u^{m+\theta}_h, v_h) , \qquad \forall v_h \in V_h , \tag{7.6}
\]
\[
\tilde u^0_h = 0 ,
\]
or, in matrix notation, $\Delta t^{-1}\mathbf{M}(\underline{\tilde u}^{m+1}_h - \underline{\tilde u}^m_h) + \theta\mathbf{A}\underline{\tilde u}^{m+1}_h + (1-\theta)\mathbf{A}\underline{\tilde u}^m_h = -\widetilde{\mathbf{A}}\underline{u}^{m+\theta}_h$, where $\widetilde{\mathbf{A}}$ is the matrix of the Dirichlet form $\widetilde{\mathcal{E}}(\delta\eta;\cdot,\cdot)$ with respect to $\Phi_h$.
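The scheme (7.6) reuses the operator of the pricing problem and only changes the right-hand side. A minimal sketch on a toy problem (my own illustration: finite differences in place of finite elements, and the heat equation $\partial_t u = \eta\,\partial_{xx}u$ standing in for (7.1), so that $\tilde u = \partial_\eta u$ solves $\partial_t\tilde u = \eta\,\partial_{xx}\tilde u + \partial_{xx}u$ with $\tilde u(0) = 0$):

```python
import numpy as np

N, M, T, eta, theta = 200, 400, 0.1, 0.3, 0.5
h, dt = 1.0 / N, T / M
x = np.linspace(0.0, 1.0, N + 1)

A = np.zeros((N + 1, N + 1))                 # discrete Laplacian, interior rows
for i in range(1, N):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A /= h * h

I = np.eye(N + 1)
Bm = I - theta * dt * eta * A                # theta-scheme matrices
Bp = I + (1.0 - theta) * dt * eta * A

u, w = np.sin(np.pi * x), np.zeros(N + 1)    # price u and sensitivity w
for m in range(M):
    u_new = np.linalg.solve(Bm, Bp @ u)
    u_mid = theta * u_new + (1.0 - theta) * u
    # sensitivity step: same operator, extra right-hand side dt * A u^{m+theta}
    # (here the eta-derivative of the generator is the Laplacian itself)
    w = np.linalg.solve(Bm, Bp @ w + dt * (A @ u_mid))
    u = u_new

# exact: u = exp(-eta pi^2 t) sin(pi x), so du/deta = -pi^2 t u
w_exact = -np.pi**2 * T * np.exp(-eta * np.pi**2 * T) * np.sin(np.pi * x)
```

The discrete sensitivity matches $\partial_\eta u$ to scheme accuracy, and both runs share the same matrix `Bm`: the sensitivity comes at the cost of one extra right-hand side per time step.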
Example 7.1.3. Let $d = 2$ and consider a pure jump Lévy process as in Proposition 2.3.7 with Clayton Lévy copula
\[
F(u) = 2^{2-d} \Bigl( \sum_{i=1}^{d} |u_i|^{-\vartheta} \Bigr)^{-1/\vartheta} \bigl( \eta\,\mathbf{1}_{\{u_1\cdots u_d \ge 0\}} - (1-\eta)\,\mathbf{1}_{\{u_1\cdots u_d \le 0\}} \bigr) ,
\]
and tempered stable marginal densities
\[
k_i(z) = c_i\,\frac{e^{-\beta_i^-|z|}}{|z|^{1+\alpha_i}}\,\mathbf{1}_{\{z<0\}} + c_i\,\frac{e^{-\beta_i^+ z}}{z^{1+\alpha_i}}\,\mathbf{1}_{\{z>0\}} , \qquad i = 1,\ldots,d .
\]
The sensitivity of the Lévy copula with respect to $\vartheta$ is given by
\[
\partial_\vartheta F(u) = \frac{1}{\vartheta^2}\, F(u) \Biggl( \ln\Bigl( \sum_{i=1}^{d} |u_i|^{-\vartheta} \Bigr) + \frac{\vartheta \sum_{i=1}^{d} |u_i|^{-\vartheta}\ln|u_i|}{\sum_{i=1}^{d} |u_i|^{-\vartheta}} \Biggr) .
\]
We compute the sensitivity with respect to $\vartheta$ in $d = 2$ of a basket put option price with payoff $g(s_1,s_2) = (K - \tfrac12 s_1 - \tfrac12 s_2)^+$, where the maturity $T = 0.5$, strike $K = 100$ and interest rate $r = 0.01$. We set $c_1 = c_2 = 1$, $\beta_1^- = 10$, $\beta_1^+ = 15$, $\beta_2^- = 9$, $\beta_2^+ = 16$, $\alpha_1 = 0.5$, $\alpha_2 = 0.7$, $\vartheta = 0.5$ and $\eta = 0.5$. The sensitivity is shown in Figure 7.1.
Figure 7.1: Sensitivity of a basket put option with respect to ϑ in d = 2
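The closed-form derivative $\partial_\vartheta F$ above can be sanity-checked against a central finite difference (my own check, restricted to the positive quadrant $u_1, u_2 > 0$, where the sign factor is the constant $\eta$):

```python
import math

def F(u1, u2, th, eta=0.5):
    """Clayton Levy copula for d = 2 on the positive quadrant (u1, u2 > 0)."""
    S = abs(u1)**(-th) + abs(u2)**(-th)
    return 2.0**(2 - 2) * S**(-1.0 / th) * eta

def dF_dtheta(u1, u2, th, eta=0.5):
    """Analytic theta-derivative from Example 7.1.3."""
    S = abs(u1)**(-th) + abs(u2)**(-th)
    T = abs(u1)**(-th) * math.log(abs(u1)) + abs(u2)**(-th) * math.log(abs(u2))
    return F(u1, u2, th, eta) / th**2 * (math.log(S) + th * T / S)

u1, u2, th = 0.7, 1.3, 0.5
fd = (F(u1, u2, th + 1e-6) - F(u1, u2, th - 1e-6)) / 2e-6   # central difference
```

The analytic formula and the finite difference agree to many digits, which confirms the derivative term-by-term.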
We establish convergence rates for the sequence $\{\tilde u^m\}_{m=0}^{M-1}$ of sensitivities with respect to model parameters as the discretization parameter $h$ tends to zero. We show that the computed sensitivities converge essentially at the same rate as the computed prices. For notational simplicity the subscript $\eta_0$ is omitted.
Theorem 7.1.4. Let $\tilde u, \tilde u^m_h$ be the solutions of (7.5), (7.6) and let the assumptions of Theorem 1.3.7 be fulfilled. Additionally, assume $\tilde u \in C^1([0,T], H^s(D)) \cap C^3([0,T],\mathcal{V}^*)$. Then, for $r \le s \le q+1$,
\[
\bigl\| \tilde u^M - \tilde u^M_h \bigr\|^2_{L^2(D)} + \Delta t \sum_{m=0}^{M-1} \bigl\| \tilde u^{m+1/2} - \tilde u^{m+1/2}_h \bigr\|^2_{\mathcal{V}} \le C(u,\tilde u)\bigl( \Delta t^4 + h^{2(s-r)} \bigr) ,
\]
where $C(u,\tilde u) > 0$ depends on higher space and time derivatives of $u$ and $\tilde u$.
Proof. See [33].
Theorem 7.1.4 shows that if the error of the approximate price converges with $O(h^{s-r}) + O(\Delta t^2)$, the error of the approximate sensitivity preserves the same convergence rates both in space and time.
7.2 Sensitivity with respect to solution arguments
We also want to calculate the sensitivity of the solution $u$ to a variation of the arguments $t, x$. Typical examples are the Greeks Theta ($\partial_\tau u$), Delta ($\partial_x u$) and Gamma ($\partial_{xx} u$). We show that these sensitivities can be obtained directly by postprocessing the finite element solution $u_h$, without additional solver runs. Again, our numerical approximations to these sensitivities converge at the same rate as $u_h$.
Let $u$ be the solution of the variational problem (1.6). We discuss the computation of $D^n u = \frac{\partial^{|n|}}{\partial^{n_1}x_1\cdots\partial^{n_d}x_d}\, u$ for an arbitrary multiindex $n \in \mathbb{N}_0^d$. For $\mu \in \mathbb{Z}^d$ and $h \in \mathbb{R}_+$ we define the translation operator $T^\mu_h \varphi(x) = \varphi(x + \mu h)$ and the forward difference quotient $\partial_{h,j}\varphi(x) = h^{-1}\bigl( T^{e_j}_h \varphi(x) - \varphi(x)\bigr)$, where $e_j$, $j = 1,\ldots,d$, denotes the $j$-th standard basis vector in $\mathbb{R}^d$. For $n \in \mathbb{N}_0^d$ we denote by $\partial^n_h\varphi = \partial^{n_1}_{h,1}\cdots\partial^{n_d}_{h,d}\varphi$ and by $D^n_h$ the difference operator of order $n \ge 0$,
\[
D^n_h \varphi := \sum_{\gamma,\,|n|=n} C_{\gamma,n}\, T^\gamma_h\, \partial^n_h \varphi .
\]
Definition 7.2.1. The difference operator $D^n_h$ of order $|n| = n$ and mesh width $h$ is called an approximation to the derivative $D^n$ of order $s \in \mathbb{N}_0$ if for any $D_0 \subset D$ there holds
\[
\| D^n\varphi - D^n_h\varphi \|_{\widetilde H^r(D_0)} \le C h^s \|\varphi\|_{H^{s+r+n}(D)} , \qquad \forall \varphi \in H^{s+r+n}(D) . \tag{7.7}
\]

Given a basis $\Phi_h$ of $V_h$, the action of $D^n_h$ on $v_h \in V_h$ can be realized as a matrix-vector multiplication $\underline{v}_h \mapsto \mathbf{D}^n_h \underline{v}_h$, where
\[
\mathbf{D}^n_h = \bigl( D^n_h \phi_{h,1}, \cdots, D^n_h \phi_{h,N} \bigr) \in \mathbb{R}^{N\times N} ,
\]
and $\underline{v}_h$ is the coefficient vector of $v_h$ with respect to the basis $\Phi_h$.
Example 7.2.2. Let Vh be as in Example 1.3.4 the space of piecewise linear continuousfunctions on [0, 1] vanishing at the end points 0, 1. For α, β, γ ∈ R and µ ∈ N0 wedenote by diagµ(α, β, γ) the matrices
diagµ(α, β, γ) =
· · · 0 α β γ 0 · · ·· · · 0 α β γ 0 · · ·
. . .. . .
. . .. . .
. . .
72
7.2 Sensitivity with respect to solution arguments
where the entries β are on the µ-th lower diagonal. Then, the matrices Qh of the forwarddifference quotient ∂h and Tµ of the translation operator T µh respectively are given by
Qh = h−1diag0(0,−1, 1), Tµ = diagµ(0, 1, 0).
Hence, for example, we have for the centered finite difference quotient

  D_h² φ(x) = h^{−2}(φ(x + h) − 2φ(x) + φ(x − h)),

of order 2 in one dimension, D_h² = T_{−1} Q_h² = h^{−2} diag₀(1, −2, 1).
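The identity D_h² = T_{−1}Q_h² can be checked directly with small matrices. The following sketch (illustrative only, not part of the thesis implementation) lets the matrices act on nodal values on a uniform grid; the first row, which is affected by the boundary treatment, is excluded from the comparison.

```python
import numpy as np

N, h = 8, 1.0 / 8  # grid size and mesh width (illustrative values)

I = np.eye(N)
Qh = (np.eye(N, k=1) - I) / h          # forward difference: (Qh v)_i = (v_{i+1} - v_i)/h
Tm1 = np.eye(N, k=-1)                  # translation by -h:  (Tm1 v)_i = v_{i-1}

D2h = Tm1 @ Qh @ Qh                    # should be the centered second difference
C = (np.eye(N, k=-1) - 2 * I + np.eye(N, k=1)) / h**2

# The stencils agree on all interior rows (the first row sees the boundary).
assert np.allclose(D2h[1:], C[1:])
```

The product of the two one-sided operators thus recovers the centered stencil h^{−2}(1, −2, 1) away from the boundary.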
Example 7.2.3. Let V_h be, as in Example 1.3.5, the tensor product of the one-dimensional spaces. Then the matrix D_h^n is given by

  D_h^n = ∑_{γ,|n|=n} C_{γ,n} (T_{γ₁} ⊗ ⋯ ⊗ T_{γ_d}) (Q_h^{n₁} ⊗ ⋯ ⊗ Q_h^{n_d}).
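For d = 2, for instance, the Kronecker product Q_h ⊗ Q_h approximates the mixed derivative ∂²/(∂x₁∂x₂). A small sketch (grid size and mesh width chosen ad hoc; with NumPy's row-major ravel, kron(Q_h, Q_h) applied to the raveled nodal values corresponds to differencing in both coordinates):

```python
import numpy as np

N, h = 8, 0.1
Qh = (np.eye(N, k=1) - np.eye(N)) / h       # one-dimensional forward difference
D11 = np.kron(Qh, Qh)                       # tensor-product approximation of d^2/(dx1 dx2)

x = h * np.arange(N)
U = np.outer(x, x)                          # nodal values of u(x1, x2) = x1 * x2
V = (D11 @ U.ravel()).reshape(N, N)         # equals Qh @ U @ Qh.T (row-major convention)

# For the bilinear u the forward difference is exact away from the boundary:
# d^2 u / (dx1 dx2) = 1
assert np.allclose(V[:-1, :-1], 1.0)
```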
We have the following convergence result for the approximation of sensitivities with respect to solution arguments.
Theorem 7.2.4. Let u, u_h^m be the solutions of (1.6), (1.12) and let the assumptions of Theorem 1.3.7 be fulfilled. Additionally, assume that u(x, t) is sufficiently smooth in [0, T] × D and that the approximation ∂_h^β u_h^0 is quasi-optimal in L²(D) for all β ≤ n. Assume further that D_h^n approximates D^n in the sense of Definition 7.2.1. Then there holds

  ‖D^n u^M − D_h^n u_h^M‖²_{L²(D)} + Δt ∑_{m=0}^{M−1} ‖D^n u^{m+1/2} − D_h^n u_h^{m+1/2}‖²_V ≤ C(u) (Δt⁴ + h^{2(s−r)}),

where C(u) > 0 depends on higher space and time derivatives of u.

Proof. See [33].
Remark 7.2.5. (i) Note that we cannot obtain convergence rates higher than s − r, even if u has higher regularity.
(ii) Theorem 7.2.4 shows that arbitrary derivatives of u can be approximated at the same rate as u itself, provided u is sufficiently smooth.
Example 7.2.6. We consider the same problem as in Example 4.3.6 where for d = 2 the elliptic problem

  A[u] = f on Ω = [0, 1]²,

is solved for the exact solution

  u(x) = (x₁² − 2x₁³ + x₁⁴)(x₂² − 2x₂³ + x₂⁴) for x ∈ Ω, and u(x) = 0 otherwise,

with two independent tempered stable marginal densities

  k_i(z) = c_i e^{−β_i^−|z|} / |z|^{1+α_i} · 1_{z<0} + c_i e^{−β_i^+ z} / z^{1+α_i} · 1_{z>0}, i = 1, 2.

We set the model parameters c₁ = c₂ = 1, β₁^− = 10, β₁^+ = 15, β₂^− = 9, β₂^+ = 16, α₁ = 0.5, α₂ = 0.7 and compute the sensitivities D^{(1,0)}u and D^{(1,1)}u in Figure 7.2. It can be seen that the approximation of the sensitivities converges at the same rate as the approximation of the solution u.
[Log-log plot of the L² error versus h for u, D^{(1,0)}u and D^{(1,1)}u; all three curves show the slope s = 2.0.]

Figure 7.2: Convergence rates for the solution u and the sensitivities D^{(1,0)}u, D^{(1,1)}u
For more details and numerical examples we refer to [33].
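The postprocessing idea can be illustrated without the finite element solver: differencing the exact solution u of Example 7.2.6 on a sequence of grids shows the convergence of a forward-difference approximation of D^{(1,0)}u. This is a pure finite difference sketch (first-order, so the observed rate is 1, not the Galerkin rate of Figure 7.2), with grid sizes chosen for illustration.

```python
import numpy as np

def u(x1, x2):
    # exact solution of Example 7.2.6
    return (x1**2 - 2 * x1**3 + x1**4) * (x2**2 - 2 * x2**3 + x2**4)

def du1(x1, x2):
    # exact sensitivity D^{(1,0)} u
    return (2 * x1 - 6 * x1**2 + 4 * x1**3) * (x2**2 - 2 * x2**3 + x2**4)

def error(N):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    U = u(X1, X2)
    approx = (U[1:, :] - U[:-1, :]) / h          # forward difference in x1
    exact = du1(X1[:-1, :], X2[:-1, :])
    return np.max(np.abs(approx - exact))

errs = [error(N) for N in (16, 32, 64)]
# first-order forward differences: the error should roughly halve with h
assert errs[1] < 0.65 * errs[0] and errs[2] < 0.65 * errs[1]
```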
8 Impact of approximations of small jumps
In this chapter we consider a regularization of the multivariate Lévy measure where small jumps are either neglected or approximated by an artificial Brownian motion. This Gaussian approximation is often proposed to simulate Lévy processes [1, 15] or to price options using finite differences [18]. Applying the methods developed in Chapters 4 and 6 gives accurate numerical schemes for either model. We use our scheme to study and compare the error of diffusion approximations of small jumps in multivariate Lévy models via accurate numerical solutions of the corresponding PIDEs for various types of contracts.
8.1 Gaussian approximation
Let X be a d-dimensional Lévy process with characteristic exponent

  ψ(ξ) = −i⟨γ, ξ⟩ + ∫_{ℝ^d} (1 − e^{i⟨ξ,z⟩} + i⟨ξ, z⟩) ν(dz),

where we assume ∫_{|z|>1} |z| ν(dz) < ∞. For ε > 0 let ν^ε be a measure such that ν_ε = ν − ν^ε is a finite measure and ∫_{ℝ^d} |z|² ν^ε(dz) < ∞. Then the characteristic exponent can be decomposed into two parts

  ψ(ξ) = −i⟨γ^ε, ξ⟩ + ∫_{ℝ^d} (1 − e^{i⟨ξ,z⟩}) ν_ε(dz) + ∫_{ℝ^d} (1 − e^{i⟨ξ,z⟩} + i⟨ξ, z⟩) ν^ε(dz) =: −i⟨γ^ε, ξ⟩ + ψ_ε(ξ) + ψ^ε(ξ), (8.1)

where γ^ε_i = γ_i − ∫_ℝ z_i ν_{ε,i}(dz_i), i = 1, …, d. Correspondingly, we can decompose X into its small and large jump parts,

  X_t = γ^ε t + N^ε_t + X_{ε,t} = X^ε_t + X_{ε,t}, (8.2)

where N^ε is a compound Poisson process with jump measure ν_ε. The small jump part X_ε is independent of N^ε and has the covariance matrix Q_ε = ∫_{ℝ^d} z z^⊤ ν^ε(dz). We assume Q_ε is non-singular. Let Σ_ε be a non-singular matrix such that Σ_ε Σ_ε^⊤ = Q_ε. Then X_ε can be approximated by a d-dimensional standard Brownian motion W independent of N^ε. The next theorem shows that the process Σ_ε^{−1} X_ε converges in distribution to W as ε → 0.
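For a concrete one-dimensional tempered stable density (parameters assumed here for illustration, in the style of Example 4.3.6) the small-jump variance Q_ε = ∫_{|z|≤ε} z² ν(dz) entering this decomposition can be computed by simple quadrature; the integrand z²k(z) ~ |z|^{1−α} is integrable at the origin. A sketch with a midpoint rule:

```python
import numpy as np

c, beta_m, beta_p, alpha = 1.0, 10.0, 15.0, 0.5   # illustrative tempered stable parameters

def k(z):
    # tempered stable Levy density k(z), z != 0
    az = np.abs(z)
    return c * np.exp(-np.where(z < 0, beta_m, beta_p) * az) / az**(1 + alpha)

def small_jump_variance(eps, n=200_000):
    # Q_eps = int_{|z| <= eps} z^2 k(z) dz; the midpoint rule avoids z = 0
    dz = 2 * eps / n
    z = -eps + dz * (np.arange(n) + 0.5)
    return np.sum(z**2 * k(z)) * dz

q1, q2 = small_jump_variance(0.1), small_jump_variance(0.05)
assert 0.0 < q2 < q1   # the variance of the neglected small jumps shrinks with eps
```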
Theorem 8.1.1. Let X be a Lévy process with state space ℝ^d and characteristic triplet (0, ν, γ). Assume that Q_ε is non-singular for every ε ∈ (0, 1] and that for every δ > 0 there holds

  ∫_{⟨Q_ε^{−1}z, z⟩ > δ} ⟨Q_ε^{−1}z, z⟩ ν^ε(dz) → 0, as ε → 0.

Assume further that for some family of non-singular matrices {Σ_ε}_{ε∈(0,1]} there holds

  Σ_ε^{−1} Q_ε Σ_ε^{−⊤} → I_d, as ε → 0,

where I_d denotes the identity matrix in ℝ^d. Then, for all ε ∈ (0, 1] there exists a càdlàg process R^ε such that

  X_t =^{(d)} γ^ε t + Σ_ε W_t + N^ε_t + R^ε_t, (8.3)

in the sense of equality of finite dimensional distributions. Furthermore, we have for all T > 0, sup_{t∈[0,T]} |Σ_ε^{−1} R^ε_t| →^{(ℙ)} 0 as ε → 0, where γ^ε, N^ε are given in (8.2) and W is a d-dimensional standard Brownian motion independent of N^ε.

Proof. See [15, Theorem 3.1].
We give an example of the decomposition (8.1) into small and large jumps.
Example 8.1.2. Let X = (X^1, …, X^d)^⊤ be a d-dimensional Lévy process with Lévy measure ν and marginal Lévy measures ν_i, i = 1, …, d. To obtain ν_ε in d = 1 we simply cut off the small jumps, i.e.,

  ν_ε = ν 1_{|z|>ε}. (8.4)

For d > 1 the Lévy measure ν_ε could be obtained by ν_ε = ν 1_{‖z‖_∞>ε}, where jumps are neglected if the jump size in all directions is small. But the corresponding one-dimensional Lévy measures ν_{ε,i}, i = 1, …, d, are then not of the form (8.4). If we choose

  ν_ε = ν 1_{min{|z₁|,…,|z_d|}>ε}, (8.5)

the corresponding one-dimensional Lévy measures ν_{ε,i}, i = 1, …, d, again satisfy (8.4). We consider the Clayton Lévy copula model as explained in Section 2.3.3 with the density k given by (2.17) for d = 2, ϑ = 0.5, η = 0.5 and α = (0.5, 1.2). The corresponding regularized density k_ε, ν_ε(dz) = k_ε(z)dz, as in (8.5) for ε = 0.01 is plotted in Figure 8.1.
We now consider a d-dimensional pure jump process X with characteristic triplet (0, ν, γ) where the Lévy measure ν satisfies (2.18). Let γ be chosen according to Lemma 2.1.9 such that e^{X^j}, j = 1, …, d, are martingales. The covariance matrix is given by Q = ∫_{ℝ^d} z z^⊤ ν(dz). For any ε > 0 the process X can be approximated by a compound Poisson process Y^ε_1 as in (8.2) where the small jumps are neglected as in (8.5),

  Y^ε_{1,t} = γ^ε_1 t + N^ε_t. (8.6)
[Surface and contour plot over (z₁, z₂) ∈ [−0.4, 0.4]².]

Figure 8.1: Regularized anisotropic α-stable Lévy copula density for α = (0.5, 1.2), ε = 0.01 and corresponding contour plot
The characteristic triplet of Y^ε_1 is (0, ν_ε, γ^ε_1) and γ^ε_1 is again such that e^{Y^{ε,j}_1}, j = 1, …, d, are martingales. A better approximation can be obtained by replacing the small jumps with a Brownian motion, which yields a jump-diffusion process Y^ε_2,

  Y^ε_{2,t} = Σ_ε W_t + γ^ε_2 t + N^ε_t, (8.7)

with characteristic triplet (Q_ε, ν_ε, γ^ε_2). The processes W and N^ε are independent. Y^ε_2 has the same covariance matrix as X and drift γ^ε_{2,j} = γ^ε_{1,j} − Q_{ε,jj}/2, j = 1, …, d. For ε → ∞ we obtain a diffusion process Y^∞_t = Σ W_t + γ^∞ t with covariance matrix Q = ΣΣ^⊤ and drift γ^∞_j = −Q_{jj}/2, j = 1, …, d.
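A minimal simulation sketch of the split behind (8.6)–(8.7) in d = 1, with a symmetric tempered stable density and assumed parameters: large jumps become a compound Poisson part, small jumps a Gaussian proxy with variance rate Q_ε. The martingale drift corrections γ^ε_1, γ^ε_2 are omitted here, and the jump sampler uses a crude discretized inverse transform, so this only illustrates the decomposition, not a production simulator.

```python
import numpy as np

rng = np.random.default_rng(0)
c, beta, alpha, eps, T = 1.0, 10.0, 0.5, 0.01, 0.5   # illustrative parameters

def k(z):
    # one-sided Levy density for z > 0; the measure is taken symmetric here
    return c * np.exp(-beta * z) / z**(1 + alpha)

def integrate(f, a, b, n=100_000):
    # simple midpoint rule
    dz = (b - a) / n
    z = a + dz * (np.arange(n) + 0.5)
    return np.sum(f(z)) * dz

lam = 2.0 * integrate(k, eps, 5.0)                    # intensity of jumps with |z| > eps
sigma2_eps = 2.0 * integrate(lambda z: z**2 * k(z), 1e-8, eps)  # small-jump variance rate

# one increment of the jump-diffusion approximation over [0, T] (drift omitted)
n_jumps = rng.poisson(lam * T)
grid = np.linspace(eps, 5.0, 100_000)
cdf = np.cumsum(k(grid))
cdf /= cdf[-1]
jumps = np.interp(rng.random(n_jumps), cdf, grid) * rng.choice([-1.0, 1.0], size=n_jumps)
Y = np.sqrt(sigma2_eps * T) * rng.standard_normal() + jumps.sum()

assert lam > 0.0 and sigma2_eps > 0.0 and np.isfinite(Y)
```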
There are two sources of error: a discretization error due to the mesh width h > 0 and a modeling error due to ε > 0. To assess the impact of ε > 0, we use the discretization (4.12) for ε = 0 and ε > 0. Here, h is chosen so small that the discretization error is negligible in comparison to the truncation error due to the cut-off of jumps of size smaller than ε.
Remark 8.1.3. To obtain a convergent scheme for finite difference methods, ε > 0 was chosen in [18] depending on the mesh width h. For a fixed mesh width h > 0 the discretization error increases as ε → 0, i.e., ε = 0 cannot be used.
8.2 Basket options
Consider a basket option u(t, x) with payoff g(x) where the log price processes of the underlyings are given by the pure jump process X = (X^1, …, X^d)^⊤, and correspondingly u^ε_1(t, x), u^ε_2(t, x) for the processes Y^ε_1, Y^ε_2. We want to study the error |u(T, x) − u^ε_i(T, x)| for ε → 0, i = 1, 2. Since we adjusted the drift to preserve the martingale property, we additionally introduce the processes

  Z^ε_{i,t} = X_t + (γ^ε_i − γ)t, i = 1, 2,
which have the same drift as Y^ε_i and the same Lévy measure as X.
Proposition 8.2.1. Assume g is Lipschitz continuous. Then,

  |E(g(x + X_T)) − E(g(x + Z^ε_{1,T}))| ≲ ∑_{j=1}^d ∫_{−ε}^{ε} |z_j|² ν_j(dz_j), ∀x ∈ ℝ^d, (8.8)

  |E(g(x + X_T)) − E(g(x + Z^ε_{2,T}))| ≲ ∑_{j=1}^d ∫_{−ε}^{ε} |z_j|³ ν_j(dz_j), ∀x ∈ ℝ^d. (8.9)
Proof. We have for i = 1, 2,

  |E(g(x + X_T)) − E(g(x + Z^ε_{i,T}))| ≤ E|g(x + X_T) − g(x + X_T + (γ^ε_i − γ)T)| ≤ T ∑_{j=1}^d |γ^ε_{i,j} − γ_j|.

Furthermore,

  |γ^ε_{1,j} − γ_j| = |∫_{ℝ^d} (e^{z_j} − 1 − z_j) ν^ε(dz)| ≤ ∫_{−ε}^{ε} ∫_0^{|z_j|} e^s |z_j − s| ds ν_j(dz_j) ≤ (e^ε/2) ∫_{−ε}^{ε} |z_j|² ν_j(dz_j),

  |γ^ε_{2,j} − γ_j| = |Q_{ε,jj}/2 − ∫_{ℝ^d} (e^{z_j} − 1 − z_j) ν^ε(dz)| ≤ (1/2) |∫_{−ε}^{ε} ∫_0^{|z_j|} e^s (z_j − s)² ds ν_j(dz_j)| ≤ (e^ε/6) ∫_{−ε}^{ε} |z_j|³ ν_j(dz_j),

for j = 1, …, d. □
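The elementary bound |γ^ε_{1,j} − γ_j| ≤ (e^ε/2) ∫_{−ε}^{ε} |z_j|² ν_j(dz_j) used in the proof can be checked numerically for a one-dimensional tempered stable density (illustrative parameters, midpoint quadrature; since e^z − 1 − z ≤ (e^ε/2)z² pointwise on [−ε, ε], the discrete sums satisfy the same inequality).

```python
import numpy as np

c, beta_m, beta_p, alpha, eps = 1.0, 10.0, 15.0, 0.5, 0.1

def k(z):
    # tempered stable Levy density, z != 0
    az = np.abs(z)
    return c * np.exp(-np.where(z < 0, beta_m, beta_p) * az) / az**(1 + alpha)

n = 400_000
dz = 2 * eps / n
z = -eps + dz * (np.arange(n) + 0.5)               # midpoints, z = 0 excluded

drift_gap = abs(np.sum((np.exp(z) - 1 - z) * k(z)) * dz)   # |int (e^z - 1 - z) nu^eps(dz)|
bound = np.exp(eps) / 2 * np.sum(z**2 * k(z)) * dz

assert drift_gap <= bound
```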
Remark 8.2.2. For d = 1 a similar proof is given in [17].
The same error estimates are also obtained for the compound Poisson and Gaussian approximations.
Proposition 8.2.3. Assume g ∈ C²(ℝ^d). Then,

  |E(g(x + Z^ε_{1,T})) − E(g(x + Y^ε_{1,T}))| ≲ ∑_{j=1}^d ∫_{−ε}^{ε} |z_j|² ν_j(dz_j), ∀x ∈ ℝ^d. (8.10)

Furthermore, assume g ∈ C⁴(ℝ^d) and ∫_{ℝ^d} |z| ν(dz) < ∞. Then,

  |E(g(x + Z^ε_{2,T})) − E(g(x + Y^ε_{2,T}))| ≲ ∑_{j=1}^d ∫_{−ε}^{ε} |z_j|³ ν_j(dz_j), ∀x ∈ ℝ^d. (8.11)
Proof. Consider the Taylor series expansion of g(x) at x₀,

  g(x) = g(x₀) + ∇g(x₀)·(x − x₀) + ½ (x − x₀)·D²g(x₀)(x − x₀) + O(|x − x₀|³),

where D²g is the Hessian matrix of g. Define R^ε = Z^ε_1 − Y^ε_1. The Lévy process R^ε has Lévy measure ν^ε and is independent of Y^ε_1. Since Z^{ε,j}_{1,T} and Y^{ε,j}_{1,T}, j = 1, …, d, have the same expected value, we have E(R^{ε,j}_T) = 0, j = 1, …, d. Thus, we obtain

  |E(g(x + Z^ε_{1,T})) − E(g(x + Y^ε_{1,T}))| = |∑_{j=1}^d E(∂_j g(x + Y^ε_{1,T})) E(R^{ε,j}_T) + ∑_{j=1}^d ∑_{k=1}^d O(E(R^{ε,j}_T R^{ε,k}_T))| ≲ ∑_{j=1}^d E((R^{ε,j}_T)²).

Equation (8.10) follows from

  E((R^{ε,j}_T)²) = ∫_{−ε}^{ε} |z_j|² ν_j(dz_j), j = 1, …, d.

Furthermore, Y^ε_{2,t} = Y^ε_{1,t} + Σ_ε W_t + (γ^ε_2 − γ^ε_1)t, where the standard Brownian motion W is independent of Y^ε_1. We set x̄ = x + (γ^ε_2 − γ^ε_1)T and obtain

  |E(g(x + Z^ε_{2,T})) − E(g(x + Y^ε_{2,T}))|
  = |E(g(x̄ + Z^ε_{1,T})) − E(g(x̄ + Y^ε_{1,T})) + E(g(x̄ + Y^ε_{1,T})) − E(g(x̄ + Y^ε_{1,T} + Σ_ε W_T))|
  = |∑_{j=1}^d ∑_{k=1}^d ½ E(∂_j ∂_k g(x̄ + Y^ε_{1,T})) E(R^{ε,j}_T R^{ε,k}_T) + ∑_{j=1}^d O(E(|R^{ε,j}_T|³)) − ∑_{j=1}^d ∑_{k=1}^d ½ E(∂_j ∂_k g(x̄ + Y^ε_{1,T})) Q_{ε,jk} + O(E(|Σ_ε W_T|⁴))|
  ≲ ∑_{j=1}^d ( ∫_{−ε}^{ε} |z_j|³ ν_j(dz_j) + ( ∫_{−ε}^{ε} |z_j|² ν_j(dz_j) )² ).

Now with c_j = ∫_{−ε}^{ε} |z_j| ν_j(dz_j) < ∞, j = 1, …, d, and Jensen's inequality we have

  ( ∫_{−ε}^{ε} |z_j|² ν_j(dz_j) )² = c_j² ( ∫_{−ε}^{ε} |z_j| · |z_j| ν_j(dz_j)/c_j )² ≤ c_j² ∫_{−ε}^{ε} |z_j|² · |z_j| ν_j(dz_j)/c_j = c_j ∫_{−ε}^{ε} |z_j|³ ν_j(dz_j),

for j = 1, …, d. □
Remark 8.2.4. For d = 1 similar results are given in [64]. Under less restrictive assumptions on g(x), error estimates for d = 1 are also proved in [17]. The results could be extended to d > 1 using, e.g., [5]. These error estimates, however, do not appear to be optimal, as we show in the numerical examples.
Using Propositions 8.2.1 and 8.2.3 we immediately obtain
Corollary 8.2.5. Assume the Lévy measure ν satisfies (2.19) with α = (α₁, …, α_d). Then, for g ∈ C⁴(ℝ^d),

  |E(g(x + X_T)) − E(g(x + Y^ε_{1,T}))| ≲ ε^{2−max{α₁,…,α_d}}, ∀x ∈ ℝ^d, 0 < α_j < 2,

  |E(g(x + X_T)) − E(g(x + Y^ε_{2,T}))| ≲ ε^{3−max{α₁,…,α_d}}, ∀x ∈ ℝ^d, 0 < α_j < 1.
These convergence rates can also be observed numerically even for g ∉ C⁴(ℝ^d) and α > 1.
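The rates in Corollary 8.2.5 reflect the scaling of the truncated moments: near the origin ν_j(dz) behaves like c|z|^{−1−α_j} dz, so ∫_{−ε}^{ε} |z_j|² ν_j(dz_j) ∝ ε^{2−α_j}. A quick numerical check with the pure stable density (tempering omitted so that the scaling is exact; parameters illustrative):

```python
import numpy as np

c, alpha = 1.0, 0.5

def second_moment(eps, n=200_000):
    # int_{-eps}^{eps} |z|^2 c |z|^{-1-alpha} dz by the midpoint rule (z = 0 excluded)
    dz = 2 * eps / n
    z = -eps + dz * (np.arange(n) + 0.5)
    return c * np.sum(np.abs(z)**(1 - alpha)) * dz

m1, m2 = second_moment(0.1), second_moment(0.05)
rate = np.log2(m1 / m2)          # should be close to 2 - alpha = 1.5
assert abs(rate - (2 - alpha)) < 0.05
```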
Example 8.2.6. Let d = 1 and consider the tempered stable density as in Example 4.3.6. We compute the price of a put option with maturity T = 0.5, strike K = 100 and interest rate r = 0.01. We set c = 1, β^− = 10, β^+ = 15 and compute for Y^ε_1, Y^ε_2 the convergence rate with respect to ε at s = 100 using various α's. As shown in Figure 8.2, the rates 2 − α and 3 − α are always obtained.
[Log-log plots of the absolute error versus ε; the observed slopes are s = 1.5, 1.0, 0.8 (left) and s = 2.4, 2.0, 1.8 (right) for α = 0.5, 1.0, 1.2.]

Figure 8.2: Convergence rates with respect to ε for Y^ε_1 (left) and Y^ε_2 (right) in d = 1
Example 8.2.7. Let d = 2 and consider two independent tempered stable marginal densities as in Example 4.3.6. We compute the price of a basket option, g(s₁, s₂) = (K − ½s₁ − ½s₂)^+, with maturity T = 0.5, strike K = 100 and interest rate r = 0.01. We set c₁ = c₂ = 1, β₁^− = 10, β₁^+ = 15, β₂^− = 9, β₂^+ = 16, α₁ = 0.5 and α₂ = 0.7. We compute for Y^ε_1, Y^ε_2 the convergence rate with respect to ε at s = (100, 100). As shown in Figure 8.3, the rates 2 − α₂ and 3 − α₂ are obtained.
[Log-log plots of the absolute error versus ε; the observed slopes are s = 1.3 (left) and s = 2.3 (right).]

Figure 8.3: Convergence rates with respect to ε for Y^ε_1 (left) and Y^ε_2 (right) in d = 2
8.3 Barrier options
Propositions 8.2.1 and 8.2.3 do not hold for barrier options since the option price is not smooth at the boundary ∂D. In particular, it is shown in d = 1 for tempered stable densities with 1 < α < 2 and c^+ = c^− that the derivative of the option price behaves in the log price like |x − log B|^{α/2−1} as x → log B (see, e.g., [39]). Therefore, one obtains a large error at the boundary when approximating X by Y^ε_2.
Example 8.3.1. Let d = 2 and consider again a pure jump process (Q ≡ 0) with two independent tempered stable marginal densities as in Example 8.2.7. We compute the price of a down-and-out basket option, g(s₁, s₂) = (K − ½s₁ − ½s₂)^+, on the domain D = [B, ∞)² with barrier B = 80, maturity T = 0.5, strike K = 100 and interest rate r = 0.01. We set c₁ = c₂ = 1, β₁^− = 10, β₁^+ = 15, β₂^− = 9, β₂^+ = 16, α₁ = 0.5 and α₂ = 0.7. The option price is shown in Figure 8.4.
[Surface plot of the option price over (s₁, s₂) ∈ [80, 140]².]

Figure 8.4: Barrier option price in d = 2 with barrier B = 80 and strike K = 100
The relative error for approximating X by Y^ε_2 is plotted in Figure 8.5. We also show the corresponding error for the non-barrier basket option. As expected, the
relative error is significantly higher for a barrier option close to the barrier.
[Surface plots of the relative error over (s₁, s₂) ∈ [80, 120]² for ε = 0.001, 0.003, 0.007, 0.01.]

Figure 8.5: Relative error for various values of ε using Y^ε_2 (8.7) in place of X in (8.2) for a barrier option (left) and a non-barrier basket option (right) in d = 2 with barrier B = 80 and strike K = 100
References
[1] S. Asmussen and J. Rosiński. Approximations of small jumps of Lévy processes with a view towards simulation. J. Appl. Probab., 38(2):482–493, 2001.
[2] O.E. Barndorff-Nielsen. Normal inverse Gaussian distributions and stochastic volatility modelling. Scand. J. Statist., 24(1):1–13, 1997.
[3] O.E. Barndorff-Nielsen, J. Pedersen, and K. Sato. Multivariate subordination, self-decomposability and stability. Adv. in Appl. Probab., 33(1):160–187, 2001.
[4] C. Berg and G. Forst. Non-symmetric translation invariant Dirichlet forms. Invent. Math., 21:199–212, 1973.
[5] R.N. Bhattacharya. On errors of normal approximation. Ann. Probability, 3(5):815–828, 1975.
[6] F. Black and M. Scholes. The pricing of options and corporate liabilities. J. Political Economy, 81:637–659, 1973.
[7] S. Boyarchenko and S. Levendorskiĭ. Barrier options and touch-and-out options under regular Lévy processes of exponential type. Ann. Appl. Probab., 12(4):1261–1298, 2002.
[8] M. Briani, C. La Chioma, and R. Natalini. Convergence of numerical schemes for viscosity solutions to integro-differential degenerate parabolic problems arising in financial theory. Numer. Math., 98(4):607–646, 2004.
[9] P. Carr, H. Geman, D.B. Madan, and M. Yor. The fine structure of asset returns: An empirical investigation. Journal of Business, 75(2):305–332, 2002.
[10] P. Carr, H. Geman, D.B. Madan, and M. Yor. Self-decomposability and option pricing. Math. Finance, 17(1):31–57, 2007.
[11] P. Carr and D.B. Madan. Option pricing and the fast Fourier transform. Journal of Computational Finance, 2(4):61–73, 1999.
[12] T. Chan. Pricing contingent claims on stocks driven by Lévy processes. Ann. Appl. Probab., 9(2):504–528, 1999.
[13] A. Chernov, T. von Petersdorff, and C. Schwab. Exponential convergence of hp quadrature for integral operators with Gevrey kernels. Research Report 2009-03, Seminar for Applied Mathematics, ETH Zurich, 2009.
[14] P.G. Ciarlet. The finite element method for elliptic problems, volume 4 of Studies in Mathematics and its Applications. North-Holland Publishing Co., Amsterdam, 1978.
[15] S. Cohen and J. Rosiński. Gaussian approximation of multivariate Lévy processes with applications to simulation of tempered stable processes. Bernoulli, 13(1):195–210, 2007.
[16] R. Cont and P. Tankov. Financial modelling with jump processes. Financial Mathematics Series. Chapman & Hall/CRC, Boca Raton, FL, 2004.
[17] R. Cont and E. Voltchkova. A finite difference scheme for option pricing in jump diffusion and exponential Lévy models. SIAM J. Numer. Anal., 43(4):1596–1626, 2005.
[18] R. Cont and E. Voltchkova. Integro-differential equations for option prices in exponential Lévy models. Finance Stoch., 9(3):299–325, 2005.
[19] W. Dahmen, H. Harbrecht, and R. Schneider. Compression techniques for boundary integral equations - asymptotically optimal complexity estimates. SIAM J. Numer. Anal., 43(6):2251–2271, 2006.
[20] W. Dahmen, S. Prössdorf, and R. Schneider. Wavelet approximation methods for pseudodifferential equations. II. Matrix compression and fast solution. Adv. Comput. Math., 1(3-4):259–335, 1993.
[21] H. Dappa. Quasiradiale Fouriermultiplikatoren. PhD thesis, TH Darmstadt, 1982.
[22] P.J. Davis and P. Rabinowitz. Methods of numerical integration. Academic Press, New York-London, 1975.
[23] F. Delbaen, P. Grandits, T. Rheinländer, D. Samperi, M. Schweizer, and C. Stricker. Exponential hedging and entropic penalties. Mathematical Finance, 12:99–123, 2002.
[24] F. Delbaen and W. Schachermayer. The fundamental theorem of asset pricing for unbounded stochastic processes. Math. Ann., 312(2):215–250, 1998.
[25] E. Eberlein and J. Jacod. On the range of options prices. Finance Stoch., 1:131–140, 1997.
[26] E. Eberlein and K. Prause. The generalized hyperbolic model: financial derivatives and risk measures. In H. Geman, D. Madan, S.R. Pliska, and T. Vorst, editors, Mathematical finance - Bachelier Congress, 2000, Springer Finance, pages 245–267. Springer, Berlin, 2002.
[27] A. Ern and J.-L. Guermond. Theory and practice of finite elements, volume 159 of Applied Mathematical Sciences. Springer-Verlag, New York, 2004.
[28] W. Farkas, N. Reich, and C. Schwab. Anisotropic stable Lévy copula processes - analytical and numerical aspects. Math. Models and Methods in Appl. Sciences, 17:1405–1443, 2007.
[29] G.H. Golub and C.F. Van Loan. Matrix computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore, MD, third edition, 1996.
[30] I. S. Gradshteyn and I. M. Ryzhik. Table of integrals, series, and products. Academic Press, New York, 1980.
[31] H. Harbrecht and R. Schneider. Wavelet Galerkin schemes for boundary integral equations - implementation and quadrature. SIAM J. Sci. Comput., 27(4):1347–1370, 2006.
[32] N. Hilber, A.-M. Matache, and C. Schwab. Sparse wavelet methods for option pricing under stochastic volatility. Journal of Computational Finance, 8(4):1–42, 2005.
[33] N. Hilber, C. Schwab, and C. Winter. Variational sensitivity analysis of parametric Markovian market models. In L. Stettner, editor, Advances in Mathematics of Finance, volume 83, pages 85–106. Banach Center Publ., 2008.
[34] N. Jacob. Pseudo differential operators and Markov processes, volume I of Fourier analysis and semigroups. Imperial College Press, London, 2001.
[35] J. Jacod and A.N. Shiryaev. Limit theorems for stochastic processes, volume 288 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, 2nd edition, 2003.
[36] J. Kallsen and A.N. Shiryaev. The cumulant process and Esscher's change of measure. Finance and Stochastics, 6(4):397–428, 2002.
[37] J. Kallsen and P. Tankov. Characterization of dependence of multidimensional Lévy processes using Lévy copulas. Journal of Multivariate Analysis, 97:1551–1572, 2006.
[38] G. Kou. A jump diffusion model for option pricing. Manage. Sci., 48:1086–1101, 2002.
[39] O. Kudryavtsev and S.Z. Levendorskiĭ. Fast and accurate pricing of barrier options under Lévy processes. Preprint, 2007.
[40] J.-L. Lions and E. Magenes. Problèmes aux limites non homogènes et applications. Vol. 1. Travaux et Recherches Mathématiques, No. 17. Dunod, Paris, 1968.
[41] E. Luciano and W. Schoutens. A multivariate jump-driven financial asset model. Quantitative Finance, 6(5):385–402, 2006.
[42] D.B. Madan, P. Carr, and E. Chang. The variance gamma process and option pricing. European Finance Review, 2:79–105, 1998.
[43] A.-M. Matache, P.-A. Nitsche, and C. Schwab. Wavelet Galerkin pricing of American options on Lévy driven assets. Quantitative Finance, 5(4):403–424, 2005.
[44] A.-M. Matache, C. Schwab, and T. P. Wihler. Linear complexity solution of parabolic integro-differential equations. Numer. Math., 104(1):69–102, 2006.
[45] A.-M. Matache, T. von Petersdorff, and C. Schwab. Fast deterministic pricing of options on Lévy driven assets. M2AN Math. Model. Numer. Anal., 38(1):37–71, 2004.
[46] W. McLean. Strongly elliptic systems and boundary integral equations. Cambridge University Press, Cambridge, 2000.
[47] R.C. Merton. Option pricing when underlying stock returns are discontinuous. Journal of Financial Economics, 3(1-2):125–144, 1976.
[48] Y. Meyer. Ondelettes et opérateurs. II. Actualités Mathématiques. Hermann, Paris, 1990. Opérateurs de Calderón-Zygmund.
[49] R.B. Nelsen. An introduction to copulas, volume 139 of Lecture Notes in Statistics. Springer-Verlag, New York, 1999.
[50] S. M. Nikol'skiĭ. Approximation of functions of several variables and imbedding theorems, volume 205 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, New York, 1975.
[51] A. Papapantoleon. Applications of semimartingales and Lévy processes in finance: duality and valuation. PhD thesis, University of Freiburg, 2006.
[52] P.E. Protter. Stochastic integration and differential equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2nd edition, 2005.
[53] N. Reich. Wavelet compression of anisotropic integrodifferential operators on sparse tensor product spaces. PhD thesis, ETH Zurich, 2008.
[54] N. Reich, C. Schwab, and C. Winter. On Kolmogorov equations for anisotropic multivariate Lévy processes. Research Report 2008-03, Seminar for Applied Mathematics, ETH Zurich, 2008.
[55] S. Roman. The formula of Faà di Bruno. Amer. Math. Monthly, 87(10):805–809, 1980.
[56] Y. Saad and M.H. Schultz. GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput., 7(3):856–869, 1986.
[57] K. Sato. Lévy processes and infinitely divisible distributions, volume 68 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1999.
[58] S.A. Sauter and C. Schwab. Quadrature for hp-Galerkin BEM in R^3. Numer. Math., 78(2):211–258, 1997.
[59] D. Schötzau. hp-DGFEM for parabolic evolution problems. PhD thesis, ETH Zurich, 1999.
[60] D. Schötzau and C. Schwab. hp-discontinuous Galerkin time-stepping for parabolic problems. C. R. Acad. Sci. Paris Ser. I Math., 333(12), 2001.
[61] W. Schoutens. Lévy processes in Finance. John Wiley & Sons, Chichester, 2003.
[62] C. Schwab. Variable order composite quadrature of singular and nearly singular integrals. Computing, 53(2):173–194, 1994.
[63] M. Sharpe. Operator-stable probability distributions on vector groups. Trans. Amer. Math. Soc., 136:51–65, 1969.
[64] M. Signahl. On error rates in normal approximations and simulation schemes for Lévy processes. Stoch. Models, 19(3):287–298, 2003.
[65] P. Tankov. Dependence structure of Lévy processes with applications to risk management. Rapport Interne No. 502, CMAPX École Polytechnique, March 2003.
[66] V. Thomée. Galerkin finite element methods for parabolic problems, volume 25 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, 2nd edition, 2006.
[67] T. von Petersdorff and C. Schwab. Fully discrete multiscale Galerkin BEM. In W. Dahmen, A. Kurdila, and P. Oswald, editors, Multiscale wavelet methods for partial differential equations, volume 6 of Wavelet Anal. Appl., pages 287–346. Academic Press, San Diego, CA, 1997.
[68] T. von Petersdorff and C. Schwab. Numerical solution of parabolic equations in high dimensions. M2AN Math. Model. Numer. Anal., 38(1):93–127, 2004.
[69] T. von Petersdorff, C. Schwab, and R. Schneider. Multiwavelets for second-kind integral equations. SIAM J. Numer. Anal., 34(6):2212–2227, 1997.
Curriculum Vitae
Personal details
Name Christoph Winter
Date of birth June 29, 1979
Place of birth Gräfelfing, Germany
Citizenship Germany and USA
Education
05/2005–01/2009 PhD studies in Mathematics at ETH Zurich
Zurich, Switzerland
04/2005 Diploma in Mathematics at TU München
10/1999–04/2005 Studies in Mathematics at TU München
Munich, Germany
05/2003 Master of Science in Mathematics at Virginia Tech
08/2002–05/2003 Studies in Mathematics at Virginia Tech
Blacksburg, Virginia, USA
07/1999 Abitur at Willi-Graf-Gymnasium
07/1990–07/1999 Secondary school at Willi-Graf-Gymnasium
Munich, Germany