introduction to lévy processes - univ-toulouse.fr

39
Introduction to Lévy Processes Huang Lorick [email protected] Document type These are lecture notes. Typos, errors, and imprecisions are expected. Comments are welcome! This version is available at http://perso.math.univ-toulouse.fr/lhuang/enseignements/ Year of publication 2021 Terms of use This work is licensed under a Creative Commons Attribution 4.0 International license: https://creativecommons.org/licenses/by/4.0/

Upload: others

Post on 18-Dec-2021

11 views

Category:

Documents


1 download

TRANSCRIPT

Page 1: Introduction to Lévy Processes - univ-toulouse.fr

Introduction to Lévy Processes

Huang [email protected]

Document typeThese are lecture notes. Typos, errors, and imprecisions are expected.Comments are welcome!

This version is available athttp://perso.math.univ-toulouse.fr/lhuang/enseignements/

Year of publication2021

Terms of useThis work is licensed under a Creative Commons Attribution 4.0 International license:https://creativecommons.org/licenses/by/4.0/

Page 2: Introduction to Lévy Processes - univ-toulouse.fr

Contents

Contents 1

1 Introduction and Examples 21.1 Infinitely divisible distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.2 Examples of infinitely divisible distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.3 The Lévy Khintchine formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41.4 Digression on Relativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Lévy processes 82.1 Definition of a Lévy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82.2 Examples of Lévy processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92.3 Exploring the Jumps of a Lévy Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3 Proof of the Levy Khintchine formula 193.1 The Lévy-Itô Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193.2 Consequences of the Lévy-Itô Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 213.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

4 Lévy processes as Markov Processes 244.1 Properties of the Semi-group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244.2 The Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264.3 Recurrence and Transience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284.4 Fractional Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

5 Elements of Stochastic Calculus with Jumps 315.1 Example of Use in Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315.2 Stochastic Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325.3 Construction of the Stochastic Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335.4 Quadratic Variation and Itô Formula with jumps . . . . . . . . . . . . . . . . . . . . . . . . . 345.5 Stochastic Differential Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

Bibliography 38

1

Page 3: Introduction to Lévy Processes - univ-toulouse.fr

Chapter 1

Introduction and Examples

In this introductive chapter, we start by defining the notion of infinitely divisible distributions. We then giveexamples of such distributions and end this chapter by stating the celebrated Lévy-Khintchine formula. Theproof of the latter will be given in a subsequent chapter.

1.1 Infinitely divisible distributions

Historically, Paul Lévy was interested in "arithmetic of probabilities", where he would investigate properties ofprobabilities distributions that can be decomposed as the sum of independent copies of itself. This field gaverise to what we now call infinitely divisible distributions.

Infinitely divisible distributions and Lévy process are closely related, as Lévy process have infinitelydivisible distributions.

We start by introducing the concept of infinitely divisible distribution and give some examples.

Definition 1.1.1. We say that a random variable X is infinitely divisible if for all n ∈ N, there existsY1, . . . , Yn such that

X(d)= Y1 + · · ·+ Yn.

A very simple consequence of this definition is the following result:

Proposition 1.1.2. The following are equivalent:

• X has an infinitely divisible distribution,

• µX the distribution of X has an n-convolution root that is itself the distribution of a random variablefor each n,

• φX the characteristic function of X has an n-root that is itself the characteristic function of a randomvariable for each n.

We leave the proof as an exercise.

1.2 Examples of infinitely divisible distributions

Gaussian random variablesLet X be a random vector. We say that X has a Gaussian distribution if there exists m ∈ R and a symmetricpositive definite matrix A such that X has density:

1(2π)d/2

√det(A)

exp(−1

2 〈x−m,A−1(x−m)〉

).

In this case, we write X ∼ N (m,A), m is the mean and A the covariance matrix.An easy exercise gives that the Fourier transform of such random variable is

φX(ξ) = exp(i〈ξ,m〉 − 1

2 〈ξ, Aξ〉).

2

Page 4: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 1. INTRODUCTION AND EXAMPLES 3

Hence, it is easy to see that:

φX(ξ)1/n = exp(i〈ξ, m

n〉 − 1

2 〈ξ,A

nξ〉).

Consequently, we have that X is infinitely divisible with Yi ∼ N(mn ,

An

).

Poisson random variableWe say that a discrete random variable X has a Poisson distribution with parameter λ if

P(X = k) = e−λλk

k! .

Consider now Y independent of X with Poisson distribution of parameter µ , we have:

P(X + Y = k) =k∑l=0

P(X = l, Y = k − l) =k∑l=0

e−λλl

l! e−µ µk−l

(k − l)! .

Grouping terms in the last identity, we get:

P(X + Y = k) = e−(λ+µ) 1k!

k∑l=0

k!l!(k − l)!λ

lµk−l = e−(λ+µ) (λ+ µ)k

k! .

Consequently, convolution of two Poisson distribution is a Poisson distribution and Poisson distributions areinfinitely divisible.

Alternatively, one can show that the characteristic function for a Poisson distribution is

φX(ξ) = exp(λ(eiξ − 1)),

giving that Poisson distributions are infinitely divisible, with Yi with Poisson distribution of parameter λn .

Compound Poisson random variableConsider N to be a Poisson random variable with parameter λ. Since N is integer-valued, one can form thefollowing sum:

X =N∑k=1

Yi,

where Yi are independent and identically distributed, independent of N . Let us denote µY their commondistribution.

Proposition 1.2.1. The characteristic function of X is

φX(ξ) = exp(λ

∫(ei〈ξ,y〉 − 1)µY (dy)

).

Proof.

φX(ξ) = E(ei〈ξ,X〉) = E(ei〈ξ,∑N

i=1Yi〉) = E

(+∞∑k=0

ei〈ξ,∑k

i=1Yi〉1N=k

).

Now, exploiting the independence of N and Yi’s, we can write:

φX(ξ) = E

(+∞∑k=0

ei〈ξ,∑k

i=1Yi〉

)P(N = k)

=+∞∑k=0

E(ei〈ξ,

∑k

i=1Yi〉)e−λ

λk

k! .

Page 5: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 1. INTRODUCTION AND EXAMPLES 4

Now, we note that

E(ei〈ξ,

∑k

i=1Yi〉)

=k∏i=1

E(ei〈ξ,Yi〉) = φY (ξ)k,

denoting φY the common characteristic function of the Yi’s. We thus obtained

φX(ξ) =+∞∑k=0

e−λλk

k! φY (ξ)k = exp(λ(φY (ξ)− 1

)).

To conclude, we only write φY (ξ) as∫ei〈ξ,y〉µY (dy).

Hence, we see that a compound Poisson distribution also has an infinite divisible distribution.

1.3 The Lévy Khintchine formula

One can notice that in every example above, the characteristic function has an exponential form. This is nocoincidence, as it is a shared properties by all infinitely divisible distributions. In fact, one can even give moreinformation on the exponent. This is the so-called Lévy Khintchine formula. In this section, we only state theresult, the proof will be given later.

Theorem 1.3.1. A probability distribution µ on Rd is infinitely divisible if and only if there exists

• a vector b ∈ Rd, called the drift, or mean,

• a symmetric positive definite d× d matrix A , called the covariance matrix,

• a measure on Rd ν such that∫Rd\0min(|y|2, 1)ν(dy) < +∞,

such that: ∫Rdei〈ξ,y〉µ(dy) = exp

(i〈b, ξ〉 − 1

2 〈ξ, Aξ〉+∫Rd\0

ei〈ξ,y〉 − 1− 〈ξ, y〉1|y|≤1ν(dy)).

Remark 1.3.2. Such a measure ν is called a Lévy measure. Later on, this measure will be linked to jumpsthe when discussing Lévy process. It shall be noted that one can state the whole theory by adding thatν0 = 0 and integrating over Rd, the point being that there should be no jumps of size 0. Besides, there isnothing special about the cut-off 1|y|≤1 appearing above, one could take any ε > 0 and consider instead1|y|≤ε, or even 1

1+|y|2 . Doing that would change the value for b.

Remark 1.3.3. Obviously, the outstanding part of the previous theorem is the only if part. Indeed, if we aregiven a distribution with the above characteristic function, it is quite easy to see that it is infinitely divisible.

Definition 1.3.4. The triple (A, ν, b) above is called the characteristic triplet, and they completely determinethe distribution µ. Note that since A is a symmetric positive definite matrix, we will interchangeably write(Q, ν, b) as generating triplet, where Q is the quadratic form defined by Q(z) = 〈z,Az〉.

One interpretation of this result is that any infinitely divisible distribution can be decomposed as thesum of fundamental building blocks. One would immediately observe that 1

2 〈ξ, Aξ〉 in the exponent comesfrom a Gaussian distribution. Besides, barring the term multiplied by the indicator function, the integral∫Rd\0(e

i〈ξ,y〉 − 1)ν(dy) is the characteristic function of a compound Poisson process.

Stable distributionsIn this paragraph, we introduce a very important class of distributions known as Stable Distributions.Historically, those distributions arise from extensions of the Central Limit Theorem. Let X1, X2, . . . be asequence of i.i.d. random variable, and for an, bn two sequence of real numbers, form

Sn = X1 + · · ·+Xn − anbn

.

If there exists a random variable X such that Sn converges in distribution to X, then we say that X has astable distribution. A rather classical example of such distributions is for instance the case when X has a

Page 6: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 1. INTRODUCTION AND EXAMPLES 5

finite second moment. In this case, one can take bn = σ√n and an = m, and we see that N (m,σ2) is a stable

distribution.As an exercise, the reader can prove the following result:

Proposition 1.3.5. Sn ⇒ X if and only if for all n, there exists cn and dn such that

X1 + · · ·+Xn = cnX + dn,

where X1, . . . , Xn are independent copies of X.

Remark 1.3.6. In the previous proposition, if dn can be taken to be 0, then X is said to be strictly stable.Besides, it can be shown that the only possible choice for cn is of the form σn1/α. This parameter α is calledthe index of the stable distribution.

The next result characterises Lévy-Khinchine exponent for a stable process.

Theorem 1.3.7. The Lévy-Khinchine exponent of a stable distribution can be one of two forms:

1. α = 2, then ν = 0 so that X ∼ N (b, A),

2. α < 2, then A = 0 and ν is of the form:

ν(dx) = C1dx

x1+α1x≥0 + C2dx

|x|1+α1x<0, where C1, C2 ≥ 0.

The proof of this result can be found in Sato [11]. In the one-dimensional case, an extensive discussioncan be found in Zolotarev [12]. The higher dimensional cases are more elaborate, and many questions are stillopen to this day. We must also mention the book by Samorodnitsky and Taqqu [10].

We conclude this paragraph by giving an alternate expression for the exponent of a Stable distribution inone dimension.

Theorem 1.3.8. A random variable X has a stable distribution if and only if there exists σ > 0, β ∈ [−1, 1]and b ∈ R such that

• if α = 2φX(ξ) = exp(iξb− 1

2σ2ξ2)

• if α < 2 and α 6= 1φX(ξ) = exp

(iξb− σα|ξ|α

[1− iβsgn(ξ) tan

(πα2

) ]),

• if α = 1,

φX(ξ) = exp(iξb− σ|ξ|

[1 + iβ

2πsgn(ξ) log(|ξ|)

]),

The proof of this result can be found in all three books mentioned above.

Remark 1.3.9.

• The parameters b and σ are designates respectively the drift and scale, whereas β is the skewness of thedistribution. Taking β = 0 gives a symmetrical stable distribution.

• Plugging β = 0 and b = 0, we see that the exponent of a stable process is essentially |ξ|α, for α rangingfrom 0 to 2. Because of that, we see that every stable distribution has a density, ranging from theGaussian density to the Cauchy density:

fX(x) = σ

π[(x− b)2 + σ2] .

Series representations for those densities are available, often relying on special functions. Note that forα < 2, the distributions are heavy-tailed. In fact, it can be shown that if X has an α stable distribution,then E(|X|γ) < +∞ for all γ < α.

Page 7: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 1. INTRODUCTION AND EXAMPLES 6

We end this paragraph on stable distributions by mentioning the following result from Chambers, Mallowand Stuck, ofter abbreviated CMS method in the literature.

Theorem 1.3.10. Consider U and W two independent random variables, such that

• U has a uniform distribution on[−π2 ,

π2]

• W has an exponential distribution of parameter 1,

Set

ζ = −β tan πα2 , and ξ =

1αarctan(−ζ) if α 6= 1π2 if α = 1

If α 6= 1, then

X = (1 + ζ2) 12α

sin(α(U + ξ)

)cos(U) 1

α

cos(U − α(U + ξ)

)W

1−αα

If α = 1, then

X = 1ξ

((π2 + βU

)tan(U)− β log

( π2W cosUπ2 + βU

)).

X has a stable distribution with index α, and skewness β.

The proof can be found in

1.4 Digression on Relativity

In this section, we would like to give an example in the theory of relativity where infinitely divisible distributionsarise. More precisely, we will discuss the relativistic stable distribution.

Consider a particule in R3 whose mass is m > 0 and momentum is p = (p1, p2, p3) ∈ R3. According to themodels in relativity theory, the total energy of this particule is

√m2c4 + c2|p|2, where c is the speed of light.

Subtracting mc2 which is the energy due to mass, the kinetic energy of the particule is then given by

E(p) =√m2c4 + c2|p|2 −mc2.

We considerφm,c(p) = e−E(p) = exp

(−√m2c4 + c2|p|2 +mc2

).

Theorem 1.4.1. φm,c is the characteristic function of an infinitely divisible distribution.

Proof. This proof is in two parts. First, using Bochner’s theorem, we identify φm,c as a characteristic function.Next, we express the nth root of φm,c as a characteristic function as well.

We first recall Bochner’s theorem.

Theorem 1.4.2. A function ψ is a characteristic function if and only if ∀n ∈ N, ∀z1, . . . , zn ∈ C, ∀p1, . . . , pn,n∑

i,j=1ψ(pi − pj)zizj ≥ 0

Rewriting the kinetic energy as:

−E(p) = mc2

(1−

√1 + |p|2

m2c2

)= mc2ψ(p),

it is enough to use Bochner’s theorem on eψ(p). In fact, one clever way to rewrite ψ is though the use of theGamma function, that is:

ψ(p) = 1− 12√π

∫ +∞

0

(1− e−(1+ |p|2

m2c2 )x)

dx

x3/2 .

We point out that at first glance, we might have a problem considering these integrals at 0, and we shouldconsider a sequence approaching zero in order to be perfectly rigourous. But as it is not the main focus of

Page 8: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 1. INTRODUCTION AND EXAMPLES 7

these notes, we will just admit these integrals to be defined. Now to show that ψ is positive definite, it isenough to focus on the exponential part:

n∑i,j=1

zizj

(1

2√π

∫ +∞

0

(e−(1+

|pi−pj |2

m2c2 )x)

dx

x3/2

)= 1

2√π

∫ +∞

0e−x

n∑i,j=1

zizje−|pi−pj |

2

m2c2 x

︸ ︷︷ ︸

positive definite

dx

x3/2 .

Indeed, p 7→ e−|p|2

m2c2 x is the characteristic function of a random variable with distribution N (0, m2c2

2x ). Thus,we obtained that

p 7→ φm,c(p) = exp(−√m2c4 + c2|p|2 +mc2

)is the characteristic function of some distribution. To see that it is infinitely divisible, we write:

φm,c(p)1n = exp

(− 1n

√m2c4 + c2|p|2 + mc2

n

)= exp

(−√

(nm)2( cn

)4+ c2|p|2 +mn

( cn

)2)

= φnm, cn (p).

Remark 1.4.3. This remarkable fact allowed physicists to use criteria developed for probability theory torelativity. In particular, the existence of bound states1 for relativistic Schrodinger operators follows from theapplication of a recurrence criteria.

1https://en.wikipedia.org/wiki/Bound_state

Page 9: Introduction to Lévy Processes - univ-toulouse.fr

Chapter 2

Lévy processes

In this chapter, we define Lévy processes and give a few examples of such processes. We discuss their relationwith infinitely divisible distributions and the nature of their jumps. We will spend a large part of thischapter discussing integration with respect to a Poisson random measure in order to set-up the proof of theLévy-Khintchine formula in the next chapter.

2.1 Definition of a Lévy process

There exists many equivalent definition for the Brownian motion. As it is not the main focus of these lectures,here is one definition that will suffices us.

Definition 2.1.1. A stochastic process (Bt)t≥0 is a Brownian motion if:

• Almost surely, t 7→ Bt is continuous,

• For all s, t > 0, Bs+t −Bt has the same distribution as Bs

• For all n ≥ 1 and all times 0 ≤ t0 ≤ t1 ≤ · · · ≤ tn, the random variables Bt0 , Bt1 −Bt0 , . . . , Btn −Btn−1

are independent.

Now one can wonder, what happens if we drop the assumption on continuity of t 7→ Bt ? The readertrained in probability would then observe that the Poisson process (more on that one later) also fit thedescription. In fact, the class of all process with independent and stationary increments is known as the Lévyprocesses. Let us write a formal definition.

Definition 2.1.2. A stochastic process (Xt)t≥0 in Rd is a Lévy process if the following conditions are satisfied:

1. Independent increments:for all n ≥ 1 and all times 0 ≤ t0 ≤ t1 ≤ · · · ≤ tn, the random variables Xt0 , Xt1 −Xt0 , . . . , Xtn −Xtn−1

are independent.

2. Stationarity of increments:for all s, t ≥ 0, the distribution of Xt+s −Xs does not depends on s.

3. X0 = 0 almost surely.

4. It is stochastically continuous:

P(|Xs −Xt| > ε

)−→s→t

0.

5. It is càdlàg almost surely.

Remark 2.1.3. The last item above can be dropped, as one can prove that there always exists a càdlàgmoditication (i.e. a process that is different on a set of measure zero). However, proving this is quite involvedand we opt to add the càdlàg property in the definition of a Lévy process.

8

Page 10: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 9

The fact that the previous definition actually give rise to a probability measure on the space of càdlàgfunctions from R+ to Rd invokes Kolmogorov’s extension criterion. The details can be found in Billingsley [3].

Obviously, the Brownian motion and the Poisson process satisfies all of these properties. The reader canalso observe that the sum of a Poisson process and a Brownian motion also satisfies these properties.

In the next chapter, we will see that any process satisfying those properties can be decomposed as thesum of a Brownian motion, a compound Poisson process and an L2 martingale. This is the celebrated LévyItô decomposition.

Now, beside the fact that Gaussian and Poissonian distributions are in both, what would be the linkbetween Lévy processes and infinitely divisible distributions? The answer is that at any given time, a Lévyprocess has an infinitely divisible distribution.

Proof. Let (Xt)t≥0 be a Lévy process. For all n ∈ N, we can write:

Xt = Xt −Xn−1n t +Xn−1

n t −Xn−2n t + · · ·+X t

n−X0

We used the fact that X0 = 0. Now, all of those increments are independent, and identically distributed (sincewe are considering increments of size t/n), and we decomposed Xt as sum of i.i.d. random varaible. Thus, Xt

is indeed infinitely divisible.

Being infinitely divisible, the characteristic function of Xt must also satisfy the Lévy-Khintchine formula:

E(ei〈ξ,Xt〉) = exp[t

(i〈b, ξ〉 − 1

2 〈ξ, Aξ〉+∫Rd\0

ei〈ξ,y〉 − 1− 〈ξ, y〉1|y|≤1ν(dy))]

,

for some characteristics (b, Aν). Therefore, we will refer to the triplet as the characteristic triplet for the Lévyprocess X as well.

2.2 Examples of Lévy processes

As we saw earlier, Gaussian and Poisson distributions are infinitely divisible. This means that their continuoustime counter-parts, that are the Brownian motion and the Poisson process are Lévy processes.

Just in case the reader is unfamiliar with Poisson processes on R, here’s a brief summary. A Poisson processis the only stochastic process with independent and stationary increments with bounded jumps. Equivalently,a Poisson Process of parameter λ has at all time a Poisson distribution of parameter λt:

P(Nt = k) = e−λt(λt)k

k! .

As such, one can compute its characteristic function:

φNt(ξ) = exp(λt(eiξ − 1)).

We see that we can force a Lévy-Khintchine exponent form by writing this as

φNt(ξ) = exp(t

∫R

(eiξx − 1)λδ1(dx)).

Thus, the Lévy measure of a Poisson process is a Dirac mass.Similarly, we saw that a compound poisson random variable was infinitely divisible as well, then the

stochastic process

Xt =Nt∑i=1

Yi

where Nt is a Poisson process is also a Lévy process. We naturally call this one the compound poisson process.The same calculation as above then gives:

φXt(ξ) = exp(t

∫(ei〈ξ,y〉 − 1)λµY (dy)

),

and we see that the compound Poisson process has the Lévy measure λµY (dy).

Page 11: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 10

Remark 2.2.1. The Lévy measure has an interpretation in terms of jumps of the Lévy process. Indeed, theBrownian motion, having continuous trajectories, its Lévy measure is zero everywhere. The Poisson processhas jumps of size 1, giving a Dirac mass at 1 as Lévy measure, and a compound Poisson, whose jumps are therealisation of the random variables Yi’s at rate λ has Lévy measure λµY (dy). In general, the Lévy measurecan be seen as the intensity of the jumps in a certain region of space.

The Gamma ProcessWe consider (Xt)t≥0 such that for all t ≥ 0,

L(Xt) = Γ(αt, β), α, β > 0.

This means that for all t > 0, the density of Xt is

f(t, x) = βαt

Γ(αt)xαt−1e−βx1x>0.

A simple integration yields that the characteristic function of Xt is φXt(ξ) = βαt

(β−iξ)t . To simplify thecomputations, we take α = β = 1, and give an alternative expression for characteristic function more suitableto the Lévy-Khintchine formula:

1(1− iξ)t = e−t ln(1−iξ).

Now, notice that (ln(1− iξ)

)′= i

1− iξ = iφX1(ξ) = i

∫ +∞

0eiξxe−xdx.

Indeed, for α = β = 1, X1 has an exponential distribution of parameter 1. Integrating both sides with respectto ξ gives:

ln(1− iξ) =∫ +∞

0

(eiξx − 1

)e−xxdx.

In other words, we exhibited the Lévy measure of (Xt)t≥0 to be ν(dx) = e−x

x dx.

Stable SubordinatorsIn general, a subordinator is just a non decreasing process. But for a Lévy process to be increasing, meansseveral things. First, there cannot be a Brownian part, as the Brownian motion cannot be (just) increasing.

Second, the Lévy measure cannot charge (−∞, 0), otherwise, the process would see negative jumps andthe trajectories cannot be increasing.

Theorem 2.2.2. If T is a subordinator, then its characteristic function takes the form

E(eiξT ) = exp(ibξ +

∫ +∞

0(eiξx − 1)µ(dx)

),

where b ≥ 0and the Lévy measure satisfies the additional requirements:

µ(

(−∞, 0))

= 0 and∫ +∞

0min(1, y)µ(dy) < +∞.

The proof of this result can be found in Bertoin [2]. We will say more on the subject one we proved theLévy Khintchine formula.

Probably the most used type of subordinator is the α stable subordinator; that is as its name indicates,a stable process with increasing trajectories. Let us denote (Tαt )t≥0 such process. Looking at the previousTheorem, we see that we need to take 0 < α < 1 to guarantee the conditions on the Lévy measure. Besides,the measure having to give zero mass to all negative reals, we see that we have to get:

E(eiξTαt ) = e−ξ

α

.

Now, a simple computation gives

ξα = α

Γ(1− α)

∫ +∞

0

(1− eiξx

) dx

x1+α .

Page 12: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 11

We thus see that the characteristic triplet of such α stable subordinator has to be (0, αΓ(1−α)

dxx1+α ).

Those type of random processes are useful for considering time-change. Let us give one example using(Tαt )t≥0 and a d dimensional Brownian motion.

Example 2.2.3. Consider (Bt)t≥0 and (Tαt )t≥0 an α stable subordinator, independent of B. The subordinatedBrownian motion (BTαt )t≥0 is a d dimensional 2α stable process. Indeed, we find its characteristic function tobe

E(e〈ξ,BTαt 〉) = e−|ξ|α

.

This example gives us a very simple way to get a d dimensional α stable process. We have to mention thoughthat we do not get all the Stable process in dimension d in this way, since the process obtained is clearlysymmetric.

2.3 Exploring the Jumps of a Lévy Process

In this section, we investigate the structure of the jumps of a Lévy process. We will link the jumps to aPoisson random measure, which will lead us to the celebrated Lévy-Itô decomposition. Henceforth, (Xt)t≥0will denote a Lévy process with generating triplet (b, A, ν).

Remark 2.3.1. Note that until now, we did not specify where did the triplet come from. The Lévy-Khintchinerepresentation states that any Lévy process is characterised by a triplet, but the origin of this triplet is fornow unclear. In this section, we will define these objects in relation to some path properties of the process.

The Large Jumps of a Lévy process as a Compound Poisson ProcessThe idea is to see within the jumps of a Lévy process, the structure of a Poisson point process:

••BO•••

•• •

• •BO••

/

The difficulty in the analysis of the jumps of a Lévy process comes from the fact that even though thejumps are countable, it is possible to have: ∑

0<s≤t|∆Xs| = +∞.

Page 13: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 12

In other words, it is possible for the jumps to accumulate. This difficulty will be dealt with thanks to the factthat Lévy processes will always have the property that∑

0<s≤t|∆Xs|2 < +∞.

To exploit this, we need to define the jump measure associated with our Lévy process. Fix A a Borel set suchthat 0 /∈ A. We define the random variables:

TA1 = inft > 0; ∆Xt ∈ A,...

TAn+1 = inft > TAn ; ∆Xt ∈ A,...

Since X has càdlàg paths and that 0 /∈ A, we see that

TAn ≥ t ∈ Ft+ = Ft.

Thus, those random variables are stopping times. Besides, the assumption 0 /∈ A yields that

limn→+∞

TAn = +∞ almost surely.

We introduce Nt(A) the following quantity:

Nt(A) = #0 ≤ s ≤ t; ∆Xs ∈ A =∑

0<s≤t1∆Xs∈A =

+∞∑n=1

1TAn ≤t.

This quantity is a counting process without explosion (since TAn → +∞) that counts the number of times thejumps lands in A.

Theorem 2.3.2. Let A be a Borel set such that 0 6∈ A. Then, Nt(A) is a Poisson process.

Proof. We can see that for all times 0 ≤ s < t <∞,

Nt(A)−Ns(A) ∈ σXu −Xu, s ≤ v ≤ u ≤ t,

and thanks to the fact that X has independent increments, Nt(A) − Ns(A) is independent of Fs, that is,Nt(A) has independent increments.

Finally, we observe that Nt(A) − Ns(A) counts the number of jumps that Xs+u − Xs has in A, for0 ≤ u < t− s. Using the fact that X has stationary increments, we then conclude that Nt(A)−Ns(A) hasthe same distribution as Nt−s(A).

To summarise,

• Nt(A) is a counting process,

• Nt(A) has independent increments,

• Nt(A) has stationary increments

we can conclude that Nt(A) must be a Poisson process.

We also define ν(A) to be the quantity:

ν(A) = E[N1(A)],

that is ν(A) is the intensity (or parameter) of the Poisson process Nt(A). Consequently, we deduce thatE[Nt(A)] = tν(A).

Page 14: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 13

Remark 2.3.3. A rather useful property of Nt(A) is that if A and B are disjoints, Nt(A) and Nt(B) areindependent. This property comes from the fact that Nt(A) and Nt(B) relies on different increments of(Xt)t≥0, as soon as A and B are disjoints.

Thus, the large jumps of a Lévy process give rise to a Poisson process. The fact that only large jumps areconsidered comes from the assumption 0 6∈ A. Note that the Poisson process Nt(A) explicitly depends on theprescribed Borel set A. We can thus ask what is the dependency of this process with respect to the Borel set?

Theorem 2.3.4. The set function A 7→ Nt(A) defines a σ-finite measure on R\0. The set functionA 7→ ν(A) = E[Nt(A)] is also a σ-finite measure on R\0.

Proof. By construction, Nt(A) is a counting measure. Besides, it is clear from the linearity properties of theexpectation that ν is also a measure.

Definition 2.3.5. The measure ν is called the Lévy measure of the process (Xt)t≥0. This measure is thethird element in the characteristic triplet of (Xt)t≥0.

The fact that ν(A) < +∞ when 0 /∈ A is actually a consequence of the fact that Nt(A) has jumps of size1. Indeed, the moments of the Lévy measure are closely related to the moments of the Lévy process. Moreprecisely, we have the following result:

Theorem 2.3.6. Let (Xt)t≥0 be a Lévy process with bounded jumps:

supt≥0|∆Xt| < C,

where C is a fixed non-random constant. Then, for all m ≥ 1, E[|Xt|m] < +∞, that is Xt has moment ofevery order.

Since Nt(A) has jumps of size 1, thus bounded jumps, it has moments of every order. The Lévy measureis defined to be the 1st moment of Nt(A), thus it is finite.

Note that this is the first step towards satisfying the definition of a Lévy measure:∫Rd

min(1, |x|2)ν(dx) < +∞,

since we just obtained that∫|x|>1 ν(dx) < +∞. We will deal with the part

∫|x|≤1 |x|

2ν(dx) later.Note that we actually have a stronger result, linking the moments of the Lévy measure to the moments of

the process itself. See Theorem 25.3 p 159 in Sato [11].

Proof of Theorem 2.3.6. This proof follows the proof of Theorem 2.4.7 p118 in Applebaum [1]. We define thesequence of stopping times

T1 = inft > 0; |Xt| > C...

Tn+1 = inft > Tn; |Xt −XTn | > C.

This sequence form an increasing sequence of stopping times. First, assume T1 < +∞ almost surely. Since|∆Xs| ≤ C for any time, we have by induction that:

supt>0|Xs∧Tn | ≤ 2Cn.

By the strong Markov property, we get that Tn − Tn−1 is independent of FTn−1 and has the same distributionas T1. Thus, because T1 < +∞, we have

E(e−Tn) = E(e−T1)n = αn,

for a certain α ∈ [0, 1]. We thus get:

P(|Xt| > 2Cn) ≤ P(Tn ≤ t) ≤ etE(e−Tn) ≤ etαn.

Page 15: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 14

From this last inequality, we deduce that E|Xt|m is finite:

E[|Xt|m] = E[|Xt|m1|Xt|≤2Cn] + E[|Xt|m1|Xt| 2Cn].

For the first part, there are no problems:

E[|Xt|m1|Xt|≤2Cn] ≤ (2Cn)m.

For the second part, we write:

E[|Xt|m1|Xt|>2Cn] =+∞∑r=n

E[|Xt|m12rC<|Xt|≤2(r+1)C]

≤+∞∑r=n

(2(r + 1)C)mP(2rC < |Xt| ≤ 2(r + 1)C)

≤+∞∑r=n

(2(r + 1)C)mP(2rC < |Xt|)

≤+∞∑r=n

(2(r + 1)C)metαr < +∞.

Thus, when T1 is finite almost surely, we have that Xt has moment of every order. Now, if P(T1 = +∞) > 0,then we can write:

E[|Xt|m] = E[|Xt|m1T1<+∞] + E[|Xt|m1T1=+∞].

Then, for E[|Xt|m1T1<+∞], we can argue as before, and we are left with

E[|Xt|m1T1=+∞] ≤ CmP(T1 = +∞) ≤ Cm.

Thus, the proof is complete.

Remark 2.3.7. So far, we concluded that Nt(·) is a measure. Note that this is actually a random measure,that is a random variable taking value in the space of measures. This begs the question of defining a probabilityspace on the set of all measures. As it is not the main focus of these notes, we will not dwell to long on thisconstruction. The interested reader can type "random measure" in a search engine to get many references onthe matter. The case for Poisson random measures is of particular interest for us, and the reader can see thatin this case, one can completely characterise the random measure through a Laplace-like transform.

Now that we established that Nt(·) is a measure, what kind of result can we obtain when integrating withrespect to it? The answer is simple: we know what the measure does on indicators of Borel sets, we canextend this with results from measure theory to get the following.

Theorem 2.3.8. Let A be a Borel set such that 0 /∈ A. Let f be measurable and finite on A. We have∫A

f(x)Nt(dx) =∑

0<s≤tf(∆Xs)1∆Xs∈A.

Besides,(∫Af(x)Nt(dx)

)t≥0 is a Lévy process and the following formula holds

E(∫

A

f(x)Nt(dx))

= t

∫A

f(x)ν(dx).

The last identity comes from the fact that E[Nt(A)] = tν(A) can be extended to integrable functions withusual arguments. We already know that

(∫Af(x)Nt(dx)

)t≥0 is a Lévy process, actually, we can way a bit

more:

Proposition 2.3.9. The Lévy process(∫Af(x)Nt(dx)

)t≥0 is a compound Poisson process.

Page 16: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 15

Proof. First, we observe that it is enough to prove the statement when f is a simple function, that is

f(x) =m∑i=1

αi1Ai(x).

Note that we can always reduce to the case where the Ai’s are disjoints. Thus∫Af(x)Nt(dx) =

∑ni=1Nt(Ai∩A),

and the Ai’s being disjoints make Nt(Ai ∩A) independents (see Remark 2.3.3).Next, since we do know the characteristic exponent for compound Poisson process, the goal is to show that

E(

exp[i

⟨ξ,

∫A

f(x)Nt(dx)⟩])

= exp(t

∫(ei〈ξ,y〉 − 1)µ(dy)

).

for some measure µ to be determined. We plug the specific expression for f and use the fact that the Ai’s aredisjoints:

E(

exp[i

⟨ξ,

∫A

f(x)Nt(dx)⟩])

= E

(exp

[i

⟨ξ,

∫A

m∑i=1

αi1Ai(x)Nt(dx)⟩])

= E

(exp

[i

⟨ξ,

m∑i=1

αiNt(A ∩Ai)⟩])

=m∏i=1

E(

exp[i 〈ξ, αiNt(A ∩Ai)〉

]).

Now, since Nt(A ∩Ai) has Poisson distribution with parameter tν(A ∩Ai), we get:

E(

exp[i 〈ξ, αiNt(A ∩Ai)〉

])= exp

(tν(A ∩Ai)

(ei〈ξ,αi〉 − 1

)).

We thus obtained

E(

exp[i

⟨ξ,

∫A

f(x)Nt(dx)⟩])

=m∏i=1

exp(tν(A ∩Ai)

(ei〈ξ,αi〉 − 1

))Passing the product over i inside, we can rewrite the right hand side as:m∏i=1

exp(tν(A ∩Ai)

(ei〈ξ,αi〉 − 1

))= exp

(m∑i=1

tν(A ∩Ai)(ei〈ξ,αi〉 − 1

))= exp

(t

∫Rd

(ei〈ξ,x〉 − 1)µ(dx)),

where we define νA,f (B) = ν(A ∩ f−1(B)) (recall f is our simple function). Thus, we obtained

E(

exp[i

⟨ξ,

∫A

f(x)Nt(dx)⟩])

= exp(t

∫Rd

(ei〈ξ,x〉 − 1)νA,f (dx)), (2.1)

and conclude that∫Af(x)Nt(dx) is a compound Poisson process.

We point out the important formula established in the previous proof (2.1). In this formula, we characterisedthe Fourier transform of the Lévy process

∫Af(x)Nt(dx), where f is any measurable function. In this

exposition, we chose not to talk too much on random measures, but note that giving an expression toE(

exp[i⟨ξ,∫Af(x)Nt(dx)

⟩ ])in general is a way to characterise the law of the random measure Nt(dx). We

highlight this fact in the following Corollary:

Corollary 2.3.10. Let A be a Borel set such that 0 /∈ A, and (Nt(·))t≥0 be the jump measure of some Lévyprocess. Then, the following holds true:

1. The process∫Af(x)

(Nt(dx)− tν(dx)

)is a martingale,

2. Quadratic variation

E

[∣∣∣∣∫A

f(x)(Nt(dx)− tν(dx)

)∣∣∣∣2]

= t

∫A

|f(x)|2ν(dx).

Page 17: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 16

3. Fourier transform:

E[exp

(i

⟨ξ,

∫A

f(x)(Nt(dx)− tν(dx)

)⟩)]= exp

(t

∫Rd

(ei〈ξ,x〉 − 1− i〈ξ, x〉)νA,f (dx)).

Proof. The fact that∫Af(x)

(Nt(dx)− tν(dx)

)is a martingale comes from the fact that

∫Af(x)Nt(dx) has

independent increments with mean t∫Af(x)ν(dx), by definition of ν.

Besides, the last identity holds true by definition of the measure νA,f (B) = ν(A∩f−1(B)). The only resultwe need to prove is the quadratic variation. To obtain this one, we start from equation (2.1) and differentiatewith respect to ξ twice. Then, plugging ξ = 0 yields the desired conclusion.

Finally, we give an extension to the result stated in Remark 2.3.3.

Proposition 2.3.11. Let A,B be two disjoints Borel sets such that 0 /∈ A, B. Then, the two Lévy processesare independent: ∑

0<s≤t∆Xs1∆Xs∈A and

∑0<s≤t

∆Xs1∆Xs∈B.

Remark 2.3.12. An even stronger extension of this result can be found in the literature, see e.g. Bertoin:Two Levy processes are independent if and only if they do not have simultaneous jumps. We refer to Protter[9] Theorem 39 p 29 for a proof.

We now use the jump measure to decompose our initial Lévy process (Xt)t≥0. Fix a cut-off level a > 0.The specific value for a does not matter. Assume for a while that our original Lévy process(Xt)t≥0 does nothave a continuous part.

As it is a càdlàg process, we can then write it as the sum of its jumps:

Xt =∑

0<u≤t∆Xu,

Now, we can split this sum into two parts:

Xt =∑

0<u≤t∆Xu1|∆Xs|≤a +

∑0<u≤t

∆Xu1|∆Xs|>a,

and from what precedes, we can then write∑0<u≤t

∆Xu1|∆Xs|>a =∫|x|>a

xNt(dx).

Since |x| ≥ a does not contain 0, this process is a compound Poisson process: the large jumps of a Lévyprocess form a compound Poisson process.

Now, going back to the separation above, what meaning can we give to the small jump parts:∑0<u≤t

∆Xu1|∆Xs|≤a

In this case, the set |x| ≤ a does contain 0 and we cannot use the previous results. However, what will helpus here is that this process only has bounded jumps (by definition, this process has jumps of size less than a).We will see that this remaining term forms an L2 martingale.

The Small Jumps as a Compensated Poisson IntegralThe small jumps part is probably the most difficult part to understand. In terms of trajectory, the processmoves through the accumulation of small jumps. Notice that in the previous section, the recurring conditionthat 0 /∈ A prevented the jumps to be too small. We now relax this assumption and try to see what we cansay about the jump measure. The key ingredient for dealing with small jumps is Theorem 2.3.6, namely thata Lévy process with bounded jumps have moments of every order. The main result of this section is thefollowing:

Page 18: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 17

Theorem 2.3.13. Let (Xt)t≥0 be a Lévy process with jumps bounded by a > 0:

supt≥0|∆Xt| < a.

Define Zt = Xt − E(Xt). Then (Zt)t≥0 is a martingale and

Zt = Zct + Zdt ,

where (Zct )t≥0 is a Brownian motion (thus has continuous paths), (Zdt )t≥0 is a martingale and:

Zdt =∫|x|≤a

x(Nt(dx)− tν(dx)

).

Moreover, (Zct )t≥0 and (Zdt )t≥0 are independent Lévy processes.

Proof. Form the fact that (Zt)t≥0 has zero mean and independent increments, we deduce that (Zt)t≥0 is amartingale and Lévy process.

Now, for a Borel set A, define

Mt(A) =∫A

x(Nt(dx)− tν(dx)

)=∑

0<s≤t∆Xs1∆Xs∈A − t

∫A

xν(dx).

Now, consider An =

an+1 ≤ |x| ≤

an

. Since these sets are disjoints by Proposition 2.3.11, the Lévy

processes Mt(An) are pairwise independent. Define Mnt =

∑nk=1Mt(Ak), the goal is to establish the two

properties:

• for all n ≥ 0, the Lévy processes Z −Mn and Mn are independent;

• Mn → Zd and Z −Mn → Zc.

The fact that the to processes Z −Mn and Mn are independent actually comes from the fact that thoseprocesses do not jump simultaneously.

The fact that Mn and Z −Mn converge come from the fact that Z has bounded jumps. Indeed, since ithas bounded jumps, it has moment of every order, in particular, by independence of Z −Mn and Mn, wehave:

Var(Zt −Mnt ) + Var(Mn

t ) = Var(Zt) < +∞.Thus, the two martingales Z−Mn andMn are bounded in L2, thus converge in L2. We denote their respectivelimits Zc and Zd. It remains us to prove that Zd has the explicit form given in the statement of the theoremand that Zc has continuous sample paths.

For Zd, we have:

Mnt =

n∑k=1

Mt(Ak) =∫∪nk=1Ak

x(Nt(dx)− tν(dx)

), with ∪nk=1 Ak =

a

n+ 1 ≤ |x| ≤ a.

Letting n→ +∞ yields the result.We now have to establish the path continuity of Zc. But since Z −Mn converges to Zc in L2, using

Doob’s inequality,|| supt≥0

Zct − (Zt −Mnt )||L2 ≤ 2 sup

t≥0||Zct − (Zt −Mn

t )||L2 .

Thus the convergence in L2 can be made uniform in t, and we can find a subsequence that converge almostsurely uniformly in t to Zc, and thus Zc has to have continuous paths.

Remark 2.3.14. The crucial part of the last result is probably that

Zdt =∫|x|≤a

x(Nt(dx)− tν(dx)

),

since so far, we did not know how to define Poisson integrals for sets such as |x| ≤ a containing 0. We nowhave the answer: the integral is defined in L2 sense, though this process we call "compensation":

Subtracting tν(dx) to ensure the convergence of the integral∫·(Nt(dx)− tν(dx)

)is called compensating

the Poisson integral. Similarly, the martingale Nt − λt is called a compensated Poisson process.

Page 19: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 2. LÉVY PROCESSES 18

Remark 2.3.15. In the next chapter, we will elaborate on the continuous part (Zct )t≥0, and actually showthat when non-identically zero, then it has to be a Brownian motion.

We conclude this section by pointing out that we proved that ν, defined as the intensity of the Poissonprocess, is a Lévy measure.

Proposition 2.3.16. ν defined as E[Nt(·)] is a Lévy measure, that is, a sigma-finite measure satisfying∫Rd\0

min(1, |x|2)ν(dx) < +∞.

Proof. In Theorem 2.3.6 above, we already showed that∫|x|≥1 ν(dx) < +∞. For the part where |x| ≤ 1, we

write: ∫|x|≤1

|x|2ν(dx) = limn→+∞

∫∪nk=1Ak

|x|2ν(dx).

We recall that the Ak are defined in the proof of Theorem 2.3.13.Now, using item 2 in Corollary 2.3.10, since ∪nk=1Ak do not contain zero, we have that

∫∪nk=1Ak

|x|2ν(dx) = E

∣∣∣∣∣∫∪nk=1Ak

x(N1(dx)− ν(dx)

)∣∣∣∣∣2 = E

[M1

(∪nk=1 Ak

)2].

Thus, we get ∫|x|≤1

|x|2ν(dx) = limn→+∞

E[M1

(∪nk=1 Ak

)2]

= E[(Zd1 )2] < +∞.

Page 20: Introduction to Lévy Processes - univ-toulouse.fr

Chapter 3

Proof of the Levy Khintchine formula

In this chapter, we collect the results of the previous chapter and establish the Lévy-Khintchine formula. Tothat end, let (Xt)t≥0 be a Lévy process, and let (Nt(·))t≥0 be the associated jump measure.

3.1 The Lévy-Itô Decomposition

To prove the Lévy-Khintchine formula, the idea is to remove the large jumps first that form a compensatedPoisson process, then deal with the small jumps using a compensated Poisson integral, and whatever is lefthas to be a drifted Brownian motion.

We thus define:Zt = Xt −

∫|x|>1

xNt(dx).

Proposition 3.1.1. The Lévy process(∫|x|>1 xNt(dx)

)t≥0

is a compound Poisson process, and is independentof (Zt)t≥0.

Proof. The fact that the integral form a compound Poisson process has been established in the previouschapter (see Proposition 2.3.9). The independence comes from the fact that those two Lévy processes do nothave simultaneous jumps (see Proposition 2.3.11).

Now, (Zt)t≥0 only have jumps smaller than one. We have the following result.

Proposition 3.1.2. The following decomposition holds:

Zt = Zct + Zdt .

The two processes (Zct )t≥0 and (Zdt )t≥0 are independent Lévy processes and (Zct )t≥0 is a Brownian Motion

Proof. This is actually an extension to Theorem 2.3.13. Indeed, we already know that the small jumpsdecompose into a continuous part Zc and a discontinuous part Zd. We also know that those two processesare independent, the only novelty here is the nature of the continuous part. To show that Zc is a Brownianmotion, the strategy is to prove:

E[ei〈ξ,Zct 〉] = e−

t2 〈ξ,Aξ〉,

for a certain matrix A. For convenience, we consider the one-dimensional case. Note that by construction, Zchas no jumps, so it has moments of every order. Let us write

φZct (ξ) = E[ei〈ξ,Zct 〉] = e−tη(ξ).

Because Zc has all moments, we know that η is C∞. Besides, since Z has zero mean, so does Zc, and η′(0) = 0.Consequently, for all m ≥ 2, it holds that

E[(Zct )m] = a1t+ a2t2 + · · ·+ am−1t

m−1. (3.1)

Let now 0 = t0 < t1 < · · · < tn = t be a partition of [0, t], we have

E[ei〈ξ,Zct 〉] = E

n−1∑j=0

(ei〈ξ,Zctj+1〉 − ei〈ξ,Z

ctj〉) .

19

Page 21: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 3. PROOF OF THE LEVY KHINTCHINE FORMULA 20

On each increments, we use Taylor’s formula on the exponential function and we split the sum into

S1(t) = iξ

n−1∑j=0

ei〈ξ,Zctj 〉

(Zctj+1 − Zctj

);

S2(t) = −ξ2

2

n−1∑j=0

ei〈ξ,Zctj 〉

(Zctj+1 − Zctj

)2;

S3(t) = −ξ2

2

n−1∑j=0

(ei〈ξ,Zctj+θj(Zctj+1−Z

ctj

)〉 − ei〈ξ,Zctj〉)(Zctj+1 − Zctj

)2.

Now, note that because Zc is a Lécy process it has independent increments and

E(S1(t)) = E

iξ n−1∑j=0

ei〈ξ,Zctj 〉

(Zctj+1 − Zctj

) = 0.

Similarly, using the fact that E[(Zcv)2] = a1v, we have

E(S2(t)) = −a1ξ2

2

n−1∑j=0

φZctj(ξ)(tj+1 − tj).

The last term is more tricky, and to analyse it, we introduce the event:

Bα =

max0≤j≤n−1

suptj≤u,v≤tj+1

|Zcu − Zcv| ≤ α

.

Note that this event is such that its probability tends to zero with the mesh size, thanks to the continuity ofthe paths of Zc.

Let now E[S3(t)] = E[S3(t)1Bα ] + E[S3(t)1Bcα ]. Using the elementary inequality |eiu − 1| ≤ 2, we get:

|E[S3(t)1Bcα ]| ≤ ξ2

2

n−1∑j=0

E[∣∣∣ei〈ξ,Zctj 〉(ei〈ξ,θj(Zctj+1−Z

ctj

)〉 − 1)∣∣∣(Zctj+1 − Zctj

)2]

≤ ξ2n−1∑j=0

E[(Zctj+1 − Zctj

)21Bcα

].

We now use Cauchy-Schwartz on the right hand side:

ξ2n−1∑j=0

E[(Zctj+1 − Zctj

)21Bcα

]≤ ξ2P(Bcα)1/2

n−1∑j=0

E[(Zctj+1 − Zctj

)4]1/2

.

Now, using again that Zc is a Lévy process with all moments, we get

|E[S3(t)1Bcα ]| ≤ ξ2P(Bcα)1/2O(t2 + t3)1/2.

Now, because of the path continuity of Zc, we see that the probability P(Bcα) goes to 0 as the mesh of thepartition tj goes to 0.

We now turn to the other term. We use the mean value theorem again:

|E[S3(t)1Bα ]| ≤ |ξ|3

2 E

n−1∑j=0

(Zctj+1 − Zctj

)31Bα

≤ αa1t|ξ|3

2 .

To get the last inequality, we exploited the fact that on Bα, the increments are bounded by α. But now, αcan be made arbitrarily small, so we conclude that this term must go to zero.

In the end, we get to the equation

φZct (ξ)− 1 = −a1ξ2

2

∫ t

0φZcs (ξ)ds,

solving for φZct yields the desired result.

Page 22: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 3. PROOF OF THE LEVY KHINTCHINE FORMULA 21

We have established the following result, the so-called Lévy-Itô decomposition:

Theorem 3.1.3. Let (Xt)t≥0 be a Lévy process. Let (Nt(·))t≥0 be the jump measure and ν(·) the Lévymeasure. Then, there exist (Bt)t≥0 a Brownian motion, independent of (Nt(·))t≥0, b ∈ Rd such that for allt > 0

Xt = Bt + bt+∫|x|≤1

x(Nt(dx)− tν(dx)

)+∫|x|>1

xNt(dx).

Remark 3.1.4. This result is generally remembered as any Lévy process is the sum of a (drifted)Brownian motion, a compound Poisson process and an L2 martingale, all independent.

The continuous part corresponds to a Brownian motion, the large jumps to a compound Poisson processand the small jumps form an L2 martingale.

From the Lévy-Itô decomposition, we trivially get the celebrated Lévy Khintchine formula, that characterisesthe Fourier transform of any Lévy process in terms of the characteristics of a Lévy process.

Theorem 3.1.5. Let b ∈ Rd, Q a semi-definite quadratic form on Rd, and ν a measure on Rd\0 such that∫Rd min(1, |x|2)ν(dx) < +∞. We define

Ψ(ξ) = −i〈ξ, b〉 − 12Q(ξ) +

∫Rdei〈x,ξ〉 − 1− i〈x, ξ〉1|x|≤1ν(dx)

Then, it holds thatE(ei〈Xt,ξ〉

)= exp (tΨ(ξ)) .

Proof. We start from the Lévy-Itô decomposition. Since Bt is a Brownian motion, there exists a quadraticform Q such that

E[ei〈Bt+bt,ξ〉] = e−i〈ξ,b〉−12Q(ξ).

Besides, since the large jumps part∫|x|>1 xNt(dx) form a compound Poisson process, it holds that

E

(exp

[i

⟨ξ,

∫|x|>1

xNt(dx)⟩])

= exp(t

∫|x|>1

(ei〈ξ,y〉 − 1)ν(dy)).

Finally, from Corollary 2.3.10

E

[exp

(i

⟨ξ,

∫|x|<1

x(Nt(dx)− tν(dx)

)⟩)]= exp

(t

∫|x|<1

(ei〈ξ,x〉 − 1− i〈ξ, x〉

)ν(dx)

).

Finally, since those three processes are independent, the characteristic function of the sum is the product ifthe characteristic functions, which gives the expected result.

3.2 Consequences of the Lévy-Itô Decomposition

In this section, we exploit the Lévy-Itô decomposition to establish some nice properties for Lévy processes.These result are not in any importance order. We do not give proofs to all the stated results, but give specificreferences in the literature.

We start with some results on the trajectory of a Lévy process.

Theorem 3.2.1. A Lévy process is continuous if and only if its Lévy measure ν is zero.

Proof. This result is obvious, given the Lévy-Itô decomposition: the only part left corresponds to a driftedBrownian motion.

Conversely, here is what we can say when the Lévy measure is not zero.

Theorem 3.2.2 (Jumping times). Let (Xt)t≥0 be a Lévy process with Lévy measure ν.

• If ν(Rd) =∞, then almost surely, the jumping times are countably dense in R+

• If 0 < ν(Rd) <∞, then almost surely, jumping times are infinitely many and countable in increasingorder, and the first jump has exponential distribution with parameter 1/ν(Rd).

Page 23: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 3. PROOF OF THE LEVY KHINTCHINE FORMULA 22

Proof. See Theorem 21.3 in Sato [11] p136.

Theorem 3.2.3. Let (Xt)t≥0 be a Lévy process with generating triplet (A, ν, b). Then,

• If A = 0 and ν(Rd) < +∞, or if A = 0, ν(Rd) = +∞ and∫|x|≤1 |x|ν(dx) < +∞, then the function

t 7→ Xt has finite variation (a.s.),

• If A 6= 0 or if∫|x|≤1 |x|ν(dx) = ∞, then the function t 7→ Xt has infinite variation on [0, t) for any

t > 0.

Proof. See Theorem 2.4.5 p129 in Applebaum [1].

Now we give the asymptotic behaviour of the Lévy exponent.

Proposition 3.2.4. Suppose the dimension d = 1. Recall Ψ is the characteristic exponent of some Lévyprocess (Xt)t≥0. Using the notations of Theorem 3.1.5,

1. we havelim

|ξ|→+∞

Ψ(ξ)ξ2 = 1

2Q(ξ),

2. if X has bounded variation,lim

|ξ|→+∞

Ψ(ξ)ξ

= −i× b,

3. Ψ is bounded if and only if (Xt)t≥0 is a compound Poisson process.

Proof. See Proposition 2 p 16 in Bertoin [2].

We end this section with two approximation results.

Theorem 3.2.5. Suppose that (µn)n≥1 is a sequence of infinitely divisible distributions with generating triplet(Qn, νn, bn). Let µ be a probability distribution on Rd.

Then µn → µ if and only if µ is infinitely divisible with generating triplet (Q, ν, b), with Q, ν and bsatisfying the following conditions:

• if f : Rd → R is bounded continuous, vanishing on a neighbourhood of 0,

limn→+∞

∫Rdf(x)νn(dx) = lim

n→+∞

∫Rdf(x)ν(dx).

• define the quadratic form Qn,ε by

Qn,ε(z) = Qn(z)−∫|x|≤ε

〈x, z〉2νn(dx).

Thenlimε→0

lim supn→+∞

|Qn,ε(z)−Q(z)| = 0, for all z ∈ Rd.

• bn −→n→+∞

b.

Proof. See Theorem 8.1 p 41 in Sato [11]. Note that in this reference, the proof is given for the triplet(An, νn, bn)c, meaning that one can change the cut-off function 1|x|≤1 to any function c(x) bounded andsuch that

c(x) =

1 + o(|x|) when |x| → 0,O(1/|x|) when |x| → +∞.

The above result is understood as "convergence of characteristics is convergence of the process". As acorollary, we have the following (see Corollary 8.8 p 45 in Sato [11]).

Corollary 3.2.6. Every infinitely divisible distribution is the limit of a sequence of compound Poissondistributions.

Page 24: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 3. PROOF OF THE LEVY KHINTCHINE FORMULA 23

3.3 Exercises

Exercice 1.Suppose that X and Y are two independent Lévy processes with characteristic exponents Ψ and Φ. Show

that the process Zt = Xt + Yt, t ≥ 0, is a Lévy process with characteristic exponent Ψ + Φ.

Exercice 2.Proof of Proposition 2.3.11. The goal is to establish the independence of:

J1t =

∑0<s≤t

∆Xs1∆Xs∈A and J2t =

∑0<s≤t

∆Xs1∆Xs∈B,

when A,B are two disjoints Borel sets such that 0 /∈ A, B. We define

Cξt = ei〈ξ,J1t 〉

E[ei〈ξ,J1t 〉]− 1, and Dζ

t = ei〈ζ,J2t 〉

E[ei〈ξ,J2t 〉]− 1.

1. Show that Cξ and Dζ are martingale

2. Show that for tk a subdivision of [0, t],

E[CξtDζt ] = E

[∑k

(Cξtk+1− Cξtk)(Dζ

tk+1−Dζ

tk)].

3. Deduce that

E[CξtDζt ] = E

∑0<s≤t

∆Cξt ∆Dζt

.4. Prove that E[CξtD

ζt ] = 0.

5. Conclude thatE[ei〈ξ,J

1t 〉ei〈ζ,J

2t 〉] = E[ei〈ξ,J

1t 〉]E[ei〈ζ,J

2t 〉].

Plus d’exercices en examples chez Sato p. 45.Faire exercice preuve du TCL Stable chez Meerscheart via Pareto.

3.4 Discussion

We conclude this chapter by discussing some left-over facts.

Semi-martingale TheoryThe culmination of this chapter is the Lévy-Khintchine formula, characterising any infinitely divisibledistribution. Another very important class of stochastic processes share a similar characterisation: the classof semi-martingales.

The Theory of semi-martingale is much more abstract than Lévy processes, and we chose not to spend toomuch time on the subject, but the reader may consult Protter [9] or Jacod-Shiryaev [6] on the subject. Wemust warn the reader though, the latter is a very difficult read.

Poisson Random MeasuresIf the reader consults Bertoin [2], or Applebaum [1], he will see that very soon, the concept of Poisson randommeasure is discussed. It does shed more light on the structure of the jumps of a Lévy process, but here, wechose not to introduce yet another concept. However, Poisson processes are a very interesting toping in theirown right. We refer the reader to Kallenberg [7] for a detailed exposition on the subject.

Page 25: Introduction to Lévy Processes - univ-toulouse.fr

Chapter 4

Lévy processes as Markov Processes

In this chapter, we view Lévy processes as Markov processes. We point out that we anticipated a little bit aswe already used the Markov property in the previous chapter.

We recall that a stochastic process is said to be a Markov process if

E[f(Xt)|Fs] = E[f(Xt)|Xs]

It is trivial to see that Lévy processes possess the Markov property, we write:

E[f(Xt)|Fs] = E[f(Xs +Xt −Xs)|Fs],

Now, Xt −Xs is independent of Fs, and obviously, Xs is Fs measurable, thus:

E[f(Xt)|Fs] =∫Rdf(Xs + y)pt−s(dy),

denoting pu(·) the law of Xu. Hence, E[f(Xt)|Fs] does indeed only depend on Xs, thus the Markov property.In this chapter, we highlight the principal resutls of (Xt)t≥0 related to its Markov property.

4.1 Properties of the Semi-group

The first result we introduce is actually proved in the few lines before, and is related to the semigroup of(Xt)t≥0.

Lemma 4.1.1. The semigroup associated with the Lévy process (Xt)t≥0 is time homogeneous. We denote it(Tt)t≥0:

E[f(Xt)|Xs = x] = Tt−sf(x).

Let us give some definitions

Definition 4.1.2. A family of measures (pt)t≥0 is a convolution semigroup if p0 = δ0 and

pt+s = pt ∗ ps.

Such a semigroup is said to be weakly continuous if for all f ∈ Cb(Rd),

limt→0

∫Rdf(x)pt(dx) = f(0) =

∫Rdf(x)dδ0.

Obviously, a semigroup (Tt)t≥0 associated to a Lévy process (Xt)t≥0 is a convolution semigroup. We nowstate a Lemma that is a direct consequence of the definitions above and of the stochastic continuity of (Xt)t≥0.

Lemma 4.1.3. The semigroup (Tt)t≥0 is a weakly continuous convolution semigroup.

Proof. Let f ∈ Cb(Rd), not identically zero. From the continuity of f , for all ε > 0, there exist η > 0 such that

sup|x|≤η

|f(x)− f(0)| ≤ ε/2.

24

Page 26: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 4. LÉVY PROCESSES AS MARKOV PROCESSES 25

Now, from the stochastic continuity of (Xt)t≥0, there exists η′ > 0 such that

0 < t ≤ η′ ⇒ P(|Xt| > η) ≤ ε

4 supx∈Rd |f(x)| .

For such t, we then find∣∣∣∣∫Rd

(f(x)− f(0)

)pt(dx)

∣∣∣∣ ≤ ∫|x|≤η

|f(x)− f(0)|pt(dx) +∫|x|>η

|f(x)− f(0)|pt(dx),

≤ sup|x|≤η

|f(x)− f(0)|+ 2 supx∈Rd

|f(x)|P(|Xt| > η) ≤ ε.

We can now state an important result on (Tt)t≥0.

Theorem 4.1.4. Every Lévy process is a Feller process.

Proof. Let us recall first that a Feller process is such that

• Tt : C0(Rd)→ C0(Rd), C0(Rd) being the set of all continuous functions that vanish at infinity.

• limt→0 ‖Ttf − f‖∞=0 for all f ∈ C0.

We know that (Tt)t≥0 is a weakly continuous convolution semigroup that is such that

Ttf(x) =∫Rdf(x+ y)pt(dy),

where pt denotes the law of Xt. Let f ∈ C0, we need to show that Ttf is continuous. Consider xn → x, wehave:

limn→+∞

Ttf(xn) = limn→+∞

∫Rdf(xn + y)pt(dy).

Now, from the dominated convergence theorem, we have

limn→+∞

∫Rdf(xn + y)pt(dy) =

∫Rdf(x+ y)pt(dy),

which proves that Ttf is continuous. We can use the dominated convergence theorem again to prove

lim|x|→+∞

|Ttf(x)| ≤ lim|x|→+∞

∫Rd|f(x+ y)|pt(dy)

≤∫Rd

lim|x|→+∞

|f(x+ y)|pt(dy) = 0.

Consequently, we do have Ttf ∈ C0.We now prove the second point in the Feller condition. Note that we can assume f 6= 0. Using the

stochastic continuity of (Xt)t≥0, for all ε > 0, and any r > 0, there exist t0 > 0 such that

0 < t < t0 ⇒ P(|Xt| > r) ≤ ε

4‖f‖∞.

Using the continuity of f , we can find δ > 0 such that

|y| ≤ δ ⇒ supx∈Rd

|f(x+ y)− f(x)| ≤ ε

2 .

Choosing r = δ now gives

‖Ttf − f‖∞ = supx∈Rd

∣∣∣Ttf(x)− f(x)∣∣∣

≤∫|x|≤δ

supx∈Rd

|f(x+ y)− f(x)|pt(dy) +∫|x|>δ

supx∈Rd

|f(x+ y)− f(x)|pt(dy)

≤ ε

2P(|Xt| ≤ δ) + 2‖f‖∞P(|Xt| > δ)≤ ε.

Page 27: Introduction to Lévy Processes - univ-toulouse.fr

CHAPTER 4. LÉVY PROCESSES AS MARKOV PROCESSES 26

4.2 The Generator

We now turn to the infinitesimal generator of a Lévy process, viewed as a Markov process. In general, for aMarkov process (Xt)t≥0, one defines the generator of (Xt)t≥0 as the limit:

Ltf(x) = lims→t

E[f(Xs)|Xt = x].

We recall that in general, the generator alone is not enough to characterise the distribution of (Xt)t≥0; one also needs to specify the domain of the generator, that is, the set of all functions for which the limit exists. We assume the reader to be familiar with these concepts, the universal reference on the matter being Ethier and Kurtz [4].

The first thing we can say is that in the case of Lévy processes, since the semigroup is homogeneous, the generator does not depend on time. We can actually say a lot more, by recalling the Lévy-Khintchine formula. Before stating the main result, we need to recall some definitions from analysis on pseudo-differential operators.

First, let us recall that the Fourier transform of a function f in Schwartz's space S, defined as

\mathcal{F}(f)(\xi) = \int_{\mathbb{R}^d} e^{i \langle x, \xi \rangle} f(x)\, dx,

has inverse transform given by:

\mathcal{F}^{-1}(g)(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{-i \langle x, \xi \rangle} g(\xi)\, d\xi.

Then, it is usual that operations such as differentiation or integration can be read on the Fourier transform side as 'multipliers'.

Definition 4.2.1. A pseudo-differential operator L acting on functions f ∈ S is defined by its symbol ℓ in the following way:

L(f)(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{-i \langle x, \xi \rangle}\, \ell(\xi)\, \mathcal{F}(f)(\xi)\, d\xi.

With this definition in mind, we have the following result.

Theorem 4.2.2. Let (Xt)t≥0 be a Lévy process with characteristic exponent Ψ(ξ), that is, E[e^{i⟨Xt,ξ⟩}] = exp(tΨ(ξ)), with

\Psi(\xi) = i\langle b, \xi\rangle - \frac{1}{2}\langle \xi, A\xi\rangle + \int_{\mathbb{R}^d\setminus\{0\}} \Big( e^{i\langle \xi, y\rangle} - 1 - i\langle \xi, y\rangle \mathbf{1}_{|y|\le 1} \Big)\, \nu(dy).

Let (Tt)t≥0 denote the semigroup and L the infinitesimal generator. We have:

1. For each t ≥ 0, f ∈ S, x ∈ Rd,

T_t f(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{-i\langle \xi, x\rangle}\, e^{t\Psi(\xi)}\, \mathcal{F}(f)(\xi)\, d\xi.

2. For each f ∈ S, x ∈ Rd,

L f(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{-i\langle \xi, x\rangle}\, \Psi(\xi)\, \mathcal{F}(f)(\xi)\, d\xi,

so that L is a pseudo-differential operator with symbol Ψ.

3. For each f ∈ S, x ∈ Rd,

L f(x) = \langle b, \nabla f(x)\rangle + \frac{1}{2}\mathrm{Tr}\big(A D^2 f(x)\big) + \int_{\mathbb{R}^d\setminus\{0\}} \Big( f(x+z) - f(x) - \langle \nabla f(x), z\rangle \mathbf{1}_{|z|\le 1} \Big)\, \nu(dz).

Proof. We skip this proof, since it works exactly as one would expect: differentiate under the integral. The difficulty comes from checking that it is actually possible to do so. We refer to Theorem 3.3.3 in Applebaum [1] for a complete proof.
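Items 1 and 2 lend themselves to a quick numerical sanity check. The sketch below is entirely ours: the grids, the Gaussian test function and the choice of the rotationally invariant α-stable symbol Ψ(ξ) = −|ξ|^α in dimension 1 are illustrative assumptions. It evaluates T_h f and Lf through their Fourier representations and checks that the difference quotient (T_h f − f)/h approaches Lf as h → 0.

    import numpy as np

    alpha = 1.5
    x = np.linspace(-10, 10, 1001)               # spatial grid
    xi = np.linspace(-30, 30, 1501)              # frequency grid
    f = np.exp(-x**2)                            # Schwartz-class test function

    # F(f)(xi) = int e^{i x xi} f(x) dx, the convention used in these notes
    Ff = np.trapz(np.exp(1j * np.outer(xi, x)) * f, x, axis=1)
    Psi = -np.abs(xi) ** alpha                   # symbol of the rotationally invariant stable process

    def inverse(g):
        # (2 pi)^{-1} int e^{-i x xi} g(xi) dxi, evaluated on the x grid
        return np.trapz(np.exp(-1j * np.outer(x, xi)) * g, xi, axis=1).real / (2 * np.pi)

    Lf = inverse(Psi * Ff)                       # item 2: the generator as a Fourier multiplier
    for h in (1e-1, 1e-2, 1e-3):
        Thf = inverse(np.exp(h * Psi) * Ff)      # item 1: the semigroup T_h f
        print(h, np.max(np.abs((Thf - f) / h - Lf)))

The printed error decreases roughly linearly in h, as expected from a first-order expansion of e^{hΨ}.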


Examples of Generators for some Lévy processes

The main take-away from the previous Theorem is that the generator of a Lévy process can be read off from its characteristics. In this paragraph, we list a few Lévy processes with their characteristics and the corresponding form of the generator.

Example 4.2.3 (The drifted Brownian motion). The most well-known example. A Brownian motion with drift has characteristics (A, b, 0), where A = (a_{j,k})_{1 \le j,k \le d} is the covariance matrix. Then, it is well known that the generator is

L f(x) = \langle b, \nabla f(x) \rangle + \frac{1}{2} \mathrm{Tr}\big( A D^2 f(x) \big) = \sum_{j=1}^d b_j \frac{\partial}{\partial x_j} f(x) + \frac{1}{2} \sum_{j,k=1}^d a_{j,k} \frac{\partial^2}{\partial x_j \partial x_k} f(x).

Example 4.2.4 (The Poisson process). A Poisson process with parameter λ has characteristics (0, 0, λδ1). Thus, the corresponding generator is a difference operator:

L f(x) = \lambda \big( f(x+1) - f(x) \big).
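As a small sanity check (our own, with arbitrary choices of λ, x and f), the difference quotient (E[f(x + N_h)] − f(x))/h computed from the exact Poisson law of N_h should approach λ(f(x+1) − f(x)) as h → 0.

    import math

    lam, x = 2.0, 0.3
    f = math.sin

    def T_h_f(h, n_terms=40):
        # E[f(x + N_h)] from the exact Poisson(lam * h) probabilities
        return sum(f(x + k) * math.exp(-lam * h) * (lam * h) ** k / math.factorial(k)
                   for k in range(n_terms))

    Lf = lam * (f(x + 1) - f(x))
    for h in (1e-1, 1e-2, 1e-3):
        print(h, (T_h_f(h) - f(x)) / h, Lf)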

Example 4.2.5 (The compound Poisson process). Consider the process

X_t = \sum_{i=1}^{N_t} Y_i,

where (Nt)t≥0 is a Poisson process with parameter λ and the Yi are i.i.d. with common distribution μ. We determined in a previous chapter that the characteristics of a compound Poisson process are (0, 0, λμ). Therefore, the generator of (Xt)t≥0 is

L f(x) = \int_{\mathbb{R}^d} \big( f(x+z) - f(x) \big)\, \lambda\, \mu(dz).
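Here is a minimal Monte Carlo illustration in dimension 1 (ours; the jump law μ = N(0,1), the intensity λ and the test function f = cos are arbitrary choices). For this μ the integral above is explicit, λ(e^{−1/2} − 1)cos(x), and the difference quotient (E[f(x + X_h)] − f(x))/h should be close to it for small h.

    import numpy as np

    rng = np.random.default_rng(0)
    lam, x, h, n = 3.0, 0.5, 1e-2, 10**6

    counts = rng.poisson(lam * h, size=n)               # N_h for n independent copies
    X_h = np.sqrt(counts) * rng.normal(size=n)          # compound Poisson increment with N(0,1) jumps
    mc = (np.cos(x + X_h).mean() - np.cos(x)) / h       # (E[f(x + X_h)] - f(x)) / h
    exact = lam * (np.exp(-0.5) - 1.0) * np.cos(x)      # int (f(x+z) - f(x)) lam mu(dz) for mu = N(0,1)
    print(mc, exact)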

Example 4.2.6 (The case of stable operators). Consider a rotationally invariant stable process (Xt)t≥0. Its Lévy exponent is given by

\Psi(\xi) = -|\xi|^{\alpha} = -\Big( \sqrt{\xi_1^2 + \cdots + \xi_d^2}\, \Big)^{\alpha}.

Let us pretend for a moment that the usual rule for Fourier multipliers holds true here, that is: replace ξ_j with -i\partial_{x_j}. Then, we would get for a generator:

-\Big( \sqrt{-\partial_{x_1}^2 - \cdots - \partial_{x_d}^2}\, \Big)^{\alpha} = -(-\Delta)^{\alpha/2}.

This is often referred to in the literature as the fractional Laplacian. We will discuss at greater length the relation between fractional derivatives and stable processes in a subsequent paragraph, but for a very nice and thorough presentation, the reader can consult Meerschaert and Sikorskii [8].

Example 4.2.7 (Relativistic stable operators). From Section 1.4, recall that the relativistic stable distribution has exponent

E(\xi) = \sqrt{m^2 c^4 + c^2 |\xi|^2} - m c^2.

Again, with the correspondence ξ_j ↔ -i\partial_{x_j}, we get a representation that is familiar to physicists (or so I am told):

L = -\Big( \sqrt{m^2 c^4 - c^2 \Delta} - m c^2 \Big).

This operator is related to the quantisation of free energy, but for more details on that, the reader should probably ask a physicist...


4.3 Recurrence and Transience

Any good chapter on Markov processes would not be complete without a discussion of recurrence and transience. In this paragraph, we give two criteria linking this property to integrability properties of the Lévy-Khintchine exponent.

Definition 4.3.1. A Lévy process (Xt)t≥0 is said to be

• recurrent at the origin if \liminf_{t \to +\infty} |X_t| = 0 a.s.,

• transient at the origin if \liminf_{t \to +\infty} |X_t| = +\infty a.s.

Naturally, the dichotomy recurrent/transient still holds for Lévy processes (see e.g. Theorem 35.4, p. 239 in Sato [11]). We give two criteria.

Theorem 4.3.2. Fix a > 0. Let (Xt)t≥0 be a Lévy process with characteristic exponent Ψ. Then, the following are equivalent:

1. (Xt)t≥0 is recurrent;

2. \lim_{q \downarrow 0} \int_{|\xi| \le a} \Re\!\left( \frac{1}{q - \Psi(\xi)} \right) d\xi = +\infty, where \Re(z) is the real part of the complex number z;

3. \limsup_{q \downarrow 0} \int_{|\xi| \le a} \Re\!\left( \frac{1}{q - \Psi(\xi)} \right) d\xi = +\infty.

Theorem 4.3.3. Let (Xt)t≥0 be a Lévy process with characteristic exponent Ψ. Then (Xt)t≥0 is recurrent if and only if

\int_{|\xi| \le a} \Re\!\left( \frac{1}{-\Psi(\xi)} \right) d\xi = +\infty \quad \text{for all } a > 0.

It is remarkable that recurrence and transience can be determined only by looking at the exponent. A direct application of the previous result gives the recurrence of Brownian motion for d = 1, 2, which was already known through other means.
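A rough numerical look at the criterion of Theorem 4.3.2 for Brownian motion, Ψ(ξ) = −|ξ|²/2 (our own illustration): in d = 1 the integral blows up as q → 0, while in d = 3 (written in polar coordinates) it stays bounded, consistent with recurrence in low dimension and transience in d ≥ 3.

    from math import pi
    from scipy.integrate import quad

    a = 1.0
    for q in (1e-1, 1e-2, 1e-3, 1e-4):
        I1 = quad(lambda r: 2.0 / (q + r**2 / 2), 0, a)[0]              # d = 1
        I3 = quad(lambda r: 4 * pi * r**2 / (q + r**2 / 2), 0, a)[0]    # d = 3, polar coordinates
        print(f"q = {q:.0e}   d = 1: {I1:9.2f}   d = 3: {I3:7.3f}")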

Exercise 3. Use Theorem 4.3.3 to show that

1. for d = 1, an α-stable process is recurrent if 1 ≤ α ≤ 2 and transient if 0 < α < 1;

2. for d = 2, all strictly α-stable processes with 0 < α < 2 are transient;

3. for d ≥ 3, all Lévy processes are transient.

Invariant measure

We conclude this section by giving a result on invariant distributions for some Lévy processes. Let (Zt)t≥0 be a Lévy process with characteristics (Q, b, ν). We denote by Ψ its characteristic exponent. For c ∈ R, we introduce the Ornstein-Uhlenbeck process directed by Z as the solution of the SDE:

X_t = x + Z_t - c \int_0^t X_s\, ds, \qquad t \ge 0.

We say that μ is an invariant measure for a semigroup (Pt)t≥0 if

P_t(x, \cdot) \xrightarrow[t \to +\infty]{} \mu.


Theorem 4.3.4. Fix c > 0. If the Lévy measure ν satisfies

\int_{|x| > 2} \log |x|\, \nu(dx) < +\infty,

then X has a limiting distribution μ, whose characteristic function is given by

\widehat{\mu}(z) = \exp\!\left( \int_0^{+\infty} \Psi(e^{-cs} z)\, ds \right).
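Reading the formula above as the characteristic function of μ, here is a rough simulation check with a toy driver of ours (Brownian motion plus a compound Poisson process with standard normal jumps, so that Ψ(ξ) = −ξ²/2 + λ(e^{−ξ²/2} − 1)). The Euler scheme, the parameters and the time horizon are all illustrative assumptions, not part of the notes.

    import numpy as np
    from scipy.integrate import quad

    rng = np.random.default_rng(2)
    c, lam, T, dt, n_paths = 1.0, 1.0, 10.0, 0.01, 20_000
    Psi = lambda u: -u**2 / 2 + lam * (np.exp(-u**2 / 2) - 1)   # exponent of Z = B + compound Poisson

    X = np.zeros(n_paths)                                        # X_0 = 0 for every path
    for _ in range(int(T / dt)):
        dZ = np.sqrt(dt) * rng.normal(size=n_paths) \
             + np.sqrt(rng.poisson(lam * dt, n_paths)) * rng.normal(size=n_paths)
        X += dZ - c * X * dt                                     # Euler step for dX_t = dZ_t - c X_t dt

    for z in (0.5, 1.0, 2.0):
        predicted = np.exp(quad(lambda s: Psi(np.exp(-c * s) * z), 0, 50)[0])
        print(z, np.exp(1j * z * X).mean().real, predicted)

The empirical characteristic function of X_T and the predicted limit agree up to Monte Carlo and discretisation error.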

4.4 Fractional Derivatives

In this section, we investigate more closely the relation between stable processes and fractional derivatives. We consider only dimension 1.

Fractional derivatives are natural extensions of ordinary derivatives that arise in all sorts of applications, ranging from physics to finance and hydrology. Their versatility comes from a form of universality of the stable distribution, stemming from generalisations of the central limit theorem.

First, as we discussed in Example 4.2.6, we need to give meaning to the operator -\big( \sqrt{-\frac{d^2}{dx^2}} \big)^{\alpha}. In general, we will define the fractional derivative \frac{d^\alpha}{dx^\alpha} through its Fourier expression, and link this expression to the generator of a stable process.

Proposition 4.4.1. If f and its derivatives up to some integer order n > 1 + α exist and are absolutely integrable, then the fractional derivative

\frac{d^\alpha}{dx^\alpha} f(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-i x \xi}\, (-i\xi)^{\alpha}\, \mathcal{F}(f)(\xi)\, d\xi

exists and its Fourier transform is (-i\xi)^{\alpha} \mathcal{F}(f)(\xi).

Proof. This is an application of results from analysis. The proof is left as an exercise. The reader can consult Proposition 2.5, p. 29 in Meerschaert and Sikorskii [8] for guidance. We point out that the sign convention is different in that reference. We use a convention that is more in line with probability theory, hence the negatives.

Consider now a one-sided α-stable process (Xt)t≥0 with α < 1. We saw earlier that the Lévy measure of such a process is \frac{dx}{x^{1+\alpha}} \mathbf{1}_{x \ge 0}, but note that in this case,

\int_{0 < x < 1} x\, \frac{dx}{x^{1+\alpha}} = \int_0^1 x^{-\alpha}\, dx = \frac{1}{1-\alpha} < +\infty,

since α < 1. We say that in the case α < 1, there is no need for compensation. Then, the generator of (Xt)t≥0 writes:

L f(x) = \int_0^{+\infty} \big( f(x+z) - f(x) \big)\, \frac{dz}{z^{\alpha+1}}.

We can now integrate by parts to get:

L f(x) = -\int_0^{+\infty} f'(x+z)\, \frac{dz}{z^{\alpha}} = -\frac{d}{dx} \int_0^{+\infty} f(x+z)\, \frac{dz}{z^{\alpha}}.

This form is often referred to as the Riemann-Liouville fractional derivative.

Remark 4.4.2. Again, the sign convention is different. We have to point out that in the literature, the preferred definition of the Riemann-Liouville fractional derivative is

• if 0 < α < 1,

\frac{d^\alpha}{dx^\alpha} f(x) = \frac{d}{dx} \int_0^{+\infty} f(x - z)\, z^{-\alpha}\, \frac{dz}{\Gamma(1-\alpha)};

• if 1 < α < 2,

\frac{d^\alpha}{dx^\alpha} f(x) = \frac{d^2}{dx^2} \int_0^{+\infty} f(x - z)\, z^{1-\alpha}\, \frac{dz}{\Gamma(2-\alpha)}.


Note that the choice of constant in front of the Lévy measure ensures that the Fourier expression simplifies nicely.

We have yet to say why those objects agree. To do so, we have the following result.

Proposition 4.4.3. When 0 < α < 1, it holds that

\int_0^{+\infty} \big( e^{i\xi x} - 1 \big)\, \frac{dx}{x^{1+\alpha}} = (-i\xi)^{\alpha}.

Proof. We establish that in general, for ξ ≥ 0 and 0 < α < 1,

\int_0^{+\infty} \big( 1 - e^{-\xi x} \big)\, \frac{dx}{x^{1+\alpha}} = \frac{\Gamma(1-\alpha)}{\alpha}\, \xi^{\alpha}.

First, notice that

1 - e^{-\xi x} = \int_0^x \xi e^{-\xi y}\, dy.

Plug this identity into the integral and exchange the order of integration:

\int_0^{+\infty} \big( 1 - e^{-\xi x} \big)\, \frac{dx}{x^{1+\alpha}} = \int_0^{+\infty} \left( \int_0^x \xi e^{-\xi y}\, dy \right) \frac{dx}{x^{1+\alpha}} = \int_0^{+\infty} \left( \int_y^{+\infty} x^{-1-\alpha}\, dx \right) \xi e^{-\xi y}\, dy
= \frac{\xi}{\alpha} \int_0^{+\infty} e^{-\xi y}\, y^{-\alpha}\, dy = \frac{\xi^{\alpha}}{\alpha} \int_0^{+\infty} e^{-x}\, x^{-\alpha}\, dx = \frac{\xi^{\alpha}}{\alpha}\, \Gamma(1-\alpha).

We leave an alternative proof as an exercise to the reader. Again, the reader may consult Meerschaert and Sikorskii [8] for guidance (see Proposition 3.10, p. 56).
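For what it is worth, the identity obtained at the end of the computation above can be confirmed numerically (a small check of ours, with arbitrary α and ξ):

    from math import exp, gamma, inf
    from scipy.integrate import quad

    alpha, xi = 0.5, 2.0
    integrand = lambda x: (1 - exp(-xi * x)) * x ** (-1 - alpha)
    numeric = quad(integrand, 0, 1)[0] + quad(integrand, 1, inf)[0]   # split at 1 to help quad
    closed = gamma(1 - alpha) * xi ** alpha / alpha
    print(numeric, closed)    # both close to 5.013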

Exercise 4. The goal is to compute the integral

I(\alpha) = \int_0^{+\infty} \big( e^{i\xi x} - 1 \big)\, \frac{dx}{x^{1+\alpha}}.

To that end, we define

I(s, \alpha) = \int_0^{+\infty} \big( e^{i\xi x - s x} - 1 \big)\, \frac{dx}{x^{1+\alpha}}.

1. Using an integration by parts, show that

I(s, \alpha) = (i\xi - s) \int_0^{+\infty} e^{i\xi x - s x}\, x^{-\alpha}\, dx.

2. Show that for a, b > 0, the characteristic function of the Γ(a, b) distribution is

\int_0^{+\infty} e^{i\xi x}\, \frac{b^a}{\Gamma(a)}\, x^{a-1} e^{-b x}\, dx = \left( 1 - \frac{i\xi}{b} \right)^{-a}.

3. Deduce that

I(s, \alpha) = (i\xi - s)\, \frac{\Gamma(1-\alpha)}{s^{1-\alpha}} \left( 1 - \frac{i\xi}{s} \right)^{-(1-\alpha)} = -\Gamma(1-\alpha)\, (s - i\xi)^{\alpha}.

4. Use the dominated convergence theorem to reach the desired result.

Thus, this shows that the generator L indeed has symbol (−iξ)^α (up to the normalising constant discussed in Remark 4.4.2).


Chapter 5

Elements of Stochastic Calculus with Jumps

We start this chapter by saying that stochastic calculus can be built from general semi-martingales, a class of stochastic processes encompassing Lévy processes. However, the construction is rather hard and theoretical, relying on tools ranging from analysis to algebra.

In the case of Lévy processes, exploiting the Lévy-Itô decomposition, we see that we only need to give meaning to stochastic integrals with respect to the three "building blocks": Brownian motion, compensated Poisson process and compound Poisson process.

At the end of this chapter, we will have given meaning to a stochastic differential equation of the form:

Y_t = y_0 + \int_0^t b(s, Y_s)\, ds + \int_0^t \sigma(s, Y_s)\, dB_s + \int_{[0,t] \times \{|x| < 1\}} F(Y_s, x)\, \big( N(ds, dx) - \nu(dx)\, ds \big) + \int_{[0,t] \times \{|x| > 1\}} G(Y_s, x)\, N(ds, dx).

This chapter is largely inspired by Chapters 4 and 6 in Applebaum [1]. For a construction of stochastic calculus for semi-martingales, one can consult [9].

This chapter assumes that the reader is familiar with stochastic integration and stochastic differential equations driven by Brownian motion.

5.1 Example of Use in Applications

We start by exploring a rather popular type of equation arising in applications: jump-diffusions. It is well known that diffusions, that is, stochastic differential equations of the form

dXt = b(t,Xt)dt+ σ(t,Xt)dBt, X0 = x0,

can be used to model various phenomena. In finance for instance, they can be used to model stock prices. However, as the many financial crashes have demonstrated, the behaviour of a stock price is often far from that of such an SDE. If one wanted to include the possibility of large losses in the evolution of a stock price, one option would be the following algorithm:

1. Start at x0.

2. Simulate an exponential random variable T1.

• If T1 exceeds the time horizon T, follow the diffusion dXt = b(t,Xt)dt + σ(t,Xt)dBt up to time T and stop.

• Otherwise, follow the diffusion up to time T1, simulate a loss JT1, and restart the process at XT1 − JT1.

3. Draw a new exponential random variable T2 and repeat step 2 from time T1.

This algorithm actually amounts to solving the SDE:

dXt = b(t,Xt)dt+ σ(t,Xt)dBt − dJt, X0 = x0,

where (Jt)t≥0 is a compound Poisson process. Of course, one can easily imagine more complex versions of this equation. For instance, one can let (Jt)t≥0 depend on (Xt)t≥0, or add small jumps. The advantage of keeping the model as it is comes from the fact that one does not need to bother with constructing integrals with respect to Lévy processes.
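To make the discussion concrete, here is a minimal Euler-type simulation of such a jump-diffusion (a sketch of ours; the coefficients, the exponential loss sizes and all parameters are arbitrary choices, not part of the notes):

    import numpy as np

    rng = np.random.default_rng(42)
    b = lambda x: 0.05 * x                   # drift coefficient (toy choice)
    sigma = lambda x: 0.2 * x                # diffusion coefficient (toy choice)
    lam, loss_mean = 0.5, 0.1                # jump intensity and mean loss size
    x0, T, n_steps = 1.0, 5.0, 5000
    dt = T / n_steps

    X = np.empty(n_steps + 1)
    X[0] = x0
    for k in range(n_steps):
        dB = np.sqrt(dt) * rng.normal()
        dJ = rng.exponential(loss_mean, rng.poisson(lam * dt)).sum()   # losses occurring on [t, t + dt]
        X[k + 1] = X[k] + b(X[k]) * dt + sigma(X[k]) * dB - dJ
    # X now contains one simulated path of the jump-diffusion on [0, T]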


5.2 Stochastic Integration

As we assumed the reader to be already familiar with Brownian-motion-related stochastic analysis, we will not expand on that front. Also, as we explained above, as a consequence of the Lévy-Itô decomposition the large-jump part forms a compound Poisson process and has finite variation. Thus, on that front as well, classical Stieltjes integration still applies, and we formally know how to define the part

\int_{[0,t] \times \{|x| > 1\}} G(s, x)\, N(ds, dx).

Observe that the Poisson process Nt(·) is extended to a measure over R+ × Rd, in the same way as any finite variation process is. Besides, in order for the above expression to be usable from a probability-theoretic point of view, we need to assume as usual that s ↦ G(s, x) is predictable.

Really, the only unknown is how to define stochastic integration with respect to the small-jump part, which forms an L2 martingale. To do so, we need to explain how to construct integrals with respect to martingale-valued measures.

In this section, we only give the outline of the construction of stochastic integration with respect to martingale-valued measures. The reader can consult [1] for more details. Also, we will denote by (Nt(·))t≥0 the Poisson random measure associated with a Lévy process (Xt)t≥0, and by (Fs)s≥0 the filtration associated with (Xt)t≥0.

Compensated Poisson Measure

Recall that for all Borel sets A such that 0 ∉ A, Nt(A) has a Poisson distribution with parameter tν(A). We thus consider the martingale:

\tilde{N}_t(A) = N_t(A) - t\,\nu(A).

We extend this measure to R+ × Rd by prescribing

\tilde{N}([t, s] \times A) = \tilde{N}_s(A) - \tilde{N}_t(A).

Then, \tilde{N}(ds, dx) satisfies the following properties:

1. \tilde{N}(\{0\} \times A) = 0 almost surely,

2. \tilde{N}([t, s] \times A) is independent of F_t,

3. \mathbb{E}\big( \tilde{N}([t, s] \times A)^2 \big) = (s - t)\,\nu(A).
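These properties are easy to check by simulation. The sketch below is our own toy setting: a compound Poisson process with intensity λ and jump law μ = Exp(1), so that ν = λμ, and the set A = [1, +∞); by thinning, Nt(A) is Poisson with parameter tν(A) = tλe^{−1}.

    import numpy as np

    rng = np.random.default_rng(7)
    lam, t, n_paths = 2.0, 3.0, 10**5
    nu_A = lam * np.exp(-1.0)                          # nu(A) for A = [1, +infty), mu = Exp(1)

    n_jumps = rng.poisson(lam * t, n_paths)            # total number of jumps on [0, t]
    # each jump lands in A with probability P(Exp(1) >= 1) = e^{-1}, independently (thinning)
    N_t_A = rng.binomial(n_jumps, np.exp(-1.0))
    N_tilde = N_t_A - t * nu_A                         # compensated measure of [0, t] x A
    print(N_tilde.mean(), N_tilde.var(), t * nu_A)     # mean ~ 0, variance ~ t nu(A)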

Again, the idea is to define such objects for general sets A by taking a limit in the L2 sense. But before stating the result, we need to discuss the integrands.

Predictable Processes

Just like in the Gaussian case, the integrand needs to be predictable. Fix E a Borel set and 0 < T < +∞. Let P denote the smallest σ-algebra with respect to which all mappings F : [0, T] × E × Ω → R satisfying 1 and 2 below are measurable:

1. for each 0 ≤ t ≤ T, the mapping (x, ω) ↦ F(t, x, ω) is B(E) ⊗ Ft-measurable,

2. for each x ∈ E and ω ∈ Ω, the mapping t ↦ F(t, x, ω) is left-continuous.

We call P the predictable σ-algebra, and a P-measurable mapping is said to be predictable. We now define a subclass of predictable mappings:

H^2(T, E) = \left\{ F \text{ predictable};\ \int_0^T \int_E \mathbb{E}\big[ |F(t,x)|^2 \big]\, dt\, \nu(dx) < +\infty \right\}.

Naturally, we can define an inner product that agrees with the definition above:

\langle F, G \rangle_{H^2(T,E)} = \int_0^T \int_E \mathbb{E}\big[ F(t,x)\, G(t,x) \big]\, dt\, \nu(dx).

Then, with no surprises, we have:


Theorem 5.2.1. H2(T,E) is a Hilbert space. Moreover, the space of simple functions, that is, functions of the form

F = \sum_{j=1}^m \sum_{k=1}^n F_k(t_j)\, \mathbf{1}_{[t_j, t_{j+1}]}\, \mathbf{1}_{A_k},

where (t_j) is a partition of [0, T] and the A_k are disjoint Borel sets with ν(A_k) < +∞, is dense in H2(T,E).

For the proof of this result, see Lemma 4.1.13, p. 217 and Lemma 4.1.4, p. 218 in Applebaum [1]. Thus, we only need to define the integrals of simple functions, and we get the general construction by taking the limit.

5.3 Construction of the Stochastic Integral

Note that a simple function of the form

F = \sum_{j=1}^m \sum_{k=1}^n F_k(t_j)\, \mathbf{1}_{[t_j, t_{j+1}]}\, \mathbf{1}_{A_k}

is basically a linear combination of products of two indicator functions, with weights F_k(t_j). Thus, we define the stochastic integral by setting

I_T(F) = \sum_{j=1}^m \sum_{k=1}^n F_k(t_j)\, \tilde{N}\big( [t_j, t_{j+1}] \times A_k \big) = \int_0^T \int_E F(t, x)\, \tilde{N}(dt, dx),

that is, the same linear combination, but where the products of indicators are measured with our martingale-valued measure \tilde{N}(ds, dx). This construction is linear and enjoys the following properties:

1. \mathbb{E}[I_T(F)] = 0,

2. \mathbb{E}[I_T(F)^2] = \int_0^T \int_E \mathbb{E}\big[ F(t,x)^2 \big]\, dt\, \nu(dx).

Thus, I_T(\cdot) is a linear isometry from the space of simple functions to L2(Ω), and by density, one can extend the construction to the whole space H2(T,E).

Remark 5.3.1 (Itô’s Isometry). It thus holds for any predictable function F : [0, T ]× Rd × Ω→ R that

E

∣∣∣∣∣∫ T

0

∫E

F (t, x)N(dt, dx)

∣∣∣∣∣2 =

∫ T

0

∫E

E[F (t, x)2]dtν(dx).

This identity is called Itô’s Isometry, and is the key to the definition of the stochastic integral.
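As an illustration (our own toy example), take the jump measure of a compound Poisson process with intensity λ and centred jump law μ = N(0,1), so that ν = λμ, and the integrand F(t,x) = x. The stochastic integral over [0,T] × R is then simply the sum of the jumps, and the isometry predicts a second moment equal to λT.

    import numpy as np

    rng = np.random.default_rng(3)
    lam, T, n_paths = 2.0, 1.5, 10**6

    counts = rng.poisson(lam * T, n_paths)                    # number of jumps on [0, T]
    integral = np.sqrt(counts) * rng.normal(size=n_paths)     # sum of the centred N(0,1) jumps
    print(np.mean(integral**2), lam * T)                      # isometry: both should be ~ 3.0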

Lévy-type Stochastic Integrals

We are now able to give meaning to

Y_t = y_0 + \int_0^t b_s\, ds + \int_0^t \sigma_s\, dB_s + \int_{[0,t] \times \{|x| < 1\}} F(s, x)\, \big( N(ds, dx) - \nu(dx)\, ds \big) + \int_{[0,t] \times \{|x| > 1\}} G(s, x)\, N(ds, dx),

when b, σ, F and G are predictable. Note that the literature often employs the shorthand differential notation

dY_t = b_t\, dt + \sigma_t\, dB_t + F(t, x)\, \tilde{N}(dt, dx) + G(t, x)\, N(dt, dx).

Assume now that (Xt)t≥0 has Lévy-Itô decomposition:

X_t = bt + B^a_t + \int_{|x| < 1} x\, \tilde{N}_t(dx) + \int_{|x| > 1} x\, N_t(dx),


where B^a_t has covariance a = σ². Consider also a predictable mapping L : R+ × Ω → R. Then, in the definition of (Yt)t≥0 above, if we specify

b_t = b L_t, \qquad \sigma_t = \sigma L_t, \qquad F(t, x) = G(t, x) = x L_t,

then (Yt)t≥0 can be written as

dY_t = L_t\, dX_t.

We call Y a Lévy stochastic integral. Naturally, the next step will be to explore stochastic differential equations. However, before that, we introduce the Itô formula with jumps, which makes use of such stochastic integrals.

5.4 Quadratic Variation and Itô Formula with jumps

The goal of this section is to be able to use the Itô formula when the process we consider has jumps. Before we can state this formula, however, we must introduce the concept of quadratic variation.

Quadratic Variation

We start by giving a definition-theorem:

Definition 5.4.1. The quadratic variation of a process (Xt)t≥0, denoted ([X,X]_t)_{t≥0}, is càdlàg and satisfies the following properties:

1. [X,X]_0 = X_0^2 and \Delta [X,X]_t = (\Delta X_t)^2;

2. if 0 = t_0 < t_1 < \cdots < t_n = t is a partition of [0, t] whose mesh goes to zero, then

X_0^2 + \sum_{t_i} \big( X_{t_{i+1}} - X_{t_i} \big)^2 \xrightarrow[\;|t_{i+1} - t_i| \to 0\;]{} [X,X]_t,

the convergence being in probability.
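Point 2 can be visualised by simulation. The following sketch (ours) builds one path of X_t = σB_t + J_t with J compound Poisson on a fine grid and compares the realized sums with σ²t + Σ(ΔX_s)², which is what [X,X]_t should be for this process.

    import numpy as np

    rng = np.random.default_rng(11)
    sigma, lam, t, n = 0.4, 3.0, 2.0, 200_000
    dt = t / n

    jump_sizes = rng.normal(0.0, 1.0, rng.poisson(lam * t))    # jump sizes of J on [0, t]
    jump_bins = rng.integers(0, n, size=jump_sizes.size)       # increment in which each jump falls
    dX = sigma * np.sqrt(dt) * rng.normal(size=n)              # Brownian increments
    np.add.at(dX, jump_bins, jump_sizes)                       # add the jumps to the increments

    realized = np.sum(dX**2)                                   # sum over the partition of squared increments (X_0 = 0)
    predicted = sigma**2 * t + np.sum(jump_sizes**2)           # sigma^2 t + sum of squared jumps
    print(realized, predicted)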

First, we should warn the reader that a similar definition exists in the continuous case, where the bracket is denoted by ⟨X,X⟩t instead. Actually, this angle bracket is defined to be the compensator of the square bracket ([X,X]t)t≥0, i.e. the unique finite variation process making ([X,X]t − ⟨X,X⟩t)t≥0 a local martingale. In the case where (Xt)t≥0 is continuous, those two processes coincide. See Protter [9], p. 124 for a discussion on the subject.

Just as in the continuous case, the Kunita-Watanabe inequality holds.

Theorem 5.4.2. Let (Xt)t≥0 and (Yt)t≥0 be two semi-martingales, and (Ht)t≥0 and (Kt)t≥0 two measurable processes. Then,

\int_0^{+\infty} |H_s|\, |K_s|\, \big| d[X,Y]_s \big| \le \sqrt{ \int_0^{+\infty} H_s^2\, d[X,X]_s }\, \sqrt{ \int_0^{+\infty} K_s^2\, d[Y,Y]_s }.

See Theorem 25, p. 69 in Protter [9] for a proof. We can now state the Itô formula when the process (Xt)t≥0 is a semi-martingale. Again, we refer to Protter, Theorem 32, p. 78 for a proof.

Itô's Formula with jumps

At last, we can state the main result of this section.

Theorem 5.4.3. Let (Xt)t≥0 be a semi-martingale and let F : R → R be a twice differentiable function. We have:

F(X_t) - F(X_0) = \int_{0+}^{t} F'(X_{s-})\, dX_s + \frac{1}{2} \int_{0+}^{t} F''(X_{s-})\, d[X,X]^c_s + \sum_{0 < s \le t} \Big( F(X_s) - F(X_{s-}) - F'(X_{s-})\, \Delta X_s \Big),

where [X,X]^c_s denotes the continuous part of the bracket [X,X]_s.

We refer to Applebaum [1] for a detailed proof. The reader familiar with the continuous version will notice that the first terms are very familiar. The part \sum_{0 < s \le t} \big( F(X_s) - F(X_{s-}) - F'(X_{s-})\, \Delta X_s \big) is basically accounting for the jumps of the process, whereas [X,X]^c_s in the continuous case simply becomes the usual ⟨X,X⟩_s. In the next section, we give an example of application of this formula.


Stochastic Exponentials

In this section, we discuss the solution of an SDE of the form

dZ_t = Z_{t-}\, dY_t,

where (Yt)t≥0 is a Lévy-type stochastic integral. The solution to this equation is referred to as the stochastic exponential, or Doléans-Dade exponential, and is defined as

\mathcal{E}^Y_t = \exp\!\Big( Y_t - \frac{1}{2} [Y,Y]^c_t \Big) \prod_{0 < s \le t} (1 + \Delta Y_s)\, e^{-\Delta Y_s}.

Readers familiar with the continuous case will observe that when (Yt)t≥0 has no jumps, this exponential coincides with the exponential appearing in relation to the Girsanov theorem. The main difference comes from the fact that here again, we must account for the jumps of the process.
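Before stating the result, here is a quick numerical sanity check of the formula (our own toy example): for a path of Y_t = bt + σB_t + J_t with J compound Poisson and jumps larger than −1, the recursion Z_{k+1} = Z_k(1 + ΔY_k), i.e. the Euler scheme for dZ_t = Z_{t−}dY_t with Z_0 = 1, should be close to the closed-form expression above.

    import numpy as np

    rng = np.random.default_rng(5)
    b, sigma, lam, T, n = 0.1, 0.3, 2.0, 1.0, 200_000
    dt = T / n

    dY = b * dt + sigma * np.sqrt(dt) * rng.normal(size=n)       # continuous increments of Y
    jumps = rng.uniform(-0.5, 0.5, rng.poisson(lam * T))         # jump sizes of Y (all > -1)
    np.add.at(dY, rng.integers(0, n, jumps.size), jumps)         # insert the jumps

    Z_euler = np.prod(1.0 + dY)                                  # recursion Z_{k+1} = Z_k (1 + Delta Y_k)
    Y_T = dY.sum()
    Z_formula = np.exp(Y_T - 0.5 * sigma**2 * T) * np.prod((1.0 + jumps) * np.exp(-jumps))
    print(Z_euler, Z_formula)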

Theorem 5.4.4. The stochastic process (\mathcal{E}^Y_t)_{t \ge 0} satisfies the equation

dZ_t = Z_{t-}\, dY_t.

Proof. We start by showing that \mathcal{E}^Y_t = e^{S^Y_t}, where

dS^Y_t = \sigma_t\, dB_t + \Big( b_t - \frac{1}{2} \sigma_t^2 \Big) dt + \int_{|x| > 1} \log\big( 1 + G(t, x) \big)\, N(dt, dx) + \int_{|x| \le 1} \log\big( 1 + F(t, x) \big)\, \tilde{N}(dt, dx) + \int_{|x| \le 1} \Big( \log\big( 1 + F(t, x) \big) - F(t, x) \Big)\, \nu(dx)\, dt,

using the fact that (Yt)t≥0 is a Lévy-type stochastic integral:

Y_t = y_0 + \int_0^t b_s\, ds + \int_0^t \sigma_s\, dB_s + \int_{[0,t] \times \{|x| < 1\}} F(s, x)\, \big( N(ds, dx) - \nu(dx)\, ds \big) + \int_{[0,t] \times \{|x| > 1\}} G(s, x)\, N(ds, dx).

Then, the result follows from an application of Itô's formula. For more details, see Exercise 5.1.2 and Theorem 5.1.3 in Applebaum [1].

We state a useful property for such exponentials:

Lemma 5.4.5. It holds that for each t ≥ 0,

\mathcal{E}^Y_t\, \mathcal{E}^Z_t = \mathcal{E}^{Y + Z + [Y,Z]}_t.

Proof. Again, this follows from an application of Itô’s formula.

5.5 Stochastic Differential Equation

In this section, we collect important results for an equation of the form

Y_t = y_0 + \int_0^t b(s, Y_s)\, ds + \int_0^t \sigma(s, Y_s)\, dB_s + \int_{[0,t] \times \{|x| < 1\}} F(Y_s, x)\, \big( N(ds, dx) - \nu(dx)\, ds \big) + \int_{[0,t] \times \{|x| > 1\}} G(Y_s, x)\, N(ds, dx).

Most of these results can be found in Applebaum [1]. We do not give any proofs, but we will always give an exact reference.


Existence and Uniqueness

Unsurprisingly, we start by giving a sufficient condition ensuring existence and uniqueness. Just as in the continuous case, notions of strong and weak solutions exist. But to avoid paraphrasing other texts, we only refer here to the discussion in [1], Subsection 6.2, page 363.

We impose the following conditions:

• Lipschitz condition: there exists K1 > 0 such that for all y1, y2 ∈ Rd,

|b(y_1) - b(y_2)|^2 + \| a(y_1) - a(y_2) \|^2 + \int_{|x| \le c} |F(y_1, x) - F(y_2, x)|^2\, \nu(dx) \le K_1\, |y_1 - y_2|^2.

• Growth condition: there exists K2 > 0 such that for all y ∈ Rd,

|b(y)|^2 + \| a(y) \|^2 + \int_{|x| \le c} |F(y, x)|^2\, \nu(dx) \le K_2\, (1 + |y|^2).

• Continuity: the mapping y ↦ G(y, x) is continuous for all |x| > 1.

Theorem 5.5.1. Under these assumptions, there exists a unique càdlàg strong solution to

Y_t = y_0 + \int_0^t b(s, Y_s)\, ds + \int_0^t \sigma(s, Y_s)\, dB_s + \int_{[0,t] \times \{|x| < 1\}} F(Y_s, x)\, \big( N(ds, dx) - \nu(dx)\, ds \big) + \int_{[0,t] \times \{|x| > 1\}} G(Y_s, x)\, N(ds, dx).

This result is obviously not optimal, as it only deals with strong solutions. We point out that, just as in the Gaussian case, one can link weak solutions to the so-called martingale problem. Then, showing weak existence and uniqueness boils down to giving sharp estimates on transition functions.

Notably, one can establish well-posedness of the martingale problem using a parametrix expansion in the case of tempered stable distributions (see [5]).

As for local conditions, some results exist; see for instance Theorem 6.2.11 in Applebaum [1]. The term "local" here refers to a potentially explosive stopping time. This phenomenon is called the "explosion time"; see also Protter [9].

Stochastic flows and Markov Property

In this paragraph, we assume the conditions of the previous paragraph ensuring existence and uniqueness for the SDE. Then, one can consider the path of the solution and, as in the continuous case, ask whether or not the future depends on the past.

More generally, one can study the mapping that sends a deterministic starting point x ∈ Rd to the unique strong solution of the SDE. The remarkable thing in the case of Lévy processes is that in the homogeneous case (coefficients not depending on time), the independence of the increments is preserved.

Definition 5.5.2. Let Φ = \{\Phi_{s,t},\ 0 \le s \le t < +\infty\} be a family of measurable mappings from Rd × Ω to Rd:

\Phi^{\omega}_{s,t}(y) \in \mathbb{R}^d.

We say that Φ is a stochastic flow if, almost surely,

1. \Phi^{\omega}_{r,t}(y) = \Phi^{\omega}_{s,t}\big( \Phi^{\omega}_{r,s}(y) \big) for all 0 ≤ r ≤ s ≤ t < +∞ and all y ∈ Rd;

2. \Phi^{\omega}_{s,s}(y) = y for all s ≥ 0 and all y ∈ Rd.

We say that it is a Lévy flow if, in addition, we have that

(L1) for each n ∈ N, each 0 ≤ t1 < · · · < tn < +∞ and each y ∈ Rd, the random variables \{\Phi_{t_j, t_{j+1}}(y);\ 1 \le j \le n-1\} are independent;

(L2) the mapping t ↦ \Phi_{s,t}(y) is càdlàg for each y ∈ Rd and t ≥ s.


Theorem 5.5.3. Let Φ be the mapping that sends y ∈ Rd to the solution of

Y_t = y + \int_0^t b(Y_s)\, ds + \int_0^t \sigma(Y_s)\, dB_s + \int_{[0,t] \times \{|x| < 1\}} F(Y_s, x)\, \big( N(ds, dx) - \nu(dx)\, ds \big) + \int_{[0,t] \times \{|x| > 1\}} G(Y_s, x)\, N(ds, dx).

Then Φ is a Lévy flow. Moreover, the solution to the SDE is a Markov process.

Proof. See Theorem 6.4.2 in Applebaum [1] for the Lévy flow property and Theorem 6.4.5 for the Markov property.


Bibliography

[1] D. Applebaum. Lévy Processes and Stochastic Calculus, 2nd edition. Cambridge University Press, 2009.

[2] J. Bertoin. Lévy Processes. Cambridge University Press, 1998.

[3] P. Billingsley. Probability and Measure. John Wiley & Sons, 2008.

[4] S. N. Ethier and T. G. Kurtz. Markov Processes: Characterization and Convergence. Wiley, 1997.

[5] L. Huang. Density estimates for SDEs driven by tempered stable processes. 2016.

[6] J. Jacod and A. N. Shiryaev. Limit Theorems for Stochastic Processes. Springer, 2002.

[7] O. Kallenberg. Foundations of Modern Probability. Springer, 2006.

[8] M. M. Meerschaert and A. Sikorskii. Stochastic Models for Fractional Calculus, volume 43. Walter de Gruyter, 2011.

[9] P. E. Protter. Stochastic Integration and Differential Equations. Springer, 2005.

[10] G. Samorodnitsky and M. S. Taqqu. Stable Non-Gaussian Random Processes. 2005.

[11] K. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, 2005.

[12] V. M. Zolotarev. One-Dimensional Stable Distributions, volume 65. American Mathematical Society, 1986.
