7/29/2019 An Introduction to Stochastic Calculus
1/92
AN INTRODUCTION TO STOCHASTIC CALCULUS
Marta Sanz-Solé
Facultat de Matemàtiques, Universitat de Barcelona
September 18, 2012
Contents

1 A review of the basics on stochastic processes
1.1 The law of a stochastic process
1.2 Sample paths
2 The Brownian motion
2.1 Equivalent definitions of Brownian motion
2.2 A construction of Brownian motion
2.3 Path properties of Brownian motion
2.4 The martingale property of Brownian motion
2.5 Markov property
3 Itô's calculus
3.1 Itô's integral
3.2 The Itô integral as a stochastic process
3.3 An extension of the Itô integral
3.4 A change of variables formula: Itô's formula
3.4.1 One-dimensional Itô's formula
3.4.2 Multidimensional version of Itô's formula
4 Applications of the Itô formula
4.1 Burkholder-Davis-Gundy inequalities
4.2 Representation of L² Brownian functionals
4.3 Girsanov's theorem
5 Local time of Brownian motion and Tanaka's formula
6 Stochastic differential equations
6.1 Examples of stochastic differential equations
6.2 A result on existence and uniqueness of solution
6.3 Some properties of the solution
6.4 Markov property of the solution
7 Numerical approximations of stochastic differential equations
8 Continuous time martingales
8.1 Doob's inequalities for martingales
8.2 Local martingales
8.3 Quadratic variation of a local martingale
9 Stochastic integrals with respect to continuous martingales
10 Appendix 1: Conditional expectation
11 Appendix 2: Stopping times
1 A review of the basics on stochastic processes
This chapter introduces the notion of a stochastic process and some general definitions related to it. For a more complete account of the topic, we refer the reader to [11]. Let us start with a definition.

Definition 1.1 A stochastic process with state space S is a family {X_i, i ∈ I} of random variables X_i: Ω → S indexed by a set I.

For successful progress in the analysis of such an object, some further structure on the index set I and on the state space S is required. In this course, we shall mainly deal with the particular cases I = ℕ, ℤ₊, ℝ₊ and S either a countable set or a subset of ℝ^d, d ≥ 1.

The basic problem statisticians are interested in is the analysis of the probability law (mostly described by some parameters) of characters exhibited by populations. For a fixed character described by a random variable X, they use a finite number of independent copies of X (a sample of X). For many purposes it is interesting to have samples of any size, and therefore to consider sequences X_n, n ≥ 1. It is important here to insist on the word copies, meaning that the circumstances around the different outcomes of X do not change. It is a static world. Hence, they deal with stochastic processes {X_n, n ≥ 1} consisting of independent and identically distributed random variables.

This is not the setting we are interested in here. Instead, we would like to give stochastic models for phenomena of the real world which evolve as time goes by. Stochasticity is a choice in the face of incomplete knowledge and extreme complexity. Evolution, in contrast with statics, is what we observe in most phenomena in Physics, Chemistry, Biology, Economics, Life Sciences, etc. Stochastic processes are well suited for modeling stochastic evolution phenomena. The interesting cases correspond to families of random variables X_i which are not independent. In fact, the famous classes of stochastic processes are described by means of types of dependence between the variables of the process.
1.1 The law of a stochastic process
The probabilistic features of a stochastic process are gathered in the joint distributions of its variables, as given in the next definition.

Definition 1.2 The finite-dimensional joint distributions of the process {X_i, i ∈ I} consist of the multi-dimensional probability laws of any finite family of random vectors (X_{i_1}, ..., X_{i_m}), where i_1, ..., i_m ∈ I and m ≥ 1 is arbitrary.
Let us give an important example.
Example 1.1 A stochastic process {X_t, t ≥ 0} is said to be Gaussian if its finite-dimensional joint distributions are Gaussian laws. Remember that in this case, the law of the random vector (X_{t_1}, ..., X_{t_m}) is characterized by two parameters:

\mu(t_1, \ldots, t_m) = E(X_{t_1}, \ldots, X_{t_m}) = \big( E(X_{t_1}), \ldots, E(X_{t_m}) \big),

\Sigma(t_1, \ldots, t_m) = \big( \mathrm{Cov}(X_{t_i}, X_{t_j}) \big)_{1 \le i, j \le m}.
In the sequel we shall assume that I ⊂ ℝ₊ and S ⊂ ℝ, either countable or uncountable, and denote by ℝ^I the set of real-valued functions defined on I. A stochastic process {X_t, t ≥ 0} can be viewed as a random vector

X: \Omega \to \mathbb{R}^I.

Putting the appropriate σ-field of events on ℝ^I, say B(ℝ^I), one can define, as for random variables, the law of the process as the mapping

P_X(B) = P(X^{-1}(B)), \quad B \in \mathcal{B}(\mathbb{R}^I).

Mathematical results from measure theory tell us that P_X is determined by means of a procedure of extension of measures on cylinder sets given by the family of all possible finite-dimensional joint distributions. This is a deep result. In Example 1.1, we have defined a class of stochastic processes by means of the type of its finite-dimensional joint distributions. But does such an object exist? In other words, can one define a stochastic process by giving only its finite-dimensional joint distributions? Roughly speaking, the answer is yes, adding some extra condition. The precise statement is a famous result by Kolmogorov that we now quote.
Theorem 1.1 Consider a family

\{ P_{t_1, \ldots, t_n},\ t_1 < \cdots < t_n,\ n \ge 1,\ t_i \in I \} \qquad (1.1)

where:

1. P_{t_1, \ldots, t_n} is a probability on ℝ^n,

2. if \{ t_{i_1} < \cdots < t_{i_m} \} \subset \{ t_1 < \cdots < t_n \}, the probability law P_{t_{i_1}, \ldots, t_{i_m}} is the marginal distribution of P_{t_1, \ldots, t_n}.

Then there exists a stochastic process {X_t, t ∈ I}, defined on some probability space, such that its finite-dimensional joint distributions are given by (1.1). That is, the law of the random vector (X_{t_1}, ..., X_{t_n}) is P_{t_1, \ldots, t_n}.
One can apply this theorem to Example 1.1 to show the existence of Gaussian processes, as follows. Let K: I × I → ℝ be a symmetric, nonnegative definite function. That means:

- for any s, t ∈ I, K(t, s) = K(s, t);
- for any natural number n and arbitrary t_1, ..., t_n ∈ I and x_1, ..., x_n ∈ ℝ,

\sum_{i,j=1}^{n} K(t_i, t_j)\, x_i x_j \ge 0.

Then there exists a Gaussian process {X_t, t ≥ 0} such that E(X_t) = 0 for any t ∈ I and Cov(X_{t_i}, X_{t_j}) = K(t_i, t_j) for any t_i, t_j ∈ I.

To prove this result, fix t_1, ..., t_n ∈ I and set μ = (0, ..., 0) ∈ ℝ^n, Σ = (K(t_i, t_j))_{1≤i,j≤n} and

P_{t_1, \ldots, t_n} = N(0, \Sigma).

We denote by (X_{t_1}, ..., X_{t_n}) a random vector with law P_{t_1, \ldots, t_n}. For any subset {t_{i_1}, ..., t_{i_m}} of {t_1, ..., t_n}, it holds that

A\, (X_{t_1}, \ldots, X_{t_n})^{t} = (X_{t_{i_1}}, \ldots, X_{t_{i_m}})^{t},

with

A = \begin{pmatrix} \delta_{t_1, t_{i_1}} & \cdots & \delta_{t_n, t_{i_1}} \\ \vdots & & \vdots \\ \delta_{t_1, t_{i_m}} & \cdots & \delta_{t_n, t_{i_m}} \end{pmatrix},

where δ_{s,t} denotes the Kronecker delta function. By the properties of Gaussian vectors, the random vector (X_{t_{i_1}}, ..., X_{t_{i_m}}) has an m-dimensional normal distribution, zero mean, and covariance matrix A Σ A^t. By the definition of A, it is trivial to check that

A \Sigma A^{t} = \big( K(t_{i_l}, t_{i_k}) \big)_{1 \le l, k \le m}.

Hence, the assumptions of Theorem 1.1 hold true and the result follows.
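This existence proof is constructive enough to check numerically. The sketch below (NumPy assumed; the function names are our own) samples the finite-dimensional law N(0, Σ) with Σ_ij = K(t_i, t_j), here for the kernel K(s, t) = s ∧ t, and compares the empirical covariances against K:

```python
import numpy as np

def sample_finite_dimensional(K, times, n_samples, rng):
    # Sigma_{ij} = K(t_i, t_j) must be symmetric nonnegative definite,
    # so that N(0, Sigma) is a well-defined Gaussian law.
    Sigma = np.array([[K(s, t) for t in times] for s in times])
    return rng.multivariate_normal(np.zeros(len(times)), Sigma, size=n_samples)

# Brownian-motion kernel K(s, t) = s ∧ t (Python's builtin min).
rng = np.random.default_rng(0)
times = [0.25, 0.5, 1.0]
X = sample_finite_dimensional(min, times, n_samples=200_000, rng=rng)
emp_cov = np.cov(X, rowvar=False)   # should be close to (t_i ∧ t_j)_{ij}
```

The empirical covariance matrix recovers t_i ∧ t_j up to Monte Carlo error, illustrating that the kernel alone pins down all finite-dimensional laws.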
1.2 Sample paths
In the previous discussion, stochastic processes are considered as random vectors. In the context of modeling, what matters are the observed values of the process. Observations correspond to fixed values of ω ∈ Ω. This new point of view leads to the next definition.

Definition 1.3 The sample paths of a stochastic process {X_t, t ∈ I} are the family of functions indexed by ω ∈ Ω, X(ω): I → S, defined by X(ω)(t) = X_t(ω).

Sample paths are also called trajectories.
Example 1.2 Consider random arrivals of customers at a store. We set our clock at zero and measure the time between two consecutive arrivals. These are random variables X_1, X_2, .... We assume X_i > 0, a.s. Set S_0 = 0 and

S_n = \sum_{j=1}^{n} X_j, \quad n \ge 1.

S_n is the time of the n-th arrival. The process we would like to introduce is N_t, giving the number of customers who have visited the store during the time interval [0, t], t ≥ 0. Clearly, N_0 = 0, and for t > 0, N_t = k if and only if

S_k \le t < S_{k+1}.

The stochastic process {N_t, t ≥ 0} takes values in ℤ₊. Its sample paths are increasing right-continuous functions, with jumps of size one at the random times S_n, n ≥ 1. It is a particular case of a counting process. Sample paths of counting processes are always increasing right-continuous functions whose jumps are natural numbers.
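A sample path of such a counting process is easy to simulate. In the sketch below the interarrival times are taken exponential, a hypothetical choice; any positive random variables would do:

```python
import numpy as np

def counting_path(interarrival, t_grid):
    # S_n = X_1 + ... + X_n are the arrival times; N_t counts the S_n <= t,
    # so N_t = k iff S_k <= t < S_{k+1}.  The path is right-continuous
    # and increases by unit jumps at the random times S_n.
    S = np.cumsum(interarrival)
    return np.searchsorted(S, t_grid, side="right")

rng = np.random.default_rng(1)
X = rng.exponential(scale=1.0, size=50)   # interarrival times X_i > 0 a.s.
t_grid = np.linspace(0.0, 10.0, 101)
N = counting_path(X, t_grid)              # one sample path of {N_t, t >= 0}
```

`searchsorted` with `side="right"` counts the arrivals with S_n ≤ t, which is exactly the defining property N_t = k iff S_k ≤ t < S_{k+1}.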
Example 1.3 The evolution of prices of risky assets can be described by real-valued stochastic processes {X_t, t ≥ 0} with continuous, although very rough, sample paths. They are generalizations of the Brownian motion.

The Brownian motion, also called the Wiener process, is a Gaussian process {B_t, t ≥ 0} with the following parameters:

E(B_t) = 0, \qquad E(B_s B_t) = s \wedge t.

This defines the finite-dimensional distributions and therefore the existence of the process via Kolmogorov's theorem (see Theorem 1.1).
Before giving a heuristic motivation for the preceding definition of Brownian motion, we introduce two further notions.

A stochastic process {X_t, t ∈ I} has independent increments if for any t_1 < t_2 < ... < t_k the random variables X_{t_2} − X_{t_1}, ..., X_{t_k} − X_{t_{k−1}} are independent.

A stochastic process {X_t, t ∈ I} has stationary increments if for any t_1 < t_2, the law of the random variable X_{t_2} − X_{t_1} is the same as that of X_{t_2 − t_1}.

Brownian motion is named after Robert Brown, a British botanist who observed and reported in 1827 the irregular movements of pollen particles suspended in a liquid. Assume that, when starting the observation, the pollen particle is at position x = 0. Denote by B_t the position of (one coordinate of) the particle at time t > 0. For physical reasons, the trajectories must be continuous functions, and because of the erratic movement, it seems reasonable to say that {B_t, t ≥ 0} is a stochastic process. It also seems reasonable to assume that the change in position of the particle during the time interval [t, t + s] is independent of its previous positions at times before t, and therefore to assume that the process has independent increments. The fact that such increments must be stationary is explained by kinetic theory, assuming that the temperature during the experiment remains constant.

The model for the law of B_t was given by Einstein in 1905. More precisely, Einstein's definition of Brownian motion is that of a stochastic process with independent and stationary increments such that the law of an increment B_t − B_s, s < t, is Gaussian, with zero mean and E(B_t − B_s)² = t − s. This definition is equivalent to the one given before.
2 The Brownian motion
2.1 Equivalent definitions of Brownian motion
This chapter is devoted to the study of Brownian motion, the process introduced in Example 1.3, which we recall now.

Definition 2.1 The stochastic process {B_t, t ≥ 0} is a one-dimensional Brownian motion if it is Gaussian, with zero mean and with covariance function given by E(B_t B_s) = s ∧ t.

The existence of such a process is ensured by Kolmogorov's theorem. Indeed, it suffices to check that

\Gamma(s, t) = s \wedge t

is nonnegative definite. That means, for any t_i, t_j ≥ 0 and any real numbers a_i, a_j, i, j = 1, ..., m,

\sum_{i,j=1}^{m} a_i a_j \Gamma(t_i, t_j) \ge 0.

But

s \wedge t = \int_0^{\infty} \mathbf{1}_{[0,s]}(r)\, \mathbf{1}_{[0,t]}(r)\, dr.

Hence,

\sum_{i,j=1}^{m} a_i a_j (t_i \wedge t_j) = \sum_{i,j=1}^{m} a_i a_j \int_0^{\infty} \mathbf{1}_{[0,t_i]}(r)\, \mathbf{1}_{[0,t_j]}(r)\, dr = \int_0^{\infty} \Big( \sum_{i=1}^{m} a_i \mathbf{1}_{[0,t_i]}(r) \Big)^2 dr \ge 0.
Notice also that, since E(B_0²) = 0, the random variable B_0 is zero almost surely.

Each random variable B_t, t > 0, of the Brownian motion has a density, namely

p_t(x) = \frac{1}{\sqrt{2\pi t}} \exp\Big( -\frac{x^2}{2t} \Big),

while for t = 0, its density is a Dirac mass at zero, δ_{0}. Differentiating p_t(x) once with respect to t, and then twice with respect to x, easily yields

\partial_t p_t(x) = \frac{1}{2}\, \partial^2_{x} p_t(x), \qquad p_0(x) = \delta_{0}.
This is the heat equation on ℝ with initial condition p_0(x) = δ_{0}. That means, as time evolves, the density of the random variables of the Brownian motion behaves like a diffusive physical phenomenon. There are equivalent definitions of Brownian motion, such as the one given in the next result.
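The heat-equation identity can be checked numerically with finite differences; this is only a sanity check, with arbitrary grid choices:

```python
import numpy as np

def p(t, x):
    # Density of B_t: p_t(x) = (2*pi*t)^(-1/2) * exp(-x^2 / (2t)).
    return np.exp(-x ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

t0 = 1.0
x = np.linspace(-3.0, 3.0, 13)
h = 1e-4

# Central differences for d/dt p_t(x) and d^2/dx^2 p_t(x).
dp_dt = (p(t0 + h, x) - p(t0 - h, x)) / (2.0 * h)
d2p_dx2 = (p(t0, x + h) - 2.0 * p(t0, x) + p(t0, x - h)) / h ** 2
```

At every grid point, dp_dt agrees with (1/2) d2p_dx2 up to discretization error, which is the heat equation above.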
Proposition 2.1 A stochastic process {X_t, t ≥ 0} is a Brownian motion if and only if

(i) X_0 = 0, a.s.,

(ii) for any 0 ≤ s < t, the random variable X_t − X_s is independent of the σ-field generated by (X_r, 0 ≤ r ≤ s), denoted σ(X_r, 0 ≤ r ≤ s), and X_t − X_s is a N(0, t − s) random variable.
Proof: Let us assume first that {X_t, t ≥ 0} is a Brownian motion. Then E(X_0²) = 0. Thus, X_0 = 0 a.s.

Let H_s and H̃_s be the vector subspaces of L²(Ω) spanned by (X_r, 0 ≤ r ≤ s) and (X_{s+u} − X_s, u ≥ 0), respectively. Since for any 0 ≤ r ≤ s

E\big( X_r (X_{s+u} - X_s) \big) = r \wedge (s+u) - r \wedge s = r - r = 0,

H_s and H̃_s are orthogonal in L²(Ω); since the variables involved are jointly Gaussian, orthogonality implies independence. Consequently, X_t − X_s is independent of the σ-field σ(X_r, 0 ≤ r ≤ s). Since linear combinations of Gaussian random variables are also Gaussian, X_t − X_s is normal, with E(X_t − X_s) = 0 and

E(X_t - X_s)^2 = t + s - 2s = t - s.

This ends the proof of properties (i) and (ii).

Assume now that (i) and (ii) hold true. Then the finite-dimensional distributions of {X_t, t ≥ 0} are multidimensional normal, and for 0 ≤ s ≤ t,

E(X_t X_s) = E\big( (X_t - X_s + X_s) X_s \big) = E\big( (X_t - X_s) X_s \big) + E(X_s^2)
 = E(X_t - X_s)\, E(X_s) + E(X_s^2) = E(X_s^2) = s = s \wedge t.
Remark 2.1 We shall see later that Brownian motion has continuous sample paths. The description of the process given in the preceding proposition tells us that such a process is a model for a random evolution which starts from x = 0 at time t = 0, in which the qualitative change over a time increment only depends on its length (stationarity), and in which the future evolution of the process is independent of its past (Markov property).
Remark 2.2 It is easy to prove that if B = {B_t, t ≥ 0} is a Brownian motion, so is −B = {−B_t, t ≥ 0}. Moreover, for any λ > 0, the process

\hat{B} = \Big\{ \frac{1}{\lambda}\, B_{\lambda^2 t},\ t \ge 0 \Big\}

is also a Brownian motion. This means that, zooming in or out, we will observe the same sort of behaviour. This is called the scaling property of Brownian motion.
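The scaling property can be illustrated with a quick Monte Carlo check: B_{λ²t} is N(0, λ²t), and the scaled variable (1/λ)B_{λ²t} should match the law N(0, t) of B_t. The parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t, n = 3.0, 0.7, 400_000

# B_{lam^2 t} ~ N(0, lam^2 t); after dividing by lam the law should be N(0, t).
B_lam2t = rng.normal(0.0, np.sqrt(lam ** 2 * t), size=n)
scaled = B_lam2t / lam
```

The sample mean and variance of `scaled` match those of B_t ~ N(0, t) up to Monte Carlo error.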
2.2 A construction of Brownian motion
There are several ways to obtain a Brownian motion. Here we shall give P. Lévy's construction, which also provides the continuity of the sample paths. Before going through the details of this construction, we mention an alternative.

Brownian motion as the limit of a random walk

Let {ξ_j, j ∈ ℕ} be a sequence of independent, identically distributed random variables with mean zero and variance σ² > 0. Consider the sequence of partial sums defined by S_0 = 0, S_n = Σ_{j=1}^{n} ξ_j. The sequence {S_n, n ≥ 0} is a Markov chain, and also a martingale.

Let us consider the continuous-time stochastic process defined by linear interpolation of {S_n, n ≥ 0}, as follows. For any t ≥ 0, let [t] denote its integer part. Then set

Y_t = S_{[t]} + (t - [t])\, \xi_{[t]+1}, \qquad (2.1)

for any t ≥ 0.
The next step is to scale the sample paths of {Y_t, t ≥ 0}. By analogy with the scaling in the statement of the central limit theorem, we set

B^{(n)}_t = \frac{1}{\sigma \sqrt{n}}\, Y_{nt}, \qquad (2.2)

t ≥ 0. A famous result in probability theory, Donsker's theorem, tells us that the sequence of processes {B^{(n)}_t, t ≥ 0}, n ≥ 1, converges in law to the Brownian motion. The reference sample space is the set of continuous functions vanishing at zero. Hence, proving the statement, we obtain continuity of the sample paths of the limit.

Donsker's theorem is the infinite-dimensional version of the above-mentioned central limit theorem. Considering s = k/n, t = (k+1)/n, the increment

B^{(n)}_t - B^{(n)}_s = \frac{1}{\sigma \sqrt{n}}\, \xi_{k+1}

is a random variable with mean zero and variance t − s. Hence B^{(n)} is not that far from the Brownian motion, and this is what Donsker's theorem proves.
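The scaled walk (2.2) is straightforward to simulate. The sketch below uses Rademacher steps ξ_j = ±1 (a hypothetical choice with σ = 1) and checks that B^{(n)}_1 is approximately N(0, 1), as Donsker's theorem predicts:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_paths = 250, 20_000

# i.i.d. steps with mean 0 and variance sigma^2 = 1 (Rademacher).
xi = rng.choice([-1.0, 1.0], size=(n_paths, n))

# B^{(n)}_{k/n} = S_k / (sigma * sqrt(n)) along each path, with sigma = 1.
B = np.cumsum(xi, axis=1) / np.sqrt(n)
end = B[:, -1]   # samples of B^{(n)}_1, approximately N(0, 1)
```

The full paths in `B` can be plotted to see the characteristic Brownian roughness emerge as n grows.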
P. Lévy's construction of Brownian motion

An important ingredient in the procedure is a sequence of functions defined on [0, 1], termed Haar functions, defined as follows:

h_0(t) = 1,

h_n^k(t) = 2^{n/2}\, \mathbf{1}_{[\frac{2k}{2^{n+1}}, \frac{2k+1}{2^{n+1}})}(t) - 2^{n/2}\, \mathbf{1}_{[\frac{2k+1}{2^{n+1}}, \frac{2k+2}{2^{n+1}})}(t),

where n ≥ 0 and k ∈ {0, 1, ..., 2^n − 1}.

The set of functions (h_0, h_n^k) is a complete orthonormal system (CONS) of L²([0, 1], B([0, 1]), λ), where λ stands for the Lebesgue measure. Consequently, for any f ∈ L²([0, 1], B([0, 1]), λ), we can write the expansion

f = \langle f, h_0 \rangle h_0 + \sum_{n=0}^{\infty} \sum_{k=0}^{2^n - 1} \langle f, h_n^k \rangle h_n^k, \qquad (2.3)
where the notation ⟨·, ·⟩ means the inner product in L²([0, 1], B([0, 1]), λ).

Using (2.3), we define an isometry between L²([0, 1], B([0, 1]), λ) and L²(Ω, F, P) as follows. Consider a family (N_0, N_n^k) of independent random variables with law N(0, 1). Then, for f ∈ L²([0, 1], B([0, 1]), λ), set

I(f) = \langle f, h_0 \rangle N_0 + \sum_{n=0}^{\infty} \sum_{k=0}^{2^n - 1} \langle f, h_n^k \rangle N_n^k.

Clearly,

E\big( I(f)^2 \big) = \| f \|_2^2.

Hence I defines an isometry between L²([0, 1], B([0, 1]), λ) and the space of random variables L²(Ω, F, P). Moreover, since

I(f) = \lim_{m \to \infty} \Big( \langle f, h_0 \rangle N_0 + \sum_{n=0}^{m} \sum_{k=0}^{2^n - 1} \langle f, h_n^k \rangle N_n^k \Big),

the random variable I(f) is N(0, ‖f‖₂²), and by Parseval's identity

E\big( I(f)\, I(g) \big) = \langle f, g \rangle, \qquad (2.4)

for any f, g ∈ L²([0, 1], B([0, 1]), λ).

Theorem 2.1 The process B = {B_t = I(\mathbf{1}_{[0,t]}), t ∈ [0, 1]} defines a Brownian motion indexed by [0, 1]. Moreover, the sample paths are continuous, almost surely.
Proof: By construction, B_0 = 0. Notice that for 0 ≤ s ≤ t ≤ 1, B_t − B_s = I(\mathbf{1}_{(s,t]}). Hence, by virtue of (2.4), the random variable B_t − B_s is independent of B_r for any 0 < r < s, and B_t − B_s has a N(0, t − s) law. From this, by a change of variables, one obtains that the finite-dimensional distributions of {B_t, t ∈ [0, 1]} are Gaussian. By Proposition 2.1, we obtain the first statement.
Our next aim is to prove that the series appearing in

B_t = I\big( \mathbf{1}_{[0,t]} \big) = \langle \mathbf{1}_{[0,t]}, h_0 \rangle N_0 + \sum_{n=0}^{\infty} \sum_{k=0}^{2^n - 1} \langle \mathbf{1}_{[0,t]}, h_n^k \rangle N_n^k
 = g_0(t) N_0 + \sum_{n=0}^{\infty} \sum_{k=0}^{2^n - 1} g_n^k(t) N_n^k \qquad (2.5)

converges uniformly, a.s. In the last term we have introduced the Schauder functions, defined as follows:

g_0(t) = \langle \mathbf{1}_{[0,t]}, h_0 \rangle = t, \qquad g_n^k(t) = \langle \mathbf{1}_{[0,t]}, h_n^k \rangle = \int_0^t h_n^k(s)\, ds,

for any t ∈ [0, 1]. By construction, for any fixed n, the functions g_n^k(t), k = 0, ..., 2^n − 1, are nonnegative, have disjoint supports, and satisfy

g_n^k(t) \le 2^{-n/2}.
Thus,

\sup_{t \in [0,1]} \Big| \sum_{k=0}^{2^n - 1} g_n^k(t) N_n^k \Big| \le 2^{-n/2} \sup_{0 \le k \le 2^n - 1} |N_n^k|.

The next step consists in proving that |N_n^k| is bounded by some constant depending on n such that, when multiplied by 2^{−n/2}, the series with these terms converges. For this, we will use a result on large deviations for Gaussian measures along with the first Borel-Cantelli lemma.

Lemma 2.1 For any random variable X with law N(0, 1) and for any a ≥ 1,

P(|X| \ge a) \le e^{-a^2/2}.
Proof: We clearly have

P(|X| \ge a) = \frac{2}{\sqrt{2\pi}} \int_a^{\infty} e^{-x^2/2}\, dx \le \frac{2}{\sqrt{2\pi}} \int_a^{\infty} \frac{x}{a}\, e^{-x^2/2}\, dx = \frac{2}{\sqrt{2\pi}\, a}\, e^{-a^2/2} \le e^{-a^2/2},

where we have used that 1 ≤ x/a on the domain of integration and that \frac{2}{\sqrt{2\pi}\, a} \le 1 for a ≥ 1.
We now move to the Borel-Cantelli based argument. By the preceding lemma, applied with a = 2^{n/4},

P\Big( \sup_{0 \le k \le 2^n - 1} |N_n^k| > 2^{n/4} \Big) \le \sum_{k=0}^{2^n - 1} P\big( |N_n^k| > 2^{n/4} \big) \le 2^n \exp\big( -2^{n/2 - 1} \big).

It follows that

\sum_{n=0}^{\infty} P\Big( \sup_{0 \le k \le 2^n - 1} |N_n^k| > 2^{n/4} \Big) < +\infty,

and by the first Borel-Cantelli lemma,

P\Big( \liminf_{n \to \infty} \Big\{ \sup_{0 \le k \le 2^n - 1} |N_n^k| \le 2^{n/4} \Big\} \Big) = 1.
That is, a.s. there exists n_0, which may depend on ω, such that

\sup_{0 \le k \le 2^n - 1} |N_n^k| \le 2^{n/4}

for any n ≥ n_0. Hence, we have proved that

\sup_{t \in [0,1]} \Big| \sum_{k=0}^{2^n - 1} g_n^k(t) N_n^k \Big| \le 2^{-n/2} \sup_{0 \le k \le 2^n - 1} |N_n^k| \le 2^{-n/4},

a.s., for n big enough, which proves the a.s. uniform convergence of the series (2.5).
Next we discuss how, from Theorem 2.1, we can get a Brownian motion indexed by ℝ₊. To this end, let us consider a sequence B^k, k ≥ 1, consisting of independent Brownian motions indexed by [0, 1]. That means, for each k ≥ 1, B^k = {B^k_t, t ∈ [0, 1]} is a Brownian motion, and for different values of k they are independent. Then we define a Brownian motion recursively as follows. For t ∈ [0, 1], set B_t = B^1_t; for k ≥ 1 and t ∈ [k, k + 1], set

B_t = B^1_1 + B^2_1 + \cdots + B^k_1 + B^{k+1}_{t-k}.

Such a process is Gaussian, with zero mean and E(B_t B_s) = s ∧ t. Hence it is a Brownian motion.

We end this section by giving the notion of d-dimensional Brownian motion, for a natural number d ≥ 1. For d = 1 it is the process we have seen so far. For d > 1, it is the process defined by

B_t = (B^1_t, B^2_t, \ldots, B^d_t), \quad t \ge 0,

where the components are independent one-dimensional Brownian motions.
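The truncated series (2.5) can be simulated directly. The sketch below (all names are ours) implements the Schauder tent functions and sums the series up to a finite level:

```python
import numpy as np

def schauder(t, n, k):
    # g^k_n(t) = int_0^t h^k_n(s) ds: a tent supported on
    # [2k / 2^(n+1), (2k+2) / 2^(n+1)] with peak height 2^(-n/2 - 1).
    left = 2 * k / 2.0 ** (n + 1)
    mid = (2 * k + 1) / 2.0 ** (n + 1)
    right = (2 * k + 2) / 2.0 ** (n + 1)
    up = 2.0 ** (n / 2.0) * (t - left)
    down = 2.0 ** (n / 2.0) * (right - t)
    return np.where((t >= left) & (t < mid), up,
                    np.where((t >= mid) & (t < right), down, 0.0))

def levy_path(t, levels, rng):
    # Truncation of (2.5): B_t ~ g_0(t) N_0 + sum_{n<=levels} sum_k g^k_n(t) N^k_n.
    B = t * rng.standard_normal()           # g_0(t) N_0, with g_0(t) = t
    for n in range(levels + 1):
        N = rng.standard_normal(2 ** n)
        for k in range(2 ** n):
            B = B + schauder(t, n, k) * N[k]
    return B

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 513)
B = levy_path(t, levels=8, rng=rng)   # one approximate sample path on [0, 1]
```

Because the bound on the tails of the series is 2^{−n/4}, a moderate truncation level already gives a visually convincing Brownian path.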
2.3 Path properties of Brownian motion
We already know that the trajectories of Brownian motion are a.s. continuous functions. However, since the process is a model for particles wandering erratically, one expects rough behaviour. This section is devoted to proving some results that make these facts more precise. Firstly, it is possible to prove that the sample paths of Brownian motion are γ-Hölder continuous. The main tool for this is Kolmogorov's continuity criterion:

Proposition 2.2 Let {X_t, t ≥ 0} be a stochastic process satisfying the following property: for some positive real numbers α, β and C,

E\big( |X_t - X_s|^{\alpha} \big) \le C |t - s|^{1+\beta}.

Then, almost surely, the sample paths of the process are γ-Hölder continuous for any γ < β/α.
The law of the random variable B_t − B_s is N(0, t − s). Thus, it is possible to compute the moments, and we have

E\big( (B_t - B_s)^{2k} \big) = \frac{(2k)!}{2^k k!}\, (t - s)^k,

for any k ∈ ℕ. Therefore, Proposition 2.2 (applied with α = 2k, β = k − 1, for arbitrarily large k) yields that, almost surely, the sample paths of the Brownian motion are γ-Hölder continuous for any γ ∈ (0, 1/2).
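The coefficient (2k)!/(2^k k!) equals the double factorial (2k − 1)!! = 1 · 3 · ... · (2k − 1), the number of pairings of 2k elements; a quick arithmetic check:

```python
from math import factorial

def even_moment_coeff(k):
    # c_k in E[(B_t - B_s)^{2k}] = c_k (t - s)^k for a N(0, t - s) variable.
    return factorial(2 * k) // (2 ** k * factorial(k))

def odd_double_factorial(k):
    # (2k - 1)!! = 1 * 3 * 5 * ... * (2k - 1)
    out = 1
    for j in range(1, 2 * k, 2):
        out *= j
    return out

coeffs = [even_moment_coeff(k) for k in range(1, 8)]
```

For k = 1, 2, 3 this gives 1, 3, 15, recovering the familiar facts E(Z²) = σ², E(Z⁴) = 3σ⁴ for a centered Gaussian.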
Nowhere differentiability
We shall prove that the exponent γ = 1/2 above is sharp. As a consequence, we will obtain a celebrated result by Dvoretzky, Erdős and Kakutani telling us that a.s. the sample paths of Brownian motion are nowhere differentiable. We gather these results in the next theorem.

Theorem 2.2 Fix any γ ∈ (1/2, 1]; then a.s. the sample paths of {B_t, t ≥ 0} are nowhere Hölder continuous with exponent γ. As a consequence, a.s. the sample paths are nowhere differentiable and they are of infinite variation on each finite interval.
Proof: Let γ ∈ (1/2, 1] and assume that a sample path t ↦ B_t(ω) is γ-Hölder continuous at some s ∈ [0, 1). Then

|B_t(\omega) - B_s(\omega)| \le C |t - s|^{\gamma},

for any t ∈ [0, 1] and some constant C > 0. Let n be big enough and let i = [ns] + 1; by the triangle inequality,

\Big| B_{\frac{j}{n}}(\omega) - B_{\frac{j+1}{n}}(\omega) \Big| \le \Big| B_s(\omega) - B_{\frac{j}{n}}(\omega) \Big| + \Big| B_s(\omega) - B_{\frac{j+1}{n}}(\omega) \Big| \le C \Big( \Big| s - \frac{j}{n} \Big|^{\gamma} + \Big| s - \frac{j+1}{n} \Big|^{\gamma} \Big).

Hence, restricting to j = i, i + 1, ..., i + N − 1, we obtain

\Big| B_{\frac{j}{n}}(\omega) - B_{\frac{j+1}{n}}(\omega) \Big| \le \frac{M}{n^{\gamma}}, \quad \text{with } M := 2 C (N+1)^{\gamma}.

Define

A^i_{M,n} = \Big\{ \Big| B_{\frac{j}{n}} - B_{\frac{j+1}{n}} \Big| \le \frac{M}{n^{\gamma}},\ j = i, i + 1, \ldots, i + N - 1 \Big\}.

We have seen that the set of trajectories such that t ↦ B_t(ω) is γ-Hölder continuous at some s ∈ [0, 1) is included in

\bigcup_{M=1}^{\infty} \bigcup_{k=1}^{\infty} \bigcap_{n=k}^{\infty} \bigcup_{i=1}^{n} A^i_{M,n}.

Next we prove that this set has null probability. Indeed,

P\Big( \bigcap_{n=k}^{\infty} \bigcup_{i=1}^{n} A^i_{M,n} \Big) \le \liminf_{n} P\Big( \bigcup_{i=1}^{n} A^i_{M,n} \Big) \le \liminf_{n} \sum_{i=1}^{n} P\big( A^i_{M,n} \big) \le \liminf_{n}\ n \Big[ P\Big( \big| B_{\frac{1}{n}} \big| \le \frac{M}{n^{\gamma}} \Big) \Big]^N,
where we have used that the random variables B_{j/n} − B_{(j+1)/n} are N(0, 1/n) and independent. But

P\Big( \big| B_{\frac{1}{n}} \big| \le \frac{M}{n^{\gamma}} \Big) = \sqrt{\frac{n}{2\pi}} \int_{-M/n^{\gamma}}^{M/n^{\gamma}} e^{-n x^2/2}\, dx = \frac{1}{\sqrt{2\pi}} \int_{-M n^{\frac{1}{2}-\gamma}}^{M n^{\frac{1}{2}-\gamma}} e^{-x^2/2}\, dx \le C n^{\frac{1}{2}-\gamma}.

Hence, by taking N such that N(γ − 1/2) > 1,

P\Big( \bigcap_{n=k}^{\infty} \bigcup_{i=1}^{n} A^i_{M,n} \Big) \le \liminf_{n}\ n \big( C n^{\frac{1}{2}-\gamma} \big)^N = 0.

Since this holds for any k and M, we get

P\Big( \bigcup_{M=1}^{\infty} \bigcup_{k=1}^{\infty} \bigcap_{n=k}^{\infty} \bigcup_{i=1}^{n} A^i_{M,n} \Big) = 0.

This ends the proof of the first part of the theorem.

If a sample path t ↦ B_t(ω) were differentiable at some s, then it would be Lipschitz continuous at s (i.e. Hölder continuous with exponent γ = 1), and we have just proved that this may only happen with probability zero.

It is known that if a real function is of finite variation on some finite interval, then it is differentiable almost everywhere on that interval. This yields the last assertion of the theorem and ends its proof.
Quadratic variation
The notion of quadratic variation provides a measure of the roughness of a function. The existence of variations of different orders is also important in procedures of approximation via Taylor expansions and in the development of infinitesimal calculus. We will study here the existence of the quadratic variation, i.e. the variation of order two, of the Brownian motion. As will be explained in more detail in the next chapter, this provides an explanation of the fact that the rules of Itô's stochastic calculus are different from those of classical deterministic differential calculus.

Fix a finite interval [0, T] and consider a sequence of partitions given by the points Π_n = (t^n_0 = 0 ≤ t^n_1 ≤ ... ≤ t^n_{r_n} = T). We assume that

\lim_{n \to \infty} |\Pi_n| = 0,

where |Π_n| denotes the norm of the partition Π_n:

|\Pi_n| = \sup_{j=0,\ldots,r_n - 1} (t^n_{j+1} - t^n_j).

Set Δ_k B = B_{t^n_k} − B_{t^n_{k−1}}.
Proposition 2.3 The sequence { Σ_{k=1}^{r_n} (Δ_k B)², n ≥ 1 } converges in L²(Ω) to the deterministic limit T. That is,

\lim_{n \to \infty} E\Big( \Big( \sum_{k=1}^{r_n} (\Delta_k B)^2 - T \Big)^2 \Big) = 0.
Proof: For the sake of simplicity, we shall omit the dependence on n. Set Δ_k t = t_k − t_{k−1}. Notice that the random variables (Δ_k B)² − Δ_k t, k = 1, ..., r_n, are independent and centered. Thus,

E\Big( \Big( \sum_{k=1}^{r_n} (\Delta_k B)^2 - T \Big)^2 \Big) = E\Big( \Big( \sum_{k=1}^{r_n} \big( (\Delta_k B)^2 - \Delta_k t \big) \Big)^2 \Big)
 = \sum_{k=1}^{r_n} E\Big( \big( (\Delta_k B)^2 - \Delta_k t \big)^2 \Big)
 = \sum_{k=1}^{r_n} \big( 3 (\Delta_k t)^2 - 2 (\Delta_k t)^2 + (\Delta_k t)^2 \big)
 = 2 \sum_{k=1}^{r_n} (\Delta_k t)^2 \le 2 T |\Pi_n|,

which clearly tends to zero as n tends to infinity.
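Proposition 2.3 and the error bound 2Σ_k(Δ_k t)² from its proof can be checked by simulation on the uniform partition, where the bound equals 2T²/r_n (the parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
T, r_n, n_paths = 2.0, 20_000, 200
dt = T / r_n                   # uniform partition: Delta_k t = T / r_n

# Independent increments Delta_k B ~ N(0, dt) for each simulated path.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, r_n))
QV = (dB ** 2).sum(axis=1)     # sum_k (Delta_k B)^2, one value per path

mse = ((QV - T) ** 2).mean()   # estimates E[(sum_k (Delta_k B)^2 - T)^2]
# Theory: E[...] = 2 * sum_k (Delta_k t)^2 = 2 * T * dt here.
```

The empirical mean of QV sits near T and the empirical mean-square error matches 2T·dt, confirming the rate in the proof.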
This proposition, together with the continuity of the sample paths of Brownian motion, yields

\sup_{n} \sum_{k=1}^{r_n} |\Delta_k B| = \infty, \quad \text{a.s.},

something that we already know from Theorem 2.2. Indeed, assume that V := sup_n Σ_{k=1}^{r_n} |Δ_k B| < ∞. Then

\sum_{k=1}^{r_n} (\Delta_k B)^2 \le \sup_{k} |\Delta_k B| \sum_{k=1}^{r_n} |\Delta_k B| \le V \sup_{k} |\Delta_k B|.

Since the sample paths are uniformly continuous on [0, T], sup_k |Δ_k B| → 0 as |Π_n| → 0, and we obtain lim_n Σ_{k=1}^{r_n} (Δ_k B)² = 0 a.s., which contradicts the result proved in Proposition 2.3.
2.4 The martingale property of Brownian motion
We start this section by giving the definition of a martingale for continuous-time stochastic processes. First, we introduce the appropriate notion of filtration, as follows. A family {F_t, t ≥ 0} of sub-σ-fields of F is termed a filtration if
1. F_0 contains all the sets of F of null probability,

2. for any 0 ≤ s ≤ t, F_s ⊂ F_t.

If, in addition,

\bigcap_{s > t} \mathcal{F}_s = \mathcal{F}_t

for any t ≥ 0, the filtration is said to be right-continuous.

Definition 2.2 A stochastic process {X_t, t ≥ 0} is a martingale with respect to the filtration {F_t, t ≥ 0} if each variable belongs to L¹(Ω) and moreover

1. X_t is F_t-measurable for any t ≥ 0,

2. for any 0 ≤ s ≤ t, E(X_t | F_s) = X_s.

If the equality in (2) is replaced by ≤ (respectively, ≥), we have a supermartingale (respectively, a submartingale).

Given a stochastic process {X_t, t ≥ 0}, there is a natural way to define a filtration by considering

\mathcal{F}_t = \sigma(X_s, 0 \le s \le t), \quad t \ge 0.

To ensure that the above property (1) for a filtration holds, one needs to complete the σ-field. In general, there is no reason to expect right-continuity. However, for the Brownian motion, the natural filtration possesses this property.

A stochastic process with mean zero and independent increments possesses the martingale property with respect to the natural filtration. Indeed, for 0 ≤ s ≤ t,

E(X_t - X_s \mid \mathcal{F}_s) = E(X_t - X_s) = 0.

Hence, a Brownian motion possesses the martingale property with respect to the natural filtration.
Other examples of martingales with respect to the same filtration, related to the Brownian motion, are

1. {B_t² − t, t ≥ 0},

2. {exp(a B_t − a²t/2), t ≥ 0}.

Indeed, for the first example, let us consider 0 ≤ s ≤ t. Then

E(B_t^2 \mid \mathcal{F}_s) = E\big( (B_t - B_s + B_s)^2 \mid \mathcal{F}_s \big) = E\big( (B_t - B_s)^2 \mid \mathcal{F}_s \big) + 2 E\big( (B_t - B_s) B_s \mid \mathcal{F}_s \big) + E(B_s^2 \mid \mathcal{F}_s).
Since B_t − B_s is independent of F_s, owing to the properties of the conditional expectation we have

E\big( (B_t - B_s)^2 \mid \mathcal{F}_s \big) = E\big( (B_t - B_s)^2 \big) = t - s,

E\big( (B_t - B_s) B_s \mid \mathcal{F}_s \big) = B_s\, E(B_t - B_s \mid \mathcal{F}_s) = 0,

E(B_s^2 \mid \mathcal{F}_s) = B_s^2.

Consequently,

E(B_t^2 - B_s^2 \mid \mathcal{F}_s) = t - s,

that is, E(B_t² − t | F_s) = B_s² − s. For the second example, we also use the property of independent increments, as follows:
E\Big( \exp\Big( a B_t - \frac{a^2 t}{2} \Big) \,\Big|\, \mathcal{F}_s \Big) = \exp(a B_s) \exp\Big( -\frac{a^2 t}{2} \Big)\, E\big( \exp\big( a (B_t - B_s) \big) \,\big|\, \mathcal{F}_s \big)
 = \exp(a B_s) \exp\Big( -\frac{a^2 t}{2} \Big)\, E\big( \exp\big( a (B_t - B_s) \big) \big).

Using the density of the random variable B_t − B_s, one can easily check that

E\big( \exp\big( a (B_t - B_s) \big) \big) = \exp\Big( \frac{a^2 (t - s)}{2} \Big).

Therefore, we obtain

E\Big( \exp\Big( a B_t - \frac{a^2 t}{2} \Big) \,\Big|\, \mathcal{F}_s \Big) = \exp\Big( a B_s - \frac{a^2 s}{2} \Big).
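That the exponential martingale has constant expectation E(exp(aB_t − a²t/2)) = 1 can be checked by Monte Carlo (the values of a and t below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
a, t, n = 0.8, 1.5, 1_000_000

Bt = rng.normal(0.0, np.sqrt(t), size=n)   # B_t ~ N(0, t)
M = np.exp(a * Bt - a ** 2 * t / 2.0)      # the exponential martingale at time t

# A martingale started at M_0 = 1 satisfies E(M_t) = 1 for all t.
mean_M = M.mean()
```

The same experiment repeated for several values of t shows a mean that stays flat at 1, which is the martingale property in expectation.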
2.5 Markov property
For any 0 ≤ s < t, x ∈ ℝ and A ∈ B(ℝ), we set

p(s, t, x, A) = \frac{1}{(2\pi (t - s))^{1/2}} \int_A \exp\Big( -\frac{|x - y|^2}{2 (t - s)} \Big)\, dy. \qquad (2.6)

Actually, p(s, t, x, A) is the probability that a Normal random variable with mean x and variance t − s takes values in the fixed set A. Let us prove the following identity:

P\{ B_t \in A \mid \mathcal{F}_s \} = p(s, t, B_s, A), \qquad (2.7)

which means that, conditionally on the past of the Brownian motion until time s, the law of B_t at a future time t only depends on B_s.
Let f: ℝ → ℝ be a bounded measurable function. Then, since B_s is F_s-measurable and B_t − B_s is independent of F_s, we obtain

E(f(B_t) \mid \mathcal{F}_s) = E\big( f(B_s + (B_t - B_s)) \mid \mathcal{F}_s \big) = E\big( f(x + B_t - B_s) \big)\big|_{x = B_s}.

The random variable x + B_t − B_s is N(x, t − s). Thus,

E\big( f(x + B_t - B_s) \big) = \int_{\mathbb{R}} f(y)\, p(s, t, x, dy),

and consequently,

E(f(B_t) \mid \mathcal{F}_s) = \int_{\mathbb{R}} f(y)\, p(s, t, B_s, dy).

This yields (2.7) by taking f = \mathbf{1}_A. Going back to (2.6), we notice that the function x ↦ p(s, t, x, A) is measurable, and the mapping A ↦ p(s, t, x, A) is a probability. Let us prove an additional property, called the Chapman-Kolmogorov equation: for any 0 ≤ s ≤ u ≤ t,
p(s, t, x, A) = \int_{\mathbb{R}} p(u, t, y, A)\, p(s, u, x, dy). \qquad (2.8)

We recall that the sum of two independent Normal random variables is again Normal, with mean the sum of the respective means and variance the sum of the respective variances. This is expressed in mathematical terms by the fact that

f_{N(x, \sigma_1)} * f_{N(y, \sigma_2)} = f_{N(x+y, \sigma_1 + \sigma_2)},

where f_{N(m, σ)} denotes the density of the N(m, σ) law (σ being the variance) and * the convolution. Using this fact, we obtain

\int_{\mathbb{R}} p(u, t, y, A)\, p(s, u, x, dy) = \int_A dz\, \big( f_{N(x, u-s)} * f_{N(0, t-u)} \big)(z) = \int_A dz\, f_{N(x, t-s)}(z) = p(s, t, x, A),

proving (2.8). This equation is the continuous-time analogue of the property satisfied by the transition probability matrices of a Markov chain, that is,

P^{(m+n)} = P^{(m)} P^{(n)},
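The convolution identity used to prove (2.8) can be verified numerically; the points s, u, t, x, z and the integration grid below are arbitrary:

```python
import numpy as np

def normal_density(z, mean, var):
    return np.exp(-(z - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

s, u, t = 0.0, 0.4, 1.0
x, z = 0.3, 0.7
y = np.linspace(-12.0, 12.0, 4001)
dy = y[1] - y[0]

# Density form of the right-hand side of (2.8):
# integral of f_{N(x, u-s)}(y) * f_{N(y, t-u)}(z) dy, as a Riemann sum.
rhs = (normal_density(y, x, u - s) * normal_density(z, y, t - u)).sum() * dy

# Density form of the left-hand side: f_{N(x, t-s)}(z).
lhs = normal_density(z, x, t - s)
```

Both sides agree to high precision, illustrating that concatenating an evolution over [s, u] with one over [u, t] reproduces the transition density over [s, t].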
meaning that evolutions in m + n steps are done by concatenating m-step and n-step evolutions. In (2.8), m + n is replaced by the real time t − s, m by t − u, and n by u − s, respectively.

We are now prepared to give the definition of a Markov process. Consider a mapping

p: \mathbb{R}_+ \times \mathbb{R}_+ \times \mathbb{R} \times \mathcal{B}(\mathbb{R}) \to \mathbb{R}_+

satisfying the following properties:

(i) for any fixed s, t ∈ ℝ₊ and A ∈ B(ℝ), the function x ↦ p(s, t, x, A) is B(ℝ)-measurable;

(ii) for any fixed s, t ∈ ℝ₊ and x ∈ ℝ, the mapping A ↦ p(s, t, x, A) is a probability;

(iii) equation (2.8) holds.
Such a function p is termed a Markovian transition function. Let us also fix a probability μ on B(ℝ).

Definition 2.3 A real-valued stochastic process {X_t, t ∈ ℝ₊} is a Markov process with initial law μ and transition probability function p if

(a) the law of X_0 is μ,

(b) for any 0 ≤ s ≤ t,

P\{ X_t \in A \mid \mathcal{F}_s \} = p(s, t, X_s, A).

Therefore, we have proved that the Brownian motion is a Markov process with initial law the Dirac delta measure at 0 and transition probability function p the one defined in (2.6).
Strong Markov property
Throughout this section, (F_t, t ≥ 0) will denote the natural filtration associated with a Brownian motion {B_t, t ≥ 0}, and stopping times will always refer to this filtration.
Theorem 2.3 Let $T$ be a stopping time. Then, conditionally to $\{T < \infty\}$, the process $B^T = \{B^T_t := B_{T+t} - B_T,\ t \ge 0\}$ is a Brownian motion independent of $\mathcal{F}_T$.

As an application of the strong Markov property, we prove the reflection principle.

Proposition (Reflection principle) For any $t > 0$, set $S_t = \sup_{s \le t} B_s$. Then, for any $a \ge 0$ and $b \le a$,
$$P\{S_t \ge a,\ B_t \le b\} = P\{B_t \ge 2a - b\}. \qquad (2.10)$$
As a consequence, the probability laws of $S_t$ and $|B_t|$ are the same.

Proof: Consider the stopping time
$$T_a = \inf\{t \ge 0 : B_t = a\},$$
which is finite a.s. We have
$$P\{S_t \ge a,\ B_t \le b\} = P\{T_a \le t,\ B_t \le b\} = P\{T_a \le t,\ B^{T_a}_{t - T_a} \le b - a\}.$$
Indeed, $B^{T_a}_{t - T_a} = B_t - B_{T_a} = B_t - a$, and $B$ and $B^{T_a}$ have the same law. Moreover, we know that the process $B^{T_a}$ is independent of $\mathcal{F}_{T_a}$. This last property, along with the fact that $B^{T_a}$ and $-B^{T_a}$ have the same law, yields that $(T_a, B^{T_a})$ has the same distribution as $(T_a, -B^{T_a})$.

Define $H = \{(s, w) \in \mathbb{R}_+ \times C(\mathbb{R}_+;\mathbb{R}) : s \le t,\ w(t-s) \le b - a\}$. Then
$$P\{T_a \le t,\ B^{T_a}_{t - T_a} \le b - a\} = P\{(T_a, B^{T_a}) \in H\} = P\{(T_a, -B^{T_a}) \in H\}$$
$$= P\{T_a \le t,\ B^{T_a}_{t - T_a} \ge a - b\} = P\{T_a \le t,\ 2a - b \le B_t\} = P\{2a - b \le B_t\}.$$
Indeed, by definition of the process $B^{T_a}$, the condition $B^{T_a}_{t - T_a} \ge a - b$ is equivalent to $2a - b \le B_t$; moreover, the inclusion $\{2a - b \le B_t\} \subset \{T_a \le t\}$ holds true. In fact, if $T_a > t$, then $B_t < a$; since $b \le a$, this yields $B_t < 2a - b$. This ends the proof of (2.10).

For the second statement, we notice that $\{B_t \ge a\} \subset \{S_t \ge a\}$. This fact along with (2.10) (applied with $b = a$) yields the validity of the identities
$$P\{S_t \ge a\} = P\{S_t \ge a,\ B_t \le a\} + P\{S_t \ge a,\ B_t \ge a\} = 2P\{B_t \ge a\} = P\{|B_t| \ge a\}.$$
The proof is now complete.
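The consequence $P\{S_t \ge a\} = P\{|B_t| \ge a\}$ lends itself to a Monte Carlo illustration. The sketch below is mine (Python); the discrete-time maximum slightly underestimates the true supremum, so only approximate agreement is expected:

```python
import math
import random

random.seed(7)

def max_and_endpoint(t=1.0, n_steps=300):
    # one discretized Brownian path; return (running maximum, terminal value)
    dt = t / n_steps
    b = m = 0.0
    for _ in range(n_steps):
        b += random.gauss(0.0, math.sqrt(dt))
        m = max(m, b)
    return m, b

n_paths, a = 10000, 0.8
count_max = count_abs = 0
for _ in range(n_paths):
    m, b = max_and_endpoint()
    count_max += m >= a
    count_abs += abs(b) >= a

p_max, p_abs = count_max / n_paths, count_abs / n_paths
exact = 2 * (1 - 0.5 * (1 + math.erf(a / math.sqrt(2))))  # P{|B_1| >= a}
print(p_max, p_abs, exact)  # the three values are close
```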
3 Itô's calculus
Itô's calculus was developed in the 1950s by Kiyosi Itô in an attempt to give a rigorous meaning to some differential equations, driven by Brownian motion, appearing in the study of problems related to continuous time Markov processes. Roughly speaking, one could say that Itô's calculus is an analogue of the classical Newton and Leibniz calculus for stochastic processes. In fact, in classical mathematical analysis there are several extensions of the Riemann integral $\int f(x)\,dx$. For example, if $g$ is an increasing bounded function (or the difference of two functions of this class), the Lebesgue-Stieltjes integral gives a precise meaning to the integral $\int f(x)\,g(dx)$, for some set of functions $f$. However, before Itô's development, no theory allowing nowhere differentiable integrators $g$ was known. Brownian motion, introduced in the preceding chapter, is an example of a stochastic process whose sample paths, although continuous, are nowhere differentiable. Therefore, the Lebesgue-Stieltjes integral does not apply to the sample paths of Brownian motion.

There are many motivations, coming from a variety of disciplines, to consider stochastic differential equations driven by a Brownian motion. Such an object is defined as
$$dX_t = \sigma(t, X_t)\,dB_t + b(t, X_t)\,dt, \quad X_0 = x_0,$$
or in integral form,
$$X_t = x_0 + \int_0^t \sigma(s, X_s)\,dB_s + \int_0^t b(s, X_s)\,ds. \qquad (3.1)$$

The first notion to be introduced is that of the stochastic integral. In fact, in (3.1) the integral $\int_0^t b(s, X_s)\,ds$ might be defined pathwise, but this is not the case for $\int_0^t \sigma(s, X_s)\,dB_s$, because of the roughness of the paths of the integrator. More explicitly, it is not possible to fix $\omega$, then consider the path $s \mapsto \sigma(s, X_s(\omega))$, and finally integrate with respect to $B_s(\omega)$.
3.1 Itô's integral

Throughout this section, we will consider a Brownian motion $B = \{B_t, t \ge 0\}$ defined on a probability space $(\Omega, \mathcal{F}, P)$. We will also consider a filtration $(\mathcal{F}_t, t \ge 0)$ satisfying the following properties:

1. $B$ is adapted to $(\mathcal{F}_t, t \ge 0)$,
2. for every $t \ge 0$, the $\sigma$-field generated by $\{B_u - B_t, u \ge t\}$ is independent of $\mathcal{F}_t$.
Notice that these two properties are satisfied if $(\mathcal{F}_t, t \ge 0)$ is the natural filtration associated to $B$.

We fix a finite time horizon $T$ and define $L^2_{a,T}$ as the set of stochastic processes $u = \{u_t, t \in [0,T]\}$ satisfying the following conditions:

(i) $u$ is adapted and jointly measurable in $(t, \omega)$ with respect to the product $\sigma$-field $\mathcal{B}([0,T]) \otimes \mathcal{F}$,

(ii) $\int_0^T E(u_t^2)\,dt < \infty$.

The notation $L^2_{a,T}$ evokes the two properties -adaptedness and square integrability- described before.

Consider first the subset of $L^2_{a,T}$ consisting of step processes, that is, stochastic processes which can be written as
$$u_t = \sum_{j=1}^n u_j \mathbf{1}_{[t_{j-1}, t_j[}(t), \qquad (3.2)$$
with $0 = t_0 \le t_1 \le \cdots \le t_n = T$ and where $u_j$, $j = 1, \ldots, n$, are $\mathcal{F}_{t_{j-1}}$-measurable, square integrable random variables. We shall denote by $\mathcal{E}$ the set of these processes.
For step processes, the Itô stochastic integral is defined by the very natural formula
$$\int_0^T u_t\,dB_t = \sum_{j=1}^n u_j \left(B_{t_j} - B_{t_{j-1}}\right), \qquad (3.3)$$
which we may compare with the Lebesgue integral of simple functions. Notice that $\int_0^T u_t\,dB_t$ is a random variable. Of course, we would like to be able to consider more general integrands than step processes. Therefore, we must try to extend the definition (3.3). For this, we use tools from functional analysis based upon a very natural idea: if we are able to prove that (3.3) gives a continuous functional between two metric spaces, then the stochastic integral defined for the very particular class of step stochastic processes can be extended to a more general class, given by the closure of this set with respect to a suitable norm.

The idea of continuity is made precise by the

Isometry property:
$$E\left(\int_0^T u_t\,dB_t\right)^2 = E\left(\int_0^T u_t^2\,dt\right). \qquad (3.4)$$
Let us prove (3.4) for step processes. Writing $\Delta_j B = B_{t_j} - B_{t_{j-1}}$, clearly
$$E\left(\int_0^T u_t\,dB_t\right)^2 = \sum_{j=1}^n E\left(u_j^2 (\Delta_j B)^2\right) + 2\sum_{i<j} E\left(u_i u_j\, \Delta_i B\, \Delta_j B\right).$$
In the first sum, $u_j^2$ is $\mathcal{F}_{t_{j-1}}$-measurable while $(\Delta_j B)^2$ is independent of $\mathcal{F}_{t_{j-1}}$; hence
$$E\left(u_j^2 (\Delta_j B)^2\right) = E(u_j^2)\, E(\Delta_j B)^2 = E(u_j^2)(t_j - t_{j-1}).$$
In the second sum, for $i < j$, the variable $u_i u_j \Delta_i B$ is $\mathcal{F}_{t_{j-1}}$-measurable, while $\Delta_j B$ is independent of $\mathcal{F}_{t_{j-1}}$ and centered; therefore each cross term vanishes. Consequently,
$$E\left(\int_0^T u_t\,dB_t\right)^2 = \sum_{j=1}^n E(u_j^2)(t_j - t_{j-1}) = E\left(\int_0^T u_t^2\,dt\right).$$
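The isometry (3.4) can be watched numerically for a concrete step process. In the sketch below (my own example, not from the notes) the step process is $u_t = B_{t_{j-1}}$ on $[t_{j-1}, t_j)$, whose values are $\mathcal{F}_{t_{j-1}}$-measurable as required:

```python
import math
import random

random.seed(0)

T, n = 1.0, 4  # uniform partition t_j = j T / n

def step_integral(increments):
    # Ito integral (3.3) of the step process u_t = B_{t_{j-1}} on [t_{j-1}, t_j)
    b = integral = u_sq = 0.0
    dt = T / n
    for db in increments:
        integral += b * db   # u_j = B_{t_{j-1}} is F_{t_{j-1}}-measurable
        u_sq += b * b * dt   # accumulates int_0^T u_t^2 dt
        b += db
    return integral, u_sq

n_paths = 50000
mean_sq = mean_u2 = 0.0
for _ in range(n_paths):
    incs = [random.gauss(0.0, math.sqrt(T / n)) for _ in range(n)]
    i_val, u2 = step_integral(incs)
    mean_sq += i_val ** 2
    mean_u2 += u2
mean_sq /= n_paths
mean_u2 /= n_paths
print(mean_sq, mean_u2)  # both sides of (3.4); the exact common value is 0.375
```

Here the exact value of both sides is $\sum_j t_{j-1}(t_j - t_{j-1}) = 0.375$.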
The next step consists of identifying a bigger set of random processes than $\mathcal{E}$ in which $\mathcal{E}$ is dense in the norm of $L^2(\Omega \times [0,T])$. This is actually the set denoted before by $L^2_{a,T}$. Indeed, we have the following result, which is a crucial fact in Itô's theory.

Proposition 3.1 For any $u \in L^2_{a,T}$ there exists a sequence $(u^n, n \ge 1) \subset \mathcal{E}$ such that
$$\lim_{n \to \infty} \int_0^T E(u_t^n - u_t)^2\,dt = 0.$$
Proof: Assume first that $u \in L^2_{a,T}$ is bounded and has continuous sample paths, a.s. An approximating sequence can be defined as follows:
$$u^n(t) = \sum_{k=0}^{[nT]} u\!\left(\tfrac{k}{n}\right) \mathbf{1}_{[\frac{k}{n}, \frac{k+1}{n}[}(t).$$
Clearly, $u^n \in L^2_{a,T}$ and, by continuity,
$$\int_0^T |u^n(t) - u(t)|^2\,dt = \sum_{k=0}^{[nT]} \int_{\frac{k}{n}}^{\frac{k+1}{n} \wedge T} \left|u\!\left(\tfrac{k}{n}\right) - u(t)\right|^2 dt \le T \sup_k \sup_{t \in [\frac{k}{n}, \frac{k+1}{n}]} |u^n(t) - u(t)|^2 \to 0,$$
as $n \to \infty$, a.s. Then, the approximation result follows by bounded convergence.

In a second step, we assume only that $u \in L^2_{a,T}$ is bounded. Let $\psi$ be a $C^\infty$ function with compact support contained in $[-1, 1]$. For any $n \ge 1$, set $\psi_n(x) = n\psi(nx)$ and
$$u^n(t) = \int_0^t \psi_n(t - s)\, u(s)\,ds.$$
Then $u^n$ is a stochastic process with continuous sample paths, a.s. Classical results on approximations of the identity yield
$$\int_0^T |u^n(s) - u(s)|^2\,ds \to 0, \quad \text{a.s.}$$
By bounded convergence,
$$\lim_{n \to \infty} E\int_0^T |u^n(s) - u(s)|^2\,ds = 0.$$
Finally, consider an arbitrary $u \in L^2_{a,T}$ and define
$$u^n(t) = \begin{cases} 0, & \text{if } u(t) < -n,\\ u(t), & \text{if } -n \le u(t) \le n,\\ 0, & \text{if } u(t) > n.\end{cases}$$
Clearly, $\sup_{\omega, t} |u^n(t)| \le n$ and $u^n \in L^2_{a,T}$. Moreover,
$$E\int_0^T |u^n(s) - u(s)|^2\,ds = E\int_0^T |u(s)|^2\, \mathbf{1}_{\{|u(s)| > n\}}\,ds \to 0,$$
where we have used that, for a function $f \in L^1(\Omega, \mathcal{F}, \mu)$,
$$\lim_{n \to \infty} \int |f|\, \mathbf{1}_{\{|f| > n\}}\,d\mu = 0. \qquad \square$$
By using the approximation result provided by the preceding proposition, we can give the following definition.

Definition 3.1 The Itô stochastic integral of a process $u \in L^2_{a,T}$ is
$$\int_0^T u_t\,dB_t := L^2(\Omega)\text{-}\lim_{n \to \infty} \int_0^T u_t^n\,dB_t. \qquad (3.5)$$
For this definition to make sense, one needs to make sure that if the process $u$ is approximated by two different sequences, say $u^{n,1}$ and $u^{n,2}$, the definitions of the stochastic integral using either $u^{n,1}$ or $u^{n,2}$ coincide. This is proved using the isometry property. Indeed,
$$E\left(\int_0^T u_t^{n,1}\,dB_t - \int_0^T u_t^{n,2}\,dB_t\right)^2 = \int_0^T E\left(u_t^{n,1} - u_t^{n,2}\right)^2 dt$$
$$\le 2\int_0^T E\left(u_t^{n,1} - u_t\right)^2 dt + 2\int_0^T E\left(u_t^{n,2} - u_t\right)^2 dt \to 0.$$
By its very definition, the stochastic integral defined in Definition 3.1 satisfies the isometry property as well. Moreover,
(a) stochastic integrals are centered random variables:
$$E\left(\int_0^T u_t\,dB_t\right) = 0,$$
(b) stochastic integration is a linear operator:
$$\int_0^T (a u_t + b v_t)\,dB_t = a\int_0^T u_t\,dB_t + b\int_0^T v_t\,dB_t.$$

Remember that these facts are true for processes in $\mathcal{E}$, as has been mentioned before. The extension to processes in $L^2_{a,T}$ is done by applying Proposition 3.1. For the sake of illustration, we prove (a). Consider an approximating sequence $u^n$ in the sense of Proposition 3.1. By the construction of the stochastic integral $\int_0^T u_t\,dB_t$, it holds that
$$\lim_{n \to \infty} E\left(\int_0^T u_t^n\,dB_t\right) = E\left(\int_0^T u_t\,dB_t\right).$$
Since $E\left(\int_0^T u_t^n\,dB_t\right) = 0$ for every $n \ge 1$, this concludes the proof.
We end this section with an interesting example.

Example 3.1 For the Brownian motion $B$, the following formula holds:
$$\int_0^T B_t\,dB_t = \frac{1}{2}\left(B_T^2 - T\right).$$
Let us remark that we would rather expect $\int_0^T B_t\,dB_t = \frac{1}{2}B_T^2$, by analogy with the rules of deterministic calculus.

To prove this identity, we consider a particular sequence of approximating step processes, as follows:
$$u_t^n = \sum_{j=1}^n B_{t_{j-1}}\, \mathbf{1}_{]t_{j-1}, t_j]}(t).$$
Clearly, $u^n \in L^2_{a,T}$ and we have
$$\int_0^T E(u_t^n - B_t)^2\,dt = \sum_{j=1}^n \int_{t_{j-1}}^{t_j} E\left(B_{t_{j-1}} - B_t\right)^2 dt \le \frac{T}{n}\sum_{j=1}^n \int_{t_{j-1}}^{t_j} dt = \frac{T^2}{n}.$$
Therefore, $(u^n, n \ge 1)$ is an approximating sequence of $B$ in the norm of $L^2(\Omega \times [0,T])$. According to Definition 3.1,
$$\int_0^T B_t\,dB_t = \lim_{n \to \infty} \sum_{j=1}^n B_{t_{j-1}}\left(B_{t_j} - B_{t_{j-1}}\right),$$
in the $L^2(\Omega)$ norm. Clearly,
$$\sum_{j=1}^n B_{t_{j-1}}\left(B_{t_j} - B_{t_{j-1}}\right) = \frac{1}{2}\sum_{j=1}^n \left(B_{t_j}^2 - B_{t_{j-1}}^2\right) - \frac{1}{2}\sum_{j=1}^n \left(B_{t_j} - B_{t_{j-1}}\right)^2$$
$$= \frac{1}{2}B_T^2 - \frac{1}{2}\sum_{j=1}^n \left(B_{t_j} - B_{t_{j-1}}\right)^2. \qquad (3.6)$$
We conclude by using Proposition 2.3.
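The convergence claimed in Example 3.1 can be observed numerically: the left-point sums approach $(B_T^2 - T)/2$ path by path. A sketch of mine (Python, uniform partition):

```python
import math
import random

random.seed(1)

def ito_sum_minus_formula(T=1.0, n=1000):
    # |left-point sum  sum_j B_{t_{j-1}} (B_{t_j} - B_{t_{j-1}})  -  (B_T^2 - T)/2|
    dt = T / n
    b = s = 0.0
    for _ in range(n):
        db = random.gauss(0.0, math.sqrt(dt))
        s += b * db
        b += db
    return abs(s - 0.5 * (b * b - T))

errs = [ito_sum_minus_formula() for _ in range(200)]
print(max(errs))  # small on every simulated path
```

The residual per path is $\frac{1}{2}\big|T - \sum_j (\Delta_j B)^2\big|$, whose variance is of order $1/n$ — exactly the quadratic-variation effect of Proposition 2.3.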
3.2 The Itô integral as a stochastic process

The indefinite Itô stochastic integral of a process $u \in L^2_{a,T}$ is defined as follows:
$$\int_0^t u_s\,dB_s := \int_0^T u_s\, \mathbf{1}_{[0,t]}(s)\,dB_s, \qquad (3.7)$$
$t \in [0,T]$. For this definition to make sense, we need that for any $t \in [0,T]$, the process $\{u_s \mathbf{1}_{[0,t]}(s), s \in [0,T]\}$ belongs to $L^2_{a,T}$. This is clearly true.

Obviously, the properties of the integral mentioned in the previous section — zero mean, isometry, linearity — also hold for the indefinite integral. The rest of the section is devoted to the study of important properties of the stochastic process given by an indefinite Itô integral.
Proposition 3.2 The process $\{I_t = \int_0^t u_s\,dB_s,\ t \in [0,T]\}$ is a martingale.

Proof: We first establish the martingale property for any approximating sequence
$$I_t^n = \int_0^t u_s^n\,dB_s, \quad t \in [0,T],$$
where $u^n$ converges to $u$ in $L^2(\Omega \times [0,T])$. This suffices to prove the proposition, since $L^2(\Omega)$-limits of martingales are again martingales (this fact follows by applying Jensen's inequality).

Let $u_t^n$, $t \in [0,T]$, be defined by the right-hand side of (3.2). Fix $0 \le s \le t \le T$ and assume that $t_{k-1} < s \le t_k < t_l < t \le t_{l+1}$. Then
$$I_t^n - I_s^n = u_k(B_{t_k} - B_s) + \sum_{j=k+1}^l u_j(B_{t_j} - B_{t_{j-1}}) + u_{l+1}(B_t - B_{t_l}).$$
Using properties (g) and (f), respectively, of the conditional expectation (see Appendix 1) yields
$$E(I_t^n - I_s^n \mid \mathcal{F}_s) = E(u_k(B_{t_k} - B_s) \mid \mathcal{F}_s) + \sum_{j=k+1}^l E\left(E\left(u_j\, \Delta_j B \mid \mathcal{F}_{t_{j-1}}\right) \mid \mathcal{F}_s\right) + E\left(u_{l+1}\, E\left(B_t - B_{t_l} \mid \mathcal{F}_{t_l}\right) \mid \mathcal{F}_s\right) = 0.$$
This finishes the proof of the proposition.
A proof not very different from that of Proposition 2.3 yields

Proposition 3.3 For any bounded process $u \in L^2_{a,T}$,
$$L^1(\Omega)\text{-}\lim_{n \to \infty} \sum_{j=1}^n \left(\int_{t_{j-1}}^{t_j} u_s\,dB_s\right)^2 = \int_0^t u_s^2\,ds.$$
That means, the quadratic variation of the indefinite stochastic integral is given by the process $\{\int_0^t u_s^2\,ds,\ t \in [0,T]\}$.

The isometry property of the stochastic integral can be extended in the following sense. Let $p \in [2, \infty[$. Then,
$$E\left|\int_0^t u_s\,dB_s\right|^p \le C(p)\, E\left(\int_0^t u_s^2\,ds\right)^{\frac{p}{2}}. \qquad (3.8)$$
Here $C(p)$ is a positive constant depending on $p$. This is Burkholder's inequality.

A combination of Burkholder's inequality and Kolmogorov's continuity criterion allows us to deduce the continuity of the sample paths of the indefinite stochastic integral. Indeed, assume that $\int_0^T E|u_r|^p\,dr < \infty$ for any $p \in [2, \infty[$. Using first (3.8) and then Hölder's inequality (be smart!) implies
$$E\left|\int_s^t u_r\,dB_r\right|^p \le C(p)\, E\left(\int_s^t u_r^2\,dr\right)^{\frac{p}{2}} \le C(p)\, |t-s|^{\frac{p}{2}-1} \int_s^t E|u_r|^p\,dr \le C(p)\, |t-s|^{\frac{p}{2}-1}.$$
Since $p \ge 2$ is arbitrary, Theorem 1.1 yields that the sample paths of $\{\int_0^t u_s\,dB_s,\ t \in [0,T]\}$ are Hölder continuous of degree $\alpha \in ]0, \frac{1}{2}[$.
3.3 An extension of the Itô integral

In Section 3.1 we introduced the set $L^2_{a,T}$ and defined the stochastic integral of processes of this class with respect to Brownian motion. In this section we shall consider a larger class of integrands. The notation and underlying filtration are the same as in Section 3.1.

Let $\Lambda^2_{a,T}$ be the set of real valued processes $u$, adapted to the filtration $(\mathcal{F}_t, t \ge 0)$, jointly measurable in $(t, \omega)$ with respect to the product $\sigma$-field $\mathcal{B}([0,T]) \otimes \mathcal{F}$, and satisfying
$$P\left\{\int_0^T u_t^2\,dt < \infty\right\} = 1. \qquad (3.9)$$
Clearly $L^2_{a,T} \subset \Lambda^2_{a,T}$. Our aim is to define the stochastic integral for processes in $\Lambda^2_{a,T}$. For this we shall follow the same approach as in Section 3.1. Firstly, we start with step processes of the form (3.2) belonging to $\Lambda^2_{a,T}$ and define the integral as in (3.3). The extension to processes in $\Lambda^2_{a,T}$ needs two ingredients. The first one is an approximation result that we now state without giving a proof. The reader may consult, for instance, [1].
Proposition 3.4 Let $u \in \Lambda^2_{a,T}$. There exists a sequence of step processes $(u^n, n \ge 1)$ of the form (3.2) belonging to $\Lambda^2_{a,T}$ such that
$$\lim_{n \to \infty} \int_0^T |u_t^n - u_t|^2\,dt = 0, \quad \text{a.s.}$$
The second ingredient gives a connection between stochastic integrals of step processes in $\Lambda^2_{a,T}$ and their quadratic variation, as follows.

Proposition 3.5 Let $u$ be a step process in $\Lambda^2_{a,T}$. Then for any $\epsilon > 0$, $N > 0$,
$$P\left\{\left|\int_0^T u_t\,dB_t\right| > \epsilon\right\} \le P\left\{\int_0^T u_t^2\,dt > N\right\} + \frac{N}{\epsilon^2}. \qquad (3.10)$$
Proof: It is based on a truncation argument. Let $u$ be given by the right-hand side of (3.2) (here it is not necessary to assume that the random variables $u_j$ are in $L^2(\Omega)$). Fix $N > 0$ and define
$$v_t^N = \begin{cases} u_j, & \text{if } t \in [t_{j-1}, t_j[ \text{ and } \sum_{i=1}^j u_i^2(t_i - t_{i-1}) \le N,\\[4pt] 0, & \text{if } t \in [t_{j-1}, t_j[ \text{ and } \sum_{i=1}^j u_i^2(t_i - t_{i-1}) > N.\end{cases}$$
The process $\{v_t^N, t \in [0,T]\}$ belongs to $L^2_{a,T}$. Indeed, by definition,
$$\int_0^T |v_t^N|^2\,dt \le N.$$
Moreover, if $\int_0^T u_t^2\,dt \le N$, then necessarily $u_t = v_t^N$ for any $t \in [0,T]$. Then, by considering the decomposition
$$\left\{\left|\int_0^T u_t\,dB_t\right| > \epsilon\right\} = \left\{\left|\int_0^T u_t\,dB_t\right| > \epsilon,\ \int_0^T u_t^2\,dt > N\right\} \cup \left\{\left|\int_0^T u_t\,dB_t\right| > \epsilon,\ \int_0^T u_t^2\,dt \le N\right\},$$
we obtain
$$P\left\{\left|\int_0^T u_t\,dB_t\right| > \epsilon\right\} \le P\left\{\left|\int_0^T v_t^N\,dB_t\right| > \epsilon\right\} + P\left\{\int_0^T u_t^2\,dt > N\right\}.$$
We finally apply Chebyshev's inequality along with the isometry property of the stochastic integral for processes in $L^2_{a,T}$ and get
$$P\left\{\left|\int_0^T v_t^N\,dB_t\right| > \epsilon\right\} \le \frac{1}{\epsilon^2}\, E\left(\int_0^T v_t^N\,dB_t\right)^2 \le \frac{N}{\epsilon^2}.$$
This ends the proof of the result. $\square$
The extension

Fix $u \in \Lambda^2_{a,T}$ and consider a sequence of step processes $(u^n, n \ge 1)$ of the form (3.2) belonging to $\Lambda^2_{a,T}$ such that
$$\lim_{n \to \infty} \int_0^T |u_t^n - u_t|^2\,dt = 0, \qquad (3.11)$$
in the convergence in probability. By Proposition 3.5, for any $\epsilon > 0$, $N > 0$ we have
$$P\left\{\left|\int_0^T (u_t^n - u_t^m)\,dB_t\right| > \epsilon\right\} \le P\left\{\int_0^T (u_t^n - u_t^m)^2\,dt > N\right\} + \frac{N}{\epsilon^2}.$$
Using (3.11), we can choose $\delta > 0$ such that for any $N > 0$ and $n, m$ big enough,
$$P\left\{\int_0^T (u_t^n - u_t^m)^2\,dt > N\right\} \le \frac{\delta}{2}.$$
Then, we may take $N$ small enough so that $\frac{N}{\epsilon^2} \le \frac{\delta}{2}$. Consequently, we have proved that the sequence of stochastic integrals of step processes
$$\left(\int_0^T u_t^n\,dB_t,\ n \ge 1\right)$$
is Cauchy in probability. Since the convergence in probability is metrizable, this sequence has a limit in probability. Hence, we define
$$\int_0^T u_t\,dB_t = P\text{-}\lim_{n \to \infty} \int_0^T u_t^n\,dB_t. \qquad (3.12)$$
It is easy to check that this definition is indeed independent of the particular approximating sequence used in the construction.
3.4 A change of variables formula: Itô's formula

As in Example 3.1, we can prove the following formula, valid for any $t \ge 0$:
$$B_t^2 = 2\int_0^t B_s\,dB_s + t. \qquad (3.13)$$
If the sample paths of $\{B_t, t \ge 0\}$ were sufficiently smooth -for example, of bounded variation- we would rather have
$$B_t^2 = 2\int_0^t B_s\,dB_s. \qquad (3.14)$$
Why is it so? Consider a decomposition similar to the one given in (3.6), obtained by restricting the time interval to $[0, t]$. More concretely, consider the partition of $[0, t]$ defined by $0 = t_0 \le t_1 \le \cdots \le t_n = t$:
$$B_t^2 = \sum_{j=0}^{n-1}\left(B_{t_{j+1}}^2 - B_{t_j}^2\right) = 2\sum_{j=0}^{n-1} B_{t_j}\left(B_{t_{j+1}} - B_{t_j}\right) + \sum_{j=0}^{n-1}\left(B_{t_{j+1}} - B_{t_j}\right)^2, \qquad (3.15)$$
where we have used that $B_0 = 0$.

Consider a sequence of partitions of $[0, t]$ whose mesh tends to zero. We already know that
$$\sum_{j=0}^{n-1}\left(B_{t_{j+1}} - B_{t_j}\right)^2 \to t,$$
in the convergence of $L^2(\Omega)$. This gives the extra contribution in the development of $B_t^2$ in comparison with the classical calculus approach.

Notice that, if $B$ were of bounded variation, then we could argue as follows:
$$\sum_{j=0}^{n-1}\left(B_{t_{j+1}} - B_{t_j}\right)^2 \le \sup_{0 \le j \le n-1}|B_{t_{j+1}} - B_{t_j}| \sum_{j=0}^{n-1}|B_{t_{j+1}} - B_{t_j}|.$$
By the continuity of the sample paths of Brownian motion, the first factor on the right-hand side of the preceding inequality tends to zero as the mesh of the partition tends to zero, while the second factor remains finite, by the property of bounded variation.

Summarising: differential calculus with respect to Brownian motion should take into account second order differential terms. Roughly speaking,
$$(dB_t)^2 = dt.$$
A precise meaning of this formal formula is given in Proposition 2.3.
3.4.1 One dimensional Itô's formula

In this section, we shall extend the formula (3.13) and write an expression for $f(t, B_t)$ for a class of functions $f$ which includes $f(x) = x^2$.

Definition 3.2 Let $\{v_t, t \in [0,T]\}$ be an adapted stochastic process whose sample paths are almost surely Lebesgue integrable, that is, $\int_0^T |v_t|\,dt < \infty$, a.s. Consider a stochastic process $\{u_t, t \in [0,T]\}$ belonging to $\Lambda^2_{a,T}$ and a random variable $X_0$. The stochastic process defined by
$$X_t = X_0 + \int_0^t u_s\,dB_s + \int_0^t v_s\,ds, \qquad (3.16)$$
$t \in [0,T]$, is termed an Itô process.

An alternative writing of (3.16) in differential form is
$$dX_t = u_t\,dB_t + v_t\,dt.$$

To warm up, we state a particular version of the Itô formula. By $C^{1,2}$ we denote the set of functions on $[0,T] \times \mathbb{R}$ which are jointly continuous in $(t,x)$, continuously differentiable in $t$ and twice continuously differentiable in $x$, with derivatives jointly continuous.

Theorem 3.1 Let $f : [0,T] \times \mathbb{R} \to \mathbb{R}$ be a function in $C^{1,2}$ and let $X$ be an Itô process with the decomposition given in (3.16). The following formula holds true:
$$f(t, X_t) = f(0, X_0) + \int_0^t \partial_s f(s, X_s)\,ds + \int_0^t \partial_x f(s, X_s)\, u_s\,dB_s + \int_0^t \partial_x f(s, X_s)\, v_s\,ds + \frac{1}{2}\int_0^t \partial^2_{xx} f(s, X_s)\, u_s^2\,ds. \qquad (3.17)$$
An idea of the proof. Consider a sequence of partitions of $[0, t]$, for example the one defined by $t_j^n = \frac{jt}{n}$. In the sequel, we omit the superscript $n$ for the sake of simplicity. We can write
$$f(t, X_t) - f(0, X_0) = \sum_{j=0}^{n-1}\left(f(t_{j+1}, X_{t_{j+1}}) - f(t_j, X_{t_j})\right)$$
$$= \sum_{j=0}^{n-1}\left[\left(f(t_{j+1}, X_{t_j}) - f(t_j, X_{t_j})\right) + \left(f(t_{j+1}, X_{t_{j+1}}) - f(t_{j+1}, X_{t_j})\right)\right]$$
$$= \sum_{j=0}^{n-1}\left[\partial_s f(\bar t_j, X_{t_j})(t_{j+1} - t_j) + \partial_x f(t_{j+1}, X_{t_j})(X_{t_{j+1}} - X_{t_j})\right] + \frac{1}{2}\sum_{j=0}^{n-1}\partial^2_{xx} f(t_{j+1}, \bar X_j)(X_{t_{j+1}} - X_{t_j})^2, \qquad (3.18)$$
with $\bar t_j \in ]t_j, t_{j+1}[$ and $\bar X_j$ an intermediate (random) point on the segment determined by $X_{t_j}$ and $X_{t_{j+1}}$.

In fact, this follows from a Taylor expansion of the function $f$ up to the first order in the variable $s$ and up to the second order in the variable $x$. The asymmetry in the orders is due to the existence of the quadratic variation of the processes involved. The expression (3.18) is the analogue of (3.15). The latter is much simpler, for two reasons: firstly, there is no $s$-variable; secondly, $f$ is a polynomial of second degree, and therefore it has an exact Taylor expansion. But both formulas have the same structure.

When passing to the limit as $n \to \infty$, we obtain
$$\sum_{j=0}^{n-1}\partial_s f(\bar t_j, X_{t_j})(t_{j+1} - t_j) \to \int_0^t \partial_s f(s, X_s)\,ds,$$
$$\sum_{j=0}^{n-1}\partial_x f(t_{j+1}, X_{t_j})(X_{t_{j+1}} - X_{t_j}) \to \int_0^t \partial_x f(s, X_s)\, u_s\,dB_s + \int_0^t \partial_x f(s, X_s)\, v_s\,ds,$$
$$\sum_{j=0}^{n-1}\partial^2_{xx} f(t_{j+1}, \bar X_j)(X_{t_{j+1}} - X_{t_j})^2 \to \int_0^t \partial^2_{xx} f(s, X_s)\, u_s^2\,ds,$$
in the convergence in probability.
Itô's formula (3.17) can be written in the formal simple differential form
$$df(t, X_t) = \partial_t f(t, X_t)\,dt + \partial_x f(t, X_t)\,dX_t + \frac{1}{2}\partial^2_{xx} f(t, X_t)(dX_t)^2, \qquad (3.19)$$
where $(dX_t)^2$ is computed using the formal rules of composition
$$dB_t \cdot dB_t = dt, \quad dB_t \cdot dt = dt \cdot dB_t = 0, \quad dt \cdot dt = 0.$$

Consider in Theorem 3.1 the particular case where $f : \mathbb{R} \to \mathbb{R}$ is a function in $C^2$ (twice continuously differentiable). Then formula (3.17) becomes
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\, u_s\,dB_s + \int_0^t f'(X_s)\, v_s\,ds + \frac{1}{2}\int_0^t f''(X_s)\, u_s^2\,ds. \qquad (3.20)$$

Example 3.2 Consider the function
$$f(t, x) = e^{\mu t - \frac{\sigma^2}{2}t + \sigma x},$$
with $\mu, \sigma \in \mathbb{R}$. Applying formula (3.17) to $X_t := B_t$ -a Brownian motion- yields
$$f(t, B_t) = 1 + \mu\int_0^t f(s, B_s)\,ds + \sigma\int_0^t f(s, B_s)\,dB_s.$$
Hence, the process $\{Y_t = f(t, B_t), t \ge 0\}$ satisfies the equation
$$Y_t = 1 + \mu\int_0^t Y_s\,ds + \sigma\int_0^t Y_s\,dB_s.$$
The equivalent differential form of this identity is the linear stochastic differential equation
$$dY_t = \mu Y_t\,dt + \sigma Y_t\,dB_t, \quad Y_0 = 1. \qquad (3.21)$$
Black and Scholes proposed, as a model of a market with a single risky asset with initial value $S_0 = 1$, the process $S_t = Y_t$. We have seen that such a process is in fact the solution to a linear stochastic differential equation (see (3.21)).
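Equation (3.21) can be solved numerically by an Euler scheme and compared with the closed form $Y_t = e^{(\mu - \sigma^2/2)t + \sigma B_t}$ of Example 3.2; both sample means should approach $E(Y_T) = e^{\mu T}$. A sketch of mine (the parameter values are arbitrary choices):

```python
import math
import random

random.seed(3)

mu, sigma, T, n = 0.05, 0.2, 1.0, 200

def one_path():
    # Euler step for dY = mu*Y dt + sigma*Y dB, alongside the closed-form solution
    dt = T / n
    y, b = 1.0, 0.0
    for _ in range(n):
        db = random.gauss(0.0, math.sqrt(dt))
        y += mu * y * dt + sigma * y * db
        b += db
    exact = math.exp((mu - 0.5 * sigma ** 2) * T + sigma * b)
    return y, exact

n_paths = 5000
em = xm = 0.0
for _ in range(n_paths):
    y, ex = one_path()
    em += y
    xm += ex
em /= n_paths
xm /= n_paths
print(em, xm, math.exp(mu * T))  # all three values are close
```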
3.4.2 Multidimensional version of Itô's formula

Consider an $m$-dimensional Brownian motion $\{(B_t^1, \ldots, B_t^m),\ t \ge 0\}$ and $p$ real-valued Itô processes, as follows:
$$dX_t^i = \sum_{l=1}^m u_t^{i,l}\,dB_t^l + v_t^i\,dt, \qquad (3.22)$$
$i = 1, \ldots, p$. We assume that each of the processes $u^{i,l}$ belongs to $\Lambda^2_{a,T}$ and that $\int_0^T |v_t^i|\,dt < \infty$, a.s. Following a plan similar to that of Theorem 3.1, we can prove the following:

Theorem 3.2 Let $f : [0, \infty) \times \mathbb{R}^p \to \mathbb{R}$ be a function of class $C^{1,2}$ and let $X = (X^1, \ldots, X^p)$ be given by (3.22). Then
$$f(t, X_t) = f(0, X_0) + \int_0^t \partial_s f(s, X_s)\,ds + \sum_{k=1}^p \int_0^t \partial_{x_k} f(s, X_s)\,dX_s^k + \frac{1}{2}\sum_{k,l=1}^p \int_0^t \partial^2_{x_k x_l} f(s, X_s)\,dX_s^k\,dX_s^l, \qquad (3.23)$$
where in order to compute $dX_s^k\,dX_s^l$ we have to apply the following rules:
$$dB_s^k \cdot dB_s^l = \delta_{k,l}\,ds, \qquad (3.24)$$
$$dB_s^k \cdot ds = 0, \quad (ds)^2 = 0,$$
where $\delta_{k,l}$ denotes the Kronecker symbol.

We remark that the identity (3.24) is a consequence of the independence of the components of the Brownian motion.

Example 3.3 Consider the particular case $m = 1$, $p = 2$ and $f(x, y) = xy$. That is, $f$ does not depend on $t$, and we have denoted a generic point of $\mathbb{R}^2$ by $(x, y)$. Then the above formula (3.23) yields
$$X_t^1 X_t^2 = X_0^1 X_0^2 + \int_0^t X_s^1\,dX_s^2 + \int_0^t X_s^2\,dX_s^1 + \int_0^t u_s^1 u_s^2\,ds. \qquad (3.25)$$
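Formula (3.25) with $X^1 = B$ and $X_t^2 = t$ (so $u^2 \equiv 0$ and the correction term vanishes) gives $tB_t = \int_0^t B_s\,ds + \int_0^t s\,dB_s$ — the integration by parts used again in Example 4.1. A numerical check of mine (left-point discretizations of both integrals):

```python
import math
import random

random.seed(4)

def product_rule_gap(t=1.0, n=2000):
    # compare t*B_t with int_0^t B_s ds + int_0^t s dB_s (left-point sums)
    dt = t / n
    b = riemann = ito = 0.0
    for j in range(n):
        s = j * dt
        db = random.gauss(0.0, math.sqrt(dt))
        riemann += b * dt
        ito += s * db
        b += db
    return abs(t * b - (riemann + ito))

gaps = [product_rule_gap() for _ in range(100)]
print(max(gaps))  # small on every simulated path
```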
Proof of Theorem 3.2: Let $\Pi_n = \{0 = t_0^n < \cdots < t_{p_n}^n = t\}$ be a sequence of increasing partitions of $[0,t]$ such that $\lim_{n \to \infty} |\Pi_n| = 0$. First, we consider the decomposition
$$f(t, X_t) - f(0, X_0) = \sum_{i=0}^{p_n - 1}\left(f(t_{i+1}^n, X_{t_{i+1}^n}) - f(t_i^n, X_{t_i^n})\right),$$
and then apply Taylor's formula to each term of the last sum. We obtain
$$f(t_{i+1}^n, X_{t_{i+1}^n}) - f(t_i^n, X_{t_i^n}) = \partial_s f(\bar t_i^n, X_{t_i^n})(t_{i+1}^n - t_i^n) + \sum_{k=1}^p \partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big)\big(X^k_{t_{i+1}^n} - X^k_{t_i^n}\big) + \sum_{k,l=1}^p \frac{\bar f^{k,l}_{n,i}}{2}\big(X^k_{t_{i+1}^n} - X^k_{t_i^n}\big)\big(X^l_{t_{i+1}^n} - X^l_{t_i^n}\big),$$
with $\bar t_i^n \in ]t_i^n, t_{i+1}^n[$ and
$$\inf_{\theta \in [0,1]} \partial^2_{x_k x_l} f\Big(t_i^n, X_{t_i^n} + \theta\big(X_{t_{i+1}^n} - X_{t_i^n}\big)\Big) \le \bar f^{k,l}_{n,i} \le \sup_{\theta \in [0,1]} \partial^2_{x_k x_l} f\Big(t_i^n, X_{t_i^n} + \theta\big(X_{t_{i+1}^n} - X_{t_i^n}\big)\Big).$$
We now proceed to prove the convergence of each contribution to the sum. In order to simplify the arguments, we shall impose more restrictive assumptions on the function $f$ and the processes $u^{i,l}$. Notice that the process $X$ has continuous sample paths, a.s.

First term:
$$\sum_{i=0}^{p_n - 1} \partial_s f(\bar t_i^n, X_{t_i^n})(t_{i+1}^n - t_i^n) \to \int_0^t \partial_s f(s, X_s)\,ds, \qquad (3.26)$$
a.s. Indeed,
$$\left|\sum_{i=0}^{p_n - 1} \partial_s f(\bar t_i^n, X_{t_i^n})(t_{i+1}^n - t_i^n) - \int_0^t \partial_s f(s, X_s)\,ds\right| = \left|\sum_{i=0}^{p_n - 1}\int_{t_i^n}^{t_{i+1}^n}\left(\partial_s f(\bar t_i^n, X_{t_i^n}) - \partial_s f(s, X_s)\right)ds\right|$$
$$\le \sum_{i=0}^{p_n - 1}\int_{t_i^n}^{t_{i+1}^n}\left|\partial_s f(\bar t_i^n, X_{t_i^n}) - \partial_s f(s, X_s)\right|ds \le t \sup_{0 \le i \le p_n - 1}\left|\left(\partial_s f(\bar t_i^n, X_{t_i^n}) - \partial_s f(s, X_s)\right)\mathbf{1}_{[t_i^n, t_{i+1}^n]}(s)\right|.$$
The continuity of $\partial_s f$ along with that of the process $X$ implies
$$\lim_{n \to \infty} \sup_{0 \le i \le p_n - 1}\left|\left(\partial_s f(\bar t_i^n, X_{t_i^n}) - \partial_s f(s, X_s)\right)\mathbf{1}_{[t_i^n, t_{i+1}^n]}(s)\right| = 0.$$
This gives (3.26).
Second term

Fix $k = 1, \ldots, p$. We next prove
$$\sum_{i=0}^{p_n - 1} \partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big)\big(X^k_{t_{i+1}^n} - X^k_{t_i^n}\big) \to \int_0^t \partial_{x_k} f(s, X_s)\,dX_s^k \qquad (3.27)$$
in probability, which by (3.22) amounts to checking two convergences:
$$\sum_{i=0}^{p_n - 1} \partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big)\int_{t_i^n}^{t_{i+1}^n} u_t^{k,l}\,dB_t^l \to \int_0^t \partial_{x_k} f(s, X_s)\, u_s^{k,l}\,dB_s^l, \qquad (3.28)$$
$$\sum_{i=0}^{p_n - 1} \partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big)\int_{t_i^n}^{t_{i+1}^n} v_t^k\,dt \to \int_0^t \partial_{x_k} f(s, X_s)\, v_s^k\,ds, \qquad (3.29)$$
for any $l = 1, \ldots, m$.

We start with (3.28). Assume that $\partial_{x_k} f$ is bounded and that the processes $u^{k,l}$ are in $L^2_{a,T}$. Then
$$E\left(\sum_{i=0}^{p_n - 1} \partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big)\int_{t_i^n}^{t_{i+1}^n} u_t^{k,l}\,dB_t^l - \int_0^t \partial_{x_k} f(s, X_s)\, u_s^{k,l}\,dB_s^l\right)^2$$
$$= E\left(\sum_{i=0}^{p_n - 1}\int_{t_i^n}^{t_{i+1}^n}\Big(\partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big) - \partial_{x_k} f(s, X_s)\Big)u_s^{k,l}\,dB_s^l\right)^2$$
$$= \sum_{i=0}^{p_n - 1} E\left(\int_{t_i^n}^{t_{i+1}^n}\Big(\partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big) - \partial_{x_k} f(s, X_s)\Big)u_s^{k,l}\,dB_s^l\right)^2$$
$$= \sum_{i=0}^{p_n - 1}\int_{t_i^n}^{t_{i+1}^n} E\Big(\big(\partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big) - \partial_{x_k} f(s, X_s)\big)\, u_s^{k,l}\Big)^2 ds,$$
where we have successively applied the fact that stochastic integrals over disjoint intervals are centered and mutually orthogonal, along with the isometry property.

By the continuity of $\partial_{x_k} f$ and of the process $X$,
$$\sup_{0 \le i \le p_n - 1}\left|\Big(\partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big) - \partial_{x_k} f(s, X_s)\Big)\mathbf{1}_{[t_i^n, t_{i+1}^n]}(s)\right| \to 0, \qquad (3.30)$$
a.s. Then, by bounded convergence,
$$\sup_{0 \le i \le p_n - 1} E\Big(\big(\partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big) - \partial_{x_k} f(s, X_s)\big)\, u_s^{k,l}\,\mathbf{1}_{[t_i^n, t_{i+1}^n]}(s)\Big)^2 \to 0,$$
and hence we get (3.28) in the convergence of $L^2(\Omega)$.
For the proof of (3.29) we also assume that $\partial_{x_k} f$ is bounded. Then,
$$\left|\sum_{i=0}^{p_n - 1} \partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big)\int_{t_i^n}^{t_{i+1}^n} v_t^k\,dt - \int_0^t \partial_{x_k} f(s, X_s)\, v_s^k\,ds\right|$$
$$= \left|\sum_{i=0}^{p_n - 1}\int_{t_i^n}^{t_{i+1}^n}\Big(\partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big) - \partial_{x_k} f(s, X_s)\Big)v_s^k\,ds\right|$$
$$\le \sum_{i=0}^{p_n - 1}\int_{t_i^n}^{t_{i+1}^n}\left|\partial_{x_k} f\big(t_i^n, X_{t_i^n}^1, \ldots, X_{t_i^n}^p\big) - \partial_{x_k} f(s, X_s)\right| |v_s^k|\,ds.$$
By virtue of (3.30) and bounded convergence, we obtain (3.29) in the a.s. convergence.
Third term

Given the notation (3.24), it suffices to prove that
$$\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\big(X^k_{t_{i+1}^n} - X^k_{t_i^n}\big)\big(X^l_{t_{i+1}^n} - X^l_{t_i^n}\big) \to \sum_{j=1}^m \int_0^t \partial^2_{x_k x_l} f(s, X_s)\, u_s^{k,j} u_s^{l,j}\,ds, \qquad (3.31)$$
which will be a consequence of the following convergences:
$$\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{k,j}\,dB_s^j\right)\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{l,j'}\,dB_s^{j'}\right) \to \delta_{j,j'}\int_0^t \partial^2_{x_k x_l} f(s, X_s)\, u_s^{k,j} u_s^{l,j'}\,ds, \qquad (3.32)$$
$$\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{k,j}\,dB_s^j\right)\left(\int_{t_i^n}^{t_{i+1}^n} v_s^l\,ds\right) \to 0, \qquad (3.33)$$
$$\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\left(\int_{t_i^n}^{t_{i+1}^n} v_s^k\,ds\right)\left(\int_{t_i^n}^{t_{i+1}^n} v_s^l\,ds\right) \to 0. \qquad (3.34)$$

Let us start by arguing on (3.32). We assume that $\partial^2_{x_k x_l} f$ is bounded and that $u^{l,j} \in L^2_{a,T}$ for any $l = 1, \ldots, p$, $j = 1, \ldots, m$. Suppose first that $j \ne j'$. Then, the convergence to zero follows from the fact that the stochastic integrals
$$\int_{t_i^n}^{t_{i+1}^n} u_s^{k,j}\,dB_s^j, \quad \int_{t_i^n}^{t_{i+1}^n} u_s^{l,j'}\,dB_s^{j'}$$
are independent.

Assume $j = j'$. Then,
$$E\left|\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{k,j}\,dB_s^j\right)\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{l,j}\,dB_s^j\right) - \int_0^t \partial^2_{x_k x_l} f(s, X_s)\, u_s^{k,j} u_s^{l,j}\,ds\right| \le T_1 + T_2,$$
with
$$T_1 = E\left|\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\left[\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{k,j}\,dB_s^j\right)\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{l,j}\,dB_s^j\right) - \int_{t_i^n}^{t_{i+1}^n} u_s^{k,j} u_s^{l,j}\,ds\right]\right|,$$
$$T_2 = E\left|\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\int_{t_i^n}^{t_{i+1}^n} u_s^{k,j} u_s^{l,j}\,ds - \int_0^t \partial^2_{x_k x_l} f(s, X_s)\, u_s^{k,j} u_s^{l,j}\,ds\right|.$$
Since $\partial^2_{x_k x_l} f$ is bounded,
$$T_1 \le C\, E\left(\sum_{i=0}^{p_n - 1}\left|\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{k,j}\,dB_s^j\right)\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{l,j}\,dB_s^j\right) - \int_{t_i^n}^{t_{i+1}^n} u_s^{k,j} u_s^{l,j}\,ds\right|\right).$$
This tends to zero as $n \to \infty$ (see Proposition 3.3). As for $T_2$, we have
$$T_2 \le E\left(\sup_i\left|\left(\bar f^{k,l}_{n,i} - \partial^2_{x_k x_l} f(s, X_s)\right)\mathbf{1}_{]t_i^n, t_{i+1}^n]}(s)\right| \int_0^t \left|u_s^{k,j} u_s^{l,j}\right| ds\right).$$
By continuity,
$$\lim_{n \to \infty}\sup_i\left|\left(\bar f^{k,l}_{n,i} - \partial^2_{x_k x_l} f(s, X_s)\right)\mathbf{1}_{]t_i^n, t_{i+1}^n]}(s)\right| = 0.$$
Thus, by bounded convergence we obtain $\lim_{n \to \infty} T_2 = 0$. Hence we have proved (3.32) in $L^1(\Omega)$.

Next we prove (3.33) in $L^1(\Omega)$, assuming that $\partial^2_{x_k x_l} f$ is bounded and, additionally, that the processes $u^{k,j} \in L^2_{a,T}$ and $v^l \in L^2(\Omega \times [0,T])$. In this case,
$$E\left|\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\left(\int_{t_i^n}^{t_{i+1}^n} u_s^{k,j}\,dB_s^j\right)\left(\int_{t_i^n}^{t_{i+1}^n} v_s^l\,ds\right)\right| \le C\sum_{i=0}^{p_n - 1} E\left|\left(\int_{t_i^n}^{t_{i+1}^n} u_s\,dB_s\right)\left(\int_{t_i^n}^{t_{i+1}^n} v_s\,ds\right)\right|$$
$$\le C\sum_{i=0}^{p_n - 1}\left(E\int_{t_i^n}^{t_{i+1}^n} u_s^2\,ds\right)^{\frac{1}{2}}\left(E\left(\int_{t_i^n}^{t_{i+1}^n} |v_s|\,ds\right)^2\right)^{\frac{1}{2}}$$
$$\le C\sum_{i=0}^{p_n - 1} |t_{i+1}^n - t_i^n|^{\frac{1}{2}}\left(\int_{t_i^n}^{t_{i+1}^n} E|u_s|^2\,ds\right)^{\frac{1}{2}}\left(E\int_{t_i^n}^{t_{i+1}^n} |v_s|^2\,ds\right)^{\frac{1}{2}}$$
$$\le C\sup_i |t_{i+1}^n - t_i^n|^{\frac{1}{2}}\left(\sum_{i=0}^{p_n - 1}\int_{t_i^n}^{t_{i+1}^n} E|u_s|^2\,ds\right)^{\frac{1}{2}}\left(\sum_{i=0}^{p_n - 1}\int_{t_i^n}^{t_{i+1}^n} E|v_s|^2\,ds\right)^{\frac{1}{2}},$$
which tends to zero as $n \to \infty$. Starting from the second term of the chain of inequalities, we have omitted the superindices, for the sake of simplicity.

The proof of (3.34) is easier. Indeed, assuming that $\partial^2_{x_k x_l} f$ is bounded,
$$\left|\sum_{i=0}^{p_n - 1} \bar f^{k,l}_{n,i}\left(\int_{t_i^n}^{t_{i+1}^n} v_s^k\,ds\right)\left(\int_{t_i^n}^{t_{i+1}^n} v_s^l\,ds\right)\right| \le C\sup_i\left(\int_{t_i^n}^{t_{i+1}^n} |v_s^l|\,ds\right)\int_0^t |v_s^k|\,ds.$$
The first factor tends to zero as $n \to \infty$, while the second one is bounded a.s. The proof of the theorem is now complete. $\square$
4 Applications of the Itô formula

This chapter is devoted to giving some important results whose proofs make use of the Itô formula.
4.1 Burkholder-Davis-Gundy inequalities

Theorem 4.1 Let $u \in L^2_{a,T}$ and set $M_t = \int_0^t u_s\,dB_s$. Define
$$M_t^* = \sup_{s \in [0,t]} |M_s|.$$
Then, for any $p > 0$, there exist two positive constants $c_p$, $C_p$ such that
$$c_p\, E\left(\int_0^T u_s^2\,ds\right)^{\frac{p}{2}} \le E\left(M_T^*\right)^p \le C_p\, E\left(\int_0^T u_s^2\,ds\right)^{\frac{p}{2}}. \qquad (4.1)$$

Proof: We will only prove here the right-hand side of (4.1), for $p \ge 2$. Consider the function $f(x) = |x|^p$, for which we have
$$f'(x) = p|x|^{p-1}\operatorname{sign}(x), \quad f''(x) = p(p-1)|x|^{p-2}.$$
Then, according to (3.20) we obtain
$$|M_t|^p = \int_0^t p|M_s|^{p-1}\operatorname{sign}(M_s)\, u_s\,dB_s + \frac{1}{2}\int_0^t p(p-1)|M_s|^{p-2} u_s^2\,ds.$$
Applying the expectation operator to both terms of the above identity yields
$$E(|M_t|^p) = \frac{p(p-1)}{2}\, E\left(\int_0^t |M_s|^{p-2} u_s^2\,ds\right).$$
We next apply Hölder's inequality to the expectation, with exponents $\frac{p}{p-2}$ and $q = \frac{p}{2}$, and get
$$E\left(\int_0^t |M_s|^{p-2} u_s^2\,ds\right) \le E\left((M_t^*)^{p-2}\int_0^t u_s^2\,ds\right) \le \left[E(M_t^*)^p\right]^{\frac{p-2}{p}}\left(E\left(\int_0^t u_s^2\,ds\right)^{\frac{p}{2}}\right)^{\frac{2}{p}}.$$
Doob's inequality (see Theorem 8.1) implies
$$E(M_t^*)^p \le \left(\frac{p}{p-1}\right)^p E(|M_t|^p).$$
Hence, combining the last two estimates and dividing both sides by $\left[E(M_t^*)^p\right]^{\frac{p-2}{p}}$ (which may be assumed finite, by a localisation argument),
$$\left[E(M_t^*)^p\right]^{\frac{2}{p}} \le \left(\frac{p}{p-1}\right)^p \frac{p(p-1)}{2}\left(E\left(\int_0^t u_s^2\,ds\right)^{\frac{p}{2}}\right)^{\frac{2}{p}},$$
and therefore
$$E(M_t^*)^p \le \left(\frac{p}{p-1}\right)^{\frac{p^2}{2}}\left(\frac{p(p-1)}{2}\right)^{\frac{p}{2}} E\left(\int_0^t u_s^2\,ds\right)^{\frac{p}{2}}.$$
This ends the proof of the upper bound. $\square$
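For $u \equiv 1$ (so $M = B$ and $\int_0^T u_s^2\,ds = T$), inequality (4.1) with $p = 2$ says $c_2 T \le E(M_T^*)^2 \le C_2 T$; moreover $E(M_T^*)^2 \ge E(B_T^2) = T$ trivially, and the proof above via Doob gives the upper constant $4$. A Monte Carlo sketch of mine (the discrete supremum is slightly biased downward):

```python
import math
import random

random.seed(5)

def sup_sq(T=1.0, n=500):
    # (M_T^*)^2 for u = 1, i.e. the squared running maximum of |B|
    dt = T / n
    b = m = 0.0
    for _ in range(n):
        b += random.gauss(0.0, math.sqrt(dt))
        m = max(m, abs(b))
    return m * m

n_paths = 4000
est = sum(sup_sq() for _ in range(n_paths)) / n_paths
print(est)  # between T = 1 (since M_T^* >= |B_T|) and 4T = 4 (Doob)
```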
4.2 Representation of L2 Brownian functionals
We already know that for any process u L2a,T, the stochastic integral process{t0 usdBs, t [0, T]} is a martingale. The next result is a kind of conversestatement. In the proof we shall use a technical ingredient that we writewithout giving a proof.In the sequel we denote by FT the -field generated by (Bt, 0 t T).
Lemma 4.1 The vector space generated by the random variables
exp
T0
f(t)dBt 12
T0
f2(t)dt
,
f L2([0, T]), is dense in L2(, FT, P).
Theorem 4.2 Let Z L2(, FT). There exists a unique process h L2a,Tsuch that
Z = E(Z) +T0
hsdBs. (4.2)
Hence, for any martingale M bounded in L2
, there exist a unique processh L2a,T and a constant C such that
Mt = C+t0
usdBs. (4.3)
Proof: We start with the proof of (4.2). Let H be the vector space consistingof random variables Z L2(, FT) such that (4.2) holds. Firstly, we arguethe uniqueness of h. This is an easy consequence of the isometry of the
stochastic integral. Indeed, if there were two processes $h$ and $h'$ satisfying (4.2), then
$$E\left(\int_0^T (h_s - h'_s)^2\,ds\right) = E\left(\int_0^T (h_s - h'_s)\,dB_s\right)^2 = 0.$$
This yields $h = h'$ in $L^2([0,T] \times \Omega)$.

We now turn to the existence of $h$. Any $Z \in \mathcal{H}$ satisfies
$$E(Z^2) = (E(Z))^2 + E\left(\int_0^T h_s^2\,ds\right).$$
From this it follows that if $(Z_n, n \ge 1)$ is a sequence of elements of $\mathcal{H}$ converging to $Z$ in $L^2(\Omega, \mathcal{F}_T)$, then the corresponding sequence $(h^n, n \ge 1)$ in the representations is Cauchy in $L^2_{a,T}$. Denoting by $h$ its limit, we have
$$Z = E(Z) + \int_0^T h_s\,dB_s.$$
Hence $\mathcal{H}$ is closed in $L^2(\Omega, \mathcal{F}_T)$.

For any $f \in L^2([0,T])$, set
$$\mathcal{E}^f_t = \exp\left(\int_0^t f_s\,dB_s - \frac{1}{2}\int_0^t f_s^2\,ds\right).$$
By the Itô formula,
$$\mathcal{E}^f_t = 1 + \int_0^t \mathcal{E}^f_s f(s)\,dB_s.$$
Consequently, the representation (4.2) holds for $Z := \mathcal{E}^f_T$, and also any linear combination of such random variables belongs to $\mathcal{H}$. The conclusion follows from Lemma 4.1.
The representation (4.3) is a consequence of the following fact: the martingale $M$ converges in $L^2(\Omega)$ as $t \to \infty$ to a random variable $M_\infty \in L^2(\Omega)$, and $M_t = E(M_\infty \mid \mathcal{F}_t)$. Then, by taking conditional expectations on both sides of (4.2) applied to $Z := M_\infty$, we obtain
$$M_t = E(M_\infty \mid \mathcal{F}_t) = E(M_\infty) + \int_0^t h_s\,dB_s.$$
The proof of the theorem is now complete. $\square$
Example 4.1 Consider $Z = B_T^3$. In order to find the corresponding process $h$ in the integral representation, we first apply Itô's formula, yielding
$$B_T^3 = \int_0^T 3B_t^2\,dB_t + 3\int_0^T B_t\,dt.$$
An integration by parts gives
$$\int_0^T B_t\,dt = T B_T - \int_0^T t\,dB_t.$$
Thus,
$$B_T^3 = \int_0^T 3\left[B_t^2 + T - t\right] dB_t.$$
Notice that $E(B_T^3) = 0$. Then $h_t = 3\left[B_t^2 + T - t\right]$.
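The representation of Example 4.1 can be verified pathwise: the Itô sums of $h_t = 3(B_t^2 + T - t)$ should reproduce $B_T^3$ up to discretization error, and the average of the residual should vanish since $E(Z) = 0$. A sketch of mine (left-point discretization):

```python
import math
import random

random.seed(6)

def representation_gap(T=1.0, n=2000):
    # B_T^3 minus the Ito sum of h_t = 3(B_t^2 + T - t) against dB
    dt = T / n
    b = integral = 0.0
    for j in range(n):
        t = j * dt
        db = random.gauss(0.0, math.sqrt(dt))
        integral += 3.0 * (b * b + T - t) * db
        b += db
    return b ** 3 - integral

gaps = [representation_gap() for _ in range(300)]
mean_gap = sum(gaps) / len(gaps)
print(mean_gap)  # near 0, consistent with E(B_T^3) = 0 and (4.2)
```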
4.3 Girsanov's theorem

It is well known that if $X$ is a multidimensional Gaussian random variable, any affine transformation maps $X$ into a multidimensional Gaussian random variable as well. The simplest version of Girsanov's theorem extends this result to a Brownian motion. Before giving the precise statement and the proof, let us introduce some preliminaries.

Lemma 4.2 Let $L$ be a positive random variable such that $E(L) = 1$. Set
$$Q(A) = E(\mathbf{1}_A L), \quad A \in \mathcal{F}. \qquad (4.4)$$
Then, $Q$ defines a probability on $\mathcal{F}$, equivalent to $P$, with density given by $L$. Reciprocally, if $P$ and $Q$ are two equivalent probabilities on $\mathcal{F}$, then there exists a positive random variable $L$ such that $E(L) = 1$ and (4.4) holds.

Proof: It is clear that $Q$ defines a $\sigma$-additive function on $\mathcal{F}$. Moreover, since
$$Q(\Omega) = E(\mathbf{1}_\Omega L) = E(L) = 1,$$
$Q$ is indeed a probability.

Let $A \in \mathcal{F}$ be such that $Q(A) = 0$. Since $L > 0$ a.s., we must have $P(A) = 0$. Reciprocally, for any $A \in \mathcal{F}$ with $P(A) = 0$, we have $Q(A) = 0$ as well.

The second assertion of the lemma is the Radon-Nikodym theorem. $\square$
If we denote by $E_Q$ the expectation operator with respect to the probability $Q$ defined before, one has
$$E_Q(X) = E(XL).$$
Indeed, this formula is easily checked for simple random variables and then extended to any random variable $X \in L^1(Q)$ by the usual approximation argument.

Consider now a Brownian motion $\{B_t, t \in [0,T]\}$. Fix $\lambda \in \mathbb{R}$ and let
$$L_t = \exp\left(-\lambda B_t - \frac{\lambda^2}{2}t\right). \qquad (4.5)$$
Itô's formula yields
$$L_t = 1 - \lambda\int_0^t L_s\,dB_s.$$
Hence, the process $\{L_t, t \in [0,T]\}$ is a positive martingale and $E(L_t) = 1$, for any $t \in [0,T]$. Set
$$Q(A) = E(\mathbf{1}_A L_T), \quad A \in \mathcal{F}_T. \qquad (4.6)$$
By Lemma 4.2, the probability $Q$ is equivalent to $P$ on the $\sigma$-field $\mathcal{F}_T$. By the martingale property of $\{L_t, t \in [0,T]\}$, the same conclusion is true on $\mathcal{F}_t$, for any $t \in [0,T]$. Indeed, let $A \in \mathcal{F}_t$; then
$$Q(A) = E(\mathbf{1}_A L_T) = E(E(\mathbf{1}_A L_T \mid \mathcal{F}_t)) = E(\mathbf{1}_A E(L_T \mid \mathcal{F}_t)) = E(\mathbf{1}_A L_t).$$
Next, we give a technical result.

Lemma 4.3 Let $X$ be a random variable and let $\mathcal{G}$ be a sub-$\sigma$-field of $\mathcal{F}$ such that
$$E\left(e^{iuX} \mid \mathcal{G}\right) = e^{-\frac{u^2\sigma^2}{2}}.$$
Then, the random variable $X$ is independent of the $\sigma$-field $\mathcal{G}$ and its probability law is Gaussian, with zero mean and variance $\sigma^2$.

Proof: By the definition of the conditional expectation, for any $A \in \mathcal{G}$,
$$E\left(\mathbf{1}_A e^{iuX}\right) = P(A)\, e^{-\frac{u^2\sigma^2}{2}}.$$
In particular, for $A := \Omega$, we see that the characteristic function of $X$ is that of a $N(0, \sigma^2)$ distribution. This proves the last assertion. Moreover, for any $A \in \mathcal{G}$ with $P(A) > 0$,
$$E_A\left(e^{iuX}\right) = e^{-\frac{u^2\sigma^2}{2}},$$
saying that the law of $X$, conditionally to $A$, is also $N(0, \sigma^2)$. Thus,
$$P\left(\{X \le x\} \cap A\right) = P(A)\, P_A(X \le x) = P(A)\, P(X \le x),$$
yielding the independence of $X$ and $\mathcal{G}$. $\square$
Theorem 4.3 (Girsanov's theorem) Let $\lambda \in \mathbb{R}$ and set $W_t = B_t + \lambda t$. On the probability space $(\Omega, \mathcal{F}_T, Q)$, with $Q$ given in (4.6), the process $\{W_t, t \in [0,T]\}$ is a standard Brownian motion.

Proof: We will check that, on the probability space $(\Omega, \mathcal{F}_T, Q)$, any increment $W_t - W_s$, $0 \le s < t \le T$, is independent of $\mathcal{F}_s$ and has the $N(0, t-s)$ distribution. That is, for any $A \in \mathcal{F}_s$,
$$E_Q\left(e^{iu(W_t - W_s)}\mathbf{1}_A\right) = Q(A)\, e^{-\frac{u^2}{2}(t-s)}.$$
The conclusion will follow from Lemma 4.3.Indeed, writing
Lt = exp
(Bt Bs)
2
2(t s)
exp
Bs
2
2s
,
we have
$$E_Q\left(e^{iu(W_t - W_s)}\mathbf{1}_A\right) = E\left(\mathbf{1}_A e^{iu(W_t - W_s)} L_t\right) = E\left(\mathbf{1}_A e^{iu(B_t - B_s) + iu\lambda(t-s) - \lambda(B_t - B_s) - \frac{\lambda^2}{2}(t-s)} L_s\right).$$
Since $B_t - B_s$ is independent of $\mathcal{F}_s$, the last expression is equal to
$$E(\mathbf{1}_A L_s)\, E\left(e^{(iu - \lambda)(B_t - B_s)}\right) e^{iu\lambda(t-s) - \frac{\lambda^2}{2}(t-s)} = Q(A)\, e^{\frac{(iu-\lambda)^2}{2}(t-s) + iu\lambda(t-s) - \frac{\lambda^2}{2}(t-s)} = Q(A)\, e^{-\frac{u^2}{2}(t-s)}.$$
The proof is now complete.
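The theorem can be illustrated numerically: an expectation under $Q$ is a $P$-expectation weighted by $L_T$, so the $Q$-mean and $Q$-variance of $W_T = B_T + \lambda T$ should come out as $0$ and $T$. A minimal sketch with illustrative parameter values:

```python
import numpy as np

# Girsanov's theorem checked by reweighting: under Q(dw) = L_T dP,
# W_T = B_T + lam * T should be N(0, T).  We estimate the Q-mean and
# Q-second moment of W_T as P-expectations weighted by L_T.
rng = np.random.default_rng(1)
lam, T, n = 0.5, 1.0, 400_000

B_T = rng.normal(0.0, np.sqrt(T), size=n)
L_T = np.exp(-lam * B_T - 0.5 * lam**2 * T)
W_T = B_T + lam * T

q_mean = np.mean(W_T * L_T)      # E_Q[W_T], should be ~ 0
q_var = np.mean(W_T**2 * L_T)    # E_Q[W_T^2], should be ~ T
print(q_mean, q_var)
```

Without the weights $L_T$, the sample mean of $W_T$ would instead be close to $\lambda T$, which is exactly the drift that the change of measure removes.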
5 Local time of Brownian motion and
Tanaka's formula
This chapter deals with a very particular extension of Itô's formula. More precisely, we would like to have a decomposition of the positive submartingale $|B_t - x|$, for some fixed $x\in\mathbb{R}$, as in the Itô formula. Notice that the function $f(y) = |y - x|$ does not belong to $C^2(\mathbb{R})$. A natural way to proceed is to regularize the function $f$, for instance by convolution with an approximation of the identity, and then pass to the limit. Assuming that this is feasible, the question of identifying the limit involving the second order derivative remains open. This leads us to introduce a process termed the local time of $B$ at $x$, introduced by Paul Lévy.
Definition 5.1 Let $B = \{B_t,\, t\ge 0\}$ be a Brownian motion and let $x\in\mathbb{R}$. The local time of $B$ at $x$ is defined as the stochastic process
$$L(t,x) = \lim_{\varepsilon\downarrow 0} \frac{1}{2\varepsilon}\int_0^t \mathbf{1}_{(x-\varepsilon,\, x+\varepsilon)}(B_s)\, ds = \lim_{\varepsilon\downarrow 0} \frac{1}{2\varepsilon}\,\lambda\{s\in[0,t] : B_s \in (x-\varepsilon,\, x+\varepsilon)\}, \qquad (5.1)$$
where $\lambda$ denotes the Lebesgue measure on $\mathbb{R}$.
We see that $L(t,x)$ measures the time spent by the process $B$ at $x$ during a period of time of length $t$. Actually, it is the density of this occupation time.
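Definition (5.1) suggests a direct simulation: discretize a Brownian path, measure the time it spends in the window $(x-\varepsilon, x+\varepsilon)$, and divide by $2\varepsilon$. As a sanity check, Tanaka's formula of Theorem 5.2 gives $E\,L(t,0) = E|B_t| = \sqrt{2t/\pi}$. A sketch with illustrative grid and window sizes:

```python
import numpy as np

# Occupation-time estimate of the local time L(t, x) as in (5.1):
# L(t, x) ~ (1/(2 eps)) * Leb{s in [0,t] : B_s in (x-eps, x+eps)},
# averaged over simulated Brownian paths.  For x = 0,
# E L(t, 0) = E|B_t| = sqrt(2 t / pi) serves as a reference value.
rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 4000, 2000
eps, x = 0.05, 0.0
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
occupation = (np.abs(B - x) < eps).sum(axis=1) * dt  # time in the window
L_est = occupation / (2 * eps)

mean_L = L_est.mean()
target = np.sqrt(2 * t / np.pi)
print(mean_L, target)
```

Both the window width $\varepsilon$ and the time step contribute bias, so the agreement is approximate; it improves as $\varepsilon\downarrow 0$ with an accordingly finer grid.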
We shall see later that the above limit exists in $L^2$ (it also exists a.s.), a fact that is not obvious at all. Local time enters naturally in the extension of the Itô formula we alluded to before. In fact, we have the following result.
Theorem 5.1 For any $t\ge 0$ and $x\in\mathbb{R}$, a.s.,
$$(B_t - x)^+ = (B_0 - x)^+ + \int_0^t \mathbf{1}_{[x,\infty)}(B_s)\, dB_s + \frac{1}{2}L(t,x), \qquad (5.2)$$
where $L(t,x)$ is given by (5.1), the limit being in the $L^2$ sense.
Proof: The heuristics of formula (5.2) is the following. In the sense of distributions, $f(y) = (y-x)^+$ has as first and second order derivatives $f'(y) = \mathbf{1}_{[x,\infty)}(y)$ and $f''(y) = \delta_x(y)$, respectively, where $\delta_x$ denotes the Dirac delta measure. Hence we expect a formula like
$$(B_t - x)^+ = (B_0 - x)^+ + \int_0^t \mathbf{1}_{[x,\infty)}(B_s)\, dB_s + \frac{1}{2}\int_0^t \delta_x(B_s)\, ds.$$
However, we have to give a meaning to the last integral.
Approximation procedure
We are going to approximate the function $f(y) = (y-x)^+$. For this, we fix $\varepsilon > 0$ and define
$$f^x_\varepsilon(y) = \begin{cases} 0, & \text{if } y \le x-\varepsilon,\\[2pt] \dfrac{(y-x+\varepsilon)^2}{4\varepsilon}, & \text{if } x-\varepsilon \le y \le x+\varepsilon,\\[2pt] y - x, & \text{if } y \ge x+\varepsilon,\end{cases}$$
which clearly has as derivatives
$$(f^x_\varepsilon)'(y) = \begin{cases} 0, & \text{if } y \le x-\varepsilon,\\[2pt] \dfrac{y-x+\varepsilon}{2\varepsilon}, & \text{if } x-\varepsilon \le y \le x+\varepsilon,\\[2pt] 1, & \text{if } y \ge x+\varepsilon,\end{cases}$$
and
$$(f^x_\varepsilon)''(y) = \begin{cases} 0, & \text{if } y < x-\varepsilon,\\[2pt] \dfrac{1}{2\varepsilon}, & \text{if } x-\varepsilon < y < x+\varepsilon,\\[2pt] 0, & \text{if } y > x+\varepsilon.\end{cases}$$
Let $\psi_n$, $n\ge 1$, be a sequence of $C^\infty$ functions with compact supports decreasing to $\{0\}$. For instance, we may consider the function
$$\psi(y) = c\,\exp\left(-(1-y^2)^{-1}\right)\mathbf{1}_{\{|y|<1\}},$$
with $c$ chosen so that $\int_{\mathbb{R}}\psi = 1$, and set $\psi_n(y) = n\,\psi(ny)$. Let $g_n = (f^x_\varepsilon)' * \psi_n$. Applying Itô's formula to the regularization $f^x_\varepsilon * \psi_n$ yields
$$\big(f^x_\varepsilon * \psi_n\big)(B_t) = \big(f^x_\varepsilon * \psi_n\big)(B_0) + \int_0^t g_n(B_s)\, dB_s + \frac{1}{2}\int_0^t g_n'(B_s)\, ds. \qquad (5.3)$$
Convergence of the terms in (5.3) as $n\to\infty$

The function $(f^x_\varepsilon)'$ is bounded. The function $g_n$ is also bounded. Indeed,
$$|g_n(y)| = \left|\int_{\mathbb{R}} (f^x_\varepsilon)'(y-z)\,\psi_n(z)\, dz\right| = \left|\int_{-\frac{1}{n}}^{\frac{1}{n}} (f^x_\varepsilon)'(y-z)\,\psi_n(z)\, dz\right| \le 2\,\big\|(f^x_\varepsilon)'\big\|_\infty.$$
Moreover,
$$\sup_{s\in[0,t]}\left|g_n(B_s) - (f^x_\varepsilon)'(B_s)\right| \longrightarrow 0,$$
uniformly in $t$ and in $\omega$. Hence, by bounded convergence,
$$E\int_0^t \left|g_n(B_s) - (f^x_\varepsilon)'(B_s)\right|^2 ds \longrightarrow 0.$$
Then, the isometry property of the stochastic integral implies
$$E\left(\int_0^t \left[g_n(B_s) - (f^x_\varepsilon)'(B_s)\right] dB_s\right)^2 \longrightarrow 0,$$
as $n\to\infty$.
We next deal with the second order term. Since the law of each $B_s$ has a density, for each $s > 0$,
$$P\{B_s = x+\varepsilon\} = P\{B_s = x-\varepsilon\} = 0.$$
Thus, for any $s > 0$,
$$\lim_{n\to\infty} g_n'(B_s) = (f^x_\varepsilon)''(B_s),$$
a.s. Using Fubini's theorem, we see that this convergence also holds, for almost every $s$, a.s. In fact,
$$\int_0^t ds \int_\Omega dP\, \mathbf{1}_{\{(f^x_\varepsilon)''(B_s) \neq \lim_n g_n'(B_s)\}} = \int_\Omega dP \int_0^t ds\, \mathbf{1}_{\{(f^x_\varepsilon)''(B_s) \neq \lim_n g_n'(B_s)\}} = 0.$$
We have
$$\sup_{y\in\mathbb{R}} |g_n'(y)| \le \frac{1}{2\varepsilon}.$$
Indeed,
$$|g_n'(y)| = \frac{1}{2\varepsilon}\left|\int_{\mathbb{R}} \psi_n(z)\,\mathbf{1}_{(x-\varepsilon,\, x+\varepsilon)}(y-z)\, dz\right| \le \frac{1}{2\varepsilon}\int_{y-x-\varepsilon}^{y-x+\varepsilon} |\psi_n(z)|\, dz \le \frac{1}{2\varepsilon}.$$
Then, by bounded convergence,
$$\int_0^t g_n'(B_s)\, ds \longrightarrow \int_0^t (f^x_\varepsilon)''(B_s)\, ds,$$
a.s. and in $L^2$. Thus, passing to the limit in (5.3) yields
$$f^x_\varepsilon(B_t) = f^x_\varepsilon(B_0) + \int_0^t (f^x_\varepsilon)'(B_s)\, dB_s + \frac{1}{2}\int_0^t \frac{1}{2\varepsilon}\mathbf{1}_{(x-\varepsilon,\, x+\varepsilon)}(B_s)\, ds. \qquad (5.4)$$
Convergence of (5.4) as $\varepsilon\downarrow 0$
Since $f^x_\varepsilon(y) \to (y-x)^+$ as $\varepsilon\downarrow 0$ and
$$\left|f^x_\varepsilon(B_t) - f^x_\varepsilon(B_0)\right| \le |B_t - B_0|,$$
we have
$$f^x_\varepsilon(B_t) - f^x_\varepsilon(B_0) \longrightarrow (B_t - x)^+ - (B_0 - x)^+,$$
in $L^2$. Moreover,
$$E\int_0^t \left|(f^x_\varepsilon)'(B_s) - \mathbf{1}_{[x,\infty)}(B_s)\right|^2 ds \le E\int_0^t \mathbf{1}_{(x-\varepsilon,\, x+\varepsilon)}(B_s)\, ds \le \int_0^t \frac{2\varepsilon}{\sqrt{2\pi s}}\, ds,$$
which clearly tends to zero as $\varepsilon\downarrow 0$. Hence, by the isometry property of the stochastic integral,
$$\int_0^t (f^x_\varepsilon)'(B_s)\, dB_s \longrightarrow \int_0^t \mathbf{1}_{[x,\infty)}(B_s)\, dB_s,$$
in $L^2$. Consequently, we have proved that
$$\int_0^t \frac{1}{2\varepsilon}\mathbf{1}_{(x-\varepsilon,\, x+\varepsilon)}(B_s)\, ds$$
converges in $L^2$ as $\varepsilon\downarrow 0$ and that formula (5.2) holds.
We give without proof two further properties of local time.

1. The property of local time as a density of the occupation measure is made clear by the following identity, valid for any $t\ge 0$ and every $a\le b$:
$$\int_a^b L(t,x)\, dx = \int_0^t \mathbf{1}_{(a,b)}(B_s)\, ds.$$
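This identity can be checked path by path: tiling $(a,b)$ with disjoint windows of width $2\varepsilon$, the sum of the finite-$\varepsilon$ local-time estimates, times $2\varepsilon$, recovers the discretized time spent in $(a,b)$. A sketch with illustrative grid values:

```python
import numpy as np

# Check of the occupation-density identity on one simulated path:
# sum over x of the finite-eps local-time estimates, times 2*eps,
# equals the time the discretized path spends in (a, b).
rng = np.random.default_rng(3)
t, n_steps = 1.0, 20000
dt = t / n_steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n_steps))

a, b, eps = -0.5, 0.5, 0.02
xs = np.arange(a, b, 2 * eps) + eps  # centers of disjoint windows tiling (a, b)
L_grid = np.array([(np.abs(B - x) < eps).sum() * dt / (2 * eps) for x in xs])

lhs = L_grid.sum() * (2 * eps)            # ~ integral over [a,b] of L(t, x) dx
rhs = ((B > a) & (B < b)).sum() * dt      # ~ time spent in (a, b)
print(lhs, rhs)
```

Because the windows are disjoint and tile $(a,b)$ up to a null set, the two sides agree up to floating-point effects on the fixed grid.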
2. The stochastic integral $\int_0^t \mathbf{1}_{[x,\infty)}(B_s)\, dB_s$ has a jointly continuous version in $(t,x)\in(0,\infty)\times\mathbb{R}$. Hence, by (5.2), so does the local time $\{L(t,x),\, (t,x)\in(0,\infty)\times\mathbb{R}\}$.
The next result, which follows easily from Theorem 5.1, is known as Tanaka's formula.
Theorem 5.2 For any $(t,x)\in[0,\infty)\times\mathbb{R}$, we have
$$|B_t - x| = |B_0 - x| + \int_0^t \operatorname{sign}(B_s - x)\, dB_s + L(t,x). \qquad (5.5)$$
Proof: We will use the following relations: $|x| = x^+ + x^-$, $x^- = \max(-x, 0) = (-x)^+$. Hence, by virtue of (5.2), we only need a formula for $(-B_t + x)^+$. Notice that we already have it, since the process $-B$ is also a Brownian motion. More precisely,
$$(-B_t + x)^+ = (-B_0 + x)^+ + \int_0^t \mathbf{1}_{[-x,\infty)}(-B_s)\, d(-B_s) + \frac{1}{2}\tilde{L}(t,-x),$$
where we have denoted by $\tilde{L}(t,-x)$ the local time of $-B$ at $-x$. We have the following facts:
$$\int_0^t \mathbf{1}_{[-x,\infty)}(-B_s)\, d(-B_s) = -\int_0^t \mathbf{1}_{(-\infty,x]}(B_s)\, dB_s,$$
$$\tilde{L}(t,-x) = \lim_{\varepsilon\downarrow 0}\frac{1}{2\varepsilon}\int_0^t \mathbf{1}_{(-x-\varepsilon,\,-x+\varepsilon)}(-B_s)\, ds = \lim_{\varepsilon\downarrow 0}\frac{1}{2\varepsilon}\int_0^t \mathbf{1}_{(x-\varepsilon,\, x+\varepsilon)}(B_s)\, ds = L(t,x),$$
where the limit is in $L^2(\Omega)$. Thus, we have proved
$$(B_t - x)^- = (B_0 - x)^- - \int_0^t \mathbf{1}_{(-\infty,x]}(B_s)\, dB_s + \frac{1}{2}L(t,x). \qquad (5.6)$$
Adding up (5.2) and (5.6) yields (5.5). Indeed,
$$\mathbf{1}_{[x,\infty)}(B_s) - \mathbf{1}_{(-\infty,x]}(B_s) = \begin{cases} 1, & \text{if } B_s > x,\\ -1, & \text{if } B_s < x,\\ 0, & \text{if } B_s = x,\end{cases}$$
which is identical to $\operatorname{sign}(B_s - x)$.
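Tanaka's formula lends itself to a numerical check: on each simulated path, $|B_t|$ should be close to a left-point discretization of $\int_0^t \operatorname{sign}(B_s)\,dB_s$ plus an occupation-time estimate of $L(t,0)$. A sketch for $x = 0$ with illustrative parameters; both the time step and the window width contribute errors, so the pathwise residual is only small compared to the typical size $E|B_t|\approx 0.8$.

```python
import numpy as np

# Numerical sanity check of Tanaka's formula (5.5) at x = 0 (B_0 = 0):
# |B_t| ~ sum_k sign(B_{t_k}) dB_k + (occupation-time estimate of L(t,0)).
rng = np.random.default_rng(4)
t, n_steps, n_paths, eps = 1.0, 20000, 200, 0.02
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_prev = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # left endpoints

stoch_int = (np.sign(B_prev) * dB).sum(axis=1)           # ~ int sign(B_s) dB_s
local_time = (np.abs(B) < eps).sum(axis=1) * dt / (2 * eps)
lhs = np.abs(B[:, -1])                                   # |B_t|

rmse = np.sqrt(np.mean((lhs - stoch_int - local_time) ** 2))
print(rmse)
```

Note that dropping the local-time term leaves a systematic gap of mean $E\,L(t,0)\approx 0.8$: the stochastic integral alone does not account for $|B_t|$, which is precisely the content of (5.5).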
6 Stochastic differential equations
In this section we shall introduce stochastic differential equations driven by a multi-dimensional Brownian motion. Under suitable properties of the coefficients, we shall prove a result on existence and uniqueness of the solution. Then we shall establish properties of the solution, like the existence of moments of any order and the Hölder property of the sample paths.
The setting
We consider a $d$-dimensional Brownian motion $B = \{B_t = (B^1_t, \ldots, B^d_t),\, t\ge 0\}$, $B_0 = 0$, defined on a probability space $(\Omega, \mathcal{F}, P)$, along with a filtration $(\mathcal{F}_t,\, t\ge 0)$ satisfying the following properties:
1. $B$ is adapted to $(\mathcal{F}_t,\, t\ge 0)$;
2. the $\sigma$-field generated by $\{B_u - B_t,\, u\ge t\}$ is independent of $\mathcal{F}_t$, for every $t\ge 0$.
We also consider functions
$$b : [0,\infty)\times\mathbb{R}^m \to \mathbb{R}^m, \qquad \sigma : [0,\infty)\times\mathbb{R}^m \to L(\mathbb{R}^d;\mathbb{R}^m).$$
When necessary we will use the description
$$b(t,x) = \left(b^i(t,x)\right)_{1\le i\le m}, \qquad \sigma(t,x) = \left(\sigma^i_j(t,x)\right)_{1\le i\le m,\, 1\le j\le d}.$$
By a stochastic differential equation, we mean an expression of the form
$$dX_t = \sigma(t, X_t)\, dB_t + b(t, X_t)\, dt,\quad t\in(0,\infty),\qquad X_0 = x, \qquad (6.1)$$
where $x$ is an $m$-dimensional random vector independent of the Brownian motion. We can also consider any time value $u\ge 0$ as the initial one. In this case, we must write $t\in(u,\infty)$ and $X_u = x$ in (6.1). For the sake of simplicity, we will assume here that $x$ is deterministic.
The formal expression (6.1) has to be understood as follows:
$$X_t = x + \int_0^t \sigma(s, X_s)\, dB_s + \int_0^t b(s, X_s)\, ds, \qquad (6.2)$$
or coordinate-wise,
$$X^i_t = x^i + \sum_{j=1}^d \int_0^t \sigma^i_j(s, X_s)\, dB^j_s + \int_0^t b^i(s, X_s)\, ds,$$
$i = 1, \ldots, m$.
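A path of (6.2) can be approximated on a time grid by the Euler scheme, the subject of Chapter 7: at each step, both integrals are advanced with left-point evaluations of $\sigma$ and $b$. A minimal sketch for general coefficients; the linear coefficients at the bottom, with $m = d = 2$, are illustrative choices, not from the text.

```python
import numpy as np

# Euler scheme for the coordinate-wise equation above:
# X_{k+1} = X_k + sigma(t_k, X_k) dB_k + b(t_k, X_k) dt,
# where dB_k is an increment of a d-dimensional Brownian motion.
def euler_scheme(b, sigma, x0, T, n_steps, d, rng):
    dt = T / n_steps
    X = np.array(x0, dtype=float)
    path = [X.copy()]
    for k in range(n_steps):
        t = k * dt
        dB = rng.normal(0.0, np.sqrt(dt), size=d)  # d-dim Brownian increment
        X = X + sigma(t, X) @ dB + b(t, X) * dt
        path.append(X.copy())
    return np.array(path)

# Illustrative linear coefficients with m = 2, d = 2.
b = lambda t, x: -0.5 * x              # drift b(t, x)
sigma = lambda t, x: 0.3 * np.eye(2)   # diffusion sigma(t, x)

rng = np.random.default_rng(5)
path = euler_scheme(b, sigma, x0=[1.0, -1.0], T=1.0, n_steps=1000, d=2, rng=rng)
print(path.shape)
```

Under the hypotheses (H) below the scheme converges to the strong solution as the grid is refined; Chapter 7 studies this convergence.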
Strong existence and path-wise uniqueness

We now give the notions of existence and uniqueness of solution that will be considered throughout this chapter.

Definition 6.1 An $m$-dimensional, measurable, $\mathcal{F}_t$-adapted stochastic process $(X_t,\, t\ge 0)$ is a strong solution to (6.2) if the following conditions are satisfied:
1. The processes $(\sigma^i_j(s, X_s),\, s\ge 0)$ belong to $L^2_{a,\infty}$, for any $1\le i\le m$, $1\le j\le d$.
2. The processes $(b^i(s, X_s),\, s\ge 0)$ belong to $L^1_{a,\infty}$, for any $1\le i\le m$.
3. Equation (6.2) holds true for the fixed Brownian motion defined before.

Definition 6.2 The equation (6.2) has a path-wise unique solution if any two strong solutions $X^1$ and $X^2$ in the sense of the previous definition are indistinguishable, that is,
$$P\{X^1_t = X^2_t,\ \text{for any } t\ge 0\} = 1.$$
Hypotheses on the coefficients

We shall refer to (H) for the following set of hypotheses:
1. Linear growth:
$$\sup_t\, \big[|b(t,x)| + |\sigma(t,x)|\big] \le L(1 + |x|). \qquad (6.3)$$
2. Lipschitz in the $x$ variable, uniformly in $t$:
$$\sup_t\, \big[|b(t,x) - b(t,y)| + |\sigma(t,x) - \sigma(t,y)|\big] \le L|x - y|. \qquad (6.4)$$
In (6.3), (6.4), $L$ stands for a positive constant.
6.1 Examples of stochastic differential equations
When the functions $\sigma$ and $b$ have a linear structure, the solution to (6.2) admits an explicit form. This is not surprising, as it is indeed the case for ordinary differential equations. We deal with this question in this section. More precisely, suppose that
$$\sigma(t,x) = \sigma(t) + F(t)x, \qquad (6.5)$$
$$b(t,x) = c(t) + D(t)x. \qquad (6.6)$$
Example 1 Assume for simplicity $d = m = 1$, $\sigma(t,x) = \sigma(t)$, $b(t,x) = c(t) + Dx$, $t\ge 0$, with $D\in\mathbb{R}$. Now equation (6.2) reads
$$X_t = X_0 + \int_0^t \sigma(s)\, dB_s + \int_0^t [c(s) + DX_s]\, ds,$$
and has a unique solution given by
$$X_t = X_0 e^{Dt} + \int_0^t e^{D(t-s)}\left(c(s)\, ds + \sigma(s)\, dB_s\right). \qquad (6.7)$$
To check (6.7) we proceed as in the deterministic case. First we consider the equation
$$dX_t = DX_t\, dt,$$
with initial condition $X_0$, whose solution is
$$X_t = X_0 e^{Dt},\quad t\ge 0.$$
Then we use the variation of constants procedure and write
$$X_t = X_0(t)\, e^{Dt}.$$
A priori, $X_0(t)$ may be random. However, since $e^{Dt}$ is differentiable, the Itô differential of $X_t$ is given by
$$dX_t = dX_0(t)\, e^{Dt} + X_0(t)\, e^{Dt} D\, dt.$$
Equating the right-hand side of the preceding identity with
$$\sigma(t)\, dB_t + \left(c(t) + X_t D\right) dt$$
yields
$$dX_0(t)\, e^{Dt} + X_t D\, dt = \sigma(t)\, dB_t + \left(c(t) + X_t D\right) dt,$$
that is,
$$dX_0(t) = e^{-Dt}\left[\sigma(t)\, dB_t + c(t)\, dt\right].$$
In integral form,
$$X_0(t) = x + \int_0^t e^{-Ds}\left[\sigma(s)\, dB_s + c(s)\, ds\right].$$
Plugging the right-hand side of this equation into $X_t = X_0(t)e^{Dt}$ yields (6.7).
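Formula (6.7) can be checked against an Euler discretization of the equation driven by the same Brownian increments: on a fine grid the two agree closely. The constant $\sigma$, $c$ and the value of $D$ below are illustrative choices.

```python
import numpy as np

# Compare an Euler scheme for dX = sig dB + (c + D X) dt with the
# closed form (6.7), both driven by the same Brownian increments.
rng = np.random.default_rng(6)
T, n, D, sig, c, x0 = 1.0, 20000, -1.0, 0.4, 0.2, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)

# Euler scheme.
X = x0
for k in range(n):
    X = X + sig * dB[k] + (c + D * X) * dt

# Closed form (6.7): X_T = x0 e^{DT} + int_0^T e^{D(T-s)} (c ds + sig dB_s),
# with the integrals discretized on the same grid (left points).
s = np.arange(n) * dt
exact = x0 * np.exp(D * T) + np.sum(np.exp(D * (T - s)) * (c * dt + sig * dB))

print(X, exact)
```

The discrepancy is of the order of the step size and shrinks as the grid is refined, which is consistent with (6.7) being the exact solution.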
A particular example of the class of equations considered before is the Langevin equation:
$$dX_t = \sigma\, dB_t - \beta X_t\, dt,\quad t > 0,$$
$X_0 = x_0\in\mathbb{R}$, where $\sigma\in\mathbb{R}$ and $\beta > 0$. Here $X_t$ stands for the velocity at time $t$ of a free particle that performs a Brownian motion different from the $B_t$ in the equation. The solution to this equation is given by
$$X_t = e^{-\beta t}x_0 + \sigma\int_0^t e^{-\beta(t-s)}\, dB_s.$$
Notice that $\{X_t,\, t\ge 0\}$ defines a Gaussian process.
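Since the Wiener integral above is Gaussian, $X_t \sim N\big(e^{-\beta t}x_0,\ \sigma^2(1 - e^{-2\beta t})/(2\beta)\big)$. A quick simulation of the closed-form solution on a grid reproduces these moments; the parameter values are illustrative.

```python
import numpy as np

# Sample X_t from the closed-form Langevin solution by discretizing the
# Wiener integral, then compare empirical moments with the Gaussian law
# N(e^{-beta t} x0, sig^2 (1 - e^{-2 beta t}) / (2 beta)).
rng = np.random.default_rng(7)
beta, sig, x0, t = 1.5, 0.7, 2.0, 1.0
n_steps, n_paths = 1000, 10000
dt = t / n_steps
s = np.arange(n_steps) * dt

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X_t = np.exp(-beta * t) * x0 + sig * (np.exp(-beta * (t - s)) * dB).sum(axis=1)

mean_th = np.exp(-beta * t) * x0
var_th = sig**2 * (1 - np.exp(-2 * beta * t)) / (2 * beta)
mean_est, var_est = X_t.mean(), X_t.var()
print(mean_est, mean_th, var_est, var_th)
```

As $t\to\infty$ the law converges to the stationary $N(0, \sigma^2/(2\beta))$ distribution, visible here in the theoretical variance formula.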
6.2 A result on existence and uniqueness of solution

This section is devoted to proving the following result.

Theorem 6.1 Assume that the functions $\sigma$ and $b$ satisfy the hypotheses (H). Then there exists a path-wise unique strong solution to (6.2).

Before giving a proof of this theorem, we recall a version of Gronwall's lemma that will be used repeatedly in the sequel.

Lemma 6.1 Let $u, v : [\alpha, \beta] \to \mathbb{R}_+$ be functions such that $u$ is Lebesgue integrable and $v$ is measurable and bounded. Assume that
$$v(t) \le c + \int_\alpha^t u(s)v(s)\, ds, \qquad (6.8)$$
for some constant $c\ge 0$ and for any $t\in[\alpha,\beta]$. Then
$$v(t) \le c\, \exp\left(\int_\alpha^t u(s)\, ds\right). \qquad (6.9)$$
Proof of Theorem 6.1

Let us introduce the Picard iteration scheme
$$X^0_t = x,$$
$$X^n_t = x + \int_0^t \sigma(s, X^{n-1}_s)\, dB_s + \int_0^t b(s, X^{n-1}_s)\, ds,\quad n\ge 1, \qquad (6.10)$$
$t\ge 0$. Let us restrict the time interval to $[0, T]$, with $T >$