
3. Martingales I.

Let us start with a sequence $\{X_i\}$ of independent random variables with $E[X_i] = 0$ and $E[X_i^2] = 1$. We saw earlier that for a sequence $\{a_j\}$ of constants

$$S = \sum_{i=1}^{\infty} a_i X_i$$

will converge with probability 1 and in mean square provided

$$\sum_j a_j^2 < \infty.$$

We shall now see that the coefficients can actually be functions $a_j(X_1, X_2, \dots, X_{j-1})$ of $X_1, \dots, X_{j-1}$. If they satisfy

$$\sum_j E[a_j(X_1, \dots, X_{j-1})^2] < \infty,$$

then the series

$$S = \sum_{j=1}^{\infty} a_j(X_1, X_2, \dots, X_{j-1})\, X_j$$

will converge. For the present, let us show the convergence of

$$S_n = \sum_{j=1}^{n} a_j(X_1, X_2, \dots, X_{j-1})\, X_j$$

to $S$ in $L^2(P)$. This requires us to show that

$$\lim_{n, m \to \infty} E[|S_n - S_m|^2] = 0.$$

A straightforward computation of

$$E\Bigg[\Big(\sum_{j=m+1}^{n} a_j(X_1, \dots, X_{j-1})\, X_j\Big)^2\Bigg]$$

shows that the off-diagonal terms are 0: if $i \ne j$, either $X_i$ or $X_j$ sticks out and makes the expectation 0. On the other hand, if $i = j$,

$$E[a_i^2(X_1, \dots, X_{i-1})\, X_i^2] = E[a_i(X_1, \dots, X_{i-1})^2],$$

resulting, for $n > m$, in

$$E[|S_n - S_m|^2] = \sum_{j=m+1}^{n} E[a_j(X_1, \dots, X_{j-1})^2],$$


proving the convergence of $S_n$ in $L^2(P)$.
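To spell out the last step: since $\sum_j E[a_j(X_1,\dots,X_{j-1})^2] < \infty$ by assumption, the right side above is the tail of a convergent series,

$$E[|S_n - S_m|^2] = \sum_{j=m+1}^{n} E[a_j(X_1,\dots,X_{j-1})^2] \le \sum_{j > m} E[a_j(X_1,\dots,X_{j-1})^2] \longrightarrow 0 \quad\text{as } m \to \infty,$$

so $\{S_n\}$ is Cauchy in $L^2(P)$ and the limit $S$ exists.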

Actually one does not even need independence of the $X_j$. If

$$E[X_j \mid X_1, \dots, X_{j-1}] = 0$$

and

$$E[X_j^2 \mid X_1, \dots, X_{j-1}] = \sigma_j^2(X_1, \dots, X_{j-1}),$$

then $E[S_n] = 0$ and

$$E[|S_n - S_m|^2] = \sum_{j=m+1}^{n} E\big[a_j(X_1, \dots, X_{j-1})^2\, \sigma_j^2(X_1, \dots, X_{j-1})\big],$$

and the convergence of

$$\sum_{j=1}^{\infty} E\big[a_j(X_1, \dots, X_{j-1})^2\, \sigma_j^2(X_1, \dots, X_{j-1})\big]$$

implies the existence of the limit $S$ of $S_n$ in $L^2(P)$.
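The computation behind this is the same as before, with conditioning replacing independence: for $i < j$ the factor $X_j$ sticks out, and

$$E\big[a_i(X_1,\dots,X_{i-1}) X_i\, a_j(X_1,\dots,X_{j-1}) X_j\big] = E\big[a_i(X_1,\dots,X_{i-1}) X_i\, a_j(X_1,\dots,X_{j-1})\, E[X_j \mid X_1,\dots,X_{j-1}]\big] = 0,$$

while on the diagonal $E[a_j^2 X_j^2] = E\big[a_j(X_1,\dots,X_{j-1})^2\, \sigma_j^2(X_1,\dots,X_{j-1})\big]$.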

All of this leads to the following definition. We have a probability space $(\Omega, \mathcal{F}, P)$ and a family of sub $\sigma$-fields $\mathcal{F}_i$ with $\mathcal{F}_{i-1} \subset \mathcal{F}_i$ for every $i$. A sequence $X_i(\omega)$ of random variables is called a (square integrable) martingale with respect to $(\Omega, \mathcal{F}_i, P)$ if

1) $X_i$ is $\mathcal{F}_i$ measurable for every $i \ge 0$.

2) For every $i \ge 1$, $E[X_i - X_{i-1} \mid \mathcal{F}_{i-1}] = 0$.

3) $E[X_i^2] < \infty$ for $i \ge 0$.
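A minimal example, in the setting we started from: take independent $X_i$ with $E[X_i] = 0$ and $E[X_i^2] = 1$, let $\mathcal{F}_i = \sigma(X_1, \dots, X_i)$ (with $\mathcal{F}_0$ trivial), and set

$$M_0 = 0, \qquad M_i = X_1 + \dots + X_i.$$

Then $M_i$ is $\mathcal{F}_i$ measurable, $E[M_i - M_{i-1} \mid \mathcal{F}_{i-1}] = E[X_i \mid \mathcal{F}_{i-1}] = E[X_i] = 0$ by independence, and $E[M_i^2] = i < \infty$, so $\{M_i\}$ is a square integrable martingale.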

If we write $\sigma_i^2(\omega) = E[(X_i - X_{i-1})^2 \mid \mathcal{F}_{i-1}]$, then $E[(X_i - X_{i-1})^2] = E[\sigma_i^2(\omega)] < \infty$. If $X_i$ is a (square integrable) martingale, then $\xi_i = X_i - X_{i-1}$ is called a (square integrable) martingale difference. It has the properties

1) $\xi_i$ is $\mathcal{F}_i$ measurable for every $i \ge 1$.

2) For every $i \ge 1$, $E[\xi_i \mid \mathcal{F}_{i-1}] = 0$.

3) $E[\xi_i^2] < \infty$ for $i \ge 1$.

$X_0$ is often 0 or a constant. It does not have to be. It is $\mathcal{F}_0$ measurable and, for $n \ge 1$,

$$X_n = X_0 + \sum_{i=1}^{n} \xi_i.$$

It is now an easy calculation to conclude that

$$E[X_n^2] = E[X_0^2] + \sum_{j=1}^{n} E[\xi_j^2].$$


If we write $\sigma_i^2(\omega) = E[\xi_i^2 \mid \mathcal{F}_{i-1}]$, then $E[\xi_i^2] = E[\sigma_i^2(\omega)] < \infty$. If we take a sequence of bounded functions $a_i(\omega)$ that are $\mathcal{F}_{i-1}$ measurable, then we can define

$$Y_n = \sum_{i=1}^{n} a_i(\omega)\, \xi_i.$$

It is easy to see that $Y_n$ is again a martingale and $E[(Y_i - Y_{i-1})^2 \mid \mathcal{F}_{i-1}] = a_i^2(\omega)\sigma_i^2(\omega)$. One just needs to note that if $\xi_i$ is a sequence of martingale differences and $a_i$ is $\mathcal{F}_{i-1}$ measurable, then $\eta_i = a_i \xi_i$ is again a martingale difference and $E[\eta_i^2 \mid \mathcal{F}_{i-1}] = a_i^2(\omega)\sigma_i^2(\omega)$. $Y_n$ is called a martingale transform of $X_n$ and one writes

$$(Y_{n+1} - Y_n) = a_n (X_{n+1} - X_n)$$

or

$$\nabla Y_n = a_n \nabla X_n.$$

Notice that in the definition of the increments $\nabla X_n$ and $\nabla Y_n$ the increments stick out. This is important because otherwise the cross terms in the calculation of the expectation of the square of the sum would not equal 0. Martingale transforms generate new martingales from a given martingale. There are obvious identities:

$$\nabla Y_n = a_n \nabla X_n, \quad \nabla Z_n = b_n \nabla Y_n \;\Rightarrow\; \nabla Z_n = a_n b_n \nabla X_n,$$

leading to the inversion rule

$$\nabla Y_n = a_n \nabla X_n \;\Rightarrow\; \nabla X_n = b_n \nabla Y_n$$

with $b_n = [a_n]^{-1}$.

One should think of the martingale differences $\xi_i$ as the possible returns on a standard bet of one unit at the $i$-th round of a game. There is some randomness. The game is "fair" if the expected return is 0 no matter what the past history through $i-1$ rounds has been. $\mathcal{F}_{i-1}$ represents the historical information through $i-1$ rounds. One should think of $a_i$ as the leverage, with negative values representing short positions or betting in the opposite direction. The dependence on $\omega$ is the strategy, which can only be based on information available through the previous round. No matter what the strategy is for the leverage, the game remains "fair". You cannot find a winning (or losing) strategy in a "fair" game.
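One can also see this numerically. The following sketch (the doubling-after-a-loss strategy, the sample sizes, and the name simulate_transform are ad hoc choices made purely for illustration, not part of the development) transforms fair $\pm 1$ bets by a leverage $a_i$ computed from the past rounds only; the empirical mean of $Y_n$ stays near 0 no matter what predictable rule is used.

    import random

    def simulate_transform(n_rounds=20, n_paths=100_000, seed=0):
        # Monte Carlo sketch: xi_i are i.i.d. fair +/-1 bets (martingale
        # differences) and the leverage a_i is computed from xi_1,...,xi_{i-1}
        # only (here: double the stake after a loss, an arbitrary predictable
        # rule chosen for illustration).
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_paths):
            y, prev = 0.0, 1
            for _ in range(n_rounds):
                a = 2.0 if prev < 0 else 1.0       # depends only on the past
                xi = 1 if rng.random() < 0.5 else -1
                y += a * xi                        # Y_n = sum_i a_i(omega) xi_i
                prev = xi
            total += y
        return total / n_paths                     # empirical E[Y_n], near 0

    print(simulate_transform())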

Stopping Times. A special set of choices for $a_i(\omega)$ is a decision about when to stop. Here $\tau(\omega)$ is a non-negative integer valued function on $\Omega$ and $a_i(\omega)$ is defined by

$$a_i(\omega) = \begin{cases} 1 & \text{for } i \le \tau(\omega) \\ 0 & \text{for } i > \tau(\omega). \end{cases}$$

The condition that $a_i(\omega)$ is $\mathcal{F}_{i-1}$ measurable leads to, for $i \ge 1$,

$$\{\omega : \tau(\omega) \le i\} \in \mathcal{F}_i.$$


You cannot decide to "stop" after looking into the "future". In particular, quitting while ahead cannot be a winning strategy.

A paradox. What if, in a game of even odds, you double your bet every time you lose? If there is any chance of winning at all, sooner or later you will win a round, and you will exit with exactly one dollar in winnings. This is indeed a stopping time. But it requires the ability to play as many rounds as it takes. Even in a fair game you can have a long run of bad luck that you cannot really afford: you won't live that long, or before that you will exceed your credit limit. If you put a limit on the number of games to be played, say $N$, then with probability $2^{-N}$ you will have a big loss of $2^N - 1$ dollars and with probability $1 - 2^{-N}$ a gain of one dollar. Technically, in order to show that the game remains "fair", i.e. $E[X_\tau] = E[X_0]$, one has to assume that the stopping time $\tau$ is bounded, i.e. $\tau(\omega) \le T$ for some non-random $T$. There is a maximum number of rounds by which one has to stop.
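The arithmetic for the truncated game indeed comes out even:

$$-(2^N - 1)\cdot 2^{-N} + 1\cdot(1 - 2^{-N}) = -1 + 2^{-N} + 1 - 2^{-N} = 0,$$

so even the doubling strategy leaves the expected gain at 0.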

With this provision a proof can be given. Let $a_i(\omega) = 1$ if $\tau(\omega) \ge i$ and $0$ otherwise. The martingale transform

$$\nabla Y_n = a_n \nabla X_n$$

yields

$$Y_n = X_\tau \quad \text{if } n \ge \tau.$$

In particular $E[X_\tau - X_0] = E[Y_T - Y_0] = 0$. Actually

$$Y_n = X_{\tau \wedge n},$$

and we have proved

Theorem. Let $X_n$ be a martingale and $\tau(\omega)$ a stopping time. Then $Y_n = X_{\tau \wedge n}$ is again a martingale.
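The identity $Y_n = X_{\tau \wedge n}$ is just a telescoping sum: with $a_i(\omega) = 1$ for $\tau(\omega) \ge i$ and $0$ otherwise,

$$X_0 + \sum_{i=1}^{n} a_i(\omega)(X_i - X_{i-1}) = X_0 + \sum_{i=1}^{\tau \wedge n} (X_i - X_{i-1}) = X_{\tau \wedge n},$$

so the transform, started from $X_0$, is exactly the stopped process.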

Remark. If $\tau$ is unbounded, then to show that $E[X_\tau - X_0] = 0$ one has to let $n \to \infty$ in $E[X_{\tau \wedge n} - X_0] = 0$. This requires uniform integrability assumptions on $X_n$. In particular, if $\sup_{0 \le n \le \tau} |X_n| \le C$ for some constant $C$, there is no difficulty.

There is a natural $\sigma$-field $\mathcal{F}_\tau$ associated with a stopping time. Intuitively this is all the information one has gathered up to the stopping time. For example, if a martingale $X_n$ is stopped when it reaches a level $a$ or above, i.e. $\tau = \inf\{n : X_n \ge a\}$, then the set

$$A = \{\omega : X_n \text{ goes below level } b \text{ before going above level } a\}$$

will be in $\mathcal{F}_\tau$, but $\{\omega : X_n \ge a\}$ will not be, unless we know for sure that $\tau \ge n$. Technically

Definition. A set $A \in \mathcal{F}_\tau$ if for every $k$

$$A \cap \{\tau \le k\} \in \mathcal{F}_k.$$

Equivalently one can demand that for every $k$

$$A \cap \{\tau = k\} \in \mathcal{F}_k.$$


The following facts are easy to verify. $\mathcal{F}_\tau$ is a $\sigma$-field for any stopping time $\tau$. If $\tau = \ell$ is a constant, then $\mathcal{F}_\tau = \mathcal{F}_\ell$. If $\tau_1 \le \tau_2$ then $\mathcal{F}_{\tau_1} \subset \mathcal{F}_{\tau_2}$. If $\tau_1, \tau_2$ are stopping times, so are $\tau_1 \wedge \tau_2$ and $\tau_1 \vee \tau_2$. If $\tau$ is a stopping time and $f(t)$ is an increasing function of $t$ satisfying $f(t) \ge t$, then $f(\tau)$ is a stopping time.
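For instance (a simple check of the definition): the stopping time $\tau$ itself is $\mathcal{F}_\tau$ measurable, because for any $j$ and $k$

$$\{\tau \le j\} \cap \{\tau \le k\} = \{\tau \le j \wedge k\} \in \mathcal{F}_{j \wedge k} \subset \mathcal{F}_k,$$

so $\{\tau \le j\} \in \mathcal{F}_\tau$ for every $j$.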

Theorem (Doob's stopping theorem). If $\tau_1 \le \tau_2 \le T$ are two bounded stopping times, then

$$E[X_{\tau_2} \mid \mathcal{F}_{\tau_1}] = X_{\tau_1}.$$

It is enough to prove that for any stopping time $\tau$ bounded by $T$,

$$E[X_T \mid \mathcal{F}_\tau] = X_\tau;$$

the general case then follows from the tower property, since $\mathcal{F}_{\tau_1} \subset \mathcal{F}_{\tau_2}$ gives $E[X_{\tau_2} \mid \mathcal{F}_{\tau_1}] = E\big[E[X_T \mid \mathcal{F}_{\tau_2}] \,\big|\, \mathcal{F}_{\tau_1}\big] = E[X_T \mid \mathcal{F}_{\tau_1}] = X_{\tau_1}$.

Let $A \in \mathcal{F}_\tau$. We need to show

$$\int_A X_T \, dP = \int_A X_\tau \, dP.$$

It suffices to show that for each $k$

$$\int_{A \cap \{\tau = k\}} X_T \, dP = \int_{A \cap \{\tau = k\}} X_\tau \, dP = \int_{A \cap \{\tau = k\}} X_k \, dP;$$

we can then sum over $k$. But $A \cap \{\tau = k\} \in \mathcal{F}_k$, and for $B \in \mathcal{F}_k$, that

$$\int_B X_T \, dP = \int_B X_k \, dP$$

follows from $E[X_T \mid \mathcal{F}_k] = X_k$.
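A numerical illustration of the stopped martingale (again an ad hoc sketch; the walk, the level, the bound $T$, and the name check_stopping are choices made here, not part of the notes): simulate a simple symmetric random walk stopped the first time it reaches a given level, truncated at a non-random $T$, and check that the mean of $X_{\tau \wedge T}$ stays at $E[X_0] = 0$.

    import random

    def check_stopping(level=3, T=25, n_paths=100_000, seed=1):
        # Monte Carlo sketch: X_n is a simple symmetric random walk started at 0,
        # tau is the first time the walk reaches `level` (a stopping time), and
        # T is a non-random bound on the number of rounds.  The average of
        # X_{tau ^ T} should be near E[X_0] = 0.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_paths):
            x = 0
            for _ in range(T):
                x += 1 if rng.random() < 0.5 else -1
                if x >= level:          # tau has occurred: stop playing
                    break
            total += x                  # value of X_{tau ^ T} on this path
        return total / n_paths          # empirical mean, approximately 0

    print(check_stopping())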

Remark. Although we have worked with square integrable martingales, the definition of a martingale and Doob's stopping theorem only need the existence of the mean. Integrability of $|X_n|$ is all that is needed for the definition to make sense. Of course any calculation that involves the second moment needs the variances to be finite.
