

Lecture 28: Martingales - Optional Stopping Theorem

1.) Theorems

Theorem (4.4.1): If Xn is a submartingale and N is a stopping time with P(N ≤ k) = 1 for some k, then

EX0 ≤ EXN ≤ EXk.

Proof: Since XN∧n is a submartingale, we have

EX0 = EXN∧0 ≤ EXN∧k = EXN .

To derive the upper bound, let Kn = 1{N<n} = 1{N≤n−1}. Since Kn is predictable, Theorem (4.2.7) implies that (K · X)n = Xn − XN∧n is a submartingale and so

EXk − EXN = E(K ·X)k ≥ E(K ·X)0 = 0.

Remark: If (Xn)n≥0 is a martingale, then under the assumptions of (4.4.1), we have

EXN = EX0.
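As a numerical sanity check of (4.4.1) (this example is not part of the original notes; the walk parameters, levels, horizon k, sample size, and seed below are my own choices), one can simulate the asymmetric simple random walk Sn with p = 0.6, which is a submartingale since E[Sn+1 − Sn | Fn] = 2p − 1 > 0, together with the bounded stopping time N = (T−2 ∧ T3) ∧ k:

```python
import random

# Monte Carlo check of EX_0 <= EX_N <= EX_k (Theorem 4.4.1) for the submartingale
# X_n = S_n (asymmetric simple random walk, p = 0.6) and the bounded stopping
# time N = (T_{-2} ^ T_3) ^ k.  All numerical choices are illustrative.
p, a, b, k, trials = 0.6, -2, 3, 25, 100_000
random.seed(0)

sum_XN = sum_Xk = 0.0
for _ in range(trials):
    S, S_at_N = 0, None
    for n in range(1, k + 1):
        S += 1 if random.random() < p else -1
        if S_at_N is None and S in (a, b):
            S_at_N = S              # value of the walk at the stopping time N
    if S_at_N is None:              # {-2, 3} not reached by time k, so N = k
        S_at_N = S
    sum_XN += S_at_N
    sum_Xk += S                     # S is now S_k
print("EX_0 = 0")
print("EX_N ~", sum_XN / trials)   # roughly 1.2 with these parameters
print("EX_k ~", sum_Xk / trials)   # roughly k*(2p-1) = 5.0
```

With these choices the three estimates should come out ordered as 0 ≤ EXN ≤ EXk, in line with the theorem.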

Theorem (4.7.1): If Xn is a uniformly integrable submartingale, then for any stopping time N, the stopped process XN∧n is uniformly integrable.

Proof: Since (X+n; n ≥ 0) is a submartingale and N ∧ n ≤ n is a bounded stopping time, Theorem (4.4.1) implies that EX+N∧n ≤ EX+n. Since (X+n; n ≥ 0) is also uniformly integrable, it follows that

sup_n EX+N∧n ≤ sup_n EX+n < ∞,

and so the martingale convergence theorem implies that XN∧n → XN a.s. and that E|XN| < ∞. We then have

E(|XN∧n|; |XN∧n| > K) = E(|XN|; |XN| > K, N ≤ n) + E(|Xn|; |Xn| > K, N > n).

Since E|XN| < ∞ and Xn is uniformly integrable, K can be chosen large enough that each term is smaller than ε/2 for all n ≥ 1.

The last computation in this proof also shows that:

Corollary (4.7.2): If E|XN| < ∞ and Xn1{N>n} is uniformly integrable, then XN∧n is uniformly integrable.

Theorem (4.7.3): If Xn is a uniformly integrable submartingale, then for any stopping time N we have EX0 ≤ EXN ≤ EX∞, where X∞ = limn→∞ Xn.


Proof: Theorem (4.4.1) shows that EX0 ≤ EXN∧n ≤ EXn. The result then follows by letting n → ∞ and observing that XN∧n → XN and Xn → X∞ in L1.

The Optional Stopping Theorem: If L ≤ M are stopping times and (YM∧n)n≥0 is a uniformly integrable submartingale, then EYL ≤ EYM and

YL ≤ E[YM |FL].

Proof: The inequality EYL ≤ EYM follows from Theorem (4.7.3) if we take Xn = YM∧n and N = L. (Notice that M ∧ L = L.)

Next, let A ∈ FL and define

N = L on A and N = M on Ac.

Then N is a stopping time with L ≤ N ≤ M and so EYN ≤ EYM by the first part of the theorem. Since YN = YM on Ac and YN = YL on A, subtracting E[YN; Ac] = E[YM; Ac] from both sides of EYN ≤ EYM gives

E[YL;A] = E[YN ;A] ≤ E[YM ;A] = E[E[YM |FL];A].

In particular, if ε > 0 and we let A = {YL − E[YM |FL] > ε} ∈ FL, then

εP(A) ≤ E[YL − E[YM |FL]; A] ≤ 0

and so P(A) = 0. Since this holds for all ε > 0, it follows that YL ≤ E[YM |FL] almost surely.
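The conditional-expectation form of the conclusion can also be checked exactly on a small finite example (this is my own toy illustration, not from the notes; the process, horizon, and stopping times are my choices): take a symmetric ±1 walk of length 4, the submartingale Yn = Sn², M = 4 and L = min{n : |Sn| = 2} ∧ 4. Since M is bounded, (YM∧n) is automatically uniformly integrable, so the hypothesis holds, and one can verify YL ≤ E[YM | FL] atom by atom on FL:

```python
from itertools import product

# Exact check of Y_L <= E[Y_M | F_L] on a toy example: a symmetric +-1 walk of
# length 4 (all 16 paths equally likely), submartingale Y_n = S_n**2, stopping
# times M = 4 and L = min{n : |S_n| = 2} ^ 4.  Atoms of F_L are the sets of
# paths that agree up to (and including) time L.
horizon = 4

def partial_sums(incs):
    S = [0]
    for x in incs:
        S.append(S[-1] + x)
    return S

def L_of(S):
    for n in range(1, horizon + 1):
        if abs(S[n]) == 2:
            return n
    return horizon

atoms = {}
for incs in product([1, -1], repeat=horizon):
    S = partial_sums(incs)
    L = L_of(S)
    atoms.setdefault((L, incs[:L]), []).append(S)

all_ok = True
for (L, _), member_paths in atoms.items():
    Y_L = member_paths[0][L] ** 2                       # identical across the atom
    cond_exp_Y_M = sum(S[horizon] ** 2 for S in member_paths) / len(member_paths)
    all_ok &= (Y_L <= cond_exp_Y_M + 1e-12)
print("Y_L <= E[Y_M | F_L] holds on every atom of F_L:", bool(all_ok))
```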

Theorem (4.7.5): Suppose that (Xn)n≥0 is a submartingale and that E[|Xn+1 − Xn| | Fn] ≤ B a.s. If N is a stopping time with EN < ∞, then (XN∧n)n≥0 is uniformly integrable and EX0 ≤ EXN.

Proof: To prove uniform integrability, it suffices to show that the XN∧n are dominated by a single integrable random variable. To this end, observe that

|XN∧n| ≤ |X0| + Σ_{m=0}^{(N∧n)−1} |Xm+1 − Xm| ≤ |X0| + Σ_{m=0}^{∞} |Xm+1 − Xm| 1{N>m}.

Since {N > m} ∈ Fm, we have

E[|Xm+1 −Xm|1{N>m}] = E[E[|Xm+1 −Xm||Fm]1{N>m}] ≤ BP(N > m),

and so

E[ Σ_{m=0}^{∞} |Xm+1 − Xm| 1{N>m} ] ≤ B Σ_{m=0}^{∞} P(N > m) = B · EN < ∞.

This establishes the desired domination, so (XN∧n)n≥0 is uniformly integrable. Since EN < ∞ implies N < ∞ a.s., we also have XN∧n → XN a.s.; uniform integrability upgrades this to convergence in L1, and letting n → ∞ in the inequality EX0 ≤ EXN∧n from Theorem (4.4.1) gives EX0 ≤ EXN.


Wald’s Identity: If Sn = ξ1 + · · · + ξn is a random walk whose steps satisfy E|ξ1| < ∞, µ = Eξ1, and N is a stopping time with EN < ∞, then (Sn − nµ)n≥0 is a martingale and E[SN − Nµ] = 0, i.e., ESN = µ E[N]. This follows by applying Theorem (4.7.5) to the martingale Xn = Sn − nµ and to −Xn, since E[|Xn+1 − Xn| | Fn] = E|ξ1 − µ| < ∞.
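A quick Monte Carlo illustration of Wald's identity (not in the original notes; p, the levels, the sample size, and the seed are my own choices) with p = 0.6, µ = 0.2 and N = T−2 ∧ T3, which satisfies EN < ∞:

```python
import random

# Monte Carlo check of Wald's identity E[S_N] = mu * E[N] for the asymmetric walk
# with p = 0.6 (so mu = 2p - 1 = 0.2) and the stopping time N = T_{-2} ^ T_3.
p, a, b, trials = 0.6, -2, 3, 200_000
mu = 2 * p - 1
random.seed(1)

sum_SN = sum_N = 0.0
for _ in range(trials):
    S, n = 0, 0
    while S not in (a, b):
        S += 1 if random.random() < p else -1
        n += 1
    sum_SN += S
    sum_N += n
print("E[S_N]    ~", sum_SN / trials)
print("mu * E[N] ~", mu * sum_N / trials)   # the two estimates should agree
```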

Theorem (4.7.6): If (Xn)n≥0 is a non-negative supermartingale and N is a stopping time, then EX0 ≥ EXN, where on {N = ∞} we set XN = X∞ = lim Xn (the limit exists a.s. by the martingale convergence theorem).

Proof: By Theorem (4.4.1), we have EX0 ≥ EXN∧n. The monotone convergence theorem implies that

E[XN; N < ∞] = limn→∞ E[XN; N ≤ n],

while Fatou's lemma implies that

E[XN; N = ∞] ≤ lim infn→∞ E[Xn; N > n].

Adding these two lines and noting that E[XN; N ≤ n] + E[Xn; N > n] = EXN∧n then gives

EXN ≤ lim infn→∞ EXN∧n ≤ EX0.

2.) Asymmetric Simple Random Walk:

Let ξ1, ξ2, · · · be i.i.d. random variables with distribution

P(ξi = 1) = p, P(ξi = −1) = q = 1− p,

and let Sn = ξ1 + · · ·+ ξn and Fn = σ(ξ1, · · · , ξn).

Claim (a): If 0 < p < 1 and φ(x) = ((1 − p)/p)^x, then φ(Sn) is a martingale.

Proof: Since Sn and ξn+1 are independent, on the set {Sn = m} we have

E[φ(Sn+1) | Fn] = p · ((1 − p)/p)^{m+1} + (1 − p) · ((1 − p)/p)^{m−1} = ((1 − p) + p) · ((1 − p)/p)^m = φ(Sn).

Since this holds for every integer m, it follows that E[φ(Sn+1) | Fn] = φ(Sn).
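The one-step identity used in the proof can be verified numerically (a check I am adding, with arbitrary sample values of p and m):

```python
import math

# Check of the one-step computation in Claim (a): with phi(x) = ((1-p)/p)**x,
#   p * phi(m + 1) + (1 - p) * phi(m - 1) = phi(m)   for every integer m.
for p in (0.3, 0.55, 0.9):
    r = (1 - p) / p
    phi = lambda x, r=r: r ** x
    for m in (-4, 0, 7):
        lhs = p * phi(m + 1) + (1 - p) * phi(m - 1)
        assert math.isclose(lhs, phi(m), rel_tol=1e-9), (p, m)
print("E[phi(S_{n+1}) | F_n] = phi(S_n) verified for the sampled values of p and m")
```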

Claim (b): If Tx = inf{n : Sn = x}, then for a < 0 < b, we have

P(Ta < Tb) = (φ(b) − φ(0)) / (φ(b) − φ(a)).

Proof: Let N = Ta ∧ Tb. Since (φ(SN∧n))n≥0 is bounded, it is a uniformly integrable martingale, and so Theorem (4.5.6) implies that the limit limn→∞ φ(SN∧n) exists almost surely and in L1.


Since SN∧n moves by ±1 at each step until it hits a or b, it cannot converge to an interior point of (a, b); it follows that N < ∞ almost surely and

φ(0) = E[φ(SN∧0)] = E[φ(SN)] = P(Ta < Tb)φ(a) + P(Tb < Ta)φ(b).

We can then solve for P(Ta < Tb) by using the fact that P(Ta < Tb) + P(Tb < Ta) = 1.
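As a sanity check on Claim (b) (not in the notes; p, a, b, the number of trials, and the seed are my own choices), a simulation of the hitting probability can be compared with the closed form:

```python
import random

# Monte Carlo check of Claim (b): P(T_a < T_b) = (phi(b) - phi(0)) / (phi(b) - phi(a)).
p, a, b, trials = 0.6, -3, 4, 200_000
r = (1 - p) / p
phi = lambda x: r ** x
random.seed(2)

hits_a = 0
for _ in range(trials):
    S = 0
    while S not in (a, b):
        S += 1 if random.random() < p else -1
    hits_a += (S == a)
print("simulated P(T_a < T_b) ~", hits_a / trials)
print("formula                =", (phi(b) - phi(0)) / (phi(b) - phi(a)))
```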

Claim (c): Suppose that 1/2 < p < 1. If a < 0, then

P(minn Sn ≤ a) = P(Ta < ∞) = ((1 − p)/p)^{−a}.

If b > 0, then P(Tb <∞) = 1.

Proof: The first result follows from the fact that P(Ta < ∞) = limb→∞ P(Ta < Tb) and that limb→∞ φ(b) = 0. Similarly, the second result follows by letting a → −∞ in P(Tb < Ta) = (φ(0) − φ(a))/(φ(b) − φ(a)) and using the fact that φ(a) → ∞ as a → −∞.
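The limiting argument can also be made concrete by tabulating the Claim (b) formula for increasing b (the values of p and a below are my own illustrative choices):

```python
# Numerical illustration of Claim (c): as b grows, P(T_a < T_b) from Claim (b)
# increases to the limit ((1-p)/p)**(-a), since phi(b) -> 0 when p > 1/2.
p, a = 0.7, -2
r = (1 - p) / p
phi = lambda x: r ** x
for b in (1, 5, 10, 20, 40):
    print(f"b = {b:2d}:  P(T_a < T_b) = {(phi(b) - phi(0)) / (phi(b) - phi(a)):.6f}")
print(f"limit ((1-p)/p)**(-a) = {r ** (-a):.6f}")
```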

Claim (d): Suppose that 1/2 < p < 1. If b > 0, then ETb = b/(2p− 1).

Proof: Since Xn = Sn − n(p − q) is a martingale and Tb ∧ n is a bounded stopping time, Theorem (4.4.1) implies that

0 = E[STb∧n − (p− q)(Tb ∧ n)].

Now b ≥ STb∧n ≥ infm Sm and (c) implies that E[infm Sm] > −∞, so by the dominated convergence theorem, we have ESTb∧n → ESTb = b as n → ∞. Since the monotone convergence theorem implies that E[Tb ∧ n] ↑ ETb, it follows that b = (p − q)ETb.
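Finally, Claim (d) can be checked by simulation (the parameter choices and seed below are mine):

```python
import random

# Monte Carlo check of Claim (d): for p > 1/2 and b > 0, E[T_b] = b / (2p - 1).
p, b, trials = 0.6, 3, 100_000
random.seed(3)

total_steps = 0
for _ in range(trials):
    S, n = 0, 0
    while S != b:
        S += 1 if random.random() < p else -1
        n += 1
    total_steps += n
print("simulated E[T_b] ~", total_steps / trials)
print("b / (2p - 1)     =", b / (2 * p - 1))
```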
