
4. Markov Chains (9/23/12, cf. Ross)

1. Introduction

2. Chapman-Kolmogorov Equations

3. Types of States

4. Limiting Probabilities

5. Gambler’s Ruin

6. First Passage Times

7. Branching Processes

8. Time-Reversibility


4.1 Introduction

Definition: A stochastic process (SP) {X(t) : t ∈ T}

is a collection of RV’s. Each X(t) is a RV; t is usually

regarded as “time.”

Example: X(t) = the number of customers in line at

the post office at time t.

Example: X(t) = the price of IBM stock at time t.


T is the index set of the process. If T is countable,

then {X(t) : t ∈ T} is a discrete-time SP. If T is some

continuum, then {X(t) : t ∈ T} is a continuous-time

SP.

Example: {Xn : n = 0, 1, 2, . . .} (index set of
non-negative integers)

Example: {X(t) : t ≥ 0} (index set is ℝ+)


The state space of the SP is the set of all possible

values that the RV’s X(t) can take.

Example: If Xn = j, then the process is in state j at

time n.

Any realization of {X(t)} is a sample path.


Definition: A Markov chain (MC) is a SP such that

whenever the process is in state i, there is a fixed

transition probability Pij that its next state will be j.

Denote the “current” state (at time n) by Xn = i.

Let the event A = {X0 = i0, X1 = i1, . . . Xn−1 = in−1}

be the previous history of the MC (before time n).


{Xn} has the Markov property if it forgets about its

past, i.e.,

Pr(Xn+1 = j|A ∩Xn = i) = Pr(Xn+1 = j|Xn = i).

{Xn} is time homogeneous if

Pr(Xn+1 = j|Xn = i) = Pr(X1 = j|X0 = i) = Pij,

i.e., if the transition probabilities are independent of

n.


Recap: A Markov chain is a SP such that

Pr(Xn+1 = j|A ∩Xn = i) = Pij,

i.e., the next state depends only on the current state

(and is indep of the time).


Since Pij is a probability, 0 ≤ Pij ≤ 1 for all i, j.

Since the process has to go from i to some state, we

must have ∑_{j=0}^∞ Pij = 1 for all i. Note that it may be

possible to go from i to i (i.e., “stay” at i).

Definition: The one-step transition matrix is

P = [ P00  P01  P02  · · ·
      P10  P11  P12  · · ·
      ...  ...  ...       ]


Example: A frog lives in a pond with three lily pads

(1,2,3). He sits on one of the pads and periodically

rolls a die. If he rolls a 1, he jumps to the lower

numbered of the two unoccupied pads. Otherwise,

he jumps to the higher numbered pad. Let X0 be the

initial pad and let Xn be his location just after the nth

jump. This is a MC since his position only depends

on the current position, and the Pij’s are independent

of n.

P = [ 0    1/6  5/6
      1/6  0    5/6
      1/6  5/6  0   ]. ♦
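Aside: simulating a sample path is a quick way to see the Markov property in action. A minimal Python sketch (numpy; the function name is ours, and the pads are relabeled 0, 1, 2):

import numpy as np

# Frog example: row i gives the jump probs from pad i (pads relabeled 0,1,2).
P = np.array([[0,   1/6, 5/6],
              [1/6, 0,   5/6],
              [1/6, 5/6, 0  ]])
assert np.allclose(P.sum(axis=1), 1)        # each row of P must sum to 1

rng = np.random.default_rng(0)

def sample_path(P, start, n_steps):
    """Simulate X0, ..., Xn: the next pad depends only on the current pad."""
    path = [start]
    for _ in range(n_steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

print(sample_path(P, start=0, n_steps=10))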


Example: Let Xi denote the weather (rain or sun) on

day i. We’ll think of Xi−1 as yesterday, Xi as today,

and Xi+1 as tomorrow. Suppose that

Pr(Xi+1 = R | Xi−1 = R,Xi = R) = 0.7

Pr(Xi+1 = R | Xi−1 = S,Xi = R) = 0.5

Pr(Xi+1 = R | Xi−1 = R,Xi = S) = 0.4

Pr(Xi+1 = R | Xi−1 = S,Xi = S) = 0.2


X0, X1, . . . isn’t quite a MC, since the probability that

it’ll rain tomorrow depends on Xi and Xi−1.

We’ll transform the process into a MC by defining the

following states in terms of today and yesterday.

0 : Xi−1 = R, Xi = R

1 : Xi−1 = S, Xi = R

2 : Xi−1 = R, Xi = S

3 : Xi−1 = S, Xi = S


Thus, we have, e.g.,

Pr(Xi+1 = R | Xi−1 = R,Xi = R) = P00 = 0.7

Pr(Xi+1 = S | Xi−1 = R,Xi = R) = P02 = 0.3

Using similar reasoning, we get

P = [ 0.7  0    0.3  0
      0.5  0    0.5  0
      0    0.4  0    0.6
      0    0.2  0    0.8 ]. ♦


Example: A MC whose state space is given by the

integers is called a random walk if Pi,i+1 = p and

Pi,i−1 = 1− p.

P = [           ...
      · · ·  1−p  0    p    0    0    · · ·
      · · ·  0    1−p  0    p    0    · · ·
      · · ·  0    0    1−p  0    p    · · ·
      · · ·  0    0    0    1−p  0    · · ·
                ...                     ]. ♦


Example (Gambler’s Ruin): Every time a gambler

plays a game, he wins $1 w.p. p, and he loses $1

w.p. 1− p. He stops playing as soon as his fortune is

either $0 or $N. The gambler’s fortune is a MC with

the following Pij’s:

Pi,i+1 = p, i = 1,2, . . . , N − 1

Pi,i−1 = 1− p, i = 1,2, . . . , N − 1

P0,0 = PN,N = 1

0 and N are absorbing states — once the process

enters one of these states, it can’t leave. ♦


Example (Ehrenfest Model): A random walk on a finite
set of states with “reflecting” boundaries. Set of
states is {0, 1, 2, . . . , a}.

Pij = { (a − i)/a   if j = i + 1
      { i/a         if j = i − 1
      { 0           otherwise

Idea: Suppose A has i marbles, B has a− i. Select a

marble at random, and put it in the other container.

♦
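Aside: a small sketch (Python; assumes the states 0, 1, . . . , a above, with state i = # of marbles in container A) that builds this matrix:

import numpy as np

def ehrenfest_P(a):
    """Transition matrix on states 0..a, where state i = # marbles in A."""
    P = np.zeros((a + 1, a + 1))
    for i in range(a + 1):
        if i < a:
            P[i, i + 1] = (a - i) / a   # picked one of B's a-i marbles: A gains one
        if i > 0:
            P[i, i - 1] = i / a         # picked one of A's i marbles: A loses one
    return P

print(ehrenfest_P(4))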


4.2 Chapman-Kolmogorov Equations

Definition: The n-step transition probability that a

process currently in state i will be in state j after n

additional transitions is

P^(n)ij ≡ Pr(Xn = j | X0 = i), n, i, j ≥ 0.

Note that P^(1)ij = Pij, and

P^(0)ij = { 1 if i = j
          { 0 otherwise.


Theorem (C-K Equations):

P^(n+m)ij = ∑_{k=0}^∞ P^(n)ik P^(m)kj.

Think of going from i to j in n + m steps with an

intermediate stop in state k after n steps; then sum

over all possible k values.


Proof: By definition,

P^(n+m)ij = Pr(Xn+m = j | X0 = i)

= ∑_{k=0}^∞ Pr(Xn+m = j ∩ Xn = k | X0 = i)   (total prob)

= ∑_{k=0}^∞ Pr(Xn+m = j | X0 = i ∩ Xn = k) Pr(Xn = k | X0 = i)

   (since Pr(A ∩ C | B) = Pr(A | B ∩ C) Pr(C | B))

= ∑_{k=0}^∞ Pr(Xn+m = j | Xn = k) Pr(Xn = k | X0 = i)

   (Markov property). ♦


Definition: The n-step transition matrix is

P^(n) = [ P^(n)00  P^(n)01  P^(n)02  · · ·
          P^(n)10  P^(n)11  P^(n)12  · · ·
          ...      ...      ...           ]

The C-K equations imply P^(n+m) = P^(n) P^(m).

In particular, P^(2) = P^(1) P^(1) = PP = P^2.

By induction, P^(n) = P^n.


Example: Let Xi = 0 if it rains on day i; otherwise,

Xi = 1. Suppose P00 = 0.7 and P10 = 0.4. Then

P = [ 0.7  0.3
      0.4  0.6 ].

Suppose it rains on Monday. Then the prob that it
rains on Friday is P^(4)00. Note that

P^(4) = P^4 = [ 0.5749  0.4251
                0.5668  0.4332 ],

so that P^(4)00 = 0.5749. ♦
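This is a one-line computation on a machine. A minimal check (Python with numpy):

import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# By C-K, P^(n) is the ordinary matrix power P^n.
P4 = np.linalg.matrix_power(P, 4)
print(P4)         # [[0.5749 0.4251], [0.5668 0.4332]]
print(P4[0, 0])   # 0.5749 = Pr(rain Friday | rain Monday)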


Unconditional Probabilities

Suppose we know the “initial” probabilities,

αi ≡ Pr(X0 = i), i = 0,1, . . . .

(Note that ∑i αi = 1.) Then by total probability,

Pr(Xn = j) = ∑_{i=0}^∞ Pr(Xn = j ∩ X0 = i)

= ∑_{i=0}^∞ Pr(Xn = j | X0 = i) Pr(X0 = i)

= ∑_{i=0}^∞ P^(n)ij αi.


Example: In the above example, suppose α0 = 0.4

and α1 = 0.6. Find the prob that it will not rain on

the 4th day after we start keeping records (assuming

nothing about the first day).

Pr(X4 = 1) = ∑_{i=0}^∞ P^(4)i1 αi

= P^(4)01 α0 + P^(4)11 α1

= (0.4251)(0.4) + (0.4332)(0.6)

= 0.4300. ♦
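In matrix form this is just the row vector α times P^4. Continuing the numpy sketch from above:

import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
alpha = np.array([0.4, 0.6])                  # (Pr(X0 = 0), Pr(X0 = 1))

dist4 = alpha @ np.linalg.matrix_power(P, 4)  # unconditional distribution of X4
print(dist4[1])                               # Pr(X4 = 1) = 0.4300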


4.3 Types of States

Definition: If P^(n)ij > 0 for some n ≥ 0, state j is
accessible from i.

Notation: i → j.

Definition: If i → j and j → i, then i and j communicate.

Notation: i ↔ j.


Theorem: Communication is an equivalence relation:

(i) i↔ i for all i (reflexive).

(ii) i↔ j implies j ↔ i (symmetric).

(iii) i↔ j and j ↔ k imply i↔ k (transitive).

Proof: (i) and (ii) are trivial, so we’ll only do (iii). To
do so, suppose i ↔ j and j ↔ k. Then there are n, m
such that P^(n)ij > 0 and P^(m)jk > 0. So by C-K,

P^(n+m)ik = ∑_{r=0}^∞ P^(n)ir P^(m)rk ≥ P^(n)ij P^(m)jk > 0.

Thus, i → k. Similarly, k → i. ♦


Definition: An equivalence class consists of all states

that communicate with each other.

Remark: Easy to see that two equiv classes are disjoint.

Example: The following P has equiv classes {0,1} and

{2,3}.

P = [ 1/2  1/2  0    0
      1/2  1/2  0    0
      0    0    3/4  1/4
      0    0    1/4  3/4 ]. ♦


Example: P again has equiv classes {0,1} and {2,3}
(note that 1 isn’t accessible from 2).

P = [ 1/2  1/2  0    0
      1/2  1/4  1/4  0
      0    0    3/4  1/4
      0    0    1/4  3/4 ]. ♦

Definition: A MC is irreducible if there is only one

equiv class (i.e., if all states communicate).

Example: The previous two examples are not irreducible. ♦


Example: The following P is irreducible since all states

communicate (“loop” technique: 0→ 1→ 0).

P = [ 1/2  1/2
      1/4  3/4 ]. ♦

Example: P is irreducible since 0→ 2→ 1→ 0.

P = [ 1/4  0    3/4
      1    0    0
      0    1/2  1/2 ]. ♦


Definition: The probability that the MC eventually

returns to state i is

fi ≡ Pr(Xn = i for some n ≥ 1|X0 = i).

Example: The following MC has equiv classes {0,1},

{2}, and {3}, the latter of which is absorbing.

P = [ 1/2  1/2  0    0
      1/2  1/2  0    0
      1/4  1/4  1/4  1/4
      0    0    0    1   ].

We have f0 = f1 = 1, f2 = 1/4, f3 = 1. ♦


Remark: The fi’s are usually hard to compute.

Definition: If fi = 1, state i is recurrent. If fi < 1,

state i is transient.

Theorem: Suppose X0 = i. Let N denote the number

of times that the MC is in state i (before leaving i

forever). Note that N ≥ 1 since X0 = i. Then i is

recurrent iff E[N] = ∞ (and i is transient iff E[N] < ∞).


Proof: If i is recurrent, it’s easy to see that the MC

returns to i an infinite number of times; so E[N] = ∞.

Otherwise, suppose i is transient. Then

Pr(N = 1) = 1 − fi   (never returns)

Pr(N = 2) = fi(1 − fi)   (returns exactly once)

...

Pr(N = k) = fi^{k−1}(1 − fi)   (returns k − 1 times)

So N ∼ Geom(1 − fi). Finally, since fi < 1, we have
E[N] = 1/(1 − fi) < ∞. ♦


Theorem: i is recurrent iff ∑_{n=1}^∞ P^(n)ii = ∞. (So i
is transient iff ∑_{n=1}^∞ P^(n)ii < ∞.)

Proof: Define the indicator

An ≡ 1 if Xn = i, 0 if Xn ≠ i.

Note that N ≡ ∑_{n=1}^∞ An is the number of returns to i.

Then by the trick that allows us to treat the expected
value of an indicator function as a probability, we have. . .


∑_{n=1}^∞ P^(n)ii = ∑_{n=1}^∞ Pr(Xn = i | X0 = i)

= ∑_{n=1}^∞ E[An | X0 = i]   (trick)

= E[ ∑_{n=1}^∞ An | X0 = i ]

= E[N | X0 = i]   (N = number of returns)

= ∞ ⇔ i is recur (by previous theorem). ♦


Corollary 1: If i is recur and i↔ j, then j is recur.

Proof: See Ross. ♦

Corollary 2: In a MC with a finite number of states,

not all of the states can be transient.

Proof: Suppose not. Since each transient state is
visited only finitely often, after some finite time the
MC would have no state left to go to. This is a
contradiction. ♦


Corollary 3: If one state in an equiv class is transient,

then all states are trans.

Proof: Suppose not, i.e., suppose there’s a recur

state. Since all states in the equiv class communicate,
Corollary 1 implies all states are recur. This is

a contradiction. ♦

Corollary 4: All states in a finite irreducible MC are

recurrent.


Proof: Suppose not, i.e., suppose there’s a trans
state. Then Corollary 3 implies all states are trans.
But this contradicts Corollary 2. ♦

Definition: By Corollary 1, all states in an equiv class

are recur if one state in that class is recur. Such a

class is a recurrent equiv class.

By Corollary 3, all states in an equiv class are trans

if one state in that class is trans. Such a class is a

transient equiv class.


Example: Consider the prob transition matrix

P = [ 1/2  1/2
      1/4  3/4 ].

Clearly, all states communicate. So this is a finite,
irreducible MC. So Corollary 4 implies all states are
recurrent. ♦


Example: Consider

P = [ 1/4  0  0  3/4
      1    0  0  0
      0    1  0  0
      0    0  1  0   ].

Loop: 0 → 3 → 2 → 1 → 0. Thus, all states communicate;
so they’re all recurrent. ♦


Example: Consider

P = [ 1/4  0    3/4  0    0
      0    1/2  0    1/2  0
      1/2  0    1/2  0    0
      0    1/2  0    1/2  0
      1/5  1/5  0    0    3/5 ].

The equiv classes are {0,2} (recur), {1,3} (recur),

and {4} (trans). ♦


Example: Random Walk: A drunk walks on the integers
0, ±1, ±2, . . . with transition probabilities

Pi,i+1 = p

Pi,i−1 = q = 1− p

(i.e., he steps to the right w.p. p and to the left w.p.

1− p).


The prob transition matrix is

P = [           ...
      · · ·  q  0  p  0  0  · · ·
      · · ·  0  q  0  p  0  · · ·
      · · ·  0  0  q  0  p  · · ·
      · · ·  0  0  0  q  0  · · ·
                ...             ].

Are the states recurrent or transient?

Clearly, all states communicate. So Corollary 1 implies
that if one of the states is recur, then they all are.
Otherwise, all states will be transient.


Consider a typical state 0. If 0 is recurrent [transient],

then all states will be recurrent [transient]. We’ll find

out which is the case by calculating ∑_{n=1}^∞ P^(n)00.

Suppose the drunk starts at 0. Since it’s impossible
for him to return to 0 in an odd number of steps, we
see that P^(2n+1)00 = 0 for all n ≥ 0.


So the only chance he has of returning to 0 is if he’s

taken an even number of steps, say 2n. Of these

steps, n must be taken to the left, and n to the right.

So, thinking binomial, we have

P^(2n)00 = (2n choose n) p^n q^n = [(2n)! / (n! n!)] p^n q^n, n ≥ 1.

Aside: For large n, Stirling’s approximation says that

n! ≈ √(2π) n^{n+1/2} e^{−n}.


After the smoke clears,

P^(2n)00 ≈ [4p(1 − p)]^n / √(πn),

so that

∑_{n=1}^∞ P^(n)00 = ∑_{n=1}^∞ P^(2n)00 = ∑_{n=1}^∞ [4p(1 − p)]^n / √(πn)

= { ∞     if p = 1/2
  { < ∞   if p ≠ 1/2.

So the MC is recur if p = 1/2 and trans otherwise.
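Aside: partial sums make the dichotomy visible numerically. A small sketch (Python; the number of terms is an arbitrary cutoff of ours):

import math

def partial_sum(p, terms=10**5):
    """Partial sum of [4p(1-p)]^n / sqrt(pi n), approximating the sum of P^(2n)00."""
    r = 4 * p * (1 - p)                      # r = 1 iff p = 1/2
    return sum(r**n / math.sqrt(math.pi * n) for n in range(1, terms + 1))

print(partial_sum(0.5))   # ~ 2*sqrt(terms/pi): grows without bound (recurrent)
print(partial_sum(0.6))   # converges, since r = 0.96 < 1 (transient)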


Definition: If p = 1/2, the random walk is symmetric.

Remark: A 2-dimensional r.w. with probability 1/4 of

going each way yields a recurrent MC.

A 3-dimensional r.w. with probability 1/6 of going

each way (N, S, E, W, up, down) yields a transient

MC.


4.4 Limiting Probabilities

Example: Note that the following matrices appear to

be converging. . . .

P = [ 0.7  0.3
      0.4  0.6 ],

P^(2) = [ 0.61  0.39
          0.52  0.48 ],

P^(4) = [ 0.575  0.425
          0.567  0.433 ],

P^(8) = [ 0.572  0.428
          0.570  0.430 ], . . .


Definition: Suppose that P^(n)ii = 0 whenever n is not
divisible by d, and suppose that d is the largest integer
with this property. Then state i has period d. Think
of d as the greatest common divisor of all n values for
which P^(n)ii > 0.

Example: All states have period 3.

P = [ 0  1  0
      0  0  1
      1  0  0 ]. ♦


Definition: A state with period 1 is aperiodic.

Example:

P = [ 0    1    0    0
      1    0    0    0
      1/4  1/4  1/4  1/4
      0    0    1/2  1/2 ].

Here, states 0 and 1 have period 2, while states 2 and

3 are aperiodic. ♦


Definition: Suppose state i is recurrent and X0 = i.

If the expected time until the process returns to i is

finite, then i is positive recurrent.

Remark: It turns out that. . .

(1) In a finite MC, all recur states are positive recur.

(2) In an ∞-state MC, there may be some recur states

that are not positive recur. Such states are null recur.

Definition: Pos recur, aperiodic states are ergodic.


Theorem: For an irreducible, ergodic MC,

(1) πj ≡ lim_{n→∞} P^(n)ij exists and is independent of i.
(The πj’s are called limiting probabilities.)

(2) The πj’s are the unique, nonnegative solution of

πj = ∑_{i=0}^∞ πiPij, j ≥ 0,   and   1 = ∑_{j=0}^∞ πj.

In vector notation, this can be written as π = πP.

Heuristic “proof”: see Ross. ♦


Remarks: (1) πj is also the long-run proportion of

time that the MC will be in state j. The πj’s are often

called stationary probs — since if Pr(X0 = j) = πj,

then Pr(Xn = j) = πj for all n.

(2) In the irred, pos recur, periodic case, πj can only

be interpreted as the long-run proportion of time in j.

(3) Let mjj ≡ expected number of transitions needed

to go from j to j. Since, on average, the MC spends

1 time unit in state j for every mjj time units, we have

mjj = 1/πj.


Example: Find the limiting probabilities of

P = [ 0.5  0.4  0.1
      0.3  0.4  0.3
      0.2  0.3  0.5 ].

Solve πj = ∑_{i=0}^∞ πiPij (π = πP), i.e.,

π0 = π0P00 + π1P10 + π2P20 = 0.5π0 + 0.3π1 + 0.2π2,
π1 = π0P01 + π1P11 + π2P21 = 0.4π0 + 0.4π1 + 0.3π2,
π2 = π0P02 + π1P12 + π2P22 = 0.1π0 + 0.3π1 + 0.5π2,

and π0 + π1 + π2 = 1. Get π = {21/62, 23/62, 18/62}. ♦
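Since these are linear equations, a computer solves them directly. A minimal sketch (Python with numpy); we drop one redundant balance equation and append the normalization, a standard trick:

import numpy as np

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# pi = pi P  <=>  (P^T - I) pi^T = 0; the balance equations alone are
# linearly dependent, so replace one with pi0 + pi1 + pi2 = 1.
n = P.shape[0]
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)        # [0.3387 0.3710 0.2903] = [21/62, 23/62, 18/62]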


Definition: A transition matrix P is doubly stochastic

if each column (and row) sums to 1.

Theorem: If, in addition to the conditions of the previous
theorem, P is a doubly stochastic n × n matrix,

then πj = 1/n for all j.

Proof: Just plug in πj = 1/n for all j into π = πP

to verify that it works. Since this solution must be

unique, we’re done. ♦


Example: Find the limiting probabilities of

P = [ 0.5  0.4  0.1
      0.3  0.3  0.4
      0.2  0.3  0.5 ].

This is a doubly stochastic matrix, so we immediately

have that π0 = π1 = π2 = 1/3. ♦


4.5 Gambler’s Ruin Problem

Each time a gambler plays, he wins $1 w.p. p and loses

$1 w.p. 1− p = q. Each play is independent. Suppose

he starts with $i. Find the probability that his fortune

will hit $N (i.e., he breaks the bank) before it hits $0

(i.e., he is ruined).


Let Xn denote his fortune at time n. Clearly, {Xn} is

a MC.

Note Pi,i+1 = p and Pi,i−1 = q for i = 1,2, . . . N − 1.

Further, P00 = 1 = PNN .

We have 3 equiv classes: {0} (recur), {1,2, . . . , N −1}

(trans), and {N} (recur).


By a standard one-step conditioning argument,

Pi ≡ Pr(Eventually hit $N | X0 = i)

= Pr(Event. hit N | X1 = i + 1 and X0 = i)
    × Pr(X1 = i + 1 | X0 = i)
  + Pr(Event. hit N | X1 = i − 1 and X0 = i)
    × Pr(X1 = i − 1 | X0 = i)

= Pr(Event. hit N | X1 = i + 1) p + Pr(Event. hit N | X1 = i − 1) q

= pPi+1 + qPi−1, i = 1, 2, . . . , N − 1.


Since p+ q = 1, we have

pPi + qPi = pPi+1 + qPi−1

iff

p(Pi+1 − Pi) = q(Pi − Pi−1)

iff

Pi+1 − Pi = (q/p)(Pi − Pi−1), i = 1, 2, . . . , N − 1.


Since P0 = 0, we have

P2 − P1 = (q/p)P1

P3 − P2 = (q/p)(P2 − P1) = (q/p)²P1

...

Pi − Pi−1 = (q/p)(Pi−1 − Pi−2) = (q/p)^{i−1}P1.

Summing up the LHS terms and the RHS terms,

∑_{j=2}^i (Pj − Pj−1) = Pi − P1 = ∑_{j=1}^{i−1} (q/p)^j P1.


This implies that

Pi = P1 ∑_{j=0}^{i−1} (q/p)^j = { [1 − (q/p)^i] / [1 − (q/p)] · P1   if q ≠ p (p ≠ 1/2)
                                { iP1                                if q = p (p = 1/2).

In particular, note that

1 = PN = { [1 − (q/p)^N] / [1 − (q/p)] · P1   if p ≠ 1/2
         { NP1                                 if p = 1/2.


Thus,

P1 = { [1 − (q/p)] / [1 − (q/p)^N]   if p ≠ 1/2
     { 1/N                            if p = 1/2,

so that

Pi = { [1 − (q/p)^i] / [1 − (q/p)^N]   if p ≠ 1/2
     { i/N                              if p = 1/2. ♦

By the way, as N → ∞,

Pi → { 1 − (q/p)^i   if p > 1/2
     { 0             if p ≤ 1/2. ♦


Example: A guy can somehow win any blackjack hand
w.p. 0.6. If he wins, his fortune increases by $100;
a loss costs him $100. Suppose he starts out with
$500, and that he’ll quit playing as soon as his fortune
hits $0 or $1500. What’s the probability that he’ll
eventually hit $1500?

P5 = [1 − (0.4/0.6)^5] / [1 − (0.4/0.6)^15] = 0.870. ♦
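A sketch of the formula in code (Python; fortunes are measured in units of one $100 bet):

def win_prob(i, N, p):
    """Prob. the gambler's fortune hits N before 0, starting from i."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p                     # q/p
    return (1 - r**i) / (1 - r**N)

print(win_prob(5, 15, 0.6))             # 0.870 ($500 start, $1500 goal)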


4.6 First Passage Time from State 0 to State N

P^(n)ij ≡ Pr(Xn = j | X0 = i)

Definition: The probability that the first passage time
from i to j is n is

f^(n)ij ≡ Pr(Xn = j, Xk ≠ j for k = 1, . . . , n − 1 | X0 = i).

This is the probability that the MC goes from i to j

in exactly n steps (without passing thru j along the

way).


Remarks:

(1) By definition, f^(1)ij = P^(1)ij = Pij

(2) f^(n)ij = P^(n)ij − ∑_{k=1}^{n−1} f^(k)ij P^(n−k)jj

P^(n)ij = prob. of going from i to j in n steps

f^(k)ij = prob. of i to j for the first time in k steps

P^(n−k)jj = prob. of j to j in the remaining n − k steps


Special Case: Start in state 0, and state N is an
absorbing (“trapping”) state, so that P^(m)NN = 1 for all m.

f^(1)0N = P^(1)0N = P0N

f^(2)0N = P^(2)0N − f^(1)0N P^(1)NN = P^(2)0N − P^(1)0N

f^(3)0N = P^(3)0N − f^(1)0N − f^(2)0N
        = P^(3)0N − P^(1)0N − (P^(2)0N − P^(1)0N) = P^(3)0N − P^(2)0N

...

f^(n)0N = P^(n)0N − P^(n−1)0N

The f^(n)0N’s can be calculated iteratively starting at f^(1)0N.


Define T ≡ first passage time from 0 to N. Then

E(T^k) = ∑_{n=1}^∞ n^k Pr(T = n) = ∑_{n=1}^∞ n^k f^(n)0N
       = ∑_{n=1}^∞ n^k (P^(n)0N − P^(n−1)0N).

Usually use a computer to calculate this.
(WARNING! Don’t break this up into 2 separate ∞
summations!) Stop calculating when f^(n)0N ≈ 0.
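A sketch of the iterative computation (Python with numpy). The chain is a small example of our own with N = 3 absorbing; the loop adds n^k f^(n)0N one term at a time, as the warning requires, and stops once f^(n)0N ≈ 0:

import numpy as np

# Illustrative chain (our own choice): states 0,1,2,3 with N = 3 absorbing;
# from i < 3, step up w.p. 0.6 and down (or hold, at 0) w.p. 0.4.
P = np.array([[0.4, 0.6, 0.0, 0.0],
              [0.4, 0.0, 0.6, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.0, 0.0, 0.0, 1.0]])

def first_passage_moment(P, N=3, k=1, tol=1e-12):
    """E[T^k] for T = first passage time 0 -> N, via f0N^(n) = P0N^(n) - P0N^(n-1)."""
    total, prev = 0.0, 0.0
    Pn = np.eye(len(P))
    n = 0
    while True:
        n += 1
        Pn = Pn @ P                  # Pn = P^n
        f = Pn[0, N] - prev          # f0N^(n), valid because N is absorbing
        prev = Pn[0, N]
        total += n**k * f            # accumulate one term at a time (per the WARNING)
        if f < tol and n > 10:       # stop once f0N^(n) ~ 0
            return total

print(first_passage_moment(P, k=1))  # E[T]
print(first_passage_moment(P, k=2))  # E[T^2]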


2nd Special Case: 2 absorbing states N, N′.

Same procedure as before, but divide each f^(n)0N, f^(n)0N′
by the prob. of being trapped at that state. So the
probs. of first passage times to N, N′ in n steps are

f^(n)0N / ∑_{k=1}^∞ f^(k)0N   and   f^(n)0N′ / ∑_{k=1}^∞ f^(k)0N′.


4.7 Branching Processes ← Special class of MC’s

Suppose X0 is the number of individuals in a certain

population. Suppose the probability that any individual
will have exactly j offspring during its lifetime is

Pj, j ≥ 0. (Assume that the number of offspring from

one individual is independent of the number from any

other individual.)


X0 ≡ size of the 0th generation

X1 ≡ size of the 1st gener’n = # kids produced by
individuals from the 0th gener’n.

...

Xn ≡ size of the nth gener’n = # kids produced by
indiv.’s from the (n − 1)st gener’n.

Then {Xn : n ≥ 0} is a MC with the non-negative
integers as its state space. Pij ≡ Pr(Xn+1 = j | Xn = i).


Remarks:

(1) 0 is recurrent since P00 = 1.

(2) If P0 > 0, then all other states are transient.

(Proof: If P0 > 0, then Pi0 = P0^i > 0. So if i were
recurrent, we’d eventually get absorbed in state 0 and
never return to i. Contradiction.)

These two remarks imply that the population either

dies out or its size →∞.


Denote µ ≡ ∑_{j=0}^∞ jPj, the mean number of offspring
of a particular individual.

Denote σ² ≡ ∑_{j=0}^∞ (j − µ)²Pj, the variance.

Suppose X0 = 1. In order to calculate E[Xn] and
Var(Xn), note that

Xn = ∑_{i=1}^{Xn−1} Zi,

where Zi is the # of kids from indiv. i of gener’n (n − 1).


Since Xn−1 is indep of the Zi’s,

E[Xn] = E[ ∑_{i=1}^{Xn−1} Zi ] = E[Xn−1] E[Zi] = µ E[Xn−1].

Since X0 = 1,

E[X1] = µ

E[X2] = µE[X1] = µ²

...

E[Xn] = µ^n.
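Aside: a quick simulation check of E[Xn] = µ^n (Python with numpy; for concreteness, the offspring distribution is the one from the example at the end of this section):

import numpy as np

rng = np.random.default_rng(1)
p = [0.25, 0.25, 0.5]                        # Pj = prob of j offspring, j = 0, 1, 2
mu = sum(j * pj for j, pj in enumerate(p))   # mu = 1.25

def simulate_Xn(n, reps=10000):
    """Simulate X_n from X0 = 1, reps times; individuals reproduce indep'ly."""
    x = np.ones(reps, dtype=int)
    for _ in range(n):
        x = np.array([rng.choice(3, size=xi, p=p).sum() for xi in x])
    return x

n = 5
print(simulate_Xn(n).mean(), mu**n)          # both close to 1.25^5 ≈ 3.05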


Similarly,

Var(Xn) = { σ²µ^{n−1} (µ^n − 1)/(µ − 1)   if µ ≠ 1
          { nσ²                            if µ = 1


Denote π0 ≡ lim_{n→∞} Pr(Xn = 0 | X0 = 1) = prob that
the population will eventually die out (given X0 = 1).

Fact: If µ < 1, then π0 = 1.

Proof:

Pr(Xn ≥ 1) = ∑_{j=1}^∞ Pr(Xn = j)
           ≤ ∑_{j=1}^∞ j Pr(Xn = j)
           = E[Xn] = µ^n → 0 as n → ∞. ♦

Fact: If µ = 1, then π0 = 1.


What about the case when µ > 1?

Here, it turns out that π0 < 1, i.e., the prob. that the
population dies out is < 1.

π0 = Pr(pop’n dies out)

   = ∑_{j=0}^∞ Pr(pop’n dies out | X1 = j) Pr(X1 = j)

   = ∑_{j=0}^∞ π0^j Pj,

since Pr(pop’n dies out | X1 = j) = π0^j (the families
started by the j members of the first generation must
all die out, indep’ly), and Pr(X1 = j) = Pj.


Summary:

π0 = ∑_{j=0}^∞ π0^j Pj   (∗)

For µ > 1, π0 is the smallest positive number satisfying (∗).


Example: Suppose P0 = 1/4, P1 = 1/4, P2 = 1/2. Then

µ = ∑_{j=0}^∞ jPj = 0 · (1/4) + 1 · (1/4) + 2 · (1/2) = 5/4 > 1.

Furthermore, (∗) implies

π0 = π0⁰ · (1/4) + π0¹ · (1/4) + π0² · (1/2) = 1/4 + (1/4)π0 + (1/2)π0²

⇔ 2π0² − 3π0 + 1 = 0.

Smallest positive sol’n is π0 = 1/2.
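Aside: iterating π0 ← ∑_j π0^j Pj from 0 converges upward to the smallest root, so the computation takes only a few lines (Python):

p = [0.25, 0.25, 0.5]                        # Pj for j = 0, 1, 2

def extinction_prob(p, iters=200):
    """Iterate pi <- sum_j Pj * pi^j starting at 0; converges to the smallest
    nonnegative solution of (*) when mu > 1."""
    pi = 0.0
    for _ in range(iters):
        pi = sum(pj * pi**j for j, pj in enumerate(p))
    return pi

print(extinction_prob(p))                    # 0.5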
