CS433 Modeling and Simulation
Lecture 06 – Part 02: Discrete Markov Chains
Dr. Anis Koubâa, http://10.2.230.10:4040/akoubaa/cs433/, 11 Nov 2008
Al-Imam Mohammad Ibn Saud University


Page 1: CS433 Modeling and Simulation Lecture 06 –  Part  02  Discrete Markov Chains


Page 2

Goals for Today

• Practical example for modeling a system using a Markov chain
• State holding time
• State probability and transient behavior

Page 3

Example
• Learn how to find a model of a given system
• Learn how to extract the state space

Page 4

Example: Two Processors System

Consider a two-processor computer system where time is divided into time slots and that operates as follows:
• At most one job can arrive during any time slot, and this happens with probability α.
• Jobs are served by whichever processor is available; if both are available, the job is given to processor 1.
• If both processors are busy, then the job is lost.
• When a processor is busy, it completes the job during any one time slot with probability β.
• If a job is submitted during a slot when both processors are busy but at least one processor completes a job, then the job is accepted (departures occur before arrivals).

Q1. Describe the automaton that models this system (not included).
Q2. Describe the Markov chain that models this system.
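The slot-by-slot dynamics above can be checked with a short Monte Carlo simulation. The sketch below is illustrative (the function and variable names are my own, not from the lecture); it tracks the number of busy processors as the state, applying departures before arrivals within each slot.

```python
import random

def simulate(alpha, beta, n_slots, seed=0):
    """Simulate the two-processor system; state = number of busy processors.

    Per time slot: departures happen first (each busy processor
    completes with probability beta), then at most one arrival
    occurs with probability alpha; an arrival that still finds
    both processors busy is lost.
    """
    rng = random.Random(seed)
    state = 0
    counts = [0, 0, 0]  # slots spent in states 0, 1, 2
    for _ in range(n_slots):
        # Departures first: each busy processor finishes w.p. beta.
        state -= sum(1 for _ in range(state) if rng.random() < beta)
        # Then at most one arrival; it is lost if both processors are busy.
        if rng.random() < alpha and state < 2:
            state += 1
        counts[state] += 1
    return [c / n_slots for c in counts]
```

Running `simulate(0.5, 0.7, 100000)` gives empirical state frequencies that can be compared against the transition-matrix analysis on the following slides.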

Page 5

Example: Automaton (not included)
Let the number of jobs currently processed by the system be the state; then the state space is X = {0, 1, 2}.
Event set: a: job arrival, d: job departure.
Feasible event set: if X = 0, then Γ(X) = {a}; if X = 1 or 2, then Γ(X) = {a, d}.

State Transition Diagram: states 0, 1, 2, with arcs labeled by feasible event combinations (a, d, dd, a,d, a,d,d); figure omitted.

Page 6

Example: Alternative Automaton (not included)
Let (X1, X2) indicate whether processor 1 and processor 2 are busy, with Xi ∈ {0, 1}.
Event set: a: job arrival, di: job departure from processor i.
Feasible event set:
If X = (0,0), then Γ(X) = {a}. If X = (0,1), then Γ(X) = {a, d2}. If X = (1,0), then Γ(X) = {a, d1}. If X = (1,1), then Γ(X) = {a, d1, d2}.

State Transition Diagram: states 00, 01, 10, 11, with arcs labeled by feasible event combinations (a, d1, d2, d1,d2, a,d1, a,d2, a,d1,d2); figure omitted.

Page 7

Example: Markov Chain

In the state transition diagram of the Markov chain, each transition is simply marked with its transition probability.

State transition diagram (figure omitted): states 0, 1, 2 with transition probabilities p00, p01, p10, p11, p12, p20, p21, p22.

With departures occurring before arrivals, the transition probabilities are:

p00 = 1 - α        p01 = α                          p02 = 0
p10 = β(1 - α)     p11 = (1 - α)(1 - β) + αβ        p12 = α(1 - β)
p20 = (1 - α)β^2   p21 = 2β(1 - β)(1 - α) + αβ^2    p22 = (1 - β)^2 + 2αβ(1 - β)
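These entries can be computed numerically. A minimal sketch (the function name is my own), assuming the departures-before-arrivals dynamics of the example:

```python
def transition_matrix(alpha, beta):
    """Transition probabilities for the two-processor system
    (departures occur before arrivals within each slot)."""
    p00 = 1 - alpha
    p01 = alpha
    p02 = 0.0
    p10 = beta * (1 - alpha)
    p11 = (1 - alpha) * (1 - beta) + alpha * beta
    p12 = alpha * (1 - beta)
    p20 = (1 - alpha) * beta ** 2
    p21 = 2 * beta * (1 - beta) * (1 - alpha) + alpha * beta ** 2
    p22 = (1 - beta) ** 2 + 2 * alpha * beta * (1 - beta)
    return [[p00, p01, p02], [p10, p11, p12], [p20, p21, p22]]
```

For α = 0.5 and β = 0.7 this reproduces the numeric matrix used on the next slide, and each row sums to 1 as required for a stochastic matrix.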

Page 8

Example: Markov Chain

Suppose that α = 0.5 and β = 0.7, then,

State transition diagram (figure omitted) with the probabilities pij evaluated numerically.

P = [pij] =
    0.5    0.5    0
    0.35   0.5    0.15
    0.245  0.455  0.3

Page 9

How much time does the chain spend in a state before it moves to another one?

State Holding Time

Page 10

State Holding Times

Suppose that at time step k the Markov chain has transitioned into state Xk = i. An interesting question is how long it will stay in state i.
Let V(i) be the random variable that represents the number of consecutive time slots that Xk = i.
We are interested in the quantity Pr{V(i) = n}.

Pr{V(i) = n} = Pr{X_{k+n} ≠ i, X_{k+n-1} = i, ..., X_{k+1} = i | X_k = i}

Using the identity Pr{A, B | C} = Pr{A | B, C} · Pr{B | C}:

Pr{V(i) = n} = Pr{X_{k+n} ≠ i | X_{k+n-1} = i, ..., X_k = i} · Pr{X_{k+n-1} = i, ..., X_{k+1} = i | X_k = i}

Page 11

State Holding Times

By the Markov property, each factor conditions only on the most recent state:

Pr{V(i) = n} = Pr{X_{k+n} ≠ i | X_{k+n-1} = i} · Pr{X_{k+n-1} = i, ..., X_{k+1} = i | X_k = i}

Repeating the same steps for the remaining factor, and using

1 - p_ii = Pr{X_{k+n} ≠ i | X_{k+n-1} = i},    p_ii = Pr{X_{k+1} = i | X_k = i},

we obtain

Pr{V(i) = n} = (1 - p_ii) · p_ii^(n-1)

This is the geometric distribution with parameter 1 - p_ii. Clearly, V(i) has the memoryless property.
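The geometric result can be verified by simulation: remaining in state i for one more slot is an independent event with probability p_ii, so the mean holding time is E[V(i)] = 1/(1 - p_ii). A sketch (using p_ii = 0.5, the self-loop probability p11 of state 1 in the numeric example; the function name is my own):

```python
import random

def sample_holding_time(p_stay, rng):
    """Draw one holding time: count the slots until the chain leaves,
    staying one extra slot with probability p_stay each time."""
    n = 1
    while rng.random() < p_stay:
        n += 1
    return n

rng = random.Random(1)
p_ii = 0.5  # p11 in the example chain
samples = [sample_holding_time(p_ii, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# Geometric distribution mean: E[V] = 1 / (1 - p_ii) = 2
```

The empirical mean converges to 1/(1 - p_ii) = 2, matching the geometric distribution derived above.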

Page 12

State Probabilities

A quantity we are usually interested in is the probability of finding the chain at the various states at step k; we define

π_i(k) = Pr{X_k = i}

For all possible states, we define the vector

π(k) = [π_0(k), π_1(k), ...]

Using total probability, we can write

π_i(k + 1) = Σ_j Pr{X_{k+1} = i | X_k = j} · Pr{X_k = j} = Σ_j p_ji(k) · π_j(k)

In vector form, one can write

π(k + 1) = π(k) P(k)

or, for a homogeneous Markov chain, π(k + 1) = π(k) P.

Page 13

State Probabilities Example

Suppose that

π(0) = [1  0  0]

with

P =
    0.5    0.5    0
    0.35   0.5    0.15
    0.245  0.455  0.3

Find π(k) for k = 1, 2, ...

π(1) = π(0) P = [1  0  0] P = [0.5  0.5  0]

Transient behavior of the system: in general, the transient behavior is obtained by solving the difference equation π(k + 1) = π(k) P.
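The difference equation can be iterated directly to obtain the transient behavior. A minimal sketch using the example's P and π(0) (the helper name is my own):

```python
def step(pi, P):
    """One step of pi(k+1) = pi(k) P (row vector times matrix)."""
    return [sum(pi[j] * P[j][i] for j in range(len(pi)))
            for i in range(len(P[0]))]

P = [[0.5,   0.5,   0.0],
     [0.35,  0.5,   0.15],
     [0.245, 0.455, 0.3]]

pi = [1.0, 0.0, 0.0]  # pi(0): the chain starts in state 0
for k in range(1, 4):
    pi = step(pi, P)
    print(k, [round(x, 4) for x in pi])  # k=1 gives [0.5, 0.5, 0.0]
```

Each iterate remains a probability vector (its entries sum to 1), and as k grows the vectors approach the chain's stationary distribution.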