CS433 Modeling and Simulation
Lecture 11: Continuous Markov Chains
Dr. Anis Koubâa, 01 May 2009
Al-Imam Mohammad Ibn Saud University

Slide 2: Goals for Today
- Understand the Markov property in the continuous case.
- Understand the difference between continuous-time and discrete-time Markov chains.
- Learn how to use continuous-time Markov chains to model stochastic processes.

Slide 3: "Discrete Time" versus "Continuous Time"
In discrete time, events occur at known points in time, so the step length is fixed: every interval has length Δt = 1. In continuous time, events occur at any point in time, so the intervals are variable: for event times s < u < v < t, the successive intervals are Δt_1 = u - s, Δt_2 = v - u, and Δt_3 = t - v.

Slide 4: Definition (Wikipedia): Continuous-Time Markov Chains
In probability theory, a continuous-time Markov chain (CTMC) is a stochastic process {X(t) : t ≥ 0} that satisfies the Markov property and takes values from a set called the state space. The Markov property states that at any times s > t > 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t. In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.

Slide 5: Definition 1: Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a continuous-time Markov chain (CTMC) if, for all 0 ≤ s ≤ t and non-negative integers i, j, x(u) with 0 ≤ u < s,

Pr{X(t) = j | X(s) = i, X(u) = x(u) for 0 ≤ u < s} = Pr{X(t) = j | X(s) = i}.

In addition, if this probability is independent of s and t (it depends only on the difference τ = t - s), then the CTMC has stationary transition probabilities.

Slide 6: Differences between Continuous-Time and Discrete-Time Markov Chains
- Time: a discrete chain moves at steps k, k + 1; a continuous chain is observed at arbitrary times s, t.
- Transient transition probability: P_ij(k) for the time interval [k, k+1] (discrete) versus P_ij(s, t) for the time interval [s, t] (continuous).
- Stationary transition probability: P_ij(1) = P_ij, over the fixed time unit equal to 1 (discrete), versus P_ij(τ) for the time duration τ = t - s, which depends on the duration (continuous).
- Transition probability to the same state: P_ii can be different from 0 (discrete), while P_ii(τ) = 0 for any time τ (continuous: a transition is counted only when the state actually changes).

Slide 7: Definition 2: Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a continuous-time Markov chain (CTMC) if:
- the amount of time spent in state i before making a transition to a different state is exponentially distributed with parameter v_i;
- when the process leaves state i, it enters state j with probability p_ij, where p_ii = 0 and Σ_j p_ij = 1;
- all transition probabilities and holding times are independent (in particular, the transition probability out of a state is independent of the time spent in the state).
Summary: the CTMC process moves from state to state according to a DTMC, and the time spent in each state is exponentially distributed.

Slide 8: Differences between DISCRETE and CONTINUOUS
Summary (repeated): the CTMC process moves from state to state according to a DTMC, and the time spent in each state is exponentially distributed. (Figure: sample paths of the CTMC process and the DTMC process.)

Slide 9: Five Minutes Break
You are free to discuss the previous slides with your classmates, to refresh a bit, or to ask questions.
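Definition 2 (slide 7) doubles as a simulation recipe: stay in the current state for an Exponential(v_i) amount of time, then jump according to the embedded DTMC. The following minimal Python sketch (not from the slides) illustrates this; the helper name simulate_ctmc and the two-state rates v and jump matrix P at the bottom are illustrative assumptions.

```python
import random

def simulate_ctmc(v, P, x0, t_end):
    """Simulate a CTMC from its holding-time rates v[i] and the jump
    probabilities P[i][j] of the embedded DTMC (with P[i][i] == 0).
    Returns the sampled path as a list of (arrival_time, state) pairs."""
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        t += random.expovariate(v[x])   # holding time ~ Exponential(v[x])
        if t >= t_end:
            break
        # Choose the next state according to row x of the embedded DTMC.
        x = random.choices(range(len(P[x])), weights=P[x])[0]
        path.append((t, x))
    return path

# Hypothetical 2-state chain; these rates and probabilities are assumptions.
v = [1.0, 2.0]                # v[i] = rate of leaving state i
P = [[0.0, 1.0],              # from state 0 the chain always jumps to 1
     [1.0, 0.0]]              # from state 1 the chain always jumps to 0
print(simulate_ctmc(v, P, x0=0, t_end=10.0))
```

Each sample path alternates exponential holding times with DTMC jumps, which is exactly the structure the summary describes.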
Slide 10: Chapman-Kolmogorov: Transition Function
Define the transition function (the analogue of the transition probability in a DTMC):

p_ij(s, t) = Pr{X(t) = j | X(s) = i}, for s ≤ t.

Using the Markov (memoryless) property, the transition functions satisfy the Chapman-Kolmogorov equation

p_ij(s, t) = Σ_k p_ik(s, u) p_kj(u, t), for s ≤ u ≤ t.

Agenda: transition matrix, state holding time, transition rate, transition probability, time-homogeneous case.

Time Homogeneous Case

Slides 12-13: Homogeneous Case
In the homogeneous case, the transition function depends only on the elapsed time τ = t - s, and we write p_ij(τ). The transition rate matrix Q = [q_ij] is defined by:
- q_ij is the transition rate at which the chain enters state j from state i;
- Λ(i) = -q_ii is the transition rate at which the chain leaves state i.

Comparison with the discrete case:
- Discrete Markov chain: the chain moves from state i to state j with transition probability P_ij; the transition time is deterministic (one transition per slot).
- Continuous Markov chain: the chain moves from state i to state j at rate q_ij = Λ(i) · P_ij, where P_ij is the transition probability, q_ij is the input rate of state j from state i, and Λ(i) is the output rate from state i toward all its neighbor states; the transition time is random.

Slide 14: Transition Probability Matrix in the Homogeneous Case
Thus, if P(τ) is the transition matrix after a time period τ, then P(0) = I, and q_ij = p'_ij(0): the instantaneous transition rate from i to j is the derivative of the transition function at τ = 0.

Slide 15: Two Minutes Break
You are free to discuss the previous slides with your classmates, to refresh a bit, or to ask questions. Next: state holding time.

Slide 16: State Holding and Transition Time
In a CTMC, the process makes a transition from one state to another after it has spent an amount of time in the state it starts from. This amount of time is defined as the state holding time.

Theorem (state holding time of a CTMC): the state holding time T_i := inf{t : X(t) ≠ i | X(0) = i} in a state i of a continuous-time Markov chain
- satisfies the memoryless property, and
- is exponentially distributed with parameter Λ(i).

Theorem (transition time in a CTMC): the time T_ij := inf{t : X(t) = j | X(0) = i} spent in state i before a transition to state j is exponentially distributed with parameter q_ij.

Slide 17: State Holding Time: Proofs
Suppose our continuous-time Markov chain has just arrived in state i. Define the random variable T_i to be the length of time the process spends in state i before moving to a different state. We call T_i the holding time in state i. The Markov property implies that the distribution of how much longer you will be in a given state i is independent of how long you have already been there:

Pr{T_i > s + t | T_i > s} = Pr{T_i > t}.

Proof (1) (by contradiction): Suppose it is time s, you are in state i, and Pr{T_i > s + t | T_i > s} ≠ Pr{T_i > t}, i.e., the amount of time you have already been in state i is relevant in predicting how much longer you will be there. Then for any time r < s, whether or not you were in state i at time r is relevant in predicting whether you will be in state i or a different state j at some future time s + t. Thus

Pr{X(s + t) = j | X(s) = i, X(r) = x(r)} ≠ Pr{X(s + t) = j | X(s) = i},

which violates the Markov property.

Proof (2): The only distribution satisfying the memoryless property is the exponential distribution. Hence T_i is exponentially distributed, which gives the result.

Slide 18: Example: Computer System
Assume a computer system where jobs arrive according to a Poisson process with rate a. Each job is processed using a First-In-First-Out (FIFO) policy. The processing time of each job is exponential with rate d. The computer has a buffer that can store up to two jobs waiting for processing. Jobs that find the buffer full are lost.

Slide 19: Example: Computer System: Questions
- Draw the state transition diagram.
- Find the rate transition matrix Q.
- Find the state transition matrix P.

Slide 20: Example
Take the state to be the number of jobs in the system (0, 1, 2, or 3: one in service plus up to two in the buffer), with arrival rate a and departure (service) rate d. (Figure: birth-death state transition diagram over states 0-3, with rate a on each right arrow and rate d on each left arrow.) The rate transition matrix is given by

Q = [ -a       a        0        0   ]
    [  d     -(a+d)     a        0   ]
    [  0       d      -(a+d)     a   ]
    [  0       0        d       -d   ]

and the state transition matrix of the embedded DTMC is given by

P = [ 0         1         0         0       ]
    [ d/(a+d)   0         a/(a+d)   0       ]
    [ 0         d/(a+d)   0         a/(a+d) ]
    [ 0         0         1         0       ]
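These matrices can also be built mechanically from the birth-death structure of the example, using the relation q_ij = Λ(i) · P_ij from slides 12-13. A short numpy sketch follows; the numeric values of a and d are assumptions for illustration.

```python
import numpy as np

a, d = 1.0, 2.0   # illustrative arrival and service rates (assumed values)

# States 0..3 = number of jobs in the system (one in service, up to two buffered).
n = 4
Q = np.zeros((n, n))
for i in range(n):
    if i < n - 1:
        Q[i, i + 1] = a          # arrival: i -> i + 1
    if i > 0:
        Q[i, i - 1] = d          # service completion: i -> i - 1
    Q[i, i] = -Q[i].sum()        # diagonal = minus the total exit rate

# Embedded DTMC: P_ij = q_ij / Lambda(i) for j != i, where Lambda(i) = -q_ii.
Lam = -np.diag(Q)
P = Q / Lam[:, None]
np.fill_diagonal(P, 0.0)

print(Q)
print(P)
```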
Transient State Probabilities

Slide 22: State Probabilities and Transient Analysis
Similar to the discrete-time case, we define the state probabilities

π_j(t) = Pr{X(t) = j},

or, in vector form, π(t) = [π_0(t), π_1(t), ...], with initial probabilities π(0). Using our previous notation (for a homogeneous MC), the state probabilities satisfy

dπ(t)/dt = π(t) Q,

with solution π(t) = π(0) e^{Qt}. Obtaining a general closed-form solution is not easy!

Steady State Probabilities

Slide 24: Steady State Analysis
Often we are interested in the long-run probabilistic behavior of the Markov chain, i.e., the limits

π_j = lim_{t→∞} Pr{X(t) = j}.

As with the discrete-time case, we need to address the following questions:
- Under what conditions do the limits exist?
- If they exist, do they form legitimate probabilities?
- How can we evaluate these limits?
These limits are referred to as steady-state probabilities, equilibrium state probabilities, or stationary state probabilities.

Slide 25: Steady State Analysis
Theorem: In an irreducible continuous-time Markov chain consisting of positive recurrent states, there exists a unique stationary state probability vector π with π_j = lim_{t→∞} Pr{X(t) = j}. These probabilities are independent of the initial state probabilities and can be obtained by solving

π Q = 0 with Σ_j π_j = 1.

Slide 26: Example
For the previous example, with the above rate transition matrix Q, what are the steady-state probabilities? Solve

π Q = 0, Σ_j π_j = 1.

Slide 27: Example
The solution is π_n = ρ^n · π_0 for n = 0, 1, 2, 3, where ρ = a/d and π_0 = 1 / (1 + ρ + ρ^2 + ρ^3). (A numerical sketch follows after slide 30.)

Uniformization of Markov Chains

Slide 29: Uniformization of Markov Chains
In general, discrete-time models are easier to work with, and the computers needed to solve such models operate in discrete time. Thus, we need a way to turn continuous-time Markov chains into discrete-time Markov chains.
Uniformization procedure: recall that the total rate out of state i is Λ(i) = -q_ii. Pick a uniform rate γ such that γ ≥ Λ(i) for all states i. The difference γ - Λ(i) corresponds to a fictitious event that returns the MC back to state i (a self-loop).

Slide 30: Uniformization of Markov Chains
Uniformization procedure: let P^U_ij be the transition probability from state i to state j for the discrete-time uniformized Markov chain. Then

P^U_ij = q_ij / γ for j ≠ i, and P^U_ii = 1 - Λ(i)/γ.

(Figure: state i with transitions to neighbors j and k at rates q_ij and q_ik, before and after uniformization.)
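To make the transient and steady-state computations concrete, here is a small numpy/scipy sketch that solves π Q = 0 with Σ_j π_j = 1 for the running example and evaluates π(t) = π(0) e^{Qt}; the numeric rate values are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import expm

a, d = 1.0, 2.0   # illustrative rates (assumed values)

# Rate transition matrix Q of the computer-system example (states 0..3).
Q = np.array([[-a,      a,        0.0,      0.0],
              [ d,    -(a + d),   a,        0.0],
              [ 0.0,    d,      -(a + d),   a  ],
              [ 0.0,    0.0,      d,       -d  ]])

# Steady state: solve pi Q = 0 together with sum(pi) = 1 by stacking the
# normalization condition under the transposed balance equations.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady state:", pi)

# Cross-check against the closed form pi_n = rho**n * pi_0, rho = a/d.
rho = a / d
pi0 = 1.0 / sum(rho**k for k in range(4))
print("closed form: ", [rho**k * pi0 for k in range(4)])

# Transient analysis: pi(t) = pi(0) exp(Q t), starting from an empty system.
pi_t = np.array([1.0, 0.0, 0.0, 0.0]) @ expm(Q * 5.0)
print("pi(5):       ", pi_t)
```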
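And a sketch of the uniformization procedure from slides 29-30, applied to the same Q. The helper name uniformize is an assumption; choosing γ as the largest exit rate gives the smallest admissible uniform rate.

```python
import numpy as np

def uniformize(Q, gamma=None):
    """Return the transition matrix P_U of the uniformized DTMC for a
    rate matrix Q, using a uniform rate gamma >= Lambda(i) for all i."""
    Lam = -np.diag(Q)                         # exit rates Lambda(i) = -q_ii
    if gamma is None:
        gamma = Lam.max()                     # smallest admissible uniform rate
    P_U = Q / gamma                           # off-diagonal entries q_ij / gamma
    np.fill_diagonal(P_U, 1.0 - Lam / gamma)  # self-loops 1 - Lambda(i)/gamma
    return P_U

a, d = 1.0, 2.0   # illustrative rates (assumed values)
Q = np.array([[-a,      a,        0.0,      0.0],
              [ d,    -(a + d),   a,        0.0],
              [ 0.0,    d,      -(a + d),   a  ],
              [ 0.0,    0.0,      d,       -d  ]])

P_U = uniformize(Q)
print(P_U)   # each row sums to 1; the diagonal entries are the self-loops
```

The uniformized chain has the same stationary distribution as the CTMC, so solving π P^U = π reproduces the steady-state vector computed above.

End of Chapter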