Markov Tutorial 8

Upload: maundumi

Post on 03-Apr-2018


  • 7/28/2019 MARKOV Tutorial8

    Tutorial 8

    Markov Chains

    Markov Chains

    Consider a sequence of random variables $X_0, X_1, \ldots$, where the set of
    possible values of these random variables is $\{0, 1, \ldots, M\}$.

    $X_n$ : the state of some system at time $n$

    $X_n = i$ : the system is in state $i$ at time $n$, where $0 \le i \le M$.

    Markov Chains

    $X_0, X_1, \ldots$ form a Markov chain if

    $$P\{X_{n+1}=j \mid X_n=i, X_{n-1}=i_{n-1}, \ldots, X_1=i_1, X_0=i_0\}
      = P\{X_{n+1}=j \mid X_n=i\} = P_{ij}$$

    $P_{ij}$ = transition prob.
    = prob. that the system is in state $i$ and it will next be in state $j$.
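The Markov property above can be illustrated with a small simulation sketch: the next state is drawn using only the current state, never the earlier history. The two-state matrix below is a hypothetical example, not one from the slides.

```python
import random

def simulate(P, start, n_steps, seed=0):
    # Walk a Markov chain: the next state is sampled from row P[state],
    # so it depends only on the current state (the Markov property).
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(n_steps):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

# hypothetical two-state chain
P = [[0.5, 0.5],
     [0.2, 0.8]]
path = simulate(P, start=0, n_steps=10)
```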

    Transition Matrix

    Transition prob., $P_{ij}$; transition matrix, $P$:

    $$P = \begin{pmatrix}
    P_{00} & P_{01} & \cdots & P_{0M} \\
    P_{10} & P_{11} & \cdots & P_{1M} \\
    \vdots & \vdots &        & \vdots \\
    P_{M0} & P_{M1} & \cdots & P_{MM}
    \end{pmatrix}$$

    where $P_{ij} \ge 0$ and $\sum_{j=0}^{M} P_{ij} = 1$ for $i = 0, 1, \ldots, M$.
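The two conditions above (non-negative entries, each row summing to 1) can be checked mechanically. A minimal sketch, using hypothetical matrices:

```python
def is_transition_matrix(P, tol=1e-9):
    # Every entry must be non-negative and every row must sum to 1.
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in P
    )

P = [[0.9, 0.1, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.0, 1.0]]   # hypothetical valid 3-state matrix
bad = [[0.9, 0.2],
       [0.5, 0.5]]      # invalid: first row sums to 1.1
```

The tolerance absorbs floating-point rounding in the row sums.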

    Example 1

    Suppose that whether or not it rains tomorrow depends on previous weather
    conditions only through whether or not it is raining today.

    If it rains today, then it will rain tomorrow with prob. 0.7; and if it does
    not rain today, then it will not rain tomorrow with prob. 0.6.

    Example 1

    Let state 0 be the rainy day and
        state 1 be the sunny day.

    The above is a two-state Markov chain having transition probability matrix

    $$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$

    If the starting distribution in Day 0 is $u = [1 \;\; 0]$, then

    distribution in Day 1: $u^{(1)} = uP = [1 \;\; 0]\begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} = [0.7 \;\; 0.3]$

    distribution in Day 2: $u^{(2)} = u^{(1)}P = uP^2 = [0.61 \;\; 0.39]$
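The Day 1 and Day 2 distributions above can be reproduced numerically. A quick sketch:

```python
def step(u, P):
    # One step of the chain: multiply row vector u by transition matrix P.
    return [sum(u[k] * P[k][j] for k in range(len(u)))
            for j in range(len(P[0]))]

P = [[0.7, 0.3],
     [0.4, 0.6]]
u = [1.0, 0.0]    # Day 0: it is raining (state 0)
u1 = step(u, P)   # Day 1: [0.7, 0.3]
u2 = step(u1, P)  # Day 2: [0.61, 0.39]
```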

    Transition matrix

    The probability that the chain is in state $i$ after $n$ steps is the $i$th
    entry in the vector

    $$u^{(n)} = uP^n$$

    where
    $P$ : transition matrix of a Markov chain,
    $u$ : probability vector representing the starting distribution.

    Ergodic Markov Chains

    A Markov chain is called an ergodic chain (irreducible chain) if it is
    possible to go from every state to every state (not necessarily in one
    move).

    A Markov chain is called a regular chain if some power of the transition
    matrix has only positive elements.

    Regular Markov Chains

    For a regular Markov chain with transition matrix $P$ and

    $$W = \lim_{n \to \infty} P^n,$$

    the $i$th entry in the vector $w$ is the long run probability of state $i$,
    where $w = [w_0 \;\; w_1 \;\; \ldots \;\; w_M]$ is the common row of $W$
    and $wP = w$.
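The limit $W$ can be approximated by repeated matrix multiplication. A sketch, reusing the two-state matrix from Example 1:

```python
def mat_mul(A, B):
    # Plain matrix product for small lists-of-lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.7, 0.3],
     [0.4, 0.6]]
W = P
for _ in range(50):   # P^51 is already very close to the limit here
    W = mat_mul(W, P)
# Both rows of W approach the common row w = [4/7, 3/7].
```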

    Example 2

    From Example 1, the transition matrix is

    $$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$

    Solve $wP = w$:

    $$[w_1 \;\; w_2]\begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} = [w_1 \;\; w_2],
      \quad \text{where } w_1 + w_2 = 1$$

    $$\Rightarrow \begin{cases} 0.7\,w_1 + 0.4\,w_2 = w_1 \\ 0.3\,w_1 + 0.6\,w_2 = w_2 \end{cases}
      \Rightarrow w_1 = 4/7, \quad w_2 = 3/7$$

    The long run prob. for a rainy day is 4/7.
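The long-run probabilities can be checked exactly with rational arithmetic. A sketch using the standard-library `fractions` module:

```python
from fractions import Fraction

P = [[Fraction(7, 10), Fraction(3, 10)],
     [Fraction(4, 10), Fraction(6, 10)]]

# For a two-state chain, balance requires w1 * P[0][1] == w2 * P[1][0],
# together with w1 + w2 = 1:
w1 = P[1][0] / (P[0][1] + P[1][0])   # = 4/7
w2 = 1 - w1                          # = 3/7

# Verify stationarity: w P = w, exactly.
assert w1 * P[0][0] + w2 * P[1][0] == w1
assert w1 * P[0][1] + w2 * P[1][1] == w2
```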

    Markov chain with absorption state

    Example: Given the transition matrix (states 0 and 3 absorbing)

    $$P = \begin{pmatrix}
    1   & 0   & 0   & 0   \\
    0.1 & 0.3 & 0.2 & 0.4 \\
    0.4 & 0.3 & 0.1 & 0.2 \\
    0   & 0   & 0   & 1
    \end{pmatrix}$$

    calculate
    (i) the expected time to absorption
    (ii) the absorption prob.

    MC with absorption state

    First rewrite the transition matrix in canonical form (transient states 1
    and 2 first, then absorbing states 0 and 3):

    $$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}
        = \begin{pmatrix}
    0.3 & 0.2 & 0.1 & 0.4 \\
    0.3 & 0.1 & 0.4 & 0.2 \\
    0   & 0   & 1   & 0   \\
    0   & 0   & 0   & 1
    \end{pmatrix}$$

    $N = (I - Q)^{-1}$ is called the fundamental matrix for $P$.

    Entries of $N$:

    $n_{ij}$ = E(time in transient state $j$ | start at transient state $i$)

    MC with absorption state

    (i) E(time to absorption | start at $i$) = $i$th entry of
    $(I-Q)^{-1}(1, \ldots, 1)^T$

    $$N = (I-Q)^{-1} = \begin{pmatrix} 1.5789 & 0.3509 \\ 0.5263 & 1.2281 \end{pmatrix}$$

    $$(I-Q)^{-1}\begin{pmatrix}1\\1\end{pmatrix}
      = \begin{pmatrix} 1.5789 & 0.3509 \\ 0.5263 & 1.2281 \end{pmatrix}
        \begin{pmatrix}1\\1\end{pmatrix}
      = \begin{pmatrix} 1.9298 \\ 1.7544 \end{pmatrix}$$
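The fundamental matrix and the expected absorption times can be reproduced with a 2×2 inverse. A sketch:

```python
def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

Q = [[0.3, 0.2],
     [0.3, 0.1]]
I_minus_Q = [[1 - 0.3, -0.2],
             [-0.3, 1 - 0.1]]
N = inv2(I_minus_Q)          # ~ [[1.5789, 0.3509], [0.5263, 1.2281]]
t = [sum(row) for row in N]  # expected times to absorption: N * (1, 1)^T
# t ~ [1.9298, 1.7544], matching the slide.
```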

    MC with absorption state

    (ii) Absorption prob. $B = NR$

    $b_{ij}$ = P(absorbed in absorbing state $j$ | start at transient state $i$)

    $$B = (I-Q)^{-1}R
      = \begin{pmatrix} 1.5789 & 0.3509 \\ 0.5263 & 1.2281 \end{pmatrix}
        \begin{pmatrix} 0.1 & 0.4 \\ 0.4 & 0.2 \end{pmatrix}
      = \begin{pmatrix} 0.2983 & 0.7017 \\ 0.5439 & 0.4561 \end{pmatrix}$$
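The absorption probabilities follow by multiplying $N$ and $R$. A sketch that recomputes $B$ from $Q$ and $R$:

```python
def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Q = [[0.3, 0.2],
     [0.3, 0.1]]
R = [[0.1, 0.4],
     [0.4, 0.2]]
N = inv2([[1 - Q[0][0], -Q[0][1]],
          [-Q[1][0], 1 - Q[1][1]]])
B = mat_mul(N, R)   # ~ [[0.2983, 0.7017], [0.5439, 0.4561]]
# Each row of B sums to 1: absorption in some absorbing state is certain.
```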