Probabilistic Reasoning Over Time Using Hidden Markov Models
PROBABILISTIC REASONING OVER TIME USING HIDDEN MARKOV MODELS
Minmin Chen
CONTENTS
Sections 15.1–15.3
TIME AND UNCERTAINTY
Agent: security guard at some secret underground installation
Observation: Is the director coming with an umbrella?
State: Rain or not
Noisy sensor
Not fully observable
TIME AND UNCERTAINTY
Observation: Measured heart rate, electrocardiogram (ECG), patient's activity
State: Atrial fibrillation? Tachycardia? Bradycardia?
Noisy sensor
Not fully observable
STATES AND OBSERVATIONS
Unobservable state variable: Xt
Observable evidence variable: Et
Example 1: for each day t, the state is rain and the evidence is the umbrella, giving the sequences R1, R2, R3, … and U1, U2, U3, …
Example 2: for each recording, Et = {Measured_heart_rate t, ECG t, Activity t} and Xt = {AF t, Tachycardia t, Bradycardia t}
ASSUMPTION 1: STATIONARY PROCESS

The world changes, but the laws governing how it changes do not: the transition model P(Xt | Xt-1) and the sensor model P(Et | Xt) remain the same for all t.
ASSUMPTION 2: MARKOV PROCESS

The current state depends only on a finite history of previous states. For a first-order Markov process,

P(Xt | X0:t-1) = P(Xt | Xt-1)

so the process is fully specified by the set of states, the transition probability matrix P(Xt | Xt-1), and the initial distribution P(X0).
ASSUMPTION 3: RESTRICTION TO THE PARENTS OF EVIDENCE

The evidence variable at time t depends only on the current state:

P(Et | X0:t, E0:t-1) = P(Et | Xt)
HIDDEN MARKOV MODEL

Hidden state sequence:  … → Rt-1 → Rt → Rt+1 → …
Evidence sequence:      … Ut-1, Ut, Ut+1 …

Transition model P(Rt = true | Rt-1):
  Rt-1 = true:  0.7
  Rt-1 = false: 0.3

Sensor model P(Ut = true | Rt):
  Rt = true:  0.9
  Rt = false: 0.2
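The umbrella model above is small enough to write down directly. A minimal sketch, encoding the transition and sensor tables as nested dictionaries and assuming the uniform day-0 prior used in the worked examples (variable names are illustrative):

```python
# The umbrella HMM, encoded as plain dictionaries.
# Keys are rain states (True/False); names are illustrative.

PRIOR = {True: 0.5, False: 0.5}                 # initial distribution P(R0)
TRANSITION = {True:  {True: 0.7, False: 0.3},   # P(Rt | Rt-1)
              False: {True: 0.3, False: 0.7}}
SENSOR = {True:  {True: 0.9, False: 0.1},       # P(Ut | Rt)
          False: {True: 0.2, False: 0.8}}

# sanity check: every distribution sums to 1
for dist in (PRIOR, *TRANSITION.values(), *SENSOR.values()):
    assert abs(sum(dist.values()) - 1.0) < 1e-9
```

The later algorithms (filtering, smoothing, Viterbi) all reduce to sums and maxes over these three tables.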
JOINT DISTRIBUTION OF HMMS

Applying Bayes rule, the chain rule, and the conditional-independence assumptions above, the joint distribution factorizes as:

P(X0:t, E1:t) = P(X0) ∏ i=1..t P(Xi | Xi-1) P(Ei | Xi)
EXAMPLE

Day:      1     2     3     4     5
Umbrella: true  true  false true  true
Rain:     true  true  false true  true

(transition and sensor models as in the tables above)
EXAMPLE

The joint probability of this five-day sequence is the product of the corresponding initial, transition, and sensor probabilities.
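The joint probability of the five-day example sequence can be computed directly from the model tables. A minimal sketch, assuming a day-1 prior P(r1) = 0.5 (which follows from the uniform initial distribution and the symmetric transition model; the function name `joint` is illustrative):

```python
# Joint probability of a concrete (rain, umbrella) sequence under the
# chain-rule factorization:
#   P(r1:5, u1:5) = P(r1) * prod_t P(r_t | r_{t-1}) * prod_t P(u_t | r_t)

def joint(rain, umbrella):
    trans = {(True, True): 0.7, (True, False): 0.3,    # P(r_t | r_{t-1})
             (False, True): 0.3, (False, False): 0.7}
    sensor = {(True, True): 0.9, (True, False): 0.1,   # P(u_t | r_t)
              (False, True): 0.2, (False, False): 0.8}
    p = 0.5 * sensor[(rain[0], umbrella[0])]           # day-1 prior times sensor
    for prev, cur, u in zip(rain, rain[1:], umbrella[1:]):
        p *= trans[(prev, cur)] * sensor[(cur, u)]
    return p

seq = [True, True, False, True, True]
print(round(joint(seq, seq), 6))  # 0.011574
```

Note how small the joint probability is even for the single most plausible sequence: with 2^5 possible state sequences, probability mass spreads thin, which motivates the inference tasks below.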
HOW ACCURATE ARE THESE ASSUMPTIONS?

- Depends on the problem domain
- To overcome violations of the assumptions:
  - Increase the order of the Markov process model
  - Increase the set of state variables
INFERENCE IN TEMPORAL MODELS

- Filtering: posterior distribution over the current state, given all evidence to date: P(Xt | e1:t)
- Prediction: posterior distribution over a future state, given all evidence to date: P(Xt+k | e1:t), k > 0
- Smoothing: posterior distribution over a past state, given all evidence to date: P(Xk | e1:t), 0 ≤ k < t
- Most likely explanation: the sequence of states most likely to have generated the observations: argmax over x1:t of P(x1:t | e1:t)
FILTERING & PREDICTION

The filtered estimate is computed recursively: the posterior distribution at time t is projected forward through the transition model (prediction), then updated with the new evidence through the sensor model (filtering):

P(Xt+1 | e1:t+1) = α P(et+1 | Xt+1) Σxt P(Xt+1 | xt) P(xt | e1:t)

where the sum over xt combines the transition model with the posterior distribution at time t, and P(et+1 | Xt+1) is the sensor model.
PROOF

Derivation of the forward algorithm:

P(Xt+1 | e1:t+1)
= P(Xt+1 | e1:t, et+1)                                    (dividing up the evidence)
= α P(et+1 | Xt+1, e1:t) P(Xt+1 | e1:t)                   (Bayes rule)
= α P(et+1 | Xt+1) P(Xt+1 | e1:t)                         (conditional independence)
= α P(et+1 | Xt+1) Σxt P(Xt+1, xt | e1:t)                 (marginal probability)
= α P(et+1 | Xt+1) Σxt P(Xt+1 | xt, e1:t) P(xt | e1:t)    (chain rule)
= α P(et+1 | Xt+1) Σxt P(Xt+1 | xt) P(xt | e1:t)          (conditional independence)
INTERPRETATION & EXAMPLE

Filtering with U1 = true, U2 = true (transition and sensor models as in the tables above):

Day 0 prior: P(R0) = ⟨0.5, 0.5⟩

Day 1 prediction: P(R1) = ⟨0.5·0.7 + 0.5·0.3, 0.5·0.3 + 0.5·0.7⟩ = ⟨0.5, 0.5⟩
Day 1 update:     P(R1 | u1) = α ⟨0.9·0.5, 0.2·0.5⟩ = α ⟨0.45, 0.1⟩ = ⟨0.818, 0.182⟩

Day 2 prediction: P(R2 | u1) = ⟨0.818·0.7 + 0.182·0.3, 0.818·0.3 + 0.182·0.7⟩ = ⟨0.627, 0.373⟩
Day 2 update:     P(R2 | u1, u2) = α ⟨0.9·0.627, 0.2·0.373⟩ = α ⟨0.565, 0.075⟩ = ⟨0.883, 0.117⟩
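The two-day filtering computation can be sketched in a few lines. A minimal, illustrative implementation of the predict-then-update recursion for the umbrella model (the function name `filter_umbrella` is not from the slides):

```python
# Forward (filtering) recursion for the umbrella model:
#   P(X_{t+1} | e_{1:t+1}) = a * P(e_{t+1} | X_{t+1}) * sum_x P(X_{t+1} | x) P(x | e_{1:t})

def filter_umbrella(observations):
    trans = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
    sensor = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
    f = {True: 0.5, False: 0.5}  # day-0 prior P(R0)
    beliefs = []
    for u in observations:
        # prediction: push the current belief through the transition model
        pred = {x2: sum(trans[x1][x2] * f[x1] for x1 in f) for x2 in f}
        # update: weight by the sensor model, then renormalize
        unnorm = {x: sensor[x][u] * pred[x] for x in pred}
        z = sum(unnorm.values())
        f = {x: p / z for x, p in unnorm.items()}
        beliefs.append(f)
    return beliefs

b = filter_umbrella([True, True])
print(round(b[0][True], 3), round(b[1][True], 3))  # 0.818 0.883
```

Running it on two umbrella sightings reproduces the slide's values ⟨0.818, 0.182⟩ and ⟨0.883, 0.117⟩.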
LIKELIHOOD OF EVIDENCE SEQUENCE

The likelihood of the evidence sequence is P(e1:t) = Σxt P(xt, e1:t). The forward algorithm, run without normalization, computes exactly the message f1:t(Xt) = P(Xt, e1:t), so summing its entries over the final state gives the likelihood.
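A minimal sketch of that computation: the same forward recursion as before, but without renormalizing, so the final message sums to the evidence likelihood (the function name `likelihood` is illustrative):

```python
# Unnormalized forward message f(X_t) = P(X_t, e_{1:t});
# the evidence likelihood is the sum of its entries.

def likelihood(observations):
    trans = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
    sensor = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
    f = {True: 0.5, False: 0.5}  # day-0 prior P(R0)
    for u in observations:
        f = {x2: sensor[x2][u] * sum(trans[x1][x2] * f[x1] for x1 in trans)
             for x2 in trans}
    return sum(f.values())

print(round(likelihood([True, True]), 4))  # 0.3515
```

For long sequences these unnormalized values underflow, which is why practical implementations work in log space or renormalize at each step and accumulate the normalization constants.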
SMOOTHING

P(Xk | e1:t)
= P(Xk | e1:k, ek+1:t)                    (dividing up the evidence)
= α P(ek+1:t | Xk, e1:k) P(Xk | e1:k)     (Bayes rule)
= α P(ek+1:t | Xk) P(Xk | e1:k)           (conditional independence)
= α f1:k × bk+1:t

where f1:k = P(Xk | e1:k) is the forward message and bk+1:t = P(ek+1:t | Xk) is the backward message.
INTUITION

The backward message is computed by a recursion that runs backward from t: the sensor model for the next piece of evidence and the backward message at time k+1 are combined through the transition model to give the backward message at time k:

bk+1:t(Xk) = Σxk+1 P(ek+1 | xk+1) bk+2:t(xk+1) P(xk+1 | Xk)
BACKWARD

Derivation of the backward recursion:

P(ek+1:t | Xk)
= Σxk+1 P(ek+1:t | Xk, xk+1) P(xk+1 | Xk)               (marginal probability)
= Σxk+1 P(ek+1:t | xk+1) P(xk+1 | Xk)                   (conditional independence)
= Σxk+1 P(ek+1 | xk+1) P(ek+2:t | xk+1) P(xk+1 | Xk)    (chain rule + conditional independence)
INTERPRETATION & EXAMPLE

Smoothing the day-1 estimate with U1 = true, U2 = true (transition and sensor models as in the tables above):

Forward message:  f1:1 = P(R1 | u1) = ⟨0.818, 0.182⟩
Backward message: b3:2 = ⟨1, 1⟩
b2:2(R1) = Σr2 P(u2 | r2) b3:2(r2) P(r2 | R1) = ⟨0.9·1·0.7 + 0.2·1·0.3, 0.9·1·0.3 + 0.2·1·0.7⟩ = ⟨0.69, 0.41⟩
P(R1 | u1, u2) = α f1:1 × b2:2 = α ⟨0.818·0.69, 0.182·0.41⟩ = ⟨0.883, 0.117⟩
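The full forward-backward procedure can be sketched as follows; it reproduces the smoothed value above (function and variable names are illustrative, not from the slides):

```python
# Forward-backward smoothing for the umbrella model:
#   P(X_k | e_{1:t}) = a * f_{1:k}(X_k) * b_{k+1:t}(X_k)

def smooth(observations):
    trans = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
    sensor = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
    states = [True, False]

    # forward pass: fwd[k] = P(X_k | e_{1:k})
    cur = {True: 0.5, False: 0.5}
    fwd = []
    for u in observations:
        unnorm = {x2: sensor[x2][u] * sum(trans[x1][x2] * cur[x1] for x1 in states)
                  for x2 in states}
        z = sum(unnorm.values())
        cur = {x: p / z for x, p in unnorm.items()}
        fwd.append(cur)

    # backward pass: b = P(e_{k+1:t} | X_k), initialized to all ones
    b = {True: 1.0, False: 1.0}
    smoothed = [None] * len(observations)
    for k in range(len(observations) - 1, -1, -1):
        unnorm = {x: fwd[k][x] * b[x] for x in states}
        z = sum(unnorm.values())
        smoothed[k] = {x: p / z for x, p in unnorm.items()}
        u = observations[k]
        b = {x1: sum(sensor[x2][u] * b[x2] * trans[x1][x2] for x2 in states)
             for x1 in states}
    return smoothed

s = smooth([True, True])
print(round(s[0][True], 3))  # 0.883
```

Smoothing raises the day-1 estimate from 0.818 to 0.883: the second umbrella sighting is evidence that day 2 was rainy, which in turn makes rain on day 1 more likely.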
FINDING THE MOST LIKELY SEQUENCE

Given an observation sequence, e.g. Umbrella = (true, true, true, true, true), which state sequence, e.g. Rain = (true, true, true, true, true), is most likely to have generated it?
FINDING THE MOST LIKELY SEQUENCE
Enumeration:
- Enumerate all possible state sequences
- Compute the joint probability of each and pick the sequence with the maximum
- Problem: the total number of state sequences grows exponentially with the length of the sequence

Smoothing:
- Calculate the posterior distribution for each time step k
- In each step k, find the state with the maximum posterior probability
- Combine these states to form a sequence
- Problem: the most likely state at each step, chosen independently, need not form the most likely sequence overall
VITERBI ALGORITHM

Umbrella:     true   true   false  true   true
m1:t(true):   .8182  .5155  .0361  .0334  .0210
m1:t(false):  .1818  .0491  .1237  .0173  .0024
PROOF

Derivation of the Viterbi recursion, for the message m1:t(Xt) = max over x1:t-1 of P(x1:t-1, Xt | e1:t):

m1:t+1(Xt+1)
= max x1:t of P(x1:t, Xt+1 | e1:t, et+1)                                (dividing up the evidence)
= α max x1:t of P(et+1 | x1:t, Xt+1, e1:t) P(x1:t, Xt+1 | e1:t)         (Bayes rule)
= α P(et+1 | Xt+1) max x1:t of P(Xt+1 | x1:t, e1:t) P(x1:t | e1:t)      (chain rule)
= α P(et+1 | Xt+1) max xt of [ P(Xt+1 | xt) max x1:t-1 of P(x1:t-1, xt | e1:t) ]   (conditional independence)
= α P(et+1 | Xt+1) max xt of [ P(Xt+1 | xt) m1:t(xt) ]
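The recursion above, plus back-pointers for recovering the argmax, gives the full algorithm. A minimal sketch for the umbrella model that reproduces the trellis values on the previous slide (function and variable names are illustrative):

```python
# Viterbi for the umbrella model:
#   m_{1:t+1}(x') = P(e_{t+1} | x') * max_x P(x' | x) m_{1:t}(x)
# with back-pointers to recover the most likely state sequence.

def viterbi(observations):
    trans = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
    sensor = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
    states = [True, False]

    # m_{1:1}: the normalized day-1 filtered message, as in the trellis
    m = {x: sensor[x][observations[0]] * 0.5 for x in states}
    z = sum(m.values())
    m = {x: p / z for x, p in m.items()}

    trellis, back = [m], []
    for u in observations[1:]:
        prev = trellis[-1]
        # best predecessor of each state, then the max-product recursion
        ptr = {x2: max(states, key=lambda x1: trans[x1][x2] * prev[x1])
               for x2 in states}
        m = {x2: sensor[x2][u] * trans[ptr[x2]][x2] * prev[ptr[x2]]
             for x2 in states}
        trellis.append(m)
        back.append(ptr)

    # backtrack from the most probable final state
    best = max(states, key=lambda x: trellis[-1][x])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path)), trellis

path, trellis = viterbi([True, True, False, True, True])
print(path)  # [True, True, False, True, True]
```

Unlike smoothing, Viterbi replaces the sum in the forward recursion with a max, so each trellis entry tracks the probability of the single best path ending in that state, and the back-pointers make the final sequence globally consistent.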