795M Winter 2000: Hidden Markov Models
Chris Brew, The Ohio State University

TRANSCRIPT

Page 1:

Hidden Markov Models

Chris Brew

The Ohio State University

Page 2:

Introduction

• Dynamic Programming
• Markov models as effective tools for language modelling
• How to solve three classic problems
  • Calculate the probability of a corpus given a model
  • Guess the sequence of states passed through
  • Adapt the model to the corpus
• Generalization of word-confetti

Page 3:

Edit Distance

• You have an editor that can
  • Insert a character
  • Delete a character
  • Substitute one character for another
• The edit distance between two sequences x1…xn and y1…ym is the smallest number of these elementary operations that will transform x1…xn into y1…ym.

Page 4:

Algorithm for edit distance

• Fill up a rectangular array of intermediate results starting at the bottom left and working up to the top right.

• This is time efficient, because it avoids backtracking.

• It can be made space efficient, because not all the entries in the array are relevant to the best path

Page 5:

Initialization

  [Figure: the empty edit-distance array for CHEAP (columns, left to right) against SHOPS (rows, written bottom to top), with 0 entered in the bottom-left cell.]

Page 6:

Initialization

• def sdist(string1, string2):
      delCost = 1.0
      insCost = 1.0
      substCost = 1.0
      m = len(string1)
      n = len(string2)
      d[0][0] = 0.0
      …

• This code is not a complete program; it needs imports and so on (a fuller sketch follows below).
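Putting the fragments from this and the next few slides together, a minimal runnable sketch might look like this (an illustration, not the original course code): it stores the array d as a Python dictionary keyed by (i, j) and uses the unit costs from the slide.

  def sdist(string1, string2, delCost=1.0, insCost=1.0, substCost=1.0):
      """Edit distance between string1 and string2 with the given costs."""
      m = len(string1)
      n = len(string2)
      d = {}
      d[0, 0] = 0.0
      # Borders: one deletion per character along one axis,
      # one insertion per character along the other.
      for i in range(m):
          d[i + 1, 0] = d[i, 0] + delCost
      for j in range(n):
          d[0, j + 1] = d[0, j] + insCost
      # Recursion: each cell is the cheapest of substitute, insert, delete.
      for i in range(m):
          for j in range(n):
              subst = 0.0 if string1[i] == string2[j] else substCost
              d[i + 1, j + 1] = min(d[i, j] + subst,
                                    d[i + 1, j] + insCost,
                                    d[i, j + 1] + delCost)
      return d[m, n]

  print(sdist("cheap", "shops"))   # 4.0, matching the worked example on the slides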

Page 7:

The borders

  [Figure: the array with its borders filled in. The first column reads 0 to 5 from bottom to top and the first row reads 0 1 2 3 4 5 from left to right; rows are labelled SHOPS (bottom to top), columns C H E A P.]

Page 8:

The borders

• We fill in the first row, adding entries with indices (1,0) through (m,0)
• …

      for i in range(m):
          d[i + 1, 0] = d[i, 0] + delCost

• We fill in the first column, adding entries with indices (0,1) through (0,n)
• …

      for j in range(n):
          d[0, j + 1] = d[0, j] + insCost

Page 9:

Recursion

      S   5   5 4 4 4 4
      P   4   4 3 3 3 3
      O   3   3 2 2 3 4
      H   2   2 1 2 3 4
      S   1   1 2 3 4 5
          0   1 2 3 4 5
              C H E A P

  (The completed array for CHEAP (columns) against SHOPS (rows): the bottom row and left column are the borders, and the top-right cell holds the edit distance, 4.)

Page 10:

Recursion

  for i in range(m):
      for j in range(n):
          if string1[i] == string2[j]:
              subst = 0
          else:
              subst = substCost
          d[i+1, j+1] = min(d[i, j] + subst,
                            d[i+1, j] + insCost,
                            d[i, j+1] + delCost)

Page 11:

Wrapup

• At the end, the total distance is in the cell at (m,n).

• This version says that there is no charge for matching a letter against itself, but that it costs one penalty point to match against anything else. It would be easy to vary this if we thought, for example, that it was less bad to confuse some letter pairs than to confuse others.
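For example, one way to vary it (a hypothetical sketch, not from the slides) is to replace the fixed substCost with a function of the two letters, so that easily confused pairs are cheaper to match:

  def confusion_cost(c1, c2):
      # Illustrative cost table: a few letter pairs that are easy to
      # confuse cost less than arbitrary substitutions.
      if c1 == c2:
          return 0.0
      if (c1, c2) in {("m", "n"), ("n", "m"), ("i", "l"), ("l", "i")}:
          return 0.5
      return 1.0

Inside the recursion, confusion_cost(string1[i], string2[j]) would then take the place of the fixed subst value.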

Page 12:

Dynamic Programming

• Like many other algorithms, DP is efficient because it systematically records intermediate results.

• There are actually exponentially many paths through the matrix, but only a polynomial amount of effort is needed to fill it out.

• If you’re clever, no need to fill all the cells

Page 13:

Topics

• The noisy channel model
• Markov models
• Hidden Markov models
• What is part-of-speech tagging?
• Three problems solved
  • Probability estimation (problem 1)
  • Viterbi algorithm (problem 2)
  • Forward-Backward algorithm (problem 3)

Page 14:

The noisy channel model

• Incomplete information

  Words + Parts-of-speech  →  Noisy Channel  →  Words only

Page 15:

Markov Models

• States and transitions (with probabilities)

  [State diagram: three states ("the", "dogs", "bit") linked by probability-labelled transitions.]

Page 16:

Matrix form of Markov models

• Transition matrix (A):

           The    Dogs   Bit
   The     0.01   0.46   0.53
   Dogs    0.05   0.15   0.80
   Bit     0.77   0.32   0.01

• Start with initial probabilities p(0):

   The     0.7
   Dogs    0.2
   Bit     0.1

Page 17:

Using Markov models

• Choose an initial state from p(0). Say it was "the".

• Choose a transition from the "the" row of A. If we choose "dogs", that has probability 0.46.

• But we can get to "dogs" from other places too:
  p(1)["dogs"] = p(0)["the"]*0.46 + p(0)["dogs"]*0.15 + p(0)["bit"]*0.32

• After N time steps, p(N) = A^N p(0)

Page 18:

Using Markov models II

• If we want the whole of p(1) we can do it efficiently by multiplying the matrix A by the vector p(0) (see the sketch below).

• We can do the same to get p(2) from p(1).

• After N time steps, p(N) = A^N p(0).

• Best path and string probability are also not hard.
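A small numerical sketch of this step (my own illustration in NumPy, using the matrix and p(0) from the earlier slide; with A laid out with one row per current state, the one-step update is written p(0)·A):

  import numpy as np

  # Transition matrix A (rows = current state, columns = next state)
  # and initial distribution p(0), in the order the, dogs, bit.
  A = np.array([[0.01, 0.46, 0.53],
                [0.05, 0.15, 0.80],
                [0.77, 0.32, 0.01]])
  p0 = np.array([0.7, 0.2, 0.1])

  p1 = p0 @ A       # distribution after one step
  print(p1[1])      # p(1)["dogs"] = 0.7*0.46 + 0.2*0.15 + 0.1*0.32 = 0.384

  p5 = p0 @ np.linalg.matrix_power(A, 5)   # after N = 5 steps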

Page 19:

Hidden Markov Models

• Now you don’t know the state sequence

  [State diagram: three hidden states (det, n, vb) with transitions between them; det emits "these", "a", "the"; n emits "dogs", "bit", "cats"; vb emits "dogs", "bit", "chased".]

Page 20:

Matrix form of HMMs

• Transition matrix (A) and emission matrix (B):

           DET    N      VB                 Dogs   Bit    The   …
   DET     0.01   0.89   0.10       DET     0.0    0.0    1.0
   N       0.30   0.20   0.50       N       0.2    0.1    0.0
   VB      0.67   0.23   0.10       VB      0.1    0.6    0.0

• Start with initial probabilities p(0):

   DET     0.7
   N       0.2
   VB      0.1

Page 21:

Using Hidden Markov models

• Generation:
  • Draw from p(0)
  • Choose a transition from the relevant row of A
  • Choose an emission from the relevant row of B
• After N time steps, p(N) = A^N p(0)
• Easy, because the state stays known during generation.
• If one wanted, one could generate all possible strings, annotating each with its probability (a sampling sketch follows below).
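A sketch of the generation procedure (my own illustration; the B rows shown on the slide do not sum to 1 because the rest of the vocabulary is elided, so the sketch renormalises them):

  import numpy as np

  rng = np.random.default_rng(0)

  states = ["DET", "N", "VB"]
  words  = ["dogs", "bit", "the"]        # only the columns shown on the slide
  A  = np.array([[0.01, 0.89, 0.10],
                 [0.30, 0.20, 0.50],
                 [0.67, 0.23, 0.10]])
  B  = np.array([[0.0, 0.0, 1.0],
                 [0.2, 0.1, 0.0],
                 [0.1, 0.6, 0.0]])
  p0 = np.array([0.7, 0.2, 0.1])

  def generate(n_steps):
      """Sample a (state, word) sequence from the HMM by following p(0), A and B."""
      s = rng.choice(3, p=p0)                      # draw the initial state
      output = []
      for _ in range(n_steps):
          w = rng.choice(3, p=B[s] / B[s].sum())   # emit from the current state
          output.append((states[s], words[w]))
          s = rng.choice(3, p=A[s])                # move to the next state
      return output

  print(generate(4))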

Page 22:

State sequences

• All you see is the output:
  • "The bit dogs …"
• But you can't tell which of these produced it:
  • DET N VB …
  • DET VB N …
  • DET N N …
  • DET VB VB …
• Each of these has a different probability.
• You don't know which state you are in.

Page 23:

The three problems

• Probability estimation
  • Given a sequence of observations O and a model M, find P(O|M).

• Best path estimation
  • Given a sequence of observations O and a model M, find a sequence of states I which maximizes P(O,I|M).

Page 24:

The third problem

• Training
  • Adjust the model parameters so that P(O|M) is as large as possible for the given O. This is a hard problem because there are so many adjustable parameters which could vary.

Page 25:

Probability estimation

• Easy in principle. Form joint probability of state sequences and observations P(O,I|M). Marginalize out I.

• But this involves sum over exponentially many paths.

• Efficient algorithm uses idea that probability of state at time t+1 is easy to get from knowledge of all states at time t.
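A compact sketch of the resulting forward recursion (my own code, following the convention stated later on the "Forward and Backward" slide that α_i(t) covers the words before time t; observations are assumed to be encoded as column indices into B):

  import numpy as np

  def forward_probs(obs, A, B, p0):
      """alpha[t, i] = p(words before time t, state i at time t)."""
      T, N = len(obs), len(p0)
      alpha = np.zeros((T, N))
      alpha[0] = p0                                    # no words generated yet
      for t in range(1, T):
          # emit the previous word from each state, then take a transition
          alpha[t] = (alpha[t - 1] * B[:, obs[t - 1]]) @ A
      return alpha

  def sentence_prob(obs, A, B, p0):
      """P(O | M): emit the final word and sum over the last states."""
      alpha = forward_probs(obs, A, B, p0)
      return float((alpha[-1] * B[:, obs[-1]]).sum())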

Page 26:

Probability estimation

• Getting the next time step

  [Figure: α_i(t+1) is assembled from α_DET(t), α_VB(t) and α_N(t). Each state j at time t emits the current word "bit" with probability b_j(bit) and then takes a transition a_{j,i} into state i, which will go on to emit "dogs". Summing over j gives α_i(t+1).]

Page 27:

Event 1

• Arrive in state j at time step t (a big event): α_j(t)

Page 28:

Event 2

• Generate word k from state j: b_jk

Page 29:

Event 3

• Transition from state j to state i: a_ji

Page 30:

Event 4

• Continue on to the end of the string (a big event): β_i(t+1)

Page 31:

Best path

  [Figure: the same trellis as for probability estimation (α_DET(t), α_VB(t), α_N(t), the emissions b_j(bit), the transitions a_{j,i}, and α_i(t+1) for the state about to emit "dogs").]

• Maximize, not sum (a Viterbi sketch follows below)
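A minimal Viterbi sketch (my own illustration, written in the standard textbook form where the word at each time step is folded in as the cell is filled; the best path it finds is the same):

  import numpy as np

  def viterbi(obs, A, B, p0):
      """Most probable state sequence for the observations (problem 2)."""
      T, N = len(obs), len(p0)
      delta = np.zeros((T, N))            # best score of any path ending in each state
      back  = np.zeros((T, N), dtype=int) # backpointers for recovering the path
      delta[0] = p0 * B[:, obs[0]]
      for t in range(1, T):
          for j in range(N):
              scores = delta[t - 1] * A[:, j]      # max in place of sum
              back[t, j]  = scores.argmax()
              delta[t, j] = scores.max() * B[j, obs[t]]
      # Trace back from the best final state.
      path = [int(delta[T - 1].argmax())]
      for t in range(T - 1, 0, -1):
          path.append(int(back[t, path[-1]]))
      path.reverse()
      return path, float(delta[T - 1].max())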

Page 32:

Backward probabilities

• Counterpart of the forward probabilities

  [Figure: β_i(t+1) looks ahead rather than back. From state i (which emits "dogs"), each transition a_{i,det}, a_{i,vb}, a_{i,n} leads to a state that emits the next word "cat" with probability b_det(cat), b_vb(cat) or b_n(cat) and then continues via β_det(t+2), β_vb(t+2) or β_n(t+2). The left half of the trellis (α_DET(t), α_VB(t), α_N(t), the emissions of "bit" and the transitions a_{j,i}) is as before.]

Page 33:

Forward and Backward

• Note that our notation is not quite the same as that in M&S p334. Ours is a state-emission HMM, theirs is an arc-emission HMM. See the note on p338 for more details.

• We assume that α_i(t) includes the probability of generating the words up to but not including the one emitted from the state just reached. β_i(t) therefore starts by generating this word (see the sketch below).
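Under that convention, a backward pass to pair with the earlier forward_probs sketch (again my own code) would look like this:

  import numpy as np

  def backward_probs(obs, A, B):
      """beta[t, i] = p(words from time t onwards | state i at time t);
      it starts by emitting the word at time t, as the slide says."""
      T, N = len(obs), A.shape[0]
      beta = np.zeros((T, N))
      beta[T - 1] = B[:, obs[T - 1]]                     # emit the last word, stop
      for t in range(T - 2, -1, -1):
          beta[t] = B[:, obs[t]] * (A @ beta[t + 1])     # emit, then transition
      return beta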

Page 34:

State probabilities

• α_i(t)·β_i(t) is p(in state i at time t, all words)

• The sum over all states k of α_k(t)·β_k(t) is p(sentence)

• p(in state i at time t) is α_i(t)·β_i(t) / (Σ_k α_k(t)·β_k(t))

• p(in state i) is the average over all time ticks of p(in state i at time t)
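With the two sketches above, these state probabilities take one more small function (an illustration; alpha and beta are the full arrays returned by forward_probs and backward_probs):

  def state_posteriors(alpha, beta):
      """gamma[t, i] = p(in state i at time t | sentence)."""
      gamma = alpha * beta                              # p(state i at t, all words)
      return gamma / gamma.sum(axis=1, keepdims=True)   # divide by p(sentence)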

Page 35:

Training

• Uses forward and backward probabilities

• Starts from an initial guess• Improves the initial guess using data• Stops at a (locally) best model• Specialization of the EM algorithm

Page 36:

Factorizing the path

• Consider p(in state i at time t and in state j at time t+1| Model,Observations)

• We could see this as two things• Get to i while generating words up to t

* Get from t to end of corpus while generating remaining words.

Page 37:

Factorizing the path 2

• Consider p(in state i at time t and in state j at time t+1 | Model, Observations)

• We could see this as four things:
  • Get to i while generating the words up to t
  • Generate a word from i
  • Make the correct transition from i to j
  • Get from t+1 to the end of the corpus while generating the remaining words

• The merit of this is that we can use the current model for the inside bit.

Page 38:

Factorizing the path 3

• Consider p(in state i at time t and in state j at time t+1 | Model, Observations)

• We could see this as four things:
  • Get to i while generating the words up to t
  • Repeat ad lib:
    • Generate a word from the current state
    • Make a transition that generates the word that we saw
  • Get from t+k to the end of the corpus while generating the remaining words

• If we wanted, the model for the inside bit could be a bit more complicated than we assumed above. Research topic.

Page 39:

Expected transition counts 2

• We have these things already:
  • Forward prob: α_i(t)
  • Transition prob: a_ij
  • Emission prob: b_j(word)
  • Backward prob: β_j(t+1)

Page 40:

Expected Transition counts 3

  [Figure: the pieces that are multiplied together for one expected transition count, laid out on the trellis: the forward probability α_i(t), the emission b_i(bit) and transition a_{i,j} at the current step, the emission b_j(dogs) at the next step, and the backward probability β(t+1) covering the continuation ("cat" onwards).]

Page 41:

Estimated transition probabilities

• α_i(t)·a_ij·b_j(word)·β_j(t+1) is count(in state i at time t, in state j at time t+1, all words)

• p(in state i at time t, in state j at time t+1) is α_i(t)·a_ij·b_j(word)·β_j(t+1) / (Σ_k α_i(t)·a_ik·b_k(word)·β_k(t+1))

• Sum over all time ticks to get the expected transition counts.

• Derive new probabilities from these counts.
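A sketch of this E-step in code (my own illustration, reusing forward_probs and backward_probs from the earlier sketches; under their convention the word emitted at time t belongs to state i, so the normaliser used here is p(sentence), a slight departure from the per-state sum written on the slide):

  import numpy as np

  def expected_transition_counts(obs, A, B, p0):
      """Expected number of i -> j transitions given the observations."""
      T, N = len(obs), len(p0)
      alpha = forward_probs(obs, A, B, p0)
      beta  = backward_probs(obs, A, B)
      prob  = (alpha[0] * beta[0]).sum()              # p(sentence)
      counts = np.zeros((N, N))
      for t in range(T - 1):
          # joint p(state i at t, state j at t+1, all words), then normalise
          xi = alpha[t][:, None] * B[:, obs[t]][:, None] * A * beta[t + 1][None, :]
          counts += xi / prob
      return counts

  # New transition probabilities: normalise the counts row by row, e.g.
  #   A_new = counts / counts.sum(axis=1, keepdims=True)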

Page 42:

Estimated emission probabilities

• Calculate the expected number of times in state j at the places where a particular word happened.

• Divide by the expected number of times in state j; the average over time ticks is the new emission probability.
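The corresponding sketch for the emission counts (again my own illustration, built on the state_posteriors values defined earlier):

  import numpy as np

  def expected_emission_counts(obs, A, B, p0):
      """Expected number of times each state emits each word."""
      alpha = forward_probs(obs, A, B, p0)
      beta  = backward_probs(obs, A, B)
      gamma = state_posteriors(alpha, beta)       # p(state i at time t | sentence)
      counts = np.zeros_like(B)
      for t, w in enumerate(obs):
          counts[:, w] += gamma[t]                # credit the word actually seen
      return counts

  # New emission probabilities: normalise each state's row of counts, e.g.
  #   B_new = counts / counts.sum(axis=1, keepdims=True)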

Page 43:

Re-estimation (for everybody)

• Recall that we guessed the initial parameters. Replace the initial parameters with new ones derived as above.

• These will be better than the originals because:
  • The data ensures that we only consider paths which can generate the words that we did see in the corpus
  • Paths which fit the data well get taken frequently, bad paths infrequently

Page 44:

Re-estimation (the details)

• Baum et al. show that this will always converge to a local maximum.

• An instance of Dempster, Laird and Rubin’s EM algorithm.

• For a modern review of EM see: ftp://ftp.cs.utoronto.ca/pub/radford/emk.pdf

Page 45:

Summary

• Three problems solved
• A simple model based on finite-state technology
• Sensitive to a limited range of context information
• Re-estimation as an instance of the EM algorithm

Page 46:

Where to get more information

• Maryland implementation in C
• My implementation in Python
• Matlab code by Zoubin Ghahramani
• Manning and Schütze, ch. 9
• Charniak, chapters 3 and 4
• http://www.georgetown.edu/cball/ling361/tagging_overview.html