First Order Hidden Markov Model
Theory and Implementation Issues

Mikael Nilsson
Department of Signal Processing
Blekinge Institute of Technology

Research Report 2005:02





First Order Hidden Markov Model
Theory and Implementation Issues

Mikael Nilsson

February, 2005

Research Report
Department of Signal Processing
Blekinge Institute of Technology, Sweden


Abstract

This report explains the theory of Hidden Markov Models (HMMs). The emphasis is on the theory aspects in conjunction with the implementation issues that are encountered in a floating point processor. The main theory and implementation issues are based on the use of a Gaussian Mixture Model (GMM) as the state density in the HMM, and a Continuous Density Hidden Markov Model (CDHMM) is assumed. Suggestions and advice related to the implementation are given for a typical pattern recognition task.


Contents

1 Introduction

2 Discrete-Time Markov Model
  2.1 Markov Model Example

3 Discrete-Time Hidden Markov Model
  3.1 The Urn and Ball Model
  3.2 Discrete Observation Densities
  3.3 Continuous Observation Densities
  3.4 Types of Hidden Markov Models
  3.5 Summary of HMM Parameters

4 Three Problems for Hidden Markov Models

5 Solution to Problem 1 - Probability Evaluation
  5.1 The Forward Algorithm
  5.2 The Backward Algorithm
  5.3 Scaling the Forward and Backward Variables
    5.3.1 The Scaled Forward Algorithm
    5.3.2 The Scaled Backward Algorithm

6 Solution to Problem 2 - “Optimal” State Sequence
  6.1 The Viterbi Algorithm
  6.2 The Alternative Viterbi Algorithm
  6.3 The Fast Alternative Viterbi Algorithm

7 Solution to Problem 3 - Parameter Estimation
  7.1 Baum-Welch Reestimation
    7.1.1 Multiple Observation Sequences
    7.1.2 Numerical Issues
  7.2 Initial Estimates of HMM Parameters

8 Clustering Algorithms
  8.1 The K-Means Algorithm
  8.2 The SOFM Algorithm

9 Training and Recognition using Hidden Markov Models
  9.1 Training of a Hidden Markov Model
  9.2 Recognition using Hidden Markov Models

10 Summary and Conclusions

A Derivation of Reestimation Formulas
  A.1 Optimization of Qπi
  A.2 Optimization of Qaij
  A.3 Optimization of Qbj
    A.3.1 Optimization of Qbj,cjk
    A.3.2 Optimization of Qbj,{µjk,Σjk}
  A.4 Summary


Chapter 1

Introduction

In statistical pattern recognition the aim is to perform supervised or unsupervised classification. Various methods and applications have been suggested in the literature; a good review can be found in [1].

Sequential pattern recognition, which can be seen as a special case of statistical pattern recognition, is currently mostly approached by the use of the Hidden Markov Model (HMM). In fact, current state-of-the-art speech recognition systems are based on HMMs. The fundamentals of speech recognition can be found in [2]. Although HMMs originally emerged in the domain of speech recognition, other fields have shown an interest in the use of these stochastic models, such as handwriting recognition [3, 4, 5], biological sequence analysis [6, 7, 8], face recognition [9, 10, 11], etc.

In this report the pattern recognition task using HMMs with Gaussian mixtures is approached from a more general perspective. This implies that the algorithms and numerical issues addressed are essential for all pattern recognition tasks using HMMs and Gaussian mixtures on a floating point processor.

The outline of this report is as follows:

Chapter 1 - Introduction

This chapter.

Chapter 2 - Discrete-Time Markov Model

Explains the fundamentals of a Markov model, i.e. the hidden part is uncovered. As an example, the weather is modelled by a Markov model, and the state duration distribution is derived as well.

Chapter 3 - Discrete-Time Hidden Markov Model

The Markov model from the previous chapter is extended to the HMM. An urn and ball model example is given as an introduction to HMMs. Discrete and continuous observation densities are discussed, as well as different model structures and their properties.

Chapter 4 - Three Problems for Hidden Markov Models

The major problems for the HMM (probability evaluation, “optimal” state sequence determination and parameter estimation) are addressed.

Chapter 5 - Solution to Problem 1 - Probability Evaluation

How to find the probability of an observation sequence is addressed here, and the theoretical forward and backward algorithms are explained. The chapter includes practical implementation aspects, yielding the scaled forward and scaled backward algorithms.

Chapter 6 - Solution to Problem 2 - “Optimal” State Sequence

The problem of finding an “optimal” state sequence is investigated. The theoretical Viterbi algorithm is derived. The more practical implementation of the Viterbi algorithm leads into the alternative Viterbi algorithm and the fast alternative Viterbi algorithm.

Chapter 7 - Solution to Problem 3 - Parameter Estimation

The Baum-Welch reestimation formulas for an HMM are derived. The training formulas are extended to multiple training sequences. Numerical issues are investigated, and suggestions for the initial values for the state distribution, the state transition matrix and the Gaussian mixture are given.

Chapter 8 - Clustering Algorithms

Unsupervised classification is approached, with the aim to find the initial parameters for the Gaussian mixture. Two commonly used clustering algorithms are explained: the k-means algorithm and the Self Organizing Feature Map (SOFM).

Chapter 9 - Training and Recognition using Hidden Markov Models

Addresses issues for a complete pattern recognition task using HMMs. The training steps are discussed, as well as the numerical issues needed for robust HMM training. Pattern recognition and suggestions for rejection of unseen classes are discussed.


Chapter 10 - Summary and Conclusions

Finally, the results and observations in the report are summarized and conclusions are drawn.


Chapter 2

Discrete-Time Markov Model

This chapter describes the theory of Markov chains; it is supposed that the hidden part of the Markov chain is uncovered from the HMM, i.e. an observable Markov model is investigated. A system is considered that may be described, at any time, as being in one of N distinct states, indexed by 1, 2, . . . , N. At regularly spaced, discrete times, the system undergoes a change of state (possibly back to the same state) according to a set of probabilities associated with the state. The time instances for a state change are denoted t and the corresponding state qt. In the case of a first order Markov chain, the state transition probabilities do not depend on the whole history of the process; only the preceding state is taken into account. This leads into the first order Markov property, defined as

P (qt = j|qt−1 = i, qt−2 = k, . . .) = P (qt = j|qt−1 = i) (2.1)

where P is the probability and i, j, k are state indices. Note that the right-hand side of Eq. (2.1) is independent of time, which gives the state transition probabilities, aij, as

aij = P (qt = j|qt−1 = i) , 1 ≤ i, j ≤ N (2.2)

Due to standard stochastic constraints [2], the state transition probabilities, aij, have the following properties

aij ≥ 0,  ∀ i, j

∑_{j=1}^{N} aij = 1,  ∀ i (2.3)

The state transition probabilities for all states in a model can be described by a transition probability matrix


A = ⎡ a11  a12  · · ·  a1N ⎤
    ⎢ a21  a22  · · ·  a2N ⎥
    ⎢  ⋮    ⋮    ⋱     ⋮  ⎥
    ⎣ aN1  aN2  · · ·  aNN ⎦ (N×N) (2.4)

The probability to start in a state, i.e. the initial state distribution vector, is described by

π = ⎡ π1 = P(q1 = 1) ⎤
    ⎢ π2 = P(q1 = 2) ⎥
    ⎢        ⋮       ⎥
    ⎣ πN = P(q1 = N) ⎦ (N×1) (2.5)

and the corresponding stochastic property for the initial state distribution vector is

∑_{i=1}^{N} πi = 1 (2.6)

where πi is defined as

πi = P (q1 = i), 1 ≤ i ≤ N (2.7)

Equations (2.1)-(2.7) describe the properties and equations for a first order Markov process; hence the Markov model itself can be described by A and π.

2.1 Markov Model Example

In this section an example of a discrete time Markov process will be presented, which leads into the main ideas about Markov chains. A four state Markov model of the weather will be used as an example, see Fig. 2.1.

It is assumed that the weather is observed at a certain time every day (e.g. in the morning), and that it is observed and classified as belonging to one of the following states:

• State 1: cloudy

• State 2: sunny

• State 3: rainy

• State 4: windy


Figure 2.1: Markov model of the weather (four fully connected states, with transition probabilities aij between every pair of states and self-transitions aii).

Given the model in Fig. 2.1, it is possible to answer several questions about weather patterns over time. For example, what is the probability of getting the sequence “sunny, rainy, sunny, windy, cloudy, cloudy” on six consecutive days? The first task is to define the state sequence, O, as

O = (sunny, rainy, sunny, windy, cloudy, cloudy) = (2, 3, 2, 4, 1, 1)

Given the sequence and the model of Fig. 2.1, the probability of the observation sequence given a Markov model, P(O|A, π), can directly be determined as

P (O|A,π) = P (2, 3, 2, 4, 1, 1|A,π)

= P (2)P (3|2)P (2|3)P (4|2)P (1|4)P (1|1)

= π2 · a23 · a32 · a24 · a41 · a11

In a more general sense, the probability of a state sequence q = (q1, q2, . . . , qT) can be determined as

P(q|A, π) = πq1 · aq1q2 · aq2q3 · . . . · aqT−1qT (2.8)
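Eq. (2.8) is easy to check numerically. The sketch below scores the six-day weather sequence from the example; note that the numeric values of π and A are hypothetical, invented here purely for illustration, and that states are 0-indexed in the code.

```python
# Hypothetical parameters for the four-state weather model (Fig. 2.1);
# states (0-indexed): 0 = cloudy, 1 = sunny, 2 = rainy, 3 = windy.
pi = [0.25, 0.25, 0.25, 0.25]          # initial state distribution
A = [[0.50, 0.20, 0.20, 0.10],         # transition matrix, rows sum to 1
     [0.30, 0.40, 0.20, 0.10],
     [0.20, 0.30, 0.40, 0.10],
     [0.25, 0.25, 0.25, 0.25]]

def sequence_probability(q, A, pi):
    """P(q|A, pi) = pi_{q1} * a_{q1 q2} * ... * a_{q_{T-1} q_T}, Eq. (2.8)."""
    p = pi[q[0]]
    for prev, cur in zip(q, q[1:]):
        p *= A[prev][cur]
    return p

# "sunny, rainy, sunny, windy, cloudy, cloudy" -> states (2,3,2,4,1,1),
# which 0-indexed becomes:
O = [1, 2, 1, 3, 0, 0]
p = sequence_probability(O, A, pi)
```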


The second task is to find the probability that the state remains the same for d time instances, given that the system is in a known state. In the weather example, the time instances correspond to days, yielding the following observation sequence

O =  ( i,  i,  i,  . . . ,  i,  j ≠ i )
day:   1   2   3   . . .    d   d+1

with the probability

P(O|A, π, q1 = i) = P(O, q1 = i|A, π) / P(q1 = i)
                  = πi (aii)^{d−1} (1 − aii) / πi
                  = (aii)^{d−1} (1 − aii)
                  = pi(d) (2.9)

In Eq. (2.9), pi(d) can be interpreted as the state duration distribution, and the characteristic exponential distribution for a Markov chain appears. Based on pi(d) in Eq. (2.9) it is possible to find the expected number of observations (duration) in a state as

E[di] = ∑_{d=1}^{∞} d · pi(d)

      = ∑_{d=1}^{∞} d · (aii)^{d−1} (1 − aii)

      = (1 − aii) ∂/∂aii ( ∑_{d=1}^{∞} (aii)^d )

      = (1 − aii) ∂/∂aii ( aii / (1 − aii) )

      = 1 / (1 − aii) (2.10)
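The closed form E[di] = 1/(1 − aii) in Eq. (2.10) can be sanity-checked by simulation. The snippet below draws state durations for a hypothetical self-transition probability aii = 0.8 and compares the empirical mean against the formula.

```python
import random

# Empirical check of Eq. (2.10): with self-transition probability a_ii,
# the expected stay in a state is 1/(1 - a_ii). The value 0.8 is illustrative.
a_ii = 0.8
random.seed(0)

def sample_duration(a_ii):
    """Draw one state duration: stay with prob a_ii, leave with 1 - a_ii."""
    d = 1
    while random.random() < a_ii:
        d += 1
    return d

n = 200_000
mean_d = sum(sample_duration(a_ii) for _ in range(n)) / n
expected = 1.0 / (1.0 - a_ii)   # Eq. (2.10): expected duration = 5.0
```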


Chapter 3

Discrete-Time Hidden Markov Model

The discrete-time Markov model described in the previous chapter is to some extent restrictive and cannot be applied to all problems of interest. Therefore, the Hidden Markov Model (HMM) is introduced in this chapter. The extension implies that every state will be probabilistic and not deterministic (e.g. sunny). This means that every state generates an observation at time t, ot, according to a probabilistic function bj(ot) for each state j. The production of observations in the stochastic approach is characterized by a set of observation probability measures, B = {bj(ot)}, j = 1, . . . , N, where N is the total number of states and the probabilistic function for each state j is

bj(ot) = P (ot|qt = j) (3.1)

The concept of how the HMM works is introduced by using an example called the urn and ball model.

3.1 The Urn and Ball Model

Assume that there are N large glass urns with colored balls. There are M distinct colors of the balls, see Fig. 3.1.

The steps needed for generating an observation sequence are:

1. Choose an initial state q1 = i according to the initial state distribution π. In the urn and ball model an urn corresponds to a state.

2. Set t = 1 (i.e. clock, t = 1, 2, . . . , T ).

3. Choose a ball from the selected urn (state) according to the symbol probability distribution in state i, bi(ot). This colored ball represents the observation ot. Put the ball back into the urn. For example, the probability of a purple ball in the first urn is 0.61, see Fig. 3.1.


        Urn 1   Urn 2   Urn 3   · · ·   Urn N
P(R)     12%     30%      8%             2%
P(O)      7%      1%      9%             4%
P(K)      6%     37%     10%             3%
P(G)      4%     10%     54%             1%
P(B)     10%     17%     10%            88%
P(P)     61%      5%      9%             2%

Figure 3.1: Urn and ball example with N urns and M = 6 different colors (R = red, O = orange, K = black, G = green, B = blue, P = purple).

4. Transit to a new state (urn) qt+1 = j according to the state-transition probability distribution for state i, i.e. aij.

5. Set t = t+ 1; return to step 3 if t < T ; otherwise, terminate the procedure.

These steps describe how the HMM works when generating an observation sequence. It should be noted that different urns can contain balls of the same color, and the distinction among the urns is the way the collection of colored balls is composed. Therefore, an isolated observation of a ball of a particular color does not immediately indicate which urn it was drawn from. Furthermore, the link between the urn and ball example and an HMM is that an urn corresponds to a state and the color corresponds to a feature vector (the observation).
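The five generation steps above can be sketched as follows; the two-urn model (π, A, B) is hypothetical, chosen only to keep the example small.

```python
import random

# Sketch of the urn-and-ball generation procedure (steps 1-5 above) for a
# discrete-density HMM. The two-urn, three-color model is hypothetical.
random.seed(1)

pi = [0.6, 0.4]                      # initial urn (state) distribution
A  = [[0.7, 0.3], [0.4, 0.6]]        # urn-to-urn transition probabilities
B  = [[0.1, 0.3, 0.6],               # b_i(k): color probabilities per urn
      [0.5, 0.4, 0.1]]               # colors: 0 = red, 1 = black, 2 = purple

def draw(dist):
    """Sample an index from a discrete probability distribution."""
    r, acc = random.random(), 0.0
    for k, p in enumerate(dist):
        acc += p
        if r < acc:
            return k
    return len(dist) - 1

def generate(T):
    """Generate an observation sequence of length T from the HMM."""
    q = draw(pi)                     # step 1: choose the initial urn
    obs = []
    for _ in range(T):               # steps 2 and 5: clock t = 1, ..., T
        obs.append(draw(B[q]))       # step 3: draw a ball, then replace it
        q = draw(A[q])               # step 4: move to the next urn
    return obs

obs = generate(10)
```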

3.2 Discrete Observation Densities

The urn and ball example described in the previous section is an example of a discrete observation density HMM with M distinct codewords (colors). In general, the discrete observation density partitions the probability density function (pdf) into M codewords or cells with assigned probabilities. These codewords will be denoted v1, v2, . . . , vM, where one symbol corresponds to one cell. The partitioning is usually called vector quantization, and for this purpose several analysis methods exist [2]. Vector quantization is performed by creating a codebook based on the mean vectors for every cluster.

The symbol for the observation is determined by the nearest-neighbor rule, i.e. the symbol in the cell with the nearest codebook vector is selected. In the urn and ball example this corresponds to a dark grey ball being classified as black, since that is the most probable choice. In this case the symbols v1, v2, . . . , vM are represented by different colors (e.g. v1 = red). The probability distributions for the observable symbols are denoted B = {bj(ot)}, j = 1, . . . , N, where the symbol distribution in state j is defined as

bj(ot) = bj(k) = P(ot = vk|qt = j),  1 ≤ k ≤ M (3.2)

This is the first step in the determination of the codebook. The second step is to estimate the set of observation probabilities for each codebook vector in every state. The major problem with discrete output probabilities is that the vector quantization partitioning destroys the original signal structure when the continuous space is separated into regions. If the output distributions overlap, the partitioning introduces errors. This is due to a finite training data set being used, implying unreliable parameter estimates. The vector quantization errors introduced may cause performance degradation of the discrete density HMM [2]. This will occur when the observed feature vector is intermediate between two codebook symbols. This is why continuous observation densities can be used instead of discrete observation densities. The continuous approach avoids quantization errors; however, it typically requires more calculations.
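The nearest-neighbor codeword assignment described above can be sketched as follows, using a hypothetical 2-D codebook; a real system would obtain the codebook vectors from the cluster means, as described in the text.

```python
# Nearest-neighbor rule: map a continuous feature vector to the index of
# the closest codebook vector. The three 2-D codebook vectors below are
# hypothetical, for illustration only.

codebook = [(0.0, 0.0),   # v1
            (1.0, 0.0),   # v2
            (0.0, 1.0)]   # v3

def quantize(o, codebook):
    """Return the index k of the codebook vector nearest to observation o."""
    def dist2(u, v):
        # Squared Euclidean distance (monotone in distance, so no sqrt needed).
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(codebook)), key=lambda k: dist2(o, codebook[k]))

k = quantize((0.9, 0.2), codebook)   # closest to v2
```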

3.3 Continuous Observation Densities

The continuous observation densities bj(ot) in the HMM are created by using a parametric probability density function or a mixture of several functions. To be able to reestimate the parameters of the probability density function (pdf), some restrictions on the pdf are needed. A common restriction is that the pdf should belong to the log-concave or elliptically symmetric density family [2]. The most general representation of the pdf, for which a reestimation procedure has been formulated, is the finite mixture of the form

bj(ot) = ∑_{k=1}^{M} cjk bjk(ot),  j = 1, 2, . . . , N (3.3)

where M is the number of mixtures. The stochastic constraints for the mixture weights, cjk, must fulfil

∑_{k=1}^{M} cjk = 1,  j = 1, 2, . . . , N

cjk ≥ 0,  j = 1, 2, . . . , N,  k = 1, 2, . . . , M (3.4)

The bjk(ot) can be either a D-dimensional log-concave or elliptically symmetric density with mean vector µjk and covariance matrix Σjk


bjk(ot) = N(ot, µjk, Σjk) (3.5)

The most utilized D-dimensional log-concave or elliptically symmetric density function is the Gaussian density function

bjk(ot) = N(ot, µjk, Σjk) = (1 / ((2π)^{D/2} |Σjk|^{1/2})) e^{−(1/2)(ot − µjk)^T Σjk^{−1} (ot − µjk)} (3.6)

To approximate a basic observation source, the Gaussian mixture provides a straightforward way of gaining considerable accuracy, due to the flexibility and convenient estimation of the pdfs. If the observation source generates a complicated high-dimensional pdf, the Gaussian mixture becomes computationally difficult to treat, due to the considerable number of parameters to estimate and the large covariance matrices. To estimate these parameters a huge amount of representative training data is needed; this is, however, hard to achieve in a practical situation. With insufficient training data sets the parameter estimation will be unreliable. Especially the covariance matrix estimation is highly sensitive to a lack of representative training data.

The size of the covariance matrices grows quadratically with the vector dimension D. If the feature vectors are designed to avoid redundant components, the elements outside the diagonal of the covariance matrices are usually small. One solution is to use a diagonal covariance matrix approximation. The diagonality provides a simpler and faster implementation of the probability computation. This is performed by reducing Eq. (3.6) to

bjk(ot) = N(ot, µjk, Σjk)
        = (1 / ((2π)^{D/2} |Σjk|^{1/2})) e^{−(1/2)(ot − µjk)^T Σjk^{−1} (ot − µjk)}
        = (1 / ((2π)^{D/2} (∏_{d=1}^{D} σ²jkd)^{1/2})) e^{−∑_{d=1}^{D} (otd − µjkd)² / (2σ²jkd)} (3.7)

where the variances σ²jk1, σ²jk2, . . . , σ²jkD are the diagonal elements of the covariance matrix Σjk.

3.4 Types of Hidden Markov Models

Different kinds of structures for HMMs can be used. The structure is defined by the transition matrix, A. The most general structure is the ergodic or fully connected HMM, in which every state can be reached from every other state of the model.

As an example in this section, an N = 4 state model is introduced, see Fig. 3.2a. The ergodic model has the property 0 < aij < 1, where “zero” and “one” have been excluded in order to fulfill the ergodic property. The state transition matrix, A, for an ergodic model can be described by

A = ⎡ a11  a12  a13  a14 ⎤
    ⎢ a21  a22  a23  a24 ⎥
    ⎢ a31  a32  a33  a34 ⎥
    ⎣ a41  a42  a43  a44 ⎦ (4×4) (3.8)

In some pattern recognition tasks, it is desirable to use a model which models the observations in a successive manner. This can be done by employing the left-right model or Bakis model, see Figs. 3.2b,c. The property for a left-right model is

aij = 0, j < i (3.9)

This implies that no transitions can be made to previous states. The lengths of the transitions are usually restricted to some maximum length ∆, typically two or three

aij = 0, j > i+ ∆ (3.10)

Note that for a left-right model, the state transition coefficients for the last state have the following property

aNN = 1
aNj = 0,  j < N (3.11)

Setting ∆ = 1 yields

A = ⎡ a11  a12   0    0  ⎤
    ⎢  0   a22  a23   0  ⎥
    ⎢  0    0   a33  a34 ⎥
    ⎣  0    0    0   a44 ⎦ (4×4) (3.12)

which corresponds to Fig. 3.2b. Setting ∆ = 2 yields

A = ⎡ a11  a12  a13   0  ⎤
    ⎢  0   a22  a23  a24 ⎥
    ⎢  0    0   a33  a34 ⎥
    ⎣  0    0    0   a44 ⎦ (4×4) (3.13)

which corresponds to Fig. 3.2c.
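A left-right transition matrix obeying Eqs. (3.9)-(3.11) can be generated programmatically; the uniform initialization of the allowed entries below is our own assumption for illustration, not part of the model definition.

```python
# Build a left-right (Bakis) transition matrix: a_ij = 0 for j < i
# (Eq. 3.9) and for j > i + delta (Eq. 3.10); the last state is absorbing
# (Eq. 3.11). Allowed entries are set uniformly, purely for illustration.

def left_right_matrix(N, delta):
    A = [[0.0] * N for _ in range(N)]
    for i in range(N):
        # Reachable states: i, i+1, ..., min(i + delta, N - 1)
        allowed = list(range(i, min(i + delta, N - 1) + 1))
        for j in allowed:
            A[i][j] = 1.0 / len(allowed)
    return A

A = left_right_matrix(4, 1)   # same zero pattern as Eq. (3.12)
```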


Figure 3.2: Different structures for HMMs: (a) a four-state ergodic model, (b) a four-state left-right model with ∆ = 1, (c) a four-state left-right model with ∆ = 2.


The choice of a constrained model structure (for example the left-right model) requires no modification in the training algorithms, due to the fact that an initial state transition probability set to zero will remain zero.

3.5 Summary of HMM parameters

The elements of a discrete-time hidden Markov model described in sections 3.1-3.4 will be summarized in this section.

1. The number of states is denoted N. Although the states are hidden, for many practical applications there is often some physical significance attached to the states or to the sets of states of the model [2]. For instance, in the urn and ball model the states correspond to the urns. The labels for the individual states are {1, 2, . . . , N}, and the state at time t is denoted qt.

2. The model parameter is denoted M. If discrete observation densities are used, the parameter M is the number of classes or cells, e.g. M equals the number of colors in the urn and ball example. If continuous observation densities are used, M represents the number of mixtures in every state.

3. The initial state distribution is denoted π = {πi}, i = 1, . . . , N, where πi is defined as

πi = P (q1 = i) (3.14)

4. The state transition probability distribution is denoted A = [aij], where

aij = P (qt+1 = j|qt = i) , 1 ≤ i, j ≤ N (3.15)

5. The observation symbol probability distribution is denoted B = {bj(ot)}, j = 1, . . . , N, where the probabilistic function for each state j is

bj(ot) = P (ot|qt = j) (3.16)

The calculation of bj(ot) can be performed with discrete or continuous observation densities.

The complete specification of an HMM thus requires two model parameters, N and M, together with the three sets of probability measures π, A and B. A simplified and commonly used notation for the probability measures is

λ = (A,B,π) (3.17)
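The parameter set λ = (A, B, π) maps naturally onto a small container type; the class and field names below are our own choice, not notation from the report.

```python
from dataclasses import dataclass

# Minimal container for the HMM parameter set lambda = (A, B, pi)
# summarized above (discrete-density case). Names are illustrative.

@dataclass
class HMM:
    A: list    # N x N state transition matrix, Eq. (3.15)
    B: list    # N rows of observation symbol distributions, Eq. (3.16)
    pi: list   # length-N initial state distribution, Eq. (3.14)

    @property
    def N(self):
        """Number of states."""
        return len(self.pi)

lam = HMM(A=[[0.9, 0.1], [0.2, 0.8]],
          B=[[0.5, 0.5], [0.1, 0.9]],
          pi=[0.7, 0.3])
```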


Chapter 4

Three Problems for Hidden Markov Models

Given the basics of an HMM from the previous chapter, three central problems arise when applying the model to real-world applications.

Problem 1

The observation sequence is denoted O = (o1, o2, . . . , oT), and the HMM λ = (A, B, π). The question is how to compute the probability of the observation sequence for a given HMM, P(O|λ).

Problem 2

The observation sequence is denoted O = (o1, o2, . . . , oT), and the HMM λ = (A, B, π). The question is how to choose the corresponding state sequence, q = (q1, q2, . . . , qT), in an optimal sense (i.e. the one that best “explains” the observations).

Problem 3

How can the probability measures, λ = (A, B, π), be adjusted to maximize P(O|λ)?

The first problem can be viewed as a recognition problem. With a number of trained models, the task is to find the model which best describes a given observation sequence.

In the second problem the task is to uncover the hidden part of the model. In HMMs it is more or less impossible to find the “correct” state sequence, except in the case of degenerate models. The problem should be solved in some optimal sense, which will be explained later.

The third problem can be viewed as the training problem, i.e. models are created for each specific application based on the corresponding training sequences. The training problem is the hardest and most crucial one in most applications.


The HMM should be optimally adapted based on the training data to fit and describe real world phenomena as well as possible [2].


Chapter 5

Solution to Problem 1 - Probability Evaluation

The aim is to find the probability of the observation sequence, O = (o1, o2, . . . , oT), given the model λ, i.e. P(O|λ). The observations produced by the states are assumed to be independent. The probability of the observation sequence O = (o1, o2, . . . , oT) generated by a state sequence q can be calculated by the product

P(O|q, B) = bq1(o1) · bq2(o2) · . . . · bqT(oT) (5.1)

The probability of the state sequence q can be found as¹

P(q|A, π) = πq1 · aq1q2 · aq2q3 · . . . · aqT−1qT (5.2)

The joint probability of O and q, i.e. the probability that O and q occur simultaneously, is the product of the above two terms

P(O, q|λ) = P(O|q, B) · P(q|A, π)
          = πq1 bq1(o1) aq1q2 bq2(o2) · . . . · aqT−1qT bqT(oT)
          = πq1 bq1(o1) ∏_{t=2}^{T} aqt−1qt bqt(ot) (5.3)

The aim is to find P(O|λ), and the probability of O (given the model λ) is obtained by summing the joint probability over all possible state sequences q,

¹ Explained earlier in Sec. 2.1.


P(O|λ) = ∑_{all q} P(O|q, B) · P(q|A, π)
       = ∑_{q1,q2,...,qT} πq1 bq1(o1) aq1q2 bq2(o2) · . . . · aqT−1qT bqT(oT)
       = ∑_{q1,q2,...,qT} πq1 bq1(o1) ∏_{t=2}^{T} aqt−1qt bqt(ot) (5.4)

The interpretation of the computation in Eq. (5.4) is the following. Initially, at time t = 1, the process starts by jumping to state q1 with probability πq1. In this state the observation symbol o1 is generated with probability bq1(o1). The clock advances from t to t + 1 and a transition from q1 to q2 occurs with probability aq1q2. The symbol o2 is generated with probability bq2(o2). The process continues in the same manner until the last transition is made (at time T), i.e. a transition from qT−1 to qT occurs with probability aqT−1qT, and the symbol oT is generated with probability bqT(oT).

The direct computation has one major drawback. The direct computationis infeasible due to the exponential growth of computations as a function ofsequence length T . To be exact, it needs (2T −1)NT multiplications, and NT −1additions [2]. Even for small values of N and T ; e.g. for N = 5 (states), T = 100(observations), there is a need for (2 · 100 − 1)5100 ≈ 1.6 · 1072 multiplicationsand 5100 − 1 ≈ 8.0 · 1069 additions! It is obvious that a more efficient procedureis required to solve this kind of problem. An excellent tool, which cuts thecomputational cost to linear, relative to T , is the forward algorithm.

5.1 The Forward Algorithm

A forward variable $\alpha_t(i)$ is defined as

$$\alpha_t(i) = P(\mathbf{o}_1\mathbf{o}_2\ldots\mathbf{o}_t,\, q_t = i\,|\,\lambda) \qquad (5.5)$$

where $t$ represents time and $i$ is the state. Thus $\alpha_t(i)$ is the probability of the partial observation sequence $\mathbf{o}_1\mathbf{o}_2\ldots\mathbf{o}_t$ (until time $t$) and being in state $i$ at time $t$. The forward variable can be calculated inductively, see Fig. 5.1.

The value $\alpha_{t+1}(j)$ is found by summing the forward variables of all $N$ states at time $t$, each multiplied by the corresponding state transition probability $a_{ij}$, and then multiplying by the emission probability $b_j(\mathbf{o}_{t+1})$. This is performed by the following procedure:


[Figure: lattice of the $N$ states at times $t-1$, $t$ and $t+1$; the arcs $a_{1j}, a_{2j}, \ldots, a_{Nj}$ collect $\alpha_t(1), \ldots, \alpha_t(N)$ into $\alpha_{t+1}(j)$.]

Figure 5.1: Forward Procedure - Induction Step.

1. Initialization

Set t = 1; (5.6)
$$\alpha_1(i) = \pi_i\, b_i(\mathbf{o}_1), \quad 1 \le i \le N \qquad (5.7)$$

2. Induction

$$\alpha_{t+1}(j) = b_j(\mathbf{o}_{t+1}) \sum_{i=1}^N \alpha_t(i)\, a_{ij}, \quad 1 \le j \le N \qquad (5.8)$$

3. Update time

Set t = t + 1;
Return to step 2 if t < T;
Otherwise, terminate the algorithm (go to step 4).

4. Termination

$$P(\mathbf{O}|\lambda) = \sum_{i=1}^N \alpha_T(i) \qquad (5.9)$$

The forward algorithm needs $N(N+1)(T-1) + N$ multiplications and $N(N-1)(T-1)$ additions. Again, for $N = 5$ (states) and $T = 100$ (observations), this yields $5(5+1)(100-1) + 5 = 2975$ multiplications and $5(5-1)(100-1) = 1980$ additions. This is a significant improvement compared to the direct method ($1.6 \cdot 10^{72}$ multiplications and $8.0 \cdot 10^{69}$ additions).
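The four steps above can be sketched in a few lines. This is a minimal illustration for a toy model with N = 2 states; the emission probabilities $b_j(\mathbf{o}_t)$ are assumed to be precomputed into a T x N array `B`, so the same code covers discrete and continuous (e.g. GMM) state densities. All names and numbers are illustrative, not from the text.

```python
import numpy as np

def forward(pi, A, B):
    """Return P(O | lambda) via the forward recursion (Eqs. 5.6-5.9)."""
    T, N = B.shape
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[0]                      # initialization: alpha_1(i) = pi_i b_i(o_1)
    for t in range(1, T):
        alpha[t] = B[t] * (alpha[t - 1] @ A)  # induction: sum over previous states
    return alpha[-1].sum()                    # termination: sum of alpha_T(i)

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.2],                     # b_i(o_t) for t = 1..3, i = 1..2
              [0.1, 0.8],
              [0.9, 0.2]])
print(forward(pi, A, B))
```

For this toy model the result agrees with brute-force summation over all $2^3$ state sequences, per Eq. (5.4).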


5.2 The Backward Algorithm

The recursion described for the forward algorithm can be used backwards in time as well. The backward variable $\beta_t(i)$ is defined as

$$\beta_t(i) = P(\mathbf{o}_{t+1}\mathbf{o}_{t+2}\ldots\mathbf{o}_T\,|\,q_t = i, \lambda) \qquad (5.10)$$

The backward variable can be interpreted as the probability of the partial observation sequence from $t+1$ to the end ($T$), given state $i$ at time $t$ and the model $\lambda$. Note that the forward variable is defined as a joint probability, whereas the backward variable is a conditional probability. In a similar manner as for the forward algorithm, the backward variable can be calculated inductively, see Fig. 5.2.

[Figure: lattice of the $N$ states at times $t$, $t+1$ and $t+2$; the variables $\beta_{t+1}(1), \ldots, \beta_{t+1}(N)$ are combined through the arcs $a_{i1}, \ldots, a_{iN}$ into $\beta_t(i)$.]

Figure 5.2: Backward Procedure - Induction Step.

The backward algorithm includes the following steps:

1. Initialization

Set t = T − 1;
$$\beta_T(i) = 1, \quad 1 \le i \le N \qquad (5.11)$$

2. Induction

$$\beta_t(i) = \sum_{j=1}^N \beta_{t+1}(j)\, a_{ij}\, b_j(\mathbf{o}_{t+1}), \quad 1 \le i \le N \qquad (5.12)$$

Page 30: First Order Hidden Markov Model - DiVA portal833697/FULLTEXT01.pdf · Markov process, hence the Markov model itself can be described by A and π. 2.1 Markov Model Example In this

Chapter 5. Solution to Problem 1 - Probability Evaluation 27

3. Update time

Set t = t − 1;
Return to step 2 if t > 0;
Otherwise, terminate the algorithm.

Note that the initialization step 1 arbitrarily defines $\beta_T(i)$ to be 1 for all $i$.
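The backward recursion can be sketched in the same style as the forward one. As before, the emission likelihoods $b_j(\mathbf{o}_t)$ are assumed precomputed into a T x N array `B`; the toy numbers are illustrative only.

```python
import numpy as np

def backward(A, B):
    """Return the full matrix of backward variables beta_t(i) (Eqs. 5.11-5.12)."""
    T, N = B.shape
    beta = np.zeros((T, N))
    beta[-1] = 1.0                              # beta_T(i) = 1 (Eq. 5.11)
    for t in range(T - 2, -1, -1):              # induction (Eq. 5.12)
        beta[t] = A @ (B[t + 1] * beta[t + 1])  # sum_j a_ij b_j(o_{t+1}) beta_{t+1}(j)
    return beta

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.2],
              [0.1, 0.8],
              [0.9, 0.2]])
beta = backward(A, B)
# P(O | lambda) can equivalently be recovered from beta:
print((pi * B[0] * beta[0]).sum())
```

The printed value coincides with the forward algorithm's $P(\mathbf{O}|\lambda)$, which is a useful consistency check when implementing both recursions.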

5.3 Scaling the Forward and Backward Variables

The calculation of $\alpha_t(i)$ and $\beta_t(i)$ involves multiplication of probabilities. All these probabilities have values less than one (generally significantly less than one), so as $t$ grows, each term of $\alpha_t(i)$ or $\beta_t(i)$ tends exponentially to zero. For sufficiently large $t$ (e.g. 100 or more) the dynamic range of $\alpha_t(i)$ and $\beta_t(i)$ will exceed the precision range of essentially any machine, even in double precision [2]. The basic scaling procedure multiplies $\alpha_t(i)$ by a scaling coefficient that depends only on the time $t$ and is independent of the state $i$ (scaling is done at every time $t$ for all states $i$, $1 \le i \le N$). The scaling factor for the forward variable is denoted $c_t$. This factor is also used to scale the backward variable $\beta_t(i)$: it is useful to scale $\alpha_t(i)$ and $\beta_t(i)$ with the same factor, see Problem 3 (parameter estimation).

5.3.1 The Scaled Forward Algorithm

Consider the computation of the forward variable $\alpha_t(i)$. In the scaled variant of the forward algorithm some extra notation is used: $\alpha_t(i)$ denotes the unscaled forward variable, $\hat\alpha_t(i)$ the scaled (and iterated) variant of $\alpha_t(i)$, $\bar\alpha_t(i)$ the local version of $\alpha_t(i)$ before scaling, and $c_t$ the scaling coefficient at each time. The scaled forward algorithm includes the following steps:

1. Initialization

Set t = 2;
$$\alpha_1(i) = \pi_i\, b_i(\mathbf{o}_1), \quad 1 \le i \le N \qquad (5.13)$$
$$\bar\alpha_1(i) = \alpha_1(i), \quad 1 \le i \le N \qquad (5.14)$$
$$c_1 = \frac{1}{\sum_{i=1}^N \bar\alpha_1(i)} \qquad (5.15)$$
$$\hat\alpha_1(i) = c_1\, \bar\alpha_1(i) \qquad (5.16)$$


2. Induction

$$\bar\alpha_t(i) = b_i(\mathbf{o}_t) \sum_{j=1}^N \hat\alpha_{t-1}(j)\, a_{ji}, \quad 1 \le i \le N \qquad (5.17)$$
$$c_t = \frac{1}{\sum_{i=1}^N \bar\alpha_t(i)} \qquad (5.18)$$
$$\hat\alpha_t(i) = c_t\, \bar\alpha_t(i), \quad 1 \le i \le N \qquad (5.19)$$

3. Update time

Set t = t + 1;
Return to step 2 if t ≤ T;
Otherwise, terminate the algorithm (go to step 4).

4. Termination

$$\log P(\mathbf{O}|\lambda) = -\sum_{t=1}^T \log c_t \qquad (5.20)$$

The main difference between the scaled and the standard forward algorithm lies in steps 2 and 4. In step 2, Eq. (5.19) can be rewritten using Eqs. (5.17) and (5.18):

$$\hat\alpha_t(i) = c_t\, \bar\alpha_t(i) = \frac{1}{\sum_{k=1}^N b_k(\mathbf{o}_t) \sum_{j=1}^N \hat\alpha_{t-1}(j)\, a_{jk}} \cdot \left(b_i(\mathbf{o}_t) \sum_{j=1}^N \hat\alpha_{t-1}(j)\, a_{ji}\right) = \frac{b_i(\mathbf{o}_t) \sum_{j=1}^N \hat\alpha_{t-1}(j)\, a_{ji}}{\sum_{k=1}^N b_k(\mathbf{o}_t) \sum_{j=1}^N \hat\alpha_{t-1}(j)\, a_{jk}}, \quad 1 \le i \le N \qquad (5.21)$$

By induction, the scaled forward variable can be expressed in terms of the unscaled one:

$$\hat\alpha_{t-1}(j) = \left(\prod_{\tau=1}^{t-1} c_\tau\right)\alpha_{t-1}(j), \quad 1 \le j \le N \qquad (5.22)$$


The ordinary induction step is (same as Eq. (5.8) but shifted one time unit)

$$\alpha_t(i) = b_i(\mathbf{o}_t) \sum_{j=1}^N \alpha_{t-1}(j)\, a_{ji}, \quad 1 \le i \le N \qquad (5.23)$$

With Eq. (5.22) and Eq. (5.23) in mind, it is now possible to rewrite Eq. (5.21) as

$$\begin{aligned}
\hat\alpha_t(i) &= \frac{b_i(\mathbf{o}_t) \sum_{j=1}^N \hat\alpha_{t-1}(j)\, a_{ji}}{\sum_{k=1}^N b_k(\mathbf{o}_t) \sum_{j=1}^N \hat\alpha_{t-1}(j)\, a_{jk}} \\
&= \frac{b_i(\mathbf{o}_t) \sum_{j=1}^N \left(\prod_{\tau=1}^{t-1} c_\tau\right)\alpha_{t-1}(j)\, a_{ji}}{\sum_{k=1}^N b_k(\mathbf{o}_t) \sum_{j=1}^N \left(\prod_{\tau=1}^{t-1} c_\tau\right)\alpha_{t-1}(j)\, a_{jk}} \\
&= \frac{\left(\prod_{\tau=1}^{t-1} c_\tau\right)\left(b_i(\mathbf{o}_t) \sum_{j=1}^N \alpha_{t-1}(j)\, a_{ji}\right)}{\left(\prod_{\tau=1}^{t-1} c_\tau\right) \sum_{k=1}^N \left(b_k(\mathbf{o}_t) \sum_{j=1}^N \alpha_{t-1}(j)\, a_{jk}\right)} \\
&= \frac{\alpha_t(i)}{\sum_{k=1}^N \alpha_t(k)}, \quad 1 \le i \le N
\end{aligned} \qquad (5.24)$$

As Eq. (5.24) shows, the scaled forward algorithm effectively normalizes each $\alpha_t(i)$ by the sum of $\alpha_t(k)$ over all states.

The termination (step 4) of the scaled forward algorithm requires the evaluation of $P(\mathbf{O}|\lambda)$, and this must be done indirectly: the sum of $\hat\alpha_T(i)$ cannot be used, since $\hat\alpha_T(i)$ is already scaled. However, this can be solved by employing


$$\left(\prod_{\tau=1}^T c_\tau\right) \sum_{i=1}^N \alpha_T(i) = 1 \qquad (5.25)$$

$$\left(\prod_{\tau=1}^T c_\tau\right) \cdot P(\mathbf{O}|\lambda) = 1 \qquad (5.26)$$

$$P(\mathbf{O}|\lambda) = \frac{1}{\prod_{\tau=1}^T c_\tau} \qquad (5.27)$$

As Eq. (5.27) shows, $P(\mathbf{O}|\lambda)$ can be found, but if Eq. (5.27) is used directly the result will still be very small (and will probably be outside the machine precision of a computer). However, taking the logarithm of both sides of Eq. (5.27) gives

$$\log P(\mathbf{O}|\lambda) = \log \frac{1}{\prod_{\tau=1}^T c_\tau} = -\sum_{t=1}^T \log c_t \qquad (5.28)$$

This expression is used in the termination step of the scaled forward algorithm. The logarithm of $P(\mathbf{O}|\lambda)$ is often just as useful as $P(\mathbf{O}|\lambda)$ itself since, in most cases, this measure is used for comparison with probabilities from other models.
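The scaled forward algorithm with log-likelihood termination can be sketched as follows; the toy model and all names are illustrative, and `B` again holds precomputed emission likelihoods $b_i(\mathbf{o}_t)$:

```python
import numpy as np

def scaled_forward(pi, A, B):
    """Scaled forward algorithm (Eqs. 5.13-5.20).

    Returns the scaled variables alpha_hat (each row sums to one),
    the scale factors c_t, and log P(O | lambda) = -sum_t log c_t.
    """
    T, N = B.shape
    alpha_hat = np.zeros((T, N))
    c = np.zeros(T)
    local = pi * B[0]                          # alpha-bar before scaling
    c[0] = 1.0 / local.sum()
    alpha_hat[0] = c[0] * local
    for t in range(1, T):
        local = B[t] * (alpha_hat[t - 1] @ A)  # Eq. (5.17)
        c[t] = 1.0 / local.sum()               # Eq. (5.18)
        alpha_hat[t] = c[t] * local            # Eq. (5.19)
    return alpha_hat, c, -np.log(c).sum()      # termination, Eq. (5.20)

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.2], [0.1, 0.8], [0.9, 0.2]])
alpha_hat, c, log_prob = scaled_forward(pi, A, B)
print(log_prob)
```

On a sequence this short the unscaled recursion still works, so the log-probability can be cross-checked against $\log \sum_i \alpha_T(i)$; for long sequences only the scaled form survives.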

5.3.2 The Scaled Backward Algorithm

The scaled backward algorithm is found more easily, since it uses the same scale factors as the forward algorithm. The notation parallels that of the forward variables: $\beta_t(i)$ denotes the unscaled backward variable, $\hat\beta_t(i)$ the scaled (and iterated) variant of $\beta_t(i)$, $\bar\beta_t(i)$ the local version of $\beta_t(i)$ before scaling, and $c_t$ the scaling coefficient at each time. The scaled backward algorithm includes the following steps:

1. Initialization

Set t = T − 1;
$$\beta_T(i) = 1, \quad 1 \le i \le N \qquad (5.29)$$
$$\hat\beta_T(i) = c_T\, \beta_T(i), \quad 1 \le i \le N \qquad (5.30)$$

2. Induction

$$\bar\beta_t(i) = \sum_{j=1}^N \hat\beta_{t+1}(j)\, a_{ij}\, b_j(\mathbf{o}_{t+1}), \quad 1 \le i \le N \qquad (5.31)$$
$$\hat\beta_t(i) = c_t\, \bar\beta_t(i), \quad 1 \le i \le N \qquad (5.32)$$

Page 34: First Order Hidden Markov Model - DiVA portal833697/FULLTEXT01.pdf · Markov process, hence the Markov model itself can be described by A and π. 2.1 Markov Model Example In this

Chapter 5. Solution to Problem 1 - Probability Evaluation 31

3. Update time

Set t = t − 1;
Return to step 2 if t > 0;
Otherwise, terminate the algorithm.


Chapter 6

Solution to Problem 2 - “Optimal” State Sequence

The central problem is to find the optimal sequence of states for a given observation sequence and model. Unlike Problem 1, no exact solution can be found, and there are several possible ways of solving this problem. The difficulty lies in the definition of the optimal state sequence, because there are several possible optimality criteria [2]. One criterion is to choose the states $q_t$ that are individually most likely at each time $t$. To find such a state sequence, a probability variable is needed:

$$\gamma_t(i) = P(q_t = i\,|\,\mathbf{O}, \lambda) \qquad (6.1)$$

This is the probability of being in state $i$ at time $t$, given the observation sequence $\mathbf{O}$ and the model $\lambda$. Another way to express $\gamma_t(i)$ is

$$\gamma_t(i) = P(q_t = i\,|\,\mathbf{O}, \lambda) = \frac{P(\mathbf{O}, q_t = i\,|\,\lambda)}{P(\mathbf{O}|\lambda)} = \frac{P(\mathbf{O}, q_t = i\,|\,\lambda)}{\sum_{i=1}^N P(\mathbf{O}, q_t = i\,|\,\lambda)} \qquad (6.2)$$

Since $\alpha_t(i) = P(\mathbf{o}_1\mathbf{o}_2\ldots\mathbf{o}_t, q_t = i\,|\,\lambda)$ and $\beta_t(i) = P(\mathbf{o}_{t+1}\mathbf{o}_{t+2}\ldots\mathbf{o}_T\,|\,q_t = i, \lambda)$, the quantity $P(\mathbf{O}, q_t = i\,|\,\lambda)$ can be found as the joint probability

$$P(\mathbf{O}, q_t = i\,|\,\lambda) = P(\mathbf{o}_1\mathbf{o}_2\ldots\mathbf{o}_t, q_t = i\,|\,\lambda) \cdot P(\mathbf{o}_{t+1}\mathbf{o}_{t+2}\ldots\mathbf{o}_T\,|\,q_t = i, \lambda) = \alpha_t(i)\,\beta_t(i) \qquad (6.3)$$

By Eq. (6.3) it is now possible to rewrite Eq. (6.2) as


$$\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{i=1}^N \alpha_t(i)\,\beta_t(i)} \qquad (6.4)$$

When $\gamma_t(i)$ is calculated using Eq. (6.4), the most likely state $q_t^*$ at time $t$ can be found as

$$q_t^* = \arg\max_{1 \le i \le N}\left[\gamma_t(i)\right], \quad 1 \le t \le T \qquad (6.5)$$

Eq. (6.5) maximizes the expected number of correct states, but there can be problems with the resulting state sequence, since the state transition probabilities have not been taken into consideration. For example, what happens when some state transitions have zero probability ($a_{ij} = 0$)? Then the resulting "optimal" path may not even be valid. Obviously, a method that generates a guaranteed valid path is preferable. Fortunately such a method exists, based on dynamic programming: the Viterbi algorithm. While $\gamma_t(i)$ cannot be used for this purpose, it will still be useful in Problem 3 (parameter estimation).
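The individually-most-likely-state criterion of Eqs. (6.4)-(6.5) can be sketched as follows, using unscaled variables on a short toy sequence; all names and numbers are illustrative:

```python
import numpy as np

def posterior_decode(pi, A, B):
    """Return gamma_t(i) (Eq. 6.4) and the per-time argmax states (Eq. 6.5)."""
    T, N = B.shape
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi * B[0]
    for t in range(1, T):                          # forward pass
        alpha[t] = B[t] * (alpha[t - 1] @ A)
    for t in range(T - 2, -1, -1):                 # backward pass
        beta[t] = A @ (B[t + 1] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)      # Eq. (6.4)
    return gamma, gamma.argmax(axis=1)             # Eq. (6.5)

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.2], [0.1, 0.8], [0.9, 0.2]])
gamma, states = posterior_decode(pi, A, B)
print(states)
```

Note that this picks each state in isolation, so (as the text warns) the concatenated sequence may traverse transitions with $a_{ij} = 0$; the Viterbi algorithm below avoids that.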

6.1 The Viterbi Algorithm

The Viterbi algorithm is similar to the forward algorithm: the main difference is that where the forward algorithm sums over the previous states, the Viterbi algorithm maximizes. The aim of the Viterbi algorithm is to find the single best state sequence, $\mathbf{q} = (q_1, q_2, \ldots, q_T)$, for the given observation sequence $\mathbf{O} = (\mathbf{o}_1, \mathbf{o}_2, \ldots, \mathbf{o}_T)$ and model $\lambda$. Consider

$$\delta_t(i) = \max_{q_1, q_2, \ldots, q_{t-1}} P(q_1 q_2 \ldots q_{t-1}, q_t = i, \mathbf{o}_1\mathbf{o}_2\ldots\mathbf{o}_t\,|\,\lambda) \qquad (6.6)$$

which is the probability of observing $\mathbf{o}_1\mathbf{o}_2\ldots\mathbf{o}_t$ along the best path that ends in state $i$ at time $t$, given the model $\lambda$. By induction, $\delta_{t+1}(j)$ can be found as

$$\delta_{t+1}(j) = b_j(\mathbf{o}_{t+1}) \max_{1 \le i \le N}\left[\delta_t(i)\, a_{ij}\right] \qquad (6.7)$$

To retrieve the state sequence, it is necessary to keep track of the argument that maximizes Eq. (6.7) for each $t$ and $j$. This is done by saving the argument in an array $\psi_t(j)$. The complete Viterbi algorithm follows:

1. Initialization

Set t = 2;
$$\delta_1(i) = \pi_i\, b_i(\mathbf{o}_1), \quad 1 \le i \le N \qquad (6.8)$$
$$\psi_1(i) = 0, \quad 1 \le i \le N \qquad (6.9)$$


2. Induction

$$\delta_t(j) = b_j(\mathbf{o}_t) \max_{1 \le i \le N}\left[\delta_{t-1}(i)\, a_{ij}\right], \quad 1 \le j \le N \qquad (6.10)$$
$$\psi_t(j) = \arg\max_{1 \le i \le N}\left[\delta_{t-1}(i)\, a_{ij}\right], \quad 1 \le j \le N \qquad (6.11)$$

3. Update time

Set t = t + 1;
Return to step 2 if t ≤ T;
Otherwise, terminate the algorithm (go to step 4).

4. Termination

$$P(\mathbf{O}, \mathbf{q}^*|\lambda) = \max_{1 \le i \le N}\left[\delta_T(i)\right] \qquad (6.12)$$
$$q_T^* = \arg\max_{1 \le i \le N}\left[\delta_T(i)\right] \qquad (6.13)$$

5. Path (state sequence) backtracking

(a) Initialization

Set t = T − 1

(b) Backtracking

$$q_t^* = \psi_{t+1}\left(q_{t+1}^*\right) \qquad (6.14)$$

(c) Update time

Set t = t − 1;
Return to step (b) if t ≥ 1;
Otherwise, terminate the algorithm.

The algorithm involves multiplication of probabilities, which poses the same numerical problem as for the forward and backward algorithms. Therefore, a numerically stable variant is required.

6.2 The Alternative Viterbi Algorithm

As mentioned, the original Viterbi algorithm involves multiplications of probabilities. One way to avoid this is to take the logarithm of the model parameters. This logarithm is obviously problematic when some model parameters are zero, which is often the case for $A$ and $\boldsymbol{\pi}$. It can be handled by checking for zeros in $A$ and $\boldsymbol{\pi}$ and replacing the logarithm's output with the most negative number representable in the machine precision, $-\delta_{\max}$. The problem also arises when $b_i(\mathbf{o}_t)$ becomes zero; hence, if $b_i(\mathbf{o}_t)$ becomes zero¹ it is replaced by $\delta_{\min}$. The alternative Viterbi algorithm follows:

¹ Read more about numerical problems in Sec. 7.1.2.


1. Preprocessing

$$\tilde\pi_i = \begin{cases}\log(\pi_i), & \text{if } \pi_i > \delta_{\min} \\ -\delta_{\max}, & \text{otherwise}\end{cases}, \quad 1 \le i \le N \qquad (6.15)$$

$$\tilde a_{ij} = \begin{cases}\log(a_{ij}), & \text{if } a_{ij} > \delta_{\min} \\ -\delta_{\max}, & \text{otherwise}\end{cases}, \quad 1 \le i, j \le N \qquad (6.16)$$

2. Initialization

Set t = 2;
$$\tilde b_i(\mathbf{o}_1) = \log(b_i(\mathbf{o}_1)), \quad 1 \le i \le N \qquad (6.17)$$
$$\tilde\delta_1(i) = \log(\delta_1(i)) = \log(\pi_i\, b_i(\mathbf{o}_1)) = \tilde\pi_i + \tilde b_i(\mathbf{o}_1), \quad 1 \le i \le N \qquad (6.18)$$
$$\psi_1(i) = 0, \quad 1 \le i \le N \qquad (6.19)$$

3. Induction

$$\tilde b_j(\mathbf{o}_t) = \log(b_j(\mathbf{o}_t)), \quad 1 \le j \le N \qquad (6.20)$$
$$\tilde\delta_t(j) = \log(\delta_t(j)) = \log\left(b_j(\mathbf{o}_t) \max_{1 \le i \le N} \delta_{t-1}(i)\, a_{ij}\right) = \tilde b_j(\mathbf{o}_t) + \max_{1 \le i \le N}\left[\tilde\delta_{t-1}(i) + \tilde a_{ij}\right], \quad 1 \le j \le N \qquad (6.21)$$
$$\psi_t(j) = \arg\max_{1 \le i \le N}\left[\tilde\delta_{t-1}(i) + \tilde a_{ij}\right], \quad 1 \le j \le N \qquad (6.22)$$

4. Update time

Set t = t + 1;
Return to step 3 if t ≤ T;
Otherwise, terminate the algorithm (go to step 5).

5. Termination

$$\log P(\mathbf{O}, \mathbf{q}^*|\lambda) = \max_{1 \le i \le N}\left[\tilde\delta_T(i)\right] \qquad (6.23)$$
$$q_T^* = \arg\max_{1 \le i \le N}\left[\tilde\delta_T(i)\right] \qquad (6.24)$$


6. Path (state sequence) backtracking

(a) Initialization

Set t = T − 1

(b) Backtracking

$$q_t^* = \psi_{t+1}\left(q_{t+1}^*\right) \qquad (6.25)$$

(c) Update time

Set t = t − 1;
Return to step (b) if t ≥ 1;
Otherwise, terminate the algorithm.

To get a better understanding of how the Viterbi (and the alternative Viterbi) algorithm works, consider a model with $N = 3$ states and an observation sequence of length $T = 8$. In the initialization ($t = 1$), $\delta_1(1)$, $\delta_1(2)$ and $\delta_1(3)$ are found; assume that $\delta_1(2)$ is the largest. At the next time ($t = 2$) three variables are used, namely $\delta_2(1)$, $\delta_2(2)$ and $\delta_2(3)$; assume that $\delta_2(1)$ is now the maximum of the three. In the same manner, assume the variables $\delta_3(3)$, $\delta_4(2)$, $\delta_5(2)$, $\delta_6(1)$, $\delta_7(3)$ and $\delta_8(3)$ are the maxima at their respective times, see Fig. 6.1. From Fig. 6.1 one can see that the Viterbi algorithm works on the lattice structure.

[Figure: Viterbi trellis with 3 states over 8 time steps; the maximizing state is marked at each time.]

Figure 6.1: Example of Viterbi search.

An algorithm that only finds the probability $P(\mathbf{O}, \mathbf{q}^*|\lambda)$ of the “optimal” state sequence, without extracting the actual state sequence, can be made faster; it is called the fast alternative Viterbi algorithm.


6.3 The Fast Alternative Viterbi Algorithm

The alternative Viterbi algorithm finds the “optimal” state sequence $q_t^*$ and the corresponding probability $P(\mathbf{O}, \mathbf{q}^*|\lambda)$. In some cases the actual state sequence is not important and only the probability of the best state sequence is wanted, for example when two (or more) models compete and only the highest probability matters. In this case the backtracking and $\psi_t(j)$ can be removed from the algorithm, giving a faster algorithm with smaller memory requirements, called the fast alternative Viterbi algorithm (a similar simplification can be made to the standard Viterbi algorithm). The fast alternative Viterbi algorithm follows:

1. Preprocessing

$$\tilde\pi_i = \begin{cases}\log(\pi_i), & \text{if } \pi_i > \delta_{\min} \\ -\delta_{\max}, & \text{otherwise}\end{cases}, \quad 1 \le i \le N \qquad (6.26)$$

$$\tilde a_{ij} = \begin{cases}\log(a_{ij}), & \text{if } a_{ij} > \delta_{\min} \\ -\delta_{\max}, & \text{otherwise}\end{cases}, \quad 1 \le i, j \le N \qquad (6.27)$$

2. Initialization

Set t = 2;
$$\tilde b_i(\mathbf{o}_1) = \log(b_i(\mathbf{o}_1)), \quad 1 \le i \le N \qquad (6.28)$$
$$\tilde\delta_1(i) = \log(\delta_1(i)) = \log(\pi_i\, b_i(\mathbf{o}_1)) = \tilde\pi_i + \tilde b_i(\mathbf{o}_1), \quad 1 \le i \le N \qquad (6.29)$$

3. Induction

$$\tilde b_j(\mathbf{o}_t) = \log(b_j(\mathbf{o}_t)), \quad 1 \le j \le N \qquad (6.31)$$
$$\tilde\delta_t(j) = \log(\delta_t(j)) = \log\left(b_j(\mathbf{o}_t) \max_{1 \le i \le N} \delta_{t-1}(i)\, a_{ij}\right) = \tilde b_j(\mathbf{o}_t) + \max_{1 \le i \le N}\left[\tilde\delta_{t-1}(i) + \tilde a_{ij}\right], \quad 1 \le j \le N \qquad (6.32)$$

4. Update time

Set t = t + 1;
Return to step 3 if t ≤ T;
Otherwise, terminate the algorithm (go to step 5).

5. Termination

$$\log P(\mathbf{O}, \mathbf{q}^*|\lambda) = \max_{1 \le i \le N}\left[\tilde\delta_T(i)\right] \qquad (6.33)$$


Chapter 7

Solution to Problem 3 - Parameter Estimation

The third problem concerns the estimation of the model parameters, $\lambda = (A, B, \boldsymbol{\pi})$. The problem can be formulated as

$$\lambda^* = \arg\max_{\lambda}\left[P(\mathbf{O}|\lambda)\right] \qquad (7.1)$$

that is, given an observation $\mathbf{O}$, the aim is to find, among all possible $\lambda$, the model $\lambda^*$ that maximizes $P(\mathbf{O}|\lambda)$. This is the most difficult of the three problems, as there is no known way to analytically find, in closed form, the model parameters that maximize the probability of the observation sequence. However, the model parameters can be chosen to locally maximize the likelihood $P(\mathbf{O}|\lambda)$. Commonly used methods for solving this problem are the Baum-Welch method (which gives essentially the same solution as the expectation-maximization method for this task) and gradient techniques. Both are iterative techniques that improve the likelihood $P(\mathbf{O}|\lambda)$. However, the Baum-Welch method has some advantages over the gradient techniques [12, 13, 14]:

• Baum-Welch is numerically stable, with an increasing likelihood in every iteration.

• Baum-Welch converges to a local optimum.

• Baum-Welch has linear convergence.

This section will derive the reestimation equations used in the Baum-Welch method.



7.1 Baum-Welch Reestimation

The model $\lambda$ has three terms: the state transition probability distribution $A$, the initial state distribution $\boldsymbol{\pi}$ and the observation symbol probability distribution $B$. Since continuous observation densities are used here, $B$ is represented by¹ $c_{jk}$, $\boldsymbol{\mu}_{jk}$ and $\boldsymbol{\Sigma}_{jk}$. To describe the reestimation procedure, the probability $\xi_t(i, j)$ will be useful:

$$\xi_t(i,j) = P(q_t = i, q_{t+1} = j\,|\,\mathbf{O}, \lambda) = \frac{P(q_t = i, q_{t+1} = j, \mathbf{O}\,|\,\lambda)}{P(\mathbf{O}|\lambda)} \qquad (7.2)$$

which is the probability of being in state $i$ at time $t$ and in state $j$ at time $t+1$, given the model $\lambda$ and the observation sequence $\mathbf{O}$. The paths that satisfy the conditions required by Eq. (7.2) are illustrated in Fig. 7.1.

[Figure: the paths entering state $i$ at time $t$ (through $a_{1i}, \ldots, a_{Ni}$, summarized by $\alpha_t(i)$) and leaving state $j$ at time $t+1$ (through $a_{j1}, \ldots, a_{jN}$, summarized by $\beta_{t+1}(j)$), joined by the arc $a_{ij}\, b_j(\mathbf{o}_{t+1})$.]

Figure 7.1: Computation of ξt(i, j).

Considering Eqs. (2.2), (3.1), (5.5), (5.10), (6.3) and (7.2), $\xi_t(i, j)$ can be found as

$$\xi_t(i,j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)}{P(\mathbf{O}|\lambda)} = \frac{\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)}{\sum_{i=1}^N \sum_{j=1}^N \alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)} \qquad (7.3)$$

¹ See Eqs. 3.3-3.6.


Recall from Problem 2 that $\gamma_t(i)$ is the probability of being in state $i$ at time $t$, given the entire observation sequence $\mathbf{O}$ and the model $\lambda$. The relation between $\gamma_t(i)$ and $\xi_t(i, j)$ then follows from Eqs. (6.1) and (7.2):

$$\gamma_t(i) = P(q_t = i\,|\,\mathbf{O}, \lambda) = \sum_{j=1}^N P(q_t = i, q_{t+1} = j\,|\,\mathbf{O}, \lambda) = \sum_{j=1}^N \xi_t(i,j) \qquad (7.4)$$

If $\gamma_t(i)$ is summed over time $t$, one obtains a quantity that can be interpreted as the expected number of times state $i$ is visited, or equivalently, the expected number of transitions made from state $i$ (if the time slot $t = T$ is excluded) [2]. Similarly, summing $\xi_t(i, j)$ over time gives the expected number of transitions from state $i$ to state $j$. The term $\gamma_1(i)$ will also prove to be useful. It can thus be concluded that

$$\gamma_1(i) = \text{probability of starting in state } i \qquad (7.5)$$
$$\sum_{t=1}^{T-1} \gamma_t(i) = \text{expected number of transitions from state } i \text{ in } \mathbf{O} \qquad (7.6)$$
$$\sum_{t=1}^{T-1} \xi_t(i,j) = \text{expected number of transitions from state } i \text{ to state } j \text{ in } \mathbf{O} \qquad (7.7)$$

Given the above definitions it is possible to derive the reestimation formulas for $\boldsymbol{\pi}$ and $A$:

$$\bar\pi_i = \text{expected frequency (number of times) in state } i \text{ at time } t = 1 = \gamma_1(i) = \frac{\alpha_1(i)\,\beta_1(i)}{\sum_{i=1}^N \alpha_1(i)\,\beta_1(i)} = \frac{\alpha_1(i)\,\beta_1(i)}{\sum_{i=1}^N \alpha_T(i)}$$


and

$$\bar a_{ij} = \frac{\text{expected number of transitions from state } i \text{ to state } j}{\text{expected number of transitions from state } i} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)} = \frac{\sum_{t=1}^{T-1} \alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)}{\sum_{t=1}^{T-1} \alpha_t(i)\, \beta_t(i)}$$

In the derivation of the reestimation formulas for $\bar\pi_i$ and $\bar a_{ij}$, the following probabilities are useful for further insight:

$$\begin{aligned}
P(\mathbf{O}|\lambda) &= \sum_{i=1}^N \alpha_t(i)\,\beta_t(i) = \sum_{i=1}^N \alpha_T(i) \\
P(\mathbf{O}, q_t = i\,|\,\lambda) &= \alpha_t(i)\,\beta_t(i) \\
P(\mathbf{O}, q_t = i, q_{t+1} = j\,|\,\lambda) &= \alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)
\end{aligned} \qquad (7.8)$$

Eqs. (7.3) and (6.4), and thereby the reestimation formulas for $\bar\pi_i$ and $\bar a_{ij}$, can be evaluated with the unscaled forward and backward variables. If the scaled variables are used, the probability $\xi_t(i, j)$ is found as


$$\begin{aligned}
\xi_t(i,j) &= \frac{\hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)}{\sum_{i=1}^N \sum_{j=1}^N \hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)} \\
&= \frac{\left(\prod_{\tau=1}^{t} c_\tau\right)\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\left(\prod_{\tau=t+1}^{T} c_\tau\right)\beta_{t+1}(j)}{\sum_{i=1}^N \sum_{j=1}^N \left(\prod_{\tau=1}^{t} c_\tau\right)\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\left(\prod_{\tau=t+1}^{T} c_\tau\right)\beta_{t+1}(j)} \\
&= \frac{\left(\prod_{\tau=1}^{t} c_\tau\right)\left(\prod_{\tau=t+1}^{T} c_\tau\right)\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)}{\left(\prod_{\tau=1}^{t} c_\tau\right)\left(\prod_{\tau=t+1}^{T} c_\tau\right)\sum_{i=1}^N \sum_{j=1}^N \alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)} \\
&= \frac{\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)}{\sum_{i=1}^N \sum_{j=1}^N \alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \beta_{t+1}(j)}
\end{aligned} \qquad (7.9)$$

and the scaled γt(i) as


$$\begin{aligned}
\gamma_t(i) &= \frac{\hat\alpha_t(i)\, \hat\beta_t(i)}{\sum_{i=1}^N \hat\alpha_t(i)\, \hat\beta_t(i)} \\
&= \frac{\left(\prod_{\tau=1}^{t} c_\tau\right)\alpha_t(i)\left(\prod_{\tau=t}^{T} c_\tau\right)\beta_t(i)}{\sum_{i=1}^N \left(\prod_{\tau=1}^{t} c_\tau\right)\alpha_t(i)\left(\prod_{\tau=t}^{T} c_\tau\right)\beta_t(i)} \\
&= \frac{\left(\prod_{\tau=1}^{t} c_\tau\right)\left(\prod_{\tau=t}^{T} c_\tau\right)\alpha_t(i)\, \beta_t(i)}{\left(\prod_{\tau=1}^{t} c_\tau\right)\left(\prod_{\tau=t}^{T} c_\tau\right)\sum_{i=1}^N \alpha_t(i)\, \beta_t(i)} \\
&= \frac{\alpha_t(i)\, \beta_t(i)}{\sum_{i=1}^N \alpha_t(i)\, \beta_t(i)}
\end{aligned} \qquad (7.10)$$

As Eqs. (7.9) and (7.10) show, $\xi_t(i, j)$ and $\gamma_t(i)$ are the same whether scaled or unscaled forward and backward variables are used. Herein lies the elegance of using a scale factor that depends on the time but is independent of the state. A closer look at Eq. (7.9) shows that $\xi_t(i, j)$ can be found even more easily when the scaled forward and backward variables are used:


$$\begin{aligned}
\xi_t(i,j) &= \frac{\hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)}{\sum_{i=1}^N \sum_{j=1}^N \hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)} \\
&= \frac{\hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)}{\sum_{i=1}^N \hat\alpha_t(i)\left(\sum_{j=1}^N a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)\right)} \\
&= \frac{\hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)}{\sum_{i=1}^N \hat\alpha_t(i)\, \hat\beta_t(i)/c_t} \\
&= \frac{\hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)}{\left(\prod_{\tau=1}^{T} c_\tau\right)\sum_{i=1}^N \alpha_t(i)\, \beta_t(i)} \\
&= \frac{\hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)}{\left(\prod_{\tau=1}^{T} c_\tau\right)\sum_{i=1}^N P(\mathbf{O}, q_t = i\,|\,\lambda)} \\
&= \frac{\hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)}{\left(\prod_{\tau=1}^{T} c_\tau\right) P(\mathbf{O}|\lambda)} \\
&= \hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)
\end{aligned} \qquad (7.11)$$

where the third step uses $\hat\beta_t(i) = c_t \sum_{j=1}^N a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)$ from Eqs. (5.31)-(5.32), and the last step uses Eq. (5.26), $\left(\prod_{\tau=1}^T c_\tau\right) P(\mathbf{O}|\lambda) = 1$.

Since $\bar\pi_i$ and $\bar a_{ij}$ are computed from $\xi_t(i, j)$ and $\gamma_t(i)$, they are likewise independent of which forward and backward variables are used (scaled or unscaled). Thereby they can be calculated, without numerical problems, as

$$\bar\pi_i = \frac{\hat\alpha_1(i)\, \hat\beta_1(i)}{\sum_{i=1}^N \hat\alpha_1(i)\, \hat\beta_1(i)} = \gamma_1(i) \qquad (7.12)$$


and

$$\bar a_{ij} = \frac{\sum_{t=1}^{T-1} \hat\alpha_t(i)\, a_{ij}\, b_j(\mathbf{o}_{t+1})\, \hat\beta_{t+1}(j)}{\sum_{t=1}^{T-1} \hat\alpha_t(i)\, \hat\beta_t(i)/c_t} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)} \qquad (7.13)$$

where the numerator terms are the scaled $\xi_t(i, j)$ of Eq. (7.11), and, by Eq. (7.4), $\gamma_t(i) = \sum_j \xi_t(i,j) = \hat\alpha_t(i)\, \hat\beta_t(i)/c_t$.

The reestimation of $c_{jk}$, $\boldsymbol{\mu}_{jk}$ and $\boldsymbol{\Sigma}_{jk}$ is more complicated. However, if the model has only one state ($N = 1$) and one mixture component ($M = 1$), it is an easy averaging task²:

$$c_j = 1 \qquad (7.14)$$
$$\boldsymbol{\mu}_j = \frac{1}{T}\sum_{t=1}^T \mathbf{o}_t \qquad (7.15)$$
$$\boldsymbol{\Sigma}_j = \frac{1}{T}\sum_{t=1}^T \left(\mathbf{o}_t - \boldsymbol{\mu}_j\right)\left(\mathbf{o}_t - \boldsymbol{\mu}_j\right)' \qquad (7.16)$$

In practice there are multiple states and multiple mixture components, and there is no direct assignment of observation vectors to individual states, since the underlying state sequence is unknown. Because the full likelihood of each observation sequence is a summation over all possible state sequences, each observation vector $\mathbf{o}_t$ contributes to the likelihood of every state $j$. Instead of assigning each observation vector to a specific state, each observation is assigned to every state and weighted. This weight is the probability of the model being in that state and a specific mixture component for that particular observation vector. The probability, for state $j$ and mixture $k$, is found by³

$$\gamma_t(j,k) = \frac{\alpha_t(j)\,\beta_t(j)}{\sum_{j=1}^N \alpha_t(j)\,\beta_t(j)} \cdot \frac{c_{jk}\, \mathcal{N}\!\left(\mathbf{o}_t, \boldsymbol{\mu}_{jk}, \boldsymbol{\Sigma}_{jk}\right)}{\sum_{k=1}^M c_{jk}\, \mathcal{N}\!\left(\mathbf{o}_t, \boldsymbol{\mu}_{jk}, \boldsymbol{\Sigma}_{jk}\right)} \qquad (7.17)$$

The reestimation formula for $c_{jk}$ is the ratio between the expected number of times the system is in state $j$ using the $k$th mixture component and the expected number of times the system is in state $j$, that is

$$\bar c_{jk} = \frac{\sum_{t=1}^T \gamma_t(j,k)}{\sum_{t=1}^T \sum_{k=1}^M \gamma_t(j,k)} \qquad (7.18)$$

² The transpose is denoted ′, so that no confusion with the observation sequence length T arises.
³ γt(j, k) is a generalization of γt(j).


Calculating $\boldsymbol{\mu}_{jk}$ and $\boldsymbol{\Sigma}_{jk}$ can be achieved by weighting the simple averages in Eqs. (7.15) and (7.16). As before, the weight is the probability of being in state $j$ and using mixture $k$ when observing $\mathbf{o}_t$. The reestimation formulas for $\boldsymbol{\mu}_{jk}$ and $\boldsymbol{\Sigma}_{jk}$ are hereby found as

$$\bar{\boldsymbol{\mu}}_{jk} = \frac{\sum_{t=1}^T \gamma_t(j,k)\, \mathbf{o}_t}{\sum_{t=1}^T \gamma_t(j,k)} \qquad (7.19)$$

$$\bar{\boldsymbol{\Sigma}}_{jk} = \frac{\sum_{t=1}^T \gamma_t(j,k)\left(\mathbf{o}_t - \bar{\boldsymbol{\mu}}_{jk}\right)\left(\mathbf{o}_t - \bar{\boldsymbol{\mu}}_{jk}\right)'}{\sum_{t=1}^T \gamma_t(j,k)} \qquad (7.20)$$

The reestimation described in this section is based on one training sample, which is typically not sufficient to get reliable estimates. This is particularly the case when left-right models are used and many parameters must be estimated. To get reliable estimates it is advisable to use multiple observation sequences.

7.1.1 Multiple Observation Sequences

In the Baum-Welch method a large number of parameters are estimated, and a large number of training examples are needed to obtain robust and reliable estimates. If only one observation sequence is used to train the model, it will be overtrained to this specific sample, which can also lead to numerical problems in the reestimation procedure.

The modification of the reestimation procedure is straightforward. Denote a set of $R$ observation sequences as

$$\mathbf{O} = \left[\mathbf{O}^{(1)}, \mathbf{O}^{(2)}, \ldots, \mathbf{O}^{(R)}\right] \qquad (7.21)$$

where $\mathbf{O}^{(r)} = \left(\mathbf{o}_1^{(r)}, \mathbf{o}_2^{(r)}, \ldots, \mathbf{o}_{T_r}^{(r)}\right)$ is the $r$th observation sequence and $T_r$ is the corresponding observation sequence length. The new function to maximize, according to Eq. (7.1), becomes (if the observation sequences $\mathbf{O}^{(r)}$ are assumed independent)

$$P(\mathbf{O}|\lambda) = \prod_{r=1}^R P\!\left(\mathbf{O}^{(r)}|\lambda\right) \qquad (7.22)$$


A more complete derivation of the reestimation formulas, without the independence assumption, can be found in Appendix A.

The method for this maximization problem is similar to the single-sequence procedure. The variables $\hat\alpha_t(j)$, $\hat\beta_t(j)$, $\gamma_t(j, k)$ and $\xi_t(i, j)$ are found for every training sample, and the reestimation is then performed on the accumulated values. In this way the new parameters are an average of the results from the individual training sequences. The procedure is repeated until an optimum is found or the maximum number of iterations is reached. The reestimation formulas become

$$\bar\pi_i = \frac{1}{R}\sum_{r=1}^R \gamma_1^{(r)}(i) \qquad (7.23)$$

$$\bar a_{ij} = \frac{\sum_{r=1}^R \sum_{t=1}^{T_r-1} \hat\alpha_t^{(r)}(i)\, a_{ij}\, b_j\!\left(\mathbf{o}_{t+1}^{(r)}\right) \hat\beta_{t+1}^{(r)}(j)}{\sum_{r=1}^R \sum_{t=1}^{T_r-1} \hat\alpha_t^{(r)}(i)\, \hat\beta_t^{(r)}(i)/c_t^{(r)}} = \frac{\sum_{r=1}^R \sum_{t=1}^{T_r-1} \xi_t^{(r)}(i,j)}{\sum_{r=1}^R \sum_{t=1}^{T_r-1} \gamma_t^{(r)}(i)} \qquad (7.24)$$

\bar{c}_{jk} = \frac{\sum_{r=1}^{R}\sum_{t=1}^{T_r} \gamma_t^{(r)}(j,k)}{\sum_{r=1}^{R}\sum_{t=1}^{T_r}\sum_{k=1}^{M} \gamma_t^{(r)}(j,k)}    (7.25)

\bar{\mu}_{jk} = \frac{\sum_{r=1}^{R}\sum_{t=1}^{T_r} \gamma_t^{(r)}(j,k)\,o_t^{(r)}}{\sum_{r=1}^{R}\sum_{t=1}^{T_r} \gamma_t^{(r)}(j,k)}    (7.26)

\bar{\Sigma}_{jk} = \frac{\sum_{r=1}^{R}\sum_{t=1}^{T_r} \gamma_t^{(r)}(j,k)\,\big(o_t^{(r)} - \bar{\mu}_{jk}\big)\big(o_t^{(r)} - \bar{\mu}_{jk}\big)'}{\sum_{r=1}^{R}\sum_{t=1}^{T_r} \gamma_t^{(r)}(j,k)}    (7.27)

Note that the result from Eq. (7.26) is used in Eq. (7.27). These are the final formulas used for the reestimation.
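To make the accumulation concrete, the reestimation of the emission parameters of one state from R sequences can be sketched as follows. This is a minimal sketch, not the report's code: the function name `reestimate_state_gmm` and the array layout (the responsibilities \gamma_t^{(r)}(j,k) stored as one T_r × M array per sequence, for a fixed state j) are assumptions made for this example.

```python
import numpy as np

def reestimate_state_gmm(gammas, observations):
    """Reestimate c_jk, mu_jk, Sigma_jk of one state j (Eqs. 7.25-7.27).

    gammas       : list of R arrays; gammas[r][t, k] = gamma_t^(r)(j, k).
    observations : list of R arrays; observations[r][t] = o_t^(r) (D-dim).
    """
    M = gammas[0].shape[1]
    D = observations[0].shape[1]
    occ = np.zeros(M)            # sum over r and t of gamma_t^(r)(j, k)
    mu_num = np.zeros((M, D))    # numerator of Eq. (7.26)
    for g, o in zip(gammas, observations):
        occ += g.sum(axis=0)
        mu_num += g.T @ o
    c = occ / occ.sum()                  # Eq. (7.25)
    mu = mu_num / occ[:, None]           # Eq. (7.26)
    sigma = np.zeros((M, D, D))
    for g, o in zip(gammas, observations):
        for k in range(M):
            diff = o - mu[k]             # uses the new means, cf. Eq. (7.27)
            sigma[k] += (g[:, k, None] * diff).T @ diff
    sigma /= occ[:, None, None]          # Eq. (7.27)
    return c, mu, sigma
```

Note how the statistics are first accumulated over all R sequences and only then normalized, which is exactly the averaging over the training sequences described above.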


7.1.2 Numerical Issues

One problem when training HMM parameters with the reestimation formulas is that the number of training observations may not be sufficient. There is always an inadequate number of occurrences of low probability events to give good estimates of the model parameters. This has the effect that the mixture weights and the diagonal values in the covariance matrix can become zero, which is numerically undesirable. To avoid this a numerical floor \varepsilon_c is introduced

\bar{c}_{jk} = \begin{cases} c_{jk}, & \text{if } c_{jk} \ge \varepsilon_c \\ \varepsilon_c, & \text{otherwise} \end{cases}    (7.28)

and a similar floor \varepsilon_{\Sigma} is introduced for the diagonal values of the covariance matrix

\bar{\Sigma}_{jk}(d,d) = \begin{cases} \Sigma_{jk}(d,d), & \text{if } \Sigma_{jk}(d,d) \ge \varepsilon_{\Sigma} \\ \varepsilon_{\Sigma}, & \text{otherwise} \end{cases}, \quad 1 \le d \le D    (7.29)

The numerical floor values are typically (double precision considered) in the range 10^{-10} \le \varepsilon_c, \varepsilon_{\Sigma} \le 10^{-3} [2]. An extreme value for \varepsilon_{\Sigma} can be found from \varepsilon_{\Sigma} > \delta_{min}^{1/D} (if diagonal covariances are used): if all diagonal elements of the covariance are floored, \varepsilon_{\Sigma} is multiplied by itself D times when the determinant is formed (for a diagonal covariance the determinant is the product of the diagonal elements), and this result must not be smaller than \delta_{min} (underflow). This also indicates that a large dimension of the feature vectors will require a higher numerical precision.

Another problem is that the Gaussian mixture b_j(o_t) can become zero, which is not desired since there is always a small probability of any event (in the Gaussian case). This is avoided by a probability floor equal to the smallest positive normalized value, \delta_{min}, given by the machine precision (in double precision \delta_{min} = 2^{-1022} \approx 2.2251 \cdot 10^{-308}). Hence, the Gaussian mixture value is floored as

\bar{b}_j(o_t) = \begin{cases} b_j(o_t), & \text{if } b_j(o_t) \ge \delta_{min} \\ \delta_{min}, & \text{otherwise} \end{cases}    (7.30)
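The three floors of Eqs. (7.28)-(7.30) are simple element-wise clamps. The sketch below is an assumed implementation; the constant values follow the ranges suggested in the text, not a prescribed choice.

```python
import numpy as np

EPS_C = 1e-5              # floor on mixture weights, Eq. (7.28)
EPS_SIGMA = 1e-5          # floor on covariance diagonal, Eq. (7.29)
DELTA_MIN = 2.0 ** -1022  # smallest normalized double, used in Eq. (7.30)

def floor_weights(c):
    """Eq. (7.28): keep mixture weights away from exact zero."""
    return np.maximum(c, EPS_C)

def floor_covariance_diagonal(sigma):
    """Eq. (7.29): floor each diagonal element of a covariance matrix."""
    sigma = sigma.copy()
    d = np.arange(sigma.shape[0])
    sigma[d, d] = np.maximum(sigma[d, d], EPS_SIGMA)
    return sigma

def floor_mixture_density(b):
    """Eq. (7.30): floor the Gaussian mixture value."""
    return max(b, DELTA_MIN)
```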

Another numerical problem occurs when \gamma_t(j,k) is calculated by Eq. (7.17). The problem lies in finding

\frac{c_{jk}\,\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk})}{\sum_{k=1}^{M} c_{jk}\,\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk})}    (7.31)

The problem is that \mathcal{N}(o_t,\mu_{jk},\Sigma_{jk}) can actually become zero for every mixture, hence a division by zero occurs. This typically happens when the dimension of the observations becomes large (say 50 or more using double precision). The reason why \mathcal{N}(o_t,\mu_{jk},\Sigma_{jk}) becomes zero (due to numerical precision) can be seen from the definition

\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk}) = \frac{1}{(2\pi)^{D/2}|\Sigma_{jk}|^{1/2}}\, e^{-\frac{1}{2}(o_t-\mu_{jk})^T \Sigma_{jk}^{-1} (o_t-\mu_{jk})}    (7.32)

If the exponent -\frac{1}{2}(o_t-\mu_{jk})^T \Sigma_{jk}^{-1} (o_t-\mu_{jk}) is a large negative number, the expression becomes zero (numerical underflow). The solution to this problem is to multiply both the numerator and the denominator of Eq. (7.31) by a scale factor e^{\nu}, which yields

\frac{c_{jk}\,\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk}) \cdot e^{\nu}}{\sum_{k=1}^{M} c_{jk}\,\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk}) \cdot e^{\nu}}    (7.33)

The effect of multiplying with this exponential factor can be seen in the definition of the normal distribution

\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk}) \cdot e^{\nu} = \frac{1}{(2\pi)^{D/2}|\Sigma_{jk}|^{1/2}}\, e^{-\frac{1}{2}(o_t-\mu_{jk})^T \Sigma_{jk}^{-1} (o_t-\mu_{jk})} \cdot e^{\nu}
= \frac{1}{(2\pi)^{D/2}|\Sigma_{jk}|^{1/2}}\, e^{-\frac{1}{2}(o_t-\mu_{jk})^T \Sigma_{jk}^{-1} (o_t-\mu_{jk}) + \nu}    (7.34)

Hence, the large negative exponential term can be adjusted to be zero, where \nu can be selected as

\nu = -\max_k\left[-\frac{1}{2}(o_t-\mu_{jk})^T \Sigma_{jk}^{-1} (o_t-\mu_{jk})\right]    (7.35)

which implies that the denominator in Eq. (7.31) is always nonzero. Hence, division by zero will not occur, and Eq. (7.31) will still produce the same mathematically correct value.
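In practice the scaling of Eqs. (7.33)-(7.35) is most conveniently applied in the log domain. The sketch below is an assumed implementation: it takes the per-mixture terms already as log(c_jk) + log N(o_t, mu_jk, Sigma_jk) and shifts the largest exponent to zero, which covers the quadratic-form shift of Eq. (7.35).

```python
import numpy as np

def mixture_responsibilities(log_weighted):
    """Evaluate Eq. (7.31) safely via a scale factor e^nu (Eqs. 7.33-7.35).

    log_weighted[k] = log(c_jk) + log N(o_t, mu_jk, Sigma_jk).
    """
    nu = -np.max(log_weighted)           # shift the largest exponent to zero
    shifted = np.exp(log_weighted + nu)  # numerator and denominator share e^nu
    return shifted / shifted.sum()       # denominator is now guaranteed nonzero
```

For example, `mixture_responsibilities(np.array([-1000.0, -1001.0]))` returns finite responsibilities even though exp(-1000) underflows to zero in double precision.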

The logarithm of the Gaussian mixture can also be rewritten to avoid unnecessary numerical underflow (that is, Eq. (7.30) must be applied). The logarithm of the Gaussian mixture is commonly used in the Viterbi algorithms, that is, in Eqs. (6.17), (6.20), (6.28) and (6.31). Note that the logarithm of the Gaussian mixture can be mathematically rewritten as


\log \sum_{k=1}^{M} c_{jk}\,\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk}) = \log \sum_{k=1}^{M} e^{\log(c_{jk}\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk}))}
= -\kappa + \log \sum_{k=1}^{M} e^{\log(c_{jk}\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk})) + \kappa}    (7.36)

where \kappa should be chosen so that the exponentials in the sum take values close to the maximum exponent allowed by the machine precision (\log(\delta_{max})). This gives that \kappa can be formulated as

\kappa = \log(\delta_{max}) - \log(M+1) - \max_k\left[\log\big(c_{jk}\,\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk})\big)\right]    (7.37)

where

\log\big(c_{jk}\,\mathcal{N}(o_t,\mu_{jk},\Sigma_{jk})\big) = \log(c_{jk}) - \frac{D}{2}\log(2\pi) - \frac{1}{2}\log(|\Sigma_{jk}|) - \frac{1}{2}(o_t-\mu_{jk})^T \Sigma_{jk}^{-1} (o_t-\mu_{jk})    (7.38)
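Eqs. (7.36)-(7.38) together form a log-sum-exp evaluation of the log mixture likelihood. The sketch below uses the common variant that shifts by the maximum term (rather than aiming at log(\delta_{max}) exactly, as \kappa does in Eq. (7.37)); diagonal covariances are assumed, so Eq. (7.38) reduces to sums over the diagonal. The function names are made up for this example.

```python
import numpy as np

def log_gaussian(o, mu, sigma_diag):
    """Eq. (7.38) for a diagonal covariance: log N(o, mu, Sigma)."""
    D = o.shape[0]
    return (-0.5 * D * np.log(2.0 * np.pi)
            - 0.5 * np.sum(np.log(sigma_diag))
            - 0.5 * np.sum((o - mu) ** 2 / sigma_diag))

def log_mixture(o, c, mu, sigma_diag):
    """Eq. (7.36): log sum_k c_k N(o, mu_k, Sigma_k) without underflow."""
    log_terms = np.array([np.log(c[k]) + log_gaussian(o, mu[k], sigma_diag[k])
                          for k in range(len(c))])
    kappa = -np.max(log_terms)  # shift playing the role of kappa in Eq. (7.37)
    return -kappa + np.log(np.sum(np.exp(log_terms + kappa)))
```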

7.2 Initial Estimates of HMM Parameters

Before the Baum-Welch reestimation can be applied for training it is important to get good initial parameters, so that the reestimation leads to the global maximum or as close as possible to it. An adequate choice for \pi and A is the uniform distribution when an ergodic model is used. For example, the ergodic model in Fig. 3.2a will have the following initial \pi and A for a four state model

\pi = \begin{bmatrix} 0.25 \\ 0.25 \\ 0.25 \\ 0.25 \end{bmatrix}_{4 \times 1}    (7.39)

A = \begin{bmatrix} 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \end{bmatrix}_{4 \times 4}    (7.40)

If a left-right model is used, \pi will have probability one for the first state and zero for the other states. For example, the left-right model in Fig. 3.2b has the following initial \pi and A


\pi = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}_{4 \times 1}    (7.41)

A = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0.5 & 0.5 \\ 0 & 0 & 0 & 1 \end{bmatrix}_{4 \times 4}    (7.42)

The parameters of the emission distribution need a good initial estimate for rapid and proper convergence. For a left-right model this can be done by uniform segmentation of every training sample into the states of the model. After segmentation, all observations belonging to state j are collected from all training samples. If an ergodic model is used and no a priori information about the segmentation is given, both the state segmentation and the mixture segmentation can be performed in an unsupervised manner. This unsupervised segmentation of the observations can be performed by clustering algorithms.


Chapter 8

Clustering Algorithms

Clustering can be viewed as an unsupervised procedure which classifies patterns into groups (clusters). The clustering problem appears in many contexts and disciplines; here it is approached from the statistical pattern recognition perspective. Many different algorithms, and variations of them, exist for the clustering problem, such as: k-means, the self-organizing feature map (SOFM), hierarchical algorithms, evolutionary algorithms (EAs), simulated annealing (SA), etc. In this chapter the k-means algorithm and the SOFM are explained, and the aim is to use them to find good initial values for the Gaussian mixtures, which are then used in the reestimation procedure described in the previous chapter.

The clustering of N data points is a method by which the data points are grouped into K clusters of smaller sets of similar data in an unsupervised manner. The data will here be vectors x_1, x_2, \ldots, x_N \in \mathbb{R}^D (the notation for the clustering algorithms does not follow the notation used in the description of the HMM). A clustering algorithm attempts to find K natural groups, or centroids, denoted \mu_1, \mu_2, \ldots, \mu_K, based on some similarity. To determine which cluster a point belongs to, a distance measure between points is needed, the distance really being a measure of similarity. The Euclidean distance measure is used here

d(x_i, x_j) = \sqrt{\sum_{d=1}^{D} \left(x_i(d) - x_j(d)\right)^2}    (8.1)

A commonly used criterion for clustering is the error criterion, which computes the squared distance of each point from its corresponding cluster center and takes the mean over all points in the data set (that is, the mean square error)


E_{mse} = \frac{1}{N}\sum_{n=1}^{N} d\Big(x_n,\; \mu_{\arg\min_k[d(x_n,\mu_k)]}\Big)^2    (8.2)

The output from the clustering algorithm is basically a statistical description of the cluster centroids, together with indices telling which cluster each data point belongs to.

8.1 The K-Means Algorithm

The k-means algorithm can be considered the workhorse of clustering algorithms. It is a popular clustering method that minimizes the clustering error criterion. However, the k-means algorithm is a local search procedure, and it is well known that it suffers from the serious drawback that its performance heavily depends on the initial starting conditions [15]. The k-means algorithm is as follows:

1. Initialization

Choose K vectors from the training vectors, x, at random. These vectors will be the centroids \mu_k.

2. Recursion

For each vector in the training set, n = 1, 2, \ldots, N, assign the vector to a cluster k by choosing the cluster closest to the vector

k_n^* = \arg\min_k \left[d(x_n, \mu_k)\right]

where d(x_n, \mu_k) is the distance measure. The Euclidean distance measure is used (in vector notation)

d(x_n, \mu_k) = \sqrt{(x_n - \mu_k)^T (x_n - \mu_k)}

3. Test

Recompute the centroids \mu_k by taking the mean of the vectors that belong to each centroid. This is done for every \mu_k. If no vectors belong to \mu_k, create a new \mu_k by choosing a random vector from x. If there has not been any change of the centroids since the previous step, go to step 4; otherwise go back to step 2.


4. Termination

From this clustering, the following parameters can be found:

c_k = number of vectors classified in cluster k divided by the total number of vectors
\mu_k = sample mean of the vectors classified in cluster k
\Sigma_k = sample covariance matrix of the vectors classified in cluster k
E_{mse} = clustering mean square error, see Eq. (8.2)
k_n^* = index that indicates which center each data vector belongs to
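The four steps above can be sketched in a few lines. This is a minimal assumed implementation; only the weights, means and assignments of the termination step are returned (the covariances and E_{mse} are omitted for brevity).

```python
import numpy as np

def kmeans(x, K, rng=None, max_iter=100):
    """Minimal k-means following the four steps above (assumed sketch)."""
    rng = np.random.default_rng(rng)
    N = x.shape[0]
    mu = x[rng.choice(N, K, replace=False)].copy()   # 1. Initialization
    for _ in range(max_iter):
        # 2. Recursion: assign each vector to its closest centroid
        d = np.linalg.norm(x[:, None, :] - mu[None, :, :], axis=2)
        k_star = d.argmin(axis=1)
        # 3. Test: recompute centroids; reinitialize empty clusters at random
        new_mu = mu.copy()
        for k in range(K):
            members = x[k_star == k]
            new_mu[k] = members.mean(axis=0) if len(members) else x[rng.integers(N)]
        if np.allclose(new_mu, mu):  # no change of the centroids: stop
            break
        mu = new_mu
    # 4. Termination: cluster weights, means and assignment indices
    c = np.bincount(k_star, minlength=K) / N
    return c, mu, k_star
```

The random reinitialization of an empty cluster mirrors the "create a new \mu_k" rule of step 3 above.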

To illustrate the problem of the initially selected centers, a simple clustering is made on two-dimensional data which is to be grouped into four clusters. Fig. 8.1 shows the result of the k-means algorithm when the initial centers were chosen properly.


Figure 8.1: K-means clustering into four clusters, with good initial centers.

In Fig. 8.2 the result of the k-means algorithm in a different run is shown. Here the initial centers (see the initialization step of the k-means algorithm) were chosen poorly.



Figure 8.2: K-means clustering into four clusters, with poorly chosen initial centers.

This k-means algorithm is one tool that can be used to find the initial parameters of the Gaussian mixture.

8.2 The SOFM Algorithm

The Self-Organizing Feature Map (SOFM), also known as the Kohonen map [16], is a neural network based on competitive learning and the arrangement of the neurons. The SOFM described here can be seen as an epoch-based variant of the SOFM described in [16]. The principle of the SOFM is to transform arbitrary-dimensional input signals into a one-, two- or possibly three-dimensional discrete map, in order to more easily visualize the map. Here the input signals will be represented by the input vectors x_1, x_2, \ldots, x_N \in \mathbb{R}^D, and the dimension to transform to is one or two. The map to transform to will be denoted v_1, v_2, \ldots, v_K \in \mathbb{R}^{D_v}. Two quantities are taken into consideration, the feature map v_k and the cluster means \mu_k, meaning that every cluster centroid will have a topology position in the feature map.

For the one-dimensional feature map the topology will here be a line, see Fig. 8.3.

Figure 8.3: Line topology, number of clusters K = 10.

The two-dimensional feature map will here create a square topology. That is, it will be a perfect square if the number of clusters has an integer square root; otherwise the largest possible square is created and the remaining neurons are added first at the top of the square and later to the right of the square, see Fig. 8.4 and Fig. 8.5. Other topologies can of course be applied, e.g. a circle; however, the square (or extended square) is here used as the basic topology for a two-dimensional feature map.

Figure 8.4: The two-dimensional feature map for clustering in K = 9, 10, 11, 12 clusters.

Given the selected feature map and initial cluster centroids, the SOFM can be divided into three processes [17]:

1. Competition process

The competition between the neurons for a specific data vector, x_n, will yield a winning neuron. The winning neuron index for the data vector x_n, denoted k_n^*, is found as

k_n^* = \arg\min_k \left[d(x_n, \mu_k)\right]    (8.3)

2. Cooperation process

The winning neuron in the map, v_{k_n^*}, locates the center of the cooperating neurons. The question that now arises is how a topological neighborhood


Figure 8.5: The two-dimensional feature map for clustering in K = 13, 14, 15, 16 clusters.

should be defined? Neurons that fire tend to excite neurons closer to the firing neuron more than neurons further away, in a neurobiological sense [17]. Given this fact it is desirable to have a neighborhood function h_{k_n^*,k} for the topological map. A natural choice of neighborhood function is the Gaussian function; hence h_{k_n^*,k} is defined as

h_{k_n^*,k} = e^{-\frac{d(v_{k_n^*},\, v_k)^2}{2\sigma^2}}    (8.4)

where d(v_{k_n^*}, v_k) is the Euclidean distance, see Eq. (8.1), between the winning neuron, v_{k_n^*}, and a specific neuron, v_k. The variance, \sigma^2, of the Gaussian neighborhood function indicates how much the neighboring neurons should be affected by the winning neuron.

One feature of the neighborhood function in the SOFM is that it shrinks with time. A common choice for the time dependence of \sigma, which makes the neighborhood function shrink, is exponential decay. Here a choice for the start variance, \sigma_0^2, and the end variance, \sigma_{end}^2, will be described. Consider the line topology, see Fig. 8.3; the maximum distance in the map will be denoted d_{v,max} and the minimum distance d_{v,min}. The neighborhood function value for the maximum distance in the map, h_{max}, should be chosen close to one, which implies that essentially all neurons will be affected initially

h_{max} = e^{-\frac{d_{v,max}^2}{2\sigma_0^2}}    (8.5)

This gives that the initial standard deviation can be found as


\sigma_0 = \sqrt{-\frac{d_{v,max}^2}{2 \log(h_{max})}}    (8.6)

In a similar manner the neighborhood function value for the minimum distance in the map, h_{min}, will give that almost only one neuron is affected. That is, h_{min} is chosen close to zero

h_{min} = e^{-\frac{d_{v,min}^2}{2\sigma_{end}^2}}    (8.7)

The final standard deviation can be found as

\sigma_{end} = \sqrt{-\frac{d_{v,min}^2}{2 \log(h_{min})}}    (8.8)

Given the initial standard deviation, \sigma_0, the final, \sigma_{end}, and the number of iterations, the exponential decay factor can be found and \sigma can be updated at every time step. In Fig. 8.6 the neighborhood function is shown for four time instances.
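A sketch of the resulting schedule, combining Eqs. (8.6) and (8.8) with an exponential interpolation between \sigma_0 and \sigma_{end}. The function name and the per-step geometric decay are assumptions made for this example.

```python
import numpy as np

def sigma_schedule(d_max, d_min, h_max, h_min, n_steps):
    """Exponentially decaying neighborhood width from sigma_0 (Eq. 8.6)
    down to sigma_end (Eq. 8.8) over n_steps time steps."""
    sigma0 = np.sqrt(-d_max ** 2 / (2.0 * np.log(h_max)))      # Eq. (8.6)
    sigma_end = np.sqrt(-d_min ** 2 / (2.0 * np.log(h_min)))   # Eq. (8.8)
    decay = (sigma_end / sigma0) ** (1.0 / (n_steps - 1))      # geometric factor
    return sigma0 * decay ** np.arange(n_steps)
```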

Figure 8.6: The neighborhood function, shrinking over time from \sigma_0 to \sigma_{end}, for a one-dimensional feature map. Here the winning neuron is neuron number five.


3. Adaptive process

From an initial disorder the SOFM algorithm will gradually converge to an organized representation of the input space, provided that the settings for the algorithm are selected properly. The update equation for the SOFM algorithm contains the neighborhood function and the learning rate \alpha. The update equation for one input vector x_n can be found as

\mu_k = \mu_k + \alpha\, h_{k_n^*,k}\,(x_n - \mu_k), \quad 1 \le k \le K    (8.9)

As mentioned before, the neighborhood function can advantageously vary with time, and this is also the case for the learning rate. By choosing different settings for the learning rate and the neighborhood function, the adaptation can be divided into two phases, the ordering phase and the convergence phase [17]:

(a) Ordering phase

The ordering phase is the initial phase of the adaptive process. Here the topological ordering of the cluster means takes place. The exact number of iterations for this phase is hard to determine because the learning is a stochastic process; some suggestions for the number of iterations are given in [16, 17]. Here the training is suggested to be performed as epoch training, that is, the N input vectors are used N_{oe} (oe - ordering epoch) times in the ordering phase. In every epoch, the input vectors are chosen in a permuted order to avoid running the algorithm on the same sequence every epoch.

However, the number of iterations depends only on the number of clusters K and not on the number of input vectors [16]. Hence the number of epochs for the ordering phase can be determined by

N_{oe} = \mathrm{fix}\left(\frac{K \cdot M_o}{N}\right) + 1    (8.10)

where M_o is a factor multiplied with the number of clusters to get the total number of iterations. When the number of epochs is decided it is desired, in the ordering phase, to have a shrinking neighborhood function and also a shrinking learning rate. The variance of the neighborhood function shrinks in an exponential manner, as in the cooperation process. The learning rate decreases in an exponential manner as well, starting at \alpha_0 and ending at \alpha_{end}.

(b) Convergence phase

After the ordering phase, the learning rate is fixed at \alpha_{end}, which should have a value that gives small adjustments of the map. In the


Variable   Value
M_o        1000
M_c        2000
\alpha_0   0.5
\alpha_end 0.01
h_0        0.9
h_end      0.1

Table 8.1: Example of initial settings for a SOFM algorithm.

neighborhood function only the nearest neighbors of the winning neuron should be affected; that is, the variance corresponding to h_{min} is used. The number of epochs for the convergence phase, N_{ce} (ce - convergence epoch), can be found in a similar way as in the ordering phase

N_{ce} = \mathrm{fix}\left(\frac{K \cdot M_c}{N}\right) + 1    (8.11)

where M_c is a factor multiplied with the number of clusters K, which results in the desired number of iterations N_{ce}.

Given the desired number of clusters and the feature map, from which d_{v,max} and d_{v,min} can be found, the initial settings for the algorithm are proposed according to Tab. 8.1. Note that these are not the optimal settings for all situations, since the learning is a stochastic process [16]. Also note that \sigma_0, \sigma_{end}, N_{oe} and N_{ce} can be found from the variables in the table, by applying Eqs. (8.6), (8.8), (8.10) and (8.11).

Here follows the SOFM algorithm:

1. Initialization

Choose K vectors from x at random. These vectors will be the initial centroids \mu_k, which the algorithm will subsequently refine. Choose the desired feature map dimension, D_v (typically one or two). Create a feature map v_k \in \mathbb{R}^{D_v} and find d_{v,max} and d_{v,min}. Set the start and end values for the learning rate, \alpha_0 and \alpha_{end}, and the start and end values h_{max} and h_{min}. Set the iteration controls M_o and M_c. Apply Eqs. (8.6), (8.8), (8.10) and (8.11) to find \sigma_0, \sigma_{end}, N_{oe} and N_{ce}.

2. Ordering phase

Iterate the following N_{oe} times:

(a) Choose a permuted order of the N data points


(b) Iterate N times (where n is an index that follows the permuted order):

i. Find the winning neuron according to:

k_n^* = \arg\min_k \left[d(x_n, \mu_k)\right]

ii. Update the cluster means according to:

\mu_k = \mu_k + \alpha\, h_{k_n^*,k}\,(x_n - \mu_k), \quad 1 \le k \le K

iii. Decrease \alpha and \sigma in an exponential manner

3. Convergence phase

Iterate the following N_{ce} times:

(a) Choose a permuted order of the N data points

(b) Iterate the following N times (where n is an index that follows the permuted order):

i. Find the winning neuron according to:

k_n^* = \arg\min_k \left[d(x_n, \mu_k)\right]

ii. Update the cluster means according to:

\mu_k = \mu_k + \alpha\, h_{k_n^*,k}\,(x_n - \mu_k), \quad 1 \le k \le K

4. Termination

From the clustering, the following parameters can be found:

c_k = number of vectors classified in cluster k divided by the total number of vectors
\mu_k = sample mean of the vectors classified in cluster k
\Sigma_k = sample covariance matrix of the vectors classified in cluster k
E_{mse} = the mean square error for the clustering, see Eq. (8.2)
k_n^* = index that indicates which center each data vector belongs to
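One inner epoch of the algorithm (steps 2(b) and 3(b)) can be sketched as follows; the exponential decay of \alpha and \sigma between steps, and the bookkeeping of the two phases, are left out. The function name and array shapes are assumptions made for this example.

```python
import numpy as np

def sofm_epoch(x, mu, v, alpha, sigma, rng):
    """One permuted pass over the data with the SOFM update of Eq. (8.9).

    x : (N, D) data, mu : (K, D) cluster means, v : (K, Dv) map positions.
    """
    for n in rng.permutation(len(x)):
        # Competition (Eq. 8.3): find the winning neuron for x_n
        k_star = np.argmin(np.linalg.norm(x[n] - mu, axis=1))
        # Cooperation (Eq. 8.4): Gaussian neighborhood around the winner
        h = np.exp(-np.sum((v - v[k_star]) ** 2, axis=1) / (2.0 * sigma ** 2))
        # Adaptation (Eq. 8.9): move all means toward x_n, weighted by h
        mu += alpha * h[:, None] * (x[n] - mu)
    return mu
```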

For an example of how the SOFM can perform, consider the input distribution in Fig. 8.7 (~9000 data points).

Applying the one-dimensional feature map and clustering into 2500 clusters yields the result in Fig. 8.8.

Using a two-dimensional feature map and clustering into 50 × 50 clusters yields the result in Fig. 8.9.



Figure 8.7: Distribution of the data points to cluster.


Figure 8.8: Result with a one-dimensional feature map. The settings used are found according to Tab. 8.1.



Figure 8.9: Result with a two-dimensional feature map. The settings used are found according to Tab. 8.1.


Chapter 9

Training and Recognition using Hidden Markov Models

In this chapter the use of the HMM in a pattern recognition task is addressed. First the training aspects of the models are discussed, followed by a description of how to use the resulting models for a recognition task. The training has already been explained to some extent in problem three - parameter estimation; however, an overview of the more complete training of a HMM is given here. Continuous mixture densities and diagonal covariance matrices are assumed, as it has been found to be more convenient and sometimes preferable to use diagonal covariances and several mixtures, rather than a few mixtures with full covariances [2]. The reason is that it is hard to get a reliable estimate of the off-diagonal values in the covariance matrix, which can lead to a matrix inversion problem when the Gaussian pdf is calculated.

9.1 Training of a Hidden Markov Model

Given the training samples a suitable model should be chosen, see Fig. 9.1.

The first decision in HMM modelling is the type of model, and here two typical models are considered: the left-right model and the ergodic model. The ergodic model is the most general because all states are connected. Left-right models are commonly used for patterns where the features follow in a successive order.

The second thing to decide is the number of states N to use. The number of states is usually connected to a corresponding real phenomenon in the patterns [2]. The number of mixtures (or codewords when discrete densities are used), M, has to be decided as well.

The next step is to divide the observations into states and mixtures. First the observation vectors are segmented into the model states. If the information about which state each vector belongs to is given (transcribed observations), no


Figure 9.1: Choices to be made before reestimation. (Flowchart: training samples O = [O^{(1)}, O^{(2)}, \ldots, O^{(R)}] → select model type (ergodic or left-right) → select number of states, N → select number of mixtures, M → segmentation into states → segmentation into mixtures → initial model, \lambda_0.)

unsupervised method is needed; otherwise some unsupervised method is used (e.g. k-means or SOFM). If an ergodic model is used and an unsupervised method is applied, all observations are collected and clustered into the desired number of states. In a left-right model uniform segmentation into states can preferably be used, and every observation sequence is divided into states uniformly as in Fig. 9.2. The vectors can be divided into the state regions according to

1 \;\ldots\; T_r/N \;\;\ldots\;\; 2 \cdot T_r/N \;\;\ldots\;\; 3 \cdot T_r/N \;\;\ldots\;\; N \cdot T_r/N    (9.1)

which implies that observations 1 \ldots T_r/N will belong to state one, observations T_r/N + 1 \ldots 2 \cdot T_r/N to state two, etc.
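A sketch of this uniform segmentation follows. The rounding of the boundaries when T_r is not divisible by N is an assumption made for this example; Eq. (9.1) itself only states the T_r/N spacing.

```python
import numpy as np

def uniform_state_segmentation(O, N):
    """Split a (T_r, D) observation sequence uniformly into N states (Eq. 9.1).

    Returns a list of N arrays, one per state, in left-right order.
    """
    Tr = O.shape[0]
    bounds = np.round(np.linspace(0, Tr, N + 1)).astype(int)
    return [O[bounds[i]:bounds[i + 1]] for i in range(N)]
```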

Given the state segmentation, the mixture segmentation takes place within each state. Vectors that belong to a state are collected from all observation sequences and an unsupervised segmentation algorithm is applied to these vectors.

From the initial decisions and segmentations a good initial model, \lambda_0, is found. This model is then used for reestimation, and given this initial model \lambda_0 the training continues in an iterative manner, see Fig. 9.3.

The detailed actions for the steps in Fig. 9.3 can be found below:

• Check model
The model is checked for numerical stability according to Eqs. (7.28) and (7.29).



Figure 9.2: Uniform segmentation into states for a left-right model.

Figure 9.3: Training of a HMM. (Flowchart: initial model \lambda_0 → check model → Baum-Welch reestimation → check model → convergence? If no, repeat the reestimation; if yes, the trained model \lambda is obtained.)


• Baum-Welch reestimation
The model is reestimated according to Eqs. (7.23), (7.24), (7.25), (7.26) and (7.27). For the reestimation process the scaled forward variable \alpha_t(i) and the scaled backward variable \beta_t(i) are used. Given these variables, \gamma_t(i), \xi_t(i,j) and \gamma_t(j,k) can be found according to Eqs. (7.10), (7.11) and (7.17), respectively. Note that Eqs. (7.33) and (7.35) should be used to avoid numerical problems in Eq. (7.17).

• Convergence
The reestimation is ended when the desired maximum number of iterations, denoted MaxIter, is reached, or when the learning curve reaches a certain limit. The learning curve is defined as

J(n) = \left|\frac{P_{tot}(n) - P_{tot}(n-1)}{P_{tot}(n)}\right|    (9.2)

where n is the iteration index and P_{tot}(n) is the total probability for all samples at iteration n, defined as

P_{tot}(n) = \sum_{r=1}^{R} P\big(O^{(r)}, q^{*(r)} \,\big|\, \lambda_n\big)    (9.3)

where P(O^{(r)}, q^{*(r)}|\lambda_n) can be found by the fast alternative Viterbi algorithm (in this case only the probability is needed, not the actual state sequence). Since Eq. (7.22) is going to be maximized, it is equivalent to maximize the logarithm of Eq. (7.22). This gives that the maximization can be reformulated as

\log \prod_{r=1}^{R} P\big(O^{(r)}|\lambda\big) = \sum_{r=1}^{R} \log P\big(O^{(r)}|\lambda\big)    (9.4)

where P(O^{(r)}|\lambda) can be found by the Viterbi algorithm or the forward algorithm. However, the fast alternative Viterbi algorithm can advantageously be used here: its output is already in the log domain and it is faster than the forward algorithm (more about this in the next section).

Note that the learning curve J(n) can be seen as a normalized derivative of P_{tot}(n), and when J(n) reaches a small value the iteration scheme is ended. This value will here be denoted \varepsilon_{stop} and is in a typical application in the range 0 - 10^{-4}. Also a minimum number of iterations, denoted MinIter, can be set, to make sure that a minimum number of iterations is made.
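The stopping rule combining J(n), MinIter and MaxIter can be sketched as below. The function name and the list-based bookkeeping of P_tot are assumptions made for this example.

```python
def should_stop(P_tot, n, eps_stop=5e-5, min_iter=3, max_iter=50):
    """Stopping rule built on the learning curve J(n) of Eq. (9.2).

    P_tot : per-iteration total scores, P_tot[n] for iteration n.
    """
    if n < min_iter:        # always perform MinIter iterations
        return False
    if n >= max_iter:       # never exceed MaxIter iterations
        return True
    J = abs((P_tot[n] - P_tot[n - 1]) / P_tot[n])  # Eq. (9.2)
    return J <= eps_stop
```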


Variable   Description                                          Typical value(s)
N          Number of states                                     -
M          Number of mixtures                                   -
\delta_max Largest number due to computer precision             1.7977 \cdot 10^{308}
\delta_min Smallest number due to computer precision            2.2251 \cdot 10^{-308}
MinIter    Minimum number of reestimations                      -
MaxIter    Maximum number of reestimations                      -
\varepsilon_stop  Stopping criterion for reestimation           0 - 10^{-4}, 5 \cdot 10^{-5}
\varepsilon_c     Numerical floor on the mixture weights        10^{-10} - 10^{-3}, 10^{-5}
\varepsilon_\Sigma  Numerical floor on the diagonal of the covariances  10^{-10} - 10^{-3}, 10^{-5}, \delta_min^{1/D}

Table 9.1: Essential variables for HMM training, double precision is considered.

Various variables and settings are needed to train a HMM; see the summary in Tab. 9.1. It should be emphasized that these settings come from subjective empirical experiments in speech and face recognition using double precision.

9.2 Recognition using Hidden Markov Models

Given V trained models, the task is to choose the model with the highest probability for an unknown observation. The probabilities that can be used are based on the scaled forward algorithm, the alternative Viterbi algorithm and the fast alternative Viterbi algorithm. However, the alternative Viterbi algorithm is faster than the scaled forward algorithm (a maximum operation is usually faster than a sum), and further, the fast alternative Viterbi algorithm is faster than the alternative Viterbi algorithm (because the actual state sequence is not needed). This means that the fast alternative Viterbi algorithm could preferably be used. Note that Eqs. (7.36), (7.37) and (7.38) can advantageously be used for better numerical stability in the algorithm.

The fast alternative Viterbi algorithm can now be used to find the probability for each model given an unknown observation O. The winning model is found as

v∗ = arg max_{1≤v≤V} [ P(O, q∗|λv) ]   (9.5)

These are the steps in the recognition, see Fig. 9.4.
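The selection step of Eq. (9.5) can be sketched as follows. The function and the scores are illustrative: `log_probs` is assumed to hold one fast alternative Viterbi score per model, kept in the log domain as Eqs. (7.36)-(7.38) suggest; since the logarithm is monotone, the argmax is unchanged.

```python
def recognize(log_probs):
    """Return v* = argmax_v P(O, q*|lambda_v), i.e. Eq. (9.5).

    One (log-domain) model score per trained model is expected.
    """
    return max(range(len(log_probs)), key=lambda v: log_probs[v])

# Hypothetical scores for V = 3 models; model index 1 wins here.
winner = recognize([-1204.7, -980.2, -1311.5])
```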

In Fig. 9.4 there is no control over what happens if the observation sequence given to the system is an unseen entity. That is, if the system is presented with some class it is not aware of (trained on), the recognition will still assign this unseen entity to the most likely model.


[Figure: the observation sequence O feeds probability computations for λ1, λ2, . . . , λV ; the scores P(O, q∗|λv) enter a select-maximum block yielding v∗ = arg max_{1≤v≤V} [P(O, q∗|λv)].]

Figure 9.4: Recognition using HMMs.

This implies that acceptance logic could be applied in order to avoid acceptance of unknown entities, see Fig. 9.5. This acceptance logic could use some threshold acting on the probabilities of the models. Each model is then accepted to further compete with the remaining models according to

if P(O, q∗|λv) ≥ Pv : accept
if P(O, q∗|λv) < Pv : reject
(9.6)

where Pv is the threshold for model v. The choice of Pv can be made experimentally, or possibly by creating a function of the R training probabilities P(O^(r), q∗^(r)|λv).

Another approach could be to use some information about the relationship between the found probabilities. Consider the transformation of the output probabilities

p = pv = P(O, q∗|λv) − min_v [ P(O, q∗|λv) ] + 1   (9.7)

which yields a vector p. The least probable model has value one, and more likely models will have values greater than one. Furthermore, consider the normalized vector of the probabilities

p̄ = p̄v = pv / ( Σ_{i=1}^{V} pi ) ,   v = 1, 2, . . . , V   (9.8)

and also consider the uniform probability vector

p̃ = p̃v = 1/V ,   v = 1, 2, . . . , V   (9.9)


Clearly, if p̄ and p̃ are similar, the decision of the winning model will become hard. On the other hand, if the difference between the probability vectors is larger, then p̄ has some high values and some low, which in turn makes the decision easier (less confusion)¹.

The obvious question is now how to create a similarity measure. One way is to use the Jensen difference [18, 19]. The Jensen difference is based on the Shannon entropy

HV(p) = − Σ_{i=1}^{V} pi log(pi)   (9.10)

and is defined as²

JV(p̄, p̃) = HV( (p̄ + p̃)/2 ) − (1/2) ( HV(p̄) + HV(p̃) )   (9.11)

The Jensen difference is always non-negative and becomes zero only if p̄ = p̃ [19]. The decision for accepting a model can be made by setting a threshold on the Jensen difference, that is

if JV(p̄, p̃) ≥ ϕ : accept
if JV(p̄, p̃) < ϕ : reject
(9.12)

The threshold ϕ is typically found by experiments, since the Jensen difference is a measure highly dependent on the various decisions made for the pattern recognition task.
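The acceptance chain of Eqs. (9.7)-(9.12) can be sketched numerically as below. The function names are illustrative; the natural logarithm is used for the entropy, and the model scores fed to `accept` may be plain or log-domain values, since the shift in Eq. (9.7) only fixes the minimum at one in either case.

```python
import math

def shannon_entropy(p):
    # H_V(p) = -sum_i p_i log(p_i), Eq. (9.10); 0*log(0) is taken as 0.
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def jensen_difference(p, q):
    # J_V(p, q) = H_V((p+q)/2) - (H_V(p) + H_V(q))/2, Eq. (9.11).
    mid = [(a + b) / 2.0 for a, b in zip(p, q)]
    return shannon_entropy(mid) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

def accept(model_scores, phi):
    """Acceptance test of Eq. (9.12) on shifted, normalized model scores."""
    shifted = [s - min(model_scores) + 1.0 for s in model_scores]   # Eq. (9.7)
    total = sum(shifted)
    p_bar = [s / total for s in shifted]                            # Eq. (9.8)
    p_uni = [1.0 / len(model_scores)] * len(model_scores)           # Eq. (9.9)
    return jensen_difference(p_bar, p_uni) >= phi
```

A flat score vector normalizes to the uniform vector, gives a Jensen difference of zero and is rejected; a strongly peaked one is accepted.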

Also, information from the observation sequence can be used directly in the acceptance logic. For example, in some pattern recognition tasks the length of the observation sequence, T, could be used to reject observation sequences of unnatural length.

¹That is, if V > 1; otherwise p̄ = p̃.
²The learning rate defined in Eq. (9.2) is not the same as the Jensen difference. The Jensen difference has a subindex V and two input arguments, whereas the learning rate has only one argument and no subindex.


[Figure: as in Fig. 9.4, the observation sequence O feeds probability computations for λ1, . . . , λV and a select-maximum block yielding v∗; an acceptance-logic block then outputs an accept/reject decision {0, 1}.]

Figure 9.5: Recognition with an acceptance decision included using HMMs.


Chapter 10

Summary and Conclusions

The theory from the Markov model leading into the Hidden Markov Model (HMM) has been investigated with numerical issues on a floating point processor in consideration. Theoretical algorithms, with the emphasis on the use of Gaussian mixtures, such as the forward, backward and Viterbi algorithms, have been extended to more applied algorithms such as the scaled forward, the scaled backward, the alternative Viterbi and the fast alternative Viterbi algorithms. Numerical issues concerning the Baum-Welch reestimation have been investigated and suggestions for numerical stability have been presented.

Unsupervised clustering using k-means and SOFM has been explored in order to obtain mean vectors and covariance matrices for an initial model when no transcription of the data exists.

The report summarizes the use of the described algorithms for a completepattern recognition task using HMMs.


Appendix A

Derivation of Reestimation Formulas

The reestimation Eqs. (7.23), (7.24), (7.25), (7.26) and (7.27) are derived. Consider the observation sequences O = [O^(1), O^(2), . . . , O^(R)], where

O^(r) = ( o_1^(r), o_2^(r), . . . , o_{Tr}^(r) )

is the rth observation sequence and Tr is the rth observation sequence length, just as in Eq. (7.21). The reestimation formulas are found by assuming independence between the observation sequences. However, let us start by avoiding this assumption and derive the independence equations from a more general description. The general description of the dependencies for these observation sequences can be found as [20]

P(O|λ) = P(O^(1)|λ) · P(O^(2)|O^(1), λ) · . . . · P(O^(R)|O^(R−1), O^(R−2), . . . , O^(1), λ)
P(O|λ) = P(O^(2)|λ) · P(O^(3)|O^(2), λ) · . . . · P(O^(1)|O^(R), O^(R−1), . . . , O^(2), λ)
...
P(O|λ) = P(O^(R)|λ) · P(O^(1)|O^(R), λ) · . . . · P(O^(R−1)|O^(R−2), O^(R−3), . . . , O^(1), λ)
(A.1)

Given the equations in Eq. (A.1), the multiple observation probability given the model λ can be found as

P(O|λ) = Σ_{r=1}^{R} wr P(O^(r)|λ)   (A.2)

where wr are the combinatorial weights, defined as (general case)

w1 = (1/R) P(O^(2)|O^(1), λ) · . . . · P(O^(R)|O^(R−1), O^(R−2), . . . , O^(1), λ)
w2 = (1/R) P(O^(3)|O^(2), λ) · . . . · P(O^(1)|O^(R), O^(R−1), . . . , O^(2), λ)
...
wR = (1/R) P(O^(1)|O^(R), λ) · . . . · P(O^(R−1)|O^(R−2), O^(R−3), . . . , O^(1), λ)
(A.3)


The aim is now to maximize P(O^(r)|λ) (the extension to P(O|λ) will follow). The first approach is to use maximum likelihood (ML) estimation for this problem. However, this cannot be done due to the lack of information (or hidden information), i.e. the state sequence. This is why the expectation maximization (EM) algorithm can advantageously be applied [14, 12, 13, 21].

Consider one of the observation sequences, O^(r); the incomplete-data likelihood function for this sequence is P(O^(r)|λ), whereas the complete-data likelihood function is P(O^(r), q|λ). The auxiliary Baum function for sample r is then [14]

Qr(λ′, λ) = Σ_{all q} P(O^(r), q|λ) log P(O^(r), q|λ′)   (A.4)

where λ is the current estimate of the parameters for the HMM, and λ′ denotes the newly found parameters which will improve the probability P(O|λ). This auxiliary Baum function has the following property (which can be seen by applying Jensen's inequality [14])

Qr(λ′, λ) ≥ Qr(λ, λ)  →  P(O^(r)|λ′) ≥ P(O^(r)|λ)   (A.5)

By considering Eqs. (5.3) and (A.4) the following can be concluded

P(O^(r), q|λ′) = π′_{q1} b′_{q1}(o_1^(r)) Π_{t=2}^{Tr} a′_{q_{t−1} q_t} b′_{q_t}(o_t^(r))   (A.6)

and

log P(O^(r), q|λ′) = log( π′_{q1} b′_{q1}(o_1^(r)) Π_{t=2}^{Tr} a′_{q_{t−1} q_t} b′_{q_t}(o_t^(r)) )
= log(π′_{q1}) + Σ_{t=2}^{Tr} log(a′_{q_{t−1} q_t}) + Σ_{t=1}^{Tr} log(b′_{q_t}(o_t^(r)))
(A.7)

Hence Qr can be rewritten as

Qr(λ′, λ) = Σ_{all q} ( log(π′_{q1}) ) P(O^(r), q|λ)
+ Σ_{all q} ( Σ_{t=2}^{Tr} log(a′_{q_{t−1} q_t}) ) P(O^(r), q|λ)
+ Σ_{all q} ( Σ_{t=1}^{Tr} log(b′_{q_t}(o_t^(r))) ) P(O^(r), q|λ)
(A.8)

Of course the aim is to maximize P(O|λ), and the extended auxiliary function can thereby be found as

Q(λ′, λ) = Σ_{r=1}^{R} wr Qr(λ′, λ)   (A.9)


and, similarly as for Qr, the following rule holds [20]

Q(λ′, λ) ≥ Q(λ, λ)  →  P(O|λ′) ≥ P(O|λ)   (A.10)

By considering Eq. (A.8), Eq. (A.9) can be rewritten as

Q(λ′, λ) = Σ_{r=1}^{R} wr Σ_{all q} ( log(π′_{q1}) ) P(O^(r), q|λ)                                  [ = Qπi(λ′, λ) ]
+ Σ_{r=1}^{R} wr Σ_{all q} ( Σ_{t=2}^{Tr} log(a′_{q_{t−1} q_t}) ) P(O^(r), q|λ)                    [ = Qaij(λ′, λ) ]
+ Σ_{r=1}^{R} wr Σ_{all q} ( Σ_{t=1}^{Tr} log(b′_{q_t}(o_t^(r))) ) P(O^(r), q|λ)                   [ = Qbj(λ′, λ) ]
= Qπi(λ′, λ) + Qaij(λ′, λ) + Qbj(λ′, λ)
(A.11)

Since the parameters which are to be optimized are now independently divided into Qπi(λ′, λ), Qaij(λ′, λ) and Qbj(λ′, λ), the optimization can be performed on each term individually.

A.1 Optimization of Qπi

Here the optimization of Qπi(λ′, λ) is performed by using Lagrange multipliers. The function to optimize is

Qπi(λ′, λ) = Σ_{r=1}^{R} wr Σ_{all q} ( log(π′_{q1}) ) P(O^(r), q|λ)
= Σ_{r=1}^{R} wr Σ_{i=1}^{N} ( log(π′_i) ) P(O^(r), q1 = i|λ)
(A.12)

since selecting all possible state sequences (all q) here really amounts to the repeated selection of q1. The standard stochastic constraint for this optimization is

Σ_{i=1}^{N} π′_i = 1   (A.13)

To determine the optimal parameters, Lagrange multipliers are applied. This gives the two partial derivatives


∂/∂π′_i [ Σ_{r=1}^{R} wr Σ_{i=1}^{N} ( log(π′_i) ) P(O^(r), q1 = i|λ) + ρ ( 1 − Σ_{i=1}^{N} π′_i ) ] = 0
⇔ Σ_{r=1}^{R} wr (1/π′_i) P(O^(r), q1 = i|λ) − ρ = 0
(A.14)

and

∂/∂ρ [ Σ_{r=1}^{R} wr Σ_{i=1}^{N} ( log(π′_i) ) P(O^(r), q1 = i|λ) + ρ ( 1 − Σ_{i=1}^{N} π′_i ) ] = 0
⇔ Σ_{i=1}^{N} π′_i = 1
(A.15)

where ρ is the Lagrange multiplier. From Eq. (A.14), ρ can be found as

ρ = (1/π′_i) Σ_{r=1}^{R} wr P(O^(r), q1 = i|λ)   (A.16)

From the same equation, π′_i can be found as

π′_i = (1/ρ) Σ_{r=1}^{R} wr P(O^(r), q1 = i|λ)   (A.17)

By inserting Eq. (A.17) into Eq. (A.15), another expression for ρ becomes evident

Σ_{i=1}^{N} (1/ρ) Σ_{r=1}^{R} wr P(O^(r), q1 = i|λ) = 1
⇔ ρ = Σ_{i=1}^{N} Σ_{r=1}^{R} wr P(O^(r), q1 = i|λ)
(A.18)

Now two expressions for ρ are found, that is Eqs. (A.16) and (A.18). Since only Eq. (A.16) contains π′_i, the equality between them can be used to extract π′_i

(1/π′_i) Σ_{r=1}^{R} wr P(O^(r), q1 = i|λ) = Σ_{i=1}^{N} Σ_{r=1}^{R} wr P(O^(r), q1 = i|λ)
⇔ π′_i = [ Σ_{r=1}^{R} wr P(O^(r), q1 = i|λ) ] / [ Σ_{i=1}^{N} Σ_{r=1}^{R} wr P(O^(r), q1 = i|λ) ]
(A.19)

To further express the reestimation formula for πi, consider the equality given by Bayes' rule

P(O^(r), q1 = i|λ) = P(O^(r)|λ) P(q1 = i|O^(r), λ) = P(O^(r)|λ) γ_1^(r)(i)   (A.20)


Given Eq. (A.20) and Eq. (A.19), the reestimation formula can be found as

π′_i = [ Σ_{r=1}^{R} wr P(O^(r)|λ) γ_1^(r)(i) ] / [ Σ_{i=1}^{N} Σ_{r=1}^{R} wr P(O^(r)|λ) γ_1^(r)(i) ]
= [ Σ_{r=1}^{R} wr P(O^(r)|λ) γ_1^(r)(i) ] / [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{i=1}^{N} γ_1^(r)(i) ]
= [ Σ_{r=1}^{R} wr P(O^(r)|λ) γ_1^(r)(i) ] / [ Σ_{r=1}^{R} wr P(O^(r)|λ) ]
(A.21)

A.2 Optimization of Qaij

Here the optimization of Qaij(λ′, λ) is performed by using Lagrange multipliers. The function to optimize is

Qaij(λ′, λ) = Σ_{r=1}^{R} wr Σ_{all q} ( Σ_{t=2}^{Tr} log(a′_{q_{t−1} q_t}) ) P(O^(r), q|λ)
= Σ_{r=1}^{R} wr Σ_{i=1}^{N} Σ_{j=1}^{N} ( Σ_{t=2}^{Tr} log(a′_ij) ) P(O^(r), q_{t−1} = i, q_t = j|λ)
(A.22)

since selecting all state sequences (all q) here means all the transitions from i to j, weighted with the corresponding probability at each time t. The stochastic constraint for this function is

Σ_{j=1}^{N} a′_ij = 1   (A.23)

To determine the optimal parameters, Lagrange multipliers are applied. This gives the two partial derivatives

∂/∂a′_ij [ Σ_{r=1}^{R} wr Σ_{i=1}^{N} Σ_{j=1}^{N} ( Σ_{t=2}^{Tr} log(a′_ij) ) P(O^(r), q_{t−1} = i, q_t = j|λ) + Σ_{i=1}^{N} ρi ( 1 − Σ_{j=1}^{N} a′_ij ) ] = 0
⇔ Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} (1/a′_ij) P(O^(r), q_{t−1} = i, q_t = j|λ) − ρi = 0
(A.24)


and

∂/∂ρi [ Σ_{r=1}^{R} wr Σ_{i=1}^{N} Σ_{j=1}^{N} ( Σ_{t=2}^{Tr} log(a′_ij) ) P(O^(r), q_{t−1} = i, q_t = j|λ) + Σ_{i=1}^{N} ρi ( 1 − Σ_{j=1}^{N} a′_ij ) ] = 0
⇔ Σ_{j=1}^{N} a′_ij = 1
(A.25)

where ρi are the Lagrange multipliers. From Eq. (A.24), ρi can be found as

ρi = Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} (1/a′_ij) P(O^(r), q_{t−1} = i, q_t = j|λ)   (A.26)

or, from the same equation, a′_ij can be found as

a′_ij = (1/ρi) Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ)   (A.27)

By inserting Eq. (A.27) into Eq. (A.25), another expression for ρi becomes evident

Σ_{j=1}^{N} (1/ρi) Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ) = 1
⇔ ρi = Σ_{j=1}^{N} Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ)
(A.28)

Now two expressions for ρi are found, that is Eqs. (A.26) and (A.28). Since only Eq. (A.26) contains a′_ij, the equality between them can be used to extract a′_ij

(1/a′_ij) Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ) = Σ_{j=1}^{N} Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ)
⇔ a′_ij = [ Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ) ] / [ Σ_{j=1}^{N} Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ) ]
⇔ a′_ij = [ Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ) ] / [ Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} Σ_{j=1}^{N} P(O^(r), q_{t−1} = i, q_t = j|λ) ]
⇔ a′_ij = [ Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i, q_t = j|λ) ] / [ Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r), q_{t−1} = i|λ) ]
(A.29)

To further express the reestimation formula for a′_ij, consider the equalities given by


P(O^(r), q_{t−1} = i, q_t = j|λ) = P(O^(r)|λ) P(q_{t−1} = i, q_t = j|O^(r), λ) = P(O^(r)|λ) ξ_{t−1}(i, j)   (A.30)

and

P(O^(r), q_{t−1} = i|λ) = P(O^(r)|λ) P(q_{t−1} = i|O^(r), λ) = P(O^(r)|λ) γ_{t−1}(i)   (A.31)

Given these equations and Eq. (A.29), the reestimation formula can be found as

a′_ij = [ Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r)|λ) ξ_{t−1}(i, j) ] / [ Σ_{r=1}^{R} wr Σ_{t=2}^{Tr} P(O^(r)|λ) γ_{t−1}(i) ]
= [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{t=1}^{Tr−1} ξ_t(i, j) ] / [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{t=1}^{Tr−1} γ_t(i) ]
(A.32)

A.3 Optimization of Qbj

Here the optimization of Qbj(λ′, λ) is performed by using Lagrange multipliers. The function to optimize is

Qbj(λ′, λ) = Σ_{r=1}^{R} wr Σ_{all q} ( Σ_{t=1}^{Tr} log(b′_{q_t}(o_t^(r))) ) P(O^(r), q|λ)
= Σ_{r=1}^{R} wr Σ_{j=1}^{N} ( Σ_{t=1}^{Tr} log(b′_j(o_t^(r))) ) P(O^(r), q_t = j|λ)
(A.33)

Selecting all state sequences (all q) in Eq. (A.33) is the same as all emissions from state j, weighted with the corresponding probability at each time t. This is the general description for any distribution; here, however, the Gaussian mixture is considered, that is

b′_j(o_t^(r)) = Σ_{k=1}^{M} c′_jk (1 / ((2π)^{D/2} |Σ′_jk|^{1/2})) exp( −(1/2) (o_t^(r) − µ′_jk)^T Σ′_jk^{−1} (o_t^(r) − µ′_jk) )   (A.34)
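As a small numeric sketch of evaluating Eq. (A.34), the function below restricts itself to diagonal covariances (a common CDHMM choice), so that the determinant and inverse reduce to per-dimension products; the function and argument names are illustrative, not from the report.

```python
import math

def gmm_density(o, weights, means, variances):
    """Evaluate the Gaussian mixture state density of Eq. (A.34).

    Diagonal covariances are assumed: `o` and each `means[k]`,
    `variances[k]` are length-D lists, `weights` is length M.
    The per-component Gaussian is computed in the log domain first,
    in the spirit of the report's numerical-stability advice.
    """
    D = len(o)
    total = 0.0
    for c, mu, var in zip(weights, means, variances):
        log_det = sum(math.log(v) for v in var)                       # log |Sigma_jk|
        quad = sum((x - m) ** 2 / v for x, m, v in zip(o, mu, var))   # Mahalanobis term
        log_g = -0.5 * (D * math.log(2 * math.pi) + log_det + quad)
        total += c * math.exp(log_g)
    return total

# Single zero-mean, unit-variance component in D = 1:
# the density at o = 0 is 1/sqrt(2*pi).
val = gmm_density([0.0], [1.0], [[0.0]], [[1.0]])
```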

One way to continue is to insert Eq. (A.34) into Eq. (A.33). However, this will be difficult to optimize because of the logarithm of the sum. The problem is that by using a Gaussian mixture we introduce a new piece of missing (or hidden) information, namely the mixture components. There are then two hidden parameters, the state sequence and the mixture components. This implies that Eq. (A.4) can be rewritten for the case of Gaussian mixtures


Q_{r,bj}(λ′, λ) = Σ_{all q} Σ_{all m} P(O^(r), q, m|λ) log P(O^(r), q, m|λ′)
= Σ_{all q} Σ_{all m} P(O^(r), q, m|λ) Σ_{t=1}^{Tr} log b′_{q_t}(o_t^(r), m_{q_t t})
= Σ_{j=1}^{N} Σ_{k=1}^{M} Σ_{t=1}^{Tr} log( c′_jk exp( −(1/2) (o_t^(r) − µ′_jk)^T Σ′_jk^{−1} (o_t^(r) − µ′_jk) ) / ((2π)^{D/2} |Σ′_jk|^{1/2}) ) P(O^(r), q_t = j, m_{q_t t} = k|λ)
(A.35)

where m = [m_{q1 1}, m_{q2 2}, . . . , m_{q_t t}, . . . , m_{q_T T}] indicates the mixture component for each state at each time. This gives the new Qbj for the Gaussian mixture, incorporating both the hidden states and mixtures, and it can be found as

Qbj(λ′, λ) = Σ_{r=1}^{R} wr Q_{r,bj}(λ′, λ)
= Σ_{r=1}^{R} wr Σ_{j=1}^{N} Σ_{k=1}^{M} Σ_{t=1}^{Tr} log( c′_jk exp( −(1/2) (o_t^(r) − µ′_jk)^T Σ′_jk^{−1} (o_t^(r) − µ′_jk) ) / ((2π)^{D/2} |Σ′_jk|^{1/2}) ) P(O^(r), q_t = j, m_{q_t t} = k|λ)
= Σ_{r=1}^{R} wr Σ_{j=1}^{N} Σ_{k=1}^{M} Σ_{t=1}^{Tr} log(c′_jk) P(•)                               [ = Q_{bj,cjk} ]
+ Σ_{r=1}^{R} wr Σ_{j=1}^{N} Σ_{k=1}^{M} Σ_{t=1}^{Tr} ( −(D/2) log(2π) − (1/2) log(|Σ′_jk|) − (1/2) (o_t^(r) − µ′_jk)^T Σ′_jk^{−1} (o_t^(r) − µ′_jk) ) P(•)   [ = Q_{bj,{µjk,Σjk}} ]
(A.36)

where P(•) = P(O^(r), q_t = j, m_{q_t t} = k|λ). This means that Q_{bj,cjk} and Q_{bj,{µjk,Σjk}} can be optimized independently.

A.3.1 Optimization of Qbj ,cjk

Here the optimization of Q_{bj,cjk}(λ′, λ) is performed by using Lagrange multipliers. The function to optimize is

Q_{bj,cjk}(λ′, λ) = Σ_{r=1}^{R} wr Σ_{j=1}^{N} Σ_{k=1}^{M} Σ_{t=1}^{Tr} log(c′_jk) P(•)   (A.37)

with the stochastic constraint

Σ_{k=1}^{M} c′_jk = 1   (A.38)


To determine the optimal parameters, Lagrange multipliers are applied. This gives the two partial derivatives

∂/∂c′_jk [ Σ_{r=1}^{R} wr Σ_{j=1}^{N} Σ_{k=1}^{M} Σ_{t=1}^{Tr} log(c′_jk) P(•) + Σ_{j=1}^{N} ρj ( 1 − Σ_{k=1}^{M} c′_jk ) ] = 0
⇔ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} (1/c′_jk) P(•) − ρj = 0
(A.39)

and

∂/∂ρj [ Σ_{r=1}^{R} wr Σ_{j=1}^{N} Σ_{k=1}^{M} Σ_{t=1}^{Tr} log(c′_jk) P(•) + Σ_{j=1}^{N} ρj ( 1 − Σ_{k=1}^{M} c′_jk ) ] = 0
⇔ Σ_{k=1}^{M} c′_jk = 1
(A.40)

where ρj are the Lagrange multipliers. From Eq. (A.39), ρj can be found as

ρj = Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} (1/c′_jk) P(•)   (A.41)

or, from the same equation, c′_jk can be found as

c′_jk = (1/ρj) Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•)   (A.42)

By inserting Eq. (A.42) into Eq. (A.40), the expression for ρj becomes evident

Σ_{k=1}^{M} (1/ρj) Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) = 1
⇔ ρj = Σ_{k=1}^{M} Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•)
(A.43)

Two expressions for ρj are found, Eqs. (A.41) and (A.43). Since only Eq. (A.41) contains c′_jk, the equality between them can be used to extract c′_jk

Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} (1/c′_jk) P(•) = Σ_{k=1}^{M} Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•)
⇔ c′_jk = [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) ] / [ Σ_{k=1}^{M} Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) ]
(A.44)

To further express the reestimation formula for c′_jk, the equality given by Bayes' rule is considered


P(•) = P(O^(r), q_t = j, m_{q_t t} = k|λ)
= P(O^(r)|λ) P(q_t = j, m_{q_t t} = k|O^(r), λ)
= P(O^(r)|λ) γ_t(j, k)
(A.45)

Considering Eq. (A.45), Eq. (A.44) can be found as

c′_jk = [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(O^(r)|λ) γ_t(j, k) ] / [ Σ_{k=1}^{M} Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(O^(r)|λ) γ_t(j, k) ]
= [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{t=1}^{Tr} γ_t(j, k) ] / [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{t=1}^{Tr} Σ_{k=1}^{M} γ_t(j, k) ]
(A.46)

A.3.2 Optimization of Qbj ,{µjk,Σjk}

Here the optimization of Q_{bj,{µjk,Σjk}}(λ′, λ) is performed by finding out when the partial derivatives are zero for the following function

Q_{bj,{µjk,Σjk}} = Σ_{r=1}^{R} wr Σ_{j=1}^{N} Σ_{k=1}^{M} Σ_{t=1}^{Tr} ( −(D/2) log(2π) − (1/2) log(|Σ′_jk|) − (1/2) (o_t^(r) − µ′_jk)^T Σ′_jk^{−1} (o_t^(r) − µ′_jk) ) P(•)
(A.47)

Before the derivations are made, some matrix algebra results are needed, see Tab. A.1.

∂/∂x [x^T y] = y                     ∂/∂x [x^T x] = 2x
∂/∂x [x^T A y] = Ay                  ∂/∂x [y^T A x] = A^T y
∂/∂x [x^T A x] = (A + A^T) x         A + A^T = 2A, if A is symmetric
tr(kA) = k · tr(A)                   tr(A + B) = tr(A) + tr(B)
tr(AB) = tr(BA)                      ∂/∂A [tr(AB)] = B + B^T − diag(B)
x^T A x = tr(A x x^T)                ∂/∂A [log(|A|)] = 2A^{−1} − diag(A^{−1}), if A is symmetric
|A^{−1}| = 1/|A|

Table A.1: Matrix algebra results.
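Two of the Tab. A.1 results can be spot-checked numerically (a sanity check, not a proof). The sketch below uses plain nested lists to stay dependency free; the matrix, vector and helper names are all illustrative.

```python
def matvec(A, x):
    # Matrix-vector product A x.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmul(A, B):
    # Matrix product A B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def quad(x, A):
    # The quadratic form x^T A x.
    return sum(xi * yi for xi, yi in zip(x, matvec(A, x)))

A = [[2.0, 1.0], [1.0, 3.0]]   # symmetric test matrix
x = [0.5, -1.0]

# Check: x^T A x = tr(A x x^T)
xxT = [[xi * xj for xj in x] for xi in x]
lhs = quad(x, A)
rhs = trace(matmul(A, xxT))

# Check: d/dx [x^T A x] = (A + A^T) x = 2 A x for symmetric A,
# compared against a central finite difference.
eps = 1e-6
grad_fd = []
for i in range(len(x)):
    xp = list(x); xp[i] += eps
    xm = list(x); xm[i] -= eps
    grad_fd.append((quad(xp, A) - quad(xm, A)) / (2 * eps))
grad_exact = [2.0 * g for g in matvec(A, x)]
```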


First the partial derivative with respect to µ′_jk is found as

∂/∂µ′_jk [ Q_{bj,{µjk,Σjk}} ]
= Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( ∂/∂µ′_jk [ −(1/2) (o_t^(r) − µ′_jk)^T Σ′_jk^{−1} (o_t^(r) − µ′_jk) ] ) P(•)
= Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( ∂/∂µ′_jk [ −(1/2) ( (o_t^(r))^T Σ′_jk^{−1} o_t^(r) − (o_t^(r))^T Σ′_jk^{−1} µ′_jk − (µ′_jk)^T Σ′_jk^{−1} o_t^(r) + (µ′_jk)^T Σ′_jk^{−1} µ′_jk ) ] ) P(•)
= Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( −(1/2) ( −(Σ′_jk^{−1})^T o_t^(r) − Σ′_jk^{−1} o_t^(r) + ( Σ′_jk^{−1} + (Σ′_jk^{−1})^T ) µ′_jk ) ) P(•)
= Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( Σ′_jk^{−1} ( o_t^(r) − µ′_jk ) ) P(•)
(A.48)

Setting Eq. (A.48) equal to zero and extracting µ′_jk yields the reestimation formula for µ′_jk

Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( Σ′_jk^{−1} ( o_t^(r) − µ′_jk ) ) P(•) = 0
⇔ Σ′_jk^{−1} Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} o_t^(r) P(•) − Σ′_jk^{−1} µ′_jk Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) = 0
⇔ µ′_jk = [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} o_t^(r) P(•) ] / [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) ]
(A.49)

Considering Eq. (A.45), Eq. (A.49) can be found as

µ′_jk = [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} o_t^(r) P(O^(r)|λ) γ_t(j, k) ] / [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(O^(r)|λ) γ_t(j, k) ]
= [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{t=1}^{Tr} γ_t(j, k) o_t^(r) ] / [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{t=1}^{Tr} γ_t(j, k) ]
(A.50)

Now to the second partial derivative, with the aim of finding the reestimation formula for Σ′_jk. The second partial derivative can be found as


∂/∂Σ′_jk^{−1} [ Q_{bj,{µjk,Σjk}} ]
= Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( ∂/∂Σ′_jk^{−1} [ −(1/2) log(|Σ′_jk|) − (1/2) (o_t^(r) − µ′_jk)^T Σ′_jk^{−1} (o_t^(r) − µ′_jk) ] ) P(•)
= Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( ∂/∂Σ′_jk^{−1} [ (1/2) log(|Σ′_jk^{−1}|) − (1/2) tr( Σ′_jk^{−1} (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ) ] ) P(•)
= (1/2) Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( 2Σ′_jk − diag(Σ′_jk) − 2 (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T + diag( (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ) ) P(•)
= Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( Σ′_jk − (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T − (1/2) diag( Σ′_jk − (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ) ) P(•)
(A.51)

This equation should be set to zero, and the reestimation formula for Σ′_jk can be found. Note that in Eq. (A.51) it is implied that if

Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( Σ′_jk − (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ) P(•) = 0   (A.52)

it directly follows that

Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( (1/2) diag( Σ′_jk − (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ) ) P(•) = 0   (A.53)

Given this information, the reestimation formula for Σ′_jk can be found by considering Eq. (A.52)

Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} ( Σ′_jk − (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ) P(•) = 0
⇔ Σ′_jk Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) = Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T
⇔ Σ′_jk = [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ] / [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(•) ]
(A.54)

Considering Eq. (A.45), Eq. (A.54) can be found as


Σ′_jk = [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(O^(r)|λ) γ_t(j, k) (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ] / [ Σ_{r=1}^{R} wr Σ_{t=1}^{Tr} P(O^(r)|λ) γ_t(j, k) ]
= [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{t=1}^{Tr} γ_t(j, k) (o_t^(r) − µ′_jk)(o_t^(r) − µ′_jk)^T ] / [ Σ_{r=1}^{R} wr P(O^(r)|λ) Σ_{t=1}^{Tr} γ_t(j, k) ]
(A.55)

Note that the reestimation formula for Σ′_jk uses the newly estimated mean vectors, µ′_jk.

A.4 Summary

The derivations of the reestimation formulas for multiple observations using combinatorial weights have now been performed, resulting in Eqs. (A.21), (A.32), (A.46), (A.50) and (A.55).
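As a numeric sanity check of two of these results, the sketch below evaluates the initial-state reestimate of Eq. (A.21) with the independence weights of Eq. (A.56). All names are illustrative, and plain probabilities stand in for the scaled, log-domain quantities an actual implementation would use.

```python
import math

def independence_weights(P_O):
    # Eq. (A.56): w_r = (1/R) * [prod over all sequences of P(O^(r)|lambda)] / P(O^(r)|lambda)
    R = len(P_O)
    prod = math.prod(P_O)
    return [prod / (R * p) for p in P_O]

def reestimate_pi(w, P_O, gamma1):
    """Eq. (A.21) for R sequences and N states.

    `w[r]` are the combinatorial weights, `P_O[r]` = P(O^(r)|lambda),
    and `gamma1[r][i]` = gamma_1^(r)(i).
    """
    R, N = len(w), len(gamma1[0])
    denom = sum(w[r] * P_O[r] for r in range(R))
    return [sum(w[r] * P_O[r] * gamma1[r][i] for r in range(R)) / denom
            for i in range(N)]

P_O = [0.5, 0.25]                              # hypothetical sequence likelihoods
w = independence_weights(P_O)
pi_new = reestimate_pi(w, P_O, [[0.9, 0.1], [0.6, 0.4]])
```

With the weights of Eq. (A.56), every product wr P(O^(r)|λ) equals the same constant, so the reestimate reduces to a plain average of the γ_1^(r)(i) across sequences, and Σ_r wr P(O^(r)|λ) reproduces the independence formula of Eq. (A.57).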

Some comments about the combinatorial weights follow. For independent observation sequences the weights are chosen as

wr = (1/R) [ Π_{r=1}^{R} P(O^(r)|λ) ] / P(O^(r)|λ)   (A.56)

By considering Eqs. (A.56) and (A.2), the independence formula can be found as

P(O|λ) = Σ_{r=1}^{R} (1/R) ( [ Π_{r=1}^{R} P(O^(r)|λ) ] / P(O^(r)|λ) ) P(O^(r)|λ) = Π_{r=1}^{R} P(O^(r)|λ)   (A.57)

Further, the reestimation formulas in Eqs. (7.23), (7.24), (7.25), (7.26) and (7.27) can be found by inserting Eq. (A.56) into Eqs. (A.21), (A.32), (A.46), (A.50) and (A.55).


Bibliography

[1] Anil K. Jain, Robert P. W. Duin, and Jianchang Mao, “Statistical pat-tern recognition: A review,” IEEE Transactions on Pattern Analysis andMachine Intelligence, vol. 22, no. 1, pp. 4–37, 2000.

[2] L. Rabiner and B.H. Juang, Fundamentals of Speech Recognition, Prentice-Hall, 1993, ISBN 0-13-015157-2.

[3] M. Zimmermann and H. Bunke, "Hidden Markov model length optimization for handwriting recognition systems," in Proceedings of the Eighth International Workshop on Frontiers in Handwriting Recognition, 2002, pp. 369–374.

[4] Danfeng Li, A. Biem, and J. Subrahmonia, "HMM topology optimization for handwriting recognition," in Proceedings of ICASSP, 2001, vol. 3, pp. 1521–1524.

[5] S. Gunter and H. Bunke, "A new combination scheme for HMM-based classifiers and its application to handwriting recognition," in Proceedings of the 16th International Conference on Pattern Recognition, 2002, vol. 2, pp. 332–337.

[6] A. Cohen, "Hidden Markov models in biomedical signal processing," in Proceedings of the 20th Annual International Conference of the IEEE, 1998, vol. 3, pp. 1145–1150.

[7] K. Karplus, C. Barrett, and R. Hughey, “Hidden Markov models for detect-ing remote protein homologies,” Bioinformatics, 14(10):846–856, 1998.

[8] N. Collier, C. Nobata, and J. Tsujii, "Extracting the names of genes and gene products with a hidden Markov model," Proc. COLING 2000, pp. 201–207, 2000.

[9] V. V. Kohir and U. B. Desai, "Face recognition using a DCT-HMM approach," in Proceedings of the Fourth Workshop on Applications of Computer Vision, WACV, October 1998, pp. 226–231.

[10] Frank Wallhoff, Stefan Eickeler, and Gerhard Rigoll, "A Comparison of Discrete and Continuous Output Modeling Techniques for a Pseudo-2D Hidden Markov Model Face Recognition System," in IEEE Int. Conference on Image Processing (ICIP), Thessaloniki, Greece, 2001.

[11] A. V. Nefian and M. H. Hayes III, "Face detection and recognition using hidden Markov models," in Proceedings of the International Conference on Image Processing, ICIP, 1998, vol. 1, pp. 141–145.

[12] L. E. Baum and J. A. Eagon, "An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model for ecology," Bulletin of the American Mathematical Society, vol. 73, pp. 360–363, 1967.

[13] L. E. Baum, T. Petrie, G. Soules, and N. Weiss, "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains," Annals of Mathematical Statistics, vol. 41, no. 1, pp. 164–171, 1970.

[14] L. E. Baum, "An inequality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes," Inequalities, vol. 3, pp. 1–8, 1972.

[15] J. M. Peña, J. A. Lozano, and P. Larrañaga, "An empirical comparison of four initialization methods for the k-means algorithm," Pattern Recognition Letters, vol. 20, pp. 1027–1040, 1999.

[16] T. Kohonen, “The self-organizing map,” Proceedings of the IEEE, vol. 78,no. 9, pp. 1464–1480, September 1990.

[17] S. Haykin, Neural Networks: a comprehensive foundation, Prentice-Hall,second edition, 1999, ISBN 0-13-273350-1.

[18] J. Burbea and C.R. Rao, “On the Convexity of Some Divergence MeasuresBased on Entropy Functions,” IEEE Trans. Information Theory, vol. IT-28,no. 3, pp. 489–495, 1982.

[19] R. Vergin and D. O’Shaughnessy, “On the Use of Some Divergence Measuresin Speaker Recognition,” in Proceedings of ICASSP, 1999, pp. 309–312.

[20] Xiaolin Li, M. Parizeau, and R. Plamondon, "Training hidden Markov models with multiple observations - a combinatorial method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 4, pp. 371–377, April 2000.

[21] J. Bilmes, "A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models," Technical Report ICSI-TR-97-021, University of California, Berkeley, 1997.


First Order Hidden Markov Model
Theory and Implementation Issues

Mikael Nilsson

ISSN 1103-1581
ISRN BTH-RES--02/05--SE

Copyright © 2005 by the author
All rights reserved

Printed by Kaserntryckeriet AB, Karlskrona 2005