
Automatic Continuous Speech Recognition

[Diagram: a database of speech with text transcriptions feeding the recognition/training pipeline, whose output is scored.]

Automatic Continuous Speech Recognition

Problems with isolated word recognition:
– Every new task contains novel words for which no training data is available.
– There are simply too many words, and the same word may have different acoustic realizations. Coarticulation between "words" and variations in speaking rate increase this variability.
– We do not know where the word boundaries are.

In CSR, should we use words? Or what is the basic unit that best represents the salient acoustic and phonetic information?

Model Unit Issues

A good model unit should be:
– Accurate: it represents the acoustic realizations that appear in different contexts.
– Trainable: there is enough data available to estimate the parameters of the unit.
– Generalizable: new words can be derived from the predefined set of units.

Comparison of Different Units

Words:
– Small-vocabulary task: accurate, trainable, non-generalizable.
– Large vocabulary: accurate, non-trainable, non-generalizable.

Phonemes:
– Large vocabulary: not accurate, trainable, over-generalizable.

Syllables:
– English: about 30,000 syllables: not very accurate, non-trainable, generalizable.
– Chinese: 1,200 tone-dependent syllables; Japanese: about 50 syllables: accurate, trainable, generalizable.

Allophones: realizations of a phoneme in different contexts.
– Accurate, non-trainable, generalizable.
– Triphones are an example of allophonic units.

Training in Sphinx

– The phoneme set is trained.
– Senones are trained: from 1 Gaussian up to 8 or 16 Gaussians per state.
– Triphones are created.
– Senones are created.
– Senones are pruned.
– Triphones are trained.

Context-independent units: phonemes
– SPHINX: model_architecture/Telefonica.ci.mdef

Context-dependent units: triphones
– SPHINX: model_architecture/Telefonica.untied.mdef

Clustering Acoustic-Phonetic Units

Many phones have similar effects on their neighboring phones; hence, many triphones have very similar Markov states.

A senone is a cluster of similar Markov states. Advantages:
– More training data for each shared state.
– Less memory used.

Senonic Decision Tree (SDT)

The SDT classifies the Markov states of the triphones seen in the training corpus by asking linguistic questions composed of conjunctions, disjunctions, and/or negations of a set of predetermined questions.

Linguistic Questions

Question    Phones in the question
Aspgen      hh
Sil         sil
Alvstp      d, t
Dental      dh, th
Labstp      b, p
Liquid      l, r
Lw          l, w
S/Sh        s, sh
...         ...

Decision tree for classifying the second state of a /k/ triphone (diagram):
– Root question: Is the left phone (LP) a sonorant or nasal?
– Lower questions include: Is the right phone (RP) a back-R? Is the LP /s, z, sh, zh/? Is the RP voiced? Is the LP a back-L, or (the LP not a nasal and the RP a lax vowel)?
– The leaves of the tree are the senones (Senone 1 through Senone 6).

When applied to the word "welcome" (diagram): the same decision tree is traversed for the /k/ triphone in "welcome", and the answers to its questions select the senone used to model that triphone's second state.

The tree can be constructed automatically by searching, at each node, for the question that yields the maximum decrease in entropy (a toy sketch of this criterion follows below).
– Sphinx: construction: $base_dir/c_scripts/03.buildtrees. Results: $base_dir/trees/Telefonica.unpruned/A-0.dtree

Once the tree has grown, it needs to be pruned.
– Sphinx: $base_dir/c_scripts/04.buildtrees. Results: $base_dir/trees/Telefonica.500/A-0.dtree and $base_dir/Telefonica_arquitecture/Telefonica.500.mdef
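The following is a minimal sketch of this entropy-based question selection, not SphinxTrain's implementation: the triphone states, occupancy counts, discrete output distributions, and question sets are made-up toy values (real systems split on continuous-density likelihoods, but the criterion has the same shape).

    import math
    from collections import defaultdict

    # Toy data: each triphone state of the base phone /k/ has an occupancy count
    # and a discrete output distribution over a few acoustic codewords.
    STATES = {
        # (left_phone, base_phone, right_phone): (count, {codeword: probability})
        ("m", "k", "ah"): (120, {"c1": 0.7, "c2": 0.2, "c3": 0.1}),
        ("n", "k", "ah"): (90,  {"c1": 0.6, "c2": 0.3, "c3": 0.1}),
        ("s", "k", "ah"): (60,  {"c1": 0.2, "c2": 0.3, "c3": 0.5}),
        ("t", "k", "ah"): (40,  {"c1": 0.1, "c2": 0.4, "c3": 0.5}),
    }

    # Candidate linguistic questions, expressed as sets of left phones.
    QUESTIONS = {
        "Nasal?":  {"m", "n", "ng"},
        "Alvstp?": {"d", "t"},
        "S/Sh?":   {"s", "sh"},
    }

    def weighted_entropy(states):
        """Entropy of the pooled distribution of a group of states,
        weighted by the group's total occupancy count."""
        total = sum(count for count, _ in states)
        if total == 0:
            return 0.0
        pooled = defaultdict(float)
        for count, dist in states:
            for codeword, p in dist.items():
                pooled[codeword] += count * p
        entropy = -sum((m / total) * math.log2(m / total)
                       for m in pooled.values() if m > 0)
        return total * entropy

    def entropy_decrease(states, left_phone_set):
        """Decrease in weighted entropy when splitting the node by the
        question 'is the left phone in left_phone_set?'."""
        yes = [(c, d) for (lp, _, _), (c, d) in states.items() if lp in left_phone_set]
        no  = [(c, d) for (lp, _, _), (c, d) in states.items() if lp not in left_phone_set]
        before = weighted_entropy(list(states.values()))
        return before - (weighted_entropy(yes) + weighted_entropy(no))

    for name, phones in QUESTIONS.items():
        print(f"{name:8s} entropy decrease = {entropy_decrease(STATES, phones):8.2f}")
    best = max(QUESTIONS, key=lambda q: entropy_decrease(STATES, QUESTIONS[q]))
    print("Question chosen for this node:", best)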

Subword Unit Models Based on HMMs

Words can be modeled using composite HMMs built by concatenating subword-unit HMMs.

A null transition is used to go from one subword unit to the next (sketched below).

Example: the word "two" with surrounding silence: /sil/ /t/ /uw/ /sil/
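Below is a minimal sketch of such a composite HMM, assuming toy 3-state left-to-right phone models; the null transition between subword units is represented here by routing the exit probability of one phone into the entry state of the next. This is not the Sphinx model format.

    import numpy as np

    def phone_hmm(n_states=3, stay=0.6):
        """A toy left-to-right phone HMM: each state either stays or advances;
        the extra last column holds the probability of leaving the model."""
        A = np.zeros((n_states, n_states + 1))
        for i in range(n_states):
            A[i, i] = stay
            A[i, i + 1] = 1.0 - stay
        return A

    def concatenate(phone_models):
        """Chain phone HMMs into one composite word HMM.  The exit probability
        of each phone is routed into the first state of the next phone, which
        plays the role of the null (non-emitting) transition between units."""
        sizes = [m.shape[0] for m in phone_models]
        total = sum(sizes)
        A = np.zeros((total, total + 1))          # last column = exit of the word model
        offset = 0
        for idx, m in enumerate(phone_models):
            n = m.shape[0]
            A[offset:offset + n, offset:offset + n] = m[:, :n]   # internal transitions
            exit_probs = m[:, n]
            if idx + 1 < len(phone_models):
                A[offset:offset + n, offset + n] += exit_probs   # null transition to next phone
            else:
                A[offset:offset + n, total] += exit_probs        # exit of the whole word
            offset += n
        return A

    # /sil/ /t/ /uw/ /sil/ : composite model for the word "two"
    word_two = concatenate([phone_hmm(), phone_hmm(), phone_hmm(), phone_hmm()])
    print(word_two.shape)        # (12, 13): 12 emitting states plus an exit column
    print(word_two.sum(axis=1))  # every row still sums to 1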

Continuous Speech Training

[Diagram repeated: a database of speech and text transcriptions feeding the training and scoring pipeline.]

For each utterance in the training set, the subword units are concatenated to form the word models.
– Sphinx: dictionary: $base_dir/training_input/dict.txt
– $base_dir/training_input/train.lbl

Let’s assume we are going to train the phonemes in the sentence:
– Two four six.

The phonemes of this sentence are:
– /t/ /uw/ /f/ /ao/ /r/ /s/ /ih/ /k/ /s/

Therefore the composite HMM will be (see the dictionary-expansion sketch below):
/sil/ /t/ /uw/ /sil/ /f/ /ao/ /r/ /sil/ /s/ /ih/ /k/ /s/ /sil/
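A minimal sketch of that expansion, assuming a toy pronunciation lexicon (standard ARPABET-style pronunciations, not the actual contents of dict.txt) and an optional silence model between words.

    # Toy pronunciation dictionary used to expand a training transcription into
    # the phone sequence whose HMMs are concatenated for embedded training.
    dictionary = {
        "two":  ["t", "uw"],
        "four": ["f", "ao", "r"],
        "six":  ["s", "ih", "k", "s"],
    }

    def expand(transcription, lexicon, optional_silence=True):
        """Map a word transcription to the phone sequence of its composite HMM."""
        phones = ["sil"] if optional_silence else []
        for word in transcription.split():
            phones += lexicon[word]
            if optional_silence:
                phones.append("sil")      # optional silence between/around words
        return phones

    print(expand("two four six", dictionary))
    # ['sil', 't', 'uw', 'sil', 'f', 'ao', 'r', 'sil', 's', 'ih', 'k', 's', 'sil']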

We can estimate the parameters of each HMM using the forward-backward re-estimation formulas already defined.

The ability to automatically align each individual HMM to the corresponding unsegmented speech observation sequence is one of the most powerful features of the forward-backward algorithm.
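The sketch below illustrates that alignment property on a toy discrete-output HMM standing in for a concatenation of phone models; the transition and emission values are made up, not taken from any real Sphinx model. The forward-backward passes yield state occupancies (gammas), i.e. a soft alignment of states to frames, without any manual segmentation.

    import numpy as np

    A = np.array([[0.6, 0.4, 0.0],
                  [0.0, 0.6, 0.4],
                  [0.0, 0.0, 1.0]])          # left-to-right transition probabilities
    B = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])          # P(observation symbol | state)
    pi = np.array([1.0, 0.0, 0.0])           # start in the first state

    obs = [0, 0, 1, 1, 1, 2, 2]              # unsegmented observation sequence

    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))

    # Forward pass
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    # Backward pass
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    # State occupancies: gamma[t, i] = P(state i at time t | observations)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)

    for t, g in enumerate(gamma):
        print(f"t={t}  " + "  ".join(f"P(s{i})={p:.2f}" for i, p in enumerate(g)))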

Language Models for Large Vocabulary Speech Recognition


Instead of choosing the model that maximizes only the acoustic likelihood P(O | M_k), recognition can be improved by choosing the model M_i with the maximum posterior probability:

P(M_i | O) >= P(M_k | O),   k = 1, 2, ..., q

By Bayes' rule, and since P(O) is the same for every hypothesis, this is equivalent to choosing M_i such that

P(O | M_i) P(M_i) >= P(O | M_k) P(M_k),   k = 1, 2, ..., q

The acoustic likelihood P(O | M_k) is produced by the Viterbi decoder, and the prior P(M_k) is supplied by the language model.
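As a toy illustration of this decision rule, the sketch below combines made-up log acoustic likelihoods with made-up language-model priors for three hypothetical word sequences; P(O) is dropped because it is common to every hypothesis.

    import math

    # Hypothetical candidate word sequences with illustrative scores:
    # log P(O | M_k) from the acoustic model and P(M_k) from the language model.
    hypotheses = {
        "two four six":   {"log_acoustic": -310.2, "lm_prob": 1e-4},
        "two for six":    {"log_acoustic": -309.8, "lm_prob": 1e-6},
        "to four sticks": {"log_acoustic": -312.5, "lm_prob": 1e-7},
    }

    def map_score(h):
        # log P(O | M_k) + log P(M_k); P(O) is common to all hypotheses and dropped
        return h["log_acoustic"] + math.log(h["lm_prob"])

    best = max(hypotheses, key=lambda k: map_score(hypotheses[k]))
    for k, h in hypotheses.items():
        print(f"{k:15s} MAP score = {map_score(h):8.2f}")
    print("Recognized:", best)

Note that the hypothesis with the best acoustic score alone ("two for six") is not the one selected; the language-model prior tips the decision toward the more probable word sequence.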

Language Models for Large Vocabulary Speech Recognition

Goal:
– Provide an estimate of the probability of a word sequence (w1 w2 w3 ... wQ) for the given recognition task.

This can be expanded with the chain rule:

P(W) = P(w1 w2 w3 ... wQ)
     = P(w1) P(w2 | w1) P(w3 | w1 w2) ... P(wQ | w1 w2 ... wQ-1)

Since it is impossible to reliably estimate all of the conditional probabilities P(wj | w1 w2 ... wj-1), in practice an N-gram language model is used, which conditions each word only on the previous N-1 words:

P(wj | w1 w2 ... wj-1) ≈ P(wj | wj-N+1 ... wj-1)

In practice, reliable estimators are obtained for N = 1 (unigram), N = 2 (bigram), or possibly N = 3 (trigram).

Examples:

Unigram: P(Maria loves Pedro) = P(Maria) P(loves) P(Pedro)

Bigram: P(Maria loves Pedro) = P(Maria | <sil>) P(loves | Maria) P(Pedro | loves) P(</sil> | Pedro)
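A minimal sketch of the bigram computation above; the probabilities are illustrative values rather than estimates from any corpus, and <sil> / </sil> are used as the sentence-boundary tokens, as in the example.

    # Illustrative bigram language model.
    bigram = {
        ("<sil>", "Maria"): 0.2,
        ("Maria", "loves"): 0.3,
        ("loves", "Pedro"): 0.1,
        ("Pedro", "</sil>"): 0.4,
    }

    def bigram_prob(words, model):
        """P(w1 ... wQ) under a bigram model, with sentence-boundary tokens."""
        seq = ["<sil>"] + words + ["</sil>"]
        p = 1.0
        for prev, cur in zip(seq, seq[1:]):
            p *= model.get((prev, cur), 0.0)   # unseen bigrams get probability 0 here
        return p

    print(bigram_prob(["Maria", "loves", "Pedro"], bigram))   # 0.2*0.3*0.1*0.4 = 0.0024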

CMU-Cambridge Language Modeling Tools

$base_dir/c_scripts/languageModelling


P(Wi | Wi-2, Wi-1) = C(Wi-2 Wi-1 Wi) / C(Wi-2 Wi-1)

where

C(Wi-2 Wi-1 Wi) = total number of times the sequence Wi-2 Wi-1 Wi was observed
C(Wi-2 Wi-1) = total number of times the sequence Wi-2 Wi-1 was observed
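A minimal sketch of this maximum-likelihood trigram estimate on a tiny made-up corpus; it implements only the raw count ratio above (real toolkits additionally apply smoothing for unseen n-grams), and the <s>/</s> padding tokens are an assumption for illustration.

    from collections import Counter

    # Tiny illustrative training corpus.
    corpus = [
        "two four six".split(),
        "two four two".split(),
        "four six two".split(),
    ]

    trigram_counts = Counter()
    bigram_counts = Counter()
    for sentence in corpus:
        padded = ["<s>", "<s>"] + sentence + ["</s>"]
        for a, b, c in zip(padded, padded[1:], padded[2:]):
            trigram_counts[(a, b, c)] += 1   # C(Wi-2 Wi-1 Wi)
            bigram_counts[(a, b)] += 1       # C(Wi-2 Wi-1), counted as trigram histories

    def p_trigram(w, w2, w1):
        """ML estimate of P(w | w2 w1); zero when the history was never observed."""
        denom = bigram_counts[(w2, w1)]
        return trigram_counts[(w2, w1, w)] / denom if denom else 0.0

    print(p_trigram("six", "two", "four"))   # C(two four six) / C(two four) = 1/2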
