CHAPTER-4 SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL FEATURES FOR SPEAKER RECOGNITION USING GAUSSIAN MIXTURE MODEL

Speaker recognition is a pattern recognition task that involves three phases, namely feature extraction, training and testing. In the feature extraction stage, features representing speaker information are extracted from the speech signal. In the present study, the LP residual derived from the speech data is used for training and testing, and the LP residual is processed in the time domain at the subsegmental, segmental and suprasegmental levels. In the training phase, GMMs are built, one for each speaker, using the training data of that speaker. During the testing phase, the models are tested with the test data, and based on the results a decision is made about the identity of the speaker.

4.1 SPEECH FEATURE EXTRACTION

The selection of the best parametric representation for acoustic data is an important task in the design of any text-independent speaker recognition system. The acoustic features should fulfill the following requirements:

- Be of low dimensionality, to allow reliable estimation of the parameters of the automatic speaker recognition system.

- Be independent of the speech and the recording environment.


4.1.1. PRE-PROCESSING

The task begins with the pre-processing of the speech signal collected from each speaker. The speech signal is sampled at 16000 samples/sec and resampled to 8000 samples/sec. In the pre-processing stage, the given speech utterance is pre-emphasized, blocked into a number of frames and windowed. In the subsegmental processing of the LP residual, the frame size is 5 msec (40 samples) with a frame shift of 2.5 msec (20 samples). In the segmental processing of the LP residual, the frame size is 20 msec with a frame shift of 2.5 msec; since the signal is decimated by a factor of 4, these correspond to 40 samples and 5 samples respectively, so the frame size in samples is the same as at the subsegmental level. In the suprasegmental processing of the LP residual, the signal is decimated by a factor of 50, and the frame size is 250 msec (40 samples) with a frame shift of 6.25 msec (1 sample). The pre-processing task is carried out in a sequence of steps as explained below.

4.1.1.1. Pre-Emphasis

The given speech samples in each frame are passed through a first-order filter to spectrally flatten the signal and make it less susceptible to finite precision effects later in the signal processing task. The pre-emphasis filter used has the form $H(z) = 1 - a z^{-1}$, with $0.9 \le a \le 1.0$. In fact, it is sometimes better to difference the entire speech utterance before frame blocking and windowing.
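As a minimal illustration, the pre-emphasis step can be sketched in Python as follows; the coefficient value a = 0.97 is a common choice within the stated range, not a value specified in this chapter.

import numpy as np

def pre_emphasize(x, a=0.97):
    """Apply the first-order pre-emphasis filter H(z) = 1 - a*z^(-1)."""
    x = np.asarray(x, dtype=float)
    # y[0] passes through unchanged; y[n] = x[n] - a*x[n-1] for n >= 1
    return np.concatenate(([x[0]], x[1:] - a * x[:-1]))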


4.1.1.2. Windowing

After pre-emphasis, each frame is windowed using a window function. The windowing ensures that the signal discontinuities at the beginning and end of each frame are minimized. The window function used is the Hamming window,

$W(n) = 0.54 - 0.46 \cos\!\left(\frac{2\pi n}{N-1}\right), \quad 0 \le n \le N-1$  (4.1)

where N is the number of samples in the frame.
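The frame blocking and the Hamming windowing of Eq. 4.1 can be sketched together as below; frame_len and frame_shift are in samples (for example 40 and 20 for subsegmental processing at 8 kHz).

import numpy as np

def frame_and_window(x, frame_len, frame_shift):
    """Block a signal into overlapping frames and apply the Hamming
    window of Eq. 4.1 to each frame; returns one frame per row."""
    n_frames = 1 + (len(x) - frame_len) // frame_shift
    n = np.arange(frame_len)
    window = 0.54 - 0.46 * np.cos(2 * np.pi * n / (frame_len - 1))  # Eq. 4.1
    frames = np.stack([x[i * frame_shift: i * frame_shift + frame_len]
                       for i in range(n_frames)])
    return frames * window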

4.1.2. Approach to Speech Feature Extraction

One of the early problems in speaker recognition systems was to choose the right speaker-specific excitation source features from the speech. Excitation source models were chosen to be GMMs or HMMs, as they are assumed to offer a good fit to the statistical nature of speech. Moreover, the excitation source models are often assumed to have diagonal covariance matrices, which gives rise to the need for speech features that are by nature uncorrelated.

The speaker recognition system uses subsegmental, segmental and suprasegmental features of the LP residual to represent different speaker-specific excitation source characteristics. These features are robust to channel and environmental noise. We present a brief overview of the subsegmental, segmental and suprasegmental features of the LP residual.

4.1.2.1. Subsegmental, Segmental and Suprasegmental Features of the LP Residual

The 12th-order LP residual signal is blocked into frames using a frame size of 20 msec and a frame shift of 10 msec. The LP residual contains significant speaker-specific information: it has been shown that humans can recognize people by listening to the LP residual signal [57]. This may be attributed to the speaker-specific excitation source information present at different levels, which this work sets out to model.
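A sketch of the 12th-order LP residual computation by the autocorrelation method is given below. The Levinson-Durbin recursion and the per-frame inverse filtering are standard, but details such as the analysis window and the handling of frame overlap are assumptions, since the chapter does not spell them out.

import numpy as np
from scipy.signal import lfilter

def lpc(x, order):
    """LP coefficients [1, a1, ..., ap] via the Levinson-Durbin recursion."""
    r = np.correlate(x, x, "full")[len(x) - 1: len(x) + order]
    a, err = np.zeros(order + 1), r[0] + 1e-10   # small floor for silent frames
    a[0] = 1.0
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] += k * a[i - 1::-1]           # reflection-coefficient update
        err *= 1.0 - k * k
    return a

def lp_residual(x, order=12, frame_len=160, frame_shift=80):
    """Frame-wise LP residual: 20 ms frames, 10 ms shift at 8 kHz.
    Overlapping regions are simply overwritten in this sketch."""
    e = np.zeros(len(x))
    w = np.hamming(frame_len)                    # analysis window (an assumption)
    for s in range(0, len(x) - frame_len + 1, frame_shift):
        a = lpc(x[s:s + frame_len] * w, order)
        e[s:s + frame_len] = lfilter(a, [1.0], x[s:s + frame_len])  # A(z) inverse filter
    return e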

In subsegmental analysis, the LP residual is blocked into frames of size 5 msec, considered in shifts of 2.5 msec, for extracting the dominant speaker information in each frame; each frame has 40 samples with a shift of 20 samples. In segmental analysis, the LP residual is blocked into frames of size 20 msec, considered in shifts of 2.5 msec, for extracting the pitch and energy of the speaker. In suprasegmental analysis, the LP residual is blocked into frames of size 250 msec, considered in shifts of 6.25 msec, for extracting long-term information, which carries very low frequency information about the speaker. At each level, the source-based speaker characteristics are modeled independently using GMMs, and the evidences are then combined to improve the speaker recognition system.
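The three analysis configurations can be summarized in one sketch, assuming an 8 kHz LP residual as input; scipy's decimate (which low-pass filters before downsampling) stands in for whatever decimator was actually used.

import numpy as np
from scipy.signal import decimate

def level_features(residual, level):
    """Block the 8 kHz LP residual into 40-sample feature vectors
    at the subsegmental, segmental or suprasegmental level."""
    if level == "sub":                          # 5 ms / 2.5 ms at 8000 Hz
        sig, shift = residual, 20
    elif level == "seg":                        # 20 ms / 2.5 ms at 2000 Hz
        sig, shift = decimate(residual, 4), 5
    elif level == "supra":                      # 250 ms / 6.25 ms at 160 Hz
        sig, shift = decimate(decimate(residual, 10), 5), 1
    frame_len = 40                              # 40 samples at every level
    n_frames = 1 + (len(sig) - frame_len) // shift
    return np.stack([sig[i * shift: i * shift + frame_len]
                     for i in range(n_frames)])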

4.2 GAUSSIAN MIXTURE MODEL FOR SPEAKER RECOGNITION

GMM is a classic parametric method, well suited to modeling speaker identities because Gaussian components can represent general speaker-dependent spectral shapes. The Gaussian classifier has been successfully employed in several text-independent speaker identification applications, since the approach used by this classifier is similar to that of using the long-term average of spectral features to represent a speaker's average vocal tract shape [101].


In a GMM, the probability distribution of the observed data takes the form given by the following equation [102]:

$p(x \mid \lambda) = \sum_{i=1}^{M} p_i \, b_i(x)$  (4.2)

where M is the number of component densities, x is a D-dimensional observed data vector, $b_i(x)$ is the i-th component density and $p_i$ is the mixture weight for i = 1, ..., M, as shown in Fig. 4.1. Each component density is a D-dimensional normal distribution with mean vector $\mu_i$ and covariance matrix $\Sigma_i$:

$b_i(x) = \frac{1}{(2\pi)^{D/2} \, |\Sigma_i|^{1/2}} \exp\!\left(-\frac{1}{2}(x - \mu_i)^{T} \Sigma_i^{-1} (x - \mu_i)\right)$  (4.3)

The mixture weights are positive scalar values satisfying $\sum_{i=1}^{M} p_i = 1$. These parameters can be collectively represented as $\lambda = \{p_i, \mu_i, \Sigma_i\}$ for i = 1, ..., M. Each speaker in a speaker identification system can be represented by one distinct GMM, referred to by the speaker's model $\lambda_i$, for i = 1, 2, 3, ..., N, where N is the number of speakers.
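Under the diagonal-covariance assumption mentioned above, Eqs. 4.2-4.3 can be evaluated in the log domain as sketched below; the log-sum-exp step is only for numerical stability and is not part of the equations themselves.

import numpy as np

def gmm_log_density(X, weights, means, variances):
    """Per-frame log p(x|lambda) for a diagonal-covariance GMM (Eqs. 4.2-4.3).
    X: (T, D) frames; weights: (M,); means, variances: (M, D)."""
    D = X.shape[1]
    # log b_i(x_n) for every frame n and component i -> shape (T, M)
    log_b = -0.5 * (D * np.log(2 * np.pi) + np.log(variances).sum(axis=1)
                    + (((X[:, None, :] - means) ** 2) / variances).sum(axis=2))
    log_pb = np.log(weights) + log_b
    m = log_pb.max(axis=1, keepdims=True)        # log-sum-exp over components
    return (m + np.log(np.exp(log_pb - m).sum(axis=1, keepdims=True))).ravel()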

4.2.1. Training the Model

The training procedure is similar to the procedure followed in vector quantization. Clusters are formed within the training data, and each cluster is then represented by a Gaussian probability density function (pdf); the union of many such Gaussian pdfs is a GMM.

The most common approach to estimating the GMM parameters is maximum likelihood estimation [103], where $P(X \mid \lambda)$ is maximized with respect to $\lambda$. Here $P(X \mid \lambda)$ is the conditional probability and $X = \{x_1, x_2, \ldots, x_T\}$ is the set of all feature vectors belonging to a particular acoustic class. Since there is no closed-form solution to the maximum likelihood estimation, and convergence is guaranteed only when enough data is available, an iterative approach using the Expectation-Maximization (EM) algorithm [104] is followed for computing the GMM parameters.

Fig. 4.1: Diagram of Gaussian Mixture Model.

E-Step: Posterior probabilities are calculated for all the training feature vectors. The posterior probability of component i given the feature vector $x_n$ of the n-th frame of the given speaker is

$P(i \mid x_n, \lambda) = \frac{p_i \, b_i(x_n)}{\sum_{k=1}^{M} p_k \, b_k(x_n)}$  (4.4)


M-Step: The M-step uses the posterior probabilities from the E-step to re-estimate the model parameters as follows, where T is the number of training feature vectors:

$\bar{p}_i = \frac{1}{T} \sum_{n=1}^{T} P(i \mid x_n, \lambda)$  (4.5)

$\bar{\mu}_i = \frac{\sum_{n=1}^{T} P(i \mid x_n, \lambda) \, x_n}{\sum_{n=1}^{T} P(i \mid x_n, \lambda)}$  (4.6)

$\bar{\sigma}_i^2 = \frac{\sum_{n=1}^{T} P(i \mid x_n, \lambda) \, x_n^2}{\sum_{n=1}^{T} P(i \mid x_n, \lambda)} - \bar{\mu}_i^2$  (4.7)

Setting $p_i = \bar{p}_i$, $\mu_i = \bar{\mu}_i$ and $\sigma_i^2 = \bar{\sigma}_i^2$, the sequence of E-step and M-step is iterated a few times. The EM algorithm improves on the GMM parameter estimates by iteratively checking for the condition

$P(X \mid \lambda^{(z+1)}) > P(X \mid \lambda^{(z)})$  (4.8)
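One E-step/M-step pass under the same diagonal-covariance assumption is sketched below; iterating this function while condition 4.8 holds, i.e. while the likelihood keeps improving, reproduces the training loop described above. The variance floor at the end is an added safeguard, not part of the chapter's description.

import numpy as np

def em_step(X, weights, means, variances):
    """One EM iteration for a diagonal-covariance GMM (Eqs. 4.4-4.7)."""
    T, D = X.shape
    # E-step (Eq. 4.4): posteriors P(i | x_n, lambda), shape (T, M)
    log_b = -0.5 * (D * np.log(2 * np.pi) + np.log(variances).sum(axis=1)
                    + (((X[:, None, :] - means) ** 2) / variances).sum(axis=2))
    post = np.log(weights) + log_b
    post = np.exp(post - post.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # M-step (Eqs. 4.5-4.7): re-estimate weights, means and variances
    n_i = post.sum(axis=0)                       # effective counts per component
    new_weights = n_i / T                        # Eq. 4.5
    new_means = (post.T @ X) / n_i[:, None]      # Eq. 4.6
    new_vars = (post.T @ X ** 2) / n_i[:, None] - new_means ** 2  # Eq. 4.7
    return new_weights, new_means, np.maximum(new_vars, 1e-6)    # variance floor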

4.2.2. Testing the Model

Let the number of models representing different acoustic classes be N. Hence $\lambda_j$, where j = 1, 2, 3, ..., N, is the set of GMMs under consideration. For each test utterance, feature vectors $x_n$ at time n are extracted. The probability of each model given the feature vector $x_n$ is given by

$P(\lambda_j \mid x_n) = \frac{P(x_n \mid \lambda_j) \, P(\lambda_j)}{P(x_n)}$  (4.9)

Since $P(x_n)$ is a constant and the a priori probabilities $P(\lambda_j)$ are assumed to be equal, the problem is reduced to finding the $\lambda_j$ that maximizes $P(X \mid \lambda_j)$, which is given by

$P(X \mid \lambda_j) = P(\{x_1, x_2, \ldots, x_I\} \mid \lambda_j)$  (4.10)


where I is the number of feature vectors extracted from the frames of the speech signal belonging to a particular acoustic class. Assuming that the frames are statistically independent, Equation 4.10 can be written as

$P(\{x_1, x_2, \ldots, x_I\} \mid \lambda_j) = \prod_{n=1}^{I} P(x_n \mid \lambda_j)$  (4.11)

Applying the logarithm to Equation 4.9 and maximizing over the N models, we have

$N_r = \arg\max_{1 \le j \le N} \sum_{n=1}^{I} \log P(x_n \mid \lambda_j)$  (4.12)

where $N_r$ is declared as the class to which the feature vectors belong. Note that $\{N_r, r = 1, 2, 3, \ldots, N\}$ is the set of all acoustic classes.
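Closed-set identification by Eq. 4.12 then reduces to an argmax over summed frame log-likelihoods. The sketch below reuses the gmm_log_density function given earlier; speaker_models is assumed to be a list of (weights, means, variances) tuples, one per enrolled speaker.

import numpy as np

def identify_speaker(X_test, speaker_models):
    """Return the index N_r of the model with the highest total
    log-likelihood for the test feature vectors (Eq. 4.12)."""
    scores = [gmm_log_density(X_test, w, mu, var).sum()
              for (w, mu, var) in speaker_models]
    return int(np.argmax(scores))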

4.3 EXPERIMENTAL RESULTS

4.3.1. Database Used for the Study

In this study we consider the identification task on the TIMIT speaker database [4]. The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speaker recognition systems. TIMIT contains a total of 6300 sentences, 10 sentences spoken by each of 630 speakers from 8 major dialect regions of the United States. We consider 380 utterances spoken by 38 of the 630 speakers for speaker recognition. For each speaker a maximum of 10 speech utterances is available, of which 8, 7 and 6 are used for training, with testing on the remaining 2, 3 and 4 speech utterances respectively.


4.3.2. Experimental Setup

In general, speaker recognition refers to both speaker identification and speaker verification. Speaker identification is the task of identifying a given speaker from a set of speakers; in closed-set speaker identification, no speaker outside the given set is used for testing. Speaker verification is the task of either accepting or rejecting the identity claim of a speaker. In this work, experiments have been carried out on closed-set speaker identification.

The system has been implemented in Matlab 7 on a Windows XP platform. We have used an LP order of 12 for all experiments. We have trained the models using 2, 4, 8, 16 and 32 Gaussian mixture components for the different training and testing speech utterances spoken by the 38 speakers. Here, the recognition rate is defined as the ratio of the number of speakers correctly identified to the total number of speakers tested.

4.3.3. Extraction of Complete Source Information of LP Residual, HE of LP Residual and RP of LP Residual at Different Levels

As the envelope of the short-time spectrum corresponds to the frequency response of the vocal tract shape, one can observe the short-time spectrum of the LP residual for different LP orders, together with the corresponding LP spectra of the signal, to determine the extent of the vocal tract information present in the LP residual. As the order of the LP analysis is increased, the LP spectrum approximates the short-time spectral envelope better. The envelope of the short-time spectrum corresponds to the frequency response of the vocal tract shape, thus reflecting the vocal tract system characteristics. Typically the vocal tract system is characterized by a maximum of five resonances in the 0-4 kHz range; therefore an LP order of about 8-14 seems to be most appropriate for a speech signal resampled at 8 kHz. For a low order, say 3 as shown in Fig. 4.2(a), the LP spectrum may pick up only the prominent resonances, and hence the residual will still carry a large amount of information about the vocal tract system. Thus the spectrum of the residual, Fig. 4.2(b), contains most of the information of the spectral envelope, except for the prominent resonances. On the other hand, if a large order, say 30, is used, then the LP spectrum may pick up spurious peaks, as shown in Fig. 4.2(e). The corresponding LP residual, obtained by passing the speech signal through the inverse filter, may be affected by the spurious nulls that these peaks introduce into the spectrum of the inverse filter, as shown in Fig. 4.2(f).


Fig. 4.2: (a) LP Spectrum, (b) LP Residual Spectrum for LP Order 3, (c) LP Spectrum, (d) Residual Spectrum for LP Order 9, (e) LP Spectrum and (f) Residual Spectrum for LP Order 30.


From the above discussion, it is evident that the LP residual does not contain any significant features of the vocal tract shape for LP orders in the range 8-20. The LP residual thus contains mostly source information at the subsegmental, segmental and suprasegmental levels, and the features derived from the LP residual at these levels are called residual features. The speaker-specific information present in the LP residual resides in both the amplitude and the phase of the residual samples. Hence we separate the amplitude and phase information of the LP residual using the Hilbert transform: the amplitude information is contained in the HE of the LP residual and the phase information in the RP of the LP residual, at the subsegmental, segmental and suprasegmental levels, as shown in Figs 4.3 and 4.4.

Fig. 4.3: Analytic Signal Representation of a) Subsegmental, b) Segmental and c) Suprasegmental Feature Vectors using HE of LP Residual.


Fig 4.4: Subsegmental, Segmental and Suprasegmental Feature Vectors for RP of LP Residual.
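For reference, both representations follow directly from the analytic signal of the residual. A minimal sketch is given below; the small eps guard against division by zero is an addition of ours.

import numpy as np
from scipy.signal import hilbert

def he_and_rp(residual, eps=1e-12):
    """Hilbert envelope (HE) and residual phase (RP) of the LP residual."""
    analytic = hilbert(residual)      # residual + j * (Hilbert transform)
    he = np.abs(analytic)             # amplitude information
    rp = residual / (he + eps)        # cosine of the analytic-signal phase
    return he, rp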

4.3.4. Effect of Model at Subsegmental, Segmental and Suprasegmental Levels and Amount of Training and Test Data

This section presents the performance of the subsegmental, segmental and suprasegmental levels of the LP residual (complete source) based speaker recognition systems with respect to the number of mixture components per model (at each level) and the amount of training and testing data. The recognition performance of the subsegmental, segmental and suprasegmental levels of the LP residual for the different amounts of training and test data is shown in Tables 4.1 and 4.2.


4.4 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE

The speaker recognition performance with respect to training and testing data is presented in Section 4.4.1 with detailed explanation.

4.4.1. With Respect to Varying Amount of Training and Testing Data

For better performance and reliable authentication, we impose the condition that the number of testing utterances is smaller than the number of training utterances. Following this condition, the training utterances are decreased and the testing utterances are increased until they reach 6 and 4 utterances respectively, out of the 10 utterances per speaker.

In this experiment, one model is developed for each speaker with 2, 4, 8, 16 and 32 mixture components using GMM on the training utterances. The various amounts of training data were taken in sequence and tested with the corresponding testing utterances, following the above condition.

We observed the best results for the 6-4 utterance model compared to the 8-2 and 7-3 utterance models, as discussed in further sections. It is also evident that when there is not enough training data, the selection of the model order becomes more important. For all amounts of training data, the performance increases from 2 to 32 Gaussian components.


4.5 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL (Sub), SEGMENTAL (Seg) AND SUPRASEGMENTAL (Supra) LEVELS OF LP RESIDUAL

At the subsegmental level, speaker-specific excitation source information within one pitch cycle is modeled. At this level, a GMM is used to capture the variety of speaker characteristics. Blocks of 40 samples from the voiced regions of the LP residual are used as input to the GMM, with successive blocks formed with a shift of 20 samples, and one GMM is trained using the LP residual at the subsegmental level. Since the block size is less than a pitch period, the varying characteristics of the excitation source (LP residual) within one glottal pulse are captured. The performance of speaker identification at the subsegmental level of the LP residual is shown in Figs 4.5(a) and 4.7(a) and in the 2nd column of Tables 4.1 and 4.2.

At the segmental level, two to three glottal cycles of speaker-specific information are modeled, in which the information may be attributed to pitch and energy. At this level too, a GMM is used to capture the variety of speaker characteristics. Blocks of 40 samples from the voiced regions of the LP residual are used as input to the GMM, with successive blocks formed with a shift of 5 samples, and one GMM is trained using the LP residual at the segmental level. Since the block size spans 2-3 pitch periods, the varying characteristics of the excitation source (LP residual) within 2-3 glottal pulses are captured. The performance of the speaker recognition system at the segmental level is shown in Figs 4.5(a) and 4.7(a) and in the 3rd column of Tables 4.1 and 4.2.

At the suprasegmental level, 25 to 50 glottal cycles of speaker-specific information are modeled, in which the information may be attributed to long-term variations; using this feature, the speaker can be recognized even as the speaker ages. This is the motivation for this work. The performance of the speaker recognition system at the suprasegmental level is shown in Figs 4.5(a) and 4.7(a) and in the 4th column of Tables 4.1 and 4.2. These are compared with the baseline speaker recognition system using MFCCs in the 6th column of Tables 4.1 and 4.2.

For comparison purposes, a baseline speaker recognition system using speaker information from the vocal tract and the segmental source feature is developed on the same database. The speech signal is processed in blocks of 20 msec with a shift of 10 msec, and for every frame 39-dimensional MFCCs are computed. The performance of this speaker recognition system is shown in Figs 4.5(b) and 4.7(b) and in Tables 4.1 and 4.2, and is compared with the speaker recognition performance of the sub, seg and supra levels of the LP residual. The performance of the speaker recognition system is given as a percentage (%) in all the tables of this chapter.
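One plausible realization of this baseline is sketched below with librosa; the chapter's own implementation was in Matlab, so the toolkit, the 13-coefficient split into statics plus deltas, and the mel filter count are assumptions.

import numpy as np
import librosa

def mfcc_39(y, sr=8000):
    """39-dimensional MFCC vectors: 13 static coefficients plus delta and
    delta-delta, over 20 ms frames with a 10 ms shift."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_mels=26,
                                n_fft=int(0.020 * sr),
                                hop_length=int(0.010 * sr))
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.T                    # one 39-dimensional vector per frame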


Table 4.1: Speaker recognition performance of Subsegmental (Sub), Segmental (Seg) and Suprasegmental (Supra) information of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 7 were used for training and 3 for testing.

No. of Mixtures   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 13.33     20        10          10                        30          16.67
4                 35        26.67     5           23.33                     36.67       30
8                 36.67     43.33     5           46.67                     46.67       60
16                83        56.67     5           80                        66.67       80
32                93.33     56.67     5           83.33                     60          83.37


Fig. 4.5: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of LP Residual and b) Sub+Seg+Supra along with MFCCs


4.5.1 Combining Evidences from Subsegmental, Segmental and Suprasegmental Levels of LP Residual

By the way each feature is derived, the information present at the subsegmental, segmental and suprasegmental levels is different, and hence may reflect different aspects of the speaker-specific source information. Comparing their recognition performance, it can be observed that the subsegmental features provide the best performance; thus the subsegmental features may carry more speaker-specific evidence than the features at the other levels. The different performances of the recognition experiments indicate the different nature of the speaker information present.

In the case of identification, the confusion pattern of the features is considered as an indication of the different nature of the information present. In the confusion pattern, the principal diagonal represents correct identification and the rest represents misclassification. Figure 4.6 shows the confusion patterns of the identification results for all the proposed features using the TIMIT database. In each case, the confusion pattern is entirely different: the decisions for both true and false identification differ. This indicates that the features reflect different aspects of the source information, which may help in combining the evidences for further improvement of the recognition performance from the complete source perspective.
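The confusion pattern itself is just a tally of identification decisions against ground truth; a sketch:

import numpy as np

def confusion_pattern(true_ids, predicted_ids, n_speakers):
    """Rows index true speakers, columns index identification decisions.
    The principal diagonal counts correct identifications."""
    C = np.zeros((n_speakers, n_speakers), dtype=int)
    for t, p in zip(true_ids, predicted_ids):
        C[t, p] += 1
    return C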


Fig. 4.6: Confusion Pattern of a) Sub, b) Seg, c) Supra of LP Residual, d) SRC=Sub+Seg+Supra, e) MFCCs and f) SRC+MFCCs Information for Identification of 38 Speakers from TIMIT Database.


Table 4.2: Speaker recognition performance of Sub, Seg and Supra information of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 6 were used for training and 4 for testing.

No. of Mixtures   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 20        13.33     10          26.66                     50          36.66
4                 43.33     20        13.33       46.66                     50          46.66
8                 73.33     46.66     10          76.66                     50          76.66
16                90        56.66     6.66        86.66                     66.67       90
32                96.66     70        3.33        80                        70          83.33


Fig. 4.7: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of LP Residual and b) Sub+Seg+Supra along with MFCCs.


4.6 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF HE OF LP RESIDUAL

The amplitude information is obtained from the LP residual using the Hilbert transform, which produces a 90° phase-shifted version of the LP residual; the HE, the magnitude of the resulting analytic signal, represents the amplitude information of the LP residual. The HE of the LP residual is processed at the subsegmental, segmental and suprasegmental levels, and the subsegmental, segmental and suprasegmental sequences derived from the HE of the LP residual are called HE features. The speaker recognition performances at the subsegmental, segmental and suprasegmental levels are shown in Figs 4.8(a)-4.10(a) respectively. The improvement obtained by combining the amplitude information across the levels is shown in Figs 4.8(b)-4.10(b) respectively. The experimental results are shown in Tables 4.3, 4.4 and 4.5 for the 38 speakers of the TIMIT database.


Table 4.3: Speaker recognition performance of Sub, Seg and Supra information of HE of LP residual of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 8 were used for training and 2 for testing.

No. of Mixtures   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 26.67     46.67     3.33        40                        33.33       33.33
4                 46.67     30        3.33        60                        50          63.33
8                 33.33     30        0           50                        53.33       56.33
16                50        40        3.33        50                        53.33       56.33
32                70        53.33     3.33        63.33                     60          64


Fig. 4.8: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.


Table 4.4: Speaker recognition performance of Sub, Seg and Supra information of HE of LP residual of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 7 were used for training and 3 for testing.

No. of Mixtures   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 13.33     6.67      20          13.33                     33.33       23.33
4                 36.67     26.67     3.33        40                        50          40
8                 36.67     40        3.33        46.67                     53.33       46.67
16                60        43.33     3.33        70                        53.33       70
32                70        76.67     6.67        80                        66.67       80


Fig. 4.9: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.


Table 4.5: Speaker recognition performance of Sub, Seg and Supra information of HE of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 6 were used for training and 4 for testing.

No. of Mixtures   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 20        16.67     0           23.33                     26.67       50
4                 36.67     16.67     3.33        33.33                     50          76.67
8                 46.67     50        3.33        46.67                     56.67       60
16                53.33     53.33     3.33        80                        53.33       66.67
32                80        43.33     3.33        66.67                     63.33       36.67


Fig. 4.10: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE of LP Residual and b) Sub+Seg+Supra along with MFCCs.


4.7 MODELING SPEAKER INFORMATION FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF RP OF LP RESIDUAL

The phase information is likewise obtained from the LP residual using the Hilbert transform. Since the HE represents the magnitude of the analytic signal of the LP residual, dividing the LP residual by its HE yields the cosine of the phase; the phase information obtained from the LP residual in this way is known as the RP of the LP residual. The RP of the LP residual is processed at the subsegmental, segmental and suprasegmental levels, and the sequences derived from the RP of the LP residual at these levels are called RP features. The speaker recognition performances at the subsegmental, segmental and suprasegmental levels of the RP of the LP residual are shown in Figs 4.11(a)-4.13(a) respectively. The improvement obtained by combining the phase information across the levels is shown in Figs 4.11(b)-4.13(b). The experimental results are shown in Tables 4.6-4.8 for the 38 speakers.


Table 4.6: Speaker recognition performance of Sub, Seg and Supra information of RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 8 were used for training and 2 for testing.

No. of Mixtures   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 10        20        6.67        43.33                     33.33       13.33
4                 36.67     23        3.33        26.67                     50          63.33
8                 30        56        3.33        66.67                     53.33       50
16                50        56        3.33        76.67                     53.33       76.67
32                80        73.33     3.33        80                        60          84.67


Fig. 4.11: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.


Table 4.7: Speaker recognition performance of Sub, Seg and Supra information of RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 7 were used for training and 3 for testing.

No. of Mixtures   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 53.33     13.33     0           13.33                     33.33       63.33
4                 43.33     30        3.33        30                        50          60
8                 56.67     40        3.33        40                        53.33       70
16                66        36.67     3.33        36.67                     53.33       70
32                76.67     30        3.33        30                        66.67       83.33


Fig 4.12: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.


Table 4.8: Speaker recognition performance of Sub, Seg and Supra information of RP of LP residual of 38 speakers from the TIMIT database. Each speaker spoke 10 sentences, of which 6 were used for training and 4 for testing.

No. of Mixtures   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 40        36.67     0           46.67                     26.67       50
4                 60        43.33     3.33        70                        50          76.67
8                 53.33     33.33     3.33        60                        56.67       60
16                56.67     40        3.33        66.67                     53.33       66.67
32                70        43.33     3.33        36.67                     63.33       36.67


Fig 4.13: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.


4.8 COMBINING EVIDENCES FROM SUBSEGMENTAL, SEGMENTAL AND SUPRASEGMENTAL LEVELS OF HE AND RP OF LP RESIDUAL

The procedure to compute the subsegmental, segmental and suprasegmental feature vectors from the HE and RP of the LP residual is the same as described earlier, except for the input sequence: in one case the input is the HE, in the other it is the RP. The unipolar nature of the HE helps in suppressing the bipolar variations representing sequence information and emphasizing only the amplitude values. As a result, the amplitude information in the subsegmental, segmental and suprasegmental sequences of the LP residual is shown in Figs 4.3(a), (b) and (c). On the other hand, the residual phase represents the sequence information of the residual samples. Figs 4.4(a), (b) and (c) show the residual phase for subsegmental, segmental and suprasegmental processing respectively; in all these cases the amplitude information is absent. Hence the analytic signal representation provides the amplitude and sequence information of the LP residual samples independently. In [113] it was shown that the information present in the residual phase contributes significantly to speaker recognition. We propose that the information present in the HE may also contribute well to speaker recognition. Further, as they reflect different aspects of the source information, the combined representation of both evidences may be more effective for speaker recognition. We conduct different experiments on the TIMIT database for 38 speakers.
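The chapter does not state the fusion rule used to combine the evidences. A common choice at the score level is a weighted sum of per-model scores after normalizing each evidence; the sketch below is written under that assumption.

import numpy as np

def fuse_and_identify(score_sets, weights=None):
    """Score-level fusion: each element of score_sets holds one evidence's
    per-model scores (e.g. HE-based and RP-based log-likelihoods).
    Scores are mean/variance normalized before the weighted sum."""
    score_sets = [np.asarray(s, dtype=float) for s in score_sets]
    if weights is None:
        weights = [1.0 / len(score_sets)] * len(score_sets)
    fused = np.zeros_like(score_sets[0])
    for w, s in zip(weights, score_sets):
        fused += w * (s - s.mean()) / (s.std() + 1e-12)
    return int(np.argmax(fused))      # identified speaker after fusion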


Table 4.9: Speaker recognition performance of Sub, Seg and Supra information of HE and RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 8 were used for training and 2 for testing.

No. of Mixtures   HE+RP of Sub (%)   HE+RP of Seg (%)   HE+RP of Supra (%)   SRC = HE+RP of Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 13.33              26.67              6.67                 20                                 33.33       23.33
4                 60                 43.33              6.67                 56.67                              50          56.67
8                 33.33              73.33              33.33                56.67                              53.33       60
16                63.33              66.67              46.67                83.33                              53.33       86.67
32                66.67              53.33              30                   76.67                              60          76.67


Fig. 4.14: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.


Table 4.10: Speaker recognition performance of Sub, Seg and Supra information of HE and RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 7 were used for training and 3 for testing.

No. of Mixtures   HE+RP of Sub (%)   HE+RP of Seg (%)   HE+RP of Supra (%)   SRC = HE+RP of Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 20                 13.33              20                   20                                 33.33       23.33
4                 46.67              30                 6.67                 46.67                              50          46.67
8                 53.33              60                 3.33                 66.67                              53.33       70
16                73.33              73.33              6.67                 70                                 53.33       76.67
32                90                 75                 6.67                 86.67                              66.67       93.37


Fig. 4.15: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.


Table 4.11: Speaker recognition performance of Sub, Seg and Supra information of HE and RP of LP residual of 38 speakers. Each speaker spoke 10 sentences, of which 6 were used for training and 4 for testing.

No. of Mixtures   HE+RP of Sub (%)   HE+RP of Seg (%)   HE+RP of Supra (%)   SRC = HE+RP of Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
2                 26.67              36.67              6.67                 26.67                              26.67       50
4                 50                 43.33              3.33                 60                                 50          76.67
8                 60                 33.33              6.67                 63.33                              56.67       60
16                83.33              40                 3.33                 83.33                              53.33       86.67
32                93.33              43.33              3.33                 93.33                              63.33       98.67


Fig. 4.16: The Performance of Speaker Recognition System for a) Sub, Seg and Supra Levels of HE and RP of LP Residual and b) Sub+Seg+Supra along with MFCCs.


4.9 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE WITH RESPECT TO VARYING AMOUNT OF TRAINING AND TESTING DATA

In this experiment, speaker models with 2, 4, 8, 16 and 32 component densities were trained using 8 and 6 speech utterances and tested with 2 and 4 speech utterances per speaker, respectively. The recognition performance of the residual features and the HE and RP features for 2 and 4 test utterances versus 8 and 6 training utterances is shown in Figs 4.5-4.16 and Tables 4.1-4.11.

The results show that the recognition performance increases with the number of test speech utterances per speaker. The largest increase in recognition percentage occurs when the number of test utterances is 4, both for the residual, HE and RP features individually (Tables 4.1-4.8 and Figs 4.5-4.13) and for the fusion of the HE and RP features, since the fusion of both provides the complete source information (Tables 4.9-4.11 and Figs 4.14-4.16).

4.10 DISCUSSION ON SPEAKER RECOGNITION PERFORMANCE WITH RESPECT TO DIFFERENT TYPES OF FEATURES

The speaker recognition performance using the LP residual at the subsegmental, segmental and suprasegmental levels is investigated with respect to the number of component densities per model, where each speaker is modeled with the subsegmental, segmental and suprasegmental information of the LP residual. The performance at the subsegmental level is higher than at the other two levels. Similarly, each speaker is modeled at the subsegmental, segmental and suprasegmental levels of the HE and RP of the LP residual. Individually, the performance of the HE and RP features is lower than that of the residual features, but the fusion of HE and RP improves the performance of the speaker recognition system (Tables 4.9-4.11, Figs 4.14-4.16). Therefore, the fusion of the HE and RP features provides better performance than the residual features alone.

This shows the robustness of the combined HE and RP representation of the complete source, which also provides information additional to the MFCC features. From this observation we conclude that the combined representation of the HE and RP features is better than the residual features alone, and that the complete information present in the source can be represented by the combined representation of the HE and RP features.

4.11 COMPARATIVE STUDY OF HE FEATURES AND RP FEATURES OVER RESIDUAL FEATURES FOR RECOGNITION SYSTEM

We have compared the results obtained by the proposed approach with some recent works, which were discussed in detail in Section 2.6. These works use different features and databases. Tables 4.12 and 4.13 show a comparative analysis of different features for speaker recognition performance.

Table 4.12: Comparison of speaker recognition performance on different databases for the LP residual at Sub, Seg and Supra levels.

Database   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
NIST-99    64        60        31          76                        87          96
NIST-03    57        58        13          67                        66          79
TIMIT      90        56.67     13.33       86.67                     66.67       90


Table 4.13: Comparison of speaker recognition performance on different databases for HE and RP of LP residual at subsegmental, segmental and suprasegmental levels.

Database                 Type of signal   Sub (%)   Seg (%)   Supra (%)   SRC = Sub+Seg+Supra (%)   MFCCs (%)   SRC+MFCCs (%)
NIST-99                  HE               44        56        8           71                        87          94
                         RP               49        69        17          73                        87          93
                         HE+RP            64        78        22          88                        87          98
NIST-03                  HE               32        39        7           54                        66          76
                         RP               23        51        14          56                        66          77
                         HE+RP            48        59        17          72                        66          83
Observed model           HE               80        76.67     20          80                        66.67       80
(GMM, TIMIT database)    RP               80        76.67     6.67        80                        63.33       84.67
                         HE+RP            93.33     75        46.67       93.33                     63.33       98.67


4.12 SUMMARY

In this chapter, the speaker-specific source information from the LP residual is modeled at the subsegmental, segmental and suprasegmental levels using GMMs. The signal at the segmental and suprasegmental levels is decimated by factors of 4 and 50, respectively. Experimental results show that the subsegmental, segmental and suprasegmental levels each contain speaker information. Further, the performance improvement obtained by combining the evidences from each level indicates the different nature of the speaker information at each level. Finally, subsegmental, segmental and suprasegmental features of the LP residual, of the HE of the LP residual and of the RP of the LP residual are proposed for a speaker recognition system using GMMs.