
  • EE513 Audio Signals and Systems

    Statistical Pattern Classification

    Kevin D. Donohue
    Electrical and Computer Engineering
    University of Kentucky

  • Interpretation of Auditory Scenes

    Human perception and cognition greatly exceed any computer-based system at abstracting sounds into objects and creating meaningful auditory scenes. This perception of objects (not just detection of acoustic energy) allows interpretation of situations, leading to an appropriate response or further analysis.

    The sensory organs (ears) separate acoustic energy into frequency bands and convert the band energy into neural firings. The auditory cortex receives the neural responses and abstracts an auditory scene.

  • Auditory Scene

    Perception derives a useful representation of reality from sensory input. An auditory stream refers to a perceptual unit associated with a single happening (A.S. Bregman, 1990).

    [Flow diagram: Acoustic-to-Neural Conversion → Organize into Auditory Streams → Representation of Reality]

  • Computer Interpretation

    In order for a computer algorithm to interpret a scene:

    Acoustic signals must be converted to numbers using meaningful models. Sets of numbers (or patterns) are mapped into events (perception). Events are analyzed together with other events, in relation to the goal of the algorithm, and mapped into a situation (cognition, or deriving meaning). The situation is mapped into an action/response.

    Numbers extracted from the acoustic signal for the purpose of classification (determining the event) are referred to as features.

    Time-based features are extracted from signal transforms such as the envelope and correlations.

    Frequency-based features are extracted from signal transforms such as the spectrum (cepstrum) and the power spectral density.
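    As a concrete illustration (not from the original slides), the sketch below computes a few time-based and frequency-based features of the kinds listed above. It assumes NumPy/SciPy and a mono signal x sampled at fs Hz; the function name extract_features and the particular features are made up for this example.

    import numpy as np
    from scipy.signal import hilbert, welch

    def extract_features(x, fs):
        """Return a small vector of time- and frequency-based features."""
        # Time-based: envelope statistics via the analytic signal
        env = np.abs(hilbert(x))
        env_mean, env_std = env.mean(), env.std()

        # Time-based: normalized autocorrelation at a 1 ms lag
        lag = int(1e-3 * fs)
        r = np.correlate(x, x, mode="full")[len(x) - 1:]
        acorr_1ms = r[lag] / r[0]

        # Frequency-based: power spectral density and its spectral centroid
        f, Pxx = welch(x, fs=fs, nperseg=1024)
        centroid = np.sum(f * Pxx) / np.sum(Pxx)

        # Frequency-based: real cepstrum (inverse FFT of the log magnitude spectrum)
        spec = np.abs(np.fft.rfft(x)) + 1e-12
        cep = np.fft.irfft(np.log(spec))
        cep_peak = np.max(np.abs(cep[1:200]))    # crude periodicity cue

        return np.array([env_mean, env_std, acorr_1ms, centroid, cep_peak])

    # Example usage: features of 0.5 s of noise at 8 kHz
    fs = 8000
    features = extract_features(np.random.randn(fs // 2), fs)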

  • Feature Selection Example

    Consider the problem of discriminating between the spoken words yes and no based on 2 features:

    1. The estimate of the first formant frequency, g1 (resonance of the spectral envelope).
    2. The ratio in dB of the amplitude of the second formant frequency to that of the third formant frequency, g2.

    A fictitious experiment was performed in which these 2 features were computed for 25 recordings of people saying each of these words. The features were plotted for each class in order to develop an algorithm that classifies these samples correctly.

  • Feature Plot

    Define a feature vector

    $G = \begin{bmatrix} g_1 \\ g_2 \end{bmatrix}$

    Plot G with green o's given a yes was spoken, and with red x's given a no was spoken.
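    A minimal plotting sketch for this example (the data set is fictitious, so random numbers with made-up formant statistics stand in for the measured features g1 and g2; NumPy and Matplotlib are assumed):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    # 25 samples per word of the feature vector [g1 (Hz), g2 (dB)]
    G_yes = rng.normal([700.0, 3.0], [80.0, 1.5], size=(25, 2))
    G_no  = rng.normal([500.0, 6.0], [80.0, 1.5], size=(25, 2))

    plt.plot(G_yes[:, 0], G_yes[:, 1], "go", label="yes")
    plt.plot(G_no[:, 0],  G_no[:, 1],  "rx", label="no")
    plt.xlabel("g1: first formant frequency (Hz)")
    plt.ylabel("g2: second-to-third formant amplitude ratio (dB)")
    plt.legend()
    plt.show()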

  • Minimum Distance Approach

    Create representative (template) vectors for the yes and no features:

    $\mu_{yes} = \frac{1}{25} \sum_{n=1}^{25} G(n \mid yes), \qquad \mu_{no} = \frac{1}{25} \sum_{n=1}^{25} G(n \mid no)$

    For a new sample with estimated features G, use the decision rule: decide yes if

    $\| G - \mu_{no} \| \ge \| G - \mu_{yes} \|$

    and no otherwise. This results in 3 incorrect decisions.
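    A minimal sketch of this nearest-mean rule (the 25-sample arrays are fictitious stand-ins generated the same way as in the plotting sketch above):

    import numpy as np

    rng = np.random.default_rng(0)
    G_yes = rng.normal([700.0, 3.0], [80.0, 1.5], size=(25, 2))   # fictitious yes features
    G_no  = rng.normal([500.0, 6.0], [80.0, 1.5], size=(25, 2))   # fictitious no features

    mu_yes = G_yes.mean(axis=0)     # (1/25) * sum of the yes feature vectors
    mu_no  = G_no.mean(axis=0)      # (1/25) * sum of the no feature vectors

    def decide(G):
        """Decide yes when ||G - mu_no|| >= ||G - mu_yes||, otherwise no."""
        return "yes" if np.linalg.norm(G - mu_no) >= np.linalg.norm(G - mu_yes) else "no"

    decisions = [decide(g) for g in np.vstack([G_yes, G_no])]
    n_errors = decisions[:25].count("no") + decisions[25:].count("yes")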

  • Normalization With STD

    The frequency feature (g1) had larger values than the amplitude ratio (g2), and therefore had more influence in the decision process.

    Remove scale differences by normalizing each feature by its standard deviation over all classes:

    $\sigma_i^2 = \frac{1}{25} \left( \sum_{n=1}^{25} \left( g_i(n \mid yes) - \mu_{i|yes} \right)^2 + \sum_{n=1}^{25} \left( g_i(n \mid no) - \mu_{i|no} \right)^2 \right)$

    Now 4 errors result (why would it change?).
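    A sketch of the same rule after scale normalization, using the pooled per-feature deviations defined above (data are the fictitious stand-ins from the previous sketches):

    import numpy as np

    rng = np.random.default_rng(0)
    G_yes = rng.normal([700.0, 3.0], [80.0, 1.5], size=(25, 2))
    G_no  = rng.normal([500.0, 6.0], [80.0, 1.5], size=(25, 2))
    mu_yes, mu_no = G_yes.mean(axis=0), G_no.mean(axis=0)

    # Per-feature sigma_i from the pooled deviations about each class mean (as on the slide)
    sigma = np.sqrt((((G_yes - mu_yes) ** 2).sum(axis=0)
                     + ((G_no - mu_no) ** 2).sum(axis=0)) / 25.0)

    def decide_normalized(g):
        """Nearest-mean rule applied to the scaled features g_i / sigma_i."""
        d_yes = np.linalg.norm((g - mu_yes) / sigma)
        d_no  = np.linalg.norm((g - mu_no) / sigma)
        return "yes" if d_no >= d_yes else "no"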

  • Minimum Distance Classifier

    Consider a feature vector x with the potential to belong to one of K exclusive classes. The classification decision is based on the distance of the feature vector to one of the template vectors representing each of the K classes. For a given observation x and set of template vectors z_k, one per class, the decision rule is to decide the class k such that:

    $D = \arg\min_k \left[ (\mathbf{x} - \mathbf{z}_k)^T (\mathbf{x} - \mathbf{z}_k) \right]$

  • Minimum Distance Classifier

    If some features need to be weighted more than others in the decision process, or correlation between the features is to be exploited, the distance along each feature can be weighted, resulting in the weighted minimum distance classifier:

    $D = \arg\min_k \left[ (\mathbf{x} - \mathbf{z}_k)^T \mathbf{W} (\mathbf{x} - \mathbf{z}_k) \right]$

    where W is a square matrix of weights with dimension equal to the length of x. If W is a diagonal matrix, it simply scales each of the features in the decision process; off-diagonal terms scale the correlation between features. If W is the inverse of the covariance matrix of the features in x,

    $\mathbf{W} = \left[ \mathrm{E}_k\!\left[ \mathrm{E}\!\left[ (\mathbf{x} - \mathbf{z}_k)(\mathbf{x} - \mathbf{z}_k)^T \mid k \right] \right] \right]^{-1} = \left[ \frac{1}{K} \sum_{k=1}^{K} \mathrm{E}\!\left[ (\mathbf{x} - \mathbf{z}_k)(\mathbf{x} - \mathbf{z}_k)^T \mid k \right] \right]^{-1}$

    and z_k is the mean feature vector for each class, then the above distances are referred to as Mahalanobis distances.
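    A minimal sketch of the weighted (Mahalanobis) rule, taking W as the inverse of the average within-class covariance and z_k as each class mean (data are the fictitious two-class arrays used in the earlier sketches):

    import numpy as np

    rng = np.random.default_rng(0)
    G_yes = rng.normal([700.0, 3.0], [80.0, 1.5], size=(25, 2))
    G_no  = rng.normal([500.0, 6.0], [80.0, 1.5], size=(25, 2))

    templates = {"yes": G_yes.mean(axis=0), "no": G_no.mean(axis=0)}      # z_k
    W = np.linalg.inv(0.5 * (np.cov(G_yes.T) + np.cov(G_no.T)))           # inverse of average class covariance

    def classify(x):
        """Pick the class whose template minimizes (x - z_k)^T W (x - z_k)."""
        d = {k: (x - z) @ W @ (x - z) for k, z in templates.items()}
        return min(d, key=d.get)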

  • Correlation Receiver

    It can be shown that selecting the class based on the minimum distance between the observation vector and the template vector is equivalent to finding the maximum correlation between the observation vector and the template:

    $D = \arg\min_k \left[ (\mathbf{x} - \mathbf{z}_k)^T (\mathbf{x} - \mathbf{z}_k) \right] = \arg\max_k \left[ \mathbf{x}^T \mathbf{z}_k \right] = C$

    or, for the weighted case,

    $D = \arg\min_k \left[ (\mathbf{x} - \mathbf{z}_k)^T \mathbf{W} (\mathbf{x} - \mathbf{z}_k) \right] = \arg\max_k \left[ \mathbf{x}^T \mathbf{W} \mathbf{z}_k \right] = C$

    where the template vectors have been normalized such that

    $\mathbf{z}_k^T \mathbf{z}_k = P \;(\text{a constant}) \quad \text{for all } k$
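    The equivalence follows from a one-line expansion (standard algebra, not spelled out on the original slide):

    $(\mathbf{x} - \mathbf{z}_k)^T (\mathbf{x} - \mathbf{z}_k) = \mathbf{x}^T\mathbf{x} - 2\,\mathbf{x}^T\mathbf{z}_k + \mathbf{z}_k^T\mathbf{z}_k$

    The term $\mathbf{x}^T\mathbf{x}$ does not depend on k, and $\mathbf{z}_k^T\mathbf{z}_k = P$ is the same for every class, so minimizing the distance over k is the same as maximizing the correlation $\mathbf{x}^T\mathbf{z}_k$ (and likewise for the weighted case when $\mathbf{z}_k^T\mathbf{W}\mathbf{z}_k$ is held constant).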

  • Definitions

    A random variable (RV) is a function that maps events (sets) into a discrete set of real numbers for a discrete RV, or a continuous set of real numbers for a continuous RV.

    A random process (RP) is a series of RVs indexed by a countable set for a discrete RP, or by a non-countable set for a continuous RP.

  • Definitions: PDF First Order

    The likelihood of RV values is described through the probability density function (pdf):

    $P\left[ a < X \le b \right] = \int_a^b p_X(x)\, dx$

  • Definitions: Joint PDF

    The probabilities describing more than one RV are described by a joint pdf:

    $P\left[ (a_1 < X \le b_1) \cap (a_2 < Y \le b_2) \right] = \int_{a_2}^{b_2} \int_{a_1}^{b_1} p_{XY}(x, y)\, dx\, dy$

  • Definitions: Conditional PDF

    The probabilities describing an RV, given that another event has already occurred, are described by a conditional pdf:

    $p_{X|Y}(x \mid y) = \frac{p_{XY}(x, y)}{p_Y(y)}$

    Closely related to this is Bayes' rule:

    $p_{X|Y}(x \mid y)\, p_Y(y) = p_{XY}(x, y) = p_{Y|X}(y \mid x)\, p_X(x)$

    $p_{Y|X}(y \mid x) = \frac{p_{X|Y}(x \mid y)\, p_Y(y)}{p_X(x)}$

  • Examples: Gaussian PDF

    A first-order Gaussian RV pdf (scalar x) with mean μ and standard deviation σ is given by:

    $p_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)$

    A higher-order joint Gaussian pdf (column vector x) with mean vector m and covariance matrix Σ is given by:

    $p_X(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} |\boldsymbol{\Sigma}|^{1/2}} \exp\!\left( -\frac{1}{2} (\mathbf{x} - \mathbf{m})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \mathbf{m}) \right)$

    where

    $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T, \qquad \mathbf{m} = \mathrm{E}[\mathbf{x}], \qquad \boldsymbol{\Sigma} = \mathrm{E}\!\left[ (\mathbf{x} - \mathbf{m})(\mathbf{x} - \mathbf{m})^T \right]$
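    A small numerical sketch of the joint Gaussian pdf above, evaluated both directly from the formula and with scipy.stats.multivariate_normal as a cross-check (the mean, covariance, and test point are illustrative):

    import numpy as np
    from scipy.stats import multivariate_normal

    m = np.array([1.0, -1.0])                      # mean vector (illustrative)
    S = np.array([[1.0, 0.3], [0.3, 0.5]])         # covariance matrix (illustrative)
    x = np.array([0.5, -0.5])

    n = len(m)
    d = x - m
    p_formula = np.exp(-0.5 * d @ np.linalg.inv(S) @ d) / np.sqrt((2 * np.pi) ** n * np.linalg.det(S))
    p_scipy = multivariate_normal(mean=m, cov=S).pdf(x)
    # p_formula and p_scipy agree to within floating-point error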

  • Example: Uncorrelated

    Prove that for an Nth-order sequence of uncorrelated, zero-mean Gaussian RVs the joint pdf can be written as:

    $p_X(\mathbf{x}) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left( -\frac{x_i^2}{2\sigma_i^2} \right)$

    Note that for Gaussian RVs, uncorrelated implies statistical independence. Assume the variances are equal for all elements: what would the autocorrelation of this sequence look like? How would the above analysis change if the RVs were not zero mean?
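    A quick numerical check of this factorization (illustrative standard deviations and test point; SciPy assumed): for zero-mean components with a diagonal covariance, the joint pdf equals the product of the first-order pdfs.

    import numpy as np
    from scipy.stats import multivariate_normal, norm

    sigmas = np.array([0.5, 1.0, 2.0])             # per-component standard deviations (illustrative)
    x = np.array([0.2, -1.0, 1.5])

    joint = multivariate_normal(mean=np.zeros(3), cov=np.diag(sigmas ** 2)).pdf(x)
    product = np.prod(norm(loc=0.0, scale=sigmas).pdf(x))
    # joint == product up to floating-point error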

  • Class PDFs

    When features are modeled as RVs, their pdfs can be used to derive distance measures for the classifier, and an optimal decision rule that minimizes classification error can be designed. Consider K classes individually denoted by ω_k. Feature values associated with each class can be described by:

    a posteriori probability (likelihood of the class after the observation/data): $p(\omega_k \mid \mathbf{x})$

    a priori probability (likelihood of the class before the observation/data): $p(\omega_k)$

    Likelihood function (likelihood of the observation/data given a class): $p_x(\mathbf{x} \mid \omega_k)$

  • Class PDFs

    The likelihood function can be estimated through empirical studies. Consider 3 speakers whose 3rd formant frequencies are distributed as $p_x(x \mid \omega_1)$, $p_x(x \mid \omega_2)$, and $p_x(x \mid \omega_3)$.

    [Figure: the three class-conditional pdfs plotted over the 3rd formant frequency, with decision thresholds marked between them.]

    Classifier probabilities can be obtained from Bayes' rule:

    $p(\omega_k \mid x) = \frac{p_x(x \mid \omega_k)\, p(\omega_k)}{p_x(x)}$
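    A minimal sketch of this Bayes'-rule step for the 3-speaker example (Gaussian likelihoods with made-up formant statistics and equal priors; none of the numbers come from the slides):

    import numpy as np
    from scipy.stats import norm

    priors = np.array([1/3, 1/3, 1/3])                          # p(omega_k)
    means  = np.array([2300.0, 2500.0, 2700.0])                 # 3rd formant means (Hz), illustrative
    stds   = np.array([100.0, 120.0, 90.0])

    x = 2450.0                                                  # observed 3rd formant frequency (Hz)
    likelihoods = norm(loc=means, scale=stds).pdf(x)            # p_x(x | omega_k)
    posteriors = likelihoods * priors / np.sum(likelihoods * priors)
    best_class = np.argmax(posteriors)                          # class with the largest p(omega_k | x)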

  • Maximum a posteriori Decision Rule

    For K classes and an observed feature vector x, the maximum a posteriori (MAP) decision rule states:

    Decide $\omega_i$ if $\; p(\omega_i \mid \mathbf{x}) > p(\omega_j \mid \mathbf{x}) \quad \forall\, j \ne i$

    or, by applying Bayes' rule:

    Decide $\omega_i$ if $\; p_x(\mathbf{x} \mid \omega_i)\, p(\omega_i) > p_x(\mathbf{x} \mid \omega_j)\, p(\omega_j) \quad \forall\, j \ne i$

    For the binary case this reduces to the (log-)likelihood ratio test:

    $\frac{p_x(\mathbf{x} \mid \omega_i)}{p_x(\mathbf{x} \mid \omega_j)} \;\underset{\omega_j}{\overset{\omega_i}{\gtrless}}\; \frac{p(\omega_j)}{p(\omega_i)} \qquad\Longleftrightarrow\qquad \ln p_x(\mathbf{x} \mid \omega_i) - \ln p_x(\mathbf{x} \mid \omega_j) \;\underset{\omega_j}{\overset{\omega_i}{\gtrless}}\; \ln\!\left( \frac{p(\omega_j)}{p(\omega_i)} \right)$
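    A minimal sketch of the binary rule as a log-likelihood ratio test (scalar Gaussian likelihoods; the priors, means, and variances are made up for illustration):

    import numpy as np
    from scipy.stats import norm

    p1, p2 = 0.7, 0.3                                 # priors p(omega_1), p(omega_2)
    lik1, lik2 = norm(0.0, 1.0), norm(2.0, 1.0)       # p_x(x | omega_1), p_x(x | omega_2)

    def map_decide(x):
        log_lr = lik1.logpdf(x) - lik2.logpdf(x)      # ln p_x(x|omega_1) - ln p_x(x|omega_2)
        return 1 if log_lr > np.log(p2 / p1) else 2   # compare against ln(p(omega_2)/p(omega_1))

    decisions = [map_decide(x) for x in (-1.0, 1.0, 3.0)]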

  • Example

    Consider a 2-class problem with Gaussian distributed feature vectors:

    $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T, \qquad \mathbf{m}_1 = \mathrm{E}[\mathbf{x} \mid \omega_1], \qquad \mathbf{m}_2 = \mathrm{E}[\mathbf{x} \mid \omega_2]$

    $\boldsymbol{\Sigma}_1 = \mathrm{E}\!\left[ (\mathbf{x} - \mathbf{m}_1)(\mathbf{x} - \mathbf{m}_1)^T \mid \omega_1 \right], \qquad \boldsymbol{\Sigma}_2 = \mathrm{E}\!\left[ (\mathbf{x} - \mathbf{m}_2)(\mathbf{x} - \mathbf{m}_2)^T \mid \omega_2 \right]$

    Derive the log-likelihood ratio and describe how the classifier uses distance information to discriminate between the classes.
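    A sketch of the result (standard algebra for Gaussian densities, not worked out on the original slide): substituting the joint Gaussian pdf into the log-likelihood ratio gives

    $\ln \frac{p_x(\mathbf{x} \mid \omega_1)}{p_x(\mathbf{x} \mid \omega_2)} = \frac{1}{2}(\mathbf{x} - \mathbf{m}_2)^T \boldsymbol{\Sigma}_2^{-1} (\mathbf{x} - \mathbf{m}_2) - \frac{1}{2}(\mathbf{x} - \mathbf{m}_1)^T \boldsymbol{\Sigma}_1^{-1} (\mathbf{x} - \mathbf{m}_1) + \frac{1}{2} \ln \frac{|\boldsymbol{\Sigma}_2|}{|\boldsymbol{\Sigma}_1|}$

    so the decision statistic is the difference of the Mahalanobis distances from x to the two class means (offset by a constant set by the covariance determinants), compared against the threshold $\ln\left( p(\omega_2)/p(\omega_1) \right)$.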

  • Homework

    Consider 2 features for use in a binary classification problem. The features are Gaussian distributed and form the feature vector x = [x1, x2]^T. Derive the log-likelihood ratio and the corresponding classifier for the four cases listed below:

    1) $p(\omega_1) = p(\omega_2) = 0.5, \quad \mathbf{m}_1 = [1, 1]^T, \quad \mathbf{m}_2 = [-1, -1]^T, \quad \boldsymbol{\Sigma}_1 = \begin{bmatrix} 0.6 & 0 \\ 0 & 1.2 \end{bmatrix}, \quad \boldsymbol{\Sigma}_2 = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.2 \end{bmatrix}$

    2) $p(\omega_1) = p(\omega_2) = 0.5, \quad \mathbf{m}_1 = [1, 1]^T, \quad \mathbf{m}_2 = [-1, -1]^T, \quad \boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 = \begin{bmatrix} 0.5 & -0.2 \\ -0.2 & 0.5 \end{bmatrix}$

    3) $p(\omega_1) = p(\omega_2) = 0.5, \quad \mathbf{m}_1 = \mathbf{m}_2 = [0, 0]^T, \quad \boldsymbol{\Sigma}_1 = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \quad \boldsymbol{\Sigma}_2 = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}$

    4) $p(\omega_1) = 0.2, \quad p(\omega_2) = 0.8, \quad \mathbf{m}_1 = [1, 1]^T, \quad \mathbf{m}_2 = [-1, -1]^T, \quad \boldsymbol{\Sigma}_1 = \begin{bmatrix} 0.6 & 0 \\ 0 & 1.2 \end{bmatrix}, \quad \boldsymbol{\Sigma}_2 = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.2 \end{bmatrix}$

    Comment on how each classifier computes "distance" and uses it in the classification process.

  • Classification Error

    Classification error is the probability that the decision statistic falls on the wrong side of a threshold, weighted by the probability of the corresponding class occurring.

    [Figure: decision-statistic pdfs $p_\lambda(\lambda \mid \omega_1)$, $p_\lambda(\lambda \mid \omega_2)$, and $p_\lambda(\lambda \mid \omega_3)$ plotted against $\lambda$, with decision thresholds $T_1$ and $T_2$.]

    $p_e = p(\omega_1) \int_{T_1}^{\infty} p_\lambda(\lambda \mid \omega_1)\, d\lambda + p(\omega_2) \left( \int_{-\infty}^{T_1} p_\lambda(\lambda \mid \omega_2)\, d\lambda + \int_{T_2}^{\infty} p_\lambda(\lambda \mid \omega_2)\, d\lambda \right) + p(\omega_3) \int_{-\infty}^{T_2} p_\lambda(\lambda \mid \omega_3)\, d\lambda$

  • Homework

    For the previous example, write an expression for the probability of a correct classification by changing the integrals and their limits (i.e., do not simply write $p_c = 1 - p_e$).

  • Approximating a Bayes Classifier

    If the density functions are not known:

    Determine template vectors that minimize distances to the feature vectors of each class in the training data (vector quantization); a brief sketch follows this list.

    Assume form of density function and estimate parameters (directly or iteratively) from the data (parametric or expectation maximization).

    Learn posterior probabilities directly from training data and interpolate on test data (neural networks).
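    A brief sketch of the vector-quantization option (SciPy's k-means from scipy.cluster.vq is used here; the function names and number of templates are illustrative):

    import numpy as np
    from scipy.cluster.vq import kmeans

    def train_templates(features_by_class, n_templates=4):
        """features_by_class: dict mapping class label -> (N_k, d) array of training features."""
        return {label: kmeans(feats.astype(float), n_templates)[0]    # codebook of template vectors
                for label, feats in features_by_class.items()}

    def classify(x, templates):
        """Return the class whose nearest template vector is closest to x."""
        dists = {label: np.min(np.linalg.norm(codebook - x, axis=1))
                 for label, codebook in templates.items()}
        return min(dists, key=dists.get)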
