
Page 1:

ECE 8443 – Pattern Recognition
ECE 8423 – Adaptive Signal Processing

• Objectives:
  The FIR Adaptive Filter
  The LMS Adaptive Filter
  Stability and Convergence
  Covariance Matrix Diagonalization

• Resources:
  PP: LMS Tutorial
  CNX: LMS Adaptive Filter
  DM: DSP Echo Cancellation
  WIKI: Echo Cancellation
  ISIP: Echo Cancellation

• URL: .../publications/courses/ece_8423/lectures/current/lecture_05.ppt
• MP3: .../publications/courses/ece_8423/lectures/current/lecture_05.mp3

LECTURE 05: LMS ADAPTIVE FILTERS

Page 2:


• Apply our fixed, finite-length filter to the problem of designing a continuously adaptive filter that can track changes in a signal.

• Specification of the filter involves three essential elements: the structure of the filter, the overall system configuration, and the performance criterion for adaptation.

The Basic Adaptive Filter

[Block diagram: the input x(n) drives the adaptive filter, producing the output y(n); the error e(n) = d(n) - y(n) is formed by subtracting the output from the desired signal d(n).]

• Definitions:

Data vector: $\mathbf{x}_n = [x(n),\ x(n-1),\ \ldots,\ x(n-L+1)]^t$

Filter coefficients: $\mathbf{f}_n = [f_n(0),\ f_n(1),\ \ldots,\ f_n(L-1)]^t$

Convolution: $y(n) = \sum_{i=0}^{L-1} f_n(i)\,x(n-i) = \mathbf{f}_n^t \mathbf{x}_n$

Error signal: $e(n) = d(n) - y(n)$
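To make the definitions concrete, here is a minimal NumPy sketch (function and variable names are ours, not from the lecture) that forms the data vector and evaluates the convolution sum and the error for a single sample:

```python
import numpy as np

def filter_sample(f, x, d, n):
    """Evaluate y(n) = f^t x_n and e(n) = d(n) - y(n) at sample index n."""
    L = len(f)
    xn = np.zeros(L)                              # data vector x_n, zero-padded for n < L-1
    recent = x[max(0, n - L + 1):n + 1][::-1]     # [x(n), x(n-1), ..., x(n-L+1)]
    xn[:len(recent)] = recent
    y = f @ xn                                    # convolution sum: sum_i f(i) x(n-i)
    e = d[n] - y                                  # error signal
    return y, e

# Hypothetical usage with random data:
rng = np.random.default_rng(0)
x = rng.standard_normal(100)
d = rng.standard_normal(100)
f = np.array([0.5, 0.3, -0.2])                    # a fixed length-3 filter
y5, e5 = filter_sample(f, x, d, 5)
```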

• To optimize the filter coefficients sample by sample, we need an iterative solution to the normal equations.

• Alternatively, we could compute a batch solution for every new sample, but this wastes computational resources.

Page 3:


Iterative Solutions to the Normal Equations

• Recall our objective function:

$$J = E\{e^2(n)\}$$

• This is a quadratic performance surface for which a global minimum can be found using a gradient descent approach (following the derivative from an initial guess, $\mathbf{f}_0$).

• Recall our optimal solution was:

$$\mathbf{f}^* = R^{-1}\mathbf{g}$$

• A general iterative solution is given by:

$$\mathbf{f}_{n+1} = \mathbf{f}_n - \mu_n\,\mathbf{p}_n$$

• The constant $\mu_n$ is a step-size parameter and $\mathbf{p}_n$ defines the search direction.

• A general class of iterative algorithms consists of those that iterate based on the gradient of the mean-squared error:

$$\mathbf{p}_n = D_n\,\nabla J_n, \qquad \text{where } \nabla J_n = \frac{\partial J}{\partial \mathbf{f}}\bigg|_{\mathbf{f}=\mathbf{f}_n} = \left[\frac{\partial J}{\partial f(0)},\ \frac{\partial J}{\partial f(1)},\ \ldots,\ \frac{\partial J}{\partial f(L-1)}\right]^t$$

and $D_n$ is an $(L \times L)$ weighting matrix. A large number of possible algorithms exist based on the selection of $D_n$ and $\mu_n$.
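As a quick illustration of how the choice of $D_n$ matters (our example, not from the slides): the Newton-style choice $D_n = \frac{1}{2}R^{-1}$ with $\mu_n = 1$ reaches the optimum in a single step, since

$$\mathbf{f}_1 = \mathbf{f}_0 - \tfrac{1}{2}R^{-1}\,\nabla J_0 = \mathbf{f}_0 - \tfrac{1}{2}R^{-1}\cdot 2(R\mathbf{f}_0 - \mathbf{g}) = R^{-1}\mathbf{g} = \mathbf{f}^*$$

Steepest descent (next slide) instead takes $D_n = I$, trading this one-step convergence for a much cheaper iteration.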

Page 4:


The Method of Steepest Descent

• Principle: move in a direction opposite to the gradient of the performance index, $J$, and by a distance proportional to the magnitude of that gradient:

$$D_n = I \ \text{(the identity matrix)}, \qquad \mathbf{p}_n = \nabla J_n, \qquad J = E\{e^2(n)\}$$

• Combining expressions, we can write our update equation:

$$\mathbf{f}_{n+1} = \mathbf{f}_n - \mu\,\nabla J_n$$

• If we differentiate $J$ with respect to $\mathbf{f}$:

$$\nabla J = 2(R\mathbf{f} - \mathbf{g})$$

we obtain:

$$\mathbf{f}_{n+1} = \mathbf{f}_n - 2\mu(R\mathbf{f}_n - \mathbf{g})$$

• The term steepest descent arises from the fact that the gradient is normal to lines of equal cost. For a parabolic surface, this is the direction of steepest descent.

• The following property can be proven:

$$\lim_{n\to\infty} \mathbf{f}_n = \mathbf{f}^* \quad \text{if} \quad 0 < \mu < \frac{1}{\lambda_{max}}$$

where $\lambda_{max}$ is the largest eigenvalue of the autocorrelation matrix.
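A minimal sketch of this iteration (all values hypothetical, not from the lecture), assuming $R$ and $\mathbf{g}$ are known exactly:

```python
import numpy as np

def steepest_descent(R, g, mu, n_iter=200):
    """Iterate f_{n+1} = f_n - 2*mu*(R f_n - g) from f_0 = 0."""
    f = np.zeros(len(g))
    for _ in range(n_iter):
        grad = 2.0 * (R @ f - g)      # gradient of J = E{e^2(n)}
        f = f - mu * grad
    return f

# Toy 2x2 problem; convergence requires 0 < mu < 1/lambda_max.
R = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([1.0, 0.3])
lam_max = np.linalg.eigvalsh(R).max()
f_hat = steepest_descent(R, g, mu=0.4 / lam_max)
print(np.allclose(f_hat, np.linalg.solve(R, g), atol=1e-6))  # -> True
```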

Page 5:


The LMS Adaptive Filter

• To design a filter which is responsive to changes in the signal, we need an iterative structure that depends on the local properties of the signal.

• We could use:

$$\mathbf{f}_{n+1} = \mathbf{f}_n - 2\mu(R_n \mathbf{f}_n - \mathbf{g}_n)$$

where the autocorrelation and cross-correlation are computed with respect to the nth sample. But this is not computationally efficient.

• Instead, we can estimate the gradient from the instantaneous error:

$$\hat{J}_n = e^2(n) \qquad \text{(in place of } J = E\{e^2(n)\}\text{)}$$

$$\hat{\nabla} J_n = \frac{\partial e^2(n)}{\partial \mathbf{f}_n} = 2e(n)\,\frac{\partial e(n)}{\partial \mathbf{f}_n}$$

• We can write this in terms of the error signal and the input signal:

$$e(n) = d(n) - \mathbf{f}_n^t \mathbf{x}_n \quad\Rightarrow\quad \frac{\partial e(n)}{\partial \mathbf{f}_n} = -\mathbf{x}_n$$

$$\hat{\nabla} J_n = -2e(n)\,\mathbf{x}_n$$

$$\mathbf{f}_{n+1} = \mathbf{f}_n - \mu\,\hat{\nabla} J_n = \mathbf{f}_n + 2\mu\, e(n)\,\mathbf{x}_n$$
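A compact sketch of this update in NumPy (a system-identification toy problem of our own construction; names and parameters are assumptions, not from the lecture):

```python
import numpy as np

def lms(x, d, L, mu):
    """LMS adaptive filter: f_{n+1} = f_n + 2*mu*e(n)*x_n."""
    f = np.zeros(L)                               # zero initialization (the "safe" choice)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xn = np.zeros(L)
        recent = x[max(0, n - L + 1):n + 1][::-1]
        xn[:len(recent)] = recent
        e[n] = d[n] - f @ xn                      # error e(n) = d(n) - y(n)
        f = f + 2 * mu * e[n] * xn                # LMS update
    return f, e

# Hypothetical test: d(n) is x(n) passed through an "unknown" FIR system;
# the LMS filter should converge toward that system's coefficients.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
h = np.array([0.8, -0.4, 0.2])
d = np.convolve(x, h)[:len(x)]
mu = 0.05 / (len(h) * np.mean(x**2))              # well inside the 1/(L*power) bound
f, e = lms(x, d, L=len(h), mu=mu)
print(np.round(f, 2))                             # ~ [ 0.8 -0.4  0.2]
```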

Page 6:


Observations

• This is known as the LMS adaptive filter update. The filter impulse response at each sample is equal to its previous value plus a term proportional to the product of the error and the signal vector (recall the orthogonality principle).

• This approach is widely used (e.g., modems, acoustic echo cancellers).

• A group delay can be applied to the filter for applications where there is a known fixed delay.

• It is a computationally simple update: approx. L multiplications and additions per step. This is significant since the filter sometimes has to be long (e.g., echo cancellation applications).

• The filter tracks instantaneous variations in the signal, which can be good (channel switching) and bad (nonstationary noise). Hence, control of the adaptation speed becomes critical.

• The filter is often initialized to a value of zero, which is safe (why?).

• For an excellent treatment on an echo cancellation application, including discussion of many important DSP issues, see DM: DSP Echo Cancellation.

Page 7:


Performance Considerations

• There are two major concerns about the performance of the filter: (1) stability and convergence; (2) the mean-squared error.

• One approach to analyzing the performance of the filter is to study its behavior on a stationary signal.

• Recall we can analyze the long-term solution of the normal equations in terms of the Fourier transform:

$$F(e^{j\omega}) = \frac{R_{xd}(e^{j\omega})}{R_{xx}(e^{j\omega})}$$

• Convergence of the filter can be obtained by applying the so-called independence assumption:

$$E\{\mathbf{x}_n \mathbf{x}_m^t\} = \mathbf{0}, \qquad n \neq m$$

• This condition is stronger than requiring the input to be white noise.

• We begin with our expression for the filter coefficients:

$$\mathbf{f}_{n+1} = \mathbf{f}_n + 2\mu\,e(n)\,\mathbf{x}_n = \mathbf{f}_n + 2\mu\,(d(n) - \mathbf{x}_n^t\mathbf{f}_n)\,\mathbf{x}_n = (I - 2\mu\,\mathbf{x}_n\mathbf{x}_n^t)\,\mathbf{f}_n + 2\mu\,d(n)\,\mathbf{x}_n$$

• We can take the expectation:

$$E\{\mathbf{f}_{n+1}\} = (I - 2\mu R)\,E\{\mathbf{f}_n\} + 2\mu\,\mathbf{g}$$

noting that $E\{\mathbf{x}_n\mathbf{x}_n^t\} = R$ and $E\{d(n)\,\mathbf{x}_n\} = \mathbf{g}$, and using the independence assumption to separate $\mathbf{f}_n$ from $\mathbf{x}_n$.
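This mean recursion can be sanity-checked numerically. Below is a sketch (all parameters our own) that averages many independent LMS runs and compares the Monte-Carlo mean of $\mathbf{f}_n$ to the deterministic recursion; for a tapped delay line the independence assumption holds only approximately, so the agreement is approximate:

```python
import numpy as np

rng = np.random.default_rng(1)
L, mu, N, runs = 2, 0.02, 100, 2000
h = np.array([1.0, -0.5])                 # true system; for white input f* = h

f_avg = np.zeros(L)
for _ in range(runs):                     # Monte-Carlo estimate of E{f_N}
    x = rng.standard_normal(N + L)
    f = np.zeros(L)
    for n in range(L, N + L):
        xn = x[n:n-L:-1]                  # [x(n), x(n-1)]
        e = h @ xn - f @ xn               # d(n) = h^t x_n (noise-free)
        f = f + 2 * mu * e * xn
    f_avg += f / runs

# Deterministic recursion: E{f_{n+1}} = (I - 2*mu*R) E{f_n} + 2*mu*g
R = np.eye(L)                             # white input: R = I, g = R h = h
g = h.copy()
Ef = np.zeros(L)
for _ in range(N):
    Ef = (np.eye(L) - 2 * mu * R) @ Ef + 2 * mu * g
print(f_avg, Ef)                          # the two should be close
```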

Page 8:


Convergence of the Mean

• We can define an error vector:

$$\mathbf{u}_n = E\{\mathbf{f}_n\} - \mathbf{f}^*$$

• Subtracting $\mathbf{f}^*$ from both sides of our update equation, and noting that $\mathbf{g} = R\mathbf{f}^*$:

$$\mathbf{u}_{n+1} = (I - 2\mu R)\,E\{\mathbf{f}_n\} + 2\mu R\mathbf{f}^* - \mathbf{f}^* = (I - 2\mu R)\,\mathbf{u}_n$$

• We can prove convergence in the mean if we can prove that:

$$\lim_{n\to\infty} \mathbf{u}_n = \lim_{n\to\infty}\left[E\{\mathbf{f}_n\} - \mathbf{f}^*\right] = \mathbf{0}$$

• We can decouple the update equation using eigenvalue analysis ($R = Q\Lambda Q^t$):

$$\mathbf{u}_{n+1} = (I - 2\mu\, Q\Lambda Q^t)\,\mathbf{u}_n$$

• We can define a rotated error vector:

$$\tilde{\mathbf{u}}_n = Q^t \mathbf{u}_n$$

• Multiply both sides by $Q^t$ and note $Q^tQ = I$:

$$Q^t\mathbf{u}_{n+1} = Q^t\mathbf{u}_n - 2\mu\,Q^tQ\Lambda Q^t\mathbf{u}_n \quad\Rightarrow\quad \tilde{\mathbf{u}}_{n+1} = (I - 2\mu\Lambda)\,\tilde{\mathbf{u}}_n$$

• Since $\Lambda$ is diagonal:

$$\tilde{u}_{n+1}(j) = (1 - 2\mu\lambda_j)\,\tilde{u}_n(j)$$

Page 9:


Convergence of the Mean (Cont.)

• This equation:

$$\tilde{u}_{n+1}(j) = (1 - 2\mu\lambda_j)\,\tilde{u}_n(j)$$

is a first-order difference equation, whose solution in terms of the initial condition $\tilde{u}_0(j)$ is:

$$\tilde{u}_n(j) = (1 - 2\mu\lambda_j)^n\,\tilde{u}_0(j)$$

• Consequently:

$$\lim_{n\to\infty} \tilde{u}_n(j) = 0 \quad \text{provided} \quad |1 - 2\mu\lambda_j| < 1$$

• The eigenvalues of R are all real and positive, since R is symmetric and positive definite. Therefore:

$$-1 < 1 - 2\mu\lambda_j < 1 \quad\Rightarrow\quad 0 < \mu < \frac{1}{\lambda_j}$$

• This must be satisfied for all j, and hence for the largest eigenvalue $\lambda_{max}$:

$$0 < \mu < \frac{1}{\lambda_{max}}$$

• In practice, this bound is too high and too difficult to estimate online. A stronger condition can be derived:

$$0 < \mu < \frac{1}{\sum_{i=1}^{L}\lambda_i}$$

• But:

$$\sum_{i=1}^{L}\lambda_i = tr(R) = L\,E\{x^2(n)\}$$

so:

$$0 < \mu < \frac{1}{L\,E\{x^2(n)\}} = \frac{1}{L \times (\text{power of the input})}$$

• This is a very important result because it tells us how to set the adaptation speed in terms of something we can measure.
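A quick empirical check of this bound (a sketch with assumed parameters, not from the lecture): estimate $1/(L\,E\{x^2(n)\})$ from the data and run LMS with step sizes inside and outside it:

```python
import numpy as np

def run_lms(mu, x, d, L=4):
    f = np.zeros(L)
    for k in range(L, len(x)):
        xk = x[k:k-L:-1]
        e = d[k] - f @ xk
        f = f + 2 * mu * e * xk
        if not np.isfinite(f).all():      # adaptation has blown up
            return np.inf
    return np.linalg.norm(f)

rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.5, 0.25, -0.1, 0.05])[:5000]
L = 4
bound = 1.0 / (L * np.mean(x**2))         # 1 / (L * input power)
print(run_lms(0.1 * bound, x, d))         # well inside the bound: converges
print(run_lms(5.0 * bound, x, d))         # far outside: typically diverges
```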

Page 10:


Diagonalization of a Correlation Matrix

• Consider a correlation matrix, R, with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_L$.

• Define a matrix $Q = [\mathbf{q}_1, \mathbf{q}_2, \ldots, \mathbf{q}_L]$, where the $\mathbf{q}_i$ form a set of real eigenvectors satisfying $\mathbf{q}_i^t\mathbf{q}_j = \delta(i-j)$, or $Q^tQ = I$.

• Define a spectral matrix, $\Lambda$:

$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_L \end{bmatrix}$$

• We can write the following relations:

$$R\mathbf{q}_i = \lambda_i\mathbf{q}_i, \quad i = 1, 2, \ldots, L$$

$$RQ = [\lambda_1\mathbf{q}_1,\ \lambda_2\mathbf{q}_2,\ \ldots,\ \lambda_L\mathbf{q}_L]$$

$$RQ = Q\Lambda$$

$$R = Q\Lambda Q^t$$

• The first two equations follow from the definition of an eigenvalue.

• The third equation summarizes the second in matrix form.

• The last equation follows by postmultiplying $RQ = Q\Lambda$ by $Q^t$ and noting that $QQ^t = I$.
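These relations are easy to verify numerically; a short sketch using NumPy's symmetric eigensolver (the example matrix is our own):

```python
import numpy as np

# A toy correlation matrix for an AR(1)-like input: R[i][j] = 0.9**|i-j|
L = 4
R = 0.9 ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))

lam, Q = np.linalg.eigh(R)      # columns of Q are orthonormal eigenvectors
Lam = np.diag(lam)              # spectral matrix

print(np.allclose(Q.T @ Q, np.eye(L)))   # Q^t Q = I
print(np.allclose(R @ Q, Q @ Lam))       # R Q = Q Lambda
print(np.allclose(Q @ Lam @ Q.T, R))     # R = Q Lambda Q^t
print(lam)                               # real, positive eigenvalues
```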

Page 11:


Summary

• We introduced the FIR adaptive filter and the LMS adaptive filter.

• We derived the LMS update equations.

• We also derived its stability and convergence properties for stationary signals, which led to a simple method for setting the adaptation speed in terms of a quantity we can easily measure.

• Things we did not discuss but that are treated extensively in the textbook:

  The eigenvalue disparity problem: convergence is not uniform and depends on the spread of the eigenvalues of the autocorrelation matrix.

  Time constants for convergence: the speed of convergence creates an overshoot/undershoot type of problem.

  Estimation of the time delay or group delay of the filter: often set a priori in practice based on application knowledge or constraints.

  Steady-state value of the mean-squared error: related to the power of the input signal and the adaptation speed.

  Transfer function analysis for deterministic signals: of historical significance but not of great practical value, since we are most concerned with stochastic signals.

Page 12:


Echo Cancellation for Analog Telephony