
CHAPTER 3


In the case of a known correlation expression, the solution for the optimal coefficients of the tapped-delay-line filter was

$$\mathbf{h}_0 = \mathbf{R}^{-1}\mathbf{p} \qquad (1.25)$$

where $\mathbf{R}$ was the correlation matrix of the filter tap inputs and $\mathbf{p}$ was the cross-correlation between the input vector and the desired response.

If the filter operates in an environment where $\mathbf{R}$ and $\mathbf{p}$ are unknown, we may use all the data collected up to and including time $n$ to compute the estimates $\hat{\mathbf{R}}(n)$ and $\hat{\mathbf{p}}(n)$ in order to solve the normal equations. When, however, the tapped-delay-line filter contains a large number of coefficients, this procedure is highly inefficient. A more efficient approach is to use an adaptive filter.
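As a concrete illustration, here is a minimal NumPy sketch of this block approach; the signals, filter length, and noise level are all invented for the example. It forms the time-average estimates $\hat{\mathbf{R}}(n)$ and $\hat{\mathbf{p}}(n)$ from all data up to time $n$ and solves the normal equations directly:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4                                   # number of filter taps (made up for the demo)
N = 5000                                # samples observed up to time n
u = rng.standard_normal(N)              # filter input
h_true = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(u, h_true)[:N] + 0.01 * rng.standard_normal(N)  # desired response

# Build the tap-input vectors u(n) = [u(n), u(n-1), ..., u(n-M+1)]^T
U = np.stack([u[n - M + 1:n + 1][::-1] for n in range(M - 1, N)])
D = d[M - 1:N]

# Time-average estimates of R and p from all the data collected so far
R_hat = U.T @ U / len(U)
p_hat = U.T @ D / len(U)

# Solve the normal equations R h = p directly -- an O(M^3) job that must be
# redone every time new data arrive, which is the inefficiency that motivates
# the adaptive filter
h0 = np.linalg.solve(R_hat, p_hat)
print(h0)                               # close to h_true
```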


Figure 3.1 Block diagram of adaptive filter

Processes:

1) adaptive process: automatic adjustment of the filter tap coefficients.

2) filtering or operating process (a desired response d(n) must be provided).


Let y(n) denote the output of the tapped-delay-line filter at time n, given by the convolution sum

$$y(n) = \sum_{k=1}^{M} h(k,n)\,u(n-k+1) \qquad (3.1)$$

The error signal is

$$e(n) = d(n) - y(n) \qquad (3.2)$$

The error signal e(n) is utilized by the adaptive process to generate corrections at each iteration to be applied to the tap coefficients in order to move closer to the optimum Wiener configuration.
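A small sketch of Eqs. (3.1)–(3.2); all signals and coefficient values here are invented for the demonstration, and since code arrays are 0-indexed, u(n−k+1) becomes u[n - k + 1]:

```python
import numpy as np

def filter_output(h, u, n):
    """y(n) = sum_{k=1}^{M} h(k) u(n-k+1), Eq. (3.1); valid for n >= M-1."""
    M = len(h)
    return sum(h[k - 1] * u[n - k + 1] for k in range(1, M + 1))

rng = np.random.default_rng(1)
u = rng.standard_normal(100)            # filter input (placeholder data)
d = rng.standard_normal(100)            # stand-in desired response
h = np.array([0.5, -0.25, 0.1])         # current tap coefficients h(k,n)

n = 10
y_n = filter_output(h, u, n)
e_n = d[n] - y_n                        # error signal, Eq. (3.2)
print(y_n, e_n)
```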


Assuming the values of h(1,n), h(2,n), . . . , h(M,n) are known, the value of the mean squared error is

$$\varepsilon(n) = P_d - 2\sum_{k=1}^{M} h(k,n)\,p(k-1) + \sum_{k=1}^{M}\sum_{m=1}^{M} h(k,n)\,h(m,n)\,r(m-k) \qquad (3.3)$$

where the quantities $P_d = E[d^2(n)]$, $p(k-1) = E[d(n)\,u(n-k+1)]$, and $r(m-k) = E[u(n-k+1)\,u(n-m+1)]$ are results of ensemble averaging.


The Method of Steepest Descent

The dependence of $\varepsilon(n)$ on the filter coefficients can be visualized as a bowl-shaped surface with a unique minimum. The adaptive process has the task of continually seeking the minimum point of this error-performance surface. The optimization technique used is the method of steepest descent.

First we compute the M-by-1 gradient vector $\nabla(n)$, whose kth element is

$$\frac{\partial \varepsilon(n)}{\partial h(k,n)} = -2\,p(k-1) + 2\sum_{m=1}^{M} h(m,n)\,r(m-k), \qquad k = 1,2,\ldots,M \qquad (3.4)$$

which is obtained by differentiating both sides of Eq. (3.3) with respect to h(k,n). This expression can be simplified to obtain

$$\frac{\partial \varepsilon(n)}{\partial h(k,n)} = -2\,E[e(n)\,u(n-k+1)], \qquad k = 1,2,\ldots,M \qquad (3.5)$$


The gradient vector is then written as

$$\nabla(n) = \begin{bmatrix} \partial \varepsilon(n)/\partial h(1,n) \\ \partial \varepsilon(n)/\partial h(2,n) \\ \vdots \\ \partial \varepsilon(n)/\partial h(M,n) \end{bmatrix} = \begin{bmatrix} -2\,E[e(n)\,u(n)] \\ -2\,E[e(n)\,u(n-1)] \\ \vdots \\ -2\,E[e(n)\,u(n-M+1)] \end{bmatrix} \qquad (3.6)$$

which can be given in shorthand notation as

$$\nabla(n) = -2\,E[e(n)\,\mathbf{u}(n)] \qquad (3.7)$$

where

$$\mathbf{u}(n) = [u(n),\, u(n-1),\, \ldots,\, u(n-M+1)]^T \qquad (3.8)$$


Now defining the M-by-1 coefficient vector

$$\mathbf{h}(n) = [h(1,n),\, h(2,n),\, \ldots,\, h(M,n)]^T \qquad (3.9)$$

the update equation for the coefficient vector according to the steepest-descent algorithm is defined as

$$\mathbf{h}(n+1) = \mathbf{h}(n) - \tfrac{1}{2}\,\mu\,\nabla(n) \qquad (3.10)$$

where the factor 1/2 has been introduced for convenience and $\mu$ is a positive scalar. Substituting Eq. (3.7) into Eq. (3.10), the update law becomes

$$\mathbf{h}(n+1) = \mathbf{h}(n) + \mu\,E[e(n)\,\mathbf{u}(n)] \qquad (3.11)$$


The error signal e(n) is defined as

$$e(n) = d(n) - \mathbf{u}^T(n)\,\mathbf{h}(n) \qquad (3.12)$$

Substituting Eq. (3.12) into Eq. (3.11) we get

$$\mathbf{h}(n+1) = \mathbf{h}(n) + \mu\,E\big[\mathbf{u}(n)\,\big(d(n) - \mathbf{u}^T(n)\,\mathbf{h}(n)\big)\big] = \mathbf{h}(n) + \mu\,\big(E[\mathbf{u}(n)\,d(n)] - E[\mathbf{u}(n)\,\mathbf{u}^T(n)]\,\mathbf{h}(n)\big) \qquad (3.13)$$

which can be rewritten as

$$\mathbf{h}(n+1) = \mathbf{h}(n) + \mu\,\big(\mathbf{p} - \mathbf{R}\,\mathbf{h}(n)\big) = (\mathbf{I} - \mu\mathbf{R})\,\mathbf{h}(n) + \mu\,\mathbf{p} \qquad (3.14)$$
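The recursion (3.14) is easy to try numerically. Below is a minimal sketch with a hand-picked 2-by-2 $\mathbf{R}$ and $\mathbf{p}$ (both assumed known, which is exactly the setting in which steepest descent applies); the iterate converges to the Wiener solution $\mathbf{R}^{-1}\mathbf{p}$:

```python
import numpy as np

# Small synthetic example: R and p are invented for the demonstration
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])
h_opt = np.linalg.solve(R, p)       # Wiener solution h0 = R^{-1} p

mu = 0.1                            # step size (must satisfy 0 < mu < 2/lambda_max)
h = np.zeros(2)                     # initial guess h(0)

for n in range(200):
    h = h + mu * (p - R @ h)        # Eq. (3.14): h(n+1) = h(n) + mu [p - R h(n)]

print(h, h_opt)                     # the iterate converges to the Wiener solution
```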


Stability of the Steepest Descent Algorithm

The stability performance of the algorithm depends on

i) the correlation matrix $\mathbf{R}$ (determined by the process)

ii) the step-size parameter $\mu$ (to be chosen by the designer)

For the stability analysis, define a coefficient error vector as

$$\mathbf{c}(n) = \mathbf{h}(n) - \mathbf{h}_0 \qquad (3.15)$$

where $\mathbf{h}_0$ is the solution of

$$\mathbf{R}\,\mathbf{h}_0 = \mathbf{p} \qquad (3.16)$$

Subtracting $\mathbf{h}_0$ from both sides of (3.14) and using the normal equation to eliminate $\mathbf{p}$, we get

$$\mathbf{h}(n+1) - \mathbf{h}_0 = (\mathbf{I} - \mu\mathbf{R})\,[\mathbf{h}(n) - \mathbf{h}_0] \qquad (3.17)$$


According to the definition of the coefficient error vector we have

$$\mathbf{c}(n+1) = (\mathbf{I} - \mu\mathbf{R})\,\mathbf{c}(n) \qquad (3.18)$$

We can represent $\mathbf{R}$ in terms of its eigenvalue matrix $\Lambda$:

$$\mathbf{Q}^T\,\mathbf{R}\,\mathbf{Q} = \Lambda \qquad (3.19)$$

Premultiplying (3.18) by $\mathbf{Q}^T$ we get

$$\mathbf{Q}^T\,\mathbf{c}(n+1) = \mathbf{Q}^T(\mathbf{I} - \mu\mathbf{R})\,\mathbf{c}(n) = \mathbf{Q}^T\,\mathbf{c}(n) - \mu\,\mathbf{Q}^T\,\mathbf{R}\,\mathbf{c}(n) \qquad (3.20)$$

Define the transformed coefficient error vector

$$\mathbf{v}(n) = \mathbf{Q}^T\,\mathbf{c}(n) \qquad (3.21)$$

which implies

$$\mathbf{v}(n+1) = \mathbf{Q}^T\,\mathbf{c}(n+1) \qquad (3.22)$$


Since $\mathbf{Q}\,\mathbf{Q}^T = \mathbf{I}$, we can write

$$\mathbf{Q}^T\,\mathbf{R}\,\mathbf{c}(n) = \mathbf{Q}^T\,\mathbf{R}\,\mathbf{Q}\,\mathbf{Q}^T\,\mathbf{c}(n) = \Lambda\,\mathbf{v}(n)$$

Thus we obtain

$$\mathbf{v}(n+1) = (\mathbf{I} - \mu\Lambda)\,\mathbf{v}(n) \qquad (3.23)$$

which represents a system of M uncoupled scalar-valued first-order difference equations, the kth one being

$$v_k(n+1) = (1 - \mu\lambda_k)\,v_k(n), \qquad k = 1,2,\ldots,M \qquad (3.24)$$

with solution

$$v_k(n) = (1 - \mu\lambda_k)^n\,v_k(0), \qquad k = 1,2,\ldots,M \qquad (3.25)$$

For stability,

$$-1 < 1 - \mu\lambda_k < 1, \qquad k = 1,2,\ldots,M \qquad (3.26)$$

Therefore the steepest-descent algorithm is stable if

$$0 < \mu < \frac{2}{\lambda_{\max}} \qquad (3.27)$$

where $\lambda_{\max}$ is the largest eigenvalue of $\mathbf{R}$.
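The bound (3.27) can be checked numerically. In the following sketch (with the same kind of made-up $\mathbf{R}$ and $\mathbf{p}$ as above), a step size just inside the bound converges while one just outside diverges:

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])
lam_max = np.linalg.eigvalsh(R).max()
print("stability bound 2/lambda_max =", 2 / lam_max)

def run(mu, steps=100):
    """Iterate Eq. (3.14) from h(0) = 0 for a fixed number of steps."""
    h = np.zeros(2)
    for _ in range(steps):
        h = h + mu * (p - R @ h)
    return h

print(run(0.9 * 2 / lam_max))   # inside the bound: converges to R^{-1} p
print(run(1.1 * 2 / lam_max))   # outside the bound: the iterates blow up
```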


The solution for the original coefficient vector $\mathbf{h}(n)$ can be reformulated as follows. Premultiply (3.21) by $\mathbf{Q}$ to obtain

$$\mathbf{Q}\,\mathbf{v}(n) = \mathbf{Q}\,\mathbf{Q}^T\,\mathbf{c}(n) = \mathbf{c}(n)$$

Using (3.15), eliminate $\mathbf{c}(n)$ and solve for $\mathbf{h}(n)$ to obtain

$$\mathbf{h}(n) = \mathbf{h}_0 + \mathbf{Q}\,\mathbf{v}(n) \qquad (3.28)$$

which can be rewritten in the form

$$\mathbf{h}(n) = \mathbf{h}_0 + \sum_{k=1}^{M} v_k(n)\,\mathbf{q}_k \qquad (3.29)$$

where the $\mathbf{q}_k$'s are the normalized eigenvectors associated with the eigenvalues $\lambda_k$ of the matrix $\mathbf{R}$. Thus the behaviour of the ith coefficient is found to be

$$h(i,n) = h_0(i) + \sum_{k=1}^{M} q_{ki}\,v_k(0)\,(1 - \mu\lambda_k)^n, \qquad i = 1,2,\ldots,M \qquad (3.30)$$
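As a numerical sanity check of (3.30), the sketch below (with invented $\mathbf{R}$, $\mathbf{p}$, and $\mu$) runs the recursion (3.14) directly and compares the result against the modal form built from the eigendecomposition of $\mathbf{R}$:

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])
h0 = np.linalg.solve(R, p)
mu, n, h_init = 0.3, 25, np.zeros(2)

# Direct iteration of Eq. (3.14)
h = h_init.copy()
for _ in range(n):
    h = h + mu * (p - R @ h)

# Modal form, Eq. (3.30): h(n) = h0 + sum_k q_k v_k(0) (1 - mu*lambda_k)^n
lam, Q = np.linalg.eigh(R)          # columns of Q are the eigenvectors q_k
v0 = Q.T @ (h_init - h0)            # transformed initial error, Eq. (3.21)
h_modal = h0 + Q @ (v0 * (1 - mu * lam) ** n)

print(np.allclose(h, h_modal))      # True: both forms agree
```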


The Mean Squared Error

At time n, the value of the mean squared error is given as

$$\varepsilon(n) = \varepsilon_{\min} + \sum_{k=1}^{M} \lambda_k\,v_k^2(n) \qquad (3.31)$$

Substituting (3.25) into (3.31) we get

$$\varepsilon(n) = \varepsilon_{\min} + \sum_{k=1}^{M} \lambda_k\,(1 - \mu\lambda_k)^{2n}\,v_k^2(0) \qquad (3.32)$$

When the steepest-descent algorithm is convergent, that is, when the step-size parameter $\mu$ is chosen within the bounds $0 < \mu < 2/\lambda_{\max}$, then

$$\lim_{n \to \infty} \varepsilon(n) = \varepsilon_{\min}$$

irrespective of the initial conditions.
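Equation (3.32) is the learning curve of the algorithm. A brief sketch (here $\varepsilon_{\min}$, $\mathbf{R}$, $\mathbf{p}$, and $\mu$ are placeholder values chosen for the example) evaluates it for a few values of n and shows the monotone decay toward $\varepsilon_{\min}$:

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])
h0 = np.linalg.solve(R, p)
eps_min = 1.0                        # placeholder value for the minimum MSE
mu, h_init = 0.3, np.zeros(2)

lam, Q = np.linalg.eigh(R)
v0 = Q.T @ (h_init - h0)             # v(0), Eq. (3.21)

for n in [0, 5, 10, 50]:
    # Eq. (3.32): the excess MSE is a sum of geometrically decaying modes
    eps_n = eps_min + np.sum(lam * (1 - mu * lam) ** (2 * n) * v0 ** 2)
    print(n, eps_n)                  # decreases toward eps_min
```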


The steepest-descent algorithm, although shown to converge to the optimum Wiener solution irrespective of the initial conditions, unfortunately requires exact measurements of the gradient vector at each iteration, which is not possible in reality. There is a need for an algorithm that derives estimates of the gradient vector from the limited number of available data. One such algorithm is the so-called least-mean-square (LMS) algorithm, which uses instantaneous unbiased estimates of the gradient vector in the form

$$\hat{\nabla}(n) = -2\,e(n)\,\mathbf{u}(n) \qquad (3.33)$$

In terms of the coefficient update mechanism, the LMS algorithm is formulated as

$$\mathbf{h}(n+1) = \mathbf{h}(n) - \tfrac{1}{2}\,\mu\,\hat{\nabla}(n) = \mathbf{h}(n) + \mu\,e(n)\,\mathbf{u}(n) \qquad (3.34)$$
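A minimal sketch of the LMS update (3.34) on a synthetic system-identification problem follows; the input, the true filter, and the step size are all invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 20000
u = rng.standard_normal(N)               # filter input
h_true = np.array([0.8, -0.4, 0.2, 0.1]) # unknown system to identify
d = np.convolve(u, h_true)[:N] + 0.01 * rng.standard_normal(N)

mu = 0.01                                # step size, well inside the bound
h = np.zeros(M)                          # h(0)
for n in range(M - 1, N):
    u_n = u[n - M + 1:n + 1][::-1]       # u(n) = [u(n), ..., u(n-M+1)]^T
    e_n = d[n] - u_n @ h                 # error signal, Eq. (3.12)
    h = h + mu * e_n * u_n               # LMS update, Eq. (3.34)

print(h)                                 # close to h_true
```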


The error signal was defined as

$$e(n) = d(n) - \mathbf{u}^T(n)\,\mathbf{h}(n) \qquad (3.12)$$

Figure 3.2 Multidimensional signal-flow graph representation of the LMS algorithm


Convergence of the Coefficient Vector in the LMS Algorithm

The statistical analysis of the LMS algorithm is based on the following conditions:

1. Each sample vector u(n) of the input signal is assumed to be uncorrelated with all previous sample vectors u(k) for k = 0, 1, ..., n−1.
2. Each sample vector u(n) of the input signal is uncorrelated with all previous samples of the desired response d(k) for k = 0, 1, ..., n−1.

Then from Eqs. (3.34) and (3.12), we observe that the coefficient vector h(n+1) at time n+1 depends only on three inputs:

1. The previous sample vectors of the input signal, namely, u(n), u(n−1), ..., u(0).
2. The previous samples of the desired response, namely, d(n), d(n−1), ..., d(0).
3. The initial value h(0) of the coefficient vector.

The coefficient vector h(n+1) is independent of both u(n+1) and d(n+1).


The analysis is as follows. Eliminate e(n) by substituting Eq. (3.12) into Eq. (3.34) to get

$$\mathbf{h}(n+1) = \mathbf{h}(n) + \mu\,\mathbf{u}(n)\,[d(n) - \mathbf{u}^T(n)\,\mathbf{h}(n)] = [\mathbf{I} - \mu\,\mathbf{u}(n)\,\mathbf{u}^T(n)]\,\mathbf{h}(n) + \mu\,\mathbf{u}(n)\,d(n) \qquad (3.35)$$

Using (3.15),

$$\mathbf{c}(n) = \mathbf{h}(n) - \mathbf{h}_0 \qquad (3.15)$$

to eliminate $\mathbf{h}(n)$ from the right-hand side of Eq. (3.35), we get

$$\mathbf{h}(n+1) = [\mathbf{I} - \mu\,\mathbf{u}(n)\,\mathbf{u}^T(n)]\,[\mathbf{c}(n) + \mathbf{h}_0] + \mu\,\mathbf{u}(n)\,d(n) = [\mathbf{I} - \mu\,\mathbf{u}(n)\,\mathbf{u}^T(n)]\,\mathbf{c}(n) + \mathbf{h}_0 + \mu\,\mathbf{u}(n)\,d(n) - \mu\,\mathbf{u}(n)\,\mathbf{u}^T(n)\,\mathbf{h}_0 \qquad (3.36)$$

which can be rewritten as

$$\mathbf{c}(n+1) = [\mathbf{I} - \mu\,\mathbf{u}(n)\,\mathbf{u}^T(n)]\,\mathbf{c}(n) + \mu\,[\mathbf{u}(n)\,d(n) - \mathbf{u}(n)\,\mathbf{u}^T(n)\,\mathbf{h}_0] \qquad (3.37)$$


Taking expectations of both sides of (3.37) and using the independence of $\mathbf{c}(n)$ from $\mathbf{u}(n)$, we get

$$E[\mathbf{c}(n+1)] = \big[\mathbf{I} - \mu\,E[\mathbf{u}(n)\,\mathbf{u}^T(n)]\big]\,E[\mathbf{c}(n)] + \mu\,\big(E[\mathbf{u}(n)\,d(n)] - E[\mathbf{u}(n)\,\mathbf{u}^T(n)]\,\mathbf{h}_0\big) = (\mathbf{I} - \mu\mathbf{R})\,E[\mathbf{c}(n)] + \mu\,(\mathbf{p} - \mathbf{R}\,\mathbf{h}_0) \qquad (3.38)$$

where $\mathbf{R} = E[\mathbf{u}(n)\,\mathbf{u}^T(n)]$ and $\mathbf{p} = E[\mathbf{u}(n)\,d(n)]$. However, since $\mathbf{R}\,\mathbf{h}_0 = \mathbf{p}$, the last term on the right-hand side of (3.38) is zero, reducing it to

$$E[\mathbf{c}(n+1)] = (\mathbf{I} - \mu\mathbf{R})\,E[\mathbf{c}(n)] \qquad (3.39)$$

which is in the same mathematical form as (3.18). Thus, the LMS algorithm converges in the mean provided that the step-size parameter $\mu$ satisfies the condition

$$0 < \mu < \frac{2}{\lambda_{\max}} \qquad (3.40)$$
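The convergence-in-the-mean statement can be probed by Monte Carlo: average the coefficient error c(n) over many independent runs and compare with the prediction of (3.39). In the sketch below (all parameters invented) the input is white with unit variance, so $\mathbf{R} = \mathbf{I}$ and $E[\mathbf{c}(n)] = (1 - \mu)^n\,\mathbf{c}(0)$; note that the tap vectors of a delay line overlap in time, so the independence assumptions hold only approximately and the match is approximate:

```python
import numpy as np

rng = np.random.default_rng(3)
M, steps, runs, mu = 2, 50, 2000, 0.05
h_true = np.array([0.7, -0.3])          # noiseless model, so h0 = h_true
c_sum = np.zeros(M)

for _ in range(runs):
    u = rng.standard_normal(steps + M)  # fresh white input for each run
    h = np.zeros(M)                     # h(0) = 0, so c(0) = -h_true
    for n in range(M - 1, steps + M - 1):
        u_n = u[n - M + 1:n + 1][::-1]
        d_n = h_true @ u_n              # noiseless desired response
        e_n = d_n - u_n @ h
        h = h + mu * e_n * u_n          # LMS update, Eq. (3.34)
    c_sum += h - h_true                 # coefficient error after `steps` updates

# Prediction of Eq. (3.39) with R = I (unit-variance white input)
c_pred = (1 - mu) ** steps * (np.zeros(M) - h_true)
print(c_sum / runs)                     # empirical mean of c(n)
print(c_pred)                           # theoretical prediction (approximate match)
```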