TRANSCRIPT

8/4/2019 Adaptive Filtering Chapter3
CHAPTER 3
Adaptive Tapped-Delay-Line Filters Using the Gradient Approach

In the case of known correlations, the solution for the optimal coefficients of the tapped-delay-line filter was

    h_0 = R^{-1} p    (1.25)

where R was the correlation matrix of the filter tap inputs and p was the cross-correlation between the input vector and a desired response.

If the filter operates in an environment where R and p are unknown, we may use all the data collected up to and including time n to compute the estimates \hat{R}(n) and \hat{p}(n) in order to solve the normal equations. When, however, the tapped-delay-line filter contains a large number of coefficients, this procedure is highly inefficient. A more efficient approach is to use an adaptive filter.
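As a quick numerical sketch of Eq. (1.25): when R and p are known, the normal equations can be solved directly. The 2-tap system and all numerical values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Hypothetical 2-tap example: unit-variance white tap inputs give R = I,
# and an assumed underlying system w0 fixes the cross-correlation p = R @ w0.
R = np.eye(2)                   # correlation matrix of the tap inputs
w0 = np.array([0.8, -0.5])      # assumed system, for illustration only
p = R @ w0                      # cross-correlation with the desired response

# Solve the normal equations R h0 = p (Eq. 1.25) without forming R^{-1}.
h0 = np.linalg.solve(R, p)
print(h0)                       # -> [ 0.8 -0.5]
```

In practice np.linalg.solve is preferred over computing R^{-1} explicitly; the cost of repeating such a direct solve at every time step for large M is exactly the inefficiency that motivates the adaptive filter.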

Figure 3.1 Block diagram of adaptive filter

Processes:
1) adaptive or training process
2) filtering or operating process (the desired response d(n) must be provided)

Let y(n) denote the output of the tapped-delay-line filter at time n, as shown by the convolution sum

    y(n) = \sum_{k=1}^{M} h(k,n) u(n-k+1)    (3.1)

The error signal is

    e(n) = d(n) - y(n)    (3.2)

The error signal e(n) is utilized by the adaptive process to generate corrections at each iteration to be applied to the tap coefficients in order to move closer to the optimum Wiener configuration.

Assuming the values of h(1,n), h(2,n), ..., h(M,n) are known, the value of the mean squared error is

    I(n) = P_d - 2 \sum_{k=1}^{M} h(k,n) p(k-1) + \sum_{k=1}^{M} \sum_{m=1}^{M} h(k,n) h(m,n) r(m-k)    (3.3)

where the quantities P_d, p(k-1) and r(m-k) are results of ensemble averaging:

    P_d = E[d^2(n)]
    p(k-1) = E[d(n) u(n-k+1)]
    r(m-k) = E[u(n-k+1) u(n-m+1)]
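In vector form, Eq. (3.3) is the quadratic I = P_d - 2 h^T p + h^T R h, which the following sketch evaluates; the values of R, w0 and P_d are illustrative assumptions.

```python
import numpy as np

def mse(h, Pd, p, R):
    """Mean squared error of Eq. (3.3) written as the quadratic form
    I = Pd - 2 h^T p + h^T R h."""
    return Pd - 2.0 * (h @ p) + h @ R @ h

# Hypothetical example: R = I (white input) and a noise-free desired response
# generated by w0, so the minimum of I is exactly zero at h = w0.
R = np.eye(2)
w0 = np.array([0.8, -0.5])
p = R @ w0
Pd = w0 @ R @ w0                       # E[d^2] for the noise-free case
print(mse(w0, Pd, p, R))               # -> 0.0 at the optimum
print(mse(np.zeros(2), Pd, p, R))      # -> Pd at h = 0
```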

The Method of Steepest Descent
The dependence of I(n) on the filter coefficients can be visualized as a bowl-shaped surface with a unique minimum. The adaptive process has the task of continually seeking the minimum point of this error performance surface. The optimization technique used is the method of steepest descent.

First we compute the M-by-1 gradient vector \nabla(n), whose kth element is

    \partial I(n) / \partial h(k,n) = -2 p(k-1) + 2 \sum_{m=1}^{M} h(m,n) r(m-k),    k = 1, 2, ..., M    (3.4)

which is obtained by differentiating both sides of Eq. (3.3) with respect to h(k,n). This expression can be simplified to obtain

    \partial I(n) / \partial h(k,n) = -2 E[e(n) u(n-k+1)],    k = 1, 2, ..., M    (3.5)

The gradient vector is then written as

    \nabla(n) = [ \partial I(n)/\partial h(1,n), \partial I(n)/\partial h(2,n), ..., \partial I(n)/\partial h(M,n) ]^T
              = [ -2 E[e(n) u(n)], -2 E[e(n) u(n-1)], ..., -2 E[e(n) u(n-M+1)] ]^T    (3.6)

which can be given in shorthand notation as

    \nabla(n) = -2 E[e(n) u(n)]    (3.7)

where

    u(n) = [ u(n), u(n-1), ..., u(n-M+1) ]^T    (3.8)

Now defining the M-by-1 coefficient vector

    h(n) = [ h(1,n), h(2,n), ..., h(M,n) ]^T    (3.9)

the update equation for the coefficient vector according to the steepest-descent algorithm is defined as

    h(n+1) = h(n) - (1/2) \mu \nabla(n)    (3.10)

where the factor 1/2 has been introduced for convenience and \mu is a positive scalar. Substituting Eq. (3.7) in Eq. (3.10), the update law becomes

    h(n+1) = h(n) + \mu E[e(n) u(n)]    (3.11)

The error signal e(n) is defined as

    e(n) = d(n) - u^T(n) h(n)    (3.12)

Substituting Eq. (3.12) into Eq. (3.11) we get

    h(n+1) = h(n) + \mu E[ u(n) ( d(n) - u^T(n) h(n) ) ]
           = h(n) + \mu ( E[u(n) d(n)] - E[u(n) u^T(n)] h(n) )    (3.13)

which can be rewritten as

    h(n+1) = h(n) + \mu ( p - R h(n) )
           = ( I - \mu R ) h(n) + \mu p    (3.14)
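Eq. (3.14) is a deterministic recursion that can be iterated directly once R and p are known; the 2-by-2 R, the assumed optimum, and the step size below are illustrative assumptions.

```python
import numpy as np

# Steepest-descent recursion of Eq. (3.14): h(n+1) = (I - mu R) h(n) + mu p.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])            # assumed correlation matrix
h0 = np.array([0.8, -0.5])            # assumed optimum, so that p = R h0
p = R @ h0
mu = 0.5                              # a stable step size for this R (see the stability analysis that follows)
h = np.zeros(2)                       # arbitrary initial condition
for _ in range(200):
    h = (np.eye(2) - mu * R) @ h + mu * p
print(h)                              # -> converges to h0 = [ 0.8 -0.5]
```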

Stability of the Steepest Descent Algorithm
The stability performance of the algorithm depends on
i) the correlation matrix R (determined by the process)
ii) the step-size parameter \mu (to be chosen by the designer)

For the stability analysis define a coefficient error vector as

    c(n) = h(n) - h_0    (3.15)

where h_0 is the solution of

    R h_0 = p    (3.16)

Subtracting h_0 from both sides of (3.14) and using the normal equation to eliminate p we get

    h(n+1) - h_0 = ( I - \mu R ) [ h(n) - h_0 ]    (3.17)

According to the definition of the coefficient error vector we have

    c(n+1) = ( I - \mu R ) c(n)    (3.18)

We can represent R in terms of its eigenvalue matrix \Lambda:

    R = Q \Lambda Q^T    (3.19)

Premultiplying (3.18) by Q^T we get

    Q^T c(n+1) = Q^T ( I - \mu R ) c(n) = Q^T c(n) - \mu Q^T R c(n)    (3.20)

Define the transformed coefficient error vector

    v(n) = Q^T c(n)    (3.21)

which implies

    v(n+1) = Q^T c(n+1)    (3.22)

Since Q Q^T = I, we can write

    Q^T R c(n) = Q^T R Q Q^T c(n) = \Lambda v(n)

Thus we obtain

    v(n+1) = ( I - \mu \Lambda ) v(n)    (3.23)

which represents a system of M uncoupled scalar-valued first-order difference equations, the kth one being

    v_k(n+1) = ( 1 - \mu \lambda_k ) v_k(n),    k = 1, 2, ..., M    (3.24)

with solution

    v_k(n) = ( 1 - \mu \lambda_k )^n v_k(0),    k = 1, 2, ..., M    (3.25)

For stability,

    -1 < 1 - \mu \lambda_k < 1,  i.e.  0 < \mu \lambda_k < 2,    k = 1, 2, ..., M    (3.26)

Therefore the steepest-descent algorithm is stable if

    0 < \mu < 2 / \lambda_{max}    (3.27)
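The bound of Eq. (3.27) can be checked numerically from the eigenvalues of R; the matrix and the trial step sizes below are illustrative assumptions.

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
lam = np.linalg.eigvalsh(R)        # eigenvalues of the symmetric matrix R
mu_max = 2.0 / lam.max()           # stability bound of Eq. (3.27): here 2/1.5

def stable(mu):
    # Every mode decays iff |1 - mu*lambda_k| < 1 for all k, Eq. (3.26).
    return bool(np.all(np.abs(1.0 - mu * lam) < 1.0))

print(mu_max)                      # -> 1.333...
print(stable(1.0), stable(1.5))    # -> True False
```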

The solution for the original coefficient vector h(n) can be reformulated as follows. Premultiply (3.21) by Q to obtain

    Q v(n) = Q Q^T c(n) = c(n)

Using (3.15), eliminate c(n) and solve for h(n) to obtain

    h(n) = h_0 + Q v(n)    (3.28)

which can be rewritten in the form

    h(n) = h_0 + \sum_{k=1}^{M} v_k(n) q_k    (3.29)

where the q_k are the normalized eigenvectors associated with the eigenvalues \lambda_k of the matrix R. Thus the behaviour of the ith coefficient is found to be

    h(i,n) = h_0(i) + \sum_{k=1}^{M} q_{ki} v_k(0) ( 1 - \mu \lambda_k )^n,    i = 1, 2, ..., M    (3.30)
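Eqs. (3.29)-(3.30) say the steepest-descent trajectory is a sum of independently decaying eigenmodes. The sketch below, with an assumed R, optimum and step size, checks the closed-form modal expression against direct iteration of Eq. (3.14).

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])           # assumed correlation matrix
h0 = np.array([0.8, -0.5])           # assumed optimum, so p = R h0
p = R @ h0
mu = 0.5
lam, Q = np.linalg.eigh(R)           # R = Q diag(lam) Q^T, columns q_k orthonormal

h_init = np.zeros(2)
v0 = Q.T @ (h_init - h0)             # transformed initial error v(0) = Q^T c(0)

n = 10
# Closed-form trajectory built from the eigenmodes, Eqs. (3.29)-(3.30):
h_modal = h0 + Q @ ((1.0 - mu * lam) ** n * v0)
# Direct iteration of h(n+1) = (I - mu R) h(n) + mu p, Eq. (3.14):
h_iter = h_init.copy()
for _ in range(n):
    h_iter = (np.eye(2) - mu * R) @ h_iter + mu * p
print(np.allclose(h_modal, h_iter))  # -> True
```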

The Mean Squared Error
At time n, the value of the mean squared error is given as

    I(n) = I_{min} + \sum_{k=1}^{M} \lambda_k v_k^2(n)    (3.31)

Substituting (3.25) in (3.31) we get

    I(n) = I_{min} + \sum_{k=1}^{M} \lambda_k ( 1 - \mu \lambda_k )^{2n} v_k^2(0)    (3.32)

When the steepest-descent algorithm is convergent, that is, when the step-size parameter \mu is chosen within the bounds 0 < \mu < 2/\lambda_{max}, then

    \lim_{n \to \infty} I(n) = I_{min}

irrespective of the initial conditions.
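Eq. (3.32) can likewise be checked against the quadratic form of Eq. (3.3); all numerical values below are illustrative assumptions.

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
h0 = np.array([0.8, -0.5])
p = R @ h0
Pd = 1.2                               # assumed E[d^2]; then I_min = Pd - p^T h0
I_min = Pd - p @ h0
mu = 0.5
lam, Q = np.linalg.eigh(R)
v0 = Q.T @ (np.zeros(2) - h0)          # v(0) for the start h(0) = 0

n = 5
# Modal expression of Eq. (3.32):
I_modal = I_min + np.sum(lam * (1.0 - mu * lam) ** (2 * n) * v0 ** 2)
# Direct evaluation I(n) = Pd - 2 h^T p + h^T R h after n steps of Eq. (3.14):
h = np.zeros(2)
for _ in range(n):
    h = (np.eye(2) - mu * R) @ h + mu * p
I_direct = Pd - 2.0 * (h @ p) + h @ R @ h
print(np.isclose(I_modal, I_direct))   # -> True
```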

The steepest-descent algorithm, although shown to converge to the optimum Wiener solution irrespective of the initial conditions, unfortunately requires exact measurements of the gradient vector at each iteration, which is not possible in reality. There is a need for an algorithm that derives estimates of the gradient vector from the limited amount of available data. One such algorithm is the so-called least-mean-square (LMS) algorithm, which uses instantaneous unbiased estimates of the gradient vector in the form

    \hat{\nabla}(n) = -2 e(n) u(n)    (3.33)

In terms of the coefficient update mechanism, the LMS algorithm is formulated as

    h(n+1) = h(n) - (1/2) \mu \hat{\nabla}(n) = h(n) + \mu e(n) u(n)    (3.34)
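The LMS recursion of Eqs. (3.12) and (3.34) can be sketched as a short system-identification run; the unknown system w0, the white input and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w0 = np.array([0.8, -0.5])                # assumed unknown system to identify
M, N, mu = len(w0), 5000, 0.05
x = rng.standard_normal(N)                # white input of unit variance
h = np.zeros(M)
for n in range(M - 1, N):
    u = x[n - M + 1 : n + 1][::-1]        # tap vector [u(n), u(n-1), ..., u(n-M+1)]
    d = w0 @ u                            # noise-free desired response
    e = d - h @ u                         # e(n) = d(n) - u^T(n) h(n), Eq. (3.12)
    h = h + mu * e * u                    # h(n+1) = h(n) + mu e(n) u(n), Eq. (3.34)
print(h)                                  # -> close to w0 = [ 0.8 -0.5]
```

Unlike the steepest-descent recursion, no expectations are needed: each update uses only the current sample pair (u(n), d(n)).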

The error signal was defined as

    e(n) = d(n) - u^T(n) h(n)    (3.12)

which leads to the following adaptation scheme.

Figure 3.2 Multidimensional signal-flow graph representation of the LMS algorithm

Convergence of the Coefficient Vector in the LMS Algorithm
The statistical analysis of the LMS algorithm is based on the following conditions:
1. Each sample vector u(n) of the input signal is assumed to be uncorrelated with all previous sample vectors u(k) for k = 0, 1, ..., n-1.
2. Each sample vector u(n) of the input signal is uncorrelated with all previous samples of the desired response d(k) for k = 0, 1, ..., n-1.

Then from Eqs. (3.34) and (3.12), we observe that the coefficient vector h(n+1) at time n+1 depends only on three inputs:
1. The previous sample vectors of the input signal, namely u(n), u(n-1), ..., u(0).
2. The previous samples of the desired response, namely d(n), d(n-1), ..., d(0).
3. The initial value h(0) of the coefficient vector.

The coefficient vector h(n+1) is therefore independent of both u(n+1) and d(n+1).

The analysis is as follows. Eliminate e(n) by substituting Eq. (3.12) into Eq. (3.34) to get

    h(n+1) = h(n) + \mu u(n) [ d(n) - u^T(n) h(n) ]
           = [ I - \mu u(n) u^T(n) ] h(n) + \mu u(n) d(n)    (3.35)

Using (3.15),

    c(n) = h(n) - h_0    (3.15)

eliminate h(n) from the right-hand side of Eq. (3.35) to get

    h(n+1) = [ I - \mu u(n) u^T(n) ] [ c(n) + h_0 ] + \mu u(n) d(n)
           = [ I - \mu u(n) u^T(n) ] c(n) + h_0 + \mu u(n) d(n) - \mu u(n) u^T(n) h_0    (3.36)

which can be rewritten as

    c(n+1) = [ I - \mu u(n) u^T(n) ] c(n) + \mu u(n) d(n) - \mu u(n) u^T(n) h_0    (3.37)

Taking expectations of both sides of (3.37) and using the independence of c(n) from u(n) we get

    E[c(n+1)] = ( I - \mu E[u(n) u^T(n)] ) E[c(n)] + \mu E[u(n) d(n)] - \mu E[u(n) u^T(n)] h_0
              = ( I - \mu R ) E[c(n)] + \mu ( p - R h_0 )    (3.38)

where

    R = E[u(n) u^T(n)]   and   p = E[u(n) d(n)]

However, since R h_0 = p, the last term on the right-hand side of (3.38) is zero, reducing it to

    E[c(n+1)] = ( I - \mu R ) E[c(n)]    (3.39)

which is in the same mathematical form as (3.18). Thus, the LMS algorithm converges in the mean provided that the step-size parameter \mu satisfies the condition

    0 < \mu < 2 / \lambda_{max}    (3.40)
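Convergence in the mean can be illustrated empirically: with noisy data a single LMS run fluctuates about h_0, but the coefficient vector averaged over independent runs settles at h_0. All numerical values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
w0 = np.array([0.8, -0.5])               # white unit-variance input => R = I, h0 = w0
M, N, mu, runs = len(w0), 500, 0.05, 200
h_final = np.zeros((runs, M))
for r in range(runs):
    x = rng.standard_normal(N)
    v = 0.5 * rng.standard_normal(N)     # measurement noise on the desired response
    h = np.zeros(M)
    for n in range(M - 1, N):
        u = x[n - M + 1 : n + 1][::-1]   # tap vector [u(n), ..., u(n-M+1)]
        d = w0 @ u + v[n]                # noisy desired response
        e = d - h @ u
        h = h + mu * e * u               # LMS update, Eq. (3.34)
    h_final[r] = h
print(h_final.mean(axis=0))              # -> close to h0 = [ 0.8 -0.5]
```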