
CHAPTER 4

Adaptive Tapped-delay-line Filters Using the Least Squares


In this presentation, the method of least squares will be used to derive a recursive algorithm for automatically adjusting the coefficients of a tapped-delay-line filter, without invoking assumptions on the statistics of the input signals. This procedure, called the recursive least-squares (RLS) algorithm, is capable of realizing a rate of convergence that is much faster than that of the LMS algorithm, because the RLS algorithm utilizes all the information contained in the input data from the start of the adaptation up to the present.


The Deterministic Normal Equations

The requirement is to design the filter in such a way that it minimizes the residual sum of squares of the error.

    Figure 4.1 Tapped-delay-line filter

$$J(n) = \sum_{i=1}^{n} e^2(i) \qquad (4.1)$$


    The filter output is the convolution sum

$$y(i) = \sum_{k=1}^{M} h(n, k)\, u(i-k+1), \qquad i = 1, 2, \ldots, n \qquad (4.2)$$

Substituting the error signal $e(i) = d(i) - y(i)$ into Eq. (4.1), the residual sum of squares becomes

$$J(n) = \sum_{i=1}^{n} d^2(i) - 2\sum_{k=1}^{M} h(n, k) \sum_{i=1}^{n} d(i)\, u(i-k+1) + \sum_{k=1}^{M}\sum_{m=1}^{M} h(n, k)\, h(n, m) \sum_{i=1}^{n} u(i-k+1)\, u(i-m+1) \qquad (4.3)$$

where $n \ge M$.


    Introduce the following definitions:

1) We define the deterministic correlation between the input signals at taps k and m, summed over the data length n, as

$$\varphi(n; k, m) = \sum_{i=1}^{n} u(i-k)\, u(i-m), \qquad k, m = 0, 1, \ldots, M-1 \qquad (4.4)$$

2) We define the deterministic correlation between the desired response and the input signal at tap k, summed over the data length n, as

$$\theta(n; k) = \sum_{i=1}^{n} d(i)\, u(i-k), \qquad k = 0, 1, \ldots, M-1 \qquad (4.5)$$

    3) We define the energy of the desired response as

$$E_d(n) = \sum_{i=1}^{n} d^2(i) \qquad (4.6)$$
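Translated into code, these definitions amount to forming a tap-input matrix and taking a few inner products. The following is a minimal NumPy sketch (the function name, the zero-padding of samples before the start of the data, and the 0-based array indexing are illustrative assumptions, not from the text):

```python
import numpy as np

def deterministic_correlations(u, d, M):
    """Compute phi(n; k, m), theta(n; k), and E_d per Eqs. (4.4)-(4.6).

    u, d : length-n input and desired-response sequences.
    Samples before the start of the data are taken to be zero.
    """
    u = np.asarray(u, dtype=float)
    d = np.asarray(d, dtype=float)
    n = len(u)
    # Tap-input matrix: row i holds [u(i), u(i-1), ..., u(i-M+1)].
    U = np.zeros((n, M))
    for k in range(M):
        U[k:, k] = u[:n - k]
    phi = U.T @ U        # phi[k, m] = sum_i u(i-k) u(i-m)   (4.4)
    theta = U.T @ d      # theta[k]  = sum_i d(i) u(i-k)     (4.5)
    E_d = float(d @ d)   # energy of the desired response    (4.6)
    return phi, theta, E_d
```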


Setting the derivatives $\partial J(n)/\partial h(n, k)$ to zero for $k = 1, 2, \ldots, M$ yields

$$\sum_{m=1}^{M} h(n, m)\, \varphi(n; k-1, m-1) = \theta(n; k-1), \qquad k = 1, 2, \ldots, M$$

This set of M simultaneous equations constitutes the deterministic normal equations, whose solution determines the least-squares filter.

The vector form of the least-squares filter is

$$\mathbf{h}(n) = [h(n, 1), h(n, 2), \ldots, h(n, M)]^T \qquad (4.10)$$

The deterministic correlation matrix of the tap inputs is

$$\boldsymbol{\Phi}(n) = \begin{bmatrix} \varphi(n; 0, 0) & \varphi(n; 0, 1) & \cdots & \varphi(n; 0, M-1) \\ \varphi(n; 1, 0) & \varphi(n; 1, 1) & \cdots & \varphi(n; 1, M-1) \\ \vdots & \vdots & & \vdots \\ \varphi(n; M-1, 0) & \varphi(n; M-1, 1) & \cdots & \varphi(n; M-1, M-1) \end{bmatrix} \qquad (4.11)$$

and the deterministic cross-correlation vector is

$$\boldsymbol{\theta}(n) = [\theta(n; 0), \theta(n; 1), \ldots, \theta(n; M-1)]^T \qquad (4.12)$$


With these definitions, the normal equations are expressed compactly as

$$\boldsymbol{\Phi}(n)\, \mathbf{h}(n) = \boldsymbol{\theta}(n) \qquad (4.13)$$

Assuming $\boldsymbol{\Phi}(n)$ is nonsingular,

$$\mathbf{h}(n) = \boldsymbol{\Phi}^{-1}(n)\, \boldsymbol{\theta}(n) \qquad (4.14)$$

and for the resulting filter the residual sum of squares attains the minimum value

$$J_{\min}(n) = E_d(n) - \boldsymbol{\theta}^T(n)\, \mathbf{h}(n) \qquad (4.15)$$
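A batch solution of Eqs. (4.13)-(4.15) then takes a couple of lines. The sketch below reuses the hypothetical `deterministic_correlations` helper from the previous block and checks itself on a noise-free system-identification toy problem (all names and data are illustrative):

```python
def least_squares_filter(u, d, M):
    """Solve Phi(n) h(n) = theta(n) and evaluate J_min(n)."""
    phi, theta, E_d = deterministic_correlations(u, d, M)
    h = np.linalg.solve(phi, theta)   # Eq. (4.14); assumes Phi(n) nonsingular
    J_min = E_d - theta @ h           # Eq. (4.15)
    return h, J_min

# Toy check: recover a known FIR system from clean data.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
h_true = np.array([0.8, -0.4, 0.2])
d = np.convolve(u, h_true)[:500]      # d(i) = sum_k h_true[k] u(i-k)
h_ls, J = least_squares_filter(u, d, M=3)
# h_ls is close to h_true, and J is close to zero.
```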


    Properties of the Least-squares Estimate

Property 1. The least-squares estimate of the coefficient vector approaches the optimum Wiener solution as the data length n approaches infinity, if the filter input and the desired response are jointly stationary ergodic processes.

Property 2. The least-squares estimate of the coefficient vector is unbiased if the error signal e(i) has zero mean for all i.

Property 3. The covariance matrix of the least-squares estimate equals $\boldsymbol{\Phi}^{-1}$, except for a scaling factor, if the error vector $\mathbf{e}_0$ has zero mean and its elements are uncorrelated.

Property 4. If the elements of the error vector $\mathbf{e}_0$ are statistically independent and Gaussian-distributed, then the least-squares estimate is the same as the maximum-likelihood estimate.


The Matrix-Inversion Lemma

Let $\mathbf{A}$ and $\mathbf{B}$ be two positive definite M-by-M matrices related by

$$\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\, \mathbf{D}^{-1} \mathbf{C}^T \qquad (4.16)$$

where $\mathbf{D}$ is another positive definite N-by-N matrix and $\mathbf{C}$ is an M-by-N matrix. According to the matrix-inversion lemma, we may express the inverse of the matrix $\mathbf{A}$ as follows:

$$\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\mathbf{C}\left(\mathbf{D} + \mathbf{C}^T \mathbf{B}\, \mathbf{C}\right)^{-1} \mathbf{C}^T \mathbf{B} \qquad (4.17)$$
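The lemma is easy to verify numerically. The following sketch checks Eqs. (4.16) and (4.17) on randomly generated matrices (the dimensions and variable names are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 2
G = np.eye(M) + 0.1 * rng.standard_normal((M, M))
B = G @ G.T                       # positive definite M-by-M
D = np.eye(N)                     # positive definite N-by-N
C = rng.standard_normal((M, N))   # arbitrary M-by-N

A = np.linalg.inv(B) + C @ np.linalg.inv(D) @ C.T               # Eq. (4.16)
A_inv = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B    # Eq. (4.17)
assert np.allclose(A @ A_inv, np.eye(M))                        # the lemma holds
```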


    The Recursive Least-Squares (RLS) Algorithm

The deterministic correlation matrix $\boldsymbol{\Phi}(n)$ is now modified term by term as

$$\varphi(n; k, m) = \sum_{i=1}^{n} u(i-k)\, u(i-m) + c\, \delta_{km} \qquad (4.18)$$

where c is a small positive constant and $\delta_{km}$ is the Kronecker delta:

$$\delta_{km} = \begin{cases} 1, & m = k \\ 0, & m \neq k \end{cases} \qquad (4.19)$$


    This expression can be reformulated as

$$\varphi(n; k, m) = \left[\sum_{i=1}^{n-1} u(i-k)\, u(i-m) + c\, \delta_{km}\right] + u(n-k)\, u(n-m) \qquad (4.20)$$

where the bracketed term equals $\varphi(n-1; k, m)$, yielding

$$\varphi(n; k, m) = \varphi(n-1; k, m) + u(n-k)\, u(n-m), \qquad k, m = 0, 1, \ldots, M-1 \qquad (4.21)$$

Note that this recursive equation is independent of the arbitrarily small constant c.


Defining the M-by-1 tap-input vector

$$\mathbf{u}(n) = [u(n), u(n-1), \ldots, u(n-M+1)]^T \qquad (4.22)$$

we can express the correlation matrix recursion as

$$\boldsymbol{\Phi}(n) = \boldsymbol{\Phi}(n-1) + \mathbf{u}(n)\, \mathbf{u}^T(n) \qquad (4.23)$$

and make the following associations in order to apply the matrix-inversion lemma:

$$\mathbf{A} = \boldsymbol{\Phi}(n), \qquad \mathbf{B}^{-1} = \boldsymbol{\Phi}(n-1), \qquad \mathbf{C} = \mathbf{u}(n), \qquad \mathbf{D} = 1$$

Thus the inverse of the correlation matrix takes the following recursive form:

$$\boldsymbol{\Phi}^{-1}(n) = \boldsymbol{\Phi}^{-1}(n-1) - \frac{\boldsymbol{\Phi}^{-1}(n-1)\, \mathbf{u}(n)\, \mathbf{u}^T(n)\, \boldsymbol{\Phi}^{-1}(n-1)}{1 + \mathbf{u}^T(n)\, \boldsymbol{\Phi}^{-1}(n-1)\, \mathbf{u}(n)} \qquad (4.24)$$


For convenience of computation, let

$$\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n) \qquad (4.25)$$

and

$$\mathbf{k}(n) = \frac{\mathbf{P}(n-1)\, \mathbf{u}(n)}{1 + \mathbf{u}^T(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n)} \qquad (4.26)$$

Then we may rewrite Eq. (4.24) as follows:

$$\mathbf{P}(n) = \mathbf{P}(n-1) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n-1) \qquad (4.27)$$

The M-by-1 vector $\mathbf{k}(n)$ is called the gain vector.

Postmultiplying both sides of Eq. (4.27) by the tap-input vector $\mathbf{u}(n)$, we get

$$\mathbf{P}(n)\, \mathbf{u}(n) = \mathbf{P}(n-1)\, \mathbf{u}(n) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n) \qquad (4.28)$$

Rearranging Eq. (4.26), we find that

$$\mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n) = \mathbf{P}(n-1)\, \mathbf{u}(n) - \mathbf{k}(n) \qquad (4.29)$$

Therefore, substituting Eq. (4.29) into Eq. (4.28) and simplifying, we get

$$\mathbf{k}(n) = \mathbf{P}(n)\, \mathbf{u}(n) \qquad (4.30)$$
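One step of Eqs. (4.26) and (4.27) can be written out directly. The sketch below uses hypothetical names, with `P_prev` standing for P(n-1) and `u_n` for u(n), and also asserts the consistency relation of Eq. (4.30):

```python
import numpy as np

def gain_and_P_update(P_prev, u_n):
    """One recursion step for the gain vector and inverse correlation matrix."""
    Pu = P_prev @ u_n
    k = Pu / (1.0 + u_n @ Pu)               # gain vector k(n), Eq. (4.26)
    P = P_prev - np.outer(k, u_n) @ P_prev  # P(n), Eq. (4.27)
    assert np.allclose(k, P @ u_n)          # consistency check, Eq. (4.30)
    return k, P
```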


Recall that the recursion for

$$\mathbf{h}(n) = \boldsymbol{\Phi}^{-1}(n)\, \boldsymbol{\theta}(n)$$

requires not only updates of $\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n)$, as given by Eq. (4.27), but also recursive updates of the deterministic cross-correlation vector $\boldsymbol{\theta}(n)$, whose elements are defined by

$$\theta(n; k) = \sum_{i=1}^{n} d(i)\, u(i-k), \qquad k = 0, 1, \ldots, M-1 \qquad (4.5)$$

which can be rewritten as

$$\theta(n; k) = d(n)\, u(n-k) + \sum_{i=1}^{n-1} d(i)\, u(i-k) = d(n)\, u(n-k) + \theta(n-1; k) \qquad (4.31)$$

yielding the vector recursion

$$\boldsymbol{\theta}(n) = \boldsymbol{\theta}(n-1) + d(n)\, \mathbf{u}(n) \qquad (4.32)$$


As a result,

$$\mathbf{h}(n) = \mathbf{P}(n)\, \boldsymbol{\theta}(n) = \mathbf{P}(n)\left[\boldsymbol{\theta}(n-1) + d(n)\, \mathbf{u}(n)\right] = \mathbf{P}(n)\, \boldsymbol{\theta}(n-1) + d(n)\, \mathbf{P}(n)\, \mathbf{u}(n) = \mathbf{P}(n)\, \boldsymbol{\theta}(n-1) + d(n)\, \mathbf{k}(n) \qquad (4.33)$$

With the suitable substitutions, we get

$$\mathbf{h}(n) = \left[\mathbf{P}(n-1) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n-1)\right]\boldsymbol{\theta}(n-1) + d(n)\, \mathbf{k}(n) = \mathbf{h}(n-1) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{h}(n-1) + d(n)\, \mathbf{k}(n) \qquad (4.34)$$

which can be expressed as

$$\mathbf{h}(n) = \mathbf{h}(n-1) + \mathbf{k}(n)\left[d(n) - \mathbf{u}^T(n)\, \mathbf{h}(n-1)\right] = \mathbf{h}(n-1) + \mathbf{k}(n)\, \alpha(n) \qquad (4.35)$$

where $\alpha(n)$ is the true estimation error, defined as

$$\alpha(n) = d(n) - \mathbf{u}^T(n)\, \mathbf{h}(n-1) \qquad (4.36)$$

Equations (4.35) and (4.36) constitute the recursive least-squares (RLS) algorithm.


Summary of the RLS Algorithm

1. Let n = 1.

2. Compute the gain vector

$$\mathbf{k}(n) = \frac{\mathbf{P}(n-1)\, \mathbf{u}(n)}{1 + \mathbf{u}^T(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n)}$$

3. Compute the true estimation error

$$\alpha(n) = d(n) - \mathbf{u}^T(n)\, \mathbf{h}(n-1)$$

4. Update the estimate of the coefficient vector:

$$\mathbf{h}(n) = \mathbf{h}(n-1) + \mathbf{k}(n)\, \alpha(n)$$

5. Update the error correlation matrix:

$$\mathbf{P}(n) = \mathbf{P}(n-1) - \mathbf{k}(n)\, \mathbf{u}^T(n)\, \mathbf{P}(n-1)$$

6. Increment n by 1 and go back to step 2.

Side result: the minimum value of the residual sum of squares obeys the recursion

$$J_{\min}(n) = J_{\min}(n-1) + \alpha(n)\, e(n) \qquad (4.37)$$

where $e(n) = d(n) - \mathbf{u}^T(n)\, \mathbf{h}(n)$ is the estimation error computed with the updated coefficient vector.
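Putting the six steps together gives a compact loop. In the sketch below, the initialization P(0) = (1/c)I reflects the constant c of Eq. (4.18); the function name, the all-zero initial coefficient vector, and the zero-padded tap-input vector are illustrative assumptions rather than part of the source:

```python
import numpy as np

def rls(u, d, M, c=0.01):
    """Recursive least squares, following steps 1-6 above.

    u, d : equal-length input and desired-response sequences.
    M    : number of tap coefficients.
    c    : small positive constant of Eq. (4.18).
    """
    h = np.zeros(M)       # coefficient vector h(0)
    P = np.eye(M) / c     # P(0) = Phi^{-1}(0), with Phi(0) = c I
    u_n = np.zeros(M)     # tap-input vector u(n), Eq. (4.22)
    J_min = 0.0
    for n in range(len(u)):
        u_n = np.concatenate(([u[n]], u_n[:-1]))  # shift in the newest sample
        Pu = P @ u_n
        k = Pu / (1.0 + u_n @ Pu)     # step 2: gain vector, Eq. (4.26)
        alpha = d[n] - u_n @ h        # step 3: true estimation error, Eq. (4.36)
        h = h + k * alpha             # step 4: coefficient update, Eq. (4.35)
        P = P - np.outer(k, u_n) @ P  # step 5: P update, Eq. (4.27)
        e = d[n] - u_n @ h            # error after the update
        J_min += alpha * e            # side result, Eq. (4.37)
    return h, J_min
```

On the system-identification toy data used earlier, `rls(u, d, M=3)` approaches the batch least-squares solution as n grows.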


Comparison of the RLS and LMS Algorithms

Figure 4.2 Multidimensional signal-flow graph: (a) RLS algorithm; (b) LMS algorithm

1. In the LMS algorithm, the correction that is applied in updating the old estimate of the coefficient vector is based on the instantaneous sample value of the tap-input vector and the error signal. In the RLS algorithm, on the other hand, the computation of this correction utilizes all the past available information.

2. In the LMS algorithm, the correction applied to the previous estimate consists of the product of three factors: the (scalar) step-size parameter $\mu$, the error signal $e(n-1)$, and the tap-input vector $\mathbf{u}(n-1)$. In the RLS algorithm, on the other hand, this correction consists of the product of two factors: the true estimation error $\alpha(n)$ and the gain vector $\mathbf{k}(n)$. The gain vector itself consists of $\boldsymbol{\Phi}^{-1}(n)$, the inverse of the deterministic correlation matrix, multiplied by the tap-input vector $\mathbf{u}(n)$. The major difference between the LMS and RLS algorithms is therefore the presence of $\boldsymbol{\Phi}^{-1}(n)$ in the correction term of the RLS algorithm, which has the effect of decorrelating the successive tap inputs, thereby making the RLS algorithm self-orthogonalizing. Because of this property, the RLS algorithm is essentially independent of the eigenvalue spread of the correlation matrix of the filter input. For contrast, the LMS correction fits in two lines, as shown below.
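The following sketch shows the LMS update; note the absence of any $\boldsymbol{\Phi}^{-1}(n)$ factor (the function name is hypothetical, and `mu` plays the role of the step-size parameter $\mu$):

```python
def lms_step(h, u_n, d_n, mu):
    """One LMS update: step size x error signal x tap-input vector."""
    e = d_n - u_n @ h        # error signal from the previous estimate
    return h + mu * e * u_n  # correction uses only the instantaneous data
```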


3. The LMS algorithm requires approximately 20M iterations to converge in mean square, where M is the number of tap coefficients contained in the tapped-delay-line filter. On the other hand, the RLS algorithm converges in mean square within less than 2M iterations. The rate of convergence of the RLS algorithm is therefore, in general, faster than that of the LMS algorithm by an order of magnitude.

4. Unlike the LMS algorithm, no approximations are made in the derivation of the RLS algorithm. Accordingly, as the number of iterations approaches infinity, the least-squares estimate of the coefficient vector approaches the optimum Wiener value, and correspondingly, the mean-square error approaches the minimum value possible. In other words, the RLS algorithm, in theory, exhibits zero misadjustment. On the other hand, the LMS algorithm always exhibits a nonzero misadjustment; however, this misadjustment may be made arbitrarily small by using a sufficiently small step-size parameter $\mu$.


5. The superior performance of the RLS algorithm compared to the LMS algorithm, however, is attained at the expense of a large increase in computational complexity. The complexity of an adaptive algorithm for real-time operation is determined by two principal factors: (1) the number of multiplications (with divisions counted as multiplications) per iteration, and (2) the precision required to perform arithmetic operations. The RLS algorithm requires a total of 3M(3 + M)/2 multiplications, which increases as the square of M, the number of filter coefficients. On the other hand, the LMS algorithm requires 2M + 1 multiplications, increasing linearly with M. For example, for M = 31 the RLS algorithm requires 1581 multiplications, whereas the LMS algorithm requires only 63.
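The multiplication counts quoted for M = 31 are easy to confirm (a throwaway sketch):

```python
M = 31
rls_mults = 3 * M * (3 + M) // 2  # 3M(3 + M)/2 = 1581, quadratic in M
lms_mults = 2 * M + 1             # 2M + 1      = 63, linear in M
print(rls_mults, lms_mults)       # -> 1581 63
```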