
  • Lecture 6 Adaptive Filters

    H64DSP & H64DS2


  • Lecture Outline & Learning Outcomes

To gain an understanding of the following:

Why we need adaptive filtering

The linear optimal Wiener filter and the gradient descent method

Applications of adaptive filtering

LO3: analyse and/or design FIR, IIR and adaptive digital filters

  • Why Adaptive?

Classical approach: lowpass, highpass, bandstop and bandpass filters with fixed coefficients. Such filters only work when the spectra of the signals of interest and of the noise do not overlap and do not vary.

Optimal approach: the filter coefficients can be adapted, and are usually selected as the optimum values that minimise a cost function in terms of the mean squared error (the difference between the actual output and the desired output).

When no a priori information is available, we need adaptive filters, which can adjust their own parameters based on the incoming signal.

  • Structure of an Adaptive Filter

The device is a filtering process that produces an output in response to a sequence of input data.

The adaptive process controls an adjustable set of parameters used in the filtering process.

FIR filters are popularly used as the structural basis of adaptive filters due to their inherent stability. We will focus only on adaptive filters that use FIR structures. (A minimal code sketch of this two-process structure follows below.)
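To make the two processes concrete, here is a minimal sketch of an adaptive FIR structure; the class name and the split into two methods are illustrative assumptions, not from the slides. It shows a filtering process that forms the output from a delay line, and an adaptive process that adjusts the same weight vector:

    import numpy as np

    class AdaptiveFIR:
        """Skeleton of an adaptive FIR filter: filtering + adaptation."""

        def __init__(self, num_taps):
            self.w = np.zeros(num_taps)   # adjustable parameters
            self.U = np.zeros(num_taps)   # delay line [u_n, ..., u_{n-M+1}]

        def filter(self, u_n):
            # Filtering process: shift in the new sample, output y_n = w^T U_n.
            self.U = np.roll(self.U, 1)
            self.U[0] = u_n
            return self.w @ self.U

        def adapt(self, e_n, mu):
            # Adaptive process: adjust the weights from the error e_n
            # (the LMS rule derived later in the lecture).
            self.w = self.w + 2 * mu * e_n * self.U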

  • Applications of Adaptive Filtering (1): Identification

The plant is unknown, and we use the adaptive filter to obtain a mathematical model that best fits, in some sense, the unknown plant. The input to the plant and to the filter is the same, and the filter is supposed to adapt such that its output y is close to the plant's output d.

u = input of adaptive filter and plant
y = output of adaptive filter
d = desired response = output of plant
e = d - y = estimation error

  • Applications of Adaptive Filtering (2): Inverse Modelling

    The adaptive filter is to provide an inverse model that best represents (in some sense) the unknown noisy plant. Ideally, the inverse model has a transfer function equal to the reciprocal of the transfer function of the plant, such that the combination of the two gives an ideal transmission medium.

u = output of plant and input to adaptive filter
y = output of adaptive filter
d = desired response = delayed system input
e = d - y = estimation error

• Applications of Adaptive Filtering (3): Inverse Modelling Example: Channel Equalisation

• Applications of Adaptive Filtering (4): Interference Cancellation

Example: a headset for pilots. The cockpit has a lot of noise, e.g. engine noise, which interferes with the pilots' voices. The noise in the far and near microphones may be slightly different, but the two are correlated. The filter produces an output y(n) that is the best estimate of the noise picked up by the near microphone; this is subtracted from d(n) to give the desired speech.

[Block diagram of a noise-reduction headset: the near microphone supplies d(n) = speech + noise; the far microphone supplies x(n) = noise', the input to the adaptive filter; the filter output y(n) (the noise estimate) is subtracted from d(n) to give e(n), the speech output.]

• Applications of Adaptive Filtering (5): Interference Cancellation Example: ECG Signal

Example extracted from Rahman et al., Signal Processing: An International Journal (SPIJ), Volume 3, Issue 5 (see Recommended Reading in Moodle).

• Applications of Adaptive Filtering (6): Prediction

The adaptive filter is used to estimate future values of a signal based on past values of the signal.

u = input of adaptive filter = delayed version of random signal
y = output of adaptive filter
d = desired response = random signal
e = d - y = estimation error = system output

  • Wiener Filter (1)

Proposed by Norbert Wiener in the 1940s.

In many applications, the desired signal (e.g. speech, an image) is not readily available: it may be corrupted by noise. We need to extract the desired signal by filtering. Conventional filtering using LP, HP, BS or BP filters may not be optimal, especially if the environment is time-varying.

Suppose, in the block diagram below, u(n) = d(n) + v(n), where d(n) is the desired signal and v(n) is the additive noise. The Wiener filter W(z) is designed to recover the desired signal d(n) by producing y(n), the best estimate of d(n).

[Block diagram: input u(n) → Wiener filter W(z) → output y(n)]

  • Wiener Filter (2): FIR based


  • Wiener Filter (3)

    PROBLEM STATEMENT

Suppose, for the identification problem, we have the input samples $u_n$ going into the unknown system and the output (desired signal) $d_n$, where

Input signal (stored in the delay line at time instant $n$): $U_n = [u_n, u_{n-1}, \ldots, u_{n-M+1}]^T$

Desired signal: $d_n$

and the adaptive filter has the output signal

$y_n = \sum_{m=0}^{M-1} w_m u_{n-m}$

Determine the filter weights (the weight vector): $W = [w_0, w_1, \ldots, w_{M-1}]^T$

• Wiener Filter (4)

such that the mean square error (the power of the error signal), defined below, is minimised:

$J = E[e_n^2]$

where the error signal is

$e_n = d_n - y_n$

The solution for the Wiener filter coefficients requires estimates of the autocorrelation of the input signal and the cross-correlation of the input signal and the desired signal.

• Wiener Filter (5)

SOLUTION

In vector form: $y_n = W^T U_n$

Error signal: $e_n = d_n - y_n = d_n - W^T U_n$

Cost function:

$J = E[e_n^2] = E[(d_n - W^T U_n)^2]$

$\quad = E[d_n^2] - 2E[d_n U_n^T]W + W^T E[U_n U_n^T]W$

$\quad = E[d_n^2] - 2P^T W + W^T R W$

• Wiener Filter (6)

where R is the autocorrelation matrix of the input signal and P is the cross-correlation vector of the input signal and the desired signal:

$R = E[U_n U_n^T] = E\begin{bmatrix} u_n u_n & u_n u_{n-1} & \cdots & u_n u_{n-M+1} \\ u_{n-1} u_n & u_{n-1} u_{n-1} & \cdots & u_{n-1} u_{n-M+1} \\ \vdots & \vdots & \ddots & \vdots \\ u_{n-M+1} u_n & u_{n-M+1} u_{n-1} & \cdots & u_{n-M+1} u_{n-M+1} \end{bmatrix}$

$P = E[d_n U_n] = E\begin{bmatrix} d_n u_n \\ d_n u_{n-1} \\ \vdots \\ d_n u_{n-M+1} \end{bmatrix}$

(In practice, R and P must be estimated from data; a code sketch follows below.)
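In practice, the expectations in R and P are replaced by time averages over the available data. A minimal numpy sketch; the function name estimate_r_p and the simple time-averaging scheme are illustrative assumptions, not from the slides:

    import numpy as np

    def estimate_r_p(u, d, M):
        """Estimate R = E[U_n U_n^T] and P = E[d_n U_n] by time-averaging
        outer products of the delay-line vector U_n over the data."""
        N = len(u)
        R = np.zeros((M, M))
        P = np.zeros(M)
        for n in range(M - 1, N):
            U_n = u[n - M + 1:n + 1][::-1]   # [u_n, u_{n-1}, ..., u_{n-M+1}]
            R += np.outer(U_n, U_n)
            P += d[n] * U_n
        return R / (N - M + 1), P / (N - M + 1)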

  • Wiener Filter (7): Performance Criterion

Minimisation of the cost function $J = E[e_n^2]$:

$\frac{\partial J}{\partial W} = 2RW - 2P = 0$

$RW = P \quad$ (the Wiener-Hopf equation)

$W_{opt} = R^{-1}P$, giving the minimum mean square error $J_{\min} = E[d_n^2] - P^T W_{opt}$

(A code sketch that solves the Wiener-Hopf equation numerically follows below.)
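Given estimates of R and P, the Wiener-Hopf equation can be solved without forming an explicit inverse. A sketch reusing estimate_r_p from above; the synthetic data, filter length M = 4 and random seed are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    d = rng.standard_normal(10_000)            # illustrative desired signal
    u = d + 0.3 * rng.standard_normal(10_000)  # noisy observation of d

    R, P = estimate_r_p(u, d, M=4)     # time-averaged estimates (sketch above)
    W_opt = np.linalg.solve(R, P)      # solve RW = P directly
    J_min = np.mean(d**2) - P @ W_opt  # minimum mean square error
    print(W_opt, J_min)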

  • Wiener Filter (8): Performance Criterion


• Example 1: Wiener Filter

Determine the optimal weights for the filter.

  • Points to note:

For pseudo-stationary signals, such as speech, the filter coefficients can be periodically recalculated and updated for every block of, say, N input samples (a block-processing sketch follows below).

Choice of filter length? We must make a compromise: too small and the filter may not be able to function properly; too large and the computational complexity grows.

Depending upon the relationship between the input u(n) and the desired signal d(n), the filter can be used in various applications.
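The block-wise recalculation mentioned in the first point might look like the following sketch; the block length of 512 samples and the reuse of estimate_r_p and the u, d arrays from the earlier sketches are assumptions:

    # Recompute the Wiener weights once per block of N_blk samples so the
    # filter can track slowly varying (pseudo-stationary) statistics.
    N_blk, M = 512, 4
    weights_per_block = []
    for start in range(0, len(u) - N_blk + 1, N_blk):
        R, P = estimate_r_p(u[start:start + N_blk], d[start:start + N_blk], M)
        weights_per_block.append(np.linalg.solve(R, P))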

  • Example 2: Channel Equalisation

Model of a noisy communication channel (shown on the next slide), where d(n) is the data to be transmitted.

• The output x(n) is corrupted by additive white noise v(n) of zero mean and variance 0.1, giving u(n), where:

u(n) = x(n) + v(n)
x(n) - 0.9458 x(n-1) = d(n)

Assume x(n) and v(n) are uncorrelated. The variance of the process d(n) is 0.9486.

• Answer

Determine the optimum weight for a Wiener filter and hence the minimum mean square error. (A simulation sketch of this example follows below.)
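The setup can be checked numerically. A simulation sketch under the stated model; the white Gaussian choice for d(n), the sample size, the random seed and the one-tap filter length are assumptions made for illustration (the worked answer may use a different filter length):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000
    d = np.sqrt(0.9486) * rng.standard_normal(N)  # data, variance 0.9486
    v = np.sqrt(0.1) * rng.standard_normal(N)     # additive white noise
    x = np.zeros(N)
    for n in range(1, N):
        x[n] = 0.9458 * x[n - 1] + d[n]           # x(n) - 0.9458 x(n-1) = d(n)
    u = x + v                                     # u(n) = x(n) + v(n)

    # One-tap Wiener filter: w = P / R, with J_min = E[d^2] - P w
    R = np.mean(u * u)
    P = np.mean(d * u)
    w = P / R
    print(f"w = {w:.4f}, J_min = {np.mean(d**2) - P * w:.4f}")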

• Error Performance Surface

The mean squared error J as a function of the weight w is (from the cost function above, specialised to a single weight):

$J(w) = E[d_n^2] - 2Pw + Rw^2$

a parabola (a quadratic surface in the multi-weight case) whose minimum sits at the optimal weight.

• Drawbacks of the Wiener-Hopf Solution

Requires the calculation of a matrix inverse, which can be computationally expensive.

Requires a priori information about the input signal (autocorrelation and cross-correlation).

  • Least Mean Square (LMS) Filter

Introduced by Widrow and Hoff in 1960 (see also Widrow and Stearns, Adaptive Signal Processing, 1985).

The LMS filter is an adaptive FIR filter which estimates the filter weights, or coefficients, needed to minimise the error e(n) between the output signal y(n) and the desired signal d(n).

Each filter weight is updated on a sample-by-sample basis (iteratively), based on e(n), such that the solution of the Wiener-Hopf equation is approached.

Therefore, the LMS adaptive filter can be considered an adaptive realisation of the Wiener filter, and it is used when the signal statistics are not (completely) known.

The basis of the filter is the use of instantaneous estimates of the gradient in the steepest descent method.

• Gradient Steepest Descent Method (1)

A mathematical algorithm that minimises functions. Given a function f(x) defined by a set of parameters x1, x2, ..., gradient descent starts with an initial set of parameter values and iteratively moves toward a set of parameter values that minimises the function.

This iterative minimisation works by taking steps in the negative direction of the function gradient (a minimal sketch follows below).
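A minimal sketch of the idea on a simple one-parameter function; the function f(x) = (x - 3)^2, the learning rate and the step count are illustrative assumptions:

    # Gradient descent on f(x) = (x - 3)^2, whose gradient is 2(x - 3).
    # Each step moves in the negative gradient direction toward x = 3.
    x, mu = 0.0, 0.1
    for _ in range(50):
        x = x - mu * 2 * (x - 3)
    print(x)  # close to the minimiser, x = 3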

  • Gradient Steepest Descent Method (2)

Update: $W_{n+1} = W_n - \mu \nabla_n$

(updated value of the tap-weight vector = old value of the tap-weight vector $-$ learning-rate parameter $\times$ gradient of the instantaneous error)

where $\mu$ is the learning-rate parameter (or convergence factor) and $\nabla_n$ is the gradient of the instantaneous error.

Since $J_n = e_n^2 = (d_n - W_n^T U_n)^2$:

$\nabla_n = \frac{\partial e_n^2}{\partial W_n} = -2 e_n U_n$

$W_{n+1} = W_n + 2\mu (d_n - W_n^T U_n) U_n = W_n + 2\mu e_n U_n$

where $U_n$ is the tap-input vector and $e_n$ is the error signal. (A one-step code sketch follows below.)
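The update above as a single function; a sketch, where the 2μ factor follows the derivation on this slide (some texts absorb the 2 into μ):

    import numpy as np

    def lms_update(w, U_n, d_n, mu):
        """One LMS step: filter with the current weights, form the error,
        then move opposite the instantaneous gradient -2 e_n U_n."""
        y_n = w @ U_n                          # y_n = W^T U_n
        e_n = d_n - y_n                        # estimation error
        return w + 2 * mu * e_n * U_n, y_n, e_n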

• Gradient Steepest Descent Method (3)

Important considerations are the learning-rate parameter $\mu$ and the initial values for the weights $W_o$.

The learning-rate parameter determines how big a step downhill is taken along the performance error surface. It determines the rate of convergence: how fast the algorithm produces the optimal weights (at which the gradient is zero and the cost function is at its minimum).

If $\mu$ is small, many iterations are required to arrive at the minimum cost.

If $\mu$ is large, we may step over the minimum value, and this may cause oscillations about the minimum value.

It is possible to change the step size (adaptive $\mu$) at each iteration.

For stability, $0 < \mu < 2/\lambda_{\max}$, where $\lambda_{\max}$ is the maximum eigenvalue of the autocorrelation matrix R of the input data (a sketch computing this bound follows below).

$W_o$ can be determined by guessing (based on past experience) or by calculation using a known input signal.
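The stability bound can be evaluated from an estimate of R. A sketch, reusing estimate_r_p and the u, d arrays from the earlier sketches (both assumptions):

    # Stable learning-rate range: 0 < mu < 2 / lambda_max, where lambda_max
    # is the largest eigenvalue of the autocorrelation matrix R.
    R, _ = estimate_r_p(u, d, M=4)
    lam_max = np.max(np.linalg.eigvalsh(R))  # R is symmetric: use eigvalsh
    mu_max = 2.0 / lam_max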

• The LMS Algorithm

The algorithm implements the following loop:

1. Start with an initial guess $W_o$ and select $\mu$.

2. Start of loop: get the next input sample $u_n$ and perform the filtering operation using the previous weights: $y_n = \sum_{m=0}^{M-1} w_m u_{n-m}$

3. Compare the computed output with the desired output to get the error: $e_n = d_n - y_n$

4. Update the weights: $W_{n+1} = W_n + 2\mu e_n U_n$

5. Repeat until the minimum is reached.

With each new sample, a new set of weights is calculated, to adapt to changing signals. (A full-loop code sketch follows below.)
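The whole loop as a sketch; the filter length, learning rate and zero initial weights are assumptions:

    import numpy as np

    def lms_filter(u, d, M=4, mu=0.01):
        """Run the LMS loop over input u and desired signal d; return the
        output, the error and the final weights."""
        w = np.zeros(M)                  # initial guess W_o (assumed zero)
        U = np.zeros(M)                  # delay line [u_n, ..., u_{n-M+1}]
        y = np.zeros(len(u))
        e = np.zeros(len(u))
        for n in range(len(u)):
            U = np.roll(U, 1)
            U[0] = u[n]
            y[n] = w @ U                 # filter with the previous weights
            e[n] = d[n] - y[n]           # compare with the desired output
            w = w + 2 * mu * e[n] * U    # update the weights
        return y, e, w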

  • Example 3: LMS Filter with Gradient Descent Method

Consider the noise-cancellation system shown in the figure below.

The filter has two taps, w(0) and w(1), and the following initial values for the primary signal and the reference signal: d(0)=3, d(1)=-2, d(2)=1, u(0)=3, u(1)=-1, u(2)=2. Using a learning rate of 0.1:

(i) write the equations required for the weight updates: y(n), e(n), w(0) and w(1);
(ii) hence determine the values of w(0) and w(1) for the first 2 iterations.

(A similar question, except that part (ii) requires two iterations only, appears in Example Sheet 4. Please see the solution posted in Moodle. A numerical check under stated assumptions follows below.)
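A quick numerical check of part (ii), under two stated assumptions: the initial weights are taken as w(0) = w(1) = 0 (the slide does not give them), and the update uses the 2μ convention from the derivation above (the Moodle solution may drop the factor of 2):

    import numpy as np

    d = np.array([3.0, -2.0, 1.0])   # primary signal d(0), d(1), d(2)
    u = np.array([3.0, -1.0, 2.0])   # reference signal u(0), u(1), u(2)
    w = np.zeros(2)                  # assumed initial weights w(0), w(1)
    mu = 0.1

    for n in range(2):               # first two iterations
        U = np.array([u[n], u[n - 1] if n > 0 else 0.0])  # [u(n), u(n-1)]
        y = w @ U                    # y(n) = w(0) u(n) + w(1) u(n-1)
        e = d[n] - y                 # e(n) = d(n) - y(n)
        w = w + 2 * mu * e * U       # weight update
        print(f"n={n}: y={y:.2f}, e={e:.2f}, w={w}")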

  • Advantages of LMS Filter

Low computational complexity: only requires local information.

Simple to implement: FIR-based, with a compact algorithm.

Allows real-time operation: tracks signals with changing characteristics.

Does not need the statistics of the signals (i.e. correlations): it is based only on samples of the input signal and the desired signal.

• Summary

Adaptive filters are time-variant filters which adapt their weights to minimise a cost function, most commonly the power of an error signal: the difference between the desired signal and the actual signal.

Adaptive filters are applicable for processing dynamically changing signals.

The optimal Wiener filter requires statistical properties of the signals, which are usually not available in practice.

The LMS filter algorithm only requires samples of the input signal and the desired signal. It seeks the minimum-cost optimal weights by estimating the gradient of the instantaneous error and moving in the negative direction of the gradient, towards the minimum cost. If the learning-rate parameter is carefully chosen, the algorithm will approach the optimal Wiener solution.

FIR-based LMS filters are guaranteed to have unimodal performance error surfaces: only one minimum.

In general, using other adaptive filters, cost functions may have local minima. If there are local minima, the global minimum is reached by appropriate selection of the learning-rate parameter and the initial weight values. What we have learnt serves as a basis for more practical variants of the algorithm.

• Recommended Reading

See the papers on examples of adaptive filter applications in Moodle: Recommended Reading.

See also a Simulink program (dspanc) for an example of adaptive noise cancellation. (Type dspanc at the Matlab command prompt and experiment with different values of the learning-rate parameter.) The block diagram is also given on the next slide.