ADAPTIVE FILTERS
Solutions of Computer Projects

Ali H. Sayed
Electrical Engineering Department
University of California at Los Angeles

© 2008 All rights reserved

Readers are welcome to bring to the attention of the author at [email protected] any typos or suggestions for improvements. The author is thankful for any feedback.

No part of this material can be reproduced or distributed without written consent from the publisher.




CONTENTS

PART I: OPTIMAL ESTIMATION
PART II: LINEAR ESTIMATION
PART III: STOCHASTIC-GRADIENT METHODS
PART IV: MEAN-SQUARE PERFORMANCE
PART V: TRANSIENT PERFORMANCE
PART VI: BLOCK ADAPTIVE FILTERS
PART VII: LEAST-SQUARES METHODS
PART VIII: ARRAY ALGORITHMS
PART IX: FAST RLS ALGORITHMS
PART X: LATTICE FILTERS
PART XI: ROBUST FILTERS


PART I: OPTIMAL ESTIMATION


COMPUTER PROJECT

Project I.1 (Comparing optimal and suboptimal estimators) The programs that solve this project are the following.

1. psk.m This function generates a BPSK signal x that assumes the value +1 with probability p and the value −1 with probability 1 − p.

2. partB.m This program generates four plots of xN, as a function of N, one for each value of p — see Fig. 1.


Figure I.1. The plots show the values of the optimal estimates xN for different choices of N (the number of observations) and for different values of p (which determines the probability distribution of x).

3. partC.m This program generates a plot that shows {xN, xN,av} for four different values of p over the interval 1 ≤ N ≤ 300 — see Fig. 2.

4. partD.m This program estimates the variances of {xN, xN,av, xdec, xsign}. Typical values are

{0.0031, 0.1027, 0.0040, 0.0040}
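For reference, the symbol generation performed by psk.m can be sketched as follows. The original programs are MATLAB; this is an illustrative NumPy re-creation, not the book's code.

```python
import numpy as np

def psk(n, p, rng=None):
    """Return n BPSK symbols: +1 with probability p, -1 with probability 1 - p."""
    rng = np.random.default_rng() if rng is None else rng
    return np.where(rng.random(n) < p, 1.0, -1.0)

x = psk(10_000, 0.3, np.random.default_rng(0))
print(x.mean())  # close to E[x] = 2p - 1 = -0.4
```

The sample mean of such a sequence converges to 2p − 1, which is what the averaged estimates above exploit as N grows.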


Figure I.2. The plots show the values of the optimal estimates xN (dotted lines) and the averaged estimates xN,av (solid lines) for different choices of N (the number of observations) and for different values of p (which determines the probability distribution of x). Observe how the averaged estimates are significantly less reliable for a smaller number of observations.


PART II: LINEAR ESTIMATION


COMPUTER PROJECTS

Project II.1 (Linear equalization and decision devices) The programs that solve this project are the following.

1. partA.m This program solves parts (a) and (c), which deal with BPSK data. The program generates 5 plots that show the time and scattering diagrams of the transmitted sequence {s(i)}, the received sequence {y(i)} (which is the input to the equalizer), and the equalizer output {s(i − ∆)} for several values of ∆. A final plot shows the values of the estimated m.m.s.e. and the number of erroneous decisions as a function of ∆. The program allows the user to change σv². Typical outputs of this program are shown in the sequel.

Figure 1 shows four graphs displayed in a 2 × 2 matrix grid. The graphs on the left column pertain to the transmitted sequence s(i), while the graphs on the right column pertain to the received sequence y(i). The graphs on the bottom row are scatter diagrams; they show where the symbols {s(i), y(i)} are located in the complex plane for all 2000 transmissions. We thus see that the s(i) are concentrated at ±1, as expected, while the y(i) appear to be concentrated around the four values {−1.5, −0.5, 0.5, 1.5}. The graphs on the top row are time diagrams; they show what values the symbols {s(i), y(i)} assume over the 2000 transmissions. Clearly, the s(i) are either ±1, while the y(i) assume values around {±0.5, ±1.5}.


Figure II.1. The plots show the time (top row) and scatter (bottom row) diagrams of the transmitted and received sequences {s(i), y(i)} for BPSK transmissions. Observe how the s(i) are concentrated at ±1 while the y(i) are concentrated around {±0.5, ±1.5}.

Figure 2 again shows four graphs displayed in a 2 × 2 matrix grid. The graphs on the left column pertain to the received sequence y(i), while the graphs on the right column pertain to the output of the equalizer, s(i − ∆), for ∆ = 0, i.e., s(i). The graphs on the bottom row are the scatter diagrams, while the graphs on the top row are the time diagrams. Observe how the symbols at the output of the equalizer are concentrated around ±1.

Figure 3 is similar to Fig. 2, except that it relates to ∆ = 1. Comparing Figs. 2 and 3, observe how the scatter diagram of the output of the equalizer is more spread in the latter case. Figure 4 is also similar to Fig. 2, except that it now relates to ∆ = 3. Observe how the performance of the equalizer is poor in this case.

Figure 5 shows two plots: one pertains to the m.m.s.e. as a function of ∆, while the other pertains to the number of erroneous decisions as a function of ∆. It is seen that although the choice ∆ = 0 leads to the lowest m.m.s.e. at the assumed SNR level of 25 dB, all three choices ∆ = 0, 1, 2 lead to zero errors.

Figure 6 again shows the time and scatter diagrams of the equalizer input (left) and output (right) using ∆ = 0 and σv² = 0.05, i.e., SNR = 14 dB. Observe how the equalizer performance is more challenging at this SNR level. Still, the number of erroneous decisions came out as zero in this run.


Figure II.2. The plots show the time (top row) and scatter (bottom row) diagrams of the input and output of the equalizer, i.e., {y(i), s(i)}, for ∆ = 0 and BPSK transmissions. Observe how the equalizer output is concentrated around ±1.


Figure II.3. The plots show the time (top row) and scatter (bottom row) diagrams of the input and output of the equalizer, i.e., {y(i), s(i)}, for ∆ = 1 and BPSK transmissions. Observe how the equalizer output is again concentrated around ±1.

2. partB.m This program solves parts (b) and (d), which deal with QPSK data. The program generates 3 plots that show the scatter diagrams of the transmitted sequence {s(i)}, the received sequence {y(i)}, and the equalizer output {s(i − ∆)} for several values of ∆. A final plot shows the values of the estimated m.m.s.e. and the number of erroneous decisions as a function of ∆. The program allows the designer to change the value of the noise variance σv². Typical outputs of this program are shown in the sequel.

Figure 7 shows the scatter diagrams of the transmitted (left) and received (right) sequences, {s(i), y(i)}.


Figure II.4. The plots show the time (top row) and scatter (bottom row) diagrams of the input and output of the equalizer, i.e., {y(i), s(i)}, for ∆ = 3 and BPSK transmissions. Observe how the performance of the equalizer is poor in this case.


Figure II.5. The plots show the m.m.s.e. (left) and the number of erroneous decisions (right) as a function of ∆ for BPSK transmissions.
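The dependence of the m.m.s.e. on ∆ can be reproduced with a small Wiener-filter computation. The sketch below is illustrative only: it assumes a two-tap channel c = [1, 0.5] (consistent with the received levels {±0.5, ±1.5} noted earlier), unit-variance BPSK symbols, an equalizer of length L = 4, and a noise variance of 0.004 (roughly 25 dB SNR). None of these values is taken from the book's code.

```python
import numpy as np

c = np.array([1.0, 0.5])   # assumed channel (not from the book's code)
L = 4                       # equalizer length (illustrative)
sigma_v2 = 0.004            # noise variance, roughly 25 dB SNR

M = len(c)
# Convolution matrix: the regressor is u_i = C s_i + v_i, where
# u_i = [y(i), ..., y(i-L+1)] and s_i stacks L+M-1 past symbols.
C = np.zeros((L, L + M - 1))
for k in range(L):
    C[k, k:k + M] = c

R = C @ C.T + sigma_v2 * np.eye(L)   # E u_i u_i^T (unit-variance symbols)
mmses = []
for delta in range(L):
    r = C[:, delta]                  # E s(i-delta) u_i
    w = np.linalg.solve(R, r)        # Wiener (m.m.s.e.) equalizer
    mmses.append(1.0 - r @ w)        # m.m.s.e. = sigma_s^2 - r^T R^{-1} r
    print(f"delta={delta}: m.m.s.e. = {mmses[-1]:.4f}")
```

For this minimum-phase channel the smallest m.m.s.e. occurs at ∆ = 0 and grows with the delay, matching the trend described above.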

Observe how the s(·) are concentrated at √2(±1 ± j)/2, as expected, while the y(i) appear to be concentrated around 16 values.

Figure 8 shows the scatter diagrams of the equalizer input (left) and output (right) sequences for the cases ∆ = 0 and ∆ = 1. Observe how the equalizer output is now concentrated around the QPSK constellation symbols.

Figure 9 repeats the same scatter diagrams for the cases ∆ = 2 and ∆ = 3. Observe how the equalizer fails completely for ∆ = 3.

The plot showing the m.m.s.e. and the number of erroneous decisions as a function of ∆ is similar to Fig. 5. Figure 10, on the other hand, shows the time and scatter diagrams of the equalizer input (left) and output (right) using ∆ = 0 and ∆ = 1 for σv² = 0.05, i.e., SNR = 14 dB. The equalizer performance is more challenging at this SNR level. Still, the number of erroneous decisions came out as zero in this run.

3. partE.m This program solves parts (e) and (g) for BPSK data and for various noise levels. It generates a plot of the symbol error rate, measured as the ratio of the number of erroneous decisions to the total number of transmissions (adjusted to 20000), as a function of the SNR at the input of the equalizer. The plot shows the SER curves with and without equalization. A typical output is shown in the left plot of Fig. 11.

4. partF.m This program solves parts (f) and (g) for QPSK data and for various noise levels. It also generates a plot of the symbol error rate. The plot shows the SER curves with and without equalization.


Figure II.6. The plots show the time (top row) and scatter (bottom row) diagrams of the input and output of the equalizer, i.e., {y(i), s(i)}, for ∆ = 0, BPSK transmissions, and σv² = 0.05 (i.e., SNR = 14 dB).


Figure II.7. The plots show the scatter diagrams of the transmitted (left) and received (right) sequences {s(i), y(i)} for QPSK transmissions.

A typical output is shown in the right plot of Fig. 11.


Figure II.8. The plots show the scatter diagrams of the equalizer input (left) and output (right) for the cases ∆ = 0 and ∆ = 1 for QPSK transmissions.


Figure II.9. The plots show the scatter diagrams of the equalizer input (left) and output (right) for the cases ∆ = 2 and ∆ = 3 for QPSK transmissions.


Figure II.10. The plots show the scatter diagrams of the equalizer input (left) and output (right) for QPSK data with ∆ = 0 (top row) and ∆ = 1 (bottom row) and at SNR = 14 dB.


Figure II.11. Plots of the SER over 20000 transmitted BPSK symbols (left) and QPSK symbols (right), with and without equalization, for SNR values between 8 and 20 dB and using ∆ = 1.
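A symbol-error-rate measurement of this kind can be sketched as follows. The setup is again hypothetical (two-tap channel c = [1, 0.5], BPSK symbols, Wiener equalizer of length 4 with ∆ = 1, 20000 transmissions); only the error-counting methodology mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, delta = 20_000, 4, 1
c = np.array([1.0, 0.5])               # assumed channel (illustrative)
sigma_v2 = 0.05                        # noise variance (illustrative)

s = np.where(rng.random(n) < 0.5, 1.0, -1.0)
y = np.convolve(s, c)[:n] + np.sqrt(sigma_v2) * rng.standard_normal(n)

# Wiener equalizer for delay delta (same construction as the m.m.s.e. sketch).
C = np.zeros((L, L + len(c) - 1))
for k in range(L):
    C[k, k:k + len(c)] = c
w = np.linalg.solve(C @ C.T + sigma_v2 * np.eye(L), C[:, delta])

errors_raw = errors_eq = 0
for i in range(L - 1, n):
    u = y[i - L + 1:i + 1][::-1]       # regressor [y(i), ..., y(i-L+1)]
    errors_eq += np.sign(w @ u) != s[i - delta]
    errors_raw += np.sign(y[i]) != s[i]
ser_eq = errors_eq / (n - L + 1)
ser_raw = errors_raw / (n - L + 1)
print(ser_raw, ser_eq)                 # equalization typically lowers the SER
```

Sweeping the noise variance and repeating the count yields SER-versus-SNR curves of the kind shown in Fig. 11.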


Project II.2 (Beamforming) The programs that solve this project are the following.

1. partA.m This program solves part (a). It generates two plots. One plot shows a portion of the baseband sinusoidal wave s(t) along with the corresponding signals received at the four antennas. A second plot shows s(t) and the output of the beamformer — see Figs. 12 and 13. The estimated value of the m.m.s.e. is 0.0494, whereas the theoretical m.m.s.e. is 0.05.


Figure II.12. The plots show the baseband signal s(t) (top) along with the signals received at the four antennas. In this simulation, the noise at each antenna is assumed white, and the noises across the antennas are assumed uncorrelated.


Figure II.13. The plot shows the baseband signal (solid line) and the output of the beamformer (dotted line).

2. partB.m This program solves part (b). It generates two plots. One plot shows the baseband sinusoidal waves at 30° and 45°, along with the signals received at the four antennas. A second plot shows the baseband waves again, along with the combined signal s(t), and the beamformer output superimposed on the signal arriving at 30° — see Figs. 14 and 15. The estimated value of the m.m.s.e. is 0.067, while the theoretical m.m.s.e. is 0.05.

3. partC.m This program solves part (c). It generates four plots. Two of the plots deal with the case of a single baseband signal at 30°; they plot the baseband signal and the signals at the antennas, as well as the baseband signal and the beamformer output. Two other plots deal with the case of two incident signals at 30° and 45°; they plot the incident baseband signals and the signals at the antennas, as well as the combined signal and the beamformer output — see Figs. 16–19. The estimated value of the m.m.s.e. is 0.0907 for a single incident wave and 0.1013 for the case of two incident waves, while the theoretical m.m.s.e. is 0.0901.


Figure II.14. The plots show the baseband signal at 30° (first from top) and the interfering signal at 45° (second from top), along with the signals received at the four antennas.


Figure II.15. The figure shows the baseband signal at 30° and the interfering signal at 45° (middle plots), along with the combined signal (bottom plot), and the beamformer output (dotted line) superimposed on the signal arriving at 30° (solid line in top plot).

4. partD.m This program solves part (d) and generates two polar plots that show the power gain of the beamformer as a function of θ — see Fig. 20.


Figure II.16. The plots show the baseband signal (top) along with the signals received at the four antennas. In this simulation, the noise at each antenna is assumed white, and the noises across the antennas are spatially correlated.


Figure II.17. The plots show the baseband signal at 30° (first from top) and the interfering signal at 45° (second from top), along with the signals received at the four antennas for the case of spatially-correlated noise.


Figure II.18. The plot shows the baseband signal (solid line) and the output of the beamformer (dotted line) in the case of spatially-correlated noise.


Figure II.19. The figure shows the combined incident signal (bottom), along with the beamformer output (dotted line) superimposed on the signal arriving at 30° (solid line in top plot).


Figure II.20. A polar plot of the power gain of the beamformer in dB as a function of the direction of arrival θ for the case of 4 antennas (left) and 25 antennas (right).
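The beam pattern in such polar plots is computed from the array weight vector. The sketch below is not partD.m: it assumes a uniform linear array with half-wavelength spacing and simple delay-and-sum weights steered toward 30°, and evaluates the power gain |wᴴa(θ)|² on a grid of arrival directions.

```python
import numpy as np

def steering(theta_deg, n):
    """Steering vector of an n-element half-wavelength ULA (assumed geometry)."""
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.cos(np.deg2rad(theta_deg)))

thetas = np.linspace(0.0, 180.0, 361)
for n in (4, 25):                       # the two array sizes of Fig. II.20
    w = steering(30.0, n) / n           # delay-and-sum weights, unit gain at 30°
    gains = np.array([abs(np.vdot(w, steering(t, n))) ** 2 for t in thetas])
    gains_db = 10 * np.log10(np.maximum(gains, 1e-12))
    print(n, round(gains_db.max(), 6))  # peak of 0 dB, located at 30°
```

Increasing the number of antennas narrows the main lobe around the desired direction, consistent with the right-hand plot of the figure.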


Project II.3 (Decision feedback equalization) The programs that solve this project are the following.

1. partA.m This program solves part (a) and generates a plot showing the impulse response sequence and the magnitude of the frequency response of the channel, |C(ejω)|, over [0, π]. Its output is shown in Fig. 21.


Figure II.21. Impulse and frequency response of the channel C(z) = 0.5 + 1.2z^{−1} + 1.5z^{−2} − z^{−3}.
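The magnitude response in Fig. II.21 can be checked directly from the channel taps. A small NumPy evaluation of |C(e^{jω})| over [0, π] (the original partA.m is MATLAB; this is an equivalent computation):

```python
import numpy as np

c = np.array([0.5, 1.2, 1.5, -1.0])            # taps of C(z) from Fig. II.21
w = np.linspace(0.0, np.pi, 512)
# C(e^{jw}) = sum_k c[k] e^{-jwk}
Cw = np.exp(-1j * np.outer(w, np.arange(len(c)))) @ c
mag_db = 20 * np.log10(np.abs(Cw))
print(mag_db[0], mag_db[-1])  # C(1) = 2.2 and C(-1) = 1.8, about 6.8 and 5.1 dB
```

The endpoint values are consistent with the 5–8 dB range visible on the figure's frequency-response axis.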

2. partB.m This program solves part (b) and generates a plot showing a scatter diagram of the transmitted and received sequences {s(i), y(i)}. A typical output is shown in Fig. 22.


Figure II.22. Scatter diagrams of the transmitted (left) and received (right) sequences, {s(i), y(i)}, for QPSK transmissions.

3. partC.m This program solves parts (c) and (d) and generates three plots. One plot shows scatter diagrams for the received sequence {y(i)} and the sequence at the input of the decision device (for the case ∆ = 5) — see Fig. 23. A second plot shows the number of erroneous decisions as a function of ∆, as well as the m.m.s.e. (both theoretical and measured) as a function of ∆ — see Fig. 24. A third plot shows the impulse response sequences of the channel, the combination channel-feedforward filter, and the feedback filter delayed by ∆ — see Fig. 25. The impulse response of the feedback filter has also been extended by some zeros in the plot in order to match its length with the convolution of the channel and the feedforward filter for ease of comparison. Observe how the taps of the feedback filter cancel the post ISI. The number of errors for this simulation was zero.

4. partE.m This program generates a plot showing the number of erroneous decisions as a function of the length of the feedforward filter, L — see the plot on the left in Fig. 26.

5. partF.m This program generates a plot showing the number of erroneous decisions as a function of the length of the feedback filter, Q — see the plot on the right in Fig. 26.

6. partG.m This program solves part (g) and generates a plot showing the symbol error rate as a function of the SNR level at the input of the equalizer. Each point in the plot is measured using 2 × 10⁴ transmissions. A typical output is shown in Fig. 27.


Figure II.23. Scatter diagrams of the received sequence (input of the equalizer) and the sequence at the input of the decision device for ∆ = 5 and QPSK transmissions.


Figure II.24. The plot on the left shows the number of erroneous decisions as a function of ∆, while the plot on the right shows the m.m.s.e. as a function of ∆ as deduced from both theory and measurements.

7. partH.m This program solves part (h) and generates a plot that shows the scatter diagrams of the received sequence and of the sequence at the input to the decision device (for L = 4 and ∆ = 4). The plot also shows the number of erroneous decisions as a function of L, as well as the theoretical and estimated values of the m.m.s.e. for various L. A typical output is shown in Fig. 28. The number of errors in this simulation was zero.

8. partI.m This program solves part (i) and generates a plot that shows the impulse and frequency responses of both the actual channel and its estimate, as well as scatter diagrams of the received sequence {y(i)} and the sequence at the input of the decision device — see Fig. 29.

9. partJ.m This program solves part (j) and generates a plot that shows the impulse and frequency responses of the actual channel and its estimate, as well as scatter diagrams of the received sequence {y(i)} and the sequence at the input of the decision device — see Fig. 30.


Figure II.25. The top left plot shows the impulse response sequence of the channel. The two plots on the right show the convolution of the channel and feedforward filter impulse responses (top) and the feedback filter response delayed by ∆ (bottom).
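The cancellation mechanism of Fig. II.25 is easy to verify numerically: whatever feedforward filter is in use, the feedback filter reproduces the post-cursor taps of the combined response and subtracts them. In the sketch below the feedforward filter and the cursor delay are hypothetical placeholders; only the channel comes from the project.

```python
import numpy as np

c = np.array([0.5, 1.2, 1.5, -1.0])   # channel of Fig. II.21
f = np.array([1.0, -0.4, 0.1])        # hypothetical feedforward filter
delta = 1                              # hypothetical cursor position

h = np.convolve(c, f)                  # combined channel + feedforward response
b = h[delta + 1:].copy()               # feedback taps = post-cursor ISI of h
residual = h.copy()
residual[delta + 1:] -= b              # past decisions fed back through b
print(residual)                        # post-cursor part is exactly zero
```

With correct past decisions, only the pre-cursor ISI and the cursor tap remain at the input of the decision device, which is the behavior observed in the figure.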


Figure II.26. Number of erroneous decisions as a function of the feedforward filter length using ∆ = 5 and Q = 2 (left), and as a function of the feedback filter length using ∆ = 5 and L = 6 (right).


Figure II.27. The plot shows the symbol error rate as a function of the SNR level at the input of the equalizer. The simulation assumes ∆ = 5, L = 6, and Q = 1.


Figure II.28. The plots in the first row show the scatter diagrams of the received sequence and of the sequence at the input of the decision device for L = 4 and ∆ = 4, using a linear equalizer structure. The plots in the second row show the number of erroneous decisions as a function of the feedforward filter length (L), as well as the theoretical and estimated values of the m.m.s.e. for various values of L.


Figure II.29. The plots in the first row show the impulse and frequency responses of the channel and its estimate. The plots in the second row show the scatter diagrams of the received sequence and the sequence at the input of the decision device. This simulation pertains to a DFE implementation with L = 6, Q = 1, and ∆ = 5.


Figure II.30. The plots in the first row show the impulse and frequency responses of the channel and its estimate. The plots in the second row show the scatter diagrams of the received sequence and the sequence at the input of the decision device. This simulation pertains to a linear equalizer with L = 4 and ∆ = 4.


PART III: STOCHASTIC-GRADIENT METHODS


COMPUTER PROJECTS

Project III.1 (Constant-modulus criterion) The programs that solve this project are the following.

1. partA.m This program prints the locations of the global minima as well as the minimum value of the cost function at these minima. It also plots J(w).

2. partC.m This program generates three plots — see Figs. 1–3. Each plot shows the evolution of w(i) as a function of time, as well as the evolution of J[w(i)].


Figure III.1. The top plot shows the evolution of the scalar weight w(i) from two different initial conditions and for µ = 0.2; the filter is seen to converge to the global minimum that is closest to the initial condition. The bottom plot shows the evolution of J[w(i)] superimposed on the cost function J(w) for both initial conditions.

3. partD.m This program generates a plot showing the contour curves of J(w) superimposed on the trajectories of the weight estimates — see Fig. 4.


Figure III.2. The top plot shows the evolution of w(i) starting from two different initial conditions (w(−1) = 0.3 and w(−1) = −0.3) and for µ = 0.6. Due to the larger value of the step-size, the filter is seen to oscillate around the global minimum that is closest to the initial condition. The bottom plots show the evolution of J[w(i)] superimposed on the cost function J(w) for both initial conditions.


Figure III.3. The top plot shows the evolution of w(i) starting from the initial condition w(−1) = 0.2 and for a large step-size, µ = 1. In this case, the filter does not converge.



Figure III.4. A plot of the contour curves of J(w), along with the trajectories of the resulting weight estimates starting from two different initial conditions (one on the left and the other on the right). Here w = col{α, β}.


Project III.2 (Constant-modulus algorithm) The programs that solve this project are the following.

1. partA.m This program generates a plot showing the learning curve of the steepest-descent method and an ensemble-average learning curve for CMA2-2. The program allows the user to modify the values of γ, w−1, L, and N. A typical output is shown in Fig. 5.


Figure III.5. Learning curve of the steepest-descent method corresponding to the cost function J(w) = E(γ − |uw|²)², along with an ensemble-average curve generated by running CMA2-2 and averaging over 200 experiments.

2. partB.m This program generates two plots. One plot shows the contour curves of the cost function along with the four weight-vector trajectories generated by the steepest-descent method (see Fig. 6). The second plot shows the same contour curves with the weight-vector trajectories generated by CMA2-2 (see Fig. 7). The program allows the user to modify the values of γ, w−1, and N.
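The CMA2-2 recursion behind these plots takes the standard stochastic-gradient form y(i) = u_i w_{i−1}, w_i = w_{i−1} + µ u_i* y(i)(γ − |y(i)|²), with u_i a row regressor. A minimal Python sketch (the project's own implementation is in MATLAB; the example data here are illustrative):

```python
import numpy as np

def cma22(u_rows, gamma, mu, w_init):
    """Run the CMA2-2 stochastic-gradient recursion
         y(i) = u_i w_{i-1}
         w_i  = w_{i-1} + mu * conj(u_i) * y(i) * (gamma - |y(i)|^2)
    over a sequence of row regressors u_i; returns the weight trajectory."""
    w = np.asarray(w_init, dtype=complex)
    traj = [w.copy()]
    for ui in np.asarray(u_rows, dtype=complex):
        y = ui @ w                                        # filter output
        w = w + mu * np.conj(ui) * y * (gamma - np.abs(y) ** 2)
        traj.append(w.copy())
    return np.array(traj)

# if the output already has squared modulus gamma, the update term vanishes
traj = cma22([[1.0, 0.0]], gamma=1.0, mu=0.1, w_init=[1.0, 0.0])
```

A weight vector whose output satisfies |y|² = γ is a stationary point of the recursion, which is why the trajectories settle on the contour of minimum cost.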



Figure III.6. Contour curves of the cost function J(w) = E(γ − |uw|²)², along with the trajectories generated by the steepest-descent method from four distinct initial conditions.


Figure III.7. Contour curves of the cost function J(w) = E(γ − |uw|²)², along with the trajectories generated by CMA2-2 from four distinct initial conditions.


Project III.3 (Adaptive channel equalization) The programs that solve this project are the following.

1. partA.m This program solves part (a) and generates two plots showing the scatter diagrams of the transmitted, received, and estimated sequences {s(i), u(i), ŝ(i − ∆)} during the training and decision-directed modes of operation. A typical output is shown in Figs. 8–9. No erroneous decisions were observed in this simulation.


Figure III.8. Scatter diagrams of the QPSK training sequence (left) and the 16-QAM transmitted data (right).


Figure III.9. Scatter diagrams of the received sequence (left) and the output of the equalizer (right). Observe how the equalizer outputs are centered around the symbols of the 16-QAM constellation.

2. partB.m This program solves part (b) and generates a plot showing the scatter diagrams of the output of the equalizer for three lengths of training period (150, 300, and 500), and for two adaptation schemes (LMS and ε-NLMS), as shown in Fig. 10. No erroneous decisions were observed in this simulation in the last two cases of ε-NLMS, while 1 error was observed in the last case for LMS. Observe how the performance of LMS is inferior. If the training period is increased further, the performance of LMS can be improved.

3. partC.m This program solves part (c) and generates a plot showing the scatter diagram of the output of the equalizer when the transmitted data is selected from a 256-QAM constellation and the equalizer is trained using ε-NLMS with 500 QPSK symbols. A typical output is shown in Fig. 11. No erroneous decisions were observed in this simulation.

4. partD.m This program solves part (d) and generates a plot of the SER curve versus SNR for several QAM constellations. A typical output is shown in Fig. 12. Observe how, for a fixed SNR, the probability of error increases as the order of the constellation increases.

5. partE.m This program solves part (e) and generates two plots of the scatter diagrams of the input and output of the equalizer for two different configurations. A typical output is shown in Fig. 13. No erroneous decisions were observed in either simulation.

6. partF.m This program solves part (f) and generates a plot of the SER curve versus SNR for several QAM constellations. A typical output is shown in Fig. 14.

7. partG.m This program solves part (g). Figure 15 shows the impulse and frequency responses of the channel; it is seen that the channel has a pronounced spectral null, a fact that makes the equalization task more challenging. Figure 16 shows the frequency and impulse responses of the channel, the RLS



Figure III.10. Scatter diagrams of the signal at the output of the equalizer for three different numbers of training symbols (150, 300, and 500) and for both algorithms, LMS (top row) and ε-NLMS (bottom row).


Figure III.11. Scatter diagram of the signal at the output of the equalizer. The equalizer is trained with 500 QPSK symbols using ε-NLMS, and the transmitted data during the decision-directed mode is from a 256-QAM constellation.

equalizer at the end of the adaptation process, and of the convolution of the channel and equalizer impulse response sequences (we are only plotting the real parts of the impulse response sequences). Observe how the combined frequency response still exhibits a spectral null; the equalizer has not been successful in fully removing the null from the channel spectrum (i.e., it has not been successful in flattening the channel). The program also generates a plot of the scatter diagrams of the transmitted 4-QAM data and the outputs of the linear equalizer for two cases. In one case, the equalizer is trained with RLS for 100 iterations, and in the second case the equalizer is trained with ε-NLMS for 2000 iterations. A typical output is shown in Fig. 17. Observe how ε-NLMS fails for this channel, while no erroneous decisions were observed for RLS in this simulation. The performance of RLS can be improved by increasing the length of the equalizer.
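The training loop shared by these programs follows the standard ε-NLMS recursion e(i) = d(i) − u_i w_{i−1}, w_i = w_{i−1} + µ u_i* e(i)/(ε + ‖u_i‖²), with the slicer decision replacing the training symbol once the filter enters decision-directed mode. A hedged Python sketch (the QPSK alphabet {±1 ± j}, the delay-line regressor construction, and the parameter defaults are illustrative assumptions, not the programs' exact settings):

```python
import numpy as np

def qpsk_slicer(y):
    """Nearest symbol in an assumed QPSK alphabet {+-1 +- 1j}."""
    return np.sign(y.real) + 1j * np.sign(y.imag)

def enlms_equalizer(u, d, L, mu=0.5, eps=1e-6, n_train=500):
    """Length-L linear equalizer trained with epsilon-NLMS. For i < n_train
    the reference is the training symbol d(i); afterwards the filter runs in
    decision-directed mode using the slicer output as the reference."""
    w = np.zeros(L, dtype=complex)
    for i in range(L - 1, len(u)):
        ui = u[i - L + 1:i + 1][::-1]                 # delay-line regressor
        y = ui @ w
        ref = d[i] if i < n_train else qpsk_slicer(y)
        e = ref - y
        # np.vdot conjugates its first argument, so vdot(ui, ui) = ||ui||^2
        w = w + mu * np.conj(ui) * e / (eps + np.real(np.vdot(ui, ui)))
    return w

rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], 500) + 1j * rng.choice([-1.0, 1.0], 500)
w = enlms_equalizer(s, s, L=4, n_train=500)           # identity-channel sanity check
```

For an identity channel and training reference d(i) = s(i), the weight vector converges to approximately [1, 0, ..., 0]; with a real channel, d(i) would be the delayed symbol s(i − ∆).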



Figure III.12. Plots of the SER as a function of the SNR at the input of the equalizer and the order of the QAM constellation. The adaptive filter is trained with ε-NLMS using 500 QPSK training symbols.


Figure III.13. Scatter diagrams of the input and output of the decision-feedback equalizer for 64-QAM transmissions. The top row corresponds to a DFE with 10 feedforward taps, 2 feedback taps, and delay ∆ = 7. The bottom row corresponds to a DFE with 20 feedforward taps, 2 feedback taps, and delay ∆ = 10.



Figure III.14. A plot of the SER as a function of the SNR at the input of the equalizer and the order of the QAM constellation. The DFE is trained with ε-NLMS using 500 QPSK training symbols.


Figure III.15. Impulse (left) and frequency (right) responses of the channel.


Figure III.16. Frequency and impulse responses of the channel (left), the RLS equalizer at the end of the adaptation process (center), and of the convolution of the channel and equalizer (right).



Figure III.17. Scatter diagrams of the transmitted sequence and the outputs of the equalizer for two training algorithms: RLS (center) and ε-NLMS (far right). The former is trained with 100 QPSK symbols while the latter is trained with 2000 QPSK symbols.


Project III.4 (Blind adaptive equalization) The programs that solve this project are the following.

1. partA.m This program solves part (a) and generates two plots. Figure 18 shows the impulse responses of the channel, the equalizer, and the combination channel-equalizer. Observe that the latter has a peak at sample 19, so the delay introduced by the channel and equalizer is 19. Figure 19 shows a typical plot of the scatter diagrams at transmission, reception, and the output of the equalizer.


Figure III.18. Impulse responses of the channel, the blind equalizer, and the combination channel-equalizer for QPSK transmissions. Observe that the latter has a peak at sample 19.


Figure III.19. Scatter diagrams of the transmitted sequence, received sequence, and the output of the equalizer for QPSK transmissions using CMA2-2. The first 2000 samples are ignored in order to give the equalizer sufficient time for convergence.

2. partB.m This program solves part (b) and generates three plots. Figure 20 shows the impulse responses of the channel, the equalizer, and the combination channel-equalizer. Figure 21 shows a typical plot of the scatter diagrams at transmission, reception, and the output of the equalizer. Observe that although symbols from 16-QAM do not have constant modulus, CMA2-2 still functions properly, albeit more slowly. Figure 22 shows the real and imaginary parts of 50 transmitted symbols and the corresponding 50 decisions by the slicer (with the delay of 19 samples accounted for in order to synchronize the signals). These 50 symbols are chosen from the later part of the data, after the equalizer has converged.


Figure III.20. Impulse responses of the channel, the blind equalizer, and the combination channel-equalizer for 16-QAM transmissions.

3. partC.m This program solves part (c) and generates three plots as in part (b). Here we only show in Fig. 23 the resulting scatter diagrams for MMA. Observe how the scatter diagram at the output of the equalizer is more focused when compared with CMA2-2.

4. partD.m This program solves part (d) and generates scatter diagrams for three additional blind adaptive algorithms. Typical outputs are shown in Fig. 24.
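The multimodulus algorithm (MMA) of part (c) drives the real and imaginary parts of the equalizer output toward separate dispersion constants rather than penalizing |y| as a whole, which is one reason its output constellation looks more focused than CMA2-2's. A minimal Python sketch of one update (illustrative; the dispersion constants γ_R, γ_I depend on the transmitted constellation):

```python
import numpy as np

def mma_step(w, ui, mu, gamma_r, gamma_i):
    """One multimodulus (MMA) update: the error treats Re{y} and Im{y}
    separately, e = y_R (gamma_r - y_R^2) + 1j * y_I (gamma_i - y_I^2),
    and the weight update mirrors the CMA2-2 form."""
    y = ui @ w
    e = y.real * (gamma_r - y.real ** 2) + 1j * y.imag * (gamma_i - y.imag ** 2)
    return w + mu * np.conj(ui) * e

# a stationary point: an output with y_R^2 = gamma_r and y_I^2 = gamma_i
w0 = np.array([1.0 + 1.0j, 0.0])
w1 = mma_step(w0, np.array([1.0 + 0.0j, 0.0]), mu=0.1, gamma_r=1.0, gamma_i=1.0)
```

Because the error is sensitive to the real and imaginary parts separately, MMA also penalizes rotations of the constellation, unlike the modulus-only CMA2-2 error.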



Figure III.21. Scatter diagrams of the transmitted sequence, received sequence, and the output of the equalizer for 16-QAM transmissions using CMA2-2. The first 15000 samples are ignored in order to give the equalizer sufficient time for convergence.


Figure III.22. Time diagrams of 50 transmitted symbols and the corresponding decisions after equalizer convergence. The top two plots correspond to the real parts, while the bottom two plots correspond to the imaginary parts.


Figure III.23. Scatter diagrams of the transmitted sequence, received sequence, and the output of the equalizer for 16-QAM transmissions using MMA. The first 15000 samples are ignored in order to give the equalizer sufficient time for convergence.



Figure III.24. Scatter diagrams of the transmitted (left) and received sequences (center), and of the output of the equalizer (right) for three blind algorithms: CMA1-2 (top row), RCA (middle row), and the stop-and-go algorithm (bottom row).


PART IV: MEAN-SQUARE PERFORMANCE


Part IV: Mean-Square Performance 35

COMPUTER PROJECTS

Project IV.1 (Line echo cancellation) The programs that solve this project are the following.

1. partA.m This program generates two plots showing the impulse response sequence of the channel as well as its frequency response, as shown in Fig. 1. It is seen that the bandwidth of the echo path model is approximately 3.4 kHz.

Figure IV.1. Impulse and frequency responses of the echo path model.

2. partB.m This program solves parts B and C — see Fig. 2. The powers of the input and output signals are 0 and −6.3 dB, respectively, so that ERL ≈ 6.3 dB and the amplitude of a signal going through the channel is reduced by roughly 1/2.


Figure IV.2. The top row shows the CSS data and their spectrum, while the bottom row shows 5 blocks of CSS data and the resulting echo.

3. partD.m This program solves parts D and E and generates two plots. Figure 3 shows the echo path impulse response and its estimate, while Fig. 4 shows the far-end signal, the echo, and the error signal. The resulting ERLE was found to be around 56 dB. Consequently, the total attenuation from the far-end input to the output of the adaptive filter is seen to be ERL + ERLE ≈ 62 dB.

4. partF.m The power of the error signal is the MSE of the adaptive filter, so that

Perror(dB) ≈ 10 log10( σv² + µσv²/(2 − µ) ) = −39.42 dB

Since Pecho ≈ −6.3 dB, we find that the theoretical ERLE is ≈ 33.1 dB. The simulated value in this experiment is ≈ 29.5 dB. The simulated value will get closer to theory if the simulation is run for a longer period of time.
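The ERL and ERLE values quoted in this project are ratios of average signal powers expressed in dB, which can be computed directly. A small Python helper (signal names are placeholders):

```python
import numpy as np

def db_power(x):
    """Average power of a signal, in dB."""
    return 10 * np.log10(np.mean(np.abs(np.asarray(x)) ** 2))

def erl_erle(far_end, echo, error):
    """ERL  = P(far-end) - P(echo) : attenuation through the echo path.
       ERLE = P(echo) - P(error)   : extra attenuation added by the filter."""
    return (db_power(far_end) - db_power(echo),
            db_power(echo) - db_power(error))

# halving the amplitude costs about 6 dB, consistent with the ERL of 6.3 dB above
erl, erle = erl_erle(np.ones(100), 0.5 * np.ones(100), 0.05 * np.ones(100))
```

The total attenuation from the far-end input to the canceller output is then ERL + ERLE, as used above.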



Figure IV.3. The figure shows the impulse response sequences of the echo path and its estimate by the adaptive filter.


Figure IV.4. The top row shows the far-end signal, while the middle row shows the corresponding echo and the last row shows the resulting error signal.


Project IV.2 (Tracking a Rayleigh fading channel) The programs that solve this project are the following.

1. partB.m This program generates three plots. One plot shows the amplitude and phase components of a Rayleigh fading sequence, while a second plot shows the cumulative distribution function of the amplitude sequence. Typical outputs are shown in Figs. 5 (for the first 500 samples) and 6. The latter figure can be used, for example, to infer that the amplitude |x(n)| is attenuated by more than 10 dB for 10% of the time, and so forth.


Figure IV.5. Amplitude and phase plots of a Rayleigh fading sequence with Doppler frequency fD = 66.67 Hz and sampling frequency fs = 1 kHz.

Observe that the sequence x(n) in Fig. 5 undergoes rapid variations. This is because, in this case, the Doppler frequency and the sampling frequency are essentially of the same order of magnitude (fD = 66.67 Hz and fs = 1000 Hz). In loose terms, the inverse of fD gives an indication of the time interval needed for the sequence x(n) to undergo a significant change in its value. In our example, this time amounts to ≈ 15 ms, which translates into 15 samples at the assumed sampling rate. For the sake of comparison, Fig. 7 plots the amplitude component of another Rayleigh fading sequence with the same Doppler frequency fD = 66.67 Hz but with sampling period Ts = 1 µs. The ratio of fs to fD is now ≈ 15000, so the variations in x(n) occur at a significantly slower pace.

2. partC.m This program solves parts C, D, and E. It generates a plot of the learning curve of LMS by running it for 30000 iterations and averaging over 100 experiments. The channel rays are assumed to fade at 10 Hz and the sampling period is set to Ts = 0.8 µs. Figure 8 shows a typical trajectory of the amplitude of the first ray and its estimate by the LMS filter. The leftmost plot shows that the trajectories are in very good agreement. The rightmost plot zooms onto the interval [1000, 1100].

Figure 9 shows only the first 1000 samples of a typical learning curve. In addition, the simulated and theoretical MSE values in this case are 0.001056 and 0.001026, respectively. The theoretical values obtained from the expressions shown in parts (d) and (e) are almost identical. This is because, at fD = 10 Hz, the value of α is unity for all practical purposes. Also, the difference that is observed between simulation and theory for the MSE is in part due to the approximate first-order auto-regressive model that is being used to model the Rayleigh fading behavior.

3. partF.m This program solves part F and generates a plot of the MSE as a function of the Doppler frequency over the range 10 Hz to 25 Hz. A typical output is shown in Fig. 10. It is seen that, as the Doppler frequency increases, the tracking performance of LMS deteriorates. This behavior is expected since higher Doppler frequencies correspond to faster variations in the channel; remember that a higher Doppler frequency translates into a larger covariance matrix Q and, therefore, into more appreciable channel nonstationarities.



Figure IV.6. The cumulative distribution function of a Rayleigh fading sequence corresponding to a vehicle moving at 80 km/h.


Figure IV.7. Amplitude plot of a Rayleigh fading sequence with Doppler frequency fD = 66.67 Hz and sampling frequency fs = 1 MHz.

4. partG.m This program solves part G. It generates a plot of the filter MSE as a function of the step-size. The Doppler frequency is fixed at 10 Hz for both rays, and the MSE values are obtained by averaging over 50 experiments — see Fig. 11.
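The first-order auto-regressive approximation of the fading dynamics mentioned above can be sketched as follows. This is a Python illustration in which the unit-power normalization and the treatment of α as a free parameter (close to one when fD·Ts ≪ 1) are assumptions, not the book's exact generator.

```python
import numpy as np

def rayleigh_ar1(n, alpha, rng):
    """Unit-power complex fading sequence from the AR(1) approximation
         x(n) = alpha * x(n-1) + sqrt(1 - alpha^2) * g(n),
    with g(n) unit-variance circular complex Gaussian noise. alpha near 1
    (small Doppler relative to the sampling rate) gives slow fading, as in
    Fig. 7; smaller alpha gives the rapid variations of Fig. 5."""
    g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    x = np.empty(n, dtype=complex)
    x[0] = g[0]
    for i in range(1, n):
        x[i] = alpha * x[i - 1] + np.sqrt(1 - alpha ** 2) * g[i]
    return x

x = rayleigh_ar1(20000, alpha=0.9, rng=np.random.default_rng(0))
# |x(n)| is then approximately Rayleigh distributed with unit mean power
```

The sqrt(1 − α²) scaling keeps the stationary power at one, so the envelope statistics (such as the CDF of Fig. 6) do not depend on the fading rate.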



Figure IV.8. The leftmost plot shows a typical trajectory of the amplitude of the first Rayleigh fading ray and its estimate (both trajectories are in almost perfect agreement). The rightmost plot zooms onto the interval 1000 ≤ i ≤ 1100.


Figure IV.9. The first 1000 samples of a learning curve that results from tracking a two-ray Rayleigh fading channel using LMS.



Figure IV.10. A plot of the tracking MSE of the LMS filter as a function of the Doppler frequency of the Rayleigh fading rays.


Figure IV.11. A plot of the tracking MSE as a function of the step-size for Rayleigh fading rays with Doppler frequency at 10 Hz.


PART V: TRANSIENT PERFORMANCE


42 Part V: Transient Performance

COMPUTER PROJECT

Project V.1 (Transient behavior of LMS) The programs that solve this project are the following.

1. partA.m This program solves parts A, B, and C. It generates data {ui, d(i)} with the desired specifications. Figure 1 shows a typical ensemble-average learning curve for the weight-error vector when the step-size is fixed at µ = 0.01. The ensemble-average curve is superimposed onto the theoretical curve. It is seen that there is a good match between theory and practice. Figure 2 plots the MSD as a function of the step-size. Observe how the filter starts to diverge for step-sizes close to 0.1. The rightmost plot in Fig. 2 shows the behavior of the function f(µ) over the same range of step-sizes. The plot shows that f(µ) exceeds one around µ ≈ 0.092. We thus find that the simulation results are consistent with theory.


Figure V.1. Theoretical and simulated learning curves for the weight-error vector of LMS operating with a step-size µ = 0.01 and using Gaussian regressors.


Figure V.2. Theoretical and simulated MSD of LMS as a function of the step-size for Gaussian regressors (leftmost plot). The rightmost plot shows the behavior of the function f(µ) over the same range of step-sizes.

2. partD.m This program generates two plots. Figure 3 shows a typical ensemble-average learning curve for the weight-error vector when the step-size is fixed at µ = 0.05. The ensemble-average curve is superimposed onto the theoretical curve that follows from Prob. V.7. It is seen that there is also a good match between theory and practice for non-Gaussian regressors.



Figure V.3. Theoretical and simulated learning curves for the weight-error vector of LMS operating with a step-size µ = 0.05 and using non-Gaussian regressors.

Figure 4 plots the MSD as a function of the step-size. The theoretical values are obtained by using the following expressions

MSD = µ²σv² r′ᵀ(I − F)⁻¹q,    EMSE = µ²σv² r′ᵀ(I − F)⁻¹r

where {r, r′, q, F} are defined in the statement of the theorem. Observe how the filter starts to diverge for step-sizes close to 0.1. Actually, using the estimated values for A and B that are generated by the program, it is found that

1/λmax(A⁻¹B) ≈ 0.103,    1/max{λ(H) ∈ ℝ⁺} ≈ 0.204

so that we again find that the simulation results are consistent with theory.


Figure V.4. Theoretical and simulated MSD of LMS as a function of the step-size for non-Gaussian regressors.
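The ensemble-average curves in Figs. 1 and 3 are obtained by averaging the squared weight-error norm ‖w° − w_i‖² over many independent experiments. A Python sketch of that procedure for white Gaussian regressors (the choice of w°, noise level, and number of experiments here are placeholders, not the project's actual specifications):

```python
import numpy as np

def lms_msd_curve(wo, mu, sigma_v, n_iter, n_exp, rng):
    """Ensemble-average learning curve for the weight-error vector of LMS:
    ||wo - w_i||^2 is averaged over n_exp independent experiments with
    white Gaussian regressors and additive measurement noise."""
    M = len(wo)
    msd = np.zeros(n_iter)
    for _ in range(n_exp):
        w = np.zeros(M)
        for i in range(n_iter):
            u = rng.standard_normal(M)             # regressor u_i
            d = u @ wo + sigma_v * rng.standard_normal()
            w = w + mu * u * (d - u @ w)           # LMS update
            msd[i] += np.sum((wo - w) ** 2)
    return msd / n_exp

rng = np.random.default_rng(0)
curve = lms_msd_curve(np.ones(5) / np.sqrt(5), mu=0.01, sigma_v=0.1,
                      n_iter=2000, n_exp=20, rng=rng)
# for small mu, the steady-state level is approximately mu * M * sigma_v^2 / 2
```

Plotting 10·log10 of such a curve produces learning curves of the kind shown in Figs. 1 and 3.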


PART VI: BLOCK ADAPTIVE FILTERS


Part VI: Block Adaptive Filters 45

COMPUTER PROJECT

Project VI.1 (Acoustic echo cancellation) The programs that solve this project are the following.

1. partA.m This program generates two plots showing the impulse response sequence of the room as well as its frequency response up to 4 kHz, as shown in Fig. 1.


Figure VI.1. Impulse and frequency responses of a room model.

2. partB.m This program generates two plots. One plot shows the loudspeaker signal, the echo signal, and the error signals that are left after adaptation by ε-NLMS using µ = 0.5 and µ = 0.1. The second plot shows the variation of the relative filter mismatch as a function of the step-size. Typical outputs are shown in Figs. 2 and 3.

It is seen from Fig. 2 that the error signal is reduced faster for µ = 0.5 than for µ = 0.1.

3. partC.m This program generates a plot showing the loudspeaker signal, the echo signal, and the error signals that are left after adaptation by ε−NLMS using µ = 0.1 and ε−NLMS using power normalization with µ = 0.1/32 (see Fig. 4). The mismatch by the former algorithm is found to be −10.49 dB while the mismatch by the latter algorithm is found to be −9.72 dB. The performance of both algorithms is therefore similar although, of course, ε−NLMS with power normalization is less complex computationally. Comparing the convergence of the error signal when µ = 0.1 in the DFT-block adaptive implementation with that in Fig. 2, we see that the block implementation results in faster convergence.

4. partD.m This program generates a plot showing the loudspeaker signal, the echo signal, and the error signal that is left after adaptation using ε−NLMS with power normalization and µ = 0.03/32 — see Fig. 5.

5. partE.m This program generates learning curves for ε-NLMS and the constrained DFT-block adaptive implementation of Alg. 28.7 by averaging over 50 experiments and smoothing the resulting curves using a sliding window of width 10 — see Fig. 6, which shows the superior convergence performance of the block adaptive filter.
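The core update used throughout this project is ε-NLMS. The project programs are MATLAB and use the 1024-tap room model with CSS training; the short filter and white input below are hypothetical stand-ins in a minimal Python sketch of the identification step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8-tap "room" and white loudspeaker signal (stand-ins for the
# project's 1024-tap room model and CSS blocks).
M = 8
w_room = rng.standard_normal(M) * 0.1
u = rng.standard_normal(5000)
d = np.convolve(u, w_room)[: len(u)]      # echo picked up at the microphone

# epsilon-NLMS: w <- w + mu * u_i * e(i) / (eps + ||u_i||^2)
mu, eps = 0.5, 1e-6
w = np.zeros(M)
for i in range(M - 1, len(u)):
    ui = u[i - M + 1 : i + 1][::-1]       # regressor, most recent sample first
    e = d[i] - ui @ w                     # a priori error = residual echo
    w += mu * ui * e / (eps + ui @ ui)

# Relative filter mismatch, the metric reported in Fig. VI.3.
mismatch_db = 10 * np.log10(np.sum((w_room - w) ** 2) / np.sum(w_room ** 2))
print(f"mismatch: {mismatch_db:.1f} dB")
```

In this noiseless sketch the mismatch falls far below the −10 dB figures quoted above; the project's echo and noise levels make the attainable mismatch much more modest.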


Figure VI.2. The top row shows the loudspeaker signal, while the second row shows the corresponding echo and the last two rows show the resulting error signal for different step-sizes in the ε−NLMS implementation.


Figure VI.3. Relative filter mismatch for ε−NLMS as a function of the step-size; the values are computed after training with 20 CSS blocks.


Figure VI.4. Performance of NLMS-based DFT-block adaptive implementations. The top row shows the loudspeaker signal, while the second row shows the corresponding echo and the last two rows show the error signals.


Figure VI.5. Performance of a DFT-block adaptive implementation applied to a real speech signal. The top row shows the loudspeaker signal, while the middle row shows the corresponding echo and the last row shows the error signal left after cancellation.



Figure VI.6. Learning curves for ε−NLMS and the constrained DFT block adaptive filters using an AR(1) loudspeaker signal.


PART VII: LEAST-SQUARES METHODS



COMPUTER PROJECTS

Project VII.1 (OFDM receiver) The programs that solve this project are the following.

1. partA.m This program generates a plot showing the estimated channel taps both in the time domain and in the frequency domain for the two training schemes of Fig. 1. A typical output is shown in Figs. 1 and 2.


Figure VII.1. Magnitude of the channel taps in the time domain: exact taps (top plot), estimated taps using 64 tones in one symbol (middle plot) and estimated taps using 64 tones over 8 symbols (bottom plot).

2. partB.m This program generates a plot of the BER as a function of the SNR for both training schemes of Fig. 1 by averaging over 100 random channels. A typical plot is shown in Fig. 3. The program also plots the scatter diagrams of the transmitted, received, and equalized data at SNR = 25 dB. A typical plot is shown in Fig. 4.
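Both training schemes reduce to per-tone least-squares estimation of the channel followed by a time-domain truncation to the known channel length. A sketch of that step, with hypothetical pilot and noise settings (the variable names are not the project's):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 8   # number of tones and channel length, as in the project

# Hypothetical random channel and unit-modulus (QPSK) pilot tones.
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2 * M)
X = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, N))

Lam = np.fft.fft(h, N)                    # channel tones (diagonal of Lambda)
v = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Y = Lam * X + v                           # received pilot tones

# Per-tone LS estimate; truncating the IFFT to the known length M averages
# the pilot noise across tones.
h_hat = np.fft.ifft(Y / X)[:M]
err = np.linalg.norm(h - h_hat)
print(f"channel estimation error: {err:.4f}")
```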



Figure VII.2. Magnitude of the channel tones (Λ) in the frequency domain: exact channel tones (top plot), estimated tones using 64 tones in one symbol (middle plot), and estimated tones using 64 tones over 8 symbols (bottom plot).


Figure VII.3. BER versus SNR for the two training schemes of Fig. 10.7 using N = 64 (symbol size), P = 16 (cyclic prefix length), and M = 8 (channel length).



Figure VII.4. Scatter diagrams of the transmitted and received sequences (top row) and the recovered data using the exact channel and training scheme (a) from Fig. 10.7 (bottom row) at SNR = 25 dB.


Project VII.2 (Tracking Rayleigh fading channels) The programs that solve this project are the following.

1. partA.m This program generates two plots. One plot shows the ensemble-average learning curves for ε-NLMS, RLS, and extended RLS — see Fig. 5. With the values chosen for fD and Ts, the values of α and q turn out to be α = 0.99999999936835 and q = 1.263309457044670 × 10⁻⁹. That is, α is essentially equal to one and q is negligible, so that the time variation in w^o_i is slow. For this reason, there is practically no difference in performance between RLS and its extended version. In addition, in this situation, although RLS and extended RLS converge faster than ε-NLMS, the steady-state tracking performance of all algorithms is similar. Moreover, recall from Table 21.2 that the steady-state MSE performance of ε-NLMS and RLS can be approximated by

MSE_LMS = [µσ²_v / (2 − µTr(Ru))] E(1/‖ui‖²) + [µ⁻¹ / (2 − µTr(Ru))] Tr(Q)

MSE_RLS = [σ²_v (1 − λ)M + (1 − λ)⁻¹ Tr(QRu)] / [2 − (1 − λ)M]

where, in our case, Q = qI, Ru = I, M = 5, σ²_v = 0.001, and λ = 0.995. These expressions result in the theoretical values MSE_LMS ≈ −28.95 dB and MSE_RLS ≈ −29.77 dB, where the expectation E(1/‖ui‖²) is evaluated numerically via ensemble-averaging. The values obtained from simulation (by averaging the last 100 values of the ensemble-average error curves) are in good match with theory, namely, MSE_LMS ≈ −29.10 dB and MSE_RLS ≈ −28.57 dB (from simulation). In Fig. 6, we also plot ensemble-average curves for the mean-square deviation, E‖w̃i‖². It is again seen that the steady-state performances are similar and they tend to approximately −70 dB.
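The quoted numbers can be reproduced directly from the expressions above. In this sketch the noise floor σ²_v is added to the excess terms, and the step-size µ = 0.3 and the Gaussian-regressor identity E(1/‖ui‖²) = 1/(M−2) are assumptions (the project evaluates the expectation by ensemble-averaging and sets µ inside partA.m):

```python
import math

# Project parameters: Q = qI, Ru = I, M = 5, sigma_v^2 = 1e-3, lambda = 0.995.
M, sv2, lam = 5, 1e-3, 0.995
q = 1.263309457044670e-9
TrRu, TrQ, TrQRu = float(M), M * q, M * q

# RLS expression from above, with the noise floor sigma_v^2 added in.
emse_rls = (sv2 * (1 - lam) * M + TrQRu / (1 - lam)) / (2 - (1 - lam) * M)
mse_rls_db = 10 * math.log10(sv2 + emse_rls)

# LMS expression; mu and E(1/||u_i||^2) = 1/(M-2) (i.i.d. Gaussian
# regressors) are assumed values, not the project's.
mu, Einv = 0.3, 1.0 / (M - 2)
emse_lms = mu * sv2 * Einv / (2 - mu * TrRu) + (TrQ / mu) / (2 - mu * TrRu)
mse_lms_db = 10 * math.log10(sv2 + emse_lms)
print(f"MSE_RLS ~ {mse_rls_db:.2f} dB, MSE_LMS ~ {mse_lms_db:.2f} dB")
```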


Figure VII.5. Ensemble-average learning curves for ε−NLMS, RLS, and extended RLS when used in tracking a Rayleigh fading multipath channel with fD = 10 Hz and Ts = 0.8 µs.

2. partB.m This program generates two plots for the case fD = 80 Hz. One plot shows the ensemble-average learning curves for ε-NLMS, RLS, and extended RLS — see Fig. 7. With the values chosen for fD and Ts, the value of α is still close to one and the value of q is still small, namely, α = 0.99999995957410 and q = 8.085179670214160 × 10⁻⁸. For this reason, there is still practically no difference in performance between RLS and its extended version. Now, however, it is observed that the steady-state performance of ε-NLMS is superior:

MSELMS ≈ −28.34 dB, MSERLS ≈ −16.73 dB (simulation)

In Fig. 8, we also plot ensemble-average curves for the mean-square deviation, E‖w̃i‖².

3. partC.m This program generates two plots. One plot shows the ensemble-average learning and mean-square deviation curves for ε-NLMS, RLS, and extended RLS for the case of α = 0.95 and q = 0.1 — see



Figure VII.6. Ensemble-average mean-square deviation curves for ε−NLMS, RLS, and extended RLS when used in tracking a Rayleigh fading multipath channel with fD = 10 Hz and Ts = 0.8 µs.


Figure VII.7. Ensemble-average learning curves for ε−NLMS, RLS, and extended RLS when used in tracking a Rayleigh fading multipath channel with fD = 80 Hz and Ts = 0.8 µs.

Fig. 9, while the other shows the same curves for the case α = 1 and q = 0.1 — see Fig. 10. It is seen from these figures that the performance of extended RLS is now superior.
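The extended RLS recursion being compared exploits the model w^o_{i+1} = α w^o_i + q_i and amounts to a Kalman filter for that state-space model. A minimal sketch of the recursion for the α = 0.95, q = 0.1 case (this is the generic Kalman form with assumed signal settings, not the book's array implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
M, alpha, q, sv2 = 5, 0.95, 0.1, 1e-3

w_true = rng.standard_normal(M)      # channel following w <- alpha*w + q_i
w, P = np.zeros(M), np.eye(M)        # filter state and error covariance
sq_err = []
for i in range(2000):
    u = rng.standard_normal(M)
    d = u @ w_true + np.sqrt(sv2) * rng.standard_normal()
    e = d - u @ w                                    # a priori error
    k = P @ u / (sv2 + u @ P @ u)                    # gain vector
    w = alpha * (w + k * e)                          # measurement + time update
    P = alpha ** 2 * (P - np.outer(k, u @ P)) + q * np.eye(M)
    w_true = alpha * w_true + np.sqrt(q) * rng.standard_normal(M)
    sq_err.append(float(e * e))
print(f"steady-state MSE ~ {np.mean(sq_err[-500:]):.3f}")
```

With q this large, the steady-state error is dominated by the channel drift rather than by the measurement noise.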



Figure VII.8. Ensemble-average mean-square deviation curves for ε−NLMS, RLS, and extended RLS when used in tracking a Rayleigh fading multipath channel with fD = 80 Hz and Ts = 0.8 µs.


Figure VII.9. Ensemble-average learning (left) and mean-square deviation (right) curves for ε−NLMS, RLS, and extended RLS when used in tracking a multipath channel with α = 0.95 and q = 0.1.


Figure VII.10. Ensemble-average learning (left) and mean-square deviation (right) curves for ε−NLMS, RLS, and extended RLS when used in tracking a multipath channel with α = 1 and q = 0.1.


PART VIII: ARRAY ALGORITHMS



COMPUTER PROJECT

Project VIII.1 (Performance of array implementations in finite precision) The program that solves this project is the following.

1. partA.m The program generates ensemble-average learning curves for RLS and its QR array variant using three choices for the number of bits, {30, 25, 16}, and averaging over 100 experiments. A typical output is shown in Fig. 1. It is seen that the QR array method is superior in finite-precision implementations. For instance, at 16 bits, RLS fails while the QR method still functions properly.
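A common way to emulate the finite-precision arithmetic in such experiments is to round every signal and coefficient to a fixed number of fractional bits after each operation. A sketch of that quantizer (the sign-bit convention here is an assumption; the project's quantizer may differ):

```python
import numpy as np

def quantize(x, B):
    """Round x to a fixed-point grid with one sign bit and B-1 fractional
    bits, the kind of rounding applied after each arithmetic operation."""
    scale = 2.0 ** (B - 1)
    return np.round(np.asarray(x) * scale) / scale

# The rounding error halves with every extra bit, which is why RLS survives
# at 30 bits but breaks down by 16 bits while the QR array form does not.
grid = np.linspace(-1.0, 1.0, 1001)
errs = {B: float(np.max(np.abs(quantize(grid, B) - grid))) for B in (30, 25, 16)}
for B, e in errs.items():
    print(f"B={B:2d} bits: max rounding error {e:.3e}")
```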


Figure VIII.1. Ensemble-average learning curves for RLS and its QR array variant for three choices of the number of quantization bits.


PART IX: FAST RLS ALGORITHMS



COMPUTER PROJECT

Project IX.1 (Stability issues in fast least-squares) The programs that solve this project are the following.

1. partA.m This program generates ensemble-average learning curves for four fast fixed-order implementations. A typical output is shown in Fig. 1. It is seen that in floating-point precision, the algorithms behave rather similarly.


Figure IX.1. Ensemble-average learning curves for four fast fixed-order filter implementations in floating-point precision.

2. partB.m This program generates ensemble-average learning curves for four fast fixed-order filters using quantized implementations with 36 bits for both signals and coefficients. A typical output is shown in Fig. 2. It is seen that, even at this number of bits, the standard FTF implementation becomes unstable. When implemented with the rescue mechanism (39.10), the rescue procedure is called upon on several occasions but it still fails to recover filter performance. The stabilized FTF implementation also fails in this experiment. Only the fast array implementation is able to maintain the original filter performance in this case.



Figure IX.2. Ensemble-average learning curves for four fast fixed-order filter implementations averaged over 100 experiments in a quantized environment with 36 bits.


PART X: LATTICE FILTERS



COMPUTER PROJECT

Project X.1 (Performance of lattice filters in finite precision) The programs that solve this project are the following.

1. partA.m This program generates ensemble-average learning curves for the various lattice filters for different choices of the number of bits. The results are shown in Figs. 1 through 5. All filters work well at 35 bits, but some filters start facing difficulties at lower numbers of bits. It seems from the figures that the array lattice form is the most reliable in finite precision, while the a posteriori lattice form with error feedback is the least reliable.


Figure X.1. Ensemble-average learning curves for various lattice implementations using 35 bits.

2. partB.m This program generates ensemble-average learning curves for the various lattice filters in the presence of an impulsive interference at iteration i = 200 and assuming floating-point arithmetic. The result is shown in Fig. 6. It is seen that all algorithms recover from the effect of the impulsive disturbance.

3. partC.m This program generates ensemble-average learning curves for the various lattice filters in the presence of an impulsive interference at iteration i = 200 and assuming finite-precision arithmetic with B = 20 and B = 16 bits. The results are shown in Figs. 7 and 8.



Figure X.2. Ensemble-average learning curves for various lattice implementations using 25 bits.


Figure X.3. Ensemble-average learning curves for various lattice implementations using 20 bits.



Figure X.4. Ensemble-average learning curves for various lattice implementations using 16 bits.


Figure X.5. Ensemble-average learning curves for various lattice implementations using 10 bits.



Figure X.6. Ensemble-average learning curves for various lattice implementations in floating-point precision with an impulsive disturbance occurring at iteration i = 200.


Figure X.7. Ensemble-average learning curves for various lattice implementations in 20-bit precision with an impulsive disturbance occurring at iteration i = 200.



Figure X.8. Ensemble-average learning curves for various lattice implementations in 10-bit precision with an impulsive disturbance occurring at iteration i = 200.


PART XI: ROBUST FILTERS



COMPUTER PROJECT

Project XI.1 (Active noise control) The programs that solve this project are the following.

1. partA.m This program generates a plot of the function g(α), which is shown in Fig. 1. The plot on the right zooms on the interval α ∈ [0, 0.1]. It is seen that g(α) < 1 for α < 0.062. Also, g(α) is minimized at α = 0.031. Note however that for all values of α in the range α ∈ (0, 0.062), the function g(α) is essentially invariant and assumes values very close to one.

Figure XI.1. A plot of the function g(α) (left) and a zoom on the interval α ∈ (0, 0.062).

2. partB.m This program generates ensemble-average mean-square deviation curves (i.e., E‖w̃i‖²) for several values of α using binary input data. A typical output is shown in Fig. 2.


Figure XI.2. Ensemble-average mean-square deviation curves of the filtered-error LMS algorithm for various choices of α and for binary input data.

3. partC.m This program generates ensemble-average learning curves for the filtered-error algorithm for several values of α and using both binary input and white Gaussian input. Typical outputs are shown in Figs. 3 and 4. It is observed that the algorithm shows unstable behavior for values of α beyond 0.5.

4. partD.m This program generates ensemble-average learning curves for four algorithms with α = 0.15. A typical output is shown in Fig. 5. It is seen that NLMS shows the best convergence performance, followed by filtered-error LMS, modified filtered-x LMS, and filtered-x LMS. It is also seen that the modification to FxLMS improves convergence.
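For reference, the filtered-x LMS mechanism compared here filters the reference signal through an estimate of the secondary path and runs LMS on that filtered regressor. A minimal sketch with a hypothetical primary path p and secondary path s (the project's actual paths and its α-based step-size are not reproduced, and the secondary-path estimate is taken to be exact):

```python
import numpy as np

rng = np.random.default_rng(3)

Mw = 8                                   # controller length
p = np.array([1.0, 0.5, 0.25])           # hypothetical primary path P(z)
s = np.array([1.0, 0.4])                 # hypothetical secondary path S(z)
x = rng.standard_normal(20000)           # white reference signal
d = np.convolve(x, p)[: len(x)]          # disturbance at the error microphone

xf = np.convolve(x, s)[: len(x)]         # reference filtered through S(z)
mu, w = 0.01, np.zeros(Mw)
ybuf = np.zeros(len(s))                  # recent controller output samples
e_sq = []
for i in range(Mw - 1, len(x)):
    xi = x[i - Mw + 1 : i + 1][::-1]
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = xi @ w                     # current anti-noise sample
    e = d[i] - s @ ybuf                  # residual noise after the secondary path
    xfi = xf[i - Mw + 1 : i + 1][::-1]
    w += mu * xfi * e                    # LMS on the filtered regressor
    e_sq.append(float(e * e))
print(f"initial MSE ~ {np.mean(e_sq[:100]):.2f}, final MSE ~ {np.mean(e_sq[-1000:]):.2e}")
```

Filtering the regressor (rather than the error, as in FeLMS) is what keeps the gradient aligned with the true error surface when a secondary path sits between the controller and the error sensor.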



Figure XI.3. Ensemble-average learning curves of filtered-error LMS for various choices of α and for binary input.


Figure XI.4. Ensemble-average learning curves of filtered-error LMS for various choices of α and for Gaussian input.



NLMS

Figure XI.5. Ensemble-average learning curves of NLMS, FeLMS, FxLMS andmFxLMS for α = 0.15 and white Gaussian input.