
Quantization Effect of Implementing Adaptive Algorithms in a Fixed-Point DSP

Macario O. Cordel II
Electrical and Electronics Engineering Institute

University of the Philippines, Diliman

Abstract — New adaptive algorithms for digital NEC are being proposed and evaluated against recommendations such as ITU-T G.168. Performance evaluation of new adaptive algorithms usually relies on simulation, which assumes infinite precision. When implemented in a fixed-point DSP, the assumption of infinite precision no longer holds because values are quantized according to the capability of the DSP. Quantization introduces error in each iteration and thus affects the performance of the algorithm. This paper presents the effect of quantization and suggests several considerations for simulating adaptive algorithms.

I. INTRODUCTION

The digital network echo canceller (NEC), shown in figure 1, is one of the many applications of adaptive systems. The adaptive filter system produces a replica of the echo reflected back at the receiving end so that the echo can be removed before it interferes with other signals. The algorithm for this adaptive system must converge fast enough, and attenuate the echo strongly enough, to be acceptable as an echo-canceller algorithm. The ITU-T G.168 recommendation suggests several test procedures and standards for evaluating the performance of an adaptive algorithm for digital NEC.

Figure 1. Adaptive Echo-Canceller System

The most popular and basic algorithm available for adaptive systems today is the least-mean-squares (LMS) algorithm. The LMS algorithm became popular because of its simplicity and fast convergence rate. LMS updates the FIR model of the 'unknown system' using the past values of the input signal:

w(n+1) = w(n) + µe(n)x(n) (1)

where w(n) is the weight vector, x(n) is the input vector, and e(n) is the error signal, i.e., the difference between the desired signal d(n) and the inner product of the weight vector and the input vector:

e(n) = d(n) − wT(n)x(n) (2)

µ corresponds to the step size of the algorithm, which also determines the convergence of the algorithm.

Because of its simplicity and fast convergence, modifications of the LMS algorithm for better performance were presented. Some of these are NLMS, DNLMS and DNLMS-M-max algorithms.

The Normalized-LMS algorithm is one of the most common adaptive filtering algorithms used in echo cancellation [1]. It is a straightforward version of the classical LMS algorithm, except that the adaptation constant is normalized with respect to the signal power [2]. Normalizing the adaptation constant reduces the divergence of the algorithm. Its governing equations are:

w(n+1) = w(n) + µ(n)e(n)x(n) (3)

µ(n) = α / (β + ||x(n)||²) (4)

where α is the adaptation constant, β is a small constant preventing division by zero, and ||·||² is the l2-norm operation.
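A single NLMS iteration per eqs. (2)-(4) can be sketched as follows (a minimal Python sketch; the function name is illustrative, and the default α and β are the values used in the simulation setup of section III):

```python
import numpy as np

def nlms_step(w, x, d, alpha=0.5, beta=0.008):
    """One NLMS iteration; returns the updated weights and the a-priori error.

    w : current weight vector, x : input vector of the most recent samples,
    d : desired sample. alpha and beta follow eqs. (3)-(4).
    """
    e = d - np.dot(w, x)                # a-priori error, eq. (2)
    mu = alpha / (beta + np.dot(x, x))  # normalized step size, eq. (4)
    return w + mu * e * x, e            # weight update, eq. (3)
```

Iterating this step on noise-free data drives the weights toward the unknown system's impulse response.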

The Delayed-NLMS algorithm, a modification of the NLMS, has the advantage of allowing pipelining of the error feedback by inserting a delay of D samples, resulting in the equation below:

w(n+1) = w(n) + µ(n−D)e(n−D)x(n−D) (5)

The DNLMS M-max algorithm, a hybrid of the DNLMS and the partial-update algorithm, is an improvement of the DNLMS; in fact, the M-max modification was first applied to the NLMS, as proposed in [4], to lessen the computational burden of the NLMS. The M-max algorithm partially updates the weight vector by



selecting the M largest elements of the input vector. The M largest input elements are first determined, and the corresponding weight elements are then updated. This effectively reduces the demand on memory resources and computation when implemented in DSPs. The coefficient update equation for the DNLMS M-max is

wi(n+1) = wi(n) + µ(n−D)e(n−D)xi(n−D) for i in the index set of the M largest |xi(n−D)|, and wi(n+1) = wi(n) otherwise (6)

where xi(n−D) denotes the i-th element of the delayed input vector.
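The selective update above can be sketched as follows (a hypothetical helper, assuming the scalar mu_e already holds the computed product µ(n−D)e(n−D)):

```python
import numpy as np

def mmax_update(w, x, mu_e, M):
    """Partial (M-max) coefficient update: only the taps aligned with the
    M largest-magnitude input samples are adapted; the rest are left as-is."""
    idx = np.argsort(np.abs(x))[-M:]  # indices of the M largest |x_i|
    w = w.copy()
    w[idx] += mu_e * x[idx]           # update only the selected coefficients
    return w
```

Only M of the N multiply-accumulate update operations are performed per sample, which is the source of the computational saving.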

It is important to note that the major consideration for the convergence of the said NLMS-based algorithms is the step size µ, whose range depends on the eigenvalues of the input autocorrelation matrix. If noise, such as quantization error, is added to the input vector, the probability that µ exceeds its limit becomes higher.

In this study, the performance of the NLMS, DNLMS and DNLMS M-max algorithms when implemented in a fixed-point DSP is studied, specifically the effect of quantization. Section II presents this effect based on the product-round-off-error-before-addition model. Section III shows the simulated performance of the said algorithms and confirms the mathematical model of section II. Lastly, section IV gives the summary, findings and conclusions on the effect of quantization on the performance of adaptive algorithms.

II. MODELING OF QUANTIZATION EFFECT

The adaptive algorithm was implemented as shown in the block diagram in figure 2. The red boundary indicates the section implemented in the DSP, while the blue markers indicate the points where quantization happens.

In digital networks, received signals pass from a 4-wire circuit to a 2-wire circuit through a transformer. Ideally, this transformer transfers all energy from one side to the other. However, due to imperfect impedance matching, replicas of the signal, called echo, are reflected back. This returned signal can be modeled as the sum of the received signal passing through an 'unknown system', called the echo path, and noise.

Quantization Noise Model for Adaptive Filters

When quantization is not considered in simulation, the input signal is simply scaled so that no absolute value is greater than 1. When quantization is considered, the input signal is likewise scaled so that the absolute values of the input vector are not greater than 1. The scaling factor used in this study is the lowest power of two that is larger than the maximum of the input vector.

Figure 2. Block diagram for Adaptive System in Digital Networks with quantization location

After scaling, the next step is quantizing into 2^(nbits−1) levels, where nbits is the width of the data registers of the DSP. Quantization in this study is implemented by rounding. It is performed at the input of the DSP and after every multiplication inside it. Figure 2 shows the quantization at the input, while figure 3 shows the quantization after multiplications and its statistical model.

The individual error sequence ei[n] is a statistical representation of the round-off error after multiplication and has the following assumptions [5]:

a) ei[n] is a sample of a stationary white-noise process, with each sample uniformly distributed over the quantization error range;

b) the quantization sequence {ei[n]} is uncorrelated with the unquantized sequence {v[n]}, the input sequence {x[n]}, and all other quantization noise sources.


Page 3: Quantization Effect of Implementing Adaptive …read.pudn.com/downloads173/doc/806201/Documentation.doc · Web viewQuantization Effect of Implementing Adaptive Algorithms in a Fixed-Point


Figure 3. a) Quantization location inside the adaptive filter and b) its statistical model

Being the case, and assuming further that the ei[n] are statistically independent of each other, each error source develops round-off noise at the output of the digital filter. Figure 3b gives the equivalent statistical model. If σ²o denotes the variance of each individual noise source at the output of a multiplier, then the total noise power is simply the total noise variance, which is

σ²T = Nσ²o = Nδ²/12 (7)

where N is the number of branches, which for simplicity equals the filter length, and δ is the quantization interval, equal to 2^(−b+1), where b is the number of bits of the DSP. Table 1 gives the total noise variance for our filter length N = 131 and different data lengths.

Table 1. Quantization noise variance for N = 131

        4 bits     8 bits      16 bits   32 bits
σ²T     −7.68 dB   −31.76 dB   −80 dB    −176 dB
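The entries of Table 1 follow directly from σ²T = Nδ²/12 with δ = 2^(−b+1); a small Python check (the function name is illustrative):

```python
import math

def total_roundoff_noise_db(n_bits, N=131):
    """Total product round-off noise power in dB: sigma_T^2 = N * delta^2 / 12,
    with quantization interval delta = 2**(-(n_bits - 1))."""
    delta = 2.0 ** (-(n_bits - 1))
    sigma_t2 = N * delta * delta / 12.0
    return 10.0 * math.log10(sigma_t2)
```

Evaluating at 4, 8, 16 and 32 bits reproduces the tabulated values to within rounding.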

Signal-to-Noise Ratio after input scaling

From [5], the signal-to-noise ratio after scaling is given by

SNR = 6.02b + 16.81 − 20 log10( K / (A·σx) ) dB (8)

where b is the number of bits minus 1, K equals the full-scale range, A is the scaling factor, and σx is the rms value of the input. Note that in simulation, for simplicity, σx equals the square root of the signal power, which is 1 since the generated signal is white Gaussian noise with 0 dB power. From this equation, the SNR for different numbers of bits can be computed and modeled.

Table 2 summarizes the signal-to-quantization-noise ratio for different bit lengths with RFS = 4.0591 and A = 8 (using POT scaling). The input data is white Gaussian noise (WGN) with 0 dB power, the same input used for this study.

Table 2. SNR after scaling WGN with 0 dB input power (quantization at the input vector), K = 4.3109, A = 8

4 bits      40.24 dB
8 bits      64.32 dB
16 bits     112.48 dB
32 bits     208.80 dB

Table 3. SNR after scaling the echo with 0 dB input power (quantization of the echo), K = 15984, A = 16384

4 bits      35.08 dB
8 bits      59.16 dB
16 bits     107.32 dB
32 bits     203.64 dB
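Both tables can be reproduced from the SNR-after-scaling expression above, SNR = 6.02b + 16.81 − 20·log10(K/(A·σx)) dB with b = bits − 1; a small Python check (the function name is illustrative; K and A are taken from the table headers):

```python
import math

def snr_after_scaling_db(n_bits, K, A, sigma_x=1.0):
    """SNR after input scaling: 6.02*b + 16.81 - 20*log10(K/(A*sigma_x)) dB,
    with b = n_bits - 1 (sign bit excluded)."""
    b = n_bits - 1
    return 6.02 * b + 16.81 - 20.0 * math.log10(K / (A * sigma_x))
```

Each additional 4 bits buys 4 × 6.02 ≈ 24 dB of SNR, which is the spacing visible in both tables.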

Based on tables 2 and 3, the SNR is greatly affected by the number of bits: as the number of bits increases, precision increases and the system experiences less quantization noise. As the number of bits decreases, the resolution becomes coarser and the information between quantization levels is discarded, amplifying the quantization noise.

Equation 8 for the SNR reveals that, to maximize the SNR, the full-scale range should be close to the scaling factor. This is done by choosing the scaling factor to be equal to the full-scale range. However, it is practical to choose the lowest power-of-two factor so that division can be performed by a shifting operation.

The total noise at the output, combining all the noise sources, is approximately equal to that of the highest noise contributor, which is the noise due to quantization in the FIR filter.



III. SIMULATION RESULT

In this set of simulations, the performance of the NLMS, DNLMS and DNLMS M-max adaptive algorithms on different fixed-point DSPs is investigated. The data widths considered are 4-bit, 8-bit, 16-bit and 32-bit. Though a 4-bit DSP is very rare, if not non-existent, in the industry, quantifying the effect of very low resolution is necessary to see the effect of quantization. Simulations are carried out using the impulse response with the longest dispersion given in the ITU-T G.168 recommendation, shown in figure 4.

Figure 4. Echo path model

The simulation is presented as follows: first, the learning curves of each algorithm are shown, superimposing the respective curves of the 4-bit, 8-bit, 16-bit and 32-bit implementations. Next, the magnitude responses of the converged FIR models are presented for the different implementations. Third, the effect of quantization error on the convergence factor µ and on the stability of the system is observed. Lastly, the relationship between the simulation results and the model in section II is established.

For the whole simulation, except for some conditions, the input signal x[n] is white Gaussian noise (WGN) with signal-to-noise ratio (SNR) of 200dB prior to quantization. The echo return loss (ERL), which is the attenuation of input signal after passing the echo path, is assumed to be 6dB. The filter length is chosen such that it is equal to the filter length of the echo path, which is 131. The adaptation constants for the NLMS-based algorithms are α = 0.5, β = 0.008. The mean-squared-error (MSE) is calculated as the average instantaneous squared error over 100 trials.

Scaling is done by first getting the lowest power-of-two (PO2) factor that is greater than or equal to the maximum value in the range. This PO2 factor is the scaling factor of the input vector, such that all values are less than or equal to 1. After scaling, the scaled vector is quantized prior to the input of the 'DSP'. Furthermore, multiplications inside the 'DSP' are quantized before accumulation, to be consistent with the round-off model in section II.
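The scaling-plus-rounding front end described above can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
import math
import numpy as np

def pot_scale_and_quantize(x, n_bits):
    """Scale by the lowest power of two >= max|x| so that |x| <= 1, then round
    to 2**(n_bits - 1) levels, mimicking the fixed-point DSP input stage."""
    A = 2.0 ** math.ceil(math.log2(np.max(np.abs(x))))  # POT scaling factor
    q = 2.0 ** (n_bits - 1)                             # quantization levels
    return np.round(x / A * q) / q, A
```

Because A is a power of two, the division x / A reduces to an arithmetic shift on the DSP, which is why POT scaling is preferred over exact full-range scaling.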

Figures 5 to 7 show the learning curves, or MSE plots, for the NLMS, DNLMS and DNLMS M-max algorithms, respectively. In figure 5, the unquantized MSE plot of the NLMS and that of the 32-bit implementation show no significant difference. At 16 bits, however, an increase of around 100 dB of noise is experienced, which raises the minimum MSE from around −180 dB to −80 dB. Decreasing the bit representation to 8 bits raises the minimum MSE further, from about −80 dB up to about −35 dB, which is still a fairly tolerable condition. Decreasing the bit representation further to 4 bits results in an adaptive filter which 'does not learn'.

Figure 5. Learning curves for different quantization in NLMS implementation



Figure 6. Learning curves for different quantization in DNLMS implementation, D = 32

Figure 7. Learning curves for different quantization in DNLMS M-max implementation, D = 32, M = 32

Inspection of the learning curves for the DNLMS and DNLMS M-max algorithms in figures 6 and 7 reveals that the same amount of noise is added to the final MSE value for a given implementation. Specifically, for the 16-bit representation, 100 dB of noise power is added to the final MSE value; for the 8-bit representation, around 45 dB of noise is added; and for the 4-bit implementation, no learning is evident.

Figures 8 to 10 show the magnitude responses of the converged FIR models of the echo path. Again, the 32-bit and unquantized magnitude responses show no significant difference.

For the NLMS FIR model in figure 8, all fixed-point representations copied the magnitude response of the unquantized filter except the 4-bit representation. The 16-bit and 32-bit representations copied the unquantized response fairly well, while the 8-bit representation has its midband gain offset by +10 dB.

For the DNLMS FIR model in figure 9, every fixed-point implementation at least resembles the unquantized magnitude response, except the 4-bit representation. The 32-bit implementation again models the unquantized response accurately, and the 16-bit representation in the DNLMS case models the response nearly as faithfully as the 32-bit one. However, the 8-bit representation is 10 dB higher than the 16-bit, the same as in the NLMS case.

For the DNLMS M-max magnitude response in figure 10, the 32-bit, 16-bit, 8-bit and unquantized FIR models show almost the same magnitude responses, especially in the midband region. The 4-bit representation is again unable to follow or adapt to the 'unknown response', as supported by its learning curves in figures 5 to 7.

Figure 8. Magnitude Response of the FIR model after convergence, using NLMS adaptive algorithm implemented in different quantization length

Figure 9. Magnitude Response of the FIR model after convergence, using DNLMS adaptive algorithm implemented in different quantization lengths, D = 32

It is interesting to observe the behavior of the step size µ for the 4-bit representation, since among the four quantization lengths it is the one whose learning curve does not converge to a minimum value.

The convergence factor µ is a very important factor in LMS-based algorithms because it determines the convergence of the algorithm; theoretically, µ should be in the range

0 < µ < 2 / tr[R]


Figure 10. Magnitude Response of the FIR model after convergence, using DNLMS M-max adaptive algorithm implemented in different quantization lengths, D = 32, M = 32

in order for the algorithm to converge. tr[R] in the upper bound of the range is the trace of the input autocorrelation matrix, which is equal to the sum of the eigenvalues of R. It can be shown that tr[R] is also equal to the power of the input vector signal. A value of µ outside these limits results in nonconvergence.
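The claim that tr[R] equals both the sum of the eigenvalues of R and the power of the input vector can be checked numerically; a minimal sketch (N = 8 is an illustrative filter length, not the paper's N = 131):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)  # white input with unit power
N = 8                             # illustrative filter length

# Sample autocorrelation matrix R of the length-N input vector
X = np.lib.stride_tricks.sliding_window_view(x, N)
R = (X.T @ X) / X.shape[0]

# tr[R] equals the sum of the eigenvalues exactly, and the power of the
# length-N input vector (N times the sample power) up to estimation error.
assert np.isclose(np.trace(R), np.sum(np.linalg.eigvalsh(R)))
assert abs(np.trace(R) - N * np.var(x)) / np.trace(R) < 0.02
```

Estimating tr[R] as the input-vector power is what makes the bound 2/tr[R] cheap to evaluate at run time, since no eigendecomposition is needed.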

Figure 11 shows the unquantized values of µ(n). It varies with time because the step size is a function of the input vector, as given in eq. 4. The upper limit is not of much concern because µ(n) is normalized. The lower bound, which is quite near zero, must be watched, especially once quantization error comes into play. Inevitably, the quantization of the update term µ(n)e(n)x(n) must be implemented.

To see the effect of quantizing the update term µ(n)e(n)x(n), it may be assumed that upon quantization e(n) and x(n) stay unchanged and µ(n) is the factor that changes in the process. That is,

Q[µ(n)e(n)x(n)] = µq(n)e(n)x(n) (9)

where Q[·] denotes the quantization process and µq(n) is the effective quantized step size.

Figure 12 shows the effective 'new' values of µ(n) due to quantization. Comparing them with the unquantized values of µ(n) in figure 11, the 16-bit and 32-bit implementations keep µ(n) above the lower limit of 0. For the 8-bit implementation, µ(n) lies in the range [0.1335, 0.2117].

This range is not small enough to cause nonconvergence, but it does cause a high minimum MSE. For the 4-bit implementation, the range of µ(n) values is [0, 0.1656]. Since this range includes 0, which is outside the range of allowable values of µ(n) for convergence, the MSE for this representation is expected to be nonconvergent. Figure 13 shows a more detailed picture of the effective µ(n) after quantization.
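The collapse of the 4-bit effective step size to zero can be reproduced with a one-line rounding quantizer (a minimal sketch; the sample value 0.02 is an arbitrary small update term, not data from the paper):

```python
def quantize_round(v, n_bits):
    """Round v to the nearest multiple of the quantization step 2**-(n_bits - 1)."""
    step = 2.0 ** (-(n_bits - 1))
    return round(v / step) * step

# A small update term survives 16-bit rounding but collapses to zero at
# 4 bits (step 0.125): an effective step size of zero stops the adaptation.
update = 0.02
assert quantize_round(update, 16) != 0.0
assert quantize_round(update, 4) == 0.0
```

Any update term smaller than half the quantization step is rounded to zero, which is exactly the failure mode of the 4-bit implementation.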

Figure 11. NLMS step size, µ

Figure 12. NLMS effective “quantized step-size, µ”

Comparison of the round-off model and the simulation results

In section II, the quantization error is estimated as the sum of all the errors contributed by the individual quantization sources, i.e.,

σ²total = σ²input + σ²echo + σ²FIR (11)



where the input power is set to 1 watt. To determine the contribution of each individual noise source, each effect is isolated by quantizing only the source concerned and leaving the other points unquantized. For example, to isolate the noise due to quantization of the echo signal, the echo signal is quantized in the simulation while the input signal and the FIR model are left unquantized. This plot is shown in figure 14. Comparing the MSE with the computed SNR in table 4, it can be said that the computed values estimate the MSE of the signal.

Figure 13. Effective µ(n) after quantization

Figure 14. Learning Curve when quantization is applied only on echo signal

Table 4. Computed quantization effect on noise power (using formula (8)) when the echo signal is quantized, K = 15984, A = 16384

4 bits      −35.08 dB
8 bits      −59.16 dB
16 bits     −113.34 dB
32 bits     −209.66 dB

Figure 15. Learning Curve when quantization is applied only on input signal and FIR model

a) Computed quantization effect on noise power when the input is quantized, K = 4.3109, A = 8

4 bits      −40.24 dB
8 bits      −64.32 dB
16 bits     −112.48 dB
32 bits     −208.80 dB

b) Computed quantization noise variance of the FIR model

4 bits      −7.68 dB
8 bits      −31.76 dB
16 bits     −80 dB
32 bits     −176 dB



Table 5. a) Computed quantization effect on noise power when the input is quantized. b) Quantization noise variance of the FIR model

Figure 15 shows the MSE when the input signal and the FIR model are quantized while the echo signal is left unquantized. Two sources are considered: the noise due to quantization of the input and the noise due to quantization of the FIR model. By comparing the plot with table 5, it can be deduced that the major contributor here is the quantization error of the FIR model.

Figure 16. Learning Curve when quantization is applied only on echo signal, input signal and FIR model

Table 6. Comparison of the total noise from the product-round-off-noise-before-addition model and the simulated result of the NLMS algorithm

            Model round-off noise   Simulated result
4 bits      −7.68 dB                −10.9 dB
8 bits      −31.76 dB               −31.79 dB
16 bits     −80 dB                  −78.65 dB
32 bits     −176 dB                 −174.9 dB

Finally, the echo signal, input signal and FIR model are all quantized together, and the result is compared with the computed error contribution of each source. From the previous plots and tables, the error due to quantization of the FIR model gives the largest contribution to the overall noise, as manifested in its MSE. Table 6 summarizes this, and figure 16 supports the data in the table.

IV. CONCLUSION

In this paper, the effect of quantization on the performance of the NLMS, DNLMS and DNLMS M-max algorithms has been considered. The interest in the effect of quantization stems from the fact that simulation assumes infinite precision and most of the time does not accurately represent the performance when implemented in a DSP.

For the three NLMS-based algorithms, NLMS, DNLMS and DNLMS M-max, simulation shows that the effect of quantization error on their MSE is the same. That is, for the 32-bit representation the three algorithms reach a minimum MSE of about −200 dB; for the 16-bit representation, about −80 dB; for the 8-bit, around −35 dB; while with the 4-bit representation the MSE of the three algorithms does not converge.

Isolating the effect of quantization at different locations of the adaptive system, it has been shown that the major contributor of quantization noise is the product round-off error in the FIR adaptive filter. In fact, the computed error of the FIR adaptive filter can be seen directly in the minimum mean-square error of the algorithm.

Furthermore, stability can be affected if the bit representation used is so low that the step size, which determines the convergence of the algorithm, is truncated or rounded off to zero.

RECOMMENDATION

Further study can be made on other algorithms to generalize the result. Product round-off error after addition can also be considered for comparison with this study because, in many hardware implementation schemes, multiplication is carried out as a multiply-add operation with the result stored in a double-precision register [5]. Product round-off error after addition has a noise variance of σ²o, compared with Nσ²o for product round-off error before addition.

REFERENCES

[1] Lee, R., Abdel-Raheem, E., Khalid, M.A.S., "Computationally-Efficient DNLMS-Based Adaptive Algorithms for Echo-Cancellation Application," Journal of Communications, Vol. 1, No. 7, Nov/Dec 2006.

[2] He, P.P., Dyba, R.A., Pessoa, L.F.C., "Network Echo Cancellers: Requirements, Applications and Solutions," Motorola Inc.

[3] Voltz, P., "Sample Convergence of the Normalized LMS Algorithm with Decreasing Step Size," Proc. IEEE Int. Conference on Acoustics, Speech, and Signal Processing, May 1999, pp. 2129-2132.

[4] Abdel-Raheem, E., "On Computationally-Efficient NLMS-Based Algorithms for Echo Cancellation Applications," Proc. IEEE International Symposium on Signal Processing and Information Technology, 2005.

[5] Mitra, S.K., Digital Signal Processing: A Computer-Based Approach, McGraw-Hill, New York, 2006.