Performance Analysis of Proportionate-Type Normalized Mean Square Algorithms

Kaio Douglas Teófilo Rocha

Miloš Doroslovački

1. Introduction

This work gathers various types of Least Mean Square (LMS) algorithms and analyzes their performance when applied to adaptive filtering. Special attention is given to the proportionate-type normalized LMS (PtNLMS) algorithms, of which seven examples are presented. In addition, each of these algorithms is given a version based on the affine projection format, giving rise to seven further algorithms. An extensive set of simulations is performed in order to compare, in several ways, the performance of these algorithms when working as the engine of a system identification process. Different impulse responses are assigned to the unknown system to be identified, but the system is mainly chosen to have a sparse impulse response, since a specific application, echo cancellation, is addressed in this work. The simulations also vary in terms of input signal and measurement noise. The simulations are carried out using MATLAB (MathWorks).

In Section 2, a mathematical description of the algorithms to be tested by simulation is given, together with a base framework for both the proportionate-type NLMS and affine projection algorithms.

In Section 3, the simulation results are presented. The simulations are divided into six groups. In the first three groups, three different unknown system impulse responses are tested; for each one, both white Gaussian noise and a colored noise signal are used as input. In simulation 4, the behavior of the algorithms is analyzed in the case in which a shift occurs in the original impulse response. In simulation 5, the input is a speech signal, and five experiments are performed, each with a different type of measurement noise. Finally, simulation 6 addresses the affine projection versions of the algorithms, as the previous five sets only contemplate the proportionate-type NLMS format.

Section 4 concludes this work with remarks on the results and suggestions for further topics that could be addressed and added to it.


2. Description of Proportionate-Type NLMS Algorithms

In this section, applications of the PtNLMS algorithms are shown in order to justify their study in this work. After the motivation is given, a unified framework is presented, so that the description of the different types of algorithms can be built on it. The description of the algorithms is separated into two parts: the first concerns the regular PtNLMS algorithms, and the second addresses the same algorithms modified to follow a different framework based on the affine projection algorithm.

2.1. Application of PtNLMS Algorithms

One of the main motivations for the study of PtNLMS algorithms is their use in network echo cancellation, in order to reduce the presence of delayed copies of the original signal [1]. In modern telephone networks, greater delays increase the need for echo cancellation techniques, requiring faster echo cancellation algorithms, especially when the echo path is sparse. A sparse echo path has a large percentage of its energy concentrated in only a few coefficients, whereas a dispersive echo path has its energy distributed more evenly among its coefficients [1]. In this work, greater attention is given to sparse echo paths, as the PtNLMS algorithms can show improved performance on sparse echo paths compared with standard algorithms such as the least mean square (LMS) and the normalized least mean square (NLMS) [1]. An example of a sparse echo path impulse response is shown in Figure 2.1.

[Figure: Amplitude versus Coefficient Index for a sparse impulse response of length 512.]

Figure 2.1. Sparse impulse response example.


2.2. Unified Framework for PtNLMS Algorithms

As this work is based on [1], the notation introduced by its authors is also used here, as well as the base model for the PtNLMS algorithms. Only real signals are considered in this study.

Regarding notation, the following conventions are used throughout this work. Vectors are denoted by bold lowercase letters, such as $\mathbf{x}$, and all vectors are column vectors unless stated otherwise. Scalars are denoted by Roman or Greek letters, such as $x$ or $\beta$. The $i$th component of a vector $\mathbf{x}$ is represented by $x_i$. Matrices are denoted by bold uppercase letters, such as $\mathbf{G}$. For time-varying vectors, the vector at time $k$ is written $\mathbf{x}(k)$.

Referring to the standard system identification block diagram in Figure 2.2, we introduce our framework. Consider an input signal $x(k)$ at time $k$ that enters an unknown system with impulse response $\mathbf{w}$. Let the output of the system be $y(k) = \mathbf{w}^T \mathbf{x}(k)$, where $\mathbf{x}(k) = [\,x(k),\ x(k-1),\ \ldots,\ x(k-L+1)\,]^T$ and $L$ is the length of the filter. Then, measurement noise $v(k)$ is added to this output, giving the measured output of the system, $d(k)$. The impulse response of the system is estimated with the adaptive filter coefficient vector (or weight vector) $\mathbf{w}(k)$, which has the same length $L$. The output of the adaptive filter is given by $y(k) = \mathbf{w}^T(k)\,\mathbf{x}(k)$. The error signal $e(k)$ is the difference between $d(k)$ and $y(k)$, and it is used to drive the adaptive algorithm.

Figure 2.2. Adaptive filtering system identification block diagram.

The base PtNLMS algorithm is presented in Table 2.1. The term $F[|w_l(k)|,k]$, where $l \in \{1,2,\ldots,L\}$, is the control law that governs how each coefficient is updated. The quantity $\gamma_{\min}$ is important when $F[|w_l(k)|,k]$ is zero in some cases; it sets the minimum gain a coefficient can receive. The constant $\delta_p \ge 0$, along with $\rho \ge 0$, prevents very small coefficients from stalling, especially at the beginning of adaptation, when the coefficients are zero. $\gamma_l(k)$ selects the maximum between the previously calculated $\gamma_{\min}$ and the actual coefficient gain, so that the gain matrix can be constructed. $\mathbf{G}(k) = \mathrm{Diag}\{g_1(k),\ldots,g_L(k)\}$ is the time-varying stepsize control diagonal matrix, with the $g_l(k)$ values on the principal diagonal and all other entries zero. The new estimated coefficient vector is then calculated, where $\beta$ is the fixed stepsize parameter and $\delta$ is typically a small positive number added to the denominator in order to avoid division by zero if the input is zero, $\mathbf{x}(k) = \mathbf{0}$.

Table 2.1. Base PtNLMS Algorithm with Time-Varying Stepsize Matrix

$$\mathbf{x}(k) = [\,x(k),\ x(k-1),\ \ldots,\ x(k-L+1)\,]^T$$
$$y(k) = \mathbf{x}^T(k)\,\mathbf{w}(k)$$
$$e(k) = d(k) - y(k)$$
$$F[|w_l(k)|,k] = \text{depends on the specific algorithm}$$
$$\gamma_{\min}(k) = \rho \max\{\delta_p,\ F[|w_1(k)|,k],\ \ldots,\ F[|w_L(k)|,k]\}$$
$$\gamma_l(k) = \max\{\gamma_{\min}(k),\ F[|w_l(k)|,k]\}$$
$$g_l(k) = \frac{\gamma_l(k)}{\frac{1}{L}\sum_{i=1}^{L}\gamma_i(k)}$$
$$\mathbf{G}(k) = \mathrm{Diag}\{g_1(k),\ \ldots,\ g_L(k)\}$$
$$\mathbf{w}(k+1) = \mathbf{w}(k) + \frac{\beta\,\mathbf{G}(k)\,\mathbf{x}(k)\,e(k)}{\mathbf{x}^T(k)\,\mathbf{G}(k)\,\mathbf{x}(k) + \delta}$$
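For concreteness, the update in Table 2.1 can be written as a short MATLAB function. This is a minimal sketch, not code from [1]: the names ptnlms_update and ctrl_law are ours, and ctrl_law is assumed to map the vector of coefficient magnitudes to the vector of $F[|w_l(k)|,k]$ values.

```matlab
% Minimal sketch of one iteration of the base PtNLMS algorithm (Table 2.1).
% w, x are L-by-1 column vectors; ctrl_law(abs(w)) returns F[|w_l(k)|,k].
function [w, e] = ptnlms_update(w, x, d, ctrl_law, beta, rho, delta_p, delta)
    F = ctrl_law(abs(w));                    % control law output, L-by-1
    e = d - x' * w;                          % error: e(k) = d(k) - y(k)
    gamma_min = rho * max([delta_p; F(:)]);  % minimum allowed gain
    gamma = max(gamma_min, F);               % per-coefficient gains
    g = gamma / mean(gamma);                 % normalize: (1/L)*sum(g) = 1
    gx = g .* x;                             % G(k)*x(k) without forming G
    w = w + beta * e * gx / (x' * gx + delta);  % weight update
end
```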

As mentioned in the previous subsection, we work with systems having sparse impulse responses, so we can use $\mathbf{w}(k) = \mathbf{0}$ as a starting point. As stated in [1], the PtNLMS algorithms rely on the fact that, since the system impulse response is sparse, most of the coefficients are zero, so most of the estimated weights are correct from the start. The weights corresponding to the active coefficients should then be updated towards their correct values as fast as possible, in order to speed up convergence.

The objective of the PtNLMS algorithms is, then, to determine which coefficients are zero and which are non-zero, as well as to assign gains to the estimated coefficients so that they reach their true values with convergence speed increased as much as possible. These tasks are performed by the control law, which manages the way gains are assigned to the estimated coefficients and applies criteria to distinguish which coefficients are closer to or farther from their true values. Most of the PtNLMS control laws assign the minimum possible gains to the coefficients they detect to be close to the true values. The main difference among them is how they assign gains to the coefficients that are not near the true values, and this is the subject of the next subsection.

2.3. Proportionate-Type Normalized Least Mean Square Algorithms


Now that the motivation and a unified framework have been given for the study of the PtNLMS algorithms, they are presented individually and in further detail in this subsection.

2.3.1. Normalized Least Mean Square Algorithm (NLMS)

We begin with the NLMS algorithm, which, strictly speaking, is not a proportionate-type algorithm, as its main characteristic is to assign the same gain to all the estimated coefficients, regardless of their distance from the true values.

Referring to our base algorithm in Table 2.1, to obtain the NLMS algorithm we choose the control law $F[|w_l(k)|,k] = 1$, which leads to a gain matrix $\mathbf{G}(k) = \mathbf{I}$ for all $k$.

2.3.2. Proportionate Normalized Least Mean Square Algorithm (PNLMS)

As the name suggests, the PNLMS is an actual proportionate-type algorithm, the simplest of them. The gain assigned by its control law is proportional to the magnitude of the estimated coefficients, i.e.,

$$F[|w_l(k)|,k] = |w_l(k)|, \quad 1 \le l \le L.$$

It is based on the assumption that the system impulse response is sparse, so that coefficients with larger magnitudes should adapt faster than those closer to zero [1].
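As an illustration, the PNLMS control law can be plugged into the ptnlms_update sketch above as a function handle; the numeric values follow Table 3.1, and the NLMS handle is shown for contrast.

```matlab
% Sketch: control laws as handles for the base routine above.
ctrl_pnlms = @(a) a;               % F[|w_l(k)|,k] = |w_l(k)|
ctrl_nlms  = @(a) ones(size(a));   % NLMS: uniform gain, G(k) = I
% One update with the Table 3.1 parameters (beta, rho, delta_p, delta):
[w, e] = ptnlms_update(w, x, d, ctrl_pnlms, 0.3, 0.01, 0.01, 1e-4);
```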

2.3.3. Improved Proportionate Normalized Least Mean Square Algorithm (IPNLMS)

The IPNLMS algorithm is a hybrid between the NLMS and PNLMS algorithms, in which the user can control how closely it behaves like either of the two base algorithms. Its control law is given by

$$F[|w_l(k)|,k] = (1-\alpha_{\text{IPNLMS}})\,\frac{\|\mathbf{w}(k)\|_1}{L} + (1+\alpha_{\text{IPNLMS}})\,|w_l(k)|,$$

where $\|\mathbf{w}(k)\|_1 = \sum_{j=1}^{L} |w_j(k)|$ is the $L_1$ norm of the estimated coefficient vector. The parameter $\alpha_{\text{IPNLMS}}$, with $-1 \le \alpha_{\text{IPNLMS}} \le 1$, defines whether the algorithm behaves more like the NLMS or the PNLMS algorithm, or somewhere in between the two. Note that if $\alpha_{\text{IPNLMS}} = -1$ the algorithm behaves exactly as the NLMS algorithm, and if $\alpha_{\text{IPNLMS}} = 1$ it behaves exactly as the PNLMS algorithm instead.

This algorithm adopts $\rho = 0$, so the $\gamma_{\min}$ logic presented in Table 2.1 is not needed. The gain vector components are then given by

$$g_l(k) = \frac{F[|w_l(k)|,k]}{\sum_{j=1}^{L} F[|w_j(k)|,k]} = \frac{1-\alpha_{\text{IPNLMS}}}{2L} + (1+\alpha_{\text{IPNLMS}})\,\frac{|w_l(k)|}{2\,\|\mathbf{w}(k)\|_1}.$$

To avoid division by zero, which may happen at the beginning of adaptation when the estimated coefficients are close to zero, a modified expression for the gain vector elements is used in practice:

$$g_l(k) = \frac{1-\alpha_{\text{IPNLMS}}}{2L} + (1+\alpha_{\text{IPNLMS}})\,\frac{|w_l(k)|}{2\,\|\mathbf{w}(k)\|_1 + \epsilon_{\text{IPNLMS}}},$$

where $\epsilon_{\text{IPNLMS}}$ is a small positive number [1], similar to the $\delta$ parameter in the $\mathbf{w}(k+1)$ expression in Table 2.1.
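In code, the practical IPNLMS gain can be computed directly, bypassing the $\gamma$ logic since $\rho = 0$. A sketch using the Table 3.1 values follows; note that under this normalization the gains sum to approximately 1 rather than $L$, matching the text.

```matlab
% Practical IPNLMS gain vector (rho = 0, so no gamma_min step is needed).
alpha_ip = 0;        % alpha_IPNLMS in [-1, 1]; 0 is the Table 3.1 choice
eps_ip   = 1e-4;     % epsilon_IPNLMS, guards against division by zero
L = length(w);
g = (1 - alpha_ip) / (2 * L) ...
    + (1 + alpha_ip) * abs(w) / (2 * norm(w, 1) + eps_ip);
```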

2.3.4. μ Proportionate Normalized Least Mean Square Algorithm (MPNLMS)

The MPNLMS algorithm considers a parameter $\epsilon$ that defines a region around the true coefficients, the so-called $\epsilon$-vicinity: when all the estimated coefficients are within it, the algorithm is considered to have converged.

The algorithm assigns a gain proportional to the logarithm of the estimated coefficient magnitudes, and its control law is given by

$$F[|w_l(k)|,k] = \ln\!\left(1 + \mu\,|w_l(k)|\right), \quad 1 \le l \le L,$$

where $\mu = 1/\epsilon$ [1].
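As a two-line sketch, the MPNLMS law is a natural-logarithm handle for the base routine (MATLAB's log is the natural logarithm); $\mu = 1000$ is the value used in Table 3.1.

```matlab
mu = 1000;                              % mu = 1/epsilon
ctrl_mpnlms = @(a) log(1 + mu * a);     % F[|w_l(k)|,k] = ln(1 + mu*|w_l(k)|)
```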

2.3.5. Individual Activation Factor Proportionate Normalized LMS Algorithm (IAF-PNLMS)

The IAF-PNLMS algorithm has the following control law:

$$F[|w_l(k)|,k] = |w_l(k)|,$$

$$\psi_l(k) = \begin{cases} \dfrac{1}{2}F[|w_l(k)|,k] + \dfrac{1}{2}\gamma_l(k-1), & k = mL,\ m = 1,2,3,\ldots \\[4pt] \psi_l(k-1), & \text{otherwise,} \end{cases}$$

$$\gamma_l(k) = \max\{\psi_l(k),\ F[|w_l(k)|,k]\}.$$

At initialization, $\gamma_l(0)$ is typically set to a small positive constant for all the coefficients, such as $10^{-2}/L$.

The distinguishing feature of the IAF-PNLMS algorithm is that it transfers part of the gain of the inactive coefficients to the active ones, by means of $\psi_l(k)$. It therefore offers a better gain distribution over the coefficients compared with the PNLMS and IPNLMS algorithms; however, it slows down the convergence of the small coefficients [1]. These two qualities combined make this algorithm more appropriate for system impulse responses with high sparseness, i.e., with only a very few active coefficients.
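A sketch of the IAF-PNLMS recursion follows; psi and gamma are state vectors carried between iterations and k is the iteration index (names and structure are ours, not notation from [1]).

```matlab
% IAF-PNLMS gain recursion for iteration k (psi, gamma persist across calls).
% Initialization: psi = zeros(L,1); gamma = (1e-2 / L) * ones(L,1);
F = abs(w);                         % F[|w_l(k)|,k] = |w_l(k)|
if mod(k, L) == 0                   % k = mL, m = 1, 2, 3, ...
    psi = 0.5 * F + 0.5 * gamma;    % gamma still holds gamma_l(k-1) here
end                                 % otherwise psi_l(k) = psi_l(k-1)
gamma = max(psi, F);                % gamma_l(k), then proceed as in Table 2.1
```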

2.3.6. Adaptive μ Proportionate Normalized LMS Algorithm (AMPNLMS)

By modifying the MPNLMS algorithm, we obtain a new algorithm called the Adaptive MPNLMS (AMPNLMS). The main modification concerns the $\mu$ parameter, which is constant in the MPNLMS algorithm and is now allowed to vary in time in the AMPNLMS algorithm, providing more flexibility in the minimization of the mean square error (MSE).

Table 2.2 shows the steps for obtaining the modified control law of the AMPNLMS algorithm. It begins with $\zeta(k+1)$, an estimate of the mean square error obtained by time averaging, where $0 < \xi < 1$ is a constant. The estimated MSE is then scaled by a factor $\nu$, giving $\tilde{\epsilon}_L(k)$, the desired distance to the steady-state MSE, whose attainment indicates convergence. After this, we calculate $\epsilon_c(k)$, the distance each weight deviation should be from zero for convergence to be declared. The time-varying $\mu(k)$ is then calculated, so that finally the control law can be generated [1].

Table 2.2. Obtaining the control law for the AMPNLMS algorithm

$$\zeta(k+1) = \xi\,\zeta(k) + (1-\xi)\,e^2(k)$$
$$\tilde{\epsilon}_L(k) = \frac{\zeta(k+1)}{\nu}$$
$$\epsilon_c(k) = \sqrt{\frac{\tilde{\epsilon}_L(k)}{L\,\sigma_x^2}}$$
$$\mu(k) = \frac{1}{\epsilon_c(k)}$$
$$F[|w_l(k)|,k] = \ln\!\left(1 + \mu(k)\,|w_l(k)|\right)$$
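In code, the adaptive-$\mu$ steps of Table 2.2 reduce to a few lines. This is a sketch with the Table 3.1 values; e is assumed to be the current error sample, L the filter length, and zeta a scalar carried across iterations.

```matlab
% AMPNLMS control law for iteration k (zeta persists; initialize zeta = 0).
xi = 0.99; nu = 1000; sigma_x2 = 1;         % Table 3.1 values
zeta  = xi * zeta + (1 - xi) * e^2;         % time-averaged MSE estimate
eps_L = zeta / nu;                          % target distance to steady state
eps_c = sqrt(eps_L / (L * sigma_x2));       % per-weight convergence radius
mu_k  = 1 / eps_c;                          % time-varying mu
F = log(1 + mu_k * abs(w));                 % control law, as in the MPNLMS
```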

2.3.7. Adaptive Segmented Proportionate Normalized LMS Algorithm (ASPNLMS)


In order to avoid the calculation of the logarithm term in the MPNLMS algorithm, the Segmented PNLMS (SPNLMS) algorithm was introduced. Its idea is to divide the logarithm function into two segments: the first with a certain slope, and the second with zero slope. Tuning the slope of the first segment and the transition point between the two is an important factor in this linear approximation of the logarithm curve.

Here, the Adaptive Segmented PNLMS (ASPNLMS) algorithm is proposed, a slight modification of the SPNLMS algorithm intended to simplify the AMPNLMS algorithm. Table 2.3 shows how this new algorithm works, concluding with the control law [1].

The parameter $\Upsilon$ is the scaling factor, which defines the slope of the first segment as well as the transition point between the two segments; therefore, it plays an important role in the performance of the algorithm.

Table 2.3. Obtaining the control law for the ASPNLMS algorithm

$$\zeta(k+1) = \xi\,\zeta(k) + (1-\xi)\,e^2(k)$$
$$\tilde{\epsilon}_L(k) = \frac{\zeta(k+1)}{\nu}$$
$$\epsilon_c(k) = \sqrt{\frac{\tilde{\epsilon}_L(k)}{L\,\sigma_x^2}}$$
$$F[|w_l(k)|,k] = \begin{cases} \dfrac{|w_l(k)|}{\Upsilon\,\epsilon_c(k)}, & \text{if } |w_l(k)| < \Upsilon\,\epsilon_c(k) \\[4pt] 1, & \text{if } |w_l(k)| \ge \Upsilon\,\epsilon_c(k) \end{cases}$$
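The segmented law collapses to a single clipped-ramp expression, since the first segment is the line $|w_l(k)|/(\Upsilon\,\epsilon_c(k))$ and the second is the constant 1. A sketch, with $\Upsilon = 10$ from Table 3.1 and eps_c computed as above:

```matlab
Upsilon = 10;                             % scaling factor from Table 3.1
F = min(abs(w) / (Upsilon * eps_c), 1);   % ramp below the knee, 1 above it
```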

2.4. The Affine Projection Algorithm

After presenting the proportionate-type NLMS algorithms, we now present the affine projection algorithm. It was motivated by the desire to improve the convergence rate of the NLMS algorithm [2]. Its principle is to recycle old data to improve convergence speed; however, reusing data increases the final misadjustment of the algorithm. The trade-off between final misadjustment and convergence rate is controlled by the introduction of a convergence factor [3].


As presented in [3], to derive the affine projection algorithm we first define a matrix $\mathbf{X}_{ap}(k)$ composed of the last $N$ input vectors $\mathbf{x}(k), \mathbf{x}(k-1), \ldots, \mathbf{x}(k-N+1)$, where $\mathbf{x}(k) = [\,x(k),\ x(k-1),\ \ldots,\ x(k-L+1)\,]^T$:

$$\mathbf{X}_{ap}(k) = \begin{bmatrix} x(k) & x(k-1) & \cdots & x(k-N+1) \\ x(k-1) & x(k-2) & \cdots & x(k-N) \\ \vdots & \vdots & \ddots & \vdots \\ x(k-L+1) & x(k-L) & \cdots & x(k-N-L+2) \end{bmatrix} = \left[\,\mathbf{x}(k)\ \ \mathbf{x}(k-1)\ \cdots\ \mathbf{x}(k-N+1)\,\right].$$

Then, referring to the block diagram in Figure 2.2, we define the output of the adaptive filter

$$\mathbf{y}_{ap}(k) = \mathbf{X}_{ap}^T(k)\,\mathbf{w}(k) = \left[\,y_{ap,0}(k),\ y_{ap,1}(k),\ \ldots,\ y_{ap,N-1}(k)\,\right]^T.$$

The measured output of the unknown system is represented as

$$\mathbf{d}_{ap}(k) = \left[\,d(k),\ d(k-1),\ \ldots,\ d(k-N+1)\,\right]^T.$$

The difference between $\mathbf{d}_{ap}(k)$ and $\mathbf{y}_{ap}(k)$ is the error that drives the adaptive algorithm,

$$\mathbf{e}_{ap}(k) = \mathbf{d}_{ap}(k) - \mathbf{y}_{ap}(k) = \left[\,d(k) - y_{ap,0}(k),\ d(k-1) - y_{ap,1}(k),\ \ldots,\ d(k-N+1) - y_{ap,N-1}(k)\,\right]^T.$$

To summarize the affine projection algorithm, a table similar to the one used for the proportionate-type algorithms is presented, with all the steps needed to obtain the updated estimated coefficients. Note that the same $\gamma_{\min}$ logic is used here, so that the gain matrix can be generated and plugged into the final expression.

Table 2.4. Base Affine Projection Algorithm

$$\mathbf{X}_{ap}(k) = \left[\,\mathbf{x}(k),\ \mathbf{x}(k-1),\ \ldots,\ \mathbf{x}(k-N+1)\,\right]$$
$$\mathbf{y}_{ap}(k) = \mathbf{X}_{ap}^T(k)\,\mathbf{w}(k)$$
$$\mathbf{e}_{ap}(k) = \mathbf{d}_{ap}(k) - \mathbf{y}_{ap}(k)$$
$$F[|w_l(k)|,k] = \text{depends on the specific algorithm}$$
$$\gamma_{\min}(k) = \rho \max\{\delta_p,\ F[|w_1(k)|,k],\ \ldots,\ F[|w_L(k)|,k]\}$$
$$\gamma_l(k) = \max\{\gamma_{\min}(k),\ F[|w_l(k)|,k]\}$$
$$g_l(k) = \frac{\gamma_l(k)}{\frac{1}{L}\sum_{i=1}^{L}\gamma_i(k)}$$
$$\mathbf{G}(k) = \mathrm{Diag}\{g_1(k),\ \ldots,\ g_L(k)\}$$
$$\mathbf{w}(k+1) = \mathbf{w}(k) + \beta\,\mathbf{G}(k)\,\mathbf{X}_{ap}(k)\left(\mathbf{X}_{ap}^T(k)\,\mathbf{X}_{ap}(k) + \delta\mathbf{I}\right)^{-1}\mathbf{e}_{ap}(k)$$

We can obtain the affine projection version of each algorithm discussed in the previous subsection by simply plugging its control law $F[|w_l(k)|,k]$ into this base algorithm.
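As a sketch, one AP-PtNLMS iteration with $N = 4$ (the value used in Section 3.2) can be written as follows. Here Xap is the L-by-N matrix of the last N input vectors, dap the N-by-1 vector of desired samples, and gamma is obtained from the chosen control law exactly as in Table 2.1; these names are ours.

```matlab
% One affine projection update (Table 2.4).
N = 4;
eap = dap - Xap' * w;                         % N-by-1 error vector
g = gamma / mean(gamma);                      % same gain logic as Table 2.1
sol = (Xap' * Xap + delta * eye(N)) \ eap;    % solve instead of inverting
w = w + beta * (g .* (Xap * sol));            % G(k)*Xap(k)*sol, elementwise
```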

3. Simulation Results

In order to analyze and compare the behavior and performance of the algorithms described in the previous section, several simulations were performed. This section presents the results of these simulations, along with some explanation of how they were prepared. Different input signals were used, such as white Gaussian noise, colored noise, and speech. Most of the simulations used the system impulse response in Figure 2.1, although other impulse responses were also tested. As measurement noise, white Gaussian noise was added to the unknown system output in most cases, sometimes with different signal-to-noise ratio (SNR) values. Both the proportionate-type NLMS and affine projection algorithms were covered by the simulations. MATLAB was used to carry out the simulations, since it makes it relatively easy to implement and test the algorithms in different situations.


3.1. Simulations with the Proportionate-Type NLMS Algorithms

We begin the experiments with the proportionate-type NLMS algorithms: NLMS, PNLMS, IPNLMS, MPNLMS, IAF-PNLMS, AMPNLMS, and ASPNLMS. Each simulation is presented in its own subsection, with varying input type, unknown system impulse response, measurement noise, etc. When needed, the parameters used in each algorithm are explicitly stated.

3.1.1. Simulation 1: Impulse Response Type 1

For the first simulation, we use the system impulse response shown in Figure 2.1; the measurement noise is white Gaussian noise with SNR = 40 dB. The parameters used in each algorithm are presented in Table 3.1.

Table 3.1. Parameters for Simulation 1

NLMS: $\beta = 0.3$, $\delta = 0.0001$
PNLMS: $\rho = 0.01$, $\delta_p = 0.01$, $\beta = 0.3$, $\delta = 0.0001$
IPNLMS: $\alpha_{\text{IPNLMS}} = 0$, $\epsilon_{\text{IPNLMS}} = 0.0001$, $\beta = 0.3$, $\delta = 0.0001$
MPNLMS: $\mu = 1000$, $\rho = 0.01$, $\delta_p = 0.01$, $\beta = 0.3$, $\delta = 0.0001$
IAF-PNLMS: $\beta = 0.3$, $\delta = 0.0001$
AMPNLMS: $\xi = 0.99$, $\nu = 1000$, $\sigma_x = 1$, $\rho = 0.01$, $\delta_p = 0.01$, $\beta = 0.3$, $\delta = 0.0001$
ASPNLMS: $\xi = 0.99$, $\nu = 1000$, $\sigma_x = 1$, $\rho = 0.01$, $\Upsilon = 10$, $\delta_p = 0.01$, $\beta = 0.3$, $\delta = 0.0001$

We have two simulations in this subsection: one with white Gaussian noise input and the other with colored noise input. The coloring filter is a low-pass filter with a pole at $z = -0.9$. For each simulation, 100 Monte Carlo runs were executed. The learning curves are shown in Figures 3.1 and 3.2.
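The signal generation is assumed to work as in the following sketch: white Gaussian noise is passed through the one-pole filter $H(z) = 1/(1 + 0.9 z^{-1})$, whose pole lies at $z = -0.9$, and the measurement noise is scaled for the stated SNR; w_true denotes the true impulse response, and the run length is our assumption based on the plotted axes.

```matlab
Nit = 8e4;                                   % iterations per run
x_white   = randn(Nit, 1);                   % white Gaussian input
x_colored = filter(1, [1 0.9], x_white);     % coloring filter, pole at z = -0.9
y = filter(w_true, 1, x_colored);            % unknown-system output
v = sqrt(var(y) * 10^(-40/10)) * randn(Nit, 1);  % SNR = 40 dB noise
d = y + v;                                   % measured output
```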

[Figure: MSE (dB) versus iteration number for the seven algorithms.]

Figure 3.1. Learning curve for simulation 1, white Gaussian noise input.

[Figure: MSE (dB) versus iteration number for the seven algorithms.]

Figure 3.2. Learning curve for simulation 1, colored noise input.

In the first set of curves (Figure 3.1), one advantage of the proportionate-type algorithms over the simple NLMS is their fast initial convergence, which may or may not be maintained during the process, depending on the algorithm. Algorithms that seem to maintain a fast convergence rate throughout are the MPNLMS, AMPNLMS, and ASPNLMS; the first two are based on the direct calculation of a logarithm term, while the third uses a linearization of the logarithm curve. Although it is a simplification, the ASPNLMS algorithm performs very well while requiring less computational effort. Curiously, the IAF-PNLMS algorithm reaches a convergence plateau above the other algorithms, perhaps because the impulse response is not sufficiently sparse; in [4] it appears to perform better when the impulse response is strongly sparse.

When the input signal is colored noise (Figure 3.2), the convergence speed is noticeably reduced for all the algorithms. The best performances are achieved by the MPNLMS and AMPNLMS algorithms, which maintain a fast convergence rate during the whole process. The NLMS did not reach convergence within this number of iterations.

3.1.2. Simulation 2: Impulse Response Type 2

In the second simulation, another impulse response is used, as shown in Figure 3.3. It is also sparse, but it has fewer coefficients: 256, whereas the previous one had 512.

[Figure: Amplitude versus Coefficient Index for sparse impulse response type 2 (256 coefficients).]

Figure 3.3. Sparse impulse response type 2.

The simulation parameters are the same as in the previous simulation, as presented in Table 3.1. The measurement noise is again white Gaussian noise with SNR = 40 dB, and the same two input types are used: white Gaussian noise and colored noise. The learning curves are shown in Figures 3.4 and 3.5.

[Figure: MSE (dB) versus iteration number for the seven algorithms.]

Figure 3.4. Learning curve for simulation 2, white Gaussian noise input.

[Figure: MSE (dB) versus iteration number for the seven algorithms.]

Figure 3.5. Learning curve for simulation 2, colored noise input.

Comparing the results of this simulation with the previous one, we can see that most of the algorithms took slightly more iterations to converge. The biggest difference is the performance of the NLMS algorithm, considerably better here than before. In the white noise input case, the best performances are achieved by the ASPNLMS and AMPNLMS algorithms, followed by the MPNLMS. In the colored noise input case, the MPNLMS algorithm is the best, followed by the AMPNLMS and ASPNLMS algorithms. In both cases, the IAF-PNLMS algorithm performed even worse than in the previous simulation.

3.1.3. Simulation 3: Impulse Response Type 3

This simulation is similar to the previous two; the only difference is the impulse response being used. Now, instead of a sparse impulse response, we use a dispersive one. It is generated by multiplying a white Gaussian noise signal of length 512 by the exponential $e^{-0.1t}$, producing an exponentially damped signal, as shown in Figure 3.6.
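A sketch of this construction, under our reading of the text (unit-variance Gaussian samples, one damping factor per coefficient index):

```matlab
L = 512;
t = (0:L-1)';                            % coefficient index
w_true = randn(L, 1) .* exp(-0.1 * t);   % exponentially damped response
```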

[Figure: Amplitude versus Coefficient Index for the exponentially damped impulse response (512 coefficients).]

Figure 3.6. Impulse response type 3.

The same parameters are used here. Figures 3.7 and 3.8 show the learning curves for white noise input and colored noise input, respectively.

[Figure: MSE (dB) versus iteration number for the seven algorithms.]

Figure 3.7. Learning curve for simulation 3, white Gaussian noise input.

[Figure: MSE (dB) versus iteration number for the seven algorithms.]

Figure 3.8. Learning curve for simulation 3, colored noise input.

This simulation with a dispersive impulse response was carried out in order to check the robustness of the algorithms. As can be seen, the convergence rate is slower than in the two previous cases. Since the PNLMS algorithm is intended to work with sparse impulse responses, its performance degraded here. The IAF-PNLMS also performed poorly in this case, as expected, given that it is better suited to impulse responses with high sparseness. However, in the white noise input case, the other algorithms converged in a similar fashion, the NLMS being the slowest of them. In the colored noise input simulation, none of the algorithms reached convergence.

3.1.4. Simulation 4: Shift Tracking Analysis

The purpose of this simulation is to analyze the capability of the algorithms to cope with a shift in the coefficients of the unknown system impulse response while the adaptation process is underway. From now on, only the sparse impulse response in Figure 2.1 is used in the simulations. The parameters of the algorithms are the same as those presented in Table 3.1. As measurement noise, white Gaussian noise with SNR = 40 dB is used again.


In the two simulations performed in this section, the original impulse response is shifted by 50 coefficients halfway through the total number of iterations. Again, the system is excited with two types of input: white Gaussian noise and colored noise, with the coloring filter being a low-pass filter with a pole at $z = -0.9$. Figures 3.9 and 3.10 show the learning curves obtained from the simulations.
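The shift is assumed to be applied as a 50-sample delay of the true response at the midpoint of the run, zero-padding at the front; a sketch:

```matlab
if k == Nit / 2                                  % halfway through the run
    w_true = [zeros(50, 1); w_true(1:end-50)];   % shift by 50 coefficients
end
```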

[Figure: MSE (dB) versus iteration number for the seven algorithms, before and after the shift.]

Figure 3.9. Learning curve for simulation 4, white Gaussian noise input.

[Figure: MSE (dB) versus iteration number for the seven algorithms, before and after the shift.]

Figure 3.10. Learning curve for simulation 4, colored noise input.


As we can see, in the white noise input case, all of the algorithms track the shift in the impulse response very quickly, showing little or no delay relative to the first part. In the colored noise input scenario, the changes are more apparent, since for most of the algorithms half the iterations are not sufficient to reach convergence; still, they appear to continue the adaptation process normally.

3.1.5. Simulation 5: Speech Input

In this subsection, we address the performance of the algorithms when the input is a speech signal. The simulations use the impulse response in Figure 2.1. The speech input signal used in all the simulations is shown in Figure 3.11.

[Figure: speech waveform, Amplitude versus sample index n.]

Figure 3.11. Plot of the speech signal to be used in the simulations.

This subsection comprises five simulations, all of them using the same speech input signal and the system impulse response cited above. In the first, no measurement noise is added to the output of the unknown system. In the second, white Gaussian noise with SNR = 40 dB is used as measurement noise, whereas in the third, SNR = 20 dB is used. In the fourth, we apply a coloring filter (a low-pass filter with a pole at $z = -0.9$) to the SNR = 40 dB white Gaussian noise and add the result as measurement noise. Finally, in the last simulation, a second speech signal (Figure 3.12) is added as measurement noise with 1/10 of the power of the system output.

In each simulation, 100 Monte Carlo runs are executed. To generate 100 different inputs from a single speech signal, circular shifts are applied to the base signal. The parameters used in the algorithms are the ones shown in Table 3.1.
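A sketch of this construction; the per-run shift amount is our assumption, chosen to spread the 100 runs evenly over the recording.

```matlab
runs = 100;
for r = 1:runs
    x_r = circshift(speech, round((r - 1) * length(speech) / runs));
    % ... run each adaptive filter on x_r and accumulate e.^2 for the MSE ...
end
```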

[Figure: speech waveform, Amplitude versus sample index n.]

Figure 3.12. Plot of the speech signal to be used as measurement noise in the fifth simulation.

The resultant learning curves are shown in Figures 3.13 to 3.17.

[Figure: MSE (dB) versus sample index n for the seven algorithms.]

Figure 3.13. Learning curve for simulation 5, no measurement noise.

[Figure: MSE (dB) versus sample index n for the seven algorithms.]

Figure 3.14. Learning curve for simulation 5, 40 dB white measurement noise.

[Figure: MSE (dB) versus sample index n for the seven algorithms.]

Figure 3.15. Learning curve for simulation 5, 20 dB white measurement noise.

[Figure: MSE (dB) versus sample index n for the seven algorithms.]

Figure 3.16. Learning curve for simulation 5, colored measurement noise.

[Figure: MSE (dB) versus sample index n for the seven algorithms.]

Figure 3.17. Learning curve for simulation 5, second speech signal as measurement noise.

Beginning with the first simulation, where no measurement noise is added, the result is rather chaotic and hard to draw conclusions from, except that none of the algorithms seem to have reached convergence.

In the second and third simulations, in which 40 dB and 20 dB white Gaussian noise were added as measurement noise, respectively, it is easier to see that the algorithms reach a convergence plateau. Unlike the previous simulations with noise input signals, however, they settle at different levels, possibly because speech is a non-stationary signal. In both cases, the IPNLMS algorithm converges deepest, reaching the level closest to the measurement noise level. A noticeable difference is that in the 40 dB case the levels are closer to each other, whereas in the 20 dB case they are more scattered along the vertical axis.

In the last simulation, with a second speech signal as measurement noise, the result is even more chaotic. There is no convergence at all, and the mean square error turns out to be greater than the output power throughout the process for all the algorithms except the IPNLMS.

3.2. Simulations with the Affine Projection Algorithms

To finalize the series of simulations, the PtNLMS algorithms are modified to follow the affine projection format. All the algorithms use $N = 4$, i.e., the three most recent past input vectors are recycled, along with the current one, when calculating a new estimated coefficient vector. Two simulations are performed in this subsection. Both use the impulse response in Figure 2.1 and the parameters in Table 3.1, except that $\beta = 0.01$ is now used. As measurement noise, white Gaussian noise with SNR = 40 dB is used. As in the previous simulations, one simulation uses white Gaussian noise as input and the other uses colored noise, with the same coloring filter. The learning curves are shown in Figures 3.18 and 3.19.

[Figure: MSE (dB) versus iteration number for the affine projection versions of the seven algorithms (AP-NLMS through AP-ASPNLMS).]

Figure 3.18. Learning curve for simulation 6, white Gaussian noise input.

[Figure: MSE (dB) versus iteration number for the affine projection versions of the seven algorithms.]

Figure 3.19. Learning curve for simulation 6, colored noise input.


Comparing the results of the affine projection algorithms with simulation 1, the proportionate-type NLMS algorithms seem to outperform their affine projection counterparts, converging faster. In the white noise input case, the AP-MPNLMS algorithm has the best performance, followed by the AP-AMPNLMS and the AP-ASPNLMS. The AP-IAF-PNLMS shows the same odd behavior, converging to a level above the others. The AP-NLMS does not reach convergence. In the colored noise input case, the convergence speed is slightly reduced, and the AP-AMPNLMS algorithm outperforms the AP-MPNLMS.

4. Conclusion

In conclusion, most of the algorithms presented can successfully identify an unknown system even when it does not have a sparse impulse response, which is the type of impulse response the proportionate-type NLMS algorithms were designed for. Some of them, such as the MPNLMS, the AMPNLMS, and the ASPNLMS, show relatively better performance. The first two carry a disadvantage in computational complexity, as they depend on the calculation of a logarithmic term. The ASPNLMS therefore appears to be a very good alternative: it performs as well as the other two, sometimes even better, and has a simpler implementation due to the linearization of the logarithmic term. An interesting point is that the algorithms can track a changing impulse response, as shown in the simulation where the impulse response is shifted in the middle of the adaptation process. When dealing with speech input, however, the results are not very conclusive, and the simulation process requires more computational power, being slower than the others. A more conclusive way to analyze the results of the speech input simulation is to listen to the error signal generated in one run of the code; by this technique, in most situations the IPNLMS, MPNLMS, AMPNLMS, and ASPNLMS algorithms showed better performance.

Finally, the affine projection format was tested, and in the scenarios considered the performance of the proportionate-type NLMS algorithms was still better, although only one type of simulation was performed with this format. A possible expansion of this work, therefore, is to test these algorithms in different conditions and determine which format is better in each situation. There is also a need for a better way to assess the behavior of the algorithms when the input is a speech signal.

5. References

[1] K. Wagner and M. Doroslovački, Proportionate-Type Normalized Least Mean Square Algorithms, pp. 1–12, 119–132, Mar. 2013.

[2] Affine Projection 1, pp. 334–341.

[3] Affine Projection 2, pp. 156–161.

[4] F. C. de Souza, R. Seara, and D. R. Morgan, "A PNLMS Algorithm With Individual Activation Factors," IEEE Trans. Signal Process., vol. 58, no. 4, pp. 2036–2047, Apr. 2010.