

Stability Analysis of Fast Recursive Least Squares Algorithm: Application to Adaptive Filtering

Farid Ykhlef, M. Arezki, A. Guessoum and D. Berkani
LATSI Laboratory, Department of Electronics, Faculty of Engineering Sciences, University of Blida, Algeria.

[email protected]

Abstract- In this paper, we introduce a new numerically stable version of the Fast Recursive Least Squares (FRLS) algorithm. As an additional contribution, we present an analysis of the instability problems of the FRLS algorithm. An experimental study is conducted in order to determine the origin of the numerical instability. The originality of our investigation lies mainly in relating the stability of the FRLS algorithm to the positions of the zeros of its forward/backward prediction filters.

Keywords: Fast Recursive Least Squares, stability, prediction.

I. INTRODUCTION

In the field of adaptive signal processing, it is well known that FRLS algorithms offer a good trade-off between convergence speed and computational complexity. It is also well known, however, that the FRLS algorithm suffers from numerical instability when operating in finite-precision arithmetic, and this undesirable tendency to diverge has limited its use [1], [2]. To compensate, modifications to the algorithm have been introduced that are either occasional (performed when one or more predefined conditions are violated) or structured as part of the normal update iteration. A number of methods of this kind have been proposed to improve the stability of the FRLS algorithm [1]-[5]. In the same vein, we introduce a new numerically stable version of this algorithm for the stationary case. In the present work, we investigate in detail the instability problem of the FRLS algorithm in the context of system identification.

A. Adaptive Identification
The identification is performed by modeling the unknown system with an adaptive finite impulse response (FIR) filter and subtracting the filter output ŷ_t from the desired signal y_t [6]. A basic scheme of identification by an adaptive algorithm is shown in Fig. 1. The coefficients of the adaptive filter W_{N,t-1} are adjusted by an FRLS algorithm according to the following equations:

ε_{N,t} = y_t - ŷ_t      (1)

W_{N,t} = W_{N,t-1} - ε_{N,t} γ_{N,t} K̃_{N,t}      (2)

with

ŷ_t = W^T_{N,t-1} X_{N,t}      (3)

where the exponent (.)^T denotes matrix or vector transposition, N is the filter length, t is the discrete-time index, X_{N,t} is the vector collecting the last N samples of the input signal x_t, n_t is the additive noise, y_t is the desired signal, ε_{N,t} is the a priori error, and the quantity γ_{N,t} K̃_{N,t} is the adaptation gain (or Kalman gain), computed independently of the filtering part by a Fast Recursive Least Squares (FRLS) algorithm using linear forward/backward prediction analysis of the input signal x_t [8]. Here lies the first difficulty for FRLS algorithms applied to the identification of moving-average (MA) models: the adaptation gain is computed from the input signal x_t and is therefore more sensitive to a non-stationary input than to the system to be identified [7], [8].
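As an illustration of Eqs. (1)-(3), the following Python/NumPy sketch implements the filtering part of Fig. 1 for FIR identification. The adaptation gain, which in the paper is γ_{N,t} K̃_{N,t} and is delivered by the FRLS prediction part, is replaced here by a simple normalized-LMS-style gain purely for illustration; the function name and that placeholder gain are our assumptions, not the paper's algorithm.

```python
import numpy as np

def identify_fir(x, y, N, mu=0.5, eps=1e-6):
    """Filtering part of Fig. 1, Eqs. (1)-(3), with a placeholder adaptation gain.

    The true FRLS gain gamma_{N,t} * K_tilde_{N,t} comes from the prediction
    part; an NLMS-style gain is used here only for illustration.
    """
    W = np.zeros(N)                      # adaptive filter W_{N,t-1}
    X = np.zeros(N)                      # regressor X_{N,t} = [x_t, ..., x_{t-N+1}]
    e = np.zeros(len(x))
    for t in range(len(x)):
        X = np.concatenate(([x[t]], X[:-1]))
        y_hat = W @ X                    # Eq. (3): filter output
        e[t] = y[t] - y_hat              # Eq. (1): a priori error
        g = mu * X / (eps + X @ X)       # placeholder for the Kalman gain (assumption)
        W = W + e[t] * g                 # Eq. (2), with the sign convention absorbed in g
    return W, e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.standard_normal(8)                        # unknown FIR system
    x = rng.standard_normal(5000)
    y = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    W, e = identify_fir(x, y, N=8)
    print(np.linalg.norm(W - w_true))                      # small residual misalignment
```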

Figure 1. System identification structure.

B. FRLS Algorithm
In the following, we consider the so-called Fast Transversal Filters (FTF) version of the FRLS algorithm [3]. The FRLS algorithm is divided into two parts: a prediction part and a filtering part (Fig. 2). The prediction part provides the filtering part with the adaptation gain (or Kalman gain) vector used to identify the unknown system. The FRLS algorithm exploits the redundancies arising from the solution of four transversal filtering problems that share the same input data: one filter for each of the one-step forward and backward prediction problems (A_{N,t} and B_{N,t}), a filter defining the gain vector (Kalman gain) of the Recursive Least Squares (RLS) algorithm (γ_{N,t} K̃_{N,t}), and a filter providing the weight vector of the desired problem being solved (W_{N,t}). Combined in a special way, these four transversal filters provide the exact solution of the RLS problem at all times and define the FRLS algorithm [2]. The filtering part of the FRLS algorithm has the distinct advantage of being robust to numerical errors. Given the good numerical properties of the filtering part, and since it does not feed back into the prediction part, our study focuses entirely on the prediction part.



Figure 2. The FRLS algorithm.

II. NEW NUMERICALLY STABLE VERSION OF THE FRLS ALGORITHM

Several numerical stabilization schemes for stationary signals have been proposed in the literature [1], [3], [4], [5], [9]. The FRLS algorithm is notoriously unstable, but it is possible to maintain stability by adding a few equations. Here, we follow the method of [5] to propose a new numerically stable version of the FRLS algorithm. This method is based on a first-order model of the propagation of the numerical errors. The general principle is to modify the numerical properties of the algorithm without modifying its theoretical behavior [5]. In this case, by using some known relationships between the different backward a priori prediction errors, we define a "control variable" ξ_{N,t}, theoretically null, given by [10]:

ξ_{N,t} = r^c_{N,t} - [(1 - μ_s) r^{f0}_{N,t} + μ_s r^{f1}_{N,t}]      (4)

with

r^c_{N,t} = x_{t-N} - B^T_{N,t-1} X_{N,t}      (5.a)

r^{f0}_{N,t} = -λ β_{N,t-1} K̃^{N+1}_{N+1,t}      (5.b)

r^{f1}_{N,t} = -λ^{1-N} α_{N,t-1} γ_{N,t-1} K̃^{N+1}_{N+1,t}      (5.c)

where relations (5) give the backward a priori prediction error computed in different but theoretically equivalent ways (K̃^{N+1}_{N+1,t} denotes the last component of the extended adaptation gain K̃_{N+1,t}). The scalar parameter μ_s (0 ≤ μ_s ≤ 1) controls the propagation of the numerical errors in the algorithm [9], [10]. In practice, the variable ξ_{N,t} is never exactly null because of finite machine precision. To stabilize the algorithm, we use ξ_{N,t} to compute a corrected backward a priori prediction error:

r^s_{N,t} = r^c_{N,t} + ξ_{N,t}      (6)

Based on this stabilization method, various versions of the FRLS algorithm have been developed. For adaptive filtering, the new version (NS-FRLS) is summarized in Table I together with an older one [9]. The new version does not use the intermediate variable γ_{N+1,t}. According to the simulations, the best results are obtained for μ_s = 0.25. To simplify matters and to ensure the numerical stability of this version in the stationary case, the forgetting factor λ must be chosen such that [10]:

λ > 1 - 1/(pN)      (7)

where the parameter p is a real number greater than 2, chosen to ensure the numerical stability of the algorithm.
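As a concrete reading of Eqs. (4), (6) and (7), the short sketch below (our own helpers, not taken from the paper) computes the lower bound on the forgetting factor and the stabilized backward error from the three redundant estimates.

```python
def lambda_min(N, p):
    """Lower bound of Eq. (7): the forgetting factor must exceed 1 - 1/(p*N), with p > 2."""
    return 1.0 - 1.0 / (p * N)

def stabilized_backward_error(r_c, r_f0, r_f1, mu_s=0.25):
    """Control variable of Eq. (4) and corrected error of Eq. (6)."""
    xi = r_c - ((1.0 - mu_s) * r_f0 + mu_s * r_f1)   # theoretically zero
    return r_c + xi

print(lambda_min(N=5, p=3))                       # 0.9333..., the value used in Figs. 3 and 4
print(stabilized_backward_error(0.1, 0.1, 0.1))   # exact redundancy leaves r^c unchanged: 0.1
```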

III. STABILITY ANALYSIS OF THE FRLS ALGORITHM

It is well known that the reduction in complexity of the FRLS algorithm is paid for by a significant degradation of its numerical stability properties. Numerical errors propagate in a way that is not bounded in time, which leads to an unstable solution [11], [12]. Convergence speed and tracking capability degrade very quickly for a forgetting factor λ close to 1. Even with this numerical stabilization method for the stationary case, the NS-FRLS algorithm remains unstable in some particular cases, namely when the forgetting factor must be chosen very close to the stability condition (7). The tests reported below exhibit these particular cases. The analysis of numerical error propagation proposed in [1] has been carried out for the NS-FRLS algorithm.

The equations describing the propagation of the numerical errors in the recursive variables of the NS-FRLS algorithm, according to the linear state model, are [1], [8]:

ΔZ_t = F(t) ΔZ_{t-1}      (8)

with

ΔZ_t = [Δa_t ; Δc_t ; Δb_t],   Δa_t = [ΔA_{N,t} ; Δα_{N,t}],   Δc_t = [ΔK̃_{N,t} ; Δγ_{N,t}],   Δb_t = [ΔB_{N,t} ; Δβ_{N,t}]      (9)


TABLE I. NS-FRLS ALGORITHM

- Prediction part:

e_{N,t} = x_t - A^T_{N,t-1} X_{N,t-1}
α_{N,t} = λ α_{N,t-1} + γ_{N,t-1} e^2_{N,t}

K̃_{N+1,t} = [0 ; K̃_{N,t-1}] - (e_{N,t} / (λ α_{N,t-1})) [1 ; -A_{N,t-1}]
A_{N,t} = A_{N,t-1} - e_{N,t} γ_{N,t-1} K̃_{N,t-1}

r^c_{N,t} = x_{t-N} - B^T_{N,t-1} X_{N,t}
r^{f0}_{N,t} = -λ β_{N,t-1} K̃^{N+1}_{N+1,t}

r^{f1}_{N,t} = -λ^{1-N} α_{N,t-1} γ_{N,t-1} K̃^{N+1}_{N+1,t}

ξ_{N,t} = r^c_{N,t} - [(1 - μ_s) r^{f0}_{N,t} + μ_s r^{f1}_{N,t}]
r^s_{N,t} = r^c_{N,t} + ξ_{N,t}

γ_{N+1,t} = λ α_{N,t-1} γ_{N,t-1} / α_{N,t}
γ_{N,t} = γ_{N+1,t} / (1 + γ_{N+1,t} r^s_{N,t} K̃^{N+1}_{N+1,t})      (old version)

γ_{N,t} = λ α_{N,t-1} γ_{N,t-1} / (α_{N,t} + λ^{N+1} β_{N,t-1} r^s_{N,t} K̃^{N+1}_{N+1,t})      (new version)

[K̃_{N,t} ; 0] = K̃_{N+1,t} - K̃^{N+1}_{N+1,t} [-B_{N,t-1} ; 1]

B_{N,t} = B_{N,t-1} - r^s_{N,t} γ_{N,t} K̃_{N,t}
β_{N,t} = λ β_{N,t-1} + γ_{N,t} (r^s_{N,t})^2

- Filtering part:

ε_{N,t} = y_t - W^T_{N,t-1} X_{N,t}
W_{N,t} = W_{N,t-1} - ε_{N,t} γ_{N,t} K̃_{N,t}
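For readers who prefer code, the following NumPy sketch performs one time update of the prediction part as reconstructed in Table I, using the new version of the γ_{N,t} update. It is our own rendering under the sign convention adopted above (the gain K̃ enters every update with a minus sign) and should be read as a sketch, not as the authors' implementation; initialization (for example A = B = K̃ = 0, γ = 1 and small positive α, β) and the filtering part are omitted.

```python
import numpy as np

def ns_frls_prediction_step(x_t, X_prev, A, B, Kt, alpha, beta, gamma, lam, mu_s=0.25):
    """One iteration of the Table I prediction part (reconstructed sketch).

    X_prev = X_{N,t-1}; A, B = forward/backward predictors at t-1; Kt = K~_{N,t-1};
    alpha, beta, gamma = alpha_{N,t-1}, beta_{N,t-1}, gamma_{N,t-1}.
    """
    N = len(A)
    # Forward prediction
    e = x_t - A @ X_prev                                    # a priori forward error
    alpha_new = lam * alpha + gamma * e ** 2
    K_ext = (np.concatenate(([0.0], Kt))
             - (e / (lam * alpha)) * np.concatenate(([1.0], -A)))   # extended gain K~_{N+1,t}
    A_new = A - e * gamma * Kt
    X_new = np.concatenate(([x_t], X_prev[:-1]))            # X_{N,t}
    # Backward prediction: redundant a priori errors and control variable
    r_c = X_prev[-1] - B @ X_new                            # Eq. (5.a)
    r_f0 = -lam * beta * K_ext[-1]                          # Eq. (5.b)
    r_f1 = -lam ** (1 - N) * alpha * gamma * K_ext[-1]      # Eq. (5.c), as reconstructed
    xi = r_c - ((1.0 - mu_s) * r_f0 + mu_s * r_f1)          # Eq. (4)
    r_s = r_c + xi                                          # Eq. (6)
    # Likelihood variable, new version (no intermediate gamma_{N+1,t})
    gamma_new = (lam * alpha * gamma) / (alpha_new + lam ** (N + 1) * beta * r_s * K_ext[-1])
    # Contract the extended gain, then update the backward quantities
    K_new = K_ext[:-1] + K_ext[-1] * B                      # K~_{N,t}
    B_new = B - r_s * gamma_new * K_new
    beta_new = lam * beta + gamma_new * r_s ** 2
    return X_new, A_new, B_new, K_new, alpha_new, beta_new, gamma_new
```

The adaptation gain passed to the filtering part at time t is then γ_{N,t} K̃_{N,t}, that is, gamma_new * K_new in the sketch.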


In (8) and (9), ΔV_t stands for the first-order approximation of the numerical error on the theoretical variable V_t. The matrix F(t) is an approximate transition matrix for the propagation of the errors [1], [5]. For the NS-FRLS algorithm, the F(t) matrix is given by:

F(t) = [ F_{11}(t)  F_{12}(t)  F_{13}(t) ;
         F_{21}(t)  F_{22}(t)  F_{23}(t) ;
         F_{31}(t)  F_{32}(t)  F_{33}(t) ]      (10)

The algorithm is stable when the eigenvalues of the F(t) matrix do not exceed 1 in magnitude. In general, the stability analysis of the F(t) matrix is very difficult, owing to the complexity of some of its components and to its dependence on the input signal [1], [5]. In this work, the instability study of the FRLS algorithm is conducted entirely around the sub-matrix F_{22}(t). This sub-matrix contains a companion matrix M_c(t) [10]. The roots of the backward predictor vector B_{N,t} are equal to the eigenvalues of the M_c(t) matrix [13]:

M_c(t) = [ B_{N,t}(1)  B_{N,t}(2)  B_{N,t}(3)  ...  B_{N,t}(N) ;
           1           0           0           ...  0 ;
           0           1           0           ...  0 ;
           ...                                       ... ;
           0           0           ...         1    0 ]      (11)
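The relation stated by Eq. (11) is easy to verify numerically, and it underlies the stability check used in Section IV: the eigenvalues of the companion matrix built from the backward predictor B_{N,t} coincide with the zeros of the predictor, so the F_{22}(t) error-propagation sub-system stays stable as long as they remain inside the unit circle. The short Python sketch below illustrates this with a hypothetical predictor whose zeros are placed at the AR5f poles introduced in Section IV (an illustration of ours, not the paper's data or code).

```python
import numpy as np

def companion(b):
    """Companion matrix M_c(t): first row B_{N,t}(1)...B_{N,t}(N), shifted identity below."""
    N = len(b)
    M = np.zeros((N, N))
    M[0, :] = b
    M[1:, :-1] = np.eye(N - 1)
    return M

# Hypothetical backward predictor whose zeros sit at the AR5f poles of Section IV
poles = [0.75 * np.exp(2j * np.pi * 0.45), 0.75 * np.exp(-2j * np.pi * 0.45),
         0.60 * np.exp(2j * np.pi * 0.33), 0.60 * np.exp(-2j * np.pi * 0.33), 0.5]
B = -np.real(np.poly(poles))[1:]     # coefficients of z^N - B(1) z^(N-1) - ... - B(N)

eig = np.linalg.eigvals(companion(B))
print(np.max(np.abs(eig)))           # about 0.75: inside the unit circle, F_22(t) stable here
print(np.allclose(np.sort(np.abs(eig)), np.sort(np.abs(poles))))   # eigenvalues match the zeros
```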

Tests with AR signals are carried out to determine the origin of the instability in this particular case.

IV. RESULTS

A. Test with AR Signals
We assume that the input signal of the adaptive filter can be modeled as an autoregressive (AR) process whose order equals the adaptive filter length. To test the numerical behavior of the prediction part of the numerically stable FRLS algorithm, the input signals used in these simulations are AR signals. For a number of selected input parameters of the NS-FRLS algorithm, we observe the evolution of the zeros of the forward/backward predictors, as well as the convergence of these variables, for a predictor order:
1) equal to the AR order;
2) higher than the AR order (over-estimated order);
3) lower than the AR order (under-estimated order).
The goal of these simulations is to analyze the position of the zeros of the forward/backward predictors in the z-plane (unit circle) in order to verify the numerical stability. In these simulations, we choose various values for the poles of the synthesized AR processes:

1) Poles far from the unit circle; 2) Poles close to the unit circle. We will use fifth-order AR processes characterized by

the following poles:

AR5f (poles far from the unit circle): 0.75 e^{±j2π(0.45)}, 0.6 e^{±j2π(0.33)}, 0.5.

AR5c (some poles close to the unit circle): 0.98 e^{±j2π(0.45)}, 0.6 e^{±j2π(0.33)}, 0.5.
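For reproducibility, such test signals can be synthesized directly from the pole sets above. The sketch below is a plausible generator of ours (not the authors' code): it builds the AR(5) denominator from the quoted poles and filters white Gaussian noise through it.

```python
import numpy as np
from scipy.signal import lfilter

def ar_signal(pole_moduli, pole_freqs, n_samples, seed=0):
    """AR process with conjugate pole pairs at r*exp(+/-j*2*pi*f); f = 0 gives a real pole."""
    poles = []
    for r, f in zip(pole_moduli, pole_freqs):
        if f == 0.0:
            poles.append(r)
        else:
            poles.extend([r * np.exp(2j * np.pi * f), r * np.exp(-2j * np.pi * f)])
    a = np.real(np.poly(poles))                      # AR denominator coefficients
    w = np.random.default_rng(seed).standard_normal(n_samples)
    return lfilter([1.0], a, w)

ar5f = ar_signal([0.75, 0.60, 0.50], [0.45, 0.33, 0.0], 10000)   # poles far from the unit circle
ar5c = ar_signal([0.98, 0.60, 0.50], [0.45, 0.33, 0.0], 10000)   # one pair close to the unit circle
```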

Table II shows the stability state of the NS-FRLS algorithm for various orders and for the two input signals. We note that, for a forgetting factor λ very close to the stability condition, the NS-FRLS algorithm is unstable for the AR5c signal while it is stable for the AR5f signal. Table II also shows the stable case of the NS-FRLS algorithm for the AR5c signal; this stability is only ensured if the value of the forgetting factor λ is increased. Figs. 3 and 4 show some results for the two synthetic AR signals of order 5. Referring to these results, we see that if one zero of the backward predictor moves outside the unit circle, the algorithm diverges (see Fig. 4); this means that one eigenvalue of the sub-matrix F_{22}(t) is greater than 1 in magnitude.

Figure 3. Position of the estimated zeros of the forward/backward predictors in steady state for the AR5f signal: poles far from the unit circle (λ = 0.9333 (p = 3), N = 5). The figure shows two z-plane panels, forward predictor and backward predictor, with ○ the predicted poles and × the real poles.

Figure 4. Position of the estimated zeros of the forward/backward predictors just before the divergence for the AR5c signal: poles close to the unit circle (λ = 0.9333 (p = 3), N = 5). The figure shows two z-plane panels, forward predictor and backward predictor, with ○ the predicted poles and × the real poles.

Therefore the error-propagation sub-system

corresponding to the sub-matrix F_{22}(t) is unstable. Various experiments have been performed to assess the stability of the NS-FRLS algorithm. They were carried out on several signals of the same kind as the AR5f and AR5c signals. The results reveal that, for signals of the same kind as AR5f, choosing a forgetting factor λ very close to the stability condition (7) is sufficient to maintain the stability of the NS-FRLS algorithm.

TABLE II. STATE OF STABILITY FOR VARIOUS ORDERS

N     4                    5                    7
λ     0.9167    0.9375     0.9333    0.95       0.9524    0.9643
F     stable    stable     stable    stable     stable    stable
C     unstable  stable     unstable  stable     unstable  stable

F: AR5f signal; C: AR5c signal. For each order N, the two values of λ correspond to p = 3 and p = 4 in condition (7).


For signals of the AR5c kind, however, stability is only ensured when the forgetting factor λ is chosen well above the bound of condition (7). Moreover, for the AR5c signal, the stability of the algorithm is also observed when the predictor order is largely over-estimated.

B. Application to System Identification

In this part, we identify a system with the setup of Fig. 1. The adaptive filter is based on the new numerically stable version of the FRLS algorithm, and the reference system has N = 32 coefficients. A zero-mean Gaussian noise n_t, with the SNR set to 30 dB, was added to the output of the reference filter. In the first experiment, the input signal is white noise and the desired signal is obtained by filtering it through a synthetically generated FIR system. The system mismatch,

mis(t) = ||W_opt - W_{N,t}||^2 / ||W_opt||^2      (12)

where W_opt denotes the optimal system, is computed as a function of time and plotted in Fig. 5. A stationary 10th-order autoregressive (AR10) input signal is chosen for the second experiment. The poles of the input signal are located at 0.98 e^{±j2π(0.45)}, 0.98 e^{±j2π(0.2)}, 0.6 e^{±j2π(0.33)}, 0.5 e^{±j2π(0.25)} and 0.25 e^{±j2π(0.4)} in the z-plane. Fig. 6 shows the evolution of the system mismatch for the AR10 input signal. The stability condition of the algorithm is respected in all experiments.
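The noise level and the figure of merit of Eq. (12) translate directly into code. The helpers below are an illustrative sketch of ours (names and structure are assumptions, not the authors' test bench).

```python
import numpy as np

def add_noise_snr(y, snr_db, rng=None):
    """Add zero-mean Gaussian noise n_t so that the resulting SNR equals snr_db (e.g. 30 dB)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise_power = np.mean(y ** 2) / (10.0 ** (snr_db / 10.0))
    return y + np.sqrt(noise_power) * rng.standard_normal(len(y))

def system_mismatch_db(W_opt, W_est):
    """Eq. (12) expressed in dB: ||W_opt - W_{N,t}||^2 / ||W_opt||^2."""
    return 10.0 * np.log10(np.sum((W_opt - W_est) ** 2) / np.sum(W_opt ** 2))

# A 1% coefficient error on a 32-tap system gives a -40 dB mismatch:
print(system_mismatch_db(np.ones(32), 0.99 * np.ones(32)))
```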

Figure 5. System identification: system mismatch (dB) versus samples (input signal: white noise, N = 32 and λ = 0.9895833 (p = 3)).

Figure 6. System identification: system mismatch (dB) versus samples (input signal: AR10 signal, N = 32 and λ = 0.9921875 (p = 4)).

V. CONCLUSION

In this paper, a new numerically stable version of the FRLS algorithm was proposed for the case of stationary input signals. We first tested the stability of the numerically stabilized FRLS algorithm with synthetic autoregressive (AR) input signals. The analysis of the propagation of numerical errors was carried out for the NS-FRLS algorithm, and a sub-matrix of the error-propagation matrix was examined for a particular case.

These algorithms are stable for a suitable choice of the forgetting factor, whose minimal admissible value depends on the nature of the input signal. Simulation results obtained with the proposed NS-FRLS version were presented for the system identification application.

REFERENCES

[1] D. T. M. Slock and T. Kailath, "Numerically Stable Fast Transversal Filters for Recursive Least Squares Adaptive Filtering," IEEE Trans. on Signal Processing, vol. 39, no. 1, pp. 92-114, January 1991.

[2] J. R. Bunch, R. C. LeBorne, and I. K. Proudler, "Measuring and maintaining consistency: a hybrid FTF algorithm," Int. Journal of Applied Math and Comp. Sci., vol. 11, no. 5, pp. 1203-1216, 2001.

[3] J. M. Cioffi and T. Kailath, "Fast Recursive Least Squares Transversal Filters for Adaptive Filtering," IEEE Trans. Acoustics, Speech, Signal Processing, vol. ASSP-32, no. 4, pp. 304-337, April 1984.

[4] J.-L. Botto, "Stabilization of fast recursive least-squares transversal filters for adaptive filtering," in Proc. ICASSP 87 Conf., Dallas, TX, pp. 403-406, April 1987.

[5] A. Benallal and A. Gilloire, "A New Method to Stabilize Fast RLS Algorithms Based on a First-Order Model of the Propagation of Numerical Errors," in Proc. ICASSP 88 Conf., New York, NY, pp. 1373-1376, April 1988.

[6] R. Morgan, J. Benesty, and M. Sondhi, "On the evaluation of estimated impulse responses," IEEE Signal Processing Letters, vol. 5, no. 7, pp. 174-176, July 1998.

[7] A. Benallal, "A study of the transversal fast recursive least squares algorithms and application to the identification of acoustic impulse responses," (in French) Ph.D. dissertation, University of Rennes I, Rennes, France, December 1988.

[8] J. K. Soh and S. C. Douglas, "Analysis of the Stabilized FTF Algorithm with Leakage Correction," in Proc. 30th Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, vol. 2, pp. 1088-1092, November 1996.

[9] M. Arezki, A. Benallal, F. Ykhlef, A. Guessoum, and D. Berkani, "New method of comparison NS-FRLS and NLMS algorithms for the acoustic echo cancellation," in Proc. 4th International Symposium on CSNDSP 2004, University of Newcastle, UK, pp. 528-532, July 2004.

[10] F. Ykhlef, "A study of the stability of the fast recursive least squares algorithms with predictable signals and application to the speech signal," (in French) Master's thesis, University of Blida, Algeria, February 2002.

[11] D. T. M. Slock, "Backward consistency concept and round-off error propagation dynamics in recursive least-squares algorithms," Opt. Engr., vol. 31, no. 6, pp. 1153-1169, June 1992.

[12] P. Regalia, "Numerical stability issues in fast least-squares adaptation algorithms," Opt. Engr., vol. 31, no. 6, pp. 1144-1152, June 1992.

[13] P. Stoica and A. Nehorai, "On stability and root location of linear prediction models," IEEE Trans. Acoustics, Speech, Signal Processing, vol. ASSP-35, no. 4, pp. 582-584, April 1987.