
The 7th International Telecommunications Symposium (ITS 2010)

A complex version of the LASSO algorithm and its application to beamforming

José F. de Andrade Jr. and Marcello L. R. de Campos
Program of Electrical Engineering
Federal University of Rio de Janeiro (UFRJ)
Rio de Janeiro, Brazil 21.941–972, P. O. Box 68504
e-mails: [email protected], [email protected]

José A. Apolinário Jr.
Department of Electrical Engineering (SE/3)
Military Institute of Engineering (IME)
Rio de Janeiro, Brazil 22.290–270
e-mail: [email protected]

Abstract— Least Absolute Shrinkage and Selection Operator (LASSO) is a useful method to achieve coefficient shrinkage and data selection simultaneously. The central idea behind LASSO is to use the L1-norm constraint in the regularization step. In this paper, we propose an alternative complex version of the LASSO algorithm applied to beamforming, aiming to decrease the overall computational complexity by zeroing some weights. The results are compared to those of the Constrained Least Squares and the Subset Selection Solution algorithms. The performance of nulling coefficients is compared to results from an existing complex version named the Gradient LASSO method. The results of simulations for various values of the coefficient vector L1-norm are presented such that distinct amounts of null values appear in the coefficient vector. In this supervised beamforming simulation, the LASSO algorithm is initially fed with the optimum LS weight vector.

Keywords— Beamforming; LASSO algorithm; complex-valued version; optimum constrained optimization.

I. INTRODUCTION

The goal of the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm [1] is to find a Least-Squares (LS) solution such that the L1-norm (also known as "taxicab" or "Manhattan" norm) of its coefficient vector is not greater than a given value, that is, $L_1\{w\} = \sum_{k=1}^{N} |w_k| \le t$. Starting from linear models, the LASSO method has been applied to solve various problems such as wavelets, smoothing splines, logistic models, etc. [2]. The problem may be solved using quadratic programming, general convex optimization methods [3], or by means of the Least Angle Regression algorithm [4]. In real-valued applications, the L1-regularized formulation is applicable to a number of contexts due to its tendency to select solution vectors with fewer nonzero components, resulting in an effective reduction of the number of variables upon which the given solution is dependent [1]. Therefore, the LASSO algorithm and its variants could be of great interest to the fields of compressed sensing and statistical natural selection.

This work introduces a novel version of the LASSO algorithm which, as opposed to an existing complex version, viz. the recently published Gradient LASSO [5], brings to the field of complex variables the ability to produce solutions with a large number of null coefficients. The paper is organized as follows: Section II presents an overview of the LASSO problem formulation. Section III-A starts by providing a version of the new Complex LASSO (C-LASSO), while the Gradient LASSO (G-LASSO) method and the Subset Selection Solution (SSS) [6] are described in the sequence. In Section IV, simulation results comparing the proposed C-LASSO to other schemes are presented. Finally, conclusions are summarized in Section V.

II. OVERVIEW OF THE LASSO REGRESSION

The real-valued version of the LASSO algorithm is equivalent to solving a minimization problem stated as:

\min_{(w_1,\cdots,w_N)} \frac{1}{2} \sum_{i=1}^{M} \Big( y_i - \sum_{j=1}^{N} x_{ij} w_j \Big)^2 \;\; \text{s.t.} \;\; \sum_{k=1}^{N} |w_k| \le t \quad (1)

or, equivalently,

\min_{w \in \mathbb{R}^N} \frac{1}{2} (y - Xw)^T (y - Xw) \;\; \text{s.t.} \;\; g(w) = t - \|w\|_1 \ge 0. \quad (2)

The LASSO regressor, although quite similar to the shrinking procedure used in the ridge regressor [7], will cause some of the coefficients to become zero. Several algorithms have been proposed to solve the LASSO [3]–[4]. Tibshirani [1] suggests, for the case of real-valued signals, a simple, albeit not efficient, way to solve the LASSO regression problem. Let s(w) be defined as

s(w) = \mathrm{sign}(w). \quad (3)

Therefore, s(w) is a member of the set

S = \{s_j\}, \quad j = 1, \ldots, 2^N, \quad (4)

whose elements are of the form

s_j = [\pm 1 \; \pm 1 \; \ldots \; \pm 1]^T. \quad (5)

The LASSO algorithm in [1] is based on the fact that the constraint \sum_{k=1}^{N} |w_k| \le t is satisfied if

s_j^T w \le t = \alpha t_{LS}, \quad \text{for all } j, \quad (6)

with t_{LS} = \|w_{LS}\|_1 (the L1-norm of the unconstrained LS solution) and \alpha \in (0, 1].

However, we do not know in advance the correct index j corresponding to \mathrm{sign}(w_{LASSO}). At best, we can project a candidate coefficient vector onto the hyperplane defined by s_j^T w = t. After testing whether the projected solution satisfies the L1-norm constraint, we can accept it or move further, choosing another member of S such that, after a number of trials, we obtain a valid LASSO solution.
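
To make this procedure concrete, the following minimal Python (NumPy) sketch implements the iterative sign-projection idea for real-valued data. It is our illustration, not the authors' code; it assumes a tall, full-rank design matrix X, uses equality projections s^T w = t, and caps the number of iterations:

```python
import numpy as np

def lasso_sign_projection(X, y, t, max_iter=100):
    """Real-valued LASSO via iterative sign constraints (after [1]).

    Repeatedly projects the unconstrained LS solution onto the
    hyperplanes s_j^T w = t accumulated from the sign patterns of
    successive violating solutions, until ||w||_1 <= t.
    """
    R_inv = np.linalg.inv(X.T @ X)          # inverse of the (scaled) covariance
    w_ls = R_inv @ (X.T @ y)                # unconstrained LS solution
    w = w_ls.copy()
    C = np.empty((len(w_ls), 0))            # accumulated sign vectors (columns)
    for _ in range(max_iter):
        if np.sum(np.abs(w)) <= t + 1e-12:  # L1 constraint satisfied: done
            break
        C = np.column_stack([C, np.sign(w)])
        f = t * np.ones(C.shape[1])
        # equality-constrained LS: project w_ls onto {w : C^T w = f}
        M = C.T @ R_inv @ C
        w = w_ls - R_inv @ C @ np.linalg.solve(M, C.T @ w_ls - f)
    return w
```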

III. COMPLEX SHRINKAGE REGRESSION METHODS

A. A Complex LASSO Algorithm

The original version of the LASSO algorithm, as proposed in [1], is not suitable for complex-valued applications, such as beamforming. Aiming to null a large portion of the coefficient vector, we suggest solving (1) in two steps, treating real and imaginary quantities separately. The algorithm, herein called C-LASSO for Complex LASSO, is summarized in Algorithm 1. In order to overcome some of the difficulties brought by the complex constraints, we propose that

\sum_{k=1}^{N} |w_k| \le t \quad (7)

be changed to

\sum_{k=1}^{N} |\mathrm{Re}\{w_k\}| \le t_R \quad \text{and} \quad (8)

\sum_{k=1}^{N} |\mathrm{Im}\{w_k\}| \le t_I. \quad (9)

Considering the equality condition in (7),

\sum_{k=1}^{N} |w_k| = t \le \sum_{k=1}^{N} \big( |\mathrm{Re}\{w_k\}| + |\mathrm{Im}\{w_k\}| \big) \le t_R + t_I, \quad (10)

choosing t_R = t_I and the lower bound in (10), we have

t_R = t_I = \frac{\alpha t_{LS}}{2}. \quad (11)
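
In code, the budget split of (8)–(11) amounts to a couple of lines; the weight values below are hypothetical, used only for illustration:

```python
import numpy as np

w_ls = np.array([0.8 + 0.3j, -0.2 + 0.5j, 0.1 - 0.7j])  # hypothetical LS weights
alpha = 0.5
t_ls = np.sum(np.abs(w_ls))      # t_LS: L1-norm of the unconstrained LS solution
t_r = t_i = alpha * t_ls / 2.0   # Eq. (11): equal budgets for Re and Im parts
```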

For an array containing N sensors, where all coefficients are nonzero, consider the total number of multiplications required for calculating a beamforming output equal to 3N. For each coefficient having a null real (or imaginary) part, the computational complexity is reduced from three to two multiplications. For each null coefficient, the number of multiplications is reduced from three to zero. Thus, the resulting number of multiplications is

N_R = 3N - (N_p + 3N_z), \quad (12)

where N_z and N_p are the numbers of null coefficients and of null coefficient parts (real or imaginary), respectively. The count of N_p excludes the null parts already taken into account in N_z.
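
The bookkeeping of (12) can be checked with a small helper; treating as "null" any part whose magnitude falls below a numerical tolerance is our assumption:

```python
import numpy as np

def multiplication_count(w, tol=1e-12):
    """Number of real multiplications per output sample, Eq. (12)."""
    re_zero = np.abs(w.real) < tol
    im_zero = np.abs(w.imag) < tol
    Nz = int(np.sum(re_zero & im_zero))   # fully null coefficients
    Np = int(np.sum(re_zero ^ im_zero))   # one null part, not counted in Nz
    return 3 * len(w) - (Np + 3 * Nz)
```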

Algorithm 1: The Complex LASSO (C-LASSO) Algorithm
Initialization:
  α ∈ (0, 1];
  X; % (X is a matrix with N × M elements)
  Y; % (Y is a vector with N × 1 elements)
  w_LS ← (X^H X)^{-1} X^H Y;
  t_LS ← Σ_{k=1}^{N} |w_{LS,k}|;
  t_R ← α t_LS / 2;
  t_I ← t_R;
  w^o_Real ← Re[w_LS];  w^o_Imag ← Im[w_LS];
  w^c_Real ← w^o_Real;  w^c_Imag ← w^o_Imag;
  C^c_Real ← sign[w^o_Real];  C^c_Imag ← sign[w^o_Imag];
  X_Real ← Re[X];  X_Imag ← Im[X];
  R^{-1}_Real ← (1/2)(X_Real X^H_Real)^{-1};
  R^{-1}_Imag ← (1/2)(X_Imag X^H_Imag)^{-1};
while (‖w^c_Real‖_1 > t_R) do
  Col ← number of columns of C^c_Real;
  f ← 1_{(Col×1)} t_R, with 1_{(Col×1)} = [1 1 … 1]^T;
  w^c_Real ← w^o_Real − R^{-1}_Real C^c_Real ((C^c_Real)^H R^{-1}_Real C^c_Real)^{-1} ((C^c_Real)^H w^o_Real − f);
  C^c_Real ← [C^c_Real  sign[w^c_Real]];
end while
while (‖w^c_Imag‖_1 > t_I) do
  Col ← number of columns of C^c_Imag;
  f ← 1_{(Col×1)} t_I, with 1_{(Col×1)} = [1 1 … 1]^T;
  w^c_Imag ← w^o_Imag − R^{-1}_Imag C^c_Imag ((C^c_Imag)^H R^{-1}_Imag C^c_Imag)^{-1} ((C^c_Imag)^H w^o_Imag − f);
  C^c_Imag ← [C^c_Imag  sign[w^c_Imag]];
end while
w_LASSO ← w^c_Real + j w^c_Imag.
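
A compact NumPy sketch of Algorithm 1 follows. It is our reading of the pseudocode under our own conventions (X stored as snapshots × sensors, so the LS step is a standard lstsq; the scale factor 1/2 in R^{-1} cancels inside the projection), not a reference implementation:

```python
import numpy as np

def shrink_part(w0, R_inv, budget, max_iter=100):
    """One while-loop of Algorithm 1: project w0 onto accumulated
    sign constraints s^T w = budget until ||w||_1 <= budget."""
    w = w0.copy()
    C = np.empty((len(w0), 0))
    for _ in range(max_iter):
        if np.sum(np.abs(w)) <= budget + 1e-12:
            break
        C = np.column_stack([C, np.sign(w)])
        f = budget * np.ones(C.shape[1])
        M = C.T @ R_inv @ C
        w = w0 - R_inv @ C @ np.linalg.solve(M, C.T @ w0 - f)
    return w

def c_lasso(X, y, alpha):
    """C-LASSO sketch: shrink the Re and Im parts of the LS solution
    separately, each under an L1 budget of alpha * t_LS / 2."""
    w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)      # complex LS solution
    budget = alpha * np.sum(np.abs(w_ls)) / 2.0       # t_R = t_I, Eq. (11)
    R_inv_re = np.linalg.inv(X.real.T @ X.real)       # metric for the real part
    R_inv_im = np.linalg.inv(X.imag.T @ X.imag)       # metric for the imag part
    w_re = shrink_part(w_ls.real, R_inv_re, budget)
    w_im = shrink_part(w_ls.imag, R_inv_im, budget)
    return w_re + 1j * w_im
```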

B. The Subset Selection Solution Procedure

The Subset Selection (SS) solution [6] is obtained from the ordinary least squares (LS) solution after forcing the smallest coefficients (in magnitude) to be equal to zero. In order to compare the beam patterns produced by the SS and C-LASSO algorithms, the numbers of zero real and imaginary parts obtained by the C-LASSO algorithm are taken into account when calculating the coefficients using the SS algorithm. The SS algorithm is summarized in Algorithm 2.


Algorithm 2: The Subset Selection (SS) Solution
Initialization: w_LS, N_{Z_Real}, N_{Z_Imag};
[w_{SSS_Real}, pos_Real] ← Sort(|w_{LS_Real}|);
[w_{SSS_Imag}, pos_Imag] ← Sort(|w_{LS_Imag}|);
for i = 1 to N_{Z_Real} do
  w_{SSS_Real}(i) ← 0;
end for
for k = 1 to N_{Z_Imag} do
  w_{SSS_Imag}(k) ← 0;
end for
w_{SSS_Real}(pos_Real) ← w_{SSS_Real};
w_{SSS_Imag}(pos_Imag) ← w_{SSS_Imag};
w_{SSS_Real} ← w_{SSS_Real} .* sign(w_{LS_Real});
w_{SSS_Imag} ← w_{SSS_Imag} .* sign(w_{LS_Imag});
w_SSS ← w_{SSS_Real} + j w_{SSS_Imag};
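
A direct NumPy translation of Algorithm 2 is a short sketch; argsort indices replace the explicit Sort/scatter bookkeeping of the pseudocode:

```python
import numpy as np

def subset_selection(w_ls, nz_real, nz_imag):
    """Zero the nz_real smallest-magnitude real parts and the
    nz_imag smallest-magnitude imaginary parts of the LS solution."""
    w_re = w_ls.real.copy()
    w_im = w_ls.imag.copy()
    w_re[np.argsort(np.abs(w_re))[:nz_real]] = 0.0
    w_im[np.argsort(np.abs(w_im))[:nz_imag]] = 0.0
    return w_re + 1j * w_im
```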

C. The Gradient LASSO Algorithm

The gradient LASSO (G-LASSO) algorithm, summarized in Algorithm 3, is computationally more stable than quadratic programming (QP) schemes because it does not require matrix inversions; thus, it can be more easily applied to higher-dimensional data. Moreover, the G-LASSO algorithm is guaranteed to converge to the optimum under mild regularity conditions [2].

Algorithm 3: The G-LASSO Algorithm
Initialization: w = 0 and m = 0;
while (not converged) do
  m ← m + 1;
  Compute the gradient ∇(w) ← (∇(w)_1, …, ∇(w)_d);
  Find the (l̂, k̂, γ̂) which minimizes γ∇(w)_{lk} for l = 1, …, d, k = 1, …, p_l, γ = ±1;
  Let v_{l̂} be the p_{l̂}-dimensional vector whose k̂-th element is γ̂ and whose other elements are 0;
  Find β̂ = argmin_{β∈[0,1]} C(w[β, v_{l̂}]);
  Update w:
    w_{lk} ← (1 − β̂) w_{lk} + γ̂β̂,  if l = l̂ and k = k̂
    w_{lk} ← (1 − β̂) w_{lk},       if l = l̂ and k ≠ k̂
    w_{lk} ← w_{lk},               otherwise
end while
return w.
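
For intuition, the sketch below implements the single-block (d = 1) special case of this scheme for the quadratic loss C(w) = (1/2)‖y − Xw‖², with an exact line search for β; Algorithm 3 generalizes the same update to grouped coefficients. The simplification is ours:

```python
import numpy as np

def gradient_lasso(X, y, t, n_iter=500):
    """Single-block gradient-LASSO sketch: keep w inside the L1 ball
    of radius t via convex combinations with the ball's vertices."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = y - X @ w                         # residual
        grad = -X.T @ r                       # gradient of the loss
        k = int(np.argmax(np.abs(grad)))      # best coordinate
        gamma = -np.sign(grad[k])             # sign minimizing gamma * grad_k
        v = np.zeros_like(w)
        v[k] = gamma * t                      # vertex of the L1 ball
        d = X @ (v - w)
        if d @ d == 0.0:
            break
        beta = np.clip((r @ d) / (d @ d), 0.0, 1.0)  # argmin over [0, 1]
        w = (1.0 - beta) * w + beta * v       # convex update keeps ||w||_1 <= t
    return w
```

Note that, as the text states, no matrix inversion appears anywhere in the update.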

IV. SIMULATION RESULTS

In this section, we present the results of simulations for various values of α such that distinct numbers of null coefficients arise in the coefficient vector w. The simulations were performed in a supervised beamforming setting, and the LASSO algorithm was initially fed with the optimum weight vector w_opt. Subsection IV-A briefly describes the signal model used to calculate w_opt and w_LASSO. The results are presented in Subsection IV-B.

A. Signal Model

Consider a uniform linear array (ULA) composed of N receiving antennas (sensors) and q narrowband signals arriving from different directions φ_1, · · ·, φ_q. The output signal observed from the N sensors during M snapshots can be denoted as x(t_1), x(t_2), · · ·, x(t_M). The N × 1 signal vector is then written as

x(t) = \sum_{k=1}^{q} a(\phi_k) s_k(t) + n(t), \quad t = t_1, t_2, \cdots, t_M, \quad (13)

or, using matrix notation,

X = [x(t_1) \; x(t_2) \; \cdots \; x(t_M)] = AS + \eta, \quad (14)

[Figure 1. Narrowband signal model for a uniform linear array (ULA) at snapshot instant t: N sensors spaced d apart.]

where matrix X is the input signal matrix of dimension N × M. Matrix A = [a(φ_1), · · ·, a(φ_q)] is the steering matrix of dimension N × q, whose columns are given by

a(\phi) = [1, e^{-j(2\pi/\lambda)d\cos(\phi)}, \cdots, e^{-j(2\pi/\lambda)d(N-1)\cos(\phi)}]^T. \quad (15)

S, in (14), is a q × M signal matrix whose rows correspond to snapshots. η is the noise matrix of dimension N × M; λ and d are the wavelength of the signal and the distance between antenna elements (sensors), respectively.

Based on the above definitions, the covariance matrix R, defined as E[x(t) x^H(t)], can be estimated as

R = XX^H = ASS^H A^H + \eta\eta^H, \quad (16)


which, except for a multiplicative constant, corresponds to the time average

R = \frac{1}{M} \sum_{k=1}^{M} x(t_k) x^H(t_k). \quad (17)
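
The signal model (13)–(17) is easy to reproduce in a few lines; the sketch below assumes half-wavelength sensor spacing (d/λ = 0.5) and unit-power complex Gaussian sources and noise, all of which are our choices rather than the paper's:

```python
import numpy as np

def steering_vector(phi, N, d_over_lambda=0.5):
    """a(phi) of Eq. (15); phi in radians."""
    n = np.arange(N)
    return np.exp(-1j * 2.0 * np.pi * d_over_lambda * n * np.cos(phi))

def simulate_ula(N, phis_deg, M, seed=0):
    """Generate X = A S + eta, Eq. (14), and the covariance estimate of Eq. (17)."""
    rng = np.random.default_rng(seed)
    phis = np.deg2rad(np.asarray(phis_deg, dtype=float))
    A = np.stack([steering_vector(p, N) for p in phis], axis=1)   # N x q steering matrix
    q = len(phis)
    S = (rng.standard_normal((q, M)) + 1j * rng.standard_normal((q, M))) / np.sqrt(2)
    eta = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    X = A @ S + eta                                               # N x M snapshots
    R_hat = (X @ X.conj().T) / M                                  # time average, Eq. (17)
    return X, R_hat
```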

From the formulation above and imposing linear constraints, it is possible to obtain a closed-form expression for w_opt [8], the LCMV solution

w_{opt} = R^{-1} C \left( C^H R^{-1} C \right)^{-1} f, \quad (18)

such that C^H w_{opt} = f, where C and f are given by [9]

C = [1, e^{-j\pi\cos(\phi)}, \cdots, e^{-j\pi(N-1)\cos(\phi)}]^T \quad (19)

and

f = 1. \quad (20)
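
Given the estimated covariance, the LCMV weights of (18)–(20) follow in a few lines; in this sketch C reduces to a single steering vector toward the desired direction, as in (19):

```python
import numpy as np

def lcmv_weights(R_hat, phi_deg, N):
    """w_opt = R^{-1} C (C^H R^{-1} C)^{-1} f, Eqs. (18)-(20)."""
    phi = np.deg2rad(phi_deg)
    C = np.exp(-1j * np.pi * np.arange(N) * np.cos(phi)).reshape(-1, 1)  # Eq. (19)
    f = np.ones(1)                                                       # Eq. (20)
    Ri_C = np.linalg.solve(R_hat, C)          # R^{-1} C without explicit inversion
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)
```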

B. The Results

Experiments were conducted in which we designed ULA beamformers with 50 and 100 sensors. C-LASSO beam patterns were compared to SSS and CLS beam patterns for α values equal to 0.05, 0.10, 0.25, and 0.50. We also compared the shrinkage capacity of the C-LASSO and G-LASSO algorithms.

The simulations presented in this section were carried out considering the direction of arrival of the desired signal θ = 120°. The four interfering signals used in the simulations employed power levels such that the Interference-to-Noise Ratios (INR) were 30, 20, 30, and 30 dB. The number of snapshots was 6,000.

[Figure 2. Beam pattern (dB versus azimuth, in degrees) calculated for θ = 120°, N = 50, and α = 0.05, showing W_opt, W_LASSO, and W_SSS with the four jammer directions marked. C-LASSO and SSS each have 21 of 50 coefficients equal to zero, 36 null real parts, and 35 null imaginary parts.]

Figure 2 compares the beam patterns from the CLS, the SS, and the C-LASSO algorithms, where it can be seen that the constraints related to the interferers are satisfied by the C-LASSO coefficients. In this experiment, about 42% of the coefficients are zero, and the percentage of null real and imaginary parts is around 70%.

[Figure 3. Beam pattern (dB versus azimuth, in degrees) calculated for θ = 120°, N = 50, and α = 0.1, showing W_opt, W_LASSO, and W_SSS. C-LASSO and SSS each have 7 of 50 coefficients equal to zero, 26 null real parts, and 31 null imaginary parts.]

In the simulation presented in Figure 3, only about 14% of the coefficients are zero, although the percentage of null real and imaginary parts is around 50%.

[Figure 4. Beam pattern (dB versus azimuth, in degrees) calculated for θ = 120°, N = 50, and α = 0.25, showing W_opt, W_LASSO, and W_SSS. C-LASSO and SSS: no coefficient fully zero, 25 null real parts, and 25 null imaginary parts.]

In the simulations whose results are presented in Figures 4 and 5, no coefficient has both its real and imaginary parts forced to zero, although the real or imaginary parts of some coefficients were zeroed, representing 50% of the total.

Figure 5 shows that, for α = 0.50 and N = 100, the C-LASSO and the SS solution beam patterns approximate the optimal case (CLS), even with 50% of the real and imaginary parts equal to zero.


[Figure 5. Beam pattern (dB versus azimuth, in degrees) calculated for θ = 120°, N = 100, and α = 0.50, showing W_opt, W_LASSO, and W_SSS. C-LASSO and SSS: no coefficient fully zero, 50 null real parts, and 50 null imaginary parts.]

[Figure 6. Number of null real parts versus α (log scale, 10^{-6} to 10^{0}) for N = 20, comparing G-LASSO and C-LASSO.]

[Figure 7. Number of null imaginary parts versus α (log scale, 10^{-6} to 10^{0}) for N = 20, comparing G-LASSO and C-LASSO.]

Based on the results of a second experiment, whose simulations are presented in Figures 6 and 7, we can say that the C-LASSO algorithm is more efficient than the G-LASSO algorithm in terms of coefficient shrinkage, that is, in forcing coefficients (or their real or imaginary parts) to become zero.

V. CONCLUSIONS

In this article, we proposed a novel complex version of the LASSO shrinkage operator, named C-LASSO, and explored shrinkage techniques in computer simulations. We compared beam patterns obtained using the C-LASSO, SS, and CLS algorithms. Based on the results presented in the previous section, it is clear that the C-LASSO algorithm can be employed to solve problems related to beamforming with reduced computational complexity.

Although, for very low values of α, the radiation pattern presents extra secondary lobes, which decrease the SNR at the receiver, the constraints concerning jammers continue to be satisfied by the C-LASSO algorithm, making this technique well suited to anti-jamming systems [10].

One important area in which to apply and test this kind of statistical selection algorithm would be the field of electronic warfare [10], where miniaturization, short delays, and low-power circuits are required.

ACKNOWLEDGMENTS

The authors thank the Brazilian Army, the Brazilian Navy, and the Brazilian National Research Council (CNPq), under contracts 306445/2007-7 and 478282/2007-9, for partially funding this work.

REFERENCES

[1] R. Tibshirani, "Regression Shrinkage and Selection via the Lasso," Journal of the Royal Statistical Society (Series B), vol. 58, no. 1, pp. 267–288, May 1996.
[2] K. Jinseog, K. Yuwon, and K. Yongdai, "Gradient LASSO algorithm," Technical report, Seoul National University, URL http://idea.snu.ac.kr/Research/papers/GLASSO.pdf, 2005.
[3] M. R. Osborne, B. Presnell, and B. A. Turlach, "On the LASSO and Its Dual," Journal of Computational and Graphical Statistics, vol. 9, pp. 319–337, May 1999.
[4] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.
[5] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890–912, 2008.
[6] M. L. R. de Campos and J. A. Apolinário Jr., "Shrinkage methods applied to adaptive filters," Proceedings of the International Conference on Green Circuits and Systems, June 2010.
[7] A. E. Hoerl and R. W. Kennard, "Ridge Regression: Biased estimation for nonorthogonal problems," Technometrics, vol. 12, no. 1, pp. 55–67, February 1970.
[8] H. L. Van Trees, Optimum Array Processing, 1st Ed., vol. 4, Wiley-Interscience, March 2002.
[9] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, 3rd Ed., Springer, 2008.
[10] M. R. Frater, Electronic Warfare for the Digitized Battlefield, 1st Ed., Artech Print on Demand, October 2001.