  • 7/28/2019 On the Performance of the Constant Modulus Array Restricted to the Signal Subspace


    ON THE PERFORMANCE OF THE CONSTANT MODULUS ARRAY

    RESTRICTED TO THE SIGNAL SUBSPACE

J.R. Cerquides and J.A. Fernández-Rubio

    Signal Theory and Communications Department, Polytechnic University of Catalonia

Módulo D5, Campus Nord UPC, C/Gran Capitán s/n, 08034 Barcelona, SPAIN

    Tel.:+34-3-4015938, Fax:+34-3-4016447

    E-mail: [email protected]

    ABSTRACT

The Constant Modulus Array has a slow rate of convergence, mainly due to both the nonconvex nature of its cost function and the well-known behavior of stochastic steepest-descent algorithms in environments with a large eigenvalue spread. In this paper we analyze the solutions of the Constant Modulus cost function, showing that the weight vectors associated with its minima lie in the signal subspace. From that information we develop a modified version of the Constant Modulus Array which speeds up convergence and reduces the final misadjustment error. The proposed method is especially useful for arrays with a large number of sensors and a low Signal-to-Noise Ratio for the Source of Interest.

    1. INTRODUCTION

The Constant Modulus Array (CMA) was introduced in 1986 by Gooch and Lundell [1], who suggested applying the Constant Modulus (CM) criterion originally designed by Godard [2] to the field of adaptive antennas. Due to its interesting properties (low computational load, independence of the array manifold, etc.) it has become probably the most popular blind beamforming scheme. However, it is far from free of drawbacks. One of its main inconveniences is its slow convergence, which sometimes makes it inapplicable in practical environments, especially when the Signal-to-Noise Ratio of the incoming signals is poor. This is the problem we address in our paper. As we will show, the analysis of the extrema of the cost function reveals useful information about the location of the minima and their relationship with the signal subspace. This fact can be exploited to speed up the convergence of the algorithm.

This work has been partially supported by the National Research Plan of Spain, CICYT, under Grant TIC-95-1022-C05-01.

The paper is organized as follows: in section 2 we review the constant modulus cost function, introducing the appropriate vector notation; section 3 is devoted to analyzing the nature of the solutions and its consequences. In section 4 we propose a novel technique that exploits the results of this analysis while avoiding some of the undesirable properties of the original proposal. Section 5 shows the relationship between the proposed technique and the Generalized Sidelobe Canceller (GSLC). Simulation results comparing the suggested algorithm with other approaches are shown in section 6. Finally, we end the paper by paying attention to special scenarios where the proposed algorithm can be especially useful.

    2. THE CONSTANT MODULUS ARRAY

The CMA is one of the simplest blind beamforming schemes. Although, as happens with the Sato and Decision-Directed (DD) algorithms, it can be seen as a particular case of the more general family of Bussgang algorithms, it was developed independently by Godard and later applied to array processing by Gooch and Lundell. The algorithm tries to minimize a nonconvex cost function designed to penalize deviations in the envelope of the array output signal:

J = E[ ( |y[n]|^2 - 1 )^2 ]    (1)

being y[n] the output of the array, obtained as a linear combination of the received signals,

y[n] = w^H x[n]    (2)

where x[n] is the received snapshot and w is the weight vector, which describes the spatial response of the array. The superscript H stands for the Hermitian operator, i.e., conjugate transpose.

The idea underlying the mathematical formulation is very simple: the Source Of Interest (SOI) is assumed to have the constant-envelope property¹. This property is lost due to the contribution of noise and/or interfering processes to the output signal y[n]. The algorithm tries to indirectly remove noise and interferences by restoring the lost property.

The optimization procedure follows a simple, LMS-like, stochastic steepest-descent algorithm, which yields a weight vector adaptation equation given by:

w[n+1] = w[n] - μ ( |y[n]|^2 - 1 ) y*[n] x[n]    (3)

where μ is the step-size parameter, whose choice is a compromise between convergence speed and misadjustment noise. Probably the most useful result about the optimum value of μ is given by Hilal and Duhamel [3], who suggest selecting it as:

μ = μ0 / ( ||x[n]||^2 |y[n]| ( 1 + |y[n]| ) )    (4)

where μ0 is the normalized step-size parameter, which must lie in the interval (0,1]. The resulting algorithm is called the Normalized CMA (NCMA).
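A minimal sketch of one NCMA iteration, combining eqs. (2)-(4); the function name `ncma_update` and the default for `mu0` are illustrative, not from the paper:

```python
import numpy as np

def ncma_update(w, x, mu0=0.5):
    """One Normalized CMA weight update, cf. eqs. (3)-(4).

    w   : (N,) complex weight vector
    x   : (N,) complex snapshot
    mu0 : normalized step size in (0, 1]
    """
    y = np.vdot(w, x)                    # y[n] = w^H x[n], eq. (2)
    r = abs(y)
    # eq. (4): mu = mu0 / (||x||^2 * |y| * (1 + |y|))
    mu = mu0 / (np.vdot(x, x).real * r * (1.0 + r))
    # eq. (3): w[n+1] = w[n] - mu * (|y|^2 - 1) * y* * x
    return w - mu * (r**2 - 1.0) * np.conj(y) * x
```

With mu0 = 1 the normalization makes the updated output reach unit modulus in a single step, which is one way to sanity-check the reconstruction of eq. (4).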

3. SOLUTIONS OF THE CONSTANT MODULUS COST FUNCTION AND THEIR NATURE

In a general environment, the snapshot x[n] will be composed of two terms,

x[n] = Σ_{m=1}^{M} d_m s_m[n] + n[n]    (5)

where M is the number of incoming signals, d_m is the generalized steering vector (including possible multipath effects), s_m[n] represents the m-th signal, and n[n] models the noise (usually thermal noise) generated at the sensors. Under this assumption, y[n] can be rewritten as:

y[n] = Σ_{m=1}^{M} w^H d_m s_m[n] + w^H n[n] = Σ_{m=1}^{M} g_m s_m[n] + n_y[n]    (6)

¹This property is shared by many man-made communications signals (e.g., PSK and FSK modulations, among others).
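The signal model of eqs. (5)-(6) can be sketched as follows; the half-wavelength ULA steering vector and the helper names are illustrative assumptions, since the paper only requires generic generalized steering vectors d_m:

```python
import numpy as np

def ula_steering(n_sensors, theta_deg):
    """Illustrative d_m of eq. (5): steering vector of a half-wavelength ULA."""
    k = np.arange(n_sensors)
    return np.exp(1j * np.pi * k * np.sin(np.radians(theta_deg)))

def snapshots(n_sensors, angles_deg, signals, noise_std=0.1, seed=0):
    """Generate x[n] = sum_m d_m s_m[n] + n[n], eq. (5).

    signals : (M, T) complex source waveforms s_m[n]
    returns : (N, T) array of snapshots
    """
    rng = np.random.default_rng(seed)
    D = np.column_stack([ula_steering(n_sensors, a) for a in angles_deg])
    T = signals.shape[1]
    noise = noise_std * (rng.standard_normal((n_sensors, T))
                         + 1j * rng.standard_normal((n_sensors, T))) / np.sqrt(2)
    return D @ signals + noise
```

The array output of eq. (6) is then obtained snapshot by snapshot as y = w.conj() @ X.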

The set of definitions g_m = w^H d_m (m = 1..M) and n_y[n] = w^H n[n] is implicit in the above equation. Substituting (6) into (1) and manipulating the expression, we finally find:

J = f( g_1 σ_1, ..., g_M σ_M, k_1, ..., k_M, P_n )    (7)

where k_m (m = 1..M) represents the kurtosis² of the m-th signal, σ_m (m = 1..M) is the standard deviation of the m-th signal, and P_n is the noise power at the array output,

P_n = w^H R_nn w    (8)

To determine the behavior of the CM algorithm we need to find the extrema of J by solving the following vector equation:

∇_w J = 0    (9)

However, taking into account expression (7) and the relationship between the set of coefficients (g_1...g_M, P_n) and the weight vector w, it is possible to apply the chain rule to equation (9), obtaining:

∇_w J = Σ_{m=1}^{M} (∂J/∂g_m) ∇_w g_m + (∂J/∂P_n) ∇_w P_n = Σ_{m=1}^{M} (∂J/∂g_m) d_m + (∂J/∂P_n) R_nn w = 0    (10)

    Equation (10) has two sets of solutions:

1. If ∂J/∂P_n = 0, then the first term must also be zero; but if we assume that the generalized steering vectors d_m are linearly independent, the only valid solution is then given by ∂J/∂g_m = 0 for all m. It is possible to demonstrate that this condition implies g_m = 0 for all m, and consequently the weight vector associated with this solution lies completely in the noise subspace. It is also not difficult to demonstrate that this solution is a minimum if and only if all the incoming signals show a kurtosis larger than two, which is not the case for communications signals. Thus, this solution may be catalogued as an unwanted one.

2. If ∂J/∂P_n ≠ 0, then we can rewrite eq. (10) to read as follows:

w = - (∂J/∂P_n)^{-1} R_nn^{-1} Σ_{m=1}^{M} (∂J/∂g_m) d_m = Σ_{m=1}^{M} c_m R_nn^{-1} d_m    (11)

²The kurtosis of a signal is defined as the quotient between its fourth-order moment and the squared second-order moment.


From observation of eq. (11) we can conclude that this set of solutions results in a linear combination of the eigenvectors associated with the signal subspace. A special case of interest is encountered under the classical hypothesis R_nn = σ_n² I. In this situation, the desired solutions of the CM algorithm are linear combinations of the generalized steering vectors of the incoming signals.

    4. THE PROPOSED ALGORITHM

From the performed analysis it is obvious that we can directly avoid unwanted solutions if the number of sources present in the signal scenario is "approximately" known. We have quoted the word "approximately" because our proposal does not need to know the exact number of incoming sources; it is enough to provide the maximum expected number of simultaneous sources. We will denote this number by S (S ≥ M).

Under this condition the proposed algorithm is summarized as follows:

1. Estimate R_xx = E{x[n] x^H[n]} from the received data.

2. Solve the generalized eigenvalue problem described by R_xx v_i = λ_i R_nn v_i for all i = 1..N, being N the number of sensors.

3. Extract the N-S least significant eigenvectors and form the new matrix:

V = [ v_{S+1}, v_{S+2}, ..., v_N ]    (12)

where we assume that the eigenvectors v_i have been previously normalized in step 2.

4. Solve the linearly constrained problem:

min_w J = E[ ( |y[n]|^2 - 1 )^2 ]  subject to  w^H R_nn V = 0    (13)

where the introduction of a set of N-S restrictions will improve the convergence rate of the adaptive process.

The developed algorithm is designed to work over blocks of data rather than on a sample-by-sample basis, although it is possible, if required, to follow an adaptive procedure for obtaining the eigenvectors of the data correlation matrix [4].
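Steps 1-3 above can be sketched as below. The sketch assumes the classical hypothesis R_nn = σ² I mentioned in section 3, under which the generalized eigenvalue problem of step 2 reduces to an ordinary eigendecomposition of R_xx; the function name `noise_subspace` is illustrative:

```python
import numpy as np

def noise_subspace(X, S):
    """Steps 1-3 of the proposed algorithm, assuming Rnn = sigma^2 I.

    X : (N, T) matrix of snapshots
    S : maximum expected number of simultaneous sources
    Returns V = [v_{S+1}, ..., v_N], the N-S least significant eigenvectors (eq. (12)).
    """
    N, T = X.shape
    Rxx = X @ X.conj().T / T              # step 1: estimate Rxx = E{x x^H}
    evals, evecs = np.linalg.eigh(Rxx)    # step 2: eigenproblem (ascending eigenvalues)
    return evecs[:, :N - S]               # step 3: keep the N-S smallest
```

Since np.linalg.eigh returns orthonormal eigenvectors, the normalization required in step 2 is already satisfied; for noise-free data the steering vectors are orthogonal to the columns of V, which is exactly the property the constraint of eq. (13) exploits.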

    5. REFORMULATION OF THE PROPOSED

    TECHNIQUE AND THE GSLC

Equation (13) yields a constrained optimization problem. A first approach is to employ the Frost algorithm, preprocessing all snapshots to remove components lying in the subspace spanned by V. However, this approach does not exploit the subspace rank reduction to reduce the number of computations. A general solution to the optimization of the Constant Modulus cost function under linear restrictions was given by Griffiths [5]. However, in our special case, as all restrictions are equal to zero, the formulation can still be simplified. Bearing in mind the structure of the GSLC beamformer, shown in figure 1, and taking into account how it works, it is clear that the simplest possible choice for w_0 is:

w_0 = 0    (14)

avoiding the need to perform any computation related to the upper branch of the beamformer. The blocking matrix B must be orthogonal to the restrictions. If, under typical conditions, the noise is assumed to be uncorrelated between sensors, with equal power at all of them, the orthogonality condition for B can be written in terms of V as:

B^H V = 0    (15)

Thus, the columns of B must lie in the signal subspace, and B can be chosen as:

B = [ v_1, v_2, ..., v_S ]    (16)

where v_i is the i-th most significant eigenvector of R_xx.

Figure 1 - Structure of the GSLC beamformer: the N-sensor snapshot x[n] feeds an upper branch with weight vector w0 (output y0[n]) and a lower branch where the blocking matrix B produces the S-dimensional vector z[n], weighted by wa (output ya[n]), to form the array output y[n].

    Steps 3 and 4 of the algorithm proposed in section 4

    must be modified according to the new formulation.


3. Extract the S most significant eigenvectors and form the matrix B as described in eq. (16):

B = [ v_1, v_2, ..., v_S ]    (17)

4. For every new snapshot x[n]:

Project it onto the signal subspace to obtain z[n],

z[n] = B^H x[n]    (18)

Update the weight vector w_a[n] following:

w_a[n+1] = w_a[n] - μ0 ( |y[n]|^2 - 1 ) y*[n] z[n] / ( ||z[n]||^2 |y[n]| ( 1 + |y[n]| ) )    (19)

where the normalized version of the CMA is preferred for the adaptation process.
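The per-snapshot projection and update of eqs. (18)-(19) can be sketched as one step; with w0 = 0 the array output reduces to the lower-branch output (the sign convention is immaterial for the CM cost). The function name `gslc_cma_step` is illustrative:

```python
import numpy as np

def gslc_cma_step(wa, x, B, mu0=0.5):
    """One projected NCMA step in the GSLC formulation, eqs. (18)-(19).

    wa : (S,) reduced weight vector
    x  : (N,) snapshot
    B  : (N, S) signal-subspace basis (blocking matrix, eq. (17))
    """
    z = B.conj().T @ x                                  # eq. (18): z[n] = B^H x[n]
    y = np.vdot(wa, z)                                  # output via lower branch (w0 = 0)
    r = abs(y)
    mu = mu0 / (np.vdot(z, z).real * r * (1.0 + r))     # NCMA normalization, cf. eq. (4)
    wa = wa - mu * (r**2 - 1.0) * np.conj(y) * z        # eq. (19)
    return wa, y
```

The computational saving over the Frost-style approach comes from adapting only S coefficients instead of N.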

    6. SIMULATIONS

    The signal scenario chosen is shown in table 1. The

    selected array is linear, having 30 omnidirectional

    sensors. Distance between two of them is half

    wavelength.

    # of signal Signal type Angle of arrival Input SNR

    #1 4-PSK 30 -5 dB

    #2 8-PSK 0 -5 dB

    #3 Tone, f=0.1 20 10 dB

    Table 1 - Signal scenario for the simulations
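A hedged sketch of the three sources of Table 1 (4-PSK, 8-PSK, and a tone at normalized frequency 0.1); amplitude scaling for the quoted SNRs and the angles of arrival are left to the steering/noise model, and the helper names are assumptions:

```python
import numpy as np

def psk(order, T, rng):
    """Unit-modulus M-PSK symbol stream (constant-envelope property)."""
    phases = 2 * np.pi * rng.integers(order, size=T) / order
    return np.exp(1j * phases)

def table1_sources(T, seed=0):
    """The three source waveforms of Table 1, all with unit envelope."""
    rng = np.random.default_rng(seed)
    n = np.arange(T)
    return np.vstack([
        psk(4, T, rng),                  # signal #1: 4-PSK
        psk(8, T, rng),                  # signal #2: 8-PSK
        np.exp(2j * np.pi * 0.1 * n),    # signal #3: tone at f = 0.1
    ])
```

Note that all three sources satisfy the constant-envelope property assumed by the CM criterion, including the tone.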

In figure 2 we can observe the evolution of the output SINR for both algorithms, the proposed one and the classical NCMA, when they are optimally initialized. The signal-subspace method is several times faster than its unconstrained version.

Figure 2 - Evolution of the output SINR (dB) vs. number of iterations for both algorithms: proposed vs. NCMA, with the optimum shown for reference

The representation of the output SINR makes it difficult to appreciate the evolution of the weight vector and the final misadjustment. The proposed technique also achieves more precise reception patterns than the usual NCMA. This fact is shown in figure 3, where both algorithms are initialized to the optimum weight vector. The plot shows the error, computed as the distance between the optimum and the actual weight vectors, for both algorithms.

Figure 3 - Distance between the weight vectors and the optimum (×10⁻³) vs. number of iterations: proposed vs. NCMA

7. CONCLUSIONS

Through the analysis of the solutions of the Constant Modulus cost function we have developed a modified version of the NCMA algorithm which exploits the fact that the optimum weight vector lies in the signal subspace of the autocorrelation matrix of the snapshots. The proposed technique speeds up the evolution of the adaptive beamformer towards the optimum solution, showing better properties once the algorithm has converged.

Although the computational load of the proposed method is higher than that of the NCMA, there are several interesting cases where the suggested algorithm is especially useful:

- when the data set available for beamforming is small
- for arrays composed of a large number of sensors
- when convergence speed becomes a critical parameter in spite of the computational load

In a forthcoming paper we will present a fully detailed computational balance of both methods, obtaining expressions for the excess error introduced in each case.

8. REFERENCES

[1] Gooch, R.P. and Lundell, J.D., "The CM array: an adaptive beamformer for constant modulus signals", Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Tokyo, pp. 2523-2526, Apr. 1986.

[2] Godard, D.N., "Self-recovering equalization and carrier tracking in two-dimensional data communication systems", IEEE Transactions on Communications, vol. COM-28, pp. 1867-1875, Nov. 1980.

[3] Hilal, K. and Duhamel, P., "A convergence study of the constant modulus algorithm leading to a normalized CMA and a block-normalized CMA", Proceedings of the European Signal Processing Conference, Brussels, Belgium, pp. 135-138, Aug. 1992.

[4] Yang, J.F. and Kaveh, M., "Adaptive eigensubspace algorithms for direction or frequency estimation and tracking", IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-36, pp. 241-251, Feb. 1988.

[5] Rude, M.J. and Griffiths, L.J., "A linearly constrained adaptive algorithm for constant modulus signal processing", Proceedings of the European Signal Processing Conference, vol. I, pp. 237-240, Barcelona, Sep. 1990.

