ESTIMATION-BASED ADAPTIVE FILTERING AND CONTROL

a dissertation

submitted to the department of electrical engineering

and the committee on graduate studies

of stanford university

in partial fulfillment of the requirements

for the degree of

doctor of philosophy

Bijan Sayyar-Rodsari

July 1999

© Copyright by Bijan Sayyar-Rodsari 1999

All Rights Reserved


I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Professor Jonathan How (Principal Adviser)

I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Professor Thomas Kailath

I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Dr. Babak Hassibi

I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Professor Carlo Tomasi

Approved for the University Committee on Graduate Studies:


Abstract

Adaptive systems have been used in a wide range of applications for almost four decades. Examples include adaptive equalization, adaptive noise-cancellation, adaptive vibration isolation, adaptive system identification, and adaptive beam-forming. It is generally known that the design of an adaptive filter (controller) is a difficult nonlinear problem for which good systematic synthesis procedures are still lacking. Most existing design methods (e.g., FxLMS, Normalized-FxLMS, and FuLMS) are ad hoc in nature and do not provide a guaranteed performance level. Systematic analysis of the existing adaptive algorithms is also difficult: in most cases, addressing even the fundamental question of stability requires simplifying assumptions (such as slow adaptation, or the negligible contribution of the nonlinear/time-varying components of signals) which at the very least limit the scope of the analysis to the particular problem at hand.

This thesis presents a new estimation-based synthesis and analysis procedure for adaptive “Filtered” LMS problems. This new approach formulates the adaptive filtering (control) problem as an H∞ estimation problem, and updates the adaptive weight vector according to the state estimates provided by an H∞ estimator. This estimator is proved to be always feasible. Furthermore, the special structure of the problem is used to reduce the usual Riccati recursion for the state-estimate update to a simpler Lyapunov recursion. The new adaptive algorithm (referred to as the estimation-based adaptive filtering (EBAF) algorithm) has provable performance, follows a simple update rule, and, unlike previous methods, readily extends to multi-channel systems and problems with feedback contamination. A clear connection between the limiting behavior of the EBAF algorithm and the classical FxLMS (Normalized-FxLMS) algorithm is also established in this thesis.

Applications of the proposed adaptive design method are demonstrated in an Active Noise Cancellation (ANC) context. First, experimental results are presented for narrow-band and broad-band noise cancellation in a one-dimensional acoustic duct. In comparison to other conventional adaptive noise-cancellation methods (FxLMS in the FIR case and FuLMS in the IIR case), the proposed method shows much faster convergence and improved steady-state performance. Moreover, the proposed method is shown to be robust to feedback contamination while conventional methods can go unstable. As a second application, the proposed adaptive method was used for vibration isolation in a 3-input/3-output Vibration Isolation Platform. Simulation results demonstrate improved performance over a multi-channel implementation of the FxLMS algorithm. These results indicate that the approach works well in practice. Furthermore, the theoretical results in this thesis are quite general and can be applied to many other applications, including adaptive equalization and adaptive identification.


Acknowledgements

This thesis has greatly benefited from the efforts and support of many people, whom I would like to thank. First, I would like to thank my principal adviser, Professor Jonathan How. This research would not have been possible without Professor How’s insights, enthusiasm, and constant support throughout the project. I appreciate his attention to detail and the clarity that he brought to our presentations and writings. I would also like to acknowledge the help and support of Dr. Alain Carrier from Lockheed Martin’s Advanced Technology Center. His careful reading of all the manuscripts and reports, his provocative questions, and his dedication to meaningful research have greatly influenced this work. I would like to gratefully acknowledge the members of my defense and reading committee, Professor Thomas Kailath, Professor Carlo Tomasi, and Dr. Babak Hassibi. It was in a class taught by Professor Kailath and Dr. Hassibi that the main concept of this thesis originated, and it is their research that this thesis builds on. It is impossible to exaggerate the importance of Dr. Hassibi’s contributions to this thesis. He has been a great friend and advisor throughout this work, for which I am truly thankful.

My thanks also go to Professor Robert Cannon and Professor Steve Rock for giving me the opportunity to interact with wonderful friends in the Aerospace Robotics Laboratory. The help of ARL graduates Gordon Hunt, Steve Ims, Stef Sonck, Howard Wang, and Kurt Zimmerman was crucial in the early stages of the research at Lockheed. I have also benefited from interesting discussions with fellow ARL students Andreas Huster, Kortney Leabourne, Andrew Robertson, Heidi Schubert, and Bruce Woodley, on both technical and non-technical issues. I am forever thankful for their invaluable friendship and support. I also acknowledge the camaraderie of more recent ARL members Tobe Corazzini, Steve Fleischer, Eric Frew, Gokhan Inalhan, Hank Jones, Bob Kindel, Ed LeMaster, Mel Ni, Eric Prigge, and Luis Rodrigues.

I discussed all aspects of this thesis in great detail with Arash Hassibi. He helped me more than I can thank him for. Lin Xiao and Hong S. Bae set up the hardware for noise cancellation and helped me in all the experiments; I appreciate all their assistance. Thomas Pare, Haitham Hindi, and Miguel Lobo provided helpful comments on the research. I also acknowledge the assistance of fellow ISL students Alper Erdogan, Maryam Fazel, and Ardavan Maleki. I would also like to name two old friends, Khalil Ahmadpour and Mehdi Asheghi, whose friendship I gratefully value.

I owe an immeasurable amount of gratitude to my parents, Hossein and Salehe, my sister, Mojgan, and my brother, Bahman, for their support throughout the numerous ups and downs that I have experienced. Finally, my sincere thanks go to my wife, Samaneh, for her gracious patience and strength. I am sure they all agree with me in dedicating this thesis to Khalil.


Contents

Abstract iv

Acknowledgements vi

List of Figures xii

1 Introduction 1

1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.3 An Overview of Adaptive Filtering (Control) Algorithms . . . . . . . 6

1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.5 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2 Estimation-Based Adaptive FIR Filter Design 14

2.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.2 EBAF Algorithm - Main Concept . . . . . . . . . . . . . . . . . . . 16

2.3 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.3.1 H2 Optimal Estimation . . . . . . . . . . . . . . . . . . . . . 19

2.3.2 H∞ Optimal Estimation . . . . . . . . . . . . . . . . . . . . . 20

2.4 H∞-Optimal Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.4.1 γ-Suboptimal Finite Horizon Filtering Solution . . . . . . . . 21

2.4.2 γ-Suboptimal Finite Horizon Prediction Solution . . . . . . . 22

2.4.3 The Optimal Value of γ . . . . . . . . . . . . . . . . . . . . . 23

2.4.3.1 Filtering Case . . . . . . . . . . . . . . . . . . . . . 23


2.4.3.2 Prediction Case . . . . . . . . . . . . . . . . . . . . 27

2.4.4 Simplified Solution Due to γ = 1 . . . . . . . . . . . . . . . . 29

2.4.4.1 Filtering Case: . . . . . . . . . . . . . . . . . . . . . 29

2.4.4.2 Prediction Case: . . . . . . . . . . . . . . . . . . . . 30

2.5 Important Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2.6 Implementation Scheme for EBAF Algorithm . . . . . . . . . . . . . 32

2.7 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

2.7.1 Effect of Initial Condition . . . . . . . . . . . . . . . . . . . . 35

2.7.2 Effect of Practical Limitation in Setting y(k) to s(k|k) (s(k)) 36

2.8 Relationship to the Normalized-FxLMS/FxLMS Algorithms . . . . . 38

2.8.1 Prediction Solution and its Connection to the FxLMS Algorithm . . . 38

2.8.2 Filtering Solution and its Connection to the Normalized-FxLMS Algorithm . . . 40

2.9 Experimental Data & Simulation Results . . . . . . . . . . . . . . . 41

2.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

3 Estimation-Based Adaptive IIR Filter Design 58

3.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

3.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 61

3.2.1 Estimation Problem . . . . . . . . . . . . . . . . . . . . . . . 63

3.3 Approximate Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.3.1 γ-Suboptimal Finite Horizon Filtering Solution to the Linearized Problem . . . 66

3.3.2 γ-Suboptimal Finite Horizon Prediction Solution to the Linearized Problem . . . 66

3.3.3 Important Remarks . . . . . . . . . . . . . . . . . . . . . . . 66

3.4 Implementation Scheme for the EBAF Algorithm in IIR Case . . . . 67

3.5 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

3.6 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

3.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77


4 Multi-Channel Estimation-Based Adaptive Filtering 78

4.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.1.1 Multi-Channel FxLMS Algorithm . . . . . . . . . . . . . . . 79

4.2 Estimation-Based Adaptive Algorithm for the Multi-Channel Case . . . 81

4.2.1 H∞-Optimal Solution . . . . . . . . . . . . . . . . . . . . . . 85

4.3 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

4.3.1 Active Vibration Isolation . . . . . . . . . . . . . . . . . . . . 86

4.3.2 Active Noise Cancellation . . . . . . . . . . . . . . . . . . . . 89

4.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

5 Adaptive Filtering via Linear Matrix Inequalities 104

5.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

5.2 LMI Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

5.2.1 Including H2 Constraints . . . . . . . . . . . . . . . . . . . . 110

5.3 Adaptation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 111

5.4 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

5.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

6 Conclusion 121

6.1 Summary of the Results and Conclusions . . . . . . . . . . . . . . . 121

6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

A Algebraic Proof of Feasibility 126

A.1 Feasibility of γf = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

B Feedback Contamination Problem 128

C System Identification for Vibration Isolation Platform 132

C.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

C.2 Identified Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

C.2.1 Data Collection Process . . . . . . . . . . . . . . . . . . . . . 133

C.2.2 Consistency of the Measurements . . . . . . . . . . . . . . . . 134

C.2.3 System Identification . . . . . . . . . . . . . . . . . . . . . . 137


C.2.4 Control design model analysis . . . . . . . . . . . . . . . . . . 140

C.3 FORSE algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

Bibliography 155


List of Figures

1.1 General block diagram for an FIR Filter . . . 13

1.2 General block diagram for an IIR Filter . . . . . . . . . . . . . . . . . . . 13

2.1 General block diagram for an Active Noise Cancellation (ANC) problem . . . 46

2.2 A standard implementation of the FxLMS algorithm . . . 47

2.3 Pictorial representation of the estimation interpretation of the adaptive control problem: Primary path is replaced by its approximate model . . . 47

2.4 Block diagram for the approximate model of the primary path . . . 48

2.5 Schematic diagram of one-dimensional air duct . . . 48

2.6 Transfer function plots from Speakers #1 & #2 to Microphone #1 . . . 49

2.7 Transfer function plots from Speakers #1 & #2 to Microphone #2 . . . 49

2.8 Validation of simulation results against experimental data for the noise cancellation problem with a single-tone primary disturbance at 150 Hz. The primary disturbance is known to the adaptive algorithm. The controller is turned on at t ≈ 3 seconds. . . . 50

2.9 Experimental data for the EBAF algorithm of length 4, when a noisy measurement of the primary disturbance (a single-tone at 150 Hz) is available to the adaptive algorithm (SNR = 3). The controller is turned on at t ≈ 5 seconds. . . . 51

2.10 Experimental data for the EBAF algorithm of length 8, when a noisy measurement of the primary disturbance (a multi-tone at 150 and 180 Hz) is available to the adaptive algorithm (SNR = 4.5). The controller is turned on at t ≈ 6 seconds. . . . 52

2.11 Experimental data for the EBAF algorithm of length 16, when a noisy measurement of the primary disturbance (a band-limited white noise) is available to the adaptive algorithm (SNR = 4.5). The controller is turned on at t ≈ 5 seconds. . . . 53

2.12 Simulation results for the performance comparison of the EBAF and (N)FxLMS algorithms. For 0 ≤ t ≤ 5 seconds, the controller is off. For 5 < t ≤ 20 seconds both adaptive algorithms have full access to the primary disturbance (a single-tone at 150 Hz). For t ≥ 20 seconds the measurement of Microphone #1 is used as the reference signal (hence the feedback contamination problem). The length of the FIR filter is 24. . . . 54

2.13 Simulation results for the performance comparison of the EBAF and (N)FxLMS algorithms. For 0 ≤ t ≤ 5 seconds, the controller is off. For 5 < t ≤ 40 seconds both adaptive algorithms have full access to the primary disturbance (a band-limited white noise). For t ≥ 40 seconds the measurement of Microphone #1 is used as the reference signal (hence the feedback contamination problem). The length of the FIR filter is 32. . . . 55

2.14 Closed-loop transfer function based on the steady-state performance of the EBAF and (N)FxLMS algorithms in the noise cancellation problem of Figure 2.13. . . . 56

3.1 General block diagram for the adaptive filtering problem of interest (with Feedback Contamination) . . . 72

3.2 Basic Block Diagram for the Feedback Neutralization Scheme . . . 72

3.3 Basic Block Diagram for the Classical Adaptive IIR Filter Design . . . 73

3.4 Estimation Interpretation of the IIR Adaptive Filter Design . . . 73

3.5 Approximate Model for the Unknown Primary Path . . . 74

3.6 Performance Comparison for EBAF and FuLMS Adaptive IIR Filters for Single-Tone Noise Cancellation. The controller is switched on at t = 1 second. For 1 ≤ t ≤ 6 seconds the adaptive algorithm has full access to the primary disturbance. For t ≥ 6 the output of Microphone #1 is used as the reference signal (hence the feedback contamination problem). . . . 75

3.7 Performance Comparison for EBAF and FuLMS Adaptive IIR Filters for Multi-Tone Noise Cancellation. The controller is switched on at t = 1 second. For 1 ≤ t ≤ 6 seconds the adaptive algorithm has full access to the primary disturbance. For t ≥ 6 the output of Microphone #1 is used as the reference signal (hence the feedback contamination problem). . . . 76

4.1 General block diagram for a multi-channel Active Noise Cancellation (ANC) problem . . . 91

4.2 Pictorial representation of the estimation interpretation of the adaptive control problem: Primary path is replaced by its approximate model . . . 91

4.3 Approximate Model for Primary Path . . . 92

4.4 Vibration Isolation Platform (VIP) . . . 92

4.5 A detailed drawing of the main components in the Vibration Isolation Platform (VIP). Of particular importance are: (a) the platform supporting the middle mass (labeled as component #5), (b) the middle mass that houses all six actuators, of which only two (one control actuator and one disturbance actuator) are shown (labeled as component #11), and (c) the suspension springs that counter gravity (labeled as component #12). Note that the actuation point for the control actuator (located on the left of the middle mass) is colocated with the load cell (marked as LC1). The disturbance actuator (located on the right of the middle mass) actuates against the inertial frame. . . . 93

4.6 SVD of the MIMO transfer function . . . 94

4.7 Performance of a multi-channel implementation of the EBAF algorithm when the disturbance actuators are driven by out-of-phase sinusoids at 4 Hz. The reference signal available to the adaptive algorithm is contaminated with band-limited white noise (SNR = 3). The control signal is applied for t ≥ 30 seconds. . . . 95

4.8 Performance of a multi-channel implementation of the FxLMS algorithm when the simulation scenario is identical to that in Figure 4.7. . . . 96

4.9 Performance of a multi-channel implementation of the EBAF algorithm when the disturbance actuators are driven by out-of-phase multi-tone sinusoids at 4 and 15 Hz. The reference signal available to the adaptive algorithm is contaminated with band-limited white noise (SNR = 4.5). The control signal is applied for t ≥ 30 seconds. . . . 97

4.10 Performance of a multi-channel implementation of the FxLMS algorithm when the simulation scenario is identical to that in Figure 4.9. . . . 98

4.11 Performance of a multi-channel implementation of the EBAF for vibration isolation when the reference signals are the load cell outputs (i.e., feedback contamination exists). The control signal is applied for t ≥ 30 seconds. . . . 99

4.12 Performance of the multi-channel noise cancellation in the acoustic duct for a multi-tone primary disturbance at 150 and 200 Hz. The control signal is applied for t ≥ 2 seconds. . . . 100

4.13 Performance of the multi-channel noise cancellation in the acoustic duct when the primary disturbance is a band-limited white noise. The control signal is applied for t ≥ 2 seconds. . . . 101

4.14 Closed-loop vs. open-loop transfer functions for the steady-state performance of the EBAF algorithm for the simulation scenario shown in Figure 4.13. . . . 102

5.1 General block diagram for an Active Noise Cancellation (ANC) problem . . . 115

5.2 Cancellation Error at Microphone #1 for a Single-Tone Primary Disturbance . . . 116

5.3 Typical Elements of Adaptive Filter Weight Vector for Noise Cancellation Problem in Fig. 5.2 . . . 117

5.4 Cancellation Error at Microphone #1 for a Multi-Tone Primary Disturbance . . . 118

5.5 Typical Elements of Adaptive Filter Weight Vector for Noise Cancellation Problem in Fig. 5.4 . . . 119

B.1 Block diagram of the approximate model for the primary path in the presence of the feedback path . . . 131

C.1 Magnitude of the scaling factor relating the load cell’s reading of the effect of the control actuators to that of the scoring sensor . . . 144

C.2 Magnitude of the scaling factor relating the load cell’s reading of the effect of the disturbance actuators to that of the scoring sensor . . . 145

C.3 Magnitude of the scaling factor relating the load cell’s reading of the effect of the control actuators to that of the scoring sensor after diagonalization . . . 146

C.4 Magnitude of the scaling factor relating the load cell’s reading of the effect of the disturbance actuators to that of the scoring sensor after diagonalization . . . 147

C.5 Comparison of SVD plots for the transfer function to the scaled/double-integrated load cell data . . . 148

C.6 Comparison of SVD plots for the transfer function to the actual load cell data . . . 148

C.7 Comparison of SVD plots for the transfer function to the scoring sensors . . . 149

C.8 Comparison of SVD plots for the transfer function to the position sensors colocated with the control actuators . . . 149

C.9 Comparison of SVD plots for the transfer function to the position sensors colocated with the disturbance actuators . . . 150

C.10 The identified model for the system beyond the frequency range for which measurements are available . . . 151

C.11 The final model for the system beyond the frequency range for which measurements are available . . . 152

C.12 The comparison of the closed-loop and open-loop singular value plots when the controller is used to close the loop on the identified model . . . 153

C.13 The comparison of the closed-loop and open-loop singular value plots when the controller is used to close the loop on the real measured data . . . 154


Chapter 1

Introduction

This dissertation presents a new estimation-based procedure for the systematic synthesis and analysis of adaptive filters (controllers) in “Filtered” LMS problems. This new approach uses an estimation interpretation of the adaptive filtering (control) problem to formulate an equivalent estimation problem. The adaptation criterion for the adaptive weight vector is extracted from the H∞ solution to this estimation problem. The new algorithm, referred to as Estimation-Based Adaptive Filtering (EBAF), applies to both Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) adaptive filters.

1.1 Motivation

The Least-Mean-Squares (LMS) adaptive algorithm [51] has been the centerpiece of a wide variety of adaptive filtering techniques for almost four decades. Its straightforward derivation and the simplicity of its implementation (especially at a time of limited computational power) encouraged experiments with the algorithm in a diverse range of applications (e.g., see [51, 33]). In some applications, however, the simple implementation of the LMS algorithm was found to be inadequate. Subsequent attempts to overcome its shortcomings have produced a large number of innovative solutions that have been successful in practice. Commonly used algorithms such as normalized LMS, correlation LMS [47], leaky LMS [21], variable-step-size LMS [25], and Filtered-X LMS [35] are the outcome of such efforts. These algorithms use the instantaneous squared error to estimate the mean-square error, and often assume slow adaptation to allow for the necessary linear operations in their derivation (see Chapters 2 and 3 in [33], for instance). As Reference [2] points out:

“Many of the algorithms and approaches used are of an ad hoc nature; the tools are gathered from a wide range of fields; and good systematic approaches are still lacking.”

Introducing a systematic procedure for the synthesis of adaptive filters is one of the main goals of this thesis.
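As a concrete point of reference for the discussion above, the basic LMS recursion and its normalized variant can be sketched in a few lines. This is a generic illustration of the classical updates; the signal names and the toy identification setup below are chosen here for exposition and are not taken from any of the cited references:

```python
import numpy as np

def lms_step(w, x, d, mu):
    """One LMS update: w <- w + mu*e*x, with a priori error e = d - w^T x."""
    e = d - w @ x                     # instantaneous error
    return w + mu * e * x, e

def nlms_step(w, x, d, mu, eps=1e-8):
    """Normalized LMS: the step size is divided by the input energy,
    making convergence insensitive to the scaling of the input."""
    e = d - w @ x
    return w + (mu / (eps + x @ x)) * e * x, e

# Toy use: identify a 4-tap FIR system from noisy input/output data.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros(4)
for _ in range(2000):
    x = rng.standard_normal(4)                      # regressor (tap vector)
    d = w_true @ x + 0.01 * rng.standard_normal()   # noisy desired signal
    w, e = nlms_step(w, x, d, mu=0.5)
```

The Filtered-X variants discussed later differ in that the regressor is first passed through a model of the secondary path before entering the update, which is precisely what complicates their analysis.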

Parallel to the efforts on the practical application of LMS-based adaptive schemes, there has been a concerted effort to analyze these algorithms. Of pioneering importance are the results in Refs. [50] and [23]. Reference [50] considers adaptation with LMS on stationary stochastic processes, and finds the optimal solution to which the expected value of the weight vector converges. For sinusoidal inputs, however, the discussion in [50] does not apply. In [23] it is shown that for sinusoidal inputs, when the time-varying component of the adaptive filter output is small compared to its time-invariant component (see [23], page 486), the adaptive LMS filter can be approximated by a linear time-invariant transfer function. Reference [13] extends the approach in [23] to derive an equivalent transfer function for the Filtered-X LMS adaptive algorithm (provided the conditions required in [23] still apply). The equivalent transfer function is then used to analytically derive an expression for the optimum convergence coefficients. A frequency-domain model of the so-called filtered LMS algorithm (i.e., an algorithm in which the input or the output of the adaptive filter, or the feedback error signal, is linearly filtered prior to use in the adaptive algorithm) is discussed in [17]. The frequency-domain model in [17] decouples the inputs into disjoint frequency bins and places a single-frequency adaptive noise canceler on each bin. The analysis in their work utilizes the frequency-domain LMS algorithm [11] and assumes time-invariant linear behavior for the filter. Other important aspects of adaptive filters have also been extensively studied. The effect of the modeling error on the convergence and performance properties of LMS-based adaptive algorithms (e.g., [17, 7]), and the tracking behavior of the LMS adaptive algorithm when the adaptive filter is tuned to follow a linear chirp signal buried in white noise [5, 6], are examples of these studies∗. In summary, existing analysis techniques are often suitable for analyzing only one particular aspect of the behavior of an adaptive filter (e.g., its steady-state behavior). Furthermore, the validity of the analysis relies on certain assumptions (e.g., slow convergence, and/or the negligible contribution of the nonlinear/time-varying component of the adaptive filter output) that can be quite restrictive. Providing a solid framework for the systematic analysis of adaptive filters is another main goal of this thesis.

The reason for the difficulty experienced in both the synthesis and analysis of adaptive algorithms is best explained in Reference [37]:

“It is now generally realized that adaptive systems are special classes of nonlinear systems . . . general methods for the analysis and synthesis of nonlinear systems do not exist since conditions for their stability can be established only on a system by system basis.”

This thesis introduces a new framework for the synthesis and analysis of adaptive filters (controllers) by providing an estimation interpretation of the above-mentioned “nonlinear” adaptive filtering (control) problem. The estimation interpretation replaces the original adaptive filtering (control) synthesis with an equivalent estimation problem, the solution of which is used to update the weight vector in the adaptive filter (hence the name estimation-based adaptive filtering). This approach is applicable (due to its systematic nature) to both FIR and IIR adaptive filters (controllers). In the FIR case the equivalent estimation problem is linear, and hence exact solutions are available. Stability, performance bounds, and the transient behavior of adaptive FIR filters are thus precisely addressed in this framework. In the IIR case, however, only an approximate solution to the equivalent estimation problem is available, and hence the proposed estimation-based framework serves as a reasonable heuristic for the systematic design of adaptive IIR filters. This approximate solution, however, is based on realistic assumptions, and the adaptive algorithm maintains its systematic structure. Furthermore, the treatment of feedback contamination (see Chapter 3 for a precise definition) is virtually identical to that of adaptive IIR filters. The proposed estimation-based approach is particularly appealing if one considers the difficulty with the existing design techniques for adaptive IIR filters, and the complexity of available solutions to feedback contamination (e.g., see [33]).

∗The survey here is intended to provide a flavor of the type of problems that have captured the attention of researchers in the field. The sheer volume of the literature makes subjective selection of the references unavoidable.

1.2 Background

The development of the new estimation-based framework is based on recent results in robust estimation. Following the pioneering work in [52], the H∞ approach to robust control theory produced solutions [12, 24] that were designed to meet some performance criterion in the face of limited knowledge of the exogenous disturbances and imperfect system models. Further work in robust control and estimation (see [32, 46] and the references therein) produced straightforward solutions that allowed in-depth studies of the properties of robust controllers/estimators. The main idea in H∞ estimation is to design an estimator that bounds (in the optimum case, minimizes) the maximum energy gain from the disturbances to the estimation errors. Such a solution guarantees that for disturbances with bounded energy, the energy of the estimation error will be bounded as well. In the case of an optimal solution, an H∞-optimal estimator will guarantee that the energy of the estimation error for the worst-case disturbance is indeed minimized [28].
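In generic notation (the symbols below are illustrative, not tied to any particular reference): if ŝ(k) denotes the estimate of a desired quantity s(k), x₀ the unknown initial state with prior weight Π₀ > 0, and v the measurement disturbance, a γ-level H∞ estimator guarantees a bound of the form

```latex
\sup_{x_0,\; v \in \ell_2,\; (x_0, v) \neq 0}
\frac{\sum_{k=0}^{N} \left\| s(k) - \hat{s}(k) \right\|^2}
     {(x_0 - \hat{x}_0)^{*}\, \Pi_0^{-1}\, (x_0 - \hat{x}_0)
      + \sum_{k=0}^{N} \left\| v(k) \right\|^2}
\;\le\; \gamma^2 .
```

An H∞-optimal estimator attains the smallest achievable γ; bounded-energy disturbances then yield bounded-energy estimation errors, which is exactly the guarantee described above.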

Of crucial importance for the work in this thesis, is the result in [26] where the H∞-

optimality of the LMS algorithm was established. Note that despite a long history

of successful applications, prior to the work in [26], the LMS algorithm was regarded

as an approximate recursive solution to the least-squares minimization problem. The

work in [26] showed that instead of being an approximate solution to an H2 minimiza-

tion, the LMS algorithm is the exact solution to a minmax estimation problem. More

1.2. BACKGROUND 5

specifically, Ref. [26] proved that the LMS adaptive filter is the central a priori H∞-

optimal filter. This result established a fundamental connection between an adaptive

control algorithm (LMS algorithm in this case), and a robust estimation problem.
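This connection can be checked numerically. The following sketch runs the LMS recursion on a toy regression problem and verifies the H∞ bound: with a step size satisfying μ‖h(k)‖² < 1, the energy of the a priori prediction errors never exceeds the (weighted) disturbance energy. All numerical values below are illustrative choices, not taken from [26].

```python
import numpy as np

# Numerical check of the H-infinity bound for LMS: with mu * ||h_k||^2 < 1,
# the a priori prediction-error energy never exceeds the disturbance energy.
rng = np.random.default_rng(1)
T, mu = 500, 0.2                        # mu < 1/2 <= 1/||h_k||^2 for h_k in [-1,1]^2
w = np.array([0.5, -0.3])               # unknown "true" weight vector (illustrative)
w_hat = np.zeros(2)                     # initial guess
num = 0.0
den = (1.0 / mu) * np.sum((w - w_hat) ** 2)   # mu^{-1} |w - w_hat(0)|^2
for _ in range(T):
    h = rng.uniform(-1.0, 1.0, 2)       # regressor, ||h||^2 <= 2
    v = 0.1 * rng.standard_normal()     # arbitrary disturbance
    d = h @ w + v                       # measured output
    num += (h @ (w - w_hat)) ** 2       # a priori prediction-error energy
    den += v ** 2
    w_hat = w_hat + mu * h * (d - h @ w_hat)   # LMS update
ratio = num / den                       # the H-infinity result gives ratio <= 1
```

Because the worst-case gain is exactly one, the ratio stays below unity for every disturbance realization, which is precisely the minmax optimality established in [26].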

Inspired by the analysis in [26], this thesis introduces an estimation interpretation of

a far more general adaptive filtering problem, and develops a systematic procedure for

the synthesis of adaptive filters based on this interpretation. The class of problems

addressed in this thesis, commonly known as “Filtered” LMS [17], encompasses a wide range of adaptive filtering/control applications [51,33], and has been the subject of extensive research over the past four decades. Nevertheless, the viewpoint taken in this thesis not only provides a systematic alternative to some widely used adaptive

filtering (control) algorithms (such as FxLMS and FuLMS) with superior transient

and steady-state behavior, but it also presents a new framework for their analysis.

More specifically, this thesis proves that the fundamental connection between adap-

tive filtering (control) algorithms and robust estimation extends to the more general

setting of adaptive filtering (control) problems, and shows that the convergence, sta-

bility, and performance of these classical adaptive algorithms can be systematically

analyzed as robust estimation questions.

The systematic nature of the proposed estimation-based approach enables an al-

ternative formulation for the adaptive filtering (control) problem using Linear Matrix

Inequalities (LMIs), the ramifications of which will be discussed in Chapter 5. Several

researchers (see [18] and references therein) in the past few years have shown that

elementary manipulations of linear matrix inequalities can be used to derive less restrictive alternatives to the now-classical state-space Riccati-based solution to the H∞ control problem [12]. Even though the computational complexity of the LMI-based

solution remains higher than that of solving the Riccati equation, there are three main

reasons that justify such a formulation [19]: (a) a variety of design specifications and

constraints can be expressed as LMIs, (b) problems formulated as LMIs can be solved

exactly by efficient convex optimization techniques, and (c) for the cases that lack

analytical solutions such as mixed H2/H∞ design objectives (see [4], [32] and [45] and

references therein), the LMI formulation of the problem remains tractable (i.e. LMI-

solvers are viable alternatives to analytical solutions in such cases). As will be seen


in Chapter 5, the LMI framework provides the machinery required for the synthesis

of a robust adaptive filter in the presence of modeling uncertainty.
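As a minimal illustration of the kind of feasibility problem that sits behind such formulations, the sketch below produces a Lyapunov certificate for a discrete-time system: it finds P > 0 with AᵀPA − P < 0 by solving the Lyapunov equation AᵀPA − P = −Q with a simple fixed-point iteration. The matrix A is a hypothetical Schur-stable example; a general LMI with free structure would instead be handed to a convex-optimization solver, as discussed in Chapter 5.

```python
import numpy as np

# The simplest LMI in this family is the discrete-time Lyapunov inequality:
# find P > 0 with A^T P A - P < 0.  For a fixed Schur-stable A, a feasible P
# is obtained by solving A^T P A - P = -Q for some Q > 0.
A = np.array([[0.5, 0.2],
              [0.0, 0.7]])              # eigenvalues 0.5, 0.7: inside unit circle
Q = np.eye(2)
P = Q.copy()
for _ in range(500):                    # fixed-point iteration P <- Q + A^T P A
    P = Q + A.T @ P @ A                 # converges because A is stable
```

The resulting P is a Lyapunov function certificate for the stability of x(k+1) = A x(k).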

1.3 An Overview of Adaptive Filtering (Control)

Algorithms

To put this thesis in perspective, this section provides a brief overview of the vast

literature on adaptive filtering (control). Reference [36] recognizes 1957 as the year

of the formal introduction of the term “adaptive system” into the control literature.

By then, interest in filtering and control theory had shifted toward increasingly complex systems with poorly characterized (possibly time-varying) models for system dynamics and disturbances, and the concept of “adaptation” (borrowed from living systems) seemed to carry the potential for solving these increasingly complex control problems. The exact definition of “adaptation” and its distinction from

“feedback”, however, is the subject of long-standing discussion (e.g. see [2,36,29]).

Qualitatively speaking, an adaptive system is a system that can modify its behavior

in response to changes in the dynamics of the system or disturbances through some

recursive algorithm. As a direct consequence of this recursive algorithm (in which

the parameters of the adaptive system are adjusted using input/output data), an

adaptive system is a “nonlinear” device.

The development of adaptive algorithms has been pursued from a variety of viewpoints. Different classifications of adaptive algorithms (such as direct versus indirect

adaptive control, model reference versus self-tuning adaptation) in the literature re-

flect this diversity [2,51,29]. For the purpose of this thesis, two distinct approaches for

deriving recursive adaptive algorithms can be identified: (a) stochastic gradient approaches, which include LMS and LMS-based adaptive algorithms, and (b) least-squares estimation approaches, which include the adaptive recursive least-squares (RLS) algorithm.

The central idea in the former approach is to define an appropriate cost function

that captures the success of the adaptation process, and then change the adaptive


filter parameters to reduce the cost function according to the method of steepest de-

scent. This requires the use of a gradient vector (hence the name), which in practice

is approximated using instantaneous data. Chapter 2 provides a detailed description

of this approach for the problem of interest in this thesis. The latter approach to

the design of adaptive filters is based on the method of least squares. This approach

closely corresponds to Kalman filtering. Ref. [44] provides a unifying state-space ap-

proach to adaptive RLS filtering. The main focus in this thesis, however, is on the

LMS-based adaptive algorithms.

Since adaptive algorithms can successfully operate in a poorly known environment,

they have been used in a diverse range of applications, including communications (e.g. [34,41]), process control (e.g. [2]), seismology (e.g. [42]), and biomedical engineering (e.g. [51]). Despite the diversity of these applications, different implementations of

adaptive filtering (control) share one basic common feature [29]: “an input vector and

a desired response are used to compute an estimation error, which is in turn used to

control the values of a set of adjustable filter coefficients.” Reference [29] distinguishes

four main classes of adaptive filtering applications based on the way the desired

signal is defined in the formulation of the problem: (a) identification: in this class of

applications an adaptive filter is used to provide a linear model for an unknown plant.

The plant and the adaptive filter are driven by the same input, and the output of the

plant is the desired response that the adaptive filter tries to match. (b) inverse modeling:

here the adaptive filter is placed in series with an unknown (perhaps noisy) plant, and

the desired signal is simply a delayed version of the plant input. Ideally, the adaptive

filter converges to the inverse of the unknown plant. Adaptive equalization (e.g. [40])

is an important application in this class. (c) prediction: the desired signal in this case

is the current value of a random signal, while past values of the random signal provide

the input to the adaptive filter. Signal detection is an important application in this

class. (d) interference canceling: here the adaptive filter uses a reference signal (provided

as input to the adaptive filter) to cancel unknown interference contained in a primary

signal. Adaptive noise cancellation, echo cancellation, and adaptive beam-forming

are applications that fall in this last class. The estimation-based adaptive filtering

algorithm in this thesis is presented in the context of adaptive noise cancellation, and


therefore a detailed discussion of the fourth class of adaptive filtering problems is

provided in Chapter 2.

There are several main structures for the implementation of adaptive filters (con-

trollers). The structure of the adaptive filter is known to affect its performance,

computational complexity, and convergence. In this thesis, the two most commonly

used structures for adaptive filters (controllers) are considered. The finite impulse

response (FIR) transversal filter (see Fig. 1.1) is the structure upon which the estimation-based adaptive filtering algorithm is primarily presented. The transversal filter consists of three basic elements: (a) the unit-delay element, (b) the multiplier, and (c) the adder, and contains feedforward paths only. The number of unit-delays specifies the length of the adaptive FIR filter. Multipliers weight the

delayed versions of some reference signal, which are then added in the adder(s). The

impulse response of this filter is of finite duration (hence the name), and its transfer function contains only zeros (all poles are at the origin of the z-plane). Therefore, there is no question

of stability for the open-loop behavior of the FIR filter. The infinite-duration impulse

response (IIR) structure is shown in Figure 1.2. The feature that distinguishes the

IIR filter from an FIR filter is the inclusion of the feedback path in the structure of

the adaptive filter.
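As a minimal sketch of the transversal structure just described (with fixed, non-adaptive weights and real-valued signals), the delay line, multipliers, and adder can be written as:

```python
import numpy as np

def fir_transversal(x, w):
    """Output of an (N+1)-tap transversal filter: u(k) = sum_i w_i * x(k - i)."""
    buf = np.zeros(len(w))                 # delay line [x(k), x(k-1), ..., x(k-N)]
    u = np.zeros(len(x))
    for k in range(len(x)):
        buf = np.concatenate(([x[k]], buf[:-1]))   # shift through the unit delays
        u[k] = w @ buf                             # multipliers feeding the adder
    return u

x = np.arange(5.0)                # sample input (illustrative)
w = np.array([1.0, 0.5])          # fixed weights for illustration
u = fir_transversal(x, w)         # equals np.convolve(x, w)[:len(x)]
```

In the adaptive case the weight vector w is updated at every sample, but the signal path itself is exactly this feedforward structure.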

As mentioned earlier, for an FIR filter all poles are at the origin, and a good

approximation of the behavior of a pole, in general, can only be achieved if the length

of the FIR filter is sufficiently long. An IIR filter, ideally at least, can provide a

perfect match for a pole with only a limited number of parameters. This means

that for a desired dynamic behavior (such as resonance frequency, damping, or cutoff

frequency), the number of parameters in an adaptive IIR filter can be far fewer than

that in its FIR counterpart. The computational complexity per sample for adaptive

IIR filter design can therefore be significantly lower than that in FIR filter design.

The limited use of adaptive IIR filters (compared to the vast number of applications for FIR filters) suggests that the above-mentioned advantages come at

a certain cost. In particular, adaptive IIR filters are only conditionally stable, and

therefore some provisions are required to assure stability of the filter at each iteration.

There are solutions, such as the Schur-Cohn algorithm ([29], pages 271-273), that monitor


the stability of the IIR filter (by determining whether all roots of the denominator of

the IIR filter transfer function are inside the unit circle). This, however, requires intensive on-line calculations. Alternative implementations of adaptive IIR filters (such

as parallel implementation [48], and lattice implementation [38]) have been suggested

that provide simpler stability monitoring capabilities. The monitoring process is in-

dependent of the adaptation process here. In other words, the adaptation criteria

do not inherently reject de-stabilizing values for filter weights. The monitoring pro-

cess detects these de-stabilizing values and prevents their implementation. Another

significant problem with adaptive IIR filter design stems from the fact that the perfor-

mance surface (see [33], Chapter 3) for adaptive IIR filters is generally non-quadratic

(see [33] pages 91-94 for instance) and often contains multiple local minima. There-

fore, the weight vector may converge to only a local minimum (and hence a non-optimal cost). Furthermore, it is noted that the adaptation rate for adaptive IIR filters can be slow compared to that of FIR adaptive filters [33,31]. Early works in adaptive

IIR filtering (e.g. [16]) are for the most part extensions of Widrow’s LMS algorithm for adaptive FIR filtering [51]. More recent works include modifications to the recursive

LMS algorithm (e.g. [15]) that are devised for specific applications. In other words,

existing design techniques for adaptive IIR filters are application-specific and rely on

certain restrictive assumptions in their derivation. Our description of the Filtered-U

recursive LMS algorithm in Chapter 3 will further clarify this point. Furthermore,

as [33] points out: “The properties of an adaptive IIR filter are considerably more

complex than those of the conventional adaptive FIR filter, and consequently it is

more difficult to predict their behavior.” Thus, a framework that allows a unified

approach to the synthesis and analysis of adaptive IIR filters, and does not require

restrictive assumptions for its derivation would be extremely useful. As mentioned

earlier, this thesis provides such a framework.
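The stability test that such monitoring schemes implement can be sketched directly. The version below simply computes the denominator roots with numpy rather than using the root-free Schur-Cohn recursion, but it decides the same property; the coefficient values are illustrative.

```python
import numpy as np

def iir_is_stable(b):
    """Stability of the IIR filter of Fig. 1.2 with feedback weights b = [b_1, ..., b_N]:
    all roots of the denominator 1 - b_1 z^{-1} - ... - b_N z^{-N} must lie strictly
    inside the unit circle (the property the Schur-Cohn test decides without rooting)."""
    den = np.concatenate(([1.0], -np.asarray(b, dtype=float)))  # z^N - b_1 z^{N-1} - ...
    return bool(np.all(np.abs(np.roots(den)) < 1.0))

stable = iir_is_stable([0.5, -0.3])   # poles at |z| = sqrt(0.3) < 1
unstable = iir_is_stable([1.2])       # pole at z = 1.2, outside the unit circle
```

A stability monitor would run such a test on each candidate weight update and reject de-stabilizing values before they are applied.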

Finally, for a wide variety of applications such as equalization in wireless com-

munication channels, and active control of sound and vibration in an environment

where the effect of a number of primary sources should be canceled by a number of

control (secondary) sources, the use of a multi-channel adaptive algorithm is well jus-

tified. In general, however, variations of the LMS algorithm are not easy to extend to


multi-channel systems. Furthermore, the analysis of the performance and properties

of such multi-channel algorithms is complicated [33]. As Ref. [33] points out, in the

context of active noise cancellation, the successful implementation of multi-channel

adaptive algorithms has so far been limited to cases involving repetitive noise with a

few harmonics [39,43,49,13]. For the approach presented in this thesis, the syntheses

of single-channel and multi-channel adaptive algorithms are virtually identical. This

similarity is a direct result of the way the synthesis problem is formulated (see Chapter 4).

1.4 Contributions

In meeting the goals of this research, the following contributions have been made to

adaptive filtering and control:

1. An estimation-interpretation for adaptive “Filtered” LMS filtering (control)

problems is developed. This interpretation allows an equivalent estimation for-

mulation for the adaptive filtering (control) problem. The adaptation criterion

for the adaptive filter weight vector is extracted from the solution to this equiva-

lent estimation problem. This constitutes a systematic synthesis procedure for

adaptive filters in filtered LMS problems. The new synthesis procedure is called

Estimation-Based Adaptive Filtering (EBAF).

2. Using an H∞ criterion to formulate the “equivalent” estimation problem, this

thesis develops a new framework for the systematic analysis of Filtered LMS

adaptive algorithms. In particular, the results in this thesis extend the funda-

mental connection between the LMS adaptive algorithm and robust estimation

(i.e. H∞ optimality of the LMS algorithm [26]) to the more general setting of

filtered LMS adaptive problems.

3. For the EBAF algorithm in the FIR case:

(a) It is shown that the adaptive weight vector update can be based on the

central filtering (prediction) solution to a linear H∞ estimation problem,

the existence of which is guaranteed. It is also shown that the maximum


energy gain in this case can be minimized. Furthermore, the optimal en-

ergy gain is proved to be unity, and the conditions under which this bound

is achievable are derived.

(b) The adaptive algorithm is shown to be implementable in real-time. The

update rule requires a simple Lyapunov recursion that leads to a computa-

tional complexity comparable to that of filtered LMS adaptive algorithms

(e.g. FxLMS). The experimental data, along with extensive simulations

are presented to demonstrate the improved steady-state performance of

the EBAF algorithm (over FxLMS and Normalized-FxLMS algorithms),

as well as a faster transient response.

(c) A clear connection between the limiting behavior of the EBAF algorithm

and the existing FxLMS and Normalized-FxLMS adaptive algorithms has

been established.

4. For the EBAF algorithm in the IIR case, it is shown that the equivalent es-

timation problem is nonlinear. A linearizing approximation is then employed

that makes systematic synthesis of adaptive IIR filters tractable. The perfor-

mance of the EBAF algorithm in this case is compared to the performance of

the Filtered-U LMS (FuLMS) adaptive algorithm, demonstrating the improved

performance in the EBAF case.

5. The treatment of the feedback contamination problem is shown to be identical to

the IIR adaptive filter design in the new estimation-based framework.

6. A multi-channel extension of the EBAF algorithm demonstrates that the treat-

ment of the single-channel and multi-channel adaptive filtering (control) prob-

lems in the new estimation based framework is virtually the same. Simulation

results for the problem of vibration isolation in a 3-input/3-output vibration iso-

lation platform (VIP) demonstrate the feasibility of the EBAF algorithm in multi-channel

problems.

7. The new estimation-based framework is shown to be amenable to a Linear Ma-

trix Inequality (LMI) formulation. The LMI formulation is used to explicitly


address the stability of the overall system under the adaptive algorithm by producing a Lyapunov function. It is also shown to be an appropriate framework to address the robustness of the adaptive algorithm to modeling error or parameter uncertainty. Augmentation of an H2 performance constraint to the H∞ disturbance rejection criterion is also discussed.

1.5 Thesis Outline

The organization of this thesis is as follows. In Chapter 2, the fundamental concepts

of the estimation-based adaptive filtering (EBAF) algorithm are introduced. The

application of the EBAF approach in the case of adaptive FIR filter design is also

presented in this chapter. In Chapter 3, the extension of the EBAF approach to the

adaptive IIR filter design is discussed. A multi-channel implementation of the EBAF

algorithm is presented in Chapter 4. An LMI formulation for the EBAF algorithm is

derived in Chapter 5. Chapter 6 concludes this dissertation with a summary of the

main results and suggestions for future work. This dissertation contains three

appendices. An algebraic proof for the feasibility of the unity energy gain in the

estimation problem associated with adaptive FIR filter design (in Chapter 2) is dis-

cussed in Appendix A. The problem of feedback contamination is formally addressed

in Appendix B. A detailed discussion of the identification process is presented in Ap-

pendix C. The identified model for the Vibration Isolation Platform (VIP), used as a

test-bed for multi-channel implementation of the EBAF algorithm, is also presented

in this appendix.


Fig. 1.1: General block diagram for an FIR Filter

Fig. 1.2: General block diagram for an IIR Filter

Chapter 2

Estimation-Based adaptive FIR

Filter Design

This chapter presents a systematic synthesis procedure for H∞-optimal adaptive FIR

filters in the context of an Active Noise Cancellation (ANC) problem. An estimation

interpretation of the adaptive control problem is introduced first. Based on this inter-

pretation, an H∞ estimation problem is formulated, and its finite horizon prediction

(filtering) solutions are discussed. The solution minimizes the maximum energy gain

from the disturbances to the predicted (filtered) estimation error, and serves as the

adaptation criterion for the weight vector in the adaptive FIR filter. This thesis refers

to the new adaptation scheme as Estimation-Based Adaptive Filtering (EBAF). It

is shown in this chapter that the steady-state gain vectors in the EBAF algorithm

approach those of the classical Filtered-X LMS (Normalized Filtered-X LMS) algo-

rithm. The error terms, however, are shown to be different, thus demonstrating that

the classical algorithms can be thought of as an approximation to the new EBAF

adaptive algorithm.

The proposed EBAF algorithm is applied to an active noise cancellation problem

(both narrow-band and broad-band cases) in a one-dimensional acoustic duct. Ex-

perimental data as well as simulations are presented to examine the performance of

the new adaptive algorithm. Comparisons to the results from a conventional FxLMS

algorithm show faster convergence without compromising steady-state performance


and/or robustness of the algorithm to feedback contamination of the reference signal.

2.1 Background

This section introduces the context in which the new estimation-based adaptive fil-

tering (EBAF) algorithm will be presented. It defines the adaptive filtering problem

of interest and describes the terminology that is used in this chapter. A conventional

solution to the problem based on the FxLMS algorithm is also outlined in this sec-

tion. The discussion of key concepts of the EBAF algorithm and the mathematical

formulation of the algorithm are left to Sections 2.2 and 2.3, respectively.

Referring to Fig. 2.1, the objective in this adaptive filtering problem is to adjust

the weight vector in the adaptive FIR filter, $W(k) = [\,w_0(k)\ \ w_1(k)\ \cdots\ w_N(k)\,]^T$ ($k$ is

the discrete time index), such that the cancellation error, d(k)−y(k), is small in some

appropriate measure. Note that d(k) and y(k) are outputs of the primary path P (z)

and the secondary path S(z), respectively. Moreover,

1. n(k) is the input to the primary path,

2. x(k) is a properly selected reference signal with a non-zero correlation with the

primary input,

3. u(k) is the control signal applied to the secondary path, generated as $u(k) \triangleq [\,x(k)\ \ x(k-1)\ \cdots\ x(k-N)\,]\, W(k)$,

4. e(k) is the measured residual error available to the adaptation scheme.

Note that in typical practice, x(k) is obtained via some measurement of the primary

input. The quality of this measurement will impact the correlation between the

reference signal and the primary input. Similar to the conventional development of

the FxLMS algorithm, however, this chapter assumes perfect correlation between the

two.

The Filtered-X LMS (FxLMS) solution to this problem is shown in Figure 2.2, where perfect correlation between the primary disturbance n(k) and the reference signal x(k) is assumed [51,33]. Minimizing the instantaneous squared error, $e^2(k)$, as an approximation to the mean-square error, FxLMS follows the LMS update criterion (i.e. recursively adapting the weight vector in the negative gradient direction)

$$W(k+1) = W(k) - \frac{\mu}{2}\,\nabla e^2(k)$$
$$e(k) = d(k) - y(k) = d(k) - S(k) \oplus u(k)$$

where µ is the adaptation rate, S(k) is the impulse response of the secondary path,

and “⊕” indicates convolution. Assuming slow adaptation, the FxLMS algorithm

then approximates the instantaneous gradient in the weight vector update with

$$\nabla e^2(k) \cong -2\,\big[\,x'(k)\ \ x'(k-1)\ \cdots\ x'(k-N)\,\big]^T e(k) \triangleq -2\,h'(k)\, e(k) \quad (2.1)$$

where $x'(k) \triangleq S(k) \oplus x(k)$ represents a filtered version of the reference signal which

is available to the LMS adaptation (and hence the name (Normalized) Filtered-X

LMS). This yields the following adaptation criterion for the FxLMS algorithm

$$W(k+1) = W(k) + \mu\, h'(k)\, e(k) \quad (2.2)$$

A closely related adaptive algorithm is the one in which the adaptation rate is

normalized with the estimate of the power of the reference vector, i.e.

$$W(k+1) = W(k) + \frac{\mu\, h'(k)}{1 + \mu\, h'^*(k)\, h'(k)}\, e(k) \quad (2.3)$$

where ∗ indicates complex conjugate. This algorithm is known as the Normalized-

FxLMS algorithm.

In practice, however, only an approximate model of the secondary path (obtained

via some identification scheme) is known, and it is this approximate model that is

used to filter the reference signal. For further discussion on the derivation and analysis

of the FxLMS algorithm, please refer to [33,7].
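A minimal simulation may make the FxLMS recursion (2.2) concrete. The primary and secondary paths below are hypothetical FIR impulse responses, the step size and signal lengths are illustrative choices, and (as in the idealized derivation) the exact secondary-path model is used to generate the filtered reference x'(k):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, mu = 4000, 8, 0.05
P = np.array([0.6, -0.4, 0.2, 0.1])   # hypothetical primary-path impulse response
S = np.array([0.8, 0.3])              # hypothetical secondary-path impulse response

x = rng.standard_normal(T)            # reference signal (= primary input here)
d = np.convolve(x, P)[:T]             # primary-path output d(k)
xf = np.convolve(x, S)[:T]            # filtered reference x'(k) = S(k) convolved with x(k)

W = np.zeros(N)                       # adaptive FIR weights
xbuf = np.zeros(N)                    # [x(k), ..., x(k-N+1)]
hbuf = np.zeros(N)                    # [x'(k), ..., x'(k-N+1)]
ubuf = np.zeros(len(S))               # recent control samples
e = np.zeros(T)

for k in range(T):
    xbuf = np.concatenate(([x[k]], xbuf[:-1]))
    hbuf = np.concatenate(([xf[k]], hbuf[:-1]))
    u = W @ xbuf                      # control signal u(k)
    ubuf = np.concatenate(([u], ubuf[:-1]))
    y = S @ ubuf                      # secondary-path output y(k)
    e[k] = d[k] - y                   # residual cancellation error
    W = W + mu * hbuf * e[k]          # FxLMS update, Eq. (2.2)
```

The residual error energy decays as the cascade of the FIR filter and the secondary path converges toward the primary path, which is the behavior the EBAF development in this chapter takes as its starting point.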

2.2 EBAF Algorithm - Main Concept

The principal goal of this section is to introduce the underlying concepts of the new

EBAF algorithm. For the developments in this section, perfect correlation between


n(k) and x(k) in Fig. 2.1 is assumed (i.e. x(k) = n(k) for all k). This is the same

condition under which the FxLMS algorithm was developed. The dynamics of the

secondary path are assumed known (e.g. by system identification). No explicit model

for the primary path is needed.

As stated before, the objective in the adaptive filtering problem of Fig. 2.1 is to

generate a control signal, u(k), such that the output of the secondary path, y(k), is

“close” to the output of the primary path, d(k). To achieve this goal, for the given

reference signal x(k), the series connection of the FIR filter and the secondary path

must constitute an appropriate model for the unknown primary path. In other words,

with the adaptive FIR filter properly adjusted, the path from x(k) to d(k) must be

equivalent to the path from x(k) to y(k). Based on this observation, in Fig. 2.3 the

structure of the path from x(k) to y(k) is used to model the primary path. The

modeling error is included to account for the imperfect cancellation.

The above-mentioned observation forms the basis for an estimation interpreta-

tion of the adaptive control problem. The following outlines the main steps for this

interpretation:

1. Introduce an approximate model for the primary path based on the architecture

of the adaptive path from x(k) to y(k) (as shown in Fig. 2.3). There is an

optimal value for the weight vector in the approximate model’s FIR filter for

which the modeling error is the smallest. This optimal weight vector, however,

is not known. State-space models are used for both the FIR filter and the secondary

path.

2. In the approximate model for the primary path, use the available information to

formulate an estimation problem that recursively estimates this optimal weight

vector.

3. Adjust the weight vector of the adaptive FIR filter to the best available estimate

of the optimal weight vector.

Before formalizing this estimation-based approach, a closer look at the signals

(i.e. information) involved in Fig. 2.1 is provided. Note that e(k) = d(k) − y(k) +

Vm(k), where


a. e(k) is the available measurement.

b. Vm(k) is the exogenous disturbance that captures the effect of measurement

noise, modeling error, and the initial condition uncertainty in error measure-

ments.

c. y(k) is the output of the secondary path.

d. d(k) is the output of the primary path.

Note that unlike e(k), the signals y(k) and d(k) are not directly measurable. With

u(k) fully known, however, the assumption of a known initial condition for the sec-

ondary path leads to the exact knowledge of y(k). This assumption is relaxed later

in this chapter, where the effect of an “inexact” initial condition in the performance

of the adaptive filter is studied (Section 2.7).

The derived measured quantity that will be used in the estimation process can

now be introduced as

$$m(k) \triangleq e(k) + y(k) = d(k) + V_m(k) \quad (2.4)$$

2.3 Problem Formulation

Figure 2.4 shows a block diagram representation of the approximate model of the primary path. A state-space model, $[\,A_s(k), B_s(k), C_s(k), D_s(k)\,]$, for the secondary

path is assumed. Note that both primary and secondary paths are assumed stable.

The weight vector, $W(k) = [\,w_0(k)\ \ w_1(k)\ \cdots\ w_N(k)\,]^T$, is treated as the state vector capturing the trivial dynamics, $W(k+1) = W(k)$, that is assumed for the FIR filter. With $\theta(k)$ the state variable for the secondary path, $\xi^T(k) = \big(\,W^T(k)\ \ \theta^T(k)\,\big)$ is the state vector for the overall system.

The state-space representation of the system is then
$$\begin{bmatrix} W(k+1) \\ \theta(k+1) \end{bmatrix} = \begin{bmatrix} I_{(N+1)\times(N+1)} & 0 \\ B_s(k)\,h^*(k) & A_s(k) \end{bmatrix} \begin{bmatrix} W(k) \\ \theta(k) \end{bmatrix} \triangleq F_k\, \xi_k \quad (2.5)$$


where $h(k) = [\,x(k)\ \ x(k-1)\ \cdots\ x(k-N)\,]^T$ captures the effect of the reference input $x(\cdot)$. For this system, the derived measured output defined in Eq. (2.4) is
$$m(k) = \begin{bmatrix} D_s(k)\,h^*(k) & C_s(k) \end{bmatrix} \begin{bmatrix} W(k) \\ \theta(k) \end{bmatrix} + V_m(k) \triangleq H_k\, \xi_k + V_m(k) \quad (2.6)$$
A linear combination of the states is defined as the desired quantity to be estimated
$$s(k) = \begin{bmatrix} L_{1,k} & L_{2,k} \end{bmatrix} \begin{bmatrix} W(k) \\ \theta(k) \end{bmatrix} \triangleq L_k\, \xi_k \quad (2.7)$$

For simplicity, the single-channel problem is considered here. Extension to the multi-channel case is straightforward and is discussed in Chapter 4. Therefore, $m(k) \in \mathbb{R}^{1\times 1}$, $s(k) \in \mathbb{R}^{1\times 1}$, $\theta(k) \in \mathbb{R}^{N_s\times 1}$, and $W(k) \in \mathbb{R}^{(N+1)\times 1}$. All matrices are then of appropriate dimensions. There are several alternatives for selecting $L_k$ and thus the variable to be estimated, $s(k)$. The end goal of the estimation-based approach, however, is to set the weight vector in the adaptive FIR filter such that the output of the secondary path, $y(k)$ in Fig. 2.3, best matches $d(k)$. So $s(k) = d(k)$ is chosen, i.e. $L_k = H_k$.
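The block structure of Eqs. (2.5)-(2.6) can be assembled directly; the numerical values below are hypothetical, chosen only to show how $F_k$ and $H_k$ are built from the secondary-path model and the regressor $h(k)$:

```python
import numpy as np

# illustrative secondary-path state-space model (Ns = 2) and FIR order N = 3
As = np.array([[0.5, 1.0],
               [0.0, 0.4]])
Bs = np.array([[1.0],
               [0.0]])
Cs = np.array([[0.2, 0.1]])
Ds = np.array([[0.05]])
N = 3
h = np.array([[1.0, 0.5, -0.3, 0.2]])    # row h*(k) = [x(k) x(k-1) ... x(k-N)]

# Eq. (2.5): identity block encodes the trivial weight dynamics W(k+1) = W(k)
F = np.block([[np.eye(N + 1), np.zeros((N + 1, As.shape[0]))],
              [Bs @ h,        As]])
# Eq. (2.6): measurement matrix acting on the stacked state (W(k), theta(k))
H = np.block([[Ds @ h, Cs]])
L = H.copy()                             # the choice L_k = H_k makes s(k) = d(k)
```

Note that $F_k$ and $H_k$ are time-varying through $h(k)$, so these blocks are rebuilt at every sample.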

Any estimation algorithm can now be used to generate an estimate of the desired

quantity s(k). Two main estimation approaches are considered next.

2.3.1 H2 Optimal Estimation

Here a stochastic interpretation of the estimation problem is possible. Assuming that $\xi_0$ (the initial condition for the system in Figure 2.4) and $V_m(\cdot)$ are zero-mean uncorrelated random variables with known covariance matrices
$$E\left\{ \begin{bmatrix} \xi_0 \\ V_m(k) \end{bmatrix} \begin{bmatrix} \xi_0^* & V_m^*(j) \end{bmatrix} \right\} = \begin{bmatrix} \Pi_0 & 0 \\ 0 & Q_k\,\delta_{kj} \end{bmatrix} \quad (2.8)$$
the causal linear least-mean-squares estimate of $s(k)$, $\hat{s}(k|k) \triangleq \mathcal{F}\big(m(0), \cdots, m(k)\big)$, is given by the Kalman filter recursions [27].

There are two primary difficulties with the H2 optimal solution: (a) The H2 solu-

tion is optimal only if the stochastic assumptions are valid. If the external disturbance


is not Gaussian (for instance when there is a considerable modeling error that should

be treated as a component of the measurement disturbance) then pursuing an H2

filtering solution may yield undesirable performance; and (b) regardless of the choice

for Lk, the recursive H2 filtering solution does not simplify to the same extent as

the H∞ solution considered below. This can be of practical importance when the

real-time computational power is limited. Therefore, the H2 optimal solution is not

employed in this chapter.

2.3.2 H∞ Optimal Estimation

To avoid the difficulties associated with H2 estimation, we consider a minmax formulation of the estimation problem in this section. Here, the main objective is to limit

the worst case energy gain from the measurement disturbance and the initial condi-

tion uncertainty to the error in a causal (or strictly causal) estimate of s(k). More

specifically, the following two cases are of interest. Let $\hat{s}(k|k) = \mathcal{F}_f\big(m(0), \cdots, m(k)\big)$ denote an estimate of $s(k)$ given observations $m(i)$ for time $i = 0$ up to and including time $i = k$, and let $\hat{s}(k) \triangleq \hat{s}(k|k-1) = \mathcal{F}_p\big(m(0), \cdots, m(k-1)\big)$ denote an estimate of $s(k)$ given $m(i)$ for time $i = 0$ up to and including $i = k-1$. Note that $\hat{s}(k|k)$ and $\hat{s}(k)$ are known as the filtering and prediction estimates of $s(k)$, respectively. Two estimation errors can now be defined: the filtered error
$$e_{f,k} = \hat{s}(k|k) - s(k) \quad (2.9)$$
and the predicted error
$$e_{p,k} = \hat{s}(k) - s(k) \quad (2.10)$$

Given a final time $M$, the objective of the filtering problem can now be formalized as finding $\hat{s}(k|k)$ such that, for $\Pi_0 > 0$,
$$\sup_{V_m,\,\xi_0} \frac{\displaystyle\sum_{k=0}^{M} e_{f,k}^*\, e_{f,k}}{(\xi_0 - \hat{\xi}_0)^*\, \Pi_0^{-1}\, (\xi_0 - \hat{\xi}_0) + \displaystyle\sum_{k=0}^{M} V_m^*(k)\, V_m(k)} \leq \gamma^2 \quad (2.11)$$


for a given scalar $\gamma > 0$. In a similar way, the objective of the prediction problem can be formalized as finding $\hat{s}(k)$ such that
$$\sup_{V_m,\,\xi_0} \frac{\displaystyle\sum_{k=0}^{M} e_{p,k}^*\, e_{p,k}}{(\xi_0 - \hat{\xi}_0)^*\, \Pi_0^{-1}\, (\xi_0 - \hat{\xi}_0) + \displaystyle\sum_{k=0}^{M} V_m^*(k)\, V_m(k)} \leq \gamma^2 \quad (2.12)$$

for a given scalar γ > 0. The question of optimality of the solution can be answered

by finding the infimum among all feasible γ’s. Note that, for H∞-optimal estimation, there is no statistical assumption regarding the measurement disturbance.

Therefore, the inclusion of the output of the modeling error block (see Fig. 2.3) in

the measurement disturbance is consistent with the H∞ formulation of the problem. The

elimination of the “modeling error” block in the approximate model of primary path

in Fig. 2.4 is based on this characteristic of the disturbance in an H∞ formulation.

2.4 H∞-Optimal Solution

For the remainder of this chapter, the case where $L_k = H_k$ is considered. Referring to Figure 2.4, this means that $s(k) = d(k)$. To discuss the solution, the solutions to the γ-suboptimal finite-horizon filtering problem of Eq. (2.11) and the prediction problem of Eq. (2.12) are drawn from [27]. Finally, we find the optimal value of γ and show how $\gamma = \gamma_{opt}$ simplifies the solutions.

2.4.1 γ-Suboptimal Finite Horizon Filtering Solution

Theorem 2.1: [27] Consider the state-space representation of the block diagram of Figure 2.4, described by Equations (2.5)-(2.7). A level-\gamma H∞ filter that achieves (2.11) exists if, and only if, the matrices

R_k = \begin{bmatrix} I_p & 0 \\ 0 & -\gamma^2 I_q \end{bmatrix} \quad \text{and} \quad R_{e,k} = \begin{bmatrix} I_p & 0 \\ 0 & -\gamma^2 I_q \end{bmatrix} + \begin{bmatrix} H_k \\ L_k \end{bmatrix} P_k \begin{bmatrix} H_k^* & L_k^* \end{bmatrix} \qquad (2.13)

(here p and q are used to indicate the correct dimensions) have the same inertia for all 0 \le k \le M, where P_0 = \Pi_0 > 0 satisfies the Riccati recursion

P_{k+1} = F_k P_k F_k^* - K_{f,k} R_{e,k} K_{f,k}^* \qquad (2.14)

where

K_{f,k} = \left( F_k P_k \begin{bmatrix} H_k^* & L_k^* \end{bmatrix} \right) R_{e,k}^{-1} \qquad (2.15)

If this is the case, then the central H∞ estimator is given by

\hat{\xi}_{k+1} = F_k \hat{\xi}_k + K_{f,k} \left( m(k) - H_k \hat{\xi}_k \right), \quad \hat{\xi}_0 = 0 \qquad (2.16)

\hat{s}(k|k) = L_k \hat{\xi}_k + (L_k P_k H_k^*) R_{He,k}^{-1} \left( m(k) - H_k \hat{\xi}_k \right) \qquad (2.17)

with K_{f,k} = (F_k P_k H_k^*) R_{He,k}^{-1} and R_{He,k} = I_p + H_k P_k H_k^*.

Proof: see [27].

2.4.2 γ-Suboptimal Finite Horizon Prediction Solution

Theorem 2.2: [27] For the system described by Equations (2.5)-(2.7), a level-\gamma H∞ filter that achieves (2.12) exists if, and only if, all leading submatrices of

R_k^p = \begin{bmatrix} -\gamma^2 I_p & 0 \\ 0 & I_q \end{bmatrix} \quad \text{and} \quad R_{e,k}^p = \begin{bmatrix} -\gamma^2 I_p & 0 \\ 0 & I_q \end{bmatrix} + \begin{bmatrix} L_k \\ H_k \end{bmatrix} P_k \begin{bmatrix} L_k^* & H_k^* \end{bmatrix} \qquad (2.18)

have the same inertia for all 0 \le k < M. Note that P_k is updated according to Eq. (2.14). If this is the case, then one possible level-\gamma H∞ filter is given by

\hat{\xi}_{k+1} = F_k \hat{\xi}_k + K_{p,k} \left( m(k) - H_k \hat{\xi}_k \right), \quad \hat{\xi}_0 = 0 \qquad (2.19)

\hat{s}(k) = L_k \hat{\xi}_k \qquad (2.20)

where

K_{p,k} = F_k \bar{P}_k H_k^* \left( I + H_k \bar{P}_k H_k^* \right)^{-1} \qquad (2.21)

and

\bar{P}_k = \left( I - \gamma^{-2} P_k L_k^* L_k \right)^{-1} P_k \qquad (2.22)

Proof: see [27].

Note that the condition in Eq. (2.18) is equivalent to

\left( I - \gamma^{-2} P_k L_k^* L_k \right) > 0, \quad \text{for } k = 0, \cdots, M \qquad (2.23)

and hence \bar{P}_k in Eq. (2.22) is well defined. \bar{P}_k can also be defined through

\bar{P}_k^{-1} = P_k^{-1} - \gamma^{-2} L_k^* L_k, \quad \text{for } k = 0, \cdots, M \qquad (2.24)


which proves useful in rewriting the prediction gain, K_{p,k} in Eq. (2.21), as follows. First, note that

F_k \bar{P}_k H_k^* \left( I + H_k \bar{P}_k H_k^* \right)^{-1} = F_k \left( \bar{P}_k^{-1} + H_k^* H_k \right)^{-1} H_k^* \qquad (2.25)

and hence, substituting for \bar{P}_k^{-1} from Eq. (2.24),

K_{p,k} = F_k \left( P_k^{-1} - \gamma^{-2} L_k^* L_k + H_k^* H_k \right)^{-1} H_k^* \qquad (2.26)
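The equality in Eq. (2.25) is an instance of the push-through identity \bar{P} H^* (I + H \bar{P} H^*)^{-1} = (\bar{P}^{-1} + H^* H)^{-1} H^*, which is easy to confirm numerically. The random matrices below are purely illustrative (real-valued for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 2
X = rng.standard_normal((n, n))
P = X @ X.T + n * np.eye(n)            # symmetric positive definite, playing P_bar
H = rng.standard_normal((p, n))

# Both sides of Eq. (2.25) without the leading F_k factor.
lhs = P @ H.T @ np.linalg.inv(np.eye(p) + H @ P @ H.T)
rhs = np.linalg.inv(np.linalg.inv(P) + H.T @ H) @ H.T
assert np.allclose(lhs, rhs)
```

Multiplying the left-hand side by (\bar{P}^{-1} + H^*H) and simplifying shows the two expressions agree exactly, which is what the check confirms.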

Theorems 2.1 and 2.2 (Sections 2.4.1 and 2.4.2) provide the form of the filtering and

prediction estimators, respectively. The following section investigates the optimal

value of γ for both of these solutions, and outlines the simplifications that follow.

2.4.3 The Optimal Value of γ

The optimal value of γ for the filtering solution will be discussed first. The discussion

of the optimal prediction solution utilizes the results in the filtering case.

2.4.3.1 Filtering Case

2.4.3.1.1 \gamma_{opt} \le 1: First, it will be shown that for the filtering solution \gamma_{opt} \le 1. In Eq. (2.11), one can always pick \hat{s}(k|k) to be simply m(k). With this choice

\hat{s}(k|k) - s(k) = \mathcal{V}_m(k), \quad \text{for all } k \qquad (2.27)

and Eq. (2.11) reduces to

\sup_{\mathcal{V}_m \in \mathcal{L}_2,\, \xi_0} \frac{\sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)}{(\xi_0 - \bar{\xi}_0)^* \Pi_0^{-1} (\xi_0 - \bar{\xi}_0) + \sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)} \qquad (2.28)

which can never exceed 1 (i.e. \gamma_{opt} \le 1). A feasible solution for the H∞ estimation problem in Eq. (2.11) is therefore guaranteed when \gamma is chosen to be 1. Note that it is possible to directly demonstrate the feasibility of \gamma = 1: using simple matrix manipulation, it can be shown that for L_k = H_k and for \gamma = 1, R_k and R_{e,k} have the same inertia for all k.

2.4.3.1.2 \gamma_{opt} \ge 1: To show that \gamma_{opt} is indeed 1, an admissible sequence of disturbances and a valid initial condition should be constructed such that the energy-gain ratio can be made arbitrarily close to 1 regardless of the filtering solution chosen. The necessary and sufficient conditions for the optimality of \gamma_{opt} = 1 are developed in the course of constructing this admissible sequence of disturbances.

Assume that \bar{\xi}_0^T = \begin{pmatrix} \bar{W}_0^T & \theta_0^T \end{pmatrix} is the best estimate for the initial condition of the system in the approximate model of the primary path (Fig. 2.4). Moreover, assume that \theta_0 is indeed the actual initial condition for the secondary path in Fig. 2.4. The actual initial condition for the weight vector of the FIR filter in this approximate model is W_0. Then,

m(0) = \begin{bmatrix} D_s(0) h^*(0) & C_s(0) \end{bmatrix} \begin{bmatrix} W_0 \\ \theta_0 \end{bmatrix} + \mathcal{V}_m(0) \qquad (2.29)

H_0 \bar{\xi}_0 = \begin{bmatrix} D_s(0) h^*(0) & C_s(0) \end{bmatrix} \begin{bmatrix} \bar{W}_0 \\ \theta_0 \end{bmatrix} \qquad (2.30)

where m(0) is the (derived) measurement at time k = 0. Now, if

\mathcal{V}_m(0) = D_s(0) h^*(0) \left( \bar{W}_0 - W_0 \right) = K_{\mathcal{V}}(0) \left( \bar{W}_0 - W_0 \right) \qquad (2.31)

then m(0) - H_0 \bar{\xi}_0 = 0 and the estimate of the weight vector will not change. More specifically, Eqs. (2.16) and (2.17) reduce to the following simple updates

\hat{\xi}_1 = F_0 \bar{\xi}_0 \qquad (2.32)

\hat{s}(0|0) = L_0 \bar{\xi}_0 \qquad (2.33)

which, given L_0 = H_0, generate the estimation error

e_{f,0} = \hat{s}(0|0) - s(0) = L_0 \bar{\xi}_0 - L_0 \xi_0 = D_s(0) h^*(0) \left( \bar{W}_0 - W_0 \right) = \mathcal{V}_m(0) \qquad (2.34)


Repeating a similar argument at k = 1 and 2, it is easy to see that if

\mathcal{V}_m(1) = \left[ D_s(1) h^*(1) + C_s(1) B_s(0) h^*(0) \right] \left( \bar{W}_0 - W_0 \right) = K_{\mathcal{V}}(1) \left( \bar{W}_0 - W_0 \right) \qquad (2.35)

and

\mathcal{V}_m(2) = \left[ D_s(2) h^*(2) + C_s(2) B_s(1) h^*(1) + C_s(2) A_s(1) B_s(0) h^*(0) \right] \left( \bar{W}_0 - W_0 \right) = K_{\mathcal{V}}(2) \left( \bar{W}_0 - W_0 \right) \qquad (2.36)

then

m(k) - H_k \hat{\xi}_k = 0, \quad \text{for } k = 1, 2 \qquad (2.37)

Note that when Eq. (2.37) holds, and with L_k = H_k, Eq. (2.17) reduces to

\hat{s}(k|k) = L_k \hat{\xi}_k = H_k \hat{\xi}_k \qquad (2.38)

and hence

e_{f,k} = \hat{s}(k|k) - s(k) = \hat{s}(k|k) - \left[ m(k) - \mathcal{V}_m(k) \right] = H_k \hat{\xi}_k - \left[ m(k) - \mathcal{V}_m(k) \right] = \left[ H_k \hat{\xi}_k - m(k) \right] + \mathcal{V}_m(k) = \mathcal{V}_m(k), \quad \text{for } k = 1, 2 \qquad (2.39)

Continuing this process, K_{\mathcal{V}}(k) for 0 \le k \le M can be defined as

\begin{bmatrix} K_{\mathcal{V}}(0) \\ K_{\mathcal{V}}(1) \\ K_{\mathcal{V}}(2) \\ \vdots \\ K_{\mathcal{V}}(M) \end{bmatrix} = \begin{bmatrix} D_s(0) & 0 & 0 & \cdots & 0 \\ C_s(1) B_s(0) & D_s(1) & 0 & \cdots & 0 \\ C_s(2) A_s(1) B_s(0) & C_s(2) B_s(1) & D_s(2) & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ \cdots & \cdots & \cdots & \cdots & D_s(M) \end{bmatrix} \begin{bmatrix} h^*(0) \\ h^*(1) \\ h^*(2) \\ \vdots \\ h^*(M) \end{bmatrix} \triangleq \Delta_M \Lambda_M \qquad (2.40)


such that \mathcal{V}_m(k), \forall k, is an admissible disturbance. In this case, Eq. (2.11) reduces to

\sup_{\xi_0} \frac{\sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)}{(\xi_0 - \bar{\xi}_0)^* \Pi_0^{-1} (\xi_0 - \bar{\xi}_0) + \sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)} = \sup_{\xi_0} \frac{(\bar{W}_0 - W_0)^* \left[ \sum_{k=0}^{M} K_{\mathcal{V}}^*(k) K_{\mathcal{V}}(k) \right] (\bar{W}_0 - W_0)}{(\xi_0 - \bar{\xi}_0)^* \Pi_0^{-1} (\xi_0 - \bar{\xi}_0) + (\bar{W}_0 - W_0)^* \left[ \sum_{k=0}^{M} K_{\mathcal{V}}^*(k) K_{\mathcal{V}}(k) \right] (\bar{W}_0 - W_0)} \qquad (2.41)

From Eq. (2.40), note that

\sum_{k=0}^{M} K_{\mathcal{V}}^*(k) K_{\mathcal{V}}(k) = \Lambda_M^* \Delta_M^* \Delta_M \Lambda_M = \| \Delta_M \Lambda_M \|_2^2 \qquad (2.42)

and hence the ratio in Eq. (2.41) can be made arbitrarily close to one if

\lim_{M \to \infty} \| \Delta_M \Lambda_M \|_2 \to \infty \qquad (2.43)

Eq. (2.43) will be referred to as the condition for optimality of \gamma = 1 for the filtering solution.

Equation (2.43) can now be used to derive necessary and sufficient conditions for optimality of \gamma = 1. First, note that a necessary condition for Eq. (2.43) is

\lim_{M \to \infty} \| \Lambda_M \|_2 \to \infty \qquad (2.44)

or equivalently

\lim_{M \to \infty} \sum_{k=0}^{M} h^*(k) h(k) \to \infty \qquad (2.45)

An h(k) that satisfies the condition in (2.45) is referred to as exciting [26]. Several sufficient conditions can now be developed. Since

\| \Delta_M \Lambda_M \|_2 \ge \sigma_{min}(\Delta_M) \| \Lambda_M \|_2 \qquad (2.46)

one sufficient condition is that

\sigma_{min}(\Delta_M) > \epsilon, \quad \forall M, \quad \epsilon > 0 \qquad (2.47)

Note that for LTI systems, the sufficient condition (2.47) is equivalent to the requirement that the system have no zeros on the unit circle. Another sufficient condition is that the h(k)'s be persistently exciting, that is

\lim_{M \to \infty} \sigma_{min} \left[ \frac{1}{M} \sum_{k=0}^{M} h(k) h^*(k) \right] > 0 \qquad (2.48)

which holds for most reasonable systems.
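The persistent-excitation condition (2.48) is straightforward to test numerically for a given reference sequence. The sketch below is a hypothetical example (a noisy sinusoidal reference forming tap-delay regressors, not the duct data): it estimates the smallest eigenvalue of the averaged outer-product matrix.

```python
import numpy as np

def excitation_level(h_seq):
    """Smallest eigenvalue of (1/M) * sum_k h(k) h(k)^*, the quantity in Eq. (2.48)."""
    M = len(h_seq)
    R = sum(np.outer(h, h) for h in h_seq) / M
    return np.linalg.eigvalsh(R)[0]   # R is symmetric PSD; eigvalsh sorts ascending

# Tap-delay-line regressors built from a sinusoid plus a little broadband noise.
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 0.15 * np.arange(500)) + 0.1 * rng.standard_normal(500)
N = 4
h_seq = [x[k - N + 1:k + 1][::-1] for k in range(N - 1, 500)]
level = excitation_level(h_seq)
```

A pure single tone would excite only two of the four tap directions (level near zero); the added broadband component lifts the smallest eigenvalue away from zero, which is exactly what (2.48) requires.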

2.4.3.2 Prediction Case

The optimal value of \gamma cannot be less than one in the prediction case either. The previous section showed that, despite using all available measurements up to and including time k, the sequence of admissible disturbances \mathcal{V}_m(k) = K_{\mathcal{V}}(k)(\bar{W}_0 - W_0) for k = 0, \cdots, M (where K_{\mathcal{V}}(k) is given by Eq. (2.40)) prevented the filtering solution from achieving \gamma < 1. The prediction solution, which uses only the measurements up to time k (not including k itself), cannot improve over the filtering solution, and therefore the energy gain \gamma is at least one.

Next, it is shown that if the initial condition P_0 is chosen appropriately (i.e. if it is small enough), then \gamma_{opt} = 1 can be guaranteed. Referring to the Lyapunov recursion of Eq. (2.65), the Riccati matrix at time k can be written as:

P_k = \left( \prod_{j=0}^{k-1} F_j \right) P_0 \left( \prod_{j=0}^{k-1} F_j \right)^*, \quad F_j = \begin{bmatrix} I & 0 \\ B_s(j) h^*(j) & A_s(j) \end{bmatrix} \qquad (2.49)

Defining

\Psi_A^j = A_s(j) A_s(j-1) \cdots A_s(0) \qquad (2.50)

Eq. (2.49) can be written as

P_k = \begin{bmatrix} I & 0 \\ \sum_{j=0}^{k-1} \Psi_A^j B_s(j) h^*(k-1-j) & \Psi_A^k \end{bmatrix} P_0 \begin{bmatrix} I & 0 \\ \sum_{j=0}^{k-1} \Psi_A^j B_s(j) h^*(k-1-j) & \Psi_A^k \end{bmatrix}^* \qquad (2.51)


From Theorem 2.2 (Section 2.4.2), the condition for the existence of a prediction solution is (I - \gamma^{-2} P_k L_k^* L_k) > 0, or equivalently

\left( \gamma^2 - L_k P_k L_k^* \right) > 0 \qquad (2.52)

Note that L_k = \begin{bmatrix} D_s(k) h^*(k) & C_s(k) \end{bmatrix}, and therefore Eq. (2.52) can be rewritten as

\gamma^2 - \begin{bmatrix} D_s(k) h^*(k) & C_s(k) \end{bmatrix} P_k \begin{bmatrix} h(k) D_s^*(k) \\ C_s^*(k) \end{bmatrix} > 0 \qquad (2.53)

Substituting for P_k from Eq. (2.51) and carrying out the matrix multiplications, Eq. (2.53) is equivalent to

\gamma^2 - \begin{bmatrix} h(k) D_s^*(k) + \sum_{j=0}^{k-1} h(k-1-j) B_s^*(j) \Psi_A^{j*} C_s^*(k) \\ \Psi_A^{k*} C_s^*(k) \end{bmatrix}^* P_0 \begin{bmatrix} h(k) D_s^*(k) + \sum_{j=0}^{k-1} h(k-1-j) B_s^*(j) \Psi_A^{j*} C_s^*(k) \\ \Psi_A^{k*} C_s^*(k) \end{bmatrix} > 0 \qquad (2.54)

Introducing

h'^*(k) = D_s(k) h^*(k) + \sum_{j=0}^{k-1} C_s(k) \Psi_A^j B_s(j) h^*(k-1-j) \qquad (2.55)

as the filtered version of the reference vector h(k), Eq. (2.54) can be expressed as

\gamma^2 - \begin{bmatrix} h'^*(k) & C_s(k) \Psi_A^k \end{bmatrix} P_0 \begin{bmatrix} h'(k) \\ \Psi_A^{k*} C_s^*(k) \end{bmatrix} > 0 \qquad (2.56)

Selecting the initial value of the Riccati matrix, without loss of generality, as

P_0 = \begin{bmatrix} \mu I & 0 \\ 0 & \alpha I \end{bmatrix} \qquad (2.57)

Eq. (2.56) reduces to

\gamma^2 - \mu h'^*(k) h'(k) - \alpha C_s(k) \Psi_A^k \Psi_A^{k*} C_s^*(k) > 0 \qquad (2.58)

It is now clear that a prediction solution for \gamma = 1 exists if

\mu < \frac{1 - \alpha C_s(k) \Psi_A^k \Psi_A^{k*} C_s^*(k)}{h'^*(k) h'(k)} \qquad (2.59)

Equation (2.59) is therefore the condition for optimality of \gamma_{opt} = 1 for the prediction solution.


2.4.4 Simplified Solution Due to γ = 1

2.4.4.1 Filtering Case:

The following shows that with H_k = L_k and \gamma = 1, the Riccati equation (2.14) is considerably simplified. To this end, apply the matrix inversion lemma, (A + BCD)^{-1} = A^{-1} - A^{-1} B [C^{-1} + D A^{-1} B]^{-1} D A^{-1}, to

R_{e,k} = \begin{bmatrix} I_p & 0 \\ 0 & -I_q \end{bmatrix} + \begin{bmatrix} H_k \\ H_k \end{bmatrix} P_k \begin{bmatrix} H_k^* & H_k^* \end{bmatrix} \qquad (2.60)

with A = \begin{bmatrix} I_p & 0 \\ 0 & -I_q \end{bmatrix}, B = \begin{bmatrix} H_k \\ H_k \end{bmatrix}, C = I, and D = P_k \begin{bmatrix} H_k^* & H_k^* \end{bmatrix}. It is easy to verify that the term D A^{-1} B is zero. Therefore

R_{e,k}^{-1} = \begin{bmatrix} I_p & 0 \\ 0 & -I_q \end{bmatrix} - \begin{bmatrix} H_k \\ -H_k \end{bmatrix} P_k \begin{bmatrix} H_k^* & -H_k^* \end{bmatrix} \qquad (2.61)

In this case,

K_{f,k} R_{e,k} K_{f,k}^* = \left( F_k P_k \begin{bmatrix} H_k^* & H_k^* \end{bmatrix} \right) R_{e,k}^{-1} \left( \begin{bmatrix} H_k \\ H_k \end{bmatrix} P_k F_k^* \right) = 0

for \gamma = 1 and for all k. Thus the Riccati recursion (2.14) reduces to the Lyapunov recursion P_{k+1} = F_k P_k F_k^* with P_0 = \Pi_0 > 0.
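The vanishing of the correction term for L_k = H_k and \gamma = 1 is easy to confirm numerically; the matrices below are hypothetical random data used only to exercise the algebra of Eqs. (2.14), (2.15), and (2.60).

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 2
F = rng.standard_normal((n, n))
H = rng.standard_normal((p, n))
X = rng.standard_normal((n, n))
P = X @ X.T + n * np.eye(n)                         # symmetric positive definite P_k

A = np.block([[np.eye(p), np.zeros((p, p))],
              [np.zeros((p, p)), -np.eye(p)]])      # gamma = 1 signature matrix
B = np.vstack([H, H])                               # [H_k; H_k] (L_k = H_k)
D = P @ np.hstack([H.T, H.T])                       # P_k [H_k^*  H_k^*]
assert np.allclose(D @ np.linalg.inv(A) @ B, 0)     # the D A^{-1} B term is zero

Re = A + B @ P @ B.T                                # Eq. (2.60)
Kf = F @ P @ B.T @ np.linalg.inv(Re)                # Eq. (2.15)
riccati = F @ P @ F.T - Kf @ Re @ Kf.T              # Eq. (2.14)
assert np.allclose(riccati, F @ P @ F.T)            # reduces to the Lyapunov step
```

Both assertions pass for any positive definite P, reflecting that the cancellation is structural (it comes from the repeated H block), not numerical coincidence.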

Partitioning the Riccati matrix P_k in block matrices conformable with the block-matrix structure of F_k, (2.14) yields the following simple update

P_{11,k+1} = P_{11,k}, \quad P_{11,0} = \Pi_{11,0}

P_{12,k+1} = P_{12,k} A_s^*(k) + P_{11,k} h(k) B_s^*(k), \quad P_{12,0} = \Pi_{12,0}

P_{22,k+1} = B_s(k) h^*(k) P_{11,k} h(k) B_s^*(k) + A_s(k) P_{12,k}^* h(k) B_s^*(k) + B_s(k) h^*(k) P_{12,k} A_s^*(k) + A_s(k) P_{22,k} A_s^*(k), \quad P_{22,0} = \Pi_{22,0} \qquad (2.62)

The filtering solution can now be summarized in the following theorem:


Theorem 2.3: Consider the system described by Equations (2.5)-(2.7), with L_k = H_k. If the optimality condition (2.43) is satisfied, the H∞-optimal filtering solution achieves \gamma_{opt} = 1, and the central H∞-optimal filter is given by

\hat{\xi}_{k+1} = F_k \hat{\xi}_k + K_{f,k} \left( m(k) - H_k \hat{\xi}_k \right), \quad \hat{\xi}_0 = 0 \qquad (2.63)

\hat{s}(k|k) = L_k \hat{\xi}_k + (L_k P_k H_k^*) R_{He,k}^{-1} \left( m(k) - H_k \hat{\xi}_k \right) \qquad (2.64)

with K_{f,k} = (F_k P_k H_k^*) R_{He,k}^{-1} and R_{He,k} = I_p + H_k P_k H_k^*, where P_k satisfies the Lyapunov recursion

P_{k+1} = F_k P_k F_k^*, \quad P_0 = \Pi_0. \qquad (2.65)

Proof: follows from the discussions above.
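The central filter of Theorem 2.3 amounts to a few matrix operations per step. The sketch below runs Eqs. (2.63)-(2.65) on a hypothetical 2-state toy system with a scalar measurement (not the duct model); the full-matrix Lyapunov update is used here rather than the cheaper partitioned form (2.62).

```python
import numpy as np

def ebaf_filter_step(xi, P, F, H, L, m_k):
    """One step of the central H-infinity filter of Theorem 2.3 (gamma = 1)."""
    R_He = np.eye(H.shape[0]) + H @ P @ H.T          # R_He,k = I_p + H_k P_k H_k^*
    R_inv = np.linalg.inv(R_He)
    innov = m_k - H @ xi                             # innovation m(k) - H_k xi_hat_k
    s_hat = L @ xi + L @ P @ H.T @ R_inv @ innov     # filtered estimate, Eq. (2.64)
    xi_next = F @ xi + F @ P @ H.T @ R_inv @ innov   # state update with K_f,k, Eq. (2.63)
    P_next = F @ P @ F.T                             # Lyapunov recursion, Eq. (2.65)
    return xi_next, P_next, s_hat

# Hypothetical 2-state system, L_k = H_k, driven by a sinusoidal measurement.
F = np.array([[1.0, 0.0], [0.3, 0.5]])
H = np.array([[1.0, 0.8]])
xi, P = np.zeros(2), 0.1 * np.eye(2)
for k in range(20):
    m_k = np.array([np.sin(0.2 * k)])
    xi, P, s_hat = ebaf_filter_step(xi, P, F, H, H, m_k)
```

Because the Riccati step has degenerated to a Lyapunov step, no inertia checks or indefinite-matrix inversions are needed inside the loop.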

2.4.4.2 Prediction Case:

Referring to Eq. (2.26), it is clear that for \gamma = 1 and for L_k = H_k, the gain K_{p,k} reduces to F_k P_k H_k^*. Therefore, the prediction solution can be summarized as follows:

Theorem 2.4: Consider the system described by Equations (2.5)-(2.7), with L_k = H_k. If the optimality conditions (2.43) and (2.59) are satisfied, and with P_0 as defined in Eq. (2.57), the H∞-optimal prediction solution achieves \gamma_{opt} = 1, and the central filter is given by

\hat{\xi}_{k+1} = F_k \hat{\xi}_k + K_{p,k} \left( m(k) - H_k \hat{\xi}_k \right), \quad \hat{\xi}_0 = 0 \qquad (2.66)

\hat{s}(k) = L_k \hat{\xi}_k \qquad (2.67)

with K_{p,k} = F_k P_k H_k^*, where P_k satisfies the Lyapunov recursion (2.65).

Proof: follows from the discussions above.

2.5 Important Remarks

The main idea in the EBAF algorithm can be summarized as follows. At a given time k, use the available information on: (a) the measurement history, e(i) for 0 \le i \le k; (b) the control history, u(i) for 0 \le i < k; (c) the reference-signal history, x(i) for 0 \le i \le k; (d) the model of the secondary path and the estimate of its initial condition; and (e) the pre-determined length of the adaptive FIR filter, to produce the best estimate of the actual output of the primary path, d(k). The key premise is that if d(k) is accurately estimated, then the inputs u(k) can be generated such that d(k) is canceled. The objective of the EBAF algorithm is to make y(k) match the optimal estimate of d(k) (see Fig. 2.3). For the adaptive filtering problem in Fig. 2.1, however, the adaptation algorithm only has direct access to the weight vector of the adaptive FIR filter. Because of this practical constraint, the EBAF algorithm adapts the weight vector in the adaptive FIR filter according to the estimate of the optimal weight vector given by Eqs. (2.63) or (2.66) (for the filtering or prediction solutions, respectively). Note that \hat{\xi}_k^T = \begin{pmatrix} \hat{W}^T(k) & \hat{\theta}^T(k) \end{pmatrix}. The error analysis for this adaptive algorithm is discussed in Section 2.7. The main features of this algorithm can now be described as follows:

1. The estimation-based adaptive filtering (EBAF) algorithm yields a solution that

only requires one Riccati recursion. The recursion propagates forward in time,

and does not require any information about the future of the system or the

reference signal (thus allowing the resulting adaptive algorithm to be real-time

implementable). This has come at the expense of restricting the controller to

an FIR structure in advance.

2. With K_{f,k} R_{e,k} K_{f,k}^* = 0, the Riccati equation simplifies to P_{k+1} = F_k P_k F_k^*, which considerably reduces the computational complexity involved in propagating the Riccati matrix. Furthermore, this update always generates a non-negative definite P_k, as long as P_0 is selected to be positive definite (see Eq. (2.65)).

3. In general, the solution to an H∞ filtering problem requires verification of the fact that R_k and R_{e,k} have the same inertia at each step (see Eq. (2.13)). In a similar way, the prediction solution requires that all leading submatrices of R_k^p and R_{e,k}^p have the same inertia for all k (see Eq. (2.18)). This can be a computationally expensive task. Moreover, it may lead to a breakdown in the solution if the condition is not met at some time k. The formulation of the problem eliminates the need for such checks, as well as the potential breakdown of the solution, by providing a definitive answer to the feasibility and optimality of \gamma = 1.

4. When [\, A_s(k), B_s(k), C_s(k), D_s(k) \,] = [\, 0, 0, 0, I \,] for all k (i.e. the output of the FIR filter directly cancels d(k) in Figure 2.1), the filtering/prediction results reduce to the simple Normalized-LMS/LMS algorithms of Ref. [26], as expected.

5. As mentioned earlier, there is no need to verify the solutions at each time step, so the computational complexity of the estimation-based approach is O(n^3) (primarily for calculating F_k P_k F_k^*), where

n = (N + 1) + N_s \qquad (2.68)

in which (N + 1) is the length of the FIR filter and N_s is the order of the secondary path. The special structure of F_k, however, reduces the computational complexity to O(N_s^3 + N_s N), i.e. cubic in the order of the secondary path and linear in the length of the FIR filter (see Eq. (2.62)). This is often a substantial reduction in the computation since N_s \ll N. Note that the computational complexity of FxLMS is quadratic in N_s and linear in N.

2.6 Implementation Scheme for EBAF Algorithm

Three sets of variables are used to describe the implementation scheme:

1. Best available estimate of a variable: Referring to Eqs. (2.16) and (2.19), and noting that \hat{\xi}_k^T = \begin{pmatrix} \hat{W}^T(k) & \hat{\theta}^T(k) \end{pmatrix}, \hat{W}(k) can be defined as the estimate of the weight vector, and \hat{\theta}(k) as the secondary-path state estimate in the approximate model of the primary path.

2. Actual value of a variable: Referring to Fig. 2.1, define u(k) \triangleq h^*(k) \hat{W}(k) as the actual input to the secondary path, y(k) as the actual output of the secondary path, and d(k) as the actual output of the primary path. Note that d(k) and y(k) are not directly measurable, and that at each iteration the weight vector in the adaptive FIR filter is set to \hat{W}(k).

3. Adaptive algorithm's internal copy of a variable: Recall that in Eq. (2.4), y(k) is used to construct the derived measurement m(k). Since y(k) is not directly available, the adaptive algorithm needs to generate an internal copy of this variable. This internal copy (referred to as y_{copy}(k)) is constructed by applying u(k) (the actual control signal) to a model of the secondary path inside the adaptive algorithm. The initial condition for this model is \theta_{copy}(0). In other words, the derived measurement is constructed as follows

m(k) = e(k) + y_{copy}(k) \qquad (2.69)

where

\theta_{copy}(k+1) = A_s(k) \theta_{copy}(k) + B_s(k) u(k) \qquad (2.70)

y_{copy}(k) = C_s(k) \theta_{copy}(k) + D_s(k) u(k) \qquad (2.71)

Given the identified model for the secondary path and its input u(k) = h^*(k) \hat{W}(k), the adaptive algorithm's copy of y(k) will be exact if the actual initial condition of the secondary path is known. Obviously, one cannot expect exact knowledge of the actual initial condition of the secondary path. In the next section, however, it is shown that when the secondary path is linear and stable, the contribution of the initial condition to its output decays to zero as k increases. Therefore, the internal copy of y(k) will converge to the actual value of y(k) over time.

Now, the implementation algorithm can be outlined as follows:

1. Start with \hat{W}(0) = 0 and \hat{\theta}(0) = 0 as the initial guess for the state vector in the approximate model of the primary path. Also assume that \theta_{copy}(0) = 0, and h(0) = [\, x(0)\ 0\ \cdots\ 0 \,]^T. The initial value for the Riccati matrix is P_0, which is chosen to be block diagonal. The role of P_0 is similar to the learning rate in LMS-based adaptive algorithms (see Section 5.3.2).


2. If 0 \le k \le M (finite horizon):

(a) Form the control signal

u(k) = h^*(k) \hat{W}(k) \qquad (2.72)

to be applied to the secondary path. Note that applying u(k) to the secondary path produces

y(k) = C_s(k) \theta(k) + D_s(k) u(k) \qquad (2.73)

at the output of the secondary path. This in turn leads to the following error signal measured at time k:

e(k) = d(k) - y(k) + \mathcal{V}_m(k) \qquad (2.74)

which is available to the adaptive algorithm to perform the state update at time k.

(b) Propagate the state estimate \hat{\xi}_k^T = ( \hat{W}^T(k) \ \hat{\theta}^T(k) ) and the internal copy of the state of the secondary path as follows

\begin{bmatrix} \hat{\xi}_{k+1} \\ \theta_{copy}(k+1) \end{bmatrix} = \begin{bmatrix} F_k + K_{f,k} \left[ 0 \ \ -C_s(k) \right] & K_{f,k} C_s(k) \\ \left[ B_s(k) h^*(k) \ \ 0 \right] & A_s(k) \end{bmatrix} \begin{bmatrix} \hat{\xi}_k \\ \theta_{copy}(k) \end{bmatrix} + \begin{bmatrix} K_{f,k} \\ 0 \end{bmatrix} e(k) \qquad (2.75)

where e(k) is the error-sensor measurement at time k given by Eq. (2.74), and K_{f,k} = F_k P_k H_k^* (I + H_k P_k H_k^*)^{-1} (see Theorem 2.3). Note that for the prediction-based EBAF algorithm K_{f,k} should be replaced with K_{p,k} = F_k P_k H_k^*.


(c) Update the Riccati matrix P_k using the Lyapunov recursion

\begin{bmatrix} P_{11} & P_{12,k+1} \\ P_{12,k+1}^* & P_{22,k+1} \end{bmatrix} = \begin{bmatrix} I & 0 \\ B_s(k) h^*(k) & A_s(k) \end{bmatrix} \begin{bmatrix} P_{11} & P_{12,k} \\ P_{12,k}^* & P_{22,k} \end{bmatrix} \begin{bmatrix} I & 0 \\ B_s(k) h^*(k) & A_s(k) \end{bmatrix}^* \qquad (2.76)

P_{k+1} will be used in (2.75) to update the state estimate at the next step.

3. Go to 2.
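The implementation steps above can be sketched end-to-end. Everything below is a hypothetical toy setup (a small stable secondary path and a sinusoidal primary output, not the identified duct model); it assembles F_k and H_k from the FIR-plus-secondary-path state, uses the filtering gain of Theorem 2.3 for a scalar measurement, and follows steps 2(a)-2(c) with no measurement noise.

```python
import numpy as np

rng = np.random.default_rng(4)
Ntap, Ns, T = 4, 2, 300                            # FIR length, path order, horizon
As = np.array([[0.5, 0.1], [0.0, 0.4]])            # toy stable secondary path
Bs = np.array([1.0, 0.5])
Cs = np.array([1.0, 0.0])
Ds = 0.2

W = np.zeros(Ntap)                                 # step 1: zero initial guesses
theta_hat = np.zeros(Ns)
theta_copy = np.zeros(Ns)
theta = rng.standard_normal(Ns)                    # unknown actual path state
P = 0.05 * np.eye(Ntap + Ns)                       # learning-rate-like P_0

x = np.sin(2 * np.pi * 0.1 * np.arange(T))                 # reference signal
d = 0.7 * np.sin(2 * np.pi * 0.1 * np.arange(T) + 0.4)     # primary output (toy)

errors = []
for k in range(Ntap, T):
    h = x[k - Ntap + 1:k + 1][::-1]                # tap-delay regressor h(k)
    Hk = np.hstack([Ds * h, Cs])                   # H_k = [D_s h^*(k)  C_s]
    Fk = np.block([[np.eye(Ntap), np.zeros((Ntap, Ns))],
                   [np.outer(Bs, h), As]])
    u = h @ W                                      # step 2(a): control, Eq. (2.72)
    y = Cs @ theta + Ds * u                        # actual path output, Eq. (2.73)
    e = d[k] - y                                   # error-sensor reading, Eq. (2.74)
    theta = As @ theta + Bs * u                    # actual (hidden) state evolves
    y_copy = Cs @ theta_copy + Ds * u              # internal copy, Eq. (2.71)
    m = e + y_copy                                 # derived measurement, Eq. (2.69)
    xi = np.concatenate([W, theta_hat])
    Kf = Fk @ P @ Hk / (1.0 + Hk @ P @ Hk)         # filtering gain, scalar measurement
    xi = Fk @ xi + Kf * (m - Hk @ xi)              # step 2(b): state-estimate update
    W, theta_hat = xi[:Ntap], xi[Ntap:]
    theta_copy = As @ theta_copy + Bs * u          # propagate copy, Eq. (2.70)
    P = Fk @ P @ Fk.T                              # step 2(c): Lyapunov update
    errors.append(e)
```

In this toy run the cancellation error shrinks as the weights adapt, since a four-tap FIR driving the secondary path can match the single-tone primary output.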

2.7 Error Analysis

Section 2.6 pointed out that the proposed implementation scheme can deviate from an H∞-optimal solution for two main reasons:

1. The error in the initial condition of the secondary path, which can cause y_{copy}(k) to differ from y(k).

2. The additional error in the cancellation of d(k) due to the fact that y(k) cannot be set to \hat{s}(k|k) (or \hat{s}(k)). All one can do is set the weight vector in the adaptive FIR filter to \hat{W}(k).

Here, both errors are discussed in detail.

2.7.1 Effect of Initial Condition

As earlier discussions indicate, the secondary path in Fig. 2.1 is assumed to be linear.

For a linear system the output at any given time can be decomposed into two compo-

nents: the zero-input component which is associated with the portion of the output

solely due to the initial condition of the system, and the zero-state component which

is the portion of the output solely due to the input to the system.


For a stable system, the zero-input component of the response decays to zero for large k. Therefore, any difference between y_{copy}(k) and y(k) (which, with a known input to the secondary path, can only be due to the unknown initial condition) will go to zero as k grows. In other words, the lack of exact knowledge of the initial condition of the secondary path does not affect the performance of the proposed EBAF algorithm for sufficiently large k.
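The decomposition just described is easy to see numerically: two copies of the same stable system, driven by the same input but started from different initial conditions, produce outputs that converge. The state-space model below is a hypothetical toy, not the identified duct path.

```python
import numpy as np

A = np.array([[0.6, 0.2], [0.0, 0.5]])    # stable: spectral radius < 1
B = np.array([1.0, 0.5])
C = np.array([1.0, -0.3])

def run(theta0, u):
    """Simulate y(k) = C theta(k), theta(k+1) = A theta(k) + B u(k)."""
    theta, ys = theta0.copy(), []
    for uk in u:
        ys.append(C @ theta)
        theta = A @ theta + B * uk
    return np.array(ys)

u = np.sin(0.3 * np.arange(100))
y_true = run(np.array([2.0, -1.0]), u)    # unknown actual initial condition
y_copy = run(np.zeros(2), u)              # internal copy started from zero
```

The difference y_true(k) - y_copy(k) equals C A^k times the initial-condition mismatch, which decays geometrically; by the end of the run the two outputs are numerically indistinguishable.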

2.7.2 Effect of the Practical Limitation in Setting y(k) to \hat{s}(k|k) (\hat{s}(k))

As pointed out earlier, the physical setting of the adaptive control problem in Fig. 2.1 only allows the weight vector in the adaptive FIR filter to be adjusted to \hat{W}(k). In other words, the state of the secondary path cannot be set to a desired value at each step. Instead, \theta(k) evolves based on its initial condition and the control input, u(k), that we provide. Assume that \theta(k) is the actual state of the secondary path at time k. The actual output of the secondary path is then

y(k) = D_s(k) h^*(k) \hat{W}(k) + C_s(k) \theta(k) \qquad (2.77)

which leads to the following cancellation error

d(k) - y(k) = d(k) - \left( D_s(k) h^*(k) \hat{W}(k) + C_s(k) \theta(k) \right) \qquad (2.78)

For the prediction solution of Theorem 2.4, adding the zero quantity \pm C_s(k) \hat{\theta}(k) to the right-hand side of Equation (2.78), and taking the norm of both sides,

\| d(k) - y(k) \| = \left\| d(k) - \left( D_s(k) h^*(k) \hat{W}(k) + C_s(k) \theta(k) \right) \pm C_s(k) \hat{\theta}(k) \right\| = \left\| \left( d(k) - D_s(k) h^*(k) \hat{W}(k) - C_s(k) \hat{\theta}(k) \right) + C_s(k) \left( \hat{\theta}(k) - \theta(k) \right) \right\|

Therefore,

\frac{\| d(k) - y(k) \|}{\tilde{\xi}_0^* \Pi_0^{-1} \tilde{\xi}_0 + \sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)} \le \frac{\| d(k) - \hat{s}(k) \|}{\tilde{\xi}_0^* \Pi_0^{-1} \tilde{\xi}_0 + \sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)} + \frac{\| C_s(k) ( \hat{\theta}(k) - \theta(k) ) \|}{\tilde{\xi}_0^* \Pi_0^{-1} \tilde{\xi}_0 + \sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)} \qquad (2.79)


where \tilde{\xi}_0 = (\xi_0 - \bar{\xi}_0) and \xi_k is defined in Eq. (2.5). Note that the first term on the right-hand side of Eq. (2.79) is the prediction-error energy gain (see Eq. (2.12)). Therefore, the energy gain of the cancellation error with the prediction-based EBAF exceeds the error energy gain of the H∞-optimal prediction solution by the second term on the right-hand side of Eq. (2.79). It can be shown that when the primary inputs h(k) are persistently exciting (see Eq. (2.48)), the dynamics of the state-estimation error, \hat{\theta}(k) - \theta(k), are internally stable, which implies that the second term on the right-hand side of Eq. (2.79) is bounded for all M, and in the limit when M \to \infty.*

When D_s(k) = 0 for all k, an implementation of the filtering solution that utilizes the most recent measurement, m(k), is feasible. In this case, the filtering solution in Eqs. (2.16)-(2.17) can be written as follows:

\hat{\xi}_{k|k} = \hat{\xi}_k + P_k H_k^* (I_p + H_k P_k H_k^*)^{-1} \left( m(k) - H_k \hat{\xi}_k \right) \qquad (2.80)

\hat{\xi}_{k+1} = F_k \hat{\xi}_{k|k} \qquad (2.81)

\hat{s}(k|k) = L_k \hat{\xi}_{k|k} \qquad (2.82)

where the weight-vector update in the adaptive FIR filter follows Eq. (2.80). With a derivation identical to the one for the prediction solution, it can be shown that the performance bound in this case is

\frac{\| d(k) - y(k) \|}{\tilde{\xi}_0^* \Pi_0^{-1} \tilde{\xi}_0 + \sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)} \le \frac{\| d(k) - \hat{s}(k|k) \|}{\tilde{\xi}_0^* \Pi_0^{-1} \tilde{\xi}_0 + \sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)} + \frac{\| C_s(k) ( \hat{\theta}(k|k) - \theta(k) ) \|}{\tilde{\xi}_0^* \Pi_0^{-1} \tilde{\xi}_0 + \sum_{k=0}^{M} \mathcal{V}_m^*(k) \mathcal{V}_m(k)} \qquad (2.83)

An argument similar to the prediction case shows that the second term on the right-hand side has a finite gain as well.

* Reference [24] shows that if the exogenous disturbance is assumed to be a zero-mean white-noise process with unit intensity, and independent of the initial condition of the system \xi_0, then the terminal state-estimation error variance satisfies E (\xi_k - \hat{\xi}_k)(\xi_k - \hat{\xi}_k)^* \le P_k.


2.8 Relationship to the Normalized-FxLMS/FxLMS Algorithms

In this section, it will be shown that as k \to \infty, the gain vector in the prediction-based EBAF algorithm converges to the gain vector in the classical Filtered-X LMS (FxLMS) algorithm. Thus, FxLMS is an approximation to the steady-state EBAF. The error terms in the two algorithms are shown to be different (compare Eqs. (2.89) and (2.2)); therefore, the prediction-based EBAF is expected to demonstrate superior transient performance compared to the FxLMS algorithm. Simulation results in the next section agree with this expectation. The fact that the gain vectors asymptotically coincide agrees with the fact that the derivation of the FxLMS algorithm relies on the assumption that the adaptive filter and the secondary path are interchangeable, which can only be true in steady state. Similar results are shown for the connection between the filtering-based EBAF and the Normalized-FxLMS adaptive algorithms.

For the discussion in this section, the secondary path is assumed, for simplicity, to be LTI, i.e. [\, A_s, B_s, C_s, D_s \,]. Note that for an LTI system, \Psi_A^k in Eq. (2.50) reduces to A_s^k. The Riccati matrix P_k in Eq. (2.51) can then be rewritten as

P_k = \begin{bmatrix} I & 0 \\ \sum_{j=0}^{k-1} A_s^j B_s h^*(k-1-j) & A_s^k \end{bmatrix} P_0 \begin{bmatrix} I & 0 \\ \sum_{j=0}^{k-1} A_s^j B_s h^*(k-1-j) & A_s^k \end{bmatrix}^* \qquad (2.84)

Eq. (2.84) will be used in establishing the connections between the filtering/prediction solutions of Section 2.4 and the conventional Normalized-FxLMS/FxLMS algorithms.

2.8.1 Prediction Solution and its Connection to the FxLMS Algorithm

To study the asymptotic behavior of the state-estimate update, note that for a stable secondary path A_s^k \to 0 as k \to \infty. Therefore, using Eq. (2.84),

P_k \to \begin{bmatrix} I & 0 \\ \sum_{j=0}^{k-1} A_s^j B_s h^*(k-1-j) & 0 \end{bmatrix} P_0 \begin{bmatrix} I & 0 \\ \sum_{j=0}^{k-1} A_s^j B_s h^*(k-1-j) & 0 \end{bmatrix}^* \quad \text{as } k \to \infty \qquad (2.85)


which for P_0 = \begin{bmatrix} P_{11}(0) & P_{12}(0) \\ P_{21}(0) & P_{22}(0) \end{bmatrix} results in

P_k \to \begin{bmatrix} I \\ \sum_{j=0}^{k-1} A_s^j B_s h^*(k-1-j) \end{bmatrix} P_{11}(0) \begin{bmatrix} I \\ \sum_{j=0}^{k-1} A_s^j B_s h^*(k-1-j) \end{bmatrix}^* \quad \text{as } k \to \infty \qquad (2.86)

Selecting P_{11}(0) = \mu I as in Eq. (2.57), and noting that K_{p,k} = F_k P_k H_k^* (Theorem 2.4), it is easy to see that as k \to \infty

K_{p,k} \to \mu \begin{bmatrix} I \\ \sum_{j=0}^{k} A_s^j B_s h^*(k-j) \end{bmatrix} \left( D_s h^*(k) + \sum_{j=0}^{k-1} C_s A_s^j B_s h^*(k-1-j) \right)^* \to \mu \begin{bmatrix} I \\ \sum_{j=0}^{k} A_s^j B_s h^*(k-j) \end{bmatrix} h'(k) \qquad (2.87)

and therefore the state-estimate update in Theorem 2.4 becomes

\begin{bmatrix} \hat{W}(k+1) \\ \hat{\theta}(k+1) \end{bmatrix} = \begin{bmatrix} I & 0 \\ B_s h^*(k) & A_s \end{bmatrix} \begin{bmatrix} \hat{W}(k) \\ \hat{\theta}(k) \end{bmatrix} + \mu \begin{bmatrix} h'(k) \\ \sum_{j=0}^{k} A_s^j B_s h^*(k-j) h'(k) \end{bmatrix} \left( m(k) - D_s h^*(k) \hat{W}(k) - C_s \hat{\theta}(k) \right) \qquad (2.88)

Thus, the following update for the weight vector is derived:

\hat{W}(k+1) = \hat{W}(k) + \mu h'(k) \left( m(k) - D_s h^*(k) \hat{W}(k) - C_s \hat{\theta}(k) \right) \qquad (2.89)

Note that m(k) = e(k) + y_{copy}(k) (see Eq. (2.69)), and hence the difference between the limiting update rule of Eq. (2.89) (i.e. the prediction EBAF algorithm) and the classical FxLMS algorithm of Eq. (2.2) is the error term used by these algorithms. More specifically, e(k) in the FxLMS algorithm is replaced with the following modified error (using Eq. (2.71)):

e(k) + y_{copy}(k) - D_s h^*(k) \hat{W}(k) - C_s \hat{\theta}(k) = e(k) + C_s \theta_{copy}(k) - C_s \hat{\theta}(k). \qquad (2.90)

Note that if y(k) is directly measurable, then the modified error will be

e(k) + y(k) - D_s h^*(k) \hat{W}(k) - C_s \hat{\theta}(k)


The condition for optimality of \gamma = 1 in the prediction case (see Eq. (2.59)) can also be simplified for a stable LTI secondary path as k \to \infty. Rewriting the optimality condition for the prediction solution, Eq. (2.59), as

\mu < \frac{1 - \alpha C_s A_s^k A_s^{*k} C_s^*}{h'^*(k) h'(k)} \qquad (2.91)

for a stable secondary path, A_s^k \to 0 as k \to \infty, and hence

\mu < \frac{1}{h'^*(k) h'(k)} \quad \text{as } k \to \infty \qquad (2.92)

is the limiting condition for the optimality of \gamma = 1 in the prediction case. This is essentially a filtered version of the well-known LMS bound [26].
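The filtered reference h'(k) of Eq. (2.55) and the step-size bound (2.92) are straightforward to compute for an LTI path. The state-space matrices below are a hypothetical toy secondary path (not the identified duct model); the filtered regressor is obtained by passing the scalar reference through the path model and then forming the tap-delay vector, which is equivalent to the LTI form of (2.55).

```python
import numpy as np

As = np.array([[0.5, 0.1], [0.0, 0.4]])   # toy stable LTI secondary path
Bs = np.array([1.0, 0.5])
Cs = np.array([1.0, 0.0])
Ds = 0.2

def filtered_reference(x, Ntap):
    """Tap-delay regressors h'(k) of the reference filtered through the path model."""
    theta, xf = np.zeros(2), []
    for xk in x:
        xf.append(Ds * xk + Cs @ theta)    # filtered reference sample
        theta = As @ theta + Bs * xk
    xf = np.array(xf)
    return [xf[k - Ntap + 1:k + 1][::-1] for k in range(Ntap - 1, len(xf))]

x = np.sin(2 * np.pi * 0.1 * np.arange(300))
hp = filtered_reference(x, Ntap=4)
mu = 0.5 / max(h @ h for h in hp)          # safely inside the bound of Eq. (2.92)
# The limiting weight update, Eq. (2.89), would then be
#   W = W + mu * h_prime * (m - Ds * h @ W - Cs @ theta_hat)
```

This is the same reference-filtering step FxLMS performs; the EBAF limit differs only in using the modified error of Eq. (2.90).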

2.8.2 Filtering Solution and its Connection to the Normalized-FxLMS Algorithm

In the filtering case the gain vector is K_{f,k} = F_k P_k H_k^* (I + H_k P_k H_k^*)^{-1}. In Section 2.8.1 the limiting value of the quantity F_k P_k H_k^* was computed in Eq. (2.87). In a similar way it can be shown that, with P_{11}(0) = \mu I, as k \to \infty,

(I + H_k P_k H_k^*) \to \left( 1 + \mu h'^*(k) h'(k) \right) \qquad (2.93)

and hence the gain for the state-estimate update in the filtering case becomes

K_{f,k} \to \frac{\mu}{1 + \mu h'^*(k) h'(k)} \begin{bmatrix} h'(k) \\ \sum_{j=0}^{k} A_s^j B_s h^*(k-j) h'(k) \end{bmatrix} \quad \text{as } k \to \infty \qquad (2.94)

Thus the update rule for the weight vector in the filtering EBAF algorithm becomes

\hat{W}(k+1) = \hat{W}(k) + \frac{\mu h'(k)}{1 + \mu h'^*(k) h'(k)} \left( m(k) - D_s h^*(k) \hat{W}(k) - C_s \hat{\theta}(k) \right) \qquad (2.95)

which is similar to the Normalized-FxLMS algorithm (Eq. (2.3)) with the error signal replaced by the modified error signal described in Eq. (2.90).


2.9 Experimental Data & Simulation Results

This section examines the performance of the proposed EBAF algorithm for the active noise cancellation (ANC) problem in a one-dimensional acoustic duct. Figure 2.5 shows the schematic diagram of the one-dimensional air duct that is used in the experiments. The control objective is to attenuate (cancel, in the ideal case) the disturbance introduced into the duct by Speaker #1 (primary noise source) at the position of Microphone #2 (error sensor), using the control signal generated by Speaker #2 (secondary source). Microphone #1 can be used to provide the reference signal for the adaptation algorithm. Clearly, Microphone #1 measurements are affected by both the primary and secondary sources, and hence if these measurements are used as the reference signal, the problem commonly known as feedback contamination has to be addressed.

A dSPACE DS1102 DSP controller board (which includes TI's C31 DSP processor with a 60 MHz clock rate and 128k of 32-bit RAM) and its Matlab 5 interface are used for real-time implementation of the algorithm. A state-space model (of order 10) is identified for this one-dimensional acoustic system. Note that of the four identified

transfer functions, only the transfer function from Speaker #2 to Microphone #2 (i.e.

the secondary path) is required by the estimation-based adaptive algorithm. Figs.

2.6 and 2.7 show the identified transfer function for the one-dimensional duct. This

section will first provide experimental data that validate a corresponding simulation

result. More sophisticated experiments and simulations are then presented to study

various aspects of the EBAF algorithm.

Figure 2.8 shows the experimental data in a typical noise cancellation scenario,

along with corresponding plots from a simulation that is designed to mimic that

experiment. Here, the reading of Microphone #2 (i.e. the cancellation error) is

shown when an adaptive FIR filter of length 4 is used for noise cancellation. The

primary source is a sinusoidal tone at 150 Hz, which is also available to the adaptation

algorithm as the reference signal. A band-limited white noise (noise power = 0.008)

is used as the measurement noise for the simulation in Figure 2.8. The sampling

frequency is 1000 Hz for both experiment and simulation. P_{11,0} = 0.05 I_{4\times4}, P_{12,0} = 0, and P_{22,0} = 0.005 I_{10\times10} are used to initialize the Riccati matrix in Eq. (2.14). The


experiment starts with the adaptive controller off, and about 3 seconds later the controller is turned on. The transient response of the adaptive FIR filter lasts for approximately 0.05 seconds, and there is a 60-fold reduction in the magnitude of the error. Therefore, with full access to the single-tone primary disturbance, the EBAF algorithm provides fast and effective noise cancellation. The results from a corresponding

Matlab simulation (with the same filter length, and similar open loop error at 150

Hz) are also shown in Fig. 2.8. The transient behavior and the steady state response

in the simulation agree with the experimental data, thus assuring the validity of the

set up for the simulations presented in this chapter.

Figures 2.9, 2.10, and 2.11 show the results of more noise cancellation experiments

in the one dimensional acoustic duct. In all these three experiments, the reference

signal available to the adaptation scheme is formed such that a considerable level of

uncorrelated additive white noise corrupts the primary disturbance. This is done to

examine the robustness of the EBAF algorithm in the case where clean access to the

primary disturbance is not possible. In practice, however, efforts are made to produce

as clean a reference signal as possible.

In Figure 2.9 the primary disturbance is a sinusoid of amplitude 0.3 volts at 150

Hz. The reference signal used by the EBAF algorithm is corrupted by band-limited white Gaussian noise, and the signal-to-noise ratio is approximately 3. For this

experiment the length of the adaptive filter is 4, and the controller is turned on

at t = 4.7 seconds. A 35-fold reduction in the magnitude of the disturbance at Microphone #2 is measured. While a 3.5-fold reduction in the error at Microphone #1 is also observed, this is achieved as a byproduct of the noise cancellation at

Microphone #2 (i.e. noise cancellation at Microphone #1 is not an objective of the

single-channel noise cancellation attempt in this experiment). Chapter 4 will address

the multi-channel noise cancellation in detail. Note that the magnitude of noise

cancellation in this case is lower than the cancellation achieved in Figure 2.8 where

the reference signal was fully known to the adaptive algorithm. This observation

confirms the well known fact that the quality of the reference signal (i.e. its degree

of correlation to the primary disturbance) profoundly affects the performance of the


adaptive algorithm [51,33]. Figure 2.10 shows the experimental results for multi-

tone noise cancellation where a noisy reference signal is available to the adaptive

algorithm. Here the number of taps in the adaptive filter is 8, and the primary

disturbance consists of two sinusoids at 150 and 180 Hz. The signal-to-noise ratio for

the available reference signal in this case is approximately 4.5. A 16-fold reduction in the amplitude of the disturbances at Microphone #2 is recorded. Some reduction in the amplitude of the noise at Microphone #1 is also recorded. Note that the sole objective of the EBAF algorithm, in this case, is to cancel the noise at the position

of Microphone #2, and therefore no attempt is made to reduce the disturbances at

Microphone #1.

Figure 2.11 shows the results of the EBAF algorithm in the case where the primary

disturbance is a band-limited white noise. As in Figures 2.9 and 2.10, only a noisy

measurement of the primary disturbance is available to the adaptive algorithm. The

signal-to-noise ratio in this case is approximately 4.5. The length of the adaptive

filter in this case is 16, and an approximately three-fold reduction in the measurements of Microphone #2 is achieved. For better performance, the number of taps for the

adaptive FIR filter should be increased. Hardware limitations, however, prevented

experiments with higher order FIR filters. The performance of the EBAF algorithm

with longer FIR filters is examined through simulations in the rest of this section.

In Figure 2.12, the effect of feedback contamination (i.e. the contamination of the

reference signal with the output of the adaptive FIR filter through some feedback

path) when the primary source is a single tone is studied in simulation. In [33] the

subject of feedback contamination is discussed in detail, where relevant references

to the conventional solutions to this problem are also listed. Here, however, the

objective is to show that the proposed EBAF algorithm maintains superior perfor-

mance (compared to FxLMS and normalized-FxLMS (NFxLMS) algorithms) when

such a problem occurs and no additional information is furnished. Fig. 2.12 contains

a typical response to feedback contamination for the EBAF, FxLMS, and NFxLMS algorithms. For the first 5 seconds, the input to Speaker #2 is grounded (i.e. u(k) = 0 for t ≤ 5 seconds). Switching the controller on results in large transient behavior in the


case of FxLMS and NFxLMS while, for the EBAF algorithm, the transient behav-

ior does not display the undesirable overshoot. Different operation scenarios (with

various filter lengths, and adaptation rates) were tested, and this observation holds

true in all cases. For the next 15 seconds, the primary source is directly available

to all adaptive algorithms and the steady-state performance (left plots) is virtually

the same. From t = 20 seconds on (right plots), the output of Microphone #1 (which is

contaminated by the output of the FIR filter) is used as the reference signal. Once

again, Fig. 2.12 shows a typical result. Note that in the case of FxLMS and NFxLMS

the adaptation rate must be kept small enough to avoid unstable behavior when the

switch to contaminated reference signal takes place. The EBAF algorithm allows for

faster convergence in the face of feedback contamination. For the results in Fig. 2.12,

the length of the adaptive FIR filter (for all three algorithms) is 24. For the EBAF

algorithm P11,0 = 0.005I24×24, P22,0 = 0.0005I10×10, and P12,0 = 0. For FxLMS and

NFxLMS algorithms, the adaptation rates are 0.005 and 0.025, respectively.

Figure 2.13 considers the effect of feedback contamination in a wide-band (10–500

Hz) noise cancellation process. For the results in Fig. 2.13, the length of the adaptive

FIR filter is 32. For the EBAF algorithm P11,0 = 0.05I32×32, P22,0 = 0.005I10×10,

and P12,0 = 0. For FxLMS and NFxLMS algorithms the adaptation rates are 0.0005

and 0.01, respectively. The FxLMS algorithm becomes unstable for faster adaptation

rates, hence forcing slow convergence (i.e. lower control bandwidth). For NFxLMS,

the normalization of the adaptation rate by the norm of the reference vector (a vector

of length 32 in this case) prevents unstable behavior. The response of the algorithm under feedback contamination is, however, still slower than that of the EBAF algorithm. Furthermore, the oscillations in the cancellation error due to the switching between modes of operation are significantly larger than those in the EBAF case.

Figure 2.14 shows the closed-loop (i.e. the transfer function from the primary dis-

turbance source at Speaker #1 to Microphone #2 during steady-state operation of

the adaptive control algorithm) performance comparison for wide-band noise cancel-

lation. The EBAF algorithm outperforms FxLMS and Normalized-FxLMS adaptive

algorithms, even though the same level of information is made available to all three

adaptation schemes. For the result presented here the length of the FIR filter (for


all three approaches) is 32, and the band-limited white noise which is used as the

primary source is available as the reference signal. Since the frequency response is

calculated based on the steady-state data, the adaptation rate of the algorithms is

not relevant. Measurement noise for all three simulations is a band-limited white

noise with power 0.008, which is found to result in a steady-state attenuation that is

consistent with the experiments.

Fig. 2.1: General block diagram for an Active Noise Cancellation (ANC) problem

Fig. 2.2: A standard implementation of the FxLMS algorithm

Fig. 2.3: Pictorial representation of the estimation interpretation of the adaptive control problem: the primary path is replaced by its approximate model

Fig. 2.4: Block diagram for the approximate model of the primary path

Fig. 2.5: Schematic diagram of the one-dimensional air duct (Speakers #1 & #2 and Microphones #1 & #2)

Fig. 2.6: Transfer function plots (magnitude and phase vs. frequency) from Speakers #1 & #2 to Microphone #1

Fig. 2.7: Transfer function plots (magnitude and phase vs. frequency) from Speakers #1 & #2 to Microphone #2

Fig. 2.8: Validation of simulation results against experimental data for the noise cancellation problem with a single-tone primary disturbance at 150 Hz. The primary disturbance is known to the adaptive algorithm. The controller is turned on at t ≈ 3 seconds.

Fig. 2.9: Experimental data for the EBAF algorithm of length 4, when a noisy measurement of the primary disturbance (a single tone at 150 Hz) is available to the adaptive algorithm (SNR = 3). The controller is turned on at t ≈ 5 seconds.

Fig. 2.10: Experimental data for the EBAF algorithm of length 8, when a noisy measurement of the primary disturbance (a multi-tone at 150 and 180 Hz) is available to the adaptive algorithm (SNR = 4.5). The controller is turned on at t ≈ 6 seconds.

Fig. 2.11: Experimental data for the EBAF algorithm of length 16, when a noisy measurement of the primary disturbance (a band-limited white noise) is available to the adaptive algorithm (SNR = 4.5). The controller is turned on at t ≈ 5 seconds.

Fig. 2.12: Simulation results for the performance comparison of the EBAF and (N)FxLMS algorithms. For 0 ≤ t ≤ 5 seconds, the controller is off. For 5 < t ≤ 20 seconds, both adaptive algorithms have full access to the primary disturbance (a single tone at 150 Hz). For t ≥ 20 seconds, the measurement of Microphone #1 is used as the reference signal (hence the feedback contamination problem). The length of the FIR filter is 24.

Fig. 2.13: Simulation results for the performance comparison of the EBAF and (N)FxLMS algorithms. For 0 ≤ t ≤ 5 seconds, the controller is off. For 5 < t ≤ 40 seconds, both adaptive algorithms have full access to the primary disturbance (a band-limited white noise). For t ≥ 40 seconds, the measurement of Microphone #1 is used as the reference signal (hence the feedback contamination problem). The length of the FIR filter is 32.

Fig. 2.14: Closed-loop transfer function based on the steady-state performance of the EBAF and (N)FxLMS algorithms in the noise cancellation problem of Figure 2.13.


2.10 Summary

The adaptive control problem has been approached from an estimation point of view.

More specifically, it has been shown that for a common formulation of the adaptive

control problem an equivalent estimation interpretation exists. Then, a standard H∞ estimation problem that corresponds to the original adaptive control problem has been constructed, and the choice of estimation criterion has been justified. The H∞-optimal filtering/prediction solutions have also been derived, and it has been proved that the optimal energy gain is unity. The filtering/prediction solutions have been simplified, and it has been explained how these solutions form the foundation for an Estimation-Based Adaptive Filtering (EBAF) algorithm. The feasibility of real-time implementation of the EBAF algorithm has also been established.

An implementation scheme for the new algorithm has been outlined, and a corre-

sponding performance bound has been derived. It is shown that the classical FxLMS

(Normalized-FxLMS) adaptive algorithms are approximations to the limiting behav-

ior of the proposed EBAF algorithm. The EBAF algorithm is shown to display

improved performance when compared to commonly used FxLMS and Normalized-

FxLMS algorithms. The simulations have been validated by conducting a noise cancellation experiment and showing that the experimental data reasonably match a corresponding simulation.

The systematic nature of the proposed EBAF algorithm can serve as the first step

towards methodical optimization of FIR filter parameters that are currently pre-determined (such as the filter length or the adaptation rate). Furthermore, the analysis of the various

aspects of the algorithm directly benefits from the advances in robust estimation the-

ory. Finally, more efficient implementation schemes can further reduce computational

complexity of the algorithm.

Chapter 3

Estimation-Based Adaptive IIR

Filter Design

This chapter extends the “estimation-based” approach (introduced in Chapter 2) to

the synthesis of adaptive Infinite Impulse Response (IIR) filters (controllers). Systematic synthesis of adaptive IIR filters has proven difficult, and existing design practices are ad hoc in nature. The proposed approach in this chapter is based on an

estimation interpretation of the adaptive IIR filter (controller) design that replaces

the original adaptive filtering (control) problem with an equivalent estimation prob-

lem. Similar to the case with FIR filters, an H∞ criterion is chosen to formulate this

equivalent estimation problem. Unlike the FIR case however, the estimation problem

in this case is nonlinear in IIR filter parameters. At the present time, the nonlinear

robust estimation problem does not have an exact closed form solution. The proposed

solution in this chapter is therefore an “approximation” that uses the best available

estimate of the IIR filter parameters to locally linearize the nonlinear robust esti-

mation problem at each time step. For the linearized problem, the estimation-based

adaptive algorithm is identical to the adaptive FIR filter design described in Chap-

ter 2. The systematic nature of this new approach is particularly appealing given the

complexity of the existing design schemes for adaptive IIR filters (controllers).

Simulations for active noise cancellation in a one dimensional acoustic duct are



used to examine the main features of the proposed estimation-based adaptive filter-

ing algorithm for adaptive IIR filters. The performance of the EBAF algorithm is

compared to that of a commonly used classical solution in the adaptive control literature known as the Filtered-U Recursive LMS (FuLMS) algorithm. The comparison reveals faster convergence, with improved steady-state behavior, in the case of the EBAF algorithm.

3.1 Background

This section defines the adaptive filtering problem of interest and describes the terminology that is used in the rest of this chapter. A description of the feedback contamination problem, as well as an introductory discussion of the classical Filtered-U recursive LMS algorithm, is also included in this section.

Figure 3.1 shows the adaptive filtering problem of interest. Similar to the FIR

case (see Figure 2.1), the objective here is to adjust the weight vector in the adaptive

IIR filter such that the output of the secondary path provides an acceptable match to

the output of the primary path. Figure 3.1 includes the following signals: (a) ref(k):

the input to the primary path, which is the same as the reference signal available to

the adaptive filter when there is no feedback path, (b) u(k): the control signal applied

to the secondary path, (c) y(k): the output of the secondary path (i.e. the signal

that should match d(k)), and (d) e(k): the residual error which is used to update the

weight vector in the adaptive IIR filter. Note that, despite assuming full knowledge of

the primary path input in Figure 3.1, the reference signal available to the adaptation

algorithm will be affected by the adaptive filter output when a feedback path exists.

In practice, the input to the primary path is not always fully known, and a signal

with “enough” correlation to the primary input replaces ref(k) in Figure 3.1 [33].

The existence of the feedback path will affect the correlation between the reference

signal available to the adaptive algorithm (x(k) in Fig. 3.1) and the primary input to

the primary path (ref(k) in Fig. 3.1) and has a profound effect on the performance of

the adaptive filter. This phenomenon, known as feedback contamination, is extensively

studied in the adaptive control literature ([33], Chapter 3). A simple solution for this

feedback problem is to use a separate feedback (as a part of the control system) to


cancel the undesirable feedback signal. An implementation of this idea, known as

feedback neutralization, is shown in Fig. 3.2. Note that in this scheme, W (z) is the

adaptive FIR filter that generates the control signal. F (z) is another adaptive FIR

filter whose output eliminates (in the ideal case) the effect of feedback contamination.

This approach, however, requires special care in the implementation to avoid cancelling the reference signal altogether (see [33] for details). A closer look at

Figure 3.2 indicates that even though feedback neutralization employs two FIR filters,

the overall adaptive controller is no longer a zero-only system (F (z) is positioned in

a feedback path). Precisely speaking, feedback neutralization is indeed an adaptive

IIR filtering algorithm.
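As a minimal sketch of the feedback-neutralization idea in Fig. 3.2, assuming F(z) is realized as a short FIR filter and using illustrative names (not the dissertation's code), the cleaned reference signal can be formed as follows:

```python
import numpy as np

def neutralized_reference(mic_ref, u_hist, f_weights):
    """Feedback neutralization (Fig. 3.2), as a minimal sketch.

    mic_ref   : raw reference-microphone sample, contaminated by the
                control loudspeaker through the feedback path
    u_hist    : [u(k), u(k-1), ...] recent control-signal samples
    f_weights : FIR weights of the neutralization filter F(z)

    Returns the cleaned reference x(k) = mic_ref - F(z) u(k).
    """
    leakage = f_weights @ u_hist[:len(f_weights)]  # estimated feedback signal
    return mic_ref - leakage
```

As the text notes, the weights of F(z) must themselves be adapted with care, or the neutralization loop can cancel the reference signal altogether.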

A more direct (and more general) approach to the design of adaptive IIR filters

in such circumstances is shown in Figure 3.3. In this approach the feedback path is

treated as a part of the overall plant for which the adaptive IIR filter is designed.

Of possible adaptive IIR algorithms, only the Filtered-U recursive LMS algorithm

(FuLMS) will be considered here [14]. This selection is justified by the fact that the

FuLMS adaptive algorithm exhibits the main features of a conventional adaptive IIR

filtering algorithm and has been used successfully in noise cancellation problems (see

Chapter 3 in [33] and the references therein). Referring to Figure 3.3, the residual

error is

e(k) = d(k) − s(k) ⊕ r(k) = d(k) − y(k) (3.1)

where s(k) is the impulse response of the secondary path, and ⊕ indicates convolution.

Note that the conventional derivation of the Filtered-U algorithm does not include

the exogenous measurement disturbance Vm(k). The output of the IIR filter r(k) is

computed as

r(k) = aT (k)x(k) + bT (k)r(k − 1) (3.2)

where a(k) = [ a0(k) a1(k) · · · aL−1(k) ]T is the weight vector for A(z) at time k,

and b(k) = [ b1(k) b2(k) · · · bM(k) ]T is likewise defined for B(z). Moreover, x(k) =

[ x(k) x(k − 1) · · · x(k − L + 1) ]T and r(k−1) = [ r(k − 1) r(k − 2) · · · r(k − M) ]T

are reference signals for A(z) and B(z), respectively.


Defining a new overall weight vector w(k) = [ aT(k) bT(k) ]T, and a generalized reference vector u(k) = [ xT(k) rT(k − 1) ]T, Eq. (3.2) can be rewritten as

r(k) = wT(k)u(k) (3.3)

which has the same format as the output of an ordinary FIR filter. Reference [33]

shows that the steepest-descent algorithm can be used to derive the following update

equation for the generalized weight vector

w(k + 1) = w(k) + µ [s(k) ⊕ u(k)] e(k) (3.4)

if the instantaneous squared error, e2(k), is used to estimate the mean-square error,

E[ e2(k) ]. Referring to Section 2.1, the error criterion here is the same as that

of the LMS algorithm. The algorithm is called the Filtered-U recursive LMS algo-

rithm since it uses an estimate of the secondary path, s(k), to filter the generalized

reference vector u(k). The derivation of this algorithm explicitly relies on a slow

convergence assumption and therefore, in general, the convergence rate µ should be

kept small. Slow adaptation also enables the derivation of an approximate instan-

taneous gradient vector that significantly reduces the computational complexity of

the algorithm (see [33], page 93). The ad hoc nature of the FuLMS, however, has

significantly complicated the analysis of the algorithm. As Reference [33] indicates,

the global convergence and stability of the algorithm have not been formally proved,

and the optimal solution in the case of large controller coefficients is found to be ill-

conditioned. Nevertheless, successful implementations of this algorithm are reported,

and hence it will be used as the conventional design algorithm to which the EBAF

algorithm will be compared.
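Before moving on, the FuLMS recursion of Eqs. (3.2)-(3.4) can be summarized in code. The sketch below is illustrative only (names such as `fulms_step` and `s_hat` are hypothetical); it assumes an FIR estimate of the secondary-path impulse response and explicit buffers of recent signals:

```python
import numpy as np

def fulms_step(w, x_buf, r_buf, sf_buf, e, s_hat, mu):
    """One FuLMS update: w(k+1) = w(k) + mu * [s(k) (*) u(k)] * e(k).

    w      : generalized weight vector [a; b] (length L + M)
    x_buf  : [x(k), x(k-1), ..., x(k-L+1)]  reference taps for A(z)
    r_buf  : [r(k-1), ..., r(k-M)]          past filter outputs for B(z)
    sf_buf : buffer of past generalized reference vectors u(k), newest first
    e      : residual error e(k)
    s_hat  : FIR estimate of the secondary-path impulse response s(k)
    mu     : adaptation rate (kept small; FuLMS assumes slow adaptation)
    """
    u = np.concatenate([x_buf, r_buf])       # generalized reference u(k)
    r = w @ u                                # filter output, Eq. (3.3)
    # filter the generalized reference through the secondary-path estimate
    sf_buf = np.vstack([u, sf_buf[:-1]])     # shift in the newest u(k)
    u_filt = s_hat @ sf_buf                  # s(k) convolved with u(k)
    w = w + mu * u_filt * e                  # gradient step, Eq. (3.4)
    return w, r, sf_buf
```

Note how the recursion reuses past outputs r(k − i), which is what makes the filter IIR and the gradient only approximate under the slow-adaptation assumption.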

3.2 Problem Formulation

The underlying concept for estimation-based adaptive IIR filter design is essentially

the same as that of the estimation-based adaptive FIR filter design. For clarity,

however, the main steps in the estimation interpretation of the adaptive filtering

(control) problem are repeated here:


1. Introduce an approximate model for the primary path based on the architecture

of the adaptive path from x(k) to y(k) (as shown in Fig. 3.1). The goal is to

find the optimal weight vector in the approximate model for which the modeling

error is the smallest. As in Chapter 2, state-space models are used for both the

adaptive filter (IIR in this case) and the secondary path.

2. In the approximate model for the primary path, use the available information to

formulate an estimation problem that recursively estimates this optimal weight

vector.

3. Adjust the weight vector of the adaptive IIR filter to the best available estimate

of the optimal weight vector.

For simplicity, the case without feedback contamination is considered first. The case

with feedback contamination is a straightforward extension, and will be discussed in

Appendix B. A model for the secondary path is assumed available (e.g. via identification). The primary path, however, is completely unknown. Figure 3.4 provides

a pictorial presentation for the above mentioned estimation interpretation, which is

identical to Figure 2.3, except for the replacement of the FIR filter with an IIR one.

The main signals involved in Figure 3.4 are similar to those in Figure 2.2 and are

described here for easier reference. First, note that

e(k) = d(k) − y(k) + Vm(k) (3.5)

where e(k) is the available error measurement, Vm(k) is the exogenous disturbance

that captures measurement noise, modeling error and initial condition uncertainty,

y(k) is the output of the secondary path, and d(k) is the output of the primary path.

Equation (3.5) can be rewritten as

e(k) + y(k) = d(k) + Vm(k) (3.6)

where the left hand side is a noisy measurement of the output of the primary path

d(k). Since y(k) is not directly measurable (neither is d(k) of course), the trick is to

have the adaptive algorithm generate an internal copy of y(k), and then define the


derived measured quantity as

m(k) ≜ e(k) + ycopy(k) = d(k) + Vm(k) (3.7)

which will be used in formulating the estimation problem. To generate the internal

copy of y(k), the adaptive algorithm uses the available model for the secondary path

and the known control input to the secondary path, u(k). See Section 2.7 for a

discussion of the impact of the initial condition error.
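A minimal sketch of this construction, assuming a state-space copy [As, Bs, Cs, Ds] of the secondary path and illustrative names (not the dissertation's code), is:

```python
import numpy as np

def derived_measurement(e, u, theta, As, Bs, Cs, Ds):
    """Form m(k) = e(k) + y_copy(k) per Eq. (3.7), minimal sketch.

    theta holds the state of the internal copy of the secondary path
    [As, Bs, Cs, Ds]; u is the known control input u(k), e is the
    measured residual error e(k).
    """
    y_copy = Cs @ theta + Ds * u                 # internal copy of y(k)
    m = e + float(y_copy)                        # derived measurement m(k)
    theta_next = As @ theta + (Bs * u).ravel()   # propagate the copy
    return m, theta_next
```

The copy is driven only by known quantities (the secondary-path model and u(k)), so m(k) is computable at every step even though y(k) and d(k) are not directly measurable.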

3.2.1 Estimation Problem

Figure 3.5 shows a block diagram representation of the approximate model to the

primary path. Here, a state space model, [ As(k), Bs(k), Cs(k), Ds(k) ], for the

secondary path is assumed. A second order IIR filter (with 5 parameters) models the

adaptive IIR filter. Define W(k) = [ a0(k) b1(k) · · · bN(k) a1(k) · · · aN(k) ]T (N = 2 in Fig. 3.5) to be the unknown optimal vector of the IIR filter parameters at time k. Then, ξ = ( WT(k) θT(k) )T is the state vector for the overall system. Note that θ(k) captures the dynamics of the secondary path in the approximate model. The state space representation of the system is

[ W(k + 1) ]   [ I(2N+1)×(2N+1)     0     ] [ W(k) ]
[ θ(k + 1) ] = [ Bs(k)h∗(k)       As(k)   ] [ θ(k) ]

ξk+1 ≜ Fk ξk (3.8)

where h(k) = [ x(k) r(k − 1) · · · r(k − N) x(k − 1) · · · x(k − N) ]T captures the effect of the reference input x(·). Note that

r(k) = x(k)a0(k) + r(k − 1)b1(k) + · · · + r(k − N)bN(k) + x(k − 1)a1(k) + · · · + x(k − N)aN(k), (3.9)

(where r(−1) = · · · = r(−N) = 0) and therefore, the system dynamics are nonlinear

in the IIR filter parameters. For this system, the derived measured output is

m(k) = [ Ds(k)h∗(k) Cs(k) ] [ WT(k) θT(k) ]T + Vm(k) ≜ Hk ξk + Vm(k) (3.10)


where m(k) should be constructed at each step according to Equation (3.7). Note

that the measurement equation is also nonlinear in the parameters of the IIR filter. Now,

define a generic linear combination of the states as the desired quantity to be estimated

s(k) = [ L1,k L2,k ] [ WT(k) θT(k) ]T ≜ Lk ξk (3.11)

Here, θ(·) ∈ RNs×1, W (·) ∈ R(2N+1)×1, m(·) ∈ R1×1 and s(·) ∈ R1×1. All matrices are

then of appropriate dimensions. Different choices for Lk lead to different estimates of

W (k), and hence different adaptation criteria for the parameters in the IIR filter. As

in Chapter 2, an H∞ criterion will be used to directly estimate d(k) (hence Lk = Hk).

The objective is to find an H∞ suboptimal filter ŝ(k|k) = F(m(0), · · · , m(k)) (or an H∞ suboptimal predictor ŝ(k) = F(m(0), · · · , m(k − 1))), such that the worst case energy gain from the measurement disturbance and the initial condition uncertainty to the error in a causal estimate of s(k) = d(k) remains bounded. In other words, ŝ(k|k) is sought such that for a given γf > 0 and Π0 > 0,

sup_{Vm, ξ0} [ Σ_{k=0}^{M} (s(k) − ŝ(k|k))∗(s(k) − ŝ(k|k)) ] / [ ξ0∗ Π0⁻¹ ξ0 + Σ_{k=0}^{M} Vm∗(k)Vm(k) ] ≤ γf² (3.12)

Similarly, ŝ(k) is desired such that for a given γp > 0 and Π0 > 0,

sup_{Vm, ξ0} [ Σ_{k=0}^{M} (s(k) − ŝ(k))∗(s(k) − ŝ(k)) ] / [ ξ0∗ Π0⁻¹ ξ0 + Σ_{k=0}^{M} Vm∗(k)Vm(k) ] ≤ γp² (3.13)

Since Eqs. (3.8), (3.10), and (3.11) are nonlinear in IIR filter parameters, both of these

estimation problems are nonlinear. An exact closed form solution to the nonlinear

H∞ estimation problem is not yet available. One solution to this problem is to use

the following linearizing approximation: at each time step, the IIR filter parameters in Equation (3.9) are replaced with their best available estimates. This

reduces the estimation problems in Eqs. (3.12) and (3.13) into linear H∞ estimation

problems for which the solutions in Section 2.4 can be directly applied. A similar

linearizing approximation is commonly adopted in the optimal estimation problems

for nonlinear dynamic processes and is referred to as continuous relinearization in

extended Kalman filtering [9,20]. As Reference [9] indicates, in the context of opti-

mal estimation for the nonlinear processes, “considerable success has been achieved

with this linearization approach”. Extensive simulations in this thesis reveal equally

successful results in adaptive IIR filter design (see Section 3.6).
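Under this relinearization, h(k) is evaluated with the current parameter estimates, so Fk and Hk of Eqs. (3.8) and (3.10) become known matrices at each step. A minimal sketch of their assembly (names and shapes are illustrative assumptions, stated for real-valued signals):

```python
import numpy as np

def linearized_matrices(h, As, Bs, Cs, Ds):
    """Assemble Fk and Hk of Eqs. (3.8) and (3.10) once h(k) is frozen
    at the current parameter estimates (continuous relinearization).

    h  : regressor vector of length 2N+1 (reference samples and past
         filter outputs, built with the best available IIR estimates)
    As, Bs, Cs, Ds : state-space model of the secondary path
    """
    n_w = h.size                  # 2N+1 filter parameters
    n_s = As.shape[0]             # order Ns of the secondary path
    Fk = np.block([
        [np.eye(n_w),      np.zeros((n_w, n_s))],
        [np.outer(Bs, h),  As                  ],
    ])
    Hk = np.concatenate([Ds * h, Cs])   # row [ Ds h*  Cs ]
    return Fk, Hk
```

With h(k) frozen, these matrices feed directly into the linear solutions of Section 2.4, as stated in Theorems 3.1 and 3.2.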

Remark: As Figure 3.5 suggests, feedback is an integral part of an IIR filter structure.

The nonlinearity of the robust estimation problem in Eqs. (3.12) and (3.13) is due

to the existence of this structural feedback loop. Obviously, a physical feedback path

outside the IIR filter (see Figure 3.1) will have the same effect. The treatment of

the nonlinearity in the case of an IIR filter, i.e. replacing the IIR filter parameters

in Equation (3.9) with their best available estimates, carries over to the case where

such a feedback path exists. Appendix B discusses the case with reference signal

contamination in more detail.

3.3 Approximate Solution

With the linearizing approximation of the previous section, at any given time k,

h(k) will be fully known. This eliminates the nonlinearity in Equations (3.8)-(3.11),

and allows the straightforward solutions in Section 2.4 to be used. Note that for the

linearized problem all the optimality arguments in Chapter 2 are valid and the simpli-

fications to filtering (prediction) solutions also apply. The linearizing approximation,

however, prevents any claim on the optimality of the solution. Only the simplified

solutions to the linearized estimation problems of Eqs. (3.12) and (3.13) (i.e. the

central filtering solution corresponding to γf = 1 for the filtering solution, and the

central prediction solution corresponding to γp = 1 for the prediction solution) are

stated here.


3.3.1 γ-Suboptimal Finite Horizon Filtering Solution to the

Linearized Problem

Theorem 3.1: Invoking Theorem 2.3 in Chapter 2, for the state space representation of the block diagram of Figure 3.5, described by Equations (3.8)-(3.11), and for Lk = Hk, the central H∞-optimal filtering solution to the linearized problem is given by

ξ̂k+1 = Fk ξ̂k + Kf,k ( m(k) − Hk ξ̂k ),   ξ̂0 = 0 (3.14)

ŝ(k|k) = Lk ξ̂k + (Lk Pk Hk∗) RHe,k⁻¹ ( m(k) − Hk ξ̂k ) (3.15)

with

Kf,k = (Fk Pk Hk∗) RHe,k⁻¹   and   RHe,k = Ip + Hk Pk Hk∗ (3.16)

where Pk satisfies the Lyapunov recursion

Pk+1 = Fk Pk Fk∗,   P0 = Π0. (3.17)

3.3.2 γ-Suboptimal Finite Horizon Prediction Solution to the

Linearized Problem

Theorem 3.2: Invoking Theorem 2.4 in Chapter 2, for the state space representation of the block diagram of Figure 3.5, described by Equations (3.8)-(3.11), and for Lk = Hk, if (I − PkL∗kLk) > 0, then the central H∞-optimal prediction solution to the linearized problem is given by

ξk+1 = Fkξk + Kp,k (m(k) − Hkξk), ξ0 = 0 (3.18)

s(k) = Lkξk (3.19)

with Kp,k = FkPkH∗k where Pk satisfies the Lyapunov recursion (3.17).
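The recursions in Theorems 3.1 and 3.2 can be sketched numerically. The following is a minimal NumPy illustration (not the dissertation's code), assuming real-valued data and Lk = Hk; the matrices F and H, and the derived measurement m, are hypothetical inputs supplied by the caller:

```python
import numpy as np

def central_step(xi, P, F, H, m, predict=False):
    """One step of the central H-infinity recursions of Theorems 3.1/3.2
    (real-valued data, L_k = H_k). Returns the estimate of s(k), the
    updated state estimate, and the updated Lyapunov matrix."""
    innov = m - H @ xi                        # innovation m(k) - H_k xi_k
    if predict:
        # Theorem 3.2: existence requires (I - P_k L*_k L_k) > 0
        cond = np.eye(P.shape[0]) - P @ H.T @ H
        assert np.all(np.real(np.linalg.eigvals(cond)) > 0)
        s = H @ xi                            # s(k) = L_k xi_k, Eq. (3.19)
        K = F @ P @ H.T                       # K_p,k
    else:
        R = np.eye(H.shape[0]) + H @ P @ H.T  # R_He,k, Eq. (3.16)
        K = F @ P @ H.T @ np.linalg.inv(R)    # K_f,k, Eq. (3.16)
        s = H @ xi + H @ P @ H.T @ np.linalg.solve(R, innov)  # Eq. (3.15)
    xi_next = F @ xi + K @ innov              # Eqs. (3.14)/(3.18)
    P_next = F @ P @ F.T                      # Lyapunov recursion, Eq. (3.17)
    return s, xi_next, P_next
```

Note that the filtering branch always exists for γf = 1, while the prediction branch first checks the existence condition of Theorem 3.2 before computing the gain.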

3.3.3 Important Remarks

1. As indicated in Theorems 3.1 and 3.2, the solution to the linearized robust es-

timation problem (Eq. (3.12) for the filtering problem and Eq. (3.13) for the

prediction problem) requires the solution to only one Riccati equation. Further-

more, the Riccati solution propagates forward in time and does not involve any

information regarding the future of the system or the reference signal. Thus,

the resulting adaptive algorithm is real-time implementable.

2. Note that the Riccati update for the simplified solution to the linearized robust

estimation problems reduces to a Lyapunov recursion which always generates a

non-negative Pk as long as P0 > 0.

3. Based on the results in Theorem 3.1, the linearized filtering problem has a

guaranteed solution for γf = 1. This will prevent any breakdown in the solution

and allow real-time implementation of the algorithm.

4. Theorem 3.2 proves that the prediction solution to the linearized problem is

guaranteed to exist for γp = 1, as long as the condition (I − PkL∗kLk) > 0

is satisfied. Furthermore, the discussion in Section 2.8 shows that this condi-

tion translates into an upper limit for the adaptation rate in the steady-state

operation of the adaptive system.

5. From remarks 2 and 3, there is no need to verify the existence of the solutions at each time

step, so the computational complexity of the estimation-based approach is O(n3)

(primarily for calculating FkPkF∗k in Eq. (3.17)), where

n = (2N + 1) + Ns (3.20)

where (2N + 1) is the total number of IIR filter parameters for an IIR filter of

order N , and Ns is the order of the secondary path. As in the FIR case, the

special structure of Fk reduces the computational complexity to O(N3s + NsN),

i.e. cubic in the order of the secondary path, and linear in the order of the IIR filter.
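The reduced cost stated in remark 5 can be made concrete. Since the (1,1) block of Fk is the identity, the Lyapunov recursion leaves P11 constant; assuming (for this sketch) the customary block-diagonal P0 with weight block P11 = π·I, only P12 and P22 need propagating, and the rank-one factor Bs(k)h∗(k) keeps every product cheap. A minimal NumPy illustration for real-valued data:

```python
import numpy as np

def block_lyapunov_step(pi11, P12, P22, Bs, h, As):
    """Structured form of P_{k+1} = F_k P_k F*_k for
    F_k = [[I, 0], [Bs h^T, As]] (real data, P11 = pi11 * I constant).
    Only the off-diagonal and secondary-path blocks are updated:
      P12(k+1) = pi11 (Bs h^T)^T + P12 As^T
      P22(k+1) = (Bs h^T) P11 (Bs h^T)^T + (Bs h^T) P12 As^T
                 + As P12^T (Bs h^T)^T + As P22 As^T
    avoiding the dense (n x n) triple product."""
    T = np.outer(Bs, h @ P12) @ As.T                 # (Bs h^T) P12 As^T
    P12_next = pi11 * np.outer(h, Bs) + P12 @ As.T
    P22_next = (pi11 * (h @ h) * np.outer(Bs, Bs)    # rank-one (1,1) term
                + T + T.T + As @ P22 @ As.T)
    return P12_next, P22_next
```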

3.4 Implementation Scheme for the EBAF Algorithm in the IIR Case

The implementation scheme parallels that of the adaptive FIR filter discussed in

Chapter 2. For easier reference, the main signals involved in the description of the

adaptive algorithm are briefly introduced here. For a more detailed description, see

Chapter 2. In what follows (a) W (k) is the estimate of the adaptive weight vector,

(b) θ(k) is the estimate of the state of the secondary path, (c) u(k) ≜ h∗(k)W(k)

is the actual control input to the secondary path, (d) y(k) and d(k) are the actual

outputs of the secondary and primary paths, respectively, (e) e(k) is the actual error

measurement, and (f) θcopy(k) and ycopy(k) are the adaptive algorithm’s internal copy

of the state and output of the secondary path which are used in constructing m(k)

according to Eq. (3.7). Now, the implementation algorithm can be outlined as follows:

1. Start with W (0) = 0 and θ(0) = 0 as the initial guess for the state vector in the

approximate model of the primary path. Also assume that θcopy(0) = 0, and

r(−1) = · · · = r(−N) = 0 (hence h(0) = [ x(0) 0 · · · 0 ]T ). The initial value

for the Riccati matrix is P0, which is chosen to be block diagonal.

2. If 0 ≤ k ≤ M (finite horizon):

(a) Linearize the nonlinear dynamics in Eqs. (3.8), (3.10), and (3.11) by sub-

stituting for a0(k), b1(k), · · ·, bN (k) with their best available estimates

(i.e. a0(k) = W (1, k), b1(k) = W (2, k), · · ·, bN (k) = W (N + 1, k)) in

Eq. (3.9).

(b) Form the control signal

u(k) = h∗(k)W (k) (3.21)

to be applied to the secondary path. Note that applying u(k) to the

secondary path produces

y(k) = Cs(k)θ(k) + Ds(k)u(k) (3.22)

at the output of the secondary path. This in turn leads to the following

error signal measured at time k:

e(k) = d(k) − y(k) + Vm(k) (3.23)

which is available to the adaptive algorithm to perform the state update

at time k.

(c) Propagate the state estimate and the internal copy of the state of the secondary path as follows:

[ W(k+1); θ(k+1) ] = ( Fk + Kf,k [ 0 −Cs(k) ] ) [ W(k); θ(k) ] + Kf,k Cs(k) θcopy(k) + Kf,k e(k)

θcopy(k+1) = Bs(k)h∗(k)W(k) + As(k)θcopy(k) (3.24)

where e(k) is the error sensor measurement at time k given by Eq. (3.23), and Kf,k = FkPkH∗k (I + HkPkH∗k)−1 (see Theorem 3.1). Note that for the prediction-based EBAF algorithm Kf,k must be replaced with Kp,k = FkPkH∗k.

(d) Update the Riccati matrix Pk using the Lyapunov recursion

[ P11       P12,k+1 ]   [ I            0     ] [ P11      P12,k ] [ I            0     ]∗
[ P∗12,k+1  P22,k+1 ] = [ Bs(k)h∗(k)  As(k) ] [ P∗12,k   P22,k ] [ Bs(k)h∗(k)  As(k) ]   (3.25)

Pk+1 will be used in (3.24) to update the state estimate.

3. Increment k and return to step 2.
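The steps above can be sketched end-to-end. The following NumPy skeleton is an illustrative reading of the algorithm, not the dissertation's code: `measure_error` is a hypothetical callback standing in for the physical error sensor of Eq. (3.23), and the regressor layout h(k) = [x(k) · · · x(k−N), r(k−1) · · · r(k−N)] is one plausible choice consistent with Fig. 3.5:

```python
import numpy as np

def ebaf_iir_run(x, measure_error, As, Bs, Cs, Ds, N, steps, p0=0.01):
    """Skeleton of the EBAF adaptive IIR loop of Section 3.4 (filtering
    form, single error sensor, real-valued data)."""
    Ns = As.shape[0]
    nW = 2 * N + 1                               # a_0..a_N, b_1..b_N
    W, th, th_copy = np.zeros(nW), np.zeros(Ns), np.zeros(Ns)
    P = p0 * np.eye(nW + Ns)                     # block-diagonal P0
    xh, rh = np.zeros(N + 1), np.zeros(N)        # past x's and r's
    for k in range(steps):
        xh = np.r_[x[k], xh[:-1]]
        h = np.r_[xh, rh]                        # step 2(a): linearized h(k)
        u = h @ W                                # step 2(b), Eq. (3.21)
        e = measure_error(k, u)                  # e(k), Eq. (3.23)
        m = e + (Cs @ th_copy + Ds * u)          # m(k) = e(k) + y_copy(k)
        F = np.block([[np.eye(nW), np.zeros((nW, Ns))],
                      [np.outer(Bs, h), As]])
        H = np.r_[Ds * h, Cs][None, :]           # H_k = [Ds h*  Cs]
        xi = np.r_[W, th]
        R = 1.0 + (H @ P @ H.T).item()           # R_He,k, Theorem 3.1
        K = (F @ P @ H.T).ravel() / R            # filtering gain K_f,k
        xi = F @ xi + K * (m - (H @ xi).item())  # step 2(c), Eq. (3.24)
        W, th = xi[:nW], xi[nW:]
        th_copy = As @ th_copy + Bs * u          # propagate internal copy
        rh = np.r_[u, rh[:-1]]                   # past filter outputs
        P = F @ P @ F.T                          # step 2(d), Eq. (3.25)
    return W, P
```

For the prediction-based variant, the division by R in the gain is simply dropped (Kp,k = FkPkH∗k), subject to the existence condition of Theorem 3.2.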

3.5 Error Analysis

As explained in Section 2.7, for a linear stable secondary path the contribution of

the initial condition at the output of the secondary path decays to zero for large k.

This means that ycopy(k) converges to y(k) for sufficiently large k. In other words,

exact knowledge of the initial condition of the secondary path does not affect the

performance of the proposed EBAF algorithm for sufficiently large k. Unlike the

discussions in Section 2.7, however, a performance bound for the EBAF algorithm in

the IIR case is not yet available.

3.6 Simulation Results

This section examines the performance of the proposed EBAF algorithm for the

active noise cancellation (ANC) problem in the one-dimensional acoustic duct (see

Figure 2.5). The control objective is the same as that in Section 2.9, i.e. to attenuate

(cancel in the ideal case) the disturbance introduced into the duct by Speaker #1

(primary noise source) at the position of Microphone #2 (error sensor) by the control

signal generated by Speaker #2 (secondary source). Microphone #1 can be used to

provide the reference signal for the adaptation algorithm. Clearly, Microphone #1

measurements are affected by both primary and secondary sources, and if used as the

reference signal, feedback contamination exists.

A state space model (of order 10) is identified for this one-dimensional acoustic

system. Of four identified transfer functions (see Figs. 2.6 and 2.7), only the transfer

function from Speaker #2 to Microphone #2 (i.e. the secondary path) is used by the

estimation-based adaptive algorithm. The IIR filter in the EBAF approach is of order 5 (i.e. a total of 11 parameters in the adaptive IIR filter). Each FIR filter in the FuLMS

implementation is of order 6 (a total of 12 parameters in the adaptive filter). Note

that all the measurements in the simulations are subject to band-limited white noise

(with power 0.008), and the sampling frequency in all cases is 1 kHz.

Figure 3.6 compares the performance of an estimation-based adaptive IIR filter

with that of a FuLMS algorithm in a single-tone noise cancellation problem. The

frequency of the tone is 150 Hz. Referring to Fig. 3.6, the first second shows the open

loop measurement of the error sensor, Microphone #2. At t = 1 (sec) the adaptive

control is switched on. For the next 5 seconds both adaptive algorithms have access

to the primary disturbance (i.e. x(k) is fully known to the adaptation schemes). At

t = 6 the reference signal available to the adaptive algorithms is switched to the

measurements of Microphone #1. Note that in this case only a filtered version of the

primary disturbance which is also contaminated with the output of the adaptive filter

is available to the adaptive algorithms. The adaptation rate for the FuLMS algorithm is kept small (0.00003 in this case) to avoid unstable behavior when the reference signal becomes contaminated. The EBAF algorithm converges faster than the FuLMS algorithm (reaching its steady-state behavior in approximately 1.5 seconds), while avoiding the unstable behavior due to feedback contamination. A 50-fold reduction in the error amplitude (without feedback contamination) is recorded. With feedback contamination, however, only a 10-fold reduction is achieved.

Figure 3.7 shows a similar scenario for the multi-tone case. Here the primary disturbance consists of two sinusoidal signals at 150 and 140 Hz. A trend similar to the single-tone case is observed: the EBAF algorithm is robust to feedback contamination and allows faster convergence rates. The performance, however, is not as good as in the single-tone case. With an uncontaminated reference signal, only a reduction of order 10 is achieved. With feedback contamination, this performance is reduced to a factor of 6. The results shown in Figures 3.6 and 3.7 capture the typical behavior of

the adaptive IIR filters under the EBAF and FuLMS algorithms.

Fig. 3.1: General block diagram for the adaptive filtering problem of interest (with Feedback Contamination)

Fig. 3.2: Basic Block Diagram for the Feedback Neutralization Scheme

Fig. 3.3: Basic Block Diagram for the Classical Adaptive IIR Filter Design

Fig. 3.4: Estimation Interpretation of the IIR Adaptive Filter Design

Fig. 3.5: Approximate Model for the Unknown Primary Path (a second-order IIR filter in series with the secondary-path model)

Fig. 3.6: Performance Comparison for EBAF and FuLMS Adaptive IIR Filters for Single-Tone Noise Cancellation (error signal e(k) vs. time for each algorithm). The controller is switched on at t = 1 second. For 1 ≤ t ≤ 6 seconds the adaptive algorithm has full access to the primary disturbance. For t ≥ 6 the output of Microphone #1 is used as the reference signal (hence the feedback contamination problem).

Fig. 3.7: Performance Comparison for EBAF and FuLMS Adaptive IIR Filters for Multi-Tone Noise Cancellation (error signal e(k) vs. time for each algorithm). The controller is switched on at t = 1 second. For 1 ≤ t ≤ 6 seconds the adaptive algorithm has full access to the primary disturbance. For t ≥ 6 the output of Microphone #1 is used as the reference signal (hence the feedback contamination problem).

3.7 Summary

A new framework for the synthesis and analysis of IIR adaptive filters is introduced. First, an estimation interpretation of adaptive filtering (control) is used

to formulate an equivalent nonlinear robust estimation problem. Then, an approxi-

mate solution for the equivalent estimation problem is provided. This approximate

solution is based on a linearizing approximation, from which the adaptation law for

the adaptive filter weight vector is extracted.

The proposed approach clearly indicates an inherent connection between the adap-

tive IIR filter design and a nonlinear robust estimation problem. This connection

brings the analysis and synthesis tools in robust estimation into the field of adaptive

IIR filtering (control). Simulation results demonstrate the feasibility of the proposed

EBAF algorithm.

Chapter 4

Multi-Channel Estimation-Based

Adaptive Filtering

This chapter extends the estimation-based adaptive filtering algorithm, discussed

in Chapter 2, to the multi-channel case where a number of adaptively controlled

secondary sources use multiple reference signals to cancel the effect of a number of

primary sources (i.e. disturbance sources) as seen by a number of error sensors. The

multi-channel estimation-based adaptive filtering algorithm is shown to maintain all

the main features of the single-channel solution, underlining the systematic nature of

the approach.

In addition to the noise cancellation problem in a one dimensional acoustic duct, a

structural vibration control problem is chosen to examine the performance of the pro-

posed multi-channel adaptive algorithm. An identified model for a Vibration Isolation

Platform (VIP) is used for vibration control simulations in this chapter. The perfor-

mance of the new multi-channel adaptive algorithm is compared to the performance

of a multi-channel implementation of the FxLMS algorithm.

4.1 Background

For a wide variety of applications such as equalization in wireless communication

when more than one receiver/transmitter is involved, or active control of sound

and vibration in cases where the acoustic environment or the dynamic system of

interest is complex and a number of primary sources excite the system, multi-channel

adaptive filtering (control) schemes are required [1]. A brief description of the multi-

channel implementation of the FxLMS algorithm in Section 4.1.1 will support the

observation in Reference [33] that “compared to single-channel algorithms, multi-

channel adaptive schemes are significantly more complex”. As Reference [33] points

out, successful application of the classical multi-channel adaptive algorithms has been

limited to cases involving repetitive noise with a few harmonics [43,49,13]. This

observation agrees with the results in Ref. [3] where a significant noise reduction is only

achieved for periodic noises∗. In contrast to the classical approaches to multi-channel

adaptive filter (control) design, this chapter will show that, for the new estimation-

based approach, the multi-channel design is virtually identical to the single-channel

case. Furthermore, the analysis of the multi-channel adaptive system in the new

framework is a straightforward extension of the analysis used in the single channel

case.

4.1.1 Multi-Channel FxLMS Algorithm

Figure 4.1 shows the general block diagram of a multi-channel ANC system in which

reference signals can be affected by the output of the adaptive filters. Simulations in

this chapter, however, are based on the assumption that the effects of feedback are

negligible and that the reference signal is available to the adaptation scheme through

a noisy measurement. The measurement noise, however, is independent of the reference signal itself. Note that this section is only intended as a brief review of the multi-channel FxLMS adaptive algorithm. For a detailed treatment of the subject see [33],

∗Another interesting conclusion in Ref. [3] is the following: the performance of a multi-channel implementation of the FxLMS algorithm is similar to the performance of an H∞ controller which was directly designed for noise cancellation.

Chapter 5, and the references therein.

Referring to Figure 4.1, the ANC adaptive filter has J reference input signals

denoted by xT (k) = [x1(k) x2(k) · · · xJ(k)]. The controller generates K secondary

signals that are elements of the control vector uT (k) = [u1(k) u2(k) · · · uK(k)].

Therefore, a K×J matrix of adaptive FIR filters can be used to describe the adaptive

control block in Figure 4.1,

        [ WT11(k)  · · ·  WT1J(k) ]
Ω(k) =  [   ...    WTkj(k)   ...  ]      (4.1)
        [ WTK1(k)  · · ·  WTKJ(k) ]

where

WTkj(k) ≜ [ wkj,0(k) · · · wkj,(L−1)(k) ] (4.2)

is the adaptive filter relating the j-th reference signal to the k-th control command.

Note that L is the length of the adaptive FIR filters. Defining the reference signal

vector as

XT(k) = [ xT1(k) xT2(k) · · · xTJ(k) ] (4.3)

where xTj(k) = [ xj(k) xj(k − 1) · · · xj(k − L + 1) ] is the vector of the last L samples of the j-th

reference signal, and the adaptive weight vector as

W(k) ≜ [ Ω(1,:)(k) · · · Ω(K,:)(k) ] (4.4)

with Ω(k,:)(k) referring to the k-th row of matrix Ω(k), the control vector u(k) can

be defined as

u(k) = X T (k)W(k) (4.5)

where

         [ X(k)   0     · · ·   0    ]
X (k) =  [  0    X(k)   · · ·   0    ]      (4.6)
         [  ...          . . .        ]
         [  0     · · ·   0    X(k)  ]

is a (JKL)×K matrix. The error signal vector can now be defined as

e(k) = d(k) − S(k) ⊕ u(k) (4.7)

where S(k) is the impulse response of the K-input/M-output secondary path, and

⊕ indicates the convolution operation. Defining the instantaneous squared error,

eT(k)e(k), as an approximation for the sum of the mean-square errors, the gradient-based update equation for the weight vector W(k) is [33],

W(k + 1) = W(k) + µX ′(k)e(k) (4.8)

where X ′(k) is the matrix of the filtered reference signals obtained by the Kronecker

convolution operation on the reference signal vector

          [ s11(k) ⊕ X(k)   s21(k) ⊕ X(k)   · · ·   sM1(k) ⊕ X(k) ]
X ′(k) =  [ s12(k) ⊕ X(k)   s22(k) ⊕ X(k)   · · ·   sM2(k) ⊕ X(k) ]      (4.9)
          [      ...              ...        . . .        ...      ]
          [ s1K(k) ⊕ X(k)   s2K(k) ⊕ X(k)   · · ·   sMK(k) ⊕ X(k) ]

Note that the adaptation algorithm has access to an estimate of the secondary path

only (e.g. through system identification), and therefore estimates sij of the impulse response of the single-channel transfer function from input j to output i are used in the calculation of the filtered reference signals.
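The control computation (4.5) and the gradient update (4.8) are compact in code. The sketch below is illustrative only: it assumes a hypothetical (K, J, L) array layout for the weights of Eq. (4.1) and takes the filtered-reference histories of Eq. (4.9) as given:

```python
import numpy as np

def fxlms_step(W, x_hist, xf_hist, e, mu):
    """One multi-channel FxLMS iteration (Eqs. (4.5) and (4.8)).
    W                 : weights, shape (K, J, L)
    x_hist[j]         : last L samples of reference x_j
    xf_hist[m, k_, j] : last L samples of x_j filtered through the
                        secondary-path estimate s_mk (entries of X'(k))
    e                 : length-M error vector e(k)."""
    u = np.einsum('kjl,jl->k', W, x_hist)                    # u(k), Eq. (4.5)
    W_next = W + mu * np.einsum('m,mkjl->kjl', e, xf_hist)   # Eq. (4.8)
    return u, W_next
```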

4.2 Estimation-Based Adaptive Algorithm for the Multi-Channel Case

The underlying concept for the estimation-based adaptive filtering algorithm is the same

for both single-channel and multi-channel systems. Figure 4.1 is the multi-channel

block diagram representation of Fig. 2.1 (in Chapter 2) where the main components of

active noise cancellation problem were described. Therefore, an estimation interpre-

tation identical to the one in the single-channel case can be used to translate a given

adaptive filtering (control) problem into an equivalent robust estimation problem.

See Chapter 2 for a detailed treatment of the steps involved in this translation.

As in Chapter 2, state space models for the adaptive filter and the secondary path

are used to construct an approximate model for the unknown primary path. This

approximate model replicates the structure of the adaptive path from the primary

source, ref(k), to the output of the secondary path, y(k) (see Fig. 4.2). Note that for a

given disturbance input, there is an “optimal” (but unknown) setting of adaptive filter

parameters for which the difference between the primary path and its approximate

model is minimized. Finding this optimal setting is the objective of the estimation

based approach which can be summarized as follows:

1. Devise an estimation strategy that recursively improves the estimate of the

optimal values of the adaptive filter parameters in the approximate model of

the primary path,

2. Set the actual value of the weight vector in the adaptive filter to the best

available estimate of the parameters obtained from the estimation strategy.

Note that in Figure 4.2,

e(k) = d(k) − y(k) + Vm(k) (4.10)

where (a) e(k) ∈ RM×1 is the measured error vector, (b) Vm(k) ∈ RM×1 is the exoge-

nous disturbance that captures measurement noise, modeling error and uncertainty

in the initial condition of the secondary path, and (c) y(k) = S(k) ⊕ u(k) (also in

RM×1) is the output of the secondary path. S(k) is the impulse response of the sec-

ondary path and ⊕ denotes convolution. Here u(k) obeys Eq. (4.5) with the same

definitions for X (k) and W(k) as in (4.6) and (4.4), respectively. Equation (4.10)

can be rewritten as

e(k) + y(k) = d(k) + Vm(k) (4.11)

where the left hand side is a noisy measurement of the output of the primary path

d(k). Since y(k) is not directly measurable (neither is d(k)), the adaptive algo-

rithm should generate an internal copy of y(k) (referred to as ycopy(k)). The derived

measured quantity can then be defined as

m(k) ≜ e(k) + y(k) = d(k) + Vm(k) (4.12)

which will be used in formulating the estimation problem. The only assumption

involved in constructing m(k) is the assumed knowledge of the initial condition of

the secondary path. In Chapter 2 it is shown that for a linear, stable secondary path (a

realistic assumption in practice), any error in y(k) due to an initial condition different

from what is assumed by the algorithm remains bounded (hence it can be treated as

a component of the measurement disturbance). Furthermore, for sufficiently large k,

this error decays to zero (i.e. ycopy(k) → y(k)). In Figure 4.3, the state space model

for the secondary path is

θ(k + 1) = As(k)θ(k) + Bs(k)u(k) (4.13)

y(k) = Cs(k)θ(k) + Ds(k)u(k) (4.14)

where θ(k) is the state variable capturing the dynamics of the secondary path. The

weight vector of the adaptive filter

W(k) = [ WT11(k) · · · WT1J(k)   WT21(k) · · · WT2J(k)   · · ·   WTK1(k) · · · WTKJ(k) ]T (4.15)

is also treated as the state vector that captures the dynamics of the FIR filter.

Note that Wkj(k) is itself a vector of length L (the length of each FIR filter). ξTk = [WT(k) θT(k)] is then the state vector for the overall system. The state space representation of the system is then

[ W(k+1) ]   [ I(JKL)×(JKL)      0     ] [ W(k) ]
[ θ(k+1) ] = [ Bs(k)X ∗(k)     As(k)  ] [ θ(k) ] ,   i.e. ξk+1 ≜ Fkξk (4.16)

where X (k), defined by Equation (4.6), captures the effect of the reference input

vector X(k). For this system, the derived measured output is

m(k) = Ds(k)X ∗(k)W(k) + Cs(k)θ(k) + Vm(k) ≜ Hkξk + Vm(k) (4.17)

where m(k) is defined in Equation (4.12). Noting the objective of the adaptive

filtering problem in Fig. 4.1, s(k) = d(k) is the quantity to be estimated. Therefore,

s(k) = Ds(k)X ∗(k)W(k) + Cs(k)θ(k) ≜ Lkξk (4.18)

Here m(k) ∈ RM×1, s(k) ∈ RM×1, θ(k) ∈ RNs×1 where Ns is the order of the

secondary path. All matrices are then of appropriate dimensions. Note that Equa-

tions (4.16) through (4.18) are identical to Equations (2.5) through (2.7). The only

difference is in the dimensions of the variables involved, and the fact that h(k) is replaced by X ∗(k).
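For illustration, the matrices Fk and Hk of Eqs. (4.16)-(4.17) can be assembled directly from the secondary-path model and the stacked reference vector. The sketch below assumes real-valued data (so the conjugate transpose X∗ reduces to a plain transpose) and builds the block-diagonal X(k) of Eq. (4.6) with a Kronecker product:

```python
import numpy as np

def assemble_fk_hk(As, Bs, Cs, Ds, Xvec, K):
    """Assemble F_k and H_k of Eqs. (4.16)-(4.17). Xvec is the stacked
    reference vector X(k) of Eq. (4.3) (length J*L); As, Bs, Cs, Ds are
    the secondary-path state-space matrices at time k."""
    X_blk = np.kron(np.eye(K), Xvec[:, None])   # (J*K*L) x K, Eq. (4.6)
    n_w = X_blk.shape[0]                        # number of adaptive weights
    Ns = As.shape[0]                            # secondary-path order
    F = np.block([[np.eye(n_w), np.zeros((n_w, Ns))],
                  [Bs @ X_blk.T, As]])          # Eq. (4.16)
    H = np.hstack([Ds @ X_blk.T, Cs])           # m(k) = H_k xi_k + V_m(k)
    return F, H
```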

Choosing the H∞ criterion to generate a filtering (prediction) estimate of s(k), the

equivalent estimation problem will also be identical to that in Chapter 2 (i.e. Eq. (2.11)

for the filtering estimate, and Eq. (2.12) for the prediction estimate). Defining

s(k|k) ≜ F(m(0), · · · , m(k)) as the filtering estimate of s(k), the objective in the

filtering solution is to find s(k|k) such that the worst case energy gain from the mea-

surement disturbance and the initial condition uncertainty to the error in the filtering

estimate is properly bounded, i.e.

sup_{Vm, ξ0} [ Σ(k=0..M) [s(k) − s(k|k)]∗[s(k) − s(k|k)] ] / [ (ξ0 − ξ̄0)∗ Π0−1 (ξ0 − ξ̄0) + Σ(k=0..M) V∗m(k)Vm(k) ] ≤ γ2f (4.19)

In a similar way, defining ŝ(k) ≜ F(m(0), · · · , m(k − 1)) as the prediction estimate of s(k), the objective in the prediction solution is to find ŝ(k) such that

sup_{Vm, ξ0} [ Σ(k=0..M) [s(k) − ŝ(k)]∗[s(k) − ŝ(k)] ] / [ (ξ0 − ξ̄0)∗ Π0−1 (ξ0 − ξ̄0) + Σ(k=0..M) V∗m(k)Vm(k) ] ≤ γ2p (4.20)

Solutions to these two problems are discussed in detail in Chapter 2 and therefore

the next section only briefly presents the simplified solutions.

4.2.1 H∞-Optimal Solution

Theorem 4.1: Invoking Theorem 2.3 in Chapter 2, for the state space representation

of the block diagram in Figure 4.3 (described by Equations (4.16)-(4.18)), and for

Lk = Hk, the central H∞-optimal solution to the filtering problem in Eq. (4.19) is

obtained for γf = 1, and is described by

ξk+1 = Fkξk + Kf,k (m(k) − Hkξk), ξ0 = 0 (4.21)

s(k|k) = Lkξk + (LkPkH∗k) R−1He,k (m(k) − Hkξk) (4.22)

with

Kf,k = (FkPkH∗k) R−1He,k and RHe,k = Ip + HkPkH∗k (4.23)

where Pk satisfies the Lyapunov recursion

Pk+1 = FkPkF∗k , P0 = Π0. (4.24)

Theorem 4.2: Invoking Theorem 2.4 in Chapter 2, for the state space representation

of the block diagram in Figure 4.3 (described by Equations (4.16)-(4.18)), and for

Lk = Hk, if (I −PkL∗kLk) > 0, then the central H∞-optimal prediction solution to the

linearized problem is obtained for γp = 1, and is given by

ξk+1 = Fkξk + Kp,k (m(k) − Hkξk), ξ0 = 0 (4.25)

s(k) = Lkξk (4.26)

with Kp,k = FkPkH∗k where Pk satisfies the Lyapunov recursion (4.24).

4.3 Simulation Results

The implementation scheme for the EBAF algorithm in the multi-channel case is identical

to the implementation scheme in the single-channel case (see Chapter 2 for the FIR

case, and Chapter 3 for the IIR case), and therefore it is not repeated here.

4.3.1 Active Vibration Isolation

The Vibration Isolation Platform (VIP) (see Figures 4.4 and 4.5) is an experimental setup designed to capture the main features of a real-world payload isolation

and pointing problem. Payload isolation refers to the vibration isolation of payload

structures with instruments or equipment requiring a very quiet mounting [1]. VIP

is designed such that the base supporting the payload (middle mass in Figure 4.5)

can emulate spacecraft dynamics. Broadband as well as narrowband disturbances

can be introduced to the middle mass (emulating real world vibration sources such

as solar array drive assemblies, reaction wheels, control moment gyros, cryocoolers,

and other disturbance sources that generate on-orbit jitter) via a set of three voice

coil actuators. The positioning of a second set of voice coil actuators allows for

the implementation of an adaptive/active isolation system. More specifically, the

Vibration Isolation Platform consists of the following main components:

1. Voice-Coil Actuators: 6 voice-coil actuators are mounted on the middle mass

casing. Three of these actuators (positioned 120 degrees apart on a circle of

radius 4.400 inches) are used to shake the middle mass and act as the source of

disturbance to the platform. They can also be used to introduce some desired

dynamics for the middle mass that supports the payload. As shown in Figure

4.5, these actuators act against the ground. The other three actuators (placed

120 degrees apart on a circle of radius 4.000 inches) act against the middle mass

and are used to isolate the payload (top mass) from the motions of the middle

mass. Note that the two circles on which control and disturbance actuators are

mounted are concentric, and one set of actuators is rotated in the horizontal

plane by 60 degrees with respect to the other.

2. Sensors: VIP is equipped with two sets of sensors,

(a) Position Sensors: Each actuator is equipped with a colocated position

(gap) sensor which is physically inside the casing of the actuator. Three

additional position sensors are used as truth sensors (Figure 4.5) to mea-

sure the displacement of the payload in the inertial frame.

(b) Load Cells: Three load cells are used to measure the interaction forces

between the middle mass and the payload. These sensors are colocated

with the point of contact of the control actuators and the payload. It is

important to note that any interaction force between the payload and the

rest of the VIP system is transferred via these load cells.

A state-space model for the VIP platform is identified using the FORSE system

identification software developed at MIT [30]. The identification process, the identified model, and the model validation procedure are described in detail

in Appendix C. Figure 4.6 shows the singular value plots for the MIMO transfer

functions from control ([u])/disturbance ([d]) actuators to the load cells ([lc]) and

scoring sensors ([sc]).

In all simulations that follow, the length of the adaptive FIR filters, i.e. L in

Equation 4.2, is 4 (unless stated otherwise). This length is found to be sufficient for

an acceptable performance of the adaptive algorithms. The sampling frequency for

all the simulations in this section is 1000 Hz. Furthermore, all measurements are

subject to band-limited white noise with power 0.008.

Figures 4.7 and 4.8 show the readings of the scoring sensors (i.e. the deviations of

the payload from the equilibrium position in the inertial frame) for the multi-channel

implementation of the EBAF and the FxLMS adaptive algorithms, respectively. Dis-

turbance actuators apply sinusoidal excitation of amplitude 0.1 Volts at 4 Hz to the

middle mass. The phase for the excitation of the disturbance actuator #1 is assumed

to be zero, while disturbance actuators #2 and #3 are 22.5 and 45 degrees out of

phase with the first actuator. Only a noisy measurement of the primary disturbance

is assumed to be available to the adaptive algorithms. The signal to noise ratio for

the available reference signal is 3.0. For simulations in Figures 4.7 and 4.8 the control

signal starts at t = 30 seconds. Figure 4.7 shows that the amplitude of the transient

vibrations of the payload under the EBAF adaptive algorithm (for 30 ≤ t ≤ 60)

does not exceed that of the open loop vibrations. In contrast, the amplitude of the

transient vibrations under the FxLMS, Figure 4.8, exceeds twice the amplitude of

the open loop vibrations in the system. For a smaller amplitude during transient

vibrations, the adaptation rate for the FxLMS algorithm should be reduced. This

will result in an even slower convergence of the adaptive algorithm. Note that for the results in Figure 4.8, the adaptation rate is 0.0001. Even with this adaptation rate, the FxLMS algorithm requires approximately 20 more seconds (compared to the EBAF case) to converge to its steady-state value. In the steady state, the EBAF algorithm

achieves a 20 times reduction in the amplitude of the payload vibrations. For the

FxLMS algorithm in this case, the measured reduction is approximately 16 times.

Figures 4.9 and 4.10 show the readings of the scoring sensors when the primary

disturbances are multi-tone sinusoids. The primary disturbance consists of sinusoidal signals of amplitudes 0.1 and 0.2 volts at 4 and 15 Hz, respectively. As in the single-tone

case, both components of the excitation for the disturbance actuator #1 are assumed

to have zero phase. Each sinusoidal component of the excitation for the disturbance

actuator #2 (#3) is assumed to have a phase lag of 22.5 (45) degrees with respect to

the corresponding component of the excitation in actuator #1. Figures 4.9 and 4.10

demonstrate a trend similar to that discussed for the single tone scenario. For the

FxLMS algorithm, a trade-off between the amplitude of the transient vibrations of

the payload and the speed of the convergence exists. The adaptation rate here is the

same as the single tone case. Slower adaptation rates can reduce the amplitude of the

transient vibrations at the expense of the speed of the convergence. For the EBAF

algorithm, however, better transient behavior and faster convergence are observed. In

the steady state, the EBAF algorithm provides a 15 times reduction in the amplitude

of the vibrations of the payload. For the FxLMS algorithm a 9 times reduction is

recorded.

In Fig. 4.11 the effect of feedback contamination on the performance of the EBAF

algorithm is examined. As in the previous simulations, control actuators are switched

on at t = 30 seconds. The reference signal available to the adaptation algorithm is

the output of the load cells, which measure the forces transferred to the payload. Obviously, the load cell measurements contain the effects of both the primary disturbances (single-tone sinusoids at 4 Hz in this example) and the control actuators (hence the classical feedback contamination problem). Here, no special measure to counter feedback contamination is taken. Figure 4.11 shows that, on average, a 4-times reduction in the magnitude of the vibrations transferred to the payload is achieved (i.e. degraded performance compared to the case without feedback contamination). The EBAF

algorithm, however, maintains its stability in the face of feedback contamination
without any additional measures, demonstrating its robustness to contamination
of the reference signal. Note that for the FxLMS adaptive algorithm,

with an adaptation rate similar to that in Figure 4.10, feedback contamination leads

to unstable behavior. The adaptation rate must be reduced substantially (resulting in
extremely slow convergence of the FxLMS algorithm) in order to recover the stability
of the adaptation scheme.

The simulations in this section have shown that the multi-channel implementation

of the estimation-based adaptive filtering algorithm provides the same advantages

observed for the single channel case. More specifically, the multi-channel EBAF

algorithm achieves desirable transient behavior and fast convergence without com-

promising steady-state performance of the adaptive algorithm. It also demonstrates

robustness to feedback contamination. It is important to note that the above-mentioned advantages are achieved by an approach that is essentially identical to the
single-channel version of the algorithm.

4.3.2 Active Noise Cancellation

Consider the one-dimensional acoustic duct shown in Figure 2.5. Here, disturbances

enter the duct via Speaker #1. The objective of the multi-channel noise cancellation

is to use both available speakers to simultaneously cancel the effect of the incoming

disturbance at Microphones #1 and #2. The control signal is supplied to each speaker
via an adaptive FIR filter (in the case of Speaker #1, the control signal is added to the primary
disturbance). Figure 4.12 shows the output of the microphones when the primary

disturbance (applied to Speaker #1) is a multi-tone sinusoid with 150 Hz and 200

Hz frequencies. The length of each FIR filter in this simulation is 8. For the first

two seconds the controller is off (i.e. both adaptive filters have zero outputs). At

t = 2.0 seconds the controller is switched on. The initial value for the Riccati matrix

is P0 = diag(0.0005 I_{2(N+1)}, 0.00005 I_{Ns}) (where N + 1 is the length of each FIR filter,


and Ns is the order of the secondary path). It is clear that the error at Microphone

#2 is effectively canceled in 0.2 seconds. For Microphone #1, however, the cancellation

time is approximately 5 seconds. A 30 times reduction in disturbance amplitude is

measured at Microphone #2 in approximately 10 seconds. For Microphone #1 this

reduction is approximately 15 times. Note that the distance between Speaker #2

and Microphone #1 (46 inches) is much greater than the distance between Speaker

#1 and Microphone #1 (6 inches). Due to this physical constraint, Speaker #2
alone is not enough for acceptable noise cancellation at both microphones. The

experimental data in Figures 2.9 and 2.10, in which the result of a single-channel
implementation of the EBAF algorithm aimed at noise cancellation at Microphone
#2 was shown, confirm this observation. Using a multi-channel approach, however,
allows for a substantial reduction in the amplitude of the measured noise at both

microphones. Nevertheless, noise cancellation at the position of Microphone #1 tends

to be slower than the noise cancellation at the position of Microphone #2.

A similar scenario with band-limited white noise as the primary disturbance is

shown in Fig. 4.13. Here the length of each FIR filter is 32. The performance of the

adaptive multi-channel noise cancellation problem in the frequency domain is shown

in Fig. 4.14. Once again the cancellation at Microphone #2 is superior to that at

Microphone #1.

[Block diagram: primary source, primary path, secondary path, feedback path, and adaptive filter; signals x(k), d(k), y(k), u(k), e(k), Vm(k), ref(k).]
Fig. 4.1: General block diagram for a multi-channel Active Noise Cancellation (ANC) problem

[Block diagram: the primary path replaced by an FIR filter and a copy of the secondary path; adaptive FIR filter, adaptation algorithm, and secondary path with modeling error; signals x(k), d(k), y(k), u(k), e(k), Vm(k).]
Fig. 4.2: Pictorial representation of the estimation interpretation of the adaptive control problem: the primary path is replaced by its approximate model

[Block diagram: a matrix of adaptive filters, a copy of the adaptive filter, and a copy of the secondary path (As(k), Bs(k), Cs(k), Ds(k), z^{-1}), mapping x(k) and d(k) to m(k) with Vm(k).]
Fig. 4.3: Approximate Model for Primary Path

Fig. 4.4: Vibration Isolation Platform (VIP)


Fig. 4.5: A detailed drawing of the main components in the Vibration Isolation Platform (VIP). Of particular importance are: (a) the platform supporting the middle mass (labeled as component #5), (b) the middle mass that houses all six actuators, of which only two (one control actuator and one disturbance actuator) are shown (labeled as component #11), and (c) the suspension springs to counter gravity (labeled as component #12). Note that the actuation point for the control actuator (located on the left of the middle mass) is colocated with the load cell (marked as LC1). The disturbance actuator (located on the right of the middle mass) actuates against the inertial frame.

[Four-panel plot: SVD for [d]→[lc], [u]→[lc], [d]→[sc], and [u]→[sc]; magnitude (Volts/Volts) vs. frequency (Hz).]
Fig. 4.6: SVD of the MIMO transfer function

[Plot: scoring sensor readouts e(1), e(2), e(3) (EBAF) over 0-120 seconds.]
Fig. 4.7: Performance of a multi-channel implementation of the EBAF algorithm when disturbance actuators are driven by out-of-phase sinusoids at 4 Hz. The reference signal available to the adaptive algorithm is contaminated with band-limited white noise (SNR = 3). The control signal is applied for t ≥ 30 seconds.

[Plot: scoring sensor readouts e(1), e(2), e(3) (FxLMS) over 0-120 seconds.]
Fig. 4.8: Performance of a multi-channel implementation of the FxLMS algorithm when the simulation scenario is identical to that in Figure 4.7.

[Plot: scoring sensor readouts e(1), e(2), e(3) (EBAF) over 0-120 seconds.]
Fig. 4.9: Performance of a multi-channel implementation of the EBAF algorithm when disturbance actuators are driven by out-of-phase multi-tone sinusoids at 4 and 15 Hz. The reference signal available to the adaptive algorithm is contaminated with band-limited white noise (SNR = 4.5). The control signal is applied for t ≥ 30 seconds.

[Plot: scoring sensor readouts e(1), e(2), e(3) (FxLMS) over 0-120 seconds.]
Fig. 4.10: Performance of a multi-channel implementation of the FxLMS algorithm when the simulation scenario is identical to that in Figure 4.9.

[Plot: scoring sensor readouts e(1), e(2), e(3) for a single tone at 4 Hz with feedback contamination, over 0-120 seconds.]
Fig. 4.11: Performance of a multi-channel implementation of the EBAF for vibration isolation when the reference signals are the load cell outputs (i.e. feedback contamination exists). The control signal is applied for t ≥ 30 seconds.

[Plot: Microphone #1 and Microphone #2 outputs (Volts) over 0-10 seconds.]
Fig. 4.12: Performance of the multi-channel noise cancellation in the acoustic duct for a multi-tone primary disturbance at 150 and 200 Hz. The control signal is applied for t ≥ 2 seconds.

[Plot: Microphone #1 and Microphone #2 outputs (Volts) over 0-10 seconds.]
Fig. 4.13: Performance of the multi-channel noise cancellation in the acoustic duct when the primary disturbance is band-limited white noise. The control signal is applied for t ≥ 2 seconds.

[Two-panel plot: open-loop vs. closed-loop magnitudes for Speaker #1 → Microphone #1 and Speaker #1 → Microphone #2.]
Fig. 4.14: Closed-loop vs. open-loop transfer functions for the steady-state performance of the EBAF algorithm for the simulation scenario shown in Figure 4.13.


4.4 Summary

The estimation-based synthesis and analysis of multi-channel adaptive (FIR) filters is

shown to be identical to the single-channel case. Simulations for a 3-input/3-output
Vibration Isolation Platform (VIP) and multi-channel noise cancellation in a one-dimensional acoustic duct are used to demonstrate the feasibility of the estimation-

based approach. The new estimation-based adaptive filtering algorithm is shown to

provide both faster convergence (with an acceptable transient behavior), and im-

proved steady state performance when compared to a multi-channel implementation

of the FxLMS algorithm.

Chapter 5

Adaptive Filtering via Linear

Matrix Inequalities

In this chapter Linear Matrix Inequalities (LMIs) are used to synthesize adaptive

filters (controllers). The ability to cast the synthesis problem as LMIs is a direct

consequence of the systematic nature of the estimation-based approach to the design

of adaptive filters proposed in Chapters 2 and 3 of this thesis. The question of internal

stability of the overall system is directly addressed as a result of the Lyapunov-based nature of the LMI formulation. LMIs also provide a convenient framework for

the synthesis of multi-objective (H2/H∞) control problems. This chapter describes

the process of augmenting the H∞ criterion that serves as the centerpiece of the

estimation-based adaptive filtering algorithm with H2 performance constraints, and

investigates the characteristics of the resulting adaptive filter. As in Chapters 2 and 3,

an Active Noise Cancellation (ANC) scenario is used to study the main features of

the proposed formulation.

5.1 Background

A detailed discussion of the estimation-based approach to the design of adaptive

filters (controllers) is presented earlier in Chapters 2 and 3. The discussion here will


be kept brief and serves more as a notational introduction. Figure 5.1 is a block

diagram representation of the active noise cancellation problem, originally shown in

Figure 2.1. Clearly, the objective here is to generate a control signal u(k) such that the

output of the secondary path, y(k), is close to the output of the primary path, d(k).

In Chapter 2, it is shown that an estimation interpretation of the adaptive filtering

(control) problem can be used to formulate an equivalent estimation problem. It is

this equivalent estimation problem that admits an LMI formulation.

To mathematically describe the equivalent estimation problem, state space mod-

els for the adaptive filter and the secondary path are needed. As in Chapter 2,

[As(k), Bs(k), Cs(k), Ds(k)] is the state space model for the secondary path. The

state variable for the secondary path is θ(k). For the adaptive filter the weight vec-

tor, W (k) = [ w0(k) w1(k) · · · wN(k) ]T , will be treated as the state variable. The

state space description for the approximate model of the primary path can then be

described as:
\[
\begin{bmatrix} W(k+1) \\ \theta(k+1) \end{bmatrix}
=
\begin{bmatrix} I_{(N+1)\times(N+1)} & 0 \\ B_s(k)\,h_k^* & A_s(k) \end{bmatrix}
\begin{bmatrix} W(k) \\ \theta(k) \end{bmatrix},
\qquad \xi_{k+1} = F_k\,\xi_k \tag{5.1}
\]

where

\[
h(k) = \begin{bmatrix} x(k) & x(k-1) & \cdots & x(k-N) \end{bmatrix}^T \tag{5.2}
\]

captures the effect of the reference input x(k). Note that in Figure 5.1

e(k) = d(k) − y(k) + Vm(k) (5.3)

where e(k) is the available error measurement, Vm(k) is the exogenous disturbance

that captures measurement noise, modeling error and uncertainty in the initial condi-

tion of the secondary path, and y(k) is the output of the secondary path. To formulate

the estimation problem, a derived measurement for the output of the primary path is

needed. Rewriting Eq. (5.3) as

\[
m(k) \triangleq e(k) + y(k) = d(k) + V_m(k) \tag{5.4}
\]

the right hand side, i.e. d(k) + Vm(k), can be regarded as the “noisy measurement”

of the primary path output. Note that on the left hand side of Eq. (5.4) only e(k) is


directly measurable. In general, an internal copy of the output of the secondary path,

referred to as ycopy(k), should be generated by the adaptive algorithm. Section 2.7

discusses the ramifications of the introduction of this internal copy in detail, and

shows that for the stable linear secondary path the difference between ycopy(k) and

y(k) will decay to zero for sufficiently large k. Now, the derived measured output for

the equivalent estimation problem can be defined as

\[
m(k) = \begin{bmatrix} D_s(k)\,h_k^* & C_s(k) \end{bmatrix}
\begin{bmatrix} W(k) \\ \theta(k) \end{bmatrix} + V_m(k)
= H_k\,\xi_k + V_m(k) \tag{5.5}
\]

where m(k) is defined in Equation (5.4). Noting the objective of the noise cancellation

problem, s(k) = d(k) is chosen as the quantity to be estimated:

\[
s(k) = \begin{bmatrix} D_s(k)\,h_k^* & C_s(k) \end{bmatrix}
\begin{bmatrix} W(k) \\ \theta(k) \end{bmatrix}
= L_k\,\xi_k \tag{5.6}
\]

Note that m(·) ∈ R^{1×1}, s(·) ∈ R^{1×1}, θ(·) ∈ R^{Ns×1}, and W(·) ∈ R^{(N+1)×1}. All matrices

are then of appropriate dimensions. Clearly, ξ_k^T = (W^T(k)  θ^T(k)) is the state vector
for the overall approximate model of the primary path.
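To make the construction concrete, the matrices F_k, H_k, and L_k of the equivalent estimation model in Eqs. (5.1), (5.5), and (5.6) can be assembled directly from the secondary-path realization and the regressor h(k) of Eq. (5.2). The following is a minimal numpy sketch; the secondary-path matrices and regressor samples are hypothetical placeholder values, not parameters from this thesis, and a real-valued regressor is assumed (so h* is an ordinary transpose).

```python
import numpy as np

def assemble_model(As, Bs, Cs, Ds, h):
    """Build F_k, H_k, L_k of Eqs. (5.1), (5.5), (5.6).

    As: (Ns, Ns), Bs: (Ns, 1), Cs: (1, Ns), Ds: (1, 1) -- secondary path.
    h:  (N+1,) real regressor [x(k), x(k-1), ..., x(k-N)] of Eq. (5.2).
    """
    Np1 = h.size                             # N + 1 filter taps
    Ns = As.shape[0]                         # secondary-path order
    F = np.block([[np.eye(Np1), np.zeros((Np1, Ns))],
                  [Bs @ h[None, :], As]])    # weight block is held constant
    H = np.hstack([Ds @ h[None, :], Cs])     # derived-measurement map, Eq. (5.5)
    L = H.copy()                             # s(k) uses the same map, Eq. (5.6)
    return F, H, L

# Hypothetical first-order secondary path and a 3-tap filter (N = 2).
As = np.array([[0.5]]); Bs = np.array([[1.0]])
Cs = np.array([[0.8]]); Ds = np.array([[0.1]])
h = np.array([1.0, 0.5, 0.25])
F, H, L = assemble_model(As, Bs, Cs, Ds, h)
```

The identity block in F_k encodes the assumption that the optimal weight vector is constant; adaptation enters only through the estimator gain.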

The following H∞ criterion can be used to generate ŝ(k) ≜ F(m(0), · · · , m(k − 1))
(the prediction estimate of the desired quantity s(k)) such that the worst-case energy
gain from the measurement disturbance and the initial condition uncertainty to the
error in the causal estimate of s(k) is properly limited, i.e.

\[
\sup_{V_m,\,\xi_0}
\frac{\displaystyle\sum_{k=0}^{M} \left[s(k)-\hat{s}(k)\right]^* \left[s(k)-\hat{s}(k)\right]}
{\displaystyle \xi_0^*\,\Pi_0^{-1}\,\xi_0 + \sum_{k=0}^{M} V_m^*(k)\,V_m(k)}
\;\le\; \gamma^2 \tag{5.7}
\]

The Riccati-based solution to this problem is discussed in Chapter 2 in detail. It is

sometimes desirable, however, to have the adaptive filter meet some H2 performance

criteria in addition to the H∞ constraint in Equation (5.7). Linear matrix inequalities


offer a convenient framework for formulating the mixed H2/H∞ synthesis problem.

Furthermore, numerically sound algorithms exist that can solve these linear matrix

inequalities very efficiently. Therefore, the next section pursues a first-principles derivation

of the LMI formulation for the design of adaptive filters.

5.2 LMI Formulation

Assume the following specific structure for the estimator

\[
\hat{\xi}_{k+1} = F_k\,\hat{\xi}_k + \Gamma_k\left(m(k) - H_k\,\hat{\xi}_k\right) \tag{5.8}
\]
\[
\hat{s}(k) = L_k\,\hat{\xi}_k \tag{5.9}
\]

in which Γk is the design parameter to be chosen such that the H∞ criterion is met.

Now, the augmented system can be defined as follows
\[
\begin{bmatrix} \xi_{k+1} \\ \hat{\xi}_{k+1} \end{bmatrix}
=
\begin{bmatrix} F_k & 0 \\ \Gamma_k H_k & F_k - \Gamma_k H_k \end{bmatrix}
\begin{bmatrix} \xi_k \\ \hat{\xi}_k \end{bmatrix}
+
\begin{bmatrix} 0 \\ -\Gamma_k \end{bmatrix} V_m(k) \tag{5.10}
\]
\[
\mathcal{Z}_k \triangleq s(k) - \hat{s}(k)
=
\begin{bmatrix} L_k & -L_k \end{bmatrix}
\begin{bmatrix} \xi_k \\ \hat{\xi}_k \end{bmatrix} \tag{5.11}
\]

Introducing the new variable ξ̃_k ≜ ξ_k − ξ̂_k, the augmented system can be described as
\[
\begin{bmatrix} \xi_{k+1} \\ \tilde{\xi}_{k+1} \end{bmatrix}
=
\begin{bmatrix} F_k & 0 \\ 0 & F_k - \Gamma_k H_k \end{bmatrix}
\begin{bmatrix} \xi_k \\ \tilde{\xi}_k \end{bmatrix}
+
\begin{bmatrix} 0 \\ \Gamma_k \end{bmatrix} V_m(k),
\qquad \eta_{k+1} = \Phi_k\,\eta_k + \Psi_k\,V_m(k) \tag{5.12}
\]
with
\[
\mathcal{Z}_k \triangleq s(k) - \hat{s}(k)
=
\begin{bmatrix} 0 & L_k \end{bmatrix}
\begin{bmatrix} \xi_k \\ \tilde{\xi}_k \end{bmatrix}
= \Omega_k\,\eta_k \tag{5.13}
\]

The LMI solution for the design of adaptive filters finds a Lyapunov function for the

augmented system in (5.12)-(5.13) at each step. In other words, at each time step, an

infinite horizon problem is solved, and the solution is implemented for the next step.


Introducing the quadratic function V(η_k) = η_k^T P η_k (where P > 0), it is straightforward [8] to show that for the augmented system at time k, (5.7) holds if
\[
V(\eta_{k+1}) - V(\eta_k) < \gamma^2\,V_m^T(k)\,V_m(k) - \mathcal{Z}_k^T \mathcal{Z}_k \tag{5.14}
\]

Note that the inclusion of the energy of the initial condition error will only increase

the right-hand side of the inequality in Eq. (5.14). Substituting for Z_k and η_{k+1} from
(5.12)-(5.13), after some elementary algebraic manipulation the inequality in
(5.14) can be written as

\[
\begin{bmatrix} \eta_k^T & V_m^T(k) \end{bmatrix}
\begin{bmatrix}
\Phi_k^T P \Phi_k - P + \Omega_k^T \Omega_k & \Phi_k^T P \Psi_k \\
\Psi_k^T P \Phi_k & \Psi_k^T P \Psi_k - \gamma^2 I
\end{bmatrix}
\begin{bmatrix} \eta_k \\ V_m(k) \end{bmatrix} < 0 \tag{5.15}
\]

Now, due to the block diagonal structure of the matrix Φk in (5.12), the Lyapunov

matrix P can be chosen with a block diagonal structure

\[
P = \begin{bmatrix} R & 0 \\ 0 & S \end{bmatrix} \tag{5.16}
\]

Substituting for P, (5.15) reduces to

\[
\begin{bmatrix}
F_k^T R F_k - R & 0 & 0 \\
0 & (F_k - \Gamma_k H_k)^T S (F_k - \Gamma_k H_k) - S + L_k^T L_k & (F_k - \Gamma_k H_k)^T S \Gamma_k \\
0 & \Gamma_k^T S (F_k - \Gamma_k H_k) & \Gamma_k^T S \Gamma_k - \gamma^2 I
\end{bmatrix} < 0 \tag{5.17}
\]

in which R is independent of Γk and S. To formulate the 2 × 2 block in (5.17) as an

LMI in S and Γk, note that
\[
\begin{bmatrix}
(F_k - \Gamma_k H_k)^T S (F_k - \Gamma_k H_k) - S + L_k^T L_k & (F_k - \Gamma_k H_k)^T S \Gamma_k \\
\Gamma_k^T S (F_k - \Gamma_k H_k) & \Gamma_k^T S \Gamma_k - \gamma^2 I
\end{bmatrix}
=
\underbrace{\begin{bmatrix} L_k^T L_k - S & 0 \\ 0 & -\gamma^2 I \end{bmatrix}}_{D}
+
\underbrace{\begin{bmatrix} (F_k - \Gamma_k H_k)^T S \\ \Gamma_k^T S \end{bmatrix}}_{C}
\underbrace{S^{-1}}_{A^{-1}}
\underbrace{\begin{bmatrix} S (F_k - \Gamma_k H_k) & S \Gamma_k \end{bmatrix}}_{B} \tag{5.18}
\]


and hence, using the Schur complement, the inequality in (5.18) can be written as the
following set of linear matrix inequalities
\[
\begin{bmatrix}
-S & S (F_k - \Gamma_k H_k) & S \Gamma_k \\
(F_k - \Gamma_k H_k)^T S & L_k^T L_k - S & 0 \\
\Gamma_k^T S & 0 & -\gamma^2 I
\end{bmatrix} < 0,
\qquad S > 0 \tag{5.19}
\]
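The Schur-complement step can be sanity-checked numerically: for S > 0, the 3 × 3 block matrix of (5.19) is negative definite exactly when the 2 × 2 matrix D + C S⁻¹ B of (5.18) is. A numpy sketch with arbitrary (randomly generated, hypothetical) matrices:

```python
import numpy as np

def is_neg_def(M):
    """True if the symmetric matrix M is negative definite."""
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) < 0))

rng = np.random.default_rng(1)
n = 3
F = 0.3 * rng.standard_normal((n, n))    # stand-ins for F_k, H_k, Gamma_k, L_k
H = rng.standard_normal((1, n))
G = 0.1 * rng.standard_normal((n, 1))
L = 0.1 * rng.standard_normal((1, n))
S = np.eye(n)                            # a candidate Lyapunov block, S > 0
gamma2 = 4.0
A = F - G @ H

# The 2x2 block of (5.17), split as D + C S^{-1} B in Eq. (5.18):
D = np.block([[L.T @ L - S, np.zeros((n, 1))],
              [np.zeros((1, n)), -gamma2 * np.eye(1)]])
C = np.vstack([A.T @ S, G.T @ S])
B = np.hstack([S @ A, S @ G])
M_small = D + C @ np.linalg.inv(S) @ B

# The Schur-complement form of Eq. (5.19):
M_big = np.block([[-S, S @ A, S @ G],
                  [A.T @ S, L.T @ L - S, np.zeros((n, 1))],
                  [G.T @ S, np.zeros((1, n)), -gamma2 * np.eye(1)]])

# With S > 0 the two conditions agree:
assert is_neg_def(M_small) == is_neg_def(M_big)
```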

Note that the LMI corresponding to the (1, 1) block in Eq. (5.17), i.e.

\[
F_k^T R F_k - R \le 0 \tag{5.20}
\]

can never be strict (F_k has eigenvalues on the unit circle). Most SDP solvers, however,
only handle strictly feasible LMIs. Therefore, F_k should first be put in its modal form

\[
F_k = \Theta_k^{-1} \begin{bmatrix} I & 0 \\ 0 & \Lambda_{A_s,k} \end{bmatrix} \Theta_k \tag{5.21}
\]

for some diagonalizing transformation matrix Θ_k to isolate the poles on the unit circle. Straightforward matrix manipulations then lead to the following LMI in S, R_s,
T ≜ SΓ_k, and γ², which can be strictly feasible:

Minimize γ² > 0 subject to
\[
\begin{bmatrix}
-S & S (F_k - \Gamma_k H_k) & S \Gamma_k \\
(F_k - \Gamma_k H_k)^T S & L_k^T L_k - S & 0 \\
\Gamma_k^T S & 0 & -\gamma^2 I
\end{bmatrix} < 0
\]
\[
\Lambda_{A_s,k}^T R_s \Lambda_{A_s,k} - R_s < 0,
\qquad S > 0, \quad R_s > 0 \tag{5.22}
\]

where $R = \Theta_k^T \begin{bmatrix} I & 0 \\ 0 & R_s \end{bmatrix} \Theta_k$. The solution to (5.22) provides the estimator gain, Γk, as

well as the Lyapunov matrix P, which ensures that the quadratic cost V(η_k) decreases

over time. It is shown in Chapter 2 that for the Riccati-based solution to Eq. (5.7)

the optimal value of γ is 1. In the absence of H2 constraints, γ in Eq. (5.22) can

be set to 1. This reduces the LMI solution to a feasibility problem in which S > 0,

Rs > 0, and T should be found.


5.2.1 Including H2 Constraints

Augmenting the above-mentioned H∞ objective with appropriate H2 performance
constraints (such as the H2 norm of the transfer function from the exogenous disturbance
Vm(k) to the cancellation error Zk in the augmented system described by (5.12)-(5.13)) is also straightforward. Recall that [8]

\[
\| T_{\mathcal{Z} V_m} \|_2^2 = \mathrm{Tr}\left( \Omega_k W_c \Omega_k^T \right) \tag{5.23}
\]
where W_c satisfies
\[
\Phi_k^T W_c \Phi_k - W_c + \Psi_k \Psi_k^T = 0 \tag{5.24}
\]
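For a time-invariant, Schur-stable Φ, the matrix W_c of Eq. (5.24) can be obtained by summing its defining series, W_c = Σᵢ (Φᵀ)ⁱ Ψ Ψᵀ Φⁱ, which gives a quick numerical check of the H₂ cost in Eq. (5.23). A minimal numpy sketch (the scalar example values are hypothetical):

```python
import numpy as np

def h2_norm_sq(Phi, Psi, Omega, n_terms=400):
    """Tr(Omega W Omega^T) with Phi^T W Phi - W + Psi Psi^T = 0 (Eq. 5.24).

    W is computed from the series W = sum_i (Phi^T)^i Psi Psi^T Phi^i,
    which converges when Phi is Schur stable (all |eig(Phi)| < 1).
    """
    W = np.zeros_like(Phi)
    M = Psi @ Psi.T
    for _ in range(n_terms):
        W = W + M
        M = Phi.T @ M @ Phi
    return float(np.trace(Omega @ W @ Omega.T))

# Hypothetical scalar example: Phi = a, Psi = Omega = 1 gives
# W = 1 / (1 - a^2), so the squared H2 cost is 1 / (1 - a^2).
a = 0.5
val = h2_norm_sq(np.array([[a]]), np.array([[1.0]]), np.array([[1.0]]))
```

For the scalar case the series sums to 1/(1 − a²) = 4/3 here, which the sketch reproduces.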

To translate this into LMI constraints [10], note that bounding the H2 norm by ν2 is

equivalent to

\[
\begin{bmatrix} \Phi_k^T W_c \Phi_k - W_c & \Psi_k \\ \Psi_k^T & -I \end{bmatrix} < 0,
\qquad
\begin{bmatrix} Q & \Omega_k W_c \\ W_c \Omega_k^T & W_c \end{bmatrix} > 0,
\qquad
\mathrm{Tr}(Q) - \nu^2 < 0 \tag{5.25}
\]

Allowing some conservatism, we pick Wc = P to augment (5.25) with the LMI con-

straints in (5.22). After some algebraic manipulation, we first express (5.25) as

\[
\begin{bmatrix}
F_k^T R F_k - R & 0 & 0 \\
0 & (F_k - \Gamma_k H_k)^T S (F_k - \Gamma_k H_k) - S & -\Gamma_k \\
0 & -\Gamma_k^T & -I
\end{bmatrix} < 0
\]
\[
\begin{bmatrix}
Q & 0 & L_k S \\
0 & R & 0 \\
S L_k^T & 0 & S
\end{bmatrix} > 0,
\qquad
\mathrm{Tr}(Q) - \nu^2 < 0 \tag{5.26}
\]

To formulate the 2 × 2 block in the first inequality in (5.26) as an LMI, note that
\[
\begin{bmatrix}
(F_k - \Gamma_k H_k)^T S (F_k - \Gamma_k H_k) - S & -\Gamma_k \\
-\Gamma_k^T & -I
\end{bmatrix}
=
\begin{bmatrix}
F_k^T S F_k - S - H_k^T \Gamma_k^T S F_k - F_k^T S \Gamma_k H_k & -\Gamma_k \\
-\Gamma_k^T & -I
\end{bmatrix}
+
\begin{bmatrix} H_k^T \Gamma_k^T S \\ 0 \end{bmatrix}
S^{-1}
\begin{bmatrix} S \Gamma_k H_k & 0 \end{bmatrix} \tag{5.27}
\]


and that the LMI constraint on R can be decoupled from the rest. Therefore the

H2/H∞ problem can be formulated as the following LMI in S, T , Q, and Γk

Minimize αγ² + βTr(Q) (α and β are known constants) subject to (5.22) and
\[
\begin{bmatrix}
-S & S \Gamma_k H_k & 0 \\
H_k^T \Gamma_k^T S & F_k^T S F_k - S - H_k^T \Gamma_k^T S F_k - F_k^T S \Gamma_k H_k & -\Gamma_k \\
0 & -\Gamma_k^T & -I
\end{bmatrix} < 0
\]
\[
\begin{bmatrix} Q & L_k S \\ S L_k^T & S \end{bmatrix} > 0,
\qquad
\mathrm{Tr}(Q) - \nu^2 < 0 \tag{5.28}
\]

Note that T = SΓk and Γk are not independent variables, and an appropriate linear

matrix inequality should be used to reflect this interdependence. For an alternative

derivation of the LMI formulation for the mixed H2/H∞ synthesis problem see [10]

and references therein.

5.3 Adaptation Algorithm

The implementation scheme is similar to that of the adaptive FIR filter discussed in

Chapter 2. For easier reference, the main signals involved in the description of the

adaptive algorithm are briefly introduced here. For a more detailed description please

see Chapter 2. In what follows: (a) Ŵ(k) is the estimate of the adaptive weight vector,
(b) θ̂(k) is the estimate of the state of the secondary path, (c) u(k) ≜ h∗(k)Ŵ(k)
is the actual control input to the secondary path, (d) y(k) and d(k) are the actual
outputs of the secondary and primary paths, respectively, (e) e(k) is the actual error
measurement, and (f) θcopy(k) and ycopy(k) are the adaptive algorithm's internal copies
of the state and output of the secondary path, which are used in constructing m(k)
according to Eq. (5.4). The implementation algorithm can now be outlined as follows:

1. Start with Ŵ(0) = W0, θ̂(0) = θ0 as the estimator's best initial guess for the
state vector in the approximate model of the primary path. Also assume that


θcopy(0) = θcopy,0 is the adaptive algorithm's initial condition for the internal copy

of the state of the secondary path.

2. For 0 ≤ k ≤ M (finite horizon):

(a) Form h(k) according to Eq. (5.2),

(b) Form the control signal u(k) = h∗(k)Ŵ(k) to be applied to the secondary

path. Note that applying u(k) to the secondary path produces

y(k) = Cs(k)θ(k) + Ds(k)u(k) (5.29)

at the output of the secondary path. This in turn leads to the following

error signal measured at time k:

e(k) = d(k) − y(k) + Vm(k) (5.30)

which is available to the adaptive algorithm to perform the state update

at time k.

(c) Propagate the internal copy of the state vector and the output of the

secondary path as

θcopy(k + 1) = As(k)θcopy(k) + Bs(k)u(k) (5.31)

ycopy(k) = Cs(k)θcopy(k) + Ds(k)u(k) (5.32)

(d) Form the derived measurement, m(k), using the direct measurement e(k)

and the controller’s copy of the output of the secondary path

m(k) = e(k) + ycopy(k) (5.33)

Note that e(k) is the error measurement after the control signal u(k) is

applied.

(e) Use the LMI formulation in (5.22) (the LMI in (5.28) in case H2 constraints

exist) to find Γk (estimator’s gain).

(f) Update the state estimate according to Eq. (5.8), and extract Ŵ(k + 1)
from ξ̂k+1 as the new value of the adaptive weight vector in the adaptive
filter.

(g) If k ≤ M , go to (a).


5.4 Simulation Results

In this section the feasibility of the design procedure is examined in the context of

Active Noise Cancellation in the one dimensional acoustic duct shown in Figure 2.5.

The identified transfer function for the acoustic duct is shown in Figures 2.6 and 2.7.

The single channel noise cancellation scenario is considered here (i.e. Speaker #2 is

used to cancel, at the position of Microphone #2, the effect of the disturbance that

enters the acoustic duct via Speaker #1). The length of the adaptive FIR filter in

this example is 8. The results presented in this section are intended to demonstrate

the feasibility of the LMI formulation in adaptive FIR filter design.

Figure 5.2 compares the output of the secondary path, y(k), to the output of the

primary path, d(k), when the primary disturbance input is a sinusoid at 30 Hz. The

adaptive algorithm has full access to the primary disturbance in this case. The error

in noise cancellation, i.e. the measurement of Microphone #2, is also plotted in this

figure. Note that the error measurements are subject to band-limited white Gaussian

noise with power 0.008. Effectively, after 0.2 seconds the output of the adaptive filter

reaches its steady-state value. An approximately 10 times reduction in the amplitude

of the disturbances at Microphone #2 is recorded. A typical behavior of the elements

of the adaptive weight vector is shown in Figure 5.3. The variations in the elements

of the weight vector during steady-state operation of the algorithm are small, and

effectively after 0.2 seconds they assume their steady-state values.

Figure 5.4 shows the simulation results for a multi-tone primary disturbance that

consists of 30 and 45 Hz frequencies. In this case the output of the adaptive filter

effectively reaches its steady-state in about 0.35 seconds. The reduction in the magni-

tude of the measured disturbances at the position of Microphone #2 is approximately

5 times in this case. The simulation conditions are similar to the single-tone case. A

typical behavior of the elements of the adaptive FIR weight vector is shown in Fig-

ure 5.5. The elements of the weight vector display more variation during steady-state

operation of the adaptive FIR filter.

Even though the objective of the noise cancellation problem in this chapter is

the same as that in Chapter 2, the LMI-based adaptive algorithm is computationally


more expensive. Furthermore, simulation results presented in this section indicate

that, for the formulation of the problem presented in this chapter, the performance

of the Riccati-based EBAF is better than that of the LMI-based EBAF. This can

be attributed to the conservatism introduced in the formulation, in particular the
block-diagonal structure assumed for the matrix P (Eq. (5.16)). The assumption of a
constant P matrix also introduces additional conservatism.

formulation as LMIs provides a potent framework in which multiple design objectives

can be handled. Furthermore, uncertainty in the system model can be systematically

addressed in the LMI framework.

[Block diagram: primary path (unknown), secondary path (known), adaptive filter, and adaptation algorithm; signals x(k), d(k), u(k), y(k), e(k), Vm(k).]
Fig. 5.1: General block diagram for an Active Noise Cancellation (ANC) problem

[Plot: LMI single-tone noise cancellation; d(k) vs. y(k) (Volts) and Microphone #2 error e(k) (Volts) over 0-0.5 seconds.]
Fig. 5.2: Cancellation Error at Microphone #2 for a Single-Tone Primary Disturbance

[Plot: adaptive filter weights Wk(1), Wk(3), Wk(5), Wk(8) over 0-0.5 seconds.]
Fig. 5.3: Typical Elements of Adaptive Filter Weight Vector for Noise Cancellation Problem in Fig. 5.2

[Plot: LMI multi-tone noise cancellation; d(k) vs. y(k) (Volts) and Microphone #2 error e(k) (Volts) over 0-0.5 seconds.]
Fig. 5.4: Cancellation Error at Microphone #2 for a Multi-Tone Primary Disturbance

[Plot: adaptive filter weights Wk(1), Wk(3), Wk(6), Wk(8) over 0-0.5 seconds.]
Fig. 5.5: Typical Elements of Adaptive Filter Weight Vector for Noise Cancellation Problem in Fig. 5.4


5.5 Summary

This chapter suggests that LMI-based synthesis tools can be used to design adaptive

filters. The feasibility of this approach is demonstrated in a typical adaptive ANC

scenario. One clear benefit is that the framework is suitable for designing adaptive

filters in which performance and robustness concerns are systematically addressed.

Chapter 6

Conclusion

6.1 Summary of the Results and Conclusions

In this dissertation, a new estimation-based procedure for the systematic synthesis

and analysis of adaptive filters (controllers) in “Filtered” LMS problems has been

presented. This is a well-known nonlinear control problem for which “good” sys-

tematic synthesis and analysis techniques are not yet available. This dissertation

has proposed a two step solution to the problem. First, it developed an estimation

interpretation of the adaptive filtering (control) problem. Based on this interpreta-

tion, the original adaptive filtering (control) problem is replaced with an equivalent

estimation problem. The weight vector of the adaptive filter (controller) is treated

as the state variable in this equivalent estimation problem. In the original adaptive

control problem, a measured error signal (i.e. the difference between a desired signal,

d(k) in Fig. 2.1, and a controlled signal, y(k) in Fig. 2.1) reflects the success of the

adaptation scheme. The equivalent estimation problem has been constructed such

that this error signal remains a valid measure for successful estimation. The second

step, is then to solve the corresponding estimation problem. An observer structure

for the estimator has been chosen, so that “estimates” of the optimal weight vector

can be formed. The weight vector in the adaptive filter is then tuned according to

these state estimates.


Both H2 and H∞ measures can be used as estimation criteria. The H∞ criterion

was chosen for the development of the adaptive algorithm in this Thesis. More specif-

ically, the equivalent estimation problem seeks an estimate of the adaptive weight

vector such that the energy gain from the exogenous disturbances and the initial

condition uncertainty to the cancellation error (i.e. the error between d(k) and y(k)

in Fig. 2.1) is minimized. This objective function is justified by the nature of the

disturbances in the applications of interest (i.e. active noise cancellation and active

vibration isolation). The following is a summary of the results in this Thesis:

1. In the case of adaptive FIR filters:

(a) The equivalent estimation problem is shown to be linear. Given a bound on

energy gain γ > 0, the robust estimation literature provides exact filtering

and prediction solutions for this problem. The work in this Thesis proves

that γ = 1 is the optimal energy gain, and derives the conditions under

which this bound is achievable.

(b) The optimality arguments in this Thesis provide the conditions under

which the existence of an optimal filtering (prediction) solution for the

problem is guaranteed. This eliminates the possibility of solution
breakdown, which could have prevented real-time implementation of the
algorithm.

(c) It is shown that the filtering and prediction solutions only require one

Riccati recursion. The recursion propagates forward in time, and does not

rely on any information about the future of the system or the reference

signal (thus allowing the resulting adaptive algorithm to be implementable

in real-time). This has come at the expense of restricting the controller to

an FIR structure in advance.

(d) For the optimal value of γ = 1, the above mentioned Riccati recursion sim-

plifies to a Lyapunov recursion. This leads to a computational complexity

that is comparable to that of a classical filtered LMS adaptive algorithm,

such as the FxLMS.

6.1. SUMMARY OF THE RESULTS AND CONCLUSIONS 123

(e) Experimental results, along with extensive simulations have been used to

demonstrate the improved transient and steady-state performance of the

EBAF algorithm over classical adaptive filtering algorithms such as the

FxLMS and the Normalized FxLMS.

(f) A clear connection between the limiting behavior of the EBAF algorithm

and the FxLMS (Normalized-FxLMS) adaptive algorithm has been estab-

lished. In particular, it is shown that the gain vector in the prediction-

based (filtering-based) EBAF algorithm converges to the gain vector in the

FxLMS (Normalized FxLMS) as k → ∞. The error terms however, are

shown to be different. Thus, the classical FxLMS (Normalized FxLMS)

adaptive algorithms can be thought of as an approximation to the EBAF

algorithm in this Thesis. This connection might explain the observed

improvement in both transient and steady-state performance of the new

EBAF algorithm.

2. For the EBAF algorithm in the IIR case, it has been shown that the equivalent estimation problem is nonlinear. An exact solution for the nonlinear robust estimation problem is not yet available. A linearizing approximation that makes the systematic synthesis of adaptive IIR filters tractable has been adopted in this Thesis. The performance of the EBAF algorithm in this case has been compared to that of the Filtered-U LMS (FuLMS) adaptive algorithm. The proposed EBAF algorithm has been shown to provide improved steady-state and transient performance.

3. The treatment of the feedback contamination problem has been shown to be identical to the IIR adaptive filter design in the new estimation-based framework.

4. A multi-channel extension of the EBAF algorithm has been provided to demonstrate that the treatment of the single-channel and multi-channel adaptive filtering (control) problems in the new estimation-based framework is virtually identical. Simulation results for the problem of vibration isolation (in a 3-input/3-output vibration isolation platform (VIP)) and noise cancellation in a one-dimensional acoustic duct have been shown to prove the feasibility of the EBAF algorithm in the multi-channel case.

5. The new estimation-based framework has been shown to be amenable to a Linear Matrix Inequality (LMI) formulation. The LMI formulation is used to explicitly address the stability of the overall system under the adaptive algorithm by producing a Lyapunov function. Augmentation of an H2 performance constraint to the H∞ disturbance rejection criterion has also been discussed.

6.2 Future Work

There are several possible directions for future work. The first is to address the question of stability for the EBAF-based adaptive IIR filters. In extensive simulations, the estimation-based adaptive IIR filter was observed to be stable. Obtaining a formal proof of the stability of the system, however, is a difficult problem. Exploring the role of a different linearizing approximation in reducing the nonlinear robust estimation problem encountered in the IIR filter design to a tractable linear estimation problem is another interesting avenue for further research. It is also interesting to examine the possibility of formulating other classes of adaptive filtering problems from the estimation point of view.

It is possible to formulate a systematic approach to the synthesis of “optimal” adaptive filters (e.g., adaptive filters of optimal length) by augmenting the existing estimation criterion with an appropriate constraint or objective function. The systematic synthesis of a robust adaptive filter, in which an error in the modeling of the secondary path is explicitly handled during the synthesis process, would be another interesting extension to the work in this Thesis. The LMI formulation of the estimation-based approach, in particular, offers a rich machinery for addressing such problems.

Application of this approach to adaptive equalization is the subject of ongoing research. For adaptive equalization, adopting an H2 estimation criterion is well justified. With an H2 objective, the estimation-based adaptive filtering algorithm in this Thesis is in fact an extension of the RLS algorithm to the more general class of filtered LMS problems.

Appendix A

Algebraic Proof of Feasibility

A.1 Feasibility of γf = 1

As Chapter 2 points out, it is possible to directly establish the feasibility of γf = 1. To do so, it should be shown that R_k and R_{e,k} (in Theorem 2.1) have the same inertia for all k if γf = 1. First, define

$$
Z \triangleq \begin{bmatrix} A & B \\ C & D \end{bmatrix}
= \begin{bmatrix}
-P_k^{-1} & \begin{pmatrix} H_k^* & H_k^* \end{pmatrix} \\[4pt]
\begin{pmatrix} H_k \\ H_k \end{pmatrix} & \begin{pmatrix} I & 0 \\ 0 & -\gamma^2 I \end{pmatrix}
\end{bmatrix}
\tag{A.1}
$$

Then, apply a UDL decomposition to Eq. (A.1) to get

$$
Z_1 \triangleq \begin{bmatrix} I & 0 \\ -CA^{-1} & I \end{bmatrix}
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
\begin{bmatrix} I & -A^{-1}B \\ 0 & I \end{bmatrix}
= \begin{bmatrix} A & 0 \\ 0 & \Delta_A \end{bmatrix}
\tag{A.2}
$$

(where Δ_A = D − CA^{-1}B is in fact R_{e,k}). Note that Z and Z_1 have the same inertia.

Now, perform an LDU decomposition to get

$$
Z_2 \triangleq \begin{bmatrix} I & -BD^{-1} \\ 0 & I \end{bmatrix}
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
\begin{bmatrix} I & 0 \\ -D^{-1}C & I \end{bmatrix}
= \begin{bmatrix} \Delta_D & 0 \\ 0 & D \end{bmatrix}
\tag{A.3}
$$

(where Δ_D = A − BD^{-1}C reduces to A for γ = 1). Clearly, Z and Z_2 have the same inertia as well. Therefore, Z_1 and Z_2 must have the same inertia. Since the (1,1) block matrices for Z_1 and Z_2 are the same (i.e., A = −P_k^{-1}), the (2,2) block matrices, i.e., R_{e,k} in Z_1 and R_i in Z_2, must have the same inertia, which is condition 2.13 in Theorem 2.1.

Appendix B

Feedback Contamination Problem

Figure B.1 shows the block diagram for an approximate model of the primary path when a feedback path from the output of the IIR filter to its input exists. The presentation here follows the discussions in Sections 3.2 and 3.3, and therefore it is kept brief. The state-space description for the block diagram in Figure B.1 is as follows:

$$
\underbrace{\begin{bmatrix} W(k+1) \\ \theta(k+1) \\ \varphi(k+1) \end{bmatrix}}_{\xi(k+1)}
= \underbrace{\begin{bmatrix}
I_{(2N+1)\times(2N+1)} & 0 & 0 \\
B_s(k)h_k^* & A_s(k) & 0 \\
B_f(k)h_k^* & 0 & A_f(k)
\end{bmatrix}}_{F_k}
\underbrace{\begin{bmatrix} W(k) \\ \theta(k) \\ \varphi(k) \end{bmatrix}}_{\xi(k)},
\qquad \xi(k+1) = F_k\,\xi(k)
\tag{B.1}
$$

where θ(k) is the state variable for the secondary path, ϕ(k) is the state variable for the feedback path, and h_k and W(k) are defined in Section 3.2.1. Note that

$$
x(k) = \mathrm{ref}(k) + D_f(k)h^*(k)W(k) + C_f(k)\varphi(k)
\tag{B.2}
$$

$$
r(k) = a_0 x(k) + b_1 r(k-1) + \cdots + b_N r(k-N), \qquad r(-1) = \cdots = r(-N) = 0
\tag{B.3}
$$

where the contamination of the reference signal by the feedback from the output of the adaptive filter is evident. For this system, the derived measured output, m(k), is described as

$$
m(k) = \begin{bmatrix} D_s(k)h_k^* & C_s(k) & 0 \end{bmatrix}
\begin{bmatrix} W(k) \\ \theta(k) \\ \varphi(k) \end{bmatrix} + V_m(k)
= H_k\,\xi(k) + V_m(k)
\tag{B.4}
$$

while the quantity to be estimated, s(k), is

$$
s(k) = \begin{bmatrix} D_s(k)h_k^* & C_s(k) & 0 \end{bmatrix}
\begin{bmatrix} W(k) \\ \theta(k) \\ \varphi(k) \end{bmatrix}
= L_k\,\xi(k)
\tag{B.5}
$$

Similar to the derivations in Chapters 2 and 3, H_k = L_k, i.e., s(k) = d(k). It is desired to find an H∞ causal filter ŝ(k|k) = F(m(0), m(1), · · · , m(k)) such that

$$
\sup_{V_m,\,\xi_0}
\frac{\sum_{k=0}^{M} \bigl(s(k) - \hat{s}(k|k)\bigr)^{*}\bigl(s(k) - \hat{s}(k|k)\bigr)}
{\xi_0^{*}\Pi_0^{-1}\xi_0 + \sum_{k=0}^{M} V_m(k)^{*} V_m(k)}
\le \gamma_f^2
\tag{B.6}
$$

Equivalently, a strictly causal predictor s̆(k) = F(m(0), m(1), · · · , m(k − 1)) can be found such that

$$
\sup_{V_m,\,\xi_0}
\frac{\sum_{k=0}^{M} \bigl(s(k) - \breve{s}(k)\bigr)^{*}\bigl(s(k) - \breve{s}(k)\bigr)}
{\xi_0^{*}\Pi_0^{-1}\xi_0 + \sum_{k=0}^{M} V_m(k)^{*} V_m(k)}
\le \gamma_p^2
\tag{B.7}
$$

Here γf (γp) are positive numbers. Note that Vm(k) is assumed to be an L2 signal. Obviously, Equations (B.1), (B.4), and (B.5) are nonlinear in the IIR filter parameters, and therefore the estimation problem in Eq. (B.6) (or Eq. (B.7)) is a nonlinear H∞ problem. As in Sections 3.2 and 3.3, the IIR filter parameters in (B.2) and (B.3) are replaced with their best available estimates to obtain linear time-variant system dynamics. For this linearized system the solution in Section 3.3 applies exactly, and hence it is not repeated here. It is clear that the above discussion holds true when the adaptive IIR filter is replaced with an adaptive FIR filter.
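The contamination mechanism of Eqs. (B.2)–(B.3) can be illustrated with a minimal scalar simulation. A static gain g stands in for the full feedback-path model (A_f, B_f, C_f, D_f), and the adaptive-filter output is crudely identified with r(k); all coefficients and signals below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 2                          # order of the recursion in Eq. (B.3)
a0 = 0.5                       # feed-forward coefficient a_0
b = [0.3, -0.1]                # b_1 ... b_N
g = 0.2                        # stand-in static gain for the feedback path

K = 50
ref = rng.standard_normal(K)   # clean reference signal
r = np.zeros(K)
u_prev = 0.0                   # previous adaptive-filter output

def r_past(k, i):
    """r(k - i) with zero initial conditions r(-1) = ... = r(-N) = 0."""
    return r[k - i] if k - i >= 0 else 0.0

for k in range(K):
    # Eq. (B.2): the measured reference x(k) is the clean reference plus
    # a leakage of the filter output through the feedback path.
    x = ref[k] + g * u_prev
    # Eq. (B.3): recursion driven by the contaminated reference x(k).
    r[k] = a0 * x + sum(b[i - 1] * r_past(k, i) for i in range(1, N + 1))
    u_prev = r[k]              # crude stand-in for the filter output u(k)

print(r[:5])
```

The point of the sketch is only that x(k), and hence r(k), depends on past filter outputs, which is exactly why the reference is "contaminated" and the estimation problem becomes the one treated above.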

Fig. B.1: Block diagram of the approximate model for the primary path in the presence of the feedback path (a second-order IIR filter together with the approximate primary-path model with feedback contamination)

Appendix C

System Identification for Vibration Isolation Platform

C.1 Introduction

Advanced control techniques are model-based techniques. The achievable performance for these control techniques, therefore, depends on the accuracy of the available model. Thus, the objective of system modeling is to capture the dynamics of a system in the form of a mathematical model as accurately as possible. In general, two approaches to the mathematical modeling of a system are used: (a) the analytical approach, which applies the rules of physics (that govern the system dynamics) to derive a physical model, and (b) the system identification approach, which uses experimental input/output data to construct a mathematical model.

Analytical methods represent not only the input/output behavior of the system but also capture the internal mechanics and physics of the system. Identification-based methods, on the other hand, are mainly concerned with the input/output behavior of the system. Reference [22] provides a detailed discussion of the role of each approach in a control design problem. This appendix discusses the state-space model derived for the Vibration Isolation Platform (VIP) based on an advanced method of fitting the measured transfer functions. The main components of the VIP, along with their operational roles, are described in Chapter 4.

C.2 Identified Model

This section discusses the system identification strategy used to extract a state-space model for the Vibration Isolation Platform (VIP) based on transfer function measurements obtained over the 0.50–660.0 Hz frequency range. The process of data collection is described first. The consistency of the collected data for the load cells and truth (scoring) sensors with the physical behavior of the system is examined next. Load cell measurements reflect the interacting forces between the middle mass and the payload, while the truth (scoring) position sensors measure the displacement of the payload from its equilibrium in the inertial frame. It is therefore important to develop a model in which the readings from these two sets of sensors agree with the true physics of the system. For this modeling to be successful, the collected data should correctly capture the true physics of the problem, and the discussion on the consistency of the measurements demonstrates this important fact. Finally, the identified state-space model is presented and compared with the original measurements.

C.2.1 Data Collection Process

A program that controls both actuator excitation and sensor measurements was developed to measure the transfer functions. This program runs on a Sun SPARC1e real-time processor. The measurements are conducted as follows:

1. At a given frequency, a sinusoidal excitation is applied to only one actuator at a time; the other actuators are commanded to zero. A total of 209 data points (logarithmically spaced) are used to cover the 0.5–660 Hz frequency range. The amplitude of the sinusoidal excitation is set to 0.2 volts to ensure linear behavior of the system throughout the measurements.

2. The program maintains this excitation for 30 seconds (to assure that the system reaches its steady-state behavior) before it starts recording sensor readings. The sampling period for analog-to-digital conversion (performed by a Tustin 2100 data conversion device) is 63 microseconds. The same period is used for the digital-to-analog converter that drives the actuators.

3. The time data for the measurements is temporarily stored. With the excitation frequency known, a least squares algorithm is applied to the measurements over 20 cycles of the recorded data to extract the amplitude and phase of the measurements for each sensor.

With this data for each test frequency point, the transfer functions from each actuator to all existing sensors can be constructed. This data is then used by the system identification routine to extract a state-space model for the system.
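The amplitude/phase extraction in step 3 amounts to a two-parameter linear least squares fit at the known excitation frequency. The sketch below illustrates this on a synthetic measurement; the excitation frequency, amplitude, phase, and noise level are made up, while the 63 µs sampling period and the 20-cycle window come from the text.

```python
import numpy as np

f0 = 10.0               # excitation frequency (Hz), illustrative
Ts = 63e-6              # 63 microsecond sampling period (from the text)
n_cycles = 20           # fit over 20 cycles (from the text)
t = np.arange(0, n_cycles / f0, Ts)

# Synthetic "sensor reading": amplitude 0.7, phase 0.4 rad, plus noise.
rng = np.random.default_rng(2)
y = 0.7 * np.cos(2 * np.pi * f0 * t + 0.4) + 0.01 * rng.standard_normal(t.size)

# Linear least squares: y ~ c*cos(w t) + s*sin(w t), with w = 2*pi*f0 known.
Phi = np.column_stack([np.cos(2 * np.pi * f0 * t),
                       np.sin(2 * np.pi * f0 * t)])
c, s = np.linalg.lstsq(Phi, y, rcond=None)[0]

# Since A*cos(wt + p) = A*cos(p)*cos(wt) - A*sin(p)*sin(wt):
amp = np.hypot(c, s)
phase = np.arctan2(-s, c)
print(amp, phase)
```

Because the frequency is known, the problem is linear in (c, s) even though it is nonlinear in amplitude and phase, which is what makes a simple least squares solution adequate here.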

C.2.2 Consistency of the Measurements

To examine the consistency of the available measurements, the physical relationship between the load cell and truth sensor measurements should be explored. Note that, from the measurements, the transfer function matrices H_{lc,u}, H_{lc,d}, H_{sc,u}, and H_{sc,d}, defined by

$$
LC_u = \begin{bmatrix} LC_1 \\ LC_2 \\ LC_3 \end{bmatrix}
= \begin{bmatrix} H_{1,1} & H_{1,2} & H_{1,3} \\ H_{2,1} & H_{2,2} & H_{2,3} \\ H_{3,1} & H_{3,2} & H_{3,3} \end{bmatrix}
\begin{bmatrix} U_1 \\ U_2 \\ U_3 \end{bmatrix} = H_{lc,u}\,U
\tag{C.1}
$$

$$
LC_d = \begin{bmatrix} LC_1 \\ LC_2 \\ LC_3 \end{bmatrix}
= \begin{bmatrix} H_{1,4} & H_{1,5} & H_{1,6} \\ H_{2,4} & H_{2,5} & H_{2,6} \\ H_{3,4} & H_{3,5} & H_{3,6} \end{bmatrix}
\begin{bmatrix} D_1 \\ D_2 \\ D_3 \end{bmatrix} = H_{lc,d}\,D
\tag{C.2}
$$

$$
SC_u = \begin{bmatrix} SC_1 \\ SC_2 \\ SC_3 \end{bmatrix}
= \begin{bmatrix} H_{10,1} & H_{10,2} & H_{10,3} \\ H_{11,1} & H_{11,2} & H_{11,3} \\ H_{12,1} & H_{12,2} & H_{12,3} \end{bmatrix}
\begin{bmatrix} U_1 \\ U_2 \\ U_3 \end{bmatrix} = H_{sc,u}\,U
\tag{C.3}
$$

$$
SC_d = \begin{bmatrix} SC_1 \\ SC_2 \\ SC_3 \end{bmatrix}
= \begin{bmatrix} H_{10,4} & H_{10,5} & H_{10,6} \\ H_{11,4} & H_{11,5} & H_{11,6} \\ H_{12,4} & H_{12,5} & H_{12,6} \end{bmatrix}
\begin{bmatrix} D_1 \\ D_2 \\ D_3 \end{bmatrix} = H_{sc,d}\,D
\tag{C.4}
$$

are available. Meanwhile, straightforward dynamics suggest that

$$
\frac{LC}{(j2\pi f)^2} = \mathrm{SCALE}_{lc}\cdot M_{x,y,z}\cdot X,
\qquad SC = \mathrm{SCALE}_{sc}\cdot X
\tag{C.5}
$$

where LC is the vector of load cell force measurements, M is the mass/inertia matrix, X is the vector of position measurements, and SC is the truth (scoring) sensors' position measurement. The factor SCALE_{sc} = diag(S_{sc1}, S_{sc2}, S_{sc3}) accounts for the scaling differences in the position measurement as seen by the truth sensors. With the orthogonal transformation

$$
T \triangleq \begin{bmatrix}
0 & \dfrac{-1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} \\[6pt]
\dfrac{1}{\sqrt{1.5}} & \dfrac{-0.5}{\sqrt{1.5}} & \dfrac{-0.5}{\sqrt{1.5}} \\[6pt]
\dfrac{1}{\sqrt{3}} & \dfrac{1}{\sqrt{3}} & \dfrac{1}{\sqrt{3}}
\end{bmatrix}
$$

the effects of the actuators and the measurements of the sensors can be decomposed into two tilt motions (about two perpendicular axes in the x-y plane) and a piston motion in the z direction. Now, the load cell readings can be related to those of the scoring sensors as follows:

$$
T \cdot
\underbrace{\begin{bmatrix} S_{lc1}^{-1} & 0 & 0 \\ 0 & S_{lc2}^{-1} & 0 \\ 0 & 0 & S_{lc3}^{-1} \end{bmatrix}}_{\mathrm{SCALE}_{lc}^{-1}}
\cdot \frac{1}{(j2\pi f)^2} \cdot
\begin{bmatrix} LC_1 \\ LC_2 \\ LC_3 \end{bmatrix}
=
\underbrace{\begin{bmatrix} I_x & 0 & 0 \\ 0 & I_y & 0 \\ 0 & 0 & M \end{bmatrix}}_{M_{\theta_x,\theta_y,z}}
\begin{bmatrix} \theta_x \\ \theta_y \\ z \end{bmatrix}
\tag{C.6}
$$

$$
T \cdot
\underbrace{\begin{bmatrix} S_{sc1}^{-1} & 0 & 0 \\ 0 & S_{sc2}^{-1} & 0 \\ 0 & 0 & S_{sc3}^{-1} \end{bmatrix}}_{\mathrm{SCALE}_{sc}^{-1}}
\begin{bmatrix} SC_1 \\ SC_2 \\ SC_3 \end{bmatrix}
=
\begin{bmatrix} \theta_x \\ \theta_y \\ z \end{bmatrix}
\tag{C.7}
$$

and therefore

$$
\frac{1}{(j2\pi f)^2}\cdot H_{lc,u}\cdot H_{sc,u}^{-1}
= \mathrm{SCALE}_{lc}\cdot T^{*}\cdot M_{\theta_x,\theta_y,z}\cdot T\cdot \mathrm{SCALE}_{sc}^{-1}
\tag{C.8}
$$

Note that a similar relationship can be derived for measurements from the disturbance actuators. The left-hand side of Eq. (C.8) is the available measurement data, and hence an optimization problem can be set up to find the optimal scaling factors as well as the inertia and mass parameters. Instead of solving this problem at each frequency, the left-hand side of Eq. (C.8) is averaged over the frequency range [0.5–30] Hz to obtain

$$
\left( \left[ \frac{H_{lc,u}}{(j2\pi f)^2} \right] H_{sc,u}^{-1} \right)_{\mathrm{averaged}}
= 10^{-3}\cdot
\begin{bmatrix}
0.2174 & 0.0668 & 0.0635 \\
0.0697 & 0.1931 & 0.0586 \\
0.0705 & 0.0650 & 0.1751
\end{bmatrix}
\tag{C.9}
$$

Now a simple optimization algorithm yields the following optimal scaling factors and inertia/mass parameters for the system:

$$
\begin{aligned}
\mathrm{SCALE}_{lc} &= \mathrm{diag}(1.1043,\ 1.0352,\ 0.9560)\times 22.24\ \mathrm{(N/Volt)} \\
\mathrm{SCALE}_{sc} &= \mathrm{diag}(1.0887,\ 1.0122,\ 0.9935)\times 0.25\ \mathrm{(mm/Volt)} \\
I_x &= 0.6579\ \mathrm{kg\,m^2} \\
I_y &= 0.6184\ \mathrm{kg\,m^2} \\
M &= 1.7801\ \mathrm{kg}
\end{aligned}
$$

Applying the orthogonal transformation T and the optimal scaling factors for the load cells and the scoring sensors,

$$
M_{\theta_x,\theta_y,z}^{\mathrm{measured}} = T\cdot \mathrm{SCALE}_{lc}^{-1}
\left( \left[ \frac{H_{lc,u}}{(j2\pi f)^2} \right] \cdot H_{sc,u}^{-1} \right)
\mathrm{SCALE}_{sc}\cdot T^{*}
$$

the matrix in Equation (C.9) is transformed into

$$
\left( \left[ \frac{H_{lc,u}}{(j2\pi f)^2} \right] H_{sc,u}^{-1} \right)_{\mathrm{diagonalized}}
= 10^{-3}\cdot
\begin{bmatrix}
0.1223 & 0.0004 & 0.0065 \\
0.0019 & 0.1203 & -0.0060 \\
-0.0049 & 0.0024 & 0.3070
\end{bmatrix}
\tag{C.10}
$$

in which the diagonal dominance of the matrix is evident. This, in essence, indicates that the relationship between the load cell and scoring (truth) sensor measurements conforms with what the physics of the VIP system predicts.
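The diagonalization step can be reproduced directly from the numbers in the text: applying T and the unitless parts of the optimal scalings to the averaged matrix of Eq. (C.9) yields a diagonally dominant matrix. The result lands close to, though not exactly on, the entries of Eq. (C.10), since the order of averaging and transformation used in the original computation is not fully specified here.

```python
import numpy as np

# Orthogonal transformation T from the text (tilt, tilt, piston).
T = np.array([[0.0,             -1/np.sqrt(2),      1/np.sqrt(2)],
              [1/np.sqrt(1.5),  -0.5/np.sqrt(1.5), -0.5/np.sqrt(1.5)],
              [1/np.sqrt(3),     1/np.sqrt(3),      1/np.sqrt(3)]])

# Averaged matrix of Eq. (C.9).
H_avg = 1e-3 * np.array([[0.2174, 0.0668, 0.0635],
                         [0.0697, 0.1931, 0.0586],
                         [0.0705, 0.0650, 0.1751]])

# Unitless parts of the optimal scalings (the common 22.24 N/Volt and
# 0.25 mm/Volt factors only rescale the whole matrix uniformly).
S_lc = np.diag([1.1043, 1.0352, 0.9560])
S_sc = np.diag([1.0887, 1.0122, 0.9935])

M = T @ np.linalg.inv(S_lc) @ H_avg @ S_sc @ T.T
print(np.round(M * 1e3, 4))
```

The first two diagonal entries correspond to the two tilt axes and the third to the piston (z) motion, which is why the (3,3) entry is noticeably larger, consistent with Eq. (C.10).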

Using the optimal scaling for the sensors and the orthogonal transformation T, (H_{lc,u}/(j2πf)²)·H_{sc,u}^{-1} can be diagonalized at each frequency. Figures C.1 and C.2 show the elements of the 3×3 matrices [(H_{lc,u}/(j2πf)²)·H_{sc,u}^{-1}] and [(H_{lc,d}/(j2πf)²)·H_{sc,d}^{-1}], respectively, for the raw data. All elements of these matrices are roughly of the same order of magnitude. Figures C.3 and C.4, on the other hand, show the elements of the same matrices for the diagonalized version. The diagonal dominance in these matrices is maintained for all frequencies. Also note that the numerical values of the diagonal elements in Figures C.3 and C.4 agree with those obtained by the optimization process, indicating a truthful reflection of the system dynamics in the collected data for frequencies up to 50 Hz.

C.2.3 System Identification

This section discusses the identified 6-input/12-output continuous-time state-space representation of the VIP system. As mentioned earlier, the three control actuators (denoted by U) along with the three disturbance actuators (denoted by D) form the inputs to the VIP system. Three load cells (colocated with the control actuators and denoted by LC), six displacement sensors (one colocated within each actuator stator, denoted by CP for the ones colocated with control actuators and DP for the ones colocated with disturbance actuators), and three scoring sensors (measuring the inertial motion of the payload and denoted by SC) constitute the 12 outputs of this system.

For the results presented here, the Frequency domain Observability Range Space Extraction (FORSE) algorithm (see Section C.3, and Ref. [30], Section 3, for a detailed description of the identification algorithm) is used. FORSE is a model synthesis technique that operates directly on the transfer function data without an inverse Fourier transform, and has numerical robustness properties similar to time-domain synthesis techniques (such as ERA and q-Markov methods). The discussion in Ref. [30] proves that a specific matrix constructed from the frequency-domain transfer function measurements has the same range space as the observability matrix of the system, and thus the A and C matrices can be found directly. The rest of the realization (i.e., the B and D matrices) can then be found using the least squares method.

The available frequency-domain data (0.50–660.0 Hz) is not enough to allow the identification algorithm to capture the true behavior of the system at low frequencies. Therefore, the transfer function measurements from the 6 actuators to the load cells are scaled by K/(j2πf)² (for an appropriate K), effectively translating force measurements into position measurements. This data is then fed to the FORSE algorithm to extract a state-space model (A, B, C_LC, D_LC) for the transfer functions [U, D] → K·LC/(j2πf)². Note that this is indeed a MIMO (6-input/3-output) identification process in which the maximum singular value of the modeling error is minimized. Keeping A and B fixed, the transfer function data for [U, D] → [CP], [U, D] → [DP], and [U, D] → [SC] are then used to derive the state-space model for each of the above-mentioned subsystems. Once again, the available data does not provide enough information to correctly capture the high-frequency behavior of the system. Therefore, in the course of modeling, the feed-through terms for the transfer functions (i.e., D_LC, D_CP, D_DP, and D_SC) are forced to be zero.

To cancel the effect of the double integration and the scaling of the load cell data, the output of the transfer function [U, D] → K·LC/(j2πf)² should be scaled by 1/K and differentiated twice. To approximate the required scaling and double differentiation, a second-order filter of the form

$$
F(j2\pi f) = \frac{\omega_0^2\,(j2\pi f)^2}{K\,(j2\pi f + \omega_0)^2}
$$

(with ω0 = 2πf0 for some appropriate f0) can be used to filter the load cell output (i.e., y = C_LC·x + D_LC·u). This approach, however, introduces 6 extraneous states, the elimination of which can affect the quality of the identified model. Instead, the state-space model [A, B, C_LC, D_LC] is used to perform the scaling and double differentiation as follows:

$$
\frac{1}{K}\,\ddot{y} = \frac{1}{K}\left( C A^2\, x + C A B\, u + C B\, \dot{u} \right)
\tag{C.11}
$$

where, if (1/K)·CB ≅ 0, then the state-space model

$$
\Bigl[\, \bar{A} = A,\quad \bar{B} = B,\quad \bar{C}_{LC} = \tfrac{1}{K}\,C A^2,\quad \bar{D}_{LC} = \tfrac{1}{K}\,C A B \,\Bigr]
$$

is an adequate representation of the transfer function [U, D] → [LC]. Note that no extraneous states are introduced here; hence the state-space model for the Vibration Isolation Platform is as follows:

$$
\begin{bmatrix}
A & B \\
\bar{C}_{LC} & \bar{D}_{LC} \\
\bar{C}_{CP} = C_{CP} & \bar{D}_{CP} = D_{CP} \\
\bar{C}_{DP} = C_{DP} & \bar{D}_{DP} = D_{DP} \\
\bar{C}_{SC} = C_{SC} & \bar{D}_{SC} = D_{SC}
\end{bmatrix}
$$
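The identity behind Eq. (C.11) is exact whenever CB = 0: differentiating y = Cx twice along ẋ = Ax + Bu gives s²C(sI − A)⁻¹B = CA²(sI − A)⁻¹B + CAB + sCB, so when the CB term vanishes the doubly differentiated output is realized by (A, B, CA²/K, CAB/K) with no extra states. The sketch below verifies this on a random illustrative system whose C is projected so that CB = 0 holds exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
A -= (np.max(np.real(np.linalg.eigvals(A))) + 1) * np.eye(n)  # shift poles left
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
C -= (C @ B) / (B.T @ B) * B.T   # project C so that C @ B = 0
K = 10.0                          # the scaling constant from the text

def freq_resp(Cm, Am, Bm, Dm, w):
    """Transfer function Cm (jw I - Am)^{-1} Bm + Dm at frequency w."""
    return Cm @ np.linalg.inv(1j * w * np.eye(n) - Am) @ Bm + Dm

w = 2 * np.pi * 5.0               # evaluate at an arbitrary 5 Hz point

# Left side: double differentiation of the scaled model output.
lhs = (1j * w) ** 2 / K * freq_resp(C, A, B, 0.0, w)
# Right side: the no-extra-states realization (A, B, C A^2 / K, C A B / K).
rhs = freq_resp(C @ A @ A / K, A, B, C @ A @ B / K, w)

print(np.abs(lhs - rhs).max())    # essentially zero since C @ B = 0
```

When CB is only approximately zero (the (1/K)·CB ≅ 0 condition in the text), the residual error is the term sCB/K, which grows linearly with frequency; this is why the condition matters most at the high end of the band.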

A good fit to the data was found using 36 states for the system. Increasing the number of states did not appear to improve the fit. Figures C.5–C.6 compare the singular value plots for the measured MIMO transfer functions and the transfer functions of the model generated using FORSE, before and after double differentiation. As Figure C.5 clearly indicates, the original model provides a good match to the scaled data over the entire 0.5–660 Hz frequency range. Figure C.6 then shows that the differentiation process does not introduce significant error into the identified model, as the singular value plots for the un-scaled load cell data closely match those of the final load cell model. Note that all significant modes are properly captured in both cases. Figures C.7–C.9 compare the singular value plots of the identified model and the measured data for the other input/output channels. As these figures indicate, the identified model closely matches the measured data in all cases. Note that double differentiation of the load cell output does not affect the identified model from the actuators to the other sensors (i.e., the CP, DP, and SC sensors).

It is important to point out that despite the presence of a pole around 15 Hz that dominates the response, the identification algorithm has been reasonably successful in capturing the MIMO zeros in this frequency range (Figures C.8–C.9). Figure C.10 shows the identified model over a wider frequency range. The low-frequency behavior of the load cells, as well as the high-frequency behavior of the position sensors, is correctly reflected in this model. As mentioned earlier, in this model D_LC, D_CP, D_DP, and D_SC are all set to zero. Figure C.11 shows the final identified model (i.e., after the double differentiation and scaling) for the VIP system. Both the low-frequency and high-frequency behavior of the load cells are correctly reflected. The model also remains truthful at high frequencies for the position sensors. For future reference, the natural frequencies and damping factors for the poles of the identified model are listed here:

Pole Natural Frequency (Hz)    Damping Ratio
  0.9733                        0.0195
  1.0156                        0.0291
  1.8925                        0.8298
  2.2409                        0.1549
  4.0462                        0.0074
  4.3862                        0.0201
  4.4216                        0.0109
 15.2601                        0.0159
 16.8545                        1.0000
 40.0545                        0.0100
 44.3453                        0.0121
 48.1396                        0.0062
 50.3646                        0.0090
224.5517                        0.0021
323.1164                        0.0029
393.0650                        0.0032
439.2836                        0.0054
468.8545                        0.0044

C.2.4 Control design model analysis

As a final test of the quality of the identified model, a simple H2 output feedback controller was designed based on the identified mathematical model. Note that no attempt has been made to obtain the best possible H2 controller for the VIP system. The only objective here is to examine the consistency of the identified model and the measured data from a control design point of view. For the results presented here:

$$
\begin{aligned}
\dot{x} &= A\,x + B_1\,u + B_2\,d \\
y &= \bar{C}_{LC}\,x + \bar{D}_{LC,1}\,u + \bar{D}_{LC,2}\,d \\
z &= \begin{bmatrix} z_x \\ z_u \end{bmatrix}
= \begin{bmatrix} \bar{C}_{LC} & \bar{D}_{LC,1} \\ 0 & w\,I \end{bmatrix}
\begin{bmatrix} x \\ u \end{bmatrix}
\end{aligned}
$$

where u is the 3×1 vector of control actuator commands, d is the 3×1 vector of disturbances, B1 = B(:, 4:6), B2 = B(:, 1:3), D_LC,1 = D_LC(:, 4:6), D_LC,2 = D_LC(:, 1:3), and w is a weight (chosen to be 0.05 here) on the control command. Figure C.12 compares the closed- and open-loop singular value plots for the MIMO transfer function from the three disturbance actuators to the load cell and scoring sensors. The important feature of this plot is not the performance of the designed controller, but the consistency of the closed-loop behavior of the system as seen by the two different sets of sensors (i.e., the load cell and scoring sensors). The same controller is then used to close the loop on the measured frequency response data. The comparison of the open- and closed-loop SVD plots in this case is shown in Figure C.13. Once again, the performance of the controller on the closed-loop transfer function is consistently reflected in both the load cell and scoring sensor measurements. Note the close similarity between the closed-loop behavior of the system under this H2 controller when either the model or the real data is used. This close correlation of the closed-loop behavior indicates that the identified model is a good control design model.
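The partitioning of the identified model into the H2 design model can be written out as follows. Note that the MATLAB-style 1-based indexing B(:,1:3), B(:,4:6) in the text corresponds to the 0-based slices below; the matrices here are random placeholders for the identified (A, B, C_LC, D_LC), not the actual VIP model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 36                                  # model order from the text
A = rng.standard_normal((n, n))         # placeholder identified matrices
B = rng.standard_normal((n, 6))         # 6 inputs: 3 disturbance, 3 control
C_LC = rng.standard_normal((3, n))      # load cell output map
D_LC = rng.standard_normal((3, 6))
w = 0.05                                # control-effort weight from the text

# B2 = B(:,1:3) (disturbance channels), B1 = B(:,4:6) (control channels).
B2, B1 = B[:, 0:3], B[:, 3:6]
D_LC2, D_LC1 = D_LC[:, 0:3], D_LC[:, 3:6]

# Performance output z = [ C_LC x + D_LC1 u ; w * u ] stacked as one map.
Cz = np.vstack([C_LC, np.zeros((3, n))])
Dz_u = np.vstack([D_LC1, w * np.eye(3)])

print(B1.shape, B2.shape, Cz.shape, Dz_u.shape)
```

With the model in this generalized-plant form, any standard H2 synthesis routine can be applied; the specific solver used in the thesis is not stated here.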

C.3 FORSE algorithm

The identification technique used for this work was originally developed at MIT as part of the MACE program [30, 22]. The approach integrates the Frequency domain Observability Range Space Extraction (FORSE) identification algorithm, the Balanced Realization (BR) model reduction algorithm, and the logarithmic and additive Least Squares (LS) modal parameter estimation algorithms for low-order, highly accurate model identification. The combined algorithm is called the Integrated Frequency domain Observability Range Space Extraction and Least Squares parameter estimation algorithm (IFORSELS).

The Balanced Realization (BR) model reduction algorithm transforms a state-space model to balanced coordinates and reduces the model by deleting the states associated with the smallest principal values. The LS estimation algorithms improve the fit of the reduced models to the experimental data by updating the state-space parameters in modal coordinates. Models derived by the FORSE and BR algorithms are non-optimal, but their computations are non-iterative. While the LS algorithms compute optimal parameter estimates, their computation may converge to non-optimal parameters if the initial estimates are very inaccurate. Integrating these algorithms in an iterative manner avoids the computational difficulties of the LS algorithms and improves the modeling accuracy of the FORSE and BR algorithms. As a result, the IFORSELS identification algorithm is capable of generating highly accurate, low-order models.

Assume that the following frequency response samples were obtained from experiments: G(ωk), k = 1, 2, · · · , K. The objective of frequency-domain state-space model identification is to minimize the cost function

$$
J = \sum_{k=1}^{K} \bigl\| G(\omega_k) - \bigl( C(j\omega_k I - A)^{-1}B + D \bigr) \bigr\|_2^2 \,,
\tag{C.12}
$$

where A, B, C, D are the system matrices of the state-space equations

$$
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t).
\tag{C.13}
$$

It is apparent that this is a nonlinear problem. The FORSE algorithm computes a suboptimal estimate of these matrices using a subspace-based approach. To make the cost J very small, the state-space model is usually over-parameterized. If the model order is not constrained, this FORSE model can be used by the LS algorithms as an initial estimate to derive a more accurate estimate of the A, B, C, D matrices.
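The fit cost of Eq. (C.12) is straightforward to evaluate for a candidate (A, B, C, D). In the sketch below the matrix 2-norm is interpreted as the Frobenius norm, and the "measured" samples are generated from a known system so that the true model attains essentially zero cost; the system, dimensions, and frequency grid are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, nu, ny = 4, 2, 2
A = rng.standard_normal((n, n)) - 2 * np.eye(n)   # arbitrary test system
B = rng.standard_normal((n, nu))
C = rng.standard_normal((ny, n))
D = rng.standard_normal((ny, nu))

# Frequency grid roughly spanning the 0.5-660 Hz range of the experiment.
omegas = 2 * np.pi * np.logspace(-0.3, 2.8, 50)

def tf(A, B, C, D, w):
    """State-space frequency response C (jw I - A)^{-1} B + D."""
    return C @ np.linalg.inv(1j * w * np.eye(A.shape[0]) - A) @ B + D

# Synthetic "measured" frequency response samples G(omega_k).
G_data = [tf(A, B, C, D, w) for w in omegas]

def cost(A, B, C, D):
    """The fit cost J of Eq. (C.12) with the Frobenius matrix norm."""
    return sum(np.linalg.norm(Gk - tf(A, B, C, D, w), 'fro') ** 2
               for Gk, w in zip(G_data, omegas))

print(cost(A, B, C, D))               # the generating model: ~0
print(cost(A, 1.1 * B, C, D) > 0.0)   # any perturbed model: positive cost
```

The nonlinearity mentioned in the text is visible here: J is linear-least-squares in B and D for fixed (A, C), but not jointly in all four matrices, which is why a subspace step for (A, C) followed by least squares for (B, D) is attractive.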

The entire IFORSELS algorithm consists of the FORSE identification algorithm, the BR model reduction algorithm, and two LS parameter estimation algorithms. It is important to point out that other subspace identification, model reduction, and parameter estimation algorithms could be used to form such an integrated identification algorithm. In fact, the Eigensystem Realization Algorithm was first used to form the algorithm [1]. The FORSE algorithm is now used because it is a frequency-domain algorithm and does not suffer from time-domain aliasing errors. Other advantages of the FORSE algorithm include its frequency-domain weighting feature and its capability of processing data which is not uniformly distributed along the frequency axis [2]. The BR model reduction algorithm is used simply because its computational software is readily available in MATLAB. Other model reduction algorithms, such as the weighted BR algorithm, the Projection algorithm, and the Component Cost algorithm, can also be incorporated into this integrated algorithm. The two LS parameter estimation algorithms used are the Additive LS (ALS) algorithm, which minimizes the error cost J in Equation (C.12), and the Logarithmic LS (LLS) algorithm, which minimizes the following logarithmic error cost:

$$
J_{\log} = \sum_{k=1}^{K} \sum_{i=1}^{n_y} \sum_{j=1}^{n_u}
\bigl\| \log(G_{ij}(\omega_k)) - \log\bigl( (C(j\omega_k I - A)^{-1}B + D)_{ij} \bigr) \bigr\|_2^2 \,,
\tag{C.14}
$$

where the subscript ij indicates the (i, j)-th element of the matrix, and n_u and n_y are the numbers of inputs and outputs of the system, respectively.

In the IFORSELS algorithm, the LLS parameter estimation algorithm is used in the early stages of model reduction and updating. The logarithmic cost of the LLS algorithm weights data samples equally, regardless of their magnitude [2], while the FORSE and ALS algorithms place more emphasis on data samples with high magnitudes. Hence, in the early stages of model reduction and updating, more emphasis needs to be placed on the fit in the frequency ranges where the magnitude of the data samples is low (such as areas around system transmission zeros). For the identification process presented in this report, the ALS algorithm is not used.
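The weighting difference between the additive and logarithmic costs can be seen with three scalar samples spanning several decades: a fixed 10% relative error contributes equally to the logarithmic cost of Eq. (C.14) at every scale, while the additive cost of Eq. (C.12) is dominated by the large-magnitude sample. The sample values below are purely illustrative.

```python
import numpy as np

true_vals = np.array([1e-3, 1.0, 1e3])   # samples spanning six decades
model_vals = 1.1 * true_vals              # 10% relative error everywhere

# Additive (ALS-style) error terms: grow with sample magnitude.
additive = np.abs(true_vals - model_vals) ** 2

# Logarithmic (LLS-style) error terms: depend only on the *relative* error,
# so a fixed 10% mismatch contributes the same amount at every scale.
logarithmic = np.abs(np.log(true_vals) - np.log(model_vals)) ** 2

print(additive)       # wildly different magnitudes
print(logarithmic)    # identical for all three samples
```

This is exactly why the text uses the LLS cost near transmission zeros, where the transfer function magnitude is small and an additive cost would effectively ignore the fit error.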

In summary, the entire IFORSELS identification algorithm consists of the following steps. Initially, the FORSE algorithm is used to generate an over-parameterized model. This model is then forwarded to the model reduction and updating iterations using the BR and LLS algorithms. The identified model is then judged (by a human) based on the accuracy of the fit and the order of the model. If necessary, modes can be deleted from or added to this model, and the IFORSELS identification process can be repeated until a satisfactory model is obtained.

Fig. C.1: Magnitude of the scaling factor relating the load cell's reading of the effect of control actuators to that of the scoring sensor

Fig. C.2: Magnitude of the scaling factor relating the load cell's reading of the effect of disturbance actuators to that of the scoring sensor

Fig. C.3: Magnitude of the scaling factor relating the load cell's reading of the effect of control actuators to that of the scoring sensor after diagonalization

[Figure: panels of abs([D−>(LC/W^2)]/[D−>SC]) vs. frequency (10–50 Hz), log scale 10^-6 to 10^-3; title: "Scaling Factors between Load Cell & Truth Sensor Measurements (Diagonalized Data), 04/12/98"]

Fig. C.4: Magnitude of the scaling factor relating the load cell’s reading of the effect of disturbance actuators to that of the scoring sensor after diagonalization

[Figure: SVD (Volts/Volts), model (−) vs. measured (−−), 1–100 Hz, log scale 10^-4 to 10^2; title: "SVD for Transfer Function [ U D ] −> [ LC/S2 ], 05/18/98"]

Fig. C.5: Comparison of SVD plots for the transfer function to the scaled/double-integrated load cell data
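SVD-versus-frequency curves like these can be generated by evaluating the transfer matrix of a state-space model, G(jw) = C (jwI − A)^{-1} B + D, on a frequency grid and taking singular values pointwise; overlaying the same quantity computed from measured frequency-response data gives the model-vs-measured comparison. The 2-input, 2-output model below is a random placeholder, not the identified model from this appendix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 4, 2, 2                             # states, inputs, outputs (assumed)
A = rng.standard_normal((n, n))
A = A - (np.max(np.real(np.linalg.eigvals(A))) + 1.0) * np.eye(n)  # make stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = np.zeros((p, m))

freqs = np.logspace(0, 2, 200)                # 1 to 100 Hz, as in the figures
sv = np.empty((len(freqs), min(p, m)))
for k, f in enumerate(freqs):
    # Transfer matrix at s = j*2*pi*f, then its singular values.
    G = C @ np.linalg.solve(2j * np.pi * f * np.eye(n) - A, B) + D
    sv[k] = np.linalg.svd(G, compute_uv=False)
```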

[Figure: SVD (Volts/Volts), model (−) vs. measured (−−), 1–100 Hz, log scale 10^-4 to 10^2; title: "SVD for Transfer Function [ U D ] −> [ LC ] after Double Differentiation, 05/18/98"]

Fig. C.6: Comparison of SVD plots for the transfer function to the actual load cell data

[Figure: SVD (Volts/Volts), model (−) vs. measured (−−), 1–100 Hz, log scale 10^-4 to 10^2; title: "SVD for Transfer Function [ U D ] −> [ SC ], 05/18/98"]

Fig. C.7: Comparison of SVD plots for the transfer function to the scoring sensors

[Figure: SVD (Volts/Volts), model (−) vs. measured (−−), 1–100 Hz, log scale 10^-4 to 10^2; title: "SVD for Transfer Function [ U D ] −> [ CP ], 05/18/98"]

Fig. C.8: Comparison of SVD plots for the transfer function to the position sensors colocated with the control actuators

[Figure: SVD (Volts/Volts), model (−) vs. measured (−−), 1–100 Hz, log scale 10^-4 to 10^2; title: "SVD for Transfer Function [ U D ] −> [ DP ], 05/18/98"]

Fig. C.9: Comparison of SVD plots for the transfer function to the position sensors colocated with the disturbance actuators

[Figure: four panels, SVD (Volts/Volts) of the model only, 10^0 to 10^5 Hz, log scale 10^-6 to 10^2; panel titles: "SVD for [ U D ] −> [ LC/S2 ]", "SVD for [ U D ] −> [ CP ], 05/18/98", "SVD for [ U D ] −> [ SC ]", "SVD for [ U D ] −> [ DP ]"]

Fig. C.10: The identified model for the system beyond the frequency range for which measurements are available

[Figure: four panels, SVD (Volts/Volts) of the model only, 10^0 to 10^5 Hz, log scale 10^-6 to 10^2; panel titles: "SVD for [ U D ] −> [ LC ]", "SVD for [ U D ] −> [ CP ], 05/18/98", "SVD for [ U D ] −> [ SC ]", "SVD for [ U D ] −> [ DP ]"]

Fig. C.11: The final model for the system beyond the frequency range for which measurements are available

[Figure: two panels, SVD of open loop (−) vs. closed loop (−−), 1–100 Hz, log scale 10^-4 to 10^2; panel titles: "H2 Controller Applied to the Load Cell Model", "H2 Controller Applied to the Scoring Sensor Model"]

Fig. C.12: The comparison of the closed loop and open loop singular value plots when the controller is used to close the loop on the identified model
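The open-loop vs. closed-loop comparison can be reproduced in miniature for a scalar system: evaluate the plant frequency response, close the loop with a controller, and compare the magnitudes (the singular values, in this scalar case) of the two maps. The lightly damped plant and constant-gain rate-feedback controller below are illustrative assumptions, not the H2 design used here.

```python
import numpy as np

wn, zeta = 2 * np.pi * 10.0, 0.01            # assumed 10 Hz mode, 1% damping
freqs = np.logspace(0, 2, 400)               # 1 to 100 Hz, as in the figure
s = 2j * np.pi * freqs
G = wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)  # open-loop plant response
K = 0.5 * s / wn                                # rate-feedback controller
T = G / (1.0 + G * K)                           # closed-loop response

peak_ol = np.max(np.abs(G))                  # open-loop resonance peak
peak_cl = np.max(np.abs(T))                  # closed loop flattens the peak
```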

[Figure: two panels, SVD of open loop (−) vs. closed loop (−−), 1–100 Hz, log scale 10^-4 to 10^2; panel titles: "H2 Controller Applied to Real Load Cell Data", "H2 Controller Applied to Real Scoring Sensor Data"]

Fig. C.13: The comparison of the closed loop and open loop singular value plots when the controller is used to close the loop on the real measured data
