Chapter 3: Parameter Identification

Upload: shannon-strickland

Post on 12-Jan-2016


Page 1

Chapter 3

Parameter Identification

Page 2

Table of Contents

One-Parameter Case
Two Parameters
Persistence of Excitation and Sufficiently Rich Inputs
Gradient Algorithms Based on the Linear Model
Least-Squares Algorithms
Parameter Identification Based on DPM
Parameter Identification Based on B-SPM
Parameter Projection
Robust Parameter Identification
Robust Adaptive Laws
State-Space Identifiers
Adaptive Observers

Page 3

Introduction

The purpose of this chapter is to present the design, analysis, and simulation of algorithms that can be used for online parameter identification. This involves three steps:

Step 1 (Parametric Model). Express the plant in the form of a parametric model: SPM, DPM, B-SPM, or B-DPM.

Step 2 (Parameter Identification Algorithm). The estimation error ε is used to drive the adaptive law that generates the estimate θ(t) online. The adaptive law is a differential equation of the form

dθ(t)/dt = H(t) ε(t)

where H(t) is a time-varying gain vector that depends on measured signals.

Step 3 (Stability and Parameter Convergence). Establish conditions that guarantee that θ(t) converges to the true parameter vector θ* as t → ∞.

Page 4

Example: One-Parameter Case

Consider the first-order plant model

Step 1: Parametric Model

Page 5

Example: One-Parameter Case

Step 2: Parameter Identification Algorithm

Parameter error: θ̃ = θ̂ − θ*.

Adaptive Law: the simplest adaptive law, in scalar form, is obtained by dividing the measurement by the regressor,

θ̂(t) = z(t)/φ(t)

provided φ(t) ≠ 0. In practice, the effect of noise, especially when φ(t) is close to zero, may lead to erroneous parameter estimates.

Page 6

Example: One-Parameter Case

Step 2: Parameter Identification Algorithm

Another approach is to update θ̂(t) in a direction that minimizes a certain cost of the estimation error. As an example, consider the cost criterion

J(θ̂) = (z − θ̂φ)² / 2

Using the gradient method, we obtain the adaptive law

dθ̂/dt = −γ ∇J(θ̂) = γ ε φ,   ε = z − θ̂φ,   γ > 0,   θ̂(0) = θ̂₀

where γ is a scaling constant or step size, which we refer to as the adaptive gain, and ∇J is the gradient of J with respect to θ̂.
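As an illustration (added here, not part of the original slides; all numerical values are arbitrary), a minimal Euler-integrated sketch of this scalar gradient law for the SPM z = θ*φ, with a bounded regressor that does not vanish:

```python
import numpy as np

def identify_scalar(theta_star=2.0, gamma=5.0, dt=1e-3, T=10.0):
    """Euler integration of the scalar gradient adaptive law
    dtheta_hat/dt = gamma * eps * phi, with eps = z - theta_hat * phi."""
    theta_hat = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        phi = 1.0 + 0.5 * np.sin(t)   # bounded regressor, bounded away from zero
        z = theta_star * phi          # measured SPM output z = theta* . phi
        eps = z - theta_hat * phi     # estimation error
        theta_hat += dt * gamma * eps * phi
    return theta_hat
```

For this regressor the estimate converges to θ* = 2; if φ(t) were to vanish with time, the same law would simply stop adapting.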

Page 7

The adaptive law should guarantee that: (i) the parameter estimate θ̂(t) and the speed of adaptation dθ̂/dt are bounded, and (ii) the estimation error ε gets smaller and smaller with time.

Example: One-Parameter Case

Step 3: Stability and Parameter Convergence

Note that these conditions still do not imply that θ̂(t) converges to θ* unless additional conditions are imposed on the signal φ(t), referred to as the regressor vector.

Page 8

Example: One-Parameter Case

Step 3: Stability and Parameter Convergence

Analysis can be carried out in two ways:

1. Solving the error differential equation explicitly
2. Using a Lyapunov function

Solving the differential equation shows that ε(t) → 0 and θ̂(t) → θ* under appropriate conditions on the regressor φ(t).

Page 9

Example: One-Parameter Case

Step 3: Stability and Parameter Convergence

ε(t) is always bounded for any φ(t); the speed of adaptation dθ̂/dt is bounded; and θ̂(t) is bounded.

Page 10

Example: One-Parameter Case

Step 3: Stability and Parameter Convergence

Analysis by Lyapunov

Page 11

Example: One-Parameter Case

The Lyapunov analysis shows that the equilibrium is uniformly stable (u.s.) and the parameter error is uniformly bounded (u.b.); it does not, by itself, establish asymptotic stability. So, we need to obtain additional properties of the regressor for asymptotic stability.

Page 12

Example: One-Parameter Case

Page 13

Example: One-Parameter Case

Adaptive law summary:

(i)

(ii)

Page 14

Example: One-Parameter Case

The PE property of φ(t) is guaranteed by choosing the input u appropriately.

Appropriate choices of u:

and any bounded input u that is not vanishing with time.

Page 15

Example: One-Parameter Case

Summary

Page 16

Example: Two-Parameter Case

Consider the first-order plant model

Step 1: Parametric Model

Step 2: Parameter Identification Algorithm

Estimation Model:

Estimation Error:

A straightforward choice:

where m_s is the normalizing signal, designed so that φ/m_s is bounded; a typical choice is m_s² = 1 + φᵀφ.

Page 17

Example: Two-Parameter Case

Adaptive Law: Use the gradient method to minimize the cost,

Page 18

Example: Two-Parameter Case

Step 3: Stability and Parameter Convergence

Stability of the equilibrium will very much depend on the properties of the time-varying matrix of the error dynamics, which in turn depend on the properties of the regressor φ(t).

Page 19

Example: Two-Parameter Case

For simplicity, let us assume that the plant is stable. If we choose a constant input u, then at steady state the regressor φ becomes a constant vector, and the system matrix A(t) of the error dynamics is only marginally stable: the estimation error e is bounded but does not necessarily converge to 0. A constant input does not guarantee exponential stability.

Page 20

Persistence of Excitation and Sufficiently Rich Inputs

Definition

Since φ(t)φᵀ(t) is always positive semi-definite, the PE condition requires that its integral over any interval of time of length T₀ be a positive definite matrix.
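A numerical sketch (added, not in the slides) of checking this definition: approximate the windowed integral of φφᵀ on a sample grid and take the smallest eigenvalue over all windows of length T₀.

```python
import numpy as np

def pe_level(phi_samples, dt, T0):
    """Smallest eigenvalue of (1/T0) * integral over [t, t+T0] of phi phi^T,
    minimized over all windows of length T0 on the sample grid."""
    n_win = int(T0 / dt)
    outer = np.einsum('ti,tj->tij', phi_samples, phi_samples) * dt
    csum = np.cumsum(outer, axis=0)          # running integral of phi phi^T
    worst = np.inf
    for start in range(len(phi_samples) - n_win):
        gram = (csum[start + n_win] - csum[start]) / T0
        worst = min(worst, np.linalg.eigvalsh(gram)[0])
    return worst

dt, T0 = 1e-3, 2 * np.pi
t = np.arange(0.0, 8 * np.pi, dt)
phi_rich = np.stack([np.sin(t), np.cos(t)], axis=1)              # PE in R^2
phi_poor = np.stack([np.ones_like(t), np.ones_like(t)], axis=1)  # rank-1, not PE
```

Here pe_level(phi_rich, dt, T0) is about 0.5, so φ = [sin t, cos t]ᵀ is PE with level α₀ ≈ 0.5, while the constant rank-1 regressor gives essentially 0: its windowed integral is singular.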

Definition

Page 21

Persistence of Excitation and Sufficiently Rich Inputs

Let us consider the signal vector φ generated as φ = H(s)u, where u is the input and H(s) is a vector whose elements are strictly proper transfer functions with stable poles.

Theorem

Page 22

Example: In the last example we had:

Persistence of Excitation and Sufficiently Rich Inputs

In this case n = 2, and the corresponding matrix is nonsingular. For such an input, φ is PE.

Page 23

Example:

Persistence of Excitation and Sufficiently Rich Inputs

Possible u

Page 24

Example: Vector Case

Consider the first-order plant model

Parametric Model

Page 25

Example: Vector Case

Filtering with 1/Λ(s), where Λ(s) is a monic Hurwitz polynomial.

Page 26

Example: Vector Case

If the plant numerator polynomial is Hurwitz, a bilinear model can be obtained as follows. Consider the polynomials which satisfy the Diophantine equation, where the polynomial on the right-hand side is a monic Hurwitz polynomial of order 2n − m − 1.

Page 27

Example: Vector Case

Filtering by

B-SPM model

Page 28

Example: Vector Case

Note that in this case the parameter vector contains not the coefficients of the plant transfer function but the coefficients of the polynomials in the Diophantine equation. In certain adaptive control systems, such as MRAC, these coefficients are the controller parameters, and the above parameterizations allow the direct estimation of the controller parameters.

Page 29

Example: Vector Case

If some of the coefficients of the plant transfer function are known, then the dimension of the parameter vector can be reduced. For example, if some coefficients are known a priori, then we have:

where,

Page 30

Gradient Algorithms Based on the Linear Model

Different choices of the cost function lead to different algorithms. As before, we have:

Instantaneous Cost Function

J(θ) = ε² m_s² / 2 = (z − θᵀφ)² / (2 m_s²)

Applying the gradient method gives the adaptive law

dθ/dt = Γ ε φ,   ε = (z − θᵀφ)/m_s²

where Γ = Γᵀ > 0 is referred to as the adaptive gain.
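A simulation sketch of this normalized gradient law (added; it assumes Γ = γI, m_s² = 1 + φᵀφ, and a sinusoidal PE regressor chosen purely for illustration):

```python
import numpy as np

def gradient_identifier(theta_star, gamma=10.0, dt=1e-3, T=30.0):
    """Normalized gradient law dtheta/dt = gamma * eps * phi,
    eps = (z - theta^T phi)/m_s^2, with m_s^2 = 1 + phi^T phi."""
    theta = np.zeros_like(theta_star)
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(t)])  # PE regressor in R^2
        z = theta_star @ phi                    # SPM output z = theta*^T phi
        ms2 = 1.0 + phi @ phi                   # normalizing signal squared
        eps = (z - theta @ phi) / ms2           # normalized estimation error
        theta += dt * gamma * eps * phi
    return theta

theta_hat = gradient_identifier(np.array([1.5, -0.7]))
```

Because the regressor is PE, the estimate converges to θ*; with a non-PE regressor the error ε would still converge while θ̂ need not.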

Page 31

Theorem

Gradient Algorithms Based on the Linear Model

Instantaneous Cost Function

Page 32

Integral Cost Function

where β > 0 is a design constant acting as a forgetting factor.

Gradient Algorithms Based on the Linear Model

Page 33

Integral Cost Function

Gradient Algorithms Based on the Linear Model

Page 34

Theorem:

Integral Cost Function

Gradient Algorithms Based on the Linear Model

Page 35

Least-Squares Algorithms

LS problem: Minimize the cost:

Let us now extend this problem

Now we present different versions of the LS algorithm, which correspond to different choices of the LS cost function.

Page 36

Least-Squares Algorithms

Recursive LS Algorithm with Forgetting Factor

where the forgetting factor β ≥ 0 and the initial covariance P(0) = P₀ = P₀ᵀ > 0 are design constants, and θ₀ is the initial parameter estimate.
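The slides state the continuous-time law; as a hedged illustration, the classical discrete-time RLS recursion with forgetting factor plays the same role (all numerical values below are illustrative, and the data are noiseless):

```python
import numpy as np

def rls_forgetting(phis, zs, lam=0.98, p0=100.0):
    """Discrete-time recursive least squares with forgetting factor lam
    (0 < lam <= 1); lam = 1 corresponds to the pure LS algorithm."""
    n = phis.shape[1]
    theta = np.zeros(n)
    P = p0 * np.eye(n)                          # initial covariance P0
    for phi, z in zip(phis, zs):
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + k * (z - phi @ theta)   # innovation update
        P = (P - np.outer(k, phi) @ P) / lam    # covariance update
    return theta, P

rng = np.random.default_rng(0)
theta_star = np.array([2.0, -1.0])
phis = rng.normal(size=(500, 2))                # exciting regressor samples
zs = phis @ theta_star                          # noiseless SPM measurements
theta, P = rls_forgetting(phis, zs)
```

With consistent (noise-free) data the recursion recovers θ* essentially exactly; the forgetting factor discounts old data so the algorithm can track slowly varying parameters.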

Page 37

Least-Squares Algorithms

where P = Pᵀ is the covariance matrix.

Non-recursive LS algorithm

Recursive LS Algorithm with Forgetting Factor

Page 38

Using the identity

recursive LS algorithm with forgetting factor

Theorem:

Least-Squares Algorithms

Recursive LS Algorithm with Forgetting Factor

Page 39

When β = 0, the above algorithm reduces to:

which is referred to as the pure LS algorithm.

Theorem

Least-Squares Algorithms

Pure LS Algorithm

Page 40

The pure LS algorithm guarantees that the estimated parameters converge to constant values without any restriction on the regressor φ. If, however, φ is PE, then θ̂(t) converges to θ*. Convergence of the estimated parameters to constant values is a unique property of the pure LS algorithm.

Least-Squares Algorithms

Pure LS Algorithm

Page 41

One of the drawbacks of the pure LS algorithm is that the covariance matrix P may become arbitrarily small and slow down adaptation in some directions. This is due to the fact that P⁻¹ can only grow, so P may approach zero. This is the so-called covariance wind-up problem.

Another drawback of the pure LS algorithm is that parameter convergence cannot be guaranteed to be exponential.

Least-Squares Algorithms

Pure LS Algorithm

Page 42

Modified LS Algorithms

One way to avoid the covariance wind-up problem is to use the covariance resetting modification to obtain

Least-Squares Algorithms

where t_r is the time at which λ_min(P(t)) ≤ ρ₁, and ρ₀ > ρ₁ > 0 are some design scalars. Due to covariance resetting, P(t) ≥ ρ₁I for all t ≥ 0.

Page 43

Modified LS Algorithms

Therefore, P is guaranteed to be positive definite for all t > 0. In fact, the pure LS algorithm with covariance resetting can be viewed as a gradient algorithm with time-varying adaptive gain P, and its properties are very similar to those of a gradient algorithm.
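A simulation sketch of the pure LS law with covariance resetting (added; the resetting levels ρ₀, ρ₁ and all signals are illustrative), showing that resets occur, P stays positive definite, and the estimate still converges for a PE regressor:

```python
import numpy as np

def pure_ls_with_resetting(theta_star, dt=1e-3, T=100.0, rho0=1.0, rho1=0.2):
    """Pure LS law dtheta/dt = P*eps*phi, dP/dt = -P phi phi^T P / m_s^2,
    with resetting P <- rho0*I whenever lambda_min(P) <= rho1."""
    n = len(theta_star)
    theta = np.zeros(n)
    P = rho0 * np.eye(n)
    resets = 0
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(t)])      # PE regressor
        ms2 = 1.0 + phi @ phi
        eps = (theta_star @ phi - theta @ phi) / ms2
        theta = theta + dt * (P @ phi) * eps
        P = P - dt * (P @ np.outer(phi, phi) @ P) / ms2
        if np.linalg.eigvalsh(P)[0] <= rho1:         # covariance resetting
            P = rho0 * np.eye(n)
            resets += 1
    return theta, P, resets

theta, P, resets = pure_ls_with_resetting(np.array([1.0, 0.5]))
```

Without the reset, P would keep shrinking (wind-up) and adaptation would slow down; with it, the algorithm behaves like a gradient algorithm with a well-conditioned time-varying gain.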

Least-Squares Algorithms

Modified LS Algorithm with Forgetting Factor

where the constant serves as an upper bound for ‖P‖.

Page 44

Modified LS Algorithms

Theorem:

Least-Squares Algorithms

Page 45

Parameter Identification Based on DPM

Consider the DPM; it may be written as:

where L(s) is chosen so that L⁻¹(s) is a proper stable transfer function and W(s)L(s) is a proper strictly positive real (SPR) transfer function.

Normalizing error

Page 46

Parameter Identification Based on DPM

State-space representation:

where, since the transfer function is SPR, there exist matrices P = Pᵀ > 0 and q satisfying the SPR (Kalman–Yakubovich) conditions such that:

Page 47

Parameter Identification Based on DPM

Theorem:

The adaptive law is referred to as the adaptive law based on the SPR-Lyapunov synthesis approach. It has the same form as the gradient algorithm.

Page 48

Parameter Identification Based on B-SPM

Consider the B-SPM . The estimation error is generated as

Let us consider the cost

where the signal is available for measurement and the positive design constants are the adaptive gains.

Page 49

Parameter Identification Based on B-SPM

Since the scalar ρ* is unknown, this adaptive law cannot be implemented. We bypass this problem by employing an equality that absorbs the unknown scalar into the adaptive gain; since the adaptive gain is arbitrary, any value can be selected without having to know ρ*. Therefore, the adaptive laws may be written as

Page 50

Parameter Identification Based on B-SPM

Theorem:

Page 51

Parameter Projection

In many practical problems, we may have some a priori knowledge of where θ* is located in Rⁿ. This knowledge usually comes in terms of upper and/or lower bounds for the elements of θ* or in terms of its location in a convex subset of Rⁿ. If such a priori information is available, we want to constrain the online estimate to lie within the set where the unknown parameters are located. For this purpose we modify the gradient algorithms, based on the unconstrained minimization of certain costs, using the gradient projection method.

where S is a convex subset of Rⁿ with smooth boundary.

Page 52

Parameter Projection

The adaptive laws based on the gradient method can be modified to guarantee that θ̂(t) ∈ S for all t by solving the constrained optimization problem given above to obtain

where δS and S⁰ denote the boundary and the interior, respectively, of S, and Pr(·) is the projection operator.
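A minimal sketch (added; it assumes Γ = γI and the simple convex set S = {θ : ‖θ‖ ≤ M}, for which the projection has a closed form) of the gradient law with projection:

```python
import numpy as np

def project_update(theta, update, M):
    """Gradient projection for S = {theta : ||theta|| <= M}: pass the update
    through unchanged inside S (or when it points inward); on the boundary,
    remove the outward radial component."""
    if theta @ theta >= M * M and update @ theta > 0.0:
        update = update - theta * (theta @ update) / (theta @ theta)
    return update

def identify_with_projection(theta_star, M, gamma=5.0, dt=1e-3, T=30.0):
    theta = np.zeros_like(theta_star)
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(t)])          # PE regressor
        eps = (theta_star @ phi - theta @ phi) / (1.0 + phi @ phi)
        theta = theta + dt * project_update(theta, gamma * eps * phi, M)
    return theta

theta_hat = identify_with_projection(np.array([1.0, -1.0]), M=2.0)
```

Since θ* lies inside S here, the projected law behaves like the unconstrained one and converges to θ*, while the estimate never leaves the ball.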

Page 53

Parameter Projection

The gradient algorithm based on the instantaneous cost function with projection is obtained by substituting

Page 54

Parameter Projection

The pure LS algorithm with projection becomes:

Page 55

Parameter Projection

Theorem: The gradient adaptive laws and the LS adaptive laws with the projection modifications, respectively, retain all the properties that are established in the absence of projection and in addition guarantee that θ̂(t) ∈ S for all t ≥ 0, provided θ̂(0) ∈ S and θ* ∈ S.

Page 56

Parameter Projection

Example: Consider the plant model

where a, b are unknown constants that satisfy some known bounds, e.g., b ≥ 1 and 20 ≥ a ≥ −2.

SPM

The gradient adaptive law in unconstrained case is:

Now, apply the projection method by defining:

Page 57

Parameter Projection

Applying the projection algorithm for each set, we obtain the following adaptive laws:

Page 58

Parameter Projection

Example: Let us consider the gradient adaptive law

SPM

with the a priori knowledge that ‖θ*‖ ≤ M for some known bound M. In most applications, we may have such a priori information. We define the corresponding convex set and use the projection method with it to obtain the adaptive law

Page 59

Robust Parameter Identification

In the previous sections we designed and analyzed a wide class of PI algorithms based on parametric models that are assumed to be free of disturbances, noise, unmodeled dynamics, time delays, and other frequently encountered uncertainties. In the presence of plant uncertainties we are no longer able to express the unknown parameter vector in the form of the SPM or DPM where all signals are measured and θ* is the only unknown term. In this case, the SPM or DPM takes the form

z = θ*ᵀφ + η

where η is an unknown function that represents the modeling error terms. The following examples show how the above form arises for different plant uncertainties.

Page 60

Robust Parameter Identification

Example: Consider a system with a small input delay

Actual plant:

Nominal plant:

where

Page 61

Robust Parameter Identification

Instability Example

Consider the scalar constant gain system

where d is a bounded unknown disturbance. The adaptive law for estimating θ*, derived for d = 0, is given by

dθ̂/dt = γ ε u,   ε = z − θ̂ u

where γ > 0 and the normalizing signal is taken to be 1.

Parameter error equation

Now consider d ≠ 0, we have

Page 62

Robust Parameter Identification

Instability Example

In this case we cannot guarantee that the parameter estimate is bounded for any bounded input u and disturbance d. For example, for:

i.e., the estimated parameter drifts to infinity even though the disturbance disappears with time. This instability phenomenon is known as parameter drift. It is mainly due to the pure integral action of the adaptive law, which, in addition to integrating the "good" signals, integrates the disturbance term as well, leading to the parameter drift phenomenon.
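A simulation sketch of parameter drift (added; the particular signals u and d below are chosen for illustration): the estimate starts at the true value θ* = 2 and still grows without bound, even though both the input and the disturbance decay to zero.

```python
import numpy as np

def drift_demo(gamma=1.0, dt=1e-2, T=1e4):
    """z = theta_star*u + d with unnormalized gradient law
    dtheta_hat/dt = gamma*eps*u; u and d vanish, yet theta_hat drifts."""
    theta_star, theta_hat = 2.0, 2.0      # start exactly at the true parameter
    for k in range(int(T / dt)):
        t = k * dt
        u = (1.0 + t) ** -0.5             # input vanishes with time
        d = (1.0 + t) ** -0.25            # disturbance also vanishes
        eps = theta_star * u + d - theta_hat * u
        theta_hat += dt * gamma * eps * u
    return theta_hat
```

The pure integrator accumulates the term γud, whose integral grows without bound even though u, d → 0, so the estimate slowly drifts away from θ* = 2.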

Page 63

Robust Adaptive Laws

Consider the general plant

where the nominal transfer function is the dominant part, the unmodeled-dynamics terms are strictly proper with stable poles, and d is a bounded disturbance.

where

SPM

Page 64

Robust Adaptive Laws

For robustness, we need to use the following modifications:

- Design the normalizing signal to bound the modeling error in addition to bounding the regressor vector φ.
- Modify the "pure" integral action of the adaptive laws to prevent parameter drift.

Page 65

Robust Adaptive Laws

Dynamic Normalization

Assume that the modeling-error transfer functions are analytic in Re[s] ≥ −δ₀/2 for some known δ₀ > 0. The normalizing signal m is then generated dynamically so that it bounds both the regressor and the modeling error.

Page 66

Robust Adaptive Laws

σ-Modification

A class of robust modifications involves the use of a small feedback around the "pure" integrator in the adaptive law, leading to the adaptive law structure

dθ̂/dt = Γ ε φ − w(t) Γ θ̂

where w(t) ≥ 0 is a small leakage design signal and Γ is the adaptive gain, which in the case of LS is equal to the covariance matrix P. The above modification is referred to as the σ-modification or as leakage. Different choices of w(t) lead to different robust adaptive laws with different properties.

Page 67

Robust Adaptive Laws

Fixed σ-Modification

where σ > 0 is a small design constant. The gradient adaptive law takes the form

dθ̂/dt = Γ ε φ − σ Γ θ̂

If some a priori estimate θ̄ of θ* is available, then the leakage term σθ̂ may be replaced with σ(θ̂ − θ̄), so that the leakage term becomes larger for larger deviations of θ̂ from θ̄ rather than from zero.
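Repeating the drift scenario of the instability example with the fixed σ-modification (an added sketch; σ and the signals are illustrative) shows the estimate staying bounded. Note the side effect: once the input vanishes, the leakage pulls the estimate toward zero, which is the bias drawback discussed on a later slide.

```python
import numpy as np

def sigma_mod_demo(sigma=0.1, gamma=1.0, dt=1e-2, T=1e4):
    """Fixed sigma-modification: dtheta_hat/dt = gamma*(eps*u - sigma*theta_hat).
    Same vanishing input/disturbance as the drift example."""
    theta_star, theta_hat = 2.0, 2.0
    for k in range(int(T / dt)):
        t = k * dt
        u = (1.0 + t) ** -0.5
        d = (1.0 + t) ** -0.25
        eps = theta_star * u + d - theta_hat * u
        theta_hat += dt * gamma * (eps * u - sigma * theta_hat)
    return theta_hat
```

Where the pure integrator drifted without bound, the leakage term keeps the estimate bounded for all time.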

Page 68

Robust Adaptive Laws

Fixed σ-Modification

Theorem:

Page 69

Robust Adaptive Laws

Fixed σ-Modification

Main drawback: if the modeling error is removed, i.e., η = 0, the fixed σ-modification will not guarantee the ideal properties of the adaptive law, since it introduces a disturbance of the order of the design constant σ.

Advantage: no assumption about bounds on, or the location of, the unknown θ* is made.

Page 70

Robust Adaptive Laws

Switching σ-Modification

Page 71

Robust Adaptive Laws

Switching σ-Modification

Theorem:

Page 72

Robust Adaptive Laws

Switching σ-Modification

Theorem:

Page 73

Robust Adaptive Laws

ε-Modification

Another class of σ-modification involves leakage that depends on the estimation error ε, i.e., the leakage gain is proportional to |ε|, where the proportionality constant is a design constant. This modification is referred to as the ε-modification and has properties similar to those of the fixed σ-modification, in the sense that it cannot guarantee the ideal properties of the adaptive law in the absence of modeling errors.

Page 74

For the parametric model, in order to avoid parameter drift, we constrain θ̂ to lie inside a bounded convex set S that contains θ*. As an example, consider the set

Robust Adaptive Laws

Parameter Projection

where the bound M₀ is chosen so that θ* ∈ S. Following the last discussion, we obtain the projection-modified adaptive law, where the design bound is chosen so that the constraint set contains θ*.

Page 75

Robust Adaptive Laws

Parameter Projection

Theorem:

Page 76

Robust Adaptive Laws

Parameter Projection

Page 77

Robust Adaptive Laws

Parameter Projection

The parameter projection has properties identical to those of the switching σ-modification, as both modifications aim at keeping the estimate within a known bounded region. In the case of the switching σ-modification, ‖θ̂‖ may exceed the bound but remain bounded, whereas in the case of projection θ̂(t) stays in S for all t, provided θ̂(0) ∈ S.

Page 78

Robust Adaptive Laws

Dead Zone

The principal idea behind the dead zone is to monitor the size of the estimation error and adapt only when the estimation error is large relative to the modeling error η.

where g₀ is a known upper bound of the normalized modeling error. In other words, we move in the direction of the steepest descent only when the estimation error is large relative to the modeling error, i.e., when |ε m_s| > g₀.
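A sketch of the dead zone logic (added; g₀ and the disturbance level are illustrative): adaptation is switched off whenever the normalized error is within the dead zone, so the bounded disturbance can no longer be integrated into drift.

```python
import numpy as np

def dead_zone_demo(g0=0.05, gamma=1.0, dt=1e-2, T=200.0):
    """Gradient law with dead zone: adapt only while |eps * m_s| > g0,
    where g0 bounds the normalized disturbance (here |d| <= 0.02)."""
    theta_star, theta_hat = 2.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        u = np.sin(t)                        # PE scalar regressor
        d = 0.02 * np.sin(7.3 * t)           # bounded modeling error
        ms2 = 1.0 + u * u                    # normalizing signal m_s^2
        eps = (theta_star * u + d - theta_hat * u) / ms2
        if abs(eps) * np.sqrt(ms2) > g0:     # outside the dead zone: adapt
            theta_hat += dt * gamma * eps * u
    return theta_hat
```

The estimate settles in a neighborhood of θ* = 2 whose size is of the order of g₀: robustness is bought at the price of a small residual parameter error.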

Page 79

Robust Adaptive Laws

Dead Zone

To remove the discontinuity in the adaptive law, the dead zone function is made continuous as follows:

Page 80

Robust Adaptive Laws

Dead Zone

Normalized dead zone function

Page 81

Robust Adaptive Laws

Dead Zone

Theorem:

Page 82

Robust Adaptive Laws

Dead Zone

The dead zone modification guarantees that the estimated parameters always converge to a constant.

As in the case of the fixed σ-modification, the ideal properties of the adaptive law are destroyed in an effort to achieve robustness.

The robust modifications presented above (leakage, projection, and dead zone) are analyzed for the gradient algorithm applied to the SPM with modeling error. The same modifications can be used with the least-squares algorithms and with the DPM, B-SPM, and B-DPM parametric models with modeling errors.

Page 83

State-Space Identifiers

Consider the state-space plant model

    ẋ = Ax + Bu,

where x ∈ Rⁿ is the measured state, u is the measured input, and A, B are unknown constant matrices. Rewriting the plant in the SSPM form

    ẋ = A_m x + (A − A_m)x + Bu,

where A_m is a stable design matrix, leads to the estimation model

    dx̂/dt = A_m x̂ + (Â − A_m)x + B̂u.

The above estimation model has been referred to as the series-parallel model in the literature. The estimation error vector is defined as

    e = x − x̂.

A straightforward choice of adaptive laws for Â, B̂ follows from the estimation error dynamics.

Page 84

State-Space Identifiers

The estimation error dynamics are

    de/dt = A_m e − Ãx − B̃u,

where Ã = Â − A and B̃ = B̂ − B are the parameter errors.

Adaptive laws:

    dÂ/dt = γ₁ e xᵀ,  dB̂/dt = γ₂ e uᵀ,

where γ₁, γ₂ > 0 are constant scalars (adaptive gains).
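A scalar sketch of the series-parallel identifier and its adaptive laws: the plant values (a, b), the design constant a_m > 0, and the adaptive gains below are illustrative assumptions.

```python
import numpy as np

# Scalar plant x_dot = a*x + b*u with unknown a, b; series-parallel model
#   xhat_dot = -a_m*xhat + (ahat + a_m)*x + bhat*u,   e = x - xhat,
# and adaptive laws ahat_dot = g1*e*x, bhat_dot = g2*e*u.
a, b = -1.0, 2.0                 # unknown "true" parameters
a_m, g1, g2, dt = 2.0, 10.0, 10.0, 1e-3
x = xhat = ahat = bhat = 0.0
for k in range(300000):          # simulate 300 time units (Euler)
    t = k * dt
    u = np.sin(t)                # one sinusoid: rich enough for 2 parameters
    e = x - xhat
    x_dot = a * x + b * u
    xhat_dot = -a_m * xhat + (ahat + a_m) * x + bhat * u
    ahat += dt * g1 * e * x      # adaptive laws driven by e
    bhat += dt * g2 * e * u
    x += dt * x_dot
    xhat += dt * xhat_dot

print(round(ahat, 2), round(bhat, 2))   # estimates approach (a, b)
```

Because the regressor pair (x, u) generated by a single sinusoid is persistently exciting for two parameters, both estimates converge to the true values.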

Page 85

State-Space Identifiers

Theorem:

Page 86

Adaptive Observers

Consider the LTI SISO plant

    ẋ = Ax + Bu,  y = Cᵀx,

where x ∈ Rⁿ. Assume that u is a piecewise continuous, bounded function of time and that A is a stable matrix. In addition, we assume that the plant is completely controllable and completely observable. The problem is to construct a scheme that estimates both the plant parameters A, B, C and the state vector x using only I/O measurements. We refer to such a scheme as an adaptive observer.

Page 87

Adaptive Observers

A good starting point for designing an adaptive observer is the Luenberger observer used in the case where A, B, C are known. The Luenberger observer has the form

    dx̂/dt = Ax̂ + Bu + K(y − ŷ),  ŷ = Cᵀx̂,

where K is chosen so that A − KCᵀ is a stable matrix, which guarantees that x̂(t) → x(t) exponentially fast for any initial condition and any input u. The existence of a K that makes A − KCᵀ stable is guaranteed by the observability of (A, C).
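A minimal simulation sketch of the Luenberger observer for an illustrative known second-order plant; the gain K is picked by hand so that A − KCᵀ is stable. All numerical values are assumptions for the example.

```python
import numpy as np

# Plant: x_dot = A x + B u, y = C^T x, with A, B, C known.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
K = np.array([5.0, 4.0])          # A - K C^T has eigenvalues -4 +/- j*sqrt(5)

x = np.array([1.0, -1.0])         # true state, unknown to the observer
xhat = np.zeros(2)                # observer state, wrong initial condition
dt = 1e-3
for k in range(10000):            # simulate 10 time units (Euler)
    t = k * dt
    u = np.sin(t)
    y = C @ x                     # only y and u are fed to the observer
    x = x + dt * (A @ x + B * u)
    xhat = xhat + dt * (A @ xhat + B * u + K * (y - C @ xhat))

print(np.round(np.abs(x - xhat), 4))   # state error is essentially zero
```

The observer error obeys de/dt = (A − KCᵀ)e, so it decays at the rate set by the chosen observer eigenvalues regardless of the input u.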

Page 88

Adaptive Observers

A straightforward procedure for choosing the structure of the adaptive observer is to use the same equation as the Luenberger observer but replace the unknown parameters A, B, C with their estimates Â, B̂, Ĉ, generated by some adaptive law. The problem we face with this procedure is the inability to estimate uniquely the n² + 2n parameters of A, B, C from the I/O data. The best we can do in this case is to estimate the 2n parameters of the plant transfer function and use them to calculate Â, B̂, Ĉ. These calculations, however, are not always possible, because the mapping of the 2n estimated parameters of the transfer function to the n² + 2n parameters of Â, B̂, Ĉ is not unique unless (A, B, C) satisfies certain structural constraints. One such constraint is that (A, B, C) is in the observer form, i.e., the plant is represented as:

Page 89

Adaptive Observers

where the entries of A and B in the observer form are determined by the 2n coefficients of the plant transfer function, and C = [1, 0, …, 0]ᵀ. Since the parameters of this representation are exactly the transfer-function coefficients, we can use the techniques presented in the previous sections to estimate them.

Page 90

Adaptive Observers

The disadvantage is that in a practical situation x may represent physical variables of interest, whereas the state of the observer-form representation may be an artificial state vector. However, an adaptive observer motivated by the Luenberger observer structure is given by

Page 91

Adaptive Observers

Here the observer matrix is a stable design matrix whose eigenvalues are the desired eigenvalues of the observer. A wide class of adaptive laws may be used to generate the parameter estimates online. As in the previous chapter, we develop the parametric model

Page 92

THE END