
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 46, NO. 7, JULY 2001, pp. 1014–1027

Backstepping Design with Local Optimality Matching

Zigang Pan, Member, IEEE, Kenan Ezal, Member, IEEE, Arthur J. Krener, Fellow, IEEE, and Petar V. Kokotović, Fellow, IEEE

Abstract—In this study of the nonlinear $H^\infty$-optimal control design for strict-feedback nonlinear systems, our objective is to construct globally stabilizing control laws to match the optimal control law up to any desired order, and to be inverse optimal with respect to some computable cost functional. Our recursive construction of a cost functional and the corresponding solution to the Hamilton–Jacobi–Isaacs (HJI) equation employs a new concept of nonlinear Cholesky factorization. When the value function for the system has a nonlinear Cholesky factorization, we show that the backstepping design procedure can be tuned to yield the optimal control law.

Index Terms—Backstepping, disturbance attenuation, inverse optimality, local optimality, nonlinear Cholesky factorization.

I. INTRODUCTION

AFTER a successful solution of the linear $H^\infty$-optimal control problem, recent research attention has been focused on the robust control design of nonlinear systems [2]–[6]. The dynamic game approach [7] provides a natural setting for worst-case designs and requires the solution of a Hamilton–Jacobi–Isaacs (HJI) equation.

Although a general solution method is not available for HJI equations, a local solution exists when the nonlinear system has a controllable Jacobian linearization and the cost functional has a Taylor series expansion with quadratic leading terms. Similar to the solution of the Hamilton–Jacobi–Bellman equation [8], [9], the solution to the HJI equation can be found by computing the coefficients of its Taylor series expansion [10]–[12]. This Taylor series solution may provide an adequate approximation to the optimal control law in a small neighborhood of the origin, but it may be unsatisfactory, or even unstable, in a larger region.

Among the recent advances in nonlinear feedback design surveyed in [13] and [19], a systematic procedure, known as integrator backstepping [14], is applicable to nonlinear systems in strict-feedback form. This procedure has been perfected in a recent book [15]. The backstepping design offers considerable flexibility at each step of the control design, but it does not clarify which choices will lead to a better overall design.

Manuscript received November 2, 1999; revised October 30, 2000. Recommended by Associate Editor P. Tomei. This work was supported in part by the National Science Foundation under Grant ECS-9812346 and the Air Force Office of Scientific Research under Grant F49620-00-1-0358. This research first appeared as a technical report [1].

Z. Pan is with the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH 45221 USA (e-mail: [email protected]).

K. Ezal is with Toyon Research Corporation, Goleta, CA 93117 USA.

A. J. Krener is with the Department of Mathematics, University of California, Davis, CA 95616 USA.

P. V. Kokotović is with the Center for Control Engineering and Computation, Department of Electrical and Computer Engineering, University of California at Santa Barbara, Santa Barbara, CA 93106 USA.

Publisher Item Identifier S 0018-9286(01)06622-3.

In this paper, we approach the $H^\infty$-optimal control problem for strict-feedback nonlinear systems by combining the Taylor series approach with the backstepping design. Our objective is to construct globally stabilizing control laws which match the optimal control law up to any desired order and are inverse optimal with respect to some computable cost functional. Our recursive procedure employs a new concept of nonlinear Cholesky factorization, such that the given positive definite function is equal to the sum of squares of the state variables when expressed in appropriate coordinates. We derive conditions under which a given positive definite nonlinear function has a nonlinear Cholesky factorization. When the optimal value function has a nonlinear Cholesky factorization, we show that the backstepping procedure can be tuned to yield the optimal control design. We develop a recursive computation scheme for a factorizable nonlinear function that matches the optimal solution up to any desired order of its Taylor series expansion. Using this approximating function and its nonlinear Cholesky factorization, we recursively construct the desired matching controller by another backstepping procedure. A simulation example illustrates the theoretical findings. A detailed discussion of the first-order matching design can be found in [16].

Inverse optimal design for nonlinear systems has been investigated in earlier studies [17], [18]. In [17], robust control problems for systems subject to bounded disturbances are studied, for which robust inverse optimal controllers that guarantee input-to-state stability of the closed-loop systems are obtained. In [18], it is shown that input-to-state stability is both necessary and sufficient for inverse optimality of nonlinear systems. The objective of this paper, on the other hand, is to design inverse optimal controllers which achieve a desired level of disturbance attenuation with any desired order of local optimality.

We formulate the problem in Section II. In Section III, we present the nonlinear Cholesky factorization, which is utilized for the backstepping design in Section IV. A numerical example in Section V illustrates the theory. We close with concluding remarks in Section VI.

II. PROBLEM FORMULATION

We consider the following strict-feedback nonlinear system with disturbance inputs:

$\dot{x}_1 = x_2 + f_1(x_1) + h_1^\top(x_1)\, w$    (1a)

$\dot{x}_2 = x_3 + f_2(x_1, x_2) + h_2^\top(x_1, x_2)\, w$    (1b)

$\quad\vdots$

$\dot{x}_n = u + f_n(x_1, \dots, x_n) + h_n^\top(x_1, \dots, x_n)\, w$    (1c)


where $x = (x_1, \dots, x_n)^\top$ is the state variable, $u$ is the scalar control input, and $w$ is the vector-valued disturbance input.

We restrict attention to locally Lipschitz feedback control laws. The disturbance is generated by some adversary player according to a feedback strategy that is piecewise continuous in time and locally Lipschitz in the state.

This nonlinear system can be compactly written as $\dot{x} = f(x) + g(x)\,u + h(x)\,w$, where $f$, $g$, and $h$ are assembled from the functions in (1). Associated with it, we introduce a cost functional with integrand $l(x, u) = q(x) + r(x)\,u^2$.¹ The design objective is to minimize the worst-case disturbance attenuation level, i.e., the smallest $\gamma$ for which the $L_2$ norm of the penalized variables is bounded by $\gamma$ times the $L_2$ norm of the disturbance; the optimal performance is denoted $\gamma^*$. The nonlinear $H^\infty$-optimal control problem then consists of the evaluation of $\gamma^*$ and of finding a controller that guarantees any desired disturbance attenuation level $\gamma > \gamma^*$.

This nonlinear problem has been shown [7] to be closely related to a class of zero-sum differential games with the cost functional indexed by the desired attenuation level $\gamma$

$J_\gamma(u, w) = \int_0^\infty \big( l(x(t), u(t)) - \gamma^2\, w^\top(t)\, w(t) \big)\, dt$    (2)

where the control is the minimizer and the disturbance is the maximizer. We are particularly interested in the upper value of the game, i.e., the infimum over admissible controllers of the supremum over disturbance strategies of (2). For any $\gamma > \gamma^*$, the upper value of this zero-sum game is zero when $x(0) = 0$.² On the other hand, for any $\gamma < \gamma^*$ the upper value is strictly positive. Because of the above-stated equivalence, we will focus on the zero-sum differential game problem (2) for a given value of $\gamma > \gamma^*$.

The following two assumptions characterize the functions in this study.

Assumption A1: The nonlinear functions $f_i$, $h_i$, $q$, and $r$ are $C^\infty$ (or simply smooth) in all of their arguments. The open-loop system (1) has the origin as an equilibrium: $f_i(0) = 0$, $i = 1, \dots, n$.

Assumption A2: The weighting function $l(x, u)$ is positive definite and radially unbounded in $x$ and $u$, and $l(0, 0) = 0$. Furthermore, the Hessian of $l$ evaluated at $0$ is a positive definite matrix. The function $r(x)$ is positive for any $x$; in particular, $r(0) > 0$.

The value function $V(x)$ of the nonlinear zero-sum differential game problem satisfies the HJI equation

$V_x(x)\, f(x) + q(x) - \frac{1}{4\, r(x)} \big( V_x(x)\, g(x) \big)^2 + \frac{1}{4\gamma^2}\, V_x(x)\, h(x)\, h^\top(x)\, V_x^\top(x) = 0$    (3)

¹We assume that the weighting function is quadratic in $u$. For the more general case $l(x, u) = q(x) + p(x)\,u + r(x)\,u^2$, we can easily remove the cross term by a simple redefinition of the control variable $\tilde{u} = u + \tfrac{1}{2}\, r^{-1}(x)\, p(x)$.

²In the case when $x(0)$ is not zero but fixed, we have the following inequality if $\gamma > \gamma^*$: $\int_0^T l(x, u)\, dt \le \gamma^2 \int_0^T w^\top w\, dt + C$, for some constant $C > 0$ which depends only on the initial condition, and for any terminal time $T > 0$.

and, when the solution is available, the optimal control law is given by

$u^*(x) = -\frac{1}{2\, r(x)}\, V_x(x)\, g(x).$    (4)

While it is very difficult to obtain $V(x)$ in explicit form, it is relatively simple to compute the Taylor series expansions of $V$ and $u^*$ around $x = 0$; see, for example, [8]–[10], [2], [11], [12]. Truncated Taylor series expansions yield near-optimal controllers when $x$ is close to the origin, but, when $x$ is large, they may not guarantee stability for the nonlinear system.
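The quadratic term of such a Taylor expansion is fixed by the game (soft-constrained $H^\infty$) algebraic Riccati equation of the Jacobian linearization. As a hedged illustration only, the sketch below assumes a generic linearization $\dot{x} = Ax + Bu + Dw$ with quadratic weights $x^\top Q x + u^\top R u$ (our own notation, not necessarily the paper's) and computes that quadratic term from the stable invariant subspace of the associated Hamiltonian matrix.

```python
import numpy as np
from scipy.linalg import eig

def game_riccati(A, B, D, Q, R, gamma):
    """Stabilizing solution P of the game Riccati equation
        A'P + PA + Q - P B R^{-1} B' P + (1/gamma^2) P D D' P = 0,
    taken from the stable invariant subspace of the Hamiltonian matrix.
    The quadratic part of the value function is then V(x) ~ x' P x."""
    n = A.shape[0]
    S = B @ np.linalg.solve(R, B.T) - (1.0 / gamma**2) * D @ D.T
    H = np.block([[A, -S], [-Q, -A.T]])
    w, V = eig(H)
    stable = V[:, w.real < 0]           # n eigenvectors with Re(lambda) < 0
    X, Y = stable[:n, :], stable[n:, :]
    P = np.real(Y @ np.linalg.inv(X))   # P = Y X^{-1}
    return 0.5 * (P + P.T)              # symmetrize against round-off

# Hypothetical double-integrator-like linearization (illustration only).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[1.0], [0.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P = game_riccati(A, B, D, Q, R, gamma=2.0)
print(P)   # quadratic approximation of the value function near the origin
```

For any attenuation level above the optimal one, the Hamiltonian matrix has no imaginary-axis eigenvalues and the construction returns the stabilizing solution; below that level the stable subspace no longer yields a valid quadratic value function.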

Our design objective is not only to match the optimal control design up to any desired order, but also to guarantee global stability and inverse robust optimality of the closed-loop system. This objective is made precise with the following two definitions.

Definition 1: A smooth control law is locally optimal matching to the $m$th order if the truncation of its Taylor series expansion up to the $m$th order is equal to that of the worst-case optimal control law $u^*$.

Definition 2: A smooth control law is inversely robust optimal if there exist a nonnegative state weighting function and a control weighting function such that the zero-sum game with the corresponding cost functional

(5)

admits the given control law as its minimax control law. Furthermore, the value function associated with this cost functional and the state weighting function are radially unbounded, and the control weighting function is positive for any $x$.

Throughout the paper, any function with an "over bar" denotes a function defined in terms of the transformed state variables, such as $\bar{V}$ denoting $V$ expressed in terms of a new state variable $\xi$. Given any polynomial function, we distinguish its $j$th-order homogeneous terms and its homogeneous terms up to $j$th order, so that the function is the sum of its homogeneous parts. For a smooth function, the same notation denotes the corresponding homogeneous terms of its Taylor series expansion around the origin. The variable $\xi = (\xi_1, \dots, \xi_n)$ denotes the transformed coordinates for the nonlinear system.

Using this notation, we make the following basic assumption.

Assumption A3: The Taylor series expansion of the optimal value function $V$ is given up to $(m+1)$st order, and the Taylor series expansion of the minimax control law $u^*$ is given up to $m$th order.

III. A NONLINEAR CHOLESKY FACTORIZATION

In the earlier linear-quadratic matching backstepping design [16], the key step that prescribes all the backstepping coefficients is the Cholesky factorization of the quadratic value function. In this section, we study the nonlinear version of the Cholesky factorization.

For any given positive definite and radially unbounded nonlinear function $V(x)$, we are interested in the necessary and sufficient conditions under which there exists a global upper triangular diffeomorphism $\xi = \Phi(x)$ such that $V(x) = \sum_{i=1}^{n} \phi_i^2(x)$, where $\Phi$ is called upper triangular if it can be written as

$\Phi(x) = \begin{bmatrix} \phi_1(x_1, x_2, \dots, x_n) \\ \phi_2(x_2, \dots, x_n) \\ \vdots \\ \phi_n(x_n) \end{bmatrix}$    (6)

Without loss of generality, we will limit our attention to zero-preserving transformations, that is, $\Phi(0) = 0$. While a lower triangular transformation is important for the backstepping control design, we pursue an upper triangular transformation just to parallel the well-known Cholesky factorization for positive definite matrices. The result obtained below has a direct counterpart for lower triangular transformations if we reverse the ordering of the state variables.

The Jacobian of the mapping $\Phi$ is

$\frac{\partial \Phi}{\partial x} = \begin{bmatrix} \partial \phi_1/\partial x_1 & * & \cdots & * \\ 0 & \partial \phi_2/\partial x_2 & \cdots & * \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \partial \phi_n/\partial x_n \end{bmatrix}$

where $*$ denotes terms of no particular interest. Since the mapping is a diffeomorphism, the Jacobian is always nonsingular and the diagonal partial derivatives are nonzero, $\partial \phi_i/\partial x_i \neq 0$, $i = 1, \dots, n$. Therefore, these quantities are sign definite. Without loss of generality, we consider only the case where the diagonal terms of the Jacobian are positive.

Definition 3: An upper triangular diffeomorphism $\Phi$ is said to be positive if the diagonal partial derivatives are positive, $\partial \phi_i/\partial x_i(x) > 0$, for any $x$.

The Cholesky factorization of a nonlinear radially unbounded function $V(x)$, with $V(0) = 0$, is defined as follows.

Definition 4: A radially unbounded function $V(x)$ is said to admit a nonlinear Cholesky factorization if $V(x) = \sum_{i=1}^{n} \phi_i^2(x)$ for some global positive upper triangular diffeomorphism $\Phi$ with $\Phi(0) = 0$.

It will be shown later that the nonlinear Cholesky factorization is unique within the class of positive diffeomorphisms. Given a positive upper triangular state transformation $\xi = \Phi(x)$, it is still quite difficult to obtain the inverse transformation of $\Phi$. To simplify this, we introduce the notion of the simple upper triangular diffeomorphism.

Definition 5: An upper triangular diffeomorphism $\Phi$ is said to be simple if $\partial \phi_i/\partial x_i \equiv 1$, $i = 1, \dots, n$. Then, the individual coordinate map is given by $\phi_i(x) = x_i + s_i(x_{i+1}, \dots, x_n)$, $i = 1, \dots, n$, where the $s_i$ (we have set $s_n \equiv 0$) are arbitrary smooth functions such that $s_i(0) = 0$.

There exists an equivalence relationship between the positive upper triangular diffeomorphisms and the simple ones, in terms of yet another factorization.

Proposition 1: An upper triangular diffeomorphism $\Phi$ is positive if and only if it can be uniquely factored as $\Phi(x) = \Lambda(x)\, S(x)$, where $S$ is a simple upper triangular diffeomorphism and $\Lambda(x)$ is a diagonal matrix

(7)

such that, for each $i$

(8)

(9)

(10)

for all $x$.

Proof: The sufficiency part is straightforward, by observing that the diagonal of the Jacobian of the factored map is exactly the diagonal of $\Lambda(x)$, which is nonsingular. Furthermore, the unboundedness condition (10) implies that the diffeomorphism is global.

For the necessity part, we observe that , for any , has a unique root when , or . The functions , are smooth by the positiveness of . This uniquely defines a simple upper triangular diffeomorphism

Then, the positive and smooth diagonal matrix is defined by

Evaluating , , where , proves that satisfies (9). The property (10) is satisfied since . This completes the proof.

To obtain the nonlinear Cholesky factorization of $V(x)$ we will go through $n$ steps of a recursive construction. As shown in the following lemma, the objective of each step is to find one coordinate transformation.

Lemma 1: Consider a nonlinear function , where is a scalar and is a vector. Assume the following conditions hold:

C1) is in all of its arguments;
C2) there is a constant such that, for each fixed , , ;
C3) for each fixed , is radially unbounded in , i.e., .

Then, there exists a unique mapping such that

1) is in all of its arguments;
2) for each fixed , , ;
3) for each fixed , maps onto , i.e., ;
4) the function can be decomposed as

(11)

for some function , such that for any ,

if and only if the following condition holds:

E1) for each fixed , has a unique root , which is simple, i.e.,

Proof: To prove sufficiency, we let conditions E1), C1)–C3) hold. Conditions C2) and C3) imply that, for each fixed , the function has at least a local minimum in . This, coupled with the smoothness assumption C1) and the uniqueness assumption E1), implies that the minimum is achieved at , so that , . The nonsingularity of guarantees that the root function is by the implicit function theorem.

Let and be defined as

(12)

(13)

Then it follows from C2) that for any . The function is well defined because is the root of the function . By E1) and the fact that achieves the minimum, we have that , ; , . Therefore, , . This allows the following definition of a positive function :

(14)

since

(15)

The mapping is then defined by

(16)

Statement 1 follows from the definition of and . Statement 3 holds by the assumption C3) and the definition of , while statement 4 holds by the definition of and .

For statement 2, we evaluate the partial derivatives

Hence, the identity

which proves statement 2.

Next, we show that the mapping is unique, which further implies that the function is unique. We will prove this by contradiction. Suppose there exists another mapping, which corresponds to a function , that also satisfies statements 1–4, and , for some . Because , we have either a) or b) .

In the case of b), we observe the following equality:

This is a contradiction because both and are positive by statement 2.

In the case of a), we conclude that , . In particular, . This leads to the contradiction

where the last inequality follows from statement 2. Consequently, the hypothesis is not valid, and the mapping is uniquely defined. This completes the sufficiency proof.

To prove necessity, we assume that there exists a mapping satisfying statements 1–4. Then

It is concluded that vanishes if and only if , because of statement 2. Again by statement 2, is strictly increasing in . Coupled with the radial unboundedness of statement 3, we conclude that there is a unique root for the equation . Along with conditions C2) and C3), this implies that, for each fixed , the function has a unique minimum in . The result E1) follows from

and consequently, the necessity part of the lemma.

We observe that the construction of the mapping in the above lemma involves the following steps. First, we solve for the unique smooth root function, which is guaranteed to exist by condition E1). Next, we define the function by (12). Then, the mapping is given by (16).
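To make these steps concrete, the following SymPy sketch applies exactly this root-then-residual construction to a small strictly convex quartic of our own choosing (an illustration, not one of the paper's examples). One application of the Lemma 1 step eliminates $x_1$; a second application to the remainder eliminates $x_2$, producing an upper triangular, positive change of coordinates in which the function is a sum of squares.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# A strictly convex, radially unbounded test function (hypothetical example).
V = x1**2 + x1*x2 + x2**2 + x2**4

# Step 1 (Lemma 1 with scalar x1): unique root of dV/dx1 = 0 for fixed x2.
x1_star = sp.solve(sp.diff(V, x1), x1)[0]     # -> -x2/2
V2 = sp.expand(V.subs(x1, x1_star))           # remainder, depends on x2 only
xi1_sq = sp.factor(sp.expand(V - V2))         # -> (2*x1 + x2)**2 / 4
xi1 = x1 + x2/2                               # positive square-root branch

# Step 2 (last step): factor the remainder V2(x2) = 3*x2**2/4 + x2**4.
xi2 = x2 * sp.sqrt(sp.Rational(3, 4) + x2**2)  # xi2**2 = V2 and d(xi2)/dx2 > 0

# Verify the nonlinear Cholesky factorization: V is a sum of squares of (xi1, xi2),
# and the map is upper triangular (xi2 depends only on x2) and positive.
assert sp.simplify(xi1**2 + xi2**2 - V) == 0
print(x1_star, V2, xi1, xi2)
```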

Now, we are in a position to present the $n$-step recursive construction of the nonlinear Cholesky factorization of a given radially unbounded, positive definite, and smooth function $V(x)$.

Step 1: For notational consistency we let . Since is and radially unbounded, the conditions C1)–C3) of Lemma 1 are satisfied for , with and .

As delineated in Lemma 1, in order for a smooth transformation to exist we make the following assumption:

B1) for any fixed -dimensional vector , the algebraic equation has a unique root , which is simple.

Under this assumption, there exists a smooth mapping and a smooth nonlinear function such that

R1.1) for any fixed , , ;


R1.2) for each fixed , maps onto , i.e., ;
R1.3) the function can be decomposed as

We note here that the assumption B1) is also necessary for the existence of the transformation, as proved in Lemma 1. Since the zero vector is the global minimum of the function , we have

R1.4) the mapping is zero preserving, .

Since is radially unbounded, positive definite, and smooth, we can conclude that the nonlinear function is also radially unbounded, positive definite, and smooth, based on the equality . This nonlinear function is uniquely defined by Lemma 1 and, hence, the construction in the subsequent steps is independent of the construction at this step.

Step $i$: We assume inductively that the mappings are constructed from the previous steps , and we are given a radially unbounded, positive definite, and smooth nonlinear function , that satisfies .

The radial unboundedness and smoothness of imply conditions C1)–C3) of Lemma 1. We make the following assumption, in order for a smooth mapping to exist:

Bi) for any fixed -dimensional vector , the algebraic equation has a unique root , which is simple.

Under this assumption, there exists a smooth transformation and a smooth nonlinear function such that

Ri.1) for any fixed , , ;
Ri.2) for each fixed , maps onto , i.e., ;
Ri.3) the function can be decomposed as .

Again, the assumption Bi) is necessary for the existence of the transformation, as shown in Lemma 1. The mapping is also zero preserving, since admits a global minimum at .

Ri.4) The mapping is zero preserving, i.e., .

The following equality holds, as prescribed by the construction in Lemma 1

Therefore, from the fact that is radially unbounded, positive definite, and smooth, we can conclude that the nonlinear function is also radially unbounded, positive definite, and smooth. Furthermore, this nonlinear function is uniquely defined by Lemma 1 and, hence, the construction of the subsequent steps is independent of the construction at this step.

Furthermore, at the $n$th step, we have , which is equivalent to

(17)

This completes the $n$-step recursive construction of the nonlinear Cholesky factorization, which is summarized as follows.

Theorem 1: A radially unbounded, positive definite, and smooth function $V(x)$ with $V(0) = 0$ has a unique nonlinear Cholesky factorization if and only if the assumptions B1)–Bn) are satisfied. Furthermore, under the same assumptions, $\Phi$ can be factored as

(18)

where $S$ is a simple upper triangular diffeomorphism and $\Lambda$ is a diagonal matrix that satisfies (7)–(10).

Proof: To prove sufficiency, let assumptions B1)–Bn) hold. By the recursive construction in Lemma 1 there exist triangular mappings such that the conditions R1.1)–R1.4), ..., Rn.1)–Rn.4) are satisfied and (17) holds. These conditions imply that $\Phi$ is a positive upper triangular diffeomorphism.

To prove the necessity part of the theorem, we assume that $V$ admits a nonlinear Cholesky factorization. We first consider the mapping and define . Since is radially unbounded and smooth, it satisfies conditions C1)–C3) of Lemma 1. The definition of the nonlinear Cholesky factorization implies that and satisfy statements 1–4 of Lemma 1, so that E1) holds, which is equivalent to B1). Therefore, B1) is necessary for the existence of a nonlinear Cholesky factorization. When B1) is satisfied, is uniquely defined, and so is .

Arguing inductively for , we determine, for each , that the function is independent of the construction of the previous mapping. The function is radially unbounded and smooth. Then, by Lemma 1, the unique mapping exists only if assumption Bi) holds. Therefore, assumptions B1)–Bn) are necessary for the existence of a nonlinear Cholesky factorization.

By the uniqueness of the mappings at each step, the diffeomorphism is uniquely defined whenever it exists.

The second part of the theorem follows directly from Proposition 1.

The motivation for the introduction of the simple diffeomorphism in (18) is that the computation of the inverse mapping involves only substitutions.

Next, we consider two illustrative examples. The first example is for quadratic functions. The second example involves a fourth-order polynomial and shows that the ordering of the coordinate variables in Definition 4 is critical for the existence of the factorization.

Example 1: For $V(x) = x^\top P x$, with positive definite $P$, the nonlinear Cholesky factorization mapping is $\Phi(x) = U x$, where the upper triangular matrix $U$ is the Cholesky factor of the matrix $P$, $P = U^\top U$.
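A quick numerical check of Example 1 (a NumPy sketch on an arbitrary positive definite matrix of our own choosing): the upper triangular factor $U$ with $P = U^\top U$ turns $x^\top P x$ into the sum of squares of the new coordinates $\xi = U x$.

```python
import numpy as np

P = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # an arbitrary positive definite matrix

L = np.linalg.cholesky(P)               # lower triangular, P = L @ L.T
U = L.T                                 # upper triangular Cholesky factor, P = U.T @ U

x = np.array([0.7, -1.3])
xi = U @ x                              # new coordinates xi = U x
assert np.isclose(x @ P @ x, xi @ xi)   # V(x) = x'Px equals the sum of squares of xi
```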

Example 2: The smooth, positive definite, and radially unbounded function admits a nonlinear Cholesky factorization in the ordering , with given by . If we switch the ordering, we obtain the function . Applying our result, we observe that the equation has three distinct roots when . By Theorem 1, the function does not admit a nonlinear Cholesky factorization.

A question of practical interest is whether assumptions B1)–Bn) can be checked without actually carrying out the recursive construction. It turns out that this is possible when $V$ is strictly convex. To prove this proposition, we need the following corollary to Lemma 1.

Corollary 1: For a nonlinear function that satisfies C1)–C3), assume that

C4) it is strictly convex, so that , .

Then, E1) is satisfied and statements 1–4 of Lemma 1 hold. Furthermore, the nonlinear function of the decomposition (11) is strictly convex.

Proof: To show that E1) holds under the convexity assumption, we note that C4) implies that is strictly convex in , so that there is at most one root for the equation . By C2) and C3), there exists at least one root because admits a global minimum for any fixed . Therefore, this root is unique and simple because . Hence, E1) holds.

To prove that , we show that , for any nonzero vector and any fixed vector . For each and nonzero , there exists a scalar such that satisfies , where is the unique root of . Thus, we have

This proves the strict convexity of .

For a strictly convex function , a nonlinear Cholesky factorization exists for any ordering of the coordinate variables . This is a consequence of the following corollary to Theorem 1, obtained by a recursive application of Corollary 1.

Corollary 2: Consider a positive definite, radially unbounded, and smooth nonlinear function $V(x)$, where $V(0) = 0$. The function admits a nonlinear Cholesky factorization, as defined in Definition 4, if it is strictly convex, i.e., if its Hessian is positive definite for all $x$.
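For polynomial candidates, the strict-convexity hypothesis of Corollary 2 can be verified symbolically. The sketch below checks, via Sylvester's criterion, that the hypothetical quartic from the earlier sketch (our own example, not the paper's) has a positive definite Hessian everywhere, so it admits a nonlinear Cholesky factorization in either variable ordering.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
V = x1**2 + x1*x2 + x2**2 + x2**4        # hypothetical strictly convex quartic

H = sp.hessian(V, (x1, x2))              # [[2, 1], [1, 2 + 12*x2**2]]
leading_minors = [H[0, 0], sp.expand(H.det())]
print(leading_minors)                    # [2, 3 + 24*x2**2], both positive for all x2
# By Sylvester's criterion the Hessian is positive definite everywhere,
# so Corollary 2 applies in either ordering of the variables.
```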

So far, we have presented a procedure to obtain a nonlinear Cholesky factorization for a given function. For the nonlinear robust control problem formulated in Section II, the value function $V$ is known only approximately via its Taylor series expansion. In the remainder of this section the objective is to derive a factorizable function that matches the function $V$ up to any given order of accuracy. We assume that the quadratic approximation of $V$ is positive definite. Our goal is to find, for any positive integer $m$, a radially unbounded and smooth function which has a nonlinear Cholesky factorization and whose Taylor series expansion matches that of $V$ up to $m$th order. An approximate nonlinear Cholesky factorization can easily be shown to exist by adding higher order terms to make the approximating function strictly convex. However, this simplistic convexification approach would cause serious computational difficulties. Instead, we present an explicit construction for the approximate function and the upper triangular diffeomorphism with milder nonlinear growth than the factor obtained via convexification.

Our recursive construction consists of $n$ steps. The following illustrates a typical step. Consider a polynomial function of order , where is a scalar and is a vector. Assume that and . We partition the matrix conformal with

where is a positive scalar. Our goal is to find which satisfies statements 1–4 of Lemma 1, and in (11) is such that with .

First, we evaluate the root of approximately. Let . Then, is an th-order polynomial and . This implies that the equation admits a unique root .

Although there may be several roots to the equation

(19)

we are only interested in the root around . The Taylor series expansion of the root function up to st order can be computed by equating homogeneous terms of the same order. Starting with , we obtain the higher order terms

(20)

for . Therefore, we have

(21)
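The order-by-order computation of the root function can be automated. The following SymPy sketch (on a hypothetical quartic in our own notation, not the paper's) recovers the Taylor expansion of the root of $\partial V/\partial x_1 = 0$ up to a prescribed degree by a fixed-point iteration that gains one homogeneous order per pass, reproducing what equating homogeneous terms of the same order gives.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

def truncate(expr, var, max_deg):
    """Keep only monomials in `var` of degree <= max_deg."""
    expr = sp.expand(expr)
    return sum(expr.coeff(var, k) * var**k for k in range(max_deg + 1))

# Hypothetical polynomial approximation of the value function (our own example).
V = x1**2 + x1*x2 + x2**2 + x1**3*x2 + x2**4
q = sp.diff(V, x1)                        # 2*x1 + x2 + 3*x1**2*x2
c = q.diff(x1).subs({x1: 0, x2: 0})       # = 2, nonzero since V is locally quadratic

m = 3                                     # desired order of the root function
root = sp.Integer(0)
for _ in range(m):                        # each pass fixes at least one more order
    root = truncate(root - q.subs(x1, root) / c, x2, m)

print(sp.expand(root))                    # -x2/2 - 3*x2**3/8
# Check: substituting back leaves only terms of degree m + 2 and higher.
print(sp.expand(q.subs(x1, root)))        # lowest-degree term is O(x2**5)
```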

Next, by adding higher order correction terms we enforce to be the unique root of (19), and to satisfy C1)–C3) and E1). Toward this end, we let and define a function through

(22)

From (21), we get


and

(23)

Next, we proceed to modify the high-order polynomial terms of the function to guarantee that it is larger than a positive constant for all , so that C2), C3), and E1) are satisfied.

To modify the high-order terms of in the coordinates, we need a polynomial of order such that

(24)

for some constant . From , for the root function to exist around the origin, we must choose to be a fraction of , say . The substitution of into (24), where is expanded as

(25)

and equating polynomials of the same order, results in

for . Therefore, by (23) and (24), we have

(26)

For large values of , and involve high-order polynomials that grow rapidly in magnitude as deviates from the origin. We alleviate this difficulty with two smooth scaling functions and , which satisfy

(27)

(28)

Using these scaling functions in (26), we have the following identity:

(29)

where . We note that the multiplier has been removed in the term because

Define

and . We note that is a smooth positive function of , and is an th-order polynomial of . Then, , where

(30)

Since is the unique root of and

condition E1) is satisfied for the function . By inspection, condition C2) is satisfied. For condition C3) to hold, it is sufficient that

for some positive constant .

With the scaling function , we can shape the growth rate of the function , which corresponds to the virtual control law in the backstepping design. With the scaling function , we can indirectly shape the growth rate of the function , which further determines the growth rate of the approximate value function and the virtual control law in the backstepping design. For induction purposes, we verify that the quadratic part of the function is , where is positive definite due to the positive definiteness of .

By repeated application of the above process, we can prove the following theorem.

Theorem 2: Assume that a smooth function with is locally quadratic, , and is positive definite. Then, for any desired matching order $m$, there exists a radially unbounded and smooth function which admits a nonlinear Cholesky factorization such that and , where , a simple upper triangular diffeomorphism, and , a diagonal matrix, are recursively constructed by the above procedure.

When is an analytic function and admits a Cholesky factorization , then there exists an such that , , , where the function is constructed as above under the additional assumption that as the scaling functions and , introduced at each step of the construction, converge to 1 on the set .

ceding the theorem. When is positive definite and analytic,the functions and are analytic because the elements of

are roots of analytic equations. By the analytic versionof the implicit function theorem, they are also analytic under thenecessary and sufficient conditions of Theorem 1.


For the second part of the theorem, we first consider the case when the scaling functions and are set to 1 at each step of the recursive construction. Then, the simple diffeomorphism is the truncated Taylor series approximation to up to the order of . By the analyticity of , it converges to as within a radius of convergence. The nonlinear function in (25) is also the truncated Taylor series approximation for the function . It converges, again due to the analyticity of , as within a radius of convergence. This further implies that converges to as within the same radius of convergence. Hence, the function converges to as within the common radius of convergence.

For the general case, when the scaling factors and converge to 1 as , we can conclude the same convergence result from the fact that a product converges when all its factors converge.

In general, the function generated by the recursive procedure is not convex. Therefore, the approximate function includes lower order polynomial terms as compared with the approximation resulting from the simplistic convexification approach.

Equipped with this high-order matching result, we proceed to present our design procedure for an inverse optimal control law that matches the optimal design up to any given order.

IV. HIGHER ORDER MATCHING CONTROL DESIGN

Applying the nonlinear Cholesky factorization result to the robust optimal control problem for system (1) and cost functional (2), we can prove that a backstepping procedure can produce the optimal solution as long as the value function is factorizable in a lower triangular fashion.

Theorem 3: Consider the nonlinear system (1) and cost functional (2) under Assumptions A1) and A2). Assume that there exists a smooth value function $V(x)$ satisfying the HJI equation (3), which has a nonlinear Cholesky factorization in the reverse order of the state variables, $x_n, x_{n-1}, \dots, x_1$. Then, the value function, the minimax control law (4), and the corresponding worst-case disturbance can be obtained by a backstepping procedure.

The proof is straightforward and can be found in [1].

Theorem 3 assumes knowledge of the value function $V(x)$. Instead, we pursue the inverse optimal design with locally optimal matching, as prescribed in Definitions 1 and 2.

Let $m$ be the desired matching order. By a result of [8] and [2], we obtain the Taylor series expansion of $V$ up to $(m+1)$st order. Using Theorem 2, we can find a function that matches $V$ up to $(m+1)$st order and can be factored as in (18). Clearly, this function satisfies the HJI equation (3) up to $m$th order. Introducing the transformed coordinates, we further assume that the factors of the transformation are given by

(31a)

(31b)

where the subscript indicates that this nonlinear function is to be matched in the backstepping design. System (1) in the new coordinates is

or, in compact form

Then, the function can be expressed as and satisfies the HJI equation (3) up to $m$th order

(32)

As a consequence of this construction, we have

for any and all . Setting yields the following approximate HJI equation, for any :

(33)

where and

We are now ready to present the backstepping procedure for the high-order optimal matching design with global inverse optimality.


Step 1: Define and . The virtual control law prescribed by the nonlinear Cholesky factorization of is . Under this control law, the value function satisfies the HJI equation (33), with . Therefore, the derivative of is given by

where is the remainder of the approximate HJI equality (33)

This suggests the following smooth virtual control law:

which is equivalent to

(34)

Because and , we conclude that , and hence is of higher order than the desired matching order. Under this control law, we have

Step $i$, : From the preceding step we have a value function and a virtual control law for , which can be decomposed into a matching part and a high-order part as follows:

The derivative of is

(35)

where is the corresponding worst-case disturbance

(36)

The dynamics of the state variable are

(37a)

(37b)

where and , , are high-order nonlinear (possibly vector-valued) functions.

By hypothesis, the transformation matches , , up to th order. At the th step, we define the new coordinate . Because of the matching between and , and the matching between and , the dynamics of is given by

where and are high-order nonlinear functions.

Again, based on the approximate factorization, we recursively define the value function for this step as . Using the relationship (33), the derivative of can be expressed as follows:

where

(38)

(39)

By the approximate HJI equality (33), we conclude that . From the expressions (38), (39), and (36), we also conclude that contains as a factor. Hence, the smooth virtual control law for is


This leads to the satisfaction of a dissipation inequality for the dynamics with supply rate . The equivalent virtual control law for is

Therefore, the matching property of the virtual control law is verified. Under this control law, the derivative of is

This completes the $i$th design step, for up to .

Step $n$: For the final $n$th step, we define . Now the actual control variable $u$ appears in the derivative of :

where and are present due to the higher-order mismatches between and .

The value function for this final step, which becomes the value function for the complete system, is . Because of the approximate HJI equality (32), the derivative of is equal to the following expression, after two "completion of squares" arguments:

where

Because the approximating value function satisfies (32), the function is higher order such that . Furthermore, from the definition of , we conclude that contains as a factor, and is a smooth function such that .

As we choose the locally Lipschitz continuous function

if
if
if

(40)

where

and is any positive constant. Therefore, the control law

(41)

is the desired inverse optimal matching control design, and it is inverse robust optimal with respect to the cost functional , where

(42)

With this choice of , we note that, in a neighborhood of the origin,

for some positive constants and , and . Therefore, in a neighborhood of the origin, we have . This implies that the control law (41) satisfies the matching requirement. In the case when , we have . In the case when , we have , which implies that , and, therefore, . Then, and

This completes the control design. This result is summarized in the following theorem.

Theorem 4: Consider the nonlinear system (1) and cost functional (2) with Assumptions A1)–A3). Let the function be any radially unbounded, positive definite, and smooth function that has the same Taylor series expansion as the value function $V$ up to $(m+1)$st order and admits a nonlinear Cholesky factorization in the reverse order of the state variables $x_n, \dots, x_1$. Then, the control law (41) is locally optimal matching up to $m$th order and is inverse robust optimal with respect to the cost functional (5), where the state weighting function is a positive definite and radially unbounded function defined by (42), and the control weighting function is a positive function defined by (40).

Proof: We note that the constructed value function is positive definite and radially unbounded because is assumed to be positive definite and radially unbounded, and . The function is positive definite and radially unbounded because it is larger than or equal to . Furthermore, the function satisfies the HJI equation and the matching requirements by the recursive construction.

In the above recursive construction, we can start from any given factorizable function that approximates the value function $V$. Clearly, for different choices of this function, the construction leads, in general, to different controller designs that all satisfy the inverse robust optimality requirement and the local matching property. A question that remains to be answered is whether there is any criterion to distinguish which choice is more appropriate for a particular problem.

V. EXAMPLE

We consider a second-order nonlinear system

with cost functional

where the desired disturbance attenuation level $\gamma$ is fixed to a prescribed value.

The HJI equation associated with this zero-sum differential game is

(43)

where $V(x)$ is the value function for the game, if it exists. The Taylor series expansion of the function around the origin up to third order is given by [2]

A. First-Order Matching

For $m = 1$ we are only concerned with the second-order Taylor series expansion of the value function

The controller design starts by first obtaining a factorizable function that matches the value function up to second order. It turns out that this function is exactly the quadratic approximation. The nonlinear Cholesky factorization is then equivalent to the Cholesky factorization for positive definite matrices

Hence, the simple diffeomorphism is given by . In the new coordinates, the original system is given by

and the cost functional is

Next, we proceed through the two steps of backstepping. In the first step, we set , and choose a value function , so that

Hence, the desired virtual control law for is . The worst-case disturbance for this step is .

In the second step, which is also the last step, we define the state variable and obtain

The value function for this step is , and, after some algebraic manipulations, we get

Therefore, we have , and define as in (40) with , , , and . Then, the inverse optimal control law with optimality matching up to first order is

The worst-case disturbance is .

B. Second-Order Matching

For $m = 2$, we are concerned with the third-order Taylor series expansion of the value function. Again, we start by obtaining a factorizable function that matches the value function up to third order. We follow the recursive procedure of Section III to construct the function that is factorizable in the order . Let

Using (20), the root function for up to second order is . In terms of and , the function is obtained as . Choosing , we obtain . Then, we have the following identity:


where , and is a design parameter.

Hence, the function to be matched at the second step of the nonlinear Cholesky factorization is . Let . For this last step, the root function is fixed to be . Then, we have . With , is given by . Thus, matches the following function up to third order:

where , and is a design parameter.

The scaling functions and are chosen to reduce the control effort for the inverse optimal matching design. Compared with a design using and , the control effort is dramatically reduced. A choice for which could further reduce the control effort, but which leads to a more complicated control law, is

This completes the construction of the matching function

where

with the properties

and where

This function matches the value function up to the third order.

The simple diffeomorphism in the nonlinear Cholesky factorization for the function is

and, in the coordinates, the original system becomes

The approximate value function can be expressed as

and the cost functional is

With the function available, we now proceed to the two steps of backstepping for the construction of an inversely robust optimal controller that matches the optimal control law up to second order.

For the first step, we define , , and get

where . Hence, the desired virtual control is , and the worst-case disturbance for this step is .

For the second step, we define a new state variable

which satisfies

where denotes the terms of th order and above. The value function for this step is


Fig. 1. Phase portraits.

Fig. 2. State trajectories.

where . Now can be chosen according to (40). The inverse robust optimal control law is given by

and the corresponding worst-case disturbance is

Simulations are used to illustrate the theoretical findings. The design parameters and are chosen to reduce the control magnitude. For comparison, the second-order local controller has also been simulated with the control law . The phase portraits for the disturbance-free closed-loop systems under the two controllers are shown in Fig. 1. While the manifold is the stability boundary for the local controller, the matching controller guarantees global asymptotic stability. Starting at point , we have depicted the state trajectories in Fig. 2 and the control trajectories in Fig. 3. We observe that initially the control magnitude for the matching controller is somewhat larger than that for the local controller. This is the price to be paid for global stability and robustness with respect to the disturbance input.

Fig. 3. Control efforts.
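The qualitative comparison reported in Figs. 1–3 (a locally designed Taylor-series controller versus a globally stabilizing design) can be reproduced in spirit with a short simulation. The sketch below uses a hypothetical strict-feedback plant and a plain backstepping stabilizer, not the paper's example dynamics or its inverse optimal matching controller, which are not reproduced here; it only illustrates the type of disturbance-free closed-loop comparison carried out in this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical strict-feedback plant (not the paper's example):
#   x1' = x2 + x1**2,   x2' = u

def local_controller(x):
    # controller designed from the Jacobian linearization only;
    # for large initial conditions it can lose stability on this plant
    return -x[0] - 2.0 * x[1]

def backstepping_controller(x):
    # globally stabilizing backstepping law for the hypothetical plant:
    # alpha(x1) = -x1 - x1**2,  z = x2 - alpha(x1),  V = (x1**2 + z**2)/2
    z = x[1] + x[0] + x[0]**2
    return -x[0] - z - (1.0 + 2.0 * x[0]) * (z - x[0])

def closed_loop(controller):
    def rhs(t, x):
        u = controller(x)
        return [x[1] + x[0]**2, u]
    return rhs

x0 = [0.4, 0.0]                       # moderate initial condition, both laws converge
t_span = (0.0, 10.0)
for name, ctrl in [("local", local_controller), ("backstepping", backstepping_controller)]:
    sol = solve_ivp(closed_loop(ctrl), t_span, x0, max_step=0.01)
    print(name, "final state:", sol.y[:, -1])
```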

VI. CONCLUSION

We have developed a design procedure which combines the Taylor series expansion and integrator backstepping to solve the nonlinear $H^\infty$-optimal control problem for strict-feedback nonlinear systems. The procedure recursively constructs a cost functional and the corresponding solution to the HJI equation for a strict-feedback nonlinear system such that the optimal performance is matched up to any desired order of the Taylor series expansion. Moreover, this procedure is also applicable to the nonlinear regulator problem. What lies at the heart of the recursive construction is the new concept of nonlinear Cholesky factorization. The nonlinear Cholesky factorization of a given positive definite function is defined as an upper triangular coordinate transformation such that, in the new coordinates, the given function is equal to the sum of squares of the state variables. We have obtained precise conditions under which a given nonlinear function has a nonlinear Cholesky factorization.

When the value function for the $H^\infty$-optimal control problem has a nonlinear Cholesky factorization, we have shown that the backstepping procedure can be tuned to result in the optimal control design, thus justifying backstepping as an optimal design method. Making use of the Taylor series expansion of the value function, we have developed explicit recursive computation schemes for a globally stabilizing and inversely optimal controller that matches the optimal solution up to any desired order. A simulation example has been included to illustrate the theoretical findings.

REFERENCES

[1] Z. Pan, K. Ezal, A. J. Krener, and P. V. Kokotović, "Backstepping design with local optimality matching," University of California at Santa Barbara, Santa Barbara, CA, Tech. Rep. CCEC 96-0810, Aug. 1996.

[2] A. J. van der Schaft, "L2-gain analysis of nonlinear systems and nonlinear H∞ control," IEEE Trans. Automat. Contr., vol. 37, pp. 770–784, June 1992.

[3] A. Isidori and W. Kang, "H∞ control via measurement feedback for general nonlinear systems," IEEE Trans. Automat. Contr., vol. 40, pp. 466–472, Mar. 1995.

[4] J. A. Ball, J. W. Helton, and M. Walker, "H∞ control for nonlinear systems with output feedback," IEEE Trans. Automat. Contr., vol. 38, pp. 546–559, Apr. 1993.

[5] R. Marino, W. Respondek, A. J. van der Schaft, and P. Tomei, "Nonlinear H∞ almost disturbance decoupling," Syst. Control Lett., vol. 23, pp. 159–168, 1994.

[6] Z. Pan and T. Başar, "Robustness of minimax controllers to nonlinear perturbations," J. Optim. Theory Appl., vol. 87, pp. 631–678, Dec. 1995.

[7] T. Başar and P. Bernhard, H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, 2nd ed. Boston, MA: Birkhäuser, 1995.

[8] D. L. Lukes, "Optimal regulation of nonlinear dynamical systems," SIAM J. Control, vol. 7, pp. 75–100, Feb. 1969.

[9] E. G. Al'brecht, "On the optimal stabilization of nonlinear systems," PMM—J. Appl. Math. Mech., vol. 25, pp. 1254–1266, 1961.

[10] W. Kang, P. K. De, and A. Isidori, "Flight control in a windshear via nonlinear H∞ method," in Proc. 31st IEEE Conf. Decision Control, Tucson, AZ, 1992, pp. 1135–1142.

[11] A. J. Krener, "The construction of optimal linear and nonlinear regulators," in Systems, Models and Feedback: Theory and Applications, A. Isidori and T. J. Tarn, Eds. Boston, MA: Birkhäuser, 1992.

[12] ——, "Optimal model matching controllers for linear and nonlinear systems," in Proc. 2nd IFAC Symp., M. Fliess, Ed. Bordeaux, France: Pergamon Press, 1992, pp. 209–214.

[13] A. Isidori, Nonlinear Control Systems, 3rd ed. London, U.K.: Springer-Verlag, 1995.

[14] I. Kanellakopoulos, P. V. Kokotović, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Trans. Automat. Contr., vol. 36, pp. 1241–1253, Nov. 1991.

[15] M. Krstić, I. Kanellakopoulos, and P. V. Kokotović, Nonlinear and Adaptive Control Design. New York: Wiley, 1995.

[16] K. Ezal, Z. Pan, and P. V. Kokotović, "Locally optimal and robust backstepping design," IEEE Trans. Automat. Contr., vol. 45, pp. 260–271, Feb. 2000.

[17] R. A. Freeman and P. V. Kokotović, Robust Nonlinear Control Design: State-Space and Lyapunov Techniques. Boston, MA: Birkhäuser, 1996.

[18] M. Krstić and Z. Li, "Inverse optimal design of input-to-state stabilizing nonlinear controllers," IEEE Trans. Automat. Contr., vol. 43, pp. 336–350, Mar. 1998.

[19] P. Kokotović and M. Arcak, "Constructive nonlinear control: a historical perspective," Automatica, vol. 37, pp. 637–662, May 2001.

Zigang Pan (S'92–M'96) received the B.S. degree in automatic control from Shanghai Jiao Tong University, Shanghai, P. R. China, in 1990 and the M.S. and Ph.D. degrees in electrical engineering from the University of Illinois at Urbana-Champaign in 1992 and 1996, respectively.

In 1996, he was a Research Engineer at the Center for Control Engineering and Computation at the University of California, Santa Barbara. In the same year, he joined the Polytechnic University, Brooklyn, NY, as an Assistant Professor in the Department of Electrical Engineering, where he stayed for two years. Afterwards, he joined the faculty of Shanghai Jiao Tong University, Shanghai, P. R. China, in the Department of Automation. In December 2000, he joined the Department of Electrical and Computer Engineering and Computer Science of the University of Cincinnati, where he is currently an Assistant Professor. His current research interests include perturbation theory for robust control systems, optimality-guided robust parameter identification, optimality-guided robust adaptive control, control design for stochastic nonlinear systems, and applications of adaptive control systems.

Dr. Pan is coauthor of a paper (with T. Başar) that received the 1995 George Axelby Best Paper Award. He is a member of the Phi Kappa Phi Honor Society.

Kenan Ezal (S'89–M'99) received the B.S. degree in electrical engineering from the University of California at Los Angeles, the M.S.E. degree from the University of Michigan at Ann Arbor, and the M.S. and Ph.D. degrees in electrical engineering from the University of California at Santa Barbara, in 1989, 1990, 1996, and 1998, respectively.

In 1991, he joined Delco Systems Operations, Santa Barbara, CA, where he was involved in the development of guidance and control algorithms for military aircraft, as well as the control and calibration of resonating gyroscopes. Currently, he is a Senior Analyst at Toyon Research Corporation in Santa Barbara, where he is researching automatic target recognition algorithms, foliage penetrating radar, and bearings-only multitarget multisensor tracking algorithms. His research interests also include robust and adaptive control of nonlinear systems, antilock braking, and adaptive linearization of nonlinear amplifiers utilized in communications.

Dr. Ezal was the recipient of the Best Student Paper Award at the 1997 Conference on Decision and Control.

Arthur J. Krener (M'77–SM'82–F'90) was born in Brooklyn, NY, in 1942. He received the B.S. degree from Holy Cross College, Worcester, MA, and the M.A. and Ph.D. degrees from the University of California, Berkeley, in 1964, 1967, and 1971, respectively, all in mathematics.

Since 1971, he has been at the University of California, Davis, where he is currently Professor of Mathematics. He has held visiting positions at Harvard University, Cambridge, MA, the University of Rome, Imperial College of Science and Technology, NASA Ames Research Center, the University of California, Berkeley, the University of Paris IX, the University of Maryland, College Park, MD, the University of Newcastle, and the University of Padua. His research interests are in developing methods for the control and estimation of nonlinear dynamical systems and stochastic processes.

Dr. Krener won a Best Paper Award for his 1981 paper coauthored with Isidori, Gori-Giorgi, and Monaco. His 1977 paper coauthored with R. Hermann was recently chosen as one of 25 Seminal Papers in Control.

Petar V. Kokotović (SM'74–F'80) received graduate degrees from the University of Belgrade, Yugoslavia, and the Institute of Automation and Remote Control, U.S.S.R. Academy of Sciences, Moscow, in 1962 and 1965, respectively.

From 1959 to 1966, he was with the Pupin Research Institute in Belgrade, Yugoslavia, and then, until 1991, with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory at the University of Illinois, Urbana, where he held the endowed Grainger Chair. In 1991, he joined the University of California, Santa Barbara, where he directs the Center for Control Engineering and Computation. He has coauthored eight books and numerous articles contributing to sensitivity analysis, singular perturbation methods, and robust adaptive and nonlinear control. He is also active in industrial applications of control theory. As a consultant to Ford, he was involved in the development of the first series of automotive computer controls, and at General Electric he participated in large scale systems studies. He received the 1983 and 1993 Outstanding IEEE TRANSACTIONS Paper Awards and presented the 1991 Bode Prize Lecture. He is the recipient of the 1990 IFAC Quazza Medal and the 1995 IEEE Control Systems Award.