theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...sznaieranddamborg(1987)byshowingthatthecon-trollerisstabilizing,andthatthec-lqrproblemis...

18
This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Per-Olof Gutman under the direction of Editor Tamer Basar. * Corresponding author. Dip. Ingegneria dell'Informazione, Univer- sita % di Siena, Via Roma 56, 53100 Siena, Italy. E-mail addresses: bemporad@dii.unisi.it (A. Bemporad), morari@aut.ee.ethz.ch (M. Morari), v.dua@ic.ac.uk (V. Dua), e.pistiko- poulos@ic.ac.uk (E. N. Pistikopoulos). Automatica 38 (2002) 3}20 The explicit linear quadratic regulator for constrained systems Alberto Bemporad*, Manfred Morari, Vivek Dua, Efstratios N. Pistikopoulos Dip. Ingegneria dell'Informazione, Universita % di Siena, Via Roma 56, 53100 Siena, Italy Automatic Control Laboratory, ETH Zentrum, ETL I 26, 8092 Zurich, Switzerland Centre for Process Systems Engineering, Imperial College, London SW7 2BY, UK Received 24 September 1999; revised 9 October 2000; received in "nal form 16 June 2001 We present a technique to compute the explicit state-feedback solution to both the xnite and inxnite horizon linear quadratic optimal control problem subject to state and input constraints. We show that this closed form solution is piecewise linear and continuous. As a practical consequence of the result, constrained linear quadratic regulation becomes attractive also for systems with high sampling rates, as on-line quadratic programming solvers are no more required for the implementation. Abstract For discrete-time linear time invariant systems with constraints on inputs and states, we develop an algorithm to determine explicitly, the state feedback control law which minimizes a quadratic performance criterion. We show that the control law is piece-wise linear and continuous for both the "nite horizon problem (model predictive control) and the usual in"nite time measure (constrained linear quadratic regulation). Thus, the on-line control computation reduces to the simple evaluation of an explicitly de"ned piecewise linear function. By computing the inherent underlying controller structure, we also solve the equivalent of the Hamilton}Jacobi}Bellman equation for discrete-time linear constrained systems. Control based on on-line optimization has long been recognized as a superior alternative for constrained systems. The technique proposed in this paper is attractive for a wide range of practical problems where the computational complexity of on-line optimization is prohibitive. It also provides an insight into the structure underlying optimization-based controllers. 2001 Elsevier Science Ltd. All rights reserved. Keywords: Piecewise linear controllers; Linear quadratic regulators; Constraints; Predictive control 1. Introduction As we extend the class of system descriptions beyond the class of linear systems, linear systems with constraints are probably the most important class in practice and the most studied. It is well accepted that for these systems, in general, stability and good performance can only be achieved with a non-linear control law. The most popu- lar approaches for designing non-linear controllers for linear systems with constraints fall into two categories: anti-windup and model predictive control. Anti-windup schemes assume that a well functioning linear controller is available for small excursions from the nominal operating point. This controller is augmented by the anti-windup scheme in a somewhat ad hoc fashion, to take care of situations when constraints are met. Kothare et al. 
(1994) reviewed numerous apparently di!erent anti-windup schemes and showed that they di!er only in their choice of two static matrix parameters. The least conservative stability test for these schemes can be for- mulated in terms of a linear matrix inequality (LMI) (Kothare & Morari, 1999). The systematic and automatic synthesis of anti-windup schemes which guarantee closed-loop stability and achieve some kind of optimal 0005-1098/02/$ - see front matter 2001 Elsevier Science Ltd. All rights reserved. PII: S 0 0 0 5 - 1 0 9 8 ( 0 1 ) 0 0 1 7 4 - 1

Upload: others

Post on 27-Nov-2020

2 views

Category:

Documents


0 download

TRANSCRIPT

Page 1: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

�This paper was not presented at any IFAC meeting. This paper wasrecommended for publication in revised form by Associate EditorPer-Olof Gutman under the direction of Editor Tamer Basar.*Corresponding author. Dip. Ingegneria dell'Informazione, Univer-

sita% di Siena, Via Roma 56, 53100 Siena, Italy.E-mail addresses: [email protected] (A. Bemporad),

[email protected] (M. Morari), [email protected] (V. Dua), [email protected] (E. N. Pistikopoulos).

Automatica 38 (2002) 3}20

The explicit linear quadratic regulator for constrained systems�

Alberto Bemporad����*, Manfred Morari�, Vivek Dua�, Efstratios N. Pistikopoulos��Dip. Ingegneria dell'Informazione, Universita% di Siena, Via Roma 56, 53100 Siena, Italy

�Automatic Control Laboratory, ETH Zentrum, ETL I 26, 8092 Zurich, Switzerland�Centre for Process Systems Engineering, Imperial College, London SW7 2BY, UK

Received 24 September 1999; revised 9 October 2000; received in "nal form 16 June 2001

We present a technique to compute the explicit state-feedback solution to both the xnite and inxnitehorizon linear quadratic optimal control problem subject to state and input constraints. We show thatthis closed form solution is piecewise linear and continuous. As a practical consequence of the result,constrained linear quadratic regulation becomes attractive also for systems with high sampling rates,as on-line quadratic programming solvers are no more required for the implementation.

Abstract

For discrete-time linear time invariant systems with constraints on inputs and states, we develop an algorithm to determineexplicitly, the state feedback control law which minimizes a quadratic performance criterion. We show that the control law ispiece-wise linear and continuous for both the "nite horizon problem (model predictive control) and the usual in"nite time measure(constrained linear quadratic regulation). Thus, the on-line control computation reduces to the simple evaluation of an explicitlyde"ned piecewise linear function. By computing the inherent underlying controller structure, we also solve the equivalent of theHamilton}Jacobi}Bellman equation for discrete-time linear constrained systems. Control based on on-line optimization has longbeen recognized as a superior alternative for constrained systems. The technique proposed in this paper is attractive for a wide rangeof practical problems where the computational complexity of on-line optimization is prohibitive. It also provides an insight into thestructure underlying optimization-based controllers. � 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Piecewise linear controllers; Linear quadratic regulators; Constraints; Predictive control

1. Introduction

As we extend the class of system descriptions beyondthe class of linear systems, linear systems with constraintsare probably the most important class in practice and themost studied. It is well accepted that for these systems, ingeneral, stability and good performance can only be

achieved with a non-linear control law. The most popu-lar approaches for designing non-linear controllers forlinear systems with constraints fall into two categories:anti-windup and model predictive control.Anti-windup schemes assume that a well functioning

linear controller is available for small excursions from thenominal operating point. This controller is augmented bythe anti-windup scheme in a somewhat ad hoc fashion, totake care of situations when constraints are met. Kothareet al. (1994) reviewed numerous apparently di!erentanti-windup schemes and showed that they di!er only intheir choice of two static matrix parameters. The leastconservative stability test for these schemes can be for-mulated in terms of a linear matrix inequality (LMI)(Kothare &Morari, 1999). The systematic and automaticsynthesis of anti-windup schemes which guaranteeclosed-loop stability and achieve some kind of optimal

0005-1098/02/$ - see front matter � 2001 Elsevier Science Ltd. All rights reserved.PII: S 0 0 0 5 - 1 0 9 8 ( 0 1 ) 0 0 1 7 4 - 1

Page 2: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

�The results of this paper hold also for the more general mixedconstraints D

�x(t)#D

�u(t))d.

performance, has remained largely elusive, thoughsome promising steps were achieved recently (Mulder,Kothare, &Morari, 1999; Teel & Kapoor, 1997). Despitethese drawbacks, anti-windup schemes are widely used inpractice because in most SISO situations, they are simpleto design and work adequately.Model predictive control (MPC) has become the ac-

cepted standard for complex constrained multivariablecontrol problems in the process industries. Here, at eachsampling time, starting at the current state, an open-loopoptimal control problem is solved over a "nite horizon.At the next time step, the computation is repeated start-ing from the new state and over a shifted horizon, leadingto a moving horizon policy. The solution relies on a lin-ear dynamic model, respects all input and output con-straints, and optimizes a quadratic performance index.Thus, as much as a quadratic performance index togetherwith various constraints can be used to express trueperformance objectives, the performance of MPC is ex-cellent. Over the last decade, a solid theoretical founda-tion for MPC has emerged so that in real life, large-scaleMIMO applications controllers with non-conservativestability guarantees can be designed routinely and withease (Qin & Badgwell, 1997). The big drawback of theMPC is the relatively formidable on-line computationale!ort, which limits its applicability to relatively slowand/or small problems.In this paper, we show how to move all the com-

putations necessary for the implementation of MPC o!-line, while preserving all its other characteristics.This should largely increase the range of applicability ofMPC to problems where anti-windup schemes and otherad hoc techniques dominated up to now. Moreover, suchan explicit form of the controller provides additionalinsight for better understanding of the control policyof MPC.We also show how to solve the equivalent of the

Hamilton}Jacobi}Bellman equation for discrete-timelinear constrained systems. Rather than gridding thestate space in some ad hoc fashion, we discover theinherent underlying controller structure and provide itsmost e$cient parameterization.The paper is organized as follows. The basics of MPC

are reviewed "rst to derive the quadratic program whichneeds to be solved to determine the optimal controlaction. We proceed to show that the form of this quad-ratic program is maintained for various practical exten-sions of the basic setup, for example, trajectory following,suppression of disturbances, time-varying constraintsand also the output feedback problem. As the coe$cientsof the linear term in the cost function and the right-handside of the constraints depend linearly on the currentstate, the quadratic program can be viewed as a multi-parametric quadratic program (mp-QP). We analyze theproperties of mp-QP, develop an e$cient algorithm tosolve it, and show that the optimal solution is a piecewise

a$ne function of the state (con"rming other investiga-tions on the form of MPC laws (Chiou & Za"riou,1994; Tan, 1991; Za"riou, 1990; Johansen, Petersen, &Slupphaug, 2000).The problem of synthesizing stabilizing feedback con-

trollers for linear discrete-time systems, subject to inputand state constraints, was also addressed in Gutman(1986) and Gutman and Cwikel (1986). The authors ob-tain a piecewise linear feedback law de"ned over a parti-tion of the set of states X into simplicial cones, bycomputing a feasible input sequence for each vertex vialinear programming (this technique was later extended inBlanchini, 1994). Our approach provides a piecewisea$ne control law which not only ensures feasibilityand stability, but is also optimal with respect to LQRperformance.The paper concludes with a series of examples which

illustrate the di!erent features of the method. Othermaterial related to the contents of this paper can befound at http://control.ethz.ch/&bemporad/explicit.

2. Model predictive control

Consider the problem of regulating to the origin thediscrete-time linear time invariant system

�x(t#1)"Ax(t)#Bu(t),

y(t)"Cx(t),(1)

while ful"lling the constraints

y���

)y(t))y���, u

���)u(t))u

���(2)

at all time instants t*0. In (1)}(2), x(t)3��, u(t)3��, andy(t)3�� are the state, input, and output vectors, respec-tively, y

���)y

���(u

���)u

���) are p(m)-dimensional

vectors�, and the pair (A,B) is stabilizable.Model predictive control (MPC) solves such a

constrained regulation problem in the following way.Assume that a full measurement of the state x(t) isavailable at the current time t. Then, the optimizationproblem

min�O��� �2��������

� �J(;,x(t))"x���� ��

Px��� ��

#

������

[x�����

Qx����

#u���Ru

��]�,

4 A. Bemporad et al. / Automatica 38 (2002) 3}20

Page 3: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

�As discussed above, several techniques exist to compute constrainthorizons N

�which guarantee feasibility, see Bemporad (1998), Gilbert

and Tin Tan (1991), Gutman and Cwikel (1986) and Kerrigan andMaciejowski (2000).

s.t. y���

)y����

)y���, k"1,2,N

�,

u���

)u��

)u���, k"0,1,2,N

�,

x���

"x(t),

x������

"Ax����

#Bu��, k*0,

y����

"Cx����

, k*0,

u��

"Kx����

, N�)k(N

,

(3)

is solved at each time t, where x����

denotes the predictedstate vector at time t#k, obtained by applying the inputsequence u

�,2, u

����to model (1) starting from the

state x(t). In (3), we assume that Q"Q�_0, R"R��0,P_0, (Q���,A) detectable (for instance, Q"C�C with(C,A) detectable), K is some feedback gain, N

, N

�, N

�are the output, input, and constraint horizons, respective-ly, with N

�)N

and N

�)N

!1 (the results of this

paper also apply for N�*N

).

One possibility is to choose K"0 (Rawlings &Muske, 1993), and P as the solution of the Lyapunovequation

P"A�PA#Q. (4)

The choice is only meaningful for open-loop stable sys-tems, as it implies that after N

�time steps, the control

is turned o!.Alternatively, one can set K"K

�(Chmielewski

& Manousiouthakis, 1996; Scokaert & Rawlings, 1998),where K

�and P are, the solutions of the unconstrained

in"nite horizon LQR problem with weights Q,R

K�

"!(R#B�PB)��B�PA, (5a)

P"(A#BK�)�P(A#BK

�)#K�

�RK

�#Q. (5b)

This choice of K implies that, after N�time steps, the

control is switched to the unconstrained LQR. WithP obtained from (5), J(;, x(t)) measures the settling costof the system from the present time t to in"nity under thiscontrol assumption. In addition, if one guarantees thatthe predicted input and output evolutions do not violatethe constraints also at later time steps t#k,k"N

�#1,2,#R, this strategy is indeed the optimal

in"nite horizon constrained LQ policy. This point will beaddressed in Section 3.The MPC control law is based on the following idea:

At time t, compute the optimal solution ;H(t)"�uH

�,2, uH

������� to problem (3), apply

u(t)"uH�

(6)

as input to system (1), and repeat the optimization (3) attime t#1, based on the new state x(t#1). Such a con-trol strategy is also referred to as moving or recedinghorizon.

The two main issues regarding this policy are feasibil-ity of the optimization problem (3) and stability of theresulting closed-loop system. WhenN

�(R, there is no

guarantee that the optimization problem (3) will remainfeasible at all future time steps t, as the system mightenter `blind alleysa where no solution to problem (3)exists. On the other hand, setting N

�"R leads to an

optimization problem with an in"nite number of con-straints, which is impossible to handle. If the set offeasible state#input vectors is bounded and containsthe origin in its interior, by using arguments from maxi-mal output admissible set theory (Gilbert & Tin Tan,1991), for the case K"0 Bemporad (1998) showed thata "nite constraint horizon, N

�, can be chosen without

loss of guarantees of constraint ful"llment, and a similarargument can be repeated for the case K"K

�. Similar

results about the choice of the smallest N�ensuring

feasibility of the MPC problem (3) at all time instantswere also proved in Gutman and Cwikel (1986) and havebeen extended recently in Kerrigan and Maciejowski(2000).The stability of MPC feedback loops was investigated

by numerous researchers. Stability is, in general, a com-plex function of the various tuning parametersN

�, N

, N

�, P, Q, and R. For applications, it is most

useful to impose some conditions on N, N

�and P, so

that stability is guaranteed for all Q_0, R�0. Then,Q and R can be freely chosen as tuning parameters toa!ect performance. Sometimes, the optimization prob-lem (3) is augmented with a so-called `stability con-strainta (see Bemporad & Morari (1999) for a survey ofdi!erent constraints proposed in the literature). This ad-ditional constraint imposed over the prediction horizon,explicitly forces the state vector either to shrink in somenorm or to reach an invariant set at the end of theprediction horizon.Most approaches for proving stability follow in spirit

the arguments of Keerthi and Gilbert (1988) who estab-lish the fact that under some conditions, the valuefunction <(t)"J(;H(t), t) attained at the minimizer;H(t) is a Lyapunov function for the system.Below, we recall a simple stability result based on sucha Lyapunov argument (see also Bemporad, Chisci, &Mosca, 1994).

Theorem 1. Let N"R, K"0 or K"K

�, and

N�(R be suzciently large for guaranteeing existence of

feasible input sequences at each time step.� Then, the MPClaw (3)}(6) asymptotically stabilizes system (1) while

A. Bemporad et al. / Automatica 38 (2002) 3}20 5

Page 4: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

We refer to (3) as xnite horizon also in the case N"R. In fact,

such a problem can be transformed into a "nite-horizon problem bychoosing P as in (4) or (5). Therefore `"nitenessa should refer to theinput and constraint horizons, as these are indeed the parameters whicha!ect the complexity of the optimization problem (3).

enforcing the fulxllment of the constraints (2) from all initialstates x(0) such that (3) is feasible at t"0.

Theorem 1 ensures stability, provided that the optim-ization problem (3) is feasible at time t"0. The problemof determining the set of initial conditions x(0) for which(3) is feasible has been addressed in Gutman and Cwikel(1986), and more recently in Kerrigan and Maciejowski(2000).

2.1. MPC computation

By substituting x����

"Ax(t)#������

A�Bu������

,Eq. (3) can be rewritten as

<(x(t))"1

2x�(t)>x(t)#min

��1

2;�H;#x�(t)F;,

s.t. G;)=#Ex(t)�, (7)

where the column vector ;O[u��,2, u�

������]�3��,

sOmN�, is the optimization vector, H"H��0, and

H,F,>,G,=,E are easily obtained from Q, R, and (3) (asonly the optimizer ; is needed, the term involving > isusually removed from (7)).The optimization problem (7) is a quadratic program

(QP). Since the problem depends on the current state x(t),the implementation of MPC requires the on-line solutionof a QP at each time step. Although e$cient QP solversbased on active-set methods and interior point methodsare available, computing the input u(t) demands signi"-cant on-line computation e!ort. For this reason, theapplication of MPC has been limited to `slowa and/or`smalla processes.In this paper, we propose a new approach to imple-

ment MPC, where the computation e!ort is movedo!-line. The MPC formulation described in Section 2provides the control action u(t) as a function of x(t)implicitly de"ned by (7). By treating x(t) as a vector ofparameters, our goal is to solve (7) o!-line, with respect toall the values of x(t) of interest, and make this dependenceexplicit.The operations research community has addressed

parameter variations in mathematical programs at twolevels: sensitivity analysis, which characterizes the changeof the solution with respect to small perturbations of theparameters, and parametric programming, where thecharacterization of the solution for a full range of param-eter values is sought. In the jargon of operations research,programs which depend only on one scalar parameterare referred to as parametric programs, while problemsdepending on a vector of parameters as multi-parametricprograms. According to this terminology, (7) is a multi-parametric quadratic program (mp-QP). Most of theliterature deals with parametric problems, but someauthors have addressed the multi-parametric case

(Acevedo & Pistikopoulos, 1999; Dua & Pistikopoulos,1999, 2000; Fiacco, 1983; Gal, 1995). In Section 4, we willdescribe an algorithm to solve mp-QP problems. To theauthors' knowledge, no algorithm for solving mp-QPproblems has appeared in the literature. Once the multi-parametric problem (7) has been solved o! line, i.e. thesolution ;H

�";H(x(t)) of (7) has been found, the model

predictive controller (3) is available explicitly, as theoptimal input u(t) consists simply of the "rst m compo-nents of ;H(x(t))

u(t)"[I 0 2 0];H(x(t)). (8)

In Section 4, we will show that the solution ;H(x) of themp-QP problem is a continuous and piecewise a$nefunction of x. Clearly, because of (8), the same propertiesare inherited by the controller.Section 3 is devoted to investigate the regulation prob-

lem over an in"nite prediction horizon N"#R,

leading to solve explicitly the so-called constrained linearquadratic regulation (C-LQR) problem.

3. Piecewise linear solution to the constrained linearquadratic regulation problem

In their pioneering work, Sznaier and Damborg (1987),showed that "nite horizon optimization (3), (6), withP satisfying (5), also provides the solution to the in"nite-horizon linear quadratic regulation problem with con-straints

<�(x(0))" min

��� ���� �2�

�����

x�(t)Qx(t)#u�(t)Ru(t)�,s.t. y

���)Cx(t))y

���,

u���

)u(t))u���,

x(0)"x�,

x(t#1)"Ax(t)#Bu(t), t*0

(9)

with Q, R, A, and B as in (3). The equivalence holds fora certain set of initial conditions, which depends on thelength of the "nite horizon. This idea has been recon-sidered later by Chmielewski and Manousiouthakis(1996) independently by Scokaert and Rawlings (1998),and recently also in Chisci and Zappa (1999). In Scokaertand Rawlings (1998), the authors extend the idea of

6 A. Bemporad et al. / Automatica 38 (2002) 3}20

Page 5: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

Sznaier and Damborg (1987) by showing that the con-troller is stabilizing, and that the C-LQR problem issolved by a "nite-dimensional QP problem. Unfortu-nately, the dimension of the QP depends on the initialstate x(0), and no upper-bound on the horizon (andtherefore on the QP size) is given. On the other hand,Chmielewski and Manousiouthakis (1996) describe analgorithm which provides a semi-global upper-bound.Namely, for any given compact set �

�of initial conditions,

their algorithm provides the horizonsN�"N

�such that

the "nite horizon controller (3), (6) solves the in"nitehorizon problem (9).We brie#y recall the approach given by Chmielewski

and Manousiouthakis (1996) and propose somemodi"cations, which will allow us to compute theclosed-form of the constrained linear quadraticcontroller (9)

u(t)"f�(x(t)), ∀t*0 (10)

for a compact set of initial conditions �(0), and show thatit is piecewise a$ne.Assume that (i) Q�0, (ii) the sets �, U of feasible states

and inputs, respectively, are bounded (e.g. ��u���

��, ��u���

��,��y

�����, ��y

�����(R, and C"I) and contain the origin in

their interior, and (iii) that for all the initial statesx(0)3�(0), there exists an input sequence driving thesystem to the origin under constraints. We also assumethat �,U are polytopes, as this is the case at hand in ourpaper, to simplify the exposition. Denote byP�, problem(3) with NON

�"N

�, N

"R, K"K

�(or, equiva-

lently, N"N, and P solving the Riccati equation (5b))

and by P�, problem (9). Note that in view of Bellman's

principle of optimality (Bellman, 1961), (10) can bereformulated by substituting x

����, u

��for x(t), u(t),

respectively, as predicted and actual trajectoriescoincide.Chmielewski and Manousiouthakis (1996) prove that,

for a given x(0), there exists a "nite N such that P� andP� are equivalent, and that the associated controller isexponentially stabilizing. The same result was proved inScokaert and Rawlings (1998). In addition, by exploitingthe convexity and continuity of the value function<

�with respect to the initial condition x(0),

Chmielewski and Manousiouthakis (1996) provide thetools necessary to compute an upper-bound on N forevery given set �(0) of initial conditions. Such anupper-bound is computed by modifying the algorithmin Chmielewski and Manousiouthakis (1996), asfollows.

(1) LetXM "�x3�: K�x3U� the set of states such that

the unconstrained LQ gain K�is feasible. As �,U

are polytopes, XM is a polytope, de"ned by a set oflinear inequalities of the form AM x)bM , i"1,2, n� ,where AM ,bM ,n� depend on the de"nition of �,U.

(2) Compute Z�"�x3��: x�Px)c�, where the largest

c such that Z�is an invariant subset of XM , is deter-

mined analytically (Bemporad, 1998) as

c" min ���2� ��

�bM ��AM P��(AM �)

. (11)

(3) Let q"c����(P��Q), where �

���denotes the min-

imum eigenvalue.(4) Let �(0) be a compact set of initial conditions. With-

out loss of generality, assume that �(0) is a polytope,and let �x

�l ��

be the set of its vertices. For eachx compute <

�(x

). This is done by computing the

"nite horizon value function <(x ) for increasing

values of N until x����

3Z�(this computation is

equivalent to the one described in Scokaert andRawlings (1998), where a ball B

�is used instead of Z

�).

(5) Let ;"max ���2�l

<�(x

), which is an upper

bound to <�(x) on �(0), as <

�and �(0) are convex

(in Chmielewski & Manousiouthakis, 1996), a di!er-ent approach is proposed where ; depends on x

�).

(6) Choose N as the minimum integer such thatN*(;!c)/q.

In Section 4, we will show that P� has a continuousand piecewise a$ne solution. Therefore, on a compact setof initial conditions �(0), the solution (10) to the CLQRproblem P� is also piecewise a$ne. Note that if �

�.�

then the solution is also global because �,U are bounded.

4. Multi-parametric quadratic programming

In this section, we investigate multi-parametric quad-ratic programs (mp-QP) of form (7). We want to derivean algorithm to express the solution ;H(x) and the min-imum value<(x)"J(;H(x)) as an explicit function of theparameters x, and to characterize the analytical proper-ties of these functions. In particular, we will prove thatthe solution ;H(x) is a continuous piecewise a$ne func-tion of x, in the following sense.

De5nition 1. A function z(x) :XC��, where X-�� isa polyhedral set, is piecewise a$ne if it is possible topartition X into convex polyhedral regions, CR

, and

z(x)"H x#k , ∀x3CR .

Piecewise quadraticity is de"ned analogously by let-ting z(x) be a quadratic function x�= x#H x#k .

4.1. Fundamentals of the algorithm

Before proceeding further, it is useful to de"ne

zO;#H��F�x(t), (12)

A. Bemporad et al. / Automatica 38 (2002) 3}20 7

Page 6: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

z3��, and to transform (7) by completing squares toobtain the equivalent problem

<�(x)"min

1

2z�Hz

s.t. Gz)=#Sx(t), (13)

where SOE#GH��F�, and <�(x)"<(x)!�

�x�

(>!FH��F�)x. In the transformed problem, the para-meter vector x appears only on the rhs of the constraints.In order to start solving the mp-QP problem, we need

an initial vector x�inside the polyhedral set

X"�x: ¹x)Z� of parameters over which we want tosolve the problem, such that the QP problem (13) isfeasible for x"x

�. A good choice for x

�is the center of

the largest ball contained in X for which a feasiblez exists, determined by solving the LP

max�����

�,

s.t. ¹ x#���¹ ��)Z ,

Gz!Sx)=

(14)

(in particular, x�will be the Chebychev center ofX when

the QP problem (13) is feasible for such an x�). If �)0,

then the QP problem (13) is infeasible for all x in theinterior of X. Otherwise, we "x x"x

�and solve the QP

problem (13), in order to obtain the corresponding opti-mal solution z

�. Such a solution is unique, becauseH�0,

and therefore uniquely determines a set of active con-straints GI z

�"SI x

�#=I out of the constraints in (13).

We can then prove the following result:

Theorem 2. Let H�0. Consider a combination of activeconstraints GI , SI ,=I , and assume that the rows of GI arelinearly independent. Let CR

�be the set of all vectors x, for

which such a combination is active at the optimum (CR�is

referred to as critical region). Then, the optimal z and theassociated vector of Lagrange multipliers � are uniquelydexned azne functions of x over CR

�.

Proof. The "rst-order Karush}Kuhn}Tucker (KKT)optimality conditions (Bazaraa et al., 1993) for themp-QP are given by

Hz#G��"0, �3��, (15a)

� (G z!= !S x)"0, i"1,2, q, (15b)

�*0, (15c)

Gz)=#Sx, (15d)

where the superscript i denotes the ith row. We solve(15a) for z,

z"!H��G�� (16)

and substitute the result into (15b) to obtain the com-plementary slackness condition �(!GH��G��!

=!Sx)"0. Let �x and �I denote the Lagrange multi-pliers corresponding to inactive and active constraints,respectively. For inactive constraints, �x "0. For activeconstraints, !GI H��GI ��I !=I !SI x"0, and therefore,

�I "!(GI H��GI �)��(=I #SI x), (17)

where GI ,=I ,SI correspond to the set of active constraints,and (GI H��GI �)�� exists because the rows of GI are linearlyindependent. Thus, � is an a$ne function of x. We cansubstitute �I from (17) into (16) to obtain

z"H��GI �(GI H��GI �)��(=I #SI x) (18)

and note that z is also an a$ne function of x. �

A similar result was obtained by Za"riou (Chiou& Za"riou, 1994; Za"riou, 1990) based on the optimalityconditions for QP problems reported in Fletcher (1981).Although his formulation is for "nite impulse responsemodels realized in a particular space form, where thestate includes past inputs, his arguments can be adapteddirectly to the state-space formulation (3)}(6). However,his result does not make the piecewise linear dependenceof u on x explicit, as the domains over which the di!erentlinear laws are de"ned are not characterized. We shownext, that such domains are indeed polyhedral regions ofthe state space.Theorem 2 characterizes the solution only locally in

the neighborhood of a speci"c x�, as it does not provide

the construction of the set CR�where this characteriza-

tion remains valid. On the other hand, this region can becharacterized immediately. The variable z from (16) mustsatisfy the constraints in (13):

GH��GI �(GI H��GI �)��(=I #SI x))=#Sx (19)

and by (15c), the Lagrange multipliers in (17) must re-main non-negative:

!(GI H��GI �)��(=I #SI x)*0, (20)

as we vary x. After removing the redundant inequalitiesfrom (19) and (20), we obtain a compact representation ofCR

�. Obviously,CR

�is a polyhedron in the x-space, and

represents the largest set of x3X such that the combina-tion of active constraints at the minimizer remains un-changed. Once the critical region CR

�has been de"ned,

the rest of the space CR��"X!CR�has to be ex-

plored and new critical regions generated. An e!ectiveapproach for partitioning the rest of the space was pro-posed in Dua and Pistikopoulos (2000). The followingtheorem justi"es such a procedure to characterize the restof the region CR��.

8 A. Bemporad et al. / Automatica 38 (2002) 3}20

Page 7: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

Fig. 1. Partition of CR��OXCR�; (b) partition of CR�� step 1; (c)

partition of CR�� Step 2; and (d) "nal partition of CR��.

Theorem 3. Let >-�� be a polyhedron, and CR�O

�x3>: Ax)b� a polyhedral subset of >, CR�O�.

Also let

R "�x3>:

A x'b

A�x)b�, ∀j(i�, i"1,2,m,

where m"dim(b), and let CR��O�� ��R

. Then

(i) CR��CR�">, (ii) CR

��R

"�, R

�R

�"�, ∀i

Oj, i.e. �CR�,R

�,2,R

�� is a partition of >.

Proof. (i) We want to prove that given an x3>, x eitherbelongs to CR

�or to R

for some i. If x3CR

�, we are

done. Otherwise, there exists an index i such thatA x'b . Let iH"min

��i: A x'b �. Then, x3R

H , be-

causeA Hx'b H and A�x)b�, ∀j(iH, by de"nition of iH.

(ii) Let x3CR�. Then xi such that A x'b , which

implies that x�R , ∀i)m. Let x3R

and take i'j. Since

x3R , by de"nition of R

(i'j) A�x)b�, which implies

that x�R�. �

Example 4.1. In order to exemplify the procedure pro-posed in Theorem 3 for partitioning the set of parametersX, consider the case when only two parameters x

�and

x�are present. As shown in Fig. 1(a), X is de"ned by the

inequalities �x��

)x�)x�

�, x�

�)x

�)x�

��, and CR

�by the inequalities �C1)0,2,C5)0� whereC1,2,C5 are a$ne functions of x. The procedure con-sists of considering, one by one, the inequalities whichde"ne CR

�. Considering, for example, the inequality

C1)0, the "rst set of the rest of the region

CR��OXCR�is given by R

�"�C1*0, x

�*x�

�,

x��

)x�)x�

��, which is obtained by reversing the sign

of the inequality C1)0 and removing redundant con-straints (see Fig. 1(b)). Thus, by considering the rest ofthe inequalities, the complete rest of the region isCR��"��

��R

, where R

�,2,R

�are graphically re-

ported in Fig. 1(d). Note that the partition strategy sug-gested by Theorem 3 can be also applied also when X isunbounded.

Theorem 3 provides a way of partitioning the non-convex set, XCR

�, into polyhedral subsets R

. For each

R , a new vector x

is determined by solving the LP (14),

and, correspondingly, an optimum z , a set of active

constraints GI ,SI ,=I , and a critical region CR . Theorem

3 is then applied to partition R CR

into polyhedral

subsets, and the algorithm proceeds iteratively. Thecomplexity of the algorithm will be fully discussed inSection 4.3.Note that Theorem 3 introduces cuts in the x-space

which might split critical regions into subsets. Therefore,after the whole x-space has been covered, those polyhed-ral regions CR are determined where the function z(x) isthe same. If their union is a convex set, it is computed topermit a more compact description of the solution(Bemporad, Fukuda, & Torrisi, 2001). Alternatively, inBorrelli, Bemporad, and Morari (2000), the authorspropose not to intersect (19)}(20) with the partition gen-erated by Theorem 3, and simply use Theorem 3 to guidethe exploration. As a result, some critical regions mayappear more than once. Duplicates can be easily elimi-nated by recognizing regions where the combination ofactive constraints is the same. In the sequel, we willdenote by N

, the "nal number of polyhedral cells de"n-

ing the mp-QP solution (i.e., after the union of neighbor-ing cells or removal of duplicates, respectively).

4.1.1. DegeneracySo far, we have assumed that the rows of GI are linearly

independent. It can happen, however, that by solving theQP (13), one determines a set of active constraints forwhich this assumption is violated. For instance, thishappens when more than s constraints are active at theoptimizer z

�3��, i.e., in a case of primal degeneracy. In

this case, the vector of Lagrange multipliers ��might not

be uniquely de"ned, as the dual problem of (13) is notstrictly convex (instead, dual degeneracy and non-unique-ness of z

�cannot occur, as H�0). Let GI 3�l��, and

let r"rankGI , r(l. In order to characterize such adegenerate situation, consider the QR decompositionGI "[��

�]Q of GI , and rewrite the active constraints in the

form

R�z"=

�#S

�x, (21a)

0"=�#S

�x, (21b)

A. Bemporad et al. / Automatica 38 (2002) 3}20 9

Page 8: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

� In general, the set of feasible parameters x can be a lower-dimen-sional subset of X (Filippi, 1997). However, when in the MPC formula-tion (3), u

���(u

���, y

���(y

���, the mp-QP problem has always

a full-dimensional solution in the x-space, as the critical region corre-sponding to the unconstrained solution contains a full-dimensional ballaround the origin.

where [����]"Q��SI , [��

��]"Q��=I . If S

�is non-zero,

because of the equalities (21b), CR�is a lower-dimen-

sional region, which, in general, corresponds to a com-mon boundary between two or more full-dimensionalregions.� Therefore, it is not worth to explore this combi-nationGI , SI ,=I . On the other hand, if both=

�and S

�are

zero, the KKT conditions do not lead directly to (19) and(20), but only to a polyhedron expressed in the (�, x)space. In this case, a full-dimensional critical region canbe obtained by a projection algorithm (Fukuda, 1997),which, however, is computationally expensive (the case=

�O0, S

�"0 is not possible, since the LP (14) was

feasible).In this paper, we suggest a simpler way to handle such

a degenerate situation, which consists of collecting r con-straints arbitrarily chosen, and proceed with the newreduced set, therefore avoiding the computation of pro-jections. Due to the recursive nature of the algorithm, theremaining other possible subsets of combinations of con-straints leading to full-dimensional critical regions willautomatically be explored later.

4.2. Continuity and convexity properties

Continuity of the value function<�(x) and the solution

z(x), can be shown as simple corollaries of the linearityresult of Theorem 2. This fact, together with the convex-ity of the set of feasible parametersX

�-X (i.e. the set of

parameters x3X such that a feasible solution z(x) existsto the optimization problem (13)), and of the value func-tion <

�(x), is proved in next Theorem.

Theorem 4. Consider the multi-parametric quadratic pro-gram (13) and let H�0, X convex. Then the set of feasibleparametersX

�-X is convex, the optimizer z(x) :X

�C�� is

continuous and piecewise azne, and the optimal solution<

�(x) :X

�C� is continuous, convex and piecewise

quadratic.

Proof. We "rst prove convexity of X�and <

�(x). Take

generic x�,x

�3X

�, and let <

�(x

�), <

�(x

�) and z

�, z

�the

corresponding optimal values and minimizers. Let�3[0,1], and de"ne z�O�z

�#(1!�)z

�, x�O�x

�#

(1!�)x�. By feasibility, z

�, z

�satisfy the constraints

Gz�)=#Sx

�, Gz

�)=#Sx

�. These inequalities

can be linearly combined to obtain Gz�)=#Sx� , andtherefore, z� is feasible for the optimization problem (13),

where x(t)"x� . This proves that z(x�) exists, and there-fore, convexity of X

�"�

CR

. In particular, X

�is connected. Moreover, by optimality of <

�(x�),<�

(x�))��z��Hz� , and hence <�

(x�)!��[�z�

�Hz

�#(1!�)z�

�Hz

�])

��z��Hz�!�

�[�z�

�Hz

�#(1!�)z�

�Hz

�]"�

�[��z�

�Hz

�#

(1!�)�z��Hz

�#2�(1!�)z�

�Hz

�!�z�

�Hz

�!(1!�)z�

�Hz

�]"!�

��(1!�)(z

�!z

�)�H(z

�!z

�))0, i.e. <

�( � x

�# ( 1! � ) x

�)) �<

�( x

�)# ( 1! � )<

�( x

�) ,

∀x�,x

�3X, ∀�3[0,1], which proves the convexity of

<�(x) onX

�. Within the closed polyhedral regionsCR

in

X�, the solution z(x) is a$ne (18). The boundary between

two regions belongs to both closed regions. Since H�0,the optimum is unique, and hence, the solution must becontinuous across the boundary. The fact that <

�(x)

is continuous and piecewise quadratic, followstrivially. �

In order to prove the convexity of the value function<(x) of the MPC problems (3) and (7), we need thefollowing lemma.

Lemma 1. Let J(;,x)"��;�H;#x�F;#�

�x�>x, and

let �> F�F H�_0. Then <(x)Omin

�J(x,;) subject to

G;)=#Ex is a convex function of x.

Proof. By Theorem 4, the value function <�(x) of the

optimization problem <�(x)Omin

���z�Hz subject to

Gz)=#Sx is a convex function of x. Let zH(x), ;H(x)be the optimizers of <

�(x) and <(x), respectively, where

zH(x)";H(x)#H��Fx. Then, <(x)"��;H(x)H;H(x)#

x�F;H(x)#��x�>x"�

�[zH(x)!H��Fx]�H[zH(x)!H��

Fx]#x�F[zH(x)!H��Fx]#��x�>x"<

�(x)!�

�x�FH��

F�x#��x�>x"<

�(x)#�

�x�(>!FH��F�)x. As �

> F�F H�

_0, its Schur's complement >!FH��F�_0, and there-fore, <(x) is a convex function, being the sum of convexfunctions. �

Corollary 1. The value function <(x) dexned by the optim-ization problem (3), (7) is continuous and piecewise quad-ratic.

Proof. By (3), J(;, x)*0, ∀x,;, being the sum of non-

negative terms. Therefore, J(;, x)"[��]��Y F�F H�[��]*0

for all [��], and the proof easily follows by Lemma 1. �

A simple consequence of Corollary 1 is that theLyapunov function used to prove Theorem 1 is continu-ous, convex, and piecewise quadratic. Finally, we canestablish the analytical properties of the controller (3),(6) through the following corollary of Theorem 4.

10 A. Bemporad et al. / Automatica 38 (2002) 3}20

Page 9: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

Corollary 2. The control law u(t)"f (x(t)), f :��C��, de-xned by the optimization problem (3) and (6) is continuousand piece-wise azne

f (x)"F x#g if H x)k , i"1,2,N���

(22)

where the polyhedral sets �H x)k �, i"1,2,N���

)

Nare a partition of the given set of states X.

Proof. By (12),;(x)"z(x)!H��F�x is a linear functionof x in each regionCR

"�x : H x)K , i"1,2,N

����.

By (8), u"f (x) is a combination of linear functions, andtherefore, is linear on CR

. Also, u is a combination of

continuous functions, and therefore, is continuous. �

Multiparametric quadratic programming problemscan also be addressed by employing the principles ofparametric non-linear programming, exploiting the BasicSensitivity Theorem (Fiacco, 1976, 1983) (a direct conse-quence of the KKT conditions and the Implicit FunctionTheorem). In this paper, we opted for a more directapproach, which exploits the linearity of the constraintsand the fact that the function to be minimized isquadratic.We remark that the command actions provided by the

(explicit) feedback control law (22) and the (implicit)feedback control law (3)}(6) are exactly equal. Therefore,the control designer is allowed to tune the controller byusing standard MPC tools (i.e., based on on-line QPsolvers) and software simulation packages, and "nallyrun Algorithm 1 to obtain the explicit solution (22) toe$ciently implement the controller.

4.3. Ow-line algorithm for mp-QP and explicit MPC

Based on the above discussion and results, the mainsteps of the o!-line mp-QP solver are outlined in thefollowing algorithm.

Algorithm 11 Let X-�� be the set of parameters (states);2 execute partition(X);3 for all regions where z(x) is the same and whose unionis a convex set, compute such a union as described byBemporad, Fukuda, and Torrisi (2001);

4 end.procedure partition(>)1.1 let x

�3> and � the solution to the LP (14);

1.2 if �)0 then exit; (no full dimensional CR isin >)

1.3 For x"x�, compute the optimal solution

(z�, �

�) of the QP (13);

1.4 Determine the set of active constraints whenz"z

�, x"x

�, and build GI ,=I ,SI ;

1.5 If r"rankGI is less than the number l of rows ofGI , take a subset of r linearly independent rows,and rede"ne GI ,=I ,SI accordingly;

1.6 Determine �I (x), z(x) from (17) and (18);1.7 Characterize the CR from (19) and (20);1.8 De"ne and partition the rest of the region as in

Theorem 3;1.9 For each new sub-region R

, partition(R

);

end procedure.

The algorithm explores the set X of parameters recur-sively: Partition the rest of the region as in Theorem3 into polyhedral sets R

, use the same method to parti-

tion each set R further, and so on. This can be represent-

ed as a search tree, with a maximum depth equal to thenumber of combinations of active constraints (see Sec-tion 4.4 below).The algorithm solves the mp-QP problem by par-

titioning the given parameter set X into Nconvex poly-

hedral regions. For the characterization of the MPCcontroller, in step 3 the union of regions is computedwhere the "rstN

�components of the solution z(x) are the

same, by using the algorithm developed by Bemporad,Fukuda, and Torrisi (2001). This reduces the total num-ber of regions in the partition for the MPC controllerfrom N

to N

���.

4.4. Complexity analysis

The number Nof regions in the mp-QP solution

depends on the dimension n of the state, and on thenumber of degrees of freedom s"mN

�and constraints

q in the optimization problem (13). As the number ofcombinations of l constraints out of a set of q is(�l)"q!/(q!l)!l!, the number of possible combinationsof active constraints at the solution of a QP is at most��l��

(�l)"2�. This number represents an upper boundon the number of di!erent linear feedback gains whichdescribe the controller. In practice, far fewer combina-tions are usually generated as x spans X. Furthermore,the gains for the future input moves u

���,2, u

������are

not relevant for the control law. Thus, several di!erentcombinations of active constraints may lead to the same"rst m components uH

�(x) of the solution. On the other

hand, the number Nof regions of the piecewise

a$ne solution is, in general, larger than the numberof feedback gains, because non-convex critical regionsare split into several convex sets. For instance, theexample reported in Fig. 6(d) involves 13 feedback gains,distributed over 57 regions of the state space.A worst-case estimate of N

can be computed from the

way Algorithm 1 generates critical regions. The "rstcritical region CR

�is de"ned by the constraints �(x)*0

(q constraints) and Gz(x))=#Sx (q constraints). If thestrict complementary slackness condition holds, only

A. Bemporad et al. / Automatica 38 (2002) 3}20 11

Page 10: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

Table 1O!-line computation time to solve the mp-QP problem and, in paren-theses, number of regions N

���in the MPC controller (N

�"num-

ber of control moves, n"number of states)

N�

n"2 n"3 n"4 n"5

2 0.44 s (7) 0.49 s (7) 0.55 s (7) 1.43 s (7)3 1.15 s (13) 2.08 s (15) 1.75 s (15) 5.06 s (17)4 2.31 s (21) 5.87 s (29) 3.68 s (29) 15.93 s (37)

q constraints can be active, and hence, every CR isde"ned by q constraints. From Theorem 3, CR�� con-sists of q convex polyhedra R

, de"ned by at most q in-

equalities. For each R , a new CR

is determined which

consists of 2q inequalities (the additional q inequalitiescome from the condition CR

-R

), and therefore, the

correspondingCR�� partition includes 2q sets de"ned by2q inequalities. As mentioned above, this way of generat-ing regions can be associated with a search tree. Byinduction, it is easy to prove that at the tree level k#1,there are k!m regions de"ned by (k#1)q constraints. Asobserved earlier, each CR is the largest set correspondingto a certain combination of active constraints. Therefore,the search tree has a maximum depth of 2�, as at eachlevel, there is one admissible combination less. In con-clusion, the number of regions in the solution to themp-QP problem is N

)�����

��k!q, each one de"ned by

at most q2� linear inequalities. Note that the above analy-sis is largely overestimating the complexity, as it does nottake into account: (i) the elimination of redundant con-straints when a CR is generated, and (ii) that empty setsare not partitioned further.

4.5. Dependence on n,m, p,N�,N

Let q�OrankS, q

�)q. For n'q

�the number of poly-

hedral regionsNremains constant. To see this, consider

the linear transformation x� "Sx, x� 3��. Clearly, x� andx de"ne the same set of active constraints, and thereforethe number of partitions in the x� - and x-space are thesame. Therefore, the number of partitions, N

, of the

x-space de"ning the optimal controller is insensitive tothe dimension n of the state x for all n*q

�, i.e. to the

number of parameters involved in the mp-QP. In par-ticular, the additional parameters that we will introducein Section 6 to extend MPC to reference tracking, distur-bance rejection, soft constraints, variable constraints, andoutput feedback, do not a!ect signi"cantly, the numberof polyhedral regions N

(i.e., the complexity of the mp-

QP), and hence, the number N���of regions in the MPC

controller (22).The number q of constraints increases with N

�and, in

the case of input constraints, with N�. For instance,

q"2s"2mN�for control problems with input con-

straints only. From the analysis above, the largerN

�,N

�,m, p, the larger q, and therefore N

. Note that

many control problems involve input constraints only,and typically horizons N

�"2,3 or blocking of control

moves are adopted, which reduces the number of con-straints q.

4.6. Ow-line computation time

In Table 1, we report the computation time and thenumber of regions obtained by solving a few test MPC

problems on random SISO plants subject to input con-straints. In the comparison, we vary the number of freemoves N

�and the number of states of the open-loop

system. Computation times were evaluated by runningAlgorithm 1 in Matlab 5.3 on a Pentium III-650 MHzmachine. No attempts were made to optimize the e$cien-cy of the algorithm and its implementation.

4.7. On-line computation time

The simplest way to implement the piecewise a$nefeedback law (22) is to store the polyhedral cells�H x)k �, perform an on-line linear search throughthem to locate the one which contains x(t), then lookupthe corresponding feedback gain (F , g ). This procedurecan be easily parallelized (while for a QP solver, theparallelization is less obvious). However, more e$cienton-line implementation techniques which avoid the stor-age and the evaluation of the polyhedral cells are current-ly under development.

5. State-feedback solution to constrained linear quadraticcontrol

For t"0, the explicit solution to (3) provides theoptimal input pro"le u(0),2, u(N

!1) as a function of

the state x(0). The equivalent state-feedback formu( j)"f

�(x( j)), j"0,2,N

!1 can be obtained by solv-

ing N�mp-QPs. In fact, clearly, f

�(x)"Kx for all

j"N�,2,N

!1, where f

�:X

�C��, and X

�O�� for

j"N�#1,2,N

,X

�O�x: y

���)C(A#BK)�x)y

���,

u���

)K(A#BK)�x)u���, h"0,2,N

�!j� for j"

N�,2,N

�. For 0)j)N

�!1, f

�(x) is obtained by solv-

ing the mp-QP problem

F�(x)O min

�O��� � �2������� ��J(;, x)"x�(N

)Px(N

)

#

������

[x�(k)Qx(k)#u�(k)Ru(k)]�

12 A. Bemporad et al. / Automatica 38 (2002) 3}20

Page 11: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

s.t. y���

)y(k))y���, k"j,2,N

�,

u���

)u(k))u���, k"j,2,N

�,

x( j)"x,

x(k#1)"Ax(k)#Bu(k), k*j,

y(k)"Cx(k), k*j,

u(k)"Kx(k), N�)k(N

(23)

and setting f�(x)"[I 0 2 0]F

�(x) (note that, compared

to (3), for j"0, (23) includes the extra constrainty���

)y(0))y���. However, this may only restrict

the set of x(0) for which (23) is feasible, but doesnot change the control function f

�(x), as u(0) does not

a!ect y(0)).Similar to the unconstrained "nite-time LQ problem,

where the state-feedback solution is linear time-varying, the explicit state-feedback solution to (3) is thetime-varying piecewise a$ne function f

�:��C��,

j"0,2,N!1. Note that while in the unconstrained

case dynamic programming nicely provides the state-feedback solution through Riccati iterations, because ofthe constraints here, dynamic programming would leadto solving a sequence of multiparametric piecewise quad-ratic programs, instead of the mp-QPs (23).The in"nite horizon-constrained linear quadratic

regulator can also be obtained in state-feedback form bychoosing N

�"N

"N

�"N, where N is de"ned ac-

cording to the results of Section 3.

6. Reference tracking, disturbances, and otherextensions

The basic formulation (3) can be extended naturally tosituations where the control task is more demanding. Aslong as the control task can be expressed as an mp-QP,a piecewise a$ne controller results, which can be easilydesigned and implemented. In this section, we will men-tion only a few extensions to illustrate the potential. Toour knowledge, these types of problems are di$cult toformulate from the point of view of anti-windup or othertechniques not related to MPC.

6.1. Reference tracking

The controller can be extended to provide o!set-freetracking of asymptotically constant reference signals. Fu-ture values of the reference trajectory can be taken intoaccount explicitly, by the controller, so that the controlaction is optimal for the future trajectory in the presenceof constraints.Let the goal be to have the output vector y(t) track r(t),

where r(t)3�� is the reference signal. To this aim, con-

sider the MPC problem

min�O���� �2���������

������

�[y����

!r(t)]�Q[y����

!r(t)]

# u�����

R u����

s.t. y���

)y����

)y���, k"1,2,N

�,

u���

)u��

)u���, k"0,1,2,N

�,

u���

) u��

) u���, k"0,1,2,N

�!1,

x������

"Ax����

#Bu��, k*0,

y����

"Cx����

, k*0,

u��

"u����

#�u��, k*0,

u��

"0, k*N�.

(24)

Note that the u-formulation (24) introduces m newstates in the predictive model, namely, the last inputu(t!1) (this corresponds to adding an integrator in thecontrol loop). Just like the regulation problem (3), we cantransform the tracking problem (24) into the form

min�

1

2;�H;#[x�(t) u�(t!1) r�(t)]F;

s.t. G;)=#E�x(t)

u(t!1)

r(t) �, (25)

where r(t) lies in a given (possibly unbounded) poly-hedral set. Thus, the same mp-QP algorithm can beused to obtain an explicit piecewise a$ne solution u(t)"F(x(t), u(t!1), r(t)). In case the reference r(t) isknown in advance, one can replace r(t) by r(t#k) in (24)and similarly get a piecewise a$ne anticipative controller u(t)"F(x(t), u(t!1), r(t),2, r(t#N

!1)).

6.2. Disturbances

We distinguish between measured and unmeasureddisturbances. Measured disturbances v(t) can be includedin the prediction model

x������

"Ax����

#Bu��

#<v(t#k�t), (26)

where v(t#k�t) is the prediction of the disturbance attime t#k based on the measured value v(t). Usually,v(t#k�t) is a linear function of v(t), for instancev(t#k�t),v(t) where it is assumed that the disturbanceis constant over the prediction horizon. Then v(t) appearsas a vector of additional parameters in the mp-QP,and the piecewise a$ne control law becomesu(t)"F(x(t), v(t)). Alternatively, as for reference tracking,when v(t) is known in advance one can replace v(t#k�t)by v(t#k) in (26) and get an anticipative controller u(t)"F(x(t), u(t!1), v(t),2, v(t#N

!1)).

A. Bemporad et al. / Automatica 38 (2002) 3}20 13

Page 12: Theexplicitlinearquadraticregulatorforconstrainedsystemscse.lab.imtlucca.it/~bemporad/publications/papers/...SznaierandDamborg(1987)byshowingthatthecon-trollerisstabilizing,andthattheC-LQRproblemis

Fig. 2. Example 7.1: (a) closed-loop MPC; (b) state-space partition andclosed-loop MPC trajectories.

�Let ;H�"[uH

�, uH

�]� be the optimal solution at time t. Then

;"[uH�,0]� is feasible at time t#1. The cost associated with ; is

J(t#1,;)"J(t,;H�)!x�(t)x(t)!0.01u�(t)*J(t#1,;H

���), which

implies that J(t,;H�) is a converging sequence. Therefore,

x�(t)x(t)#0.01u�(t))J(t,;H�)!J(t#1,;H

���)P0, which shows stab-

ility of the system.

Usually, unmeasured disturbances are modeled as the output of a linear system driven by white Gaussian noise. The state vector x(t) of the linear prediction model (1) is augmented by the state x_d(t) of such a linear disturbance model, and the mp-QP provides a control law of the form u(t) = F(x(t), x_d(t)) within a certain range of states of the plant and of the disturbance model. Clearly, x_d(t) is estimated on line from output measurements by a linear observer.
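As an illustration only (not the specific disturbance model used in the paper), the sketch below assumes the simplest common choice, a constant output disturbance, augments the plant with it, and runs one step of a Luenberger observer with a hypothetical gain L; the resulting estimate of [x; x_d] is what would be fed to the explicit law u(t) = F(x(t), x_d(t)).

import numpy as np

# Hypothetical plant augmented with a constant output disturbance d(t):
#   x(t+1) = A x(t) + B u(t),  d(t+1) = d(t),  y(t) = C x(t) + d(t)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Aa = np.block([[A, np.zeros((2, 1))],
               [np.zeros((1, 2)), np.eye(1)]])
Ba = np.vstack([B, np.zeros((1, 1))])
Ca = np.hstack([C, np.eye(1)])

# Luenberger observer xhat(t+1) = Aa xhat + Ba u + L (y - Ca xhat);
# L is a purely illustrative gain, in practice it would be designed (e.g. as a Kalman filter).
L = np.array([[0.5], [0.3], [0.4]])

def observer_step(xhat, u, y):
    return Aa @ xhat + Ba @ u + L @ (y - Ca @ xhat)

xhat = np.zeros(3)
xhat = observer_step(xhat, u=np.array([0.0]), y=np.array([0.2]))
print("estimated [x; x_d] =", xhat)   # fed to the explicit law u = F(x, x_d)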

6.3. Soft constraints

State and output constraints can lead to feasibility problems. For example, a disturbance may push the output outside the feasible region, where no allowed control input exists which brings the output back inside at the next time step. Therefore, in practice, the output constraints (2) are relaxed or softened (Zheng & Morari, 1995) as y_min - M eps <= y(t) <= y_max + M eps, where M is a constant vector (M_i >= 0 is related to the "concern" for the violation of the ith output constraint), and the term rho eps^2 is added to the objective to penalize constraint violations (rho is a suitably large scalar). The variable eps plays the role of an independent optimization variable in the mp-QP and is adjoined to z. The solution u(t) = F(x(t)) is again a piecewise affine controller, which aims at keeping the states in the constrained region without ever running into feasibility problems.
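A minimal sketch of how the slack is adjoined to the optimization vector (a generic numpy helper, not the paper's code; for brevity it relaxes every row of G z <= W + S x by M eps, whereas the paper softens only the output-constraint rows):

import numpy as np

def soften_constraints(H, q, G, w, S, M, rho):
    """Given the QP  min 0.5 z'Hz + q'z  s.t.  G z <= w + S x,
    adjoin one slack variable eps:
        min 0.5 [z;eps]' Ha [z;eps] + qa'[z;eps]
        s.t.  G z - M eps <= w + S x,   eps >= 0.
    M must have one entry per constraint row here (simplifying assumption)."""
    nz = H.shape[0]
    Ha = np.block([[H, np.zeros((nz, 1))],
                   [np.zeros((1, nz)), np.array([[2.0 * rho]])]])   # adds rho*eps^2 to the cost
    qa = np.concatenate([q, [0.0]])
    Ga = np.block([[G, -M.reshape(-1, 1)],
                   [np.zeros((1, nz)), np.array([[-1.0]])]])        # last row encodes -eps <= 0
    wa = np.concatenate([w, [0.0]])
    Sa = np.vstack([S, np.zeros((1, S.shape[1]))])
    return Ha, qa, Ga, wa, Sa

The slack thus becomes one more component of the mp-QP decision vector, and x(t) remains the only parameter, as stated above.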

6.4. Variable constraints

The bounds y_min, y_max, u_min, u_max, Delta-u_min, Delta-u_max may change depending on the operating conditions; for instance, in the case of a stuck actuator the constraints become Delta-u_min = Delta-u_max = 0. This possibility can again be built into the control law. The bounds can be treated as parameters in the mp-QP and added to the vector x. The control law will then have the form u(t) = F(x(t), y_min, y_max, u_min, u_max, Delta-u_min, Delta-u_max).

7. Examples

7.1. A simple SISO system

Consider the second order system

\[
y(t) = \frac{2}{s^2 + 3s + 2}\, u(t),
\]

sample the dynamics with T = 0.1 s, and obtain the state-space representation

\[
\begin{aligned}
x(t+1) &= \begin{bmatrix} 0.7326 & -0.0861 \\ 0.1722 & 0.9909 \end{bmatrix} x(t) + \begin{bmatrix} 0.0609 \\ 0.0064 \end{bmatrix} u(t), \\
y(t) &= \begin{bmatrix} 0 & 1.4142 \end{bmatrix} x(t).
\end{aligned}
\tag{27}
\]

The task is to regulate the system to the origin while fulfilling the input constraint

\[
-2 \le u(t) \le 2.
\tag{28}
\]
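A quick numerical cross-check (scipy sketch, not part of the paper): a zero-order-hold discretization of 2/(s^2 + 3s + 2) at T = 0.1 s differs from the realization in (27) only by a state transformation, so the poles and the DC gain must coincide.

import numpy as np
from scipy.signal import tf2ss, cont2discrete

# Zero-order-hold discretization of 2/(s^2 + 3 s + 2) with T = 0.1 s
A, B, C, D = tf2ss([2.0], [1.0, 3.0, 2.0])
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt=0.1, method='zoh')
print(np.sort(np.linalg.eigvals(Ad)))                       # ~[0.8187, 0.9048] = e^{-0.2}, e^{-0.1}

# Compare with the matrices printed in (27)
A27 = np.array([[0.7326, -0.0861], [0.1722, 0.9909]])
B27 = np.array([[0.0609], [0.0064]])
C27 = np.array([[0.0, 1.4142]])
print(np.sort(np.linalg.eigvals(A27)))                      # same poles
print((C27 @ np.linalg.solve(np.eye(2) - A27, B27)).item()) # DC gain ~ 1, as for the continuous plant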

To this aim, we design an MPC controller based on the optimization problem

\[
\begin{aligned}
\min_{u_t,\,u_{t+1}}\ & x_{t+2|t}'\,P\,x_{t+2|t} + \sum_{k=0}^{1}\big[\,x_{t+k|t}'\,x_{t+k|t} + 0.01\,u_{t+k}^2\,\big] \\
\text{s.t.}\ & -2 \le u_{t+k} \le 2, \quad k = 0, 1, \\
& x_{t|t} = x(t),
\end{aligned}
\tag{29}
\]

where P solves the Lyapunov equation P = A'PA + Q (in this example, Q = I, R = 0.01, N_u = N_y = 2, N_c = 1). Note that this choice of P corresponds to setting u_{t+k} = 0 for k >= 2 and minimizing

\[
\sum_{k=0}^{\infty} x_{t+k|t}'\,x_{t+k|t} + 0.01\,u_{t+k}^2.
\tag{30}
\]
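The terminal weight can be checked in two lines (scipy sketch, not from the paper); note that scipy's routine solves a X a' - X + q = 0, so A must be passed transposed to obtain P = A'PA + Q.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.7326, -0.0861], [0.1722, 0.9909]])   # A from (27)
Q = np.eye(2)

# Passing A' yields the discrete Lyapunov equation P = A'PA + Q used in (29)
P = solve_discrete_lyapunov(A.T, Q)
print(P)
print(np.allclose(A.T @ P @ A + Q, P))   # True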

The MPC controller (29) is globally asymptotically stabilizing. In fact, it is easy to show that the value function is a Lyapunov function of the system (see the footnote accompanying Fig. 2 above). The closed-loop response from the initial condition x(0) = [1 1]' is shown in Fig. 2(a). The mp-QP problem associated with the MPC law has the form (7) with

\[
H = \begin{bmatrix} 1.5064 & 0.4838 \\ 0.4838 & 1.5258 \end{bmatrix}, \qquad
F = \begin{bmatrix} 9.6652 & 5.2115 \\ 7.0732 & -7.0879 \end{bmatrix},
\]



\[
G = \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{bmatrix}, \qquad
W = \begin{bmatrix} 2 \\ 2 \\ 2 \\ 2 \end{bmatrix}, \qquad
E = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}.
\]

The solution was computed by Algorithm 1 in 0.66 s (15 regions examined), and the corresponding polyhedral partition of the state space into N_r = 9 polyhedral cells is depicted in Fig. 2(b). The MPC law is

\[
u = \begin{cases}
[-5.9220\ \ -6.8883]\,x & \text{if } \begin{bmatrix} -5.9220 & -6.8883 \\ 5.9220 & 6.8883 \\ -1.5379 & 6.8291 \\ 1.5379 & -6.8291 \end{bmatrix} x \le \begin{bmatrix} 2.0000 \\ 2.0000 \\ 2.0000 \\ 2.0000 \end{bmatrix} \quad \text{(Region 1)} \\[2ex]
2.0000 & \text{if } \begin{bmatrix} -3.4155 & 4.6452 \\ 0.1044 & 0.1215 \\ 0.1259 & 0.0922 \end{bmatrix} x \le \begin{bmatrix} 2.6341 \\ -0.0353 \\ -0.0267 \end{bmatrix} \quad \text{(Regions 2, 4)} \\[2ex]
2.0000 & \text{if } \begin{bmatrix} 0.0679 & -0.0924 \\ 0.1259 & 0.0922 \end{bmatrix} x \le \begin{bmatrix} -0.0524 \\ -0.0519 \end{bmatrix} \quad \text{(Region 3)} \\[2ex]
-2.0000 & \text{if } \begin{bmatrix} -0.1259 & -0.0922 \\ -0.0679 & 0.0924 \end{bmatrix} x \le \begin{bmatrix} -0.0519 \\ -0.0524 \end{bmatrix} \quad \text{(Region 5)} \\[2ex]
[-6.4159\ \ -4.6953]\,x + 0.6423 & \text{if } \begin{bmatrix} -6.4159 & -4.6953 \\ -0.0275 & 0.1220 \\ 6.4159 & 4.6953 \end{bmatrix} x \le \begin{bmatrix} 1.3577 \\ -0.0357 \\ 2.6423 \end{bmatrix} \quad \text{(Region 6)} \\[2ex]
-2.0000 & \text{if } \begin{bmatrix} 3.4155 & -4.6452 \\ -0.1044 & -0.1215 \\ -0.1259 & -0.0922 \end{bmatrix} x \le \begin{bmatrix} 2.6341 \\ -0.0353 \\ -0.0267 \end{bmatrix} \quad \text{(Regions 7, 8)} \\[2ex]
[-6.4159\ \ -4.6953]\,x - 0.6423 & \text{if } \begin{bmatrix} 6.4159 & 4.6953 \\ 0.0275 & -0.1220 \\ -6.4159 & -4.6953 \end{bmatrix} x \le \begin{bmatrix} 1.3577 \\ -0.0357 \\ 2.6423 \end{bmatrix} \quad \text{(Region 9)}
\end{cases}
\]

and consists of N_MPC = 7 regions. Region 1 corresponds to the unconstrained linear controller, regions 2, 3, 4 and 5, 7, 8 correspond to the saturated controller, and regions 6 and 9 are transition regions between the unconstrained and the saturated controller. Note that the mp-QP solver provides three different regions 2, 3, 4, although in all of them u = u*_t = 2. The reason for this is that the second component of the optimal solution, u*_{t+1}, is different: u*_{t+1} = [-3.4155 4.6452] x(t) - 0.6341 in region 2, u*_{t+1} = 2 in region 3, and u*_{t+1} = -2 in region 4. Moreover, note that regions 2 and 4 are joined, as their union is a convex set, but the same cannot be done with region 3, as their union would not be a convex set, and therefore cannot be expressed as one set of linear inequalities.
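On line, evaluating this explicit controller amounts to locating the region that contains x(t) and applying the corresponding affine gain. The following point-location sketch (plain numpy, not the paper's implementation) stores only two of the regions above, purely for illustration:

import numpy as np

# Each region: (Hx, Kx, F, g) such that  u = F x + g  whenever  Hx x <= Kx.
regions = [
    (np.array([[-5.9220, -6.8883],
               [ 5.9220,  6.8883],
               [-1.5379,  6.8291],
               [ 1.5379, -6.8291]]), np.full(4, 2.0),
     np.array([-5.9220, -6.8883]), 0.0),          # Region 1 (unconstrained LQR)
    (np.array([[-6.4159, -4.6953],
               [-0.0275,  0.1220],
               [ 6.4159,  4.6953]]), np.array([1.3577, -0.0357, 2.6423]),
     np.array([-6.4159, -4.6953]), 0.6423),       # Region 6 (transition region)
]

def explicit_mpc(x):
    for Hx, Kx, F, g in regions:
        if np.all(Hx @ x <= Kx + 1e-9):           # point location
            return float(F @ x + g)               # affine feedback valid in this region
    raise ValueError("x outside the stored partition (infeasible state)")

print(explicit_mpc(np.array([0.1, -0.1])))        # falls in Region 1: u = -Kx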



Fig. 3. Example 7.1, additional constraint x_{t+1|t} >= -0.5 (componentwise): (a) closed-loop MPC; (b) state-space partition and closed-loop MPC trajectories.

Fig. 4. MIMO example: (a) closed-loop MPC: output y(t) (left), input u(t) (right); (b) state-space partition obtained by setting u = [0 0]' and r = [0.63 0.79]'.

The same example is repeated with the additional state constraint x_{t+k|t} >= x_min, x_min = [-0.5, -0.5]', k = 1. The closed-loop behavior from the initial condition x(0) = [1 1]' is depicted in Fig. 3(a). The MPC controller was computed in 0.99 s (22 regions examined). The polyhedral partition of the state space corresponding to the modified MPC controller is depicted in Fig. 3(b). The partition provided by the mp-QP algorithm now consists of N_r = 11 regions (as regions 1, 2 and 3, 9 can be joined, the MPC controller consists of N_MPC = 9 regions). Note that there are feasible states smaller than x_min and, vice versa, infeasible states x >= x_min. This is not surprising. For instance, the initial state x(0) = [-0.6, 0]' is feasible for the MPC controller (which checks state constraints at time t+k, k = 1), because there exists a feasible input such that x(1) is within the limits. On the contrary, for x(0) = [-0.47, -0.47]' no feasible input is able to produce a feasible x(1). Moreover, the union of the regions depicted in Fig. 3(b) should not be confused with the region of attraction of the MPC closed loop. For instance, by starting at x(0) = [46.0829, -7.0175]' (for which a feasible solution exists), the MPC controller runs into infeasibility after t = 9 time steps.

7.2. Reference tracking for a MIMO system

Consider the plant

\[
y(t) = \frac{10}{100 s + 1} \begin{bmatrix} 4 & -5 \\ -3 & 4 \end{bmatrix} u(t),
\tag{31}
\]

which was studied by Mulder, Kothare, and Morari (1999) and Zheng, Kothare, and Morari (1994), and by other authors, as an example for anti-windup control synthesis. The input u(t) is subject to the saturation constraints

\[
-1 \le u_i(t) \le 1, \quad i = 1, 2.
\]

Mulder et al. (1999) use a decoupler and two identical PI controllers

\[
K(s) = \left(1 + \frac{1}{100 s}\right) \begin{bmatrix} 2 & 2.5 \\ 1.5 & 2 \end{bmatrix}.
\]

For the set-point change r = [0.63, 0.79]' they show that very large oscillations result during the transient when the output of the PI controller saturates. We sample the dynamics (31) with T = 2 s, and design an MPC law (24) with N_y = 20, N_u = 1, N_c = 0, Q = I, R = 0.1 I. The closed-loop behavior starting from zero initial conditions is depicted in Fig. 4(a). It is similar to the result reported by Mulder et al. (1999), where an anti-windup scheme is used on top of the linear controller K(s).

The mp-QP problem associated with the MPC law has the form (7) with two optimization variables (two inputs over the one-step control horizon) and six parameters (two states of the original system, two states that memorize the last input u(t-1), and two reference signals), with

\[
H = \begin{bmatrix} 0.7578 & -0.9699 \\ -0.9699 & 1.2428 \end{bmatrix}, \qquad
F = \begin{bmatrix} 0.1111 & -0.1422 \\ -0.0711 & 0.0911 \\ 0.7577 & -0.9699 \\ -0.9699 & 1.2426 \\ -0.1010 & 0.1262 \\ 0.0757 & -0.1010 \end{bmatrix},
\]

\[
G = \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{bmatrix}, \qquad
W = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \qquad
E = \begin{bmatrix} 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}.
\]

The solution was computed in 1.15 s (13 regions examined) with Algorithm 1, and the explicit MPC controller is defined over N_MPC = 9 regions.

Fig. 5. Double integrator example (N_u = 2, N_MPC = 7, computation time 0.77 s): (a) closed-loop MPC; (b) polyhedral partition of the state space and closed-loop MPC trajectories.

A section of the x-space of the piecewise affine solution, obtained by setting u = [0 0]' and r = [0.63 0.79]', is depicted in Fig. 4(b). The solution can be interpreted as follows. Region 1 corresponds to the unconstrained optimal controller. Regions 4, 5, 6, 9 correspond to the saturation of both inputs; for instance, in region 4 the optimal input variation drives both inputs to their saturation values, i.e. Delta-u(t) equals the saturated input vector minus u(t-1). In regions 2, 3, 7, 8 only one component of the input vector saturates. Note that the component which does not saturate depends linearly on the state x(t), the past input u(t-1), and the reference r(t), but with different gains than the unconstrained controller of region 1. Thus, the optimal piecewise affine controller is not just the simple saturated version of a linear controller.

In summary, for this example the mp-QP solver determines: (i) a two-degree-of-freedom optimal controller for the MIMO system (31); and (ii) a set-point-dependent optimal anti-windup scheme, a nontrivial task as the regions in the (x, r, u)-space where the controller should be switched must be determined. We stress the fact that optimality refers precisely to the design performance requirement (24).

7.3. Infinite horizon LQR for the double integrator

Consider the double integrator

\[
y(t) = \frac{1}{s^2}\, u(t)
\]

and its equivalent discrete-time state-space representation

\[
\begin{aligned}
x(t+1) &= \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t), \\
y(t) &= \begin{bmatrix} 1 & 0 \end{bmatrix} x(t),
\end{aligned}
\tag{32}
\]

obtained by setting \(\ddot y(t) \approx (\dot y(t+T) - \dot y(t))/T\), \(\dot y(t) \approx (y(t+T) - y(t))/T\), T = 1 s. We want to regulate the system to the origin while minimizing the quadratic performance measure

\[
\sum_{t=0}^{\infty} y'(t)\,y(t) + \tfrac{1}{10}\,u^2(t)
\tag{33}
\]

subject to the input constraint

\[
-1 \le u(t) \le 1.
\tag{34}
\]

This task is addressed by using the MPC algorithm (3) with N_u = 2, N_y = 2,

\[
Q = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},
\]

R = 0.1, and K, P solving the Riccati equation (5). For a certain set of initial conditions, this choice of P corresponds to setting u_{t+k} = K x_{t+k|t} = [-0.81662 -1.7499] x_{t+k|t} for k >= N_u and minimizes

\[
\sum_{k=0}^{\infty} y_{t+k|t}'\,y_{t+k|t} + \tfrac{1}{10}\,u_{t+k}^2.
\tag{35}
\]
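The gain quoted above can be reproduced with a short scipy check (not from the paper; it assumes R = 0.1, consistently with the weight 1/10 in (33) and (35)):

import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0])
R = np.array([[0.1]])

# Discrete-time algebraic Riccati equation and associated LQR gain
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(-K)   # ~[[-0.8166, -1.7499]], matching the gain above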

On this set of initial conditions, the MPC controller coincides with the constrained linear quadratic regulator, and is therefore stabilizing. Its domain of attraction is, however, larger than this set. We test the closed-loop behavior from the initial condition x(0) = [10 -5]', which is depicted in Fig. 5(a). The mp-QP problem associated with the MPC law has the form (7) with

\[
H = \begin{bmatrix} 1.6730 & 0.7207 \\ 0.7207 & 0.4118 \end{bmatrix}, \quad
F = \begin{bmatrix} 0.9248 & 0.3363 \\ 2.5703 & 1.0570 \end{bmatrix}, \quad
G = \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{bmatrix}, \quad
W = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \quad
E = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix},
\]

and was computed in 0.77 s (16 regions examined). The corresponding polyhedral partition of the state space is depicted in Fig. 5(b).
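For comparison, the on-line alternative solves this small QP at every sampling instant. A scipy sketch (not from the paper) for the initial state used above; since E = 0 here, the constraints reduce to the box -1 <= U <= 1:

import numpy as np
from scipy.optimize import minimize

H = np.array([[1.6730, 0.7207], [0.7207, 0.4118]])
F = np.array([[0.9248, 0.3363], [2.5703, 1.0570]])
x = np.array([10.0, -5.0])

# On-line MPC: solve min_U 0.5 U'HU + (x'F) U  s.t. -1 <= U <= 1, then apply u(t) = U[0]
f = x @ F                                   # linear term of the QP for this x
res = minimize(lambda U: 0.5 * U @ H @ U + f @ U,
               x0=np.zeros(2),
               jac=lambda U: H @ U + f,
               bounds=[(-1.0, 1.0)] * 2, method="L-BFGS-B")
print(res.x[0])   # input actually applied; the explicit controller returns the same value,
                  # being the exact solution of this same QP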

The same example was solved by increasing the number of degrees of freedom N_u. The corresponding partitions, computation times, and numbers of regions are reported in Fig. 6. Note that by increasing the number of free control moves N_u, the control law only changes far away from the origin, the more so in the periphery the larger N_u. This must be expected from the results of Section 3, as the set where the MPC law approximates the constrained infinite-horizon linear quadratic regulation (C-LQR) problem gets larger when N_u increases (Chmielewski & Manousiouthakis, 1996; Scokaert & Rawlings, 1998). By the same arguments, the exact piecewise affine solution of the C-LQR problem can be obtained for any set of initial conditions by choosing the finite horizon as outlined in Section 3.

Fig. 6. Partition of the state space for the MPC controller: N_u = number of control degrees of freedom, N_MPC = number of polyhedral regions in the controller, off-line computation time: (a) N_u = 3, N_MPC = 15, CPU time: 2.63 s; (b) N_u = 4, N_MPC = 25, CPU time: 5.60 s; (c) N_u = 5, N_MPC = 39, CPU time: 9.01 s; (d) N_u = 6, N_MPC = 57, CPU time: 16.48 s.

Preliminary ideas about the shape of the constrained linear quadratic regulator for the double integrator were presented in Tan (1991), and are in full agreement with our results. By extrapolating the plots in Fig. 6, where the band of unsaturated control actions is partitioned into 2N_u - 1 sets, one may conjecture that as N_u goes to infinity, the number of regions of the state-space partition tends to infinity as well. Note that although the band of unsaturated control may shrink asymptotically as ||x|| grows, it cannot disappear. In fact, such a gap is needed to ensure the continuity of the controller proved in Corollary 2. The fact that more degrees of freedom are needed as the state gets larger in order to preserve LQR optimality is also observed by Scokaert and Rawlings (1998), where for the state-space realization of the double integrator chosen by the authors the state x = [20, 20] requires N_u = 33 degrees of freedom.

8. Conclusions

We showed that the linear quadratic optimal controller for constrained systems is piecewise affine, and we provided an efficient algorithm to determine its parameters. The controller inherits all the stability and performance properties of model predictive control (MPC), but can be implemented without any involved on-line computations. The new technique is not intended to replace MPC, especially not in some of the larger applications (systems with more than 50 inputs and 150 outputs have been reported from industry). It is expected to enlarge the scope of applicability of MPC to situations which cannot be covered satisfactorily with anti-windup schemes, or where the on-line computations required for MPC are prohibitive for technical or cost reasons, such as those arising in the automotive and aerospace industries. The decision between on-line and off-line computation must also weigh a tradeoff between CPU load (for solving the QP) and memory (for storing the explicit solution). Moreover, the explicit form of the MPC controller allows one to better understand the control action, and to analyze its performance and stability properties (Bemporad, Torrisi, & Morari, 2000). Current research is devoted to developing on-line implementation techniques which do not require the storage of the polyhedral cells (Borrelli, Baotic, Bemporad, & Morari, 2001), and to developing suboptimal methods that allow one to trade off between performance loss and controller complexity (Bemporad & Filippi, 2001).

All the results in this paper can be extended easily to 1-norm and infinity-norm objective functions instead of the 2-norm employed here (Bemporad, Borrelli, & Morari, 2000). The resulting multiparametric linear program can be solved in a similar manner, as suggested by Borrelli et al. (2000) or by Gal (1995). For MPC of hybrid systems, an extension involving multiparametric mixed-integer linear programming is also possible (Bemporad, Borrelli, & Morari, 2000; Dua & Pistikopoulos, 2000). Finally, we note that the semi-global stabilization problem for discrete-time constrained systems with multiple poles on the unit circle, which has received much attention since the early paper by Sontag (1984), can be addressed in a completely general manner in the proposed framework.

Acknowledgements

This research was supported by the Swiss National Science Foundation. We wish to thank Nikolaos A. Bozinis for his help with the initial implementation of the algorithm, and Francesco Borrelli and Ali H. Sayed for suggesting improvements to the original manuscript.

References

Acevedo, J., & Pistikopoulos, E. N. (1999). An algorithm for multiparametric mixed-integer linear programming problem. Operations Research Letters, 24, 139-148.



Bazaraa, M. S., Sherali, H. D., & Shetty, C. M. (1993). Nonlinear programming: Theory and algorithms (2nd ed.). New York: Wiley.
Bellman, R. (1961). Adaptive control processes: A guided tour. Princeton, NJ: Princeton University Press.
Bemporad, A. (1998). A predictive controller with artificial Lyapunov function for linear systems with input/state constraints. Automatica, 34(10), 1255-1260.
Bemporad, A. (1998). Reducing conservativeness in predictive control of constrained systems with disturbances. Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL (pp. 1384-1391).
Bemporad, A., Borrelli, F., & Morari, M. (2000). Explicit solution of LP-based model predictive control. Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, December.
Bemporad, A., Borrelli, F., & Morari, M. (2000). Piecewise linear optimal controllers for hybrid systems. Proceedings of the American Control Conference, Chicago, IL.
Bemporad, A., Chisci, L., & Mosca, E. (1994). On the stabilizing property of SIORHC. Automatica, 30(12), 2013-2015.
Bemporad, A., & Filippi, C. (2001). Suboptimal explicit MPC via approximate multiparametric quadratic programming. Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, FL.
Bemporad, A., Fukuda, K., & Torrisi, F. D. (2001). Convexity recognition of the union of polyhedra. Computational Geometry, 18, 141-154.
Bemporad, A., & Morari, M. (1999). Robust model predictive control: A survey. In A. Garulli, A. Tesi, & A. Vicino (Eds.), Robustness in identification and control, Lecture Notes in Control and Information Sciences, Vol. 245 (pp. 207-226). Berlin: Springer.
Bemporad, A., Torrisi, F. D., & Morari, M. (2000). Performance analysis of piecewise linear systems and model predictive control systems. Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, December.
Blanchini, F. (1994). Ultimate boundedness control for uncertain discrete-time systems via set-induced Lyapunov functions. IEEE Transactions on Automatic Control, 39(2), 428-433.
Borrelli, F., Baotic, M., Bemporad, A., & Morari, M. (2001). Efficient on-line computation of closed-form constrained optimal control laws. Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, FL.
Borrelli, F., Bemporad, A., & Morari, M. (2000). A geometric algorithm for multi-parametric linear programming. Technical Report AUT00-06, Automatic Control Laboratory, ETH Zurich, Switzerland, February 2000.
Chiou, H. W., & Zafiriou, E. (1994). Frequency domain design of robustly stable constrained model predictive controllers. Proceedings of the American Control Conference, Vol. 3 (pp. 2852-2856).
Chisci, L., & Zappa, G. (1999). Fast algorithm for a constrained infinite horizon LQ problem. International Journal of Control, 72(11), 1020-1026.
Chmielewski, D., & Manousiouthakis, V. (1996). On constrained infinite-time linear quadratic optimal control. Systems and Control Letters, 29(3), 121-130.
Dua, V., & Pistikopoulos, E. N. (1999). Algorithms for the solution of multiparametric mixed-integer nonlinear optimization problems. Industrial Engineering Chemistry Research, 38(10), 3976-3987.
Dua, V., & Pistikopoulos, E. N. (2000). An algorithm for the solution of multiparametric mixed integer linear programming problems. Annals of Operations Research, 99, 123-139.
Fiacco, A. V. (1976). Sensitivity analysis for nonlinear programming using penalty methods. Mathematical Programming, 10(3), 287-311.
Fiacco, A. V. (1983). Introduction to sensitivity and stability analysis in nonlinear programming. London, UK: Academic Press.
Filippi, C. (1997). On the geometry of optimal partition sets in multiparametric linear programming. Technical Report 12, Department of Pure and Applied Mathematics, University of Padova, Italy, June.
Fletcher, R. (1981). Practical methods of optimization, Vol. 2: Constrained optimization. New York: Wiley.
Fukuda, K. (1997). cdd/cdd+ reference manual (0.61 (cdd), 0.75 (cdd+) ed.). Zurich, Switzerland: Institute for Operations Research.
Gal, T. (1995). Postoptimal analyses, parametric programming, and related topics (2nd ed.). Berlin: de Gruyter.
Gilbert, E. G., & Tin Tan, K. (1991). Linear systems with state and control constraints: The theory and applications of maximal output admissible sets. IEEE Transactions on Automatic Control, 36(9), 1008-1020.
Gutman, P. O. (1986). A linear programming regulator applied to hydroelectric reservoir level control. Automatica, 22(5), 533-541.
Gutman, P. O., & Cwikel, M. (1986). Admissible sets and feedback control for discrete-time linear dynamical systems with bounded control and states. IEEE Transactions on Automatic Control, AC-31(4), 373-376.
Johansen, T. A., Petersen, I., & Slupphaug, O. (2000). On explicit suboptimal LQR with state and input constraints. Proceedings of the 39th IEEE Conference on Decision and Control (pp. 662-667), Sydney, Australia, December.
Keerthi, S. S., & Gilbert, E. G. (1988). Optimal infinite-horizon feedback control laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations. Journal of Optimization Theory and Applications, 57, 265-293.
Kerrigan, E. C., & Maciejowski, J. M. (2000). Invariant sets for constrained nonlinear discrete-time systems with application to feasibility in model predictive control. Proceedings of the 39th IEEE Conference on Decision and Control.
Kothare, M. V., Campo, P. J., Morari, M., & Nett, C. N. (1994). A unified framework for the study of anti-windup designs. Automatica, 30(12), 1869-1883.
Kothare, M. V., & Morari, M. (1999). Multiplier theory for stability analysis of anti-windup control systems. Automatica, 35, 917-928.
Mulder, E. F., Kothare, M. V., & Morari, M. (1999). Multivariable anti-windup controller synthesis using iterative linear matrix inequalities. Proceedings of the European Control Conference.
Qin, S. J., & Badgwell, T. A. (1997). An overview of industrial model predictive control technology. In Chemical process control - V, Vol. 93, No. 316 (pp. 232-256). AIChE Symposium Series, American Institute of Chemical Engineers.
Rawlings, J. B., & Muske, K. R. (1993). The stability of constrained receding-horizon control. IEEE Transactions on Automatic Control, 38, 1512-1516.
Scokaert, P. O. M., & Rawlings, J. B. (1998). Constrained linear quadratic regulation. IEEE Transactions on Automatic Control, 43(8), 1163-1169.
Sontag, E. D. (1984). An algebraic approach to bounded controllability of linear systems. International Journal of Control, 39(1), 181-188.
Sznaier, M., & Damborg, M. J. (1987). Suboptimal control of linear systems with state and control inequality constraints. Proceedings of the 26th IEEE Conference on Decision and Control, Vol. 1 (pp. 761-762).
Tan, K. T. (1991). Maximal output admissible sets and the nonlinear control of linear discrete-time systems with state and control constraints. Ph.D. thesis, University of Michigan.
Teel, A., & Kapoor, N. (1997). The L2 anti-windup problem: Its definition and solution. European Control Conference, Brussels, Belgium.
Zafiriou, E. (1990). Robust model predictive control of processes with hard constraints. Computers and Chemical Engineering, 14(4/5), 359-371.
Zheng, A., Kothare, M. V., & Morari, M. (1994). Anti-windup design for internal model control. International Journal of Control, 60(5), 1015-1024.
Zheng, A., & Morari, M. (1995). Stability of model predictive control with mixed constraints. IEEE Transactions on Automatic Control, 40, 1818-1823.



Alberto Bemporad received the master's degree in Electrical Engineering in 1993 and the Ph.D. in Control Engineering in 1997 from the University of Florence, Italy. He spent the academic year 1996/97 at the Center for Robotics and Automation, Dept. of Systems Science & Mathematics, Washington University, St. Louis, as a visiting researcher. In 1997-1999, he held a postdoctoral position at the Automatic Control Lab, ETH, Zurich, Switzerland, where he is currently affiliated as a senior researcher. Since 1999, he has been an assistant professor at the University of Siena, Italy. He received the IEEE Center and South Italy section "G. Barzilai" and the AEI (Italian Electrical Association) "R. Mariani" awards. He has published papers in the area of hybrid systems, model predictive control, computational geometry, and robotics. He is involved in the development of the Model Predictive Control Toolbox for Matlab. Since 2001, he has been an Associate Editor of the IEEE Transactions on Automatic Control.

Manfred Morari was appointed head of the Automatic Control Laboratory at the Swiss Federal Institute of Technology (ETH) in Zurich in 1994. Before that he was the McCollum-Corcoran Professor and Executive Officer for Control and Dynamical Systems at the California Institute of Technology. He obtained the diploma from ETH Zurich and the Ph.D. from the University of Minnesota. His interests are in hybrid systems and the control of biomedical systems. In recognition of his research he received numerous awards, among them the Eckman Award of the AACC, the Colburn Award and the Professional Progress Award of the AIChE, and he was elected to the National Academy of Engineering (U.S.). Professor Morari has held appointments with Exxon R&E and ICI and has consulted internationally for a number of major corporations.

Vivek Dua is a Research Associate at the Centre for Process Systems Engineering, Imperial College. He obtained a B.E. (Honours) in Chemical Engineering from Panjab University, Chandigarh, India in 1993 and an M.Tech. in Chemical Engineering from the Indian Institute of Technology, Kanpur in 1995. He joined Kinetics Technology India Ltd. as a Process Engineer in 1995 and then Imperial College in 1996, where he obtained a PhD in Chemical Engineering in 2000. His research interests are in the areas of mathematical programming and its applications in process systems engineering.

Stratos Pistikopoulos is a Professor in the Department of Chemical Engineering at Imperial College. He obtained a Diploma in Chemical Engineering from the Aristotle University of Thessaloniki, Greece in 1984 and his PhD in Chemical Engineering from Carnegie Mellon University, USA in 1988. His research interests include the development of theory, algorithms and computational tools for continuous and integer parametric programming. He has authored or co-authored over 150 research publications in the area of optimization and process systems engineering applications.
