5th Conference on Optimization Techniques Part I




Editorial Board D. Gries • P. Brinch Hansen • C. Moler • G. Seegmüller • N. Wirth

Prof. Dr. R. Conti Istituto di Matematica "Ulisse Dini" Università di Firenze Viale Morgagni 67/A I-50134 Firenze/Italia

Prof. Dr. Antonio Ruberti Istituto di Automatica Università di Roma Via Eudossiana 18 I-00184 Roma/Italia

AMS Subject Classifications (1970): 49-02, 49A40, 49B20, 49B35, 49B40, 49C05, 49C10, 49DXX, 65K05, 68A25, 68A45, 90-02, 90C05, 90C10, 90C20, 90C30, 90C50, 90C99, 90D05, 90D25, 90D99, 93-02, 93B05, 93B10, 93B20, 93B30, 93B35, 93B99, 93C20, 93E05, 93E20

ISBN 3-540-06583-0 Springer-Verlag Berlin • Heidelberg • New York
ISBN 0-387-06583-0 Springer-Verlag New York • Heidelberg • Berlin

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks.

Under § 54 of the German Copyright Law where copies are made for other than private use, a fee is payable to the publisher, the amount of the fee to be determined by agreement with the publisher.

© by Springer-Verlag Berlin • Heidelberg 1973. Library of Congress Catalog Card Number 73-20818. Printed in Germany.

Offsetprinting and bookbinding: Julius Beltz, Hemsbach/Bergstr.


PREFACE

These Proceedings are based on the papers presented at the 5th IFIP Conference on Optimization Techniques held in Rome, May 7-11, 1973. The Conference was sponsored by the IFIP Technical Committee on Optimization (TC-7) and by the Consiglio Nazionale delle Ricerche (Italian National Research Council).

The Conference was devoted to recent advances in optimization techniques

and their application to modelling, identification and control of large systems.

Major emphasis of the Conference was on the most recent application areas,

including: Environmental Systems, Socio-economic Systems, Biological Systems.

An interesting feature of the Conference was the participation of specialists

both in control theory and in the field of application of systems engineering.

The Proceedings are divided into two volumes. In the first are collected

the papers in which the methodological aspects are emphasized; in the second

those dealing with various application areas.

The International Program Committee of the Conference consisted of:

R. Conti, A. Ruberti (Italy) Chairmen, F. de Veubeke (Belgium), E. Goto (Japan), W. J. Karplus (USA), J. L. Lions (France), G. Marchuk (USSR), C. Olech (Poland), L. S. Pontryagin (USSR), E. Rofman (Argentina), J. Stoer (FRG), J. H. Westcott (UK).

Previously published optimization conferences:

Colloquium on Methods of Optimization. Held in Novosibirsk/USSR, June 1968. (Lecture Notes in Mathematics, Vol. 112)

Symposium on Optimization. Held in Nice, June 1969. (Lecture Notes in Mathematics, Vol. 132)

Computing Methods in Optimization Problems. Held in San Remo, September 1968. (Lecture Notes in Operations Research and Mathematical Economics, Vol. 14)


TABLE OF CONTENTS

SYSTEM MODELLING AND IDENTIFICATION

Identification of Systems Subject to Random State Disturbance
A. V. Balakrishnan ............................................ 1

Adaptive Compartimental Structures in Biology and Society
R. R. Mohler, W. D. Smith ..................................... 20

On the Optimal Size of System Model
M. Z. Dajani .................................................. 33

Information-theoretic Methods for Modelling and Analysing Large Systems
R. E. Rink .................................................... 37

A New Criterion for Modelling Systems
L. W. Taylor, Jr. ............................................. 46

Stochastic Extension and Functional Restrictions of Ill-posed Estimation Problems
E. Mosca ...................................................... 57

Regression Operator in Infinite Dimensional Vector Spaces and Its Application to Some Identification Problems
A. Szymanski .................................................. 69

An Approach to Identification and Optimization in Quality Control
W. Runggaldier, G. R. Jacur ................................... 83

On Optimal Estimation and Innovation Processes
Y. A. Rosanov *

Identification de Domaines
J. Cea ........................................................ 92

The Modelling of Edible Oil Fat Mixtures
J. O. Gray, J. A. Ainsley ..................................... 103

DISTRIBUTED SYSTEMS

Free Boundary Problems and Impulse Control
J. L. Lions ................................................... 116

A Convex Programming Method in Hilbert Space and Its Applications to Optimal Control of Systems Described by Parabolic Equations
K. Malanowski ................................................. 124

* Paper not received



About Some Free Boundary Problems Connected with Hydraulics
C. Baiocchi ................................................... 137

Méthode de Décomposition Appliquée au Contrôle Optimal de Systèmes Distribués
A. Bensoussan, R. Glowinski, J. L. Lions ...................... 141

Approximation of Optimal Control Problems of Systems Described by Boundary-value Mixed Problems of Dirichlet-Neumann Type
P. Colli Franzone ............................................. 152

Control of Parabolic Systems with Boundary Conditions Involving Time-Delays
P. K. C. Wang ................................................. 165

GAME THEORY

Characterization of Cones of Functions Isomorphic to Cones of Convex Functions
J.-P. Aubin ................................................... 174

Necessary Conditions and Sufficient Conditions for Pareto Optimality in a Multicriterion Perturbed System
J.-L. Goffin, A. Haurie ....................................... 184

A Unified Theory of Deterministic Two-Players Zero-Sum Differential Games
C. Marchal .................................................... 194

About Optimality of Time of Pursuit
M. S. Nikol'skii .............................................. 202

PATTERN RECOGNITION

Algebraic Automata and Optimal Solutions in Pattern Recognition
E. Astesiano, G. Costa ........................................ 206

A New Feature Selection Procedure for Pattern Recognition Based on Supervised Learning
J. Kittler .................................................... 218

On Recognition of High Deformed Patterns by Means of Multilevel Descriptions
S. Tyszko *

A Classification Problem in Medical Radioscintigraphy
G. Walch ...................................................... 250

The Dynamic Clusters Method and Optimization in Non-Hierarchical Clustering
E. Diday ...................................................... 241

* Paper not received



OPTIMAL CONTROL

A Maximum Principle for General Constrained Optimal Control Problems - An Epsilon Technique Approach
J. W. Mersky .................................................. 259

Optimal Control of Systems Governed by Variational Inequalities
J. P. Yvon .................................................... 265

On Determining the Submanifolds of State Space Where the Optimal Value Surface Has an Infinite Derivative
H. L. Stalford ................................................ 276

Control of Affine Systems with Memory
M. C. Delfour, S. K. Mitter ................................... 292

Computational Methods in Hilbert Space for Optimal Control Problems with Delays
A. P. Wierzbicki, A. Hatko .................................... 304

Sufficient Conditions of Optimality for Contingent Equations
V. I. Blagodatskih ............................................ 319

Variational Approximations of Some Optimal Control Problems
T. Zolezzi .................................................... 329

Norm Perturbation of Supremum Problems
J. Baranger ................................................... 333

On Two Conjectures about the Closed-Loop Time-Optimal Control
P. Brunovský .................................................. 341

Coupling of State Variables in the Optimal Low Thrust Orbital Transfer Problem
R. Henrion .................................................... 345

Optimization of the Ammonia Oxidation Process Used in the Manufacture of Nitric Acid
P. Uronen, E. Kiukaanniemi .................................... 360

STOCHASTIC CONTROL

Stochastic Control with at Most Denumerable Number of Corrections
J. Zabczyk .................................................... 370

Design of Optimal Incomplete State Feedback Controllers for Large Linear Constant Systems
W. J. Naeije, P. Valk, O. H. Bosgra ........................... 375

Control of a Non Linear Stochastic Boundary Value Problem
J. P. Kernevez, J. P. Quadrat, M. Viot ........................ 389

An Algorithm to Estimate Sub-Optimal Present Values for Unichain Markov Processes with Alternative Reward Structures
S. Das Gupta .................................................. 399



MATHEMATICAL PROGRAMMING

Some Recent Developments in Nonlinear Programming
G. Zoutendijk ................................................. 407

Penalty Methods and Augmented Lagrangians in Nonlinear Programming
R. T. Rockafellar ............................................. 418

On INF-Compact Mathematical Programs
R. J.-B. Wets ................................................. 426

Nonconvex Quadratic Programs, Linear Complementarity Problems, and Integer Linear Programs
F. Giannessi, E. Tomasin ...................................... 437

A Widely Convergent Minimization Algorithm with Quadratic Termination Property
G. Treccani ................................................... 450

A Heuristic Approach to Combinatorial Optimization Problems
E. Biondi, P. C. Palermo ...................................... 460

A New Solution for the General Set Covering Problem
L. B. Kovács .................................................. 471

A Theoretical Prediction of the Input-Output Table
E. Klafszky ................................................... 484

An Improved Algorithm for Pseudo-Boolean Programming
S. Walukiewicz, L. Słomiński, M. Faner ........................ 493

Numerical Algorithms for Global Extremum Search
J. Evtushenko ................................................. 505

NUMERICAL METHODS

Generalized Sequential Gradient-Restoration Algorithm with Applications to Problems with Bounded Control, Bounded State, and Bounded Time-Rate of the State
A. Miele, J. N. Damoulakis *

Gradient Techniques for Computation of Stationary Points
E. K. Blum .................................................... 509

Parameterization and Graphic Aid in Gradient Methods
J.-P. Peltier ................................................. 517

Les Algorithmes de Coordination dans la Méthode Mixte d'Optimisation à Deux Niveaux
G. Grateloup, A. Titli, T. Lefèvre ............................ 526

* Paper not received



Applications of Decomposition and Multi-Level Techniques to the Optimization of Distributed Parameter Systems
Ph. Cambon, L. Le Letty ....................................... 538

Attempt to Solve a Combinatorial Problem in the Continuum by a Method of Extension-Reduction
E. Spedicato, G. Tagliabue .................................... 554


Contents of Part II

(Lecture Notes in Computer Science, Vol. 4)

URBAN AND SOCIETY SYSTEMS

Some Aspects of Urban Systems of Relevance to Optimization Techniques
D. Bayliss .................................................... 1

Selection of Optimal Industrial Clusters for Regional Development
S. Czamanski .................................................. 9

Optimal Investment Policies in Transportation Networks
S. Giulianelli, A. La Bella ................................... 22

An On-Line Optimization Procedure for an Urban Traffic System
C. J. Macleod, A. J. Al-Khalili ............................... 31

Hierarchical Strategies for the On-Line Control of Urban Road Traffic Signals
M. G. Singh ................................................... 42

Application of Optimization Approach to the Problem of Land Use Plan Design
K. C. Sinha, A. J. Hartmann ................................... 60

Some Optimization Problems in the Analysis of Urban and Municipal Systems
E. J. Beltrami *

Combinatorial Optimization and Preference Pattern Aggregation
J. M. Blin, A. B. Whinston .................................... 73

A Microsimulation Model of the Health Care System in the United States: The Role of the Physician Services Sector
D. E. Yett, L. Drabek, M. D. Intriligator, L. J. Kimbell ...... 85

COMPUTER AND COMMUNICATION NETWORKS

A Model for Finite Storage Message Switching Networks
F. Borgonovo, L. Fratta ....................................... 97

On Constrained Diameter and Medium Optimal Spanning Trees
F. Maffioli ................................................... 110

Simulation Techniques for the Study of Modulated Communication Channels
J. K. Skwirzynski ............................................. 118

* Paper not received



Gestion Optimale d'un Ordinateur Multiprogrammé à Mémoire Virtuelle
E. Gelenbe, D. Potier, A. Brandwajn, J. Lenfant ............... 132

State-Space Approach in Problem-Solving Optimization
A. S. Vincentelli, M. Somalvico ............................... 144

ENVIRONMENTAL SYSTEMS

Perturbation Theory and the Statement of Inverse Problems
G. I. Marchuk ................................................. 159

A Model for the Evaluation of Alternative Policies for Atmospheric Pollutant Source Emissions
R. Aguilar, L. F. G. de Cevallos, P. G. de Cos, F. Gómez-Pallete, G. Martínez Sanchez ....................... 167

Mathematical Modelling of a Nordic Hydrological System, and the Use of a Simplified Run-Off Model in the Stochastic Optimal Control of a Hydroelectrical Power System
M. Fjeld, S. L. Meyer, S. Aam ................................. 179

A Two-Dimensional Model for the Lagoon of Venice
C. Chignoli, R. Rabagliati .................................... 203

Sea Level Prediction Models for Venice
A. Artegiani, A. Giommoni, A. Goldmann, P. Sguazzero, A. Tomasin .................................................... 213

Optimal Estuary Aeration: An Application of Distributed Parameter Control Theory
W. Hullett .................................................... 222

Interactive Simulation Program for Water Flood Routing Systems
F. Greco, L. Panattoni ........................................ 231

An Automatic River Planning Operating System (ARPOS)
E. Martino, B. Simeone, T. Toffoli ............................ 241

On the Optimal Control on an Infinite Planning Horizon of Consumption, Pollution, Population, and Natural Resource Use
A. Haurie, M. P. Polis, P. Yansouni ........................... 251

ECONOMIC MODELS

Limited Role of Entropy in Information Economics
J. Marschak ................................................... 264

On a Dual Control Approach to the Pricing Policies of a Trading Specialist
M. Aoki ....................................................... 272



Problems of Optimal Investments with Finite Lifetime Capital
B. Nicoletti, L. Mariani ...................................... 283

Some Economic Models of Markets
M. J. H. Mogridge ............................................. 295

Utilization of Heuristics in Manufacturing Planning and Optimization
J. Christoffersen, P. Falster, E. Suonsivu, B. Svärdson ....... 303

Economic Simulation of a Small Chemical Plant
G. Burgess, G. L. Wells ....................................... 313

An Optimal Growth Model for the Hungarian National Economy
I. Ligeti ..................................................... 324

BIOLOGICAL SYSTEMS

On Optimization of Health Care Systems
J. H. Milsum .................................................. 335

Theoretical and Operational Problems in Driving a Physical Model of the Circulatory System
B. Abbiati, R. Fumero, F. M. Montevecchi, C. Parrella ......... 347

Modelling, Simulation, Identification and Optimal Control of Large Biochemical Systems
J. P. Kernevez ................................................ 357

Modélisation du Transfert Gazeux Pulmonaire et Calcul Automatique de la Capacité de Diffusion
D. Silvie, H. Robin, C. Boulenguez ............................ 366

A Model for the Generation of the Muscle Force During Saccadic Eye Movements
A. Cerutti, F. Peterlongo, R. Schmid *

On Some Models of the Muscle Spindle
C. Badi, G. Borgonovo, L. Divieti ............................. 378

* Paper not received


IDENTIFICATION OF SYSTEMS SUBJECT TO RANDOM STATE DISTURBANCE

by A.V. Balakrishnan

Department of System Science University of California, Los Angeles*

ABSTRACT

A theory of identification of a class of linear systems -- lumped and distributed -- in the presence of state or process noise is presented. As a specific application, the problem of identifying aircraft stability and control derivatives in turbulence is considered, and results obtained on actual flight data are included.

INTRODUCTION

The conscious use of mathematical models in system optimization has been growing rapidly in recent years. Perhaps the most spectacular example is the "world model" of Forrester [1]. If it is assumed that a model, before it can be used to predict the future, must first be verified on currently available data, then we are faced with the problem of "system identification" -- of estimating unknown parameters in the model based on observed data. There is a large engineering literature (see [2]) on the subject of system identification and parameter estimation; much of it is, however, in the nature of a collection of techniques with little or no precise mathematical framework. Often the authors make ad hoc simplifications for the avowed reason that otherwise the mathematics involved is too complex. Whereas such a point of view may be harmless when theory plays a secondary role, as in the design of a physical system which can be, and finally is, tested in the laboratory before the design is finalized, the situation is quite different in the identification problem, where the end result is a set of numbers and we have really no means of being certain of their accuracy. Surely this would argue for extreme care in the mathematical formulation and analysis, and make the identification problem basically a mathematical and computational one.

In this paper we study the problem of identifying a linear system with time-continuous description (rather than discrete-time, as in the bulk of the engineering literature), furthermore taking into account unknown inputs (modelled as random state disturbance), usually omitted for reasons of mathematical complexity. Such a formulation turns out to be actually necessary in the practical problem of estimating aircraft parameters from flight data in turbulence; see [3].

More specifically, we can state the problem as follows: the observation y(t;ω) (in the usual notation, ω denotes the "sample point" corresponding to the random process) has the form:

$$y(t;\omega) = s(t;\omega) + n_1(t;\omega) \qquad (1.1)$$

where n₁(t;ω) represents the unavoidable measurement errors, modelled as a white Gaussian process (as a reasonable idealization of a band-limited process of bandwidth large compared to that of the process s(t;ω)). The system is assumed to be linear in its response to the (known) input u(t), and to the unknown ("state") disturbance, assumed to be a physically realizable Gaussian process, and hence we can write:

$$s(t;\omega) = \int_0^t B(t-s)\,u(s)\,ds + \int_0^t F(t-s)\,n_2(s;\omega)\,ds \qquad (1.2)$$

Research supported in part under Grant No. 73-2492, Applied Mathematics Division, AFOSR U.S. Air Force.


where F(·) is determined from the spectral density of the disturbance process. We have omitted initial conditions, as we may, since we shall assume that the system is stable, in fact that:

$$\|B(t)\| + \|F(t)\| = O(e^{-kt}), \quad k > 0 \qquad (1.3)$$

and allow for long observation time (only asymptotic properties of estimates are considered). We may also assume that the measurement noise in (1.1) is independent of the disturbance. The identification problem is that of estimating unknown parameters in the functions B(·) and F(·) from the observation y(s;ω), 0 ≤ s ≤ T, and the given known input u(s), 0 ≤ s ≤ T. Both n₁(t;ω) and n₂(t;ω) are white Gaussian with unit spectral density (matrix), and are independent of each other. It should be noted that instrument errors such as calibration error, bias error, time-delays etc., in the observation y(t;ω) are supposed to be known and corrected for.

Although seemingly simple, this formulation, we hasten to point out, includes all known identification (and control) problems for linear systems. For example, the most commonly stated version:

$$\dot{x}(t) = A\,x(t) + B\,u(t) + F\,n_1(t;\omega) \qquad (1.4)$$

$$y(t) = C\,x(t) + n_2(t;\omega) \qquad (1.5)$$

is obviously of this form; in fact we readily see that:

$$F(t) = C\,e^{At}F, \qquad B(t) = C\,e^{At}B$$

and that (1.3) is satisfied if A is stable. The main simplification occurring in this example is of course the fact that the Laplace transforms of F(·) and B(·) are rational. In particular, (1.4) and (1.5) represent a "lumped parameter system," governed by ordinary differential equations. Our model also includes "distributed parameter" systems, governed by partial differential equations. For example, let G be a region in three-space dimensions with boundary Γ. Then we may consider a structure of the form (we omit the details):

$$\frac{\partial f}{\partial t} = \sum_{i,j} a_{ij}\,\frac{\partial^2 f}{\partial x_i\,\partial x_j} + n(t,\cdot,\omega) \qquad (1.6)$$

$$f(t,\cdot) = u(t,\cdot) \quad \text{on } \Gamma$$

$$y(t) = C\,f(t,\cdot) + n_1(t,\omega) \qquad (1.7)$$

where C is a linear transformation with finite-dimensional range. "Solving" (1.6), we see that (1.7) can again be expressed in the form (1.1), (1.2). What is important to note is that the formulation (1.1), (1.2) requires only the specification (within unknown parameters) of the functions B(·) and F(·). Yet another example is one in which the state space is finite dimensional but the state noise does not have a rational spectrum. A practical instance of this is the problem of identifying aircraft parameters in turbulence characterized by the Von Karman spectrum. In other words, the Laplace transform of B(·) is rational, but that of F(·) is not, the spectral density being of the form

$$\Big|\int_0^\infty e^{2\pi i f t}\,F(t)\,dt\Big|^2 \;\propto\; \frac{f^4}{(a+f^2)^{17/6}}$$

Although we shall not go into it here, we wish to note that it is also possible to formulate stochastic control problems in terms of (1.1), (1.2). Thus we may seek to minimize

$$\int_0^T E\big[Q\,s(t;\omega),\, s(t;\omega)\big]\,dt$$

Page 15: 5th Conference on Optimization Techniques Part I

subject to constraints on the control u(t), which is required to be adapted to the observation process y(t;ω). See [4] for more on this.

Estimation Theory

Let θ denote the vector of unknown parameters that we need to identify. We shall assume that there is a true value θ₀ (in other words, no "model" error), and that it lies in a known interval I(θ). Let θ̂_T denote an estimate of θ₀ based on y(s), 0 ≤ s ≤ T. We shall be interested only in estimates with the following two properties:

(i) asymptotically unbiased: E(θ̂_T) → θ₀

(ii) consistent: θ̂_T converges in a suitable sense to θ₀ as T → ∞

We shall show that it is possible to find an estimate with these properties by invoking the classical method of "maximum likelihood."
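As a toy numerical illustration of these two properties (under drastic simplifications: a single scalar parameter entering the mean linearly, discrete samples, no state disturbance), one can maximize a Gaussian log-likelihood over a grid covering the known interval. All names and values below are hypothetical, far simpler than the class of systems treated here.

```python
import numpy as np

rng = np.random.default_rng(1)

theta0 = 2.0                              # true value of the unknown parameter
thetas = np.linspace(0.0, 4.0, 401)       # the known interval, discretized

def mle(T):
    """Maximum-likelihood estimate of theta0 from T noisy samples."""
    u = np.sin(0.1 * np.arange(T))                # known input
    y = theta0 * u + rng.standard_normal(T)       # observation: signal plus white noise
    # Gaussian log-likelihood, up to theta-independent constants
    loglik = [-0.5 * np.sum((y - th * u) ** 2) for th in thetas]
    return thetas[int(np.argmax(loglik))]

estimates = [mle(T) for T in (100, 1000, 10000)]
errors = [abs(est - theta0) for est in estimates]
```

With the noise level fixed, the error of the maximizer shrinks as the observation length grows, which is the consistency property (ii).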

For this we can proceed along one of two (essentially equivalent, as we shall show) points of view concerning the noise process n₁(t;ω). In the first (the Ito, or Wiener process, point of view), we write (1.1) in the integrated version:

$$Y(t;\omega) = \int_0^t s(\sigma;\omega)\,d\sigma + W_1(t;\omega) \qquad (2.1)$$

where W₁(t;ω) is a Wiener process on C(0,T), the Banach space of continuous functions on [0,T]. Then for each fixed θ (assuming or "given" θ) the process Y(t;ω), 0 ≤ t ≤ T, with sample functions in C[0,T], induces a measure thereon which is absolutely continuous with respect to the Wiener measure (induced by W₁(t;ω)). Moreover we can then calculate the corresponding Radon-Nikodym derivative, which is then our "likelihood" functional, denoted p(Y;θ;T), and the maximum likelihood estimate θ̂_T is the value of θ which yields the maximum in I(θ) (or, actually, in a subinterval). The calculation of the derivative p(Y;θ;T) can be accomplished in one of two ways. The first method is to use the Krein factorization technique (see his monograph [5]). Thus let

$$m(t;\theta) = \int_0^t B(t-s)\,u(s)\,ds \qquad (2.2)$$

and let

$$R(\theta;t;s) = E\big[(s(t;\omega) - m(t))(s(s;\omega) - m(s))^*\big] \qquad (2.3)$$

E denoting expected value. Consider the operator

$$Rf = g; \qquad g(t) = \int_0^T R(t;s)\,f(s)\,ds \qquad (2.4)$$

mapping L₂(0,T) into itself. Then R is non-negative definite and trace-class. Let I denote the identity operator. Then, following the procedure invented by Krein [5], we can "factorize" (I+R)⁻¹ as:

$$(I+R)^{-1} = (I-\mathcal{L}^*)(I-\mathcal{L}) \qquad (2.5)$$

where ℒ is a uniquely determined Volterra operator such that, furthermore, (ℒ + ℒ*) is trace-class (nuclear). The probabilistic interpretation is that the process:

$$Z(t;\omega) = \tilde{Y}(t;\omega) - \int_0^t \hat{y}(s;\omega)\,ds \qquad (2.6)$$

where


$$\tilde{Y}(t;\omega) = Y(t;\omega) - \int_0^t m(s;\theta)\,ds \qquad (2.7)$$

$$\hat{y}(t;\omega) = \int_0^t L(t;s)\,d\tilde{Y}(s;\omega) \qquad (2.8)$$

L(t;s) being the kernel corresponding to ℒ (and of course depending on θ), is a Wiener process. Moreover, from this fundamental result, we can readily deduce that (cf. [8]):

$$p(Y;\theta;T) = \exp\Big(-\tfrac{1}{2}\Big[\int_0^T \|\hat{y}(t;\omega) + m(t;\theta)\|^2\,dt - 2\int_0^T \big[m(t;\theta) + \hat{y}(t;\omega),\; dY(t;\omega)\big]\Big]\Big) \qquad (2.9)$$

where the integral:

$$\int_0^T \big[\hat{y}(t;\omega),\; dY(t;\omega)\big] \qquad (2.10)$$

is to be calculated as an Ito integral. This is fine, except that there is a fundamental practical difficulty in calculating (2.10), in that the observation is never actually such that W₁(t;ω) is a Wiener process. More accurately, one should model n₁(t;ω) as a band-limited process, of band large compared with that of s(t;ω). On the other hand, if we do this, then we cannot calculate the likelihood functional. Hence we may proceed in the following approximate way: we assume in theory that W₁(t;ω) is band-limited, but of large bandwidth, so that, following [6], and exploiting the fact that (ℒ + ℒ*) is trace-class, we have the approximations:

$$\hat{y}(t;\omega) \approx \int_0^t L(t;s)\big(y(s;\omega) - m(s;\theta)\big)\,ds$$

$$\int_0^T \big[m(t;\theta),\; dY(t;\omega)\big] \approx \int_0^T \big[m(t;\theta),\; y(t;\omega)\big]\,dt$$

and the crucial approximation (see [6]):

$$\int_0^T \big[\hat{y}(t;\omega),\; dY(t;\omega)\big] \approx \int_0^T \big[\hat{y}(t;\omega),\; y(t;\omega)\big]\,dt - \tfrac{1}{2}\,\mathrm{Tr}\,(\mathcal{L}+\mathcal{L}^*) \qquad (2.11)$$

Using these, we can finally write:

$$p(Y;\theta;T) = \exp\Big(-\tfrac{1}{2}\Big[\int_0^T \|\hat{y}(t;\omega) + m(t;\theta)\|^2\,dt - 2\int_0^T \big[m(t;\theta) + \hat{y}(t;\omega),\; y(t;\omega)\big]\,dt + \mathrm{Tr}\,(\mathcal{L}+\mathcal{L}^*)\Big]\Big) \qquad (2.12)$$
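A discrete-time analogue may clarify the structure of the factorization (2.5): R becomes a covariance matrix, the Volterra operator a lower-triangular (causal) matrix, and the Krein factorization a triangular factorization of (I+R)⁻¹. The sketch below uses an arbitrary covariance; note that, unlike the continuous-time Volterra operator, the discrete factor has a non-unit diagonal.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 50
A = rng.standard_normal((N, N))
R = A @ A.T / N                        # an arbitrary non-negative definite covariance
W = np.linalg.inv(np.eye(N) + R)       # (I + R)^{-1}

# Factor W = (I - L)^T (I - L) with (I - L) lower triangular, the discrete
# counterpart of (2.5).  A "reverse" Cholesky does it: flip, factor, flip back.
J = np.eye(N)[::-1]                    # exchange matrix (reverses row/column order)
C = np.linalg.cholesky(J @ W @ J)      # standard Cholesky: J W J = C C^T, C lower
M = J @ C.T @ J                        # lower triangular, and W = M^T M
L = np.eye(N) - M                      # plays the role of the Volterra operator

assert np.allclose(W, M.T @ M)         # the factorization holds
assert np.allclose(M, np.tril(M))      # the factor is causal (lower triangular)
```

Applying M = I - L to the observation vector then whitens it, which is the discrete shadow of the statement that Z in (2.6) is a Wiener process.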

We shall next (briefly) show that we obtain the same expression for the likelihood functional using a slightly different point of view, which may be somewhat closer to reality. Thus we take the view that n₁(t;ω) is of bandwidth sufficiently large that we can assume it to be "white noise": n₁(t;ω) for each ω is an element of L₂(0,T), with Gauss measure defined on the (field of) cylinder sets. Such a measure cannot be countably additive on the Borel sets. Denoting the measure by P_G, we have that ("weak distribution"):


$$\int \exp\big(i[n_1(\cdot;\omega), h]\big)\,dP_G = \exp\big(-\tfrac{1}{2}\|h\|^2\big)$$

for every h in L₂(0,T). Our basic sample point ω can now be taken to be in the associated product Hilbert space H, to accommodate n₁(t;ω) and n₂(t;ω), with Gauss measure thereon. Then we can rewrite (1.1), (1.2) as:

$$y(t;\omega) = m(t;\theta) + s(t;\omega) + G\,n(t;\omega) \qquad (2.13)$$

$$s(t;\omega) = \int_0^t F(t-s)\,D\,n(s;\omega)\,ds \qquad (2.14)$$

where

$$n(t;\omega) = \begin{bmatrix} n_1(t;\omega) \\ n_2(t;\omega) \end{bmatrix}, \qquad GG^* = I; \quad DG^* = 0; \quad DD^* = I \ \text{(identity matrices in appropriate dimensions)}$$

Then with $\hat y$ defined as before, we can readily see that

(2.15)

again defines white noise:

for $h$ in $H$. Allowing for finitely additive measures, we can again show that the measure induced by $y(t;\omega)$ is absolutely continuous with respect to the Gauss measure induced by $n_1(t;\omega)$, and that the derivative can be expressed:

$$\cdots\; + \;\mathrm{Tr.}\,(\mathcal{L} + \mathcal{L}^*) \qquad (2.16)$$

which upon simplification is readily seen to yield the same expression as (2.12). The main feature of (2.16) is that it is the formal direct analogue of what is obtained in the case when the observation consists of a finite number of points (see [7]).

Next we shall show how to obtain $\hat y$ by exploiting the state space representation, again in the generality of non-rational transforms of $B(\cdot)$ and $F(\cdot)$.

STATE SPACE REPRESENTATION

When the Laplace transforms of $B(\cdot)$ and $F(\cdot)$ are rational, we can obtain the (finite-dimensional) state representation (as in (1.4), (1.5)) for (1.1), (1.2) by a well-known procedure. Moreover, the determination of $\hat y$ using this representation turns out to be nothing more than recursive filtering, with the corresponding likelihood functional expressions as given in [8]. To be specific, we shall now assume that:

u(t) is p-by-1

B(.) is n-by-p

F(.) is n-by-m


Then the dimension of $n(t;\omega)$ in (2.13), (2.14) is (n+m)-by-1, $D$ is m-by-(n+m), and $G$ is n-by-(n+m). We shall work with the form (2.13), (2.14). Let $\mathcal{H}$ denote the Hilbert space $L_2(0,\infty)^n$ of n-by-one functions. In addition to (1.3), we also assume that:

$$\|\dot B(t)\| + \|\dot F(t)\| = O(e^{-kt}),\qquad k > 0 \qquad (3.1)$$

the dot representing derivatives. This is probably stronger than we need. Then we have:

Theorem

Under conditions (1.3) and (3.1), there exist linear transformations $B$, $F$, mapping $E^p$ and $E^m$ into $\mathcal{H}$, such that

$$x(t;\omega) = \int_0^t T(t-s)\,\big(B\,u(s) + F\,D\,n(s;\omega)\big)\,ds \qquad (3.2)$$

being the generalized solution of:

$$\dot x(t;\omega) = A\,x(t;\omega) + B\,u(t) + F\,D\,n(t;\omega) \qquad (3.3)$$

where $A$ is the operator with:

domain of $A$ = $\{\,f \in \mathcal{H}:\ f$ absolutely continuous and derivative $f' \in \mathcal{H}\,\}$, $\quad A f = f'$

and $T(t)$ is the semigroup generated by $A$. Moreover (1.2), or equivalently (2.14), can be expressed:

$$y(t;\omega) = C\,x(t;\omega) + G\,n(t;\omega) \qquad (3.4)$$

where $C$ is a linear bounded transformation mapping $\mathcal{H}$ into $E^n$. Here $B$ and $F$ can be taken as:

$$B\,u = (-L)\sum_{n=0}^{\infty} \dot B(s+nL)\,u\,,\qquad u \in E^p$$

$$F\,w = (-L)\sum_{n=0}^{\infty} \dot F(s+nL)\,w\,,\qquad w \in E^m$$

where $L$ is a fixed positive number, and then $C$ is defined by:

$$C\,h = \frac{1}{L}\int_0^L h(s)\,ds\,,\qquad h \in \mathcal{H}$$

Proof

Although the genesis of this representation was by a long and tedious route, it can be verified quite quickly. First of all, the conditions (1.3) and (3.1) yield the convergence in the norm of $\mathcal{H}$ of both series:

$$\sum_{n=0}^{\infty} \dot B(s+nL)\,u \qquad\text{and}\qquad \sum_{n=0}^{\infty} \dot F(s+nL)\,w$$

Next, it is immediately verified that

$$C\,T(t)\,B\,u = B(t)\,u\,,\qquad t \ge 0$$

$$C\,T(t)\,F\,w = F(t)\,w\,,\qquad t \ge 0$$


The interested reader will find it useful to rewrite (3.3) as a nonhomogeneous first-order partial differential equation. Note that the state space we have obtained need have no relation to the original state space, if any. The state space we have obtained is reduced, and its controllable part coincides with the controllable part of the original state space; see [9]. The controllable part is finite-dimensional if and only if the Laplace transforms of $B(\cdot)$ and $F(\cdot)$ are rational.
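The suggestion above can be checked numerically in a simple way. The sketch below (added here for illustration; the Gaussian profile and grid sizes are arbitrary assumptions, not part of the original proof) uses only the fact from the theorem that $A f = f'$ generates the left-shift semigroup $(T(t)f)(s) = f(s+t)$, so that the homogeneous part of (3.3) is the transport equation $\partial x/\partial t = \partial x/\partial s$; an upwind finite-difference step at unit CFL number reproduces the exact shift.

```python
import numpy as np

# The semigroup T(t) generated by A f = f' on L2(0, inf) is the left shift,
# (T(t) f)(s) = f(s + t).  The homogeneous part of (3.3) is therefore the
# transport PDE  dx/dt = dx/ds, integrated here with a first-order upwind
# scheme and compared against the exact shifted profile.

ds = 0.01
s = np.arange(0.0, 10.0, ds)
f0 = np.exp(-(s - 3.0) ** 2)        # initial profile x(0) = f0

dt = ds                              # CFL number 1: the upwind step is an exact shift
t_final = 2.0
x = f0.copy()
for _ in range(int(round(t_final / dt))):
    # upwind difference for dx/dt = dx/ds (information flows leftward)
    x[:-1] = x[:-1] + dt / ds * (x[1:] - x[:-1])
    x[-1] = 0.0                      # the profile has decayed at the far end

exact = np.exp(-((s + t_final) - 3.0) ** 2)   # (T(t) f0)(s) = f0(s + t)
err = np.max(np.abs(x - exact))
print(err)
```

At unit CFL number the upwind update is literally `x[:-1] = x[1:]`, so the discrete solution coincides with the shifted profile up to the negligible truncated tail.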

Using this representation, we shall next provide a (constructive, "real-time") characterization of $\hat y$. In fact, formally, the final results can be written exactly as in the case where the state space is finite-dimensional. The proofs, however, will be quite different. First let us fix $\theta$, and let

$$\tilde y(t;\omega) = y(t;\omega) - m(t;\theta);\qquad \tilde x(t;\omega) = x(t;\omega) - \int_0^t T(t-s)\,B\,u(s)\,ds$$

Note first of all that

$$\tilde x(t;\omega) = \int_0^t T(t-s)\,F\,D\,n(s;\omega)\,ds$$

and hence

$$E\big(\|\tilde x(t;\omega)\|^2\big) < \infty\,,\qquad 0 \le t \le T \qquad (3.5)$$

Let $\eta(t;\omega)$ denote the mapping:

$$\eta(t;\omega) = \big\{\tilde y(s;\omega),\ 0 \le s \le t \le T\big\}$$

mapping $\Omega$ into $L_2(0,t)^n$. Then we can calculate the conditional expectation

$$E\big[\tilde x(t;\omega) \mid \eta(t;\omega)\big]$$

or, equivalently, the minimizing (linear bounded) operator for

$$E\big(\|\tilde x(t;\omega) - L\,\eta(t;\omega)\|^2\big)$$

over the class of linear bounded transformations $L$ mapping $L_2(0,t)^n$ into $\mathcal{H}$. From [10] it is known that the minimizing operator is given as follows. Let

$$R_x(t;s) = E\big(\tilde x(t;\omega)\,\tilde x(s;\omega)^*\big)$$

Then we can see that:

$$L_0(t) = R_{12}(t)\,\big(I + R_{22}(t)\big)^{-1} \qquad (3.6)$$

where

$$R_{12}(t)f = h;\qquad h = \int_0^t R_x(t;s)\,C^*\,f(s)\,ds$$

$$R_{22}(t)f = g;\qquad g(s) = \int_0^t C\,R_x(s;\sigma)\,C^*\,f(\sigma)\,d\sigma\,,\quad 0 \le s \le t.$$

Note furthermore that

$$R_x(t;s) = T(t-s)\,R_x(s;s) \qquad \text{for } s \le t \qquad (3.7)$$


We can draw all the results we need from these. First let us define:

$$z_0(t;\omega) = \tilde y(t;\omega) - C\,\hat{\tilde x}(t;\omega) \qquad (3.8)$$

and let

$$G(t) = R_{22}(t)\,\big(I + R_{22}(t)\big)^{-1} \qquad (3.9)$$

so that in particular we have:

$$R_{22}(t) - R_{22}(t)\,G(t) = G(t) = R_{22}(t) - G(t)\,R_{22}(t) \qquad (3.10)$$

Using (3.9) in (3.8) we have:

$$z_0(t;\omega) = \tilde y(t;\omega) - \int_0^t R_{22}(t;s)\,\tilde y(s;\omega)\,ds + \int_0^t R_{22}(t;s)\,ds \int_0^s G(t;s;\sigma)\,\tilde y(\sigma;\omega)\,d\sigma$$

where we denote the kernel corresponding to $G(t)$ by $G(t;s;\sigma)$, and exploiting (3.10) we have:

$$z_0(t;\omega) = \tilde y(t;\omega) - \int_0^t G(t;t;s)\,\tilde y(s;\omega)\,ds$$

and comparing with the Krein factorization (Appendix II in [8], for example), we obtain that

$$\int_0^t G(t;t;s)\,\tilde y(s;\omega)\,ds = C\,\hat{\tilde x}(t;\omega) = \int_0^t L(t;s)\,\tilde y(s;\omega)\,ds$$

with $L(t;s)$ given by (2.8), thus characterizing $\hat y$, and hence also:

$$z_0(t;\omega) = z(t;\omega)$$

Next let us express $\hat{\tilde x}(t;\omega)$ in terms of $z(s;\omega)$, $0 \le s \le t$. For this, let us note that we may regard $\mathcal{G}_t$ as mapping $L_2(0,t)$ into itself (being a Volterra operator) and, interpreting the adjoint $\mathcal{G}_t^*$ correspondingly, we have:

$$(I - \mathcal{G}_t)^{-1}\,(I - \mathcal{G}_t^*)^{-1} = I + R_{22}(t)$$

and using this we can write:

$$\hat{\tilde x}(t;\omega) = \int_0^t R_x(t;s)\,C^*\Big(z(s;\omega) - \int_s^t G(\sigma;s;\sigma)\,z(\sigma;\omega)\,d\sigma\Big)\,ds \qquad (3.11)$$

where we have used the fact that $G(\sigma;\sigma;s)^* = G(\sigma;s;\sigma)$. We can rewrite this as:

$$\hat{\tilde x}(t;\omega) = \int_0^t R_x(t;s)\,C^*\,z(s;\omega)\,ds - \int_0^t\Big(\int_0^{\sigma} R_x(t;s)\,C^*\,G(\sigma;s;\sigma)\,ds\Big)\,z(\sigma;\omega)\,d\sigma \qquad (3.12)$$

From:

$$\hat{\tilde x}(t;\omega) = \int_0^t R_x(t;s)\,C^*\Big(\tilde y(s;\omega) - \int_0^t G(t;s;\theta)\,\tilde y(\theta;\omega)\,d\theta\Big)\,ds$$

it follows that:


$$E\big(\hat{\tilde x}(t;\omega)\,\tilde x(t;\omega)^*\big) = \int_0^t R_x(t;s)\,C^*\,C\,R_x(s;t)\,ds - \int_0^t\!\!\int_0^t R_x(t;s)\,C^*\,G(t;s;\sigma)\,C\,R_x(\sigma;t)\,d\sigma\,ds \qquad (3.13)$$

Now using the right side of (3.10) we have:

$$C\,R_x(s;\sigma)\,C^* - \int_0^t G(t;s;\tau)\,C\,R_x(\tau;\sigma)\,C^*\,d\tau = G(t;s;\sigma)$$

and hence:

$$G(t;s;t) = C\,R_x(s;t)\,C^* - \int_0^t G(t;s;\tau)\,C\,R_x(\tau;t)\,C^*\,d\tau$$

Substituting this we have:

$$\int_0^t R_x(t;s)\,C^*\,G(t;s;t)\,ds = \int_0^t R_x(t;s)\,C^*\,C\,R_x(s;t)\,C^*\,ds - \int_0^t\!\!\int_0^t R_x(t;s)\,C^*\,G(t;s;\tau)\,C\,R_x(\tau;t)\,C^*\,d\tau\,ds = E\big(\hat{\tilde x}(t;\omega)\,\tilde x(t;\omega)^*\big)\,C^* \qquad (3.14)$$

using (3.13). But letting

$$P(t) = E\Big(\big[\tilde x(t;\omega) - \hat{\tilde x}(t;\omega)\big]\,\big[\tilde x(t;\omega) - \hat{\tilde x}(t;\omega)\big]^*\Big)$$

we have from (3.12), using (3.14) and (3.7), that

$$\hat{\tilde x}(t;\omega) = \int_0^t T(t-s)\,P(s)\,C^*\,z(s;\omega)\,ds \qquad (3.15)$$

which is the expression in terms of $z(\cdot;\omega)$ we sought. It should be noted that the line of reasoning in this proof is quite different from the usual one (e.g. [8]) in that no martingale theory is invoked. Now (3.15) in turn yields that

$$E\big(\hat{\tilde x}(t;\omega)\,\hat{\tilde x}(t;\omega)^*\big) = \int_0^t T(t-s)\,P(s)\,C^*\,C\,P(s)\,T(t-s)^*\,ds$$

and hence we have that:

$$P(t) = \int_0^t T(t-s)\,F\,F^*\,T(t-s)^*\,ds - \int_0^t T(t-s)\,P(s)\,C^*\,C\,P(s)\,T(t-s)^*\,ds$$

and hence for $x$, $y$ in the domain of $A^*$ we have:

$$\big[\dot P(t)x,\,y\big] = \big[P(t)A^*x,\,y\big] + \big[P(t)x,\,A^*y\big] + \big[FF^*x,\,y\big] - \big[P(t)\,C^*C\,P(t)x,\,y\big] \qquad (3.16)$$

with $P(0) = 0$. Also from:

$$C\,\hat{\tilde x}(t;\omega) = \int_0^t C\,T(t-s)\,P(s)\,C^*\,z(s;\omega)\,ds$$

we have that, denoting by $K$ the operator:

$$Kf = g;\qquad g(t) = \int_0^t C\,T(t-s)\,P(s)\,C^*\,f(s)\,ds\,,\quad 0 \le t \le T$$

mapping $L_2(0,T)^n$ into itself,

$$C\,\hat{\tilde x}(\cdot;\omega) = (I - K)^{-1}\,K\,\tilde y(\cdot;\omega) \qquad (3.17)$$

where $K$ is a Volterra operator of Hilbert–Schmidt type. The operator $P(s)$ is clearly non-negative definite and nuclear. From (3.17) we have that

$$\mathcal{L} = (I - K)^{-1}\,K$$

and hence

$$\mathrm{Tr.}\,(\mathcal{L} + \mathcal{L}^*) = \mathrm{Tr.}\,(K + K^*) = 2\int_0^T \mathrm{Tr.}\,\big(C\,P(s)\,C^*\big)\,ds$$

Finally, we can also see that $\hat{\tilde x}(t;\omega)$ is the generalized solution of

$$\dot{\hat{\tilde x}}(t;\omega) - A\,\hat{\tilde x}(t;\omega) = P(t)\,C^*\big(\tilde y(t;\omega) - C\,\hat{\tilde x}(t;\omega)\big) \qquad (3.18)$$

this being the recursive form sought. Finally we note that we can write:

$$z(t;\omega) = y(t;\omega) - C\,\hat x(t;\omega) \qquad (3.19)$$

where

$$\hat x(t;\omega) = \hat{\tilde x}(t;\omega) + \int_0^t T(t-s)\,B\,u(s)\,ds \qquad (3.20)$$

and $\hat x(t;\omega)$ is readily verified to be the generalized solution of

$$\dot{\hat x}(t;\omega) - A\,\hat x(t;\omega) = P(t)\,C^*\big(y(t;\omega) - C\,\hat x(t;\omega)\big) + B\,u(t) \qquad (3.21)$$

We have thus obtained the quantities in (2.16) as solutions of differential equations, solvable in real time. Thus (2.16) can be written, using (3.20), (3.21), and (3.17):

$$p(Y;\theta;T) = \exp\Big(-\tfrac{1}{2}\Big\{\int_0^T \big[C\hat x(t;\omega),\,C\hat x(t;\omega)\big]\,dt - 2\int_0^T \big[y(t;\omega),\,C\hat x(t;\omega)\big]\,dt + 2\,\mathrm{Tr.}\int_0^T C\,P(t)\,C^*\,dt\Big\}\Big) \qquad (3.22)$$
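As a finite-dimensional, discretized illustration of the recursions above (a sketch added here; the scalar system and all numerical values are assumptions, not the paper's infinite-dimensional model), one can integrate the Riccati equation (3.16), run the filter (3.21) with $u = 0$, and form the innovation $z = y - C\hat x$:

```python
import numpy as np

# Scalar analogue:  dx = a x dt + f dW2,   y = c x + white observation noise.
# Euler steps for the Riccati equation (3.16), the filter (3.21) with u = 0,
# and the innovation z = y - c * xhat.

rng = np.random.default_rng(0)
a, c, f = -1.0, 1.0, 0.5            # system, observation, and noise gains (assumed)
dt, T = 1.0e-3, 5.0

x, xhat, P = 0.0, 0.0, 0.0          # true state, estimate, error variance
for _ in range(int(T / dt)):
    x += a * x * dt + f * np.sqrt(dt) * rng.standard_normal()
    y = c * x + rng.standard_normal() / np.sqrt(dt)   # discretized white noise

    P += (2.0 * a * P + f * f - (c * P) ** 2) * dt    # Riccati equation (3.16)
    z = y - c * xhat                                  # innovation (3.19)
    xhat += (a * xhat + P * c * z) * dt               # filter (3.21), u = 0

# P(t) approaches the steady-state root of 2 a P + f^2 - (c P)^2 = 0
print(P)
```

The printed $P$ settles near the positive root $a + \sqrt{a^2 + c^2 f^2}$ of the stationary Riccati equation, mirroring the role of $P(t)$ in (3.22).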

COMPUTATION OF THE ESTIMATE

Let us next examine the computation of the estimate of the unknown parameter vector. First it is necessary to study some properties of the functional defined by (2.9) or, in the form in which we shall actually use it, (3.22). We shall use $p(y;\theta;T)$ to indicate the functional generally, and $p(Y;\theta;T)$ to indicate the value when we use the actual observation, in which case it will be called the "likelihood functional." Let $\Omega$ denote the sample space $L_2(0,T)^n$, and $P_G$ the Gauss measure thereon. Then we have


$$\int_\Omega p(y;\theta;T)\;dP_G = 1$$

so that, using

$$q(y;\theta;T) = \log p(y;\theta;T)$$

(so that $q(Y;\theta;T)$ is the log-likelihood functional), we have

$$\int_\Omega \big(\nabla_\theta\, q(y;\theta;T)\big)\; p(y;\theta;T)\;dP_G = 0$$

(since the dependence on $\theta$ is sufficiently smooth). In particular then:

$$E\big(\nabla_\theta\, q(Y;\theta_o;T)\big) = 0$$

Similarly (analogous to the familiar classical arguments):

$$\int_\Omega \frac{\partial^2 q(y;\theta;T)}{\partial\theta_i\,\partial\theta_j}\; p(y;\theta;T)\;dP_G = -\int_\Omega \frac{\partial q(y;\theta;T)}{\partial\theta_i}\;\frac{\partial q(y;\theta;T)}{\partial\theta_j}\; p(y;\theta;T)\;dP_G$$

where $\{\theta_i\}$ denote the components of $\theta$. In particular then:

$$E\Big(\frac{\partial^2 q(Y;\theta_o;T)}{\partial\theta_i\,\partial\theta_j}\Big) = -\,E\Big(\frac{\partial q(Y;\theta_o;T)}{\partial\theta_i}\;\frac{\partial q(Y;\theta_o;T)}{\partial\theta_j}\Big) \qquad (4.1)$$

These relations are important to us, because our technique consists in maximizing the log-likelihood functional in the known interval in which $\theta_o$ lies; or, more accurately, we seek a root of the gradient of the log-likelihood functional:

$$\nabla_\theta\, q(Y;\theta;T) = 0 \qquad (4.2)$$

using in particular the Newton–Raphson technique, but avoiding the use of second derivatives, thanks basically to (4.1). First we note that, from (3.22), we have:

$$\frac{\partial}{\partial\theta_i}\,q(Y;\theta;T) = \int_0^T \Big[\,y(t;\omega) - C\hat x(t;\omega),\ \frac{\partial}{\partial\theta_i}\,C\hat x(t;\omega)\Big]\,dt \;-\; \mathrm{Tr.}\int_0^T \frac{\partial}{\partial\theta_i}\big(C\,P(t)\,C^*\big)\,dt \qquad (4.3)$$

Let $M(\theta;T)$ denote the matrix with components $m_{ij}$ defined by:

$$m_{ij}(\theta;T) = \int_0^T \Big[\frac{\partial\,C\hat x(t;\theta)}{\partial\theta_i},\ \frac{\partial\,C\hat x(t;\theta)}{\partial\theta_j}\Big]\,dt \qquad (4.4)$$

Then our basic "identifiability condition" is that

$$M(\theta_o) = \lim_{T\to\infty}\ \frac{1}{T}\,E\big(M(\theta_o;T)\big) \qquad (4.5)$$

be positive definite; to check this in practice, since $\theta_o$ is unknown, we must verify that it is satisfied for every $\theta$ in the interval in which $\theta_o$ lies. (The


positive definiteness of $M(\theta_o)$ assures it in a sufficiently small neighborhood of $\theta_o$.) Our basic algorithm then is:

$$\theta_{n+1} = \theta_n + \big(M(\theta_n;T)/T\big)^{-1}\,\big(\nabla_\theta\, q(Y;\theta_n;T)/T\big) \qquad (4.6)$$

The asserted positive definiteness also assures the positive definiteness of $M(\theta_n;T)/T$ for all large enough $T$ in a sufficiently small neighborhood of $\theta_o$. For these aspects see [8]. Let us also note here that

(4.7)

which is most easily verified by using the Ito-integral version of the likelihood functional, (2.9). Assuming that

$$\lim_{T\to\infty}\ \frac{1}{T}\int_0^T \big[u(t),\,u(t+s)\big]\,dt$$

exists and is continuous in $s$, we can proceed to evaluate $M(\theta)$ as in [8]. Reference may also be made to [8] for the asymptotic (ergodic) properties of unbiasedness and consistency, for the case where the system is finite-dimensional (the Laplace transforms of $F(\cdot)$ and $B(\cdot)$ are rational). The general case considered here can be treated in similar fashion, although space does not permit its inclusion here.
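The scoring step (4.6), with $M$ built from first-derivative sensitivities as in (4.4) so that no second derivatives are needed, can be illustrated on a toy problem (an added sketch; the static exponential model, the noise level, and all numbers are assumptions, standing in for the paper's filtered quantities):

```python
import numpy as np

# Toy regression  y_i = exp(-theta * t_i) + noise.  The score is the gradient
# of the log likelihood, M is the sum of squared sensitivities (as in (4.4)),
# and the update is the scoring iteration (4.6).

rng = np.random.default_rng(1)
theta_true = 0.7
t = np.linspace(0.0, 5.0, 200)
y = np.exp(-theta_true * t) + 0.01 * rng.standard_normal(t.size)

theta = 0.2                          # initial guess inside the search interval
for _ in range(50):
    m = np.exp(-theta * t)           # model output at the current theta
    dm = -t * m                      # sensitivity  d m / d theta
    score = np.sum((y - m) * dm)     # gradient of the log likelihood
    M = np.sum(dm * dm)              # Fisher-information analogue of (4.4)
    theta = theta + score / M        # scoring step (4.6)

print(theta)
```

No second derivatives of the model appear anywhere in the loop, which is exactly the computational advantage that (4.1) provides.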

EXPERIENCE ON ACTUAL DATA

We turn now to indicate briefly the results obtained using our computational technique on actual (as opposed to simulated) data. The dynamic system considered arises from the longitudinal-mode perturbation equations for an aircraft in wind gust (turbulence) (Rediess–Iliff–Taylor, see [3]). This is a system where only a "rational" approximation of the spectrum of turbulence is used, so that the total system is finite-dimensional. Leaving the many essential details to the comprehensive work of Iliff [11], the state space formulation of the problem is as follows (see also [3]):

$$\dot x(t) = A\,x(t) + B\,u(t) + F\,n_2(t)$$

$$v(t) = C\,x(t) + D\,u(t) + G\,n_1(t)$$

where $n_1(\cdot)$ and $n_2(\cdot)$ are independent white Gaussian noises, and the matrices in the equations are as follows.

The matrices $A$, $B$, $F$, $C$, and $D$ have entries built from the coefficients $Z_1$, $Z_4$, $M_1$, $M_3$, $M_4$, $k_1$, the velocity $V$, and the acceleration due to gravity $g$ (see [3] for the detailed layout), and

$$G = \mathrm{diag.}\,[\,.0005,\ .0001,\ .01,\ .00001\,]$$

The lettered entries are unknown, except for $V$, which is known (=1670). Note that the turbulence power is an unknown parameter.

Figure 1 shows the complete time history of the observation $v(t)$ (four components), subdivided into various regions for later identification, as well as the input time history. Estimates were computed over the various subregions, each by three methods:

Method I: Neglecting the measurement noise on the angle-of-attack measurement ($v_4$) and following the corresponding maximum likelihood technique developed in [3]. This is reasonable for this particular example at high turbulence levels.

Method II: This is the method developed herein, see also [7].

Method III: This was a "check" method, in which the turbulence was ignored completely in the model.

The results are summarized in Figure 2. Sample means and variances of the estimates obtained over the different data regions are shown, along with the wind-tunnel values as well as estimates obtained on other turbulence-free (smooth air) flights. It can be seen that Method II yields the most consistent estimates. It also turns out that Method II requires the least computational time, the estimates converging in fewer iterations. It can also be seen that ignoring the turbulence leads to the worst results. For more discussion see [11]. The remaining figures indicate the nature of the "fit" obtained using the estimated coefficients to the observed data. Figure 3 shows the close agreement provided by Method II. Figures 4 and 5 indicate how much worse the agreement is on the same stretch of data if the turbulence is not accounted for.


Figure 1. Total Jetstar Turbulence Time History Showing Intervals of Each Maneuver


Figure 2. Means and Standard Deviations for Five Methods of Estimating Coefficients


Figure 3. Comparison of Flight Data from Maneuver ABCD and the Estimated Data -- Method II


Figure 4. Comparison of Flight Data from Maneuver E and Estimated Data -- Method II


Figure 5. Comparison of Flight Data from Maneuver E and the Estimated Data Obtained by Method III


REFERENCES

1. Forrester, J.W.: "World Dynamics," Wright-Allen Press, 1971.

2. Proceedings of the Second IFAC Symposium on Identification and Process Parameter Estimation, Prague, Czechoslovakia, June 1970.

3. Balakrishnan, A.V.: Identification and Adaptive Control: An Application to Flight Control Systems, Journal of Optimization Theory and Applications, March 1972.

4. Balakrishnan, A.V.: Identification and Adaptive Control of Non-dynamic Systems, Proceedings of the Third IFAC Symposium on Identification and Process Parameter Estimation, June 1973.

5. Gohberg, I., and Krein, M.G.: Volterra Operators, AMS Translations, 1970.

6. Balakrishnan, A.V.: On the Approximation of Ito Integrals Using Band-limited Processes (to be published, SIAM Journal on Control, February 1974).

7. Balakrishnan, A.V.: Modelling and Identification Theory: A Flight Control Application, in "Theory and Applications of Variable Structure Systems," Academic Press, 1972.

8. Balakrishnan, A.V.: Stochastic Differential Systems, Vol. 84, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, 1973.

9. Balakrishnan, A.V.: "System Theory and Stochastic Optimization," in "Network and Signal Theory," Peter Peregrinus Ltd., 1972.

10. Balakrishnan, A.V.: Introduction to Optimization Theory in a Hilbert Space, Lecture Notes in Operations Research and Mathematical Systems, Vol. 42, Springer-Verlag, 1971.

11. Iliff, K.W.: Identification and Stochastic Control, An Application to Flight Control in Turbulence, UCLA Ph.D. Dissertation, 1973.


ADAPTIVE COMPARTMENTAL STRUCTURES IN

BIOLOGY AND SOCIETY

R.R. Mohler and W.D. Smith

INTRODUCTION

Many dynamical processes, which are relevant to man, his biochemistry and the society of which he is a part, are modelled conveniently as adaptive control systems with appropriate compartmental structures. It is assumed here that the systems of concern may be described or accurately approximated by precise mathematical relationships. Such equations are derived generally from established physical laws, intuition, or experimental data. If this is not the case, it may be possible to use a linguistic approach such as suggested by Zadeh for the development of a parallel modeling and identification theory.

Even with the assumption that the system may be quantified conventionally, however, it is not possible to say that the necessary theory of modelling and identification has been developed. In more cases than not, linear system models, despite their popularity in the literature, fail because of their rigid structure. Still, as is shown here, certain results from linear system theory may be utilized sometimes to derive effective models of complex adaptive processes. This convenience is due to the linear-in-state structure of the bilinear model which is suggested in this paper. Some bilinear system theory and its application to dynamic modeling has been developed, and much of this work is summarized by Mohler. The proceedings of the first U.S.–Italy Seminar on Variable Structure Systems, which was an outgrowth of common bilinear system research, further establish the base for modeling and identification as they relate to adaptive processes in general [See Mohler and Ruberti].

Biological and societal processes, like those generally found throughout nature, are distributed as well as adaptive. Still, such processes are usually amenable to discretization according to compartments. For example, biological compartments may be formed for anatomical convenience, and societal compartments according to geographical and governmental convenience. Suppose a single substance X is distributed among n compartments between which it diffuses naturally at a rate described by conservation equations of the form

R.R. Mohler is with the Department of Electrical and Computer Engineering, Oregon State University, Corvallis, Oregon 97331, USA. (Presently visiting the University of Rome.)

W.D. Smith is with the University of New Mexico Medical School, Albuquerque, New Mexico 87106, USA.

The research reported here is supported by NSF Grant GK 33249.


$$\frac{dx_i}{dt} = \sum_{j=1}^{n}{}'\ \phi_{ij} - \sum_{k=1}^{n}{}'\ \phi_{ki} + \phi_{ia} - \phi_{ai} + p_i - d_i\,,\qquad i = 1,\ldots,n, \qquad (1)$$

where $x_i$ is the concentration of substance X in the ith compartment, $\phi_{ij}$ is the flux of X from the jth to the ith compartment, $\phi_{ia}$ is the flux of X from the environment of uniform concentration $x_a$ to the ith compartment, $p_i$ is the rate of production and $d_i$ is the rate of destruction of X in the ith compartment. The primed summation denotes deletion of the ith term. Compartmental volumes are assumed constant.

The flux to the ith compartment from the jth compartment is described by

$$\phi_{ij} = p_{ij}\,x_j\,, \qquad (2)$$

where $i, j = 1,\ldots,n, a$, and $p_{ij}$ is an exchange parameter which may be constant or may be a multiplicative control. Additive control may be synthesized through the net production of substance $(p_i - d_i)$, which is zero for a conservative system. (For brevity, any nonmanipulative terms have been neglected.) It is readily seen from Eqs. (1) and (2) that a collection of terms leads to a bilinear system of the following form:

$$\frac{dx}{dt} = A\,x + \sum_{k=1}^{m} B_k\,u_k\,x + C\,u\,, \qquad (3)$$

where $x \in R^n$ is the state vector; $u \in R^m$ is the control vector; $A$, $B_k$ and $C$ are $n \times n$, $n \times n$ and $n \times m$ matrices respectively. Again, $x$ is composed of compartmental quantities of substance X; additive control may arise from net production, and multiplicative control from those exchange parameters which are manipulated. While additive and multiplicative controls are independent variables for the type of process considered here, in general they might not be. Therefore, it is assumed here that zeros appear in the appropriate positions of $B_k$ and $C$ to maintain independent additive and multiplicative control variables where necessary. Those exchange parameters which cannot be manipulated result in constant coefficients of the $A$ matrix. Here, it is assumed that compartmental capacities do not change significantly. While the process may include transport of more than one substance, no generality is lost in utilizing the above form of equations. In the human body such processes are controlled by what is called homeostasis.
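The bilinear form (3) can be sketched numerically for two compartments (an illustration added here; the matrices, control values, and step size are assumptions, not drawn from the paper). The exchange parameter between the compartments acts as a multiplicative control and the net production into compartment 1 as an additive control; because the columns of $A$ and $B_1$ sum to zero, total mass changes only through the additive input:

```python
import numpy as np

A = np.array([[-0.5, 0.0],
              [0.5, 0.0]])           # fixed (non-manipulated) exchange 1 -> 2
B1 = np.array([[0.0, 1.0],
               [0.0, -1.0]])         # manipulated exchange 2 -> 1
C = np.array([[1.0],
              [0.0]])                # additive control enters compartment 1

def step(x, u, dt):
    """One Euler step of dx/dt = A x + u[0] * B1 x + C u[1:]  (form (3))."""
    dx = A @ x + u[0] * (B1 @ x) + C @ u[1:]
    return x + dt * dx

x = np.array([1.0, 0.0])             # initial compartmental amounts
u = np.array([0.3, 0.1])             # [multiplicative, additive] controls
dt = 0.01
for _ in range(1000):                # integrate to t = 10
    x = step(x, u, dt)

print(x, x.sum())
```

Since the columns of $A$ and $B_1$ sum to zero, the total mass obeys $d(\sum x_i)/dt = u_2$, so after $t = 10$ the total is exactly $1 + 0.1 \cdot 10 = 2$ in this sketch, and both compartmental amounts stay non-negative.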

BIOLOGICAL PROCESSES

A single cell may represent a convenient compartment for modeling a more complicated organism. Nutrients are generated for cell growth and cell division by biochemical processes within the cell itself and by transfer across the cell membrane. Enzymes effect control by manipulation of membrane permeability and by triggering the formation of proteins and nucleic acids and their precursors.

In addition to the natural diffusion from compartments of high concentration to those of low concentration, there exists an active transport from low concentration to high concentration, with the necessary energy supplied by cellular metabolism. Such active transport as derived by Hill and Kedem is described by a nonlinear function (division of summations) of compartmental densities. The process may be approximated, however, by a bilinear equation [See Mohler].

The development considered here has assumed that the process involves mass transport. Again, the concept is more general and in physiology alone includes numerous processes such as body fluid and electrolyte balance [See Alvi], thermal regulation [Milsum], kinetics of material ingested or excreted from the body [Snyder], control of carbon dioxide in the lungs [Grodins], as well as kinetics of metabolites in cell suspensions [Sheppard and Householder].

Tracer experimentation

For biological processes it is usually impossible to measure the necessary quantities directly. Frequently it is possible, however, to estimate compartmental system parameters by observing the behavior of a tracer, such as a radioisotope or a dye, which has been injected or ingested into the system [Sheppard and Householder]. If it is assumed that the tracer in every compartment is distributed uniformly through substance X, that labeled and unlabeled substances behave identically, and that the tracer does not appreciably affect the compartmental system behavior, then the tracer system with specific compartmental activity $a_i(t)$ is described by

$$\frac{d(x_i a_i)}{dt} = \sum_{k=1}^{n} \phi_{ik}(t)\,a_k - \sum_{q=1}^{n} \phi_{qi}(t)\,a_i + \phi_{ia}(t)\,a_{ia}(t) - \phi_{ai}(t)\,a_i - d_i(t)\,a_i + f_i(t)\,,\qquad i = 1,\ldots,n \qquad (4)$$

where $a_i(t)$ is the specific activity of tracer in the ith compartment, $a_{ia}(t)$ is the specific activity of substance in the influx $\phi_{ia}(t)$, and $f_i(t)$ is the influx of tracer which is inserted directly into compartment i. Normally, tracer experiments are conducted with the compartmental process in "equilibrium" (+), and it is further assumed that the process is closed, $\phi_{ai} = \phi_{ia} = 0$, and conservative, $p_i = d_i = 0$, $i = 1,\ldots,n$.

Let tracer be inserted directly into p compartments with its behavior measured in q compartments. Then, with the above assumptions,

$$\frac{da}{dt} = S\,a + X_d^{-1}\,P\,f \qquad (5)$$

and

$$w = Q\,a\,, \qquad (6)$$

where $a \in R^n$ is the specific activity vector; $X_d$ is the diagonal matrix of elements $x_1,\ldots,x_n$; $P$ is an input matrix with unity elements in each row corresponding to a compartment which is accessible for tracer insertion and with zero elements elsewhere; $f$ is the p-vector of inserted total activity fluxes, and therefore $X_d^{-1} P f$ is the vector of inserted specific activity fluxes; $w$ is the output q-vector of observed compartmental specific activities. Also, $S = [s_{ij}]$ with

$$s_{ij} = \phi_{ij}/x_i\,,\qquad j = 1,\ldots,n,\ i \ne j$$

and

$$s_{ii} = -\,\frac{1}{x_i}\sum_{k=1}^{n}{}'\ \phi_{ik}\,.$$

(+) I.e., with $\dot x_i = \dot\phi_{ij} = \dot\phi_{ai} = \dot\phi_{ia} = \dot p_i = \dot d_i = 0$; $i, j = 1,\ldots,n$, $i \ne j$, but with possible net flux between compartments and with the environment.


If the tracer enters the system by natural flux routes from p separate external sources of labeled substance, each with its own tracer specific activity $\bar a_i$, $i = 1,\ldots,p$, then

$$\frac{da}{dt} = S\,a + X_d^{-1}\,\Phi_{ad}\,P\,\bar a \qquad (7)$$

and

$$w = Q\,a\,. \qquad (8)$$

Here $\Phi_{ad}$ is a diagonal matrix of elements $\phi_{1a},\ldots,\phi_{na}$; $\bar a$ is a p-dimensional tracer source activity vector, and $P$ is the $n \times p$ matrix with a single unity in each row i for which $a_{ia} \ne 0$.

In practice, the subject might be placed in a radioactive bath, and its absorption of tracer monitored in a so-called "soak up" experiment. In "wash out" experiments, the subject is saturated to a given specific activity of tracer, and wash out is observed in a tracer-free environment. In either case, the presence of all compartments cannot be detected unless the tracer system, Eqs. (7) and (8), is completely controllable and completely observable.

Obviously, the tracer system is linear, and may be analyzed by classical linear system theory. For example, the conventional rank tests for controllability and observability may be applied to either tracer system to see that sufficient accessibility is available to identify the minimum realization of the system. For the time-variant case, rank tests may still be applied to two convenient time-variant test matrices to show that the model is a minimal realization [See D'Angelo]. Standard procedures may then be used to synthesize the minimal realization from the input-output observations. These, in turn, dictate the necessary tracer experiments.
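The conventional rank tests mentioned above can be sketched for a hypothetical three-compartment tracer system (the exchange rates, insertion site, and observation site below are illustrative assumptions, not from the paper): tracer is inserted into compartment 1 and observed in compartment 3, and the Kalman rank conditions are checked.

```python
import numpy as np

# Hypothetical tracer system  da/dt = S a + (input into compartment 1),
# w = Q a (observation of compartment 3); Kalman rank tests for complete
# controllability and observability.

S = np.array([[-0.6, 0.2, 0.0],
              [0.4, -0.5, 0.3],
              [0.2, 0.3, -0.3]])      # assumed exchange-rate matrix
P = np.array([[1.0], [0.0], [0.0]])   # insertion into compartment 1
Q = np.array([[0.0, 0.0, 1.0]])       # observation of compartment 3

n = S.shape[0]
# controllability matrix [P, S P, S^2 P] and observability matrix [Q; Q S; Q S^2]
ctrb = np.hstack([np.linalg.matrix_power(S, k) @ P for k in range(n)])
obsv = np.vstack([Q @ np.linalg.matrix_power(S, k) for k in range(n)])

controllable = np.linalg.matrix_rank(ctrb) == n
observable = np.linalg.matrix_rank(obsv) == n
print(controllable, observable)
```

For this particular $S$ both tests pass, so a single insertion site and a single observation site suffice to reach and detect all three compartments; a zero exchange rate decoupling a compartment would make one of the ranks drop below n.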

Adaptation and Control

Cells, perhaps the most basic of biological compartments, multiply in a manner similar to that of populations of biological species [Mohler]. This process of cellular fission also may be likened to that of nuclear fission and the generation of radioactive particles [Mohler and Shen]. Cells may multiply in a controlled manner to synthesize an organ or an organism in the evolution of life, which must adapt to its environment. Just as the cellular processes are regulated in the body by homeostasis, so are the combinations of these cells working together to perform integrated specific tasks in the functioning of a living organism or human being.

In the regulation of body temperature, for example, heat (additive control) is produced internally by basal metabolism, and heat is lost by means of respiration, excretion, evaporation from skin and radiation from skin. Transfer of heat between compartments (such as skin, muscle and the core of various internal organs) is regulated by means of vasomotor control of blood vessels, which effectively alters the conductivity of heat transfer as a bilinear control mechanism. The control policy in this case is established by the hypothalamus in the brain based on temperature feedback information.

It is convenient to examine a two-compartment model of thermoregulation since its behavior can be studied in a state plane. Skin and core are the compartments selected, since temperatures in these compartments are most meaningful to system behavior and they are monitored by the central nervous system for regulation of temperature. [See Mohler]

Water Balance

Water, comprising on the average about sixty percent of the body weight, must be carefully regulated as part of the overall body process. About two-thirds of body water is located within the cells, and the remainder is in the so-called extracellular compartment. The extracellular or intravascular compartment includes plasma, the interstitial compartment in which tissue cells are bathed, and the transcellular compartment. In some cases there is a distinct anatomical boundary, such as a wall membrane, but for the interstitial compartment such boundaries are not so definitely defined. The transcellular compartment, which includes digestive secretions, is separated from plasma by a continuous layer of epithelial cells and capillary membrane.

Tracer experiments are used to obtain the distribution of water throughout these compartments. The intracellular compartment cannot be so isolated, and must be accounted for by taking differences.

Drinking, eating, skin absorption and respiratory absorption are all mechanisms by which water enters the body's extracellular compartment. Oxidative water, a metabolic end product, varies with diet and metabolic rate, and becomes an influx to the intracellular compartment. Effluxes from the extracellular compartment to the environment include urine, fecal water, skin evaporation and respiration.

Water balance is regulated by homeostasis through pressure forces and membrane permeabilities. Feedback control again is established through the hypothalamus. An amount of water equal to two-thirds of the blood volume is exchanged between the blood and the extravascular fluid each minute.

The kidney, with its thousands of filtration passages called nephrons, is the key component in body water balance. It processes blood plasma through its nephrons to produce urine. The water loss in each nephron is regulated by the action of the antidiuretic hormone (ADH) on the tubule and the collecting-duct permeability [Pitts]. The glomerular filtration rate (GFR), which is collected at the arterial end of the nephron, and the renal plasma flow (RPF) may be regulated to control water loss. RPF is a function of arteriolar resistance in the kidney, which is controlled by constrictor muscles, again parametric control. While details of the process are quite complicated, water excretion in urine may be approximated by a bilinear system with efflux of the form

$$\phi_u = u_k\,(c_g + b_g\,W_b)\,, \qquad (9)$$

where the control $u_k$ is a function of permeability (determined by ADH), arteriolar resistance and the osmotic pressure driving water from the renal tubules; $W_b$, body water, is a sum of compartmental amounts of water which in turn may be state variables; $c_g$, $b_g$ are constants.
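As a small numerical sketch of the efflux law (9) (the parameter values below are illustrative assumptions, not physiological constants), the bilinear dependence on the control and on body water is direct:

```python
def urine_efflux(u_k, W_b, c_g=0.1, b_g=0.02):
    """Efflux law (9): phi_u = u_k * (c_g + b_g * W_b).

    u_k  -- control (permeability / resistance / osmotic-pressure effect)
    W_b  -- total body water (sum of compartmental amounts)
    c_g, b_g -- assumed constants, for illustration only
    """
    return u_k * (c_g + b_g * W_b)

print(urine_efflux(0.5, 40.0))   # -> 0.45
```

Note that the efflux scales linearly in the control for fixed body water and linearly in body water for fixed control, which is exactly the bilinear structure of (3).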

Though to a much lesser degree, water balance is regulated by parametric con-

trol through the skin and through the lungs. Vasomotor control of blood flow and

hormone regulated permeability control water losses. Vasomotor constriction re-

duces hydrostatic pressure and thus reduces water transfer to the skin, and also by

dropping skin temperature reduces evaporation. In the lungs the net loss is regulated parametrically by ventilation rate and vasomotor control of respiratory blood

vessels. Normally, the skin and the lungs play a much lesser role in controlling

water balance of mammals than do the kidneys. But under adverse conditions they can

be significant.

Water loss through the skin is extremely complicated and still not very well understood. It is transported in three stages: exchange between the blood and the dermal connective tissue, exchange between the dermis and the epidermis, and finally exchange between the epidermis and the surroundings. The exchange takes place by filtration and diffusion. The force which induces capillary filtration is described by the Starling hypothesis. This hypothesis states that the net force driving water out of a capillary is equal to the hydrostatic pressure difference minus the osmotic pressure difference, with both differences taken between the blood and tissue [Ruch and Patton, p. 625]. This results in a net outward force in the capillaries at the arteriolar end and a net inward force at the venous end. Water diffusion across capillary walls is regulated by an effective permeability, and it further diffuses through

layers of skin fibers and through duct openings. Then, water loss between skin and

surrounding air is usually assumed to be proportional to the difference between sa-

turated vapor pressure at skin temperature and the vapor pressure of the air [Chew, p. 88]. Consequently, it can be shown that net water loss through the skin is approximately described during equilibrium operation by

φ_sa = p_s (w_d − w_e) = k_s (w_e p_se − r p_sa) ,

where p_s is effective permeability, w_d is dermis water concentration, w_e is epidermis water concentration, p_se is saturated vapor pressure of the skin, p_sa is saturated vapor pressure of the air, and r is relative humidity. Again, the role of parametric

adaptive control is apparent. This skin water loss may then be further related to more basic mammalian control functions as follows:

φ_sa = k_s [p_s(t) γ W_b(t) + k_s r p_sa] / [p_s(t) + k_s2 (1 + k_s3 / (W_b(t) k_s1)) u_s(t)] − k_s r p_sa ,   (10)

where W_b(t) is the amount of body water, u_s(t) is the portion of blood flowing to the skin, and k_si (i = 1, 2, 3) are constants. Here a bilinear model could be used as an approximation, again more accurate than a linear model. A similarly complex relationship may be derived for parametric control of water loss through the lungs. In the lungs, it can be shown that ventilation rate is a parametric control variable. Some animals can considerably alter their body water by control of breathing.
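As a rough numerical sketch of the skin-exchange relation φ_sa = k_s(w_e p_se − r p_sa) given above; the constants and operating values are invented, not taken from the mouse data:

```python
# Sketch of the skin-evaporation relation phi_sa = k_s*(w_e*p_se - r*p_sa),
# the equation preceding (10). Numerical values are purely illustrative.

def skin_water_loss(k_s, w_e, p_se, r, p_sa):
    """Net skin efflux: proportional to the difference between the vapor
    pressure at the skin (epidermis water w_e times saturation pressure p_se)
    and the ambient vapor pressure r*p_sa (relative humidity times saturation)."""
    return k_s * (w_e * p_se - r * p_sa)

# At 100% humidity with w_e*p_se equal to p_sa the net loss vanishes:
print(skin_water_loss(k_s=0.1, w_e=1.0, p_se=2.3, r=1.0, p_sa=2.3))  # 0.0
```

Lowering the ambient humidity r increases the loss, which is the parametric dependence on the environment that the text describes.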

For convenience, tracer experiments were conducted on a wild house mouse in a controlled environment in order to obtain a simple model for its water balance. A simplified single-compartment model, used merely to demonstrate the procedure, is shown in fig. 1. Obviously, a more meaningful model would include more compartments, such as for intracellular water, extracellular water and storage water.

Equilibrium acclimation graphs averaged over several animals of drinking in-

flux, food-water influx, urine efflux and net exchange by skin and lungs are provided by fig. 2 [Haines]. The solid curve in fig. 2(a) connects experimental values

of drinking water influxes given at various levels of water deprivation. The dashed

curve connects the calculated values of water influx from food. It is found that

this influx for the unconstrained mouse is nearly proportional to body mass. Urina-

ry and fecal daily water losses are charted by dash-dot curves on fig. 2 (b). The

solid curve joins experimental data for net evaporative water effluxes at different

drinking levels. As drinking water is reduced, it shows that urinary-fecal and eva-

porative losses are decreased. Fig. 2 (b) further breaks down evaporative losses through skin (short dashes) and through evaporation (long dashes).

A typical long-term tracer washout record for an equilibrium (ad libitum) animal is shown in fig. 3. After tritiated water injection, periodic samples of body

water were analyzed for radioactivity. Note that the radioactive decay of tritium

(half-life of 12.5 years) has a negligible effect on the experiment. Obviously, the decline can be closely fitted by a single negative exponential over the time interval

shown. From the single compartmental equation for body water change

dW_b/dt = φ_in − φ_out   (11)

and the specific tracer activity change

da/dt = −(φ_in / W_b) a ,   (12)

it is seen that in equilibrium the magnitude of the negative slope of the tracer-experiment graph, fig. 3, is inversely proportional to the quantity of body water.
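A minimal simulation of the tracer relations (11) and (12) shows how, in equilibrium, the body-water quantity can be recovered from the washout slope; the influx and body-water values below are invented for illustration:

```python
import math

# Sketch of the single-compartment tracer relations (11)-(12): in equilibrium
# (phi_in = phi_out) the specific activity decays as
# a(t) = a(0) * exp(-(phi_in/W_b) * t), so the semilog slope of a fig. 3-style
# plot recovers W_b once phi_in is known. All numerical values are invented.

phi_in = 5.0     # assumed measured water influx
W_b_true = 25.0  # body water to be recovered
a0 = 0.60        # initial specific activity after THO injection

days = range(10)
activity = [a0 * math.exp(-(phi_in / W_b_true) * t) for t in days]

# Slope of log(activity) versus time from the first and last samples:
slope = (math.log(activity[-1]) - math.log(activity[0])) / (len(days) - 1)
W_b_estimated = phi_in / (-slope)
print(round(W_b_estimated, 3))  # 25.0
```

This is exactly the inverse-proportionality between slope magnitude and body water noted in the text.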

If the specific activity plotted in fig. 3 were instead plotted from separate experimental data for evaporative and plasma specific activities, their graphs would have

the same slope as shown in fig. 3, but each would have its own level. This can be explained by a two-compartment model of: (1) internal water in equilibrium with blood plasma, and (2) extravascular fluid in skin and lungs affected by evaporative exchange.

SOCIETAL PROCESSES

There have been numerous special journal issues, books, conferences and compre-

hensive reports devoted to such major areas of society as urban dynamics, public systems, health care, transportation and economics. Economics, with Keynes as a pioneer researcher, has probably received the most concentration. Again an adaptive structure, such as that of the bilinear system, is necessary to satisfy well-developed intuitive concepts which in many cases have been substantiated experimentally. As noted by Runyan, gross simplifications of a very complicated system have, in many instances, hindered rather than helped interdisciplinary work in economics. The adaptive

bilinear structure offers a significant improvement in this respect, and parametric

control by economic investment is a good example of its use.

For population transfer models, arrival rates of different classes are equal to

appropriate population class terms multiplied by corresponding migration-attractiveness multipliers, again yielding a bilinear model.

In socioeconomics, adaptive control may be synthesized through such quantities

as legislation, land zoning, taxes and capital investment.

Transportation Systems

Transportation represents one particularly interesting socioeconomic process. It

may be desired to develop a model in order to study the transportation needs of a nation, state or region. A significant part of this study might include an analysis of

transportation effectiveness to regulate socioeconomic development. It is apparent

from historical developments of railroad lines, freeways, air service and shipping

that industrial development, and socioeconomic development in general, follow the transportation networks.

A systems overview of the transportation process is shown by fig. 4. This block

diagram shows the major interactions and feedback loops of demography, economics,

transportation planning and transportation network dynamics.

The simplicity of this functional diagram is somewhat deceiving, since each block represents a major system in its own right, with numerous interconnections and feedback loops. Fortunately, this study need only concern itself with transportation-related socioeconomic and environmental processes. While land-use and pollution are not

shown as separate subsystems in fig. 4, they are inherently part of the transporta-

tion planning/controller and the transportation dynamics subsystems. Land-use plan-

ning plays a most significant role in the generation of transportation policies which

are shown as inputs to the network dynamics.

The transportation "plant" dynamics, which includes people and commodity multimodal transportation, is composed of three major components, viz., trip generation,

distribution and assignment (including modal choice).

The trip generation portion establishes a relationship between trip end-volume

and such socioeconomic factors as land use, demography and economics. Demographic information utilized here includes total population, school enrollment and the number of

household units in a zone of interest. Useful economic data includes total employ-

ment, average income, automobile ownership and retail sales. Hosford, in a Missouri study, showed that only a few basic factors are necessary to estimate trip generation, with an effective trip-attraction factor entering the process as a parametric control.

The trip-distribution model assigns trips from the generation model to competing

destination zones or compartments. Three widely used adaptive models are: (1) the Fratar model, (2) the intervening-opportunities model, and (3) the gravity model [Heanne and Pyers].

The Fratar model expands existing highway travel patterns by a growth factor for

each origin and destination zone. The future trips are then loaded on to transporta-

tion networks according to some assignment technique based on travel time, convenience, etc. The intervening-opportunities model is an exponential probability model based on the simple relation that the probability that a trip will terminate in a region or compartment is equal to the probability that this region contains an acceptable destination, times the probability that an acceptable destination closer to the origin of the trip has not been found. The gravity model, perhaps the most popular trip-distribution model,

is analogous to Newton's gravitational law of masses. Consequently, the traffic between zones or compartments is proportional to the product of the compartmental populations and inversely proportional to the square of the distance between the compartments of interest. Frequently, the gravity model is further adapted to a particular system by means of some parametric multiplier or attraction factor.

The traffic-assignment model allocates trip interchanges to a specific transportation system. This includes modal selection as well as network assignment. Most

traffic-assignment models are based on a criterion of minimum path length, but more realistic models should include convenience, comfort, cost, safety, etc. Greenshields derived a mathematical expression for such a quality of traffic flow.

Transportation models of this kind may be used to formulate development plans

for a statewide system which must be integrated with similar national models. Like

the national model, which is an expansion of the Northeast Corridor study, statewide studies must include well-formed goals, diverse socioeconomic systems, numerous transportation modes and geometrical nodes, and different concepts of quality of transportation as they relate to an overall quality of life. Five statewide transportation studies of a superficial nature are compared by Hazen.

REFERENCES

Alvi, Z.M., "Predictive Aspects of Monitored Medical Data", Ph.D. dissertation, School of Engineering, University of California, Los Angeles, 1968.

Chew, R.M., "Water Metabolism of Mammals", Physiological Mammalogy, Vol. II (Eds. W.V. Mayer and R.G. Van Gelder), Academic Press, New York, 1965.

D'Angelo, H., Linear Time-Varying Systems: Analysis and Synthesis, Allyn and Bacon, Boston, 1970.

Greenshields, B.D., "The Measurement of Highway Traffic Performance", Traffic Engineer, 39, 26-30, 1969.

Grodins, F.S., Control Theory and Biological Systems, Columbia Press, New York, 1963.

Haines, H., Personal Communication, University of Oklahoma.

Hazen, P.I., "A Comparative Analysis of Statewide Transportation Studies", Highway Research Record, 401, 39-54, 1972.

Heanne, K.E. and Pyers, C.E., "A Comparative Evaluation of Trip Distribution Procedures", Highway Research Record, 114, 20-50, 1965.

Hill, T.L. and Kedem, O., "Studies in Irreversible Thermodynamics III. Models for Steady-state and Active Transport across Membranes", J. Theoret. Biology, 10, 399-441, 1966.

Hosford, J.E., "Development of a Statewide Traffic Model for the State of Missouri", Final Report of Project No. 2794-P, Midwest Research Institute, Kansas City, Missouri, 1966.

Keynes, J.M., General Theory of Employment, Interest and Money, Harcourt, Brace and Co., New York, 1935.

Mohler, R.R., Bilinear Control Processes with Applications to Engineering, Ecology and Medicine, Academic Press, New York, to be published.

Mohler, R.R. and Ruberti, A., Eds., Theory and Application of Variable Structure Systems, Academic Press, New York, 1972.

Mohler, R.R. and Shen, C.N., Optimal Control of Nuclear Reactors, Academic Press, New York, 1970.

Milsum, J.H., Biological Control Systems Analysis, McGraw-Hill, New York, 1966.

Pitts, R.F., Physiology of the Kidney and Body Fluids, Year Book Medical Publishers, Chicago, 1963.

Ruch, T.C. and Patton, H.D., Physiology and Biophysics, 19th edition, W.B. Saunders, Philadelphia, 1965.

Runyan, H.M., "Cybernetics of Economic Systems", IEEE Transactions Systems, Man and Cybernetics, SMC-1, 8-17, 1971.

Snyder, W.S., et al., "Urinary Excretion of Tritium following Exposure of Man to HTO, a Two-Exponential Model", Phys. Med. Biology, 13, 547-559, 1968.

Sheppard, C.W. and Householder, A.S., "The Mathematical Basis of the Interpretation of Tracer Experiments in Closed Steady-state Systems", J. Applied Physics, 22, 510-520, 1951.

Zadeh, L.A., "Outline of a New Approach to the Analysis of Complex Systems and Decision Processes", IEEE Transactions Systems, Man and Cybernetics, SMC-3, 28-44, 1973.

[Fig. 1 shows a block diagram: influxes (drinking water, water from food, skin absorption, respiratory absorption) into body water content W_B; effluxes (urinary + fecal water, skin evaporation, respiratory evaporation).]

Fig. 1. A Simplified Water Exchange Model for the Experimental Wild House Mouse.

[Fig. 2 shows two panels plotted against fraction of ad libitum drinking water, at 20°C and 50% relative humidity: (a) influxes (drinking water; water from food, metabolic + hygroscopic); (b) effluxes (urinary + fecal water; net evaporative water, skin + respiratory; net skin evaporation; net respiratory evaporation).]

Fig. 2. "Equilibrium" Water Fluxes Versus Water Deprivation in the Wild House Mouse.

[Fig. 3 shows specific activity on a semilog grid versus time, 0 to 9 days after THO injection.]

Fig. 3. Specific Activity Versus Time After THO Injection in an Ad Libitum Wild House Mouse (Semilog Grid).

[Fig. 4 shows a block diagram linking a demographic model (birth rate, death rate, migration; population level, classification and distribution), an economic model, the transportation planning/controller (generating transportation policies) and the transportation network dynamics (trips/modal artery, trip convenience, cost and performance), with feedback through trade, industry, housing and social preferences.]

Note: Arrows generally represent multivariable outputs. E.g., transportation policies include components of land use, legislation, taxation, and investment in transportation for rapid transit, highways, airports and seaports. Transportation outputs include trips/modal artery, trip convenience, safety and cost, pollution (CO, NOx, HC), etc.

Fig. 4. Interconnection Levels of Dynamic Socioeconomic & Transportation Systems


ON THE OPTIMAL SIZE OF SYSTEM MODEL

Mohamed Z. Dajani

University of Louvain

Louvain-la-Neuve, Belgium

ABSTRACT

The problem of finding the best size for the system model is con-

sidered. By explicitly synthesizing the complexity cost of a proposed

model, one is able to transform this problem into a constrained inte-

ger programming problem.

1. INTRODUCTION

Finding the "best" model is a dilemma that has occupied the atten-

tion of an ever-increasing number of investigators. Since complicating the model may increase its accuracy, the "best" model is the optimal compromise between the conflicting demands of better accuracy and simpler models [1], [2], [3].

In this brief paper, attention is focused on the problem of finding the best dimension of the system model, one that newly incorporates, quantitatively, the optimization of model complexity or, inversely, its accuracy. Note that model complexity is not produced solely by the increase of the model's dimension but is also affected by the choice of a realization method (interconnections, and number of operators).

For a physical process with measurable input-output pairs it is

desired to find a best model subject to some general quantitative per-

formance criterion

J_G = J_estimator + J_control + J_complexity − J_accuracy   (1)

In order to proceed one has to start from a rough model and improve

upon it as time advances. Let

ẋ = f(x, u; α, t) ;   x : (s×1) ;   u : (m×1)   (2)

be the initial system state model of order s with known initial conditions x(0), s being a changeable variable. (Definitions of the symbols are given in the list of symbols.) Find the optimal value of s = ŝ, and that of u = û, such that J_G is minimized. The

+ and − operators in (1) are symbolic and their exact nature is determined by the application, e.g.,

+{·} = e^{+{·}} ,   −{·} = e^{−{·}} .   (3)

2. ANALYSIS

Defining the model complexity cost, J_complexity, as the aggregation of submodel subcosts, one needs to define the following per-unit (p.u.) set of subcosts:

C_state : p.u. cost of deploying a single model (estimator) state;

C_control : p.u. cost of mapping a single state (estimate) into a single closed-loop control;

C_control energy : p.u. cost of input energy per control channel;

C_gain : p.u. cost of solving one Riccati equation to evaluate the controller gain (estimator gain);

C_interconnection : p.u. cost of interconnecting two different states with a unit gain.

Assuming an arbitrary number of interconnections, g, we therefore have

J_complexity = 2s × C_state + (m×s) × C_control + m × C_control energy + s(s−1) × C_gain + 2g × C_interconnection   (4)

is the explicit expression of J_complexity as a function of the variable s, a chosen set of p.u. costs and the intercomplexity index g. At this point, it is worth noting that J_accuracy will be chosen in accordance with prevailing forms in the literature, i.e., the weighted sum of the positive differences between the process output and the model output, e.g.,

J_accuracy = Σ_i a_i (y_i − y_mi)² ,  or  Σ_i b_i |y_i − y_mi| ,  etc.,   (5)

for the same input sequence.

Although it is not explicitly evident, J_accuracy is an implicit function of the model order, s.

The remaining two performance indices, J_control and J_estimation, are of the familiar quadratic-weighted-integral type.

The most natural approach at this point seems to be to apply the Pontryagin maximum principle. However, its direct application reveals no result and indicates its inability to handle optimization for systems of non-fixed configuration. A generalization of the maximum principle to this type of problem would certainly be a useful extension.

Ideally one wishes to optimize J_G by simultaneously optimizing the modeling cost, J_M = J_complexity + J_accuracy, and the stochastic control cost, J_s.c. = J_estimation + J_control. It is true that both groups of costs are interdependent; however, simultaneous optimization of J_M and J_s.c. is analytically formidable. In the present treatment a suboptimal procedure is adopted in which J_M is optimized first, and then J_s.c. is optimized using the results of the first step.

In this paper only the optimization of J_M is discussed, since that of J_s.c. is widely treated in the literature, e.g. [4]. In the next section an algorithm (essentially an integer programming algorithm) is presented whose output is the optimal value of s = ŝ.

3. THE PROBLEM DEFINITION

Find the minimum of Jcomplexity such that :

i $ s 5 S ~: , (inequality constraint) , (6)

where S ~: the integer representing the model order at which case

[~T~] is a singular matrix. ~ is a characteristic matrix of the sys-

tem whose entries are the delayed input and output of the same system. O

For details see AstrSm and Eykhoff [i].

As is typical of integer programming problems, the solution for the optimal model size ŝ is not trivial and requires certain conditions to be met, most of which are justified in the present formulation [5].
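Since the constraint set (6) is a finite range of integers, the minimization can be sketched by exhaustive enumeration; the per-unit costs, m, g and the bound S* below are all invented:

```python
# Sketch of the constrained integer program of Section 3: evaluate the
# complexity cost (4) for each admissible order s and keep the minimizer.
# The per-unit costs, m, g, and the bound S_star are all illustrative.

def j_complexity(s, m, g, c_state, c_control, c_energy, c_gain, c_inter):
    """Eq. (4): J = 2s*C_state + (m*s)*C_control + m*C_energy
                    + s*(s-1)*C_gain + 2g*C_interconnection."""
    return (2 * s * c_state + m * s * c_control + m * c_energy
            + s * (s - 1) * c_gain + 2 * g * c_inter)

def best_order(S_star, **costs):
    """Exhaustive search over 1 <= s <= S_star (constraint (6))."""
    return min(range(1, S_star + 1), key=lambda s: j_complexity(s, **costs))

s_hat = best_order(S_star=8, m=2, g=3, c_state=1.0, c_control=0.5,
                   c_energy=0.2, c_gain=2.0, c_inter=0.1)
print(s_hat)  # 1 (J_complexity alone grows with s for these costs)
```

Minimizing the complexity cost alone of course drives s to its lower bound; in the full formulation the accuracy term J_accuracy, which decreases with s, would counterbalance it in the objective.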


4. CONCLUSION

It is hoped that this presentation will stimulate deeper interest

in the engineering problem of finding the most feasible system model

by explicitly including a complexity cost that reflects the model rea-

lization effort and its versatility. The model identification problem

is transformed into a constrained integer programming problem. A natural

extension of this methodology is the generalization of the maximum principle, or of the calculus of variations, to systems of non-fixed configuration.

5. BIBLIOGRAPHY

[1] Åström, K.J., and Eykhoff, P. System identification - A survey. Automatica, Vol. 7, pp. 123-162, 1971.

[2] Woodside, C.M. Estimation of the order of linear systems. Automatica, Vol. 7, pp. 727-733, 1971.

[3] Sage, A.P., and Melsa, J.L. System Identification. Academic Press, New York, 1972.

[4] Special issue on the linear-quadratic-Gaussian problem. IEEE Trans. on Automatic Control, Vol. 16, Dec. 1971.

[5] Graves, L.G., and Wolfe, P. Recent Advances in Mathematical Programming. McGraw-Hill, Inc., New York, 1963.

6. LIST OF SYMBOLS

x : (s×1) model state vector

u : (m×1) system control vector

α : (q×r) model random parameter matrix

s : the model size (dimension of the state vector x)

a_i, b_i : time-varying weighting coefficients

Acknowledgement.

The author wishes to thank the Faculty of Applied Science, and in particular his colleagues at the Unité de Mécanique Appliquée of the University of Louvain, for their continuous support.

INFORMATION-THEORETIC METHODS FOR MODELLING AND ANALYSING LARGE SYSTEMS

Raymond E. Rink Department of Electrical and Computer Engineering Oregon State University

Corvallis, Oregon

Introduction

The pairwise, causal interactions between components and between variables in

a complex system may be viewed as processes of communication. That is, for the

values of one variable to affect the values of another, there must be a direct path

(channel) which conveys information to the latter about the values of the former.

This point of view can provide some useful methods for modelling and analysing large

systems, as will be outlined in the following sections.

Transmission Model of a System

The first step in modelling a real system involves choosing a suitable set of state variables (if the model is to be Markovian) and identifying the direct interactions between them.

The statement that "the value of variable x at time t has a causal effect on the value of variable y at time t + 1" implies that x(t) and y(t+1) are not statistically independent, over the ensemble of all possible time series generated by the system. This hypothesis can be tested by measuring the transmission (mutual information) between x(t) and y(t+1), given some data. If the system is stationary and ergodic, this transmission, denoted* T(x,y'|y), can be measured from a single time series of data. It is easily shown that T(x,y'|y) is zero if and only if x(t) and

y(t+1) are statistically independent. This idea was apparently first discussed by Watanabe [1], and was called "interdependence analysis" by him. Conant [2] later

showed how the measured transmissions may be used to group the system variables into

strongly connected subsystems.

The transmission is defined as

T(x,y'|y) = H(y'|y) − H(y'|x,y) ,

where the H's are entropy functions. If x and y are discrete variables (or quantized continuous variables) with L possible levels each, the entropies can be written as

H(y'|y) = − Σ_{i=1}^{L} Σ_{j=1}^{L} p_{y',y}(i,j) log p_{y'|y}(i|j)

and

H(y'|x,y) = − Σ_i Σ_j Σ_k p_{y',y,x}(i,j,k) log p_{y'|y,x}(i|j,k) ,

where p_{y'|y}(i|j) is

*y' = y(t+1) is, in most dynamic systems, most strongly dependent on y(t) itself. This dependence is eliminated by conditioning the transmission on y, leaving a more sensitive measure of interaction.


the probability that y' will take on its ith value, given that y had its jth value,

etc. These probabilities are, of course, not known without prior knowledge and must

be estimated from frequency data. If N values of each of the variables are avail-

able, then the estimated transmission T̂(x,y'|y) is computed from the same entropy expressions with each probability replaced by its observed relative frequency, e.g. p̂_{y',y}(i,j) = N_{y',y}(i,j)/N.

As N becomes very large, T̂ will approach T if the data are statistically well-behaved. In practice, N >> L³ usually implies that T̂ ≈ T.
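A plug-in (relative-frequency) estimate of the conditional transmission described above can be sketched as follows; the function names and the toy series are invented for illustration:

```python
import math
from collections import Counter

# Sketch of the plug-in estimate T(x, y'|y) = H(y'|y) - H(y'|x, y), with
# all probabilities replaced by observed relative frequencies, as in the text.

def cond_entropy(pairs):
    """H(target | given) from relative frequencies; pairs = [(given, target)]."""
    joint = Counter(pairs)
    marg = Counter(g for g, _ in pairs)
    n = len(pairs)
    # -sum over (g,t) of p(g,t) * log2 p(t|g), with p(t|g) = count(g,t)/count(g)
    return -sum((c / n) * math.log2(c / marg[g]) for (g, _), c in joint.items())

def transmission(x, y):
    """Estimate T(x, y'|y) from a single pair of time series x(t), y(t)."""
    yy = [(y[t], y[t + 1]) for t in range(len(y) - 1)]            # (y, y')
    xyy = [((x[t], y[t]), y[t + 1]) for t in range(len(y) - 1)]   # ((x,y), y')
    return cond_entropy(yy) - cond_entropy(xyy)

# Toy series in which y(t+1) simply copies x(t): conditioning on x removes
# all residual uncertainty about y', so the estimated transmission is positive.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
print(transmission(x, y) > 0)  # True
```

With realistic data the counts must be large relative to the number of cells (the N >> L³ condition above) for the frequency estimates to be trustworthy.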

One weakness of the method is that the converse of the statement in the previous paragraph does not hold, i.e. it is not true that statistical dependence necessarily implies a direct, causal effect of x(t) on y(t+1). It may be that a third variable influences both x(t) and y(t+1) in such a way that the apparent transmission T(x,y'|y) is spurious. There is, however, the following simple procedure, which can be used, provided that enough data are available, to identify those interactions that are direct and causal.

Suppose that there are n state variables {x_1, x_2, ..., x_n} in a minimal representation of the system, and that the causal actions upon x_1 of all other (n−1) variables are to be determined. The first step is to measure all the pairwise transmissions and to find the i_1 for which T̂(x_{i_1}, x_1'|x_1) takes its maximum value. This identifies a variable x_{i_1}, which is tentatively assumed to have a causal action on x_1. If T̂(x_{i_1}, x_1'|x_1) is, in fact, spurious, then

H(x_1'|{x_i : i ≠ i_1}) = H(x_1'|{x_i}) .

In other words, the addition of x_{i_1} to the set of conditioning variables does not produce a further decrease in the conditional entropy of x_1' if x_{i_1} is not directly connected to x_1. Whereas, if x_{i_1} is directly connected to x_1 and there is noise anywhere in the system, the value of x_{i_1} at time t is not uniquely determined by the values of the other (n−1) variables at time t, and the conditional entropy of x_1 at time t+1 will be further reduced by the addition of x_{i_1} to the conditioning set. This will have to be checked at the end of the procedure.

Assuming, then, x_{i_1} to be causal, the second step is to remeasure the transmissions between the other (n−2) variables and x_1, conditioned on x_{i_1}, to find the i_2 which maximizes T̂(x_{i_2}, x_1'|x_1, x_{i_1}).

If this maximum is not zero, it identifies a variable x_{i_2} which is tentatively taken to have a causal action on x_1 that is not mediated by x_{i_1}. Again, the conditional entropies of x_1' with and without x_{i_2} in the conditioning set will have to be checked at the end of the procedure, in order to verify this assumption.

The third step is to remeasure the transmissions between the remaining (n−3) variables and x_1 conditioned on both x_{i_1} and x_{i_2}, and to identify the x_{i_3} which has the largest conditional transmission, etc. The process continues for s_1 steps, with s_1 ≤ (n−1), until the remaining conditional transmissions are all zero. Of course, any transmission that is found to be zero at any step need not be remeasured, for that variable is then known to be noncausal for x_1.

At the end of the procedure it is necessary to check whether the net transmissions

T̂(x_{i_j}, x_1'|x_1, x_{i_1}, x_{i_2}, ..., x_{i_{j−1}}, x_{i_{j+1}}, ..., x_{i_{s_1}})
  = H(x_1'|x_1, x_{i_1}, x_{i_2}, ..., x_{i_{j−1}}, x_{i_{j+1}}, ..., x_{i_{s_1}}) − H(x_1'|x_1, x_{i_1}, x_{i_2}, ..., x_{i_{s_1}})

are all nonzero for j = 1, 2, ..., s_1, thus confirming that the s_1 identified variables

are indeed causal for x_1. If the net transmission is zero for any j, then that x_{i_j} is not causal and may be deleted from the subset, which, in turn, requires the net transmissions to be re-evaluated for the remaining (s_1 − 1) variables. If one of these is zero, that variable must also be deleted, etc. The final result of the process is a set of (s_1 − r_1) variables with nonzero net transmissions to x_1. If this process is repeated for each of the state variables, the overall result is the transmission model of the system. It can be shown diagrammatically with nodes and arrows, where the arrows are labeled with the "strength" of the net transmission.
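The three-step procedure above is, in effect, a greedy selection loop. A sketch follows, assuming a conditional-transmission estimator is supplied as a function; the toy estimator is invented to mimic one direct and one spurious dependence:

```python
# Sketch of the greedy identification procedure described above: repeatedly
# add the variable with the largest conditional transmission to x1' until the
# remaining transmissions are (approximately) zero. `cond_transmission(v, S)`
# is assumed to return the estimated transmission of candidate v conditioned
# on the already-selected set S, e.g. a frequency-based estimator.

def identify_causal_set(candidates, cond_transmission, tol=1e-6):
    """Return the ordered subset of candidates with nonzero transmission
    to x1', each remeasured conditioned on those already selected."""
    selected, remaining = [], list(candidates)
    while remaining:
        scores = {v: cond_transmission(v, selected) for v in remaining}
        best = max(scores, key=scores.get)
        if scores[best] <= tol:      # all remaining transmissions ~ zero
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Invented toy estimator: 'a' has a direct effect; 'b' looks dependent only
# while 'a' is absent from the conditioning set (a spurious transmission).
def toy_T(v, conditioned_on):
    if v == 'a':
        return 0.8
    if v == 'b' and 'a' not in conditioned_on:
        return 0.3
    return 0.0

print(identify_causal_set(['a', 'b', 'c'], toy_T))  # ['a']
```

The final net-transmission check of the text would then re-test each selected variable with all the others held in the conditioning set.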

Obviously, s = max{s_i} cannot be very large if this procedure is to be effective, given a finite amount of data and computing time. Conditional probabilities which come from an (s+1)-dimensional distribution with L discrete levels along each dimension are encountered in the course of the computation, and good frequency estimates of these probabilities would require N >> L^(s+1) values for each of the state variables. The number of computer operations is similarly large.

This does not prevent the method from being used for large-scale systems, however, provided that the interconnections are relatively sparse. The state dimension n could be several hundred and, if the maximum number of interconnections s which impinge on any state variable were not more than, say, ten, the method could be used

with L = 4 or L = 8. Many large-scale systems are of this type, where variables

interact directly only with neighboring variables, as in a diffusion process. Spa-

tially distributed socio-economic and ecological systems in general would have this

property.

Functional Form of the Dependencies

The transmission modelling approach described in the last section has some su-

perficial similarity to correlation methods, especially to the partial-coherence-

function models of linear systems. Those methods are really much more restricted,

however, since they assume a linear form of relationship between variables, whereas

the transmission model refers to statistical dependence in general, without prior

assumption of any particular functional form.


Of course, the functional forms must still be specified if the model is to be

used for extensive analysis. The huge number of conditional transition probabili-

ties that comprise a purely stochastic model of a large system may well be unmanage-

able for simulation or optimization studies.

One approach to the needed simplification is to assume that the state variables are governed continuously by a dynamic system of equations, each having a rate-multiplier structure of the type used by Forrester, e.g.

dx_1/dt = x_1 [ (a_1 + n_1) ∏_{j=1}^{s_1-r_1} f_{1j}(x_{ij}) - (b_1 + m_1) ∏_{j=1}^{s_1-r_1} g_{1j}(x_{ij}) ].   (1)

Here the rate of accumulation of x_1 is homogeneous of first degree in x_1, as is the case in ecological and socio-economic systems, where birth and death rates for species, growth and decay rates for commodities and capital, etc., all are proportional to the existing level of the variable.

The factors (a_1 + n_1) and (b_1 + m_1) include the "normal" exponential growth and decay parameters a_1 and b_1, i.e. the average exponents that would be found in the absence of feedback from other variables. They also include the "noise" fluctuations n_1(t) and m_1(t) that may occur in the exponent due to randomness in the generation or degeneration process for the species or commodity.

The factors f_{1j}(x_{ij}) or g_{1j}(x_{ij}) (normally only one of them would vary from unity for a particular x_{ij}) represent the modulation in a parameter value that is caused by feedback from another variable in the system. In cases where the system is a cybernetic one, these multipliers can perhaps be interpreted as endogenous control variables whose values are being optimized in some sense. Thus, the general problem of identification of f_{1j} or g_{1j} would involve finding an appropriate performance index and then synthesizing f_{1j} or g_{1j} as a function of x_{ij}, i.e. the dual optimization problem. We will consider here a special case, but one which is frequently plausible; namely the "bang-bang" control law

f_{1j}(x_{ij}) = 1 + (d_{1j}/2) Sgn(x_{ij} - x̄_{ij}).   (2)

Here x̄_{ij} is the "normal" value of x_{ij}, and (1 ± d_{1j}/2) is the modifying factor that is applied to the normal growth parameter a_1, depending on whether x_{ij} > x̄_{ij} is favorable or unfavorable to the growth of x_1. The only parameter to be determined is d_{1j}, the "depth of modulation," which will be seen to be related to the transmission. We may remark in passing that, if (∏ f_{1j}) is considered to be a composite control variable, the dynamic system is of the bilinear type, a class whose excellent controllability and performance qualities have been analyzed elsewhere [3,4,5].
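A discretized simulation of a rate-multiplier equation of the form (1) can be sketched as follows (Python; the Euler step, time step, and parameter values are illustrative assumptions, not taken from the paper):

```python
def step(x1, a1, b1, f_factors, g_factors, n1=0.0, m1=0.0, dt=0.01):
    """One Euler step of the rate-multiplier form (1):
    dx1/dt = x1 * [(a1 + n1)*prod(f) - (b1 + m1)*prod(g)]."""
    prod_f = 1.0
    for f in f_factors:
        prod_f *= f
    prod_g = 1.0
    for g in g_factors:
        prod_g *= g
    rate = x1 * ((a1 + n1) * prod_f - (b1 + m1) * prod_g)
    return x1 + dt * rate

# With all multipliers at unity and no noise, growth is exponential
# at the net "normal" rate (a1 - b1), as the text describes.
x = 1.0
for _ in range(100):
    x = step(x, a1=0.5, b1=0.3, f_factors=[1.0], g_factors=[1.0])
```

Bang-bang multipliers of the form (2) would simply replace the unity entries of `f_factors` with 1 ± d/2 according to the sign of x_{ij} - x̄_{ij}.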

Consider the difference equation

y(t+1) = y(t) + y(t) [a + n(t)] [1 + (d/2) Sgn(x(t) - x̄)].   (3)


This is equivalent to the equation

z(t+1) ≜ y(t+1)/y(t) - 1 = a [1 + n(t)/a] [1 + (d/2) Sgn(x(t) - x̄)].   (4)

This transformation would correspond, in an actual identification process, to taking the data {y(t)}, forming the ratios of successive values, and subtracting unity.
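The transformation just described is straightforward to state in code (a sketch; the 5%-growth series used to exercise it is an illustrative input):

```python
def ratios_minus_one(y):
    """Transform data {y(t)} into {z(t+1)} = y(t+1)/y(t) - 1,
    as described for the identification step following (4)."""
    return [y[t + 1] / y[t] - 1.0 for t in range(len(y) - 1)]

# A noiseless series growing 5% per step recovers z = 0.05 throughout.
y = [100.0 * 1.05 ** t for t in range(6)]
z = ratios_minus_one(y)
```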

The precise relationship between T(x,z') and d depends on the statistics of x(t) and n(t). If the data {x} come from a stationary process, the "normal" value x̄ could be taken as that value which is exceeded by one-half the data. This is equivalent to assuming that Sgn(x(t) - x̄) is binary with probabilities (1/2, 1/2), regardless of the distribution of x(t) itself. Also, in many cases of large scale systems (e.g. socio-economic) the fluctuations n(t) represent the aggregate decision noise of a collection of individuals, hence will tend to be normally distributed.

Under these particular assumptions, a Monte Carlo simulation was used to determine the transmission T̂(x,z') for various values of c and d, where c/2 is the standard deviation of n(t)/a. Values of x and n were generated by random number generators and z' computed. The simulated "data" {x} and {z} were then scaled and sorted into L = 16 quantum intervals and the entropies calculated from the frequency estimates of probability.
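The quantize-and-count entropy estimation step can be sketched as follows (Python; the histogram estimator, sample size, and test signals are illustrative assumptions, and finite-sample bias makes even independent data show a small positive transmission):

```python
import math, random

def entropy(labels):
    """Entropy in nats from frequency estimates."""
    counts = {}
    for v in labels:
        counts[v] = counts.get(v, 0) + 1
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in counts.values())

def quantize(data, levels=16):
    """Scale and sort samples into 'levels' equal quantum intervals."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / levels or 1.0
    return [min(int((v - lo) / width), levels - 1) for v in data]

def transmission(x, z, levels=16):
    """T(x, z) = H(x) + H(z) - H(x, z), from quantized frequencies."""
    qx, qz = quantize(x, levels), quantize(z, levels)
    return entropy(qx) + entropy(qz) - entropy(list(zip(qx, qz)))

random.seed(0)
x = [random.gauss(0, 1) for _ in range(5000)]
noise = [random.gauss(0, 1) for _ in range(5000)]
z_dep = [xi + 0.5 * ni for xi, ni in zip(x, noise)]   # depends on x
z_ind = noise                                          # independent of x
```

The dependent pair shows a substantially larger estimated transmission than the independent pair, which is the qualitative behavior the Monte Carlo study relies on.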

The resulting transmission values are plotted in Figure 1, as functions of d/2 with c/2 as parameter. Clearly, the transmission value alone is not enough to determine d if c is also unknown, as would usually be the case with real data.

The residual entropy H(z'|x) provides the extra information that is needed, and it is plotted in Figure 2. If the measured entropy (e.g. Ĥ(z'|x) = 1.85 nats) is drawn as a horizontal line on Figure 2, the intersections of that line with the parametric family provide several pairs of values (c/2, d/2) which, when transferred to Figure 1, can be fitted to a curve as shown. The intersection of this curve with the horizontal line corresponding to the measured value of transmission (e.g. T̂ = 0.563) gives the values of c/2 and d/2 (e.g. c/2 = 0.25 and d/2 = 0.435).

For the multivariable case where, for example, the interactions of (s_1 - r_1) variables {x_{ij}} with x_1 are to be modelled by equations of the form of (1) and (2), the data can be transformed as above to yield {z_1}. If the standard deviations of m_1 and n_1 are known, the corresponding curves of Figure 1 can be used directly to find the value of d_{1j}/2 for each measured net transmission T(x_{ij}, x_1').

If, on the other hand, the standard deviation of n_1, say, is unknown, but it is known that m_1 = 0 and that all interactions are of the type f_{1j} rather than g_{1j}, then Figures 1 and 2 can be used together to estimate the d_{1j}/2 values. (This case may occur, for example, in socio-economic systems where birth rates and capital generation rates result from individual decisions and have, in the aggregate, both causal and random fluctuations, whereas death rates and depreciation rates do not.) The procedure involves simply taking the interactions one by one, treating the other


Figure 1. Transmission T(x,z') as a function of d/2, with c/2 as parameter.

Figure 2. Residual entropy H(z'|x) as a function of d/2, with c/2 as parameter.


factors as a resultant, normally-distributed "noise" factor

(a_1 + n_1) ∏_{j≠k} f_{1j}(x_{ij}) ≜ (a_1 + n̂_{1k})

with residual entropy

Ĥ(z_1'|x_{ik}) = Ĥ(z_1') - T̂(x_{ik}, z_1').

Here n̂_{1k} has a complicated expansion in terms of n_1 and the functions (d_{1j}/2) Sgn(x_{ij} - x̄_{ij}), j ≠ k, and the only justification for treating it as having a normal distribution lies in the tendency toward normality for functions of many stochastic variables, i.e. the law of large numbers.

In the more general case where both types of interactions f_{1j} and g_{1j} are known to exist and the standard deviations of n_1 and m_1 are unknown, more general procedures than those developed here will be needed.

Information and Performance

Quite apart from the parametric type of model discussed in the last section, there are some direct inferences that can be made about the potential performance of a large system, based on the transmission diagram. This is possible because there is, in any feedback control system, an intimate relationship between the amount of information available to the controller and the error performance of the system. In fact, it can be shown [6] that if the error entropy is taken as the measure of performance for a linear system, the reduction in error entropy achieved by feedback control is bounded by the information available to the controller.

Each state variable in a large system may be imagined to have a local controller which is receiving information about the values of certain other variables over separate communication channels, corresponding to those paths in the transmission diagram which impinge on the state variable in question. These channels represent a certain composite channel capacity ("information-gathering capacity" would be a more descriptive term) that is available to the local controller.

This composite local information capacity is not, however, simply the sum of the impinging transmission rates. The latter must be smaller than capacity if the channel reliability is to be adequate. Although Shannon's Coding Theorem shows that error-free transmission can be achieved at rates approaching the channel capacity, this can only be achieved at the expense of increasingly long time-delays for encoding and decoding the data, and arbitrarily long delays cannot be tolerated in real-time control. On the other hand, the total transmission rate will not be unnecessarily small, either, since control effectiveness increases with the amount of reliable data that is used. Thus, there is some optimal transmission between zero and capacity.


It has been shown [7] that, if the plant and controller are linear and certain other conditions are met, the optimal feedback transmission rate R satisfies the equation

2R = nE(R),

where n is the number of state variables being transmitted and E(R) is the channel reliability function. This equation always has a unique solution between zero and capacity. Conversely, if the transmission rate and n are known, the channel capacity can be found as C(R,n).
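Solving 2R = nE(R) numerically is a one-dimensional root-finding problem. A sketch under an assumed reliability function (the linear stand-in E(R) = C - R is purely illustrative; the paper does not specify E):

```python
def optimal_rate(E, n, capacity, tol=1e-10):
    """Solve 2R = n*E(R) for R in (0, capacity) by bisection.
    Assumes E is continuous and decreasing with E(capacity) = 0,
    so g(R) = 2R - n*E(R) crosses zero exactly once."""
    lo, hi = 0.0, capacity
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 2 * mid - n * E(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative linear stand-in E(R) = C - R, for which the
# closed-form solution is R = n*C/(n + 2).
C = 1.0
R = optimal_rate(lambda r: C - r, n=4, capacity=C)
```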

Thus, the composite local capacity C_1 available for the control of state variable x_1 can be estimated by calculating the total net transmission rate

R_1 = (1/T) Σ_{j=1}^{s_1-r_1} T(x_{ij}, x_1' | x_{i1}, ..., x_{i,j-1}, x_{i,j+1}, ..., x_{i,s_1-r_1})

where T is the sampling (coding) time interval, and then calculating

C_1 = C(R_1, s_1 - r_1).

The total information capacity in the system is then

C_t = Σ_{i=1}^{n} C_i.

Consider now the potential performance of the system in the extreme case where the total information-gathering capacity C_t is centralized, i.e. where all "local control" of state variables is replaced by a centralized controller which has the objective of optimizing a single performance index for the system as a whole.

If that performance index is the mean square state error and if the inherent "plant" mechanisms of the system are linear (not an unreasonable assumption for many socio-economic processes like population, capital, etc., as mentioned previously), then the optimum form of control law is linear state feedback and the separation theorem applies. In this case it can be shown [7,8] that the total mean square state error (m.s.e.) is bounded by

E[ ||x(t) - x_optimum||² ] ≤ [ N²(T) Σ_{i=2} λ_i(T) F(n,C_t,T) ] / [ 1 - γ(T) F(n,C_t,T) ] + e²(T)   (5)

where the summation gives the m.s.e. due to estimation error, i.e. limited information, and e²(T) is the m.s.e. resulting from the controller's limitations alone. N²(T) is the mean square state disturbance that would be caused by the exogenous noise acting on the uncontrolled system over the sample interval T, and γ(T) is the square of the largest eigenvalue of the state transition matrix of the uncontrolled system. The λ_i(T) are the squares of the largest eigenvalues of matrices Φ_i which govern the estimation-induced state error dynamics according to the expression

Δx(t) = Σ_{i=2} Φ_i [x̂(t-iT) - x(t-iT)],

where x̂(t-iT) is the real-time controller's estimate of the state vector at time (t-iT). The function F is defined by


F(n,C_t,T) = (2A + k) exp[ -2R_t* C_t T / n ]

where A is an algebraic function of n, C_t, and T; k is a parameter between zero and one which measures the relative entropy power of the exogenous noise source compared to that of a white Gaussian source; and R_t* is the solution of the equation

2R_t* = n E(R_t*).

Several important conclusions can be drawn from the bound (5). For given state dimension n and controller gain matrix, the m.s.e. is a function of only C_t, the total information capacity, and T, the sampling-coding time. There exists the possibility of information instability if 1 - γ(T)·F(n,C_t,T) is not greater than zero. In particular, if

lim_{T→0} [2A(n,C_t,T) + k] > 1,

then T must be greater than zero for the system to be informationally stable [8]. This result differs from the case of ordinary sampled-data control, where only e²(T) is present in the m.s.e. and the optimum sampling interval T is always zero.

Thus, for given n, C_t, and optimum central controller, there exists a T_opt ≥ 0 which minimizes the bound (5). The relationship between these minimum values and C_t provides the link between the potential performance of a large-scale system and its transmission diagram.

REFERENCES

1. Watanabe, S., IRE Transactions on Information Theory, PGIT-4, 84 (1954).

2. Conant, R., "Detecting Subsystems of a Complex System," IEEE Transactions on Systems, Man, and Cybernetics, pp. 550-553 (1972).

3. Rink, R. E. and R. R. Mohler, "Completely Controllable Bilinear Systems," SIAM Journal on Control, Vol. 6, 477-487 (1968).

4. Mohler, R. R. and R. E. Rink, "Control with a Multiplicative Mode," Journal of Basic Engineering, Vol. 91, 201-206 (1968).

5. Mohler, R. R., Bilinear Control Processes, with Application to Engineering, Ecology, and Medicine, Academic Press, N.Y. (1973).

6. Wiedemann, H. L., "Entropy Analysis of Feedback Control Systems," in Advances in Control Systems, C. T. Leondes, Ed., Academic Press, N.Y., Vol. 7 (1969).

7. Rink, R. E., "Optimal Utilization of Fixed-Capacity Channels in Feedback Control," Automatica, Vol. 9, 251-255 (March 1973).

8. Rink, R. E., "Coded-Data Control of Linear Systems," (to be published).


A NEW CRITERION FOR MODELING SYSTEMS

By Lawrence W. Taylor, Jr.

NASA Langley Research Center, Hampton, Virginia

The problem of modeling systems continues to receive considerable attention because of its importance and because of the difficulties involved. A wealth of information has accumulated in the technical literature on the subject of systems identification and parameter estimation; references 1 through 5 are offered as summaries. The problem that has received most attention is one in which the form of the system dynamics is known, input and noisy output data are available, and only the values of the unknown model parameters are sought which optimize a likelihood function or the mean-square response error. It would seem, with the variety of estimates, algorithms, error bounds, convergence proofs, and numerical examples that exist for these systems identification problems, that any difficulties in obtaining accurate estimates of the unknown parameter values should be a thing of the past. Unfortunately, difficulties continue to confront the analyst. Perhaps the most important reason for difficulties in modeling systems is that we do not know the form of the system dynamics. Consequently, the analyst must try a number of candidate forms and obtain estimates for each. He is then confronted with the problem of choosing one of them with little or no basis for his selection. It is tempting to use a model with many parameters, since it will fit the measured response best. Unfortunately, it is often the case that a simpler model would be better for predicting the system's response, because the fewer unknown parameters could be estimated with greater accuracy. It is this problem of the analyst that this paper addresses, offering a criterion which can be used to select the best candidate model. Specifically, a numerical example will be used to illustrate the notions expressed in references 4, 6, and 7.

SYSTEMS IDENTIFICATION WITH MODEL FORMAT KNOWN

The systems identification problem of determining the parameters of a linear, constant-coefficient model will be considered for several types of estimates. It will be shown that maximum-likelihood estimates can be identical to those which minimize the mean-square response error. The subject algorithm, therefore, can be used to obtain a variety of similar estimates. Attention is also given to the calculation of the gradient that is involved in the algorithm and to the Cramer-Rao bound which indicates the variance of the estimates.

Problem Statement

The problem considered is that of determining the values of certain model parameters which are best with regard to a particular criterion, if the input and noisy measurements of the response of a linear, constant-coefficient system are given. The system to be identified is defined by the following equations:

ẋ = Ax + Bu
y = Fx + Gu + b
z = y + n

where x, u, y, b, n, z are the state, control, calculated response, bias, noise, and measured response, respectively. The unknown parameters will form a vector c. The matrices A, B, F, and G and the vectors b and x(0) are functions of c.

Minimum Response Error Estimate

One criterion that is often used in systems identification is the mean-squ~re

difference between the measured response and that given by the model. A cost func- tion which is proportional to the mean-square error can be written as


J = Σ_{i=1}^{N} (z_i - y_i)^T D_1 (z_i - y_i)

where D_1 is a weighting matrix and i is a time index. The summation approximates a time integral. The estimate of c is then

ĉ = ARG MIN_c {J}

where ARG MIN means that vector c which minimizes the cost function J.

Linearize the calculated response y with respect to the unknown parameter vector c

y_i = y_{i0} + ∇_c y_i (c - c_0)

where y_{i0} is the nominal response calculated by using c_0, ∇_c y_i is the gradient of y_i with respect to c, and c_0 is the nominal c vector.

Substituting y_i into the expression for J and solving for the value of c which minimizes J yields

c_{k+1} = c_k + [ Σ_{i=1}^{N} (∇_c y_{ik})^T D_1 ∇_c y_{ik} ]^{-1} Σ_{i=1}^{N} (∇_c y_{ik})^T D_1 (z_i - y_{ik}).

If this relationship is applied iteratively to update the calculated nominal response and its gradient with respect to the unknown parameter vector, the minimum-response-error estimate ĉ will result. The method has been called quasi-linearization and modified Newton-Raphson. The latter seems more appropriate, since the Newton-Raphson (ref. 1) method would give

c_{k+1} = c_k - [∇_c² J_k]^{-1} ∇_c J_k

where

∇_c J_k = -2 Σ_{i=1}^{N} (∇_c y_{ik})^T D_1 (z_i - y_{ik})

∇_c² J_k = 2 Σ_{i=1}^{N} (∇_c y_{ik})^T D_1 ∇_c y_{ik} - 2 Σ_{i=1}^{N} (∇_c² y_{ik})^T D_1 (z_i - y_{ik})

The second term of ∇_c² J_k diminishes as the response error (z_i - y_{ik}) diminishes. The modified Newton-Raphson method is identical to quasi-linearization if the term is neglected.
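The modified Newton-Raphson update can be sketched for a toy case in which the gradient ∇_c y_i is available in closed form (Python; a 2-parameter straight-line response y_i = c_0 + c_1 t_i with D_1 = 1 is an illustrative stand-in, not the paper's aircraft example; because this model is linear in c, one step reaches the minimum):

```python
# Modified Newton-Raphson (quasi-linearization) update:
# c_{k+1} = c_k + [sum G_i^T D1 G_i]^{-1} sum G_i^T D1 (z_i - y_i),
# where G_i = grad_c y_i.  Here G_i = [1, t_i] and D1 = 1.
def update(c, t, z):
    g00 = g01 = g11 = r0 = r1 = 0.0
    for ti, zi in zip(t, z):
        yi = c[0] + c[1] * ti
        e = zi - yi
        g00 += 1.0; g01 += ti; g11 += ti * ti
        r0 += e; r1 += ti * e
    det = g00 * g11 - g01 * g01          # invert the 2x2 normal matrix
    dc0 = (g11 * r0 - g01 * r1) / det
    dc1 = (g00 * r1 - g01 * r0) / det
    return [c[0] + dc0, c[1] + dc1]

t = [0.0, 1.0, 2.0, 3.0]
z = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2*t, no noise
c = update([0.0, 0.0], t, z)      # one step reaches the optimum
```

For a model nonlinear in c, the same update would simply be iterated, re-evaluating y_i and ∇_c y_i at each step as the text describes.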

Maximum Conditional Likelihood Estimate

Another criterion that is often used is to select c to maximize the likelihood of the measured response when c is given

ĉ = ARG MAX_c {P(z|c)}


where

P(z|c) = [ (2π)^{N·MZ/2} |M_1|^{N/2} ]^{-1} exp[ -(1/2) Σ_{i=1}^{N} (z_i - y_i)^T M_1^{-1} (z_i - y_i) ]

The maximum conditional likelihood estimate of the unknown parameters will be the set of values of c which maximize P(z|c), if it is recognized that y is a function of c. If it is noted that the maximization of P(z|c) occurs for the value of c which minimizes the exponent, the maximum conditional likelihood estimate is the same as that which minimizes the mean-square response error, provided the weighting matrix D_1 equals M_1^{-1}.

Maximum Unconditional Likelihood (Bayesian) Estimate

The unconditional probability density function of z can be expressed as

P(z) = P(z|c) P(c)

The probability of c relates to the a priori information available for c before use is made of the measurements z.

The unconditional probability of z is then

P(z) = [ (2π)^{(N·MZ+p)/2} |M_1|^{N/2} |M_2|^{1/2} ]^{-1} exp[ -(1/2) Σ_{i=1}^{N} (z_i - y_i)^T M_1^{-1} (z_i - y_i) - (1/2) (c - c_0)^T M_2^{-1} (c - c_0) ]

where p is the dimension of c.

Again, the expression is maximized by minimizing the exponent. Setting the gradient with respect to c equal to zero and solving yields

ĉ_{k+1} = ĉ_k + [ Σ_{i=1}^{N} (∇_c y_{ik})^T M_1^{-1} ∇_c y_{ik} + M_2^{-1} ]^{-1} [ Σ_{i=1}^{N} (∇_c y_{ik})^T M_1^{-1} (z_i - y_{ik}) - M_2^{-1} (ĉ_k - c_0) ]

An identical estimate would have resulted if a weighted sum of the mean-square response error and the mean-square difference of c and its a priori value are minimized, provided the weighting matrices used equaled the appropriate inverse error covariance matrices. Consequently, the same algorithm can be used for both estimates

J_a = Σ_{i=1}^{N} (z_i - y_i)^T M_1^{-1} (z_i - y_i) + (c - c_0)^T M_2^{-1} (c - c_0)

Variance of the Estimates

A very important aspect of systems identification is the quality of the estimates. Since the estimates themselves can be considered to be random numbers, it is useful to consider their variance, or specifically their error covariance matrix. The Cramer-Rao bound (ref. 1) provides an estimate of the error covariance matrix.

E[(ĉ - c_true)(ĉ - c_true)^T] ≥ [ Σ_{i=1}^{N} (∇_c y_i)^T M_1^{-1} ∇_c y_i + M_2^{-1} ]^{-1}

If the a priori information is not used, M_2^{-1} will be the null matrix.
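Evaluating the Cramer-Rao bound can be sketched for a toy model (Python; scalar measurements with variance `var` standing in for M_1, no prior so the M_2^{-1} term is the null matrix, and the straight-line gradients [1, t_i] are illustrative assumptions):

```python
# Cramer-Rao bound sketch: the error-covariance bound is the inverse
# of the information matrix  sum_i G_i^T M1^{-1} G_i, with G_i = grad_c y_i.
def cramer_rao_2x2(grads, var):
    a = b = d = 0.0
    for g0, g1 in grads:
        a += g0 * g0 / var
        b += g0 * g1 / var
        d += g1 * g1 / var
    det = a * d - b * b
    return [[d / det, -b / det], [-b / det, a / det]]

# Illustrative model y_i = c0 + c1*t_i, so grad_c y_i = [1, t_i].
t = [0.0, 1.0, 2.0, 3.0]
cov = cramer_rao_2x2([(1.0, ti) for ti in t], var=0.01)
```

The diagonal entries of `cov` lower-bound the variances of the two parameter estimates; longer or richer input records enlarge the information matrix and shrink the bound.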

Importance of Testing Results

The importance of testing a model once estimates of the unknown parameters have been made cannot be overemphasized. The test can be to predict response measurements not included as part of the data used in obtaining the model parameters. If the fit error, J, is greater for this test than for a similar test using a model with fewer unknown parameters, it would indicate that the data base is insufficient for the more complicated model. The testing should be repeated and the number of model parameters reduced until the fit error for the test is greater for a simpler model. The testing procedure should be used even if the model format is "known," since the data base can easily lack the quantity and the quality for estimating all of the model parameters with sufficient accuracy.

Reference 4 provides an example of a model which failed a test of predicting the system's response. Figure 1 shows how the fit error varied for both the data used to determine the model parameters and that of the test, versus the number of model parameters. The lower curve shows that as additional unknown parameters are included, the fit error decreases. When the models were tested by predicting independent response measurements, however, the upper curve shows the models having more unknown parameters performed more poorly than simpler models. Fortunately, more data were available but had not been used in the interest of reducing the computational effort. When four times the data were used to determine the parameters of the same models, the results shown in figure 2 were more successful. The fit error for the predicted response tests continued to decrease as the number of unknown model parameters increased, thereby validating the more complicated models.

Unless such tests are performed, the analyst is not assured of the validity of his model. Unfortunately, reserving part of the data for testing is costly. A means of making such tests without requiring independent data is discussed in a later section of the paper.

SYSTEMS IDENTIFICATION WITH MODEL FORMAT UNKNOWN

One never knows in an absolute sense what the model format of any physical system is, because it is impossible to obtain complete isolation. Even if the model format were known with certainty, it would still be necessary to test one's results against a simpler model, a point made in an earlier section. Regardless of the cause, the requirement is the same: a number of candidate models must be compared on a basis which reflects the performance of each in achieving the model's intended use. For the purpose of this discussion, it will be assumed that the model's intended use is to predict the response of a system and that a meaningful measure of the model's performance is a weighted mean-square error.

A Criterion for Comparing Candidate Models

Let us continue the discussion of the problem stated earlier using the same notation. The weighted mean-square response error which is minimized by the minimum response error estimate was

J = Σ_{i=1}^{N} (z_i - y_i)^T D_1 (z_i - y_i)

Let us denote the weighted mean-square response error which corresponds to testing the model's performance in predicting the system's response as

J¹ = Σ_{i=1}^{N} (z¹_i - y_i)^T D_1 (z¹_i - y_i)

where z¹ is measured response data that is not part of z, which is used to determine the model parameters. For convenience, we consider the input, u, to be identical in both cases.

The criterion suggested for comparing candidate models is the expected value of J¹. If it is possible to express the expected value, E{J¹}, in terms not involving actual data for z¹, then a considerable saving in data can be made, and improved estimates will result from being able to use all available data for estimating.

Let us examine first the expected value of the fit error with respect to the data used to determine estimates of the unknown parameters.

We can express the response error as

z - y = y_true + n - y = y_true + n - y_true - ∇_c y (ĉ - c_true)
      = n - ∇_c y (ĉ - c_true)

assuming the response can be linearized with respect to the model parameters over the range in which they are in error.

The expected value of the fit error, E{J}, becomes

E{J} = E{ Σ_{i=1}^{N} (z_i - y_i)^T D_1 (z_i - y_i) }
     = E{ Σ_{i=1}^{N} [n_i - (∇_c y_i)(ĉ - c_true)]^T D_1 [n_i - (∇_c y_i)(ĉ - c_true)] }

Expanding, we get

E{J} = E{ Σ_{i=1}^{N} n_i^T D_1 n_i } - 2E{ Σ_{i=1}^{N} n_i^T D_1 (∇_c y_i)(ĉ - c_true) }
     + E{ (ĉ - c_true)^T Σ_{i=1}^{N} (∇_c y_i)^T D_1 (∇_c y_i)(ĉ - c_true) }


If a maximum likelihood estimate is used, or if the minimum mean-square response error estimate is used with a weighting equal to the measurement error covariance matrix, then we can write

D_1 = M^{-1}

ĉ = c_true + [ Σ_{i=1}^{N} (∇_c y_i)^T M^{-1} ∇_c y_i ]^{-1} Σ_{i=1}^{N} (∇_c y_i)^T M^{-1} n_i

Again, linearization is assumed, and it is noted that

z_i - y_true,i = n_i

After substituting we get

E{J} = E{ Σ_{i=1}^{N} n_i^T M^{-1} n_i } - 2E{ P^T Q^{-1} P } + E{ P^T Q^{-1} P }
     = E{ Σ_{i=1}^{N} n_i^T M^{-1} n_i } - E{ P^T Q^{-1} P }

where

P = Σ_{i=1}^{N} (∇_c y_i)^T M^{-1} n_i

Q = Σ_{i=1}^{N} (∇_c y_i)^T M^{-1} ∇_c y_i

Next, let us examine the expected fit error E{J¹} of a model used to predict response measurements, z¹, which are independent of the data z used to determine the estimates of the model.

We can again express the expected fit error as

E{J¹} = E{ Σ_{i=1}^{N} n¹_i^T D_1 n¹_i } - 2E{ Σ_{i=1}^{N} n¹_i^T D_1 (∇_c y_i)(ĉ - c_true) }
      + E{ (ĉ - c_true)^T Σ_{i=1}^{N} (∇_c y_i)^T D_1 (∇_c y_i)(ĉ - c_true) }

Note that the only difference between the above expression and that obtained earlier for E{J} is that the noise vector is n¹ instead of n. The same expression can be used for ĉ as before, since it is the estimate of c based on the data, z, that is desired.


ĉ = c_true + [ Σ_{i=1}^{N} (∇_c y_i)^T M^{-1} ∇_c y_i ]^{-1} Σ_{i=1}^{N} (∇_c y_i)^T M^{-1} n_i

Substituting the above expression for ĉ, and M^{-1} for D_1, we get

E{J¹} = E{ Σ_{i=1}^{N} n¹_i^T M^{-1} n¹_i } - 2E{ Σ_{i=1}^{N} n¹_i^T M^{-1} ∇_c y_i Q^{-1} Σ_{j=1}^{N} (∇_c y_j)^T M^{-1} n_j } + E{ P^T Q^{-1} P }

where P and Q are defined as before. Since the noise vectors n and n¹ are uncorrelated, that is

E{ n¹_i n_j^T } = 0 for all i, j

the second term is zero. Since the noise vectors n¹ and n have the same covariance matrix, M, we can write

E{ Σ_{i=1}^{N} n¹_i^T M^{-1} n¹_i } = TRACE[ E{ Σ_{i=1}^{N} n¹_i n¹_i^T } M^{-1} ] = TRACE[ N·M·M^{-1} ] = N · MZ

where N is the number of time samples and MZ is the number of measurement quantities. Also

E{ Σ_{i=1}^{N} n_i^T M^{-1} n_i } = N · MZ

We can now express E{J¹} in terms of E{J} as

E{J¹} = E{J} + 2E{ P^T Q^{-1} P }

Examining the second term,

E{ P^T Q^{-1} P } = E{ Σ_{i=1}^{N} n_i^T M^{-1} ∇_c y_i Q^{-1} Σ_{j=1}^{N} (∇_c y_j)^T M^{-1} n_j }


After taking the trace of the scalar and reordering the vectors, we get

E{ P^T Q^{-1} P } = TRACE[ Σ_{i=1}^{N} Σ_{j=1}^{N} E{ n_j n_i^T } M^{-1} ∇_c y_i Q^{-1} (∇_c y_j)^T M^{-1} ]

Because the noise is uncorrelated at unlike times, the term simplifies to

E{ P^T Q^{-1} P } = TRACE[ Σ_{i=1}^{N} ∇_c y_i Q^{-1} (∇_c y_i)^T M^{-1} ]

Finally, we have the expected fit error for the case of testing the model's prediction of the system's response as

E{J¹} = J + 2 TRACE[ Σ_{i=1}^{N} ∇_c y_i Q^{-1} (∇_c y_i)^T M^{-1} ]

Since it is available, the actual fit error, J, is used instead of its expected value. The intent of the new criterion, E{J¹}, is that it be used instead of J in determining the level of model complexity that is best.
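The new criterion is easy to compute alongside the identification algorithm. A sketch for scalar measurements (Python; the scalar variance `var` standing in for M and the 2-parameter gradients are illustrative assumptions). Note that for scalar noise the trace term collapses exactly to the number of unknown parameters p, so the criterion becomes J + 2p, an Akaike-like complexity penalty (cf. ref. 7):

```python
# New criterion: E{J1} = J + 2*TRACE[sum_i G_i Q^{-1} G_i^T M^{-1}],
# with G_i = grad_c y_i and Q = sum_i G_i^T M^{-1} G_i (2x2 here).
def criterion(J, grads, var):
    a = b = d = 0.0
    for g0, g1 in grads:
        a += g0 * g0 / var; b += g0 * g1 / var; d += g1 * g1 / var
    det = a * d - b * b
    qi = ((d / det, -b / det), (-b / det, a / det))   # Q^{-1}
    penalty = 0.0
    for g0, g1 in grads:
        # G_i Q^{-1} G_i^T / var is a scalar; its trace is itself.
        gq0 = g0 * qi[0][0] + g1 * qi[1][0]
        gq1 = g0 * qi[0][1] + g1 * qi[1][1]
        penalty += (gq0 * g0 + gq1 * g1) / var
    return J + 2.0 * penalty                          # = J + 2*p here

t = [0.0, 1.0, 2.0, 3.0]
val = criterion(1.0, [(1.0, ti) for ti in t], var=0.01)   # -> 1 + 2*2 = 5
```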

An Application of the New Modeling Criterion

The control input and response time histories of the numerical example used to demonstrate the use of the new modeling criterion are shown in figure 3. The dynamics resemble those of the lateral-directional modes of an airplane and are described in detail in reference 9. The level of noise that has been added to the calculated time histories is also indicated in figure 3. The sampling rate was 10 samples per second. The unknown parameters in the A, B, and b matrices can number as many as 10, 9, and 4, respectively. Only 16 parameters have values other than zero.

Five sets of calculated responses, to which noise was added, were analyzed using the algorithm of reference 9. The analysis was repeated, allowing a different number of parameters to be determined by the algorithm. The fit error, J, in each case was averaged for the five sets of responses and plotted as the lower curve of figure 4, as a function of the number of unknown parameters allowed to vary. As the number of unknown parameters was increased, terms were always added and never substituted. Because of this it can be argued that the fit error, J, should be monotonically decreasing as the number of unknown parameters increases. It can be seen in figure 4, however, that a point is reached beyond which J increases. The reason for the increase is that a point is reached where convergence is a problem because of the essentially redundant unknown parameters. Although some of the five cases for 23 unknown parameters showed a decrease in J, others did not. The analyst might be tempted to settle for the model having 19 unknown parameters, since its calculated response best fits the "measured response." It would be a mistake to do so; one that is often made.

In addressing the question "Which of the candidate models best predicts system response?" a test was applied. The results of the test were then compared to the modeling criterion in the following way. For each set of unknown parameters determined, the corresponding calculated response was compared to the true response to which independent noise was added. The resulting fit errors were again averaged and plotted in figure 4 as the upper curve. The test performed simulates the practice of reserving actual response data for model testing purposes only. As a


result of these tests it can be seen that a model with only 10 unknown parameters is best in terms of predicting system response. This is six parameters fewer than the "true model." That is to say, there was less error in setting six parameters to zero than the error resulting from trying to determine more than 10 unknown parameters. The number of parameters that best predict system response will increase if additional data are used or if the noise is decreased.

The purpose of the new modeling criterion is to eliminate the need of independent response measurements for comparing the model candidates, thereby allowing all of the response data to be used in determining the unknown parameters. The new criterion was calculated as part of the systems identification algorithm, and the results are shown in figure 5. The results of testing the candidate models given in the previous figure are included in figure 5 for comparison. The minimum in the curve for the new criterion occurs at 10 unknown parameters, as was the case for the previous test results. This indicates that the new criterion can be used instead of testing the model candidates against independent response data.

The Problem of Too Many Candidate Models

Although it is a great help to have a criterion for comparing candidate models, there remains a problem of an excessive number of candidate models. A simple calculation illustrates the enormous number of candidate models that result from the combinations of unknown parameters that can be used for the simple example used to illustrate the modeling criterion. The total number of possible candidate models exceeds 8 million. Even though it is possible to greatly reduce the amount of calculation effort by neglecting changes in the gradient of the response with respect to the unknown parameters, ∇_c y, solving a set of simultaneous algebraic equations would still be required for each model. Consequently, the calculation involved for 8 million candidate models becomes an economic consideration. It is estimated that testing all of the 8 million candidate models would require about one hour on a CDC-6600 computer. As the maximum number of unknown parameters increases, the number of possible candidate models rapidly becomes astronomical.
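The count can be checked directly (a sketch; the subset-counting interpretation, in which every nonempty subset of the 10 + 9 + 4 = 23 possible unknown parameters is a candidate form, is an assumption consistent with the text's figure of "exceeds 8 million"):

```python
from math import comb

# Number of candidate models: every nonempty subset of the
# 23 possible unknown parameters (10 in A, 9 in B, 4 in b).
max_params = 10 + 9 + 4
candidates = sum(comb(max_params, k) for k in range(1, max_params + 1))
print(candidates)   # 2**23 - 1 = 8388607, which indeed exceeds 8 million
```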

In practice, the analyst has enough understanding of the dynamics of the system being modeled to know to some degree which terms are primary and which are less important. It would be valuable, however, if one did not have to rely on the analyst's judgment. The problem of searching for the best candidate model therefore remains a formidable one worthy of attention.

REFERENCES

1. Balakrishnan, A. V.: Communication Theory. McGraw-Hill Book Company, c. 1968.

2. IFAC Symposium 1967 on the Problems of Identification in Automatic Control Systems (Prague, Czechoslovakia), June 12-17, 1967.

3. Cuenod, M., and Sage, A. P.: Comparison of Some Methods Used for Process Identification. IFAC Symposium 1967 on the Problems of Identification in Automatic Control Systems (Prague, Czechoslovakia), June 12-17, 1967.

4. Eykhoff, P.: Process Parameter and State Estimation. IFAC Symposium 1967 on the Problems of Identification in Automatic Control Systems (Prague, Czechoslovakia), June 12-17, 1967.

5. Taylor, Lawrence W., Jr.: Nonlinear Time-Domain Models of Human Controllers. Hawaii International Conference on System Sciences (Honolulu, Hawaii), January 29-31, 1968.

6. Taylor, Lawrence W., Jr., and Iliff, Kenneth W.: Systems Identification Using a Modified Newton-Raphson Method - A Fortran Program. NASA TN D-6734, May 1972.

7. Taylor, Lawrence W., Jr.: How Complex Should a Model Be? 1970 Joint Automatic Control Conference (Atlanta, Georgia), June 24-26, 1970.

8. Akaike, H.: Statistical Predictor Identification. Ann. Inst. Statist. Math., Vol. 22, 1970.

[Figures 1-5: plots of fit error J versus number of unknown parameters, and response time histories.]

Figure 1. An example of insufficient data/parameters. Pilot tracking case, T = 60 seconds.

Figure 2. An example of sufficient data/parameters. Pilot tracking case, T = 240 seconds.

Figure 3. Response histories for example problem. Lateral-directional case. (Units in degrees and degrees per second; time in seconds.)

Figure 4. Comparison of fit error using model and test data. Lateral-directional case.

Figure 5. Comparison of fit error using test data and new criterion. Lateral-directional case.

STOCHASTIC EXTENSION AND FUNCTIONAL RESTRICTIONS

OF ILL-POSED ESTIMATION PROBLEMS

Edoardo Mosca

Facoltà di Ingegneria, Università di Firenze

Abstract - In Theorem 1 it is shown that under mild conditions the minimum-variance smoothed estimation of a Gaussian process y(t) with covariance R_y(t,τ) from noise-corrupted measurements z(t) = y(t) + n(t), t ∈ T, with n(t) Gaussian with covariance R_n(t,τ), is equivalent to a purely deterministic optimization problem, namely, finding the element of H(R_y) that minimizes the functional

L(y|z) = ℓ(y|z) + ‖y‖²_{R_y}.

Here ⟨·,·⟩_R denotes the inner product of the reproducing kernel Hilbert space (RKHS) H(R).

Theorems 2 and 3 deal with the more general situation where y is a set of linear measurements on an unknown function w(α), α ∈ A.

The above stochastic-deterministic equivalence provides valuable insight and important consequences for modelling. An example shows how this equivalence allows one to make use of Kalman-Bucy filtering in a purely deterministic problem of smoothing.

1. Introduction

The present paper evolves from some ideas of the author (Mosca 1970 a, b, 1971 a, b, 1972 a, b, c), and is very close in spirit to some recent works by Kimeldorf and Wahba (1970, 1971) and Parzen (1971). It deals with a problem that often occurs in experimental work, for instance, in indirect-sensing experiments. Roughly speaking, the problem consists of reconstructing an unknown function or signal w from a set of linear measurements on w and from the knowledge of the class to which the signal belongs. Specifically, it is assumed that:

(a1). The unknown w is an element of a complex Hilbert space X of functions mapping their domain A into the p-fold Cartesian product C^p of the complex numbers C. The inner product of X is denoted by ⟨·,·⟩_X, and the corresponding norm by ‖·‖_X. In most applications, either X = L₂(A), where A is a Lebesgue-measurable subset of the r-fold Cartesian product R^r of the real line R, or X = l₂(A), where A denotes a subset of the set of all integers.

(a2). A linear measurement on an element w ∈ X equals the value of a continuous linear functional at w, i.e. ⟨w, x⟩_X, where, by the Riesz representation theorem (Naylor and Sell, 1971), x ∈ X is the representator of the functional.


Under the above assumptions, the problem can be stated as follows.

Problem I. Given the observations

z(t) = y(t) + n(t),  y(t) = ⟨w, x(·,t)⟩_X,  t ∈ T,  (1)

equal to the sum of an additive noise n(t) and a set of linear measurements on w, with {x(·,t) | t ∈ T} a known family of functions in X, find a suitable estimate of the unknown w ∈ X. □

With reference to Problem I some further assumptions are needed:

(a3). The observation set T is a Lebesgue-measurable subset of the q-fold Cartesian product R^q of the real line R.

(a4). The additive noise n is a sample function from a zero-mean stochastic process with covariance kernel R_n(t,τ).

Example I. Let u be the input to an n-dimensional linear system with continuous coefficients, starting from the initial state x(t₀) = x₀. By observing the system output y over T = [t₀, t₁], it is desired to reconstruct the initial state x₀ and the system input over the same interval T. The variation-of-constants formula (Brockett, 1970) gives immediately

y(t) = ⟨x₀, Φ'(t,t₀)C'(t)⟩_{E^n} + ⟨u, k(t,·)1(t−·)⟩_{L₂[t₀,t₁]},  t ∈ T,

where Φ(t,σ) is the state-transition matrix for ẋ(t) = A(t)x(t); k(t,σ) = C(t)Φ(t,σ)B(σ) is the system impulse response; 1(t) is the unit step function; E^n is the Euclidean n-space; and L₂[t₀, t₁] is the space of all real-valued square-integrable functions on [t₀, t₁]. If one defines X as the direct sum (Halmos, 1957)

X = E^n ⊕ L₂[t₀, t₁]

and w = (x₀, u),


the observed output y can be expressed in terms of the unknown w as in (5). □

If X is infinite-dimensional and T a continuous observation set, it has been shown (Mosca, 1970 a, b, 1971 a) that under fairly general conditions no linear unbiased estimate of w ∈ X exists. This means that there is no ŵ ∈ X such that ⟨ŵ, x(·,t)⟩_X = ŷ(t), t ∈ T, where ŷ(t) is the minimum-variance linear estimate of the value y(t) taken on by y at t ∈ T. Following Hadamard's terminology (Hadamard, 1952), Problem I is referred to as an ill-posed estimation problem.

The chief goal of this paper is to compare the two main approaches that have been devised to deal with the class of ill-posed problems described above. The first approach (Franklin, 1970; Turchin, Kozlov, and Malkevich, 1971), called hereafter the stochastic-extension approach, consists of reinterpreting the unknown w as a sample function from a random process with zero mean and covariance kernel R_w defined on A × A. Thus, known methods of statistical inference can be used to get a minimum-variance linear estimate of w(α), α ∈ A. The second approach (Mosca, 1971 b, 1972 a), called hereafter the functional-restriction approach, consists of constraining the unknown w to a linear subset of X of well-behaved functions on A, completely characterized by an Hermitian nonnegative-definite kernel K_x on A × A. In functional analysis this linear set of functions is denoted by H(K_x), and called the reproducing kernel Hilbert space (RKHS) with reproducing kernel (RK) K_x.

2. Stochastic Extensions

For the sake of simplicity, hereafter X is assumed to be equal to L₂(A). This makes the discussion somewhat less abstract and the notation less cumbersome. However, it should be clear that the conclusions that will be obtained are valid for more general Hilbert spaces X.

Let R_w be the covariance kernel of the zero-mean process w. The stochastic nature of w induces a randomness on y: in fact, because of (1), y turns out to be a zero-mean stochastic process with covariance kernel R_y given by (2). Eq. (2) can be rigorously justified by using the Fubini-Tonelli theorem (Royden, 1968) in conjunction with some auxiliary assumption on R_w, for instance the finiteness of its trace.


Assuming the processes w and n uncorrelated, the cross-correlation between w and z is simply the cross-correlation between w and y. Using a formula (Parzen, 1967) that gives the minimum-variance linear estimate in terms of a RKHS inner product¹, in the present instance one obtains the minimum-variance linear estimate ŵ of w.

It is of some interest to note that the image of ŵ under the transformation (1) mapping functions on A into functions on T coincides with the minimum-variance linear estimate ŷ of y. In fact,

⟨ŵ, x(·,t)⟩_X = ŷ(t),  t ∈ T.  (3)
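In a sampled (finite-dimensional) setting, the estimates above reduce to ordinary Gauss-Markov estimation: with covariance matrices R_y and R_n, the minimum-variance linear estimate of y from z = y + n is ŷ = R_y (R_y + R_n)⁻¹ z. The sketch below is a minimal illustration assumed for this presentation (not from the paper), under the simplifying assumption of diagonal covariances, where the formula acts coordinate-wise as signal-to-noise shrinkage.

```python
def minimum_variance_estimate(z, r_y, r_n):
    """Coordinate-wise Gauss-Markov estimate y_hat = R_y (R_y + R_n)^{-1} z
    for diagonal covariances: each sample of z is shrunk toward the zero
    prior mean by its signal-to-noise ratio."""
    return [ry / (ry + rn) * zk for zk, ry, rn in zip(z, r_y, r_n)]

z = [2.0, -1.0, 0.5]          # noisy observations
r_y = [4.0, 4.0, 4.0]         # prior (signal) variances
r_n = [1.0, 1.0, 4.0]         # noise variances
y_hat = minimum_variance_estimate(z, r_y, r_n)
print(y_hat)                  # [1.6, -0.8, 0.25]
```

Note that the third sample, observed with the largest noise variance, is shrunk the most.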

From now on, w and n are assumed to be Gaussian. Most of the results that follow, however, can also be reinterpreted for non-Gaussian processes under the usual provision of translating strict-sense concepts into their corresponding wide-sense versions. The following theorem, whose proof can be found in the Appendix, is the key result that allows one to relate the stochastic-extension approach and the functional-restriction approach.

Theorem 1. Let the two probability measures induced on the space of sample functions on T by the kernels R_y + R_n and R_n be equivalent. Then the minimum-variance estimate ŷ(t), t ∈ T, of y(t) based on z is with probability one the element in H(R_y) minimizing the functional

L(y|z) = ℓ(y|z) + ‖y‖²_{R_y}  (4)

over all y ∈ H(R_y) iff the two probability measures corresponding to R_y + R_n and R_n are strongly equivalent in Hájek's sense.

¹ Throughout this paper the following notational convention is adopted. Given an Hermitian nonnegative-definite kernel P, H(P) will denote the unique corresponding RKHS, ⟨·,·⟩_P the inner product of H(P), and ‖·‖_P the corresponding norm.


In (4) the functional ℓ(y|z) equals minus the natural log of the probability density functional (Radon-Nikodym derivative, or likelihood ratio) of the probability measure induced on the space of sample functions by the process z(t) = y(t) + n(t), t ∈ T, with respect to the probability measure corresponding to the process z(t) = n(t), t ∈ T. Notice also that Theorem 1 gives a necessary and sufficient condition for ŷ to be in H(R_y). This condition, namely strong equivalence, if²

R_n(t,τ) = δ(t − τ),

i.e. if n is white, is simply

∫_T R_y(t,t) dt < ∞.

3. Functional Restrictions

Let X again be an arbitrary functional Hilbert space, and Γ be the linear transformation mapping X into functions y on T according to (1). Let R[Γ] be the range of Γ. Next, let M ⊂ X be the space of all minimum-norm solutions of (1) when y runs over R[Γ]. M can also be equivalently defined as the closed subspace of X spanned by the t-indexed family of functions x(·,t), t ∈ T. Then the restriction Γ_M of Γ to M has the properties listed in the following

Lemma (Mosca 1970 a, b, 1971 a, b, 1972 a). i) The range of Γ_M coincides with the RKHS H(K_y) with RK given by

K_y(t,τ) = ⟨x(·,τ), x(·,t)⟩_X,  t, τ ∈ T.  (6)

ii) The transformation Γ_M : M → H(K_y) establishes a congruence or isometric isomorphism between M and H(K_y), i.e. ⟨w₁, w₂⟩_X = ⟨Γ_M w₁, Γ_M w₂⟩_{K_y} for all w₁, w₂ ∈ M.

² If n is white, the discussion can be made mathematically rigorous by using the notion of a generalized process (Gelfand and Vilenkin, 1964). On a more qualitative basis, one can think of H(R_n) as the pseudo-RKHS L₂(T).


iii) The inverse transformation Γ_M⁻¹ : H(K_y) → M of Γ_M is given by the preimage of y under Γ_M in the norm-topology of X.

Properties i)-iii) have a direct relation to the problem of the present paper if X is identified with L₂(A). With these premises, we are now ready to describe the functional-restriction approach through the steps that follow.

1. Choose an Hermitian nonnegative-definite kernel K_x in such a way that H(K_x) ⊂ X. Thus, as is proved in the Appendix, for any w ∈ H(K_x)

y(t) = ⟨w, x(·,t)⟩_{K_x}.  (8)

From the Lemma it follows immediately that the transformation defined by (8) establishes a congruence between H(K_x) and the RKHS H(K_y) with RK K_y(t,τ) = ⟨x(·,τ), x(·,t)⟩_{K_x}. Moreover, if y and w are related as in (8), ‖y‖_{K_y} = ‖w‖_{K_x}.

2. Among all y on the hypersphere ‖y‖_{K_y} = ρ, find the one, call it ŷ_ρ, minimizing the functional ℓ(y|z) defined by (5).

3. Increase the radius ρ until large variations of ρ produce small variations of ℓ(ŷ_ρ|z). Denote the function y obtained in this way by ŷ.

4. The preimage of ŷ under the transformation (8) is the functional-restriction estimate ŵ of w.

Notice that, by introducing a Lagrange multiplier γ, steps 2 and 3 are equivalent to minimizing the functional

F_γ(y|z) = ℓ(y|z) + γ‖y‖²_{K_y}  (13)


with suitable γ, among all y ∈ H(K_y). To this end, it has already been shown (Mosca, 1972 a) that an iterative algorithm of the steepest-descent type can be used to compute ŷ and ŵ iff the two probability measures corresponding to K_y + R_n and R_n are strongly equivalent. Finally, it is to be noticed that, regardless of the interpretation that led us to (13), the problem of minimizing the functional (13) is the classical problem of smoothing in applied mathematics.
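A discrete analogue makes the role of the multiplier γ concrete. The sketch below is an illustration assumed for this presentation, not the paper's algorithm: it minimizes Σ(y_i − z_i)² + γ Σ(y_{i+1} − y_i)² over sampled values; the normal equations (I + γL)y = z, with L the path-graph Laplacian, are tridiagonal and are solved by the Thomas algorithm.

```python
def smooth(z, gamma):
    """Solve (I + gamma*L) y = z, L the path-graph Laplacian, with the
    Thomas algorithm for tridiagonal systems."""
    n = len(z)
    if n == 1:
        return list(z)
    diag = [1.0 + gamma * (2.0 if 0 < i < n - 1 else 1.0) for i in range(n)]
    off = -gamma                      # constant sub/super-diagonal entry
    c = [0.0] * n                     # modified super-diagonal (forward sweep)
    d = [0.0] * n                     # modified right-hand side
    c[0] = off / diag[0]
    d[0] = z[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - off * c[i - 1]
        c[i] = off / m
        d[i] = (z[i] - off * d[i - 1]) / m
    y = [0.0] * n                     # back substitution
    y[-1] = d[-1]
    for i in reversed(range(n - 1)):
        y[i] = d[i] - c[i] * y[i + 1]
    return y

z = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(smooth(z, 0.0))    # gamma = 0 reproduces the data exactly
```

γ = 0 reproduces the data, and larger γ trades fidelity to the data for smoothness while preserving the sample mean.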

4. Equivalence between Stochastic Extensions and Functional Restrictions

Comparing (13) with (4), and taking into account the basic role played in both cases by the condition of strong equivalence, one can conclude that if this condition is fulfilled and K_y = R_y, then the stochastic-extension (minimum-variance) estimate equals the functional-restriction estimate. In practice, the comparison between the two approaches is even simpler, for in the stochastic-extension approach R_y and R_n are specified up to multiplicative positive constants. This means that in practical instances ŷ has to be found by minimizing

L_λ(y|z) = ℓ(y|z) + λ‖y‖²_{R_y}

instead of (4). As in (13), here λ must be estimated adaptively from the actual received data z(t), t ∈ T. L_λ(y|z) and F_γ(y|z) are now of the same form, and therefore the next theorem follows.

Theorem 2 (Equivalence between the two estimates of y). Let K_y be the RK used in the functional-restriction estimation of y, and R_y the covariance kernel used in the stochastic-extension estimation of y. Let K_y = R_y, and let the two probability measures corresponding to R_y + R_n and R_n be strongly equivalent. Then the functional-restriction estimate of y equals with probability one the stochastic-extension estimate of y.

The next theorem concerns the estimation of w.

Theorem 3 (Equivalence between the two estimates of w). Let K_x be the RK used in the functional-restriction estimation of w, and R_w the covariance kernel used in the stochastic-extension estimation of w. Let

K_x = R_w  (15)

be such that

H(K_x) = X.  (16)


Finally, let the two probability measures corresponding to R_y + R_n and R_n be strongly equivalent. Then the functional-restriction estimates equal with probability one the corresponding stochastic-extension estimates, for both y and w.  (17)

Proof. The first of (17) follows as a corollary of Theorem 1. The second of (17) can be proved by noting that (8), being a congruence, is 1-1 between H(K_x) and H(K_y), and that, ŵ being in H(K_x), ⟨ŵ, x(·,t)⟩ = ŷ(t), where the last equality was proved in (3). □

5. Conclusions

The equivalence that has been proved to exist between functional restrictions and stochastic extensions in estimation provides valuable insight and important consequences for modelling. To those more inclined to accept deterministic rather than stochastic formulations, the above results tell that the covariance kernel R_w used in the estimation of w can be chosen on the grounds of the analytic properties of the RKHS H(R_w). The opposite is evidently also true. More importantly, one can make direct use of efficient algorithms developed within the stochastic framework to find functional-restriction estimates.

Example 2 (Smoothing). Let z(t), t ∈ T = [t₀, t₁], be an observed process. The problem is to find the function y on T with absolutely continuous first derivative and with y(t₀) = ẏ(t₀) = 0, minimizing the functional

∫_T |z(t) − y(t)|² dt + γ ∫_T |ÿ(t)|² dt.

This is a completely deterministic problem of smoothing, the functional to be minimized being a compromise between the smoothness of the estimate and fidelity to the data. However, in order to make use of known algorithms, like Kalman-Bucy filtering, developed and usually applied in a purely stochastic context, we will reformulate the problem in a convenient way.

First, note that the first integral at the right-hand side of the last equation can be interpreted as the probability density functional ℓ(y|z) corresponding to the process z(t) = y(t) + n(t), t ∈ T, with y a given deterministic signal and n a zero-mean white Gaussian process with covariance δ(t − τ).


Second, consider the set Y of all signals y generated as outputs on T of the finite-dimensional linear system

ÿ(t) = u(t),  y(t₀) = ẏ(t₀) = 0,

driven by inputs u ∈ L₂(T). Y can also be seen to be the class of all functions on T with absolutely continuous first derivative, with second derivative square-integrable on T, and such that y(t₀) = ẏ(t₀) = 0. Therefore the desired y that solves the smoothing problem must be an element of Y. Each y ∈ Y has the representation

y(t) = ∫_T k(t,σ) u(σ) dσ,

where k(t,σ) = (t − σ) 1(t − σ) is the impulse response of the system. Therefore, according to part i) of the Lemma, Y is the RKHS with RK

K_y(t,τ) = ∫_{t₀}^{t∧τ} (t − σ)(τ − σ) dσ,

where ∧ denotes minimum. Moreover, according to part ii) of the Lemma, there must be a closed subspace U of L₂(T) congruent with H(K_y), i.e., for each y ∈ H(K_y) there is a unique u ∈ U such that ‖y‖_{K_y} = ‖u‖_{L₂(T)}. But, ÿ(t) being equal to u(t), it must also be ‖y‖²_{K_y} = ∫_T |ÿ(t)|² dt. Whence the conclusion: the initial problem is equivalent to finding the element y ∈ H(K_y) that minimizes the functional

L(y|z) = ℓ(y|z) + γ‖y‖²_{K_y}.


The condition of strong equivalence, Tr K_y < ∞, being fulfilled, according to Theorem 2 this can be looked at as a stochastic-extension estimation problem. For this problem we already know that the finite-dimensional linear system above, driven by white noise, generates at its output a process with covariance K_y. Therefore the smoothed estimate we are looking for can be computed by first Kalman-Bucy filtering the observations z(t), t ∈ T, as in the figure below, and then smoothing the output of the Kalman-Bucy filter (Kailath and Frost, 1968; Rhodes, 1971).

[Figure: Equivalent stochastic model — white process noise driving the linear system with x(t₀) = 0, plus additive white measurement noise; the observations z are processed by a Kalman-Bucy filter followed by a smoother.]
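The forward-filter/backward-smoother construction can be sketched numerically. The example below is a deliberately simplified analogue with assumed toy data — a scalar random-walk state model rather than the double integrator of the example — consisting of a Kalman filter forward pass followed by a Rauch-Tung-Striebel backward smoothing pass.

```python
def kalman_rts(z, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter (model x_{k+1} = x_k + w, z_k = x_k + n,
    Var w = q, Var n = r) followed by a Rauch-Tung-Striebel backward pass."""
    xf, pf = [], []                       # filtered means and variances
    x, p = x0, p0
    for zk in z:
        k = p / (p + r)                   # Kalman gain
        x = x + k * (zk - x)              # measurement update
        p = (1.0 - k) * p
        xf.append(x)
        pf.append(p)
        p = p + q                         # time update for the next step
    xs = list(xf)                         # smoothed means (backward pass)
    for i in reversed(range(len(z) - 1)):
        g = pf[i] / (pf[i] + q)           # smoother gain P_f / P_predicted
        xs[i] = xf[i] + g * (xs[i + 1] - xf[i])
    return xs

z = [1.0] * 20                            # a constant observed signal
xs = kalman_rts(z, q=0.01, r=1.0)
```

Because the smoother exploits future observations, the early estimates are pulled closer to the underlying level than the purely causal filtered estimates.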

APPENDIX

Proof of Theorem 1. The assumed equivalence between the two probability measures induced by R_y + R_n and R_n can be expressed, as Kallianpur and Oodaira (1963) showed, by the fact that R_y can be represented by the expansion

R_y(t,τ) = Σ_i λ_i φ_i(t) φ̄_i(τ),

where λ_i > 0, Σ_i λ_i² < ∞, {φ_i} is an orthonormal sequence of functions in H(R_n), and the functional series converges pointwise to R_y(t,τ) on T × T. Now complete {φ_i} to a complete orthonormal sequence {φ_i} ∪ {ψ_j} in H(R_n). Using the two previous expansions, one sees that {φ_i/(1+λ_i)^{1/2}} ∪ {ψ_j} is a complete orthonormal sequence in H(R_y + R_n). Making use of (3), we get an expansion of ŷ in this sequence,


where the second equality holds by continuity of the inner product, and the ξ_j = (1+λ_j)^{−1/2} ⟨z, φ_j⟩_{R_y+R_n} are independent, identically distributed Gaussian random variables with zero mean and unit variance.

We are now ready to show that ŷ ∈ H(R_y) with probability 1 iff R_y + R_n and R_n are strongly equivalent. The argument rests on two equivalences: the first is a consequence of the Kolmogorov 0-1 Law (Loève, 1963), whereas the second can be proved by using the positivity of the λ_i's. The resulting condition, Σ_i λ_i < ∞, has been called by Hájek (1962) strong equivalence between the probability measures corresponding to R_y + R_n and R_n.

The fact that under the strong equivalence condition ŷ minimizes (4) follows easily. □

Proof of (8). If w ∈ H(K_x), by the reproducing property of H(K_x) one has w(α) = ⟨w, K_x(·,α)⟩_{K_x}, from which (8) follows. □


REFERENCES

BROCKETT, R.W., 1970, Finite Dimensional Linear Systems (New York: Wiley).

FRANKLIN, J.N., 1970, J. Math. Analysis Applic., 31, 682-716.

GELFAND, I.M., and VILENKIN, N.Ya., 1964, Generalized Functions, Vol. 4 (New York: Academic).

HADAMARD, J., 1952, Lectures on Cauchy's Problem in Linear Partial Differential Equations (New York: Dover).

HAJEK, J., 1962, Czech. Math. J., 87, 404-444.

HALMOS, P.R., 1957, Introduction to Hilbert Space (New York: Chelsea Publishing Co.).

KAILATH, T., and FROST, P., 1968, I.E.E.E. Trans. Automatic Control, 13, 655-660.

KALLIANPUR, G., and OODAIRA, H., 1963, in Time Series Analysis, M. Rosenblatt, Ed. (New York: Wiley).

KIMELDORF, G.S., and WAHBA, G., 1970, The Annals of Math. Statistics, 41, 495-502; 1971, J. of Math. Analysis Applic., 33, 82-95.

LOEVE, M., 1963, Probability Theory (Princeton, N.J.: Van Nostrand).

MOSCA, E., 1970 a, Preprints 2nd Prague IFAC Symp. Identification and Process Parameter Estimation, Paper 1.1, June (Prague: Academia); 1970 b, Functional Analytic Foundations of System Engineering, Eng. Summer Conf. Notes, The University of Michigan, Ann Arbor, July; 1971 a, I.E.E.E. Trans. Inf. Theory, 17, 686-696; 1971 b, Proc. I.E.E.E. Decision and Control Conf., December; 1972 a, I.E.E.E. Trans. Automatic Control, 17, 459-465; 1972 b, Int. J. Systems Science, 357-374; 1972 c, I.E.E.E. Trans. Inf. Theory, 18, 481-487.

NAYLOR, A.W., and SELL, G.R., 1971, Linear Operator Theory in Engineering and Science (New York: Holt, Rinehart, and Winston).

PARZEN, E., 1967, Time Series Analysis Papers (San Francisco: Holden-Day); 1971, Proc. Fifth Princeton Conf. Inf. Sciences and Systems, March.

RHODES, I.B., 1971, I.E.E.E. Trans. Automatic Control, 16, 688-706.

ROYDEN, H.L., 1968, Real Analysis (New York: Macmillan).

TURCHIN, V.F., KOZLOV, V.P., and MALKEVICH, M.S., 1971, Soviet Physics Uspekhi (English translation), 13, 681-703.

REGRESSION OPERATOR IN INFINITE DIMENSIONAL VECTOR SPACES AND ITS APPLICATION TO SOME IDENTIFICATION PROBLEMS

Andrzej Szymanski

Institute of Mathematics

University of Lodz

POLAND

Introduction

The paper is an attempt to transfer the notion of regression and its properties, known for numerical random variables, into the theory of Banach valued random variables, and to apply that theory to some identification problems.

Assuming that the trajectories of stochastic processes realized in the system are elements of a separable Banach space, we may regard them as Banach valued random variables, and the problem of identification can be transferred into the domain of infinite dimensional probability theory.

On the basis of the generalized Radon-Nikodym theorem given by Rieffel for Banach valued random variables, the definition of the conditional expectation of a random variable with respect to any σ-field has been formulated in the paper and its existence and uniqueness have been proved. The Kolmogoroff theorem on the form of the conditional expectation has also been transferred. That makes it possible to formulate the definition of the regression operator for the mentioned random variables.

The problem of identification can be reduced to the problem of

finding the regression operator on the basis of the realizations of

stochastic processes in the input and output of the system.

Part II of the paper refers to some properties of the regression operator in a separable Hilbert space. On the basis of the approximation theorem given here, an approximative algorithm has been found for linear operators, and one example is given.


1. The Regression Operator in a Banach Space

1.1. Basic concepts and definitions.

Let us denote by (Ω, Σ, p) a probability space, where Σ is some σ-field of subsets of the set Ω and p is a normed complete measure defined on Σ. Let X be a separable Banach space over the field of real numbers.

DEFINITION 1.1. By an X-valued random variable ξ we mean a strongly measurable (with respect to Σ) mapping of the space Ω into the space X.

DEFINITION 1.2. The element of the Banach space X defined as

Eξ = ∫_Ω ξ(ω) p(dω)

is called the mean value or the expectation Eξ of the X-valued random variable ξ. The integral is meant in the sense of Bochner.

We assume moreover that all the random variables considered in this paper are Bochner integrable.

1.2. Conditional expectation of an X-valued random variable with respect to a σ-field.

DEFINITION 1.3. A Σ₁-measurable random variable E(ξ|Σ₁) which satisfies, for every A ∈ Σ₁, the relationship

∫_A E(ξ|Σ₁) dp = ∫_A ξ dp

is called the conditional expectation of the X-valued random variable ξ with respect to the σ-field Σ₁ ⊂ Σ.

The existence and uniqueness of the conditional expectation for the random variables considered here follows from the generalization of the classical Radon-Nikodym theorem for X-valued measures published by Rieffel ([5], p. 466).

Analogously to real random variables we have the following:

PROPERTY 1.1. If ξ = ξ₁ + ξ₂, then for any σ-field Σ₁ the equality E(ξ|Σ₁) = E(ξ₁|Σ₁) + E(ξ₂|Σ₁) holds.

PROPERTY 1.2. If a random variable ξ is measurable with respect to a σ-field Σ₁, then E(ξ|Σ₁) = ξ.

1.3. Conditional expectation with respect to a random variable and the concept of the regression operator.

Let ξ and η be measurable mappings of the set Ω into separable Banach spaces X₁ and X₂ respectively. Let B_i (i = 1, 2) denote the σ-field of borelian subsets of the space X_i. With the use of the mapping ξ we transpose the measure from the σ-field Σ into the σ-field B₁, accepting for every set B ∈ B₁

p_ξ(B) = p(ξ⁻¹B).

Let us define the σ-field ξ⁻¹B₁ as the family of all sets ξ⁻¹B, B ∈ B₁. It follows from the above remarks that ξ⁻¹B₁ ⊂ Σ.

DEFINITION 1.4. The conditional expectation of the random variable η with respect to the σ-field ξ⁻¹B₁ is called the conditional expectation of the random variable η with respect to the random variable ξ, and is denoted by E(η|ξ).

For the conditional expectation E(η|ξ) the following theorem, analogous to that of Kolmogoroff, is true.

THEOREM 1.1. If ξ is an X₁-valued random variable and η is an X₂-valued random variable, then there exists a borelian mapping m: X₁ → X₂ (i.e. a mapping measurable with respect to the σ-field B₁, such that m⁻¹B ∈ B₁ for B ∈ B₂) which satisfies the relationship

m[ξ(ω)] = E(η|ξ)(ω)

for almost all ω ∈ Ω, m being determined uniquely up to equivalence (in the induced measure p_ξ).

The Kolmogoroff theorem for X-valued random variables may be proved by use of the Radon-Nikodym-Rieffel theorem analogously to the classical case ([6]).

DEFINITION 1.5. A borelian mapping M: X₁ → X₂ satisfying the relationship

M[ξ(ω)] = E(η|ξ)(ω) for almost all ω ∈ Ω

is called t h e  r e g r e s s i o n  o p e r a t o r of the X₂-valued random variable η with respect to the X₁-valued random variable ξ.

The existence and uniqueness of the regression operator follows from the generalized Kolmogoroff theorem.

1.4. Identification theorem.

It is easy to verify that, for the random variables mentioned in this paper, we have the following

PROPERTY 1.3. If Banach valued random variables ζ and ξ are independent in the sense of [4], then E(ζ|ξ) = Eζ holds with probability 1.

THEOREM 1.2 (Identification theorem). Let the random variables ξ and ζ be independent, and let η be given as

η = Uξ + ζ,

where U is a borelian mapping X₁ → X₂. The regression operator of η with respect to ξ identifies the borelian mapping U iff Eζ = 0.

Proof. N e c e s s i t y. Let us consider the random variable ζ = η − Uξ. Taking its expectation and using the assumption Uξ = E(η|ξ), we obtain Eζ = 0.

S u f f i c i e n c y. Considering the conditional expectation of η with respect to ξ we have

E(η|ξ) = E(Uξ|ξ) + E(ζ|ξ).

By virtue of Property 1.3, E(ζ|ξ) = Eζ = 0 holds. Hence E(η|ξ) = Uξ holds with probability 1, which was to be proved.

1.5. Interpretation of the identification theorem in the domain of control theory.

[Fig. 1: block diagram of a system U with input ξ, additive disturbance ζ, and output η.]

Let us consider the system described by the borelian operator U.

We may observe the stochastic processes ~ and ~ in the input and out-

put of this system.

Assuming that the trajectories of the stochastic processes ξ, η and ζ

realized in the system are elements of a separable Banach space we

may regard them as Banach valued random variables and the problem of

identification can be transferred into the domain of infinite dimensional probability theory.

The identification theorem says that, in the case shown in Fig. 1, if ξ and ζ are independent and Eζ = 0, we can search for the regression operator of η with respect to ξ to find the borelian operator U.

Then due to the identification theorem the problem of identifi-

cation can be reduced to the problem of finding the regression opera-

tor on the basis of the observed realizations of stochastic processes

in the input and output of system.

The next chapter deals with the search for the regression operator in the case when the space X is a Hilbert space.

Page 86: 5th Conference on Optimization Techniques Part I

74

2. Properties of the Regression Operator in a Hilbert Space

2.1. Basic remarks.

Let H_i (i = 1, 2) denote separable Hilbert spaces. We assume that all the random variables appearing in this chapter have finite second moments.

It is easy to verify that for the above considered random variables the following Lemma holds.

LEMMA 2.1. If an H₂-valued random variable ζ is Σ₁-measurable and η satisfies the condition ∫_Ω ‖E(η|Σ₁)(ω)‖² p(dω) < ∞, then E⟨ζ, η⟩ = E⟨ζ, E(η|Σ₁)⟩.

2.2. The relationship between the regression operator in Hilbert space and the least squares method.

Let ξ be an H₁-valued random variable and η an H₂-valued random variable. We denote the scalar product in the space H₂ by ⟨·,·⟩. Denote by B_i the σ-field of borelian subsets of the space H_i (i = 1, 2). Consider the class U of operators acting from H₁ into H₂ with the following properties:

i) U are borelian operators, i.e. measurable in the sense U⁻¹(B) ∈ B₁ for B ∈ B₂;

ii) ∫_{H₁} ‖U(x)‖² p_ξ(dx) < ∞.

For the regression operator in Hilbert space the following theorem is true.

THEOREM 2.1. The functional E‖η − Uξ‖² attains its minimum in the class U iff Uξ = E(η|ξ) holds with probability 1, i.e. iff U is the regression operator M.


P r o o f. By the properties of the scalar product in Hilbert space and by the assumed condition ∫_Ω ‖E(η|ξ)(ω)‖² p(dω) < ∞ we obtain

E‖η − Uξ‖² = E‖η − E(η|ξ)‖² + E‖E(η|ξ) − Uξ‖² + 2E⟨η − E(η|ξ), E(η|ξ) − Uξ⟩.

Considering the last term: by Lemma 2.1 and by Properties 1.1 and 1.2 it equals zero. The remaining terms are non-negative, and the first does not depend on U; thus the expression attains its minimum iff the equality

Uξ = E(η|ξ)

holds with probability 1, which was to be proved.
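Theorem 2.1 can be checked numerically on a discrete toy distribution (an illustration assumed for this presentation, not from the paper): among all functions of ξ, the conditional mean E(η|ξ) attains the smallest mean-squared error.

```python
# Discrete joint distribution of (xi, eta): probabilities on a few atoms.
joint = {(0, 1.0): 0.2, (0, 3.0): 0.2, (1, 0.0): 0.3, (1, 4.0): 0.3}

def cond_mean(x):
    """E(eta | xi = x), computed from the joint distribution."""
    num = sum(p * e for (xi, e), p in joint.items() if xi == x)
    den = sum(p for (xi, e), p in joint.items() if xi == x)
    return num / den

def mse(f):
    """E |eta - f(xi)|^2 under the joint distribution."""
    return sum(p * (e - f(xi)) ** 2 for (xi, e), p in joint.items())

best = mse(cond_mean)            # the regression operator: f(0) = f(1) = 2
other = mse(lambda x: 3.0 * x)   # an arbitrary competitor
```

Here the conditional mean achieves MSE 2.8, while the competitor achieves 5.0 — the gap is exactly the E‖E(η|ξ) − Uξ‖² term in the proof above.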

2.3. Approximation theorem.

Let an H₂-valued random variable ζ be independent of the H₁-valued random variable ξ. Let us assume that the expectation of ζ equals zero. Let a random variable η be expressed as

η = Aξ + ζ,

where A is a linear operator. According to the identification theorem, the operator A is the regression operator of η with respect to the random variable ξ. By virtue of Theorem 2.1 the operator A, as the regression operator, minimizes the functional E‖η − Aξ‖² in the class 𝔘. We denote the minimum of this functional by δ²; we have

δ² = E‖η − Aξ‖².


Let {e_i^(j)}, i = 1, 2, …, denote orthonormal bases in the spaces H_j (j = 1, 2). Then we may represent the random variables ξ and η uniquely in the form

ξ = Σ_i ξ_i e_i^(1)  and  η = Σ_i η_i e_i^(2).   (2)

Let us define the random variables ξ(N) and η(N) as

ξ(N) = Σ_{i=1}^{N} ξ_i e_i^(1)  and  η(N) = Σ_{i=1}^{N} η_i e_i^(2).

With the above notation the following theorem holds.

THEOREM 2.2 (Main theorem). If the linear regression operator A is bounded and if operators A_N exist and satisfy the conditions

E‖η(N) − A_N ξ(N)‖² = min  for N = 1, 2, …,

where the minimum is taken over all finite dimensional operators A_N with matrices of order N, then

lim_{N→∞} δ_N² = δ²,

where δ_N² = E‖η(N) − A_N ξ(N)‖².

Proof. Let us denote δ_N² = E‖η(N) − A_N ξ(N)‖². By Theorem 2.1 we have δ² ≤ δ_N².

We may represent Aξ uniquely in the form

Aξ = Σ_j Σ_i a_ji ξ_i e_j^(2)

and analogously

A_N ξ(N) = Σ_{j≤N} Σ_{i≤N} a_ji^(N) ξ_i e_j^(2).

Thus we may express the errors δ² and δ_N² through the coefficients a_ji and a_ji^(N). Taking into account the fact that a_ji^(N) = 0 for i = N+1, N+2, …, and that the operator A_N minimizes the functional δ_N², we can estimate

δ_N² ≤ δ² + M_N,

where M_N = E Σ_{j≤N} ( Σ_{i>N} a_ji ξ_i )². Each inner term ( Σ_{i>N} a_ji ξ_i )² tends to zero as N → ∞, because the respective series are convergent; from this and from the fact that the operator A is bounded it follows that

M_N → 0.

Summarizing the obtained results we have

δ² ≤ δ_N² ≤ δ² + M_N,  with M_N → 0.

Thus δ_N² → δ², which was to be proved.

We may call the theorem proved above the approximation theorem, because it gives a way of approximating the linear bounded regression operator A by means of the finite dimensional operators A_N when all the conditions are satisfied. This approximation is understood in the sense of minimizing the mean square error.

In the general case of a linear bounded regression operator, the approximation theorem enables the derivation of an approximative algorithm. This makes possible the statistical estimation of the operators A_N and the use of a computer for the calculation.
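The content of the theorem can be illustrated numerically by truncating a bounded operator whose matrix entries decay rapidly. The dimensions, the diagonal operator and the sample below are illustrative assumptions, with a large finite dimension standing in for the infinite-dimensional space:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 40  # finite proxy for the infinite-dimensional space

# A bounded "diagonal" operator with rapidly decaying entries.
a = np.diag(1.0 / (1 + np.arange(M)) ** 2)

xi = rng.normal(size=(5000, M))
eta = xi @ a.T  # eta = A xi (no noise, so delta^2 = 0 here)

def delta_N(N):
    """Best mean-square error achievable with an N x N matrix acting on
    the first N coordinates of xi, measured against the full eta."""
    xN, yN = xi[:, :N], eta[:, :N]
    A_N, *_ = np.linalg.lstsq(xN, yN, rcond=None)
    pred = np.zeros_like(eta)
    pred[:, :N] = xN @ A_N
    return np.mean(np.sum((eta - pred) ** 2, axis=1))

errs = [delta_N(N) for N in (1, 5, 10, 20, 40)]
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))  # delta_N^2 decreases
assert errs[-1] < 1e-18  # full dimension recovers A exactly
```

The decreasing sequence `errs` is the empirical counterpart of δ_N² → δ².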


2.4. Approximative algorithm.

Let us assume that Eξ = 0 and Eη = 0. Otherwise we may consider the random variables ξ′ = ξ − Eξ and η′ = η − Eη.

Let us denote by Λ_ξξ(N) and Λ_ηξ(N) the covariance matrices defined in the following way:

Λ_ξξ(N) = [cov(ξ_i, ξ_j)]_{i,j ≤ N}

and

Λ_ηξ(N) = [cov(η_i, ξ_j)]_{i,j ≤ N}.

THEOREM 2.3. If det Λ_ξξ(N) ≠ 0, then the operators A_N determined by Theorem 2.2 can be expressed in the following form:

A_N = Λ_ηξ(N) (Λ_ξξ(N))⁻¹.

Proof. Let us consider the expression E‖η(N) − A_N ξ(N)‖². From the properties of the scalar product and by the above notation we may obtain

E‖η(N) − A_N ξ(N)‖² = E‖η(N)‖² + Σ_{j,i,k} a_ji^(N) a_jk^(N) cov(ξ_i, ξ_k) − 2 Σ_{j,i} a_ji^(N) cov(η_j, ξ_i).

Differentiating the above expression with respect to a_{j₀i₀}^(N) we obtain the equations

Σ_k a_{j₀k}^(N) cov(ξ_k, ξ_{i₀}) = cov(η_{j₀}, ξ_{i₀})  for i₀, j₀ = 1, 2, …, N.

Using the notation of Λ_ξξ(N) and Λ_ηξ(N) we can write

A_N Λ_ξξ(N) = Λ_ηξ(N).

Because the matrix Λ_ξξ(N) is by assumption nonsingular, we have

A_N = Λ_ηξ(N) (Λ_ξξ(N))⁻¹.

This ends the proof.
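In matrix form Theorem 2.3 is the familiar normal-equations solution, and it can be checked numerically; the operator and the sample below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4

A_true = rng.normal(size=(N, N))
# Correlated zero-mean components, so Lambda_xixi is not diagonal.
xi = rng.normal(size=(20_000, N)) @ rng.normal(size=(N, N))
eta = xi @ A_true.T  # eta = A xi, with E xi = E eta = 0

# Empirical covariance matrices (zero means assumed, as in the text).
L_xixi = xi.T @ xi / len(xi)
L_etaxi = eta.T @ xi / len(xi)

# Theorem 2.3: A_N = Lambda_etaxi . Lambda_xixi^{-1} (Lambda_xixi nonsingular).
A_N = L_etaxi @ np.linalg.inv(L_xixi)
assert np.allclose(A_N, A_true)
```

Since η is an exact linear image of ξ here, Λ_ηξ = A Λ_ξξ holds for the empirical covariances as well, and the formula recovers A exactly.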

From Theorem 2.3 the approximative algorithm follows. Determining the elements of the covariance matrices on the basis of the observed realizations requires the statistical approach, which will be considered in the next chapter.


3. Statistical Approach to the Approximation of the Linear Bounded Regression Operator

3.1. Statistical approximation theorem.

Let us denote by x_k^(i) the k-th realization of the i-th component of ξ, and analogously by y_k^(i) that of the random variable η. We can determine the estimator of the covariance cov(ξ_i, ξ_j) by the formula

côv(ξ_i, ξ_j) = (1/(n−1)) Σ_{k=1}^{n} x_k^(i) x_k^(j).

The above estimator converges stochastically to the covariance cov(ξ_i, ξ_j). If we define the matrices Λ̂_ξξ(N)^(n) and Λ̂_ηξ(N)^(n) by the formulas

Λ̂_ξξ(N)^(n) = [ (1/(n−1)) Σ_{k=1}^{n} x_k^(i) x_k^(j) ]_{i,j ≤ N}

and

Λ̂_ηξ(N)^(n) = [ (1/(n−1)) Σ_{k=1}^{n} y_k^(i) x_k^(j) ]_{i,j ≤ N},

all the elements of those matrices will converge stochastically to the elements of the matrices Λ_ξξ(N) and Λ_ηξ(N).

Let us denote Â_N^(n) = Λ̂_ηξ(N)^(n) (Λ̂_ξξ(N)^(n))⁻¹.

We raise the problem of convergence of the sequence of functionals E‖η − Â_N^(n) ξ(N)‖² to the value δ² when N → ∞ and n → ∞. With these notions the following theorem is true.

THEOREM 3.1. If the H₁-valued random variable ξ and the H₂-valued random variable ζ are independent, and if det Λ̂_ξξ(N)^(n) ≠ 0 and E‖ξ‖⁴ < ∞, then

lim_{N→∞} lim_{n→∞} E‖η − Â_N^(n) ξ(N)‖² = δ².

Proof. By Theorem 2.1, by the Hölder inequality and using the properties of the scalar product we have

E‖η − Â_N^(n) ξ(N)‖² ≤ E‖η − A_N ξ(N)‖² + E‖(A_N − Â_N^(n)) ξ(N)‖² + 2 E[ ‖η − A_N ξ(N)‖ ‖(A_N − Â_N^(n)) ξ(N)‖ ].

Let us now consider the term E‖(A_N − Â_N^(n)) ξ(N)‖². By Hölder's inequality it is bounded through the fourth moments of ξ and of the coefficient errors; using Hölder's inequality again and by theorem 6.31 of [2] we have

lim_{n→∞} E( a_ji^(N) − â_ji^(n) )⁴ = 0.

Summarizing the obtained results we have

δ² ≤ E‖η − Â_N^(n) ξ(N)‖² ≤ δ_N² + ε_n,

where ε_n → 0 as n → ∞. Thus, by the above results and by Theorem 2.2, we obtain

lim_{N→∞} lim_{n→∞} E‖η − Â_N^(n) ξ(N)‖² = δ²,

which was to be proved.
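A numerical sketch of the statistical approximation, using the sample-covariance estimator of the previous section (the operator, noise and sample sizes are illustrative assumptions; this is a single run, so the stochastic convergence is only observed, not proved):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3
A_true = np.array([[2.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.5, 3.0]])

def estimate(n):
    """Estimate A from n realizations via sample covariances,
    as in the statistical approximation theorem."""
    xi = rng.normal(size=(n, N))
    eta = xi @ A_true.T + rng.normal(size=(n, N))  # independent noise zeta
    L_xixi = xi.T @ xi / n
    L_etaxi = eta.T @ xi / n
    return L_etaxi @ np.linalg.inv(L_xixi)

err_small = np.abs(estimate(100) - A_true).max()
err_large = np.abs(estimate(100_000) - A_true).max()
assert err_large < err_small  # more realizations, better estimate
assert err_large < 0.05
```

The estimation error shrinks roughly like 1/√n, which is what drives the ε_n term in the proof to zero.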

3.2. Example.

Let us consider an example of application of the statistical approximation theorem to the identification of a dynamical system described by the differential equation

d²y/dt² + a₁ dy/dt + a₂ y = x.

Let us assume that the input x(t) is a trajectory of a stochastic process ξ(t) and that the function x(t) is square integrable on the interval [0, T]. Then x is an element of the Hilbert space L²[0, T]. Under the same assumptions concerning all the other processes ζ and η, we can apply the statistical identification theorem to the identification of the mentioned system.

All the observed functions must be represented in the form

z^(l)(t) = Σ_k a_k^(l) φ_k(t),

where the φ_k are any orthonormal functions chosen appropriately for the considered case.

The observed signal must be sampled, and the coefficients a_k^(l) for a given realization l are determined by the condition

Σ_i | z^(l)(t_i) − Σ_k a_k^(l) φ_k(t_i) |² = min,

i.e. the coefficients are determined in such a way that they minimize the above mean square error.
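The determination of the coefficients a_k^(l) is an ordinary least-squares problem on the sampled signal. A minimal sketch, assuming a particular orthonormal system (normalized cosines on [0, T]) purely for illustration:

```python
import numpy as np

# Hypothetical orthonormal system on [0, T]: normalized cosines.
T = 1.0
t = np.linspace(0.0, T, 200)  # sampling instants t_i

def phi(k, t):
    """k-th basis function, orthonormal in L^2[0, T]."""
    if k == 0:
        return np.ones_like(t) / np.sqrt(T)
    return np.sqrt(2.0 / T) * np.cos(np.pi * k * t / T)

K = 6
Phi = np.column_stack([phi(k, t) for k in range(K)])

# A signal that truly lies in the span, and the least-squares fit of a_k.
a_true = np.array([0.5, 1.0, 0.0, -0.3, 0.0, 0.2])
z = Phi @ a_true
a_hat, *_ = np.linalg.lstsq(Phi, z, rcond=None)
assert np.allclose(a_hat, a_true)
```

With enough sampling instants the design matrix has full column rank and the minimizing coefficients are unique.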

Having all the coefficients of the observed realizations, we can compute the matrices Λ̂_ξξ(N)^(n), Λ̂_ηξ(N)^(n) and Â_N^(n). Determining the number N that gives a good approximation is a problem which is not solved in this paper. After the computation of the estimator Â_N^(n) we may estimate the output signal as Â_N^(n) ξ(N). Obviously, the matrix Â_N^(n) depends on the choice of the orthonormal system.

Conclusions

The main purpose of this paper was the transfer of the notion of regression into the theory of Banach-valued random variables. This approach enables the formulation of the identification problem in the domain of infinite dimensional probability theory.

In the special case when the random variables are Hilbert-valued, we have stated that the regression operator minimizes the mean square error. This makes possible the proof of the approximation theorem in the case of the linear bounded regression operator. The statistical approximation theorem given further makes the identification of linear systems practicable.

A problem left unsolved in this paper is the approximation of the nonlinear regression operator.

It seems to the author that the general formulation of the identification problem presented in this paper should also help in solving these problems.


References

[1] Dunford N., Schwartz J. - Linear Operators, Part 1, London 1958

[2] Goldberger A.S. - Econometric Theory, John Wiley and Sons, Inc., New York - London - Sydney 1964

[3] Hille E. - Functional Analysis and Semi-groups, New York 1948

[4] Mourier E. - Les éléments aléatoires dans un espace de Banach, Annales de l'Institut Henri Poincaré, 13 (1953), pp. 161-244

[5] Rieffel M. - The Radon-Nikodym theorem for the Bochner integral, Transactions of the American Mathematical Society, 131 (1968), pp. 466-487

[6] Rényi A. - Probability Theory, Akadémiai Kiadó, Budapest 1972


AN APPROACH TO IDENTIFICATION AND OPTIMIZATION IN QUALITY CONTROL

W. Runggaldier
Facoltà di Statistica, Università di Padova (Italy)

G. Romanin Jacur
Laboratorio di Elettronica Industriale del CNR, Padova (Italy)

1. INTRODUCTION

In what follows a procedure of simultaneous identification and control is presented, applied to the problem of quality control of a production line where items are classified as either good or defective. Two different subproblems may be considered:

1) Checking the process performance, which is actually an identification problem.

2) Designing a sampling procedure, where the defective items sampled may be replaced by non-defective ones. The sampling procedure is to meet some given requirements, generally keeping the average outgoing ratio of defectives below a desired level, at the minimum sampling cost. This is actually a problem of optimal control.

1.1 The model. Taking ideas from dual control, we suggest a unified procedure for the simultaneous identification and control, as shown in Fig. 1.

[Fig. 1: block diagram — the plant (parameters p, N) feeds the sampler (frequency f); the sampler feeds back to a control-and-identification block, and the outgoing stream carries a defective ratio p(1 − f).]

The output stream of a production plant, supposed to contain a proportion p of defective items, is formally grouped into lots of N items each, with N a suitable number. A sampler, operating at frequency f, inspects n = Nf items from each lot, and replaces the d defectives found by non-defectives. The average outgoing ratio of defectives thus becomes p(1 − f). In fact, the average number of defectives still in the lot after sampling is given by

E_p(d_tot − d) = p·N − p·f·N = pN(1 − f)   (1)

(d_tot: total number of defectives in the lot).

The numbers n and d are used both to get better and better estimates of p (identification) and to update the sampling frequency f (control). The cycle repeats for each lot.

1.2 The objective. The procedure is designed in such a way that it not only identifies the value of p, but also makes us aware of any change of p, at the same time guaranteeing that the average outgoing ratio of defectives does not exceed a desired level p_d, all this at minimum total cost. We assume p changes in only one direction, namely it increases, i.e. the production process can only deteriorate.

2. IDENTIFICATION AND CONTROL IN CASE OF CONSTANT PARAMETER

For expository purposes the procedure is first presented assuming p does not change with time.

2.1 The identification. Following the ideas of Bayesian statistics, we consider p distributed according to an a-priori beta distribution with parameters α and β. Having observed d_i defective items among the n_i sampled from lot i, we obtain an a-posteriori distribution, again a beta distribution, whose parameters are

α_{i+1} = α_i + d_i,  β_{i+1} = β_i + (n_i − d_i)   (2)

(the index i refers to the lot).

Initially we take α₀ = β₀ = 1 (uniform distribution of p, equivalent to absence of information). As the variance of the beta distribution is given by

σ² = αβ / ((α + β)² (α + β + 1)),   (3)

we see that the larger the number of items sampled, the smaller σ² is, implying an increasingly accurate identification.
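The beta posterior update and the variance formula (3) can be sketched directly; the lot data below are illustrative:

```python
from fractions import Fraction

def beta_update(alpha, beta, n, d):
    """Posterior parameters after observing d defectives among n sampled."""
    return alpha + d, beta + (n - d)

def beta_mean_var(alpha, beta):
    """Mean and variance of a Beta(alpha, beta) distribution, exactly."""
    s = alpha + beta
    mean = Fraction(alpha, s)
    var = Fraction(alpha * beta, s * s * (s + 1))
    return mean, var

# Start from the uniform prior alpha = beta = 1 and feed three lots.
alpha, beta = 1, 1
for n, d in [(40, 4), (40, 3), (40, 5)]:
    alpha, beta = beta_update(alpha, beta, n, d)

mean, var = beta_mean_var(alpha, beta)
assert (alpha, beta) == (13, 109)
assert abs(float(mean) - 13 / 122) < 1e-12
```

As more items are sampled, α + β grows and the variance in (3) shrinks, which is exactly the increasingly accurate identification described above.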

2.2 The control. Let p₁ be the ratio of defectives in the next lot to be sampled. Sampling with frequency f, the average outgoing ratio of defectives becomes (see (1)) p₁(1 − f). As p is constant, E(p₁) = p, so to meet the objective (1.2) the sampling frequency f must satisfy the condition

p(1 − f) ≤ p_d   (4)

along with minimizing the cost, which here is the sampling cost, supposed to be a linear function of f, i.e. A·f (A: cost of sampling N items).

The only information we have about the value of p is its beta distribution updated so far. Denoting by m and σ its mean value and standard deviation respectively, by the Chebyshev inequality the probability of p exceeding p̂ = m + Kσ is at most 1/K². We therefore may suppose p not greater than p̂ and solve for f by substituting p̂ for p in (4), thereby satisfying our objective except for a probability with known upper bound.

For what follows, a good value for K was found to be K = 3. Then f becomes

f = 1 − p_d/p̂   (5)

(p̂ = m + 3σ).

Initially f is set equal to 1 − p_d, equivalent to p̂ = 1.
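The control law f = 1 − p_d/p̂ with the Chebyshev-style bound p̂ = m + 3σ can be sketched as follows (the helper name and the numeric values are illustrative assumptions):

```python
def control_frequency(p_d, m=None, sigma=None, K=3.0):
    """Sampling frequency f = 1 - p_d / p_hat, where p_hat = m + K*sigma
    bounds p except for probability at most 1/K^2; initially p_hat = 1."""
    p_hat = 1.0 if m is None else min(1.0, m + K * sigma)
    return max(0.0, 1.0 - p_d / p_hat)

# No information yet: f = 1 - p_d.
assert control_frequency(0.06) == 1 - 0.06
# As identification tightens the bound, f can be lowered.
f = control_frequency(0.06, m=0.10, sigma=0.01)
assert abs(f - (1 - 0.06 / 0.13)) < 1e-12
assert f < 1 - 0.06
```

As σ shrinks with accumulating observations, p̂ approaches m and the required sampling frequency, and hence the sampling cost A·f, decreases.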

Fig. 2 shows a flow chart of the procedure described so far (the dotted cases are not to be considered for the present case).

[Fig. 2: flow chart of the procedure — read p_d; initialize n_t = d_t = 0 and f = 1 − p_d; for each lot set n = d = 0, sample each item (d = d + 1 on a defective, n = n + 1 otherwise); then accumulate n_t = n_t + n, d_t = d_t + d, compute m = m(n_t, d_t) and σ = σ(n_t, d_t), and set f = 1 − p_d/p̂; the dotted cases X, XI, Y, YI, Z are filled in by the flow segments of the later figures.]


3. THE GENERAL CASE OF THE PARAMETER VARYING WITH TIME

Relaxing the condition introduced in the previous Section 2, the ratio of defectives p in the production is now allowed to vary with time. Following our objective, we then have to decide from lot to lot whether p has changed. For this purpose a testing procedure is derived.

3.1 Defining a testing procedure. Being at the beginning of a new lot, we denote again by p₁ its ratio of defectives. For a p that does not change, the beta distribution updated so far may be considered to hold. Then (see Sec. 2.2), except for a small probability, we have p < p̂; also E(p₁) = p. Therefore, if p₁ < p̂ we decide for no change in the lot; in this case, for the identification and control we proceed as for p constant. If however p₁ ≥ p̂, we decide a change has occurred; in this latter case a new identification is started and the control procedure has to be changed. To reach our decision, as the value of p₁ is not known, the testing procedure has to be combined with the sampling. This amounts to determining a value c such that if the number d of defective items sampled in the lot is less than c, we decide for the first alternative (no change). If however, while sampling, d reaches the value c, the second alternative is chosen (a change has occurred).

At this point two remarks have to be made.

Since, deciding for the first alternative, we choose the sampling frequency according to (5), which guarantees our objective (the average outgoing ratio of defectives does not exceed p_d) only if E(p₁) < p̂, we have to make sure that on the mean the testing procedure recognizes the situation E(p₁) ≥ p̂. Since, when E(p₁) ≥ p̂ and the sampling frequency is f, on the mean the number of defectives found E(d) exceeds the value Nfp̂, to ensure our objective c has to satisfy the restriction

c ≤ Nfp̂   (6)

The second remark refers to the control procedure to be used when a change is signaled. If, sampling a lot with frequency f, up to some moment a number d of defectives was found, the total number of defectives d₁ in the lot up to that point is expected to be E(d₁) = d/f. Therefore, as soon as a change is signaled (d = c ≤ Nfp̂), we must expect a number of defectives already passed through the inspection process given by (see (5), (6))

E(d₁ − d) = d/f − d = d(1 − f)/f ≤ Np̂(1 − f) = Np_d,

so that, to guarantee in any case that the average outgoing ratio of defectives does not exceed p_d from the moment a change has been signaled, we cannot tolerate other defectives to pass through the inspection process, which implies a sampling frequency f = 1 to the end of the lot.

3.2 Implementing the testing procedure. Our approach to determining the value of c follows the lines of Bayesian testing. We substitute the two composite hypotheses of 3.1 (see (7) below) by two corresponding simple ones (7'):

(7)  p₁ < p̂  against  p₁ ≥ p̂        (7')  p₁ = m  against  p₁ = p̂


(m is as before the mean value of the beta distribution updated so far).

The probability of a type-I error (to decide for the second alternative while the first one holds) is then given by (8). Similarly, for a type-II error (to decide for the first alternative while the second one holds) the probability is expressed by (9):

P_I = Σ_{d=c}^{n} C(n,d) m^d (1−m)^{n−d} = 1 − Σ_{d=0}^{c−1} C(n,d) m^d (1−m)^{n−d}   (8)

P_II = Σ_{d=0}^{c−1} C(n,d) p̂^d (1−p̂)^{n−d}   (9)

where n is the value of Nf rounded to the nearest integer, i.e. the number of items sampled from the whole lot. Introducing cost factors C_I and C_II for the type-I respectively type-II errors, and assuming equal a-priori probability for the two alternative hypotheses (7'), the cost related to the testing procedure is given by

C = ½ (C_I P_I + C_II P_II)   (10)

We want the value of c to minimize this cost. Supposing for a moment such a value of c exists and is unique, we find it by examining how C varies for two successive values of c (c and c + 1). This amounts to calculating the difference

C(c+1) − C(c) = ½ C(n,c) [ C_II p̂^c (1−p̂)^{n−c} − C_I m^c (1−m)^{n−c} ]   (11)

and examining the sign of the expression within brackets, i.e.

C_II p̂^c (1−p̂)^{n−c} − C_I m^c (1−m)^{n−c}   (12)

Increasing the value of c by one, the first term of (12) is multiplied by p̂/(1−p̂) and the second one by m/(1−m), where obviously p̂/(1−p̂) > m/(1−m). As a result, (12), and consequently (11), is a strictly increasing function of c, implying the minimum of C, if it exists, to be unique. The corresponding minimizing value of c is the one at which (12) changes sign (from negative to positive). For reasonable choices of C_I and C_II this change of sign, if any, occurs in the range c > 0, as (12) is negative for c = 0; otherwise, by the choice of C_I and C_II, sampling one by one would have been more convenient. Since (see (6)) the value of c is not allowed to exceed Nfp̂, if (12) does not change its sign in the range from 0 to Nfp̂, we set c = Nfp̂.
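The choice of c — the first sign change of (12), capped at Nfp̂ — can be sketched as follows; the numbers are illustrative, not from the paper:

```python
def optimal_c(n, m, p_hat, C_I, C_II, c_max):
    """First c at which expression (12),
    C_II * p_hat^c * (1-p_hat)^(n-c) - C_I * m^c * (1-m)^(n-c),
    changes sign from negative to positive; capped at c_max (= N f p_hat)."""
    def expr12(c):
        return (C_II * p_hat**c * (1 - p_hat)**(n - c)
                - C_I * m**c * (1 - m)**(n - c))
    for c in range(int(c_max) + 1):
        if expr12(c) >= 0:
            return c
    return int(c_max)  # no sign change in the range: set c = N f p_hat

# Illustrative numbers: n = 40 items sampled, mean m = 0.10 of the
# current beta distribution, bound p_hat = 0.16, cap c_max = N f p_hat.
c = optimal_c(n=40, m=0.10, p_hat=0.16, C_I=1.0, C_II=5.0, c_max=40 * 0.16)
assert c == 3
```

Because (12) changes sign at most once, this linear scan finds the unique minimizer of the cost C whenever one exists below the cap.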

Instead of determining the corresponding value of c at the beginning of each cycle and using it for the test, we may proceed in a recursive manner, thereby greatly simplifying the testing procedure. We start by calculating (12) for c = 0. Then, for each defective item found during the sampling, we calculate (12) recursively for the next value of c. As soon as either (12) changes sign or the number d of defectives sampled reaches the value Nfp̂, we decide for the second alternative (change), continuing the sampling with f = 1 to the end of the lot and starting a new identification. If however by the end of the lot neither of the two situations has occurred, we decide for the first alternative (no change) and proceed with identification and control as for p constant. The whole procedure with its details can be seen from the flow chart of Fig. 2, inserting in the dotted cases X and XI the flow segments of Fig. 3 (the dotted cases Y, YI, Z are not to be considered here).

When sampling the first lot, the flow segment X is left out.

[Fig. 3: flow segments for Fig. 2 — X: the testing procedure; XI: initialization for the testing procedure in the next cycle (n_t = d_t = 0 and the starting values ΔP_I = (1 − m)^{Nf}, ΔP_II = (1 − p̂)^{Nf}).]

3.3 Cost factors and double testing. If we make the wrong decision that a change has occurred, the n_t items sampled since the present identification started, and from which we derived the relative information, are no longer considered, as a new identification begins. Therefore, to reach the same stage of information as before, n_t more items have to be sampled. This implies the choice C_I = (A/N)·n_t (remember from 2.2 that A is the cost of sampling N items). C_II is chosen as the cost of having a lot (N items) whose outgoing ratio of defectives on the mean exceeds the value p_d.

At the moment a type-I error occurs, n_t might be large, implying a large value of C_I and consequently of the cost related to the test.

In order to possibly reduce this cost and also to improve the reliability of the test, the procedure described in the previous section is extended to a double testing procedure, including a second test to be performed in the lot that immediately follows the one where a change has been signaled, with the aim of checking the decision taken in the previous lot.

The modified procedure is as follows. If, during the sampling of a lot, the test leads us to decide that a change has occurred, we go on sampling with f = 1 to the end of the lot and, while a new identification is started, the old one is continued. In the following lot, sampled with the frequency f corresponding to the new identification, along with the normal test the second test, based upon the old identification, is performed.

If this second test confirms the previous decision, we definitely abandon the old identification and go on with the new one. If however the second test rejects the previous decision, only the old identification is continued, and the sampling frequency for the following lot is chosen according to it.

With the modifications just seen, the cost factor C_I for the normal test can be reduced to the value A, since in case of a wrong decision taken in a lot, previous to its rejection in the following one, we have to sample about N items.

The second test follows the lines of the normal test, determining c' such that if d ≥ c' by the end of the lot, the change previously signaled is confirmed, otherwise rejected. The value of c', however, is not obtained this time by minimizing a cost, but by setting it equal to Nfp̂', where p̂' = m' + 3σ' (m', σ' referring to the old beta distribution). This choice of c' is motivated by the following two considerations. First, the lot is sampled at a higher frequency, implying more accurate decisions. Secondly, the cost factors related to the second test are difficult to determine, being partly due to chance.

The whole procedure as described so far can be seen in more detail from the previous flow charts (Fig. 2, Fig. 3) by inserting the flow segments Y and YI of Fig. 4 in the corresponding dotted cases. (The dotted case Z is not yet to be considered.)

[Fig. 4: flow segments for Fig. 2 — Y: the double test, with c' = Nf(m' + 3σ'); YI: saving of the previous identification (n'_t = n_t, d'_t = d_t) before starting a new one.]

In Fig. 5 a diagram is shown where each segment on the horizontal axis represents a lot and on the vertical axis we plot the corresponding sampling frequency. The * indicates the point where a change signal occurs.

In the lot following the one in which the double test is performed, the sampling frequency is represented by the upper dotted line in case the change has been confirmed, otherwise by the lower one. The values of f used for this diagram are derived from the results of an actual simulation run with N = 400, p = 0.1, p_d = 0.06.

[Fig. 5: sampling frequency (vertical axis, 0 to 1.0) per lot (horizontal axis) from the simulation run.]

3.4 Minimizing total cost.

The minimum value of the cost C related to the test depends upon f, becoming larger as f becomes smaller. If we want to take into account not only the sampling cost (A·f for a lot) but also the cost C, minimizing their sum (our procedure in fact contemplates both control and identification), we note that a reduction in the value of f might not compensate the corresponding increase in the value of C. As f, given by (5), depends upon σ, which (see (3) and the considerations following it) decreases as the procedure goes on, from the moment the saving in the sampling effort (A·Δf) is less than the increase ΔC of C, we should not decrease σ further, keeping it equal to its last value, which we denote by σ_min.

The ideas just outlined are implemented as follows. First, the cost C related to the test has to be calculated; in other words, we need the values of P_I and P_II, which can be calculated recursively using the values ΔP_I and ΔP_II made already available by the flow segment X of Fig. 3. For this purpose the segment Z of Fig. 6 is inserted in the corresponding case of segment Y.

[Fig. 6: flow segment Z — recursive updates P_I = P_I + BC·ΔP_I and P_II = P_II + BC·ΔP_II. Fig. 7: segment XI modified to initialize BC, P_I and P_II.]

The symbol BC in the segment Z of Fig. 6 stands for the binomial coefficient C(Nf, d). The values BC, P_I and P_II have to be initialized, which can be done in segment XI of the flow, modified as in Fig. 7.

If in section X of the flow the test is fulfilled (d = c), we already have the values of P_I and P_II we need. If not, at the end of the lot we go through the steps of section X again, simulating a fictitious sampling, until the test is fulfilled. Entering section Y, we then calculate the value of C. (This last part is not reported in the flow, as it is not difficult to imagine how it performs.) Next we proceed to the comparison of ΔC (the difference between the value of C just calculated and the one of the previous lot) with A·Δf (Δf is the difference between the sampling frequency used in the previous lot and the present one).

The comparison is made except if a change has been signaled in the previous lot. If ΔC exceeds A·Δf, σ will be kept constant (σ = σ_min) for the lots that follow. The comparison is also not performed if σ had previously been set equal to σ_min, and likewise if we are at the end of the first two lots.

In case a change was signaled that has been confirmed, a new identification begins and the whole procedure starts anew. If the change has not been confirmed, we proceed as described above, excluding from consideration the lot where the double test is performed.

Both costs considered above depend not only upon f; they also depend upon N, considered here an input chosen in a suitable way. A suitable N means it should neither be too large, to allow a convenient updating both of the identification and of the control, nor too small, to prevent an excessive randomness in the ratio of defectives from lot to lot. We found 500 a reasonable choice for N.

4. CONCLUSION

Comparing the procedure described here with the standard procedures used in quality control, we found that we are better off: the above procedure is in fact an adaptive one, while those are not. There is however a cost to be paid for adaptivity, namely the computing we need for each cycle.

The way we approached the identification of the possibly time-varying parameter p is not the only possible one, as a regression analysis could also serve the scope. If however the variation of p shows a rather irregular pattern, a regression analysis might not be adequate, and to obtain better adaptivity a procedure such as the one described would be preferable.

Looking for possible generalizations of the above method, we remark that for each lot the quality control problem as described here may also be visualized as a particular decision process over a Markov chain, where transition probabilities are only partially known and may also vary with time. We therefore think the method may be extended to such more general problems.

As the restriction that the expected ratio of defectives in the final production should not exceed the value p_d involves the unknown parameter p, one could also think of similarly approaching stochastic programming problems with restrictions involving unknown and possibly time-varying parameters.

Acknowledgement: This work has been supported by the Consiglio Nazionale delle Ricerche, Rome (Italy).


IDENTIFICATION OF DOMAINS

Jean CEA
I.M.S.P., Parc Valrose, 06034 - NICE CEDEX

1. INTRODUCTION, EXAMPLES

In some identification or control problems, the element to be identified or controlled is an open subset of R^n or, equivalently, its boundary.

Let 𝒪 be a family of open subsets of R^n; with each open set Ω ∈ 𝒪 we associate a cost J(Ω); we thus have a function J : 𝒪 → R. The problems that arise are then the following:

PROBLEM 1: find inf_{Ω ∈ 𝒪} J(Ω).

PROBLEM 2: determine Ω_opt ∈ 𝒪 such that J(Ω_opt) ≤ J(Ω) for all Ω ∈ 𝒪.

In general, the definition of J uses a "state function" and an "observation": J(Ω) depends on y_Ω, where y_Ω is "the" solution, in a space V_Ω, of the equation

E(Ω, y) = 0.

In general, V_Ω will be a space of functions or distributions defined on Ω, and E(Ω, y) = 0 will be a partial differential equation.

If H denotes a further space and C_Ω an operator from V_Ω into H, C_Ω : V_Ω → H, we observe C_Ω y_Ω; the result of the observation will be denoted by h_d. We thus have the cost function

J(Ω) = ‖C_Ω y_Ω − h_d‖²_H.

EXAMPLE 1:

D = Ω ∪ γ ∪ Ω′   (1.1)

(Ω and Ω′ are separated by the interface γ; Γ is the outer boundary of D.) In this example the state equation is a transmission equation: k denotes a given positive number and f a given element of L²(D):

−k Δy₁ + y₁ = f  in Ω
−Δy₂ + y₂ = f  in Ω′
y₁ = y₂,  k ∂y₁/∂n = ∂y₂/∂n  on γ
y₂ = 0  on Γ.

The operator Δ is the Laplacian; from Ω and the fixed data f, k one has thus defined the functions y₁ and y₂, or better the function y_Ω whose restrictions to Ω and to Ω′ are y₁ and y₂.

If K ⊂ D, one may then consider the following cost functions:

(1.2) J(Ω) = ½ ‖y_Ω − h_d‖²_{L²(K)}  or  (1.3) J(Ω) = ½ ‖y_Ω − h_d‖²_{H¹(K)}.

If K ⊂ γ, one may consider another cost function:

(1.4) J(Ω) = ½ ‖y_Ω − h_d‖²_{L²(K)}.

A problem that falls within the scope of this example is the following: find the optimal shape Ω_opt of a dielectric of constant k such that an electric field is minimal in a given domain K.

EXAMPLE 2:

B is a ball of given center C and radius r, and D = Ω ∪ γ ∪ B. One may identify the search for Ω with that of B, or better with that of the center C of the ball B. In R² the coordinates of C will be denoted by a = (a₁, a₂); thus, in this case, the "true" variable will be a.

From a one defines y_a, the solution of:

(1.5)  −Δy + y = f  in Ω,  ∂y/∂n = 0  on ∂B,  y = 0  on Γ.

One may now define cost functions as in Example 1. In this example B could represent, for instance, the cross-section of a tube through which heat is supplied or removed.

More generally, one could let Ω depend on a ∈ R^n; one is then reduced to minimizing a function J defined on a subset of R^n (in Example 2, the set in which the center of the ball B can lie).

Note that, from the numerical point of view, for each evaluation of J one must discretize the state equation (1.5) and solve the approximate system.

This kind of problem has been studied by KOENIG and ZOLESIO (thèse de 3ème cycle, Université de NICE).

EXAMPLE 3:

This is the family of problems with a variable boundary; this kind of problem is studied, with the methods presented in this communication, by A. DERVIEUX and B. PALMERIO.

Writing Q = ∪_{0<t<T} Ω_t for the variable space-time domain with lateral boundary ∪ Γ_t, it is a matter of determining y and Γ_t, 0 < t < T, so that relations (1.6) and (1.7) hold:

(1.6)    ∂y/∂t − Δy = f         (Q)
         ∂y/∂n = h              (Γ_t)
         y(·, t=0) = y₀(·)      (Ω₀)

(1.7)    y = g                  (Γ_t)

The elements f, g, h, y₀ are given. One can transform this problem into a minimization problem: with every Q one associates the solution y_Q of (1.6) and sets

    J(Q) = ½ ∫₀ᵀ |y_Q − g|²_{L²(Γ_t)} dt

Determining y and Q such that (1.6), (1.7) are satisfied amounts to finding Q_opt for which J(Q_opt) = 0.

In what follows we are interested in open sets Ω ⊂ D, where D is given, D̄ being a compact subset of ℝⁿ. If χ_Ω denotes the characteristic function of Ω, we denote by 𝒰 the set of the functions u = χ_Ω, and by 𝒪 the corresponding family of open sets.


TOPOLOGY on 𝒪 or on 𝒰:

We denote by Ω Δ Ω̃ the symmetric difference between Ω and Ω̃. We say that Ωₙ → Ω when meas(Ω Δ Ωₙ) → 0 as n → +∞. This amounts to equipping 𝒰 with a topology of L^p(D) type, 1 ≤ p < +∞.
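This convergence is easy to check numerically: on a uniform grid, meas(Ω Δ Ωₙ) is the cell area times the number of cells where the two characteristic functions differ, i.e. the L¹ distance between them. A small illustrative sketch (the grid, the two discs and all names are our assumptions, not the paper's):

```python
import numpy as np

def sym_diff_measure(chi_a, chi_b, cell_area):
    # meas(A Δ B) on a grid: cell area times the number of cells where
    # the characteristic functions differ (the L1 distance of chi_a, chi_b)
    return cell_area * int(np.sum(chi_a != chi_b))

# Two discretized open sets in D = (0,1)^2
n = 200
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
cell = 1.0 / n**2
r2 = (x - 0.5)**2 + (y - 0.5)**2
chi1 = r2 < 0.30**2          # Omega: disc of radius 0.30
chi2 = r2 < 0.35**2          # Omega_n: disc of radius 0.35

d = sym_diff_measure(chi1, chi2, cell)
# d approximates the area of the annulus 0.30 < r < 0.35
```

On refinement of the grid, d converges to the exact measure of the symmetric difference.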

2. EXISTENCE OF A SOLUTION

In general, the existence of a solution of Problem 2 will hold if, for example, 𝒰 is compact and J continuous. It is therefore important to know families of open sets which are compact for, say, the L²(D) topology. Here is an example:

Let B₀ be a nonempty open set contained in D, and let E(θ,h,r) be the family of open sets Ω ⊂ D which contain B₀ and satisfy the following uniform cone property: for every Ω ∈ E(θ,h,r) and every x ∈ ∂Ω, there exists a cone C_x with vertex x, angle 2θ and side h such that, for every y ∈ Ω ∩ B(x,r), the cone y + C_x is contained in Ω; B(x,r) denotes the ball of centre x and radius r.


D. CHENAIS has proved the

THEOREM: E(θ,h,r) is a compact subset of L²(D).

Note that E(θ,h,r) can also be characterized by a uniform Lipschitz condition on the boundaries of the Ω ∈ E(θ,h,r).

3. EXPANSION OF J; NECESSARY CONDITION FOR OPTIMALITY

Let Ω and Ω + δΩ be two elements of 𝒪, and u and u + δu the associated characteristic functions; we are interested in functionals J which admit an expansion of the type

(3.1)    J(u + δu) = J(u) + T₁(u, δu) + T₂(u, δu)

where T₁(u,δu) (resp. T₂(u,δu)) is an infinitesimal of order 1 (resp. of order greater than 1, for example of order 2).

We assume that

(3.2)    T₁(u, δu) = S(u, u + δu) − S(u, u)

where v → S(u,v) is linear and continuous; if, for example, 𝒰 ⊂ L^p(D), 1 ≤ p < +∞, and p′ satisfies 1/p + 1/p′ = 1, then S(u,v) can be written

(3.3)    S(u,v) = ∫_D G_u(x) v(x) dx ,    G_u ∈ L^{p′}(D)

An interesting case is that where p = 1 and G_u ∈ L^∞(D).

In terms of open sets, these relations can be written in the form

(3.1)′    J(Ω + δΩ) = J(Ω) + T₁(Ω, δΩ) + T₂(Ω, δΩ)

(3.2)′    Ω̃ → S(Ω, Ω̃) is additive and continuous


(3.3)′    S(Ω, Ω̃) = ∫_{Ω̃} G_Ω(x) dx ,    G_Ω ∈ L^{p′}(D)

Note that if δΩ⁺ ⊂ Ω′, δΩ⁻ ⊂ Ω and Ω + δΩ = (Ω ∪ δΩ⁺) \ δΩ⁻, then

(3.4)    T₁(Ω, δΩ) = ∫_{δΩ⁺} G_Ω(x) dx − ∫_{δΩ⁻} G_Ω(x) dx
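Numerically, the first-order variation (3.4) is just a sum of the density G_Ω over the cells added to the domain minus a sum over the cells removed. A small sketch with an artificial density (the grid, the strips and the toy G_Ω are our assumptions for illustration only):

```python
import numpy as np

# Discrete counterpart of (3.4): the first-order variation of J is the
# integral of G over the added part of the domain minus the integral
# over the removed part.  G and the sets below are illustrative only.

n = 50
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
cell = 1.0 / n**2
G = x - 0.5                      # toy gradient density G_Omega

omega     = x < 0.5              # Omega
omega_new = x < 0.6              # Omega + delta Omega

added   = omega_new & ~omega     # delta Omega+
removed = omega & ~omega_new     # delta Omega-
T1 = cell * (G[added].sum() - G[removed].sum())
# Here G >= 0 on the added strip, so T1 >= 0: enlarging the domain where
# G is nonnegative does not decrease J to first order, consistent with (3.7).
```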

EXAMPLE 1:

In the framework of Example 1 of Section 1, choosing J defined by (1.2), introduce the "adjoint" state p_Ω = (p₁, p₂) defined by:

(3.5)    −kΔp₁ + p₁ = (y − y_d)|Ω       (Ω)
         −Δp₂ + p₂ = (y − y_d)|Ω′       (Ω′)
         p₁ = p₂ ,  k ∂p₁/∂n = ∂p₂/∂n   (Γ)
         p₂ = 0                          (∂D)

Then one shows that

(3.6)    G_Ω = (k − 1) ∇y_Ω · ∇p_Ω

with ∇v(x) = grad v(x); cf. J. CEA, A. GIOAN, J. MICHEL.


Ω ∈ 𝒪 is a critical point of J if there exists ε > 0 such that T₁(Ω, δΩ) ≥ 0 for every δΩ such that Ω + δΩ ∈ 𝒪 and meas(|δΩ|) ≤ ε, where |δΩ| = (Ω + δΩ) Δ Ω.

Note that in a Hilbert setting, when there is no constraint, the analogous condition is grad J(u) = 0.

Taking (3.1)′, (3.2)′, (3.3)′ and (3.4) into account, one proves the

PROPOSITION 3.1: Ω is a critical point of J if and only if

(3.7)    G_Ω(x) ≤ 0    for all x ∈ Ω
         G_Ω(x) ≥ 0    for all x ∈ Ω′

Naturally, if Ω is optimal then Ω is critical. Thus relations (3.7) constitute a necessary condition for the open set Ω to be a solution of Problem 2.

4. APPROXIMATION OF A CRITICAL POINT

It is based on the optimality conditions (3.7): given Ω, define T(Ω) by

    x ∈ T(Ω)  ⟺  G_Ω(x) < 0

If the set where G_Ω vanishes has measure zero, (3.7) can be written

(3.7)′    Ω = T(Ω)

One therefore has the following algorithm:

    Ω₀ given;  Ωₙ₊₁ = T(Ωₙ),  n = 0, 1, ...


Naturally, this algorithm must be modified if Ω ∈ 𝒪 does not imply T(Ω) ∈ 𝒪.

The algorithm has given excellent numerical results, convergence generally occurring after one or two iterations; however, its theoretical convergence remains to be established.
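As a sketch, the fixed-point iteration Ωₙ₊₁ = T(Ωₙ) might be implemented on a grid as follows. The density G_Ω below is a toy stand-in: in a real computation it would come from the discretized state and adjoint equations (e.g. G = (k − 1)∇y·∇p in Example 1); the grid, the toy G and all names are assumptions for illustration only.

```python
import numpy as np

# Fixed-point iteration Omega_{n+1} = T(Omega_n), T(Omega) = {x : G_Omega(x) < 0},
# on a uniform grid of D = (0,1)^2, with a toy density G_Omega.

n = 64
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))

def G_toy(chi):
    # negative inside a disc whose radius is weakly coupled to meas(Omega_n)
    r2 = (x - 0.5)**2 + (y - 0.5)**2
    return r2 - (0.09 + 0.02 * (0.3 - chi.mean()))

def T(chi):
    return G_toy(chi) < 0.0

chi = np.zeros((n, n), dtype=bool)      # Omega_0: the empty set
for it in range(20):
    chi_next = T(chi)
    if np.array_equal(chi_next, chi):   # Omega = T(Omega): critical point
        break
    chi = chi_next
```

In the paper's experiments convergence typically occurred after one or two iterations; the toy iteration above behaves similarly.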

SPECIAL CASE 1 (studied by BENDALI and DJAADANE, thèse de 3ième cycle, Université d'Alger):

The open set Ω is defined as the image of a fixed open set B ⊂ ℝⁿ under a map t → x = X(a,t) depending on a parameter a ∈ K ⊂ ℝᵖ, where K is a compact set. Thus the "true" variable is not Ω but a, and J is therefore a function of a.

To determine grad J(a), one begins by computing

    T₁(a, δa) = ∫_{Ω+δΩ} G_a(x) dx − ∫_Ω G_a(x) dx

where, in reality, G is indexed by a; in fact this relation reads

    T₁(a, δa) = ∫_B G_a(X(a+δa,t)) J(a+δa,t) dt − ∫_B G_a(X(a,t)) J(a,t) dt

where J denotes the Jacobian of the change of variables. From there, by expanding G and J, the infinitesimal being δa, one obtains the differential of J with respect to δa and hence the gradient of J at a. One can then use any gradient-type method.

SPECIAL CASE 2 (studied by KOENIG and ZOLESIO):

The family of open sets of ℝⁿ is defined as the image of a subset 𝒜 of a Banach space A under a map u satisfying suitable properties (in particular, u is injective). The "true" variable a ∈ 𝒜 can then be an element of ℝⁿ (Example 2) or an element of a function space ("free boundary" problem). In this case too, the gradient can be made explicit.


SPECIAL CASE 3:

In ℝ², for simplicity, Ω is parametrized by a function φ. Dervieux and Palmerio obtain, via the intermediate relations (3.1), (3.2), (3.3), a relation of the type

    J(φ + hψ) = J(φ) + h ∫₀¹ G_φ(t) ψ(t) dt + ...

This time h is the infinitesimal; the terms of order greater than 1 have been neglected. This relation tells in which "direction" one must move in order to decrease J; moreover, if one had t → φ(a,t), t ∈ [0,1], a ∈ K ⊂ ℝᵖ, one could easily obtain the gradient of J with respect to a. The "true" variable here is φ.

SPECIAL CASE 4:

One assumes Ω = ∪_{i∈I} ωᵢ, where the ωᵢ are "finite elements": open subsets of D, pairwise disjoint. Ω is thus parametrized by a set of indices: Ω = ∪_{i∈I_Ω} ωᵢ. Taking up the principles of the classical gradient method, and modifying them to apply to the present problem, J. CEA, A. GIOAN and J. MICHEL propose a convergent algorithm. The numerical results are good, but nonetheless inferior to those of the method of successive approximations.


BIBLIOGRAPHY

We give a very brief bibliography, limited to the special cases studied:

BERGMAN S., SCHIFFER M., Kernel functions and elliptic differential equations in mathematical physics, Academic Press, New York, 1953.

CEA J., GIOAN A., MICHEL J., Quelques résultats sur l'identification de domaines, to appear in CALCOLO, 1973.

CHENAIS D., Un résultat d'existence dans un problème d'identification de domaine, note aux C.R. Acad. Sci. Paris, t. 276 (12 February 1973).

DERVIEUX A., PALMERIO B., Identification de frontière dans le cas d'un problème de Dirichlet, note aux C.R. Acad. Sci. Paris, t. 275 (20 November 1972).

HADAMARD J., Mémoire sur le problème d'analyse relatif à l'équilibre des plaques élastiques encastrées, Œuvres de Jacques Hadamard, C.N.R.S., Paris, 1968.

KOENIG M., ZOLESIO J.P., Sur la location d'un domaine de forme donnée, thèse de 3ième cycle, I.M.S.P., Université de Nice, 13 March 1973.


THE MODELLING OF EDIBLE OIL FAT MIXTURES

J.O. Gray - J.A. Ainsley, Dept. of Electrical Engineering

U.M.I.S.T. Sackville Street, Manchester

U.K.

Introduction

This paper is concerned with a practical application of straightforward modelling

and optimisation methods to an industrial process which promises to yield very

attractive economic returns. The process consists of the large scale batch pro-

duction of edible oil fat mixtures such as margarine and shortening products.

The batch production of edible oil fat mixtures requires the mixing of quantities of

raw material to meet both economic and taste criteria. The latter is the more

difficult to quantify but it is found that the variation with temperature of the

ratio of the liquid to solid phase of the mixture has an important bearing on its

taste and tactile properties and hence on the acceptability of the product. A

measure of this ratio is obtained by dilatometry measurements at specific temperatures. The dilatometry temperature profile of a compound oil fat mixture is, how-

ever, a highly non-linear function of the behaviour of the individual constituents

and much skill and experience is required by the analyst in the determination of

constituent fractional proportions to ensure that the final compound meets the

required dilatation temperature specification. Accurate computer models of this

non-linear behaviour for a range of compounds would be a valuable aid to the design-

er of edible oil fat mixtures both in speeding the design process and reducing

wastage due to error.

This paper deals initially with the derivation from experimental data of regression

models to describe the dilatation temperature profiles of binary and tertiary inter-

active oil fat mixtures. These models are subsequently incorporated in a computer

design program to calculate mixture constituent ratios which will yield a best fit

to any specified mixture dilatation temperature profile. Economic restraints are

incorporated by setting an upper bound to the fractional proportion of any of the

constituent components. Such bounds will vary with the seasonal changes in raw

material costs and are at the discretion of the designer.

Dilatation and the oil correction factor

If a quantity of solid fat is heated, the volume of fat during heating can be

represented by the curve ABCD in Figure 1. Between point A and point B the fat is


completely solid and between C and D it is in a purely liquid condition. The

dilatation of a fat at any temperature is defined as the expansion during isothermal

melting and is represented in Figure 1 by the difference in respective volumes be-

tween the partially melted fat at point P and the subcooled liquid fat at point R.

The dilatation is usually denoted by the symbol D_T where the subscript refers to

the temperature at which the dilatation is measured. Dilatation figures are usually

related to 25 grams of fat with the volume being expressed in mm³.

The measurement of dilatation depends on a knowledge of the subcooled liquid line

CQ which is normally difficult to determine experimentally due to the inherent in-

stability of the subcooling process. CQ must therefore be obtained by extrapola-

tion of the easily measured liquid line CD. The coefficient of the oil thermal

expansion has been shown (1) to be of the form

E = k + γT

The constants k and γ have been given a range of values by various experimenters (1)

and, depending on the value used, the resulting dilatation figure can lie within a

spread of 36 dilatation units at 20°C. This spread narrows as the degree of extra-

polation decreases and, as it is impractical to determine coefficients k and γ for

every possible combination of fat and oils, some compromise figure must be deter-

mined and generally adopted in the measurements. A degree of uncertainty is thus

present in any quoted dilatation figure and this will be reflected in the expected

accuracy of any analytical model derived.
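The oil-correction step described above can be sketched as follows: choose values for k and γ, integrate dV/dT = E = k + γT down from a reference point on the measured liquid line CD to obtain the subcooled line CQ, and read the dilatation as the gap between the extrapolated liquid volume and the measured volume. All numerical values below are illustrative assumptions, not measured constants:

```python
# Sketch of the oil correction: extrapolate the liquid line CD below the
# melting range using dV/dT = k + gamma*T, then take the dilatation D_T as
# (extrapolated subcooled liquid volume) minus (measured volume of the
# partially melted fat).  The coefficient values, the reference volume and
# the measured volume are illustrative only.

k, gamma = 0.84, 0.0005            # assumed expansion coefficients

def liquid_volume(T, T_ref=50.0, V_ref=26200.0):
    # integrate dV/dT = k + gamma*T from a reference point (T_ref, V_ref)
    # assumed to lie on the measured liquid line CD
    return V_ref + k * (T - T_ref) + 0.5 * gamma * (T**2 - T_ref**2)

def dilatation(T, V_measured):
    # expansion still to come from isothermal melting at temperature T
    return liquid_volume(T) - V_measured

# a hypothetical measurement sitting 550 units below the extrapolated line
D20 = dilatation(20.0, liquid_volume(20.0) - 550.0)
```

Varying (k, γ) within the range quoted by different experimenters shifts `liquid_volume` and hence the dilatation figure, which is the spread discussed above.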

The modelling of binary mixtures

The data from six binary oil mixtures were available for modelling. The mixtures

were as follows:

1. PALM OIL + LARD

2. HARDENED FISH OIL + LARD

3. HARDENED FISH OIL + PALM OIL

4. COCO OIL + PALM OIL

5. HARDENED FISH OIL + COCO OIL

6. LARD + COCO OIL

The dilatations for each of these mixtures were measured at various compositions and

the following temperatures: 20°C, 30°C and 37°C.

A typical set of experimental results is shown in Figure 2 which demonstrates the

general non-linear form of the mixing process. To improve the symmetry of the raw

data a dimensionless dilatation d was adopted where


d = f(X_A) / (A X_A + B X_B)

Here X_A and X_B are the constituent concentrations, A and B are the respective pure component dilatations, f(X_A) is the raw data, and g(X_A) is a least squares regression model obtained from the raw data.

Polynomial forms of up to fourth order were determined using a regression analysis

computer program to obtain the optimum g(X_A) function and, for each polynomial

form, both the total and residual variances were calculated with the significance

of the model being examined by means of the F test method (2). A typical regression

result is shown in TABLE i where the F test percentage figure indicates the signific-

ance of the model. The standard error is given in terms of dilatation units when

the model is applied to the original data. The 95% confidence limits of the poly-

nomial coefficients are also listed.
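A Table-1-style fit can be sketched with ordinary least squares and the usual F statistic (regression mean square over residual mean square). The synthetic data below are illustrative, not measured; the coefficient values are borrowed from Table 1 only to generate a plausible curve:

```python
import numpy as np

# Fit polynomials to the dimensionless dilatation d(X) by least squares
# and judge the fit with an F statistic.  Data are synthetic.

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 21)                    # constituent fraction
d_true = 1.0 - 0.65*X + 0.57*X**2 + 0.13*X**3 - 0.06*X**4
d_obs = d_true + rng.normal(0.0, 0.005, X.size)  # assumed measurement noise

def fit_poly(x, y, order):
    coeffs = np.polyfit(x, y, order)             # highest power first
    resid = y - np.polyval(coeffs, x)
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean())**2).sum())
    df_reg, df_res = order, x.size - order - 1
    F = ((ss_tot - ss_res) / df_reg) / (ss_res / df_res)
    return coeffs, F

coeffs4, F4 = fit_poly(X, d_obs, 4)
```

A large F value relative to the tabulated F distribution at the chosen significance level (99% in the paper) indicates a significant model.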

TABLE 1

A typical regression result given by the F test method. Binary mixture.

Hardened fish oil / lard mixture

d20 = 1.0 - 0.65X_F + 0.57X_F² + 0.13X_F³ - 0.06X_F⁴

F-test 99%    Standard error ± 7 units

Coefficient    Limit
  1.0         ± 0.004
 -0.65        ± 0.01
  0.57        ± 0.01
  0.13        ± 0.01
 -0.06        ± 0.01

The predicted error for the best d20 models of mixtures 2, 3, 4 and 5 lay within a

band of approximately 40 dilatation units at an F test significance level of 99%.

The most ill-fitting model was that of mixture 1, where the predicted error band was

150 dilatation units. At 30°C the error band was small for all mixtures due to a

more accurate oil correction figure and the smaller absolute values of the dilata-

tions.


Modelling of tertiary mixtures

Four mixtures of oils consisting of three interactive components were modelled.

These were:

1. HARDENED FISH OIL + PALM OIL + COCO OIL

2. HARDENED FISH OIL + LARD + PALM OIL

3. HARDENED FISH OIL + LARD + COCO OIL

4. LARD + PALM OIL + COCO OIL

The measured dilatations of these mixtures were available at various concentrations

and the following temperatures: 20°C, 30°C and 37°C. In this case, as shown in

Figure 3, there are two possible ways of proceeding to formulate the model and a

range of polynomial regression models was postulated which had the two general

forms

D_T = K₀ + K₁a + K₂b + K₃c + K₄aᵅ + K₅bᵅ + K₆cᵅ

D_T = K₀ + K₁a + K₂b + K₃c + K₄δ_ac + K₅δ_bc + K₆δ_ab

where a, b, c = AX_A, BX_B, CX_C

δ_ac, δ_bc, δ_ab = d_ac(a + c), d_bc(b + c), d_ab(a + b)

α = 1, 1.5, 2.0, 2.5 ...

The predicted D20 figures for the best models of mixtures 1, 2 and 3 were within a band of 50 dilatation units at an F test significance level of 99%. The corresponding D30 and D37 figures for these mixtures were 40 and 20 dilatation units respectively. The best model for mixture 4 produced corresponding error bands of 80, 90 and 94 dilatation units. A typical regression result is shown in TABLE 2. Here a variable is declared redundant if the chance that it is redundant is in excess of 50%.

TABLE 2

Typical regression result given by F test. Tertiary mixture.

Hardened fish oil / palm oil / coco oil mixture

D20 = 1178 - 3.3FX_F - 0.96PX_P - 1.5CX_C
      + 0.19d_FP(FX_F + PX_P) + 1.65d_FC(FX_F + CX_C)
      + 0.32d_CP(CX_C + PX_P) + 0.001(FX_F)²
      + 0.002(PX_P)² - 0.0004(CX_C)²

F-test 99.9%    Standard error ± 22 dilatation units

Coefficient    Limit          Coefficient    Limit
  1178        ± 4               0.002       ± 0.00017
 -3.3         ± 0.035          -0.0004      ± 0.00018
 -0.96        ± 0.049           0.19        ± 0.039
 -1.5         ± 0.048           1.65        ± 0.037
  0.001       ± 0.00003         0.32        ± 0.059

The term -0.0004 is redundant.

Nuclear magnetic resonance spectroscopy

Nuclear magnetic resonance spectroscopy methods can be employed to give a direct

reading of liquid in a solid/liquid sample and thus, in this case, it is possible to

determine the solid fat index at any temperature and hence the dilatation figure.

Due to errors in the oil correction calculation, more accurate models will only

result if a more absolute measurement criterion such as the solid fat index is

adopted. Some preliminary measurements have been taken using this method and a

comparison with similar dilatometry measurements is given in Figure 4. Future

experimental data will be based entirely on NMRS measurements.

Synthesising optimum three component oil mixtures

A computer program based on the simplex hill climbing routine and incorporating the

best regression models of tertiary oil mixtures was devised to calculate the composi-

tion of a three component oil fat mixture which meets a required dilatation

temperature specification. The program which is written in Fortran IV requires 8K

of computer store. There are eight steps in its execution.

(i) The desired mixture dilatation values are entered.

(2) The appropriate oil mixture is chosen.

(3) Pure component dilatation values are entered.

(4) The maximum desired value of each constituent in the mixture is chosen.

This value will be determined by economic or other criteria at the disposal

of the analyst. The stoichiometric mixture relationship is assumed to be

satisfied.

(5) Starting point of search is chosen.

Data entry is now complete and the following steps are automatically undertaken by

the program:

(6) By altering each constituent by ±0.5%, six points circling the chosen starting

point P₀ are obtained as shown in Figure 5.

(7) The objective function e is obtained at each of the seven points:

    e = (E20 - S20)² + (E30 - S30)² + (E37 - S37)²

where E20, E30, E37 are the mixture dilatations at 20°C, 30°C and 37°C calculated from the regression models and S20, S30, S37 are the specified mixture dilatations.

(8) The smallest objective function is determined and this point used as an initial point in a new search. The search is terminated if the initial point has the smallest value of objective function. If the search meets a boundary then the optimum solution is that point which crosses the boundary. As shown in Figure 5 the search near the apex of the triangle is essentially constrained to two directions.

No facility exists in the simple hill climbing program used to determine whether

the minimum obtained is local or global. Instead a map of the objective function is

printed out which can be visually scanned by the operator to obtain the global mini-

mum search area.
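Steps (6)-(8) can be sketched as a constrained local search over the composition triangle. The quadratic-free "models" below are illustrative stand-ins for the fitted regression models, and all names and numbers are our assumptions:

```python
# Local search over a three-component composition (a, b, c), a + b + c = 1:
# perturb two independent fractions by +/-0.5%, keep the third from the
# stoichiometric constraint, respect per-component upper bounds, and
# descend on e = sum((E_T - S_T)^2).  The linear "models" are toy stand-ins.

def models(a, b, c):
    # toy dilatation predictions at 20, 30 and 37 deg C (not the real models)
    return (700*a + 500*b + 300*c, 260*a + 150*b + 60*c, 90*a + 30*b + 5*c)

def objective(frac, spec):
    return sum((E - S)**2 for E, S in zip(models(*frac), spec))

def search(start, spec, bounds, step=0.005):
    # steps (6)-(8): examine six neighbouring compositions, move to the
    # best strictly improving one, stop when the centre point is best
    p = start
    while True:
        a, b, _ = p
        nbrs = [(a + da, b + db, 1.0 - (a + da) - (b + db))
                for da, db in ((step, 0), (-step, 0), (0, step),
                               (0, -step), (step, -step), (-step, step))]
        feas = [q for q in nbrs
                if all(-1e-12 <= v <= hi + 1e-12 for v, hi in zip(q, bounds))]
        if not feas:
            return p
        best = min(feas, key=lambda q: objective(q, spec))
        if objective(best, spec) >= objective(p, spec):
            return p
        p = best

spec = (550.0, 200.0, 10.0)                       # specified dilatation profile
opt = search((0.4, 0.3, 0.3), spec, bounds=(0.97, 1.0, 1.0))
```

Like the program described above, this sketch only finds a local minimum; a printed map of the objective function would still be needed to locate the global search area.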

Typical computer results are shown in TABLE 3 for four oil mixtures where a

constant arbitrary dilatation profile is specified. This profile was appropriate

for the first two oil mixtures and a good design resulted.

The D37 figure specified for mixture 3 was, however, unrealistically low, as LARD + PALM + COCO mixtures are generally characterised by D37 values in the range 50 - 120 dilatation units. The attempt by the hill climbing routine to meet this figure had a subsequent deleterious effect on the D30 figure obtained. A similar situation occurs with the D30 figure of the HFO + LARD + PALM mixture.

Discussion of results

The regression methods used in the derivation of the mixture models were found to

be generally successful in dealing with the inherent non-linearities of component

interaction and, once dilatation estimate errors are eliminated, better models should

be obtained.

Possible improvements in the optimisation program include the incorporation of a

sensitivity routine to inform the operator as to which component variation has the

greatest effect on the dilatation figures and the insertion of a command inter-

ceptor. The latter routine would allow the operator to re-enter the design program

at any point, change a parameter value and exit directly to ascertain the effect of

his action on the dilatation contour. This would enable a fast iterative design to

be completed with a minimum of data entry.


As the entire program only occupies some 8k words of computer store it is suitable

for use on a small, inexpensive, dedicated computer which could also be used for on-

line control of the mixing system from the tank farm. Alternatively, the program is

suitable for use on a time shared terminal connected via a land link to a remote

central processor. The economics of the latter approach are now being investigated.

A computer program of this type will completely remove the tedium of design calcula-

tion and ensure a specified dilatation temperature profile for the final product

batch. The associated reduction in product waste due to design error will, of

course, result in significantly lower production costing over a wide range of

product mixtures.

Conclusions

The oil correction calculation gives rise to a large error in dilatation readings,

particularly at low temperatures, and this error is reflected in the derived

regression models. A more absolute measurement technique than normal dilatometry

methods is thus necessary if accurate models are required. The optimisation

program developed for synthesising oil mixtures is simple in concept, easy to use

and gives good results provided a realistic dilatation contour is specified.

References

(1) Boekenoogen, H.A. Analysis and Characterisation of Oils, Fats and Fat Products, 1966, Vol. 1 (Chichester: John Wiley - Interscience Publications).

(2) Perry, J.H. Chemical Engineers' Handbook, 1963, 4th Edn. (New York: McGraw-Hill Book Co.).

FIGURE 1

FIGURE 2


Magnitude of Non Linearity of Three Component Model

FIGURE 3

Comparison of dimensionless dilatations (d20) obtained from dilatometry (•) and NMR (x): palm oil / lard mixture and hardened fish oil / lard mixture.

FIGURE 4

FIGURE 5

ARBITRARY SPECIFIED MIXTURE DILATATION

D20 = 550;  D30 = 200;  D37 = 10

TABLE 3

MIXTURE   Assumed component      Maximum desired   Optimum calculated   Calculated        Dilatation errors
          dilatations            fractional        fractional           dilatations       (units)
          D20    D30    D37      composition       composition          D20   D30   D37   D20   D30   D37

HFO       685    203      8      0.97  H           0.8303  H            552   196     9     2    -4    -1
PALM      535    251    129      0.5   P           0.1211  P
COCO      748     24     10      0.2   C           0.0485  C

HFO       658    203      8      0.97  H           0.7470  H            565   207    12    15     7     2
LARD      600    172     81      1.0   L           0.1852  L
COCO      748     24     10      0.2   C           0.0688  C

LARD      790    306    169      1.0   L           0.4523  L            567   166    76    17   -34    66
PALM      625    225     91      1.0   P           0.4147  P
COCO      748     24     10      0.2   C           0.1330  C

HFO       680    193      0      0.95  H           0.8104  H            586   151    32    36   -49    22
LARD      695    266    141      1.0   L           0.0090  L
PALM      525    225     91      1.0   P           0.1806  P


FREE BOUNDARY PROBLEMS AND IMPULSE CONTROL

J.L. LIONS

University of PARIS Vl

and

I.R.I.A., Rocquencourt

INTRODUCTION

Free boundary problems classically arise in Physics, one of the simplest examples being the melting of ice. We do not intend here to review this family of problems, which are very numerous and have been studied by a large number of different methods. We only want to give brief indications on the possible use of the so-called variational inequalities (V.I.) in these problems, the interest being there to connect the free boundary problems and the optimization theory; this is the subject of Section 1 below.

We consider next impulse control problems, as introduced in A. Bensoussan and the Author [3] 1).

One can show that the optimal cost function is characterized by the solution of a new free boundary problem. Mathematical analogies with problems solved in Section 1 lead us to introduce a new tool, which (somewhat) extends the V.I.: the quasi-variational inequalities (Q.V.I.), as introduced in [2] by A. Bensoussan, M. Goursat and the Author for the stationary case and in [3] 2) by A. Bensoussan and the Author for the evolution case. This is the object of Section 2.

Complete reports on the many problems which arise along these lines will be given in [3] 3), 4), 5) and [9]. Other problems directly related to this note are studied in [11].

The results briefly presented in Section 2 extend the results obtained by various authors on the so-called s - S policy. Let us quote here the works of Scarf [12] and Veinott [14] (cf. also the bibliographies of these works).

The methods introduced in this short note show the strong connections which exist between free boundary problems in Physics, problems of management and techniques of optimization.


1. FREE BOUNDARY PROBLEMS AND V.I.

1.1 An example.

Let O be a bounded open set of ℝⁿ (1), with smooth boundary Γ. Let A be a partial differential operator given in O by

(1.1)    Aφ = − Σ_{i,j=1}^{n} ∂/∂xᵢ ( a_{ij}(x) ∂φ/∂x_j ) + Σ_{i=1}^{n} aᵢ(x) ∂φ/∂xᵢ + a₀(x)φ

where a_{ij}, aᵢ, a₀ ∈ L^∞(O). We suppose that A is elliptic in the following sense: if φ, ψ ∈ H¹(O) (2) and if we define

(1.2)    a(φ,ψ) = Σ_{i,j=1}^{n} ∫_O a_{ij}(x) ∂φ/∂x_j ∂ψ/∂xᵢ dx + Σ_{i=1}^{n} ∫_O aᵢ ∂φ/∂xᵢ ψ dx + ∫_O a₀ φψ dx

then

(1.3)    a(φ,φ) ≥ α ‖φ‖²_{H¹(O)} ,    α > 0.

The problem we want to consider is the following: we look for a function u in O such that

(1.4)    Au − f ≤ 0    in O,

(1.5)    u − ψ ≤ 0    in O,

(1.6)    (Au − f)(u − ψ) = 0    in O,

(1.7)    ∂u/∂ν = 0    on Γ (3)

where f, ψ are given in O, and where ∂/∂ν denotes the conormal derivative associated with A.

(1) In the physical applications, n = 1, 2, 3. In the problems of impulse control that we study in Section 2, the dimension n can be very large.

(2) H¹(O) = {φ | φ, ∂φ/∂x₁, ..., ∂φ/∂x_n ∈ L²(O)}, φ being real valued; ‖φ‖²_{H¹(O)} = Σᵢ ∫_O (∂φ/∂xᵢ)² dx + ∫_O φ² dx.

(3) Actually this boundary condition should be dropped at boundary points (if any) where u = ψ.


Of course O is divided in two regions where respectively Au = f and u = ψ, with (1.4) and (1.5) being satisfied everywhere, these regions not being given; the interface between these two regions is a free surface.

We now transform the problem (1.4)...(1.7) into a V.I. One can check that it is equivalent to finding u satisfying

(1.8)    a(u, v−u) ≥ (f, v−u)    ∀ v ≤ ψ ,    u ≤ ψ.

The problem (1.8) is what is called a variational inequality (V.I.) following [13] and [10]; it follows from these works that under the hypothesis (1.3), the problem (1.8) admits a unique solution.
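Once discretized, a problem of the form (1.8) can be solved by projected Gauss-Seidel (PSOR): sweep the usual Gauss-Seidel update for Au = f and project each value onto the constraint u ≤ ψ. A minimal 1-D sketch with A = −d²/dx² + I, homogeneous Dirichlet ends and an illustrative obstacle (the data are our assumptions, not from the text):

```python
import numpy as np

# Projected Gauss-Seidel for a discretized V.I. of type (1.8):
# -u'' + u = f where u < psi, with u <= psi and (Au - f)(u - psi) = 0.
# Grid, source and obstacle are illustrative.

n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.full(n, 10.0)                 # constant source
psi = 0.05 + 0.1 * (x - 0.5)**2      # upper obstacle

u = np.zeros(n)                      # u(0) = u(1) = 0
for sweep in range(500):
    for i in range(1, n - 1):
        # Gauss-Seidel value for (-u'' + u = f)_i, then project onto u <= psi
        gs = (f[i] + (u[i-1] + u[i+1]) / h**2) / (2.0 / h**2 + 1.0)
        u[i] = min(gs, psi[i])

# Complementarity check: Au - f at interior nodes, and the contact set
residual = -(u[2:] - 2*u[1:-1] + u[:-2]) / h**2 + u[1:-1] - f[1:-1]
contact = np.isclose(u[1:-1], psi[1:-1])   # region where u = psi
```

Where u < ψ the residual vanishes (Au = f), and on the contact set it is nonpositive (Au − f ≤ 0), which is the discrete analogue of (1.4)-(1.6).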

1.2 Various remarks.

Remark 1.1.

A large number of problems in Physics and in Mechanics lead to problems of this type, in stationary or non-stationary cases; in the latter case, one has to use the tool of V.I. of evolution, introduced in [10]. Examples and variants arising in elasto-plasticity, in non-Newtonian fluids, etc., are studied in the book [6].

Remark 1.2.

In the problems alluded to in Remark 1.1, one obtains a V.I. on the "physical" unknown function u. A very important step forward in the theory is due to C. Baiocchi [1], who showed that classical free boundary problems arising in infiltration theory can be reduced - on a new "non-physical" unknown function, say u - to a V.I. on u. This idea, properly used, permits the solution of a number of other free boundary problems; cf. [4], [5], [7].

Remark 1.3.

For the problems of V.I., numerical methods are known and numerically implemented; a report is given in the book of Glowinski, Trémolières and the Author [8].


II. IMPULSE CONTROL PROBLEMS.

2.1. Formulation of the problem.

We give here a very formal presentation; for precise statements we refer to [3], 1) and 3). We explain the problem on an example. We assume that we have, at time t, an amount x = {x₁, ..., x_n} of goods (1).

We suppose that we know the cumulative demand D(t,s) between t and s > t; D(t,s) can be deterministic or stochastic. A policy V_xt is a double sequence:

(2.1)    V_xt = { θ¹_xt, θ²_xt, θ³_xt, ... ; ξ¹_xt, ξ²_xt, ξ³_xt, ... }

where t ≤ θ¹_xt ≤ θ²_xt ≤ θ³_xt ≤ ... are the instants where we place orders, and where ξ¹_xt, ξ²_xt, ... ∈ ℝⁿ are the amounts of goods we order at times θ¹_xt, θ²_xt, ...; this double sequence is deterministic or stochastic.

We denote by T the horizon; it can be finite or infinite; we suppose here, to fix ideas, that T < ∞.

The cost function is defined as follows:

(2.2)    J_xt(V_xt) = E { ∫_t^T f(y_xt(s), s) ds + N_xt }

(where one should drop the expectation if the problem is deterministic), where:

i) y_xt(s) = state at time s = amount of goods at our disposal at time s (this is uniquely determined if D(t,s) is given and if V_xt is chosen);

ii) f(x,s) = holding cost per unit of time if x > 0, = outage cost per unit of time if x < 0;

iii) N_xt = number of orders we place during the period [t, T]; we can assume here that we have to pay 1 for placing an order.

(1) Therefore the dimension in this problem is equal to the number of items and consequently n can be very large.


This is an impulse control problem. We want to minimize (2.2) with respect to "all" possible policies (2.1). Let us define

(2.3)    u(x,t) = inf_{V_xt} J_xt(V_xt)

What we want to show (for precise statements and proofs, we refer to the works of A. Bensoussan and the Author referred to in the Bibliography) is that u(x,t) can be characterized by the solution of a new free boundary problem, and that this free boundary problem can be studied by using the technique of quasi-variational inequalities (Q.V.I.).

2.2. Free boundary problem.

One can show that u(x,t) satisfies the following set of inequalities and equalities (1):

(2.4)    −∂u/∂t + Au − f ≤ 0 ,    x ∈ ℝⁿ ,  t ∈ [t₀, T] (2)

(2.5)    u − M(u) ≤ 0 ,

(2.6)    (−∂u/∂t + Au − f)(u − M(u)) = 0 ,

(2.7)    u(x,T) = 0 ,

where in (2.5)

(2.8)    M(u)(x) = 1 + inf_ξ u(x+ξ) ,    ξ = {ξᵢ} ,  ξᵢ ≥ 0.

Of course (2.7) obviously follows from the definition. The proof of (2.5) is immediate: if we consider the particular policy Ṽ_xt which consists in ordering ξ (ξᵢ ≥ 0) at time t, then:

    u(x,t) = inf J_xt(V_xt) ≤ inf J_xt(Ṽ_xt) = 1 + inf_ξ J_{x+ξ,t}(V_{x+ξ,t})

hence (2.5) follows. The proofs of (2.4) and (2.6) are much more complicated.

(1) Of course one has to make here precise hypotheses on the nature of the demand D(t,s); cf. A. Bensoussan and the Author, loc. cit.

(2) The operator A is determined by D. It is a second order (resp. first order) operator in the stochastic (resp. deterministic) case.


Let us observe now that (2.4)…(2.7) is indeed a free boundary problem. By virtue of (2.6) there are two regions in ℝⁿ × [t_0, T]; in one region, say S, one has:

(2.9) u = M(u),

and in the complementary region C one has:

(2.10) −∂u/∂t + Au − f = 0.

The interface between S and C is a free boundary.

Some remarks are now in order :

Remark 2.1.

The non linear operator u → M(u) defined by (2.8) is of non local type, since to compute M(u) at a point x it is not sufficient to know u in a neighborhood of x.

Remark 2.2.

Many other examples of operators of the type of M as given by (2.8) arise in applications. Cf. A. Bensoussan and the Author, loc. cit.

Remark 2.3.

From the solution u one can derive the best policy (which exists); this is an extension of the "s - S policy", introduced in [12][14] (see also the Bibliographies of these works).

Remark 2.4.

In Section 2.3 below, we briefly consider the stationary case, and moreover in a bounded domain O of ℝⁿ with smooth boundary Γ (1); the problem is then to find u satisfying

(2.11) Au − f ≤ 0 in O,

(2.12) u − M(u) ≤ 0,

(2.13) (Au − f)(u − M(u)) = 0,

and

(2.14) ∂u/∂n = 0 on Γ (2).

We assume that A is given by (1.1) and satisfies (1.3).

(1) Cf. [3] 1), 3) for the interpretation of the problem. The introduction of a bounded domain O eliminates some technical difficulties.

(2) Actually this boundary condition should be dropped at boundary points where u = M(u).


2.3. Quasi variational inequalities (Q.V.I.).

The analogy between (2.11)…(2.14) and (1.4)…(1.7) is clear; the difference is that ψ, which is given in (1.5), is now "replaced" by M(u), which is not given.

One can check that (2.11)…(2.14) is equivalent to finding u such that

(2.15) a(u, v−u) ≥ (f, v−u) ∀ v ≤ M(u), u ≤ M(u).

The problem (2.15) is called a quasi variational inequality (Q.V.I.). One can show the existence and uniqueness of a maximal solution, which can be obtained as the limit of solutions of V.I.

Let us introduce u^0, the solution of the Neumann problem:

(2.16) a(u^0, v) = (f, v) ∀ v ∈ H¹(O),

and let us define u^n, n ≥ 1, as the solution of the V.I.:

(2.17) a(u^n, v − u^n) ≥ (f, v − u^n) ∀ v ≤ M(u^{n−1}), u^n ≤ M(u^{n−1}).

One can show that

(2.18) u^0 ≥ u^1 ≥ … ≥ u^n ≥ …

and obtain u, the solution of (2.15), as the limit (in L^p(O) ∀ p finite, and in H¹(O) weakly) of u^n as n → ∞.

Remark 2.5.

Algorithm (2.17), used jointly with numerical techniques for solving V.I., gives numerical methods for solving Q.V.I. Cf. [2][9].
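In a simple discrete setting the scheme (2.16)-(2.17) can be sketched as follows. Everything below is an illustrative assumption, not taken from the text: a 1-D grid on [0,1], A = −d²/dx² + I with Neumann ends (so that the Neumann problem (2.16) is well posed), a hypothetical positive f, and the obstacle M(u)(x) = 1 + inf_{ξ≥0} u(x+ξ) as in (2.8); each V.I. (2.17) is solved by projected Gauss-Seidel.

```python
import math

def solve_vi(f, psi, h, sweeps=1500):
    """Projected Gauss-Seidel for the discrete V.I.:
    u <= psi, A u <= f, (A u - f)(u - psi) = 0,
    with A = -d2/dx2 + I (one-sided Neumann ends), an M-matrix."""
    n, h2 = len(f), h * h
    u = [min(0.0, p) for p in psi]          # feasible starting point
    for _ in range(sweeps):
        for i in range(n):
            if i == 0:
                v = (f[0] * h2 + u[1]) / (1.0 + h2)
            elif i == n - 1:
                v = (f[i] * h2 + u[i - 1]) / (1.0 + h2)
            else:
                v = (f[i] * h2 + u[i - 1] + u[i + 1]) / (2.0 + h2)
            u[i] = min(v, psi[i])           # projection under the obstacle
    return u

def M_op(u):
    """Obstacle (2.8): M(u)(x) = 1 + inf over xi >= 0 of u(x + xi)."""
    out, m = [0.0] * len(u), float("inf")
    for i in range(len(u) - 1, -1, -1):
        m = min(m, u[i])
        out[i] = 1.0 + m
    return out

N = 30
h = 1.0 / (N - 1)
f = [2.0 + math.sin(3.0 * i * h) for i in range(N)]   # hypothetical data, f > 0

u0 = solve_vi(f, [float("inf")] * N, h)   # u^0: unconstrained Neumann problem (2.16)
iterates = [u0]
for _ in range(15):                       # u^n: V.I. with obstacle M(u^{n-1}), cf. (2.17)
    iterates.append(solve_vi(f, M_op(iterates[-1]), h))
u = iterates[-1]
```

The decreasing property (2.18) and the constraint u ≤ M(u) can then be checked numerically on the iterates.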

Remark 2.6.

For numerical purposes, one is of course faced with the problem of dimensionality. We shall report elsewhere on the use of decomposition methods and sub-optimal policies.


BIBLIOGRAPHY

[1] C. Baiocchi, C.R. Acad. Sc. Paris, 273 (1971), 1215-1218.

[2] A. Bensoussan, M. Goursat and J.L. Lions, C.R. Acad. Sc. Paris, séance du 2 avril 1973.

[3] A. Bensoussan and J.L. Lions,
 1) C.R. Acad. Sc. Paris, séance du 2 avril 1973,
 2) C.R. Acad. Sc. Paris, séance du 2 avril 1973,
 3) Book in preparation,
 4) Work to the memory of I. Petrowsky, Usp. Mat. Nauk, 1974,
 5) Int. Journal of Appl. Math. and Optimization.

[4] H. Brézis and G. Duvaut, C.R. Acad. Sc. Paris, t. 276 (1973), 875-878.

[5] H. Brézis and G. Stampacchia, C.R. Acad. Sc. Paris, 276 (1973), 129-132.

[6] G. Duvaut and J.L. Lions, Inéquations en Mécanique et en Physique, Dunod, 1972.

[7] G. Duvaut, C.R. Acad. Sc. Paris, June 1973.

[8] R. Glowinski, J.L. Lions and R. Trémolières, Book to appear, Dunod, 1974.

[9] M. Goursat, Report LABORIA, 1973.

[10] J.L. Lions and G. Stampacchia, Comm. Pure and Appl. Math. XX (1967), 493-519.

[11] J.P. Quadrat and M. Viot, Report LABORIA, 1973.

[12] H. Scarf, Math. Meth. in the Social Sciences, Stanford Univ. Press, 1960.

[13] G. Stampacchia, C.R. Acad. Sc. Paris, 258 (1964), 4413-4416.

[14] A.F. Veinott, J. SIAM Appl. Math., 14, 5, September 1966.


A CONVEX PROGRAMMING METHOD IN HILBERT SPACE AND ITS APPLICATIONS
TO OPTIMAL CONTROL OF SYSTEMS DESCRIBED BY PARABOLIC EQUATIONS

Kazimierz Malanowski

Polish Academy of Sciences, Institute of Applied Cybernetics

Warsaw, Poland

1. INTRODUCTION

Some optimal control problems for systems described by partial differential equations can be reduced [2, 6] to the problem of minimization of a convex functional J(y) on a closed, convex and bounded set D in a Hilbert space. However, this set (the so-called attainability set) usually is not given in an explicit form. Therefore the direct minimization of the functional is very difficult. On the other hand, it is comparatively easy to find the points of support of D by given supporting hyperplanes. It was Gilbert who first proposed [4] to use these points of support for the construction of a sequence minimizing J(y). He considered the case of a quadratic functional defined on n-dimensional Euclidean space. It turned out that the speed of convergence of Gilbert's algorithm is comparatively low. Several attempts were made to improve the speed of convergence of Gilbert's procedure [3, 8, 9]. All these methods dealt with quadratic functionals defined on n-dimensional Euclidean space.

In this paper all modifications of Gilbert's method are reduced to one general scheme, which is applied to the problem of minimization of a convex functional on a closed, convex and bounded set in a Hilbert space.

The use of this scheme is proposed for solving an optimal control problem for parabolic equations. Some computations were performed and the three modifications of the method were compared on the basis of the obtained numerical results.

2. CONVEX PROGRAMMING PROBLEM AND AN ITERATIVE PROCEDURE FOR SOLVING IT

In a Hilbert space H there is given a closed, convex and bounded, i.e. weakly compact [2, 11], set D.

Moreover, a non-negative real convex functional J(y) is defined on the space H. It is assumed that J(y) is two times continuously differentiable and its Hessian J"(y) satisfies the following conditions:


(1) n‖z‖² ≤ (J"(y)z, z) ≤ N‖z‖², ∀ y ∈ D, ∀ z ∈ H,

where 0 < n ≤ N < ∞.

The condition (1) implies that J(y) is strictly convex.

Our purpose is to find an element y_opt ∈ D, called an optimal element, satisfying the relation

(2) J(y_opt) = inf_{y∈D} J(y).

Since J(y), being convex, is a lower semicontinuous functional [11], and the set D is weakly compact, the point y_opt exists, and it is unique due to the strict convexity of J(y).

At the point y_opt the following necessary and sufficient condition of optimality has to be satisfied:

(3) (−J'(y_opt), y) ≤ (−J'(y_opt), y_opt), ∀ y ∈ D.

To determine y_opt an iterative procedure is used, based on minimization of some quadratic approximations of the functional J(y) on closed and convex subsets of D, which are constructed successively.

To this end we define the functionals

(4a) J̄_i(y) = J(y_i) + (J'(y_i), y − y_i) + ½ N(y − y_i, y − y_i),

(4b) J̲_i(y) = J(y_i) + (J'(y_i), y − y_i) + ½ n(y − y_i, y − y_i).

J̄_i(y) and J̲_i(y) are quadratic functionals and

(5a) J̄_i(y_i) = J̲_i(y_i) = J(y_i).

It follows from (1) that

(5b) J̲_i(y) ≤ J(y) ≤ J̄_i(y).

We denote by y_i ∈ D_i the unique element satisfying the relation

(6) J̄_{i−1}(y_i) = inf_{y∈D_i} J̄_{i−1}(y).

Let ŷ_{i+1} ∈ D be any arbitrary element such that

(7) (−J'(y_i), y) ≤ (−J'(y_i), ŷ_{i+1}), ∀ y ∈ D,

i.e. ŷ_{i+1} is any point at which the hyperplane M_i orthogonal to −J'(y_i) supports D.

Theorem. If the family of closed, convex and bounded sets D_i ⊂ D is constructed in such a way that

(8) {y_i} ∪ {ŷ_{i+1}} ⊂ D_{i+1},

then the sequence {y_i} is strongly convergent to y_opt.


The proof of the Theorem is given in Appendix 1.

As it is shown in Appendix 2, the optimal value of the functional can be estimated as follows:

(9) max{0, J(y_i) + (1 + n(J'(y_i), ŷ_{i+1} − y_i)/(2‖J'(y_i)‖²))(J'(y_i), ŷ_{i+1} − y_i)} ≤ J(y_opt) ≤ J(y_i).

In the particular case where J(y) is a quadratic functional of the form

(10) J(y) = (z − y, z − y),

where z ∈ H is a given element which does not belong to D, we have n = N = 2, and the condition (7) takes on the form (z − y_i, y) ≤ (z − y_i, ŷ_{i+1}). In this case the estimation (9) reduces to the following one:

(11) [max{0, 1 − (z − y_i, ŷ_{i+1} − y_i)/(z − y_i, z − y_i)}]² (z − y_i, z − y_i) ≤ (z − y_opt, z − y_opt) ≤ (z − y_i, z − y_i).
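As a concrete illustration of (11), with hypothetical data not taken from the paper: let D be the convex hull of the unit-square vertices in R², z = (2, 0.5) a point outside D, and y_i the origin; the support point ŷ_{i+1} from (7) then yields a computable lower bound, and the true optimum J(y_opt) = 1 (the projection of z onto the square is (1, 0.5)) indeed lies between the two sides of (11).

```python
# Hypothetical data: D = conv of the unit-square vertices, z outside D.
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
z = (2.0, 0.5)
yi = (0.0, 0.0)                                # current iterate y_i

d = (z[0] - yi[0], z[1] - yi[1])               # z - y_i, i.e. -J'(y_i)/2
yhat = max(verts, key=lambda v: d[0] * v[0] + d[1] * v[1])   # support point (7)

num = d[0] * (yhat[0] - yi[0]) + d[1] * (yhat[1] - yi[1])    # (z - y_i, yhat - y_i)
den = d[0] * d[0] + d[1] * d[1]                              # (z - y_i, z - y_i) = J(y_i)
lower = max(0.0, 1.0 - num / den) ** 2 * den   # left-hand side of (11)
upper = den                                    # right-hand side of (11)
J_opt = 1.0   # the projection of z onto the square is (1, 0.5)
```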

3. SOME METHODS OF CONSTRUCTING THE SETS D_i

Note that, according to the Theorem, the iterative procedure is convergent to the optimal solution if the condition (8) is satisfied, no matter what the forms of the sets D_i are. However, the speed of convergence depends very much on the shape of D_i.

We present three methods of constructing the family D_i known from the literature. In all these methods the sets D_i are characterized by a finite number of parameters.

a. Gilbert's method [4]

This method is the simplest one. We start from any arbitrary point y_0 ∈ D, and at the (i+1)-th iteration we choose as D_{i+1} the segment [y_i, ŷ_{i+1}].

In this case the element y_{i+1} can be easily found from the formula y_{i+1} = y_i + α_{i+1}(ŷ_{i+1} − y_i), where

(12) α_{i+1} = min{1, (−J'(y_i), ŷ_{i+1} − y_i) / (N(ŷ_{i+1} − y_i, ŷ_{i+1} − y_i))}.

Hence the points y_{i+1} are determined in a very simple way. However


the speed of convergence in terms of the number of iterations is very

low [4].
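For the quadratic case (10), the whole of Gilbert's iteration — support point (7) followed by step (12) with N = 2 — reduces to a few lines. The sketch below uses hypothetical data (D a square in R², z outside it); the iterates converge to the projection of z onto D, here (1, 0.5). (In this tiny example the exact line search even terminates finitely; in general the convergence is only sublinear, which is the slowness discussed in the text.)

```python
def gilbert_step(yi, z, verts):
    """One iteration of Gilbert's method for J(y) = (z - y, z - y)
    on D = conv(verts): support point (7), then line search (12)."""
    d = (z[0] - yi[0], z[1] - yi[1])                 # z - y_i, i.e. -J'(y_i)/2
    yhat = max(verts, key=lambda v: d[0] * v[0] + d[1] * v[1])
    g = (yhat[0] - yi[0], yhat[1] - yi[1])
    gg = g[0] * g[0] + g[1] * g[1]
    if gg == 0.0:
        return yi
    # (12) with N = 2 reduces to alpha = min{1, (d, g) / (g, g)}
    alpha = min(1.0, (d[0] * g[0] + d[1] * g[1]) / gg)
    return (yi[0] + alpha * g[0], yi[1] + alpha * g[1])

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # hypothetical D
z = (2.0, 0.5)
y = (0.0, 0.0)
for _ in range(50):
    y = gilbert_step(y, z, verts)
```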

This slow convergence is due to the fact that at each step we use

only the information obtained in the last iteration (namely yi ) and

we do not take into consideration the previous ones.

To overcome this disadvantage the following modification was pro-

posed:

b. Barr's method [3]

In this method, as in the previous one, we select an arbitrary initial point y_0 and we put D_1 = [y_0, ŷ_1], but as D_{i+1} we choose

(13) D_{i+1} = conv{ ŷ_{i+1}, y_i^1, y_i^2, …, y_i^{i+1} },

where y_i^j (j = 1, …, i+1) denote the vectors spanning the set D_i. Every element y ∈ D_i can be represented in the form of a convex combination of the y_i^j, i.e.

y = Σ_{j=1}^{i+1} α^j y_i^j,

where

(14) α^j ≥ 0, Σ_{j=1}^{i+1} α^j = 1.

Hence, in order to find the element y_i, we have to determine the values ᾱ^j of the coefficients α^j at which the function J̄_{i−1}(α) assumes its minimum subject to the constraints (14). This functional can be rewritten in the form

(15) J̄_{i−1}(y) = J(y_{i−1}) + (J'(y_{i−1}), Σ_j α^j y^j − y_{i−1}) + ½ N(Σ_j α^j y^j − y_{i−1}, Σ_j α^j y^j − y_{i−1}) = c_0 − [⟨c_i, α⟩ + ⟨α, B_i α⟩],

where

c_0 = J(y_{i−1}) − (J'(y_{i−1}), y_{i−1}) + ½ N(y_{i−1}, y_{i−1}),

c_i = {(N y_{i−1} − J'(y_{i−1}), y^j)} are (i+1)-dimensional vectors,

B_i = {−½ N(y^j, y^k)} is an (i+1) × (i+1)-dimensional matrix,


and ⟨·, ·⟩ denotes the inner product in (i+1)-dimensional Euclidean space.

Minimization of (15) subject to (14) is a typical finite dimensional quadratic programming problem, and it can be solved using any of the well-known procedures [5].
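As a self-contained stand-in for the QP procedures of [5], the inner problem of Barr's method — minimize the quadratic (15) subject to the simplex constraints (14) — can be sketched with projected gradient and an exact Euclidean projection onto the simplex. The data below (four spanning vectors in R², a quadratic J as in (10)) are hypothetical.

```python
def project_simplex(a):
    """Euclidean projection of a onto the simplex {x : x >= 0, sum(x) = 1}."""
    u = sorted(a, reverse=True)
    cum, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        cum += ui
        t = (cum - 1.0) / i
        if ui - t > 0.0:
            theta = t
    return [max(x - theta, 0.0) for x in a]

def qp_simplex(vs, z, steps=20000, lr=0.1):
    """Minimize ||z - sum_j alpha_j v_j||^2 subject to (14)."""
    alpha = [1.0 / len(vs)] * len(vs)
    for _ in range(steps):
        y = [sum(a * v[c] for a, v in zip(alpha, vs)) for c in (0, 1)]
        grad = [2.0 * sum((y[c] - z[c]) * v[c] for c in (0, 1)) for v in vs]
        alpha = project_simplex([a - lr * g for a, g in zip(alpha, grad)])
    return alpha

vs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # spanning vectors (hypothetical)
z = (2.0, 0.5)
alpha = qp_simplex(vs, z)
y = [sum(a * v[c] for a, v in zip(alpha, vs)) for c in (0, 1)]   # the minimizer y_i
```

In practice one would use a dedicated QP routine; the projected-gradient loop is only the simplest dependency-free choice.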

Note that in this method the dimension of B_i usually increases after each iteration, which in turn increases the time of computations. To avoid this difficulty, some modifications of the construction of D_i were proposed [3, 7, 9], which make it possible to limit the dimension of the vectors α_i to a given number.

c. Nahi-Wheeler's method [8]

Both methods presented up to now were general in the sense that they could be applied to any set D. The third method takes advantage of a specific form of the set D.

In linear optimal control problems, as the set D we choose the so-called attainability set. We consider the case of a scalar control function with constrained magnitude and a performance index depending on the terminal state. We put

(16) D = { y : y = ∫_0^T A(t)u(t) dt ; |u(t)| ≤ 1 },

where u(t) ∈ R¹ is a scalar control function, A(t) is a continuous linear operator from R¹ into H, and T is a fixed time of control.

The set D defined by (16) satisfies [8] all the conditions of the Theorem.

The appropriate elements y will be found by means of determining the control functions u(·) corresponding to y according to (16).

It is easy to check [4, 8] that the control û_{i+1}(t) corresponding to the element ŷ_{i+1} satisfying (7) is given by

(17) û_{i+1}(t) = sgn(−J'(y_i), A(t)).

As u_0(·) we choose any arbitrary piecewise constant function satisfying the condition

(18) |u_0(t)| ≤ 1, ∀ t ∈ [0, T].

Let

(19) {t_0^j}

be the set of all points of discontinuity of u_0(·).

The function û_1(·) is derived from (17).

By {t̂_1^k} we denote the set of all points of discontinuity of û_1(·). Let us assume that this set has a finite number of elements and let us put

(20) {t_1^j} = {t_0^j} ∪ {t̂_1^k}.


We introduce the set U_1 of all piecewise constant functions satisfying (18) with the points of discontinuity at t_1^j.

We put

(21) D_1 = { y : y = ∫_0^T A(t)u(t) dt ; u(·) ∈ U_1 }.

It is obvious that {y_0} ∪ {ŷ_1} ⊂ D_1; moreover the set D_1 is closed and convex [8], hence it satisfies all the conditions of the Theorem.

By u_1(·) ∈ U_1 we denote the control function which corresponds to the element y_1 satisfying (6).

In the same way as before, we denote by U_{i+1} the set of all piecewise constant functions satisfying (18) with the points of discontinuity belonging to the set

(22) {t_{i+1}^j} = {t_i^j} ∪ {t̂_{i+1}^k},

where t_i^j and t̂_{i+1}^k denote the points of discontinuity of u_i(·) and û_{i+1}(·) respectively.

It follows from this construction that usually the number of the elements t_i^j increases at each iteration. Let the number of these elements be L_i.

In the same way as D_1 we define

(23) D_i = { y : y = ∫_0^T A(t)u(t) dt ; u(·) ∈ U_i }.

Now we shall show how to find the element y_i satisfying (6). Let us denote

(24) l_i^j = ∫_{t_i^j}^{t_i^{j+1}} A(t) dt, j = 0, 1, …, L_i,

where t_i^0 = 0, t_i^{L_i+1} = T, and the l_i^j are given elements of the space H.

It follows from the definition of U_i and D_i that

(25) y = Σ_{j=1}^{L_i+1} α^j l_i^{j−1},

where the α^j are the values of the control function u(·) in the intervals (t^{j−1}, t^j).

Thus, to find u_i(·), it is enough to determine the coefficients ᾱ^j


satisfying the condition

(26) |α^j| ≤ 1, j = 1, …, L_i + 1,

that minimize the functional

(27) J̄_{i−1}(α) = J(y_{i−1}) + (J'(y_{i−1}), Σ_j α^j l^{j−1} − y_{i−1}) + ½ N(Σ_j α^j l^{j−1} − y_{i−1}, Σ_j α^j l^{j−1} − y_{i−1}) = c_0 − [⟨c_i, α⟩ + ⟨α, B_i α⟩].

Hence we obtain a quadratic form similar to (15), and to find the ᾱ^j any procedure of quadratic programming in a finite dimensional space can be used, as it was proposed in Barr's method.

4. APPLICATION TO OPTIMAL CONTROL OF SYSTEMS DESCRIBED BY PARABOLIC EQUATIONS

The method of programming presented above can be used to determine the optimal control of systems described by linear parabolic equations.

Let V and H be two Hilbert spaces such that V ⊂ H ⊂ V', and let A be a linear, continuous and coercive operator from V to V'. Let U be another Hilbert space and G ∈ ℒ(U; H).

For t ∈ [0, T] we consider the following equation

(28) dy(t)/dt + A y(t) = G u(t)

with the initial condition

(28a) y(0) = y_0 ∈ H.

For each u ∈ L²(0,T; U) equation (28) has [6] a unique solution y(u) ∈ C(0,T; H). A function u ∈ L²(0,T; U) is called an admissible control if

(29) u(t) ∈ U_ad for almost all t ∈ [0,T],

where U_ad ⊂ U is a closed, convex and bounded set.

The optimization problem is to find an admissible control u_0 which minimizes the functional

(30) J(y(T; u)).

Let us denote

(31) D = { y(T) ∈ H : dy(t)/dt + A y(t) = G u(t), y(0) = y_0, u(t) ∈ U_ad }.

It can be shown [6] that D is closed, convex and bounded, therefore


to find the minimum of J(y(T; u)) we can apply the procedure described in Section 2. To this end we shall show how to find ŷ_{i+1} satisfying (7).

Let us introduce the adjoint equation

(32) −dp(t)/dt + A* p(t) = 0 ;

(32a) p(T) = −J'(y(T; u_i)).

It is easy to show [6] that the condition (7) is equivalent to the following one:

(33) (G* p(t), û_{i+1}(t)) ≥ (G* p(t), u), ∀ u ∈ U_ad and for almost all t ∈ [0,T].

Thus, to find ŷ_{i+1} = y(T; û_{i+1}), we have to integrate two equations: first (32), in order to obtain û_{i+1} from (33), and then (28).
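The two integrations can be sketched on a small hypothetical stand-in for (28): a space-discretized heat equation written as dy/dt = Ay + bu (so the code's A corresponds to −A of (28), and b plays the role of G), with J(u) = ‖z − y(T)‖², admissible controls |u(t)| ≤ 1, and explicit Euler in time. None of the numbers below are from the paper; the Gilbert line search (12) is added at the end to show one full outer iteration.

```python
# Hypothetical semi-discrete model: dy/dt = A y + b u(t), y(0) = 0.
n, nt = 8, 400
T = 1.0
h, dt = 1.0 / n, T / 400
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -2.0 / h ** 2
    if i > 0: A[i][i - 1] = 1.0 / h ** 2
    if i < n - 1: A[i][i + 1] = 1.0 / h ** 2
A[0][0] = -1.0 / h ** 2                     # insulated left end
A[n - 1][n - 1] = -1.0 / h ** 2 - 1.0 / h   # heat exchange at the controlled end
b = [0.0] * n
b[n - 1] = 1.0 / h

def mv(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def forward(u):                  # integrate the state equation, cf. (28)
    y = [0.0] * n
    for k in range(nt):
        Ay = mv(A, y)
        y = [y[i] + dt * (Ay[i] + b[i] * u[k]) for i in range(n)]
    return y

def support_control(yT, z):
    # adjoint (32) in this sign convention: dp/dt = -A^T p, p(T) = -J'(y(T)) = 2(z - y(T));
    # integrated backwards it is stable, and (33) gives the bang-bang control u = sgn(b . p).
    p = [2.0 * (z[i] - yT[i]) for i in range(n)]
    u = [0.0] * nt
    for k in range(nt - 1, -1, -1):
        u[k] = 1.0 if sum(b[i] * p[i] for i in range(n)) > 0.0 else -1.0
        Ap = mv(A, p)            # A is symmetric here, so A^T = A
        p = [p[i] + dt * Ap[i] for i in range(n)]
    return u

z = [0.2] * n                    # desired final state (hypothetical)
u = [0.0] * nt                   # u_0 = 0
y = forward(u)
Js = [sum((z[i] - y[i]) ** 2 for i in range(n))]
for _ in range(10):              # Gilbert iterations on the attainable set
    uhat = support_control(y, z)
    yhat = forward(uhat)
    g = [yhat[i] - y[i] for i in range(n)]
    d = [z[i] - y[i] for i in range(n)]
    gg = sum(gi * gi for gi in g)
    if gg == 0.0:
        break
    alpha = max(0.0, min(1.0, sum(di * gi for di, gi in zip(d, g)) / gg))  # (12), N = 2
    u = [ui + alpha * (uh - ui) for ui, uh in zip(u, uhat)]
    y = [yi + alpha * gi for yi, gi in zip(y, g)]   # linearity of the state equation
    Js.append(sum((z[i] - y[i]) ** 2 for i in range(n)))
```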

Having û_{i+1}(t) and ŷ_{i+1}, we can apply the procedure of minimization described in Section 2.

As a numerical example we consider a system described in the rectangle (0,1) × (0,T) by the heat equation

(34) ∂y(x,t)/∂t − ∂²y(x,t)/∂x² = 0

with the initial condition

(34a) y(x, 0) = 0

and the boundary conditions

(34b) ∂y/∂x(0, t) = 0 ; ∂y/∂x(1, t) = γ[u(t) − y(1, t)].

The cost functional is a quadratic one:

(35) J(u) = (z − y(T), z − y(T)) = ∫_0^1 [z(x) − y(x,T)]² dx,

where z ∈ L²(0,1) is the desired final temperature distribution. Two types of constraints imposed on the control function are considered:

(36) (a) m ≤ u(t) ≤ M for almost all t ∈ [0,T];

(b) the functions u(t) are given [1, 10] as solutions of the equation

(37) du(t)/dt = −(1/τ) u(t) + (1/τ) v(t),

(37a) u(0) = 0,

where

(37b) p ≤ v(t) ≤ P.

To perform the computations, eq. (34) was discretized both in time and space variables. The following numerical data were taken:

case (a): γ = 1; m = −1; M = 1; T = 1; z(x) ≡ 0.45; 0.5; Δt = 2.5 × 10⁻²; Δx = 10⁻¹;

case (b): γ = 10; τ = 0.04; p = 0; P = 1; T = 0.2; 0.4; z(x) ≡ 0.2; Δt = 4 × 10⁻³; 8 × 10⁻³; Δx = 5 × 10⁻².

The computations were performed for all three methods using an ODRA 1204 computer, and the obtained results are listed in Table 1.

The considered control functions are piecewise constant, and the last column of Table 1 gives the number of discontinuities obtained in each method. This number characterizes the simplicity of the obtained control.

TABLE 1

Type of           Method   No. of       Time     J(y(T;u))      Estimation      No. of
constraints                iterations   (sec)                   of error        discont.
---------------------------------------------------------------------------------------
(a) z(x) ≡ 0.45     1        35          900     4.480 × 10⁻³   4.320 × 10⁻³      10
                    2         5          374     1.576 × 10⁻³   1.576 × 10⁻³       5
                    3         6          391     6.163 × 10⁻⁴   1.769 × 10⁻⁴       5
(a) z(x) ≡ 0.5      1         —          800     1.599 × 10⁻²   1.360 × 10⁻³       6
                    2         4          185     1.530 × 10⁻²   8.372 × 10⁻⁴       3
                    3         5          615     1.489 × 10⁻²   6.659 × 10⁻⁴       2
(b) T = 0.2 sec     1        14          323     4.58  × 10⁻²       —              5
                    2         6          576     4.552 × 10⁻²   1.296 × 10⁻⁴       4
                    3         6         4151     4.546 × 10⁻²   3.204 × 10⁻⁴       4
(b) T = 0.4 sec     1        26          659     3.090 × 10⁻³   3.090 × 10⁻³      19
                    2         7          540     3.414 × 10⁻⁴   3.414 × 10⁻⁴       7
                    3         6         3647     4.989 × 10⁻⁶   4.989 × 10⁻⁶       6

1 = Gilbert's method; 2 = Barr's method; 3 = Nahi-Wheeler's method. (Entries marked "—" could not be recovered from the damaged scan.)

An example of the plots of the values of the functional vs. the number of iterations for the different methods is given in Fig. 1, and the values of the functional vs. computation time in Fig. 2.

The forms of the optimal control and the final temperature distributions obtained using the Nahi-Wheeler method are presented in Fig. 3.

It follows from the obtained results that the speed of convergence expressed in terms of the number of iterations is the lowest for Gilbert's method. On the other hand, the time of each iteration for this method is comparatively low. Hence the total computation time is of the same order as in Barr's method and several times lower than in Nahi-Wheeler's. As far as the accuracy and simplicity of the obtained control are concerned, Nahi-Wheeler's method is the best, Barr's is not very much inferior, and Gilbert's is much worse than the two others.


Fig. 1. Value of functional J(u) vs. the number of iterations. Case (a), z(x) ≡ 0.45.

Fig. 2. Value of functional J(u) vs. computation time. Case (a), z(x) ≡ 0.45.

Fig. 3. Form of control and final temperature distribution. Case (a), z(x) ≡ 0.45; case (b), T = 0.2 sec.

Summing up, it seems that the best method is Barr's. In cases where high accuracy is necessary, Nahi-Wheeler's can be applied.

REFERENCES

1. Arienti G., Colonelli A., Kenneth P., Cahiers de l'IRIA 2, 81-106 (1970)
2. Balakrishnan A.V., Introduction to optimization theory in a Hilbert space, Springer Verlag, 1971
3. Barr R.O., SIAM J. on Control 7, 415-429 (1969)
4. Gilbert E.G., SIAM J. on Control 4, 61-80 (1966)
5. Hadley G., Nonlinear and dynamic programming, Addison-Wesley, 1964
6. Lions J.L., Optimal control of systems governed by partial differential equations, Springer Verlag, 1971
7. Malanowski K., Arch. Autom. i Telemech. 18, 3-18 (1973)
8. Nahi N.E., Wheeler L.A., IEEE Trans. AC-12, 515-521 (1967)
9. Pascavaradi T., Narendra K.S., SIAM J. on Control 8, 396-402 (1970)
10. Sakawa Y., IEEE Trans. AC-11, 420-426 (1966)
11. Veinberg M.M., Variational method and method of monotone operators, Gostiechizdat, 1972 (in Russian)

APPENDIX 1. PROOF OF THE THEOREM

First it will be shown that the sequence {J(y_i)} is non-increasing. Indeed, substituting in (4a) y = y_{i+1}, and taking into consideration (5), (6) and (8), we get

(1.1) J(y_{i+1}) ≤ J̄_i(y_{i+1}) ≤ J̄_i(y_i) = J(y_i).

On the other hand, the sequence {J(y_i)} is bounded from below by J(y_opt). Therefore it is convergent. We are going to show that

(1.2) lim_{i→∞} J(y_i) = J(y_opt).

To this end, first we shall prove that

(1.3) lim_{i→∞} (−J'(y_i), ŷ_{i+1} − y_i) = 0.

Let us assume that (1.3) is not satisfied. Then, taking into account (7), we conclude that there exists a constant ε > 0 such that for every integer m > 0 there exists a subscript ν > m such that

(1.4) (−J'(y_ν), ŷ_{ν+1} − y_ν) ≥ ε.

Let us denote y_λ = y_ν + λ(ŷ_{ν+1} − y_ν), λ ∈ (0, 1]. It follows from (6) and (8) that

J̄_ν(y_{ν+1}) ≤ J̄_ν(y_λ), ∀ λ ∈ (0, 1].

Taking into consideration (4) and (1.4), we get

J̄_ν(y_λ) = J(y_ν) + λ(J'(y_ν), ŷ_{ν+1} − y_ν) + ½ N λ² ‖ŷ_{ν+1} − y_ν‖² ≤ J(y_ν) − λε + ½ N λ² κ,

where

(1.5) κ = sup_{y,z∈D} ‖y − z‖² < ∞.

Putting λ = min{1, ε/(Nκ)} we get

(1.6) J̄_ν(y_λ) ≤ J(y_ν) − δ, δ = min{ε²/(2Nκ), ε/2},

where δ > 0 does not depend on ν. Taking into consideration (5b) and (1.6), we obtain

J(y_{ν+1}) ≤ J̄_ν(y_{ν+1}) ≤ J̄_ν(y_λ) ≤ J(y_ν) − δ,


or

J(y_{ν+1}) ≤ J(y_ν) − δ,

which contradicts the convergence of the sequence {J(y_i)}. This contradiction proves (1.3).

Now let us prove (1.2). Denoting

y_λ = y_i + λ(y_opt − y_i), λ ∈ [0, 1],

we get from (4) and (5)

(1.7) J̲_i(y_λ) = J(y_i) + λ(J'(y_i), y_opt − y_i) + ½ n λ² ‖y_opt − y_i‖² ≤ J(y_λ) ≤ (1 − λ)J(y_i) + λ J(y_opt).

From (7) and (1.7) we have

(1.8) (−J'(y_i), ŷ_{i+1} − y_i) ≥ (−J'(y_i), y_opt − y_i) ≥ J(y_i) − J(y_opt) ≥ 0.

(1.3) together with (1.8) prove (1.2).

Since J(y), being convex, is weakly lower semicontinuous, the set D is weakly compact, and y_opt is the unique element satisfying (2), (1.2) implies

(1.9) y_i → y_opt weakly.

To prove the strong convergence, we note that it follows from (1) that

(1.10) n‖y − z‖² ≤ (J'(y), y − z) − (J'(z), y − z), ∀ y, z ∈ D.

Hence from (7) and (1.10) we get

(1.11) n‖y_opt − y_i‖² ≤ (J'(y_opt), y_opt − y_i) + (−J'(y_i), y_opt − y_i) ≤ (J'(y_opt), y_opt − y_i) + (−J'(y_i), ŷ_{i+1} − y_i).

For i → ∞ the first term on the right-hand side of (1.11) tends to zero by (1.9) and the second by (1.3). Hence

(1.12) lim_{i→∞} ‖y_opt − y_i‖ = 0, q.e.d.

APPENDIX 2. ESTIMATION OF THE ERROR

It is obvious that the following estimation takes place:

(2.1) J(y_opt) ≤ J(y_i).

To estimate J(y_opt) from below, we take advantage of (5b) and find the estimation of J̲_i(y) on D.

Consider the situation where the hyperplane M_i separates the set D and the point y_i − (1/n)J'(y_i) at which J̲_i(y) assumes its global minimum. It takes place iff

(−J'(y_i), y_i − (1/n)J'(y_i) − ŷ_{i+1}) > 0.

Using (4) we rewrite this condition in the form

(2.2) (J'(y_i), J'(y_i) + n(ŷ_{i+1} − y_i)) > 0.

In this case the minimal value of J̲_i(y) on D is not greater than


the minimal value of this functional on M_i.

This minimal value is assumed at the point ȳ where J̲_i'(ȳ) = μ J'(y_i) for some scalar μ. Taking into consideration that ŷ_{i+1} ∈ M_i, we get from (4b)

(2.3) min_{y∈M_i} J̲_i(y) = J(y_i) + (1 + n(J'(y_i), ŷ_{i+1} − y_i)/(2‖J'(y_i)‖²))(J'(y_i), ŷ_{i+1} − y_i).

From (5b), (2.1), (2.2) and (2.3) we obtain the estimation (9) of the error of the i-th iteration.


ABOUT SOME FREE BOUNDARY PROBLEMS CONNECTED WITH HYDRAULICS

CLAUDIO BAIOCCHI, Laboratorio di Analisi Numerica del C.N.R.

PAVIA, ITALY

N.1. DESCRIPTION OF THE PROBLEM

Two water reservoirs, of different levels, are separated by an earth dam, which is assumed homogeneous, isotropic and with impervious basis. We ask for the steady flow of the water between the reservoirs, and in particular for the flow region Ω (or, in other words, for the unknown part y = φ(x) of its boundary).

Denoting by D the dam, and by y₁ and y₂ the water levels (with y₁ greater than y₂; see picture 1), the mathematical problem can be stated as follows (see for instance [8] for the general treatment of this type of problems):

Find a subset Ω of D, bounded from above by a continuous decreasing function y = φ(x), such that there exists in Ω a harmonic function u = u(x,y) which satisfies the boundary conditions: u = y₁ on AF; u = y₂ on BC; u = y on CC′ and FC′; ∂u/∂n = 0 on AB and FC′

(∂/∂n denoting the normal derivative; we point out that on the "free boundary" we have both the Dirichlet and the Neumann condition).

PICTURE 1


In a paper of 1971 (see [1], [2]) I proved that, if D is a rectangle, D = ]0,a[ × ]0,y₁[ (in picture 1, AF and BE are vertical, AB is horizontal; the origin is in A), setting:

w(x,y) = ∫_y^{φ(x)} [u(x,t) − t] dt for (x,y) ∈ Ω̄ ; w(x,y) = 0 for (x,y) ∈ D − Ω̄,

the new unknown function w satisfies the relations:

(1) w ≥ 0; Δw ≤ 1; w(1 − Δw) = 0 on D; w = g on ∂D,

where g(x,y) is defined on ∂D by the formula:

(2) g = (y − y₁)²/2 on AF; g = (y − y₂)²/2 on BC; g linear on AB; g = 0 elsewhere.

The system (1), (2) can be studied as a variational inequality; precisely, denoting by H¹(D) the space of square summable functions on D, whose first derivatives are also square summable, and setting:

K = { v ∈ H¹(D) ; v ≥ 0 on D ; v = g on ∂D },

the function w satisfies:

(3) w ∈ K; ∀ v ∈ K: ∫_D [grad w · grad(v − w)] dx dy ≥ ∫_D (w − v) dx dy.

Vice-versa, it is well known (see [9]) that (3) has a unique solution; and we can prove that the solution w of (3) is such that, setting:

(4) Ω = { (x,y) ∈ D ; w(x,y) > 0 } ; u = y − D_y w on Ω

(and obviously graph of φ = ∂Ω − ∂D), we get a solution of the free boundary problem. In this way we obtain an existence-uniqueness theorem, via the variational inequalities theory; and we will see later that this approach also allows us a new numerical treatment.

At present we can apply similar methods to a wide class of free boundary problems (see [3], [4], [5]); here I will limit myself to describing one of these problems, corresponding to the case where the shape of the dam is a trapezium (see picture 2). System (1) becomes now:

(5) w ≥ 0; Δw ≤ 1; w(1 − Δw) = 0; w = g on FEB; w linear on AB; D_y w = y − y₁ on AF (we reobtain w = g on ∂D if AF is vertical).

Denoting by −q the value of D_x w on AB, q represents the "discharge" of the dam (which is now unknown; in the previous case it was q = (y₁² − y₂²)(2a)⁻¹). Now we can define, for real q:

K_q = { v ∈ H¹(D) ; v ≥ 0 on D ; v = g on FEB ; v_x = −q on AB },

and we must modify (3) into formula (6) below (where a(u,v) is a bilinear form associated with −Δ (like ∫_D grad u · grad v dx dy) whose "natural boundary condition" on AF is D_y):


(6) w_q ∈ K_q; ∀ v ∈ K_q: a(w_q, v − w_q) ≥ ∫_D (w_q − v) dx dy + ∫_{AF} (y₁ − y)(v − w_q) dy.

Actually we can prove (see [4] and [7]) that there exists a unique value q* of q such that (4), with w = w_{q*}, gives the solution of the free boundary problem; and this unique value of q can be characterized as the unique value of q such that the corresponding solution w_q of (6) is a "more regular" function, for instance w_q ∈ C¹(D̄).

PICTURE 2

N.3. NUMERICAL APPROACH

In the cases where the shape of the dam is simple (like a rectangle, or a trapezium) we discretized the problem ((3) or (6) respectively) by a finite difference scheme. In the first case we must solve the set of inequalities:

(7) w_h ≥ 0; Δ_h w_h ≤ 1; w_h(1 − Δ_h w_h) = 0 in the interior gridpoints; w_h = g in the gridpoints of the boundary,

where h is the mesh amplitude and Δ_h is the usual 5-point discretisation of the Laplace operator; system (7) has a unique solution and can be solved by means of an iterative scheme, with S.O.R. and projections (for the details see [3], [6]); Ω_h will be defined as the union of the meshes with center in the gridpoints where w_h > 0; setting also u_h = y − δ₂ w_h (δ₂ discretizing D_y) we have the following convergence result (see [3]):

(8) u_h converges to u in H¹ discrete; Ω = interior of lim_{h→0} Ω_h.
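For a one-dimensional analogue of (7), the projected S.O.R. scheme takes the following form. This is illustrative only; the boundary data are hypothetical, chosen so that the exact solution w(x) = (0.6 − x)²/2 for x ≤ 0.6, w = 0 beyond, is known in closed form, with free boundary at x = 0.6.

```python
# 1-D analogue of (7): w >= 0, w'' <= 1, w (1 - w'') = 0 on (0,1),
# with boundary data w(0) = 0.18, w(1) = 0.
N = 101
h = 1.0 / (N - 1)
w = [0.0] * N
w[0] = 0.18
omega = 1.8                                          # S.O.R. relaxation parameter
for _ in range(5000):
    for i in range(1, N - 1):
        gs = (w[i - 1] + w[i + 1] - h * h) / 2.0     # Gauss-Seidel value for w'' = 1
        w[i] = max(0.0, w[i] + omega * (gs - w[i]))  # relax, then project onto w >= 0
```

The projection max(0, ·) is what replaces an obstacle-free solve; at the clipped gridpoints the discrete inequality Δ_h w ≤ 1 holds automatically.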


In the second case we must modify (7) into:

(9) w_h ≥ 0; Δ_h w_h ≤ 1; w_h(1 − Δ_h w_h) = 0 in the interior gridpoints; w_h = g on FEB; δ₁ w_h = −q on AB; δ₂ w_h = y − y₁ on AF,

and we must add to the solution of (9) an algorithm for the choice of an approximation q(h) of q*. In order to do it, we look for the root of the equation f_h(q) = 0, where f_h(q) = δ₂ w_h(F); it can be proved that q → f_h(q) is a convex, strictly decreasing function, which assumes opposite signs in two known points q₀ and q₁; setting q(h) as the unique root of f_h(q) and w_h as the unique solution of (9) with q = q(h), we can prove (see [7]) that q(h) converges to q* and (with notations similar to the ones used in (8)) the convergence result (8) is still valid.
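Since f_h is strictly decreasing with a sign change on [q₀, q₁], the simplest root algorithm is bisection. The sketch below uses a hypothetical stand-in for f_h; each evaluation of the real f_h would of course require solving (9) for w_h.

```python
import math

def find_discharge(f, q0, q1, tol=1e-10):
    """Bisection for the unique root of a strictly decreasing f
    with f(q0) > 0 > f(q1)."""
    lo, hi = q0, q1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in for f_h(q): convex, strictly decreasing, root at q* = 0.16.
f = lambda q: math.exp(-3.0 * q) - math.exp(-3.0 * 0.16)
q_h = find_discharge(f, 0.0, 1.0)
```

Convexity and monotonicity of f_h would also allow a faster Newton or secant iteration; bisection is simply the most robust choice.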

The numerical results obtained by applying our method show a good agreement with the ones reported in the literature; we refer to [5] for some comparisons. We want to point out that the main advantages of our method from the numerical point of view are the simplicity of programming and the speed of execution (in comparison with a classical difference method we see that both the execution time and the Fortran statements necessary to program the algorithm are reduced to about one third); moreover our method is rigorous from the mathematical point of view.

REFERENCES

1 C. BAIOCCHI. Sur un problème à frontière libre traduisant le filtrage des liquides à travers des milieux poreux. C. R. Acad. Sc. Paris 273 (1971) 1215-1217.

2 C. BAIOCCHI. Su un problema di frontiera libera connesso a questioni di idraulica. Ann. di Mat. 92 (1972) 107-127.

3 C. BAIOCCHI, V. COMINCIOLI, L. GUERRI, G. VOLPI. Free boundary problems in the theory of fluid flow through porous media: numerical approach. To appear in Calcolo.

4 C. BAIOCCHI, V. COMINCIOLI, E. MAGENES, G. POZZI. Free boundary problems in the theory of fluid flow through porous media: existence and uniqueness theorems. To appear in Ann. di Mat.

5 C. BAIOCCHI, E. MAGENES. Proceedings of the Symposium "Metodi valutativi nella Fisica-Matematica", Acc. Naz. Lincei, Rome, dec. 1972.

6 V. COMINCIOLI, L. GUERRI, G. VOLPI. Analisi numerica di un problema di frontiera libera connesso col moto di un fluido attraverso un mezzo poroso. Publ. N. 17 of Lab. Anal. Num. Pavia (1971).

7 V. COMINCIOLI. Paper in preparation.

8 M. E. HARR. Groundwater and seepage. Mc Graw Hill, N.Y., 1962.

9 G. STAMPACCHIA. Formes bilinéaires coercives sur les ensembles convexes. C. R. Acad. Sc. Paris, 258 (1964) 4413-4416.

Page 153: 5th Conference on Optimization Techniques Part I

A DECOMPOSITION METHOD APPLIED

TO THE OPTIMAL CONTROL OF DISTRIBUTED SYSTEMS

A. BENSOUSSAN, R. GLOWINSKI, J.L. LIONS

(IRIA 78 - ROCQUENCOURT - FRANCE)

INTRODUCTION

The aim of this work is to study the application of decomposition-coordination techniques to the numerical solution of optimal control problems governed by partial differential equations. The approach adopted proceeds by decomposition of the domain: the solution of the initial optimal control problem is reduced, by an iterative procedure, to that of a sequence of sub-problems of the same nature posed on sub-domains forming a partition of the initial domain. The principles of the method are presented for a control problem whose state equation is linear elliptic on a rectangular domain, but the techniques described below apply to more complicated geometries and to nonlinear, stationary or evolution state equations.

I - STUDY OF THE GLOBAL PROBLEM

1.1 - NOTATION AND HYPOTHESES.

We consider a rectangular domain Ω of the form shown in Figure 1, and we denote by Γ = Γ₁ ∪ Γ₂ ∪ Γ₃ ∪ Γ₄ the boundary of Ω. The state of the system is given by

(1.1) −Δz = 0 in Ω, z = 0 on Γ₃ ∪ Γ₄, ∂z/∂n = v_i on Γ_i (i = 1,2),

where v = (v₁,v₂) is the control, v_i ∈ L²(Γ_i), i = 1,2. In (1.1), ∂z/∂n denotes the normal derivative oriented toward the exterior of Ω.

The cost functional is defined (noting that z = z(v) = z(x;v)) by

(1.2) J(v) = ∫_Ω |z(x) − y_d(x)|² dx + ν ∫_{Γ₁∪Γ₂} v²(x) dΓ, ν > 0.


We consider the set (representing the constraints)

(1.3) 𝒰_ad = U¹_ad × U²_ad, with Uⁱ_ad a closed convex subset of L²(Γ_i).

The global problem is defined by

(1.4) Minimize J(v), v ∈ 𝒰_ad.

1.2 - NECESSARY AND SUFFICIENT OPTIMALITY CONDITIONS.

It is standard (cf. J.L. Lions (1)) that (1.4) has a unique solution u, characterized by the following Euler condition:

(1.5) ∫_Ω (y − y_d)(z − y) dx + ν ∫_{Γ₁∪Γ₂} u(v − u) dΓ ≥ 0, ∀ v ∈ 𝒰_ad, z = z(v), y = z(u), u ∈ 𝒰_ad.

We introduce the adjoint state defined by

(1.6) −Δp = y − y_d in Ω, p = 0 on Γ₃ ∪ Γ₄, ∂p/∂n = 0 on Γ₁ ∪ Γ₂;

then (cf. J.L. Lions (1)), (1.5) is equivalent to

(1.7) ∫_{Γ₁∪Γ₂} (p + νu)(v − u) dΓ ≥ 0, ∀ v ∈ 𝒰_ad, u ∈ 𝒰_ad.

It will be convenient to rewrite the optimality conditions in the form of a variational inequality. We introduce the Hilbert space

(1.8) 𝒱 = { z | z ∈ H¹(Ω), z|_{Γ₃∪Γ₄} = 0 },

where H¹(Ω) is the Sobolev space of order 1. Note that (y,p) ∈ 𝒱 × 𝒱. We then have

Theorem 1.1. The triple (y,p,u) (where u is the optimal control and y the optimal state) is the unique solution of the variational inequality

(1.9) ∫_Ω grad y · grad z dx − ∫_{Γ₁∪Γ₂} u z dΓ + a ( ∫_Ω grad p · grad q dx − ∫_Ω (y − y_d) q dx ) + b ∫_{Γ₁∪Γ₂} (νu + p)(v − u) dΓ ≥ 0, ∀ z,q ∈ 𝒱, ∀ v ∈ 𝒰_ad, y,p ∈ 𝒱, u ∈ 𝒰_ad,

where a and b are two arbitrary positive constants (1).

(1) The role of a and b will appear in Section 3.2.


II - DECOMPOSITION

2.1 - HYPOTHESES - NOTATION.

To economize on notation, we consider a decomposition of Ω into two sub-domains Ω₁, Ω₂ (the generalization to n sub-domains follows at once), as shown in Figure 2, with obvious notation:

Γ₃ = Γ₃₁ ∪ Γ₃₂, Γ₄ = Γ₄₁ ∪ Γ₄₂, γ = Ω̄₁ ∩ Ω̄₂, the interface γ being assumed regular.

For i = 1,2 we introduce the Hilbert spaces

(2.1) 𝒱_i = { z ∈ H¹(Ω_i) | z|_{Γ₃ᵢ∪Γ₄ᵢ} = 0 }.

We set y_id = y_d|_{Ω_i}.

We introduce the following problem: find (y_i, p_i, u_i) in 𝒱_i × 𝒱_i × Uⁱ_ad satisfying

(2.2) y₁ = y₂ on γ, p₁ = p₂ on γ,

and

(2.3) Σ_{i=1,2} [ ∫_{Ω_i} grad y_i · grad z_i dx − ∫_{Γ_i} u_i z_i dΓ_i ] + a Σ_{i=1,2} [ ∫_{Ω_i} grad p_i · grad q_i dx − ∫_{Ω_i} (y_i − y_id) q_i dx ] + b Σ_{i=1,2} ∫_{Γ_i} (νu_i + p_i)(v_i − u_i) dΓ_i ≥ 0, ∀ (z_i, q_i, v_i) ∈ 𝒱_i × 𝒱_i × Uⁱ_ad, i = 1,2,

with

(2.4) z₁ = z₂ on γ, q₁ = q₂ on γ.

We then have

Theorem 2.1. There exists a unique solution of (2.2),(2.3), given by:


(2.5) y_i = restriction of y to Ω_i, p_i = restriction of p to Ω_i, u_i = restriction of u to Γ_i.

Proof

Define (y_i, p_i, u_i) by (2.5). Then, from (1.1) and (1.6), we have

(2.6) −Δy_i = 0 in Ω_i, y_i|_{Γ₃ᵢ∪Γ₄ᵢ} = 0, ∂y_i/∂n|_{Γ_i} = u_i,

(2.7) −Δp_i = y_i − y_id in Ω_i, p_i|_{Γ₃ᵢ∪Γ₄ᵢ} = 0, ∂p_i/∂n|_{Γ_i} = 0,

(2.8) ∫_{Γ_i} (p_i + νu_i)(v_i − u_i) dΓ_i ≥ 0.

The relations (2.2) are satisfied (trace properties in H¹(Ω)). Moreover, according to Lions-Magenes (1), one can define

(2.9) ∂y_i/∂n_i|_γ ∈ H^{-1/2}(γ), ∂p_i/∂n_i|_γ ∈ H^{-1/2}(γ),

and furthermore

(2.10) ∂y₁/∂n₁|_γ = −∂y₂/∂n₂|_γ, ∂p₁/∂n₁|_γ = −∂p₂/∂n₂|_γ.

Now let z_i, q_i satisfy (2.4) and v_i ∈ Uⁱ_ad. From (2.6), (2.7), (2.8) we have

Σ_{i=1,2} [ −∫_{Ω_i} Δy_i z_i dx + a ∫_{Ω_i} (−Δp_i − (y_i − y_id)) q_i dx + b ∫_{Γ_i} (νu_i + p_i)(v_i − u_i) dΓ_i ] ≥ 0.

By Green's formula it follows that

(2.11) Σ_{i=1,2} [ ∫_{Ω_i} grad y_i · grad z_i dx − ∫_{∂Ω_i} (∂y_i/∂n_i) z_i d(∂Ω_i) ] + a Σ_{i=1,2} [ ∫_{Ω_i} grad p_i · grad q_i dx − ∫_{Ω_i} (y_i − y_id) q_i dx − ∫_{∂Ω_i} (∂p_i/∂n_i) q_i d(∂Ω_i) ] + b Σ_{i=1,2} ∫_{Γ_i} (p_i + νu_i)(v_i − u_i) dΓ_i ≥ 0,

and one easily verifies, thanks to (2.6), (2.7), (2.4), (2.10), that (2.11) implies (2.3).


Conversely, let (y_i, p_i, u_i) be a solution of (2.3). We set

(2.12) y(x) = Σ_{i=1,2} y_i(x) χ_i(x), p(x) = Σ_{i=1,2} p_i(x) χ_i(x),

where χ_i is the characteristic function of Ω_i. It is classical that (2.2) implies y, p ∈ H¹(Ω). Since

y|_{Γ₃ᵢ} = y_i|_{Γ₃ᵢ}, p|_{Γ₃ᵢ} = p_i|_{Γ₃ᵢ}, y|_{Γ₄ᵢ} = y_i|_{Γ₄ᵢ}, p|_{Γ₄ᵢ} = p_i|_{Γ₄ᵢ},

we indeed have y, p ∈ 𝒱. One then verifies easily that (2.3) implies (1.9). ∎

2.2 - GENERALIZED LAGRANGIAN.

Although (2.3) is a non-symmetric variational inequality (and hence does not arise from an optimization problem), one can associate with it a Lagrangian defined by

(2.13) ℒ = Σ_{i=1,2} [ ∫_{Ω_i} grad y_i · grad z_i dx − ∫_{Γ_i} u_i z_i dΓ_i ] + a ( Σ_{i=1,2} ∫_{Ω_i} grad p_i · grad q_i dx − ∫_{Ω_i} (y_i − y_id) q_i dx ) + b Σ_{i=1,2} ∫_{Γ_i} (νu_i + p_i)(v_i − u_i) dΓ_i + ∫_γ λ(z₁ − z₂) dγ + a ∫_γ μ(q₁ − q₂) dγ.

Theorem 2.2. There exist (y_i, p_i, u_i) ∈ 𝒱_i × 𝒱_i × Uⁱ_ad and λ, μ ∈ H^{-1/2}(γ) satisfying the constraints (2.2) and ℒ ≥ 0 for all z_i, q_i ∈ 𝒱_i, v_i ∈ Uⁱ_ad. Such a solution is moreover unique.

Proof

Since (y_i, p_i, u_i) is a solution of (2.2), (2.3), it is uniquely determined by (2.5). Now let

(y_i, p_i, u_i, λ, μ) and (y_i, p_i, u_i, λ′, μ′)

be two possible solutions. We have

(2.14) ∫_γ (λ − λ′)(z₁ − z₂) dγ + ∫_γ (μ − μ′)(q₁ − q₂) dγ = 0, ∀ z_i, q_i ∈ 𝒱_i.

But the mapping z_i ↦ z_i|_γ from 𝒱_i into H^{1/2}(γ) has dense image, and (2.14) then implies λ = λ′, μ = μ′, whence uniqueness. Moreover, one verifies that

λ = ∂y₁/∂n₁|_γ = −∂y₂/∂n₂|_γ, μ = ∂p₁/∂n₁|_γ = −∂p₂/∂n₂|_γ. ∎

III - THE COORDINATION METHOD

3.1 - PRINCIPLE OF THE METHOD.

The proposed method is iterative. Suppose we are at step k, with λ = λᵏ, μ = μᵏ. We then seek yᵏ_i, pᵏ_i, uᵏ_i realizing ℒ ≥ 0, ∀ z_i, q_i, v_i as in Theorem 2.2, for the preceding values of λ and μ. The problem decomposes at once into

(3.1) −Δyᵏ_i = 0 in Ω_i, yᵏ_i = 0 on Γ₃ᵢ ∪ Γ₄ᵢ, ∂yᵏ_i/∂n|_{Γ_i} = uᵏ_i, ∂yᵏ_i/∂n|_γ = (−1)^{i−1} λᵏ,

(3.2) −Δpᵏ_i = yᵏ_i − y_id in Ω_i, pᵏ_i = 0 on Γ₃ᵢ ∪ Γ₄ᵢ, ∂pᵏ_i/∂n|_{Γ_i} = 0, ∂pᵏ_i/∂n|_γ = (−1)^{i−1} μᵏ,

(3.3) ∫_{Γ_i} (νuᵏ_i + pᵏ_i)(v_i − uᵏ_i) dΓ_i ≥ 0, ∀ v_i ∈ Uⁱ_ad.

The relations (3.1), (3.2), (3.3) can be interpreted as the necessary and sufficient optimality conditions of the following problem: the state of the system is given by (3.1), and the cost functional is given by

(3.4) Jᵏ_i(uᵏ_i) = ∫_{Ω_i} (yᵏ_i − y_id)² dx + ν ∫_{Γ_i} (uᵏ_i)² dΓ_i.

One then computes λᵏ⁺¹, μᵏ⁺¹ by the following formulas (where S is the duality mapping of H^{1/2}(γ) onto H^{-1/2}(γ)):

(3.5) λᵏ⁺¹ = λᵏ + ρᵏ S(y₁ᵏ − y₂ᵏ), μᵏ⁺¹ = μᵏ + ρᵏ S(p₁ᵏ − p₂ᵏ),

where ρᵏ is a bounded sequence of real numbers.
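The coordination step (3.5) has the structure of a dual (multiplier) ascent: the sub-problems are solved independently for frozen λᵏ, μᵏ, and the multipliers are then corrected proportionally to the interface mismatch. A minimal finite-dimensional analogue — an illustrative quadratic model, not the boundary-value problem of the text — decomposes min f₁(x₁) + f₂(x₂) subject to the "interface" constraint x₁ = x₂:

```python
def coordinate(rho=0.5, iters=200):
    # Sub-problems: f1(x) = (x - 1)^2 and f2(x) = (x - 3)^2, coupled by x1 = x2.
    # Lagrangian: f1(x1) + f2(x2) + lam * (x1 - x2).
    lam = 0.0
    for _ in range(iters):
        # Each sub-problem is solved independently for the frozen multiplier:
        x1 = 1.0 - lam / 2.0   # argmin over x1 of (x1 - 1)^2 + lam * x1
        x2 = 3.0 + lam / 2.0   # argmin over x2 of (x2 - 3)^2 - lam * x2
        # Multiplier update driven by the interface mismatch, cf. (3.5):
        lam += rho * (x1 - x2)
    return x1, x2, lam

x1, x2, lam = coordinate()
# converges to x1 = x2 = 2 with multiplier lam = -2
```

At the fixed point the mismatch x₁ − x₂ vanishes, exactly as the interface jumps y₁ᵏ − y₂ᵏ and p₁ᵏ − p₂ᵏ are driven to zero by (3.5).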

3.2 - THE CONVERGENCE PROBLEM.

One first verifies the following relation:

(3.6) Σ_{i=1,2} ∫_{Ω_i} |grad(y_i − yᵏ_i)|² dx − Σ_{i=1,2} ∫_{Γ_i} (u_i − uᵏ_i)(yᵏ_i − y_i) dΓ_i + a Σ_{i=1,2} ∫_{Ω_i} |grad(p_i − pᵏ_i)|² dx − a Σ_{i=1,2} ∫_{Ω_i} (yᵏ_i − y_i)(p_i − pᵏ_i) dx + bν Σ_{i=1,2} ∫_{Γ_i} (u_i − uᵏ_i)² dΓ_i + b Σ_{i=1,2} ∫_{Γ_i} (pᵏ_i − p_i)(u_i − uᵏ_i) dΓ_i ≤ ∫_γ (λᵏ − λ)(y₁ᵏ − y₁ − y₂ᵏ + y₂) dγ + (1/a) ∫_γ (μᵏ − μ)(p₁ᵏ − p₁ − p₂ᵏ + p₂) dγ,

whence also (with obvious notation, since y₁ = y₂ and p₁ = p₂ on γ, and estimating the cross terms by Young's inequality with a parameter η > 0)

(3.7) b(ν − η/2) Σ_{i=1,2} |u_i − uᵏ_i|²_{Γ_i} + 𝒳ᵏ + a𝒴ᵏ − (b/2η) Σ_{i=1,2} |pᵏ_i − p_i|²_{Γ_i} ≤ ∫_γ (λᵏ − λ)(y₁ᵏ − y₂ᵏ) dγ + (1/a) ∫_γ (μᵏ − μ)(p₁ᵏ − p₂ᵏ) dγ,

where 𝒳ᵏ and 𝒴ᵏ collect the quadratic terms in grad(y_i − yᵏ_i) and grad(p_i − pᵏ_i). But by Poincaré's inequality and the continuity of the trace map, there exists a constant C such that

(3.8) ∫_{Ω_i} |grad z|² dx ≥ C ( ∫_{Ω_i} z² dx + ∫_{Γ_i} z² dΓ_i ), ∀ z ∈ 𝒱_i,

so that (3.7) and (3.8) imply an inequality of the form

(3.9) Σ_{i=1,2} |u_i − uᵏ_i|²_{Γ_i} ( bν − c(η,b,C) ) + 𝒳̃ᵏ + a 𝒴̃ᵏ ≤ ∫_γ (λᵏ − λ)(y₁ᵏ − y₂ᵏ) dγ + (1/a) ∫_γ (μᵏ − μ)(p₁ᵏ − p₂ᵏ) dγ,

with 𝒳̃ᵏ, 𝒴̃ᵏ nonnegative.

Suppose then that

(3.10) 4νC > 1.

One can then choose the positive constants η, a and b — which is possible precisely thanks to (3.10) — so that there exists a constant C̄ > 0 with

(3.11) C̄ Σ_{i} ( ‖y_i − yᵏ_i‖² + ‖p_i − pᵏ_i‖² + |u_i − uᵏ_i|² ) ≤ ∫_γ (λᵏ − λ)(y₁ᵏ − y₁ − y₂ᵏ + y₂) dγ + (1/a) ∫_γ (μᵏ − μ)(p₁ᵏ − p₁ − p₂ᵏ + p₂) dγ.

Set ℓᵏ = λᵏ − λ, mᵏ = μᵏ − μ. By (3.5) (and since y₁ = y₂, p₁ = p₂ on γ),

ℓᵏ⁺¹ = ℓᵏ + ρᵏ S(y₁ᵏ − y₁ − y₂ᵏ + y₂), mᵏ⁺¹ = mᵏ + ρᵏ S(p₁ᵏ − p₁ − p₂ᵏ + p₂),

whence

(3.12) ‖ℓᵏ⁺¹‖² ≤ ‖ℓᵏ‖² + (ρᵏ)² C₁ Σ_{i} ‖yᵏ_i − y_i‖² + 2ρᵏ ∫_γ (λᵏ − λ)(y₁ᵏ − y₁ − y₂ᵏ + y₂) dγ,

where C₁ is a constant. From (3.12) and the analogous relation written for mᵏ, one deduces

(3.13) ‖ℓᵏ⁺¹‖² + a² ‖mᵏ⁺¹‖² ≤ ‖ℓᵏ‖² + a² ‖mᵏ‖² + (ρᵏ)² C₂ Σ_{i} ( ‖yᵏ_i − y_i‖² + ‖pᵏ_i − p_i‖² ) − 2 C̄ ρᵏ Σ_{i} ( ‖yᵏ_i − y_i‖² + ‖pᵏ_i − p_i‖² + |uᵏ_i − u_i|² ).

It is clear that the sequence ρᵏ can be chosen so that ‖ℓᵏ‖² + ‖mᵏ‖² is decreasing and convergent. It follows that

yᵏ_i → y_i, pᵏ_i → p_i, uᵏ_i → u_i.

Moreover, λᵏ and μᵏ remain bounded. A passage to the limit in ℒ ≥ 0 shows the weak convergence of λᵏ, μᵏ respectively to λ, μ. We have thus proved the


Theorem 3.1

Under hypothesis (3.10), we have

(3.14) yᵏ_i → y_i, pᵏ_i → p_i in 𝒱_i,

(3.15) uᵏ_i → u_i in L²(Γ_i),

(3.16) λᵏ → λ, μᵏ → μ in H^{-1/2}(γ) weakly.

IV - NUMERICAL APPLICATION

4.1 - STATEMENT OF THE PROBLEM.

We take Ω = ]0,4[ × ]0,1[ and decompose Ω into four sub-domains Ω_i, i = 1,2,3,4, as indicated in Figure 3 (interfaces γ₁₂, γ₂₃, γ₃₄ between consecutive unit squares, with boundary pieces Γ₃₁,...,Γ₃₄ and Γ₄₁,...,Γ₄₄), the notation being obvious variants of that of Section 2.1. The cost functional being that of Section 1.1, relation (1.2), the state of the system is given (with f ≠ 0, since this facilitates the construction of exact solutions) by

(4.1) −Δz = f in Ω, z|_{Γ₃∪Γ₄} = 0, ∂z/∂n|_{Γ₁∪Γ₂} = v,

with

(4.2) v ∈ 𝒰_ad = 𝒰 = L²(Γ₁) × L²(Γ₂).

4.2 - CHOICE OF THE TEST PROBLEM.

If in (4.1) and in the cost functional we take f and y_d defined, respectively, by

(4.3) f(x₁,x₂) = 2(−x₁² − x₂² + 4x₁ + x₂),

(4.4) y_d(x₁,x₂) = −(x₂ − x₂²)(x₁² − 4x₁) − 8ν,

one easily verifies that the optimal control problem Min_{v∈𝒰} J(v) admits the unique solution

(4.5) v₁(x₂) = v₂(x₂) = −4(x₂ − x₂²),

the corresponding state y and adjoint state p being given by

(4.6) y(x₁,x₂) = −(x₂ − x₂²)(x₁² − 4x₁),


(4.7) p(x₁,x₂) = 4ν(x₂ − x₂²).

4.3 - IMPLEMENTATION OF THE DECOMPOSITION-COORDINATION ALGORITHM.

The algorithm of Section 3.1, relative to a decomposition of Ω into two sub-domains, generalizes easily to the decomposition of Section 4.1. In view of relations (3.1), (3.2), at each iteration an elliptic optimal control problem must be solved in each Ω_i, i = 1,2,3,4; the equations (3.1), (3.2) were approximated by a finite-difference discretization of step h = 1/20, i.e. about 400 discretization points per sub-domain. The control sub-problems above were solved by the gradient method, whose implementation is facilitated by noting that the discretization matrices associated with equations (3.1), (3.2) are identical on each sub-domain and independent of i and k; these matrices being symmetric and positive definite, one can use a Cholesky factorization computed once and for all at the beginning of the program.
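The factor-once, solve-many observation can be sketched as follows (NumPy stands in for the Fortran routines of the text; the 1-D discrete Laplacian is an illustrative example of a symmetric positive definite discretization matrix):

```python
import numpy as np

def cholesky_solver(A):
    """Factor the SPD matrix A = L L^T once and return a reusable solver."""
    L = np.linalg.cholesky(A)
    n = L.shape[0]

    def solve(b):
        b = np.asarray(b, dtype=float)
        z = np.zeros(n)                 # forward substitution: L z = b
        for i in range(n):
            z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
        x = np.zeros(n)                 # back substitution: L^T x = z
        for i in reversed(range(n)):
            x[i] = (z[i] - L[i+1:, i] @ x[i+1:]) / L[i, i]
        return x

    return solve

# A 1-D discrete Laplacian: symmetric positive definite, factored once...
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
solve = cholesky_solver(A)
# ...then reused for every right-hand side (every iteration k, every sub-domain i):
x = solve(np.ones(n))
```

Each additional right-hand side then costs only two O(n²) triangular solves instead of a fresh O(n³) factorization — the source of the reported gain in execution time.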

4.4 - NUMERICAL RESULTS.

In the two examples treated, the decomposition-coordination algorithm was initialized by taking λ¹ = 0, μ¹ = 0, and S = Identity was taken in (3.5).

Example 1

It corresponds to ν = 5; for ρᵏ = ρ = 2 one obtains

max |y₁¹ − y₂¹| = 0.653, max |y₁⁴ − y₂⁴| = 1.5 × 10⁻², max |y₁¹⁰ − y₂¹⁰| = 1.8 × 10⁻⁴.

The speed of convergence was improved by using a variable-ρᵏ strategy of steepest-descent type, allowing one to obtain max |y₁ᵏ − y₂ᵏ| = 1.8 × 10⁻⁴ in fewer iterations, whence an important gain with respect to the fixed-ρ method. ∎

Example 2

It corresponds to ν = 0.125; the problem was solved using the variable-ρᵏ strategy above. Since ν is smaller than in the preceding problem, the convergence is slower:

max |y₁¹ − y₂¹| = 0.506, max |y₁⁴ − y₂⁴| = 2 × 10⁻³, max |y₁¹⁰ − y₂¹⁰| = 10⁻⁴.

Remark 3.1

In the two preceding examples, the solution at each iteration of the optimal control sub-problems by the gradient method never required more than three internal iterations (for a rather severe stopping test). ∎

Remark 3.2

For ν of order 10⁻², convergence of the decomposition-coordination algorithm could not be obtained, condition (3.10) being visibly not satisfied. ∎

Computation time

In the two preceding examples, of the order of 1 minute for 5 coordination iterations.


(1) J.L. Lions. Contrôle optimal des systèmes gouvernés par des équations aux dérivées partielles. Dunod-Gauthier-Villars (1968).

(1) J.L. Lions - E. Magenes. Problèmes aux limites non homogènes (T.1). Dunod (1968).


APPROXIMATION OF OPTIMAL CONTROL PROBLEMS OF SYSTEMS

DESCRIBED BY BOUNDARY-VALUE MIXED PROBLEMS

OF DIRICHLET-NEUMANN TYPE

P. COLLI FRANZONE

Laboratorio di Analisi Numerica del C.N.R. - Pavia, Italia

INTRODUCTION

We report here some results on the optimal control of systems described by boundary-value mixed problems of Dirichlet-Neumann type for second order linear elliptic partial differential operators. These results are achieved in the framework of the deterministic theory as developed in Lions' book [1] on optimal control. We consider here the case of a boundary control on the Dirichlet condition. The state of the system is the solution of a mixed non-variational problem.

The initial control problem I) is approximated by a family I_ε) of control problems of systems described by variational problems of Neumann type, obtained by applying the penalization method. Convergence results are reported. Moreover, a family I_ν) of finite-dimensional optimization problems, which converges to the initial problem I), is considered. Results of some numerical experiments are reported. (*)

Statement of the problem; notations and definitions.

Let Ω be a bounded open set of Rⁿ whose boundary Γ is an (n−1)-dimensional manifold, and let Σ be an (n−2)-dimensional manifold which separates Γ into two open non-empty disjoint sets Γ₀ and Γ₁ such that:

Γ = Γ₀ ∪ Γ₁ ∪ Σ, Γ₀ ∩ Γ₁ = ∅, Γ̄₀ ∩ Γ̄₁ = Σ.

For u, v ∈ H¹(Ω)(1), set:

(1) a(u,v) = Σ_{i,j=1}ⁿ ∫_Ω a_ij(x) (∂u/∂x_j)(∂v/∂x_i) dx + ∫_Ω a₀ u v dx.

Hereafter we assume the following regularity hypotheses on Ω and on the

(*) Completely detailed proofs will appear in "Rend. Istituto Lombardo (Sc. Mat. e Nat.)".

(1) H¹(Ω) = { v : v ∈ L²(Ω), ∂v/∂x_i ∈ L²(Ω), i = 1,...,n }; for the definitions of the real Sobolev spaces Hˢ(Ω) and Hˢ(Γ), s ∈ R, see Lions-Magenes [1], chap. 1.


coefficients in the form a(u,v):

i) ∂Ω = Γ and Σ are C²-manifolds; locally Ω is totally on one side of Γ;(2)

ii) a_ij(x) ∈ C³(Ω̄), a₀(x) ∈ C¹(Ω̄);

iii) Σ_{i,j=1}ⁿ a_ij(x) ξ_i ξ_j ≥ α|ξ|², ∀ ξ = (ξ₁,ξ₂,...,ξₙ) ∈ Rⁿ, ∀ x ∈ Ω̄, α > 0; a₀(x) ≥ 0, ∀ x ∈ Ω̄.

The form (1) is continuous and bilinear on H¹(Ω) and defines the second-order elliptic differential operator:

(2) A = −Σ_{i,j=1}ⁿ ∂/∂x_i ( a_ij(x) ∂/∂x_j ) + a₀(x).

The adjoint form a*(u,v) is defined by a*(u,v) = a(v,u), ∀ u,v ∈ H¹(Ω); the associated operator A* is:

(3) A* = −Σ_{i,j=1}ⁿ ∂/∂x_i ( a_ji(x) ∂/∂x_j ) + a₀(x).

State of the system

Set T = { t : t ∈ H¹(Ω), r₀γ₀t = 0 }(3) and let φ ∈ H^{−τ}(Ω), τ ∈ [0,1[; let w ∈ T be the unique solution of the variational equation:

(4) a*(w,t) = <φ,t>_{−τ,τ}, ∀ t ∈ T,(4)

which is equivalent to the homogeneous mixed problem of Dirichlet-Neumann type:(5)

(5) A*w = φ in Ω, r₀γ₀w = 0, r₁γ_{A*}w = 0.

Under the assumptions i), ii) it follows from the regularity results by Shamir [1]:

w ∈ H^{3/2−η}(Ω), ∀ η > 0.

(2) C^k(Ω̄), k positive integer, is the space of functions which are k-times continuously differentiable in Ω̄.

(3) We denote by r₀ and r₁ the restriction operators from Γ to Γ₀ and Γ₁.

(4) The bracket < , > indicates the duality between Hˢ and its dual space.

(5) γ₀u = trace of u on Γ; γ_A u = Σ_{i,j=1}ⁿ a_ij (∂u/∂x_j) cos(n,x_i), the conormal derivative to A on Γ; cos(n,x_i) = i-th direction cosine of n, n being the normal on Γ exterior to Ω. For trace theorems, see Lions-Magenes [1].

The space defined by


X_{A*,τ}(Ω) = { v : v ∈ H^{3/2−η}(Ω), A*v ∈ H^{−τ}(Ω), r₀γ₀v = 0, r₁γ_{A*}v = 0 },

endowed with the norm

‖v‖²_{X_{A*,τ}} = ‖v‖²_{3/2−η} + ‖A*v‖²_{−τ},

is a Hilbert space for η ∈ ]0,1] and τ ∈ [0,1[.

The operator A* is thus an algebraic and topological isomorphism of X_{A*,τ}(Ω) onto H^{−τ}(Ω) for each η ∈ ]0,1] and τ ∈ [0,1[.

The data f, g₀, g₁ being chosen, by transposition there exists a unique y ∈ H^{1/2+τ}(Ω), ∀ τ ∈ [0,½[, solution of:

(4') <y, A*w>_{τ,−τ} = <f,w>_{−τ,τ} − <g₀, r₀γ_{A*}w>_{Γ₀} + <g₁, r₁γ₀w>_{Γ₁},(6) ∀ w ∈ X_{A*,τ}(Ω).

By definition y is the "weak" solution of the mixed problem:

(5') Ay = f in Ω, r₀γ₀y = g₀, r₁γ_A y = g₁.

The regularity result

y ∈ H^{1/2+τ}(Ω)

follows from Vishik-Eskin's work [1], and therefore the boundary conditions may be interpreted according to trace theorems.

We point out that the state y of the system is the solution of a boundary-value problem of non-variational type; this means that the state y does not belong to H¹(Ω).

Control space

In many applications, the case in which the control is exercised through the boundary (i.e. on the boundary conditions) is very frequently encountered. Usually controls are described by measurable functions; however piecewise constant controls are often used.

We shall consider the case of a boundary control on the Dirichlet condition on Γ₀. If we want to develop our theory in a Hilbert space framework and still allow piecewise constant controls (or controls with even stronger singularities), then we are led to choose as control space

(6) The mapping w ↦ r₁γ₀w is a linear continuous map of T^σ = { v : v ∈ H^{3/2−σ}(Ω), r₀γ₀v = 0 }, σ < 1, into H^{1−σ}(Γ₁); the bracket <,> then indicates the duality of H^{−1+σ}(Γ₁) and H^{1−σ}(Γ₁).


the following Sobolev space:

𝒰 = Hˢ(Γ₀), where s ∈ ]0,½[.

For each v ∈ 𝒰, let y(v) ∈ H^{1/2+s}(Ω) be the unique solution of:

(6) P) Ay(v) = f in Ω, r₀γ₀y(v) = g₀ + v, r₁γ_A y(v) = g₁.

Observations space

We confine ourselves to the case in which the trace of y is observed on Γ₁. The mapping v ↦ r₁γ₀y(v) is an affine continuous map of 𝒰 into Hˢ(Γ₁); in order to avoid as much as possible the use of the spaces Hˢ with s not an integer, we may consider in particular r₁γ₀y(v) ∈ L²(Γ₁) and then choose as observations space:

Y = L²(Γ₁).

Control problem

Given

𝒰_ad, a closed convex and bounded subset of 𝒰, and y_d ∈ Y,

with every control v ∈ 𝒰 we associate the "cost function":

J(v) = ∫_{Γ₁} |r₁γ₀y(v) − y_d|² dσ + λ ∫_{Γ₀} v² dσ, λ ≥ 0.

We shall consider the optimization problem:

I) Find u ∈ 𝒰_ad minimizing J(v) over 𝒰_ad.

We obtain the following result:

Proposition -1-

Problem I) admits a unique(7) solution u, termed optimal control, which is characterized by the inequality:

(7) ∫_{Γ₁} [r₁γ₀y(u) − y_d]·r₁γ₀y(v−u) dσ + λ ∫_{Γ₀} u(v−u) dσ ≥ 0, ∀ v ∈ 𝒰_ad.

(7) For λ > 0 uniqueness follows from J(v) being strictly convex; for λ = 0 it follows from the uniqueness of the Cauchy problem for the elliptic operator A.

Let us now transform (7) by introducing the adjoint state p(u), unique solution in H¹(Ω) of the problem:

(8) A*p(u) = 0 in Ω, r₀γ₀p(u) = 0, r₁γ_{A*}p(u) = −(r₁γ₀y(u) − y_d).

By a regularity result of Shamir the solution p(u) belongs to H^{3/2−η}(Ω), ∀ η > 0. This allows us to apply Green's formula and thus transform inequality (7) into the equivalent form:


(9) <r₀γ_{A*}p(u) + λu, v − u>_{Γ₀} ≥ 0, ∀ v ∈ 𝒰_ad.(8)

Therefore the system (6), (8), (9) admits a unique solution {y, p, u}, where u is the optimal control.

Approximation of I)

The main task in the numerical solution of problem I) is to produce a good approximation scheme for the state equation (6), which is of non-variational type.

Set

a_ε(w,φ) = a(w,φ) + ε⁻¹ ∫_{Γ₀} r₀γ₀w · r₀γ₀φ dσ, w, φ ∈ H¹(Ω)(9);

we approximate problem P) (6) by means of a family P_ε) of variational problems:

(10) For each ε > 0, let y_ε(v) be the unique solution in H¹(Ω) of the variational equation:

a_ε(y_ε(v), φ) = <f,φ>_{−τ,τ} + ε⁻¹ ∫_{Γ₀} (g₀ + v)·r₀γ₀φ dσ + <g₁, r₁γ₀φ>_{Γ₁},

where we assume g₁ ∈ H^{−1/2}(Γ₁) while g₀ and v belong to 𝒰.

The variational equation (10) is equivalent to the following boundary value problem of Neumann type:(10)

(11) P_ε) A y_ε(v) = f in Ω, γ_A y_ε(v) + ε⁻¹ χ_{Γ₀} γ₀ y_ε(v) = g_ε(v) on Γ,

with g_ε(v) = ε⁻¹(g₀ + v) on Γ₀ and g_ε(v) = g₁ on Γ₁,

(8) We note that: J′(u) = r₀γ_{A*}p(u) + λu.

(9) We note that, for v ∈ H¹(Ω), ‖v‖²_{1,Ω} ≤ M(Ω)[ Σᵢ ‖∂v/∂x_i‖²_{0,Ω} + ‖r₀γ₀v‖²_{0,Γ₀} ]; since Γ₀ is a non-empty open set of Γ, it follows that a_ε(v,v) ≥ β‖v‖²_{1,Ω}, ∀ v ∈ H¹(Ω), where β > 0 is independent of v and ε (small enough); a_ε(u,v) is therefore coercive on H¹(Ω).

(10) We note that the boundary condition on Γ is of "natural type".


where χ_{Γ₀} is the characteristic function of Γ₀ on Γ.
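The effect of the penalty term ε⁻¹∫_{Γ₀} γ₀w·γ₀φ dσ is already visible in one dimension: replacing a Dirichlet condition u(0) = g by the natural condition −u′(0) + ε⁻¹(u(0) − g) = 0 recovers u(0) → g as ε → 0. The finite-difference sketch below is an illustrative stand-in (not the space V_h of the paper); for −u″ = 0 on (0,1) with u(1) = 0, the penalized problem satisfies u(0) = g/(1+ε) exactly.

```python
import numpy as np

def penalized_dirichlet(eps, g=1.0, N=50):
    """-u'' = 0 on (0,1), u(1) = 0; the condition u(0) = g imposed by penalization."""
    h = 1.0 / N
    A = np.zeros((N, N))   # unknowns u_0 .. u_{N-1}; u_N = 0 is imposed strongly
    b = np.zeros(N)
    # penalized (natural) boundary row at x = 0:
    A[0, 0] = 1.0 / h + 1.0 / eps
    A[0, 1] = -1.0 / h
    b[0] = g / eps
    # interior stiffness rows (piecewise-linear elements):
    for i in range(1, N):
        A[i, i-1] = -1.0 / h
        A[i, i] = 2.0 / h
        if i + 1 < N:
            A[i, i+1] = -1.0 / h
    return np.linalg.solve(A, b)

u = penalized_dirichlet(1e-4)   # u[0] equals 1/(1 + 1e-4): g is recovered as eps -> 0
```

Since the exact penalized solution is linear, the discrete solution reproduces it at the nodes, so the boundary error is purely the O(ε) penalization error — mirroring the role of the coupling ε = M|h|ˢ chosen below.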

Let us consider the family I_ε) of control problems hereafter described:

I_ε) Find u_ε ∈ 𝒰_ad minimizing J_ε(v) over 𝒰_ad,

where J_ε(v) = ∫_{Γ₁} |r₁γ₀y_ε(v) − y_d|² dσ + λ ∫_{Γ₀} v² dσ.

We then obtain:

Proposition -2-

For each ε > 0, problem I_ε) admits a unique solution u_ε; moreover a necessary and sufficient condition for u_ε to be an optimal control for I_ε) is that the following equations and inequalities be satisfied:

P_ε) a_ε(y_ε, φ) = <f,φ>_{−τ,τ} + ε⁻¹ ∫_{Γ₀} (g₀ + u_ε)·r₀γ₀φ dσ + <g₁, r₁γ₀φ>_{Γ₁}, ∀ φ ∈ H¹(Ω),

P*_ε) a*_ε(p_ε, φ) = −∫_{Γ₁} [r₁γ₀y_ε − y_d] r₁γ₀φ dσ, ∀ φ ∈ H¹(Ω),

∫_{Γ₀} (−ε⁻¹ r₀γ₀p_ε + λu_ε)(v − u_ε) dσ ≥ 0, ∀ v ∈ 𝒰_ad.

We have the following convergence result:

Theorem -1-

As ε → 0 we have:

u_ε converges to u weakly in 𝒰 (strongly in L²(Γ₀)).

Furthermore:

y_ε → y in H^{τ}(Ω), ∀ τ < 1/2; p_ε → p in H^{1+δ}(Ω), ∀ δ < 1/2;

and

J_ε(u_ε) → J(u).

Numerical approximation

Let ℋ be the set Rⁿ₊ ordered as follows:

(h₁,...,hₙ) ≤ (h′₁,...,h′ₙ) ⟺ hᵢ ≤ h′ᵢ, i = 1,...,n.

For each h ∈ ℋ let V_h be a finite-dimensional subspace of H¹(Ω), endowed with the induced norm.

We now suppose that the approximation V_h satisfies the following:

Condition -1-

For each h ∈ ℋ there exists a linear continuous operator π_h of H¹(Ω)


into V_h such that:

lim_{h→0} ‖y − π_h y‖_{1,Ω} = 0, ∀ y ∈ H¹(Ω);

‖y − π_h y‖_{k,Ω} ≤ C |h|^{s−k} ‖y‖_{s,Ω}, ∀ y ∈ Hˢ(Ω), k ∈ [0,1[, s ∈ [k,2],

C being a constant independent of h and y.

We approximate problem P_ε) (11) with the family of problems P_{ε,h}):

For each ε > 0 and h ∈ ℋ, let y_{ε,h}(v) ∈ V_h be the unique solution of:

P_{ε,h}) a_ε(y_{ε,h}(v), w_h) = <f,w_h>_{−τ,τ} + ε⁻¹ ∫_{Γ₀} (g₀ + v)·r₀γ₀w_h dσ + <g₁, r₁γ₀w_h>_{Γ₁}, ∀ w_h ∈ V_h.

Set 𝒦 = R₊; for each k ∈ 𝒦 let 𝒰_k be a finite-dimensional subspace of 𝒰, endowed with the induced norm, and let q_k be a linear continuous operator of 𝒰 into 𝒰_k such that:

∀ v ∈ 𝒰, lim_{k→0} ‖v − q_k v‖_𝒰 = 0.

There exist approximations {V_h, π_h} and {𝒰_k, q_k} which satisfy the stated hypotheses; see, for example, Aubin [1], Bramble-Schatz [1].

Finally, for each k ∈ 𝒦, let 𝒰ᵏ_ad be a closed convex set in 𝒰_k (approximating 𝒰_ad).

Set ε = M|h|ˢ, with M a positive constant and s > 0, and write y_{ε,h}(v) = y_h(v); let us consider the family I_ν) of optimization problems:

I_ν) Find u_ν ∈ 𝒰ᵏ_ad minimizing J_ν(v_k) over 𝒰ᵏ_ad,

where

J_ν(v_k) = ∫_{Γ₁} |r₁γ₀y_h(v_k) − y_d|² dσ + λ ∫_{Γ₀} v_k² dσ, and ν = (h,k).

Proposition -3-

For any ν, problem I_ν) admits at least one solution; the set of the solutions u_ν of I_ν) is characterized by the inequality:

∫_{Γ₁} [r₁γ₀y_h(u_ν) − y_d]·r₁γ₀y_h(v_k − u_ν) dσ + λ ∫_{Γ₀} u_ν(v_k − u_ν) dσ ≥ 0, ∀ v_k ∈ 𝒰ᵏ_ad.

Furthermore, u_ν is a solution of I_ν) if and only if it satisfies the system:

P_h) a_ε(y_h, w_h) = <f,w_h>_{−τ,τ} + ε⁻¹ ∫_{Γ₀} (g₀ + u_ν)·r₀γ₀w_h dσ + <g₁, r₁γ₀w_h>_{Γ₁}, ∀ w_h ∈ V_h,

P*_h) a*_ε(p_h, w_h) = −∫_{Γ₁} [r₁γ₀y_h − y_d]·r₁γ₀w_h dσ, ∀ w_h ∈ V_h,

∫_{Γ₀} [−ε⁻¹ r₀γ₀p_h + λu_ν](v_k − u_ν) dσ ≥ 0, ∀ v_k ∈ 𝒰ᵏ_ad.(11)

The above described approximation is supported by the following convergence result:

Theorem -2-

For ε = M|h|ˢ, M a positive constant and s ∈ ]0,1[, as ν → 0 (i.e. h → 0, k → 0 independently) the sequence {u_ν}, u_ν being a solution of I_ν), converges weakly in 𝒰 (strongly in L²(Γ₀)) to the unique solution u of problem I).

Finally, some results of numerical experiments performed by G. Gazzaniga (Laboratorio di Analisi Numerica del C.N.R. - Pavia) and myself are reported; computations were carried out on an IBM 360/44 installed at the Centro di Calcoli Numerici - University of Pavia.

Let us consider the case of Fig. 1, in which Σ = {(0.1,0), (0.9,0)} is the set of points of discontinuity in the boundary conditions.

Having chosen Ω, Γ₀, Γ₁, Σ as in Figure 1, let Δ_k be the partition of Γ₀ into k intervals of equal length and let 𝒰ᵏ_ad be the set of functions on Γ₀ piecewise constant on the partition Δ_k and with values between 0 and π.

Let h = (h₁,h₂) be an element of R²₊. We denote by ℳ_h the regular mesh of points M = (m₁h₁, m₂h₂), m_i ∈ Z.

We consider the approximation V_h of H¹(Ω) hereafter described:

Ω_h(M) = Π_{i=1,2} [(m_i − 1)h_i, (m_i + 1)h_i],

V_h = { z_h : z_h = Σ_{M ∈ ℳ_h(Ω̄)} z_h^M θ_h^M, z_h^M ∈ R },

θ_h^M(x) = Π_{i=1,2} ( 1 − |x_i − m_i h_i| / h_i ) for x ∈ Ω_h(M), θ_h^M(x) = 0 for x ∉ Ω_h(M).

Numerical experiments are performed on the model problem:

−Δy(v) = 0 in Ω, y|_{Γ₀} = v, ∂y/∂n|_{Γ₁} = g₁, J(v) = ∫_{Γ₁} |y(v)|_{Γ₁} − y_d|² dσ,

and we choose y_d = z|_{Γ₁} and g₁ = ∂z/∂n|_{Γ₁}, where:

a) z = arctan( x₂ / (x₁ − 0.5) ); then y_ott = z and u_ott = z|_{Γ₀}, i.e. u_ott = π for x₁ < 0.5, π/2 at x₁ = 0.5, and 0 for 0.5 < x₁ ≤ 0.9;

b) z = arctan( x₂ / (x₁ − 0.65) ) − arctan( x₂ / (x₁ − 0.35) ); then y_ott = z and u_ott = z|_{Γ₀}, i.e. u_ott = 0 for x₁ < 0.35, π/2 at x₁ = 0.35, π for 0.35 < x₁ < 0.65, π/2 at x₁ = 0.65, and 0 for 0.65 < x₁ ≤ 0.9.

The solution of problem I_ν) has been achieved by using the direct gradient method with projection, namely the method using as direction of descent the projection of the gradient on the linear manifold determined by the active constraints, and a conjugate direction at the next step if no other constraint becomes active. Figure 2 represents the approximated optimal control found in the specific numerical cases considered.

(11) We observe that J′_ν(u_ν) = −ε⁻¹ r₀γ₀p_h + λu_ν; the described scheme presents the advantage of calculating the gradient values through the trace of the adjoint state p_h on Γ₀.
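A gradient method with projection can be sketched on a generic quadratic cost over the box constraints 0 ≤ v ≤ π of the text; the quadratic J below is an illustrative stand-in for J_ν, and the conjugate-direction refinement mentioned above is omitted.

```python
import numpy as np

def projected_gradient(grad, v0, lo, hi, step, iters=500):
    """Minimize a smooth J over the box [lo, hi]^n by projected gradient descent."""
    v = v0.copy()
    for _ in range(iters):
        v = np.clip(v - step * grad(v), lo, hi)  # gradient step, then projection
    return v

# Illustrative quadratic J(v) = 0.5*||v - t||^2 with target t partly outside the box:
t = np.array([-1.0, 1.0, 4.0])
v = projected_gradient(lambda v: v - t, np.zeros(3), 0.0, np.pi, step=0.5)
# components are clipped to the box: v is approximately [0, 1, pi]
```

The projection onto a box is just componentwise clipping, which is what makes piecewise constant controls with simple bounds, as used here, particularly convenient.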

[Figure 2: approximated optimal controls for the test cases a) and b).]


REFERENCES

AUBIN, J.P. [1] - "Approximation of elliptic boundary-value problems". Wiley-Interscience, vol. XXVI, (1972).

BRAMBLE, J.H. - SCHATZ, A.H. [1] - "Least squares methods for 2mth order elliptic boundary-value problems". Math. Comp. vol. 25, N. 113, pp. 1-31, (1971).

BENSOUSSAN - BOSSAVIT - NEDELEC [1] - "Approximation des problèmes de contrôle". Cahiers de l'I.R.I.A., n. 2.

COLLI FRANZONE, P. [1] - "Approssimazione mediante il metodo di penalizzazione di problemi misti di Dirichlet-Neumann per operatori lineari ellittici del secondo ordine", to appear in Boll. U.M.I.

COLLI FRANZONE, P. - GAZZANIGA, G. [1] - "Sull'analisi numerica del problema misto di Dirichlet-Neumann per equazioni lineari ellittiche del secondo ordine". Pubblicazione N. 34 del L.A.N. del C.N.R. di Pavia.

LATTES, R. - LIONS, J.L. [1] - "Méthode de quasi réversibilité et applications". Dunod, Paris, (1967).

LIONS, J.L. [1] - "Optimal control of systems governed by partial differential equations". Grundlehren B. 170, Springer, Berlin, (1971).

[2] - "Some aspects of the optimal control of distributed parameter systems". Series in Applied Mathematics, SIAM, vol. 6, (1972).

LIONS, J.L. - MAGENES, E. [1] - "Non-homogeneous boundary value problems and applications". Grundlehren B. 181, Springer, Berlin, (1971).

SHAMIR, E. [1] - "Regularization of mixed second order elliptic problems". Israel J. of Math., 6, pp. 150-168, (1968).

VISHIK, I.M. - ESKIN, I.G. [1] - "Elliptic convolution equations in bounded domains and their applications". Uspehi Mat. Nauk 22, 1 (133), pp. 15-76, (1967).

YVON, J.P. [1] - "Application de la pénalisation à la résolution d'un problème de contrôle optimal". Cahier de l'IRIA n. 2, (1970).


CONTROL OF PARABOLIC SYSTEMS WITH BOUNDARY CONDITIONS INVOLVING TIME-DELAYS

P.K.C. Wang

Department of System Science

University of California

Los Angeles, California

U.S.A.

In this paper, we consider various problems in the optimal control and stability

of parabolic systems with boundary conditions involving time-delays. These problems

arise physically in the control of diffusion processes in which time-delayed feedback

signals are introduced at the boundary of a system's spatial domain. To illustrate

the basic ideas, only the results for parabolic systems with the simplest forms of

boundary conditions involving time-delays will be presented. Results of a more

general nature are discussed in detail in reference [6].
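Because the delayed boundary term references the solution one delay in the past, such problems can be advanced one delay-interval at a time: on each interval the delayed term is known data, a device known as the method of steps (and the mechanism behind the cylinder-by-cylinder existence argument given below). A minimal ODE analogue — the scalar delay equation y′(t) = −y(t−τ) with forward Euler, both illustrative stand-ins for the parabolic system considered here:

```python
def method_of_steps(K=2, tau=1.0, n=20000):
    """Solve y'(t) = -y(t - tau) on [0, K*tau] with history y = 1 on [-tau, 0]."""
    dt = tau / n
    history = [1.0] * n          # y on the previous interval, sampled at step dt
    y = 1.0                      # y(0)
    values = [y]
    for _ in range(K):           # advance one delay-interval (one cylinder) at a time
        current = []
        for i in range(n):
            y += dt * (-history[i])   # forward Euler; the delayed term is known data
            current.append(y)
        history = current        # the solution just computed feeds the next interval
        values.append(y)
    return values                # y at t = 0, tau, 2*tau, ...

vals = method_of_steps()
# exact values: y(tau) = 0 and y(2*tau) = -1/2
```

On each interval the delayed argument lands in already-computed data, so every step solves an ordinary (undelayed) problem — just as each Q_j below is a standard parabolic problem once Q_{j-1} is known.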

I. PRELIMINARIES

Let Ω be a bounded open set in Rⁿ with an infinitely differentiable boundary Γ, and let I denote a given finite time interval ]0,T[. We consider a parabolic system of the form:

(1) ∂y/∂t − Δy = f in Q = Ω×I,

where Δ is the Laplacian operator. The function f corresponds to either a distributed control or a specified function defined in Q.

Let Γ₁ and Γ₂ be given disjoint subsets of Γ such that Γ = Γ₁ ∪ Γ₂. Let Σ = Γ×I and Σ_i = Γ_i×I, i = 1,2. We consider the following Neumann boundary condition involving a time-delay:

(2) ∂y/∂ν(x,t) ≜ Σ_{i=1}ⁿ cos(N,x_i) ∂y/∂x_i(x,t) = q(x,t) on Σ,

where

(3) q(x,t) = β(x){ y(w(x), t−τ) + u(x,t) },

β is a given C^∞ function defined on Γ with compact support in Γ₁; cos(N,x_i) is the i-th directional cosine of the outward normal N at a point x ∈ Γ; u represents either a boundary control or a given function defined on Σ; the time-delay τ is a specified positive number; w is a continuously differentiable bijection of Γ onto Γ such that w(x) = x if x ∈ Γ₂ and w(x) ∈ Γ₂ if x ∈ Γ₁; moreover, its Jacobian does not vanish on Γ.

The initial data for (1) are given by


y(x,0) = Yo(X), x6~, I (4) /

y(x,t') = @0(X,t~), (x,t')@FX[-T,0[, }

where y and 9~ are specified functions. Note that only ~^, the restriction of +0 2 o u to F X[-T,O[, is of importance here. In what follows, we shall first give suffi-

cient conditions for the existence of a unique solution of the mixed initial-boundary

value problem (1)-(4). Then, various optimal control problems will be discussed.

For simplicity, let the final time $T = K\tau$, where $K$ is a given positive integer. We introduce the following notations: $I_j = ](j-1)\tau, j\tau[$, $Q_j = \Omega \times I_j$, $\Sigma_j = \Gamma \times I_j$ and $\Sigma_j^i = \Gamma_i \times I_j$ for $i = 1,2$ and $j = 0,1,\ldots,K$. Let $H^r(\Omega)$, $r \geq 0$, denote the Sobolev space of order $r$ on $\Omega$. For any pair of real numbers $r,s \geq 0$, the Sobolev space $H^{r,s}(Q)$ is defined by

$$H^{r,s}(Q) = H^0(I; H^r(\Omega)) \cap H^s(I; H^0(\Omega)), \quad Q = \Omega \times I,$$

where $H^s(I;X)$ denotes the Sobolev space of order $s$ of functions defined on $I$ with values in $X$.
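As a concrete instance of this definition (stated here for orientation only), the space that recurs below unpacks as:

```latex
% With r = 2, s = 1 the definition above reads
H^{2,1}(Q) \;=\; L^2(I;H^2(\Omega)) \,\cap\, H^1(I;L^2(\Omega)),
% since H^0 = L^2.  Thus y \in H^{2,1}(Q) means that y, \partial y/\partial t
% and all spatial derivatives of y up to order two belong to L^2(Q) --
% the regularity matching \partial y/\partial t - \Delta y = f \in L^2(Q).
```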

For optimal control problems, it is of importance to consider the cases where the control $f$ or $u$ belongs to $L^2(Q)$ or $L^2(\Sigma)$, respectively. For these cases, we have the following result:

Theorem 1: Let $y_0$, $\phi_0$, $u$ and $f$ be given with $y_0 \in H^1(\Omega)$ {resp. $H^{1/2}(\Omega)$}, $\phi_0 \in H^{1/2,1/4}(\Sigma_0)$ {resp. $L^2(\Sigma_0)$}, $u \in H^{1/2,1/4}(\Sigma)$ {resp. $L^2(\Sigma)$} and $f \in L^2(Q)$ {resp. $(H^{1,1/2}(Q))'$, the dual of $H^{1,1/2}(Q)$}. Then there exists a unique solution $y \in H^{2,1}(Q)$ {resp. $H^{3/2,3/4}(Q)$} of problem (1)-(4). Moreover, $y(\cdot, j\tau) \in H^1(\Omega)$ {resp. $H^{1/2}(\Omega)$} for $j = 1, \ldots, K$.

The above theorem can be established by first solving the problem on $Q_1$ using the basic results of Lions and Magenes [3; p.33, p.81] specialized to the case of (1)-(4). In a similar manner, the existence of a unique solution on $Q_2$ can be established by using the solution on $Q_1$ to generate the initial data at $t = \tau$ and boundary data on $\Sigma_2$. This advancing process is repeated for $Q_3, Q_4, \ldots$, until the final cylinder set $Q_K$ is reached. Note that the solution $y$ on $Q_j$ (denoted hereafter by $y_j$), $j = 1,\ldots,K$, is required to satisfy the initial data at $t = (j-1)\tau$:

$$y_j(x,(j-1)\tau) = y_{j-1}(x,(j-1)\tau), \quad x \in \Omega, \qquad (5)$$

and boundary condition (2) whose right-hand side is given by

$$q_j(x,t) = \phi(x)\{y_{j-1}(w(x), t-\tau) + u(x,t)\}. \qquad (6)$$

In order to apply the same results of Lions and Magenes to any $Q_j$, we must verify that $y_{j-1}$ and $y_{j-1}|_{\Sigma_j}$, $j = 2,\ldots,K$, satisfy the same conditions as required for $y_0$ and $q_1$. This can be shown by making use of the trace theorem [3; p.9] and a result pertaining to the continuity of $y_{j-1}(\cdot,t)$ on $[(j-2)\tau,(j-1)\tau]$ [2; p.19].
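The advancing process just described can be illustrated numerically. The sketch below (an illustration only, not the paper's construction) applies it to the one-dimensional case $\Omega = ]0,1[$, $\Gamma_1 = \{1\}$, $\Gamma_2 = \{0\}$, $w(1) = 0$, with an explicit finite-difference scheme; the gain `phi1`, the delay `tau`, the initial data and the grid sizes are all illustrative assumptions.

```python
# Interval-by-interval advancing for y_t = y_xx on (0,1), with the delayed
# Neumann datum (6): y_x(1,t) = phi1*(y(0, t - tau) + u(t)), y_x(0,t) = 0.
# On each interval I_j the delayed term is known from the previous interval.

import numpy as np

def march(K=3, tau=0.5, nx=41, phi1=0.4, u=lambda t: 0.0):
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx * dx                       # explicit-scheme stability bound
    nsteps = int(round(tau / dt))
    dt = tau / nsteps                        # land exactly on t = j*tau
    y = np.cos(np.pi * np.linspace(0.0, 1.0, nx))   # initial data y_0
    hist = np.zeros(nsteps)                  # y(0,t') on [-tau,0[ (phi_0 = 0)
    for j in range(1, K + 1):
        new_hist = np.zeros(nsteps)
        for k in range(nsteps):
            t = (j - 1) * tau + k * dt
            q = phi1 * (hist[k] + u(t))      # delayed boundary datum, eq. (6)
            yn = y.copy()
            yn[1:-1] = y[1:-1] + dt / dx**2 * (y[2:] - 2 * y[1:-1] + y[:-2])
            yn[0] = yn[1]                    # y_x(0,t) = 0
            yn[-1] = yn[-2] + dx * q         # y_x(1,t) = q(t)
            new_hist[k] = y[0]               # record y(0,.) for interval j+1
            y = yn
        hist = new_hist
    return y

yT = march()                                 # solution at the final time T = K*tau
```

Each pass through the outer loop is one application of the Lions–Magenes step: the previous interval supplies both the initial data (5) and the delayed boundary data (6).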


II. OPTIMAL CONTROL

We shall consider optimal control problems for system (1)-(4) in which the control corresponds to either $f$ or $u$ belonging to a specified closed convex set $U_Q \subset L^2(Q)$ or $U_\Sigma \subset L^2(\Sigma)$, respectively.

Problem 1: Let $y_0$, $\phi_0$ and $u$ be given functions satisfying the hypotheses of Theorem 1, so that for a given control $f \in U_Q$ a unique solution $y(f) \in H^{2,1}(Q)$ exists and $y(\cdot,T;f) \in H^1(\Omega)$. The problem is to find an $f^0 \in U_Q$ such that $J(f^0) \leq J(f)$ for all $f \in U_Q$, where $J$ is a cost functional given by

$$J(f) = \lambda_1 \int_Q |y(x,t;f) - y_d|^2\,dx\,dt + \lambda_2 \int_\Omega |y(x,T;f) - y_{dT}|^2\,dx + \lambda_3 \int_Q (Nf)f\,dx\,dt, \qquad (7)$$

where $\lambda_i \geq 0$ and $\lambda_1 + \lambda_2 + \lambda_3 > 0$; $y_d$ and $y_{dT}$ are given in $L^2(Q)$ and $L^2(\Omega)$ respectively; and $N$ is a positive linear operator on $L^2(Q)$ into $L^2(Q)$.

For the above problem, it is known [1] that for $\lambda_3 > 0$ a unique optimal control $f^0$ exists; moreover, $f^0$ is characterized by

$$J'(f^0)\cdot(f - f^0) = \lambda_1 \int_Q (y(f^0) - y_d)(y(f) - y(f^0))\,dx\,dt$$
$$+\ \lambda_2 \int_\Omega (y(x,T;f^0) - y_{dT})(y(x,T;f) - y(x,T;f^0))\,dx$$
$$+\ \lambda_3 \int_Q (Nf^0)(f - f^0)\,dx\,dt \ \geq\ 0 \quad \text{for all } f \in U_Q. \qquad (8)$$

The above condition can be simplified by introducing the following adjoint equation. For each $f \in U_Q$, we define $p = p(f) = p(x,t;f)$ as the solution of

$$-\frac{\partial p(f)}{\partial t} - \Delta p(f) = \lambda_1 (y(f) - y_d) \quad \text{in } Q, \qquad (9)$$

with terminal condition

$$p(x,T;f) = \lambda_2 (y(x,T;f) - y_{dT}), \quad x \in \Omega, \qquad (10)$$

and boundary conditions:

$$\frac{\partial p(f)}{\partial \nu}(x,t) = 0 \quad \text{for } (x,t) \in ([\Gamma - w(\mathrm{supp}(\phi))] \times I) \cup (w(\mathrm{supp}(\phi)) \times ]T-\tau, T[), \qquad (11)$$

$$\frac{\partial p(f)}{\partial \nu}(x,t) = \phi(w^{-1}(x))\,|J_w(x)|\,p(w^{-1}(x), t+\tau; f) \quad \text{for } (x,t) \in w(\mathrm{supp}(\phi)) \times ]0, T-\tau[, \qquad (12)$$

where $J_w$ denotes the Jacobian of $w$; $w(\mathrm{supp}(\phi))$ is the image of the support of $\phi$ under the mapping $w$, and

$$\frac{\partial p(f)}{\partial \nu}(x,t) \triangleq \sum_{i=1}^{n} \cos(N, x_i)\,\frac{\partial p(f)}{\partial x_i}(x,t). \qquad (13)$$

We observe that for given $y_d$, $y_{dT}$ and $f$, problem (9)-(13) can be solved backward in time starting from $t = T$ by first obtaining the solution $p = p_K$ on $Q_K$ with terminal condition (10) and boundary condition

$$\frac{\partial p_K(f)}{\partial \nu}(x,t) = 0 \quad \text{for } (x,t) \in \Sigma_K. \qquad (14)$$

Having found $p_K$, we may proceed to solve the problem on $Q_{K-1}$ backward in time with terminal data at $t = (K-1)\tau$:

$$p_{K-1}(x,(K-1)\tau) = p_K(x,(K-1)\tau), \quad x \in \Omega, \qquad (15)$$

and with boundary conditions:

$$\frac{\partial p_{K-1}(f)}{\partial \nu}(x,t) = 0, \quad (x,t) \in [\Gamma - w(\mathrm{supp}(\phi))] \times I_{K-1}, \qquad (16)$$

$$\frac{\partial p_{K-1}(f)}{\partial \nu}(x,t) = \phi(w^{-1}(x))\,|J_w(x)|\,p_K(w^{-1}(x), t+\tau; f), \quad (x,t) \in w(\mathrm{supp}(\phi)) \times I_{K-1}. \qquad (17)$$

Note that the right-hand side of (17) is completely determined once $p_K$ is known.

This backward process is repeated until the solution on the initial cylinder set $Q_1$ is determined. For $f \in L^2(Q)$, the existence of a unique solution $p_K(f) \in H^{2,1}(Q_K)$ with $p_K(\cdot,(K-1)\tau) \in H^1(\Omega)$ can be established by applying Theorem 1 to (9)-(13) with an obvious change of variables and with the sense of time reversed, $t' = T - t$. The result can be extended to $Q_j$, $1 \leq j \leq K-1$, in the same way, since the right-hand side of (17) is in $H^{1/2,1/4}(\Sigma_{K-1})$ (by the trace theorem). Thus, we have the result:

Lemma 1: Let the hypotheses of Theorem 1 be satisfied. Then for given $y_d \in L^2(Q)$, $y_{dT} \in H^1(\Omega)$ and any $f \in L^2(Q)$, there exists a unique solution $p(f) \in H^{2,1}(Q)$ to problem (9)-(13).

Now, in view of Lemma 1, we can proceed to simplify (8) using the adjoint equation. It can be shown [6], by setting $f = f^0$ in (9)-(13), multiplying both sides of (9) by $(y(f) - y(f^0))$ and integrating over $Q$, that (8) reduces to

$$\int_Q (p(f^0) + \lambda_3 N f^0)(f - f^0)\,dx\,dt \ \geq\ 0 \quad \text{for all } f \in U_Q. \qquad (18)$$
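The computation behind this reduction can be sketched as follows (a formal outline, with the boundary bookkeeping suppressed). Write $z = y(f) - y(f^0)$, so that $z$ satisfies $\partial z/\partial t - \Delta z = f - f^0$ with zero initial data.

```latex
% Multiply (9) (with f = f^0) by z and integrate over Q; Green's formula gives
\int_Q \Bigl(-\tfrac{\partial p}{\partial t}-\Delta p\Bigr) z\,dx\,dt
 \;=\; \int_Q p\,\Bigl(\tfrac{\partial z}{\partial t}-\Delta z\Bigr)dx\,dt
      \;-\;\int_\Omega p(x,T)\,z(x,T)\,dx \;+\; \text{(boundary terms on }\Sigma).
% The boundary conditions (11)-(12) are constructed precisely so that, after the
% change of variables x -> w(x), t -> t + \tau, the boundary terms cancel against
% the delayed terms generated by (2)-(3).  Using (9), (10) and z_t - \Delta z = f - f^0:
\lambda_1\!\int_Q (y(f^0)-y_d)\,z\,dx\,dt
 + \lambda_2\!\int_\Omega (y(x,T;f^0)-y_{dT})\,z(x,T)\,dx
 \;=\; \int_Q p(f^0)\,(f-f^0)\,dx\,dt .
% Substituting this identity into (8) yields (18).
```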

This result can be summarized as:

Theorem 2: For Problem 1 with cost functional (7), with $y_d \in L^2(Q)$, $y_{dT} \in H^1(\Omega)$ and $\lambda_3 > 0$, there exists a unique optimal control $f^0$ which is determined by the solution of (1), (9) with boundary conditions (2), (11)-(12), initial condition (4) and terminal condition (10) (all with $f = f^0$). Moreover, $f^0$ satisfies (18).

For the special case where $U_Q = L^2(Q)$, (18) is satisfied when

$$f^0 = -\lambda_3^{-1} N^{-1} p(f^0). \qquad (19)$$

In particular, if $N$ is the identity operator on $L^2(Q)$, then, in view of Lemma 1, we have $f^0 \in H^{2,1}(Q)$. To obtain the optimal control (19) in feedback form, we follow Lions' approach [1] by first considering the following set of equations with $s \in I$:

$$\frac{\partial y}{\partial t} - \Delta y + \lambda_3^{-1} p = 0,$$
$$-\frac{\partial p}{\partial t} - \Delta p - \lambda_1 y = -\lambda_1 y_d, \qquad (x,t) \in \Omega \times ]s,T[, \qquad (20)$$

with boundary conditions:

$$\frac{\partial y}{\partial \nu}(x,t) = \begin{cases} \phi(x)\{y(w(x),t-\tau) + u(x,t)\} & \text{if } t-\tau \geq s, \\ \phi(x)\{\phi_s(w(x),t-\tau) + u(x,t)\} & \text{if } t-\tau < s, \end{cases} \quad (x,t) \in \Gamma \times ]s,T[, \qquad (21)$$

$$\frac{\partial p}{\partial \nu}(x,t) = \begin{cases} 0, & (x,t) \in \hat\Sigma_s \triangleq ([\Gamma - w(\mathrm{supp}(\phi))] \times ]s,T[) \cup (w(\mathrm{supp}(\phi)) \times ]T-\tau,T[), \\ \phi(w^{-1}(x))\,|J_w(x)|\,p(w^{-1}(x),t+\tau), & (x,t) \in w(\mathrm{supp}(\phi)) \times ]s,T-\tau[, \end{cases} \qquad (22)$$

and with initial and terminal conditions:

$$y(x,s) = y_s(x), \qquad p(x,T) = \lambda_2(y(x,T) - y_{dT}), \quad x \in \Omega, \qquad (23)$$

where $y_s$ is given in $H^1(\Omega)$ and $\phi_s$ is a given function defined on $\Gamma \times [s-\tau,s[$, whose restriction $\phi_s^2$ to $\Gamma_2 \times [s-\tau,s[$ is in $H^{1/2,1/4}(\Gamma_2 \times [s-\tau,s[)$. Note that (20)-(23) provide the solution to the optimal control problem associated with (1) for $t \in ]s,T[$, $U_Q = L^2(Q)$, and with a cost functional given by

$$J_s(f) = \lambda_1 \int_s^T\!\!\int_\Omega |y(x,t;f) - y_d|^2\,dx\,dt + \lambda_2 \int_\Omega |y(x,T;f) - y_{dT}|^2\,dx + \lambda_3 \int_s^T\!\!\int_\Omega (Nf)f\,dx\,dt. \qquad (24)$$

The problem with $\lambda_3 > 0$ has a unique optimal control in the form of (19). Consequently, (20)-(23) also has a unique solution $\{y,p\}$. In fact, for any given pair $(y_s, \phi_s^2) \in H^1(\Omega) \times H^{1/2,1/4}(\Gamma_2 \times [s-\tau,s[)$, the solution satisfies $y, p \in H^{2,1}(\Omega \times ]s,T[)$. Moreover, the following property can be readily established.

Proposition: Let $\{y,p\}$ be the solution of (20)-(23) with $s = 0$. Define $\theta_s$, the system "state" at time $s$, by the pair $(y(\cdot,s), \psi_s)$, where

$$\psi_s(\cdot,t') = \begin{cases} \phi_0(\cdot,t') & \text{for } t' \in \hat I_s \triangleq [-\tau,0[\ \cap\ [s-\tau,s[, \\ y(\cdot,t')|_{\Gamma_2} & \text{for } t' \in [s-\tau,s[\ -\ \hat I_s. \end{cases} \qquad (25)$$

Then, for all pairs $s \leq t$ in $I$,

$$p(\cdot,t) = P(t,s)\theta_s + r_s(\cdot,t), \qquad (26)$$

where $P(t,s)$ and $r_s(\cdot,t)$ are defined as follows:

(i) we solve the equations:

$$\frac{\partial \tilde y}{\partial t} - \Delta \tilde y + \lambda_3^{-1} \tilde p = 0,$$
$$-\frac{\partial \tilde p}{\partial t} - \Delta \tilde p - \lambda_1 \tilde y = 0, \qquad (x,t) \in \Omega \times ]s,T[, \qquad (27)$$

with boundary conditions

$$\frac{\partial \tilde y}{\partial \nu}(x,t) = \begin{cases} \phi(x)\,\tilde y(w(x),t-\tau) & \text{if } t-\tau \geq s, \\ \phi(x)\,\psi_s(w(x),t-\tau) & \text{if } t-\tau < s, \end{cases} \quad (x,t) \in \Gamma \times ]s,T[, \qquad (28)$$

$$\frac{\partial \tilde p}{\partial \nu}(x,t) = \begin{cases} 0, & (x,t) \in \hat\Sigma_s, \\ \phi(w^{-1}(x))\,|J_w(x)|\,\tilde p(w^{-1}(x),t+\tau), & (x,t) \in w(\mathrm{supp}(\phi)) \times ]s,T-\tau[, \end{cases} \qquad (29)$$

and initial and terminal conditions:

$$\tilde y(x,s) = y(x,s), \qquad \tilde p(x,T) = \lambda_2\,\tilde y(x,T), \quad x \in \Omega; \qquad (30)$$

then

$$P(t,s)\theta_s = \tilde p(\cdot,t); \qquad (31)$$

(ii) we solve the equations:

$$\frac{\partial \eta}{\partial t} - \Delta \eta + \lambda_3^{-1} \pi = 0,$$
$$-\frac{\partial \pi}{\partial t} - \Delta \pi - \lambda_1 \eta = -\lambda_1 y_d, \qquad (x,t) \in \Omega \times ]s,T[, \qquad (32)$$

with boundary conditions:

$$\frac{\partial \eta}{\partial \nu}(x,t) = \begin{cases} \phi(x)\{\eta(w(x),t-\tau) + u(x,t)\} & \text{if } t-\tau \geq s, \\ \phi(x)\,u(x,t) & \text{if } t-\tau < s, \end{cases} \quad (x,t) \in \Gamma \times ]s,T[, \qquad (33)$$

$$\frac{\partial \pi}{\partial \nu}(x,t) = \begin{cases} 0, & (x,t) \in \hat\Sigma_s, \\ \phi(w^{-1}(x))\,|J_w(x)|\,\pi(w^{-1}(x),t+\tau), & (x,t) \in w(\mathrm{supp}(\phi)) \times ]s,T-\tau[, \end{cases} \qquad (34)$$

and with initial and terminal conditions:

$$\eta(x,s) = 0, \qquad \pi(x,T) = \lambda_2(\eta(x,T) - y_{dT}), \quad x \in \Omega; \qquad (35)$$

then

$$r_s(x,t) = \pi(x,t). \qquad (36)$$

Now, the optimal feedback control can be obtained by setting $s = t$ in (26) and substituting the result into (19):

$$f^0(\cdot,t) = -\lambda_3^{-1} N^{-1}\bigl(P(t,t)\theta_t + r_t(\cdot,t)\bigr), \quad t \in I. \qquad (37)$$

By making use of Schwartz's kernel theorem [4], it can be verified that the optimal feedback control (37) (with $N$ being the identity operator on $L^2(Q)$) can be represented in the form:

$$f^0(x,t) = -\lambda_3^{-1}\left\{\int_\Omega K_0(x,x',t)\,y(x',t)\,dx' + \int_{t-\tau}^{t}\!\!\int_{\Gamma_2} K_1(x,x',t,t')\,\psi_t(x',t')\,d\Gamma_2\,dt' + r_t(x,t)\right\}, \qquad (38)$$

where $\{K_0, K_1\}$ is the kernel of $P(t,t)$.

Problem 2: Let $y_0$, $\phi_0$ and $f$ be given functions satisfying the hypotheses of Theorem 1 with $u \in L^2(\Sigma)$. Let $y(x,t;u)$ denote the solution of (1)-(4) at $(x,t)$ corresponding to a given control $u \in U_\Sigma$. We wish to find a $u^0 \in U_\Sigma$ such that the following cost functional is minimized over $U_\Sigma$:

$$J(u) = \lambda_1 \int_Q |y(x,t;u) - y_d|^2\,dx\,dt + \lambda_2 \int_\Omega |y(x,T;u) - y_{dT}|^2\,dx + \lambda_3 \int_0^T\!\!\int_{\mathrm{supp}(\phi)} (\tilde N u)u\,d\Gamma\,dt, \qquad (39)$$

where the $\lambda_i$'s, $y_d$ and $y_{dT}$ are as in Problem 1, and $\tilde N$ is a positive linear operator on $L^2(\Sigma)$ into $L^2(\Sigma)$. If $y_0$, $\phi_0$ and $f$ satisfy the conditions of Theorem 1, then for each $u \in U_\Sigma$, $y(u) \in H^{3/2,3/4}(Q)$ and $y(\cdot,T;u) \in H^{1/2}(\Omega)$. Hence $J(u)$ is defined. As in Problem 1, this problem has a unique optimal control $u^0 \in U_\Sigma$ if $\lambda_3 > 0$. Also, $u^0$ can be characterized by

$$\lambda_1 \int_Q (y(u^0) - y_d)(y(u) - y(u^0))\,dx\,dt + \lambda_2 \int_\Omega (y(x,T;u^0) - y_{dT})(y(x,T;u) - y(x,T;u^0))\,dx$$
$$+\ \lambda_3 \int_0^T\!\!\int_{\mathrm{supp}(\phi)} (\tilde N u^0)(u - u^0)\,d\Gamma\,dt \ \geq\ 0 \quad \text{for all } u \in U_\Sigma. \qquad (40)$$

The foregoing inequality can be simplified by introducing an adjoint equation whose form is identical to (9)-(13). From Theorem 1, for any $u \in L^2(\Sigma)$, there exists a unique solution $y(u) \in H^{3/2,3/4}(Q)$ with $y(\cdot,T;u) \in H^{1/2}(\Omega)$. If $y_d \in L^2(Q)$ and $y_{dT} \in H^{1/2}(\Omega)$, then the right-hand sides of (9) and (10) are in $L^2(Q)$ and $H^{1/2}(\Omega)$ respectively. As in Problem 1, we can establish the existence of a unique solution $p(u) \in H^{3/2,3/4}(Q)$ for (9)-(13). Moreover, (40) can be simplified to

$$\int_0^T\!\!\int_{\mathrm{supp}(\phi)} \bigl(p(u^0)\phi(x) + \lambda_3 \tilde N u^0\bigr)(u - u^0)\,d\Gamma\,dt \ \geq\ 0 \quad \text{for all } u \in U_\Sigma. \qquad (41)$$

In Problems 1 and 2, the cost functionals involve only deviations of the solutions from their desired values averaged over the interior of the spatial domain $\Omega$. Similar results can be obtained for optimal control problems in which the spatial averaging in the cost functional is taken over both $\Omega$ and its boundary $\Gamma$ [6].

III. STABILITY

In applications, it is of interest to establish stability conditions for systems with specified forms of feedback controls. Here, we consider system (1) with $f = 0$ and with a Dirichlet boundary condition of the form:

$$y(x,t) = \phi(x)\{y(w(x),t-\tau) + u(x,t)\}, \quad (x,t) \in \Gamma \times [0,\infty[, \qquad (42)$$

where $\phi$ and $w$ are as in (3). In view of (38), we consider a feedback control which is a linear function of the state $\theta_t$ (defined by (25)) and has a representation of the form:

$$u(x,t) = \int_\Omega F_0(x,x',t)\,y(x',t)\,dx' + \int_{t-\tau}^{t}\!\!\int_{\Gamma_2} F_1(x,x',t,t')\,\psi_t(x',t')\,d\Gamma_2\,dt', \quad (x,t) \in \Gamma \times [0,\infty[. \qquad (43)$$

We shall establish a sufficient condition for the boundedness of solutions of system (1) with boundary condition (42) and feedback control (43).

Let $Q_{t_1}$ denote the cylinder set $\Omega \times ]0,t_1[$ for $0 < t_1 < \infty$. Let $y_0 \in C^0(\bar\Omega)$ and let $y$ be a continuous function in $\bar Q_{t_1}$ satisfying (1) in $Q_{t_1} \cup S_{t_1}$, where $S_t = \Omega \times \{t\}$. Then the weak maximum principle [5] asserts that the maximum of $|y|$ in $\bar Q_{t_1}$ is attained on the part $S_0 \cup (\Gamma \times ]0,t_1])$ of the boundary of $Q_{t_1}$, i.e.

$$\max_{\bar Q_{t_1}} |y(x,t)| \ \leq\ \max\Bigl\{\max_{x\in\bar\Omega} |y_0(x)|,\ \sup_{\Gamma \times ]0,t_1]} |y(x,t)|\Bigr\}. \qquad (44)$$

We shall show by contradiction that if

$$\eta(x,t) \triangleq |\phi(x)|\left\{1 + \int_\Omega |F_0(x,x',t)|\,dx' + \int_{t-\tau}^{t}\!\!\int_{\Gamma_2} |F_1(x,x',t,t')|\,d\Gamma_2\,dt'\right\} \ <\ 1 \qquad (45)$$

for all $(x,t) \in \Gamma \times [0,\infty[$, then the maximum of $|y|$ is attained on $S_0$, or

$$\max_{x\in\bar\Omega} |y(x,t)| \ \leq\ \max_{x\in\bar\Omega} |y_0(x)| \quad \text{for all } t \geq 0. \qquad (46)$$

First, assume that there exists a point $(x^*,t^*) \in \Gamma \times ]\tau, t_1]$ such that

$$|y(x^*,t^*)| = \max_{\bar Q_{t_1}} |y(x,t)|. \qquad (47)$$

Then, in view of (42) and (43), we have

$$|y(x^*,t^*)| \leq |\phi(x^*)|\left\{|y(w(x^*),t^*-\tau)| + \int_\Omega |F_0(x^*,x,t^*)|\,|y(x,t^*)|\,dx + \int_{t^*-\tau}^{t^*}\!\!\int_{\Gamma_2} |F_1(x^*,x,t^*,t)|\,|y(x,t)|\,dx\,dt\right\} \leq \eta(x^*,t^*)\,|y(x^*,t^*)|, \qquad (48)$$

where $\eta$ is defined in (45). It is evident that if (45) is satisfied, then (48) leads to a contradiction. Consequently, such a point $(x^*,t^*)$ does not exist under condition (45).

Now, assume that a point $(x^*,t^*) \in \Gamma \times [0,\tau]$ exists such that (47) holds. Then, from (42) and (43), we have

$$|y(x^*,t^*)| \leq |\phi(x^*)|\left\{|\phi_0(w(x^*),t^*-\tau)| + \left(\int_\Omega |F_0(x^*,x,t^*)|\,dx + \int_{t^*-\tau}^{t^*}\!\!\int_{\Gamma_2} |F_1(x^*,x,t^*,t)|\,dx\,dt\right)|y(x^*,t^*)|\right\}. \qquad (49)$$

If we impose the condition that $\phi_0$ is continuous on $\bar\Sigma_0$ and satisfies

$$\sup_{\Sigma_0} |\phi_0(x,t)| \ <\ \max_{x\in\bar\Omega} |y_0(x)|, \qquad (50)$$

then

$$|\phi_0(w(x^*),t^*-\tau)| \ <\ \max_{x\in\bar\Omega} |y_0(x)| \ \leq\ |y(x^*,t^*)| \qquad (51)$$

and

$$|y(x^*,t^*)| \ <\ \eta(x^*,t^*)\,|y(x^*,t^*)|. \qquad (52)$$

Again, under condition (45), (52) leads to a contradiction. Thus, such an $(x^*,t^*)$ does not exist. Finally, we note that the foregoing results remain valid for arbitrarily large $t_1$. Consequently, (46) holds. Thus, we have established:

Theorem 3: Let $y$ be a classical solution of (1) with $f = 0$ satisfying boundary conditions (42)-(43) and initial data (4). Let $y_0 \in C^0(\bar\Omega)$ and $\phi_0 \in C^0(\bar\Sigma_0)$ be such that (50) is satisfied. Then, under condition (45), $\max_{x\in\bar\Omega} |y(x,t)| \leq \max_{x\in\bar\Omega} |y_0(x)|$ for all $t \geq 0$.

The conditions in the above theorem represent restrictions on the initial data, the parameter in the boundary conditions and the kernels of the feedback control operators. Physically speaking, the result simply states that if the feedback gain and the maximum magnitude of the past boundary data are sufficiently small, then the solution will not grow with time.
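The content of Theorem 3 can be checked numerically in a simple special case (an assumed one-dimensional setup, not taken from the paper): $\Omega = ]0,1[$, $\phi$ supported at $x = 1$ with constant gain $g$, $w(1) = 0$, $u = 0$ and $F_0 = F_1 = 0$, so that (42) reduces to $y(1,t) = g\,y(0,t-\tau)$ and condition (45) reads $|g| < 1$.

```python
# Explicit finite-difference check of the bound (46): with delayed Dirichlet
# feedback y(1,t) = g*y(0,t-tau), |g| < 1, y(0,t) = 0, and zero past boundary
# data, the running maximum of |y| should never exceed max |y_0| = 1.

import numpy as np

def simulate(g=0.8, tau=0.25, t_final=2.0, nx=41):
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx * dx                  # dt/dx^2 <= 1/2: discrete max principle
    nlag = max(1, int(round(tau / dt)))
    y = np.sin(np.pi * np.linspace(0.0, 1.0, nx))   # y_0, max |y_0| = 1
    hist = [0.0] * nlag                 # y(0,t') = phi_0 = 0 on [-tau,0[
    peak = float(np.max(np.abs(y)))
    for _ in range(int(t_final / dt)):
        yn = y.copy()
        yn[1:-1] = y[1:-1] + dt / dx**2 * (y[2:] - 2 * y[1:-1] + y[:-2])
        yn[0] = 0.0                     # phi = 0 on Gamma_2 = {0}
        yn[-1] = g * hist.pop(0)        # delayed feedback at x = 1, eq. (42)
        hist.append(float(y[0]))
        y = yn
        peak = max(peak, float(np.max(np.abs(y))))
    return peak

peak = simulate()
```

With the stability restriction on `dt`, every interior update is a convex combination of old values, so the discrete solution reproduces the maximum-norm bound exactly.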

IV. CONCLUDING REMARKS

In this paper, only parabolic systems with the simplest forms of boundary conditions involving time-delays have been considered. One may consider optimal control and stability problems for parabolic system (1) with a variety of more complex boundary conditions involving time-delays. A few examples are given below:

1. Boundary Condition Involving Multiple Time-Delays: This is a generalization of the Neumann boundary condition (2):

$$\frac{\partial y}{\partial \nu}(x,t) + \gamma(x,t)\,y(x,t) = \phi(x)\left\{\sum_{m=0}^{M} b_m(x,t)\,y(w(x),t-\tau_m) + \sum_{m=0}^{M} c_m(x,t)\,\frac{\partial y}{\partial \nu}(w(x),t-\tau_m) + u(x,t)\right\} \quad \text{on } \Sigma, \qquad (53)$$

where $0 \leq \tau_0 < \tau_1 < \cdots < \tau_M$; $b_m$ and $c_m$ are specified coefficients. If $\gamma$ and $c_m$, $m = 1,\ldots,M$, are identically zero on $\Gamma$, the results of this paper can be extended to this case without difficulty.

2. Boundary Condition with Indirect Control: Here, the boundary condition is identical to (2) and (3) except that $u$ is generated indirectly by

$$u(x,t) = g(x)^T z(t), \quad (x,t) \in \Sigma, \qquad (54)$$

where $(\cdot)^T$ denotes transposition; $g$ is a given mapping from $\Gamma$ into $\mathbb{R}^r$; and $z(t) \in \mathbb{R}^r$ is the solution of the following system of linear ordinary differential-difference equations:

$$\frac{dz(t)}{dt} = \sum_{m=0}^{M} F_m(t)\,z(t-\tau_m) + G(t)\,v(t) \qquad (55)$$

with given initial data

$$z(t') = \zeta(t') \in \mathbb{R}^r, \quad t' \in [-\tau_M, 0], \qquad (56)$$

where $0 \leq \tau_0 < \tau_1 < \cdots < \tau_M$; $F_m(t)$ and $G(t)$ are given matrices. The control $v(\cdot)$ belongs to a specified closed convex subset of $L^2(I;\mathbb{R}^s)$. One may consider the optimal control problem involving the minimization of a convex cost functional of the solution $y$ and the control $v$. Also, instead of (55), one may use a functional differential equation. Finally, one may consider similar problems for general parabolic and hyperbolic systems with boundary conditions involving time-delays. These problems will be discussed elsewhere.

ACKNOWLEDGEMENT: This work was supported by the U.S. Air Force Office of Scientific Research, Grant No. AFOSR-72-2303.

REFERENCES:

[1] J.L. Lions, Contrôle Optimal de Systèmes Gouvernés par des Équations aux Dérivées Partielles, Dunod, 1968.

[2] J.L. Lions and E. Magenes, Non-homogeneous Boundary Value Problems and Applications, Vol. 1, Springer-Verlag, N.Y., 1972 (translated by P. Kenneth).

[3] J.L. Lions and E. Magenes, Non-homogeneous Boundary Value Problems and Applications, Vol. 2, Springer-Verlag, N.Y., 1972 (translated by P. Kenneth).

[4] L. Schwartz, "Théorie des Noyaux", Proc. Int. Congress of Mathematicians, Vol. 1, 1950, pp. 220-230.

[5] O.A. Ladyženskaja, V.A. Solonnikov and N.N. Ural'ceva, Linear and Quasilinear Equations of Parabolic Type, Translations of Math. Monographs No. 23, Am. Math. Soc., R.I., 1968.

[6] P.K.C. Wang, "Optimal Control of Parabolic Systems with Boundary Conditions Involving Time-Delays", Univ. of Calif. School of Engineering and Applied Science Rpt. No. UCLA-ENG-7346, June 1973.


CHARACTERIZATION OF CONES OF FUNCTIONS

ISOMORPHIC TO CONES OF CONVEX FUNCTIONS

Jean-Pierre AUBIN

University of Paris-9 Dauphine - Paris

INTRODUCTION

The aim of this paper is the characterization of cones of functions $A(U)$ defined on a topological space $U$ isomorphic with the cone $\Gamma(S)$ of lower semi-continuous convex functions defined on a convex subset $S$ of a locally convex vector space $F$, in the following sense:

There exists a continuous map $\pi$ from $U$ onto $S$ such that any function $\varphi \in A(U)$ can be written in a unique way in the following form:

$$\varphi(u) = g(\pi u) \quad \text{where } g \in \Gamma(S).$$

The existence of such an isomorphism implies that all the properties of lower semi-continuous convex functions hold for the functions of $A(U)$. Before stating the theorem characterizing such cones of functions, we shall describe two examples.

This paper summarizes the report [1].

1. EXAMPLE: Functions defined on a family of convex compact subsets

1.1 Definitions

Let

(1.1) $X$ and $X'$ be two paired vector spaces supplied with their weak topologies,

and let us denote by

(1.2) $\mathcal{A}$ the set of convex compact subsets of $X$.

If $L = \{p_1, \ldots, p_n\}$ ranges over the family of finite subsets of $X'$ and $\pi_L(x) = \max_{i=1,\ldots,n} |\langle p_i, x\rangle|$ denotes the semi-norms of the weak topology of $X$, we shall supply $\mathcal{A}$ with the topology defined by the semi-distances

(1.3) $\delta_L(A,B) = \max\left\{\sup_{x\in A}\inf_{y\in B} \pi_L(x-y),\ \sup_{y\in B}\inf_{x\in A} \pi_L(x-y)\right\}$

when $L$ ranges over the finite subsets of $X'$.

Definition 1.1

We shall say that $\mathcal{A}_0 \subset \mathcal{A}$ is "convex" if

(1.4) $\forall A, B \in \mathcal{A}_0,\ \forall \lambda \in [0,1]:\quad \lambda A + (1-\lambda)B \in \mathcal{A}_0,$

and that a function $\varphi$ from $\mathcal{A}_0$ into $\mathbb{R}$ is "convex" if

(1.5) $\forall A, B \in \mathcal{A}_0,\ \forall \lambda \in [0,1]:\quad \varphi(\lambda A + (1-\lambda)B) \leq \lambda\varphi(A) + (1-\lambda)\varphi(B).$

We shall denote by $A(\mathcal{A}_0)$ the cone of lower semi-continuous "convex" functions defined on $\mathcal{A}_0$.

Remark 1.1

(1.6) If $\varphi: X \to \mathbb{R}$ is a convex function, then $\hat\varphi(A) = \inf_{x\in A} \varphi(x)$ is "convex" on $\mathcal{A}$.

(1.7) In particular, the support functions $A \mapsto \sigma(A,p) = \sup_{x\in A} \langle p, x\rangle$, where $p \in X'$, are "affine".
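The "affine" character of the support functions in (1.7) can be illustrated concretely (finite polytopes, represented by their vertex lists, stand in for general convex compact sets — an assumption made for computability):

```python
# sigma(A, p) = sup_{x in A} <p, x> is "affine" in the sense of Definition 1.1:
#   sigma(l*A + (1-l)*B, p) = l*sigma(A, p) + (1-l)*sigma(B, p).

import itertools

def support(vertices, p):
    """Support function of the convex hull of a finite vertex set."""
    return max(sum(pi * xi for pi, xi in zip(p, v)) for v in vertices)

def mix(l, A, B):
    """Vertices of the Minkowski combination l*A + (1-l)*B of two polytopes."""
    return [tuple(l * a + (1 - l) * b for a, b in zip(va, vb))
            for va, vb in itertools.product(A, B)]

A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]              # a triangle
B = [(2.0, 1.0), (3.0, 1.0), (3.0, 2.0), (2.0, 2.0)]  # a square
p, l = (1.0, 2.0), 0.3
lhs = support(mix(l, A, B), p)
rhs = l * support(A, p) + (1 - l) * support(B, p)
```

The equality holds because the supremum of a linear functional over a Minkowski combination splits over its two summands.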

We can prove the following result.

Proposition 1.1

Any function $\varphi \in A(\mathcal{A}_0)$ can be written in a unique way

(1.8) $\varphi(A) = g(\sigma(A)),$

where

(1.9) i) $\sigma(A) \triangleq \{p \mapsto \sigma(A,p)\}$ belongs to $\mathbb{R}^{X'}$; ii) $g$ is a lower semi-continuous convex function on $\sigma(\mathcal{A}_0) \subset \mathbb{R}^{X'}$.

1.2 Some applications

Let us introduce

(1.10) i) a function $\varphi \in A(\mathcal{A}_0)$; ii) a lower semi-continuous convex function $g \in \Gamma(S)$, where $S$ is a convex subset of a locally convex vector space $Y$; iii) a continuous linear map $L \in L(X,Y)$,

and the "conjugate functions" defined by

(1.11) i) $\varphi^*(p) = \sup_{A \in \mathcal{A}_0} [\sigma(A,p) - \varphi(A)]$, where $p \in X'$; ii) $g^*(q) = \sup_{y\in S} [\langle q, y\rangle - g(y)]$, where $q \in Y'$.

Proposition 1.2

If $\{a_0\} \in \mathcal{A}_0$ is such that $g$ is continuous at $L a_0$, then

(1.12) $\inf_{A \in \mathcal{A}_0} \left[\varphi(A) + \inf_{x\in A} g(-Lx)\right] = -\min_{q\in Y'} \left[\varphi^*(L'q) + g^*(q)\right].$

We can deduce from this proposition the following "duality result" for minimization problems.

Corollary 1.1

Let $P \subset Y$ be a closed convex cone with non-empty interior $\mathring P$ (for the Mackey topology $\tau(Y,Y')$). If $u \in Y$, let us introduce the subset

(1.13) $\mathcal{A}_u = \{A \in \mathcal{A}_0 \text{ such that there exists } x \in A \text{ with } Lx - u \in P\}.$

If there exists $\{a_0\} \in \mathcal{A}_0$ such that

(1.14) $L a_0 - u \in \mathring P,$

then there exists $\bar q \in P^+$ such that

(1.15) $\inf_{A \in \mathcal{A}_u} \varphi(A) = \inf_{A \in \mathcal{A}_0} \left[\varphi(A) - \sigma(A, L'\bar q) + \langle \bar q, u\rangle\right] = -\varphi^*(L'\bar q) + \langle \bar q, u\rangle = -\min_{q\in P^+} \left[\varphi^*(L'q) - \langle q, u\rangle\right].$

Brief economic interpretation

Let $X = \mathbb{R}^n$ denote a commodity space, $X' = \mathbb{R}^n$ the space of price systems, $P^+ = \mathbb{R}^n_+$ the cone of positive prices, and $u \in \mathbb{R}^n$ a "demand" vector. We denote by $\mathcal{A}_0$ a "convex" set of convex compact subsets containing 0. We interpret $A \in \mathcal{A}_0$ as a firm whose production set is $A - \mathbb{R}^n_+$ and whose "maximum profit function" is $\sigma(A,p) = \sup_{x\in A} \langle p, x\rangle$, defined on $P^+ = \mathbb{R}^n_+$. We take $Y = X$ and $L$ to be the identity mapping. Then the set $\mathcal{A}_u$ is the set of firms which can satisfy the "demand" vector $u$. If $\varphi: \mathcal{A}_0 \to \mathbb{R}$ is a cost function defined on the set of firms, then the minimization of $\varphi$ on $\mathcal{A}_u$ amounts to the minimization on $\mathcal{A}_0$ of the perturbed cost function

$$A \mapsto \varphi(A) - \sigma(A, \bar q) + \langle \bar q, u\rangle.$$

2. EXAMPLE: Functions defined on σ-algebras

2.1 Definitions

Let

(2.1) i) $\mathcal{A}$ be a σ-algebra defined on a set $\Omega$; ii) $m: A \in \mathcal{A} \mapsto m(A) \in \mathbb{R}^m$ be an atomless bounded vector-valued measure.

We shall set $A \sim B$ if and only if $m(A) = m(B)$, and let $\mathcal{A}_0$ be the factor set $\mathcal{A}/\sim$, supplied with the distance

(2.2) $\delta(A,B) = \|m(A) - m(B)\|.$

By the Lyapounov theorem, $m(\mathcal{A})$ is convex. Therefore,

(2.3) $w(\lambda, A, B) = m^{-1}(\lambda m(A) + (1-\lambda)m(B))$ belongs to $\mathcal{A}_0$.

Definition 2.1

We shall denote by $A(\mathcal{A}_0)$ the cone of lower semi-continuous functions defined on $\mathcal{A}_0$ satisfying

(2.4) $\varphi(w(\lambda, A, B)) \leq \lambda\varphi(A) + (1-\lambda)\varphi(B).$

Remarks 2.1

If $g: \mathbb{R}^m \to \mathbb{R}$ is a lower semi-continuous convex function defined on $m(\mathcal{A})$, then the function $\varphi$ defined by

(2.5) $\varphi(A) = g(m(A))$

belongs to $A(\mathcal{A}_0)$. Conversely, any function $\varphi \in A(\mathcal{A}_0)$ can be written in a unique way in the form (2.5) where $g$ is convex. In particular, the functions $A \mapsto \langle p, m(A)\rangle$ and $A \mapsto -\langle p, m(A)\rangle$ (where $p \in \mathbb{R}^m$) belong to $A(\mathcal{A}_0)$.

Let

(2.6) $\mathcal{U} = \{\theta \in L^\infty(\Omega, \mathcal{A}, |m|) \text{ such that } \theta(\omega) \in [0,1]\}$

and $\varphi_0 \in L^1(\Omega, \mathcal{A}, |m|)$. Then the function $\varphi: \mathcal{A}_0 \to \mathbb{R}$ defined by

(2.7) $\varphi(A) = \inf\left\{\int_\Omega \varphi_0(\omega)\,\theta(\omega)\,d|m|(\omega)\ \Big|\ \theta \in \mathcal{U} \text{ and } \int_\Omega \theta(\omega)\,dm(\omega) = m(A)\right\}$

belongs to $A(\mathcal{A}_0)$.

2.2 Some applications

Let us introduce

i) a function $\varphi \in A(\mathcal{A}_0)$; ii) a lower semi-continuous convex function $g \in \Gamma(S)$, where $S$ is a convex subset of a locally convex vector space $Y$; iii) a continuous linear map $L \in L(\mathbb{R}^m, Y)$,

and the "conjugate functions" defined by

i) $\varphi^*(p) = \sup_{A\in\mathcal{A}_0} [\langle p, m(A)\rangle - \varphi(A)]$, where $p \in \mathbb{R}^m$; ii) $g^*(q) = \sup_{y\in S} [\langle q, y\rangle - g(y)]$, where $q \in Y'$.

Proposition 2.1

If there exists $A_0 \in \mathcal{A}_0$ such that $g$ is continuous at $L\,m(A_0)$, then there exists $\bar q \in Y'$ such that

$$\inf_{A\in\mathcal{A}_0} \left[\varphi(A) + g(-L\,m(A))\right] = -\min_{q\in Y'} \left[\varphi^*(L'q) + g^*(q)\right].$$

We can deduce from this proposition the following "duality result" for minimization problems.

Corollary 2.1

Let $P \subset Y$ be a closed convex cone with a non-empty interior $\mathring P$ (for the Mackey topology $\tau(Y,Y')$). If $u \in Y$, let us introduce the subset

(2.10) $\mathcal{A}_u = \{A \in \mathcal{A}_0 \text{ such that } L\,m(A) - u \in P\}.$

If there exists $A_0 \in \mathcal{A}_0$ such that

(2.11) $L\,m(A_0) - u \in \mathring P,$

then there exists $\bar q \in P^+$ such that

(2.12) $\inf_{A\in\mathcal{A}_u} \varphi(A) = \inf_{A\in\mathcal{A}_0} \left[\varphi(A) - \langle \bar q, L\,m(A)\rangle + \langle \bar q, u\rangle\right] = -\varphi^*(L'\bar q) + \langle \bar q, u\rangle = -\min_{q\in P^+}\left[\varphi^*(L'q) - \langle q, u\rangle\right].$

Brief economic interpretation

For instance, we can regard $\mathcal{A}$ as a set of "decisions" defined on the set $\Omega$ of "elementary decisions", and the measure $m$ as the map associating with any decision $A \in \mathcal{A}_0$ an "act" $m(A) \in \mathbb{R}^m$ resulting from the decision $A$. We take $Y = \mathbb{R}^m$ and we interpret $u \in \mathbb{R}^m$ as an objective. Then the set $\mathcal{A}_u$ is the set of decisions whose acts are greater than the objective $u$. If $\varphi: \mathcal{A}_0 \to \mathbb{R}$ is a cost function defined on the set of decisions, then the minimization of $\varphi$ on $\mathcal{A}_u$ amounts to the minimization on $\mathcal{A}_0$ of the perturbed cost function

$$A \mapsto \varphi(A) - \langle \bar q, m(A)\rangle + \langle \bar q, u\rangle.$$

3. STATEMENT OF THE THEOREM OF ISOMORPHISM

The two above cones of functions are examples of cones of "γ-vex" functions.

3.1 The cone of γ-vex functions

Let

(3.1) i) $U$ be a set; ii) $\mathcal{M}^1(U)$ be the set of discrete probability measures $\mu = \sum_{\text{finite}} \alpha_i\,\delta(u_i)$ on $U$ (where $\delta(u_i)$ is the Dirac measure at $u_i$, $\alpha_i \geq 0$, $\sum \alpha_i = 1$).

Let us introduce

(3.2) $\gamma$, a correspondence with non-empty values from $\mathcal{M}^1(U)$ into $U$.

Definition 3.1

We shall say that a function $\varphi: U \to \mathbb{R}$ is "γ-vex" if

(3.3) $\varphi(\gamma\mu) \leq \sum \alpha_i\,\varphi(u_i)$ for any $\mu = \sum \alpha_i\,\delta(u_i) \in \mathcal{M}^1(U)$,

and "γ-affine" if

(3.4) $\varphi(\gamma\mu) = \sum \alpha_i\,\varphi(u_i)$ for any $\mu = \sum \alpha_i\,\delta(u_i) \in \mathcal{M}^1(U)$.

Remark 3.1

If $U$ is a convex subset of a vector space $X$, then usual convex functions are γ-vex functions, where $\gamma$ is the map defined by

(3.5) $\gamma\mu = \sum \alpha_i\,u_i$ whenever $\mu = \sum \alpha_i\,\delta(u_i) \in \mathcal{M}^1(U)$.

If $\mathcal{A}$ is the family of convex compact subsets of $X$, then the "convex" functions $\varphi \in A(\mathcal{A})$ are γ-vex functions, where $\gamma$ is the map defined by

(3.6) $\gamma\mu = \sum \alpha_i\,A_i$ whenever $\mu = \sum \alpha_i\,\delta(A_i) \in \mathcal{M}^1(\mathcal{A}).$
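The first case of Remark 3.1 is just Jensen's inequality for discrete probability measures; a quick numerical check (with an assumed convex function $\varphi(u) = u^2$ on $U = \mathbb{R}$):

```python
# For U = R and the barycenter map gamma(mu) = sum_i a_i * u_i, an ordinary
# convex function phi is gamma-vex: phi(gamma(mu)) <= sum_i a_i * phi(u_i)
# for every discrete probability measure mu.  Checked here on random measures.

import random

def gamma(mu):
    """Barycenter of a discrete measure mu, given as a list of (weight, point)."""
    return sum(a * u for a, u in mu)

def gamma_vex_holds(phi, mu):
    return phi(gamma(mu)) <= sum(a * phi(u) for a, u in mu) + 1e-12

random.seed(0)
phi = lambda u: u * u                 # an assumed convex function
ok = True
for _ in range(200):
    pts = [random.uniform(-5.0, 5.0) for _ in range(4)]
    ws = [random.random() for _ in range(4)]
    s = sum(ws)
    mu = [(w / s, u) for w, u in zip(ws, pts)]   # normalized weights
    ok = ok and gamma_vex_holds(phi, mu)
```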

If $\mathcal{A}$ is a σ-algebra and $m: \mathcal{A} \to \mathbb{R}^m$ is a bounded atomless vector-valued measure, then the functions $\varphi \in A(\mathcal{A}_0)$ are γ-vex functions, where $\gamma$ is the map defined by

(3.7) $\gamma\mu = m^{-1}\left(\sum \alpha_i\,m(A_i)\right)$ whenever $\mu = \sum \alpha_i\,\delta(A_i) \in \mathcal{M}^1(\mathcal{A}_0).$

3.2 The cone of lower semi-continuous γ-vex functions

Let us denote by

(3.8) $G$ the vector space of γ-affine functions defined on $U$.

We shall supply $U$ with the topology defined by the semi-distances

(3.9) $\delta_L(u,v) = \max_{i=1,\ldots,n} |f_i(u) - f_i(v)|$

when $L = \{f_1, \ldots, f_n\}$ ranges over the finite subsets of $G$. We denote by

(3.10) $A(U)$ the cone of lower semi-continuous γ-vex functions.

Theorem 3.1

There exist

(3.11) i) a locally convex vector space $F$; ii) a continuous map $\pi$ from $U$ into $F$,

such that

(3.12) $S = \pi(U)$ is a convex subset of $F$,

and such that the map

(3.13) $g \in \Gamma(S) \mapsto \varphi = g \circ \pi \in A(U)$

is an isomorphism from the cone $\Gamma(S)$ of lower semi-continuous convex functions on $S$ onto the cone $A(U)$ of lower semi-continuous γ-vex functions on $U$.

The proof of this theorem, based on the minimax properties of the γ-vex functions, together with other results and the references, can be found in [1].

[1] Jean-Pierre AUBIN, Existence of saddle points for classes of convex and non-convex functions, Mathematics Research Center Technical Summary Report No. 1289, University of Wisconsin (1972).


NECESSARY CONDITIONS AND SUFFICIENT CONDITIONS FOR PARETO OPTIMALITY IN A MULTICRITERION PERTURBED SYSTEM*

JEAN-LOUIS GOFFIN    ALAIN HAURIE

École des Hautes Études Commerciales - Montréal

1. INTRODUCTION

Vector-valued optimization problems have recently attracted renewed attention. Decision making in engineering and economic systems is confronted by a multiplicity of goals, and it seems necessary to take this into account directly, rather than finding an a priori mix of goals. The concept of Pareto optimality has been extensively studied in the realm of economic theory and more recently in the field of optimal control theory ([1] - [6]).

In addition to the complexity due to multiple goals, the decision-maker is often faced with perturbations or uncertainties which influence the consequences of his actions. These disturbances, if their probability distribution cannot be assessed, can be modelled as set-constrained perturbations following the lines given in Refs. [7] - [9]. This description is quite fitting if the perturbations are caused by the existence of other decision makers - or players - which can affect the consequence of the decision of a coalition of players [10].

Formally the system is described by:

(i) A decision set $X$

(ii) A perturbation set $Y$

(iii) A vector-valued cost criterion

$$\phi: X \times Y \to \mathbb{R}^p, \quad (x,y) \mapsto \phi(x,y) = (\phi_j(x,y))_{j=1,\ldots,p}$$

In section 2, the optimality criterion is precisely defined as a generalization of Pareto-optimality. In section 3, the scalarization process is studied for this class of systems. In section 4, a Lagrange multiplier theorem is proved.

*This work was supported by the Canada Council under Grants S72-0513 and S73-0268.

Notation

$\phi(\cdot) \triangleq (\phi_j(\cdot))_{j=1,\ldots,p}$

For $x, y \in \mathbb{R}^p$: $x \leq y$ if $x_j \leq y_j$, $j = 1,\ldots,p$; $\mathbb{R}^p_+ \triangleq \{z \in \mathbb{R}^p : 0 \leq z\}$.

$\phi_x'(x,y)$ will denote the gradient w.r.t. $x$ of the function $\phi(\cdot,y)$ at point $x$. $\phi_j'(x;h)$ will denote the directional derivative in the direction $h$ of the function $\phi_j(x)$.

2. DEFINITION OF OPTIMALITY

To every decision $x$ in $X$ is associated a set of possible outcomes:

$$\phi(x,Y) \triangleq \{\phi(x,y) : y \in Y\}$$

Any preorder given on $\{\phi(x,Y) : x \in X\}$ will induce a preorder on $X$. Given a preorder $(X, \preceq)$, a minimal element will be called a Pareto-minimum in $X$. In this paper the following preorder will be used:

$$x \preceq x' \quad \text{if} \quad \mathrm{Sup}\ \phi(x,Y) \leq \mathrm{Sup}\ \phi(x',Y),$$

where $\mathrm{Sup}\ \phi(x,Y)$ is the L.U.B. of $\phi(x,Y)$ in $(\mathbb{R}^p, \leq)$. Note that:

$$\mathrm{Sup}\ \phi(x,Y) = (\mathrm{Sup}\ \phi_j(x,Y))_{j=1,\ldots,p}$$

Thus the following definition of Pareto optimality is adopted.

Definition 2.1: $x^*$ in $X$ is Pareto-optimal if, for all $x$ in $X$:

$$\mathrm{Sup}\{\phi_j(x,y) : y \in Y\} \leq \mathrm{Sup}\{\phi_j(x^*,y) : y \in Y\} \quad \forall j \in \{1,\ldots,p\}$$

implies that:

$$\mathrm{Sup}\{\phi_j(x,y) : y \in Y\} = \mathrm{Sup}\{\phi_j(x^*,y) : y \in Y\} \quad \forall j \in \{1,\ldots,p\}$$

We can define the auxiliary cost function $\psi: X \to \mathbb{R}^p$ by:

$$\psi(x) \triangleq \mathrm{Sup}\ \phi(x,Y).$$

Definition 2.1 is then equivalent to: $x^*$ in $X$ is Pareto optimal if, for all $x$ in $X$, $\psi(x) \leq \psi(x^*)$ implies $\psi(x) = \psi(x^*)$.

In order to characterize Pareto optimal elements, two approaches will be used: the scalarization process [1] - [6] and a direct method based on an extension of the theorem of Kuhn and Tucker.

3. THE SCALARIZATION PROCESS

The main results in optimization with a vector-valued criterion revolve around the process of scalarization [1] - [6]. In the presence of perturbations the scalarization cannot, in general, be performed directly on the cost functions, for with $\alpha_j > 0$, $j = 1,\ldots,p$, the conditions $\hat x \in X$ and

$$\mathrm{Sup}\Bigl\{\sum_{j=1}^p \alpha_j \phi_j(\hat x, y) : y \in Y\Bigr\} \ \leq\ \mathrm{Sup}\Bigl\{\sum_{j=1}^p \alpha_j \phi_j(x,y) : y \in Y\Bigr\} \quad \forall x \in X$$

do not imply that $\hat x$ is Pareto-optimal.(†) The scalarization must be applied to the auxiliary cost function $\psi$. First we state the following well-known result.

Theorem 3.1 : Let α_j > 0, j = 1, ..., p, and x* in X be such that :

∀x ∈ X   Σ_{j=1}^p α_j ψ_j(x*) ≤ Σ_{j=1}^p α_j ψ_j(x)

then x* is Pareto-optimal.

Corollary 3.1 : Let α_j > 0, j = 1, ..., p, and x* in X be such that :

∀x ∈ X   Sup_{y_1 ∈ Y, ..., y_p ∈ Y} Σ_{j=1}^p α_j φ_j(x*, y_j)  ≤  Sup_{y_1 ∈ Y, ..., y_p ∈ Y} Σ_{j=1}^p α_j φ_j(x, y_j)

then x* is Pareto-optimal.

(†) Consider as an example : X ≜ R, Y ≜ {-1, 1}, φ(x, y) ≜ (x² + y(x-1), x² - y(x-1)). Thus ψ(x) = (x² + |x-1|, x² + |x-1|) and the only Pareto-optimal element is x* = 1/2. But φ₁(x, y) + φ₂(x, y) = 2x², which attains its unique minimum at x̂ = 0.
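The footnote's counterexample can be verified numerically. In this sketch (a grid search of my own, not part of the paper) the worst-case weighted sum Sup_y Σ_j φ_j collapses to 2x² and picks out x̂ = 0, while minimizing the auxiliary cost ψ picks out the Pareto-optimal x* = 1/2.

```python
def phi(x, y):
    # Footnote example: X = R, Y = {-1, 1}
    return (x * x + y * (x - 1), x * x - y * (x - 1))

def psi(x):
    # Auxiliary cost: componentwise Sup over Y (both components equal x^2 + |x-1|)
    return tuple(max(phi(x, y)[j] for y in (-1, 1)) for j in (0, 1))

def scalarized_raw(x):
    # Direct scalarization with alpha = (1, 1): Sup_y (phi_1 + phi_2) = 2 x^2
    return max(phi(x, y)[0] + phi(x, y)[1] for y in (-1, 1))

grid = [i / 100 for i in range(-100, 201)]
x_pareto = min(grid, key=lambda x: psi(x)[0])  # minimises psi  -> 0.5
x_scalar = min(grid, key=scalarized_raw)       # minimises the raw weighted sum -> 0.0
```

The scalarized minimizer x̂ = 0 is dominated: ψ(0) = (1, 1) is strictly worse than ψ(1/2) = (3/4, 3/4) in both components.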

Proof :

Σ_{j=1}^p α_j Sup_{y ∈ Y} φ_j(x, y) = Σ_{j=1}^p Sup_{y_j ∈ Y} α_j φ_j(x, y_j) = Sup_{y_1 ∈ Y, ..., y_p ∈ Y} Σ_{j=1}^p α_j φ_j(x, y_j). ∎
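The interchange used in this proof — a sum of independent suprema equals the supremum of the sum over independent choices y_1, ..., y_p — can be spot-checked on random finite data (an illustrative sketch, not from the paper):

```python
import itertools
import random

random.seed(0)
p, n_y = 3, 4
alpha = [0.5, 1.0, 2.0]
# phi[j][k] plays the role of phi_j(x, y_k) at one fixed decision x
phi = [[random.uniform(-1, 1) for _ in range(n_y)] for _ in range(p)]

# Left-hand side: weighted sum of independent suprema
lhs = sum(a * max(row) for a, row in zip(alpha, phi))

# Right-hand side: supremum over all joint choices (y_1, ..., y_p)
rhs = max(
    sum(a * row[yk] for a, row, yk in zip(alpha, phi, choice))
    for choice in itertools.product(range(n_y), repeat=p)
)
```

Each term of the sum is maximized by its own y_j, so the two quantities coincide.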

For a necessary scalarization condition to hold, some convexity assumptions are needed. The following lemma indicates what kind of convexity must be met.

Lemma 3.1 : x* in X is Pareto-optimal iff the following holds :

(ψ(x*) - R^p₊) ∩ (ψ(X) + R^p₊) = {ψ(x*)}     (3.1)

Proof : Clearly x* is Pareto-optimal iff :

(ψ(x*) - R^p₊) ∩ ψ(X) = {ψ(x*)}     (3.2)

Obviously (3.1) implies (3.2). Conversely, let ω be an element of the L.H.S. of (3.1); then, for some x in X and some ε', ε" in R^p₊, one has :

ω = ψ(x*) - ε' = ψ(x) + ε"

thus ψ(x) = ψ(x*) - (ε' + ε") ∈ ψ(x*) - R^p₊ and the L.H.S. of (3.2) contains ψ(x). Thus (3.2) implies ψ(x*) = ψ(x), ε' = ε" = 0 and finally ω = ψ(x*), that is (3.1). ∎

Remark 3.1 : If X is a convex subset of a linear space and each φ_j is convex on X, then ψ(X) + R^p₊ is convex (but φ_j quasi-convex is not sufficient).

Remark 3.2 : ψ(X) + R^p₊ is convex iff :

∀x, x' ∈ X, ∀λ ∈ [0, 1], ∃z ∈ X s.t. ψ(z) ≤ λ ψ(x) + (1 - λ) ψ(x')

i.e. there exists a mapping T : X × X × [0, 1] → X such that :

∀x, x' ∈ X, ∀λ ∈ [0, 1]   ψ(T(x, x', λ)) ≤ λ ψ(x) + (1 - λ) ψ(x').

Theorem 3.2 : If ψ(X) + R^p₊ is convex and x* is Pareto-optimal then there exist α_j ≥ 0 for all j and α_ℓ > 0 for some ℓ such that :

∀x ∈ X   Σ_{j=1}^p α_j ψ_j(x*) ≤ Σ_{j=1}^p α_j ψ_j(x)


Proof : Direct application of Lemma 3.1 and the separation theorem. ∎

Assumption 3.1 :

X is a convex subset of a Banach space U

Y is a compact space

φ : U × Y → R^p is continuous and convex in x ∈ U.

x → φ_x(x, y) is an equicontinuous family.

Then, we can prove the following theorem using results of Danskin [12], Dem'yanov [13] and Lemaire [14].

Theorem 3.3 : Under assumption 3.1 :

(i) If x* ∈ X is Pareto-optimal then there exist α_j ≥ 0 for all j, α_ℓ > 0 for some ℓ, such that :

Min_{x ∈ X} Σ_{j=1}^p α_j Max_{y ∈ Y_j(x*)} < φ_jx(x*, y), x - x* > = 0     (3.3)

where Y_j(x) ≜ {y ∈ Y : φ_j(x, y) = Max_{y' ∈ Y} φ_j(x, y')}

(ii) Conversely, if there exist α_j > 0 for each j such that (3.3) holds, then x* is Pareto-optimal.

Proof : For each j, ψ_j is directionally differentiable with (see Refs [12] - [14]) :

ψ'_j(x ; h) = Max_{y ∈ Y_j(x)} < φ_jx(x, y), h >

Since :

(Σ_{j=1}^p α_j ψ_j)'(x ; h) = Σ_{j=1}^p α_j ψ'_j(x ; h)

we can use theorem 2.4 of Dem'yanov [13] and theorems 3.1 and 3.2 to get (i) and (ii). ∎
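The directional-derivative formula (a Danskin-type result) is easy to check numerically in one dimension. The sketch below is illustrative (the choice of φ and of the tolerances is mine): it compares the max over the active set Y(x) with a finite-difference quotient at a kink of ψ.

```python
def phi(x, y):
    return x * x + y * (x - 1)   # one criterion, Y = {-1, 1}

def phi_x(x, y):
    return 2 * x + y             # gradient of phi(., y)

Y = (-1.0, 1.0)

def psi(x):
    return max(phi(x, y) for y in Y)

def dpsi(x, h, tol=1e-9):
    """psi'(x; h) = max over the active set Y(x) of <phi_x(x, y), h>."""
    m = psi(x)
    active = [y for y in Y if phi(x, y) >= m - tol]
    return max(phi_x(x, y) * h for y in active)

def dpsi_numeric(x, h, t=1e-6):
    # one-sided difference quotient in the direction h
    return (psi(x + t * h) - psi(x)) / t
```

At x = 1 both y = -1 and y = 1 are active, so ψ has a kink there: the formula gives ψ'(1 ; +1) = 3 but ψ'(1 ; -1) = -1, and both agree with the difference quotients.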

It is clear that the scalarization approach is the simpler one, but to be applicable it requires convexity assumptions, even to get necessary conditions.

In the next section, we will develop necessary conditions for Pareto-optimality which do not require convexity.


4. A LAGRANGE MULTIPLIER THEOREM

In this section, the results of Danskin [12] and Bram [15] will be used

extensively.

Assumption 4.1 :

Y is a compact space

φ : R^n × Y → R^p is continuous and its derivative φ_x(·, y) is continuous.

X ⊂ R^n is defined by a set of constraints :

X = {x ∈ R^n : g_i(x) ≥ 0, i = 1, ..., m}

g_i : R^n → R is C¹, i = 1, ..., m.

Theorem 4.1 : Under Assumption 4.1, if x* in X is Pareto-optimal and if the Kuhn-Tucker constraint qualification is satisfied at x*, then there exist λ_i ≥ 0 for i = 1, ..., m and α_j ≥ 0 for j = 1, ..., p, α_ℓ > 0 for some ℓ, such that :

∀h ∈ R^n   Σ_{j=1}^p α_j Max_{y ∈ Y_j(x*)} < φ_jx(x*, y), h >  ≥  Σ_{i=1}^m λ_i < g'_i(x*), h >     (4.1)

and λ_k > 0 ⟹ g_k(x*) = 0.     (4.2)

Furthermore, if the g_i's are concave and the φ_j's are convex in x, then (4.1) with λ_i ≥ 0, i = 1, ..., m and α_j > 0, j = 1, ..., p implies that x* is Pareto-optimal.

Proof : Let x* ∈ X be a Pareto-optimal point. This means that for every x ∈ X either ∃j s.t. ψ_j(x) > ψ_j(x*), or ∀j ψ_j(x) = ψ_j(x*); this implies that :

∀x ∈ X   Max_j [ψ_j(x) - ψ_j(x*)] ≥ 0     (4.3)

(Note that this is a necessary condition for Pareto-optimality, but it is a necessary and sufficient condition for weak Pareto-optimality.)

Let Γ be the set of vectors admissible at x*, that is the set of h ∈ R^n such that there exists an arc issuing from x*, and lying in X, with a tangent vector at x* equal to h.


Thus (4.3) becomes :

Max_j ψ'_j(x* ; h) ≥ 0   ∀h ∈ Γ

Under the constraint qualification condition, one has :

Γ = {h ∈ R^n : < g'_i(x*), h > ≥ 0, i ∈ I}     (4.4)

where

I = {i : g_i(x*) = 0}.

Let W_j ≜ φ_jx(x*, Y_j(x*)) ; then (4.3) becomes :

Max_{j=1,...,p} Max_{w ∈ W_j} < w, h > ≥ 0   ∀h ∈ Γ

or

Max_{w ∈ ∪_j W_j} < w, h > ≥ 0   ∀h ∈ Γ

and, if W is the convex hull of ∪_{j=1,...,p} W_j :

Max_{w ∈ W} < w, h > ≥ 0   ∀h ∈ Γ

Consider the cone Γ* ≜ {z ∈ R^n : < z, h > ≥ 0 ∀h ∈ Γ} ; then, following Bram's proof [15], it can be shown that Γ* ∩ W is not empty.

Let ŵ ∈ Γ* ∩ W ; then there exist λ_i ≥ 0, i ∈ I, such that :

ŵ = Σ_{i ∈ I} λ_i g'_i(x*)

and there exist w_j ∈ W_j, α_j ≥ 0, j = 1, ..., p, such that :

ŵ = Σ_{j=1}^p α_j w_j   with   Σ_{j=1}^p α_j = 1 ;

therefore, setting λ_i = 0 if i ∉ I, one has :

∀h ∈ R^n   Σ_{j=1}^p α_j Max_{w_j ∈ W_j} < w_j, h >  ≥  Σ_{i=1}^m λ_i < g'_i(x*), h >

that is (4.1). ∎


Let us remark that if each set W_j reduces to a single element, then each ψ_j is differentiable at x* and (4.1) yields the generalized Kuhn-Tucker result :

Σ_{j=1}^p α_j φ_jx(x*, Y_j(x*)) = Σ_{i=1}^m λ_i g'_i(x*).

Finally, the proof of the sufficient condition is the same as in the classical case. ∎

5. CONCLUSION

The notion of Pareto-optimality has been extended to perturbed systems.

The scalarization process and the extended Lagrange multiplier rule which have been obtained can be used to characterize and compute optimal decisions.

A promising field of application of these results is "n-player" game theory without side payments [16] ; the reader could verify that the boundary of Aumann's characteristic function for a given coalition corresponds to the set of all Pareto-optimal outcomes in an adequately defined perturbed system.

REFERENCES

[1] A.W. STARR & Y.C. HO : Nonzero-Sum Differential Games, JOTA, 3 (1969), 184-206.

[2] A.W. STARR & Y.C. HO : Further Properties of Nonzero-Sum Differential Games, JOTA, 3 (1969), 207-219.

[3] T.L. VINCENT & G. LEITMANN : Control Space Properties of Cooperative Games, JOTA, 4 (1970), 91-113.

[4] A. BLAQUIERE : Sur la géométrie des surfaces de Pareto d'un jeu différentiel à N joueurs, C.R. Acad. Sc. Paris, Sér. A, 271 (1970), 744-747.


[5] A. BLAQUIERE, L. JURICEK & K.E. WIESE : Sur la géométrie des surfaces de Pareto d'un jeu différentiel à N joueurs; théorème du maximum, C.R. Acad. Sc. Paris, Sér. A, 271 (1970), 1030-1032.

[6] A. HAURIE : Jeux quantitatifs à M joueurs, doctoral dissertation, Paris, 1970.

[7] M.C. DELFOUR & S.K. MITTER : Reachability of Perturbed Linear Systems and Min Sup Problems, SIAM J. on Control, 7 (1969), 521-533.

[8] D.P. BERTSEKAS & I.B. RHODES : On the Minimax Reachability of Targets and Target Tubes, Automatica, 7 (1971), 233-247.

[9] J.D. GLOVER & F.C. SCHWEPPE : Control of Linear Dynamic Systems with Set Constrained Disturbances, IEEE Trans. on Automatic Control, AC-16 (1971), 411-423.

[10] A. HAURIE : On Pareto Optimal Decisions for a Coalition of a Subset of Players, IEEE Trans. on Automatic Control, April 1973.

[11] H.W. KUHN & A.W. TUCKER : Non-Linear Programming, 2nd Berkeley Symposium on Mathematical Statistics and Probability, Univ. Calif. Press, Berkeley, 1951.

[12] J. DANSKIN : On the Theory of Min-Max, SIAM J. Appl. Math., 14 (1966), 641-664.

[13] V.F. DEM'YANOV & A.M. RUBINOV : Minimization of Functionals in Normed Spaces, SIAM J. Control, 6 (1968), 73-88.

[14] B. LEMAIRE : Problèmes min-max et applications au contrôle optimal de systèmes gouvernés par des équations aux dérivées partielles linéaires, Thèse de doctorat, Faculté des Sciences, Université de Paris, 1970.

[15] J. BRAM : The Lagrange Multiplier Theorem for Max-Min with Several Constraints, SIAM J. Appl. Math., 14 (1966), 665-667.


[16] R.J. AUMANN : A Survey of Cooperative Games without Side Payments, in Essays in Mathematical Economics, ed. M. Shubik, Princeton, 1969.


A UNIFIED THEORY OF DETERMINISTIC TWO-PLAYERS ZERO-SUM DIFFERENTIAL GAMES

Christian Marchal

Office National d'Etudes et de Recherches Aérospatiales (ONERA)

92320 - Châtillon (France)

Abstract

This paper is a shorter presentation of "Generalization of the optimality theory of Pontryagin to deterministic two-players zero-sum differential games" [MARCHAL, 1973] presented at the fifth IFIP conference on optimization techniques.

The very notions of zero-sum game and deterministic game are discussed in the first sections. The only interesting case is the case when there is "complete and infinitely rapid information". When the minimax assumption is not satisfied it is necessary to define 3 types of games according to the ratios between the time-constant of a chattering between two or several controls and the delays necessary to measure the adverse control and to react to that control; this emphasizes the meaning of the "complete and infinitely rapid information" concept.

In the last sections the optimality theory of Pontryagin is generalized to deterministic two-players zero-sum differential games; it leads to the notion of extremal pencil (or bundle) of trajectories.

When some canonicity conditions generalizing those of Pontryagin are satisfied, the equations describing the extremal pencils are very simple but lead to many kinds of singularities already found empirically in some simple examples and called barriers, universal surfaces, dispersal surfaces, focal lines, equivocal lines, etc.

Introduction

Many authors have tried to extend to differential game problems the beautiful theory of Pontryagin used in optimization problems, but there are so many singularities in differential game problems, even in the deterministic two-players zero-sum case, that a general expression is difficult to find and to express.

A new notion is used here: the notion of "extremal pencil (or bundle) of trajectories". This notion allows us to present the generalization of Pontryagin's theory in a simple way.

1. Two-players zero-sum differential games

Usually two-players zero-sum differential games are presented as follows :

A) There is a parameter of description t that we shall call the time.

B) The system of interest uses n other parameters x₁, x₂, ..., x_n and we shall put :

(1) X = (x₁, x₂, ..., x_n) = state vector.

We shall assume that I, the performance index of interest (also called cost function or pay-off), is a function of the final values X_f, t_f only; if necessary, for instance if I is related to an integral taken along the described trajectory X(t), we must add into X a component related to that integral.

C) There are two players that we shall call the maximisor M and the minimisor m; each of them chooses a measurable control M(t) and m(t) (respectively in the control domains D_M(t) and D_m(t), Borelian functions of t) and the velocity vector V = dX/dt is a given Borelian function of X, t and the two controls :

(2) V = dX/dt = V(X, M, m, t)


D) There is a "playing space" Ω, subset of the R^{n+1} space of the (X, t), and a "terminal surface", or more generally a "terminal subset" Θ, along the boundaries of Ω. We shall assume that the set Ω is open.

E) The control equation (2) is defined everywhere in Ω and the performance index I(X_f, t_f) is defined everywhere on Θ ; the game starts in Ω at a given initial point X_o, t_o ; the maximisor tries to maximise I at the first arrival at Θ and the minimisor tries to minimise it.

2. Zero-sum games

Since there is only one performance index I(X_f, t_f) it seems that the above defined game is zero-sum; however it is possible that both players have an interest in avoiding the termination of the game (e.g. the cat, the mouse and the hole around a circular lake, when the cat blocks the hole) and, in order to avoid the difficulties of non zero-sum games, the value of the performance index must also be defined in non-finite cases.

3. Deterministic cases of two-players zero-sum differential games

A first condition of determinism is that both players have complete information on the control function V(X, M, m, t), the control domains D_M(t) and D_m(t), the performance index I(X_f, t_f), the playing space Ω, the terminal subset Θ and the initial conditions X_o, t_o.

It is possible to imagine some particular cases of deterministic games such as :

A) One of the two players, for instance M, has more or less complete information on the present and past state vector X(t) and chooses a pure strategy based on this information :

(3) M = M(t ; X(τ), τ ≤ t)

B) He must indicate his choice to the other player.

C) The second player then chooses his own control function m(t).

Hence : when (3) is given, the problem of m is an ordinary problem of minimization, and M then chooses his strategy (3) in order that this minimum be as large as possible.

However, if the information of the first player is incomplete, the conditions A and B are generally unrealistic : the choice of a good mixed strategy very often improves the possibilities of the first player and the real problem is thus not deterministic.

The only realistic and deterministic cases are the following :

A) Both players can measure the present value of X at an infinite rate; we shall call T_M and T_m the infinitely small delays necessary to obtain this measure and to react to it.

B) In some cases the optimal control requires a chattering between two or several controls; we shall assume that these chatterings can be made at an infinite rate and we shall call τ_M and τ_m the corresponding infinitely small intervals of time.

C) There are then 3 deterministic cases :

C1) Case when τ_m + T_m << τ_M

It is the maximin case or Mm-case; everything happens as if the minimisor m could choose its control after the maximisor M at any instant.

C2) Case when τ_M + T_M << τ_m. It is the minimax case or mM-case, symmetrical to the previous one.


C3) Case when τ_M << τ_m + T_m and τ_m << τ_M + T_M

It is the neutral case or N-case; both players choose their own control independently of the opponent's choice. The determinism of that last case requires some more conditions (see in chapter 5.1 the condition of equality of H₁ and H₂).

Of course the assumption of determinism implies that both players know if the game is an mM, Mm or N-game.

A simple illustration of these 3 types of game is given in the following example :

Initial conditions : x_o = t_o = 0 ; terminal subset t_f = 1 ; performance index I = x_f ; control function dx/dt = 2M² + Mm - 2m² ; control domains : |M| ≤ 1 ; |m| ≤ 1.

Hence :

Maximin case : M = ±1 ; m = -sign M ; x_f = -1

Minimax case : m = ±1 ; M = sign m ; x_f = +1

Neutral case : M and m chatter equally at very high rate, as in a Poisson process, between +1 and -1 ; it gives x_f = 0.

We shall see that these 3 types of games are equivalent (and the comparisons of T_M, T_m, τ_M, τ_m are not necessary) if the control function has the form :

(4) V = V_M(X, M, t) + V_m(X, m, t)

and if V_M and/or V_m are bounded.
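The three values quoted in the example can be recovered by brute force over discretised control domains (a sketch with a grid of my own choosing; the chattering of the neutral case is replaced by an equal average over the four corner control pairs):

```python
def f(M, m):
    # Control function of the example: dx/dt = 2M^2 + Mm - 2m^2
    return 2 * M * M + M * m - 2 * m * m

grid = [i / 10 for i in range(-10, 11)]   # discretised domains |M| <= 1, |m| <= 1

maximin = max(min(f(M, m) for m in grid) for M in grid)       # Mm-case: m reacts to M
minimax = min(max(f(M, m) for M in grid) for m in grid)       # mM-case: M reacts to m
neutral = sum(f(M, m) for M in (-1, 1) for m in (-1, 1)) / 4  # N-case: equal chattering
```

Since dx/dt is constant in t and t_f = 1, these rates are directly the final values x_f = -1, +1 and 0 of the three cases.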

4. The upper game and the lower game

Another reason for undeterminism appears when there are discontinuities of the performance index (e.g. : I = 0 if x_f ≠ 0 and I = 1 if x_f = 0), or when the terminal subset has particular forms (such as the two sheets of a cone), and when an infinitely small change of the control, especially near the final instant, gives a large change of the performance index : it is indeed impossible to follow the opponent's reactions with an infinite accuracy.

In order to avoid that kind of undeterminism it is sufficient to give to one player an infinitesimal right to cheat, i.e. to add to the velocity vector V = dX/dt a component W whose integral ∫ ||W|| dt is as small as desired by its opponent in any sufficiently large bounded set of the X, t space.

We obtain thus the upper game and the lower game according to the player who has the right to cheat (classification independent of the minimax, maximin and neutral types).

The upper and lower values of a game will be major elements of appreciation of that game in a given situation since an infinite accuracy is never possible.

5. Extension of the Pontryagin theory to deterministic two-players zero-sum differential games

5.1. The adjoint vector P and the Generalized Hamiltonian H*(P, X, t)

We shall use the ordinary notations and first the adjoint vector of Pontryagin, P, which will be closely related to the strategy of each player.


As usual the Hamiltonian will be the scalar product :

(5) H = P.V

and by a direct generalization of the notion of "Generalized Hamiltonian" (MARCHAL 1971, ROCKAFELLAR 1970) the new "Generalized Hamiltonian" is :

A) For a game of "maximin type" :

(6) H*(P, X, t) = Sup_{M ∈ D_M} Inf_{m ∈ D_m} P.V(X, M, m, t)

B) For a game of "minimax type" :

(7) H*(P, X, t) = Inf_{m ∈ D_m} Sup_{M ∈ D_M} P.V(X, M, m, t)

C) For a game of "neutral type" let us define H₁ and H₂, the λ_i's being positive and their sum being one :

(8) H₁(P, X, t) = Sup_{λ_i, M_i} Inf_{m} Σ_i λ_i P.V(X, M_i, m, t) ; H₂(P, X, t) = Inf_{λ_i, m_i} Sup_{M} Σ_i λ_i P.V(X, M, m_i, t)

It is easy to verify that always :

(9) H₁(P, X, t) ≤ H₂(P, X, t)

but these two quantities are not necessarily equal (if, for instance, for given P, X, t, the scalar product P.V is equal to M - m, M and m being arbitrary real positive numbers).

The determinism of a game of "neutral type" requires that the two functions H₁(P, X, t) and H₂(P, X, t) be identical; they are then of course equal to the Generalized Hamiltonian H*(P, X, t) of the game.

A sufficient condition of equality of H₁ and H₂ is that at least one of the two control domains D_M(t) and D_m(t) be a compact set of an R^q space and that for any (X, t) the control function V = V(X, M, m, t) be uniformly continuous with respect to the corresponding control parameter.

Of course, the maximin type being the most favourable to the minimisor, the corresponding Generalized Hamiltonian is always the smallest, and conversely the Generalized Hamiltonian of the minimax type is always the largest.
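This ordering holds for any payoff, since a Sup-Inf never exceeds the corresponding Inf-Sup. A quick randomized check on an arbitrary table of values of P.V (illustrative data only, not from the paper):

```python
import random

random.seed(1)
n = 6
# pv[i][j] stands for P.V(X, M_i, m_j, t) at one fixed (P, X, t)
pv = [[random.uniform(-5, 5) for _ in range(n)] for _ in range(n)]

h_maximin = max(min(row) for row in pv)                             # Sup_M Inf_m P.V
h_minimax = min(max(pv[i][j] for i in range(n)) for j in range(n))  # Inf_m Sup_M P.V
```

Whatever the table, h_maximin ≤ h_minimax; equality is exactly the minimax assumption discussed above.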

It is easy to verify that the 3 Generalized Hamiltonians are identical if the control has the form (4) and if the velocities V_M and/or V_m are bounded.

It is possible to see now how the adjoint vector P is related to the strategies of both players. Let us assume for instance that locally, between the instants t₁ and t₁ + Δt, the maximisor wants to maximise the scalar product K.X and the minimisor wants to minimise it; they will both choose the control corresponding to P = K in (6), (7) or (8) according to the type of the game (with a chattering if necessary) and, if H* is continuous in terms of P, X and t, they will obtain :

(10) K.X(t₁ + Δt) = K.X(t₁) + H*(K, X(t₁), t₁)·Δt + o(Δt)


5.2. The conditions of canonicity

In order to avoid the difficulties coming from the discontinuities of trajectories X(t) we shall assume that the control function V = V(X, M, m, t) (with M ∈ D_M(t) and m ∈ D_m(t)) is bounded in any bounded set of the X, t space; it implies that the trajectories X(t) are Lipschitz functions of t and that the Generalized Hamiltonian H*(P, X, t) is bounded in any bounded set of the P, X, t space.

On the other hand let us note that the part of the terminal subset where the performance index is very small is considered as a forbidden zone by the maximisor, and conversely the part of the terminal subset where the performance index is very large is considered as a forbidden zone by the minimisor.

Thus, in order to obtain a generalization of the Pontryagin theory to differential games, it is necessary that the conditions of application of that theory to problems with forbidden zones be satisfied, that is (MARCHAL 1971, page 151) :

A) The problem must be canonical in the generalized meaning of Pontryagin for the admissibility of the discontinuous type, i.e. here, since the velocities V are locally bounded :

The Generalized Hamiltonian H*(P, X, t) must be a locally Lipschitzian function of P, X and t.

This severe condition has the advantage of involving the equivalence between chattering and relaxation, which is necessary especially for neutral type games.

B) The terminal subset Θ must be a union or intersection of a finite number of closed or open "smooth" sets (i.e. manifolds with everywhere a tangent subspace that is a Lipschitzian function of the position); these manifolds must be limited by a finite number of "smooth" hypersurfaces, themselves limited by a finite number of "smooth" hyperlines, etc.

For instance the terminal subset can be the surface of a polyhedron or of a cylinder, and this condition is satisfied in almost all ordinary problems; however it is not satisfied at the origin for the function y = x^n when either 0.5 < n < 1 or 1 < n < 2.

C) For any value I_o the parts of the terminal subset defined either by I > I_o, or by I ≥ I_o, or by I < I_o, or by I ≤ I_o must satisfy the same condition of "smoothness".

5.3. The Generalized Pontryagin formulas

In ordinary problems of optimization, when the Generalized Hamiltonian H*(P, X, t) is defined and when the conditions of canonicity are satisfied, it is possible to adjoin to each extremal trajectory X(t) an absolutely continuous adjoint function P(t) different from zero and such that, with H*[P(t), X(t), t] = H*(t), either :

(11) dX/dt = ∂H*/∂P ; dP/dt = -∂H*/∂X ; dH*/dt = ∂H*/∂t

or, more generally (when H*(P, X, t) is locally Lipschitzian but not continuously differentiable in terms of P, X and t) :

(12) The vector (dX/dt, -dP/dt, dH*/dt) belongs for almost all t to the domain D_H*(P(t), X(t), t)


This domain D_H*(P, X, t) is the smallest closed and convex set (of the R^{2n+1} space) containing the gradient vectors ∂H*/∂(P, X, t) obtained at points (P + δP, X + δX, t) where :

A) H* is differentiable in terms of P, X and t ;

B) δP and δX are infinitely small (i.e. of course D_H* is the limit for ε → 0 of the domains D_H*,ε obtained when ||δP|| and ||δX|| can vary from 0 to ε ; D_H* is a particular "convex hull").

When there are forbidden zones with "smooth" boundaries, the adjoint function P(t) becomes the sum of an absolutely continuous function and a "jump function" (i.e. a function with a finite or denumerable number of discontinuities and which is constant between these discontinuities) and the equations (11) and (12) become more complex.

Let us now try to generalize these equations to differential game problems.

We shall only consider upper game problems with a bounded playing space and we shall decompose them into the different "games of kind" corresponding to either I ≥ I_o or I ≤ I_o.

Let us note that some people consider that the part of the terminal subset Θ corresponding to I ≥ I_o in the upper game problem, part that we shall call Θ₊, is only the closure of the corresponding part of the initial game; some other people add to that closed set the points where the local upper limit of the performance index is I_o and thus obtain all points where that local upper limit is larger than or equal to I_o. Anyhow in both cases the set Θ₊ is closed and thus the two cases are similar.

We shall call Θ₋ the subset Θ - Θ₊.

It is easy to demonstrate that the part of the playing space corresponding to I < I_o (in the upper game problem) is open; we shall call that subset O_{Io} and we shall call F_{Io} the remaining part of the playing space. We are especially interested in the closed set S, intersection of the boundaries of O_{Io} and F_{Io}.

The generalization of Pontryagin's theory (see demonstration in MARCHAL, to appear) leads to :

If the playing space is bounded, to each point (X_o, t_o) of the boundary S corresponds a pencil (or bundle) of absolutely continuous trajectories X_i(t) belonging entirely to S, each of them being associated to an adjoint function P_i(t) defined on the same interval of time and being the sum of an absolutely continuous function of t and a jump function of t (with a bounded total variation).

We shall call extremal pencil or extremal bundle the union of the trajectories X_i(t); this extremal pencil satisfies the following generalized conditions :

A) The pencil begins at the point (X_o, t_o) of interest with at least one trajectory.

B) Each trajectory X_i(t) is defined on a closed interval of time (t_io, t_if) and ends at the terminal subset Θ (and not only at Θ₊ as written erroneously in MARCHAL 1973).

C) A point (X, t) of the extremal pencil can belong to one or to several trajectories X_i(t) and there is always at least one corresponding adjoint vector P_i(t) different from zero.

D) With H*_i(t) = H*(P_i(t), X_i(t), t), the equations (11) become :

(13)

with :

(14)

E) In the same way the generalized equations (12) become, for any given i and for almost all t :

(15)

F) The functions P_i(t) and H*_i(t) can also have jumps in the directions given by the infinite values of the positive factors λ_ij.

G) As usual when there are forbidden zones, if a point (X, t) of the pencil belongs to the terminal zone Θ, one can add to the derivatives of the vector (P, -H*) given in (13) and (15) a component normal to Θ and directed outward (if (X, t) ∈ Θ₋) or toward the playing space (if (X, t) ∈ Θ₊); one can even add a jump provided that there exist connecting absolutely continuous functions P_i(φ), H*_i(φ) leading from P(t-), H*(t-) to P(t+), H*(t+), verifying for any φ the relation (6) (or (7) or (8)) and having their derivatives with respect to φ in the directions given by these outer or inner normal components.

H) Finally, for each trajectory X_i(t) of the extremal pencil, the final values [P_i(t_if), H*_i(t_if)] must satisfy the ordinary final conditions of Pontryagin (also called "transversality conditions").

A simple way to obtain the directions normal to Θ is to use a Lipschitzian penalty function f(X, t) equal to zero on Θ and negative in the playing space Ω ; the local gradient of f(X, t) with respect to (X, t) gives the outer normal direction (or directions, for instance at a corner of Θ).

In the same way, for the condition H, the final vector [P_i(t_if) ; -H*_i(t_if)], with (n + 1) components, must be parallel to and in the direction of the local gradient of a Lipschitzian function f₊(X, t) equal to zero on Θ₊ and negative anywhere else (or antisymmetrically with respect to f₋ if (X_if, t_if) ∈ Θ₋).

Let us note that around points (X, t) belonging to Θ₊ but not to Θ₋ it is useless to consider the function f₊(X, t) out of Θ₊ (and conversely for f₋(X, t) if (X, t) ∈ Θ₋).

On the other hand, if the direction of grad f₊(X, t) (or grad f₋(X, t)) is not continuous at the final point (X_if, t_if) of interest, the vector [P_i, -H*_i] may be in any direction of the corresponding conic convex hull.

Conclusion

The generalization of the optimization theory of Pontryagin to deterministic two-players zero-sum differential games leads to the notion of extremal pencil and to the above equations and rules, which are sometimes sufficient to determine these pencils (see for instance the two examples of MARCHAL 1973). The main remaining question is to improve the conditions of backward construction of extremal pencils outlined in that reference.


References

ATHANS, M. - The status of optimal control theory and applications for deterministic systems, A.8: differential games. IEEE Trans. on Automatic Control, (April 1966).

BEHN, R.D. and HO, Y.C. - On a class of linear stochastic differential games. IEEE Trans. on Automatic Control, (June 1968).

CHYUNG, D.H. - On a class of pursuit evasion games. IEEE Trans. on Automatic Control, (August 1970).

HO, Y.C. and BARON, S. - Minimal time intercept problem. IEEE Trans. on Automatic Control, (April 1965).

HO, Y.C., BRYSON, A.E. and BARON, S. - Differential games and optimal pursuit-evasion strategies. IEEE Trans. on Automatic Control, (October 1965).

ISAACS, R. - Differential games. John Wiley and Sons, (1965).

JACOB, J.P. and POLAK, E. - On a class of pursuit-evasion problems. IEEE Trans. on Automatic Control, (December 1967).

MARCHAL, C. - Theoretical research in deterministic optimization. ONERA publication n° 139, (1971).

MARCHAL, C. - The bi-canonical systems. Techniques of optimization. A.V. Balakrishnan Editor, Academic Press, New York and London, (1972).

MARCHAL, C. - Generalization of the optimality theory of Pontryagin to deterministic two-players zero-sum differential games. ONERA, tiré à part n° 1233, (1973).

MARCHAL, C. - Theoretical research in deterministic two-players zero-sum differential games. ONERA publication, to appear.

MESCHLER, P.A. - On a goal-keeping differential game. IEEE Trans. on Automatic Control, (February 1967).

MESCHLER, P.A. - Comments on a linear pursuit-evasion game. IEEE Trans. on Automatic Control, (June 1967).

PONTRYAGIN, L.S., BOLTYANSKII, V.G., GAMKRELIDZE, R.V. and MISHCHENKO, E.F. - The mathematical theory of optimal processes. Interscience Publishers, John Wiley and Sons, Inc., New York, (1962).

ROCKAFELLAR, R.T. - Generalized Hamiltonian equations for convex problems of Lagrange. Pacific J. of Math. 33, 411-427 (1970).

ROCKAFELLAR, R.T. - Dual problems of optimal control. Techniques of optimization. A.V. Balakrishnan editor. Academic Press, New York, London, (1972).

WARGA, J. - Minimax problems and unilateral curves in the calculus of variations. SIAM Journal on Control, Ser. A, 3, 1, (1965).


ABOUT OPTIMALITY OF TIME OF PURSUIT

M.S. NIKOL'SKII

STEKLOV MATHEMATICAL INSTITUTE,

MOSCOW, USSR

I'll tell about the results of myself, Dr. P.B. Gusjatnikov and V.I. Uhobotov on the problem of optimality of the pursuit time. I studied this problem with Dr. P.B. Gusjatnikov in 1968; there is an article about these results in Soviet Math. Dokl. (vol. 184, N 3, 1969). After this article came the article of N.N. Krasovskii and A.I. Subbotin about this question; their result is more general. I'll not tell about the result of N.N. Krasovskii and A.I. Subbotin (see Soviet Journal of Applied Math. and Mechan., 1969). Recently Dr. P.B. Gusjatnikov and I have obtained some results in this field in cooperation with Dr. V.I. Uhobotov.

Let the motion of a vector z in a Euclidean space R^ν be described by the linear vector differential equation

(1) ż = Cz - u + v

where z ∈ R^ν, C is a constant square matrix, u ∈ P ⊂ R^ν, v ∈ Q ⊂ R^ν ; u and v are control vectors. The vector u corresponds to the pursuer, the vector v to the evader. P and Q are convex compact sets. Let a terminal subspace M be assigned in R^ν. The pursuit is assumed to be completed when the point z reaches M for the first time. It is in the interest of the pursuer to complete the pursuit. The information of the pursuer is the equation (1) and z(t), v(t) for all present t ≥ 0. The functions u(t), v(t) are measurable functions. The pursuer does not know v(t) in advance; v(t) can be an arbitrary measurable function.

Pontryagin has constructed a method of pursuit from an initial point z_o and gave an estimate T(z_o) of the time of pursuit.


I'll say some words about this method.

Let L be the complementary subspace of M in R^n and let π be the operator of projection of R^n onto L parallel to M. Pontryagin considers the compact sets π e^{τC} P, π e^{τC} Q, their geometrical difference

W(τ) = π e^{τC} P ∗ π e^{τC} Q

and the integral W(t) = ∫₀ᵗ W(τ) dτ, which is a compact convex set. The time of pursuit from z₀ in the theory of Pontryagin is the least root of the inclusion

π e^{tC} z₀ ∈ W(t).

Let T(z₀) be such a root.
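The geometrical difference (often called the Pontryagin difference) has a very simple form in one dimension, which illustrates how the sets W(τ) are built. The sketch below, including its function name, is ours and not from the paper; it handles only closed intervals.

```python
# Illustration of the geometrical (Pontryagin) difference used above,
# in the simplest case of intervals on the real line: for A = [a1, a2]
# and B = [b1, b2], the set A * B = {w : w + B is contained in A} is
# the interval [a1 - b1, a2 - b2], empty when a2 - b2 < a1 - b1.

def geometric_difference(a, b):
    """Pontryagin difference of closed intervals a = (a1, a2), b = (b1, b2)."""
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (lo, hi) if lo <= hi else None  # None encodes the empty set

print(geometric_difference((-2.0, 2.0), (-1.0, 1.0)))  # (-1.0, 1.0)
print(geometric_difference((-1.0, 1.0), (-2.0, 2.0)))  # None: the subtracted set dominates
```

In the integral W(t) such differences are accumulated over τ, and the least root of the inclusion reduces, in this toy case, to an interval membership test.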

Definition 1. The optimal time of pursuit from z₀ is the least time within which the pursuer can complete the pursuit from z₀.

The Pontryagin time T(z₀) is optimal if the following Condition A is fulfilled.

Condition A. For every extreme point u ∈ P such that for all τ ∈ [0, T]

π e^{τC} u ∈ π e^{τC} Q,

there exists v ∈ Q with π e^{τC} v = π e^{τC} u.

Theorem 1. If Condition A is fulfilled and T(z₀) ≤ T, then T(z₀) is the optimal time of pursuit.

We shall give some sufficient conditions for the fulfilment of Condition A.

Let 0 ∈ P, let 0 be an interior point of Q in its support subspace, and let U, V be the support subspaces of P and Q in R^n; A(τ) is the restriction of the mapping π e^{τC} to U, and B(τ) is the restriction of the mapping π e^{τC} to V.

Let B(τ) admit a factorization of the following form:


B(τ) = B₁(τ) G, where G is a linear mapping of V into L, B₁(τ) is a linear mapping of L into L, B₁(τ) is analytical for τ ∈ [0, T] and homeomorphic for τ ∈ [0, T] with the exception of a finite set K, and the set GQ sweeps completely the set A(τ)P for τ ∈ [0, T].

Definition 2. A convex closed set S of a space W (dim W ≥ 2) is strictly convex if each of its support planes has only one common point with S.

Definition 3. A convex closed set S of a space W (dim W ≥ 1) is regular if it is strictly convex and each of its boundary points has only one support plane.

Theorem 2. Let B(τ) admit a factorization of the form

(5)    B(τ) = φ(τ) F,

where F is a linear mapping of V into L and φ(τ) is a nonnegative analytical function, and let FQ be strictly convex in L. Under these conditions Condition A is fulfilled.

Theorem 3. Let FQ be regular in L. Under these conditions the equality (5) is necessary for the fulfilment of Condition A.

Other sufficient conditions for the fulfilment of Condition A are given by the following theorem.

Theorem 4. If for τ ≥ 0 the mappings e^{τC} and π are commutative on


L, and π e^{τC} Q sweeps completely the set π e^{τC} P, then Condition A is fulfilled.

Example. The Pontryagin test example:

ẍ + α ẋ = ρ u,    ÿ + β ẏ = σ v,    x, y ∈ R^k, k ≥ 2,

where α, β, ρ, σ are positive constants and |u| ≤ 1, |v| ≤ 1. Then Condition A is fulfilled and the time T(z₀) is optimal.
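For the simplest special case of equation (1), the "simple motions" with C = 0, M = {0}, and P, Q balls of radii ρ > σ, the set W(t) is the ball of radius (ρ - σ)t and the least root of Pontryagin's inclusion can be found by a direct scan. The following sketch is our illustration under those assumptions, not a construction from the paper.

```python
# Least root of the inclusion z0 in W(t) for simple motions:
# C = 0, M = {0}, P and Q balls of radii rho > sigma, so that
# W(t) is the ball of radius (rho - sigma) * t and the Pontryagin
# time equals |z0| / (rho - sigma).
import math

def pursuit_time(z0, rho, sigma, dt=1e-3, t_max=100.0):
    """Scan for the least t with |z0| <= (rho - sigma) * t."""
    steps = int(t_max / dt)
    for k in range(steps + 1):
        t = k * dt
        if math.hypot(*z0) <= (rho - sigma) * t:
            return t
    return None

print(pursuit_time((3.0, 4.0), 2.0, 1.0))
```

Here |z₀| = 5 and ρ - σ = 1, so the scan stops near t = 5, in agreement with the closed form.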


ALGEBRAIC AUTOMATA AND OPTIMAL SOLUTIONS

IN PATTERN RECOGNITION

E.ASTESIANO - G.COSTA

Istituto di Matematica - Università di Genova - Via L.B. Alberti, 4 - 16132 GENOVA (ITALY)

INTRODUCTION. U. Grenander (1969) proposed a formalization of the linguistic approach in pattern recognition (see also Pavel (1969)). Though the main interest in this method is undoubtedly in its practical application to the construction of particular grammars of patterns, we think the theoretical questions worthwhile of further insight. Therefore we devote our attention to the abstract formulation of the problem; however, in order to give a detailed model, we restrict ourselves to a definite class of decision rules. In the recognition system we consider here, the objects to be recognized are (represented by) terms on a graded alphabet and the decision rules are (implemented by) algebraic automata; moreover we assume that sample classes (in some sense the "training sets") and rules of identification of objects in images are given. In this context we investigate the problems related to the definition, the existence and the effective construction of optimal grammars of patterns.

This paper is a generalization and improvement of a previous work of one of the authors. The results can also be considered (but the point of view is completely different) a generalization of well known classical results in the algebraic theory of automata; these can in fact be obtained as a particular case of our results (when the sample classes are a partition). The first part is devoted to setting up substantially well known definitions and results, in a language more apt to treat our problem. In the second part the recognition model and a definition of the optimal solution are proposed and exploited. In the last section, conditions of existence for the optimal solution are given and the problem of the effective construction is solved. Finally a few examples are presented to show that some seemingly incomplete results cannot in fact be substantially improved.

1. PRELIMINARY NOTIONS. We refer to Cohn (1965) and Grätzer (1968) for the algebraic concepts and to Thatcher and Wright (1968) and Arbib and Give'on (1968) for what concerns algebraic automata.

If X is a non-empty set, DF(X) is the set of all families of non-empty, mutually disjoint subsets of X; we denote its elements by 𝒜, ℬ, ..., ℱ, ... If 𝒜 = {A_i, i ∈ I} then 𝒜⁰ := ∪_{i∈I} A_i. Consider on DF(X) the relation ⊑ defined by: 𝒜 ⊑ ℬ iff a (unique) map f : 𝒜 → ℬ exists s.t. ∀A ∈ 𝒜, A ⊆ f(A); we also write 𝒜 ⊑_f ℬ.

Proposition 1.1. ⊑ is an order relation on DF(X) and DF₁(X) := (DF(X), ⊑) is a complete lattice. //
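The order ⊑ can be sketched concretely: since the members of a family are non-empty and mutually disjoint, the map f, when it exists, is automatically unique, so 𝒜 ⊑ ℬ reduces to a containment check. Function and variable names below are ours, for illustration only.

```python
# Checking the order on DF(X): fam_a ⊑ fam_b iff every set A of fam_a
# is contained in some set of fam_b (necessarily unique, because the
# members of fam_b are mutually disjoint and A is non-empty).

def df_leq(fam_a, fam_b):
    return all(any(A <= B for B in fam_b) for A in fam_a)

A = [frozenset({1}), frozenset({4})]
B = [frozenset({1, 2}), frozenset({3, 4})]
print(df_leq(A, B), df_leq(B, A))  # True False
```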

(This work was partially supported by a G.N.A.F.A. (C.N.R.) grant.)

Remark. We indicate by // the end of the proof; the proof is omitted whenever it is


straightforward or substantially known.

Denote now by E(X) the complete lattice of equivalences on the set X, ordered by ⊆ as a subset of X × X. If 𝒜 ∈ DF(X) and θ ∈ E(X), then θ is an 𝒜-equivalence iff, ∀A ∈ 𝒜, A = ∪_{x∈A} [x]_θ. For any P ⊆ E(X), let DL(X, P) be the set {(θ, 𝒜) / 𝒜 ∈ DF(X), θ ∈ P, θ is an 𝒜-equivalence} and let ≤ be the relation on DL(X, P) defined by: (θ, 𝒜) ≤ (ψ, ℬ) iff θ ⊆ ψ and 𝒜 ⊑ ℬ.

Proposition 1.2. ≤ is an order relation and DL₁(X, P) := (DL(X, P), ≤) is a complete lattice iff P is a (complete) sublattice of E(X). //

A graded (ranked) set is a pair (Σ, σ), where Σ is a non-empty set and σ a map from Σ into N (the set of non-negative integers); if n ∈ N, then Σ_n := σ⁻¹(n). From now on we shall simply write Σ instead of (Σ, σ).

A Σ-algebra (or simply: an algebra) is a pair 𝔄 = (A, α), where A is a non-empty set, the carrier of 𝔄, and α is a map that assigns to each element ω in Σ an operator ω_α on A. If ω ∈ Σ_n, ω_α is an n-ary operator, that is: if n ≥ 1, then ω_α : Aⁿ → A; if n = 0, then ω_α is an element of A. We use capital German letters 𝔄, 𝔅, ..., 𝔐, ..., 𝔔, ... to indicate Σ-algebras.

If X is a set disjoint from Σ, then we denote the free Σ-algebra of Σ-terms on X (or simply: terms) by 𝔉_Σ(X) and its carrier by F_Σ(X); if X = ∅ we write 𝔉_Σ and F_Σ.

Given 𝔄 = (A, α), we denote by rp_𝔄 the unique homomorphism from 𝔉_Σ into 𝔄; we say that 𝔄 is connected iff rp_𝔄 is onto. The set of all congruences on 𝔄 will be denoted by C(𝔄); C(𝔄) is a (complete) sublattice of E(A), hence DL₁(A, C(𝔄)) is a complete lattice. If 𝒜 ∈ DF(A), then an 𝒜-congruence is simply an 𝒜-equivalence on A which is a congruence on 𝔄; C(𝔄, 𝒜) is the set of all 𝒜-congruences on 𝔄. We write C, C(𝒜), DL₁(P) instead of C(𝔉_Σ), C(𝔉_Σ, 𝒜), DL₁(F_Σ, P).

A graded (ranked) alphabet is a finite graded set Σ s.t. Σ₀ ≠ ∅. From now on Σ will be a graded alphabet that we consider as fixed and given.

A Σ-automaton is a pair M = (𝔔, ℱ), where 𝔔 = (Q, α) is a Σ-algebra and ℱ ∈ DF(Q). The response function of M is just rp_𝔔; we shall denote it also by rp_M. M is connected iff 𝔔 is connected, and M is finite iff Q is finite. The behaviour of M is 𝔅_M := {B / B = rp_M⁻¹(F), F ∈ ℱ}; the equiresponse congruence of M, ER(M), is the canonical congruence on 𝔉_Σ associated with rp_M; eventually, λ(M) := (ER(M), 𝔅_M).
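The response function rp_M is just bottom-up evaluation of terms in the algebra 𝔔. The toy Σ-automaton below sketches this; the alphabet, states and operations are invented for illustration and are not from the paper.

```python
# A toy Σ-automaton: terms over a graded alphabet are evaluated
# bottom-up by the unique homomorphism rp (the response function).
# Alphabet: 'z' has rank 0, 's' rank 1, 'p' rank 2.
# Terms are nested tuples: ('p', ('s', ('z',)), ('z',)).

def make_automaton(ops, final_families):
    def rp(term):
        head, *args = term
        return ops[head](*[rp(a) for a in args])
    return rp, final_families

# Carrier {0, 1, 2}: counting modulo 3.
ops = {
    'z': lambda: 0,
    's': lambda q: (q + 1) % 3,
    'p': lambda q1, q2: (q1 + q2) % 3,
}
rp, F = make_automaton(ops, [{0}, {1}])

t = ('p', ('s', ('z',)), ('s', ('s', ('z',))))   # "1 + 2"
print(rp(t))   # 0 -> t belongs to the behaviour class rp^{-1}({0})
```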

Lemma 1.3. For any Σ-automaton M, λ(M) ∈ DL₁(C). //

If (θ, 𝒜) ∈ DL₁(C), then μ((θ, 𝒜)) is the Σ-automaton (𝔉_Σ/θ, 𝒜/θ), where 𝒜/θ := {A/θ, A ∈ 𝒜}.

Lemma 1.4. If (θ, 𝒜) ∈ DL₁(C), then μ((θ, 𝒜)) is connected and λ(μ((θ, 𝒜))) = (θ, 𝒜). //


Let M_i = (𝔔_i, ℱ_i) be a Σ-automaton, i = 1, 2; a homomorphism from M₁ into M₂ is a pair (φ, ψ), where φ : 𝔔₁ → 𝔔₂ is an algebra homomorphism, ψ : ℱ₁ → ℱ₂ is a map and, ∀F ∈ ℱ₁, φ(F) ⊆ ψ(F). (φ, ψ) is an isomorphism, and we write M₁ ≅ M₂, iff φ is an algebra isomorphism, ψ is one-to-one and onto and φ(F) = ψ(F). If M₁ and M₂ are connected and there is a homomorphism (φ, ψ) from M₁ into M₂, then (φ, ψ) is uniquely determined by the properties rp_{M₂} = φ ∘ rp_{M₁} and ψ(F) ⊇ φ(F), ∀F ∈ ℱ₁; moreover φ is onto; we indicate this by writing M₁ → M₂. These definitions and properties allow us to state the following lemma.

Lemma 1.5. If M and M' are connected automata, then: i) μ(λ(M)) ≅ M; ii) M → M' iff λ(M) ≤ λ(M'). //

We denote by 𝔐_c the class of connected Σ-automata mod. the isomorphism equivalence, by [M] the element of 𝔐_c corresponding to M and by ⪯ the relation on 𝔐_c defined by: [M₁] ⪯ [M₂] iff M₂ → M₁. By the above lemma, ⪯ is correctly defined and it is an order relation.

Theorem 1.6. (𝔐_c, ⪯) is a poset (i.e. partially ordered set) anti-isomorphic to DL₁(C); therefore (𝔐_c, ⪯) is a complete lattice anti-isomorphic to DL₁(C). //

We recall that similar results, for monadic algebra automata without final states, can be found in Büchi (1966).

Given 𝔄 = (A, α), a symbol x not in A ∪ Σ and S ⊆ A, then for each τ in F_Σ(S ∪ {x}) we can define the unary operator ||τ||_𝔄 on A as follows: ∀a ∈ A, i) if τ = s ∈ S, then ||τ||_𝔄(a) = s; ii) if τ = x, then ||τ||_𝔄(a) = a; iii) if τ = τ₁ ... τ_n ω, then ||τ||_𝔄(a) = ω_α(||τ₁||_𝔄(a), ..., ||τ_n||_𝔄(a)); clearly if τ = ω ∈ Σ₀, then ||τ||_𝔄(a) = ω_α.

Remarks. i) The above operators correspond to the "polynomials" in Grätzer (1968); ii) one can verify that if 𝔄 is connected, then, ∀S ⊆ A, {||τ||_𝔄, τ ∈ F_Σ(S ∪ {x})} = {||τ||_𝔄, τ ∈ F_Σ({x})}; iii) if 𝔄 = 𝔉_Σ(Y), for any set Y, then ||τ||(a) is the Σ-term on Y obtained from τ by replacing each occurrence of x in τ with the term a.

Consider 𝒜 ∈ DF(A) and 𝔄 connected.

Definition 1.1. N_𝒜 is the relation on A s.t., if a, b ∈ A, (a, b) ∈ N_𝒜 iff ∀τ ∈ F_Σ({x}), ∀A_i ∈ 𝒜, ||τ||_𝔄(a) ∈ A_i iff ||τ||_𝔄(b) ∈ A_i.

Theorem 1.7. i) N_𝒜 is the maximum of C(𝔄, 𝒜); ii) C(𝔄, 𝒜) = {θ / θ ∈ C(𝔄), θ ⊆ N_𝒜}; iii) C(𝔄, 𝒜) is a (complete) sublattice of the complete lattice C(𝔄).

Proof. Hint: use a modified, but equivalent (see above remark), definition of N_𝒜, considering F_Σ(A ∪ {x}) instead of F_Σ({x}). //


For any ℬ ∈ DF(F_Σ), set C(ℬ) := C(𝔉_Σ, ℬ); by theorems 1.6 and 1.7 a class of minimal Σ-automata for ℬ (i.e. having ℬ as their behaviour) exists: the class corresponding, in the anti-isomorphism of theorem 1.6, to (N_ℬ, ℬ).

2. THE RECOGNITION MODEL. We need a few other definitions before we can give our recognition model. Consider on DF(X), X a non-empty set, the relation ⊨ defined by: 𝒜 ⊨ ℬ iff 𝒜 ⊑_f ℬ and the map f is a bijection. It is soon verified that ⊨ is an order relation. If 𝔄 = (A, α) and θ ∈ C(𝔄), then θ is an 𝒜-separating congruence iff each congruence class intersects at most one element of 𝒜. We denote by SC(𝔄, 𝒜) the set of all 𝒜-separating congruences on 𝔄.

Consider now the free algebra 𝔉_Σ and a set H = {(t_i, t'_i) / (t_i, t'_i) ∈ F_Σ × F_Σ, i = 1, ..., N}; we denote by θ(H) the congruence on 𝔉_Σ generated by H, see Grätzer (1968). One can verify that θ(H) coincides with the reflexive, symmetric and transitive closure of the relation R(H), defined by: (t, t') ∈ R(H) iff there is a pair (t_i, t'_i) in H s.t. t' is obtained from t by replacing in t t_i by t'_i.
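The one-step relation R(H) and a bounded approximation of its closure can be sketched on terms represented as nested tuples. The code, names, and the set H below are our invented illustration, not constructions from the paper.

```python
# Sketch of the one-step relation R(H) (replace one occurrence of t_i
# by t'_i, in either direction) and a bounded breadth-first search
# approximating the congruence theta(H) generated by H.

def rewrites(t, H):
    """Terms obtainable from t by one replacement, in either direction."""
    sym = H | {(r, l) for (l, r) in H}
    out = {r for (l, r) in sym if t == l}
    if isinstance(t, tuple) and len(t) > 1:
        head, *args = t
        for i in range(len(args)):
            for a2 in rewrites(args[i], H):
                out.add((head, *args[:i], a2, *args[i + 1:]))
    return out

def congruent(t1, t2, H, max_steps=5):
    """Bounded approximation: are t1, t2 identified within max_steps rewrites?"""
    seen, frontier = {t1}, {t1}
    for _ in range(max_steps):
        frontier = {s for t in frontier for s in rewrites(t, H)} - seen
        seen |= frontier
    return t2 in seen

# 'z' has rank 0, 's' rank 1; identify s(s(z)) with z.
H = {(('s', ('s', ('z',))), ('z',))}
print(congruent(('s', ('s', ('s', ('z',)))), ('s', ('z',)), H))  # True
print(congruent(('z',), ('s', ('z',)), H))                       # False
```

The search is only a semi-decision sketch; deciding θ(H) in general requires the automaton constructions of section 3.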

We can now give in detail our recognition model; see Grenander (1969) and Pavel (1969) for the motivations, for part of the terminology and a wider discussion of the subject.

- The "objects" to be recognized are coded on a graded alphabet Σ.

- What we actually recognize are (structural) descriptions of the "objects" (analogous to the "configurations" in Grenander (1969)), i.e. Σ-terms. One "object" may correspond to many different descriptions.

- We have some information about the correspondence descriptions - "objects", i.e. we are given a finite set H ⊆ F_Σ × F_Σ: (t, t') ∈ H means that t and t' correspond to the same "object" and can thus be identified. It appears quite natural that we identify also all the pairs in R(H) and that we extend the process by reflexivity, symmetry and transitivity. This eventually amounts to considering on F_Σ the identifications given by the congruence θ(H). We can restate all this by saying that we are given a finitely generated congruence ℑ on 𝔉_Σ; we call images the classes (mod ℑ) and the images now become, for us, the objects to be recognized.

- We are given 𝒜 ∈ DF(F_Σ), the family of examples, i.e. a family of sets of already classified descriptions. 𝒜 and ℑ must be such that ℑ is 𝒜-separating.

- An admissible, for ℑ and 𝒜, family of patterns is a family ℬ ∈ DF(F_Σ) s.t. 𝒜 ⊨ ℬ and ℑ is a ℬ-congruence.

- An admissible, for ℑ and 𝒜, decision rule is a map r : F_Σ → D such that ℬ_r := {r⁻¹(A), A ∈ D} is an admissible family of patterns and a (connected) Σ-automaton M exists such that ℬ_r = 𝔅_M (we say that r is implemented by M).


Usually a decision rule can be implemented by several (even non-isomorphic) Σ-automata.

Definition 2.1. A solution of the recognition problem, with ℑ and 𝒜 given, is a Σ-automaton implementing an admissible, for ℑ and 𝒜, decision rule.

Consider 𝔄 = (A, α), 𝒜 ∈ DF(A) and P ⊆ SC(𝔄, 𝒜); then EDL(𝒜, P) := {(θ, ℬ) / (θ, ℬ) ∈ DL(A, P), 𝒜 ⊨ ℬ}.

Remark. The order ≪ induced on EDL(𝒜, P) by the order ≤ we have on DL(A, P) is s.t. (θ, ℬ) ≪ (ψ, ℭ) iff θ ⊆ ψ and ℬ ⊨ ℭ.

We call quasi-complete lower semilattice (q.c.l.s.-lattice) any ordered set in which all non-empty subsets have a g.l.b. Thus, if we consider the set EDF(𝒜) := {ℬ ∈ DF(F_Σ) / 𝒜 ⊨ ℬ}, the set of extensions of 𝒜, ordered by ⊨, it is easy to see that (EDF(𝒜), ⊨) is a q.c.l.s.-lattice but not a lattice. From this we have also that, if P is a q.c.l.s.-lattice, then EDL(𝒜, P) is a q.c.l.s.-lattice, but, even if P is a complete lattice, EDL(𝒜, P) is not in general a lattice. This fact is of great importance, as we shall see now.

Set SC(𝒜) := SC(𝔉_Σ, 𝒜) and, if P ⊆ SC(𝒜), I_P := {θ / θ ∈ P, θ ⊇ ℑ}; then from theorem 1.6 and the remark about ≪ we have the following theorem.

Theorem 2.1. M is a solution of the recognition problem, for given ℑ and 𝒜, iff λ(M) ∈ EDL(𝒜, I_SC(𝒜)). //

Remark. We can use this theorem for a new definition of "solution of the recognition problem". Indeed from now on we shall refer to EDL(𝒜, I_SC(𝒜)) as the set of solutions of the recognition problem (for given ℑ and 𝒜).

We can now characterize different kinds of solutions. Let 𝒜 ∈ DF(F_Σ), ℑ ∈ SC(𝒜)

and P ⊆ SC(𝒜) be given. First of all we observe that considering P ⊆ SC(𝒜) instead of SC(𝒜) means that we are using only a subclass of all the admissible decision rules: exactly the class of rules implemented by automata which, for their algebraic structure, correspond to P (see theorem 1.6). For instance, if P = FSC(𝒜) := {θ / θ ∈ SC(𝒜), ind(θ) < +∞}, then we consider only the decision rules implemented by finite Σ-automata whose equiresponse congruence is 𝒜-separating (now we are not taking account of ℑ).

Definition 2.2. EDL_E(𝒜, I_P) := {(θ, ℬ) / (θ, ℬ) ∈ EDL(𝒜, I_P), θ = max of (I_P ∩ C(ℬ))}.

This is the set of "economical" solutions: for each admissible family of patterns ℬ, we consider the minimal, i.e. "more economical", Σ-automata for ℬ in the class of automata corresponding to P (if they exist).


Definition 2.3. If 𝔄 = (A, α), 𝒜 ∈ DF(A) and ψ ∈ SC(𝔄, 𝒜), then 𝒜^ψ := {B^ψ, B ∈ 𝒜}, where B^ψ := ∪_{a∈B} [a]_ψ.

Definition 2.4. EDL_J(𝒜, I_P) := {(ψ, 𝒜^ψ) / ψ ∈ I_P}.

This is the set of justified solutions: for each ψ in I_P we extend each A in 𝒜 by adding to A only elements of F_Σ which are ψ-congruent to at least one element of A.

The following definition is now quite reasonable.

Definition 2.5. The set of good solutions of the recognition problem, for given 𝒜, ℑ and P, is GS(𝒜, I_P) := EDL_E(𝒜, I_P) ∩ EDL_J(𝒜, I_P) = {(ψ, 𝒜^ψ) / ψ = max(I_P ∩ C(𝒜^ψ))}.

The three above sets are ordered by ≪, as subsets of EDL(𝒜, I_P).

Definition 2.6. The optimal solution, for given 𝒜, ℑ and P, is the maximum of GS(𝒜, I_P), if it exists. We denote it by o.s.(𝒜, I_P).

Consider now the two conditions:

[α] ∀(ψ, 𝒜^ψ), ψ ∈ I_P : l.u.b._{C(𝒜^ψ)} (I_P ∩ C(𝒜^ψ)) ∈ I_P.

[β] ∀(θ, ℬ) ∈ EDL(𝒜, I_P) : l.u.b._{C(ℬ)} (I_P ∩ C(ℬ)) ∈ I_P.

Proposition 2.2. i) EDL_J(𝒜, I_P) ≅ I_P (as posets); ii) if max EDL_J(𝒜, I_P) exists, then max GS(𝒜, I_P) exists and the two coincide; if condition [α] holds, the converse is also true. //

Proposition 2.3. (Stability property) If condition [β] holds and (ψ, 𝒜^ψ) = o.s.(𝒜, I_P), then, ∀𝒜' s.t. 𝒜 ⊨ 𝒜' ⊨ 𝒜^ψ, o.s.(𝒜', I_P ∩ SC(𝒜')) = (ψ, 𝒜^ψ).

Proof. 𝒜' ⊨ 𝒜^ψ implies ψ ∈ SC(𝒜') and, as 𝒜 ⊨ 𝒜', SC(𝒜') ⊆ SC(𝒜). Since ψ = max(I_P ∩ C(𝒜^ψ)) and clearly 𝒜'^ψ = 𝒜^ψ, then (ψ, 𝒜^ψ) = max EDL_J(𝒜', I_P ∩ SC(𝒜')). Thus by prop. 2.2 (ψ, 𝒜^ψ) = max GS(𝒜', I_P ∩ SC(𝒜')). //

Remark. One can easily verify that condition [β], and therefore [α], is met, for example, when P = SC(𝒜) or P = FSC(𝒜).

3. EXISTENCE OF OPTIMAL SOLUTIONS. Let M = (𝔔, ℱ) be a Σ-automaton; if ψ ∈ SC(𝔔, ℱ), then M/ψ is the Σ-automaton (𝔔/ψ, ℱ/ψ), where ℱ/ψ := {F' / F' = {[q]_ψ, q ∈ F}, F ∈ ℱ}. Recalling def. 2.3, we shall write 𝔅_M^ψ instead of (𝔅_M)^ψ.

Proposition 3.1. If ER(M) = ℑ and 𝔅_M = 𝒜^ℑ, then: o.s.(𝒜, I_SC(𝒜)) exists iff max SC(𝔔, ℱ) exists; if max SC(𝔔, ℱ) = ψ*, then o.s.(𝒜, I_SC(𝒜)) = λ(M/ψ*).

Proof. Set 𝔅 := 𝔅_M; from well known isomorphism theorems (see, for instance, Grätzer (1968)), we have the lattice isomorphism C(𝔔) ≅ {θ / θ ∈ C(𝔉_Σ), θ ⊇ ℑ}; this implies the poset isomorphism SC(𝔔, ℱ) ≅ I_SC(𝔅) = I_SC(𝒜). Thus by Prop. 2.2 the first part of the proposition is proved. As for the second, if ψ* := max SC(𝔔, ℱ), remarking that 𝔔/ψ* corresponds


to the maximum of I_SC(𝒜) in the isomorphism SC(𝔔, ℱ) ≅ I_SC(𝒜), from the definitions of M/ψ* and λ we have λ(M/ψ*) = o.s.(𝒜, I_SC(𝒜)). //

Proposition 3.2. If o.s.(𝒜, I_SC(𝒜)) exists and λ(M) ∈ EDL_J(𝒜, I_SC(𝒜)), then ψ* exists s.t. ψ* = max SC(𝔔, ℱ) and λ(M/ψ*) = o.s.(𝒜, I_SC(𝒜)).

Proof. Setting ψ := ER(M), by prop. 2.2, o.s.(𝒜, I_SC(𝒜)) = max EDL_J(𝒜, I_SC(𝒜)) and thus (ℑ, 𝒜^ℑ) ≪ λ(M) = (ψ, 𝔅_M) ≪ o.s.(𝒜, I_SC(𝒜)). Since 𝒜 ⊨ 𝔅_M implies SC(𝔅_M) ⊆ SC(𝒜), recalling the stability property (Prop. 2.3), we have: o.s.(𝒜, I_SC(𝒜)) = o.s.(𝔅_M, I_SC(𝒜) ∩ SC(𝔅_M)) = o.s.(𝔅_M, I_SC(𝔅_M)). Prop. 2.2 yields the existence of max I_SC(𝔅_M); thus by Prop. 3.1 ψ* exists and λ(M/ψ*) = o.s.(𝔅_M, I_SC(𝔅_M)) = o.s.(𝒜, I_SC(𝒜)). //

Consider 𝔄 = (A, α) and 𝒜 ∈ DF(A); we now investigate the nature of SC(𝔄, 𝒜). If x ∉ Σ ∪ A, recalling the definition of the operators ||τ|| (τ ∈ F_Σ({x})), we have the following definition.

Definition 3.1. V'_𝒜 is the relation on A s.t., if a, b ∈ A, (a, b) ∈ V'_𝒜 iff ∀τ ∈ F_Σ({x}), ∀B ∈ 𝒜: ||τ||_𝔄(a) ∈ B ⟹ ||τ||_𝔄(b) ∈ B ∪ (A ∖ 𝒜⁰) and ||τ||_𝔄(b) ∈ B ⟹ ||τ||_𝔄(a) ∈ B ∪ (A ∖ 𝒜⁰).

This relation is a modification and generalization of a relation defined in Verbeek (1967). It is clear that V'_𝒜 is reflexive and symmetric, but examples show that it is not in general transitive (e.g. when 𝔄 = 𝔉_Σ and A ∖ 𝒜⁰ is a finite set). Let V_𝒜 be the transitive closure of V'_𝒜; we have then the following proposition.

Proposition 3.3. i) (a, b) ∈ V_𝒜 ⟹ (||τ||_𝔄(a), ||τ||_𝔄(b)) ∈ V_𝒜, ∀τ ∈ F_Σ({x}); ii) V_𝒜 is a congruence on 𝔄; iii) SC(𝔄, 𝒜) = {θ / θ ∈ C(𝔄), θ ⊆ V'_𝒜}; iv) SC(𝔄, 𝒜) is a (quasi-complete) lower sub-semilattice of the complete lattice C(𝔄); if V'_𝒜 = V_𝒜 then V_𝒜 = max SC(𝔄, 𝒜) and SC(𝔄, 𝒜) is a complete lattice.

Proof. i) follows from the property: ∀τ, τ' ∈ F_Σ({x}), ∀a ∈ A, ||τ'||_𝔄(||τ||_𝔄(a)) = ||τ''||_𝔄(a), where τ'' := ||τ'||_𝔅(τ) and 𝔅 := 𝔉_Σ({x}); ii) consider θ(V'_𝒜); by i) and the definition of θ(V'_𝒜), it is easy to prove that θ(V'_𝒜) = V_𝒜. The proof of iii) again is based on property i) and on the fact that the same property holds for any congruence. Eventually it is quite easy to verify the first part of iv), while the second is rather trivial given ii) and iii). //

Remark. If V'_𝒜 ≠ V_𝒜 it is not true, in general, that V_𝒜 is the l.u.b. of SC(𝔄, 𝒜), as we shall see with an example later on.

The following proposition is strictly related, as we shall see, to the stability property.


Proposition 3.4. If V'_𝒜 = V_𝒜 and 𝒜* := 𝒜^{V_𝒜}, then V_𝒜 = N_𝒜* = V'_𝒜* = V_𝒜* and C(𝔄, 𝒜*) = SC(𝔄, 𝒜*) = SC(𝔄, 𝒜), hence SC(𝔄, 𝒜) is a (complete) sublattice of C(𝔄).

Proof. It is easy to verify that 𝒜 ⊨ 𝒜* implies V'_𝒜* ⊆ V'_𝒜; hence V_𝒜* ⊆ V_𝒜 ⊆ N_𝒜*. If B ∈ 𝒜, denote B^{V_𝒜} by B* and ||τ||_𝔄 by ||τ||, τ ∈ F_Σ({x}). As V_𝒜 is a congruence, (a, b) ∈ V_𝒜 ⟹ (||τ||(a), ||τ||(b)) ∈ V_𝒜; now, ∀B ∈ 𝒜, we have ||τ||(a) ∈ B* iff ||τ||(b) ∈ B*, i.e. (a, b) ∈ N_𝒜*; so N_𝒜* ⊆ V_𝒜, hence (+) V_𝒜 = V_𝒜* = N_𝒜*, which implies also V'_𝒜* = V_𝒜*. Moreover 𝒜 ⊨ 𝒜* yields SC(𝔄, 𝒜*) ⊆ SC(𝔄, 𝒜) ⊆ C(𝔄); by (+), Theor. 1.7 ii) and Prop. 3.3 iii), we have the coincidence of the three sets. //

Consider now ψ(1), ψ(2) ∈ SC(𝔄, 𝒜), with ψ(1) ⊆ ψ(2); if 𝒜(i) := 𝒜^{ψ(i)} and 𝔄(i) := 𝔄/ψ(i), i = 1, 2, then the canonical epimorphism f : 𝔄(1) → 𝔄(2) is such that ∀B ∈ 𝒜, B/ψ(1) ↦ B/ψ(2) and f⁻¹(B/ψ(2)) = B/ψ(1).

Proposition 3.5. If u, v ∈ A/ψ(1), the carrier of 𝔄(1), then (u, v) ∈ V'_{𝒜(1)} iff (f(u), f(v)) ∈ V'_{𝒜(2)} (where obviously V'_{𝒜(2)} is on A/ψ(2)).

Proof. It follows from the definition of the V'-relation and of the epimorphism f, and from the following property (see Grätzer (1968)): ∀τ ∈ F_Σ({x}), ∀u ∈ A/ψ(1), f(||τ||_{𝔄(1)}(u)) = ||τ||_{𝔄(2)}(f(u)). //

Setting 𝒜 := 𝔅_M, if M = (𝔔, ℱ), we have the following results.

Corollary 3.6. If ℱ = {F_i, i ∈ I} and 𝒜 = {B_i / B_i = rp_M⁻¹(F_i), i ∈ I}, then V'_𝒜 = V_𝒜 iff V'_ℱ = V_ℱ (where V_𝒜 is on 𝔉_Σ and V_ℱ on 𝔔). //

Cor. 3.6 and Prop. 3.3, together with Prop. 3.2, give the following theorem.

Theorem 3.7. If λ(M) ∈ EDL_J(𝒜, I_SC(𝒜)) and V'_ℱ = V_ℱ, then o.s.(𝒜, I_SC(𝒜)) exists and o.s.(𝒜, I_SC(𝒜)) = (V_𝒜, 𝒜^{V_𝒜}) = λ(M/V_ℱ). //

This theorem gives a sufficient condition for the existence of the optimal solution, and can be used to obtain an effective construction for it. This problem is investigated in what follows.

Consider 𝔔 = (Q, α) and ℱ ∈ DF(Q); on Q we define the following relation: Z'_ℱ := ∩_{m≥0} Z'_m, where, if a, b ∈ Q: i) (a, b) ∈ Z'₀ iff, ∀F ∈ ℱ, a ∈ F ⟹ b ∈ F ∪ (Q ∖ ℱ⁰) and b ∈ F ⟹ a ∈ F ∪ (Q ∖ ℱ⁰); ii) (a, b) ∈ Z'_{m+1} iff (a, b) ∈ Z'_m and, ∀ω ∈ Σ, if ω ∈ Σ_{p+1}, then, ∀a₁, ..., a_p ∈ Q, the elements ω_α(a₁, ..., a_j, a, a_{j+1}, ..., a_p) and ω_α(a₁, ..., a_j, b, a_{j+1}, ..., a_p) are Z'_m-related, j = 0, ..., p.
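On a finite Q the chain Z'₀ ⊇ Z'₁ ⊇ ... stabilizes, so Z'_ℱ can be computed by a fixed-point refinement, much like state-minimization of automata. The sketch below is ours (names and encoding invented); the algebra is the monadic one of Example 1 of this paper.

```python
# Iterative computation of Z'_F on a finite Q: start from the
# separation condition Z'_0 and refine with all the operations
# until the relation stabilizes.
from itertools import product

def z_relation(Q, ops, families):
    """ops: dict name -> (arity, function); families: disjoint subsets of Q."""
    covered = set().union(*families)
    def sep(a, b):  # the Z'_0 condition
        return all((a not in F or b in F or b not in covered) and
                   (b not in F or a in F or a not in covered)
                   for F in families)
    Z = {(a, b) for a, b in product(Q, repeat=2) if sep(a, b)}
    while True:
        Z2 = {(a, b) for (a, b) in Z
              if all((f(*(ctx[:j] + (a,) + ctx[j:])),
                      f(*(ctx[:j] + (b,) + ctx[j:]))) in Z
                     for (n, f) in ops.values() if n >= 1
                     for ctx in product(Q, repeat=n - 1)
                     for j in range(n))}
        if Z2 == Z:
            return Z
        Z = Z2

# The monadic algebra of Example 1 (states q0..q3 encoded as 0..3).
ops = {'a': (1, lambda q: {0: 1, 1: 2, 2: 3, 3: 3}[q]),
       'b': (1, lambda q: {0: 3, 1: 3, 2: 1, 3: 3}[q])}
Z = z_relation([0, 1, 2, 3], ops, [{1}, {2}])
print((1, 3) in Z, (1, 2) in Z)  # True False
```

The resulting relation pairs q₁ with q₃ and q₀, q₂ with q₃ but not q₁ with q₂, in agreement with the "relation classes" {q₁, q₃} and {q₀, q₂, q₃} reported in Example 1, and it is visibly not transitive.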

Lemma 3.8. i) V'_ℱ ⊆ Z'_ℱ; ii) ∀ψ ∈ SC(𝔔, ℱ), ψ ⊆ Z'_ℱ; iii) if Z'_ℱ is transitive, then Z'_ℱ = V'_ℱ = V_ℱ.

Proof. We use a modified, but equivalent, definition of V'_ℱ, considering F_Σ({x} ∪ Q)


instead of F_Σ({x}), and set ||τ|| := ||τ||_𝔔.

i). Obviously (a, b) ∈ V'_ℱ ⟹ (a, b) ∈ Z'₀; if (a, b) ∈ V'_ℱ ⟹ (a, b) ∈ Z'_k, ∀k ≤ m, then, ∀ω ∈ Σ_{p+1}, ∀a₁, ..., a_p ∈ Q, consider, for j = 0, ..., p and u = a, b:

û_j := ω_α(a₁, ..., a_j, u, a_{j+1}, ..., a_p) and τ_j := a₁ ... a_j x a_{j+1} ... a_p ω ∈ F_Σ({x} ∪ Q).

Then, also recalling Prop. 3.3, we have: (a, b) ∈ V'_ℱ ⟹ (||τ_j||(a), ||τ_j||(b)) = (â_j, b̂_j) ∈ V'_ℱ, and thus by the induction hypothesis (â_j, b̂_j) ∈ Z'_m. This shows that (a, b) ∈ V'_ℱ ⟹ (a, b) ∈ Z'_{m+1}.

ii) is quite obvious, as Z'_ℱ is an ℱ-separating relation.

iii). We show that if Z'_ℱ is transitive then it is an ℱ-separating congruence, so Z'_ℱ ⊆ V'_ℱ; hence, by i), Z'_ℱ = V'_ℱ. If Z'_ℱ is transitive, clearly it is an ℱ-separating equivalence; we have only to prove it is a congruence. First of all we observe that: (a, b) ∈ Z'_ℱ ⟹ ∀p ≥ 0, ∀a₁, ..., a_p ∈ Q, (â_j, b̂_j) ∈ Z'_ℱ, j = 0, ..., p. Then, if ω ∈ Σ_p, (u_j, v_j) ∈ Z'_ℱ, j = 1, ..., p, and ≡ := ≡ (mod Z'_ℱ), we have ω_α(u₁, ..., u_p) ≡ ω_α(v₁, u₂, ..., u_p) ≡ ... ≡ ω_α(v₁, ..., v_p). //

Remark. It is also quite clear that: a) if Z'_m = Z'_{m+1} then Z'_m = Z'_{m+p}, ∀p ≥ 0; b) if Q is finite then m exists, m ≤ |Q|², s.t. Z'_ℱ = Z'_m. (The transitivity is not involved here.)

Theorem 3.9. If a finite automaton M = (𝔔, ℱ) is given s.t. λ(M) = (ℑ, 𝒜^ℑ), there is an algorithm for deciding whether o.s.(𝒜, I_SC(𝒜)) exists and, if so, for finding it.

Proof. By the above remark Z'_ℱ can be actually computed. If Z'_ℱ is transitive, then by lemma 3.8 Z'_ℱ = V'_ℱ = V_ℱ, o.s.(𝒜, I_SC(𝒜)) = (V_𝒜, 𝒜^{V_𝒜}) = λ(M/V_ℱ), and M/V_ℱ is obtained by an effective construction. Otherwise, we can by inspection on all the congruences contained in Z'_ℱ (and they are a finite number) verify whether SC(𝔔, ℱ) has a maximum or not, and so decide whether o.s.(𝒜, I_SC(𝒜)) exists or not; if it exists, by Prop. 3.1, o.s.(𝒜, I_SC(𝒜)) = λ(M/ψ*), where ψ* := max SC(𝔔, ℱ). //

Theorem 3.10. If a finite automaton M = (𝔔, ℱ) is given s.t. λ(M) ∈ EDL_J(𝒜, I_SC(𝒜)) and Z'_ℱ is transitive, then o.s.(𝒜, I_SC(𝒜)) exists and there is an algorithm for finding it.

Proof. It follows directly from Theorem 3.7 and lemma 3.8. //

Theorem 3.11. Let ℑ := θ(H) ∈ C(𝔉_Σ), H a finite set, and a regular family 𝒜 (i.e. 𝒜 = 𝔅_N for a finite Σ-automaton N) be given. There is an algorithm for deciding whether o.s.(𝒜, I_SC(𝒜)) exists and, if it exists, for finding it.


Proof. Recalling from section 2 the definition of θ(H), we can see that if D is the maximal depth of the terms in H, then ind(ℑ) < +∞ iff each term of depth D+1 is ℑ-congruent to some term of depth ≤ D. If ind(ℑ) < +∞ it is not difficult to construct a finite Σ-automaton T s.t. ER(T) = ℑ. If T = (𝔖, 𝒢) and N = (𝔓, 𝒫), N being the automaton for 𝒜, and 𝒜 = {A_i, i ∈ I}, let M := (𝔖 × 𝔓, 𝒢 × 𝒫) and let M_c be the connected subautomaton of M (whose construction is effective). In M_c define ℱ_c := {F*_i, i ∈ I}, where F*_i := {q / q ∈ Q_c, rp_{M_c}⁻¹(q) ∩ A_i ≠ ∅}, and set A*_i := rp_{M_c}⁻¹(F*_i). Then ℑ is 𝒜-separating iff {A*_i, i ∈ I} ∈ DF(F_Σ); M_c is finite, so we can decide whether its elements are non-empty and mutually disjoint. If it is so, define M* := (𝔔_c, ℱ_c); then 𝔅_{M*} = 𝒜^ℑ and ER(M*) = ℑ. Hence by Theorem 3.9 we have proved our assertions. //

4. CONCLUSIONS AND EXAMPLES. The above results show not only that, as was to be expected, the existence and the nature of the optimal solution for a recognition problem depend on the devices we use to recognize, but also that, when the o.s. does not exist, every maximal element in the set of good solutions is an o.s. with respect to a restricted class of recognizers. Thus supplementary "goodness" criteria are needed to define a unique o.s. Moreover we have shown the essential role of the images; for instance, if we are able to recognize the images, then for a regular family of examples 𝒜 we know all about the optimal solution. Finally, if 𝒜 is a partition, then V'_𝒜 = N_𝒜, Z'_ℱ = V_ℱ = N_ℱ, and the algorithm for computing Z'_ℱ is exactly the one used by Brainerd (1968) for minimizing tree automata.

Example 1. Let Σ = Σ₀ ∪ Σ₁ = {λ} ∪ {a, b}; Q = {q₀, q₁, q₂, q₃} and 𝔔 = (Q, α), where λ_α = q₀; a_α : q₀ ↦ q₁, q₁ ↦ q₂, q₂ ↦ q₃, q₃ ↦ q₃; b_α : q₀ ↦ q₃, q₁ ↦ q₃, q₂ ↦ q₁, q₃ ↦ q₃.

𝔔 is therefore a monadic algebra; it can be seen that in a monadic algebra the Z' and V' relations coincide. If ℱ = {{q₁}, {q₂}}, then we have V'_ℱ = Z'_ℱ = Z'₁ and the "relation classes" are {q₁, q₃}, {q₂, q₃, q₀}; hence V'_ℱ ≠ V_ℱ. One can verify by inspection that the only ℱ-separating congruence on 𝔔 is the diagonal congruence Δ. Therefore max SC(𝔔, ℱ) = Δ and obviously Δ = N_ℱ ≠ V_ℱ.

Example 2. Let 𝔔 be as above; then the terms in F_Σ can be viewed simply as elements of {a, b}*. So consider the congruence ψ on 𝔉_Σ whose classes are:

s₀ = {λ}, s₁ = {a}, s₂ = a²b(ab)*, s₃ = a(ab)*a, s₄ = ab(ab)*a, s₅ = ab(ab)*, s₆ = b(ab)*, s₇ = F_Σ ∖ ∪_{i=0}^{6} s_i.


Set 𝒜 = {s₁, s₂, s₃}; then 𝒜* := 𝒜^ψ = {[a]_ψ, [a²b]_ψ, [a²]_ψ}. It is easy to verify that on 𝔉_Σ/ψ the "relation classes" of V'_{𝒜*} are {s₁, s₂, s₄, s₅, s₆, s₇}, {s₃, s₀, s₄, s₅, s₆, s₇}; we also have: l.u.b. SC(𝔉_Σ/ψ, 𝒜*) is the (non-𝒜*-separating) congruence whose classes are {s₀} and {s₁, ..., s₇}, and which is obviously different from V_{𝒜*}. Moreover, setting M = (𝔔, ℱ) - see ex. 1 -, λ(M) ∈ EDL(𝒜, I_SC(𝒜)); then, as we have seen, max SC(𝔔, ℱ) exists, while max SC(𝔉_Σ/ψ, 𝒜*) does not exist, which implies that also o.s.(𝒜, I_SC(𝒜)) does not exist.

Example 3. Let Σ = Σ₀ ∪ Σ₁ = {λ} ∪ {a, b, c}; then we can consider {a, b, c}* instead of F_Σ. Set D := {a} ∪ {b} ∪ cc*a ∪ cc*b, A₁ := c*bD*, A₁' := c*bD*cc*, A₂ := c*aD*, A₂' := c*aD*cc*, A₃ := c*, 𝒜 = {A₁, A₂}, ℑ = Δ. Then V_𝒜 = N_𝒜, with classes A₃, A₁ ∪ A₁', A₂ ∪ A₂', so that o.s.(𝒜, I_SC(𝒜)) = (V_𝒜, {A₁ ∪ A₁', A₂ ∪ A₂'}).

REFERENCES

ARBIB, M.A. and GIVE'ON, Y. (1968) "Algebra automata I: parallel programming as a prolegomena to the categorical approach", Inform. and Control 12, 331-345.

ASTESIANO, E. (1973) "A recognition model", to appear in Pubblicazioni dell'Istituto di Matematica dell'Università di Genova.

BRAINERD, W.S. (1968) "The minimalization of tree automata", Inform. and Control 13, 484-491.

BÜCHI, J.R. (1966) "Algebraic theory of feedback in discrete systems - Part I", in Automata Theory, edited by E.R. Caianiello, Academic Press.

COHN, P.M. (1965) "Universal Algebra", Harper, New York.

GRÄTZER, G. (1968) "Universal Algebra", Van Nostrand, New York.

GRENANDER, U. (1969) "Foundations of pattern analysis", Quart. Appl. Math. 27, 1-55.

PAVEL, M. (1969) "Fondements mathématiques de la reconnaissance des structures", Hermann, Paris.

THATCHER, J.W. and WRIGHT, J.B. (1968) "Generalized finite automata theory with an application to a decision problem of second-order logic", Math. Systems Theory, Vol. 2, N. 1, 57-81.


VERBEEK, L.A.M. (1967) "Congruence separation of subsets of a monoid with application to automata", Math. Systems Theory, Vol. 1, N. 4, 315-324.


A NEW FEATURE SELECTION PROCEDURE FOR PATTERN RECOGNITION BASED ON SUPERVISED LEARNING

by

Josef Kittler Control and Systems Group, Department of Engineering

University of Cambridge, England

I. Introduction

Linear methods of feature selection are characterised by a linear transformation or 'mapping' of a pattern vector x from an N-dimensional space X into an n < N dimensional space Y. The feature vector y ∈ Y which is obtained from x by the transformation T, i.e. y = Tᵀ x, has a reduced number of components and should, if successful, contain all of the information necessary for discriminating between the classes present in the original vector x.

Many methods have been suggested for determining the transformation matrix T required for linear feature selection, but most of these can be classified in one of the following two categories:

a) methods based on the Karhunen-Loève expansion

b) methods using discriminant analysis techniques.

In the first part of the present paper a new method of feature selection based on the Karhunen-Loève (K-L) expansion is proposed. Subsequently a relationship between the superficially different K-L expansion and discriminant analysis approaches is established, and in so doing a more unified approach to the problem of feature selection is introduced.

2. A Method of Feature Selection for Pattern Recognition Based on Supervised Learning

The method of feature selection discussed in this paper is based on the properties of the K-L expansion. Since detailed treatments of the Karhunen-Loeve expansion of discrete and continuous processes can be found elsewhere (Mendel and Fu; Fukunaga; Kittler and Young), only a brief description of the method will be given here. Also, for simplicity, we shall confine our discussion to the case of discrete data.

Consider a sample of random N-dimensional pattern vectors x. Each vector x_i is associated with one of m possible classes ω_i. Let the mean of x ∈ ω_i be μ_i, and denote the noise on x_i by z_i, i.e.

$$x_i = \mu_i + z_i \qquad (1)$$

Without loss of generality we can assume that the overall mean μ = E{x} = 0, since it is clearly possible to centralise the data prior to analysis by removing the overall mean. Suppose that the probability of occurrence of the i-th class is P(ω_i), and let the membership of patterns in their corresponding classes be known.

Suppose that we would like to expand the vector x linearly in terms of some deterministic functions φ_k and associated coefficients y_k, i.e.


$$x = \sum_{k=1}^{N} y_k \phi_k \qquad (2)$$

subject to the conditions:

α) the φ_k are orthonormal

β) the y_k are uncorrelated

γ) the representation error

$$\varepsilon = \sum_{i=1}^{m} P(\omega_i)\, E\{\|x - \hat{x}\|^2\} \qquad (3)$$

incurred by approximating x with x̂ composed of n < N terms in the expansion (2), i.e.

$$\hat{x} = \sum_{k=1}^{n} y_k \phi_k$$

is minimised.

Fu and Chien have shown that the deterministic functions satisfying the properties α) through γ) are the eigenvectors $\phi_1, \ldots, \phi_N$ of the sample covariance matrix C defined as

$$C = E\{x x^T\} = \sum_{i=1}^{m} P(\omega_i)\, E\{x_i x_i^T\}$$

In order to satisfy condition γ), it is necessary to arrange the eigenvectors $\phi_1, \ldots, \phi_N$ in descending order of their associated eigenvalues,

$$\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n \geq \ldots \geq \lambda_N$$

Chien and Fu also showed that the eigenvalues λ_k are the variances of the transformed features y_k, and that the expansion has some additional favourable properties; in particular, the total entropy and residual entropy associated with the transformed features are minimised.

The transformation T of the pattern vectors into the K-L coordinate system results in compression of the information contained in the original N-dimensional pattern vectors x into n < N terms of the K-L expansion. This latter property has been utilised for feature selection in pattern recognition by various authors, and a few of the possibilities are listed below.

i) If the features y_k are to be uncorrelated, then T should be chosen as the matrix of eigenvectors associated with the mixture covariance matrix C_1 defined as (Watanabe)

$$C_1 = E\{x x^T\}$$


ii) If the transformation is required to decorrelate the components of the noise vector z_i, then the matrix of eigenvectors T should correspond to the averaged within-class covariance matrix C_2, i.e.

$$C_2 = \sum_{i=1}^{m} P(\omega_i)\, E\{z_i z_i^T\}$$

It should be noted that method ii) selects features irrespective of the discriminatory information contained in the class means, and that the utilisation of information about the mean vectors in method i) is not optimal in any sense. This can be seen from a detailed analysis of the variances λ_k of the new features y_k, i.e.

$$\lambda_k = \sum_{i=1}^{m} P(\omega_i)\, E\{y_{ki}^2\} = \sum_{i=1}^{m} P(\omega_i)\, \sigma_{ki}^2 + \sum_{i=1}^{m} P(\omega_i)\, \bar{y}_{ki}^2 = \bar{\sigma}_k^2 + \bar{\mu}_k^2 \qquad (4)$$

where

$$\sigma_{ki}^2 = E\{(y_{ki} - \bar{y}_{ki})^2\} \qquad (5)$$

and

$$\bar{y}_{ki} = E\{y_{ki}\} \qquad (6)$$

In an earlier paper, Kittler, J. and Young, P.C. (1973) have shown that the discriminant power of a feature against class means is related to the ratio $\bar{\mu}_k^2 / \bar{\sigma}_k^2$, provided that the averaged within-class covariance matrix is in diagonal form. The magnitude of the first term in (4), however, contains no discriminatory information at all. It is therefore desirable to normalise the averaged within-class variances to unity, and thus to allow selection of the features on the basis of the magnitude of the eigenvalues λ_k. It is this normalising transformation which is the essence of the proposed new feature selection technique.

iii) In order to normalise the noise, we first have to diagonalise the averaged within-class covariance matrix C_2,

$$C_2 = \sum_{i=1}^{m} P(\omega_i)\, E\{z_i z_i^T\} \qquad (7)$$

by transforming x into a new feature vector y using the system of eigenvectors U associated with C_2. Thus the feature vector y, where

$$y^T = x^T U \qquad (8)$$

will have a diagonal covariance matrix

$$C_y = \sum_{i=1}^{m} P(\omega_i)\, E\{U^T z_i z_i^T U\} = \mathrm{diag}(l_1, l_2, \ldots, l_N) \qquad (9)$$


where the l_k's are the eigenvalues associated with C_2.

The matrix C_y can now be transformed into the identity matrix by multiplying each component of the feature vector y by the inverse of its standard deviation. In matrix form this operation can be written as

$$g^T = y^T S \qquad (10)$$

where

$$S = \mathrm{diag}(l_1^{-1/2}, l_2^{-1/2}, \ldots, l_N^{-1/2}) \qquad (11)$$

Once the averaged within-class covariance matrix is in identity form, by solving the eigenvalue problem

$$C_g B = B \Lambda_g \qquad (12)$$

we can obtain a new K-L coordinate system, B, which is optimal with respect to the mixture covariance matrix C_g, where

$$C_g = E\{g g^T\} \qquad (13)$$

and Λ_g is the matrix of eigenvalues of the matrix C_g.

Note that the class means are included in this case. Thus B is a coordinate system in which the square of the projections of the class means onto the first coordinates, averaged over all classes, is maximised.

The eigenvalues λ_{gk}, which are the diagonal elements of Λ_g, can now be expressed as

$$\lambda_{gk} = 1 + \sum_{i=1}^{m} P(\omega_i)\,(E\{g_{ki}\})^2 = 1 + \bar{\mu}_{gk}^2 \qquad (14)$$

It follows that the features selected according to the magnitude of λ_{gk} will now be ordered with respect to their discriminatory power.

The feature selection procedure can be summarised in the following steps:

1) Using the K-L expansion, diagonalise the mixture covariance matrix C = E{x x^T} of the original N-dimensional data x. Disregard those features which are associated with negligible eigenvalues, and generate a feature vector x̃^T = x^T W of dimension Ñ < N, where W is the system of eigenvectors of C.

2) Find the K-L coordinate system U in which the averaged within-class covariance matrix C_y defined in (9) is diagonal.

3) Normalise the features y_k to transform the matrix C_y into identity matrix form.

4) Determine the final K-L coordinate system B which is associated with the mixture covariance matrix C_g.

If we ignore the first K-L expansion, which does not affect feature selection

but only reduces the computational burden of the following two K-L analyses, we can


view the overall linear transformation T = USB as one that decorrelates the mixture covariance matrix C_1 subject to the condition that

$$T^T C_2 T = I \qquad (15)$$

The resulting feature vector f is then given as

$$f^T = x^T T \qquad (16)$$

Note that the proposed feature selection technique is applicable to supervised pattern recognition problems only, because the membership of the training patterns in the individual classes must be known a priori so that the averaged within-class covariance matrix C_2 can be computed.
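The four numbered steps can be sketched numerically. The following is a rough illustration only; the function name, the use of class frequencies as estimates of P(ω_i), and the sample-based covariance estimates are our assumptions, not part of the paper:

```python
import numpy as np

def kl_supervised_features(X, labels, n_features):
    # Steps 2-4 of the procedure: U diagonalises the averaged within-class
    # covariance C2, S normalises the within-class variances to unity,
    # and B diagonalises the mixture covariance of the whitened data.
    X = X - X.mean(axis=0)                       # centralise: overall mean = 0
    classes, counts = np.unique(labels, return_counts=True)
    priors = counts / len(labels)                # P(omega_i) from class frequencies
    N = X.shape[1]
    C2 = np.zeros((N, N))                        # averaged within-class covariance
    for c, p in zip(classes, priors):
        Z = X[labels == c] - X[labels == c].mean(axis=0)   # noise vectors z_i
        C2 += p * (Z.T @ Z) / len(Z)
    l, U = np.linalg.eigh(C2)                    # K-L system U for C2
    S = np.diag(1.0 / np.sqrt(l))                # eq. (11): normalisation
    G = X @ U @ S                                # g^T = x^T U S
    Cg = (G.T @ G) / len(G)                      # mixture covariance, means included
    lg, B = np.linalg.eigh(Cg)
    order = np.argsort(lg)[::-1]                 # descending eigenvalues (14)
    T = U @ S @ B[:, order[:n_features]]         # overall transformation T = USB
    return X @ T, lg[order[:n_features]]
```

Because of (14), each returned eigenvalue is approximately 1 plus the averaged squared projection of the whitened class means, so discriminative directions stand out above 1.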

In order to show the relationship between the K-L expansion techniques i), ii), iii) and the discriminant analysis techniques which we describe in the next section, it is necessary to derive the results outlined above in an alternative manner. This requires that the problem is viewed as one of simultaneous diagonalisation of the matrices C_2 and C_1. From the previous discussion, we know that

$$S U^T C_2 U S = I \qquad (17)$$

Now by utilising (17), the eigenvalue problem (12) can be written as

$$S U^T (C_1 - \lambda_g C_2) U S\, b = 0 \qquad (18)$$

But since S U^T is nonsingular, the condition (18) will be satisfied only if

$$|C_1 - \lambda_g C_2| = 0 \qquad (19)$$

It follows that the λ_g are the eigenvalues of the matrix $C_1 C_2^{-1}$, i.e.

$$(C_1 C_2^{-1} - \lambda_g I)\, t = 0 \qquad (20)$$
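The equivalence expressed by (20) is easy to check numerically: the eigenvalues produced by the two-stage whitening route coincide with those of C_1 C_2^{-1}. The matrices below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
C2 = A @ A.T + 4 * np.eye(4)          # within-class covariance (positive definite)
B = rng.normal(size=(4, 4))
C1 = C2 + B @ B.T                     # mixture covariance = C2 + between-class part

# Two-stage route: whiten C2, then diagonalise the transformed mixture matrix.
l, U = np.linalg.eigh(C2)
S = np.diag(1.0 / np.sqrt(l))
Cg = S @ U.T @ C1 @ U @ S
lg = np.sort(np.linalg.eigvalsh(Cg))

# Direct route: eigenvalues of C1 C2^{-1}, as in (20).
ld = np.sort(np.real(np.linalg.eigvals(C1 @ np.linalg.inv(C2))))
assert np.allclose(lg, ld)
```

The agreement follows from the similarity of $S U^T C_1 U S$ and $C_1 C_2^{-1}$; it holds to machine precision for well-conditioned C_2.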

2.1 Experimental Results

The results of an experimental comparison of the three K-L procedures outlined above are given in Fig. 1. Data for the experiment were generated digitally according to the rule

(class A): x_1 ~ N(2, 2), x_2, ..., x_9 ~ N(0, 1), x_10 ~ N(0, 0.25)

(class B): x_1 ~ N(1.95, 2), x_2, ..., x_9 ~ N(0, 1), x_10 ~ N(0.5, 0.25)

where N(μ, σ) denotes a normal distribution with mean μ and standard deviation σ. From these results it is apparent that method iii) performs substantially better than the other two K-L procedures.

3. Discriminant Analysis

Let us now formulate the problem of feature selection in a different way. Suppose that we wish to find a linear transformation matrix T which maximises some


distance criterion d defined over the sample of random vectors in the transformed space. Two of the most important distance measures are as follows:

a) The intraset distance d_{1n} between the n-th feature of all the pattern vectors in one class, averaged over all classes, which is defined by

$$d_{1n} = \frac{1}{2} \sum_{i=1}^{m} P(\omega_i)\, \frac{1}{N_i^2} \sum_{j=1}^{N_i} \sum_{l=1}^{N_i} t_n^T (x_{ij} - x_{il})(x_{ij} - x_{il})^T t_n \qquad (21)$$

where N_i is the number of vectors x ∈ ω_i, and t_n is the n-th column of the transformation matrix T.

b)

The interset distance d_{2n} between the n-th feature of all patterns belonging to different classes, which is defined by

$$d_{2n} = \sum_{i=2}^{m} \sum_{h=1}^{i-1} P(\omega_i) P(\omega_h)\, \frac{1}{N_i N_h} \sum_{j=1}^{N_i} \sum_{l=1}^{N_h} t_n^T (x_{ij} - x_{hl})(x_{ij} - x_{hl})^T t_n \qquad (22)$$

It has been shown, Kittler, J. (1973), that these distance criteria can be expressed in terms of sample covariance matrices. In particular, the distances d_{1n} and d_{2n} become

$$d_{1n} = t_n^T C_2 t_n \qquad (23)$$

with C_2 given by (7), and

$$d_{2n} = t_n^T \tilde{C} t_n$$

where

$$\tilde{C} = C_1 - \sum_{i=1}^{m} P^2(\omega_i)\, E\{z_i z_i^T\} \qquad (24)$$

By analogy, the sum of the interset and intraset distances, d_{3n},

$$d_{3n} = \frac{1}{2} \sum_{i=1}^{m} P(\omega_i) \sum_{h=1}^{m} P(\omega_h)\, \frac{1}{N_i N_h} \sum_{j=1}^{N_i} \sum_{l=1}^{N_h} t_n^T (x_{ij} - x_{hl})(x_{ij} - x_{hl})^T t_n \qquad (25)$$

can be written as

$$d_{3n} = t_n^T (C_2 + M)\, t_n = t_n^T C_1 t_n \qquad (26)$$

where

$$M = \sum_{i=1}^{m} P(\omega_i)\, \mu_i \mu_i^T$$

Clearly a number of different distance criteria could be constructed, but in all cases it would be possible to express the distance in terms of a covariance-type matrix C, i.e.

$$d_n = t_n^T C t_n \qquad (27)$$


Using these results, the maximisation of a chosen distance criterion d can be carried out by maximising with respect to the transformation vector t_n, subject to some additional constraints, e.g. holding constant some distance s, where

$$s = t_n^T C_s t_n \qquad (28)$$

which is irrelevant for classification purposes; s might, for example, be defined as the intraset distance. The solution for this kind of problem can be obtained by the method of Lagrange multipliers. In the case of a simple constraint, the maximisation of d subject to s = const can be written as the maximisation of f, where

$$f = d - \lambda(s - \mathrm{const}) = t_n^T C t_n - \lambda(t_n^T C_s t_n - \mathrm{const}) \qquad (29)$$

Setting the first derivatives of f with respect to the components of t_n equal to zero yields

$$(C - \lambda C_s)\, t_n = 0 \qquad (30)$$

But if (30) is then postmultiplied by the inverse of C_s, we get an eigenvalue problem

$$(C C_s^{-1} - \lambda I)\, t_n = 0 \qquad (31)$$
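The constrained maximisation can be sketched numerically. The stationarity condition (30) is a generalised eigenvalue problem; we solve it here through the equivalent ordinary eigenproblem of $C_s^{-1} C$ (our formulation), with invented example matrices:

```python
import numpy as np

def constrained_distance_directions(C, Cs):
    # Solve C t = lambda * Cs t, i.e. the stationarity condition (30),
    # via the equivalent ordinary eigenproblem of Cs^{-1} C.
    lam, V = np.linalg.eig(np.linalg.inv(Cs) @ C)
    order = np.argsort(np.real(lam))[::-1]       # best direction first
    return np.real(lam[order]), np.real(V[:, order])

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
Cs = A @ A.T + 3 * np.eye(3)                     # constraint matrix (positive definite)
b = rng.normal(size=(3, 1))
C = Cs + b @ b.T                                 # distance matrix
lam, V = constrained_distance_directions(C, Cs)
t = V[:, 0]
assert np.allclose(C @ t, lam[0] * Cs @ t)       # (C - lambda*Cs) t = 0 holds
```

The eigenvector with the largest eigenvalue gives the direction of maximal d for a fixed value of the constrained distance s.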

When there is more than a single constraint the solution is more complicated, since the function under consideration now becomes

$$f = d - \sum_i \lambda_i (s_i - \mathrm{const}) \qquad (32)$$

and the solution must be obtained using general optimisation techniques. However, in the special case when there are only two constraints, and one of these is simply the condition of orthonormality of the matrix T, i.e.

$$T^T T = I \qquad (33)$$

the problem of optimisation can be posed as the eigenvalue problem

$$(C - \lambda C_s - \mu I)\, T = 0 \qquad (34)$$

And since in the design of pattern recognition systems our chief interest is in classification performance, we can determine λ experimentally to achieve the minimum error rate.
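For a fixed λ, (34) amounts to an eigenproblem of the symmetric matrix C − λC_s; the shift −μI changes the eigenvalues but not the eigenvectors, so μ only labels them. A small sketch (matrices invented for illustration) showing that the resulting T automatically satisfies the orthonormality constraint (33):

```python
import numpy as np

def two_constraint_axes(C, Cs, lam):
    # Eigenvectors of the symmetric matrix C - lam*Cs; the -mu*I term in
    # (34) merely shifts the eigenvalues, leaving the eigenvectors unchanged.
    w, T = np.linalg.eigh(C - lam * Cs)
    return T[:, np.argsort(w)[::-1]]             # strongest axes first

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))
Cs = A @ A.T + 4 * np.eye(4)
b = rng.normal(size=(4, 1))
C = Cs + b @ b.T
T = two_constraint_axes(C, Cs, 0.7)              # lambda would be chosen experimentally
assert np.allclose(T.T @ T, np.eye(4))           # orthonormality constraint (33) holds
```

In practice λ would be scanned over a grid and the value giving the lowest classification error rate kept, as described in the text.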

3.1 Experimental Results

This latter approach for two constraints has been tested on four class, arti-

ficially generated data. The classes were generated according to the rule,

(class A) : Xl, ..... ,xlO ~ (O,i)

(class B) : Xl, ..... ,x 8 ~ (O,l),x 9 ~ (4.2,1),xi0 ~ (-4.2,1)


(class C): x_1, ..., x_8 ~ N(0, 1), x_9 ~ N(2.2, 1.3), x_10 ~ N(2.2, 1.3)

(class D): x_1, ..., x_8 ~ N(0, 1), x_9 ~ N(2.2, 1.3), x_10 ~ N(-2.2, 1.3)

A summary of the results obtained using a classifier with a linear discriminant function is given in Fig. 2. The best error rate corresponds to λ = 0.7.

4. Discussion

There are many possible ways of defining the distance between the elements of a sample, and there are even more combinations of constraints and distance criteria that could be maximised in any particular feature selection problem. Consequently we shall restrict our discussion here to a few specific methods that have been suggested in the past.

First, let us consider the method proposed by Sebestyen. In this procedure the distance criterion to be maximised is d_{3n}, subject to the condition that the sum of the intraset distances remains constant. In this case the transformation matrix T' is the solution of equation (31), i.e.

$$(C_1 C_2^{-1} - \lambda I)\, t'_n = 0 \qquad (35)$$

Comparing the relationship (35) with (20), we see that the column vectors t'_n are collinear with the coordinate system obtained by the method iii) described in section 2, i.e.

$$t'_n = \xi_n t_n \qquad (36)$$

But in contrast to the method iii), where T was chosen to satisfy

$$T^T C_2 T = I \qquad (37)$$

the columns of the transformation matrix T' in the present case must be such that

$$\sum_{n=1}^{N} t'^T_n C_2\, t'_n = \mathrm{const} = K \qquad (38)$$

And from (36) it follows that

$$\sum_{n=1}^{N} \xi_n^2\, t_n^T C_2 t_n = K \qquad (39)$$

Thus we can choose N-1 of the coefficients ξ_n freely and evaluate the last one. But is there any particular choice of the coefficients ξ_n which would give better features? From (35) it follows that

$$t'^T_n C_1 t'_n - \lambda_n t'^T_n C_2 t'_n = \xi_n^2 + \xi_n^2 \lambda_{M_n} - \lambda_n \xi_n^2 = 0 \qquad (40)$$

where

$$\lambda_{M_n} = t_n^T M t_n \qquad (41)$$

and T^T M T is a diagonal matrix. Using (40) and (36), the maximised distance d_{3n}


can be written as

$$d_{3n} = \lambda_n \xi_n^2 \qquad (42)$$

However, from (40) the eigenvalue λ_n can be expressed in terms of λ_{M_n} as

$$\lambda_n = 1 + \lambda_{M_n} \qquad (43)$$

Thus we can conclude that although the distance d_{3n} is proportional to ξ_n², the discriminatory power inherent in d_{3n} is a function of λ_n and is therefore independent of ξ_n. Thus the ξ_n can be chosen as

$$\xi_n^2 = \xi_i^2 = \mathrm{const} \quad \forall\, n, i \qquad (44)$$

and T' becomes

$$T' = \mathrm{const}\; T \qquad (45)$$

Apart from a constant of proportionality, the features obtained in this manner will be exactly the same as those obtained by the method iii) and will satisfy the ordering criterion (13). It is interesting to note that the distance d_{3n} is only proportional to the discriminatory power of the n-th feature. If d_{3n} is used as an ordering criterion, therefore, any ill-chosen coefficients could result in a suboptimal ordering of the features.

If the interset distance d_{2n} is maximised instead of d_{3n}, with the same constraints, then the situation is rather more complicated, since the matrix C in (31) is now replaced by C̃. It can be shown that the matrix T obtained as the solution of (31) diagonalises both matrices C̃ and C_2, and this means that

$$T^T \tilde{C} T - \Lambda \Lambda_0 = 0 \qquad (46)$$

where $\Lambda_0 = T^T C_2 T$ and Λ is the matrix of eigenvalues of $\tilde{C} C_2^{-1}$. Substituting for C̃ from (24), the first term on the left-hand side of (46) can be rewritten to yield

$$\Lambda_0 + T^T \Big( M - \sum_{i=1}^{m} P^2(\omega_i)\, E\{z_i z_i^T\} \Big) T - \Lambda \Lambda_0 = 0 \qquad (48)$$

The term in the middle of the left-hand side of the equation must also be diagonal, and it is, in fact, this term which determines the optimal coordinate system T. Depending on the relative dominance of M or $\sum_{i=1}^{m} P^2(\omega_i) E\{z_i z_i^T\}$, the axes T may coincide with the coordinate system of the previous case, or lie in the direction determined by the term $\sum_{i=1}^{m} P^2(\omega_i) E\{z_i z_i^T\}$. In general, however, T will be a compromise between these two.

Let us denote the second term of (48) by Λ_M, i.e.


$$\Lambda_M = T^T \Big( M - \sum_{i=1}^{m} P^2(\omega_i)\, E\{z_i z_i^T\} \Big) T \qquad (49)$$

Then, by analogy with (43), the elements λ_n of the matrix of eigenvalues Λ can be expressed as

$$\lambda_n = 1 + \lambda_{M_n} \qquad (50)$$

Now even if we select the features according to the magnitude of the eigenvalues λ_n instead of the distance d_{2n}, the ordering may not necessarily be satisfactory, since the magnitude of λ_{M_n} is proportional not only to the discriminatory power of the n-th feature but also to the noise, represented by the second term of Λ_M, i.e.

$$\lambda_{M_n} = t_n^T M t_n - t_n^T \Big( \sum_{i=1}^{m} P^2(\omega_i)\, E\{z_i z_i^T\} \Big) t_n \qquad (51)$$

Thus the method might in certain circumstances yield features inferior to those obtained by maximising the criterion d_{3n}.

Finally, a few remarks are necessary in connection with the special case of feature selection with two constraints discussed in the previous section. From the experimental results obtained where the distance d_{2n} was maximised subject to the condition

$$\sum_{n=1}^{N} t_n^T C_2 t_n = \mathrm{const} \qquad (52)$$

we can conclude that the optimal coordinate axes T almost coincide with the eigenvectors of the matrix M, since λ is such that the constraint matrix cancels out the within-class scatter matrix $C_2 - \sum_{i=1}^{m} P^2(\omega_i) E\{z_i z_i^T\}$. This fact is even more obvious from the experimental results of Niemann, who originally suggested this approach. He assumed that the a priori class probabilities associated with the classification of ten numerals were equal. The minimum error rate was then obtained for λ = 0.9. Now if P(ω_i) = 0.1 ∀i, then the eigenvalue problem defined by (34) becomes

$$(C_2 - C_2/10 + M - \lambda C_2 - \mu I)\, T = (M - \mu I)\, T = 0 \qquad (53)$$

when λ = 0.9. This result is only to be expected, since we cannot possibly decorrelate both the within- and between-class scatter matrices by a single orthonormal transformation. It is quite reasonable, therefore, that the most important features will be selected with some degree of confidence only in the coordinate system in which their means are decorrelated. Thus, in practice, we can only hope that our choice will not be affected by the projected noise.

These same remarks apply when the distance d_{3n} is maximised. But in this case when λ = 0 the problem reduces to the K-L method i). However, from the above


discussion we see that better results can be obtained for λ → 1. Consequently Niemann's method will yield superior features.

5. Conclusion

The most important result of the comparative study described in this paper is the correspondence and, in particular cases, the direct equivalence that has been established between some statistical feature selection techniques developed from the Karhunen-Loeve expansion and some alternative techniques obtained by maximisation of a distance criterion. This allows us to extend the properties of features obtained by linear transformations derived from the K-L expansion to the distance optimisation methods, and vice versa. Thus we know, for instance, that the features obtained by Sebestyen's method will be not only maximally separated but also uncorrelated.

In a previous paper we have shown analytically and confirmed experimentally that the K-L procedure iii) is particularly favourable for feature selection applications. Some additional experimental results supporting this conclusion are presented in section 2 of the present paper. Moreover, the comparative study described here reveals that this procedure retains its advantages for an even larger class of linear transformation techniques, which includes methods based on separability measures.

The coordinate system used by the K-L method iii) can be obtained by a successive application of the K-L expansion, as described in section 2. Alternatively, it can be obtained from the system of eigenvectors associated with the product of two matrices, one of them being the within-class covariance matrix, as described in section 3. However, the designer may have some difficulties with the latter approach, particularly if the matrix being inverted is not well conditioned. Both for this reason, and also in order to have greater control over the analysed data, it seems better to use the first method. Although this implies two eigenvalue analyses, the matrices involved are symmetric and the problem is computationally quite simple.

References

1. Chien, Y.T., Fu, K.S.: IEEE Trans. Inf. Theory, IT-13, 518 (1967)

2. Fukunaga, K.: Introduction to Statistical Pattern Recognition, The Macmillan Company, New York (1972)

3. Kittler, J.: On Linear Feature Selection. Technical Report of Cambridge University Eng. Dept., CUED/B-Control/TR54 (1973)

4. Kittler, J.; Young, P.C.: A new approach to feature selection based on the Karhunen-Loeve expansion, Jnl. Pattern Recognition (to be published, 1973)

5. Mendel, J.M., Fu, K.S.: Adaptive, Learning and Pattern Recognition Systems, Academic Press, New York (1970)

6. Niemann, H.: An Improved Series Expansion for Pattern Recognition, Nachrichtentechn. Z., pp. 473-477 (1971)


7. Sebestyen, G.S.: Decision Making Processes in Pattern Recognition, The Macmillan Company, New York (1962)

8. Watanabe, S.: Computers and Information Sciences II, Academic Press, New York (1967)

Fig. 1: Error rate versus number of features, for the unordered features and for methods i), ii) and iii).

Fig. 2: Error rate versus number of features, for λ = -0.5, -0.1, 0.7 and 1.0.


A CLASSIFICATION PROBLEM IN MEDICAL RADIOSCINTIGRAPHY

Georg Walch

IBM Heidelberg Scientific Center

1. INTRODUCTION

In this paper we present a classification procedure, developed in the framework of hypothesis testing and decision theory, for classifying scintigraphic images interactively into the class of normal patterns and the class of patterns with anomalies.

In nuclear medicine the distribution of applied radioactive compounds in the human body is measured by imaging devices such as moving scanners and gamma-cameras, in order to obtain information about tumours and metastases on the basis of the different storage effects in those lesions and in the surrounding healthy tissue. The accumulated data is called a scintigraphic image or scintigram, because scintillations in crystals are involved in the detection of the emitted γ-quanta.

These scintigrams are characterised by high statistical noise, both radioactive decay and γ-detection being of a statistical nature, and by low spatial resolution. Both facts reduce the detectability of the small anomalies produced by tumours in an early stage. Because of radiation damage, it is not permissible to improve the signal-to-noise ratio by increasing the amount of applied activity. Therefore digital filters of Wiener type, constructed to minimise the expectation of the difference between filter result and object distribution in the least-squares sense, were applied to give an optimal compromise between noise suppression and resolution enhancement (Pistor et al.).

They may be considered as two-stage operators, the first part performing an optimal estimation of the undisturbed signal response, the second part inverting the linear transformation involved in the imaging process.


In the filtered images the resolution, the contrast, the signal-to-noise ratio, and thereby the detectability of small anomalies, are improved. But the classification between normal and abnormal patterns, i.e. the decision whether an observed fluctuation is of statistical or biological nature, is still left to the human inspector. To perform this classification automatically, or at least to base the human decision on quantitative measures, a likelihood ratio test is adapted to this special case.

2. LIKELIHOOD RATIO TEST

2.1 General Remarks

In our case the null hypothesis H0 is:

The image distribution is a known normal intensity pattern.

The alternative H1 is:

The normal pattern is changed at a known position by an anomaly of known shape.

The parameters size and strength of the anomaly are unknown and may vary. Therefore the alternative is a composite hypothesis. It is reduced to a simple one by estimating those parameters according to the maximum likelihood method.

The likelihood ratio test rejects the null hypothesis H0 if the ratio

$$\Lambda = \frac{L_0(X)}{L_1(X)} < K \qquad (1)$$

or, equivalently,

$$T = -2 \log \Lambda > K' \qquad (2)$$

where X is the given sample of observations, and L_0 and L_1 denote the joint probability functions of the sample under H0 and H1, respectively. According to the Neyman-Pearson lemma a likelihood ratio test is most powerful, i.e. there is no test with equal or smaller type I error which has a smaller type II error (Lindgren).


2.2 Specialized Formulae

An image area containing n observed quanta or counts is divided into k image cells. If p_i is the probability for a single count to fall in the i-th cell, the expected frequency in this cell is np_i. The observed frequencies x_i are Poisson distributed with means np_i, and these distributions may be approximated by normal distributions with the same means and variances. The x_i being statistically independent, the joint probability L of the sample X = (x_1, ..., x_k) is

$$L(X) = \prod_{i=1}^{k} \frac{1}{(2\pi n p_i)^{1/2}} \exp\!\Big( -\frac{(x_i - n p_i)^2}{2 n p_i} \Big) \qquad (3)$$

and

$$-2 \log L(X) = k \log(2\pi n) + \sum_{i=1}^{k} \log p_i + \sum_{i=1}^{k} \frac{(x_i - n p_i)^2}{n p_i} \qquad (4)$$

If H0 is true we write p_{0i} for p_i and (χ²)_0 for the last sum in eq. (4); if H1 is true we write p_{1i} and (χ²)_1, respectively. Then T is expressed by

$$T = \sum_{i=1}^{k} \log\frac{p_{0i}}{p_{1i}} + (\chi^2)_0 - (\chi^2)_1 \qquad (5)$$

To make use of this formula we have to formulate the hypotheses H0 and H1, i.e. to give explicitly the p_{0i} and p_{1i}, and we have to fix the critical value K'. Furthermore, we want to know the probabilities of the errors of type I and type II.

2.3 Distribution of the Test Statistic and Decision Strategy

To fix K' we have to know the distribution of T under H0. Asymptotically it is a χ² distribution if the test area is chosen randomly. But if the test is applied only at those areas where the difference between the filtered image and the expected normal image is maximal, the probability of obtaining large values of T increases, and the distribution of T is therefore changed. It is obtained by simulating images with a given intensity distribution and calculating T at the points of those maximum differences (the first term in (5), which is small compared to (χ²)_0 - (χ²)_1 for small anomalies, is neglected).


Figure 1 shows in the left curve the probability P0 that the statistic T is greater than the given abscissa X if H0 is true. It allows the type I error α to be determined for a given critical value K', or, vice versa, K' to be fixed for a given significance level α.

To determine the type II error β, further simulations of images containing anomalies of known size and strength are needed. The right curve of figure 1 shows the probability P1 that T is less than X in the case of H1. We can read from it the probability that T is less than a fixed K' in spite of H1 being true, which is the type II error. (This curve is valid only for fixed values of the parameters mean count rate, size, and strength of the anomaly.)

For definitively fixing the critical value K', different strategies are possible: e.g. to take the K' for a fixed α, or to take the K' which minimises the total error α + β, or to choose K' to minimise the total risk R, which is the sum of the two errors weighted by the risk functions R0 and R1 and by the a priori probabilities p(H0) and p(H1) of H0 and H1 being true, respectively:

$$R = R_0\, p(H_0)\, \alpha + R_1\, p(H_1)\, \beta \qquad (6)$$
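The risk-minimising choice of K' can be sketched by Monte Carlo simulation. All model details below (multinomial cell counts, anomaly strength, equal risks and priors) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Simulate the statistic T of eq. (5) under H0 and H1, estimate alpha and
# beta over a grid of critical values K', and pick the K' minimising the
# total risk R = R0*p(H0)*alpha + R1*p(H1)*beta of eq. (6).
rng = np.random.default_rng(4)
n, k, trials = 2000, 16, 400
p0 = np.full(k, 1.0 / k)
p1 = p0.copy(); p1[0] *= 1.6; p1 /= p1.sum()     # anomaly concentrated in one cell

def T_stat(x):
    return (np.log(p0 / p1).sum()
            + ((x - n * p0) ** 2 / (n * p0)).sum()
            - ((x - n * p1) ** 2 / (n * p1)).sum())

T0 = np.array([T_stat(rng.multinomial(n, p0)) for _ in range(trials)])  # H0 true
T1 = np.array([T_stat(rng.multinomial(n, p1)) for _ in range(trials)])  # H1 true

R0, R1, pH0, pH1 = 1.0, 1.0, 0.5, 0.5
grid = np.linspace(T0.min(), T1.max(), 200)
alpha = np.array([(T0 > Kc).mean() for Kc in grid])   # type I error estimate
beta = np.array([(T1 <= Kc).mean() for Kc in grid])   # type II error estimate
risk = R0 * pH0 * alpha + R1 * pH1 * beta             # eq. (6)
K_best = grid[risk.argmin()]
```

Plotting alpha and 1 - beta over the grid would reproduce curves of the kind shown in figures 1 and 2.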

The power of the test, which is the probability of rejecting H0 if H0 is false, is shown as the receiver operating characteristic in figure 2, as a function of α for a given parameter set.

Figure 3 gives the power as a function of the strength of the anomaly for a significance level α = 5 %.

2.4 Interactive Application

To apply the test procedure to clinical images we have to overcome two difficulties:

1) We do not know the shape of the anomaly. But we assume that a sphere is a good approximation to the shape of tumours in an early stage.

2) We do not know the normal pattern with sufficient precision


because there are large individual differences. But we have to make use of this knowledge only in the area where the test is executed. Therefore we approximate the trend in the neighbourhood of this area by a quadratic polynomial, fitted by least squares.
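The quadratic trend surface can be fitted by ordinary least squares on the six monomials 1, x, y, x², xy, y². A minimal sketch (function names are ours):

```python
import numpy as np

def quadratic_design(xs, ys):
    # Design matrix for the trend z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    return np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs * ys, ys**2])

def quadratic_trend(xs, ys, zs):
    # Least-squares fit of the six polynomial coefficients to image values zs
    coef, *_ = np.linalg.lstsq(quadratic_design(xs, ys), zs, rcond=None)
    return coef

def eval_trend(coef, xs, ys):
    # Evaluate the fitted trend at the given pixel coordinates
    return quadratic_design(xs, ys) @ coef
```

The difference between the filtered image and this fitted trend then gives the point at which the test statistic is evaluated.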

There remains the problem of defining the test area and its surroundings. The best way to do this is interactively, because the size and shape of the surroundings depend on the position of the anomaly within the organ under examination. This interactive procedure is demonstrated with the help of some figures.

The first example in figure 4 is a simulated image where we know the shape of the trend and the positions of the anomalies, but we proceed as in the case without this knowledge. We first display intensity profiles (figure 5) crossing the suspicious region, marked with arrows 1 in figure 4. The crossing point of the profiles is marked with a star in these curves. With the light pen we define on each of these curves two inner points, which border the test area, and two outer points, which limit the neighbourhood used for the approximation of the trend. The test is executed at the position where the difference between the filtered image and the trend is maximal. The next display (figure 6) then shows, crossing the position of this maximum, two profiles of the observed values as points, two profiles of the filtered image as wavy lines, and two profiles of the trend as parabolae. In addition, the position coordinates and the value of the test statistic are displayed at the right.

While the anomaly at the first test region is obvious, the two following cases, at the positions of arrows 2 and 3 in figure 4, are doubtful. The appearance of the profiles is very similar, but the test statistic is greater than the critical value K' = 16 for α = 5 % at the position of the true anomaly (figure 7), and the statistic is very small at the position without a true anomaly (figure 8).

The next example shows a clinical case of a liver scintigram (figure 9), where the search is made for metastases with negative storage effects. Figure 10 shows the profiles through the position marked in figure 9, and the result in figure 11 is acceptance of an anomaly.


3. CONCLUSION

While digital image processing in nuclear medicine has until recently consisted in quality improvement by various filter methods, leaving recognition and classification fully to human inspection and experience, the proposed test is a step towards automatic pattern recognition and classification. Its practical application is still at an early stage, but the success with simulated images suggests that it will be helpful in clinical work.

REFERENCES

(1) B.W. Lindgren: Statistical Theory, Macmillan, London (1968)

(2) P. Pistor, G. Walch, H.G. Meder, W.A. Hunt, W.J. Lorenz, A. Amann, P. Georgi, H. Luig, P. Schmidlin, H. Wiebelt: Digital Image Processing in Nuclear Medicine, Kerntechnik 14, 299-306 and 353-359 (1972)

(3) P. Pistor, P. Georgi, G. Walch: The Heidelberg Scintigraphic Image Processing System, Proc. 2nd Symposium on Sharing of Computer Programs and Technology in Nuclear Medicine, Oak Ridge National Laboratory (1972)


Fig. 1: Distribution of test statistic

Fig. 2: Receiver operating characteristic

Fig. 3: Operation characteristic


Fig. 4: Simulated image after filtering

Fig. 5: Profiles at 1


Fig. 6: Result at 1

Fig. 7: Result at 2


Fig. 8: Result at 3

Fig. 9: Liver scintigram after filtering


Fig. 10: Profiles of scintigram at marked position

Fig. 11: Results of trend fitting and testing


THE DYNAMIC CLUSTERS METHOD AND OPTIMIZATION IN NON-HIERARCHICAL CLUSTERING

E. DIDAY

I.R.I.A. (*), Rocquencourt (78) FRANCE

Abstract

Algorithms which are operationally efficient and which give a good partition of a finite set produce solutions that are not necessarily optimal. The main aim of this paper is a synthetic study of optimality properties in spaces formed by the partitions of a finite set. We formalize, and take as a model for that study, a family of particularly efficient techniques of the "cluster centers" type. The proposed algorithm operates on groups of points or "kernels"; these kernels adapt and evolve into interesting clusters.

After having developed the notion of "strong" and "weak" patterns, and the computer aspects, we illustrate the different results by an artificial example.

I/ Introduction

1.1 - The purpose

In various scientific areas (medicine, biology, archeology, economics, etc.) vast sets of objects represented by a finite number of parameters frequently appear; for the specialist, obtaining the "natural and homogeneous" groupings, together with the most representative elements of such a set, constitutes an important stage in the understanding of his data.

A good approach to the solution of the problem is provided by clustering techniques, which consist of finding a partition of a finite set such that each object resembles the objects within its group more than the objects outside it. In mathematical terms the problem can be formulated in one of the following forms, considering a certain criterion W:

A - Find the partition of E which optimizes W.

B - Find the partition of E which optimizes W among all the partitions in K classes.

The family of methods to which we will refer concerns mainly problem B,

(*) Institut de Recherche d'Informatique et d'Automatique.


but it will also be helpful for the user in resolving the following problem C.

C - Find, among all the partitions in K classes, the partition for which each class has the most representative kernel (a kernel is a group of points from the population to be classified).

In paragraph 1.2 we shall briefly give the main properties of the dynamic clusters method. This family of methods will be used as a model for the true purpose of this study, which will be developed in 1.3.

1.2 - The Dynamic Clusters method

One takes a function g which permits the transformation of a partition of E into a finite set of kernels, and a function f permitting the passage from a set of kernels to a partition. The principle of this method is simple: it consists of applying the f and g functions alternately, starting from an initial choice of kernels; under some hypotheses which will be given, the decrease of the criterion W is ensured. The formalism which we are giving allows us to obtain numerous variations of this technique and notably, as particular cases, the methods of HALL and BALL (1965), FREEMAN (1969), and DIDAY (1970). We took this family of methods as a model of our study for numerous reasons:

a) They allow us to avoid storing the table of the N(N-1)/2 (where N = card(E)) pairwise similarities of objects. This permits the processing of a much larger population than by other more classical techniques (SOKAL and SNEATH (1963), JOHNSON (1967), ROUX (1968), LERMAN (1970)).

b) These techniques are very fast. For instance, the variant studied in DIDAY (1970) allows the processing on an IBM 360/91 of a population of 900 items, each characterized by 35 parameters, in three and a half minutes.

c) These techniques do not suffer from the chain effect (cf. JOHNSON (1967)). In other words, they do not tend to bring two points that are far apart closer to each other when these two points are joined by a chain of points close to each other.

d) It is not necessary to define arbitrary thresholds to determine the classes, nor to stop the process (cf. SEBESTIEN (1966), BONNER (1964), HILL (1967), etc.).

e) The use of kernels favors the realization of partitions around the agglomerations with a high density and attenuates the effect of the marginal points (cf. figs 14 & 15). It also favors the appearance of empty classes. And finally, let us underline that the use of kernels


permits us to provide problem C with "local optima".

All of the realizable techniques having as their goal to minimize the criterion W provide solutions which nothing proves to be optimal. Yet the various studies recently carried out on the present status of research in "clustering" (see BOLSHEV (1969), FISHER and VAN NESS (1971), WATANABE (1971), BALL (1970), CORMACK (1971)) emphasize the nonexistence of a synthetic study of the solutions obtained by a given algorithm.

The present paper is devoted to this study. We have limited ourselves to a particular type of algorithm; but naturally, this analysis could be extended to other techniques. The set of solutions will be called V_k. Each solution attained by an algorithm is optimal with respect to a certain part of V_k which is a particular rooted tree. This leads to attributing a structure to the V_k space. It is shown in particular that, under some hypotheses, this space can be partitioned into a finite number of rooted trees which have for roots so-called "non-biased" stable solutions and for leaves some "impasse elements". The various results attained are applied as follows:

a) One builds a random variable which permits attaining an idea of the structure of V_k. An invariant is thus obtained which is interesting for multiple reasons: notably for data evolving over time and for a comparison of the efficiency of the techniques.

b) We define several types of "fuzzy sets" which will truly provide the user with the various facets of reality that he wants to grasp.

c) We are herewith publishing a new kind of technique which will allow, by switching from one rooted tree to another, an approach to the global optimum.

The example will particularly underline the interest of the "strong forms", which are a very useful tool for the practitioner, allowing him to extract from his population the most significant groups of points.

Finally, let us mention that we have skipped the theoretical developments, restricting ourselves to results that are interesting both for understanding and for the computer techniques employed.

2/ A Few Notations and Definitions

E : the set of objects to be classified; it will be assumed to be finite.

P(E) : the set of the subsets of E.

P_k : the set of partitions of E in a number n ≤ k of parts.


L_k ⊂ {L = (A_1, ..., A_k) / A_i ⊂ A} where A will represent, for instance, E or R^n.

V_k = L_k × P_k.

W : an injective mapping V_k → R^+.

A local optimum on C ⊂ V_k will be an element v* such that W(v*) = min_{v∈C} W(v). If C = V_k one has a global optimum.


Example 1 :

Let E be the set of 17 points shown in fig. 1. Let us take

L_2 = {L = (A_1, A_2) / A_i ⊂ E, card(A_1) = 3, card(A_2) = 2}

and V_2 = L_2 × P_2.

Let us choose W(v) = Σ_{i=1}^{2} (1/card P_i) Σ_{x∈A_i} Σ_{y∈P_i} d(x,y), where d is the Euclidean distance. The global optimum v* = (L*, P*), where L* = (A_1, A_2), is shown in fig. 2. The dotted lines indicate the points of E that form P_1 and P_2; the 3 points identified by the sign M form A_1; the other sign is used to represent the two points which constitute A_2.

3/ Constructing the Triplets (f,g,W)

We shall write v = (L,P) ∈ V_k where, for 1 ≤ i ≤ k : L = (A_1,...,A_k) with A_i ⊂ A, and P ∈ P_k : P = (P_1,...,P_k), where the P_i's are the classes of the partition P of E. We shall also take the following mappings:

D : E × P(E) → R^+

which in practice will express the similarity of one element of E with one subset of E.

R : A × T × P_k → R^+ (where T is the set of integers between 1 and k).

This mapping will be used to aggregate and separate the classes. For instance, R(x,i,P) = D(x,P_i) can be chosen.

The triplet (f,g,W) is constructed as follows:

W : V_k → R^+ : v = (L,P) → W(v) = Σ_{i=1}^{k} Σ_{x∈A_i} R(x,i,P)

f : L_k → P_k : f(L) = P with

P_i = {x∈E / D(x,A_i) ≤ D(x,A_j) for j ≠ i};

in case of equality, x is attributed to the part of smallest index.

g : P_k → L_k : g(P) = L with


A_i = {the n_i elements a ∈ A which minimize R(a,i,P)}. The value of n_i will depend upon the variant chosen (cf. 2).

In [9] we took as a convention to call A_i the "kernel" of the i-th class.

Remark : If R : A × T × V_k → R^+, then g : V_k → L_k must be chosen.

The dynamic clusters method consists in applying alternately the function f followed by the function g on the attained result, starting from L(0) ∈ L_k either estimated or drawn at random.
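This alternation can be sketched in a few lines of Python. The sketch below is ours, not the paper's: it takes D(x,A) as a sum of Euclidean distances and fixes the number n of elements per kernel, matching one of the variants discussed next.

```python
import random
from math import dist

def D(x, A):
    """Similarity of a point to a set: sum of Euclidean distances
    (an empty kernel is treated as infinitely far away)."""
    return sum(dist(x, a) for a in A) if A else float("inf")

def f(L, E):
    """f : kernels -> partition.  Each x goes to the class of the
    nearest kernel; ties go to the smallest index (as in the text)."""
    P = [[] for _ in L]
    for x in E:
        i = min(range(len(L)), key=lambda j: D(x, L[j]))
        P[i].append(x)
    return P

def g(P, n):
    """g : partition -> kernels.  The new kernel A_i is made of the
    n elements of P_i that minimize R(a, i, P) = D(a, P_i)."""
    return [sorted(P_i, key=lambda a: D(a, P_i))[:n] for P_i in P]

def dynamic_clusters(E, k, n, iters=10, seed=1):
    """Alternate f and g from a random initial set of kernels L(0)."""
    rng = random.Random(seed)
    L = [[p] for p in rng.sample(E, k)]
    for _ in range(iters):
        P = f(L, E)
        L = g(P, n)
    return L, P
```

Under the hypotheses given in 3.3 the criterion W is non-increasing along these iterations; in practice the loop can also stop as soon as the partition no longer changes.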

It is not our intention to explore all the possible variants. Rather, we shall explore those which appear to be interesting, by simply varying the choice of g and R (allowing the reader to dream up others).

a) For this variant one has: A ⊂ R^n, n_i = 1 ∀i; if furthermore R(x,i,P) = D(x,P_i), then g(P) = L is such that A_i is the center of gravity (*) of P_i in the sense of D.

WATANABE gives a history of this kind of method in [25].

b) A ⊂ E and n_i = card(F_i) where:

F_i = {x∈E / R(x,i,P) < R(x,j,P) ∀j≠i}; if i < j and R(x,i,P) = R(x,j,P), x is assigned to F_i. It is obvious that the A_i's are identical to the F_i's and constitute a partition of E.

A thorough study of this case can be found in [9] (which is a generalization of the method proposed by FREEMAN in [12], where L_k = P_k and g is replaced by f). Let us note that an interesting variant of this method consists in choosing ∀i∈{1,2,...,k} n_i = α·card(F_i) for some fixed α.

c) A ⊂ E and n_i is fixed once and for all ∀i ∈ {1,...,k}; n_i will be chosen by the user if he has an idea of the contents of his data; otherwise he can let:

n_i = α·card(E)/k for all i (see [10]).

d) A ⊂ E, n_i is fixed or equal to α·card P_i with 0 < α < 1; A_i is defined as being the n_i elements of P_i which minimize R(x,i,P). When n_i is fixed, in the case where the number of elements per kernel becomes greater than the number of elements of the corresponding class, one will take, for instance, n_i = card P_i if class P_i is concerned.

(*) x is called the center of gravity of P_i in the sense of D if D(x,P_i) = inf_{y∈R^n} D(y,P_i).


3.3 - Construction of triplets that make the sequence u_n decreasing

Definition of the sequences u_n and v_n : let h be the V_k → V_k mapping such that v = (L,P) ∈ V_k → h(v) = (g(P), f(g(P))).

A sequence {v_n} is defined by v_0 and v_{n+1} = h(v_n).

A sequence {u_n} is defined from the sequence {v_n} by u_n = W(v_n).

Definition of S :

Let S : L_k × L_k → R^+ : S(L,M) = Σ_{i=1}^{k} Σ_{x∈A_i} R(x,i,Q) where Q = f(M).

Definition of a square function (*) :

R will be called square if :

S(L,M) ≤ S(M,M) ⟹ S(L,L) ≤ S(L,M)

Theorem 1 :

If R is square, the (f,g,W) triplet makes the u_n sequence decrease for those cases where the number of elements per kernel is fixed.

4/ The Structure of L_k, P_k, V_k and Optimality Properties

Let us consider the graph Γ = (V_k, h). Particular elements then appear in V_k :

The following properties are equivalent and characterize a non-biased element (***) v = (L,P) ∈ V_k :

a) v is a root of a looped tree (****) of Γ.

b) v = h(v).

c) L = g(P) ; f(L) = P.

The properties d) and e) below respectively allow the characterization of the non-biased elements of L_k (resp. P_k).

(*) We have shown an example of a square function in [9] and [10].

(**) The demonstrations of all the results of this paper are in the "Thèse d'Etat" of the author.

(***) This name comes from the fact that the kernels corresponding to such an element are in the center (in the meaning of g) of the class they determine (in the meaning of f).

(****) cf. Appendix 1 for a definition of a "looped tree".


d) g(f(L)) = L.

e) f(g(P)) = P.

The following properties a) and b) are equivalent and characterize an impasse element v = (L,P) ∈ V_k :

a) v is a leaf of Γ.

b) P ≠ f(L) or f^{-1}(g^{-1}(L)) = ∅.

Let us point out that the properties c) and d) which follow respectively permit us to characterize the impasse elements of L_k (respectively P_k) :

c) g^{-1}(L) = ∅ or f^{-1}(g^{-1}(L)) = ∅.

d) f^{-1}(P) = ∅ or g^{-1}(f^{-1}(P)) = ∅ or f^{-1}(g^{-1}(f^{-1}(P))) = ∅.

The following theorem can immediately be deduced from the definitions, from proposition 2 (cf. appendix 1) and from theorem 1.

Theorem 2 :

If R is square, then :

a) Each connected component of Γ = (V_k,h) is a looped tree.

b) There exists in V_k at least one non-biased element.

c) If a non-biased element v* ∈ V_k is the root of a tree C, then v* is a local optimum with respect to the set of vertices of C.

d) If w ∈ V_k is not a non-biased element, w belongs to a looped tree with root w*, and W(w) > W(w*).

e) The global optimum is a non-biased element.

5/ Searching For Invariants

5.1 - A random variable on the looped trees :

One will first assume that the triplet (f,g,W) makes u_n decreasing ∀u_0 (in other words, ∀x∈V_k, W(h(x)) < W(x)). The probability space (Ω,A,P) of the family of the looped trees is defined as follows :

Ω = V_k ; A = {the algebra generated by the partition of Ω in looped trees} (i.e. the set of the parts of Ω which are unions of looped trees).

P : A → [0,1] is such that if C ∈ A is the union of n looped trees C_1,...,C_n, then P(C) = (1/card(Ω)) Σ_{i=1}^{n} card C_i.

The random variable X (so called of the family of the trees) of (Ω,A,P) in (R,B), where B is the Borel σ-algebra, is the mapping Ω → R such that X(v) = W(w), where w is the non-biased element of the looped tree containing v. X is actually a random variable, for if I∈B, X^{-1}(I) is the


union of trees of V_k whose vertices v are such that X(v) ∈ I. The distribution function F(x) = Pr(X < x) expresses the probability of obtaining an element v ∈ V_k in a looped tree or a loop containing a non-biased element w such that W(w) < x. In 6.1, an example of an empirical distribution function corresponding to an n-sample of V_k is given. In the case where ∃x : W(h(x)) > W(x) (in other words, one does not assume that the sequence u_n is decreasing ∀u_0), one can also define a random variable on the connected components of V_k. The random variable of (V_k,A,P) in (R,B) is then such that X(v) = inf_{y∈C} W(y), where C is the connected part of V_k to which v belongs. The introduction of these random variables permits us to get an idea of the connected components and of their respective sizes by means of the empirical distribution functions. This also gives us a tool to compare the different techniques, the best ones being those where the roots of the largest trees correspond to the smallest values taken by W (see 6.1).

5.2 - Strong and weak forms :

5.2.1 - Characterization of the various types of forms :

Let C_1,...,C_n be the n connected parts of the graph (V_k,h) and C = C_1 × C_2 × ... × C_n ; one defines the mapping Z : C → R^+ by :

Z(V) = W(v_1) + ... + W(v_n) where V = (v_1,...,v_n) ∈ C and v_i ∈ C_i.

Let V* be such that Z(V*) = min_{V∈C} Z(V), with V* = (v_1*,...,v_n*) and v_i* = (L^i*, P^i*) (if R is square, C_i is a looped tree or a loop and v_i* is the non-biased element of C_i). Let us denote P^i_j the j-th class of the partition P^i*.

Let H be the mapping E → N^n which, to each element x ∈ E, associates the vector (α_1,...,α_n) where α_i is the number of the class to which the element x belongs in P^i*. Let H(y) = (β_1,...,β_n), and let δ(x,y) be the number of indices i = 1,2,...,n such that α_i - β_i = 0. Let Γ_n and Γ_1 be two multi-valued functions defined on E such that :

F_n(x) = {y∈E / δ(x,y) = n} and F_1(x) = {y∈E / δ(x,y) ≥ 1}

Definition of strong forms (*) :

The following properties are equivalent and characterize the partition P* of E for which each class is a strong form :

1) P* = P^1* ∩ P^2* ∩ ... ∩ P^n*.

2) P* is the least fine (**) of the partitions which are finer than P^1*,...,P^n*.

(*) The intersection of two partitions is the set of the parts obtained by taking the intersection of each class of one with all the classes of the other.

(**) A partition P' is said to be finer than a partition P'' of E if every class of P'' is the union of classes of P'.


3) P* is the partition defined by the quotient space E/H.

4) P* is the partition defined by the connected parts of the graph Γ_n = (E,F_n).

Definition of weak forms :

The following properties are equivalent (1) and characterize the partition Q* of E for which each class is a weak form.

1) Q* is the finest of the partitions which are less fine than P^1*,...,P^n*.

2) Q* is the partition defined by the set of the connected parts of the graph Γ_1 = (E,F_1).

More generally, if we impose F_p(x) = {y∈E / δ(x,y) ≥ p} and Γ_p = (E,F_p), the set of the connected parts of Γ_p for p = 0,1,2,...,n constitutes a hierarchy which induces the subdominant ultrametric of a certain distance (cf. Appendix 3).

Remark :

It follows from these definitions that P* is a finer partition than Q*.

Definition of the overlapping points and of the isolated points :

They are characterized by the fact that they are strong forms reduced to a single point. They are distinguished by the following properties :

- a point a∈E is isolated if δ(a,x) = 0 ∀x∈E, x ≠ a.

- a point a∈E is overlapping if ∃x∈E : 0 < δ(a,x) < n.
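A small Python sketch may help fix ideas: given the n partitions P^1*,...,P^n* encoded as label dictionaries, the strong forms are the fibres of H (the quotient E/H of property 3) and the weak forms are the connected components of Γ_1. The encoding and the union-find are our choices, not the paper's:

```python
from collections import defaultdict

def strong_forms(labelings):
    """labelings: n dicts, each mapping x in E to its class number in
    one partition P^i*.  A strong form = a group with identical H(x)."""
    groups = defaultdict(list)
    for x in labelings[0]:
        groups[tuple(lab[x] for lab in labelings)].append(x)  # H(x)
    return list(groups.values())

def weak_forms(labelings):
    """Weak forms = connected components of Gamma_1: x ~ y whenever
    delta(x, y) >= 1, i.e. they share a class in some partition."""
    parent = {x: x for x in labelings[0]}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for lab in labelings:  # union all points sharing a class
        first = {}
        for x in lab:
            if lab[x] in first:
                parent[find(x)] = find(first[lab[x]])
            else:
                first[lab[x]] = x
    comps = defaultdict(list)
    for x in parent:
        comps[find(x)].append(x)
    return list(comps.values())
```

With labelings = [{'a': 0, 'b': 0, 'c': 1, 'd': 1}, {'a': 0, 'b': 0, 'c': 0, 'd': 1}] the strong forms are {a,b}, {c} and {d}, while the single weak form is {a,b,c,d} — illustrating the remark that P* is finer than Q*.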

5.2.2 - Fuzzy sets :

The interest of the "fuzzy sets" of Zadeh that we are introducing here is that they permit :

a) obtaining new forms from set operations on the strong forms (union, intersection, etc.) ;

b) characterizing these new forms without having to define their profile types (for instance, by the calculation of means) and even without knowing the elements of which they are constituted.

Each strong form A can be considered as a "fuzzy set" characterized by the mapping h_A : E → [0,1] such that h_A(x) = δ(x,a)/n where a ∈ A. One sees from this definition (3rd property) that h_A(a) = 1 ∀a∈A. One can use h_A in order to have an idea of the degree of similarity with A of an overlapping point or of another strong form. One can also use the

(1) For a demonstration of this equivalence cf. Appendix 2.


mapping F, defined on the set of the weak forms of E with values in [0,1], by

F(B) = (1/card(B)) Σ_{x∈B} (1/m) Σ_{j=1}^{m} h_{A_j}(x)

where the A_j are the m strong forms which constitute B. This mapping F expresses the degree of weakness of B; the more dissimilar the strong forms A_j are, the smaller F(B) will be.
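In code, the membership h_A and the weakness F(B) are short computations over the H vectors. A hedged Python sketch (the names and the two-partition example are ours; n is the number of partitions):

```python
def delta(Hx, Hy):
    """delta(x, y): number of partitions where x and y share a class."""
    return sum(a == b for a, b in zip(Hx, Hy))

def h(A_rep, Hx, n):
    """h_A(x) = delta(x, a) / n, a being any element of the strong
    form A (all its elements share the same H vector A_rep)."""
    return delta(A_rep, Hx) / n

def weakness(B, strong_reps, H, n):
    """F(B) = (1/card B) * sum over x in B of the mean of h_Aj(x)
    over the m strong forms A_j composing the weak form B."""
    m = len(strong_reps)
    return sum(sum(h(r, H[x], n) for r in strong_reps) / m for x in B) / len(B)

# Hypothetical example with n = 2 partitions and two strong forms:
H = {"x": (0, 0), "y": (0, 1)}
print(weakness(["x", "y"], [(0, 0), (0, 1)], H, 2))  # 0.75
```

A weakness of 1 would mean the strong forms composing B are indistinguishable across all partitions; smaller values signal a more heterogeneous weak form.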

5.4 - Reaching the global optimum by changing looped trees :

It is a matter of constructing, with the aid of two non-biased elements v^1 and v^2 of V_k, a third non-biased element v^3 which improves the criterion W. Suppose R is square.

We will denote v^i = (L^i,P^i) ∈ V_k with L^i = (L^i_1,...,L^i_k) and P^i = (P^i_1,...,P^i_k) ; v^i_j = (L^i_j,P^i_j).

We presume that W is additive (which in practice is often true), in other words that there exists a mapping z : P(E) × P(E) → R^+ such that W(v) = Σ_{j=1}^{k} z(L_j,P_j).

Let us assume that v^1 and v^2 are two non-biased solutions and that {v^{i_1}_{j_1},...,v^{i_k}_{j_k}} are the elements giving the k smallest values taken by z(x) with x ∈ {v^i_j / i = 1,2 and j = 1,2,...,k}. Let us denote P = (P^{i_1}_{j_1},...,P^{i_k}_{j_k}) and L = (L^{i_1}_{j_1},...,L^{i_k}_{j_k}). One can then easily prove the following proposition.

Proposition :

If L ≠ L^1, L ≠ L^2 and P ∈ P_k, then the looped tree containing v = (L,P) has as a root a non-biased element v^3 such that W(v^3) < inf(W(v^1),W(v^2)).

6/ Examples of Applications

We have applied the case c) on the inputs of Ruspini (cf. fig. 6). First of all, this permitted us to observe the speed of the method in comparison to that of Ruspini; thus, taking K = 4, n_1 = n_2 = n_3 = n_4 = 5 and

R(x,i,L) = D(x,A_i) = Σ_{y∈A_i} d(x,y), where d is the Euclidean distance,

we ran 50 passes of the method (each time changing the drawing of L(0)) in 2.57 minutes on the CII 10 070. These 50 passes have pointed out the existence of 6 looped trees. The frequency of appearance of each of these 6 solutions is indicated in figure 12. This graph is, in fact, a histogram of the random variable which has been defined in 5.1. The abscissa is u = lim u_n (cf. 3.3); convergence is generally obtained in about 4 iterations. The most frequent solution is the one that corresponds to the four best classes; the


value of u for this solution is clearly better than for the other solutions, which shows that it really corresponds to the best partition. The best solution corresponds to the root of the biggest looped tree, which is satisfactory for the method. The most frequent solutions are indicated in figures 7, 8, 9 and 10. One can easily see that the solutions given in figures 9, 10 and 11 do not add any information (cf. 5.2) to the solution given in figure 7. From these solutions one obtains 4 "strong forms" corresponding exactly to the four classes of the best solution.

Let us remark that in applying proposition 3 to the solutions given in figures 8 and 9, one brings out the looped tree whose root is the solution corresponding to that of figure 7.

One gives (table I) the table of the strong forms obtained by taking this time K = 6, n_5 = n_6 = 5, without changing the other parameters, and in taking 5 passes of the method (n = 5). This table brings out the existence of 6 strong forms and 3 weak forms. Let B_1, B_2, B_3 denote the weak forms and A_1,...,A_6 the strong forms (cf. Fig. 13). One can measure the "weakness" of B_i by using the F function (cf. 5.2.2). As

Σ_{j=1}^{3} h_{A_j}(x) = 1 + 4/5 + 4/5 ∀x∈B_1 ; Σ_{j=4}^{5} h_{A_j}(x) = 1 + 1/5 ∀x∈B_2 ; and h_{A_6}(x) = 1 ∀x∈B_3,

one has :

F(B_1) = 13/15, F(B_2) = 3/5 and F(B_3) = 1.

One sees that B_3 is a strong form, that B_1 is almost a strong form and that B_2 is a relatively weak form. The strong and weak forms that appear in fig. 13 express these values quite well. Finally, let us point out that in 4 of the 5 solutions there appear empty classes, which signifies that the number of classes that actually exist must be smaller than 6.
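As a quick check of these weakness values (the membership sums 1, 4/5, 4/5 for B_1, and 1, 1/5 for B_2, come from the text; Fraction is used only for exact arithmetic):

```python
from fractions import Fraction as Fr

# B1 is made of 3 strong forms; each x in B1 has memberships 1, 4/5, 4/5:
F_B1 = (Fr(1) + Fr(4, 5) + Fr(4, 5)) / 3
# B2 is made of 2 strong forms; memberships 1 and 1/5:
F_B2 = (Fr(1) + Fr(1, 5)) / 2
# B3 is a single strong form, so every membership equals 1:
F_B3 = Fr(1)
print(F_B1, F_B2, F_B3)  # 13/15 3/5 1
```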

Conclusion

A wide field of research is still open; in practice one should: develop the choice of f and g (e.g. by means of learning methods); develop the techniques allowing the choice of k (the number of classes required a priori); realize an exhaustive comparison of the various variants of the cluster centers method; develop in depth the techniques of the passage from one tree to another; make a statistical survey of the structure of the space V_k in relation to E, particularly as far as the relative number of impasse elements, non-biased elements, size of the trees, levels, etc., is concerned; set up techniques allowing a clearer vision of the strong forms table (e.g. of the minimum spanning tree type); use the weak forms in order to detect the low density zones among the strong forms (obtaining "holes" leading to numerous practical applications). Let us also point out the fact that the "strong forms table" allows us to obtain at once the three types of classification procedure:

a) Partitioning (by taking the partition corresponding to the best value of W);

b) Clumping (using the overlapping points);

c) Hierarchical classification (using the "connected descendant" method).

BIBLIOGRAPHY

(1) BALL G.H., 1970 - Classification Analysis - Technical Note, Stanford Research Institute, Menlo Park, California 94025, U.S.A.

(2) BARBU M., 1968 - Partitions d'un ensemble fini : leur treillis - M.S.H. n° 22

(3) BENZECRI J.P., 1971 - Algorithmes rapides d'agrégation - Sup. Class., Laboratoire de Statistique Mathématique, Université de Paris-6

(4) BENZECRI J.P., 1970 - Représentation euclidienne d'un ensemble muni de masses et de distances - Université de Paris-6

(5) BERGE C., 1967 - Théorie des graphes et ses applications - Dunod Ed., Paris

(5 bis) BOLSHEV L.N., 1969 - Cluster Analysis - I.S.I.R.S.S. '69

(6) BONNER R.E., 1964 - On some clustering techniques - IBM Journal of Research and Development

(7) CORMACK R.M., 1971 - A review of Classification - The Journal of the Royal Statistical Society, Serie A, vol. 134, Part 3

(8) DIDAY E., BERGONIM, BARRE J., 1970-71-72 - Différentes notes sur la programmation de la méthode des nuées dynamiques - Note I.R.I.A., Rocquencourt 78

(9) DIDAY E., 1970 - La méthode des nuées dynamiques et la reconnaissance des formes - Cahiers de l'I.R.I.A., Rocquencourt 78

(10) DIDAY E., 1971 - Une nouvelle méthode en classification automatique et reconnaissance des formes - Revue de Statistique appliquée, vol. XIX, n° 2

(11) FISHER L., VAN NESS J.W., 1971 - Admissible Clustering Procedures - Biometrika, 58, 1, p. 91

(12) FREEMAN N., 1969 - Experiments in discrimination and classification - Pattern Recognition J., vol. 1, n° 3

(13) HALL D.J., BALL G.H., 1965 - Isodata, a Novel Method of Data Analysis and Pattern Classification - Technical Report, SRI Project 5533, Stanford Research Institute, Menlo Park, California, U.S.A.

(14) HILL D.R., 1967 - Mechanized Information Storage, Retrieval and Dissemination - Proceedings of the F.I.D./I.F.I.P. Joint Conference, Rome

(15) JOHNSON S.C., 1967 - Hierarchical clustering schemes - Psychometrika 32, 241-254

(16) LERMAN H., 1970 - Les bases de la classification automatique - Gauthier-Villars, 1970

(17) PICARD J., 1972 - Utilisation des méthodes d'analyse de données dans l'étude de courbes expérimentales - Thèse de 3e cycle, Laboratoire de Statistique Mathématique, Université de Paris-6

(18) ROMEDER J.M., 1969 - Méthodes de discrimination - Thèse de 3e cycle, Statistique Mathématique, Faculté des Sciences de Paris-6

(19) ROUX M., 1968 - Un algorithme pour construire une hiérarchie particulière - Thèse de 3e cycle, Laboratoire de Statistique Mathématique, Université de Paris-6

(20) RUSPINI H.R., 1970 - Numerical Methods for fuzzy clustering - Information Science 2, p. 319-350

(21) SANDOR G., LENOIR P., KERBAOL M., 1971 - Une étude en ordinateur des corrélations entre les modifications des protéines sériques en pathologie humaine - C.R. Acad. Sc. Paris, t. 272, p. 331-334

(22) SANDOR G., DIDAY E., LECHEVALLIER Y., BARRE J., 1972 - Une étude informatique des corrélations entre les modifications des protéines sériques en pathologie humaine - C.R. Acad. Sc. Paris, t. 274, p. 464-467

(23) SEBESTIEN G.S., 1966 - Automatic off-line Multivariate Data Analysis - Proc. Fall Joint Computer Conference, pp. 685-694

(24) SOKAL R.R., SNEATH P.H.R., 1963 - Numerical Taxonomy - W.H. Freeman & Co, San Francisco and London

(25) WATANABE M.S., 1971 - A unified view of clustering algorithms - IFIP Congress 71, Ljubljana, Booklet TA-2

(26) ZADEH L.A., 1965 - Fuzzy sets - Inf. Control 8, pp. 338-353

(27) ZAHN C.T., 1971 - Graph Theoretical Methods for Detecting and Describing Gestalt Clusters - I.E.E.E. Trans. on Computers, vol. C-20, n° 1, January

(28) MacQUEEN J., 1967 - Some Methods for Classification and Analysis of Multivariate Observations - 5th Berkeley Symposium on Mathematics, Statistics and Probability, vol. I, n° 1, pp. 281-297

APPENDICES

Appendix 1

Let B be a finite set and h : B → B a function. The graph defined by B and the set of the arcs (h(x),x) will be noted Γ = (B,h). One knows that the set of the connected components of Γ constitutes a partition of B; each of these components has a particular form :

Proposition 1 : Each connected component of Γ contains at most one circuit.

One gives (fig. 4) an example of a connected component of Γ. We shall say that x is a fixed point if x = h(x). A tree having for its root a fixed point will be called a looped tree (see fig. 5). Let W be a mapping B → R^+.

Proposition 2 : If W is injective on the entire sequence v_n and verifies the property W(h(x)) < W(x), then :

1) Each connected component of Γ contains one and only one loop and does not contain any other circuit.

2) Each connected component of Γ is a looped tree or a loop.

3) If y∈B is not a fixed point, there exists a fixed point x such that W(x) < W(y).
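Proposition 2 can be illustrated on a toy mapping: following x, h(x), h(h(x)), ... from any point reaches a fixed point, the root of the looped tree containing x. A minimal sketch (the mapping h_table is hypothetical, not from the paper):

```python
def looped_tree_root(h, x):
    """Follow x -> h(x) -> h(h(x)) ... until a fixed point is reached.
    Under W(h(x)) < W(x) for non-fixed x, such an orbit cannot cycle."""
    while h(x) != x:
        x = h(x)
    return x

# A toy h on B = {0,...,5} with two fixed points (roots) 0 and 2:
h_table = {0: 0, 1: 0, 2: 2, 3: 2, 4: 2, 5: 4}
roots = {x: looped_tree_root(h_table.get, x) for x in h_table}
print(roots)  # {0: 0, 1: 0, 2: 2, 3: 2, 4: 2, 5: 2}
```

The connected components (here {0,1} and {2,3,4,5}) are exactly the fibres of this root map, in line with statement 2) of the proposition.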

Appendix 2

The problem is to show that the following two properties are equivalent and characterize a weak form :

1) Q* is the finest of the partitions which are less fine than P^1*,...,P^n*.

2) Q* is the partition defined by the set of the connected parts of the graph Γ_1 = (E,F_1).

Appendix 3

Theorem: Let Δ be the mapping (1) E × E → N such that Δ(x,y) = n − δ(x,y), and let E′ be the quotient space (2) E/H. If F_p is the multi-mapping E′ → P(E′) such that F_p(x) = {y ∈ E′ / Δ(x,y) ≥ p} and Γ_p is the graph (E′,F_p), then:

1) The set of the connected parts of Γ_p for p = 0,1,2,...,n constitutes a hierarchy on E′.

2) This hierarchy induces the subdominant ultrametric of Δ.

(1) δ is the function defined in 5.2.1. (2) H is the mapping as defined in 5.2.1.
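The theorem's construction can be imitated numerically. The sketch below uses hypothetical data (a small integer-valued Δ on a five-element E′; union-find stands in for the connected-parts computation) and checks that thresholding at p = n,...,0 yields nested partitions, i.e. a hierarchy on E′ as in item 1.

```python
import itertools

# Sketch (hypothetical data): a symmetric integer similarity Delta on
# E' = {0,...,4} with values in {0,...,n}.  The connected parts of the
# graph "Delta >= p" coarsen as p decreases, giving a hierarchy on E'.
n = 3
Delta = {(i, j): 0 for i in range(5) for j in range(5)}
for i in range(5):
    Delta[(i, i)] = n
def sym(i, j, v):
    Delta[(i, j)] = Delta[(j, i)] = v
sym(0, 1, 3); sym(1, 2, 2); sym(3, 4, 3); sym(2, 3, 1)

def components(p):
    """Connected parts of Gamma_p = (E', {(x,y): Delta(x,y) >= p})."""
    parent = list(range(5))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in itertools.combinations(range(5), 2):
        if Delta[(i, j)] >= p:
            parent[find(i)] = find(j)
    parts = {}
    for x in range(5):
        parts.setdefault(find(x), set()).add(x)
    return sorted(map(frozenset, parts.values()), key=min)

# Each partition at level p refines the one at level p-1: a hierarchy on E'.
for p in range(n, 0, -1):
    fine, coarse = components(p), components(p - 1)
    assert all(any(f <= c for c in coarse) for f in fine)
```

The merge level at which two elements first fall in the same part is precisely the subdominant ultrametric induced by Δ (item 2).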

Page 267: 5th Conference on Optimization Techniques Part I

L "~T£

/t- + \

/ ++~+ + \ I --~+~-+ ~ \ + +

/ \ ++ +

/ |

1+ + + l

r / +++ + +

L + + / + ++ /

+ \ /

~. ++.i /

/ + + + +~

I ÷ + +

~' +++ + + I \ ++4-+ / + +

N + / ~- + /

/ \

/ + x

I +++ +++ ÷ \ { + + + I

\ + ++ I + \ /

9 "ST~I

n~

%

~L" ~ • ~, sl,

Fig. 9 and Fig. 10: point-cloud cluster diagrams (graphics not reproducible).


Fig. 12 - Frequency of appearance of each solution after 50 drawings of L(0). The values U = 0.5, U = 2.11, U = 2.22 and U = 3.817 correspond to the solutions in Fig. 7, Fig. 8, Fig. 9 and Fig. 10 respectively.

Fig. 13: graphic not reproducible.


Table I: Strong forms for the inputs of Ruspini (numerical entries not recoverable).

Fig. 14 - Fig. 15: The signs "x" represent the elements to be classified whereas the sign "0" represents the center of gravity of the 5 elements. The three closest elements of the population are represented by the sign "x"; the center of gravity of these three elements attenuates the effect of the marginal element.


A MAXIMUM PRINCIPLE FOR GENERAL CONSTRAINED OPTIMAL CONTROL PROBLEMS - AN EPSILON TECHNIQUE APPROACH

Jerome W. Mersky

1215 S. Leland St., San Pedro, CA 90731, U.S.A.

We wish to present an extension of the Epsilon Technique (Reference 1) to general constrained optimal control problems with systems governed by ordinary differential equations. The use of the Epsilon Technique provides a straightforward constructive approach to the maximum principle and, in particular, to the Lagrange multipliers for a very general constrained optimal control problem which subsumes the so-called "bounded phase coordinate" problem.

Before formally stating the problem, we establish some definitions and notation. We shall be working in the class of generalized controls in the sense of McShane (Reference 2) and L. C. Young (Reference 4):

Definition. Let U be a compact set in R^k. A function u which, to almost every t, assigns a probability measure μ(·,t) defined on the Lebesgue subsets of U, is said to be a generalized control.

Definition. If f(·,t) is a function defined on U, it may be extended to a function f̃ of generalized controls by the following: if μ_o is the probability measure associated with the generalized control u_o, then

f̃(u_o, t) = ∫_U f(u,t) dμ_o .

Since in the following all controls are assumed to be generalized, there will be no confusion in dropping the tilde over the symbol for generalized controls. Generalized controls may be considered as elements of a dual space of continuous functions; therefore, we shall use the weak-* topology for the topology of U_g, the class of generalized controls generated by the compact set U.


We are now in a position to state the problem we wish to consider, viz.

Problem P. In the class of generalized controls, minimize

∫₀ᵀ g(t, x(t), u) dt

subject to

ẋ(t) = f(t, x(t), u)   a.e.
φ(t, x(t), u) = 0   a.e.
ψ(t, x(t), u) ≤ 0   a.e.
χ(t, x(t)) ≤ 0   a.e.

where x(t) ∈ Rⁿ, φ(t,x(t),u) ∈ R^p, ψ(t,x(t),u) ∈ R^r, χ(t,x(t)) ∈ R^q; f, g, φ and ψ are continuous in (t,x,u) and continuously differentiable in x, and χ is C¹ in (t,x). In addition, we require that, for all admissible x(t),

∫₀ᵀ ‖ẋ(t)‖² dt ≤ N < ∞

where N is a fixed constant, and that

[x, f(t,x,u)] ≤ C(1 + ‖x‖²),   t ∈ [0,T].

We replace this with the "epsilon-problem":

Problem P_ε. In the class of generalized controls, minimize

h(ε,x(·),u) = (1/2ε) ∫₀ᵀ ‖ẋ(t) − f(t,x(t),u)‖² dt + (1/2ε) ∫₀ᵀ ‖φ(t,x(t),u)‖² dt
+ (1/2ε) ∫₀ᵀ [m(t,x(t),u), ψ(t,x(t),u)] dt + (1/2ε) ∫₀ᵀ [n(t,x(t)), χ(t,x(t))] dt
+ ∫₀ᵀ g(t,x(t),u) dt

where ε > 0, and

m(t,x(t),u) = 0 if ψ(t,x(t),u) < 0;  m(t,x(t),u) = ψ(t,x(t),u) if ψ(t,x(t),u) ≥ 0,

n(t,x(t)) = 0 if χ(t,x(t)) < 0;  n(t,x(t)) = χ(t,x(t)) if χ(t,x(t)) ≥ 0,

with the same conditions as in Problem P.
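The mechanism of Problem P_ε — dropping the dynamics as a hard constraint and charging a penalty (1/2ε)‖ẋ − f‖² instead — can be seen on a one-dimensional toy problem of our own (not the paper's P_ε): with f = u fixed and a quadratic running cost, the penalized minimizer solves a linear system, and its dynamics residual shrinks as ε → 0.

```python
import numpy as np

# Toy sketch of the epsilon-technique idea (assumed example, not from the
# paper): minimise (1/2eps)*dt*|Dx - u|^2 + dt*|x|^2 over the discretised
# state x, with x_0 = 0 fixed and the control u held fixed.
T, K = 1.0, 200
dt = T / K
u = np.ones(K)                        # with x' = u the exact state is x(t) = t

# Discrete derivative of x = (x_1,...,x_K): (Dx)_k = (x_{k+1} - x_k)/dt
D = (np.eye(K) - np.eye(K, k=-1)) / dt

def solve(eps):
    """Unconstrained minimiser of the penalised cost for a given eps > 0."""
    # Normal equations: ((1/eps) D^T D + 2 I) x = (1/eps) D^T u
    A = D.T @ D / eps + 2.0 * np.eye(K)
    x = np.linalg.solve(A, D.T @ u / eps)
    return x, np.max(np.abs(D @ x - u))   # max dynamics residual |x' - u|

_, r_big = solve(1.0)
_, r_small = solve(1e-4)
assert r_small < r_big and r_small < 0.1  # residual -> 0 as eps -> 0
```

The quantity (1/ε)(ẋ_ε − f), which stays bounded as the residual vanishes, is exactly what becomes the adjoint multiplier Y(ε,t) in Theorem 1.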

The following theorem gives the existence of, and necessary conditions for, solutions to Problem P_ε.

Theorem 1. Under the above conditions, there exists a solution pair x_ε(·), u_ε for Problem P_ε. This pair satisfies the "ε-maximum principle":

1.1) Let

H(ε,t,x(t),u) = [Y(ε,t), f(t,x(t),u)] − [L(ε,t), φ(t,x(t),u)] − [M(ε,t), ψ(t,x(t),u)] − [N(ε,t), Ψ(t,x(t),u)] − g(t,x(t),u)

where

L(ε,t) = (1/ε) φ(t, x_ε(t), u_ε)

M(ε,t) = (1/ε) m(t, x_ε(t), u_ε)

N(ε,t) = (1/2ε) ∫ₜᵀ n(s, x_ε(s)) ds

Ψ(t,x(t),u) = (∂χ/∂t)(t,x(t)) + (∂χ/∂x)(t,x(t)) f(t,x(t),u)

Then,

1.2) H(ε, t, x_ε(t), u_ε) = max_{u ∈ U_g} H(ε, t, x_ε(t), u).

If we write Y(ε,t) = (1/ε)(ẋ_ε(t) − f(t,x_ε(t),u_ε)), then we have

Ẏ = −∇f* Y + ∇φ* L + ∇ψ* M + ∇Ψ* N + ∇g   a.e.

Y(ε,T) = 0

evaluated along (t, x_ε(t), u_ε).

1.3) The multipliers N_i(ε,t) are monotonic non-increasing and constant on intervals along which χ_i(t, x_ε(t)) < 0.

Comments. The proof is omitted here and may be obtained from Reference 3. The basic idea is to observe first that

∫₀ᵀ [n(t,x(t)), χ(t,x(t))] dt = ∫₀ᵀ [ ∫ₜᵀ n(s,x(s)) ds , Ψ(t,x(t),u) ] dt .

Then if Γ(ε,t,ẋ,X) is used to denote the sum of the integrands in the definition of h(ε,x(·),u) as a function of

X(t,x) = (f(t,x,u), g(t,x,u), φ(t,x,u), ψ(t,x,u)),   u ∈ U_g,

then

(d/dθ) Γ(ε,t,ẋ, X + θ(X − X_ε)) |_{θ=0} ≥ 0 .

This gives 1.1). To obtain 1.2), let d(t) be an element of the Schwartz space of infinitely smooth functions with compact support, and x_d(t) = x_ε(t) + θ d(t). Then

(d/dθ) h(ε, x_d(·), u_ε) |_{θ=0} = 0

gives 1.2). Condition 1.3) is immediate.

We may proceed now to the limiting form of Theorem 1, which gives us a new existence theorem and maximum principle for Problem P.

Theorem 2. As ε converges to zero, x_ε(·) converges uniformly to x_o(·) and u_ε converges weak-* to u_o, with x_o(·), u_o being a solution pair to Problem P. If, furthermore, the matrix

( ∂φ/∂u , ∂ψ/∂u )

has full rank along (t, x_o(t), u_o), then the following limits exist:

ȳ(t) = lim_{ε→0} Y(ε,t)
λ(t) = lim_{ε→0} L(ε,t)
μ(t) = lim_{ε→0} M(ε,t)
ν(t) = lim_{ε→0} N(ε,t)

and we have the maximum principle:

2.1) Let

H(t,x(t),u) = [ȳ(t), f(t,x(t),u)] − [λ(t), φ(t,x(t),u)] − [μ(t), ψ(t,x(t),u)] − [ν(t), Ψ(t,x(t),u)] − g(t,x(t),u).

Then

H(t, x_o(t), u_o) = max_{u ∈ U_g} H(t, x_o(t), u).

2.2) ẏ̄ = −∇f* ȳ + ∇φ* λ + ∇ψ* μ + ∇Ψ* ν + ∇g   a.e.

ȳ(T) = 0

evaluated along (t, x_o(t), u_o).

2.3) The multipliers μ_i(t) are non-negative and μ_i(t) = 0 when ψ_i(t, x_o(t), u_o) < 0; the multipliers ν_i(t) are monotonic non-increasing functions of t which are constant along intervals along which χ_i(t, x_o(t)) < 0, with ν_i(T) = 0.

Comments. Again a detailed proof may be found in Reference 3. The main idea here is that the matrix above, having full rank, has an inverse which allows us to write u_ε − u_o as a function of the constraint terms φ(t,x_ε(t),u_ε), etc., so that the convergence arguments proceed in a straightforward manner.

References

1. Balakrishnan, A. V., "The Epsilon Technique - A Constructive Approach to Optimal Control" in Control Theory and the Calculus of Variations, A. V. Balakrishnan (ed.), Academic Press, New York, 1969.

2. McShane, E. J., "Optimal Controls, Relaxed and Ordinary," in Mathematical Theory of Control, A. V. Balakrishnan and L. W. Neustadt (eds.), Academic Press, New York, 1967.

3. Mersky, J. W., An Application of the Epsilon Technique to Control Problems with Inequality Constraints, Dissertation, University of California, Los Angeles, 1973.

4. Young, L. C., Lectures on the Calculus of Variations and Optimal Control Theory, W. B. Saunders Co., Philadelphia, 1969.


OPTIMAL CONTROL OF SYSTEMS

GOVERNED BY VARIATIONAL INEQUALITIES

J.P. YVON

(IRIA 78 - ROCQUENCOURT - FRANCE)

SUMMARY

In many physical situations, systems are represented not by equations but by variational inequalities: a typical case is systems involving semi-porous media, but there are many other examples (cf. e.g. Duvaut-Lions [4]). This paper (1) is devoted to the study of optimal control problems for such systems. As in the case of partial differential equations we are led to consider the analogous separation between elliptic and parabolic systems; this is studied first, and then we give two algorithms with application to a biochemical example.

I - ELLIPTIC INEQUALITIES

Let us define

- V Hilbert space, K closed convex subset of V,
- a(y,z) bilinear form on V, continuous and positive definite,
- j(z) convex l.s.c. functional on V,

and

- U Hilbert space, U_ad a closed convex subset of U,
- B ∈ 𝓛(U; V′).

Then we consider the following problem :

Problem E_0

For each v ∈ U find y ∈ K solution of

(1.1) a(y,φ−y) + j(φ) − j(y) ≥ (f+Bv, φ−y)   ∀ φ ∈ K,

where f is given in V′.

Theorem 1.1

Under the following hypothesis:

(1.2) a(·,·) is coercive: a(φ,φ) ≥ α‖φ‖², α > 0, or j(·) is strictly convex,

there is a unique solution y(v) to Problem E_0.

For the proof of this theorem cf. Lions-Stampacchia [6]. We introduce now:

- 𝓗 Hilbert space and C ∈ 𝓛(V; 𝓗),
- z_d given in 𝓗,

and we consider

Problem E_1

Find u ∈ U_ad solution of

(1.3) J(u) ≤ J(v)   ∀ v ∈ U_ad.

(1) More details about definitions, proofs, etc., will be given in Yvon [8].


where

(1.4) J(v) = ‖Cy(v) − z_d‖²_𝓗 + ν‖v‖²_U ,   ν ≥ 0.

Theorem 1.2

If we assume:

(1.5) i) U_ad bounded (or ν > 0 in (1.4)), and
ii) if φ_n → φ weakly in V and v_n → v weakly in U, then lim (B v_n, φ_n) = (Bv, φ),

then there is at least one solution to (1.3).

For the proof one uses a minimizing sequence of J(v); the hypothesis (1.5)-ii) allows passing to the limit in (1.1).

Remark 1

The assumption ii) of (1.5) is a compactness property of B. For instance, in the case

(1.6) V ⊂ H ⊂ V′

(each space dense in the next with continuous injection) we may take

B v = B̃ v,   B̃ ∈ 𝓛(U; H),

so that

(B v, φ)_{V′,V} = (B̃ v, φ)_H .

If the injection from V into H is compact then we obtain property (1.5)-ii).
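For intuition, Problem E_0 can be discretized. In the sketch below (our own assumptions: a symmetric coercive form, j = 0, and K the non-negative cone of R^n) the inequality reduces to a quadratic program solved by projected gradient, and the computed y is checked directly against the discrete analogue of (1.1).

```python
import numpy as np

# Finite-dimensional sketch (assumptions: symmetric a, j = 0, K = {y >= 0});
# E_0 then reduces to min over K of 1/2 y^T A y - b^T y, solvable by the
# projected-gradient iteration y <- max(y - s*(A y - b), 0).
rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # coercive (positive definite) form a
b = rng.standard_normal(n)           # stands in for f + B v

y = np.zeros(n)
s = 1.0 / np.linalg.norm(A, 2)       # step below 1/||A||
for _ in range(2000):
    y = np.maximum(y - s * (A @ y - b), 0.0)

# Check the variational inequality a(y, z - y) >= (b, z - y) for trial z in K.
for _ in range(100):
    z = np.abs(rng.standard_normal(n))
    assert (A @ y - b) @ (z - y) >= -1e-6
```

At the solution the residual A y − b is complementary to y, which is the finite-dimensional shadow of the inequality (1.1).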

II - PARABOLIC SYSTEMS

We suppose now that we have (as in (1.6)):

- V, H Hilbert spaces, V ⊂ H dense with continuous injection, so that V ⊂ H = H′ ⊂ V′,
- a(y,z) bilinear form on V, symmetric and coercive,
- f given in L²(0,T;H),
- j(z) a convex l.s.c. functional on V with domain D(j) = {φ ∈ V | j(φ) < +∞},
- y_o given in the closure of D(j) in H.

Then we have

Theorem 2.1

With the previous data, there exists a unique function y such that

(2.1) y ∈ C([0,T];V),   dy/dt ∈ L²(0,T;H),

and satisfying

(2.2) (dy/dt, z−y) + a(y,z−y) + j(z) − j(y) ≥ (f, z−y)   ∀ z ∈ D(j), a.e. in (0,T),

(2.3) y(0) = y_o .

Demonstration in Brezis [2].

Now let us define:
- U Hilbert space, U_ad closed convex subset of U,
- B ∈ 𝓛(U; L²(0,T;H)),


and

Problem P_0

For each v ∈ U_ad find y satisfying (2.1) and

(2.4) (dy/dt, z−y) + a(y,z−y) + j(z) − j(y) ≥ (f+Bv, z−y)   ∀ z ∈ D(j), a.e. in (0,T).

There exists a unique y(v) solution of Problem P_0. Now let us introduce:

- 𝓗 Hilbert space and C ∈ 𝓛(L²(0,T;V); 𝓗),
- z_d given in 𝓗,

and

Problem P_1

Find u ∈ U_ad such that

(2.5) J(u) ≤ J(v)   ∀ v ∈ U_ad

with

(2.6) J(v) = ‖Cy(v) − z_d‖²_𝓗 + ν‖v‖²_U ,   ν ≥ 0.

Theorem 2.2

With the following hypotheses:

(2.7) i) the injection from V into H is compact,
ii) U_ad is bounded (or ν > 0),

there exists at least one solution u to Problem P_1.

An example

Let Ω be an open set of Rⁿ, Γ its boundary, and consider the system

(2.8) ∂y/∂t − Δy = v in Ω × ]0,T[ ;  y(x,0) = y_o in Ω, y_o given in L²(Ω),

and

(2.9) φ(r) = 0 if r < h ;  φ(r) = kr if r ≥ h ;  h, k > 0.

In order to set properly the inequality associated with relations (2.8), let

(2.10) H = L²(Ω), V = H¹(Ω), U = L²(Ω),

a(y,z) = ∫_Ω Σ_{i=1}^{n} (∂y/∂x_i)(∂z/∂x_i) dx,

j(z) = ∫_Γ F[z(γ)] dγ  with  F(r) = 0 if r ≤ h, F(r) = (k/2)(r² − h²) if r ≥ h,

so that y(v) is the unique solution of the variational inequality:


(2.11) (dy(v)/dt, z − y(v)) + a(y(v), z − y(v)) + j(z) − j(y(v)) ≥ (v, z − y(v))   ∀ z ∈ V,

(2.12) y(v)|_{t=0} = y_o .

The cost functional is

(2.13) J(v) = ∫₀ᵀ ‖y(v) − z_d‖² dt + ν‖v‖²_U ,   ν ≥ 0.

Then if U_ad is bounded or ν > 0, there exists at least one optimal control u solution of (2.5) with (2.13).

Remark 2.1

As in the elliptic case, the main difficulty of the problem (theoretically and numerically) is the non-differentiability of the mapping

(2.14) v → y(v), solution of Problem P_0 (or E_0),

so that the question of necessary optimality conditions on u is, as far as we know, still open. For some results in this direction cf. F. Mignot [7].

III - DUALITY APPROACH

Using the ideas presented in a paper by Cea-Glowinski-Nedelec [3], one can obtain a dual formulation of the variational inequality. For reasons of simplicity we suppose that we are in the elliptic case and that the variational inequality comes from the minimization problem:

(3.1) Inf_{φ∈V} ½ a(φ,φ) + j(φ) − (f + Bv, φ).

Then the fundamental assumptions are: there exist a Banach space L, a closed convex bounded set Λ of L′ with 0 ∈ Λ, and an operator G (not necessarily linear) from V into L such that

(3.2) j(φ) = Sup_{λ∈Λ} <λ, G(φ)> . (1)

Now we can rewrite the problem (3.1) in the form

(3.3) Inf_{φ∈V} Sup_{λ∈Λ} 𝓛(φ,λ;v)

with

(3.4) 𝓛(φ,λ;v) = ½ a(φ,φ) + <λ, G(φ)> − (f+Bv, φ).

The dual formulation of (3.3) is

(3.5) Sup_{λ∈Λ} Inf_{φ∈V} 𝓛(φ,λ;v).

Example 3

We take the analogue of example 2. The problem is in the general form of (3.1) with a(·,·) and j(·) given by (2.10). So we have L = L¹(Γ), L′ = L^∞(Γ),

Λ = {λ ∈ L^∞(Γ) | 0 ≤ λ(γ) ≤ 1 a.e. on Γ},

and

G(φ)(γ) = (k/2)(φ²(γ) − h²),

which is a non-linear operator from V = H¹(Ω) into L¹(Γ).

(1) Here <·,·> denotes the duality product between L′ and L.


Theorem 3.1

Under the following hypothesis:

(3.6) for each λ ∈ Λ the mapping φ → <λ, G(φ)> is convex l.s.c.,

there exists a saddle point for 𝓛(·,·;v).

The proof is an application of the theorem of Ky Fan-Sion.

Theorem 3.2

If G is continuous from V-weak into L-weak, then

(3.7) M(λ) = 𝓛(y(λ),λ;v) = inf_{φ∈V} 𝓛(φ,λ;v),

and if assumption (3.6) holds, M is Gâteaux differentiable with derivative given by

(3.8) M′(λ)·μ = <μ, G(y(λ))> .

Corollary 3.1

A necessary and sufficient condition for λ* ∈ Λ to solve the dual problem (3.5) is

(3.9) <λ* − λ, G(y(λ*))> ≥ 0   ∀ λ ∈ Λ.

Corollary 3.2

If λ* solves the dual problem (3.5), then y(λ*) solves the primal (3.3). Now we can state the problem corresponding to paragraph 1:

Optimal control problem

A. For each v ∈ U_ad compute λ(v) and y(v) = y(λ(v)) solution of

(3.10) Sup_{λ∈Λ} Inf_{φ∈V} 𝓛(φ,λ;v).

B. Find u ∈ U_ad such that J(u) ≤ J(v) ∀ v ∈ U_ad, with J given in (1.4).

Then, using the optimality condition (3.9), we can associate to the previous problem the

Suboptimal control problem

A. For λ fixed in Λ compute y(λ;v) solution of

Inf_{φ∈V} 𝓛(φ,λ;v).

B. Solve the optimal control problem

(3.11) Inf_{v∈U_ad} ‖Cy(λ;v) − z_d‖² + ν‖v‖²_U ;

this gives u(λ) and y(λ) = y(λ;u(λ)).

C. Then finally find λ* ∈ Λ satisfying

<λ* − λ, G(y(λ*))> ≥ 0   ∀ λ ∈ Λ.

Remark 3.1

The previous technique, which consists in permuting the determination of u and λ, is due to Begis-Glowinski [1]. Notice that problem (3.11) is not equivalent to problem (3.10); this can be shown on very simple counter-examples (cf. e.g. Yvon [8]).

Theorem 3.3

Under the assumptions of §1 and Theorem 3.2, there exists at least one solution λ* to problem (3.11).
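For fixed v, the alternation between steps A and C is an Uzawa-type iteration on (3.5). The sketch below uses toy finite-dimensional data of our own (G taken affine for simplicity, unlike the quadratic G of Example 3): an exact inner minimization in φ alternates with a projected ascent step in λ over the box Λ = [0,1]^m, and the computed λ* is checked against the optimality condition (3.9).

```python
import numpy as np

# Toy sketch of the dual/Uzawa idea behind (3.3)-(3.5) (hypothetical data):
# L(phi, lam) = 1/2 phi^T A phi - b^T phi + lam^T (C phi - d),
# with lam constrained to the box Lambda = [0, 1]^m.
rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # coercive form a
b = rng.standard_normal(n)                  # f + B v
C = rng.standard_normal((m, n))             # affine stand-in for G
d = rng.standard_normal(m)

lam = np.zeros(m)
rho = 0.5                                    # ascent step
for _ in range(500):
    phi = np.linalg.solve(A, b - C.T @ lam)             # inner Inf over phi
    lam = np.clip(lam + rho * (C @ phi - d), 0.0, 1.0)  # projected ascent

phi = np.linalg.solve(A, b - C.T @ lam)      # state at the final multiplier
g = C @ phi - d                              # G(y(lam*))

# Optimality condition (3.9): <lam* - mu, G(y(lam*))> >= 0 for all mu in Lambda.
for _ in range(100):
    mu = rng.uniform(0.0, 1.0, m)
    assert (lam - mu) @ g >= -1e-6
```

The box constraint makes each multiplier component saturate at 0 or 1 exactly where the sign of the corresponding component of G(y(λ*)) requires it.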

IV - REGULARIZATION

Another way to avoid the non-differentiability of the mapping (2.14) is to approach Problem P_1 (for instance) by a sequence of more regular problems which ensure the differentiability of the cost functions. We expose the method in the parabolic case (§2).

Fundamental hypothesis.

(4.1) There exists a family j_ε(·) of convex functionals on V, twice continuously differentiable, such that (1):

(4.2) j_ε(φ) + a(φ,φ) ≥ α‖φ‖²   ∀ φ ∈ V, α > 0 independent of ε,

(4.3) lim_{ε→0} j_ε(φ) = j(φ)   ∀ φ ∈ V,

(4.4) if y_ε → y weakly in L²(0,T;V) as ε → 0, then ∫₀ᵀ j(y(t)) dt ≤ lim inf_{ε→0} ∫₀ᵀ j_ε(y_ε(t)) dt,

(4.5) there exists a sequence z_ε bounded in V such that j_ε(z_ε) = 0.

Then we define

Problem P_0ε

Find y_ε solution of

(4.6) (dy_ε/dt, z−y_ε) + a(y_ε, z−y_ε) + j_ε(z) − j_ε(y_ε) ≥ (f+Bv, z−y_ε)   ∀ z ∈ V,

y_ε(0) = y_o .

Theorem 4.1

For each v ∈ U_ad there is a unique solution y_ε(v) to (4.6) such that y_ε(v) ∈ C([0,T];V). Furthermore y_ε(v) → y(v) in L²(0,T;V) as ε → 0, where y(v) is the solution of Problem P_0. With the notation of §2 we introduce

(4.7) J_ε(v) = ‖Cy_ε(v) − z_d‖²_𝓗 + ν‖v‖²_U

and the

Problem P_1ε

Find u_ε ∈ U_ad such that

(4.8) J_ε(u_ε) ≤ J_ε(v)   ∀ v ∈ U_ad .

Theorem 4.2

There exists at least one solution u_ε of (4.8) and, as ε → 0, there exists a subsequence {u_ε′} of {u_ε} such that u_ε′ → u in U, where u is a solution of Problem P_1.

(1) For simplicity we assume that D(j) = D(j_ε) = V (cf. notations of Th. 2.1).


Remark 4

As j_ε(·) is of class C², Problem P_0ε may be rewritten as

(dy_ε/dt, z) + a(y_ε, z) + (j_ε′(y_ε), z) = (f+Bv, z)   ∀ z ∈ V,

and Problem P_1ε is then an ordinary optimal control problem for parabolic systems.
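A concrete family j_ε can be built for the integrand of (2.10). The smoothing below is hypothetical (a C¹ blend over [h, h+ε], not the paper's j_ε), but it exhibits the convergence property (4.3): the regularized integrand tends to F uniformly as ε → 0.

```python
import numpy as np

# Hypothetical regularisation of the non-smooth integrand of (2.9)-(2.10):
# F(r) = 0 for r <= h and F(r) = (k/2)(r^2 - h^2) for r >= h.
h, k = 0.5, 2.0

def F(r):
    return np.where(r <= h, 0.0, 0.5 * k * (r**2 - h**2))

def F_eps(r, eps):
    # C^1 blend of the two branches over [h, h + eps], exact outside.
    t = np.clip((r - h) / eps, 0.0, 1.0)
    w = t * t * (3.0 - 2.0 * t)            # smoothstep weight, C^1 in r
    return w * 0.5 * k * (np.maximum(r, h)**2 - h**2)

r = np.linspace(0.0, 2.0, 401)
err = [np.max(np.abs(F_eps(r, e) - F(r))) for e in (0.5, 0.1, 0.01)]
assert err[0] > err[1] > err[2]            # uniform error decreases with eps
```

With such a C¹ (or, with a higher-order blend, C²) integrand, the inequality (4.6) indeed collapses to the equality of Remark 4.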

V - APPLICATION TO A BIOCHEMICAL EXAMPLE

The system represents an enzymatic reaction in a membrane with a semi-porous boundary. The problem is one-dimensional in space and the functions a(x,t), s(x,t), p(x,t) represent respectively the concentrations of activator, substrate and product in the membrane (1). In dimensionless variables the problem may be stated as

(5.1) ∂a/∂t − ∂²a/∂x² = 0 ;  a(0,t) = v_0(t), a(1,t) = v_1(t) ;  a(x,0) = 0,

(5.2) ∂s/∂t − ∂²s/∂x² + σ (a/(1+a))(s/(1+s)) = 0, σ > 0 ;  s(0,t) = s_0, s(1,t) = s_1, s_0, s_1 ∈ R ;  s(x,0) = 0,

(5.3) ∂p/∂t − ∂²p/∂x² − σ (a/(1+a))(s/(1+s)) = 0 ;  −(∂p/∂x)(0,t) + φ(p(0,t)) = 0, (∂p/∂x)(1,t) + φ(p(1,t)) = 0 ;  p(x,0) = 0,

where φ(r) is the real function given in (2.9). Control variables are v_0 and v_1:

U = L^∞(0,T) × L^∞(0,T)

and

U_ad = {v ∈ U | 0 ≤ v_i(t) ≤ M a.e. on (0,T), i = 1,2}.

The cost function is

(5.4) J(v) = ∫₀ᵀ |p(0,t) − z_0(t)|² dt + ∫₀ᵀ |p(1,t) − z_1(t)|² dt .

Theorem 5.1. The system (5.1)...(5.3) admits a unique solution a(v), s(v), p(v).

(1) For more details about enzymatic systems cf. Kernevez [5].


Theorem 5.2

There exists at least one u ∈ U_ad satisfying J(u) ≤ J(v) ∀ v ∈ U_ad, with J(v) given by (5.4).

VI - NUMERICAL RESULTS

Example 1

To give comparative numerical results on the two types of algorithms we have considered first the example of §II. Computations have been performed with Ω = ]0,1[, so that Γ = {0} ∪ {1}, and with z_d a function of time only, so that the cost function is

(6.1) J(v) = ∫₀ᵀ |(∂y/∂x)(1,t) − z_d(t)|² dt + ν‖v‖²_U

(the solution is symmetric by changing x into 1−x).

Figure 1

Represents the state corresponding to an "optimal control" computed by the duality method (§III). The threshold is given by h = 0.5.

Figure 2

Gives a comparison between regularization and duality on the same example. The suboptimality of duality is clear on this picture. Actually the "optimal" values of the cost are:

duality: 4 × 10⁻²
regularization: 0.36 × 10⁻²

Remark 6.1

In the previous examples ν in (6.1) has been taken near zero so that the optimal state may fit z_d as well as possible.

Example 2 (biochemical example of §V)

As in example 1 the problem has been considered completely symmetric, with a unique control v(t), so that the boundary conditions in (5.1) are

a(0,t) = a(1,t) = v(t).

Figure 3 and Figure 4

give the optimal control and the corresponding optimal value of the state computed by regularization. Figure 3 also represents the values of the optimal control computed for two values of the regularization parameter ε. The only active constraint is v ≥ 0 in this example.

Figure 1: Optimal state (Example 1) - desired state and "optimal state" computed by duality, time from 0 to 1. (Plot not reproducible.)

Figure 2: Optimal state (Example 1) - desired state and the states obtained by regularization and by duality. (Plot not reproducible.)

Figure 3: Optimal control (Example 2), computed for ε = 10⁻⁵ and ε = 10⁻¹. (Plot not reproducible.)

Figure 4: Optimal state (Example 2), ε = 10⁻⁵. (Plot not reproducible.)


VII - REFERENCES

(1) D. Begis, R. Glowinski, "Dual numerical methods for some variational problems...", in Techniques of Optimization, Academic Press (1972).

(2) H. Brezis, "Problèmes unilatéraux", Journal de Mathématiques pures et appliquées 51 (1972).

(3) J. Cea, R. Glowinski, J.C. Nedelec, "Minimisation de fonctionnelles non différentiables", Proceedings of the Dundee Numerical Analysis Symposium (1972).

(4) G. Duvaut, J.L. Lions, "Les inéquations en mécanique et en physique", Dunod, Paris (1972).

(5) J.P. Kernevez, "Evolution et contrôle des systèmes bio-mathématiques", Thèse, Paris (1972).

(6) J.L. Lions, G. Stampacchia, "Variational Inequalities", Comm. on Pure and Applied Math., vol. XX, pp. 493-519 (1967).

(7) F. Mignot, Séminaire Lions-Brezis, Paris, 1972-1973.

(8) J.P. Yvon, Thèse, Paris, 1973.


ON DETERMINING THE SUBMANIFOLDS OF STATE SPACE WHERE THE OPTIMAL VALUE SURFACE HAS AN INFINITE DERIVATIVE

Harold L. Stalford

Radar Analysis Staff, Radar Division
Naval Research Laboratory, Washington, D.C. 20375

ABSTRACT

The problem of obtaining the optimal value surface of an optimal

control process is investigated. In practice, the optimal cost

surface often possesses an infinite derivative at points of certain

submanifolds of the state space. A necessary condition is derived

with which the equations of such submanifolds can be established with-

out solving first the entire optimal control problem. The necessity

of the condition is proved in a theorem, but only for submanifolds

having one dimension less than the dimension of the state space.

Three examples are provided to illustrate the utility of the condition.

I. INTRODUCTION

An objective in solving optimal control problems is to calculate

the optimal cost of transfer from any initial state to the terminal

set. Calculating and plotting the optimal cost values above the state

space produces a surface which we shall call the optimal value surface.

An optimal value surface is not necessarily smooth at every point. In

general, the surface consists of points belonging to three distinct

classes. First~ we have the set of points where the surface is smooth.

Second, there are those points where all approaching tangents to the

surface remain bounded but where the surface is not smooth. The third

class consists of those points where at least one approaching tangent

cannot be bounded. We investigate this latter class.

Our desire is to determine where the optimal value surface has an

infinite derivative without solving first the entire optimal control

problem. In practice, an infinite derivative will occur along smooth

submanifolds of the state space. This paper presents a necessary con-

dition for establishing the equations of such submanifolds.


We shall now define the family of optimal control processes for

which the ensuing theory holds.

II. A FAMILY OF OPTIMAL CONTROL PROCESSES

The family of optimal control processes under investigation have

their dynamical behavior governed by systems of ordinary differential

equations and have their evolution of state described by the motion of

a point in n-dimensional Euclidean space E n. The seven basic elements

that are needed to define such optimal control processes are four

functions (f, U, fo' and go ) , two sets (X and ®) and a function space

Q. These elements are described subsequently. The dynamical behavior

of an optimal control process, hereafter, is modeled by means of the

state velocity function f in the state equation

(1) ~ ( t ) = f ( m ( t ) , u ( t ) ) , r e ( t ) E E n, u ( t ) E E m

where f: E^n × E^m → E^n is a continuous function. The evolution of state is described by a point moving in the state space X, an open subset of E^n. The terminal set Θ is a closed set contained in the closure of X. For non-autonomous systems (that is, f an explicit function of t), one component of x is time t itself.

The controller of the process is equipped with the two elements Ω and U. The function space Ω is the space of all Lebesgue measurable functions of time t on bounded intervals whose values have range in E^m. Constraints on the control functions in Ω are given implicitly by the set-valued function

(2) U : X → set of all compact subsets of E^m,

where U is a continuous set-valued function. For each state x, the set

U(x) is precisely the set of control values available to the controller

at the state x.

A solution of the differential equation (1) for some measurable control function and given initial conditions is called a trajectory. A trajectory ξ : [t_o, t_f] → E^n is said to be admissible if it lies entirely in the state space X for all times t contained in the interval [t_o, t_f). An admissible trajectory is said to be terminating if ξ(t_f) is contained in the terminal set Θ. The time t_f is called the terminating or final time for a terminating admissible trajectory.

Page 290: 5th Conference on Optimization Techniques Part I

278

The time t_f belongs to the interval [t_o, ∞); t_f does not necessarily have to be the same terminating time for distinct trajectories unless it is constrained to be fixed by the terminal set Θ.

A control function u:[t_o, t_f] → E^m is said to be admissible if it has at least one corresponding admissible trajectory ξ:[t_o, t_f) → X such that u(t) ∈ U(ξ(t)) for all t contained in [t_o, t_f). Here, the trajectory ξ corresponds to the control function u if

ξ(t) = ξ(t_o) + ∫_{t_o}^{t} f(ξ(τ), u(τ)) dτ

for all t contained in [t_o, t_f]. Note that, since f is required to be only continuous, solutions to the differential equation (1) are not necessarily unique for each control function u.

Let x_o be contained in X. Let C(x_o) denote the set of all admissible control functions u:[t_o, t_f] → E^m having at least one terminating admissible trajectory emanating from x_o. For the control function u contained in C(x_o), let T(x_o; u) denote the set of all terminating admissible trajectories ξ emanating from x_o, corresponding to the control function u, and satisfying u(t) ∈ U(ξ(t)) for all t contained in the domain of u.

The state is to be transferred from x_o contained in X to the terminal set Θ; the initial time t_o is fixed, while the final time t_f is not necessarily specified. The performance criterion

(3) J(x_o, ξ, u) = g_o(ξ(t_f)) + ∫_{t_o}^{t_f} f_o(ξ(τ), u(τ)) dτ

is to be minimized, where the function g_o is a real-valued continuously differentiable function defined on a neighborhood of the terminal set Θ, the function f_o is a real-valued bounded continuous function with domain E^n × E^m, the control function u belongs to C(x_o) and the trajectory ξ is a member of T(x_o; u). The real number J(x_o, ξ, u) denotes the value of performance associated with the transfer.

In summary, a member of the family of optimal control processes is represented by the septuple (f, U, f_o, g_o, X, Θ, Ω) where f, U, and f_o are continuous functions, g_o is continuously differentiable, Ω is the space of Lebesgue measurable controls, X is open in E^n, and Θ is a closed set contained in the closure of X. Let F denote this family of optimal control processes.

III. OPTIMAL VALUE FUNCTION

Let x_o be contained in the state space X. Let the control function u* be contained in C(x_o) and let the trajectory ξ* be contained in T(x_o; u*). The pair (u*, ξ*) is said to be optimal at x_o if, for all control functions u contained in C(x_o) and for all trajectories ξ contained in T(x_o; u), the following inequality is satisfied:

J(x_o, ξ*, u*) ≤ J(x_o, ξ, u)

If the pair (u*, ξ*) is optimal at x_o, then the value J(x_o, ξ*, u*) is arbitrarily defined to be V(x_o). If an optimal pair (u*, ξ*) exists for every x_o contained in the state space X then we have a function from X into the real numbers:

V : X → E¹

Thus, for a state x, the value V(x) denotes the optimal transfer cost from x to the terminal set. The function V is called the optimal value function. We suppose that V is well defined on the entire state space X. A plot of the optimal value function above the state space is called the optimal value surface.

Some definitions are needed in order to describe an assumption

placed on the optimal value function V.

Definition 1. A decomposition D of the state space X is a denumerable collection of disjoint subsets whose union is X. This is usually written as D = {X_o, X_j : j ∈ J} where J is a denumerable index set for the members of D other than X_o.

Definition 2. A regular decomposition D of the state space X is a decomposition D = {X_o, X_j : j ∈ J} such that X_o is open and dense in X and such that, for each j ∈ J, X_j is a continuously differentiable submanifold of E^n.

Since X_o is open in X, it follows that X_o is a continuously differentiable submanifold of dimension n of E^n.

Definition 3. Let B be a subset of E^n. A mapping F : B → E¹ is said to be continuously differentiable on B if there is an open set W containing B such that F may be extended to a function which is continuously differentiable on W.

We are now in a position to describe the assumption. It is an

unresolved conjecture in optimal control theory for the family of

optimal control processes considered herein. It is satisfied by

hypothetically constructed examples as well as optimal control models

of physical processes.

Assumption 1. There exists a regular decomposition D of the state

space X such that the optimal value function V is continuously dif-

ferentiable on the members of D.

The optimal value function of most control processes is continuous. It is, however, discontinuous in some control problems that model processes in nature. For example, Vincent [5] presents an optimal control model of an agricultural problem where the objective is to control insects that eat or destroy crops. Therein, the optimal value surface is shown to have a tear or split extending along a smooth submanifold of the state space. Since the continuity assumption can fail to be satisfied, we introduce another assumption to take its place. Incidentally, it is readily satisfied if the optimal value function is continuous.

Suppose Assumption 1 is met and let X_j be a member of the regular decomposition D. Let α be a point on the submanifold X_j.

Assumption 2. There exists an open neighborhood Ω of α in the topology of X_j and an open subset S of X_o whose closure contains Ω such that the optimal value function V, when restricted to S, has a continuous extension V_1 from S to a closed set C of X that contains Ω and such that V_1 is continuously differentiable on Ω.

Here, a function V_1 : C → E^1 is said to be a continuous extension of V from S to C if V_1 is continuous on C and V_1(x) = V(x) for all x contained in S.

Consider an optimal value surface which is continuous and satis-

fies Assumption I. Take a pair of scissors and make a number of

smooth cuts in the surface. Then deform the surface in a continu-

ously differentiable manner. The resulting surface is of the general

type that satisfies Assumption 2. It is introduced so that the en-

suing theory encompasses problems in which the optimal value surfaces

have tears running along smooth submanifolds of the state space.


IV. THE FUNDAMENTAL PARTIAL DIFFERENTIAL

EQUATION OF DYNAMIC PROGRAMMING

Let (f, U, f_o, g_o, X, Θ, Ψ) be an optimal control process in F such that its optimal value function V satisfies Assumption 1. Let D = {X_o, X_j : j ∈ J} be a regular decomposition with which V satisfies Assumption 1. It is shown in Stalford [4] that the fundamental partial differential equation of dynamic programming must be met on the open and dense member X_o of the decomposition D of X. This equation is written as

(4)  0 = MINIMUM over v ∈ U(x) of { f_o(x,v) + Grad V(x) · f(x,v) }

and holds for all x ∈ X_o.
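As a concrete illustration, Equation (4) can be checked numerically on a toy problem where V is known in closed form. The scalar dynamics, control set, and control grid below are illustrative assumptions, not taken from the text.

```python
# Toy check of Equation (4): scalar dynamics dx/dt = v, U(x) = [-1, 1],
# f_o = 1, terminal set {0}. The optimal value function is V(x) = |x|,
# so for x != 0 the bracketed minimum is 1 - |V'(x)| = 0.
def hjb_residual(x, n_grid=2001):
    vprime = 1.0 if x > 0 else -1.0           # Grad V(x) for V(x) = |x|, x != 0
    vs = [-1.0 + 2.0 * k / (n_grid - 1) for k in range(n_grid)]
    return min(1.0 + vprime * v for v in vs)  # minimum over the control grid

for x in (-2.0, -0.3, 0.7, 5.0):
    assert abs(hjb_residual(x)) < 1e-12
```

The residual vanishes at every state away from the terminal set, which is precisely the content of (4) on X_o.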

This equation will be utilized subsequently in determining points

where the optimal value function is not smooth.

V. PROBLEM STATEMENT

Let X_j be some (n-1)-dimensional member of the regular decomposition D. For convenience, let M denote the continuously differentiable submanifold X_j. Let the state α denote a point on M and let N(α) denote a unit row vector that is normal to M at α. Finally, let N(α)^T denote the transpose of N(α).

Definition 4. The optimal value function V has an infinite derivative at α in the normal direction N(α) if the limit

limit as h → 0+ of   Grad V(α + h·N(α)) · N(α)^T

cannot be bounded. The notation h → 0+ denotes that h takes on only positive values.
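A toy surface (an assumption made here for illustration, not taken from the paper) makes Definition 4 concrete: V(x_1, x_2) = sqrt(x_2) for x_2 > 0, near the submanifold M = {x_2 = 0} with normal N = (0, 1), has Grad V(α + h·N(α)) · N(α)^T = 1/(2√h), which is unbounded as h → 0+.

```python
import math

# Definition 4 on a toy surface: V(x1, x2) = sqrt(x2), M = {x2 = 0},
# normal direction N = (0, 1). The normal derivative at alpha + h*N is
# dV/dx2 evaluated at x2 = h, i.e. 0.5 / sqrt(h), unbounded as h -> 0+.
def grad_dot_normal(h):
    return 0.5 / math.sqrt(h)

vals = [grad_dot_normal(h) for h in (1e-2, 1e-4, 1e-6)]
assert vals[0] < vals[1] < vals[2]   # grows without bound as h -> 0+
```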

Suppose that the optimal value function V has at each point of M an infinite derivative in at least one of the normal directions to M. Recall that an (n-1)-dimensional submanifold of E^n has two normal directions at each of its points. Vectorially, one is of course the negative of the other.

Problem. Determine the equation of the submanifold M without solving


first the optimal control problem for optimal control feedback

policies.

VI. ORTHOGONAL TRANSFORMATION OF COORDINATES

Let T(α) denote a matrix of tangential orthogonal vectors to M at α. We desire to transform the x coordinates linearly into new coordinates such that the new coordinate system coincides with the normal vector N(α) and the tangential vectors T(α). Such a transformation is given by the matrix equation

(5)  y = K(α) · x

where K(α) is the matrix composed of the normal vector N(α) and the tangential vectors T(α), that is,

(6)  K(α) = the matrix whose rows are the normal vector N(α) and the rows of T(α).

The normal vector N(α) and the tangential vectors T(α) can be chosen such that the equation

(7)  K(α) · K(α)^T = Identity Matrix

holds, where K(α)^T is the transpose of K(α). Equation (7) implies that K(α)^T is equal to the inverse of K(α). Thus Equation (5) is an orthogonal transformation of coordinates.
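The orthogonality condition (7) can be checked directly for a planar instance; the matrix below, with rows (cos θ, sin θ) and (-sin θ, cos θ), is an illustrative example of such a K.

```python
import math

# Check Equation (7), K * K^T = I, for a planar K whose rows are a
# tangential vector (cos t, sin t) and a normal vector (-sin t, cos t).
def K(t):
    return [[math.cos(t), math.sin(t)],
            [-math.sin(t), math.cos(t)]]

def times_transpose(m):
    # computes m * m^T for a 2x2 matrix m
    return [[sum(m[i][k] * m[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

for t in (0.0, 0.3, 1.2, -2.5):
    p = times_transpose(K(t))
    assert all(abs(p[i][j] - (1.0 if i == j else 0.0)) < 1e-12
               for i in range(2) for j in range(2))
```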

Equation (4) can be rewritten as

(8)  0 = MINIMUM over v ∈ U(x) of { f_o(x,v) + Grad V(x) · K(α)^T · K(α) · f(x,v) }

where x ∈ X_o. Substituting Equation (6) into Equation (8) and deleting zero terms we obtain

(9)  0 = MINIMUM over v ∈ U(x) of { f_o(x,v) + [Grad V(x) · T(α)^T] · [T(α) · f(x,v)] + [Grad V(x) · N(α)^T] · [N(α) · f(x,v)] }

for all x ∈ X_o.


VII. THEOREM

Let {x_k} be any sequence in X_o that converges to the state α. The following observation can be verified by invoking Assumptions 1 and 2.

Observation 1. In the limit as x_k converges to α, the (1 × (n-1)) matrix [Grad V(x_k) · T(α)^T] is bounded.

This observation states that the slopes of the optimal value surface in the tangent directions to M at α remain finite as α is approached. Lemma 8.2.1 of Stalford [3, p. 84] asserts this observation for the case that the optimal value function is continuous. When this is not the case, Assumption 2 can be used to amend the lemma.

Let C(α) be the convex closure of the set

{ N(α) · f(α,v) : v ∈ U(α) }.

Note that C(α) is composed of scalars or one-dimensional vectors since N(α) is a (1 × n) vector and f(α,v) is an (n × 1) vector. Below, when we speak of the zero vector in C(α) we, in essence, mean the real number zero.

Theorem 1. If α is a point of the submanifold M where the optimal value function has an infinite derivative in the normal direction N(α), then it is necessary that the zero vector belongs to the boundary of C(α).

Proof. We prove the theorem by showing that a contradiction arises if the zero vector belongs either to the interior or the exterior of C(α).

Suppose that the zero vector belongs to the interior of C(α). Let δ be some positive number such that δ and -δ are contained in the interior of C(α). Let {x_k} be the sequence defined by

(10)  x_k = α + h_k · N(α)


where the sequence {h_k} is positive and converges to zero as k goes to infinity. Since the functions f and U are continuous, there exists an integer K such that if k ≥ K then the real numbers δ and -δ are contained in the convex closure of the set

{ N(α) · f(x_k, v) : v ∈ U(x_k) }.

In particular, for k ≥ K there exist controls v_1(x_k) and v_2(x_k) contained in the control set U(x_k) such that

(11)  N(α) · f(x_k, v_1(x_k)) < -δ

(12)  N(α) · f(x_k, v_2(x_k)) > δ.

Let the sequence {R(x_k)} of real numbers be defined by

(13)  R(x_k) = Grad V(x_k) · N(α)^T

for all x_k contained in {x_k}. Similarly, let the sequences {S_1(x_k)} and {S_2(x_k)} be defined by

(14)  S_1(x_k) = N(α) · f(x_k, v_1(x_k))

(15)  S_2(x_k) = N(α) · f(x_k, v_2(x_k)),

respectively. In view of Definition 4, Equations (10) and (13) imply that the sequence {R(x_k)} cannot be bounded. Thus, invoking Equations (11) and (12), we see that one of the sequences {S_1(x_k)·R(x_k)} and {S_2(x_k)·R(x_k)} cannot be bounded from below. This implies that the minimum of the expression in the curly brackets of Equation (9) cannot be zero for x_k sufficiently close to α. This is because the first two terms in that expression are bounded for all x_k sufficiently close to α. The first is bounded since the functions f_o and U are continuous and since U(x) is compact for each x contained in X. The second is bounded since, in addition, the function f is continuous and Observation 1 holds. We have already remarked that the third term takes on large negative values for x_k sufficiently close to α. This contradiction of Equation (9) proves the falsehood of the zero vector belonging to the interior of C(α). Next, we show that the zero vector


cannot belong to the exterior of C(α).

Suppose that the zero vector belongs to the exterior of C(α). Since the function f is continuous and U(α) is compact, the set

{ N(α) · f(α, v) : v ∈ U(α) }

is a compact subset of the real numbers. Therefore, the set C(α) is a compact interval [b, c] where b and c represent the endpoints. Since the zero vector is in the exterior of C(α), either b and c are both positive or they are both negative. Thus the third term in Equation (9) takes on only large positive numbers or large negative numbers for x_k sufficiently close to α. As before, the first two terms in Equation (9) are bounded for all x_k sufficiently close to α. Therefore, we have again contradicted the validity of Equation (9). In conclusion, zero must belong to the boundary of C(α) and our theorem is proved.

VIII. APPLICATIONS

The necessary condition of the theorem is applied in this section

to three examples. In each example the necessary condition determines

a family of equations for the sought submanifold M. The family of

equations corresponds to a family of terminal sets. Thus, the equa-

tion selected is the one that has an intersection with the terminal

set of the example being studied. Employing the knowledge that tra-

jectories enter the terminal set rather than leave it is also a factor

in determining the submanifold M. As in previous sections the sub-

manifold M designates the points of the state space where the optimal

value function has an infinite derivative.

Example 1. An antique example to illustrate the theory is given by

the one-dimensional time-optimal regulator process of Leitmann

[I, p. 48] and Pontryagin [2, p. 23]. In this example, a rocket

travels in a straight line toward a terminal set. With the motion of

the rocket controlled by a thrust program, the objective is to bring

the rocket to rest at the terminal set and render the transfer time a

minimum.

The equation of motion of a point mass moving horizontally above

a flat earth states that the acceleration is equal to the thrust value.


In state equation form, we have

(16)  dx_1/dt = x_2

      dx_2/dt = v,   -1 ≤ v ≤ 1

where x_1 is the position of the rocket relative to the terminal set, x_2 is the speed of the rocket and the control value v represents the thrust. The maximum thrust is normalized to unity and the rocket engines are reversible. Here, the terminal set is the single point (x_1, x_2) = (0, 0). The state space X is the two-dimensional Euclidean space E^2. In the performance criterion, the function g_o is identically zero and the function f_o is identically one since we are minimizing the transfer time from an initial state to the origin. Finally, note that U(x_1, x_2) = [-1, 1] for all (x_1, x_2) contained in X.

Without solving this example for optimal control feedback policies, we desire to find those points of the state space where the optimal value surface has an infinite derivative. Our approach is to apply the necessary condition of the theorem. More specifically, we select an arbitrary point (α_1, α_2) in the state space and check the necessary condition of the theorem to see if it is possible for the optimal value function to have an infinite derivative at that point. This is accomplished by employing an orthogonal change of coordinates from the old coordinates (x_1, x_2) to new coordinates (y_1, y_2) such that the component y_2 serves as the normal direction in the Theorem. Therefore, we obtain a normal direction at those points where an infinite derivative may possibly exist. And this is sufficient to determine the equation of the submanifold passing through the points sought for.

Let (α_1, α_2) be contained in X. Consider the orthogonal transformation

(17)  [ y_1 ]   [  cos(θ)  sin(θ) ] [ x_1 ]
      [ y_2 ] = [ -sin(θ)  cos(θ) ] [ x_2 ]

where θ is an angle yet unspecified. Suppose that (α_1, α_2) lies on a submanifold M having at (α_1, α_2) the normal vector

N(α_1, α_2) = [ -sin(θ)  cos(θ) ]

such that the optimal value function has an infinite derivative at all points of M. The product N(α_1, α_2) · f(α_1, α_2, v) reduces to

N(α_1, α_2) · f(α_1, α_2, v) = -α_2 sin(θ) + v cos(θ)

where -1 ≤ v ≤ 1. According to the theorem we seek an angle θ = θ(α_1, α_2) such that zero is contained in the boundary of the set

(18)  { -α_2 sin(θ) + v cos(θ) : -1 ≤ v ≤ 1 }.

If α_2 equals zero then this condition is met if and only if θ(α_1, 0) = ±π/2. For a non-zero α_2, zero is contained in the boundary of the Set (18) if θ satisfies the equation

(19)  tan(θ) = ±1/α_2.

The angle θ is the angle through which the old coordinates (x_1, x_2) are rotated so that the new coordinate y_2 is normal to the submanifold M at the point (α_1, α_2). The new coordinate y_1 is, therefore, tangent to M at (α_1, α_2). The submanifold M is a curve which, locally, can be expressed as a functional relationship between α_1 and α_2. If the equation of M is given by α_2 as a function of α_1, then the slope of M is given by

(20)  dα_2/dα_1 = tan(θ).

Substituting Equation (19) into Equation (20) and integrating, we obtain the solutions

(21)  ½(α_2)² = -α_1 + c_1

(22)  ½(α_2)² = α_1 + c_2

for the equation of M. Here, c_1 and c_2 are constants of integration.
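The integration step can be verified numerically: the branch ½α_2² = -α_1 + c_1 of Equation (21), taken here with the illustrative choice c_1 = 0 and α_2 > 0, has slope dα_2/dα_1 = -1/α_2, one of the two values permitted by Equation (19).

```python
# Verify that a branch of Equation (21), (1/2)*a2**2 = -a1 (c1 = 0, a2 > 0),
# has slope da2/da1 = -1/a2, matching tan(theta) = -1/a2 from Equation (19).
def a2(a1):
    return (-2.0 * a1) ** 0.5        # upper branch, defined for a1 <= 0

for a1 in (-0.5, -1.0, -2.0):
    fd = (a2(a1 + 1e-7) - a2(a1 - 1e-7)) / 2e-7   # central difference slope
    assert abs(fd + 1.0 / a2(a1)) < 1e-5
```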

We derived these equations by utilizing the necessary condition of the theorem. Thus, if there are submanifolds of the state space on which the optimal value function has an infinite derivative then they must satisfy these equations. And, indeed, for terminal conditions of


α_1 = α_2 = 0 together with invoking that trajectories enter the terminal set rather than leave it, we obtain the submanifold M. That is, the terminal conditions of α_1 = α_2 = 0 imply that c_1 = c_2 = 0. Since trajectories enter the terminal set (0,0) rather than leave it, the equations for the sought submanifold are reduced to

(23)  ½(α_2)² = -α_1,   α_2 > 0

(24)  ½(α_2)² = α_1,   α_2 ≤ 0.

The optimal value function for this example is given in Stalford [4]. Its derivative is easily obtained since therein the optimal value function is given in analytical form. The derivative is infinite only at the points defined by Equations (23) and (24).
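Since the optimal value function of this example is classical, the blow-up predicted by Equations (23) and (24) can be exhibited numerically. The closed-form minimum time below is the standard bang-bang solution, stated here as an assumption (it is not derived in the text).

```python
import math

# Minimum time to the origin for dx1/dt = x2, dx2/dt = v, |v| <= 1
# (standard bang-bang formula for Example 1, assumed here).
def min_time(x1, x2):
    s = -0.5 * x2 * abs(x2)                     # switching curve x1 = -x2|x2|/2
    if x1 > s:                                  # right of the curve: v = -1 first
        return x2 + 2.0 * math.sqrt(x1 + 0.5 * x2 * x2)
    if x1 < s:                                  # left of the curve: v = +1 first
        return -x2 + 2.0 * math.sqrt(-x1 + 0.5 * x2 * x2)
    return abs(x2)                              # on the curve itself

assert abs(min_time(1.0, 0.0) - 2.0) < 1e-12    # sanity check

# Approaching (-1/2, 1) on branch (23) from the right region, the derivative
# dT/dx1 = 1 / sqrt(x1 + x2**2 / 2) grows without bound, as claimed.
slopes = [1.0 / math.sqrt((-0.5 + h) + 0.5) for h in (1e-2, 1e-4, 1e-6)]
assert slopes[0] < slopes[1] < slopes[2]
```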

Example 2. As a second example, consider the rectilinear motion of a rocket operating at constant power. In Leitmann [1, p. 29], the equations of motion are given as

(25)  dx_1/dt = v

(26)  dx_2/dt = v²,   -1 ≤ v ≤ 1.

The terminal set is the origin (0,0). The objective is to transfer the rocket from an initial state to the terminal set and render the transfer time a minimum. Thus, the function g_o is identically zero while f_o is identically one. The state space X is the half space below the x_1-axis. Consider the orthogonal transformation given in Equation (17) and let the y_2 component be the normal direction to M. According to the theorem, we seek an angle θ such that zero is contained in the boundary of the set

(27)  { -v sin(θ) + v² cos(θ) : -1 ≤ v ≤ 1 }.

This is only possible if θ = 0 at each point of M. Integrating Equation (20) with θ = 0, we obtain

(28)  α_2 = c

where c is a constant of integration. Equation (28) implies that the submanifold M is a straight line parallel to the x_1-axis. Applying


the terminal conditions of α_1 = α_2 = 0, we determine the integration constant to be zero. The submanifold M coincides with the x_1-axis.

In this example, it is not possible to reach the terminal set (0,0) from any point on M with the exception of (0,0). This is because the x_2-coordinate is non-increasing only for the control value v = 0. Interestingly, the state space X does not contain M. This

example therefore lies outside the scope of the theorem. However, as

a point on M is approached normal to M, the derivative of the optimal

value function does indeed become unbounded.

Example 3. Vincent [5] presents optimal control models of several pest management programs in agriculture where insecticides and predators are utilized to minimize the damage done to crops by insects. In particular, a model is given of a program where an insecticide spray nonharmful to the predators is used and where no biological control is introduced to change the natural population growth of the predators. The state equations of the model are

(29)  dx_1/dt = x_1(1 - x_2) - v x_1,   0 ≤ v ≤ 1

(30)  dx_2/dt = x_2(x_1 - 1).

Here, x_1 represents the ratio of the actual number of insect pests and a tolerable level of such pests. The state x_2 is the ratio of the actual number of predators and a desired level of them. The control value v corresponds to the amount of insecticide used. It is desired to transfer the state of the system to the equilibrium point (1,1) and minimize the cost criterion

∫ from 0 to t_f of (5x_1 + 5v) dt.

This integral models the cost associated with crop loss and the in-

secticide used.

As before, we consider the orthogonal transformation given in Equation (17) and let the y_2-component be normal to M. If (α_1, α_2) is a point of M, an angle θ = θ(α_1, α_2) is to be determined such that zero is contained in the boundary of the set

(31)  { -[α_1(1 - α_2) - v α_1] sin(θ) + α_2(α_1 - 1) cos(θ) : 0 ≤ v ≤ 1 }.

An inspection of Set (31) reveals that θ must be a solution of one of


the equations

(32)  tan(θ) = α_2(α_1 - 1) / [α_1(1 - α_2)]

(33)  tan(θ) = (1 - α_1)/α_1.

Substituting Equation (32) into Equation (20), integrating the result and applying the terminal conditions of α_1 = α_2 = 1, we obtain

(34)  ln(α_2) + ln(α_1) = α_1 - 1 + α_2 - 1.

The inequality

(35)  ln x < x - 1,   x ∈ (0, 1) ∪ (1, ∞)

implies that (α_1, α_2) = (1, 1) is the only point satisfying Equation (34).

Substituting Equation (33) into Equation (20), integrating the result and applying the terminal conditions of α_1 = α_2 = 1, we have the equation

(36)  α_2 = ln(α_1) - α_1 + 2.

Equation (33) resulted from the Set (31) for the control value v = 1. Implementation of this control value in Equation (29) results in the state x_1 decreasing in time. Therefore, since trajectories enter the terminal set (1,1) rather than leave it, Equation (36) with the constraint α_1 ≥ 1 is the derived equation of the submanifold M. And this is indeed the set of states where the optimal value function has an infinite derivative. As pointed out in Vincent [5], the optimal value function is, in addition, discontinuous across the submanifold M. But the optimal value function does satisfy Assumption 2.
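The derived submanifold (36) is easy to check numerically: it passes through the terminal point (1,1), and its slope agrees with tan(θ) = (1 - α_1)/α_1 from Equation (33).

```python
import math

# Equation (36): the submanifold M as a2 = ln(a1) - a1 + 2, a1 >= 1.
def curve(a1):
    return math.log(a1) - a1 + 2.0

# The curve passes through the terminal/equilibrium point (1, 1).
assert abs(curve(1.0) - 1.0) < 1e-12

# Its slope matches tan(theta) = (1 - a1)/a1 from Equation (33).
for a1 in (1.5, 2.0, 5.0):
    fd = (curve(a1 + 1e-7) - curve(a1 - 1e-7)) / 2e-7
    assert abs(fd - (1.0 - a1) / a1) < 1e-5
```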


REFERENCES

[i]. Leitmann, G., AN INTRODUCTION TO OPTIMAL CONTROL, McGraw Hill,

(1966).

[2]. Pontryagin, L. S., et al., THE MATHEMATICAL THEORY OF OPTIMAL PROCESSES, Interscience Publishers, New York, (1962).

[3]. Stalford, H., "Sufficiency Conditions in Optimal Control and

Differential Games," ORC 70-13, Operations Research Center,

University of California, Berkeley, California, 1970.

[4]. Stalford, H., "An Equivalency Optimality Theorem Over a Family of Optimal Control Processes," Proc. of the 1972 Int. Conf. on Cybernetics and Society, IEEE Systems, Man and Cybernetics Society, October 9-12, 1972, Washington, D. C.

[5] . Vincent, T. L., "Pest Management Programs Via Optimal Control

Theory," 13th Joint Automatic Control Conference of the American

Automatic Control Council, Stanford University, Stanford,

California, August 16-18, 1972.


CONTROL OF AFFINE SYSTEMS WITH MEMORY

M.C. Delfour*

(Université de Montréal)

and

S.K. Mitter**

(Massachusetts Institute of Technology)

1. INTRODUCTION

In this paper we present a number of results related to control and esti-

mation problems for affine systems with memory. The systems we consider are

typically described by linear functional differential equations or Volterra

integro-differential equations.

Our results may be divided into four categories:

(i) State-space description of systems with memory.

(ii) Feedback solution of the finite-time quadratic cost problem.

(iii) Feedback solution of the infinite-time quadratic cost problem.

(iv) Optimal linear filtering.

The main difficulty in the study of the systems considered in this paper

The work of the first author was supported in part by National Research Council of Canada Grant A-8730 at the Centre de Recherches Mathématiques, Université de Montréal, Montréal 101, Quebec, Canada.

The work of the second author was supported by AFOSR Grant 72-2273, NSF Grant GK-25781 and NASA Grant NGL-22-009-124 all at the Electronic Systems Laboratory, M.I.T., Cambridge, Mass. 02139.


is that the state spaces involved are infinite dimensional and that the equations

describing the evolution of the state involve unbounded operators. Once an ap-

propriate function space is chosen for the state space a fairly complete theory

for the control and estimation problems for such systems can be given.

2. Affine Systems with Memory

In this paper we shall consider two typical systems: one with a fixed

memory and one with a time varying memory.

Let X be the evolution space and U be the control space. We assume that

X and U are finite-dimensional Euclidean spaces.

2.1. Constant Memory

Given an integer N ≥ 1 and real numbers -a = θ_N < ··· < θ_1 < θ_0 = 0 and T > 0, let the system with constant memory be described by:

(1)  dx/dt(t) = A_00(t)x(t) + Σ_{i=1}^{N} A_i(t)x(t+θ_i) + ∫_{-a}^{0} A_01(t,θ)x(t+θ)dθ + f(t) + B(t)v(t)   in [0,T]

     x(θ) = h(θ),   -a ≤ θ ≤ 0,

where A_00, A_i, A_01 and B are strongly measurable and bounded, f ∈ L²(0,T;X) and v ∈ L²(0,T;U).

We first need to choose an appropriate space of initial data and an appro-

priate state space. It was shown in DELFOUR-MITTER [1], [2], that this can


indeed be done provided that (1) is rewritten in the following form:

(2)  dx/dt(t) = A_00(t)x(t) + Σ_{i=1}^{N} A_i(t) · { x(t+θ_i) if t+θ_i > 0; h^1(t+θ_i) otherwise }
              + ∫_{-a}^{0} A_01(t,θ) · { x(t+θ) if t+θ > 0; h^1(t+θ) otherwise } dθ
              + f(t) + B(t)v(t),   in [0,T],

     x(0) = h^0.

We can pick initial data h = (h^0, h^1) in the product space X × L²(-a,0;X), where the solution of (2) is x : [0,T] → X.

We can now define the state at time t as an element x̃(t) of X × L²(-a,0;X) as follows:

(3)  x̃(t)^0 = x(t)

     x̃(t)^1(θ) = { x(t+θ) if t+θ ≥ 0; h^1(t+θ) otherwise }.

For additional details see DELFOUR-MITTER [1], [2]. System (1) has a memory of fixed duration [-a,0].
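The device in (2), serving delayed arguments from the initial function h^1 until t + θ_i > 0, is exactly what a numerical scheme does with a stored history. A minimal forward-Euler sketch for a scalar instance with a single delay follows; the coefficients and history are illustrative assumptions, not from the paper.

```python
# Forward-Euler integration of a scalar instance of (2):
#   dx/dt = a00*x(t) + a1*x(t - a),  x(0) = h0,  x(theta) = h1(theta) on [-a, 0).
def simulate(a00=-1.0, a1=0.5, a=1.0, h0=1.0, h1=lambda th: 1.0, T=2.0, dt=1e-3):
    n_hist = int(round(a / dt))
    # samples of x on [-a, 0]: the history h1, then the initial value h0 at t = 0
    xs = [h1(-a + i * dt) for i in range(n_hist)] + [h0]
    for _ in range(int(round(T / dt))):
        x_now = xs[-1]
        x_delayed = xs[-1 - n_hist]        # x(t - a), read from stored samples
        xs.append(x_now + dt * (a00 * x_now + a1 * x_delayed))
    return xs[-1]

x_T = simulate()
assert 0.0 < x_T < 1.0      # decaying but still positive at T = 2
```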

2.2. Time Varying Memory

Consider the system


(4)  dx/dt(t) = A_0(t)x(t) + ∫_{0}^{t} A_1(t,r)x(r)dr + f(t) + B(t)v(t),   in [0,T]

     x(0) = h^0 in X,

where A_0, A_1 and B are strongly measurable and bounded. If we change the variable

r to θ = r - t and define

(5)  A_00(t) = A_0(t)

     A_01(t,θ) = { A_1(t,t+θ) for -t ≤ θ ≤ 0; 0 for -∞ < θ < -t },

equation (4) can be rewritten in the form

(6)  dx/dt(t) = A_00(t)x(t) + ∫_{-∞}^{0} A_01(t,θ) · { x(t+θ) if t+θ > 0; h^1(t+θ) otherwise } dθ + f(t) + B(t)v(t)   in [0,T]

     x(0) = h^0 in X,   h^1 in L²(-∞,0;X),

with h^1 = 0. In this form equation (6) is similar to equation (2). However, here we consider the system to have a memory of infinite duration in order to accommodate the growing memory duration [-t,0]. The state space will be chosen to be the product X × L²(-∞,0;X). The state at time t is an element x̃(t) of X × L²(-∞,0;X) which is defined as


(7)  x̃(t)^0 = x(t)

     x̃(t)^1(θ) = { x(t+θ) for -t ≤ θ ≤ 0; h^1(t+θ) for -∞ < θ < -t }.

3. State Equation

It will be more convenient to work with an evolution equation for the state of the system rather than equations (1) or (4). In order to obtain the state evolution equation corresponding to equation (1) let

(8)  H = X × L²(-a,0;X)

     V = { (h(0), h) : h ∈ H¹(-a,0;X) }.

The injection of V into H is continuous and V is dense in H. We identify H with its dual. Then if V' denotes the dual space of V, we have

V ⊂ H ⊂ V'.

This is the framework utilized by Lions (cf. J.L. LIONS) to study evolution equations. Define the unbounded operator A(t): V → H by

(9)  (A(t)h)^0 = A_00(t)h(0) + Σ_{i=1}^{N} A_i(t)h(θ_i) + ∫_{-a}^{0} A_01(t,θ)h(θ)dθ

     (A(t)h)^1(θ) = dh/dθ(θ),

and the bounded operator B(t): U → H by

(10)  (B(t)u)^0 = B(t)u

      (B(t)u)^1(θ) = 0,

and f̃(t) ∈ H by

(11)  f̃(t)^0 = f(t),   f̃(t)^1 = 0.

Then for all h in V, it can be shown that x̃ is the unique solution in

(12)  W(0,T) = { z ∈ L²(0,T;V) : D z ∈ L²(0,T;H) }   [D denotes the distributional derivative]

of

(13)  dx̃/dt(t) = A(t)x̃(t) + B(t)v(t) + f̃(t)   in [0,T]

      x̃(0) = h.

Similarly in the case of equation (4) we let

(14)  H = X × L²(-∞,0;X)

      V = { (h(0), h) : h ∈ H¹(-∞,0;X) }.

We again have

V ⊂ H ⊂ V'.

We now define A(t): V → H as follows:

(15)  (A(t)h)^0 = A_00(t)h(0) + ∫_{-∞}^{0} A_01(t,θ)h(θ)dθ

      (A(t)h)^1(θ) = dh/dθ(θ).


Let B(t) and f̃(t) be as defined in (10) and (11). For all h in V, x̃ is the unique solution in

(16)  W(0,T) = { z ∈ L²(0,T;V) : D z ∈ L²(0,T;H) }

of

(17)  dx̃/dt(t) = A(t)x̃(t) + B(t)v(t) + f̃(t),   in [0,T],

      x̃(0) = h.

4. Optimal Control Problem in [0,T]

We now consider a quadratic cost function,

(18)  J(v,h) = (L x̃(T), x̃(T))_H - 2(ℓ, x̃(T))_H
            + ∫_0^T [ (Q(t)x̃(t), x̃(t))_H - 2(q(t), x̃(t))_H ] dt
            + ∫_0^T (N(t)v(t), v(t))_U dt,

where L ∈ ℒ(H), ℓ ∈ H, q ∈ L²(0,T;H), and Q: [0,T] → ℒ(H) and N: [0,T] → ℒ(U) are strongly measurable and bounded. Moreover L, Q(t) and N(t) are self adjoint and positive and there exists c > 0 such that

(19)  ∀t, ∀u,   (N(t)u, u)_U ≥ c|u|²_U.

For this problem we know that given h in V, there exists a unique optimal control function u in L²(0,T;U) which minimizes J(v,h) over all v in L²(0,T;U). Moreover this optimal control can be synthesized via the feedback law

(20)  u(t) = -N(t)⁻¹ B(t)* [Π(t)x̃(t) + r(t)],


where Π and r are characterized by the following equations:

(21)  dΠ/dt(t) + A(t)*Π(t) + Π(t)A(t) - Π(t)R(t)Π(t) + Q(t) = 0,   in [0,T]

      Π(T) = L,

where

(22)  R(t) = B(t) N(t)⁻¹ B(t)*,

and

(23)  dr/dt(t) + [A(t) - R(t)Π(t)]* r(t) + [Π(t)f̃(t) + q(t)] = 0,   in [0,T]

      r(T) = -ℓ.

Here a solution of (21) is a map Π: [0,T] → ℒ(H) which is weakly continuous such that for all h and k in V the map t → (h, Π(t)k)_H is in H¹(0,T;R); a solution of (23) is a map r: [0,T] → H such that r ∈ L²(0,T;H) and D r ∈ L²(0,T;V').

For details see DELFOUR-MITTER [3] and BENSOUSSAN-DELFOUR-MITTER [1].
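In a finite-dimensional state space the operator equations (21)-(22) reduce to a matrix (here scalar) Riccati differential equation that can be swept backward from Π(T) = L. The numbers below are illustrative; the paper's setting is infinite dimensional, with A(t) unbounded.

```python
# Scalar analogue of (21)-(22): integrate
#   dPi/dt + 2*a*Pi - r*Pi**2 + q = 0,  Pi(T) = L,  r = b**2 / n,
# backward from t = T to t = 0 by explicit Euler in reversed time tau = T - t.
def riccati_sweep(a=0.0, b=1.0, n=1.0, q=1.0, L=0.0, T=5.0, dt=1e-4):
    r = b * b / n
    pi = L
    for _ in range(int(round(T / dt))):
        pi += dt * (2 * a * pi - r * pi * pi + q)   # dPi/dtau = -dPi/dt
    return pi

pi0 = riccati_sweep()
# With a = 0 and b = n = q = 1, Pi(0) = tanh(T), close to the steady root 1.
assert abs(pi0 - 1.0) < 1e-3
# The corresponding feedback of (20) is then u = -(b/n) * Pi(t) * x.
```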

5. Optimal Control Problem in [0, ∞]

We can also give a complete theory for cost functions of the form

(24)  J_∞(v,h) = ∫_0^∞ [ (Q x̃(t), x̃(t))_H + (N v(t), v(t))_U ] dt

with the following hypotheses:

1) Q ∈ ℒ(H), N ∈ ℒ(U) are positive and self adjoint and there exists c > 0 such that

∀u,   (N u, u)_U ≥ c|u|²_U;

2) x̃ is the solution of

(25)  dx̃/dt(t) = A x̃(t) + B v(t)   in [0, ∞]

      x̃(0) = h in V;

3) (Stabilizability hypothesis) there exists a feedback operator G ∈ ℒ(V,U) of the form

(26)  Gh = G_00 h(0) + Σ_{i=1}^{N} G_i h(θ_i) + ∫_{-a}^{0} G_01(θ) h(θ) dθ


such that the closed loop system

(27)  dx̃/dt(t) = [A + B G] x̃(t)   in [0, ∞]

      x̃(0) = h ∈ V

be L²-stable, that is,

(28)  ∀h ∈ H,   ∫_0^∞ ||x̃(t)||²_H dt < ∞.

For a study of the stabilizability problem see VANDEVENNE [1], [2].

When system (25) is stabilizable, there exists a unique u in L²_loc(0,∞;U) which minimizes J_∞(v,h) over all v in L²_loc(0,∞;U) for a given h. Moreover this optimal u can be synthesized via a constant feedback law

(29)  u(t) = -N⁻¹ B* Π x̃(t),

where Π is a solution of the algebraic Riccati equation

(30)  A*Π + ΠA - ΠRΠ + Q = 0.

A solution of (30) is a positive self adjoint element of ℒ(H) such that (30) is verified as an equation in ℒ(V,V'). The operator Π in ℒ(H) can be decomposed into a matrix of operators

Π = [ Π_00  Π_01 ]
    [ Π_10  Π_11 ]

(since H is either X × L²(-a,0;X) or X × L²(-∞,0;X)) where

Π_00 ∈ ℒ(X),   Π_01 ∈ ℒ(L²(-a,0;X), X),

Π_10 ∈ ℒ(X, L²(-a,0;X)),   Π_11 ∈ ℒ(L²(-a,0;X)).

Moreover

Π_00* = Π_00 ≥ 0,

A_00*Π_00 + Π_00 A_00 + Π_10(0) + Π_10(0)* + Q - Π_00 R Π_00 = 0,

(Π_10 h^0)(α) = Π_10(α)h^0,   α → Π_10(α): [-a,0] → ℒ(X),

dΠ_10/dα(α) = Π_10(α)[A_00 - R Π_00] + Σ_{i=1}^{N-1} A_i*Π_00 δ(α-θ_i)
            + A_01(α)*Π_00 + Π_11(α,0),   a.e. in [-a,0],

Π_10(-a) = A_N*Π_00,

(Π_01 h^1) = ∫_{-a}^{0} Π_10(α)* h^1(α) dα,

(Π_11 h^1)(α) = ∫_{-a}^{0} Π_11(α,β) h^1(β) dβ,

(α,β) → Π_11(α,β): [-a,0] × [-a,0] → ℒ(X),

[∂/∂α + ∂/∂β] Π_11(α,β) = A_01(α)*Π_10(β)* + Π_10(α)A_01(β)
    + Σ_{i=1}^{N-1} A_i*Π_10(β)* δ(α-θ_i) + Σ_{j=1}^{N-1} Π_10(α)A_j δ(β-θ_j)
    - Π_10(α) R Π_10(β)*,

Π_11(-a,β) = A_N*Π_10(β)*,   Π_11(α,-a) = Π_10(α)A_N,

Π_11(α,β) = Π_11(β,α)*.

Under additional hypotheses on A and Q we can also describe the asymptotic behaviour of the closed loop system

(31)  dx̃/dt(t) = [A - R Π] x̃(t)   in [0, ∞]

      x̃(0) = h in V.

Definition. Given a Hilbert space of observations Y and an observer M ∈ ℒ(H,Y), System (25) is said to be observable by M if each initial datum h at time 0 can be determined from a knowledge of v in L²_loc(0,∞;U) and the observation

(32)  z(t) = M x̃(t)   in [0, ∞].

When System (25) is observable by Q^{1/2}, for each initial datum h

(33)  x̃(t) → 0 as t → ∞,

where x̃ is the solution of the closed loop system (31). In the special case where

Q = [ Q_00  0 ]
    [  0    0 ]

and Q_00 is positive definite, the closed loop system (31) is L²-stable. For further details see DELFOUR-McCALLA-MITTER.
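A scalar analogue shows the structure of (29)-(31): solve the scalar version of the algebraic Riccati equation (30) for a nonnegative root and check that the closed-loop coefficient of (31) is stable. All numbers are illustrative.

```python
import math

# Scalar analogue of (30): 2*a*pi - r*pi**2 + q = 0 with r = b**2 / n;
# take the nonnegative root, then check the closed-loop system (31).
a, b, n, q = -1.0, 1.0, 1.0, 2.0
r = b * b / n
pi = (a + math.sqrt(a * a + r * q)) / r   # nonnegative root of r*pi**2 - 2*a*pi - q = 0
assert pi >= 0.0
assert abs(2 * a * pi - r * pi * pi + q) < 1e-12   # (30) is satisfied
assert a - r * pi < 0.0    # closed loop dx/dt = (a - r*pi) x is stable
```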

6. Optimal Linear Filtering and Duality

Let E and F be two Hilbert spaces. We consider the system

(34)  dx/dt(t) = A_00(t)x(t) + Σ_{i=1}^{N} A_i(t)x(t+θ_i) + ∫_{-a}^{0} A_01(t,θ)x(t+θ)dθ
              + B(t)ξ(t) + f(t),

      x(0) = h^0 + ζ^0,

      x(θ) = h^1(θ) + ζ^1(θ),   -a ≤ θ < 0,

where ζ = (ζ^0, ζ^1) is the noise in the initial datum, and ξ is the input noise with values in F. We assume an observation of the form (with values in E)

(35)  z(t) = C(t)x(t) + η(t),

where η represents the error in measurement and C(t) belongs to ℒ(X,E). As in

BENSOUSSAN [1], {ζ^0, ζ^1, ξ, η} will be modelled as a Gaussian linear random functional on the Hilbert space

(36)  Φ = X × L²(-a,0;X) × L²(0,T;E) × L²(0,T;F)

with zero mean and covariance operator

P ∈ ℒ(Φ).

For each T we want to determine the best estimator of the linear random functional x(T) with respect to the linear random functional z(s), 0 ≤ s ≤ T. For the solution to this problem see BENSOUSSAN [2] and BENSOUSSAN-DELFOUR-MITTER [2].
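The finite-dimensional, discrete-time analogue of (34)-(35) is the classical Kalman filter. A minimal scalar recursion (illustrative numbers, not the paper's continuous delayed setting) shows the covariance/gain iteration that the cited references generalize.

```python
# Scalar discrete-time analogue of the filtering problem (34)-(35):
#   x_{k+1} = a*x_k + w_k (variance qw),   z_k = c*x_k + v_k (variance rv).
# Iterating the covariance recursion drives the gain to a steady value.
def steady_gain(a=0.9, c=1.0, qw=1.0, rv=1.0, iters=200):
    p = 1.0                                     # estimation-error covariance
    k = 0.0
    for _ in range(iters):
        p_pred = a * p * a + qw                 # predict
        k = p_pred * c / (c * p_pred * c + rv)  # Kalman gain
        p = (1.0 - k * c) * p_pred              # update
    return k

g = steady_gain()
assert 0.0 < g < 1.0
```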


References

A. BENSOUSSAN [1] FiltrageOptimal des Syst~mes Line'ires, Dunod, Paris 1971. [2], Filtrage Optimal des syst6mes " ~" ...... avec Llnealres retard, I.R.I.A. report INF 7118/71027, Oct. 1971.

A. BENSOUSSAN, M.C. DELFOUR and S.K. MITTER [1] Topics in S~em Theory in Infinite Dimensional Spaces, forthcoming monograph.

A. BENSOUSSAN, M.C. DELFOUR and S.K. MITTER [2] Optimal Filtering for Linear Stochastic Hereditary Differential Systems, Proc. 1972 IEEE Conference on Decision and Control, New Orleans, Louisiana, U.S.A., Dec. 13-15, 1972.

M.C. DELFOUR and S.K. MITTER [1] Hereditary Differential Systems with Constant Delays, I - General Case, J. Differential Equations, 12 (1972), 213-235. [2], Hereditary Differential Systems with Constant Delays, II - A Class of Affine Systems and the Adjoint Problem. To appear in J. Differential Equations. [3], Controllability, Observability and optimal Feedback Control of Hereditary Differential Systems, SIAM J. Control, i0 (1972), 298-328.

M.C. DELFOUR, C. McCALLA and S.K. MITTER, Stability and the Infinite-Time Quadratic Cost Problem for Linear Hereditary Differential Systems, C.R.M. Report 273, Centre de Recherches Mathematiques, Universit~ de Montreal, Montreal I01, Canada; submitted to SIAM J. on Control.

J.L. LIONS, Optimal Control of Systems Governed by Partial Differential Equations, Springer-Verlag, New York, 1971.

H.F. VANDEVENNE [1] Qualitative Properties of a Class of Infinite Dimensional Systems, Doctoral Dissertation, Electrical Engineering Department, M.I.T., January 1972. [2] Controllability and Stabilizability Properties of Delay Systems, Proc. of the 1972 IEEE Decision and Control Conference, New Orleans, December 1972.


Andrzej P. Wierzbicki*, Andrzej Hatko*

COMPUTATIONAL METHODS IN HILBERT SPACE

FOR OPTIMAL CONTROL PROBLEMS WITH DELAYS

Summary

The paper consists of two parts. The first part is devoted to basic relations in the abstract theory of optimization and their relevance for computational methods. The concepts of the abstract theory (developed by Hurwicz, Uzawa, Dubovitski, Milyutin, Neustadt and others), linked together with the notion of a projection on a cone, result in a unifying approach to computational methods of optimization. Several basic computational concepts, such as penalty functional techniques, problems of normality of optimal solutions, gradient projection and gradient reduction techniques, can be investigated in terms of a projection on a cone.

The second part of the paper presents an application of the gradient reduction technique in Hilbert space to optimal control problems with delays. Such an approach results in a family of computational methods parallel to the methods known for finite-dimensional and other problems: conjugate gradient methods, variable operator (variable metric) methods and the generalized Newton's (second variation) method can be formulated and applied to optimal control problems with delays. The generalized Newton's method is, as usual, the most efficient; however, the computational difficulties in inverting the hessian operator strongly limit the applications of the method. Of the other methods, the variable operator technique seems to be the most promising.

* Technical University of Warsaw, Institute of Automatic Control, Faculty of Electronics, Nowowiejska 15/19, Warsaw, Poland.


I. Basic relations in the abstract theory of optimization

and computational methods

1. Basic theory.

Two basic results of the abstract theory of extremal solutions are needed in the sequel.

Theorem 1. (See e.g. [5]). Let E, F be linear topological spaces, D be a nonempty convex cone (positive cone) in F. Let Q: E → R¹, P: E → F, p ∈ F and Yp = {y ∈ E: p - P(y) ∈ D}.

(i) Suppose there exist λ̂ ∈ D* (called a Lagrange multiplier) and ŷ ∈ Yp such that <λ̂, P(ŷ) - p> = 0 and

Q(ŷ) + <λ̂, P(ŷ) - p> ≤ Q(y) + <λ̂, P(y) - p> for all y ∈ E   (1)

Then

Q(ŷ) ≤ Q(y) for all y ∈ Yp   (2)

(ii) Let Q, P be convex (P relative to the cone D). Let the cone D have an interior point and suppose there exists y₁ ∈ E such that p - P(y₁) ∈ int D. Suppose there exists a point ŷ satisfying (2). Then there exists λ̂ ∈ D* satisfying (1) and such that

<λ̂, P(ŷ) - p> = 0   (3)

(iii) Given p₁, p₂ ∈ F suppose there exist ŷ₁, ŷ₂ minimizing Q in Yp₁, Yp₂ respectively. Suppose there exist λ̂₁, λ̂₂ satisfying (1) and (3) for ŷ₁ and ŷ₂. Then

<λ̂₁, p₁ - p₂> ≤ Q(ŷ₂) - Q(ŷ₁) ≤ <λ̂₂, p₁ - p₂>   (4)

Recall that the general form of a Lagrange functional is

L(λ₀, λ, y) = λ₀Q(y) + <λ, P(y) - p>   (5)

with λ₀ ≥ 0, whereas the normal form, with λ₀ > 0, is equivalent to

L(λ, y) = Q(y) + <λ, P(y) - p>   (6)

Therefore, theorem 1 (ii) gives a sufficient condition of normality of a convex optimization problem. The requirement of a nonempty int D is fairly severe and by no means necessary (we shall give an example of the existence of a normal Lagrange multiplier when int D is empty). However, weaker conditions of normality of convex problems have not been sufficiently investigated.

The part (iii) of the theorem is basic for sensitivity analysis

of optimization problems and results in the following corollary.

Corollary 1. Suppose the space F is normed. Suppose there is an open set Ω ⊂ F such that the assumptions of theorem 1, part (iii), hold for each p₁, p₂ ∈ Ω. Define the functional Q̂: Ω → R¹ by Q̂(p) = Q(ŷ) = min over Yp of Q(y). Suppose the normal Lagrange multipliers λ̂ are determined uniquely for each p ∈ Ω and the mapping N: Ω → F*, λ̂ = N(p), is continuous. Then the functional Q̂ is differentiable with δQ̂(p, δp) = -<λ̂, δp>; hence the gradient of the functional Q̂ is -λ̂.

The properties of the mapping N - uniqueness, continuity and Lipschitz continuity - are quite important in sensitivity analysis and in other computational aspects of optimization. However, we shall not investigate these properties here.
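As a concrete finite-dimensional illustration of Corollary 1, the following sketch (the toy problem is our own, not taken from the paper) checks numerically that the gradient of the optimal-value functional Q̂ equals -λ̂:

```python
# A sketch of Corollary 1 in R^1 (the toy problem is ours, not the paper's):
# minimize Q(y) = y**2 subject to p - P(y) in D, with P(y) = -y and
# D = [0, inf), i.e. y >= -p. For p < 0 the constraint is active:
# y_hat = -p, Q_hat(p) = p**2, and stationarity of the Lagrange functional
# y**2 + lam*(-y - p) gives lam = 2*y_hat. Corollary 1 predicts that the
# gradient of the optimal-value functional Q_hat is -lam.

def solve(p):
    y_hat = max(0.0, -p)      # unconstrained minimum y = 0 unless y >= -p binds
    lam = 2.0 * y_hat         # normal Lagrange multiplier
    return y_hat, lam

def q_hat(p):
    return solve(p)[0] ** 2

p = -3.0
_, lam = solve(p)
h = 1e-6
fd = (q_hat(p + h) - q_hat(p - h)) / (2.0 * h)
assert abs(fd - (-lam)) < 1e-4    # grad Q_hat(p) = -lambda, here -6.0
```

The finite-difference derivative of Q̂ agrees with -λ̂, as the corollary asserts.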

Another theorem of basic importance, which summarizes results proven in [6], [7], [8], is the following.

Theorem 2. Let E be a linear topological space. Suppose the functional Q: E → R¹ has a local constrained minimum at a point ŷ in a given set Yp ⊂ E. Suppose the set Yq = {y ∈ E: Q(y) < Q(ŷ)} has a nonempty internal cone K_i at ŷ (that is, a convex open cone K_i such that there is a neighbourhood U(ŷ) such that (K_i + ŷ) ∩ U(ŷ) ⊂ Yq). Suppose the set Yp has a nonempty external cone K_e at ŷ (that is, a convex cone K_e such that for each k ∈ K_e, for each open cone K₀ containing k, and for each neighbourhood U(ŷ) the set (K₀ + ŷ) ∩ U(ŷ) ∩ Yp contains more points than only ŷ). Then there are nonzero functionals q₀ ∈ K_i* and q₁ ∈ K_e* such that q₀ + q₁ = 0.

The fact that q₀ ≠ 0, q₁ ≠ 0 does not necessarily imply that a corresponding Lagrange functional has a normal form. Actually, if Yp = {y ∈ E: p - P(y) ∈ D}, we additionally need some assumptions resulting in a form of the Farkas lemma in order to represent the elements of K_e* by the elements of D*.

Corollary 2. Suppose E, F are normed spaces. Let Q: E → R¹ be differentiable (with the gradient denoted by Q_y(y)); hence K_i* = {-αQ_y(ŷ): α ≥ 0}. Let P: E → F be differentiable (with the derivative denoted by P_y(y)).

(i) Suppose Yp = {y ∈ E: p - P(y) ∈ D}, where D is a nontrivial positive cone (inequality constraint). Suppose P_y*(ŷ)D* is weakly* closed; hence, by the Farkas lemma, K_e* = {-P_y*(ŷ)λ: λ ∈ D*, <λ, P(ŷ) - p> = 0}. Thus there exists λ̂ ∈ D* such that <λ̂, P(ŷ) - p> = 0 and Q_y(ŷ) + P_y*(ŷ)λ̂ = 0; therefore, the Lagrange functional has a normal form.

(ii) Suppose Yp = {y ∈ E: p - P(y) = 0}; hence D is a trivial cone (equality constraint). Suppose P_y(ŷ) is onto; hence, by the Lyusternik theorem, K_e* = P_y*(ŷ)F*. Thus there exists λ̂ ∈ F* (obviously, <λ̂, P(ŷ) - p> = 0) such that Q_y(ŷ) + P_y*(ŷ)λ̂ = 0; therefore, the Lagrange functional has a normal form.


The sufficient condition of normality in (ii) - that is, the requirement that P_y(ŷ) is onto - is quite natural and useful. The sufficient condition of normality in (i) - the weak* closedness of P_y*(ŷ)D* - is less restrictive than the nonemptiness of int D, but rather cumbersome to check.

2. Projection on cones in Hilbert space

Let D be a closed convex cone in a Hilbert space H. Given y ∈ H, denote by y^D an element of D such that

||y^D - y|| = min over d ∈ D of ||d - y||   (7)

The element y^D is called the projection of y on D.

Lemma 1. (See e.g. [9]). If E is a strictly normed space (such that ||x + y|| = ||x|| + ||y|| implies x = αy, α ∈ R¹; in particular, a Hilbert space), then the projection y^D of y ∈ E on a closed convex set D ⊂ E is determined uniquely.

Lemma 2. The element y^D ∈ D is the projection of y ∈ H on a closed convex cone D if and only if

(i) y^D - y ∈ D*   (8)

(ii) <y^D, y^D - y> = 0   (9)

Proof. If (i) and (ii), then ||d - y||² = ||y^D - y||² + 2<y^D - y, d> + ||d - y^D||² ≥ ||y^D - y||² for all d ∈ D. If not (i), then there is d ∈ D such that <y^D - y, d> < 0 and there is ε > 0 such that 2<y^D - y, d> + ε||d||² < 0. Hence ||y^D + εd - y||² < ||y^D - y||²; since D is a convex cone, y^D + εd ∈ D, and y^D cannot satisfy (7). If not (ii), then <y^D, y^D - y> ≠ 0 (we always have <y^D, y^D - y> ≥ 0 since y^D - y ∈ D*, y^D ∈ D). There is ε₁ > 0 such that for all ε ∈ (0, ε₁) the inequality -2<y^D, y^D - y> + ε||y^D||² < 0 holds. Hence ||(1 - ε)y^D - y||² < ||y^D - y||²; since D is a cone, (1 - ε)y^D ∈ D for sufficiently small ε > 0 and y^D cannot satisfy (7).
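The characterization (8)-(9) can be verified directly for a concrete cone; a minimal sketch, with D the nonnegative orthant in Rⁿ (our choice of example):

```python
# A sketch verifying Lemma 2 for D = the nonnegative orthant in R^n
# (our choice; D is self-conjugate, with D* = {z: <z,d> >= 0 for all d in D}).
# The projection on D is the componentwise positive part.

def project(y):
    return [max(yi, 0.0) for yi in y]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

y = [1.5, -2.0, 0.0, -0.3, 4.0]
yD = project(y)
r = [d - v for d, v in zip(yD, y)]       # y^D - y

assert all(ri >= 0.0 for ri in r)        # (8): y^D - y in D* = D
assert dot(yD, r) == 0.0                 # (9): <y^D, y^D - y> = 0
assert dot(yD, yD) <= dot(y, y)          # cf. Lemma 4 (i): ||y^D|| <= ||y||
```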

Lemma 3. If ||y_D|| = min{||d||: d - y ∈ D*}, then y_D = y^D.

Proof. Since y^D satisfies (8) and (9), we have ||d||² = ||d - y^D||² + ||y^D||² + 2<d - y^D, y^D> = ||d - y^D||² + ||y^D||² + 2<d - y, y^D> ≥ ||y^D||² for all d with d - y ∈ D*. Since y_D is unique, y_D = y^D.

Lemma 4. The projection y^D on a cone in Hilbert space has the following properties:

(i) ||y^D|| ≤ ||y||

(ii) ||y₁^D - y₂^D|| ≤ ||y₁ - y₂||

(iii) If y = y₁ - y₂ and y₂ ∈ D*, then ||y^D|| ≤ ||y₁||

(iv) If y = y₁ - y₂, y₁ ∈ D, y₂ ∈ D* and <y₁, y₂> = 0, then y^D = y₁

Proof. (i) Relation (9) implies ||y^D||² = <y^D, y> ≤ ||y^D||·||y||.

(ii) We have ||y₁^D - y₂^D||² = <y₁^D - y₂^D, y₁ - y₂> + <y₁^D - y₂^D, (y₁^D - y₁) - (y₂^D - y₂)> ≤ <y₁^D - y₂^D, y₁ - y₂> ≤ ||y₁^D - y₂^D||·||y₁ - y₂|| as a consequence of (8) and (9).

(iii) ||y^D||² = <y^D, y₁ - y₂> ≤ <y^D, y₁> ≤ ||y^D||·||y₁|| if y₂ ∈ D*.

(iv) Since <y^D, y₂> ≥ 0, <y₁, y₂> = 0 and y = y₁ - y₂, hence <y^D - y, y₂> ≥ ||y₂||². Since <y^D - y, y₁> ≥ 0 and ||y^D - y||² = <y^D - y, -y>, hence <y^D - y, y₂> ≥ ||y^D - y||². Therefore, ||y^D - y₁||² = ||y^D - y - y₂||² = ||y^D - y||² + ||y₂||² - 2<y^D - y, y₂> ≤ 0, and y^D = y₁.

Lemma 5. The functional Q̂: H → R¹ defined by Q̂(y) = 0.5||y^D||² is differentiable and has the gradient Q̂_y(y) = y^D.

Proof. Consider the optimization problem: minimize Q(d) = 0.5||d||² for d - y ∈ D*. By Lemma 3, the unique solution of the problem is y^D. The element λ̂ = -y^D is the unique element such that 0.5||d||² + <λ̂, d - y> ≥ 0.5||y^D||² + <λ̂, y^D - y> for all d ∈ H (otherwise d = -λ̂ yields a contradiction); hence λ̂ = -y^D is the unique normal Lagrange multiplier for the problem (observe that the cone D may have an empty interior; consider the positive cone in L²[t₀,t₁]). Moreover, the mapping N: H → H such that λ̂ = N(y) = -y^D is Lipschitzian - see Lemma 4, property (ii). Corollary 1 yields the conclusion of the lemma.

3. Application to penalty functional techniques.

The penalty functional methods have been applied to the cases when the constraining operator P is finite-dimensional (see e.g. [5], [10]) and in special cases of infinite-dimensional operators (the so-called ε-technique, see [11]). The notion of projection on a cone makes it possible to generalize the penalty techniques to arbitrary infinite-dimensional operators.

Consider the problem: minimize Q(y) on Yp = {y ∈ E: p - P(y) ∈ D}, where E is a normed space, P: E → H, and D is a self-conjugate cone (D* = D) in H. Observe that most of the typical positive cones are self-conjugate; however, the last assumption is made in order to simplify the analysis, and the results can be generalised to the case when D is not self-conjugate.

Define the increased penalty functional Φ: E × R¹ → R¹ by

Φ(y, ρ) = Q(y) + 0.5ρ||(P(y) - p)^D||²   (10)

Theorem 3. Let {ρₙ}₁^∞ tend monotonically towards infinity. Suppose for each ρₙ there is a yₙ minimizing Φ(y, ρₙ) for y ∈ E. Then

(i) Φ(yₙ₊₁, ρₙ₊₁) ≥ Φ(yₙ, ρₙ)

(ii) inf over y ∈ Yp of Q(y) ≥ Φ(yₙ, ρₙ)

(iii) lim ρₙ||(P(yₙ) - p)^D||² = 0

(iv) If there exists ŷ = lim yₙ and Q, P are continuous, then Q(ŷ) ≤ Q(y) for all y ∈ Yp.

(v) Denote pₙ = (P(yₙ) - p)^D + p. Then lim pₙ = p and each yₙ minimizes Q(y) over Ypₙ = {y ∈ E: pₙ - P(y) ∈ D}.

(vi) Denote λₙ = ρₙ(P(yₙ) - p)^D. If Q, P are differentiable, then λₙ is the normal Lagrange multiplier for the problem of minimizing Q(y) over Ypₙ = {y ∈ E: pₙ - P(y) ∈ D} with pₙ = (P(yₙ) - p)^D + p.

Proof. The points (i)-(iv) are an easy generalization of the theorems presented in [5]. To prove (v) observe that pₙ - P(yₙ) ∈ D* = D according to Lemma 2; hence yₙ ∈ Ypₙ. Moreover, Q(yₙ) + 0.5ρₙ||(P(yₙ) - p)^D||² ≤ Q(y) + 0.5ρₙ||(P(y) - p)^D||² for all y ∈ E. But for all y ∈ Ypₙ we have p₂ = (P(yₙ) - p)^D - (P(y) - p) ∈ D = D*. Denote p₁ = (P(yₙ) - p)^D ∈ D; hence P(y) - p = p₁ - p₂ and, according to Lemma 4, property (iii), we have ||(P(y) - p)^D|| ≤ ||(P(yₙ) - p)^D|| for all y ∈ Ypₙ. Therefore, Q(yₙ) ≤ Q(y) for all y ∈ Ypₙ.

To prove point (vi) observe that Φ is differentiable according to Lemma 5 and Φ_y(yₙ, ρₙ) = Q_y(yₙ) + ρₙP_y*(yₙ)(P(yₙ) - p)^D = 0. Hence λₙ = ρₙ(P(yₙ) - p)^D satisfies the Lagrange condition. Moreover, λₙ ∈ D* and <λₙ, P(yₙ) - pₙ> = ρₙ<(P(yₙ) - p)^D, P(yₙ) - p - (P(yₙ) - p)^D> = 0 according to Lemma 2. Therefore, λₙ has the properties of a Lagrange multiplier - see Corollary 2, point (i).

The Lagrange functional corresponding to the penalty technique has a normal form - though no normality assumptions have been made. If the original problem is not normal, then the sequence {λₙ}₁^∞ does not converge. But the sequence {pₙ}₁^∞ converges to p. Hence the penalty functional technique approximates optimization problems by normal optimization problems.

Corollary 3. If the assumptions of Theorem 3, point (vi), are satisfied for all p ∈ H, then the set of p ∈ H such that the optimization problem is normal is dense in H.

The corollary explains why Balakrishnan's ε-technique leads to a normal formulation of the maximum principle: singular optimal control problems are approximated by normal ones when applying penalty functionals. However, the increased penalty technique has one severe drawback: the method becomes computationally ineffective when ρ increases. The computational effort necessary to solve the unconstrained problem of minimization of Φ increases rapidly with ρ; this is due to "steep valley" effects, well known in computational optimization.
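The behaviour described by Theorem 3 - yₙ approaching the solution and λₙ the normal multiplier while the conditioning deteriorates - can be seen on a one-dimensional toy problem of our own construction:

```python
# A one-dimensional sketch of the increased penalty functional (10); the toy
# problem is our own: minimize Q(y) = y**2 subject to y >= 1, written as
# p - P(y) in D with P(y) = -y, p = -1, D = [0, inf). Here
# (P(y) - p)^D = max(0, 1 - y), and the minimizer of
# Phi(y, rho) = y**2 + 0.5*rho*max(0, 1 - y)**2 has a closed form.

def penalty_minimizer(rho):
    # stationarity on the active branch y < 1: 2*y - rho*(1 - y) = 0
    return rho / (2.0 + rho)

def multiplier(rho):
    y = penalty_minimizer(rho)
    return rho * (1.0 - y)        # lambda_n of Theorem 3 (vi)

# y_n -> 1 (the constrained solution) and lambda_n -> 2 (the normal
# multiplier), but the curvature 2 + rho of Phi grows without bound:
# the "steep valley" that makes minimization expensive for large rho.
assert abs(penalty_minimizer(1e6) - 1.0) < 1e-5
assert abs(multiplier(1e6) - 2.0) < 1e-5
```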


To overcome this difficulty, another technique has been proposed [12]. Define the shifted penalty functional Φ: E × H × R¹ → R¹ by

Φ(y, v, ρ) = Q(y) + 0.5ρ||(P(y) + v)^D||²   (11)

where v ∈ H is not necessarily equal to -p. Theorem 3, points (v), (vi), can be restated in this case:

Theorem 4. Given v ∈ H and ρ > 0, suppose there exists y_v minimizing Φ over E. Then:

(i) Denote p_v = (P(y_v) + v)^D - v ∈ H. The element y_v minimizes Q(y) over Yp_v = {y ∈ E: p_v - P(y) ∈ D}.

(ii) Denote λ_v = ρ(P(y_v) + v)^D. If P, Q are differentiable, then λ_v is the normal Lagrange multiplier for the problem of minimization of Q over Yp_v.

The penalty shifting method consists of a suitable algorithm of changing v in order to achieve p_v → p. An efficient algorithm was proposed first by Powell [13] for equality constraints in Rⁿ and later generalized in [12] for inequality constraints. Stated in terms of projection on cones, the algorithm has the form:

v(n+1) = (vₙ + P(y_vₙ))^D - p ;  v₁ = -p   (12)

Theorem 5. Suppose there exists a solution y₀ and a normal Lagrange multiplier λ₀ for the problem: minimize Q(y) over Yp = {y ∈ E: p - P(y) ∈ D}, where E is a normed space, H is a Hilbert space, D is a self-conjugate cone in H, Q: E → R¹ and P: E → H are Frechet differentiable, and p = p₀ ∈ H is given. Suppose there is a neighbourhood U(p₀) such that the optimization problem has solutions for all p ∈ U(p₀) and the mapping N: U(p₀) → H, λ̂ = N(p), is well-defined and Lipschitzian, ||N(p₁) - N(p₂)|| ≤ R_N||p₁ - p₂|| for all p₁, p₂ ∈ U(p₀). Suppose the shifted penalty functional Φ(y, v, ρ) has a minimum with respect to y ∈ E for each ρ ≥ ρ₁ and each v in a neighbourhood U(v₁), v₁ = -p₀. Suppose v is changed iteratively: v₁ = -p₀, v(n+1) = (P(yₙ) + vₙ)^D - p₀, where yₙ are minimal points of Φ(y, vₙ, ρ). Then:

(i) There exists ρ̄ ≥ ρ₁ such that for all ρ ≥ ρ̄ the sequence {vₙ}₁^∞ converges to v₀ = (1/ρ)λ₀ - p₀ and {vₙ}₁^∞ ⊂ U(v₁), whereas the sequence {pₙ}₁^∞ defined by pₙ = (P(yₙ) + vₙ)^D - vₙ converges to p₀ and {pₙ}₁^∞ ⊂ U(p₀).

(ii) Given any α > 0 there exists ρ̂ ≥ ρ̄, ρ̂ ≥ ((1 + α)/α)R_N, such that ρ ≥ ρ̂ implies ||p(n+1) - p₀|| ≤ α||pₙ - p₀|| and ||v(n+1) - v₀|| ≤ α||vₙ - v₀||. Thus the convergence is at least geometrical and an arbitrary convergence rate can be achieved.

Proof. Since v₀ = v₁ + (1/ρ)λ₀, v₀ ∈ U(v₁) for sufficiently large ρ and there is a neighbourhood U(v₀) ⊂ U(v₁) such that v₁ ∈ U(v₀). Since p₁ = p₀ + (P(y₁) - p₀)^D, hence - by Theorem 3 - there is a sufficiently large ρ such that p₁ ∈ U(p₀).

Suppose vₙ ∈ U(v₀) and pₙ ∈ U(p₀). By Theorem 4, λₙ = ρ(P(yₙ) + vₙ)^D is the normal Lagrange multiplier for the problem of minimizing Q(y) over Ypₙ. Thus we have:

vₙ = (1/ρ)λₙ - pₙ ;  v(n+1) = (1/ρ)λₙ - p₀ ;  v₀ = (1/ρ)λ₀ - p₀

and

vₙ - v₀ = (1/ρ)(λₙ - λ₀) - (pₙ - p₀) ;  v(n+1) - v₀ = (1/ρ)(λₙ - λ₀)

Since λₙ = N(pₙ) is Lipschitzian,

||pₙ - p₀|| ≤ (R_N/ρ)||pₙ - p₀|| + ||vₙ - v₀||

and, if ρ > R_N,

||pₙ - p₀|| ≤ (ρ/(ρ - R_N))||vₙ - v₀||

On the other hand,

||v(n+1) - v₀|| ≤ (R_N/ρ)||pₙ - p₀||

hence

||v(n+1) - v₀|| ≤ (R_N/(ρ - R_N))||vₙ - v₀||

If ρ is sufficiently large, v(n+1) ∈ U(v₀) and y(n+1), p(n+1) exist. Moreover,

||p(n+1) - p₀|| ≤ (R_N/(ρ - R_N))||pₙ - p₀||

and p(n+1) ∈ U(p₀). By induction, {vₙ}₁^∞ ⊂ U(v₀) ⊂ U(v₁) and {pₙ}₁^∞ ⊂ U(p₀). The convergence rate α = R_N/(ρ - R_N) can be made arbitrarily small.

The assumptions of Theorem 5 can be made more explicit by investigating the conditions of the Lipschitz continuity of λ̂ = N(p), the existence of minimal yₙ, etc. However, these problems shall not be pursued in this paper. It should only be stressed that the assumption of Lipschitz continuity of N is essential, which can be shown by simple examples even in R¹ - see [12]. Although an arbitrary convergence rate α can be achieved, it is not practical to require too small an α and too large a ρ, since the computational effort of solving the problem of unconstrained minimization of Φ(y, v, ρ) becomes rather large in that case.
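On a one-dimensional toy problem of our own construction, the shifting iteration (12) with a fixed, moderate ρ exhibits the geometric convergence of Theorem 5 (the closed-form minimizer below is specific to this example):

```python
# A sketch of the shifting algorithm (12) on a toy problem of our own:
# minimize Q(y) = y**2 subject to y >= 1, i.e. P(y) = -y, p = -1,
# D = [0, inf). The shifted penalty (11) is
# phi(y, v, rho) = y**2 + 0.5*rho*max(0, v - y)**2, and for v > 0 its
# minimizer has the closed form y_v = rho*v/(2 + rho).

def shifting(rho, steps):
    v = 1.0                            # v_1 = -p
    y, lam = 0.0, 0.0
    for _ in range(steps):
        y = rho * v / (2.0 + rho)      # minimize phi(., v, rho)
        lam = rho * max(0.0, v - y)    # multiplier estimate, Theorem 4 (ii)
        v = max(0.0, v - y) + 1.0      # (12): v_{n+1} = (v_n + P(y_n))^D - p
    return y, lam

y, lam = shifting(rho=10.0, steps=30)
# With a FIXED moderate rho the iteration converges geometrically
# (rate 2/(2 + rho) here, cf. Theorem 5) to y = 1, lam = 2:
# no steep valley has to be climbed.
assert abs(y - 1.0) < 1e-9 and abs(lam - 2.0) < 1e-8
```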

4. Application to gradient projection and reduction techniques.

Consider once more the problem of minimizing Q(y) over Yp = {y ∈ H_y: p - P(y) ∈ D}, where Q: H_y → R¹ and P: H_y → H_p are differentiable, H_y, H_p are Hilbert spaces, and D is a positive cone in H_p. Assume there is a given y₁ ∈ Yp and construct the cone K(y₁) = {δy ∈ H_y: -P_y(y₁)δy ∈ D₁}, where D₁ = D + {α(p - P(y₁)): α ∈ R¹}. Assume the cone is external - see Theorem 2 - to the set Yp at y₁ (to make the assumption explicit, several regularity assumptions could be made). Assume the direction d = -Q_y(y₁) does not belong to K(y₁). However, the projection d_K of d on K(y₁) provides a good approximation of a permissible direction of improvement.

Assume now that {δy ∈ H_y: P_y(y₁)δy ∈ D₁}* = P_y*(y₁)D₁* (again, to make the assumption explicit, we should use the Farkas lemma - P_y*(y₁)D₁* being weakly* closed - or the Lyusternik theorem - P_y(y₁) a surjection). This is the basic normality assumption which makes it possible to introduce a normal Lagrange multiplier λ at a nonoptimal point y.

Since K*(y₁) = {-P_y*(y₁)λ: λ ∈ D₁*} and D₁* = {λ ∈ D*: <λ, p - P(y₁)> = 0}, hence K*(y₁) = {-P_y*(y₁)λ: λ ∈ D*, <λ, P(y₁) - p> = 0}. When projecting d on K(y₁), we obtain d_K - d ∈ K*(y₁) and <d_K - d, d_K> = 0. If d = -Q_y(y₁), we have:

(i) λ ∈ D*; <λ, p - P(y₁)> = 0

(ii) d_K = -Q_y(y₁) - P_y*(y₁)λ

(iii) <P_y*(y₁)λ, Q_y(y₁) + P_y*(y₁)λ> = 0   (13)

Hence we have a normal Lagrange multiplier λ for a nonoptimal point y₁ ∈ Yp. It coincides with the optimal λ̂ for an optimal y₁, since d_K = 0 if y₁ is optimal. The multiplier satisfies the usual conditions (i); but there is also an additional condition (iii) - trivial at an optimal point - which helps to determine λ for a nonoptimal point.

In the general case it is not easy to make constructive use of the set of conditions (13) (i)-(iii). However, these conditions generalise the known notions of the Rosen gradient projection [14] or, in a special but very important case, of the Wolfe reduced gradient [15]. Actually, assume D = {0}, with P_y(y₁) onto; then K(y₁) is the linear subspace tangent to Yp at y₁ (the nullspace of P_y(y₁)). The conditions (i) are trivial, the condition (ii) amounts to an orthogonal gradient projection on the tangent subspace, and the condition (iii) results in an explicit value of the Lagrange multiplier and the Rosen gradient projection

λ = -(P_y(y₁)P_y*(y₁))⁻¹P_y(y₁)Q_y(y₁) ;  d_K = -Q_y(y₁) - P_y*(y₁)λ   (14)

The notion of Wolfe's reduced gradient applies to several optimal control problems when H_y = H_x × H_u and H_p = H_x. The most important feature of this class of problems is that the state x ∈ H_x can be explicitly determined (analytically or numerically) in terms of the control u ∈ H_u. The statement of such a problem is: minimize Q(x,u) for (x,u) ∈ Y₀ = {(x,u) ∈ H_x × H_u: P(x,u) = 0 ∈ H_x}, where


Q: H_x × H_u → R¹ and P: H_x × H_u → H_x are Frechet differentiable and P_x(x,u) is onto for all (x,u) ∈ Y₀. The last assumption corresponds to the requirement that x can be determined in terms of u. If we take y = (x,u), P_y(y) = (P_x(x,u), P_u(x,u)), Q_y(y) = (Q_x(x,u), Q_u(x,u)) and apply Rosen's projection (14) on the tangent subspace, we obtain the multiplier

λ(x,u) = -(P_x(x,u)P_x*(x,u) + P_u(x,u)P_u*(x,u))⁻¹(P_x(x,u)Q_x(x,u) + P_u(x,u)Q_u(x,u))   (15)

However, this is not the most useful way of introducing a Lagrange multiplier in that case. Since P_x⁻¹(x,u) exists for (x,u) ∈ Y₀, we can change the original variables y = (x,u) to ỹ = (y₁, y₂), where δy₁ = δx + P_x⁻¹(x,u)P_u(x,u)δu and δy₂ = δu. Now P_ỹ(y) = (P_x(x,u), 0) and Q_ỹ(y) = (Q_x(x,u), Q_u(x,u) - P_u*(x,u)P_x*⁻¹(x,u)Q_x(x,u)). The tangent subspace becomes K = {0} × H_u and the projection is particularly simple. The new Lagrange multiplier (we keep the original denotation) is

λ(x,u) = -P_x*⁻¹(x,u)Q_x(x,u)   (16)

and the new projection of the negative gradient is

(i) d_K^x = -Q_x(x,u) - P_x*(x,u)λ = 0

(ii) d_K^u = -Q_u(x,u) - P_u*(x,u)λ   (17)

The relation (i) has a simple interpretation: we choose the new Lagrange multiplier in such a way that the Lagrange functional Q(x,u) + <λ, P(x,u)> does not depend, in a linear approximation, on δx. The Lagrange multiplier determined by (16) obviously differs from the multiplier (15); however, at an optimal solution the multipliers are identical. The projection (17) (i), (ii) is a generalization of the Wolfe reduced gradient; in the original space it is a nonorthogonal projection (but it is a projection, since the variable transformation is one-to-one). The resulting technique has in fact been widely employed in many approaches of the calculus of variations and provides many computational methods of optimization.
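The reduced gradient of (16), (17) can be checked on a scalar toy problem (ours, for illustration) where J(u) is known in closed form:

```python
# A scalar sketch of the reduced gradient (16), (17), (22); the toy problem
# is ours: state equation P(x, u) = x - u**2 = 0, cost Q(x, u) = x**2 + u**2,
# so J(u) = u**4 + u**2 and the exact reduced gradient is J'(u) = 4u**3 + 2u.

def reduced_gradient(u):
    x = u * u                  # solve the state equation P(x, u) = 0
    Qx, Qu = 2.0 * x, 2.0 * u
    Px, Pu = 1.0, -2.0 * u
    lam = -Qx / Px             # (16): lam = -Px*^{-1} Qx
    return Qu + Pu * lam       # reduced gradient b(u); d_K = -b by (17)(ii)

u = 0.7
assert abs(reduced_gradient(u) - (4.0 * u**3 + 2.0 * u)) < 1e-12
```

The agreement with the analytic derivative of J illustrates that the multiplier (16) eliminates the δx-dependence exactly.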

5. Reduced gradient and the basic variational equations

If the condition P(x,u) = 0 determines implicitly a mapping S: H_u → H_x, x = S(u), then instead of minimizing Q(x,u) over Y₀ we have to minimize J(u) = Q(S(u),u) in H_u. However, the determination of S might be cumbersome, and it is often useful to express the derivatives of J in terms of the derivatives of an equivalent Lagrange functional. This technique actually amounts to the gradient reduction technique. Assume Q, S are twice differentiable; so is J. Denote

J(u + δu) = J(u) + <b(u), δu> + 0.5<δu, A(u)δu> + o(||δu||²)   (18)


where b(u) is the gradient and A(u) the hessian operator of the functional J. On the other hand,

J(u) = L(λ, x, u) = Q(x,u) + <λ, P(x,u)>   (19)

if

L_λ(λ, x, u) = P(x,u) = 0   (20)

which can be interpreted as the basic state equation of the optimal control problem. By choosing λ according to (16) we obtain

L_x(λ, x, u) = Q_x(x,u) + P_x*(x,u)λ = 0   (21)

which can be interpreted as the basic adjoint equation of the problem. Hence the gradient of the functional J (the reduced gradient of Q) is

L_u(λ, x, u) = Q_u(x,u) + P_u*(x,u)λ = b(u)   (22)

The linear approximation of the state equation is

δx' = -P_x⁻¹(x,u)P_u(x,u)δu ;  δx = δx' + o(||δu||)   (23)

Expanding L(λ, x + δx, u + δu) into second-order terms of δx, δu and applying (23) results in the hessian operator of the functional J:

A(u) = L_uλL_xλ⁻¹L_xxL_λx⁻¹L_λu - L_uλL_xλ⁻¹L_xu - L_uxL_λx⁻¹L_λu + L_uu   (24)

with the denotations shortened in an obvious way; observe that L_λx = P_x(x,u). However, the explicit expression for the hessian is not the most useful computationally. There is an alternative way: to expand L(λ + δλ, x + δx, u + δu) into second-order terms and choose an appropriate δλ. Hence the hessian operator can be determined by the set of equations

(i) A(u)δu = L_uλδλ' + L_uxδx' + L_uuδu

(ii) 0 = L_xλδλ' + L_xxδx' + L_xuδu

(iii) 0 = L_λxδx' + L_λuδu   (25)

The equations (ii), (iii) - where (iii) is equivalent to (23) - are the linearisation of the adjoint and state equations and are called basic variational equations.

In some computational approaches an important problem is the inversion of the hessian operator, in order to determine the Newton direction of improvement d = -A⁻¹(u)b(u). Setting δu = d, A(u)δu = -b(u) in (25) and assuming that L_uu⁻¹ exists, we get

(i) d = -L_uu⁻¹(L_uλδλ' + L_uxδx' + b(u))

(ii) 0 = (L_xλ - L_xuL_uu⁻¹L_uλ)δλ' + (L_xx - L_xuL_uu⁻¹L_ux)δx' - L_xuL_uu⁻¹b(u)

(iii) 0 = -L_λuL_uu⁻¹L_uλδλ' + (L_λx - L_λuL_uu⁻¹L_ux)δx' - L_λuL_uu⁻¹b(u)   (26)

where (ii), (iii) are called canonical variational equations. Their solution is more difficult than the solution of the basic variational equations; usually, the canonical equations represent a nontrivial two-point boundary value problem, and the typical method of their solution is the reduction to a Riccati-type nonlinear equation.


II. Application to optimal control problems with delays.

6. Optimal control problem with delays: the gradient and the hessian.

Consider the problem: minimise the performance functional

Q(x,u) = ∫ from t₀ to t₁ of f₀(x(t), x(t-τ), u(t), u(t-τ), t)dt + h(x(t₁))   (27)

where x satisfies the process equation

P(x,u) = 0 ⟺ ẋ(t) = f(x(t), x(t-τ), u(t), u(t-τ), t) ;

x(t) = ξ₁(t) for t ∈ [t₀-τ, t₀]

u(t) = ξ₂(t) for t ∈ [t₀-τ, t₀)   (28)

The analysis of the relations between this particular problem and the general Hilbert space problem leads to the following known results.

Denote H = -f₀ + <ψ, f> and

σ(t) = ψ(t+τ);  z(t) = x(t+τ);  y(t) = x(t-τ);  w(t) = u(t+τ);  v(t) = u(t-τ)   (29)

Define the shifted hamiltonian function H* by

H*(t) = H(t) + H(t+τ) for t ∈ [t₀, t₁-τ] ;  H*(t) = H(t) for t ∈ (t₁-τ, t₁]   (30)

where H(t+τ) denotes the function H with arguments evaluated at t+τ. Then the process equation (28) can be rewritten in the form

ẋ = H*_ψ   (31)

the adjoint equation - under appropriate differentiability assumptions - takes the form

ψ̇ = -H*_x ;  ψ(t₁) = -h_x(x(t₁))   (32)

and the gradient equation is

b = -H*_u   (33)

These results apply to the case without any additional constraints save (28). If, for example, a final state function x(t) = ξ₃(t) for t ∈ [t₁-τ, t₁] is given, the penalty functional approach is most useful.

The basic variational equations take the form

δẋ = f_xδx + f_yδy + f_uδu + f_vδv ;  δx(t) = 0 for t ∈ [t₀-τ, t₀]

δψ̇ = -H*_xψδψ - H*_xσδσ - H*_xxδx - H*_xzδz - H*_xvδv - H*_xuδu - H*_xwδw ;  δψ(t₁) = -h_xxδx(t₁)

δb = -(H*_uψδψ + H*_uσδσ + H*_uxδx + H*_uzδz + H*_uvδv + H*_uuδu + H*_uwδw)   (34)

The canonical variational equations have a complicated form

δẋ = A₁δx + B₁δy + C₁δz + A₂δψ + B₂δσ + C₂δρ + φ_x

δψ̇ = A₃δx + B₃δy + C₃δz + A₄δψ + B₄δσ + C₄δρ + φ_ψ   (35)

where δρ(t) = δψ(t-τ), δx(t) = 0 for t ∈ [t₀-τ, t₀], δψ(t₁) = -h_xxδx(t₁); A_j, B_j, C_j are matrices determined by second-order derivatives of the hamiltonian function H* (a matrix related to H*_uu, H*_uv, H*_uw is assumed invertible), and φ_x, φ_ψ are determined by the gradient b.

A numerical solution of the equations (34) presents no particular difficulties, since we can solve the advanced-type adjoint variational equation backwards after solving the retarded-type state variational equation. A numerical solution of the equations (35) presents a major computational problem, since they are of neutral type with two-point boundary conditions.
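The forward-backward sweep for the retarded state equation and the advanced-type adjoint can be sketched in discretized form. The grid, the terminal cost, and the sign conventions below are our own choices (the adjoint here is the multiplier of the discrete state equation, not (32) verbatim), and the gradient is checked against finite differences:

```python
# A discretized (Euler) sketch of the forward-backward sweep for a problem
# of type (27)-(28); grid, terminal cost h(x) = 0.5*x**2, and sign
# conventions are our own choices:
#   xdot(t) = -x(t - tau) + u(t),  x(t) = 1 on [-tau, 0],
#   J(u) = 0.5*x(t1)**2 + integral of u(t)**2.
# The discrete adjoint recursion uses psi[k + m + 1]: it is of advanced
# type and is solved backwards after the retarded state equation.

N, m, h = 40, 20, 0.05               # grid on [0, 2], delay tau = m*h = 1

def state(u):
    x = [1.0] * (m + 1)              # history on [-tau, 0]
    for k in range(N):               # x[m + k] is x at t = k*h
        x.append(x[m + k] + h * (-x[k] + u[k]))
    return x

def cost(u):
    x = state(u)
    return 0.5 * x[m + N] ** 2 + h * sum(uk * uk for uk in u)

def gradient(u):
    x = state(u)
    psi = [0.0] * (N + m + 2)        # psi[j] = 0 for j > N
    psi[N] = x[m + N]                # terminal condition
    for k in range(N - 1, 0, -1):
        psi[k] = psi[k + 1] - h * psi[k + m + 1]   # advanced-type adjoint
    return [2.0 * h * u[k] + h * psi[k + 1] for k in range(N)]

u = [0.1 * k for k in range(N)]      # an arbitrary test control
g = gradient(u)
eps = 1e-6
for k in (0, N // 2, N - 1):         # finite-difference check
    up, um = list(u), list(u)
    up[k] += eps
    um[k] -= eps
    assert abs((cost(up) - cost(um)) / (2.0 * eps) - g[k]) < 1e-6
```

One backward sweep of the advanced adjoint yields the whole gradient at the cost of roughly one extra state integration, which is why this step presents "no particular difficulties".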

Several techniques have been proposed in order to solve the equations (35) or, equivalently, to invert the hessian operator of an optimal control problem with delays. Most of the techniques consist of a Riccati setting of an integral operator type. However, such a setting results in an integral or partial differential equation of Riccati type, which is also rather difficult to solve computationally. Recently, Chan and Perkins [19] proposed a simple iterative technique for solving equations (35), which is quite effective for quadratic problems. However, the Newton's method (or "second variation" method) is itself iterative in non-quadratic cases. Therefore, it is rather reasonable to omit the inversion of the hessian for non-quadratic optimal control problems with delays.

7. Universal conjugate direction and variable operator methods. A computational example.

The general formulation of an optimal control problem in Hilbert space is a natural basis for the application of several universal computational methods, well known for nonlinear programming problems in Rⁿ and recently generalised to Hilbert space problems. There are two families of such methods.

The first family, called conjugate direction methods, is based on the concept of conjugacy or A-orthogonality.

The second family, called variable operator or variable metric methods, is based on the notion of the outer product in H. A variable operator Vᵢ is an approximation of A⁻¹ constructed iteratively by V(i+1) = Vᵢ + ΔVᵢ, where ΔVᵢ is determined with the help of outer products in H. A discussion of the general properties of these two families of optimization methods is given in [18]. The existing computational experience seems to indicate that these methods are more effective than Newton's if the inversion of the hessian is difficult. One of the conjugate gradient methods was applied as a subprocedure in solving the following computational example - constructed by M. Jacobs and T.J. Kao in an unpublished paper and computed by St. Kurcyusz. The example illustrates the effectiveness of penalty functional techniques. The problem consists in achieving a given final complete state (a function of time) x(t) = 0 for t ∈ [1,2] in a time-delay system:

ẋ(t) = -x(t-1) + u(t), t ∈ [0,2] ;  x(t) = 1, t ∈ [-1,0]

while minimizing the functional Q = ∫₀² u²(t)dt. The problem has an analytical


solution, and hence provides a good test of computational methods. The results achieved by a penalty shifting technique applied to the final complete state are the following:

No. of iterations    Final state       No. of computed    Penalty       Performance
of penalty shift     constraint        functional         functional    functional
                     violation         values

0 (beginning)        1.0               1                  51.0          -
1                    9.4×10⁻³          107                0.168         0.167
2                    1.4×10⁻³          88                 0.171         0.169
3                    4.0×10⁻⁴          17                 0.171         0.169

The number of computed functional values per iteration decreases

(which is typical for penalty shifting methods) instead of increasing

(typical for penalty increase methods).
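The example's uncontrolled response can be reproduced by the method of steps (the discretization below is our own); it shows why the final complete state constraint is genuinely active:

```python
# A quick check of the example's uncontrolled dynamics (the Euler
# discretization is ours): x'(t) = -x(t-1) + u(t) on [0, 2],
# x(t) = 1 on [-1, 0], with u = 0. The method of steps gives
# x(t) = 1 - t on [0, 1] and x(t) = t**2/2 - 2*t + 3/2 on [1, 2],
# so x(1) = 0 and x(2) = -0.5: the final complete state on [1, 2]
# is far from the target 0, and the control really has to work.

N, m, h = 2000, 1000, 0.001          # grid on [0, 2], delay = m*h = 1

x = [1.0] * (m + 1)                  # history on [-1, 0]
for k in range(N):
    x.append(x[m + k] + h * (-x[k]))  # Euler step with u = 0

assert abs(x[m + N // 2] - 0.0) < 2e-3   # x(1) ~ 0
assert abs(x[m + N] - (-0.5)) < 2e-3     # x(2) ~ -0.5
```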

Bibliography

[1] Dubovitski A.J., Milyutin A.A.: Extremal problems with constraints. Journal of Computational Mathematics and Mathematical Physics (Russian), Vol. V, No 3, p. 395-453, 1965.

[2] Neustadt L.W.: An abstract variational theory with applications to a broad class of optimization problems. SIAM Journal on Control, Vol. V, No 1, p. 90-137, 1967.

[3] Goldshtein J.G.: Duality theory in mathematical programming (Russian). Nauka, Moscow 1971.

[4] Pshenitshny B.N.: Necessary conditions of optimality (Russian). Nauka, Moscow 1969.

[5] Luenberger D.G.: Optimization by vector space methods. J. Wiley, N. York 1969.

[6] Neustadt L.W.: A general theory of extremals. Journal of Computer and System Sciences, Vol. III, p. 57-91, 1969.

[7] Girsanov I.W.: Lectures on mathematical theory of extremal problems. University of Moscow, 1970.

[8] Wierzbicki A.P.: Maximum principle for semiconvex performance functionals. SIAM Journal on Control, Vol. X, No 3, p. 444-459, 1972.

[9] Galperin A.M.: Towards the theory of permissible directions (Russian). Kibernetika, No 2, p. 51-59, 1972.

[10] Fiacco A.V., McCormick G.P.: The sequential unconstrained minimization technique for nonlinear programming. Management Science, Vol. X, No 2, p. 360-366, 1964.

[11] Balakrishnan A.V.: A computational approach to the maximum principle. Journal of Computer and System Sciences, Vol. V, 1971.

[12] Wierzbicki A.P.: A penalty function shifting method in constrained static optimization and its convergence properties. Archiwum Automatyki i Telemechaniki, Vol. XVI, No 4, p. 395-416, 1971.

[13] Powell M.J.D.: A method for nonlinear constraints in minimisation problems. In R. Fletcher (ed.): Optimization, Academic Press, N. York 1969.

[14] Rosen J.B.: The gradient projection method for nonlinear programming. Part I, II. Journal of SIAM, Vol. VIII, p. 181-217, 1960; Vol. IX, p. 514-532, 1962.

[15] Wolfe P.: Methods of nonlinear programming. In J. Abadie (ed.): Nonlinear Programming, Interscience, J. Wiley, N. York 1967.

[16] Horwitz L.B., Sarachik P.E.: Davidon's method in Hilbert space. SIAM J. on Appl. Math., Vol. XVI, No 4, p. 676-695, 1968.

[17] Wierzbicki A.P.: Coordination and sensitivity analysis of a large scale problem with performance iteration. Proc. of V-th Congress of IFAC, Paris 1972.

[18] Wierzbicki A.P.: Methods of mathematical programming in Hilbert space. Polish-Italian Meeting on Control Theory and Applications, Cracow 1972.

[19] Chan H.C., Perkins W.R.: Optimization of time delay systems using parameter imbedding. Proc. of V-th Congress of IFAC, Paris 1972.


SUFFICIENT CONDITIONS OF OPTIMALITY FOR CONTINGENT EQUATIONS

V.I. Blagodatskih

Mathematical Institute of the USSR Academy of Sciences, Moscow, USSR

1. Statement of the problem

In this paper we prove sufficient conditions of optimality in the form of a maximum principle for controllable processes whose behaviour is described by a contingent equation.

Let E^n be the Euclidean n-space of states x = (x¹, ..., x^n) with norm ‖x‖ = √(x, x), and let Ω(E^n) be the space of all nonempty compact subsets of E^n with the Hausdorff metric

ρ(F, G) = min {d ≥ 0 : F ⊂ S_d(G), G ⊂ S_d(F)},

where S_d(M) denotes the d-neighborhood of the set M in the space E^n.

Let us consider controllable processes whose behaviour is described by the contingent equation

(1) ẋ ∈ F(x),

or the differential inclusion, as it is also called. Here F : E^n → Ω(E^n) is a certain given mapping. An absolutely continuous function x(t) is a solution of equation (1) on the interval [0, T] if the inclusion ẋ(t) ∈ F(x(t)) is valid almost everywhere on this interval.

Let M_0, M_1 be nonempty closed subsets of E^n. These subsets may be non-convex and non-compact. A solution x(t) given on the interval [0, T] performs the transfer from the set M_0 into the set M_1 in time T if the conditions x(0) ∈ M_0, x(T) ∈ M_1 are satisfied. The time-optimal control problem is to find a solution of equation (1) performing the transfer from the set M_0 into the set M_1 in minimal time.

Let G be an arbitrary nonempty closed subset of E^n. The function

(2) C(ψ) = max_{f ∈ G} (f, ψ)

of the vector ψ ∈ E^n is called the support function of the set G. If the maximum in expression (2) for a given vector ψ_0 is reached at a vector f_0 ∈ G, that is, C(ψ_0) = (f_0, ψ_0), then the


hyperplane (x, ψ_0) = C(ψ_0) is called the support hyperplane for the set G at the point f_0, and the vector ψ_0 is called the support vector at the point f_0. In this case the inequality (f − f_0, ψ_0) ≤ 0 is valid for any vector f ∈ G. As one can see from condition (2), the support function C(ψ) is a single-valued function of the set G. Conversely, if we know the support function C(ψ), we can recover only the convex hull of the set G, that is,

conv G = {f ∈ E^n : (f, ψ) ≤ C(ψ) for all ψ ∈ E^n}.

If F : E^n → Ω(E^n) is an arbitrary mapping, we can consider the support function of the set F(x) for any x ∈ E^n; we shall denote this function by C(x, ψ) and call it the support function of the mapping F:

C(x, ψ) = max_{f ∈ F(x)} (f, ψ).

The next lemma follows directly from the definitions of the support function C(x, ψ) and the Hausdorff metric.
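For a finite set the maximum in the definition is attained, and the support construction can be checked directly; the set G and the vector ψ_0 below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

# Support function of a finite set G in E^2 and the support-vector
# inequality (f - f0, psi0) <= 0; G and psi0 are arbitrary choices.
G = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0], [0.5, 0.5]])

def C(psi):
    return np.max(G @ psi)        # C(psi) = max over f in G of (f, psi)

psi0 = np.array([2.0, 1.0])
f0 = G[np.argmax(G @ psi0)]       # support point: maximizer for psi0
# every f in G lies on the non-positive side of the support hyperplane
assert np.all((G - f0) @ psi0 <= 1e-12)
```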

Lemma 1. If the mapping F : E^n → Ω(E^n) satisfies a Lipschitz condition (is continuous), then the support function C(x, ψ) satisfies a Lipschitz condition (is continuous) in x for any fixed vector ψ ∈ E^n. Conversely, if the support function C(x, ψ) satisfies a Lipschitz condition (is continuous) in x, then the respective mapping conv F : x → conv F(x) satisfies a Lipschitz condition (is continuous).

Together with the inclusion (1) let us consider the differential inclusion

(3) ẋ ∈ conv F(x).

Lemma 2. If an absolutely continuous function x(t) is a solution of equation (1) on the interval [0, T], then the inequality

(4) (ẋ(t), ψ) ≤ C(x(t), ψ)

is valid almost everywhere on this interval for any vector ψ ∈ E^n. Conversely, if condition (4) is valid for an absolutely continuous function x(t) almost everywhere on the interval [0, T] for any vector ψ ∈ E^n, then this function is a solution of equation (3) on the interval [0, T].


The proof of Lemma 2 follows directly from the definitions of the support function C(x, ψ) and of solutions of differential inclusions.

Let C_0(ψ) and C_1(ψ) be the support functions of the sets M_0 and M_1, respectively.

Maximum principle. Assume that the support function C(x, ψ) is continuously differentiable in x and the solution x(t) performs the transfer from the set M_0 into the set M_1 on the interval [0, T]. We shall say that the solution x(t) satisfies the maximum principle on the interval [0, T] if there exists a nontrivial solution ψ(t) of the adjoint system

(5) ψ̇ = −∂C(x(t), ψ)/∂x

such that the following conditions are valid:

A) the maximum condition

(6) (ẋ(t), ψ(t)) = C(x(t), ψ(t))

is valid almost everywhere on the interval [0, T];

B) the transversality condition on the set M_0: the vector ψ(0) is the support vector for the set M_0 at the point x(0), that is,

(7) C_0(ψ(0)) = (x(0), ψ(0));

C) the transversality condition on the set M_1: the vector −ψ(T) is the support vector for the set M_1 at the point x(T), that is,

(8) C_1(−ψ(T)) = (x(T), −ψ(T)).

Sufficient conditions of optimality in the form of the maximum principle are obtained in the following section.

2. The main result

The region of reachability Y_T for equation (1) is the set of all points x_0 ∈ E^n from which the transfer into the set M_1 can be performed in time not exceeding T. The set M_1 is strongly stable if M_1 lies in the interior of Y_τ for any τ > 0. In particular, if the set M_1 consists of a single point, then the definition of strong stability coincides with the definition of local controllability in the small of equation (1) at this point (1).

Theorem 1. Assume that the set M_1 is strongly stable and the support function C(x, ψ) satisfies a Lipschitz condition in x for any fixed vector ψ. Then the inclusion Y_{τ_1} ⊂ int Y_{τ_2} is valid for any τ_1, τ_2 with 0 ≤ τ_1 < τ_2.

Proof. Let x_0 ∈ Y_{τ_1} and let x(t) be a solution of equation (1) performing the transfer from the point x_0 into the set M_1 in time τ ≤ τ_1, that is, x(0) = x_0, x(τ) ∈ M_1. Since the set M_1 is strongly stable, M_1 ⊂ int Y_{τ_2 − τ}, and so x(τ) ∈ int Y_{τ_2 − τ}. Thus there exists ε > 0 such that S_{2ε}(x(τ)) ⊂ Y_{τ_2 − τ}. Since the support function C(x, ψ) satisfies a Lipschitz condition in x, by Lemma 1 the mapping conv F(x) also satisfies a Lipschitz condition. Thus the theorem on continuous dependence of solutions on the initial conditions is valid (2) for the inclusion (3). It follows that there exists a neighbourhood U(x_0) such that for any point y_0 ∈ U(x_0) there exists a solution y(t) of the inclusion (3) performing the transfer from the point y_0 into the set S_ε(x(τ)) in time τ, that is, y(τ) ∈ S_ε(x(τ)). Any solution of the inclusion (3) can be approximated with any accuracy by solutions of equation (1) (see Theorem 2.2 in paper (3)); therefore there exists a solution x*(t) of equation (1) with initial condition x*(0) = y_0 such that ‖x*(t) − y(t)‖ ≤ ε. Thus x*(τ) ∈ S_{2ε}(x(τ)), and the transfer from the point x*(τ) into the set M_1 can be performed in time ≤ τ_2 − τ, that is, y_0 ∈ Y_{τ_2}. Therefore U(x_0) ⊂ Y_{τ_2}, that is, x_0 ∈ int Y_{τ_2}. Q.E.D.

The support function C(x, ψ) for equation (1) is concave in x if for any vector ψ ∈ E^n, for any points x_1, x_2 ∈ E^n and for any numbers α, β ≥ 0, α + β = 1, the condition

(9) α C(x_1, ψ) + β C(x_2, ψ) ≤ C(α x_1 + β x_2, ψ)

is valid. This condition is equivalent to concavity of the multivalued mapping F(x), that is, to the condition

α conv F(x_1) + β conv F(x_2) ⊂ conv F(α x_1 + β x_2)

for any x_1, x_2 ∈ E^n.


If the support function C(x, ψ) is continuously differentiable in x, then condition (9) is equivalent (4) to the condition

(∂C(x_1, ψ)/∂x, x_2 − x_1) ≥ C(x_2, ψ) − C(x_1, ψ)

for any vector ψ ∈ E^n and for any points x_1, x_2 ∈ E^n. Let us define a weaker condition on the support function C(x, ψ). Let us say that the support function C(x, ψ) is concave in x at the point x_0 in the direction ψ_0 if the condition

(10) (∂C(x_0, ψ_0)/∂x, x − x_0) ≥ C(x, ψ_0) − C(x_0, ψ_0)

is valid for any x ∈ E^n.

Let x(t) be a solution of equation (1) on the interval [0, T] and let ψ(t) be the respective solution of the adjoint system (5). Let us say that the solution x(t) satisfies the strong transversality condition on the set M_1 if the condition

(11) C_1(−ψ(t)) < (x(t), −ψ(t))

is valid for any 0 ≤ t < T.

Note that if the set M_1 is strongly stable, the solution x(t) satisfies the maximum principle on the interval [0, T], and the support function C(x, ψ) is concave in x at the point x(t) in the direction ψ(t), 0 ≤ t ≤ T, then condition (11) is valid. Indeed, as will be shown in the proof of Theorem 2, under the given conditions the vector −ψ(t) is the support vector for the region of reachability Y_{T−t} at the point x(t). And since M_1 ⊂ int Y_{T−t} for any 0 ≤ t < T, condition (11) is valid.

The main result of this paper is

Theorem 2. Assume that M_0, M_1 are nonempty closed subsets of E^n, the solution x(t) of equation (1) performs the transfer from the set M_0 into the set M_1 on the interval [0, T] and satisfies the maximum principle on that interval, and ψ(t) is the solution of the adjoint system. Assume that the support function C(x, ψ) is concave in x at the point x(t) in the direction ψ(t) for any t ∈ [0, T], and the solution x(t) satisfies the strong transversality condition on the set M_1. Then the solution x(t) is optimal.

Proof. Let y(t) be an arbitrary solution of equation (1) defined on the interval [0, T]. The inequality

(12) d/dt (y(t) − x(t), ψ(t)) ≤ 0

is valid almost everywhere on this interval. Indeed, using Lemma 2 and conditions (6) and (10), we get

d/dt (y(t) − x(t), ψ(t)) = (ẏ(t), ψ(t)) − (ẋ(t), ψ(t)) + (y(t) − x(t), ψ̇(t)) ≤ C(y(t), ψ(t)) − C(x(t), ψ(t)) − (∂C(x(t), ψ(t))/∂x, y(t) − x(t)) ≤ 0.

Let Γ_τ, 0 ≤ τ ≤ T, be the hyperplane passing through the point x(τ) and orthogonal to the vector ψ(τ). It is impossible to perform the transfer from the hyperplane Γ_τ into the set M_1 in time θ < T − τ. Indeed, let y(t) be an arbitrary solution of equation (1) with the condition y(τ) ∈ Γ_τ. Integrating inequality (12) on the interval [τ, τ + θ] we obtain

(y(τ + θ) − x(τ + θ), ψ(τ + θ)) ≤ 0.

From the strong transversality condition (11) it follows that

C_1(−ψ(τ + θ)) < (x(τ + θ), −ψ(τ + θ)) = (y(τ + θ) − x(τ + θ), ψ(τ + θ)) − (y(τ + θ), ψ(τ + θ)) ≤ (y(τ + θ), −ψ(τ + θ)),

that is, C_1(−ψ(τ + θ)) < (y(τ + θ), −ψ(τ + θ)). This means that the point y(τ + θ) does not belong to the set M_1.

If a point y(τ) satisfies the condition (y(τ) − x(τ), ψ(τ)) < 0, then it is impossible to perform the transfer from this point into the set M_1 in time T − τ. Indeed, integrating inequality (12) on the interval [τ, T] we obtain (y(T) − x(T), ψ(T)) < 0.


This contradicts the transversality condition (8) on the set M_1. Thus, the hyperplane Γ_τ is the support hyperplane for the region of reachability Y_{T−τ} at the point x(τ), with the support vector −ψ(τ), for all 0 ≤ τ ≤ T.

From the transversality condition (7) on the set M_0 it follows that M_0 ∩ Y_T ⊂ Γ_0. Thus, it is impossible to perform the transfer from the set M_0 into the set M_1 in time < T, that is, the solution x(t) is optimal. Q.E.D.

Remark. Since the solution x(t) of equation (1) is also a solution of equation (3), by Theorem 2 the solution x(t) is also optimal for equation (3).

Corollary. Assume that the set M_1 is strongly stable, the support function C(x, ψ) is concave in x, and the solution x(t) performs the transfer from the set M_0 into the set M_1 on the interval [0, T] and satisfies the maximum principle on this interval. Then the solution x(t) is optimal.

The proof of this corollary coincides with that of Theorem 2, but we have to use the result of Theorem 1 instead of the strong transversality condition (11).

In the case where the sets M_0 and M_1 are points and F(x) is convex for all x ∈ E^n, this corollary was proved in the author's paper (5).

Thus, to solve the time-optimal control problem in the case where the set M_1 is strongly stable and the support function C(x, ψ) is concave in x, it is sufficient to find at least one solution of equation (1) which satisfies the maximum principle. Note that the solution may not be unique. In the case where we have some solution x(t), 0 ≤ t ≤ T, and want to know whether it is optimal or not, it is sufficient to verify all the conditions of Theorem 2. Note that we have to verify the condition of concavity of the support function C(x, ψ) only at the points x(t) in the directions ψ(t).

3. Examples

Let us now consider the "classical control process", whose behaviour is described by the system of differential equations ẋ = f(x, u), u ∈ U. Then the support function is

C(x, ψ) = max_{u ∈ U} (f(x, u), ψ).

The sufficient condition of optimality similar to that in the above corollary for the linear control process

ẋ = Ax + v,  v ∈ V,

was proved in paper (6) under the additional assumption of convexity of the sets M_0, M_1 and V. In paper (7) the condition of strong stability of the set M_1 was weakened to the strong transversality condition on the set M_1.

Example 1. Assume that the behaviour of the control system is described by a differential equation of order n

(13) x^(n) = f(x, ẋ, ..., x^(n−1), u),

where the vector u belongs to the set U(y), depending on y = (x, ẋ, ..., x^(n−1)). The set M_1 consists of the single point x = 0. Suppose that the following conditions are satisfied:

1) the functions f_1(y) = min_{u ∈ U(y)} f(y, u) and f_2(y) = max_{u ∈ U(y)} f(y, u) are continuously differentiable;

2) the function f_1(y) is convex and the function f_2(y) is concave;

3) the point x = 0 is an interior point of the set f(0, U(0)).

Then the support function

C(y, ψ) = y_2 ψ_1 + ... + y_n ψ_{n−1} + f_1(y) (ψ_n − |ψ_n|)/2 + f_2(y) (ψ_n + |ψ_n|)/2

is concave in y. It was shown in paper (1) that the process (13) is locally controllable in the small at the point x = 0 under assumption 3). Thus, the maximum principle is a sufficient condition of optimality for the process (13) under assumptions 1)-3).
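The concavity claimed for this support function can be spot-checked numerically. The sketch below takes n = 2 and an illustrative pair f_1 convex, f_2 concave with f_1 ≤ f_2 on the sampled region (all concrete choices are assumptions made for the demonstration, not taken from the paper) and tests inequality (9) at random points.

```python
import numpy as np

# Numerical check of concavity inequality (9) for the Example-1 support
# function with n = 2 and the illustrative choice
#   f1(y) = |y|^2 - 2  (convex),   f2(y) = 2 - |y|^2  (concave);
# f1 <= f2 on the unit ball, so U(y) = [f1(y), f2(y)] is nonempty there.
def C(y, psi):
    f1 = y[0] ** 2 + y[1] ** 2 - 2.0
    f2 = 2.0 - y[0] ** 2 - y[1] ** 2
    return (y[1] * psi[0]
            + f1 * (psi[1] - abs(psi[1])) / 2.0
            + f2 * (psi[1] + abs(psi[1])) / 2.0)

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    y1 = rng.uniform(-0.7, 0.7, 2)
    y2 = rng.uniform(-0.7, 0.7, 2)
    psi = rng.standard_normal(2)
    a = rng.uniform()
    lhs = a * C(y1, psi) + (1 - a) * C(y2, psi)
    rhs = C(a * y1 + (1 - a) * y2, psi)
    ok = ok and lhs <= rhs + 1e-12   # inequality (9)
```

The check passes because each term of C is concave: the y_2·psi_1 term is linear, f_1 is multiplied by a non-positive constant, and f_2 by a non-negative one.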

Oscillating objects whose behaviour is described by a differential equation of order 2 were considered in paper (8). Some conditions were obtained there by the method of regular synthesis under which the maximum principle is a sufficient condition of optimality. It is easy to verify that oscillating objects satisfy the above assumptions 1)-3).

Thus, optimality of trajectories can be obtained directly from the above corollary, without regular synthesis. More general results can be obtained by the given method; for example, the condition that trajectories reach switching lines under non-zero angles can be omitted. The support function C(x, ψ) was concave in x in Example 1, and we made use of the corollary to show that the solutions satisfying the maximum principle are optimal. The support function is not concave in the following example, and the sets M_0, M_1 and F(x) are not convex, but the given solutions satisfy all the conditions of Theorem 2 and thus are optimal.

Example 2. Consider the control system

(14) ẋ_1 = x_2 ,  ẋ_2 = −3 + (x_1/2) e^(−4x_1²) + u ,  u = ±1.

The set M_1 consists of two points, and the set M_0 consists of the two sets {x_2 = −3, x_1 ≥ 0} and {x_1 = 0, x_2 ≤ −3}. The sets M_0, M_1 and F(x) are not convex, and the set M_1 is not strongly stable for the given system. The support function

C(x, ψ) = x_2 ψ_1 + (−3 + (x_1/2) e^(−4x_1²)) ψ_2 + |ψ_2|

is continuously differentiable in x, but is not concave in the sense of definition (9). The adjoint system is

ψ̇_1 = −(1/2)(1 − 8x_1²) e^(−4x_1²) ψ_2 ,  ψ̇_2 = −ψ_1.

Two solutions x^±(t), together with the corresponding solution ψ(t) of the adjoint system, satisfy all the conditions of Theorem 2, and both of them are optimal.


References

1. Blagodatskih, V.I.: On local controllability of differential inclusions (Russian). Differenc. Uravn. 9, No. 2 (1973), 361-362.

2. Filippov, A.F.: Classical solutions of differential equations with multivalued right-hand sides (English transl.). SIAM J. Control 5 (1967), 609-621.

3. Hermes, H.: The generalized differential equation ẋ ∈ R(t, x). Adv. in Math. 4, No. 2 (1970), 149-169.

4. Ponstein, J.: Seven kinds of convexity. SIAM Review 9, No. 1 (1967), 115-119.

5. Blagodatskih, V.I.: Sufficient condition of optimality (Russian). Differenc. Uravn. 9, No. 3 (1973), 416-422.

6. Boltyanskii, V.G.: Linear problem of optimal control (Russian). Differenc. Uravn. 5, No. 3 (1969), 783-799.

7. Dajovich, S.: On optimal control theory in linear systems (Russian). Differenc. Uravn. 8, No. 9 (1972), 1687-1690.

8. Boltyanskii, V.G.: Mathematical methods of optimal control. Moscow (1969).


VARIATIONAL APPROXIMATIONS OF SOME OPTIMAL CONTROL PROBLEMS

TULLIO ZOLEZZI

Centro di Matematica e di Fisica Teorica del C.N.R. - Genova

1. Necessary and sufficient conditions are investigated such that the optimal states and controls, and the value, of a general optimal control problem depend in a continuous way on the data.

Let us consider the sequence of problems P_n, n ≥ 0, given by (u control, x state):

minimize ∫_{t_1n}^{t_2n} f_n(t, x, u) dt + h_n[x(t_1n), x(t_2n)],

ẋ = g_n(t, x, u) a.e. in (t_1n, t_2n),

with constraints

(t_1n, x(t_1n), t_2n, x(t_2n)) ∈ B_n ,  (t, x(t)) ∈ A_n if t ∈ [t_1n, t_2n],

u(t) ∈ V_n(t, x(t)) a.e. in (t_1n, t_2n),  ‖u‖_{L^p} ≤ e_n.

P_0 is given, and P_n, n ≥ 1, is to be considered as a "variational perturbation" of P_0. Assume that there exist optimal u_n, x_n for P_n, n ≥ 1. Variational convergence of {P_n} to P_0 means the following: there exist optimal u_0, x_0 for P_0 such that (perhaps for some subsequence)

(u_n, x_n) → (u_0, x_0) (in some sense),

min P_n → min P_0.

This means (a) existence of optimal controls for P_0; (b) "variational stability" of P_0 if P_n → P_0 variationally for "many" sequences P_n (depending obviously on the convergence of {(u_n, x_n)}).
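A one-dimensional caricature of this notion (a plain minimization, not a control problem; the perturbed objectives below are an assumption chosen for the illustration): for f_n(x) = (x − 1/n)² + 1/n, both the minimizers and the minimum values converge to those of f_0(x) = x², i.e. the sequence converges variationally.

```python
import numpy as np

# Toy variational convergence: f_n(x) = (x - 1/n)^2 + 1/n -> f_0(x) = x^2,
# with minimizers 1/n -> 0 = argmin f_0 and values 1/n -> 0 = min f_0.
xs = np.linspace(-2.0, 2.0, 400001)

def argmin_val(f):
    ys = f(xs)
    i = np.argmin(ys)
    return xs[i], ys[i]

mins = []
for n in [1, 2, 4, 8, 16]:
    mins.append(argmin_val(lambda x, n=n: (x - 1.0 / n) ** 2 + 1.0 / n))
# mins holds (approximate minimizer, minimum value) for each n
```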

2. Sufficient conditions for variational convergence.

In the above generality, we get

min P_n → min P_0 ,  x_n → x_0 uniformly

(and, generally speaking, no "usual" convergence is obtained about u_n) under general conditions on A_n, B_n, V_n, e_n, and not very strong assumptions about f_n, g_n. Moreover,

g_n linear in u implies u_n → u_0 in L^p.


Such convergence can fail for variable end time problems (simple examples show min P_n ↛ min P_0 for time-optimal problems with uniform convergence of the data). The general case is considered in Zolezzi (3).

Assume now that t_1n, t_2n are fixed (for every n), and

g_n(t, x, u) = a_n(t)x + b_n(t)u + c_n(t),

f_n = 0 ,  h_n depending on x(t_2n) only,

V_n(t, x) a compact polytope independent of (t, x).

Moreover suppose that lim inf h_n(y_n) ≥ h_0(y_0) if y_n → y_0, and for every z_0 there exists z_n such that lim sup h_n(z_n) ≤ h_0(z_0). Then

(a_n, c_n) → (a_0, c_0) and b_n → b_0 in L^1 imply (for some optimal u_n, n ≥ 0)

min P_n → min P_0 ,  x_n → x_0 uniformly ,  u_n → u_0 in every L^p, p < ∞.

The same conclusions hold with b_n → b_0 weakly in L^1 only, if either

(a) u is scalar, b_n is piecewise continuous and b_n(t) = 0 for no more than r points (r independent of n); also if g is non-linear in u, with a monotonicity assumption on g(t, x, ·);

or

(b) u ∈ R^s, s ≥ 1, and the following regularity assumption holds: for every p, q ∈ extr V_n and any orthonormal basis (y_1, ..., y_m), (Φ_n^{-1}(t) b_n(t)(p − q), y_j) = 0 for no more than r points or intervals, r independent of n (Φ_n being the principal matrix of ẋ = a_n x).

Moreover, u_0 is piecewise constant, and u_n → u_0 uniformly on continuity intervals of u_0.

The same results hold if we minimize either

∫_{t_1n}^{t_2n} f_n[t, u(t)] dt

or

∫_{t_1n}^{t_2n} (f_n(t, x) + λ|u|^p) dt ,  p ≥ 1 ,  λ ≥ 0.

If P_0 has uniqueness, the same results hold when we minimize

∫_{t_1n}^{t_2n} f_n(t, x, u) dt,

assuming that the convergence of {f_n} is "coercive", that is


lim inf f_n(t, x, u) ≥ f_0(t, x, v) + Θ(|u − v|^p) ,  p ≥ 1,

with Θ convex and strictly increasing.

Clearly, applications can be made to the variational stability of classical problems of the calculus of variations. About the above results, see Zolezzi (4).

3. Necessary conditions for variational convergence.

Take

g_n(t, x, u) = a(t)x + b_n(t)u + c(t),

so perturbing now only b, and minimize

min |y − x(T)| ,  t_2n = T fixed ,  y given

(minimum final distance problem). Then

b_n → b_0 in L^1 is a necessary condition for strong convergence of optimal controls for y in some restricted region, when either P_0 is (completely) controllable, or the regularity assumption holds and uniform convergence of optimal controls u_n → u_0 does not destroy optimality of u_0.

Moreover,

b_n → b_0 in L^1 is a necessary condition for strong convergence of optimal controls minimizing

|y − x(T)| + z·x(T) ,  y and z given.

About such results see Zolezzi (5).

4. Among the (few) results on such problems (applications of which can be found in many fields connected with optimization, for example perturbation and sensitivity problems) see the results of Cullum (1), Kirillova (2). Such known results are generalized and substantially extended in this work.

All the above mentioned results can be shown to be a by-product of a general method, called "variational convergence" by the present author, generalizing the classical direct method of the calculus of variations, and useful to obtain, for general minimum problems, both existence and "stability" under perturbations (from a variational point of view). See Zolezzi (6) about some abstract results on this subject.


References

(1) Cullum, J.: Perturbations and approximations of continuous optimal control problems. In: Mathematical Theory of Control (edited by Balakrishnan and Neustadt), Academic Press, 1967.

(2) Kirillova, F.M.: On the correctness of the formulation of optimal control problems. S.I.A.M. J. Control 1, 36-58 (1963).

(3) Zolezzi, T.: Su alcuni problemi debolmente ben posti di controllo ottimo. Ric. di Mat. 21, 184-203 (1972).

(4) Zolezzi, T.: Su alcuni problemi fortemente ben posti di controllo ottimo. To appear in Ann. Mat. Pura Appl.

(5) Zolezzi, T.: Condizioni necessarie di stabilità variazionale per il problema lineare del minimo scarto finale. To appear in B.U.M.I.

(6) Zolezzi, T.: On convergence of minima. To appear in B.U.M.I.


NORM PERTURBATION OF SUPREMUM PROBLEMS (*)

J. BARANGER, Institut de Mathématiques Appliquées, B.P. 53, 38041 GRENOBLE Cédex, FRANCE.

ABSTRACT

Let E be a normed linear space, S a closed bounded subset of E, and J an u.s.c. (for the norm topology) and bounded above mapping of S into ℝ. It is well known that in general there exists no s̄ ∈ S such that

J(s̄) = sup_{s ∈ S} J(s)

(even if S is weakly compact).

For J(s) = ‖x − s‖ (with x given in E), Edelstein, Asplund and Zisler have shown, under various hypotheses on E and S, that the set

{x ∈ E : there exists s̄ ∈ S such that ‖s̄ − x‖ = sup_{s ∈ S} ‖s − x‖}

is dense in E. Here we give analogous results for the problem

sup_{s ∈ S} (J(s) + ‖x − s‖).

These results generalize those of Asplund and Zisler and allow us to obtain existence theorems for perturbed problems in optimal control.

1. THE PROBLEM.

Let E be a normed linear space, S a closed and bounded subset of E, and J an u.s.c. (for the norm topology) and bounded above mapping of S into ℝ. We are looking for an s̄ ∈ S such that

(1) J(s̄) = sup_{s ∈ S} J(s).

A particular (and famous) case of problem (1) is the problem of farthest points (i.e. J(s) = ‖x − s‖, where x is given in E).

(*) This work is part of a thesis submitted at Université de Grenoble in 1973.


1.1. Problem (1) has no solution in general (even with S weakly compact and J(s) = ‖s − x‖ with x given in E). Here is a counterexample:

Let E be a separable Hilbert space with basis e_i, i ∈ ℕ.

S = {e_i, i ∈ ℕ} ∪ {0} is weakly compact.

For any x ∈ E we have:

‖x − e_i‖² = 1 + ‖x‖² − 2(x, e_i).

Now suppose that (x, e_i) > 0 for all i ∈ ℕ; then we have:

sup_{s ∈ S} ‖x − s‖ = √(1 + ‖x‖²), and this supremum is never attained.
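The counterexample can be made concrete by truncating to finitely many coordinates; the particular x below, with coordinates x_i = 2^(−i), is an illustrative choice satisfying (x, e_i) > 0.

```python
import numpy as np

# Truncated version of the counterexample: x_i = 2^(-i) > 0 for
# i = 1..N, so (x, e_i) > 0 for every retained basis vector.
N = 30
x = 0.5 ** np.arange(1, N + 1)
bound = np.sqrt(1.0 + x @ x)                 # sqrt(1 + ||x||^2)
dists = np.sqrt(1.0 + x @ x - 2.0 * x)       # ||x - e_i||, i = 1..N
# every distance stays strictly below the bound, while the tail
# distances approach it; ||x - 0|| = ||x|| is smaller still
```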

1.2. Existence results for the problem of farthest points.

As we have just seen in 1.1., this problem has no solution in general; however, Asplund [2], generalizing a result of Edelstein [1] -- who himself generalized a result of Asplund [1] -- has obtained the following:

Theorem (Asplund). Let E be a reflexive locally uniformly rotund Banach space and S a bounded and norm closed subset of E. Then the subset of the x ∈ E having farthest points in S is fat (*) in E.

2. THE PERTURBED PROBLEM.

As it is impossible to assert that problem (1) has a solution, we consider the perturbed problem: does there exist an s̄ ∈ S such that

(2) J(s̄) + ‖x − s̄‖ = sup_{s ∈ S} (J(s) + ‖x − s‖),

where

(3) S is a bounded and (norm) closed subset of the normed linear space E, J is an u.s.c. and bounded above mapping of S into ℝ, and x is given in E.

We shall call an s̄ ∈ S verifying (2) a J-farthest point (in short a JFP).

Theorem ]

Le~ E be a locally uniformly rotund and reflexive Banach space ;

then under hypothesis (3) the subset of the x E E admitting a JFP in S is a fat

subset of E.

Proof : The function r(x) : Sup (J(s)+llx-s!l) is convex, lipschitzian with constant sES

i and satisfies :

(~) A fat subset in E is a subset of E which contains the intersection of a

countable family of open and dense subsets of E. By the Balre category theorem

such a set is itself dense in E.


sup_{x ∈ B(y,b)} r(x) = sup_x sup_s (J(s) + ‖x − s‖) = sup_s [J(s) + sup_x ‖x − s‖] = r(y) + b.

Then the Corollary of Lemma 3 of Asplund [2] asserts that there exists a fat subset G of E such that for every y ∈ G, all p ∈ ∂r(y) (**) have a norm equal to one.

Take such a p ∈ ∂r(y), ‖p‖ = 1. We have: r(x) ≥ r(y) + ⟨p, x − y⟩, ∀x ∈ E. Therefore

(4) r(2y − x) ≥ r(y) + ⟨p, y − x⟩, ∀x ∈ E.

E being reflexive, there exists an x with ‖x − y‖ ≤ r(y) (we can always suppose r(y) > 0) such that:

⟨p, x − y⟩ = −r(y).

Then (4) implies r(2y − x) ≥ 2r(y). The converse inequality is trivial; so we have r(2y − x) = 2r(y).

Hence, for every n ∈ ℕ, there exists s_n ∈ S such that:

(5) ‖2y − x − s_n‖ + J(s_n) > 2r(y) − δ(1/n, (y − x)/r(y)),

where δ(ε, t) is the modulus of local uniform rotundity (***).

Set u_n = (s_n + x − 2y)/‖s_n + x − 2y‖ if ‖s_n + x − 2y‖ ≠ 0, u_n = 0 elsewhere, and

(6) t_n = s_n + u_n J(s_n).

We have t_n + x − 2y = s_n + x − 2y + u_n J(s_n) = u_n (‖s_n + x − 2y‖ + J(s_n)),

so first

‖t_n + x − 2y‖ = |J(s_n) + ‖s_n + x − 2y‖| = J(s_n) + ‖s_n + x − 2y‖ for n sufficiently large by (5),

and second

u_n = (t_n + x − 2y)/‖t_n + x − 2y‖.

Hence (5) gives

‖2y − x − t_n‖ ≥ 2r(y) − δ(1/n, (y − x)/r(y)),

and this implies ‖t_n − x‖ ≤ r(y)/n. Thus t_n converges towards x, and u_n towards (x − y)/r(y) = u.

Finally there exists a subsequence s_{n_q} such that lim_q J(s_{n_q}) = θ. Taking the limit in q in (6) we see that s_{n_q} converges (for the norm topology) towards s̄ = x − uθ. Then

‖s̄ − y‖ + J(s̄) = ‖x − uθ − y‖ + J(s̄) = |‖x − y‖ − θ| + J(s̄) ≥ ‖x − y‖ − θ + J(s̄) ≥ ‖x − y‖ = r(y),

because of the u.s.c. of J.

(*) B(y, b) is the ball {x : ‖x − y‖ < b}.

(**) ∂r(y) is the sub-differential of r at y.

(***) For ‖t‖ = 1, δ(ε, t) = inf {1 − ½‖t + u‖ : ‖u‖ = 1, ‖u − t‖ ≥ ε}.

3. APPLICATION IN OPTIMAL CONTROL THEORY.

We shall limit ourselves to just one example. The state equation is:

(7) −∇·(u∇y) = f ,  f ∈ L²(Ω) ,  y ∈ H₀¹(Ω),

where Ω is open in ℝⁿ.

A z_d being given in L²(Ω) (or H₀¹(Ω)), we are looking for a ū ∈ U_ad such that

(8) ‖y(ū) − z_d‖_{L² (or H₀¹)} = inf_{u ∈ U_ad} ‖y(u) − z_d‖,

where U_ad is a closed subset of

{u ∈ L^p(Ω) : 0 < α ≤ u(x) ≤ β a.e.}.

We take 1 < p < ∞ in order to ensure the local uniform rotundity of L^p(Ω). Notice that neither U_ad nor the mapping u → y(u) is convex.

It is impossible to apply the theorem of Asplund to problem (8), the hypotheses being too weak. But we can apply Theorem 1 to obtain:

Theorem 2. For every ε > 0, the subset of the w ∈ L^p(Ω) such that there exists ū ∈ U_ad satisfying

‖y(ū) − z_d‖_{L² (or H₀¹)} − ε‖ū − w‖_{L^p} = inf_{u ∈ U_ad} (‖y(u) − z_d‖_{L² (or H₀¹)} − ε‖u − w‖_{L^p})

is fat in L^p(Ω).

is fat in LP(~).

Proof: We apply Theorem 1 with

J(u) = −(1/ε) ‖y(u) − z_d‖.

It remains only to show that J is u.s.c. In fact we have:


Lemma 1. u → y(u) is a continuous mapping from L^p(Ω) into H₀¹(Ω) (these two spaces being endowed with their norm topology).

Lemma 1 is a consequence of:

Lemma 2. u → y(u) is a continuous mapping from L^p(Ω) with its norm topology into H₀¹(Ω) with the weak topology.

Proof of Lemma 2: Let u_m ∈ L^p converge towards u ∈ L^p in norm. Put y_m = y(u_m). The variational form of (7) gives

∫_Ω u_m (∇y_m)² = ∫_Ω f y_m.

Hence, by Poincaré's inequality,

α ∫ (∇y_m)² ≤ ‖f‖_{L²} ‖y_m‖_{H₀¹},

so the sequence y_m is bounded in H₀¹. Therefore there exists a subsequence y_{m_j} which converges weakly in H₀¹ towards a y. Let us now consider the variational form of (7):

(9) ∫ u_{m_j} ∇y_{m_j} ∇z = ∫ f z ,  ∀z ∈ H₀¹.

There exists a subsequence u_{m_{j_k}} which converges towards u a.e., and

|u_{m_{j_k}}(x) − u(x)| |∇z(x)| ≤ 2β |∇z(x)| ∈ L²(Ω).

Then Lebesgue's theorem implies that u_{m_{j_k}} ∇z converges (for the norm topology in L²) towards u∇z. We can now take the limit in (9) and obtain:

∫ u ∇y ∇z = ∫ f z ,  ∀z ∈ H₀¹,

so y = y(u). It is then trivial to obtain that the whole sequence y_m converges to y.

Proof of lemma 1 : Consider :

X m = fUm(VYm-Vy)2 _> ~f(VYm-VY)2 _> 0 .

We shall show that X converges towards zero m

X = fUm(VYm)e-2fumVYmVZ + fUm(VY)2 .

As the canonical injection from H¹₀ into L² is compact, ∫ u_m (∇y_m)² = ∫ f y_m converges towards ∫ f y = ∫ u (∇y)² ; −2 ∫ u_m ∇y_m ∇y converges towards −2 ∫ u ∇y ∇y (as we have seen in the proof of lemma 2). Another application of Lebesgue's theorem shows that ∫ u_m (∇y)² converges towards ∫ u (∇y)² .
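The continuity of u ↦ y(u) asserted by the lemmas can also be observed numerically. The following sketch (our illustration, not part of the original paper) solves a one-dimensional analogue of the state equation (7), −(u y′)′ = f on (0,1) with y(0) = y(1) = 0, by finite differences; a small perturbation of u produces a comparably small perturbation of y.

```python
import numpy as np

def solve_state(u, f, n=200):
    """Finite-difference solution of -(u y')' = f on (0,1), y(0)=y(1)=0.

    u and f are callables; this is a 1-D sketch of the state equation,
    not the paper's own discretization."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    um = u((x[:-1] + x[1:]) / 2.0)        # coefficient at the midpoints
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):                # assemble tridiagonal stiffness matrix
        A[i, i] = (um[i] + um[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -um[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -um[i + 1] / h**2
    y = np.linalg.solve(A, f(x[1:-1]))
    return x, np.concatenate(([0.0], y, [0.0]))
```

With u ≡ 1 and f ≡ 1 the discrete solution reproduces y = x(1 − x)/2, and changing u slightly changes y slightly, in line with Lemma 1.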


4. OTHER RESULTS.

Using Asplund's techniques, Zisler [1] has obtained three theorems which can be generalized as follows.

Theorem 3.

Let E be a Banach space whose dual is a locally uniformly rotund and strongly differentiable space (SDS) (*), S a closed and bounded subset of E', and J an u.s.c. and bounded above mapping of S into ℝ. Then the subset of the x ∈ E' having a JFP in S is fat in E'.

Theorem 4.

Let E be a weakly uniformly rotund (**) Banach space, S a weakly compact subset of E and J as in theorem 3. Then the subset of the x ∈ E having a JFP in S is fat in E.

Theorem 5.

Let E be a reflexive, Fréchet differentiable Banach space, S a weakly compact subset of E' and J as in theorem 3. The same conclusion as in theorem 3 is valid.

Proof : there is no difficulty in adapting the proofs given by Zisler, using the same device as in theorem 2. We give here a proof of theorem 5 based on a method different from Zisler's.

X being reflexive, there exists an x such that

‖x − y‖ = r(y)  and  r(2y − x) = 2r(y) .

Hence there exists s_n ∈ S such that

‖2y − x − s_n‖ ≥ 2r(y) − ε_n   (ε_n → 0 when n → ∞) .

There also exists f_n ∈ X' such that ‖f_n‖ = 1 and f_n(2y − x − s_n) = ‖2y − x − s_n‖ ; then

r(y) ≥ f_n(y − x) = f_n(2y − x − s_n) − f_n(y − s_n) ≥ 2r(y) − ε_n − ‖y − s_n‖ ≥ r(y) − ε_n .

(*) An SDS is a Banach space in which every convex continuous function is strongly differentiable on a G_δ dense in its domain. (See [3] for more details.)

(**) A Banach space is weakly uniformly rotund if ‖x_n‖ → 1, ‖y_n‖ → 1, ‖x_n + y_n‖ → 2 imply that x_n − y_n converges weakly to zero.


A theorem of Šmulian [1] now states that f_n converges (for the norm topology in X') towards an f with ‖f‖ = 1. Moreover

r(y) ≥ f(y − s_n) = f(2y − x − s_n) − f(y − x) , so lim_{n→∞} f(y − s_n) = r(y) .

S being weakly compact, there exists a subsequence of s_n converging weakly towards an s ∈ S. Such an s satisfies

‖s − y‖ ≥ f(y − s) = lim_n f(y − s_n) = r(y) ;

the theorem is then proved for J = 0. The device used in theorem 2 now gives the general case.

Remark.

One may look for other perturbations than ‖x − s‖ ; the case ‖x − s‖² has been solved by Asplund [4] when E is a Hilbert space. We have obtained, in collaboration with Temam [1], results for perturbations of the form φ(‖x − s‖), where φ is a positive, convex, increasing function such that lim_{u→∞} φ(u) = ∞. (E is supposed to be a reflexive Banach space having the property :

(H) If a sequence x_n converges weakly towards x and ‖x_n‖ converges towards ‖x‖, then ‖x_n − x‖ → 0 .)


BIBLIOGRAPHY

ASPLUND, E. [1]. The potential of projections in Hilbert space (quoted in Edelstein [1]).

ASPLUND, E. [2]. Farthest points of sets in reflexive locally uniformly rotund Banach spaces. Israel J. of Math. 4 (1966), p. 213-216.

ASPLUND, E. [3]. Fréchet differentiability of convex functions. Acta Math. 121 (1968), p. 31-47.

ASPLUND, E. [4]. Topics in the theory of convex functions. Proceedings of NATO, Venice, June 1968. Aldo Ghizzetti editor, Edizioni Oderisi.

BARANGER, J. [1]. Existence de solution pour des problèmes d'optimisation non convexe. C.R.A.S. t. 274, p. 307.

BARANGER, J. [2]. Quelques résultats en optimisation non convexe. Deuxième partie : Théorèmes d'existence en densité et application au contrôle. Thèse, Grenoble 1973.

BARANGER, J. and TEMAM, R. [1]. Non convex optimization problems depending on a parameter. À paraître.

EDELSTEIN, M. [1]. Farthest points of sets in uniformly convex Banach spaces. Israel J. of Math. 4 (1966), p. 171-176.

ŠMULIAN, V.L. [1]. Sur la dérivabilité de la norme dans l'espace de Banach. Dokl. Akad. Nauk SSSR (N.S.) 27 (1940), p. 643-648.

ZISLER, V. [1]. On some extremal problems in Banach spaces.


ON TWO CONJECTURES ABOUT THE CLOSED-LOOP TIME-OPTIMAL CONTROL

Pavol Brunovský

Mathematical Institute, Slovak Academy of Sciences

Bratislava, Czechoslovakia

Consider the linear control system

(L)  ẋ = Ax + u

(x ∈ ℝⁿ, A constant), with control constraints u ∈ U, where U is a convex compact polytope of dimension m ≤ n imbedded in ℝⁿ, containing the origin in its relative interior, and the problem of steering the system from a given initial position to the origin in minimum time.

While the theory of the time-optimal control problem for the sys-

tems (L) has been satisfactorily developed as far as the structure of

the open-loop optimal controls is concerned, this is not the case for their synthesis, the closed-loop time-optimal control.

To synthesize the open-loop controls into a closed-loop controller is formally always possible as soon as the optimal controls are unique. There are various reasons which make a synthesis desirable, the most important perhaps being that if a system which is under the action of a closed-loop optimal controller is deviated from its optimal trajectory by an instantaneous perturbation, the rest of its trajectory will again be optimal for the new initial position.

If the system (L) is normal, it is well known from basic optimal control theory that the set Ω of initial points x ∈ ℝⁿ for which the optimal control u_x(t), t ∈ [0, T(x)] (T(x) being the optimal time of steering x to 0), exists is open in ℝⁿ, the optimal controls are unique, and they are piecewise constant with values at the vertices of U. Their synthesis v is obtained by v(x) = u_x(0).

It is generally believed that the system under the action of the closed-loop controller v,

(CL)  ẋ = Ax + v(x),

exhibits the following properties :

(i) its behavior is indeed optimal

Page 354: 5th Conference on Optimization Techniques Part I

342

(ii) its behavior will not be severely affected by small perturbations.

Formulating conjecture (i) more precisely, it means that the solutions of (CL) coincide with the optimal trajectories of (L); the sense in which (ii) can be understood follows from Theorems 2 and 3 below.

Due to the discontinuity of v, care has to be taken with the definition of a solution of (CL). Numerous studies of discontinuous differential equations in the fifties led to the conclusion that the classical (Carathéodory) solution may not represent well the behavior of the system modeled by such an equation. In particular, it does not characterize the so-called sliding (chattering) which may occur along the surfaces of discontinuity. The necessity to modify the definition of solution is clearly seen if one tries to investigate conjecture (ii).

The best and most universal definition of a solution of a discontinuous differential equation is due to Filippov. This, applied to (CL), defines a solution of (CL) as a solution of the multivalued differential equation

(CL₀)  ẋ ∈ Ax + V(x)

in the usual sense, where

V(x) = ∩_{δ > 0} ∩_{μ(N) = 0} co v(B(x,δ) \ N) ,

μ is the Lebesgue measure in ℝⁿ and B(x,δ) is the ball with center x and radius δ.
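As a toy illustration (ours, not from the paper) of why this convexification is needed, consider the scalar field ẋ = −sign(x): a naive discretization chatters across the surface of discontinuity x = 0, whereas the Filippov solution may take the averaged value 0 ∈ co{−1, +1} and stay on it.

```python
import numpy as np

def euler_chatter(x0, dt, t_end):
    """Forward-Euler simulation of xdot = -sign(x), a toy discontinuous
    field (not system (CL) itself)."""
    xs = [x0]
    t = 0.0
    while t < t_end:
        xs.append(xs[-1] - dt * np.sign(xs[-1]))
        t += dt
    return np.array(xs)

traj = euler_chatter(1.0, 0.01, 2.0)
# After reaching 0 near t = 1 the discretized solution chatters in a band
# of width dt around 0; the Filippov solution stays at exactly 0.
```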

Thus, in order to justify (i), we have to prove that the Filippov solutions of (CL), i.e. the solutions of (CL₀), are optimal trajectories of (L).

Unfortunately, it turns out that this is not true in general, and the following theorem, which settles completely the case n = 2, shows that the systems for which (i) is not true are not exceptional.

We shall say that the closed-loop control v is "good" if every solution of (CL₀) is an optimal trajectory of (L) and, vice versa, every optimal trajectory of (L) is a solution of (CL₀).

Theorem 1. Let n = 2 and let (L) be normal. Then v is good if and only if the following conditions are not met : −A has two distinct real eigenvalues and there exists a vertex w such that the cone {λ(u − w) : u ∈ U, λ ≥ 0} contains the eigenvector of −A corresponding to its larger eigenvalue but not the other


eigenvector of −A.

In particular, v is always good if m = 1.

A typical example of bad synthesis is given at the end of the paper : v(x) = w₁ on the negative x₁-semiaxis, which has measure zero, and therefore this value is suppressed in the Filippov definition : the solution of (CL₀) from a point x = (x₁, 0), instead of being a solution of ẋ = Ax + w₁, will slide along the line of discontinuity x₂ = 0, with sliding velocity determined by co{Ax + w₂, Ax + w₄} ∩ {x₂ = 0}.

Let us now turn to the stability conjecture (ii). We shall model the perturbations as measurable functions p(t) satisfying the estimate |p(t)| ≤ δ, and we shall be interested in statements valid for any perturbation satisfying the given bound. These can be conveniently expressed in terms of solutions of the multivalued differential equation

(CL_δ)  ẋ ∈ Ax + V(x) + B(0, δ) .

Namely, φ(t) is a solution of (CL_δ) if and only if there exists a measurable function p(t), |p(t)| ≤ δ, such that φ(t) is a solution of the equation

ẋ ∈ Ax + V(x) + p(t) .

Of course, one cannot expect positive results in case the synthesis is not good for the unperturbed system. Therefore, we restrict ourselves to n = 2 and we shall assume that the system is normal and no vertex of U satisfies the conditions of Theorem 1. Under these assumptions the following is true :

Theorem 2. Let m = 1. Then, for any compact K ⊂ Ω and any η > 0 there exists a δ > 0 such that all solutions of (CL_δ) starting at points x ∈ K reach the origin in a time not exceeding T(x) + η.

Theorem 3. Let m = 1. Then, given any compact K ⊂ Ω and any η > 0, there exists a δ > 0 such that any solution of (CL_δ) starting at a point x ∈ K reaches B(0, η) at a time not exceeding T(x) and stays in B(0, η) afterwards.

As far as higher dimensions are concerned, no definite results have yet been obtained. However, there are some reasons to believe that no synthesis is good if n > 2 and m > 1, while the case n > 2, m = 1 is unclear.


For the detailed version of the proofs of Theorems 1-3, cf. Brunovský.

ẋ₁ = −x₁ + u₁ ,  ẋ₂ = x₂ + u₂ ,

w₁ = (1, 0) ,  w₂ = (0, 1) ,  w₃ = (−1, 0) ,  w₄ = (0, −1) ,

v(x) = w₄ for x₂ > 0 ,  v(x) = w₂ for x₂ < 0 .

References :

Brunovský, P. : The closed-loop time-optimal control, submitted for publication to SIAM Journal on Control.

Filippov, A.F. : Matematičeskij sbornik 51, 99-128 (1960).


COUPLING OF STATE VARIABLES IN THE OPTIMAL LOW THRUST ORBITAL TRANSFER PROBLEM

Romain HENRION

Aspirant F.N.R.S.

Aerospace Laboratory

University of Liège

BELGIUM

1. INTRODUCTION

The high specific cost of orbiting a satellite explains the importance of the low thrust orbital transfer problem. Indeed, mass and size of such an engine are much smaller than for an impulsive one, and its fuel expenditure can be made as low as wanted, if final time is left open. However, the numerous studies undertaken have stressed two main difficulties :

- the possible existence of singular arcs (intermediate thrust arcs, which do not proceed directly from the PONTRYAGIN maximum principle), the optimality of which is not ensured;

- the high sensitivity of optimal trajectories with respect to the unknown initial conditions, a consequence of the two-point-boundary-value problem introduced by the maximum principle.

We shall show here how these difficulties are reduced to a great extent by decoupling the state variables with respect to the thrust amplitude control.

2. PROBLEM FORMULATION

Using non-dimensional polar variables, the plane trajectory of a low thrust engine in a central inverse-square-force field may be described by the following equations :

(1)  ṙ = u_r

     θ̇ = u_θ / r

     u̇_r = u_θ²/r − 1/r² + (ξ/μ) sin ψ

     u̇_θ = −u_r u_θ/r + (ξ/μ) cos ψ

     μ̇ = −ξ/c

where r, θ, u_r, u_θ are position and velocity variables, μ the instantaneous mass and c the ejection velocity. Denoting by a the acceleration factor, we


have the constraint

0 ≤ ξ ≤ a .

The aim of optimization is to choose the control variables ξ and ψ so as to

minimize some cost function, e.g. fuel consumption.
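For a concrete feel of the state equations, here is a minimal RK4 integration sketch (our illustration; the nondimensional polar form of (1) and the numerical values used below are assumptions, not data from the paper):

```python
import numpy as np

def rhs(state, xi, psi, c=0.3228):
    """Right-hand side of the polar state equations: xi is the thrust
    setting, psi the thrust angle (the two controls)."""
    r, theta, ur, ut, mu = state
    return np.array([
        ur,                                               # rdot
        ut / r,                                           # thetadot
        ut**2 / r - 1.0 / r**2 + (xi / mu) * np.sin(psi), # radial accel.
        -ur * ut / r + (xi / mu) * np.cos(psi),           # transverse accel.
        -xi / c,                                          # mass flow
    ])

def rk4_step(state, xi, psi, dt):
    k1 = rhs(state, xi, psi)
    k2 = rhs(state + 0.5 * dt * k1, xi, psi)
    k3 = rhs(state + 0.5 * dt * k2, xi, psi)
    k4 = rhs(state + dt * k3, xi, psi)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

A coasting arc (ξ = 0) started on the circular orbit r = 1, u_r = 0, u_θ = 1 stays on it, which is a convenient sanity check of the signs.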

3. PONTRYAGIN MAXIMUM PRINCIPLE

Denoting by λ the adjoint state vector, the Hamiltonian takes the form

H = H₀ + ξ H₁ ,  where H = λ' q̇(q, ξ, ψ) .

As H is autonomous, we have the well-known equation

Ḣ = H_t = 0 ,

which shows that the numerical value of H is a trajectory constant.

The adjoint state variables are governed by the equation

λ̇ = −H_q ,

or explicitly :

λ̇_r = λ_θ u_θ/r² + λ_{u_r}(u_θ²/r² − 2/r³) − λ_{u_θ} u_r u_θ/r²

λ̇_θ = 0

λ̇_{u_r} = −λ_r + λ_{u_θ} u_θ/r

λ̇_{u_θ} = −λ_θ/r − 2 λ_{u_r} u_θ/r + λ_{u_θ} u_r/r

λ̇_μ = (ξ/μ²)(sin ψ · λ_{u_r} + cos ψ · λ_{u_θ})

Maximizing H according to the maximum principle, we get :

− for ψ :  sin ψ = λ_{u_r}/λ_v ,  cos ψ = λ_{u_θ}/λ_v ,  where λ_v = (λ_{u_r}² + λ_{u_θ}²)^{1/2} ;


− for ξ :  ξ = 0 if H₁ < 0 ,  ξ = a if H₁ > 0 .
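In code, the optimal thrust direction is simply the polar angle of the velocity-adjoint (primer) vector; a sketch (our illustration, assuming λ_v ≠ 0):

```python
import math

def thrust_angle(lam_ur, lam_utheta):
    """Optimal psi from the maximum principle: sin psi = lam_ur/lam_v,
    cos psi = lam_utheta/lam_v, with lam_v the norm of the pair."""
    lam_v = math.hypot(lam_ur, lam_utheta)
    return math.atan2(lam_ur / lam_v, lam_utheta / lam_v)
```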

However, if H₁ vanishes for some finite period, the principle no longer fixes ξ, for H is no longer an explicit function of ξ. This case corresponds to a so-called singular or intermediate thrust arc. Now, by deriving the condition H₁ = 0 several times, it is possible to determine precisely that special value ξ̃ which keeps H₁ = 0. But its optimality is by no means guaranteed.

We shall show that by decoupling the present state variables with respect to ξ, we shall be led to a reduced system, governed non-linearly by a new control.

Applying then the LEGENDRE-CLEBSCH condition, we easily establish all the

known optimality conditions for singular arcs.

4. PRACTICAL IMPORTANCE OF THE DECOUPLING OPERATION

The practical, i.e. numerical, importance of the decoupling operation is shown on figure 1. We represented the flow-diagram, starting with ξ, of the polar variables. One easily sees that the coupling of this system is extremely strong, as all the variables influence one another directly. Indeed, there are only two stages.

It is clear that this is very prejudicial to numerical precision and sta-

bility : some error in any variable has an immediate consequence on all the

other ones. But, the larger the number of stages between two given variables,

the lower the speed and magnitude of the influence of an error on the first

upon the value of the second. Therefore, our aim is to increase this number of

stages. We shall see that by making use of canonical transformations, it can be

brought up to four. Results show also a serious decrease in sensitivity to errors

in the unknown initial values.

5. CANONICAL TRANSFORMATIONS

Because of the hamiltonian formulation of the problem, the decoupling is

best achieved by making use of canonical transformations. They are described by

the equation

(2)  λ' dq − H dt = Λ' dQ − K dT + dF

which transforms the set (q, t, λ, H) into the set (Q, T, Λ, K). F stands for the corresponding generating function. Except for the last one, all the transformations we shall use are of the MATHIEU type, where F = 0.


5.1. 1st Transformation

Initial set : r, θ, u_r, u_θ, μ.

Associated differential form :

(3)  λ_r dr + λ_θ dθ + λ_{u_r} du_r + λ_{u_θ} du_θ + λ_μ dμ − H dt

The first transformation we shall do has been given by FRAEIJS de VEUBEKE¹, and is described by the relations optimizing ψ ; ψ is now a new state variable :

(4)  λ_{u_r} = λ_v sin ψ ,  λ_{u_θ} = λ_v cos ψ

New differential form :

(5)  λ_r dr + λ_θ dθ + λ_v du_λ + λ_ψ dψ + λ_μ dμ − H dt

Substituting (4) into (3) and identifying to (5), we get

(6)  u_r = u_λ sin ψ − (λ_ψ/λ_v) cos ψ ,  u_θ = u_λ cos ψ + (λ_ψ/λ_v) sin ψ

Making use of (4) and (6), the new expression of the Hamiltonian is readily derived :

(7)  H = H₀ + ξH₁ ,  H₁ = λ_v/μ − λ_μ/c ,

where H₀ is the thrust-free part of the Hamiltonian, expressed in the new variables by means of (4) and (6).

The new equations of state q and costate λ can then be computed from

q̇ = H_λ ,  λ̇ = −H_q .


(7) now shows that at present there are left only three variables directly governed by ξ, namely u_λ, μ and ψ. Their equations have the following forms :

u̇_λ = f(q, t, λ) + ξ/μ ;  μ̇ = −ξ/c .

5.2. 2nd Transformation

The aim of this transformation is to eliminate u_λ's dependency on ξ. Applying the method inspired from hydrodynamics and used by FRAEIJS de VEUBEKE² and KELLEY, KOPP and MOYER³, we solve the equation

du_λ / (ξ/μ) = dμ / (−ξ/c) ,

which produces

(8)  u_λ = −c ln μ + w .

The integration constant w will be used instead of u_λ as a new state variable. The associated differential form is :

(9)  λ_r dr + λ_θ dθ + λ_w dw + λ_ψ dψ + λ_μ dμ − H dt

Substituting (8) into (5) and identifying to (9), we get

(10)  λ_w = λ_v , the new λ_μ being λ_μ − c λ_w/μ .

The expression of the switching function H₁ is now :

(11)  H₁ = −λ_μ/c .

It shows that at present, μ is the only variable still directly controlled by ξ. It may therefore be considered as a new control variable replacing ξ. However, trying to apply the corresponding LEGENDRE-CLEBSCH condition, we see that it is trivially satisfied. We shall therefore continue our decoupling operations by decoupling now the system with respect to ψ.
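That the new variable w = u_λ + c ln μ introduced by (8) is no longer driven directly by ξ can be checked in two lines (our sketch; f stands for the ξ-free terms of u̇_λ, and the numerical values are arbitrary):

```python
def w_rate(xi, mu, f=0.7, c=0.3228):
    """Rate of w = u_lambda + c*ln(mu) along trajectories."""
    ulam_dot = f + xi / mu      # u_lambda is driven directly by xi
    mu_dot = -xi / c            # mass flow
    return ulam_dot + c * mu_dot / mu

# the thrust setting xi cancels out of the rate of w:
assert abs(w_rate(0.0, 0.8) - w_rate(0.03, 0.8)) < 1e-9
```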


5.3. 3rd Transformation

As for the second transformation, the equations

dr/sin ψ = r dθ/cos ψ = r dψ/cos ψ

lead to the change of variables

g = r cos ψ ,  h = r sin ψ ,

which implies

(12)  λ_g = cos ψ · λ_r − (sin ψ/r)(λ_ψ + λ_θ) ,  λ_h = sin ψ · λ_r + (cos ψ/r)(λ_ψ + λ_θ) .

6. SINGULAR ARCS

Figure 2 gives the corresponding flow-diagram. We can now draw the following clues :

1) ξ now governs only μ, whereas h is the only non-ignorable variable governed by ψ. Moreover, h appears non-linearly in all the equations. The application of the LEGENDRE-CLEBSCH condition to h, considered as a new control, will produce a useful condition. This point will be developed in the next paragraph;

2) the coupling of the present system is much weaker than for the polar variables. Indeed, it seems that there are now four stages. However, for numerical integrations, the first stage is eliminated, as μ has an analytical solution. In paragraph 8, we show how the number of four stages can be restored.

A singular arc being characterized by the vanishing of the switching function H₁, we get by multiple derivation :

(13)  H₁ = 0  ⇒  λ_μ = 0

(14)  Ḣ₁ = 0  ⇒  λ_h = 0

(15)  Ḧ₁ = 0  ⇒  λ_g²/λ_w − λ_w (g² − 2h²)/(h² + g²)^{5/2} = 0


As FRAEIJS de VEUBEKE¹ has shown that λ_w > 0, (15) implies

(16)  g² − 2h² ≥ 0 ,

which is equivalent to the well-known relation

(17)  −1/√3 ≤ sin ψ ≤ 1/√3 .

(13) and (14) now reduce the Hamiltonian to

(18)  H = H₀ = λ_g² h/λ_w − λ_w h/(h² + g²)^{3/2} .

Figure 2 shows that ψ influences the variables appearing in H only through h. So, as already indicated above, h may be considered as a new control. Indeed, (15) expresses the stationarity of H with respect to h :

(19)  0 = ∂H/∂h = λ_g²/λ_w − λ_w (g² − 2h²)/(h² + g²)^{5/2} .

Applying the LEGENDRE-CLEBSCH condition :

(20)  0 ≥ ∂²H/∂h² = 3 λ_w h (3g² − 2h²)/(h² + g²)^{7/2} .

By (16), this leads to

(21)  h ≤ 0 ,

and in conjunction with (17) :

(22)  −1/√3 ≤ sin ψ ≤ 0 .

Now, setting the further derivatives of H₁ equal to zero along the arc gives two additional relations : (23), an algebraic relation between λ_g, λ_w, g and h, and (24), which involves ξ̃ and the mass through −c ln μ + w. Equation (24) gives the value of ξ̃. The corresponding arc is physically feasible if 0 ≤ ξ̃ ≤ a.

Special cases

H being autonomous and λ̇_θ = 0, the numerical values of H and λ_θ are constant on the whole trajectory.

a) H = λ_θ = 0 : this case corresponds to free final time and polar angle. One easily shows¹ that necessarily ξ̃ ≤ 0. So, in this case, there are no singular, intermediate thrust arcs.

b) H = 0 , λ_θ ≠ 0 (LAWDEN's spiral)

Eliminating λ_g, λ_w and w from (24) by (15), (18) and (23), we get

ξ̃ = 3μ h (g² − 2h²)(9g⁴ + 4h⁴ − 7g²h²) / [(g² + h²)(3g² − 2h²)³] .

(16) and (21) then imply ξ̃ ≤ 0, which means that LAWDEN's intermediate thrust arc is not optimal. It therefore follows that optimal arcs can only exist for H ≠ 0.

With this result, one easily shows then that (21) and (22) actually reduce to

h < 0 ,  −1/√3 < sin ψ < 0 .

7. NEW CONSTANT OF MOTION (SINGULAR ARCS)

Making use of (12), (13) and (14), it is straightforward to show that the following expression is constant on a singular arc :

C = 3Ht + w λ_w − 2g λ_g .

Clearly, it may prove very useful for numerical integrations of singular arcs.


8. TWO ADDITIONAL CANONICAL TRANSFORMATIONS

In order to improve the decoupling, two additional transformations are useful.

8.1. 4th Transformation

First, we shall substitute for g, h and w the new variables α, β and v according to the relations

λ_α = g λ_w ,  λ_β = h λ_w ,  v = w − αg − βh .

The associated differential form

λ_α dα + λ_β dβ + λ_v dv + λ_ψ dψ + λ_μ dμ − H dt

leads to

α = −λ_g/λ_w ,  β = −λ_h/λ_w ,  λ_v = λ_w .

8.2. 5th Transformation

With a new variable η replacing v, the differential form

λ_α dα + λ_β dβ + λ_η dη + λ_ψ dψ + λ_μ dμ − H dt

implies by (10) the corresponding relation between λ_η and λ_v. Putting now η = e^ρ, we are led to λ_ρ = λ_η e^ρ and to the corresponding differential form with a non-vanishing generating function :

λ_α dα + λ_β dβ + λ_ψ dψ + λ_μ dμ + λ_ρ dρ − H dt − dF .

Hamiltonian :

(25)  H = H₀ + ξ H₁ ,

where H₀ now contains the gravity term −λ_β exp(3ρ)(λ_α² + λ_β²)^{−3/2} and H₁ = −exp(ρ) λ_ρ/c .

State equations :

(26)  α̇ = −2αβ + 3 exp(3ρ) λ_α λ_β (λ_α² + λ_β²)^{−5/2}

      β̇ = α² − β² + exp(3ρ)(2λ_β² − λ_α²)(λ_α² + λ_β²)^{−5/2}

      ρ̇ = β

      λ̇_β = −λ_ρ + 2α λ_α + 2β λ_β

      λ̇_ρ = 3 exp(3ρ) λ_β (λ_α² + λ_β²)^{−3/2} − ξ exp(ρ) λ_ρ/c

Figure 3 displays the corresponding flow-diagram. Even if ψ is eliminated, there are now four stages. So the decoupling of this system is very strong, and it should lead to a serious improvement in numerical precision and sensitivity. Especially the last variable ρ should be rather insensitive to a sudden change, i.e. a switch of ξ, as it is separated by four stages from ξ. The following application confirms this conjecture.


9. APPLICATION

In order to check the quality of the system (26), we have used it to describe a fuel-optimal transfer trajectory, with open final time and angle. We have H = λ_θ = 0 and consequently there are no singular arcs, but only thrusting (ξ = a) and coasting (ξ = 0) arcs. Now, there are only five equations which must necessarily be integrated. Indeed :

- the "orbital transfer of variables", introduced by FRAEIJS de VEUBEKE¹, allows to jump over the coasting arcs;

- λ̇_θ = 0 ;

- θ is ignorable;

- λ_ρ is ignorable, provided the switching function H₁ is replaced by −H₀ ;

- μ may be replaced by μ₀ − ξt/c .

At present, with

λ₁ = e^{−ρ} λ_α ,  λ₂ = e^{−ρ} λ_β ,  λ₃ = e^{−ρ} λ_ρ ,

it is easily seen that ρ is also ignorable, and that only the five following equations are left :

α̇ = −2αβ + 3 λ₁ λ₂ (λ₁² + λ₂²)^{−5/2}

β̇ = α² − β² + (2λ₂² − λ₁²)(λ₁² + λ₂²)^{−5/2}

λ̇₂ = β λ₂ + 2α λ₁ − λ₃

λ̇₃ = −β λ₃ + 3 λ₂ (λ₁² + λ₂²)^{−3/2} + ξ/c

As the two following equations are ignorable, their integration has no effect on precision :

ρ̇ = β ,  μ̇ = −ξ/c .


The corresponding flow-diagram is similar to figure 3, if μ and ρ are suppressed and if λ₁, λ₂ and λ₃ are substituted for λ_α, λ_β and λ_ρ : there are four stages.

Numerical results

We adopted a = 0.03 and c = 0.3228, which corresponds to a rather low thrust level. The numerical stability and precision difficulties of this problem are well known :

- Figure 4 displays the behaviour of ρ on one coasting and two thrusting arcs. According to the theoretical forecast, the join of the curves at the junction points where ξ switches is very smooth and neat. This confirms the high degree of decoupling announced by the flow-diagram;

- By integrating several trajectories forward and then backward, the new system of variables exhibited an increase of precision of at least 2 significant digits with respect to the polar variables;

- The sensitivity with respect to the unknown initial values was reduced by 1 and mostly 2 significant digits. As a consequence, convergence was achieved at points where the polar variables did diverge.

Moreover, these improvements increased with the length of the trajectory and the number of switchings.

I0. CONCLUSION

The decoupling operation presented in this paper leads to two important

results :

- an easy, direct theoretical examination of singular arcs;

- a considerable increase in numerical precision and decrease in sensitivity

with respect to initial conditions.


REFERENCES

I. FRAEIJS de VEUBEKE, B.

"Canonical Transformations and the Thrust-Coast-Thrust Optimal Transfer

Problem"

Astronautica Acta, 12, 4, 323-328 (1966)

2. FRAEIJS de VEUBEKE, B.

"Une g~n~ralisation du principe du maximum pour les syst~mes bang-bang

avec limitation du nombre de commutations"

Centre Belge de Recherches Mathématiques. Colloque sur la Théorie Mathématique du Contrôle Optimal. Vander, 55-67, Louvain (1970)

3. KELLEY, H.J., KOPP, R.E. and MOYER, H.G.

"Singular Extremals" in "Topics in Optimization"

(ed. G. LEITMANN), Ac. Press, chap. 3, 63-101, New York (1967)


FIGURE 1 : flow-diagram of the polar variables, showing the stages. (× means : x appears explicitly in the equation q̇ = f(...).)


FIGURE 3 : flow-diagram of the transformed system, showing the stages.

FIGURE 4 : behaviour of ρ on one coasting and two thrusting arcs.


OPTIMIZATION OF THE AMMONIA OXIDATION PROCESS USED IN

THE MANUFACTURE OF NITRIC ACID

P. Uronen and E. Kiukaanniemi

University of Oulu

Finland

1. Introduction

This work deals with a quite straightforward engineering application of modelling and optimization methods, and we will therefore concentrate on the practical points of view, because similar methods may also be used in optimizing other processes using an ageing type of catalyst.

To be able to analyse and optimize the operation of a plant, one must first have a mathematical model describing the real behaviour of the process. The modelling of a plant can be done in many different ways: one can use the physical and chemical relationships the process is based on, or one can try to find the model experimentally. So we can have different models for the same process.

The models based on physical and chemical relationships have a meaningful general structure, i.e. the right control and state variables will be automatically taken into account. However, the model will generally include so many nonlinear differential and partial differential equations that the use of the model without many approximations is impossible. The empirical models can normally be quite simple in form, but they do not have the same technically meaningful properties as models derived using the physical and chemical laws. Therefore a suitable combination of both methods may be a good compromise, and this semi-empirical method has been used in this work.

2. Process descriptioq

The plant studied is the ammonia oxidation process used in the manufacture of nitric acid. Figure 1 shows schematically the whole process.

1: Compressor  2: Preheater of air  3: Mixer  4: Burner  5: Waste heat boiler  6: Cooler  7: Oxidation tower  8: Acid coolers  9: Resulting acid

Fig. 1. Schematic representation of a nitric acid plant.

The raw materials are air and gaseous ammonia which are fed

after mixing (about 10-11 vol % ammonia in the mixture) to a reactor, where ammonia is catalytically burned to nitric oxide and water. The next steps are cooling of the reaction products, oxidation of nitric oxide to nitrogen dioxide, and then absorbing the NO₂ into water in several countercurrent towers to form nitric acid (55-60 %).

The first step, the oxidation of ammonia, is the most critical for the whole process. Various investigators /3/ have shown that the conversion in the oxidation of NO and in absorption is very high (97-99 %) and stable. Therefore the optimization of the ammonia oxidation process will mainly determine the optimum of the whole unit.

The details of the chemical and kinetic phenomena included in the catalytic oxidation of ammonia are not yet fully known. Probably the reaction takes place stepwise as fast bimolecular reactions on the catalyst surface, and thus we can assume, according to Oele /2/, that the rate controlling factor here will be the diffusion of ammonia molecules to the surface of the catalyst.

Platinum with 10 % rhodium is used as catalyst, and it is conveniently provided in the form of woven sieves (4-6 sieves) of fine wires.

The service period of the catalyst varies from 3 to 5 months, and its activity will decrease towards the end of the period principally according to Figure 2. A part of the platinum is lost in use, and this is an important factor to be taken into account in the optimization of the process.

Fig. 2. A decreasing-type activity curve of the platinum catalyst (conversion % versus normalized time).

3. Mathematical model

Based on the physical and chemical phenomena involved in the above process, we can qualitatively conclude that the most important factors affecting the conversion of the oxidation process are: ammonia-to-air ratio in the feed, feed temperature, total gas charge on the sieves, plant pressure, number of sieves and sieve dimensions.

In the following we will assume that the pressure remains constant. As mentioned above, we can assume that the rate of reaction is controlled by the chemisorption of ammonia molecules onto the surface of the catalyst.


Oele /2/ has derived the following empirical formula for the mass transfer in turbulent flow :

(1)  α = 0.924 · λ · Re^{0.33} / (P · Cp · d) , where

α is the mass transfer coefficient
λ is the thermal conductivity of the gas mixture
Re is the Reynolds number
P is the overall pressure
Cp is the molar specific heat of the mixture
d is the diameter of the catalyst wire.

Using now the well-known formula for the driving force in mass transfer, the mass balance and the definitions of conversion and Reynolds number, we get equation (2) :

(2)  ln(1 − x) = λ n f (11.82 m₀ − 28.6) / (Cp μ^{0.33} d^{0.67} G^{0.67}) , where

x is the conversion of NH₃ to NO
f is a characteristic factor for the sieves representing the wire area to sieve area ratio
n is the number of sieves
μ is the dynamic viscosity of the mixture
G is the total gas charge per sieve area
m₀ is the ammonia content in the feed.

Curievici and Ungureanu /1/ have shown that if we know m_o and T_o, the temperature of the feed, we can with reasonable accuracy calculate the mean temperature at the sieves and thus also the values of λ, the thermal conductivity, μ, the dynamic viscosity, and c_p, the molar specific heat, as linear functions of m_o and T_o in the normal operating range. So the following model can be derived:

(3)  ln(1 − x) = [T_o·10^-3 (99 m_o − 228.7) + 320 m_o^2 − 748.1 m_o − 70.67] / [T_o·10^-3 (1.43 + 3.84 m_o) + 13.06 m_o^2 + 5.03 m_o + 6.51] × n f / [(27.45·10^-3 T_o + 93.44 m_o + 13.61)^0.33 (d·10^6)^0.67 G^0.67]

This model can be adapted to a real plant by multiplying equation (3) by a correction factor k, which can be determined using the nominal operating-point values of the plant.


When this is done, for example, at three conversion levels representing three points on the ageing curve of the catalyst, we obtain the time dependence of the correction factor k. The ageing of the catalyst can thus be expressed as a polynomial of the normalized time t (normalized so that 1.0 represents 100 days).
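The evaluation of the model with an ageing correction factor can be sketched as follows. The operating point is the one used in the simulation constants below, the coefficient values are read from eq. (3) as printed here, and the three (t, k) pairs through which the quadratic ageing polynomial is fitted are hypothetical, not plant data.

```python
import numpy as np

# Sketch: evaluating the semiempirical model (3), scaled by a correction
# factor k, and expressing the ageing of the catalyst as a quadratic
# polynomial of normalized time t (1.0 = 100 days).

def conversion(To, mo, n, f, d, G, k=1.0):
    """Conversion x of NH3 to NO according to the model (3), scaled by k."""
    num = To * 1e-3 * (99.0 * mo - 228.7) + 320.0 * mo**2 - 748.1 * mo - 70.67
    den = To * 1e-3 * (1.43 + 3.84 * mo) + 13.06 * mo**2 + 5.03 * mo + 6.51
    charge = ((27.45e-3 * To + 93.44 * mo + 13.61) ** 0.33
              * (d * 1e6) ** 0.67 * G ** 0.67)
    return 1.0 - np.exp(k * (num / den) * n * f / charge)

# ageing: fit k(t) = a0 + a1 t + a2 t^2 through three correction factors
t_pts = np.array([0.0, 0.5, 1.0])
k_pts = np.array([1.00, 0.97, 0.90])          # hypothetical k values
ageing = np.polyfit(t_pts, k_pts, 2)

x_fresh = conversion(443.0, 0.105, 5, 1.39, 76e-6, 0.253, np.polyval(ageing, 0.0))
x_aged = conversion(443.0, 0.105, 5, 1.39, 76e-6, 0.253, np.polyval(ageing, 1.0))
```

With the hypothetical ageing values the aged catalyst yields a lower conversion than the fresh one, reproducing the decreasing activity curve of Fig. 2.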

Another possibility for matching the model to plant data would be to keep all numerical coefficients in equation (3) unknown; if a sufficient number of conversion measurements could be made, a quadratic error function could be formulated and the values of the coefficients found by minimizing this error function. The difficulty here is the reliable measurement of the conversion, which would also be needed to estimate the ageing curve of the catalyst.
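The coefficient-fitting alternative can be sketched with a surrogate model that is linear in the parameters; the regressor set, the coefficient values and the synthetic "measurements" below are all assumptions made only to illustrate the quadratic-error minimization.

```python
import numpy as np

# Sketch of the alternative matching strategy: keep the model coefficients
# unknown and estimate them from conversion measurements by minimizing a
# quadratic error function.  A surrogate ln(1-x) = theta^T phi(To, mo, G),
# linear in theta, is used; the data are synthetic.

rng = np.random.default_rng(0)
To = rng.uniform(420.0, 460.0, 40)
mo = rng.uniform(0.09, 0.12, 40)
G = rng.uniform(0.20, 0.30, 40)

# regressors centred at the nominal operating point
Phi = np.column_stack([np.ones(40), To - 440.0, mo - 0.105, np.log(G / 0.25)])
theta_true = np.array([-3.0, -0.004, -30.0, 0.67])     # invented coefficients
y = Phi @ theta_true + 0.01 * rng.standard_normal(40)  # noisy ln(1-x) data

# minimizing the quadratic error |y - Phi theta|^2 is a linear least-squares fit
theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

For a surrogate that is linear in the unknown coefficients the quadratic error function has a closed-form minimizer; for the full nonlinear form of eq. (3) an iterative minimizer would be needed instead.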

4. Simulation

To test the model, an extensive illustrative digital simulation study was carried out. In this study the effect of small changes in the process conditions (m_o, G, T_o, n, f) on the conversion was investigated, using a program written in FORTRAN IV. Figures 3, 4, 5 and Tables I and II represent the results. The simulation was carried out at one conversion level, corresponding to one age of the catalyst, i.e. one point on its ageing curve.

The results of the simulation show good conformity with operational

experiences and with previously published results concerning the

Fig. 3. Effect of the temperature on the conversion (conversion/% vs. T_o; m_o = 0.105, f_o = 1.39, D = 0.000076, G = 0.253, N = 5).


Fig. 4. Effect of the mixture strength in feed on the conversion (conversion/% vs. m_o; T_o = 443, f_o = 1.39, D = 0.000076, G = 0.253, N = 5).

Fig. 5. Effect of the gas charge on the conversion (conversion/% vs. G; T_o = 443, m_o = 0.105, f_o = 1.39, D = 0.000076, N = 5).

TABLE I: The effect of the number of sieves on the conversion

Constants: T_o = 443, m_o = 0.105, f_o = 1.39, D = 0.000076, G = 0.253

Number of sieves    Conversion
 3                  0.815120
 4                  0.894678
 5                  0.940000
 6                  0.965819
 7                  0.980528
 8                  0.988907
 9                  0.993680
10                  0.996400
11                  0.997949
12                  0.998831

TABLE II: The effect of the sieve dimensions on the conversion

Constants: T_o = 443, m_o = 0.105, G = 0.253, N = 5

D/f                 Conversion
0.000076/1.39       0.940000
0.000076/1.5        0.961976
0.000060/1.2        0.941904
0.000040/1.5        0.990603


behaviour of nitric acid plants. For example, we can see that increasing the total gas charge decreases the conversion (Fig. 5). Physically, an increase in gas charge means a shorter contact time, which explains the result.

5. Optimization

The model presented can be used to optimize the operation of the plant during one period of service. As mentioned earlier, the critical part of the process is the oxidation of ammonia; thus the conversion of the other parts of the process can be approximated as constant. The conversion of the oxidation process can be calculated, with the aid of the model, as a function of the air feed around the nominal operating-point values of the plant. This was done in the example solution at three conversion levels corresponding to three values of the factor k, i.e. three ages of the catalyst. Figure 6 shows the result.

Fig. 6. Effect of air flow on conversion (conversion/% vs. F_3/F_3N, at three catalyst ages).

These results can be combined by matching time polynomials to these curves; the final result for the conversion is then a function of the normalized time t and the air feed:

(4)  x = x(t|F_3) = a_0(t) + a_1(t) F_3 + a_2(t) F_3^2

The objective function for optimization purposes can now be formed. The total variable costs minus the price of the produced nitric acid and export steam, integrated over the service period, will be used for minimization:

(5)  J = ∫ [ K_1 F_2/F_2N + (K_2 + K_3) F_3/F_3N − K_4 ΔT/ΔT_N − K_5 x(t|F_3) F_2/F_2N ] dt

In this equation K_1, …, K_5 are weighted cost coefficients which must be individually evaluated for each plant. F_2 denotes the flow rate of ammonia and ΔT is the temperature rise in the burner. The subscript N denotes nominal values. The first three terms in equation (5)

mean the cost of ammonia, the cost of energy needed for compression

of air and the cost of platinum losses which as a first approximation

depend on the flow rate. The fourth term represents the value of export steam; the temperature rise can be shown to be easily calculated when the values of F_3, the air feed, and m_o, the ammonia concentration in the feed, are known /4/. The last term includes the conversion and expresses the value of the produced nitric acid.

Linear relationships between the costs and the relative changes from the nominal values have been assumed here; this is reasonable if the variations are not very large.

The exact values for coefficients K i should be determined from real

cost curves of the plant.

The constraints for this optimization problem are:

1) m_o ≤ 0.13, i.e. the ammonia content in the feed is less than 13%, which is the lower explosion limit of the ammonia-air mixture;

2) 0.7 F_3N ≤ F_3 ≤ 1.1 F_3N, the capacity of the air compressor.

The solution of this optimization task can be evaluated by using regressive dynamic programming. Using some preliminary numerical values, the solution was computed; Figures 7 and 8 represent the results graphically. From these curves we can conclude that the optimal strategy means a small air flow and a high ammonia concentration at the beginning of the service period, with the air feed F_3 increasing and the ammonia concentration decreasing towards the end of the service period. The normal strategy is to hold the ammonia concentration constant over the whole period.
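The backward dynamic-programming solution can be sketched as follows. The cost coefficients, the conversion surrogate x(t, F_3) and the one-grid-step actuator constraint below are hypothetical stand-ins chosen only so that the recursion is non-trivial; they are not the paper's data.

```python
import numpy as np

# Backward dynamic programming over normalized time for the relative air
# feed F3/F3N, minimizing a stage cost shaped like the integrand of eq. (5).

T, M = 50, 41
t = np.linspace(0.0, 1.0, T)
F3 = np.linspace(0.7, 1.1, M)                # compressor limits, constraint 2)
K1, K23, K4, K5 = 1.0, 0.3, 0.2, 2.5         # invented weighted cost coefficients

def x_conv(ti, f3):                          # hypothetical ageing conversion surrogate
    return (0.96 - 0.10 * ti) + (0.08 + 0.10 * ti) * (f3 - 0.9) - 0.15 * (f3 - 0.9) ** 2

def stage_cost(ti, f3):                      # costs minus production value, per stage
    return K1 + (K23 - K4) * f3 - K5 * x_conv(ti, f3)

# backward pass; F3 may move at most one grid step per stage (assumed constraint)
V = stage_cost(t[-1], F3)
policy = np.zeros((T, M), dtype=int)
for k in range(T - 2, -1, -1):
    Vn = np.empty(M)
    for j in range(M):
        lo, hi = max(0, j - 1), min(M, j + 2)
        best = lo + int(np.argmin(V[lo:hi]))
        policy[k, j] = best
        Vn[j] = stage_cost(t[k], F3[j]) + V[best]
    V = Vn

j = int(np.argmin(V))                        # best initial air feed
traj = [F3[j]]
for k in range(T - 1):
    j = policy[k, j]
    traj.append(F3[j])
```

Under these invented costs the computed trajectory starts at a low air feed and rises towards the upper compressor limit, qualitatively matching the optimal strategy described above.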

The preliminary calculations also show that the optimal control strategy derived here gives about a 2% better value for the objective function in comparison with the conventional method. The realization of the proposed method would be quite easy also with conventional


Fig. 7. Optimal control curve for NH3 concentration in the feed (m_o/% vs. normalized time): — dynamic programming, ---- constant feed.

Fig. 8. Optimal control curve for the plant capacity (F_3/F_3N vs. normalized time): — dynamic programming, ---- constant feed.

instruments, provided that the reliability and repeatability of the conversion measurement methods are considerably improved.

6. Summary

A semiempirical mathematical model for the industrial ammonia oxidation process has been derived. The model also takes the ageing of the catalyst into account. The model has been used to simulate the behaviour of the plant, and the results show good conformity with the


operational experiences and published qualitative results.

Based on this model, an illustrative optimization study of the whole

is the total variable costs minus total revenue value of production

integrated over one service period of the catalyst.

This minimization problem can be solved by using dynamic programming, and preliminary numerical calculations give an about 2% better result than the conventional strategy in plant operation.

7. References

/1/ Curievici, I., Ungureanu, S.T., An analogical model of the reactor for ammonia catalytic oxidation, Buletinul Institutului Politehnic din Iasi, Serie Noua, Tomul XIV (XVIII), Fasc. 1-2, 227, 1968.

/2/ Oele, A.P., Technological aspects of the catalytic combustion of ammonia with platinum gauze elements, Chem. Eng. Sci. 3, 1-2, 146, 1958.

/3/ Roudier, Houssier and Tracez, An examination of nitric acid process yields, Information Chimie, Vol. 12, 6, 27.

/4/ Uronen, P., Kiukaanniemi, E., Optimization of a nitric acid plant, Process Technology International, December 1972, Vol. 17, No. 12.


STOCHASTIC CONTROL WITH AT MOST DENUMERABLE NUMBER OF CORRECTIONS

J. Zabczyk

Institute of Mathematics PAN

Warsaw, POLAND

Let us consider Markov processes X^1, …, X^k defined on a state space E. By a Markov process X^i we shall mean an object (Ω, x_t, θ_t, F, F_t, P^i_x), where Ω is a sample space whose elements ω are mappings from [0, +∞) into E; x_t are random variables such that x_t(ω) = ω(t) for all t ≥ 0, ω ∈ Ω; F_t, F are σ-algebras; θ_t : Ω → Ω are the shift operators, θ_t ω(s) = ω(t + s); and P^i_x are probability measures defined on F. The measures P^i_x describe the motion of the process X^i starting from x ∈ E.

A strategy is a sequence π = ((τ_i, d_i)) where 0 = τ_1 ≤ τ_2 ≤ … are stopping times with respect to {F_t} and d_1, d_2, … are functions mapping Ω into {1, 2, …, k}, measurable with respect to F_{τ_1}, F_{τ_2}, … respectively. (The stopping times τ_1, τ_2, … may be interpreted as the moments of corrections, and the functions d_1, d_2, … indicate the processes chosen at the moments τ_1, τ_2, ….)

To simplify the next definition we shall suppose that the controller chooses the moment τ_{i+1} knowing the past of the process beginning from τ_i. That means τ_{i+1} = τ_i + τ̃_{i+1} ∘ θ_{τ_i} and d_{i+1} = d̃_{i+1} ∘ θ_{τ_i}, where τ̃_{i+1} is any stopping time and d̃_{i+1} any measurable function.

Every strategy π defines new measures P^{N,π}_x, N = 1, …, x ∈ E, which satisfy the conditions

P^{1,π}_x = P^{d_1(x)}_x ,

P^{N,π}_x(A ∩ θ^{-1}_{τ_N}(B)) = E^{N-1,π}_x( P^{d_N}_{x_{τ_N}}(B) ; A ) ,  P^{N,π}_x(A) = P^{N-1,π}_x(A) , A ∈ F_{τ_N} , B ∈ F , N = 2, 3, …

Let c_1, …, c_k, g be some non-negative Borel functions defined on E. Let us put

v^π_N(x) = E^{N,π}_x( ∫_0^{τ_{N+1}} g(x_s) ds − Σ_{i=1}^{N} c_{d_i}(x_{τ_i}) ) ,

v^π_∞(x) = lim_N v^π_N(x) if the limit exists, and −∞ otherwise,

v_N(x) = sup_π v^π_N(x) , v_∞(x) = sup_π v^π_∞(x) , x ∈ E.

There holds the following theorem (see [4]).

Theorem 1. If the functions c_1, …, c_k and G^1 g, …, G^k g, where G^i g(x) = E^i_x( ∫_0^∞ g(x_s) ds ), are bounded and (finely) continuous, then

1) the v_N are bounded Borel functions and v_N ↑ v_∞ ,

2) v_{N+1}(x) = sup_{τ,i} [ E^i_x( ∫_0^τ g(x_s) ds + v_N(x_τ) ) − c_i(x) ].
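The recursion 2) can be illustrated on a discrete analogue. Two absorbing random walks on a grid of an interval stand in for the processes X^1, X^2, the running reward g = 1 makes the total reward the exit time, and every correction (switch of process) costs c; the walks and the value c = 0.5 are illustrative choices, not the paper's processes.

```python
import numpy as np

# Discrete sketch of the correction problem: value iteration for
# "run the current chain, or pay c and restart with the better chain".

n, c = 41, 0.5

def walk(p_up):
    """Birth-death chain on n states, absorbed at both endpoints."""
    P = np.zeros((n, n))
    P[0, 0] = P[-1, -1] = 1.0
    for i in range(1, n - 1):
        P[i, i - 1], P[i, i + 1] = 1.0 - p_up, p_up
    return P

P1, P2 = walk(0.4), walk(0.6)        # X^1 drifts left, X^2 drifts right
g = np.ones(n); g[0] = g[-1] = 0.0   # unit reward per step until absorption

def no_switch_value(P):
    """v(x) = E_x(exit time): solve (I - P) v = g with v = 0 at the ends."""
    A = np.eye(n) - P
    A[0, :] = 0.0; A[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    b = g.copy(); b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

v1, v2 = no_switch_value(P1), no_switch_value(P2)   # no corrections allowed
for _ in range(6):                    # recursion 2): allow one more correction
    h = np.maximum(v1, v2) - c        # correct now: pay c, restart with best process
    u1, u2 = v1.copy(), v2.copy()
    for _ in range(2000):             # optimal stopping: u = max(h, g + P u)
        u1 = np.maximum(h, g + P1 @ u1); u1[0] = u1[-1] = 0.0
        u2 = np.maximum(h, g + P2 @ u2); u2[0] = u2[-1] = 0.0
    v1, v2 = u1, u2

v_switch = np.maximum(v1, v2)         # free choice of the initial process
v_fixed = np.maximum(no_switch_value(P1), no_switch_value(P2))
```

As in the example below, switching to the inward-drifting process just before exit lengthens the expected exit time by far more than the correction cost, so the value with corrections dominates the no-correction value.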

Let us consider an example.

Example. Let E be the interval [−β, β] and let −β, β be stopping points for both processes. If the cost functions c_1, c_2 are equal to a number c > 0 and the reward is equal to the exit time from the open interval (−β, β), then (see [4]) there exists a number β_o ≥ 0 such that the best strategy, in the case N = +∞, is the following: if β ≥ β_o and the starting point x is in (−β, 0], choose the process X^2 and wait until it reaches the point β_o; at that point choose the process X^1 and wait until it reaches the point −β_o, and so on. If β < β_o and the starting point x is in (−β, 0], choose the process X^2 and set τ_2 = +∞. The number β_o satisfies the equation

th 2β_o = 2β_o − c .

It is rather surprising that the correction points β_o, −β_o do not depend on β.

Prof. E. Dynkin posed the problem of finding the best strategy for the model considered in the example if, at any moment t ≥ 0, the controller knows the states of the process only at the moments τ_i ≤ t. As far as we know, the exact solution has not been found yet. In view of the solution of the example it seems reasonable to use the following strategy (for simplicity let us suppose that β ≥ β_o and x ∈ (−β, 0]): choose the process X^2 and, as the moment of the next correction, the moment −x + β_o.

Now we give an "excessive characterization" of the function v_∞. A Borel function φ is said to belong to Φ^i if it is finely continuous (see R. Blumenthal, R. Getoor [1]), φ ≥ −c_i and, for all x ∈ E and all stopping times τ,

φ(x) ≥ E^i_x( φ(x_τ) ).

Theorem 2. Under the same assumptions as in Theorem 1, the function v_∞ is the least function v which satisfies the conditions

v − G^i g ∈ Φ^i for i = 1, 2, …, k.


Proof. Since, for every i = 1, 2, …, k,

E^i_x( ∫_0^τ g(x_s) ds ) = G^i g(x) − E^i_x( G^i g(x_τ) ),

we have c_i(x) + (v_∞ − G^i g)(x) ≥ E^i_x( (v_∞ − G^i g)(x_τ) ).

Now let v − G^i g ∈ Φ^i for i = 1, 2, …, k; then v ≥ v_N for N = 1, 2, …. Indeed, v ≥ E^i_x( ∫_0^τ g(x_s) ds + v(x_τ) ) − c_i(x), so v ≥ G^i g − c_i, and therefore v ≥ v_1. If v ≥ v_{N-1}, then v ≥ v_N. This completes the proof.

Analogous theorems can be proved if we consider the stopping-time problem. In that case we define w^π_N and w^π_∞ analogously to v^π_N and v^π_∞, with the integral reward replaced by the stopped reward φ(x_τ), and put

w_N = sup_π w^π_N and w_∞ = sup_π w^π_∞ ,

where φ is a bounded, Borel measurable and finely continuous function defined on E. The equations 2) and 3) from Theorem 1 now have the form

w_{N+1}(x) = sup_{τ,i} [ E^i_x( w̃_N(x_τ) ) − c_i(x) ] , w̃_N = max(φ, w_N) ,

w_∞(x) = sup_{τ,i} [ E^i_x( w̃_∞(x_τ) ) − c_i(x) ] , w̃_∞ = max(φ, w_∞) .

An example of that kind (N = 2) is considered in [3]. An excessive characterization of w_∞ in the case c_i ≡ 0, i = 1, 2, …, k, was given by Grigelionis and Shiryaev [2].

References

[1] R.M. Blumenthal, R.K. Getoor, Markov Processes and Potential Theory, Academic Press, New York - London, 1968.

[2] B.I. Grigelionis, A.N. Shiryaev, On controlled Markov processes and the Stefan problem, Problemy Peredachi Informatsii, 4 (1968), pp. 60-72.

[3] J. Zabczyk, A mathematical correction problem, Kybernetika, 8 (1972), pp. 317-322.

[4] J. Zabczyk, Optimal control by means of switchings, Studia Mathematica, 45 (1973), pp. 161-171.


DESIGN OF OPTIMAL INCOMPLETE STATE FEEDBACK CONTROLLERS FOR LARGE LINEAR CONSTANT SYSTEMS

W.J. Naeije, P. Valk, O.H. Bosgra

Delft University of Technology, Delft, The Netherlands.

SUMMARY

In this paper the theory of linear optimal output feedback control is

investigated in relation to its applicability in the design of high-

dimensional linear multivariable control systems. A method is presented

which gives information about the relative importance of the inclusion

of a state vector element in the output feedback. The necessary condi-

tions of the optimization problem are shown to be a set of linear/quad-

ratic algebraic matrix equations. Numerical algorithms are presented

which take account of this linear/quadratic character.

I. INTRODUCTION

In the design of feedback controllers for linear, time-invariant systems

of high dimension, implementation restrictions may result in the additional constraint that the control is a function of only a limited set

of elements of the state vector. Optimization theory can then still be

used if the structure of the controller, as specified in advance, is

used as an additional constraint relation. In this paper the controller

will be assumed to be a time-invariant matrix of feedback gains. As

many servomechanism- and tracking problems can be reduced to regulator

problems with a time-invariant feedback matrix, the optimal output regu-

lator problem as discussed here can be viewed as the basis of a wide

class of controller design problems.

In existing literature on this subject, the necessary conditions of the

mathematical optimization problem are derived for a deterministic prob-

lem setting. As questions regarding existence and uniqueness of solutions

are easiest solved in the case of optimization over a finite time inter-

val, this problem has drawn attention first [i] . Extension of the results

to the infinite time interval showed the dependence of the time-invariant

feedback matrices upon the initial conditions of the problem. Moreover,

only necessary conditions could be given, and existence and uniqueness

of the solutions could not be guaranteed [2,3].

The corresponding stochastic output regulator problem [ 4,5] leads to

essentially similar necessary conditions. The necessity to choose noise


intensity matrices in this case is equivalent to choosing initial con-

ditions in the deterministic case. So the widely used technique to

assume initial conditions, uniformly distributed on the surface of the

n-dimensional unit sphere [I] is equivalent to assuming a unity noise

intensity matrix. However, in the stochastic problem such a trick can

be replaced by the proper choice of a noise intensity matrix, based on

knowledge of or assumptions on the physical background of the system.

This motivates the use of a stochastic problem setting here.

In using the optimal output regulator theory in controller design, two

main problems arise for which no sufficient solution exists in litera-

ture. At first, the designer needs information about the relative im-

portance of the inclusion of a state vector element in the output feed-

back, to be able to make a suitable compromise between implementation

costs and improved control behaviour. Secondly, numerical solution of

the matrix equations which constitute the necessary conditions of the

optimization problem must be performed using a suitable algorithm. This

paper treats these two problems after a short recapitulation of the

optimization problem.

II. NECESSARY CONDITIONS

Consider the time-invariant linear system

ẋ(t) = A x(t) + B u(t) + H w(t) ,  E{x(t_0)} = 0   (1)

y(t) = C x(t)   (2)

with state vector x(t), input vector u(t), noise vector w(t) and output vector y(t) of dimensions n, m, r, k respectively. w(t) is Gaussian white noise, characterized by E{w(t)} = 0, E{w(t) w^T(τ)} = Ψ δ(t − τ), in which Ψ ≥ 0 is a time-invariant positive semidefinite intensity matrix.

Constraining the feedback to

u(t) = F y(t) = F C x(t), F C = L (3)

leads to the optimization problem: choose F so as to minimize the quadratic performance index

J = E{xT(t) Q x(t) + uT(t) R u(t)} (4)

with x(t), u(t) and F satisfying (1), (2) and (3).

If S = E{x(t) x^T(t)} is the solution of the variance equation for the

closed-loop system:

F(S,F) = (A + BFC)S + S(A + BFC)^T + H Ψ H^T = 0   (5)


the performance index can be written:

J = tr{(Q + C^T F^T R F C) S}   (6)

Using the matrix-minimum principle [6] by introducing a Hamiltonian

function

H(S,F,P) = J + tr{F(S,F) P^T}   (7)

in which P is the adjoint matrix, the necessary conditions follow

from:

−∂H(S,F,P)/∂S = Ṗ = 0

∂H(S,F,P)/∂P = Ṡ = F(S,F) = 0   (8)

∂H(S,F,P)/∂F = 0

The feedback matrix F, resulting from the necessary conditions, is:

F = −R^{-1} B^T P S C^T (C S C^T)^{-1}   (9)

in which the adjoint matrix P = P^T > 0 is the solution of

P(A + BFC) + (A + BFC)^T P + Q + C^T F^T R F C = 0   (10)

and the variance matrix S = S^T > 0 is the solution of (5). The resulting performance criterion is

J* = tr{ P H Ψ H^T }   (11)

A necessary and sufficient condition for the existence of a solution F

which minimizes (6) is that the system (1), (2) be output feedback

stabilizable. Necessary and sufficient conditions for output feedback

stabilizability are presently unknown; a sufficient condition is given

in [ 7]. Evidently, a necessary condition is that (A,B) be stabilizable

and (D^T, A) be detectable, with Q = D D^T [8]. If an optimizing solution F exists, uniqueness cannot be proved generally. So the practical use of

the necessary conditions consists of finding a numerical solution to

the equations (5), (9) and (10) in which P > 0 and S > 0, implying that

the closed loop system is stable, and investigating the nature of this

solution, e.g. by comparing the performance index belonging to it with

the performance index obtained with optimal complete state feedback.

From the structure of the feedback matrix F follows, that the optimal

output feedback forms a minimum variance estimate ~(t) of x(t), given

the observation y(t) = C x(t). This linear estimate ~(t) is determined

by the projection: [ 9, p. 88]



x̂(t) = S C^T (C S C^T)^{-1} C x(t)   (12)

and this vector is used in the same feedback structure as is encountered

in optimal complete state feedback:

u(t) = −R^{-1} B^T P x̂(t)   (13)

In the sequel, with no loss of generality, the output matrix C will be assumed to be C = [C_1 | 0] with C_1 non-singular. In that case, (12) can be written as

x̂(t) = [ I , 0 ; S_12^T S_11^{-1} , 0 ] x(t) ,  S = [ S_11 , S_12 ; S_12^T , S_22 ]   (14)

with partitioning consistent with the partitioning of C.
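The projection (12) and its partitioned form (14) can be checked numerically: with C = [C_1 | 0] and C_1 = I, the estimate copies the measured states and fills in the rest through S_12^T S_11^{-1}. The dimensions and the random covariance below are arbitrary test data.

```python
import numpy as np

# Numerical check that eq. (12) and its block form (14) coincide, and that
# S C^T (C S C^T)^{-1} C is indeed a projector.

rng = np.random.default_rng(1)
nx, ky = 6, 2
M = rng.standard_normal((nx, nx))
S = M @ M.T + nx * np.eye(nx)                      # positive definite variance matrix
C = np.hstack([np.eye(ky), np.zeros((ky, nx - ky))])

T_proj = S @ C.T @ np.linalg.inv(C @ S @ C.T) @ C  # the projector of eq. (12)

S11, S12 = S[:ky, :ky], S[:ky, ky:]
T_part = np.zeros((nx, nx))                        # the block form of eq. (14)
T_part[:ky, :ky] = np.eye(ky)
T_part[ky:, :ky] = S12.T @ np.linalg.inv(S11)
```

Idempotence follows directly, since C S C^T cancels between the two factors when the projector is squared.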

III. SELECTION OF OUTPUT VARIABLES

For a practical application of the theory of optimal output feedback,

information is required about the importance of inclusion of each ele-

ment of the state vector in the output feedback. Comparing all possible

choices by computing the resulting performance indices seems unrealistic

from a computational point of view. However, any computationally less

involved method of comparison must be of an approximative nature. Here

a method is developed based on a comparison of trajectories of an ideal

system (e.g. using optimal feedback of the complete state vector) with

trajectories of the same system under conditions of output feedback.

Let x°(t) be the trajectory of the system under consideration and L° the optimal feedback matrix using complete state feedback:

ẋ°(t) = A x°(t) + B u°(t) + H w(t) ,  u°(t) = L° x°(t)   (15)

Let x(t) be the trajectory of the system with output feedback matrix L:

ẋ(t) = A x(t) + B u(t) + H w(t) ,  u(t) = L x(t) = F C x(t)   (16)

The difference between both trajectories, e(t) = x(t) − x°(t), is governed by the equation [2]

ė(t) = (A + BL) e(t) + B (L − L°) x°(t)   (17)

Define the source term vector q(t) in (17) as

q(t) = B (L − L°) x°(t)   (18)


Assuming that A + BL is stable, the difference between both trajecto-

ries will be minimum in some sense if q(t) is minimal. Introduce as a

source term objective function

I = E{q^T(t) q(t)} = E{tr(q(t) q^T(t))}   (19)

Inserting (18) in (19) and interchanging the expectation and trace operators leads to

I = tr[ (L − L°) S° (L − L°)^T ]   (20)

where S ° is the variance matrix of the optimal system using complete

state feedback, which is the symmetric positive (semi)definite solu-

tion of the Lyapunov equation

(A + BL°)S° + S°(A + BL°)^T + H Ψ H^T = 0   (21)

The necessary condition for minimizing I over all possible choices L

within the constraint L = FC is

∂I/∂F = 0   (22)

Elaboration of (22) leads to

L* = L° S° C^T (C S° C^T)^{-1} C   (23)

assuming the inverse exists. Note that L*, the output feedback matrix minimizing the source term in (17), is obtained by applying a projector operating upon the optimal complete state feedback matrix L° [10, Ch. XI]. The corresponding extreme value of the source term objective function is

I* = tr[ (L* − L°) S° (L* − L°)^T ]   (24)

Using the adopted structure of the output matrix C and the corresponding partitioning of S° and L°,

S° = [ S°_11 , S°_12 ; S°_12^T , S°_22 ] ,  L° = [ L°_1 | L°_2 ]   (25)

and combining (24) and (23) leads to

I* = tr[ L°_2 (S°_22 − S°_12^T S°_11^{-1} S°_12) L°_2^T ]   (26)

As a computational alternative, (26) can be converted into [11]

I* = tr[ L°_2 (S̄_22)^{-1} L°_2^T ]   (27)

with

S̄ = (S°)^{-1} = [ S̄_11 , S̄_12 ; S̄_12^T , S̄_22 ]   (28)


and partitioning consistent with (25).

Equations (26) and (27) provide a computationally feasible means of

estimating the relative importance of inclusion of each state vector

element in the output feedback structure. Assume the optimal complete

state feedback parameters L° and S° are known. Assume further that a

set of l state vector elements U_l is selected on technological grounds as possible measured output elements (l ≤ n). From the set U_l, k ≤ l elements are to be selected to form the output vector. This selection may be performed as follows. Compute I*_i by omitting the i-th element from U_l, i = 1, 2, …, l, and determine I*_j = min_i I*_i. Form U_{l-1} by omitting the j-th element from U_l. This process can be repeated up to U_k. It should be noted that

I* in eq. (24) forms the square of a matrix norm of the difference of

L° and a projection of L°, and that this projection, L*, is not used as a feedback matrix in the selection procedure. This implies that a

selection procedure, based on equation (24), can still be useful in

those cases in which the feedback matrix L* does not stabilize the system in eq. (17). Based on eq. (24), other selection procedures are possible

and further engineering constraints can be included in the decision

process. Also the optimal complete state feedback matrices L ° and S °

can be replaced by corresponding other matrices which render "ideal"

system behaviour.
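The greedy element-elimination described above can be sketched numerically. L° and S° below are random stand-ins for the optimal complete state feedback matrix and variance matrix (assumptions, not plant data); I_star implements the projection (23) and the objective (24), and the loop mirrors the repeated omission step.

```python
import numpy as np

# Sketch of the selection procedure of Section III: drop, one at a time,
# the state whose omission increases the source-term objective I* least.

rng = np.random.default_rng(2)
nx, mu = 6, 2
Lo = rng.standard_normal((mu, nx))
M = rng.standard_normal((nx, nx))
So = M @ M.T + nx * np.eye(nx)

def I_star(measured):
    """I* of eq. (24) when only the states in `measured` are fed back."""
    C = np.eye(nx)[list(measured)]
    Ls = Lo @ So @ C.T @ np.linalg.inv(C @ So @ C.T) @ C   # eq. (23)
    E = Ls - Lo
    return float(np.trace(E @ So @ E.T))

U = list(range(nx))              # candidate set U_l; here all states qualify
dropped = []
while len(U) > 2:                # reduce U_l down to U_k, here k = 2
    costs = {i: I_star([j for j in U if j != i]) for i in U}
    worst = min(costs, key=costs.get)      # omitting it hurts least
    dropped.append(worst)
    U.remove(worst)
```

Since L* is the exact minimizer of I over the feasible set L = FC, enlarging the measured set can only decrease I*, which is what makes the greedy ranking meaningful.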

IV. SOLUTION OF NECESSARY CONDITIONS

For a numerical solution of the optimization problem two basic ap-

proaches are:

a. Use of numerical function minimisation algorithms based on the

gradient of the performance index with respect to the parameter

matrix L;

b. Numerical solution of the matrix equations e.g. by iterative pro-

cedures.

The gradient of the performance index with respect to the parameter

matrix L can be analytically derived, using a suggestion by Kwakernaak

for a similar problem [ 12, Ch. 5.7]. This derivation (see appendix I)

gives as a result:

∂J/∂L_1 = 2 ( R L_1 S_11 + B^T P [ S_11 ; S_12^T ] )   (29)


in which the output feedback matrix L has the structure

L = [ L_1 | 0 ]   (30)

and L and S are partitioned consistent with the adopted structure of

the output matrix C. S and P follow from

(A + BL)S + S(A + BL)^T + H Ψ H^T = 0   (31)

P(A + BL) + (A + BL)^T P + Q + L^T R L = 0   (32)

Numerical algorithms that use (29) need the solution of (31) and (32)

for each step where gradient evaluation is required. As the number of

gradient evaluations generally is large, either in a pure gradient

method with a small step size or in quadratically convergent search

methods, this class of numerical techniques does not seem to be easily

applicable for high-dimensional systems. Moreover, no use is made of insight into the analytical properties of the equations, nor of the analytically known solution for the feedback matrix L, equation (9).
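Approach (a) can be sketched as follows, with S obtained from the Lyapunov equation (31) solved in Kronecker (vec) form and the gradient taken by finite differences rather than from the analytic expression (29). The small random system and the weights are arbitrary illustrative data, not from the paper.

```python
import numpy as np

# Sketch of gradient-based minimisation of J(L1) = tr{(Q + L^T R L) S}
# subject to the Lyapunov equation (31), with L = [L1 | 0] as in eq. (30).

rng = np.random.default_rng(3)
nx, mu, ky = 4, 1, 2
A = rng.standard_normal((nx, nx)) - 4.0 * np.eye(nx)   # comfortably stable
B = 0.3 * rng.standard_normal((nx, mu))
Q = np.eye(nx); R = np.eye(mu); W = np.eye(nx)          # W plays the role of H Psi H^T

def lyap(Ac, Cc):
    """Solve Ac X + X Ac^T + Cc = 0 via the Kronecker (vec) form."""
    K = np.kron(np.eye(nx), Ac) + np.kron(Ac, np.eye(nx))
    return np.linalg.solve(K, -Cc.flatten(order="F")).reshape((nx, nx), order="F")

def cost(L1):
    L = np.hstack([L1, np.zeros((mu, nx - ky))])        # L = [L1 | 0]
    S = lyap(A + B @ L, W)
    return float(np.trace((Q + L.T @ R @ L) @ S))

L1 = np.zeros((mu, ky))
J0 = cost(L1)
for _ in range(40):                  # finite-difference descent with backtracking
    grad = np.zeros_like(L1)
    for idx in np.ndindex(*L1.shape):
        Ld = L1.copy(); Ld[idx] += 1e-6
        grad[idx] = (cost(Ld) - cost(L1)) / 1e-6
    step = 0.05
    while step > 1e-10 and cost(L1 - step * grad) > cost(L1):
        step *= 0.5
    if cost(L1 - step * grad) < cost(L1):
        L1 = L1 - step * grad
J1 = cost(L1)
```

Each gradient evaluation here costs several Lyapunov solves, which illustrates the text's point that pure gradient methods scale poorly to high-dimensional systems.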

For high-dimensional systems, solution of the matrix equations (5,9,10)

by successive approximation techniques seems useful. At first, proper-

ties of the matrix equations must be investigated.

Equations (10,9) can be written as

A^T P + P A + Q − P B R^{-1} B^T P + [ I − C^T (C S C^T)^{-1} C S ] P B R^{-1} B^T P [ I − S C^T (C S C^T)^{-1} C ] = 0   (33)

Using the adopted structure for C and the partitioning of S (14), the projector in (33) can be written:

I − S C^T (C S C^T)^{-1} C = [ 0 , 0 ; −S_12^T S_11^{-1} , I ]   (34)

thus (33) becomes:

F(P,S) = A^T P + P A + Q − P B R^{-1} B^T P + [ 0 , −S_11^{-1} S_12 ; 0 , I ] P B R^{-1} B^T P [ 0 , 0 ; −S_12^T S_11^{-1} , I ] = 0   (35)

If consistent partitionings are made:

F = [ F_11 , F_12 ; F_12^T , F_22 ]   (36)


then F_22 turns out to be linear in P_22 and independent of S:

F_22 = P_22 A_22 + A_22^T P_22 + P_12^T A_12 + A_12^T P_12 + Q_22 = 0   (37)

while F_11 and F_12 are quadratic in P_11 and P_12. So F(P,S) is a mixed linear/quadratic matrix equation in P.

It can be noted that the appearance of the S-terms in (35) can be eliminated by applying a similarity transformation to the state vector:

x^+ = [ I , 0 ; −S_12^T S_11^{-1} , I ] x   (38)

Although this transformation can not be computed in advance and so has

no practical significance, it does not affect the linear/quadratic

character of (35) and it shows that the role of S in equation (35) is

only limited to a transformation of state space.

In conclusion, a computational algorithm primarily must meet the requirements set by the linear/quadratic character of F(P,S) = 0.

At present, two important algorithms for the equations (5,9,10) are the Axsäter algorithm [13], based on [5] and later adapted to the time-invariant case by Levine and Athans [14], and a simpler algorithm suggested by Anderson and Moore [13, p. 314]. Main disadvantages of

these algorithms are:

a. The algorithms do not take advantage of the properties of the matrix

equations as mentioned before;

b. The algorithms require a stabilizing initial output feedback matrix

which can be difficult to determine;

c. The main drawback of both algorithms is the necessary condition

that the closed-loop system matrix remains stable in the course of the iteration. The algorithms provide no guarantee for this

stability, and practical applications show that in all but the

simplest cases both algorithms fail for this reason. Also the fact that Axsäter's algorithm converges in the performance index J = tr(P H Ψ H^T) [5,14] is only valid as long as the closed-loop

system remains stable, and so has no significance as a proof of

convergence.

Better algorithms might be developed if the mentioned properties of

the equations (5,9,10) are taken into account:

a. The presence of the S-terms in eq. (10) results in a transformation

of the state space;


b. The equation (10) has a mixed linear/quadratic character; as a

result of the adopted structure of the output matrix C, the linear

and the quadratic parts appear in separate partitions of the matrix

equation.

If a Newton-Raphson algorithm could be analytically derived for the

equations (5,9,10), these properties would be incorporated in the

algorithm. However, the complexity of the equations prohibits such an

approach. Thus, a linear converging algorithm is the only realizable

proposition. As the role of S in eq. (10) is limited, Newton-Raphson applied to eq. (10) separately, assuming P as the only variable, may provide a basis for an algorithm. The result is (see appendix II):

P_{k+1}(A − B R^{-1} B^T P_k) + (A − B R^{-1} B^T P_k)^T P_{k+1} + Ω_k P_{k+1} B R^{-1} B^T P_k Ω_k^T + Ω_k P_k B R^{-1} B^T P_{k+1} Ω_k^T + Q + P_k B R^{-1} B^T P_k − Ω_k P_k B R^{-1} B^T P_k Ω_k^T = 0   (39)

with

Ω_k = [ 0 , −S_11^{-1} S_12 ; 0 , I ]_k

Eq. (39) is a linear matrix equation in P_{k+1}. Numerical solution re-

quires the use of a Kronecker-product and this is inefficient or im-

possible for high-dimensional systems. As Lyapunov matrix equations can

be efficiently solved, even for high-dimensional systems [15,16], an

adaptation of (39) to the Lyapunov structure is desired. Three algo-

rithms that perform this step will be suggested. All are based on the

replacement in eq. (39) of the term

Ω_k P_{k+1} B R^{-1} B^T P_k Ω_k^T + Ω_k P_k B R^{-1} B^T P_{k+1} Ω_k^T − Ω_k P_k B R^{-1} B^T P_k Ω_k^T

by a term ΔQ_k:

P_{k+1}(A − B R^{-1} B^T P_k) + (A − B R^{-1} B^T P_k)^T P_{k+1} + Q + P_k B R^{-1} B^T P_k + ΔQ_k = 0   (40)

(Algorithm I)   ΔQ_k = Ω_k P_k B R^{-1} B^T P_k Ω_k^T   (41)

(Algorithm II)  ΔQ_k = Ω_k P̂_k B R^{-1} B^T P̂_k Ω_k^T   (42)

(Algorithm III) ΔQ_k = Ω_k ( P̂_k B R^{-1} B^T P_k + P_k B R^{-1} B^T P̂_k − P_k B R^{-1} B^T P_k ) Ω_k^T   (43)

in which P̂_k denotes P_k with its 22-partition replaced by the predicted value P_R.

P_R in (42, 43) is a predicted value for P_22 and is determined by an equation representing (37), the linear partition of (35):

P_{R,k} A_22 + A_22^T P_{R,k} + (P_12^T)_k A_12 + A_12^T (P_12)_k + Q_22 = 0   (44)


Due to the appearance of the closed-loop system matrix (A - BR^{-1}B^T P_k) in (40), the iteration in S_k based on (5) can be chosen as

S_k(A - BR^{-1}B^T P_k)^T + (A - BR^{-1}B^T P_k) S_k + HCH^T

+ BR^{-1}B^T P_k [0  0; 0  (S_{22} - S_{21} S_{11}^{-1} S_{12})_k] + [0  0; 0  (S_{22} - S_{21} S_{11}^{-1} S_{12})_k] P_k BR^{-1}B^T = 0   (45)

So the algorithms consist of iteratively solving (40), (45) with ΔQ_k given by (41), (42) or (43). In the latter two cases (44) must also be solved at each step. These algorithms have the following properties:

1. If (A - BR^{-1}B^T P_0) is stable, then (A - BR^{-1}B^T P_k) is stable and P_k > 0, k = 1,2,... (algorithms I, II; see appendix III).

2. The initial stabilizing feedback matrix -BR^{-1}B^T P_0 is a state feedback matrix, allowing the algorithm to start on the stable optimal state feedback matrix. Known algorithms require initial stabilizing output feedback.

3. Comparing with known algorithms, the range of convergence is signifi-

cantly increased due to the guarantee of stability of the closed-

loop system matrix and because the linear and quadratic partitions

of the matrix equation (10) are treated separately in the algorithms.

However, the conditions for convergence cannot be given explicitly due to lack of knowledge about existence and uniqueness of the solutions to eqs. (5,9,10).

4. In algorithms II and III, A_{22} must be a stable matrix for (44) to be efficiently solvable [15,16]. Under this condition, P_R = P_{22} in a stationary solution of the algorithms, as can easily be proven by regarding the fact that P_{22} is a positive definite solution of a quadratic matrix equation and hence is unique.
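With Φ_k = 0 (i.e., when the output structure reduces to full state feedback), eq. (40) with ΔQ_k = 0 becomes the classical Newton-Kleinman iteration for the state-feedback Riccati equation, which serves as a minimal sketch of the iteration structure. The plant data below are illustrative, not the paper's boiler model, and the Kronecker-product Lyapunov solve is precisely the route the text calls inefficient; it is acceptable here only because n = 2:

```python
import numpy as np

def solve_lyapunov(F, W):
    """Solve F'X + XF = -W by Kronecker vectorization.

    This is the naive approach warned against for high-dimensional
    systems; it is fine for a 2x2 illustration.
    """
    n = F.shape[0]
    I = np.eye(n)
    # row-major vec: (F'X).ravel() = kron(F', I) x ;  (XF).ravel() = kron(I, F') x
    K = np.kron(F.T, I) + np.kron(I, F.T)
    return np.linalg.solve(K, -W.ravel()).reshape(n, n)

# Illustrative stable plant (not from the paper)
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
Rinv = np.array([[1.0]])          # R = I

P = np.zeros((2, 2))              # A - BR^-1B'P0 = A is stable, so P0 = 0 is admissible
for _ in range(30):
    Ak = A - B @ Rinv @ B.T @ P
    # eq. (40) with DeltaQ_k = 0: a Lyapunov equation in P_{k+1}
    P = solve_lyapunov(Ak, Q + P @ B @ Rinv @ B.T @ P)

# P should satisfy the Riccati equation A'P + PA - PBR^-1B'P + Q = 0
residual = A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q
```

Each step is linear in P_{k+1}, exactly as in (40); the closed-loop matrix Ak stays stable along the iteration, which is property 1 above.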

IV. APPLICATIONS

In the application to technological systems, the suggested output se-

lection procedure has provided satisfactory results. No numerical prob-

lems were encountered. An application to a 14-dimensional open-loop

stable boiler system [17] is shown in fig. 1. The application of the

resulting ordering of state vector elements in optimal output feedback

with a subsequent decreasing number of output elements yields subsequent

increasing values of the performance index, fig. 2. The fact that this

sequence is flat over a considerable range can be interpreted as a satis-

factory result of the selection algorithm. The results of fig. 2 were

obtained using algorithm II. Comparing the convergence properties of the


proposed algorithms with the algorithms of Axsäter and of Anderson/
Moore as applied to the same system, showed as a result that the
Anderson/Moore algorithm failed when using 7 or fewer output variables
and the Axsäter algorithm when using 6 or fewer. Both failures were due

to instability of the closed-loop system matrix. The proposed algorithms

I, II, III showed convergence up to 3, 2 and 1 output variables re-

spectively. The speed of convergence for 8 output variables is shown

in fig. 3. The general experience is that convergence slows down with

decreasing number of output variables, fig. 4. It should be mentioned

that the algorithms II and III exhibited practically identical behaviour. For all Lyapunov equation computations, the accelerated series method of R.A. Smith [15] was used.
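The accelerated series method of [15] can be sketched as follows: a bilinear transform turns the Lyapunov equation into a discrete sum whose partial sums are doubled by squaring. The matrices A, Q and the shift parameter q below are illustrative choices, not values from the paper:

```python
import numpy as np

def lyapunov_smith(A, Q, q=1.0, steps=12):
    """Solve A'X + XA + Q = 0 (A stable) by Smith's accelerated series.

    With U = (A - qI)^-1 (A + qI) (spectral radius < 1 for q > 0) and
    W = 2q (A - qI)^-T Q (A - qI)^-1, the solution is
    X = sum_k (U')^k W U^k; squaring U doubles the summed terms per step.
    """
    n = A.shape[0]
    M = A - q * np.eye(n)
    U = np.linalg.solve(M, A + q * np.eye(n))
    Z = np.linalg.solve(M.T, Q)           # M^-T Q
    W = 2.0 * q * np.linalg.solve(M.T, Z.T).T  # M^-T Q M^-1
    X = W
    for _ in range(steps):
        X = X + U.T @ X @ U
        U = U @ U
    return X

A = np.array([[-1.0, 0.5], [0.0, -3.0]])  # illustrative stable matrix
Q = np.eye(2)
X = lyapunov_smith(A, Q)
residual = A.T @ X + X @ A + Q
```

After m steps the partial sum contains 2^m terms, which is the acceleration that makes the method practical even for the 14-dimensional system of fig. 1.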

fig. 1  Selection procedure applied to 14-dim. boiler system (selection step vs. state-vector element).

fig. 2  Performance of optimal output feedback dependent upon number of output elements (boiler system, dim. = 14).

Page 398: 5th Conference on Optimization Techniques Part I

386

fig. 3  Convergence behaviour of numerical algorithms (log ||L_{k+1} - L_k|| vs. iteration number).

fig. 4  Convergence of proposed algorithms dependent upon number of output elements (boiler system, dim. = 14).

V. CONCLUSIONS

The results of this paper involve an algorithm for selection of output

variables in linear optimal output feedback control and improved numer-

ical algorithms for solving the necessary conditions of optimal output

feedback; these algorithms take into account analytical properties of

the relevant linear/quadratic matrix equations. With these new results,


the range of applicability of optimal output theory in linear output
controller design certainly can be increased, as only very few [18,19]
numerical applications have appeared that use existing algorithms.

However, further improvements in the use of the suggested algorithms

may be expected if questions regarding necessary and sufficient condi-

tions for existence and uniqueness of solutions to the relevant matrix

equations are better understood.

REFERENCES

1. D.L. Kleinman, M. Athans, The design of suboptimal linear time-varying systems. IEEE Trans. Aut. Contr. 13(1968), 150-160.

2. R.L. Kosut, Suboptimal control of linear time-invariant multivariable systems subject to control structure constraints. Ph.D. diss., Univ. Pennsylvania, 1969; also IEEE Trans. Aut. Contr. 15(1970), 557-563.

3. W.S. Levine, T.L. Johnson, M. Athans, Optimal limited state variable feedback controllers for linear systems. IEEE Trans. Aut. Contr. 16(1971), 785-793.

4. P.J. McLane, Linear optimal stochastic control using instantaneous output feedback. Int. J. Contr. 13(1971), 383-396.

5. S. Axsäter, Sub-optimal time-variable feedback control of linear dynamic systems with random inputs. Int. J. Contr. 4(1966), 549-566.

6. M. Athans, The matrix minimum principle. Inf. Contr. 11(1968), 592-606.

7. M.T. Li, On output feedback stabilizability of linear systems. IEEE Trans. Aut. Contr. 17(1972), 408-410.

8. A.K. Nandi, J.H. Herzog, Comments on "Design of single-input system for specified roots using output feedback". IEEE Trans. Aut. Contr. 16(1971), 384-385.

9. D.G. Luenberger, Optimization by vector space methods. Wiley, New York, 1969.

10. M.C. Pease, Methods of matrix algebra. Academic Press, New York, 1965.

11. T.E. Fortmann, A matrix inversion identity. IEEE Trans. Aut. Contr. 15(1970), 599.

12. H. Kwakernaak, R. Sivan, Linear optimal control systems. Wiley-Interscience, New York, 1972.

13. B.D.O. Anderson, J.B. Moore, Linear optimal control. Prentice-Hall, Englewood Cliffs, N.J., 1971.

14. W.S. Levine, M. Athans, On the determination of the optimal constant output feedback gains for linear multivariable systems. IEEE Trans. Aut. Contr. 15(1970), 44-48.

15. R.A. Smith, Matrix equation XA + BX = C. SIAM J. Appl. Math. 16(1968), 198-201.

16. P.G. Smith, Numerical solution of the matrix equation AX + XA^T + B = 0. IEEE Trans. Aut. Contr. 16(1971), 278-279.

17. O.H. Bosgra, Application of optimal output control theory to a model of external power station boiler dynamic behaviour. Report N-95, Lab. Meas. Contr., Delft Univ. Techn., Stevinweg 1, Delft, The Netherlands, 1973.

18. E.J. Davison, N.S. Rau, The optimal output feedback control of a synchronous machine. IEEE Trans. Pow. App. Syst. 90(1971), 2123-2134.

19. M. Ramamoorty, M. Arumugam, Design of optimal constant-output feedback controllers for a synchronous machine. Proc. IEE 119(1972), 257-259.


APPENDIX I

Let λ be an element of L. Partial differentiation of eq. (31) gives:

(∂(A+BL)/∂λ) S + (A+BL)(∂S/∂λ) + (∂S/∂λ)(A+BL)^T + S (∂(A+BL)/∂λ)^T = 0   (A1)

∂/∂λ tr{(Q + L^T RL) S}

= tr{(Q + L^T RL)(∂S/∂λ) + (∂(L^T RL)/∂λ) S}   using (6)

= tr{-[P(A+BL) + (A+BL)^T P](∂S/∂λ) + 2 (∂L/∂λ)^T RL S}   using (32)

= 2 tr{(∂L/∂λ)^T (B^T P S + RL S)}   using (A1)

This last expression is equivalent to equation (29).

APPENDIX II

Equation (10), written in the form (35) as F(P) = 0, can be differentiated, using Φ as defined in (39):

dF = -dP A - A^T dP + dP BR^{-1}B^T P + P BR^{-1}B^T dP - Φ dP BR^{-1}B^T P Φ^T - Φ P BR^{-1}B^T dP Φ^T = 0   (A2)

Writing F and P as properly ordered vectors F̄ and P̄, (A2) becomes:

dF̄ = [-(A^T ⊗ I) - (I ⊗ A^T) + (P BR^{-1}B^T ⊗ I) + (I ⊗ P BR^{-1}B^T) - (Φ P BR^{-1}B^T ⊗ Φ) - (Φ ⊗ Φ P BR^{-1}B^T)] dP̄ = 0   (A3)

In (A3), the derivative of F with respect to P is explicitly given. Inserting this derivative in the Newton-Raphson expression

[dF̄/dP̄]_k (P̄_{k+1} - P̄_k) = -F̄(P_k)   (A4)

and making the conversion from vectors back to matrices directly leads to (39).

APPENDIX III

Proof of property 1 for algorithms I, II: If (A - BR^{-1}B^T P_k) is asymptotically stable, then by Lyapunov's theory P_{k+1} > 0, because in (40) Q + P_k BR^{-1}B^T P_k + ΔQ_k > 0 and the pair (A, Q^{1/2}) is assumed to be detectable. As (40) can be written as:

P_{k+1}(A - BR^{-1}B^T P_{k+1}) + (A - BR^{-1}B^T P_{k+1})^T P_{k+1} + Q + P_{k+1} BR^{-1}B^T P_{k+1} + ΔQ_k

+ (P_{k+1} - P_k) BR^{-1}B^T (P_{k+1} - P_k) = 0   (A5)

and the solution P_{k+1} of (A5) is positive definite, by Lyapunov's


CONTROL OF A NON LINEAR STOCHASTIC

BOUNDARY VALUE PROBLEM

J.P. KERNEVEZ

Faculté des Sciences

6, Boulevard Gabriel

21000 - DIJON

J.P. QUADRAT, M. VIOT

I.R.I.A.

78 - ROCQUENCOURT

I - POSITION OF THE PROBLEM

The aim of this paper is to describe an optimal feedback control of a biochemical system described by partial differential equations and submitted to a random environment. Such biochemical systems have been described and studied in the deterministic case in J.P. KERNEVEZ (3). In this section we give a short presentation of these membraneous systems, leading us to the stochastic model studied in the following sections. In section 2 some indications are given about existence and unicity of a solution for the state equations. In section 3 a way is given to approach an optimal feedback control of the system. In section 4 the particular case of a linear feedback is considered and numerical results are given. An artificial membrane separates 2 compartments 1 and 2. The membrane is made of inactive protein coreticulated with enzyme.

In the compartments are some substrate S and some inhibitor I which are diffusing in the membrane.

[Figure: membrane (0 < x < 1) between compartment 1 (S = 1, I = 0) and compartment 2 (S = 1, I = w).]

S is reacting in the membrane because of the enzyme, which is a catalyst of a biological reaction. In this paper we are interested only in the stationary case. The evolution case will be treated in a paper to be published (J.P. QUADRAT, M. VIOT (7)). Let us call

y(x) = substrate concentration at point x in the membrane (0 < x < 1)

i(x) = inhibitor concentration.


The stationary case equations are

(1.1)   y''(x) = σ y(x) / (1 + i(x) + y(x)) ;  y(0) = y(1) = 1

        i(x) = w x + (1 - x) θ

In (1.1) two parameters may be random: σ and w. σ depends upon how much activator is in the system, and this quantity of activator is not well known; w is the concentration of inhibitor in the 2nd compartment. To control the system we have at our disposal θ, the inhibitor concentration in the 1st compartment. Moreover the control, to be efficient, will have to work in a feedback

closed loop from an observation of the system.

In the present case the observation is the flux of substrate entering the membrane at x = 0, that is -y'(0). Therefore controls will be of the form

(1.2)   θ = u(y'(0))

where u is some function from R into R+. The cost function to minimize is the average deviation between y'(0) and a fixed value z_d:

(1.3)   min_{u ∈ U_ad}  E |y'(0) - z_d|²

where U_ad is some fixed subspace of functions with values in R+. For this apparently very simple one-dimensional problem we are however faced with 2 main difficulties:

using feedbacks (1.2) leads to a boundary value problem of the type

(1.4)   y''(x) = F(y(x), y'(0), x) ;  y(0) and y(1) given

Existence and unicity of a solution for (1.4) are not standard. In section 2 we shall

Page 403: 5th Conference on Optimization Techniques Part I

391

see that we must, for instance, impose to the feedback law u(.) to be monotone decreasing. This condition has a physical meaning (see remark 2.3) and led us to the notion of regulatory feedback.

Another difficulty is the stochastic aspect of the control problem. We can no longer use a variational approach, as in BENSOUSSAN (1). Here we used an algorithm, called "independent simulations", which is an extension of the strong law of large numbers to stochastic control problems. This method was already tested in the framework of stochastic dynamic programming (J.P. QUADRAT (5), J.P. QUADRAT, M. VIOT (6)) and finds here a new field of applications (see section 3). In computations we looked for an optimum of (1.3) in a class of linear feedbacks:

(1.5)   u(y'(0)) = (α (y'(0) - z_d) + β)⁺
        α ≤ 0 and bounded
        β ≥ 0 and bounded

This paper gives only a presentation of the ideas and of the main results. A more detailed study including proofs can be found in (7).

II - EXISTENCE AND UNICITY OF A

SOLUTION FOR THE STATE EQUATIONS

Let us call

(2.1)   u : R → R+ a continuous application
        Ω = R+² = {(σ,w) | σ ≥ 0, w ≥ 0}
        μ = some measure of probability on Ω

Let us assume that

(2.2)   ∫_Ω (σ² + w²) dμ < ∞

We wish to solve the stochastic system

(2.3)   y'' = σ y / (1 + i(x,ω) + y) ;  a.e. x ∈ (0,1) ;  a.s. ω
        y(0) = y(1) = 1   (a.s.)

(2.4)   i(x,ω) = w x + (1 - x) θ(ω)

(2.5)   θ(ω) = u(y'(0,ω))   (a.s.)

For a given ω, one can prove existence (but not unicity) of a solution for problem (2.3), (2.4), (2.5) by compactness methods (J.L. LIONS (4)). Then one finds a solution to the stochastic problem using a "measurable sections" theorem. (For a similar situation see A. BENSOUSSAN, R. TEMAM (2).) Therefore we can state the following result:


Theorem 2.1

Under hypothesis (2.1), (2.2), the stochastic system (2.3)-(2.5) admits a solution y such that

(2.6)   y ∈ L²(Ω,μ;H²(0,1)) ;  0 ≤ y ≤ 1   (a.s.)

Remark 2.1

Without any feedback (2.5), the system would be

(2.7)   y'' = σ y / (1 + w x + (1 - x) θ + y) ;  y(0) = y(1) = 1

The second member being monotone increasing with respect to y, one gets easily unicity of the solution of (2.7) for σ, w, θ given and positive. Unfortunately, in our problem we lose this monotonicity because of (2.5). Therefore the unicity problem must be approached in a different way.

For σ > 0, w ≥ 0 given and θ varying in R+, let us call y_θ the solution of (2.7). Then we can prove the

Lemma 2.1

The function v : θ → y'_θ(0) is continuous and strictly increasing, with -∞ < y'_θ(0) < 0 for θ = 0, and y'_θ(0) → 0 for θ → ∞.

Remark 2.2

For σ > 0 and w ≥ 0 given, let us draw in the plane (y'(0), θ) the graphs of the function v : θ → y'_θ(0) and of a feedback u : y'(0) → θ.

Every point of their intersection is such that

(2.8)   θ = u(y'_θ(0))

i.e. the constraint (2.5) is satisfied. Therefore, in general, with any feedback law u, there is not a unique way for the system to work. For instance, if u = v⁻¹, it is clear that every solution y_θ, θ ≥ 0, of (2.7) will verify (2.8). But if u is monotone decreasing, v being strictly increasing, the 2 graphs have only one point of intersection, and this for every ω = (σ,w), σ > 0, w ≥ 0, the case σ = 0 being trivial.

Therefore we have shown the


Theorem 2.2

If u is continuous and monotone decreasing from R into R+, the stochastic system (2.3)-(2.5) admits a unique solution.

Remark 2.3

The choice of a monotone decreasing feedback law u implies that when the flux of substrate entering the membrane, -y'(0), is becoming less intense, the feedback u regulates it by lessening the inhibitor concentration in the 1st compartment, so that the transformation of substrate into product increases, y decreases in the membrane and -y'(0) increases. One can check the same regulatory effect of the feedback law u when -y'(0) is increasing. Therefore one gets a stable steady state. This is what is expressed by the preceding result of unicity. We shall call regulatory the monotone decreasing feedbacks.
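The intersection argument behind Theorem 2.2 can be mimicked numerically: with v increasing and u decreasing, h(θ) = θ - u(v(θ)) is strictly increasing and crosses zero once, so bisection locates the unique operating point. Here v(θ) = -1/(1+θ) is only a stand-in with the qualitative properties of Lemma 2.1, not the actual solution map of (2.7), and the feedback parameters are invented for the illustration:

```python
# v: stand-in for theta -> y'_theta(0): continuous, strictly
# increasing, negative, tending to 0 as theta -> infinity.
def v(theta):
    return -1.0 / (1.0 + theta)

# u: linear regulatory (monotone decreasing) feedback, clipped at 0.
def u(s, alpha=-2.0, beta=3.0, z_d=-2.0):
    return max(alpha * (s - z_d) + beta, 0.0)

def operating_point(lo=0.0, hi=100.0, tol=1e-12):
    """Unique theta with theta = u(v(theta)), by bisection on the
    strictly increasing function h(theta) = theta - u(v(theta))."""
    h = lambda t: t - u(v(t))
    assert h(lo) < 0 < h(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta_star = operating_point()
```

For these stand-ins the fixed point solves (1 + θ)² = 2, i.e. θ* = √2 − 1, and the bisection recovers it; an increasing u would instead allow several intersections, as in Remark 2.2.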

In the following sections we shall work mainly with linear regulatory feedbacks of the form

(2.9)   u(y'(0)) = (α (y'(0) - z_d) + β)⁺   (α ≤ 0)

Numerical integration for ω fixed.

For a given ω = (σ,w) and a given feedback law (2.9), the system (2.3), (2.4), (2.9) is solved using the following under-relaxation method: let L_ε be the operator from H²(0,1) into H²(0,1) defined by

(2.10)   L_ε(y) = ε ( 1 - ∫₀^x s(1-x) F(y,y'(0)) ds - ∫_x^1 x(1-s) F(y,y'(0)) ds ) + (1-ε) y

(2.11)   F(y,y'(0)) = σ y / (1 + w x + (1-x)(α(y'(0)-z_d)+β)⁺ + y)

Then the following sequence is defined:

(2.12)   y₀ = 1 ;  y_{n+1} = L_ε(y_n)

For ε small enough there is convergence of (2.12). (The integrals are approximated by


a classical method: Newton-Cotes, Gauss, ...).
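The under-relaxation scheme (2.10)-(2.12) can be sketched with finite differences replacing the integral operator: the inner two-point problem y'' = F(y_old) is solved by a linear system instead of quadrature. The parameter values σ = 5, w = 6 and a frozen control θ = 2 are illustrative, not the paper's data:

```python
import numpy as np

sigma, w, theta = 5.0, 6.0, 2.0          # illustrative parameters
n = 200                                  # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
i_x = w * x + (1.0 - x) * theta          # inhibitor profile i(x)

def F(y):
    # Michaelis-Menten term with inhibition, as in (2.11) with u frozen
    return sigma * y / (1.0 + i_x + y)

# second-difference matrix with Dirichlet data y(0) = y(1) = 1
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2
bc = np.zeros(n)
bc[[0, -1]] = 1.0 / h**2                 # boundary-value contribution

y = np.ones(n)                           # y_0 = 1, as in (2.12)
eps = 0.5                                # under-relaxation parameter
for _ in range(400):
    y_new = np.linalg.solve(D2, F(y) - bc)   # solve y'' = F(y_old)
    y = (1.0 - eps) * y + eps * y_new        # under-relaxation step

residual = np.max(np.abs(D2 @ y + bc - F(y)))
```

The iteration is a contraction here because F is Lipschitz in y with a small constant relative to the norm of the inverse second-difference operator, which is the discrete analogue of the "ε small enough" condition.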

III - THE CONTROL PROBLEM

Let U_ad be the set of linear regulatory feedbacks defined by

(3.1)   u(y'(0)) = (α (y'(0) - z_d) + β)⁺ ;  -M₁ ≤ α ≤ 0 ;  0 ≤ β ≤ M₂

For u ∈ U_ad and ω = (σ,w) ∈ Ω, let y(u,ω) be the solution of equation

(3.2)   y'' = σ y / (1 + w x + (1-x) u(y'(0)) + y) ;  y(0) = y(1) = 1.

Let μ be a law of probability on R+² and z_d a number; the cost function to minimize on U_ad is given by

(3.3)   J(u) = ∫_{R+²} |y'(u,ω)(0) - z_d|² dμ(ω)

It can be shown, using hypothesis (2.2), that the cost function J(.) is continuous with respect to the parameters α and β defining the linear feedback u, so that the control problem (3.1), (3.2), (3.3) admits an optimal solution for every measure verifying (2.2).

Remark 3.1

When the measure μ is a discrete measure of the form

(3.4)   μ = (1/r) Σ_{j=1}^r δ_{ω_j} ,  δ_{ω_j} = Dirac measure at point ω_j

the problem (3.2), (3.3) can be written

(3.5)   y_j'' = σ_j y_j / (1 + w_j x + (1-x) u(y_j'(0)) + y_j) ;  y_j(0) = y_j(1) = 1 ;  j = 1,...,r

(3.6)   min_{u ∈ U_ad} J(u) = r⁻¹ Σ_{j=1}^r |y_j'(0) - z_d|²

For any measure μ, the idea is to discretize it in a sequence μ_r:

(3.7)   μ_r = (1/r) Σ_{j=1}^r δ_{ω_j}

(3.8)   (ω₁,...,ω_j,...) = sequence of independent simulations of ω according to the law μ.

Then we must solve the problem (3.5), (3.6); this is possible by purely deterministic methods (see section 4). This procedure is justified by the following result of convergence; let J̄_r (resp. J̄) be the minimum cost associated to the measure (3.7) (resp.


the initial measure μ); let u_r = (α_r, β_r) be an optimal linear feedback for μ_r; then

Theorem 3.1

For almost every sequence of independent simulations,

(3.9)   lim_{r→∞} J̄_r = J̄

and every convergent subsequence of the sequence (α_r, β_r) converges towards an (α, β) which is optimal for μ.

Remark 3.2

It is clear that theorem 3.1 is an extension of the strong law of large numbers. It can also be expressed in the following abstract form: let (X_α(ω))_{α ∈ A} be a family of integrable random variables and, for every α ∈ A, let (X_α^j(ω))_{j ≥ 1} be a sequence of independent random variables following the same law as X_α. Then, under some hypothesis of continuity of X_α in α and of compactness of the set A, one proves that:

(3.10)   min_{α ∈ A} (1/r) Σ_{j=1}^r X_α^j(ω) converges almost surely towards min_{α ∈ A} E(X_α).

Moreover, for ω fixed, let α_r(ω) be such that

(3.11)   (1/r) Σ_{j=1}^r X_{α_r}^j(ω) = min_{α ∈ A} (1/r) Σ_{j=1}^r X_α^j(ω).

Then every convergent subsequence of the sequence (α_r(ω)) converges towards an ᾱ such that

(3.12)   E(X_ᾱ) = min_{α ∈ A} E(X_α)

In our case α = u, X_α(ω) = |y'(u,ω)(0) - z_d|². So the linearity of the feedbacks is not essential in the conclusion of theorem 3.1.
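The abstract form above is a sample-average argument and is easy to exercise on a toy family: take X_α(ω) = (α - ω)² with ω uniform on (0,1), so that min_α E(X_α) = 1/12 is attained at α* = E(ω) = 1/2. Both the cost family and the grid over A = [0,1] are invented for the illustration; only the Python standard library is used:

```python
import random

random.seed(1)

r = 5000
omegas = [random.random() for _ in range(r)]   # independent simulations of omega

def empirical_cost(alpha):
    """(1/r) sum_j X_alpha^j(omega), as in (3.10)-(3.11)."""
    return sum((alpha - w) ** 2 for w in omegas) / r

# Minimize the empirical mean over a grid discretizing A = [0, 1]
grid = [k / 200 for k in range(201)]
alpha_r = min(grid, key=empirical_cost)
```

By the strong law, alpha_r concentrates near α* = 1/2 and the empirical minimum near 1/12 as r grows, which is exactly the content of (3.10) and (3.12).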

IV - OPTIMAL LINEAR FEEDBACK CONTROL

Let U_ad be given by (3.1) and let (ω₁,...,ω_r) be r independent simulations of the random parameters ω = (σ,w). From (3.5) and (3.6) the cost function to minimize becomes

(4.1)   J_r(α,β) = r⁻¹ Σ_{j=1}^r |y_j'(0) - z_d|² ;  -M₁ ≤ α ≤ 0 ;  0 ≤ β ≤ M₂.

We use a gradient method with respect to α and β. Let us assume that

(4.2)   α (y_j'(0) - z_d) + β > 0   ∀ j = 1,...,r ;

then the partial derivatives ∂J_r/∂α, ∂J_r/∂β can be obtained by the following

Theorem 4.1

The gradient of J_r is given by the relations

(4.3)   ∂J_r/∂α = - (1/r) Σ_{j=1}^r ∫₀¹ λ_j (∂F_j/∂α) dx

(4.4)   ∂J_r/∂β = - (1/r) Σ_{j=1}^r ∫₀¹ λ_j (∂F_j/∂β) dx

(4.5)   F_j(y,z,α,β) = σ_j y / (1 + w_j x + (1-x)(α(z-z_d)+β) + y)

the λ_j, j = 1,...,r, being obtained by integration of the primal and dual systems:

(4.6)   y_j'' = F_j(y_j, y_j'(0), α, β) ;  y_j(0) = y_j(1) = 1

(4.7)   λ_j'' = (∂F_j/∂y)(y_j, y_j'(0), α, β) λ_j ;  λ_j(1) = 0 ;
        λ_j'(0) = -2 (y_j'(0) - z_d) + ∫₀¹ (∂F_j/∂z) λ_j dx

Remark 4.1

The integration of (4.7) was made by under-relaxation, using an operator M_ε analogous to L_ε in (2.10). The sequence λ₀ = 0, λ_{n+1} = M_ε(λ_n) is then converging towards a solution of (4.7) if ε is taken small enough.

Remark 4.2

The optimal open loop control problem is included in the preceding one, by taking α = 0 and β varying in (0,M₂). In the following we can compare the performances of these 2 types of control and verify the improvement given by the feedback part.
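The gradient method on (α, β) with the box constraints -M₁ ≤ α ≤ 0, 0 ≤ β ≤ M₂ amounts to projected gradient steps. The sketch below uses a stand-in quadratic cost whose minimizer lies inside the box (the real J_r would require the primal/dual integrations of Theorem 4.1 at every evaluation); the bounds and step size are invented:

```python
# Projected gradient descent on a stand-in cost
# J(alpha, beta) = (alpha + 1)^2 + (beta - 2)^2,
# whose unconstrained minimizer (-1, 2) lies inside the box.

M1, M2 = 5.0, 10.0

def grad(alpha, beta):
    return 2.0 * (alpha + 1.0), 2.0 * (beta - 2.0)

def project(alpha, beta):
    # projection onto -M1 <= alpha <= 0, 0 <= beta <= M2
    return min(max(alpha, -M1), 0.0), min(max(beta, 0.0), M2)

alpha, beta = 0.0, 0.0
step = 0.25
for _ in range(100):
    ga, gb = grad(alpha, beta)
    alpha, beta = project(alpha - step * ga, beta - step * gb)
```

The projection is what enforces the admissible set U_ad of (3.1) at each step; replacing grad by the adjoint-based gradients (4.3), (4.4) gives the scheme actually used for the numerical results below.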

NUMERICAL RESULTS

In the following 3 pictures we see respectively:

. In the 1st one σ is random on (30,40) with a uniform distribution, w is fixed at w = 6, and we have looked for an optimal open loop control to approach z_d = -2. All the curves are between the 2 extreme ones corresponding to σ = 30 and σ = 40.

. In the 2nd one σ is still random and w fixed as in the 1st picture, but now we have looked for an optimal linear feedback control, and the values of y'(0) for all the curves between σ = 30 and σ = 40 fit very closely to z_d = -2.

. In the 3rd picture σ is fixed at 36 and w is a random variable equally distributed between 3.75 and 8.25. This time again an optimal linear feedback control gives a good minimization of the deviation between y'(0) for all the curves and z_d = -2.

[Figure: optimal open-loop solution, cost = 0.0173, β = 2.17, w = 6.0, z_d = -2, curves for σ = 30 to σ = 40.]

[Figure: optimal feedback solution, cost = 10⁻⁶, α = -200, β = 2.15, w = 6.0, z_d = -2.0.]

[Figure: optimal feedback solution, α = -20, β = 2.34, σ = 36, z_d = -2, w between 3.75 and 8.25.]

REFERENCES

(1) A. Bensoussan, Identification et filtrage. Cahier IRIA n° 1, Février 1969.

(2) A. Bensoussan, R. Temam, Equations stochastiques du type Navier-Stokes (à paraître).

(3) J.P. Kernevez, Evolution et contrôle de systèmes bio-mathématiques. Thèse, Paris 1972, N° CNRS A.O. 7246.

(4) J.L. Lions, Quelques méthodes de résolution des équations aux dérivées partielles non linéaires. Dunod, Paris 1969.

(5) J.P. Quadrat, Thèse Docteur-Ingénieur, Paris VI (1973).

(6) J.P. Quadrat, M. Viot, Méthodes de simulations en programmation dynamique stochastique. Rev. Fr. d'Aut. et de Rech. Opérat., R-1, 1973.

(7) Cahier IRIA, Systèmes Bio-chimiques (dirigé par J.P. Kernevez) (à paraître).


AN ALGORITHM TO ESTIMATE SUB-OPTIMAL PRESENT

VALUES FOR UNICHAIN MARKOV PROCESSES WITH

ALTERNATIVE REWARD STRUCTURES

S. Das Gupta Electrical Engineering Department Jadavpur University Calcutta 700029, INDIA

I. INTRODUCTION

Howard's algorithm (3), henceforth to be mentioned as the H-algorithm, determines in a closed form the optimal decision for a class of discrete Markov processes in the infinite horizon with associated alternative reward structures. His approach is reasonably general and is applicable, among others, to problems with discount factor β between 0 and 1 as well as, with some modification, for β = 1. The algorithm however becomes rather uneconomic for

(a) large-scale systems

and/or (b) discount factors close to unity.

Finkbeiner and Runggaldier (2) proposed an algorithm, henceforth to be mentioned as the FR-algorithm, which is essentially a sub-optimal algorithm that approaches the optimal values by changing from one policy decision to another better one and improving upon the present values, often still further, by some


additional iterations to any pre-assigned degree of accuracy. Since the process of optimization exhibits contraction properties (4,2), the authors provided a formula to estimate, at the end of each iteration, the number of iterations necessary to bring the estimated present value vector within a pre-assigned neighbourhood of the optimal present value vector. When, in particular, this number is reduced to zero or negative, the last decision and the present value vector are taken as the respective sub-optimal values sought. The advantage of this method lies in the fact that the computation stops when a desired accuracy is reached. However, it also runs into difficulty for cases when, in particular, the discount factor is close to unity.

2. A SUB-OPTIMAL ALGORITHM

For large-scale systems, approximate a priori estimates of certain quantities help the computational algorithm considerably. Both the H- and FR-algorithms require starting values to initiate the computation. If these are chosen on the basis of quick initial estimates, a large amount of iterative computation may be saved. This led to the derivation of estimates of steady-state probability and gain in ref. (1). Using the notations of Howard (3), we define the first order estimate of the steady state probability distribution as π(1), given by

π(1) = (1/N) e^T P   (2.1)

and in general, the m-th order estimate of π will be

π(m) = (1/N) e^T P^m   (2.2)

where P is the transition probability matrix. Consequently the


m-th order estimate of the gain, g(m), will be defined as

g(m) = π(m) q   (2.3)

where q is the immediate reward vector. Evidently, both π(m) and g(m) converge respectively to the steady-state probability π and average gain g, as m approaches infinity.
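The estimates (2.1)-(2.3) are cheap to evaluate; the sketch below applies them to a 2-state chain (the transition matrix and reward vector are illustrative, not from ref. (1)):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])       # illustrative transition probability matrix
q = np.array([1.0, 2.0])         # immediate reward vector
N = P.shape[0]
e = np.ones(N)

def pi_est(m):
    """m-th order estimate of the steady-state distribution, eq. (2.2):
    pi(m) = (1/N) e' P^m."""
    return e @ np.linalg.matrix_power(P, m) / N

def g_est(m):
    """m-th order estimate of the gain, eq. (2.3)."""
    return pi_est(m) @ q

pi_true = np.array([2 / 3, 1 / 3])   # solves pi P = pi with components summing to 1
```

Convergence of pi_est(m) to π is geometric at the rate of the subdominant eigenvalue of P (here 0.7), so even a moderate m gives a usable starting estimate of the gain.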

For discounted Markov processes, the estimate of g plays an important role in the present algorithm. Here we start with an arbitrary policy to estimate v_i, corresponding to the present value vector v given by

v = (I - βP)⁻¹ q   (2.4)

from the approximate expression

v_i = (I + βP) q   (2.5)

where I is an identity matrix of proper order. Then v_i is fed into the policy improvement routine (3) to find a new policy. If the new policy does not match the old policy, a new set of v_i is to be determined by the value determination algorithm according to eqn. (2.5). A little consideration will show that this is already available from the results of the preceding policy iteration algorithm.

Eventually, when there is a match between two consecutive policy decisions, the last policy is taken as the sub-optimal policy and the last estimate of the present value, v_i, is modified as follows to give the corrected estimate of the present value v_est:


v_est = v_i + (β²/(1-β)) e g   (2.6)

It is shown in the appendix that a sufficient condition to guarantee that the value v_est is a better estimate of v than the value v_i will be when

π(2) > (1/2) π   (2.7)

which is generally not too difficult to satisfy in actual processes.

The estimate v_est may then be run for further iterations of the FR-algorithm with pre-set error level, or used as the starting value of v in the H-algorithm.
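The effect of the correction (2.6) can be checked directly on a small chain; the chain, rewards and discount factor below are illustrative, and condition (2.7) is verified explicitly before comparing the errors:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                  # illustrative transition matrix
q = np.array([1.0, 2.0])                    # immediate rewards
beta = 0.9                                  # discount factor
N = 2
e = np.ones(N)
I = np.eye(N)

v = np.linalg.solve(I - beta * P, q)        # exact present values, eq. (2.4)
v_i = (I + beta * P) @ q                    # first-order estimate, eq. (2.5)

pi = e @ np.linalg.matrix_power(P, 50) / N  # steady-state probabilities
g = pi @ q                                  # gain
v_est = v_i + beta**2 / (1 - beta) * e * g  # corrected estimate, eq. (2.6)

pi2 = e @ np.linalg.matrix_power(P, 2) / N  # second-order estimate, for (2.7)
err_i = np.linalg.norm(v - v_i)
err_est = np.linalg.norm(v - v_est)
```

For this chain π(2) exceeds π/2 componentwise, so by the appendix the corrected estimate must be closer to v than v_i, and it is by roughly an order of magnitude.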

3. DISCUSSION OF THE RESULTS

Several problems were solved on the computer with various values of the discount factor. Howard's Taxi-Cab problem was, for instance, one of them. The results on computing time were compared with those corresponding to the pure FR-algorithm run of the same problems for the same desired accuracy levels. A saving of about 10 to 20% was frequent.

When compared with the corresponding H-algorithm, the sub-optimal policies almost always coincided with the optimal policy*. Thus only one iteration cycle of the H-algorithm was necessary in such cases.

*One exception is cited in ref. (1) for β = 1.

It is not necessary to use the exact value of g in eqn. (2.6); estimates of g may be used. Use of g(1) in eqn. (2.6), as against g(4), has shown hardly any noticeable difference in the computer time saved. In all the cases considered, g(4) came within ½% of the value of g.

4. CONCLUSION

A new algorithm to obtain a sub-optimal policy and estimates of present values for a class of discounted discrete Markov processes having alternative reward structures in an infinite horizon has been discussed. Based on initial estimates of steady-state probability and gain, this algorithm determines a policy and estimates the present value vectors, which could either be used as they are or in conjunction with the FR-algorithm or the H-algorithm, depending upon the accuracy requirement. In both the latter cases it generally accelerates the process of computation.

5. REFERENCE

I. Das Gupta, S., Int, J, C0ntr0%, Vo! 14, N0.6,1031-40 (1971)

2. Finkbeiner, B., and Runggaldier,W., Computing methods in Optimization Problem-Vol 2, ed. by Zadeh, L. A., and Balakrishnan, A. V., (1965)

3. Howard, R. A., D~mmic Programming and Markov Processes , Technology Press and Wiley, (1965)

4. Liusternik, L. A. s and Sobolev, V. J., Elements of Functional Analysis, Unga~Inehart and Winston, New York, [196!]


APPENDIX I

It will be shown here that eqn. (2.6) will give a better estimate of the present value provided that the inequality (2.7) is satisfied. We assume that the transition probability matrix P = [p_ij] has distinct eigenvalues, with

Σ_{j=1}^N p_ij = 1   (I-1)

where N is the order of the matrix P. Since the largest eigenvalue of P is 1, we can write the matrix P in the following form (1):

P = S + Σ_{i=1}^{N-1} λ_i T_i   (I-2)

where S is the constituent matrix of P corresponding to the eigenvalue 1, and the T_i are the other constituent matrices with respect to the other (N-1) eigenvalues λ_i. Obviously S is given by

lim_{m→∞} P^m = S = e π   (I-3)

where e is a column vector each element of which is unity. In view of (I-3) and eqn. (2.6) we may express the correction term in eqn. (2.6) as

(β²/(1-β)) e g = Σ_{m=2}^∞ β^m S q   (I-4)

since

π q = g   (I-5)


The actual present value v, according to eqn. (2.4), is

v = (I - βP)⁻¹ q = (I + βP) q + Σ_{n=2}^∞ β^n P^n q = v_i + a   (I-6)

with

a = Σ_{n=2}^∞ β^n P^n q   (I-7)

Similarly

v_est = v_i + Σ_{m=2}^∞ β^m S q = v_i + b   (I-8)

with

b = Σ_{m=2}^∞ β^m S q   (I-9)

To find the condition for which the distance

||v - v_est||² ≤ ||v - v_i||²   (I-10)

we have only to find the condition when

a·b - ½ ||b||² ≥ 0   (I-11)

where a dot between two vectors signifies inner product. This implies that

Σ_{m,n=2}^∞ β^{m+n} q^T (π^T π(n) - ½ π^T π) q ≥ 0   (I-12)

in view of relations (I-7), (I-9) and (2.2).

Now a sufficient condition that the sum in eqn. (I-12) be positive or zero will be when π^T π(n) - ½ π^T π is positive semi-definite for all values of n between 2 and ∞. This in turn requires that

π(n) - ½ π > 0   (I-13)


for all n between 2 and~, where, in general, we mean by x > y,

that each element of x is greater than the corresponding element

of y.

According to eqn (2.2),

also as

we have

_ (n+l) vL(n)p (I-14)

w = ~ P (I-15)

(~(n+1) ½~) =(~(n)-½~)P

Now if it is known apriori that

(i-~6)

~(n)_ ~T~ > o (I-!7)

and thatnone of the columns of P can have all zero entries

Thus we have only to aheck if the inequality (2.7) is satisfied to

show that Ves t is a better estimate than v i
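The limit (I-3) can be illustrated numerically: for a regular transition matrix the powers $P^n$ converge to the constituent matrix $S = e\pi$, with $\pi$ the stationary row vector. The 3-state matrix below is a made-up example, not one from the text.

```python
# Numerical check of (I-3): P^n -> S = e*pi for a regular chain, where
# pi is the stationary row vector (pi = pi P) and e is a column of ones.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])   # made-up 3-state transition matrix

Pn = np.linalg.matrix_power(P, 100)   # for large n all rows of P^n equal pi
pi = Pn[0]                            # any row approximates pi

S = np.outer(np.ones(3), pi)          # S = e pi (outer product)
print(np.allclose(Pn, S, atol=1e-8))  # P^n has converged to S
print(np.allclose(pi @ P, pi))        # stationarity: pi = pi P
```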

Page 419: 5th Conference on Optimization Techniques Part I

SOME RECENT DEVELOPMENTS IN NONLINEAR PROGRAMMING

by G. Zoutendijk, University of Leyden, Netherlands

I. INTRODUCTION

The general nonlinear programming problem will be defined as

$$\max \{ f(x) \mid x \in R \subseteq E^n \}$$

with

$$f_i(x) \le 0, \quad i \in I_1 \qquad (1)$$
$$a_i^T x \le c_i, \quad i \in I_2 \qquad (2)$$

in which $I_1$ and $I_2$ are finite index sets and $\{x \mid Ax \le c\}$ a convex polyhedron; $R$ is supposed to be connected and to satisfy some regularity conditions (like being the closure of its interior); the function $f$ is continuous. Usually differentiability of the functions $f$ and $f_i$ and existence of second partial derivatives is also assumed.

Three special cases can be distinguished:

1. All constraints are linear, to be subdivided into
   a. linear programming if $f(x)$ is linear,
   b. quadratic programming if $f(x)$ is quadratic and
   c. (general) linearly constrained nonlinear programming.

From a computational point of view two important subclasses of the last class may be considered: the nearly linear problems (few nonlinearities in the objective function of a relatively simple nature and many linear constraints) and the highly nonlinear problems (few variables and constraints and a highly nonlinear objective function).

2. There are no constraints: unconstrained optimization.

3. There are also nonlinear constraints.

Again it makes sense to consider the subclasses of nearly linear and highly nonlinear problems. Some methods will only work for convex programs ($f$ concave, $R$ convex).


II. LINEAR PROGRAMMING AND UNCONSTRAINED OPTIMIZATION

In linear programming the problems are usually large and structured; there are

relatively few non-zero elements in the coefficient matrix. The product-form

algorithm has been successfully applied to the solution of these large problems.

Re-inversion techniques have gradually become more sophisticated in that they better

succeed in representing the inverse of the basis by means of a minimum number of

non-zero elements. For reasons of numerical stability special decomposition methods

are being applied for the inverse. For this the reader is referred to Bartels and

Golub (1969) as well as to Forrest and Tomlin (1972). Many special methods have been

developed for special structures. Much success has been obtained with the so-called

generalized upper bound technique (see Dantzig and Van Slyke, 1967).

In unconstrained optimization ($\max f(x)$) most of the methods are hill-climbing methods. Most widely used is the variable metric method. Writing $g^k = \nabla f(x^k)$ and $\sigma^k = x^{k+1} - x^k$, the formulae are:

$x^0$ arbitrary;
$H_0 = I$ (or any other $n$ by $n$ positive definite and symmetric matrix);
$s^k = H_k g^k$;
$x^{k+1} = x^k + \lambda_k s^k$, with $\lambda_k$ determined by solving the one-dimensional problem $\max_\lambda f(x^k + \lambda s^k)$;
$$H_{k+1} = H_k + \frac{\sigma^k \sigma^{kT}}{\sigma^{kT} y^k} - \frac{H_k y^k y^{kT} H_k}{y^{kT} H_k y^k}, \qquad y^k = g^k - g^{k+1}.$$

This method, like most unconstrained optimization methods, has the quadratic termination property, i.e. it is finite for $f(x)$ quadratic, $f(x) = c^T x - \tfrac{1}{2} x^T A x$. In that case it can be easily shown that the following relations hold:

1. $H_n = A^{-1}$;
2. $s^{iT} A s^j = 0$, $i \ne j$, or equivalently $(g^{j+1} - g^j)^T s^i = 0$, $i > j$: the directions $s^i$ are mutually conjugate.

This variable metric method, suggested by Davidon and further developed by Fletcher and Powell (1963), is a member of a class of methods (see Broyden, 1967).


Writing $H^{DFP}_{k+1}$ for the matrix updated according to the variable metric method, the update formula for a general member of the family reads:

$$H_{k+1} = H^{DFP}_{k+1} + \Phi\, (y^{kT} H_k y^k)\, w^k w^{kT}, \qquad w^k = \frac{\sigma^k}{\sigma^{kT} y^k} - \frac{H_k y^k}{y^{kT} H_k y^k},$$

with $\Phi$ arbitrary.

Recently Dixon (1972) has shown that all members of the family generate identical points and directions (provided the line search is carried out in a perfect way). This does not mean that all methods are the same in practice when line searches are not carried out in a perfect way and numerical stability of the H-matrix becomes of importance. In practice it has sometimes been worthwhile to reset the H-matrix to the unit matrix periodically. Research is going on to find methods in which no line searches are required.
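The variable metric iteration above can be sketched for a concave quadratic, where the exact line search has a closed form and the quadratic-termination property can be observed directly. The test problem is made up, and the update is written with $y^k = g^k - g^{k+1}$ so that the denominators stay positive in the maximization convention.

```python
# Sketch of the variable metric (Davidon-Fletcher-Powell) method for
# maximizing f(x) = b'x - 0.5 x'Ax with A symmetric positive definite.
import numpy as np

def dfp_maximize(A, b, x0, tol=1e-10):
    n = len(b)
    H = np.eye(n)                       # H_0 = I, positive definite
    x = np.asarray(x0, dtype=float)
    g = b - A @ x                       # gradient of f
    for _ in range(2 * n):              # finite (<= n steps) for a quadratic
        s = H @ g                       # s^k = H_k g^k
        if np.linalg.norm(g) < tol:
            break
        lam = (g @ s) / (s @ A @ s)     # exact ("perfect") line search
        sigma = lam * s                 # sigma^k = x^{k+1} - x^k
        x = x + sigma
        g_new = b - A @ x
        y = g - g_new                   # equals A sigma, so sigma'y > 0
        H = (H + np.outer(sigma, sigma) / (sigma @ y)
               - (H @ np.outer(y, y) @ H) / (y @ H @ y))
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # made-up test problem
b = np.array([1.0, 2.0])
x_star = dfp_maximize(A, b, np.zeros(2))
print(np.allclose(A @ x_star, b))       # maximizer solves Ax = b
```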

Alternatively, for the unconstrained maximization problem one could use a method of conjugate (feasible) directions. These methods work along the following lines:

1. $x^0$ arbitrary;
2. for $k = 0, 1, \ldots, n-1$: in $x^k$ require for $s^k$ the conjugacy relations $(g^{j+1} - g^j)^T s^k = 0$, $j < k$, and set $x^{k+1} = x^k + \lambda_k s^k$ with $\lambda_k$ maximizing $f(x^k + \lambda s^k)$;
3. after $n$ steps:
   a. either start afresh with $x^0$(new) $= x^n$(old);
   b. or use a moving tableau, i.e. require $(g^{j+1} - g^j)^T s = 0$ for $j = k-n+1, \ldots, k-1$.

Depending on the additional requirements used to fix the directions $s^k$, i.e. depending on the direction generator chosen, another method will result. Variant b. in step 3 is usually better from the computational point of view. This family of methods of conjugate directions was first proposed by Zoutendijk (1960, 1970a, 1973). Recently the computational aspects and convergence properties of some of these methods have been worked out in detail. All these methods have the quadratic termination property.


A special method from this class is the following:

1. $x^0$ arbitrary, $H_0 = I$, calculate $g^0$, $k = 0$;
2. for $k = 0, \ldots, n-1$:
   a. determine $s^k$ from the direction problem $\max \{ g^{kT} s \mid (g^{j+1} - g^j)^T s = 0,\ j = 0, 1, \ldots, k-1;\ s^T H_k^{-1} s \le 1 \}$;
   b. determine $\lambda_k$ by a line search;
   c. $x^{k+1} = x^k + \lambda_k s^k$, calculate $g^{k+1}$;
3. near-optimality test; if not passed:
4. update $H$; $x^0$(new) $= x^k$(old); $g^0 = g^k$; go to 2.

To solve the direction problem 2a we need $H_k$ rather than $H_k^{-1}$, so that no inversion is necessary after step 4. It is even possible to give an explicit formula for $s^k$. Since for a quadratic function $H_n = A^{-1}$, the matrix $H_k$ in the general case can be considered to be an approximation to the inverse Hessian, so that the first step of each cycle of $n$ steps is a quasi-Newton step. For a quadratic function the method will terminate after at most $n$ steps. If the steplengths are arbitrarily chosen during the first $n$ steps and in step $n+1$ the choice $\lambda = 1$ is made, then the method will also terminate in the maximum of the quadratic function. For a general function we may therefore expect that this metricized norm method is less crucially dependent on the accuracy of line searches, which might be an important advantage. The method has been developed by Hestenes (1969) and, independently, by Zoutendijk (1970b).

III. LINEARLY CONSTRAINED NONLINEAR PROGRAMMING

The linearly constrained nonlinear programming problem

$$\max \{ f(x) \mid Ax \le b \}$$

can be solved by applying one of the methods of conjugate feasible directions.


1. $x^0$ feasible, $I(x^0) = \{ i \mid a_i^T x^0 = b_i \}$, $k = 0$ (here the $a_i$ are the rows of $A$);
2. $\lambda_k = \min(\lambda_k', \lambda_k'')$ with $\lambda_k' = \arg\max_\lambda f(x^k + \lambda s^k)$ and $\lambda_k'' = \max \{ \lambda \mid x^k + \lambda s^k \in R \}$;
3. if $\lambda_k = \lambda_k'$ add $(g^{k+1} - g^k)^T s = 0$ to the direction problem; if $\lambda_k = \lambda_k''$ add $a_i^T s = 0$ to the direction problem for the hyperplanes just hit and omit the conjugacy relations;
4. $s^k$ has to satisfy all the relations added to the direction problem during previous steps as well as $g^{kT} s > 0$. If no $s$ can be found, either the oldest one of the relations $(g^{j+1} - g^j)^T s = 0$ has to be omitted or one of the relations $a_i^T s = 0$ has to be replaced by $a_i^T s \le 0$ (the one with the most negative dual variable should be taken). If, for some $i$, $a_i^T s^k < 0$, then remove this relation from the next direction problem;
5. $x^{k+1} = x^k + \lambda_k s^k$.

Again any direction generator may be chosen, so that we have outlined a whole class of methods.

It is also possible to adapt a variable metric method to linearly constrained problems. One of the possibilities is:

At $x^0$: find $s^0$ by solving the direction problem through complementary pivoting (see Zoutendijk, 1973). The solution can be written in the form $s^0 = H_0 g^0$ with

$$H_0 = I - A_0^T (A_0 A_0^T)^{-1} A_0,$$

$A_0$ consisting of those $a_i$, $i \in I(x^0)$, for which $a_i^T s = 0$ and the dual variable is $> 0$ (essential constraints in $x^0$).

At $x^k$, $k = 0, 1, 2, \ldots$:

if $\lambda_k = \lambda_k'$, update $H_k$ according to a variable metric formula;

if $\lambda_k = \lambda_k''$, then $H_{k+1} = H_k - \dfrac{H_k a a^T H_k}{a^T H_k a}$ ($a$ being the hyperplane just hit).

If $p$ is the number of rows of $A_0$, then start afresh after $n - p$ steps with $x^0$(new) $= x^{n-p}$(old).
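The projection operator used in this adaptation can be sketched numerically: the matrix $I - A_0^T (A_0 A_0^T)^{-1} A_0$ maps the gradient onto the subspace of directions that stay on the active hyperplanes. The active-constraint matrix and gradient below are made-up illustrations.

```python
# Projection of the gradient onto {s | A0 s = 0}: s0 = H0 g is then a
# feasible ascent direction along the active (essential) constraints.
import numpy as np

A0 = np.array([[1.0, 1.0, 0.0]])        # one active constraint a'x = b (made up)
g = np.array([2.0, 0.0, 1.0])           # gradient at the current point (made up)

H0 = np.eye(3) - A0.T @ np.linalg.inv(A0 @ A0.T) @ A0
s0 = H0 @ g                             # projected gradient direction

print(np.allclose(A0 @ s0, 0.0))        # direction stays on the active hyperplane
print(s0 @ g > 0)                       # ascent direction (unless s0 = 0)
```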

IV. GENERAL NONLINEAR PROGRAMMING

In general nonlinear programming we can distinguish:

1. direct methods;
2. barrier function and penalty function methods;
3. primal-dual methods;
4. special methods for special problems.

As far as the special methods are concerned we may mention:

a. Separable programming (objective and constraint functions separable, see Miller (1963));

b. Geometric programming for problems of the type

$$\min \{ G_0(x) \mid G_i(x) \le 1,\ i = 1, \ldots, m \}$$

in which the $G_i(x)$ are posynomials, i.e. functions of the type

$$G(x) = \sum_j c_j \prod_{i=1}^{p} x_i^{a_{ij}}, \qquad c_j > 0,\ x_i > 0.$$

It can be shown that the dual of a geometric program is a linearly constrained nonlinear programming problem (see Zangwill (1969)).

c. Convex programming ($f$ concave, $R$ convex); for these problems we have the cutting plane method, developed by Cheney and Goldstein (1959) and, independently, by Kelley (1960), as well as a decomposition method developed by Wolfe (1967). These methods are dual to each other.
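The cutting plane idea can be sketched on a one-variable convex problem. The function, interval, grid resolution and iteration count below are all illustrative choices, and the grid scan stands in for the linear program solved in the actual method.

```python
# Minimal sketch of the cutting plane method (Cheney-Goldstein / Kelley):
# minimize f(x) = x^2 over [-2, 3] by minimizing, at each step, the
# piecewise-linear outer approximation built from tangent cuts.
import numpy as np

f = lambda x: x * x
df = lambda x: 2 * x

grid = np.linspace(-2.0, 3.0, 2001)
cuts = []                         # each cut: f(xk) + f'(xk)(x - xk)
x = 3.0                           # arbitrary starting point
for _ in range(30):
    cuts.append((f(x), df(x), x))
    model = np.max([fx + dfx * (grid - xk) for fx, dfx, xk in cuts], axis=0)
    x = grid[np.argmin(model)]    # minimize the current outer approximation

print(abs(x) < 1e-2)              # converges to the true minimizer 0
```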

The direct methods are especially suited for the nearly linear problems (large, structured, few nonlinearities). There are three different approaches:

a. Direct extension of the methods of feasible directions; however, instead of requiring $a_i^T s = 0$ as in the case of linear constraints we require $\nabla f_i(x)^T s < 0$ when $f_i(x) = 0$ (in practice when $f_i(x) \ge -\varepsilon$). Hence we restrict the search to the interior of the cone of feasible directions. One way of doing this has been outlined by Zoutendijk (1960).

b. Interior point methods like the modified feasible directions method (Zoutendijk, 1966). Having an interior point $\bar{x}$ (interior with respect to the nonlinear constraints) and a linearization $L(\bar{x})$ we maximize $f$ within $L(\bar{x})$, leading to a solution $\hat{x}$; we then maximize $f(\bar{x} + \lambda(\hat{x} - \bar{x}))$ as a function of $\lambda$ within $R$, which may either result in an interior maximum $x'$ or in a boundary point $\tilde{x}$. In the latter case the linear relation $\nabla f_i(\tilde{x})^T (x - \tilde{x}) \le 0$, with $i$ denoting the constraint just hit, will be added to the linearized constraint set, while a new interior point will be chosen on the line connecting $\tilde{x}$ and $\hat{x}$. If $R$ is convex no feasible point will be cut off. In the former case the conjugacy relation $(\nabla f(x') - \nabla f(\bar{x}))^T (x - x') = 0$ can be added to the linearized constraint set, while $x'$ can be taken as the new interior point. This procedure can be adapted to non-convex regions; nonlinear equalities are difficult to handle, however. Conjugacy relations have to be omitted if no progress can otherwise be made.

c. Hemstitching methods, where in $x^k$ we are allowed to make a small step in a direction tangent to the cone of feasible directions, so that in the case of a nonlinear constraint we will leave the feasible region. By projecting the point so obtained onto the intersection of the nonlinear surfaces concerned we obtain a new and better feasible point. This approach is being taken successfully in the Generalized Reduced Gradient Method (Abadie and Carpentier, 1969). Nonlinear equalities can also be handled by this method.

Barrier function and penalty function methods are well known and widely used to solve nonlinear programs of not too large a size. Let the problem be defined by (1) and (2). Then an example of a mixed method is: solve

$$\max_x \Big\{ f(x) + r \sum_{i \in I_1} \frac{1}{f_i(x)} \Big\} \quad \text{for } r = r_0 > r_1 > \cdots,\ r_k \downarrow 0,$$

treating the nonlinear constraints by an interior barrier term (other constraints can be handled by an exterior penalty term in the same subproblem). The starting point $x^0$ should satisfy the relations $f_i(x^0) < 0$, $i \in I_1$. The same will then hold for the subproblem solution $\hat{x}^0$, which will be the starting point for the next subproblem, etc. See further Fiacco and McCormick (1968) and Lootsma (1970).
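A barrier iteration of this kind can be sketched on a one-variable problem where the subproblem maximizer is known in closed form; the problem, barrier form and shrink factor are illustrative assumptions, and in general each subproblem is solved numerically starting from the previous solution.

```python
# Interior (barrier) method sketch: maximize f(x) = x subject to x - 1 <= 0
# via the subproblems max_x  x - r/(1 - x), whose interior maximizer is
# x(r) = 1 - sqrt(r). As r decreases, x(r) approaches the optimum x* = 1
# while every iterate stays strictly feasible.
xs = []
r = 0.5                         # initial barrier parameter
for _ in range(10):
    x_r = 1.0 - r ** 0.5        # closed-form maximizer of x - r/(1 - x)
    xs.append(x_r)
    r *= 0.25                   # shrink the barrier parameter

print(all(0 < xv < 1 for xv in xs))    # strictly interior iterates
print(abs(xs[-1] - 1.0) < 1e-2)        # approaches the constrained optimum
```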

In primal-dual methods we try to improve the primal and the dual variables more or less simultaneously. This could for instance be done by using a generalized Lagrangean function (Roode, 1968), i.e. a function $F(x, u)$ such that

1. $\max_x \min_{u \in U} F(x, u) = \min_{u \in U} \max_x F(x, u)$, with $U$ being a convex subset of a space of a certain dimension $p$. The two problems will be called the primal and the dual problem, respectively.
2. The primal problem is equivalent to the original problem.
3. The dual problem can be solved relatively easily.

In that case it makes sense to solve the dual problem instead. Writing $\varphi(u) = \max_x F(x, u)$, we have to solve the problem $\min_{u \in U} \varphi(u)$. Each function evaluation $\varphi(u)$ entails the solution of a linearly constrained maximization problem in $x$. From this it follows that for reasons of computational efficiency the number of function evaluations should be as small as possible. Buys (1972) has developed a method of this type for the augmented Lagrangean function.

Recently Robinson (1973) has reported another primal-dual method which looks quite promising. Expanding $f_i(x)$ in a Taylor series with respect to $x^k$:

$$f_i(x) = f_i(x^k) + \nabla f_i(x^k)^T (x - x^k) + \cdots,$$

and writing

the constraints in linearized form, and assuming that at step $k$ we have available vectors $x^k$, $u^k$, we solve the linearly constrained problem obtained by replacing the constraints by their linearizations at $x^k$, which results in a new point $x^{k+1}$ and a new dual solution $u^{k+1}$ (dual variables of the linearized constraints). Convergence to the original nonlinear programming problem can be proved.

APPLICATIONS

Although nonlinear programming methods have been and are being applied to many different problems - macroeconomic planning, refinery scheduling/plant optimization, design and control engineering, economic growth models, pollution abatement models, approximation under constraints and resource conservation models - the number of applications is still limited. There are several reasons for this. First, such problems are often highly nonlinear and non-convex, so that local optima cannot be avoided. Secondly, the word nonlinear is negative by definition. Often there is little theoretical knowledge about the process other than that the relation between certain variables is not linear; empirical relations without theoretical foundation might be dangerous to use since they might not hold anymore after some time. Then there is the problem of data organization and model updating, which is already tremendous in the linear programming case. Finally, there are few computer codes available and those available are not very sophisticated; up to now there is little commercial motivation for computer manufacturers and software houses to supply these codes. This might change in the future, however, when the need increases to obtain better solutions to some of the nonlinear problems we have to face.


REFERENCES

Abadie, J. and Carpentier, J. 1969 Generalization of the Wolfe reduced gradient method to the case of nonlinear constraints, pp. 37-47 of R. Fletcher (ed.), Optimization, Academic Press.

Bartels, R.H. and Golub, G.H. 1969 The simplex method of linear programming using LU decomposition, Comm. ACM 12, 266-268.

Broyden, C.G. 1967 Quasi-Newton methods and their application to function minimization, Math. of Computation 21, 368-381.

Buys, J.D. 1972 Dual algorithms for constrained optimization problems, thesis, Univ. of Leyden.

Cheney, E.W. and Goldstein, A.A. 1959 Newton's method for convex programming and Tchebycheff approximation, Num. Math. 1, 253-268.

Dantzig, G.B. and Van Slyke, R. 1967 Generalized upper bounding techniques, J. Comp. Systems Sci. 1, 213-226.

Dixon, L.C.W. 1972 Quasi-Newton algorithms generate identical points, Math. Progr. 2, 383-387.

Fiacco, A.V. and McCormick, G.P. 1968 Nonlinear programming: sequential unconstrained minimization techniques, Wiley.

Fletcher, R. and Powell, M.J.D. 1963 A rapidly converging descent method for minimization, Computer Journal 6, 163-168.

Forrest, J.J.H. and Tomlin, J.A. 1972 Updated triangular factors of the basis to maintain sparsity in the product form simplex method, Math. Progr. 2, 268-278.

Hestenes, M.R. 1969 Multiplier and gradient methods, Journal Optimization Theory and Appl. 4, 303-320.


Kelley, J.E. 1960 The cutting plane method for solving convex programs, J. Soc. Industr. Appl. Math. 8, 708-712.

Lootsma, F.A. 1970 Boundary properties of penalty functions for unconstrained minimization, thesis, Eindhoven Techn. University.

Miller, C. 1963 The simplex method for local separable programming, pp. 89-100 of R.L. Graves and P. Wolfe (eds), Recent Advances in Math. Programming, McGraw-Hill.

Robinson, S.M. 1972 A quadratically-convergent algorithm for general nonlinear programming problems, Math. Progr. 3, 145-156.

Roode, J.D. 1968 Generalized Lagrangean Functions, thesis, Univ. of Leyden.

Wolfe, P. 1967 Methods of nonlinear programming, pp. 97-181 of J. Abadie (ed.), Nonlinear Programming, North-Holland.

Zangwill, W.I. 1969 Nonlinear programming, Prentice-Hall.

Zoutendijk, G. 1960 Methods of Feasible Directions, Elsevier.

Zoutendijk, G. 1966 Nonlinear programming: a numerical survey, J. Soc. Industr. and Appl. Math. Control 4, 194-210.

Zoutendijk, G. 1970a Nonlinear programming, computational methods, pp. 37-85 in J. Abadie (ed.), Nonlinear and Integer Programming, North-Holland.

Zoutendijk, G. 1970b Some algorithms based on the principle of feasible directions, pp. 93-122 of J.B. Rosen, O.L. Mangasarian and K. Ritter (eds.), Nonlinear Programming, Academic Press.

Zoutendijk, G. 1973 On linearly constrained nonlinear programming, in: A.R. Goncalvez (ed.), Proceedings of the Figueira da Foz NATO Summer School on Integer and Nonlinear Programming.


PENALTY METHODS AND AUGMENTED LAGRANGIANS IN NONLINEAR PROGRAMMING*

R. Tyrrell Rockafellar
Dept. of Mathematics, University of Washington
Seattle, Washington 98195 U.S.A.

The usual penalty methods for solving nonlinear programming prob-

lems are subject to numerical instabilities, because the derivatives

of the penalty functions increase without bound near the solution as

computation proceeds. In recent years, the idea has arisen that such

instabilities might be circumvented by an approach involving a

Lagrangian function containing additional, penalty-like terms. Most

of the work in this direction has been for problems with equality constraints. Here some new results of the author for the inequality case

are described, along with references to the current literature. The

proofs of these results will appear elsewhere.

Equality Constraints.

Let $f_0, f_1, \ldots, f_m$ be real-valued functions on a subset $X$ of a linear topological space, and consider the problem

(1) minimize $f_0(x)$ over $\{x \in X \mid f_i(x) = 0$ for $i = 1, \ldots, m\}$.

The augmented Lagrangian for this problem, as first introduced in 1958 by Arrow and Solow [2], is

(2) $L(x, y, r) = f_0(x) + \sum_{i=1}^{m} \left[ r f_i(x)^2 + y_i f_i(x) \right]$,

where $r \ge 0$ is a penalty parameter and $y = (y_1, \ldots, y_m) \in R^m$. In fact, this is just the ordinary Lagrangian function for the altered problem in which the objective function $f_0$ is replaced by $f_0 + r f_1^2 + \cdots + r f_m^2$, with which it agrees for all points satisfying the constraints.

The motivation behind the introduction of the quadratic terms is

that they may lead to a representation of a local optimal solution in

terms of a local unconstrained minimum. If $\bar{x}$ is a local optimal solution to (1) with corresponding Lagrange multipliers $\bar{y}_i$, as furnished by classical theory, the function

$$L_0(x, y) = f_0(x) + \sum_{i=1}^{m} y_i f_i(x)$$

*This work was supported in part by the Air Force Office of Scientific Research under grant AF-AFOSR-72-2269.


has a stationary point at $\bar{x}$ which is a local minimum relative to the manifold of feasible solutions. However, this stationary point need not be a local minimum in the unconstrained sense, and $L_0$ may even have negative second derivatives at $\bar{x}$ in certain directions normal to the feasible manifold. The hope is that by adding the terms $r f_i(x)^2$, the latter possibility can be countered, at least for $r$ large enough. It is not difficult to show this is true if $\bar{x}$ satisfies second-order sufficient conditions for optimality (cf. [1]).

The augmented Lagrangian gives rise to a basic class of algorithms having the following form:

(3) Given $(y^k, r_k)$, minimize $L(x, y^k, r_k)$ (partially?) in $x \in X$ to get $x^k$. Then, by some rule, modify $(y^k, r_k)$ to get $(y^{k+1}, r_{k+1})$.

Typical exterior penalty methods correspond to the case where $y^{k+1} = y^k = 0$ and $r_{k+1} = \alpha r_k$ ($\alpha$ = some factor $> 1$). In 1968, Hestenes [10] and Powell [19] independently drew attention to potential advantages of the case

(4) $y^{k+1} = y^k + 2 r_k \nabla_y L(x^k, y^k, r_k), \qquad r_{k+1} = r_k.$

The same type of algorithm was subsequently proposed also by Haarhoff and Buys [9] and investigated by Buys in his thesis [4]. Some discussion may also be found in the book of Luenberger [13]. Recently Bertsekas [3] has obtained definitive results in the case where an ε-bound on the gradient is used as the stopping criterion for the minimization at each stage. These results confirm that the convergence is essentially superlinear when $r_k \to \infty$. Various numerical experiments involving modifications of the Hestenes-Powell algorithm still in the pattern of (3) have been carried out by Miele and his associates [15], [16], [17], [18]; see also Tripathi and Narendra [26]. Some infinite-dimensional applications have been considered by Rupp [24], [25].

An algorithm of Fletcher [6] (see also [7], [8]) may, in one form, be considered also as a "continuous" version of (3) in which certain functions of $x$ are substituted for $y$ and $r$ in $L(x, y, r)$; one then has a single function to be minimized. The original work of Arrow and Solow [2] also concerned, in effect, a "continuous" version of (3) in which $x$ and $y$ values were modified simultaneously in locating a saddle point of $L$.
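The iteration (3) with the multiplier rule (4) can be sketched on a tiny equality-constrained example. The problem, starting values and closed-form inner minimization below are illustrative assumptions, not from the paper; in general the inner step is an unconstrained subproblem.

```python
# Method of multipliers for  minimize x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0
# (optimum x* = (0.5, 0.5), multiplier y* = -1). The inner minimizer of
# L(x, y, r) = f0(x) + r*f1(x)^2 + y*f1(x) is symmetric, x1 = x2 = t, with
# t = (2r - y)/(2 + 4r) obtained from the first-order conditions.
r = 1.0                         # penalty parameter, held fixed (r_{k+1} = r_k)
y = 0.0                         # initial multiplier estimate
for _ in range(50):
    t = (2 * r - y) / (2 + 4 * r)   # minimizer of L(., y, r)
    viol = 2 * t - 1                # constraint value f1(x^k)
    y = y + 2 * r * viol            # rule (4): y^{k+1} = y^k + 2r f1(x^k)

print(abs(t - 0.5) < 1e-6, abs(y + 1.0) < 1e-6)   # converges to x*, y*
```

Note that the penalty parameter stays bounded; the multiplier update alone drives the constraint violation to zero, which is the point of the approach.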

Inequality Constraints.

For the inequality-constrained problem

(P) minimize $f_0(x)$ over $\{x \in X \mid f_i(x) \le 0,\ i = 1, \ldots, m\}$,

it is not immediately apparent what form the augmented Lagrangian should have, but the natural generalization turns out to be

(5) $L(x, y, r) = f_0(x) + \sum_{i=1}^{m} \varphi(f_i(x), y_i, r)$,

where

(6) $\varphi(f_i(x), y_i, r) = \begin{cases} r f_i(x)^2 + y_i f_i(x) & \text{if } f_i(x) \ge -y_i/2r, \\ -y_i^2/4r & \text{if } f_i(x) \le -y_i/2r. \end{cases}$

In dealing with (5), the multipliers $y_i$ are not constrained to be nonnegative, in contrast with the ordinary Kuhn-Tucker theory. This Lagrangian was introduced by the author in 1970 [20] and studied in a series of papers [21], [22], [23], the main results of which will be indicated below. It has also been treated by Buys [4] and Arrow, Gould and Howe [1]. Related approaches to the inequality-constrained problem may be found in papers of Wierzbicki [27], [28], [29], Fletcher [7], Kort and Bertsekas [11], Lill [12], and Mangasarian [14].

To relate the augmented Lagrangian to penalty approaches, it should be noted that by taking $y = 0$ one obtains the standard "quadratic" penalty function. Observe also that the classical Lagrangian for problems with inequalities can be viewed as a limiting case:

(7) $\lim_{r \downarrow 0} L(x, y, r) = L_0(x, y) = \begin{cases} f_0(x) + \sum_{i=1}^{m} y_i f_i(x) & \text{if } y \ge 0, \\ -\infty & \text{if } y \not\ge 0. \end{cases}$

The following properties of (5)-(6) can be verified [21], [23]: $L(x, y, r)$ is always concave in $(y, r)$, and it is continuously differentiable (once) in $x$ if every $f_i$ is differentiable. Furthermore, it is convex in $x$ if ($X$ and) every $f_i$ is convex; the latter is referred to as the convex case. Higher-order differentiability is not inherited by $L$ from the functions $f_i$ along the "transition surfaces" corresponding to formula (6). However, as will be seen from Theorem 4 below, most of the interest in connection with algorithms and their convergence centers on the local properties of $L$ in a neighborhood of a point $(\bar{x}, \bar{y}, \bar{r})$ such that $\bar{x}$ is a local optimal solution to (P), $\bar{y}$ is a corresponding multiplier vector in the classical sense of Kuhn and Tucker, and $\bar{r} > 0$. If the multipliers $\bar{y}_i$ satisfy the complementary slackness conditions, as usually has to be assumed in a close analysis of convergence, it is clear that none of the "transition surfaces" will pass through $(\bar{x}, \bar{y}, \bar{r})$, and hence $L$ will be two or three times continuously differentiable in some neighborhood of $(\bar{x}, \bar{y}, \bar{r})$, if every $f_i$ has this order of differentiability. (Certain related Lagrangians recently proposed by Mangasarian [14] inherit higher-order differentiability everywhere, but they are not concave in $(y, r)$.)
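The penalty term (6) can be written out directly; the numerical checks below illustrate the two properties just discussed, with made-up values for $y_i$ and $r$.

```python
# The term (6) of the augmented Lagrangian for inequality constraints.
# It is C^1 in f_i across the transition value f_i = -y_i/(2r), and with
# y = 0 it reduces to the quadratic penalty r * max(0, f_i)^2.
def phi(fi, yi, r):
    if fi >= -yi / (2 * r):
        return r * fi * fi + yi * fi
    return -yi * yi / (4 * r)

yi, r = 2.0, 1.0                           # illustrative values
t = -yi / (2 * r)                          # transition value of f_i
eps = 1e-7
left = phi(t - eps, yi, r)                 # constant branch
right = phi(t + eps, yi, r)                # quadratic branch
print(abs(left - right) < 1e-8)            # the two branches match at t
print(phi(0.5, 0.0, 2.0) == 2.0 * 0.25)    # y = 0 gives the quadratic penalty
```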

The class of algorithms (3) described above for the equality case may also be studied in the inequality case. In particular, rule (4) gives an immediate generalization of the Hestenes-Powell algorithm. We have shown in [22] that in the finite-dimensional convex case, this algorithm always converges globally if, say, an optimal solution exists along with a Kuhn-Tucker vector $\bar{y}$. This is true even if the minimization in obtaining $x^k$ is only approximate in a certain sense. The multiplier vectors $y^k$ converge to some particular Kuhn-Tucker vector $\bar{y}$, even though the problem may possess more than one such vector. For convex and nonconvex problems, results on local rates of convergence in the equality case are applicable if the multipliers at the locally optimal solution in question satisfy complementary slackness conditions.

Dual Problem.

The main theoretical properties of the augmented Lagrangian, fundamental to all applications, can be described in terms of a certain dual problem corresponding to the global saddle point problem for $L$. To shorten the presentation here, we henceforth make the simplifying assumption that $X$ is compact and the functions $f_i$ are continuous. It must be emphasized that this assumption is not required, and that the more general setting is in fact the one treated in [21], [22], [23]. It should also be clear that our focus on inequality constraints involves no real restriction. Mixtures of equations and inequalities can be handled in much the same way.

The dual problem which we associate with (P) in terms of the augmented Lagrangian $L$ is

(D) maximize $g(y, r)$ over all $y \in R^m$ and $r > 0$, where $g(y, r) = \min_{x \in X} L(x, y, r)$ (finite).

Note that the constraint $y \ge 0$ is not present in this problem. Nor does the condition $r > 0$ represent a true constraint, since, as is easily seen, $g(y, r)$ is nondecreasing as a function of $r$ for every $y$. Thus the dual problem is one of unconstrained maximization. Further, $g(y, r)$ is concave in $(y, r)$, and in the convex case it is continuously differentiable, regardless of the differentiability of the $f_i$ [21].
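These properties of $g$ can be illustrated numerically on a toy instance of (P); the problem, the grid minimization standing in for the exact inner problem, and the sample values of $y$ and $r$ are all illustrative assumptions.

```python
# Dual function g(y, r) = min_{x in X} L(x, y, r) for the toy problem
#   minimize f0(x) = -x over X = [-2, 2] subject to f1(x) = x - 1 <= 0,
# so min(P) = -1 at x = 1. We check that g is nondecreasing in r and never
# exceeds min(P) (weak duality, consistent with Theorem 1 below).
import numpy as np

def phi(fi, yi, r):                        # penalty term (6)
    return r * fi * fi + yi * fi if fi >= -yi / (2 * r) else -yi * yi / (4 * r)

X = np.linspace(-2.0, 2.0, 4001)           # grid standing in for compact X

def g(y, r):
    return min(-x + phi(x - 1.0, y, r) for x in X)

vals = [g(0.5, r) for r in (0.5, 1.0, 2.0, 4.0)]
print(all(a <= b + 1e-12 for a, b in zip(vals, vals[1:])))  # nondecreasing in r
print(all(v <= -1.0 + 1e-9 for v in vals))                  # g(y, r) <= min(P)
```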


THEOREM 1 [23]. $\min(P) = \sup(D) = \lim_k g(y^k, r_k)$, where $(y^k, r_k)_{k=1}^{\infty}$ denotes an arbitrary sequence with $y^k$ bounded and $r_k \to \infty$.

THEOREM 2 [23]. Let $(y^k, r_k)_{k=1}^{\infty}$ denote any sequence with $\sup(D) = \lim_k g(y^k, r_k)$ and $y^k$ bounded (but not necessarily with $r_k \to \infty$). Let $x^k$ minimize $L(x, y^k, r_k)$ over $X$ to within $\varepsilon_k$, where $\varepsilon_k \to 0$. Then all cluster points of the sequence $x^k$ are optimal solutions to (P).

If $y^k \equiv 0$, Theorem 1 asserts the familiar fact in the theory of penalty functions that

(8) $\min(P) = \lim_{r_k \to \infty} \min_{x \in X} \left[ f_0(x) + r_k \sum_{i=1}^{m} \max\{0, f_i(x)\}^2 \right].$

More generally, it suggests a larger class of penalty-like methods in which still $r_k \to \infty$ but $y^k$ is allowed to vary. Perhaps, through a good rule for choosing $y^k$, such a method could yield improved convergence and thereby reduce some of the numerical instabilities associated with having $r_k \to \infty$. Theorem 2 even holds out the attractive possibility of algorithms in which both $y^k$ and $r_k$ remain bounded. The fundamental question here is whether a bounded maximizing sequence $(y^k, r_k)$ exists at all for (D). In other words, under what circumstances can it be said that the dual problem has an optimal solution $(\bar{y}, \bar{r})$?

It is elementary from Theorem 1 and the definition of the dual that a necessary and sufficient condition for $(\bar{y}, \bar{r})$ to be an optimal solution to (D) and $\bar{x}$ to be a (globally) optimal solution to (P) is that $(\bar{x}, \bar{y}, \bar{r})$ be a (global) saddle point of $L$. The following theorems on saddle points therefore show that our question about the existence of bounded maximizing sequences $(y^k, r_k)$ has an affirmative answer for "most" problems.

THEOREM 3 [21]. In the convex case, $(\bar{x}, \bar{y}, \bar{r})$ is a saddle point of $L$ if and only if $(\bar{x}, \bar{y})$ is a saddle point of the classical Lagrangian $L_0$ in (7).

THEOREM 4 [23]. Suppose that $\bar{x} \in \operatorname{int} X \subseteq R^n$, and that each $f_i$ is differentiable of class $C^2$ near $\bar{x}$.

(a) If $(\bar{x}, \bar{y}, \bar{r})$ is a global saddle point of $L$, then $(\bar{x}, \bar{y})$ satisfies the second-order necessary conditions [5, p. 25] for local optimality in (P), and $\bar{x}$ is globally optimal.


(b) If $(\bar{x}, \bar{y})$ satisfies the second-order sufficient conditions [5, p. 30] for local optimality in (P) and $\bar{x}$ is uniquely globally optimal, then $(\bar{x}, \bar{y}, \bar{r})$ is a global saddle point of $L$ for all $\bar{r}$ sufficiently large.

Part (b) of Theorem 4 strengthens a local result of Arrow, Gould and Howe [1] involving assumptions of complementary slackness and the superfluous constraint $y \ge 0$. A corresponding local result has also been furnished by Mangasarian [14] for his different family of Lagrangians. It is shown in [23] that the existence of a dual optimal solution $(\bar{y}, \bar{r})$ depends precisely on whether (P) has a second-order stability property with respect to the ordinary class of perturbations.

REFERENCES

1. K. J. Arrow, F. J. Gould and S. M. Howe, "A general saddle point result for constrained optimization", Institute of Statistics Mimeo Series No. 774, Univ. of N. Carolina (Chapel Hill), 1971.

2. K. J. Arrow and R. M. Solow, "Gradient methods for constrained maxima, with weakened assumptions", in Studies in Linear and Nonlinear Programming, K. Arrow, L. Hurwicz and H. Uzawa (editors), Stanford Univ. Press, 1958.

3. D. P. Bertsekas, "Combined primal-dual and penalty methods for constrained minimization", SIAM J. Control, to appear.

4. J. D. Buys, "Dual algorithms for constrained optimization", Thesis, Leiden, 1972.

5. A. V. Fiacco and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, Wiley, 1968.

6. R. Fletcher, "A class of methods for nonlinear programming with termination and convergence properties", in Integer and Nonlinear Programming, J. Abadie (editor), North-Holland, 1970.

7. R. Fletcher, "A class of methods for non-linear programming III: Rates of convergence", in Numerical Methods for Non-linear Optimization, F. A. Lootsma (editor), Academic Press, 1973.

8. R. Fletcher and S. A. Lill, "A class of methods for nonlinear programming, II: computational experience", in Nonlinear Programming, J. B. Rosen, O. L. Mangasarian and K. Ritter (editors), Academic Press, 1971.

9. P. C. Haarhoff and J. D. Buys, "A new method for the optimization of a nonlinear function subject to nonlinear constraints", Computer J. 13 (1970), 178-184.

10. M. R. Hestenes, "Multiplier and gradient methods", J. Opt. Theory Appl. 4 (1969), 303-320.

11. B. W. Kort and D. P. Bertsekas, "A new penalty function method for constrained minimization", Proc. of IEEE Decision and Control Conference, New Orleans, Dec. 1972.

12. S. A. Lill, "Generalization of an exact method for solving equality constrained problems to deal with inequality constraints", in Numerical Methods for Nonlinear Optimization, F. A. Lootsma (editor), Academic Press, 1973.

13. D. G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, 1973, 320-322.

14. O. L. Mangasarian, "Unconstrained Lagrangians in nonlinear programming", Computer Sciences Tech. Report #174, Univ. of Wisconsin, Madison, 1973.

15. A. Miele, E. E. Cragg, R. R. Iyer and A. V. Levy, "Use of the augmented penalty function in mathematical programming, part I", J. Opt. Theory Appl. 8 (1971), 115-130.

16. A. Miele, E. E. Cragg and A. V. Levy, "Use of the augmented penalty function in mathematical programming problems, part II", J. Opt. Theory Appl. 8 (1971), 131-153.

17. A. Miele, P. E. Moseley and E. E. Cragg, "A modification of the method of multipliers for mathematical programming problems", in Techniques of Optimization, A. V. Balakrishnan (editor), Academic Press, 1972.

18. A. Miele, P. E. Moseley, A. V. Levy and G. M. Coggins, "On the method of multipliers for mathematical programming problems", J. Opt. Theory Appl. 10 (1972), 1-33.

19. M. J. D. Powell, "A method for nonlinear constraints in minimization problems", in Optimization, R. Fletcher (editor), Academic Press, 1969.

20. R. T. Rockafellar, "New applications of duality in convex programming", written version of a talk at the 7th International Symposium on Math. Programming (The Hague, 1970) and elsewhere; published in the Proc. of the 4th Conference on Probability (Brasov, Romania).

21. R. T. Rockafellar, "A dual approach to solving nonlinear programming problems by unconstrained optimization", Math. Prog., to appear.

22. R. T. Rockafellar, "The multiplier method of Hestenes and Powell applied to convex programming", J. Opt. Theory Appl., to appear.

23. R. T. Rockafellar, "Augmented Lagrange multiplier functions and duality in nonconvex programming", SIAM J. Control, to appear.

24. R. D. Rupp, "A method for solving a quadratic optimal control problem", J. Opt. Theory Appl. 2 (1972), 238-250.

25. R. D. Rupp, "Approximation of the classical isoperimetric problem", J. Opt. Theory Appl. 3 (1972), 251-264.

26. S. S. Tripathi and K. S. Narendra, "Constrained optimization problems using multiplier methods", J. Opt. Theory Appl. 2 (1972), 59-70.

27. A. P. Wierzbicki, "Convergence properties of a penalty shifting algorithm for nonlinear programming problems with inequality constraints", Archiwum Automatyki i Telemechaniki (1970).

28. A. P. Wierzbicki, "A penalty function shifting method in constrained static optimization and its convergence properties", Archiwum Automatyki i Telemechaniki 16 (1971), 395-416.

29. A. P. Wierzbicki and A. Hatko, "Computational methods in Hilbert space for optimal control problems with delays", these proceedings.


ON INF-COMPACT MATHEMATICAL PROGRAMS

by

Roger J.-B. Wets

Given a mathematical program - by which we mean a constrained optimization problem in finite or infinite dimensional spaces - it is or is not a "well-set" problem, i.e. (i) the problem is solvable (the optimum is attained at a feasible point) and (ii) the problem is stable (there exist scalars - often called Lagrange multipliers - that can be associated to the constraints and allow us to replace the mathematical problem by an unconstrained problem whose optimal solution is identical to that of the original problem). For convex mathematical problems,

stability coincides with solvability of the dual program. These properties correspond to some properties of the variational function of the

problem. If our original problem is: Find inf f(x) subject to

G(x) ≤ 0, where G is a vector valued map, then one way to define the variational function P is as:

P(u) = Inf {f(x) | G(x) ≤ u} .

It is easy to verify that if P(0) is finite and P is locally Lipschitz at 0 on its effective domain ({u | P(u) < +∞}), then the original problem is stable; if P is lower semicontinuous at 0, then the problem is asymptotically stable, also called dualizable, see e.g. [6,7,12,13]. In this paper one finds various theorems which allow us to

conclude that the variational function possesses the appropriate properties when the objective and the constraints of the original problem satisfy compactness-type assumptions. In order to extend these results to

certain classes of control problems and stochastic programs, we develop

in section 4 some further properties for composition of inf-compact

functions.
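The stability criterion above can be illustrated numerically on a toy problem of our own (not from the paper): minimize f(x) = x² subject to G(x) = 1 − x ≤ u. Here P(u) = max(0, 1 − u)², which is finite and locally Lipschitz at u = 0, so the problem is stable in the sense quoted above.

```python
# Brute-force sketch of the variational function P(u) = inf {f(x) : G(x) <= u}
# for the hypothetical problem f(x) = x**2, G(x) = 1 - x (i.e. x >= 1 - u).
# Closed form for comparison: P(u) = max(0, 1 - u)**2.

def P(u, lo=-5.0, hi=5.0, steps=200001):
    """Evaluate P(u) by scanning a fine grid of x values."""
    best = float("inf")
    for k in range(steps):
        x = lo + (hi - lo) * k / (steps - 1)
        if 1 - x <= u:                 # feasibility: G(x) <= u
            best = min(best, x * x)    # objective: f(x) = x**2
    return best

for u in (-0.5, 0.0, 0.5, 2.0):
    exact = max(0.0, 1.0 - u) ** 2
    assert abs(P(u) - exact) < 1e-3, (u, P(u), exact)
```

The grid evaluation is only an approximation; it is used here to confirm the closed form of P on a few perturbations u.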

2. NOTATIONS AND TERMINOLOGY

The class of functions to be minimized is limited to those with

domain a separable reflexive Banach space and range ]−∞,+∞] and which are not identically +∞. Say f is a member of this class; then its effective domain is dom f = {x | f(x) < +∞}. The (α-) level set of f

is by definition L_α(f) = {x | f(x) ≤ α} with α ∈ ℝ. It is well known that a function is lower semicontinuous (l.s.c.) if all its level sets are closed; f is inf-compact if L_α(f) is compact for all α ∈ ℝ.

This terminology was introduced by Moreau, who developed with Rockafellar and Valadier the basic properties of these functions; see e.g. [4,5,8,11]. An inf-compact function always possesses a minimum. The epigraph of a function f is the subset of ]−∞,+∞[ × X such that epi f =

Supported by N.S.F. Grant No. GP-31551 May 1973


{(α,x) | α ≥ f(x)}. The indicator function of a subset C is denoted by ψ_C and is defined by ψ_C(x) = 0 if x ∈ C and = +∞ otherwise.

By κ we shall denote a set valued function from a space U into the subsets of a space X, i.e. for each given u, κ(u) is a subset of X, possibly the empty set. Typically κ(u) = {x | x satisfies some property which depends on u}; more specifically κ(u) could be {x | (x,u) ∈ D} where D is a fixed subset of X × U. The map κ is

said to be upper semicontinuous if its graph is closed in X × U. Such a map is said to be Lipschitz if the function d(κ(u₁),κ(u₂)) is Lipschitz, where d denotes the Hausdorff distance between κ(u₁) and κ(u₂). We recall that, given two subsets P,Q of X, d(P,Q) = max(δ(P,Q), δ(Q,P)) where

δ(P,Q) = Sup_{x∈P} Inf_{y∈Q} ||x−y|| .

Usually one defines the Hausdorff distance only for compact sets, and d is then real valued; in what follows we use the extended definition given here.
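The two-sided construction of d can be made concrete on finite point sets (a small illustration of ours, with finite sets standing in for P and Q):

```python
# Directed distance delta(P,Q) = sup_{x in P} inf_{y in Q} |x - y| and
# Hausdorff distance d(P,Q) = max(delta(P,Q), delta(Q,P)), for finite sets of reals.

def delta(P, Q):
    return max(min(abs(x - y) for y in Q) for x in P)

def hausdorff(P, Q):
    return max(delta(P, Q), delta(Q, P))

P = {0.0, 1.0}
Q = {0.0, 3.0}
assert delta(P, Q) == 1.0   # the point 1.0 is at distance 1.0 from Q
assert delta(Q, P) == 2.0   # the point 3.0 is at distance 2.0 from P
assert hausdorff(P, Q) == 2.0
```

The example shows why the max of the two directed distances is needed: δ is not symmetric.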

3. PROJECTION THEOREMS

π will denote the canonical projection of X × U onto U. Suppose f : X × U → ]−∞,+∞] and f ≢ +∞. By πf we denote the projected function of f, i.e. the function such that πf(u) = Inf_x f(x,u).

It is easy to verify that cl π epi f = epi cl πf. In particular we have that πf(u) = +∞ if dom f ∩ {(x,u) | x ∈ X} = ∅. We can view πf

as the variational function of an optimization problem where u represents the perturbation. Thus, whatever can be said about πf can also be translated into properties of the variational function.
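As a quick numerical sketch of the projected function (the bivariate f below is a hypothetical smooth example of ours, not from the paper):

```python
# For f(x,u) = (x - u)**2 + u**2 the projected function is
# (pi f)(u) = inf_x f(x,u) = u**2, attained at x = u.

def pi_f(u, xs):
    return min((x - u) ** 2 + u ** 2 for x in xs)

xs = [i / 100.0 for i in range(-300, 301)]   # x grid on [-3, 3]
for u in (-1.0, 0.0, 0.5):
    assert abs(pi_f(u, xs) - u ** 2) < 1e-3  # matches the closed form u**2
```

Viewing u as the perturbation, this πf is exactly the variational function of the perturbed minimization in x.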

Proposition 1. Suppose f : X × U → ]−∞,+∞], f ≢ +∞ and f is convex; then πf is convex on U.

Proof: This proposition is an immediate consequence of the definition, since a function is convex if and only if its epigraph is convex, and epi πf is then the (linear) projection of the convex set epi f.

Proposition 2. Suppose f : X × U → ]−∞,+∞], f ≢ +∞, X and U both reflexive separable Banach spaces and f is weakly inf-compact. Then πf is weakly inf-compact.

Proof: It suffices to show that the sets L_α(πf) are weakly compact for all α ∈ ℝ. But this follows from the fact that L_α(πf) = π L_α(f), i.e. the L_α(πf) are simply (linear) projections of weakly compact sets and thus are weakly compact.

Proposition 3. Suppose f : X × U → ]−∞,+∞], f ≢ +∞, X and U separable reflexive Banach spaces and f(x,u) = g(x) + ψ_κ(u)(x) with g weakly inf-compact and κ weakly upper semicontinuous. Then πf is weakly lower semicontinuous.

Proof: First observe that, for fixed u, u ∈ L_α(πf) implies that there exists x such that (x,u) ∈ L_α(f); this follows from the inf-compactness of f in x for fixed u. Moreover, if (x,u) ∈ L_α(f) then x ∈ L_α(g), since f(x,u) = g(x) whenever f(x,u) < +∞. Let us now consider a sequence {uⁱ} ⊂ L_α(πf) converging weakly to u⁰. By the above remarks there exists a corresponding {xⁱ} ⊂ L_α(g). By weak compactness of L_α(g), {xⁱ} contains a subsequence converging to a point of L_α(g), say x. The weak upper semicontinuity of κ implies that x ∈ κ(u⁰), i.e. ψ_κ(u⁰)(x) = 0. Thus

πf(u⁰) ≤ f(x,u⁰) ≤ α ,

i.e. u⁰ ∈ L_α(πf). From this follows the lower semicontinuity of πf.

Proposition 4. Suppose f : X × U → ]−∞,+∞] with X and U finite dimensional Euclidean spaces and f(x,u) = g(x) + ψ_κ(u)(x), where g is inf-compact, convex, κ(u) = {x | (x,u) ∈ P} and P is a nonempty convex polyhedral subset of X × U. Then either πf is inf-compact or L_α₀(πf) (the set of minimum points of πf) is unbounded.

Proof: If P is bounded, then it follows from proposition 2 that πf is inf-compact, which completes the proof in this case. Thus it suffices to consider the case when P is unbounded. From propositions 1 and 3 we know that πf is convex and l.s.c., and we always have that α₀ = Inf πf ≥ Inf g > −∞ by inf-compactness of g. This implies that the infimum of πf is finite. If the infimum is not actually attained, then from the properties of l.s.c. convex functions [10] it follows that there exists a ray, say {u⁰ + λv, λ ≥ 0}, in dom πf such that πf is strictly decreasing on this ray. For each fixed u, f(x,u) is inf-compact in x; thus for each λ there exists x(λ) such that f(x(λ),u⁰+λv) = πf(u⁰+λv) with x(λ) ∈ κ(u⁰+λv). Now, since P is a convex polyhedron, it can always be written as {(x,u) | Ax + Bu = p, x ≥ 0, u ≥ 0} where A and B are matrices (Weyl-Minkowski). Since x(λ) ∈ κ(u⁰+λv), it follows that for all λ ≥ 0, Ax(λ) + Bu⁰ + λBv = p with x(λ) ≥ 0, u⁰ + λv ≥ 0. On the other hand, for λ sufficiently large x(λ) ∈ L_α₀+ε(g) for ε arbitrarily small; thus the sequence x(λ) contains a subsequence converging to a point of L_α₀(g), say x̄. From the above it follows that Ax̄ + Bu⁰ − p = −(lim λ)Bv, which now implies that Bv = 0. Thus x̄ ∈ κ(u⁰+λv) for all λ ≥ 0, and since by lower semicontinuity g(x̄) ≤ lim inf g(x(λ)), we cannot have a strictly decreasing sequence of values of πf on the ray {u⁰ + λv, λ ≥ 0}.

The polyhedral restriction is by no means superfluous in the preceding proposition. To see this, consider the following simple example:

Find inf g(x) = x + ψ_[0,1](x)
subject to x ≥ 0 , ux ≥ 1 ;

the corresponding variational function is πf(u) = u⁻¹ for u ≥ 1 and +∞ otherwise. The function g is clearly inf-compact and κ(u) = {x | x ≥ 0, x ≥ u⁻¹} is an upper semicontinuous set valued function. In this case Inf πf(u) = α₀ = 0 but L_α₀(πf) = ∅. In fact πf is convex and L_α(πf) is unbounded for all α > α₀.
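The example can be checked numerically; the sketch below grids the effective domain [0,1] of g (a numerical approximation of ours, not a proof):

```python
# For g(x) = x + psi_[0,1](x) with constraint u*x >= 1, the variational function
# is pi_f(u) = 1/u for u >= 1 and +infinity otherwise; its infimum 0 is never attained.

def varfun(u, steps=100001):
    best = float("inf")
    for k in range(steps):
        x = k / (steps - 1)            # grid on [0,1], the effective domain of g
        if u * x >= 1.0:               # feasibility
            best = min(best, x)        # g(x) = x on [0,1]
    return best

assert varfun(0.5) == float("inf")       # infeasible for u < 1
assert abs(varfun(1.0) - 1.0) < 1e-4
assert abs(varfun(4.0) - 0.25) < 1e-4
assert abs(varfun(100.0) - 0.01) < 1e-3  # inf pi_f = 0 is approached, never attained
```

The values confirm that πf decreases toward 0 along u → +∞ without the infimum being attained, exactly the failure mode the polyhedral hypothesis rules out.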

Proposition 5. Suppose f : X × U → ]−∞,+∞] with X and U separable reflexive Banach spaces, and f(x,u) = g(x) + ψ_κ(u)(x) where g is Lipschitz on X and κ(u) is Lipschitz on its effective domain {u | κ(u) ≠ ∅}. Then πf is Lipschitz on dom πf.

Proof: It suffices to show that there exists a constant M such that |πf(u) − πf(v)| ≤ M ||u−v|| for all u,v ∈ dom πf.

It is easy to verify that if πf(u) = −∞ then the hypotheses imply that πf(v) = −∞. Thus let us assume that πf(u) and πf(v) are finite and that πf(u) ≤ πf(v). Let us also assume that there exist x and y such that f(x,u) = πf(u) and f(y,v) = πf(v), i.e. that the infimum of f is actually attained for u and v. Moreover, assume that there is a ȳ ∈ κ(v) such that ||ȳ−x|| = Inf {||y−x|| : y ∈ κ(v)}; we have then

|πf(u) − πf(v)| ≤ |πf(u) − f(ȳ,v)| ≤ B·||x−ȳ|| ≤ B·K·||u−v||

where the first inequality follows from the fact that πf(u) ≤ πf(v) ≤ f(ȳ,v), the second inequality follows from the Lipschitz property of g on X (with constant B), and the last inequality follows from the Lipschitz property of κ (with constant K), since ||ȳ−x|| ≤ d(κ(u),κ(v)) ≤ K||u−v||. Now, if either the infimum of g on κ(u) (or on κ(v)) is not actually attained, or given a point in κ(u) there is no closest point in κ(v), the standard modification (using ε-optimal points) of the above arguments yields the proof.

4. A COMPOSITION THEOREM

In order to apply some of the above results to control problems and stochastic programming problems, we need to extend a result of Moreau [4], viz.: if f and g are inf-compact, so is f + g. To this end we prove proposition 6 below:

Proposition 6. Suppose f(x,ω) : X × Ω → ]−∞,+∞] is a family of functions, where X is a finite dimensional Euclidean space. Moreover, suppose that the f(·,ω) are inf-compact, convex in x for all ω, and μ-measurable in ω for all x. Let μ be a probability measure on Ω. Then F(x) = ∫ f(x,ω)dμ is a convex, inf-compact function.

We need to introduce some concepts which are standard in Convex Analysis but might still be somewhat unfamiliar to the mathematical programmer. We only need these concepts to prove proposition 6; they will not be used in further developments. Say f ≢ +∞ is a convex function with range ]−∞,+∞]; then its recession function f0⁺ is defined by

f0⁺(y) = lim_{λ→+∞} λ⁻¹[f(x+λy) − f(x)]

It can be shown that this limit is independent of x and that f0⁺ is a positively homogeneous convex function. See [9] for a detailed description of the properties of recession functions. In particular it is easy to show that:

Lemma. Suppose f : X → ]−∞,+∞] is convex and lower semicontinuous. Then f is inf-compact if and only if f0⁺(y) > 0 for all y ≠ 0.
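The lemma can be illustrated numerically by approximating the recession limit with a large λ (a numerical shortcut of ours, not a proof; the two test functions are hypothetical examples):

```python
# Approximate f0+(y) = lim_{t->inf} (f(x + t*y) - f(x)) / t by taking t large.

def recession(f, y, x=0.0, t=1e6):
    return (f(x + t * y) - f(x)) / t

quad = lambda x: x * x             # inf-compact: the recession limit is +inf for y != 0
exp_ = lambda x: 2.718281828 ** x  # not inf-compact: recession is 0 in direction y = -1

assert recession(quad, 1.0) > 0 and recession(quad, -1.0) > 0
assert abs(recession(exp_, -1.0)) < 1e-5   # f0+(-1) = 0, so exp has unbounded level sets
```

The quadratic satisfies the lemma's criterion in every direction, while the exponential fails it in the direction y = −1, matching its unbounded level sets.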

Since in proposition 6 we allow for the possibility of infinite values of the integrand, we have to give a meaning to the ∫ sign. In fact all that is required is a slight modification of the standard Lebesgue-Stieltjes integral. As usual, we define the integral ∫ f dμ as the sum of its positive part and of its negative part, with the understanding (i) that either part may possibly be divergent, and (ii) that if μ{ω | f(x,ω) = ±∞} ≠ 0 then the positive (negative) part is


automatically defined to be ±∞; in addition we adopt the convention that the integral is +∞ whenever the positive part is +∞, whatever be the value of the negative part. If the integrand is almost everywhere finite, it corresponds to the Lebesgue-Stieltjes integral. This integral possesses essentially the same properties as the standard Lebesgue-Stieltjes integral, except that subadditivity replaces the usual additivity property.

Proof of proposition 6: The proof of convexity can be found in [14] but can in fact be reconstructed quite easily if one observes that F(x) is a convex combination of convex functions. It remains to establish inf-compactness which, in view of the lemma above, is equivalent to showing that F0⁺(y) > 0 for y ≠ 0, since F is convex. From f0⁺(y,ω) > 0 for all y ≠ 0 and for all ω, it follows readily from a weakened version of the dominated convergence theorem (we do not require here that the integral be convergent) that

F0⁺(y) ≥ ∫ f0⁺(y,ω)dμ > 0 .

5. APPLICATIONS

A. Nonlinear programs. Let f : ℝⁿ → ]−∞,+∞], f ≢ +∞, and let G : ℝⁿ → ℝᵐ, i.e. G is a vector valued function with components G_i, i=1,...,m. A nonlinear program is then

Find inf f(x)
subject to G(x) ≤ 0

where by ≤ we denote componentwise ordering. The standard variational function associated with this problem is given by

P(u) = Inf {f(x) | G(x) ≤ u} .

It is obvious that all the projection theorems of section 3 provide sufficient conditions to establish dualizability of the nonlinear program (P is l.s.c. at 0) or stability (P is locally Lipschitz at 0). Moreover, if the function G is continuous and f is inf-compact, then the nonlinear program is solvable.

We examine somewhat further the case when the objective function as well as the constraint functions G_i, i=1,...,m, are convex functions. Since the G_i's are continuous (they are convex and finite on ℝⁿ), it follows that for all scalars u_i the feasible region of the convex program: Find Inf f(x) subject to G(x) ≤ u, is a closed convex set.


For each i, the set L_ui(G_i) is closed and convex, and the feasible region is given by ∩_i L_ui(G_i). One can also prove:

Proposition 7. Let G_i, i=1,...,m, be a class of lower semicontinuous convex functions. Suppose that for some u, the ray {x⁰ + λy, λ ≥ 0} is contained in C = {x | G(x) ≤ u}. Then for all v such that D = {x | G(x) ≤ v} ≠ ∅ and for any x ∈ D, the ray {x + λy, λ ≥ 0} is also contained in D.

Proof: Since C and D can be represented as intersections of level sets, it suffices to show that {x⁰ + λy, λ ≥ 0} ⊂ L_ui(G_i) implies that for any x ∈ L_vi(G_i), {x + λy, λ ≥ 0} is also contained in L_vi(G_i). One can prove this from basic properties of convex functions (see e.g. [10, p. 139]), or rely, as in section 4, on the properties of the recession function and observe that under the above hypotheses G_i0⁺(y) must be ≤ 0, which then automatically yields the conclusion.

As a corollary of the above proposition we have

Proposition 8. Suppose G is the constraint map of a convex program and {x | G(x) ≤ 0} is nonempty and compact; then {x | G(x) ≤ u} is compact for all vectors u for which it is nonempty.

Proof: If the only ray contained in {x | G(x) ≤ 0} is the trivial ray, i.e. with direction y = 0, then by proposition 7 it is also the only ray contained in the closed convex set {x | G(x) ≤ u}.

If {x | G(x) ≤ 0} is compact, and we restrict the perturbations u of the original problem to a compact neighborhood of the origin (which is the only region of interest anyway), and if f is inf-compact, it follows not only that the original problem is solvable, but we can then apply propositions 1 and 2 to obtain further properties of this problem. It is now easy to see how in a similar fashion one could also apply propositions 3, 4 and 5 to this class of problems.
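Proposition 8 can be illustrated on a one-dimensional convex constraint (an example of ours, not from the paper): for G(x) = x² − 1 the set {x : G(x) ≤ 0} = [−1,1] is compact, and every nonempty perturbed region {x : G(x) ≤ u} = [−√(1+u), √(1+u)] stays compact.

```python
# Grid sketch: collect the level set {x : x**2 - 1 <= u} over a wide interval
# and check that its width matches 2*sqrt(1+u), i.e. no ray escapes to infinity.

def level_set(u, lo=-100.0, hi=100.0, steps=400001):
    pts = []
    for k in range(steps):
        x = lo + (hi - lo) * k / (steps - 1)
        if x * x - 1.0 <= u:
            pts.append(x)
    return pts

for u in (0.0, 3.0, 8.0):
    pts = level_set(u)
    width = max(pts) - min(pts)
    assert abs(width - 2 * (1 + u) ** 0.5) < 1e-2   # bounded for every tested u
```

The check is of course only over a finite window, but it matches the proposition: compactness at u = 0 propagates to every perturbed level set.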

B. Stochastic Programs. Provided the original problem satisfies some very weak assumptions, to each "convex" stochastic program corresponds an equivalent deterministic convex program. However, it is not usually possible to find an explicit simple analytic representation of this deterministic problem, and consequently it might be very difficult to establish if a given problem is or is not well set. It is not possible to


develop here the full implications of the results of sections 3 and 4. We only intend to indicate some of the more elementary applications. Some partial results have already appeared in the literature; see e.g. [13] for an application of a version of proposition 5, and [1], where a version of proposition 3 appeared first and where some applications are also indicated. The following problem will be called here a stochastic program:

Find inf_x z = f(x) + E{Inf_y q(y,ξ) | H(x,y,ξ) ≤ 0}

subject to G(x) ≤ 0

The functions f : ℝⁿ → ]−∞,+∞] and G_i : ℝⁿ → ℝ, i=1,...,m, are convex functions in x; q : ℝⁿ̄ × Ξ → ]−∞,+∞] and H_j : ℝⁿ × ℝⁿ̄ × Ξ → ℝ are also convex, in y and in (x,y) respectively, and measurable with respect to a probability measure μ that defines the distribution of the random variable ξ. Let

Q(x,ξ) = Inf_y {q(y,ξ) | H(x,y,ξ) ≤ 0}

and

Φ(x) = E{Q(x,ξ)}

where E (expectation) is defined in the same manner as the integral ∫ f dμ in section 4. The equivalent deterministic program is then written as

Find inf z = f(x) + Φ(x)

subject to G(x) ≤ 0 .

We shall assume that it can be shown that Q(x,ξ) is measurable with respect to ξ (see e.g. [2] and [14] for a proof in the linear case). Then Φ(x) is convex, since from proposition 1 it follows that Q(x,ξ) is convex in x. We shall assume that q(y,ξ) is inf-compact in y and that the set valued function κ defined by κ(x,ξ) = {y | H(x,y,ξ) ≤ 0} is upper semicontinuous in x. Both assumptions are quite natural, since q(y,ξ) represents a penalty to be paid for selecting a recourse action y, and one might expect that the cost increases strictly with ||y||. See for example the simple recourse model [15], where the penalty is a function of the difference between x and ξ. The upper semicontinuity of κ is more technical, but it is hard to imagine a practical problem that would fail to satisfy this assumption. With these

assumptions, it follows directly from proposition 3 that Q(x,ξ) is lower semicontinuous in x. This property can also be proved for Φ(x)

if some weak integrability condition is satisfied. If now f is also l.s.c. and {x | G(x) ≤ 0} is compact, it follows then from proposition 3 again that the equivalent problem is solvable and at least asymptotically stable (dualizable).

If in addition one can show that the sets κ(x,ξ) are bounded, then we can use propositions 2 and 6 to show that Φ is inf-compact. Proposition 2 can be used again to obtain the properties of the variational function, since f + Φ + ψ_{x | G(x) ≤ 0} is then also inf-compact.
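As a concrete toy instance of this recourse structure (invented data, in the spirit of the simple recourse model [15]): if the recourse cost simply covers the shortfall, Q(x,ξ) = max(0, ξ − x), and Φ is its expectation over a discrete distribution. Each Q(·,ξ) is convex in x, as proposition 1 predicts, and so is Φ.

```python
# Toy simple-recourse sketch: Q(x,xi) = inf { y : y >= xi - x, y >= 0 } = max(0, xi - x),
# Phi(x) = E{Q(x,xi)} over three equally likely scenarios (hypothetical data).

xis = [1.0, 2.0, 4.0]

def Q(x, xi):
    return max(0.0, xi - x)     # recourse cost, inf-compact in y for fixed (x, xi)

def Phi(x):
    return sum(Q(x, xi) for xi in xis) / len(xis)

assert Phi(0.0) == (1 + 2 + 4) / 3
assert Phi(4.0) == 0.0
# midpoint convexity check: Phi((a+b)/2) <= (Phi(a) + Phi(b)) / 2
a, b = 0.0, 4.0
assert Phi((a + b) / 2) <= (Phi(a) + Phi(b)) / 2 + 1e-12
```

For this piecewise linear Φ the deterministic equivalent inf f(x) + Φ(x) over {G(x) ≤ 0} is an ordinary convex program, which is the point of the equivalence above.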

C. Control problems governed by partial differential equations. Let us just mention here that the inf-compactness assumption can be used to obtain existence and regularity conditions, rather than coercivity as used in [3]. It is obvious that all coercive functions are inf-compact; the only thing we lose is the uniqueness of the solution.


References

[1] S. Garstka: "Regularity Conditions for a Class of Convex Programs", Manuscript (1972).

[2] P. Kall: "Das zweiteilige Problem der stochastischen linearen Programmierung", Z. Wahrscheinlichkeitstheorie verw. Gebiete 8 (1966), 101-112.

[3] J.-L. Lions: Optimal Control of Systems Governed by Partial Differential Equations, Springer-Verlag, Berlin (1971).

[4] J.-J. Moreau: "Fonctionnelles Convexes", Séminaire sur les équations aux dérivées partielles, Collège de France, Paris (1966-1967).

[5] J.-J. Moreau: "Théorèmes inf-sup", C. R. Acad. Sci. Paris, 258 (1964), 2720-2722.

[6] R. T. Rockafellar: "Duality and Stability in Extremum Problems Involving Convex Functions", Pacific J. Math., 21 (1967), 167-187.

[7] R. T. Rockafellar: "Duality in Nonlinear Programming", in Mathematics of the Decision Sciences, Lectures in Applied Mathematics, Providence, R.I., 11 (1968), 401-422.

[8] R. T. Rockafellar: "Level Sets and Continuity of Conjugate Convex Functions", Trans. Amer. Math. Soc., 123 (1966), 46-63.

[9] R. T. Rockafellar: Convex Analysis, Princeton University Press, Princeton, N.J. (1969).

[10] J. Stoer and C. Witzgall: Convexity and Optimization in Finite Dimensions I, Springer-Verlag, Berlin (1970).

[11] M. Valadier: "Intégration de Convexes Fermés, notamment d'Épigraphes. Inf-Convolution Continue", R.I.R.O., R-2 (1970), 57-73.

[12] R. Van Slyke and R. Wets: "A Duality Theory for Abstract Mathematical Programs with Applications to Optimal Control Theory", J. Math. Anal. Applic. 22 (1968), 679-706.

[13] D. Walkup and R. Wets: "Some Practical Regularity Conditions for Nonlinear Programs", SIAM J. on Control, 7 (1969), 430-436.

[14] D. Walkup and R. Wets: "Stochastic Programs with Recourse", SIAM J. Appl. Math., 15 (1967), 1299-1314.

[15] W. Ziemba: "Stochastic Programs with Simple Recourse", Manuscript (1972).


Mathematisches Institut

der Universität zu Köln

5000 Köln 41

Weyertal 86-90

and

Department of Mathematics

University of Kentucky

Lexington, Kentucky


NONCONVEX QUADRATIC PROGRAMS, LINEAR COMPLEMENTARITY PROBLEMS, AND INTEGER LINEAR PROGRAMS

F. Giannessi†   E. Tomasin††

Abstract.

The problem of nonconvex quadratic programs is considered, and an algorithm is proposed to find the global minimum by solving the corresponding linear complementarity problem. An application to the general complementarity problem and to 0-1 integer programming problems is shown.

1 - Introduction.

The aim of this paper is to study the general quadratic programming problem, i.e. the problem of finding the minimum of a quadratic function under linear constraints. Such a problem is often met in many fields of mathematics, mechanics, economics, and so on. If the objective function is convex, the problem is well known, both theoretically and computationally [2, 4, 6], while, when the objective function is nonconvex, the problem of finding the global minimum is still open, even if there are many methods to find a local minimum. Among the methods proposed to solve the general case [3, 5, 11, 13], two kinds of approaches can be distinguished: a) enumerative methods [3, 11], and b) cutting plane methods [5, 13]. While the former approach seems to be not as efficient as expected, the latter has till now given rise to methods which are not always finite [6].

In this paper a method to solve the general quadratic programming problem is proposed, where the quadratic problem is replaced by an equivalent linear complementarity problem, and this is solved by a particular cutting plane method [8]. In this way no method of practical efficiency is produced, but after a successive investigation of the method and of its properties, by modifying somewhere the initial bases of the method, the algorithm was implemented [14]. Thus, the vertices of a convex polyhedron are found which minimize a linear function and satisfy a given condition, for example complementarity (in this way the problem of nonconvex quadratic programming is solved) or a 0-1 integer condition. The method can then be used also to solve a general linear complementarity problem, which can be met independently of the quadratic

(*) Research supported by the National Groups of Functional Analysis and its Applications of the Mathematical Committee of C.N.R.

† Department of Operations Research and Statistical Sciences, Univ. of PISA, Via S. Giuseppe, 22, PISA, ITALY.

†† Mathematical Institute, Ca' Foscari University, VENICE, ITALY.


programming problem, or 0-1 programs.

2 - The general quadratic programming problem.

The problem with which we are concerned is, without loss of generality, the following¹:

P : min φ(x) = cᵀx + ½xᵀDx , x ∈ X = {x : Ax ≥ b; x ≥ 0}

where A is a matrix of order m×n and D is a matrix of order n×n. The following theorems hold:

THEOREM 2.1 (Kuhn-Tucker [1, 10]). If x is a local minimum for P, there exist vectors y, u, v such that:

(2.1a) c + Dx − Aᵀy − u = 0

(2.1b) Ax − v = b

(2.1c) x, y, u, v ≥ 0

(2.1d) xᵀu = yᵀv = 0

THEOREM 2.2 ([4], p. 146). If (x, y, u, v) is a solution of (2.1), then the equality

(2.2) φ(x) = ½(cᵀx + bᵀy)

holds.
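Conditions (2.1) and identity (2.2) can be verified on a one-dimensional instance (the data below are made up for illustration, not taken from the paper):

```python
# P: min phi(x) = c*x + 0.5*D*x**2 with c = -4, D = 2, constraint A*x >= b (A = 1, b = 3).
# The constrained minimum is x = 3, with multipliers y = 2, u = 0 and slack v = 0.

c, D, A, b = -4.0, 2.0, 1.0, 3.0
x, y, u, v = 3.0, 2.0, 0.0, 0.0

assert c + D * x - A * y - u == 0.0        # (2.1a) stationarity
assert A * x - v == b                      # (2.1b) primal feasibility
assert min(x, y, u, v) >= 0.0              # (2.1c) nonnegativity
assert x * u == 0.0 and y * v == 0.0       # (2.1d) complementarity

phi = c * x + 0.5 * D * x * x
assert phi == 0.5 * (c * x + b * y)        # identity (2.2): phi(x) = (c'x + b'y)/2
```

Identity (2.2) is what lets the quadratic objective be replaced by the linear form ½(cᵀx + bᵀy) over the solutions of (2.1), which is the basis of theorem 2.4 below.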

THEOREM 2.3 ([14]). If

i) X ≠ ∅,

ii) D ≠ 0,

iii) X is compact,

then the infimum of ½(cᵀx + bᵀy) over the solutions of (2.1a), (2.1b), (2.1c) is −∞.

Define

Ā = [ D  −Aᵀ ; A  0 ] , b̄ᵀ = (−cᵀ, bᵀ) , c̄ᵀ = (cᵀ, bᵀ) , z'ᵀ = (xᵀ, yᵀ) , z''ᵀ = (uᵀ, vᵀ) .

THEOREM 2.4. The linear complementarity problem:

(2.3) min c̄ᵀz' , z ∈ {z : Āz' − z'' = b̄; z'ᵀz'' = 0; z ≥ 0}

is equivalent² to P.

1 - The "T" as superscript and "min" denote transposition and global minimum respectively.


Theorem 2.4, which is a straightforward consequence of theorems 2.1 and 2.2, shows that, to solve P, we have to solve a linear complementarity problem [7, 9]. Then, in the following sections, the linear complementarity problem will be considered.

3 - The linear complementarity problem.

For the sake of simplicity define now

x'ᵀ = (x_i) , x''ᵀ = (x_m+i) , A = (a_ij) , aᵀ = (a_i0) , cᵀ = (c_j) , i=1,...,m; j=1,...,n=2m , xᵀ = (x'ᵀ, x''ᵀ)

and consider the problem:

Q : min cᵀx , x ∈ X = {x : Ax = a; x'ᵀx'' = 0; x ≥ 0}

Now the problem Q will be considered instead of (2.3), which is a particular case of Q. Then define X* as the convex closure of the points of X which verify x'ᵀx'' = 0; the following theorems hold:

THEOREM 3.1. The set of optimal solutions of Q is a face (in particular a vertex) of X*.

PROOF. A vector x ∈ X must have at least n−m = m zero elements. Then either an optimal solution (shortly, o.s.) of Q is a vertex of X*, or it belongs to a face of X*, whose points are o.s. of Q. The result follows.

THEOREM 3.2. Q is equivalent³ to the linear problem:

(3.1) min cᵀx ; x ∈ X*

which is a consequence of theorem 3.1.

If X* were known, the problem (3.1) could be solved in place of Q. As X* is unknown, but it is sufficient to know only a subset of X* containing an o.s. of Q, such a subset is determined in order to reduce Q to a linear programming problem.

4 - A cutting plane method to solve Q. The case of nondegeneracy.

Consider the problem:

Q0 : min cᵀx ; x ∈ X

The method previously outlined consists firstly in solving Q0, as an o.s. of Q0 satisfying

x'ᵀx'' = 0

is also an o.s. of Q and then of P.

2 - In the sense that the first n elements of an optimal solution of the latter one are an optimal solution of the former one.

3 - In the sense that they have the same o.s. and the same minimum.


But, by theorem 2.3, Q0 has no finite o.s.; then in section 6 a transformation is defined, such that a problem equivalent to Q0, having a finite o.s., is obtained. Suppose then that Q0 has a finite o.s. If an o.s. is such that x'ᵀx'' = 0, terminate the algorithm. Otherwise, a linear inequality is determined, such that it is not satisfied by V0 (where V0 is the vertex corresponding to the current o.s.), but is satisfied by every other vertex of X.

This obviously can be realized in many ways; here the strongest inequality is generated. This happens when every vertex in X adjacent to V0 strictly verifies the generated inequality. If there is no degeneracy, this can be easily performed; otherwise it is more complicated. In this section the case of nondegeneracy is considered.

Let the coordinates of V0 be the basic solution (briefly, b.s.) of the reduced form:

(4.1) x_i + ā_i,m+1 x_m+1 + ... + ā_in x_n = ā_i0 , i=1,...,m

of the system Ax = a, where x_m+1 = ... = x_n = 0.

If the n-vector

(4.2) (ā_10,...,ā_m0, 0, 0,...,0)

satisfies x'ᵀx'' = 0, then it is an o.s. of Q too.

Otherwise, a convex polyhedron is determined, which has all the vertices of X but V0.

By the hypothesis of nondegeneracy, it is:

(4.3) ā_i0 > 0 , i = 1,...,m.

Define

x̄_j = Sup {x_j : x_i = ā_i0 − ā_ij x_j ≥ 0, i=1,...,m} , j = m+1,...,n;

γ_j = 0 if x̄_j = +∞ , γ_j = 1/x̄_j if x̄_j < +∞ , j = m+1,...,n;

γᵀ = (γ_m+1,...,γ_n) .

The inequality

(4.4) γᵀx'' ≥ 1

is said to be a cut for X. The intersection X̄ between X and the halfspace (4.4) is the required polyhedron, as is shown by the following:

THEOREM 4.1. The inequality (4.4): i) is not verified by V0; ii) is weakly verified by all the vertices of X which are adjacent to V0; iii) is verified by every other vertex of X; iv) every vertex of X is a vertex of X̄ too.

This is easily proved [8]. The cut (4.4) has interesting properties.
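The construction of the cut can be sketched numerically. The tableau data below are hypothetical; the computation of x̄_j by the usual ratio test and of γ_j = 1/x̄_j follows the definitions above:

```python
# Each row i of the reduced form (4.1) is stored as [abar_i0, abar_i,m+1, ..., abar_i,n].
# For each nonbasic variable x_j, xbar_j is the largest feasible increase
# (ratio test on x_i = abar_i0 - abar_ij * x_j >= 0), and gamma_j = 1/xbar_j
# (0 when xbar_j = +inf). The cut is then gamma^T x_nonbasic >= 1.

abar = [[2.0, 1.0, -1.00],     # nondegeneracy: abar_i0 > 0 in every row
        [3.0, 3.0,  0.75]]

def cut_coefficients(abar):
    nonbasic = len(abar[0]) - 1
    gamma = []
    for j in range(1, nonbasic + 1):
        ratios = [row[0] / row[j] for row in abar if row[j] > 0]
        xbar = min(ratios) if ratios else float("inf")
        gamma.append(0.0 if xbar == float("inf") else 1.0 / xbar)
    return gamma

gamma = cut_coefficients(abar)
assert gamma == [1.0, 0.25]        # xbar = [1, 4] for the two nonbasic variables
# V0 has all nonbasic variables at 0, so it violates gamma^T x >= 1;
# raising a single nonbasic variable to xbar_j gives gamma_j * xbar_j = 1 (weakly verified).
assert sum(g * 0.0 for g in gamma) < 1.0
```

This matches theorem 4.1: the current vertex is cut off, while each adjacent vertex satisfies (4.4) with equality.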


Let Q1 be the problem obtained by adding to Q0 the constraints

(4.5) γᵀx'' − x_n+1 = 1 ; x_n+1 ≥ 0

As (4.2) does not verify x'ᵀx'' = 0, the addition of (4.5) to Q does not change its solutions. Q1 is then associated to Q in place of Q0. An o.s. of Q weakly verifies (4.4), so that it is an element of A(V0), i.e. of the set of vertices adjacent to V0 in X. Let V1 be an o.s. of Q1, and x_h, h > m, the nonbasic variable (shortly, n.v.) that must become basic to go from V0 to V1. The current system is then:

(4.6)
x_i + α_{i,m+1} x_{m+1} + ... + α_{i,h−1} x_{h−1} + α_{i,h+1} x_{h+1} + ... + α_{i,n+1} x_{n+1} = α_{i0},   i = 1,...,m
x_h + α_{m+1,m+1} x_{m+1} + ... + α_{m+1,h−1} x_{h−1} + α_{m+1,h+1} x_{h+1} + ... + α_{m+1,n+1} x_{n+1} = α_{m+1,0}.

If V_1, at x_j = 0, j = m+1,...,n+1, j ≠ h, verifies x'^T x'' = 0, then it is an o.s. of Q too. Otherwise, the procedure is iterated and another inequality is generated which is not satisfied by V_1, but is satisfied by every other vertex of the feasible region of Q1. This is realized in the next section.

5 - The cut in the case of degeneracy.

Assume that V_1 does not satisfy x'^T x'' = 0. Then, to determine a cut which cuts off V_1, as in section 4, the definition of cut has now to be enlarged; in fact A_X(V_1) may contain more than n+1−(m+1) = n−m elements, so that it would be impossible to determine an inequality like (4.4) weakly verified by all the elements of A_X(V_1). Remark that in (4.6) there exists at least one ī ∈ {1,...,m} such that α_{ī0} = 0, so that to define a generic cut is to define a cut when degeneracy occurs, i.e. when (4.3) is not verified. Then consider the following linear programming problem:

(5.1a) min (c_1 x_1 + ... + c_N x_N)

(5.1b) x_i + α_{i,M+1} x_{M+1} + ... + α_{iN} x_N = α_{i0},   i = 1,...,M

(5.1c) x_j ≥ 0,   j = 1,...,N

and assume that the vector:

(5.2) (x_i = α_{i0}, i = 1,...,M;  x_j = 0, j = M+1,...,N)

is an o.s. of (5.1) which does not verify x'^T x'' = 0.

If M = m, N = n, (5.1b) coincides with (4.1); after the first cut (4.5) has been determined, if M = m+1 and N = n+1, (5.1b) coincides with (4.6). Assume, without loss of generality(4),

(5.3) α_{i0} > 0, i = 1,...,M̄;   α_{i0} = 0, i = M̄+1,...,M   (0 < M̄ < M).

To define a cut which, among the vertices of (5.1b,c), is not verified only by (5.2), consider the last M−M̄ equations of (5.1b), which may be equivalently

4 - In fact, if α_{i0} = 0, i = 1,...,M, then (5.1b,c) has only one vertex which does not verify x'^T x'' = 0, and then Q has no o.s.; if α_{i0} > 0, i = 1,...,M, we are in the case of section 4.


written

(5.4) α_{i,M+1} x_{M+1} + ... + α_{iN} x_N ≤ 0,   i = M̄+1,...,M.

Define C_0 = {(x_{M+1},...,x_N) : x_{M+1} ≥ 0; ...; x_N ≥ 0} and C_r, r = 1,...,M−M̄, as the intersection of C_0 and the first r halfspaces (5.4). To define a cut in this case, firstly the convex polyhedral cone C = C_{M−M̄} must be determined. This is realized in a gradual way; in fact, starting from C_0, whose edges are trivially known, the edges of C_{r+1} are obtained knowing the edges of C_r. In this way C_{M−M̄}, and then C, is determined [8]. Assume then that the parametric equations

(5.5) x_j = β_{ij} t,   j = M+1,...,N;   t ≥ 0;   i = 1,...,k;   β_{ij} ≥ 0

of the edges of C are known, the edges being respectively indicated H_1, ..., H_k. The set A_X(V_0) of the vertices of the convex polyhedron X adjacent to V_0 must now be determined(5). Then put

(5.6) ξ_s = Sup { t : (α_{i,M+1} β_{s,M+1} + ... + α_{iN} β_{sN}) t ≤ α_{i0}, i = 1,...,M },   s = 1,...,k;

(5.6) can be assumed to be finite(6). Then the elements of A_X(V_0) are the points(7)

(5.7) V_s = (x_j = β_{sj} ξ_s, j = M+1,...,N),   s = 1,...,k.
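The quantities (5.6)-(5.7) amount to one ratio test per edge of C. A minimal sketch, assuming the cone edges (5.5) and the rows of (5.1b) are given as plain lists (names are illustrative):

```python
def adjacent_vertices(alpha, alpha0, edges):
    """For each edge direction beta_s of C (eq. 5.5), compute the largest
    feasible step xi_s (eq. 5.6) and the resulting vertex V_s (eq. 5.7).
    Per footnote 6, each xi_s is assumed to come out finite."""
    vertices = []
    for beta in edges:
        xi = float('inf')
        for a_row, a0 in zip(alpha, alpha0):
            rate = sum(aj * bj for aj, bj in zip(a_row, beta))
            if rate > 0:                 # this row bounds the step along the edge
                xi = min(xi, a0 / rate)
        vertices.append([bj * xi for bj in beta])   # coordinates of V_s
    return vertices
```

For example, with rows x_1 = 2 − x_3 and x_2 = 3 − x_4 and edge directions (1,0), (0,1), (1,1), the adjacent points are (2,0), (0,3) and (2,2).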

Now a cut can be defined in this case. If k ≤ N−M, using the method described in section 4, an inequality like (4.4), said here too a cut, can be easily determined. If k > N−M, an inequality like (4.4) may not exist; then more than one inequality is generated. Define

I = {1,...,k};   X̄_j = Σ_{s∈I} x_{sj},   j = M+1,...,N;

V̄^T = (X̄_{M+1},...,X̄_N);   π^T = (π_{M+1,M+1},...,π_{M+1,N}),

and consider the problem

(5.8) min V̄^T π,   π ∈ { π : V_s^T π ≥ 1, s ∈ I }.

The feasible region of (5.8) is given by:

(5.9) V_s^T π − θ_s = 1,   s ∈ I;   θ ≥ 0,

where θ^T = (θ_1,...,θ_k) is a slack vector.

5 - The notations X, V_0, A_X(V_0) of sections 3, 4 are used again.

6 - See the following section.

7 - V_s denotes both the point and the (N−M)-uple.


The meaning of (5.8) is obvious: an o.s. (π_{M+1,j}, j = M+1,...,N) of (5.8) gives the coefficients of the inequality

(5.10) π_{M+1,M+1} x_{M+1} + ... + π_{M+1,N} x_N ≥ 1

which, among the ones verified by all the vertices of X but V_0, minimizes the sum of the differences between the two sides of (5.10) evaluated at every element of A_X(V_0). As (5.8) has a finite o.s., (5.10) always exists. If all the points (5.7) weakly verify (5.10), it is said to be a cut for (5.1) and our aim is attained. Otherwise, another b.s., distinct from the optimal one, is determined. Let it be the following:

(π_{M+2,j}, j = M+1,...,N),

to which, by the definition of the system of constraints, corresponds the inequality

(5.11) π_{M+2,M+1} x_{M+1} + ... + π_{M+2,N} x_N ≥ 1,

analogous to (5.10), which is considered together with it. If every point (5.7) weakly verifies at least one inequality of the system (5.10), (5.11), then this is said to be a cutting system, or briefly a cut, for the problem (5.1). Otherwise, another b.s. of the problem (5.8) is determined, and then another inequality like (5.11) is generated. As to every b.s. of (5.8) corresponds an (N−M−1)-face of the polyhedron X, iterating the procedure a finite number of times a system of inequalities

(5.12) π_{M+i,M+1} x_{M+1} + ... + π_{M+i,N} x_N ≥ 1,   i = 1,...,M−M̄

is determined, which is said to be a cutting system, shortly a cut, for the problem (5.1), and which is added to (5.1b). Then an o.s. of the new problem is determined; this is of course an element of A_X(V_0). Iterating the procedure described above, in a finite number of steps an optimal solution of Q is reached. The finiteness of the method is justified by the following theorem:

THEOREM 5.1 ([8]). The polyhedra X̄ and X_0 have the same vertices,

where X_0 is the union of the difference between X and the convex hull of its vertices, and the convex hull of the vertices of X but V_0.

6 - Determination of an optimal solution of Q0

By theorem 2.3, Q0 has no finite optimal solutions. Nevertheless, with a transformation it is possible to obtain a problem equivalent to Q0 having a finite o.s. Let Q̄ and Q̄0 denote respectively Q and Q0, both with the additional constraint:

(6.1) x_1 + x_2 + ... + x_n + x_{n+1} = Q_{00};   x_{n+1} ≥ 0.

Then, by the following:

THEOREM 6.1 ([8]). Q has a finite o.s. iff a real Q_{00} exists such that Q̄ has a finite o.s. satisfying the inequality x_{n+1} > 0;


Q̄0 can be considered in place of Q0, when the latter has no finite o.s., if Q_{00} is large enough. Theorems 3.1 and 3.2 hold for Q̄ too. Remark that, after the above transformation, there are vertices of the feasible region of Q̄0 which are not vertices, and then b.s., for Q0. It is useful to eliminate such vertices, as the computation increases rapidly when the number of iterations, and then the degeneracy of the problems Q0, Q1, ..., Qn, becomes larger. This is realized, as indicated in [14], by a transformation, here briefly described. The following parametric capacity constraint is added in place of (6.1):

(6.2) x_1 + x_2 + ... + x_n + x_{n+1} = α,  with x_{n+1} ≥ 0,

where α is a parameter.

Let Q0(α) be the problem Q̄0 with α in place of Q_{00}, V_0(α) an optimal vertex of Q0(α), and X(α) the feasible region of Q0(α). All the vertices of X(α) adjacent to V_0(α) which are not vertices of X are determined. Remark that every such vertex of X(α) is such that

(6.3) lim_{α→+∞} α_{i0}(α) = +∞

for at least one i ∈ {1,...,m}. Then the set of vertices adjacent to every vertex satisfying (6.3), and such that they are vertices both in X and in X(α), is determined. Such a set is considered in place of A_X(V_0); it is said A_{X(α)}(V_0(α)). The method described in section 5 is then applied to A_{X(α)}(V_0(α)) and a system analogous to (5.12) is obtained. This procedure is justified by remarking that

lim_{α→+∞} X(α) = X

and that the vertices of X(α) which are not vertices of X verify (6.3). The following

proposition holds:

Proposition. The polyhedron defined by the constraints of Q̄0 and by the system (5.12) is the convex hull of the vertices of X.

In fact, every vertex of X belongs to the polyhedron, and every inequality of the system (5.12) defines a facet of the polyhedron itself.

7 - Connections between nonconvex and concave quadratic programming problems.

The idea previously described and the subsequent algorithm are not so efficient as to be applied in the form outlined above. It is then necessary to implement the algorithm; to this aim, remark that the central problem of this theory is the concave programming problem, to which it is always possible to restrict. In fact, it is known that for a given nonconvex quadratic programming problem a decomposition into convex and concave sub-problems is possible [3]. Thus, given a nonconvex quadratic problem, it is enough to be able to solve convex and concave sub-problems. The solution of the former can easily be obtained by the known methods. The method of the previous sections can be used for the latter. To this aim, consider the problem P of section 2; the so-called facial decomposition can be used. It consists in a tree-like procedure, beginning with a single node corresponding to X itself; from this

node there are branches leading to the (n−1)-dimensional faces of X; let them be F_i (i = 1,...,k), where k can be easily determined [17]. If φ(x) is either convex or concave on each of the F_i, i = 1,...,k, the procedure can be stopped. Otherwise, from each F_i where φ(x) is neither convex nor concave, there are branches going to the (n−2)-dimensional faces of X. To these also the procedure described above is applied, until a set of sub-problems of P, either convex or concave, is found.

Each convex sub-problem can be solved using one of the efficient existing algorithms, while for the concave ones the algorithm previously described can be implemented, as shown in the following section.

8 - A sufficient condition for optimality in a concave quadratic programming problem.

Consider the problem P, assuming that φ(x) is strictly concave, and the complementarity problem (2.3) equivalent to P. Let (x̄, ȳ, ū, v̄) be an o.s. of the linear programming problem associated to (2.3), i.e. a vertex of the polyhedron (2.1a,b,c). This always happens when any problem Q0, Q1,...,Qn previously considered has been solved. If the o.s. thus obtained satisfies the complementarity condition, a solution of Q, and then of P, is at hand. Otherwise, using the algorithm described above, such a vertex is cut off, even if (x̄, v̄) is an o.s. of P. More precisely, let Q_i, i = 0,1,...,r, be the problems which are to be solved before obtaining a vertex satisfying the complementarity condition, and let (x^i, y^i, u^i, v^i) be the corresponding o.s.; (x^r, v^r) is an o.s. of P, but it may be that (x^i, v^i) is an o.s. of P with i < r. Since the complementarity condition is the only sufficient condition at hand to decide whether (x^i, v^i) is an o.s. of P, all the problems Q_i, i = 0,1,...,r, are to be solved. This happens frequently, and as iterating the procedure requires much computation, it is natural to try to stop the algorithm if (x^i, v^i) is an o.s. of P, even if (x^i, y^i, u^i, v^i) does not satisfy the complementarity condition. If P is strictly concave, an o.s. is necessarily a vertex of X. Then, when (x^i, v^i) is a vertex of X, a well-known condition [15] can be used to decide whether it is an o.s. of P. Consider firstly the case of nondegeneracy.

I. The case of nondegeneracy

Let (x^(0), v^(0), y^(0), u^(0)) be an o.s. of the current i-th problem Q_i, and let (x^(0), v^(0)) be a nondegenerate vertex of X. In this hypothesis there are n vertices (x^(i), v^(i)), i = 1,...,n, adjacent to it. Define t^0 as the n-vector of the nonbasic variables associated to (x^(0), v^(0)), and t^i, i = 1,...,n, the vectors associated to (x^(i), v^(i)) in the space having t^0 as origin.

In the n vertices adjacent to t^0 let it be:

(8.1) φ(t^i) ≥ φ(t^0),   i = 1,...,n.

On the straight lines containing the edges originating in t^0, consider n points t^{*i}, i = 1,...,n, such that:

(8.2) φ(t^{*i}) = φ(t^0),   i = 1,...,n,


which necessarily exist, as φ(x) is concave. Remark that

(8.3) t^{*i} = λ_i t^0 + (1−λ_i) t^i,   i = 1,...,n,

where

λ_i = [φ(t^0) − φ(t^i)] / (t^{0T} D t^0 + t^{iT} D t^i),

and then λ_i ≤ 0, i = 1,...,n, i.e. the points t^{*i} are nonconvex linear combinations of t^0 and t^i.

Consider now the hyperplane passing through the n points t^{*i}; its equation is:

(8.4) g(t) = Σ_{i=1}^{n} t_i / [(1−λ_i) t_i^i] − 1.

Remark that g(t^0) = −1. Consider now the following problem:

(8.5a) max g(t)

subject to

(8.5b) Ax − v = b,

where (8.5b) has to be suitably expressed in the space having t^0 as origin.

If the linear programming problem (8.5) has an o.s. t* such that

(8.6) g(t*) ≤ 0,

then t^0 is an optimal basis for P, i.e. (x^(0), v^(0)) is an o.s. of P, even if (x^(0), v^(0), y^(0), u^(0)) does not satisfy the complementarity condition; if

(8.7) g(t*) > 0,

a vertex where the inequality φ(t) < φ(t^0) may hold exists, so that the procedure of the algorithm of section 5 must be iterated. Analogous considerations may be made for the case of degeneracy of (x^(0), v^(0)); see [14].
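For a concave quadratic restricted to an edge of X, the point (8.2) where φ returns to its value at t^0 has a closed form, from which the hyperplane (8.4) follows. A small sketch, assuming the restriction of φ along the i-th edge is φ(t^0 + s e_i) = φ(t^0) + s·dphi_i + ½ s² q_i with q_i < 0 (all names are illustrative):

```python
def level_return_points(dphi, q):
    """Step s_i* > 0 along each edge with phi(t0 + s e_i) = phi(t0):
    s*dphi_i + 0.5*s^2*q_i = 0  =>  s_i* = -2*dphi_i / q_i (q_i < 0)."""
    return [-2.0 * d / qi for d, qi in zip(dphi, q)]

def g(t, s_star):
    """Hyperplane function (8.4) through the points s_i* e_i, expressed in
    edge coordinates; by construction g(t0) = g(0) = -1."""
    return sum(ti / si for ti, si in zip(t, s_star)) - 1.0
```

With directional derivatives (2, 4) and curvatures (−1, −2), both return points are at step 4; g evaluates to −1 at the origin t^0 and to 0 on the hyperplane itself, matching the test (8.6)-(8.7).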

9 - An algorithm for nonconvex quadratic problems.

After the remarks of the preceding sections, an algorithm to solve nonconvex quadratic programming problems is here briefly outlined:

Step 1. Let the problem P be given. Its feasible region can be decomposed in such a way that P is replaced by a finite number of convex and strictly concave problems. The former can be solved by any algorithm for convex quadratic programming problems. To solve the latter, use step 2.


Step 2. The concave programming problem is transformed into the complementarity problem. Determine Q0. Put r = 0 and go to step 3.

Step 3. The linear parametric problem of section 6 is solved and the set A_{X(α)}(V_0(α)) is determined. Go to step 4.

Step 4. A cut is determined; the constraints which define the present cut are added to Q_r. Go to step 5.

Step 5. Q_r is solved (if r > 0, an o.s. of Q_r is quickly available). If an o.s. of Q_r verifies the complementarity condition, an o.s. of P is obtained as a subvector of it; terminate the algorithm. Otherwise, go to step 6.

Step 6. If the subvector of the o.s. of Q_r corresponding to a solution of P is a vertex of P, go to step 7; otherwise, put r = r+1 and go to step 5.

Step 7. The problem (8.5) is determined and solved. If the o.s. of (8.5) verifies (8.6), terminate the algorithm; if it verifies (8.7), put r = r+1 and go to step 5.
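The control flow of steps 1-7 can be summarized as follows; every callable is a hypothetical stand-in for the corresponding sub-procedure of the paper (decomposition, cut generation, the optimality test (8.5)-(8.7)), not an implementation of it:

```python
def nonconvex_qp_outline(decompose, to_complementarity, make_cut,
                         solve, is_complementary, is_vertex, vertex_test):
    """Control-flow sketch of the section-9 algorithm; the counter r and
    the growing constraint set of Qr are abstracted into make_cut."""
    solutions = []
    for sub in decompose():              # step 1: convex / concave sub-problems
        if sub.convex:
            solutions.append(solve(sub))  # convex case: any standard QP method
            continue
        q = to_complementarity(sub)       # step 2: build Q0
        while True:
            q = make_cut(q)               # steps 3-4: add a cut to Qr
            s = solve(q)                  # step 5: solve Qr
            if is_complementary(s):       # complementarity holds: o.s. of P
                solutions.append(s)
                break
            if is_vertex(s) and vertex_test(s):   # steps 6-7: test (8.6)
                solutions.append(s)
                break
    return solutions
```

The usage below wires in trivial stubs only to exercise both branches of the loop.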


REFERENCES

[1] ABADIE J., On the Kuhn-Tucker Theorem. In "Nonlinear Programming", J. Abadie (ed.), North-Holland Publ. Co., 1967, pp. 19-36.

[2] BEALE E.M.L., Numerical Methods. In "Nonlinear Programming", J. Abadie (ed.), North-Holland Publ. Co., 1967, pp. 133-205.

[3] BURDET C.A., General Quadratic Programming. Carnegie-Mellon Univ., Paper W.P. 41-71-2, Nov. 1971.

[4] COTTLE R.W., The principal pivoting method of quadratic programming. In "Mathematics of the Decision Sciences, Part I", G.B. Dantzig and A.F. Veinott Jr. (eds.), American Mathematical Society, Providence, 1968, pp. 144-162.

[5] COTTLE R.W. and MYLANDER W.C., Ritter's cutting plane method for nonconvex quadratic programming. In "Integer and Nonlinear Programming", J. Abadie (ed.), North-Holland Publ. Co., 1970, pp. 257-283.

[6] DANTZIG G.B., Linear Programming and Extensions. Princeton Univ. Press, 1963.

[7] EAVES B.C., On the basic theorem of complementarity. "Mathematical Programming", Vol. 1, n. 1, 1971, pp. 68-75.

[8] GIANNESSI F., Nonconvex quadratic programming, linear complementarity problems, and integer linear programs. Dept. of Operations Research and Statistical Sciences, Univ. of Pisa, Paper A/1, January 1973.

[9] KARAMARDIAN S., The complementarity problem. "Mathematical Programming", Vol. 2, n. 1, 1972, pp. 107-123.

[10] KUHN H.W. and TUCKER A.W., Nonlinear programming. In "Second Berkeley Symp. on Mathematical Statistics and Probability", J. Neyman (ed.), Univ. of California Press, Berkeley, 1951, pp. 481-492.

[11] LEMKE C.E., Bimatrix Equilibrium Points and Mathematical Programming. "Management Science", Vol. 11, 1965, pp. 681-689.

[12] RAGHAVACHARI M., On connections between zero-one integer programming and concave programming under linear constraints.

[13] RITTER K., A method for solving maximum problems with a nonconcave quadratic objective function. Z. Wahrscheinlichkeitstheorie Verw. Geb. 4, 1966, pp. 340-351.

[14] TOMASIN E., Global optimization in nonconvex quadratic programming and related fields. Dept. of Operations Research and Statistical Sciences, Univ. of Pisa, September 1973.

[15] TUI HOANG, Concave programming under linear constraints. Soviet Math., 1964, pp. 1437-1440.

[16] ZWART P.B., Nonlinear programming: counterexamples to global optimization algorithms proposed by Ritter and Tui. Washington Univ., Dept. of Applied Mathematics and Computer Sciences, School of Engineering and Applied Science, Report No. CO-1493-32, 1972.

[17] BURDET C.A., The facial decomposition method. Graduate School of Industrial Administration, Carnegie-Mellon Univ., Pittsburgh, Penn., May 1972.


A WIDELY CONVERGENT MINIMIZATION ALGORITHM WITH QUADRATIC TERMINATION PROPERTY

by GIULIO TRECCANI, UNIVERSITÀ DI GENOVA

1. INTRODUCTION AND NOTATIONS

We shall consider methods for minimizing a real-valued function of n real variables φ : R^n → R of the following type:

(1.1) x_{k+1} − x_k = α_k p_k

where p_k, the search direction, is a vector in R^n, and α_k, the scalar stepsize, is a suitable nonnegative real number.

Two properties of methods of this kind will be considered: convergence and quadratic termination.

Assume that φ(x) has a unique absolute minimum point x*; then a method (1.1) is said to be globally convergent for the function φ if every solution {x_i} of (1.1), starting at any point x_0 ∈ R^n, is convergent to x*.

Assume now that φ(x) is a convex quadratic function; then a method (1.1) is said to have the quadratic termination property if for every initial point x_0 it minimizes φ(x) in at most n iterations.

In the following it will be assumed that:

1.2 φ(x) is continuously differentiable in R^n;
1.3 φ(x) is bounded from below;
1.4 every level set of φ(x) is a bounded set;
1.5 there is one and only one point x* ∈ R^n such that grad φ(x*) = 0.

Then it can be proved that x* is the absolute minimum point of φ(x) and that the level sets are connected. We remark, however, that φ(x) has no convexity property.

We shall construct a modification of the well-known Fletcher-Reeves conjugate gradient method, which will be proved to be convergent and to have the quadratic termination property. Even though accurate line minimization is required in our method, it will be proved to converge without any convexity assumption and without search direction restoration, while the quadratic termination property is conserved; this seems to be a somewhat new result with respect to the classical conjugate gradient methods of Hestenes-Stiefel, Polak-Ribière and Fletcher-Reeves. This algorithm has been deduced as an application of the theory of Discrete Semi-Dynamical Systems (DSDS) (see [1]); a short summary of the definitions and results of this theory which will be used for our purposes is in section 4.

1.6 Basic Notations.

g(x) is the gradient of φ at the point x ∈ R^n;
g_i = g(x_i);
d_i = x_{i+1} − x_i;
|x| is the Euclidean norm of x ∈ R^n;
R^+ is the set of nonnegative real numbers;
I^+ is the set of nonnegative integers;
X is a metric space;
F(X) is the set of compact nonempty subsets of X.

2. THE ALGORITHM

2.1 x_{k+1} = x_k + d_k ;

2.2 d_k = α_k p_k ;

2.3 A_k = { α ∈ R^+ : grad φ(x_k + α p_k)^T p_k = 0 } ;

2.4 α_k = 0 if A_k = ∅;  α_k = Min A_k otherwise ;

2.5 μ_k = |g_k^T g_{k+1}| / (|g_k| |g_{k+1}|) if g_{k+1} ≠ 0;  μ_k = 0 otherwise ;

2.6 θ_k = |g_k| / |p_k| if g_k ≠ 0;  θ_k = 0 otherwise ;

2.7 p_0 = −g_0 ;

2.8 p_{k+1} = −g_{k+1} + (|g_{k+1}|² / |g_k|²)(1 − μ_k θ_k) p_k .

3. COMMENTS AND IMPLEMENTATION OF THE ALGORITHM

3.1 Theorem.


The solution of the algorithm 2, starting at any point x_0 ∈ R^n, is an infinite sequence {x_i} such that, if for some k ∈ I^+ we have x_{k+1} = x_k, then x_i = x_k for i ≥ k. In addition, if x* ∉ {x_i}, then φ_i = φ(x_i) is a strictly decreasing sequence of real numbers.

Proof. Assume that x_0,...,x_k can be computed by algorithm 2 and that x_i ≠ x* for i = 0,...,k. Then the following properties hold for 0 ≤ i ≤ k:

(i) p_i ≠ 0, p_i^T g_{i+1} = 0 ;
(ii) d_i ≠ 0 ;
(iii) α_i > 0 ;
(iv) φ(x_{i+1}) < φ(x_i) .

These properties hold for i = 0; indeed p_0 = −g_0 by 2.7 and g_0 ≠ 0 by assumption, hence α_0 ≠ 0 and d_0 ≠ 0; on the other hand θ_0 = 1 and φ(x_1) < φ(x_0).

Assume that these properties are true for i < k, and prove them for i+1. Since α_i > 0 and g_i ≠ 0, p_{i+1} can be computed by 2.8; then we have p_{i+1}^T g_{i+1} = −|g_{i+1}|² < 0, since g_{i+1}^T p_i = 0 and g_{i+1} ≠ 0. It follows that p_{i+1} ≠ 0, x_{i+2} can be computed, and d_{i+1} ≠ 0, φ(x_{i+2}) < φ(x_{i+1}), α_{i+1} > 0; the statement follows by induction.

Assume now that x_k = x* for some k ∈ I^+, while x_j ≠ x* for j < k. Then by 2.8 we have p_k = 0, which implies d_k = 0, x_{k+1} = x_k and p_i = d_i = 0 for every i ≥ k.

It is clear that algorithm 2 is an ideal algorithm, not only because the line search 2.3-2.4 is assumed to be exact, but also because no stopping rule is given, and in the computation of θ_k very small quantities can be involved in the denominator even if |g_k| is not very small. For these reasons, maintaining the assumption of exact line searches, we propose an equivalent form of algorithm 2.

3.2. ALGORITHM

1. Set i = 0, p_0 = −g_0.

2. Compute α_i = Min A_i,  A_i = { α ∈ R^+ : grad φ(x_i + α p_i)^T p_i = 0 }.

3. Compute d_i = α_i p_i.

4. Compute x_{i+1} = x_i + d_i.

5. If |g_{i+1}| ≤ ε stop; otherwise go to 6.

6. μ_i = |g_{i+1}^T g_i| / (|g_i| |g_{i+1}|).

7. θ_i = |g_i| / |p_i|.

8. p_{i+1} = −g_{i+1} + (|g_{i+1}|² / |g_i|²)(1 − μ_i θ_i) p_i.

9. Set i = i+1 and go to 2.
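A minimal sketch of algorithm 3.2, assuming the step-8 update p_{i+1} = −g_{i+1} + (|g_{i+1}|²/|g_i|²)(1 − μ_i θ_i) p_i; when μ_i = 0 (as on quadratics, by theorem 3.4) it reduces to the classical Fletcher-Reeves update. `grad` and `exact_step(x, p)` (returning the exact minimizing step along p) are supplied by the caller and are illustrative names:

```python
import math

def modified_fletcher_reeves(grad, exact_step, x0, eps=1e-10, max_iter=200):
    """Sketch of Algorithm 3.2 under the exact line-search assumption."""
    x = list(x0)
    g = grad(x)
    p = [-gi for gi in g]                            # step 1: p_0 = -g_0
    for _ in range(max_iter):
        ng = math.sqrt(sum(gi * gi for gi in g))
        if ng <= eps:                                # step 5: stopping rule
            break
        alpha = exact_step(x, p)                     # steps 2-3: d_i = alpha_i p_i
        x = [xi + alpha * pi for xi, pi in zip(x, p)]  # step 4
        g_new = grad(x)
        ng_new = math.sqrt(sum(gi * gi for gi in g_new))
        if ng_new <= eps:
            g = g_new
            break
        # step 6: mu_i = |g_{i+1}^T g_i| / (|g_i| |g_{i+1}|)
        mu = abs(sum(a * b for a, b in zip(g, g_new))) / (ng * ng_new)
        # step 7: theta_i = |g_i| / |p_i|
        theta = ng / math.sqrt(sum(pi * pi for pi in p))
        # step 8: Fletcher-Reeves coefficient damped by (1 - mu_i * theta_i)
        beta = (ng_new * ng_new) / (ng * ng) * (1.0 - mu * theta)
        p = [-a + beta * b for a, b in zip(g_new, p)]
        g = g_new
    return x
```

On a convex quadratic with an exact step, successive gradients are orthogonal, μ_i = 0, and the method terminates in at most n iterations, illustrating theorem 3.4.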

3.3 THEOREM

If x_0 ≠ x*, then either there is k ∈ I^+ such that the algorithm stops at a point x_k such that |g_k| ≤ ε, and {x_0,...,x_k} belongs to the solution of algorithm 2 starting at x_0; or the solution of 3.2 is an infinite sequence {x_i} such that |g_i| > ε and φ(x_{i+1}) < φ(x_i) for every i ∈ I^+, and {x_i} is the solution of algorithm 2 starting at x_0.

Proof. From the proof of theorem 3.1 it follows that if |g_i| > ε for 0 ≤ i ≤ k, then d_k ≠ 0 and α_k ≠ 0, while p_k^T g_k = −|g_k|², which implies that:

|g_k^T d_k| / (|g_k| |d_k|) = |g_k^T p_k| / (|g_k| |p_k|) = |g_k| / |p_k| > 0.

Now we shall prove that algorithms 2 and 3.2 have the quadratic termination property.

3.4. THEOREM

If φ(x) is a convex quadratic function of n real variables, then φ(x) is


minimized by 2 and 3.2 in at most n iterations.

Proof. Indeed, if φ is quadratic, μ_i = 0 and 2 is the Fletcher-Reeves conjugate gradient method, which generates conjugate directions and minimizes φ(x) in at most n iterations.

Q.E.D.

4. DISCRETE SEMI DYNAMICAL SYSTEMS.

4.1 Assume that X is a locally compact complete metric space. The norm generated by the metric is denoted by |·|; F(X) is the set of nonempty compact subsets of X. A map f : X → F(X) is said to be upper semicontinuous at x ∈ X if

lim_{y→x} Max_{z∈f(y)} Min_{v∈f(x)} |z − v| = 0 .

4.2 A DSDS is the triple (X, I^+, f), where f : X × I^+ → F(X) is such that:

(i) f(x,0) = {x}, for every x ∈ X;
(ii) f[f(x,k),h] = f(x,k+h), for every x ∈ X, h,k ∈ I^+;
(iii) f is upper semicontinuous in X × I^+.
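For finite sets, the Max-Min expression in the upper semicontinuity definition 4.1 can be evaluated directly; a toy sketch for points on the real line (function name illustrative):

```python
def excess(A, B):
    """Max over z in A of the distance from z to the set B, i.e. the
    quantity Max_{z in A} Min_{v in B} |z - v| appearing in 4.1."""
    return max(min(abs(z - v) for v in B) for z in A)
```

The quantity is not symmetric: points of A far from B are penalized, but not conversely, which is why only the images f(y) of nearby points enter the Max in 4.1.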

A solution of the DSDS through x ∈ X is a map σ : I_σ → X such that:

(i) I_σ ⊆ I^+;
(ii) σ(0) = x;
(iii) σ(j) ∈ f[σ(k), j−k], for every j,k ∈ I_σ, j ≥ k;
(iv) there is no proper extension of σ which has the properties (i), (ii) and (iii).

4.3 A set M ⊆ X is said to be weakly positively invariant if for every x ∈ M there is a solution σ of the DSDS through x such that σ(k) ∈ M for every k ∈ I^+.

4.4 The positive limit set of a solution σ of the DSDS is the set:

L^+(σ) = { y : there is a sequence k_n → +∞ such that σ(k_n) → y } .

4.5 A Liapunov function for a solution σ is a real-valued function v such that {v[σ(i)]} is a nonincreasing sequence of real numbers.

The following two properties hold :


4.6 Property. L^+(σ) is a weakly positively invariant set.

4.7 Property. A continuous Liapunov function v for a solution σ is constant on L^+(σ).

Properties 4.6 and 4.7 will be crucial in our convergence proof.

5. CONSTRUCTION OF A DSDS WHICH IS RELATED TO THE ALGORITHM 2.

Let X be the following set:

X = { (x_1, x_2) ∈ R^n × R^n : φ(x_2) ≤ φ(x_1) },

where φ : R^n → R satisfies 1.2, 1.3, 1.4 and 1.5. Then X is a closed subset of R^n × R^n and is a locally compact complete metric space with the Euclidean metric.

Assume that:

5.1 p : X → F(R^n) is upper semicontinuous in a nonempty subset X' ⊆ X;

5.2 a : X × R^n → F(R^+) is upper semicontinuous in X × R^n;

5.3 h : X → F(R^n), defined as h(x_1,x_2) = { y ∈ R^n : y = αp, α ∈ a(x_1,x_2,p), p ∈ p(x_1,x_2) }, is such that φ(x_2 + y) ≤ φ(x_2) for every y ∈ h(x_1,x_2), and is upper semicontinuous in X \ X'.

Then it is easy to see that h is upper semicontinuous in X, and the map:

5.4 f : X → F(X),   f(x_1,x_2) = (x_2, x_2 + h(x_1,x_2))

is upper semicontinuous in X; hence the triple (X, I^+, f^k) is a DSDS.

Assume now that the set:

5.5 A(x_1,x_2,p) = { α ∈ R^+ : grad φ(x_2 + αp)^T p = 0, φ(x_2 + αp) ≤ φ(x_2) }

is empty. Then a(x_1,x_2,p) = {0}. On the other hand, if the set A(x_1,x_2,p) ≠ ∅, we set:


5.6 a(x_1,x_2,p) = { α ∈ A(x_1,x_2,p) : φ(x_2 + αp) ≤ φ(x_2 + p Min A) } .

The map a : X × R^n → F(R^+) is clearly upper semicontinuous. We define a map p : X → F(R^n) as follows:

5.7 if x_2 = x*:   p(x_1,x_2) = {0};

5.8 if x_1 = x_2 ≠ x*:   p(x_1,x_2) = −g(x_2);

5.9 if x_1 ≠ x_2 ≠ x*, g^T(x_1) g(x_2) = 0:

p(x_1,x_2) = { y ∈ R^n : |y| ≤ 1, y = α_1 g(x_2) + α_2 (x_2 − x_1), α_1 ≤ 0, α_2 ≥ 0 };

5.10 if x_1 ≠ x_2 ≠ x*, g^T(x_1) g(x_2) ≠ 0, (x_1 − x_2)^T g(x_1) = 0:

p(x_1,x_2) = −g(x_2) + (|g(x_2)|² / |g(x_1)|²) (1 − |g^T(x_1) g(x_2)| / (|g(x_1)| |g(x_2)|)) (x_2 − x_1);

5.11 if x_1 ≠ x_2 ≠ x*, g^T(x_1) g(x_2) ≠ 0, (x_1 − x_2)^T g(x_1) ≠ 0:

p(x_1,x_2) = −g(x_2) + (|g(x_2)|² / |g(x_1)|²) (1 − |g^T(x_1) g(x_2)| |(x_1 − x_2)^T g(x_1)| / (|g(x_1)|² |g(x_2)| |x_1 − x_2|)) (|g(x_1)|² / |(x_1 − x_2)^T g(x_1)|) (x_2 − x_1).

5.12 Property. The map p(x_1,x_2) is upper semicontinuous for g^T(x_1) g(x_2) ≠ 0, while the map h(x_1,x_2) is upper semicontinuous for every (x_1,x_2) ∈ X.

From the definitions 5.5-5.6 of the map a(x_1,x_2,p), it follows that if we set:

(5.13) v(x_1,x_2) = φ(x_1) + φ(x_2)

then v : X → R is a Liapunov function for every solution σ of the DSDS defined above.

5.14 Theorem

If σ is any solution of the DSDS defined by 5.5-5.12 and k ∈ I^+ is such that v[σ(k)] = v[σ(k+1)] = v[σ(k+2)], then σ(i) = (x*, x*) for every i ≥ k.

Proof. Let σ(k) = (x_1,x_2), σ(k+1) = (x_2,x_3) and σ(k+2) = (x_3,x_4); then, by hypothesis and by 5.13, φ(x_1) = φ(x_2) = φ(x_3) = φ(x_4); indeed v[σ(k)] = v[σ(k+1)] implies φ(x_1) + φ(x_2) = φ(x_2) + φ(x_3). Hence it is sufficient to prove that x_i = x* for some i ∈ {1,2,3,4}. Clearly, if x_3 ≠ x*, it must be x_1 ≠ x_2, g^T(x_2)(x_2 − x_1) = 0, and also x_2 ≠ x_3, since x_2 = x_3 would imply φ(x_4) < φ(x_3). Then we have x_1 ≠ x_2 ≠ x_3, g^T(x_2)(x_2 − x_1) = 0, g^T(x_3)(x_3 − x_2) = 0.

Now, if g^T(x_3) g(x_2) ≠ 0, then by 5.10-5.11 p[σ(k+1)] = p(x_2,x_3) ≠ 0 and p^T(x_2,x_3) g(x_3) = −|g(x_3)|² < 0, which would imply φ(x_4) < φ(x_3). On the other hand, if g^T(x_3) g(x_2) = 0, it would be p^T(x_2,x_3) g(x_3) = α_1 |g(x_3)|² + α_2 (x_3 − x_2)^T g(x_3) = α_1 |g(x_3)|² < 0, which would imply φ(x_4) < φ(x_3).

In any case we have proved that x_3 ≠ x* implies a contradiction; hence x_1 = x_2 = x_3 = x_4 = x*.

Q.E.D.

As a consequence of theorem 5.14 the following property holds.

5.15 Theorem.

For every solution σ of the DSDS defined by 5.5-5.11, L^+(σ) is a nonempty set and L^+(σ) = (x*, x*).

Proof. Since v is a Liapunov function for any solution, the level sets of φ(x) are strongly positively invariant, and since by 1.4 they are bounded, L^+(σ) must be nonempty.


As φ(x) is continuous in R^n, v is continuous in X and by 4.7 is constant on L^+(σ). Since by 4.6 L^+(σ) is weakly positively invariant, if (x_1,x_2) ∈ L^+(σ) then (x_1,x_2) = (x*, x*) by theorem 5.14.

Q.E.D.

6. CONVERGENCE OF ALGORITHM 2.

Let {x_i} be any solution of algorithm 2. Then, if we set:

6.1 σ(0) = (x_0, x_0), σ(1) = (x_0, x_1), ..., σ(k) = (x_{k−1}, x_k),

any maximal extension of σ is a solution of the DSDS; indeed only the cases 5.7, 5.8, 5.10 and 5.11 can be verified. It follows that L^+(σ) = (x*, x*) by theorem 5.15, which implies that x_i → x* for i → +∞.

7. CONCLUSIONS.

We have constructed a DSDS whose properties imply the convergence of a conjugate gradient method which is a modification of the Fletcher and Reeves method and has the quadratic termination property.

The convergence is global for functions which are continuously differentiable, bounded from below, and have bounded level sets and one and only one critical point.

However, these assumptions are not restrictive, since if the function is simply continuously differentiable, the algorithm converges to a local minimum point if the initial point is chosen in any bounded level set containing one and only one critical point.


BIBLIOGRAPHY

[1] G.P. Szegő and G. Treccani: Semigruppi di Trasformazioni Multivoche in R^+. Symposia Mathematica, Vol. VI, Academic Press, London-New York, 1971, pp. 287-307.

[2] G.P. Szegő and G. Treccani: Axiomatization of minimization algorithms and a new conjugate gradient method. In G.P. Szegő (ed.), "Minimization Algorithms", Academic Press, New York, 1972.


A HEURISTIC APPROACH TO COMBINATORIAL OPTIMIZATION PROBLEMS (°)

by

E. Biondi, P.C. Palermo

Istituto di Elettrotecnica ed Elettronica, Politecnico di Milano

1. Introduction

It is well known that the exact solution of large-scale combinatorial problems (for instance scheduling, sequencing, delivery, plant-location, travelling salesman problems, ...) implies hard or prohibitive difficulties concerning the computation time and the memory storage.

These problems are generally approached by heuristic techniques, frequently developed "ad hoc", which allow one to determine near-optimal solutions efficiently.

This paper outlines a general heuristic approach, based on the conceptual framework of Dynamic Programming, which seems to be effective in a large class of combinatorial problems.

Some efficient algorithms, successfully tested on classical flow-shop scheduling and delivery problems, are discussed.

2. The approach

Denote by

f(x,d)   a separable objective function,
x        the discrete set of state variables,
d        the discrete set of decision variables,
g(x) = min_d f(x,d).

(°) This work was supported by C.N.R. (National Research Council) and by the Fiorentini-Mauro foundation.


According to the principle of Dynamic Programming, the overall optimization problem is decomposed into a sequence of linked sub-problems concerning sub-sets of decision variables.

Let

d^k   be the decision variable(s) at the k-th stage (k = 1,...,N),
D^k = {d^k_1,...,d^k_{n_k}}   the feasible definition set of d^k,
x^k   the state variables at the k-th stage,
f^k_i   the pay-off of the decision d^k_i at the k-th stage,
x^{k+1}_i   the state variables at the stage (k+1) after the decision d^k_i at the k-th stage,
d~^k = {d - d^1 - ... - d^{k-1}}   the decisions remaining after the first (k-1) stages,
D~^k   the definition set of d~^k.

At the k-th stage the basic dynamic program involves the following computation:

determine   g(x^k) = min_{d^k} { f^k_i + min f(x^{k+1}_i, d~^{k+1}) } = min_{d^k} { f^k_i + g(x^{k+1}_i) }.   (1)

Let

λ h(x^{k+1}_i)   be a parametric estimate of g(x^{k+1}_i), where h(x^{k+1}_i) is a known (suitably defined) evaluation function of g(x^{k+1}_i) and λ ≥ 0 is an adjustment factor of h(x^{k+1}_i),

λ^k_i   be the (unknown) exact value of the adjustment factor of h(x^{k+1}_i),

i.e.   g(x^k) = min_{d^k} { f^k_i + λ^k_i h(x^{k+1}_i) }.   (2)


In the following, problem (2) is dealt with.

Heuristic search algorithms are developed in accordance with some assumptions about the adjustment factors λ^k_i. Such assumptions allow the computational complexity of problem (1) to be reduced by meaningful criteria.

2.1 Assumption 1

λ^k_i = λ   (k = 1,...,N), (i = 1,...,n_k).

The assumption is very strong and rather rough. However, it allows a large set S of sub-optimal solutions to be generated by an efficient iterative algorithm.

At each iteration a different value is assigned to the parameter λ and a solution is determined by solving the sequence of sub-problems:

min_{d^k} { f^k_i + λ h(x^{k+1}_i) }   (k = 1,...,N),  λ given.

The minimum-cost solution within the set S is selected.

The computational effort is limited because the cost of the solution computed by the algorithm is piecewise constant with respect to λ, i.e. the same solution may be determined in a suitable λ-range (see fig. 1 and section 3 below).

[Figure 1]
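The λ-sweep described above can be sketched in code. The instance below (a two-stage layered graph, with h taken as an optimistic estimate of the remaining cost) is an invented toy example, not taken from the paper; only the sweep logic follows the text.

```python
# Sketch of the iterative algorithm under Assumption 1: solve the sequence of
# sub-problems min_d {f + lambda*h} for several values of lambda and keep the
# minimum-cost solution within the generated set S.  The instance is invented.

# stage -> {node: [(decision label, pay-off f, next node)]}
arcs = {0: {'s': [('s-a', 1, 'a'), ('s-b', 4, 'b')]},
        1: {'a': [('a-t', 5, 't')], 'b': [('b-t', 1, 't')]}}
H = {'a': 5, 'b': 1, 't': 0}                     # evaluation h of the reached state
COST = {('s-a', 'a-t'): 6, ('s-b', 'b-t'): 5}    # true cost of each full solution

def solve_for_lambda(lam):
    """One greedy pass: at each stage pick the decision minimizing f + lam*h."""
    node, picks = 's', []
    for k in range(2):
        d, f, nxt = min(arcs[k][node], key=lambda c: c[1] + lam * H[c[2]])
        picks.append(d)
        node = nxt
    return tuple(picks)

# Sweep lambda over a grid; the computed solution is piecewise constant in lambda.
S = {solve_for_lambda(lam) for lam in (0.0, 0.5, 1.0, 2.0)}
best = min(S, key=COST.get)                      # minimum-cost solution within S
```

In this toy instance λ = 0 yields the myopic choice ('s-a', 'a-t') of cost 6, while the grid values λ ≥ 1 yield ('s-b', 'b-t') of cost 5, illustrating both the λ-range behaviour and the selection of the best solution within S.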


2.2 Assumption 2

λ^k_i = λ^k   (i = 1,...,n_k), (k = 1,...,N).

This assumption (more plausible than assumption 1) leads to a conspicuous cut of the definition set D^k.

In fact, d^k_j (j = 1,...,n_k) is deleted from D^k if, for every λ^k ≥ 0, there exists a decision d^k_i ∈ D^k such that

f^k_i + λ^k h(x^{k+1}_i) ≤ f^k_j + λ^k h(x^{k+1}_j)   (exclusion test I).

Let R^k_1 ⊆ D^k be the definition set of d^k after the application of the exclusion test I.

Efficient Branch-Search and Branch-and-Bound algorithms may be developed by applying a suitable branching criterion on the set R^k_1.

An effective procedure is now outlined.

Let U^k be an upper bound of g(x^k), determined by a classical heuristic method or by the iterative algorithm based on assumption 1.

Assume that αU^k (0 < α ≤ 1) is a close estimate of g(x^k) (the value of the parameter α may turn out from empirical analyses of the quality of the heuristic method used for the generation of the upper bound).

Denote

λ^k_{Li} = (αU^k - f^k_i) / h(x^{k+1}_i)   (d^k_i ∈ R^k_1).

The following Branching Criterion is now introduced: at the k-th stage the decision d^k_o ∈ R^k_1 such that

λ^k_{Lo} = max_{d^k_i ∈ R^k_1} λ^k_{Li}

is selected for branching.

The geometric interpretation of the exclusion test I and of the branching criterion is shown in fig. 2.

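A minimal sketch of exclusion test I and of the branching criterion, assuming each candidate decision is reduced to its pair (f, h); the dominance form of the test uses the fact that f_i + λh_i ≤ f_j + λh_j holds for every λ ≥ 0 exactly when f_i ≤ f_j and h_i ≤ h_j. The numeric instance is invented.

```python
def exclusion_test_1(cands):
    """Exclusion test I: delete d_j when some other candidate d_i satisfies
    f_i <= f_j and h_i <= h_j, since then f_i + lam*h_i <= f_j + lam*h_j
    for every lam >= 0.  Identical pairs are kept."""
    keep = []
    for j, (fj, hj) in enumerate(cands):
        dominated = any(i != j and fi <= fj and hi <= hj and (fi, hi) != (fj, hj)
                        for i, (fi, hi) in enumerate(cands))
        if not dominated:
            keep.append((fj, hj))
    return keep

def branching_choice(R1, U, alpha):
    """Branching criterion: pick the decision of R1 maximizing
    lam_L = (alpha*U - f) / h."""
    return max(R1, key=lambda c: (alpha * U - c[0]) / c[1])

# Invented candidate decisions as (f, h) pairs:
R1 = exclusion_test_1([(3, 2), (2, 5), (4, 4), (5, 1)])   # (4, 4) is dominated
d0 = branching_choice(R1, U=10, alpha=0.8)                # largest lam_L wins
```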

2.3 Assumption 3

|λ^k_i - λ^k_j| ≤ β_H   for all (i,j)  (i,j = 1,...,n_k), (k = 1,...,N).

This assumption is always true for a suitable value of the parameter β_H, which clearly depends on the assumed evaluation function h(x^{k+1}_i).


The closer h(x^{k+1}_i) is to g(x^{k+1}_i), the smaller is the λ-range containing the values λ^k_i (i = 1,...,n_k), and a small value of β_H is likely.

According to assumption 3 some decisions may be deleted from D^k. In fact, consider the example shown in fig. 3.

[Figure 3]

It turns out that d^k_i is not preferred to d^k_j if

min { |λ^k_{Uj} - λ^k_{Ui}| , |λ^k_{Lj} - λ^k_{Li}| } ≥ β_H.

Consequently the following test may be performed:

d^k_i is deleted from D^k according to assumption 3, if there exists a decision d^k_j ∈ D^k such that

min { |λ^k_{Uj} - λ^k_{Ui}| , |λ^k_{Lj} - λ^k_{Li}| } ≥ β_H   (exclusion test II).


Experimental values of the parameter β_H may be tested in order to delete those decisions from D^k which have a small probability of belonging to an optimal solution.

The resulting subset R^k_2 ⊆ D^k is generally larger than the subset R^k_1 found in accordance with assumption 2 (which corresponds to β_H = 0). Clearly the risk of deleting a good choice is smaller.

Branch-Search and Branch-and-Bound algorithms may be developed by applying the Branching Criterion, previously described, on the subset R^k_2 (k = 1,...,N).

3. Applications

The approach can be applied to all combinatorial problems that can be solved via Dynamic Programming, Branch-and-Bound or heuristically guided search algorithms. The main problem becomes the definition of a suitable evaluation function h(x).

According to the size of the problem, assumption 2 (more efficient, useful for large problems) or assumption 3 (better founded, useful for medium problems) may be taken into account in order to reduce the computational effort.

Experiments have been performed with reference to classical delivery and flow-shop scheduling problems, whose statement and formulation are now briefly summarized.

3.1 Consider the following delivery problem: a set of customers, each with a known location and a known requirement for a commodity, is to be supplied from a single warehouse by vehicles of known capacity.

The objective is to design the routes which minimize the delivery cost, subject to a capacity constraint on the vehicles and meeting the customers' requirements.

Let

X = {x_i | i = 0,1,...,n}   be a set of elements representing the warehouse (i = 0) and the customers (i = 1,...,n),
U = {u_ij}  (i,j = 0,1,...,n)   the set of links available for the transport,
x^k ⊆ {X - x_0}   the set of customers to be supplied after (k-1) stages,
d^k   the feasible route to be selected at the k-th stage.

Assume

h(x^{k+1}_i) = Σ_{x_j ∈ x^{k+1}_i} u_{0j}.

The iterative algorithm [4] and a branch-search algorithm [13], according to the outlined branching criterion, have been tested on large problems.

The computational results show that the minimum-cost solution attained by the iterative algorithm gives a satisfactory near-optimal solution of the problem (a typical diagram of the results is drawn in fig. 1).

The quality of the solution supplied by the branch-search algorithm is practically equivalent (or just better), while the computation time is considerably shorter.

The tests of both algorithms show remarkable advantages with respect to the exact and heuristic methods proposed for the same problem in the literature.

3.2 The classical three-machine flow-shop scheduling problem is now considered: n jobs must be processed on three machines (A, B, C) in the same technological order. All jobs are processed once and only once on each machine. A job cannot be processed on a machine before it has been completed on the preceding machine in the technological order.

The objective is to schedule the flow-shop so that the completion time of the last job on the last machine (the "makespan") is minimized.

Denote

a_i, b_i, c_i   the processing times of job i (i = 1,...,n) on the machines A, B, C;
d   the job-set;
d^k   the job to be processed at the k-th stage (k = 1,...,n);
d~^k = {d - d^1 - ... - d^{k-1}};
D^k   the job-set to be processed at the k-th stage;

B(x^k) = max { B(x^{k-1}) , Σ_{d_j ∈ {d - d~^{k+1}}} a_j } + b_i ;

C(x^k) = max { C(x^{k-1}) , B(x^k) } + c_i ;

B(x^1) = a + b,   C(x^1) = B(x^1) + c,   where a, b, c are the processing times of the first job of the schedule;

f^k_i = C(x^k) - C(x^{k-1}).

Assume

h(x^{k+1}) = max { Σ_{d_j ∈ d~^{k+1}} b_j + min_{d~^{k+1}} c_j - (C(x^k) - B(x^k)) ,
                   Σ_{d_j ∈ d~^{k+1}} a_j + min_{d~^{k+1}} (c_j + b_j) - (C(x^k) - Σ_{d_j ∈ {d - d~^{k+1}}} a_j) }.
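The completion-time recursions above translate directly into code. The two-job instance below is invented; only the recursions follow the text.

```python
def makespan(seq, a, b, c):
    """Three-machine flow-shop recursions: A accumulates the processing times on
    machine A; B(k) = max(B(k-1), A(k)) + b_i and C(k) = max(C(k-1), B(k)) + c_i
    for the job i scheduled at stage k.  Returns C(x^n), the makespan."""
    A = B = C = 0
    for i in seq:
        A += a[i]
        B = max(B, A) + b[i]
        C = max(C, B) + c[i]
    return C

# Invented processing times for two jobs on machines A, B, C:
a, b, c = [1, 5], [5, 1], [1, 1]
m01 = makespan((0, 1), a, b, c)   # schedule job 0 first -> makespan 8
m10 = makespan((1, 0), a, b, c)   # schedule job 1 first -> makespan 12
```

The two orderings give different makespans (8 versus 12 here), which is exactly the quantity the branch-and-bound search of the paper minimizes.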

A Branch-and-Bound technique has been applied according to the Exclusion Test II and the Branching Criterion. The results show that the procedure allows a near-optimal solution to be determined for large problems in a very effective way. It has been checked that such a solution generally corresponds to the optimal one in small-size problems.

REFERENCES

1. N. Agin, "Optimum seeking with Branch and Bound", Mgmt. Sci. 13, 176-186 (1966).

2. S. Ashour, "An experimental investigation and comparative evaluation of flow-shop scheduling techniques", Opns. Res. 18, 541-549 (1970).

3. E. Balas, "A note on the Branch-and-Bound Principle", Opns. Res. 16, 442-445 (1968).

4. E. Biondi, P.C. Palermo, C. Pluchinotta, "A heuristic method for a delivery problem", 7th Mathematical Programming Symposium, 1970, The Hague, The Netherlands.

5. H. Campbell, R. Dudek, M. Smith, "A heuristic algorithm for the n job, m machine sequencing problem", Mgmt. Sci. 16, 630-637 (1970).

6. N. Christofides, S. Eilon, "An algorithm for the vehicle dispatching problem", Opl. Res. Q. 20, 309-318 (1969).

7. R.J. Giglio, H.M. Wagner, "Approximate Solutions to the Three-Machine Scheduling Problem", Opns. Res., 305-319 (1964).

8. P. Hart, N.J. Nilsson, B. Raphael, "A formal basis for the heuristic determination of minimum cost paths", IEEE Trans. SSC 4, 100-107 (1968).

9. M. Held, R.M. Karp, "A Dynamic Programming Approach to Sequencing Problems", J. Soc. Ind. Appl. Math. 10, 196-208 (1962).

10. E. Ignall, L. Schrage, "Application of the Branch-and-Bound Technique to some flow-shop scheduling problems", Opns. Res. 13, 400-412 (1965).

11. G.B. McMahon, P.G. Burton, "Flow-shop scheduling with the Branch-and-Bound method", Opns. Res. 15, 473-481 (1967).

12. L.G. Mitten, "Branch-and-Bound Methods: general formulation and properties", Opns. Res. 16, 442-445 (1968).

13. P.C. Palermo, M. Tamaccio, "A Branch-Search algorithm for a delivery problem", Working Paper LCA 71-4 (1971).

A NEW SOLUTION FOR THE GENERAL SET COVERING PROBLEM

László Béla KOVÁCS

Computer and Automation Institute
Hungarian Academy of Sciences

1. Introduction. The theory and applications of the set partitioning and general set covering problems are discussed in the present paper. After the problem formulation, three applications are shown: bus route planning, airline-crew scheduling and a switching circuit design method. After a short survey of existing methods a new algorithm is introduced in section 5. The exact algorithm is based on the branch-and-bound principle, but no linear programming is used in determining the bounds. A heuristic procedure is utilized instead, which often gives the optimal or a near-optimal solution of the problem. The optimality is in certain cases guaranteed by lemmata 1-3. The last lemma also provides a lower bound for the branch-and-bound procedure, which is close to the real minimum of the subproblems after a few steps, as practice shows. A computer program has been written for a CDC 3300 computer. The results are promising. The program is being further tested and developed.

2. Problem and terminology. Let us consider the following problem:

(1)   min c^T x
      A x ≥ e
      x_j ∈ {0,1}   (j = 1,...,n),

where A is a given m × n matrix of 0 and 1 elements, c is an n-vector of positive integer elements and e is an m-vector, each component of which is 1. Problem (1) is known as the set covering problem for the following reason. We are given m subsets J_1,...,J_m of

the set J_0 = {1,...,n}. A cost c_j > 0 is associated to each element j ∈ J_0. At least one element of each set J_1,...,J_m is to be chosen, i.e. each set is to be covered, at a minimal total cost. In other words a set H ⊆ J_0 is to be determined such that H ∩ J_i ≠ ∅ (i = 1,...,m) and Σ_{j ∈ H} c_j is minimal.

Problem (1) is obtained if the matrix A is defined as

a_ij = 1 if j ∈ J_i,   a_ij = 0 otherwise.

The set H is the set of subscripts of the variables x_j having the value 1 in the optimal solution of problem (1).

The general set covering problem

(2)   min c^T x
      A x ≥ b
      x_j ∈ {0,1}   (j = 1,...,n)

may be interpreted in a similar way. The only difference is that each set J_i should be covered b_i > 0 times instead of just once. Thus b is a given m-vector of positive integer components.

The set partitioning problem

(3)   min c^T x
      A x = e
      x_j ∈ {0,1}   (j = 1,...,n)

is the same as problem (1), except that the inequalities are substituted by equations. The same interpretation may be used also for problem (3); the only difference is that each set should be

covered exactly once. Each solution x of problem (3), however, defines a partitioning of the set J_0: every element of J_0 is assigned to exactly one of the sets M_1,...,M_m determined by x. Thus the sets M_1,...,M_m give a partitioning of the set J_0. Therefore problem (3) may be stated as the determination of a partitioning of the set J_0 at a minimal cost. It should be noted that, apart from the trivial case of identical columns in the matrix A, any partitioning uniquely determines the corresponding solution x, if there is any.

3. Applications.

3.1 Planning of bus routes. There are given n possible bus routes, each with an attached cost c_j. There are m bus stops and A is the incidence matrix, i.e.

a_ij = 1 if route j goes through bus stop i,   a_ij = 0 otherwise.

A bus network of minimal cost is to be determined in such a way that at least one bus route goes through each bus stop. If the variable x_j has the value 1 or 0 depending on whether bus route j is realised or not, then problem (1) is the mathematical model of the bus route planning.

3.2 Airline-crew scheduling. Exactly one crew should be assigned to each of the given m flights in such a way that each crew obtains an acceptable full assignment (limited number of flights, not too long working time, etc.). The objective is to minimize the number of crews actually used. To solve this problem let us determine a great number of acceptable crew assignments and calculate the matrix A:

a_ij = 1 if flight i is included in assignment j,   a_ij = 0 otherwise,

and the meaning of the variables is

x_j = 1 if assignment j is accepted,   x_j = 0 otherwise.

Then a set partitioning problem (3) is obtained with c_j = 1 (j = 1,...,n).

3.3 Switching circuit design. A function F(u_1,...,u_N) is called a truth function if both the variables u_1,...,u_N and the function F may take only the values 0 and 1. The value 0 is also referred to as the off position of the switch, or the truth value false. Similarly, the value 1 is also interpreted as the position on, or the truth value true. The costs of an AND gate and an OR gate are given. The problem is to realize the truth function F(u_1,...,u_N) at a minimal cost. Let us suppose that the function is given either in a tableau form or in a disjunctive normal form. The latter is the disjunction of different terms, where each term is a conjunction of a subset of the variables u_1,...,u_N and of their negated forms 1-u_1,...,1-u_N.

Let u~_k denote either the variable u_k or its negated form 1-u_k.

Definition. The conjunction Q = u~_{k_1} u~_{k_2} ... u~_{k_r} is a prime implicant of the function F if u~_{k_1} = u~_{k_2} = ... = u~_{k_r} = 1 implies that F(u_1,...,u_N) = 1,

and no part of Q has the same property.

Then our problem may be transformed into a set covering problem by the following steps:

(i) Determine all prime implicants Q_1, Q_2,...,Q_n of the function F and their attached costs c_1, c_2,...,c_n (the number of conjunctions in the prime implicant times the price of the AND gate, plus the price of the OR gate counted once, because these prime implicants will be connected by the sign of disjunction),

(ii) and define the matrix A:

a_ij = 1 if prime implicant Q_j takes the value 1 at the i-th argument for which F = 1,   a_ij = 0 otherwise.

Then the truth function F may be written as a disjunction of prime implicants, and the problem becomes the set covering problem (1).

If the function F takes the value 1 for more than half of the values of the argument, then the function 1-F may be treated instead. There are also other tricks to decrease the size of the problem.

4. Survey of methods. Practically a version of any integer programming method may be tried for the set covering problem. On the other hand a great number of papers are already devoted to the subject, thus only a partial list of publications is mentioned here, in which each direction is represented by one or two papers.

First of all, a version of the Gomory [5] cutting plane method by Martin [10] is reported to be effective for set covering problems. A paper of House, Nelson and Rado [7] is devoted to the development of a special algorithm only for the set covering problem. The substance of the method is the construction of additional rows of the matrix A for the exclusion of solutions not better than the best one obtained so far during the algorithm. The paper of Bellmore and Ratliff [3] also falls into the category of cutting plane methods, with a substantially different type of cutting method.

A typical example of the use of the branch-and-bound method is the paper of Lemke, Salkin and Spielberg [9]. The main difference between their approach and ours (discussed in section 5) is that they solve linear programming subproblems to obtain bounds, while in the method of the present paper no linear programming is used at all. Most probably the structure of branching is also different.

Heuristic methods play an important role in large problems for several reasons. The airline-crew scheduling problem is solved by a heuristic method by Arabeyre et al. [1]. Garfinkel and Nemhauser [4] apply other heuristics to the set partitioning problem. A rounding process and consecutive fixing is used to decrease the size of the problem. The smaller problems so obtained are solved by existing integer programming methods. The group theoretic approach of integer programming is used by Thiriez [11], having the special advantage that in most cases the determinant of the optimal basis B is usually small, because of the 0-1 elements in the matrix A.

5. A new solution for the set covering problem. The description of the method consists of two parts. The first part gives an explanation of a heuristic method. The second one describes an exact method using the branch-and-bound principle, and also the heuristic method for calculating bounds.

5.1 A heuristic method. Let us define the set of uncovered rows, I_k, and the set of subscripts of the unused variables, J_k, after executing iteration k. At the beginning all rows are uncovered and no columns are used:

I_0 = {1,...,m},   J_0 = {1,...,n}.

Let us introduce the column count at iteration k,

(4)   d^k_j = Σ_{i ∈ I_k} a_ij   (j ∈ J_k),

and the evaluation of the unused variable x_j at iteration k,

(5)   v^k_j = c_j / d^k_j   (j ∈ J_k);

thus v^k_j, which is nothing but the row covering cost for each individual row covered, gives an evaluation of the variable x_j at iteration k.

At iteration k+1 the variable x_{j_{k+1}} with the best evaluation is chosen:

v^k_{j_{k+1}} = min_{j ∈ J_k} v^k_j.

The new sets are easily calculated:

I_{k+1} = I_k - M_{k+1},   J_{k+1} = J_k - {j_{k+1}},

where M_{k+1} is the set of subscripts of the rows covered by the variable x_{j_{k+1}}.

The procedure is continued until either I_k or J_k becomes

empty. In the first case we have obtained a feasible solution of the problem. In the second one no feasible solution exists. Let us suppose that a solution is obtained, i.e. I_k = ∅, and define

x~_j = 1 if j ∈ {j_1,...,j_k},   x~_j = 0 otherwise.
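The heuristic of section 5.1 can be sketched as follows; the small instance is invented, and ties in the evaluation are broken by the lowest column index.

```python
def greedy_cover(A, c):
    """Heuristic of section 5.1: while uncovered rows remain, choose the unused
    column j minimizing the evaluation c_j / d_j, where the column count d_j is
    the number of still-uncovered rows that column j covers."""
    m, n = len(A), len(c)
    I = set(range(m))              # uncovered rows   (I_k)
    J = set(range(n))              # unused columns   (J_k)
    chosen = []
    while I and J:
        d = {j: sum(A[i][j] for i in I) for j in J}
        usable = [j for j in sorted(J) if d[j] > 0]
        if not usable:
            return None            # no feasible solution exists
        jk = min(usable, key=lambda j: c[j] / d[j])
        chosen.append(jk)
        I -= {i for i in I if A[i][jk] == 1}   # M_{k+1}: rows covered by x_jk
        J.remove(jk)
    return chosen if not I else None

# Invented 3-row, 3-column instance with unit costs:
A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
cover = greedy_cover(A, [1, 1, 1])
```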

Then the following lemmata show how good this solution may be.

Lemma 1. If

(6)   [condition on the evaluations; illegible]

then x~ is an optimal solution of problem (1).

Proof. Consider any solution x^ of problem (1). Splitting c^T x^ into two expressions which are handled separately, the first is bounded by the definition of the evaluations v^k_j, while in the second the first inequality holds because of the definition of the number r and the second one is also valid because x^ is a solution of problem (1). (6)-(8) result in

c^T x~ ≤ c^T x^

for any feasible solution of problem (1), thus the lemma is proved.

The following lemma gives a stronger result.

Lemma 2. Let us define the numbers w_i for each row:

(9)   w_i = v^{s-1}_{j_s}   if i ∈ M_s   (s = 1,...,k).

If

(10)   Σ_{i=1}^m a_ij w_i ≤ c_j   for all j (j = 1,...,n),

then x~ is an optimal solution of problem (1).

Proof. Consider any feasible solution x^ of problem (1). Because of the definition of the numbers w_i,

(11)   Σ_{i=1}^m w_i = Σ_{s=1}^k Σ_{i ∈ M_s} w_i = Σ_{s=1}^k c_{j_s} = c^T x~.

On the other hand, using the definition (5), the supposition (10) and the fact that x^ is a feasible solution of problem (1), i.e. A x^ ≥ e, we obtain

(12)   Σ_{i=1}^m w_i ≤ Σ_{i=1}^m w_i Σ_{j=1}^n a_ij x^_j = Σ_{j=1}^n x^_j Σ_{i=1}^m a_ij w_i ≤ Σ_{j=1}^n c_j x^_j = c^T x^.

Then (11) and (12) together give the desired result, that

c^T x~ ≤ c^T x^,

which proves the lemma.
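The condition of Lemma 2 is easy to check mechanically. The sketch below assumes the row prices w_i have already been built from the greedy evaluations as in (9); both instances are invented.

```python
def certifies_optimality(A, c, w, eps=1e-12):
    """Lemma 2 test: the row prices w_i certify optimality of the greedy
    solution when sum_i a_ij * w_i <= c_j holds for every column j."""
    m, n = len(A), len(c)
    return all(sum(A[i][j] * w[i] for i in range(m)) <= c[j] + eps
               for j in range(n))

ok = certifies_optimality([[1, 0], [0, 1]], [1, 1], [1, 1])    # condition holds
bad = certifies_optimality([[1, 1]], [1.5, 1.5], [2])          # 2 > 1.5 in both columns
```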

The following lemma provides a lower bound for the branch-and-bound procedure if lemmata 1 and 2 did not prove the optimality of the feasible solution x~ obtained by the above heuristic algorithm. The optimality of x~ may also sometimes be proven by this result; see the notes after the lemma.

Lemma 3. Denote the optimum of problem (1) by Z*. Then

(13)   Z* ≥ F = Σ_{i=1}^m f_i,

where f_i is the "minimal covering fraction" of row i:

(14)   f_i = min { c_j / Σ_{l=1}^m a_lj : a_ij = 1 }   (i = 1,...,m).

Proof. Define the following new problem (15): the cost c_j of each column j is split equally among the rows covered by the column,

c_jk = c_j / Σ_{l=1}^m a_lj,

and each row is required to be covered separately by one of its own fractional variables. To any feasible solution x of problem (1) there exists a solution of problem (15) in such a way that the objective functions are the same. On the other hand, the minimum of problem (15) is obviously Σ_{i=1}^m f_i, since each row independently chooses its cheapest covering fraction. This proves the statement (13) of the lemma.

Notes:

1° If c^T x~ = F, then x~ is an optimal solution of problem (1).

2° If c^T x~ > F, then F gives a lower bound of the optimum of problem (1), which may be used in a branch-and-bound procedure.
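Under the reading of (14) used here (f_i as the cheapest cost-per-covered-row among the columns covering row i), the lower bound F of Lemma 3 can be computed directly; the instance is invented.

```python
def lower_bound(A, c):
    """Lemma 3: F = sum_i f_i, with f_i the minimal covering fraction of row i,
    i.e. the minimum of c_j / (number of rows column j covers) over the
    columns j with a_ij = 1."""
    m, n = len(A), len(c)
    size = [sum(A[i][j] for i in range(m)) for j in range(n)]   # rows per column
    return sum(min(c[j] / size[j] for j in range(n) if A[i][j] == 1)
               for i in range(m))

A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
F = lower_bound(A, [1, 1, 1])   # each f_i = 1/2, so F = 3/2; the optimum here is 2
```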

5.2 A branch-and-bound procedure for solving problem (1). Many papers are devoted to the discussion of branch-and-bound procedures; see e.g. the survey of Lawler and Wood [8]. On the other hand the computer program is now being developed further, thus only some basic characteristics of the present computer program are given here:

1° A LIFO (last in, first out) rule is used for memory saving.

2° Fast termination of subproblems is guaranteed if no solution of a subproblem exists. (The subproblems are of the same form as problem (1), only some of the variables are fixed either at 0 or at 1.)

3° Fast reconstruction of the next subproblem to be considered is made in a special way.

4° Good solutions are usually given in case of abnormal termination.

6. The computer program is written for a CDC 3300 computer in FORTRAN. The program is being tested and further developed. It solves problems with up to 100 variables in a few seconds. A version of the program is suitable for solving the general set covering problem (2) without transforming it to the form (1). A similar approach may be used to solve also problem (3), but most probably in this case a somewhat different evaluation of the variables (or of groups of variables) is more useful.

References

[1] Arabeyre et al., "The Airline Crew Scheduling Problem: A Survey", Transportation Science 3 (1969) No. 2.

[2] M.L. Balinski, "On Maximum Matching, Minimum Covering and their Connections", Proceedings of the Princeton Symposium on Mathematical Programming (1970) 303-312.

[3] M. Bellmore and H.D. Ratliff, "Set Covering and Involutory Bases", Management Science 18 (1971) 194-206.

[4] R.S. Garfinkel and G.L. Nemhauser, "The Set-Partitioning Problem: Set Covering with Equality Constraints", Operations Research 17 (1969) 848-856.

[5] R.E. Gomory, "An Algorithm for Integer Solutions to Linear Programs", 269-302 in [6].

[6] R.L. Graves and Ph. Wolfe (eds.), Recent Advances in Mathematical Programming, McGraw-Hill (1963).

[7] R.W. House, L.D. Nelson and T. Rado, "Computer Studies of a Certain Class of Linear Integer Problems", in Recent Advances in Optimization Techniques, Lavi and Vogl (eds.), Wiley (1966) pp. 251-280.

[8] E.L. Lawler and D.E. Wood, "Branch-and-Bound Methods: A Survey", Operations Research 14 (1966) 699-719.

[9] C.E. Lemke, H. Salkin and K. Spielberg, "Set Covering by Single Branch Enumeration with Linear Programming Subproblems", Operations Research (to appear).

[10] G.T. Martin, "An Accelerated Euclidean Algorithm for Integer Linear Programs", 311-318 in [6].

[11] H. Thiriez, "The Set Covering Problem: A Group Theoretic Approach", Revue Française de Recherche Opérationnelle V-3 (1971) 85-104.

A THEORETICAL PREDICTION OF THE INPUT-OUTPUT TABLE

Emil KLAFSZKY

Computer and Automation Institute,
Hungarian Academy of Sciences

1. The problem. Denote by I_1, I_2,...,I_i,...,I_m the sources and by J_1, J_2,...,J_j,...,J_n the sinks. Let α_ij ≥ 0 be the amount of the quantity going from I_i to J_j, in other words the quantity of the production of I_i used up by J_j. We shall denote the m × n matrix (α_ij) by A and call it the input-output table. The sums Σ_{j=1}^n α_ij (the total output of I_i) and the sums Σ_{i=1}^m α_ij (the total input of J_j), as usual, are called the input-output marginal values.

The fundamental problem treated in this paper is: what can we say about a new input-output matrix X = (ξ_ij) which has the prescribed marginal output values β = (β_1, β_2,...,β_m) > 0 and input values c = (c_1, c_2,...,c_n) > 0, respectively, when we know the current matrix A = (α_ij).

More exactly, what additional hypothesis can be set, by the aid of the matrix A, to ensure a "sufficiently good" solution of the system of equalities

(1)   Σ_{j=1}^n ξ_ij = β_i   (i = 1,...,m),
      Σ_{i=1}^m ξ_ij = c_j   (j = 1,...,n),
      ξ_ij ≥ 0,

which evidently has many solutions in general.

2. The method RAS. A well-known method [3-7] which ensures a sufficiently good solution works with the following hypothesis: the variables ξ_ij are of the form

(2)   ξ_ij = r_i α_ij s_j,   r_i > 0 (i = 1,...,m),   s_j > 0 (j = 1,...,n).

We shall denote the diagonal matrices consisting of the numbers r_i (i = 1,...,m) and s_j (j = 1,...,n) by R and S respectively. So the assumption (2) can be written in the form

X = R A S.

Hence the name of the method. This method is sometimes called the Sheleikhovskii or Fratar method (after its first users) and also the Gravity method (after the principle of gravitation, which was used to explain the hypothesis).
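Computationally, the factors r_i and s_j are usually obtained by alternately rescaling rows and columns to the prescribed margins (iterative proportional fitting); this standard procedure is sketched below on an invented table with strictly positive entries.

```python
def ras(A, beta, c, iters=200):
    """Alternate scaling of X = A: rows to the target sums beta, then columns
    to the target sums c.  Each pass multiplies rows/columns by positive
    factors, so the limit keeps the product form X = R A S of hypothesis (2).
    Assumes every row and column of A has a positive sum."""
    X = [row[:] for row in A]
    m = len(X)
    for _ in range(iters):
        for i in range(m):                      # row step: fix the row sums
            s = sum(X[i])
            X[i] = [x * beta[i] / s for x in X[i]]
        for j in range(len(c)):                 # column step: fix the column sums
            s = sum(X[i][j] for i in range(m))
            for i in range(m):
                X[i][j] *= c[j] / s
    return X

X = ras([[1.0, 2.0], [3.0, 4.0]], beta=[0.5, 0.5], c=[0.4, 0.6])
```

With consistent margins (here both sides sum to 1) and a strictly positive table, the iteration converges and the result satisfies both marginal systems of (1).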

It is easy to see that if there is a solution of the system (1)-(2), then the system (1) has a solution with the property

(3)   ξ_ij = 0 if α_ij = 0,   and   ξ_ij > 0 if α_ij > 0.

Conversely, the above condition is also sufficient for the solvability of (1)-(2), as the following well-known theorem asserts.

THEOREM 1. There is a solution - unique in X - of the system (1)-(2) iff there is a solution of (1) satisfying (3).

In the following paragraph we shall base the prediction hypothesis on some information theoretic considerations, which also lead to the method RAS. This treatment yields the above theorem as an easy corollary.

If we denote the solution X ensured by Theorem 1 by X(A, β, c), then because of the uniqueness we easily get the following property of the method RAS.

COROLLARY. Suppose that for (A, β^, c^) and for (A, β, c) the system (1)-(2) is solvable. Then the relation

X( X(A, β^, c^), β, c ) = X(A, β, c)

holds, i.e. the "step by step" prediction has the same result as the "one step" one.

3. Treatment of the problem using geometric programming. In the following, for the sake of simplicity of the treatment, we may suppose without loss of generality that the equalities

Σ_{i=1}^m β_i = Σ_{j=1}^n c_j = 1   and   Σ_{i=1}^m Σ_{j=1}^n α_ij = 1

hold.

We shall use the following hypothesis:

The prediction X = (ξ_ij) is considered to be "good" if it satisfies the system (1) and the I-divergence (information divergence, or information surplus by Shannon-Wiener) of the table X related to the table A is minimal; i.e. if the function

(4)   I(X) = Σ_{i=1}^m Σ_{j=1}^n ξ_ij log ( ξ_ij / α_ij )

has the minimal value on the set of solutions of system (1).

The continuous extension of the function ξ log(ξ/α) to the whole non-negative orthant is zero if ξ = 0.

Let us introduce the following notations:

Q = { (i,j) : α_ij > 0 }   and   Q~ = { (i,j) : α_ij = 0 }.

Since the function (4) has a finite infimum on the solution set of (1) only if α_ij = 0 implies ξ_ij = 0, we have to choose

ξ_ij = 0   if (i,j) ∈ Q~.

Using this restriction we get a mathematical programming problem, the dual problem of a geometric program, as follows.

Minimize the function

(5)   Σ_{(i,j) ∈ Q} ξ_ij log ( ξ_ij / α_ij )

on the solution set of the system

(6)   ξ_ij ≥ 0,   (i,j) ∈ Q,
      Σ_{j : (i,j) ∈ Q} ξ_ij = β_i   (i = 1,...,m),
      Σ_{i : (i,j) ∈ Q} ξ_ij = c_j   (j = 1,...,n).

We may write the primal of this dual geometric program [2]: find the supremum of the objective function

Σ_{i=1}^m β_i u_i + Σ_{j=1}^n c_j v_j

on the solution set of the system

Σ_{(i,j) ∈ Q} α_ij e^{u_i + v_j} ≤ 1.

In what follows we have to distinguish two cases. First we shall investigate the problem (5)-(6) when it is canonical (A.) and secondly when it is not canonical but consistent (B.).

A. The canonical property of the problem (5)-(6) means that the system (6) has a solution with the property ξ_ij > 0 ((i,j) ∈ Q). However, this assertion is equivalent to (3).

Using some results of geometric programming we get the following assertions:

(i) The dual geometric program is canonical (and so consistent) and the feasibility set (6) is bounded, so the objective function (5) attains its minimum. This minimum is unique because of the strict convexity of the objective function.

(ii) The canonical property of the problem and the boundedness of the dual objective function (5) imply that the primal problem has an optimal solution and the values of the two optima are the same ([2] pp. 169, Theorem 1).

(iii) The pair of solutions (ξ_ij) and (u_i, v_j) are optimal iff the equalities

(7)   ξ_ij = α_ij e^{u_i + v_j} / Σ_{(k,l) ∈ Q} α_kl e^{u_k + v_l}   for all (i,j) ∈ Q

hold ([2] pp. 167, Lemma 1).

As Σ_{(i,j) ∈ Q} ξ_ij = 1, the above assertion is equivalent to the following system of equalities:

(8)   ξ_ij = α_ij e^{u_i + v_j}   for all (i,j) ∈ Q.

Using the above observations we get the following

THEOREM 2. Suppose that the prediction problem (with the

parameters ~i ~, c ) fulfills the canonical con-

dition (3) • Then the hypothesis of RAS method

and our hypothesis based on the minimization of

I-divergence yield the same result.

Proof. Let λ_i, μ_j, x_ij be a solution of (1)-(2), i.e. the solution of the problem by the RAS method. The numbers x_ij fulfil (6) and the numbers λ_i, μ_j fulfil (8). It is easy to see that they also satisfy (7). So λ_i, μ_j, x_ij are a pair of optimal solutions.

Conversely, let λ_i, μ_j, x_ij now be a solution of the optimization problem. Then they satisfy (8), and this means that they are represented in the RAS form.

Theorem 1 is now an easy consequence of Theorem 2 and assertion (i).
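The RAS iteration itself is not written out above, so the following sketch may help. It assumes the classical biproportional form x_ij = λ_i a_ij μ_j; the base table and the prescribed row and column totals are hypothetical inputs, since the paper's own symbols for the marginals are not reproduced here.

```python
def ras(a, row_sums, col_sums, iters=200):
    """Biproportional (RAS) scaling: find x_ij = lam_i * a_ij * mu_j
    whose row and column sums match the prescribed totals."""
    m, n = len(a), len(a[0])
    lam = [1.0] * m
    mu = [1.0] * n
    for _ in range(iters):
        # scale the rows to match row_sums
        for i in range(m):
            s = sum(a[i][j] * mu[j] for j in range(n))
            lam[i] = row_sums[i] / s
        # scale the columns to match col_sums
        for j in range(n):
            s = sum(lam[i] * a[i][j] for i in range(m))
            mu[j] = col_sums[j] / s
    return [[lam[i] * a[i][j] * mu[j] for j in range(n)] for i in range(m)]

# hypothetical 2x2 base table with target marginals (3, 7) and (4, 6)
table = ras([[1.0, 2.0], [3.0, 4.0]], [3.0, 7.0], [4.0, 6.0])
```

After convergence the column sums are met exactly (the iteration ends with a column step) and the row sums to within the iteration tolerance.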

B. Let us suppose that the feasibility set (6) is consistent. Denote by E = (e^(i)) = (e_(j)) = (e_ij) an m × n matrix, written out by rows e^(i) or by columns e_(j), where


Denote further the norms of the rows e^(i) and of the columns e_(j) of E. It is clear that ||E|| = Σ_i ||e^(i)|| = Σ_j ||e_(j)||.

Let us consider, for ε > 0, the modified prediction problem (9), with the parameters perturbed by ε-multiples of the norms above. It is obvious that if ε → 0 then the modified parameters tend to the original ones.

The modified problem is a canonical one for all ε > 0, and so its optimal solution can be written in the RAS form (Theorem 2). We shall show that a "small" modification of the problem yields a "small" modification of the optimal solution; namely, the following theorem is true:

THEOREM 3. The optimal solution X_ε of the modified problem is a continuous function of ε at ε = 0.

Proof. Suppose indirectly that there is a sequence ε_1, ε_2, ... → 0 for which X_{ε_k} → X̄ and X̄ ≠ X_0. Then, as the optimal solution of the original problem (ε = 0) is unique, we get the relation


(10)  φ(X_0) < φ(X̄).

Because of the continuity of the functional φ we have the limit relations φ(X_0 + ε_k E) − φ(X_0) → 0 and φ(X_{ε_k}) − φ(X̄) → 0 as k → ∞. This and (10) imply the existence of an index k_0 for which the relation

φ(X_0 + ε_{k_0} E) < φ(X_{ε_{k_0}})

holds.

This, however, contradicts the optimality of X_{ε_{k_0}}, because X_0 + ε_{k_0} E is a feasible solution of the problem modified by ε_{k_0}.

There is a well-known method, different from that of RAS, due to Deming and Stephan [1], which works with the hypothesis of the minimization of the square contingency:

The prediction X = (x_ij) is considered to be "good" if it satisfies the system (1) and the "distance" of the table X from the table A is minimal, i.e. if the function

ψ(X) = Σ_i Σ_j (x_ij − a_ij)² / a_ij

has the minimal value on the set of solutions of the system (1).

There is a close connection between the functions φ and ψ. Namely, if we replace φ by the leading term of the Taylor series of its summands around the point x_ij = a_ij, then we get (up to the factor 1/2) exactly the function ψ, which proves our observation.
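This Taylor-series connection can be checked numerically. The sketch below assumes the I-divergence summand x·log(x/a) − x + a (the linear part vanishes when both tables have the same total), and verifies that near x = a it behaves like half of the square-contingency summand (x − a)²/a.

```python
import math

def i_div_term(x, a):
    # summand of the I-divergence: x*log(x/a) - x + a
    return x * math.log(x / a) - x + a

def chi2_term(x, a):
    # summand of the square contingency: (x - a)**2 / a
    return (x - a) ** 2 / a

# as x -> a the ratio of the two summands tends to 1/2,
# i.e. the leading Taylor term of the I-divergence is chi-square/2
a = 5.0
ratio = i_div_term(a + 1e-3, a) / chi2_term(a + 1e-3, a)
```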

References

[1] Deming, W.E. and Stephan, F.F.: "On a least squares adjustment of a sampled frequency table when the expected marginal totals are known", Ann. Math. Statist. 11 (1940), pp. 427-444.

[2] Duffin, R.J., Peterson, E.L. and Zener, C.: Geometric Programming, John Wiley, New York, 1966.

[3] D'Esopo, D.A. and Lefkowitz, B.: "An algorithm for computing interzonal transfers using the gravity model", Opns. Res. 11 (1963), No. 6, pp. 901-907.

[4] Fratar, Thomas J.: "Vehicular Trip Distribution by Successive Approximations", Traffic Quarterly, pp. 53-65 (January 1954).

[6] Stone, R., Bates, J. and Bacharach, M.: A Programme for Growth: Input-Output Relationships 1954-66, University of Cambridge, 1963.

[7] Stone, R. and Brown, A.: "A long term growth model for the British Economy" (in: Europe's Future in Figures, ed. Geary, R.C.).


AN IMPROVED ALGORITHM FOR PSEUDO-BOOLEAN PROGRAMMING

Stanisław Walukiewicz, Leon Słomiński, Marian Faner

Polish Academy of Sciences Institute of Applied Cybernetics

Warsaw, Poland

1. INTRODUCTION

Problems of nonlinear integer programming have been developed recently in applications to facility allocation [13, 15], capital budgeting [12], transportation system management [14] and network design [10]. The methods for solving such problems have been considered more rarely than the methods for solving linear tasks, so that today we have three prominent methods for handling the above-mentioned problems.

They are as follows: (i) the implicit (in a particular case explicit) enumeration method proposed by Lawler and Bell in 1967 [11]; (ii) the method of transformation into an equivalent linear problem developed by Fortet in 1959 [5] and Watters in 1967 [19]; and (iii) pseudo-Boolean programming described by Hammer and Rudeanu in 1968 [8, 9, 18].

The efficiency of the first two methods has been proved in practice [11, 17]. Many authors [7, 17] have pointed out the following disadvantages of pseudo-Boolean programming:

(i) the procedure is hardly suitable for automatic computation because of its low degree of formalization;

(ii) the tests used to determine the families of feasible solutions are quite weak;

(iii) the method falls into two independent parts (constructing the set of feasible solutions, then determining the optimal solution or solutions), and the amount of computation seems to increase directly with both the number of constraints and the number of variables.

The H-R algorithm for pseudo-Boolean programming given in this paper does not have the first of these disadvantages, and the remaining two are reduced. This algorithm may be considered an application of the branch and bound principle to discrete programming. In sections 3 and 5 we give a short description of the H-R algorithm.

Bradley [2, 3] has shown that every bounded integer programming problem is equivalent to infinitely many other integer programming problems. We may then ask which of the equivalent problems is the most suitable for a branch and bound algorithm. We will discuss

this question in section 4; for now we note that the answer also contains an estimate of the efficiency of method (ii). A summary of the computational experience with the H-R algorithm is presented in section 6.

2. THE PROBLEM

Let Q_n be the n-dimensional unit cube, i.e. the set of points X = (x_1, x_2, ..., x_n), x_j ∈ {0, 1}, j ∈ (1, n), and let R be the set of real numbers. By a pseudo-Boolean function we shall mean any real-valued function f(x_1, x_2, ..., x_n) of bivalent (0, 1) variables, i.e. f : Q_n → R. It is easy to see that such a function can be written as a polynomial linear in each variable [8].

We do not distinguish, by means of a special graphical symbol, the logical variables, which take the values true (1) and false (0), from the numerical variables, which take only the two values 1 and 0. It will be clear from the context whether the variables x, y are logical or numerical. In addition there exists a well-known isomorphism between the logical and the arithmetical operations:

x ∨ y = x + y − xy, (2.1)
x ∧ y = xy, (2.2)
x̄ = 1 − x. (2.3)

As pseudo-Boolean programming is a numerical technique, we suppose that in each given pseudo-Boolean function at least the substitution (2.1) has been carried out.
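The isomorphism (2.1)-(2.3) is easily verified exhaustively on the two-point domain; a minimal check:

```python
# exhaustive verification of (2.1)-(2.3) on bivalent (0,1) variables,
# comparing the arithmetic right-hand sides with the logical operations
for x in (0, 1):
    assert 1 - x == (0 if x else 1)        # (2.3): negation
    for y in (0, 1):
        assert x + y - x * y == (x or y)   # (2.1): disjunction
        assert x * y == (x and y)          # (2.2): conjunction
ok = True
```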

Now it is possible to formulate the pseudo-Boolean programming problem. It is as follows:

Find such X* = (x*_1, x*_2, ..., x*_n) ∈ Q_n (one or all) that

f(X*) = Min f(X), X ∈ S, (2.4)

where

S = {X : g_i(X) ≥ 0, i ∈ (1, m)}. (2.5)

In this formulation f and g_i, i ∈ (1, m), are pseudo-Boolean functions and S is the set of feasible solutions. If S = ∅ then the system of constraints (2.5) is inconsistent. We call the pair (m, n) the size of (2.4), (2.5), and S* the set of optimal solutions.

3. THE H-R ALGORITHM

The problem (2.4), (2.5) may be solved by means of explicit enumeration as follows. Let f_ak be the incumbent (the value of the best feasible solution found yet). For each X ∈ Q_n we check whether X fulfils the system (2.5) and afterwards whether f(X) < f_ak. The branch and bound principle allows:

(i) to execute this enumeration on the level of subsets of Q_n;


(ii) to omit in this enumeration some subsets of Q_n without loss of any X*.
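For comparison with the branch and bound refinements that follow, the plain explicit enumeration with an incumbent f_ak can be sketched as below; the tiny instance is hypothetical.

```python
from itertools import product

def enumerate_min(f, gs, n):
    """Explicit enumeration over the unit cube Q_n:
    keep an incumbent f_ak and the set of best feasible points."""
    f_ak, best = None, []
    for x in product((0, 1), repeat=n):
        if all(g(x) >= 0 for g in gs):       # feasibility test (2.5)
            v = f(x)
            if f_ak is None or v < f_ak:
                f_ak, best = v, [x]
            elif v == f_ak:
                best.append(x)
    return f_ak, best

# hypothetical instance: minimize f subject to one pseudo-Boolean constraint
f = lambda x: 3 * x[0] - 2 * x[1] + x[0] * x[2]
g = lambda x: x[0] + x[1] + x[2] - 1         # at least one variable set
f_ak, best = enumerate_min(f, [g], 3)
```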

It is obvious that in each application of this principle we may distinguish two main operations: branching and the computation of bounds. Generally speaking, the computational efficiency depends mainly on the number of branchings, and this number in turn depends on the accuracy of the bound computation. On the other hand, the time devoted to computing the bounds should not be too long, otherwise the application of the branch and bound principle is not efficient.

The H-R algorithm presented below is the result of both theoretical considerations and practical experiments.

3.1. The General Description of the H-R Algorithm

Let S'_i be the feasible solution set of the i-th constraint and S_{i−1} the feasible solution set of the first (i−1) constraints, i ∈ (1, m). For the sake of definiteness it is assumed that S_0 = Q_n. Then

S_i = S_{i−1} ∩ S'_i, i ∈ (1, m). (3.1)

Adjoining to the system (2.5) the (m+1)-th constraint

f_ak − f(X) ≥ 0, (3.2)

where f_ak is a parameter, we convert the pseudo-Boolean programming problem into the problem of solving a system of (m+1) pseudo-Boolean inequalities.

It should be noted that it is not necessary to determine the whole set S_i (S'_i). It is sufficient if the following relation holds:

S* ⊆ S_i, i ∈ (1, m). (3.3)

So in what follows, by S_i (S'_i) we shall mean a set which may not contain all the feasible solutions of (2.4), (2.5) but which contains all the optimal solutions.

In other words, the H-R algorithm determines the feasible solution set of g_1(X) ≥ 0 at the first step and afterwards, by checking which of the X ∈ S_1 satisfy g_2(X) ≥ 0, it determines S_2, and so on up to determining S_m. Solving (3.2) for X ∈ S_m we obtain all the optimal solutions of (2.4), (2.5).

3.2. The Branching Process

We can bring each of the constraints of (2.5) to the following form (we omit the index i here)

a_0 + a_1 C_1 + a_2 C_2 + ... + a_k C_k ≥ 0, (3.4)

where a_0, a_j ∈ R and C_j is a Boolean conjunction (product) of some of the


variables x_1, x_2, ..., x_n, j ∈ (1, k), and

|a_1| ≥ |a_2| ≥ ... ≥ |a_k|. (3.5)

The H-R algorithm requires all sets to be described by means of characteristic functions, and each characteristic function must be a Boolean disjunction of disjoint reduced conjunctions defined in the following way:

A logical expression

h(X) = K D_1 D_2 ... D_r (3.6)

is called a reduced conjunction in Q_n if:

(i) K is a product of some of the letters x̃_j, i.e. x̃_j ∈ {x_j, x̄_j}, j ∈ (1, n). We do not exclude the case K = 1.

(ii) Each D_k is the negation of a product (i.e. a disjunction) of some of the letters x̃_j, j ∈ (1, n), such that the letters appearing in K do not appear in any D_k, k ∈ (1, r).

The variables appearing in K will be called the fixed variables (in K).

It follows from (3.6) that if K ≠ 0 then h(X) describes in a univocal way some set G ⊆ Q_n; therefore we will sometimes write h(G) instead of h(X). A set G will be called a family (of solutions). For the sake of definiteness it is assumed that if K = 0 then the corresponding set is empty.

Let H(S_i) be the characteristic function of the feasible solution set of the first i constraints of (2.4), (2.5), i ∈ (1, m). Then

H(S_i) = h_1(X) ∨ h_2(X) ∨ ... ∨ h_q(X), (3.7)

where h_i(X) ∧ h_j(X) = 0 if i ≠ j, and q is the number of disjoint families of solutions.

By branching a given family G by means of a conjunction C we mean computing the reduced conjunctions for the following products:

h(G_1) = h(G) C, (3.8)
h(G_2) = h(G) C̄. (3.9)

So we obtain two new disjoint subfamilies by branching the given one.

3.3. The Computation of Bounds

Let (3.4) be the (i+1)-th constraint, where for simplicity we omit the index (i+1). To obtain S_{i+1} we branch the family corresponding to h_1(X) in (3.7) by means of C_1 according to (3.8) and (3.9). Next we branch each of these subfamilies by means of C_2, and so on up to branching by means of C_k. We repeat this action for each family in (3.7).

The essence of the H-R algorithm consists in excluding some families from the above-mentioned process. For each family we compute bounds which play the role of arguments in various fathoming criteria. In linear tasks we may use linear programming to compute the bounds as exactly as possible, but in nonlinear problems the computation of such bounds creates certain difficulties. Below we present some fathoming criteria whose efficiency was investigated.

3.3.1. The Logical Criterion LC

If, for example, in accordance with (3.8),

LC: h_1(G_1) = h(X) ∧ C = 0, (3.10)

then we say that G_1 has been excluded by the logical criterion. In such a case

h_2(G_2) = h(X) = h(G). (3.11)

Therefore we may assume that each family is not empty.

3.3.2. The Global Numerical Criteria GNC

These criteria are computed recursively for a given constraint g_i(X) ≥ 0, i ∈ (1, m), without using information about S_{i−1}. Let l(G) (u(G)) be a lower (upper) bound of g_i(X) over the family G. We consider (3.4), and at the beginning we have

l(Q_n) = a_0 + Σ_{a_j<0} a_j, (3.12)

u(Q_n) = a_0 + Σ_{a_j>0} a_j. (3.13)

After branching Q_n by means of the conjunction C_1 we obtain, according to (3.8) and (3.9), two families G_1 and G_2 for which

l(G_1) = l(Q_n) + a_1 if a_1 > 0,  l(G_1) = l(Q_n) if a_1 < 0, (3.14)

u(G_1) = u(Q_n) if a_1 > 0,  u(G_1) = u(Q_n) + a_1 if a_1 < 0, (3.15)

l(G_2) = l(Q_n) if a_1 > 0,  l(G_2) = l(Q_n) − a_1 if a_1 < 0, (3.16)

u(G_2) = u(Q_n) − a_1 if a_1 > 0,  u(G_2) = u(Q_n) if a_1 < 0. (3.17)

For each conjunction C_j, j ∈ (2, k), and a family G we obtain similar formulae, i.e. in (3.14)-(3.17) we replace Q_n by G and a_1 by a_j.

It follows from (3.14)-(3.17) that in the branching process a lower bound of g_i(X) does not decrease and an upper bound of it does not increase.

The GNC consist in checking two conditions for each family G:

GNC1: l(G) ≥ 0, (3.18)
GNC2: u(G) < 0. (3.19)

If GNC1 is satisfied, then all points of G are solutions of g_i(X) ≥ 0, and therefore we may exclude G from the branching process. If GNC2 is satisfied, then G does not contain any solution of g_i(X) ≥ 0 and we may again exclude it from the branching process. The families which satisfy neither GNC1 nor GNC2 are branched in the way described above.
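Formulas (3.12)-(3.19) can be sketched directly. Here a family is represented by the status of each conjunction C_j over it (1 = forced true, 0 = forced false, None = free), a representation chosen for illustration rather than the paper's characteristic functions.

```python
def bounds(a0, a, status):
    """Lower/upper bounds (3.12)-(3.17) of a0 + sum(a_j * C_j) over a family.
    status[j] is 1 (C_j forced true), 0 (forced false) or None (free)."""
    lo = a0 + sum(aj for aj, s in zip(a, status)
                  if (s is None and aj < 0) or s == 1)
    hi = a0 + sum(aj for aj, s in zip(a, status)
                  if (s is None and aj > 0) or s == 1)
    return lo, hi

def gnc(a0, a, status):
    """GNC1/GNC2 fathoming tests (3.18)-(3.19)."""
    lo, hi = bounds(a0, a, status)
    if lo >= 0:
        return "GNC1"   # every point of the family is feasible
    if hi < 0:
        return "GNC2"   # no point of the family is feasible
    return None         # the family must be branched further

# hypothetical constraint -1 + 3*C1 - 2*C2 >= 0
assert gnc(-1, [3, -2], [None, None]) is None   # l = -3, u = 2: branch
assert gnc(-1, [3, -2], [1, None]) == "GNC1"    # C1 fixed true: l = 0
assert gnc(-1, [3, -2], [0, None]) == "GNC2"    # C1 fixed false: u = -1
```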

3.3.3. The Local Numerical Criteria LNC

These criteria answer the question whether the branching of a given family G of S_i by means of the conjunctions of g_{i+1}(X) ≥ 0 is necessary or not. Let G be a family of S_i and let (3.6) be its reduced conjunction. We introduce two sets of indices

J_0 = {j : C_j K = 0}, (3.20)
J_1 = {j : C_j K = K}. (3.21)

Let L(G) (U(G)) be a local lower (upper) bound of g_{i+1}(X) over the family G; then we have

L(G) = l(G) + Σ_{j∈J_1, a_j>0} a_j − Σ_{j∈J_0, a_j<0} a_j, (3.22)

U(G) = u(G) + Σ_{j∈J_1, a_j<0} a_j − Σ_{j∈J_0, a_j>0} a_j, (3.23)

where l(G) and u(G) are computed according to (3.12) and (3.13). Now we construct criteria very similar to (3.18) and (3.19) by replacing l(G) (u(G)) by L(G) (U(G)). We will denote them by LNC1 and LNC2 respectively.

If LNC1 is satisfied, then G is obviously a set of solutions of g_{i+1}(X) ≥ 0. If LNC2 is satisfied, then G cannot contain any solution of g_{i+1}(X) ≥ 0.

3.3.4. The Incumbent Criterion IC

In practice we often know a relatively good estimate of f_ak. Let F(G) be a lower bound of f(X) over G. If

IC: F(G) > f_ak, (3.24)

then G may be excluded from further considerations.

4. THE EFFICIENCY OF THE H-R ALGORITHM

Let (3.7) be the characteristic function of the feasible solution set of the i-th constraint, i.e. we now replace S_i by S'_i, i ∈ (1, m). Since all the coefficients of the constraint under consideration are real numbers, there exist infinitely many formulations of this constraint such that H(S'_i) is still a corresponding characteristic function of it. Therefore there exist infinitely many formulations of a given pseudo-Boolean programming problem.

It is known [5, 19] that every pseudo-Boolean programming problem may be converted into an equivalent linear problem. This conversion consists in replacing each nonlinear component appearing in the goal function and/or in the constraints by a new variable and adjoining two additional linear constraints. So the efficiency of such a linearization depends on the number of nonlinear components. According to [7], we may solve every integer linear problem with about 70 variables. For instance, in example 8 from [9], solved without a computer, we have r = 18 nonlinear components, m = 3 and n = 7, so the equivalent problem has n + r = 25 variables and m + 2r = 39 constraints. For comparison, the biggest problem solved in [17] by means of the IBM 7040 (m = 7, n = 30, r = 10) has 40 variables and 27 constraints.
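The linearization device referred to here (replacing a product of t variables by a new 0-1 variable y plus two linear constraints) can be checked by brute force. The two inequalities below are the standard Fortet/Watters ones and are an assumption about the exact form used in [5, 19].

```python
from itertools import product

def linearization_ok(t):
    """Check that the two added linear constraints force y = x1*x2*...*xt
    for every 0-1 assignment (the Fortet/Watters device)."""
    for xs in product((0, 1), repeat=t):
        prod = 1
        for x in xs:
            prod *= x
        feasible_y = [y for y in (0, 1)
                      if sum(xs) - t * y >= 0            # y = 1 only if all x_j = 1
                      and y - sum(xs) + t - 1 >= 0]      # y = 0 only if some x_j = 0
        if feasible_y != [prod]:
            return False
    return True
```

Each nonlinear component thus costs one extra variable and two extra constraints, which is exactly the variable/constraint count (n + r, m + 2r) quoted above.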

We may reformulate the question that we set in the Introduction in the following way: on what factors does the efficiency of the H-R algorithm depend? On the basis of the results obtained to date we may identify two such factors: the order of solving the constraints and the decomposition of the system of constraints.

4.1. The Order of Solving Constraints

We introduce the concept of the optimal order of solving constraints by means of the so-called "strength" of a given constraint g_i(X) ≥ 0. Let (3.4) be such a constraint. We define its strength as

W_i = (−a_0 − Σ_{a_j<0} a_j) / Σ_{j=1}^{k} |a_j|, i ∈ (1, m). (4.1)

If W_i < 0 then g_i(X) ≥ 0 is redundant, i.e. g_i(X) ≥ 0 for each X ∈ Q_n. If W_i > 1 then g_i(X) ≥ 0 is inconsistent and S = ∅. So we may consider only those constraints for which

0 < W_i < 1, (4.2)

because for the cases W_i = 0 and W_i = 1 the unique solutions are known.

We can see that (4.1) gives an estimate of the number of families in S'_i. Since in the H-R algorithm the enumeration is executed on the level of families, we try to choose as the first constraint one for which the number of families is as small as possible, i.e. we choose the g_i(X) ≥ 0 for which W_i is the nearest to either 1 or 0. So we can give the following priority rule in pseudo-Boolean programming: we put the constraints in order of nonincreasing priority P(g_i), where

P(g_i) = b(W_i − 0.5) for 0.5 ≤ W_i ≤ 1,
P(g_i) = 0.5 − W_i for 0 < W_i < 0.5. (4.3)

We introduced the coefficient b > 1 in (4.3) because if W_i ≈ 1 then each family has more fixed variables. In practical computations we assume b = 2. In section 6 we present computational results which illustrate the usefulness of putting the constraints in the optimal order.
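A sketch of the strength and priority computations (4.1)-(4.3). The normalization of W by Σ|a_j| is an assumption, chosen as the reading consistent with the thresholds W < 0 (redundant) and W > 1 (inconsistent) stated above; the three constraints are hypothetical.

```python
def strength(a0, a):
    # strength (4.1) of the constraint a0 + sum(a_j * C_j) >= 0;
    # the normalization by sum(|a_j|) is assumed
    need = -a0 - sum(aj for aj in a if aj < 0)
    return need / sum(abs(aj) for aj in a)

def priority(w, b=2.0):
    # priority rule (4.3) with the coefficient b > 1 (b = 2 in practice)
    return b * (w - 0.5) if w >= 0.5 else 0.5 - w

# three hypothetical constraints given as (a0, [a_1, ..., a_k])
constraints = [(-1, [3, -2]), (-4, [3, 2]), (-1, [2, -1])]
ws = [strength(a0, a) for a0, a in constraints]
order = sorted(range(len(ws)), key=lambda i: -priority(ws[i]))
```

Here the second constraint (W = 0.8, nearest to 1) is solved first.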

4.2. The Decomposition of Systems of Constraints

The idea of the decomposition of a system of constraints consists in separating it into subsystems in such a way that the constraints in each subsystem have as many common variables as possible. Such a separation makes it possible to fathom a great number of families by LC.

Let q_ij be the number of components of g_i(X) ≥ 0 in which the variable x_j appears, i ∈ (1, m), j ∈ (1, n), q_ij ≥ 0. The correlation coefficient of the i-th and k-th constraints is computed as

T_ik = Σ_{j=1}^{n} q_ij q_kj, i, k ∈ (1, m). (4.4)

We separate (2.5) according to the following procedure. Let g_s(X) ≥ 0 be the strongest constraint. We compute T_si for all i ∈ (1, m), i ≠ s, and create a subsystem from those constraints for which T_si takes the largest values. Next we find the strongest constraint among the remaining ones and proceed in the way described above.

Let Z be a subsystem containing m_z constraints. The priority of the subsystem is defined as

P(Z) = W̄ (m_z / m), (4.5)

where W̄ is the mean value of the strengths of the constraints belonging to Z. We begin the computations with the subsystem for which P(Z) is the largest.
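The correlation coefficient (4.4) is just the inner product of the variable-occurrence counts; a minimal sketch on hypothetical data:

```python
def correlation(q, i, k):
    """T_ik of (4.4): inner product of the variable-occurrence counts."""
    return sum(qi * qk for qi, qk in zip(q[i], q[k]))

# q[i][j] = number of components of g_i in which x_j appears (hypothetical)
q = [
    [2, 1, 0, 0],   # g_1 uses x1, x2
    [1, 2, 0, 0],   # g_2 uses x1, x2
    [0, 0, 1, 2],   # g_3 uses x3, x4
]
# g_1 correlates with g_2 and not with g_3, so {g_1, g_2} form a subsystem
```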

5. THE IMPLEMENTATION OF THE H-R ALGORITHM

The H-R algorithm was written in the ALGOL 1204 language and implemented on the second-generation Polish computer ODRA 1204 (storage capacity 16k, access time 6 μsec, addition time 16 μsec); for comparison, the IBM 7090 has 32k, 2.2 μsec and 4.4 μsec respectively.

The H-R program is an adaptive one with two parts, MASTER and EXECUTOR. The first part obtains information about the problem being solved and, on the basis of this information, controls some parameters of the second one. For example, MASTER checks whether the problem being solved is linear or not. For a linear problem it checks whether or not it is a covering problem. MASTER also computes W_i, i ∈ (1, m), puts the constraints in the optimal order, and separates (2.5) into subsystems according to section 4.2.

On the ground of this information we may automatically change the order of applying the fathoming criteria and the procedure of multiplication of the characteristic functions. If, for example, 0.3 ≤ W_i ≤ 0.6 for all i ∈ (1, m), then we consider (3.2) as the first constraint.

In the H-R program the multiplication of characteristic functions is done in the following way: we determine all the families for the first constraint (the one with the greatest priority). Then we take the last family, with the greatest number of fixed variables, and solve g_2(X) ≥ 0 over it, and so on up to solving g_m(X) ≥ 0 and computing f_ak or improving its value. Next the H-R program checks the list of created families and, if this list has run out, it takes the next family from S_1. Such an organization of the multiplication requires only a small storage capacity.

6. COMPARISON OF THE COMPUTATIONAL RESULTS

The efficiency of an algorithm should be measured by means of the number (or an estimate of the number) of properly defined operations needed to solve suitably chosen typical tasks. But it is very difficult to define such operations in discrete programming; therefore the computer time needed to solve subjectively chosen examples is considered the measure of the efficiency of an algorithm. This time obviously depends on the examples, the computer and the language, which creates a situation such as that described in [1]. A partial solution would be a statistical approach to the comparison of results, for example in such a way as in [13, 15].

In order to reduce the subjectivity of our results we took all the numerical examples from the references [6, 11, 16, 17]. Table 1 presents the comparison of results for linear tasks and Table 2 for nonlinear ones. We can see that the H-R algorithm is the best, especially for nonlinear problems. We should observe that the examples of [17] were rather specially constructed for Taha's algorithm, and in spite of this the H-R algorithm is faster than the remaining ones. The examples from the second part of Table 2 could be solved neither by Taha's nor by Watters' algorithm.

Table 3 shows how the optimal order of the constraints influences the efficiency of the computation. We can observe that the majority of these examples have a wrong order of constraints, and putting the constraints in


TABLE 1. Computational results for linear problems

Problems (number: size m, n; source): 1: 10, 10; 2: 15, 15; 3: 20, 20; 4: 30, 30; 5: 40, 40 (source: Haldi); 6: 15, 15; 7(d): 35, 15 (source: Słomiński); 8(e): 50, 32; 9(e): 87, 48 (source: Lawler and Bell).

Computing times in seconds, reference methods (computer: IBM 7090): 1; 2-6(a); 10-60; 650-1325(c); 30; Mimstep: 16; 13-21(b); 71-227; 481-3577; 6658; 921; 711-1195; 796; 24960.

Computing times in seconds, H-R algorithm (computer: ODRA 1204): 1; 4; 22; 136; 354; 27; 7; 29; 2383.

(a) Results for different examples.
(b) Results for different versions of the algorithm.
(c) Computation not completed.
(d) This problem was solved by Freeman [4] by means of the IBM 7044 computer in 150 seconds.
(e) This problem has a quadratic goal function.

optimal order is especially efficient for large problems (up to 10 times).


TABLE 2. Computational results for nonlinear problems

Problems (number: size m, n, r): 1: 3, 5, 5; 2: 3, 10, 5; 3: 3, 20, 5; 4: 7, 5, 10; 5: 7, 10, 10; 6: 7, 20, 10; 7: 7, 30, 10; 8: 7, 5, 10; 9-13: 6, 10, 15; 14: 7, 23, 75; 15: 7, 23, 105; 16: 10, 27, 106; 17: 10, 27, 156; 18: 10, 27, 316.

Computing times in seconds, Taha and Watters: 0.3, 0.5; 0.2, 0.6; 0.2, 3.6; 1.9, 4.7; 0.6, 2.8; 3.3, 21.1; 1.0, 19.0; 5.1, 3.0; 2.1, 2.9; 8.3, 6.3; 2.4, 6.5; 3.4, 8.7; 11.2, 14.1.

Computing times in seconds, Lawler and Bell: 0.6; 8.3; 919.4; 0.7; 3.8; >3000.0; 5.6; 13; 24; 438; 634; 716.

Computing times in seconds, H-R algorithm: 4; 1; 4; 1; 1; <1; 1; 2; 3; <1; 2; 1; 1; 1; 1; 68; 89; 187; 298; 407.

Computers: IBM 7040 (IBM 7090 for problems 14-18); H-R algorithm: ODRA 1204.

TABLE 3. Efficiency of solving the equivalent problem

Problems (number: size m, n): 1: 15, 15; 2: 20, 20; 3: 30, 30; 4: 15, 15; 5: 20, 20; 6: 30, 30; 7: 50, 32; 8: 7, 23; 9: 10, 27; 10: 7, 20.

Computing times in seconds for the best order, the order of the references and the worst order: 4, 5, 22; 22, 38, 235; 136, 3000, 5000; 4, 6, 10; 23, 57, 411; 29, 88; 89, 273; 298, 3000; 2, 3; 8; 39, 632, 235; 347; 3.


7. CONCLUSIONS

1. The H-R algorithm can be generalized in the same way as was done in [18] with pseudo-Boolean programming.

2. On the ground of the results obtained so far we may guarantee the solution of any problem with about 50 variables and, for problems of special structure, with considerably more variables. We may observe that the efficiency of the H-R algorithm would increase considerably if it were implemented on highly parallel fourth-generation computers such as ILLIAC IV.

3. Many authors [2, 3, 7] have pointed out that an equivalent problem corresponding to a given one may considerably increase the efficiency of the known algorithms, but this question has not been investigated so far. On the ground of our results we may say that the equivalent problem need not be a linear one.

REFERENCES

1. Anonymous, Mathem. Programming 2, 260-262 (1972).
2. Bradley, G.H., Manag. Sci. 17, 354-366 (1971).
3. Bradley, G.H., Discrete Math. 1, 29-45 (1971).
4. Freeman, R.J., Operat. Res. 14, 935-941 (1966).
5. Fortet, R., Cahiers du Centre d'Études de Recherche Opérationnelle 1, 5-36 (1959).
6. Haldi, J., Working paper No. 43, Graduate School of Business, Stanford Univ., 1964.
7. Geoffrion, A.M., Marsten, R.E., Manag. Sci. 18, 465-491 (1972).
8. Hammer, P.L., Rudeanu, S., Boolean Methods in Operations Research and Related Areas, Berlin 1968.
9. Hammer, P.L., Rudeanu, S., Operat. Res. 17, 233-261 (1969).
10. Hu, T.C., Integer Programming and Network Flows, New York 1969.
11. Lawler, E., Bell, M., Operat. Res. 15, 1098-1112 (1967).
12. Mao, J.C., Wallingford, A.B., Manag. Sci. 16, 51-60 (1968).
13. Nugent, Ch.E., Vollmann, T.E., Ruml, J., Operat. Res. 16, 150-173 (1968).
14. Randolph, P.H., Swinson, G.E., Walker, M.E., in: Applications of Mathematical Programming Techniques, London 1970.
15. Ritzman, L.P., Manag. Sci. 18, 240-248 (1972).
16. Słomiński, L., Proc. of the Conf. on Control in Large Scale Systems of the Resources Distribution and Development, Jabłonna 1972 (in Polish).
17. Taha, H., Manag. Sci. 18, B 328-343 (1972).
18. Walukiewicz, S., Arch. Autom. i Telemech. 15, 455-483 (1970).
19. Watters, L.J., Operat. Res. 15, 1171-1174 (1967).


NUMERICAL ALGORITHMS FOR GLOBAL EXTREMUM SEARCH

J. Evtushenko

In many problems of operations research in which systems containing uncertain parameters are designed, optimum solutions are often based on minimax strategies. The utilization of such an approach for solving practical problems is restricted by the lack of numerical methods. Up to this time, as far as I know, there do not exist any general numerical methods for obtaining minimax strategies in multistage games. Some first results in this direction have been obtained (the numerical solution of a class of one-step processes).

In the present paper a numerical method for the determination of minimax estimates is presented. This method is based on a numerical method of finding the global extremum of a function [1, 2].

1. Suppose that a function f(x) satisfies a Lipschitz condition with constant C:

|f(x′) − f(x″)| ≤ C ||x′ − x″||.

We shall consider the problem

f* = max_{x∈X} f(x), (1)

where X is a compact set. We shall call the vector x̄ a solution of problem (1) if


f(x̄) ≥ f* − ε, (2)

where ε is the accuracy of the solution.

If for a sequence x_1, x_2, ..., x_k the value F_k = max_{1≤i≤k} f(x_i) is found, then for all x belonging to the spheres

||x − x_i|| ≤ (F_k − f(x_i) + ε)/C, i ∈ (1, k), (3)

the condition (2) holds for the best point found. If the spheres (3) entirely cover the domain X then the problem (1) is solved and the magnitude F_k is an approximate maximum of f on X. In [1] the simplest algorithm of such a covering is presented. In the case when f(x) is a differentiable function, local methods of maximum search are used as auxiliary methods which essentially accelerate the computation.
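A sketch of the covering idea on a one-dimensional interval: each evaluated point excludes a sphere of radius (F_k − f(x_i) + ε)/C, and a new point is placed so that consecutive spheres overlap. The specific stepping rule below is an illustration, not necessarily the algorithm of [1].

```python
import math

def lipschitz_max(f, a, b, C, eps):
    """Global eps-maximum of a C-Lipschitz f on [a, b] by covering:
    inside the sphere |y - x| <= (F - f(x) + eps)/C no value can
    exceed F + eps, and these spheres tile [a, b]."""
    x, F, x_best = a, f(a), a
    while x <= b:
        fx = f(x)
        if fx > F:
            F, x_best = fx, x
        # radius of the sphere around x, plus the guaranteed eps/C
        # radius of the next sphere, so that consecutive spheres overlap
        x += (F - fx + eps) / C + eps / C
    return F, x_best

F, x_best = lipschitz_max(math.sin, 0.0, math.pi, 1.0, 1e-3)
```

The returned F is guaranteed to satisfy F ≥ f* − ε, here with f* = 1 at x = π/2.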

A similar approach was used for finding the global extremum of a function whose gradient satisfies a Lipschitz condition. A program was written in ALGOL 60 for the computations.

2. We shall now consider the determination of a minimax estimate for an m-step process:

J = min_{x_1} max_{x_2} min_{x_3} ... max_{x_m} f(x_1, ..., x_m), (4)

where x_i has dimension n(i) and x = (x_1, x_2, ..., x_m) is an N-dimensional vector, N = n(1) + ... + n(m). The extremum with respect to the i-th vector x_i is sought on a compact domain X_i ⊂ E^{n(i)}, where E^{n(i)} is an n(i)-dimensional Euclidean space. The function f satisfies a Lipschitz condition on the domain X = X_1 × X_2 × ... × X_m.

The method of seeking a global extremum is used step by step for solving problem (4). This numerical method permits us to solve problem (4) to an arbitrary fixed accuracy. A number of modifications have been developed which use local methods for the acceleration of convergence.
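The step-by-step use of global search can be illustrated by exhausting a finite grid for each stage, alternating min and max as in (4); the grids and the function below are hypothetical.

```python
def minimax(f, grids):
    """Evaluate min over x1 of max over x2 of ... of f(x1, ..., xm)
    by recursive exhaustion of a finite grid for each stage."""
    def stage(i, prefix):
        if i == len(grids):
            return f(*prefix)
        values = [stage(i + 1, prefix + (x,)) for x in grids[i]]
        return min(values) if i % 2 == 0 else max(values)
    return stage(0, ())

# two-stage game: the first player picks x, the second answers with y
J = minimax(lambda x, y: (x - y) ** 2, [[0.0, 0.5, 1.0], [0.0, 1.0]])
```

Here the minimizing player's best grid choice is x = 0.5, giving J = 0.25.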


In the Computer Center of the Academy of Sciences special programs have been developed for solving (4). As examples, two applied problems solved by this method can be mentioned; numerical computations show that these methods work efficiently.

3. Consider the game problem

J = min_ν max_μ ∫∫ f(x, y) dμ(x) dν(y), (5)

where μ, ν are probability measures defined on the sets X, Y respectively. The function f is assumed to be continuous on X × Y. It is easy to show that for any probability measures μ, ν the maximization over μ may be replaced by a maximization over the points of X. Let φ(ν) denote the solution of the problem

φ(ν) = max_{x∈X} ∫ f(x, y) dν(y). (6)

The function φ(ν) is convex. Instead of problem (5) we can solve problem (6): for any measure ν we find φ(ν) using the method for finding a global maximum, and for the minimization of φ(ν) we can use the local numerical method which we shall now describe.

4. Using an approximation, we can put our problem into the following mathematical form: find a vector z = (z_1, z_2, ..., z_k) which minimizes the convex function

φ(z) (7)

subject to z ∈ Z = {z : z_i ≥ 0, z_1 + ... + z_k = 1}. If φ(z) is differentiable then for solving the problem we consider the system (8).


We can prove (see [3]) that the limit solution z(t), as t → ∞, of the system (8) is a solution of problem (7) for any initial point.

If φ(z) is nondifferentiable, but differentiable in any direction (as in our case), we can use the following discrete version, in which the step direction at each iterate belongs to the set of support functionals and the step sequence α_k is chosen so that

α_k > 0,  α_k → 0,  Σ_k α_k = ∞.

If ε is sufficiently small then the limit of the sequence z_k is a solution point for any starting point in Z.

References

1. Evtushenko, Yu.G., Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 11, 1390-1408 (1971), Moscow.
2. Evtushenko, Yu.G., Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 12, 89-104 (1972), Moscow.
3. Evtushenko, Yu.G., Zhadan, V.G., Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 13, 588-598 (1973), Moscow.


GRADIENT TECHNIQUES FOR COMPUTATION OF STATIONARY POINTS

E. K. Blum

Department of Mathematics, U. of Southern California, Los Angeles, California 90007

Let J be a real functional defined on a subset of a Hilbert space H. u ∈ H is a stationary point of J if some derivative of J is zero at u. In particular, if u is a minimum point of J and the derivative of J exists at u, then the derivative is zero. Thus, extremum points are stationary points,

but of course the converse need not be true. We shall present some gradient

methods for determining stationary points of a rather general type. We consider

sets of non-isolated stationary points and non-convex functionals and give

conditions for convergence of the gradient methods. We then give applications

to the optimal control problem of Mayer and to the generalized eigenvalue

problem Ax = λBx, where A and B are arbitrary bounded linear operators

from one Hilbert space to another.

The gradient methods presented here are based on the intuitive idea that convergence can be expected whenever there is a neighborhood of the stationary point u in which the cosine of the angle between the gradient ∇J(x) and the vector x - u is bounded away from zero. For a problem with equality constraints, the angles between the gradients of the constraints and x - u also enter into consideration. We shall first consider the problem with equality constraints. <u, v> denotes the inner product.

Let R be the real line and let D be a subset of H. Let g_i: D → R, 1 ≤ i ≤ p, be real functionals. The sets C(g_i) = {x ∈ D : g_i(x) = 0} and C = ∩_{i=1}^p C(g_i) are called "equality constraints". Let J: D → R be another functional, called the "objective (or cost) functional". We denote the Fréchet (or strong) gradients of these functionals at x by ∇J(x) and ∇g_i(x). (See [2], [5], [6] or [8] for pertinent definitions.) Then the differential of J at x


with increment h is dJ(x; h) = J'(x)h = <∇J(x), h>. The subspace G_x spanned by the vectors {∇g_i(x)} is called the "gradient subspace" at x. Its orthogonal complement, T_x, is called the "tangent subspace". Thus, H = G_x ⊕ T_x and ∇J(x) = ∇J_G(x) + ∇J_T(x). We call ∇J_T(x) the "tangential component" of ∇J(x).

Definition 1. A point u ∈ D is called a stationary point of {J, g_i} if ∇J_T(u) = 0 and u ∈ C.

If u is a local minimum point of J on the equality constraint C, then under appropriate conditions u is a stationary point of {J, g_i}. This is a direct consequence of the Lagrange Multiplier Rule. There are many versions of this rule. We state one in the following theorem.

Theorem 1. Let J(u) ≤ J(x) for all x ∈ N_u ∩ C, where N_u is some neighborhood of u. Let ∇J(y) and ∇g_i(y) exist at all points y = u + t_0 h + Σ_{i=1}^p t_i ∇g_i(u), where h ∈ T_u and t = (t_0, t_1, ..., t_p) is in some neighborhood N(h) of 0 ∈ R^{p+1}. Let ∇g_i(y) be continuous in t ∈ N(h) and ∇J(y) continuous in t at t = 0. If ∇J(u) ≠ 0, then there exist real λ_0, λ_1, ..., λ_p, not all zero, such that λ_0 ∇J(u) + Σ_i λ_i ∇g_i(u) = 0. Furthermore, if {∇g_i(u)} is a linearly independent set, then λ_0 = 1 and λ_1, ..., λ_p are unique and not all zero. Thus, ∇J_T(u) = 0 and u is a stationary point of {J, g_i}.

Proof: See [2].

Notation: x̂ = x/‖x‖ denotes the normalized vector.

Definition 2. A stationary point u of {J, g_i} is called quasiregular if there exists a neighborhood N = N(u) such that for x ∈ N the following four conditions are satisfied: (i) ∇J(x) and ∇g_i(x) are uniformly continuous in x and ∇J_G(x) ≠ 0, ∇g_i(x) ≠ 0; (ii) the Gram matrix (<∇g_i(x), ∇g_j(x)>) is nonsingular (normality); (iii) for the angle θ(x) = arcsin(‖∇J_T(x)‖/‖∇J(x)‖), when ∇J_T(x) ≠ 0 the gradient ∇θ(x) exists and ‖∇θ(x)‖ is bounded away from zero; when ∇J_T(x) = 0, the one-sided differential dθ(x; h+) exists for all


h ∈ H and dθ(x+h; h+) → dθ(x; h+) uniformly as ‖h‖ → 0; (iv) for x ∈ N not a stationary point, let U_x = {stationary points u : u is the closest stationary point to x on the line segment [x, u]}. For u ∈ U_x, let Δx = x - u, α_i = arccos <∇̂g_i(x), Δ̂x> and β = arccos <∇̂θ(x), Δ̂x> (0 ≤ α_i, β ≤ π). Then there is a constant γ > 0 such that Σ_{i=1}^p cos²α_i + cos²β > γ if ∇J_T(x) ≠ 0, and Σ_{i=1}^p cos²α_i > γ if ∇J_T(x) = 0.

A gradient procedure for determining quasiregular stationary points is given by the following formulas.

x_{n+1} = x_n + s_n h(x_n). (1)

h(x) = h_G(x) + h_T(x). (2)

h_G(x) = -Σ_{i=1}^p [g_i(x)/‖∇g_i(x)‖] ∇g_i(x). (3)

h_T(x) = -[tan θ(x)/‖∇θ(x)‖] ∇θ(x) if ∇J_T(x) ≠ 0, and h_T(x) = 0 if ∇J_T(x) = 0. (4)

d/2 ≤ s_n ≤ d, where 0 < d < 1/2p. (5)

We call this the "angular gradient procedure" or a "mixed strategy procedure".

As an example, consider J(x) = <Ax, x> where A: H → H is a bounded self-adjoint linear operator. We impose one equality constraint, C, defined by g(x) = ‖x‖² - 1; i.e. C is the unit sphere. It is easily proved that u is a unit eigenvector of A if and only if ∇J_T(u) = 0. (See [2].) In [2] an explicit expression for ∇θ(x) is also derived in terms of ‖x‖, |<Ax, x>|, ‖Ax‖ and ‖∇J_T(x)‖, valid


if ∇J_T(x) ≠ 0 and cos θ(x) ≠ 0. The following theorem can be proved. (See [2], [3], [4].)

Theorem 2. Let A be a bounded self-adjoint operator on H. If λ ≠ 0 is an eigenvalue of A of multiplicity 1 and λ is an isolated point of the spectrum of A, then any unit eigenvector belonging to λ is a quasiregular stationary point of {J, g}.
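The unit-sphere example can be made concrete in finite dimensions. The sketch below is an illustration added for this edition, not code from the paper: the matrix A and starting vector are invented, and instead of the full angular step (4) it simply follows the tangential component ∇J_T of ∇J = 2Ax and renormalizes onto C = {x : ‖x‖ = 1}.

```python
import math

def tangential_descent(A, x, step=0.05, iters=2000):
    """Minimize J(x) = <Ax, x> on the unit sphere by following -grad J_T.

    Simplified stand-in for the angular gradient procedure: the tangential
    component of grad J = 2Ax is its part orthogonal to the constraint
    gradient grad g = 2x; renormalizing stands in for the h_G correction.
    """
    n = len(x)
    nrm = math.sqrt(sum(c * c for c in x))
    x = [c / nrm for c in x]
    for _ in range(iters):
        g = [2.0 * sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        gx = sum(g[i] * x[i] for i in range(n))        # <grad J, x>
        g_T = [g[i] - gx * x[i] for i in range(n)]     # tangential component
        x = [x[i] - step * g_T[i] for i in range(n)]   # descend along -grad J_T
        nrm = math.sqrt(sum(c * c for c in x))
        x = [c / nrm for c in x]                       # back onto the sphere
    return x

A = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
u = tangential_descent(A, [1.0, 1.0, 1.0])
rayleigh = sum(u[i] * A[i][j] * u[j] for i in range(3) for j in range(3))
```

Starting from a generic vector, the iteration drives the Rayleigh quotient <Au, u> to the smallest eigenvalue of A, whose unit eigenvector is the stationary point reached.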

(It appears that the angular gradient method can be used to compute such intermediate eigenvalues and eigenvectors even when λ is not isolated. In the latter case, the stationary points are not isolated.) For example, if Ax = ∫₀¹ K(t, s) x(s) ds is an integral operator with a symmetric kernel K continuous on the unit square, then A is a compact self-adjoint operator on L². Its spectrum is at most denumerable and a non-zero eigenvalue must be isolated. By a suitable choice of x₀(s), the angular gradient procedure will converge to intermediate eigenfunctions. The general convergence result is as follows.

Theorem 3. Let u be a quasiregular stationary point of {J, g_i} and N a quasiregular neighborhood of u. There exist r > 0 and a positive k < 1 such that for any initial vector in the ball B(u, r) the angular gradient procedure converges to a stationary point u* and ‖x_n - u*‖ ≤ kⁿ[(1+k)/(1-k)]‖x₀ - u‖.

Proof. See [2].

As a second example, consider the Mayer optimal control problem. We are given the differential equations of control dx/dt = f(x, u), with x ∈ R^m and u ∈ R^q, the boundary conditions ψ_i(a, x(a), b, x(b)) = 0, 1 ≤ i ≤ p, and the cost function J(a, x(a), b, x(b)). It is required to determine a control function u(t) which produces a trajectory x(t) satisfying the boundary conditions and minimizing J. We shall restrict u(t) to be piecewise continuous on [a, b] and have values in some open set of R^q. To simplify the example, suppose x(a) is prescribed. Then x(b) depends only on u = u(t), and the ψ_i and J become functionals of u. Take H to be the cartesian product of q copies of L²[a, b] with inner product


<u, v> = ∫_a^b Σ_{i=1}^q u_i(t) v_i(t) dt.

As is well known, the gradients are given by ∇J(u) = J_x(t)(∂f/∂u)₀ and ∇ψ_i(u) = ψ_{ix}(t)(∂f/∂u)₀, where (∂f/∂u)₀ is the matrix of partial derivatives (∂f_i/∂u_j) evaluated along a solution (x(t), u(t)) of the differential equations, J_x(t) is the solution of the adjoint equations dz/dt = -(∂f/∂x)ᵀ z with values J_x(b) = (∂J/∂x(b))₀, and the ψ_{ix}(t) are solutions of the adjoint equations with ψ_{ix}(b) = (∂ψ_i/∂x(b))₀. However, the gradient ∇θ(x) would be too difficult to compute. Therefore the angular gradient procedure is modified by taking

h_T(x) = -(1/‖∇J_T(x)‖²) <∇θ(x), ∇J_T(x)> ∇J_T(x),

and approximating the differential <∇θ(x), ∇J_T(x)> by the finite difference [θ(x + s∇J_T(x)) - θ(x)]/s.
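The adjoint computation above has an exact discrete-time analogue, sketched below with invented ingredients (a scalar system x_{k+1} = x_k + h(-x_k + u_k) and cost J = ½x_N²; none of this is from the paper). The backward recursion z_k = (1-h) z_{k+1} with z_N = x_N plays the role of dz/dt = -(∂f/∂x)ᵀz, the factor h is the discrete (∂f/∂u)₀ term, and the result can be checked against a finite difference, just as the differential <∇θ, ∇J_T> is approximated by a finite difference above.

```python
def simulate(u, h=0.01, x0=1.0):
    """Forward pass of x_{k+1} = x_k + h*(-x_k + u_k); returns J = 0.5*x_N**2."""
    x = x0
    for uk in u:
        x = x + h * (-x + uk)
    return 0.5 * x * x

def adjoint_gradient(u, h=0.01, x0=1.0):
    """Backward (adjoint) sweep: z_N = x_N, z_k = (1-h)*z_{k+1}, dJ/du_k = h*z_{k+1}."""
    x = x0
    for uk in u:                    # forward integration (state need not be
        x = x + h * (-x + uk)       # stored: df/dx = -1 is constant here)
    z = x                           # z_N = dJ/dx_N = x_N
    grad = [0.0] * len(u)
    for k in range(len(u) - 1, -1, -1):
        grad[k] = h * z             # u_k enters x_{k+1} with factor h
        z = (1.0 - h) * z           # adjoint recursion z_k = (1-h)*z_{k+1}
    return grad

u = [0.5] * 100
g = adjoint_gradient(u)

# Finite-difference check on one component.
eps = 1e-6
u_pert = list(u)
u_pert[37] += eps
fd = (simulate(u_pert) - simulate(u)) / eps
```

For this linear-quadratic toy problem the adjoint gradient and the finite difference agree to roundoff, which is the sanity check one would run before trusting the backward integration in a larger program.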

To obtain convergence of the modified angular gradient procedure we replace condition (iv) in Definition 2 by the following (iv'): Σ_{i=1}^p cos²α_i + cos α₀ cos β₀/|cos β₀| > γ, where β₀ = arccos <∇̂θ(x), ∇̂J_T(x)> and α₀ = arccos <∇̂J_T(x), Δ̂x>. This method has been applied successfully to optimum rocket trajectory (minimum fuel) problems. It has also been tested successfully on the classical brachistochrone problem.

Now we consider a related method for the unconstrained problem [10], [12]. To motivate it, we consider the generalized eigenvalue problem Ax = λBx, where A and B are bounded linear operators from one Hilbert space H to another H'. Let J(x) = (1/2)‖Ax - (<Ax, Bx>/<Bx, Bx>) Bx‖² when Bx ≠ 0. Then it is not difficult to show [10] that J(x) = 0 if and only if ∇J(x) = 0. Thus, the eigenvectors are precisely the stationary points of J. Since there can be a subspace E_λ of eigenvectors, the set of stationary points need not be isolated.


To compute such stationary points we can use the gradient method x_{n+1} = x_n + h(x_n), where

h(x) = -[2J(x)/‖∇J(x)‖²] ∇J(x). (6)

A straightforward calculation yields ∇J(x) = (A - R(x)B)*(A - R(x)B)x, where R(x) = <Ax, Bx>/<Bx, Bx> and * denotes the adjoint operator. The method has

been tried successfully in the finite-dimensional case, where A and B are square matrices. It should be especially effective in cases where A and B are band or sparse matrices and when only certain intermediate eigenvalues are being sought.
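A minimal finite-dimensional sketch of the step (6) follows; the matrices A and B and the starting vector are illustrative assumptions, not data from the paper. With R(x) = <Ax, Bx>/<Bx, Bx>, the residual r = (A - R(x)B)x gives J(x) = ½‖r‖² and, for symmetric A and B, ∇J(x) = (A - R(x)B)r.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def eig_gradient(A, B, x, iters=500):
    """Iterate x <- x - (2 J(x) / ||grad J(x)||^2) grad J(x), the step (6),
    for J(x) = 0.5*||A x - R(x) B x||^2 with R(x) = <Ax,Bx>/<Bx,Bx>."""
    n = len(x)
    for _ in range(iters):
        Ax, Bx = matvec(A, x), matvec(B, x)
        R = dot(Ax, Bx) / dot(Bx, Bx)
        r = [Ax[i] - R * Bx[i] for i in range(n)]      # residual (A - R B) x
        J = 0.5 * dot(r, r)
        ArB = [[A[i][j] - R * B[i][j] for j in range(n)] for i in range(n)]
        g = matvec(ArB, r)   # grad J = (A - R B)^T (A - R B) x; A, B symmetric
        gg = dot(g, g)
        if gg < 1e-30:
            break                                      # at a stationary point
        s = 2.0 * J / gg
        x = [x[i] - s * g[i] for i in range(n)]
    Ax, Bx = matvec(A, x), matvec(B, x)
    return x, dot(Ax, Bx) / dot(Bx, Bx)                # eigenvector, eigenvalue

A = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
B = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 2.0]]
x, lam = eig_gradient(A, B, [1.0, 0.1, 0.05])
```

Here the stationary value J(x*) = 0 is known in advance, which is exactly the situation the convergence theory of this section requires.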

The step in (6) is a special case of a more general method which can be applied to find certain kinds of non-isolated stationary points.

Definition 3. A set E of stationary points is a C-stationary set for J if for every ε > 0 there exist a neighborhood N of E and a constant c > 0 such that for x ∈ N, ∇J(x) is continuous in x and there exists a unique nearest point x* ∈ E, and for x ∈ N - E the following conditions hold: (i) ∇J(x) ≠ 0; (ii) cos²α(x) > c, where α(x) = arccos <∇̂J(x), Δ̂x> and Δx = x - x*; (iii) |cos α(x) - 2(J(x) - J(x*))/(‖∇J(x)‖ ‖Δx‖)| < ε. N is called a C-stationary neighborhood.

The following convergence theorem is proved in [10].

Theorem 3. Let E be a C-stationary set for J and let N be a C-stationary neighborhood of E. For x ∈ N define

h(x) = -[2(J(x) - J(x*))/‖∇J(x)‖²] ∇J(x) if ∇J(x) ≠ 0, and h(x) = 0 otherwise, (7)

and let x_{n+1} = x_n + h(x_n). There exist a neighborhood M of E and a positive constant k < 1 such that for any initial vector x₀ ∈ M the inequality ‖x_n - E‖ ≤ kⁿ‖x₀ - E‖ holds for n ≥ 0 and the sequence (x_n) converges to a point in the closure of E. Furthermore, for arbitrary δ > 0 the neighborhood


M can be chosen so that |k² - (1 - inf_{x ∈ M-E} cos²α(x))| < δ.

Of course, the step h(x) in (7) is only computable if we know J(x*). (In the generalized eigenvalue problem, J(x*) = 0.) It is possible that a close approximation to the value J(x*) would be available in practice, and this might suffice. However, pending further study, application is limited to those problems in which J(x*) is known.

The step-size in (7) is an approximation to the distance in the gradient direction from x to the nearest stationary point x*. Thus, the method of Theorem 3 could be called a "gradient method of closest approach." In this respect it differs from the steepest descent method and other gradient methods [2], [5], [6], [7], [9]. Its application to eigenvalue problems in infinite-dimensional spaces (e.g. integral equations) would generally involve discretization errors of some kind [1], [11]. This requires further investigation.

References

1. K. E. Atkinson, Numerical Solution of the Eigenvalue Problem for Compact Integral Operators, TAMS, 1967.

2. E. K. Blum, Numerical Analysis and Computation (Ch. 5, 12), Addison-Wesley, 1972.

3. E. K. Blum, A Convergent Gradient Procedure in pre-Hilbert Spaces, Pacific J. Math., 18, 1 (1966).

4. E. K. Blum, Stationary points of functionals in pre-Hilbert spaces, J. Comp. Syst. Sci., Apr. 1967.

5. J. W. Daniel, The Approximate Minimization of Functionals, Prentice-Hall, 1971.

6. A. A. Goldstein, Constructive Real Analysis, Harper, 1967.

7. E. S. Levitin and B. T. Polyak, Constrained Minimization Methods, Zh. Vychisl. Mat. Mat. Fiz., 1966 (Comp. Math. and Math. Phys.).

8. M. Z. Nashed, Differentiability and Related Properties of Nonlinear Operators, in Nonlinear Functional Analysis and Applications, ed. L. B. Rall, Ac. Press, 1971.

9. S. F. McCormick, A General Approach to One-step Iterative Methods with Application to Eigenvalue Problems, J. Comp. and Syst. Sci., Aug. 1972.

10. E. K. Blum and G. Rodrigue, Solution of Eigenvalue Problems in Hilbert Spaces by a Gradient Method, USC Math. Dept. Preprint, Apr. 1972.

11. H. Wielandt, Error Bounds for Eigenvalues of Symmetric Integral Equations, Proc. AMS Symp. Applied Math., 1956.

12. G. Rodrigue, A Gradient Method for the Matrix Eigenvalue Problem Ax = λBx, Kent State U. Math. Dept., Dec. 1972.

13. S. McCormick and G. Rodrigue, A Class of Gradient Methods for Least Squares Problems for Operators with Closed and Nonclosed Range, Claremont U. and U.S.C. Report.


PARAMETERIZATION AND GRAPHIC AID IN GRADIENT METHODS

Jean-Pierre PELTIER

Office National d'Etudes et de Recherches Aérospatiales (ONERA)

92320 - Châtillon (France)

Abstract

The first part reports an experiment in which a graphic interactive console was used to operate a gradient-type optimization program. Some indications are provided on the program structure and the requirements for the graphic software. Conclusions are drawn both on the advantages and the difficulties of such a project.

The second part deals with the parameterization of optimal control problems (i.e. solution through non-linear programming). A local measure of the loss of freedom pertaining to such a technique is established. Minimization of this loss leads to the concept of optimal parameterization. A first result is given; it concerns the metric in parameter space.

PART I : Console and gradient

1. Introduction

In the past, interactive graphic display consoles have been used in the field of optimization to select the desired model (i.e. state equations, constraints...) and initiate computations (i.e. provide initial values, as in STEPNIEWSKI 1969). The experiment carried on at ONERA is original in that the interaction deals with the optimization procedure itself. A conclusion of previous computational experience had been that, in general, a rapid solution of large, highly non-linear optimal control problems requires the sequential use of several numerical techniques, although each of these is, on paper, sufficient. This is why ONERA developed a fairly large optimization program, TOPIC (Trajectory OPtimization for Interception with Constraints), offering a range of options. Options fall into seven categories:

1) controls (how they will be modeled)
2) constraints (penalization, Lagrange multipliers ...)
3) search direction - local analysis (metric choice, semi-direct technique ...)
4) search direction - global analysis (takes past steps into account, e.g. variable metric)
5) step size (fixed, linear search techniques ...)
6) convergence index (Kelley 1962, Fave 1968 ...)
7) technical options (e.g. integration procedures).

Although some options seem to be non-independent, it is best to program them as if they were, the present trend being to re-introduce into various algorithms possibilities which, at first, seemed not to be compatible.

In order to facilitate comparisons of methods and speed up computations, a graphic display console has been interfaced with the program. The console has a treble action:

(i) monitor computations so that an operator can judge their worthiness;
(ii) aid operator diagnostics and enable direct action;
(iii) facilitate the edition of results.

2. General structure of the program

The presence of a graphic package together with an already voluminous computing program results in a quite


large memory requirement (about 400 k octets), so the program structure has to be carefully studied and be compatible with overlay techniques. The TOPIC structure is shown in figure 1. It is entirely written in FORTRAN IV. Each of the blocks can be divided into a set of overlaid segments, except for the MAIN, MAIN CONSOLE and MEMORY blocks. The MEMORY block stands for labelled commons which contain all variable values, such as current control, state and gradient histories, algorithm memories and so forth. These values are thus preserved during overlay operations.

The MAIN program is reduced to a switch and calls the initialization and input routines, the console, and the optimization driver(s) in block 1.

Fig. 1 - Program structure (MAIN; MAIN CONSOLE; IS77 driver; labelled commons (MEMORY); computing subroutines; general-purpose ONERA software; plotter and console input/output).

3. Computing programs

A very general gradient program has to perform the following twelve tasks:

1 - Build a design vector from actual parameters plus control functions (if required).
2 - Load the initial state.
3 - Forward integration of the state (load state tables).
4 - Compute the performance index from the final state.
5 - The algorithm decides whether to go on to 6 or 11. CALL TO CONSOLE.
6 - Compute the final adjoint.
7 - Backward integration to compute the gradient (or direct finite differences).
8 - Projection of the functional gradient into design vector space.
9 - The algorithm decides to go on to 10 or 11 (or to exit). CALL TO CONSOLE.
10 - Search direction modification (non-local methods).


11 - Step computation.
12 - Control or design vector modification.

Of these tasks, 1, 2, 3, 7 and 8 can be deleted in the case of a function extremalization. It has been found efficient to schedule calls to the console after points 5 and 9.

The list of tasks gives a good idea of the program(s) in block 1, i.e. the driver program(s). Such routines are just big switches and call specialized routines to perform each of the 12 tasks. When the console package returns to the driver, a flag enables computations to restart at any of the 12 points.

Block 2 in figure 1 contains algorithm routines, integration routines, model routines, plus least squares, linear system solution and so forth.

Block 3 routines are divided into two classes:
- specific: routines handling control functions and the design vector;
- general purpose: matrix, vector and function handling, such as performing +, ×, ‖·‖ operations on such elements.

Such a structure leads to an unusual programming of the algorithm routines, which cannot call subroutines directly to perform tasks but instead return control to the driver with a proper set of flags to indicate what is needed.
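The inversion of control described here can be sketched as follows (a toy reconstruction: the class names, flag values and one-dimensional quadratic model are invented, and this is not TOPIC code). The algorithm routine never calls the model directly; it only returns a flag telling the driver which task to perform next.

```python
# Toy flag-driven structure: the algorithm returns task flags;
# the driver dispatches the corresponding computing routine.
EVALUATE, GRADIENT, STEP, DONE = "evaluate", "gradient", "step", "done"

class DescentAlgorithm:
    """Fixed-step gradient descent expressed as a flag machine."""
    def __init__(self, step=0.2, iters=50):
        self.step, self.iters, self.n = step, iters, 0
        self.grad = None

    def next_task(self, phase):
        if phase in ("start", "stepped"):
            return EVALUATE if self.n < self.iters else DONE
        if phase == "evaluated":
            return GRADIENT
        return STEP                      # phase == "gradiented"

def driver(algo, cost, grad, x):
    """Dispatch loop: performs whatever task the algorithm asks for."""
    phase, J = "start", None
    while True:
        task = algo.next_task(phase)
        if task == DONE:
            return x, J
        if task == EVALUATE:
            J, phase = cost(x), "evaluated"
        elif task == GRADIENT:
            algo.grad, phase = grad(x), "gradiented"
        elif task == STEP:
            x = x - algo.step * algo.grad
            algo.n += 1
            phase = "stepped"

xmin, Jmin = driver(DescentAlgorithm(),
                    lambda x: (x - 3.0) ** 2,
                    lambda x: 2.0 * (x - 3.0),
                    0.0)
```

The same dispatch point is where a console call can be interposed (after the decisions of tasks 5 and 9), since the driver, not the algorithm, owns the control flow.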

4. The graphic package

It is, in fact, composed of two distinct packages:

(i) the ONERA general purpose graphic package, SYCEC;
(ii) a special package, IS77, specific to the program.

Fig. 2 - Console system (screen with title area, logical keys, numeric area and curve area; light pen; alphanumeric keyboard).


SYCEC divides the console screen into 4 areas, as shown in figure 2. We are going to review the use IS77 makes of the 3 more important areas, with emphasis on the monitor set-up, which is more delicate.

A) The curve area: 3 curves are plotted against the number of function evaluations:
• the current value of the performance index (thin curve),
• the current value of the unaugmented (i.e. true) performance index (dotted curve),
• for visual aid, the best value reached for the performance index (heavy curve).

Comparing the first two histories, the operator sees how the penalty terms are behaving. Comparing the first and last curves tells him immediately whether an improvement is currently achieved: the two curves merge. After each gradient computation, sensitivity functions with respect to the controls are plotted on the screen. Of course, in the intervention phase (i.e. when the operator has manual control) all interesting curves such as controls, state and pseudo-state (constraints) should be available for examination and even for manual changes (through the alphanumeric keyboard or directly with the light pen). For parametric optimization problems, histograms are plotted instead of functions.

B) The numeric area: it allows 20 numbers, real or integer, to be displayed without mnemonics. It has been found necessary to reserve 8 lines for flag displays showing the main options of the algorithms, printout volume and iteration count. Thus the operator knows what is actually performed by the computing program. The remaining 12 lines display the performance index, the unaugmented performance index, values of the final state and pseudo-state, final constraints and the current step size value for the linear search. Computing a distinct pseudo-state for each current-state constraint has been found more informative than lumping them all into a single variable (which is possible), in spite of the extra dimension requirement in the integration procedure.

Here again, in the intervention phase, IS77 brings upon request into the numeric area any significant number of the problem for examination and modification.

C) The message area performs the important task of giving the numerical data on the screen their full meaning by keeping the operator aware of where these data come from. Diverse standard messages let the operator know:
- where the console package was called from in Block 1;
- which IS77 subroutine is in action;
- whether an action is expected from him (e.g. depress a key from a given set);
- whether no action is expected; this is used for lengthy input/output operations, and the message is supposed to keep the operator cool.

To conclude this description of IS77, the subroutine "Main console" has just a logical function and calls, either in automatic or manual mode, specialized subroutines; Block 3 routines should be available to the graphic package for simple computations as required by the operator.

5. Difficulties encountered

(i) TOPIC is a program specific in its formulation to one optimization problem, and IS77 is specific both to the problem and the program. Developing a non-specific graphic package (i.e. one depending neither upon the problem formulation nor upon the constitution of the memory block) would require a big programming effort. Values should be passed through arguments instead of commons. IS77 would need its own input file (it has one, but a short one) to define, through a set of flags, priorities for the monitor option. And as a handbook could not indicate (as is presently done) the signification of the numbers, this would have to be done on the screen: a file of mnemonics would have to be loaded and each numerical value would appear with its name. For such a work, the console software would have to be rebuilt from scratch.


(ii) As it is, on the IBM 2250 console which is used, the possibility for manual intervention goes through a program stop demanding an operator action. Thus there is no possibility of having a (normal) monitoring phase interrupted unexpectedly by the operator, although this would be the best use of the system. Actually, a simple 2-position switch would be sufficient for this purpose, but such a switch does not exist on the 2250.

(iii) The economy of the system is delicate: the time t_g necessary to update the screen, in monitor option, is constant (about 3-4 seconds), while the time necessary for computations varies widely with the problem. A good reference is the central unit time per function evaluation, t_c. If t_c is too small compared to t_g, the use of a complex console system is unjustified to monitor (in production) such inexpensive calculations. If t_c is too long (> 1 minute), the screen is static and the time of the operator (a specialist) is wasted, although the computations are expensive enough to deserve being closely followed. For the problem dealt with through IS77, t_c varies from 10 seconds to 1 minute, which has been found reasonable.

6. Conclusion

After a year and a half of use, the graphic system has proved its efficiency for:
- acquiring rapidly an insight into a new problem,
- building a reasonable first guess for the algorithms,
- getting fast results (a night on the console is equivalent to a week of normal procedure!).

Moreover, it helped us reach some interesting conclusions on constraint handling, global algorithms, integration methods and the parameterization of optimal control problems (such conclusions might have been reached without the graphic system, but probably less rapidly).

However, the development of a really general graphic software is an expensive task which now awaits the conclusion of a present phase implementing a general purpose, versatile, multiple-option optimization program.

PART II : Parameterization of optimal control problems

1. Introduction

Optimization problems in functional space and, among these, optimal control problems aim to determine, in general, a vector-valued function U* of, let us say, time over a given (finite) interval, which optimizes some performance index. Any problem of practical importance in the field has to be solved on a digital computer, and this eventually transforms the function into a set of many parameters. Already this approximation raised some theoretical examination (KELLEY DENHAM 1968). Then variable metric algorithms, which have represented a major improvement in the field of parametric optimization with their convergence properties, tempted several groups into using these techniques to solve optimal control problems (BRUSCH 1970, JOHNSON KAMM 1971, SPEYER et al. 1971, the author 1971). Of course this technique imposes a modeling of the controls (figure 3) involving a limited number of parameters (an order of magnitude less than the average number of discrete points in functional programs), and such a technique delivers only a suboptimal solution V*_n which depends upon the way the controls are parameterized and upon the number n of parameters. A first question which arises is whether or not one can find n(ε) large enough so that ‖U* - V*_n‖ < ε, i.e. whether V*_n → U* when n → ∞.


Fig. 3 - Parameterization example (a control history U(t) modeled on the nodes t₀, t₂, t₃, t₄, ..., t_{n-1}, t_f).

This question, of theoretical importance, has been solved by CULLUM 1972. The contribution of this paper is to try to select, for a given number of parameters, the best parameterization for the control.

2. Steepest slope in a Hilbert space

Let U ∈ 𝒰 and let J(.) be an application of 𝒰 into R. The slope β of J(.) in direction dU at point U is defined as

β = lim_{λ→0} [J(U + λ dU) - J(U)] / (λ ‖dU‖),

where λ ∈ R and ‖.‖ is the selected norm in 𝒰. Whenever a Fréchet derivative ∂J/∂U(U)[.] exists at point U,

β = ∂J/∂U(U)[dU] / ‖dU‖.

Let 𝒰 be Hilbert and let 𝒜(.) be an application of 𝒰 onto 𝒰*, its dual space; 𝒜(.) is linear, continuous, symmetric and coercive. Let <.,.> be the duality product (i.e. the application of 𝒰 × 𝒰* into R defined by U*(U)) and define a dot product in 𝒰 as

(.,.) = <., 𝒜(.)>.

The norm in 𝒰 will be the associated norm ‖.‖. A classical step in the gradient technique consists in defining a steepest slope direction dU* which maximizes β. This direction is found to be

(4) dU* = 𝒜⁻¹(∂J/∂U),

and the corresponding slope is

(5) β* = <𝒜⁻¹(∂J/∂U), ∂J/∂U>^{1/2}.


Let us define the efficiency of a direction dU as

(6) η(U, dU) = β(dU) / β*.
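In coordinates, the construction (4)-(6) can be checked numerically. The sketch below is an added illustration with an invented diagonal SPD matrix standing in for the metric operator 𝒜: the slope of a direction dU is β(dU) = <g, dU>/‖dU‖, the steepest direction is dU* = 𝒜⁻¹g as in (4), and the efficiency η(dU) = β(dU)/β* of (6) never exceeds one.

```python
import math

# Invented SPD diagonal metric standing in for the operator A, and a gradient g.
metric = [1.0, 2.0, 4.0]             # A = diag(1, 2, 4)
g = [1.0, -2.0, 0.5]                 # dJ/dU in the duality pairing

def slope(dU):
    """beta(dU) = <g, dU> / ||dU||_A, with ||dU||_A = sqrt(dU^T A dU)."""
    num = sum(gi * di for gi, di in zip(g, dU))
    nrm = math.sqrt(sum(m * d * d for m, d in zip(metric, dU)))
    return num / nrm

dU_star = [gi / m for gi, m in zip(g, metric)]                       # A^{-1} g, eq. (4)
beta_star = math.sqrt(sum(gi * gi / m for gi, m in zip(g, metric)))  # eq. (5)

def efficiency(dU):
    """eta(dU) = beta(dU)/beta*, eq. (6); equals 1 only at the steepest direction."""
    return slope(dU) / beta_star
```

For this metric β* = √3.0625 = 1.75; efficiency(dU_star) is 1, while e.g. the coordinate direction [1, 0, 0] achieves only 1/1.75 ≈ 0.571, illustrating the Cauchy-Schwarz bound behind (5).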

3. The V(.) application

Similarly, let 𝒫 be a Hilbert space with a metric associated to an application 𝒞(.), let <.,.> denote the duality product, let K(.) be an application of 𝒫 into R, and let V(.) be an application of 𝒫 into 𝒰. At a point a ∈ 𝒫, let ∂V/∂a(a)[.] be the Fréchet derivative of V. The steepest slope direction of K(.) at point a, da*, is transformed through ∂V/∂a(a)[.] into the direction

(7) dU = ∂V/∂a(a)[da*]

at point U = V(a).

As an example, suppose 𝒰 to be an L² space of control histories over the time interval and 𝒫 to be Rⁿ; V(a, t) is some control model depending upon an n-vector a. The dot products are respectively defined as ∫ <., [M(t)] .> dt and <., [N] .>, where [N] and [M(t)] are symmetric, continuous, positive definite matrices. Let ∂J/∂U(U)[.] = ∫ <H_U(t), .>_e dt and ∂K/∂a(a)[.] = <K_a, .>_e, where <., .>_e, not to be confused with <., .>, is the usual euclidean product. The notation H_U is used by similarity with optimal control problems but, so far, is just a notation. In this example the accessory steepest direction is da* = [N]⁻¹ K_a, and the transformed direction (7) involves [N]⁻¹ and the terms ∂V/∂a.

4. Parameterization efficiency

In the case where K(.) is defined as the composed application K(.) = J(V(.)), the chain derivation formula gives ∂K/∂a(a)[.] = ∂J/∂U(V(a))[∂V/∂a(a)[.]], i.e. K_a = (∂V/∂a)* H_U, where (∂V/∂a)* is the adjoint of ∂V/∂a.


In the special case where V(.) is biunique (i.e. 𝒰 and 𝒫 can be identified), it is possible to select in 𝒫 the 𝒞(.) metric so that the image of da* is identical to dU* (see (4)); then η = 1. In the general case (and in the case of parameterization) 𝒰 and 𝒫 cannot be identified, and even for an injective V(.) application the adjoint ∂V* is not so. Therefore the image direction cannot be identified with 𝒜⁻¹(∂J/∂U), and dU ≠ dU*.

However, it is sufficient that ∂V(.) be injective in order to confer on the application

(14) ∂V* ∘ 𝒜 ∘ ∂V

all the required properties, starting with coercivity, to be associated with a metric in 𝒫. Moreover, (14) precisely defines the image-metric in the case of identifiable 𝒰, 𝒫 spaces. It is non-local (i.e. constant over 𝒫) if V(.) is linear.

In any case we shall define the parameterization efficiency at point U as the value of η(dU) (where dU is defined in (13)), which is less than or equal to one. η(dU) can be considered as a local measure of the loss of freedom induced by the parameterization of the functional problem. In the above-defined example of suboptimization in Rⁿ of a problem defined in L², the value of η is given by (15).
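For a linear model V(a) = Va the efficiency can be computed directly; the interpolation matrix and gradient below are invented for illustration. Taking [M] = I on the fine grid, the induced metric of (14) is [N] = VᵀV, the accessory steepest direction is da* = [N]⁻¹Vᵀg, and dU = V da* is the orthogonal projection of the functional gradient g onto the range of V, so η = ‖dU‖/‖g‖ measures the fraction of the gradient the parameterization can follow.

```python
import math

def matvec(V, a):
    return [sum(V[i][j] * a[j] for j in range(len(a))) for i in range(len(V))]

def rmatvec(V, g):                     # V^T g
    return [sum(V[i][j] * g[i] for i in range(len(g))) for j in range(len(V[0]))]

# Invented linear control model: 2 parameters -> 4 grid values,
# piecewise constant on two subintervals; fine-grid metric M = identity.
V = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
g = [1.0, 0.5, -0.25, 2.0]             # functional gradient on the fine grid

# Induced metric (14): N = V^T V (here 2 * identity, so inversion is trivial).
N = [[sum(V[i][p] * V[i][q] for i in range(len(V))) for q in range(2)]
     for p in range(2)]
da_star = [c / N[j][j] for j, c in enumerate(rmatvec(V, g))]   # N^{-1} V^T g
dU = matvec(V, da_star)                # projection of g onto range(V)

norm = lambda v: math.sqrt(sum(c * c for c in v))
eta = (sum(a * b for a, b in zip(g, dU)) / norm(dU)) / norm(g)
```

Here exactly half of the gradient's energy lies in range(V), so η = √0.5 ≈ 0.707: this parameterization can realize about 71% of the unconstrained steepest slope at this point.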

5. Optimal parameterization

Having built an index of quality for a parameterization, it is natural to try to maximize it. Maximization of η can be carried out in several steps:

1) V(.) being given, select 𝒞(.) so that η is maximum.
2) For a given 𝒞(.), V(.) can usually be embedded into a family of transformations depending upon a given set of parameters {b} which are not subject to optimization, i.e. do not belong to the {a} set. It is possible to solve an accessory optimization problem, maximizing η(V({a}, {b}, .), dU({a}, {b}, .)) with respect to {b}.
3) Eventually it is possible to compare the optimum values of η for various kinds of V(.) transforms and select a preferred parameterization technique.

The problem is that, as η is locally defined, such accessory optimization problems will lead to local solutions and local conclusions. However, if it can be done rapidly, the parameterization can be modified whenever the algorithm does not use past-step information.

A first conclusion can be reached on optimal parameterization, and it is a non-local conclusion: in order to maximize η with respect to 𝒞(.), one should select the 𝒞(.) defined in (14), which, incidentally, simplifies (15) as, in our example, [M] has to be equal to the [N] introduced in the example of section 3.

Page 537: 5th Conference on Optimization Techniques Part I


6. Conclusion

A quality measure of the parameterization of a functional optimization problem has been introduced with the efficiency coefficient η. This notion will, hopefully, enable comparisons of parameterization techniques. A first conclusion has been reached on optimal parameterization, which leads to the selection of the proper metric (when it exists) in the reduced space of controls. Proofs have not been given and will await applications to practical examples for a more complete development.

Acknowledgment

May Mr C. Aumasson, from ONERA, find here the author's gratitude for his many discussions and criticisms on parameterization. The author is also indebted to Miss G. Mortier, from the ONERA Computer Center, for developing special features in SYCEC and for her help in the graphic project.

References

PART I

FAVE, J. - Critère de convergence par approximation de l'optimum pour la méthode du gradient, in Computing Methods in Optimization Problems, Springer-Verlag, 1969, p. 101-113 (Proceedings of the 2nd International Conference on Computational Methods and Optimization Problems, San Remo, Sept. 1968).

KELLEY, H.J. - Methods of gradients, in Optimization Techniques, G. Leitmann ed., Academic Press, 1962, p. 248-251.

STEPNIEWSKI, W.Z., KALMBACH, C.F. Jr. - Multivariable search and its application to aircraft design optimization. The Boeing Company, Vertol Division, 1969.

PART II

BRUSCH, R.G., SCHAPPELLE, R.H. - Solution of highly constrained optimal control problems using non-linear programming. AIAA paper 70-964 and AIAA Journal, vol. 11, no. 2, p. 135-136.

CULLUM, Jane - Finite dimensional approximations of state constrained continuous optimal control problems. SIAM J. Control, vol. 10, no. 4, Nov. 1972, p. 649-670.

JOHNSON, I.L., KAMM, J.L. - Near optimal shuttle trajectories using accelerated gradient methods. AAS/AIAA paper 328, Astrodynamics Specialists Conference, August 17-19, 1971, Fort Lauderdale, Florida.

KELLEY, H.J., DENHAM, W.F. - Modeling and adjoint for continuous systems. 2nd International Conference on Computing Methods in Optimization Problems, San Remo, Italy, 1968, and JOTA, vol. 3, no. 3, p. 174-183.

PIGOTT, B.A.M. - The solution of optimal control problems by function minimization methods. RAE Technical Report 71149, July 1971.

SPEYER, J.L., KELLEY, H.J., LEVINE, N., DENHAM, W.F. - Accelerated gradient projection technique with application to rocket trajectory optimization. Automatica, vol. 7, p. 37-43, 1971.


COORDINATION ALGORITHMS IN THE MIXED TWO-LEVEL OPTIMIZATION METHOD

G. GRATELOUP (Professor), A. TITLI (C.N.R.S. Research Associate), T. LEFEVRE (Research Engineer)

I - INTRODUCTION

To circumvent the theoretical and computational difficulties that arise when solving large-scale optimization problems, an effective means is certainly the introduction of two-level optimization methods, used notably in hierarchical control within two-level control structures.

For this task, which we shall take to be the static optimization of a set of interconnected subprocesses, one can use the notion of a "horizontal division of labour", giving rise to sub-problems solved locally, the local actions being coordinated by the upper control level so as to reach the global optimum.

Since each of the lower-level problems is defined by two functions (the subprocess model and the associated criterion), there are three possible modes of decomposition-coordination:

- through the criterion function
- through the model
- by acting on both functions.

This third mode, which is studied here, uses as coordination quantities the interconnection variables between subsystems and the associated Lagrange parameters.

In this paper, the local optimization sub-problems are defined and several coordination possibilities are proposed for the upper control level. In particular, gradient-type, Newton-type, direct-iteration, and mixed gradient/direct-iteration coordinators are examined.

The solution of a problem of optimal energy allocation in a hydroelectric production system allows some of these coordination algorithms to be compared.

Laboratoire d'Automatique et d'Analyse des Systèmes du C.N.R.S., B.P. 4036, 31055 TOULOUSE CEDEX, FRANCE


II - DECOMPOSITION IN THE MIXED METHOD

II.1 Problem statement ("separable" problem)

Suppose the complex process to be optimized is divided into N subsystems such as the one shown in Figure 1.

Figure 1: Subsystem no. i, with global inputs U_i, coupling inputs X_i, controls M_i, coupling outputs Z_i, and final outputs Y_i.

U_i, X_i, M_i, Z_i, Y_i are vectors with m_{U_i}, m_{X_i}, m_{M_i}, m_{Z_i}, m_{Y_i} components respectively.

For a given global input vector U, the subsystem is completely described in the static regime by the vector equations:

Z_i = T_i(M_i, X_i)   (1)

Y_i = S_i(M_i, X_i)   (2)

The interconnection between the subsystems is represented by:

X_i = H_i(Z_1 ... Z_i ... Z_N)   (3)

The objective function of the system is assumed to be given in "separable, additive" form:

F = Σ_{i=1}^{N} f_i(X_i, M_i)   (4)

The global problem is to maximize (4) subject to the equality constraints (1) and (3). To this optimization problem one can associate the Lagrangian:

L = Σ_{i=1}^{N} [ f_i(X_i, M_i) + β_i^T (X_i - H_i(Z_1 ... Z_N)) + λ_i^T (T_i - Z_i) ]   (5)

The optimal solution must necessarily satisfy the stationarity conditions of this Lagrangian, namely:

L_{X_i} = 0 = ∂f_i/∂X_i + (∂T_i/∂X_i)^T λ_i + β_i   (6)

L_{M_i} = 0 = ∂f_i/∂M_i + (∂T_i/∂M_i)^T λ_i   (7)

L_{Z_i} = 0 = -λ_i - Σ_{j=1}^{N} (∂H_j/∂Z_i)^T β_j   (8)

L_{λ_i} = 0 = T_i - Z_i   (9)

L_{β_i} = 0 = X_i - H_i(Z_1 ... Z_i ... Z_N)   (10)

II.2 Decomposition, formulation of the sub-problems (G. Grateloup, A. Titli)

To simplify the solution of large-scale problems, the treatment of these equations is distributed between two control levels, not arbitrarily, but so as to obtain a "separable" form of the equations at the lower level.

In the proposed method, such a distribution is obtained by choosing β and Z as coordination variables, i.e. as the variables transmitted for use at the first control level and modified at the upper level until the desired global solution is obtained.

For given β and Z, the Lagrangian then takes the following "separable" form:

L = Σ_{i=1}^{N} L_i ,  L_i = f_i(X_i, M_i) + β_i^T X_i - β_i^T H_i(Z_1 ... Z_i ... Z_N) + λ_i^T (T_i - Z_i)   (11)

Examination of L_i allows each sub-problem to be formulated in optimal-control terms; thus, sub-problem no. i reads:

max f_i(X_i, M_i) + β_i^T X_i - β_i^T H_i(Z_1 ... Z_i ... Z_N)   (12)
subject to T_i(X_i, M_i) - Z_i = 0, for β and Z given.

It is clear from (12) that both the criterion and the model are used for coordination.

Analytically, the solution of each sub-problem corresponds to the treatment of equations (6), (7), (9); the equations remaining to be solved at the second level are:

L_β(X, Z) = 0
L_Z(Z, λ, β) = 0   (13)

Equation (9), which must be compatible for given β and Z, imposes:

m_{X_i} + m_{M_i} ≥ m_{Z_i}   (14)

The information transfer required between control levels is shown in Figure 2.

Figure 2: Information transfer in the mixed method. The second-level coordinator sends (β, Z) to the first-level sub-problems no. 1 to N and receives X_i(β, Z), λ_i(β, Z) from them.
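To make sub-problem (12) concrete, here is a minimal Python sketch on an invented quadratic subsystem (the criterion f_i, the model T_i and all numerical values are hypothetical, not taken from the paper): for given coordination values (β_i, z_i), the local problem is solved in closed form, returning the local multiplier λ_i.

```python
# First-level sub-problem (12) of the mixed method, sketched on an
# invented quadratic example (all data hypothetical):
#   maximize  f_i(x, m) + beta_i * x   subject to  x + m = z_i,
# with model z_i = T_i(x, m) = x + m and criterion
# f_i(x, m) = -(x**2 + m**2)/2 + c_i * m.
# Stationarity of the local Lagrangian f_i + beta_i*x + lam*(x + m - z_i):
#   -x + beta_i + lam = 0,  -m + c_i + lam = 0,  x + m = z_i,
# which solves in closed form:

def solve_subproblem(beta_i, z_i, c_i):
    """Return (x_i, m_i, lambda_i) for coordination variables beta_i, z_i."""
    lam = (z_i - beta_i - c_i) / 2.0
    x = beta_i + lam
    m = c_i + lam
    return x, m, lam

if __name__ == "__main__":
    x, m, lam = solve_subproblem(beta_i=-0.2, z_i=0.4, c_i=1.0)
    print(x, m, lam)   # model constraint x + m = z_i holds by construction
```

The second-level coordinator only needs the returned x_i and λ_i to evaluate the residuals L_β and L_Z of (13).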

II.3 Decomposition of non-separable problems (A. Titli, T. Lefèvre, M. Richetin)

Under the "non-separability" hypothesis retained here, i.e. when the coupling between subsystems occurs not only through the classical interconnection equations between inputs and outputs, but also through the criterion functions, the optimization problem takes the following form:

max F = max Σ_{j=1}^{N} f_j(X'_j, M'_j, W)
subject to Z_i = T_i(X'_i, M'_i, W)
X'_i = H1_i(Z) ,  W_{X_i} = H2_i(Z) ,  i = 1 to N   (15)

where the vector W, formed from the components of X_i, M_i that produce this additional coupling, is made explicit.

After regrouping, if N' ≤ N, certain model and coupling equations, it is possible to write (15) in the form below:

max Σ_{j=1}^{N'} f_j(X'_j, M'_j, W)
subject to Z'_j = T'_j(X'_j, M'_j, W)
X'_j = H'_j(Z') ,  W_{X_j} = H''_j(Z') ,  j = 1 to N'   (16)

The decomposition of this global optimization problem can then be obtained:

- either by inserting W among the coordination variables Z and β,

- or by adding to the initial problem constraints equating the local copies of W, for j ∈ J_i, the set of indices of the sub-problems interacting with sub-problem i.

In this case, a corresponding term is added to the global Lagrangian, and the coordination variables can be Z, β, and the associated multipliers.

III - COORDINATION IN THE MIXED METHOD

III.1 Gradient-type coordinator

By analogy with the Arrow-Hurwicz method for seeking a saddle point, the following coordination algorithm can be used:

dβ/dt = -K L_β ,  dZ/dt = K L_Z   (17)

Several convergence studies of this coordinator have been carried out (A. Titli).
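A minimal Python sketch of this gradient-type coordination, in discrete time, on an invented two-subsystem quadratic example (f_i(x,m) = -(x²+m²)/2 + c_i·m, model z_i = x_i + m_i, coupling x_1 = z_2 and x_2 = z_1; all data hypothetical): the first level solves each sub-problem in closed form, the second level ascends in Z and descends in β.

```python
# Gradient-type coordinator (Arrow-Hurwicz analogy) on an invented
# two-subsystem example:
#   maximize sum_i -(x_i**2 + m_i**2)/2 + c_i*m_i
#   subject to z_i = x_i + m_i (model), x_1 = z_2, x_2 = z_1 (coupling).
# First level: for given (beta, z), each sub-problem
#   max f_i + beta_i*x_i  s.t.  x_i + m_i = z_i
# has a closed-form solution.  Second level: gradient steps
#   z_i <- z_i + K*L_z_i,  beta_i <- beta_i - K*L_beta_i,
# with L_beta_i = x_i - H_i(z) (coupling error) and, for this linear
# coupling, L_z_i = -lam_i - beta_j (from eq. (8)).

C = [1.0, -1.0]   # invented criterion slopes c_1, c_2

def subproblem(beta_i, z_i, c_i):
    lam = (z_i - beta_i - c_i) / 2.0
    return beta_i + lam, c_i + lam, lam   # x_i, m_i, lam_i

def coordinate(K=0.1, iters=2000):
    z, beta = [0.0, 0.0], [0.0, 0.0]
    for _ in range(iters):
        sols = [subproblem(beta[i], z[i], C[i]) for i in range(2)]
        new_z, new_beta = [], []
        for i in range(2):
            j = 1 - i
            x_i, _, lam_i = sols[i]
            new_beta.append(beta[i] - K * (x_i - z[j]))   # descend on beta
            new_z.append(z[i] + K * (-lam_i - beta[j]))   # ascend on z
        z, beta = new_z, new_beta
    return z, beta

if __name__ == "__main__":
    z, beta = coordinate()
    print(z, beta)   # tends to z = (0.4, -0.4), beta = (-0.2, 0.2)
```

For this strongly concave example the iteration contracts for small K; the limit satisfies both the coupling equations and the stationarity conditions (6)-(10).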

III.2 Newton-type coordinator

It is equally possible, to ensure coordination, to apply a Newton-Raphson algorithm to the solution of the pair of vector equations:

L_Z = 0 and L_β = 0

by writing, with W = (Z, β) and L_W = (L_Z, L_β):

dW/dt = - [∂L_W/∂W]^{-1} L_W   (18)

It can be shown (A. Titli) that if this algorithm is applicable, it is asymptotically stable.
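On a quadratic example the coordination equations L_Z = 0, L_β = 0 are affine, so a Newton-Raphson coordinator reaches the solution in a single step. The sketch below (pure Python; two invented subsystems with f_i(x,m) = -(x²+m²)/2 + c_i·m, model z_i = x_i + m_i, coupling x_1 = z_2, x_2 = z_1 — all hypothetical data) writes out the constant Jacobian of the residual map W = (z_1, z_2, β_1, β_2) by hand and performs one Newton step.

```python
# Newton-type coordinator on an invented quadratic two-subsystem example.
# For given (z, beta) the first level yields lam_i = (z_i - beta_i - c_i)/2
# and x_i = beta_i + lam_i; the second-level residuals are
#   L_z_i    = -lam_i - beta_j   (j = other subsystem)
#   L_beta_i = x_i - z_j
# which are affine in W = (z1, z2, b1, b2), so one Newton step
# W <- W - J^{-1} R(W) solves them exactly.

C = [1.0, -1.0]   # invented criterion slopes

def residual(w):
    z, beta = w[:2], w[2:]
    r = []
    for i in range(2):                # L_z_i
        lam = (z[i] - beta[i] - C[i]) / 2.0
        r.append(-lam - beta[1 - i])
    for i in range(2):                # L_beta_i = x_i - z_j
        lam = (z[i] - beta[i] - C[i]) / 2.0
        r.append(beta[i] + lam - z[1 - i])
    return r

# Constant Jacobian dR/dW, written out by hand from the formulas above.
J = [[-0.5,  0.0,  0.5, -1.0],
     [ 0.0, -0.5, -1.0,  0.5],
     [ 0.5, -1.0,  0.5,  0.0],
     [-1.0,  0.5,  0.0,  0.5]]

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (4x4 here)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

w = [0.0, 0.0, 0.0, 0.0]
step = solve(J, residual(w))
w = [w[k] - step[k] for k in range(4)]   # one Newton step suffices here
```

For nonlinear subsystems the Jacobian would have to be re-evaluated at each iteration, and the quoted stability result applies only where the Jacobian stays invertible.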

III.3 Direct-iteration coordinator

In the case of linear coupling (X_i = Σ_{j=1}^{N} C_ij Z_j), and if certain conditions on the component counts are satisfied (Σ_i m_{X_i} = Σ_i m_{Z_i}), it is possible to compute Z from L_β = 0 and β from L_Z = 0, thus implementing a direct-iteration method.

A general study of the convergence of this coordination mode has been carried out (T. Lefèvre).

III.4 Coordination mixte : iteration directe-gradient :

Darts ce genre de coordination, qui ne ngcessite pas ~XL=~z-~--

certaines gquations du niveau sup~rieur permettent une d~termination directe de

certaines variables de coordination (BI) , les autres variables (B 2) ~tant d~termin~es

par un algorithme de type gradient :

BI i+l = Fl(Ai) (19)

B2i+l = B2 l' +~ . K . F2(Ai) (20)

i :indice d'itgration, K > O, ~ = ! I suivant la nature des variables B 2 (variables

physiques ou param~tres de Lagrange).
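A sketch of updates (19)-(20) on an invented two-subsystem quadratic example (f_i(x,m) = -(x²+m²)/2 + c_i·m, model z_i = x_i + m_i, coupling x_1 = z_2, x_2 = z_1; all data hypothetical): here Z plays the role of B_1 and is obtained directly by solving L_Z = 0 at each iteration, while the multipliers β play the role of B_2 and follow a gradient step with α = -1.

```python
# Mixed direct-iteration/gradient coordinator (eqs. (19)-(20)) on an
# invented two-subsystem example:
#   maximize sum_i -(x_i**2 + m_i**2)/2 + c_i*m_i
#   s.t. z_i = x_i + m_i,  x_1 = z_2,  x_2 = z_1.
# B1 = z is determined directly from L_z = 0, which here gives
#   z_i = beta_i + c_i - 2*beta_j          (direct part, eq. (19));
# B2 = beta follows a gradient step on the coupling error
#   beta_i <- beta_i - K*(x_i - z_j)       (alpha = -1, eq. (20)).

C = [1.0, -1.0]   # invented criterion slopes

def coordinate(K=0.2, iters=100):
    beta = [0.0, 0.0]
    for _ in range(iters):
        # (19): direct determination of z from L_z = 0
        z = [beta[i] + C[i] - 2.0 * beta[1 - i] for i in range(2)]
        # first level: x_i maximizes f_i + beta_i*x_i s.t. x_i + m_i = z_i
        x = [(z[i] + beta[i] - C[i]) / 2.0 for i in range(2)]
        # (20): gradient step on beta, driven by the coupling error
        beta = [beta[i] - K * (x[i] - z[1 - i]) for i in range(2)]
    z = [beta[i] + C[i] - 2.0 * beta[1 - i] for i in range(2)]
    return z, beta

if __name__ == "__main__":
    z, beta = coordinate()
    print(z, beta)   # approaches z = (0.4, -0.4), beta = (-0.2, 0.2)
```

Eliminating the directly determined variables leaves a well-conditioned fixed-point map in β alone, which is why this scheme can converge much faster than a pure gradient coordinator on the slow variables.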

Remarque : Cette m~thode mixte peut ~tre g~n~ralis~e au cas des probl~mes d'optimisa-

tion statique avec eontrainte in~galit~ et d'optimisation dynamique (A. Titli).

IV - APPLICATION EXAMPLE: OPTIMAL ENERGY ALLOCATION IN A HYDROELECTRIC PRODUCTION NETWORK (T. Lefèvre)

IV.1 Formulation of the problem

We treat here a problem similar to the one addressed by MANTERA. Consider the series-parallel network described in Figure 3, composed of two rivers, each with two plants; the four plants supply their energy to a system of 5 loads, of which 2 are modulable ("conforming loads") and 3 are not ("non-conforming loads").

These plants are coupled to the load system by an electrical network that must supply a power P_D. Each plant is defined by a model (Fig. 4):

q_n^L = g_n^L(P_n^L) ,  L ∈ {1, 2} ,  n ∈ {1, 2}

The production losses PTG^L are given by:

PTG^L = Σ_{n=1}^{2} PTG_n^L

The line losses are of the form:

P_L = Σ_{i=1}^{4+3} Σ_{j=1}^{4+3} (P^i)^T B^{ij} P^j

The global problem is:

min [ P_L + PTG^1 + PTG^2 ]  over P_n^L ,  L ∈ {1, 2} ,  n ∈ {1, 2}

subject to the power balance P_L + P_D - P_T = 0, to the flow-rate constraints

q_2^1 - q_1^1 - Q_1^{11} - Q_1^{21} = 0
q_2^2 - q_1^2 - Q_1^{12} - Q_1^{22} = 0

and to the power constraints

0 ≤ P_n^L ≤ P_n^{L,max} ,  L ∈ {1, 2} ,  n ∈ {1, 2}

and has the associated Lagrangian (21).

To arrive at a decomposition of this problem into two sub-problems relating to the two rivers, it is necessary to decompose P_L, which can be done by introducing two pseudo-variables P'_1^2 and P'_2^2 and two additional equality constraints:

P_1^2 - P'_1^2 = 0
P_2^2 - P'_2^2 = 0

(Λ_1 and Λ_2 will be the Lagrange parameters associated with these constraints). P_L then takes the separable form P_L = P_{L1} + P_{L2}.

If, for a mixed method, we choose P'_1^2, P'_2^2, Λ_1, Λ_2 and the balance multiplier as coordination variables, the Lagrangian (21) takes a separable form L = L_1 + L_2, corresponding to two optimization problems at the lower level.

IV.2 Linear model

The numerical solution of this problem of optimal energy allocation in a hydroelectric network was carried out on an IBM 1130 computer working in single precision.

Lower level: two methods were implemented to solve the optimization problems of the two sub-problems:

- a Lagrangian method using a quadratic algorithm (Newton-Raphson);

- a penalty method using Davidon's algorithm.

Coordinator level: two algorithms were used to treat the coordination variables:

- a gradient-type algorithm

- a mixed algorithm (gradient-direct iteration).

Results: 1) Gradient algorithm

The convergence study of this algorithm shows that the optimal value of the iteration constant K is K* = 0.00065.

After about a hundred iterations, the total error is of the order of 0.85 and decreases very slowly (Figure 5), because of the pseudo-variables P'_1^2, P'_2^2 and the associated Lagrange parameters Λ_1, Λ_2, which vary extremely little at each iteration. To remedy this slow convergence, a mixed algorithm was used: gradient on the fast variables and direct iteration on the variables whose evolution is very slow. This method proves very effective, as can be seen from the results presented below.

2) Mixed algorithm

This algorithm converges in a minimum number of iterations for K* = 0.0004. The total error is defined as before.


Figure 6 shows the evolution of the error during convergence (for P_D = 10 MW).

The assumed linearity of the plant model greatly simplifies the solution of the local optimizations. But a model closer to reality (exponential or C¹-class model) can also be used.

IV.3 Exponential model

At the lower level, the two methods described previously are used; the penalty method, however, performs better.

At the upper level, only the mixed algorithm is retained, and the results obtained are again very good (cf. Figure 7, which shows the evolution of the error).

The use of this nonlinear, exponential-type model is more realistic and introduces no additional implementation difficulty. It even leads to a shorter convergence time.

IV.4 C¹-class model

At the lower level, only the penalty method using Davidon's algorithm was retained, because the methods based on a direct computation of the Hessian oscillate on this example.

The mixed algorithm again gives good results (cf. Figure 8).

V - CONCLUSION

In this paper, we have presented a mixed decomposition-coordination method for large-scale optimization problems and defined the tasks of each control level.

We have shown that the gradient-type coordinator, always applicable, satisfies stability conditions, and that the Newton-type coordinator is always convergent when it is applicable.

The conditions of use of a direct-iteration coordination have been identified. This coordination appears attractive for the treatment of non-separable problems.

The solution of a problem of optimal energy allocation in a hydroelectric production system (a highly non-separable and delicate problem) has allowed us to compare, from the applications standpoint, some of these coordinators. In particular, the effectiveness of the direct-iteration and mixed (direct iteration + gradient) coordination algorithms has been demonstrated.


Figure 3: Series-parallel network: two rivers, four plants (Centrales 1-4), and five loads coupled by an electrical network (distances of the order of 20 miles).

Figure 4: Characteristics of the plants: output versus power produced (0 to 100 MW).


Figure 5: Linear model, gradient algorithm. Evolution of the error (K = 6.5×10⁻⁴, P_D = 10 MW).

Figure 6: Linear model, mixed algorithm. Evolution of the error (P_D = 10 MW; K = 3×10⁻³, 4×10⁻³, 7×10⁻³).

Figure 7: Exponential model, mixed algorithm. Evolution of the error (P_D = 10 MW; K = 7×10⁻³, 13×10⁻³, 20×10⁻³).

Figure 8: C¹-class model, mixed algorithm. Evolution of the error (P_D = 10 MW; K = 5×10⁻³ and 10⁻²).


BIBLIOGRAPHY

ARROW, K.J., HURWICZ, L., UZAWA, H.: Studies in Linear and Non-Linear Programming. Stanford University Press, 1964.

GRATELOUP, G., TITLI, A.: Combined decomposition and coordination method in large dimension optimization problems. To appear in International Journal of Systems Science.

LEFEVRE, T.: Etude et mise en oeuvre des algorithmes de coordination dans les structures de commande hiérarchisée. Thèse de Docteur-Ingénieur, Université Paul Sabatier, Toulouse, December 1972.

MANTERA, I.G.M.: Optimum hydroelectric-power generation scheduling by analog computer. Proc. I.E.E., Vol. 118, no. 1, January 1971.

TITLI, A.: Contribution à l'étude des structures de commande hiérarchisées en vue de l'optimisation des processus complexes. Thèse de Doctorat ès Sciences Physiques, Université Paul Sabatier, Toulouse, June 1972.

TITLI, A., LEFEVRE, T., RICHETIN, M.: Multilevel optimization methods for non-separable problems and application. To appear in International Journal of Systems Science.


APPLICATIONS OF DECOMPOSITION AND MULTI-LEVEL TECHNIQUES TO THE OPTIMIZATION OF DISTRIBUTED PARAMETER SYSTEMS

Ph. CAMBON, L. LE LETTY

CERT/DERA, Complexe Aérospatial, TOULOUSE, FRANCE

ABSTRACT

The resolution of optimal control problems for systems described by partial differential equations leads, after complete (or semi-) discretization, to large-scale optimization problems on models described by difference (or differential) equations that are often not easy to solve directly.

The hierarchical multi-level approach seems to be well suited to a large class of synthesis problems for these complex systems. Three applications will be presented here:

- minimum energy problem for the heat equation with distributed input (this is the classical test example)

- minimization of a steel index for the parabolic equation with one-sided heating

- reheating furnace with a non-linear boundary condition (radiation)

INTRODUCTION

Solving an optimal control problem (or an identification problem) for systems described by partial differential equations leads, after complete discretization, to a large-scale optimization problem which is often difficult to solve directly in a global way.

The decomposition and multi-level hierarchical techniques with coordination seem well suited to the solution of a large class of these complex synthesis problems. Applications of these techniques will be made here to optimal control problems with the aid of two-level hierarchical structures.


I - GENERAL PRINCIPLES OF MULTI-LEVEL TECHNIQUES

Let us briefly present here the basic concepts and general principles of hierarchical techniques. Associated first with the name of M.D. MESAROVIC for the control aspect of hierarchical structures, and with L.S. LASDON and some other authors for the mathematical programming aspect of large-scale optimization problems, these techniques are now actively studied for complex dynamic systems. For distributed parameter systems, some applications have also been studied by D.A. WISMER and Y.Y. HAIMES.

The basic idea, for a "system-objective function" problem too complex to be solved directly in a global approach, consists in defining a number of sub-problems, by subsystems and subcriteria, sufficiently simple to be efficiently treated by classical methods and algorithms, and in coordinating the interconnected set by one or more higher levels.

The control structure (or optimization structure) then consists of units at several levels, giving a pyramidal hierarchical structure (Figure 1).

Figure 1: Pyramidal hierarchical structure: a k-th level controller coordinates second-level units, which in turn coordinate first-level units, each acting on a subsystem and sub-objective function of the global system and objective function.


This structure is called "several levels - several objective functions": the objective functions of the control units at a given level have different objectives, which may moreover be in conflict, due to the fact that the sub-problems are solved separately and independently at the lower level while being in fact interconnected. The aim of the higher-level controllers is then to coordinate the set of operations of the lower level in order to achieve the optimal solution of the whole problem at the top of the structure.

The classical structure is a two-level structure with a single unit at the second level. It is of course the simplest one, but it is sufficient for most of the problems encountered.

II - TWO-LEVEL HIERARCHICAL STRUCTURE, COORDINATION METHODS

Let us consider a system S which is decomposed into subsystems S_i, i = 1 to L. The subsystem S_i, with outputs Z_i, local inputs M_i and coupling inputs X_i from the other subsystems, is represented by its model equation:

Z_i = T_i(X_i, M_i) ;  Z_i ∈ R^{q_i} , X_i ∈ R^{p_i} , M_i ∈ R^{m_i}   [1]

The whole system is reconstructed by taking into account the coupling equations between the subsystems:

X_i = Σ_{j≠i} C_ij Z_j ,  i, j = 1 to L   [2]

REMARKS

1) We assume here that the coupling equations between the subsystems are linear. This is not necessary; non-linear equations X_i = C_i(Z) are equally possible, but in some coordination methods (such as the non-admissible method) a separable form is needed:

X_i = Σ_{j≠i} C_j(Z_j)

2) The condition j ≠ i (no internal coupling) is not necessary but will usually be the case.

Objective function

We assume that we are given an objective function in a separable form which is decomposable over the subsystems:

J = Σ_{i=1}^{L} J_i(X_i, M_i, Z_i)

which can be written (by [1]):

J = Σ_{i=1}^{L} J_i(X_i, M_i)   [3]

The global problem, which is: "minimize J under the constraints [1] and [2], i = 1 to L", leads to the determination of the saddle point of the Lagrangian L = L(X_i, M_i, Z_i, λ_i, p_i)_{i=1,L}:

L = Σ_{i=1}^{L} [ J_i(X_i, M_i) + λ_i^T (Z_i - T_i(X_i, M_i)) + p_i^T (X_i - Σ_{j≠i} C_ij Z_j) ]   [4]

We assume that J_i and T_i are continuous and have continuous derivatives with respect to the variables. Then the stationarity equations are:

L_{X_i} = ∂J_i/∂X_i - (∂T_i/∂X_i)^T λ_i + p_i = 0   [5]

L_{M_i} = ∂J_i/∂M_i - (∂T_i/∂M_i)^T λ_i = 0   [6]

L_{Z_i} = λ_i - Σ_{j≠i} C_ji^T p_j = 0   [7]

L_{λ_i} = Z_i - T_i(X_i, M_i) = 0   [8]

L_{p_i} = X_i - Σ_{j≠i} C_ij Z_j = 0   [9]

for i = 1, L.

The application of the principles of two-level hierarchical techniques consists here in splitting the treatment of these stationarity conditions into two levels, in such a way that the first part corresponds, at the first level, to a set of separated optimization problems, and that the second level realizes the coordination, which is here the resolution of the remaining equations.

The second-level controller will be of iterative type, using the first-level information after the sub-problems have been solved, which is done either by an iterative scheme as well or by a direct method, depending on the problem.

Three now well-known methods will be used.

1) "Admissible" method

(or coordination by the Z_i, also called coordination by the model). The Z_i, i = 1,L, are given to the first level by the second-level controller.

At the first level, we solve L_{X_i} = L_{M_i} = L_{λ_i} = L_{p_i} = 0, giving:

X_i = X_i(Z) , M_i = M_i(Z) , λ_i = λ_i(Z) , p_i = p_i(Z) ,  i = 1 to L

This first part of the equations represents the sub-problems:

Min J_i(X_i, M_i)
with Z_i = T_i(X_i, M_i) ,  X_i = Σ_{j≠i} C_ij Z_j ,  Z given.

The local model and coupling equations are satisfied, justifying the name "admissible". The optimal Z remains to be found (coordination by the model): then the combination of the solutions of the sub-problems will achieve the optimal solution of the global problem, with L = Σ_i L_i and J = Σ_i J_i.

At the second level, the equations L_{Z_i} = 0 are then solved by an iterative scheme, for example a steepest-descent method:

Z_i^+ = Z_i - k L_{Z_i} ,  i = 1, L

or a Newton-Raphson method, or some other optimization algorithm, with:

L_{Z_i} = λ_i - Σ_{j≠i} C_ji^T p_j

which is calculated from the values of the variables given by the first level.

The algorithm operates until ||L_Z|| ≤ ε. This coordination method is applicable when:

Dimension(M_i) ≥ Dimension(Z_i)

a condition which limits its applicability.
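A sketch of the admissible method on an invented two-subsystem example (minimize Σ (x_i² + m_i²)/2 - c_i·m_i with model z_i = x_i + m_i and coupling x_1 = z_2, x_2 = z_1; all data hypothetical). Here Dim(M_i) = Dim(Z_i), so the method applies: for fixed Z the coupling and model equations fix x_i and m_i outright, the first level returns the multipliers, and the second level does steepest descent on Z.

```python
# "Admissible" method (coordination by the model) on an invented
# two-subsystem example:
#   minimize sum_i (x_i**2 + m_i**2)/2 - c_i*m_i
#   s.t. z_i = x_i + m_i (model),  x_1 = z_2,  x_2 = z_1 (coupling).
# For fixed Z the first level is fully determined: x_i from the
# coupling, m_i from the model, then
#   lam_i = m_i - c_i    (from L_M_i = 0)
#   p_i   = lam_i - x_i  (from L_X_i = 0).
# Second level: steepest descent  z_i <- z_i - k*L_z_i  with
#   L_z_i = lam_i - p_j   (linear coupling, j = other subsystem).

C = [1.0, -1.0]   # invented criterion slopes

def first_level(z):
    x = [z[1], z[0]]                          # coupling satisfied
    m = [z[i] - x[i] for i in range(2)]       # model satisfied
    lam = [m[i] - C[i] for i in range(2)]
    p = [lam[i] - x[i] for i in range(2)]
    return lam, p

def coordinate(k=0.2, iters=100):
    z = [0.0, 0.0]
    for _ in range(iters):
        lam, p = first_level(z)
        z = [z[i] - k * (lam[i] - p[1 - i]) for i in range(2)]
    return z

if __name__ == "__main__":
    print(coordinate())   # approaches z = (0.4, -0.4)
```

Every iterate is feasible for the model and coupling equations, which is the practical appeal of this method: stopping early still yields an admissible (if suboptimal) operating point.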

2) "Non-admissible" method

(coordination by the p_i, also called coordination by the objective function).

First level: the p_i are fixed. Solving L_{X_i} = L_{M_i} = L_{Z_i} = L_{λ_i} = 0 gives:

X_i = X_i(p) , M_i = M_i(p) , Z_i = Z_i(p) , λ_i = λ_i(p)

Second level: determination of new values of p by the algorithm

p_i^+ = p_i + k L_{p_i} ,  i = 1, L

where L_{p_i} = X_i - Σ_{j≠i} C_ij Z_j.

In this method, when the optimal solution is not yet obtained, we have L_{p_i} ≠ 0: the coupling equations are not satisfied, which justifies the name "non-admissible".

The global Lagrangian can be written:

L = Σ_i [ J_i + p_i^T X_i - Σ_{j≠i} Z_i^T C_ji^T p_j + λ_i^T (Z_i - T_i) ]

and is of a separable form for the variables considered at the first level. The corresponding optimization sub-problems are:

Min [ J_i + p_i^T X_i - Σ_{j≠i} Z_i^T C_ji^T p_j ]
with Z_i = T_i(X_i, M_i)

This method does not imply any dimensionality condition on the variables in the general case where the model equation is non-linear in X_i and M_i (second-order terms are however needed, excluding non-linearities of the form |X_ij| or |M_ij|). In the case where linearity occurs in X_i and M_i, the objective function J_i needs to include non-linear terms in X_i and M_i up to the second order at least. In practice, it is necessary to examine the compatibility of the equations or to reformulate the coordination method.

3) Mixed method

(or coordination by the Z and the p).

At the first level (Z and p fixed), solving L_{X_i} = L_{M_i} = L_{λ_i} = 0 gives:

X_i = X_i(Z, p) , M_i = M_i(Z, p) , λ_i = λ_i(Z, p)

At the second level:

Z_i^+ = Z_i - k_Z L_{Z_i} ,  i = 1, L
p_i^+ = p_i + k_p L_{p_i}

In the case where Σ_i Dimension(X_i) = Σ_i Dimension(Z_i), a non-iterative direct method can be used to solve L_{p_i} = 0 for the Z and L_{Z_i} = 0 for the p, from the values of the variables given by the first level.

III - APPLICATIONS. RELAXATION SCHEMES

We will give applications of the different coordination methods to optimal control problems arising from systems described by partial differential equations.

In order to save memory on the computer and also to gain convergence speed, we have been led to avoid the application of the multi-level approach in its usual conception, by using relaxation schemes for the resolution of the first-level equations. We then use, in these equations, the "new" values of the coordination variables Z_i and p_i for the next first-level sub-problem (S_{i+1}, J_{i+1}).

IV - FIRST PROBLEM: OPTIMAL DISTRIBUTED INPUT FOR THE HEAT EQUATION WITH MINIMUM ENERGY

This is the classical test problem, whose solution is well known and could be obtained by easier and faster ways (either using the adjoint equation or approximating the problem on a truncated basis of the eigenfunctions of the operator ∂²/∂x²). It is also the example given by D.A. WISMER in a multi-level approach using the maximum principle after semi-discretization and decomposition.

1) The problem and discretization

∂y/∂t = a ∂²y/∂x² + u(x,t) in Q ,  x ∈ [0,1] , t ∈ [0,T]

y(x,0) = y_0(x) in Ω

y(0,t) = y(1,t) = 0 on Σ

Final condition: y(x,T) = y_d(x)

Cost function: J(u) = ||u||²_{L²(Q)}

After discretization: t_i (i = 1,N) ; x_j (j = 1,M), we have:

(y_ij - y_{i-1,j})/Δt = q (y_{i,j+1} - 2 y_ij + y_{i,j-1}) + u_ij ,  i = 2,N ; j = 2,M-1

where q = a/(Δx)²

y_{1j} = y_{0j} : initial condition
y_{i1} = y_{iM} = 0 : boundary conditions
y_{Nj} = y_{dj} : final desired condition
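The implicit scheme is diagonally dominant (diagonal 1/Δt + 2q against off-diagonal weight 2q), so each new time level can be obtained by simple fixed-point (Jacobi) sweeps over the nodes, which is essentially what the node-by-node treatment below exploits. A minimal Python sketch (grid sizes, diffusivity and initial data are invented for illustration):

```python
# Implicit discretization of y_t = a*y_xx + u with y = 0 at x = 0, 1:
#   (y[i][j] - y[i-1][j])/dt = q*(y[i][j+1] - 2*y[i][j] + y[i][j-1]) + u[j]
# with q = a/dx**2.  For each time level the linear system is solved by
# Jacobi sweeps: the diagonal 1/dt + 2*q dominates the off-diagonal
# terms, so the iteration converges.  All data below is invented.

M, N = 21, 11            # space nodes, time levels (illustrative)
a, T = 1.0, 0.01
dx, dt = 1.0 / (M - 1), T / (N - 1)
q = a / dx**2

def step(y_prev, u, sweeps=200):
    """Advance one time level of the implicit scheme by Jacobi sweeps."""
    y = y_prev[:]                      # initial guess
    diag = 1.0 / dt + 2.0 * q
    for _ in range(sweeps):
        y = [0.0] + [
            (y_prev[j] / dt + q * (y[j + 1] + y[j - 1]) + u[j]) / diag
            for j in range(1, M - 1)
        ] + [0.0]
    return y

# invented initial profile and zero control for the demo
y = [x * (1.0 - x) for x in (j * dx for j in range(M))]
u = [0.0] * M
for _ in range(N - 1):
    y = step(y, u)       # with u = 0 the profile simply diffuses away
```

With u = 0 the profile decays under the homogeneous boundary conditions; in the optimal control problem the u_ij become the local inputs M_ij of the decomposition that follows.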

2) Decomposition and coordination. First solution

A first decomposition is to consider each node (i,j) of the discretization grid as a subsystem S_ij, with:

Z_ij = y_ij
X_ij = y_{i-1,j}/Δt + q (y_{i,j+1} + y_{i,j-1})
M_ij = u_ij

Then we have:

Z_ij = (X_ij + u_ij)/m ,  m = 1/Δt + 2q
X_ij = Z_{i-1,j}/Δt + q (Z_{i,j+1} + Z_{i,j-1})
i = 2,N ; j = 2,M-1

The objective function is:

J(u) = Σ_{i=1}^{N} Σ_{j=1}^{M} u²_ij Δx Δt

equivalent to:

J(u) = Σ_ij u²_ij


The global lagrangion is written : N MMI F }

L = ~ ~ I u~J + ~iJ [Z i j [X i j * uij)/~] + PiJ [XL j -Z i - I /A t -~ [Z i ' j + I+Z i ' j - 1 ) ]~ i=2 j=2

M-1 FZ _y .] +~ z ~Nj ~ Nj dj uij on the boundaries i=I,

j=2 j= l , j=M

The initial and boundary conditions are taken into account in the coupling constraints by their particular values, respectively #or i = I and j = I and M.

The f

bxij

Luij < Lzi j

LpLj

Lplj

stationority conditions for 1 < i < N are :

-~ij / ~ + Oi° ' j

2 u , - /~ iJ ~ij

Pij - Pi+1,j/At - ~ (Pi,j-1

Zij - [Xij + uij] /

×ij - Zi-l,j / Z~t - c [Zi,j+ I + Zi,j_ I]

+ Oi, j+t )

The admissible method can be used as Dim. Mij = Dim, Zij~ We have then :

At the first level, after immediate direct resolution:

X_ij = Z_{i−1,j}/Δt + q (Z_{i,j+1} + Z_{i,j−1})
u_ij = m Z_ij − X_ij = (Z_ij − Z_{i−1,j})/Δt − q (Z_{i,j+1} − 2 Z_ij + Z_{i,j−1})
β_ij = 2 m u_ij
p_ij = β_ij/m = 2 u_ij

At the second level:

Z_ij⁺ = Z_ij − K L_Zij

where:

L_Zij = (2/Δt) (u_ij − u_{i+1,j}) − 2q (u_{i,j+1} − 2 u_ij + u_{i,j−1})

with u_ij = (Z_ij − Z_{i−1,j})/Δt − q (Z_{i,j+1} − 2 Z_ij + Z_{i,j−1})

It remains to treat:
- for i = N, the supplementary condition L_λNj = Z_Nj − Y_dj = 0, by λ_Nj⁺ = λ_Nj + ρ (Z_Nj − Y_dj);
- the terms u_ij² on the boundaries i = 1, j = 1, j = M, where we obviously have u_ij = 0.

We then have (N−1)(M−2) coupled subsystems. Several examples have been run with different step sizes: N = 31, 61 ; M = 41, 61. For N = 31, M = 41, we have 1170 subsystems and 2370 variables (state variables and inputs), of which 1170 are coordination variables.
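A minimal sketch (ours, not the authors' program) of the two levels for this first decomposition: the first level computes u from the coordination variables Z, and the second level applies the relaxation Z⁺ = Z − K L_Z. Function names, the gain K, and the grid are illustrative assumptions.

```python
# Sketch (ours) of the admissible-method relaxation on a small grid:
# first level gives u explicitly from Z, second level moves Z by -K*L_Z.

def control_from_Z(Z, dt, q):
    """First level: u[i][j] = (Z[i][j]-Z[i-1][j])/dt
       - q*(Z[i][j+1] - 2*Z[i][j] + Z[i][j-1])."""
    N, M = len(Z), len(Z[0])
    u = [[0.0] * M for _ in range(N)]
    for i in range(1, N):
        for j in range(1, M - 1):
            u[i][j] = (Z[i][j] - Z[i - 1][j]) / dt \
                - q * (Z[i][j + 1] - 2 * Z[i][j] + Z[i][j - 1])
    return u

def relax_Z(Z, dt, q, K):
    """Second level: Z+ = Z - K*L_Z with
       L_Z = (2/dt)*(u[i][j]-u[i+1][j]) - 2q*(u[i][j+1]-2u[i][j]+u[i][j-1])."""
    N, M = len(Z), len(Z[0])
    u = control_from_Z(Z, dt, q)
    Znew = [row[:] for row in Z]
    for i in range(1, N - 1):
        for j in range(1, M - 1):
            LZ = (2.0 / dt) * (u[i][j] - u[i + 1][j]) \
                - 2.0 * q * (u[i][j + 1] - 2 * u[i][j] + u[i][j - 1])
            Znew[i][j] = Z[i][j] - K * LZ
    return Znew
```

Note that any Z for which u vanishes everywhere is a fixed point of the relaxation, as expected for a gradient-type scheme on J = Σ u².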

In the relaxation scheme for the admissible method, a new Z_ij is calculated at each node (i,j) ∈ { i = 2,N ; j = 2,M−1 }, and the Z_ij⁺ already calculated are used in L_Zij.


Note here that we can solve for the Z_ij from L_Zij = 0. The memory requirements are only: the array Z_ij (i = 1,N ; j = 1,M) and the vector λ_Nj (j = 2,M−1).

The problem has been solved on an IBM 360-44 computer for different step sizes (cf. tables). For N = 31 and M = 41, 125 iterations are needed for a 10⁻³ relative precision on sup_{i,j} |L_Zij| and sup_j |L_λNj|, which is too long.

3) Second solution. New decomposition and mixed second-level controller

A more interesting decomposition is to consider each column of the discretization along the time axis as a subsystem, with:

Z_i = {Y_ij , j = 2,M−1}

M_i = {u_ij , j = 2,M−1}

Then, for the vectors Z_i, u_i, X_i we now have the same model and coupling equations as in the first solution:

X_i = Z_{i−1}/Δt + q (Z_{i,j+1} + Z_{i,j−1})   (= A Z)

The mixed coordination method gives:

At the first level, the stationarity conditions in u_i, X_i, β_i give the explicit solution:

β_i = m p_i
u_i = p_i / 2
X_i = m Z_i − u_i

At the second level, we have:

i = 2,N−1 :  L_Zij = m p_ij − p_{i+1,j}/Δt − q (p_{i,j+1} + p_{i,j−1}) = 0 ,  j = 2,M−1
             L_pij = m Z_ij − Z_{i−1,j}/Δt − q (Z_{i,j−1} + Z_{i,j+1}) − u_ij = 0
i = N :      L_ZNj = m p_Nj/2 + λ_Nj − q (p_{N,j+1} + p_{N,j−1}) = 0

Then a direct solution is obtained by a forward sweep on the S_i from S_1 to S_N, with λ_i = 0, giving the Z_i from L_pi = 0, and a backward sweep from S_N to S_2, with λ_Nj given by L_ZNj, giving the p_i from L_Zi = 0.

The iterative scheme is then on λ_Nj:

λ_Nj⁺ = λ_Nj + ρ (Z_Nj − Y_dj)
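The coordination iteration on λ_Nj can be illustrated on a scalar toy model (ours, not the authors' computation), replacing the forward-backward sweep by an assumed linear response Z_N(λ) = z0 − c·λ; the gain ρ and all numerical values are illustrative.

```python
# Toy illustration (ours) of the second-level iteration
#   lambda+ = lambda + rho * (Z_N(lambda) - Y_d).
# The forward-backward sweep is stood in for by a linear response
# z_N(lam) = z0 - c*lam; the iteration converges when 0 < rho*c < 2.

def coordinate(z0, c, y_d, rho, iters=200):
    lam = 0.0
    for _ in range(iters):
        z_N = z0 - c * lam          # stands in for the two sweeps
        lam = lam + rho * (z_N - y_d)
    return lam, z0 - c * lam
```

At the fixed point the final condition Z_N = Y_d is met exactly, which is the role of the multiplier λ_Nj in the mixed method.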



4) Results

Several cases have been solved, some of which are shown in the tables. The initial condition y_0(x) is a symmetric triangle about x = 0.5, and the final condition y_d(x) is a symmetric polynomial of order 4 in x. The advantages of the relaxation scheme over the usual conception can be briefly described by the following:

1) Gain in memory requirements: NM variables instead of 2NM for the admissible method; N + 2M variables instead of 2NM for the mixed method.

2) Gain in computing time by a factor of 2 for both cases in this example (the mixed method seems to be superior in most cases).

V - STEEL INDEX OPTIMIZATION WITH ONE-SIDED HEATING AND INPUT CONSTRAINT

1) The problem

∂y/∂t = a ∂²y/∂x²
y(x,0) = y_0 = cst
∂y/∂x |_{x=0} = 0
∂y/∂x |_{x=L} = u(t)

Cost function:

J(u) = ∫_0^T R(t) (u(t) − ȳ)² dt

where:

R(t) = 0 if u(t) ≤ ȳ
R(t) = 1 if u(t) > ȳ

x ∈ [0,L] , t ∈ [0,T]

Final conditions: y(0,T) = TM , y(L,T) = TS

This is the heating of a slab with a decarburization performance index. An oxidation criterion of the same nature could be used in the same way. The quantity

P_d = λ ∫_0^T R(t) (u(t) − ȳ)² dt

represents the decarburization depth.

The numerical values are :

ȳ = 850°C , uM = 1250°C , TM = 1150°C , TS = 1200°C , T = 6000 s

2) Discretization. Decomposition and coordination

Only the previous second decomposition will be considered. We have:

S_i :  Z_i = {Y_ij , j = 1,M} ,  M_i = {u_i}

with:

m X_ij − τ (X_{i,j+1} + X_{i,j−1}) = Z_{i−1,j} ,  j = 2,M−1
m X_i1 − 2τ X_i2 = Z_{i−1,1} ,  j = 1

τ = a Δt/(Δx)² ,  m = 1 + 2τ

With f_i = R_i (X_iM − ȳ)² Δt + Penal(u_i) Δt, the Lagrangian is:

L = Σ_{i=2}^{N} { f_i + β_i^T [Z_i − X_i] + p_i^T [A X_i − Z_{i−1}] }
  + λ_1 (X_N1 − TM)² Δt + λ_M (X_NM − TS)


where:

Penal(u_i) = 0 if u_i ≤ uM
Penal(u_i) = (u_i − uM)² if u_i > uM

We penalized the input constraint and the final condition X_N1 = TM.

The variables are: X_iM , {X_ij , Z_ij ; j = 1,M−1} , p_ij (j = 1,M) , β_ij (j = 2,M−1). The mixed method has been applied for i < N, j = 1,M−1 and i = N, j = 2,M−1, and the non-admissible method has been applied for i = N, j = 1.

First level:

i < N :
  L_XiM = −p_iM − τ p_{i,M−1} + ∂f_i/∂X_iM = 0
  L_Xi,M−1 = −p_{i,M−1} + m β_{i,M−1} − τ p_{i,M−2} = 0
  L_βij = Z_ij − X_ij = 0
  L_Xij = −β_ij + m p_ij − τ (p_{i,j+1} + p_{i,j−1}) = 0 ,  j = 3,M−2
  L_Xi1 = −β_i1 + m p_i1 − τ p_i2 = 0
  which gives X, Z

i = N :
  L_XN1 = m p_N1 − τ p_N2 + λ_1 (X_N1 − TM) = 0
  L_XNM = −p_NM + ∂f_N/∂X_NM + λ_M = 0
  which gives X_N1 and X_NM

Second level:

i < N :
  L_ZiM = β_iM
  L_Zij = β_ij − p_{i+1,j} ,  j = 1,M−1
  L_pij = m X_ij − τ (X_{i,j+1} + X_{i,j−1}) − Z_{i−1,j}
  L_pi1 = m X_i1 − 2τ X_i2 − Z_{i−1,1}

i = N :
  L_ZNj = β_Nj ,  j = 1,M
  L_λM = X_NM − TS

The solution (with λ_1 and λ_M fixed) is then as follows:

1) With λ_M and p_N1 fixed, X and Z are given by a direct forward-backward procedure.

2) Then:

p_N1⁺ = p_N1 + ρ L_pN1
λ_M⁺ = λ_M + ρ (X_NM − TS)

3) Results

An example is given in Table II (temperature at the surface and at x = 0, input u(t)). The results are not sensitive to the step sizes. A comparison is made with a theoretical solution obtained after Laplace transformation, transforming the problem into the minimization of a time-integral criterion with an integral constraint.


VI - NONLINEAR CASE. REHEATING FURNACE WITH A RADIATION BOUNDARY CONDITION

1) The problem. Decomposition

∂y/∂t = a ∂²y/∂x²
y(x,0) = y_0(x)
∂y/∂x |_{x=0} = 0
∂y/∂x |_{x=L} = λ_r [(u(t) + 273)⁴ − (y(L,t) + 273)⁴]

Final conditions: y(0,T) = TM , y(L,T) = TS
Constraint: u(t) ≤ uM = 1400°C

J(u) = ∫_0^T R(t) (y(L,t) − ȳ)² dt ,  R(t) = 0 if y(L,t) ≤ ȳ ; R(t) = 1 if y(L,t) > ȳ

The same decomposition gives analogous model and coupling equations. The Lagrangian is:

L = Σ_{i=2}^{N} { f_i + μ_i [X_iM − X_{i,M−1} − λ_r Δx ((u_i + 273)⁴ − (X_iM + 273)⁴)] }
  + Σ_{i=2}^{N} Σ_{j=2}^{M−1} p_ij [m X_ij − τ (X_{i,j+1} + X_{i,j−1}) − Z_{i−1,j}]
  + Σ_{i=2}^{N} p_i1 [m X_i1 − 2τ X_i2 − Z_{i−1,1}]
  + Σ_{i=2}^{N} Σ_{j=2}^{M−1} β_ij [Z_ij − X_ij]
  + λ_1 (X_N1 − TM)² + λ_M (X_NM − TS)

2) Coordination by the mixed method

The mixed method is used for i < N, j = 1,M−1 and i = N, j = 2,M−1.

The main difference is that here, after the first-level resolution, we have to treat at the second level the terms:

L_ui = 2 Penal'(u_i) Δt − 4 μ_i λ_r Δx (u_i + 273)³
L_XiM = −τ p_{i,M−1} + μ_i [1 + 4 λ_r Δx (Z_iM + 273)³]
L_μi = Z_iM − Z_{i,M−1} − λ_r Δx [(u_i + 273)⁴ − (Z_iM + 273)⁴]

The following procedure is used:

1) If u_i ≥ uM : we take u_i = uM; Z_iM is obtained from L_μi = 0 (nonlinear equation); μ_i is obtained from L_XiM = 0; p_i and Z_i are obtained by the same forward-backward procedure as in the previous example.

2) If u_i < uM : Z_iM is obtained from L_XiM = 0 (nonlinear equation), p_{i,M−1} being known, after obtaining Penal'(u_i) Δt = 0 from L_ui = 0 with μ_i = 4 λ_r Δx (u_i + 273)³; u_i is then obtained from L_μi = 0 (nonlinear equation).

3) If the u_i thus calculated is ≥ uM, we go again to 1). If u_i < uM, go to 2).
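Each "nonlinear equation" step above amounts to a scalar root-finding problem in the radiation boundary relation. A hedged sketch (ours, not the authors' code) applies Newton's method to a representative equation g(z) = z − z_inner − λ_r Δx [(u + 273)⁴ − (z + 273)⁴] = 0; the function name, the product λ_r Δx and the test values are illustrative assumptions.

```python
# Sketch (ours): solve the nonlinear radiation-boundary relation
#   g(z) = z - z_inner - lam_dx*((u + 273)**4 - (z + 273)**4) = 0
# for the surface temperature z by Newton's method.

def surface_temperature(z_inner, u, lam_dx, tol=1e-10, max_iter=50):
    z = z_inner                      # reasonable starting guess
    for _ in range(max_iter):
        g = z - z_inner - lam_dx * ((u + 273.0) ** 4 - (z + 273.0) ** 4)
        dg = 1.0 + 4.0 * lam_dx * (z + 273.0) ** 3   # g'(z)
        z_new = z - g / dg
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

Since g'(z) > 0 for physical temperatures, the equation has a unique root and Newton's iteration converges rapidly from the inner-node value.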

The stop criterion is on L_pN1.


The results (shown in Figures III) have been obtained on an IBM 360-44 computer. With N = 61, M = 11, and a satisfactory initialization of p_N1, the execution time for the nonlinear case was 7 mn 24 s, versus 3 mn 7 s for the linear case. Nonlinearity of course increases the number of iterations and the computing time, but it is difficult to say to what accuracy the nonlinear equations at the second level should be solved during the evolution of the iterative procedure for the whole problem. The results are also little sensitive to step-size variations.

VII - CONCLUSION

The application of multi-level hierarchical techniques seems very interesting for solving optimal control problems for systems described by partial differential equations. The difficulties in optimization problems related to the large dimensionality of the systems obtained after discretization of partial differential equations make this approach promising (cf. also D.A. WISMER). Moreover, this approach allows one to attack in the same way problems with constraints, performance indexes different from the usual quadratic functionals, and also nonlinear problems, which will often be the case in practical situations.

Here the applications have been carried out for optimization on a digital computer. It would be very interesting to approach these problems with a view to hybrid computation or to parallel computing. This will introduce different techniques of decomposition and coordination.

BIBLIOGRAPHY

[1] R. KULIKOWSKI, "Optimization of large scale systems", IFAC Congress, Warsaw, 1969.

[2] D.A. WISMER, "An efficient computational procedure for the optimization of a class of distributed parameter systems", Journal of Basic Engineering, June 1969.

[3] Ph. CAMBON, J.P. CHRETIEN, L. LE LETTY, A. LE POURHIET, "Commande et optimisation des systèmes à paramètres répartis", Convention D.R.M.E. n° 70 34 166, Lot n° 4, C.E.R.T. - D.E.R.A.

[4] Ph. CAMBON, "Application du calcul hiérarchisé à la commande optimale de systèmes régis par des équations aux dérivées partielles", Thèse de Docteur-Ingénieur, Université de Toulouse, July 1972.

[5] A. TITLI, "Optimisation de processus complexes" (course given at INSA de Toulouse, 1971).

[6] R. KISSEL, G. GOSSE, M. GAUVRIT, D. VOLPERT, J.F. LE MAITRE, "Identification et optimisation des fours", Convention D.G.R.S.T. n° 70 7 2507, Société HEURTEY - D.E.R.A.

[7] A. FOSSARD, M. CLIQUE, Mme N. IMBERT, "Aperçus sur la commande hiérarchisée", Revue RAIRO - Automatique n° 3, 1972.

[8] Y.Y. HAIMES, "Decomposition and multilevel approach in the modeling and management of water resources systems", NATO Advanced Study Institute on Decomposition as a Tool for Solving Large Scale Problems, Cambridge, July 1972.


[Planches I — first example, distributed control, heat equation. Fig. I.1: evolution of the temperature θ(x,t) at several instants between t = 0 and t = T, admissible method (precision 10⁻³). Fig. I.1 bis: evolution of the control u(x,t). Influence of the time step: N = 31, M = 41 vs. N = 61, M = 41.]


[Planches II — second example, metallurgical criterion, simple conduction. Fig. II.1: resolution by the theoretical approach (theoretical furnace temperature and surface temperature; ȳ = 850°C, TS = 1200°C, TM = 1150°C; comparison of "simple conduction" reheating and reheating by radiation, 1400°C).]

[Planches III — third example, reheating furnace, radiation (ȳ = 850°C, TS = 1200°C, TM = 1150°C, uM = 1400°C, N = 61, M = 11). Fig. III.1: heating by radiation, mixed method. Influence of the time step: N = 121, M = 11 vs. N = 61, M = 11.]


ATTEMPT TO SOLVE A COMBINATORIAL PROBLEM IN THE CONTINUUM BY A METHOD OF EXTENSION-REDUCTION

Emilio Spedicato, Giorgio Tagliabue

CISE, Segrate, Milano, Italy

ABSTRACT

Combinatorial optimization problems in n variables are formulated as nonlinear programming problems in (n−1)² variables and n² constraints. Methods for solving the large unconstrained optimization problem generated are considered, with emphasis on conjugate-gradient algorithms based on the homogeneous model. The quadratic assignment problem is considered as an application example and results from the nonlinear programming approach are discussed.

GENERALITIES

Let P be an n × n permutation matrix and f = f(P) a functional of P. The functional f is supposed to describe some property of a discrete model consisting of n elements which may be interchanged. The matrix P describes interchanges, and a matrix P* is sought which minimizes f. In principle, P* can be determined by enumeration; in practice such a procedure is not feasible, even for relatively small values of n. Optimal and suboptimal techniques using only a subset of the n! permutation matrices exist for estimating the minimum of f when f has special structures; the main feature of such techniques is that they operate in the discrete space of the permutation matrices. It is possible, however, to operate in the continuum in the following way: we formally define f in R^(n²) (extension), replacing P in its definition by a general n × n matrix X; we introduce a set of constraints (reduction) which force X to be a permutation matrix; we give an arbitrary (or in some fashion sensibly selected) starting matrix X_0 and minimize f subject to the constraints. Every solution must be a permutation matrix and a local minimum for the continuous problem; if the local minimum is also a global minimum, then it is an optimal solution (generally not unique!) to the original combinatorial problem. If it is only a local, non-global minimum, then it may or may not be a suboptimal solution.

A set of constraints defining a permutation matrix is the following:


(1) x_ij (x_ij − 1) = 0 ,  i,j = 1, 2, …, n

(2) Σ_{i=1}^{n} x_ij = 1 ,  j = 1, 2, …, n−1

(3) Σ_{j=1}^{n} x_ij = 1 ,  i = 1, 2, …, n−1

(4) Σ_{i,j=1}^{n} x_ij = n

Conditions (1-4) are necessary and sufficient for X to be a permutation matrix; see the Appendix for more about them.

System (1-4) consists of n² + 2n − 1 equations, and the 2n − 1 linear equations may be used to eliminate 2n − 1 components of X. Therefore every combinatorial optimization problem which is a function of a permutation matrix can be expressed in the continuum by a nonlinear programming problem in (n−1)² variables and n² constraints.

In realistic problems of plant layout formulated as quadratic assignment, n may be more than one hundred, and the associated nonlinear programming problem is a very large one. In order to deal with it through the penalty function approach, algorithms for unconstrained minimization in very many variables have to be developed.
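Conditions (1)-(4) can be transcribed directly as a feasibility check; the following sketch (ours, not the authors' code) tests numerically whether a given matrix satisfies them, i.e. is a permutation matrix.

```python
# Direct transcription (ours) of conditions (1)-(4): check whether an
# n x n matrix X (list of lists of floats) is a permutation matrix.

def is_permutation_matrix(X, eps=1e-9):
    n = len(X)
    # (1) every entry is 0 or 1
    if any(abs(x * (x - 1.0)) > eps for row in X for x in row):
        return False
    # (2) column sums equal 1 (checked for all columns; the first n-1
    #     together with (3) and (4) already imply the last one)
    if any(abs(sum(X[i][j] for i in range(n)) - 1.0) > eps for j in range(n)):
        return False
    # (3) row sums equal 1
    if any(abs(sum(row) - 1.0) > eps for row in X):
        return False
    # (4) total sum equals n
    return abs(sum(sum(row) for row in X) - n) <= eps
```

A doubly stochastic but non-integral matrix, for instance, passes (2)-(4) yet fails (1), which is precisely the role of the quadratic conditions.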

CONJUGATE-GRADIENT ALGORITHMS FOR UNCONSTRAINED MINIMIZATION

Basic formulas

Algorithms for unconstrained minimization of a function F = F(z) of m variables with continuous first derivatives g = g(z) may use only function values (direct search methods) or also gradient values. Here we do not consider methods of the first kind, because they often have only a linear rate of convergence, and some of the most efficient of them (say Nelder and Mead's [1] and Powell's [2] methods) have an O(m²) storage requirement. Newton's and quasi-Newton methods are very efficient algorithms using gradient values, but they have to be discarded for large problems because both storage requirement and time per iteration are O(m²). Fortunately, a rather fast rate of convergence (often (m+1)-superlinear) and limited storage requirement can be obtained using conjugate gradient algorithms of the Fletcher-Reeves [3] kind. These methods are based on the iteration

(5) z_{k+1} = z_k − a_k s_k


where a_k is a scalar such that F(z_{k+1}) ≤ F(z_k), and the "search vector" s_k is defined as

(6) s_0 = g_0 ;  s_k = g_k + b_k s_{k−1}  (k = 1, 2, …)

and b_k is a scalar. The following five choices of b_k are considered here:

(7) b_k = g_kᵀ g_k / (g_{k−1}ᵀ g_{k−1})   (Fletcher-Reeves [3])

(8) b_k = (g_k − g_{k−1})ᵀ g_k / (g_{k−1}ᵀ g_{k−1})   (Polak-Ribière [4])

(9) b_k = (g_k − g_{k−1})ᵀ g_k / ((g_k − g_{k−1})ᵀ s_{k−1})   (Sorenson [5])

(10) b_k = t_k g_kᵀ g_k / (g_{k−1}ᵀ g_{k−1})   (Fried [6])

where the scalar t_k is the nonzero solution of the equation

(11) F_k / F_{k−1} = 1 − (a_{k−1} s_{k−1}ᵀ g_{k−1} / (2 F_{k−1})) t_k

A variation of formula (10) is obtained when the scalar t_k is the nonzero solution of the equation

(12) 1 = |t_k| + (a_{k−1} s_{k−1}ᵀ g_k / (2 F_k)) t_k

Methods using formulas (7), (8), (9) determine the minimum of a positive definite quadratic function in no more than m iterations when a_k satisfies the "exact linear search" condition

(13) g_{k+1}ᵀ s_k = 0
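A minimal sketch (ours, not the authors' code) of iteration (5)-(7) with exact linear search on a positive definite quadratic F(z) = ½ zᵀAz − bᵀz: for such an F the exact step a_k = g_kᵀs_k / (s_kᵀA s_k) enforces condition (13), and the iteration terminates in at most m steps. Pure Python; all names are illustrative.

```python
# Fletcher-Reeves iteration (5)-(7) on F(z) = 0.5*z'Az - b'z (ours).
# With exact line search the minimizer is reached in <= m iterations.

def cg_fletcher_reeves(A, b, z, max_iter=None):
    m = len(b)
    max_iter = max_iter or m

    def grad(z):                     # g = Az - b
        return [sum(A[i][j] * z[j] for j in range(m)) - b[i] for i in range(m)]

    g = grad(z)
    s = g[:]                         # s_0 = g_0, per (6)
    for _ in range(max_iter):
        if sum(gi * gi for gi in g) < 1e-20:
            break
        As = [sum(A[i][j] * s[j] for j in range(m)) for i in range(m)]
        a = sum(g[i] * s[i] for i in range(m)) / sum(s[i] * As[i] for i in range(m))
        z = [z[i] - a * s[i] for i in range(m)]          # step (5)
        g_new = grad(z)
        bk = sum(x * x for x in g_new) / sum(x * x for x in g)   # formula (7)
        s = [g_new[i] + bk * s[i] for i in range(m)]     # recursion (6)
        g = g_new
    return z
```

On quadratics with exact line search, choices (7), (8) and (9) produce identical search vectors, so the same sketch illustrates all three.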


In such a case the values of b_k defined by formulas (7), (8), (9) are the same, but they differ on nonquadratic functions or when equation (13) does not hold. Formulas (10-11) and (10-12) are the same if equation (13) is satisfied, and coincide with formula (7) on quadratic functions. They allow one to determine the minimum of a homogeneous function F = (1/2r) (xᵀ K x)^r, where K is a positive definite matrix, in no more than m+1 iterations, if equation (13) holds. Formula (10-12) is a variation that we propose to Fried's method; its rationale is that if equation (12) is solved instead of equation (11), then the last two search vectors are K-conjugate even if exact linear search is not performed.

The five methods have been incorporated in a polyalgorithm whose strategy is similar to that adopted in a quasi-Newton polyalgorithm previously described [7]; in particular, a_k is determined by a parabolic search based on Fielding's method [8]; the storage requirement on the IBM 1800 is less than 2500 + 4m words; details are given elsewhere [9].

Numerical experiments

Extensive comparison of the five conjugate gradient methods described in the above section was made during the development of the polyalgorithm. Whereas a detailed analysis can be found elsewhere [10], the following comments are in order:

- more accurate linear search reduces the number of iterations, keeping the same precision in determining the minimum. This is especially marked on quadratic and homogeneous functions, where exact linear search is a condition for termination.

- therefore, if m function evaluations are equivalent to one gradient evaluation, it is strongly recommended to use high precision in the linear search, because the number of function evaluations that the search requires is substantially independent of m.

- the five algorithms behave similarly, with a marginal superiority of the Polak-Ribière and Sorenson methods over the Fletcher-Reeves one, and of Fried's variation over the original method. The superiority of the last methods over the others is evident when high-precision linear search is used, but is lost when low precision is adopted.

- some cases show that, where theoretically m iterations would be required for termination, in practice this number is larger when m is small but may be substantially less when m is large. This is a promising result for the goodness of the algorithm on functions in very many variables.


- strict termination is strongly sensitive to the exactness of the linear search. Table 1 clearly evidences this for Fried's method. The function minimized was a homogeneous function with k_ij = d_ij (d_ij the Kronecker delta), starting point x_0 = (1, 2, …, m) and m equal to four or twenty. The exact value for a_k is given by the formula

(14) a_k = (2 r F_k)^((1−r)/r) g_kᵀ s_k / (s_kᵀ K s_k)

- Fried's functional model is the function F = (1/2r) (xᵀ K x)^r + q, with q identically zero. If q ≠ 0 we can obtain (m+2)-termination by making initially two steepest descent searches, and using two equations of the type (11) to solve simultaneously for r and q. However, experiments show that even if q is ≠ 0 and large, the efficiency of the method is not radically changed. A theoretical explanation is that if F_k converges to F* ≠ 0, then Fried's method tends to behave as the Fletcher-Reeves method.

- Fried's method may be interpreted as scaling b_k in the Fletcher-Reeves formula. Termination is kept by scaling the Polak-Ribière parameter in the same way, and this modification gives a marginal improvement to the algorithm.

TABLE 1 - Effect of exactness of the linear search

FUNCTION   METHOD              a_k/a_k*   F        ITERATIONS
m = 4      Polak-Ribière       1          1E-6     5
                               1.01       2E-5     4
                               1.1        2E-8     4
m = 4      Fried's variation   1          3E-38    4
                               1.01       ?E-9     4
                               1.1        4E-6     4
m = 20     Polak-Ribière       1          6E-6     13
                               1.01       9E-6     13
                               1.1        3E-6     19
m = 20     Fried's variation   1          5E-6     11
                               1.01       5E-6     11
                               1.1        9E-6     11

APPLICATION TO THE QUADRATIC ASSIGNMENT PROBLEM

The quadratic assignment problem arises when n elements must be assigned to n different locations in order to minimize a cost function which can be written as

(15) f = (1/2) Σ_{i,j,k,l} x_ij x_kl a_ik b_jl

where x_ij = 1 if elements i and j are interrelated, x_ij = 0 otherwise. Clearly the matrix X = [x_ij] is a permutation matrix, whereas A = [a_ik] and B = [b_jl] can easily be interpreted as "exchange" and "distance" matrices.

A compact writing of the cost function f is for instance

(16) f = (1/2) Tr[Xᵀ A X B]

The matrix of derivatives of f, D = [d_ij], where d_ij = df/dx_ij, can be written in the form

(17) D = (1/2) (A X B + Aᵀ X Bᵀ)

The quadratic function f generally is not positive definite; it can assume negative values, even if it assumes nonnegative values (f = 0 only for a degenerate problem) on the space of permutation matrices.
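Formulas (16) and (17) can be checked numerically; the sketch below (ours, not the authors' code) computes f = ½ Tr(XᵀAXB) and D = ½(AXB + AᵀXBᵀ) with plain list-based matrices, so that D can be verified against central finite differences of f.

```python
# Sketch (ours) of the QAP cost (16) and its derivative matrix (17).

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

def qap_cost(X, A, B):
    """f = 0.5 * Tr(X' A X B), formula (16)."""
    T = mat_mul(transpose(X), mat_mul(A, mat_mul(X, B)))
    return 0.5 * sum(T[i][i] for i in range(len(T)))

def qap_grad(X, A, B):
    """D = 0.5 * (A X B + A' X B'), formula (17)."""
    D1 = mat_mul(A, mat_mul(X, B))
    D2 = mat_mul(transpose(A), mat_mul(X, transpose(B)))
    return [[0.5 * (D1[i][j] + D2[i][j]) for j in range(len(X))]
            for i in range(len(X))]
```

Since f is quadratic in X, central differences reproduce (17) up to roundoff, which makes the check sharp.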

In the usual combinatorial framework the quadratic assignment problem is dealt with by optimal and suboptimal techniques. The first ones give a global minimum (which generally is not unique!), but they are unfeasible when n is as large as twenty, say. Such are implicit enumeration methods and Lawler's [11] reduction to a (larger) linear assignment. Suboptimal techniques are generally based on heuristics; well-known methods have been published by Armour and Buffa [12], Gilmore [13], Hillier and Connors [14], Graves and Whinston [15]. Heuristic algorithms are feasible for problems up to fifty variables, say, and the solution is generally good. More information about these methods can be found elsewhere [16].

Finally, the implicit enumeration algorithm for minimizing a quadratic function in zero-one variables under quadratic constraints due to Hansen [17] might be usefully applied to the quadratic assignment; however, experimental results are not known to us. In the solution of the nonlinear continuous programming problem we have to deal with two main points, once a method for unconstrained optimization is available. They are how to treat the constraints and how to choose the initial point.

Conditions (2), (3), (4) are readily eliminated by deleting the last row and column from matrix X, leaving an (n−1) by (n−1) unknown matrix X. Derivatives d̄_ij with respect to x_ij are given by the formula

(18) d̄_ij = d_ij + d_nn − d_in − d_nj

Conditions (1) are dealt with in our approach by building the penalty function


(19) f* = f + Σ_{i,j} k_ij [x_ij (x_ij − 1) + h_ij]²

where the "loss" coefficients k_ij are positive, and both the k_ij and the "correction" coefficients h_ij are constant during every unconstrained minimization. A Lagrangian function approach has been discarded, as the number of equality constraints is greater than the number of independent variables. A question is how to change the k_ij's and the h_ij's in order to force convergence to a feasible point. We used two methods. In the first, following Miele et al. [18], we have the h_ij's identically zero, while the k_ij's are all equal to a value k; initially k = 100 and then it is modified according to the formula

(20) k ← 10k if (f* − f)/k ≥ e

where e is a convergence parameter. As Miele's formula gives a fast rate of convergence to a feasible point, we limited to four the number of cycles of unconstrained minimization. In the second method we followed Powell [19]. In the first cycle the h_ij's are identically zero and the k_ij's are all equal to k = 100. Then, according to some tests, the k_ij's and the h_ij's are modified at every cycle by formulas like multiplication by a constant or the mapping

(21) h_ij ← h_ij + x_ij (x_ij − 1)

Convergence is supposed when max_ij |x_ij (x_ij − 1)| ≤ .04, and no more than 4 cycles

are allowed.
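The penalty terms (19), the Powell correction (21), and the convergence test can be sketched as follows (our transcription; function names are illustrative).

```python
# Sketch (ours) of the penalty function terms (19), Powell's correction
# mapping (21), and the convergence criterion max|x(x-1)| <= 0.04.

def penalty(X, K, H):
    """Penalty part of (19): sum_ij k_ij * (x_ij*(x_ij - 1) + h_ij)**2."""
    return sum(K[i][j] * (X[i][j] * (X[i][j] - 1.0) + H[i][j]) ** 2
               for i in range(len(X)) for j in range(len(X)))

def powell_update(X, H):
    """Correction (21): h_ij <- h_ij + x_ij*(x_ij - 1)."""
    return [[H[i][j] + X[i][j] * (X[i][j] - 1.0) for j in range(len(X))]
            for i in range(len(X))]

def converged(X, tol=0.04):
    return max(abs(X[i][j] * (X[i][j] - 1.0))
               for i in range(len(X)) for j in range(len(X))) <= tol
```

The penalty vanishes exactly on 0-1 matrices (with h = 0), so a permutation matrix is always a feasible point of the penalized problem.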

Table 2 shows how a matrix X is generated (a case with n = 6 and starting point
P4). With both strategies the maximum number of computer operations required by
the nonlinear programming problem is easily bounded. No more than four cycles are
allowed; no more than 100(n−1) function evaluations are allowed in each cycle, corresponding to about 10(n−1) iterations and ten function evaluations per linear search.
Now the number of operations required to calculate f grows as 7n^3 + 30n^2, if three
auxiliary matrices are used to store matrix products; otherwise it is O(n^4). To
calculate D this number grows as 7n^3 + 36n^2, if two auxiliary matrices are used.
We suppose that additions and multiplications take the same time and that shift time
is negligible. The time required by the conjugate gradient algorithm is O(n^2); therefore
for n sufficiently large the number of operations is bounded by 2801 n^4. The most
efficient of Gilmore's algorithms requires O(n^5) computer operations; CRAFT
needs O(n^3) operations per iteration; for small n the number of iterations is generally low; its probabilistic dependence on n is not known to us.
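The 2801 n^4 bound can be reproduced, in its leading term, from the budget just stated (a sketch; how the authors account for the gradient evaluations within this budget is our assumption):

```python
def ops_per_f(n):
    # Cost of one evaluation of f with three auxiliary matrices.
    return 7 * n**3 + 30 * n**2

def ops_bound(n):
    # At most 4 cycles, each with at most 100(n-1) function evaluations.
    return 4 * 100 * (n - 1) * ops_per_f(n)

# Leading term: 4 * 100 * 7 = 2800 times n^4, matching the order of the
# 2801 n^4 bound quoted in the text.
LEADING_COEFFICIENT = 4 * 100 * 7
```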

Page 573: 5th Conference on Optimization Techniques Part I


TABLE 2 - Generation of a permutation matrix (n = 6, starting point P4)

CYCLE   ITERATIONS   MATRIX X
  1         28        .631   .065   .549   .037  -.183
                      .228   .376   .024   .401  -.075
                      .019   .053   .022   .202   .67
                      .09    .43    .43    .14   -.08
                      .05    .06    .00    .12    .73

  2         23       1.16   -.1    -.18   -.03   -.09
                     -.16   -.17    .17   1.21   -.44
                      .11    .15    .96   -.03   -.18
                     -.06   1.2    -.16   -.05   -.01
                     -.06   -.09    .1    -.07   1.1

  3         20       1.00    0      0      0      0
                      0      0      0      1     -.02
                      0      0      .99    0      0
                      0      1      0      0     -.01
                      0      0      0      0      .99

Our bound is realistic; 2801 is a high coefficient, so only for large n, say n ≥ 100,
is the time required by the continuum approach estimated to be less than that required by heuristic procedures. An advantage of heuristic methods is that they calculate
only the variations of f due to exchanges of elements, which requires O(n) computer
operations.
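The O(n) evaluation of such an exchange can be sketched as follows (a standard quadratic-assignment delta computation, not taken from the paper; the objective is assumed in the form f(p) = Σ_ij a_ij b_{p(i)p(j)}):

```python
import numpy as np

def qap_cost(A, B, p):
    # Full objective f(p) = sum_ij a_ij * b_{p(i) p(j)}: O(n^2) operations.
    return float((A * B[np.ix_(p, p)]).sum())

def swap_delta(A, B, p, r, s):
    # Change in f when assignments p[r] and p[s] are exchanged.
    # Only terms with an index in {r, s} change, so the cost is O(n).
    q = list(p)
    q[r], q[s] = p[s], p[r]
    d = 0.0
    for k in range(len(p)):
        if k in (r, s):
            continue
        # Mixed terms: exactly one index equal to r or s.
        for i, j in ((r, k), (s, k), (k, r), (k, s)):
            d += A[i, j] * (B[q[i], q[j]] - B[p[i], p[j]])
    # Terms with both indices in {r, s}.
    for i in (r, s):
        for j in (r, s):
            d += A[i, j] * (B[q[i], q[j]] - B[p[i], p[j]])
    return d
```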

The second main point is how to choose the initial matrix X. It is well known that
if the objective function is not convex, then different initial points may lead to different
minima. In our problem every feasible point is a local minimum, and also there may
be many global minima. For quadratic assignment problems arising from plant layout, multiplicity of global minima is often a consequence of geometrical symmetry. In
more general cases a rough estimate of the number of global minima can be made as
follows. Let us assume that A and B are integer valued and max |a_ij| ≤ p,
max |b_ij| ≤ p, p being an integer. Then on the space of permutation matrices it
holds

(22)   0 ≤ f ≤ (1/2) p^2 n^2

Assuming that the integer values of f are distributed uniformly on the space of permutation matrices, then the average number of global minima is 2·2·3·4·…·(n−2)/p^2 = 2(n−2)!/p^2.
This implies that even if different starting points may give global minima, generally we can expect only local minima. The choice of the starting points could be made


using values given by combinatorial procedures; however we did not explore this
possibility (*) and we chose arbitrarily initial unfeasible points. The five choices
considered are given in Table 3.

TABLE 3 - Starting points

CASE   x_ij
P1     10 if j + i(n−1) is even, otherwise −10
P2     0 if sin[j + i(n−1)] ≤ .5, otherwise 1
P3     Change inequality sign in case P2
P4     1/(n−1)^2
P5     Stochastic sequence of 0, 1
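The five starting points can be generated as follows (our sketch; the indices i, j are taken as 1-based as in the paper, and the fixed seed for P5 is an arbitrary choice):

```python
import math
import random

def starting_point(case, n):
    # Build the n x n initial matrix X for cases P1..P5 of Table 3.
    rnd = random.Random(0)  # fixed seed: an arbitrary choice for P5
    X = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            t = j + i * (n - 1)
            if case == "P1":
                v = 10.0 if t % 2 == 0 else -10.0
            elif case == "P2":
                v = 0.0 if math.sin(t) <= 0.5 else 1.0
            elif case == "P3":  # P2 with the inequality sign changed
                v = 0.0 if math.sin(t) > 0.5 else 1.0
            elif case == "P4":
                v = 1.0 / (n - 1) ** 2
            else:  # P5: stochastic 0-1 sequence
                v = float(rnd.randint(0, 1))
            X[i - 1][j - 1] = v
    return X
```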

Results are given in Table 4 for n = 3, 5, 6, 7. They were the same for Miele's and
Powell's methods, except in a few cases, which can be explained by the fact that when
there are many minima the actual one calculated may depend on the sequence of the
penalty coefficients. Matrices A and B are found in Nugent et al. [20]. For n = 7
(36 variables and 49 constraints) the time of execution is about twenty minutes on
the IBM 1800.

Notation n.m. means that the case was not considered; n.c. means that convergence
to a permutation matrix was not obtained. In this case the final matrix showed
some identical rows.

TABLE 4 - Results

n    N.w.   P1     P2    P3     P4     P5     N.b.
3    n.m.   15     17    15     25     17     n.m.
5    31     n.c.   34    35     n.c.   30     25
6    46     49     62    52     49     43     43
7    84     n.m.   96    n.c.   84     n.m.   74

We can observe that under certain conditions a penalty function approach using conjugate gradient algorithms cannot work. For instance, if a_ij = b_ij = 1 − d_ij (d_ij the

(*) The knowledge of the derivatives of f, which is a byproduct of the continuum approach, could be used in the combinatorial heuristics to suggest which elements should
be interchanged.


Kronecker symbol) and initially x_ij = r, then every variable is changed by the same
amount at every iteration and a permutation matrix is not generated.
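This degeneracy is easy to verify numerically (a sketch under the assumption that f has the common trace form f(X) = tr(A X B X^T), whose gradient is A X B^T + A^T X B; the penalty terms, being identical functions of each equal entry, preserve the symmetry):

```python
import numpy as np

def qap_gradient(A, B, X):
    # Gradient of f(X) = trace(A X B X^T) with respect to X.
    return A @ X @ B.T + A.T @ X @ B

n = 5
A = B = np.ones((n, n)) - np.eye(n)   # a_ij = b_ij = 1 - delta_ij
X0 = np.full((n, n), 0.3)             # any uniform start x_ij = r
G = qap_gradient(A, B, X0)

# Every entry of the gradient is identical, so a (conjugate) gradient step
# changes all variables by the same amount: X stays uniform forever and
# can never approach a permutation matrix.
uniform = bool(np.allclose(G, G[0, 0]))
```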

The columns headed N.b. and N.w. contain the best and the worst results quoted by
Nugent [20] for some heuristic procedures.

The following conclusions can be drawn:

1) the nonlinear programming approach generally gives a permutation matrix in
two or three cycles;

2) the final matrices do not usually correspond to global minima, and the values of f are
rather scattered;

3) the heuristic combinatorial procedures are therefore superior both for quality
of solution and for time of execution in the range considered for n. For larger n,
where the continuum approach might become competitive, the computer time is
unfortunately too demanding to make experiments possible.

CONCLUSION

Combinatorial optimization problems have been expressed as nonlinear programming problems, and efficient techniques for handling the large unconstrained minimization problems arising have been considered. The choice of the initial points is a critical problem which has not been solved satisfactorily. Solutions for the quadratic
assignment problem are inferior to those given by combinatorial techniques; in
fact, the necessity of exploring a succession of local minima reproduces in a certain sense the original combinatorial problem.

APPENDIX

The definition of a permutation matrix X is equivalent to saying that the elements of X
must satisfy the equations

(1.1)   x_ij(x_ij − 1) = 0         i, j = 1, 2, ..., n

(1.2)   Σ_{i=1..n} x_ij = 1        j = 1, 2, ..., n

(1.3)   Σ_{j=1..n} x_ij = 1        i = 1, 2, ..., n

Summing equations (1.2) and (1.3) respectively over j and i we obtain identities. Therefore system (1-4) is readily generated. Also the following Theorem
holds:

Theorem: Conditions (1-4) are not redundant.


Proof. Suppose firstly that one of the relations (1.1), say x_lm(x_lm − 1) = 0, can be deleted. Put x_li = 1 for i ≠ m, x_jm = 1 for j ≠ l, x_lm = −n + 2, and all remaining x_ij's equal to zero. Then all conditions are satisfied but the matrix is not a permutation
matrix.
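This first counterexample can be verified directly (a sketch; conditions (1.1)-(1.3) are checked exactly as written):

```python
import numpy as np

def counterexample(n, l, m):
    # Matrix of the proof: x_li = 1 for i != m, x_jm = 1 for j != l,
    # x_lm = -n + 2, all other entries zero.
    X = np.zeros((n, n))
    X[l, :] = 1.0
    X[:, m] = 1.0
    X[l, m] = -n + 2.0
    return X

n, l, m = 5, 1, 3
X = counterexample(n, l, m)
row_sums_ok = bool(np.allclose(X.sum(axis=1), 1.0))  # (1.3) holds
col_sums_ok = bool(np.allclose(X.sum(axis=0), 1.0))  # (1.2) holds
R = X * (X - 1.0)                                    # residuals of (1.1)
# Only the deleted relation, the one at (l, m), is violated.
violated = [(i, j) for i in range(n) for j in range(n) if abs(R[i, j]) > 1e-12]
```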

Without loss of generality, suppose now that one of the equations (2), say
Σ_{i=1..n} x_il = 1, can be deleted. System (2-4) can be written in this case

(1.4)   Σ_{j=1..n} x_ij = 1        i = 1, 2, ..., n

(1.5)   Σ_{i=1..n} x_ij = 1        j = 1, 2, ..., n,  j ≠ l, l+1 if l ≠ n, otherwise j ≠ l, l−1

Now put the diagonal elements of X equal to one, except x_{l+1,l+1} if l ≠ n, or
x_{l−1,l−1} if l = n. Put x_{l+1,l} = 1 if l ≠ n, otherwise x_{l−1,l} = 1. Put all other
x_ij's equal to zero.

Then conditions (1.1), (1.4), (1.5) are satisfied but the resulting matrix is not a permutation matrix.

Finally, observe that a permutation matrix satisfies the unitarity condition
X^T X = X X^T = I. This equation is easily derived from equations (1-3), its nonlinearity being connected with the nonlinearity of equation (1.1).
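The unitarity condition is immediate to check for any permutation matrix (a short sketch):

```python
import numpy as np

def permutation_matrix(p):
    # X with x_{i, p(i)} = 1: row i selects the assigned column p(i).
    n = len(p)
    X = np.zeros((n, n))
    X[np.arange(n), p] = 1.0
    return X

X = permutation_matrix([2, 0, 3, 1])
# X^T X = X X^T = I: each row and each column contains exactly one 1.
unitary = bool(np.allclose(X.T @ X, np.eye(4)) and np.allclose(X @ X.T, np.eye(4)))
```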

REFERENCES

1. Nelder, J.A. and Mead, R.: A simplex method for function minimization, Comput. J., 7, 308-313, 1965

2. Powell, M.J.D.: An efficient method of finding the minimum of a function of several variables without calculating derivatives, Comput. J., 7, 155-162, 1964

3. Fletcher, R. and Reeves, C.M.: Function minimization by conjugate gradients, Computer J., 7, 149-154, 1964

4. Polak, E. and Ribière, G.: Note sur la convergence de méthodes des directions conjuguées, University of California, Berkeley, Dept. of Electrical Engineering and Computer Sciences, working paper, 1969

5. Sorenson, H.W.: Conjugate Direction Procedures for Function Minimization, Journal of the Franklin Institute, 288, 421-441, 1969

6. Fried, I.: N-step Conjugate Gradient Minimization Scheme for Nonquadratic Functions, AIAA Journal, 9, 2286-2287, 1971

7. Spedicato, E.: Un polialgoritmo per la minimizzazione di una funzione di più variabili, Atti del Convegno AICA su Tecniche di Simulazione e Algoritmi, Milano, Informatica, Numero speciale, 1972

8. Fielding, K.: Function minimization and linear search, Algorithm 387, Commun. of ACM, 13, 8, 1970


9. Spedicato, E.: Un polialgoritmo a gradiente coniugato per la minimizzazione di funzioni non lineari in molte variabili, Nota tecnica CISE-73.012, Milano, 1973

10. Spedicato, E.: CISE-Report, to appear

11. Lawler, E.L.: The Quadratic Assignment Problem, Management Sci., 9, 586-599, 1963

12. Armour, G.C. and Buffa, E.S.: A Heuristic Algorithm and Simulation Approach to Relative Location of Facilities, Management Sci., 9, 294-309, 1963

13. Gilmore, P.C.: Optimal and Suboptimal Algorithms for the Quadratic Assignment Problem, SIAM J., 10, 305-313, 1962

14. Hillier, F.S. and Connors, M.M.: Quadratic Assignment Problem Algorithms and the Location of Indivisible Facilities, Management Sci., 13, 42-57, 1966

15. Graves, G.W. and Whinston, A.B.: An Algorithm for the Quadratic Assignment Problem, Management Sci., 16, 453-471, 1970

16. Casanova, M. and Tagliabue, G.: CISE-Report, to appear

17. Hansen, P.: Quadratic Zero-One Programming by Implicit Enumeration, in Numerical Methods for Non-linear Optimization (F.A. Lootsma, ed.), Academic Press, 1972

18. Miele, A., Coggins, G.M. and Levy, A.V.: Updating rules for the penalty constant used in the penalty function method for mathematical programming problems, Aero-Astronautics Report n. 90, Rice University, Houston, 1972

19. Powell, M.J.D.: A Method for Nonlinear Constraints in Minimization Problems, in Optimization (R. Fletcher, ed.), Academic Press, 1969

20. Nugent, C.E., Vollmann, T.E. and Ruml, J.: An experimental comparison of techniques for the assignment of facilities to locations, Operations Research, 16, 150-173, 1968.

This work was supported by the Consiglio Nazionale delle Ricerche, in the framework of the research contract CISE/CNR n. 71.02207.75 - 115.2946.