Randomized Algorithms for Analysis and Control of Uncertain Systems

Roberto Tempo
IEIIT-CNR, Politecnico di Torino (roberto.tempo@polito.it)
Escuela Superior de Ingenieros, Universidad de Sevilla
Course Schedule
• Monday, March 14, 11:00 - 14:00
• Tuesday, March 15, 9:30 - 14:00 and 15:30 - 18:30
• Wednesday, March 16, 9:30 - 14:00 and 15:30 - 18:30
• Thursday, March 17, 9:30 - 14:00
• For additional documents, papers, etc., please consult
  http://staff.polito.it/tempo/
• Questions after the course can be sent to
  roberto.tempo@polito.it
• Some MATLAB™ codes are available at
  http://www.polito.it/~dabbene
Acknowledgments
• This research is supported by IEIIT-CNR of Italy
• Thanks to
  – Giuseppe Calafiore, Politecnico di Torino, Italy
  – Fabrizio Dabbene, IEIIT-CNR, Italy
  – Boris Polyak, Russian Academy of Sciences, Russia
Overview
1. Preliminaries
2. Robustness Analysis and Design
3. Randomized Algorithms
4. Random Vector Generation
5. Random Matrix Generation
6. Randomized Algorithms for Robust Design
7. Probabilistic LMIs
Overview
8. Probabilistic LPV Systems
9. Randomized Algorithms for Switched Systems
10. Miscellaneous Topics
11. Mixed Deterministic/Randomized Methods
12. Applications
Conclusions and Discussions
References
CHAPTER 1
Preliminaries
Keywords: probability, robustness, uncertainty, good and bad sets
Randomized Algorithms (RAs)
• Randomized algorithms are frequently used in many areas of engineering, computer science, physics, finance, optimization, …
Randomized Algorithms (RAs)
• Combinatorial optimization, computational geometry
  – Examples: data structuring, search trees, graph algorithms, sorting, …
• Motion and path planning problems
• Mathematics of finance: computation of path integrals
• Bioinformatics (string matching problems)
RAs and Uncertain Systems
• Main point of the course: rigorous study and development of RAs for uncertain systems and control, and related areas
Uncertainty
• Uncertainty has always been a critical issue in control theory and applications
• The first methods to deal with uncertainty were based on a stochastic approach
  – Optimal control: LQG and Kalman filter
• Since the early 80's, an alternative deterministic (worst-case or robust) approach has been proposed
Robustness
• Major stepping stone in 1981: formulation of the H∞ problem by George Zames
• Various "robust" methods to handle uncertainty now exist: structured singular value, Kharitonov, optimization-based (LMI), ℓ1 optimal control, quantitative feedback theory (QFT)
Robustness
• Late 80's and early 90's: robust control theory became a well-established area
• Successful industrial applications in aerospace, chemical, electrical, mechanical engineering, …
• However, …
Limitations of Robust Control
• Researchers realized some drawbacks of robust control
• Computational complexity: worst-case robustness is often NP-hard (not solvable in polynomial time unless P = NP)
• Various robustness problems are NP-hard
Limitations of Robust Control
• Consider uncertainty ∆ bounded in a set B of radius ρ. The largest value of ρ such that the system is stable for all ∆ ∈ B is called the (worst-case) robustness margin
• Conservatism: the worst-case robustness margin may be small
• Discontinuity: the worst-case robustness margin may be discontinuous with respect to the problem data
New Paradigm Proposed
• The new paradigm proposed is based on uncertainty randomization and leads to randomized algorithms for analysis and synthesis
• Within this setting a different notion of problem tractability is needed
Brief History of RAs
• 1980: new notion of probability of instability by Stengel in the context of flight control [1]
• 1986: Stengel's book on stochastic optimal control [2]
• 1989: CDC paper [3] titled "Probabilistic Robust Controller Design"
• Basic idea: use of Monte Carlo methods
• Major criticism: absence of new mathematical tools

[1] R.F. Stengel (1980)
[2] R.F. Stengel (1986)
[3] P. Djavdan, H.J.A.F. Tulleken, M.H. Voetter, H.B. Verbruggen and G.J. Olsder (1989)
Brief History of RAs
• 1996: two CDC (held in Kobe) papers [1,2] on explicit bounds on the sample size
• 1998: development of methods for "average" design based on statistical learning theory by Vidyasagar [3]

[1] P.P. Khargonekar and A. Tikku (1996)
[2] R. Tempo, E.W. Bai and F. Dabbene (1996)
[3] M. Vidyasagar (1998)
Brief History of RAs
• More recently, various research directions:
  – Sample generation in various sets
  – Bounds on the sample complexity
  – Stochastic gradient algorithms for control design
  – RAs for specific applications
  – Combination of deterministic/randomized techniques
  – RAs for nonconvex problems
New Paradigm Proposed
• The new paradigm proposed is based on uncertainty randomization and leads to randomized algorithms for analysis and synthesis
• Main motivation: limitations of robust control
  – Computational complexity (NP-hardness), conservatism and discontinuity of the robustness margin
Probability and Robustness
• The interplay of probability and robustness for control of uncertain systems
  – Robustness: deterministic, bounded uncertainty
  – Probability: random uncertainty (pdf is known)
• Computation of the probability of performance
• Controller which stabilizes most uncertain systems
Key Features
• We obtain larger robustness margins at the expense of a small risk
• We study the probability degradation beyond the robustness margins
• Computational complexity is generally not an issue: randomized algorithms are low complexity
Uncertain Systems
[Block diagram: system M(s) in feedback with uncertainty ∆]
• ∆ belongs to a structured set B𝔻
  – Parametric uncertainty q
  – Nonparametric uncertainty ∆i
  – Mixed uncertainty
Worst Case Model
• Worst-case model: set membership uncertainty
• The uncertainty ∆ is bounded in a set B𝔻: ∆ ∈ B𝔻
• Real parametric uncertainty q = [q1, …, qℓ] ∈ ℝℓ, with qi ∈ [qi−, qi+]
• Nonparametric uncertainty ∆i ∈ {∆i ∈ ℝn,n : ||∆i|| ≤ 1}
Robustness
• Robustness means guaranteeing stability and performance for all ∆ in B𝔻
[Block diagram: M(s) in feedback with ∆; disturbance d, error e]
Objective of Robustness
• Objective of robustness: to guarantee stability and performance for all ∆ ∈ B𝔻
• Different probabilistic paradigm: based on uncertainty randomization of ∆ within B𝔻
Example: Flexible Structure
• Mass-spring-damper model
• Real parametric uncertainty affecting stiffness and damping
• Complex unmodeled dynamics (nonparametric)
[Figure: chain of five masses m1, …, m5 connected by springs k1, …, k6 and dampers l1, …, l6]
Flexible Structure
• M-∆ configuration for the controlled system, M(s) = C(sI − A)⁻¹B; study stability for

  ∆ ∈ B𝔻(ρ) = {∆ ∈ 𝔻 : σ̄(∆) < ρ}

where

  ∆ = blockdiag(q1 I5, q2 I5, ∆1), q1, q2 ∈ ℝ, ∆1 ∈ ℂ4,4
Probability Degradation Function
[Figure: estimated probability degradation as a function of the probabilistic radius ρ; the probabilistic radius is ρ ≈ 0.394]
Probabilistic Model
• Probability density function associated to B𝔻
• We now assume that ∆ is a random matrix with given density function f∆(∆) and support B𝔻
• Example: ∆ is uniform in B𝔻
Uniform Density
• We take f∆(∆) = U[B𝔻] (uniform density within B𝔻). That is

  U[B𝔻](∆) = 1/vol(B𝔻) if ∆ ∈ B𝔻, and 0 otherwise

• In this case, for a subset S ⊆ B𝔻

  Pr(∆ ∈ S) = ∫S U[B𝔻](∆) d∆ = vol(S)/vol(B𝔻)
Performance Function
• In classical robustness we guarantee that a certain performance requirement is attained for all ∆ ∈ B𝔻
• This can be stated in terms of a performance function J = J(∆)
Example 1: H∞ Performance
• Compute the H∞ norm of the transfer function Fu(M,∆) between the error e and the disturbance d

  J(∆) = ||Fu(M,∆)||∞

  Fu(M,∆) is called the upper LFT (Linear Fractional Transformation)
• For given γ > 0, check if J(∆) < γ for all ∆ in B𝔻
Example 1: H∞ Performance - 2
• Continuous-time SISO system with real parametric uncertainty q. Consider the transfer function P(s,q), a fixed rational function of s whose coefficients depend on q1 and q2, where q1 ∈ [0.2, 0.6] and q2 ∈ [10⁻⁵, 3·10⁻⁵]
• Letting J(q) = ||P(s,q)||∞, we choose γ = 0.003
• Check if J(q) < γ for all q in these intervals
Example 1: H∞ Performance - 3
• The set of q1, q2 for which J(q) < γ is shown below
[Figure: region of the (q1, q2) box [0.2, 0.65] × [1·10⁻⁵, 3·10⁻⁵] where J(q) < γ]
Example 2 [1]: Robust Stability
• Robust stability of continuous-time systems with real parametric uncertainty q. Consider the closed-loop polynomial

  p(s,q) = a0(q) + a1(q)s + … + aℓ(q)sℓ

Then,

  J(q) = max{Re λ1(q), …, Re λn(q)}

where λ1(q), …, λn(q) are the roots of p(s,q)

[1] G. Truxal (1961)
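• As a sketch, J(q) can be evaluated numerically from the polynomial roots (poly_coeffs is a hypothetical user-supplied function returning the coefficients of p(s,q)):

  % J(q) = maximum real part of the roots of p(s,q)
  a = poly_coeffs(q);        % [a0(q) a1(q) ... al(q)], ascending powers of s
  lam = roots(fliplr(a));    % roots() expects descending powers
  J = max(real(lam));        % J(q) < 0 means p(s,q) is stable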
Example 2: Robust Stability - 2
• Consider the closed-loop uncertain polynomial p(s,q), a third-order polynomial whose coefficients depend on q1, q2 and on a parameter r, where q1 ∈ [0.3, 2.5], q2 ∈ [0, 1.7] and r = 0.5
• Check stability for all q in these intervals
Example 2: Robust Stability - 3
• Set of unstable polynomials
• Taking r = 0, the unstable set reduces to a singleton
[Figure: (q1, q2) box [0.3, 2.5] × [0, 1.7]; the unstable set is a ball of radius r inside the box]
P1: Performance Verification
• For given performance level γ, check whether J(∆) ≤ γ for all ∆ in B𝔻
P2: Worst-Case Performance
• Find Jmax such that

  Jmax = max∆∈B𝔻 J(∆)
Drawbacks: Complexity
• Complexity issue: worst-case robustness is often NP-hard (not solvable in polynomial time unless P = NP) [1]
• In classical robustness a number of problems are NP-hard
  – stability of interval matrices
  – structured singular value
  – static output feedback
  – …

[1] V. Blondel and J.N. Tsitsiklis (2000)
Conservatism and Complexity: Trade-off
• Uncertain parameters often enter into the system in a nonlinear fashion
• To avoid complexity issues (or just to find a solution of the problem), overbounding techniques are used
• Worst-case robustness margins may be very conservative
• Another issue: discontinuity of the robustness margin
Good and Bad Sets
• We define two subsets of B𝔻

  ∆good = {∆ : J(∆) ≤ γ} ⊆ B𝔻
  ∆bad = {∆ : J(∆) > γ} ⊆ B𝔻

• ∆good is the set of ∆'s satisfying performance
• A measure of robustness is

  vol(∆good) = ∫∆good d∆
Example of Good and Bad Sets
[Figure: (q1, q2) box [0.3, 2.5] × [0, 1.7] partitioned into ∆good and ∆bad]
Example of Good and Bad Sets - 2
[Figure: same (q1, q2) box; taking small r, the bad set ∆bad shrinks]
Probabilistic Robustness Measure
• In worst-case analysis we compute γ such that all ∆ satisfy performance. Equivalently, we evaluate γ such that

  ∆good = B𝔻

• In a probabilistic setting, we are satisfied if the ratio

  vol(∆good)/vol(B𝔻)

is close to one
CHAPTER 2
Nuts and Bolts of Robustness Analysis and Design
Keywords: M-∆ configuration, structured singular value, stability radii, Kharitonov Theorem
General Robust Control Framework
• Performance is measured by a characteristic of e
• Closed loop: e = Fu(M,∆)d
[Block diagram: plant P(s) with uncertainty ∆ and controller K(s); disturbance d, error e, control u, measurement y]

  M(s) = [M11 M12; M21 M22] = Fl(P, K)

M(s) strictly proper

[1] K. Zhou and J.C. Doyle (1998)
Uncertainty
• Parametric uncertainty q
• Nonparametric uncertainty ∆i
• Mixed uncertainty
[Block diagram: uncertainty block ∆]
Structured Uncertainty
• Subspace defining the uncertainty structure

  𝔻 := {∆ = blockdiag[q1 Ir1, …, qℓ Irℓ, ∆1, …, ∆b]}

• Norm-bounded structured uncertainty

  B𝔻(ρ) = {∆ ∈ 𝔻 : σ̄(∆) ≤ ρ}
Robust Stability
• Let A, B, C be a realization of M11(s). Define the set

  Aρ = {A∆ = A + B∆C : ∆ ∈ B𝔻(ρ)}

• The feedback connection is robustly stable if and only if every element in Aρ is stable
• The stability radius is

  ρ* = sup{ρ : Aρ is robustly stable}
Examples of Structures
• Full complex block: 𝔻 = ℂn,m (complex stability radius)

  1/ρ* = supω σ̄(M11(jω)) = ||M11(s)||∞

• Full real block: 𝔻 = ℝn,m (real stability radius)

  1/ρ* = supω inf γ∈(0,1] σ2( [Re(M11(jω))  −γ Im(M11(jω)); γ⁻¹ Im(M11(jω))  Re(M11(jω))] )
Mixed Structured Uncertainty
• Mixed uncertainty:

  𝔻 := {∆ = blockdiag[q1 Ir1, …, qℓ Irℓ, ∆1, …, ∆b]}

• µ is the structured singular value (structured stability radius)

  1/ρ* = supω µ(M11(jω))

where

  µ(M11(jω)) = 1 / inf{σ̄(∆) : ∆ ∈ 𝔻, det(I − M11(jω)∆) = 0}
Stability Radii
• We deal with stability radii in state space

  Aρ = {A∆ = A + B∆C : ∆ ∈ B𝔻(ρ)}
  ρ* = sup{ρ : Aρ is robustly stable}
Quadratic Stability of Interval Matrices
• Consider A∆ = A + ∆, where ∆ ∈ ℝn,n and ||∆||∞ ≤ ρ
• A∆ is quadratically stable if and only if there exists P > 0 such that

  P Ai + AiT P < 0

for all vertex matrices Ai
• Simultaneous solution of the Lyapunov inequalities is a convex problem, but the number of matrix inequalities grows as 2^(n²)
Rank One µ Problem
• Consider a stable transfer function M11(s) = u(s)vT(s), where u(s) and v(s) are ℓ-dimensional vectors of rational functions
• Let ∆ = diag(q1, …, qℓ); then

  det(I − M11(s)∆) = 1 + Σi=1..ℓ qi ui(s)vi(s)

• This leads to a polytope of polynomials
• Edge Theorem: the polytope is stable if and only if its one-dimensional edges are stable
Interval Polynomials
• An interval polynomial is of the form

  p(s,q) = q0 + q1 s + q2 s² + … + sℓ

where qi ∈ [qi−, qi+]
• Kharitonov Theorem: p(s,q) is stable if and only if four particular vertex polynomials (the Kharitonov polynomials) are stable
Convex Optimization in Control
• Many robust analysis and design problems may be formulated as convex optimization problems [1]
• A unifying framework is the one based on Semidefinite Programming (SDP)
  – H2/H∞ control
  – Computable bounds for µ analysis/synthesis
  – Multi-model robust design

[1] S.P. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan (1994)
Robust Optimization
• Many control problems in the presence of uncertainty may be cast as robust SDPs

  min cTx subject to F(x,∆) > 0 for all ∆ ∈ B𝔻(ρ)

where

  F(x,∆) = F0(∆) + x1 F1(∆) + … + xm Fm(∆), Fi = FiT

• For generic uncertainty structures, we can only compute relaxations of the original robust SDP
CHAPTER 3
Randomized Algorithms
Keywords: Monte Carlo methods, law of large numbers, Chernoff bound, worst-case bound
Recall: Probabilistic Robustness Measure
• In worst-case analysis we compute γ such that all ∆ satisfy performance. Equivalently, we evaluate γ such that

  ∆good = B𝔻

• In a probabilistic setting, we are satisfied if the ratio

  vol(∆good)/vol(B𝔻)

is close to one
Probability of Performance[1]
• We define the probability of performance as

  pγ = Pr{J(∆) ≤ γ}

• Notice that, if f∆(∆) is uniform, then

  pγ = vol(∆good)/vol(B𝔻)

[1] R.F. Stengel (1980)
Volume of Stable Polynomials: Continuous Time [1]
• Let

  p(s,a) = a0 + a1 s + … + an sⁿ

• Define the set of stable polynomials

  Sn = {a ∈ ℝn+1 : 0 ≤ ai ≤ 1, p(s,a) ≠ 0 for all Re(s) ≥ 0}

• The volume of stable polynomials Vn = vol(Sn) satisfies

  Vn ≤ 1/n!,  Vn ≈ e^(−n²) → 0 as n → ∞

[1] A.S. Nemirovskii and B.T. Polyak (1994)
Volume of Stable Polynomials: Discrete Time [1]
• Let

  p(z,a) = a0 + a1 z + … + zⁿ

• Define the set of stable polynomials

  Sn = {a ∈ ℝⁿ : p(z,a) ≠ 0 for all |z| ≥ 1}

• The volume of stable polynomials Vn = vol(Sn) satisfies V1 = 2, V2 = 4, V3 = 16/3 and the recursion

  Vn+1 = Vn²/Vn−1 for n odd,  Vn+1 = n Vn²/((n+1) Vn−1) for n even

with Vn → 0 as n → ∞

[1] A.T. Fam (1989)
Volume of Stable Polynomials: Discrete Time - 2
• If Sn ⊆ B𝔻 we can compute the probability of stability in closed form, since ∆good ≡ Sn:

  pγ = vol(Sn)/vol(B𝔻)

[Figure: stability region Sn of p(z,a) = a0 + a1 z + z² in the (a0, a1) plane, inside the box B𝔻]
Randomized Algorithms
• Volume estimation and numerical integration with Monte Carlo methods
• Monte Carlo and Quasi-Monte Carlo methods [1]
• Breaking the curse of dimensionality

[1] H. Niederreiter (1992)
Ingredients for RAs
• Assume that ∆ is random with pdf f∆(∆) with support B𝔻
• Let accuracy ε ∈ (0,1) and confidence δ ∈ (0,1) be assigned
• Performance function J = J(∆) for analysis, and performance level γ
Randomized Algorithms for Analysis
• Two classes of randomized algorithms for probabilistic robust performance analysis:
  – Probabilistic performance verification (compute pγ)
  – Probabilistic worst-case performance (compute the worst-case performance)
• Both are based on Monte Carlo methods, i.e. on uncertainty randomization
Randomized Algorithms - 2
• We estimate pγ by means of a randomized algorithm
• First, we generate N i.i.d. samples

  ∆1, ∆2, …, ∆N ∈ B𝔻

according to the density f∆
• We evaluate J(∆1), J(∆2), …, J(∆N)
Empirical Probability
• Construct the indicator function

  I(∆i) = 1 if J(∆i) ≤ γ, and 0 otherwise

• An estimate of pγ is the empirical probability

  p̂N = (1/N) Σi=1..N I(∆i) = Ngood/N

where Ngood is the number of samples such that J(∆i) ≤ γ
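• A minimal Monte Carlo sketch of this estimate (sample_Delta and J are hypothetical user-supplied routines for the uncertainty sampler and the performance function):

  N = 10000; gamma = 1; Ngood = 0;
  for i = 1:N
      Delta = sample_Delta();      % draw Delta^i ~ f_Delta with support B_D
      if J(Delta) <= gamma         % indicator I(Delta^i)
          Ngood = Ngood + 1;
      end
  end
  phat = Ngood / N;                % empirical probability of performance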
A Reliable Estimate
• The empirical probability is a reliable estimate if

  |pγ − p̂N| = |Pr{J(∆) ≤ γ} − p̂N| ≤ ε

• Find the minimum N such that

  Pr{|pγ − p̂N| ≤ ε} ≥ 1 − δ

where ε ∈ (0,1) and δ ∈ (0,1)
• ε and δ are called accuracy and confidence
Law of Large Numbers[1]
• For any ε ∈ (0,1) and δ ∈ (0,1), if

  N ≥ 1/(4ε²δ)

then

  Pr{|pγ − p̂N| ≤ ε} ≥ 1 − δ

[1] J. Bernoulli (1713)
Remarks
• The number of samples computed with the Law of Large Numbers is independent of the number of blocks in ∆, the density function f∆ and the size of B𝔻
• The number of samples N is very large

  ε      1-δ      N
  0.5%   99.5%    2.0·10⁹
  0.5%   99.9%    1.0·10¹⁰
  0.1%   99.5%    5.0·10¹⁰
  0.1%   99.9%    2.5·10¹¹
Other Bounds
• The Bernoulli bound is based on the Chebyshev inequality
• Other bounds are also available, such as those based on the Bienaymé inequality
• A bound that largely improves the previous ones, for small values of δ and ε, is the Chernoff bound
Chernoff Bound[1]
• For any ε ∈ (0,1) and δ ∈ (0,1), if

  N ≥ (1/(2ε²)) log(2/δ)

then

  Pr{|pγ − p̂N| ≤ ε} ≥ 1 − δ

[1] H. Chernoff (1952)
Comparison Between Bounds
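• A small sketch computing the two sample-size bounds side by side makes the comparison concrete (the values match the tables on the neighboring slides):

  eps = 0.001; delta = 0.001;
  N_bernoulli = ceil(1 / (4 * eps^2 * delta));     % Law of Large Numbers bound
  N_chernoff  = ceil(log(2/delta) / (2 * eps^2));  % Chernoff bound
  fprintf('Bernoulli: %.2e  Chernoff: %.2e\n', N_bernoulli, N_chernoff);
  % For eps = delta = 0.1% this gives about 2.5e11 vs 3.9e6 samples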
Chernoff Bound - 2
• We remark that the Chernoff bound greatly improves upon Bernoulli: the dependence on 1/δ is logarithmic
• The dependence on 1/ε is still quadratic

  ε      1-δ      N
  0.5%   99.5%    1.2·10⁵
  0.5%   99.9%    1.6·10⁵
  0.1%   99.5%    3.0·10⁶
  0.1%   99.9%    3.9·10⁶
Computational Complexity of RAs
• RAs are efficient (polynomial-time) because
  1. Random sample generation of ∆i can be performed in polynomial time
  2. The cost of checking stability for fixed ∆i is polynomial time
  3. The sample size is polynomial in the problem size and in the probabilistic levels ε and δ
Polynomial-Time
• The cost associated with the evaluation of J(∆i) for fixed ∆i is polynomial-time in many cases, for example when checking stability or H∞ performance
Cost of Checking Stability
• Consider a polynomial

  p(s,a) = a0 + a1 s + … + an sⁿ

• To check left-half-plane stability we can use the Routh test. The number of multiplications needed is

  n²/4 for n even,  (n² − 1)/4 for n odd

• The number of divisions and additions is equal to this number
• We conclude that checking stability is O(n²)
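• A minimal sketch of the Routh test (assuming no singular cases, i.e. a nonzero first column, and a(1) > 0):

  function stable = routh_stable(a)
  % a: polynomial coefficients in descending powers of s, a(1) > 0
  n = length(a) - 1;                    % polynomial degree
  m = ceil((n+1)/2);
  R = zeros(n+1, m);                    % Routh array
  R(1, 1:length(a(1:2:end))) = a(1:2:end);
  R(2, 1:length(a(2:2:end))) = a(2:2:end);
  for i = 3:n+1
      for j = 1:m-1                     % standard cross-multiplication rule
          R(i,j) = (R(i-1,1)*R(i-2,j+1) - R(i-2,1)*R(i-1,j+1)) / R(i-1,1);
      end
  end
  stable = all(R(:,1) > 0);             % no sign changes in the first column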
Worst-Case Performance
• Recall that

  Jmax = max∆∈B𝔻 J(∆)

• Generate N i.i.d. samples

  ∆1, ∆2, …, ∆N ∈ B𝔻

according to the density f∆
• Compute

  ĴN = maxi=1..N J(∆i)
Worst-Case Bound[1,2]
• For any ε ∈ (0,1) and δ ∈ (0,1), if

  N ≥ log(1/δ) / log(1/(1−ε))

then

  Pr{Pr{J(∆) > ĴN} ≤ ε} ≥ 1 − δ

[1] P.P. Khargonekar and A. Tikku (1996)
[2] R. Tempo, E.W. Bai and F. Dabbene (1996)
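• A minimal sketch of the resulting randomized worst-case analysis (sample_Delta and J are again hypothetical user-supplied routines):

  eps = 0.001; delta = 0.001;
  N = ceil(log(1/delta) / log(1/(1-eps)));   % worst-case bound on the sample size
  Jhat = -inf;
  for i = 1:N
      Jhat = max(Jhat, J(sample_Delta()));   % empirical maximum of J over N samples
  end
  % With probability >= 1-delta, Pr{J(Delta) > Jhat} <= eps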
Comparison
• The number of samples required is much smaller than that needed with the Chernoff bound
• The dependence on 1/ε is basically linear, since log(1/(1−ε)) ≈ ε

  ε        1-δ        N
  0.5%     99.5%      1.06·10³
  0.5%     99.9%      1.38·10³
  0.1%     99.5%      5.30·10³
  0.1%     99.9%      6.91·10³
  0.01%    99.99%     9.21·10⁴
  0.001%   99.999%    1.16·10⁶
Interpretation of Worst Case Bound
[Figure: two plots of J(∆) versus ∆; the region where J(∆) exceeds ĴN has volume vol(∆bad) < ε, while the gap Jmax − ĴN may be large or small]
Volumetric Interpretation
• In the case of f∆(∆) uniform, we have

  Pr{J(∆) > ĴN} = vol(∆bad)/vol(B𝔻)

• Therefore

  Pr{Pr{J(∆) > ĴN} ≤ ε} ≥ 1 − δ

is equivalent to

  Pr{vol(∆bad) ≤ ε vol(B𝔻)} ≥ 1 − δ
Confidence Intervals
• The Chernoff and worst-case bounds can be computed a priori and are explicit
• The sample size obtained with confidence intervals is not explicit
• Given δ ∈ (0,1), lower and upper confidence limits pL and pU are such that

  Pr{pL ≤ pγ ≤ pU} = 1 − δ
Confidence Intervals - 2
• The probabilities pL and pU can be computed a posteriori, when the value of Ngood is known, by solving equations of the type

  Σk=Ngood..N (N choose k) pLᵏ (1 − pL)^(N−k) = δL

  Σk=0..Ngood (N choose k) pUᵏ (1 − pU)^(N−k) = δU

with δL + δU = δ
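• A sketch of this a posteriori computation, assuming the Statistics Toolbox function binocdf and 0 < Ngood < N (the two equations are rewritten through the binomial cdf and solved with fzero):

  N = 1000; Ngood = 970; dL = 0.025; dU = 0.025;   % example values, delta = 0.05
  % lower limit: sum_{k=Ngood}^{N} C(N,k) p^k (1-p)^(N-k) = dL
  pL = fzero(@(p) (1 - binocdf(Ngood-1, N, p)) - dL, [eps 1-eps]);
  % upper limit: sum_{k=0}^{Ngood} C(N,k) p^k (1-p)^(N-k) = dU
  pU = fzero(@(p) binocdf(Ngood, N, p) - dU, [eps 1-eps]);
  % With probability 1-delta, pL <= p_gamma <= pU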
Confidence Intervals - 3
[Figure: empirical probability p̂γ with lower and upper confidence limits pL and pU, for δ = 0.05]
Example
• Let J(q) be the maximum real part of the roots of the closed-loop polynomial

  p(s,q) = a0(q) + a1(q)s + … + an(q)sⁿ

with q varying in the box Bq = {q ∈ ℝⁿ : ||q||∞ ≤ 1}
• Suppose that p(s,0) is stable. Then a box

  Bq(ρ) = {q ∈ ℝⁿ : ||q||∞ ≤ ρ}

of radius ρ > 0 can be determined so that J(q) < 0 for all q ∈ Bq(ρ)
Example - 2
• Can we estimate a box bigger than Bq(ρ) so that only a small number of plants in this box are unstable?
• We solve this problem by applying the previous results
Example - 3
• Consider the polynomial

  p(s,q) = a0(q) + a1(q)s + a3(q)s³ + a4(q)s⁴

where the coefficients ai(q) are fixed polynomial functions of q = [q1, q2, q3, q4] (the explicit expressions are not reproduced here; the relevant feature below is that a0(q) vanishes at q1 = 0.01)
Example - 4
• The parameters q = [q1, q2, q3, q4] vary in the box [−1/2, 1/2]⁴
• Since a0(q) = 0 for q1 = 0.01, the box containing only stable plants is surely smaller than [−0.01, 0.01]⁴
• Taking ε = 0.001 and δ = 0.01 we compute

  N = ⌈ log(1/δ) / log(1/(1−ε)) ⌉ = 4603

• We generate q1, q2, …, q4603 i.i.d. samples using the uniform density function
Example - 5
• We define the performance function J(q) as the maximum real part of the roots of p(s,q) and compute

  ĴN = maxi=1,…,4603 J(qi) = −0.00000042395 < 0

• With probability at least 1 − δ = 0.99, the volume of the bad set ∆bad is not greater than ε vol(B𝔻) = ε = 0.001
• We obtain an increase in size (volume) which is at least 6,250,000
Computational Learning Theory
• The Law of Large Numbers studies the problem

  Pr{|pγ − p̂N| ≤ ε} ≥ 1 − δ

where pγ = Pr{J(∆) ≤ γ}
• The function J(∆) is fixed
• Computational learning theory computes bounds on the sample size for the problem

  Pr{ |Pr{J(∆) ≤ γ} − p̂N| ≤ ε for all J ∈ 𝒥 } ≥ 1 − δ

where 𝒥 is a class of functions
VC and P-dimension[1,2]
• The bounds obtained depend on quantities called the VC-dimension (if J is a binary-valued function) or the P-dimension (if J is a continuous-valued function)
• VC- and P-dimension are measures of the problem complexity
• The bounds obtained are very conservative

[1] M. Vidyasagar (1997)
[2] E.D. Sontag (1998)
Systems and Control Application
• The class of functions can take into account control parameters
• We replace J(∆) with J(∆,κ), where κ are the controller parameters
• With this approach, we generate random samples for both ∆ and κ, i.e.

  ∆1, ∆2, …, ∆N and κ1, κ2, …, κN

• We will discuss this approach in the broader context of control system design with randomized algorithms
Choice of the Distribution
• The probability Pr{∆ ∈ S} depends on f∆(∆)
• It may vary between 0 and 1 depending on the pdf f∆(∆)
[Figure: two different densities f∆ over the same support, assigning different probabilities to the same set S]
Choice of the Distribution
• The bounds discussed are independent of the choice of the distribution. To compute the estimates, however, we need to know the distribution
• Some research has been done in order to find the worst-case distribution in a certain class [1]
• The uniform distribution is the worst case if a certain target set is convex and centrally symmetric

[1] B.R. Barmish and C.M. Lagoa (1997)
Choice of the Distribution
• Minimax properties of the uniform distribution have been shown in [1]

[1] E.W. Bai, R. Tempo and M. Fu (1998)
CHAPTER 4
Random Vector Generation
Keywords: radial distributions, inversion method, generalized Gamma density, uniform distribution in norm balls
RNG and Univariate Generation
• Random number generation (RNG): various methods are available for uniform generation in the interval [0,1)
• Linear and nonlinear RNGs, Fibonacci, feedback shift register, BBS, MT, …
• Non-uniform univariate random variables: suitable functional transformations (e.g., the inversion method)
Multivariate Sample Generation
• Emphasis on finite sample size (Chernoff bound)
• Development of techniques not based on asymptotic methods such as Metropolis (random walk) or Hit-or-Miss
• Multivariate random variables: rejection and conditional density methods
• The conditional density method requires the computation of multiple integrals and can be used in some special cases
Uniform Sample Generation in lp Balls
• We study parametric uncertainty q in lp norm balls
• Objective: uniform sample generation in the ball

  Bℝ = {q ∈ ℝⁿ : ||q||p ≤ 1}

[Figure: uniform samples in the unit ball in ℝ²]
Rejection Method
• Rejection method: curse of dimensionality
• Rejection rate for generation of uniform samples in the Euclidean sphere using a hypercube as bounding set:

  η(n) = (2/√π)ⁿ Γ(n/2 + 1)

  n      1    2        3        4        10      20      30
  η(n)   1    1.2732   1.9099   3.2423   401.5   4·10⁷   5·10¹³
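• A minimal sketch of the rejection method for the Euclidean ball, which also makes the rejection rate visible empirically:

  n = 4; N = 0; trials = 0;
  samples = zeros(n, 1000);
  while N < 1000
      x = 2*rand(n,1) - 1;        % uniform in the bounding hypercube [-1,1]^n
      trials = trials + 1;
      if norm(x) <= 1             % accept only points inside the l2 ball
          N = N + 1;
          samples(:, N) = x;
      end
  end
  eta_empirical = trials / N;     % approaches (2/sqrt(pi))^n * gamma(n/2+1)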
Gamma Density
• We need to introduce a technical tool
• A random variable x has (unilateral) Gamma density with parameters (a,b) if

  fx(x) = x^(a−1) e^(−x/b) / (Γ(a) bᵃ), x ≥ 0

• We write x ~ G(a,b)
• There exist standard and efficient methods for random generation according to G(a,b)
Non-uniform Distributions: Inversion Method
• A standard tool for univariate random variable generation is the inversion method
• Let w ∈ ℝ be a r.v. with uniform distribution in [0,1]. Let F be a continuous distribution function on ℝ with inverse

  F⁻¹(y) = inf{x : F(x) = y}, 0 ≤ y ≤ 1

• Then the r.v. z = F⁻¹(w) has distribution F
• The generation is obtained by passing a uniform r.v. through the transformation F⁻¹
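• A one-line illustration, assuming the exponential distribution F(z) = 1 − e^(−z), whose inverse is F⁻¹(y) = −log(1 − y):

  w = rand(10000, 1);   % uniform samples in [0,1]
  z = -log(1 - w);      % z = F^{-1}(w) is exponentially distributed with mean 1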
Change of Variables
• Let x be a r.v. with pdf fx(x)
• Let x = g(y), with g invertible
• The pdf of y is

  fy(y) = fx(g(y)) |∂g(y)/∂y|

• This method also has multivariate extensions
Gamma Density
• A random variable x ∈ ℝ has Gamma density with parameters (a,b) if

  fx(x) = x^(a−1) e^(−x/b) / (Γ(a) bᵃ), x ≥ 0

• We denote x ~ G(a,b)
• There exist standard and efficient methods for random generation according to G(a,b)
Generalized Gamma Density
• A random variable x ∈ ℝ has Generalized Gamma density with parameters (a,c) if

  fx(x) = c x^(ca−1) e^(−xᶜ) / Γ(a), x ≥ 0

• We denote x ~ Gg(a,c)
Generalized Gamma Density - 2
[Figure: Generalized Gamma densities Gg(1/p, p), i.e. fx(x) = p e^(−xᵖ)/Γ(1/p), for p = 1, 2, 4, 10, 100]
Comments
• If z ~ G(a,1), taking x = z^(1/c) we have x ~ Gg(a,c)
• Samples distributed according to a bilateral density x ~ fx(x) can be obtained from a unilateral density z ~ fz(z) taking

  x = s z

where s is an independent random sign taking the values +1 and −1 with equal probability
Joint Density
• Let x = [x1, …, xn] be a r.v. with components xi independently distributed according to the (bilateral) Generalized Gamma density with parameters (1/p, p)
• The joint density of x is

  fx(x) = ∏i=1..n p/(2Γ(1/p)) e^(−|xi|ᵖ) = pⁿ/(2ⁿ Γⁿ(1/p)) e^(−||x||pᵖ)
Examples
• The multivariate normal density N(0,I)

  fx(x) = (1/(2π)^(n/2)) e^(−xᵀx/2)

corresponds to Gg(1/2, 2)
• The multivariate Laplace density

  fx(x) = (1/2ⁿ) e^(−Σi |xi|)

corresponds to Gg(1, 1)
Radial Densities
• A random vector x ∈ 𝔽ⁿ has lp-radial density if

  fx(x) = g(r), r = ||x||p

where

  ||x||p = (Σi |xi|ᵖ)^(1/p)

• g(r) is called the defining function of x
Examples of Radial Densities
• The multivariate normal N(0,I)

  fx(x) = (1/(2π)^(n/2)) e^(−xᵀx/2)

is l2-radial
• The multivariate Laplace density

  fx(x) = (1/2ⁿ) e^(−Σi |xi|)

is l1-radial
Radial Density, p=2
[Figure: l2-radial density fx(x) = κ e^(−α||x||₂²)]
Radial Density, p=1
[Figure: l1-radial density fx(x) = κ e^(−α||x||₁)]
Radial Densities and Uniformity
• The key connection between lp-radial densities and uniform distributions in lp norm balls is given by the following fact
• Let x ∈ ℝⁿ be lp-radial, and let w be uniform in [0,1]; then the random vector

  y = w^(1/n) x/||x||p

has uniform distribution in the norm ball {x : ||x||p ≤ 1}
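• For p = 2 this fact gives the familiar recipe based on the l2-radial normal density (a minimal sketch):

  n = 10;
  x = randn(n, 1);                  % N(0,I) is l2-radial
  y = rand^(1/n) * x / norm(x);     % uniform in the unit Euclidean ball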
Real Random Vectors
• Theorem [1]: Let 𝔽 ≡ ℝ. If ξi are real r.v. distributed according to the Generalized Gamma density

  ξi ~ Gg(1/p, p)

and w ∈ [0,1] is a uniformly distributed r.v., then the vector

  y = w^(1/n) x/||x||p, x = [ξ1, …, ξn]ᵀ

is uniformly distributed in Bℝ

[1] G. Calafiore, F. Dabbene and R. Tempo (1998)
Algorithm for Real Uniform Generation
• Generate n independent real scalars ξi ~ Gg(1/p, p)
• Construct the vector x of components xi = si ξi, where the si are random signs
• Generate z = w^(1/n), where w is uniform in [0,1]
• Return

  y = z x/||x||p

• A similar algorithm exists for complex vectors
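• A sketch of this algorithm in MATLAB (assuming the Statistics Toolbox function gamrnd; the Generalized Gamma samples are obtained via the transformation ξ = z^(1/p) with z ~ G(1/p,1)):

  n = 10; p = 1.5;
  xi = gamrnd(1/p, 1, n, 1).^(1/p);   % xi_i ~ Gg(1/p,p), via z^(1/p) with z ~ G(1/p,1)
  s = sign(rand(n,1) - 0.5);          % independent random signs
  x = s .* xi;                        % bilateral Generalized Gamma vector
  z = rand^(1/n);                     % radial factor w^(1/n)
  y = z * x / sum(abs(x).^p)^(1/p);   % uniform sample in the unit lp ball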
Real Random Vectors - Step 1
[Figure: Generalized Gamma samples x in ℝ² (step 1)]
Real Random Vectors - Step 2
[Figure: samples projected onto the unit sphere, x/||x||p (step 2)]
Real Random Vectors - Step 3
[Figure: uniform samples in the unit ball after radial scaling by w^(1/n) (step 3)]
Real Random Vectors p=1
[Figure: uniform samples in the l1 ball in ℝ²]
Real Random Vectors p=0.7
[Figure: uniform samples in the l0.7 ball in ℝ²]
Real Random Vectors p=4
[Figure: uniform samples in the l4 ball in ℝ²]
Complex Random Vectors
• Theorem: Let 𝔽 ≡ ℂ. If ηi are complex r.v. uniformly distributed on the unit circle, ξi are real r.v. distributed according to the Generalized Gamma density

  ξi ~ Gg(1/p, p)

and w ∈ [0,1] is a uniformly distributed r.v., then the vector

  y = w^(1/n) x/||x||p, x = [ξ1η1, …, ξnηn]ᵀ

is uniformly distributed in Bℂ
Non-uniform Distributions
• lp-radial densities may be used to obtain generic lp-radial distributions (other than uniform) on the lp norm ball
• Let x ∈ ℝⁿ be lp-radial, and let z ~ fz be a scalar r.v.; then the random vector

  y = z x/||x||p

has density

  fy(y) = g(r) = fz(r)/(ν n r^(n−1)), r = ||y||p

where ν is a normalization constant
Matrix Extensions
• Matrix Hilbert-Schmidt norms
• For 1-induced and ∞-induced matrix norms, the problem reduces to the vector case
• The matrix spectral (maximum singular value) norm does not reduce to the vector case
CHAPTER 5
Matrix Sample Generation
Keywords: singular value decomposition, spectral norm, Haar density, conditional density method, Selberg integral, examples
Matrix Sample Generation
• How do we generate uniform matrix samples efficiently?
• The vector case is solved for the real and complex case, for any lp norm ball
• The matrix case is solved for the real and complex case, for any Hilbert-Schmidt p-norm (it reduces to the vector case)
• For 1- and ∞-induced matrix norms, the problem reduces to the vector case
• It is a hard problem for the spectral norm
A First Attempt: Rejection
• Methods based on rejection of samples generated from an outer-bounding set fail, due to dimensionality issues
• Let

  Bσ(1) = {∆ ∈ ℂn,n : σ̄(∆) ≤ 1}
  BF(√n) = {∆ ∈ ℂn,n : ||∆||F ≤ √n}
  B∞(1) = {∆ ∈ ℂn,n : ||∆||∞ ≤ 1}

then

  Bσ(1) ⊂ BF(√n) and Bσ(1) ⊂ B∞(1)

• Uniform generation in BF and B∞ is easy
Rejection Rates
• Let η be the average number of samples that one needs to generate in the outer set in order to find one sample in the good set

  n     2    3       4       5      6      8      10
  ηF    8    468     1.8e5   4e8    6e12   2e23   1e37
  η∞    12   8,640   8.7e8   2e16   2e26   5e54   1e95
Singular Value Decomposition
• Consider ∆ ∈ 𝔽n,m, m ≥ n
• Singular Value Decomposition

  ∆ = UΣV*

where U ∈ 𝔽n,n and V ∈ 𝔽m,n have orthonormal columns, and

  Σ = diag{σ1, σ2, …, σn}, σ1 > σ2 > … > σn > 0
A Class of Matrix pdfs
• Unitarily invariant densities: depend only on the singular values of ∆

  f∆(∆) = f∆(Σ)

• Radially symmetric densities: depend only on the norm of ∆

  f∆(∆) = f(σ̄(∆))

• The uniform distribution in B, f∆(∆) = U[B], is a special case of radial density
The pdf of the Singular Values - Real Case
• Theorem [1]: Let ∆ ∈ ℝn,m. The following statements are equivalent
  – The pdf f∆(∆) is unitarily invariant
  – The joint pdf of U, Σ and V factors as

    fU,Σ,V(U,Σ,V) = fU(U) fΣ(Σ) fV(V)

where fU(U) is the uniform (Haar) distribution over {U : UᵀU = I}, fV(V) is the uniform distribution over {V : VᵀV = I, [V]1,i > 0}, and

  fΣ(Σ) = Uℝ f∆(Σ) ∏k=1..n σk^(m−n) ∏1≤i<k≤n (σi² − σk²)

[1] G. Calafiore, F. Dabbene and R. Tempo (2001)
Real Matrices
• Uℝ is a normalization constant, given by an explicit product of Gamma functions (the expression is not reproduced here)
• The proof of the previous theorem is based on the determination of the Jacobian of the mapping between ∆ and its SVD factors U, Σ, V. The details are highly technical
Uniform Matrices – Real Case
• For the particular case of uniform matrices

  fΣ(Σ) = Kℝ ∏k=1..n σk^(m−n) ∏1≤i<k≤n (σi² − σk²)

with σ1 > σ2 > … > σn > 0
• The value of Kℝ is given by the Selberg integral (an explicit product of Gamma functions, not reproduced here)
The pdf of the Singular Values - Complex Case
• Theorem [1]: Let ∆ ∈ ℂn,m. The following statements are equivalent
  – The pdf f∆(∆) is unitarily invariant
  – The joint pdf of U, Σ and V factors as

    fU,Σ,V(U,Σ,V) = fU(U) fΣ(Σ) fV(V)

where fU(U) is the uniform (Haar) distribution over {U : U*U = I}, fV(V) is the uniform distribution over {V : V*V = I, [V]1,i > 0}, and

  fΣ(Σ) = Uℂ f∆(Σ) ∏k=1..n σk^(2(m−n)+1) ∏1≤i<k≤n (σi² − σk²)²

[1] G. Calafiore, F. Dabbene and R. Tempo (2001)
Complex Matrices
• Uℂ is a normalization constant, given by an explicit product involving the factorials (n−k)! and (m−k)!, k = 1, …, n (the expression is not reproduced here)
• The proof of the previous theorem is based on the determination of the Jacobian of the mapping between ∆ and its SVD factors U, Σ, V
Uniform Matrices – Complex Case
• Consider the particular case of uniform matrices, with the change of variables xi = σi² and the ordering condition removed

  fx(x) = Kx ∏i=1..n xi^(m−n) ∏1≤i<k≤n (xi − xk)²

• The value of Kx is given by the Selberg integral (an explicit product of Gamma functions, not reproduced here)
Outline of Sample Generation Method
• For uniform ∆, its SVD factors are independently distributed
  1. Generate the samples of U and V (easy problem)
  2. Generate the samples of Σ (hard problem)
  3. Build the matrix sample ∆ = UΣVᵀ
Generation of Haar Samples
• The uniform distribution over the orthogonal (or unitary) group is known as the Haar invariant distribution
• Fundamental property: if U is Haar, then QU has the same distribution as U, for any fixed orthogonal (unitary) matrix Q
Generation of Haar Samples
• A Haar matrix U ∈ ℝn,n may be generated by means of the QR decomposition as follows
  1. X = randn(n,n);
  2. [Q,R] = qr(X);
  3. U = Q*diag(sign(diag(R)));   % sign normalization of R's diagonal makes U exactly Haar
• The complex case works similarly
• Rectangular Haar matrices work similarly
Generation of the Singular Values for Complex Matrices
Conditional Density Method
• This is a general method that reduces generation according to one n-dimensional distribution to n one-dimensional sample generation problems
• Drawback: it requires the computation of marginal densities
• This is in general a very hard problem: computing multiple integrals
Conditional Densities Method - 2
• Write fx(x1, …, xn) as

  fx(x1, …, xn) = f1(x1) f2(x2|x1) ⋯ fn(xn|x1, …, xn−1)

where

  fi(xi|x1, …, xi−1) = fi(x1, …, xi)/fi−1(x1, …, xi−1)

and the marginal densities are

  fi(x1, …, xi) = ∫ ⋯ ∫ fx(x1, …, xn) dxi+1 ⋯ dxn
Conditional Density Method - 3
• A vector x ∈ ℝⁿ with density fx(x) can be obtained by generating the xi sequentially, i = 1, …, n
• Each xi is generated independently according to the univariate distribution fi(xi|x1, …, xi−1)
Computing the Marginal Densities: Complex Matrices
• Let

  V(x1, …, xi) = [1 … 1; x1 … xi; x1² … xi²; … ; x1^(n−1) … xi^(n−1)]

be a partial Vandermonde matrix, and recall that

  fi(x1, …, xi) = ∫ ⋯ ∫ fx(x1, …, xn) dxi+1 ⋯ dxn
Key Result
• The marginal density fi(x1, …, xi) is equal to

  fi(x1, …, xi) = Kx (n−i)! · det(Vᵀ M V)/det(M) · ∏k=1..i xk^(m−n)

where M = R⁻¹ and

  [R]r,l = 1/(r + l + m − n − 1), r, l = 1, …, n

• The proof of this result is based on the Dyson-Mehta Theorem
Dyson-Mehta Theorem
• Let Zn ∈ ℝn,n be a symmetric matrix such that
  1. [Zn]i,j = ψ(xi, xj)
  2. ∫ ψ(x,x) dµ(x) = c
  3. ∫ ψ(x,y) ψ(y,z) dµ(y) = ψ(x,z)

where dµ is a suitable measure and c is a constant. Then

  ∫ det(Zn) dµ(xn) = (c − n + 1) det(Zn−1)

where Zn−1 is the submatrix obtained from Zn by removing the row and column containing xn
Key Result - 2
• Given x1, x2, …, xi−1, the conditional density is expressed as a polynomial in xi

  fi(xi|x1, …, xi−1) = Ki xi^(m−n) Σk=0..2(n−1) bik xiᵏ

• The constants Ki and the coefficients bik are computed by means of appropriate recursions
• We thus have an efficient way to compute the conditional densities
Generation of the Singular Values for Real Matrices
Computing the Marginal Densities: Real Matrices
• Use again the conditional density method
• The mathematical details are different from the complex case
• We again obtain the marginal densities in "closed form"
Key Result
• The marginal density fi(σ1, …, σi) may be computed as

  fi(σ1, …, σi) = Kℝ ∏k=1..i σk^(m−n) · Ψi(σ1², …, σi²)

where

  Ψi(x1, …, xi) = ∫ ⋯ ∫D |V(x)| dµ(xi+1) ⋯ dµ(xn)

over D = {0 < x1 < … < xn < 1}, with

  dµ(xk) = xkᵘ dxk, υ = (m − n − 1)/2
Computation of ψ
• Theorem: Ψi(x1, …, xi) can be written in closed form as the determinant of a matrix assembled from the partial Vandermonde matrix V(x1, …, xi) and from a matrix M(x) whose entries are elementary functions of the form x^(k+j+υ+1)/(k+j+υ+1) (the explicit expressions, which differ for n−i even and odd, are not reproduced here), where αi = (υ+1)(n−1)
Marginal Densities
[Figure: plots of the marginal densities of the singular values]
Sample Generation: Summary
• The details are highly technical:
  – Computation of the pdf of the singular values
  – Computation of the pdf of U, V (Haar distribution)
  – Conditional density method
  – Closed-form solution of a multiple integral
  – Dyson-Mehta Theorem
Polynomial-Time Algorithms
• Polynomial-time algorithms for the recursive generation of the singular values have been developed
• The algorithms require at each step only additions and multiplications of polynomial matrices
• Technical details and MATLAB™ software can be found at the URL http://www.polito.it/~dabbene
• The method becomes ill-conditioned for large n (n > 20)
• Uniform matrices concentrate on the boundary of the norm ball
Quasi-Monte Carlo Methods
Quasi-Monte Carlo Method[1]
• The RAs presented are based on Monte Carlo (MC)
• Monte Carlo is used for integration and optimization
• Quasi-Monte Carlo (QMC) is a deterministic version of MC

[1] H. Niederreiter (1992)
Quasi-Monte Carlo Method
• For integration, the discrepancy is a measure of how "evenly distributed" the sample set is within the integration domain (the unit cube)
• For optimization, the relevant measure is the dispersion
Discrepancy
• Let 𝒮 be a non-empty family of subsets of [0,1]ⁿ and let ∆1, …, ∆N ∈ [0,1]ⁿ be a (deterministic) point set
• The discrepancy is defined as

  DN(∆1, …, ∆N) = supS∈𝒮 | (1/N) Σi=1..N IS(∆i) − Vol(S) |

where the supremum is taken with respect to S ∈ 𝒮, Vol(S) is the volume of S, and IS is the indicator function of the set S
Integration Error
• The integration error is given by the Koksma-Hlawka inequality

  | ∫ g(∆) d∆ − (1/N) Σi g(∆i) | ≤ V(g) DN(∆1, …, ∆N)

where V(g) is the variation of g
• To minimize the integration error, we need to minimize the discrepancy
Optimal Sequences
• We need to find low-discrepancy (optimal) sequences which minimize the discrepancy
• There are various low-discrepancy sequences, including van der Corput (for n = 1), Halton, Sobol and Niederreiter (for n > 1)
• For the Halton sequence we have

  DN(∆1, ∆2, …, ∆N) = O(N⁻¹ (log N)ⁿ)
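• A minimal sketch of the van der Corput radical-inverse sequence, the n = 1 building block of the Halton sequence (component j of a Halton point uses the j-th prime as base):

  function v = vdcorput(i, b)
  % Radical inverse of integer i >= 1 in base b: mirrors the base-b digits
  % of i around the radix point, giving a low-discrepancy point in [0,1)
  v = 0; f = 1/b;
  while i > 0
      v = v + f * mod(i, b);
      i = floor(i / b);
      f = f / b;
  end
  end
  % Example: vdcorput(1,2), vdcorput(2,2), vdcorput(3,2) give 0.5, 0.25, 0.75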
MC and QMC
• The average absolute error of MC is O(N^(−1/2))
• Main result of QMC: the (deterministic) error is

  O(N⁻¹ (log N)ⁿ)

where n is the problem dimension
• This error is asymptotically smaller than O(N^(−1/2)), but it depends on n
• A huge sample size may be required before QMC becomes superior to MC
CHAPTER 6
Randomized Algorithms forRobust Design
Keywords: LQ regulators, probabilistic quadratic stabilization, gradient-based algorithms, probability-one controller
Analysis vs Design with Uncertainty
• Starting point: worst-case analysis versus design
• Consider an interval family p(s,q), q ∈ Bq = {q ∈ ℝⁿ : ||q||∞ ≤ 1}
• Analysis problem:
  – Check if p(s,q) is stable for all q ∈ Bq
  – Answer: Kharitonov Theorem
• Design problem:
  – Does there exist a q ∈ Bq such that p(s,q) is stable?
  – Answer: unknown in general
Example
• Example:

  p(s,q) = (s − q0)(s − q1) ⋯ (s − qn−1)

with qi ∈ [−0.5, 0.5]
• The probability of stability is 0.5ⁿ
• For n large, this probability goes to zero
• Nevertheless, it is easy to design a stable polynomial in the family, e.g. p(s,q*) for

  q0* = q1* = … = qn−1* = −0.25

• Notice that this is not a fragile solution
Synthesis Paradigm
• Design the parameterized controller Kθ to guarantee stability and performance
[Block diagram: plant P with uncertainty ∆ and parameterized controller Kθ; disturbance d, error e, control u, measurement y]
Probabilistic Synthesis Paradigm
• Uncertainty randomization of ∆ in B𝔻
• Convex optimization to design the controller K(s)
[Block diagram: plant P(s) with uncertainty ∆ and controller K(s); disturbance d, error e, control u, measurement y]
Synthesis Performance Function
• Recall that the parameterized controller is Kθ
• We replace J(∆) with a synthesis performance function

  J = J(∆,θ)

where θ ∈ Θ represents the controller parameters to be determined, and Θ is their bounding set
Randomized Algorithms for Synthesis
• Two classes of RAs for probabilistic synthesis:
  – Average performance synthesis [1]
  – Robust performance synthesis [2]

[1] M. Vidyasagar (1998)
[2] B. Polyak and R. Tempo (2001)
Average Performance Synthesis
Statistical Learning Theory for Design
• Statistical learning theory for probabilistic design [1]
• Controller randomization and uncertainty randomization
• Bounds on the sample size: issue of conservatism [2]

[1] M. Vidyasagar (1997)
[2] V. Kolchinskii, C.T. Abdallah, M. Ariola, P. Dorato and D. Panchenko (2000)
Average Performance Synthesis
• Denote the average performance of the plant by

  φ(θ) = E∆{J(∆,θ)}

where E∆ is the expectation with respect to ∆, and the optimal average performance by

  φ* = infθ∈Θ φ(θ)

• The RA returns, with probability 1 − δ, a design vector θN such that

  φ(θN) − φ* ≤ ε
RA for Average Performance Synthesis
1. Determine a finite sample size N = N(ε,δ)
2. Draw N samples ∆1, …, ∆N according to the given pdf
3. Return the controller parameter

  θN = arg supθ∈Θ Ngood(θ)/N

where Ngood(θ) is the number of samples such that J(∆i,θ) ≤ γ
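• A minimal sketch in which the 'sup' over Θ is itself approximated by controller randomization, as discussed on the later Comments slide (sample_Delta, sample_theta and J are hypothetical user-supplied routines):

  N = 2000; M = 200; gamma = 1;   % M candidate controllers, N uncertainty samples
  Delta = cell(N,1);
  for i = 1:N, Delta{i} = sample_Delta(); end
  best = -inf;
  for j = 1:M
      theta = sample_theta();      % random candidate controller parameters
      good = 0;
      for i = 1:N
          good = good + (J(Delta{i}, theta) <= gamma);
      end
      if good/N > best, best = good/N; theta_N = theta; end
  end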
Comments
• This RA is based upon statistical learning theory [1]
• The sample complexity may be determined using the bound

  N ≥ (128/ε²)(log(8/δ) + d (log(32e/ε) + log log(32e/ε)))

for any ε ∈ (0,1) and δ ∈ (0,1), where d is an upper bound on the P-dimension of the function family

  𝒥 = {J(·,θ) : θ ∈ Θ}

[1] M. Vidyasagar (2002)
Comments
• An upper bound for d can be computed for several control problems. However, this introduces an extra level of conservatism
• First issue: the sample complexity is huge
• Second issue: how to compute the "inf"
• Randomization may be used also in this case, but this means generating controllers at random
• Advantage: convexity of J(∆,θ) is not required
Robust Performance Synthesis
Robust Performance Synthesis
• The RA returns, with probability 1 − δ, a design vector θN such that

  Pr{J(∆,θN) ≤ γ} ≥ 1 − ε

• The controller parameter θN is constructed using a finite number N of random samples of ∆
RA for Robust Performance Synthesis
1. Initialization: determine the sample size function N(k) = N(ε,δ,k); set k = 0, i = 0 and choose θ0
2. Feasibility loop: set l = 0 and feas = true
   While l < N(k) and feas = true
     Set i = i+1, l = l+1
     Draw ∆i according to f∆
     If J(∆i, θk) > γ set feas = false
   End While
3. Exit condition: if feas = true, set N = i, return θN = θk and Exit
4. Update: θk+1 = ψupd(∆i, θk); set k = k+1 and go to the feasibility loop
Comments
� This RA is based on convexity of the performance function J(∆,θ) in θ
� Key point: Update rule ψ upd(∆ι, θθθθκ) in the update step
� Specific implementations of the update rule are a gradient step or a step of the ellipsoidal method
� For any ε and δ, the sample size N(k) at step k of the algorithm can be chosen as[1]
N(k) ≥ ( 2 log(k+1) + log(π²/6) + log(1/δ) ) / log( 1/(1−ε) )
[1] Y. Oishi (2003)
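A skeleton of the sequential RA above, assuming the reconstructed form of the bound for N(k); here J, sample_delta and update (the rule ψupd, e.g. a gradient or ellipsoid step) are user-supplied placeholders:

```python
import math

def n_of_k(k, eps, delta):
    # Sample-size function N(k) from the bound quoted above
    return math.ceil((2 * math.log(k + 1) + math.log(math.pi**2 / 6)
                      + math.log(1 / delta)) / math.log(1 / (1 - eps)))

def sequential_design(theta0, J, sample_delta, update, gamma, eps, delta,
                      k_max=1000):
    # Test the candidate on N(k) consecutive samples; on the first violation,
    # apply the update rule psi_upd and increase k.
    theta, k = theta0, 0
    while k < k_max:
        for _ in range(n_of_k(k, eps, delta)):
            dlt = sample_delta()
            if J(dlt, theta) > gamma:
                theta = update(dlt, theta)   # psi_upd(Delta_i, theta_k)
                k += 1
                break
        else:
            return theta                     # N(k) successes in a row: exit
    raise RuntimeError("iteration budget exhausted")
```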
RAs for Optimal Control (LQR)
Uncertain Systems in State Space
� We consider a state space description of the uncertain system
ẋ(t) = A(∆)x(t) + Bu(t)
with x(0) = x0, x ∈ ℝⁿ, u ∈ ℝᵐ, ∆ ∈ B
� For example, A(∆) is an interval matrix with bounded entries
aij⁻ ≤ aij ≤ aij⁺
LQ regulator
� The performance index is the quadratic cost function
J = ∫₀^∞ ( xᵀSx + uᵀRu ) dt
where S > 0 and R > 0 are given weights
� The state feedback controller is
u = −R⁻¹BᵀQ⁻¹x
where Q = Qᵀ > 0 is the solution of a QMI
Quadratic Matrix Inequality
� Find Q = Qᵀ > 0 such that, for given 0 ≤ γ ≤ 1,
A(∆)Q + QAᵀ(∆) − 2BR⁻¹Bᵀ + γ( QSQ + BR⁻¹Bᵀ ) ≤ 0
for all ∆ ∈ B
� Then, the cost
J ≤ (1/γ) x0ᵀQ⁻¹x0
is guaranteed for all ∆ ∈ B
Robust Quadratic Stabilization
� Quadratic stabilization and guaranteed cost can be reduced to checking a finite number of matrix inequalities
� Example: If A(∆) is an interval matrix and R=I, for quadratic stabilization we take γ = 0 and we need to find a solution Q=QT >0 of
AiQ + QAiᵀ − 2BBᵀ ≤ 0
for all vertex matrices Ai
Probabilistic Quadratic Stabilization
� Critical problem: The number of LMIs may be too large and not tractable with classical interior point methods
� Example: If A(∆) is an interval matrix, the number of LMIs is equal to the number of vertex matrices
NV = 2^(n²)
� Probabilistic version of quadratic stabilization and LQ regulator
Probabilistic Solution
� Randomly generate ∆1,…,∆N ∈ B. Then, check if the LMI
A(∆i)Q + QAᵀ(∆i) − 2BBᵀ ≤ 0
is feasible for i = 1,…,N and find a common solution Q = Qᵀ > 0
� Critical problem: Even if N is relatively small, this is a hard computational problem
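A sketch of this sampled feasibility test using CVXPY (an assumed tool choice, not part of the course material); it searches for one common Q over the sampled LMIs and shows why even moderate N leads to one large SDP:

```python
import cvxpy as cp
import numpy as np

def common_Q(A_samples, B, eps=1e-4):
    # Search for one Q = Q^T > 0 with A_i Q + Q A_i^T - 2 B B^T <= 0
    # for every sampled A_i = A(Delta_i); returns None if infeasible.
    n = B.shape[0]
    Q = cp.Variable((n, n), symmetric=True)
    constraints = [Q >> eps * np.eye(n)]
    for Ai in A_samples:
        constraints.append(Ai @ Q + Q @ Ai.T - 2 * B @ B.T << 0)
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    return Q.value if problem.status == cp.OPTIMAL else None
```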
Sequential Algorithm
� Key point: Sequential algorithm which deals with one constraint at each step
� At step k we have
Phase 1: Uncertainty randomization of ∆
Phase 2: Gradient algorithm and projection
� Final result: Find a solution Q=QT >0 with probability one in a finite number of steps
Definitions
� Let Eⁿ be the Euclidean space of symmetric matrices
Eⁿ = { A ∈ ℝ^{n,n} : A = Aᵀ },  ‖A‖ = ( Σ_{i,k=1}^n aik² )^{1/2}
and C be the cone of positive semi-definite matrices
C = { A ∈ Eⁿ : A ≥ 0 }
Projection on a Cone
� For any real symmetric matrix A we define the projection [A]₊ ∈ C as
[A]₊ = arg min_{X∈C} ‖A − X‖
� The projection can be computed through the eigenvalue decomposition A = TΛTᵀ
� Then
[A]₊ = TΛ₊Tᵀ
where Λ₊ = diag(λ1₊,…,λn₊) and λi₊ = max{λi, 0}
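A direct NumPy implementation of this projection:

```python
import numpy as np

def proj_psd(A):
    # [A]_+ via the eigenvalue decomposition A = T Lambda T^T
    lam, T = np.linalg.eigh(A)
    return (T * np.maximum(lam, 0.0)) @ T.T

A = np.array([[1.0, 2.0], [2.0, -3.0]])
print(np.linalg.eigvalsh(proj_psd(A)))   # negative eigenvalues clipped to zero
```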
Phase 1: Uncertainty Randomization
� Uncertainty randomization: Generate a random sample ∆k ∈ B
� Then, for guaranteed cost we obtain the QMI
A(∆k)Q + QAᵀ(∆k) − 2BR⁻¹Bᵀ + γ( QSQ + BR⁻¹Bᵀ ) ≤ 0
Matrix Valued Function
� Define a matrix valued function
V(Q,∆k) = A(∆k)Q + QAᵀ(∆k) − 2BR⁻¹Bᵀ + γ( QSQ + BR⁻¹Bᵀ )
and a scalar function
v(Q,∆k) = ‖[V(Q,∆k)]₊‖
where ‖·‖ is the Frobenius norm
� We can also take the maximum eigenvalue of V(Q,∆k)
Phase 2: Gradient Algorithm
� We write
Qk+1 = [ Qk − μk ∂Q v(Qk,∆k) ]₊  if v(Qk,∆k) > 0
Qk+1 = Qk  otherwise
where ∂Q is the subgradient and the stepsize μk is
μk = ( v(Qk,∆k) + r‖∂Q v(Qk,∆k)‖ ) / ‖∂Q v(Qk,∆k)‖²
and r > 0 is a parameter
Closed-form Gradient Computation
� The function v(Q,∆k) is convex in Q and its subgradient is given by
∂Q v(Q,∆k) = ( [V(Q,∆k)]₊ (A(∆k) + γQS) + (A(∆k) + γQS)ᵀ [V(Q,∆k)]₊ ) / v(Q,∆k)
if v(Q,∆k) ≠ 0, and it is zero otherwise
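Putting the last three slides together, a sketch of one correction step of the sequential algorithm, under the QMI and subgradient forms reconstructed above; A here stands for the sampled A(∆k):

```python
import numpy as np

def proj_psd(M):
    lam, T = np.linalg.eigh(M)
    return (T * np.maximum(lam, 0.0)) @ T.T

def qmi_step(Q, A, B, R, S, gamma, r=1.0):
    # One correction step for the sampled QMI, with A = A(Delta^k)
    BRB = B @ np.linalg.solve(R, B.T)
    V = A @ Q + Q @ A.T - 2 * BRB + gamma * (Q @ S @ Q + BRB)
    Vp = proj_psd(V)
    v = np.linalg.norm(Vp, "fro")            # v(Q, Delta^k) = ||[V]_+||
    if v == 0.0:
        return Q                             # no violation: keep Q
    M = A + gamma * Q @ S
    grad = (Vp @ M + M.T @ Vp) / v           # closed-form subgradient
    g = np.linalg.norm(grad, "fro")
    mu = (v + r * g) / g**2                  # stepsize with parameter r > 0
    return proj_psd(Q - mu * grad)           # projected gradient step
```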
Theorem[1]
� Assumption: Every open subset of B has positive measure
� Theorem: A solution Q, if it exists, is found in a finite number of steps with probability one
� Idea of proof: The distance of Qk from the solution set decreases at each correction step
[1] B.T. Polyak and R. Tempo (2001)
Worst-Case Solution
� The sequential algorithm provides a candidate solution for the set of QMIs
� We can check whether this candidate solution satisfies all QMIs, and hence is a worst-case solution; otherwise we run the algorithm again
Extensions
� Minimization of a measure of violation for problems
that are not strictly feasible[1,2]
� Uncertainty in the control matrix, B = B(∆), ∆ ∈ B
We take the feedback law
u = YQ⁻¹x
where Y and Q = Qᵀ > 0 are design variables
[1] B.R. Barmish and P. Shcherbakov (1999)
[2] G. Calafiore and B.T. Polyak (2001)
Further Extensions
� Nonlinear constrained systems
� Disturbance rejection and relations with game theory[1]
� Improvements from the numerical point of view
[1] T. Basar and P. Bernhard (1995)
Related Literature
� Related literature on optimization and adaptive control with linear constraints[1,2,3,4]
� Stochastic approximation algorithms have been widely studied in the stochastic control and optimization literature[6,7]
[1] S. Agmon (1954)
[2] T.S. Motzkin and I.J. Schoenberg (1954)
[3] B.T. Polyak (1964)
[4] V.A. Bondarko and V.A. Yakubovich (1992)
[6] H.J. Kushner and G.G. Yin (2003)
[7] J.C. Spall (2003)
Subsequent Research
� Design of common Lyapunov functions for switched systems[1]
� From common to piecewise Lyapunov functions[2]
� Ellipsoidal algorithm instead of gradient algorithm[3]
� Stopping rule which provides the number of steps[4]
� Nonlinear constrained systems
[1] D. Liberzon and R. Tempo (2004)
[2] H. Ishii, T. Basar and R. Tempo (2004)
[3] S. Kanev, B. De Schutter and M. Verhaegen (2002)
[4] Y. Oishi and H. Kimura (2003)
Example[1]
� We study a multivariable example for the design of a controller for the lateral motion of an aircraft.
� The model consists of four states and two inputs
ẋ(t) = [ 0        1    0           0
         0        Lp   Lβ          Lr
         g/V      0    Yβ          −1
         Nβ̇ g/V   Np   Nβ + Nβ̇Yβ   Nr − Nβ̇ ] x(t) + [ 0      0
                                                       0      −3.91
                                                       0.035  0
                                                       −2.53  0.31 ] u(t)
[1] B.D.O. Anderson and J.B. Moore (1971)
Example - 2
� The state variables are
– x1 bank angle
– x2 derivative of bank angle
– x3 sideslip angle
– x4 yaw rate
� The control inputs are
– u1 rudder deflection
– u2 aileron deflection
Example - 3
� Nominal values: Lp=−2.93, Lβ=−4.75, Lr=0.78, g/V=0.086, Yβ=−0.11, Nβ̇=0.1, Np=−0.042, Nβ=2.601, Nr=−0.29
� Perturbed matrix A(∆): each parameter can take values in a range of ±15% of the nominal value
� Quadratic stability (γ=0): take R=I and S=0.01I
� Remark: A(∆) is multiaffine in the uncertain parameters: quadratic stability can be ascertained by solving simultaneously 2⁹ = 512 vertex LMIs
Example - 4
� Sequential algorithm:
– Initial point Q0 randomly selected
– 800 random matrices ∆k
– The algorithm converged to
Q = [ 1.2162   0.7382   0.4452   0.7338
      0.7382   0.7798   0.7020   0.1645
      0.4452   0.7020   1.0927  −0.0843
      0.7338   0.1645  −0.0843   0.7560 ]
Example - 5
� The corresponding controller
K = BᵀQ⁻¹ = [ −0.4954   10.2370  −10.1758   −2.8814
              −49.9587  43.1284   −4.3731   38.6191 ]
satisfies all the 512 vertex LMIs and therefore it is also a quadratic stabilizing controller in a deterministic sense
� The optimal LQ controller computed on the nominal plant satisfies only 240 vertex LMIs
CHAPTER 7
Probabilistic Algorithms for Robust LMIs
Keywords: robust LMI, robust and approximate feasibility, stochastic gradient methods, quadratic stability
Robust LMIs
� We discuss stochastic optimization methods for determining admissible solutions to robust Linear Matrix Inequalities (LMIs)
� Robust LMI
F(x,∆) < 0,  for all ∆ ∈ B;  x ∈ ℝᵐ
where
F(x,∆) = F0(∆) + Σ_{i=1}^m xi Fi(∆),  Fi = Fiᵀ ∈ ℝ^{n,n}
Robust LMIs - 2
� The set B indicates a structured norm bounded uncertainty set
D = { ∆ = blockdiag[ q1 I_{r1},…, qℓ I_{rℓ}, ∆1,…, ∆b ] }
B(ρ) = { ∆ ∈ D : ‖∆‖ ≤ ρ }
Robust Feasibility
� Let X ⊆ ℝᵐ be a non-empty convex and closed set
� A point x̃ is a robustly feasible solution if and only if
x̃ ∈ X and F(x̃,∆) < 0 for all ∆ ∈ B
� Computing robust solutions to LMIs is in general numerically difficult
� Robust SDP techniques can handle relaxations of the problem
Feasibility Indicator Function
� Introduce a scalar function
φ(x,∆) = ‖F₊(x,∆)‖
where F₊ indicates the projection of F onto the cone of positive semi-definite matrices
F₊ = arg min_{W≥0} ‖F − W‖
� The function φ(x,∆) is convex in x
Robust and Approximate Feasibility
� A point x̃ is a robustly feasible solution if and only if
x̃ ∈ X and φ(x̃,∆) = 0 for all ∆ ∈ B
� In case no robustly feasible solution exists, we assume a probability density on ∆, and say that x̂ is an approximately feasible solution if
x̂ = arg min_{x∈X} E∆{ φ(x,∆) }
Recall: Projections
� Let F₊ be the projection of a symmetric matrix F onto the positive semi-definite cone
� F₊ = RD₊Rᵀ, where F = RDRᵀ is the eigen-decomposition of F, with D = diag(d1,d2,…,dn) and D₊ = diag(d1₊,d2₊,…,dn₊), where di₊ = max(di,0)
� The (matrix) function F₊(x) is convex in x
� The (scalar) function φ(x,∆) = ‖F₊(x,∆)‖ is convex in x
Subgradients of φ
� A subgradient of φ(x,∆) at x is computed as follows. Let
∇x φ(x,∆) = (1/φ(x,∆)) [ Tr(F1(∆)F₊(x,∆)), …, Tr(Fm(∆)F₊(x,∆)) ]ᵀ
Then
∂x φ(x,∆) = ∇x φ(x,∆) if φ(x,∆) ≠ 0, and ∂x φ(x,∆) = 0 otherwise
Algorithm for Approximate Feasibility
� Assume ∆ is a random matrix with pdf f∆
� Let g(x) = E{φ(x,∆)} and let x̂ be the minimizer of g(x) over X
� Let ‖∂x φ(x,∆)‖ ≤ μ for all x ∈ X and all ∆
� Given an initial vector x0, consider the recursion
xk+1 = [ xk − λk ∂x φ(xk,∆k) ]X
where [·]X denotes the projection onto the set X, and ∆k are i.i.d. samples drawn from f∆
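A sketch of this recursion with the λ-weighted running average used in the theorem that follows; the stepsize λk = 1/√(k+1) is one admissible diminishing choice, and subgrad_phi, sample_delta and proj_X are user-supplied placeholders:

```python
import numpy as np

def approx_feasibility(x0, subgrad_phi, sample_delta, proj_X, steps=5000):
    # x^{k+1} = [x^k - lam_k * dphi(x^k, Delta^k)]_X, together with the
    # lambda-weighted running average x_bar analyzed below
    x = np.asarray(x0, dtype=float)
    x_bar, m = x.copy(), 0.0
    for k in range(steps):
        lam = 1.0 / np.sqrt(k + 1.0)          # diminishing stepsize
        x = proj_X(x - lam * subgrad_phi(x, sample_delta()))
        m += lam
        x_bar += (lam / m) * (x - x_bar)      # x_bar^k update
    return x_bar
```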
Approximate Feasibility - 2
� Let further
x̄0 = x0,  x̄k = (mk−1/mk) x̄k−1 + (λk/mk) xk,  m0 = λ0,  mk = mk−1 + λk
� Theorem[1]:
E{ g(x̄k) } − g(x̂) ≤ C(k)
C(k) = ( ‖x0 − x̂‖² + μ² Σ_{i=0}^{k−1} λi² ) / ( 2 Σ_{i=0}^{k−1} λi )
[1] G. Calafiore and B. Polyak (2001)
Approximate Feasibility - 3
� Moreover, if the stepsizes are chosen so that
λi → 0,  Σ_{i=0}^∞ λi = ∞
then
lim_{k→∞} E{ g(x̄k) } = g(x̂)
� The proof of this result is based on convexity (Jensen inequality) and properties of projections
Algorithm for Robust Feasibility
� Assume that a strong feasibility condition holds: There exist x* ∈ X and ε > 0 such that
φ(x,∆) = 0 for all x: ‖x − x*‖ ≤ ε, and for all ∆ ∈ B
� Assume that, if x is not a robustly feasible solution, then it must hold that
Pr{ F(x,∆) > 0 } > 0
Robust Feasibility - 2
� Consider the recursion
xk+1 = [ xk − λk ∂x φ(xk,∆k) ]X
where ∆k are i.i.d. random samples drawn from f∆
� Define the stepsizes
λk = ( φ(xk,∆k) + ηε‖∂x φ(xk,∆k)‖ ) / ‖∂x φ(xk,∆k)‖²  if φ(xk,∆k) ≠ 0,  and λk = 0 otherwise
� For any x0 ∈ X the recursion finds a robustly feasible solution in a finite number of steps, w.p. 1
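A sketch of the robust feasibility recursion with the stepsize above (η is the constant appearing in the reconstructed stepsize); since the finite termination step is not directly observable, the loop simply runs for a fixed budget and corrects only on violated samples:

```python
import numpy as np

def robust_feasibility(x0, phi, subgrad_phi, sample_delta, proj_X,
                       eps, eta=1.0, max_iter=100_000):
    # Under the strong feasibility condition, the correction steps
    # stop after finitely many iterations with probability one
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dlt = sample_delta()
        v = phi(x, dlt)
        if v > 0.0:                          # violated sample: correct
            g = subgrad_phi(x, dlt)
            gn = np.linalg.norm(g)
            x = proj_X(x - (v + eta * eps * gn) / gn**2 * g)
    return x
```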
Example: Quadratic Stability[1]
� Consider a linear system ẋ = A(∆)x with
A(∆) = A0 + ∆,  |∆ij| ≤ r Sij,  r > 0
A0 = [ −2   0   1
        0   0   1
        0  −2  −2 ],   S = [ 0.3141  0.4014  0.7004
                             0.1457  0.4727  0.2451
                             0.5691  0.9394  0.1651 ]
� The system is quadratically stable if and only if there exists P > 0 that satisfies simultaneously 512 Lyapunov inequalities for the vertex matrices
[1] B. R. Barmish and P. S. Shcherbakov (2002)
Example - 2
� Let r = 0.5. The stochastic algorithm (robust feasibility) converged after N = 50 iterations to the solution
P = [ 0.5371  0.2425  0.3177
      0.2425  2.0443  0.8155
      0.3177  0.8155  1.2487 ]
� This solution may be checked to actually satisfy all Lyapunov inequalities
Example - 3
� For r = 1, we know that no actual robustly feasible solution exists
� After N = 250 iterations of the approximate feasibility algorithm, we found the solution
P = [  0.5577  −0.0967  −0.2649
      −0.0967   1.7455   0.9899
      −0.2649   0.9899   1.2042 ]
� A posteriori probability of feasibility (computed with Monte Carlo using 100,000 samples) is pfeas = 0.996
Example - 4
� We recall that to assess quadratic stability from a deterministic viewpoint one needs to solve 2^(n²) simultaneous Lyapunov inequalities
� For n = 4, 65,536 vertex matrices
� For n = 10, 1.26×10³⁰ vertex matrices
� n = 4 is already beyond the capabilities of many current SDP solvers
Example - 5
� Consider an example with n=10
� Nominally stable
Example - 6
� Check quadratic stability for element-wise independent uncertainty with r =0.5
� N = 2000 iterations yielded a feasible solution
� The estimated a posteriori probability of stability is pfeas=1.0
Comments
� Two randomized algorithms for determining robustly feasible or approximately feasible solutions to robust LMIs
� For the latter algorithm, convergence is only asymptotic
� These techniques are useful for problems which are intractable by means of standard (exact) LMI methods
� These solutions may serve as a good initial guess for a deterministic algorithm
Optimization Problems[1]
� Extensions to optimization problems
� Consider convex function f(x) and function g(x,∆) convex in x for fixed ∆
� Semi-infinite (nonlinear) programming problem
min f(x) subject to g(x,∆) ≤ 0 for all ∆ ∈ B
� Reformulation as stochastic optimization
� Drawback: Convergence results are only asymptotic
[1] V. B. Tadic, S. P. Meyn and R. Tempo (2003)
Scenario Approach[1]
� The scenario approach for convex problems
� Non-sequential method which provides a one-shot solution for general convex problems
� Randomization of ∆ ∈ B and solution of a single convex optimization problem
� Derivation of a new bound on the sample size
� Applications to control systems design
[1] G. Calafiore and M. Campi (2004)
CHAPTER 8
Probabilistic LPV Systems
Keywords: gain scheduling, Linear-Parameter Varying systems, parameter discretization
LPV
� LPV applications: Aircraft control, automated lane
guidance, communication networks
� Parameters q = q(t) are unknown but bounded in the set Bq
� They can be measured on-line by the controller
� Critical issue: Parameter discretization (complexity)
Quadratic LPV
� LPV plant
[ ẋ(t) ]   [ A(q(t))    B1(q(t))   B2(q(t))  ] [ x(t) ]
[ e(t) ] = [ C1(q(t))   0          D12(q(t)) ] [ d(t) ]
[ y(t) ]   [ C2(q(t))   D21(q(t))  0         ] [ u(t) ]
with q ∈ Bq
� Assumption: Orthogonality conditions are satisfied
Quadratic LPV - 2
� The main goal is to design an LPV controller of the type
[ ẋc(t) ]   [ Ac(q(t))   Bc(q(t)) ] [ xc(t) ]
[ u(t)  ] = [ Cc(q(t))   0        ] [ y(t)  ]
such that quadratic performance of the closed-loop system is guaranteed and
sup_d ( ∫₀^∞ eᵀ(t)e(t) dt )^{1/2} / ( ∫₀^∞ dᵀ(t)d(t) dt )^{1/2} < γ  for all q ∈ Bq
Structured QMI Solution[1]
� The LPV problem is solvable if and only if there exist X = Xᵀ > 0 and Y = Yᵀ > 0 such that, for ε > 0,
P(X,q) = A(q)X + XAᵀ(q) + XC1ᵀ(q)C1(q)X + γ⁻²B1(q)B1ᵀ(q) − B2(q)B2ᵀ(q) + εI ≤ 0
Q(Y,q) = Aᵀ(q)Y + YA(q) + YB1(q)B1ᵀ(q)Y + γ⁻²C1ᵀ(q)C1(q) − C2ᵀ(q)C2(q) + εI ≤ 0
R(X,Y) = −[ X      γ⁻¹I
            γ⁻¹I   Y    ] ≤ 0
[1] G. Becker and A. Packard (1994)
Matrix Valued Function
� Define a matrix valued function
V(X,Y,q) = blockdiag( P(X,q), Q(Y,q), R(X,Y) )
and a scalar function
v(X,Y,q) = ‖[V(X,Y,q)]₊‖
� Remark: The gradients ∂X{v(X,Y,q)} and ∂Y{v(X,Y,q)} can be computed in closed form
Gradient-based Algorithm
� We write
Xk+1 = Xk − μk ∂X{ v(Xk,Yk,qk) } / w(Xk,Yk,qk)
Yk+1 = Yk − μk ∂Y{ v(Xk,Yk,qk) } / w(Xk,Yk,qk)
if v(Xk,Yk,qk) > 0, and Xk+1 = Xk, Yk+1 = Yk otherwise
� Here μk is a stepsize and
w(Xk,Yk,qk) = ( ‖∂X{v(Xk,Yk,qk)}‖² + ‖∂Y{v(Xk,Yk,qk)}‖² )^{1/2}
Probabilistic LPV[1]
� Assume that q is a random vector with support Bq
� Consider a probability density assigning positive measure to every open subset of Bq
� Theorem: The algorithm converges with probability one
in a finite number of iterations
� Remark: No assumption on the dependence on q of
matrices A, B1, B2, C1, C2, D12, D21
[1] Y. Fujisaki, F. Dabbene and R. Tempo (2003)
Application Example
� Multivariable example for the design of a controller for
the lateral motion of an aircraft (2 inputs, 3 outputs, 4
state variables)
� Nine uncertain parameters entering multiaffinely into the
state matrix
� Each parameter is perturbed by relative uncertainty ρ
Numerical Results
� Sequential algorithm
– Initial conditions randomly selected
– 800 iterations
– Computation of Xk and Yk
– Construction of the controller
� We set γ =3
� With ρ = 5%, X800 and Y800 satisfy all 2⁹ = 512 vertex QMIs
Numerical Results
� With ρ = 8% X800 and Y800 still satisfy all 512 vertex QMIs
X800 = [ 0.4611  0.0187  0.1246  0.1296
         0.0187  0.6073  0.1335  0.2158
         0.1246  0.1335  0.4166  0.1546
         0.1296  0.2158  0.1546  0.4814 ]

Y800 = [ 0.5002   0.1408   0.0299   0.1461
         0.1408   0.3395   0.0687  −0.0035
         0.0299   0.0687   0.4841  −0.0715
         0.1461  −0.0035  −0.0715   0.6874 ]
Algorithm Updates
[Figure: algorithm updates over iterations 0–800]
� The plot shows the algorithm updates
Fault Tolerant Control (FTC)
� Definition: Fault is an unpermitted deviation of at least one characteristic property or parameter of the system from the acceptable/usual/standard condition[1]
� Example (dramatic): Chernobyl plant[2]
[1] R. Isermann and P. Ballé (1997)
[2] G. Stein, Bode Lecture “Respect the Unstable” (1989)
Robust Fault Tolerant Control[1]
� Discrete-time plant with time-varying faults f
x(k+1) = A(f)x(k) + B1(f)d(k) + B2(f)u(k)
e(k) = C1(f)x(k) + D11(f)d(k) + D12(f)u(k)
y(k) = C2(f)x(k) + D21(f)d(k) + D22(f)u(k)
� The vector of faults is f = [f1,…,fn], with
fi(k) = (1 + wf,i(k)δi) gi(k)
where gi(k) is the nominal value and the fault estimate uncertainty δi is bounded by |δi| ≤ 1
[1] S. Kanev and M. Verhaegen (2004)
Robust Fault Tolerant Control
� Reformulation of the problem as LPV
� Fault estimate uncertainty δ is taken as a random variable with given pdf
� Development of randomized algorithms for computing controller parameters[1,2]
� Random generation of δ and use of ellipsoidal method
� Application: Brushless DC motor
[1] S. Kanev and M. Verhaegen (2004)
[2] S. Kanev (2004)
Model Predictive Control (MPC)[1]
� Development of randomized algorithms for MPC is an open area
� The separation principle does not hold when parametric uncertainty is present
� Difficulty of deriving output feedback controller
� BMI solution is NP-hard
� Approaches based on finite-horizon output feedback MPC do not always guarantee robust stability of the closed-loop system
[1] E. F. Camacho and C. Bordons (2004)
CHAPTER 9
Randomized Algorithms for Switched Systems
Keywords: common Lyapunov functions, piecewise quadratic function, switching rules
Common Lyapunov Functions for Finite Families
Switched Systems Problem
Given a finite family of Hurwitz matrices A1,…,AN, find a matrix P = Pᵀ > 0 such that
AiᵀP + PAi < 0,  i = 1,…,N
We deal with a finite set of matrices
Common Lyapunov Function
� If P > 0 exists, then the quadratic function
V(x)= xTPx
is a common Lyapunov function for the family of asymptotically stable linear systems
Special Case[1]
� In the special case when the matrices Ai commute, a quadratic common Lyapunov function exists
[1] K. S. Narendra and J. Balakrishnan (1994)
Function f
� Function f is convex differentiable real-valued functional on the space of symmetric matrices with the property
f(R) ≤ 0 if and only if R ≤ 0
Two Examples of Functions f
1. If λmax is a simple eigenvalue, we take
f(R) = λmax(R)
2. f(R) = ‖R₊‖²
where ‖·‖ is the Frobenius norm and R₊ is the projection onto the cone of non-negative definite matrices
Gradient Algorithms: Preliminaries
Gradient:
1. For f(R) = λmax(R): ∇f(R) = e eᵀ (e is the unit eigenvector of R with eigenvalue λmax(R))
2. For f(R) = ‖R₊‖²: ∇f(R) = 2R₊
� For a given matrix A, the function P ↦ f(AᵀP + PA) is convex in P
Gradient Iteration
A1,…,AN – finite family of Hurwitz matrices
P0 – arbitrary symmetric matrix
Gradient iteration:
Pk+1 = Pk − αk ∇P f( Aikᵀ Pk + Pk Aik )
The index sequence ik visits each index 1,…,N infinitely many times
(αk – suitably chosen stepsize)
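A sketch of this gradient iteration with f(R) = ‖R₊‖² and a randomized index schedule; the stability margin term and the Polyak-type stepsize are assumptions filling details not recoverable from the slides:

```python
import numpy as np

def proj_psd(M):
    lam, T = np.linalg.eigh(M)
    return (T * np.maximum(lam, 0.0)) @ T.T

def common_lyapunov(A_list, steps=100_000, margin=1e-3, seed=0):
    # Gradient iteration targeting A_i^T P + P A_i <= -margin*I over the
    # family, using f(R) = ||R_+||^2 (gradient 2 R_+, by the chain rule
    # the gradient with respect to P is 2 (A R_+ + R_+ A^T))
    rng = np.random.default_rng(seed)
    n = A_list[0].shape[0]
    P = np.eye(n)
    for _ in range(steps):
        A = A_list[rng.integers(len(A_list))]      # randomized index schedule
        Rp = proj_psd(A.T @ P + P @ A + margin * np.eye(n))
        f = np.linalg.norm(Rp, "fro") ** 2
        if f > 0.0:
            G = 2.0 * (A @ Rp + Rp @ A.T)          # gradient of f w.r.t. P
            P = P - (f / np.linalg.norm(G, "fro") ** 2) * G   # Polyak step
    return P   # check A_i^T P + P A_i < 0 for all i before using P
```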
Theorem[1]
� Theorem: A solution P > 0, if it exists, is found in a finite number of steps
� Idea of proof: Distance of Pk from solution set decreases at each correction step
[1] D. Liberzon and R. Tempo (2004)
Example: Interval Family
� We study an interval family of triangular Hurwitz matrices
� We have 2^{n(n+1)/2} vertices
Numerical Results
� Results with deterministic gradient
� n = 4 (2¹⁰ inequalities): 10,000 iterations (a few seconds)
� n = 5 (2¹⁵ inequalities): 10,000,000 iterations (a few hours)
� Randomized gradients give faster convergence
� For comparison: the MATLAB™ routine quadstab gets stuck when n = 5
Comments
� Contribution
� Gradient iteration algorithms for solving simultaneously Lyapunov matrix inequalities
� Deterministic convergence for finite families
Open Computational Issues
� Performance comparison of different methods
� Comparison with advanced LMI algorithms
� Addressing existence of solutions
� Optimal schedule of iterations
� Additional requirements on solution
Piecewise Quadratic Lyapunov Functions
Switched Systems: Another Approach
■ Consider the switched system
dx(t)/dt = Aσ(t) x(t)
where x(t) is the state and σ(t) ∈ {1, 2} is the switching signal
■ Beyond common quadratic Lyapunov functions: Piecewise quadratic Lyapunov functions
■ Less conservative but harder to construct
Piecewise Lyapunov Functions[1,2]
■ Given γ > 0 and P1, P2 > 0, suppose that for any x: ‖x‖ = 1
xᵀ(P1A1 + A1ᵀP1)x ≤ −γ if xᵀ(P1 − P2)x ≥ 0
and
xᵀ(P2A2 + A2ᵀP2)x ≤ −γ if xᵀ(P1 − P2)x ≤ 0
[1] M. Wicks and R. DeCarlo (1997)
[2] J. Malmborg, B. Bernhardsson and K. J. Astrom (1996)
Switching Rule
■ Then, we construct a switching rule
σ(t) = arg max_{i=1,2} x(t)ᵀ Pi x(t)
■ The piecewise quadratic Lyapunov function
V(x) = max_{i=1,2} xᵀ Pi x
decreases along trajectories of the system
■ GAS is achieved
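The switching rule and the piecewise quadratic function are immediate to implement; a minimal sketch:

```python
import numpy as np

def sigma(x, P1, P2):
    # Switching rule: index of the largest quadratic form (1 or 2)
    return 1 if x @ P1 @ x >= x @ P2 @ x else 2

def V(x, P1, P2):
    # Piecewise quadratic Lyapunov function max_i x^T P_i x
    return max(x @ P1 @ x, x @ P2 @ x)
```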
Solution via RA[1]
■ This switched system problem is not convex
■ Solution: Combination of randomized algorithms with branch and bound methods
■ Design of a piecewise quadratic Lyapunov function with probability one in a finite number of steps
■ Specific RA and proof techniques are technical
[1] H. Ishii, T. Basar and R. Tempo (2005)
CHAPTER 10
Miscellaneous Topics
Keywords: robustness in statistics, value set predictors
Robustness in Statistics
Robustness in Statistics
� Parameter estimation
yi = θ* + ξi,  i = 1,…,ℓ
� If ξ ∼ N(0,I) i.i.d., then
θ̂ = arg min_θ Σ_{i=1}^ℓ ‖yi − θ‖² = (1/ℓ) Σ_{i=1}^ℓ yi
is the best estimate
� However, if there are outliers, they can be modeled as
ξ ∼ (1−ε)N(0,I) + εN(0,σ²I),  σ² ≫ 1
and then θ̂ may be very bad
Robust Estimates
� Consider
yi = θ* + ξi,  i = 1,…,ℓ
� ξ is i.i.d. with some density fξ(ξ) belonging to a class P
� Then
θG = arg min_θ Σ_{i=1}^ℓ G(yi − θ)
where G(·) is a given function, may be better than θ̂
Robust Estimates – Choice of G
� For instance, a function G(·) that is quadratic near the origin and linear in the tails (Huber-type) is good for normal contaminated distributions
[Figure: G(·) with quadratic center and linear tails]
� θG is intermediate between θ̂ and the median
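A sketch of such a Huber-type G and the resulting location estimate computed by iteratively reweighted averaging; the tuning constant c = 1.345 is a conventional choice, not taken from the slides:

```python
import numpy as np

def G(t, c=1.345):
    # Huber-type function: quadratic near the origin, linear in the tails
    a = np.abs(t)
    return np.where(a <= c, 0.5 * a**2, c * a - 0.5 * c**2)

def theta_G(y, c=1.345, iters=100):
    # Minimize sum G(y_i - theta) by iteratively reweighted averaging
    th = np.median(y)
    for _ in range(iters):
        r = np.abs(y - th)
        w = np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))   # Huber weights
        th = np.sum(w * y) / np.sum(w)
    return th
```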
Robust Estimates – Optimal G
� Fisher information
I(fξ) = ∫ ( f′ξ(t) )² / fξ(t) dt
� The least favorable distribution is given by
f*ξ = arg min_{fξ∈P} I(fξ)
� The best G in the sense of asymptotic variance of θG is
G*(t) = −ln f*ξ(t)
Value Set Predictors
Robust Stability – Value Set
� Consider the uncertain closed-loop polynomial
p(s,q) = a0(q) + a1(q)s + ⋯ + an(q)sⁿ
with q varying in the box Bq = { q ∈ ℝⁿ : ‖q‖∞ ≤ 1 }
� Zero exclusion principle: define the value set as
V(ω) = { p(jω,q) : q ∈ Bq }
� If p(s,0) is stable and
0 ∉ V(ω) for all 0 ≤ ω < ∞
then robust stability holds
Probabilistic Predictors of Value Set
� Let q be a random vector uniformly distributed on Bq
� Define the risk-adjusted value set Vγ(ω) such that
Pr{ p(jω,q) ∈ Vγ(ω) } ≥ 1 − γ
� Vγ(ω) may be much smaller than V(ω)
Example – Interval Polynomial
� Consider an interval polynomial
p(s,a) = a0 + a1 s + a2 s² + ⋯ + sⁿ
where |ai − ai0| ≤ rαi, i = 1,…,n−1
� For fixed frequency, the value set is a rectangle (Kharitonov theorem)
[Figure: rectangular value set V(ω) with the risk-adjusted set Vγ(ω) inside]
Example – Spherical Polynomial
� Consider a spherical polynomial
p(s,a) = a0 + a1 s + a2 s² + ⋯ + sⁿ
where
Σ_{i=0}^{n−1} (ai − ai0)² / αi² ≤ r²
� V(ω) and Vγ(ω) are ellipses, and Vγ(ω) can be rigorously calculated
Example – Spherical Polynomial - 2
[Figure: elliptical value set V(ω) with the risk-adjusted set Vγ(ω) inside]
Linear Functions of Uniform Vectors
� Let q be uniformly distributed on the ball, q ∼ U[Bq], Bq ⊂ ℝⁿ
� Then, for A ∈ ℝ^{m,n},
τ := (Aq)ᵀ(AAᵀ)⁻¹(Aq)
has beta distribution with density
f(τ) = [ Γ(n/2+1) / ( Γ(m/2) Γ((n−m)/2+1) ) ] τ^{m/2−1} (1−τ)^{(n−m)/2},  0 ≤ τ ≤ 1
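A quick Monte Carlo check of this distributional result; it samples q uniformly in the unit ball and compares the empirical mean of τ with the Beta(m/2, (n−m)/2 + 1) mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 8, 3, 200_000

g = rng.standard_normal((N, n))                    # uniform direction ...
u = g / np.linalg.norm(g, axis=1, keepdims=True)
q = u * rng.random(N)[:, None] ** (1.0 / n)        # ... times radial factor

A = rng.standard_normal((m, n))
Aq = q @ A.T
tau = np.einsum("ij,ij->i", Aq @ np.linalg.inv(A @ A.T), Aq)

a, b = m / 2.0, (n - m) / 2.0 + 1.0
print(tau.mean(), a / (a + b))                     # should nearly coincide
```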
CHAPTER 11
Mixed Deterministic/Randomized Methods
Keywords: deterministically tractable and intractable parameters, stability and performance with fixed order controllers
Randomized vs Deterministic Solutions
� Development of RAs motivated by intractability of various problems in systems and control
� RAs provide (efficiently) a probabilistic solution
� The drawback is that this solution is given with a certain accuracy ε and confidence δ
� Deterministic algorithms provide a strong solution, but can be used for a narrow class of problems
� Randomized algorithms give a weaker solution, but can be utilized for a broader set of problems
Mixed Randomized and Deterministic Methods
� Key idea: Development of algorithms which combine randomized and deterministic techniques[1,2]
� Problem P(η,θ) with parameters η and θ
� We divide these parameters into two sets consisting of (deterministically) tractable and intractable parameters
� θ are tractable parameters → deterministic methods
� η are intractable parameters → randomized techniques
[1] Y. Fujisaki, Y. Oishi and R. Tempo (2004)
[2] Y. Fujisaki, Y. Oishi and R. Tempo (2005)
Stabilization with Fixed Order Controller
� Specific problem Π(θ,η)
� Strictly proper SISO plant P(s) = NP(s)/DP(s)
� Fixed order controller
C(s) = ( X(s²) + sY(s²) ) / ( Z(s²) + sV(s²) )
where X(s²), Y(s²), Z(s²) and V(s²) are even polynomials in s
� Objective: Find a stabilizing controller placing the roots of the closed-loop polynomial in the open LHP
Tractable vs Intractable Parameters
� Definition: The (deterministically) tractable parameters are the coefficients θ of X(s²) and the intractable parameters are the coefficients η of Y(s²), Z(s²) and V(s²)
Step 1: We use RAs to compute the coefficients η
Step 2: We use deterministic methods to determine the coefficients θ
� The set of all tractable parameters θ is denoted by Θ
Characterization of Tractable Parameter Set
� Lemma: Suppose that the parameters η are selected according to a RA. Then, the set Θ of all (tractable) stabilizing controller parameters is either empty or is a union of a finite number of polyhedral sets
� Exploitation of this result to determine a stabilizing controller
Polynomial-Time Algorithm
� We construct an algorithm which proceeds in two phases:
1. Computation of a marginally stabilizing controller by means of a method based on matrix inversion
2. Computation of a stabilizing controller using a sensitivity approach
� The resulting algorithm is polynomial-time
Extensions to H∞ Performance
� Consider a performance problem with sensitivity function
S(s) = 1 / ( 1 + P(s)C(s) )
and a (stable) weighting function W(s)
� The objective is to find a controller C(s) satisfying
‖W(s)S(s)‖∞ < 1
Characterization for H∞ Performance
� Theorem: Suppose that the parameters η are chosen with a RA. Then, the set Θ of all tractable controller parameters satisfying the H∞ performance bound is either empty or is given by an intersection over φ ∈ [0, 2π] of unions of sets Γi(φ)
where Γi(φ) is a (possibly unbounded) polyhedral set for fixed φ and i
Further Results and Extensions
� Development of polynomial-time algorithms for H∞ performance based on matrix inversion and sensitivity methods, adding an extra parameter φ ∈ [0, 2π]
� Extensions to uncertain plants where P(s) is replaced by an interval plant P(s,q,r)
� This extension can be carried out invoking some existing results of parametric stability and by means of an additional parameter λ ∈ [0, 1]
CHAPTER 12
Applications of Randomized Algorithms
Keywords: flexible structures, high-speed networks, brushless DC motors, quantized systems
Application of RAs
� Randomized algorithms have been developed for
various specific applications
� Control of flexible structures
� Stability and robustness of high speed networks
� Stability of quantized sampled-data systems
� Brushless DC motors
� …
Application 1
Control of Flexible Structures
Flexible Structure
� Mass spring damper model
� Real parametric uncertainty affecting stiffness and
damping
� Complex unmodeled dynamics (nonparametric)
[Figure: five-mass spring–damper chain with masses m1,…,m5, stiffnesses k1,…,k6 and dampings l1,…,l6]
Flexible Structure
� M–∆ configuration for controlled systems
M(s) = C(sI − A)⁻¹B
∆ = blockdiag( q1 I5, q2 I5, ∆1 ),  q1, q2 ∈ ℝ,  ∆1 ∈ ℝ^{4,4}
Probabilistic Stability Margin
� For fixed ρ, we let
p(ρ) := Pr{ A + B∆C is stable }
� For given p* ∈ [0,1] we define the probabilistic stability margin
ρ(p*) := sup{ ρ : p(ρ) ≥ p* }
� Clearly, ρ(p*) ≥ 1/μ
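A crude Monte Carlo sketch of the estimate of p(ρ) for a single unstructured real block ∆ (the block-structured case requires the sample generation methods of the earlier chapters); the sampling scheme here is an assumption, not the course's exact generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_hat(A, B, C, rho, N=10_000):
    # Monte Carlo estimate of Pr{A + B*Delta*C stable}
    m, n = B.shape[1], C.shape[0]
    stable = 0
    for _ in range(N):
        D = rng.standard_normal((m, n))
        # approximate sampling with spectral norm at most rho
        D *= rho * rng.random() ** (1.0 / (m * n)) / np.linalg.norm(D, 2)
        if np.linalg.eigvals(A + B @ D @ C).real.max() < 0.0:
            stable += 1
    return stable / N
```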
Probability Degradation Function
[Figure: estimated probability degradation function vs. probabilistic radius ρ ∈ [0.35, 0.7]; the estimated probability of stability is close to 1 up to the deterministic margin 1/μ ≈ 0.394 and degrades beyond it]
Application 2
Stability and Performance of High-Speed Networks
Stability and Robustness
� Network topology: source and destination nodes, links (with buffer and capacity)
� Bottleneck link
� Stability and robustness
[Figure: network topology with link capacities C_1,…,C_5]
Symmetric Single Bottleneck
� Parametric stability (discrete time) with real uncertain parameters
� Stability and robustness can be studied in closed form
� Case study with 20 users
� Roots of the closed-loop polynomial (discrete time)
[Figure: closed-loop root locations in the complex plane]
Non-Symmetric Single Bottleneck
� Closed form analysis is not possible
� We use RAs based on MC and QMC
[Figure: estimated network stability vs. capacity for Halton, Sobol, Niederreiter, uniform pdf and optimal grid point sets]
Additional Simulations
[Figure: detail of estimated network stability vs. capacity for the same point sets]
Application 3
Probabilistic Structured Real Stability Radius
Structured Real Stability Radius
� Let A ∈ ℝ^{n,n} be a stable matrix, and consider the perturbed matrix
A(∆) = A + B∆C,  ∆ ∈ B
with B, C of appropriate dimensions
� The real stability radius is the size of the smallest destabilizing perturbation
Probabilistic Stability Radius
� We assume ∆ random, and estimate the probability of stability as a function of the uncertainty radius ρ
p(ρ) = Pr{ A(∆) is stable, ∆ ∈ B(ρ) }
� For given p*, the probabilistic real stability radius is defined as
R(A,p*) := sup{ ρ : p(ρ) ≥ p* }
� We estimate the probabilistic stability radius using randomized algorithms
Numerical Example
� We studied an example with a given stable matrix A ∈ ℝ^{6,6} and B = C = I
[6×6 matrix A omitted]
Numerical Example - 2
� We computed p(ρ) for ρ ∈ [0.01, 0.05] with two different structures:
� ∆ composed of three 2×2 full real blocks
� ∆ composed of a 4×4 and a 2×2 full real block
Numerical Example - 3
[Figure: estimated probability of stability vs. uncertainty radius ρ ∈ [0.01, 0.05] for the two uncertainty structures]
Application 4
Probabilistic Robust Least Squares
Uncertain Least Squares
� Consider the problem
A(∆)x = b(∆)
with x ∈ ℝⁿ, b ∈ ℝᵐ, ∆ ∈ B
� Robust least squares
x̂_RLS = arg min_x max_{∆∈B} ‖A(∆)x − b(∆)‖²
� Probabilistic least squares
x̂ = arg min_x E{ ‖A(∆)x − b(∆)‖² }
Probabilistic Robust Least Squares
� Generate N samples of ∆ and compute
x̂_N = arg min_x Ê_N(x),  with  Ê_N(x) := (1/N) Σ_{i=1}^N ‖A(∆i)x − b(∆i)‖²
� We can compute x̂_N recursively
x̂_{k+1} = x̂_k + Q_{k+1}⁻¹ Aᵀ(∆^{k+1}) ( b(∆^{k+1}) − A(∆^{k+1}) x̂_k )
Q_{k+1} = Q_k + Aᵀ(∆^{k+1}) A(∆^{k+1})
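A sketch of this recursion; sample_A_b stands for drawing ∆ and forming A(∆), b(∆), and the small initial regularization keeping Q invertible is an added assumption:

```python
import numpy as np

def prls(sample_A_b, n, N=10_000, reg=1e-9):
    # Recursive PRLS estimate: Q_{k+1} = Q_k + A^T A,
    # x_{k+1} = x_k + Q_{k+1}^{-1} A^T (b - A x_k)
    Q = reg * np.eye(n)        # small regularization keeps Q invertible
    x = np.zeros(n)
    for _ in range(N):
        A, b = sample_A_b()    # draw Delta^{k+1}, form A(Delta), b(Delta)
        Q += A.T @ A
        x += np.linalg.solve(Q, A.T @ (b - A @ x))
    return x
```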
PRLS Example
� We consider an uncertain matrix A(∆) = A + ∆, ‖∆‖ < 1, with
A = [ 3   1   4
      0   1   1
      2   5   3
      1   4  −2.5 ],   b = [ 3, 1, 2, 0 ]ᵀ
PRLS Example - 2
� We compute the estimates, obtaining
x̂_N = [ 0.1768  −0.1009  0.3448 ]ᵀ  (N = 10,000)
x̂_LS = [ 10.0000  −9.7285  −9.9834 ]ᵀ
x̂_RLS = [ 0.0312  0.2073  −0.2055 ]ᵀ
PRLS Example - 3
� To compare the previous estimates we computed the worst-case and the sample residuals
� Worst-case residuals are defined as
r^wc = max_{∆∈B} ‖A(∆)x* − b(∆)‖²
We obtained
r_N^wc = 2.6532,  r_LS^wc = 18.9394,  r_RLS^wc = 2.5720
PRLS Example - 4
� Statistics for the sample residuals ri = ‖A(∆i)x* − b(∆i)‖²:

          Average     Peak       Sample covariance
r_N       2.2650      2.6344     0.0198
r_LS      11.8962     18.5164    9.4090
r_RLS     2.2848      2.5515     0.0107
Conclusions and Discussion
Conclusions
� A lot of research remains to be done
� Applications: Randomized algorithms and sample generation for specific structures of uncertainty
� Theory: Beautiful and nonstandard math behind randomized algorithms
� Complexity: Studying computational complexity of these algorithms
� Computation: High dimensional problems are ill conditioned
� Other Directions: Randomized algorithms for MPC and FTC
(Controversial) Conclusions
� A number of years ago, closed-form meant writing
down a “good looking” equation on a piece of paper
� Notion of closed-form solution has changed
substantially
� Various experts (rightfully) convinced us that a new
notion of closed-form is to re-write (if possible) a
control problem as convex optimization
(Controversial) Conclusions
� Unfortunately, many important control problems are
not convex
� In such cases, we can either introduce relaxation or
use randomization
� Perhaps the control community should pay more
attention to randomization accepting a different (and
weaker) notion of problem solution
Discussion
Randomized Algorithms for Analysis and Control of Uncertain Systems
by R. Tempo, G. Calafiore and F. Dabbene, with a foreword by M. Vidyasagar, Springer-Verlag, London, 2005
http://staff.polito.it/tempo/