Linear Optimal Control (ccs.neu.edu)
How does this guy remain upright?
Overview
1. expressing a linear system in state space form
2. discrete time linear optimal control (LQR)
3. linearizing around an operating point
4. linear model predictive control
5. LQR variants
6. model predictive control for non-linear systems
A simple system

A mass m attached to a spring (stiffness k) and a damper (damping coefficient b). Let x be the displacement of the mass.

Force exerted by the spring: $-kx$

Force exerted by the damper: $-b\dot{x}$

Force exerted by the inertia of the mass: $m\ddot{x}$
A simple system

Consider the motion of the mass:
• there are no other forces acting on the mass
• therefore, the equation of motion is the sum of the forces: $m\ddot{x} + b\dot{x} + kx = 0$

This is called a linear system. Why? Because the equation of motion is linear in x and its derivatives.
A simple system

Let's express this in ''state space form''. Define the state vector as the position and velocity of the mass:

$\mathbf{x} = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}$

Then the second-order equation of motion $m\ddot{x} + b\dot{x} + kx = 0$ becomes a first-order system:

$\dot{\mathbf{x}} = A\mathbf{x}$

where

$A = \begin{bmatrix} 0 & 1 \\ -k/m & -b/m \end{bmatrix}$
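The state space form above can be checked numerically. A minimal sketch, assuming illustrative values for k, b, and m (the slides leave them symbolic): simulate $\dot{\mathbf{x}} = A\mathbf{x}$ with a small Euler step and watch the damped mass settle.

```python
# Minimal sketch: simulate the mass-spring-damper state space model
# xdot = A x with a small Euler step. k, b, m are assumed values
# (the slides leave them symbolic).
k, b, m = 2.0, 0.5, 1.0

def deriv(x):
    pos, vel = x
    # the two rows of A: [0, 1] and [-k/m, -b/m]
    return [vel, -(k / m) * pos - (b / m) * vel]

def euler_step(x, dt=0.01):
    dx = deriv(x)
    return [x[0] + dt * dx[0], x[1] + dt * dx[1]]

x = [1.0, 0.0]          # released from rest at position 1
for _ in range(1000):   # simulate 10 seconds
    x = euler_step(x)
# the damping should have pulled the mass most of the way to rest
```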
A simple system

Now suppose that you apply a force f with your finger. The equation of motion becomes $m\ddot{x} + b\dot{x} + kx = f$. With input $u = f$, the state space form is:

$\dot{\mathbf{x}} = A\mathbf{x} + Bu, \qquad B = \begin{bmatrix} 0 \\ 1/m \end{bmatrix}$

This is the canonical form for a linear system.
Continuous time vs discrete time

Continuous time: $\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$

Discrete time: $\mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t$

What are A and B now?
Simple system in discrete time

We want something in this form: $\mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t$

One simple approach is a forward Euler approximation with time step $\Delta t$: $\dot{\mathbf{x}} \approx (\mathbf{x}_{t+1} - \mathbf{x}_t)/\Delta t$, which gives

$\mathbf{x}_{t+1} = (I + \Delta t\, A_c)\,\mathbf{x}_t + \Delta t\, B_c\,\mathbf{u}_t$

where $A_c$ and $B_c$ are the continuous-time matrices above.
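The Euler discretization can be sketched directly for the mass-spring-damper. The k, b, m, and time-step values here are assumed for illustration:

```python
# Sketch of a forward-Euler discretization: x[t+1] = (I + dt*A) x[t] + dt*B u[t].
# Matrix values are the mass-spring-damper's, with assumed k, b, m, dt.
dt, k, b, m = 0.01, 2.0, 0.5, 1.0

# continuous-time A, B for state [pos, vel]
A = [[0.0, 1.0],
     [-k / m, -b / m]]
B = [0.0, 1.0 / m]

# discrete-time versions: Ad = I + dt*A, Bd = dt*B
Ad = [[1.0 + dt * A[0][0], dt * A[0][1]],
      [dt * A[1][0], 1.0 + dt * A[1][1]]]
Bd = [dt * B[0], dt * B[1]]

def step(x, u):
    return [Ad[0][0] * x[0] + Ad[0][1] * x[1] + Bd[0] * u,
            Ad[1][0] * x[0] + Ad[1][1] * x[1] + Bd[1] * u]
```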
Exercise: write the DT system dynamics for a mass subject to viscous damping and an external force.
![Page 22: Linear Optimal Control - ccs.neu.edu](https://reader031.vdocument.in/reader031/viewer/2022022709/6219fd66247e03002726fc7c/html5/thumbnails/22.jpg)
Exercise: write DT system dynamics
Something else...
![Page 23: Linear Optimal Control - ccs.neu.edu](https://reader031.vdocument.in/reader031/viewer/2022022709/6219fd66247e03002726fc7c/html5/thumbnails/23.jpg)
Overview
1. expressing a linear system in state space form
2. discrete time linear optimal control (LQR)
3. linearizing around an operating point
4. linear model predictive control
5. LQR variants
6. model predictive control for non-linear systems
![Page 24: Linear Optimal Control - ccs.neu.edu](https://reader031.vdocument.in/reader031/viewer/2022022709/6219fd66247e03002726fc7c/html5/thumbnails/24.jpg)
The linear control problem
Given:
System:
![Page 25: Linear Optimal Control - ccs.neu.edu](https://reader031.vdocument.in/reader031/viewer/2022022709/6219fd66247e03002726fc7c/html5/thumbnails/25.jpg)
The linear control problem
Given:
System:
Cost function:
where:
![Page 26: Linear Optimal Control - ccs.neu.edu](https://reader031.vdocument.in/reader031/viewer/2022022709/6219fd66247e03002726fc7c/html5/thumbnails/26.jpg)
The linear control problem
Given:
System:
Cost function:
where:
![Page 27: Linear Optimal Control - ccs.neu.edu](https://reader031.vdocument.in/reader031/viewer/2022022709/6219fd66247e03002726fc7c/html5/thumbnails/27.jpg)
The linear control problem
Given:
System:
Cost function:
where:
Calculate:
Initial state:
U that minimizes J(X,U)
![Page 28: Linear Optimal Control - ccs.neu.edu](https://reader031.vdocument.in/reader031/viewer/2022022709/6219fd66247e03002726fc7c/html5/thumbnails/28.jpg)
The linear control problem
Given:
System:
Cost function:
where:
Calculate:
Initial state:
U that minimizes J(X,U)
Important problem!
How do we solve it?
One solution: least squares

Roll the dynamics out over the whole horizon and stack the states: $X = G\mathbf{x}_1 + HU$, where X stacks $\mathbf{x}_2, \dots, \mathbf{x}_T$, U stacks $\mathbf{u}_1, \dots, \mathbf{u}_{T-1}$, and G and H are block matrices built from powers of A and B.

Substitute X into J. Minimize by setting dJ/dU = 0. Solve for U.
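The batch recipe (stack, substitute, differentiate, solve) can be sketched for a scalar system with a two-step horizon. All of a, b, q, r, and x0 are assumed values, and the 2x2 normal equations are solved by Cramer's rule:

```python
# Least-squares (batch) optimal control sketch for a scalar system
# x[t+1] = a x[t] + b u[t], horizon T = 2, cost J = sum q*x^2 + r*u^2.
# Stacking gives X = G x0 + H U; setting dJ/dU = 0 yields
# (q H'H + r I) U = -q H' G x0, solved here by Cramer's rule.
a, b, q, r, x0 = 1.0, 1.0, 1.0, 0.1, 1.0

G = [a, a * a]                      # x1, x2 due to x0 alone
H = [[b, 0.0], [a * b, b]]          # influence of u0, u1 on x1, x2

# normal-equation matrix M = q H'H + r I, right-hand side v = -q H'G x0
M = [[q * (H[0][0] ** 2 + H[1][0] ** 2) + r, q * H[1][0] * H[1][1]],
     [q * H[1][0] * H[1][1], q * H[1][1] ** 2 + r]]
v = [-q * (H[0][0] * G[0] + H[1][0] * G[1]) * x0,
     -q * H[1][1] * G[1] * x0]

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
U = [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
     (v[1] * M[0][0] - v[0] * M[1][0]) / det]
```

The check of optimality is that the gradient of J with respect to each control vanishes at the solution.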
What can this do? Solving for the optimal trajectory drives the system from a start state to a goal state at time T. (Image: van den Berg, 2015)
This is cool, but:
– only works for finite horizon problems
– doesn't account for noise
– requires you to invert a big matrix
Let's try to solve this another way

Bellman optimality principle:

$V_t(\mathbf{x}) = \min_{\mathbf{u}}\left[\mathbf{x}^\top Q\mathbf{x} + \mathbf{u}^\top R\mathbf{u} + V_{t+1}(A\mathbf{x} + B\mathbf{u})\right]$

Why is this equation true? $V_t(\mathbf{x})$ is the cost-to-go from state x at time t. The first two terms inside the minimization are the cost incurred on this time step; $V_{t+1}(A\mathbf{x}+B\mathbf{u})$, the cost-to-go from state $(A\mathbf{x}+B\mathbf{u})$ at time t+1, is the cost incurred after this time step.
Let's try to solve this another way

For the sake of argument, suppose that the cost-to-go is always a quadratic function like this: $V_t(\mathbf{x}) = \mathbf{x}^\top P_t \mathbf{x}$, where $P_t$ is a symmetric positive semidefinite matrix.

Then:

$V_t(\mathbf{x}) = \min_{\mathbf{u}}\left[\mathbf{x}^\top Q\mathbf{x} + \mathbf{u}^\top R\mathbf{u} + (A\mathbf{x}+B\mathbf{u})^\top P_{t+1} (A\mathbf{x}+B\mathbf{u})\right]$

How do we minimize the bracketed term? Take the derivative with respect to u and set it to zero:

$2R\mathbf{u} + 2B^\top P_{t+1}(A\mathbf{x}+B\mathbf{u}) = 0$

$\mathbf{u} = -(R + B^\top P_{t+1} B)^{-1} B^\top P_{t+1} A\,\mathbf{x}$

This is the optimal control as a function of state, but it depends on $P_{t+1}$. How do we solve for $P_{t+1}$? Substitute u into $V_t(\mathbf{x})$ and collect terms; the result is again quadratic, $V_t(\mathbf{x}) = \mathbf{x}^\top P_t \mathbf{x}$, with

$P_t = Q + A^\top P_{t+1}A - A^\top P_{t+1}B\,(R + B^\top P_{t+1}B)^{-1}B^\top P_{t+1}A$

Dynamic Riccati Equation
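The backward recursion is easiest to see in the scalar case, where every matrix is a number. A minimal sketch with assumed values:

```python
# Scalar sketch of the backward Riccati recursion with assumed values.
# With V_t(x) = P_t x^2, running P backward in time:
#   K_t = (B P_{t+1} A) / (R + B P_{t+1} B)    -- gain, u = -K_t x
#   P_t = Q + A P_{t+1} A - A P_{t+1} B K_t    -- Riccati update
A, B, Q, R, T = 1.0, 1.0, 1.0, 0.1, 50

P = Q               # terminal value, P_T
for _ in range(T):
    K = (B * P * A) / (R + B * P * B)
    P = Q + A * P * A - A * P * B * K
# after enough backward steps, P settles toward a fixed value
```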
Example: planar double integrator

Air hockey table: a puck of mass m=1 slides with viscous friction coefficient b=0.1, and u is the applied force. The puck has an initial position and initial velocity, and must reach a goal position.

Build the LQR controller for this system, given:
Initial state: the puck's starting position and velocity
Time horizon: T=100
Cost fn: quadratic in state and control
Step 1: Calculate P backward from T: P_100, P_99, P_98, …, P_1

HOW? By iterating the dynamic Riccati equation, starting from the final time step and working backward.
Step 2: Calculate u starting at t=1 and going forward to t=T-1:

$\mathbf{u}_t = -(R + B^\top P_{t+1}B)^{-1}B^\top P_{t+1}A\,\mathbf{x}_t$
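The two steps can be sketched end-to-end for one axis of the puck (the x and y axes decouple, so each is an independent 2-state LQR). The dt, Q, R, and start-state values here are assumed for illustration:

```python
# End-to-end LQR sketch for one axis of the puck: state [pos, vel].
# dt, Q, R, and the start state are assumed for illustration.
m_, b_, dt, T = 1.0, 0.1, 0.1, 100
A = [[1.0, dt], [0.0, 1.0 - dt * b_ / m_]]   # Euler-discretized dynamics
B = [0.0, dt / m_]
Q = [[1.0, 0.0], [0.0, 1.0]]
R = 0.01

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_T(X):
    return [[X[0][0], X[1][0]], [X[0][1], X[1][1]]]

# Step 1: run P backward from P_T = Q, saving the gain K_t at each step.
P = [row[:] for row in Q]
gains = []
for _ in range(T):
    PB = [P[0][0] * B[0] + P[0][1] * B[1],
          P[1][0] * B[0] + P[1][1] * B[1]]
    s = R + B[0] * PB[0] + B[1] * PB[1]             # scalar R + B'PB
    PA = mat_mul(P, A)
    K = [(B[0] * PA[0][0] + B[1] * PA[1][0]) / s,   # K = (R+B'PB)^-1 B'PA
         (B[0] * PA[0][1] + B[1] * PA[1][1]) / s]
    AtPA = mat_mul(mat_T(A), PA)
    AtPB = [A[0][0] * PB[0] + A[1][0] * PB[1],
            A[0][1] * PB[0] + A[1][1] * PB[1]]
    P = [[Q[i][j] + AtPA[i][j] - AtPB[i] * K[j] for j in range(2)]
         for i in range(2)]
    gains.append(K)
gains.reverse()       # gains[t] is now the gain for time step t

# Step 2: roll forward from the start state, applying u_t = -K_t x_t.
x = [1.0, 0.0]        # one unit from the goal (the origin), at rest
for K in gains:
    u = -(K[0] * x[0] + K[1] * x[1])
    x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
         A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
# the puck ends near the goal with near-zero velocity
```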
Example: planar double integrator

(Plots: the puck's trajectory curving from its start state to the goal at the origin, and the control inputs u_x, u_y as functions of time t.)
The infinite horizon case

So far, we have optimized cost over a fixed horizon, T:
– optimal if you only have T time steps to do the job

But what if time doesn't end in T steps?

One idea:
– at each time step, assume that you always have T more time steps to go
– this is called a receding horizon controller
The infinite horizon case

(Plot: elements of the P matrix vs. time step.) Notice that the elements of P stop changing (much) more than 20 or 30 time steps prior to the horizon: P is converging toward a fixed matrix.
– what does this imply about the infinite horizon case?
The infinite horizon case

We can solve for the infinite horizon P exactly: it is the fixed point

$P = Q + A^\top PA - A^\top PB\,(R + B^\top PB)^{-1}B^\top PA$

Discrete Time Algebraic Riccati Equation
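One way to sketch this numerically: iterate the Riccati update until it stops changing. A scalar example with assumed values (production code would use a dedicated DARE solver):

```python
# Sketch: find the infinite-horizon P as the fixed point of the Riccati
# update by iterating until convergence (scalar system, assumed values).
A, B, Q, R = 1.0, 1.0, 1.0, 0.1
P = Q
while True:
    P_new = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)
    done = abs(P_new - P) < 1e-12
    P = P_new
    if done:
        break
K = (B * P * A) / (R + B * P * B)   # the fixed feedback gain, u = -K x
```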
So, what are we optimizing for now? The same quadratic cost as before, but summed over an infinite horizon rather than over a fixed T.
So, how do we control this thing?
Overview
1. expressing a linear system in state space form
2. discrete time linear optimal control (LQR)
3. linearizing around an operating point
4. linear model predictive control
5. LQR variants
6. model predictive control for non-linear systems
Inverted pendulum

EOM for pendulum (with a torque u applied at the base): $ml^2\ddot{\theta} = mgl\sin\theta + u$

How do we get this system in the standard form $\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$?

We can't! The $\sin\theta$ term is not linear in the state.
Linearizing a non-linear system

Idea: use a first-order Taylor series expansion of the original non-linear system $\dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{u})$. Linearize about $(\mathbf{x}^*, \mathbf{u}^*)$:

$\dot{\mathbf{x}} \approx f(\mathbf{x}^*, \mathbf{u}^*) + \frac{\partial f}{\partial \mathbf{x}}\Big|_{\mathbf{x}^*,\mathbf{u}^*}(\mathbf{x} - \mathbf{x}^*) + \frac{\partial f}{\partial \mathbf{u}}\Big|_{\mathbf{x}^*,\mathbf{u}^*}(\mathbf{u} - \mathbf{u}^*)$

The first order terms are the Jacobians of f. We just linearized the system about $\mathbf{x}^*$.
Linearizing a non-linear system

Suppose that $\mathbf{x}^*$ is a fixed point (or a steady state) of the system, so that $f(\mathbf{x}^*, \mathbf{u}^*) = 0$. Then:

$\dot{\bar{\mathbf{x}}} = A\bar{\mathbf{x}} + B\bar{\mathbf{u}}$

where $A = \partial f/\partial\mathbf{x}$ and $B = \partial f/\partial\mathbf{u}$, evaluated at the fixed point, and $\bar{\mathbf{x}} = \mathbf{x} - \mathbf{x}^*$, $\bar{\mathbf{u}} = \mathbf{u} - \mathbf{u}^*$ is a change of coordinates.
Example: pendulum (unit length, unit mass)

Linearize about the upright fixed point $\mathbf{x}^* = (\theta, \dot{\theta}) = (0, 0)$, $u^* = 0$. Near the top, $\sin\theta \approx \theta$, so:

$A = \begin{bmatrix} 0 & 1 \\ g & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$
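The linearization can also be computed numerically by finite differences, which is handy when the Jacobians are tedious to derive by hand. A sketch for the pendulum above (unit mass and length assumed, so the dynamics function is an assumption of this example):

```python
import math

# Sketch: numerically linearize the pendulum about the upright fixed point
# via central finite differences. State x = [theta, theta_dot], with theta
# measured from the upright vertical (unit mass and length assumed).
g = 9.8

def f(x, u):
    theta, theta_dot = x
    return [theta_dot, g * math.sin(theta) + u]

def linearize(x_star, u_star, eps=1e-6):
    n = len(x_star)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x_star); xp[j] += eps
        xm = list(x_star); xm[j] -= eps
        fp, fm = f(xp, u_star), f(xm, u_star)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    up, um = f(x_star, u_star + eps), f(x_star, u_star - eps)
    B = [(up[i] - um[i]) / (2 * eps) for i in range(n)]
    return A, B

A, B = linearize([0.0, 0.0], 0.0)
# expect A close to [[0, 1], [g, 0]] and B close to [0, 1]
```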
![Page 80: Linear Optimal Control - ccs.neu.edu](https://reader031.vdocument.in/reader031/viewer/2022022709/6219fd66247e03002726fc7c/html5/thumbnails/80.jpg)
Overview
1. expressing a linear system in state space form
2. discrete time linear optimal control (LQR)
3. linearizing around an operating point
4. linear model predictive control
5. LQR variants
6. model predictive control for non-linear systems
Linear Model Predictive Control

Drawbacks to LQR: it is hard to encode constraints.
– suppose you have a hard goal constraint?
– suppose you have piecewise linear state and action constraints?

Answer:
– solve control as a new optimization problem on every time step
Linear Model Predictive Control

The problem is the same as before: given the system, the cost function, and the initial state, calculate the U that minimizes J(X,U). But now we're going to solve this problem by expressing it explicitly as a quadratic program.
Quadratic program

Minimize: $\frac{1}{2}\mathbf{x}^\top Q\mathbf{x} + \mathbf{c}^\top\mathbf{x}$

Subject to: $G\mathbf{x} \le \mathbf{h}$ and $E\mathbf{x} = \mathbf{d}$

The constants ($Q$, $\mathbf{c}$, $G$, $\mathbf{h}$, $E$, $\mathbf{d}$) are part of the problem statement; x is the variable. Problem: find the value of x that minimizes the objective subject to the constraints.
The objective is quadratic in x; the inequality constraints and the equality constraints are linear in x.
QP versus Unconstrained Optimization

Drop the constraints from the original QP and we get the unconstrained version: minimize $\frac{1}{2}\mathbf{x}^\top Q\mathbf{x} + \mathbf{c}^\top\mathbf{x}$ alone.

How do we minimize this expression? Set the gradient to zero: $Q\mathbf{x} + \mathbf{c} = 0$, so $\mathbf{x}^* = -Q^{-1}\mathbf{c}$.
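The unconstrained minimization can be sketched in two variables (Q and c are assumed values); the check is that the gradient vanishes at $\mathbf{x}^* = -Q^{-1}\mathbf{c}$:

```python
# Sketch: minimize the unconstrained quadratic (1/2) x'Qx + c'x by setting
# the gradient Qx + c to zero, i.e. x* = -inverse(Q) c (assumed 2x2 values).
Q = [[2.0, 0.5], [0.5, 1.0]]
c = [1.0, -1.0]

# solve Q x = -c by Cramer's rule
det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
x_star = [-(c[0] * Q[1][1] - c[1] * Q[0][1]) / det,
          -(c[1] * Q[0][0] - c[0] * Q[1][0]) / det]

# the gradient at the minimizer should vanish
grad = [Q[0][0] * x_star[0] + Q[0][1] * x_star[1] + c[0],
        Q[1][0] * x_star[0] + Q[1][1] * x_star[1] + c[1]]
```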
Linear Model Predictive Control

Minimize: the quadratic cost $\sum_t \mathbf{x}_t^\top Q\mathbf{x}_t + \mathbf{u}_t^\top R\mathbf{u}_t$

Subject to: the system dynamics $\mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t$, written as equality constraints.

What are the variables? The whole state sequence X and control sequence U.

What other constraints might we want to add? Bounds on the controls, bounds on the states, or a hard constraint on the final state. We can't express these constraints in standard LQR.
Linear MPC Receding Horizon Control

Re-solve the quadratic program on each time step:
– always plan another T time steps into the future
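The receding-horizon loop can be sketched for a scalar plant. The inner solve below is the unconstrained Riccati recursion rather than a constrained QP, and all values are assumed; the point is the structure: re-plan every step, apply only the first action.

```python
# Receding-horizon sketch on a scalar plant: at every time step, solve a
# T-step problem and apply only the first control. A real linear MPC would
# replace the inner solve with a QP that includes the constraints.
a, b, q, r, T = 1.02, 1.0, 1.0, 0.5, 20   # assumed; a > 1: unstable plant

def first_gain():
    # backward pass; the last K computed is the gain for the first step
    P = q
    K = 0.0
    for _ in range(T):
        K = (b * P * a) / (r + b * P * b)
        P = q + a * P * a - a * P * b * K
    return K

x = 5.0
for _ in range(100):
    u = -first_gain() * x    # re-plan, keep only the first action
    x = a * x + b * u
# the receding-horizon feedback stabilizes the unstable plant
```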
Controllability

A system is controllable if it is possible to reach any goal state from any other start state in a finite period of time.

When is a linear system controllable? It's a property of the system dynamics...
Remember the least-squares rollout $X = G\mathbf{x}_1 + HU$? The matrix H is built from the blocks $B, AB, A^2B, \dots$ What property must this matrix have?

The submatrix $[B \;\; AB \;\; A^2B \;\cdots\; A^{n-1}B]$ must be full rank
– i.e. the rank must equal the dimension n of the state space
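The rank test can be sketched for one axis of the discrete double integrator (time step assumed): with 2 states the controllability matrix is 2x2, so full rank reduces to a nonzero determinant.

```python
# Sketch: the controllability matrix [B, AB] for a 2-state system must
# have rank 2; for a 2x2 matrix that means a nonzero determinant.
dt = 0.1
A = [[1.0, dt], [0.0, 1.0]]    # discrete double integrator (assumed dt)
B = [0.0, dt]

AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]
C = [[B[0], AB[0]], [B[1], AB[1]]]           # columns: B, AB
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
controllable = abs(det) > 1e-12
```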
NonLinear Model Predictive Control

Given:
System: $\mathbf{x}_{t+1} = f(\mathbf{x}_t, \mathbf{u}_t)$ (the dynamics are now non-linear)
Cost function: quadratic, as before
Initial state: $\mathbf{x}_1$

Calculate:
U that minimizes J(X,U)
NonLinear Model Predictive Control

Minimize: the cost, as before
Subject to: the dynamics $\mathbf{x}_{t+1} = f(\mathbf{x}_t, \mathbf{u}_t)$

But this is a nonlinear constraint. So how do we solve it now?

Sequential quadratic programming:
– iterative numerical optimization for problems with non-convex objectives or constraints
– similar to Newton's method, but it incorporates constraints
– on each step, linearize the constraints about the current iterate
– implemented by FMINCON in matlab...
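The "linearize the constraints about the current iterate" step can be sketched in one dimension, where it collapses to Newton's method on the constraint. The dynamics and values below are assumptions made for illustration, not the lecture's example:

```python
import math

# Sketch of the linearize-and-re-solve idea behind SQP: find the control u
# satisfying the nonlinear terminal constraint g(u) = x0 + sin(u) - x_goal = 0
# by repeatedly replacing g with its linearization about the current iterate
# and solving the resulting linear problem (assumed one-step dynamics).
x0, x_goal = 0.0, 0.5

def g(u):
    return x0 + math.sin(u) - x_goal

u = 0.0
for _ in range(20):
    slope = math.cos(u)            # dg/du at the current iterate
    u = u - g(u) / slope           # solve the linearized constraint
# u now satisfies the nonlinear constraint: sin(u) = 0.5
```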