
MPC on a Chip

EPSRC Project Kick-off Meeting, Imperial College, London, 16 Oct 2009

Keck-Voon LING (ekvling@ntu.edu.sg)
School of Electrical and Electronic Engineering
Nanyang Technological University (NTU), Singapore

Outline

• “MPC on a Chip” Toolbox
– a rapid prototyping tool
– QP solvers (interior point and active set methods) on an FPGA
– reduced precision (16-bit mantissa) still works
– interior point vs active set methods

• Multiplexed MPC
– a strategy to reduce online computation
– ideally suited for FPGA implementation, potential for pipelining
– solving Ax=b problems the Multiplexed MPC way
– control-theoretic framework to analyse/design iterative solutions of Ax=b
– design and implementation of reduced precision Ax=b solvers

“MPC on a Chip” Toolbox

– Create an “MPC Toolbox”
– take an MPC problem from design to embedded implementation, especially on FPGA
– leading to a wider application of embedded MPC technology
– options for embedded MPC:
• Processor / DSP
• Programmable h/w / ASIC
– embedded => application specific

• To promote MPC technology for embedded control

Prototyping Environment

• Matlab/Handel-C
– Algorithms are first developed in Matlab
– The verified Matlab code is translated into Handel-C (FPGA implementation) or C (Microblaze)
– The Handel-C / C is compiled into a bit file and downloaded to the FPGA / Microblaze
– Matlab/Handel-C co-simulation or Matlab/Microblaze co-simulation

Fig. 2 Prototyping of MPC on a Chip

MPC implemented in MATLAB

MPC implemented in FPGA

Plant Simulation

MPC implemented in Microblaze

1) During every iteration, the PC passes MPC parameters down to the FPGA via UART

2) The MPC calculations are performed on the FPGA

3) The FPGA sends results back to the PC via UART

4) Two variants of the MPC implementation on the FPGA: one using Handel-C, the other using the Microblaze processor

A Simple Test Script

Swing-up and Balancing of an Inverted Pendulum

RC10 board

FPGA Implementation
• Implemented on a Celoxica RC203 prototyping board with a Xilinx XC2V3000 FPGA, coded with Handel-C.
• IEEE single-precision floating point is used for both ASM and IPM
• Block RAMs are used for storage of the QP problem matrices and intermediate calculation results
• Sequential implementation of both ASM and IPM
• ASM used more storage than IPM, as a larger linear system needs to be formed and solved

Resources     IPM           ASM
Clock rate    25 MHz        25 MHz
LUTs          8,755 (30%)   8,773 (30%)
FFs           2,153 (7%)    3,540 (12%)
Block RAMs    22 (22%)      59 (61%)

Comparison of IPM on Various Embedded Platforms
• IPM performance measured on various embedded platforms
• On microcontrollers like the STM32, soft-core microprocessors like Microblaze, TI DSPs, Pentium processors, and FPGAs
• Tested on 100 randomly generated QP problems of size nv=7, mc=71
• All clock rates adjusted to 25 MHz

IPM with Reduced Precision: Results
• Average number of iterations

IPM with Reduced Precision: Results
• Distribution of relative error

Percentage of QPs solved with relative error below each threshold, by mantissa length:

Mantissa   < 1e-4   < 1e-2   < 1e-1   QP unsolved
23 bits    38.8%    84.4%    97.4%    0.5%
20 bits    28.2%    71.3%    94.4%    0.63%
16 bits    16.8%    41.8%    80.1%    1.88%
12 bits    1.66%    14.8%    44.6%    12.0%

Storage (in terms of the word length of variables) could be reduced by up to 30% while still maintaining reasonably good QP solutions.
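The reduced-precision behaviour above can be emulated in software by rounding each double to a shorter mantissa. A minimal sketch, assuming truncation (rather than IEEE round-to-nearest) is an acceptable approximation:

```python
import math

def truncate_mantissa(x: float, bits: int) -> float:
    """Keep only `bits` bits of mantissa, discarding the rest.

    Emulates a reduced-precision float: split x into significand and
    exponent, scale so `bits` fractional bits survive, then truncate.
    """
    if x == 0.0 or not math.isfinite(x):
        return x
    mant, exp = math.frexp(x)          # x = mant * 2**exp, 0.5 <= |mant| < 1
    scaled = math.trunc(mant * (1 << bits))
    return math.ldexp(scaled / (1 << bits), exp)

# The relative error grows as the mantissa shrinks:
x = math.pi
for bits in (23, 20, 16, 12):
    approx = truncate_mantissa(x, bits)
    print(bits, abs(approx - x) / x)
```

Running the QP solver's arithmetic through such a rounding function gives a quick software estimate of how much precision a hardware implementation can shed.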

IPM with Reduced Precision on MPC Application: Results
• Control plots using IPM with 23-, 20-, 16-, and 12-bit mantissas

IPM vs ASM: Convergence Speed (# of iterations)
• In general, for IPM, the number of iterations is around 11-14 and is not sensitive to the problem size.
• In contrast, for ASM, the number of iterations increases roughly linearly with the problem size.

[Figure: number of iterations (5-50) vs number of decision variables nv (3-13) and vs number of constraints mc (30-110), for ASM and IPM]

IPM vs ASM, Complexity: Storage, # of Operations per Iteration

• The number of arithmetic operations for solving the linear system of equations is shown in the table, for Gauss-Jordan elimination with pivoting.

• For ASM, the computation cost for solving the system of equations is dominating.

• However, for IPM, the fixed cost dominates. From the expressions in the table, for IPM, the ratio (time spent solving Ax=b / total time to solve a QP problem) is low. This is confirmed by the experimental results (right plot).


IPM vs ASM, Complexity: No. of Operations Per Iteration

• The overall performance can be quantified by the computation time required to obtain the solution of a QP problem.

• As seen from the above table, there are several factors that can affect the performance of the ASM and IPM algorithms.

• Therefore, neither ASM nor IPM is always superior to the other.

Overall Performance: Computation Time

       No. of iterations                      Solving system of linear equations   Other dominating costs
IPM    Insensitive to problem size            Increases with problem size          Fixed cost; increases with problem size
ASM    Increases linearly with problem size   Increases with problem size          None

• When the problem is small, ASM outperforms IPM, since the number of iterations required by ASM is small.

• But when the problem is large, the number of ASM iterations becomes large as well. In this case, IPM is more efficient.

• This suggests that, when the problem is large, IPM is preferred.
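The crossover can be illustrated with a toy cost model. All constants below are assumptions chosen for the sketch, not measured values; only the shapes (ASM iterations growing linearly with problem size, IPM iterations roughly constant but each more expensive) come from the slides:

```python
# Illustrative cost model for the ASM-vs-IPM crossover.
# Constants are assumptions; only the growth shapes follow the slides.

def asm_time(n, per_iter=1.0):
    iters = 2 * n                      # assumed: iterations grow linearly with n
    return iters * per_iter

def ipm_time(n, per_iter=4.0):
    iters = 12                         # roughly constant (slides report 11-14)
    return iters * per_iter + 0.1 * n  # assumed mild growth in per-iteration cost

# ASM wins for small problems, IPM for large ones:
small, large = 5, 100
print(asm_time(small) < ipm_time(small))   # ASM cheaper when n is small
print(asm_time(large) > ipm_time(large))   # IPM cheaper when n is large
```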

Overall Performance: Computation Time

[Figure: computation time (ms, 0-250) vs number of decision variables nv (3-13) and vs number of constraints mc (30-110), for ASM and IPM]

Multiplexed MPC

Standard MPC - limitations

– Constrained Model Predictive Control (MPC) needs to solve constrained optimization problems on-line, whose computational complexity grows with m (the number of control inputs) and N (the control horizon)

– Embedded applications have limited computational resources

– Systems with fast dynamics require a short computational time to find a solution

Basic idea of MMPC

[Diagram: “Synchronous” MPC (SMPC) updates all inputs u1, u2 together once per period T; Multiplexed MPC (MMPC) updates u1 and u2 alternately, one every T/2]

View as a periodic SISO plant: the original multi-input model

x_{k+1} = A x_k + \sum_{j=1}^{m} B_j \Delta u_{j,k}

becomes

x_{k+1} = A x_k + B_{\sigma(k)} \Delta \tilde{u}_k, \quad \text{where } \sigma(k) = (k \bmod m) + 1 \text{ and } \Delta \tilde{u}_k = \Delta u_{\sigma(k),k}
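The switching rule sigma(k) = (k mod m) + 1 can be sketched directly. A minimal simulation, assuming a scalar state and two input channels with made-up A and B_j values:

```python
# Minimal sketch of the multiplexed update x_{k+1} = A*x_k + B[sigma(k)]*du_k,
# with sigma(k) = (k mod m) + 1. The A and B values are illustrative assumptions.

A = 0.9
B = {1: 0.5, 2: 0.2}           # one input channel per subsystem
m = 2

def sigma(k):
    return (k % m) + 1         # cycles 1, 2, 1, 2, ...

x = 1.0
du = [0.1, -0.3, 0.2, 0.0]     # one input move per sampling instant
for k, d in enumerate(du):
    x = A * x + B[sigma(k)] * d   # only channel sigma(k) moves at step k

print([sigma(k) for k in range(4)])   # [1, 2, 1, 2]
print(x)
```

Only one channel's move enters the state update at each instant, which is exactly what makes each on-line optimization smaller than the full multi-input problem.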

Features of MMPC
• Divide the original MPC problem into a sequence of smaller optimizations
• Solve each subsystem sequentially
– Reduced computational complexity
– Scalable to many inputs
• Update each subsystem's controls as soon as the solution is available; all the inputs are updated sequentially
• Each control update takes account of all the information available
• Periodic control

Computational steps in MPC

• Standard MPC set-up. At time step k, the predictions are

Y_k = \Phi x_k + G U

and the QP is

QP_k : \min_U \; U^T (G^T G + \lambda I) U - 2 U^T G^T (W - \Phi x_k), \quad \text{subject to } E U \le F

1. Measure x_k
2. Form G^T (W - \Phi x_k)
3. Solve the QP
4. Receding horizon: apply u_k, the first move of U
5. Repeat from step (1)
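Ignoring the inequality constraints EU ≤ F, the QP above reduces to the linear system (GᵀG + λI)U = Gᵀ(W − Φx_k). A minimal sketch of that unconstrained solve; every matrix and vector below is an illustrative assumption, not a real MPC model:

```python
# Unconstrained MPC QP solution: (G^T G + lambda*I) U = G^T (W - Phi*x_k).
# All numbers are small illustrative assumptions.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

G = [[1.0, 0.0], [1.0, 1.0]]           # prediction matrix (assumed)
lam = 0.1                              # regularisation weight (assumed)
W_minus_Phix = [1.0, 2.0]              # W - Phi*x_k at this step (assumed)

Gt = transpose(G)
H = matmul(Gt, G)
for i in range(len(H)):
    H[i][i] += lam                     # H = G^T G + lambda*I
rhs = [sum(Gt[i][j] * W_minus_Phix[j] for j in range(2)) for i in range(2)]
U = solve(H, rhs)
print(U)
```

In the receding-horizon loop, steps 1-5 repeat this solve every sample with a fresh x_k; the ASM and IPM solvers on the FPGA handle the constrained case.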

Multiplexed MPC Computational Steps

MMPC re-groups U into [u_0^T \; u_1^T \; \cdots \; u_{m-1}^T]^T and solves for u_0, u_1, \ldots, u_{m-1} sequentially at each time step k.

Thus, the MMPC prediction equation becomes

Y_k = \Phi x_k + g_0 u_0 + \sum_{i=1}^{m-1} g_i u_i

and the QP becomes

\min_{u_0} \; u_0^T (g_0^T g_0 + \lambda I) u_0 - 2 u_0^T g_0^T \Big(W - \Phi x_k - \sum_{i=1}^{m-1} g_i u_i\Big), \quad \text{subject to } E u_0 \le F

g_0^T g_0 is a smaller matrix and this is a smaller QP, hence the reduced on-line computational load in MMPC.

The term \sum_{i=1}^{m-1} g_i u_i can be computed, and the next QP prepared, as soon as u_0 is obtained.

MMPC on FPGA

Solving Mu=b iteratively -- the MMPC way

• Let r_k = b - M u_k
• Solve Mu=b iteratively the MMPC way
• Stability can be ensured by design

Since u_{k+1} = u_k + \Delta u_k,

r_{k+1} = b - M u_{k+1} = b - M (u_k + \Delta u_k) \;\Rightarrow\; r_{k+1} = r_k - M \Delta u_k

Plant model: r_{k+1} = r_k - M_{\sigma(k)} \Delta u_k, \quad \sigma(k+m) = \sigma(k)

MMPC law: \Delta u_k = K_{\sigma(k)} r_k

Closed loop: r_{k+1} = (I - M_{\sigma(k)} K_{\sigma(k)}) r_k
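The recursion r_{k+1} = r_k − M_σ(k) Δu_k can be checked numerically. A minimal sketch with an assumed 2x2 M, updating one component of u per step, so M_σ(k) is one column of M and the gain K_σ(k) = 1/M[j][j] zeroes the j-th residual component:

```python
# Numerical check of r_{k+1} = r_k - M_sigma(k) * du_k for a 2x2 system.
# M and b are illustrative assumptions (diagonally dominant so this converges).

M = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 2.0]
u = [0.0, 0.0]

def residual(u):
    return [b[i] - sum(M[i][j] * u[j] for j in range(2)) for i in range(2)]

r = residual(u)
for k in range(20):
    j = k % 2                          # sigma(k): cycle through the subsystems
    du = r[j] / M[j][j]                # gain K_sigma(k) applied to the residual
    u[j] += du
    r = [r[i] - M[i][j] * du for i in range(2)]   # r_{k+1} = r_k - M_j * du

# After enough sweeps, u approaches the true solution of M u = b
print(u)
print(max(abs(x) for x in r))
```

The closed-loop factor (I − M_σ K_σ) contracts the residual each sweep, which is the "stability can be ensured by design" claim in miniature.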

Solving Mu=b iteratively -- MMPC way

• Jacobi’s method: M = U + D + L, K = D^{-1}; solve each subsystem simultaneously

• Gauss-Seidel: K = (D + L)^{-1}; solve each subsystem sequentially, similar to MMPC

• Multiplexed MPC: a generalised scheme for solving Mu=b iteratively (?)
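The two splittings can be compared side by side. A minimal sketch using an assumed diagonally dominant M, where Jacobi updates all components from the previous iterate and Gauss-Seidel updates them sequentially as MMPC does:

```python
# Jacobi vs Gauss-Seidel for M u = b, with M = U + D + L (upper/diagonal/lower).
# The matrix and right-hand side are illustrative assumptions.

M = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 2.0]
n = 2

def jacobi_step(u):
    # All components updated simultaneously from the previous iterate.
    return [(b[i] - sum(M[i][j] * u[j] for j in range(n) if j != i)) / M[i][i]
            for i in range(n)]

def gauss_seidel_step(u):
    # Components updated sequentially, each using the freshest values.
    u = u[:]
    for i in range(n):
        u[i] = (b[i] - sum(M[i][j] * u[j] for j in range(n) if j != i)) / M[i][i]
    return u

uj = [0.0, 0.0]
ug = [0.0, 0.0]
for _ in range(30):
    uj = jacobi_step(uj)
    ug = gauss_seidel_step(ug)

print(uj)   # both approach the true solution [1/6, 1/3]
print(ug)
```

Gauss-Seidel's sequential, freshest-value updates mirror MMPC's one-subsystem-at-a-time structure, which is why the slides draw the analogy.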

Mu = b in MPC/QP problems

• M changes in some known manner in the QP iterations

• Use robust control idea to solve Mu = b

Reduced precision Mu=b solver
