Step 3. What should we control (c)? (primary controlled variables y1 = c)


Page 1: Step 3. What should we control (c)? (primary controlled variables y1 = c)

Step 3. What should we control (c)? (primary controlled variables y1=c)

1. CONTROL ACTIVE CONSTRAINTS, c=cconstraint

2. REMAINING UNCONSTRAINED, c=?

What should we control? c = Hy

[Block diagram: measurements y → H → c]

Page 2:

[Figure: block diagram with combination matrix H]

Page 3:

2. UNCONSTRAINED VARIABLES:
- WHAT MORE SHOULD WE CONTROL?
- WHAT ARE GOOD “SELF-OPTIMIZING” VARIABLES?

• Intuition: “Dominant variables” (Shinnar)

• Is there any systematic procedure?

A. Sensitive variables: “Max. gain rule” (gain = minimum singular value)

B. “Brute force” loss evaluation

C. Optimal linear combination of measurements, c = Hy

Page 4:

Optimal operation (unconstrained optimum)

[Figure: cost J as a function of the controlled variable c, with minimum J = Jopt at c = copt]

Page 5:

Optimal operation (unconstrained optimum)

[Figure: cost J vs. controlled variable c, with optimum (copt, Jopt); a disturbance d shifts the optimum, and an implementation error n offsets c]

Two problems:

1. Optimum moves because of disturbances d: copt(d)

2. Implementation error, c = copt + n

Page 6:

Candidate controlled variables c for self-optimizing control

Intuitive

1. The optimal value of c should be insensitive to disturbances (avoid problem 1)

2. Optimum should be flat (avoid problem 2 – implementation error).

Equivalently: the value of c should be sensitive to the degrees of freedom u:

• “Want large gain”, |G|

• Or more generally: maximize the minimum singular value, σ(G)

[Figure: J vs. c for three candidates; a sharp optimum (BAD) gives a large loss for a given implementation error, while flat optima (Good) give a small loss]

Page 7:

Guidelines for CV Selection

• Rule 1: The optimal value for CV c should be insensitive to disturbances d (minimizes effect of setpoint error)

• Rule 2: c should be easy to measure and control (small implementation error n)

• Rule 3: c should be sensitive to changes in u (large gain |G| from u to c) or equivalently the optimum Jopt should be flat with respect to c (minimizes effect of implementation error n)

• Rule 4: For case of multiple CVs, the selected CVs should not be correlated.

Reference: S. Skogestad, “Plantwide control: The search for the self-optimizing control structure”, Journal of Process Control, 10, 487-507 (2000).

Page 8:

Quantitative steady-state: Maximum gain rule

Maximum gain rule (Skogestad and Postlethwaite, 1996): Look for variables that maximize the scaled gain σ(Gs), the minimum singular value of the appropriately scaled steady-state gain matrix Gs from u to c.

[Block diagram: u → G → c]
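As a minimal numerical illustration of the scaled-gain computation, the sketch below uses an invented 2×2 gain matrix and invented spans (none of these numbers come from the lecture):

```python
import numpy as np

# Hypothetical unscaled steady-state gain matrix from inputs u to candidate CVs c
G0 = np.array([[10.0, 2.0],
               [1.0, 5.0]])

# Hypothetical spans (optimal variation + implementation error) for each CV
span = np.array([4.0, 2.0])

# Output scaling S1 = diag(1/span); scaled gain Gs = S1 @ G0
S1 = np.diag(1.0 / span)
Gs = S1 @ G0

# Maximum gain rule: prefer the candidate set with the largest
# minimum singular value of the scaled gain matrix
sigma_min = np.linalg.svd(Gs, compute_uv=False)[-1]
print(sigma_min)
```

For these made-up numbers Gs is symmetric with eigenvalues 3 and 2, so the minimum singular value is 2.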

Page 9:

Proof of maximum gain rule

Local loss
• Consider the Taylor series expansion of J(u,d) around the moving optimal point (uopt(d), d), when u differs from uopt(d)
• Ignoring higher-order terms, the loss is L = J(u,d) − Jopt(d) ≈ (1/2) (u − uopt)T Juu (u − uopt)

[Figure: cost J vs. u, with minimum at uopt]

• Let G be the (unscaled) steady-state gain matrix from u to c

Page 10:

Proof of maximum gain rule (continued)

• Conclusion: To minimize the loss we want to maximize the “scaled gain”, σ(Gs)

Page 11:

Why is Large Gain Good?

[Figure: J and c plotted vs. u; an implementation error c − copt maps through the gain G to a variation of u around uopt, which gives the loss]

With large gain G: even a large implementation error n in c translates into a small deviation of u from uopt(d), leading to a lower loss.
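For a scalar quadratic cost this effect is one line of algebra: an implementation error n in c gives u − uopt = n/G, and hence loss L = ½ Juu (n/G)². The numbers below are purely illustrative:

```python
def loss(n, G, Juu=1.0):
    """Loss from an implementation error n in c, for scalar gain G:
    u - uopt = n / G, so L = 0.5 * Juu * (u - uopt)**2."""
    return 0.5 * Juu * (n / G) ** 2

n = 0.5                       # implementation error in c
print(loss(n, G=1.0))         # small gain  -> loss 0.125
print(loss(n, G=10.0))        # 10x larger gain -> 100x smaller loss, 0.00125
```

The loss scales as 1/G², which is exactly why the maximum gain rule favors sensitive variables.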

Page 12:

Maximum Gain Rule in words

In words, select controlled variables c for which

the gain G (= “controllable range”) is large compared to

its span (= sum of optimal variation and control error)

Select CVs that maximize σ(Gs)

Page 13:

Procedure: Maximum gain rule analysis

1. Process with:
– nd (important) disturbances
– nu unconstrained degrees of freedom
– several candidate sets of controlled variables c

2. From a (nonlinear) model, compute the optimal inputs (uopt) and outputs (copt) for the different disturbances (needs nd + 1 optimizations). For each candidate output ci, compute the optimal variation Δciopt = (ciopt,max − ciopt,min)/2

3. Perturb each input to find, for each candidate c, the gain c = G u (needs nu simulations)

4. Determine the implementation error ni for each candidate output ci (use engineering knowledge)

5. Scaling matrix for outputs: S1 = diag{1/(|Δciopt| + |ni|)}

6. If possible, scale the inputs such that a unit deviation from the optimal value has the same effect on J (this makes Juu = I); otherwise compute Juu by perturbing u around the optimum

7. Select as controlled variables those outputs which have a large scaled gain σ(Gs)
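The numerical part of this procedure (the optimal variations, the gains, the spans, and the final ranking) can be sketched on a toy problem. Everything here is invented for illustration: the cost J(u,d), the two candidate measurement models, and the assumed implementation errors.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented toy cost: one unconstrained input u, one disturbance d
def J(u, d):
    return (u - d) ** 2 + 0.1 * u ** 2

# Two invented candidate CVs
def candidates(u, d):
    return np.array([u + 0.1 * d,      # c1: sensitive to u
                     0.1 * u + d])     # c2: weakly sensitive to u

def uopt(d):
    return minimize_scalar(lambda u: J(u, d)).x

d_nom, d_set = 0.0, [-1.0, 0.0, 1.0]   # nominal point + disturbance range
n_err = np.array([0.1, 0.1])           # assumed implementation errors

# Step 2: optimal variation of each candidate over the disturbances
copts = np.array([candidates(uopt(d), d) for d in d_set])
dcopt = (copts.max(axis=0) - copts.min(axis=0)) / 2

# Step 3: gains by perturbing u at the nominal optimum
u0, du = uopt(d_nom), 1e-4
G0 = (candidates(u0 + du, d_nom) - candidates(u0, d_nom)) / du

# Steps 4-5: scale by span = optimal variation + implementation error
span = dcopt + n_err
Gs = np.abs(G0) / span

# Step 7: pick the candidate with the largest scaled gain
best = int(np.argmax(Gs))
print(Gs, "-> choose c%d" % (best + 1))
```

For this toy model the rule picks c1, which matches intuition: c1 is strongly affected by u and only weakly by d.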

Page 14:

EXAMPLE: Recycle plant (Luyben, Yu, etc.)

[Flowsheet: feed of A (F0), reactor, distillation column; recycle of unreacted A (+ some B); product (98.5% B); streams 1–5]

Given feedrate F0 and column pressure:
Dynamic DOFs: Nm = 5
Column levels: N0y = 2
Steady-state DOFs: N0 = 5 − 2 = 3

Page 15:

Recycle plant: Optimal operation


1 remaining unconstrained degree of freedom

Page 16:

Control of recycle plant: Conventional structure (“two-point”: xD)

[Flowsheet: level controllers (LC) on the reactor and column levels, composition controllers (XC) on xD and xB]

Control active constraints (Mr = max and xB = 0.015) + xD

Page 17:

Luyben rule

Luyben rule (to avoid snowballing):

“Fix a stream in the recycle loop” (F or D)

Page 18:

Luyben rule: D constant

Luyben rule (to avoid snowballing):

“Fix a stream in the recycle loop” (F or D)

[Flowsheet: as above, but with the distillate flow D fixed]

Page 19:

A. Maximum gain rule: Steady-state gain

[Table of steady-state gains omitted]

Luyben rule (D constant): not promising economically

Conventional (two-point, xD): looks good

Page 20:

How did we find the gains in the Table?

1. Find the nominal optimum.

2. Find the (unscaled) gain G0 from the input to the candidate outputs: c = G0 u.
• In this case there is only a single unconstrained input (DOF); choose u = L.
• Obtain G0 numerically by making a small perturbation in u = L while adjusting the other inputs such that the active constraints stay constant (bottom composition fixed in this case).

3. Find the span for each candidate variable.
• For each disturbance di, make a typical change and reoptimize to obtain the optimal ranges Δcopt(di).
• For each candidate output, obtain (estimate) the control error (noise) n.
• The expected variation for c is then: span(c) = Σi |Δcopt(di)| + |n|.

4. Obtain the scaled gain, G = |G0| / span(c).

5. Note: The absolute value (the vector 1-norm) is used here to “sum up” and get the overall span. Alternatively, the 2-norm could be used, which can be viewed as putting less emphasis on the worst case. As an example, assume that the only contribution to the span is the implementation/measurement error, and that the variable we are controlling (c) is the average of 5 measurements of the same y, i.e. c = Σ yi/5, and that each yi has a measurement error of 1, i.e. nyi = 1. Then with the absolute value (1-norm), the contribution to the span from the implementation (measurement) error is span = Σ |nyi|/5 = 5·(1/5) = 1, whereas with the 2-norm, span = sqrt(5·(1/5)²) = 0.447. The latter is more reasonable, since we expect the overall measurement error to be reduced when taking the average of many measurements. In any case, the choice of norm is an engineering decision, so there is not really one that is “right” and one that is “wrong”. We often use the 2-norm for mathematical convenience, but there are also physical justifications (as just given!).

IMPORTANT!
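The 1-norm vs. 2-norm numbers in the note above can be verified directly:

```python
import numpy as np

n_y = np.ones(5)          # 5 measurements of the same y, each with error 1
w = np.ones(5) / 5        # c = average of the 5 measurements

span_1 = np.sum(np.abs(w * n_y))           # 1-norm: 5 * (1/5) = 1
span_2 = np.sqrt(np.sum((w * n_y) ** 2))   # 2-norm: sqrt(5 * (1/5)**2) ~= 0.447
print(span_1, span_2)
```

The 2-norm span of 0.447 reflects the expected error reduction from averaging independent measurements.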

Page 21:

B. “Brute force” loss evaluation: Disturbance in F0

Loss with nominally optimal setpoints for Mr, xB and c

[Table of losses omitted: Luyben-rule vs. conventional structures]

Page 22:

B. “Brute force” loss evaluation: Implementation error

Loss with nominally optimal setpoints for Mr, xB and c

[Table of losses omitted: Luyben-rule structures]

Page 23:

Conclusion: Control of recycle plant

[Flowsheet annotations:]
• Active constraint Mr = Mr,max
• Active constraint xB = xB,min
• Self-optimizing: keep L/F constant (easier than “two-point” control)
• Assumption: minimize energy (V)

Page 24:

Recycle systems:

Do not recommend Luyben’s rule of fixing a flow in each recycle loop

(even to avoid “snowballing”)

Page 25:

Summary: Procedure selection controlled variables

1. Define economics and operational constraints

2. Identify degrees of freedom and important disturbances

3. Optimize for various disturbances

4. Identify active constraints regions (off-line calculations)

For each active constraint region, do steps 5–6:

5. Identify “self-optimizing” controlled variables for remaining degrees of freedom

6. Identify switching policies between regions

Page 26:

Conditions for switching between regions of active constraints

• Within each region of active constraints it is optimal to:

1. Control the active constraints at ca = ca,constraint

2. Control the self-optimizing variables at cso = cso,optimal

• Define in each region i the vector ci of these variables (active constraints and “self-optimizing” variables)

• Keep track of ci in all regions i

• Switch to region i when an element in ci changes sign

• Research issue: can we get lost?
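A schematic of this sign-based switching logic can be written in a few lines. The region definitions and the toy measurement below are purely illustrative, not from a real plant:

```python
import numpy as np

class RegionSwitcher:
    """Illustrative sketch: monitor c_i for every active-constraint region i
    and switch to region i when an element of its c_i changes sign."""

    def __init__(self, c_funs, start):
        self.c_funs = c_funs   # region -> function(measurement) -> c vector
        self.region = start
        self.prev = {}         # last observed sign pattern per region

    def update(self, meas):
        for i, f in self.c_funs.items():
            s = np.sign(f(meas))
            # sign change in another region's c -> switch to that region
            if i in self.prev and i != self.region and np.any(s != self.prev[i]):
                self.region = i
            self.prev[i] = s
        return self.region

# Toy demo: two regions whose c depend on one scalar measurement x (made up)
sw = RegionSwitcher({1: lambda x: np.array([x - 1.0]),
                     2: lambda x: np.array([x + 1.0])}, start=1)
print(sw.update(0.0), sw.update(-2.0))   # stays in 1, then switches to 2
```

When the measurement drops from 0 to −2, region 2's tracked variable crosses zero, triggering the switch; this is the mechanism the slide describes, independent of what the ci physically are.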

Page 27:

Example switching policies – 10 km

1. “Startup”: given speed, or follow a “hare”

2. When heart beat > max or pain > max: switch to a slower speed

3. When close to the finish: switch to max power

Another example: Regions for LNG plant(see Chapter 7 in thesis by J.B.Jensen, 2008)

Page 28:

Comments

• Evaluation of candidates can be time-consuming using the general nonlinear formulation
– Pre-screening using local methods
– Final verification for a few promising alternatives by evaluating the actual loss

• The maximum gain rule is extremely simple and insightful, but “may” lead to a non-optimal set of CVs
– The maximum gain rule assumes that the worst-case setpoint errors Δci,opt(d) for each CV can appear together. In general, the Δci,opt(d) are correlated.

• Exact local methods (worst-case or average-case) are more accurate

• Lower loss: measurement combinations as CVs (next)

Page 29:

[Figure: block diagram with combination matrix H]

Page 30:

Ideal “self-optimizing” variables (unconstrained degrees of freedom)

• Operational objective: minimize the cost function J(u,d)

• The ideal “self-optimizing” variable is the gradient (first-order optimality condition; ref: Bonvin and coworkers): c = Ju = ∂J/∂u

• Optimal setpoint = 0

• BUT: the gradient cannot be measured in practice

• Possible approach: estimate the gradient Ju based on measurements y

• Approach here: look directly for c as a function of the measurements y (c = Hy) without going via the gradient

Page 31:

Optimal measurement combination

So far: H is a “selection matrix”, e.g. [0 0 1 0 0]
Now: H is a full “combination matrix”

• Candidate measurements (y): also include the inputs u

Page 32:

Nullspace method (no noise): H = ?

Basis: Want the optimal value of c to be independent of disturbances

• Find the optimal solution as a function of d: uopt(d), yopt(d)

• Linearize this relationship: Δyopt = F Δd

• Want: Δcopt = H Δyopt = H F Δd = 0

• To achieve this for all values of d: HF = 0, i.e. select the rows of H in the left nullspace of F

• To find an H that satisfies HF = 0 we must require ny ≥ nu + nd

• Optimal when we disregard the implementation error (n)

[Photo caption: “Amazingly simple!” (Sigurd is told how easy it is to find H)]

V. Alstad and S. Skogestad, “Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables”, Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
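A minimal numerical sketch of the nullspace method, using a made-up sensitivity matrix F with ny = 3 ≥ nu + nd = 1 + 2:

```python
import numpy as np
from scipy.linalg import null_space

# Made-up optimal sensitivity matrix F = dyopt/dd, with ny = 3, nd = 2
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, -1.0]])

# Nullspace method: choose H (nc x ny) such that H F = 0,
# i.e. rows of H span the left nullspace of F
H = null_space(F.T).T          # here nc = ny - nd = 1 row
print(np.allclose(H @ F, 0))   # prints True: c = H y is insensitive to d
```

Since F has full column rank, the left nullspace has dimension ny − nd = 1, so H is unique up to scaling, consistent with the requirement ny ≥ nu + nd on the slide.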

Page 33:

Nullspace method continued

• To handle implementation error: Use “sensitive” measurements, with information about all independent variables (u and d)

Unconstrained degrees of freedom:

Page 34:

Optimal measurement combination

2. “Exact local method” (combined disturbances and implementation errors)

Theorem 1. Worst-case loss for a given H (Halvorsen et al., 2003). Applies to any H (selection/combination).

Theorem 2 (Alstad et al., 2009): The optimization problem to find the optimal combination is convex.

• V. Alstad, S. Skogestad and E.S. Hori, “Optimal measurement combinations as controlled variables”, Journal of Process Control, 19, 138-148 (2009).

Page 35:

Toy Example

Reference: I. J. Halvorsen, S. Skogestad, J. Morud and V. Alstad, “Optimal selection of controlled variables”, Industrial & Engineering Chemistry Research, 42 (14), 3273-3284 (2003).

Page 36:

Toy Example: Measurement combinations

Page 37:

Toy Example

Page 38:

Example: CO2 refrigeration cycle

J = Ws (work supplied)
DOF = u (valve opening, z)
Main disturbances:
• d1 = TH
• d2 = TC,s (setpoint)
• d3 = UAloss

What should we control? The high pressure ph?

Page 39:

CO2 refrigeration cycle

Step 1. One (remaining) degree of freedom (u = z)

Step 2. Objective function: J = Ws (compressor work)

Step 3. Optimize operation for the disturbances (d1 = TH, d2 = TC,s, d3 = UAloss)
• Optimum always unconstrained

Step 4. Implementation of optimal operation
• No good single measurements (all give large losses): ph, Th, z, …
• Nullspace method: need to combine nu + nd = 1 + 3 = 4 measurements to get zero disturbance loss
• Simpler: try combining two measurements. Exact local method:
– c = h1 ph + h2 Th = ph + k Th; k = −8.53 bar/K
• Nonlinear evaluation of loss: OK!

Page 40:

CO2 cycle: Maximum gain rule

Page 41:

Refrigeration cycle: Proposed control structure

Control c = “temperature-corrected high pressure”

Page 42:

The rest is not part of the syllabus for the course

Page 43:

2. General (with noise): “Exact local method”

Loss L = J(u,d) − Jopt(d). Keep c = Hy constant, where y = Gy u + Gyd d + ny

Optimization problem for the optimal combination: minimize the (worst-case) loss over H

Theorem 1. Worst-case loss for a given H (Halvorsen et al., 2003). Applies to any H (selection/combination).

• I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, “Optimal selection of controlled variables”, Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).

Page 44:

V. Alstad, S. Skogestad and E.S. Hori, “Optimal measurement combinations as controlled variables”, Journal of Process Control, 19, 138-148 (2009).

V. Kariwala, Y. Cao and S. Janardhanan, “Local self-optimizing control with average loss minimization”, Ind. Eng. Chem. Res., in press (2008).

F = optimal sensitivity matrix = dyopt/dd

Theorem 2. Explicit formula for the optimal H (Alstad et al., 2008).

Theorem 3 (Kariwala et al., 2008).
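The slide does not reproduce the explicit formula, so the sketch below uses one common statement of it: the optimal combination solves min ||H Ỹ||_F subject to H Gy = Juu^(1/2), with Ỹ = [F Wd, Wn]. All matrices here (Gy, F, the scalings) are randomly invented, and Juu = I is assumed for simplicity; treat this as an illustration of the formula's shape, not the lecture's data.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nu, nd = 5, 2, 2

# Invented local data: measurement gain Gy (ny x nu), optimal sensitivity
# F = dyopt/dd (ny x nd); scalings Wd = I, Wn = 0.1 I; and Juu = I
Gy = rng.normal(size=(ny, nu))
F = rng.normal(size=(ny, nd))
Y = np.hstack([F, 0.1 * np.eye(ny)])   # Ytilde = [F Wd, Wn]

# Explicit solution of  min ||H Y||_F  s.t.  H Gy = I:
#   H^T = (Y Y^T)^-1 Gy (Gy^T (Y Y^T)^-1 Gy)^-1
YYinv = np.linalg.inv(Y @ Y.T)
H = (YYinv @ Gy @ np.linalg.inv(Gy.T @ YYinv @ Gy)).T

print(np.allclose(H @ Gy, np.eye(nu)))   # constraint holds; prints True
```

Any other H satisfying the same constraint differs by rows in the left nullspace of Gy, and such perturbations can only increase ||H Ỹ||_F, which is what makes this H optimal.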

Page 45:

Toy Example again

Reference: I. J. Halvorsen, S. Skogestad, J. Morud and V. Alstad, “Optimal selection of controlled variables”, Industrial & Engineering Chemistry Research, 42 (14), 3273-3284 (2003).

Page 46:

B2. Exact local method (with noise)

Page 47:

B2. Exact local method, 2 measurements

Combined loss for disturbances and measurement errors

Page 48:

B2. Exact local method, all 4 measurements

Page 49:

Current research (Henrik Manum): Self-optimizing control and explicit MPC

• Our results on optimal measurement combinations (keep c = Hy constant):
– Nullspace method for n = 0 (Alstad and Skogestad, 2007)
– Explicit expression (“exact local method”) for n ≠ 0 (Alstad et al., 2008)

• Observation 1: Both results are exact for quadratic optimization problems

• Observation 2: MPC can be written as a quadratic optimization problem, and the optimal solution is to keep c = u − Kx constant

• Must be some link!

Page 50:

Quadratic optimization problems: noise-free case (n = 0)

• Reformulation of the nullspace method of Alstad and Skogestad (2007):
– Can add linear constraints (c = Hy) to the quadratic problem with no loss
– Need ny ≥ nu + nd. H is unique if ny = nu + nd (nym = nd)
– H may be computed from the nullspace method, HF = 0

• V. Alstad and S. Skogestad, “Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables”, Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).

Page 51:

Quadratic optimization problems: with noise / implementation error (n ≠ 0)

• Reformulation of the exact local method of Alstad et al. (2008):
– Can add linear constraints (c = Hy) with minimum loss
– Have an explicit expression for H from the “exact local method”

• V. Alstad, S. Skogestad and E.S. Hori, “Optimal measurement combinations as controlled variables”, Journal of Process Control, 19, 138-148 (2009).

Page 52:

Optimal control / Explicit MPC

• Treat the initial state x0 as the disturbance d. Discrete-time constrained MPC problem:

• In each active-constraint region this becomes an unconstrained quadratic optimization problem ⇒ can use the above results to find the linear constraints

1. State feedback with no noise (LQ problem)
Measurements: y = [u x]
Linear constraints: c = H y = u − K x

• nx = nd: No loss (solution unchanged) by keeping c = 0, so u = Kx is optimal!

• Can find the optimal feedback K from the “nullspace method” (same result as solving the Riccati equations)

• NEW INSIGHT FOR EXPLICIT MPC: Use the change in sign of c for neighboring regions to decide when to switch regions

H. Manum, S. Narasimhan and S. Skogestad, “A new approach to explicit MPC using self-optimizing control”, ACC, Seattle, June 2008.

Page 53:

Explicit MPC, state feedback: second-order system

[Figure: input and state trajectories vs. time [s], and the phase-plane trajectory]

Page 54:

Optimal control / Explicit MPC

2. Output feedback (not all states measured), no noise

• Option 1: State estimator

• Option 2: Direct use of measurements for feedback
“Measurements”: y = [u ym]
Linear constraints: c = H y = u − K ym

• No loss (solution unchanged) by keeping c = 0 (constant), so u = K ym is optimal, provided we have enough independent measurements: ny ≥ nu + nd ⇒ nym ≥ nd

• Can find the optimal feedback K from the “self-optimizing nullspace method”

• Can also add previous measurements, but this gives some loss due to causality (cannot affect past outputs)

H. Manum, S. Narasimhan and S. Skogestad, “Explicit MPC with output feedback using self-optimizing control”, IFAC World Congress, Seoul, July 2008.

Page 55:

Explicit MPC, output feedback: second-order system

[Figure: trajectories vs. time [s], compared with state feedback]