Control Systems II (EEEN30041)

Lecturer: Zhengtao Ding
Email: [email protected]
Webpage: http://personalpages.manchester.ac.uk/staff/Zhengtao.Ding

© 2008, 2007 Zhengtao Ding, University of Manchester



Course Format and Assessment

The course will be delivered in 20 lectures with 4 tutorials and a computer-based assignment. Assessment will be based on an assignment (10%) and an examination (90%). Course material will include online lecture notes and handouts.

Learning Outcomes

• Describe dynamic systems in continuous-time state space models, and dynamic models in discrete-time forms, for industrial control applications.

• Design controllers in state space using state-feedback and output-feedback methods, including linear optimal control.

• Design and implement controllers in discrete-time using digital control, including implementing PID in digital control.

• Appreciate important industrial control applications.


References

Dorf and Bishop, Modern Control Systems

Stefani, Shahian, Savant and Hostetter, Design of Feedback Control Systems

Kailath, Linear Systems

Ogata, Modern Control Engineering

Kuo, Automatic Control Systems

Nise, Control Systems Engineering


Course Structure

Part 1. State space model and control design

Why state space control? Advantages and disadvantages of state space control design.

Part 2. Digital control

Why digital control?


1. State Space Variables

The state variables describe the future response of a system, given the present state, the excitation inputs, and the equations describing the dynamics.

Example 1. Consider a mass-spring-damper system

M \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + k y = u(t)   (1)

where M is the mass, b the friction coefficient, and k the spring constant.

Taking the position and velocity as the state variables, ie,

x_1 = y
x_2 = \frac{dy}{dt}


the dynamics can then be written as

\frac{dx_1}{dt} = x_2
\frac{dx_2}{dt} = -\frac{b}{M} x_2 - \frac{k}{M} x_1 + \frac{1}{M} u

Question: Why can y and \frac{dy}{dt} be taken as the state variables?

Example 2. RLC circuit

i_c = C \frac{dv_c}{dt} = -i_L + u(t)   (2)
L \frac{di_L}{dt} = -R i_L + v_c   (3)

with the voltage across R as the output, v_o = R i_L.


By taking x_1 = v_c and x_2 = i_L, we have

\frac{dx_1}{dt} = -\frac{1}{C} x_2 + \frac{1}{C} u(t)   (4)
\frac{dx_2}{dt} = \frac{1}{L} x_1 - \frac{R}{L} x_2   (5)
y = R x_2   (6)

Question: Is the set of state variables unique?

An alternative choice can be x_1 = v_c and x_2 = v_L.

Question: Consider a different RLC circuit and obtain its dynamic model.


2. State Differential Equations

State Space Equation

A set of first order differential equations can be written as

\dot{x}_1 = a_{11} x_1 + a_{12} x_2 + \ldots + a_{1n} x_n + b_{11} u_1 + \ldots + b_{1m} u_m
\dot{x}_2 = a_{21} x_1 + a_{22} x_2 + \ldots + a_{2n} x_n + b_{21} u_1 + \ldots + b_{2m} u_m
  \vdots
\dot{x}_n = a_{n1} x_1 + a_{n2} x_2 + \ldots + a_{nn} x_n + b_{n1} u_1 + \ldots + b_{nm} u_m   (7)


We can put the above in the matrix form

\frac{d}{dt} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} b_{11} & \ldots & b_{1m} \\ \vdots & \ddots & \vdots \\ b_{n1} & \ldots & b_{nm} \end{bmatrix} \begin{bmatrix} u_1 \\ \vdots \\ u_m \end{bmatrix}   (8)

The column vector consisting of the state variables is called the state vector, and it is written

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}   (9)

We may also write x ∈ R^n, as there are n state variables. Similarly we have the input vector. Using the state vector and input vector, we have the compact notation of the state differential equation

\dot{x} = A x + B u

where A is an n × n matrix or A ∈ R^{n×n}, and B is an n × m matrix or B ∈ R^{n×m}. We can also write the output in the compact form

y = C x + D u

The dimensions of C and D depend on the dimension of the column vector y. Putting the above two equations together, we have the most common state space equation

\dot{x} = A x + B u   (10)
y = C x + D u   (11)

Therefore a dynamic system is characterised by the four matrices {A, B, C, D}.


Example 3. Write down the state space equation for the RLC system shown in Example 2.

\dot{x} = \begin{bmatrix} 0 & -\frac{1}{C} \\ \frac{1}{L} & -\frac{R}{L} \end{bmatrix} x + \begin{bmatrix} \frac{1}{C} \\ 0 \end{bmatrix} u(t)   (12)
y = [0 \; R] x   (13)

For this example, we have

A = \begin{bmatrix} 0 & -\frac{1}{C} \\ \frac{1}{L} & -\frac{R}{L} \end{bmatrix}, \quad B = \begin{bmatrix} \frac{1}{C} \\ 0 \end{bmatrix}, \quad C = [0 \; R], \quad D = 0

Question: Determine the state space matrices A, B, C, D for Example 1.


Solution of state space equation

Consider the first order differential equation

x = ax+ bu (14)

The solution is given by

x(t) = eatx(0) +∫ t

0

ea(t−τ)bu(τ)dτ (15)

With the matrix exponential function defined by

eAt = I +At+A2t2

2!+ . . .+

Aktk

k!+ . . . (16)


we have the solution of the state space equation given by

x(t) = e^{At} x(0) + \int_0^t e^{A(t-\tau)} B u(\tau) \, d\tau   (17)

Sometimes we denote Φ(t) = e^{At}, and Φ(t) is referred to as the state transition matrix. Hence the above equation can be written as

x(t) = Φ(t) x(0) + \int_0^t Φ(t-\tau) B u(\tau) \, d\tau   (18)

It can be shown that Φ(t) equals the inverse Laplace transform of (sI − A)^{-1}.
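As a quick numerical check of these formulas, the following sketch (Python with NumPy/SciPy; an illustrative tool choice, not part of the original notes) evaluates Φ(t) = e^{At} with scipy.linalg.expm for the RLC matrix used later in Example 5 (R = 3, L = 1, C = 1/2) and compares it with the closed-form entries obtained there by inverse Laplace transform.

import numpy as np
from scipy.linalg import expm

# RLC example with R = 3, L = 1, C = 1/2 (the values used in Example 5)
A = np.array([[0.0, -2.0],
              [1.0, -3.0]])

def phi_numeric(t):
    """State transition matrix via the matrix exponential."""
    return expm(A * t)

def phi_closed_form(t):
    """Closed-form Phi(t) from the inverse Laplace transform of (sI - A)^-1."""
    return np.array([[2*np.exp(-t) - np.exp(-2*t), -2*np.exp(-t) + 2*np.exp(-2*t)],
                     [np.exp(-t) - np.exp(-2*t),   -np.exp(-t) + 2*np.exp(-2*t)]])

t = 0.7
print(np.allclose(phi_numeric(t), phi_closed_form(t)))   # True

# Zero-input response x(t) = Phi(t) x(0) with x(0) = [1, 1]^T
x0 = np.array([1.0, 1.0])
print(phi_numeric(t) @ x0)    # equals [e^{-2t}, e^{-2t}], as found in Example 5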


3. Transfer Functions from State Equations

We can obtain the transfer function for a single-input-single-output (SISO) system from its state space equation. Taking the Laplace transform of the state space equation

\dot{x} = A x + B u   (19)
y = C x   (20)

we have (assuming zero initial conditions)

s X(s) = A X(s) + B U(s)   (21)
Y(s) = C X(s)   (22)

From the first equation, we have

(sI - A) X(s) = B U(s)   (23)


and

X(s) = (sI - A)^{-1} B U(s)   (24)

Hence we have

Y(s) = C (sI - A)^{-1} B U(s)   (25)

Therefore the transfer function G(s) = Y(s)/U(s) is given by

G(s) = C (sI - A)^{-1} B   (26)

Note that we can write G(s) = C Φ(s) B, as Φ(s) = (sI - A)^{-1}.

Example 4. Determine the transfer function of the RLC system in Example 3.


Answer:

G(s) = \frac{R/(LC)}{s^2 + \frac{R}{L} s + \frac{1}{LC}}   (27)

Φ(s) := (sI - A)^{-1} is a matrix whose elements are Laplace transformed functions. Therefore we can use the inverse Laplace transform to obtain Φ(t), which can be used to evaluate the time response.
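As an illustration of formula (26), the sketch below (Python with SciPy; an assumed tool choice rather than part of the notes) evaluates C(sI − A)^{-1}B for the RLC matrices with R = 3, L = 1, C = 1/2 and confirms it matches the answer above, G(s) = 6/(s^2 + 3s + 2).

import numpy as np
from scipy.signal import ss2tf

# RLC state space matrices for R = 3, L = 1, C = 1/2 (Example 3)
A = np.array([[0.0, -2.0],
              [1.0, -3.0]])
B = np.array([[2.0],
              [0.0]])
C = np.array([[0.0, 3.0]])
D = np.array([[0.0]])

# Transfer function G(s) = C (sI - A)^-1 B + D
num, den = ss2tf(A, B, C, D)
print(num)   # ~ [0, 0, 6]  -> numerator 6
print(den)   # ~ [1, 3, 2]  -> s^2 + 3s + 2, so G(s) = 6 / (s^2 + 3s + 2)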

Example 5. Obtain the time response of the RLC system shown in Example 2, assuming R = 3, L = 1, C = 1/2, u(t) = 0 and x(0) = [1 \; 1]^T.

Solution: When u(t) = 0, we have

x(t) = Φ(t)x(0) (28)


In this case, we have

A = \begin{bmatrix} 0 & -\frac{1}{C} \\ \frac{1}{L} & -\frac{R}{L} \end{bmatrix} = \begin{bmatrix} 0 & -2 \\ 1 & -3 \end{bmatrix}   (29)

and

Φ(s) = \frac{1}{s^2 + 3s + 2} \begin{bmatrix} s+3 & -2 \\ 1 & s \end{bmatrix}   (30)

From the inverse Laplace transform, we obtain

Φ(t) = \begin{bmatrix} 2e^{-t} - e^{-2t} & -2e^{-t} + 2e^{-2t} \\ e^{-t} - e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix}   (31)

and subsequently

x(t) = Φ(t) \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} e^{-2t} \\ e^{-2t} \end{bmatrix}   (32)


Remark: Φ(t) can also be obtained by directly solving the differential equations. Indeed, let z = T x with

T = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}.

It can be obtained that \dot{z} = T A T^{-1} z := \bar{A} z, with

\bar{A} = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}.

With

e^{\bar{A}t} = \begin{bmatrix} e^{-t} & 0 \\ 0 & e^{-2t} \end{bmatrix},

we have Φ(t) = T^{-1} e^{\bar{A}t} T.

Stability of State Variable Systems

When we convert the state space model to the transfer function, we observe that


the denominator of the transfer function is the determinant of (sI − A), ie,

d(s) = |sI - A|   (33)

Therefore the stability of the state space model depends on the characteristic polynomial of the state matrix A. The system is stable if all the poles are in the left half plane, ie, the solutions of d(s) = 0 all have negative real parts (equivalently, the eigenvalues of A all have negative real parts).


4. State Space Realization and Canonical Forms

Given a transfer function, how can we write its state space equations? For example, if the transfer function is given by

G(s) = \frac{6}{s^2 + 3s + 2}   (34)

we know that its state space realization is, based on the previous RLC example,

\dot{x} = \begin{bmatrix} 0 & -2 \\ 1 & -3 \end{bmatrix} x + \begin{bmatrix} 2 \\ 0 \end{bmatrix} u(t)   (35)
y = [0 \; 3] x   (36)

Consider a general second order transfer function

G(s) = \frac{b_1 s + b_2}{s^2 + a_1 s + a_2}   (37)


One realization is given by

\dot{x} = \begin{bmatrix} 0 & 1 \\ -a_2 & -a_1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (38)
y = [b_2 \; b_1] x   (39)

The state space realization is not unique. For the same transfer function, we can have another realization as

\dot{x} = \begin{bmatrix} -a_1 & 1 \\ -a_2 & 0 \end{bmatrix} x + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} u(t)   (40)
y = [1 \; 0] x   (41)

Note that the state space variables in the two realizations are different. The first realization is referred to as the controller canonical form, and the second as the observer canonical form, as one is convenient for controller design and the other is convenient for observer design.

Controller Canonical Form

For a general transfer function given by (assuming the numerator and the denominator are coprime)

G(s) = \frac{b_1 s^{n-1} + b_2 s^{n-2} + \ldots + b_n}{s^n + a_1 s^{n-1} + \ldots + a_n}   (42)

the controller canonical form is given by

\dot{x} = \begin{bmatrix} 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \\ -a_n & -a_{n-1} & \ldots & -a_1 \end{bmatrix} x + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u(t)   (43)
y = [b_n \; b_{n-1} \; \ldots \; b_1] x   (44)


Observer Canonical Form

For the above transfer function, the observer canonical form is given by

\dot{x} = \begin{bmatrix} -a_1 & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n-1} & 0 & \ldots & 1 \\ -a_n & 0 & \ldots & 0 \end{bmatrix} x + \begin{bmatrix} b_1 \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix} u(t)   (45)
y = [1 \; 0 \; \ldots \; 0] x   (46)

Example 5. Put the state space equation shown for the RLC system into the controller and observer canonical forms.


5. Controllability and Observability

Controllability: A system is completely controllable if there exists a control input u(t) that can transfer any initial state x(0) to any other desired location x in a finite time.

We can check the controllability of the system

\dot{x} = A x + B u   (47)
y = C x   (48)

by examining the algebraic condition

rank[B \; AB \; \ldots \; A^{n-1}B] = n   (49)

For a SISO system, we have B as a vector, and therefore if we define

P_c = [B \; AB \; \ldots \; A^{n-1}B]   (50)


then P_c is an n × n matrix. In this case, checking the controllability is equivalent to checking whether the determinant of P_c is nonzero.

Example 6. Check the controllability of the system

\dot{x} = \begin{bmatrix} -2 & 0 \\ d & -3 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t)   (51)
y = [0 \; 1] x   (52)

where d is a constant.

Observability: A system is completely observable if and only if there exists a finite time T such that the initial state x(0) can be determined from the observation history y(t), given the control u(t).

For a SISO system

\dot{x} = A x + B u   (53)
y = C x   (54)

the system is completely observable if the determinant of the observability matrix

P_o = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}   (55)

is nonzero.

Example 7. Check the observability of the system

\dot{x} = \begin{bmatrix} 2 & 0 \\ -1 & 1 \end{bmatrix} x + \begin{bmatrix} 1 \\ -1 \end{bmatrix} u(t)   (56)
y = [1 \; 1] x   (57)
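To make the rank tests concrete, here is a minimal sketch (Python/NumPy, assumed purely for illustration; the helper names ctrb and obsv are local functions, not part of the notes) that builds P_c and P_o and evaluates their determinants for Examples 6 and 7.

import numpy as np

def ctrb(A, B):
    """Controllability matrix Pc = [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix Po = [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Example 6 with a particular value of d (d is left symbolic in the notes)
d = 1.0
A6 = np.array([[-2.0, 0.0], [d, -3.0]])
B6 = np.array([[1.0], [0.0]])
print(np.linalg.det(ctrb(A6, B6)))    # equals d, so controllable iff d != 0

# Example 7
A7 = np.array([[2.0, 0.0], [-1.0, 1.0]])
C7 = np.array([[1.0, 1.0]])
print(np.linalg.det(obsv(A7, C7)))    # = 0, so the system is not observable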


Tutorial 1

Question 1. An inverted pendulum can be described by the following set of differential equations

M \ddot{y} + m l \ddot{θ} - u(t) = 0   (58)
m l \ddot{y} + m l^2 \ddot{θ} - m l g θ = 0   (59)

where M is the mass of the cart, m is the mass of the ball on top of the pendulum with m << M, y is the horizontal position of the cart, l is the length of the pendulum, θ is the angle of the pendulum, and u is the control input. Write the state space equation of this system.

Question 2. A system is described by the following differential equation

\dot{x} = \begin{bmatrix} -1 & 0 \\ 2 & -3 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (60)


Determine Φ(s) and Φ(t) of the system.

Question 3. Obtain the transfer function of the following state space system

\dot{x} = \begin{bmatrix} -1 & 0 \\ 2 & -3 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (62)
y = [1 \; 0] x   (63)

Question 4. Write the state space equations in the controller and observer canonical forms for the following systems described by the transfer functions

G_1(s) = \frac{s+1}{s^2 + 5s + 5}, \quad G_2(s) = \frac{s+1}{4s^2 + 4s + 1}   (64)

Question 5. Determine the controllability and observability of the state space system described in Question 3.


Question 6. Determine the controllability and observability of the state space system

\dot{x} = \begin{bmatrix} -3 & 1 \\ -2 & 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u(t)   (65)
y = [1 \; 0] x   (66)

If the system is not controllable or observable, can you explain why?


6. Full State Feedback Design

In the first step in the state variable design, we assume all the state variables are available for feedback control. In this case, we can design the control input as

u = −Kx (67)

where K is the gain matrix. The full state feedback design is to decide a suitable feedback gain matrix K.

For the closed-loop control system, we have

\dot{x} = A x + B(-Kx) = (A - BK) x   (68)

The stability of the closed-loop system depends on the characteristic polynomial of (A − BK).

Pole Assignment. The aim is to assign the closed-loop poles at given locations via full state feedback.


Pole assignment can be achieved if the system is completely controllable.

Example 8. Design a full state feedback control of the system described by

\frac{d^3 y}{dt^3} + 5 \frac{d^2 y}{dt^2} + 3 \frac{dy}{dt} + 2y = u   (69)

such that the closed-loop poles are at {−1, −2, −3} respectively.

Solution: The transfer function of the system is

G(s) = \frac{1}{s^3 + 5s^2 + 3s + 2}   (70)

If we realize the system in the controller canonical form, we have

\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -3 & -5 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t)   (71)
y = [1 \; 0 \; 0] x   (72)

If the control input is designed as

u = -[k_1 \; k_2 \; k_3] x   (73)

the closed-loop system is given by

\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -3 & -5 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} (-[k_1 \; k_2 \; k_3] x)   (74)
     = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -(k_1+2) & -(k_2+3) & -(k_3+5) \end{bmatrix} x := (A - BK) x   (75)

The closed-loop characteristic polynomial is

|sI - (A - BK)| = s^3 + (k_3 + 5) s^2 + (k_2 + 3) s + (k_1 + 2)   (76)


Comparing it with the desired characteristic polynomial

(s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6   (77)

we have k_3 = 1, k_2 = 8, k_1 = 4, ie,

K = [4 \; 8 \; 1]   (78)

Therefore the control input is designed as

u = -[4 \; 8 \; 1] x = -4 x_1 - 8 x_2 - x_3   (79)

Note that for this realization we have x_1 = y, x_2 = \frac{dy}{dt} and x_3 = \frac{d^2 y}{dt^2}. The control input is then expressed as

u = -4y - 8 \frac{dy}{dt} - \frac{d^2 y}{dt^2}   (80)


Question: Design a state feedback controller for the RLC circuit described by

\dot{x} = \begin{bmatrix} 0 & -2 \\ 1 & -3 \end{bmatrix} x + \begin{bmatrix} 2 \\ 0 \end{bmatrix} u(t)   (81)

to place the closed-loop poles at {−2, −2}.

Answer: u = -[\frac{1}{2} \; -\frac{1}{2}] x = -(x_1 - x_2)/2.

Question: How to implement the feedback control if only the output y is available?

From Example 8 we have seen the convenience of using the controller canonical form for the full state feedback design. In fact, for a given state space system, we can transform the system to the controller canonical form if the system is controllable. To avoid the state transformation, we have Ackermann's formula for the full state feedback control design.

Ackermann's Formula. For a system {A, B}, the state feedback gain K for the closed-loop control u = -Kx to achieve the desired closed-loop characteristic polynomial

d(s) = s^n + α_1 s^{n-1} + \ldots + α_n   (82)

is given by

K = [0 \; 0 \; \ldots \; 1] P_c^{-1} d(A)   (83)

where

d(A) = A^n + α_1 A^{n-1} + \ldots + α_n I   (84)

and P_c is the controllability matrix.

Question: Apply Ackermann's formula to the state feedback control design for the RLC circuit shown in the previous question.

Hint: d(s) = s^2 + 4s + 4, and d(A) = A^2 + 4A + 4I = \begin{bmatrix} 2 & -2 \\ 1 & -1 \end{bmatrix}.


7. Observer Design

Full state feedback control needs the values of all the state variables. In industrial systems, it is common that not all the state variables are available, and in this case an observer can be designed to provide an estimate of the unknown state variables.

Luenberger Observer. For a dynamic system

\dot{x} = A x + B u   (85)
y = C x   (86)

a full-state observer is designed as

\dot{\hat{x}} = A \hat{x} + B u + L(y - C\hat{x})   (87)

where \hat{x} denotes the estimate of the state variable, and L is the observer gain.


Define the observer error as

e = x - \hat{x}   (88)

we have the error dynamics

\dot{e} = (Ax + Bu) - (A\hat{x} + Bu + L(y - C\hat{x}))
       = (A - LC) e   (89)

To ensure the error asymptotically converges to zero as t → ∞, we need the characteristic equation |sI − (A − LC)| = 0 to have all its roots in the left half of the complex plane.

Example 9. Design a full state observer for the dynamic system

\dot{x} = \begin{bmatrix} -5 & 1 & 0 \\ -3 & 0 & 1 \\ -2 & 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t)   (90)
y = [1 \; 0 \; 0] x   (91)

Solution: Note the system is in the observer canonical form. For L = [l_1 \; l_2 \; l_3]^T, we have

A - LC = \begin{bmatrix} -5 & 1 & 0 \\ -3 & 0 & 1 \\ -2 & 0 & 0 \end{bmatrix} - \begin{bmatrix} l_1 \\ l_2 \\ l_3 \end{bmatrix} [1 \; 0 \; 0] = \begin{bmatrix} -(l_1+5) & 1 & 0 \\ -(l_2+3) & 0 & 1 \\ -(l_3+2) & 0 & 0 \end{bmatrix}   (92)

The characteristic polynomial is

|sI - (A - LC)| = s^3 + (l_1 + 5) s^2 + (l_2 + 3) s + (l_3 + 2)   (93)

If we place the poles of the observer at {−2, −2, −2}, the desired characteristic polynomial is

(s+2)^3 = s^3 + 6s^2 + 12s + 8   (94)


By comparing the polynomials we have l_1 = 1, l_2 = 9, l_3 = 6, ie,

L = [1 \; 9 \; 6]^T   (95)

The full state observer is given by

\dot{\hat{x}} = \begin{bmatrix} -5 & 1 & 0 \\ -3 & 0 & 1 \\ -2 & 0 & 0 \end{bmatrix} \hat{x} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t) + \begin{bmatrix} 1 \\ 9 \\ 6 \end{bmatrix} (y - [1 \; 0 \; 0]\hat{x})   (96)

The observer canonical form makes the design of the observer gain easier. For a system that is not in the observer canonical form, we can still evaluate the characteristic polynomial and then obtain the observer gain matrix by comparing the coefficients with the desired ones. For designing observer gains, we also have Ackermann's formula.

Ackermann's Formula. For a system {A, B, C}, the observer gain L to achieve the desired closed-loop characteristic polynomial

d(s) = s^n + α_1 s^{n-1} + \ldots + α_n   (97)

is given by

L = d(A) P_o^{-1} [0 \; 0 \; \ldots \; 1]^T   (98)

where

d(A) = A^n + α_1 A^{n-1} + \ldots + α_n I   (99)

and P_o is the observability matrix.
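A minimal sketch of formula (98) applied to Example 9 (Python/NumPy, assumed here for illustration; obsv_acker is a local helper, not a library function):

import numpy as np

def obsv_acker(A, C, alpha):
    """Observer-gain Ackermann formula: L = d(A) Po^{-1} [0 ... 1]^T."""
    n = A.shape[0]
    Po = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    dA = np.linalg.matrix_power(A, n)
    for i, a in enumerate(alpha):
        dA = dA + a * np.linalg.matrix_power(A, n - 1 - i)
    e = np.zeros((n, 1)); e[-1, 0] = 1.0
    return dA @ np.linalg.inv(Po) @ e

# Example 9: observer poles at {-2, -2, -2}, ie d(s) = s^3 + 6s^2 + 12s + 8
A = np.array([[-5.0, 1.0, 0.0],
              [-3.0, 0.0, 1.0],
              [-2.0, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0]])
L = obsv_acker(A, C, [6.0, 12.0, 8.0])
print(L.ravel())                       # [1. 9. 6.], as found by hand above
print(np.linalg.eigvals(A - L @ C))    # all three eigenvalues at -2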


8. Compensator Design

We aim at designing a dynamic feedback control from the system output. There are three steps in the compensator design.

• Full state feedback design

• Full state observer design

• Compensator design using the state estimate to replace the state variable in the full state control design

The final control design is given by

u = -K\hat{x}(t)   (100)

Question: How to ensure the stability of the closed-loop system?


The separation principle plays an important part.

Consider the system

\dot{x} = A x + B u   (101)
y = C x   (102)

with the observer

\dot{\hat{x}} = A \hat{x} + B u + L(y - C\hat{x})   (103)

and the control law

u = -K\hat{x}(t)   (104)

The closed-loop system with the observer can be written as

\dot{x} = A x - BK\hat{x}
\dot{\hat{x}} = A\hat{x} - BK\hat{x} + LC(x - \hat{x})

In terms of the state variable x and the observer error e = x - \hat{x}, we have

\dot{x} = (A - BK) x + BK e
\dot{e} = (A - LC) e

Treating [x^T, e^T]^T as the augmented state variable, we have

\frac{d}{dt} \begin{bmatrix} x \\ e \end{bmatrix} = \begin{bmatrix} A - BK & BK \\ 0 & A - LC \end{bmatrix} \begin{bmatrix} x \\ e \end{bmatrix}   (105)

The characteristic polynomial of the augmented system is given by

\left| sI - \begin{bmatrix} A - BK & BK \\ 0 & A - LC \end{bmatrix} \right| = |sI - (A - BK)| \, |sI - (A - LC)|   (106)


Therefore if |sI − (A − BK)| = 0 and |sI − (A − LC)| = 0 have all their roots in the left half of the complex plane, the augmented system is stable. This is the separation principle.
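The separation principle can also be checked numerically. The sketch below (Python/NumPy, an illustrative assumption rather than part of the notes) uses the plant of the compensator example that follows, with one admissible choice of gains K and L obtained by matching characteristic polynomials as in Examples 8 and 9, and confirms that the augmented matrix of (105) has exactly the controller poles together with the observer poles.

import numpy as np

# Plant of the compensator example below: xdot = A x + B u, y = C x
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# One admissible design: K places the controller poles at {-1, -1},
# L places the observer poles at {-2, -2} (obtained by coefficient matching).
K = np.array([[2.0, 3.0]])
L = np.array([[5.0],
              [10.0]])

# Augmented closed-loop matrix of equation (105)
top = np.hstack([A - B @ K, B @ K])
bottom = np.hstack([np.zeros((2, 2)), A - L @ C])
Acl = np.vstack([top, bottom])

# Its eigenvalues are the union of the controller and observer poles
print(np.sort(np.linalg.eigvals(Acl)))   # [-2, -2, -1, -1]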

Example 9. Design a dynamic output feedback control (compensator) for the system

\dot{x} = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (107)
y = [1 \; 0] x   (108)

The poles for the closed-loop controller are at {−1, −1} and the poles for the observer error dynamics are at {−2, −2}.


Tutorial 2

Question 7. Design a full state feedback control u = -Kx for the dynamic system described by

\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 2 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t)   (109)

such that the closed-loop system has the poles at {−2,−1± 2j}.

Question 8. The dynamics of a rocket is described by

\dot{x} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t)   (110)
y = [0 \; 1] x   (111)

The control input is designed as u = -2x_1 - x_2. Determine the roots of the characteristic equation.


Question 9. For the dynamic system described in Question 8, how should the control input be changed such that the roots of the closed-loop system are at {−2 ± j}?

Question 10. Consider a second order system

\dot{x} = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t)   (112)
y = [0 \; 1] x   (113)

Design an observer such that the observer poles are at {−1± j}.

Question 11. Design a full state observer for the dynamic system described by

\dot{x} = \begin{bmatrix} -2 & 1 & 0 \\ -2 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t)   (114)
y = [1 \; 0 \; 0] x   (115)


such that the observer has the poles at {−2,−1± 2j}.

Question 12. Consider a state space compensator for the system

\dot{x} = \begin{bmatrix} 1 & 0 \\ 3 & 2 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t)   (116)
y = [0 \; 1] x   (117)

with the closed loop poles at {−1± j}, and the observer poles at {−2± j}.


9. Tracking and Internal Model Design

Consider

\dot{x} = A x + B u   (118)
y = C x   (119)

We need to design a control input such that the output asymptotically tracks a given reference r, which can be written as the output of the reference state space model

\dot{x}_r = A_r x_r   (120)
r = d_r x_r   (121)

Considering the case of tracking a constant reference, we have the reference model

\dot{x}_r = 0   (122)
r = x_r   (123)

Define the tracking error e = y − r. Taking the derivative of the tracking error gives

\dot{e} = \dot{y} = C\dot{x}   (124)

Let us use the notation z = \dot{x} and v = \dot{u}. Take e and z as the state variables of an augmented state space system

\frac{d}{dt} \begin{bmatrix} e \\ z \end{bmatrix} = \begin{bmatrix} 0 & C \\ 0 & A \end{bmatrix} \begin{bmatrix} e \\ z \end{bmatrix} + \begin{bmatrix} 0 \\ B \end{bmatrix} v   (125)

If the augmented system is controllable, we can design a state feedback controller in the form

v = -k_1 e - k_2 z   (126)


where k_2 is a matrix (row vector) in general. The state feedback ensures the stability of the augmented system, which implies asymptotic tracking with e converging to zero. The control input is given by integrating v as

u(t) = -k_1 \int_0^t e(\tau) \, d\tau - k_2 x(t)   (127)

The integration in the above equation reflects the dynamics of the tracking signal. It is clear to see in a block diagram that the controller acts as an internal model.
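As a sketch of this augmented design (Python/SciPy; the use of scipy.signal.place_poles is an assumption for illustration, not part of the notes), the code below builds the augmented matrices of (125) for the plant of Example 10 below, checks controllability, and places the augmented poles at {−1 ± j, −10} as that example requests.

import numpy as np
from scipy.signal import place_poles

# Plant of Example 10
A = np.array([[0.0, 1.0],
              [-2.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Augmented system (125): state [e, z] with z = xdot, input v = udot
A_aug = np.block([[np.zeros((1, 1)), C],
                  [np.zeros((2, 1)), A]])
B_aug = np.vstack([np.zeros((1, 1)), B])

# Controllability of the augmented pair
Pc = np.hstack([np.linalg.matrix_power(A_aug, k) @ B_aug for k in range(3)])
print(np.linalg.matrix_rank(Pc))    # 3, so the augmented system is controllable

# Place the augmented poles at {-1 +/- j, -10}
res = place_poles(A_aug, B_aug, [-1 + 1j, -1 - 1j, -10])
K_aug = res.gain_matrix             # [k1, k2], with v = -k1 e - k2 z
print(K_aug)
print(np.linalg.eigvals(A_aug - B_aug @ K_aug))   # the requested poles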

Example 10. Internal model design for a unit step input for the system

\dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -2 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (128)
y = [1 \; 0] x   (129)

by placing the poles of the augmented system at {−1 ± j, −10}.


The same control design can be extended to the case of tracking a polynomial of time.

Question: How to design an internal model based controller to track a sinusoidal signal?


10. Optimal Control

The performance of a control system can be represented by a performance index such as

J = \int_0^{t_f} g(x, u, t) \, dt   (130)

Optimal control is concerned with the control design to minimize a performance index. Consider a particular performance index

J = \int_0^{t_f} x^T Q x \, dt   (131)

where Q is an n × n positive definite matrix. To simplify the problem, we let t_f tend to infinity, that is, the index is given by

J = \int_0^{\infty} x^T Q x \, dt   (132)


How to design a full state feedback law to minimise the index?

Consider the control design u = -Kx for \dot{x} = Ax + Bu. The closed-loop system is

\dot{x} = (A - BK) x := H x   (133)

If we have a positive definite matrix P such that

\frac{d}{dt}(x^T P x) = -x^T Q x   (134)

then substituting the above into the performance index, we have

J = -\int_0^{\infty} \frac{d}{dt}(x^T P x) \, dt = -x^T P x \Big|_0^{\infty} = x^T(0) P x(0)   (135)

where we assume that the closed-loop system is stable (x(∞) = 0).


A direct evaluation gives

\frac{d}{dt}(x^T P x) = \dot{x}^T P x + x^T P \dot{x}
                      = x^T H^T P x + x^T P H x
                      = x^T (H^T P + P H) x   (136)

From the differential equation \frac{d}{dt}(x^T P x) = -x^T Q x we have

H^T P + P H = -Q   (137)

Therefore the optimal control design with the performance index J = \int_0^{\infty} x^T Q x \, dt can be carried out in two steps:

• Solve the matrix equation

H^T P + P H = -Q   (138)

to obtain the matrix P, which depends on the control gain K.

• Minimize the index

J = x^T(0) P x(0)   (139)

to determine the control gain or other parameters in the system.

Example 11. Design the state feedback control K = [k_1 \; k_2], by restricting k_1 = 1, with the optimal control index of Q = I and x(0) = [1 \; 1]^T for the system

\dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (140)

Example 12. Continue from Example 11 by restricting the control gain as K = [k \; k] (ie, k_1 = k_2) and x(0) = [1 \; 0]^T.


Solution: Solving the matrix equation H^T P + P H = -I gives

p_{11} = \frac{1 + 2k}{2k}   (142)

It is easy to see the control index is given by

J = x^T(0) P x(0) = p_{11} = 1 + \frac{1}{2k}   (143)

The minimum value of J is obtained when k tends to infinity.
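A quick numerical check of this solution (Python/SciPy assumed for illustration; scipy.linalg.solve_continuous_lyapunov solves A X + X Aᵀ = Q): for a chosen k it solves Hᵀ P + P H = −I and compares p_{11} with 1 + 1/(2k).

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

k = 2.0
K = np.array([[k, k]])            # K = [k k] as in Example 12
H = A - B @ K                     # closed-loop matrix

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
# so passing a = H^T gives H^T P + P H = -I.
P = solve_continuous_lyapunov(H.T, -np.eye(2))

print(P[0, 0])                    # 1.25
print(1 + 1/(2*k))                # 1.25, matching equation (143)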


11. Linear Quadratic Regulator

Consider the solution of Example 12. The bigger the controller gain, the smaller the performance index; there is no optimal solution for the control gain. This is due to the fact that we did not consider the control effort in the performance index.

In engineering systems, bigger control input efforts often mean bigger energy consumption. To consider the input in the performance index, we often define

J = \int_0^{\infty} [x^T Q x + u^T R u] \, dt   (144)

where R is a positive definite matrix. For the SISO case, R is a constant scalar.

Considering the full state feedback control u = -Kx, the performance index can be written as

J = \int_0^{\infty} [x^T Q x + x^T K^T R K x] \, dt := \int_0^{\infty} x^T S x \, dt   (145)

where

S = Q + K^T R K   (146)

Similar to the case with no control in the performance index, we need to solve a matrix equation

H^T P + P H = -S   (147)

and the performance index is then given by

J = \int_0^{\infty} [x^T Q x + x^T K^T R K x] \, dt := \int_0^{\infty} x^T S x \, dt = x^T(0) P x(0)   (148)


Example 13. Repeat Example 12 with the new performance index

J = \int_0^{\infty} [x^T x + r u^2] \, dt   (149)

General Solution for the Linear Quadratic Regulator: Consider a dynamic system described by

\dot{x} = A x + B u   (150)

The optimal control input for the performance index

J = \int_0^{\infty} [x^T Q x + u^T R u] \, dt   (151)

is given by

u = -R^{-1} B^T P x   (152)


where P satisfies

A^T P + P A - P B R^{-1} B^T P + Q = 0   (153)

This equation is often referred to as the matrix algebraic Riccati equation, and it can be easily solved using Matlab.
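For readers without Matlab, the same computation can be sketched in Python (an assumed alternative, using scipy.linalg.solve_continuous_are for the Riccati equation (153)); here it is applied to the double integrator of Examples 11-13 with Q = I and R = 1.

import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator from Examples 11-13
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve A^T P + P A - P B R^-1 B^T P + Q = 0, then K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print(K)                              # optimal state feedback gain
print(np.linalg.eigvals(A - B @ K))   # stable closed-loop poles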


Tutorial 3

Question 13. Consider a dynamic system

\dot{x} = \begin{bmatrix} 0 & 1 \\ 3 & 2 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (154)
y = [1 \; 0] x   (155)

Design a state feedback controller such that the system output tracks a constant r.

Question 14. Consider a first order dynamic system

\dot{x} = x + u   (156)

The control input is designed as

u = -kx   (157)

such that the system is stable. Evaluate the performance index

J = \int_0^{\infty} x^2 \, dt   (158)

with x(0) = 2, and hence obtain an optimal value of k such that J is minimum.

Question 15. Repeat the optimal control design in Question 14, with the control performance index

J = \int_0^{\infty} [x^2 + r u^2] \, dt   (159)

where r is a constant.

Question 16. Consider a dynamic system described by

\dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (160)


The initial value is given as x(0) = [1, 1]^T. With the feedback control in the form of

u = -k x_1 - k x_2   (161)

obtain the relation between the performance index given by

J = \int_0^{\infty} x^T x \, dt   (162)

and the controller gain k.

Question 17. Re-design the optimal control gain k in Question 16 using the optimal control index

J = \int_0^{\infty} (x^T x + u^2) \, dt   (163)


Question 18. Determine the roots of the closed-loop control systems obtained in Questions 15, 16 and 17.


12. Digital Control and Sampled Data System

Question: Why digital control?

Due to the application of digital computers in industrial control systems. [There are also some inherently discrete-time systems.]

Sampled Data. Sampled data (or a discrete signal) are data obtained for system variables only at discrete time intervals.

We assume sampling at a fixed period T, which is called the sampling period. The sampled data for a continuous time variable x(t) are denoted by x(kT), with k taking integer values.

How to decide the sampling period?

It depends on the dynamics of the system, the required accuracy and hardware constraints.


Sampler. An ideal sampler is a switch that closes every T seconds for an instant. For a continuous time signal r(t), sampled at kT, the output from the sampler r*(t) is an impulse signal,

r*(t) = r(kT) δ(t - kT)

where δ is the impulse function.

Zero-order-hold. After the sampling, it is assumed that the value is kept the same until the next sampling, ie, the value of x(t) is assumed to be the constant x(kT) for kT ≤ t < (k+1)T.

The impulse response of the zero-order-hold is given by

g_0(t) = \begin{cases} 1, & 0 ≤ t < T \\ 0, & \text{otherwise} \end{cases}   (164)

and therefore its transfer function is given by

G_0(s) = \frac{1}{s} - \frac{1}{s} e^{-Ts} = \frac{1 - e^{-Ts}}{s}   (165)

Quantization error. An error due to a computer’s finite word size.


13. The z-Transform

The output from an ideal sampler is a sequence of impulses with values r(kT), and we write

r*(t) = \sum_{k=0}^{\infty} r(kT) δ(t - kT)   (166)

Taking the Laplace transform of the above equation, we have

L{r*(t)} = \sum_{k=0}^{\infty} r(kT) e^{-kTs} = \sum_{k=0}^{\infty} r(kT) (e^{Ts})^{-k}   (167)

We define the z-transform as

R(z) = Z{r*(t)} = \sum_{k=0}^{\infty} r(kT) z^{-k}   (168)

with z = e^{Ts}. Similar to the notation for Laplace transformed functions, a capital letter R denotes the z-transform of r(t).

Example 14. Determine the z-transform for the unit step function u(t).

U(z) = \sum_{k=0}^{\infty} u(kT) z^{-k} = \sum_{k=0}^{\infty} z^{-k} = \frac{1}{1 - z^{-1}} = \frac{z}{z-1}   (169)

Example 15. Determine the z-transform of e^{-at}.

F(z) = \sum_{k=0}^{\infty} e^{-akT} z^{-k} = \sum_{k=0}^{\infty} (e^{aT} z)^{-k}
     = \frac{1}{1 - (e^{aT} z)^{-1}} = \frac{z}{z - e^{-aT}}


The z-transform table

x(t)               X(s)                    X(z)
δ(t)               1                       1
δ(t - kT)          e^{-kTs}                z^{-k}
u(t), unit step    1/s                     z/(z-1)
t                  1/s^2                   Tz/(z-1)^2
e^{-at}            1/(s+a)                 z/(z - e^{-aT})
sin ωt             ω/(s^2+ω^2)             z sin ωT / (z^2 - 2z cos ωT + 1)
cos ωt             s/(s^2+ω^2)             z(z - cos ωT) / (z^2 - 2z cos ωT + 1)
e^{-at} sin ωt     ω/((s+a)^2+ω^2)         z e^{-aT} sin ωT / (z^2 - 2z e^{-aT} cos ωT + e^{-2aT})
e^{-at} cos ωt     (s+a)/((s+a)^2+ω^2)     (z^2 - z e^{-aT} cos ωT) / (z^2 - 2z e^{-aT} cos ωT + e^{-2aT})


Inverse z transform

• Partial fraction method (similar to inverse Laplace transform)

• Long division

Transfer function of an open loop system

We need to multiply the system transfer function G_p(s) by the transfer function of the zero-order-hold.

G(z) = Z{G_0(s) G_p(s)}   (170)

Example 16. Determine the z-transfer function of G_p(s) = \frac{1}{s(s+1)}.


G(s) = \frac{1 - e^{-Ts}}{s} \cdot \frac{1}{s(s+1)} = (1 - e^{-Ts}) \left( \frac{1}{s^2} - \frac{1}{s} + \frac{1}{s+1} \right)   (171)

It follows that

G(z) = (1 - z^{-1}) Z\left( \frac{1}{s^2} - \frac{1}{s} + \frac{1}{s+1} \right)
     = (1 - z^{-1}) \left( \frac{Tz}{(z-1)^2} - \frac{z}{z-1} + \frac{z}{z - e^{-T}} \right)
     = \frac{(e^{-T} - 1 + T)z + (1 - e^{-T} - Te^{-T})}{(z-1)(z - e^{-T})}
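A numerical cross-check of this result (Python/SciPy assumed for illustration): scipy.signal.cont2discrete with the 'zoh' method performs exactly the zero-order-hold discretisation above; for T = 1 it reproduces, up to rounding, the coefficients used in Example 17.

from scipy.signal import cont2discrete

# Gp(s) = 1 / (s(s+1)), discretised with a zero-order hold and T = 1
num, den = [1.0], [1.0, 1.0, 0.0]
numz, denz, dt = cont2discrete((num, den), dt=1.0, method='zoh')

print(numz)   # ~ [[0, 0.3679, 0.2642]] -> 0.3679 z + 0.2642
print(denz)   # ~ [1, -1.3679, 0.3679]  -> z^2 - 1.3679 z + 0.3679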


14. Discrete-time Systems

Difference equation

Consider

y(k+n) + a_1 y(k+n-1) + \ldots + a_n y(k) = b_1 u(k+n-1) + \ldots + b_n u(k)   (172)

The solution of this difference equation can be obtained by iteration using

y(k+n) = -a_1 y(k+n-1) - \ldots - a_n y(k) + b_1 u(k+n-1) + \ldots + b_n u(k)   (173)

if the necessary initial values are known.

Transfer function

Note that z^{-1} and z can be used as shift operators. It can be shown that

Z{y(k-1)} = z^{-1} Z{y(k)} = z^{-1} Y(z)
Z{y(k+1)} = z Z{y(k)} = z Y(z)   (174)

In fact, from the definition we have z^{-1} = e^{-Ts}, a delay operator in the Laplace domain.

Taking z-transform of the difference equation

z^n Y(z) + a_1 z^{n-1} Y(z) + \ldots + a_n Y(z) = b_1 z^{n-1} U(z) + \ldots + b_n U(z)   (175)

and the transfer function is given by

G(z) = \frac{Y(z)}{U(z)} = \frac{b_1 z^{n-1} + \ldots + b_n}{z^n + a_1 z^{n-1} + \ldots + a_n}   (176)

and finally we have

G(z) = \frac{b_1 z^{-1} + \ldots + b_n z^{-n}}{1 + a_1 z^{-1} + \ldots + a_n z^{-n}}   (177)

System response using difference equation

For a given transfer function, we can obtain the difference equation and then obtain the system response using the difference equation.

Example 17. For the system considered in Example 16 with T = 1, determine the first four terms of the output subject to the impulse input, assuming y(−2) = y(−1) = 0.

For T = 1, we have the transfer function

G(z) = \frac{Y(z)}{U(z)} = \frac{0.3678 z + 0.2644}{z^2 - 1.3678 z + 0.3678}   (178)


The difference equation is obtained as

y(k+2) - 1.3678 y(k+1) + 0.3678 y(k) = 0.3678 u(k+1) + 0.2644 u(k)   (179)

or

y(k+2) = 1.3678 y(k+1) - 0.3678 y(k) + 0.3678 u(k+1) + 0.2644 u(k)   (180)

Note that {u(k)} = {1, 0, \ldots, 0, \ldots}. We have

y(0) = 1.3678 y(-1) - 0.3678 y(-2) + 0.3678 u(-1) + 0.2644 u(-2) = 0   (181)
y(1) = 1.3678 y(0) - 0.3678 y(-1) + 0.3678 u(0) + 0.2644 u(-1) = 0.3678   (182)

and y(2) = 0.7675 and y(3) = 0.9145.
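The iteration is easy to script; a minimal sketch (Python, assumed for illustration) that reproduces the four values above:

# Impulse response of
#   y(k) = 1.3678 y(k-1) - 0.3678 y(k-2) + 0.3678 u(k-1) + 0.2644 u(k-2)
def u(k):
    return 1.0 if k == 0 else 0.0    # unit impulse input

y = {-2: 0.0, -1: 0.0}               # initial values y(-2) = y(-1) = 0
for k in range(4):
    y[k] = (1.3678 * y[k-1] - 0.3678 * y[k-2]
            + 0.3678 * u(k-1) + 0.2644 * u(k-2))

print([round(y[k], 4) for k in range(4)])   # [0.0, 0.3678, 0.7675, 0.9145]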


The method shown in the above example can also be used for evaluation of the inverse z-transform.

Indeed, from partial fraction expansion we have, taking U(z) = 1,

Y(z) = G(z) U(z) = \frac{0.3678 z + 0.2644}{z^2 - 1.3678 z + 0.3678} = \frac{1}{z-1} - 0.6322 \frac{1}{z - 0.3678}

and

y(k) = Z^{-1}\left\{ z^{-1} \left( \frac{z}{z-1} - 0.6322 \frac{z}{z - 0.3678} \right) \right\} = (1)^{k-1} - 0.6322 (0.3678)^{k-1}


15. Closed-loop Transfer Functions and Stability

Closed-loop transfer function

Consider a closed-loop system

Y(z) = G(z) U(z)   (183)
U(z) = D(z) E(z)   (184)
E(z) = R(z) - Y(z)   (185)

where D(z) denotes the transfer function of a digital controller, and E(z) and R(z) denote the feedback error and the reference signal.

The closed-loop transfer function is obtained as

\frac{Y(z)}{R(z)} = \frac{G(z) D(z)}{1 + G(z) D(z)}   (186)


The above result is the same as the closed-loop transfer function of the continuous-time system. In general, the block diagram manipulation follows in the same way as for continuous-time systems.

Stability analysis

Consider the mapping between the s-plane and the z-plane. Let s = σ + jω; we have

z = e^{Ts} = e^{T(σ + jω)} = e^{Tσ} e^{jTω}   (187)

It can be seen that for σ < 0 we have |z| < 1, that is, the left half of the s-plane corresponds to the area within the unit circle in the z-plane. Therefore we have the following statement.

A sampled (discrete-time) system is stable if all the poles of the closed-loop transfer function lie within the unit circle of the z-plane.

Example 18. The open-loop transfer function considered in Example 17 is under a closed-loop control with the controller D(z) = K, a constant. Evaluate the stability of the closed-loop system when K = 1 and K = 10.

The closed-loop transfer function is given by

\frac{K G(z)}{1 + K G(z)} = \frac{K \frac{0.3678 z + 0.2644}{z^2 - 1.3678 z + 0.3678}}{1 + K \frac{0.3678 z + 0.2644}{z^2 - 1.3678 z + 0.3678}}   (188)

and therefore the characteristic equation is

d(z) = z^2 + (0.3678K - 1.3678) z + (0.2644K + 0.3678) = 0   (189)

For K = 1, we have

d(z) = z^2 - z + 0.6322 = 0   (190)

and the poles, z_{1,2} = 0.50 ± j0.6182, are within the unit circle, and the system is stable. For K = 10, we have

d(z) = z^2 + 2.310 z + 3.012 = 0   (191)

and the poles, z_{1,2} = -1.155 ± j1.295, are outside the unit circle, and the system is unstable.

Example 19. Range of T for stability

Consider a unity feedback sampled-data system with the plant transfer function G_p(s) = \frac{10}{s+1}. Determine the range of the sampling interval to ensure closed-loop stability.

Let

G(s) = \frac{1 - e^{-Ts}}{s} \cdot \frac{10}{s+1} = (1 - e^{-Ts}) \, 10 \left( \frac{1}{s} - \frac{1}{s+1} \right)


Taking z-transform, we have

G(z) = (1 - z^{-1}) \, 10 \left( \frac{z}{z-1} - \frac{z}{z - e^{-T}} \right) = \frac{10(1 - e^{-T})}{z - e^{-T}}

The closed-loop transfer function is obtained as

G_c(z) = \frac{G(z)}{1 + G(z)} = \frac{\frac{10(1-e^{-T})}{z - e^{-T}}}{1 + \frac{10(1-e^{-T})}{z - e^{-T}}} = \frac{10(1 - e^{-T})}{z - e^{-T} + 10(1 - e^{-T})}

The characteristic equation is given by z - e^{-T} + 10(1 - e^{-T}) = 0, or z = 11e^{-T} - 10. For stability, we need |z| < 1, ie,

-1 < 11e^{-T} - 10 < 1

which gives 0 < T < \ln(11/9), or 0 < T < 0.2007.


Tutorial 4

Question 19. Determine the z-transfer functions of the following plants (Hint: a ZOH should be added in each case):

1) G_1(s) = \frac{1}{s(s+5)}
2) G_2(s) = \frac{s+2}{(s+1)(s+3)}
3) G_3(s) = \frac{10 e^{-2Ts}}{s+5}

Question 20. A plant is described by the transfer function

\frac{Y(s)}{U(s)} = \frac{5}{s(s+5)}   (192)

and the system's input and output are sampled with a sampling interval of T = 0.1 second.

1) Obtain the z transfer function between the input and the output.

2) Obtain the difference equation relating y(k) and u(k).


3) Determine the system output (first five steps) under a unit step input using the difference equation obtained in 2).

Question 21. A first order system \frac{Y(s)}{U(s)} = \frac{10}{s+5} is sampled every T seconds. The control law for the system is designed as u(kT) = -K y(kT), where K is the controller gain.

1) Determine the range of the sampling interval T such that the closed-loop system is stable with K = 10.

2) Determine the range of the controller gain K such that the closed-loop system is stable with T = 0.1 second.

Question 22. Consider a discrete-time system

\frac{Y(z)}{U(z)} = \frac{0.4 z + 0.2}{z^2 - 1.4 z + 0.4}   (193)

with feedback control u(k) = −Ky(k).


1) Obtain the closed-loop transfer function and the characteristic equation of the closed-loop system, and then determine the stability of the system with K = 1 and K = 10 respectively.

2) Suggest a method to determine the range of the controller gain K such that the closed-loop system is stable.


16. Discrete-time State Space Systems

State Difference Equation

Similar to the state differential equations, we have the state difference equations

x(k+1) = A x(k) + B u(k)   (194)
y(k) = C x(k) + D u(k)   (195)

where x(k) is the state vector at the step k.

System Response

The response can be evaluated repeatedly with the given initial values:

x(1) = A x(0) + B u(0)
x(2) = A x(1) + B u(1) = A^2 x(0) + A B u(0) + B u(1)
  \vdots
x(k) = A^k x(0) + A^{k-1} B u(0) + \ldots + B u(k-1)

Controllability and Observability

The controllability and observability can be checked in the same way as for the continuous-time systems. The system is controllable if the controllability matrix

P_c = [B \; AB \; \ldots \; A^{n-1}B]   (196)

has full rank, and the system is observable if the observability matrix

P_o = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}   (197)

has full rank.

Transfer functions from state difference equations

Taking the z-transform of x(k+1) = A x(k) + B u(k), we have

(zI - A) X(z) = B U(z)
X(z) = (zI - A)^{-1} B U(z)   (198)

and

Y(z) = C X(z) = C (zI - A)^{-1} B U(z)   (199)

Therefore the transfer function is given by

G(z) = \frac{Y(z)}{U(z)} = C (zI - A)^{-1} B   (200)


Note that the transfer function is in the same form as for the continuous-time case, with the only difference that s is replaced by z. We would expect the same kind of state space realizations for discrete-time transfer functions, and this indeed is the case.

State space realization

The canonical forms corresponding to the transfer functions in the s domain work in the same way for the z domain.

Example 20. Obtain the state difference equation for the transfer function

G(z) = \frac{0.3678 z + 0.2644}{z^2 - 1.3678 z + 0.3678}   (201)

The realization in the controller canonical form is given by

x(k+1) = \begin{bmatrix} 0 & 1 \\ -0.3678 & 1.3678 \end{bmatrix} x(k) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(k)   (202)
y = [0.2644 \; 0.3678] x   (203)

Stability

The characteristic equation for the transfer function in z domain is given by

d(z) = |zI - A| = 0   (204)

The system is stable if all the roots of the characteristic equation are within the unit circle.

Full State Feedback Control Design

Design the full state feedback control as

u(k) = −Kx(k) (205)


which gives the closed-loop system

x(k + 1) = (A−BK)x(k) (206)

The full-state feedback control can be used to place the poles of the closed-loop system at the desired positions, if the system is controllable.

Example 21. Design the full state feedback control law for the system in Example 20 to place the poles at {−0.5, −0.5}.


17. Implementation of Digital Controllers

There are two methods for designing a digital controller.

• Emulation method, ie, design the controller in continuous-time and then convert it to discrete-time.

• Direct digital design, including the discrete-time pole placement via full state feedback shown in the previous section.

Digital implementation of PID controllers

In the s domain, we have the transfer function for a PID controller as

\frac{U(s)}{X(s)} = G_c(s) = k_1 + \frac{k_2}{s} + k_3 s   (207)

We need approximations for differentiation and integration in discrete-time.


Backward difference rule for differentiation

u(kT) = \frac{dx}{dt}\Big|_{t=kT} ≈ \frac{1}{T}\left( x(kT) - x((k-1)T) \right)   (208)

The z-transform for this equation is given by

U(z) = \frac{1 - z^{-1}}{T} X(z) = \frac{z-1}{Tz} X(z)   (209)

Forward-rectangular integration

u(kT) = u((k-1)T) + T x(kT)   (210)

and the z-transform gives

\frac{U(z)}{X(z)} = \frac{Tz}{z-1}   (211)


The z domain transfer function for the PID controller is given by

The z domain transfer function for the PID controller is given by

\frac{U(z)}{X(z)} = k_1 + \frac{k_2 T z}{z-1} + k_3 \frac{z-1}{Tz}   (212)

The difference equation for this transfer function is

u(kT) = u((k-1)T) + \left( k_1 + k_2 T + \frac{k_3}{T} \right) x(kT) - \left( k_1 + \frac{2k_3}{T} \right) x((k-1)T) + \frac{k_3}{T} x((k-2)T)   (213)

This is called the velocity form of the PID implementation, as the control input at the current step is calculated from the control input at the previous step, and the measurement at the current step only contributes to the change of the control input.
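A minimal implementation sketch of the velocity form (213) (Python, an assumed illustration; the gains and sampling period below are placeholders, not values from the notes):

def make_velocity_pid(k1, k2, k3, T):
    """Velocity-form digital PID, equation (213):
    u(k) = u(k-1) + (k1 + k2*T + k3/T) x(k)
                  - (k1 + 2*k3/T) x(k-1) + (k3/T) x(k-2)."""
    state = {"u": 0.0, "x1": 0.0, "x2": 0.0}   # u(k-1), x(k-1), x(k-2)

    def step(x):
        u = (state["u"]
             + (k1 + k2*T + k3/T) * x
             - (k1 + 2*k3/T) * state["x1"]
             + (k3/T) * state["x2"])
        # shift the stored samples: x(k-2) <- x(k-1), x(k-1) <- x(k), u(k-1) <- u(k)
        state["u"], state["x2"], state["x1"] = u, state["x1"], x
        return u

    return step

# Example use with placeholder gains and T = 0.1 s; x is the controller input
# (e.g. the feedback error) sampled at each step.
pid = make_velocity_pid(k1=1.0, k2=0.5, k3=0.1, T=0.1)
for x in [1.0, 0.8, 0.5, 0.2]:
    print(pid(x))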


Question: A PI controller is designed as \frac{U(s)}{X(s)} = 10\left(1 + \frac{1}{s}\right). Find the difference equation for the controller implementation.

Hint: \frac{U(z)}{X(z)} = \frac{10 - 9z^{-1}}{1 - z^{-1}}.
