Transcript
Source: ee.bradley.edu/projects/proj2019/2dofheli/poster.pdf

Smart Control of 2 Degree of Freedom Helicopters

Glenn Janiak, Ken Vonckx, Advisor: Dr. Suruz Miah

Department of Electrical and Computer Engineering, Bradley University, Peoria, IL

Objective and Contribution

Objective
• Develop a platform allowing mobile devices to control the motion of a group of helicopters

Contribution
• Determine trade-offs between traditional control techniques and machine learning
• Multi-helicopter application

Applications
• Teleoperation approach to search and rescue
• Aerial turbulence resistance

Problem Setup

[Figure 1 shows N helicopters and N mobile devices connected through a wireless network to the proposed smart control algorithm of the helicopters.]
Figure 1: High-level architecture of the proposed system.

Figure 2: 2-DOF helicopter (Quanser Aero).

• State-space representation of the 2-DOF helicopter, with state $x = [\theta,\ \psi,\ \dot\theta,\ \dot\psi]^T$ (pitch angle, yaw angle, and their rates) and input $u = [V_p,\ V_y]^T$ (pitch- and yaw-motor voltages); a numerical sketch follows the equation:

$$
\frac{d}{dt}\begin{bmatrix}\theta\\ \psi\\ \dot\theta\\ \dot\psi\end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
-K_{sp}/J_p & 0 & -D_p/J_p & 0\\
0 & 0 & 0 & -D_y/J_y
\end{bmatrix}
\begin{bmatrix}\theta\\ \psi\\ \dot\theta\\ \dot\psi\end{bmatrix}
+
\begin{bmatrix}
0 & 0\\
0 & 0\\
K_{pp}/J_p & K_{py}/J_p\\
K_{yp}/J_y & K_{yy}/J_y
\end{bmatrix}
\begin{bmatrix}V_p\\ V_y\end{bmatrix}
$$
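A minimal sketch of these matrices in Python/NumPy is shown below; the parameter values are placeholders for illustration, not the identified Quanser Aero parameters used on the poster.

```python
import numpy as np

# Placeholder physical parameters (illustrative only, not the identified values).
K_sp, D_p, D_y = 0.037, 0.0071, 0.022   # pitch stiffness, pitch/yaw damping
J_p, J_y = 0.0216, 0.022                # pitch and yaw moments of inertia
K_pp, K_py = 0.0011, 0.0021             # voltage-to-torque gains (pitch axis)
K_yp, K_yy = -0.0027, 0.0022            # voltage-to-torque gains (yaw axis)

# State x = [theta, psi, theta_dot, psi_dot]; input u = [V_p, V_y]
A = np.array([[0.0,          0.0, 1.0,         0.0],
              [0.0,          0.0, 0.0,         1.0],
              [-K_sp / J_p,  0.0, -D_p / J_p,  0.0],
              [0.0,          0.0, 0.0,         -D_y / J_y]])
B = np.array([[0.0,         0.0],
              [0.0,         0.0],
              [K_pp / J_p,  K_py / J_p],
              [K_yp / J_y,  K_yy / J_y]])
```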

Motion (Trajectory) Control Algorithm

[Figure 3 shows the closed loop: a mobile device supplies the desired orientation (θd, ψd), the error signal drives the motion controller, the controller outputs voltages Vp and Vy to the 2-DOF motors, and the encoder feeds the actual configuration back to the user.]
Figure 3: A desired orientation is given by a user. The difference between this input and the actual position is calculated. The controller then calculates the proper amount of voltage to apply to the DC motors.

1 Employ the state-space representation of the 2-DOF helicopter: $\dot{x} = Ax + Bu$
2 Use the state-feedback law $u = -Kx$ to minimize the quadratic cost function
  $J(u) = \int_0^\infty \left( x^T Q x + u^T R u + 2 x^T N u \right) dt$
3 Find the solution $S$ to the Riccati equation
  $A^T S + S A - (S B + N) R^{-1} (B^T S + N^T) + Q = 0$
4 Calculate the gain $K = R^{-1} (B^T S + N^T)$ (see the sketch below)
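A minimal sketch of steps 3 and 4, assuming Python with NumPy/SciPy (not necessarily the implementation used in this project):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R, N=None):
    """Solve A^T S + S A - (S B + N) R^{-1} (B^T S + N^T) + Q = 0
    and return the state-feedback gain K = R^{-1} (B^T S + N^T)."""
    if N is None:
        N = np.zeros((A.shape[0], B.shape[1]))
    S = solve_continuous_are(A, B, Q, R, s=N)   # step 3: Riccati solution
    K = np.linalg.solve(R, B.T @ S + N.T)       # step 4: feedback gain
    return K, S

# Illustrative weights (the Q, R used on the poster are not given here):
# Q = np.diag([200.0, 150.0, 1.0, 1.0]); R = 0.1 * np.eye(2)
# K, S = lqr_gain(A, B, Q, R)
```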

Optimal Noise Resistant Control Algorithm

• Utilizes the gain calculated in LQR
• Added a Kalman filter to reduce external disturbances to the system (a design sketch follows Figure 4)

Figure 4: Noise-resistant 2-DOF helicopter model.
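One way to realize the added Kalman filter is a steady-state observer gain computed from the dual Riccati equation; this is a sketch, and the noise covariances shown are illustrative assumptions rather than values from the poster.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def kalman_gain(A, C, W, V):
    """Steady-state Kalman gain L for x_dot = A x + B u + w, y = C x + v,
    with process-noise covariance W and measurement-noise covariance V,
    computed via the duality with LQR."""
    P = solve_continuous_are(A.T, C.T, W, V)
    return P @ C.T @ np.linalg.inv(V)

# Encoders measure the pitch and yaw angles:
# C = np.array([[1.0, 0.0, 0.0, 0.0],
#               [0.0, 1.0, 0.0, 0.0]])
# L = kalman_gain(A, C, W=1e-3 * np.eye(4), V=1e-2 * np.eye(2))
# The LQG controller then drives the plant with u = -K x_hat, where
#   x_hat_dot = A x_hat + B u + L (y - C x_hat)
```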

Reinforcement Learning Algorithm

• Uses a neural network based on the difference between the desired and actual orientation to determine the optimal gain (a sketch of the network's basis follows Figure 5)

[Figure 5 shows the ADP neural network: an input layer of tracking errors e1 = θd − θ, e2 = ψd − ψ, e3 = θ̇d − θ̇, e4 = ψ̇d − ψ̇; a hidden layer of quadratic terms e1², e1e2, e2², e1e3, e1e4, e2e3, e2e4, e3e4, e3², e4² connected through weights w1–w14; and a single output V.]
Figure 5: ADP Neural Network
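The network in Figure 5 amounts to a quadratic basis over the four tracking errors; the sketch below computes that basis and the critic output (the weight update shown in the comment is a generic gradient step, not necessarily the exact ADP update used in this project).

```python
import numpy as np

def quadratic_basis(e):
    """All unique products e_i * e_j of the tracking-error vector
    e = [theta_d - theta, psi_d - psi, theta_dot_d - theta_dot, psi_dot_d - psi_dot],
    giving the 10 hidden-layer terms for 4 inputs."""
    e = np.asarray(e, dtype=float)
    rows, cols = np.triu_indices(e.size)
    return e[rows] * e[cols]

def critic_output(w, e):
    """Scalar critic value V = w^T phi(e), with w the network weights."""
    return float(w @ quadratic_basis(e))

# Generic gradient step on a temporal-difference error `delta` (illustrative):
# w -= learning_rate * delta * quadratic_basis(e)
```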

Simulation Results

[Figure 6 contains four panels plotted against Time [s] over 0–10 s: (a) Pitch [deg], (b) Yaw [deg], and the motor voltages [V] in (c) and (d).]
Figure 6: A comparison between LQG and LQR control for a step input is shown for (a) the main rotor and (b) the tail rotor, with the corresponding voltages in (c) and (d).
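Step responses like those in Figure 6 can be regenerated from the model and gain above; the sketch below assumes the reference is handled as u = −K(x − x_ref), which may differ from the setup used to produce the poster's plots.

```python
import numpy as np
from scipy.integrate import solve_ivp

def step_response(A, B, K, x_ref, t_final=10.0):
    """Simulate x_dot = A x + B u with u = -K (x - x_ref), starting from rest."""
    def dynamics(t, x):
        u = -K @ (x - x_ref)
        return A @ x + B @ u
    sol = solve_ivp(dynamics, (0.0, t_final), np.zeros(A.shape[0]), max_step=0.01)
    return sol.t, sol.y   # time vector and state trajectories

# t, x = step_response(A, B, K, x_ref=np.radians([20.0, 45.0, 0.0, 0.0]))
# np.degrees(x[0]) and np.degrees(x[1]) give the pitch and yaw traces to plot.
```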

Experimental Results

[Figure 7 shows the experimental setup: Helicopter #1 and Helicopter #2 (Quanser Aero #1 and #2), each driven by a single-board computer (Raspberry Pi 3 Model B #1 and #2), and a mobile device (Samsung Galaxy S9+).]
Figure 7: Experimental Setup
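In this setup the mobile device has to send desired orientations to each single-board computer over the wireless network. The sketch below is purely hypothetical: the transport (UDP), message format, address, and port are assumptions, since the actual protocol is not given on the poster.

```python
import json
import socket

# Hypothetical address of one helicopter's Raspberry Pi (illustration only).
HELI_1_ADDR = ("192.168.1.50", 5005)

def send_setpoint(pitch_deg, yaw_deg, addr=HELI_1_ADDR):
    """Send a desired (pitch, yaw) orientation to one helicopter's
    single-board computer as a UDP datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    msg = json.dumps({"pitch_d": pitch_deg, "yaw_d": yaw_deg}).encode("utf-8")
    sock.sendto(msg, addr)
    sock.close()

# send_setpoint(10.0, 45.0)   # e.g. command a 10 deg pitch, 45 deg yaw step
```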

[Figure 8 contains two panels plotted against time (s) over 0–30 s: (a) Pitch (deg) and (b) Yaw (deg).]
Figure 8: ADP experimental results for (a) the main rotor and (b) the tail rotor given a step input.

[Figure 9 contains two panels plotted against time (s) over 0–30 s: (a) Pitch (deg) and (b) Yaw (deg).]
Figure 9: Comparison between P and PI control for a step input is shown for (a) the main rotor and (b) the tail rotor.

Figure 10: (a) Time = 0 and (b) Time = 10

Conclusion and Future Work

• Model-based reinforcement learning technique (ADP) is useful when the system model is unknown
• Implement a PI controller for the ADP algorithm
• Use a digital compass to increase the accuracy of orientation and help identify the initial position
