
Vision Path Following with a Stabilized Quadrotor

Miguel José Jorge Rabaça

Dissertação para obtenção do Grau de Mestre em

Engenharia Mecânica

Júri

Presidente: Prof. João Rogério Caldas Pinto

Orientador: Prof. José Raul Carreira Azinheira

Vogal: Prof. Alexandra Bento Moutinho

Novembro - 2011


A man doesn’t know what he knows until he knows what he doesn’t know.

Laurence Peter


Este trabalho reflecte as ideias dos seus autores que, eventualmente, poderão não coincidir com as do Instituto Superior Técnico.


Abstract

The goals of the present work were: to propose a control strategy for a quadrotor, with the purpose of following a ground track using airborne vision feedback; to prepare the chosen approach, develop the necessary tools and test them with a wheeled mobile robot; and to evaluate the approach for the quadrotor in simulation before applying it on the experimental platform.

Three methods to estimate the tracking errors were developed and evaluated: a method based on the geometry of the problem, a method based on the camera matrix, and a black box model.

An image treatment was developed to clean the noise from the images captured by the camera. The estimation and image treatment were tested on the real track of the laboratory using the wheeled mobile robot.

A virtual reality environment was created to test the control strategies of the quadrotor and to predict its behavior. The virtual reality simulation results were compared with the experimental results obtained with the wheeled mobile robot. The trajectories proved similar: the virtual reality environment is able to predict the experimental wheeled mobile robot behavior.

An estimation of the velocity from the image with Optical Flow was tested with the wheeled mobile robot, and it was concluded that, with the present setup, it cannot be used for control purposes.

We may conclude that the estimation methods, the image treatment and the control strategies perform the desired task within the system limitations and may allow the experiment to proceed with the quadrotor.

Keywords: Quadrotor, Path following, Real time image processing, Visual Servoing


Resumo

Os objectivos deste trabalho foram: propor uma estratégia de controlo de um quadrirotor, com o propósito de seguir uma pista no chão, usando as imagens fornecidas pela câmara a bordo; preparar a abordagem escolhida, desenvolver as ferramentas e testá-las com um robô móvel; e avaliar a abordagem feita para o quadrirotor em simulação, antes de esta ser aplicada na plataforma experimental.

Desenvolveram-se e avaliaram-se três métodos de estimação dos erros de seguimento: um método trigonométrico considerando a geometria do problema, um método baseado na matriz da câmara e um método com redes neuronais.

O ruído das imagens obtidas pela câmara foi limpo com recurso a tratamento de imagem. Os métodos de estimação e tratamento de imagens foram testados na pista do laboratório com o robô móvel.

Foi criado um ambiente de realidade virtual para testar as estratégias de controlo para o quadrirotor e prever o seu comportamento. Os resultados das simulações da realidade virtual foram comparados com os resultados experimentais obtidos com o robô móvel com rodas. Verificou-se uma boa aproximação entre simulação e experiência, constatando-se que a realidade virtual previu o comportamento do robô real.

A estimação da velocidade através da imagem com Optical Flow foi testada com o robô móvel e concluiu-se que, com o setup actual, não é possível obter valores para fins de controlo.

Concluiu-se que os métodos de estimação, o tratamento de imagem e as estratégias de controlo atingem o objectivo desejado dentro das limitações do sistema e podem permitir continuar a experiência com o quadrirotor.

Palavras-chave: Quadrirotor, Seguimento de caminho, Processamento de imagem em tempo real, Visual Servoing


Acknowledgment

I would like to thank Prof. Jose Azinheira for the guidance, availability and patience bestowed on me during the completion of this work.

I would also like to thank my family and girlfriend for their support, especially when I needed it the most.

Finally, I am grateful to all my friends and colleagues, and especially Bernardo, Filipe, Hugo and Tiago, for their continuous help and encouragement.


Contents

Abstract v

Resumo vii

Acknowledgment ix

Contents xi

List of Figures xv

List of Tables xix

Notation xxi

1 Introduction 1

1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 History of the Quadrotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.3 State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.4 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.5 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

1.6 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Problem Statement 9

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.2 Dynamic models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.2.1 Wheeled mobile robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.2.2 Quadrotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.3 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

3 Image processing and tracking errors estimation 17

3.1 Image enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

3.1.1 Pre-enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

3.1.2 Post-enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3.1.3 Final treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


3.1.4 Evaluation of the image treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

3.2 Mass centers estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.3 Tracking error visual estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.3.1 Trigonometric Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.3.2 Camera matrix method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.3.3 Black Box model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3.4 Evaluation of the estimation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.4.1 Evaluation conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.4.2 Tracking estimation with position and attitude known . . . . . . . . . . . . . . . . . 35

3.4.3 Tracking estimation with position and attitude unknown . . . . . . . . . . . . . . . . 36

3.4.4 Estimation discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4 Wheeled mobile robot control 39

4.1 Rasteirinho model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

4.2 Control with ideal position feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

4.3 Path following in simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

4.3.1 Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

4.3.2 Results from simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4.4 Path following with experimental platform . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.5 Comparison between results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

5 Longitudinal velocity estimation with vision 51

5.1 Conditions of Optical Flow applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

5.2 Speed estimation with nominal track . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

5.3 Speed estimation with added texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

5.4 Results discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

6 Quadrotor control 57

6.1 Non-holonomic control strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

6.1.1 Non-holonomic model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

6.1.2 Control with ideal position feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 58

6.2 Holonomic control strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

6.2.1 Holonomic model 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

6.2.2 Holonomic model 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

6.2.3 Control with ideal position feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 62

6.2.4 Control in VR with Holonomic model 1 . . . . . . . . . . . . . . . . . . . . . . . . . 66

6.2.5 Control in VR with Holonomic model 2 . . . . . . . . . . . . . . . . . . . . . . . . . 70

6.3 Comparison between results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73


7 Conclusions and Future Work 75

7.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

7.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

Bibliography 77

Appendix 81

A Results from rasteirinho simulations and experiments 83

B Results of control with attitude feedback 87

C Camera matrices for the real and virtual cameras 91


List of Figures

1.1 Gyroplane No.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Bell Boeing Quad TiltRotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.3 Small quadrotors with diverse applications . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.4 Robot arm and Quadrotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.5 Segmentation using Mahalanobis distance . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.6 Organs viewed through endoscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.1 Track of the laboratory and three frames, F0, Fr and Fc . . . . . . . . . . . . . . . . . . . 10

2.2 Tracking errors and tracking error frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.3 Image partition and top and bottom centers . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.4 Block control diagram with ideal sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.5 Block control diagram with vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.6 Geometry of mobile ground robot and variables, adapted from [8] . . . . . . . . . . . . . . 13

2.7 Quadrotor with reference frames, body frame from [14] . . . . . . . . . . . . . . . . . . 14

3.1 Median filter effect, mostly seen in the top part of the images . . . . . . . . . . . . . . . . 18

3.2 Contrast adjustment effect, visible in the difference in brightness of image (b) . . . . . 19

3.3 Histogram equalization effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.4 Images (a) and (b) before and (d) and (e) after post-enhancement. Image (c) shows the effect of a traditional histogram equalization. . . . . . . . . . . . . . . . . . . . . . . 21

3.5 Possible artifacts obtained using the developed treatment . . . . . . . . . . . . . . . . . . 22

3.6 Images with the various steps of the image enhancement applied to the original image obtained through the image acquisition device; the parameters used for the post-enhancement were Cu = 0.6, Cl = 0.17 and r = 0.1 . . . . . . . . . . . . . . . . . . . . 22

3.7 Comparison of images obtained during track following with the wheeled mobile robot in location 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

3.8 Comparison of images obtained during track following with the wheeled mobile robot in location 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.9 Histograms and normalized CDF of location 1 . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.10 Histograms and normalized CDF of location 2 . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.11 Camera frame with partition coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27


3.12 Camera frame and mobile ground frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.13 Camera horizontal and vertical total view angles, θx and θy . . . . . . . . . . . . . . . . . 28

3.14 Camera frame and points of interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.15 Camera image distortion; in the π frame, a projection of the ground element over the camera frame can be viewed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.16 Neural network inputs and outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3.17 Camera view of the two simulation types . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.1 Control block diagram of the rasteirinho with ideal position feedback . . . . . . . . . . . . 40

4.2 The reference of the position yref (in blue) and the robot position y (in red) . . . . . . . . 41

4.3 Angle position ψ in response to the reference following . . . . . . . . . . . . . . . . . . . . 41

4.4 Control action r obtained for the reference following . . . . . . . . . . . . . . . . . . . . . 41

4.5 Control block diagram of the rasteirinho with VR simulation . . . . . . . . . . . . . . . . . 42

4.6 Images of the virtual and real laboratory track . . . . . . . . . . . . . . . . . . . . . . . . . 43

4.7 Images of the virtual track with and without extra elements . . . . . . . . . . . . . . . . . . 43

4.8 Camera view near the parking zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

4.9 Cross tracking error ye (in blue) and heading error ψe (in red) during the simulations . . . 44

4.10 Control action r during the simulation of the virtual rasteirinho . . . . . . . . . . . . . . . . 44

4.11 Trajectory performed during simulation, with initial error located near (0, 0) . . . . . . . . . 45

4.12 Control block diagram of the experimental rasteirinho . . . . . . . . . . . . . . . . . . . . . 46

4.13 Cross tracking error ye (in blue) and heading error ψe (in red) during the experiment . . . . 47

4.14 Control action r during the experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

4.15 Momentary image distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

4.16 Reflection over the line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

5.1 Track with and without added texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

5.2 The real velocities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

5.3 The estimated velocities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

5.4 The real velocities with references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

5.5 The estimated velocities with references . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

6.1 Control block diagram of the quadrotor with ideal position feedback . . . . . . . . . . . . . 59

6.2 Values obtained with control in ideal situation . . . . . . . . . . . . . . . . . . . . . . . . . 60

6.3 Values obtained with control in ideal situation . . . . . . . . . . . . . . . . . . . . . . . . . 63

6.4 Values obtained with control in ideal situation . . . . . . . . . . . . . . . . . . . . . . . . . 65

6.5 Control block diagram of the quadrotor with vision . . . . . . . . . . . . . . . . . . . . . . 66

6.6 Values obtained with control in simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

6.7 Values obtained with control in simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

A.1 Tracking errors (ye and ψe) during the simulation of the real and virtual rasteirinho for 0.3 m/s velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83


A.2 Tracking errors (ye and ψe) during the simulation of the real and virtual rasteirinho for 0.5 m/s velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

A.3 Tracking errors (ye and ψe) during the simulation of the real and virtual rasteirinho for 0.6 m/s velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

A.4 Control action r during the simulation of the real and virtual rasteirinho for 0.3 m/s velocity 85

A.5 Control action r during the simulation of the real and virtual rasteirinho for 0.5 m/s velocity 85

A.6 Control action r during the simulation of the real and virtual rasteirinho for 0.6 m/s velocity 86

B.1 Values obtained with control in simulation with feedback for Holonomic model 1 . . . . . . 88

B.2 Values obtained with control in simulation with feedback for Holonomic model 2 . . . . . . 89


List of Tables

3.1 Statistics of the errors of the methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.2 Statistics of the errors of the methods with roll angle variations of the quadrotor . . . . 35

3.3 Statistics of the errors of the methods with height variation, ∆h . . . . . . . . . . . . . . . 36

3.4 Statistics of the errors of the methods with tilt angle variations, ∆θ . . . . . . . . . . . . . 36

3.5 Statistics of the errors of the methods with roll angle variation, ∆φ . . . . . . . . . . . . . 37

6.1 Statistics obtained during control in ideal situation . . . . . . . . . . . . . . . . . . . . . . 63

6.2 Statistics obtained during control in ideal situation . . . . . . . . . . . . . . . . . . . . . . 65

6.3 Statistics obtained during control without feedback . . . . . . . . . . . . . . . . . . . . . . 68

6.4 Statistics obtained during control with feedback . . . . . . . . . . . . . . . . . . . . . . . . 69

6.5 Statistics obtained during control without feedback . . . . . . . . . . . . . . . . . . . . . . 72

6.6 Statistics obtained during control with feedback . . . . . . . . . . . . . . . . . . . . . . . . 73


Notation

Acronyms

CDF Cumulative Distribution Function

DOF Degrees of Freedom

DLQR Discrete Linear-Quadratic Regulator

EKF Extended Kalman Filter

IBVS Image Based Visual Servoing

NN Neural Network

PBVS Position Based Visual Servoing

PDF Probability Density Function

RMSE Root-Mean-Square Error

UAV Unmanned Aerial Vehicle

VR Virtual Reality

VTOL Vertical TakeOff and Landing

List of variables

The variables are listed in order of appearance.

Symbol Designation

(x, y, z) Variables of the position of the robot

(x0, y0, z0) Axes of the global frame

F0 Global fixed frame

Fr Robot frame

Fc Camera frame


Fe Tracking error frame

Fmg Mobile ground frame

[φ, θ, ψ] Attitude angles of the robot

ye Cross tracking error

ψe Heading error

[xct, yct] Centers of the top part of an image

[xcb, ycb] Centers of the bottom part of an image

[φr, θr, ψr, Zr] attitude and height references for the quadrotor inner control

V0 Constant longitudinal velocity

[Ωl,Ωr] Left and right wheels angular velocity

Yr reference in the y axis

s Distance in the x axis between the robot's mass center and the motor axis

C Center of the motor axis

b Distance from the motor to center of the motor axis

r Wheel radius; Adjusting index of the weighted threshold PDF; Control variable yaw angle rate

vm Mean linear longitudinal velocity of the robot

θm Angular velocity of the robot

vl Linear velocity of the left wheel

vr Linear velocity of the right wheel

[ux, uy, uz] Axes of the quadrotor frame

h Height of the camera; height of the robot

Vx Linear longitudinal velocity of the robot

X State vector of the system

U Control vector of the system

Ad, Bd, Cd, Dd Matrices of the discretized system

J Cost function

Qd, Rd Weighting matrices in discrete domain

K Control gain matrix

S Time independent solution of Riccati equation

Q,R Weighting matrices in continuous domain

I Image

nk Number of pixels per k level

N Total of pixels in an image

k Image gray level

K Maximum pixel gray level

p Probability density function

c Cumulative distribution function


Hk Intensity of the new gray levels after histogram equalization

Pwt Weighted threshold PDF

Pu Upper limit of the PDF

Pl Lower limit of the PDF

Cpe CDF of the post-enhancement

Cu Upper threshold of the CDF

Cl Lower threshold of the CDF

n,m Number of lines and columns of the image

b Number of lines included on the top part of an image

Ifinal Final image

Ipre−enhanced Pre-enhanced image

Ipost−enhanced Post-enhanced image

i, j Line and column indexes

[xmg, ymg, zmg] Axes of the mobile ground frame

[xc, yc, zc] Axes of the camera frame

[θx, θy] Axes of the angle frame

θX , θY Angles of the points in the image

θcam Tilt angle of the camera

[px, py] Lateral and longitudinal coordinates of a point in the moving ground frame

[px1, py1] Point values for the bottom blob

[px2, py2] Point values for the top blob

p0, p1 and p2 Origin and two points of the mobile ground frame

fc Focal length

cc Image referential coordinates

αc Skew coefficient

kc distortion coefficients

p Homogeneous coordinate point in the image plane

K Camera matrix

X Point in the camera frame

π A camera view plane

R Rotation matrix from ground frame to camera frame

q Point in the ground frame

t Camera position in the ground frame

qx Coordinate x of a point in the ground frame

qy Coordinate y of a point in the ground frame

preal Point in the image frame of the real camera

pvirtual Point in the image frame of the virtual camera


kreal Camera matrix of the real camera

kvirtual Camera matrix of the virtual camera

Z0 Nominal position of the camera in the global frame

Xr Camera coordinate in x axis on the robot frame

Zr Camera coordinate in z axis on the robot frame

∆h Height variation

∆θ Pitch angle variation

∆φ Roll angle variation

Vy Lateral velocity relative to the line

Vy Lateral velocity of the robot

ẍ Acceleration of the quadrotor in the x axis

ÿ Acceleration of the quadrotor in the y axis

g Acceleration of gravity

Ref References for position and angle

yref Reference in the y axis

ψref Reference for the yaw angle

kce Distortion error
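The camera symbols listed above (fc, cc, K, R, t, q, p) describe a standard pinhole projection, p ~ K(Rq + t). The following is a minimal sketch of that relation only, using illustrative intrinsic values rather than the calibrated ones of this work, and assuming zero skew (αc = 0) and no distortion (kc = 0):

```python
import numpy as np

# Illustrative intrinsic parameters (NOT the calibrated values of this work):
# fc focal length, cc image referential coordinates (principal point).
fc = (800.0, 800.0)
cc = (320.0, 240.0)
K = np.array([[fc[0], 0.0,   cc[0]],
              [0.0,   fc[1], cc[1]],
              [0.0,   0.0,   1.0]])

def project(q, R, t):
    """Pinhole projection of a ground-frame point q: p ~ K (R q + t)."""
    X = R @ q + t        # point expressed in the camera frame
    p = K @ X            # homogeneous coordinates in the image plane
    return p[:2] / p[2]  # pixel coordinates after perspective division

# Downward-looking camera, 1 m above the ground (R = identity for simplicity):
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
print(project(np.array([0.1, 0.0, 0.0]), R, t))  # point 10 cm off-axis -> [400. 240.]
```

The camera matrix estimation method of chapter 3 is based on inverting a relation of this kind to recover ground coordinates from image points.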


Chapter 1

Introduction

1.1 Motivation

This thesis is a continuation of previous works by Jorge Domingues [11] and Bernardo Henriques [14]. The purpose of Jorge Domingues' work was the creation and stabilization of an experimental platform where different control and estimation approaches could be tested and compared. Although the creation of the platform was successful, its stabilization was left undone. The work of Bernardo Henriques addressed the stability of the platform: the attitude estimation and control were developed in simulation; the attitude estimation was experimentally tested, but attitude control for the real platform remained unverified.

1.2 History of the Quadrotor

The first quadrotor was devised and built at the very beginning of the 20th century, when the brothers Louis and Jacques Breguet started experiments in this area under the guidance of the scientist and academician Charles Richet. In 1907, the Breguet brothers were reported to have successfully achieved take-off with a quadrotor named "Gyroplane No.1", with a pilot on board (figure 1.1). However, this flight test was not considered the first flight, since the assistance of four men was required due to stability problems.


Figure 1.1: Gyroplane No.1

With the passing of time, advances have been made in several areas, and in recent years interest in quadrotors has increased significantly for both military and civilian purposes. An example is the "Bell Boeing Quad TiltRotor", shown in figure 1.2. This project is still ongoing because of the advantages it presents, of which the capacity to carry large payloads and its VTOL capability are the most important. Since this vehicle presents both advantages and behaves similarly to helicopters, research in this technology is highly desired within the military.

(a) In the air (b) On an aircraft carrier

Figure 1.2: Bell Boeing Quad TiltRotor

Recent examples of smaller quadrotors with various applications are the "Parrot AR.drone", "the flying saucepan" and the "U4 QuadCopter". The "Parrot AR.drone", in figure 1.3a, sold to the general public, is an example of what the latest advances in technology have made possible: a quadrotor using wireless communications both to control the device and to view the images obtained with the on-board cameras, which can have diverse applications. "the flying saucepan", in figure 1.3b, is an example of a quadrotor used for surveillance purposes in the United Kingdom. This quadrotor assisted in the arrest of a suspect in thick fog with the use of an on-board thermal camera, thereby giving support to the police from a high position1. The "U4 QuadCopter", in figure 1.3c, is a quadrotor developed by the Portuguese company UAVision, which aims at its use for law enforcement, coordination of fire-fighting operations in difficult scenarios, and inspection of critical structures with difficult access such as bridges, electrical towers and others.

(a) Parrot AR.drone (b) the flying saucepan

(c) U4 QuadCopter

Figure 1.3: Small quadrotors with diverse applications

1.3 State of the Art

The advancements in technology and the low investment required for small devices have enabled research in devices such as UAVs. UAVs are being studied due to the possibilities they present and the wide range of interesting applications they may have. In this work, the remote navigation of robots through image is researched, and it is meant to be a contribution to future work in this area.

In section 1.2, several quadrotor concepts were presented. The development of such vehicles was however delayed, and they only started to appear in recent years, due to the critical stability issues of this kind of robot. It was only when solutions became available for the stabilization of quadrotors that the applications seen in section 1.2 started to appear.

In the area of autonomous flight, [18] showed it is possible to navigate a quadrotor using information provided by a camera with a downward orientation, without converting the information extracted from the image. A work on path following is presented in [5], where the objective is to track geometries and maintain the quadrotor in a fixed position with the help of a fixed downward camera. In [6], a control

and maintain the quadrotor in a fixed position with the help of a fixed downward camera. In [6], a control

strategy for a quadrotor is made with a camera equipped with pan and tilt control in order to keep the tar-

get in sight. A quadrotor tracking a trajectory is made in [27], where a numerical simulation is made with1more can be read at http://www.dailymail.co.uk/news/article-1250177/Police-make-arrest-using-unmanned-drone.

html

3

Page 28: Vision Path Following with a Stabilized Quadrotor · Vogal: Prof. Alexandra Bento Moutinho Novembro - 2011. ... trigonometrico considerando a geometria do problema, um m´ etodo baseado

a pre-planned path while using a PID controller. A quadrotor is used in simulation for tracking a target

with control based on Backstepping and Frenet-Serret theory while using a camera pointed downward

(see figure 1.4b), in [2].

Related work has been done with other kinds of robots. In [15], a ground mobile robot follows a line while moving forward; in this case, image pre-processing with a fixed threshold is done in order to clean the noise present. In [7], a robot arm (see figure 1.4a) was controlled with the help of a fixed camera, using a neural network to convert image coordinates directly into robot coordinates. Similarly, [26] presents a robot arm controlled by vision, using neural networks to convert image coordinates obtained from a stereo camera equipped with a pair of lenses.

Figure 1.4: Robot arm and quadrotor: (a) robot arm used in [7]; (b) DraganFlyer used in [2]

It was verified that image enhancement and segmentation are areas of computer vision with much recent development. For example, histogram-based methods for image treatment appear in works such as [13], [25] and [4], where the first two improve the quality of grayscale images and the last performs segmentation of areas in a color image. There are also works such as [1] that seek new algorithms to improve grayscale images. Even though new methods have been developed, the Otsu method [16] and segmentation using the Mahalanobis distance in color space, applied in works such as [10] and visible in figure 1.5, are held in high regard due to their capabilities in image segmentation.

Figure 1.5: Segmentation using Mahalanobis distance: (a) original image; (b) segmented image


In the area of image enhancement, there are works that should be mentioned dealing with an important aspect of images obtained in the real world: light reflections. Depending on the angle of image capture and the light conditions surrounding the camera, the image may present defects that result in a loss of information. Research in this area finds application in fields as critical as surgery, in works such as [22] and [21]. Applications also occur in more common areas, such as the improvement of video cameras, human detection under water and the removal of reflections in high-speed cameras [12], [24] and [23].

Figure 1.6: Organs viewed through endoscopy: (a) organ with reflected light; (b) organ with the reflection removed

For autonomous driving and flight, research has also been done on velocity estimation using only the image obtained by the robot camera. When the velocity is not available in certain applications, estimation alternatives are required, and image-based algorithms are one of them. A method applied in several works is Optical Flow, which detects the movement of pixels from a previous image to the one being analyzed. Works such as [9] and [19] estimate the velocity at which the robot is moving and compare it with the real one; the tests are done with robots capable of providing the real velocity, so that knowing both the real and the estimated velocity makes it possible to evaluate the method. In both works the estimation was successful.

1.4 Objectives

The objectives of this thesis are:

• to propose a control strategy for the quadrotor, with the purpose of following a ground track using

airborne vision feedback;

• to prepare the chosen approach, develop the necessary tools, and test them with a wheeled mobile robot;

• to evaluate the approach for the quadrotor in simulation before it is applied in the experimental

platform.


1.5 Contributions

During the development of this thesis several contributions were made; the ones worth mentioning are:

• A vision based simulation platform for systems that use image to apply the control;

• A Matlab Simulink block that processes the image of the track being followed;

• Matlab Simulink blocks with estimators of the tracking errors, given the position, orientation and parameters of the camera;

• A C++ program to handle communications between the computer and the quadrotor and to receive commands from a joystick;

• A control strategy for path following with the quadrotor.

A user manual was written for the correct setup and operation of the quadrotor.

1.6 Thesis Outline

This section gives a brief explanation of the contents of the following chapters.

Chapter 2 outlines the approach taken to accomplish the objective of this thesis, explaining the options made to fulfill it.

Chapter 3 presents the methods used for the estimation of variables, that is, the conversion of the image provided by the camera into values related to the systems. The image treatment used to reduce the noise present in the images is also explained, including both the functions used and the developed method. An evaluation of the developed methods is then performed, in specific conditions, and analyzed in terms of both absolute and relative errors.

Chapter 4 provides the results obtained for the control applied to the ground mobile robot and the respective analysis for each case. The control was tested considering ideal sensors, with vision based simulation, and with the real system on the laboratory track.

In chapter 5, the estimation of the longitudinal velocity using a method called Optical Flow is tested and explained. The need to estimate the longitudinal velocity through image comes from the lack of knowledge of the quadrotor velocity state; if this value is not available, outside control, such as a pilot, is needed.

Chapter 6 shows the results of the control applied to the quadrotor. The control is tested considering ideal sensors and with vision based simulation. The model used to synthesize the real system control is observed and analyzed.


Chapter 7 provides the conclusions obtained and topics that serve as starting points for future work.


Chapter 2

Problem Statement

2.1 Introduction

The objective is for a quadrotor to follow a ground track in the laboratory (figure 2.1) with the aid of a camera while moving forward. Five different frames are used:

• F0 is the global fixed frame, with z axis pointing down, and both x and y positioned horizontally

on the ground of the laboratory. Here the track path is defined and the variables [x, y, z] are the

position of the robot.

• Fr is the robot frame; the location of its origin is given for each robot in section 2.2. The location of the camera is defined in this frame.

• Fc is the camera frame, where the image is defined. Its origin and orientation are defined by the camera conditions.

• Fe is the tracking error frame, presented in figure 2.2. The xe axis is defined by the tangent of the track, and ye is defined in the horizontal plane, pointing to the right. The tracking errors are defined in this frame.

• Fmg is the mobile ground frame. This frame is used to obtain the tracking errors and is presented in chapter 3.

The variables [φ, θ, ψ] are the Euler angles needed to rotate frame F0 onto frame Fr. These are the attitude angles of the robot: roll, pitch and yaw.
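As a sketch of the rotation these angles describe, the matrix taking F0 coordinates into Fr coordinates can be composed as below; the ZYX (yaw-pitch-roll) sequence is the common aerospace convention and is assumed here, since the text does not state the exact sequence.

```python
import numpy as np

def rotation_f0_to_fr(phi, theta, psi):
    """World-to-body rotation matrix from the global frame F0 to the robot
    frame Fr, assuming the usual aerospace ZYX (yaw-pitch-roll) sequence."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cph, sph], [0, -sph, cph]])   # roll about x
    Ry = np.array([[cth, 0, -sth], [0, 1, 0], [sth, 0, cth]])   # pitch about y
    Rz = np.array([[cps, sps, 0], [-sps, cps, 0], [0, 0, 1]])   # yaw about z
    return Rx @ Ry @ Rz

# a pure 90-degree yaw expresses the world x axis as -y in body coordinates
R = rotation_f0_to_fr(0.0, 0.0, np.pi / 2)
```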


Figure 2.1: Track of the laboratory and three frames, F0, Fr and Fc

In order to follow the line, two variables are necessary: the tracking errors. One of them describes the distance between the robot and the line; it is defined as the cross-tracking error and denoted ye. The other provides the orientation error of the system relative to the line; it is defined as the heading error and denoted ψe. These variables are presented in figure 2.2 and their signs are related to the robot frame system. The methods used to obtain these values are presented in section 3.3.

Figure 2.2: Tracking errors and tracking error frame

In order to obtain the tracking errors from vision, it was decided to divide the image into two parts, top and bottom. The mass centers of the track in the top and bottom images are then computed, [xct, yct] and [xcb, ycb] respectively. In figure 2.3, these centers are marked in red and the image partition is shown with a blue line.

Figure 2.3: Image partition and top and bottom centers
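A minimal sketch of this partition-and-center computation, for a grayscale image in which track pixels are bright, could look as follows; the function name and axis convention (x along columns, y along rows) are illustrative assumptions.

```python
import numpy as np

def track_centers(img):
    """Centers of mass of the track pixels in the top and bottom halves
    of a grayscale image (track bright, background dark); sketch only."""
    def center(part, row_offset):
        total = part.sum()
        if total == 0:
            return None                  # no track pixels visible in this half
        rows, cols = np.indices(part.shape)
        xc = (cols * part).sum() / total                 # column coordinate
        yc = (rows * part).sum() / total + row_offset    # row coordinate, full image
        return xc, yc

    half = img.shape[0] // 2
    return center(img[:half], 0), center(img[half:], half)

# 4x4 toy image with a single bright pixel in each half
img = np.zeros((4, 4))
img[1, 2] = 1.0   # top half
img[3, 1] = 1.0   # bottom half
top, bottom = track_centers(img)
```

Weighting by pixel intensity (rather than a hard binary mask) matches the weighted estimation mentioned for block B in the diagram.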


Firstly, the quadrotor control is performed considering ideal sensors, in which the values of [ye, ψe] are known and the variables [x, y, h, φ, ψ, θ] are directly accessed. This can be viewed in figure 2.4, where the blocks C, E and S are the controller, error estimator and system, respectively. The system is a stabilized quadrotor that receives references for both attitude and height, variables [φr, θr, ψr, zr]. The attitude references are provided by the controller, although θ is controlled externally with the reference of the velocity V0. This is necessary in order to keep the system moving forward. The system is presented in subsection 2.2.2 and the control in section 6.2.3.

Figure 2.4: Block control diagram with ideal sensors

Before applying the control in the real system, a vision based simulation is carried out (figure 2.5). The tracking variables [ye, ψe] are now estimated with the methods presented in chapter 3. In figure 2.5, the blocks are:

• C and S are the same as before.

• V is the virtualization or the real image capture; the variables of Cam used are the position and orientation of the camera. This block provides an image named I.

• A1 represents the image enhancement performed over the raw image I; the resulting image is named Ifinal. The algorithms are presented in section 3.1. A1 is only used in the experiments, due to the problems related to the real camera.

• A2 is the estimation of the variables [xct, yct, xcb, ycb], performed from the enhanced image; the method is presented in section 3.2.

• B is the estimation of the cross-tracking and heading errors. These variables are obtained through weighted estimation and the intrinsic parameters of the camera, called Cam in figure 2.5, with the methods presented in section 3.3.

The control performed with vision for the quadrotor is presented in chapter 6.


Figure 2.5: Block control diagram with vision

In order to test and verify the methods used to estimate the tracking errors and the image enhancement, a ground robot was used. The control performed is similar to the ones presented in figures 2.4 and 2.5; however, since the system is different, its input and output variables differ. In this case the inputs are the left and right wheel angular velocities, [Ωl, Ωr], and the outputs are the position and orientation of the system, [x, y, ψ]. The references are the variables [yr, ψr, V0]. The system equations are presented in subsection 2.2.1 and the control results obtained are presented in chapter 4.

In section 3.4, an evaluation of the tracking error estimation methods is done. The methods are tested in a vision based simulation environment and the estimation errors are analyzed in different conditions. Since the quadrotor has no sensor capable of providing the longitudinal speed, an attempt to estimate it through vision is made in chapter 5. Trials are made with different setups in order to determine whether the estimation is possible.

2.2 Dynamic models

This section gives a brief explanation of the dynamic models of the mobile ground robot and the quadrotor used in the simulations. These models were used to evaluate and compare the control strategies. The models used for control design are presented in the control chapters 4 and 6.

2.2.1 Wheeled mobile robot

This subsection gives a brief explanation of the wheeled mobile robot, based on Edwin Carvalho's thesis [8]. This robot moves in the horizontal plane using two motors, each connected to one wheel, right and left. The geometric parameters of this robot are presented in figure 2.6:

• s represents the distance along the x axis between the robot's mass center and the motor axis.

• C is the center of the motor axis.

• b is the distance from each motor to C.


• r is the wheel radius.

The inputs of this system are the angular velocities of the wheels, Ωl and Ωr, which are directly converted into linear velocities by equations 2.1 and 2.2. The robot linear longitudinal velocity, vm, and angular velocity, θm, are obtained using equations 2.3 and 2.4, respectively. The wheeled mobile robot will be called rasteirinho throughout this work.

The origin of the robot frame of rasteirinho is located at C.

Figure 2.6: Geometry of mobile ground robot and variables, adapted from [8]

vl = r · Ωl    (2.1)

vr = r · Ωr    (2.2)

vm = (vl + vr) / 2    (2.3)

θm = (vl − vr) / (2 · b)    (2.4)
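Equations 2.1 to 2.4 translate directly into code; a minimal sketch (function and argument names are illustrative):

```python
def diff_drive_kinematics(omega_l, omega_r, r, b):
    """Wheel angular speeds to body velocities (equations 2.1-2.4)."""
    v_l = r * omega_l                     # left wheel linear speed   (2.1)
    v_r = r * omega_r                     # right wheel linear speed  (2.2)
    v_m = (v_l + v_r) / 2                 # longitudinal velocity     (2.3)
    theta_m = (v_l - v_r) / (2 * b)       # angular velocity          (2.4)
    return v_m, theta_m
```

Equal wheel speeds give a straight-line motion (zero angular velocity); opposite speeds give a pure rotation about C.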

In the experimental plant, both motors have an internal proportional-derivative controller that, with the aid of an encoder, regulates the wheel angular velocities to the desired values.

The motor angular velocities are commanded with values within [0, 255]. When a motor receives an order in the interval [0, 127] or [129, 255] it rotates in one direction or the other, and when it receives an order equal to 128 it stops. The maximum linear speed is near 0.7 m/s. A Bluetooth communication link is used to remotely control the robot.
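A hedged sketch of how such a command byte could be produced from a desired signed wheel speed; which side of 128 corresponds to forward motion is an assumption here, as the text does not specify it.

```python
def speed_to_command(v, v_max=0.7):
    """Map a signed wheel speed [m/s] to a [0, 255] command byte (sketch).
    128 stops the motor; the 'forward' side of 128 is assumed, not documented."""
    v = max(-v_max, min(v_max, v))            # saturate at the physical limit
    if v == 0:
        return 128
    if v > 0:
        return 129 + round(v / v_max * 126)   # assumed forward range (129..255]
    return 127 - round(-v / v_max * 127)      # assumed reverse range [0..127)
```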


2.2.2 Quadrotor

This subsection gives a brief explanation of the quadrotor model, based on Bernardo Henriques's thesis [14]. The inner control and state estimation developed in [14] were used in this work. The explanation focuses on the main physical components of the quadrotor, how motion is performed and the states used for the inner loop control.

Figure 2.7: Quadrotor with its reference (body) frame, from [14]

In the quadrotor, the origin of the robot frame Fr is situated at the mass center. The axes are defined as seen in figure 2.7.

As seen in figure 2.7, the quadrotor is equipped with four motors with propellers attached, paired two by two with opposite rotation directions. These four propellers yield the lift force necessary to oppose gravity. With separate changes in the propeller angular speeds it is possible to control the quadrotor in its 6 DOF. The attitude and height are controlled directly, whereas the horizontal translation is underactuated and controlled by changing the roll and pitch angles, φ and θ. The quadrotor is equipped with sensors used to estimate the height, attitude, accelerations and angular velocity in the robot frame system.

The height sensor range is [10 cm, 80 cm]. In [14] it was recommended that the quadrotor fly at up to 50 cm, so as not to surpass the height of 80 cm, due to the height sensor dynamics. The system is controlled at a frequency of 50 Hz.

An Extended Kalman Filter (EKF) was developed in [14] to estimate the attitude angles; a low level Linear-Quadratic Regulator (LQR) was then implemented to stabilize the system's attitude and height. These were used for the inner loop control of the system during the simulations in chapter 6.

The system control variables are [φr, θr, ψr, zr] and the outputs of the system are [φ, θ, ψ, x0, y0, h, Vx]. The variables [x0, y0, Vx] are not measured in the real system and are used only for simulation purposes: in order to simulate the real system behavior, the position is necessary along with the attitude. The variable Vx is the linear longitudinal velocity of the robot and is used to keep the system moving forward, as will be seen in chapter 6.


2.3 Controller

Consider the continuous-time LTI state space system of equation 2.5,

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)    (2.5)

where A, B, C and D are the matrices of the model that describe the system to control, x is the state vector and u is the input vector, i.e. the controllable variables. Even though the models are presented in continuous state space, the systems are controlled in discrete time during simulations and experiments. The discrete state space model is given by equation 2.6,

x(k + 1) = Ad x(k) + Bd u(k)
y(k) = Cd x(k) + Dd u(k)    (2.6)

where Ad, Bd, Cd and Dd are the matrices of the discretized system. To control the models, a discrete linear-quadratic regulator (DLQR) is used, with cost function J given by equation 2.7, in which Qd and Rd are weighting matrices.

J(u) = Σ_{k=1}^{∞} ( x(k)^T Qd x(k) + u(k)^T Rd u(k) )    (2.7)

The DLQR gain matrix is then obtained, after manipulation of the equations, as equation 2.8.

K = (Bd^T S Bd + Rd)^{-1} (Bd^T S Ad)    (2.8)

where S is the time-independent solution of the discrete Riccati equation, equation 2.9.

Ad^T S Ad − S − (Ad^T S Bd)(Bd^T S Bd + Rd)^{-1}(Bd^T S Ad) + Qd = 0    (2.9)

The control law is given by equation 2.10.

u(k) = −K x(k)    (2.10)

To obtain the gain matrix K of equations 2.8 and 2.10 in the discrete domain, the Matlab lqrd command is used. This function transforms the continuous model matrices into the discrete domain and computes the discrete equivalents Qd and Rd of the continuous weighting matrices, with the use of equation 2.11. Afterwards it calls the function dlqr with the obtained discrete values.


[Qd 0; 0 Rd] = ∫₀ᵀ [Ad^T 0; Bd^T I] [Q 0; 0 R] [Ad Bd; 0 I] dτ    (2.11)

Q and R, the continuous weighting matrices, were tuned according to the desired closed-loop response of the system.
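Outside Matlab, the same gain can be sketched by iterating the Riccati recursion of equation 2.9 to its fixed point S and then applying equation 2.8; the double-integrator model below, sampled at the thesis's 50 Hz, is purely illustrative, not the thesis model.

```python
import numpy as np

def dlqr_gain(Ad, Bd, Qd, Rd, iters=500):
    """Iterate the discrete Riccati recursion to a fixed point S, then
    K = (Bd^T S Bd + Rd)^-1 (Bd^T S Ad)  (sketch of equations 2.8-2.9)."""
    S = Qd.copy()
    for _ in range(iters):
        G = Bd.T @ S @ Bd + Rd
        S = Ad.T @ S @ Ad - (Ad.T @ S @ Bd) @ np.linalg.solve(G, Bd.T @ S @ Ad) + Qd
    return np.linalg.solve(Bd.T @ S @ Bd + Rd, Bd.T @ S @ Ad)

# illustrative plant: double integrator sampled at T = 0.02 s (50 Hz)
T = 0.02
Ad = np.array([[1.0, T], [0.0, 1.0]])
Bd = np.array([[T**2 / 2], [T]])
K = dlqr_gain(Ad, Bd, Qd=np.eye(2), Rd=np.array([[1.0]]))
```

With the control law u(k) = −K x(k), the closed-loop matrix Ad − Bd K has all eigenvalues inside the unit circle, i.e. the regulator is stabilizing.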


Chapter 3

Image processing and tracking errors estimation

The methods presented in this chapter are used to reduce the image noise and retrieve the robot position variables; they correspond to the blocks A1, A2 and B in the block diagram of figure 2.5.

An image can contain high amounts of information and noise at the same time, so good image filtering and processing is relevant. Section 3.1 describes the tools used and developed to reduce the noise and the unwanted information present in the image.

The method used to estimate the centers of the image partitions is presented in section 3.2. In subsection 3.3.1.2, the transformation of the cross-tracking error into an error in the robot frame is provided. For control purposes it is important to have the best possible accuracy in the variables obtained through the sensors; in this case the sensor is a camera that provides information through an image. The methods used and developed are thus presented in section 3.3.

In section 3.4 the methods are studied, regarding both the errors in the estimation and the need to know the camera conditions. Depending on the camera conditions assumed, the estimates will be distorted.

3.1 Image enhancement

For good information retrieval it is necessary to have a good image to work with: the cleaner the image, the easier the retrieval and the better the position estimate obtained. An ideal image is not possible due to the inconstant conditions present in the laboratory. Each position presents different light intensities and reflections, in addition to noise corrupting the signal due to the wireless transmission.


For a simpler explanation, the image enhancement section is divided into three subsections. In subsection Pre-enhancement, the functions used and the respective options are explained; in subsection Post-enhancement, a modified method based on the histogram equalization technique is presented; and finally, in subsection Final treatment, the final step of the image treatment is presented.

3.1.1 Pre-enhancement

The first step of image enhancement is the removal of some random noise present in the image, similar to "salt and pepper" noise. The median filter function from the Simulink Video and Image Processing Blockset toolbox is used.

The median filter considers each pixel at a time, comparing it to its neighborhood; in case the pixel value is too different from its surroundings, it is replaced with the median value of the neighborhood. This function preserves edges and reduces noise with little degradation of the base image, proving to be a good tool for noise reduction, as can be seen in the top part of figures 3.1a and 3.1b. In this case, the neighborhood size used for the filter was 3x3, with the pixel in the middle being the one compared. The bigger the neighborhood, the bigger the processing time, and as can be seen in the figures, the chosen neighborhood size suffices for the removal of this specific noise.

Figure 3.1: Median filter effect, mostly seen in the top part of the images: (a) without filter; (b) with filter
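The Simulink block is the tool actually used; purely as an illustration of the 3x3 median filtering just described, a numpy sketch (which leaves border pixels unchanged) could be:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter sketch: each interior pixel is replaced by the
    median of its 3x3 neighborhood; border pixels are left unchanged."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# a lone 'salt' pixel in a flat region is removed by the filter
img = np.zeros((5, 5))
img[2, 2] = 255.0
filtered = median_filter3(img)
```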

The next step was to apply the contrast adjustment function from the same toolbox. This function adjusts the contrast of an image by 'stretching' the range of pixel values present in the image to a new range, providing a brighter image, as can be seen from figure 3.2a to figure 3.2b. This function has proven to be a good tool, but it also enhances unwanted visual information present in the floor and occasionally results in images with too much brightness, making it hard to apply a threshold for image binarization. The function used to minimize these problems is presented and explained in subsection 3.1.2.


Figure 3.2: Contrast adjustment effect, visible in the difference in brightness of image (b): (a) without contrast adjustment; (b) with contrast adjustment

The functions chosen for the pre-enhancement of the raw image are:

• median filter

• contrast enhancement

3.1.2 Post-enhancement

To reach a reasonable level of brightness and remove unwanted visual information, a method was developed. This method is based on histogram equalization and the idea originated in [25]. For a better understanding, a brief explanation of histogram equalization is provided, with that article as source.

In an image I there is a number nk of pixels with value k, out of a total of N pixels. Assume k indexes the pixel gray levels and the level range is [0, K − 1], for example [0, 255]. With these variables, a probability density function (PDF) can be obtained, equation 3.1.

p(k) = nk / N, for k = 0, 1, . . . , K − 1    (3.1)

Knowing the PDF, the cumulative distribution function (CDF) can be determined for each level, equation 3.2.

C(k) = Σ_{m=0}^{k} p(m), for k = 0, 1, . . . , K − 1    (3.2)

Assuming it is desired to widen the maximum value from K to a new value L, the intensity of the new gray levels is given by equation 3.3.

Hk = (L − 1) · C(k)    (3.3)
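Equations 3.1 to 3.3 can be sketched directly in numpy; the tiny 4-level image below is illustrative.

```python
import numpy as np

def equalize(img, K=256, L=256):
    """Plain histogram equalization (equations 3.1-3.3): build the PDF
    and CDF, then map each level k to H_k = (L-1) * C(k)."""
    counts = np.bincount(img.ravel(), minlength=K)   # n_k per gray level
    pdf = counts / img.size                          # p(k) = n_k / N   (3.1)
    cdf = np.cumsum(pdf)                             # C(k)             (3.2)
    H = np.round((L - 1) * cdf).astype(img.dtype)    # H_k              (3.3)
    return H[img]                                    # look-up per pixel

img = np.array([[0, 0, 1], [1, 2, 3]], dtype=np.uint8)  # 4 gray levels
out = equalize(img, K=4, L=256)
```

Frequent levels are spread out over the output range while rare levels are pushed toward the extremes, which is exactly the over-enhancement behavior the modified method in this section tries to moderate.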


Figure 3.3: Histogram equalization effect: (a) without histogram equalization; (b) with histogram equalization

The method presented in [25] replaces the PDF with a different function, equation 3.4. The final gray levels are obtained with additional operations presented in the article but, since the method developed in this subsection only uses this equation, a deeper analysis is not made.

Pwt(k) = Pu,                                if P(k) > Pu
         ((P(k) − Pl)/(Pu − Pl))^r · Pu,    if Pl < P(k) < Pu
         0,                                 if P(k) < Pl        (3.4)

where Pu and Pl are the upper and lower limits of the PDF, defined as percentages of the maximum of the image PDF; these determine how the values change. Pu avoids dominance of the levels with high probabilities when allocating the output dynamic range. Pl cuts the levels whose probabilities are too low (of low importance) and provides better use of the full dynamic range. With r < 1, the exponent r gives higher weight to lower probabilities than to higher ones, protecting less probable levels and making over-enhancement less likely to occur. The method developed here uses function 3.4 to obtain a weighted version of the CDF, equation 3.5, which then replaces the original CDF in equation 3.3 of the histogram equalization technique. The function Cpe is named the CDF post-enhancement.

Cpe(k) = 1,                              if C(k) > Cu
         ((C(k) − Cl)/(Cu − Cl))^r,      if Cl < C(k) < Cu
         0,                              if C(k) < Cl           (3.5)

By applying equation 3.5 to the CDF it was possible to remove most of the undesired visual information present in the image. The function shortens the variation of the CDF values. After obtaining the new CDF distribution, a normalization is done by restricting the distribution to the range [0, 1]. In this method, the roles of the variables Cu and Cl are different from those of Pu and Pl: Cl protects and limits the number of pixels with lower level values, while Cu pulls most of the CDF values to the top of the range. The value r defines how much the CDF values converge to the limits. The new gray levels are defined by the CDF distribution: gray levels where the CDF has value 0 are given pixel value 0, and gray levels where the CDF has value 1 are given pixel value 255. The resulting image after this processing is shown in figures 3.4d and 3.4e.
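A minimal sketch of this post-enhancement step, combining equation 3.5 with the level mapping of equation 3.3; the parameter values follow the ones reported later for figure 3.6 (Cu = 0.6, Cl = 0.17, r = 0.1), while the helper name and toy image are assumptions.

```python
import numpy as np

def cdf_post_enhance(img, Cu=0.6, Cl=0.17, r=0.1, K=256):
    """Post-enhancement sketch: weight the CDF with equation 3.5,
    then map gray levels with 255 * Cpe(k)."""
    counts = np.bincount(img.ravel(), minlength=K)
    C = np.cumsum(counts / img.size)              # ordinary CDF
    # clip implements the three branches of equation 3.5 in one expression
    Cpe = np.clip((C - Cl) / (Cu - Cl), 0.0, 1.0) ** r
    return np.round(255 * Cpe).astype(np.uint8)[img]

# toy 10-pixel image: one dark outlier, a mid block and a bright "line"
img = np.array([[5, 50, 50, 50, 50, 200, 200, 200, 200, 200]], dtype=np.uint8)
out = cdf_post_enhance(img)
```

Levels whose CDF stays below Cl are crushed to 0, levels above Cu saturate at 255, and the small exponent r pushes the in-between levels strongly toward the bright end.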

As can be seen in figures 3.4a and 3.4b, the pixel values of the line and of its surroundings vary due to the inconstant conditions present in the laboratory, making it hard for a fixed threshold to binarize the image and obtain a good line image. If normal histogram equalization were applied, as can be seen in figure 3.4c, thresholding would still be hard, since more undesired information would exist in the image with pixel values close to those of the line. A normal threshold would not be able to filter it out and would leave a lot of noise, as in the images. Even though the conditions change with position, the post-enhancement method is able to adapt and clean most of the unwanted information, as can be seen in figures 3.4d and 3.4e.

Figure 3.4: Images before and after post-enhancement: (a) pre-enhanced image 1; (b) pre-enhanced image 2; (c) image 1 with histogram equalization; (d) image 1 with post-enhancement; (e) image 2 with post-enhancement. Image (c) shows the effect of a traditional histogram equalization.

The method presented good results but also a problem: when there is a low concentration of line pixels, it tends to create random artifacts in the image borders, visible in figure 3.5. In order to deal with these artifacts, a final treatment was made.


Figure 3.5: Possible artifacts obtained using the developed treatment

3.1.3 Final treatment

In order to mitigate the effect of such artifacts on the later estimations, cross information between the pre-enhanced image and the post-enhanced image was used. This consists of inverting the pre-enhanced image and multiplying it, element-wise, by the post-enhanced image, equation 3.6. The result of this product is a final image with weighted information, image 3.6c. The line has a higher weight due to its high pixel values, while the artifacts that appear at the image borders, which do not belong to the same group of values as the line, have lower weights. This makes it possible to compute a weighted estimation that takes the pixel values into account, improving the result that would be obtained from image 3.6b alone.

These image operations were implemented with the processing time in mind, making sure a good sampling rate would be achievable.

I_{final} = (1 - I_{pre-enhanced}) \cdot I_{post-enhanced} (3.6)
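Equation 3.6 amounts to an element-wise product; a minimal sketch, assuming both images are normalized to [0, 1]:

```python
import numpy as np

def final_treatment(pre, post):
    # pre, post: float images in [0, 1]. Inverting the pre-enhanced image
    # and weighting the post-enhanced one with it (equation 3.6) suppresses
    # border artifacts whose pre-enhanced pixels are bright.
    return (1.0 - pre) * post
```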

Figure 3.6: (a) image with pre-enhancement; (b) image with post-enhancement; (c) final image. The various steps of the image enhancement applied to the original image obtained through the image acquisition device; the post-enhancement parameters were Cu = 0.6, Cl = 0.17 and r = 0.1.


3.1.4 Evaluation of the image treatment

In order to evaluate the effectiveness of the first part of the post-enhancement, it is compared against plain thresholds. Figures 3.7 and 3.8 show two different locations. For both figures, the raw image is presented first, then the results with simple thresholds equal to 0.1 and 0.2, the result with the post-enhancement, and finally the final image.

Figure 3.7: Comparison of images obtained during track following with the wheeled mobile robot in location 1: (a) raw image; (b) threshold equal to 0.1; (c) threshold equal to 0.2; (d) post-enhancement; (e) final image.


Figure 3.8: Comparison of images obtained during track following with the wheeled mobile robot in location 2: (a) raw image; (b) threshold equal to 0.1; (c) threshold equal to 0.2; (d) post-enhancement; (e) final image.

By comparing the images, it can be seen that with normal thresholds either over-detection or under-

detection of the line occurs. An over-detection can be observed in figure 3.7c. An under-detection is

visible in 3.8b and 3.8c.

To observe the effects of the image treatment, the histograms and normalized CDFs of the raw and final images are presented for both locations, in figures 3.9 and 3.10.

Figure 3.9: Histograms and normalized CDF of location 1: (a) histogram of raw image; (b) histogram of final image; (c) normalized CDF for raw image (in blue) and final image (in black).

Figure 3.10: Histograms and normalized CDF of location 2: (a) histogram of raw image; (b) histogram of final image; (c) normalized CDF for raw image (in blue) and final image (in black).

Comparing figure 3.9a with 3.9b and figure 3.10a with 3.10b, and analyzing figures 3.9c and 3.10c, a distinctly different distribution is visible: the pixels in the final image are concentrated at the low and high intensity values, as expected.

The histogram of the raw image in location 2 presented higher pixel values; the effect of this on the image treatment is visible in the normalized CDF, whose final part is slower to reach the value 1 when compared to location 1.

3.2 Mass centers estimation

The first step consists of dividing the image into two parts, top and bottom, and obtaining the coordinates of the blob centers. For the calculation of these centers, equations 3.7 and 3.8 are used for the x axis and equations 3.9 and 3.10 for the y axis. The parameters are:

• n and m are the number of lines and columns of the image.

• b is the number of lines included in the top part of the image, the blue line in figure 3.11; the value used was 200.

• Ifinal(i, j) is the final image obtained from the image enhancement. Since the image is not binary, the estimation is not a geometric center but a mass center.

• i and j are the line and column indexes.

This image partitioning gives different weights to the information provided by the top and bottom parts according to the b value. If a partition is small, it is more sensitive to changes in its interior. Thus, b defines which part is more important.


Figure 3.11: Camera frame with partition coordinates

x_{ct} = \frac{\sum_{i=1}^{b} \sum_{j=1}^{m} j \, I_{final}(i,j)}{\sum_{i=1}^{b} \sum_{j=1}^{m} I_{final}(i,j)} - \frac{m}{2} (3.7)

x_{cb} = \frac{\sum_{i=1+b}^{n} \sum_{j=1}^{m} j \, I_{final}(i,j)}{\sum_{i=1+b}^{n} \sum_{j=1}^{m} I_{final}(i,j)} - \frac{m}{2} (3.8)

y_{ct} = \frac{\sum_{i=1}^{b} \sum_{j=1}^{m} i \, I_{final}(i,j)}{\sum_{i=1}^{b} \sum_{j=1}^{m} I_{final}(i,j)} - \frac{n}{2} (3.9)

y_{cb} = \frac{\sum_{i=1+b}^{n} \sum_{j=1}^{m} i \, I_{final}(i,j)}{\sum_{i=1+b}^{n} \sum_{j=1}^{m} I_{final}(i,j)} - \frac{n}{2} (3.10)
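The mass-center equations 3.7 to 3.10 can be sketched as follows, assuming 1-based line and column indices as in figure 3.11; mass_centers is an illustrative name, not the thesis code.

```python
import numpy as np

def mass_centers(I, b):
    # I: final (grayscale, non-binary) image, n lines by m columns;
    # b: number of lines in the top partition.
    n, m = I.shape
    j = np.arange(1, m + 1)   # 1-based column indices
    i = np.arange(1, n + 1)   # 1-based line indices

    def center(part, rows):
        s = part.sum()
        xc = (part * j).sum() / s - m / 2              # eqs. 3.7 / 3.8
        yc = (part * rows[:, None]).sum() / s - n / 2  # eqs. 3.9 / 3.10
        return xc, yc

    return center(I[:b], i[:b]), center(I[b:], i[b:])
```

Because the image is gray-valued rather than binary, each pixel contributes in proportion to its intensity, which is exactly the weighting the final treatment was designed to provide.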

3.3 Tracking error visual estimation

Due to the constantly changing camera conditions, a Position Based Visual Servoing (PBVS) approach was chosen. Here the values of interest are obtained according to the camera location in the mobile ground frame. In [18] an Image Based Visual Servoing (IBVS) approach was used, with the camera positioned downward. That setup makes it possible to obtain the variable values without the distortions introduced by the position and orientation of the camera.

To estimate the tracking errors, ye and ψe, three different methods were considered and are presented:

• Trigonometric method, in sub-section 3.3.1


• Camera matrix method, in sub-section 3.3.2

• Black-box method, in sub-section 3.3.3

For these methods the aid of the mobile ground frame was necessary. The origin of this frame is located at the projection of the camera on the ground. The axes xmg, ymg and zmg are defined by the system of equations 3.11; the frame is similar to the global frame, except that it is aligned with the camera. The frame is presented in figure 3.12.

\vec{x}_{mg} = \vec{z}_{mg} \times \vec{x}_c
\vec{y}_{mg} = \vec{z}_{mg} \times \vec{x}_{mg}
\vec{z}_{mg} = \vec{z}_0
(3.11)

Figure 3.12: Camera frame and mobile ground frame

3.3.1 Trigonometric Method

The method presented in this sub-section can be considered the most intuitive of the three. It is based on direct trigonometric relations between the image coordinates and the mobile ground frame coordinates.

3.3.1.1 From image plane to mobile ground plane

If the horizontal and vertical view angles are known, it is possible to deduce the conversion from pixel coordinates to the angles θx and θy, along the x and y directions.

Figure 3.13: Camera horizontal and vertical total view angles, θx and θy

28

Page 53: Vision Path Following with a Stabilized Quadrotor · Vogal: Prof. Alexandra Bento Moutinho Novembro - 2011. ... trigonometrico considerando a geometria do problema, um m´ etodo baseado

For a point with image coordinates (x, y), the equivalent angles are given by equations 3.12 and 3.13. The variables θ_{X open angle} and θ_{Y open angle} correspond, as the names suggest, to the total angle that the camera can view along the respective axis.

\theta_y = \frac{\theta_{Y\,open\,angle}}{m} \cdot y (3.12)

\theta_x = \frac{\theta_{X\,open\,angle}}{n} \cdot x (3.13)

Since the camera height h and tilt angle θcam are known, the coordinates (px, py) of a point in the mobile ground frame may be deduced from the angles θx and θy of its projection in the image.

For the longitudinal coordinate px, the relation may be computed from the trigonometric relations illustrated in figure 3.14, leading to equation 3.14.

Figure 3.14: Camera frame and points of interest

p_x = \frac{h}{\tan(\theta_{camera} - \theta_y)} (3.14)

For the lateral coordinate py, distances must be evaluated in the plane defined by the angles θcam and θy, yielding equation 3.15.

p_y = \frac{h \tan(\theta_x)}{\sin(\theta_{camera} - \theta_y)} (3.15)

In equation 3.15 the camera roll angle φ is neglected. In order to mitigate or anticipate its influence, a first order correction may be considered, giving equation 3.16.

p_y = \frac{h \tan(\theta_x)}{\sin(\theta_{camera} - \theta_y)} + h \tan(\phi) (3.16)
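Equations 3.12 to 3.16 chain together as follows; a sketch assuming angles in radians and image coordinates (x, y) measured from the image center, following the thesis conventions (function and argument names are illustrative):

```python
import numpy as np

def pixel_to_ground(x_px, y_px, h, theta_cam, n, m,
                    theta_x_open, theta_y_open, phi=0.0):
    # Image coordinates to mobile-ground coordinates, equations 3.12-3.16.
    theta_y = theta_y_open / m * y_px               # eq. 3.12
    theta_x = theta_x_open / n * x_px               # eq. 3.13
    px = h / np.tan(theta_cam - theta_y)            # eq. 3.14
    py = (h * np.tan(theta_x) / np.sin(theta_cam - theta_y)
          + h * np.tan(phi))                        # eq. 3.16
    return px, py
```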


3.3.1.2 From mobile ground coordinates to tracking errors

To determine the heading error, equation 3.17 was deduced using the center coordinates of both top and bottom parts of the image: py1 and px1 for the bottom part, and py2 and px2 for the top part.

\psi_e = \arctan\left(\frac{p_{y1} - p_{y2}}{p_{x1} - p_{x2}}\right) (3.17)

The cross-tracking error is computed as the distance between a line and a point, leading to equation 3.18, where point p0 is the origin of the mobile ground frame, and p1 and p2 are the bottom and top image centers in mobile ground coordinates.

y_e = \frac{(p_0 - p_1) \times (p_0 - p_2)}{|p_2 - p_1|} (3.18)

By restricting the problem to the plane z = 0 it is possible to obtain a simplified version of equation 3.18, presented as equation 3.19.

y_e = \frac{p_{x2} \, p_{y1} - p_{x1} \, p_{y2}}{\sqrt{(p_{x1} - p_{x2})^2 + (p_{y1} - p_{y2})^2}} (3.19)
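Equations 3.17 and 3.19 can be sketched as:

```python
import numpy as np

def tracking_errors(p1, p2):
    # p1 = (px1, py1): bottom center, p2 = (px2, py2): top center,
    # both in mobile ground coordinates.
    px1, py1 = p1
    px2, py2 = p2
    psi_e = np.arctan((py1 - py2) / (px1 - px2))                   # eq. 3.17
    ye = (px2 * py1 - px1 * py2) / np.hypot(px1 - px2, py1 - py2)  # eq. 3.19
    return ye, psi_e
```

For a line parallel to the camera heading but offset by 0.5 m, for instance, the heading error is zero and the cross-tracking error is the offset itself.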

3.3.2 Camera matrix method

This method is based on knowledge of the camera matrix, a parameter that encodes various properties of the image acquisition device. This matrix has some of the camera's internal parameters embedded in it, equation 3.21. The internal parameters are:

• fc is the focal length; it determines the angle of view and the magnification along both the x and y axes;

• cc is the principal point, the origin of the image referential, considered to be the image center;

• αc is the skew coefficient, which defines the angle between the x and y pixel axes;

• kc contains the distortion coefficients, of radial and tangential distortions (fish-eye lens distortion coefficients, for example).

The distortion parameter is neglected in this thesis due to the nature of the toolbox used to determine the internal parameters, for both the virtual and the real camera, and it will therefore not be referred to again. The other parameters are related to the type of deformations applied by the camera to the objects present in a plane that can be considered the camera view plane. The camera view plane, as shown in figure 3.15, is the plane from which the camera retrieves projected information and then deforms it into an image matrix. The values of the internal parameters are available in appendix C, obtained using a calibration toolbox 1.

1more information can be seen in http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html


For the conversion from the image plane to the camera plane, equation 3.20 was used.

p = KX (3.20)

where:

• p is the homogeneous coordinate point in the image plane.

• K is the camera matrix.

• X is the homogeneous coordinate point in the camera view plane.

with:

K = \begin{bmatrix} f_c(1) & \alpha_c \cdot f_c(1) & c_c(1) \\ 0 & f_c(2) & c_c(2) \\ 0 & 0 & 1 \end{bmatrix} (3.21)

With the points known in the camera view plane, it is now possible to perform the conversion from the camera frame to the ground frame. With this in mind, it is important to understand how an object on the ground is projected into the camera frame from a non-vertical position, visible in figure 3.15.

Figure 3.15: Camera image distortion; in the π frame a projection of the ground element over the camera frame can be viewed

As can be seen in the π plane in figure 3.15, the objects present in the ground plane suffer a non-linear dimension transformation. Both the X and Y axes in the camera view plane suffer dimension reductions, and different ones for each side of the square. The dimension reduction is mostly due to the different depths of the object points with respect to the camera view plane, which originate from the camera orientation and viewing position. The further away the view, the greater the visible dimension reduction; the more the orientation angles differ from the ground, the bigger the difference between edges. Knowing the type of projection, a formula describing the coordinate transformation is easily obtained. This equation, 3.22, considers the position and orientation of the camera.

X = R(q − t) (3.22)


Where:

• X is the point in the camera frame.

• R is the rotation matrix from ground frame to camera frame.

• q is the point in the ground frame.

• t is the camera position in the ground frame.

By solving the system of equations 3.20 and 3.22 for the variable q, knowing that qz is zero, we can obtain the coordinates qx and qy, and with them the line points in the ground frame for both top and bottom parts. With both centers obtained in the ground frame, it is possible to estimate the cross-tracking error and heading error using equations 3.17 and 3.19.
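Solving equations 3.20 and 3.22 for q with qz = 0 can be sketched as follows; the camera matrix is assumed invertible and the homogeneous image point is only defined up to scale, so the back-projected ray is rescaled to hit the ground plane:

```python
import numpy as np

def image_to_ground(p_img, K, R, t):
    # Back-project a pixel (homogeneous p_img = [u, v, 1]) to the ground
    # plane qz = 0, combining equations 3.20 and 3.22. R rotates the ground
    # frame into the camera frame; t is the camera position in the ground frame.
    ray = R.T @ (np.linalg.inv(K) @ p_img)  # ray direction in the ground frame
    s = -t[2] / ray[2]                      # scale so that the point has qz = 0
    q = t + s * ray
    return q[:2]                            # (qx, qy)
```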

3.3.3 Black Box model

In this section a different methodology is used; the objective is to create a model that does the same as the trigonometric and camera matrix methods, receiving the image centers in camera frame coordinates and providing the tracking errors directly, as described in figure 3.16.

Figure 3.16: Neural network inputs and outputs: (xct, yct) and (xcb, ycb) in, (ye, ψe) out

The first approach was to find an empirical formula, but due to the existence of four input variables and the significant changes that small variations in them may cause in the output values, it was not possible to find a relation. Neural Networks (NN) were therefore used.

To simplify the NN, a separate model is used to estimate each output variable, ye and ψe. The input variables of the models are [xcb, ycb, xct, yct].

Several NN structures were tested and among them a personalized network was chosen. The architecture of the proposed neural network is [4 7 15 4] neurons in each layer. The first, fourth and output layers are composed of pure linear functions; the second and third layers are composed of radial basis functions.
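The forward pass of such a network can be sketched as follows; the weights below are random placeholders standing in for the trained values, and the Gaussian form of the radial basis activation is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(v):
    # Gaussian radial basis activation, applied element-wise.
    return np.exp(-v ** 2)

identity = lambda v: v

# Layer widths [4 7 15 4] plus a single linear output neuron.
sizes = [4, 4, 7, 15, 4, 1]
acts = [identity, rbf, rbf, identity, identity]
W = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [rng.standard_normal(m) for m in sizes[1:]]

def forward(x):
    # x = [xcb, ycb, xct, yct]; returns one tracking-error estimate.
    for Wi, bi, f in zip(W, b, acts):
        x = f(Wi @ x + bi)
    return float(x[0])
```

One such network is trained per output, so estimating both ye and ψe requires two forward passes with separate weight sets.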


To use the NN in the Simulink environment, where the image processing and control are performed, the command gensim is used. This command generates a neural network block with the weights and architecture functions obtained through the training.

3.3.3.1 Training methodology

Training a NN with data provided by the real camera is not possible: a vast amount of data, input and output values of various points, is necessary to successfully train a NN. To obtain the training data, the VR environment is used, where the exact values of the variables are available. Even though the NN is trained with data acquired by the virtual camera in the same conditions as those of the real camera, the intrinsic parameters of the two cameras differ. In order to use the model with the real camera, it is necessary to convert the image coordinates from the real camera to the virtual one, using equation 3.23.

p_{real} = K_{real} K_{virtual}^{-1} p_{virtual} (3.23)

where:

• p is the point in the image frame of the real or virtual camera.

• K is the camera matrix of the real or virtual camera.

For the data acquisition, two different methods were tried:

• The first used a straight line, in figure 3.17a.

• The second used two blobs, one in each image part, moved randomly, in figure 3.17b.

With the first method it was observed that the resulting NN had problems characterizing high error values, due to the low variation of the blob center values. The second method gave better results, since the data obtained had a wider range than that of the first method.

(a) Straight line (b) Black blobs

Figure 3.17: Camera view of the two simulation types


The analysis of the NN performance is presented in section 3.4, along with the viability of using it for the real camera.

3.4 Evaluation of the estimation methods

In this section the errors of the estimation methods presented in section 3.3 are studied in two different situations. In the first, more favorable situation, the position and orientation of the camera are fed back to the estimators. In the second, the position and orientation vary while the estimators assume that the camera is in a fixed position. Afterwards, an analysis of the obtained statistical values is made.

It is important to have good estimations, but it is also important to minimize the communication between the computer and the robot. It is therefore analyzed whether the feedback of the position and orientation of the camera to the estimators is necessary, and which camera variables are the most important to feed back.

3.4.1 Evaluation conditions

The nominal position of the camera with respect to the global frame is considered to be at Z0 = −0.5 m with a tilt angle of −45°. Since it is desired to know the estimation errors when the methods are used on the quadrotor, the mass center of the robot was taken into account. With the camera in a position other than the mass center, the effects of the robot attitude angles on the camera have a higher impact. The position of the camera in the robot frame of the quadrotor is Xr = 5.5 cm and Zr = 4 cm.

The sample size of the statistical error analysis consists of 500 different tracking errors. These were imposed by positioning two separate black blobs, one in each image part, and moving these blobs randomly. By using two black blobs instead of a line, precise knowledge of the centers is available and a higher variation of the tracking errors can be studied. This provides an overview of the accuracy of the methods over a range wider than the one to be used.

For the statistical analysis, the absolute error mean and standard deviation are presented along with the relative error mean and standard deviation. The relative values give the errors as a percentage of the real values. The variables used to describe the mean and standard deviation are ε and σ; for the relative error mean and standard deviation, the variables used are ε% and σ%.

The study performed in this section is done in the VR environment, where the exact locations are available; due to experimental limitations the same study is not possible for the real camera, where the locations are only approximations with error.


3.4.2 Tracking estimation with position and attitude known

The estimation methods have precise relations with h and the θ angle, but in the case of the φ angle its effects on the estimation are only mitigated in the trigonometric method and are not taken into account in the Black box model. The analyses presented are for the nominal position of the camera and with different φ angles.

In this section an additional method, called Methods mean, is used. Its first step consists in obtaining the image centers in the mobile ground frame using the trigonometric and camera matrix methods and computing the mean of these values; afterwards, the cross tracking error and heading error are calculated.

                             ye                                ψe
Method            ε [m]    σ [m]    ε%    σ%      ε [°]    σ [°]    ε%    σ%
Trigonometric    -0.0004   0.0048   3.5   5.8    -0.0516   0.7678   7.3  31.1
Camera Matrix    -0.0007   0.0027   2.0   3.9    -0.0516   0.1203   1.4   4.7
Methods mean     -0.0006   0.0016   1.6   2.8    -0.0516   0.3724   3.4  14.7
Neural Network    0.0000   0.0016   2.4   6.6     0.0115   0.2120   1.6   6.7

Table 3.1: Statistics of the errors of the methods

In table 3.1 the mean and standard deviation of the errors present low values. The trigonometric method proves to be the worst when the relative errors are compared. The ε% and σ% also present low values, except for the trigonometric and mean methods. Because of the trigonometric method, the mean method has worse estimations for the heading error, but at the same time the cross tracking error is improved. The neural network presents good results, but the camera matrix method presents better estimations.

                                  ye                                ψe
∆φ     Method            ε [m]    σ [m]    ε%    σ%      ε [°]    σ [°]    ε%    σ%
±5°    Trigonometric    -0.0005   0.0062   6.0   8.7    -0.0516   1.1459   8.9  36.7
       Camera Matrix    -0.0006   0.0022   5.8  11.4    -0.0573   0.1089   1.3   4.9
       Methods mean     -0.0006   0.0028   5.0   5.4    -0.0516   0.5329   4.2  18.2
       Neural Network    0.0001   0.0036  34.1  21.8     0.0516   0.9683   4.3   5.7

Table 3.2: Statistics of the errors of the methods with roll angle variations of the quadrotor

Through table 3.2 it may be observed that, even though the methods take into account or mitigate

the φ angle influence, an increase in the estimation errors is visible. The neural network presents bad

results as expected, since the training of the network was done without taking this variation into account.


3.4.3 Tracking estimation with position and attitude unknown

In this section the tracking error estimations are tested under conditions different from those assumed by the estimators. The camera nominal position is the same as mentioned before.

The height, tilt and roll angles describe the variations that the robot should exhibit while in motion, and to ascertain which variables have more influence on the estimation methods, the positions and angles of the robot were tested individually. By comparing the statistical values of the errors for each case, it is possible to determine whether the estimators need to know the robot conditions and how these may influence the control of the robot.

                                   ye                                ψe
∆h (m)   Method            ε [m]    σ [m]    ε%    σ%      ε [°]    σ [°]    ε%    σ%
0.05     Trigonometric     0.0008   0.0253  13.0   5.9    -0.0344   0.8308   9.3  42.0
         Camera Matrix     0.0006   0.0212  10.6   4.8    -0.0458   0.1432   1.7  11.4
         Methods mean      0.0007   0.0231  11.4   2.6    -0.0401   0.3495   3.8  17.0
         Neural Network    0.0013   0.0229  11.3   4.9     0.0458   0.1776   1.9   8.5
-0.05    Trigonometric    -0.0017   0.0201  12.8   8.8    -0.0630   0.7391   5.3  22.4
         Camera Matrix    -0.0020   0.0260  16.2   3.9    -0.0573   0.1318   1.3   4.2
         Methods mean     -0.0019   0.0230  14.0   3.2    -0.0573   0.3151   2.4  10.4
         Neural Network   -0.0013   0.0242  15.1   3.7     0.0229   0.1719   1.4   4.5

Table 3.3: Statistics of the errors of the methods with height variation, ∆h

The height variations have a reduced impact on the estimations, as shown in table 3.3, since the variations in altitude have low amplitudes. This variation affects mostly the cross tracking error, because it is the element h that gives the amplitude of the error. In the case of ψe, the amplitude of the distances is not required; only the relations between dimensions need to be accurate.

                                  ye                                 ψe
∆θ     Method            ε [m]    σ [m]    ε%    σ%      ε [°]    σ [°]     ε%      σ%
5°     Trigonometric     0.0009   0.0285  13.0   6.7     0.0516   1.8048  155.4  2866.1
       Camera Matrix     0.0007   0.0256  13.4  11.4     0.0401   2.1314  122.0  2232.6
       Methods mean      0.0008   0.0269  13.0   8.5     0.0458   1.9366  138.1  2538.4
       Neural Network    0.0014   0.0273  13.0   9.5     0.1261   2.0798  138.4  2583.5
-5°    Trigonometric    -0.0018   0.0229  16.6  16.2    -0.1490   2.2575   20.5   170.0
       Camera Matrix    -0.0021   0.0261  16.8  11.5    -0.1490   1.7647   24.6   226.3
       Methods mean     -0.0020   0.0244  16.6  13.8    -0.1490   1.9767   22.5   199.1
       Neural Network   -0.0013   0.0245  17.0  13.6    -0.0802   1.7934   22.8   198.8

Table 3.4: Statistics of the errors of the methods with tilt angle variations, ∆θ


In table 3.4, looking at the influence of tilt angle variations, it can be seen that the cross tracking error estimation has an acceptable error and a reduced variation. However, the estimation of the heading error proved different: large error values can be observed in the relative mean and standard deviation. These miscalculations come from the misinterpretation of the longitudinal distances; with a larger longitudinal distance, the heading error takes lower values. The statistical values when the negative angle is applied are largely inferior due to the difference in pixel resolution: when looking further away, one pixel covers a larger distance in the mobile ground frame, and the opposite happens when looking more downward.

The Methods mean is, in this case, the one that presents the best estimations, considering both tilt variations at the same time.

                                  ye                                ψe
∆φ     Method            ε [m]    σ [m]    ε%    σ%      ε [°]    σ [°]    ε%    σ%
±5°    Trigonometric    -0.0005   0.0048  31.5  16.9    -0.0516   1.0542  10.0  41.3
       Camera Matrix    -0.0007   0.0046  33.1  18.5    -0.0458   0.7850   3.9   5.2
       Methods mean     -0.0006   0.0038  32.3  17.5    -0.0458   0.7964   6.1  20.0
       Neural Network    0.0001   0.0031  32.3  18.1     0.0516   0.7850   4.1   6.7

Table 3.5: Statistics of the errors of the methods with roll angle variation, ∆φ

The variations of the roll angle proved to have an impact on the cross tracking error estimation and little impact on the heading error, as can be observed in table 3.5. This was to be expected, since the effect of this angle on the projection of the ground object onto the image plane does not alter the relations of the distances between blob centers; it only alters the location of the centers.

3.4.4 Estimation discussion

In this section it is shown that the tracking errors are well estimated by the presented methods (since the errors in the estimations presented low values) when the position and orientation of the robot are known.

When estimating the tracking errors with the position and orientation of the robot known, the method that should be used is the camera matrix method, which presents the best global estimation under these conditions. The Methods mean proved to give better results than the other methods in simulation when the camera is considered to have a fixed position and orientation.

It is observed that the tilt angle, if not fed back to the estimators, may, depending on the estimation conditions, lead to instability of the system due to bad estimations of the ψe error. The roll angle corrupts the estimation of the ye error, and since this angle is used to remove the error in the quadrotor, as will be seen in chapter 6, it may be an important feedback for obtaining a smooth behavior from the robot. The feedback of the h value should not be necessary, since it has a reduced impact when compared to the others.

It was shown that the estimation done using the Neural Network model gave results close to those of the other methods for the virtual camera in nominal conditions. Despite this, for the method to be applied on the experimental platform, the estimation required better results. Since the estimation of the blob centers has errors due to the pixel resolution of the camera, and the real and virtual camera intrinsic parameters also have errors, the application of this neural network would not prove beneficial when compared to the other methods, due to the constraints it presents: the training of new NN models would be required for each nominal position and orientation of the camera.


Chapter 4

Wheeled mobile robot control

In this chapter the model used to synthesize the control for the robot rasteirinho is described. The control with ideal sensors is performed and analyzed in order to determine the limitations and behavior of the controlled model in response to reference signals. A control through vision is then done with the aid of virtual reality, where the states used for the control are estimated from the image provided by the virtual camera. Finally, a control is performed with the experimental platform of the rasteirinho and the real camera. At the end of the chapter, the control and estimations obtained during the simulations and experiments are analyzed.

4.1 Rasteirinho model

The control strategy for this model consists in providing a constant longitudinal velocity (V0), since there is no reference available from the image to control this variable; if there were, the velocity would be regulated according to need. To control the model, the cross tracking and heading errors, ye and ψe, are used as states, and the yaw rate r is used as the control variable.

The model is given by the system of equations 4.1; this model is used in the simulations as the non-linear system to be controlled. The system of equations presented was derived from the equations presented in section 2.2, Dynamic models.

\dot{x} = V_0 \cos(\psi_e)
\dot{y}_e = V_0 \sin(\psi_e)
\dot{\psi}_e = r
(4.1)

After selecting the states to control, ye and ψe, the model was linearized around ψe = 0, giving equation 4.2. This equation is used to design the controller, where U = [r] and the state is X = [ye, ψe]^T.

\dot{X} = \begin{bmatrix} 0 & V_0 \\ 0 & 0 \end{bmatrix} X + \begin{bmatrix} 0 \\ 1 \end{bmatrix} U (4.2)
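A state-feedback design on the linearized model 4.2 can be sketched as follows; the natural frequency and damping values are illustrative assumptions, not the thesis gains:

```python
import numpy as np

V0 = 0.4                      # constant longitudinal velocity [m/s]
A = np.array([[0.0, V0],
              [0.0, 0.0]])    # linearized model, equation 4.2
B = np.array([[0.0],
              [1.0]])

# With r = -k1*ye - k2*psi_e the closed-loop characteristic polynomial
# is s^2 + k2*s + V0*k1, so the gains place the poles directly.
wn, zeta = 2.0, 0.7           # assumed natural frequency [rad/s] and damping
k1 = wn ** 2 / V0
k2 = 2 * zeta * wn
K = np.array([[k1, k2]])
poles = np.linalg.eigvals(A - B @ K)
```

The single-integrator chain from r to ye through ψe is what makes this pole placement solvable by hand.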


In the following sections the constant velocity used is V0 = 0.4 m/s. The camera was positioned at xr = 0.22 m and zr = −0.13 m from the center of the robot frame, with a tilt angle of −45°. It is important to notice that in the experiment these are approximate values, whereas the virtual camera has these exact values. This was decided in order to represent the real situation and consequently replicate the behavior provided by the model; if the camera were not positioned in the same way, a detailed comparison would not be possible.

It is also important to note that the reason for the camera being located so high on board the rasteirinho was to perform a better evaluation of the algorithms of chapter 3, since the camera on the quadrotor will have a wider view than on the rasteirinho due to its altitude. If the camera were located near the ground, the image problems would not be perceived and the algorithms would not be checked before the real control is performed with the real quadrotor.

The control of the model is done with a 10 Hz sampling rate. The model controlled is the non-linear one presented by the system of equations (4.1).

4.2 Control with ideal position feedback

In this section the rasteirinho model is controlled in an ideal situation, and the model behavior is studied. The control diagram is presented in figure 4.1.


Figure 4.1: Control block diagram of the rasteirinho with ideal position feedback

In the figure, C and S represent the controller and the system. Ref represents the references given to the lateral position y and heading ψ. During this simulation the heading reference ψref has value 0. The lateral position reference yref is a square wave with amplitude 0.1 m and frequency 0.25 Hz. The tracking errors are obtained with equations (4.3) and (4.4), where the y and ψ variables are the actual position and yaw angle of the rasteirinho.

ye = yref − y (4.3)

ψe = ψref − ψ (4.4)


The values used for the weighting matrices and the control gain matrix are presented in equations (4.5). The tuning of the weighting matrices was done by observing the closed-loop behavior, taking into account the oscillations and the speed of the error removal.

Q = [ 10000  0   ]     R = 5     K = [ 31.27  5.9 ]        (4.5)
    [ 0      100 ]

Figure 4.2: The reference of the position yref (in blue) and the robot position y (in red)

Figure 4.3: Angle position ψ in response to the reference following

Figure 4.4: Control action r obtained for the reference following


From figures 4.2, 4.3 and 4.4 it can be observed that the system has a fast and controlled response to the imposed steps, with almost no oscillation. It must be noted, however, that the robot also presented an aggressive behavior, visible in the angle position and in the control action.
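The ideal-feedback loop of figure 4.1 can be reproduced numerically. The sketch below (a simplified illustration; the thesis simulations were done in Simulink) integrates the non-linear model of equations 4.1 at 10 Hz under the gain of equation 4.5 and a 0.1 m lateral step:

```python
import numpy as np

V0, Ts = 0.4, 0.1                 # forward speed [m/s], 10 Hz control rate
K = np.array([31.27, 5.9])        # LQR gain of equation 4.5
y, psi = 0.0, 0.0                 # robot lateral position and heading
y_ref = 0.1                       # 0.1 m lateral step reference

for _ in range(40):               # 4 s of simulated time
    ye, psi_e = y_ref - y, -psi   # tracking errors (psi_ref = 0)
    r = K @ np.array([ye, psi_e])           # yaw-rate control action
    psi += r * Ts                           # non-linear model (eq. 4.1)
    y += V0 * np.sin(psi) * Ts

print(y)   # the lateral position settles near the 0.1 m reference
```

The fast, damped convergence of y toward the reference matches the step response observed in figure 4.2.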

4.3 Path following in simulation

In this section the rasteirinho model is controlled through vision in a virtual reality environment, where the tracking errors used to control the model are estimated using the methods presented in section 3.3. The behavior and influence of the estimation over the control are studied.

To have a better understanding of the simulation, the first step is to provide a description of the virtual reality, made with the purpose of creating simulations closely related to reality. This description is done in subsection 4.3.1.

In simulation, the noise in the image acquisition is not considered. The loss of information, loss of frames, momentary image distortion and the noise induced in the wireless communications by various sources, such as vibrations, are also not included.

On the other hand, the virtual reality introduces elements that represent disturbances very similar to what could happen in reality with the wireless communications. The VR module is able to give a preview of the rasteirinho and quadrotor behavior in the real experiments and to anticipate problems that may arise.

The control diagram of the simulation is presented in figure 4.5.


Figure 4.5: Control block diagram of the rasteirinho with VR simulation

In the figure, C, S and V represent the controller, the system and the virtualization of the image. The variable Cam contains the position, orientation and intrinsic parameters of the camera. A2 performs the estimation of the blob centers, and B represents the estimation methods for the tracking errors.

4.3.1 Environment

In this subsection the laboratory track used for path following is shown, with images taken with both the VR camera and the real camera in figures 4.6a and 4.6b. The virtual track was modeled with values taken from the real straight lines and curves in the laboratory.


(a) Perspective image of the virtual track (b) Perspective image

of the real track

Figure 4.6: Images of the virtual and real laboratory track

(a) Top image of the track with parking and

crosswalk

(b) Top image of the track without parking and

crosswalk

Figure 4.7: Images of the virtual track with and without extra elements

In chapter 3 the problem of multiple lines in the image, visible in the middle of the track in figure 4.7a, was not addressed. Multiple lines (figure 4.8), depending on the height of the camera, result in a disturbance that may destabilize the system. To deal with this, in the simulation the parking and crosswalk were eliminated (figure 4.7b), and in the real track these were covered with white sheets of paper.

Figure 4.8: Camera view near the parking zone


The horizontal line at the beginning of the parking zone would distort the estimations and create the unwanted behavior.

4.3.2 Results from simulation

The disturbances introduced by the camera proved to be excessive and caused instability when using the same weighting and gain matrices, equation 4.5. A fast and aggressive behavior of the robot may cause the loss of sight of the path. The gains were therefore reduced to increase the robustness to disturbances, at the expense of reduced accuracy, equations 4.6.

Q = [ 2500  0      ]     R = 15     K = [ 10.36  3.87 ]        (4.6)
    [ 0     156.25 ]

The figures presented correspond to the values obtained during a lap along the track.

Figure 4.9: Cross tracking error ye (in blue) and heading error ψe (in red) during the simulations

Figure 4.10: Control action r during the simulation of the virtual rasteirinho


Figure 4.11: Trajectory performed during simulation, with initial error located near (0, 0)

Before a deep analysis of the figures, it is important to know that the virtual reality represents circles as a sequence of straight segments. This effect over the estimation can be seen in figure 4.9 in both tracking errors: both present systematic "waves" in the estimation while the ground mobile robot is moving along the circular arcs.

It is important to note that in the beginning of the simulation an initial error is included, in order to evaluate the system stability and quickness of response. Also, the first image obtained from the VR is taken at the predefined position where the body is programmed to be; only after the first instant is the image taken at the position defined in the simulation.

Through figure 4.9 it can be seen that the control was successful in regulating the error along the straight segments, keeping the error below 7 cm along the circles, except at their beginning, where the error goes up to 20 cm due to the sudden transition. The control strategy was able to effectively keep the virtual rasteirinho from getting out of the track while maintaining a satisfactory system stability. Although the errors may appear to be large, it is important to recall that the camera is in a high position, from which the track line is frequently in the middle of the image frame and never at risk of being lost from sight.

The control actions have high values due to the need of keeping the line in sight: if the control action is not fast and strong enough, the line will get out of sight and the rasteirinho will get lost, since the linear velocity is high.

4.4 Path following with experimental platform

In this section, the control results obtained with the experimental platform are presented, where the estimation algorithms and image treatment are used with the real camera. Before presenting the results, the way the control action is sent to the experimental platform is described.


To control the experimental platform, the control variable was converted. Knowing the mean velocity V0 and the yaw rate r, it is possible to convert them into left and right wheel linear velocities. Using equations 2.3 and 2.4, replacing Vm with V0 and θ̇m with r and solving the system of equations, equation 4.7 is obtained, where b = 0.0725 m.

[ Vl ]   [ 1   b ] [ V0 ]
[ Vr ] = [ 1  −b ] [ r  ]        (4.7)

Since the orders sent to the motors are related to the angular velocity and to the signal interval [0, 255], the linear velocities of the wheels are first converted to angular velocities and then a constant gain is used to transform them into a value within the interval [0, 255], system of equations 4.8.

Orderl = (Vl / radius) · 6.194
Orderr = (Vr / radius) · 6.194        (4.8)

where radius = 0.037 m is the wheel radius and 6.194 is the constant gain. This constant gain was computed through the linear relation between the wheel angular velocity and the signal interval [0, 255].
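Equations 4.7 and 4.8 can be combined into a single conversion routine. The Python sketch below is only an illustration; the function name and the saturation to the [0, 255] signal interval are assumptions, not the thesis implementation:

```python
B = 0.0725        # half wheel base b [m]
RADIUS = 0.037    # wheel radius [m]
GAIN = 6.194      # angular velocity -> [0, 255] signal gain

def motor_orders(v0, r):
    """Convert forward speed v0 [m/s] and yaw rate r [rad/s] to motor orders."""
    vl = v0 + B * r               # left wheel linear velocity (eq. 4.7)
    vr = v0 - B * r               # right wheel linear velocity
    order_l = vl / RADIUS * GAIN  # wheel speed to signal value (eq. 4.8)
    order_r = vr / RADIUS * GAIN
    # clamp to the valid signal interval (assumed saturation behavior)
    clamp = lambda o: min(max(o, 0.0), 255.0)
    return clamp(order_l), clamp(order_r)

print(motor_orders(0.4, 0.0))   # straight motion: equal orders of about 67
```

A positive yaw rate speeds up the left wheel and slows down the right one, turning the robot to the right in the convention of equation 4.7.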

The control diagram of the experiment is presented in figure 4.12.


Figure 4.12: Control block diagram of the experimental rasteirinho

In the figure, C, S and V represent the controller, the system and the virtualization of the image. The variable Cam contains the position, orientation and intrinsic parameters of the camera. A1 enhances the raw image provided by the real camera, A2 performs the estimation of the blob centers, and B represents the estimation methods for the tracking errors.

Due to the differences between the ideal case and reality, the previous gains led the robot to instability, hence the need to retune the control parameters. The gains were increased to have a higher derivative action


and higher accuracy; the weighting matrices are presented in equations 4.9.

Q = [ 10000  0   ]     R = 20     K = [ 16.69  4.95 ]        (4.9)
    [ 0      400 ]

The figures presented correspond to the values obtained during a lap along the track.

Figure 4.13: Cross tracking error ye (in blue) and heading error ψe (in red) during the experiment

Figure 4.14: Control action r during the experiment

It can be observed that the control of the real rasteirinho is successful despite the considerable oscillations visible both in the estimations and in the control action.

Through the analysis of the heading error, it is easy to observe that some unexpected spikes occasionally occur along the track. Some of these spikes originate from bumps present in the laboratory floor. Since the wireless camera is fixed with an 18 cm aluminum plate, the sudden bumps induce vibrations with higher amplitude than the expected ones. During the tests, problems similar to the ones presented in figure 4.15 were verified. This indicates that either the wireless communications of the camera have problems transmitting in certain regions of the track due to the surroundings, or some bumps are aggressive to the point of causing the hardware to have communication problems.


Figure 4.15: Momentary image distortion

The last part of the estimations of the tracking errors was expected to be closer to zero, like the ones presented in the middle of the simulation. In order to verify the reason for the wrong estimations obtained during the experiment, the images captured during the time period where these estimations occurred were observed, and it was verified that reflection was corrupting the estimations, figures 4.16. It is important to note that reflection is an effect that was not dealt with in chapter 3, and the image treatment used is not really able to deal with it in occasional cases, although the estimation stabilized soon after. This indicates that, despite the significant influence of the reflection over the estimation, the control is robust enough to deal with it.

(a) Reflection over line with only

pre-enhancement

(b) Final image

Figure 4.16: Reflection over the line

4.5 Comparison between results

It can be observed that, both in simulation and in reality, the system was successfully controlled with the use of the described control strategy.

While comparing figure 4.9 with 4.13 and figure 4.10 with 4.14, it is important to note that the gains and behavior of the ground mobile robot cannot be exactly equal, due to several reasons. Since one robot is real and the other is simulated, the same gains could not be applied, as expected. The linearization and simplifications of the model create differences in the system to control. Since the cameras used in both cases are different, the virtual and real cameras have different parameters. Also, the camera fixed


on the wheeled mobile robot may have a different set of conditions, position and orientation, from the ones considered; unlike in VR, in reality a precise positioning and orientation of the camera cannot be achieved.

The behavior obtained in the simulation is very similar to the one obtained with the experimental rasteirinho platform. This indicates that the VR simulation is well modeled and has approximately correct dimensions, as seen when comparing the estimation figures, although there are visible differences. The length of the track's straight-line segments is smaller in VR. This indicates that there can be small errors in the velocity conversion of the experimental platform, that the virtual track may be shorter than the real track or, most likely, that both problems are occurring at the same time.

It is also to be noted that the controller gains are different. Through these gains and the values of the control action obtained, it can be concluded that the experimental rasteirinho platform is slower in response when compared to the virtual model. This was to be expected, since reality has problems that the ideal case, the theoretical model, does not possess.

From the comparison of figures 4.9 and 4.13 it can be observed that the first curve of the virtual track is sharper than the real one, with heading errors of −0.72 rad (−41.3°) and −0.56 rad (−32.1°) respectively. This also originates a higher cross tracking error, as can be seen. The error in modeling the curve can be seen as an over-dimensioning of the problematic curve, giving the simulation rougher conditions than reality. This gives a better expectation of the robustness provided by the control laws that will be tested for the quadrotor in the virtual environment.

With the presented results, it can be concluded that the VR simulation is a valid tool for vision-based simulations. The values obtained through the virtual and real cameras reveal themselves to be different, but justifiable through the analyses done.

This analysis was also verified through experiments using different velocities; further figures are presented in appendix A. Due to limitations of the real rasteirinho velocity range, only linear velocities up to 0.6 m/s were tested.


Chapter 5

Longitudinal velocity estimation with

vision

As described earlier, the longitudinal velocity state is not observable, which raises the problem of not knowing whether the quadrotor is moving and in which direction. For an autonomous motion this information is essential. Optical flow was used to estimate the velocity. This method estimates pixel velocities in video images, in order to detect moving objects or estimate the velocity of the camera; it estimates both vertical and horizontal velocities in the image.

For this chapter the Optical Flow block of the Simulink Video and Image Processing Blockset toolbox was used.

The method was tested with the wheeled mobile robot on the laboratory track; on this robot the velocity is known and can be compared with the estimated velocities. The position and orientation of the camera used for these analyses are the same as the ones used to estimate the tracking errors.

5.1 Conditions of Optical Flow applications

In [19] and [20] it can be seen that an estimation of the camera velocities in the mobile ground frame is possible, with cameras mounted on autonomous vehicles or on mobile robot arms. In these studies the cameras have specific conditions, namely the orientation and the surface over which the optical flow is used to read the velocities. In most of these studies the same conditions are present: the camera is at a fixed height, facing downwards, over a regular and well defined surface. Both of these conditions appear to be a standard for a good estimation.

In these cases the camera faces the ground directly, due to the difficult image processing that would otherwise be needed and to the precise camera calibration required, as referred in [9]. With a wide field of view, an extra number of iterations is required to sort suitable relations based on their location, near or far from each other. This type of view may increase significantly the distances between pixels, making this difference have a big impact over the estimation. For this reason it is common to see these studies with a camera facing the surface.

The main assumption of optical flow is that of small changes and smooth motion between frames. The other assumption is that the surface is reasonably smooth, and moves rigidly or distorts smoothly.

The conditions to which the method will be submitted indicate that an accurate estimation may not be possible. The laboratory track introduces a lot of noise in the image, mostly derived from:

1. light reflection over the floor;
2. a constant element in the image (the track line, which distorts the estimation);
3. undesired noise from the wireless communication (4.7);
4. the floor characteristics;
5. the camera position and orientation;
6. the lack of markings (with no markings, unknown elements are needed to relate between frames).

These reasons indicate a significantly harsh environment for the estimator. The camera position and orientation alone have a significant impact, since they give the image pixels an uneven meaning relatively to the true motion. This effect can be seen in [9], where the use of a height sensor for the camera would significantly improve the estimation. With these problems and the lack of markings on the floor, the algorithm estimation should prove to be poor or unsuccessful.

The results obtained using this method are presented in the following sections: in section 5.2 the method is used with the nominal conditions of the laboratory track, figure 5.1a, and in section 5.3 it is used with added texture, figure 5.1b.

(a) Normal track (b) Track with added

texture

Figure 5.1: Track with and without added texture

The optical flow provides the longitudinal and lateral velocity components of all moving elements within the image. For the analyses performed in this chapter, the means of the longitudinal and lateral velocities are computed and presented separately.


5.2 Speed estimation with nominal track

Despite all these problems, trials were made with the hope of getting a positive result, since the objective would be to at least estimate the longitudinal velocity. For the optical flow, the Lucas-Kanade method with difference filter was used, since in [3] this method presented better results and an inferior computational time when compared to the Horn-Schunck method; this was verified with tests. For the velocity readings the method was applied, afterwards the velocity components along the x and y axes were separated into two matrices, and then the mean of each matrix was obtained and displayed. The threshold used for noise reduction in the method was 0.00001; since reasonable pixel relations between frames are desired, the threshold was defined at this level. The estimation was made with experimental data obtained with the wheeled mobile robot.
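The principle behind the Lucas-Kanade step can be illustrated with a minimal, self-contained sketch (pure NumPy, rather than the Simulink Optical Flow block used in the thesis): the brightness-constancy constraint Ix·u + Iy·v + It = 0 is solved in the least-squares sense, here over the whole interior of a synthetic image pair shifted by one pixel:

```python
import numpy as np

def lucas_kanade(f0, f1):
    """Least-squares optical flow (u, v) between two gray frames."""
    ix = (np.gradient(f0, axis=1) + np.gradient(f1, axis=1)) / 2
    iy = (np.gradient(f0, axis=0) + np.gradient(f1, axis=0)) / 2
    it = f1 - f0
    # one global window: stack the constraints ix*u + iy*v = -it
    a = np.stack([ix[5:-5, 5:-5].ravel(), iy[5:-5, 5:-5].ravel()], axis=1)
    b = -it[5:-5, 5:-5].ravel()
    (u, v), *_ = np.linalg.lstsq(a, b, rcond=None)
    return u, v

# synthetic pair: a smooth pattern translated by one pixel along x
yy, xx = np.mgrid[0:64, 0:64].astype(float)
f0 = np.sin(0.2 * xx) * np.cos(0.3 * yy)
f1 = np.sin(0.2 * (xx - 1.0)) * np.cos(0.3 * yy)

u, v = lucas_kanade(f0, f1)
print(u, v)   # u close to 1 pixel/frame, v close to 0
```

On this well-textured synthetic pair the estimate is accurate; the poor results reported below come from the lack of such texture on the laboratory floor, not from the method itself.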

The first trial presented was the estimation using the entire image and the second trial was the

estimation using a small part of the image. By using a small part of the image, it is possible to reduce

the noise provided by the line.

(a) Longitudinal velocity (b) Angular velocity

Figure 5.2: The real velocities


(a) Longitudinal velocity estimation with image

part

(b) Lateral velocity estimation with image part

(c) Longitudinal velocity estimation with all

image

(d) Lateral velocity estimation with all image

Figure 5.3: The estimated velocities

It can be observed through figures 5.3 that the estimated velocities present bad results when compared to the real ones, both when using the whole image and when using a small image part. The noise elements present in the image largely corrupt the estimation, leading to the conclusion that the estimation with the current setup is not possible; however, by adding texture it may be possible to mitigate the noise elements. This texture will provide the method with a higher number of reliable references on the ground and consequently improve the estimations.

5.3 Speed estimation with added texture

After verifying the results presented in the previous section, a trial was made with added texture near the line of the track. The added texture used is presented in figure 5.1b; the small pieces of paper, positioned near the line, should compensate for the lack of constant elements. The methodology used to obtain the results was similar to the one presented before, while using a different value for the noise threshold, 0.0001.


(a) Longitudinal velocity (b) Angular velocity

Figure 5.4: The real velocities with references

(a) Longitudinal velocity estimation with image

part

(b) Lateral velocity estimation with image part

(c) Longitudinal velocity estimation with all

image

(d) Lateral velocity estimation with all image

Figure 5.5: The estimated velocities with references

Through the comparison of figure 5.4a with 5.5c, it can be observed that the method is now capable of detecting more movements, although with considerable variations and only while using the whole image. The added texture, when the whole image is analyzed, was able to reduce the noise elements and provide better readings. Observing the lateral velocities, it can be verified that the method is still unable to detect variations similar to the real ones.


5.4 Results discussion

It is visible through the comparison of these figures that an estimation through the image using optical flow is not possible when the estimator is submitted to the ground mobile robot conditions. This was seen in both conditions, with and without added texture placed near the line. Although it can be observed that, when extra texture is placed and the whole image is analyzed, the algorithm was capable of detecting more of the robot velocities, it is still unable to provide a valid estimation for use in control.

In order to obtain better results, it may be necessary to place distinct references that will not be confused with reflections or with the line while performing the segmentation of those same references. By isolating these references from the rest of the image, it may be possible to obtain a cleaner estimation of the robot velocity. This reasoning comes from the quantity of noise present in the image that the method detects and analyses.

The article [17] also indicates that using this method to estimate the velocity of a quadrotor is not possible, even though the camera was in favorable conditions for the estimator, indicating that the quadrotor motion gives low smoothness between image frames, from unknown sources. With the trials done and the conditions to which the camera will be submitted on the quadrotor, it was concluded that the velocity estimation may not be possible using optical flow.


Chapter 6

Quadrotor control

In this chapter the models used to synthesize the control for the quadrotor are described. Afterwards, a comparison between the models used to describe the system is made, in order to determine their performance.

In order to test these control strategies and compare them on equal grounds, it was assumed that the longitudinal velocity of the system was known, despite being an unknown state, as described in subsection 2.2. Similarly to chapter 4, the variable V0 is the constant longitudinal velocity with which the quadrotor travels around the track; the velocity defined is 0.4 m/s. The lateral velocity of the robot (Vy) is also an unknown variable, but it is used in order to observe the robot behavior. The control of the model is done with a 10 Hz sampling rate and the VR simulation time is 95 s. The value of h, the quadrotor height, is set to 0.5 m.

6.1 Non-holonomic control strategy

In this section, the model used to synthesize the non-holonomic control is chosen to achieve a control similar to the one performed by the rasteirinho robot, by using a control strategy based on the angular velocity while assuming a known constant linear longitudinal velocity.

6.1.1 Non-holonomic model

Before introducing the model, it is important to recall that the rasteirinho robot has no mobility along the lateral axis; it is only capable of moving forward or rotating. The quadrotor, on the other hand, is not restrained.

In order to create the model, the first step was to define the state variables to control and to choose the control variables. The control variables of the quadrotor are the reference attitude angle and angle rate, U = [θr, r], where the pitch angle θr provides the longitudinal control and the yaw rate r is used for the lateral control.


In order to prevent lateral drifts from destabilizing the system and causing a loss of control, it was chosen to add the state Vy, the lateral velocity relatively to the path line, equation 6.1 (this side slip error is not controlled but is indirectly observed in the image). This state will prevent abrupt drifts while the position state ye reduces the position error. The variables to control are then given by X = [Vx, Vy, ye, ψe], and the model is obtained through equations 6.2, taken from [14], where the accelerations are directly related to the attitude angles.

Vy = ẏe        (6.1)

ẍ = −θr · g
ÿ = φr · g        (6.2)

The lateral path following dynamics is presented in equation 6.3, and its linearization yields equation 6.4.

ẏe = V0 · sin(ψe)        (6.3)

ẏe = V0 · ψe        (6.4)

The control variable r is used with the aid of an integrator after the controller. The model used to design

the controller is then given by the state-space equation 6.5, where X = [Vx, Vy, ye, ψe] and U = [θr, r].

Ẋ = [ 0  0  0  0  ]     [ −g  0 ]
    [ 0  0  0  0  ] X + [  0  0 ] U        (6.5)
    [ 0  1  0  V0 ]     [  0  0 ]
    [ 0  0  0  0  ]     [  0  1 ]
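As a sanity check of this model, the sketch below (an illustrative Python/NumPy snippet, assuming the state-feedback convention U = −K X with the gain of equation 6.8) inspects the closed-loop eigenvalues of equation 6.5:

```python
import numpy as np

g, V0 = 9.81, 0.4
# State X = [Vx, Vy, ye, psi_e], input U = [theta_r, r]  (equation 6.5)
A = np.array([[0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 1, 0, V0],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[-g, 0],
              [0, 0],
              [0, 0],
              [0, 1]], dtype=float)
# Gain of equation 6.8, assuming the feedback convention U = -K X
K = np.array([[-0.0953, 0, 0, 0],
              [0, 10.8525, 23.5, 4.3410]])

eig = np.linalg.eigvals(A - B @ K)
print(np.sort(eig.real))
# Three modes are damped; one eigenvalue stays at zero because the
# side-slip state Vy is observed but not directly controlled.
```

The remaining eigenvalue at the origin is consistent with the text above: Vy is kept in the state to observe the drift, but no control variable acts on it directly, which anticipates the drift problems seen in the simulation results.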

6.1.2 Control with ideal position feedback

The results obtained using the control strategy defined by the non-holonomic model are presented in this subsection, with the respective weighting matrices and control gain matrix. The weighting matrices were tuned with the purpose of minimizing the cross tracking and heading errors while maintaining a smooth control action; if the control action is too aggressive, the estimation through the camera will be corrupted due to the attitude angles. In the following figures, only the control action angle rate r is shown, since it is the only control variable really used by the model; θr is only used to maintain a forward flight. The lateral velocity of the robot is shown in order to observe the quadrotor drifts. The control diagram is presented in figure 6.1.



Figure 6.1: Control block diagram of the quadrotor with ideal position feedback

In the figure, C and S represent the controller and the system. Ref represents the references given to the lateral position y and heading ψ. During this simulation the lateral position yref and heading ψref references have value 0. The tracking errors are obtained with equations 6.6 and 6.7, where the y and ψ variables are the actual position and yaw angle of the quadrotor. The only condition that changes from the initial state is the longitudinal velocity V0, from 0 to 0.4 m/s.

$$y_e = y_{ref} - y \qquad (6.6)$$

$$\psi_e = \psi_{ref} - \psi \qquad (6.7)$$

The values used for the weighting matrices and the control gain matrix are presented in equations 6.8.

$$Q = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 10000 & 0 & 0 \\ 0 & 0 & 10000 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R = \begin{bmatrix} 100 & 0 \\ 0 & 11.111 \end{bmatrix} \quad K = \begin{bmatrix} -0.0953 & 0 & 0 & 0 \\ 0 & 10.8525 & 23.5 & 4.3410 \end{bmatrix} \qquad (6.8)$$

Figure 6.2: Values obtained with control in ideal situation. (a) Cross tracking error ye; (b) quadrotor attitude angle ψ (in blue) and angle rate r (in red); (c) quadrotor lateral velocity Vy

It can be seen in figure 6.2a that this control is highly unstable and slow in its response to the tracking errors. Due to the lateral drift, the quadrotor has difficulty stabilizing using only the control variable r, because of the holonomic properties in the x-y plane. This can be seen in figures 6.2c and 6.2b, where the lateral velocity presents high and undesired variations while the angular rate tries to compensate for them. Note that the control was tested with no steps: it was only imposed that the quadrotor keep y and ψ at their zero values. With steps, the system would become unstable. Since this control model proved to be inefficient in describing the system, no further analyses or tests were made with it.
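The drift has a structural explanation: in the model of equation 6.5, the Vy row of both the dynamics matrix and the input matrix is zero, so Vy is an uncontrollable mode at the origin that no feedback through [θr, r] can move. A minimal check (matrices from equations 6.5 and 6.8, with g = 9.81 m/s² and V0 = 0.4 m/s assumed) confirms the zero closed-loop eigenvalue:

```python
import numpy as np

g, V0 = 9.81, 0.4  # assumed numerical values

# Non-holonomic model (equation 6.5), state X = [Vx, Vy, ye, psi_e]
A = np.array([[0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 1, 0, V0],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[-g, 0],
              [0, 0],
              [0, 0],
              [0, 1]], dtype=float)

# Gain matrix from equations 6.8
K = np.array([[-0.0953, 0, 0, 0],
              [0, 10.8525, 23.5, 4.3410]])

eig = np.linalg.eigvals(A - B @ K)
print(np.sort_complex(eig))
# One eigenvalue sits at the origin: the Vy drift mode cannot be
# stabilized by state feedback, consistent with the observed drift.
```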

6.2 Holonomic control strategy

In this section, two models are developed in order to test two different control strategies. In Holonomic model 1 it is considered that the longitudinal velocity is known and is used for control. In Holonomic model 2 it is considered that the velocity is unknown and that the only way to remove the cross tracking error is with the use of the φr control variable, although the velocity is still used to keep the quadrotor moving forward, for comparison purposes. These two control strategies are compared through the results and the differences between them are analyzed. In both of the following models the control variables are U = [φr, θr, r]. In order to evaluate the control performance, a statistical analysis was made using the RMSE, Mean, Total and Maximum values. The Total value is given by equation 6.9, where N is the simulation sample size.

$$\mathrm{Total} = \sum_{k=1}^{N} |y(k)| \qquad (6.9)$$
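These four statistics can be collected with a small helper (a sketch; the function and dictionary keys are illustrative names):

```python
import numpy as np

def track_stats(y: np.ndarray) -> dict:
    """RMSE, Mean, Total (equation 6.9) and Maximum of a sampled signal y(k)."""
    return {
        "RMSE": float(np.sqrt(np.mean(y ** 2))),
        "Mean": float(np.mean(y)),
        "Total": float(np.sum(np.abs(y))),     # equation 6.9
        "Maximum": float(np.max(np.abs(y))),
    }

# Example on a short cross-tracking-error record [m]:
print(track_stats(np.array([0.03, -0.04, 0.00])))
```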

6.2.1 Holonomic model 1

For this model it was considered that the longitudinal velocity is known and used by the controller to remove the cross tracking error.

The state X is given by X = [Vx, Vy, ye, ψe] and the model is obtained using the system of equations 6.2 and equation 6.10. This last equation results from combining equations 6.2 and 6.4 into the cross tracking error velocity; this velocity is controlled through the control action of the variable φr. The states Vx and ψe are controlled in the same way as in the non-holonomic model presented in 6.1.1.

$$\dot{y}_e = V_y + V_0 \cdot \psi_e \qquad (6.10)$$

The Holonomic model 1, used to design the controller, is then given by the state-space equation presented in 6.11, where U = [φr, θr, r].

$$\dot{X} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & V_0 \\ 0 & 0 & 0 & 0 \end{bmatrix} X + \begin{bmatrix} 0 & -g & 0 \\ g & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} U \qquad (6.11)$$

6.2.2 Holonomic model 2

In this model the longitudinal velocity is considered to be unknown (except that the robot is moving forward). This way, a control that could be used on the real quadrotor, with an external longitudinal velocity control, is tested beforehand and compared with Holonomic model 1.

The state X is given by X = [Vx, Vy, ye, ψe] and the model is obtained using the same equations presented in the previous models, except for Vy. For this state, equation 6.12 is used.

$$\dot{y}_e = V_y \qquad (6.12)$$


The Holonomic model 2, used to design the controller, is then given by the state-space equation presented in 6.13, where U = [φr, θr, r].

$$\dot{X} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} X + \begin{bmatrix} 0 & -g & 0 \\ g & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} U \qquad (6.13)$$

6.2.3 Control with ideal position feedback

The results obtained using the control strategies defined in this section are presented here, together with the respective weighting and gain matrices. The weighting matrices were tuned to minimize the cross tracking and heading errors while maintaining a smooth control action. If the control action is too aggressive, the estimation through the camera will be corrupted. The errors are obtained with equations 6.6 and 6.7, as described with the control block diagram of figure 6.1.

The reference of the position yref is generated by a square signal with amplitude 0.15 m and frequency 0.05 Hz. The reference of the yaw angle ψref is generated by a square signal with amplitude 17.2° and frequency 0.15 Hz. The longitudinal linear velocity is V0 = 0.4 m/s.
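The two square-wave references can be generated as follows (a sketch; the sampling rate and simulation length are chosen for illustration):

```python
import numpy as np

def square_wave(t: np.ndarray, amplitude: float, freq_hz: float) -> np.ndarray:
    """Symmetric square wave: +amplitude during the first half-period."""
    return np.where(np.sin(2 * np.pi * freq_hz * t) >= 0, amplitude, -amplitude)

t = np.arange(0, 60, 0.01)                         # 60 s at 100 Hz sampling
y_ref = square_wave(t, 0.15, 0.05)                 # lateral position reference [m]
psi_ref = square_wave(t, np.radians(17.2), 0.15)   # yaw reference [rad]
```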

6.2.3.1 Holonomic model 1

The values used for the weighting matrices and the control gain matrix are presented in equations 6.14.

$$Q = \begin{bmatrix} 4 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 0.1111 & 0 \\ 0 & 0 & 0 & 16 \end{bmatrix} \quad R = \begin{bmatrix} 277.7778 & 0 & 0 \\ 0 & 100 & 0 \\ 0 & 0 & 2.7778 \end{bmatrix} \quad K = \begin{bmatrix} 0 & 0.1279 & 0.0183 & 0.0030 \\ -0.1819 & 0 & 0 & 0 \\ 0 & 0.0289 & 0.0369 & 2.1443 \end{bmatrix} \qquad (6.14)$$
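Gains of this form can be synthesized as an LQR problem on the model of equation 6.11. The sketch below uses SciPy (g = 9.81 m/s² and V0 = 0.4 m/s assumed); the resulting continuous-time gain is close to, but not identical to, the K of equations 6.14, which suggests the thesis gains came from a slightly different (for instance discrete-time) design:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

g, V0 = 9.81, 0.4  # assumed numerical values

# Holonomic model 1 (equation 6.11): X = [Vx, Vy, ye, psi_e], U = [phi_r, theta_r, r]
A = np.array([[0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 1, 0, V0],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0, -g, 0],
              [g, 0, 0],
              [0, 0, 0],
              [0, 0, 1]], dtype=float)

Q = np.diag([4.0, 4.0, 0.1111, 16.0])      # weights from equations 6.14
R = np.diag([277.7778, 100.0, 2.7778])

P = solve_continuous_are(A, B, Q, R)       # continuous-time algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P             # LQR state-feedback gain, u = -K x

assert np.all(np.linalg.eigvals(A - B @ K).real < 0)  # closed loop is stable
print(np.round(K, 4))
```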

Figure 6.3: Values obtained with control in ideal situation. (a) Position reference yref (in blue) and quadrotor position y (in red); (b) control variable φr (in red) and quadrotor φ angle (in black); (c) control angle rate r (in black), yaw angle reference ψref (in blue) and quadrotor ψ angle (in red); (d) quadrotor lateral velocity Vy

The statistics of the cross tracking and heading errors, along with those of the control variables φr and r, are presented in table 6.1. No information is provided about the Maximum values of ye and ψe, since steps were imposed on the system.

                 Error                 Control
             ye [m]    ψe [°]     φr [°]    r [°/s]
RMSE         0.0688      12.2        2.4       26.4
Mean        -0.0095       0.1       -0.1        0.2
Total       13.0       2825.3      184.7     6090.1
Maximum          -         -        22.4       78.7

Table 6.1: Statistics obtained during control in ideal situation

As can be seen in figure 6.3a, the system responds quickly to the desired motion and converges to the desired position, removing the position error while keeping a low φ angle (figure 6.3b) and good reference following with the ψ angle (figure 6.3c), even though this variable is also used to help control the position deviation. It can nevertheless be verified that the system has a slow response, visible through the comparison between the attitude demanded by the controller and the real angle provided by the quadrotor in figure 6.3c.


In figure 6.3b, it can be observed that the system is slow in responding to the demanded angle position φr; despite this, it also shows that the action needed from the quadrotor is not necessarily big.

The lateral velocity (figure 6.3d) presents a controlled behavior and an oscillatory profile due to the type of steps provided. This profile was expected, since the demanded motion was to move forward while removing the cross tracking and heading errors. It can also be verified that there are peaks similar to the ones present in the φ response. These peaks occur at the instants where the position steps are imposed, since the control is performed from acceleration to position and the velocity is consequently used. These peaks were expected: a high control action is made with the φr angle and a low action with r in order to remove the cross tracking error. This low control action is noticeable through the behavior of the ψ attitude; during the mid part of the angle step an overshoot occurs, performed by the controller as a fine control action to remove the cross tracking error.

6.2.3.2 Holonomic model 2

The values used for the weighting matrices and the control gain matrix are presented in equations 6.15.

$$Q = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 0.1736 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R = \begin{bmatrix} 277.7778 & 0 & 0 \\ 0 & 51.0204 & 0 \\ 0 & 0 & 0.5917 \end{bmatrix} \quad K = \begin{bmatrix} 0 & 0.1317 & 0.0234 & 0 \\ -0.1309 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1.2199 \end{bmatrix} \qquad (6.15)$$
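As a sanity check, the closed loop formed by the model of equation 6.13 and the gain of equations 6.15 can be simulated with a simple forward-Euler integration (a sketch; g = 9.81 m/s², and the time step and initial cross tracking error are chosen for illustration):

```python
import numpy as np

g = 9.81  # assumed value

# Holonomic model 2 (equation 6.13), state X = [Vx, Vy, ye, psi_e]
A = np.array([[0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0, -g, 0],
              [g, 0, 0],
              [0, 0, 0],
              [0, 0, 1]], dtype=float)

# Gain matrix from equations 6.15
K = np.array([[0, 0.1317, 0.0234, 0],
              [-0.1309, 0, 0, 0],
              [0, 0, 0, 1.2199]])

dt = 0.01
x = np.array([0.0, 0.0, 0.15, 0.0])   # start with a 0.15 m cross tracking error
for _ in range(2000):                 # simulate 20 s
    u = -K @ x                        # state feedback
    x = x + dt * (A @ x + B @ u)      # forward-Euler step

print(f"cross tracking error after 20 s: {x[2]:.4f} m")
```

The error decays toward zero, confirming that the gain stabilizes the lateral dynamics through φr alone.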

Figure 6.4: Values obtained with control in ideal situation. (a) Position reference yref (in blue) and quadrotor position y (in red); (b) control variable φr (in red) and quadrotor φ angle (in black); (c) control angle rate r (in black), yaw angle reference ψref (in blue) and quadrotor yaw angle ψ (in red); (d) quadrotor lateral velocity Vy

The statistics of the position and angle errors, along with those of the control actions φr and r, are presented in table 6.2.

                 Error                 Control
             ye [m]    ψe [°]     φr [°]    r [°/s]
RMSE         0.0705      14.5        2.4       17.6
Mean        -0.0139       0.3       -0.1        0.3
Total       14.5       3576.8      188.4     4363.2
Maximum          -         -        23.1       42.7

Table 6.2: Statistics obtained during control in ideal situation

Through figure 6.4a it can be seen that the system has a quick response to the desired motion, but has difficulties in removing the cross tracking error while keeping a low angle action φr, although the reference following in the ψ angle (figure 6.4c) presents a behavior without oscillations and a fast response when compared to the size of the step.

Since the control is made from accelerations to positions, the velocity profile presented in figure 6.4d was expected, although the velocity peaks have high amplitudes when compared to the general behavior of the lateral velocity.

6.2.4 Control in VR with Holonomic model 1

In this subsection, the figures and results obtained while using the control designed with Holonomic model 1 are shown, for the cases both with and without attitude feedback. Afterwards, the control performed with and without the use of attitude feedback to the estimator is compared, in order to ascertain the advantage of knowing the quadrotor's attitude through communications for the control synthesized with Holonomic model 1. The control block diagram used in the VR simulation is presented in figure 6.5.

Figure 6.5: Control block diagram of the quadrotor with vision

C, S and V are the controller, the system and the virtualization. A2 performs the estimation of the blob centers, and B represents the estimation methods for the tracking errors.

6.2.4.1 Control without attitude feedback

The values used for the weighting matrices and the control gain matrix are presented in equations 6.16.

$$Q = \begin{bmatrix} 4000 & 0 & 0 & 0 \\ 0 & 1000000 & 0 & 0 \\ 0 & 0 & 4444 & 0 \\ 0 & 0 & 0 & 2500 \end{bmatrix} \quad R = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 15 & 0 \\ 0 & 0 & 15 \end{bmatrix} \quad K = \begin{bmatrix} 0 & 1.2701 & 0.1826 & 0.0328 \\ -0.7134 & 0 & 0 & 0 \\ 0 & 0.0473 & 1.5446 & 1.9749 \end{bmatrix} \qquad (6.16)$$

The figures presented correspond to the values obtained during a lap along the track.

Figure 6.6: Values obtained with control in simulation. (a) Cross tracking error ye (in blue) and heading error ψe (in red); (b) quadrotor attitude angle φ (in red) and control variable φr angle (in blue); (c) control angle rate r (in blue) and heading error ψe (in red); (d) quadrotor lateral velocity Vy; (e) path performed by the quadrotor

Since the robot rasteirinho presented "waves" during its simulation, a similar behavior was expected from the quadrotor (figure 6.6a). Knowing the origin of these waves, it can be concluded that the control was successful in regulating the error along the straight segments, keeping the error below 5.6 cm along the circular arcs, except at the beginning of the curve, where the error goes up to 17 cm due to the hard turn.

The control variable φr (figure 6.6b) presented a smooth control with relatively small control actions considering the errors.

This model uses the ψ angle to remove both tracking errors and, due to this, the heading error ψe presents a high value during the cornering curves. These values result from the controller trying to compensate both errors at once. The control strategy was able to maintain the quadrotor over the path while maintaining good stability.

It is important to recall the holonomic property of the quadrotor and that it has no sensor for its lateral velocity; these factors make it hard to control, especially during the cornering curves.

         Estimated tracking errors       Control
             ye [m]    ψe [°]     φr [°]    r [°/s]
RMSE         0.0470       5.7        1.0       12.3
Mean         0.0082       1.6        0.1        3.8
Total       28.4       3454.7      619.9     8160.7
Maximum      0.1992      24.8        5.0       48.5

Table 6.3: Statistics obtained during control without feedback

Note that the maximum error ye occurs during the first instants of the simulation.

6.2.4.2 Control with attitude feedback

In this case the weighting and gain matrices were identical to the ones used before (equation 6.16). The figures obtained in this case are also very similar to the ones obtained before; they may be consulted in appendix B.

         Estimated tracking errors       Control
             ye [m]    ψe [°]     φr [°]    r [°/s]
RMSE         0.0501       6.0        0.9       12.3
Mean         0.0089       1.5        0.1        3.8
Total       29.2       3554.5      521.6     8174.8
Maximum      0.2335      26.8        4.5       46.5

Table 6.4: Statistics obtained during control with feedback

6.2.4.3 Comparison

In order to compare the extra control or error obtained during the simulations, a percentage ratio was defined in equation 6.17.

$$\mathrm{Extra}\,\% = \frac{\mathrm{Total}(\phi_{\text{no feedback}}) - \mathrm{Total}(\phi_{\text{with feedback}})}{\mathrm{Total}(\phi_{\text{with feedback}})} \qquad (6.17)$$
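Applied to the Total φr values of tables 6.3 and 6.4, equation 6.17 reproduces the 18.8 % figure quoted in the text:

```python
def extra_percent(total_no_feedback: float, total_with_feedback: float) -> float:
    """Percentage ratio of equation 6.17 (expressed as a percentage)."""
    return 100.0 * (total_no_feedback - total_with_feedback) / total_with_feedback

# Total phi_r control action from table 6.3 (no feedback) and table 6.4 (with feedback)
print(round(extra_percent(619.9, 521.6), 1))  # -> 18.8
```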

Through the comparison of tables 6.3 and 6.4, it can be seen that ye has higher values when the attitude is known. This shows that the estimations had considerable errors due to the conjunction of situations where the attitude angles, mostly φ and θ, had values different from zero at the same time. The same occurs for ψe.

It can also be observed that, by knowing the attitude, the control action was smoother and less demanding of the control variable φr, as can be seen by comparing the φr Total and Maximum values. This can also be concluded from the percentage difference of the Total φr control action made during flight, 18.8 %. This value is significant, since it was a flight of 95 s, the time needed to perform a whole lap around the track. If big tracks or long flight courses were to be performed, this would be an important aspect to consider, since it would reduce the battery cost while providing a smoother flight.

In the case of the control variable r, the changes were insignificant, although the maximum value of the control action was reduced with the attitude feedback.


6.2.5 Control in VR with Holonomic model 2

In this subsection the figures and results obtained while using the control designed with Holonomic model 2 are shown. The statistical analyses are made, and the control with and without attitude feedback to the estimation is compared at the end of the subsection. The control block diagram used in the VR simulation is presented in figure 6.5.

6.2.5.1 Control without attitude feedback

The values used for the weighting matrices and the control gain matrix are presented in equations 6.18.

$$Q = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 40000 & 0 & 0 \\ 0 & 0 & 400 & 0 \\ 0 & 0 & 0 & 11.1111 \end{bmatrix} \quad R = \begin{bmatrix} 44.4444 & 0 & 0 \\ 0 & 51.0204 & 0 \\ 0 & 0 & 0.5917 \end{bmatrix} \quad K = \begin{bmatrix} 0 & 1.293 & 0.1283 & 0 \\ -0.1309 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3.5390 \end{bmatrix} \qquad (6.18)$$

The figures presented correspond to the values obtained during a lap along the track.

Figure 6.7: Values obtained with control in simulation. (a) Cross tracking error ye (in blue) and heading error ψe (in red); (b) quadrotor attitude angle φ (in red) and control variable φr angle (in blue); (c) control angle rate r (in blue) and heading error ψe (in red); (d) quadrotor lateral velocity Vy; (e) path performed by the quadrotor

During the simulation, the control was successful in regulating the error along the straight segments, keeping the error below 8.8 cm along the circular arcs, except at the beginning of the curve, where the error goes up to 26.2 cm due to the hard turn. Since this model only uses the φr angle to reduce the cross tracking error, and a smooth controlled behavior was imposed on the controller, the error has a high value.

The heading error ψe presented (figure 6.7b) an oscillatory behavior due to the cornering curve. This curve has a sharp turn to the left and then a slight curve to the right and, since the control variable r is only used to regulate the ψe error, it can be concluded that this model is fast in regulating the heading error.

         Estimated tracking errors       Control
             ye [m]    ψe [°]     φr [°]    r [°/s]
RMSE         0.0673       4.2        1.0       15.0
Mean         0.0205       1.1        0.1        3.9
Total       46.4       2506.9      596.9     8871.7
Maximum      0.2624      17.4        6.2       61.6

Table 6.5: Statistics obtained during control without feedback

The maximum value in this case occurs during the cornering curves with the value 26.2 cm.


6.2.5.2 Control with attitude feedback

In this case the weighting and gain matrices were identical to the ones used before (equation 6.18). The figures obtained are also similar to the ones presented before; they may be consulted in appendix B.

         Estimated tracking errors       Control
             ye [m]    ψe [°]     φr [°]    r [°/s]
RMSE         0.0673       4.3        0.9       15.3
Mean         0.0210       1.1        0.1        3.8
Total       48.3       2478.1      504.5     8769.9
Maximum      0.2236      20.5        6.8       72.5

Table 6.6: Statistics obtained during control with feedback

6.2.5.3 Comparison

During these analyses, equation 6.17 was used.

Similar to Holonomic model 1, the estimations during the simulations without feedback had lower values, indicating that the estimated values were lower than their actual values. This can be observed in the Mean and Total ye values. In this case, however, the Maximum ye was inferior, which indicates that the control was now able to achieve a better performance.

The ψe also suffered from underestimation but, opposite to Holonomic model 1, the Total amount of ψe was reduced. This indicates that the control performed better than before in removing the angle errors, at a reduced cost of 1.2 % in the ψ angle.

The φ control action during a lap around the track was reduced by 18.1 % with the knowledge of the attitude, an amount similar to Holonomic model 1. In this case, however, the Maximum control action was higher.

6.3 Comparison between results

The control performed in the ideal situation by Holonomic model 1 indicated it could be better than Holonomic model 2 in following the imposed references, for both tracking errors. Although the control in Holonomic model 2 had larger errors, it was also smoother (visible in the comparison of the statistics obtained for the control variable r). At the same time, it is verified that the use of the control variable r is a good help in removing the small cross tracking errors that the variable φr has difficulties dealing with.

From the comparison of the control models in VR, it can be observed that Holonomic model 2 has more difficulties in fine control. This is shown by the comparison of the Total ye values, which in Holonomic models 1 and 2 were 28.4 and 46.4 respectively, with approximately equal control action values.

On the other hand, the Total ψe error in Holonomic models 1 and 2 was 3455 and 2507 respectively, with similar control action values, although the Maximum control action provided by r is higher in Holonomic model 2, as is also visible in the Total and RMSE values.

The Maximum heading error is higher in Holonomic model 1, indicating that Holonomic model 2 is more stable in terms of angle following.

In the comparison of the estimations obtained during a lap (figures 6.6a and 6.7a), it was verified that Holonomic model 2 presents a peculiar behavior during the cornering curve. This comes from using only the φr angle to remove the cross tracking error: during the curve, the control action made the quadrotor quickly recover the lost position in comparison to Holonomic model 1.

During the tuning of the weighting matrices, it was verified that Holonomic model 2 is faster to adjust, due to both the nature of the control and the decoupling of the control variables. The nature of the control provided by Holonomic model 1 may, in tight curves, lead to the camera losing sight of the track.

The control of Holonomic model 2 may be applied at other velocities equal to or below 0.5 m/s. The same is true of Holonomic model 1, but there the control almost loses sight of the track at 0.5 m/s, and at low velocities, such as 0.2 m/s, the controller produces a highly oscillatory movement in order to remove the position error. If the control is performed at different velocities, another set of weighting matrices may be needed due to this behavior.

In the simulations done, the attitude feedback for the estimation methods is not necessary, since the control is robust enough to deal with the lack of this knowledge. Since the application at hand is a short flight, it was concluded that there is no need to perform the attitude feedback for a successful remote control.

Through the comparison of figures 4.9, 6.7a and 6.6a, it can be seen that the quadrotor has a behavior similar to the one presented by rasteirinho, although during the cornering curves the robots present considerable differences; this occurs due to the drifts of the quadrotor.

Holonomic model 2 should be the one used on the experimental platform. This model is easier to calibrate and does not need the longitudinal velocity during control, which is ideal for an external control of the longitudinal velocity.


Chapter 7

Conclusions and Future Work

7.1 Conclusions

With the completion of this work, it was concluded that:

• An image enhancement was developed and tested in the real conditions provided by the laboratory. Within the conditions of the application, it proved to be a tool capable of cleaning the noise.

• Methods to estimate the cross tracking and heading errors were applied, with the possibility of

feeding back the camera conditions to perform a better estimation.

• The effects of the quadrotor's attitude on the estimation of the tracking errors were studied, as well as the need to feed the attitude back to the error estimators during flight. It was verified that, in order to perform the control, the feedback of the attitude to the estimators is not needed; the controller is robust enough to deal with the lack of this knowledge.

• The image enhancement and the methods used to estimate the tracking variables were tested and

validated in the track of the laboratory with the use of the robot rasteirinho.

• A virtual reality simulator was successfully made, used and compared to reality, with values obtained while using rasteirinho. It was observed to be a good tool to prepare the vision-based experiments.

• With the validation of the virtual reality simulator it may be assumed that the developed control for

the quadrotor is well designed.

• The velocity estimation using Optical Flow is not possible using the current setup, since the noise elements within the untreated image largely corrupt the estimation.

• The aim of creating a model to design the controller for the quadrotor, with the purpose of following the track of the laboratory while using airborne vision feedback, was achieved. This model was compared to others and both its advantages and downsides were verified. It was concluded that the resulting controller is robust, safe and applicable to the real system.


7.2 Future Work

• The estimation of the longitudinal velocity through the image was shown to be inefficient, due to the lack of constant references on the ground and the constant noise. In order to deal with this, an image treatment is needed to clean the noise and capture the desired constant elements. By doing so, it should be possible to obtain values that can be related to the actual velocity of the quadrotor or of rasteirinho.

• The implementation of the control approach on the real quadrotor system was prepared and is ready to test.


Bibliography

[1] Arigela, S. and K.V. Asari (2006). An adaptive and non linear technique for enhancement of extremely

high contrast images. In Applied Imagery and Pattern Recognition Workshop, 2006. AIPR 2006. 35th

IEEE, oct.), pp. 24.

[2] Barrientos, A., J. Colorado, A. Martinez and J. Valente (2010). Rotary-wing mav modeling and control

for indoor scenarios. In Industrial Technology (ICIT), 2010 IEEE International Conference on, march),

pp. 1475 –1480.

[3] Barron, J.L., D.J. Fleet, S.S. Beauchemin and T.A. Burkitt (1992). Performance of optical flow tech-

niques. Computer Vision and Pattern Recognition, 1992. Proceedings CVPR ’92., 1992 IEEE Com-

puter Society Conference on, 236 – 242.

[4] Blas, M.R., M. Agrawal, A. Sundaresan and K. Konolige (2008). Fast color/texture segmentation

for outdoor robots. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International

Conference on, sept.), pp. 4078 –4085.

[5] Bourquardez, Odile, Robert Mahony, Nicolas Guenardand Francois Chaumette, Tarek Hameland

and Laurent Eck (2009). Image-based visual servo control of the translation kinematics of a quadrotor

aerial vehicle. IEEE Transactions on Robotics 25, 743–749.

[6] Cabecinhas, D., C. Silvestre and R. Cunha (2010). Vision-based quadrotor stabilization using a pan

and tilt camera. 49th IEEE Conference on Decision and Control , 1644–1649.

[7] Cambron, M.E. and S.G. Northrup (2006). Calibration of a pole-mounted camera using a neural net-

work. In System Theory, 2006. SSST ’06. Proceeding of the Thirty-Eighth Southeastern Symposium

on, march), pp. 265 –269.

[8] Carvalho, Edwin John Oliveira (2008). Localization and cooperation of mobile robots applied to

formation control. Master’s thesis, Instituto Superior Tecnico.

[9] Chhaniyara, Savan, Pished Bunnun, Lakmal D. Seneviratne and Kaspar Althoefer (2008). Optical

flow algorithm for velocity estimation of ground vehicles: A feasibility study. International Journal on

Smart Sensing and Intelligent Systems 1.

[10] Couto, Miguel (2010). Localizacao e navegacao entre robos moveis: Dissertacao de mestrado.

Master’s thesis, Av. Rovisco Pais, 1.

77

Page 102: Vision Path Following with a Stabilized Quadrotor · Vogal: Prof. Alexandra Bento Moutinho Novembro - 2011. ... trigonometrico considerando a geometria do problema, um m´ etodo baseado

[11] Domingues, Jorge Miguel Brito (2009). Quadrotor prototype. Master’s thesis, Instituto Superior

Tecnico.

[12] Fujikake, H., K. Takizawa, T. Aida, T. Negishi and M. Kobayashi (1998). Video camera system using

liquid-crystal polarizing filter to reduce reflected light. Broadcasting, IEEE Transactions on 44(4), 419

–426.

[13] He, Kuen-Jan, Chien-Chih Chen, Ching-Hsi Lu and Lei Wang (2010). Implementation of a new

contrast enhancement method for video images. In Industrial Electronics and Applications (ICIEA), 2010 5th IEEE Conference on, June, pp. 1982–1987.

[14] Henriques, Bernardo Sousa Machado (2011). Estimation and control of a quadrotor attitude. Master's thesis, Instituto Superior Técnico.

[15] Ismail, A. H., H. R. Ramli, M. H. Ahmad and M. H. Marhaban (2009). Vision-based system for line following mobile robot. IEEE Symposium on Industrial Electronics and Applications, 642–645.

[16] Otsu, Nobuyuki (1979). A threshold selection method from gray-level histograms. Systems, Man and Cybernetics, IEEE Transactions on 9(1), 62–66.

[17] Romero, Hugo, Sergio Salazar and Rogelio Lozano (2009). Real-time stabilization of an eight-rotor UAV using optical flow. IEEE Transactions on Robotics 25, 809–817.

[18] Rondon, Eduardo, Luis-Rodolfo Garcia-Carrillo and Isabelle Fantoni (2010). Vision-based altitude, position and speed regulation of a quadrotor rotorcraft. IEEE Interna, 628–633.

[19] Song, Xiaojing, Lakmal D. Seneviratne, Kaspar Althoefer, Zibin Song and Yahya H. Zweiri (2007). Visual odometry for velocity estimation of UGVs. IEEE International Conference on Mechatronics and Automation, 1611–1616.

[20] Song, Xiaojing, Zibin Song, Lakmal D. Seneviratne and Kaspar Althoefer (2008). Optical flow-based slip and velocity estimation technique for unmanned skid-steered vehicles. IEEE International Conference on Intelligent Robots and Systems, 101–106.

[21] Stoyanov, D. and Guang Zhong Yang (2005). Removing specular reflection components for robotic assisted laparoscopic surgery. In Image Processing, 2005. ICIP 2005. IEEE International Conference on, Volume 3, Sept., pp. III-632–5.

[22] Tchoulack, S., J. M. Pierre Langlois and F. Cheriet (2008). A video stream processor for real-time detection and correction of specular reflections in endoscopic images. In Circuits and Systems and TAISA Conference, 2008. NEWCAS-TAISA 2008. 2008 Joint 6th International IEEE Northeast Workshop on, June, pp. 49–52.

[23] Tsuji, T. (2010). Specular reflection removal on high-speed camera for robot vision. In Robotics and Automation (ICRA), 2010 IEEE International Conference on, May, pp. 1542–1547.


[24] Wang, Junxian, How-Lung Eng, A. H. Kam and Wei-Yun Yau (2004). Specular reflection removal for human detection under aquatic environment. In Computer Vision and Pattern Recognition Workshop, 2004. CVPRW '04. Conference on, June, pp. 130.

[25] Wang, Qing and Rabab K. Ward (2007). Fast image/video contrast enhancement based on weighted thresholded histogram equalization. IEEE Transactions on Consumer Electronics 53, 757–764.

[26] Zhao, Qingjie, Fasheng Wang and Zengqi Sun (2006). Using neural network technique in vision-based robot curve tracking. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, Oct., pp. 3817–3822.

[27] Zhou, Qing-Li, Youmin Zhang, Yao-Hong Qu and C.-A. Rabbath (2010). Dead reckoning and Kalman filter design for trajectory tracking of a quadrotor UAV. In Mechatronics and Embedded Systems and Applications (MESA), 2010 IEEE/ASME International Conference on, July, pp. 119–124.



Appendix A

Results from rasteirinho simulations and experiments

In this appendix the plots for the other tested velocities are presented. It was observed that at slower velocities the estimation peaks appeared less frequently, indicating that at slower velocities the bumps have a smoother impact on the camera hardware. It was also confirmed that the estimation spikes are not only related to bumps, since in the simulation at 0.5 m/s the real estimation plot presents almost no peaks.
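The quantities plotted below, the cross-track error ye and heading error ψe, drive the yaw-rate command r shown in Figures A.4–A.6. As a minimal sketch of how such a command can be formed (a generic proportional law with placeholder gains and saturation, not the controller tuned in this thesis), one could write:

```python
def heading_rate_command(ye, psi_e, k_y=1.0, k_psi=2.0, r_max=1.0):
    """Illustrative line-following law: command the heading rate r to
    reduce the cross-track error ye [m] and heading error psi_e [rad].
    Gains k_y, k_psi and the limit r_max are hypothetical values."""
    r = -(k_y * ye + k_psi * psi_e)
    # Saturate the command, as a real actuator loop would
    return max(-r_max, min(r_max, r))

# On the path and aligned with it, no correction is needed
print(heading_rate_command(0.0, 0.0))  # 0.0
```

A large cross-track error saturates the command (e.g. `heading_rate_command(10.0, 0.0)` returns `-1.0`), which mirrors the clipping visible in the control-action plots at the start of each run.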

Figure A.1: Tracking errors (ye and ψe) during the simulation of the real (a) and virtual (b) rasteirinho at 0.3 m/s.


Figure A.2: Tracking errors (ye and ψe) during the simulation of the real (a) and virtual (b) rasteirinho at 0.5 m/s.

Figure A.3: Tracking errors (ye and ψe) during the simulation of the real (a) and virtual (b) rasteirinho at 0.6 m/s.


Figure A.4: Control action r during the simulation of the real (a) and virtual (b) rasteirinho at 0.3 m/s.

Figure A.5: Control action r during the simulation of the real (a) and virtual (b) rasteirinho at 0.5 m/s.


Figure A.6: Control action r during the simulation of the real (a) and virtual (b) rasteirinho at 0.6 m/s.


Appendix B

Results of control with attitude feedback

In this appendix the plots obtained with attitude-feedback control are provided for both models.

Figure B.1: Values obtained with control in simulation with feedback for Holonomic model 1: (a) cross tracking error ye (blue) and heading error ψe (red); (b) quadrotor attitude angle φ (red) and control variable φr (blue); (c) control angle rate r (blue) and heading error ψe (red); (d) quadrotor lateral velocity Vy.

Figure B.2: Values obtained with control in simulation with feedback for Holonomic model 2: (a) cross tracking error ye (blue) and heading error ψe (red); (b) quadrotor attitude angle φ (red) and control variable φr (blue); (c) control angle rate r (blue) and heading error ψe (red); (d) quadrotor lateral velocity Vy.


Appendix C

Camera matrices for the real and virtual cameras

In this appendix the camera matrices obtained for both the real and virtual cameras are presented; both were obtained with the Camera Calibration Toolbox. Note that manual tuning was applied afterwards, because the method lacks precision for low-resolution cameras: the values obtained for the real camera gave little confidence in the estimated parameters. The camera resolution was [240, 320].

The virtual camera parameters obtained after optimization were:

• Focal length: fc = [284.70913, 285.68777] ± [6.35720, 5.90558]

• Principal point: cc = [159.32360, 124.11345] ± [2.33149, 5.78723]

• Skew: αc = 0 ± 0 ⇒ angle of pixel axes = 90° ± 0°

• Distortion: kc = [0.00178, −0.00001, −0.00009, −0.00047, 0]

• Distortion error: kce = ±[0.02141, 0.07069, 0.00258, 0.00326, 0]

• Pixel error: err = [0.15646, 0.15015]

After manual correction of the terms, the values used were:

• Focal length: fc = [285, 286]

• Principal point: cc = [160, 120]


The real camera parameters obtained after optimization were:

• Focal length: fc = [405.29692, 457.43546] ± [7.85498, 8.27087]

• Principal point: cc = [180.46729, 182.77437] ± [5.85023, 10.48517]

• Skew: αc = 0 ± 0 ⇒ angle of pixel axes = 90° ± 0°

• Distortion: kc = [−0.22825, 0.06205, 0.00975, 0.01016, 0]

• Distortion error: kce = ±[0.04346, 0.18624, 0.00562, 0.00263, 0]

• Pixel error: err = [0.47883, 0.49183]

After manual correction of the terms, the values used were:

• Focal length: fc = [380, 400]

• Principal point: cc = [160, 120]
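The parameters above follow the pinhole model with radial/tangential distortion used by the Camera Calibration Toolbox: fc are the focal lengths in pixels, cc the principal point, and kc the distortion coefficients (kc[0], kc[1], kc[4] radial; kc[2], kc[3] tangential). A minimal sketch of how a 3-D point in camera coordinates maps to pixels under this model (function names are illustrative):

```python
import numpy as np

def intrinsic_matrix(fc, cc, alpha_c=0.0):
    """Intrinsic matrix K in the Camera Calibration Toolbox convention."""
    return np.array([[fc[0], alpha_c * fc[0], cc[0]],
                     [0.0,   fc[1],           cc[1]],
                     [0.0,   0.0,             1.0]])

def distort(xn, yn, kc):
    """Apply radial (kc[0], kc[1], kc[4]) and tangential (kc[2], kc[3])
    distortion to normalized image coordinates."""
    r2 = xn**2 + yn**2
    radial = 1 + kc[0] * r2 + kc[1] * r2**2 + kc[4] * r2**3
    dx = 2 * kc[2] * xn * yn + kc[3] * (r2 + 2 * xn**2)
    dy = kc[2] * (r2 + 2 * yn**2) + 2 * kc[3] * xn * yn
    return radial * xn + dx, radial * yn + dy

def project(P_cam, fc, cc, kc):
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    xn, yn = P_cam[0] / P_cam[2], P_cam[1] / P_cam[2]  # normalize by depth
    xd, yd = distort(xn, yn, kc)
    u, v, _ = intrinsic_matrix(fc, cc) @ np.array([xd, yd, 1.0])
    return u, v

# Manually corrected virtual-camera parameters from this appendix
fc = [285.0, 286.0]
cc = [160.0, 120.0]
kc = [0.00178, -0.00001, -0.00009, -0.00047, 0.0]

# A point on the optical axis projects onto the principal point
u, v = project(np.array([0.0, 0.0, 1.0]), fc, cc, kc)
print(round(u, 1), round(v, 1))  # 160.0 120.0
```

The same functions apply to the real camera by substituting its corrected fc, cc and estimated kc.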
