
DEPARTAMENTO DE INGENIERÍA DE SISTEMAS Y AUTOMÁTICA

ESCUELA SUPERIOR DE INGENIEROS

UNIVERSIDAD DE SEVILLA

Distributed Architecture for the Cooperation of Multiple Unmanned Aerial Vehicles in Civil Applications

por

Jesús Iván Maza Alcañiz

PROPUESTA DE TESIS DOCTORAL PARA LA OBTENCIÓN DEL TÍTULO DE DOCTOR POR LA UNIVERSIDAD DE SEVILLA

SEVILLA, FEBRERO 2010

Director: Prof. Dr.-Ing. Aníbal Ollero Baturone


UNIVERSIDAD DE SEVILLA

Memoria presentada para optar al grado de Doctor por la Universidad de Sevilla

Autor: Jesús Iván Maza Alcañiz

Título: Distributed Architecture for the Cooperation of Multiple Unmanned Aerial Vehicles in Civil Applications

Departamento: Departamento de Ingeniería de Sistemas y Automática

Vº Bº del Director: Aníbal Ollero Baturone

El autor: Jesús Iván Maza Alcañiz


A mis padres y abuelos

A mi tío Rodolfo

A Carmen


Agradecimientos

Durante la lenta y múltiples veces interrumpida evolución de esta tesis he acumulado muchas deudas, y solamente tengo espacio para agradecer aquí una parte de las mismas.

En primer lugar, al Catedrático Aníbal Ollero, director de esta tesis y del Grupo de Robótica, Visión y Control, por su ayuda en los desarrollos presentados aquí. También por haber dedicado múltiples esfuerzos para facilitarme una plataforma para la demostración práctica de muchos aspectos del trabajo presentado en esta tesis.

A los Catedráticos Fernando Lobo Pereira y Pedro Marrón por haber aceptado elaborar los informes correspondientes a esta tesis. También agradezco a Raja Chatila, Simon Lacroix, Aarne Halme y Sami Ylönen la oportunidad que me brindaron de trabajar varios meses en sus laboratorios.

A los socios de los proyectos COMETS y AWARE: Jeremi Gancet, Günter Hommel, Konstantin Kondak, Markus Bernard, Emmanuel Previnaire, Jan Sperling, Jason Lepley, Ola Aribisala, Robert Sauter, Olga Saukh, Manuel Gonzalo y Eduardo de Andrés, por todo lo que he aprendido de ellos, y por todos los buenos momentos que hemos compartido. En especial, quería agradecer a Konstantin y Markus su ayuda en el diseño y depuración de la interfaz entre la capa deliberativa y ejecutiva de los helicópteros empleados en los experimentos. Asimismo, agradezco al Catedrático Günter Hommel haberme facilitado un buen material fotográfico de los experimentos, que ha permitido ilustrar los conceptos de varios capítulos de esta tesis.

Me gustaría agradecerles también a Roberto Molina, David Scarlatti, Carlos Montes y David Esteban (de la empresa Boeing Research and Technology Europe) su orientación y ayuda en el trabajo que se presenta en el capítulo dedicado a las interfaces.

A mis compañeros, y sobre todo amigos, Luis, Fernando, Jesús, Paco, Carlos, Ángel, Manuel, Antidio y Joaquín, por sus ánimos, inspiración, discusiones interminables y risas. Especialmente, le doy las gracias a Luis por todos estos años de ánimos y amistad.

Finalmente, los agradecimientos a las personas más cercanas. A Carmen y su familia, por todo el amor y los buenos momentos brindados a lo largo de estos años. Ella siempre ha estado a mi lado, a pesar de las dificultades, dándole sentido a todos los esfuerzos.

Y quiero dar las gracias especialmente a mi familia, por toda su comprensión y amor a lo largo de toda mi vida. Ellos me han apoyado incondicionalmente y siempre, haciendo posible aquello que parecía imposible.

Por tanto, esta tesis está dedicada a Carmen, a mis padres y abuelos, y a mi tío Rodolfo.

Mis más sinceras disculpas si he omitido a alguien que debiera recibir también mi agradecimiento.


Resumen

La robótica aérea tiene un gran potencial para la realización de tareas como la adquisición de datos e imágenes en áreas inaccesibles por medios terrestres. Por otro lado, la complejidad de ciertas aplicaciones requiere la cooperación entre varios robots, debido a la necesidad de intervenir simultáneamente en distintas localizaciones, a la extensión espacial de la tarea a realizar o a las limitaciones de carga de los robots. Incluso en casos en los que la cooperación no es estrictamente necesaria, ésta puede ser empleada para incrementar la robustez de aplicaciones como detección y localización.

Esta tesis presenta una arquitectura distribuida para la coordinación y cooperación autónoma de múltiples vehículos aéreos no tripulados (Unmanned Aerial Vehicles, UAVs, en inglés). La arquitectura está compuesta por diferentes módulos que resuelven los problemas habituales que surgen durante la ejecución de misiones multipropósito, tales como la descomposición de tareas complejas, la asignación de tareas, la detección y resolución de conflictos, etc. Uno de los principales objetivos en el diseño de la arquitectura ha sido imponer pocos requisitos a las capacidades ejecutivas de los vehículos autónomos que se quisieran integrar en la plataforma. Básicamente, esos vehículos deberían ser capaces de moverse a una determinada localización y activar su carga útil cuando fuera requerido. De esta manera, es posible integrar vehículos de diferentes fabricantes y grupos de investigación en la arquitectura desarrollada, permitiendo su uso en muchas aplicaciones multi-UAV.

En relación a los módulos desarrollados en la arquitectura interna de cada UAV, cabe mencionar que el trabajo se ha focalizado principalmente en los módulos encargados de las siguientes funciones:

• Descomposición de tareas complejas en tareas elementales ejecutables por el UAV directamente (Capítulo 4). Se han desarrollado algoritmos que permiten descomponer distintos tipos de tareas, tales como la vigilancia de una zona o la monitorización de un objeto.

• Asignación de tareas de manera distribuida (Capítulo 5). Una vez definida la misión a ejecutar por el equipo de UAVs, es necesario decidir qué UAV va a ejecutar cada tarea. Se han desarrollado tres algoritmos para la asignación distribuida de tareas, que han sido probados en simulación y con la plataforma real.

• Detección y resolución de conflictos (Capítulo 6). Una vez que cada UAV tiene un plan elaborado compuesto por tareas elementales, es necesario detectar si hay conflictos con los planes de otros miembros del equipo. Uno de los conflictos más críticos aparece cuando los UAVs comparten un mismo espacio aéreo y sus trayectorias se solapan en espacio y tiempo. Por tanto, se han desarrollado métodos distribuidos para la detección y resolución de conflictos entre las trayectorias de los diferentes UAVs.
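A modo de ilustración únicamente (no forma parte del software real de la tesis, y todos los nombres son hipotéticos), el criterio básico de conflicto descrito arriba — dos trayectorias que se solapan en espacio y tiempo — podría esbozarse así para trayectorias muestreadas en instantes comunes:

```python
from math import dist  # disponible desde Python 3.8

def hay_conflicto(tray_a, tray_b, sep_min=10.0):
    """Boceto hipotético: cada trayectoria es una lista de muestras
    (t, x, y, z) en instantes comunes. Hay conflicto si en algún
    instante compartido la distancia entre los dos vehículos es
    menor que la separación mínima sep_min (en metros)."""
    # Indexar la trayectoria B por instante de tiempo.
    puntos_b = {t: p for (t, *p) in tray_b}
    for t, *p in tray_a:
        q = puntos_b.get(t)
        # Solapamiento en tiempo (mismo instante) y en espacio
        # (distancia euclídea por debajo del umbral).
        if q is not None and dist(p, q) < sep_min:
            return True
    return False
```

En la práctica, los métodos distribuidos del Capítulo 6 operan sobre los planes de cada par de UAVs; este boceto sólo ilustra el criterio geométrico-temporal elemental.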

La implementación distribuida de la arquitectura ofrece más robustez y escalabilidad en comparación con una solución centralizada, pero presenta retos significativos relacionados con la naturaleza asíncrona de los posibles eventos y mensajes intercambiados.

Se ha llevado a cabo la implementación software tanto de la arquitectura multi-UAV como de la interfaz persona-máquina (Human Machine Interface, HMI, en inglés) de la plataforma. Esta última aplicación ha sido diseñada teniendo en cuenta las capacidades autónomas de la plataforma. Esta tesis también presenta las características de la interfaz persona-máquina junto con los resultados de un estudio que analiza los beneficios de aplicar múltiples modalidades sensoriales en dicha interfaz.

Dichas implementaciones software han sido probadas en simulación y finalmente validadas en experimentos de campo con cuatro helicópteros autónomos en el marco del Proyecto AWARE, financiado por la Comisión Europea. El proceso de validación se llevó a cabo en las instalaciones de la empresa Protec-Fire (grupo Iturri) en Utrera (España) e incluyó varias misiones multi-UAV para aplicaciones civiles en un entorno urbano simulado:

• Vigilancia con múltiples UAVs.

• Confirmación, monitorización y extinción de incendios.

• Transporte y despliegue de cargas con uno y varios UAVs.

• Seguimiento de personas.

La validación incluyó una demostración del sistema a los revisores de la Comisión Europea del proyecto AWARE, así como a otros invitados de empresas y usuarios finales.


Acknowledgments

During the slow and often interrupted evolution of this thesis I have accumulated many debts, only a proportion of which I have space to acknowledge here.

First of all, I thank Professor Aníbal Ollero, director of this thesis and head of the Robotics, Vision and Control group, for encouraging the developments presented here. He has also devoted many efforts to providing a platform for the practical demonstration of many aspects of the work presented in this thesis.

I would like to thank Professors Fernando Lobo Pereira and Pedro Marrón for agreeing to review this thesis. I should also thank Raja Chatila, Simon Lacroix, Aarne Halme and Sami Ylönen for allowing me to work at their labs for several months over the course of the thesis.

I thank the partners of the COMETS and AWARE projects: Jeremi Gancet, Günter Hommel, Konstantin Kondak, Markus Bernard, Emmanuel Previnaire, Jan Sperling, Jason Lepley, Ola Aribisala, Robert Sauter, Olga Saukh, Manuel Gonzalo and Eduardo de Andrés, for all that I have learnt from them and for all the good moments we shared. In particular, I would like to thank Konstantin and Markus for their help in the design and debugging of the software interface between the deliberative and executive layers of the helicopters used during the experiments. I am also grateful to Prof. Günter Hommel for sending me many good photographs taken during the experiments, which have helped to illustrate different chapters of this thesis.

I would also like to thank Roberto Molina, David Scarlatti, Carlos Montes and David Esteban, from Boeing Research and Technology Europe, for encouraging the work presented in the chapter devoted to the multimodal interfaces.

I thank my colleagues, and above all friends, Luis, Fernando, Jesús, Paco, Carlos, Ángel, Manuel, Antidio and Joaquín, for their support, inspiration, never-ending discussions and laughter. I especially thank Luis for all these years of support and friendship.

Finally, my acknowledgements go to the people closest to me. To Carmen and her family, for all the love and good moments throughout these years. She has always been by my side, regardless of the difficulties, giving meaning to all the efforts.

And I especially thank my family for all their understanding and love throughout my whole life. They have always supported me unconditionally, making possible what seemed impossible.

Thus, this thesis is dedicated to Carmen, to my parents and grandparents, and to my uncle Rodolfo.

My sincere apologies if I have inadvertently omitted anyone to whom acknowledgement is due.


Abstract

Aerial robotics can be very useful for performing complex tasks such as data and image acquisition in areas otherwise inaccessible by ground means, localization of targets, tracking, map building and others. On the other hand, the complexity of some applications requires cooperation between several robots. Moreover, even when cooperation is not strictly required, it can be used to increase robustness in applications such as detection and localization.

This thesis presents a distributed architecture for the autonomous coordination and cooperation of multiple Unmanned Aerial Vehicles (UAVs). The architecture is endowed with different modules that solve the usual problems that arise during the execution of multi-purpose missions, such as task allocation, conflict resolution, complex task decomposition, etc. One of the main objectives in the design of the architecture was to impose few requirements on the execution capabilities of the autonomous vehicles to be integrated into the platform. Basically, those vehicles should be able to move to a given location and activate their payload when required. Thus, autonomous vehicles from different manufacturers and research groups can be integrated into the architecture developed, making it easily usable in many multi-UAV applications.
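As a purely illustrative sketch (not the platform's actual software; every name here is hypothetical), the minimal execution contract assumed of each vehicle — move to a given location and activate the payload on request — could be expressed as follows:

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    """Simple 3D target location, e.g. in metres in a common frame."""
    x: float
    y: float
    z: float

class MinimalUAV:
    """Hypothetical minimal vehicle interface: the only capabilities
    the architecture assumes are going to a location and triggering
    the payload when required."""

    def __init__(self, uav_id: str):
        self.uav_id = uav_id
        self.position = Waypoint(0.0, 0.0, 0.0)
        self.payload_active = False

    def goto(self, wp: Waypoint) -> None:
        # A real vehicle would fly to the waypoint; this sketch only
        # records the commanded position.
        self.position = wp

    def activate_payload(self) -> None:
        # E.g. start a camera, release a load, deploy a sensor node.
        self.payload_active = True
```

Under this assumption, a vehicle from any manufacturer could in principle be integrated by wrapping its autopilot commands behind these two operations.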

The distributed implementation of the architecture provides more robustness and scalability compared to a centralized solution, but it also poses significant challenges related to the asynchronous nature of the events and messages exchanged.

The multi-UAV architecture has been implemented along with the Human Machine Interface (HMI) application of the platform. This application has been designed taking into account the autonomous capabilities of the architecture. This thesis also presents the different features of the HMI, along with the results of a study that analyzes the benefits of applying multiple modalities in the interface with the user.

The software implementation of both the architecture and the HMI has been tested in simulation and finally validated in field experiments with four autonomous helicopters in the framework of the AWARE Project funded by the European Commission. The validation process was carried out in the facilities of the Protec-Fire company (Iturri group) in Utrera (Spain) and included several multi-UAV missions for civil applications in a simulated urban setting:

• Surveillance with multiple UAVs.

• Fire confirmation, monitoring and extinguishing.

• Load transportation and deployment with single and multiple UAVs.

• People tracking.

Finally, it is worth mentioning that the validation included a demonstration of the system to the European Commission reviewers of the project, as well as to people from industry and potential end users.


Index

Agradecimientos vii

Resumen ix

Acknowledgments xi

Abstract xiii

Index xviii

Tables xx

Figures xxiv

1 Introduction 3

1.1 Motivation and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.2 Decision-making in Complex Systems: Key Components . . . . . . . . . . . . . . . . 4

1.3 Outline and Main Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

1.4 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2 Cooperation and Networking of Multiple Mobile Autonomous Systems 13

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2 General Concepts and Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.2.1 Coordination and Cooperation . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.2.2 Classification of Multi-vehicle Systems . . . . . . . . . . . . . . . . . . . . . . 16

2.2.3 General Model for each Robot in the Team . . . . . . . . . . . . . . . . . . . 16

2.3 Physical Coupling: Joint Load Transportation . . . . . . . . . . . . . . . . . . . . . . 19

2.4 Vehicle Formations and Coordinated Control . . . . . . . . . . . . . . . . . . . . . . 22

2.5 Swarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.6 Intentional Cooperation Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2.7 Mobile Systems Networked with Sensors and Actuators in the Environment . . . . . 29

2.8 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32


3 Models and Decisional Architecture 33

3.1 Centralized / Decentralized Decision . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

3.2 Asynchronous Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.2.1 Asynchronous System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.2.2 Asynchronous Network Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.3 Multi-UAV Architecture in the AWARE Platform . . . . . . . . . . . . . . . . . . . . 42

3.3.1 Distribution of Decisional Capabilities . . . . . . . . . . . . . . . . . . . . . . 43

3.3.2 Models, Knowledge and AWARE Platform Components . . . . . . . . . . . . 43

3.3.3 Distributed Architecture for the Platform . . . . . . . . . . . . . . . . . . . . 45

3.3.4 Task Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.3.5 Task and Synchronization Managers . . . . . . . . . . . . . . . . . . . . . . . 52

3.3.6 Plan Builder / Optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

3.3.7 Perception Subsystem (PSS) . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

3.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4 Plan Refining Tools 57

4.1 Role during Monitoring Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

4.1.1 Location Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

4.1.2 Object Monitoring based on the Perception System Estimations . . . . . . . 59

4.2 Deployment Missions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4.3 Task Refining in Surveillance Missions . . . . . . . . . . . . . . . . . . . . . . . . . . 64

4.3.1 Area Decomposition for UAV Workspace Division . . . . . . . . . . . . . . . 64

4.3.2 Sensing Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

4.3.3 Individual Areas Coverage Algorithm . . . . . . . . . . . . . . . . . . . . . . 67

4.3.4 Simulations Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4.4 Static Obstacles Avoidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

4.4.1 Geometric Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

4.4.2 Definition of the Basic Motion Planning Problem . . . . . . . . . . . . . . . . 72

4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

5 Distributed Task Allocation 75

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

5.2 SIT Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

5.3 SET Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

5.3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

5.3.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

5.4 Synchronization during the Negotiation . . . . . . . . . . . . . . . . . . . . . . . . . 81

5.5 SIT and SET Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

5.6 Services and Tasks: S+T Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

5.7 Deadlock Situations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

5.8 S+T Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

5.9 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94


6 Plan Merging Process 97

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

6.2 Distributed Method for Conflict Detection and Resolution . . . . . . . . . . . . . . . 99

6.2.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

6.2.2 Distributed Method for Conflict Resolution . . . . . . . . . . . . . . . . . . . 100

6.2.3 Geometrical Approach for Conflict Detection . . . . . . . . . . . . . . . . . . 103

6.2.4 Deadlock Detection and Resolution . . . . . . . . . . . . . . . . . . . . . . . . 108

6.3 Improvements Based on a Centralized Planner and the Velocity Profile . . . . . . . . 114

6.3.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

6.3.2 Proposed Collision Avoidance Method . . . . . . . . . . . . . . . . . . . . . . 116

6.3.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

6.4 Conclusions and Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

7 Platform Human Machine Interface 125

7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

7.2 AWARE Human Machine Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

7.3 Interactions between Operator and GCS . . . . . . . . . . . . . . . . . . . . . . . . . 128

7.3.1 Information Flow from GCS to Operator . . . . . . . . . . . . . . . . . . . . 129

7.3.2 Information Flow from Operator to GCS . . . . . . . . . . . . . . . . . . . . 130

7.3.3 Operator’s State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

7.4 System Developed based on Multimodal Technologies . . . . . . . . . . . . . . . . . 132

7.4.1 System Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

7.4.2 Tests Performed and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

7.5 Analysis of the Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

7.5.1 Probability Density Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

7.5.2 Comparative Results among Technologies . . . . . . . . . . . . . . . . . . . . 144

7.6 Conclusions and Future Developments . . . . . . . . . . . . . . . . . . . . . . . . . . 146

8 Experimental Results with the AWARE Project Multi-UAV Platform 149

8.1 Experimentation Scenario in the AWARE Project . . . . . . . . . . . . . . . . . . . . 149

8.2 AWARE Platform Subsystems involved in the Missions . . . . . . . . . . . . . . . . . 152

8.2.1 Unmanned Aerial Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

8.2.2 Ground Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

8.2.3 Wireless Sensor Network (WSN) . . . . . . . . . . . . . . . . . . . . . . . . . 156

8.2.4 Fire Truck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

8.3 Types of Missions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

8.3.1 Scheduling of the Experiments and Demonstration . . . . . . . . . . . . . . . 159

8.4 Preliminary Multi-UAV Missions in the AWARE’08 General Experiments . . . . . 160

8.4.1 Mission Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

8.4.2 ODL Modules during the Mission . . . . . . . . . . . . . . . . . . . . . . . . . 163

8.5 Multi-UAV Missions in the AWARE’09 General Experiments . . . . . . . . . . . . . 164

8.5.1 People Tracking (Mission #2) . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

8.5.2 Node Deployment and Fire Monitoring (Mission #5) . . . . . . . . . . . . . . 168

8.5.3 Multi-UAV Surveillance (Mission #7) . . . . . . . . . . . . . . . . . . . . . . 175

8.5.4 Load Transportation (Mission #8) . . . . . . . . . . . . . . . . . . . . . . . . 179

8.6 Summary of Results and Lessons Learned . . . . . . . . . . . . . . . . . . . . . . . . 181


9 Conclusions and Future Work 187

9.1 Revisiting the Main Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

9.1.1 Summary of Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

9.1.2 Detailed Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

9.2 Perspectives for the Application in Civil Markets . . . . . . . . . . . . . . . . . . . . 191

9.3 Future Developments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

9.4 Final Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

A Plan Builder / Optimizer 195

A.1 EUROPA Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

A.2 EUROPA Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

A.3 Application Example: a Deployment Mission in EUROPA . . . . . . . . . . . . . . . 198

B Network Setup in the AWARE Project 203

B.1 Platform Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

B.2 AWARE Middleware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204

B.3 Time Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

C Coordinate Systems in the AWARE Platform 209

C.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

C.2 Global Coordinate System {G} . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

C.3 UAV Coordinate System {U} . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

C.4 Camera Coordinate System {C} . . . . . . . . . . . . . . . . . . . . . . . . . . 213

Bibliography 216


List of Tables

3.1 Description of the modules in Fig. 3.5 along the different chapters of this thesis . . . 47

3.2 Possible internal events considered in the status evolution of a task τki . . . . . . . . 49

3.3 Parameters of a task with type λ = SURV . . . . . . . . . . . . . . . . . . . . . . . . 49

3.4 Type of tasks (λ_k) considered at the ODL level . . . . . . . . . . . . . . . . . 50

3.5 Type of elementary tasks (λ) considered in the Executive Layer (EL) . . . . . . . . . 51

3.6 Elementary task with type λ = GOTO: list of parameters . . . . . . . . . . . . . . . 51

3.7 Executive layer elementary tasks errors that can be reported to the deliberative level 52

3.8 Task request data fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

4.1 Initial coordinates, sensing width and relative capabilities . . . . . . . . . . . . . . . 70

5.1 Solutions computed with three different distributed task allocation algorithms and the optimal result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

5.2 Results for missions with three UAVs and different number of tasks . . . . . . . . . . 85

5.3 Results for missions with five UAVs and different number of tasks . . . . . . . . . . 85

5.4 Results for missions with seven UAVs and different number of tasks . . . . . . . . . 85

5.5 Results with five tasks, different number of UAVs and values for the communication range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

6.1 Summary of the conflicts among UAVs for the scenario depicted in Fig. 6.7 . . . . . 122

6.2 Solution computed for the four UAVs collision . . . . . . . . . . . . . . . . . . . . . . 122

6.3 Computational times of the two methods tested for different number of UAVs . . . . 122

7.1 Operator right and wrong actions depending on the type of button . . . . . . . . . . 134

7.2 Summary of the values represented in Fig. 7.5 . . . . . . . . . . . . . . . . . . . . . . 134

7.3 Description of the full set of experiments . . . . . . . . . . . . . . . . . . . . . . . . . 136

7.4 Summary of the results for the experiment #1 . . . . . . . . . . . . . . . . . . . . . 136


7.5 Summary of the results for the experiment #2 . . . . . . . . . . . . . . . . . . . . . 137

7.6 Summary of the results for the experiment #3 . . . . . . . . . . . . . . . . . . . . . 138

7.7 Summary of the results for the experiment #4 . . . . . . . . . . . . . . . . . . . . . 138

7.8 Summary of the results for the experiment #5 . . . . . . . . . . . . . . . . . . . . . 139

7.9 Summary of the results for the experiment #6 . . . . . . . . . . . . . . . . . . . . . 140

7.10 Summary of the results for the experiment #7 . . . . . . . . . . . . . . . . . . . . . 140

7.11 Summary of the results for individual #5 . . . . . . . . . . . . . . . . . . . . . . . . 142

7.12 Summary of the improvements in mean with respect to the results of Experiment #2 (TS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

8.1 Scheduling of the different AWARE missions . . . . . . . . . . . . . . . . . . . . . . . 160

8.2 Tasks to be executed for the node deployment mission (April 2008) . . . . . . . . . . 163

8.3 Tasks to be executed for the Mission #2 and their decomposition in elementary tasks 166

8.4 Values of the parameters for the GOTO elementary tasks in Mission #2 . . . . . . . 167

8.5 Parameters of a task with type λ = GOTO . . . . . . . . . . . . . . . . . . . . . . . 168

8.6 Tasks executed during Mission #5 and their decomposition in elementary tasks . . . 174

8.7 Parameters for the elementary tasks with type λk1 = GOTO in Mission #5 . . . . . . 174

8.8 Task specified for the Mission #7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

8.9 Values for the tasks parameters (Πk) in Mission #7 . . . . . . . . . . . . . . . . . . 177

8.10 Parameters of a task with type λ = SURV . . . . . . . . . . . . . . . . . . . . . . . . 177

8.11 Parameters of the cameras on-board during the surveillance mission . . . . . . . . . 178

8.12 Values for the bids and resulting relative capabilities in percentage . . . . . . . . . . 179

8.13 Tasks to be executed for the Mission #8 . . . . . . . . . . . . . . . . . . . . . . . . . 181

8.14 Values of the parameters for the elementary GOTO tasks in Mission #8 . . . . . . . 181

8.15 Some figures that reflect the performance of the ODL during all the missions . . . . 183

8.16 Multimedia material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185


List of Figures

1.1 Common scenario for the AWARE Project experiments
1.2 Autonomous helicopters used for the demonstration of the system
2.1 Possible classification for multiple mobile autonomous systems
2.2 Hybrid model for a multi-robot system
2.3 Load transportation system composed by three autonomous helicopters
2.4 Coordinated flights in the COMETS Project involving an airship and two autonomous helicopters
2.5 Mission executed by the CROMAT platform
2.6 Sensor deployment from an autonomous helicopter in 2009
3.1 A process I/O automaton
3.2 A channel I/O automaton
3.3 Composition of processes and channels
3.4 Global overview of the distributed multi-UAV system architecture
3.5 Detailed view of the internal On-board Deliberative Layer (ODL) architecture of a single UAV
4.1 Location and orientation of the UAVs for object monitoring tasks
4.2 The area captured with the camera
4.3 Vertically projectively planar surface
4.4 Covering a region using different sweep directions
4.5 Diameter function for a rectangle
4.6 Area partition simulation results. Optimal sweep directions have been represented by arrows
4.7 Resulting zigzag patterns minimizing the number of turns required
4.8 UAVs have to reconfigure their flight plans to cover the whole area
4.9 Given the initial and goal locations, a set of waypoints avoiding the obstacles and minimizing the distance are computed
5.1 A particular mission that shows some limitations of the SIT algorithm
5.2 Arithmetic mean of the global cost (and its standard deviation in meters) for the 100 random missions and the different methods implemented
5.3 Arithmetic mean of the global cost (and its standard deviation in meters) for the 100 random missions and the three methods implemented
5.4 Mean of the messages sent per UAV in one hundred missions with five UAVs and different number of waypoints
5.5 Example of multiple recursive services required to accomplish one task
5.6 Messages interchanged in the negotiation process using the S+T algorithm
5.7 Mean of the total distance traveled by all the UAVs over one hundred missions with different communication ranges, number of UAVs and five tasks
5.8 Mean of the maximum distance traveled by one UAV over one hundred missions with 300 and 600 meters as the communication range. The number of UAVs and tasks considered in the missions were five
5.9 Mean of the number of tasks executed by all the UAVs over one hundred missions with different values of the communication range
6.1 Bounding solids adopted for each motion state of the UAV
6.2 Top view of a configuration with four UAVs in a deadlock
6.3 Wait-for graph associated to the configuration depicted in Fig. 6.2
6.4 Example of deadlock duration
6.5 Initial temporal overlapping between UAVs 1 and 2
6.6 The algorithm backtracks and creates new branches to avoid the collision
6.7 Three dimensional paths of five UAVs used in a simulation
7.1 A photograph of the AWARE platform HMI during the execution of a mission in 2009
7.2 Human machine interface during a real surveillance mission
7.3 System developed based on multimodal technologies
7.4 Graphical interface of the multimodal software application
7.5 Graphical interface showing the results of a test
7.6 Mouse interface experiment
7.7 Individual #5: Histograms with the number of correct actions in each reaction time interval for the different experiments
7.8 Univariate Gaussian and univariate asymmetric Gaussian
7.9 Individual #5 reaction time probability density functions using UAGs
7.10 Histograms with the number of correct actions in each reaction time interval for the whole population during the different experiments
7.11 Reaction times probability density functions for the whole population in the different experiments
7.12 Reaction times probability density functions for the whole population in the different experiments considering only the transitions from one screen to another
8.1 Common scenario for the AWARE Project experiments
8.2 Smoke and fire machines used in the building
8.3 Elements located in the surroundings of the building
8.4 Dummy bodies used as victims
8.5 Fleet of TUB-H helicopters used in the experiments
8.6 The FC III E SARAH helicopter
8.7 Visual and infrared cameras on-board the TUB-H helicopter
8.8 Detailed view of the Node Deployment Device (NDD)
8.9 Detail of the Load Transportation Device (LTD)
8.10 Ground cameras used in the experimentation scenario
8.11 Detail of the WSN nodes used in the experiments
8.12 The automated mounted monitor of the fire truck
8.13 Screenshot from the human machine interface application during the activation of the fire truck monitor
8.14 The HMI screen during the mission performed in April 2008
8.15 Paths followed by the two helicopters during the mission in April 2008
8.16 Detail of the device on-board the helicopter for the node deployment operation
8.17 Tasks executed by each UAV
8.18 Coordinated flights during the node deployment mission in April 2008
8.19 CNP messages interchanged during the distributed negotiation process in the people tracking mission (Mission #2)
8.20 Paths followed by the two helicopters during the people tracking mission (Mission #2)
8.21 Screenshots of the platform HMI during the execution of Mission #2
8.22 CNP messages interchanged for the allocation of the sensor deployment tasks (Mission #5)
8.23 Partial plans built by UAV 2 during the negotiation process depicted in Fig. 8.22 (Mission #5)
8.24 Paths followed by the two helicopters during the node deployment and fire monitoring mission (Mission #5)
8.25 Screenshots of the platform human machine interface during the execution of Mission #5: sensor nodes deployment and fire monitoring
8.26 Paths followed by the two helicopters during the multi-UAV surveillance mission
8.27 Screenshots of the platform HMI during the execution of the surveillance mission (Mission #7)
8.28 Path followed by the three helicopters transporting the pan&tilt camera
8.29 Coordinates of the helicopters and the load during the flight
8.30 Screenshots of the platform human machine interface during the execution of Mission #8
A.1 EUROPA modules and their dependencies
A.2 Example batch application overview
A.3 Timelines and Predicates with Transitions between Predicates on each Timeline
A.4 Screenshot with the visualization of the results obtained with PlanWorks
B.1 Network setup done during the missions summarized in Chap. 8
C.1 Global coordinate frame considered for the operational area
C.2 Coordinate frame attached to the UAVs
C.3 Coordinate frame attached to the cameras


Notation

Acronyms

The following acronyms are used throughout this thesis:

HBN    High Bandwidth Network
WSN    Wireless Sensor Network
GSN    Ground Sensor Network
UAV    Unmanned Aerial Vehicle
GCS    Ground Control Station
LBN    Low Bandwidth Network
PS     Perception System
PSS    Perception Subsystem
ODL    On-board Deliberative Layer
EL     Executive Layer
API    Application Programming Interface
PC104  Personal Computer with reduced dimensions and bus expansion compatibility
CNP    Contract Net Protocol
RGB    Red, Green and Blue colour space
GPS    Global Positioning System
MW     Middleware
HMI    Human Machine Interface
PDA    Personal Digital Assistant
MPI    Multi-Path Interference
QoS    Quality of Service
LTS    Load Transportation System
DMCS   Disaster Management/Civil Security
RTBS   Real Time Base System
ADC    Analogue to Digital Converter
LAN    Local Area Network
WLAN   Wireless Local Area Network Protocol (IEEE 802.11.x)

Notation and Elementary Concepts

Vectors and Matrices

Throughout this thesis, matrix and vector variables are denoted in upright boldface type.


Tasks and Platform Components

This thesis presents a distributed architecture for the autonomous coordination and cooperation of a platform composed of multiple Unmanned Aerial Vehicles (UAVs). The architecture is endowed with different modules that solve the usual problems that arise during the execution of multi-purpose missions, such as task allocation, conflict resolution, complex task decomposition, etc. The key elements referred to throughout this document are therefore the platform components, i.e. the UAVs, and the tasks to be executed by the platform.

Let us consider a platform with n UAVs. A subscript will be used in this thesis to identify each of them, so the team will be denoted by U_1, U_2, ..., U_n.

On the other hand, if the platform has to execute a set of m tasks, each one will be identified with a superscript: τ^1, τ^2, ..., τ^m. The different tasks will be finally allocated among the UAVs of the platform. Then, if a task τ^j is allocated to the UAV U_i, it will be represented by τ_i^j.

Later, in Chap. 3, the concept of elementary task will be introduced. Basically, an elementary task is a type of task that is directly "understandable" and executable by the executive level of a UAV. Throughout this document, the symbol ˆ will be used to identify the elementary tasks (τ̂).

If a task τ^j is decomposed into a set of n_e elementary tasks, then a superscript before the letter is used to identify each elementary task:

τ^j → ^1τ̂^j, ^2τ̂^j, ..., ^{n_e}τ̂^j
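To fix ideas, this notation maps naturally onto a small data model. The class and field names below are illustrative assumptions for this sketch, not part of the AWARE software:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ElementaryTask:
    """One elementary task ^k tau-hat^j, directly executable by a UAV."""
    k: int      # index within the decomposition (1..n_e)
    j: int      # superscript of the parent task tau^j
    kind: str   # elementary task type, e.g. "GOTO"

@dataclass
class Task:
    """A task tau^j; i is the subscript of the UAV U_i it is allocated to."""
    j: int
    i: Optional[int] = None            # None while still unallocated
    elementary: list = field(default_factory=list)

    def decompose(self, kinds):
        """Replace the decomposition with elementary tasks ^1 .. ^{n_e}."""
        self.elementary = [ElementaryTask(k + 1, self.j, kind)
                           for k, kind in enumerate(kinds)]

# tau^1 allocated to U_2 and decomposed into two elementary GOTO tasks
t = Task(j=1, i=2)
t.decompose(["GOTO", "GOTO"])
```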


Chapter 1

Introduction

This thesis presents contributions in the field of distributed coordination and cooperation within

robot teams, and more precisely, within teams composed of multiple Unmanned Aerial Vehicles

(UAVs). The application of these techniques to provide autonomous capabilities in civil missions is

a key aspect in this work.

This chapter first presents the motivation and the main objectives of the research carried out. Then, the outline and main contributions of the thesis are presented. Finally, the framework in which the work has been developed is described.

1.1 Motivation and Objectives

Unmanned Aerial Vehicles (UAVs) are self-propelled air vehicles that are either remotely controlled

or capable of conducting autonomous operations. Since the first UAV flew, UAVs have been mainly

used in military applications and, in general, for classified purposes. Nevertheless, it is clear that

UAVs have a wide range of civil applications. Their higher mobility and maneuverability with respect to ground vehicles make them a natural choice for tasks like information gathering or even the deployment of instrumentation. Aerial robots can be very useful to perform complex tasks

such as data and image acquisition of areas otherwise inaccessible using ground means, localization

of targets, tracking, map building and others. In recent years, the technologies for autonomous

aerial vehicles have experienced an important development. This has made the research on aerial

autonomous systems affordable for universities and research centers.

On the other hand, the complexity of some applications requires the cooperation between several

robots. Moreover, even if cooperation is not required, it can be used to increase the robustness in

applications such as detection and localization.

This thesis presents a distributed architecture for the autonomous coordination and cooperation

of multiple UAVs. The architecture is endowed with different modules that solve the usual problems

that arise during the execution of multi-purpose missions, such as task allocation, conflict resolution,

complex task decomposition, etc. One of the main objectives in the design of the architecture was to

impose few requirements on the execution capabilities of the autonomous vehicles to be integrated in


the platform. Basically, those vehicles should be able to move to a given location and activate their

payload when required. Thus, autonomous vehicles from different manufacturers and research groups

can be easily integrated into the architecture developed, making it usable in many multi-UAV platforms.

The distributed implementation of the architecture provides more robustness and scalability

compared to a centralized solution, but also poses significant challenges related to the asynchronous

nature of the events and messages interchanged.

The multi-UAV architecture has been implemented along with the Human Machine Interface

(HMI) application of the platform. This application has been designed taking into account the autonomous capabilities of the system. It thus decreases the operator workload in such a way that a single operator can manage a complex platform composed of multiple heterogeneous systems. This thesis also presents the different features of the HMI along with the results of a study

that analyzes the benefits of applying multiple modalities in the interface with the user.

One of the key objectives of this thesis was to demonstrate the applicability of the architecture for

civil missions with a real platform composed of multiple UAVs. Accordingly, the software implementation

of both the architecture and the HMI has been tested in simulation and finally validated in field

experiments with four autonomous helicopters in the framework of the AWARE Project funded by

the European Commission. The validation process was carried out in the facilities of the Protec-

Fire company (Iturri group) in Utrera (Spain) and included several multi-UAV missions for civil

applications in a simulated urban setting:

• Surveillance with multiple UAVs.

• Fire confirmation, monitoring and extinguishing.

• Load transportation and deployment with single and multiple UAVs.

• People tracking.

Finally, it is worth mentioning that the validation included a demonstration of the system to the

European Commission reviewers of the AWARE Project and other invited people from the industry.

1.2 Decision-making in Complex Systems: Key Components

Decision-making in intelligent systems deals with different mechanisms focusing on the autonomous and coherent processing of a mission (ranging from simple requests to complex sequences of high-level tasks), within either a centralized or a distributed system. Four main mechanisms can be identified:

Allocation arises in multi-agent systems (hence composed of several entities), where each of the agents is able to perform tasks in response to task requests. The issue is to decide which entity should be endowed with each given task to be performed. This requires the capability to assess the interest of assigning a certain agent to a given task. This operation is especially


difficult when the decision has to be made taking into account the current individual plans of the agents as well as the tasks left to be assigned. Providing such allocation capabilities within a centralized decisional system requires having all relevant information available inside this central system: the models, each agent's plan and the current state of task execution for each agent.
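In market-based schemes such as the Contract Net Protocol (CNP) used later in this thesis, one common way to assess this interest is for each agent to bid the marginal cost of inserting the announced task into its current plan. The sketch below is illustrative only (names and the straight-line distance cost are assumptions, not the thesis algorithms):

```python
import math

def plan_cost(start, plan):
    """Total straight-line distance to visit the plan's waypoints in order."""
    cost, pos = 0.0, start
    for wp in plan:
        cost += math.dist(pos, wp)
        pos = wp
    return cost

def marginal_bid(start, plan, task_wp):
    """Bid = cheapest increase in plan cost over all insertion points."""
    base = plan_cost(start, plan)
    best = float("inf")
    for i in range(len(plan) + 1):
        candidate = plan[:i] + [task_wp] + plan[i:]
        best = min(best, plan_cost(start, candidate) - base)
    return best

# Two UAVs bid for a task at (5, 0); the lowest bid wins the task
bids = {
    "U1": marginal_bid((0, 0), [(10, 0)], (5, 0)),  # already on its way: bid 0
    "U2": marginal_bid((0, 5), [], (5, 0)),         # would need a detour
}
winner = min(bids, key=bids.get)
```

Here the auctioneer simply awards the task to `winner`; in a distributed setting the same comparison is carried out through announce/bid/award messages.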

Task Planning is a central issue of the decisional components. It aims at building a sequence of tasks to perform in order to achieve a given mission. This mission can be stated either as:

• A simple definition of the state of the world that should be reached after a certain number of tasks have been performed, or

• A complex sequence of instructions or intermediary states of the world that the system should reach following a given (possibly partial) order.

Even within a single-agent system, planning is a complex process, since finding an optimal plan is generally computationally very hard with the usual market-available computing solutions. This is especially true when issues like time windows or uncertainties are considered in the computation of the plans.

Coordination is a process that arises within a system if given resources (either internal or external) are simultaneously required by several components of this system. In the case of a multi-robot system, a classic coordination issue is the sharing of space between the different robots, to ensure that each robot will be able to perform its plan safely and coherently with respect to the plans of the other robots. For example, if a mission involves complete coverage of a given area, the region should be divided among the available robots and cameras according to their relative capabilities (such as maximum speed, autonomy, field of view of the cameras, etc.).
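The area-division example can be made concrete with a minimal sketch. The strip-based scheme and the function name below are assumptions for illustration only (Chapter 4 develops the actual area partition algorithm): the span of a region is split into one strip per robot, with widths proportional to each robot's capability weight.

```python
def split_strips(x_min, x_max, weights):
    """Split [x_min, x_max] into one strip per robot, each strip's width
    proportional to the robot's relative capability weight."""
    total = sum(weights)
    strips, left = [], x_min
    for w in weights:
        right = left + (x_max - x_min) * w / total
        strips.append((left, right))
        left = right
    strips[-1] = (strips[-1][0], x_max)  # absorb rounding at the boundary
    return strips

# Three UAVs with relative capabilities 1:2:1 covering x in [0, 100]
strips = split_strips(0.0, 100.0, [1, 2, 1])
print(strips)  # [(0.0, 25.0), (25.0, 75.0), (75.0, 100.0)]
```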

Another important issue is the coordination of tasks between several robots: for instance, monitoring an event may require several synchronized perceptions of the event with convenient locations and orientations of the involved cameras.

Coordination of space sharing should be performed either continuously or iteratively during the execution of a mission, since contingent events may require revising and updating the plans at any time. Moreover, updating coordination information may also be required to improve the global plan of a group of robots whose current individual plans exhibit opportunities for improvement.

Supervision deals with the management (control) of task execution, in several ways:

• A first concern is simply to keep the system aware of the evolution of the tasks during their execution;

• A second concern is to detect possible task failures and (if possible) to react to such events in a way that will prevent the system from failing.
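As an illustration only (the names below are hypothetical and not the ODL implementation), both concerns can be sketched as a small status monitor that tracks per-task state and invokes a recovery handler on failure:

```python
from enum import Enum, auto

class Status(Enum):
    RUNNING = auto()
    DONE = auto()
    FAILED = auto()

def supervise(statuses, on_failure):
    """First concern: hold an up-to-date view of every task's status.
    Second concern: invoke a recovery handler for each failed task."""
    recovered = []
    for task_id, status in statuses.items():
        if status is Status.FAILED:
            recovered.append(on_failure(task_id))
    return recovered

# A failed task could, e.g., be re-announced so another UAV takes it over
result = supervise({"tau1": Status.DONE, "tau2": Status.FAILED},
                   on_failure=lambda t: f"re-announce {t}")
```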


1.3 Outline and Main Contributions

The thesis consists of nine chapters complemented with three appendices. A summary of the contents

of the chapters is presented here:

Chapter 2 presents a classification for the cooperation of multiple mobile autonomous vehicles

or robots taking into account the coupling between the vehicles and the type of cooperation. Then,

the research and development activities in load transportation, formations, swarming and intentional

cooperation are reviewed. Some of the material in this chapter has been published as book chapters:

• One chapter (Ollero and Maza, 2007b) of the book Multiple Heterogeneous Unmanned Aerial

Vehicles (Ollero and Maza, 2007c).

• Two chapters (Zanella et al., 2008; Baydere et al., 2008) of the book Cooperating Embedded

Systems and Wireless Sensor Networks (Banatre et al., 2008a).

In Chap. 3, the distributed architecture for the autonomous coordination and cooperation of

multiple Unmanned Aerial Vehicles (UAVs) is described. Part of the material from this chapter has

led to the publication of a paper in the Journal of Intelligent and Robotic Systems (Maza et al.,

2010b) and one chapter in the book Advances in Robotics Research (Kondak et al., 2009).

The internal architecture of each UAV is endowed with different modules that solve the usual

problems that arise during the execution of multi-purpose missions. The research work presented in

this thesis has been mainly focused on three relevant problems that must be solved in any multi-UAV platform:

• Complex task decomposition into elementary tasks (Chap. 4).

• Task allocation (Chap. 5).

• Conflict detection and resolution (Chap. 6).

The plan of each UAV is composed of several tasks with different levels of complexity. The

simplest tasks can be sent directly to the executive level of the UAV without further processing.

But other tasks may involve additional complex computations in order to decompose them into

elementary tasks directly executable by the UAV. Chapter 4 presents the decomposition process

of the complex tasks considered in the AWARE platform. Different parts of this chapter have been

published in the Sensors (Heredia et al., 2009) and the Robotics and Autonomous Systems (Caballero

et al., 2008a) journals, in one chapter (Maza and Ollero, 2007) of the book Distributed Autonomous

Robotic Systems 6 (Alami et al., 2007) and also in three international conferences (Maza and Ollero,

2004; Heredia et al., 2008; Caballero et al., 2008b).

Chapter 5 presents the algorithms developed to solve the multi-UAV task allocation problem

in a distributed manner. Different parts of the material in this chapter have led to a publication in

the Advanced Robotics journal (Viguria et al., 2010), two papers (Viguria et al., 2007; Viguria et al.,

2008) in the International Conference on Robotics and Automation (ICRA), one paper in the IEEE


International Workshop on Safety, Security, and Rescue Robotics (Maza et al., 2006) and another

paper in the Eurocontrol Innovative Research Workshop & Exhibition (Maza et al., 2007).

Once the UAVs have their plans decided, the potential conflicts among them should be solved.

Chapter 6 presents the approaches adopted to solve this problem and part of the material has been

published in the RIAI journal (Rebollo et al., 2009) and also in two conferences (Rebollo et al.,

2008; Rebollo et al., 2007).

On the other hand, the issues related to the Human Machine Interface (HMI) application play an important role in obtaining a usable and practical platform. Thus, Chapter 7 is devoted to describing the platform HMI and the benefits of applying multiple modalities in its design.

Part of this material has been published in two journals: the Journal of Intelligent and Robotic

Systems (Maza et al., 2010a) and the Sensors journal (Caballero et al., 2009).

Finally, Chapter 8 describes the demonstration scenario in which the previously presented

techniques and systems have been tested. The chapter presents results on different actual mis-

sions: surveillance with multiple UAVs; fire confirmation, monitoring and extinguishing; load trans-

portation and deployment with single and multiple UAVs; and people tracking. These results and

experiments have been detailed in a paper submitted to the Journal of Field Robotics.

The thesis is completed with Chap. 9, which discusses and concludes the results of the thesis

and in which the future work is summarized. Part of this material has been included as a chapter

(Ollero and Maza, 2007d) of the book Multiple Heterogeneous Unmanned Aerial Vehicles (Ollero

and Maza, 2007c).

Regarding the scientific output related to this thesis, it is worth mentioning that the author is

editor with Anibal Ollero of the book Multiple Heterogeneous Unmanned Aerial Vehicles (Ollero

and Maza, 2007c) published by Springer-Verlag in the series Springer Tracts in Advanced Robotics.

Finally, the work presented in this thesis has been mainly developed in the framework of the

AWARE Project (see next section), which received the second prize of the Robotics 2010 Awards from the EURON Robotics Network (more than 200 organizations, mainly academia), the EUROP Robotics Platform (mainly companies) and the EUnite Robotics Society.

1.4 Framework

The core of the work presented in this thesis has been performed in the frame of the European Project

AWARE (platform for Autonomous self-deploying and operation of Wireless sensor-actuator net-

works cooperating with AeRial objEcts, IST-2006-33579). The AWARE project ran from June 2006 to September 2009 and was probably the first research project in the world involving load transportation and deployment with multiple autonomous aerial vehicles for civil applications.

The general objective of AWARE1 was the design, development and demonstration of a platform

composed of heterogeneous systems able to operate in a distributed way in disaster management

scenarios without pre-existing infrastructure (or damaged infrastructure). Then, the platform should

comprise self-deployment capabilities, i.e. autonomous transportation and deployment of different

1http://www.aware-project.net


types of loads (small sensors, cameras, communication equipment, etc.) by means of one or several

helicopters.

The systems integrated in the platform included unmanned aerial vehicles, wireless sensor net-

works, ground fixed cameras, ground vehicles with actuation capabilities, etc. On the other hand,

the required robustness in the application scenario led to a distributed approach for the operation

which involved challenging issues such as distributed estimation of the location of objects of interest,

distributed task allocation and conflict resolution for the UAVs, etc.

To reach the above mentioned main goal, the project had the following technical objectives:

1. Develop a scalable and self-organizing ground sensor network integrating mobile nodes and including not only low-energy, lightweight sensors (WSN nodes) but also cameras and other sensors with higher energy requirements.

2. Develop the architecture and middleware required for the cooperation of the heterogeneous systems, including aerial vehicles, static sensor-actuator nodes, and mobile nodes carried by ground vehicles and people. The middleware makes the communication among these heterogeneous nodes transparent even if the network topology changes. Such a middleware adds a level of abstraction in order to simplify application development.

3. Develop network-centric functionalities for the operation. The project includes the development of techniques for the operation of the network, including surveillance, localization and tracking. Furthermore, reliable cooperation strategies based on the explicit consideration of the main sources of failure in the operation of the network are also considered. Thus, reliability tools based on the use of multiple UAVs and the sensor network are required.

4. Develop new cooperation techniques for tasks requiring strong interactions between vehicles

and between vehicles and the environment, such as lifting and transporting by means of the

cooperation of several UAVs carrying the same load.

The work presented in this thesis is mainly related to the second and third subgoals of the AWARE project.

In order to verify the success in reaching the objectives, the project considered validation in two different applications:

• Filming dynamically evolving scenes with mobile objects. In particular, cooperative object tracking techniques using the cameras on board the aerial vehicles cooperating with cameras on the ground are required. Furthermore, this activity involves sensors carried by mobile entities (people, vehicles, etc.) to obtain measurements that can also be displayed in the broadcast picture.

• Disaster Management/Civil Security (DMCS), involving the exploration of an area of interest, detection, precise localization, deployment of the infrastructure, monitoring the evolution of the objects of interest, and providing reactivity against changes in the environment and against the loss of the required network connectivity. Actuators, such as fire extinguishers, that generate actions in real time from the information provided by the sensors are also considered.


Figure 1.1: Common scenario for the AWARE Project experiments. Left: structure used to simulate a building. Right: tents used by the AWARE team.

Three general experiments, one per project year, were conducted in a common scenario in order to integrate the system and test the functionalities required for the above validations. These experiments involved the wireless ground sensor network with mobile nodes, the UAVs, the middleware, actuators, the network-centric cooperation of the UAVs with the ground sensor network, and the self-deployment functionality.

The common scenario chosen for the tests was part of the Protec-Fire company (Iturri group)

facilities (see Fig. 1.1) located in Utrera (Spain).

The AWARE platform included a total of five helicopters: four TUB-H helicopters (see Fig. 1.2(a)) developed by the Technische Universität Berlin (TUB) and one FC III E SARAH helicopter (Electric Special Aerial Response Autonomous Helicopter) developed by the Flying-Cam (FC) company (see Fig. 1.2(b)).

The AWARE experiments offered the framework to validate the distributed implementation of

the architecture described in this thesis.

Besides, the thesis has also been developed within the following projects, which have provided funding and equipment needed for the research:

• CROMAT (Coordinación RObots Móviles Aéreos y Terrestres; http://grvc.us.es/cromat/). Funded by the Dirección General de Investigación, DPI2002-04401-C03-03. The main objective of this project was the development of new methods and techniques for the cooperation of aerial and ground mobile robots.

• COMETS (Real-time Coordination and cOntrol of Multiple hETerogeneous unmanned aerial VehicleS, IST-2001-34304; http://grvc.us.es/comets). The COMETS project lasted from May 2002 until July 2005 and was probably the first research project on multiple autonomous aerial vehicles for civilian applications in Europe. The main objective of COMETS was to design and implement a distributed control system for cooperative detection and monitoring using heterogeneous UAVs.

(a) TUB-H model autonomous helicopters (Technische Universität Berlin). (b) FC III E SARAH autonomous helicopter (Flying-Cam company).

Figure 1.2: Autonomous helicopters used for the demonstration of the system.

• AEROSENS (AErial RObots and SENSor networks with mobile nodes for cooperative perception; http://grvc.us.es/aerosens). Funded by the Dirección General de Investigación, DPI2005-02293, 2005-2008. The project goal was the development of a system based on the use of aerial and ground robots and sensor networks for cooperative perception. The system is based on the joint application of Aerial Robotics and the technology of Wireless Sensor Networks.

• URUS (Ubiquitous networking Robotics in Urban Settings; http://www.urus.upc.es/). Funded by the European Commission (IST-045062). The URUS project focused on designing and developing a network of robots that, in a cooperative way, interact with human beings and the environment for tasks of guidance and assistance, transportation of goods, and surveillance in urban areas. Specifically, the objective was to design and develop a cognitive networked robot architecture that integrates cooperating urban robots, intelligent sensors (video cameras, acoustic sensors, etc.), intelligent devices (PDAs, mobile telephones, etc.) and communications.

• ATLANTIDA (Application of Leading Technologies to Unmanned Aerial Vehicles for Research and Development in ATM; http://www.cenit-atlantida.org). With a budget of 28.9 million euros (44% funded by the Spanish Center for Technological and Industrial Development, CDTI) and 2011 as the time

horizon, the ATLANTIDA project will tackle the technological and scientific challenges that need to be addressed for high levels of automation to be introduced into the management of complex airspaces. ATLANTIDA will explore an approach to automation in the management of air traffic that is seamlessly applicable to any air vehicle operations, including conventional aviation, civil and military UAVs and futuristic personal air transport systems. A remarkable aspect of the ATLANTIDA initiative is that it represents the main international R&D effort in the area of civil UAV operations and the third largest effort related to ATM, complementing the SESAR and NextGen initiatives.

• ROBAIR. Project funded by the Spanish Research and Development Program (DPI2008-03847). The main objective of this project is research and development into new methods and technologies to increase safety and reliability in Aerial Robotics. The project includes three main topics: safe and reliable controlled platforms, safe and reliable multi-UAV cooperation, and integration of the aerial robots with the ground infrastructure.

It is worth mentioning that this thesis has been partially developed during several stays at universities abroad: at the Automation Technology Laboratory of the Helsinki University of Technology (July-October 2001) and at the Robotics and Artificial Intelligence Group of the Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS) in Toulouse (September-December 2004). All these centers are currently involved (or have been involved) in the development of aerial and/or terrestrial field robots.

Project and group websites: http://grvc.us.es/robair, http://automation.tkk.fi/, http://www.laas.fr/RIA/RIA.html.en


Chapter 2

Cooperation and Networking of Multiple Mobile Autonomous Systems

This chapter presents a classification of different schemes for the cooperation of multiple mobile autonomous vehicles or robots, taking into account the coupling between the vehicles and the type of cooperation. Then, the research and development activities in load transportation, formations, swarming and intentional cooperation are reviewed. The chapter also considers mobile systems networked with other elements in the environment to support their navigation and, in general, their operation. The chapter refers to theoretical work but also emphasizes practical outdoor field demonstrations with ground and aerial vehicles.

2.1 Introduction

This chapter considers the cooperation of multiple autonomous vehicles or robots jointly performing missions such as search and rescue, reconnaissance, surveying, detection and monitoring of dangerous events, exploration and mapping, hazardous material handling, and others.

The coordination of a team of autonomous vehicles makes it possible to accomplish missions that no individual autonomous vehicle could accomplish on its own. Team members can exchange sensor information, collaborate to track and identify targets, perform detection and monitoring activities (Ollero and Maza, 2007c), or even actuate cooperatively in tasks such as the transportation of loads.

The advantages of using multiple autonomous vehicles or robots compared to a single powerful one can be categorized as follows:

• Multiple simultaneous interventions. A single robot or autonomous vehicle can sense or actuate at only a single point at any one time. In contrast, the members of a robot team can simultaneously collect information from multiple locations and exploit the information derived from multiple disparate points to build models that can be used to make decisions.


Moreover, multiple robots can simultaneously apply forces at different locations to perform actions that would be very difficult for a single robot.

• Greater efficiency. The execution time of missions such as exploration, target searching and others can be decreased by using multiple vehicles simultaneously.

• Complementarities of team members. A team with multiple heterogeneous vehicles or robots offers additional advantages due to the possibility of exploiting their complementarities. Thus, for example, ground and/or aerial vehicles with quite different characteristics and on-board sensors can be integrated in the same platform. For instance, the aerial vehicles could be used to collect information from locations that cannot be reached by the ground vehicles, while the ground members of the team could be equipped with heavy actuators; the aerial and ground vehicles could then be specialized in different roles. Even among the aerial vehicles themselves we can find complementarities: fixed-wing airplanes typically have longer flight range and endurance, whereas helicopters have vertical take-off and landing capability and better maneuverability, and can therefore hover to obtain detailed observations of a given target.

• Reliability. The multi-robot approach leads to redundant solutions offering greater fault tolerance and flexibility, including reconfigurability in case of failure of individual vehicles.

• Technology evolution. The development of small, relatively low-cost vehicles and mobile robots is fuelled by the progress of embedded systems together with developments in integration and miniaturization technologies. Furthermore, the progress in communication technologies during the last decade plays an important role in multiple-vehicle systems.

• Cost. A single vehicle with the performance required to execute some tasks could be an expensive solution compared to several low-cost vehicles performing the same task. This is clear for Unmanned Aerial Vehicles (UAVs), and particularly for small, light and low-cost UAVs, where constraints such as power consumption, weight and size play an important role.

Section 2.2 of this chapter deals with general concepts and contains a rough classification of multiple autonomous systems. Then, load transportation, formations, swarms and teams with intentional cooperation are examined in more detail in Sects. 2.3-2.6. Finally, the networking of mobile systems with other sensors and actuators in the environment is considered in Sect. 2.7.

2.2 General Concepts and Classification

In the first part of this section, the concepts of coordination and cooperation are briefly presented due

to their relevance in any multi-robot system. Then, a classification based on the coupling between

the vehicles is outlined.


2.2.1 Coordination and Cooperation

In platforms involving multiple mobile systems, the concepts of coordination and cooperation play an important role. In general, coordination deals with the sharing of resources, and both temporal and spatial coordination should be considered. Temporal coordination relies on synchronization among the different mobile objects and is required in a wide spectrum of applications. For instance, in the case of event monitoring, several synchronized perceptions of the event could be required.

On the other hand, the spatial coordination of mobile objects deals with the sharing of space among them, to ensure that each object will be able to operate safely and coherently with respect to the plans of the other objects and to potential dynamic and/or static obstacles. Some formulations are based on the extension of single-robot path planning concepts. The classical planning algorithms for a single robot with multiple bodies (Latombe, 1990) may be applied without adaptation for centralized planning (assuming that the state information from all the robots is available). The main concern, however, is that the dimension of the state space grows linearly with the number of robots. Complete algorithms require time that is at least exponential in the dimension, which makes them unlikely candidates for such problems. Sampling-based algorithms are more likely to scale well in practice when there are many robots, but the resulting dimension might still be too high. For such cases, there are also decoupled path planning approaches, such as prioritized planning, which considers one robot at a time according to a global priority.
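The prioritized scheme can be illustrated with a small grid-world sketch. This is a minimal toy, not any particular published planner: the 10x10 grid, the breadth-first search in space-time and all names are illustrative choices, and only same-cell (vertex) conflicts are checked, not swaps.

```python
from collections import deque

GRID = 10  # illustrative 10x10 grid world

def plan_path(start, goal, static_obs, reserved, max_t=50):
    """Breadth-first search in (cell, time) space. `reserved` maps a
    time step to the set of cells already claimed by higher-priority
    robots, which are treated as moving obstacles."""
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait or move
    frontier = deque([(start, 0, [start])])
    visited = {(start, 0)}
    while frontier:
        cell, t, path = frontier.popleft()
        if cell == goal:
            return path
        if t >= max_t:
            continue
        for dx, dy in moves:
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID):
                continue
            if nxt in static_obs or nxt in reserved.get(t + 1, set()):
                continue
            if (nxt, t + 1) not in visited:
                visited.add((nxt, t + 1))
                frontier.append((nxt, t + 1, path + [nxt]))
    return None

def prioritized_planning(robots, static_obs):
    """Plan one robot at a time; list order is the global priority.
    Each finished plan is reserved in space-time for later robots."""
    reserved, plans = {}, []
    for start, goal in robots:
        path = plan_path(start, goal, static_obs, reserved)
        if path is None:
            raise RuntimeError("no conflict-free path for this priority order")
        for t, cell in enumerate(path):
            reserved.setdefault(t, set()).add(cell)
        for t in range(len(path), 60):      # robot parks at its goal
            reserved.setdefault(t, set()).add(path[-1])
        plans.append(path)
    return plans
```

For two robots exchanging ends of a corridor, the second robot in the priority order detours or waits around the cells reserved by the first, which is exactly the decoupling (and the incompleteness) described above: a bad priority order can make the problem unsolvable even when a coupled planner would succeed.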

Cooperation can be defined as a “joint collaborative behavior that is directed toward some goal

in which there is a common interest or reward” (Barnes and Gray, 1991). According to (Cao et al.,

1997), given some task specified by a designer, a multiple-robot system displays cooperative behavior

if, due to some underlying mechanism (i.e., the “mechanism of cooperation”), there is an increase

in the total utility of the system.

The cooperation of heterogeneous mobile entities requires the integration of sensing, control and planning in an appropriate decisional architecture. These architectures can be either centralized or decentralized, depending on the assumptions about the scope and accessibility of the knowledge of the individual objects, their computational power, and the required scalability. A centralized approach will be relevant if the computational capabilities are compatible with the amount of information to process, and the exchange of data meets both the requirements of speed (up-to-date data) and expressivity (quality of information enabling well-informed decision-making).

On the other hand, a distributed approach will be relevant if the knowledge available within each distributed component is sufficient to make "coherent" decisions, and this required amount of knowledge does not burden the distributed components with the inconveniences of a centralized system (in terms of computational power and communication bandwidth requirements). One way to ensure that a minimal global coherence is satisfied within the whole system is to enable communication between the robots of the system, up to a level that guarantees that the decisions are globally coherent. One of the main advantages of the distributed approach is its superior suitability to deal with the scalability of the system.


2.2.2 Classification of Multi-vehicle Systems

Multi-vehicle and mobile robotic systems can be classified from different points of view. One possible

classification is based on the coupling between the individuals (see Fig. 2.1):

1. Physical coupling. In this case, the individuals are connected by physical links, and their motions are therefore constrained by forces that depend on the motion of the other individuals. The lifting and transportation of loads by several robots falls into this category and will be addressed in Sect. 2.3 of this chapter. The main problem is the coordinated motion control taking into account the force constraints. From the point of view of motion planning and collision avoidance, all the members of the team and the load can be considered as a whole. Furthermore, as the number of individuals is usually low, both centralized and decentralized control architectures can be applied.

2. Formations. The individuals are not physically coupled, but their relative motions are strongly constrained to keep the formation. The motion planning problem can then also be formulated considering the formation as a whole. Regarding the collision avoidance problem within the team, it is possible to embed it in the formation control strategy. Scalability properties are relevant when dealing with formations of many individuals, and decentralized control architectures are then usually preferred. Section 2.4 of the chapter will deal with formations and will also show how the same techniques can be applied to control coordinated motions of vehicles even if they are not in formation.

3. Swarms. These are homogeneous teams of many individuals whose interactions generate emergent collective behaviors. The resulting motion of the individuals does not necessarily lead to formations. Scalability is a main issue, and purely decentralized control architectures are then mandatory. Section 2.5 of the chapter will be devoted to swarms.

4. Intentional cooperation. The individuals of the team move according to trajectories defined by individual tasks that have to be allocated in order to perform a global mission (Parker, 1998). These robot trajectories are typically not geometrically related as in the case of formations. This kind of cooperation will be considered in Sect. 2.6 of this chapter. Here, problems such as multi-robot task allocation, high-level planning, plan decomposition and conflict resolution have to be solved taking into account the global mission to be executed and the different robots involved. In this case, both centralized and decentralized decisional architectures can be applied.

In the remaining sections of this chapter, each type of multi-vehicle system is discussed in further detail. But before proceeding with each one, a general model for each robot of the team is presented. This model can be particularized to fit any of the types of the above classification, as will be shown later.

2.2.3 General Model for each Robot in the Team

Figure 2.1: Graphical illustration of a possible classification for multiple mobile autonomous systems: a) Physical coupling, b) Formation, c) Swarms and d) Team executing tasks represented by stars.

Let us consider a team of robots that plan their actions according to a set of coordination and cooperation rules R. In particular, we assume that the set R includes k possible tasks T = {τ1, τ2, . . . , τk} that the robots can perform, and specifies n logical conditions requiring a change of task in the current plan. Let E = {e1, e2, . . . , en} be the set of discrete events associated with such conditions. Each task has a set of m parameters Π = {π1, π2, . . . , πm} defining its particular characteristics.

Robotic systems composed of a physical plant and a decisional and control system implementing this kind of cooperation rules R can be modeled as hybrid systems (Fierro et al., 2002; Chaimowicz et al., 2004; Fagiolini et al., 2007; Li et al., 2008). Figure 2.2 shows a simplified hybrid model that summarizes the different interactions that can be found in each member of the classification presented above. Let qi ∈ Q be a vector describing the state of the i-th robot taking values in the configuration space Q, and let τi ∈ T be the task τ that the i-th robot is currently executing. The robot's configuration qi has a continuous dynamics

q̇i = f(qi, ui, γi),   (2.1)

where ui ∈ U is a control input and γi ∈ Γ models the influence of the possible physical coupling with other robots and transported objects:

γ̇i = h(γi, qi, q̌i),   (2.2)

with the vector q̌i = (qi1, qi2, . . . , qiNc) containing the configurations of the Nc neighbors physically connected to the i-th robot. Then, according to the classification presented above in this section, γi ≠ 0 only if there is physical coupling among the autonomous systems.

Regarding ui, it is a feedback law generated by a low-level controller g : Q × Q^N × T × S → U, i.e.

ui = g(qi, q̄i, τi, Xi),   (2.3)

so that the robot's trajectory qi(t) corresponds to the desired current task τi, taking into account the configurations q̄i = (qi1, qi2, . . . , qiN) of the N neighbors with influence on the control of the i-th robot. This influence can be found, for example, in the control problem of swarms and formations


(see the above classification). On the other hand, equation (2.3) also includes the vector Xi ∈ S taking values in the environment model space S, which encompasses estimations about targets to be tracked, obstacles detected during the mission and/or known "a priori", threats to be avoided, etc.

The i-th robot’s current task has a discrete dynamics δ : T × E → T , i.e.

τi+ = δ(τi, e), (2.4)

where e ∈ E is an event (internal or external) requiring a change of task from τi to τi+, both from

the set of tasks T .Event activation is generated by

e = A(qi, ϵi, Xi, µ̄i),   (2.5)

where ϵi represents the internal events (such as changes in the execution states of the tasks) and µ̄i = (µi1, µi2, . . . , µiNm) is a vector containing the messages coming from the Nm robots cooperating with the i-th robot. Those messages are used, for example, in the negotiation processes involved in the intentional cooperation mechanisms, and are generated in each robot by a decisional module D (see Fig. 2.2). This module encompasses high-level reasoning and planning, synchronization among different robots, negotiation protocols for task allocation and conflict resolution purposes, task management and supervision, complex task decomposition, etc.
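A minimal single-robot instantiation of eqs. (2.1)-(2.5) can be sketched as follows. This is an illustrative toy, not the thesis implementation: the robot is a one-dimensional single integrator with two tasks ("goto" and "hold"), all names and gains are invented, and the coupling term γi and the neighbor configurations q̄i are omitted for brevity.

```python
DT = 0.05  # integration step

def f(q, u, gamma=0.0):                 # eq. (2.1): continuous dynamics
    return u + gamma                    # single integrator, no coupling

def g(q, task, X):                      # eq. (2.3): low-level control law
    if task == "goto":
        return 1.5 * (X["target"] - q)  # proportional law towards target
    return 0.0                          # "hold": keep position

def A(q, X):                            # eq. (2.5): event activation
    if abs(X["target"] - q) < 0.01:
        return "target_reached"         # internal event
    return None

DELTA = {("goto", "target_reached"): "hold"}   # eq. (2.4): task automaton

def run(q0, X, steps=400):
    """Closed loop: check events, update the discrete task, compute the
    control input and integrate eq. (2.1) with explicit Euler."""
    q, task, history = q0, "goto", []
    for _ in range(steps):
        e = A(q, X)
        task = DELTA.get((task, e), task)      # discrete dynamics δ
        u = g(q, task, X)
        q = q + DT * f(q, u)                   # Euler step of eq. (2.1)
        history.append(task)
    return q, task, history
```

Running `run(0.0, {"target": 1.0})` drives the robot towards the target under "goto" until the internal event fires and the automaton switches to "hold", illustrating how the continuous dynamics and the discrete task dynamics interact in the hybrid model.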

Regarding the perception of the environment, it is possible in some cases to have a database with "a priori" knowledge about the environment, including static obstacles, objects of interest and threats, which can be updated with the information gathered during the mission. On the other hand, object detection and localization (Merino, 2007) is usually required in many applications. The state x of an object to be tracked obviously includes its position p(t), and for moving objects it is also convenient to add the velocity ṗ(t) to the kinematic part of the state to be estimated. But further information is needed in general. For example, an important objective in some missions is to confirm that an object belongs to a certain class within a set Γ (for instance, in the case of fire alarm detection, this set will include the classes fire alarms and false alarms). Therefore, the state will include information regarding the classification of the object. Also, in certain applications, some appearance information could be needed to characterize an object, which can also help in the task of data association between different robots with different cameras. Additionally, this information could even include the 3D volume of the object, which can be added to the obstacles database. In general, the appearance information is static, and will be represented by θ.

The complete dynamic state to be estimated is composed of the states of all the No objects, and the number of objects can vary with time. The state estimated by the i-th robot at time t is then represented by the vector xi(t) = [xi1^T(t), . . . , xiNo^T(t)]^T. Each potential object m is defined by:

xim = [pim^T  ṗim^T  θim^T]^T   (2.6)
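The per-object state of eq. (2.6) and the stacked vector xi(t) can be illustrated with a minimal container. Class and field names are illustrative, the class belief is a plain probability dictionary, and a real tracker would of course also maintain the estimation covariances.

```python
class ObjectState:
    """Illustrative per-object state of eq. (2.6): kinematic part
    (position p, velocity p_dot), a class belief (e.g. fire alarm vs
    false alarm), and static appearance information theta."""

    def __init__(self, p, p_dot, class_belief, theta=None):
        self.p = list(p)                      # position p(t)
        self.p_dot = list(p_dot)              # velocity, time derivative of p
        self.class_belief = dict(class_belief)
        self.theta = theta                    # appearance (static)

    def predict(self, dt):
        """Constant-velocity prediction of the kinematic part."""
        self.p = [pi + dt * vi for pi, vi in zip(self.p, self.p_dot)]
        return self.p

def stacked_state(objects):
    """Stack the kinematic parts of all the objects into one vector,
    mirroring x_i(t) = [x_i1^T, ..., x_iNo^T]^T."""
    x = []
    for o in objects:
        x.extend(o.p)
        x.extend(o.p_dot)
    return x
```

The dictionary-valued class belief is where the classification information of the state lives; updating it from the measurements of several robots is the distributed estimation problem mentioned in Sect. 1.4.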


[Figure 2.2 here: block diagram of the hybrid dynamics H of the i-th robot, showing the on-board sensors producing the measurements zi; the perception module Xi = E(zi, z̄i), fed also by the environment database; the event generator e = A(qi, ϵi, Xi, µ̄i); the decisional module D exchanging the messages µi and µ̄i with other robots; the low-level controller ui = g(qi, q̄i, τi, Xi); the coupling dynamics γ̇i = h(γi, qi, q̌i); and the continuous dynamics q̇i = f(qi, ui, γi).]

Figure 2.2: General blocks and interactions considered in the hybrid model for each robot described in Sect. 2.2.

The information about the objects will be inferred from all the measurements: zi from the sensors on board the i-th robot, and z̄i gathered by the fleet of Ns robots that can communicate with the i-th robot (zj, j = 1, . . . , Ns). The latter vector can be completed with the measurements from sensors located around the environment, such as static surveillance cameras or nodes of Wireless Sensor Networks (WSNs) deployed in the area of interest.

In conclusion, the hybrid dynamics H of the i-th robot shown in Fig. 2.2 has z̄i, µ̄i, q̄i and q̌i as inputs, and zi, µi and qi as outputs. This diagram is not intended to be exhaustive or to cover all the possible architectures and existing systems. Instead, it aims to provide a general overview of the main blocks and interactions that can appear in the different members of the classification presented in this section.

2.3 Physical Coupling: Joint Load Transportation

The transportation of a single object by multiple mobile robots is a natural extension of the way several persons move a large and heavy object that cannot be handled by a single person.

The coordinated control of the motion of each vehicle should consider the forces induced by the other vehicles and by the load itself. Thus, in the scheme depicted in Fig. 2.2, there is a term γi ≠ 0 modelling those forces, which is taken into account in the design of the controller in eq. 2.3. It should also be mentioned that γi can be measured using on-board sensors. For example, in the case of several autonomous systems transporting a load using ropes, a force sensor in the rope can provide a measurement of the influence of the other robots and of the load being transported.

Each robot could be controlled around a common compliance center attached to the transported object. Under the assumption that each robot holds the object firmly with rigid links, the real trajectories of all the robots are equal to the real trajectory of the object. However, in some transportation problems this assumption cannot be applied, and the transported object moves with a dynamic behavior that can be expressed by means of eq. 2.2.

A suitable approach for the coordinated control is the leader-follower scheme, which will be described in more detail in the next section. In this scheme, the desired trajectory is the trajectory of the leader. The followers estimate the motion of the leader by themselves through the motion of the transported object. This approach can be extended to multiple followers and to robots with non-holonomic constraints (Kosuge and Sato, 1999) by means of decentralized compliant motion control algorithms.

This method has been implemented in an experimental system with three tracked mobile robots equipped with force sensors. In (Sugar and Kumar, 2002), the decentralized control of cooperating mobile manipulators is studied, with a designated lead robot being responsible for task planning. The control of each robot is decomposed (mechanically decoupled) into the control of the gross trajectory and the control of the grasp. The excessive forces due to robot positioning errors and odometry errors are accommodated by the compliant arms.
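The compliant leader-follower idea can be sketched in one dimension. This is an illustrative admittance-control toy, not any of the cited implementations: the gains, the point-mass follower and the virtual spring that stands in for the (slightly elastic) transported object are all invented for the example. The follower never receives the leader's plan; it only reacts to the interaction force measured at its grasp point, as with the force sensor in the rope mentioned above.

```python
DT = 0.01  # integration step [s]

def admittance_follower(force, v, damping=4.0, mass=2.0):
    """One Euler step of m*v_dot = f - d*v: the follower accelerates
    along the measured interaction force and settles at v = f / d."""
    v_dot = (force - damping * v) / mass
    return v + DT * v_dot

def simulate(leader_v=0.5, stiffness=50.0, steps=3000):
    """The leader moves at constant velocity; a virtual spring between
    the leader and follower positions models the shared object and
    produces the force the follower measures at its grasp point."""
    x_l = x_f = v_f = 0.0
    for _ in range(steps):
        x_l += DT * leader_v
        force = stiffness * (x_l - x_f)   # measured through the link
        v_f = admittance_follower(force, v_f)
        x_f += DT * v_f
    return v_f, x_l - x_f                 # follower velocity and lag
```

After the transient, the follower matches the leader's velocity without any explicit communication; the small steady-state lag (about damping x velocity / stiffness) encodes the force needed to drag the follower along, which is the essence of estimating the leader's motion through the transported object.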

In (Huntsberger et al., 2004), the distributed coordinated control of two rovers carrying a 2.5-meter-long mockup of a photovoltaic tent is presented and demonstrated as an example of the CAMPOUT behavior-based control architecture. Reference (Borenstein, 2000) details the OmniMate system, which uses a compliant linkage platform between two differential-drive mobile robots (Labmate) that provides a loading deck for up to 114 kg of payload.

The lifting and transportation of loads using multiple helicopters has also been a research topic for many years, motivated by the payload constraints of these vehicles and the high cost of helicopters with significant payload. In particular, lifting and transportation by two helicopters (twin lift) has been studied since the beginning of the nineties by means of nonlinear adaptive control (Mittal et al., 1991) and H∞ control (Reynolds and Rodriguez, 1992). In (Lim et al., 1999), an interactive Modeling, Simulation, Animation and Real-Time Control (MoSART) tool to study the twin lift helicopter system is presented. However, until recently, only simulation experiments could be found. In December 2007, the lifting and transportation of a load by means of three autonomous helicopters was demonstrated experimentally in the framework of the AWARE project by the Technische Universität Berlin (TUB). After that first successful test, the load transportation system was used again in 2009 to deploy a camera on the roof of a building with a height of 12 meters (see Fig. 2.3) in the framework of the same project.

¹ http://www.aware-project.net/videos/AWARE V4.avi


Figure 2.3: Load transportation system composed of three autonomous helicopters. Left: three autonomous helicopters from the Technical University of Berlin (TUB-H model) transporting a wireless camera to the top floor of a building with a height of 12 meters in May 2009. Right: detail of the load deployment device on-board each helicopter, whose scheme labels its magnetic encoders, cardan joint, force sensor, motor, rope mounting, bolt and release pin (the device is equipped with a force sensor to estimate the influence of the other helicopters and the load itself – term γi in eq. 2.2). Scheme courtesy of the Technische Universität Berlin (TUB).


2.4 Vehicle Formations and Coordinated Control

Vehicle formation is a basic strategy to perform multi-vehicle missions, including searching and surveying, exploration and mapping, active reconfigurable sensing systems, and space-based interferometry. An added advantage of the formation paradigm is that new members can be introduced to expand or upgrade the formation, or to replace a failed member. Thus, several applications of aerial, marine and ground vehicle formations have been proposed.

In formations, each member of the group of vehicles must keep user-defined distances from the other group members. The control problem consists of maintaining these distances, and consequently the configurations of the N neighbors, q_i = (q_{i1}, q_{i2}, ..., q_{iN}), should be taken into account in the control law (see eq. 2.3). Those configurations can be either received via inter-vehicle communication or estimated using the on-board sensors. In any case, formation control involves the design of distributed control laws under limited and disrupted communication, uncertainty, and imperfect or partial measurements.

Vehicle platooning can be considered a particular case consisting of a leader followed by vehicles in a single row. Both lateral and longitudinal control should be considered to keep a safe headway and lateral distance. The simplest approach relies on individual vehicle control using only the data received from the single vehicle immediately in front (Bom et al., 2005). In this method, sensor noise causes the regulation errors to grow from the first vehicle to the last, leading to oscillations. Inter-vehicle communication can be used to overcome this problem (Zhang et al., 1999). Then, the distance, velocity and acceleration with respect to the preceding vehicle are transmitted in order to predict the position and improve the controller, guaranteeing the stability of tight platoon applications (No et al., 2001). Inter-vehicle communication can also be used to implement global control strategies.
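The error-propagation issue discussed above can be illustrated with a minimal simulation (an illustrative sketch, not the controllers of Bom et al. or Zhang et al.): each vehicle regulates its gap to the vehicle immediately in front, optionally adding a feedback term on the communicated leader position as a crude stand-in for inter-vehicle communication. All gains, speeds and noise levels are invented for the example.

```python
import random

def simulate_platoon(n=5, gap=10.0, k=1.0, noise=0.0, use_leader_info=False,
                     steps=2000, dt=0.01, v_leader=5.0, seed=0):
    rng = random.Random(seed)
    x = [-i * (gap + 2.0) for i in range(n)]        # start with a 2 m spacing error
    for _ in range(steps):
        x[0] += v_leader * dt                        # leader drives at constant speed
        for i in range(1, n):
            measured = x[i - 1] - x[i] + rng.gauss(0.0, noise)
            v = v_leader + k * (measured - gap)      # regulate gap to the predecessor
            if use_leader_info:                      # communicated leader position
                v += k * (x[0] - x[i] - i * gap)
            x[i] += v * dt
    return [x[i - 1] - x[i] for i in range(1, n)]    # final inter-vehicle gaps

gaps = simulate_platoon()                            # noise-free, predecessor-only case
```

In the noise-free case all gaps converge to the desired spacing; with `noise > 0` the regulation errors accumulate toward the tail of the platoon, which is exactly the problem the communicated leader-state term mitigates.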

The leader-follower approach has also been used to control general formations where the desired positions of the followers are defined relative to the actual state of a leader. It should be noted that every formation can be decomposed into simpler leader/follower schemes. In this approach, some vehicles are designated as leaders and track predefined trajectories, while the followers track transformed versions of these trajectories according to given schemes. In the leader-follower approach, path planning only needs to be performed in the leader workspace. (Desai et al., 2001) presents control techniques based on keeping a desired distance and angle with respect to a single leader, or on maintaining specified distances from two vehicles or from one vehicle and an obstacle. In this work, a simple kinematic model is used and simulation results are presented. A Leader-to-Formation Stability (LFS) analysis is presented in (Tanner et al., 2004), along with different ways to improve the safety, robustness, and performance characteristics of this approach to the formation problem.
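A simplified leader-follower controller in the spirit of this approach can be sketched as follows (a hedged illustration, not the actual control law of Desai et al.): a unicycle follower regulates its position toward a point at a desired offset behind a leader moving on a straight line, using a proportional distance/bearing law with a velocity feedforward. The gains, offset, and initial poses are arbitrary assumptions.

```python
import math

def follow(leader_speed=1.0, d_des=2.0, steps=6000, dt=0.01, k_v=2.0, k_w=4.0):
    lx, ly = 0.0, 0.0                      # leader state, moving along the x axis
    x, y, th = -5.0, 1.5, 0.0              # follower pose (x, y, heading)
    for _ in range(steps):
        lx += leader_speed * dt
        gx, gy = lx - d_des, ly            # goal point: d_des directly behind the leader
        ex, ey = gx - x, gy - y
        rho = math.hypot(ex, ey)           # distance to the goal point
        alpha = math.atan2(ey, ex) - th    # bearing to the goal point
        alpha = math.atan2(math.sin(alpha), math.cos(alpha))
        v = (k_v * rho + leader_speed) * math.cos(alpha)  # feedback + feedforward
        w = k_w * alpha
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th = math.atan2(math.sin(th + w * dt), math.cos(th + w * dt))
    return math.hypot(lx - d_des - x, ly - y)   # final tracking error

final_error = follow()
```

The feedforward term lets the follower match the leader speed once the offset point is reached, so the tracking error settles to a small residual value.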

Other methods are based on a virtual leader, a moving reference point whose purpose is to direct,

herd and/or manipulate the vehicle group behavior. The lack of a physical leader among the vehicles

implies that any vehicle is interchangeable with any other in the formation. In (Leonard and Fiorelli,

2001) the virtual leader approach is combined with artificial potentials that define interaction control

forces between neighboring vehicles and are designed to enforce a desired inter-vehicle spacing. The

Page 47: personal.us.es · Agradecimientos Durante la lenta y multiples´ veces interrumpida evoluci´on de esta tesis he acumulado muchas deudas, y solamente tengo espacio para agradecer

2.4 Vehicle Formations and Coordinated Control 23

local asymptotic stability of the group geometric formation is also studied.
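The combination of a virtual leader with artificial potentials can be sketched as follows (illustrative spring-like potentials, not the potential functions of Leonard and Fiorelli): pairwise forces enforce a desired inter-vehicle spacing d0, while all vehicles are attracted to a common virtual leader. Gains, initial positions and the first-order vehicle model are invented for the example.

```python
import math

def step(pos, leader, d0=2.0, k_pair=1.0, k_lead=0.1, dt=0.05):
    new = []
    for i, (xi, yi) in enumerate(pos):
        fx = k_lead * (leader[0] - xi)          # attraction to the virtual leader
        fy = k_lead * (leader[1] - yi)
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy)
            f = -k_pair * (d - d0)              # spring: repels if closer than d0
            fx += f * dx / d
            fy += f * dy / d
        new.append((xi + fx * dt, yi + fy * dt))  # first-order (kinematic) vehicles
    return new

pos = [(0.0, 0.0), (1.0, 0.2), (0.3, 1.1)]      # arbitrary initial positions
leader = (5.0, 5.0)                             # virtual leader (moving reference point)
for _ in range(2000):
    pos = step(pos, leader)

dists = [math.dist(pos[i], pos[j]) for i in range(3) for j in range(i + 1, 3)]
centroid = (sum(p[0] for p in pos) / 3.0, sum(p[1] for p in pos) / 3.0)
```

Note that with these springs the spacing settles slightly below d0 (at d0/(1 + k_lead/(3 k_pair)) for three vehicles), since the leader attraction compresses the triangle until the spring repulsion balances it; the formation centroid converges to the virtual leader.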

The above approaches have limitations concerning the reliability of the leaders and the lack of explicit feedback from the followers to the leader. Thus, if a follower is perturbed by disturbances, the formation cannot be maintained. In (Egerstedt et al., 2001), the motion of the reference point (virtual vehicle) on a planned trajectory is governed by a differential equation containing the error feedback, in such a way that if both the tracking errors and the disturbances are within certain bounds, the reference point moves along the reference trajectory while the robots follow it within the look-ahead distance; otherwise, the reference point slows down and waits for the robots. The paper presents experiments using a radio-controlled car and a nomadic indoor robot.

In (Ogren et al., 2002) the same concepts are adopted and a Lyapunov function is used to define

a formation error. Thus, the formation feedback is incorporated in the virtual leaders. This paper

includes a simulation example with a linearized model of the robots.
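The error-feedback idea behind the virtual vehicle can be illustrated with a toy one-dimensional example (the speed law below is an invented stand-in for the differential equation of Egerstedt et al.): the reference point advances at nominal speed when the robot tracks well, and slows down, here exponentially in the tracking error, when the robot falls behind.

```python
import math

def run(v_nominal=1.0, k_robot=0.8, dt=0.01, steps=4000):
    s = 0.0          # position of the reference point along a straight path
    x = -3.0         # robot position on the same line (starts far behind)
    worst_gap = 0.0
    for _ in range(steps):
        err = abs(s - x)
        worst_gap = max(worst_gap, err)
        s += v_nominal * math.exp(-err) * dt   # reference slows when the error is large
        x += k_robot * (s - x) * dt            # robot chases the reference point
    return s, x, worst_gap

s_ref, x_robot, worst = run()
```

With this particular law the reference never stops completely, so a small steady lag remains where the slowed reference speed balances the robot's feedback gain; the qualitative behavior, waiting for a robot that falls behind, is what the cited approach formalizes.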

The stability of the formation has been studied by many researchers, who have proposed robust controllers to provide insensitivity to possibly large uncertainties in the motion of nearby agents and to transmission delays in the feedback path, and have considered the effect of quantized information. Graph theory has also been applied to analyze the behavior of the formation, including stability analysis, which is related to the eigenvalues of the graph matrix of the formation (Fax and Murray, 2002).

Practical applications of formation control should include a strategy for obstacle avoidance and reconfiguration of the formation. Large obstacles can be avoided by changing the trajectory of the whole formation to go around the obstacle or to pass through a narrow tunnel (Desai et al., 2001). If the obstacles are smaller than the size of the formation, the vehicles should be able to temporarily relax the formation until the obstacle is passed. In order to do so, the obstacle avoidance behavior should be integrated in the control strategy of the individual members of the formation. Hybrid control techniques have been applied to avoid obstacles and to solve the formation reconfiguration problem (Zelinski et al., 2003).

In (Ren and Beard, 2008) the virtual leader approach is used and tested indoors on a multi-robot platform consisting of five AmigoBots and two Pioneer 3-DX robots. The robots rely on encoder data for their position and orientation, and can communicate with each other using TCP/IP protocols. In the same work, the authors present another method that does not assume any explicit or virtual leader and only requires local neighbor-to-neighbor information exchange between the vehicles. The states of the vehicles, or their relative state deviations, are the coordination variables, and a consensus-based distributed cooperative control is applied to maintain the formation. Experimental results using three small indoor robots are presented. The position of each robot is measured using a combination of dead reckoning and an overhead camera system.
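The neighbor-to-neighbor scheme can be illustrated with a basic discrete-time consensus iteration (a generic protocol in the spirit of consensus-based coordination, not the specific algorithm of Ren and Beard): each vehicle repeatedly moves its coordination variable toward the values of its neighbors, using only local information. The ring topology and step size are arbitrary assumptions.

```python
def consensus_step(x, neighbors, eps=0.2):
    """One local update: move each value toward those of its listed neighbors."""
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

x = [0.0, 4.0, 1.0, 7.0, 3.0]                             # initial coordination variables
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}  # undirected ring topology
for _ in range(200):
    x = consensus_step(x, ring)
```

Because the update is symmetric and preserves the sum of the values, the common value reached is exactly the average of the initial coordination variables.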

In (Feddema and Schoenwald, 2001) decentralized control theory is applied to the control of multiple cooperative mobile robotic vehicles. In particular, controllability, observability and Lyapunov stability techniques are applied. The use of the proposed methods is illustrated on multi-robot formation control and perimeter surveillance problems.

A drawback of some of the above-mentioned techniques is the assumption that each robot has


the ability to measure the relative position of other robots that are immediately adjacent to it, or to know these positions by means of the communication system. There are also approaches based on computer vision (Vidal et al., 2004) to detect, track and estimate the leader motion on the image plane. In (Das et al., 1997) omnidirectional cameras are installed on the robots in such a way that each robot can compute estimates of the direction vectors to its teammates. In this paper, estimators and decentralized controllers that allow for changes in the formation are presented and demonstrated experimentally using adaptations of radio-controlled scale-model trucks.

There are also behavior-based methods, often inspired by biology, where formation behaviors like flocking and following are common. Different behaviors are defined as control laws for reaching and/or maintaining a particular goal. In (Balch and Arkin, 1998) behavior-based formation control is presented. These methods rely on empirical evaluation instead of analytical proofs, and the competing behaviors may occasionally create strange and unpredictable motions.

The close formation flight control of homogeneous teams of fixed-wing UAVs has also received attention in the last ten years. Large formations of small UAVs also offer benefits in terms of drag reduction, and thus an increased ability to maintain persistent coverage of a large area. Both linear (Giulietti et al., 2000) and nonlinear control laws (Schumacher and Singh, 2000) have been proposed and tested in simulation. However, practical implementations are still very scarce. In (How et al., 2004) a demonstration of two fixed-wing UAVs simultaneously flying the same flight plan (tracking way-points in open-loop formation) is reported. In the same paper, two UAVs were linked to the same receding horizon trajectory planner, and independent timing control was performed about the designed plans. In (Bayraktar et al., 2004) an experiment with two fixed-wing UAVs using the leader-follower approach is presented. The leader UAV was given a pre-determined flight plan, and the trajectory of the follower UAV was updated once per second in real time through the ground station to keep it at a fixed distance offset from the leader. Finally, (Gu et al., 2006) also presents experiments with two fixed-wing UAVs. A radio-control pilot maintains ground control of the leader aircraft, while the autonomous follower aircraft maintains a predefined position and orientation with respect to the leader. In this paper, the control structure has an outer-loop guidance controller in which a nonlinear dynamic inversion model is used for the forward and lateral control, and a linear control law is used for the vertical control. The resulting engine propulsion and desired pitch and roll are controlled in the inner loop by means of linear control laws.

Formation is not the only cooperative scheme for autonomous vehicles and mobile robots. Many applications, such as exploration and mapping, can be solved by a team of mobile vehicles. The cooperation of multiple mobile systems can be examined from the point of view of the intentionality to achieve a given mission. Then, according to (Parker, 1998), it is possible to distinguish between intentional cooperation and swarm-type cooperation. These approaches are considered in the next two sections.


2.5 Swarms

The key concept in swarms is that complex collective global behaviors can arise from simple

interactions between large numbers of relatively unintelligent agents. This swarm cooperation is

based on concepts from biology (Sharkey, 2006) and typically involves a large number of homogeneous

individuals, with relatively simple sensing and actuation, and local communication and control that

collectively achieve a goal. This can be considered as a bottom-up cooperation approach. It usually

involves numerous repetitions of the same activity over a relatively large area. The agents execute the

same program, and interact only with other nearby agents by measuring distances and exchanging

messages.

Thus, according to Fig. 2.1, the configurations of the N neighbors, q_i = (q_{i1}, q_{i2}, ..., q_{iN}), should be considered, as well as the messages µ_i = (µ_{i1}, µ_{i2}, ..., µ_{iN_m}) coming from the N_m robots cooperating with the i-th robot. Nevertheless, it should be mentioned that, depending on the particular communication and sensing capabilities of the robots in the swarm, simplified mechanisms based on partial or imperfect information could be required. For example, the estimation of the full vector q_i is not possible in many swarm-based systems, and partial information, such as the distances to the neighbors, is the only measurement available. The same applies to the messages exchanged, which can range from data packets sent through wireless links to simple visual signals based on lights of different colors.

Regarding the individual capabilities of each robot, (Sharkey, 2007) presents a simple taxonomy which distinguishes three different subareas based on the emphases and justifications for minimalism and individual simplicity: scalable, practical minimalist, and nature-inspired minimalist swarm robotics.

The bio-inspired motivation of swarm robotics can be found, for example, in (Zhang et al., 2007), which describes an adaptive task assignment method for a team of fully distributed mobile robots with initially identical functionalities in unknown task environments. The authors employ a simple self-reinforcement learning model inspired by the behavior of social insects to differentiate the initially identical robots into "specialists" of different task types, resulting in a stable and flexible division of labor. On the other hand, to deal with the cooperation problem of the robots engaged in the same type of task, the so-called Ant System algorithm was adopted to organize low-level task assignment.

In (Spears et al., 2004) a framework that provides distributed control of large collections of mobile physical agents is presented. The emphasis is on minimality and easy implementation. The vehicles self-organize into structured lattice arrangements using only local information. The paper includes simulations and an implementation on a team of seven simple small robots. In (Zarzhitsky et al., 2005) it is shown how the vehicles constitute a sensor network and remain in formation during obstacle avoidance while searching for an emitter that is actively ejecting a toxic chemical into the air. In (Nouyan et al., 2008), two distributed swarm intelligence control mechanisms are applied to a task that consists of forming a path between two objects which an individual robot cannot perceive simultaneously. All the controllers are able to form paths in complex obstacle environments and exhibit very good scalability, robustness, and fault-tolerance characteristics.


Another challenging distributed robotics concept related to swarms research is self-assembly, which lies at the intersection between collective and self-reconfigurable robotics. In (Gross et al., 2006), a so-called swarm-bot is a system composed of autonomous mobile robots called s-bots. S-bots can either act independently or self-assemble into a swarm-bot by using their grippers. The paper reports on experiments in which the process that leads a group of s-bots to self-assemble is studied, varying the number of s-bots (up to 16 physical robots), their starting configurations, and the properties of the terrain on which self-assembly takes place.

In (Kube and Zhang, 1993) different mechanisms that allow populations of behavior-based robots to collectively perform tasks without centralized control or explicit communication are presented. The box-pushing task is used as an example and demonstrated with three robots. Reference (Mataric, 1992) provides the results of implementing group behaviors such as dispersion, aggregation, and flocking on a team of robots.

In general, the above approaches deal with homogeneous teams without explicit consideration of task decomposition and allocation, performance measures, and individual efficiency constraints of the members of the team. Those aspects are considered in the intentional cooperation schemes described in the next section.

2.6 Intentional Cooperation Schemes

In this type of cooperation, each individual executes a set of tasks (subgoals that are necessary for achieving the overall goal of the system, and that can be achieved independently of other subgoals) explicitly allocated in order to perform a given mission in an optimal manner according to planning strategies (Gerkey and Mataric, 2003). The robots cooperate explicitly and with purpose, and this cooperation is thus defined as intentional cooperation (Parker, 1998).

Key issues in these systems include determining which robot should perform each task (the task allocation problem) so as to maximize the efficiency of the team, and ensuring the proper coordination among team members to allow them to successfully complete their mission. In order to solve the multi-robot task allocation problem, metrics to assess the relevance of assigning given tasks to particular robots are required. In (Gerkey and Mataric, 2004) a domain-independent taxonomy for the multi-robot task allocation (MRTA) problem is presented. In recent years, a popular approach to solve the MRTA problem in a distributed way has been the application of market-based negotiation rules. A usual implementation of those distributed negotiation rules (Botelho and Alami, 1999; Dias and Stentz, 2002; Gerkey and Mataric, 2002) is based on the Contract Net Protocol (Smith, 1980). In those approaches, the messages µ_i = (µ_{i1}, µ_{i2}, ..., µ_{iN_m}) coming from the N_m robots cooperating with the i-th robot are those involved in the negotiation process: announce a task, bid for a task, allocate a task, ask for the negotiation token, etc.
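A single-round market-based exchange can be sketched as follows (heavily simplified with respect to Smith's Contract Net Protocol: no task announcements over time, no token passing, and bids are simply travel costs). The robot names, positions and tasks are invented for illustration.

```python
import math

def bid(robot_pos, task_pos):
    """A robot's bid for a task: here, its travel cost to the task location."""
    return math.dist(robot_pos, task_pos)

def contract_net(robots, tasks):
    """Announce each task, collect bids, and award the task to the lowest bidder."""
    allocation = {}
    for task_id, task_pos in tasks.items():
        bids = {name: bid(pos, task_pos) for name, pos in robots.items()}
        allocation[task_id] = min(bids, key=bids.get)   # award to the best bid
    return allocation

robots = {"uav1": (0.0, 0.0), "uav2": (10.0, 0.0), "uav3": (5.0, 8.0)}
tasks = {"survey_A": (9.0, 1.0), "photo_B": (1.0, 1.0), "relay_C": (5.0, 7.0)}
allocation = contract_net(robots, tasks)
```

Real implementations negotiate tasks sequentially and let robots re-bid as their commitments change; this sketch only shows the announce/bid/award pattern.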

Once the tasks have been allocated, it is necessary to coordinate the motions of the vehicles, which

can be done by means of suitable multi-vehicle path/velocity planning strategies, as mentioned in

Sect. 2.2. The main purpose is to avoid potential conflicts among the different trajectories when

sharing the same working space. It should be mentioned that even if the vehicles are explicitly

cooperating through messages, a key element in many motion coordination approaches is the updated


Figure 2.4: Coordinated flights in the COMETS Project involving an airship and two autonomous helicopters.

information about the configurations of the N neighbors, q_i = (q_{i1}, q_{i2}, ..., q_{iN}). A formulation of the multi-robot collision avoidance problem, and different approaches that can be applied to solve it, can be found in (LaValle, 2006; Latombe, 1990).

On the other hand, teams composed of heterogeneous members involve challenging aspects, even for the intentional cooperation approach. In (Ollero and Maza, 2007c) the current state of the technology, existing problems and potentialities of platforms with multiple UAVs (with emphasis on systems composed of heterogeneous UAVs) are studied. This heterogeneity is two-fold: firstly, in the UAV platforms, looking to exploit the complementarities of the aerial vehicles, such as helicopters and airships; and secondly, in the on-board information processing capabilities, ranging from purely remotely teleoperated vehicles to fully autonomous aerial robots.

The multi-UAV coordination and control architecture developed in the COMETS Project was demonstrated for the autonomous detection and monitoring of fires (Ollero and Maza, 2007c) using two helicopters and one airship (see Fig. 2.4). Regarding teams involving aerial and ground vehicles, the CROMAT architecture also implemented cooperative perception and multi-robot task allocation techniques (Viguria et al., 2010) that have been demonstrated in fire detection, monitoring and extinguishing (see Fig. 2.5).

In this thesis, the AWARE Project² (Maza et al., 2010b) distributed architecture for the autonomous coordination and cooperation of multiple UAVs in civil applications is presented. The architecture is endowed with different modules that solve the usual problems that arise during the execution of multi-purpose missions, such as task allocation, conflict resolution, complex task decomposition, etc. One of the main objectives in the design of the architecture was to impose few

² http://www.aware-project.net


Figure 2.5: Mission executed by the CROMAT platform, composed of aerial and ground robots (CROMAT consortium, 2006).

requirements on the execution capabilities of the autonomous vehicles to be integrated in the platform. Basically, those vehicles should be able to move to a given location and activate their payload when required. Thus, heterogeneous autonomous vehicles from different manufacturers and research groups can be integrated in the architecture developed, making it easily usable in many multi-UAV applications. The software implementation of the architecture was tested in simulation and finally validated in field experiments with four autonomous helicopters. The validation process included several multi-UAV missions for civil applications in a simulated urban setting: surveillance applying the strategies for multi-UAV cooperative searching presented in (Maza and Ollero, 2007); fire confirmation, monitoring and extinguishing; load transportation and deployment with single and multiple UAVs; and people tracking.

Finally, cooperative perception can be considered an important tool in many applications based on intentional cooperation schemes. It can be defined as the task of creating and maintaining a consistent view of a world containing dynamic objects by a group of agents, each equipped with one or more sensors. Thus, a team of vehicles can simultaneously collect information from multiple locations and exploit the information derived from multiple disparate points to build models that can be used to take decisions. In particular, cooperative perception based on artificial vision has become a relevant topic in the multi-robot domain, mainly in structured environments (Thrun, 2001; Schmitt et al., 2002). In (Merino et al., 2006) cooperative perception methods for multi-UAV systems are proposed. Each robot extracts knowledge by applying individual perception techniques, and the overall cooperative perception is performed by merging the individual results. This approach requires knowing the relative position and orientation of the robots. In many outdoor applications it is assumed that the positions of all the robots can be obtained by means of GPS and broadcast through the communication system. However, if this is not the case, the robots should be capable of identifying and localizing each other (Konolige et al., 2003), which could be difficult with the on-board sensors. Another approach consists of identifying common objects in the scene. Then, under


certain assumptions, the relative pose displacement between the vehicles can be computed from these correspondences. In (Merino et al., 2006) this strategy has been demonstrated with heterogeneous UAVs. In the ANSER project (see for example (Sukkarieh et al., 2003a)), decentralized sensor data fusion using multiple aerial vehicles is also researched and tested with fixed-wing UAVs equipped with navigation and terrain sensors.

2.7 Mobile Systems Networked with Sensors and Actuators in the Environment

The development of wireless communication technologies in the last ten years has made possible the integration of autonomous vehicles with the environment infrastructure. In particular, the integration with wireless sensor and actuator networks is very promising. The benefits of this integration can be seen from two different points of view:

• The use of autonomous mobile robots to complement the information collected by the Wireless

Sensor Network (WSN), to perform as mobile “data mules”, to act as communication relays,

to improve the connectivity of the network and to repair it in case of malfunctioning nodes.

• The use of WSNs as an extension of the sensing capabilities of the robots. In this case, the information about the events in the environment is inferred both from the measurements z_i of the sensors on-board and from the measurements z_j, j = 1, ..., N_s, gathered by the N_s robots and nodes that can communicate with the i-th robot.
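The second point can be illustrated with a simple fusion rule (an illustrative inverse-variance weighting of scalar estimates, not a method from the cited works): the robot's own noisy measurement of some quantity, e.g. a temperature at an event location, is combined with the measurements gathered from nearby WSN nodes. The sensor values and variances are invented.

```python
def fuse(measurements):
    """Inverse-variance fusion of (value, variance) pairs -> (value, variance)."""
    weights = [1.0 / var for _, var in measurements]
    fused_var = 1.0 / sum(weights)                    # combined uncertainty
    fused_val = fused_var * sum(w * v for (v, _), w in zip(measurements, weights))
    return fused_val, fused_var

onboard = (24.0, 4.0)                     # robot's own sensor: noisier
network = [(22.0, 1.0), (23.0, 1.0)]      # two nearby WSN nodes: more precise
value, variance = fuse([onboard] + network)
```

The fused variance is always smaller than that of the best individual sensor, which is precisely the benefit of extending the robot's on-board sensing with the WSN measurements.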

Static wireless sensor networks have important limitations as far as the required coverage and the short communication range of the nodes are concerned. The use of mobile nodes can provide significant improvements: they provide the ability to dynamically adapt the network to environmental events and to improve the network connectivity in case of static node failures. Node mobility for ad-hoc and sensor networks has been studied by many researchers (Grossglauser and Tse, 2002; Venkitasubramaniam et al., 2004). Moreover, mobile nodes with single-hop communication and the ability to recharge batteries, or refuel, have been proposed as data mules of the network, gathering data while they are near fixed nodes and saving energy in static node communications (Jain et al., 2006). The coordinated motion of a small number of nodes in the network to achieve efficient communication between any pair of other mobile nodes has also been proposed.

An important problem is the localization of the nodes of a WSN. This remains an open problem because GPS-based solutions in all the nodes are usually not viable due to the cost, the energy consumption and the satellite visibility from each node. In (Caballero et al., 2008a) a probabilistic framework for the localization of an entire WSN based on a mobile robot is presented. The approach takes advantage of the good localization capabilities of the robot and its mobility to compute estimates of the static node positions by using the signal strength of the messages exchanged with the network.
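The signal-strength-based idea can be sketched as follows (a simplified stand-in for the probabilistic method of Caballero et al.): RSSI readings taken from several known robot poses are converted to ranges with a log-distance path-loss model, and the node position is then obtained by linearized least-squares trilateration. The path-loss parameters (P0, n) and the geometry are invented for the example.

```python
import math

def rssi_to_range(rssi, p0=-40.0, n=2.0):
    """Invert the log-distance model rssi = p0 - 10*n*log10(d)."""
    return 10 ** ((p0 - rssi) / (10.0 * n))

def trilaterate(anchors, ranges):
    """Least-squares position from >= 3 known (x, y) poses and range estimates."""
    (x1, y1), r1 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        rows.append((2.0 * (xi - x1), 2.0 * (yi - y1)))       # linearized system
        rhs.append(xi * xi + yi * yi - x1 * x1 - y1 * y1 - ri * ri + r1 * r1)
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return ((atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det,   # Cramer's rule
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det)

node = (3.0, 4.0)                                            # ground-truth node position
poses = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (8.0, 8.0)]   # robot measurement poses
readings = [-40.0 - 20.0 * math.log10(math.dist(p, node)) for p in poses]
estimate = trilaterate(poses, [rssi_to_range(r) for r in readings])
```

With noisy RSSI readings the same least-squares machinery yields an estimate rather than the exact position, which is why the cited work wraps the measurement model in a probabilistic filter.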

In (Banatre et al., 2008b) a survey of existing methods for mobile node localization and navigation is presented. Several wheeled robotic mobile nodes exist, mainly low-cost small mobile robots


such as MICAbot (University of Notre Dame), CotsBots (UC Berkeley), Robomote (USC) and Millibots (CMU). There are also algorithms to guide the mobile node in reaction to sensorial stimuli, such as the diffusion-based technique to determine new sampling locations (Moore et al., 2004), and the random walk algorithm to guide the node to a focus of interest. The potential field guiding algorithm can be used to guide the node across the network along a safe path, away from the type of danger that can be detected by the sensors. The so-called probabilistic navigation algorithm is used to guide a mobile robot assuming that neither a map nor GPS is available (Batalin et al., 2004). This algorithm was applied to guide a mobile robot in an indoor environment.

The use of mobile nodes has an impact on the quality of service of the sensor network. In (Bisnik et al., 2007) quality metrics are related to the motion strategies of the mobile nodes, and two problems are addressed: computing the trajectory and minimum speed of a single mobile node to satisfy a bound on the event loss probability, and computing the minimum number of sensors with fixed speed required to satisfy this bound.

In (Batalin and Sukhatme, 2007), coverage, exploration and deployment of sensor nodes are addressed by means of a single algorithm called Least Recently Visited (LRV). In this work, a robot which can carry sensor nodes as payload is considered. As the robot moves, it deposits nodes into the environment based on certain local criteria. These nodes, once placed in the environment, provide navigation directions for the robot. Simulation experiments with the LRV algorithm are presented.
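The LRV rule can be illustrated with a toy grid version (an invented simplification of the algorithm of Batalin and Sukhatme): each visited cell plays the role of a deployed node that remembers when each outgoing direction was last suggested, and always sends the robot along the least recently used one. The grid world and tie-breaking order are illustrative assumptions.

```python
def lrv_walk(width=3, height=3, steps=60):
    """Robot walks a grid; each cell suggests its least recently used direction."""
    last_used = {}                          # (cell, direction) -> last step index
    moves = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
    pos = (0, 0)
    visited = {pos}
    for t in range(steps):
        options = []
        for name, (dx, dy) in sorted(moves.items()):     # deterministic tie-break
            nxt = (pos[0] + dx, pos[1] + dy)
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height:
                options.append((last_used.get((pos, name), -1), name, nxt))
        _, name, nxt = min(options)          # least recently used direction wins
        last_used[(pos, name)] = t
        pos = nxt
        visited.add(pos)
    return visited

cells = lrv_walk()
```

Even with this purely local rule the robot ends up covering the whole grid, which is the coverage/exploration property the cited work exploits for node deployment.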

However, in many scenarios the motion of mobile nodes installed on ground vehicles or carried by persons is very constrained, due to the characteristics of the terrain or the dangerous conditions involved, such as in civil security and disaster scenarios. The cooperation of aerial vehicles with the ground wireless sensor network offers many potentialities. The use of aircraft as data sinks, flying over the fixed sensor networks following a predictable pattern in order to gather data from them, has been proposed by several authors in the WSN community. In (Corke et al., 2003) an algorithm for path computation and following is proposed and applied to guide the motion of an autonomous helicopter flying very close to the sensor nodes deployed on the ground.

It should be noted that flight endurance and range of the currently available low cost UAVs is very

constrained (Ollero and Merino, 2004). Moreover, reliability and fault-tolerance is a main issue in

the cooperation of the aerial vehicles. Furthermore, these autonomous vehicles need communication

infrastructure to cooperate or to be tele-operated by humans in emergency conditions. Usually this

infrastructure is not available, or the required communication range is too large for the existing

technology. Then, the deployment of this communication infrastructure is a main issue. Similarly, most wireless sensor network projects assume that the network has been previously fully deployed, without addressing the problems to be solved when the deployment is difficult. Moreover, during the operation of the network, the infrastructure could be damaged, or the initial deployment may simply not be efficient enough. The problem then becomes repairing the coverage or the connectivity of the network by adding suitable sensor and communication elements.

In (Corke et al., 2004), the application of an autonomous helicopter for the deployment and repairing of a wireless sensor network is proposed. This approach has also been followed in the AWARE

project (Maza et al., 2010b), whose platform has self-deployment and self-configuration features for


Figure 2.6: Sensor deployment from an autonomous helicopter in the AWARE Project experiments carried out in 2009.

the operation in sites without sensing and communication infrastructure. The deployment includes

not only wireless sensors (see Fig. 2.6) but also heavier loads, such as communication equipment, that require transportation by several helicopters (see Fig. 2.3).

Finally, it should be noted that communication and networking also play an important role in

the implementation of control systems for multiple unmanned vehicles. Then, the next paragraphs

are devoted to this topic.

Single vehicle communication systems usually have an unshared link between the vehicle and

the control station. The natural evolution of this communication technique towards multi-vehicle

configurations is the star-shaped network configuration. While this simple approach to vehicle intercommunication may work well with small teams, it may not be practical or cost-effective as

the number of vehicles grows. Thus, for example, in multi-UAV systems there are some approaches

of a wireless heterogeneous network with radio nodes mounted at fixed sites, on ground vehicles, and

in UAVs. The routing techniques allow any two nodes to communicate either directly or through

an arbitrary number of other nodes which act as relays. When autonomous teams of UAVs must operate in remote regions with little or no infrastructure, using a mesh of ground stations to support communication between the mobile nodes is not possible. In that case, networks can be formed in an ad hoc fashion, and information exchanges occur only via the wireless networking equipment carried

by the individual UAVs. Some autonomous configurations (such as close formation flying) result in

relatively stable topologies. However, in others, rapid fluctuations in the network topology may occur

when individual vehicles suddenly veer away from one another or when wireless transmissions are

blocked by terrain features, atmospheric conditions, signal jamming, etc. In spite of such dynamically

changing conditions, vehicles in an autonomous team should maintain close communications with

others in order to avoid collisions and facilitate collaborative team mission execution. In order


to reach these goals, two different approaches have been adopted. One, closer to the classical network architecture, establishes a hierarchical structure and routes data in the classical down-up-down manner, traversing as many levels of the hierarchy as needed to reach the destination. The other

prospective direction to assist routing in such an environment is to use location information provided by positioning devices, such as global positioning systems (GPS), thus using what are called location-aware protocols. These two techniques are compatible and can be mixed. For example, some of the

levels in a hierarchical approach could be implemented using location-aware methods.

2.8 Conclusions

The concepts of coordinated and cooperative control of multiple vehicles have received significant attention in recent years in the control, robotics, artificial intelligence and communication communities. The implementation of these concepts involves integrated research in the control, decision and communication areas.

This chapter has first reviewed the existing work on the transportation of a single load by different

autonomous vehicles. Both ground mobile robots and helicopters were considered. In order to solve this problem, control theory based on models of the vehicles and their force interactions has been applied.

The chapter also studied formation control. In this problem the application of control theory

based on models of the vehicles is also dominant. However, behavior-based approaches that do not use these models have also been demonstrated.

The work on swarms has also been reviewed. Approaches inspired by biology and by multi-agent systems are common. The problems are typically formulated for large numbers of individuals, but up to now the practical demonstrations have involved only a few physical robots.

The intentional task-oriented cooperation of robotic vehicles, possibly heterogeneous, has also been studied. The multi-robot task allocation problem and path planning techniques play an important role here. Cooperative perception has also been included.

Finally, the chapter has explored the integration and networking of one or many autonomous

vehicles with sensors and actuators in the environment pointing out the benefits of this integration.

The self-deployment of the network and motion planning to maintain quality of service are promising approaches that have been preliminarily studied but still require significant attention.

The chapter has also analyzed communication and networking technologies that play an important role in the practical implementation of multi-vehicle systems. The integrated consideration of communication and control problems is another promising research and development topic.

The work presented in this thesis is mainly related to the intentional task-oriented cooperation.

Thus, in the next chapter, the architecture adopted for coordination and cooperation is presented along with the task model used. Nevertheless, the load transportation task is also considered, as well as the integration and networking of the UAVs with sensors and actuators in the environment.


Chapter 3

Models and Decisional Architecture

In this chapter, the asynchronous system and network models adopted to describe the distributed

decision algorithms in the next chapters are presented.

In addition, Chapter 2 included a classification based on the coupling between the robots of a team. Taking that classification into account, the work presented in this thesis mainly belongs to the intentional cooperation type (see Sect. 2.6). Thus, from the general scheme depicted in Fig. 2.2, the block that will be mainly discussed throughout this document is D, which was the component in charge

of the decision making process. The architecture adopted for this block in the AWARE project will

be presented in this chapter along with the task model.

In the next section, the trade-off between centralized and decentralized decision making is presented, since this is a key issue in any multi-robot system.

3.1 Centralized / Decentralized Decision

A centralized decision configuration (with a minimal distributed supervision) is compatible (at the least), and even complementary, with a configuration endowed with fully distributed decision

capabilities. Section 1.2 enumerated the main components of the “Decision”: allocation, planning,

coordination and supervision. Any of them can be developed either within a central decisional

component or between several distributed components (e.g. the different robots within the system).

However, several trade-offs should be considered regarding the decision:

Knowledge scope and accessibility. A preliminary requirement to enable decisional aspects within a central component is to permanently ensure the availability of (relevant) up-to-date knowledge within that component. This is a heavy requirement, since it implies centralizing all decision-related knowledge from every component of the system and refreshing it continuously in order to perform the decisional processes. However, assuming that this requirement can be fulfilled, it provides the opportunity to make better-informed decisions, and hence to manage the mission operations in a more efficient way.

In a system distributing the decision, the local scope of the available knowledge is a double-edged issue: the only available knowledge is that related (or close) to the considered component. This knowledge is usually far easier to access and refresh, and hence such a system can ensure that "up-to-date" information is used to compute decisions. However, the drawback is that local (hence partial) knowledge leads to decisions that may turn out to be incoherent with respect to the whole system.

Computational power and scalability. In a system made of several robots, the amount of data to process is large: processing this knowledge in a centralized way obviously requires powerful computational means. Moreover, such centralized computation reaches its limits as the number of robots increases: a centralized system cannot scale to an arbitrary number of robots.

In contrast, a distributed model of the decision within a multi-robot system can remain viable with an increasing number of robots, since the complexity is preserved as the team grows: each robot still deals only with its local, partial knowledge of the system and evaluates only local information for its decision making, even if these decisions are of course not as well informed as those of a centralized decision maker.

Whether a centralized or a distributed decision scheme is adopted in a multi-agent system, the respective drawbacks of these approaches can be mitigated by constraining or extending their framework, respectively. Thus, a centralized approach will be relevant if:

• The computational capabilities are compatible with the amount of information to process.

• The exchange of data meets both the requirements of speed (up-to-date data) and expressivity

(quality of information enabling well-informed decision-taking).

On the other hand, a decentralized approach will be relevant if:

• The available knowledge within each distributed component is sufficient to make “coherent” decisions.

• This required amount of knowledge does not burden the distributed components with the

inconveniences of a centralized system (in terms of computation power and communication

bandwidth requirements).

One way to ensure that a minimal global coherence will be satisfied within the whole system is

to enable communications between the robots of the system. This can be done up to a level that guarantees that the decision is globally coherent, while taking care not to end up with an intractable configuration of n different centralized systems.

Instead of definitively choosing one of those extreme configurations of the decision's distribution, alternatives lie in hybrid solutions that may best fit the requirements of a


heterogeneous system like AWARE. As noticed earlier in Sect. 1.2, the “Decision” is composed of

several aspects, hence some of them can be centralized, while others may be complementarily exploited in a distributed way.

The next section describes the general model adopted throughout this thesis to describe the different

distributed decision making processes.

3.2 Asynchronous Models

In Chap. 2, a classification based on the coupling between the individuals of the team was provided.

From this classification, the work presented in this thesis mainly belongs to the intentional cooperation type (see Sect. 2.6). Then, from the model depicted in Fig. 2.2, the block that will be mainly discussed throughout this document is D, the component in charge of the decision making process.

A general model for the different distributed algorithms implemented in this block will be provided

in this section.

3.2.1 Asynchronous System Model

The purpose of this section is to introduce a formal model for asynchronous computing, the in-

put/output (I/O) automaton model (Lynch, 1997). This is a very general model, suitable for

describing almost any type of asynchronous concurrent system, including asynchronous network

systems such as the AWARE platform. By itself, the I/O automaton model has very little structure,

which allows it to be used for modelling many different types of distributed systems. What the

model does provide is a precise way of describing and reasoning about system components (e.g.,

processes or communication channels) that interact with each other and that operate at arbitrary

relative speeds.

I/O Automata

An I/O automaton models a distributed system component that can interact with other system

components. It is a simple type of state machine in which the transitions are associated with named

actions. The actions are classified as either input, output, or internal. The inputs and outputs are

used for communication with the automaton’s environment, while the internal actions are visible

only to the automaton itself. The input actions are assumed not to be under the automaton’s control

– they just arrive from the outside – while the automaton itself specifies what output and internal

actions should be performed.

An example of a typical I/O automaton is a process in an asynchronous distributed system. The

interface of a typical process automaton with its environment is depicted in Fig. 3.1. The automaton

Pi is drawn as a circle, with incoming arrows labelled by input actions and outgoing arrows labelled

by output actions. Internal actions are not shown. The depicted automaton receives inputs of the

form init(v)i from the outside world, which are supposed to represent the receipt of an input value

v. It conveys outputs of the form decide(v)i, which are supposed to represent a decision on v. In

order to reach a decision, process Pi may want to communicate with other processes using a message


system. Its interface to the message system consists of output actions of the form send(m)i,j , which

represents process Pi sending a message with contents m to process Pj , and input actions of the

form receive(m)j,i, which represents process Pi receiving a message with contents m from process

Pj . When the automaton performs any of the indicated actions (or any internal action), it may also

change its state.

This scheme of a generic process in an asynchronous distributed system can be adapted easily to

represent any process on-board a robot that is interacting with other robots and the environment.


Figure 3.1: A process I/O automaton.

Another example of a typical I/O automaton is a FIFO message channel. A typical channel

automaton, named Ci,j is depicted in Fig. 3.2. Its input actions are of the form send(m)i,j , and its

outputs are of the form receive(m)i,j . In the usual way of describing a distributed system using I/O

automata, a collection of process automata and channel automata are composed, matching outputs of

one automaton with same-named inputs of other automata. Thus, a send(m)i,j output performed

by process Pi is identified with (i.e., performed together with) a send(m)i,j input performed by

channel Ci,j . The important issue to note is that the various actions are performed one at a time,

in an unpredictable order. This is in contrast with synchronous systems, in which all the processes

send messages at once and then all receive messages at once, at each round of computation.


Figure 3.2: A channel I/O automaton.

Formally, the first element that gets specified for an I/O automaton is its “signature”, which is simply a description of its input, output and internal actions. A universal set of actions will be assumed. The signature S of an I/O automaton A, S = sig(A), is a triple consisting of three pairwise disjoint sets of actions

S = sig(A) = (in(S), out(S), int(S)) with in(S) ∩ out(S) = in(S) ∩ int(S) = out(S) ∩ int(S) = ∅ , (3.1)


where in(S), out(S) and int(S) are the input, output and internal actions, respectively. The following definitions will also be considered:

• External actions: ext(S) = in(S) ∪ out(S)

• Locally controlled actions: local(S) = out(S) ∪ int(S)

• External signature (or external interface): extsig(A) = (in(S), out(S), ∅ )

Finally, all the actions of S will be denoted by acts(S).

An I/O automaton A, which will be simply called an automaton, consists of five components:

• sig(A), a signature

• states(A), a (not necessarily finite) set of states

• start(A), a nonempty subset of states(A) known as the initial states

• trans(A), a state-transition relation, where

trans(A) ⊆ states(A) × acts(sig(A)) × states(A) ,

which must have the property that for every state s and every input action π, there is a transition (s, π, s′) ∈ trans(A)

• tasks(A), a task partition, which is an equivalence relation on local(sig(A)) having at most

countably many equivalence classes.
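To make the definition concrete, the five components can be sketched in Python. This is only an illustrative sketch: the class names (Signature, Automaton), the finite set-based representation, and the input_enabled helper are our own choices, not part of Lynch's model.

```python
from dataclasses import dataclass

# Illustrative sketch of the five-component I/O automaton definition.
# Class, field and method names are ours, not Lynch's notation.

@dataclass(frozen=True)
class Signature:
    inputs: frozenset        # in(S)
    outputs: frozenset       # out(S)
    internal: frozenset      # int(S)

    def ext(self):           # external actions: in(S) ∪ out(S)
        return self.inputs | self.outputs

    def local(self):         # locally controlled actions: out(S) ∪ int(S)
        return self.outputs | self.internal

    def acts(self):          # all actions of S
        return self.inputs | self.outputs | self.internal

@dataclass
class Automaton:
    sig: Signature
    states: frozenset        # states(A) (finite here, for illustration only)
    start: frozenset         # start(A), a nonempty subset of states(A)
    trans: frozenset         # trans(A) ⊆ states(A) × acts(sig(A)) × states(A)
    tasks: tuple             # tasks(A): partition of the locally controlled actions

    def input_enabled(self):
        """Every input action must be enabled in every state."""
        return all(
            any(t[0] == s and t[1] == a for t in self.trans)
            for s in self.states for a in self.sig.inputs
        )
```

For instance, a one-slot channel with input action "send" and output action "recv" fits this mold, and input_enabled checks the input-enabling property required of every I/O automaton.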

From now on, acts(A) will be used as a shorthand for acts(sig(A)), and similarly in(A), and so on. Then, for example, we say that A is closed if it has no inputs, that is, if in(A) = ∅.

The signature allows for more general types of actions than just the message-sending and message-

receipt actions modelled in a synchronous system model. As for the set of process states in the

synchronous network model, the set of states need not be finite. This generality is important since it allows us to model systems that have unbounded data structures such as counters and unbounded

length queues. As in the synchronous case, it is possible to have multiple start states, which makes it possible to include some input information in them.

An element (s, π, s′) ∈ trans(A) is called a transition, or step, of A. The transition (s, π, s′)

is called an input transition, output transition, and so on, based on the type of action π (input,

output, etc.). Unlike in the synchronous model, the transitions are not necessarily associated with

the receipt of a collection of messages; they can be associated with arbitrary actions.

If for a particular state s and action π, A has some transition of the form (s, π, s′), then we say

that π is enabled in s. Since every input action is required to be enabled in every state, automata are

said to be input-enabled. The input-enabling assumption means that the automaton is not able to

somehow “block” input actions from occurring. This assumption means, for example, that a process


has to be prepared to cope in some way with any possible message value when a message arrives.

We say that state s is quiescent if the only actions that are enabled in s are input actions.

The input-enabling property may seem a strong restriction on a general model, since many system components are designed to expect certain inputs to occur only at designated times. For

example, Chapter 6 presents a distributed method to guarantee mutual exclusion when a UAV is traversing a given path. The model adopted might expect a UAV not to submit two requests in

a row, before the system has granted the first request. However, there are other ways of modelling

such restrictions on the environment, without requiring that the environment actually be barred from

performing the input. In the example above, we might say that the environment is not expected to

submit a second request before receiving a response to the first, but that we do not constrain the

behavior of the system in the case of such an unexpected input. Or we might require the system to

detect the unexpected input and respond with an error message.

There are two major advantages of having the input-enabling property. First, a serious source of

errors in the development of system components is the failure to specify what the component does

in the face of unexpected inputs. Using a model that requires consideration of arbitrary inputs is

helpful in eliminating such errors. And second, use of input-enabling makes the basic theory of the

model work out nicely; in particular, input-enabling makes it reasonable to use simple notions of

external behavior for an automaton, based on sequences of external actions.

The fifth component of the I/O automaton definition, the task partition tasks(A), should be

thought of as an abstract description of “tasks”, or “threads of control”, within the automaton.

This partition is used to define fairness conditions on an execution of the automaton – conditions

that say that the automaton must continue, during its execution, to give fair turns to each of

its tasks. This is useful for modelling a system component that performs more than one job –

for example, participating in an ongoing algorithm while at the same time periodically reporting

telemetry and status information. It is also useful when several automata are composed to yield

one larger automaton representing the entire system. The partition is then used to specify that the

automata being composed all continue to take steps in the composed system. We will usually refer

to the task-partition classes as just tasks.

If we say that a task C is enabled in a state s, it means that some action in C is enabled in s.

In the following, the transition relation is described in a precondition-effect style. This style groups

together all the transitions that involve each particular type of action into a single piece of code.

The code specifies the conditions under which the action is permitted to occur, as a predicate on

the pre-state s. Then it describes the changes that occur as a result of the action, in the form of a

simple program that is applied to s to yield s′. The entire piece of code gets executed indivisibly,

as a single transition. Grouping the transitions according to their actions tends to produce concise

code, because the transitions involving each action typically involve only a small portion of the state.

Programs written in precondition-effect style normally use only very simple control structures.

This tends to make the translation from programs to I/O automata transparent, which makes it

easier to reason formally about the automata.

As an example of an I/O automaton, let us consider a communication channel automaton Ci,j


(see Algorithm 3.1). Here and elsewhere, we use the convention that if we do not mention a signature

component (usually the internal actions), then that set of actions is empty. On the other hand, the

states, states(Ci,j), and the start states, start(Ci,j), are most conveniently described in terms of a

list of state variables and their initial values.

Algorithm 3.1 Channel I/O automaton Ci,j.

Signature:
    Input:
        send(m)i,j, m ∈ M
    Output:
        receive(m)i,j, m ∈ M

States:
    queue, a FIFO queue of elements of M, initially empty

Transitions:
    χ1 : send(m)i,j
        Effect: add m to queue
    χ2 : receive(m)i,j
        Precondition: m is first on queue
        Effect: remove first element of queue

Tasks:
    {receive(m)i,j : m ∈ M}

Regarding the transitions in Algorithm 3.1, the send action χ1 is allowed to occur at any time

and has the effect of adding the message to the end of queue, while the receive action χ2 can only occur when the message in question is at the front of queue, and has the effect of removing it. On

the other hand, the task partition tasks(Ci,j) groups together all the receive actions into a single task. That is, the job of receiving (i.e., delivering) messages is thought of as a single task.
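The channel automaton of Algorithm 3.1 can be sketched in Python in the same precondition-effect style. This is a hedged illustration: the class name ChannelAutomaton and the method names are ours, and the explicit receive_enabled predicate stands for the precondition of χ2.

```python
from collections import deque

# Sketch of Algorithm 3.1: the FIFO channel automaton C_{i,j} written in
# precondition-effect style. Names are illustrative, not Lynch's notation.

class ChannelAutomaton:
    def __init__(self):
        # State variable: queue, a FIFO queue of messages, initially empty.
        self.queue = deque()

    # chi_1: input action send(m)_{i,j}.
    # As an input action it is always enabled (input-enabling property).
    def send(self, m):
        self.queue.append(m)          # effect: add m to the end of queue

    # Precondition of chi_2: m is first on queue.
    def receive_enabled(self, m):
        return bool(self.queue) and self.queue[0] == m

    # chi_2: output action receive(m)_{i,j}.
    def receive(self, m):
        assert self.receive_enabled(m)
        self.queue.popleft()          # effect: remove first element of queue
        return m
```

Note that send can occur at any time, while receive is constrained by its precondition, mirroring the transition descriptions above.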

As a second example, let us consider a process automaton Pi (see Algorithm 3.2). In the following,

V is a fixed value set, null is a special value not in V, and f is a fixed function, f : V^n → V.

Thus, the init action χ1 causes Pi to fill in the designated value in its own position in the val

vector, while the receive action χ3 causes it to fill in another position. These values can be updated

any number of times, by means of multiple init or receive actions. Pi is allowed to send its own

value any number of times on any channel. Pi is also allowed to decide any number of times, based

on new applications of f to its vector. On the other hand, the task partition, tasks(Pi), contains n tasks: one for all the send actions χ2 for each j ≠ i, and one for all the decide actions χ4. Thus,

sending on each channel is regarded as a single task, as is reporting decisions.
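The process automaton of Algorithm 3.2 admits a similar sketch. Again this is illustrative only: we index positions 0..n-1 instead of 1..n, pass the fixed function f : V^n → V as a constructor argument, and expose the preconditions of the output actions as explicit *_enabled predicates.

```python
# Sketch of Algorithm 3.2: the process automaton P_i, written in
# precondition-effect style. Class and method names are ours.

class ProcessAutomaton:
    def __init__(self, i, n, f):
        self.i, self.n, self.f = i, n, f
        self.val = [None] * n            # val vector, all initially null

    # chi_1: input action init(v)_i -- effect: val(i) <- v
    def init(self, v):
        self.val[self.i] = v

    # chi_3: input action receive(v)_{j,i} -- effect: val(j) <- v
    def receive(self, v, j):
        self.val[j] = v

    # chi_2: output action send(v)_{i,j}, j != i.
    # Precondition: val(i) = v; effect: none.
    def send_enabled(self, v, j):
        return j != self.i and self.val[self.i] == v

    # chi_4: output action decide(v)_i.
    # Precondition: every val(j) is non-null and v = f(val(0), ..., val(n-1));
    # effect: none.
    def decide_enabled(self, v):
        return all(x is not None for x in self.val) and v == self.f(*self.val)
```

With f = max, for instance, the process can decide only once it has filled in a value for every position of its val vector.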

Now we describe how an I/O automaton A executes. An execution fragment of A is either a finite

sequence, s0, π1, s1, π2, . . . , πr, sr, or an infinite sequence, s0, π1, s1, π2, . . . , πr, sr, . . ., of alternating

states and actions of A such that (sk, πk+1, sk+1) is a transition of A for every k ≥ 0. Note that

if the sequence is finite, it must end with a state. An execution fragment beginning with a start

state is called an execution. We denote the set of executions of A by execs(A). A state is said to be

reachable in A if it is the final state of a finite execution of A.

If α is a finite execution fragment of A and α′ is any execution fragment of A that begins with

the last state of α, then we write αα′ to represent the sequence obtained by concatenating α and α′


Algorithm 3.2 Process I/O automaton Pi.

Signature:
    Input:
        init(v)i, v ∈ V
        receive(v)j,i, v ∈ V, 1 ≤ j ≤ n, j ≠ i
    Output:
        decide(v)i, v ∈ V
        send(v)i,j, v ∈ V, 1 ≤ j ≤ n, j ≠ i

States:
    val, a vector indexed by 1, . . . , n of elements in V ∪ {null}, all initially null

Transitions:
    χ1 : init(v)i, v ∈ V
        Effect: val(i) ← v
    χ2 : send(v)i,j, v ∈ V
        Precondition: val(i) = v
        Effect: none
    χ3 : receive(v)j,i, v ∈ V
        Effect: val(j) ← v
    χ4 : decide(v)i, v ∈ V
        Precondition: v = f(val(1), . . . , val(n)) with val(j) ≠ null, 1 ≤ j ≤ n
        Effect: none

Tasks:
    {send(v)i,j : v ∈ V}, one task for each j ≠ i
    {decide(v)i : v ∈ V}

eliminating the duplicate occurrence of the last state of α. Clearly, αα′ is also an execution fragment

of A.

Sometimes we will be interested in observing only the external behavior of an I/O automaton.

Thus, the trace of an execution α of A, denoted by trace(α), is the subsequence of α consisting of

all the external actions. We say that β is a trace of A if β is the trace of an execution of A. We

denote the set of traces of A by traces(A).
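The notions of execution and trace can be illustrated with a self-contained toy in Python. Here the transitions of a one-slot channel are listed explicitly as (state, action, state) triples; the action names, the EXTERNAL set and the trace helper are our own illustrative choices.

```python
# Toy illustration of executions and traces for a one-slot channel
# automaton whose transitions are listed explicitly as triples.

EXTERNAL = {"send(a)", "receive(a)"}          # external actions of the toy
trans = {
    ("empty", "send(a)", "full"),
    ("full", "receive(a)", "empty"),
    ("full", "send(a)", "full"),              # inputs enabled in every state
    ("empty", "chk", "empty"),                # an internal action
    ("full", "chk", "full"),
}

# An execution: alternating states and actions s0, pi1, s1, pi2, ...
execution = ["empty", "send(a)", "full", "chk", "full", "receive(a)", "empty"]

# Check that every (s_k, pi_{k+1}, s_{k+1}) is a transition.
steps = [(execution[k], execution[k + 1], execution[k + 2])
         for k in range(0, len(execution) - 2, 2)]
assert all(step in trans for step in steps)

def trace(execution):
    """Subsequence of external actions: the trace of the execution."""
    return [a for a in execution[1::2] if a in EXTERNAL]

print(trace(execution))   # the internal action 'chk' is filtered out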

The I/O automaton is the core element used to describe distributed systems. In the next section, we discuss the asynchronous network model built from the two kinds of I/O automata presented above: processes and channels.

3.2.2 Asynchronous Network Model

An asynchronous network consists of a collection of processes communicating by means of a com-

munication subsystem. In the version of this model that is most frequently encountered, this com-

munication is point-to-point, using send and receive primitives. Other versions of the model allow

broadcast actions, by which a process can send a message to all processes in the network (including

itself), or multicast actions, by which a process can send a message to a subset of the processes.

Special cases of the multicast model are also possible, for example, one that allows a combination of

broadcast and point-to-point communication. In each case, various types of faulty behavior of the

network, including message loss and duplication, can be considered.


Send/Receive Systems

Let us consider an n-node directed graph G = (V,E) where we associate processes with the nodes

of G and allow them to communicate over channels associated with directed edges (see Fig. 3.3).

Asynchrony is allowed in both the process steps and the communication; thus, the processes and the channels will be modelled as I/O automata. Let M be a fixed message alphabet.


Figure 3.3: Composition of processes and channels. An example with three processes.
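A minimal toy sketch of the composition in Fig. 3.3 follows. It is an assumption-laden illustration, not the formal composition operation: channels are plain FIFO queues, and the helper functions send and deliver are ours, standing for the matching of a send(m)i,j output of a process with the same-named input of channel Ci,j.

```python
from collections import deque

# Toy sketch of composing processes and channels as in Fig. 3.3:
# channel C[(i, j)] is a FIFO queue; performing send(m)_{i,j} at process i
# is matched with the same-named input at channel C_{i,j}. Names are ours.

n = 3
C = {(i, j): deque() for i in range(n) for j in range(n) if i != j}
inbox = {i: [] for i in range(n)}    # messages received by each process

def send(m, i, j):
    C[(i, j)].append(m)              # output of P_i == input of C_{i,j}

def deliver(i, j):
    m = C[(i, j)].popleft()          # output receive(m)_{i,j} of the channel
    inbox[j].append((m, i))          # == input receive(m)_{i,j} of P_j

send("hello", 0, 2)
send("world", 1, 2)
deliver(0, 2)                        # actions occur one at a time, in any order
deliver(1, 2)
print(inbox[2])                      # [('hello', 0), ('world', 1)]
```

The point of the sketch is only the wiring: the same-named send and receive actions of processes and channels are performed together, one action at a time.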

Processes. The process associated with each node i is modelled as an I/O automaton, Pi. Pi

usually has some input and output actions by which it communicates with an external user; this

allows us to express problems to be solved by asynchronous networks in terms of traces at the “user

interface”. In addition, Pi has outputs of the form send(m)i,j , where j is an outgoing neighbor of i

and m is a message (that is, an element of M), and inputs of the form receive(m)j,i, where j is an

incoming neighbor of i. Except for these external interface restrictions, Pi can be an arbitrary I/O

automaton.

We consider two kinds of faulty behavior on the part of node processes: stopping failure and

Byzantine failure. The stopping failure of Pi is modelled by including in the external interface of Pi

a stopi input action, the effect of which is to permanently disable all the tasks of Pi. The Byzantine

failure of Pi is modelled by allowing Pi to be replaced by an arbitrary I/O automaton having the

same external interface.

Send/Receive Channels. The channel associated with each directed edge (i, j) of G is modelled

as an I/O automaton Ci,j . Its external interface consists of inputs of the form send(m)i,j and outputs

of the form receive(m)i,j, where m ∈ M. In general, except for this external interface specification,

the channel could be an arbitrary I/O automaton. However, interesting communication channels

have restrictions on their external behavior, for example, that any message that is received must

in fact have been sent at some earlier time. The needed restrictions on the external behavior of a


channel can generally be expressed in terms of a trace property P . The allowable channels are those

I/O automata whose external signature is sig(P ) and whose fair traces are in traces(P ).

There are two ways in which such a trace property P is commonly specified: by listing a collection

of axioms or by giving a particular I/O automaton whose external interface is sig(P ) and whose fair

traces are exactly traces(P ). An advantage of listing axioms is that this makes it easier to define a

variety of channels, each of which satisfies a different subset of the axioms. On the other hand, an

advantage of giving an explicit I/O automaton is that in this case, the entire system consisting of the

processes and the most general allowable channels is described as a composition of I/O automata,

which is itself another I/O automaton. For example, this provides us with a notion of “state” for the

entire system, both processes and channels, which we can use in invariant assertion and simulation

proofs.
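As an illustration of the axiom style of specification, the property "any message that is received must in fact have been sent at some earlier time" (strengthened here to FIFO order) can be checked over a finite trace of one channel C_{i,j}. This is a hypothetical sketch, not code from the thesis:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// One action on channel C_{i,j}: either send(m)_{i,j} or receive(m)_{i,j}.
struct Action {
    bool isSend;    // true: send(m); false: receive(m)
    std::string m;  // message from the alphabet M
};

// Checks the trace property of a reliable FIFO channel: every message
// received was sent at some earlier time, and messages are delivered in
// the order they were sent. Unmatched sends at the end are allowed,
// since this is only a finite prefix of a behavior.
bool satisfiesReliableFifo(const std::vector<Action>& trace) {
    std::deque<std::string> inTransit;
    for (const Action& a : trace) {
        if (a.isSend) {
            inTransit.push_back(a.m);
        } else {
            if (inTransit.empty() || inTransit.front() != a.m) return false;
            inTransit.pop_front();
        }
    }
    return true;
}
```

Weaker channels (allowing loss or duplication) would be obtained by relaxing this predicate, which is precisely the advantage of the axiom style noted above.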

Asynchronous Send/Receive Systems

An asynchronous send/receive network system for directed graph G is obtained by composing the

process and channel I/O automata, using ordinary I/O automaton composition (Lynch, 1997). The

composition definition allows for the right interactions among the components; for example, when

process Pi performs a send(m)i,j output action, a simultaneous send(m)i,j input action is performed

by channel Ci,j . Appropriate state changes occur in both components.

Sometimes it is convenient to model the users of a send/receive system as another I/O automaton,

U . U ’s external actions are just the actions of the processes at their user interface. The user

automaton U is often described as the composition of a collection of user automata Ui, one for each

node i of the underlying graph. In this case, Ui’s external actions are the same as the actions of Pi

at the user interface. (If stopping failures are considered, the stop actions are not included among

the actions of the users.)

In this section, a general model to describe distributed systems has been presented, and some particular aspects have been briefly discussed. This model will be used in the following chapters to describe

the different distributed algorithms applied in our multi-UAV platform.

In the next section, the multi-UAV architecture adopted in the AWARE platform is presented.

The decision-making process is divided into different modules that solve the usual problems that arise during the execution of multi-purpose missions. Each distributed algorithm running inside these modules follows the model that has been presented in this section.

3.3 Multi-UAV Architecture in the AWARE Platform

This section presents the distributed architecture for the autonomous coordination and cooperation

of multiple unmanned aerial vehicles in the AWARE platform. The architecture is endowed with

different modules that solve the usual problems that arise during the execution of multi-purpose missions, such as task allocation, conflict resolution, complex task decomposition, etc. One of the main objectives in the design of the architecture was to impose few requirements on the execution capabilities of the autonomous vehicles to be integrated in the platform. Basically, those vehicles should


be able to move to a given location and activate their payload when required. Thus, autonomous

vehicles from different manufacturers and research groups can be integrated in the architecture

developed, making it easily usable in many multi-UAV applications.

In the next section, the platform components with their corresponding models are presented.

Then, the distributed architecture of the platform, along with the task model adopted, are described

in Sects. 3.3.3 and 3.3.4.

As mentioned above, the internal UAV architecture is endowed with different modules. Three of them have specific chapters devoted to them, whereas the others are described in the final part of this section.

3.3.1 Distribution of Decisional Capabilities

In order to preserve genericity, to remain open to future developments, and to allow simple pre-defined missions, the planning and control architecture should support various scenarios, ranging from a planning and decisional activity entirely performed off-line at a control center, to effective decisional autonomy at the level of the different AWARE components with direct cooperation among them.

This will relieve the human operator from the burden of detailed AWARE component control and

relax communication constraints in terms of bandwidth and permanent availability.

Typical schemes could be:

• a mission planning scenario at a control center that produces “off-line” (before the mission) a

detailed set of actions that can be directly executed by the AWARE platform.

• a mission planning and control scenario that allows the human operator to produce a detailed

set of actions “off-line”, but also “on-line”, taking into account data gathered by the platform.

Such data will be essentially processed by the perception system that will send the results of

its interpretation to the human machine interface software.

• scenarios that involve high level planning by the human operator with autonomous context-

dependent task refinement at the AWARE components level. Depending on the task and the

context, direct coordination (or even cooperation) between the different subsystems may be

necessary. Such a level of autonomy will, of course, also involve abilities for autonomous situation assessment based on on-board perception functions.

3.3.2 Models, Knowledge and AWARE Platform Components

In this section, the physical models, common knowledge and components on which the platform

relies for decision-making are presented. This information complements the processing and generic

communication models previously presented in Sect. 3.2. Five main physical models can be identified:

• Environment model. In order to define the missions and to execute them with respect to

the environmental conditions, and to be able to interpret the data acquired by the sensors,

it is necessary to define the data structures that will be used to represent the environment.


We consider the following knowledge necessary to be provided to the system or to be extracted

and represented:

– Zone: this defines a geographical area possessing a geometrical delimiting boundary and

an identifier. The shape of the boundary is approximated and represented by a closed

sequence of straight-line edges. The boundary is defined by the coordinates of its vertices

in a geographical reference system (WGS84). A zone can be pre-defined or defined by the user during mission specification through the HMI. A zone may be decomposed into sub-zones and has several attributes, such as building density, the nature of vegetation, or the fact that it is an inhabited area.

– Objects of different natures (i.e. cars, humans, houses, etc.). These objects may have

various related attributes (e.g., speed, direction for moving objects).

Furthermore, the following knowledge related to the AWARE Disaster Management validation

scenario is considered:

– Fire (attributes).

– Smoke (attributes).

– Wind properties (velocity, direction).

• Unmanned Aerial Vehicles model. The UAV models define the properties of the UAV in

terms of motion, sensing and communications:

– Motion model: kinematic model (maximum/nominal/minimum speeds, acceleration, minimum turning radius, etc.), maximum altitude, maximum range, maximum energy autonomy.

– Sensing model: the sensors on board the UAV and their orientation.

– Communication model: this model defines the communication capacities and constraints

intrinsically related to the UAVs: frequency, range, bandwidth.

– Action model: the commands that the UAV accepts and their possible replies. This is

to be a formal model of the interactions between any other platform subsystem and the

given UAV.

• Ground cameras model. The ground camera models define the properties of the cameras

in terms of motion, sensing and communications:

– Motion model (if applicable): pan&tilt model (maximum/nominal/minimal angular speed,

pan and tilt angles ranges, max energy autonomy, etc.).

– Sensing model: the sensors on board the ground camera and their orientation.

– Communication model: this model defines the communication capacities and constraints

intrinsically related to the ground cameras: frequency, range, bandwidth.


– Action model: the commands that the ground camera accepts and their possible replies.

This is to be a formal model of the interactions between any other platform subsystem

and the given ground camera.

• Wireless Sensor Network (WSN) model. The WSN model defines the properties of the

WSN in terms of sensing and communications:

– Sensing model: the variables that can be measured and their location.

– Communication model: this model defines the communication capacities and constraints

intrinsically related to the WSN: frequency, range, bandwidth.

– Action model: the commands that the WSN accepts and their possible replies. This is

to be a formal model of the interactions between any other platform subsystem and the

WSN.

• Perception model. The perception model defines the capacities of the sensors used, in terms of range, field of view, resolution, perceived attributes (temperature, color, range, etc.), i.e., the physical parameters that the sensor is able to measure, and the expected quality of the measurements (related to the attribute and the environmental conditions).
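As a concrete illustration of the environment model, the zone boundary — a closed sequence of straight-line edges given by vertex coordinates — supports a standard ray-casting point-in-polygon test. The sketch below is hypothetical (projected planar coordinates assumed, not the AWARE code) and could be used, for example, to decide whether a detected object lies inside a given zone:

```cpp
#include <cassert>
#include <vector>

struct Vertex { double x, y; };  // projected coordinates of a boundary vertex

// Ray-casting test: is point p inside the zone delimited by the closed
// polygon `boundary` (vertices in order, edges are straight lines)?
bool insideZone(const std::vector<Vertex>& boundary, Vertex p) {
    bool inside = false;
    std::size_t n = boundary.size();
    for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
        const Vertex& a = boundary[i];
        const Vertex& b = boundary[j];
        // Count crossings of the horizontal ray from p with edge (a, b).
        if ((a.y > p.y) != (b.y > p.y) &&
            p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x)
            inside = !inside;
    }
    return inside;
}
```

For the small zones handled here, working in a local projection (e.g. UTM, as in the elementary task parameters of Sect. 3.3.4) avoids dealing with geodesic edges.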

In the next sections, the multi-UAV architecture and task model adopted are described.

3.3.3 Distributed Architecture for the Platform

A global mission for the AWARE platform is specified by the user using the Human Machine Interface

(HMI) software. Each mission M will consist of a set of tasks (possibly ordered) that should be

executed by the platform. The task allocation process among the different UAVs could be done by

the user or may be autonomously performed in a distributed way. The latter might be necessary

in situations where large numbers of UAVs have to interact, where direct communication with a

central station is not possible, or where the local dynamics of the situation require timely reaction

of the UAVs involved in it.

The tasks that will be executed by the AWARE platform involve coordination, mainly for sharing space, and cooperation, for example in the surveillance of the same object at different altitudes or from different viewpoints, or when a UAV plays the role of a radio re-transmitter from/to other UAVs and the central station. Cooperation includes coordination, but with role sharing between the subsystems to achieve a global common task.

The main objective in the design of the multi-UAV architecture was to impose few requirements on the execution capabilities of the autonomous vehicles to be integrated in the platform. Basically,

those vehicles should be able to move to a given location and activate their payload when required.

Then, autonomous vehicles from different manufacturers and research groups can be integrated in

the AWARE architecture easily.

The global picture of the AWARE distributed UAV system is shown in Fig. 3.4. In each UAV,

there are two main layers: the On-board Deliberative Layer (ODL) and the proprietary Executive


Layer (EL). The former deals with high-level distributed decision-making whereas the latter is in

charge of the execution of the tasks. In the interface between both layers, the ODL sends task

requests and receives the execution state of each task and the UAV state. For distributed decision-

making purposes, interactions among the ODLs of different UAVs are required. Finally, the HMI

software allows the user to specify the missions and tasks to be executed by the platform, and also to

monitor the execution state of the tasks and the status of the different components of the AWARE

platform.


Figure 3.4: Global overview of the distributed multi-UAV system architecture.

A more detailed view of the Deliberative Layer architecture is shown in Fig. 3.5. As mentioned above, the ODL interacts with its executive layer, with the ODLs of other UAVs and with the HMI. The different modules shown in the ODL support the distributed decision-making processes involving cooperation and coordination.

The modules in Fig. 3.5 are described throughout this thesis according to the organization depicted in

Table 3.1.

Regarding the ODL, each module has been implemented in C++ to test the whole platform in

real missions (see Chap. 8), but the research work presented in this thesis has been mainly focused

on the following modules:

• Plan refining toolbox (Chap. 4): once a task arrives at the UAV, this module computes its

decomposition into elementary tasks (if applicable) and also the associated execution cost. It

uses the services of the perception subsystem (see Sect. 3.3.7) on-board.



Figure 3.5: Detailed view of the internal On-board Deliberative Layer (ODL) architecture of a single UAV.

Table 3.1: Description of the modules in Fig. 3.5 along the different chapters of this thesis.

Chapter   Module(s)
3         Task manager; Synch manager; Plan builder / optimizer
4         Plan refining toolbox; PSS manager
5         CNP manager
6         Plan merging
7         AWARE HMI
8         Experimental validation of the whole distributed architecture


• CNP manager (Chap. 5): Taking into account the costs computed by the previous module and

the services of the plan builder, this module negotiates with other UAVs in order to allocate

the different tasks during the mission in a distributed manner.

• Plan merging module (Chap. 6): before the execution of the elementary tasks in the plan, this

module negotiates in order to guarantee that the path to be traversed by each UAV is clear of

other UAVs.

Once each UAV has built its own plan, the task and synchronization managers deal with changes

in the state of the tasks, synchronization in the execution with other UAVs and the interactions

with its executive layer.

Since each UAV has its own plan, several kinds of interaction during task achievement may occur:

• Cooperation: cooperative missions require the interaction between the different entities to

be planned and synchronized. For example, the contract net protocol has been used for the

implementation of the negotiation process among them for task sharing (see Chap. 5).

• Coordination for solving resource conflicts: the main resource conflict is due to the space shared

among the different UAVs. To solve this problem, local interactions between the concerned UAVs, i.e., those that plan to share the same region of space, are required (see Chap. 6).

• Redundancy and opportunism are related to the detection of identical tasks that are achieved by several entities because their individual plans require them. Detecting redundant tasks, such as taking the same images, makes it possible to suppress them from all the cameras but one and share the result, thus optimizing the global mission. On the other hand, it is also possible to use redundancy to improve the reliability of the whole system in case of a partial failure.

This is the rough picture of the distributed approach adopted for the multi-UAV platform. In

the rest of this section, the task model adopted and more details about three modules (task and

synchronization managers, plan builder/optimizer and perception subsystem) are presented. Then,

in the following chapters of the thesis, the rest of the modules and the HMI application are discussed.

3.3.4 Task Model

Let us consider a mission M specified by the AWARE platform user. This mission is decomposed (autonomously or manually) into a set of partially ordered tasks T. Those tasks can be allocated to the UAVs manually from the HMI or autonomously in a distributed way (see Chap. 5). Let us define a task with unique identifier k and type λ allocated to the i-th UAV as τ^k_i = (λ, Ω−, Ω+, ε, Π), where Ω− and Ω+ are respectively the sets of preconditions and postconditions of the task, and ε is the internal event associated with the status evolution of the task (see Table 3.2). Finally, Π = {π1, π2, . . . , πm} is the set of m parameters which characterize the task. As an example, see Table 3.3, which shows the parameters considered in a task consisting of covering a given area for surveillance.


Table 3.2: Possible events considered in the status evolution of a task τ^k_i.

Internal Event (ε^k)   Description
EMPTY                  No task
SCHEDULED              The task is waiting to be executed
RUNNING                The task is in execution
CHECKING               The task is being checked against inconsistencies and static obstacles
MERGING                The task is in the plan merging process to avoid conflicts with the trajectories of other UAVs
ABORTING               The task is in the process of being aborted. If it is finally aborted, the status will change to ABORTED; otherwise it will return to RUNNING
ABORTED                The task has been aborted (the human operator has aborted it or the UAV was not able to accomplish the task properly)
ENDED                  The task has been accomplished properly

Table 3.3: Parameters of a task with type λ = SURV.

Parameter (Π^k)    Description
π1 (Polygon)       The set of vertices defining the polygon of the area to be covered by the UAV
π2 (Altitude)      Altitude (m) for the flight (ellipsoid-based datum WGS84)
π3 (Speed)         Specified speed (m/s) for the flight
π4 (Overlapping)   Desired overlapping, in percentage, between consecutive rows of the zigzag pattern used to cover the area


Regarding the type of task (λ^k) at the ODL level of the architecture, the list shown in Table 3.4 has been considered in the AWARE platform.

On the other hand, preconditions and postconditions are event-based mechanisms that can deal with events related to the evolution of the tasks' states (see Table 3.2), to the reception of messages, to the detection of a given object by the perception system of the platform (see Sect. 3.3.7), to the elapsing of a certain time period, etc. The execution of a task starts when all the associated preconditions are satisfied. It is also possible to specify postconditions, i.e. conditions whose satisfaction triggers the abortion of a task. If a task does not have any precondition or postcondition, then τ^k_i = (λ, Ω− = ∅, Ω+ = ∅, ε, Π).
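The task tuple and its event-based start condition can be sketched in C++ as follows. The types are hypothetical and simplified (parameters reduced to numeric values, conditions reduced to named events), not the actual AWARE code:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Events from Table 3.2 driving the status evolution of a task.
enum class TaskEvent { EMPTY, SCHEDULED, RUNNING, CHECKING, MERGING,
                       ABORTING, ABORTED, ENDED };

// Sketch of the task tuple tau^k_i = (lambda, Omega-, Omega+, eps, Pi).
struct Task {
    std::string type;                        // lambda, e.g. "SURV"
    std::vector<std::string> preconditions;  // Omega-: event identifiers
    std::vector<std::string> postconditions; // Omega+: event identifiers
    TaskEvent status = TaskEvent::SCHEDULED; // eps
    std::map<std::string, double> params;    // Pi = {pi_1, ..., pi_m}
};

// Execution starts only when all the preconditions are satisfied.
bool canStart(const Task& t, const std::map<std::string, bool>& satisfied) {
    for (const auto& p : t.preconditions) {
        auto it = satisfied.find(p);
        if (it == satisfied.end() || !it->second) return false;
    }
    return true;
}
```

A task with Ω− = ∅ is runnable immediately, which matches the empty-set case of the tuple above.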

Table 3.4: Types of tasks (λ^k) considered at the ODL level.

Type of task (λ^k)   Description
TAKE-OFF             The UAV takes off and stabilizes at a default safe altitude, then switches to a secured wait mode, waiting for further instructions
LAND                 The UAV starts landing procedures, lands, and is set to a ground secured mode
GOTO                 The UAV moves from its current location to a point P (or to its vicinity)
GOTOLIST             The UAV moves from its current location to each of the points of the waypoints list, following the order of the points
DEPLOY               The UAV moves from its current location to a point P (or to its vicinity) and activates its payload in order to deploy a device
TAKE-SHOT            The UAV moves from its current location to a point P (or to its vicinity) in order to take images of a given location L. P is computed to have L in the center of the on-board camera's field of view
WAIT                 The UAV is set to a secured waiting mode (hover or pseudo-hover) during a given period
SURV                 The UAV covers a given area defined by a polygon at a certain altitude
DETECT               The perception system of the UAV starts to operate in detection mode, providing an alert if a given object in the environment (fire, persons, etc.) is detected
TRACK                The perception system of the UAV starts to operate in tracking mode, providing (if possible) location estimations of a given object (fire, persons, etc.). The UAV moves to a location that allows it to improve the estimation using the on-board sensors
HOME                 The UAV is commanded to return home

An example of a precondition or a postcondition related to the evolution of the tasks is the “end

of task” event of a different task. Thanks to the synchronization manager module (see Sect. 3.3.5), it


is possible to specify preconditions between tasks of different UAVs. Finally, it should be mentioned that perception events (not related to the execution of a task), such as the detection of a fire or a fireman in a disaster scenario, could also be the precondition of a task (e.g. a tracking task).

The ODL processes the tasks received and generates simpler tasks, called elementary tasks, that are finally sent to the executive layer of the UAV. Let us define an elementary task with unique identifier k and type λ^k allocated to the i-th UAV as τ^k_i = (λ^k, Π^k, ε^k), where Π^k = {π1, π2, . . . , πm} is the set of m parameters which characterize the elementary task and ε^k is the event associated with the evolution of the elementary task. It should be mentioned that RUNNING and ENDED are the only events considered for the elementary tasks.

Then, the vehicles to be integrated in the AWARE platform should be able to receive elementary tasks, execute them and report their associated execution events. A small set of elementary tasks has been considered in order to allow the integration of a broader range of vehicles from different

manufacturers and research groups. Basically, those vehicles should be able to move to a given

location and activate their payload when required. Additionally, autonomous take-off and landing

capabilities are also required (see Table 3.5).

Table 3.5: Types of elementary tasks (λ^k) considered in the Executive Layer (EL). This set can be seen as a subset of the tasks considered at the ODL level (see Table 3.4).

Type of task (λ^k)   Description
TAKE-OFF             The UAV takes off and stabilizes at a default safe altitude, then switches to a secured wait mode, waiting for further instructions
GOTO                 The UAV moves from its current location to a point P (or to its vicinity) and activates its payload if required
LAND                 The UAV starts landing procedures, lands, and is set to a ground secured mode

On the other hand, as an example, Table 3.6 shows the seven parameters that are considered in

the elementary GOTO task.

Table 3.6: Elementary task with type λ = GOTO: list of parameters.

Parameter (Π)        Description
π1 (x)               East UTM coordinate (m)
π2 (y)               North UTM coordinate (m)
π3 (Altitude)        Altitude (m), ellipsoid-based datum WGS84
π4 (Speed)           Desired speed (m/s) along the path to the waypoint
π5 (Force heading)   1: force the specified heading; 0: do not force
π6 (Heading)         Desired heading (degrees) along the way (N is 0, E is 90, W is −90 and S is 180)
π7 (Payload)         1: activate the payload around the location of the waypoint; 0: do not activate
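As an illustration, the seven parameters of the elementary GOTO task map naturally onto a plain record. The sketch below (hypothetical names, not the AWARE executive interface) also reflects the reduced event set of elementary tasks, plus a basic parameter check of the kind that could precede an ERR_ELEM_TASK_PARAM_NOK report (Table 3.7):

```cpp
#include <cassert>
#include <string>

// Only two events are considered for elementary tasks.
enum class ElemEvent { RUNNING, ENDED };

// Sketch of the elementary GOTO task of Table 3.6 (hypothetical type).
struct GotoTask {
    double x;           // pi_1: East UTM coordinate (m)
    double y;           // pi_2: North UTM coordinate (m)
    double altitude;    // pi_3: altitude (m), WGS84 ellipsoid datum
    double speed;       // pi_4: desired speed (m/s) along the path
    bool forceHeading;  // pi_5: 1 = force the specified heading
    double heading;     // pi_6: desired heading (deg); N=0, E=90, W=-90, S=180
    bool payload;       // pi_7: 1 = activate payload near the waypoint
    ElemEvent status = ElemEvent::RUNNING;
};

// Basic validation of the kind an executive layer might perform before
// accepting the request (assumed checks, not specified in the thesis).
bool paramsValid(const GotoTask& t) {
    return t.speed > 0.0 && t.heading >= -180.0 && t.heading <= 180.0;
}
```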


It should also be mentioned that different types of errors, required for planning and task management purposes, were considered in the interface between the executive and the deliberative levels. Table 3.7 provides

a list of the different errors that can be reported from the executive level due to a problem in any

elementary task.

Table 3.7: Executive layer elementary task errors that can be reported to the deliberative level.

Error code                        Description
ELEM_TASK_OK                      No error in elementary task
ERR_ELEM_TASK_UNABLE_TO_ABORT     The elementary task cannot be aborted
ERR_ELEM_TASK_INCONSISTENCY       The elementary task is inconsistent
ERR_ELEM_TASK_NOT_COMPLETED       The task could not be completed
ERR_ELEM_TASK_UNKNOWN             Elementary task name unknown
ERR_ELEM_TASK_FATAL_ERROR         Unknown fatal error during elementary task execution
ERR_ELEM_TASK_ABORTED             Elementary task interrupted during its processing
ERR_ELEM_TASK_UNREACHABLE         Unreachable destination
ERR_ELEM_TASK_NOT_READY           Not ready to execute the elementary task
ERR_ELEM_TASK_OUT_OF_ENERGY       Elementary task not executed due to energy/fuel problems
ERR_ELEM_TASK_PARAM_NOK           Parameters error

Finally, it should be mentioned that the architecture allows several tasks or elementary tasks to be executed in parallel.

Tightly linked to the task model adopted is the operation of the task and synchronization managers in Fig. 3.5. Thus, the next section is devoted to the description of these modules of the architecture.

3.3.5 Task and Synchronization Managers

The task manager module (see Fig. 3.5) receives the planned tasks from the plan builder module.

Those tasks can have preconditions and/or postconditions and the task model assumed is described

in Sect. 3.3.4.

On the other hand, the task manager also interfaces with the executive layer of the UAV. It

sends elementary tasks to the executive, which reports the state of both those tasks and the UAV

itself.

Finally, the synchronization manager ensures the synchronization in the execution of the tasks

in the plan of the UAV, and also between tasks of different UAVs.

Task Manager

The task manager controls partially ordered sequences of tasks in a consistent, timely and safe manner. In each task request from the plan builder, the information shown in Table 3.8 is

included.

In the task request, the operation to be applied should be one of the following two alternatives:


Table 3.8: Task request data fields.

Data field       Values
operation        INSERT or ABORT
sequenceNumber   Unique identifier for the task
taskName         The name of the task to be executed
taskParams       The values for the different task parameters

• Dynamic task insertion (INSERT operation): this allows the insertion of tasks into the UAV's current plan to be requested, according to the relative order specified for the newly inserted task versus the current partial order of the tasks already scheduled. It allows the insertion of a task with preconditions and/or postconditions. Both event-based mechanisms can deal with events related to the evolution of tasks' states, to the reception of messages, or to the elapsing of a certain time period. Preconditions can be specified either as mandatory or optional. If a precondition is mandatory and happens not to be satisfiable anymore, then the task is aborted. On the contrary, if it is specified as optional, the precondition is considered as satisfied (and hence removed from the task's list of preconditions) either when it is actually satisfied or when it becomes unsatisfiable (and in this case the task is not aborted). An example of a task precondition is the "end of task" event of a different task.

It should be noted that thanks to the synchronization manager module (see Sect. 3.3.5), it is

possible to specify preconditions between tasks of different UAVs.

On the other hand, it is also possible to specify postconditions, i.e. conditions whose satisfaction triggers the abortion of a task. For example, this allows a given task to be interrupted when another one is achieved: during a surveillance mission, once a sequence of GOTO elementary tasks covering an area is completed, we might also want to interrupt the TRACK task being carried out by the on-board perception subsystem (PSS).

• Dynamic task abortion (ABORT operation): this mechanism allows task abortions in the current plan to be requested dynamically, while the plan is being executed. If the task is already running, the abortion is an interruption. If the task is not yet running, the abortion is a cancellation (the task is de-scheduled). The abortion triggers a propagation mechanism that checks which of the scheduled tasks depend on the aborted task (i.e. the tasks having a precondition expecting an event from the aborted task, like an "end of task" event): if the dependence is a mandatory precondition, then that task is also aborted, and so on. If it is an optional precondition, then the dependence is removed as if the precondition were satisfied, and the corresponding task is not aborted.
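The propagation mechanism just described can be sketched as a worklist traversal of the dependency table (hypothetical structures, not the AWARE implementation): aborting a task aborts, in turn, every task holding a mandatory precondition on it, while optional dependencies are simply dropped:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <vector>

// Dependency of a scheduled task on the "end of task" event of another.
struct Dependency {
    int onTask;      // the task whose event is expected
    bool mandatory;  // mandatory vs. optional precondition
};

using DepTable = std::map<int, std::vector<Dependency>>;  // task -> deps

// Abort `root` and propagate: tasks with a mandatory precondition on an
// aborted task are aborted in turn; optional preconditions are removed
// as if they had been satisfied.
void abortTask(int root, DepTable& deps, std::set<int>& aborted) {
    std::vector<int> work{root};
    while (!work.empty()) {
        int task = work.back();
        work.pop_back();
        if (!aborted.insert(task).second) continue;  // already aborted
        for (auto& [t, ds] : deps) {
            if (aborted.count(t)) continue;
            for (auto it = ds.begin(); it != ds.end(); ) {
                if (it->onTask == task) {
                    if (it->mandatory) { work.push_back(t); break; }
                    it = ds.erase(it);  // optional: treat as satisfied
                } else {
                    ++it;
                }
            }
        }
    }
}
```

Note that the propagation is transitive: a task two levels away in a chain of mandatory preconditions is aborted as well, whereas an optional dependant merely loses that precondition.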

The management of the preconditions and postconditions is carried out together with the task

synchronization module. In the interface between both modules there is a simple message-based protocol, which is explained in the next paragraphs.


Synchronization Manager

The synchronization manager module is in charge of keeping the dependencies coherent among the different tasks in the current plan of the UAV, and also with the tasks of other UAVs.

When a new task request with preconditions and/or postconditions arrives at the task manager module, the latter sends a REPORT_DEPS message with the dependencies to the synchronization manager, which stores them in a local database.

The synchronization manager continuously checks the state of the tasks in the distributed UAV system and, if there is any change, updates the dependencies database. For instance, if a given task changes its status to ENDED and this event is a precondition for other tasks, those preconditions are marked as satisfied. On the other hand, if all the postconditions of a task are satisfied, an INFO message is sent to the task manager module, which proceeds with the corresponding task abortion.

Before requesting an elementary task from the executive layer, the task manager sends a QUERY message with the sequence number of the task to the synchronization manager. The satisfiability is checked and the answer is sent back in an INFO message. If not all the preconditions are satisfied, the task manager will query again periodically.
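The message exchange just described can be sketched as follows; the class, method and field names are illustrative and do not reproduce the actual AWARE message format:

```python
# Sketch of the REPORT_DEPS / QUERY / INFO protocol between the task
# manager and the synchronization manager. Names are illustrative.

class SyncManager:
    def __init__(self):
        self.deps = {}          # task sequence number -> set of pending events

    def report_deps(self, seq, preconditions):
        """REPORT_DEPS: store the dependencies of a newly requested task."""
        self.deps[seq] = set(preconditions)

    def on_task_status(self, seq, status):
        """Update the database when a task in the distributed system changes state."""
        if status == "ENDED":
            event = ("end_of_task", seq)
            for pending in self.deps.values():
                pending.discard(event)      # that precondition is now satisfied

    def query(self, seq):
        """QUERY: answered with an INFO message carrying the satisfiability."""
        return {"type": "INFO", "seq": seq, "satisfied": not self.deps.get(seq)}

sm = SyncManager()
sm.report_deps(seq=7, preconditions=[("end_of_task", 3)])
print(sm.query(7)["satisfied"])      # False: task 3 has not ended yet
sm.on_task_status(seq=3, status="ENDED")
print(sm.query(7)["satisfied"])      # True: the precondition is satisfied
```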

3.3.6 Plan Builder / Optimizer

In the plan building stage of each mission, there are two different possibilities:

• Offline planning: prior to the mission execution, the AWARE platform user can plan a mission to tune its parameters and check its feasibility using the EUROPA framework developed at NASA’s Ames Research Center (see Appendix A).

• Online planning: it is based on a plan builder / optimizer module integrated in the ODL

architecture (see Fig. 3.5) and programmed to solve the specific planning problems of the

UAVs in the AWARE platform.

Both options are described in the next subsections.

Plan Builder / Optimizer: Offline Planning

The EUROPA framework developed at NASA’s Ames Research Center has been available under NASA’s open source agreement (NOSA) since 2007. NOSA is an OSI-approved software license accepted as open source but not as free software. EUROPA (Extensible Universal Remote Operations Planning Architecture) is a class library and tool set for building planners (and/or schedulers) within a Constraint-based Temporal Planning paradigm, and it is typically embedded in a host application. Constraint-based Temporal Planning (and Scheduling) is a planning paradigm based on an explicit notion of time and a deep commitment to a constraint-based formulation of planning problems. This paradigm has been successfully applied in a wide range of practical planning problems and has a legacy of success in NASA applications.

As a simple application example in the AWARE project context, a deployment mission for an

autonomous helicopter is described in Appendix A.


Plan Builder/Optimizer: Online Planning

In general, the plan builder/optimizer module running during the mission execution generates a

plan P as a set of partially ordered tasks. In the AWARE platform, the main function of the online planner will consist in ordering the motion tasks allocated to the UAV. Let us consider the i-th UAV with a set of n_m motion tasks to be executed. The planner will compute the order of the tasks \tau_i^k, k = 1, \ldots, n_m, that minimizes the execution cost

C_i = \sum_{k=1}^{n_m - 1} c_i^{k,k+1}, (3.2)

where c_i^{k,k+1} is the motion cost between the locations associated with the tasks \tau_i^k and \tau_i^{k+1}. This problem is an instance of the Travelling Salesman Problem, often referred to as TSP, which is NP-hard. The simplest exact algorithm to solve it is based on a brute-force search that tries all the ordered combinations of tasks. The running time for this approach lies within a polynomial factor of O((n_m - 1)!), so this solution is only feasible when a small number of tasks is allocated to the UAVs, which is the case in most of the AWARE platform missions.

However, if the user delegates the task allocation process to the ODLs, each UAV will have to run the planning algorithm many times during the autonomous negotiation with other UAVs. Then, when the autonomous distributed allocation is launched, another algorithm with a lower computational cost is required. Each time a new task is received, the plan builder runs an algorithm that inserts the new task at all the possible and feasible positions in the current plan and chooses the insertion point with the lowest resulting plan cost.
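A minimal sketch of this insertion strategy, assuming a Euclidean motion cost between task locations (the real cost model is provided by the plan refining toolbox):

```python
# Sketch of the insertion algorithm used during negotiation: try the new
# task at every position of the current plan and keep the cheapest order.

import math

def plan_cost(plan):
    """Sum of the motion costs between consecutive task locations."""
    return sum(math.dist(plan[k], plan[k + 1]) for k in range(len(plan) - 1))

def insert_task(plan, new_wp):
    """Return the plan with new_wp inserted at the cheapest position."""
    best = None
    for i in range(len(plan) + 1):
        candidate = plan[:i] + [new_wp] + plan[i:]
        if best is None or plan_cost(candidate) < plan_cost(best):
            best = candidate
    return best

plan = [(0, 0), (10, 0), (20, 0)]
print(insert_task(plan, (15, 1)))
# -> [(0, 0), (10, 0), (15, 1), (20, 0)]
```

Each insertion trial is O(n_m), so evaluating all positions costs O(n_m^2) per received task, far below the O((n_m - 1)!) of the exhaustive search.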

3.3.7 Perception Subsystem (PSS)

The main purpose of the Perception System (PS) of the AWARE platform is to build and update a consistent representation of the environment. A fully distributed probabilistic framework has been developed in order to achieve detection and tracking of events using the sensors provided by the AWARE platform: visual and infrared images from UAVs and ground cameras, and scalar measures such as temperature, humidity, CO or node signal strength from the sensor nodes of the WSN. This approach allows the network bandwidth requirements for data transmission to be reduced and the processing load to be divided among different computers. As a result, the scalability of the whole system is improved. In addition, the ability to process the information separately increases the robustness of the architecture.

Thus, the whole PS is divided into several software instances called perception subsystem (PSS) modules, each attached to an AWARE platform component with perception capabilities. There are PSS modules for the UAVs with cameras on-board (see Fig. 3.5), for the ground cameras, and for the wireless sensor network. Each of them processes the environment information (images, sensors, ...) locally in order to reduce the amount of data transferred through the network. All the PSSs share their beliefs about specific objects. The variety of objects to be detected and tracked by the PSS is large, but a common representation based on a probabilistic framework has been considered (Sukkarieh et al., 2003b; Merino, 2007). The objects are related


with the real world by means of their position and velocity in the Global Coordinate System G (see Appendix C). This information is always accompanied by the error estimation, represented as an information matrix. In order to disambiguate among different objects located in the same place (or close to each other), general information about the object is also included: mean color, histogram, intensity and received signal strength indication (RSSI) when available. Finally, in order to fuse the different estimations, an Information Filter (IF) has been applied due to its suitability for distributed implementations.
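The additive structure that makes the IF attractive for distributed fusion can be illustrated with a small sketch; the two-dimensional estimates below are made-up values, not AWARE data:

```python
# Sketch of Information Filter fusion: each PSS contributes its local
# information matrix Y_i = C_i^{-1} and vector y_i = C_i^{-1} x_i, and
# fusing them is a plain sum, which is why the IF suits distribution.

import numpy as np

def fuse(estimates):
    """estimates: list of (mean, covariance) pairs from different PSSs."""
    Y = sum(np.linalg.inv(C) for _, C in estimates)        # fused information matrix
    y = sum(np.linalg.inv(C) @ x for x, C in estimates)    # fused information vector
    return np.linalg.inv(Y) @ y, np.linalg.inv(Y)          # back to mean / covariance

a = (np.array([10.0, 5.0]), np.eye(2) * 4.0)    # uncertain local estimate
b = (np.array([12.0, 5.0]), np.eye(2) * 1.0)    # more confident local estimate
mean, cov = fuse([a, b])
print(mean)   # pulled towards the more confident estimate
```

With these values the fused mean is [11.6, 5.0] and the fused covariance 0.8·I: the result is weighted towards the estimate with the larger information content.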

The rest of the modules supporting the distributed decision-making process involving cooperation and coordination are described in Chaps. 4, 5 and 6 of this thesis.

3.4 Conclusions

In this chapter, the distributed multi-UAV architecture adopted has been presented. This architec-

ture is based on two software applications: the On-board Deliberative Layer (ODL) and the Human

Machine Interface (HMI).

Regarding the ODL, each module in Fig. 3.5 has been implemented in C++ to test the whole platform in real missions (see Chap. 8), but the research work presented in this thesis has been mainly focused on the following modules:

• Plan refining toolbox (Chap. 4): once a task arrives at the UAV, this module computes its decomposition into elementary tasks (if applicable) and also the associated execution cost. It uses the services of the perception subsystem on-board.

• CNP manager (Chap. 5): taking into account the costs computed by the previous module and the services of the plan builder, this module negotiates with other UAVs in order to allocate the different tasks during the mission.

• Plan merging module (Chap. 6): before the execution of the elementary tasks in the plan, this

module negotiates in order to guarantee that the path to be traversed is clear of other UAVs.

Those modules are further discussed in the following chapters. The rest of the modules in the ODL have been described in this chapter, as well as the task model adopted. Thus, this chapter provides a reference that the reader can use as a guideline to understand the contents of the following chapters.


Chapter 4

Plan Refining Tools

The plan refining toolbox module (see Fig. 3.5) provides mission and task decomposition services to other modules of the architecture, as well as execution cost estimations. It also solves the static obstacle avoidance problem. For those purposes, it can interact with the perception subsystem module on-board to retrieve information about the environment. In the following sections, the methods implemented for task decomposition in the context of missions involving monitoring, deployment and surveillance are described. Finally, the static obstacle avoidance approach adopted is also presented.

4.1 Role during Monitoring Tasks

Two different types of monitoring tasks have been considered: monitoring of a given location and monitoring of a given object in the scenario. The approaches adopted for each of them are described in the following.

4.1.1 Location Monitoring

If a given location is to be monitored, the plan refining toolbox can compute a waypoint for the UAV such that the location lies in the center of the field of view of the on-board camera. The approach adopted is described in the following.

Let us denote a 2D point in the image plane as m = [u, v]^T. A 3D point in the Global Coordinate System (GCS) (see Appendix C) is given by M = [x, y, z]^T. Let us denote by \bar{m} and \bar{M} the corresponding augmented vectors obtained by adding 1 as the last element: \bar{m} = [u, v, 1]^T and \bar{M} = [x, y, z, 1]^T. Applying the usual pinhole model for the camera, the relationship between a 3D point M and its image projection m is given by

s \bar{m} = C [R \ t] \bar{M}, (4.1)

where s is an arbitrary scale factor, [R \ t] is the rotation and translation relating the GCS to the Camera Coordinate System (CCS), and C, called the camera intrinsic matrix, is given by

C = \begin{bmatrix} \alpha_u & \gamma & u_0 \\ 0 & \alpha_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} (4.2)

with (u_0, v_0) the coordinates of the principal point, \alpha_u and \alpha_v the scale factors along the image u and v axes, and \gamma the parameter describing the skewness of the two image axes.

In a first step, let us consider a vector v expressed in the CCS. Following (4.1),

v_C = \frac{1}{s} C^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{s} \begin{bmatrix} \frac{1}{\alpha_u} & -\frac{\gamma}{\alpha_u \alpha_v} & \frac{\gamma v_0 - u_0 \alpha_v}{\alpha_u \alpha_v} \\ 0 & \frac{1}{\alpha_v} & -\frac{v_0}{\alpha_v} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} (4.3)

and applying simple algebraic manipulations, the components of the vector are given by

v_C = \frac{1}{s} \begin{bmatrix} \frac{u - u_0}{\alpha_u} - \frac{\gamma (v - v_0)}{\alpha_u \alpha_v} \\ \frac{v - v_0}{\alpha_v} \\ 1 \end{bmatrix}. (4.4)

This vector can be transformed into the GCS by applying the rotation matrices in Appendix C,

v_G = R_{UG} R_{CU} v_C, (4.5)

allowing us to write the parametric equation in the GCS of each visual ray received by the camera as

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} + \lambda v_G. (4.6)

Then, given the real-world coordinates of an object [x_0, y_0, z_0]^T and imposing some constraints on the location and orientation of the UAV, it is possible to compute the 3D location for the UAV that places the object at the specified coordinates of the image plane.

In order to illustrate the practical computation of the UAV location for the monitoring task, let us consider a particular case with several assumptions:

• An altitude z = z_\Pi for the UAV.

• The object should be in the center of the field of view.

• The on-board camera is aligned with the fuselage and pointing downwards with a fixed angle \theta.

• The desired orientation for the UAV is also given.

Thus, for the plane z = z_\Pi, \lambda = (z_\Pi - z_0)/v_z, and

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x_0 + (z_\Pi - z_0) v_x / v_z \\ y_0 + (z_\Pi - z_0) v_y / v_z \\ z_\Pi \end{bmatrix}, (4.7)

where the components v_x, v_y and v_z of the vector v_G can be computed for any coordinates m = [u, v]^T in the image plane using the rotation matrices for the UAV and the camera on-board. If the monitoring task requires the object to be in the center of the field of view, then m = [w/2, h/2]^T for an image resolution of w \times h pixels. Regarding the rotation matrices, if the camera is aligned

with the UAV fuselage and pointing downwards with a fixed angle \theta, then

R_{CU} = \begin{bmatrix} 0 & \sin\theta & \cos\theta \\ 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \end{bmatrix}. (4.8)

On the other hand, if the UAV should perform the monitoring task with pitch and roll angles equal to zero, but with a given yaw angle \psi, then the UAV rotation matrix will be given by

R_{UG} = \begin{bmatrix} \sin\psi & \cos\psi & 0 \\ \cos\psi & -\sin\psi & 0 \\ 0 & 0 & -1 \end{bmatrix}. (4.9)

After computing v_G, equation (4.7) yields the coordinates x and y of the associated waypoint for the monitoring task.
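Equations (4.7)-(4.9) can be combined numerically as in the following sketch, which assumes the principal point lies at the image centre so the central visual ray is simply [0, 0, 1]^T in the CCS; the function name and inputs are illustrative:

```python
# Numerical sketch of the waypoint computation in (4.7)-(4.9): build the
# visual ray of the image centre in the GCS and place the UAV so that the
# ray hits the object at the commanded altitude.

import numpy as np

def monitoring_waypoint(obj, z_uav, theta, psi):
    """obj = [x0, y0, z0]; theta: camera pitch-down angle; psi: UAV yaw."""
    R_cu = np.array([[0, np.sin(theta), np.cos(theta)],
                     [1, 0, 0],
                     [0, np.cos(theta), -np.sin(theta)]])   # camera -> UAV, (4.8)
    R_ug = np.array([[np.sin(psi), np.cos(psi), 0],
                     [np.cos(psi), -np.sin(psi), 0],
                     [0, 0, -1]])                            # UAV -> GCS, (4.9)
    v_c = np.array([0.0, 0.0, 1.0])      # ray through the image centre (u0, v0)
    v_g = R_ug @ R_cu @ v_c
    lam = (z_uav - obj[2]) / v_g[2]                          # plane z = z_uav, (4.7)
    return np.array([obj[0] + lam * v_g[0], obj[1] + lam * v_g[1], z_uav])

wp = monitoring_waypoint(obj=[100.0, 50.0, 0.0], z_uav=70.0,
                         theta=np.radians(45), psi=0.0)
```

For an object at (100, 50, 0), an altitude of 70 m, \theta = 45° and \psi = 0 this yields the waypoint (100, 120, 70): the UAV stands off 70 m horizontally so that the 45° camera axis passes through the object.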

Up to now, the approach adopted for the monitoring of a particular location has been described. Another valuable capability for the platform is to monitor a particular object identified in the working scenario. In that case, the computation of the waypoint for the observation is based on the estimations about that object provided by the perception system of the platform. The approach adopted for this case is described in the next section.

4.1.2 Object Monitoring based on the Perception System Estimations

In different types of monitoring missions, the observation of a given object from the cameras on-board the UAV is required. The state of the object is estimated by the perception subsystem (PSS) module in a distributed manner (see Sect. 3.3.7). Let us consider an object of interest with an associated state x(t). This state obviously includes the position of the object p(t) and, if the object is moving, it is convenient to also include the velocity \dot{p}(t) in the estimated state. Both are called the kinematic part of the state.

But further information is usually needed. In different types of missions it is also required to confirm that an object belongs to a certain class within a set \Gamma (for instance, in the case of fire alarm detection, this set will include fire alarms and false alarms as classes). Therefore, the object state will also include information about the classification of the object. Moreover, in certain applications,


some appearance information may be needed to characterize an event, which can also help in the data association process among different UAVs with different camera views. This kind of information is usually static and will be represented by \theta.

The complete estimated state is composed of the states of all the objects, and the number of objects N_o can vary with time. The state at time t is represented by a vector x(t) = [x_1^T(t), x_2^T(t), \ldots, x_{N_o}^T(t)]^T. Each potential object k is defined by

x_k(t) = \begin{bmatrix} p_k(t) \\ \dot{p}_k(t) \\ \theta_k \end{bmatrix}. (4.10)

The information about the objects is inferred from all the measurements z_t gathered by the fleet of UAVs. Once the perception system has estimated the state of a particular object, the plan refining toolbox can compute a waypoint for the UAV such that the object lies in the center of the field of view of the on-board camera. The approach adopted is described in the following.

The uncertainty in the estimation of the object is used to generate a convenient waypoint for the observation in order to improve that estimation. As mentioned before, a distributed estimation of the position p_k(t) of each object k is available. This estimation has an associated covariance or inertia matrix C. This matrix can be geometrically represented as a 3\sigma ellipsoid, as will be shown in the following. The main axes of the ellipsoid can be computed to determine the direction with the highest associated uncertainty in the estimation of the position of the object.

A quadric can be expressed in general vector/matrix form as

x^T Q x + p^T x + r = 0, (4.11)

where

x = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \quad Q = \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{12} & q_{22} & q_{23} \\ q_{13} & q_{23} & q_{33} \end{bmatrix}, \quad p = \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}

and r is a constant. If the quadric is an ellipsoid, Q is symmetric, det(Q) > 0 and Q is an invertible matrix. In order to extract useful characteristics of the ellipsoid such as the main axes directions and moduli, the general form in (4.11) should be converted to the center-oriented form

(x - k)^T R D R^T (x - k) = 1, (4.12)

where k is the center of the ellipsoid, R represents its rotation and D is a diagonal matrix. Let us introduce the center k in the first term of (4.11):

(x - k)^T Q (x - k) = x^T Q x - 2 k^T Q x + k^T Q k
                    = (x^T Q x + p^T x + r) - (2Qk + p)^T x + (k^T Q k - r)
                    = -(2Qk + p)^T x + (k^T Q k - r).


Setting k = -Q^{-1} p / 2, we have k^T Q k = p^T Q^{-1} p / 4 and

(x - k)^T Q (x - k) = p^T Q^{-1} p / 4 - r.

Dividing by the scalar on the right-hand side of the last equation and setting M = Q / (p^T Q^{-1} p / 4 - r),

(x - k)^T M (x - k) = 1.

Finally, the symmetric matrix M can be factored using an eigendecomposition into M = R D R^T, where R is a rotation matrix and D is a diagonal matrix whose diagonal entries are positive. Since the terrain where the objects are located is known, only the uncertainty in the x-y plane will be considered in the following. Then, for two dimensions the eigendecomposition can be done symbolically and provides the eigenvectors and hence the directions of the main axes of the corresponding ellipse. An eigenvector v of M with associated eigenvalue \lambda is a nonzero vector such that Mv = \lambda v. The solutions of the quadratic equation det(M - \lambda I) = 0 give the eigenvalues

\lambda_{1,2} = \frac{(m_{11} + m_{22}) \pm \sqrt{(m_{11} - m_{22})^2 + 4 m_{12}^2}}{2}. (4.13)

By definition of eigenvectors, M v_1 = \lambda_1 v_1 and M v_2 = \lambda_2 v_2. It is possible to write the two equations jointly by using a matrix R = [v_1 \ v_2] whose columns are the unit-length eigenvectors. The joint equation is MR = RD, where D = diag(\lambda_1, \lambda_2), and multiplying on the right by R^T, the decomposition M = R D R^T is obtained.

In order to avoid numerical problems when m_{12} is close to zero, if m_{11} \ge m_{22} the major axis direction is computed as

v_1 = \frac{1}{\sqrt{(\lambda_1 - m_{22})^2 + m_{12}^2}} \begin{bmatrix} \lambda_1 - m_{22} \\ m_{12} \end{bmatrix}, (4.14)

and otherwise as

v_1 = \frac{1}{\sqrt{(\lambda_1 - m_{11})^2 + m_{12}^2}} \begin{bmatrix} m_{12} \\ \lambda_1 - m_{11} \end{bmatrix}. (4.15)

Each UAV periodically computes the covariance matrix C_k associated with an object k. The matrix M describing the shape of an ellipse (x - k)^T M (x - k) = 1 is related to the covariance matrix of the same ellipse (Forssen, 2004) according to

M = \frac{1}{4} C^{-1}.

From this equation, it is possible to apply the previous expressions (4.14) and (4.15) to compute the main axis direction of the ellipse, i.e. the direction with the highest uncertainty in the estimation of the position of the object.
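A sketch of this computation: since M = C^{-1}/4 shares its eigenvectors with C, the closed-form eigenpair (4.13)-(4.15) can be applied directly to the 2×2 covariance matrix, whose dominant eigenvector is the direction of highest uncertainty. The function name is illustrative:

```python
# Sketch of the major-axis computation from a 2x2 covariance matrix using
# the closed-form eigenvalue (4.13) and the stable eigenvector branches
# (4.14)/(4.15), applied directly to C (which shares eigenvectors with M).

import math

def major_axis_direction(C):
    """Unit eigenvector of the largest eigenvalue of C = [[c11, c12], [c12, c22]]."""
    c11, c12, c22 = C[0][0], C[0][1], C[1][1]
    lam1 = ((c11 + c22) + math.sqrt((c11 - c22) ** 2 + 4 * c12 ** 2)) / 2
    # Branch choice from (4.14)/(4.15), avoiding trouble when c12 is near zero
    v = (lam1 - c22, c12) if c11 >= c22 else (c12, lam1 - c11)
    n = math.hypot(*v) or 1.0
    return (v[0] / n, v[1] / n)

print(major_axis_direction([[9.0, 0.0], [0.0, 1.0]]))   # (1.0, 0.0): x spread dominates
```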


Let us consider an object with coordinates [x_o, y_o, z_o]^T and main axis vector v_1 = [v_{1x}, v_{1y}]^T. The objective is to compute a location and orientation for the UAV that would allow the estimation of the object of interest to be improved. Let us denote the coordinates of the observation point w by [x_w, y_w, z_w]^T and the desired orientation for the UAV by the roll (\gamma_w), pitch (\beta_w) and yaw (\alpha_w) angles. The UAV is initially located at [x, y, z]^T. The following constraints should be satisfied to improve the estimation:

1. The object should be in the center of the field of view, i.e. m_w = [w/2, h/2]^T for an image resolution of w \times h pixels.

2. The altitude for the monitoring task is z_w = z_o + z_\Pi, where z_\Pi is a parameter based on the camera resolution.

3. The location of the UAV lies on the line perpendicular to the main axis that crosses the estimated location of the object. From the two possible solutions [x_{w1}, y_{w1}]^T and [x_{w2}, y_{w2}]^T, the waypoint closer to the UAV is selected.

y_{w1} - y_o = -\frac{v_{1x}}{v_{1y}} (x_{w1} - x_o) \quad \text{if} \ \sqrt{(x_{w1} - x)^2 + (y_{w1} - y)^2} \le \sqrt{(x_{w2} - x)^2 + (y_{w2} - y)^2}

y_{w2} - y_o = -\frac{v_{1x}}{v_{1y}} (x_{w2} - x_o) \quad \text{otherwise} (4.16)

4. The yaw \alpha_w of the UAV is computed so that the UAV points towards the object along the direction perpendicular to the main axis v_1 of the uncertainty ellipse, whereas the pitch and roll are set to zero for simplicity (\beta_w = \gamma_w = 0). Then, using north as the zero reference for the yaw angle and assuming clockwise positive, its value is given by

\alpha_w = \frac{\pi}{2} - \arctan\left(\frac{y_o - y_w}{x_o - x_w}\right). (4.17)
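Constraints 3 and 4 can be sketched as follows; the stand-off distance `d` is an assumed parameter, and `atan2` is used as the full-quadrant version of the arctangent in (4.17):

```python
# Sketch of constraints 3 and 4: two candidate waypoints on the line
# through the object perpendicular to the major axis v1, the closer one
# selected, and the yaw (4.17) pointing the UAV at the object.

import math

def observation_pose(obj, uav, v1, d):
    """obj, uav: 2D positions; v1: major-axis unit vector; d: stand-off distance."""
    ox, oy = obj
    px, py = -v1[1], v1[0]                 # unit vector perpendicular to v1
    cands = [(ox + d * px, oy + d * py), (ox - d * px, oy - d * py)]
    xw, yw = min(cands, key=lambda c: math.dist(c, uav))   # closer candidate
    yaw = math.pi / 2 - math.atan2(oy - yw, ox - xw)       # (4.17), north = 0, CW +
    return (xw, yw), yaw

wp, yaw = observation_pose(obj=(0.0, 0.0), uav=(5.0, 20.0), v1=(1.0, 0.0), d=10.0)
```

With the object at the origin, a major axis along x and the UAV to the north, the selected waypoint is (0, 10) and the yaw is \pi (facing south, towards the object).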

Figure 4.1(a) shows an example of a waypoint and UAV orientation computed following the rules presented above.

If more than one UAV is commanded to monitor the same object, the first one follows the above-mentioned rules, but the second and subsequent UAVs consider the places already occupied around the object. In this case, the location is chosen alternately between the perpendicular and parallel lines to the main axis that cross the estimated location of the object. Then, as can be seen in Fig. 4.1(b), if two UAVs are commanded to monitor the same object, the first one will choose the closest location on the perpendicular line, whereas the second will be located at the closest waypoint on the parallel line.

Chapter 8 describes a people tracking mission with two UAVs that illustrates the experimental

application of the techniques described in this section.

(a) Location for object monitoring with one UAV

(b) Locations for object monitoring with two UAVs

Figure 4.1: Waypoint computation for object monitoring tasks. For a single UAV, its location willbe in the perpendicular line to the main axis that crosses the estimated location of the object. Fromthe two possible solutions, the waypoint closer to the initial UAV position is selected. If more thanone UAV is commanded to monitor the same object, the location is alternatively chosen between theperpendicular and parallel lines to the main axis that crosses the estimated location of the object.

4.2 Deployment Missions

When the user specifies several waypoints for sensor deployment, the task allocation process managed by the CNP module (see Chap. 5) identifies the UAVs equipped with deployment devices (the other UAVs bid with infinite cost). Once the deployment locations have been allocated among the UAVs (based on metrics such as distance), execution starts and the plan refining toolbox on-board each UAV decomposes each sensor deployment task. The decomposition process is quite simple in this case:

• Go to the waypoint.

• Go down until an altitude of hd meters with respect to the ground is reached.

• Activate the on-board device for dropping the sensor.

• Go up to the initial waypoint altitude.

Those elementary tasks will be inserted in the current plan by the plan builder module.
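The four-step decomposition can be written down almost literally; the elementary task tuples below are illustrative, not the actual ODL task format:

```python
# Literal sketch of the sensor-deployment decomposition into elementary
# tasks. The tuple format and the "ACTUATE" label are illustrative.

def decompose_deployment(wp, h_drop, ground_z):
    """wp: deployment waypoint; h_drop: drop altitude h_d above ground."""
    x, y, z = wp
    return [
        ("GOTO", (x, y, z)),                    # go to the waypoint
        ("GOTO", (x, y, ground_z + h_drop)),    # descend to h_d metres above ground
        ("ACTUATE", "drop_sensor"),             # activate the dropping device
        ("GOTO", (x, y, z)),                    # climb back to the initial altitude
    ]

tasks = decompose_deployment(wp=(10.0, 20.0, 70.0), h_drop=5.0, ground_z=0.0)
print(len(tasks))   # 4
```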


4.3 Task Refining in Surveillance Missions

The mission consists in cooperatively searching a given area to detect objects of interest, using the team of UAVs. Algorithms to divide the whole area taking into account the UAVs’ relative capabilities and initial locations are included in the plan refining toolbox. The resulting areas are assigned among the UAVs, which can cover them using a zigzag pattern. Each UAV also uses the plan refining toolbox to compute the sweep direction that minimizes the number of turns needed along the zigzag pattern. The algorithms have been developed considering their computational complexity in order to allow near-real-time operation.

The problem has been decomposed into the subproblems of (1) determining the relative capabilities of each UAV, (2) cooperative area assignment, and (3) efficient area coverage. In Sect. 4.3.1 an algorithm based on a divide-and-conquer, sweep-line approach is applied to solve the area partition problem. In Sect. 4.3.2 we introduce the sensing capabilities considered on-board the UAVs and their implications with respect to the pattern followed to cover an area. A discussion of the covering algorithm is presented in Sect. 4.3.3 and, finally, simulations are presented in Sect. 4.3.4.

4.3.1 Area Decomposition for UAV Workspace Division

In (Hert and Lumelsky, 2001) a polygon decomposition problem, the anchored area partition problem, is described, which has many similarities to our multiple-UAV terrain-covering mission. This problem concerns dividing a given polygon P into n polygonal pieces, each of a specified area and each containing a certain point (site) on its boundary. In our case, there are n UAVs U_i, i = 1, \ldots, n, each placed at a distinct starting point S_i in the polygonal region P. The team of UAVs has the mission of completely covering the given region and, to do this most efficiently, the region P should be divided among the UAVs according to their relative capabilities. Within its assigned region, each vehicle will execute a covering algorithm, which is discussed in Sect. 4.3.3.

The algorithm solves the case when P is convex and contains no holes (no obstacles). A generalized version that handles nonconvex and non-simply connected polygons is also presented in (Hert and Lumelsky, 2001), but the computational complexity increases in that case.

Relative capabilities of the UAVs The small UAVs are constrained in flying endurance and range. Then, in a first approximation, the maximum range of the UAVs seems to be a good measure of their capability to perform the mission considered. As the UAVs are heterogeneous, the range information should be scaled taking into account factors like the flight speed and altitude required for the mission, sensitivity to wind conditions, sensing width (due to the different cameras’ fields of view), etc.

Based on the relative capabilities of the vehicles, it is determined what proportion of the area of the region P should be assigned to each of them. These proportions are represented in the following by a set of values c_i, i = 1, \ldots, n, with 0 < c_i < 1 and \sum_{i=1}^{n} c_i = 1. Therefore, the problem considered is as follows: given a polygon P and n points (sites) S_1, \ldots, S_n on the polygon, divide the polygon into n non-overlapping polygons P_1, \ldots, P_n such that Area(P_i) = c_i Area(P) and S_i is on P_i.
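The normalization of the proportions c_i from the scaled maximum ranges can be sketched as follows; the scaling factors are illustrative:

```python
# Sketch of deriving the proportions c_i: each UAV's maximum range is
# scaled by mission factors (speed, sensing width, wind sensitivity, ...)
# and normalised so that 0 < c_i < 1 and sum(c_i) = 1.

def capability_shares(ranges, factors=None):
    """ranges: maximum ranges; factors: optional per-UAV scaling factors."""
    factors = factors or [1.0] * len(ranges)
    scaled = [r * f for r, f in zip(ranges, factors)]
    total = sum(scaled)
    return [s / total for s in scaled]

print(capability_shares([30.0, 10.0]))   # [0.75, 0.25]
```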


Let S_1, \ldots, S_n be a set of sites (the start positions of the UAVs), each of them with an area requirement, denoted AreaRequired(S_i), which specifies the desired area of each polygon P_i. A polygon P which contains q sites is called a q-site polygon, and is called area-complete if

AreaRequired(S(P)) = Area(P), where AreaRequired(S(P)) is the sum of the areas required by the sites in P. As stated before, the polygon P is assumed to be convex and with no holes (no obstacles). In this case, it has been shown (Hert and Lumelsky, 2001) that the desired area partition can be achieved using n - 1 line segments, each of which divides a given q-site (q > 1) area-complete polygon P into two smaller convex polygons: a q_1-site area-complete polygon and a q_2-site area-complete polygon with q_1 + q_2 = q and q_1, q_2 > 0. This allows an n-site anchored area partition to be achieved in which each of the polygons P_i is convex. The algorithm for computing the line segments that partition a convex polygon in this way is described in the following.

The procedure for dividing a given convex, area-complete polygon P into two smaller area-complete polygons is summarized in Algorithm 4.1. The list w_k, k = 1, \ldots, m, of vertices and sites of P in counterclockwise (CCW) order is provided as input, and the sites S_1, \ldots, S_q are assumed to be numbered according to their appearance in this ordered list. The line segment L = (L_s, L_e) is initialized as the segment (w_1, S_1). Using L_s as a pivot point, this segment is swept counterclockwise around the polygon until one of the following conditions holds:

1. Area(P_L^r) = AreaRequired(S(P_L^r));

2. S(P_L^r) = {S_1} and Area(P_L^r) > AreaRequired(S(P_L^r));

3. Area(P_L^r) < AreaRequired(S(P_L^r)) and L_e = S_n.

In the first case, P_L^r and P_L^l are both area-complete polygons and, since L_e never passes S_n, each contains at least one site, so the desired division has been achieved. In the second case, there are no sites in P to the right of L, so the starting point of the segment L can be moved counterclockwise around P until Area(P_L^r) = AreaRequired(S(P_L^r)). In the third case, there are no sites to the left of L, so L_s can be moved clockwise (CW) around P until the correct division of area is achieved.

The procedure in Algorithm 4.1 should be called exactly n - 1 times to partition a convex, n-site area-complete polygon into n convex, 1-site area-complete polygons.

4.3.2 Sensing Capabilities

Each UAV has an associated UAV Coordinate System U that changes its origin and orientation with the movement of the vehicle (see Appendix C). The cameras used in the experiments were fixed, and the roll, pitch and yaw angles were measured before the missions. In particular, during the missions described in this thesis, the only non-zero angle for the camera orientation was \gamma. Then, the full rotation matrix for the camera in equation (C.9) could be simplified as


Algorithm 4.1 The procedure for dividing a convex area-complete polygon into two smaller area-complete polygons (Hert and Lumelsky, 2001).

Input: convex polygon P; the list W(P) = {w_k}, k = 1, \ldots, m, of vertices and sites in CCW order; the set of sites S(P) = {S_1, \ldots, S_q} numbered according to their order in the list W(P).

Output: two polygons P_L^l and P_L^r and their sites S(P_L^l) and S(P_L^r).

procedure ConvexDivide
1: find k such that w_k = S_1; L <- (w_1, w_k)
2: S(P_L^r) <- {S_1}
3: while Area(P_L^r) < AreaRequired(S(P_L^r)) and L_e != S_n do
4:   if k > 1 and w_{k-1} in S(P) then
5:     S(P_L^r) <- S(P_L^r) union {w_{k-1}}
6:   end if
7:   k <- k + 1
8:   L_e <- w_k
9: end while
10: if S(P_L^r) = {S_1} and Area(P_L^r) > AreaRequired(S(P_L^r)) then
11:   move L_s CCW along P until Area(P_L^r) = AreaRequired(S(P_L^r))
12: else if L_e = S_n and Area(P_L^r) < AreaRequired(S(P_L^r)) then
13:   move L_s CW along P until Area(P_L^r) = AreaRequired(S(P_L^r))
14: else
15:   interpolate to find the point t on edge (w_{k-1}, w_k) such that, if L_e = t, then Area(P_L^r) = AreaRequired(S(P_L^r))
16:   L_e <- t
17: end if
18: P_L^l <- P - P_L^r
19: S(P_L^l) <- S(P) \ S(P_L^r)


4.3 Task Refining in Surveillance Missions 67

R_C^U(\alpha = 0, \beta = 0, \gamma) =
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos\gamma & -\sin\gamma \\
0 & \sin\gamma & \cos\gamma
\end{bmatrix}.   (4.18)

As the UAV moves along a straight line path between waypoints capturing video, the image

frustum of the camera defines an area on the terrain (see Fig. 4.2). Considering a flat terrain, the sensing width w of an UAV flying at constant altitude can be computed from the equations presented previously in Sect. 4.1. Given an image with a resolution of w_r × h_r pixels, let us consider the four extreme points in the image plane: m_1 = [0 0]^T, m_2 = [w_r 0]^T, m_3 = [0 h_r]^T and m_4 = [w_r h_r]^T. If the corresponding projected points on the terrain are denoted as p_1, p_2,

p3 and p4, then the sensing width w (see Fig. 4.2) will be given by

w = max(d(p1,p2), d(p3,p4)), (4.19)

where d(A,B) represents the Euclidean distance between two points A and B in the space.

Figure 4.2: The area captured with the camera is the intersection of the camera frustum and the terrain.

As the planar algorithm for covering a given area is based on a zigzag pattern, the spacing of the

parallel lines will be determined in a first approximation by equation (4.19). To generalize this planar algorithm directly to the three-dimensional environment considered, the nonplanar surface (area) to be covered should be a vertically projectively planar surface. That is, a vertical line passing through any point p on the surface A intersects it at only one point (Fig. 4.3).

4.3.3 Individual Areas Coverage Algorithm

Once each UAV has an area assigned (corresponding to a convex polygon Pi), an algorithm is needed

to cover this area searching for objects of interest. Those convex areas can be easily and efficiently

covered by back and forth motion along rows perpendicular to the sweep direction (simulations have



Figure 4.3: (a) The area shown is not projectively planar since the vertical line V intersects the surface at three points. (b) This area is projectively planar. The subscript BCS stands for Base Coordinate System and coincides with the GCS described in Appendix C.

shown that in general this pattern is faster than the spiral pattern). The time to cover an area in

this manner consists of the time to travel along the rows plus the time to turn around at the end of

the rows. Covering an area for a different sweep direction results in rows of approximately the same

total length; however, there can be a large difference in the number of turns required as illustrated

in Fig. 4.4.

Figure 4.4: Covering a region using different sweep directions. The number of turns is the mostcostly factor for covering a region along different sweep directions.

We therefore wish to minimize the number of turns in an area, and this is proportional to the

altitude of the polygon measured along the sweep direction. The altitude of a polygon is just its

height. We can use the diameter function d(θ) to describe the altitude of a polygon along the sweep

direction. For a given angle θ, the diameter of a polygon is determined by rotating the polygon

by −θ and measuring the height difference between its highest and lowest point. The altitude of a

polygon P_i for a sweep direction at an orientation of α is d_{P_i}(α − π/2).

The shape of a diameter function can be understood by considering the height of the polygon as

it rolls along a flat surface (see Fig. 4.5). Starting with one edge resting on the surface, we can draw

a segment from the pivot vertex to another vertex of the polygon, and the height of the polygon

will be determined by this vertex. Whenever the polygon has rolled on to the next side or when an

edge at the top of the polygon becomes parallel to the surface, we will change to a different segment

(from a different pivot vertex or to a different top vertex). Therefore, a diameter function has the

following form for an n sided convex polygon:



d(\theta) =
\begin{cases}
k_1 \sin(\theta + \varphi_1) & \theta \in [\theta_0, \theta_1) \\
k_2 \sin(\theta + \varphi_2) & \theta \in [\theta_1, \theta_2) \\
\quad\vdots \\
k_{2n} \sin(\theta + \varphi_{2n}) & \theta \in [\theta_{2n-1}, \theta_{2n})
\end{cases}   (4.20)

where θ0 = 0 and θ2n = 2π. The diameter function is piecewise sinusoidal; its “breakpoints” θi occur

when an edge of the rotated polygon is parallel to the horizontal. The minimum of the diameter

function should lie either at a critical point (d'_{P_i} = 0) or at a breakpoint. However, at any critical point between breakpoints d''_{P_i} < 0, which means that it corresponds to a maximum. Therefore,

the minimum must lie at a breakpoint and these breakpoints correspond to when the sweep direction

is perpendicular to an edge of the perimeter. Testing each of these sweep directions, the minimum

can be determined.

Figure 4.5: Diameter function for a rectangle.

A similar approach can also be applied when obstacles are present inside the areas. In this case,

the altitude to be minimized is the sum of the diameter function of the perimeter plus the diameter

functions of the obstacles.

4.3.4 Simulation Results

The algorithms presented in this section have been implemented in C++. In the simulation considered in the following, three UAVs have to search an area defined by a convex polygon with seven edges. We assume different cameras on-board the UAVs, each of them leading to a different value of the sensing width w. In Table 4.1, the initial coordinates of the UAVs and their relative capabilities (c_i – see Sect. 4.3.1) are listed. The values of c_i have been obtained via an estimation of the maximum range as a function of parameters such as remaining fuel, specific consumption, flight speed, etc. (Barcala and Rodríguez, 1998). Using equation (4.19), and assuming constant altitudes during the mission, the sensing width (w_i) of each UAV can be easily derived (see also Table 4.1).

The area partition has been computed using the algorithm presented in Sect. 4.3.1. The resulting allocation is shown in Fig. 4.6. It can be seen that each UAV has been assigned an area (convex polygon) according to its relative capability.



Table 4.1: Initial coordinates, sensing width and relative capabilities of the UAVs.

        x_G (m)   y_G (m)   z_G (m)   w_i (m)   c_i (%)
UAV1     190.00      0.00     29.00     24.02     24.92
UAV2     550.00    100.00     34.00     25.45     41.81
UAV3     225.38    412.69     20.00     20.00     33.27

Figure 4.6: Area partition simulation results. Optimal sweep directions have been represented by arrows.

Figure 4.7: Resulting zigzag patterns minimizing the number of turns required.



Each UAV has to find the optimal sweep direction which minimizes its assigned polygon’s altitude.

As explained in Sect. 4.3.3, only the directions perpendicular to the edges of each polygon have to be tested. The resulting directions have been represented by arrows in Fig. 4.6. Then, each UAV has to compute the waypoints needed to follow a zigzag pattern perpendicular to those directions (see Fig. 4.7). The distance between the parallel lines depends on the sensing width of the UAV.

On the other hand, a reconfiguration process has also been simulated. If UAV3 has to abort its mission due to a low fuel level, the remaining UAVs have to cover the whole area. A new area partition process is triggered and new sweep directions are followed (see Fig. 4.8).

Figure 4.8: UAV3 has to abort and the remaining UAVs have to reconfigure their flight plans to cover the whole area.

Finally, it is worth mentioning that the approach presented in this section has also been applied in actual experiments with real UAVs, as will be shown later in Sect. 8.5.3.

4.4 Static Obstacles Avoidance

This section presents the approach adopted to solve the static obstacle avoidance problem. It is assumed that a model of the environment is available using a priori information.

In the following, the notation presented in (LaValle, 2006) is used to describe the problem and

the solution adopted.

4.4.1 Geometric Modeling

A wide variety of approaches and techniques for geometric modeling exist, and the particular choice

usually depends on the application and the difficulty of the problem. In most cases, there are



two alternatives: 1) a boundary representation, and 2) a solid representation.

The first step is to define the world W for which our choice will be a 3D world, i.e. W = R3.

The world contains two kinds of entities:

Obstacles: Portions of the world that are “permanently” occupied, for example the buildings.

UAVs: Solid bodies that are modeled geometrically and are controllable via a motion plan.

Both obstacles and UAVs will be considered as (closed) subsets of W. Let the obstacle region O denote the set of all points in W that lie in one or more obstacles; hence O ⊆ W. The next step is to define a systematic way of representing O that is computationally efficient, and the same applies to the UAVs.

A solid representation of O will be adopted in terms of a combination of primitives. Each primitive H_i represents a subset of W that is easy to represent and manipulate in a computer. A complicated obstacle region will be represented by taking finite Boolean combinations of primitives.

4.4.2 Definition of the Basic Motion Planning Problem

Assume that the world W = R3 contains an obstacle region, O ⊂ W. Suppose that a rigid UAV,

A ⊂ W, is defined. Let q ∈ C denote the configuration of A, in which q = (xt, yt, zt, h) (h represents

the unit quaternion).

The obstacle region, Cobs ⊆ C, is defined as

C_{obs} = \{ q \in C \mid A(q) \cap O \neq \emptyset \},   (4.21)

which is the set of all configurations, q, at which A(q), the transformed UAV, intersects the obstacle region, O. Since O and A(q) are closed sets in W, the obstacle region is a closed set in C. The leftover configurations are called the free space, which is defined and denoted as C_{free} = C \ C_{obs}.

The general motion planning problem components are the following:

1. A world W = R3.

2. Semi-algebraic obstacle region O ⊂ W in the world.

3. Semi-algebraic rigid body UAV A defined in W.

4. The configuration space C determined by specifying the set of all possible transformations that

may be applied to the UAV. From this, Cobs and Cfree are derived.

5. A configuration, qI ∈ Cfree designated as the initial configuration.

6. A configuration qG ∈ Cfree designated as the goal configuration. The initial and goal configu-

rations together are often called a query pair (or query) and designated as (qI , qG).

7. A complete algorithm should compute a path, τ : [0, 1] → Cfree, such that τ(0) = qI and

τ(1) = qG, or correctly report that such a path does not exist.



The main difficulty is that it is neither straightforward nor efficient to construct an explicit

boundary or solid representation of either C_free or C_obs.

In our particular case, the scenario consists of an urban setting, and the main obstacles are therefore buildings. Those buildings can be efficiently approximated using rectangular hexahedra. As the UAV has a non-negligible size compared to the obstacles, the polyhedra are expanded to compensate for this size when planning the motion of the UAV.

Due to the convex nature of the obstacles, it is possible to apply a combinatorial approach to find

paths through the continuous configuration space without resorting to approximations. The combi-

natorial approaches are alternatively referred to as exact algorithms in contrast to the sampling-based

motion planning algorithms.

The combinatorial algorithms are complete, which means that for any problem instance (over the

space of problems for which the algorithm is designed), the algorithm will either find a solution or

will correctly report that no solution exists.

Virtually all combinatorial motion planning approaches construct a roadmap along the way to

solving queries. Let G be a topological graph that maps into Cfree. Furthermore, let S ⊂ Cfree be

the swath, which is the set of all points reached by G. The graph G is called a roadmap if it satisfies

two important conditions:

1. Accessibility: From any q ∈ C_free, it is simple and efficient to compute a path τ : [0, 1] → C_free such that τ(0) = q and τ(1) = s, in which s may be any point in S. Usually, s is the closest point to q, assuming C is a metric space.

2. Connectivity-preserving: Using the first condition, it is always possible to connect some qI

and qG to some s1 and s2, respectively, in S. The second condition requires that if there

exists a path τ : [0, 1] → Cfree such that τ(0) = qI and τ(1) = qG, then there also exists a

path τ ′ : [0, 1] → S, such that τ ′(0) = s1 and τ ′(1) = s2. Thus, solutions are not missed

because G fails to capture the connectivity of Cfree. This ensures that complete algorithms

are developed.

By satisfying these properties, a roadmap provides a discrete representation of the continuous

motion planning problem without losing any of the original connectivity information needed to solve

it. A query, (qI , qG), is solved by connecting each query point to the roadmap and then performing

a discrete graph search on G. To maintain completeness, the first condition ensures that any query

can be connected to G, and the second condition ensures that the search always succeeds if a solution

exists.

The goal of our combinatorial method is to find shortest paths. This leads to the shortest-path

roadmap, which is also called the reduced visibility graph in (Latombe, 1990). Taking into account

that our obstacles are modeled as rectangular hexahedra, it is easy to compute the shortest-path

roadmap in three dimensions adapting the method presented in (Jiang et al., 1993). Then, given an

initial location and a goal (specified waypoint), the set of intermediate waypoints (vertices of the

polyhedra) that avoid the obstacles can be computed from the roadmap previously generated. The



result is a path composed of the newly computed waypoints plus the initially specified goal. Figure 4.9 shows an example that illustrates this approach.

Figure 4.9: Given the initial and goal locations, a set of waypoints avoiding the obstacles and minimizing the distance is computed.

4.5 Conclusions

This chapter has described the different tools present in the plan refining toolbox module. This

module is always used to compute the detailed costs involved in the execution of a task. On the

other hand, the module also decomposes each complex task into a set of elementary tasks that can

be properly understood and executed by the EL of the UAV.

It has been shown that, depending on the particular task, the decomposition rules and algorithms are different. But in all cases, the final output is a set of elementary tasks to be followed by the executive layer.


Chapter 5

Distributed Task Allocation

The multi-robot task allocation (MRTA) problem has become a key research topic in the field of

distributed multirobot coordination in recent years. In this chapter, three algorithms (SIT, SET

and S+T) to solve the distributed task allocation problem are presented. A market-based approach relying on the Contract Net Protocol (CNP) (Smith, 1980) has been adopted for all the algorithms

developed.

In the SIT algorithm, the UAVs consider their local plans when bidding and multiple tasks can be

allocated to a single UAV during the negotiation process. The second algorithm (SET) described in

this chapter is based on the negotiation of subsets of tasks and can be considered a generalization of the former (SIT), which only negotiates single tasks. Both algorithms have been tested in a multi-UAV simulator with multiple missions consisting of visiting waypoints. The objective of visiting a waypoint can be manifold: deploying a sensor at that location, acting as a "data mule" for the WSN, taking images of a given area for surveillance, etc. From the simulation results, the SIT algorithm was selected for implementation and final usage in the real multi-UAV platform, because it offers a better trade-off between the quality of the solution and the number of messages exchanged. The CNP manager module (see Fig. 3.5) is thus based on the SIT algorithm, and it manages the distributed task allocation process among the different UAVs in the AWARE platform.

The latter algorithm (S+T) solves the MRTA problem in applications that require cooperation among the UAVs to accomplish all the tasks. If an UAV cannot execute a task by itself, it asks for help and, if possible, another UAV will provide the required service. In this thesis, tasks consisting of transmitting data in real time (which could require communication relay services) are considered. On the other hand, the parameters of the algorithm can be adapted to give priority to either the execution time or the energy consumption in the mission. The potential generation of deadlocks associated with the relation between tasks and services is studied, and a distributed algorithm that prevents them is proposed. The S+T algorithm has also been tested in simulations and the results are presented in this chapter. Finally, it should be mentioned that this algorithm has not been implemented in the real platform, since the available area for the missions was not wide enough to require communication relay services.



5.1 Introduction

An important issue in distributed multirobot coordination is the multi-robot task allocation (MRTA) problem. It deals with the way tasks are distributed among the robots of a team and requires defining metrics to assess the relevance of assigning each task to a given robot. This chapter is focused on the distributed solution of the MRTA problem, but centralized (Brumitt and Stenz, 1998; Caloud et al., 1990) and hybrid (Dias and Stenz, 2002; Ko et al., 2003) approaches have also been addressed in the literature.

Several multirobot architectures considering the MRTA problem in a distributed manner have been validated on either physical or simulated robots. ALLIANCE (Parker, 1998), one of the earliest demonstrated approaches, and Broadcast of Local Eligibility (BLE) (Werger and Mataric, 2000), a distributed behavior-based architecture, are examples of systems based on behaviors with high fault tolerance and adaptability to noisy environments. On the other hand, in recent years a very popular approach to the MRTA problem has been the application of market-based negotiation rules. This negotiation is usually implemented by using some variant of the Contract Net Protocol (CNP) (Smith, 1980; Sandholm, 1993).

One of the first distributed market-based systems was M+ (Botelho and Alami, 1999), defined

within a general architecture for the cooperation among multiple robots (Botelho and Alami, 2001).

In this system, when a robot computes the cost of a task, it considers the next one in its local

plan in order to increase the efficiency of the solution. In MURDOCH (Gerkey and Mataric, 2000;

Gerkey and Mataric, 2002), a robot that is executing a task does not take part in the different

negotiation processes. Therefore, the mechanism of task allocation is based on a purely greedy

method that assigns each new task to the most suitable available robot in the system. TraderBots

(Dias, 2004) considers dynamic environments (Dias et al., 2004) and total/partial failures of the

robots and communication links. Unlike the previously mentioned works, robots have a local plan and

more than one task can be allocated to each robot during the negotiation. In general, this approach

leads to solutions closer to the global optimum. In this work, as our goal is to find solutions close

to the optimum, UAVs consider their local plans when bidding and multiple tasks can be allocated

to a single UAV during the negotiation process.

On the other hand, these market-based approaches usually assume that each task can be executed

completely by a single robot. But this may not be the case, for example, in a surveillance or exploration scenario, in which a task consisting of transmitting images in real time could require

another UAV to act as a communication relay. Our approach to solve this problem is based on

the concept of service. If an UAV cannot execute a task by itself, it asks for help and, if possible,

another UAV will provide the required service. Those services are generated dynamically and are

necessary to successfully complete their associated tasks.

In this chapter, three algorithms (SIT, SET and S+T) to solve the distributed task allocation

problem are presented. A market-based approach relying on the Contract Net Protocol (CNP) (Smith,

1980) is adopted for all the algorithms developed.

Section 5.2 is devoted to the description of the first algorithm (SIT) which is based on the ideas

presented in (Dias, 2004). Then, a simple mission to point out some limitations of this algorithm is



described. In order to reduce these limitations, a new algorithm called SET, which considers subsets of tasks in the negotiation process, is introduced in Sect. 5.3. Synchronization issues related to both algorithms are described in Sect. 5.4. Finally, their performance is compared in Sect. 5.5 with missions that require visiting a set of waypoints. From the simulation results, the SIT algorithm was selected for implementation and final usage in the real multi-UAV platform, because it offers a better trade-off between the quality of the solution and the number of messages exchanged.

The latter algorithm (S+T) solves the MRTA problem in applications that require cooperation among the UAVs to accomplish all the tasks. If an UAV cannot execute a task by itself, it asks for help and, if possible, another UAV will provide the required service. This protocol is also based on a distributed market-based approach and can be considered an extension of the SIT algorithm. The basic idea is that an UAV can ask for services when it cannot execute a task by itself. The cost of the task will then be the sum of the costs of the task and the service or services required.

A similar idea is presented in (Lemaire et al., 2004), where soft temporal constraints were considered using master/slave relations, and also in (Zlot and Stentz, 2006), where the efficiency of the solution is increased by considering the decomposition and allocation of complex tasks at the same time in a distributed manner. However, the potential execution loops associated with the relation between tasks and services, which could lead to deadlock situations, are original in our work. To the best of our knowledge, there is no other paper dealing with this problem in a distributed manner within the MRTA area. Moreover, the parameters of our algorithm can be adapted to give priority to either the execution time or the energy consumption (i.e., the sum of the distances traveled by each of the UAVs) in the mission.

In Sect. 5.6 the S+T algorithm is described and illustrated with a simple example. In the same section, the changes in the costs that allow the algorithm to prioritize between the execution time and the energy spent on the mission are also explained. In Sect. 5.7, the deadlock problem is stated, and a distributed algorithm to solve it is explained. Simulation results that illustrate the main characteristics of the S+T algorithm are shown in Sect. 5.8.

5.2 Dynamic Single Task Negotiation with Multiple Allocations to a Single UAV (SIT Algorithm)

Our goal was to find solutions that minimize the global cost, defined as the sum of all the individual costs assuming independent tasks. Thus, the approach presented in (Dias and Stenz, 2002) was taken as a starting point. In the same manner, robots with a local plan and multiple

2002) was taken as a starting point. In the same manner, robots with a local plan and multiple

tasks allocated to a single robot during the negotiation were considered. In the implementation of

the SIT algorithm, several differences with the work in (Dias and Stenz, 2002) can be pointed out:

revenues are not used, a different synchronization method is applied and there is an agent acting as

an entry point for the tasks (the human machine interface application of the platform).

In general, the plan builder/optimizer module running during the mission execution generates a

plan P as a set of partially ordered tasks. Let us consider the i-th UAV with a set of n_m motion tasks to be executed. For a given order of the tasks τ_i^k, k = 1, . . . , n_m, the execution cost is given by

C_i = \sum_{k=1}^{n_m - 1} c_i^{k,k+1},   (5.1)

where c_i^{k,k+1} is the motion cost between the locations associated with the tasks τ_i^k and τ_i^{k+1}.

In the negotiation process, each UAV bids for a task with the cost of inserting this task into its local plan (marginal cost). Let us assume that the i-th UAV has a local plan P_i consisting of a set of n ordered motion tasks τ^1, τ^2, . . . , τ^n with cost C_i and receives a new task. If this task is inserted at position j in the plan P_i, then a new plan P_i(τ^j) with cost C_i(τ^j) is generated. In that case, the associated marginal cost µ_j is given by

µ_j = C_i(τ^j) − C_i.   (5.2)

The plan builder module of each UAV will compute the optimal insertion point of the new task

in its current plan. Taking into account the local plan of each UAV in the negotiation leads to better

solutions, as will be shown later.

The SIT algorithm is based on two different roles played dynamically by the UAVs: auctioneer

and bidders. In each auction there is only one auctioneer which is the UAV that has the token.

The auction is opened for a period of time and all the bids received within it are considered. When

the auction is finished and the task is allocated, the auctioneer considers passing the token to another UAV. If that happens, the auctioneer changes its role to a bidder role and the UAV with the token becomes the new auctioneer. The basic steps of the auctioneer and bidder roles are given in Algorithm 5.1. The best bid collected by the auctioneer is increased by a given percentage (usually 1%) to avoid transactions that would not significantly improve the solution.

The main difference with the basic CNP protocol is that the bid of each UAV depends on its

current plan and every time the local plan changes, the negotiation continues until no bids improve

the current global allocation. When the initial negotiation is over, the mission execution can start,

but new tasks can be generated at any moment. Therefore, the negotiation is dynamic in the

sense that new tasks are handled also during the mission execution. All the UAVs take part in the

negotiation of those new tasks with the only restriction that the current tasks in execution are not

re-negotiated.

The SIT algorithm has been tested in multi-UAV missions consisting of visiting waypoints and returning to the home location. In this case, the local plan cost for the i-th UAV visiting a set of n_w ordered waypoints P^1, P^2, . . . , P^{n_w} can be expressed as

C_i = d(P(x_i), P^1) + \sum_{l=2}^{n_w} d(P^{l-1}, P^l) + d(P^{n_w}, P(h_i)),   (5.3)

where P(x_i) are the coordinates corresponding to the current state of the i-th UAV, P(h_i) is its home location and d(A, B) is the Euclidean distance between the points A and B. In these particular missions, each UAV should build its own local plan, visiting the waypoints in an order that minimizes the total distance travelled. This problem is equivalent to the TSP, which is a well-known NP-hard



Algorithm 5.1 Description of the SIT algorithm running on-board each UAV for distributed task allocation purposes. The model adopted was presented in Sect. 3.2.

Signature:
  Input: receive(m)_{j,i}, m ∈ M
  Output: send(m)_{i,j}, m ∈ M

States:
  announcementList  /* list with the tasks to be re-announced to improve the current allocation */
  localPlan         /* current plan as an ordered set of tasks */
  task              /* current task being negotiated */

Transitions:
  χ1: announcementList ≠ ∅
      announce task
      while timer is running do
        receive bids
      end while
      calculate best bid
      if best bid is smaller than the auctioneer bid then
        send task to best bidder
      end if
      delete task from announcementList

  χ2: receive(m)_{j,i}, m ∈ M
      if new message m is a task announcement then
        calculate the optimal position of the task in localPlan
        calculate bid (marginal cost)
        send bid to the auctioneer
      else if new message m is a task award then
        insert task in localPlan at the position calculated before
        introduce task in announcementList
      end if

Tasks:
  send(m)_{i,j}, m ∈ M



problem. In our implementation, a greedy approach has been applied to solve it, inserting the new task in the position that minimizes its insertion cost.

5.3 Dynamic Task Subsets Negotiation with Multiple Allocations to a Single UAV (SET Algorithm)

5.3.1 Motivation

Although the SIT algorithm leads to good solutions in general, simple missions can be found where it does not find the global optimum. For example, let us consider the mission in Fig. 5.1, consisting of visiting the waypoints wp1 and wp2. The optimal solution that minimizes the global cost, defined as the sum of all the individual costs assuming independent tasks, is represented in Fig. 5.1,a. But, if task wp2 is announced before task wp1, the SIT algorithm will not find the optimum. The steps would be as follows:

1. Task wp2 will be allocated to UAV 2 (Fig. 5.1b), which is the nearest one.
2. As the marginal cost of task wp1 is lower for UAV 2 than for UAV 1, this task is also allocated to UAV 2 (Fig. 5.1c).
3. UAV 2 announces both tasks, but it has the lowest marginal costs for them and keeps both tasks (Fig. 5.1d).
4. After a given timeout expires, UAV 2 starts executing its tasks.

[Figure: four panels a)-d) showing UAV 1 at (0,0), UAV 2 at (60,-20) and the waypoints wp1 at (30,40) and wp2 at (50,40). Optimal cost: 134.03; SIT cost: 147.91.]

Figure 5.1: A particular mission that shows some limitations of the SIT algorithm.


In this particular mission, the optimal solution would have been found if UAV 2 had announced a subset of tasks composed of wp1 and wp2. This idea has been used to develop the algorithm described in the next subsection.
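The costs reported in Fig. 5.1 can be reproduced with straight-line distances if each UAV is assumed to return to its initial position after its last task (an assumption consistent with the missions described in Sect. 5.5):

```python
import math

def tour_cost(home, waypoints):
    """Distance of the route home -> waypoints (in order) -> home."""
    route = [home, *waypoints, home]
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

uav1, uav2 = (0.0, 0.0), (60.0, -20.0)
wp1, wp2 = (30.0, 40.0), (50.0, 40.0)

sit_cost = tour_cost(uav2, [wp2, wp1])      # UAV 2 keeps both tasks
optimal_cost = tour_cost(uav1, [wp1, wp2])  # UAV 1 visits both waypoints
print(round(sit_cost, 2), round(optimal_cost, 2))  # → 147.91 134.03
```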

5.3.2 Description

In order to improve the solution for the mission explained above, the negotiation of subsets of tasks was considered in the design of a new algorithm called SET (dynamic task subSETs negotiation). The basic idea behind this algorithm is that negotiating subsets of tasks provides more information to the UAVs than negotiating the tasks one by one. It should also be noted that SET can be considered a generalization of the SIT algorithm that tries to improve the quality of the solutions.

Let us assume that the i-th UAV has a local plan Pi consisting of a set of n ordered motion tasks τ1, τ2, . . . , τn with cost Ci, and that it receives a set of tasks Γ with cardinality |Γ| = ns. If this set is inserted at position j in the plan Pi, then a new plan Pi(Γj) with cost Ci(Γj) is generated. In that case, the associated marginal cost μj is given by

μj = Ci(Γj) − Ci.    (5.4)

In our implementation, a greedy approach has been applied to find the insertion point of the subset in the current local plan in order to minimize the associated cost.
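A sketch of Eq. (5.4) together with the greedy insertion-point search (hypothetical helper names, Euclidean motion costs):

```python
import math

def plan_cost(start, plan):
    """Cost Ci of the ordered plan: total travel distance from start."""
    cost, pos = 0.0, start
    for wp in plan:
        cost += math.dist(pos, wp)
        pos = wp
    return cost

def subset_marginal_cost(start, plan, subset):
    """Greedy search for the insertion position j of the ordered subset
    Gamma that minimizes mu_j = Ci(Gamma_j) - Ci; returns (mu_j, j)."""
    base = plan_cost(start, plan)  # Ci
    return min(
        ((plan_cost(start, plan[:j] + subset + plan[j:]) - base, j)
         for j in range(len(plan) + 1)),
        key=lambda pair: pair[0])
```

With ns = 1 this reduces to the single-task bid computed in the SIT algorithm.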

On the other hand, a policy for building the subsets of tasks to be auctioned during the negotiation process is required. A brute-force algorithm trying all the possible combinations is not feasible. In our approach, each UAV computes the subset of tasks with the highest cost in its local plan. The computational cost of finding this subset is not significant for the number of tasks usually managed by a single UAV (fewer than 50).

As in the SIT algorithm, there are two roles: auctioneer and bidder. The basic steps of the algorithm for both roles are given in Algorithm 5.2. When the cardinality of the subset of tasks to be announced is one (ns = 1), the algorithm behaves exactly as SIT. Once all the tasks have been allocated and there are no changes in the local plans of the UAVs during a given period, the subset cardinality is increased by any UAV. This UAV will start the next stage of auctions with subsets of two tasks (ns = 2). Finally, the algorithm will stop when there is no interchange of tasks during a given stage or when the subset cardinality is greater than the number of tasks to be announced by any UAV. Once the negotiation has finished, the UAVs will start executing their local plans.

5.4 Synchronization during the Negotiation

The synchronization during the negotiation process has a relevant impact on the solutions. For example, in (Dias, 2004) it is shown that having only one auction process running at a given time leads to better solutions. When the auctions run in parallel, UAVs can bid with invalid marginal costs if tasks in their local plans are not finally allocated to them. In our approach, a


Algorithm 5.2 Description of the SET algorithm running on-board each UAV for distributed task allocation purposes. The model adopted was presented in Sect. 3.2.

Signature:
  Input:  receive(m)j,i, m ∈ M
  Output: send(m)i,j, m ∈ M

States:
  announcementList  /* list with the tasks to be re-announced to improve the current allocation */
  localPlan         /* current plan as an ordered set of tasks */
  tasksSubset       /* current subset of tasks being negotiated */
  ns ← |tasksSubset|  /* current cardinality of the subset of tasks being negotiated */

Transitions:
  χ1: ns ≤ |announcementList|
    compute tasksSubset to be announced from localPlan
    announce tasksSubset
    while timer is running do
      receive bids
    end while
    calculate the best bid
    if best bid is smaller than the auctioneer bid then
      send tasksSubset to the winner of the auction
    end if
    delete tasksSubset from announcementList

  χ2: receive(m)j,i
    if m is a tasksSubset announcement then
      set ns ← |tasksSubset|
      calculate the optimal position of the tasksSubset in localPlan
      calculate bid (marginal cost)
      send bid
    else if m is a tasksSubset award then
      insert tasksSubset in localPlan in the position calculated before
      add tasksSubset to announcementList
    end if

Tasks:
  send(m)i,j, m ∈ M


single token with a modified round-robin algorithm has been used to guarantee that only one auction runs at a given time.

The token is created by the UAV which starts an auction. Each UAV with tasks to announce requests the token periodically. Once the current auction is over, its owner passes the token to the UAV with the most tasks to be announced. In order to implement this basic idea properly, communication channel characteristics, such as delays and errors in the messages, should be considered. For example, the UAVs assume that the last UAV which has announced a task is the owner of the token. But, due to communication delays, the owner can change after some requests for the token have already been sent to the old owner. To solve this problem, the old owner answers with a rejection message containing the identification of the new owner.

On the other hand, UAV failures are also considered. For example, if the token is requested and no answer is received within a given period of time, the token is assumed lost (due to a failure of the UAV communication system, for example) and a new token is generated.
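The token handling described above can be sketched as follows (an illustrative class with hypothetical names, omitting timers and message transport):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TokenManager:
    """Sketch of the single-token round-robin policy: only the token owner
    may auction, stale requests are redirected with a rejection message, and
    a lost token would be regenerated after a timeout (timeout omitted)."""
    uav_id: str
    owner: Optional[str] = None   # who this UAV believes holds the token

    def on_request(self, requester: str):
        """Run on the believed owner when another UAV asks for the token."""
        if self.owner != self.uav_id:
            # Stale request: answer with a rejection naming the new owner.
            return ("reject", self.owner)
        self.owner = requester
        return ("grant", requester)

    def on_auction_finished(self, backlog: dict):
        """Pass the token to the UAV with the most tasks left to announce."""
        self.owner = max(backlog, key=backlog.get) if backlog else None
        return self.owner
```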

5.5 SIT and SET Simulation Results

Each simulated UAV runs the same ODL software used in the real experiments, whereas the executive layer is simulated. Multi-UAV missions consisting of visiting waypoints and returning to home positions have been used to test the algorithms. Hundreds of simulations with different numbers of UAVs and tasks have been performed to compare the algorithms presented in the previous sections. Moreover, another algorithm has been implemented in order to evaluate the relevance of the local plans to the quality of the solutions. This algorithm will be called NoP (No local Plan) and uses a basic CNP protocol where the UAVs only participate in the auction when they are idle. Furthermore, a brute-force algorithm has been used to compute the global optimal solutions when the sum of UAVs and tasks is below a certain value.

In particular, for each given number of UAVs and waypoints, one hundred missions have been run in a virtual world of 1000×1000 meters using random positions for the UAVs and the waypoints. Each mission has been simulated with the three algorithms mentioned, and Table 5.1 shows the different solutions compared to the global optimum (the global cost is computed as the sum of the individual costs of the UAVs). On the other hand, Fig. 5.2 shows the arithmetic mean of the global cost (and its standard deviation in meters) for the 100 random missions and the different methods implemented.

From the results, it should be noticed that using a local plan during the auction process improves the solutions significantly. Moreover, the previously presented algorithms achieve good results, with a maximum difference of 4.7% with respect to the optimal solution. The SET algorithm computes better solutions on average than the SIT method, but the improvement is small. In fact, it has been found that the improvement of the SET algorithm is very sensitive to the initial locations of the UAVs and the waypoints. Thus, using the mean global cost over one hundred random missions "smooths" this improvement and the difference is not so significant. On the other hand, the standard deviation values are high due to the random nature of the missions, which can lead to important differences in the corresponding global costs.


[Two plots, (a) 3 UAVs and (b) 5 UAVs: global cost (m) versus number of tasks for the NoP, SIT, SET and optimal solutions.]

Figure 5.2: Arithmetic mean of the global cost (and its standard deviation in meters) for the 100 random missions and the different methods implemented.

Table 5.1: Solutions computed with three different distributed task allocation algorithms and the optimal result. The last column represents the arithmetic mean of the global cost for the optimal solution in meters. In the columns labelled as NoP, SIT and SET, the values represent the difference in percentage for each of these algorithms with respect to the optimal solution.

UAVs  Tasks  NoP       SIT     SET     Optimum (m)
3     3       65.20%   1.26%   0.66%   1435.40
3     5      101.02%   1.75%   1.28%   2061.80
3     7      114.73%   4.70%   4.02%   2362.60
3     9      129.13%   6.31%   4.67%   2649.50
5     3       39.50%   0.80%   0.27%   1264.78
5     5      112.37%   2.77%   1.31%   1793.35
5     7      150.16%   2.96%   2.22%   2161.68


Tables 5.2, 5.3 and 5.4 show the results from testing the algorithms with more waypoints for three, five and seven UAVs. In those cases, it was not possible to compute the optimal solution with our brute-force algorithm due to the NP-hard nature of the problem. Therefore, in these tables the percentage values correspond to the difference with respect to the solutions found with the SET algorithm (last column). On the other hand, Fig. 5.3 shows the arithmetic mean of the global cost (and its standard deviation in meters) for the 100 random missions and the three methods implemented.

Table 5.2: Results for missions with three UAVs and different numbers of tasks. The percentages are computed with respect to the solutions of the SET algorithm (last column).

Tasks  NoP       SIT     SET (m)
9      118.90%   1.56%   2773.31
15     152.90%   1.07%   3616.78
20     198.95%   0.83%   4122.79
30     236.98%   1.12%   4979.80
40     294.91%   0.94%   5582.42

Table 5.3: Results for missions with five UAVs and different numbers of tasks. The percentages are computed with respect to the solutions of the SET algorithm (last column).

Tasks  NoP       SIT     SET (m)
9      142.32%   1.41%   2706.59
15     182.65%   0.82%   3459.86
20     205.71%   1.06%   4016.01
30     253.74%   1.54%   4894.02
40     302.06%   3.02%   5559.75

Table 5.4: Results for missions with seven UAVs and different numbers of tasks. The percentages are computed with respect to the solutions of the SET algorithm (last column).

Tasks  NoP      SIT     SET (m)
9       30.38%  0.28%   2465.29
15      43.47%  0.70%   3337.36
20      54.08%  0.53%   3954.57
30      79.22%  0.51%   4794.23
40      95.37%  0.44%   5598.34

With three and five UAVs the results are quite similar: a significant difference between the NoP algorithm and the others (at least 118.9%) and very similar results for the SIT and the SET algorithms, with the largest difference being 3.02%. But with seven UAVs the solutions of the NoP algorithm are better, as expected: with fewer robots, a single robot has a higher probability of executing a high-cost task if the others are not idle.

Figure 5.4 compares the mean number of messages transmitted by each UAV using the different algorithms in one hundred missions with five UAVs. As expected, the number of messages increases with the number of tasks. The SET algorithm needs more messages than the others due to its more complex


[Three plots, (a) 3 UAVs, (b) 5 UAVs and (c) 7 UAVs: global cost (m) versus number of tasks for the NoP, SIT and SET algorithms.]

Figure 5.3: Arithmetic mean of the global cost (and its standard deviation in meters) for the 100 random missions and the three methods implemented.


[Plot: mean of the messages sent by each UAV versus number of tasks for the NoP, SIT and SET algorithms.]

Figure 5.4: Mean of the messages sent per UAV in one hundred missions with five UAVs and different numbers of waypoints.

negotiation protocol, but the number of messages still scales linearly with the number of tasks. On the other hand, the best ratio between the improvement in the solutions and the number of messages required is achieved by the SIT algorithm. The NoP algorithm could be used if the communication among UAVs must be minimized.

5.6 Services and Tasks: S+T Algorithm

As mentioned in the introduction, if an UAV cannot execute a task by itself, it can ask for help and, if possible, another UAV will provide the required service. Those services are generated dynamically and are mandatory to successfully complete their associated task. Thus, a third task allocation protocol (called S+T) has been designed taking this characteristic into account.

As in any other market-based algorithm, there are two roles (bidder and auctioneer) that are played dynamically by the UAVs. The auctioneer is the agent in charge of announcing the tasks and selecting the best bid from all the received bids. The methods associated with each role are detailed in Algorithm 5.3. In the bidding process, when an UAV needs a service to execute a given task, it initially bids only with the cost of the task (because it still does not know the cost of the required services), labelling the message to the auctioneer as "provisional". The auctioneer will evaluate all the bids and, if the best bid requiring a service is better than the best bid without services, the UAV requiring the service will start another auction in order to find which UAVs can provide it. When this second auction is finished, the UAV will send to the auctioneer the complete cost of the task, including the cost of the associated services. Afterwards, the auctioneer will decide which UAV


executes the task based on the updated costs. If a task is allocated to an UAV requiring a service, that service will also be allocated at the same time.

It should be pointed out that both the protocol used to allocate the services and the algorithm used to allocate the tasks are based on the SIT algorithm presented before. The only differences are:

• Services cannot be reallocated dynamically.

• When an UAV that will execute a service changes its local plan, it has to report the new cost of the service to the UAV which required it (which can then start another auction to check whether a different UAV now has a lower cost for that task).

A relevant feature of the protocol is that services can be allocated recursively, i.e., an UAV that executes a service could also require another service to accomplish the first one, and so on for any number of recursive services. Therefore, the algorithm takes full advantage of the possibilities that a team of UAVs can offer (it is even possible to execute missions with a task involving the whole team).

In order to illustrate this characteristic, a surveillance mission will be considered in the following. The mission consists of transmitting information from a certain area to a base station in real time. The UAV has to be within the communication range of the base or within the range of other UAVs acting as communication relays. For instance, Fig. 5.5 shows a configuration with two UAVs acting as communication relays to guarantee the transmission to a base location. The most relevant messages involved in the negotiation process are represented in the diagram depicted in Fig. 5.6.

It should be pointed out that when an UAV announces a service required for a certain task, the UAV that will execute that task cannot take part in the auction process for the service. In our implementation, this UAV just sends a bid message with infinite cost in order to avoid a situation where a task and its required service are allocated to the same UAV.

Figure 5.5: Example of multiple recursive services required to accomplish one task. Figure a) shows the initial locations of the UAVs and the base station, and b) shows the final assignment of tasks and services that allows UAV A to transmit images to the base station using UAVs B and C as communication relays.

The use of services increases the level of cooperation among UAVs and makes it possible to achieve missions that would be impossible using a regular task allocation algorithm, for example, transmitting images from a location that does not have direct coverage with the base of operations. However,


Algorithm 5.3 Description of the S+T algorithm running on-board each UAV for distributed task allocation purposes. The model adopted was presented in Sect. 3.2.

Signature:
  Input:  receive(m)j,i, m ∈ M
  Output: send(m)i,j, m ∈ M

States:
  announcementList  /* list with the tasks to be re-announced to improve the current allocation */
  localPlan         /* current plan as an ordered set of tasks */
  task              /* current task being negotiated */

Transitions:
  χ1: announcementList ≠ ∅
    announce task
    while timer is running do
      receive bids
    end while
    calculate best bid
    if best bid is smaller than the auctioneer bid then
      if best bid requires a service then
        allow UAV to start a new auction in order to find an UAV that can execute that service
      end if
      wait until the second auction is finished and the total cost of the task (including the service cost) is updated
      send task to best bidder taking into account the updated bids
    end if
    delete task from announcementList
    if task has an associated service then
      send a message to the UAV that was going to execute the service in order to delete it from its local plan
    end if

  χ2: receive(m)j,i
    if m is a task announcement then
      compute the optimal insertion point for the task in localPlan
      calculate bid (marginal cost)
      if the task requires a service then
        send initial bid to the auctioneer and indicate that a service is needed
      else
        send bid to the auctioneer
      end if
    else if m allows to ask for a service then
      start a new auction in order to find an UAV that can execute the service
      receive all the bids for the service
      calculate the complete cost for the task including the cost of the service
      send the new cost to the auctioneer
    else if m is a task award then
      insert task in localPlan in the position calculated before
      add task to the announcementList
      if the task needs a service, allocate the service to the UAV that won the auction
      if the cost of any allocated service (in case it exists) has changed then
        send the new cost of the service to the UAV with the task
      end if
    end if

Tasks:
  send(m)i,j, m ∈ M


[Sequence diagram among the HMI and UAVs A, B and C: announce task; bid but need a service; allow to start an auction for the service to find the best bid; announce service; bid but need a service; allow to start an auction for the service to find the best bid; announce service; bid and infinite bid; update cost of the service; update cost of the task; allocate task; allocate service; allocate service.]

Figure 5.6: Messages interchanged in the negotiation process using the S+T algorithm for the example illustrated in Fig. 5.5 (one task requiring two services to be executed).


services can also increase the total time of the mission, since more than one UAV could be used to execute one task and, therefore, fewer tasks can be executed "in parallel". In this context, if an UAV can execute a task by itself at a higher cost than another UAV using services, it must be decided which option is better. The answer to this question depends on the specific application, and two different approaches have been developed to tackle different scenarios:

• In the first approach, tasks have higher priority than services and, therefore, this approach should be applied to scenarios where the goal is to minimize the total execution time of the mission. Basically, when an auctioneer receives bids from UAVs and at least one of them does not require a service, the task will be directly allocated to that UAV. This approach also needs fewer communication messages, since services will only be used when they are mandatory for the execution of a task.

• In the second approach, the priority between the total execution time of the mission and the energy spent by the team can be adjusted with a parameter α defined as

α = P / (1 − P),    (5.5)

where P ∈ [0, 1] is the priority to minimize the total time of the mission. This parameter is used in the computation of the cost of the service

Cs = Co · (1 + α · L),    (5.6)

where Co is the original cost of the service, Cs is the new cost of the service and L is the level of the service, i.e., if it is the first service related to a task, L = 1 (if it is a service that depends on the first service, then L = 2). This second parameter L is used to penalize the use of more than one UAV to execute a task. Moreover, when the use of services is unavoidable, L makes it possible to increase the priority of services that require fewer UAVs.

The value of the parameter P should be selected depending on the type of mission. If it is more important to minimize the energy spent on the mission, and the total time is not so relevant, a value P = 0 (which means α = 0) should be selected. On the other hand, if the total time of the mission should be minimized without requiring the complete execution of all the tasks, a value P = 1 (which means α → ∞) should be chosen. In this case, services will not be considered and the algorithm will behave as the SIT market-based algorithm with local plans and reallocations.
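Equations (5.5) and (5.6) can be combined into a single routine (a sketch with hypothetical names; P = 1 is treated as an infinite cost so that services never win an auction):

```python
def service_cost(original_cost, P, level):
    """New service cost Cs = Co * (1 + alpha * L) with alpha = P / (1 - P),
    where L is the recursion level of the service."""
    if P >= 1.0:
        return float("inf")  # alpha -> infinity: services are never used
    alpha = P / (1.0 - P)
    return original_cost * (1.0 + alpha * level)
```

With P = 0 the cost is unchanged; with P = 0.5 (α = 1) a first-level service doubles its cost and a second-level one triples it.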

5.7 Deadlock Situations

Until now, the allocation process for tasks and services has been presented, but the relation between tasks and services during the execution has not been addressed yet. From a general point of view, when the execution of tasks depends on services, the potential generation of deadlocks should be


considered. It has been observed in simulation that this problem appears frequently, since each UAV only has local information and there is no direct way to know whether its particular local plan will generate a deadlock in the execution of all the tasks and services by the team of UAVs.

This problem is not easy to solve in a distributed manner since UAVs only have knowledge of their own plans. The solution adopted is based on a deadlock detection algorithm presented later in Sect. 6.2.4. The only difference is that the probe message used to detect the deadlocks is generated by an UAV when it wins a task with an associated service, and contains the identifier of that service. Then, the UAV that has won the service will process the message and will send a new message for every task or service that appears in its local plan before the mentioned service and also has a service associated with it.

Once an UAV has detected a deadlock, it will sell the task that generates the loop and will insert it in a black list in order to avoid bidding for it again. The use of a black list prevents the generation of allocation loops when the two best UAVs for a task are involved in an execution deadlock during the integration of the task in their local plans (i.e., they start to reallocate the task to each other, and in both cases an execution loop is generated). The procedure followed to solve the detected deadlocks is outlined in Algorithm 5.4.

Algorithm 5.4 Procedure implemented to solve the deadlocks once detected.

1:  wait until receive a deadlock detection message with a task or service that the UAV has in the local plan
2:  if id message = UAV id then
3:    if task has an associated service then
4:      send a cancel service message
5:    end if
6:    delete task from won-tasks list (deadlock detected)
7:    insert task in black-tasks list
8:    insert task in announcement-tasks list
9:  else
10:   move to the initial position of the local plan
11:   repeat
12:     if task or service has a service associated to it then
13:       send deadlock detection message
14:     end if
15:     next task or service in the local plan
16:   until task or service != task received in the deadlock detection message
17: end if

5.8 S+T Simulation Results

In the simulations, surveillance tasks where UAVs have to send images in real time to a base station from different locations were considered. Therefore, an UAV transmitting images has to be within the communication range of the base station, using its own communication device or one or more UAVs as communication relays (services). For this particular scenario, the execution synchronization


[Plot: mean of the global cost (m) versus communication range (m) for 3, 5 and 7 UAVs.]

Figure 5.7: Mean of the total distance traveled by all the UAVs over one hundred missions with different communication ranges, numbers of UAVs and five tasks.

between tasks and services has been implemented using preconditions, i.e., a task cannot start until all the services associated with it have been executed. Moreover, the UAV or UAVs that execute a service cannot start the next task or service in their local plans until the corresponding task has been completed (postcondition mechanisms).

Many simulations with different numbers of UAVs were performed for the surveillance missions mentioned above with several communication range values in a scenario of 1000×1000 meters. In Fig. 5.7, it can be observed that the total distance traveled by all the UAVs decreases as the communication range increases, since the probability of requiring a service decreases. The total distance traveled by all the UAVs is considered a consistent measurement of the energy spent during the mission. Moreover, the mean of the total distance traveled decreases when the number of UAVs increases, due to the fact that a constant number of tasks is used in all the missions.

Table 5.5 shows the resulting mean values of some parameters in missions with five tasks and different numbers of UAVs and values of the communication range. The number of services executed increases when the communication range of the UAVs decreases and, as a logical consequence, the number of messages received by one UAV and the total distance traveled by all of them also increase, as mentioned above. This means that the communication requirements and the energy needed to execute the mission will be higher when the number of services increases.

On the other hand, simulations have been run with different values of the α parameter, which depends on the priority P ∈ [0, 1] (see Sect. 5.6). As can be seen in Fig. 5.8, one hundred random simulations have been executed for different values of P. P = 0 is an extreme value applied when the user wants to minimize the total distance traveled by all the UAVs in the mission in terms of


Table 5.5: Results with five tasks and different numbers of UAVs and values of the communication range. The mean values from one hundred random missions are shown, where total distance means the distance traveled by all the UAVs and messages received is the number of messages received by one UAV due to the S+T algorithm. Finally, number of services refers to the services executed by one UAV.

UAVs  Comm. range (m)  Total dist. (m)  Messages received  Services
3     600              2145.15           47.96             0.56
3     400              2786.52           80.32             2.44
3     300              3125.23          150.45             4.36
5     1100             1075.23           48.06             0.00
5     600              1099.43           52.30             0.30
5     400              1307.97           85.66             1.36
5     300              1742.34          164.87             3.45
7     1100              609.14           45.06             0.00
7     600               638.42           45.80             0.24
7     400               810.23           79.76             1.24
7     300              1318.31          142.96             2.76

energy consumption and, therefore, the cost of the services is not modified. Also in Fig. 5.8, it can be observed how the maximum distance traveled by one UAV decreases when P increases and, therefore, the time of the mission will be smaller (assuming that all the UAVs move at the same speed) because of the penalization of the costs associated with the services. However, if the execution time is critical, with P = 1.0 the services of the S+T algorithm are not considered and some tasks could be left undone (mission partially accomplished).

In Fig. 5.9, the mean of the number of tasks executed over 100 missions with different values of the communication range and with priority P = 1.0 is shown. Up to six hundred meters, it can be seen that a significant number of tasks cannot be accomplished by the group of UAVs if the use of services is not considered. Therefore, care must be taken when the parameter P is equal to 1.0 and a given mission needs services to execute most of the tasks. In that case, the time of the mission will be minimized but many tasks will not be executed. Thus, it is advisable to use P = 1.0 only when most of the tasks can be executed without services and the execution time of the mission is very critical.

5.9 Conclusions

This chapter has presented the research work carried out on the distributed solution of the task allocation problem. A market-based approach has been adopted to develop three distributed algorithms called SIT, SET and S+T.

The S+T algorithm can be applied when the coverage ranges of the communication devices on board the vehicles are not sufficient to allow continuous communication among them. This is not the case in the AWARE experimentation scenario used to validate the architecture developed, and hence only the simulation results presented in this chapter have been used to validate the approach.



Figure 5.8: Mean of the maximum distance traveled by one UAV over one hundred missions, with 300 and 600 meters as the communication range. The number of UAVs and tasks considered in the missions was five.

Figure 5.9: Mean of the number of tasks executed by all the UAVs over one hundred missions with different values of the communication range. The use of services is not considered in these simulations, i.e., P = 1.0 or α → ∞. In these simulations the number of UAVs and tasks was also five.


On the other hand, the SET algorithm provides an improvement in performance with respect to the SIT method. However, taking into account the larger number of messages that must be exchanged, it was considered reasonable to apply only the SIT algorithm in the missions presented in Chap. 8.


Chapter 6

Plan Merging Process

When considering the execution of the different plans of the vehicles in a multi-UAV platform, the main shared resource is the airspace. Therefore, the plan merging module (see Fig. 3.5) has been designed to detect potential conflicts among the different trajectories and to follow a policy in order to solve them. To do so, this module has to interact with the plan builder module and also with other UAVs to exchange the trajectories involved.

The first part of this chapter presents the conflict avoidance method used to improve the safety conditions in the AWARE scenario, where multiple UAVs share the same airspace. As mentioned in Chap. 3, one of the key design aspects was to impose few requirements on the proprietary vehicles to be integrated in the AWARE platform. Hence, a specification of the particular trajectory or velocity profile during the flight is not considered, and the policy implemented to avoid inter-vehicle collisions is based only on the elementary set of tasks presented in Sect. 3.3.4. The method is distributed and involves negotiation among the different UAVs. It exploits the hovering capabilities of the helicopters and guarantees that each trajectory to be followed by the UAVs is clear of other UAVs before proceeding to its execution.

The last part of the chapter presents a different method that solves the same problem more efficiently in a centralized manner. It requires changing the velocity profile of the UAVs in real time, and thus imposes more requirements on the executive layer of the UAVs. Nevertheless, it has been considered relevant to include a description of this method, as it provides another option that can be applied if a centralized solution is preferred and the UAVs accept such an external velocity profile as a reference.

6.1 Introduction

The variability of the flying conditions, due for example to the wind, the faults that may affect the UAVs, and the presence of other manned aircraft, including teleoperated aerial vehicles that cannot be controlled by the system, demand the implementation of real-time conflict detection and resolution techniques.

The collision avoidance problem in multi-vehicle systems is a well-studied topic in the robotics research community.

One of the first approaches to this problem was proposed in (Kant and Zucker, 1986), where the problem is decomposed into the path planning problem (PPP) and the velocity planning problem (VPP). Once a path has been planned, a velocity profile that avoids collisions along that path is found by means of the proposed VPP method.

The collision avoidance problem for a single mobile robot with mobile obstacles is considered for example in (Tsubouchi and Arimoto, 1994), which presents a method to compute a collision-free trajectory in the (x, y, t) space. First, the authors evaluate the position and speed of the mobile obstacles. Assuming that the obstacles' speeds remain constant, they compute a set of oblique cylinders in the (x, y, t) space to be avoided. The problem is then to find a trajectory connecting the initial position to a vertical line representing the goal.

A speed planning method with mobile obstacle avoidance was presented in (Cruz et al., 1998), where mobile obstacles are included as motion constraints for the vehicle. In (Fujimori and Teramoto, 2000), the direction angle and the velocity of the mobile robots are used as control variables for navigation and collision avoidance, ensuring the avoidance for two vehicles. The method in (Owen and Montano, 2005) computes the trajectory of a vehicle in the velocity space to avoid mobile or static obstacles in its trajectory. The coordinated modification of the trajectories of several robots that could be involved in the collisions is not considered in any of these methods.

In (Ferrari et al., 1997), alternative collision avoidance solution paths are obtained by generating small variations of robot motions in space and time. The method assumes that the vehicles have rather simple dynamics, and does not consider mobile obstacles.

On the other hand, aircraft trajectory planning with collision avoidance is studied in (Richards and How, 2002), where the problem is written as a linear program subject to mixed-integer constraints, known as a mixed-integer linear program (MILP), which can be solved using commercial software. The problem has significant complexity because of the high number of constraints. Furthermore, it does not consider mobile obstacles.

In (Pallottino et al., 2007), a plan is proposed for steering multiple vehicles between assigned independent start and goal configurations while ensuring collision avoidance. All the agents cooperate by following the same traffic rules. They move with a constant velocity, a safety area is defined, and the velocity of movement of the safety area can be zero. However, this method usually leads to the modification of the paths, which might not be needed if the collisions could be avoided by simply modifying the velocity.

When considering real-time aircraft collision avoidance, the easiest strategy is to modify the aircraft altitude (Bicchi and Pallottino, 2000), but airspace is commonly structured in layers and therefore altitude changes are not always possible. Furthermore, most vehicles have significant dynamic limitations that do not allow them to stop or modify their trajectories as fast as needed.

To the best of our knowledge, there are two aspects that have not usually been addressed in previous work: the use of a fully distributed policy to avoid the conflicts based on a negotiation protocol among the UAVs, and practical implementations of the proposed methods on a real multi-UAV platform. The work in the first part of this chapter is intended to contribute in both directions. It describes the distributed method implemented for conflict detection and resolution in the AWARE multi-UAV platform. This approach has been validated during the experiments carried out in May 2009 that are presented later in Chap. 8.

6.2 Distributed Method for Conflict Detection and Resolution

This section describes the distributed algorithm used for conflict detection and resolution in the multi-UAV AWARE platform when sharing the airspace. It is assumed that each UAV has a plan already defined that can be retrieved at any moment from the plan builder module. From the executive level perspective, the plan can be viewed as a list of waypoints to be visited. The hovering capabilities of the UAVs in the AWARE platform are exploited to simplify the solution of the problem, since the only motions that have to be checked against conflicts are the transitions between waypoints. This is the basis of the method presented in this section.

6.2.1 Problem Formulation

Let us consider a platform composed of n UAVs with an initial state free of conflicts. In general, the plan builder/optimizer module generates a plan P_i for each UAV as a set of partially ordered tasks. In the AWARE platform, the main function of the online planner consists in ordering the motion tasks allocated to the UAV. Let us consider the i-th UAV with a set of n_m motion tasks τ_i^k, k = 1, …, n_m, to be executed. Other tasks can also be part of the plan P_i, but only the motion tasks will be considered in the following for spatial conflict detection purposes.

As mentioned above, the hovering capabilities of the UAVs in the AWARE platform are exploited to simplify the solution of the problem, since the only motions that have to be checked against conflicts are the transitions between waypoints. From the elementary tasks presented in Sect. 3.3.4, a set of states S_i = {s_i^1, s_i^2} can be considered taking into account the motion of the i-th UAV:

• State s_i^1: stationary flight around a waypoint P. The UAV can be either waiting for a new motion task or waiting for the next path to be clear.

• State s_i^2: flying between waypoints P^k and P^{k+1}. The straight path Δ_i^k between those waypoints will be considered as the reference trajectory for the i-th UAV.

Hence, conflicts can arise only in the transitions from state s_i^1 to s_i^2. Thus, before proceeding to the s_i^2 state, the i-th UAV has to check two types of potential conflicts with the j-th UAV, depending on its current state s_j:

• Type A (if s_j = s_j^1): potential conflict between the next path Δ_i^k and the current position of the j-th UAV.

• Type B (if s_j = s_j^2): potential conflict between the next path Δ_i^k and the path Δ_j^l currently being followed by the j-th UAV.


Then, the problem to be solved can be formulated as: avoid conflicts of types A and B in the transitions s_i^1 → s_i^2, ∀i = 1, …, n. The distributed algorithm developed to solve it and ensure the clearance of the paths is described in the following section.
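As an illustration, the bookkeeping above can be sketched in a few lines of Python. This is a hypothetical sketch under the notation of this section; the names are not those of the AWARE implementation:

```python
from enum import Enum

class UAVState(Enum):
    S1 = "hovering at a waypoint"      # state s_i^1
    S2 = "flying between waypoints"    # state s_i^2

def required_checks(states):
    """Before the transition s_i^1 -> s_i^2, classify the check the i-th UAV
    must run against every other UAV j: a Type A check against j's current
    position if j is hovering, or a Type B check against j's current path
    if j is flying between waypoints."""
    return {j: ("A" if s is UAVState.S1 else "B") for j, s in states.items()}
```

For instance, `required_checks({2: UAVState.S1, 3: UAVState.S2})` returns `{2: "A", 3: "B"}`.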

6.2.2 Distributed Method for Conflict Resolution

The basic idea of the distributed method proposed is to guarantee that when an UAV is traversing

a path between two consecutive waypoints, that route is clear of other UAVs.

The formal description of the process P_i running on board each UAV to guarantee that only free paths are traversed can be found in Algorithm 6.1. The set of related state variables, along with their initialization (if applicable), can be found in states(P_i). Regarding the transitions trans(P_i), three have been identified (see Algorithm 6.1):

• χ1: triggered when the current task τ_i^k changes its state to MERGING.

• χ2: activated when a request message is received from another UAV.

• χ3: triggered by a reply message received from another UAV.

In the following, we will focus on the first transition χ1. Let us consider a motion task τ_i^k with an associated path Δ_i^k. Initially, the status of the task is ε_i^k = SCHEDULED and all the preconditions for its execution are satisfied. If the i-th UAV were in a non-shared airspace, ε_i^k would change from SCHEDULED to RUNNING and the execution of τ_i^k would start immediately. But, as there are more UAVs in the platform sharing the airspace, an intermediate state called MERGING is considered before starting the execution of the motion task. Once τ_i^k changes its state to MERGING, Algorithm 6.2 is used to check if the associated path Δ_i^k is clear for the i-th UAV. It should be noticed that the second part of the algorithm is used to notify other UAVs that, after the execution of τ_i^k, the path Δ_i^k is clear again.

On the other hand, the i-th UAV also has to manage the request and reply messages received from other UAVs (transitions χ2 and χ3 in Algorithm 6.1). When a request message is received, the UAV has to check if the received path is in conflict, whereas if a reply is received, the counter of positive replies has to be updated. A routine called CheckConflictTypeA (described in Sect. 6.2.3) is used to check if the Δ_j^l path is in conflict with a path requested by the i-th UAV. Another routine called CheckConflictTypeB (also described in Sect. 6.2.3) is used to check if the Δ_j^l path is in conflict with the i-th UAV's current location. Both routines return true only if there is a conflict.

It should be noted that the algorithms that manage the transitions χ1, χ2 and χ3 run asynchronously but operate on a set of common variables. Hence, a mutex is used to serialize access to the common variables when necessary.
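The deferral decision in χ2 amounts to a total order on requests: sequence number first, with the UAV identifier breaking ties, in the style of distributed mutual exclusion. A minimal Python sketch of the predicate follows; the function and argument names are illustrative, not taken from the AWARE code:

```python
def should_defer(my_seq, my_id, requesting, conflicts_position,
                 conflicts_requested_path, req_seq, req_id):
    """Deferral predicate of transition chi2 in Algorithm 6.1.  UAV my_id
    receives request(req_seq, path) from UAV req_id and defers the reply if
    the requested path conflicts with its own position, or with a path it is
    itself requesting while its own request has priority (lower sequence
    number, identifier as tie-break)."""
    if conflicts_position:
        return True
    if requesting and conflicts_requested_path:
        return req_seq > my_seq or (req_seq == my_seq and req_id > my_id)
    return False

# Two UAVs request conflicting paths with the same sequence number:
# the identifier breaks the tie, so exactly one of them defers.
d1 = should_defer(5, 1, True, False, True, 5, 2)  # UAV 1 handles UAV 2's request
d2 = should_defer(5, 2, True, False, True, 5, 1)  # UAV 2 handles UAV 1's request
```

Here `d1` is true and `d2` is false: UAV 1 defers UAV 2's request and proceeds first, which is exactly the asymmetry exploited in Case 3 of the proof of Lemma 6.2.1.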

Lemma 6.2.1. The method guarantees that a path Δ_i^k is executed by the i-th UAV only if it is clear of other UAVs.

Proof. It is assumed that the initial state is free of conflicts. Let us consider two UAVs with unique identifiers i and j. Two cases can be identified:


Algorithm 6.1 Description of the algorithm for distributed conflict resolution, following the model presented in Sect. 3.2.

Signature:
  Input: receive(m)_{j,i}, m ∈ M
  Output: send(m)_{i,j}, m ∈ M

States:
  /* RequestingPath is true whenever the i-th UAV is requesting access to a path */
  RequestingPath ← false
  /* The sequence number chosen by a request originating at the i-th UAV */
  SeqNumber
  /* The number of REPLY messages still expected */
  OutstandingReplyCount
  /* The highest sequence number seen in any request message sent or received */
  HighestSeqNumber ← 0
  /* ReplyDeferred[j] is true when this UAV is deferring a reply to the request message from the j-th UAV */
  for j = 1 to n do
    ReplyDeferred[j] ← false
  end for
  /* Initialize the mutex used for the access to the shared variables */
  init(mutex)

Transitions:
  χ1: ε_i^k = MERGING (see Algorithm 6.2)

  χ2: receive(m)_{j,i} with m = request(x, Δ_j^l)
    /* The message received is a request from the j-th UAV with sequence number x and associated path Δ_j^l */
    HighestSeqNumber ← max(HighestSeqNumber, x)
    ConflictingOwnPosition ← CheckConflictTypeB(Δ_j^l)
    lock(mutex)
    if RequestingPath then
      ConflictingOwnPathRequested ← CheckConflictTypeA(Δ_j^l)
    end if
    Defer ← ConflictingOwnPosition or (ConflictingOwnPathRequested and ((x > SeqNumber) or (x = SeqNumber and j > i)))
    unlock(mutex)
    if Defer then
      ReplyDeferred[j] ← true
    else
      send(m)_{i,j} with m = reply
    end if

  χ3: receive(m)_{j,i} with m = reply
    /* The message received is a REPLY */
    OutstandingReplyCount ← OutstandingReplyCount − 1

Tasks:
  send(m)_{i,j}, m ∈ M


Algorithm 6.2 Algorithm used in the transition χ1 (see Algorithm 6.1) to check if the path Δ_i^k associated to a task τ_i^k is clear for the i-th UAV. Once τ_i^k has been executed, the UAV notifies that the path Δ_i^k is clear again.

1: lock(mutex)
2: RequestingPath ← true
3: SeqNumber ← HighestSeqNumber + 1
4: unlock(mutex)
5: OutstandingReplyCount ← n − 1
6: for j = 1 to n do
7:   if j ≠ i then
8:     send(m)_{i,j} with m = request(SeqNumber, i, Δ_i^k)
9:     /* Send a request message containing our sequence number and our identifier to all other UAVs */
10:  end if
11: end for
12: while OutstandingReplyCount ≠ 0 do
13:   /* Wait for a reply from each of the other UAVs */
14: end while
15: /* The execution of τ_i^k can start */
16: ε_i^k ← RUNNING
17: while ε_i^k ≠ ENDED do
18:   /* Wait until task τ_i^k is finished */
19: end while
20: /* Once τ_i^k has ended, the path Δ_i^k is free again for other UAVs */
21: RequestingPath ← false
22: for j = 1 to n do
23:   ConflictingOwnPosition ← CheckConflictTypeB(Δ_j^l)
24:   if ReplyDeferred[j] and not ConflictingOwnPosition then
25:     ReplyDeferred[j] ← false
26:     send(m)_{i,j} with m = reply
27:     /* Send a reply to the j-th UAV */
28:   end if
29: end for


• The path Δ_i^k requested by the i-th UAV is in conflict with the location of the j-th UAV. In this case, there is no reply to the request message (according to the algorithm for the transition χ2) and the i-th UAV will not proceed with the execution.

• The path Δ_i^k requested by the i-th UAV is in conflict with a different path requested by the j-th UAV. Let us assume the contrary, that at some time the i-th UAV executes a path Δ_i^k while the j-th UAV is in conflict at the same time. Let us examine the message traffic associated with the current cycle of the algorithm that occurred in each UAV just prior to this condition. Each UAV sent a request to the other and received a reply. The following cases can be found:

Case 1: The i-th UAV sent a reply to the j-th UAV's request before choosing its own sequence number. Therefore, i will choose a sequence number higher than j's sequence number. When j received i's request with a higher number, it must have found its own RequestingPath variable to be true, since this variable is set to true before sending a request and i had received that request before sending its own request message. The algorithm then directs j to defer the request and not reply until it has left the path. Then, the i-th UAV could not yet be in the path, contrary to the assumption.

Case 2: The j-th UAV sent a reply to the i-th UAV's request before choosing its own sequence number. This is the mirror image of Case 1.

Case 3: Both UAVs sent a reply to the other's request after choosing their own sequence numbers. Both UAVs must have found their own RequestingPath to be true when receiving the other's request message. Both UAVs will compare the sequence number and the UAV identifier in the request message with their own sequence number and identifier. The comparisons will yield opposite results at each UAV, and exactly one will defer the request until it has finished with its own path, contradicting the assumption.

Therefore, in all cases the algorithm will prevent both UAVs from being in conflict during the execution of a motion task.

In the transition χ2 of Algorithm 6.1, two routines called CheckConflictTypeA and CheckConflictTypeB are used to check if the Δ_j^l path is in conflict with the i-th UAV's motion or position, respectively. The next section presents the geometrical method used in those routines to detect the potential conflicts.

6.2.3 Geometrical Approach for Conflict Detection

This section describes the geometrical method implemented for the detection of potential conflicts

when sharing the airspace. The bounding solid selected and the geometrical algorithm to detect the

conflicts are detailed.

Bounding Solid for Conflict Detection

Let us consider a safety radius r_i around the i-th UAV. The value of this parameter should be selected taking into account different factors:

• The aerodynamic perturbation generated around the UAV: let p_i be the distance from the on-board GPS antenna at which there is a significant perturbation of the air.

• The maximum distance d_i between the GPS antenna and any point of the UAV structure.

• The maximum separation s_i with respect to the reference trajectory, according to the UAV dynamics, control law and maximum perturbations due to the wind in operational conditions.

Then, a conservative value for r_i can be computed as

    r_i = max(p_i, d_i) + s_i    (6.1)

in order to consider the worst case.

Regarding the bounding solid selection for conflict detection, a box of edge length 2r_i centered at the GPS antenna location will be considered for simplicity for the i-th UAV. Then, taking into account the set of states S_i described in Sect. 6.2.1 and the bounding box selected, two types of solids are involved in the potential conflict detection process (see Fig. 6.1):

• Box: for each UAV in stationary flight.

• Rectangular hexahedron: for each path between waypoints.

Figure 6.1: Bounding solids adopted for each motion state of the UAV. a) Box for each UAV in stationary flight around point P (state s_i^1). b) Rectangular hexahedron for each path between the waypoints P^k and P^{k+1} (state s_i^2).

Once the bounding solids have been selected, the method to check the overlapping among them

is presented in the next section.
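As a concrete illustration, the two bounding solids can be built from the waypoints and the safety radius of Eq. (6.1). The following is a hedged Python sketch, not the AWARE implementation: the helper names are hypothetical, a box is represented as (center, unit axes, half-extents), and extending the path hexahedron a distance r past each endpoint is a design choice of this sketch so that the hovering boxes at the endpoints are covered.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def safety_radius(p_i, d_i, s_i):
    """Conservative safety radius of Eq. (6.1): r_i = max(p_i, d_i) + s_i."""
    return max(p_i, d_i) + s_i

def hover_box(P, r):
    """Cube of edge 2r centered on the hovering point P (state s_i^1)."""
    return tuple(P), ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)), (r, r, r)

def path_hexahedron(Pk, Pk1, r):
    """Rectangular hexahedron around the straight path Pk -> Pk1 (state s_i^2):
    first axis along the path, 2r x 2r cross-section, extended r past each end."""
    v = tuple(b - a for a, b in zip(Pk, Pk1))
    L = math.sqrt(sum(c * c for c in v))
    u = tuple(c / L for c in v)
    # Build two unit vectors orthogonal to the path direction.
    w = (0.0, 0.0, 1.0) if abs(u[2]) < 0.9 else (1.0, 0.0, 0.0)
    e1 = cross(u, w)
    n1 = math.sqrt(sum(c * c for c in e1))
    e1 = tuple(c / n1 for c in e1)
    e2 = cross(u, e1)
    center = tuple((a + b) / 2.0 for a, b in zip(Pk, Pk1))
    return center, (u, e1, e2), (L / 2.0 + r, r, r)
```

For example, with p_i = 1.0 m, d_i = 0.5 m and s_i = 0.3 m, `safety_radius` gives r_i = 1.3 m, and the path from (0, 0, 10) to (10, 0, 10) yields a hexahedron centered at (5, 0, 10) with half-extents (6.3, 1.3, 1.3).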


Geometrical Method for Solid Overlapping Detection

Since the chosen bounding solids are convex objects, it is possible to apply the method of separating axes, which makes it possible to determine whether or not two convex objects in space are intersecting. Extensions of this method can handle moving convex objects and are useful for predicting collisions of the objects and for computing the first time of contact. This approach has been adopted in the current implementation as it also allows flexibility for future extensions of this work.

The test for non-intersection of two convex objects is simply stated: if there exists a line for which the intervals of projection of the two objects onto that line do not intersect, then the objects do not intersect. Such a line is called a separating line or, more commonly, a separating axis.

As the translation of a separating line is also a separating line, let us consider a line containing the origin and with unit-length direction u. The projection of a compact, convex set C onto this line defines an interval I given by

    I = [φ_min(u), φ_max(u)] = [min{u · v : v ∈ C}, max{u · v : v ∈ C}].    (6.2)

Lemma 6.2.2. Given two compact convex sets C0 and C1, there is no intersection if it is possible to find a direction u such that the projection intervals I0 and I1 do not intersect. Following the previous notation, this condition can be expressed as

    φ_min^0(u) > φ_max^1(u)  or  φ_max^0(u) < φ_min^1(u),    (6.3)

where the superscript corresponds to the index of the convex set.
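Equations (6.2) and (6.3) translate directly into code: project the vertex sets of the two convex polyhedra onto a candidate direction and compare the resulting intervals. A small illustrative sketch (names are hypothetical):

```python
def projection_interval(vertices, u):
    """Interval I = [phi_min(u), phi_max(u)] of Eq. (6.2) for the convex
    hull of the given vertices, projected onto direction u."""
    dots = [sum(ui * vi for ui, vi in zip(u, v)) for v in vertices]
    return min(dots), max(dots)

def separated_along(verts0, verts1, u):
    """Test of Eq. (6.3): true if u is a separating axis for the two sets."""
    lo0, hi0 = projection_interval(verts0, u)
    lo1, hi1 = projection_interval(verts1, u)
    return lo0 > hi1 or hi0 < lo1

# Two unit squares in the plane z = 0, two units apart along x:
sq0 = [(x, y, 0.0) for x in (0, 1) for y in (0, 1)]
sq1 = [(x + 2, y, 0.0) for x in (0, 1) for y in (0, 1)]
```

Here the x axis separates the squares (intervals [0, 1] and [2, 3]), while the y axis does not (both intervals are [0, 1]); the sets are disjoint because at least one separating axis exists.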

Lemma 6.2.3. The Boolean result of the inequalities in (6.3) is invariant to changes in the modulus or sign of u.

Proof. Let us consider a scale factor λ > 0 for the unit vector u. Then, φ_min(λu) = λφ_min(u) and φ_max(λu) = λφ_max(u), so the result of the inequalities in (6.3) does not change. On the other hand, if u is replaced by the opposite vector −u, then φ_min(−u) = −φ_max(u) and φ_max(−u) = −φ_min(u), and hence the result does not change either.

However, if u is not a unit vector, the intervals computed in the separating axis tests are not the regular projections of the object onto the line (instead, they are scaled projections).

Our convex objects for the test are both convex polyhedra in 3D. In this particular case, only a finite set of directions needs to be considered for the separation tests. That set includes the normal vectors to the faces of the polyhedra and the vectors generated by the cross product of two edges, one from each polyhedron. A formal proof is not provided here, but the intuition is as follows: if the two polyhedra are just touching with no interpenetration, then the contact is one of face-face, face-edge, face-vertex, edge-edge, edge-vertex, or vertex-vertex. Hence, by testing the mentioned directions it is possible to detect potential conflicts.

In particular, the bounding objects selected for conflict detection are rectangular hexahedra. This polyhedron fits the rectilinear segments usually adopted for the reference paths and allows a simple method to check the intersections. The implemented method is detailed in the following.


Let C_A and C_B be two rectangular hexahedra with attached frames {A} and {B} respectively, located at their centroids and aligned with their edges. Given a direction defined by a vector d_G, the objective is to check if it is a separating axis for C_A and C_B. Let us consider a vector p_G from the centroid of C_A to the centroid of C_B. The lengths of the projections of C_A, C_B and p_G in the direction of d_G will be denoted by λ_A, λ_B and λ_p respectively. Then, using Lemmas 6.2.2 and 6.2.3, the condition to check if a given direction is a separating axis can be reformulated in a more convenient way for its computation as follows:

    λ_A/2 + λ_B/2 < λ_p.    (6.4)

Let us scale both orthonormal bases using the dimensions of each polyhedron (width, length and height) in order to generate two orthogonal bases {a_G^1, a_G^2, a_G^3} and {b_G^1, b_G^2, b_G^3}. Then, the terms in (6.4) can be computed as follows:

    λ_A/2 = Σ_{k=1}^{3} |a_G^k · d_G|

    λ_B/2 = Σ_{k=1}^{3} |b_G^k · d_G|    (6.5)

    λ_p = |p_G · d_G|.

In order to simplify the computation of all the required projections, it is convenient to use {A} or {B} as the reference frame. Each frame has an orthonormal basis, {â_G^1, â_G^2, â_G^3} and {b̂_G^1, b̂_G^2, b̂_G^3} respectively, both expressed in the global frame {G}. If {A} is used as the reference, then the rotation matrix that changes any vector from the global frame to the {A} frame is given by

    R_GA = [ (â_G^1)^T ; (â_G^2)^T ; (â_G^3)^T ].    (6.6)

This matrix makes it possible to compute the {B} basis and the p_G vector with respect to the {A} local frame using the following expressions:

    [ (b̂_A^1)^T ; (b̂_A^2)^T ; (b̂_A^3)^T ] = R_GA [ (b̂_G^1)^T ; (b̂_G^2)^T ; (b̂_G^3)^T ]    (6.7)

    v_A = R_GA v_G.    (6.8)

Then, using {A} as the local reference frame and (6.5) with the vectors expressed in {A}, it is possible to derive an algorithm to check if two rectangular hexahedra C_A and C_B are disjoint. The basic idea is to test whether any of the six principal axes or their nine cross products forms a separating axis (see Algorithm 6.3). The corresponding computations are greatly simplified if {A} is used as the reference frame, as mentioned before.


Algorithm 6.3 Algorithm to check whether two rectangular hexahedra C_A and C_B overlap. The basic idea is to test whether any of the six principal axes or their nine cross products forms a separating axis: the algorithm returns false as soon as one is found (the hexahedra are disjoint), and true otherwise.

1: for i = 1 to 3 do
2:   if Σ_{k=1}^{3} |a_A^k · a_A^i| + Σ_{k=1}^{3} |b_A^k · a_A^i| < |p_A · a_A^i| then
3:     return false
4:   end if
5: end for
6: for i = 1 to 3 do
7:   if Σ_{k=1}^{3} |a_A^k · b_A^i| + Σ_{k=1}^{3} |b_A^k · b_A^i| < |p_A · b_A^i| then
8:     return false
9:   end if
10: end for
11: for i = 1 to 3 do
12:   for j = 1 to 3 do
13:     if Σ_{k=1}^{3} |a_A^k · (a_A^i × b_A^j)| + Σ_{k=1}^{3} |b_A^k · (a_A^i × b_A^j)| < |p_A · (a_A^i × b_A^j)| then
14:       return false
15:     end if
16:   end for
17: end for
18: return true
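Under the box representation (center, unit axes, half-extents), Algorithm 6.3 can be sketched in Python. The computation follows Eqs. (6.4)-(6.8): B's axes and the centroid offset are first expressed in A's frame, and the fifteen candidate directions are then tested. This is an illustrative sketch, not the AWARE implementation:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def obb_overlap(box_a, box_b):
    """Separating-axis test for two oriented boxes given as
    (center, unit axes, half-extents).  Returns False as soon as one of
    the 6 face normals or 9 edge cross products separates the boxes."""
    (ca, ua, ha), (cb, ub, hb) = box_a, box_b
    p = tuple(y - x for x, y in zip(ca, cb))
    p_a = tuple(dot(u, p) for u in ua)                 # Eq. (6.8)
    b_a = [tuple(dot(u, b) for u in ua) for b in ub]   # Eq. (6.7)
    e_a = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
    scaled_a = [tuple(ha[k] * c for c in e_a[k]) for k in range(3)]
    scaled_b = [tuple(hb[k] * c for c in b_a[k]) for k in range(3)]
    axes = e_a + b_a + [cross(e, b) for e in e_a for b in b_a]
    for d in axes:
        if dot(d, d) < 1e-12:          # degenerate cross product: skip
            continue
        lam_a = sum(abs(dot(a, d)) for a in scaled_a)
        lam_b = sum(abs(dot(b, d)) for b in scaled_b)
        if lam_a + lam_b < abs(dot(p_a, d)):   # Eq. (6.4): separating axis
            return False
    return True
```

For example, two axis-aligned boxes with unit half-extents overlap when their centers are 1.5 m apart along x (1 + 1 > 1.5) and are disjoint when they are 2.5 m apart (1 + 1 < 2.5).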


Algorithm 6.3 is the core of the routines CheckConflictTypeA and CheckConflictTypeB, which are used to check if the Δ_j^l path is in conflict with the i-th UAV's motion or position, respectively. In the former case, the rectangular hexahedra associated to Δ_j^l and Δ_i^k (the current path being executed by the i-th UAV) are compared, whereas in the latter, Δ_j^l is compared with the bounding box associated to the i-th UAV's hovering location.

6.2.4 Deadlock Detection and Resolution

Depending on the locations, the requested paths and the timing of the requests, deadlocks can arise in the conflict resolution procedure previously described. For example, Fig. 6.2 shows a configuration with four UAVs (U_i, i = 1, …, 4) in a deadlock. All the UAVs are hovering and their planned trajectories are straight paths to different waypoints marked as crosses. It can be seen that the four UAVs will remain hovering due to the deadlock.

In Fig. 6.2, U3 has a planned straight path in conflict with U1. Then, U3 has to wait until U1 moves out of the bounding box of its path, and this relation will be denoted as U3 → U1.

In general, the dependency relationship among UAVs with regard to the paths will be represented by a directed graph, known as the wait-for graph (WFG) (Singhal, 1989), where each node represents an UAV and an arc goes from an UAV waiting for a region of airspace to the UAV holding that region. A deadlock corresponds to a cycle in the WFG and, thus, the two terms are used interchangeably in the following. Once a cycle is formed in the state graph, it persists until it is detected and broken. On the other hand, cycle detection can proceed concurrently with the normal activities of the system; therefore, it does not have a serious effect on system throughput.

For example, Fig. 6.3 shows the wait-for graph associated to the configuration depicted in Fig. 6.2. A cycle can be identified (highlighted with a grey continuous line): U1 → U2 → U3 → U1.
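The deadlock detection algorithm used on the platform (Lee and Kim, 2001) is distributed, but the notion of a cycle in the WFG is easy to illustrate with a centralized depth-first search over a graph like that of Fig. 6.3. The sketch below is for illustration only; the edge chosen for U4 is an assumption for the example:

```python
def find_cycle(wfg):
    """Return one cycle in a wait-for graph {uav: [uavs it waits for]},
    or None if the graph is acyclic.  Centralized DFS for illustration;
    this is not the distributed algorithm of (Lee and Kim, 2001)."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {u: WHITE for u in wfg}
    stack = []

    def dfs(u):
        color[u] = GREY
        stack.append(u)
        for v in wfg.get(u, ()):
            if color.get(v, WHITE) == GREY:        # back edge: cycle closed
                return stack[stack.index(v):] + [v]
            if color.get(v, WHITE) == WHITE:
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        stack.pop()
        return None

    for u in list(wfg):
        if color[u] == WHITE:
            found = dfs(u)
            if found:
                return found
    return None

# Deadlock of Fig. 6.2: U1 -> U2 -> U3 -> U1, with U4 waiting on U3 (assumed).
wfg = {1: [2], 2: [3], 3: [1], 4: [3]}
```

Here `find_cycle(wfg)` returns the cycle [1, 2, 3, 1], while removing the U3 → U1 edge makes the graph acyclic and yields None (U4 is then only transitively waiting, not deadlocked).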

A survey of the different approaches to detect deadlocks in distributed systems can be found in (Singhal, 1989). Our objective was to implement a correct distributed deadlock detection algorithm, i.e. one where all true deadlocks are detected and no deadlock is reported falsely, under the only assumption that messages are received correctly and in order. Analytic performance evaluation of the algorithm is difficult to achieve for the following reasons: the random nature of the wait-for graph topology, the invocation of deadlock detection activities even though there is no deadlock, and the initiation of deadlock detection by several processes in a deadlock cycle.

Under the above mentioned requirements, the algorithm presented in (Lee and Kim, 2001) was selected for implementation on our multi-UAV platform. The next section describes this method.

Algorithm

It is assumed that an UAV is able to communicate with any other in the platform. This assumption

is valid for example in the experimentation scenario described later in Chap. 8. An UAV can be

blocked or active at any instant. When an UAV issues a request to execute a given path, it switches

from the active to the blocked state. The UAV returns to active only when it receives a reply

message from all the other UAVs (see Sect. 6.2.2).


6.2 Distributed Method for Conflict Detection and Resolution

Figure 6.2: Top view of a configuration with four UAVs in a deadlock. The triangles represent the UAVs, whereas the crosses represent the waypoints. The allocation between UAVs and waypoints is depicted with dashed arrows. Finally, the rectangles around each rectilinear planned path are the bounding boxes used to check potential conflicts. It can be seen that the four UAVs will remain hovering due to the deadlock.


Plan Merging Process

Figure 6.3: Wait-for graph associated with the configuration depicted in Fig. 6.2. A cycle is highlighted with a grey continuous line: U1 → U2 → U3 → U1. On the other hand, U4 is transitively waiting due to the deadlock (grey dashed line).

A blocked UAV is recognized in the WFG as a node that has at least one outgoing edge to

another node. Each outgoing edge models the fact that the UAV made a path request and is waiting

for the reply message. Once received, the edge disappears. An active UAV has no outgoing edge in

the WFG. If there is an edge from nodes p to q in the WFG, denoted by (p, q), q is called a successor

of p. If one or more edges are on a path from p to q, then q is said to be reachable from p. In the

following, the terms UAV and node are used interchangeably.

For ease of discussion, we classify blocked UAVs in the system into two types: deadlocked and

simply blocked. Those UAVs which belong to a cycle in the WFG are called deadlocked, whereas a

simply blocked UAV is waiting for reply messages, but does not belong to any cycle in the WFG. In

particular, those simply blocked UAVs which have a directed path to a cycle in the WFG are said

to be transitively waiting for the deadlock. That is, an UAV transitively waiting for a deadlock does

not belong to any cycle in the WFG, but it will be blocked forever unless the deadlock is resolved.

For instance, U4 is transitively waiting for the deadlock involving U1, U2 and U3 in Fig. 6.3.

The distributed deadlock detection algorithm discussed in the following suggests the use of a

time-out as a way of reducing the overhead of algorithm execution. If a deadlock is found by the

algorithm, it is resolved by aborting one of the deadlocked processes. In order to do so, the selected

UAV changes its altitude to clear the path in which it is an obstacle for other UAVs.

Since more than one process might be timed out at approximately the same time, there can

be several independent algorithm executions in the system at an instant. Hence, an UAV may be

involved in more than one algorithm execution at a time. To distinguish among these executions,

each execution of the algorithm is associated with a unique identifier, which comprises the initiator

identifier plus local time at the site of the initiator. Such an identifier is carried by each probe

generated by the algorithm.
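As a minimal sketch of this identifier scheme, the snippet below builds (initiator, local time) pairs; a simple event counter stands in for the initiator's local clock, and the function name is illustrative rather than taken from the thesis.

```python
from itertools import count

_local_clock = count()  # stands in for the local time at the initiator's site

def execution_id(initiator):
    """Unique identifier of one run of the detection algorithm:
    the initiator's id plus the initiator's local time."""
    return (initiator, next(_local_clock))

a = execution_id("U1")
b = execution_id("U1")
print(a != b)  # True: two initiations by the same UAV remain distinguishable
```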

(Chandy et al., 1983) developed a distributed deadlock detection algorithm which is one of the

representative algorithms using probes for the detection of deadlocks. Its basic idea is that the

initiator of the algorithm propagates probes along the edges of the WFG and declares a deadlock

upon receiving its own probe back. In order to send only one probe along an edge per execution,


each node in the WFG maintains a data structure recording whether it has sent out probes to its

successors for the execution. A probe is sent along an edge only if the edge has not delivered the

probe originated from the same initiator yet. As in most other schemes using probes, the algorithm of (Chandy et al., 1983) has the drawback that a deadlock is detected only by one of the deadlocked

nodes upon initiating the algorithm. This leads to the waste of probes which are generated by the

processes transitively waiting for the deadlock.

The algorithm proposed in (Lee and Kim, 2001) overcomes such a disadvantage, and therefore, it

has been chosen in our implementation. The algorithm declares deadlocks upon finding back edges

in a distributed search tree constructed by the propagation of probes. The tree is built as follows:

The initiator of the algorithm, which becomes the root of a tree, sends out probes, called ask, to all

of its successors at once. If a node receives the probe for the first time, it becomes a child of the

sender of the probe. The probe is then further propagated until it reaches an executing node or a

tree node that has already received a probe.

In order to identify back edges in the tree, the algorithm introduces the notion of a path string, which is a combination of bits. Each tree node is assigned a unique path string to represent the level of the node in the tree and to distinguish one branch from another. Consequently, path strings make it possible to identify not only back edges but also other types of edges, such as cross and forward edges. The ask probe carries the identifier of the candidate victim, which has the lowest

priority among those nodes visited. If the candidate victim is inside the detected deadlock cycle, it

will receive an abort message. However, when a node finds a deadlock upon receiving a probe, the

carried lowest priority process may not be inside the deadlock. An example would be the case when

the initiator is transitively waiting for the deadlock and has the lowest priority among those which

delivered the probe. If the lowest priority process is waiting outside a deadlock, its abortion would

not resolve the deadlock. The process which detects a deadlock does not know from the information

carried by the ask whether the carried lowest priority process is inside the cycle. In order to find that

out, the deadlock detection message ask carries the path string of the lowest priority process along

with its identifier. Since the path string implies the level of the node in the tree, the node detecting a deadlock (i.e., a back edge) can find out whether it is a descendant or an ancestor of the lowest priority process carried by the probe by comparing its own path string with that of the lowest priority

process. The lowest priority process is inside the cycle if it is a descendant of the process that has

detected the cycle. Otherwise, the algorithm needs to find out the lowest priority process among the

deadlocked processes. For this purpose, the process which detected the cycle sends a probe named

search to the successor in the cycle. The search is passed by all the processes in the cycle while

carrying the lowest priority process among them.
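A small sketch of the path-string bookkeeping may help. Assuming the rule of procProbing in Algorithm 6.4 (append ⌈log2 n⌉ bits for n successors, one bit for a single successor, incrementing per successor), path strings can be generated and the ancestor/descendant test reduces to a prefix check:

```python
from math import ceil, log2

def child_strings(pstr, n):
    """Path strings assigned to the n successors of a node with path string
    pstr (sketch of the procProbing rule: append ceil(log2 n) bits, one bit
    for a single successor, incrementing per successor)."""
    width = 1 if n == 1 else ceil(log2(n))
    return [pstr + format(k, "0{}b".format(width)) for k in range(n)]

def is_prefix(a, b):
    """Path string a is a prefix of b: the node with a is an ancestor of b."""
    return b.startswith(a)

root = ""                          # the initiator carries the empty string (lambda)
c0, c1 = child_strings(root, 2)    # '0' and '1'
g0, = child_strings(c0, 1)         # '00'
# An ask probe travelling from the node with string g0 back to the node with
# string c0 closes a back edge (c0 is a prefix of g0): a deadlock is declared.
print(is_prefix(c0, g0), is_prefix(c1, g0))  # True False
```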

A significant advantage of the algorithm is that not only a deadlocked node but also a node which is transitively waiting for a deadlock can detect the deadlock, provided that a back edge is formed among the deadlocked nodes in the constructed tree. Algorithm 6.4 shows a description of the method

implemented.

In the following, the deadlock duration for this algorithm is discussed. In our model, messages

arrive at the destination in the order in which they are sent without any loss or duplication. It


Algorithm 6.4 Description of the algorithm for deadlock detection by (Lee and Kim, 2001) implemented following the model presented in Sect. 3.2.

Signature:
  Input:  receive(m)j,i, m ∈ M
  Output: send(m)i,j, m ∈ M

States:
  pstri ← λ; procProbing(i, pstri);

Transitions:

χ1: receive(m)j,i with m = ask(succ.pstr, pstrj, lowest.id, lowest.pstr)
  if i has left the path requested by j then
    discard the probe;
  else if fatheri = 0 and i is not the initiator then
    /* a tree edge is found */
    fatheri ← j; pstri ← succ.pstr;
    if there is a successor then
      if i has lower priority than lowest.id then
        procProbing(i, pstri);
      else
        procProbing(lowest.id, lowest.pstr);
      end if
    end if
    if pstri is a prefix of pstrj then
      /* a deadlock is found */
      if pstri is a prefix of lowest.pstr then
        /* lowest.id is inside the cycle */
        send an abort message to lowest.id;
      else
        /* lowest.id is outside the cycle */
        send(m)i,k with m = search(pstrj, i) and succ.pstri[k] is a prefix of pstrj;
      end if
    end if
  end if

χ2: receive(m)j,i with m = search(longest.pstr, lowest.id)
  if i has lower priority than lowest.id then
    lowest.id ← i
  end if
  if pstri = longest.pstr then
    /* searched all the processes in the detected cycle */
    send an abort message to lowest.id;
  else
    send(m)i,k with m = search(longest.pstr, lowest.id) and succ.pstri[k] is a prefix of longest.pstr;
  end if

procedure procProbing(lowest.id, lowest.pstr)
  n ← number of successors;
  if n = 1 then
    succ.pstr ← pstri || '0';
  else
    succ.pstr ← pstri || (⌈log2 n⌉ number of 0's);
  end if
  for each successor k do
    succ.pstri[k] ← succ.pstr;
    send(m)i,k with m = ask(succ.pstr, pstri, lowest.id, lowest.pstr);
    succ.pstr ← addition of succ.pstr and 1, left-padding with |succ.pstr| − 1 number of 0's;
  end for
end procedure

Tasks:
  send(m)i,j, m ∈ M


Figure 6.4: Example of deadlock duration.

takes Tm time to transmit a message from one node to another. A usual performance index is the

deadlock duration, which is the elapsed time until a deadlock is detected after it is formed. Deadlock

duration is only dependent upon inter-UAV communication of deadlock detection messages. Hence,

for simplicity, we assume that it takes Tm time to deliver a deadlock detection message from a node

to another.

A deadlock detection algorithm is executed by an UAV upon receiving a message used by the

algorithm from other UAVs or upon initiation of the algorithm. If a deadlock is found by the

algorithm, a proper victim UAV is selected to resolve the deadlock.

Figure 6.4 shows an example of deadlock duration given the timing sequence of blocked lock requests. Each UAV Ui invokes the algorithm upon waiting T0 time on the blocked request. Blocked UAVs initiate the algorithm at tsi = ti + T0 due to their lock requests at ti. Tei indicates the corresponding algorithm execution time until the deadlock is detected. Notice that in the figure, Te1 is longer than Te2 or Tek. This is because U1 is assumed to be transitively waiting for the deadlock while U2 and Uk are deadlocked; it takes additional time for a probe generated by an initiator which is transitively waiting for the deadlock to reach any deadlocked node. Even though U1 starts the algorithm before U2, U2 detects the deadlock earlier. Accordingly, as shown in the figure, Td2|tk is shorter than Td1|tk, where Tdi|tk is the deadlock duration resulting from the algorithm execution initiated by Ui when a deadlock occurs at tk.

The figure shows that earlier initiation of the algorithm does not guarantee faster deadlock

detection. It is not difficult to see that the duration of deadlock is determined by the earliest

deadlock detection time. If the deadlock duration turns out to be Td1|tk , then this implies that the

earlier execution of the algorithm initiated by U2 fails to detect the deadlock and that the initiation

by U1 is able to detect it.

In the next section, a different method to solve the same problem more efficiently in a centralized


manner is presented. It requires changing the velocity profile of the UAVs in real time and thus imposes more requirements on the executive layer of the UAVs. Nevertheless, it has been considered relevant to include a description of this method as another option, to be applied if a centralized solution is preferred and the UAVs allow this kind of velocity control in real time.

6.3 Improvements Based on a Centralized Planner and theVelocity Profile

In this section, a centralized approach to solve the conflict avoidance problem with multiple UAVs

sharing the same area is described. It is also considered that the UAVs may be sharing the airspace with other aircraft or teleoperated vehicles that are not integrated in the system (mobile obstacles). Taking into account that the initial trajectories of the UAVs are designed to cooperatively execute particular tasks in a given mission, the objective of the algorithm is to find a collision-free solution that changes the initial trajectories as little as possible by changing the velocity

profile of the vehicles. The dynamic model and physical constraints of the aerial vehicles are also

considered.

Then, the method modifies the velocity profile of the UAVs under control while maintaining the paths initially planned. It is based on the combination of a Search Tree algorithm, which finds a solution if one exists, and the minimization of a cost function which tries to find the nearest solution to the initially planned trajectories of the UAVs. The search tree algorithm provides an initial valid order of pass for the vehicles involved in a given conflict and allows the minimization problem to be formulated as a Quadratic Programming (QP) problem that can be efficiently solved. A model for the UAVs different from a helicopter model has been considered, to avoid simpler solutions based on hovering capabilities.

In the next section, the conflict avoidance problem formulation is presented. It also differs from

the previous distributed approach in the conflict detection strategy applied: it is based on a common

discretization of the space in cubic cells shared by all the vehicles.

6.3.1 Problem Formulation

The conflict detection problem allows a wide range of possible solutions. One option consists of a discretization of the 3D space using cubic cells. It does not allow an optimal solution to be found, but it makes it possible to use fast search algorithms to reach feasible solutions.

A trajectory can be described as a sequence of cubic cells, each of them with an associated

entrance and departure time. Therefore, a conservative policy to ensure a collision-free trajectory

would be to allow only one vehicle in each cell along all the trajectories. It is assumed that each

UAV knows the trajectories of the other UAVs as a list of cells through a 4D trajectories interface. This makes it easier to check whether a conflict will occur, because each UAV simply has to find a temporal overlap between a cell of its trajectory and a cell that belongs to another UAV's trajectory.

The proposed 3D grid decreases the data transmission requirements among vehicles, because

they do not have to transmit the full trajectory in the continuous (x, y, z, t) space. This strategy


also decreases the time needed to detect potential collisions.

Let us consider a scenario with N UAVs whose trajectories pass through M cubic cells (each cell with a unique identifier). If two or more UAVs pass through the same cell, it will be considered a conflict. Let Cik be a variable whose value is p if the i-th UAV has a conflict in the k-th cubic cell of its trajectory, where p is the unique identifier of this cell. Otherwise, Cik is 0 if there is no conflict.

Let $t_{ik}$ be the amount of time that the $i$-th UAV spends in cell $k$, and let $T_{ik}$ be the time interval $\left[\sum_{j=1}^{k-1} t_{ij}, \; \sum_{j=1}^{k} t_{ij}\right]$. If $\gamma_{ik}$ is the identifier of the $k$-th cell in the trajectory of the $i$-th UAV, there is a collision in conflict $p$ if

$$\bigcap_{\gamma_{ik} : C_{ik} = p} T_{ik} \neq \emptyset, \qquad i = 1 \ldots N, \; k = 1 \ldots M. \qquad (6.9)$$
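The interval bookkeeping and the overlap test of (6.9) can be sketched as follows; the trajectories, cell identifiers and dwell times are invented example values:

```python
from itertools import accumulate, combinations

def cell_intervals(traj):
    """Map cell id -> (t_entry, t_exit) for one trajectory, where `traj` is a
    list of (cell_id, dwell_time) pairs (the intervals T_ik of the text)."""
    exits = list(accumulate(t for _, t in traj))
    entries = [0.0] + exits[:-1]
    return {cell: (a, b) for (cell, _), a, b in zip(traj, entries, exits)}

def overlaps(iv1, iv2):
    """Non-empty intersection of two occupancy intervals."""
    return max(iv1[0], iv2[0]) < min(iv1[1], iv2[1])

def collisions(trajs):
    """Return {(i, j, cell)} for every pair of UAVs whose occupancy intervals
    intersect in a shared cell (the collision condition of eq. 6.9)."""
    ivs = {i: cell_intervals(t) for i, t in trajs.items()}
    hits = set()
    for i, j in combinations(trajs, 2):
        for cell in set(ivs[i]) & set(ivs[j]):
            if overlaps(ivs[i][cell], ivs[j][cell]):
                hits.add((i, j, cell))
    return hits

trajs = {
    1: [((8, 10, 10), 4.0), ((8, 11, 10), 3.0)],
    2: [((8, 9, 10), 2.0), ((8, 10, 10), 4.0)],
}
print(collisions(trajs))  # {(1, 2, (8, 10, 10))}
```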

Regarding the UAV model, a helicopter model has not been considered, in order to avoid simpler solutions based on hovering capabilities. The model adopted (McLain and Beard, 2005) is given by

$$\dot{x}_i = v_i \cos(\psi_i)$$
$$\dot{y}_i = v_i \sin(\psi_i)$$
$$\dot{\psi}_i = \alpha_\psi (\psi_i^c - \psi_i)$$
$$\dot{v}_i = \alpha_v (v_i^c - v_i)$$
$$\ddot{h}_i = -\alpha_{\dot{h}} \dot{h}_i + \alpha_h (h_i^c - h_i) \qquad (6.10)$$

where $\alpha_\psi$, $\alpha_v$, $\alpha_{\dot{h}}$ and $\alpha_h$ are known parameters that depend on the particular characteristics of the UAV, $(x_i, y_i, h_i)$ are the 3D coordinates and $\psi_i$ is the heading of the UAV. Regarding the heading rate and velocity, the constraints considered are

$$-c < \dot{\psi}_i < c, \qquad v_{min} < v_i < v_{max}, \qquad (6.11)$$

where c, vmin and vmax are positive constants that depend on the dynamics of the particular UAV.
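A simple Euler integration of the model (6.10) under the constraints (6.11) can be sketched as below; the gains, limits and commands are invented illustrative values, not parameters from the thesis:

```python
from math import cos, sin

def step(state, cmd, p, dt=0.05):
    """One Euler step of the state (x, y, psi, v, h, hdot) of model (6.10)
    under commands cmd = (psi_c, v_c, h_c), with the limits of (6.11)."""
    x, y, psi, v, h, hdot = state
    psi_c, v_c, h_c = cmd
    dpsi = max(-p["c"], min(p["c"], p["a_psi"] * (psi_c - psi)))  # heading-rate limit
    dv = p["a_v"] * (v_c - v)
    dhdot = -p["a_hdot"] * hdot + p["a_h"] * (h_c - h)            # second-order altitude
    x += v * cos(psi) * dt
    y += v * sin(psi) * dt
    psi += dpsi * dt
    v = max(p["v_min"], min(p["v_max"], v + dv * dt))             # velocity limits
    hdot += dhdot * dt
    h += hdot * dt
    return (x, y, psi, v, h, hdot)

# Invented parameters and commands for illustration only.
p = {"a_psi": 1.0, "a_v": 0.5, "a_hdot": 1.0, "a_h": 0.8,
     "c": 0.5, "v_min": 1.0, "v_max": 10.0}
s = (0.0, 0.0, 0.0, 5.0, 10.0, 0.0)
for _ in range(200):                     # 10 s of flight
    s = step(s, (0.3, 8.0, 12.0), p)
print(round(s[3], 2), round(s[2], 2))    # v approaches 8, psi approaches 0.3
```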

The problem to be solved can be expressed as the computation of $t_{ik}$ in order to minimize

$$J = \sum_{i=1}^{N} \sum_{k=1}^{M} (t_{ik} - t_{ik_{ref}})^2, \qquad (6.12)$$

subject to

$$t_{ik} - T_{ik_{max}} \leq 0, \qquad T_{ik_{min}} - t_{ik} \leq 0, \qquad (6.13)$$

$$\bigcap_{\gamma_{ik} : C_{ik} = p} T_{ik} = \emptyset, \qquad \forall p = 1 \ldots L, \; i = 1 \ldots N, \; k = 1 \ldots M, \qquad (6.14)$$


where L is the number of conflicts and tikref is the time that the initial reference trajectory takes

to pass through the kth cell of the ith UAV trajectory. The objective of the cost (6.12) is to find a

solution close to the reference trajectory, which is the initial trajectory of the vehicles before detecting

the potential collision. By minimizing this cost, the changes in the trajectories are kept to a minimum. On

the other hand, eq. (6.13) comes from the consideration of the UAV’s model. Tikmax and Tikmin are

the maximum and minimum time that the ith UAV can stay in the kth cell, which depend on the

distance traveled in the cell, the initial velocity and the model considered.

In each collision, three types of vehicles have been considered:

• Directly involved : vehicles that are involved in a potential collision detected.

• Indirectly involved : vehicles whose trajectories are cut by the trajectories of the directly involved

vehicles and can cooperate with other UAVs to avoid the collisions.

• Non-cooperative aircraft: indirectly involved vehicles that cannot cooperate in the collision

avoidance process. This type of vehicle will therefore be considered as a mobile obstacle.

When solving a collision among directly involved vehicles, new conflicts could arise. In the

collision avoidance method presented here, only the trajectories of the directly and indirectly involved

vehicles can be changed, whereas the non-cooperative aircraft are treated as mobile obstacles. However, it should be noticed that in some cases it could be convenient to consider some cooperative UAVs as mobile obstacles, since this reduces the information that must be exchanged among the vehicles and the computational complexity of the algorithms.

6.3.2 Proposed Collision Avoidance Method

The objective of the algorithm is to find how long each vehicle should stay in each cell of its trajectory. We have developed a heuristic method based on the combination of a Search Tree algorithm, which finds a solution if one exists, and the minimization of the cost function (6.12), which will use the

information obtained by the Search Tree algorithm. The Search Tree will compute a valid order of

pass for the vehicles in a given conflict, allowing the problem to be formulated as a Quadratic

Programming (QP) problem (instead of a full Mixed-Integer Linear Programming problem). The

algorithms consider the UAV model and the distances traveled by the UAVs in each cell. Finally, it

will be assumed that the UAVs are initially moving at their maximum speed, since this allows the missions to be performed in minimum time.

Initialization Algorithm: Search Tree

This algorithm searches for a solution by assuming that each vehicle involved in the collision is executing its trajectory at maximum speed. The algorithm is based on the idea that there is no collision-free solution if a vehicle traveling as fast as possible still collides, in a certain conflict, with another vehicle traveling as slow as possible that has to pass through that conflict before it. The algorithm does not consider the cost function (6.12).


Let us define the order of pass as the order in which the UAVs pass through a given conflict.

Therefore, a conflict with n vehicles involved has n! different orders of pass. The algorithm explores the different orders of pass in each conflict until a solution is found. First of all, the most logical orders of pass are tested, in which the vehicle that has to travel the least distance to arrive at the conflict passes first. For each order, it is possible to determine whether a solution exists in a short computational time. If there is no solution for a given order of pass, the algorithm first changes the order of the vehicles that have to travel more distance to arrive at the conflict.

If the problem has m conflicts, with $n_i$ vehicles involved in the $i$-th conflict, there are $n_0! \, n_1! \cdots n_{m-1}!$ orders to check. When a given order is explored and there is no solution, the algorithm permutes the order of the conflict with the highest cost $J_i$, defined as

$$J_i = \mu_i - \sigma_i \qquad (6.15)$$

where µi and σi are the mean and the standard deviation of the distances that the vehicles involved have to travel to arrive at the conflict. Therefore, the algorithm first permutes the order of the vehicles involved in conflicts that are farther from the beginning of their trajectories. In those conflicts, another conflict closer to the beginning could affect the search criterion defined above (the vehicle which has to travel the least distance to arrive at the conflict passes first). Large differences in the distances that the vehicles travel to arrive make the initial order of pass more suitable. The term σi in (6.15) captures this idea.
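The cost (6.15) can be sketched directly; the distances below are invented example values, and the population standard deviation is used here as one reasonable reading of σi:

```python
from statistics import mean, pstdev

def conflict_cost(distances):
    """J_i = mu_i - sigma_i of (6.15) for the distances each involved vehicle
    must travel to reach the conflict. Higher cost -> permuted first."""
    return mean(distances) - pstdev(distances)

near = conflict_cost([5.0, 6.0, 30.0])   # spread-out arrivals: low cost
far = conflict_cost([40.0, 41.0, 42.0])  # distant, similar arrivals: high cost
print(near < far)  # True: the far conflict's order of pass is permuted first
```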

Each UAV has an associated tree for a certain order of pass with each node of the tree associated

with a cell. The depth of a node indicates the position of the cell in the trajectory. Therefore, nodes

that have the same depth will be associated with the same cell. Let ω(i, d) be the weight associated

with the edge i − d of the tree. This parameter measures the time that the UAV will spend in

the cell associated with the d node, according to the UAV model. The length of the edge i − d is

proportional to ω(i, d). Once the tree is completely built, if a solution exists, the height of the

tree will be the number of cells of the trajectory. Algorithm 6.5 shows how the trees are built.

Basically, the trees grow from the root node by calculating the minimum weights ω(i, d) according to the UAV model (maximum speed), until a conflict node is found. Once a conflict node is detected, the associated weight may or may not be calculated, depending on whether it is the tree's turn. That turn corresponds to the order of pass being checked by the tree in a given iteration: it is a tree's turn if the weights of the edges of other trees associated with the same conflict cell, and with UAVs that have to pass through the conflict earlier, have already been generated. If it is the tree's turn, the algorithm evaluates whether a collision occurs in the conflict by checking for temporal overlap among the times (weights) associated with the branches of other trees involving the same conflict. If a collision appears, the algorithm has to increase the degree of the previous node by computing a new branch with a new associated weight (not calculated at maximum speed) that avoids the temporal overlap found. If the temporal overlap persists, the algorithm repeats the previous process. If, while backtracking, the algorithm reaches the root node, there is no solution for the considered order of pass.
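The essence of the order-of-pass search, testing the "nearest first" order before the remaining permutations and checking whether occupancy times can be separated, can be sketched for a single conflict cell as follows. The arrival windows (derived in the thesis from vmax, vmin and the UAV model) and the fixed dwell time are invented simplifications:

```python
from itertools import permutations

def feasible_order(windows, dwell):
    """windows: uav -> (earliest, latest) feasible arrival at the conflict
    cell. Returns the first order of pass admitting non-overlapping occupancy
    of the cell, trying the nearest-first order before the other permutations,
    or None if no order works."""
    order0 = tuple(sorted(windows, key=lambda u: windows[u][0]))  # nearest first
    for perm in sorted(permutations(windows), key=lambda p: p != order0):
        t = 0.0
        ok = True
        for u in perm:
            lo, hi = windows[u]
            t = max(t, lo)          # arrive as early as the model allows
            if t > hi:              # arrival cannot be delayed this much
                ok = False
                break
            t += dwell              # the cell stays occupied for `dwell` seconds
        if ok:
            return perm
    return None

windows = {"U1": (4.0, 5.0), "U2": (4.5, 9.0), "U3": (5.0, 12.0)}
print(feasible_order(windows, 2.0))  # ('U1', 'U2', 'U3')
```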

Figure 6.5 shows an example of an initial potential collision (temporal overlapping) between


Figure 6.5: Initial temporal overlapping between UAVs 1 and 2 in cell (8,10,10) corresponding to a potential collision (rectangular grey area between the timelines of the UAVs).

UAVs 1 and 2 in a cell with identifier (8,10,10).

Figure 6.6 shows the scheme used to solve that collision: the algorithm backtracks and creates

new branches. Branch a is not valid because we would need to change the length of additional branches to avoid the collision, and the UAV model does not allow the weight associated with branch a to be increased further. Branches b and c are valid, allowing the UAV to arrive later at the cell identified by (8,10,10) and solving the conflict (see the bottom timeline in Fig. 6.5 corresponding to the modified trajectory of UAV 1).

The changes made by the Search Tree algorithm to avoid the temporal overlapping could affect

the other trees. Figure 6.6 shows that the tree associated with UAV 3 has to rebuild its branches

before the node m, because UAV 1 passes first through the cell (5,8,0) and UAV 3 would collide with

UAV 1. The time that UAV 3 stays in the cells previous to (5,8,0) has to be recomputed, because

the condition that has to be fulfilled in the conflict has changed.

If the Search Tree algorithm does not find a solution for a certain order of pass, all the trees start

from the beginning again using the next order of pass set by the search criterion defined in (6.15).

Directly and indirectly involved UAVs build the tree in the same way. However, the trees for the

non-cooperative aircrafts are built from the beginning and are not changed (their trajectories can

not be modified).

The Search Tree algorithm allows to find a solution in short time when comparing with methods

that solve the problem of collision avoidance without considering a cell-divided space method.

Quadratic Programming Problem

If the Search Tree algorithm finds a solution, we have a valid order of pass for the collision avoidance problem. This allows a Mixed-Integer Linear Programming (MILP) problem (the full problem) to be transformed into a QP problem. In fact, the binary variables of the MILP formulation allow a different order of pass to be chosen in each conflict, and can be suppressed since we already have a valid order.

Therefore, the problem to solve is to minimize


Figure 6.6: The algorithm backtracks and creates new branches (b and c) which allow the collision to be avoided. The three numbers between parentheses are the identifiers of the cells involved.


Algorithm 6.5 Search Tree Algorithm

1: while there is no solution and all the orders of pass have not been explored do
2:   Start the tree associated with each UAV from the root node
3:   while the tree of each UAV is not complete do
4:     for each tree, if it is not complete do
5:       Calculate the weight of the next edge traveling at the maximum speed vmax up to the next conflict node, creating the associated branches
6:       if the end was not reached then
7:         if it is the tree's turn in the conflict then
8:           Calculate the weight of the edge reaching the conflict node
9:           if there is a collision then
10:            Go back and create new branches that solve the collision, by associating higher weights. Another tree may have to backtrack as well to ensure a collision-free trajectory
11:            If the beginning of the tree is reached upon backtracking, then there is no solution for the order of pass being considered
12:          end if
13:        end if
14:      end if
15:    end for
16:  end while
17:  Get next order of pass
18: end while

$$J = \sum_{i=1}^{N} \sum_{k=1}^{M} (t_{ik} - t_{ik_{ref}})^2, \qquad (6.16)$$

subject to

$$t_{ik} - a_{ik} v_{ik} - b_{ik} \leq 0, \qquad b_{ik} v_{ik} + c_{ik} - t_{ik} \leq 0, \qquad (6.17)$$

$$v_{ik} - v_{max} \leq 0, \qquad v_{min} - v_{ik} \leq 0, \qquad (6.18)$$

$\forall i = 1 \ldots N, \; k = 1 \ldots M$, and for each conflict and each UAV $l$ that passes after another UAV $m$ for the order of pass considered,

$$\sum_{k=1}^{Q} t_{mk} - \sum_{k=1}^{P} t_{lk} \leq 0, \qquad (6.19)$$

where P indicates the cell previous to the conflict in the trajectory of UAV l, and Q indicates the conflict cell in the trajectory of UAV m. Equations (6.17) and (6.18) take the UAV model into consideration. Equation (6.17) sets the maximum and minimum time that an UAV can spend in


Figure 6.7: Three-dimensional paths of five UAVs used in a simulation to compare the performance of two different methods.

each cell, obtained by linearizing t(vik) around the reference velocity in cell ik, which yields linear model constraints. In the next section, the effects of this approximation will be analyzed. vmax and vmin are the maximum and minimum velocities of the UAVs. Equation (6.19) ensures collision-free trajectories by avoiding temporal overlap.

Finally, the CGAL library has been used to solve the resulting quadratic optimization problem.
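To illustrate the structure of the QP (6.16)–(6.19), the toy instance below solves in closed form the case of two UAVs and one conflict cell with a required time separation; a real implementation uses a QP solver (CGAL in this work), and the function and variable names here are illustrative only:

```python
def pass_times(r1, r2, d):
    """Closed-form solution of the toy QP
        min (a1 - r1)^2 + (a2 - r2)^2   s.t.   a2 - a1 >= d
    where a_i is UAV i's entry time to the conflict cell, r_i its reference
    time (as in the cost 6.16) and d the required separation (UAV 1 passes
    first, as in the ordering constraint 6.19)."""
    if r2 - r1 >= d:                 # constraint inactive: keep the references
        return r1, r2
    shift = (d - (r2 - r1)) / 2.0    # active constraint: split the delay evenly
    return r1 - shift, r2 + shift

# Both UAVs want to enter at t = 20 s; a 1.5 s separation is imposed.
print(pass_times(20.0, 20.0, 1.5))   # (19.25, 20.75)
```

Splitting the delay evenly between the two vehicles is exactly what the quadratic cost produces: each vehicle deviates from its reference by the same amount, which follows from the KKT conditions of the toy problem.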

6.3.3 Simulations

A scenario with five UAVs has been considered for the simulations (see Fig. 6.7). Our method to

minimize the cost function (6.12) has been compared in simulation with a Tabu Search (TS) method

described in (Rebollo et al., 2008; Rebollo et al., 2007) and the results are provided in the next

section. The TS algorithm enhances the performance of a local search method by using memory

structures to avoid local minima. On the other hand, TS does not consider a linearization of the

UAV model, in contrast to the QP formulation described previously.

Simulation Results

In this section, the results for the above mentioned methods with the paths shown in Fig. 6.7 are presented. UAVs 1 to 4 are considered directly involved vehicles, and UAV 5 acts as a mobile obstacle. In Table 6.1, the different conflicts are summarized. There is one collision among four UAVs and four conflicts between two UAVs.

Table 6.2 shows how the four-UAV collision is solved, by comparing the entrance and leaving times in the conflict cell. The initial reference times and the solutions computed with the Tabu and QP methods are provided.


Table 6.1: Summary of the conflicts among UAVs for the scenario depicted in Fig. 6.7. There is a collision among four UAVs and four conflicts between two UAVs.

Cell          UAVs
(20,20,10)    UAV 1, UAV 2, UAV 3, UAV 4
(20,10,10)    UAV 2, UAV 5
(10,10,10)    UAV 3, UAV 5
(30,10,10)    UAV 4, UAV 5
(29,10,10)    UAV 4, UAV 5
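Conflicts such as those listed in Table 6.1 can be found by intersecting the cell sequences of the different paths. A minimal sketch follows; the paths below are made-up filler except for the (20,20,10) and (20,10,10) conflict cells taken from the table:

```python
# A cell is a conflict if it appears in the path of more than one UAV.
from collections import defaultdict

paths = {
    "UAV 1": [(18, 20, 10), (19, 20, 10), (20, 20, 10)],
    "UAV 2": [(20, 20, 10), (20, 15, 10), (20, 10, 10)],
    "UAV 3": [(22, 20, 10), (21, 20, 10), (20, 20, 10)],
    "UAV 4": [(20, 22, 10), (20, 21, 10), (20, 20, 10)],
    "UAV 5": [(20, 10, 10), (15, 10, 10)],
}

def find_conflicts(paths):
    """Map each cell visited by two or more UAVs to the UAVs involved."""
    occupancy = defaultdict(list)
    for uav, cells in paths.items():
        for cell in cells:
            occupancy[cell].append(uav)
    return {cell: uavs for cell, uavs in occupancy.items() if len(uavs) > 1}

conflicts = find_conflicts(paths)
print(conflicts[(20, 20, 10)])  # ['UAV 1', 'UAV 2', 'UAV 3', 'UAV 4']
```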

Table 6.2: Solution computed to the four-UAV collision. The times are represented in seconds.

        t_in_tabu  t_out_tabu  t_in_QP  t_out_QP  t_in_ref  t_out_ref
UAV 3   13.8       15.3        18.3     19.4      19.2      20.4
UAV 2   15.3       15.9        19.4     20.3      20        21
UAV 1   16.1       18.1        20.3     21.3      20        21
UAV 4   19.2       20.5        21.3     22.7      19.4      20.8

It can be seen that the QP solution gives better results. However, it should be noticed that a linearization of the UAV model is used to obtain the QP formulation. Therefore, it should be checked how far the computed solution is from the linearization point. The model was linearized around the reference velocity in each cell and, comparing the reference velocity with the velocity of the solution, the worst case is vref = 1 and v = 1.11. Hence, the solution is near the linearization point and is valid. In the QP solution, the value of the cost function is J = 0.23, which shows how close the obtained solution is to the initial reference trajectories.
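The validity check can be reproduced numerically. Assuming the time spent in a cell of length d at constant velocity is t(v) = d/v (an assumption consistent with per-cell traversal, but not spelled out in this excerpt), the first-order error at the reported worst case is:

```python
# Worst case reported in the text: v_ref = 1, v = 1.11. Assumption
# (not stated in this excerpt): the time spent in a cell of length d
# at constant velocity v is t(v) = d / v.
d = 1.0                  # cell length, illustrative units
v_ref, v = 1.0, 1.11

t_exact = d / v
# First-order Taylor expansion of t(v) = d / v around v_ref:
t_linear = d / v_ref - (d / v_ref**2) * (v - v_ref)

rel_error = abs(t_linear - t_exact) / t_exact
print(f"{rel_error:.1%}")  # on the order of 1%, so the linearization holds
```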

In Table 6.3, the computational times for different numbers of involved UAVs are listed. The results clearly show a reduction of the computational time thanks to the linearization of the UAV model and the formulation as a QP problem.

Table 6.3: Computational times of the two methods tested for different numbers of UAVs. The results are represented in seconds and have been obtained with a 1.7 GHz PC with 1 GB RAM.

Number of UAVs   TABU   QP
2                0.74   0.025
3                1.21   0.028
4                1.72   0.032
5                2.67   0.04

The computational time does not depend on the shape of the path, because each path is a

sequence of cells and the algorithm deals with them in the same way.

6.4 Conclusions and Future Work

The first part of this chapter has presented the distributed conflict resolution method used to improve the safety conditions in the AWARE scenario, where multiple UAVs share the same aerial space.


As mentioned in Chap. 3, one of the key aspects of the design was to impose few requirements on the proprietary vehicles to be integrated in the AWARE platform. Therefore, the specification of a particular trajectory or velocity profile during the flight was not considered, and the implemented policy to avoid inter-vehicle collisions is only based on the elementary set of tasks presented in Sect. 3.3.4. On the other hand, the method is distributed and involves negotiation among the different UAVs. It is based on the hovering capabilities of the helicopters and guarantees that each trajectory to be followed by the UAVs is clear of other UAVs before proceeding to its execution.

The applicability of this first method to aerial vehicles without stationary flight capabilities depends on the scale of the considered scenario. For instance, for a fixed-wing plane it is possible to implement a pseudo-stationary flight by flying in circles around the specified coordinates. Then, if the minimum turning radius (ti) of the plane defines a circle which is relatively small with respect to the area where the UAVs are flying, a similar technique can be adopted. In this case, eq. (6.1) should be modified as

ri = max(pi, di) + si + ti.
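A minimal sketch of the modified radius, treating p_i, d_i and s_i from eq. (6.1) as given magnitudes since their definitions lie outside this excerpt:

```python
# Sketch of the modified clearance radius for fixed-wing UAVs. The
# terms p_i, d_i and s_i come from eq. (6.1), which lies outside this
# excerpt, so they are treated as given magnitudes; t_i is the minimum
# turning radius of the plane.
def clearance_radius(p_i, d_i, s_i, t_i=0.0):
    """r_i = max(p_i, d_i) + s_i + t_i (t_i = 0 recovers the hovering case)."""
    return max(p_i, d_i) + s_i + t_i

print(clearance_radius(3.0, 2.5, 1.0))        # hovering helicopter: 4.0
print(clearance_radius(3.0, 2.5, 1.0, 5.0))   # fixed wing, turning radius 5: 9.0
```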

On the other hand, the centralized method proposed in the second part of the chapter is able to find a solution by changing the trajectories of the vehicles in real time as little as possible. The results obtained were satisfactory because the solution found allowed the initial trajectories to remain nearly unchanged, and the execution time met hard real-time constraints.

In situations where it is not possible to find a solution without changing the path, some additional strategies have to be used. It would be possible to modify the altitude (Wollkind, 2004) or to create roundabouts (Massink and Francesco, 2001) to solve the conflicts. Once the new path is computed, the algorithms presented in this chapter could also be applied.


Chapter 7

Platform Human Machine Interface

In the previous chapters, the distributed architecture for the autonomous cooperation between multiple UAVs has been presented. Although autonomy provides many advantages by itself, it is also important to consider the Human Machine Interface (HMI) as a key element to enable a usable and practical platform. Therefore, this chapter is devoted to studying some aspects of the interface design that can enhance the performance of the platform.

After the introduction, Section 7.2 presents the main components of the AWARE platform human machine interface, briefly describing its different visual elements and functionalities. Then, the use of multimodal technologies to improve the interface is studied in the rest of the chapter.

Multimodal technologies employ multiple sensory channels/modalities for information transmission as well as for system control. Examples of these technologies are haptic feedback, head tracking, auditory information (3D audio), voice control, tactile displays, etc.

The applicability and benefits of those technologies are analyzed for a task consisting of the acknowledgement of alerts in a UAV ground control station composed of three screens and managed by a single operator. For this purpose, several experiments were conducted with a group of individuals using different combinations of modal conditions (visual, aural and tactile).

7.1 Introduction

It is known that multimodal display techniques may improve operator performance in Ground Con-

trol Stations (GCS) for UAVs. Presenting information through two or more sensory channels has the

dual benefit of addressing high information loads as well as offering the ability to present information

to the operator within a variety of environmental constraints. A critical issue with multimodal inter-

faces is the inherent complexity in the design of systems integrating different display modalities and

user input methods. The capability of each sensory channel should be taken into account along with

the physical capabilities of the display and the software methods by which the data are rendered for


the operator. Moreover, the relationship between different modalities and the domination of some

modalities over others should be considered.

The use of multimodal technologies is becoming usual in current GCSs (Lemon et al., 2001; Ollero et al., 2006; Ollero and Maza, 2007a), involving several modalities such as positional sound, speech recognition, text-to-speech synthesis or head tracking. The level of interaction between the operator and the GCS increases with the number of information channels, but these channels should be properly arranged in order to avoid overloading the operator.

In (Sharma et al., 1998), some of the emerging input modalities for human-computer interaction (HCI) are presented, and the fundamental issues in integrating them at various levels (from the early "signal" level to the intermediate "feature" level to the late "decision" level) are discussed. The different computational approaches that may be applied at the different levels of modality integration are presented, along with a brief review of several demonstrated multimodal HCI systems and applications.

On the other hand, intermodal integration can contribute to generating the illusion of presence in virtual environments if the multimodal perceptual cues are integrated into a coherent experience of virtual objects and spaces (Biocca et al., 2001). Moreover, that coherent integration can create cross-modal sensory illusions that could be exploited to improve user experiences with multimodal interfaces, specifically by supporting limited sensory displays (such as haptic displays) with appropriate synesthetic stimulation of other sensory modalities (such as visual and auditory analogs of haptic forces).

Regarding applications in the UAV field, (McCarley and Wickens, 2005) provides a survey of relevant aspects such as the perceptual and cognitive issues related to the interface of the UAV operator, including the application of multimodal technologies to compensate for the dearth of available sensory information.

7.2 AWARE Human Machine Interface

The AWARE platform Human Machine Interface (HMI) is a real-time application designed to mon-

itor the state of the different components of the system and to allow the specification of the missions

to be carried out. Figure 7.1 shows a photograph of the AWARE platform HMI during the execution

of a mission in 2009.

The main window of the application provides a map of the area where the different elements of

interest are shown:

• The “a priori” known existing infrastructure.

• The location and heading of the UAVs along with the projection on the ground of the field of

view of the camera on-board (if applicable).

• The location and projection of the field of view of the ground cameras.

• The location of each node of the WSNs along with the coverage area of the network.


Figure 7.1: A photograph of the AWARE platform HMI during the execution of a mission in 2009.

• The estimation of the location (and associated uncertainty) for the objects of interest detected

by the platform, such as firemen, fire, barrels, etc.

• The paths planned by each UAV and the trajectory back to the home location.

Each element on the map has a label with additional information. For instance, each UAV has its identifier attached along with its current altitude.

The menu of the main window provides access to additional information and functionalities:

• Detailed state of each component: UAVs, nodes of the WSN, communication channels, etc.

• Modify the type of projection used in the map window.

• Images from the cameras on the ground and on-board the UAVs. Those images also had overlays with indications of the detection process being carried out.

• Send elementary tasks and missions to the UAVs.

• Manage the automated monitor on the fire truck.

• Make area subscriptions to the information from the nodes of the WSN.

• State of the elementary tasks allocated to each UAV.

It is worth mentioning that the operation of the platform does not entirely rely on the HMI, due to its distributed nature and the autonomous capabilities of the different components. Thus, during the execution of some missions it was possible to shut down the HMI application without any major problem (in fact, it was done several times during the experiments in 2008 for testing purposes).


Moreover, thanks to the middleware (Universities of Bonn and Stuttgart (AWARE partners),

2007) and its publish/subscribe paradigm, it is possible to run several instances of the HMI in

different places during the missions, providing more robustness to the operation of the platform.

Figure 7.2 shows the HMI during a real surveillance mission performed on 26th May 2009 with

two autonomous helicopters. Both UAVs were equipped with visual cameras pointing downwards to

survey an area searching for objects of interest (barrels in this particular mission).

Figure 7.2: Human machine interface during a real surveillance mission performed on 26th May2009 with two autonomous helicopters. Both UAVs were equipped with visual cameras pointingdownwards searching for objects of interest (barrels in this particular mission). On the right, twowindows show the images from each on-board camera (one of them with a view of three barrels). Onthe map window, the UAVs, the projection of the cameras field of view and the paths are displayedalong with the nodes of the WSN (drawn with certain transparency level to avoid an overloadof information on the screen). In the middle, several windows show state information about thehelicopters and their elementary tasks.

The rest of the chapter is structured as follows. Section 7.3 summarizes different technologies that can be applied in the design and development of a GCS for UAVs equipped with a multimodal interface. Then, Section 7.4 describes a multimodal testbed developed to measure the benefits under different modal conditions. Section 7.5 provides the results obtained in several experiments performed with a group of individuals and analyzes the impact of the different modalities. Finally, the conclusions and future work section closes the chapter.

7.3 Interactions between Operator and GCS

In the design of a GCS, two information flows are usually considered: from GCS to operator, present-

ing information about the UAV, the environment and the status of the mission, and from operator

to GCS in the form of commands and actuations which are treated as inputs by the GCS software.


However, there is a third flow of information which is not usually addressed in the design of the UAV's GCS: the information about the operator's state, which can be gathered by sensors and processed by the GCS software. This channel would allow an adaptive GCS software, which could change the modality and format of the information depending on the state of the operator (tired, bored, etc.). Furthermore, the information about the operator can also be used to evaluate and improve the interface. For instance, it is possible to register which screens are mainly used by the operator during a certain type of mission.

The next subsections are devoted to each of these information flows, summarizing several methods and devices which are usually applied.

7.3.1 Information Flow from GCS to Operator

Classical modalities in GCS-to-operator communications are visual information (mainly on monitors) and sound alerts. During the last decades, researchers have devoted a significant effort to defining the characteristics of such communications, taking into account the operator's capabilities and maximizing the information shown to the operator. Thus, effective data visualization and distribution is discussed in (Zhu, 2007) and (Sweller, 2002), where different models to measure the effectiveness of the visualization system are presented. Other researchers include the color, shape or size of the displays in their analyses.

Sound alerts have also been studied in depth, mainly applied to control stations in general. Intensity, frequency or loudness are some of the parameters taken into account to create comfortable and effective sound alarms. References (Peryer et al., 2005) and (Patterson, 1982) are good examples of sound alarm studies focused on civil aircraft.

However, higher computational capabilities and the evolution of communication systems have given rise to new devices and techniques able to provide more complex information. The next paragraphs describe some of these new approaches.

3D Audio

Concerning the aural modality, 3D audio can improve the Situational Awareness (SA) of the operator. Three dimensional audio is based on a group of sound effects that attempt to widen the stereo image produced by two loudspeakers or stereo headphones, or to create the illusion of sound sources placed anywhere in a three dimensional space, including behind, above or below the listener. Taking into account the usual portability requirement for ground stations, the use of a headset is usually preferred for the operator, instead of a set of speakers around him.

Thus, the objective of the 3D audio interface in a GCS is to provide multiple located sources of sound for the operator (the listener) in order to improve his SA while performing a given mission. In this way, the operator is able to recognize the presence of an alarm and also its origin. This functionality is provided, for example, by the OpenAL (Open Audio Library) library (Creative Labs, 2009), a free software cross-platform 3D audio Application Programming Interface (API) designed for efficient rendering of multichannel three dimensional positional audio,


and distributed under the LGPL license. This library has been used in our system implementation,

which is described later in Sect. 7.4.
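The positional effect can be illustrated with a constant-power panning law that derives left/right channel gains from the source azimuth. This is a simplified stand-in for the rendering that OpenAL performs; the function below is a hypothetical helper, not part of the OpenAL API:

```python
import math

def pan_gains(azimuth_deg):
    """Return (left_gain, right_gain) for an azimuth in [-90, 90] degrees,
    -90 = fully left, 0 = front, +90 = fully right (constant-power law)."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # map azimuth to [0, 90] deg
    return math.cos(theta), math.sin(theta)

# The three source locations used for the GCS alerts (left, front, right):
for zone, az in [("left", -90.0), ("front", 0.0), ("right", 90.0)]:
    l, r = pan_gains(az)
    print(f"{zone:>5}: L={l:.3f} R={r:.3f}")
```

The constant-power law keeps l² + r² = 1 for every azimuth, so the perceived loudness of an alert does not change as its apparent position moves across the screens.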

Speech Synthesis

Considering also the audio channel, speech synthesis technology has also been integrated in the system presented in this chapter. Speech synthesis, also known as text-to-speech, is the artificial production of human speech. It can be implemented in software or hardware, and is basically achieved by concatenating pieces of recorded speech stored in a database. Systems differ in the size of the stored speech units.

The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood. An intelligible text-to-speech program allows operators to listen to complex messages, normally related to the state of commands, events or tasks currently carried out in the GCS. An example of these applications in the UAV context can be found in the WITAS project (Stanford University, 2009).

A good example of free speech synthesis software is the Festival library (University of Edinburgh, 2009). It is a general multi-lingual speech synthesis system originally developed at the Centre for Speech Technology Research (CSTR) at the University of Edinburgh, and it offers a general framework for building speech synthesis systems as well as an environment for the development and research of speech synthesis techniques. As a whole, it offers full text-to-speech through a number of APIs: from shell level, through a Scheme command interpreter, as a C++ library, from Java, and through an Emacs interface. In the tests presented in this chapter, Festival has been used as a C++ library and integrated into our multimodal software application.
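The concatenative principle described above can be sketched with a toy unit database. Real systems such as Festival store acoustic units and select them by context; the strings below are mere placeholders:

```python
# Toy illustration of concatenative synthesis: each word is mapped to a
# pre-recorded unit stored in a database, and the units are concatenated.
# The "units" here are placeholder strings standing in for waveforms.
unit_database = {
    "yes": "[yes.wav]",
    "no": "[no.wav]",
    "button": "[button.wav]",
}

def synthesize(text):
    """Concatenate the stored unit for each word; unknown words are spelled out."""
    units = []
    for word in text.lower().split():
        units.append(unit_database.get(word, f"[spell:{word}]"))
    return "".join(units)

print(synthesize("Yes button"))  # [yes.wav][button.wav]
```

In a real synthesizer the database holds many short acoustic units per phoneme or diphone, and quality depends on the size of those stored units, as noted above.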

Haptic Devices

Haptic technologies interface with the user via the sense of touch by applying forces, vibrations and/or motions to the operator. This mechanical stimulation can be applied to assist in the "creation" of virtual objects (objects existing only in a computer simulation), for the control of such virtual objects, and to enhance the remote control of machines and devices (teleoperators).

In the particular case of GCSs, haptic devices add a new communication channel to the operator. The vibration of a device can be used as a stand-alone alarm mechanism, or in combination with other sensory channels to increase the information provided to the user. For instance, if the activation of the haptic device is added to a currently played sound alarm, the operator will consider that the priority/criticality of that alarm has been increased.
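That escalation rule can be sketched as a priority model over the active modalities; the numeric weights below are arbitrary illustration values, not taken from the thesis:

```python
# Perceived alarm priority grows with the modalities used to render it.
# The weights are arbitrary illustration values.
MODALITY_WEIGHT = {"visual": 1, "aural": 2, "haptic": 3}

def alarm_priority(modalities):
    """Combined priority of an alarm rendered through the given modalities."""
    return sum(MODALITY_WEIGHT[m] for m in modalities)

sound_only = alarm_priority({"aural"})
sound_plus_haptic = alarm_priority({"aural", "haptic"})
assert sound_plus_haptic > sound_only  # adding vibration raises the priority
```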

7.3.2 Information Flow from Operator to GCS

Normally, the operator provides information to the GCS through a mouse, touchpad or keyboard. However, other channels can be used to provide information to the software application running in the GCS.


Touch Screens

A touchscreen is a display which can detect the presence and location of a touch within the display area. The term generally refers to touch or contact with the display of the device by a finger or hand. The touchscreen has two main attributes. First, it enables the operator to interact directly with what is displayed on the screen, where it is displayed. Second, it lets the operator do so without requiring any intermediate device. Thus, touchscreens allow intuitive interactions between the operator and the GCS application.

Nevertheless, it is important to remark that touchscreen technology usually offers poor resolution if the operator uses his finger; i.e., the minimum size of the objects required to guarantee a proper interaction with the user must be larger than with a mouse or a touchpad. This is one of the main constraints to be considered in the design of graphical interfaces to be used with touchscreens.

Automatic Speech Recognition

Speech recognition (also known as automatic speech recognition or computer speech recognition) converts spoken words into machine-readable input (for example, into key presses, using the binary code for a string of character codes). Speech recognition provides an easy and very effective way to command tasks in GCSs (Lemon et al., 2001).

7.3.3 Operator’s State

The operator’s state is the third information flow mentioned above. It can be defined as the set of

physiological parameters that allows to estimate the state of a human operator: heartbeat, temper-

ature, transpiration, position, orientation, etc. All this information can be used by adaptive systems

to improve the operator environment or to reduce the stress/workload of the operator.

There are plenty of studies examining how psychophysiological variables (e.g., electroencephalogram, eye activity, heart rate and its variability) change as a function of task workload (Craven et al., 2006; Poythress et al., 2006; Wilson and Russell, 2003), and how these measures might be used to monitor human operators for task overload (Orden et al., 2007) and/or to trigger automated processes. The next sections detail some of the current technologies used in the estimation of the operator's state.

2DoF Head Tracking

As the operator’s head concentrates his main sensorial capabilities, a very important tool to acquire

the operator’s state is the head position and orientation tracking. 2DoF head tracking applications

and products are easy to find. Most of these products are based on image processing and marks/spots

placed in the users’ head (or hat). They also provide the two angle information used to move the

mouse left/right and up/down. The following professional solutions can be highlighted: Tracker Pro

(Madentec, 2009), Headmouse Extreme (Origin Instruments Corporation, 2009) or SmartNav 4 AT

(NaturalPoint, 2009a).

A practical application of this technology could be a GCS software that, when critical alerts occur, provides them on the screen currently being used by the operator. It can also be applied to


evaluate the interaction between the human and the GCS during each mode of operation in terms

of which information/screen is more relevant for the operator, etc.
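The screen-routing idea mentioned above can be sketched as a mapping from the tracked head yaw to one of the three GCS screens; the ±20 degree zone boundary is an assumed value:

```python
# Route critical alerts to the screen the operator is facing, based on
# the yaw angle reported by a 2DoF head tracker. The +/-20 degree zone
# boundary is an assumed value, not taken from the thesis.
def active_screen(yaw_deg, boundary_deg=20.0):
    """Return 'left', 'center' or 'right' from the head yaw angle
    (negative yaw = looking left)."""
    if yaw_deg < -boundary_deg:
        return "left"
    if yaw_deg > boundary_deg:
        return "right"
    return "center"

print(active_screen(-35.0))  # left
print(active_screen(5.0))    # center
print(active_screen(28.0))   # right
```

The same mapping, logged over a mission, also yields the usage statistics per screen suggested above for evaluating the interface.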

2DoF Eye Tracking

If the GCS is composed of several screens, it could also be necessary in many cases to track the head and eye position in order to determine which screen is being used by the operator. However, products related to 2DoF eye tracking are scarce in the market. All of them are based on computer vision systems that analyze the images gathered by a camera (normally mounted on the computer monitor). EyeTech TM3 (EyeTech Digital Systems, 2009) is a good example.

6DoF Head Tracking

6DoF head tracking goes one step further and allows estimating the complete position and orientation of the user's head in real time. Most of the existing methods make use of cameras and visual/IR patterns mounted on the operator's head.

TrackIr (NaturalPoint, 2009b), Cachya (Cachya Software, 2009) and FreeTrack (Free Software

Foundation, 2009) represent the main options in the market. They use a 3D pattern visible in the

infrared or visual band to estimate the position and orientation of the operator’s head.

Body Motion Sensors

In order to register the behavior of the operator during a mission, it could also be convenient to attach small sensors to his body to log motion data. For example, it is possible to embed a wireless 3-axis accelerometer in each arm of the operator. The registered data can help to determine his current state (bored, tired, etc.) and provide useful information such as, for instance, which arm is used more in each mode of operation and GCS configuration.

7.4 System Developed based on Multimodal Technologies

The applicability and benefits of multimodal technologies have been analyzed for a simple task consisting of the acknowledgement of alerts in a UAV ground control station composed of three screens and managed by a single operator. For this purpose, several experiments were conducted with a group of individuals using different combinations of modal conditions (visual, aural and tactile). A software application integrating the different modalities has been developed to perform several tests. The experimental results are shown in this section, whereas the corresponding analysis and conclusions are detailed in Sect. 7.5. This information is useful for the design of improved multimodal ground control stations for UAVs.

7.4.1 System Description

Figure 7.3 shows the system used to perform the multimodal experiments. This setup emulates a

GCS for UAVs in which the operator can interact with the station through the following devices:


three touch screens, three wireless haptic devices (attached to the right hand, the left hand and the chest of the operator), one optical mouse, one headset and stereo speakers. In addition, modules for speech synthesis and 3D sound are included in the software application.

Figure 7.3: System developed based on multimodal technologies.

The application has been developed under Linux and makes use of different modalities to show information to the operator. The graphical interface is composed of a single window (see Fig. 7.4) in which several buttons labeled "Yes" or "No" appear at random positions. Only one button is present on the screen at any time, and each button is displayed until it is pressed or until a programmable timeout (Tyes or Tno) expires. The duration of the experiment and the size of the buttons are also programmable.

Figure 7.4: Graphical interface of the multimodal software application.


The mission for the operator is quite simple: press only the buttons labeled "Yes", as soon as possible. The right and wrong actions when each button appears on the screen are summarized in Table 7.1. For some values of the parameters, some wrong actions do not exist; i.e., if Tyes → ∞ there is no possible wrong action for the "Yes" buttons.

Table 7.1: Operator right and wrong actions depending on the type of button which appears in the interface.

Button   Right action                    Wrong action
"Yes"    Press before timeout expires    Do not press before timeout expires
"No"     Do not press                    Press
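The scoring rules of Table 7.1 can be expressed as a small classification function (a sketch; the timeout argument also covers the Tyes → ∞ case used later in the experiments):

```python
# Classify the operator's reaction to a button, per Table 7.1, given the
# button label, whether it was pressed, and the elapsed time versus the
# programmable timeout (float('inf') models Tyes -> infinity).
def classify(label, pressed, elapsed, timeout):
    """Return 'right' or 'wrong' according to Table 7.1."""
    if label == "Yes":
        return "right" if pressed and elapsed <= timeout else "wrong"
    # "No" button: any press is wrong
    return "wrong" if pressed else "right"

assert classify("Yes", True, 0.8, 1.6) == "right"
assert classify("Yes", False, 1.6, 1.6) == "wrong"   # timeout expired unpressed
assert classify("No", True, 0.4, 1.6) == "wrong"
assert classify("No", False, 1.6, 1.6) == "right"
```

Counting the four outcomes over a whole test run yields the nright yes, nright no, nwrong yes and nwrong no totals reported in the results below.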

Both types of buttons have the same grey color, which is also the color used for the background of the window. Therefore, when a button appears in the perimeter of the field of view, it is hard for the user to realize that it is there. This drawback has been intentionally left in the application to emphasize the benefits of modalities other than the visual one.

Once a test has finished, several performance parameters are computed and shown automatically on the screen. Figure 7.5 shows an example of the information displayed after a given experiment. In the top subfigure, the reaction time of the operator for each right action (corresponding to "Yes" buttons pressed before Tyes) is shown in milliseconds. The mean reaction time (Tyes) is also represented with a horizontal line. In the subfigure below, the total number of buttons (n) and the number of right and wrong actions for each button (nright yes, nright no, nwrong yes and nwrong no) are represented with bars. Table 7.2 shows a summary of the values presented in Fig. 7.5.

Table 7.2: Summary of the values represented in Fig. 7.5.

T (sec)   Tyes (ms)   n    nright yes   nright no   nwrong yes   nwrong no
90        773.28      99   29           52          11           7

The system developed allows integrating the visual, aural and tactile modalities into the GCS. Their integration in the developed software has been carried out as follows:

• Speech synthesis: once each button appears on the screen, its label is spoken to the operator.

• 3D audio: depending on the location of the button in the window (left, right or middle), the audio source corresponding to its label is generated on the left, on the right or in front of the operator, respectively.

• Vibrator: the wireless vibrator is activated every time a "Yes" button appears on the screen. Moreover, depending on the location of the button in the window (left, right or middle), the device on the left, on the right or in the middle vibrates, respectively.
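The left/middle/right dispatch shared by the 3D audio and vibrator modalities can be sketched as follows (the window width is an assumed parameter):

```python
# Map the horizontal position of a button on the (three-screen) window
# to the zone that selects the audio source and the vibrator to fire.
def zone_for_button(x, window_width):
    """Return 'left', 'middle' or 'right' depending on the button position."""
    third = window_width / 3.0
    if x < third:
        return "left"
    if x < 2.0 * third:
        return "middle"
    return "right"

print(zone_for_button(100, 3840))   # left
print(zone_for_button(1900, 3840))  # middle
print(zone_for_button(3500, 3840))  # right
```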

7.4.2 Tests Performed and Results

Prior to the tests of the different modalities, several sizes for the rectangular buttons are tried with the touch screens. The goal is to determine the minimum button size which can “guarantee” a correct operation of the application. This minimum size is estimated to be approximately 2.8 × 2.6 cm.

Figure 7.5: Graphical interface showing the results of a test.

The tests described in the next subsections have been performed using the multimodal software previously presented, and they have been recorded in a short video¹. The values selected for the parameters of the software application have been the following:

• Full duration of each test: T = 8min.

• Size of the buttons: 3.0 × 2.8 cm for the central screen and 2.8 × 2.6 cm for the left and right screens.

• Timeout period of the buttons: Tyes → ∞ and Tno = 1.6 s, respectively.

On the other hand, the tests have been performed by nine people aged between 20 and 30 years (3 women and 6 men), and their performance and opinions have been registered.

Table 7.3 shows a summary of the seven tests designed for the multimodal station. Each of those

tests is detailed in the next subsections, including the results obtained by the different individuals.

Experiment #1: Mouse Interface

In this test, the operator can only use the mouse to press the “Yes” buttons appearing on the screens (see Fig. 7.6). The reaction time and the number of right/wrong actions are measured.

The results obtained by each individual are detailed in Table 7.4. It should be pointed out that all of them were used to working with a mouse in their jobs.

¹ http://grvc.us.es/JINT multimodal


Table 7.3: Summary of the tests designed, along with the identifiers that will be used later to make reference to them.

  Experiment nr.   Description                                      Identifier
  #1               Mouse interface only                             Mouse
  #2               Touch screen interface only                      TS
  #3               Touch screen and speech synthesis                TS+speakers
  #4               Touch screen and 3D audio                        TS+3D
  #5               Touch screen and tactile interfaces              TS+vibrator
  #6               Touch screen, 3D audio and tactile interfaces    TS+3D+vibrator
  #7               Touch screen interface test repetition           TS2

Figure 7.6: In Experiment #1, the operator is only allowed to use the mouse interface.

Table 7.4: Summary of the results for Experiment #1.

  Individual   n     nright yes   nright no   nwrong no   Tyes (ms)   σ (ms)
  #1           334   166          168         0           1288.2      375.3
  #2           316   165          151         0           1462.3      537.7
  #3           325   162          163         0           1377.2      349.0
  #4           341   172          169         0           1239.7      373.1
  #5           358   178          179         1           1098.7      322.6
  #6           368   195          173         0           1066.3      256.3
  #7           350   170          180         0           1151.9      303.7
  #8           342   172          170         0           1230.7      382.6
  #9           357   171          186         0           1093.6      300.2


Experiment #2: Touch Screen Interface

This test is like the previous one, but using the touch screen interface. It allows both input technologies to be compared in order to evaluate which one is better suited for the station considered. The more efficient input method (mouse or touch screen) will be used in the following experiments.

Table 7.5 shows results better than those obtained with the mouse interface. In order to quantify the benefit of the touch screens, the percentage of reduction in the mean reaction time (∆Tyes) and also in the standard deviation of the reaction times (∆σ) has been computed for the whole population. The resulting values are

  ∆Tyes = +9.33 %,  ∆σ = +19.73 %,    (7.1)

and hence the touch screen interface has been determined to be better suited for the intended application than the mouse. The touch screen is therefore the input system adopted for the following experiments. Nevertheless, it should be pointed out that both with the mouse and with the touch screens, the heads of the operators were constantly moving from one screen to another searching for buttons, so the effort required to achieve low reaction times was quite high.
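The reduction percentages of Eq. (7.1) are relative changes with respect to the mouse baseline. A minimal sketch of the computation (the function name is ours, and the numeric values in the example are illustrative only, since the population means themselves are not restated in the text):

```python
# Percentage of reduction of a statistic (mean reaction time or its
# standard deviation) with respect to a baseline, as in Eq. (7.1).
# A positive value means the new interface is faster / less variable.
def reduction_pct(baseline, improved):
    return 100.0 * (baseline - improved) / baseline
```

For instance, a drop from a 1200 ms baseline to 1080 ms gives a +10 % reduction, while an increase yields a negative value, matching the sign convention used for the mouse column of Table 7.12.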

Table 7.5: Summary of the results for Experiment #2.

  Individual   n     nright yes   nright no   nwrong no   Tyes (ms)   σ (ms)
  #1           343   162          180         1           1210.2      442.2
  #2           338   174          164         0           1271.2      339.9
  #3           348   167          181         0           1171.9      279.9
  #4           359   185          174         0           1110.7      305.6
  #5           356   168          188         0           1097.9      251.9
  #6           362   190          172         0           1105.4      290.7
  #7           369   185          184         0           1032.7      242.7
  #8           371   189          182         0           1021.9      286.1
  #9           371   195          175         1           1042.5      265.8

Experiment #3: Speech Synthesis

In this experiment, once each button appears on the screen, its label is spoken to the operator through the speakers. Therefore, two modalities (visual and aural) are involved simultaneously and the potential benefits can be analyzed (see Table 7.6).

In the interviews after the tests, it was mentioned that the workload is reduced with the speech synthesis, since the operator can relax until a “Yes” message is received. Accordingly, it was observed that the head remained more or less static when several “No” buttons appeared consecutively. Once a “Yes” message was heard, the operator moved his or her head from one screen to another searching for the “Yes” button.


Table 7.6: Summary of the results for Experiment #3.

  Individual   n     nright yes   nright no   nwrong no   Tyes (ms)   σ (ms)
  #1           364   182          182         0           1065.5      373.5
  #2           348   165          183         0           1160.5      268.4
  #3           363   179          183         1           1068.9      254.1
  #4           370   184          186         0           1030.8      241.7
  #5           371   182          189         0           1001.2      232.2
  #6           372   181          191         0           991.0       194.9
  #7           389   200          188         1           921.7       206.1
  #8           393   216          176         1           943.5       277.6
  #9           381   202          179         0           981.2       266.9

Experiment #4: 3D Audio Interface

This test is like the previous one, but adding the 3D audio technology. Depending on the location of the button on the screens (left, right or middle), the source of the audio corresponding to its label is generated synthetically on the left, on the right or in front of the operator respectively through the headset. The goal is to evaluate the potential benefits of the 3D audio with respect to conventional audio.

The results obtained are shown in Table 7.7 and, compared to the speech synthesis alone, it can be seen that the performance is better. In fact, it could be observed during the experiments that the individuals pointed their heads directly at the correct screen after hearing the “Yes” message. The workload was then lower due to two different factors:

• No need to pay attention while hearing “No” messages.

• Once a “Yes” button appeared, no need to search for the button from one screen to another

(focusing immediately on the screen with the “Yes” button instead).

Table 7.7: Summary of the results for Experiment #4.

  Individual   n     nright yes   nright no   nwrong no   Tyes (ms)   σ (ms)
  #1           364   168          196         0           1020.4      244.5
  #2           344   157          187         0           1173.5      183.8
  #3           376   209          165         2           1048.7      250.3
  #4           382   193          188         1           953.2       241.1
  #5           374   182          192         0           973.8       190.2
  #6           376   190          186         0           982.7       184.0
  #7           388   208          180         0           954.6       243.0
  #8           380   173          207         0           891.6       181.4
  #9           380   183          196         1           939.3       254.3


Experiment #5: Tactile Interfaces

In this case, three wiimotes² are used along with the touch screens. The devices are attached to the left and right arms, and also to the chest. The wiimote vibrator is activated every time a “Yes” button appears on the screen. Moreover, depending on the location of the button on the window (left, right or middle), the wiimote on the left, on the right or on the chest vibrates respectively.

Table 7.8 shows values which are, on average, quite similar to those obtained in the last experiment with the 3D audio interface. The reason is that the benefits provided by the vibrators are essentially the same as those provided by the 3D audio:

• No need to pay attention while there is no vibration.

• Once a vibrator is activated, no need to search for the button from one screen to another (focusing immediately on the screen with the “Yes” button instead).

Table 7.8: Summary of the results for Experiment #5.

  Individual   n     nright yes   nright no   nwrong no   Tyes (ms)   σ (ms)
  #1           364   181          183         0           1062.0      279.5
  #2           352   157          195         0           1094.9      154.2
  #3           367   182          185         0           1041.4      235.3
  #4           392   216          175         1           959.7       244.9
  #5           372   168          204         0           956.4       226.2
  #6           384   201          183         0           958.1       190.5
  #7           375   191          184         0           1001.6      226.6
  #8           382   188          192         2           948.5       202.3
  #9           379   176          202         1           922.1       233.2

Experiment #6: Integrated 3D Audio and Tactile Interfaces

This test is a combination of the modalities involved in the last two experiments. The operator receives redundant information from the 3D audio and tactile interfaces. Then, depending on the location of the button on the screens (left, right or middle):

• the source of the audio corresponding to its label is generated synthetically on the left, on the right or in front of the operator respectively through the headset, and

• if the button is a “Yes”, the wiimote on the left, on the right or on the chest vibrates respectively.

In Table 7.9, it can be observed that the results are slightly better than those presented in the previous two experiments. Therefore, it seems that the redundant information from the audio and tactile interfaces contributes to improving the performance of the operator.

² http://en.wikipedia.org/wiki/Wiimote


Table 7.9: Summary of the results for Experiment #6.

  Individual   n     nright yes   nright no   nwrong no   Tyes (ms)   σ (ms)
  #1           375   198          177         0           1017.3      336.8
  #2           360   170          190         0           1061.0      215.8
  #3           368   197          171         0           1068.5      258.5
  #4           390   189          200         1           881.9       209.2
  #5           387   191          196         0           910.7       204.4
  #6           387   196          191         0           929.2       158.8
  #7           384   202          182         0           963.8       220.7
  #8           393   200          193         0           879.3       197.4
  #9           389   209          180         0           953.7       241.4

Experiment #7: Touch Screen Interface Repetition

The goal of this test is to check whether the learning process of the user has any impact on the results. To fulfill this purpose, each individual is requested to repeat the test using only the touch screen interface after completing all the previous experiments. Comparing the results obtained in Table 7.10 with those corresponding to the second experiment (see Table 7.5), no significant improvement due to the learning process can be observed.

Table 7.10: Summary of the results for Experiment #7.

  Individual   n     nright yes   nright no   nwrong no   Tyes (ms)   σ (ms)
  #1           360   174          185         1           1073.6      325.1
  #2           355   189          166         0           1154.7      304.7
  #3           345   171          174         0           1202.6      299.3
  #4           362   176          186         0           1067.9      304.5
  #5           366   197          169         0           1092.4      380.6
  #6           364   171          193         0           1025.0      257.1
  #7           365   172          193         0           1014.0      232.2
  #8           369   173          191         5           1023.2      281.7
  #9           384   197          186         1           953.0       232.0

7.5 Analysis of the Results

In order to compare the different technologies involved in the experiments in a more exhaustive manner, we will focus on the results from one individual. Table 7.11 shows several performance parameters of individual #5 in all the experiments.

Figure 7.7 contains six histograms corresponding to the first six experiments (from Exp. #1 to #6) with the number of correct actions in several reaction time intervals. The idea was to find a probability density function that could approximate those histograms. The approach adopted is described in the next subsection.

Figure 7.7: Individual #5: histograms with the number of correct actions in each reaction time interval (reaction time in ms on the x-axis, number of actions on the y-axis) for the different experiments: (a) Experiment #1 (Mouse); (b) Experiment #2 (TS); (c) Experiment #3 (TS+speakers); (d) Experiment #4 (TS+3D); (e) Experiment #5 (TS+vibrator); (f) Experiment #6 (TS+3D+vibrator).


Table 7.11: Summary of the results for individual #5.

  Experiment       n     nright yes   nright no   nwrong no   Tyes (ms)   σ (ms)
  Mouse            358   178          179         1           1098.7      322.6
  TS               356   168          188         0           1097.9      251.9
  TS+speakers      371   182          189         0           1001.2      232.2
  TS+3D            374   182          192         0           973.7       190.2
  TS+vibrator      372   168          204         0           956.4       226.1
  TS+3D+vibrator   387   191          196         0           910.7       204.3
  TS2              366   197          169         0           1092.4      380.5

7.5.1 Probability Density Functions

Taking into account the histograms from the experiments, and due to the nature of the measured values, it seems reasonable to use Gaussian distributions as an analytical approximation of the results. However, the shape of the computed histograms is not symmetric with respect to the mean value (the decrease at the left of the mean value is more abrupt than at the right). Therefore, it has been considered that the probability model of the asymmetric Gaussian (AG) (Kato et al., 2002), which can capture temporally asymmetric distributions, could outperform Gaussian models.

Let χ be the random variable associated with the reaction times measured in the experiments presented before. To indicate that a real-valued random variable χ is normally distributed with mean µ and variance σ² > 0, we write

\[
\chi \sim \mathcal{N}(\mu, \sigma^2). \qquad (7.2)
\]

The continuous probability density function of the normal distribution is the Gaussian function

\[
\phi_{\mu,\sigma^2}(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \qquad (7.3)
\]

where σ > 0 is the standard deviation and the real parameter µ is the expected value.

We now introduce an asymmetric Gaussian (AG) model with the distribution

\[
\phi_{\mu,\sigma^2,r}(x) = \frac{2}{\sigma(r+1)\sqrt{2\pi}}
\begin{cases}
\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) & \text{if } x > \mu,\\[4pt]
\exp\!\left(-\frac{(x-\mu)^2}{2r^2\sigma^2}\right) & \text{otherwise},
\end{cases} \qquad (7.4)
\]

where µ, σ and r are the parameters. We term the density model (7.4) the univariate asymmetric Gaussian (UAG). The density function is plotted in Fig. 7.8(b), where it can be seen that the UAG has an asymmetric distribution. In addition, the UAG is an extension of the Gaussian, since a UAG with r = 1 is equivalent to the Gaussian distribution.
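Equation (7.4) is straightforward to transcribe into code. The sketch below is ours (not the thesis software) and implements the UAG density together with the Gaussian of Eq. (7.3); with r = 1 the two coincide, and with r < 1 the left side of the peak decays faster, as in the reaction-time histograms.

```python
# Univariate asymmetric Gaussian (UAG) density of Eq. (7.4) and the
# ordinary Gaussian of Eq. (7.3) for comparison.
import math

def uag_pdf(x, mu, sigma, r):
    coef = 2.0 / (sigma * (r + 1.0) * math.sqrt(2.0 * math.pi))
    if x > mu:
        return coef * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    return coef * math.exp(-((x - mu) ** 2) / (2.0 * r ** 2 * sigma ** 2))

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) \
        / (sigma * math.sqrt(2.0 * math.pi))
```

Evaluating `uag_pdf` with the µ and σ of Table 7.11 and r = 0.5 reproduces curves of the kind shown in Fig. 7.9; the normalization factor 2/(σ(r+1)√(2π)) makes the density integrate to one for any r > 0.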

The next step was then to approximate each histogram by a UAG distribution. For example, for the histograms in Fig. 7.7, the values of µ and σ have already been computed (see Table 7.11). Selecting r = 0.5 and plotting the UAGs corresponding to the first six experiments in the same figure makes it possible to compare the different modalities tested easily (see Fig. 7.9).


Figure 7.8: Univariate Gaussian and univariate asymmetric Gaussian: (a) univariate Gaussian; (b) univariate asymmetric Gaussian (UAG).

Figure 7.9: Individual #5: reaction time probability density functions (using an approximation based on univariate asymmetric Gaussians (UAGs) with r = 0.5).


7.5.2 Comparative Results among Technologies

After collecting the full set of data from all the individuals in all the experiments, it was processed in order to obtain a general comparison among the technologies involved. In a first step, the improvement in the mean reaction time and in its standard deviation was computed over all the individuals participating in the experiments (see Table 7.12). This improvement is expressed as a percentage and computed with respect to the results of Experiment #2, in which only the touch screens were used (a negative percentage means that the performance was worse).

Table 7.12: Summary of the improvements in mean with respect to the results of Experiment #2 (TS): percentage of reduction in the mean reaction time (∆Tyes) and in the standard deviation of the reaction times (∆σ).

              Mouse    TS   TS+speakers   TS+3D   TS+vibr   TS+3D+vibr   TS2
  ∆Tyes (%)   -9.33    0    8.89          11.18   10.97     13.78        4.41
  ∆σ (%)      -19.73   0    13.91         24.93   24.46     23.68        1.04

It can be seen that the progressive introduction of better multimodal technologies from the first experiment to the sixth improves the performance of the operator. On the other hand, when equivalent technologies are used (i.e. 3D audio or vibrators), the mean results obtained are quite similar (although each individual could show a preference for one of them).

It should be pointed out that there is a “minimum” response time due to the limitations of the operating system and of the electronic components and interfaces involved in the system. This minimum response time has been estimated to be approximately 100 ms. If this interval were removed from the computed mean reaction times, the percentages of improvement presented in Table 7.12 would be higher.
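The effect of removing this fixed latency can be checked with a few lines. This is our own illustrative sketch: the reaction-time values used are made up for the example (the population means are not restated in the text); only the ~100 ms floor comes from the thesis.

```python
# Subtracting a fixed system-latency floor from both means increases
# the relative improvement, because the reducible part of the reaction
# time is smaller than the raw mean.
def reduction_pct(baseline, improved, floor_ms=0.0):
    b, i = baseline - floor_ms, improved - floor_ms
    return 100.0 * (b - i) / b

raw = reduction_pct(1100.0, 950.0)                   # improvement on raw means
adjusted = reduction_pct(1100.0, 950.0, floor_ms=100.0)
assert adjusted > raw  # the floor-adjusted percentage is indeed larger
```

With these illustrative numbers, the raw improvement of about 13.6 % grows to exactly 15 % once the 100 ms floor is removed from both means.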

The histograms for the whole population in Experiments #1 to #6 are shown in Fig. 7.10. Comparing this figure with the histograms of individual #5, it can be seen that as the number of samples increases, the shape of the histograms becomes more similar to the UAG distribution adopted for the analysis.

Then, using the values of µ and σ for the whole population and r = 0.5, the UAG distributions for Experiments #1 to #6 are computed and plotted together in Fig. 7.11. This figure allows the impact of each modality on the whole population to be compared at a glance. The Gaussians shift from right to left as better modalities are used in the interface, because the mean reaction times decrease. Moreover, the Gaussians also become narrower from right to left, since the standard deviation decreases.

Finally, the screen on which each “Yes” button was pressed by the operator was also registered during the experiments, allowing us to compute the reaction times when a transition from one screen to another happened. Using this information, the UAG distributions for the reaction times of the transitions were calculated (see Fig. 7.12). The Gaussians are slightly displaced to the right with respect to Fig. 7.11, as expected (the mean reaction times are higher for the transitions), and the


Figure 7.10: Histograms with the number of correct actions in each reaction time interval (reaction time in ms on the x-axis, number of actions on the y-axis) for the whole population during the different experiments: (a) Experiment #1 (Mouse); (b) Experiment #2 (TS); (c) Experiment #3 (TS+speakers); (d) Experiment #4 (TS+3D); (e) Experiment #5 (TS+vibrator); (f) Experiment #6 (TS+3D+vibrator).


Figure 7.11: Reaction time probability density functions for the whole population in the different experiments.

benefit of adding modalities is clearer.
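The transition analysis described above can be sketched as follows. This is a hypothetical reconstruction, not the thesis code: the press-log format (an ordered list of `(screen, reaction_ms)` pairs for the “Yes” buttons pressed) is an assumption.

```python
# Keep only the reaction times of presses where the screen differs from
# the previous press, i.e. the operator had to switch screens.
def transition_times(presses):
    times = []
    for (prev_screen, _), (screen, t) in zip(presses, presses[1:]):
        if screen != prev_screen:
            times.append(t)
    return times

presses = [("left", 900), ("left", 850), ("right", 1100), ("middle", 1050)]
print(transition_times(presses))  # -> [1100, 1050]
```

Feeding these filtered reaction times into the same UAG fitting procedure yields the transition-only density functions of Fig. 7.12.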

7.6 Conclusions and Future Developments

Multimodal display techniques may improve operator performance in the context of a Ground Control Station (GCS) for UAVs. Presenting information through two or more sensory channels has the dual benefit of addressing high information loads and of offering the ability to present information to the operator within a variety of environmental constraints.

This chapter has explored different technologies that can be applied in the design and development of a GCS for UAVs equipped with a multimodal interface. The applicability and benefits of those technologies have been analyzed for a task consisting of the acknowledgement of alerts in a UAV ground control station composed of three screens and managed by a single operator. The system integrated visual, aural and tactile modalities, and multiple experiments have shown that the use of those modalities improved the performance of the users of the application.

Regarding the multimodal application used to obtain the results presented in this chapter, there are several possible improvements. One of them would be to compute the exact position of each button on the screen when it is pressed. This would allow the estimation of the stochastic relation between the reaction times, the different modalities and the distance between buttons.

On the other hand, the wiimote devices were used in the experiments as wireless vibrators to signal the alarms. But their internal accelerometers can also provide information about the motion


Figure 7.12: Reaction time probability density functions for the whole population in the different experiments, considering only the transitions from one screen to another.

of the operator's arms during the mission, allowing, for instance, the level of stress to be measured.

Finally, it could be interesting to integrate a head-tracking system for the operator into the platform. This system would allow the estimation of the screen at which the operator's head is pointing. This information can be used to show each alarm on the screen where the attention of the user is focused, and to evaluate the benefits of doing so for the operation. Additionally, it can be used along with other body sensors to evaluate the state of the user (level of attention, stress, etc.).


Chapter 8

Experimental Results with the AWARE Project Multi-UAV Platform

This chapter describes the multi-UAV missions carried out in the framework of the AWARE project.

In those missions, the software implementation of the architecture and HMI presented in this thesis

has been put into practice.

Most of them were executed in the third year of the project, but the first multi-UAV mission was carried out in the second year and is also detailed in Sect. 8.4. The third-year experiments were scheduled from 20th to 28th May 2009 at the facilities of the Protec-Fire company (Iturri group) in Utrera (Spain). The last day was devoted to a demonstration of the platform for the EU reviewers and potential end users.

In summary, the missions described in this chapter include surveillance with multiple UAVs; fire confirmation, monitoring and extinguishing; load transportation and deployment with single and multiple UAVs; and people tracking. A selection of those missions is described in some detail. Nevertheless, in order to avoid redundant information, instead of describing the full architecture operation for every mission, only non-overlapping aspects of the architecture are highlighted in each one.

The next section presents the experimentation scenario considered in the project.

8.1 Experimentation Scenario in the AWARE Project

In order to verify the success in reaching the objectives, the project considered the validation in two

different applications:

• Filming dynamically evolving scenes with mobile objects. In particular, cooperative object tracking techniques using the cameras on aerial vehicles cooperating with cameras on the ground are required. Furthermore, this activity involves sensors carried by mobile entities (people, vehicles, etc.) to obtain measures that can also be displayed in the broadcast picture.


Figure 8.1: Common scenario for the AWARE Project experiments: (a) structure used to simulate a building; (b) tents used by the AWARE team. The area in the photographs is property of the Protec-Fire company (Iturri group) in Utrera (Spain).

• Disaster Management/Civil Security (DMCS), involving exploration of an area of interest,

detection, precise localization, deployment of the infrastructure, monitoring the evolution of

the objects of interest, and providing reactivity against changes in the environment and the loss

of the required connectivity of the network. Actuators, such as fire extinguishers, to generate

actions in real-time from the information provided by the sensors are also considered.

Three general experiments, one in each project year, were conducted in a common scenario in order to integrate the system and test the functionalities required for the validation. These experiments involved the wireless ground sensor network with mobile nodes, the UAVs, the middleware, actuators, the network-centric cooperation of the UAVs with the ground sensor network, and the self-deployment functionality. This common scenario was part of the facilities of the Protec-Fire company (Iturri group) (see Fig. 8.1), located in Utrera (Spain).

The AWARE experiments thus offered the framework to test the distributed implementation of the architecture described in this thesis.

Figure 8.1(a) shows the structure used to simulate a building where an emergency could be declared. In the structure there are several nodes of the WSN equipped with different types of sensors (temperature, humidity, CO, smoke, etc.) that provide an alarm if a potential fire is detected. During the experiments, the fire in the building was simulated using fire and smoke machines like those shown in Fig. 8.2.

In the surroundings of the building, the following elements were present (see Fig. 8.3):

• An area with more nodes of the WSN on the ground.

• Several barrels close to the building. A fire declared in the building could propagate to its surroundings and reach other infrastructures with devastating consequences. Hence, the barrels are intended to simulate fuel tanks that could be located around the building.

• Fixed cameras mounted on tripods. There are two visual cameras in the area around the


Figure 8.2: Smoke and fire machines used to simulate a fire in the building: (a) Vesuvius smoke machine; (b) Mobifit fire machine. Both systems are commercialized by the HAAGEN Fire Training Products company.

building to monitor the building itself and also the firemen moving on the ground in the area

in front of the building.

• A fire machine for outdoors used to simulate a possible propagation of the fire from the building

to other infrastructures.

• Several dummy bodies were used as victims in the building and also on the ground (see Fig. 8.4).

Figure 8.3: Elements located in the surroundings of the building during the experiments: the building, the outdoors fire machine, the barrels, the WSN area and the ground cameras.

In the list above, the WSN and the fixed cameras were part of the AWARE platform subsystems.

In the next section, the different subsystems of the platform and their roles are summarized.


Figure 8.4: Dummy bodies used as victims during the experiments.

8.2 AWARE Platform Subsystems involved in the Missions

In Chap. 3, the different models and subsystems relevant for the architecture of the platform were

presented. In this section, the particular components used in the experiments are described. In

summary, the subsystems integrated in the AWARE platform are the following:

• Unmanned Aerial Vehicles (UAVs).

• Ground cameras.

• Wireless Sensor Network (WSN).

• Fire truck equipped with an automated mounted monitor (water cannon).

• Human Machine Interface (HMI) station.

In the next subsections, the role and details of each component are briefly described.

8.2.1 Unmanned Aerial Vehicles

A total of five small scale autonomous helicopters were available in the third year of the project:

• Four TUB-H helicopters (see Fig. 8.5) developed by the Technische Universität Berlin (TUB).

• One FC III E SARAH helicopter (Electric Special Aerial Response Autonomous Helicopter) developed by the Flying-Cam (FC) company (see Fig. 8.6). The Technische Universität Berlin also cooperated with Flying-Cam for the operation of this prototype in autonomous flight.

In order to provide maximum safety during the AWARE experiments, each simultaneously flying UAV had its own safety pilot.

The primary objective of the safety pilot is to protect observers from possible injuries in case of a critical helicopter component failure that could be resolved by manual human override.


(a) Three TUB-H helicopters in the landing pads (b) Detailed view of one TUB-H helicopter

Figure 8.5: Fleet of TUB-H helicopters developed by the Technische Universitat Berlin (TUB) used in the experiments. The fourth TUB-H unit was a ready-to-fly spare helicopter.

Figure 8.6: The FC III E SARAH helicopter developed by the Flying-Cam (FC) company.

The secondary objective of the safety pilot is to rescue the helicopter, or at least minimize the damage to it, in an emergency situation. This was of special importance in the experiments involving several helicopters flying at the same time. The most critical one was the load transportation experiment, which had, in addition to the three TUB helicopters carrying the load, the FC helicopter.

However, considering the following facts:

• Safety has higher priority than rescuing a helicopter.

• Each UAV is a very complex system and depends on many different hardware components.

• Because of weight and size restrictions, only minimal hardware redundancy is possible.

a number of actions were taken to guarantee the conduct of all the experiments in case of electronic or helicopter hardware failures, or even if one helicopter were completely destroyed during an experiment. These actions were:


(a) Visual and infrared cameras on-board the TUB-H helicopter   (b) Detailed view of the visual camera on-board the TUB-H helicopter

Figure 8.7: Visual and infrared cameras on-board the TUB-H helicopter. On the left photograph, both cameras are mounted on-board the helicopter with different orientation angles. The right photograph shows a detailed view of the visual camera with its analog transmitter.

• A ready-to-fly spare helicopter was taken to Utrera.

• For each electronic hardware component of the UAV, at least one spare part was taken to Utrera (including expensive parts like the GPS).

• Two different communication systems, based on Wi-Fi and radio modems, were provided in order to be resilient against external disturbances (according to the experience from the second-year experiments).

Regarding the payloads, the TUB-H UAVs could mount different types depending on the particular mission, thanks to a mechanical design based on a frame composed of strut profiles. Through the use of these profiles, the location of hardware components can be altered and new hardware can be installed easily. This allows quick reconfiguration of the UAVs for different applications, easy replacement of defective hardware, and repositioning of components to adjust the UAV's centre of gravity. The following payloads were used during the different missions:

• Fixed visual and infrared cameras (see Fig. 8.7).

• A Node Deployment Device (NDD) developed by the Technische Universitat Berlin (see Fig. 8.8). The device works like a candy-bar vending machine: a short wire ending in a metal grommet is attached to each node. This grommet is attached to the right end of a steel spring, and clockwise rotation of the spring moves the grommet (and the node) further onto the spring. This procedure allows attaching several nodes to the helicopter. During the


(a) Different components of the NDD (callouts: rotation sensor, buttons for manual control, node deployment computer interface)   (b) NDD loaded with several nodes of the WSN

Figure 8.8: Detailed view of the Node Deployment Device (NDD) developed by the Technische Universitat Berlin (TUB). It was used on-board the helicopter for missions that required the autonomous deployment of sensors in a given area. Scheme courtesy of the Technische Universitat Berlin (TUB).

dropping maneuver, the spring rotates counterclockwise until the rightmost grommet is moved

beyond the end of the spring and the node is released.

• The Load Transportation Device (LTD) developed by the Technische Universitat Berlin. The LTD (see Fig. 8.9) is composed of a two-axis cardan joint with a magnetic encoder attached to each axis. A force sensor is mounted after the joint, followed by a release mechanism for the rope, which is composed of a bolt inserted into a tube. The bolt is fixed in the tube by a pin that can be pulled out by a small motor to release the load. The release mechanism can be used for emergency decoupling of the UAV from the load (or from the other coupled UAVs), but also to release the load after successful transportation. The magnetic encoders allow measuring the rope orientation relative to the UAV fuselage. With this information and the measured rope force, it becomes possible to calculate the torques imposed on the UAV through the load (and/or the other coupled UAVs).
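That last computation can be sketched as follows, assuming a particular sign convention for the two cardan angles and an attachment point expressed in the body frame (both conventions, and the function name, are illustrative assumptions rather than the actual LTD software):

```python
import numpy as np

def rope_force_and_torque(alpha, beta, tension, r_attach):
    """Force and torque that a taut rope exerts on the UAV.

    alpha, beta: the two cardan-joint angles (rad) measured by the
    magnetic encoders, relative to the fuselage (sign convention assumed).
    tension: rope force (N) from the force sensor.
    r_attach: LTD attachment point in the body frame (m).
    """
    # Unit vector along the rope in the body frame: a hanging load points
    # mostly along -z, tilted by the two joint angles.
    u = np.array([np.sin(alpha) * np.cos(beta),
                  np.sin(beta),
                  -np.cos(alpha) * np.cos(beta)])
    f = tension * u                 # force vector (N)
    tau = np.cross(r_attach, f)    # torque about the centre of gravity (N*m)
    return f, tau
```

With the rope hanging straight down and the attachment at the centre of gravity, the torque vanishes; a tilted rope produces a pitch or roll torque proportional to the measured tension.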

Finally, at the software level, two layers have been implemented for the UAVs (see Chap. 3):

• Executive Layer (EL): this software is proprietary to the UAV developer and is in charge of elementary task execution and supervision.

• On-board Deliberative Layer (ODL): this software is common for the different UAVs and is described throughout this thesis.

8.2.2 Ground Cameras

In the experimentation scenario, there were several fixed cameras on the ground intended to emulate a surveillance camera network in an urban setting. The system was based on FireWire


(a) Different components of the LTD (callouts: magnetic encoders, cardan joint, force sensor, motor, rope mounting, bolt, release pin)   (b) TUB-H helicopter equipped with the LTD on the landing pad

Figure 8.9: Detail of the Load Transportation Device (LTD) developed by the Technische Universitat Berlin (TUB). It was used on-board the helicopter for missions that required the autonomous transportation of loads to a given location. Scheme courtesy of the Technische Universitat Berlin (TUB).

cameras connected to a PC104 that had a wireless link to the AWARE network (see Fig. 8.10).

On each PC104 there was a Perception Subsystem (PSS) application that processed the images

and computed a local estimation of the states of the objects in the field of view. This local estimation

was fused in a distributed perception system that integrated measurements from different information

sources such as the visual and infrared cameras, as well as the wireless sensor network. Thus, the

AWARE platform could build a distributed model of the environment.
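As a minimal sketch of this kind of fusion, independent Gaussian position estimates can be combined in information form; the actual AWARE distributed perception system is more elaborate (it must cope with communication delays and correlated information), so the function below is only illustrative:

```python
import numpy as np

def fuse_estimates(means, covs):
    """Combine independent Gaussian estimates in information form.

    means: list of mean vectors; covs: list of covariance matrices.
    Returns the fused mean and covariance (inverse-covariance weighting).
    """
    info = sum(np.linalg.inv(P) for P in covs)                   # information matrix
    info_mean = sum(np.linalg.inv(P) @ m for m, P in zip(means, covs))
    P_fused = np.linalg.inv(info)
    return P_fused @ info_mean, P_fused
```

Fusing two equally uncertain estimates yields their average with half the covariance, which is the expected behaviour of such a scheme.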

8.2.3 Wireless Sensor Network (WSN)

During the experiments, there were several wireless sensor networks deployed on the ground in the

area in front of the building and also inside the building. Each WSN had a laptop acting as a

gateway connected to the AWARE network through a wireless link. All the WSNs were measuring

different variables such as temperature, humidity, CO, etc. in order to generate alarms if a potential

fire were detected (see Fig. 8.11(a)). Additionally, the gateway of one WSN, composed of nodes able to measure the Received Signal Strength Indication (RSSI), ran a Perception Subsystem (PSS) application in charge of providing location estimates of the firemen equipped with that same type of node (see Figs. 8.11(b) and 8.11(c)). Those estimates were fused in a distributed manner with the measurements provided by the different cameras (on the ground and on-board the UAVs) to improve the results.
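As a toy illustration of RSSI-based localization (the platform's actual estimator is the one described in Capitan et al., 2009, not this one), a coarse position can be obtained as a signal-strength-weighted centroid of the anchor nodes that hear the fireman's node:

```python
import numpy as np

def rssi_weighted_centroid(node_pos, rssi_dbm):
    """Coarse target position: centroid of the anchor nodes weighted by
    received signal strength (stronger signal -> larger weight)."""
    w = 10.0 ** (np.asarray(rssi_dbm) / 10.0)   # dBm -> linear power
    w = w / w.sum()                              # normalise the weights
    return w @ np.asarray(node_pos, dtype=float)
```

Equal signal strengths place the estimate at the plain centroid; a stronger reading pulls the estimate towards that anchor.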

The nodes of the WSN can also be autonomously deployed when required by the UAVs thanks to their node deployment device (see Fig. 8.8). The purpose of the deployment is twofold: firstly, it allows extending the measured area; secondly, it can help to recover the connectivity in a WSN


Figure 8.10: Ground cameras used in the experimentation scenario. The FireWire cameras were connected to a PC104 that had a wireless link to the AWARE network.

(a) Detail of a WSN node   (b) Node located in the building   (c) Node located on the ground in front of the building

Figure 8.11: Detail of the WSN nodes used in the experiments. The type of node shown in photograph (a) was used to measure different variables such as temperature, humidity, CO, etc. On the other hand, for the localization of the firemen in the area in front of the building (and also inside it), a different type of node able to measure the RSSI (shown in photographs (b) and (c)) was used.


(a) The automated mounted monitor spraying water on the outdoor fire machine   (b) Detail of the automated mounted monitor

Figure 8.12: The automated mounted monitor of the fire truck in operation. On the left, it is spraying water on an outdoor fire machine based on the coordinates of the fire previously detected by the AWARE platform. On the right, one of the researchers of the AWARE team is manipulating the monitor GPS antenna.

that has lost some nodes due to the difficult conditions in the scenario.

8.2.4 Fire Truck

In the experimentation scenario, there was a fire truck equipped with an automated mounted "monitor" (water cannon, see Fig. 8.12). Once the AWARE platform has detected a fire in a given location, the monitor can be commanded from the HMI to deliver water pointing to that location (see Fig. 8.13). The system is equipped with GPS and an IMU, and computes the angles required to deliver the water on the intended place.
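A simplified version of that computation, ignoring air drag and height differences, reduces to a pan angle from the horizontal offset and a tilt angle from the drag-free projectile range equation. The jet speed `v0`, the angle conventions and the function name are assumptions for illustration, not the fire truck's actual controller:

```python
import math

def monitor_angles(truck_xy, target_xy, v0=30.0, g=9.81):
    """Pan/tilt angles (deg) to hit a ground target with a water jet,
    treating the jet as a drag-free projectile launched at speed v0."""
    dx = target_xy[0] - truck_xy[0]             # East offset (m)
    dy = target_xy[1] - truck_xy[1]             # North offset (m)
    pan = math.degrees(math.atan2(dx, dy))      # 0 deg = North, 90 deg = East
    rng = math.hypot(dx, dy)
    s = g * rng / v0 ** 2                       # = sin(2 * tilt)
    if s > 1.0:
        raise ValueError("target beyond the maximum range of the jet")
    tilt = math.degrees(0.5 * math.asin(s))     # low-angle solution
    return pan, tilt
```

For a target 50 m due east this gives a pan of 90 degrees and a tilt well below the 45-degree maximum-range elevation.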

8.3 Types of Missions

As previously stated in this thesis, a mission is a set of partially ordered tasks. A task

description defines the nature and the parameters of the task to perform. The user may also specify

which type of AWARE subsystem should be chosen to perform the task.

As a reminder, an example of an elementary task that the UAVs can perform is the flight to a given position, taking into account a number of constraints (forbidden areas, geographical relief, boundaries on altitude and height, radio communication availability, coordination with the other UAVs or with regular airplanes or helicopters, etc.). Another "basic" action for the UAVs, the GCNs and the WSN is data acquisition using various on-board sensors.

These primitive functions are the basis for the specification of more elaborate missions such as covering an area, or detecting and tracking mobile targets or evolving phenomena (e.g. smoke). Besides, such tasks will often be temporally constrained.

The following multi-UAV missions were performed during the AWARE project experiments:


Figure 8.13: Screenshot from the human machine interface application during the activation of thefire truck monitor.

• Multi-UAV firemen tracking.

• UAV node deployment to extend the WSN coverage.

• UAV fire confirmation and monitoring.

• Multi-UAV surveillance.

• Single and multi-UAV load transportation.

As will be shown later, these missions can also be sequenced if the corresponding preconditions to start are satisfied. For instance, after the UAV node deployment, a mission for fire confirmation and monitoring can start if a fire has been detected by the WSN.

8.3.1 Scheduling of the Experiments and Demonstration

Table 8.1 shows different multi-UAV missions carried out in the AWARE project framework.

In the last year of the AWARE Project, several days of general experiments were planned, as well

as a final demonstration day of the platform with the attendance of the European Union reviewers

of the project.

In order to avoid the bad weather conditions experienced during the Utrera'08 demonstration, the Utrera'09 experiments were shifted to May. Unfortunately, during the demonstration day the wind conditions were not good, with gusts of up to 60 km/h. Nevertheless, almost all planned experiments were conducted on the demonstration day.


Table 8.1: Scheduling of the different AWARE missions with an indication of the subsystems involvedin each one.

Mission  Date           Brief description                      Subsystems involved (HMI/UAV/GCN/WSN/FT)
0        16th April 08  Node deployment                        X X X
1        25th May 09    Firemen tracking                       X X X X
2        25th May 09    Firemen tracking                       X X X X
3        25th May 09    Surveillance                           X X
4        25th May 09    Node deployment & fire monitoring      X X X X
5        25th May 09    Node deployment & fire monitoring      X X X X
6        26th May 09    Fire monitoring                        X X X X
7        26th May 09    Surveillance                           X X X
8        28th May 09    Load transportation                    X X
9        28th May 09    Node deployment                        X X X
10       28th May 09    Fire monitoring                        X X X X X
11       28th May 09    Surveillance                           X X X

The following integrated missions were scheduled: coordinated flights involving node deployment,

fire detection, monitoring and extinguishing, surveillance using two coordinated helicopters, tracking

of firemen using two coordinated helicopters, load transportation using a single helicopter, and load

transportation using three coupled helicopters. The experiment setup and integration activities were conducted from 20-24 May, whereas 25-27 May were devoted to performing integrated missions with the AWARE platform. Although no specific experiments were carried out to test the middleware, its performance under different conditions was validated, since all the AWARE subsystems use it for communication purposes. These integrated missions were executed as scheduled, with no significant deviation and with the expected results. They can be considered a success.

Finally, the agenda of the demonstration on 28 May was modified to start earlier in order to avoid the strong wind conditions predicted by the weather forecasts. The demonstration was attended by the project reviewers (Fernando Lobo from Porto University and Ørnulf Jan Rødseth from the Marintek company), Mr. Alkis Konstantellos from the European Commission, and guests from academia and industry, as well as end-users.

8.4 Preliminary Multi-UAV Missions in the AWARE'08 General Experiments

During the AWARE’08 general experiments, several multi-UAV missions were performed with the

first version of the software implementation of the architecture described in this thesis. This section

describes one of them, emphasizing the role of each module of the On-board Deliberative Layer

(ODL) architecture.

The mission described in this section was performed in April 2008, during the second year of the AWARE project.3 The objective was to deploy a sensor node from a UAV at a given location to repair the WSN network connectivity, whereas another UAV supervised the operation with the

3. http://www.aware-project.net/videos/videos.shtml#aware08


on-board camera and also monitored the deployment area and a nearby building. During the mission, a third UAV took off to monitor the whole operation.

The following systems were involved in the execution:

• Human machine interface station: allowed the user to specify the mission (waypoints with their corresponding headings, speeds, altitudes, etc.). After that, there was a mission briefing with the operators of the UAVs, using a simulation of the execution in the HMI to tune several parameters of the mission using the EUROPA planner. Once an agreement was reached, the mission was ready to be executed. During the execution, the HMI allowed the supervision of the execution state of the different tasks allocated to the UAVs. Figure 8.14 shows a screenshot of the HMI application during the mission.

[Figure 8.14 callouts: WSN area, cameras FOV, UAV locations, deployment location]

Figure 8.14: The HMI screen during the mission: visualization of the locations and status of the TUB helicopters with their allocated elementary tasks.

• ODL (On-board Deliberative Layer) software of the UAVs.

• UAV supervisor and executive software components: allowed the supervision of the elementary tasks sent by the ODL in order to have an additional consistency check. Finally, the executive software on-board the helicopter was in charge of the final execution of the elementary tasks.

• WSN gateway: In this experiment, the gateway was in charge of autonomously repairing the

connectivity of the network once a node was placed between two isolated networks. The system

was able to automatically re-compute the data routing and to send all the information from

the nodes to the HMI.


[Figure 8.15 plot: x (m, East to West) vs. y (m, South to North), with waypoints wp1 to wp4 and the tents, building and nodes area marked]

Figure 8.15: Paths followed by the two helicopters during the mission in April 2008.

• Middleware: allowed the communication of task status, telemetry and images, and also the sending of high-level commands from the HMI to the ODL and of commands from the ODL to the UAV executive layer.

8.4.1 Mission Description

The mission was specified by the AWARE platform user with the HMI application. It was a node

deployment mission to repair the WSN connectivity involving three UAVs:

• UAV 1: equipped with a fixed camera aligned with the fuselage of the helicopter and pointing 45° downwards.

• UAV 2: equipped with an on-board device for node deployment (see Fig. 8.16) and also a camera.

• UAV 3: equipped with a camera mounted inside a mobile gimbal to film the whole mission

from different points of view, transmitting images to the HMI through the middleware.

The AWARE platform user specified the following tasks to be executed (see Table 8.2):

• τ1: deploy a node at waypoint wp1, with GPS UTM coordinates "30S 251696.41 4121268.48", after τ2 is completed. The goal was to repair the connectivity with the node with identifier 28 by deploying node 11 at wp1.

• τ2: visit waypoint wp2 “30S 251673.40 4121285.50” to monitor a nearby building.



Figure 8.16: Detail of the device on-board the helicopter (UAV 2) for the node deployment operation. Scheme courtesy of the Technische Universitat Berlin (TUB).

• τ3: during the node deployment, the corresponding WSN area should also be monitored with the camera from wp2.

• τ4: after the node deployment, the building should be monitored again from wp2.

Table 8.2: Tasks to be executed for the node deployment mission.

τk   λ             Ω−             Ω+   Π
τ1   DEPLOY(wp1)   END(τ2)        ∅    Π1
τ2   GOTO(wp2)     ∅              ∅    Π2
τ3   GOTO(wp2)     START(101τ1)   ∅    Π3
τ4   GOTO(wp2)     END(103τ1)     ∅    Π4
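The precondition mechanism in the table above can be sketched as a small event-driven dispatcher: a task becomes executable only once every event it waits for (e.g. END(τ2)) has been fired. This is a minimal illustration with hypothetical names, not the actual ODL implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A task with inter-task preconditions, e.g. {"END(t2)"}."""
    name: str
    preconditions: set = field(default_factory=set)

def ready_tasks(tasks, fired_events):
    """Names of the tasks whose preconditions are all satisfied."""
    return [t.name for t in tasks if t.preconditions <= fired_events]
```

Initially only the tasks without preconditions are ready; firing END(τ2) then unblocks the deployment task, reproducing the partial order of the mission.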

In the next section, the role of the different modules in the ODL architecture is described.

8.4.2 ODL Modules during the Mission

As the user did not allocate those tasks manually, the distributed task allocation process was started from the HMI software. The negotiation involved the CNP manager modules of UAV 1 and UAV 2 and, due to the different devices on-board each UAV, task τ1 was allocated to UAV 2, whereas the rest of the tasks were allocated to UAV 1 (which bid with infinite cost for task τ1).

In this mission, the plan builder role was trivial: for both UAVs a take-off was required before

executing the allocated tasks and the ordering of the tasks was fixed by the preconditions.

Then, the plan refining module (see Chap. 4) of UAV 2 decomposed τ1 as follows:


τ1 −→ { 1τ1, 101τ1, 102τ1, 103τ1 },          (8.1)

with:

• 1τ1: visit wp1.

• 101τ1: descend to an altitude of 3 meters above the ground.

• 102τ1: activate the device for node deployment.

• 103τ1: ascend back to the initial altitude.

In UAV 1, the plan refining module computed the headings required for monitoring the WSN area and the node deployment operation with the fixed camera from wp2.
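The decomposition of Eq. (8.1) can be sketched as a simple refinement function; the tuple format and the 3 m drop altitude follow the description above, while the action names are hypothetical:

```python
def refine_deploy(wp, cruise_alt, drop_alt=3.0):
    """Decompose a DEPLOY(wp) task into the four elementary tasks of
    Eq. (8.1): visit the waypoint, descend, trigger the NDD, ascend."""
    return [
        ("GOTO", wp, cruise_alt),          # 1tau1: visit wp
        ("DESCEND", wp, drop_alt),         # 101tau1: descend to 3 m above ground
        ("ACTIVATE_NDD", wp, drop_alt),    # 102tau1: release one sensor node
        ("ASCEND", wp, cruise_alt),        # 103tau1: back to the initial altitude
    ]
```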

During the mission, the interface with the executive layer was the task manager. For example,

once the executive layer changed the status of 1τ1 from RUNNING to ENDED, the task manager sent

the next task (101τ1) for its execution. Moreover, the dependencies between tasks of different UAVs

were also handled by the task manager with the assistance of the synchronization module. On the other hand, the plan merging modules, running the algorithm shown in Chap. 6, did not detect any conflict between the planned 4D trajectories of the UAVs, and no change in the plans was inserted.
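A minimal sketch of such a 4D conflict check, assuming both trajectories are sampled at the same time instants and using a single minimum-separation threshold (the actual Chap. 6 algorithm is more involved):

```python
import numpy as np

def conflict(traj_a, traj_b, d_min):
    """True if two time-synchronised 3D trajectories ever come closer
    than d_min. Each trajectory is an (N, 3) array, where sample i of
    both trajectories corresponds to the same time instant."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)
    return bool((d < d_min).any())
```

Two parallel flight paths 10 m apart pass a 5 m separation check, whereas paths only 2 m apart are flagged as conflicting.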

The resulting evolution of the tasks during the mission is shown in Fig. 8.17, where the different

preconditions have been represented by arrows.

[Figure 8.17 timeline: elementary tasks of UAV 1 and UAV 2 over t (sec), from 0 to 150 s, with arrows for the preconditions END(τ2), START(101τ1) and END(103τ1)]

Figure 8.17: Tasks executed by each UAV. The arrows represent the different preconditions summarized in Table 8.2.

Finally, it should be mentioned that this experiment represented the first test of the AWARE platform integrating the two helicopters and the wireless sensor network in the same mission. Regarding the WSN, the connectivity with node 28 was achieved a few seconds after node 11 was deployed. Figure 8.18 shows several photographs taken during the experiment.

8.5 Multi-UAV Missions in the AWARE'09 General Experiments

During the AWARE'09 general experiments, all the types of missions described in Sect. 8.3 were executed several times, as can be seen in Table 8.1. In this section, one execution per type of mission is detailed.


Figure 8.18: Coordinated flights during the node deployment mission in April 2008.

8.5.1 People Tracking (Mission #2)

This mission (identified as #2 in Table 8.1) was carried out on 25th May 2009. Two firemen were

located in the area in front of the building assisting injured people and moving equipment. The

objective of the user was to have an estimation of the location of the firemen on the map and also

images of their operations. Two UAVs were available and ready in the landing pads for this mission:

• UAV 1: equipped with a fixed visual camera aligned with the fuselage of the helicopter and pointing 45° downwards.

• UAV 2: equipped with a fixed visual camera aligned with the fuselage of the helicopter and pointing 45° downwards.

On the other hand, the firemen were equipped with sensor nodes that allowed an initial estimate of their location, based on the information from the WSN deployed in front of the building (Capitan et al., 2009). Later, this information was also fused with the estimates computed from the visual images gathered by the helicopters in order to decrease the uncertainty in the location of the firemen.

Two tasks of type TRACK(object_0) were sent to the UAVs at different times:

• Firstly, task τ3 was announced and allocated to UAV 2 due to its lowest bid (lowest insertion

cost) during the distributed negotiation process based on the SIT algorithm (see Chap. 5). In


order to compute the insertion cost for the tracking task, the plan refining toolbox module was used to find the required associated waypoint and heading, based on the techniques presented in Sect. 4.1. The idea is to have the on-board camera pointing perpendicular to the main axis of the uncertainty ellipse associated with the position estimate of the fireman. Of the two possible solutions, the waypoint closer to the flight plan of the UAV is chosen. Once the location was reached, the UAV captured images of the fireman (labelled as object_0) and processed them in order to contribute to the estimation of his position. In Sect. 4.1, it was also shown that UAV 2 has to broadcast its position relative to the tracked object location; if more tracking tasks for the same object are commanded, the next UAVs will need this information to compute their positions around the object accordingly.

• Later, τ8 was announced and allocated to UAV 1 (UAV 2 bid with infinite cost because it was already tracking the same object). A new waypoint and heading were computed to take images from a viewpoint perpendicular to that of the UAV currently allocated to the object. Again, of the two possible solutions, the waypoint closer to the flight plan was chosen.
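The viewpoint selection described above can be sketched as follows, assuming a 2D position covariance for the tracked object; the function names and the standoff-distance parameter are illustrative, not the actual Sect. 4.1 implementation:

```python
import numpy as np

def observation_waypoints(target, cov, standoff):
    """Two candidate viewpoints at `standoff` metres from the target,
    placed so that the camera looks perpendicular to the major axis of
    the 2D uncertainty ellipse described by `cov`."""
    w, v = np.linalg.eigh(cov)                 # eigen-decomposition of cov
    major = v[:, np.argmax(w)]                 # direction of largest uncertainty
    normal = np.array([-major[1], major[0]])   # perpendicular to the major axis
    return target + standoff * normal, target - standoff * normal

def choose_closer(candidates, uav_pos):
    """Of the two solutions, keep the one closer to the UAV's flight plan
    (approximated here by its current position)."""
    return min(candidates, key=lambda p: float(np.linalg.norm(p - uav_pos)))
```

For a covariance elongated along the x axis, the two candidates lie north and south of the target, and the UAV flies to whichever is nearer.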

Figure 8.19 illustrates the CNP messages interchanged during the distributed negotiation process.

The announcements of the two tracking tasks mentioned above are separated in the figure by two dotted horizontal lines. On the other hand, the labels of the arrows representing the messages are always above them.

It can be seen that each time a tracking task was allocated to a UAV, the UAV requested the token from the HMI application in order to re-announce it. In this case, there were no reallocations due to dynamic changes in the UAVs' partial plans.
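The negotiation round above can be sketched as follows. This is a toy illustration of the announce/bid/award cycle, with a distance-based stand-in for the SIT insertion cost of Chap. 5; the class and method names are hypothetical:

```python
import math

class TrackerUAV:
    """Toy CNP bidder: a stand-in for the ODL CNP manager module."""
    def __init__(self, name, pos):
        self.name, self.pos = name, pos
        self.tracked = set()                  # objects already being tracked

    def insertion_cost(self, task):
        _, obj, target = task
        if obj in self.tracked:               # already tracking -> bid infinity
            return math.inf
        return math.dist(self.pos, target)    # distance as a stand-in cost

    def award(self, task):
        self.tracked.add(task[1])

def announce(task, uavs):
    """One announcement round: every UAV bids, the lowest finite bid wins."""
    bids = {u.name: u.insertion_cost(task) for u in uavs}
    winner = min(bids, key=bids.get)
    if math.isinf(bids[winner]):
        return None, bids                     # nobody can take the task
    return winner, bids
```

Announcing the same tracking task twice reproduces the behaviour of Mission #2: the closer UAV wins the first round, and, once awarded, bids infinity so the second task goes to the other UAV.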

Table 8.3 shows the list of tasks executed during the mission (once the allocation process finished),

whereas Table 8.4 shows the values computed for the parameters of the GOTO elementary tasks in

the plans of both UAVs.

Table 8.3: Tasks to be executed for Mission #2 and their decomposition into elementary tasks. The values of the parameters Πk corresponding to the elementary tasks with type λk = GOTO are detailed in Table 8.4.

τki    λ                 Ω−                  Ω+   Decomposition              Π
τ11    TAKE-OFF          PRE-FLIGHT_CHECK    ∅    1τ11 (λ1 = TAKE-OFF)       1Π11
τ21    GOTO(wp1)         END(τ11)            ∅    1τ21 (λ2 = GOTO)           1Π21
τ31    TRACK(object_0)   END(τ21)            ∅    1τ31 (λ3 = GOTO)           1Π31
τ41    HOME              END(τ31)            ∅    1τ41 (λ4 = GOTO)           1Π41
τ51    LAND              END(τ41)            ∅    1τ51 (λ5 = LAND)           1Π51
τ62    TAKE-OFF          PRE-FLIGHT_CHECK    ∅    1τ62 (λ6 = TAKE-OFF)       1Π62
τ72    GOTO(wp4)         END(τ62)            ∅    1τ72 (λ7 = GOTO)           1Π72
τ82    TRACK(object_0)   END(τ72)            ∅    1τ82 (λ8 = GOTO)           1Π82
τ92    HOME              END(τ82)            ∅    1τ92 (λ9 = GOTO)           1Π92
τ102   LAND              END(τ92)            ∅    1τ102 (λ10 = LAND)         1Π102


[Figure 8.19 sequence diagram: the HMI announces τ3; UAV 1 bids 23.29 and UAV 2 bids 22.39; the HMI accepts UAV 2's bid; UAV 2 requests and receives the token and re-announces τ3 (UAV 1 bids 23.29, UAV 2 bids ∞), so τ3 remains allocated to UAV 2. The HMI then announces τ8; UAV 1 bids 4.16 and UAV 2 bids ∞; the HMI accepts UAV 1's bid; UAV 1 requests the token and re-announces τ8 (both bid ∞), so τ8 remains allocated to UAV 1.]

Figure 8.19: CNP messages interchanged during the distributed negotiation process in the people tracking mission (Mission #2). The labels of the arrows representing the messages are always above them. Two tracking tasks (τ3 and τ8) were announced at different times, separated in the figure using two dotted horizontal lines. Each time a tracking task was allocated to a UAV, the UAV requested the token from the HMI application in order to re-announce it. In this case, there were no reallocations due to dynamic changes in the UAVs' partial plans.

Table 8.4: Values of the parameters Πki corresponding to the elementary tasks with type λki = GOTO in Mission #2. Table 8.5 details the meaning of each parameter πj.

Parameter   1Π21         1Π31         1Π41         1Π72         1Π82         1Π92
π1          251663.94    251664.78    251674.17    251705.60    251689.77    251679.50
π2          4121283.77   4121284.95   4121244.74   4121262.52   4121278.31   4121252.62
π3          70.0         72.8         70.4         70.0         72.8         70.4
π4          1.0          1.0          1.0          1.0          1.0          1.0
π5          1            1            1            1            1            1
π6          90.0         62.0         0.0          0.0          -28.0        0.0
π7          0            0            0            0            0            0


Table 8.5: Parameters of a task with type λ = GOTO.

Parameter            Description
π1 (x)               East UTM coordinate (m)
π2 (y)               North UTM coordinate (m)
π3 (Altitude)        Altitude (m), ellipsoid-based datum WGS84
π4 (Speed)           Desired speed (m/s) along the way to the waypoint
π5 (Force heading)   1: force the specified heading; 0: do not force
π6 (Heading)         Specified heading (degrees) along the way (N is 0°, E is 90°, W is −90°, S is 180°)
π7 (Payload)         1: activate the payload around the location of the waypoint; 0: do not activate
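The parameter vectors in Table 8.4 can be mapped to named fields following the order of Table 8.5; this little helper (with hypothetical field names) is only illustrative:

```python
# Field names for the GOTO parameter vector, in the order of Table 8.5.
GOTO_FIELDS = ("x_east", "y_north", "altitude", "speed",
               "force_heading", "heading", "payload")

def goto_task(pi):
    """Turn a parameter tuple (pi1..pi7) into a named GOTO task."""
    if len(pi) != len(GOTO_FIELDS):
        raise ValueError("a GOTO task takes exactly 7 parameters")
    return dict(zip(GOTO_FIELDS, pi))
```

For instance, the first column of Table 8.4 becomes a task flying to UTM (251663.94, 4121283.77) at 70.0 m altitude with a forced heading of 90.0 degrees.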

Figure 8.20 shows the trajectories followed by the UAVs during the execution of the mission. The different waypoints of the elementary GOTO tasks are represented by squares. The waypoints labelled wp1 and wp4 are initial locations defined by the user for each UAV after take-off. The WSN and ground cameras of the platform started to provide estimates of the fireman labelled as object_0. Based on those estimates, the tracking tasks could be sent to the UAVs. In the case of UAV 1, the waypoint wp2 computed for the observation turned out to be very close to the first waypoint, but the heading was different, as can be seen in Table 8.4 (parameters 1Π21 and 1Π31).

Finally, Figure 8.21 shows two screenshots taken from the HMI during the live execution of the mission.

8.5.2 Node Deployment and Fire Monitoring (Mission #5)

This mission (identified as #5 in Table 8.1) was performed on 25th May 2009. The initial situation

was as follows:

• A fire alarm had been declared in the building by the WSN inside. The fire had also been confirmed with the ground cameras outside the building.

• After a surveillance mission, several fuel barrels close to the building had been localized.

There were two UAVs ready to fly on the landing pads:

• UAV 1 equipped with an infrared camera aligned with the fuselage and pointing 45° downwards.

• UAV 2 equipped with the node deployment device (NDD) and three sensor nodes.

As there was a risk of fire propagation from the building to the fuel barrels, a deployment mission

was specified in order to place several sensors in the area between them at the locations of the

waypoints wp1, wp2 and wp3 (see Fig. 8.24). Let us denote the corresponding tasks as τ8, τ9 and

τ10 respectively.


8.5 Multi-UAV Missions in the AWARE’09 General Experiments 169


Figure 8.20: Paths followed by the two helicopters during the people tracking mission (Mission #2). The trajectories in red and blue correspond to UAVs 1 and 2 respectively. The different waypoints of the elementary GOTO tasks are represented by squares.



(a) Both UAVs tracking one fireman

(b) Another fireman got into the same area (see cameras on-board at the right)

Figure 8.21: Screenshots of the platform Human Machine Interface during the execution of Mission #2. On the right, the view of the on-board cameras is shown.



The distributed negotiation process for the sensor deployment tasks started, and the messages exchanged are shown in Fig. 8.22. The negotiation is based on the SIT algorithm explained in Chap. 5. The HMI application announced the three tasks and the two UAVs bid for them. It can be seen that the bids from UAV 1 were infinite because it was not equipped with the NDD. UAV 2, in contrast, bid with the insertion cost of each task, computed using the distance to the waypoints as the metric.


Figure 8.22: CNP messages interchanged for the allocation of the sensor deployment tasks (Mission #5). The labels of the arrows representing the messages are always above them.

When bidding, the plan builder module checks different insertion points in the current plan in order to find the lowest associated cost (lowest bid). The mechanism is illustrated in Fig. 8.23, which shows the different possible partial plans once each new task was announced. When τ8 was received, the whole plan including the take-off, home and land tasks was built. Then, τ10 was received and



two insertion points (represented by small arrows) were evaluated, with equal resulting bids (the bid in the gray box was chosen arbitrarily). Finally, τ9 was announced and three insertion points were evaluated. The lowest insertion cost (0.54) was achieved with task τ9 inserted between tasks τ8 and τ10.

All the tasks were initially allocated to UAV 2. According to the SIT algorithm, UAV 2 asked for the token in order to re-announce the tasks it had won. The HMI application sent the token to UAV 2, and tasks τ8, τ9 and τ10 were announced again. All the bids received were infinite, so the tasks were definitively allocated to UAV 2: τ82, τ92 and τ102.


Figure 8.23: Partial plans built by UAV 2 during the negotiation process depicted in Fig. 8.22 (Mission #5). When τ8 was received, the whole plan including the take-off, home and land tasks was built. Then, τ10 was received and two insertion points (represented by small arrows) were evaluated, with equal resulting bids (the bid in the gray box was chosen arbitrarily). Finally, τ9 was announced and three insertion points were evaluated. The lowest insertion cost (0.54) was achieved with task τ9 inserted between tasks τ8 and τ10.

Then, each deployment task was decomposed by the plan refiner module (see Sect. 4.2), leading

to four elementary tasks:

1. Reach the waypoint.

2. Go down until the altitude is hd = 3.5 meters above the ground.

3. Activate the NDD to deploy the sensor.



4. Go up to the specified waypoint altitude.

Once decomposed, the following twelve elementary tasks were inserted in the plan, replacing the tasks τ82, τ92 and τ102:

• τ82 → 1τ82, 2τ82, 3τ82, 4τ82 (λ8 = GOTO)

• τ92 → 1τ92, 2τ92, 3τ92, 4τ92 (λ9 = GOTO)

• τ102 → 1τ102, 2τ102, 3τ102, 4τ102 (λ10 = GOTO)

After the execution of the deployment tasks, during the landing maneuver of UAV 2, a new fire

alarm was declared by one of the sensors deployed in the area between the building and the barrels.

Then, in order to confirm this second fire, a take-shot task (τ2) was specified: take images from the

west of the fire at an altitude of 75 meters. A negotiation process started, and the task was allocated

to UAV 1 (τ21 ), which was equipped with an infrared camera (UAV 2 had no cameras on-board and

its bids were infinite).

Task τ21 was processed by the plan refining toolbox in order to compute the waypoint that fulfilled the above constraints and kept the fire in the center of the field of view of the on-board camera (see Sect. 4.1). Once the fire was confirmed, the platform operator commanded a fire truck equipped with a remotely controlled water cannon (monitor) to extinguish it. Before activating the monitor, UAV 1 was commanded to a safe location (task τ31). After the operation with the monitor, the user commanded another take-shot task τ41 for UAV 1 in order to confirm that the fire had been extinguished. After the confirmation, the UAV returned home and landed (tasks τ51 and τ61).

Table 8.6 summarizes all the tasks described above, along with their decomposition into elementary tasks. Moreover, of the elementary tasks allocated to UAV 1, those with type λk = GOTO are detailed in Table 8.7. Table 8.5 details the meaning of each parameter πj. The values of the π1 and π2 parameters shown in Table 8.7 are represented in Fig. 8.24 as small red squares.

Figure 8.24 shows the paths followed by the two helicopters (red and blue for the UAVs 1 and

2 respectively). The small squares represent the waypoints corresponding to the elementary GOTO

tasks:

• UAV 1 (red line):

– wp5, wp7: locations computed to monitor the fire.

– wp6: safe waypoint to wait for the monitor operation to be over.

– wp8: UAV 1 home.

• UAV 2 (blue line):

– wp1, wp2, wp3: deployment locations.

– wp4: UAV 2 home.



Table 8.6: Tasks executed during Mission #5 and their decomposition into elementary tasks. The values of the parameters Πk corresponding to the elementary tasks with type λk = GOTO of UAV 1 are detailed in Table 8.7.

τki    λ            −Ω                Ω+   Decomposition                                  Π
τ11    TAKE-OFF     PRE-FLIGHT_CHECK  ∅    1τ11 (λ1 = TAKE-OFF)                           1Π11
τ21    TAKE-SHOT    END(τ11)          ∅    1τ21 (λ2 = GOTO)                               1Π21
τ31    GOTO         END(τ21)          ∅    1τ31 (λ3 = GOTO)                               1Π31
τ41    TAKE-SHOT    END(τ31)          ∅    1τ41 (λ4 = GOTO)                               1Π41
τ51    HOME         END(τ41)          ∅    1τ51 (λ5 = GOTO)                               1Π51
τ61    LAND         END(τ51)          ∅    1τ61 (λ6 = LAND)                               1Π61
τ72    TAKE-OFF     PRE-FLIGHT_CHECK  ∅    1τ72 (λ7 = TAKE-OFF)                           1Π72
τ82    DEPLOY(wp1)  END(τ72)          ∅    1τ82, 2τ82, 3τ82, 4τ82 (λ8 = GOTO)             1Π82
τ92    DEPLOY(wp2)  END(τ82)          ∅    1τ92, 2τ92, 3τ92, 4τ92 (λ9 = GOTO)             1Π92
τ102   DEPLOY(wp3)  END(τ92)          ∅    1τ102, 2τ102, 3τ102, 4τ102 (λ10 = GOTO)        1Π102
τ112   HOME         END(τ102)         ∅    1τ112 (λ11 = GOTO)                             1Π112
τ122   LAND         END(τ112)         ∅    1τ122 (λ12 = LAND)                             1Π122

Table 8.7: Values of the parameters Πk1 corresponding to the elementary tasks with type λk1 = GOTO in Mission #5 for UAV 1. Table 8.5 details the meaning of each parameter πj.

Parameters (Πk1)   1Π21        1Π31        1Π41        1Π51
π1                 251688.49   251665.36   251689.67   251674.17
π2                 4121282.87  4121282.99  4121283.58  4121244.74
π3                 75.0        70.0        75.0        70.4
π4                 1.0         1.0         1.0         1.0
π5                 1           1           1           1
π6                 90.0        90.0        90.0        0.0
π7                 0           0           0           0




Figure 8.24: Paths followed by the two helicopters during the node deployment and fire monitoringmission (Mission #5).

Finally, Figure 8.25 shows different screenshots of the HMI application taken during the execution of the mission. Figure 8.25(a) shows UAV 2 in operation during the execution of the three sensor deployment tasks transformed into twelve elementary GOTO tasks (see "TUB2 tasks status" window). On the other hand, in Fig. 8.25(b) UAV 1 is monitoring the fire detected by the deployed sensors: a window shows the images captured by the on-board infrared camera with a red overlay corresponding to the detected fire.

8.5.3 Multi-UAV Surveillance (Mission #7)

In this mission (identified as #7 in Table 8.1), the objective was to find objects of interest in a given

area. In our case, the objects of interest were fuel barrels located around a building where a fire

alarm had been declared.

The propagation of the fire could reach those barrels and make the firemen's extinguishing task more difficult. Hence, the platform user specified a surveillance task (see Table 8.8) to localize the barrels and display them on the map of the HMI. In this mission, two UAVs were available and ready on the landing pads:

• UAV 1: equipped with a fixed visual camera aligned with the fuselage of the helicopter and pointing 90° downwards.

• UAV 2: equipped with a fixed visual camera aligned with the fuselage of the helicopter and pointing 90° downwards.



(a) A fire had been declared in the building and three sensors were deployed to detect its potential propagation to thefuel tanks close to the building

(b) The fire propagation was detected with the sensors previously deployed, and a second UAV equipped with an infrared camera took off to confirm it and to provide estimations of the evolution of the fire

Figure 8.25: Screenshots of the platform human machine interface during the execution of Mission #5: sensor deployment and fire monitoring. The screenshot on the top shows UAV 2 in operation during the execution of the three sensor deployment tasks transformed into twelve elementary GOTO tasks (see "TUB2 tasks status" window). In the screenshot below, UAV 1 is monitoring the fire detected by the deployed sensors: a window shows the images captured by the on-board infrared camera with a red overlay corresponding to the detected fire.



Table 8.8: Task specified for the Mission #7. The values of the parameters (Πk) are detailed inTable 8.9.

τk   λ     −Ω   Ω+   Π
τ1   SURV  ∅    ∅    Π1

The values for the parameters of the surveillance task are shown in Table 8.9, and the meaning of each parameter is explained in Table 8.10. Basically, the user specifies the vertices of the area to be covered, the altitude for the flight, the speed for the UAVs and the overlapping between consecutive images of the associated zigzag pattern.

Table 8.9: Values for the tasks parameters (Πk). The meaning of each parameter πj is explained inTable 8.10.

Parameters (Πk)   Π1
π1 (Polygon)      (251685.14, 4121220.82), (251663.65, 4121237.68), (251655.84, 4121263.99), (251666.46, 4121282.96), (251694.82, 4121288.55), (251721.20, 4121288.71), (251731.66, 4121269.75), (251728.22, 4121246.46), (251706.54, 4121231.88)
π2 (Altitude)     72.0
π3 (Speed)        1.0
π4 (Overlapping)  100.0

Table 8.10: Parameters of a task with type λ = SURV.

Parameters (Πk)   Description
π1 (Polygon)      The set of vertices defining the polygon of the area to be covered by the UAV
π2 (Altitude)     Altitude (m) for the flight (ellipsoid-based datum WGS84)
π3 (Speed)        Specified speed (m/s) for the flight
π4 (Overlapping)  Desired overlapping in percentage between consecutive rows of the zigzag pattern

Figure 8.26 shows the vertices of the whole polygon specified by the user, which was later divided into the blue and red sub-areas.

The distributed negotiation for the surveillance task differs from that of the previously presented missions. In this case, the bids are used by the auctioneer to compute the relative capabilities of the available UAVs for the partition process.




Figure 8.26: Paths followed by the two helicopters during the multi-UAV surveillance mission (Mission #7).

Then, once the surveillance task was announced by the HMI application, the two available UAVs started the negotiation process, bidding with their particular capabilities for the execution. Each bid was computed by the plan refining toolbox module taking into account the specified altitude and the parameters of the on-board cameras (shown in Table 8.11) as

bi = wi Pdi ,    (8.2)

where Pdi was the probability of detection for the object of interest and wi was the sensing width

according to the expressions derived in Sects. 4.1 and 4.3.

Table 8.11: Parameters of the cameras on-board during the surveillance mission, used by each UAV to compute its sensing width for the zigzag pattern to be followed. The notation used for the parameters is described in Sect. 4.1.

Camera parameters   UAV 1      UAV 2
w                   384        384
h                   288        288
u0                  199.9948   179.9591
v0                  116.6379   112.6779
αu                  551.3304   494.4553
αv                  549.3181   492.6934
γ                   0.0026     0.0017



The particular values computed in Mission #7 for the bids and the relative capabilities are shown

in Table 8.12. In the last column of the table, the values for the relative capabilities computed by

the auctioneer determine the percentage of the full area that was assigned to each UAV.

Table 8.12: Values for the bids and resulting relative capabilities in percentage.

         wi     Pdi     bi     Relative capability
UAV 1    5.17   0.901   4.65   50.12 %
UAV 2    5.76   0.803   4.63   49.88 %

The plan refining toolbox module of each UAV embeds exactly the same algorithm described in Sect. 4.3. Hence, once each UAV received the relative capabilities from the auctioneer, it could compute the whole partition and its assigned sub-area. The plan refining module of each UAV also computed the list of GOTO tasks required to cover the allocated sub-area, based on the sensing capabilities of each UAV and the flight altitude. Figure 8.26 shows the location of the waypoints computed for each UAV in its sub-area. It should be mentioned that, given the sensing widths wi shown in Table 8.12, the waypoints were computed taking into account the 100% overlapping specified between consecutive rows. However, it can be seen that at the frontier between sub-areas the distance between rows of different UAVs is larger. This difference comes from the 0% overlapping that is forced between sub-areas in order to increase the safety conditions of the UAVs.

Finally, Fig. 8.27 shows three screenshots captured from the HMI application during the execution of the mission. On the right of each screen there are two windows with the images received from the on-board cameras. In Fig. 8.27(b), two barrels are in the field of view of the camera on-board UAV 2, allowing their locations to be estimated. Later, in Fig. 8.27(c), the computed estimations of the locations of the barrels are shown on the map as red dots.

8.5.4 Load Transportation (Mission #8)

In this mission (identified as #8 in Table 8.1), a fire alarm had been declared in the building and the objective was to place a wireless camera with pan&tilt on the top floor. The camera would provide continuous real-time video to monitor the operations of the firemen and the health status of the victims on the top floor of the building. The camera with its associated communication equipment and batteries was too heavy for a single helicopter, and hence the Load Transportation System (LTS) had to be used.

The LTS, composed of three TUB-H helicopters, was ready on the landing pads with the load connected by ropes. The platform operator specified a load transportation task to deploy the wireless pan&tilt camera on the top floor, and the plan builder module generated the full set of ordered tasks for the LTS (see Table 8.13). Then, the plan refiner toolbox decomposed the plan into elementary tasks to be sent to the executive layer (see Tables 8.13 and 8.14).

It should be noted that the enormous complexity of the load transportation system composed of three helicopters was hidden from the user, who could specify the deployment task simply by providing the GPS location where the load was required.



(a) UAVs taking-off to follow their respective zigzag patterns

(b) Two barrels in the images from the camera on-board UAV 2 on the right

(c) The estimation of the position of the barrels was computed and represented by red dots

Figure 8.27: Screenshots of the platform Human Machine Interface during the execution of the surveillance mission (Mission #7).



Table 8.13: Tasks to be executed for Mission #8 and their decomposition into elementary tasks. The values of the parameters Πk corresponding to the elementary tasks with type λk = GOTO are detailed in Table 8.14.

τk   λ         −Ω                Ω+   Decomposition                       Π
τ1   TAKE-OFF  PRE-FLIGHT_CHECK  ∅    1τ1 (λ1 = TAKE-OFF)                 1Π1
τ2   DEPLOY    END(τ1)           ∅    1τ2, 2τ2, 3τ2, 4τ2 (λ2 = GOTO)      1Π2, 2Π2, 3Π2, 4Π2
τ3   HOME      END(τ2)           ∅    1τ3 (λ3 = GOTO)                     1Π3
τ4   LAND      END(τ3)           ∅    1τ4 (λ4 = LAND)                     1Π4

Table 8.14: Values of the parameters Πk corresponding to the elementary tasks with type λk =GOTO in Mission #8. Table 8.5 details the meaning of each parameter πj .

Parameters (Πk)   1Π2         2Π2         3Π2         4Π2         1Π3
π1                251703.60   251703.60   251703.60   251703.60   251673.85
π2                4121294.60  4121294.60  4121294.60  4121294.60  4121250.25
π3                80.0        70.0        70.0        80.0        80.0
π4                3.7         N/A         1.0         1.0         3.7
π5                1           1           1           1           1
π6                0.0         0.0         0.0         0.0         0.0
π7                0           0           1           0           0

The altitude specified for the deployment was several meters above the top floor of the building. The ODL had access to the map of the area in order to plan the deployment task decomposition properly, also taking into account the length of the ropes.

Figure 8.28 shows the trajectories followed by the three TUB-H helicopters and the transported

pan&tilt camera unit to execute task 1τ2.

On the other hand, Figure 8.29 shows the values of the x, y and z coordinates of the helicopters and the load during the flight. The curves for the load end when the ropes are released. It should be mentioned that wind gusts of around 35 km/h were registered during the execution.

Finally, Figure 8.30 contains three screenshots of the HMI software captured during the execution of the mission. The different elements in the interface were: (left) a map of the area with the position and heading of the three LTS helicopters represented by arrows; (center) images transmitted by the Flying-Cam helicopter and telemetry from all the UAVs; (right) the interface to control the transported pan&tilt camera.

It should be mentioned that to the best of our knowledge, this was the first mission involving

the transportation of a load from the ground to the top floor of a building with three autonomous

helicopters.

8.6 Summary of Results and Lessons Learned

The AWARE experiments have shown that the developed architecture allows a wide spectrum of missions to be covered, ranging from surveillance to load transportation and deployment.




Figure 8.28: Path followed by the three helicopters transporting the load in the x − y plane. The trajectories of the load and the helicopters are in red and blue respectively. Plots courtesy of the Technische Universität Berlin (TUB).

Both the human machine interface and the ODL applications, programmed in C++, demonstrated robustness during the execution of all the missions, and no software crashes were registered. To point out the benefits of the distributed design, the HMI application was shut down and restarted during the execution of several missions, and the platform performance was not affected at all.

The autonomous capabilities provided by the ODL made it possible to have a single operator for mission design and execution during all the tests performed with the platform. The modules presented in this thesis through Chaps. 3 to 6 were used during all the missions, and their behavior was as expected. On the other hand, the HMI designed for the platform has proven to be highly usable, allowing the operator to exploit the capabilities of the platform easily.

Table 8.15 shows some figures that reflect the performance of the ODL during all the multi-UAV missions.

The messages interchanged were related to task synchronization and the required negotiations

for task allocation and conflict resolution.

Among the lessons learned during the experiments, it is worth mentioning the relevance of time synchronization when testing a distributed system. In our platform, this issue was solved by using the Network Time Protocol (NTP) with a server connected through a serial port to a GPS base station (see Appendix B).




Figure 8.29: Values of the x, y and z coordinates of the helicopters and the load during the flight. Plots courtesy of the Technische Universität Berlin (TUB).

Table 8.15: Some figures that reflect the performance of the ODL during all the missions.

Mission #                                  0   1   2   3   4   5   6   7   8   9   10  11  Total
Tasks received from the HMI                9   10  8   12  12  6   8   3   4   6   8       86
Elementary tasks sent to the executive     9   10  22  21  21  6   28  3   7   6   18      151
Tasks generated by the plan refiner        4   4   18  13  13  3   24  0   3   3   14      99
Coordination messages interchanged         12  12  66  34  34  0   78  0   0   0   58      294
Potential conflicts managed successfully   6   6   18  17  17  0   24  0   0   0   14      102



(a) LTS flying above the deployment location

(b) Camera deployed on the top floor of the building. The operator has used the pan&tilt to find one victim. The LTS is still over the deployment location after releasing the ropes.

(c) LTS over the home location ready to land

Figure 8.30: Screenshots of the platform human machine interface during the execution of Mission #8. The different elements in the interface were: (left) map of the area with the position and heading of the three LTS helicopters represented by arrows; (center) images transmitted by the Flying-Cam helicopter and telemetry from all the UAVs; (right) interface to control the transported camera with pan&tilt.



On the other hand, data marshalling had to be explicitly considered within a platform with heterogeneous hardware and software. It was solved with the middleware developed by the Universities of Bonn and Stuttgart (see also Appendix B).

Another relevant issue was related to the adoption of common coordinate frames by all the partners in the Consortium. Many errors detected during the debugging process of the software came from misunderstandings related to the coordinate frames attached to the cameras and helicopters. That debugging process was carried out through many integration meetings (more than 20 in the last year of the project). Since the partners in the Consortium were from different countries, it should also be mentioned that the use of a Virtual Private Network (VPN) was a key tool to debug the different applications involved in the platform.

Finally, Table 8.16 lists the videos with the live execution of the missions presented in this chapter. The videos contain fragments of the full missions and alternate between views of the HMI screen and the action of the helicopters from an external camera.

Table 8.16: Videos showing the live execution of the missions presented in this chapter. The videos can be played using the VLC media player (http://www.videolan.org).

Mission                               Link
People tracking                       http://www.aware-project.net/videos/firemen.avi

Sensor deployment http://www.aware-project.net/videos/sens.avi

Fire confirmation and extinguishing http://www.aware-project.net/videos/ir.avi

Multi-UAV surveillance http://www.aware-project.net/videos/surv.avi

Load transportation http://www.aware-project.net/videos/aware.mov




Chapter 9

Conclusions and Future Work

The control, coordination and autonomous cooperation of Unmanned Aerial Vehicles (UAVs) is a promising area of research that has gained a lot of attention. Their use is no longer confined to military applications, and more and more civil applications that make use of one or more UAVs are starting to emerge.

The main topic of this thesis has been the design and implementation of a distributed architecture for the autonomous cooperation of UAVs in civil scenarios. The application of the ideas to an actual platform composed of autonomous helicopters, and its demonstration in real civil applications, have been an important guideline for the development of the thesis.

Unmanned aerial vehicles offer advantages in many applications when compared with their manned counterparts. They spare human pilots from flying in dangerous conditions, which can be encountered not only in military applications but also in other scenarios involving operation in bad weather, or near buildings, trees, civil infrastructure and other obstacles.

Furthermore, there are commercial applications, such as the inspection of infrastructure or electrical lines, in which the use of low-cost UAVs can produce significant cost savings when compared to conventional aircraft. Moreover, the longer endurance of HALE (High Altitude / Long Endurance) and MALE (Medium Altitude / Long Endurance) platforms could provide benefits in applications such as environmental monitoring, communications and others. Hence, the prospects for market growth of UAV-based applications are very good, and it is expected that in the next 20–30 years cost-effective UAVs will replace manned aircraft in many missions and will open new application markets.

The work in this thesis has been focused on multi-UAV systems. The benefits of the multi-UAV approach when compared to the use of a single UAV can be summarized as follows:

• Increased coverage in surveillance by minimizing delays in the event detection.

• Decreased time in exploration, mapping and other missions.

• Improved reliability by avoiding dependence on a single UAV.

• Decreased uncertainties by providing simultaneous information and measurements from differ-

ent observation points.


• Possibility of teaming multiple aircraft with different and complementary characteristics and sensors.

In this chapter the main contributions of the thesis and potential future developments are sum-

marized.

9.1 Revisiting the Main Contributions

This section provides a summary of the main contributions presented throughout this thesis.

9.1.1 Summary of Contributions

The research contributions of this thesis can be classified into six main areas:

• Multi-UAV distributed architecture. The whole distributed architecture for the deliberative

layer of the UAVs has been designed, implemented in C++ and tested in real missions.

• Task decomposition and refining. In order to avoid imposing many constraints on the executive

layer capabilities of the UAVs in the platform, several techniques have been developed to de-

compose and refine complex tasks. Specifically, Chapter 4 presented different methods for task

decomposition in the context of missions involving deployment, monitoring and surveillance.

• Distributed multi-robot task allocation. Three algorithms (SIT, SET and S+T) have been

developed to solve this problem (see Chap. 5). A market based approach based on the Contract

Net Protocol (CNP) has been adopted for all the algorithms developed and two of them (S+T

and SIT) have been tested in real multi-robot platforms in the framework of the CROMAT 1

and AWARE 2 Projects.

• Conflict resolution. Two different approaches have been presented in Chap. 6: distributed and

centralized. The distributed method contributes to the state of the art in two aspects that

have not been usually addressed in previous work: the use of a fully distributed policy to avoid

the conflicts based on a negotiation protocol among the UAVs, and practical implementations

of the proposed methods with a real multi-UAV platform. On the other hand, the centralized approach developed requires changing the velocity profile of the UAVs in real time, and thus imposes more requirements on the executive layer of the UAVs. Nevertheless, it has been considered relevant to include a description of this method as a more general option that can be applied if a centralized solution is preferred and the UAVs allow such velocity control in real time.

• Multimodal technologies application for the human machine interface. The previous contri-

butions are related to the distributed architecture for the autonomous cooperation between

multiple UAVs. Although autonomy provides many advantages by itself, it is also important

1 http://grvc.us.es/cromat
2 http://www.aware-project.net


to consider the Human Machine Interface (HMI) as a key element to enable a usable and practical platform. Hence, the work in this thesis also included a study on how multimodal technologies can enhance the performance of the platform when presenting alerts to the user.

• Experimental demonstration of the whole architecture including the HMI application. Thanks

to the AWARE Project, it was possible to test the distributed software implemented in C++

following the techniques described along this thesis using a real multi-UAV platform. To

the best of our knowledge, there are very few real demonstrations of distributed multi-UAV

platforms (at least for civil applications).

On the other hand, conditions close to operational scenarios have been considered, which, in the opinion of the author, constitutes one of the main contributions of the thesis, as it illustrates the applicability of the techniques and ideas presented in the real world. Chapter 8 described

the multi-UAV missions carried out in the framework of the AWARE Project using the software

implementation in C++ of the architecture and HMI described in this thesis. As a summary, those

missions included surveillance with multiple UAVs, fire confirmation, monitoring and extinguishing,

load transportation and deployment with single and multiple UAVs, and people tracking.

9.1.2 Detailed Discussion

Some of the contributions summarized above are detailed in the following.

Distributed Multi-robot Task Allocation

An important issue in distributed multirobot coordination is the multi-robot task allocation (MRTA)

problem. It deals with how to distribute tasks among the robots of a team and requires defining

some metrics to assess the relevance of assigning each task to a given robot. This thesis is focused on

the distributed solution of the MRTA problem, but centralized ((Brumitt and Stentz, 1998), (Caloud et al., 1990)) and hybrid ((Dias and Stentz, 2002), (Ko et al., 2003)) approaches have also been

addressed in the literature.

In this thesis, three algorithms (SIT, SET and S+T) to solve the distributed task allocation

problem were presented. A market based approach based on the Contract Net Protocol (CNP)

(Smith, 1980) was adopted for all of them.

The first algorithm (SIT) was based on the ideas presented in (Dias, 2004). In order to reduce

the limitations of the former method, a new algorithm called SET, which considers subsets of tasks in the negotiation process, was also developed. From the simulation results, the SIT algorithm was selected for implementation and final use in the real multi-UAV platform, as it offered a better trade-off between solution quality and the number of messages exchanged.
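The announce/bid/award cycle of the CNP that underlies these algorithms can be sketched as follows. The data structures and cost model (straight-line travel distance) are simplified assumptions for illustration only, not the thesis implementation:

```cpp
#include <cmath>
#include <limits>
#include <string>
#include <vector>

// Simplified world model (an assumption for illustration): tasks are
// 2-D waypoints and a UAV bids its straight-line travel cost.
struct Task { double x, y; };

struct Uav {
    std::string id;
    double x, y;
    // Placeholder cost metric: Euclidean distance to the task. Weighting
    // extra terms here (e.g. energy) is how priorities would be encoded.
    double bid(const Task& t) const { return std::hypot(t.x - x, t.y - y); }
};

// One Contract Net round: the auctioneer announces the task, every UAV
// answers with a bid, and the task is awarded to the lowest bidder.
std::string auctionTask(const Task& t, const std::vector<Uav>& team) {
    std::string winner;
    double best = std::numeric_limits<double>::infinity();
    for (const auto& uav : team) {
        const double b = uav.bid(t);
        if (b < best) { best = b; winner = uav.id; }
    }
    return winner;  // empty string if the team is empty (no award)
}
```

For example, with uav1 at the origin and uav2 at (10, 0), a task at (8, 0) is awarded to uav2, which is closer. In the real algorithms the round is repeated as new tasks arrive and awarded tasks can be re-auctioned among the UAVs.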

The third algorithm (S+T) solves the MRTA problem in applications that may require cooperation among the UAVs to accomplish all the tasks. If a UAV cannot execute a task by itself, it asks for help and, if possible, another UAV will provide the required service. This protocol was also based on a distributed market-based approach and can be considered an extension of the SIT algorithm. The basic idea is that a UAV can ask for services when it cannot execute a task


by itself. The total cost will then be the sum of the cost of the task itself and that of the required service or services.

A similar idea was presented in (Lemaire et al., 2004), where soft temporal constraints were

considered using master/slave relations, and also in (Zlot and Stentz, 2006), where the efficiency of

the solution is increased considering at the same time the decomposition and allocation of complex

tasks in a distributed manner. However, the treatment of the potential execution loops associated with the relation between tasks and services, which could lead to deadlock situations, is original to our work. To the

best of our knowledge, there is no previous work dealing with this problem in a distributed manner

within the MRTA area. Moreover, the parameters of our algorithm can be adapted to give priority

to either the execution time or the energy consumption (i.e., the sum of the distances traveled by

each of the UAVs) in the mission.
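The deadlock hazard arises when task/service requests form a loop: one UAV's task waits on a service tied to another UAV's task, and vice versa. Purely as an illustration, a centralized check for such loops is a plain cycle search over the dependency graph; the distributed S+T protocol must of course detect this during the negotiation itself, without global knowledge:

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical dependency graph: an edge a -> b means task a waits for
// a service provided during the execution of task b. A cycle means the
// involved UAVs would wait on each other forever (deadlock).
using Graph = std::map<std::string, std::vector<std::string>>;

// Depth-first search tracking the nodes on the current path: revisiting
// one of them means a back edge, i.e. an execution loop.
bool hasCycleFrom(const Graph& g, const std::string& node,
                  std::set<std::string>& onPath,
                  std::set<std::string>& done) {
    if (onPath.count(node)) return true;   // back edge: loop found
    if (done.count(node)) return false;    // already proven loop-free
    onPath.insert(node);
    auto it = g.find(node);
    if (it != g.end())
        for (const auto& next : it->second)
            if (hasCycleFrom(g, next, onPath, done)) return true;
    onPath.erase(node);
    done.insert(node);
    return false;
}

bool hasDeadlock(const Graph& g) {
    std::set<std::string> onPath, done;
    for (const auto& entry : g)
        if (hasCycleFrom(g, entry.first, onPath, done)) return true;
    return false;
}
```

For instance, a graph with t1 -> t2 and t2 -> t1 would be flagged as a deadlock, while a chain t1 -> t2 would not.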

Conflict Resolution

The variability of the flying conditions, due for example to the wind, the faults that may affect the

UAVs, and the presence of other manned aircraft, including teleoperated aerial vehicles that cannot

be controlled by the system, demand the implementation of real-time collision avoidance techniques.

To the best of our knowledge, there are two aspects that have not been usually addressed in

previous work: the use of a fully distributed policy to avoid conflicts based on a negotiation

protocol among the UAVs, and practical implementations of the proposed methods with a real

multi-UAV platform. Hence, the work in this thesis has contributed in both directions.

A distributed conflict avoidance method has been developed to improve the safety conditions in the AWARE scenario, where multiple UAVs share the same aerial space. As mentioned before in Chap. 3, one of the key aspects in the design was to impose few requirements on the proprietary vehicles to be integrated in the AWARE platform. Hence, a specification of the

particular trajectory or velocity profile during the flight is not considered, and the implemented

policy to avoid the inter-vehicle conflicts is only based on the elementary set of tasks presented

in Sect. 3.3.4. On the other hand, the method is distributed and involves the negotiation among

different UAVs. It exploits the hovering capabilities of the helicopters and guarantees that each

trajectory to be followed by the UAVs is clear of other UAVs before proceeding to its execution.
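The clearance test at the heart of this policy can be sketched as follows; the 2-D geometry, the sampling-based check and the fixed safety radius are illustrative simplifications of the actual negotiated reservation of trajectories:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point { double x, y; };
struct Segment { Point a, b; };

// Minimum distance from point p to segment s.
double distToSegment(const Point& p, const Segment& s) {
    const double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
    const double len2 = dx * dx + dy * dy;
    double t = len2 > 0 ? ((p.x - s.a.x) * dx + (p.y - s.a.y) * dy) / len2 : 0.0;
    t = std::max(0.0, std::min(1.0, t));
    const double px = s.a.x + t * dx - p.x, py = s.a.y + t * dy - p.y;
    return std::hypot(px, py);
}

// Conservative clearance test: sample the candidate segment and check
// the safety radius against every segment reserved by other UAVs.
bool isClear(const Segment& candidate,
             const std::vector<Segment>& reserved, double safety) {
    const int samples = 50;
    for (int i = 0; i <= samples; ++i) {
        const double t = static_cast<double>(i) / samples;
        const Point p{candidate.a.x + t * (candidate.b.x - candidate.a.x),
                      candidate.a.y + t * (candidate.b.y - candidate.a.y)};
        for (const auto& r : reserved)
            if (distToSegment(p, r) < safety) return false;
    }
    return true;   // segment can be reserved and flown
}
```

A UAV would run isClear() on its next waypoint-to-waypoint segment against the segments currently reserved by the other UAVs; if the test fails, it exploits the helicopter's hovering capability and waits until the conflicting reservation is released.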

The distributed method implemented for the AWARE multi-UAV platform was validated during

the experiments carried out in May 2009 in the framework of the AWARE project, which were

presented in Chap. 8.

Multimodal Technologies Application for the Human Machine Interface

It is known that multimodal display techniques may improve operator performance in Ground Con-

trol Stations (GCS) for UAVs. Presenting information through two or more sensory channels has the

dual benefit of addressing high information loads as well as offering the ability to present information

to the operator within a variety of environmental constraints. A critical issue with multimodal inter-

faces is the inherent complexity in the design of systems integrating different display modalities and

user input methods. The capability of each sensory channel should be taken into account along with


the physical capabilities of the display and the software methods by which the data are rendered for

the operator. Moreover, the relationship between different modalities and the domination of some

modalities over others should be considered.

The applicability and benefits of those technologies were analyzed in this thesis for a task consisting of the acknowledgement of alerts in a UAV ground control station composed of three screens

and managed by a single operator. For this purpose, several experiments were conducted with a

group of individuals using different combinations of modal conditions (visual, aural and tactile).

9.2 Perspectives for the Application in Civil Markets

The characteristics of the UAV platforms impact the performance of the multi-UAV team. Thus,

the following developments will facilitate the penetration of UAV technology in the civil market:

New platforms: Flight endurance and range of the currently available low cost unmanned aerial

vehicles are very limited. New platforms are required for many applications. This includes

large-scale aircraft-like systems for long-range missions, medium-size platforms, mini and micro UAVs, and very small-scale systems (a few centimeters across) for indoor operations.

Autonomy: The application of new methods and technologies in avionics and robotic systems is

required in future UAVs to minimize the activity of the ground operators. Autonomous take-off and landing have been demonstrated in several works. However, landing on unknown terrain and on mobile platforms still requires significant effort. The same applies to the implementation of autonomous robotic functions such as decision making, supervision, obstacle avoidance and autonomous tracking. These autonomous functions require suitable environment perception functions and robotic architectures. In spite of some recent developments and demonstrations, more effort is still required for the efficient and reliable implementation of these functionalities in commercial systems.

Ground control station and operator interfaces: This involves the adoption of interfaces and

systems to monitor the activity of UAVs and to facilitate the intervention of the operators when

needed. The application of telerobotics concepts and new multimedia interface technologies

will favor new implementations. Easy transportation and deployment is also a need for many

applications.

Reliability: It is related to the platform itself (mechanics, power system, electronics) and to the

implementation of the above mentioned autonomous functions in a variety of different con-

ditions. High dependability properties will be essential in many missions, particularly when

considering activities in populated areas. Hence, the design and implementation of fault detection and identification techniques, as well as new fault-tolerant control methods for unmanned aerial vehicles, are required. On the other hand, autonomous high-level supervision mechanisms

are also required.


Application sensors: Development and integration of best-adapted on-board sensors, ranging from low-cost, low-energy and low-accuracy sensors to high-precision sensors for long-distance detection and the close monitoring of events. In some applications, new efficient

sensor data fusion techniques could improve the performance of individual sensors.

Communication devices: Development of light, low-cost and long-range communication devices

to provide more reliable communication and high-bandwidth links, integrating the UAV with

the ground infrastructure and with other UAVs and unmanned systems in general. This is

related to both the individual UAV communication system and the networking issues, through the adoption of new technologies for mobile systems.

Affordability: Development of cost-effective platforms, with adequate payload capacity, well-suited

to specific application needs and well-defined missions. This is also related to the development

of modular platforms and interoperable systems that could be used with a variety of payloads,

missions and applications. Usability and flexibility will also be required to adapt the UAVs to

their missions and context of operations.

The following paragraphs are specifically devoted to the coordination and cooperation of multiple

unmanned aerial vehicles.

9.3 Future Developments

As far as the cooperation of unmanned aerial vehicles is concerned, more research and development

activities are required to implement and demonstrate higher cooperation levels. Particularly, the im-

plementation of decentralized architectures could provide benefits when considering scalability and

reliability issues. This implementation requires the above mentioned increase in UAV autonomy,

which could be achieved by adopting the progress in embedded computing systems and new minia-

turised sensors. Furthermore, new cooperation strategies that consider explicitly fault tolerance and

reliability issues are required.

Another new trend could be the development of new control and cooperation techniques for tasks

requiring strong interactions between vehicles and between vehicles and the environment, such as the

manipulation of objects through the cooperation of several helicopters. However, this technology is being tested in simulation, and only recently have projects dealing with demonstrations on real UAVs been proposed.

The communication technologies will obviously play an important role in the cooperation of

multiple unmanned aerial systems. In the context of AWARE a middleware was designed, tested

and implemented in the experiments. This middleware can be applied with different communication

technologies. The evolution of these technologies could lead to different implementations. This

includes the use of more reliable communication and high-bandwidth links integrating the UAV

with the ground infrastructure and with other UAVs and unmanned systems in general. This is

related to both the individual UAV communication system and the networking technologies, through the adoption of new technologies for mobile systems.


In general, the development of the ground infrastructure deserves special attention. Usually this

infrastructure is not available, or the communication range required for the UAV operation is too

large for the existing technology. In general, the market application of UAVs requires not only low

cost vehicles but also infrastructure, platforms and systems to integrate them. Thus, for example,

the integration with ground sensor and communication networks could have an important impact

on the development of new products. This approach requires the development of new cooperative

fault-tolerant aerial/ground-based perception techniques for object and event recognition providing

reactivity to changes in the environment.

The ground infrastructure also includes the development of ground stations for the monitoring

and operation of multiple unmanned aerial vehicles. Some of the concepts present in the AWARE

Human Machine Interface are useful, but significant research and development efforts are needed to

produce a ground station in which a minimal number of human operators can monitor a team of

UAVs.

Moreover, the practical application of a team of aerial vehicles will require the integration with

piloted aerial vehicles. Thus, for example, this is clear in the Disaster Management and Civil Security

applications mentioned in Chap. 8. In real scenarios, piloted airborne means, i.e. airplanes and helicopters, are used today in disaster management activities. Hence, the coordination of these aerial means with the unmanned aerial vehicles is a must, and the architecture presented in Chap. 3 should be extended to integrate with conventional aircraft.

In general, the lack of integration of the UAVs with the existing air traffic control systems is

a major barrier for many commercial applications. This is related to the certification required to

fly in civil airspaces. Another barrier is the lack of standard/modular platforms and standardized

components, and the development of common UAV interoperability standards. The development

of these regulations and standards will play an important role in the practical application of the

technologies presented in this thesis.

9.4 Final Remarks

The cooperation of multiple autonomous aerial vehicles is a very suitable approach for many appli-

cations, including detection, precise localization, monitoring and tracking in emergency scenarios.

In these applications the UAVs do not modify the state of the environment and there are no physi-

cal interactions between the UAV and the environment. Furthermore, the interactions between the

UAVs are essentially information exchanges, without physical couplings between them. This thesis

demonstrates the possibilities of the cooperation and control of multiple aerial robots with sensing

and actuation capabilities allowing load deployment (in particular sensor nodes deployment).

Furthermore, the thesis presents the multi-UAV load transportation, which requires the consid-

eration of physical interactions between the aerial robots. The thesis has presented a multi-UAV

architecture developed in the AWARE project that allows different levels of interaction among the

UAVs and between the UAVs and the environment, including both sensing and actuation. The inter-

action between all the systems in this architecture can be represented by means of a hybrid system formulation coping with both the discrete events associated with the tasks to achieve a given mission and


the continuous dynamic interactions between physically coupled UAVs.

Particularly, the thesis has presented results obtained in the AWARE project demonstrating the

lifting and transportation of a slung load by means of one helicopter and also by three coupled

helicopters, which has been the first demonstration of this challenging application. On the other

hand, the thesis also presented several multi-UAV missions, including surveillance with multiple UAVs,

fire confirmation, monitoring and extinguishing, and people tracking.

The proposed methods open many different new opportunities in missions involving the coop-

eration of multiple UAVs for applications such as search and rescue and interventions in disaster

management and civil security. The transportation of loads by UAVs can also be considered a first step toward cargo transportation by means of UAVs.


Appendix A

Plan Builder / Optimizer

The plan builder module (see Fig. 3.5), when operating in offline mode, is based on the EUROPA

framework developed at NASA’s Ames Research Center and available under NASA’s open source

agreement (NOSA) since 2007. NOSA is an OSI-approved software license accepted as open source

but not free software. EUROPA is a class library and tool set for building planners (and/or sched-

ulers) within a Constraint-based Temporal Planning paradigm and it is typically embedded in a host

application. Constraint-based Temporal Planning (and Scheduling) is a paradigm of planning based

on an explicit notion of time and a deep commitment to a constraint-based formulation of plan-

ning problems. This paradigm has been successfully applied in a wide range of practical planning

problems and has a legacy of success in NASA applications including:

• Observation scheduling for the Hubble Telescope (Muscettola et al., 1998).

• Autonomous control of the DS-1 (Deep Space 1) spacecraft.

• Ground-based activity planning for MER (Ai-Chang et al., 2004).

• Autonomous control of EO-1 (Tran et al., 2004).

and therefore, it is a reasonable choice for the type of missions considered in the AWARE project.

A.1 EUROPA Overview

EUROPA (Extensible Universal Remote Operations Planning Architecture) is a framework to model

and tackle problems in Planning, Scheduling and Constraint Programming. It is designed to be

expressive, efficient, extendable and configurable. It includes:

A Plan Database: The technology cornerstone of EUROPA for storage and manipulation of plans

as they are initialized and refined. The EUROPA Plan Database integrates a rich represen-

tation for actions, states, objects and constraints with powerful algorithms for automated

reasoning, propagation, querying and manipulation.


A Problem Solver: A core solver to automatically find and fix flaws in the plan database. It can

be configured to plan, schedule or both. It can be easily customized to integrate specialized

heuristics and resolution operations.

A Tool Box: EUROPA includes a debugger for instrumentation and visualization of applications. It

also includes a very high-level, declarative modeling language for describing problem domains

and partial-plans.

EUROPA is now in its second version and is the successor of the original EUROPA, which in

turn was based upon HSTS (Muscettola et al., 1998). EUROPA offers capabilities in three key areas

of problem solving:

Representation: EUROPA provides a rich representation for actions, states, resources and constraints that allows concise declarative descriptions of problem domains and powerful expressions of plan structure. This representation is supported with a high-level object-oriented

modeling language for describing problem domains and data structures for instantiating and

manipulating problem instances.

Reasoning: Algorithms are provided which exploit the formal structure of problem representation

to enforce domain rules and propagate consequences as updates are made to the problem

state. These algorithms are based on logical inference and constraint-processing. In particular,

specialized techniques are included for reasoning about temporal quantities and relations.

Search: Problem solving in EUROPA requires search. Effective problem solving typically requires

heuristics to make search tractable and to find good solutions. EUROPA provides a frame-

work for integrating heuristics into a basic search algorithm and for developing new search

algorithms.

EUROPA is not an end-user application. Rather, it is a means to integrate advanced planning,

scheduling and constraint reasoning into an end-user application. EUROPA is not a specific planner

or a scheduler. Rather it is a framework for developing specific planners and/or schedulers. It is

designed to be open and extendable to accommodate diverse and highly specialized problem solving

techniques within a common design framework and around a common technology core.

EUROPA is unconventional in providing a separate Plan Database that can be integrated into a wide

variety of applications. This reflects the common needs for representation and manipulation of plan

data in different application contexts and different problem solving approaches. Possible approaches

include:

• A batch planning application where an initial state is input and a final plan is output without

any interaction with other actors.

• A mixed-initiative planning application where human users interact directly with a plan

database but also employ an automated problem solver to work on parts of the planning

problem in an interleaved fashion.


• An autonomous execution system where the plan database stores the plan data as it evolves

in time, being updated from data in the environment, commitments from the executive, and

the accompanying automated solver which plans ahead and fixes plans when they break.

The latter approach has been adopted in the implementation of the AWARE plan builder module.

The next section provides a brief overview of the main modules present in the EUROPA archi-

tecture.

A.2 EUROPA Architecture

The modules of EUROPA and their interdependencies are set out in Fig. A.1.

Figure A.1: EUROPA modules and their dependencies.

EUROPA is a highly modular architecture and modules can be developed, tested and applied

quite independently:

Utils: provides common C++ utility classes for error checking, smart pointers etc. It also includes a

very useful debugging utility. Many common programming practices in EUROPA development

are built on assets in this module.

Constraint Engine: is the nexus for consistency management. It provides a general-purpose

component-based architecture for handling dynamic constraint networks. It deals in vari-

ables and constraints. It includes an open propagation architecture making it straightforward

to integrate specialized forms of local and global constraint propagation.

Plan Database: adds higher levels of abstractions for tokens and objects and the interactions

between them. This is the code embodiment of the EUROPA planning paradigm. It supports

all services for creation, deletion, modification and inspection of partial plans. It maintains the

dynamic constraint network underlying a partial plan by delegation to the Constraint Engine


and leverages that propagation infrastructure to maintain relationships between tokens and

objects.

Solvers: provides abstractions to support search in line with the EUROPA planning approach. It

includes a component-based architecture for Flaw Identification, Resolution and heuristics as

well as an algorithm for chronological backtracking search. As additional search algorithms

are implemented they will be added to this module.

Rules Engine: provides the inference capabilities based on domain rules described in the model. It

is almost exclusively used to execute NDDL rules but can be extended for custom rule formats.

Resources: provides specialized algorithms and data structures to support metric resources (e.g.

battery, power bus, disk drive).

Temporal Network: provides specialized algorithms and data structures to support efficient prop-

agation of temporal constraints.

NDDL module: provides a parser and compiler for NDDL (pronounced noodle) which is a very

high-level, object-oriented, declarative domain and problem description language. This module

defines the mapping from the language to the code and consequently interfaces to a number

of key modules in the system.

PlanWorks: is a Java application for visualization and debugging of plans and planning. It is

loosely coupled to the other EUROPA modules through a JNI interface.

From an application developer's viewpoint, the modules of interest are NDDL, Solvers and

PlanWorks. These modules address modeling, search and troubleshooting respectively.

A.3 Application Example: a Deployment Mission in EUROPA

In order to illustrate how to approach modeling application domains in the EUROPA framework, this

section explains a simple application example: a deployment mission for an autonomous helicopter.

We will build a batch application where we provide an application domain model and problem

definition to the planner. The planner must then generate a plan to solve the problem. Fig. A.2

shows the main inputs and outputs of our application.

The main entity in our application domain is the helicopter. The next decision is to identify

the entities that will describe changes in state of the helicopter as it moves around the environment

performing a mission. We call each entity a timeline. The helicopter in our domain is the actor for

which we do the planning and which contains all the timelines. Analyzing the components of the

helicopter produces the following breakdown of timelines (see Fig. A.3):

Navigator: controls the motion of the helicopter between locations and hovers at a location.

Instrument: controls the instrument for dropping nodes.


Figure A.2: Example batch application overview.

Commands: manages instructions from the HMI and checks the communication link after dropping

a node.

The next stage is to identify the states that each timeline can be in. We call each state a predicate.

The easiest way to identify the predicates is to think through the lifecycle of each timeline. Figure A.3

shows the set of predicates identified on each timeline along with the variables in each state:

Navigator: the helicopter is hovering at a location or going between locations.

Instrument: the instrument for dropping nodes can be on or off.

Commands: the helicopter can be instructed to drop a node and has to check the communication

link.

The meaning of the state transitions added to this figure is intuitive. A timeline is always in a given state, and it can transition only to a new state connected to it in the diagram by an arrow. The

next stage is to consider the constraints between predicates on different timelines. So far we have

only used the notion of state transitions to connect predicates; these map to the temporal relations

of meets and met_by and are sufficient for timelines where only one predicate instance can occur

at any given moment. When we start to connect predicates between timelines we need to use the

full range of temporal relations as we begin to deal with concurrent states. In our diagram, only

contains and contained_by temporal relations have been considered.
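As an illustration of these relations (a minimal sketch in plain Python, not EUROPA code), predicate instances can be modeled as time intervals and the temporal relations as simple predicates over their endpoints:

```python
# Sketch: Allen-style temporal relations between two predicate instances,
# each modeled as a half-open interval (start, end).

def meets(a, b):
    """a meets b: a ends exactly when b starts."""
    return a[1] == b[0]

def met_by(a, b):
    """a met_by b: b ends exactly when a starts."""
    return b[1] == a[0]

def contains(a, b):
    """a contains b: b lies within a."""
    return a[0] <= b[0] and b[1] <= a[1]

def contained_by(a, b):
    return contains(b, a)

# Example (invented times): Going (10, 30) meets Hovering (30, 50);
# DropNode (30, 40) is contained_by DropInstrumentOn (28, 45).
```

On single-predicate timelines only meets/met_by can hold between consecutive instances; between timelines, relations such as contains constrain concurrent states.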

Next, the application domain description is encoded in NDDL (New Domain Description Language). The description contains the Helicopter class, which pulls together all the

components we have defined so far. It has an attribute for the navigator, instrument, commands and

battery classes. The constructor takes an instance of the built-in Battery class and creates instances

of the other classes to set up the helicopter.

The initial state of the domain, in turn, is also encoded in NDDL. It contains the

specific locations of the waypoints where the nodes should be dropped, the initial location and


Figure A.3: Timelines and predicates, with transitions between predicates on each timeline. The Navigator timeline holds the Hovering (location) and Going (from, to, path, duration) predicates; the Instrument timeline holds DropInstrumentOff and DropInstrumentOn (wp, duration); and the Commands timeline holds DropNode (wp, duration) and CheckLink (wp, duration), linked to the Instrument predicates through contains/contained_by relations.

battery level of the UAV and the different paths (with their associated costs computed by the plan

refining toolbox) between the locations of interest.

Finally, let us consider that the goal of the mission is to deploy three nodes at different locations:

• wp1 at time 30.

• wp4 at time 60.

• wp3 at time 90.

These goals can be easily encoded in NDDL as follows:

goal(Commands.DropNode drop_node_1);

drop_node_1.start.specify(30);

drop_node_1.wp.specify(wp1);

goal(Commands.DropNode drop_node_2);

drop_node_2.start.specify(60);

drop_node_2.wp.specify(wp4);

goal(Commands.DropNode drop_node_0);

drop_node_0.start.specify(90);

drop_node_0.wp.specify(wp3);
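As a plain-Python illustration (not NDDL or EUROPA code), the kind of temporal consistency the planner must enforce for these timed goals can be sketched as a feasibility check over flight durations between waypoints; all durations below are invented for the example:

```python
# Sketch: check that timed DropNode goals are reachable in sequence,
# given hypothetical flight durations between waypoints.

goals = [("wp1", 30), ("wp4", 60), ("wp3", 90)]   # (waypoint, goal start time)
flight_time = {("base", "wp1"): 20, ("wp1", "wp4"): 25, ("wp4", "wp3"): 15}
drop_duration = 2  # time spent hovering while dropping a node

def feasible(goals, flight_time, start_loc="base", t=0):
    """Return True if the helicopter can reach each goal before its start time."""
    loc = start_loc
    for wp, start in sorted(goals, key=lambda g: g[1]):
        t += flight_time[(loc, wp)]
        if t > start:
            return False          # arrives too late for this goal
        t = start + drop_duration  # wait for the goal's start time, then drop
        loc = wp
    return True
```

With the invented durations above the three goals are consistent; a real planner additionally chooses among alternative paths using the costs computed by the plan refining toolbox.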



With the model and the initial state specified, the planner can be started to compute the solution.

Figure A.4 shows a screenshot with the visualization of the results obtained with PlanWorks.

Figure A.4: Screenshot with the visualization of the results obtained with PlanWorks.




Appendix B

Network Setup in the AWARE Project

This appendix provides a brief overview of the network setup used during the experiments summarized in Chap. 8. First, based on a diagram of the different processes involved and their IP addresses, the

AWARE platform components are revisited. Then, the characteristics of the middleware developed

in the framework of the AWARE project are described. Finally, the solution adopted to solve the

time synchronization problem in the AWARE network is presented.

B.1 Platform Components

Figure B.1 shows the IP addresses of the different processes of the AWARE platform during the missions. This scheme also allows us to revisit the physical components of the platform:

• Three Wireless Sensor Networks (WSN1, WSN2 and WSN3) to measure different physical

variables inside and around the building. Each WSN had a laptop acting as a gateway.

• A Wireless Sensor Network (WSN RSSI) with the capability to measure the RSSI among other

physical variables. The radio signals are processed by its gateway to compute a distributed

estimation of the location of the mobile nodes.

• Two ground cameras (GCN1 and GCN2) gathering images of the building and the surrounding

area.

• Three cameras ready to be mounted on-board the UAVs.

• A wireless camera equipped with a pan&tilt unit attached to the load transported by the LTS.

• Four UAVs or the LTS plus one additional UAV, each with an associated ODL process.

• The human machine interface application running on the computer with IP address 192.168.0.2.

This computer is also connected through the serial port to a GPS receiver and acts as the NTP server for the platform (see Sect. B.3).



• The automated monitor mounted on-board the fire truck.

Figure B.1: Network setup used during the missions summarized in Chap. 8.

B.2 AWARE Middleware

The main goal of the middleware was to enable transparent communication between the different instances of the AWARE system, which are generally heterogeneous in hardware and software. It was

developed by the Universities of Bonn and Stuttgart and is detailed in (Universities of Bonn and

Stuttgart (AWARE partners), 2007).


The main requirement on the middleware architecture was that it had to enable tight cooperation between applications running on different hardware and software platforms: gateways,

High-Bandwidth Network (HBN) devices and Low-Bandwidth Network (LBN) devices. Moreover,

the middleware running on gateways must provide extended functionality to enable transparent

communication between the devices in the HBN and LBN. Due to the high heterogeneity and the absence of hardware or software interface standards among LBN devices, the gateway middleware extension defines

the protocol of LBN/HBN cooperation and does not set any constraints on the middleware design

solutions for LBN devices.

The design criteria adopted and the strategic decisions taken are detailed in the following, along

with the final features of the middleware:

• Dealing with hardware and software heterogeneities of AWARE instances.

Interactions of heterogeneous AWARE instances require the middleware to hide their differences in hardware and software. This implies that the AWARE middleware must be executed

on all the platforms involved in the project. This includes gateway PCs running different

distributions of Linux or Windows XP, and PC/104 boards running Windows CE and/or QNX that control the UAVs. It was agreed to use the ACE facade library, which supports all the mentioned platforms and provides a UNIX-like platform-independent API. The middleware code is therefore

platform-independent. Additionally, a common data representation (CDR) format is used for

data exchange between different platforms.

• Transparent communication.

In order to achieve transparent communication between AWARE instances and provide a good

network performance, there is a strong need to implement and to test (through simulations

and real-world experiments) several reliable and unreliable routing protocols. Important test criteria are robust network reconfiguration to handle the mobility of the UAVs, and high throughput to enable the transmission of large volumes of video data.

• Data-centric architecture style.

The AWARE system uses a distributed data-centric architecture which maps naturally to

a publish/subscribe communication model. A publish/subscribe communication model uses

asynchronous message passing to connect information producers (publishers) with information

consumers (subscribers). All AWARE devices in both the HBN and LBN can potentially be

publishers and subscribers.

• Data types and channels.

Sensor readings provided by the WSNs and the video streams of the cameras are the sensor data that may be of interest to any AWARE instance. The produced sensor data has spatial and temporal characteristics, expressed by means of node coordinates and a timestamp indicating when the sensor reading was taken. The data flow through the network is organized in channels depending on

the type of sensor data (temperature, humidity, video, etc.). We distinguish between data

channels for transmission of sensor data and command channels for control information.
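A minimal sketch of such a channel-keyed publish/subscribe hub (an assumption for illustration, not the actual AWARE middleware API) could look as follows:

```python
# Sketch: a minimal channel-keyed publish/subscribe hub. Channels are named
# after the sensor data type, mirroring the data/command channel split above.

from collections import defaultdict

class Hub:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, channel, callback):
        self._subs[channel].append(callback)

    def publish(self, channel, position, timestamp, value):
        # Every message carries the spatial and time stamps described above.
        msg = {"pos": position, "t": timestamp, "value": value}
        for cb in self._subs[channel]:
            cb(msg)

# Usage: a subscriber on a hypothetical "temperature" data channel.
hub = Hub()
received = []
hub.subscribe("temperature", received.append)
hub.publish("temperature", (37.41, -6.00, 10.0), 1234.5, 21.7)
```

Any device in the HBN or LBN plays either role simply by calling publish or subscribe on the appropriate channel.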


• Interfaces and generic components.

Each component of the middleware architecture will implement a number of interfaces used for

interacting with the other components. Each interface consists of a set of functions which are

specialized for some type of interaction. The intention is to make these interfaces as generic as possible.

• Re-use standard components.

The objective is to avoid unnecessary duplication by identifying components in the different parts of the system that are the same or very similar. Moreover, re-using the existing components of the ACE library wherever possible was considered useful.

• Integration of technology standards.

B.3 Time Synchronization

In a distributed system, time synchronization is a critical issue since many algorithms rely on it.

The Network Time Protocol (NTP) is a protocol for synchronizing the clocks of computer systems

over packet-switched, variable-latency data networks. NTP uses UDP on port 123 as its transport

layer. It is designed particularly to resist the effects of variable latency by using a jitter buffer. NTP

also refers to a reference software implementation that is distributed by the NTP Public Services

Project.

NTP is one of the oldest Internet protocols still in use (since before 1985). NTP was originally

designed by Dave Mills of the University of Delaware, who still maintains it, along with a team of

volunteers. NTP uses Marzullo’s algorithm, and includes support for features such as leap seconds.

NTPv4 can usually maintain time to within 10 milliseconds (1/100 s) over the public Internet, and

can achieve accuracies of 200 microseconds (1/5000 s) or better in local area networks under ideal

conditions.
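For illustration, the core timestamp arithmetic an NTP client performs can be sketched as follows (offline conversion between the 64-bit NTP timestamp format and the Unix epoch; no network I/O):

```python
# Sketch: NTP timestamps count seconds since 1900-01-01 in a 32-bit integer
# part plus a 32-bit fractional part; Unix time counts from 1970-01-01.

NTP_UNIX_DELTA = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def ntp_to_unix(seconds, fraction):
    """seconds/fraction are the two 32-bit halves of an NTP timestamp."""
    return seconds - NTP_UNIX_DELTA + fraction / 2**32

def unix_to_ntp(unix_time):
    seconds = int(unix_time) + NTP_UNIX_DELTA
    fraction = int((unix_time % 1) * 2**32)
    return seconds, fraction
```

The 32-bit fractional part gives a resolution of about 233 picoseconds, far finer than the millisecond-level accuracy achievable over the network.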

NTP provides Coordinated Universal Time (UTC). No information about time zones or daylight

saving time is transmitted; this information is outside its scope and must be obtained separately. In

isolated LANs, NTP could in principle be used to distribute a different time scale (e.g. local zone

time), but this is uncommon.

On modern Unix systems, the NTP client is implemented as a daemon process (ntpd) that runs continuously in user space. Because of its sensitivity to timing, however, it is important to have the standard NTP clock phase-locked loop implemented in kernel space. All recent versions of Linux, BSD, Mac OS X and Solaris implement it in this manner.

In the AWARE platform, the station running the human machine interface application (IP ad-

dress 192.168.0.2 in Fig. B.1) was configured as the NTP server. The PC received the GPS time

through a serial port from the GPS base station used in the platform. The driver supported GPS

receivers with the $GPRMC, $GPGLL, $GPGGA, $GPZDA, and $GPZDG NMEA sentences by

default, but the GPS base station was finally configured to send $GPRMC sentences. In addition, an NTP client was installed on all the components of the platform. During all the


experiments, no issues related to time synchronization were detected with this configuration.
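As an illustration of the kind of processing the NTP reference-clock driver performs, the following sketch validates and parses a $GPRMC sentence; the sample sentence is invented for the example, not one recorded during the experiments:

```python
# Sketch: validating and parsing an NMEA $GPRMC sentence. The NMEA checksum
# is the XOR of all characters between '$' and '*'.

def nmea_checksum(body):
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return cs

def parse_gprmc(sentence):
    body, _, given = sentence.lstrip("$").partition("*")
    if given and int(given, 16) != nmea_checksum(body):
        raise ValueError("bad NMEA checksum")
    fields = body.split(",")
    if fields[0] != "GPRMC":
        raise ValueError("not a $GPRMC sentence")
    return {"utc": fields[1], "status": fields[2],
            "lat": fields[3] + fields[4], "lon": fields[5] + fields[6]}

# Illustrative sentence (roughly the latitude/longitude of Seville).
body = "GPRMC,120000,A,3724.6,N,00600.2,W,0.0,0.0,010709,,"
sentence = "$%s*%02X" % (body, nmea_checksum(body))
info = parse_gprmc(sentence)
```

The UTC field of a valid ("A" status) sentence is what the NTP server uses as its reference time.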


Appendix C

Coordinate Systems in the AWARE Platform

This appendix provides a summary of the coordinate frames adopted for the different AWARE platform components. It is worth mentioning that when different partners develop different parts of a platform, it is quite important to follow a common convention for the coordinate frames. Although this statement may sound obvious, many integration meetings during the three years of the AWARE Project showed that a common source of errors is the incorrect handling of the coordinate frames. Hence, this appendix presents the common convention followed in the project, which is also used in different chapters of this thesis.

C.1 Notation

Let us consider two different coordinate frames $A$ and $B$ with standard bases $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ and $\{\mathbf{i}', \mathbf{j}', \mathbf{k}'\}$ respectively and the same origin. Let us denote a vector $\mathbf{v}$ expressed in the coordinate frame $A$ as $\mathbf{v}^A$. The same vector $\mathbf{v}$ can also be expressed in $B$ as
\[ \mathbf{v}^B = \mathbf{R}_A^B \, \mathbf{v}^A, \tag{C.1} \]
where $\mathbf{R}_A^B$ is the rotation matrix from frame $A$ to $B$, which can be computed from the standard bases as
\[ \mathbf{R}_A^B = \begin{pmatrix} \mathbf{i}\cdot\mathbf{i}' & \mathbf{j}\cdot\mathbf{i}' & \mathbf{k}\cdot\mathbf{i}' \\ \mathbf{i}\cdot\mathbf{j}' & \mathbf{j}\cdot\mathbf{j}' & \mathbf{k}\cdot\mathbf{j}' \\ \mathbf{i}\cdot\mathbf{k}' & \mathbf{j}\cdot\mathbf{k}' & \mathbf{k}\cdot\mathbf{k}' \end{pmatrix}. \tag{C.2} \]
As the rotation matrix is orthogonal, its inverse is equal to its transpose, and hence the rotation matrix from frame $B$ to $A$ can be obtained as
\[ \mathbf{R}_B^A = (\mathbf{R}_A^B)^{-1} = (\mathbf{R}_A^B)^T. \tag{C.3} \]


Considering only the rotation of the reference frame is sufficient to work with free vectors such as velocity, force, etc. However, if position vectors are used, the translation of the frame origin must also be considered in the transformation from one reference frame to another. Let us therefore now consider that the origins of the coordinate frames $A$ and $B$ are different. Let $\mathbf{t}^A$ be the translation vector from the origin of $A$ to the origin of $B$, expressed in $A$. The transformation matrix $\mathbf{T}_B^A$ transforms a vector expressed in the reference frame $B$ to the frame $A$:
\[ \begin{bmatrix} \mathbf{v}^A \\ 1 \end{bmatrix} = \mathbf{T}_B^A \begin{bmatrix} \mathbf{v}^B \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{R}_B^A & \mathbf{t}^A \\ \mathbf{0}^T & 1 \end{bmatrix} \begin{bmatrix} \mathbf{v}^B \\ 1 \end{bmatrix}. \tag{C.4} \]
If the inverse transformation is required, the following formula can be applied for the inversion of the matrix:
\[ \mathbf{T}_A^B = (\mathbf{T}_B^A)^{-1} = \begin{bmatrix} (\mathbf{R}_B^A)^T & -(\mathbf{R}_B^A)^T \, \mathbf{t}^A \\ \mathbf{0}^T & 1 \end{bmatrix}. \tag{C.5} \]
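The construction and inversion of the homogeneous transformation in (C.4) and (C.5) can be sketched numerically (plain Python, illustrative only):

```python
# Sketch: homogeneous transforms as 4x4 nested lists (R is 3x3, t is 3x1).

def make_T(R, t):
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]

def invert_T(T):
    # (C.5): inverse rotation is the transpose; inverse translation is -R^T t.
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(Rt[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return make_T(Rt, t)

def apply_T(T, v):
    v1 = v + [1.0]  # homogeneous coordinates
    return [sum(T[i][j] * v1[j] for j in range(4)) for i in range(3)]

# Example: a 90-degree rotation about z plus a translation.
R = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
T = make_T(R, [1.0, 2.0, 3.0])
```

Applying the transform and then its inverse recovers the original vector, which is a convenient unit check during integration.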

C.2 Global Coordinate System G

It will be assumed that the operational area of the platform is small enough to be considered

approximately flat. Thus, the global coordinate system has its x-axis pointing to the East and its y-axis pointing to the North. The z-axis lies on the line from the center of the Earth to the operational area, pointing upwards (see Fig. C.1).

This frame is used as a common reference for the 3D orientation of the different platform compo-

nents. But to define the coordinates of a given point, two different approaches have been followed:

1. In the interfaces of the software implementation of the HMI and the ODL, the geographic

coordinate system has been used. It is a coordinate system that enables every location on

Earth to be specified in three coordinates, using mainly a spherical coordinate system: latitude,

longitude and altitude (see Fig. C.1):

• Latitude (abbreviation: Lat., ϕ, or phi) is the angle from a point on the Earth’s surface

to the equatorial plane, measured from the center of the sphere. The north pole is 90° N; the south pole is 90° S. The 0° parallel of latitude is designated the equator, the

fundamental plane of all geographic coordinate systems. The equator divides the globe

into Northern and Southern Hemispheres.

• Longitude (abbreviation: Long., λ, or lambda) is the angle east or west of a reference

meridian between the two geographical poles to another meridian that passes through

Page 235: personal.us.es · Agradecimientos Durante la lenta y multiples´ veces interrumpida evoluci´on de esta tesis he acumulado muchas deudas, y solamente tengo espacio para agradecer

C.3 UAV Coordinate System U 211

an arbitrary point. All meridians are halves of great circles, and are not parallel. They

converge at the north and south poles. A line passing to the rear of the Royal Observatory,

Greenwich (near London in the UK) has been chosen as the international zero-longitude

reference line, the Prime Meridian. Places to the east are in the eastern hemisphere, and

places to the west are in the western hemisphere. The antipodal meridian of Greenwich

is both 180° W and 180° E.

• Altitude: to completely specify a location of a topographical feature on, in, or above

the Earth, one has to also specify the vertical distance from the centre of the sphere, or

from the surface of the sphere. Because of the ambiguity of “surface” and “vertical”, it

is more commonly expressed relative to a more precisely defined vertical datum such as

mean sea level at a named point.

2. In the internal software implementation of the HMI and ODL, when it is required to do any

calculation in a Cartesian coordinate system, the geographic coordinates are translated using

the Universal Transverse Mercator (UTM) coordinate system. It is a grid-based method of

specifying locations on the surface of the Earth that is a practical application of a 2-dimensional

Cartesian coordinate system. The UTM coordinate system has been avoided in the interfaces between different software applications (possibly from different partners or companies) because it was found that the particular library chosen for the conversion can lead to different results. Another source of errors was the selection of the correct, common UTM zone for the conversion. Hence, as mentioned above, only the geographic coordinate system has been used in the interfaces between software applications.

On the other hand, trajectories and waypoints are plotted in a plane in different figures throughout this document. For the sake of clarity, only UTM coordinates are used in the associated tables of values.
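Under the flat-operational-area assumption above, a simple local Cartesian conversion can be sketched with an equirectangular approximation (illustrative only; the platform interfaces exchange geographic coordinates, and the internal conversions use UTM):

```python
# Sketch: geographic coordinates to local East/North metres around a
# reference point, valid for small, approximately flat operational areas.

import math

EARTH_RADIUS = 6_371_000.0  # mean Earth radius in metres

def geo_to_local(lat, lon, ref_lat, ref_lon):
    """Return (east, north) in metres relative to (ref_lat, ref_lon)."""
    east = math.radians(lon - ref_lon) * EARTH_RADIUS * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * EARTH_RADIUS
    return east, north
```

Note the cosine factor on the east component: one degree of longitude shrinks with latitude, which is one of the details a shared convention has to pin down.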

C.3 UAV Coordinate System U

A common reference frame has been adopted for all the UAVs in the platform. The x-axis is

pointing forwards, whereas the z-axis is downwards (the y-axis is given by the right-hand rule).

Figure C.2 shows a photograph of a TUB-H model autonomous helicopter with its coordinate frame

superimposed.

The origin of the coordinate frame is located at the GPS antenna because the measured GPS coordinates correspond exactly to that point.

At the executive level, the 321 Euler angles (rotations about the helicopter body-fixed axes in the following order: z, y′, x′′) were adopted for the UAV 3D orientation. However, in the interface with the ODL and the HMI, the rotation matrix was used to provide the 3D orientation of the UAV, because the rotation matrix avoids possible misunderstandings related to the angles internally adopted to represent the 3D orientation (order of the rotations applied, different conventions, etc.).


Figure C.1: Global coordinate frame considered for the operational area. The x-axis is pointing to the East and the y-axis to the North, whereas the z-axis lies on the line from the center of the Earth to the operational area, pointing upwards. This frame is used as a common reference for the 3D orientation of the different platform components. The concepts of latitude (ϕ) and longitude (λ) are also represented, along with their extreme values.


Figure C.2: A photograph of a TUB-H model autonomous helicopter with its coordinate frame superimposed. The x-axis points forwards, whereas the z-axis points downwards (the y-axis is given by the right-hand rule). This convention is adopted for all the UAVs in the platform.


The 321 Euler angles $q_1$, $q_2$ and $q_3$ are the angles about the $x$, $y$ and $z$ axes, respectively. The zero orientation ($q_1 = q_2 = q_3 = 0$) is the orientation in which the x-axes of the UAV and global reference frames coincide and the y-axes have opposite directions. The resulting rotation matrix is given by
\[ \mathbf{R}_U^G = \begin{pmatrix} \cos q_2 \cos q_3 & \sin q_3 \cos q_2 & -\sin q_2 \\ \sin q_3 \cos q_1 - \sin q_1 \sin q_2 \cos q_3 & -\cos q_1 \cos q_3 - \sin q_1 \sin q_2 \sin q_3 & -\sin q_1 \cos q_2 \\ -\sin q_1 \sin q_3 - \sin q_2 \cos q_1 \cos q_3 & \sin q_1 \cos q_3 - \sin q_2 \sin q_3 \cos q_1 & -\cos q_1 \cos q_2 \end{pmatrix}. \tag{C.6} \]

Multiplying this matrix by any vector expressed in the UAV coordinate frame, we get the same

vector expressed in the global coordinate frame.

C.4 Camera Coordinate System C

For the camera on-board the UAV, the coordinate frame adopted is shown in Fig. C.3.

The roll (γ), pitch (β) and yaw (α) angles are used to define the 3D orientation of the camera.

In the zero orientation (α = β = γ = 0), the y-axis of the camera coincides with the z-axis of the

UAV, whereas the z-axis of the camera coincides with the x-axis of the UAV. Then, the rotation matrix that transforms a vector from the local camera frame to the UAV frame for the zero orientation is given by
\[ \mathbf{R}_C^U(\alpha = 0, \beta = 0, \gamma = 0) = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}. \tag{C.7} \]

On the other hand, the usual convention in aviation is followed:

• Yaw: counterclockwise rotation of α about the z-axis.

• Pitch: counterclockwise rotation of β about the y-axis.

• Roll: counterclockwise rotation of γ about the x-axis.

These angles change the orientation of any given frame if the following rotation matrix is applied:



Figure C.3: Coordinate frame attached to the cameras. The image plane is also represented on the photograph.


\[ \mathbf{R}(\alpha, \beta, \gamma) = \mathbf{R}_z(\alpha)\mathbf{R}_y(\beta)\mathbf{R}_x(\gamma) = \begin{pmatrix} \cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ \sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\ -\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma \end{pmatrix}. \tag{C.8} \]
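This composition can be checked numerically (a sketch, not project code): build R = Rz(α)Ry(β)Rx(γ) from the elementary rotations and compare against the closed form:

```python
# Sketch: elementary rotations and their ZYX (yaw-pitch-roll) composition.

import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def Rx(g):
    c, s = math.cos(g), math.sin(g)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def euler_zyx(a, b, g):
    # Roll is applied first, then pitch, then yaw, as noted in the text.
    return matmul(Rz(a), matmul(Ry(b), Rx(g)))
```

The first column of the product is (cos α cos β, sin α cos β, −sin β), matching the closed form of (C.8), and the result is orthogonal by construction.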

Then, for our on-board camera, multiplying (C.7) by (C.8) gives the full rotation matrix that transforms a vector expressed in the camera frame to the UAV reference frame:
\[ \mathbf{R}_C^U = \mathbf{R}_C^U(0, 0, 0) \, \mathbf{R}_z(\alpha)\mathbf{R}_y(\beta)\mathbf{R}_x(\gamma) = \begin{pmatrix} -\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma \\ \cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ \sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \end{pmatrix}. \tag{C.9} \]

It is important to note that R(α, β, γ) performs the roll first, then the pitch, and finally the yaw.

The cameras used in the experiments were fixed and the roll, pitch and yaw angles were measured

before the missions. In particular, during the missions described in this thesis, the only non-zero

angle for the camera orientation was γ. Then, the full rotation matrix (C.9) could be simplified as
\[ \mathbf{R}_C^U(\alpha = 0, \beta = 0, \gamma) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix}. \tag{C.10} \]

For example, if the camera was aligned with the fuselage of the UAV and pointing downwards 45°, a value of γ = −π/4 should be substituted in (C.10).
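For this fixed-camera case, the camera-to-UAV rotation can be checked numerically (illustrative code; it assumes the camera optical axis is its z-axis, as suggested by the zero-orientation alignment with the UAV x-axis):

```python
# Sketch: camera-to-UAV rotation as the zero-orientation mapping of (C.7)
# composed with a roll rotation, evaluated at gamma = -pi/4.

import math

P = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]   # zero-orientation mapping of (C.7)

def Rx(g):
    c, s = math.cos(g), math.sin(g)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def camera_to_uav(gamma):
    return matmul(P, Rx(gamma))

def rotate(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
```

With γ = −π/4 the camera z-axis maps to (√2/2, 0, √2/2) in the UAV frame (x forwards, z downwards), i.e. forwards and 45° down, consistent with the example above.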


Bibliography

Ai-Chang, M., Bresina, J., Charest, L., Hsu, J., Jonsson, A. K., Kanefsky, B., Morris, P., Rajan,

K., Yglesias, J., Maldague, P., Chafin, B. G., and Dias, W. (2004). MAPGEN planner: Mixed-initiative planning and scheduling for the Mars Exploration Rover mission.

Alami, R., Chatila, R., and Asama, H., editors (2007). Distributed Autonomous Robotic Systems 6,

volume 6 of Distributed Autonomous Robotic Systems. Springer Verlag.

Balch, T. and Arkin, R. (1998). Behavior-based formation control for multi-robot teams. IEEE

Transactions on Robotics and Automation, 14(6):926–939.

Banatre, M., Marron, P., Ollero, A., and Wolisz, A., editors (2008a). Cooperating Embedded Systems

and Wireless Sensor Networks. ISTE Ltd and John Wiley & Sons.

Banatre, M., Marron, P., Ollero, A., and Wolisz, A. (2008b). Cooperating Embedded Systems and

Wireless Sensor Networks. John Wiley & Sons Inc.

Barcala, M. and Rodríguez, A. (1998). Helicópteros. EUIT Aeronáutica, Madrid.

Barnes, D. and Gray, J. (1991). Behaviour synthesis for co-operant mobile robot control. In International Conference on Control, volume 2, pages 1135–1140, Edinburgh, UK.

Batalin, M., Sukhatme, G., and Hattig, M. (2004). Mobile robot navigation using a sensor network.

In Proceedings of the IEEE International Conference on Robotics and Automation, pages 636–642, New Orleans, Louisiana.

Batalin, M. A. and Sukhatme, G. S. (2007). The design and analysis of an efficient local algorithm for

coverage and exploration based on sensor network deployment. IEEE Transactions on Robotics,

23(4):661–675.

Baydere, S., Cayirci, E., Hacioglu, I., Ergin, O., Ollero, A., Maza, I., Viguria, A., Bonnet, P., and

Lijding, M. (2008). Cooperating Embedded Systems and Wireless Sensor Networks, chapter

Applications and Application Scenarios, pages 25–114. ISTE Ltd and John Wiley & Sons.

Bayraktar, S., Fainekos, G. E., and Pappas, G. J. (2004). Experimental cooperative control of

fixed-wing unmanned aerial vehicles. In Proceedings of the IEEE Conference on Decision and

Control.


Bicchi, A. and Pallottino, L. (2000). On optimal cooperative conflict resolution for air traffic management systems. IEEE Transactions on Intelligent Transportation Systems, 1(4):221–231.

Biocca, F., Jin, K., and Choi, Y. (2001). Visual touch in virtual environments: An exploratory study

of presence, multimodal interfaces, and cross-modal sensory illusions. Presence: Teleoperators

and Virtual Environments, 10(3):247–265.

Bisnik, N., Abouzeid, A. A., and Isler, V. (2007). Stochastic event capture using mobile sensors

subject to a quality metric. IEEE Transactions on Robotics, 23(4):676–692.

Bom, J., Thuilot, B., Marmoiton, F., and Martinet, P. (2005). Nonlinear control for urban vehicles

platooning, relying upon a unique kinematic GPS. In Proceedings of the IEEE International

Conference on Robotics and Automation, pages 4138–4143.

Borenstein, J. (2000). The OmniMate: A guidewire- and beacon-free AGV for highly reconfigurable

applications. International Journal of Production Research, 38(9):1993–2010.

Botelho, S. and Alami, R. (2001). Multi-robot cooperation through the common use of “mechanisms”. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems,

pages 375–380, Maui, USA.

Botelho, S. C. and Alami, R. (1999). M+: a scheme for multi-robot cooperation through negotiated task allocation and achievement. In Proceedings of the IEEE International Conference on

Robotics and Automation, volume 2, pages 1234–1239, Detroit, USA.

Brumitt, B. and Stentz, A. (1998). GRAMMPS: A generalized mission planner for multiple mobile

robots. In Proceedings of the IEEE International Conference on Robotics and Automation.

Caballero, F., Maza, I., Molina, R., Esteban, D., and Ollero, A. (2009). A robust head tracking

system based on monocular vision and planar templates. Sensors, 9(11):8924–8943.

Caballero, F., Merino, L., Gil, P., Maza, I., and Ollero, A. (2008a). A probabilistic framework for

entire WSN localization using a mobile robot. Robotics and Autonomous Systems, 56(10):798–806.

Caballero, F., Merino, L., Maza, I., and Ollero, A. (2008b). A particle filtering method for wireless

sensor network localization with an aerial robot beacon. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 596–601, Pasadena, California, USA.

Cachya Software (2009). Cachya. http://www.cachya.com/esight/overview.php.

Caloud, P., Choi, W., Latombe, J., Le Pape, C., and Yim, M. (1990). Indoor automation with many

mobile robots. In Proceedings of the IEEE International Workshop on Intelligent Robotics and

Systems (IROS).

Cao, Y. U., Fukunaga, A. S., and Kahng, A. (1997). Cooperative mobile robotics: Antecedents and

directions. Autonomous Robots, 4(1):7–27.


Capitan, J., Merino, L., Caballero, F., and Ollero, A. (2009). Delayed-state information filter for cooperative decentralized tracking. In Proc. of the International Conference on Robotics and Automation.

Chaimowicz, L., Kumar, V., and Campos, M. F. M. (2004). A paradigm for dynamic coordination of multiple robots. Autonomous Robots, 17(1):7–21.

Chandy, K., Misra, J., and Haas, L. (1983). Distributed deadlock detection. ACM Transactions on Computer Systems, 1:144–156.

Corke, P., Hrabar, S., Peterson, R., Rus, D., Saripalli, S., and Sukhatme, G. (2004). Autonomous deployment and repair of a sensor network using an unmanned aerial vehicle. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 3602–3608.

Corke, P., Peterson, R., and Rus, D. (2003). Networked robots: Flying robot navigation using a sensor net. In Proceedings of the International Symposium of Robotic Research, Siena, Italy.

Craven, P., Belov, N., Tremoulet, P., Thomas, M., Berka, C., Levendowski, D., and Davis, G. (2006). Foundations of Augmented Cognition, chapter Cognitive workload gauge development: comparison of real-time classification methods, pages 75–84. Springer.

Creative Labs (2009). OpenAL: Cross-platform 3D audio library. http://www.openal.org/.

CROMAT consortium (2006). CROMAT Project. World Wide Web electronic publication – http://grvc.us.es/cromat.

Cruz, A., Ollero, A., Muñoz, V., and García-Cerezo, A. (1998). Speed planning method for mobile robots under motion constraints. In Intelligent Autonomous Vehicles (IAV), pages 123–128.

Das, A., Fierro, R., Kumar, V., Ostrowski, J., Spletzer, J., and Taylor, C. (1997). A vision-based formation control framework. IEEE Transactions on Robotics and Automation, 18(5):813–825.

Desai, J. P., Ostrowski, J. P., and Kumar, V. (2001). Modeling and control of formations of nonholonomic mobile robots. IEEE Transactions on Robotics and Automation, 17(6):905–908.

Dias, M. (2004). TraderBots: A New Paradigm for Robust and Efficient MultiRobot Coordination in Dynamic Environments. PhD thesis, Carnegie Mellon University.

Dias, M., Zinck, M., Zlot, R., and Stentz, A. (2004). Robust multirobot coordination in dynamic environments. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 4, pages 3435–3442.

Dias, M. B. and Stentz, A. (2002). Opportunistic optimization for market-based multirobot control. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2714–2720, Lausanne, Switzerland.

Egerstedt, M., Hu, X., and Stotsky, A. (2001). Control of mobile platforms using a virtual vehicle approach. IEEE Transactions on Automatic Control, 46(11):1777–1782.

EyeTech Digital Systems (2009). EyeTech TM3. http://www.eyetechds.com/index.htm.

Fagiolini, A., Valenti, G., Pallottino, L., Dini, G., and Bicchi, A. (2007). Decentralized intrusion detection for secure cooperative multi-agent systems. In Proc. IEEE Int. Conf. on Decision and Control, pages 1553–1558.

Fax, J. A. and Murray, R. (2002). Graph Laplacians and stabilization of vehicle formations. In Proceedings of the 15th IFAC World Congress, pages 283–288, Barcelona, Spain.

Feddema, J. and Schoenwald, D. (2001). Decentralized control of cooperative robotic vehicles. In Proceedings of SPIE – The International Society for Optical Engineering, volume 4364, pages 136–146, Orlando, Florida.

Ferrari, C., Pagello, E., Voltolina, M., Ota, J., and Arai, T. (1997). Multirobot motion coordination using a deliberative approach. In Second Euromicro Workshop on Advanced Mobile Robots (EUROBOT '97), page 96.

Fierro, R., Das, A., Spletzer, J., Esposito, J., Kumar, V., Ostrowski, J. P., Pappas, G., Taylor, C. J., Hur, Y., Alur, R., Lee, I., Grudic, G., and Southall, B. (2002). A framework and architecture for multi-robot coordination. International Journal of Robotics Research, 21(10–11):977–995.

Forssén, P.-E. (2004). Low and Medium Level Vision using Channel Representations. PhD thesis, Linköping University, SE-581 83 Linköping, Sweden. Dissertation No. 858, ISBN 91-7373-876-X.

Free Software Foundation (2009). FreeTrack. http://www.free-track.net/english/.

Fujimori, A. and Teramoto, M. (2000). Cooperative collision avoidance between multiple mobile robots. Journal of Robotic Systems, 17(3):347–363.

Gerkey, B. and Mataric, M. (2000). Murdoch: Publish/subscribe task allocation for heterogeneous agents. In Proceedings of the Fourth International Conference on Autonomous Agents, pages 203–204, Barcelona, Spain.

Gerkey, B. and Mataric, M. (2002). Sold!: Auction methods for multi-robot coordination. IEEE Transactions on Robotics and Automation, 18(5):758–768.

Gerkey, B. and Mataric, M. (2004). A formal analysis and taxonomy of task allocation in multi-robot systems. International Journal of Robotics Research, 23(9):939–954.

Gerkey, B. P. and Mataric, M. J. (2003). Multi-robot task allocation: Analyzing the complexity and optimality of key architectures. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 3, pages 3862–3868, Taipei, Taiwan.

Giulietti, F., Pollini, L., and Innocenti, M. (2000). Autonomous formation flight. IEEE Control Systems Magazine, 20(6):34–44.

Gross, R., Bonani, M., Mondada, F., and Dorigo, M. (2006). Autonomous self-assembly in swarm-bots. IEEE Transactions on Robotics, 22(6):1115–1130.

Grossglauser, M. and Tse, D. N. C. (2002). Mobility increases the capacity of ad hoc wireless networks. IEEE/ACM Transactions on Networking, 10(4):477–486.

Gu, Y., Seanor, B., Campa, G., Napolitano, M. R., Rowe, L., Gururajan, S., and Wan, S. (2006). Design and flight testing evaluation of formation control laws. IEEE Transactions on Control Systems Technology, 14(6):1105–1112.

Heredia, G., Caballero, F., Maza, I., Merino, L., Viguria, A., and Ollero, A. (2008). Multi-UAV cooperative fault detection employing vision-based relative position estimation. In Proc. of the 17th International Federation of Automatic Control World Congress, pages 12093–12098, Seoul, Korea.

Heredia, G., Caballero, F., Maza, I., Merino, L., Viguria, A., and Ollero, A. (2009). Multi-unmanned aerial vehicle (UAV) cooperative fault detection employing differential global positioning (DGPS), inertial and vision sensors. Sensors, 9(9):7566–7579.

Hert, S. and Lumelsky, V. (2001). Polygon area decomposition for multiple-robot workspace division. International Journal of Computational Geometry and Applications, 8(4):437–466.

How, J., King, E., and Kuwata, Y. (2004). Flight demonstrations of cooperative control for UAV teams. In Proceedings of the AIAA 3rd Unmanned-Unlimited Technical Conference, Workshop, and Exhibit, volume 1, pages 505–513, Chicago, Illinois.

Huntsberger, T. L., Trebi-Ollennu, A., Aghazarian, H., Schenker, P. S., Pirjanian, P., and Nayar, H. D. (2004). Distributed control of multi-robot systems engaged in tightly coupled tasks. Autonomous Robots, 17(1):79–92.

Jain, S., Shah, R. C., Brunette, W., Borriello, G., and Roy, S. (2006). Exploiting mobility for energy efficient data collection in wireless sensor networks. Mobile Networks and Applications, 11(3):327–339.

Jiang, K., Seneviratne, L., and Earles, S. (1993). Finding the 3D shortest path with visibility graph and minimum potential energy. In Proc. of the International Conference on Intelligent Robots and Systems, pages 679–684.

Kant, K. and Zucker, S. (1986). Toward efficient trajectory planning: The path-velocity decomposition. The International Journal of Robotics Research, 5(3).

Kato, T., Omachi, S., and Aso, H. (2002). Asymmetric Gaussian and its application to pattern recognition. In Proceedings of the Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition, pages 405–413, London, UK. Springer-Verlag.

Ko, J., Stewart, B., Fox, D., Konolige, K., and Limketkai, B. (2003). A practical decision-theoretic approach to multi-robot mapping and exploration. In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2714–2720.

Kondak, K., Bernard, M., Caballero, F., Maza, I., and Ollero, A. (2009). Advances in Robotics Research, chapter Cooperative Autonomous Helicopters for Load Transportation and Environment Perception, pages 299–310. Springer Berlin Heidelberg.

Konolige, K., Fox, D., Limketkai, B., Ko, J., and Stewart, B. (2003). Map merging for distributed robot navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 212–217.

Kosuge, K. and Sato, M. (1999). Transportation of a single object by multiple decentralized-controlled nonholonomic mobile robots. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, volume 3, pages 1681–1686.

Kube, C. R. and Zhang, H. (1993). Collective robotics: From social insects to robots. Adaptive Behavior, 2(2):189–218.

Latombe, J. C. (1990). Robot Motion Planning. Kluwer Academic Publishers.

LaValle, S. M. (2006). Planning Algorithms. Cambridge University Press, Cambridge, U.K. Available at http://planning.cs.uiuc.edu/.

Lee, S. and Kim, J. (2001). Performance analysis of distributed deadlock detection algorithms. IEEE Transactions on Knowledge and Data Engineering, 13(4):623–636.

Lemaire, T., Alami, R., and Lacroix, S. (2004). A distributed tasks allocation scheme in multi-UAV context. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 4, pages 3622–3627.

Lemon, O., Bracy, A., Gruenstein, A., and Peters, S. (2001). The WITAS multi-modal dialogue system I. In Proceedings of the 7th European Conference on Speech Communication and Technology (EUROSPEECH), pages 1559–1562, Aalborg, Denmark.

Leonard, N. E. and Fiorelli, E. (2001). Virtual leaders, artificial potentials and coordinated control of groups. In Proceedings of the IEEE Conference on Decision and Control, volume 3, pages 2968–2973.

Li, H., Karray, F., Basir, O., and Song, I. (2008). A framework for coordinated control of multiagent systems and its applications. IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, 38(3):534–548.

Lim, C., R.P., M., and Rodriguez, A. (1999). Interactive modeling, simulation, animation and real-time control (MoSART) twin lift helicopter system environment. In Proceedings of the American Control Conference, volume 4, pages 2747–2751.

Lynch, N. A. (1997). Distributed Algorithms (The Morgan Kaufmann Series in Data Management Systems). Morgan Kaufmann, 1st edition.

Madentec (2009). Tracker Pro. http://www.madentec.com/products/tracker-pro.php.

Massink, M. and Francesco, N. D. (2001). Modelling free flight with collision avoidance. In Proceedings of the Seventh International Conference on Engineering of Complex Computer Systems, pages 270–279.

Mataric, M. J. (1992). Designing emergent behaviors: from local interactions to collective intelligence. In Meyer, J.-A., Roitblat, H., and Wilson, S., editors, From Animals to Animats 2, 2nd International Conference on Simulation of Adaptive Behavior (SAB-92), pages 432–441, Cambridge, MA, USA. MIT Press.

Maza, I., Caballero, F., Molina, R., Peña, N., and Ollero, A. (2010a). Multimodal interface technologies for UAV ground control stations: A comparative analysis. Journal of Intelligent and Robotic Systems, 57(1–4):371–391.

Maza, I., Kondak, K., Bernard, M., and Ollero, A. (2010b). Multi-UAV cooperation and control for load transportation and deployment. Journal of Intelligent and Robotic Systems, 57(1–4):417–449.

Maza, I., Peña, N., Ollero, A., and Scarlatti, D. (2007). Impact of the communication ranges in an autonomous distributed task allocation process within a multi-UAV team providing services to SWIM applications. In Proc. of the 6th Eurocontrol Innovative Research Workshop & Exhibition, pages 219–223, Brétigny-sur-Orge, France.

Maza, I. and Ollero, A. (2004). Multiple UAV cooperative searching operation using polygon area decomposition and efficient coverage algorithms. In Proceedings of the 7th International Symposium on Distributed Autonomous Robotic Systems, pages 211–220, Toulouse, France.

Maza, I. and Ollero, A. (2007). Distributed Autonomous Robotic Systems 6, volume 6 of Distributed Autonomous Robotic Systems, chapter Multiple UAV cooperative searching operation using polygon area decomposition and efficient coverage algorithms, pages 221–230. Springer Verlag.

Maza, I., Viguria, A., and Ollero, A. (2006). Networked aerial-ground robot system with distributed task allocation for disaster management. In Proc. of the IEEE International Workshop on Safety, Security and Rescue Robotics.

McCarley, J. S. and Wickens, C. D. (2005). Human factors implications of UAVs in the national airspace. Technical Report AHFD-05-5/FAA-05-1, Institute of Aviation, Aviation Human Factors Division, University of Illinois at Urbana-Champaign.

McLain, T. and Beard, R. (2005). Coordination variables, coordination functions, and cooperative timing missions. Journal of Guidance, Control, and Dynamics, 28:150–161.

Merino, L. (2007). Cooperative Perception Techniques For Multiple Unmanned Aerial Vehicles: Applications To The Cooperative Detection, Localization And Monitoring Of Forest Fires. PhD thesis, Dpto. Ingeniería de Sistemas y Automática – University of Seville.

Merino, L., Caballero, F., de Dios, J. M., Ferruz, J., and Ollero, A. (2006). A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires. Journal of Field Robotics, 23(3–4):165–184.

Mittal, M., Prasad, J. V. R., and Schrage, D. P. (1991). Nonlinear adaptive control of a twin lift helicopter system. IEEE Control Systems Magazine, 11(3):39–45.

Moore, K. L., Chen, Y., and Song, Z. (2004). Diffusion-based path planning in mobile actuator-sensor networks (MAS-Net): Some preliminary results. In Proceedings of SPIE – The International Society for Optical Engineering, volume 5421, pages 58–69.

Muscettola, N., Nayak, P. P., Pell, B., and Williams, B. C. (1998). Remote agent: To boldly go where no AI system has gone before. Artificial Intelligence, 103(1–2):5–47.

NaturalPoint (2009a). SmartNav 4 AT. http://www.naturalpoint.com/smartnav/.

NaturalPoint (2009b). TrackIR 4. http://www.naturalpoint.com/trackir/02-products/product-TrackIR-4-PRO.html.

No, T. S., Chong, K., and Roh, D. (2001). A Lyapunov function approach to longitudinal control of vehicles in a platoon. IEEE Transactions on Vehicular Technology, 50(1):116–125.

Nouyan, S., Campo, A., and Dorigo, M. (2008). Path formation in a robot swarm: Self-organized strategies to find your way home. Swarm Intelligence, 2(1):1–23.

Ögren, P., Egerstedt, M., and Hu, X. (2002). A control Lyapunov function approach to multiagent coordination. IEEE Transactions on Robotics and Automation, 18(5):847–851.

Ollero, A., García-Cerezo, A., and Gómez, J. (2006). Teleoperación y Telerrobótica. Pearson Prentice Hall.

Ollero, A. and Maza, I., editors (2007a). Multiple Heterogeneous Unmanned Aerial Vehicles, chapter Teleoperation Tools, pages 189–206. Springer Tracts in Advanced Robotics. Springer.

Ollero, A. and Maza, I. (2007b). Multiple Heterogeneous Unmanned Aerial Vehicles, chapter 1, Introduction, pages 1–14. Springer Tracts in Advanced Robotics. Springer-Verlag.

Ollero, A. and Maza, I., editors (2007c). Multiple Heterogeneous Unmanned Aerial Vehicles. Springer Tracts in Advanced Robotics. Springer-Verlag.

Ollero, A. and Maza, I. (2007d). Multiple Heterogeneous Unmanned Aerial Vehicles, chapter 9, Conclusions and Future Directions, pages 229–232. Springer Tracts in Advanced Robotics. Springer-Verlag.

Ollero, A. and Merino, L. (2004). Control and perception techniques for aerial robotics. Annual Reviews in Control, 28(2):167–178.

Orden, K. F. V., Viirre, E., and Kobus, D. A. (2007). Foundations of Augmented Cognition, chapter Augmenting Task-Centered Design with Operator State Assessment Technologies, pages 212–219. Springer.

Origin Instruments Corporation (2009). Headmouse Extreme. http://www.orin.com/access/headmouse/.

Owen, E. and Montano, L. (2005). Motion planning in dynamic environments using the velocity space. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), pages 2833–2838.

Pallottino, L., Scordio, V. G., Frazzoli, E., and Bicchi, A. (2007). Decentralized cooperative policy for conflict resolution in multi-vehicle systems. IEEE Transactions on Robotics, 23(6).

Parker, L. (1998). ALLIANCE: An architecture for fault-tolerant multi-robot cooperation. IEEE Transactions on Robotics and Automation, 14(2):220–240.

Patterson, R. D. (1982). Guidelines for auditory warnings on civil aircraft. Civil Aviation Authority, London.

Peryer, G., Noyes, J., Pleydell-Pearce, K., and Lieven, N. (2005). Auditory alert characteristics: A survey of pilot views. International Journal of Aviation Psychology, 15(3):233–250.

Poythress, M., Berka, C., Levendowski, D., Chang, D., Baskin, A., Champney, R., Hale, K., Milham, L., Russell, C., Seigel, S., Tremoulet, P., and Craven, P. (2006). Foundations of Augmented Cognition, chapter Correlation between expected workload and EEG indices of cognitive workload and task engagement, pages 75–84. Springer.

Rebollo, J., Maza, I., and Ollero, A. (2007). Collision avoidance among multiple aerial robots and other non-cooperative aircraft based on velocity planning. In Proceedings of the 7th Conference On Mobile Robots And Competitions, Paderne, Portugal.

Rebollo, J., Maza, I., and Ollero, A. (2008). A two step velocity planning method for real-time collision avoidance of multiple aerial robots in dynamic environments. In Proceedings of the 17th IFAC World Congress, volume 17, pages 1735–1740, Seoul, Korea.

Rebollo, J., Maza, I., and Ollero, A. (2009). Planificación de trayectorias libres de colisión para múltiples UAVs usando el perfil de velocidad. Revista Iberoamericana de Automática e Informática Industrial (RIAI), 6(4):56–65.

Ren, W. and Beard, R. (2008). Distributed Consensus in Multi-vehicle Cooperative Control. Springer, Berlin, Germany.

Reynolds, H. K. and Rodriguez, A. A. (1992). H∞ control of a twin lift helicopter system. In Proceedings of the 31st IEEE Conference on Decision and Control, pages 2442–2447.

Richards, A. and How, J. (2002). Aircraft trajectory planning with collision avoidance using mixed integer linear programming. In Proceedings of the American Control Conference, pages 1936–1941.

Sandholm, T. (1993). An implementation of the Contract Net Protocol based on marginal cost calculations. In Proceedings of the 12th International Workshop on Distributed Artificial Intelligence.

Schmitt, T., Hanek, R., Beetz, M., Buck, S., and Radig, B. (2002). Cooperative probabilistic state estimation for vision-based autonomous mobile robots. IEEE Transactions on Robotics and Automation, 18:670–684.

Schumacher, C. and Singh, S. (2000). Nonlinear control of multiple UAVs in close-coupled formation flight. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, pages 14–17.

Sharkey, A. J. C. (2006). Robots, insects and swarm intelligence. Artificial Intelligence Review, 26(4):255–268.

Sharkey, A. J. C. (2007). Swarm robotics and minimalism. Connection Science, 19(3):245–260.

Sharma, R., Pavlovic, V. I., and Huang, T. S. (1998). Toward multimodal human-computer interface. Proceedings of the IEEE, 86(5):853–869.

Singhal, M. (1989). Deadlock detection in distributed systems. Computer, 22(11):37–48.

Smith, R. G. (1980). The Contract Net Protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, 29(12):1104–1113.

Spears, W. M., Spears, D. F., Hamann, J. C., and Heil, R. (2004). Distributed, physics-based control of swarms of vehicles. Autonomous Robots, 17(2–3):137–162.

Stanford University (2009). WITAS Project multi-modal conversational interfaces. http://www-csli.stanford.edu/semlab-hold/witas/.

Sugar, T. G. and Kumar, V. (2002). Control of cooperating mobile manipulators. IEEE Transactions on Robotics and Automation, 18(1):94–103.

Sukkarieh, S., Nettleton, E., Kim, J.-H., Ridley, M., Goktogan, A., and Durrant-Whyte, H. (2003a). The ANSER Project: Data fusion across multiple uninhabited air vehicles. The International Journal of Robotics Research, 22(7–8):505–539.

Sukkarieh, S., Nettleton, E., Kim, J.-H., Ridley, M., Goktogan, A., and Durrant-Whyte, H. (2003b). The ANSER Project: Data Fusion Across Multiple Uninhabited Air Vehicles. The International Journal of Robotics Research, 22(7–8):505–539.

Sweller, J. (2002). Visualisation and instructional design. In Proceedings of the International Workshop on Dynamic Visualizations and Learning.

Tanner, H. G., Pappas, G. J., and Kumar, V. (2004). Leader-to-formation stability. IEEE Transactions on Robotics and Automation, 20(3):443–455.

Thrun, S. (2001). A probabilistic online mapping algorithm for teams of mobile robots. International Journal of Robotics Research, 20(5):335–363.

Tran, D., Chien, S., Sherwood, R., Castano, R., Cichy, B., Davies, A., and Rabideau, G. (2004). The autonomous sciencecraft experiment onboard the EO-1 spacecraft. In AAAI, pages 1040–1041.

Tsubouchi, T. and Arimoto, S. (1994). Behavior of a mobile robot navigated by an iterated forecast and planning scheme in the presence of multiple moving obstacles. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, pages 2470–2475.

Universities of Bonn and Stuttgart (AWARE partners) (2007). Middleware and communications design document (AWARE project deliverable D10). Technical report, European Commission.

University of Edinburgh (2009). The Festival speech synthesis system. http://www.cstr.ed.ac.uk/projects/festival/.

Venkitasubramaniam, P., Adireddy, S., and Tong, L. (2004). Sensor networks with mobile agents: Optimal random access and coding. IEEE Journal on Selected Areas in Communications: Special Issue on Sensor Networks, 22(6):1058–1068.

Vidal, R., Shakernia, O., and Sastry, S. (2004). Following the flock: Distributed formation control with omnidirectional vision-based motion segmentation and visual servoing. IEEE Robotics and Automation Magazine, 11(4):14–20.

Viguria, A., Maza, I., and Ollero, A. (2007). SET: An algorithm for distributed multirobot task allocation with dynamic negotiation based on task subsets. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 3339–3344, Rome, Italy.

Viguria, A., Maza, I., and Ollero, A. (2008). S+T: An algorithm for distributed multirobot task allocation based on services for improving robot cooperation. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 3163–3168, Pasadena, California, USA.

Viguria, A., Maza, I., and Ollero, A. (2010). Distributed service-based cooperation in aerial/ground robot teams applied to fire detection and extinguishing missions. Advanced Robotics, 24(1–2):1–23.

Werger, B. B. and Mataric, M. J. (2000). Broadcast of local eligibility for multi-target observation. In Distributed Autonomous Robotic Systems 4, pages 347–356. Springer-Verlag.

Wilson, G. and Russell, C. (2003). Real-time assessment of mental workload using psychophysiological measures and artificial neural networks. Human Factors, pages 635–643.

Wollkind, S. (2004). Using Multi-Agent Negotiation Techniques for the Autonomous Resolution of Air Traffic Conflicts. PhD thesis, University of Texas.

Zanella, A., Zorzi, M., Fasolo, E., Ollero, A., Maza, I., Viguria, A., Pias, M., Coulouris, G., and Petrioli, C. (2008). Cooperating Embedded Systems and Wireless Sensor Networks, chapter Paradigms for Algorithms and Interactions, pages 115–258. ISTE Ltd and John Wiley & Sons.

Zarzhitsky, D., Spears, D., and Spears, W. (2005). Swarms for chemical plume tracing. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, pages 249–256.

Zelinski, S., Koo, T. J., and Sastry, S. (2003). Hybrid system design for formations of autonomous vehicles. In Proceedings of the IEEE Conference on Decision and Control, volume 1, pages 1–6.

Zhang, D., Xie, G., Yu, J., and Wang, L. (2007). Adaptive task assignment for multiple mobile robots via swarm intelligence approach. Robotics and Autonomous Systems, 55(7):572–588.

Zhang, Y., Kosmatopoulos, E., Ioannou, P., and Chien, C. (1999). Autonomous intelligent cruise control using front and back information for tight vehicle following maneuvers. IEEE Transactions on Vehicular Technology, 48(1):319–328.

Zhu, Y. (2007). Advances in Visual Computing, chapter Measuring Effective Data Visualization, pages 652–661. Springer.

Zlot, R. M. and Stentz, A. (2006). Market-based multirobot coordination for complex tasks. International Journal of Robotics Research, Special Issue on the 4th International Conference on Field and Service Robotics, 25(1):73–101.