Page 1: PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response

INSTITUTO TECNOLÓGICO Y DE ESTUDIOS SUPERIORES DE MONTERREY
CAMPUS MONTERREY

SCHOOL OF ENGINEERING AND INFORMATION TECHNOLOGIES
GRADUATE PROGRAMS

DOCTOR OF PHILOSOPHY IN

INFORMATION TECHNOLOGIES AND COMMUNICATIONS
MAJOR IN INTELLIGENT SYSTEMS

Dissertation

Coordination of Multiple Robotic Agents
For Disaster and Emergency Response

By

Jesús Salvador Cepeda Barrera

DECEMBER 2012


Coordination of Multiple Robotic Agents
For Disaster and Emergency Response

A dissertation presented by

Jesús Salvador Cepeda Barrera

Submitted to the
Graduate Programs in Engineering and Information Technologies

in partial fulfillment of the requirements for the degree of

Doctor of Philosophy
in

Information Technologies and Communications
Major in Intelligent Systems

Thesis Committee:

Dr. Rogelio Soto - Tecnológico de Monterrey
Dr. Luiz Chaimowicz - Universidade Federal de Minas Gerais
Dr. José Luis Gordillo - Tecnológico de Monterrey
Dr. Leonardo Garrido - Tecnológico de Monterrey
Dr. Ernesto Rodríguez - Tecnológico de Monterrey

Instituto Tecnológico y de Estudios Superiores de Monterrey
Campus Monterrey

December 2012


Instituto Tecnológico y de Estudios Superiores de Monterrey
Campus Monterrey

School of Engineering and Information Technologies
Graduate Program

The committee members hereby certify that they have read the dissertation presented by Jesús Salvador Cepeda Barrera and that it is fully adequate in scope and quality as a partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technologies and Communications, with a major in Intelligent Systems.

Dissertation Committee

Dr. Rogelio Soto
Advisor

Dr. Luiz Chaimowicz
External Co-Advisor
Universidade Federal de Minas Gerais

Dr. José Luis Gordillo
Committee Member

Dr. Leonardo Garrido
Committee Member

Dr. Ernesto Rodríguez
Committee Member

Dr. César Vargas
Director of the Doctoral Program in
Information Technologies and Communications


Copyright Declaration

I hereby declare that I wrote this dissertation entirely by myself and that it exclusively describes my own research.

Jesús Salvador Cepeda Barrera
Monterrey, N.L., Mexico
December 2012

© 2012 by Jesús Salvador Cepeda Barrera
All Rights Reserved


Dedication

I dedicate this work to all those who gave me the opportunity and trusted that this time would be worthwhile, a time that demanded not only hard work and new experiences, but also constant support, patience, and encouragement through the most difficult periods.

To my father, for his endless sacrifice in convincing me to think big and to make the road and its difficulties worthwhile. To him, for enduring a student's economy all the way to these days and for always trusting that the best is yet to come. To you, Dad, for your love and wise guidance that allow me to go as far as I set my mind to.

To my mother, for her unequaled embrace that always opens new paths when there seems to be no way forward. To her, for the lap where strength and the motivation to try again are reborn. To you, Mom, for the love that always gives me the confidence to keep going, knowing there is someone who will accompany me forever.

To my sister, for showing me, without meaning to, that preparation is never wasted, that life can become as complicated as one allows, and that there is therefore the need to keep becoming more. To you, an example of struggle and rebelliousness.

To my technologist uncles, who never stopped investing in me or believing in me. To you, without whom it would not have been possible to reach this moment. With financial support, tools, and constant trust, you always gave me the motivation and faith to set an example and bet on my best effort.

To my grandfather, who always wanted an engineer and now has a doctor. I dedicate to him this work, which without his knowledge and his company in the workshop would never have had the integrity that characterizes it. To you, for teaching me that engineering is not a decision but a conviction.

Finally, to the woman whose very existence is a guide and a divine voice. To you, who know how to say and do what is needed. To you, who complement me like yin and yang, like sun and moon, like dark skin and curly hair. To you, my beautiful wife, for your constant love that never allowed sadness even in the worst moments. I dedicate this work to your firm willingness to leave everything behind to live and learn things you never imagined, and to your lively spirit for traveling the world by my side. To you, princess, for trusting me and accompanying me through every one of these pages.


Acknowledgements

If the observer were intelligent (and extraterrestrial observers are always presumed to be intelligent) he would conclude that the earth is inhabited by a few very large organisms whose individual parts are subordinate to a central directing force. He might not be able to find any central brain or other controlling unit, but human biologists have the same difficulty when they try to analyse an ant hill. The individual ants are not impressive objects; in fact, they are rather stupid, even for insects; but the colony as a whole behaves with striking intelligence. – Jonathan Norton Leonard

I want to express my deepest gratitude to all of you who kept me from being an individual ant: advisors, peers, friends, and the robotics gurus, who will probably never read this but who surely deserve my gratitude, because without them this work would not even have been possible.

Thanks, Prof. Rogelio Soto, for your constant confidence in my ideas and for supporting and guiding all my developments during this dissertation. Thanks for the opportunity you gave me to work with you and to develop what I like the most, something I did not even know existed.

Thanks, Prof. Jose L. Gordillo, for the hard times you gave me and for sharing your knowledge. I really appreciate both; you definitely made me a more well-rounded professional.

Thanks, Prof. Luiz Chaimowicz, for opening the research doors from the very first day. Thanks for believing in my developments and letting me live a little of the amazing Brazilian experience. Thanks for your constant guidance even when we are more than 8,000 km apart. Thanks for giving me my very first experiences with real robots and for making me understand that it is Skynet, and not the Terminator, which we shall fear.

Thanks, eRobots friends and colleagues, not only for sharing your knowledge and experiences with me, but also for validating my own. Thanks for your constant support and company when nobody else would still be working. Thanks for your words when I needed them the most; you really are a fundamental part of this work.

Thanks, Prof. Mario Montenegro and the Verlabians, for the most accurate and well-guided knowledge I have ever received about mobile robotics. Thanks for giving me the chance to be part of your team. Thanks for letting me learn from you and for letting me be your Mexican friend even though I worked with Windows.

Thanks God and Life for giving me this opportunity.


Coordination of Multiple Robotic Agents
For Disaster and Emergency Response

by
Jesús Salvador Cepeda Barrera

Abstract

In recent years, the use of Multi-Robot Systems (MRS) has become popular across several application domains. The main reason for using MRS is that they are a convenient solution in terms of cost, performance, efficiency, reliability, and reduced human exposure. As a consequence, existing robots and their implementation domains are growing in number and complexity, turning coordination and cooperation into fundamental topics of robotics research.

Accordingly, developing a team of cooperative autonomous mobile robots has been one of the most challenging goals in artificial intelligence. Research has produced a large body of significant advances in the control of single mobile robots, dramatically improving the feasibility and suitability of MRS. These vast scientific contributions have also created the need to couple such advances, leading researchers to the challenging task of developing multi-robot coordination infrastructures.

Moreover, considering all possible environments where robots interact, disaster scenarios are among the most challenging. These scenarios have no specific structure and are highly dynamic, uncertain, and inherently hostile. They involve devastating effects on wildlife, biodiversity, agriculture, urban areas, human health, and the economy, so they stand among the most serious social issues for the intellectual community.

Following these concerns and challenges, this dissertation addresses the problem of how to coordinate and control multiple robots so as to achieve cooperative behavior for assisting in disaster and emergency response. The essential motivation resides in the possibilities that an MRS offers for disaster response, including improved performance in sensing and action while speeding up operations through parallelism. Finally, it represents an opportunity for empowering responders' abilities and efficiency during the critical 72 golden hours, which are essential for increasing the survival rate and for preventing larger damage.

Therefore, herein we achieve urban search and rescue (USAR) modularization by leveraging local perceptions and decomposing the mission into robotic tasks. We then developed a behavior-based control architecture for coordinating mobile robots, enhancing the most relevant control characteristics reported in the literature; behavior-based control is well suited for simple, fast, reactive control. Furthermore, we implemented a hybrid infrastructure in order to ensure robust USAR mission accomplishment with current technology. These single and multi-robot architectures were designed under the service-oriented paradigm, thus promoting reusability, scalability, and extensibility.

Finally, we have studied the emergence of rescue robotic team behaviors and their applicability to real disasters. By implementing distributed autonomous behaviors, we observed the opportunity to add adaptivity features so as to autonomously learn additional behaviors and possibly increase performance toward cognitive systems.
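The behavior-based coordination idea described in this abstract can be illustrated with a minimal priority-arbitration loop. This is only a hedged sketch in Python, assuming a subsumption-style priority stack: the behavior names, thresholds, and command format below are illustrative inventions, not the dissertation's actual service-oriented MSRDS implementation.

```python
# Minimal sketch of behavior-based arbitration (illustrative only).
# Each behavior inspects the latest sensor snapshot and either returns a
# (steering, drive) command or None; the arbiter picks the command of the
# highest-priority active behavior.
from typing import Callable, Optional, Tuple

Command = Tuple[float, float]            # (steering in degrees, drive in m/s)
Behavior = Callable[[dict], Optional[Command]]

def handle_collision(percept: dict) -> Optional[Command]:
    # Reactive safety layer: turn away when the laser reports a close obstacle.
    if percept["min_laser_range"] < 0.5:
        return (45.0, 0.1)
    return None

def seek(percept: dict) -> Optional[Command]:
    # Head toward the goal bearing when a goal is set.
    if percept.get("goal_bearing") is not None:
        return (percept["goal_bearing"], 0.4)
    return None

def wander(percept: dict) -> Optional[Command]:
    # Default exploratory behavior: drive straight ahead.
    return (0.0, 0.3)

def arbitrate(behaviors, percept):
    """Return the command of the first (highest-priority) active behavior."""
    for behavior in behaviors:
        command = behavior(percept)
        if command is not None:
            return command
    return (0.0, 0.0)  # no behavior active: stop

# Priority order: collision handling subsumes goal seeking, which subsumes wandering.
stack = [handle_collision, seek, wander]
print(arbitrate(stack, {"min_laser_range": 0.3}))                        # collision wins
print(arbitrate(stack, {"min_laser_range": 2.0, "goal_bearing": 15.0}))  # seek wins
```

In a service-oriented design, each behavior would instead run as an independent service publishing its proposed command, with the arbiter subscribing to all of them; the fixed-priority selection shown here is only one of the coordination methods for behavior-based control.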


List of Figures

1.1 Number of survivors and casualties in the Kobe earthquake in 1995. Image from [267]. . . . 3

1.2 Percentage of survival chances according to when the victim is located. Based on [69]. . . . 3

1.3 70 years of autonomous control levels. Edited from [44]. . . . 6
1.4 Mobile robot control scheme. Image from [255]. . . . 9
1.5 Minsky's interpretation of behaviors. Image from [188]. . . . 18
1.6 Classic and new artificial intelligence approaches. Edited from [255]. . . . 18
1.7 Behavior in robotics control. Image from [138]. . . . 19
1.8 Coordination methods for behavior-based control. Edited from [11]. . . . 19
1.9 Group architecture overview. . . . 23
1.10 Service-oriented group architecture. . . . 25

2.1 Major challenges for networked robots. Image from [150]. . . . 30
2.2 Typical USAR Scenario. Image from [267]. . . . 30
2.3 Real pictures from the WTC Tower 2: a) a rescue robot, within the white box, navigating in the rubble; b) robot's-eye view with three sets of victim remains. Image edited from [194] and [193]. . . . 31
2.4 Typical problems with rescue robots. Image from [268]. . . . 35
2.5 Template-based information system for disaster response. Image based on [156, 56]. . . . 41
2.6 Examples of templates for disaster response. Image based on [156, 56]. . . . 42
2.7 Task force in rescue infrastructure. Image from [14]. . . . 43
2.8 Rescue Communicator, R-Comm: a) Long version, b) Short version. Image from [14]. . . . 43
2.9 Handy terminal and RFID tag. Image from [14]. . . . 44
2.10 Database for Rescue Management System, DaRuMa. Edited from [210]. . . . 44
2.11 RoboCup Rescue Concept. Image from [270]. . . . 46
2.12 USARSim Robot Models. Edited from [284, 67]. . . . 47
2.13 USARSim Disaster Snapshot. Edited from [18, 17]. . . . 47
2.14 Sensor Readings Comparison. Top: Simulation, Bottom: Reality. Image from [67]. . . . 48
2.15 Control Architecture for Rescue Robot Systems. Image from [3]. . . . 50
2.16 Coordinated exploration using costs and utilities. Frontier assignment considering: a) only costs; b) costs and utilities; c) three robots' path results. Edited from [58]. . . . 52


2.17 Supervisor sketch for MRS patrolling. Image from [168]. . . . 53
2.18 Algorithm for determining occupancy grids. Image from [33]. . . . 54
2.19 Multi-robot generated maps in RoboCup Rescue 2007. Image from [225]. . . . 55
2.20 Behavioral mapping idea. Image from [164]. . . . 55
2.21 3D mapping using USARSim. Left) Kurt3D and its simulated counterpart. Right) 3D color-coded map. Edited from [20]. . . . 56
2.22 Face recognition in USARSim. Left) Successful recognition. Right) False positive. Image from [20]. . . . 57
2.23 Human pedestrian vision-based detection procedure. Image from [90]. . . . 57
2.24 Human pedestrian vision-based detection procedure. Image from hal.inria.fr/inria-00496980/en/. . . . 58
2.25 Human behavior vision-based recognition. Edited from [207]. . . . 58
2.26 Visual path following procedure. Edited from [103]. . . . 59
2.27 Visual path following tests in 3D terrain. Edited from [103]. . . . 59
2.28 START Algorithm. Victims are sorted into: Minor, Delayed, Immediate and Expectant; based on the assessment of: Mobility, Respiration, Perfusion and Mental Status. Image from [80]. . . . 61
2.29 Safety, security and rescue robotics teleoperation stages. Image from [36]. . . . 61
2.30 Interface for multi-robot rescue systems. Image from [209]. . . . 62
2.31 Desired information for rescue robot interfaces: a) multiple image displays, b) multiple map displays. Edited from [292]. . . . 63
2.32 Touch-screen technologies for rescue robotics. Edited from [185]. . . . 64
2.33 MRS for autonomous exploration, mapping and deployment: a) the complete heterogeneous team; b) sub-team with mapping capabilities. Image from [130]. . . . 65
2.34 MRS result for autonomous exploration, mapping and deployment: a) original floor map; b) robots' collected map; c) autonomous planned deployment. Edited from [130]. . . . 65
2.35 MRS for search and monitoring: a) Piper J3 UAVs; b) heterogeneous UGVs. Edited from [131]. . . . 66
2.36 Demonstration of integrated search operations: a) robots at initial positions, b) robots searching for human target, c) alert of target found, d) display of the nearest UGV's view of the target. Edited from [131]. . . . 67
2.37 CRASAR MicroVGTV and Inuktun [91, 194, 158, 201]. . . . 70
2.38 TerminatorBot [282, 281, 204]. . . . 70
2.39 Leg-in-Rotor Jumping Inspector [204, 267]. . . . 71
2.40 Cubic/Planar Transformational Robot [266]. . . . 71
2.41 iRobot ATRV - FONTANA [199, 91, 158]. . . . 71
2.42 FUMA [181, 245]. . . . 72
2.43 Darmstadt University - Monstertruck [8]. . . . 72
2.44 Resko at UniKoblenz - Robbie [151]. . . . 72
2.45 Independent [84]. . . . 73
2.46 Uppsala University Sweden - Surt [211]. . . . 73
2.47 Taylor [199]. . . . 73
2.48 iRobot Packbot [91, 158]. . . . 74
2.49 SPAWAR Urbot [91, 158]. . . . 74


2.50 Foster-Miller Solem [91, 194, 158]. . . . 74
2.51 Shinobi - Kamui [189]. . . . 75
2.52 CEO Mission II [277]. . . . 75
2.53 Aladdin [215, 61]. . . . 75
2.54 Pelican United - Kenaf [204, 216]. . . . 76
2.55 Tehzeeb [265]. . . . 76
2.56 ResQuake Silver2009 [190, 187]. . . . 76
2.57 Jacobs Rugbot [224, 85, 249]. . . . 77
2.58 PLASMA-Rx [87]. . . . 77
2.59 MRL rescue robots NAJI VI and NAJI VII [252]. . . . 77
2.60 Helios IX and Carrier Parent and Child [121, 180, 267]. . . . 78
2.61 KOHGA: Kinesthetic Observation-Help-Guidance Agent [142, 181, 189, 276]. . . . 78
2.62 OmniTread OT-4 [40]. . . . 78
2.63 Hyper Souryu IV [204, 276]. . . . 79
2.64 Rescue robots: a) Talon, b) Wolverine V-2, c) RHex, d) iSENSYS IP3, e) Intelligent Aerobot, f) muFly microcopter, g) Chinese firefighting robot, h) Teleoperated extinguisher, i) Unmanned surface vehicle, j) Predator, k) T-HAWK, l) Bluefin HAUV. Images from [181, 158, 204, 267, 287]. . . . 80
2.65 Jacobs University rescue arenas. Image from [249]. . . . 81
2.66 Arena in which multiple Kenafs were tested. Image from [205]. . . . 82
2.67 Exploration strategy and centralized, global 3D map: a) frontiers in current global map, b) allocation and path planning towards the best frontier, c) a final 3D global map. Image from [205]. . . . 82
2.68 Mapping data: a) raw from individual robots, b) fused and corrected in a new global map. Image from [205]. . . . 83
2.69 Building exploration and temperature gradient mapping: a) robots as mobile sensors navigating and deploying static sensors, b) temperature map. Image from [144]. . . . 84
2.70 Building structure exploration and temperature mapping using static sensors, human mobile sensor, and UAV mobile sensor. Image from [98]. . . . 84
2.71 Helios IX in a door-opening procedure. Image from [121]. . . . 85
2.72 Real model and generated maps of the 60 m hall: a) real 3D model, b) generated 3D map with snapshots, c) 2D map with CPS, d) 2D map with dead reckoning. Image from [121]. . . . 86
2.73 IRS-U and K-CFD real tests with rescue robots: a) deployment of Kohga and Souryu robots, b) Kohga finding a victim, c) operator being notified of a victim found, d) Kohga waiting until a human rescuer assists the victim, e) Souryu finding a victim, f) Kohga and Souryu awaiting assistance, g) human rescuers aiding the victim, and h) both robots continuing to explore. Images from [276]. . . . 87
2.74 Types of entries in mine rescue operations: a) Surface Entry (SE), b) Borehole Entry (BE), c) Void Entry (VE), d) Inuktun being deployed in a BE [201]. . . . 89
2.75 Standardized test arenas for rescue robotics: a) Red Arena, b) Orange Arena, c) Yellow Arena. Image from [67]. . . . 91


3.1 MaSE Methodology. Image from [289]. . . . 94
3.2 USAR Requirements (most relevant references to build this diagram include: [261, 19, 80, 87, 254, 269, 204, 267, 268]). . . . 96
3.3 Sequence Diagram I: Exploration and Mapping (most relevant references to build this diagram include: [173, 174, 175, 176, 21, 221, 86, 232, 10, 58, 271, 101, 33, 240, 92, 126, 194, 204]). . . . 99
3.4 Sequence Diagram IIa: Recognize and Identify - Local (most relevant references to build this diagram include: [170, 175, 221, 23, 242, 163, 90, 207, 89, 226]). . . . 100
3.5 Sequence Diagram IIb: Recognize and Identify - Remote (most relevant references to build this diagram include: [170, 175, 221, 23, 242, 163, 90, 207, 89, 226]). . . . 101
3.6 Sequence Diagram III: Support and Relief (most relevant references to build this diagram include: [58, 33, 80, 19, 226, 150, 267, 204, 87, 254]). . . . 102
3.7 Robots used in this dissertation: to the left, a simulated version of an Adept Pioneer 3DX; in the middle, the real version of an Adept Pioneer 3AT; and to the right, a Dr. Robot Jaguar V2. . . . 103
3.8 Roles, behaviors and actions mappings. . . . 106
3.9 Roles, behaviors and actions mappings. . . . 107
3.10 Behavior-based control architecture for individual robots. Edited image from [178]. . . . 108
3.11 The Hybrid Paradigm. Image from [192]. . . . 109
3.12 Group architecture. . . . 110
3.13 Architecture topology: at the top, the system element communicating wirelessly with the subsystems. Subsystems include their nodes, which can be different types of computers. Finally, components represent the running software services, depending on the existing hardware and each node's capabilities. . . . 112
3.14 Microsoft Robotics Developer Studio principal components. . . . 114
3.15 CCR Architecture: when a message is posted into a given Port or PortSet, triggered Receivers call the Arbiters subscribed to the messaged port in order for a task to be queued and dispatched to the threading pool. Ports defined as persistent are listened to concurrently, while non-persistent ones are listened to only once. Image from [137]. . . . 116
3.16 DSS Architecture. The DSS is responsible for loading services and managing the communications between applications through the Service Forwarder. Services can be distributed on the same host and/or across the network. Image from [137]. . . . 117
3.17 MSRDS Operational Schema. Even though DSS sits on top of CCR, many services access CCR directly, and CCR also works at a low level as the mechanism through which orchestration happens, so it is placed alongside the DSS. Image from [137]. . . . 118


3.18 Behavior examples designed as services. Top: the handle-collision behavior, which, according to a goal/current heading and the laser scanner sensor, evaluates possible collisions and outputs the corresponding steering and driving velocities. Middle: the detection (victim/threat) behavior, which, according to the attributes to recognize and the camera sensor, implements the SURF algorithm and outputs a flag indicating whether the object has been found along with the corresponding attributes. Bottom: the seek behavior, which, according to a goal position, its current position and the laser scanner sensor, evaluates the best heading using the VFH algorithm and then outputs the corresponding steering and driving velocities. . . . 119

4.1 Process to quick simulation. Starting from a simple script in SPL, we can decide which path is more useful for our robotic control needs and programming skills, going through either C# or VPL. . . . 122
4.2 Created service for fast simulations with maze-like scenarios. Available at http://erobots.codeplex.com/. . . . 123
4.3 Fast simulation to real implementation process. Going from a simulated C# service to a real hardware implementation is a matter of changing one line of code: the service reference. Concerning VPL, simulated and real services are clearly identified, providing easy interchange for the desired test. . . . 124
4.4 Local and remote approaches used for the experiments. . . . 124
4.5 Speech recognition service experiment for voice-commanded robot navigation. Available at http://erobots.codeplex.com/. . . . 125
4.6 Vision-based recognition service experiment for visual-joystick robot navigation. Available at http://erobots.codeplex.com/. . . . 126
4.7 Wall-follow behavior service. Viewed from the top, the red path is made by a robot following the left (white) wall in the maze, while the blue one corresponds to another robot following the right wall. . . . 127
4.8 Seek behavior service. Three robots in a maze viewed from the top, one static and the other two going to specified goal positions. The red and blue paths are generated by each of the navigating robots. To the left of the picture, a simple console for appreciating the VFH [41] algorithm operations. . . . 127
4.9 Flocking behavior service. Three formations (left to right): line, column and wedge/diamond. In the specific case of 3 robots, a wedge looks just like a diamond. Red, green and blue represent the traversed paths of the robots. . . . 128
4.10 Field-cover behavior service. At the top, two different global emergent behaviors for the same algorithm and same environment, both showing appropriate field coverage or exploration. At the bottom, in two different environments, just one robot performing the same field-cover behavior, showing its traversed path in red. Appendix D contains complete detail on this behavior. . . . 128
4.11 Victim and Threat behavior services. Being limited to vision-based detection, different figures were used to simulate threats and victims according to recent literature [116, 20, 275, 207]. To recognize them, already coded algorithms were implemented, including SURF [26], HoG [90] and face detection [279] from the popular OpenCV [45] and EmguCV [96] libraries. . . . 129


4.12 Simultaneous localization and mapping features for the MSRDS VSE. Robot 1 is the red path, robot 2 the green and robot 3 the blue. They are not only mapping the environment by themselves, but also contributing towards a team map. Nevertheless, localization is a simulation cheat, and the laser scanners have none of the uncertainty they will have in real hardware. . . . 130

4.13 Subscription Process: MSRDS partnership is achieved in two steps: runningthe subsystems and then running the high-level controller asking for subscrip-tions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

4.14 Single robot exploration simulation results: a) 15% wandering rate and flatzones indicating high redundancy; b) Better average results with less redun-dancy using 10% wandering rate; c) 5% wandering rate shows little improve-ments and higher redundancy; d) Avoiding the past with 10% wandering rate,resulting in over 96% completion of a 200 sq. m area exploration for everyrun using one robot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

4.15 Typical navigation for qualitative appreciation: a) The environment basedupon Burgard’s work in [58]; b) A second more cluttered environment. Snap-shots are taken from the top view and the traversed paths are drawn in red.For both scenarios the robot efficiently traverses the complete area using thesame algorithm. Black circle with D indicates deployment point. . . . . . . . 136

4.16 Autonomous exploration showing representative results in a single run for 3robots avoiding their own past. Full exploration is completed at almost 3 timesfaster than using a single robot, and the exploration quality shows a balancedresult meaning an efficient resources (robots) management. . . . . . . . . . . 137

4.17 Autonomous exploration showing representative results in a single run for 3robots avoiding their own and teammates’ past. Results show more interfer-ence and imbalance at exploration quality when compared to avoiding theirown past only. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

4.18 Qualitative appreciation: a) Navigation results from Burgard’s work [58]; b)Our gathered results. Path is drawn in red, green and blue for each robot.High similarity with a much simpler algorithm can be appreciated. Blackcircle with D indicates deployment point. . . . . . . . . . . . . . . . . . . . 138

4.19 The emergent in-zone coverage behavior for long time running the explorationalgorithm. Each color (red, green and blue) shows an area explored by adifferent robot. Black circle with D indicates deployment point. . . . . . . . 139

4.20 Multi-robot exploration simulation results, appropriate autonomous explo-ration within different environments including: a) Open Areas; b) ClutteredEnvironments; c) Dead-end Corridors; d) Minimum Exits. Black circle withD indicates deployment point. . . . . . . . . . . . . . . . . . . . . . . . . . 140

4.21 Jaguar V2 operator control unit. This is the interface for the application where autonomous operations occur, including local perception and behavior coordination. Thus, it is the reactive part of our proposed solution. . . . 142

4.22 System operator control unit. This is the interface for the application where manual operations occur, including state changes and human supervision. Thus, it is the deliberative part of our proposed solution. . . . 142

4.23 Template structure for creating and managing reports. Based on [156, 56]. . . 143


4.24 Deployment of a Jaguar V2 for single-robot autonomous exploration experiments. . . . 144

4.25 Autonomous exploration showing representative results from implementing the exploration algorithm on one Jaguar V2. An average of 36 seconds for full exploration demonstrates coherent operation considering the simulation results. . . . 145

4.26 Deployment of two Jaguar V2 robots for multi-robot autonomous exploration experiments. . . . 145

4.27 Autonomous exploration showing representative results for a single run using 2 robots avoiding their own past. Almost half the full-exploration time of single-robot runs demonstrates efficient resource management. The resulting exploration quality shows a trend towards perfect balancing between the two robots. . . . 146

4.28 Comparison between: a) the typical exploration process in the literature and b) our proposed exploration. A clear reduction in steps and complexity between sensing and acting can be appreciated. . . . 147

A.1 Generic single robot architecture. Image from [2]. . . . 154
A.2 Autonomous Robot Architecture (AuRA). Image from [12]. . . . 155

D.1 8 possible 45° heading cases with 3 neighbor waypoints to evaluate so as to define a CCW, CW or ZERO angular acceleration command. For example, if heading in the -45° case, the neighbors to evaluate are B, C and D, as left, center and right, respectively. . . . 181

D.2 Implemented 2-state Finite State Automata for autonomous exploration. . . . 184


List of Tables

1.1 Comparison of event magnitude. Edited from [182]. . . . 7
1.2 Important concepts and characteristics on the control of multi-robot systems. Based on [53, 11, 2, 24]. . . . 13
1.3 FSA, FSM and BBC relationships. Edited from [192]. . . . 20
1.4 Components of a hybrid-intelligence architecture. Based on [192]. . . . 21
1.5 Nomenclature. . . . 22
1.6 Relevant metrics in multi-robot systems. . . . 23

2.1 Factors influencing the scope of the disaster relief effort from [83]. . . . 40
2.2 A classification of robotic behaviors. Based on [178, 223]. . . . 51
2.3 Recommendations for designing a rescue robot [37, 184, 194, 33, 158, 201, 267]. . . . 69

3.1 Main advantages and disadvantages for using wheeled and tracked robots [255, 192]. . . . 103

4.1 Experiments' results: average delays. . . . 133
4.2 Metrics used in the experiments. . . . 134
4.3 Average and Standard Deviation for full exploration time in 10 runs using Avoid Past + 10% wandering rate with 1 robot. . . . 136
4.4 Average and Standard Deviation for full exploration time in 10 runs using Avoid Past + 10% wandering rate with 3 robots. . . . 137
4.5 Average and Standard Deviation for full exploration time in 10 runs using Avoid Kins Past + 10% wandering rate with 3 robots. . . . 138

B.1 Comparison among different software systems engineering techniques [219, 46, 82, 293, 4]. . . . 161

C.1 Wake up behavior. . . . 162
C.2 Resume behavior. . . . 163
C.3 Wait behavior. . . . 163
C.4 Handle Collision behavior. . . . 164
C.5 Avoid Past behavior. . . . 164
C.6 Locate behavior. . . . 165
C.7 Drive Towards behavior. . . . 165
C.8 Safe Wander behavior. . . . 166
C.9 Seek behavior. . . . 166
C.10 Path Planning behavior. . . . 167


C.11 Aggregate behavior. . . . 167
C.12 Unit Center Line behavior. . . . 167
C.13 Unit Center Column behavior. . . . 168
C.14 Unit Center Diamond behavior. . . . 168
C.15 Unit Center Wedge behavior. . . . 169
C.16 Hold Formation behavior. . . . 169
C.17 Lost behavior. . . . 169
C.18 Flocking behavior. . . . 170
C.19 Disperse behavior. . . . 171
C.20 Field Cover behavior. . . . 171
C.21 Wall Follow behavior. . . . 172
C.22 Escape behavior. . . . 172
C.23 Report behavior. . . . 172
C.24 Track behavior. . . . 173
C.25 Inspect behavior. . . . 173
C.26 Victim behavior. . . . 174
C.27 Threat behavior. . . . 174
C.28 Kin behavior. . . . 175
C.29 Give Aid behavior. . . . 175
C.30 Aid- behavior. . . . 176
C.31 Impatient behavior. . . . 176
C.32 Acquiescent behavior. . . . 176
C.33 Unknown behavior. . . . 177


Contents

Abstract v

List of Figures xii

List of Tables xiv

1 Introduction 1
   1.1 Motivation . . . 2
   1.2 Problem Statement and Context . . . 6
      1.2.1 Disaster Response . . . 6
      1.2.2 Mobile Robotics . . . 8
      1.2.3 Search and Rescue Robotics . . . 12
      1.2.4 Problem Description . . . 15
   1.3 Research Questions and Objectives . . . 16
   1.4 Solution Overview . . . 17
      1.4.1 Dynamic Roles + Behavior-based Robotics . . . 17
      1.4.2 Architecture + Service-Oriented Design . . . 20
      1.4.3 Testbeds Overview . . . 24
   1.5 Main Contributions . . . 25
   1.6 Thesis Organization . . . 26

2 Literature Review – State of the Art 28
   2.1 Fundamental Problems and Open Issues . . . 29
   2.2 Rescue Robotics Relevant Software Contributions . . . 38
      2.2.1 Disaster Engineering and Information Systems . . . 38
      2.2.2 Environments for Software Research and Development . . . 45
      2.2.3 Frameworks, Algorithms and Interfaces . . . 49
   2.3 Rescue Robotics Relevant Hardware Contributions . . . 68
   2.4 Testbed and Real-World USAR Implementations . . . 79
      2.4.1 Testbed Implementations . . . 81
      2.4.2 Real-World Implementations . . . 87
   2.5 International Standards . . . 90

3 Solution Detail 93
   3.1 Towards Modular Rescue: USAR Mission Decomposition . . . 95
   3.2 Multi-Agent Robotic System for USAR: Task Allocation and Role Assignment . . . 98


   3.3 Roles, Behaviors and Actions: Organization, Autonomy and Reliability . . . 104
   3.4 Hybrid Intelligence for Multidisciplinary Needs: Control Architecture . . . 106
   3.5 Service-Oriented Design: Deployment, Extendibility and Scalability . . . 113
      3.5.1 MSRDS Functionality . . . 113

4 Experiments and Results 121
   4.1 Setting up the path from simulation to real implementation . . . 122
   4.2 Testing behavior services . . . 123
   4.3 Testing the service-oriented infrastructure . . . 130
   4.4 Testing more complete operations . . . 133
      4.4.1 Simulation tests . . . 134
      4.4.2 Real implementation tests . . . 139

5 Conclusions and Future Work 148
   5.1 Summary of Contributions . . . 148
   5.2 Future Work . . . 151

A Getting Deeper in MRS Architectures 153

B Frameworks for Robotic Software 158

C Set of Actions Organized as Robotic Behaviors 162

D Field Cover Behavior Composition 178
   D.1 Behavior 1: Avoid Obstacles . . . 178
   D.2 Behavior 2: Avoid Past . . . 180
   D.3 Behavior 3: Locate Open Area . . . 180
   D.4 Behavior 4: Disperse . . . 182
   D.5 Emergent Behavior: Field Cover . . . 182

Bibliography 210


Chapter 1

Introduction

“One can expect the human race to continue attempting systems just within or just beyond our reach; and software systems are perhaps the most intricate and complex of man's handiworks. The management of this complex craft will demand our best use of new languages and systems, our best adaptation of proven engineering management methods, liberal doses of common sense, and a God-given humility to recognize our fallibility and limitations.”

– Frederick P. Brooks, Jr. (Computer Scientist)

CHAPTER OBJECTIVES
— Why this dissertation.
— What we are dealing with.
— What we are solving.
— How we are solving it.
— Where we are contributing.
— How the document is organized.

In recent years, the use of Multi-Robot Systems (MRS) has become popular in several application domains such as military, exploration, surveillance, search and rescue, and even home and industry automation. The main reason for using MRS is that they are a convenient solution in terms of cost, performance, efficiency, reliability, and reduced human exposure to harmful environments. Existing robots and implementation domains are thus growing in number and complexity, making coordination and cooperation fundamental themes of robotics research [99].

Accordingly, developing a team of cooperative autonomous mobile robots with efficient performance has been one of the most challenging goals in artificial intelligence. The coordination and cooperation of MRS involve state-of-the-art problems such as efficient navigation, multi-robot path planning, exploration, traffic control, localization and mapping, formation and docking control, coverage and flocking algorithms, target tracking, individual and team cognition, task analysis, efficient resource management, and suitable communications, among others. As a result, research has witnessed a large body of significant advances in the control of single mobile robots, dramatically improving the feasibility and suitability of cooperative robotics. These vast scientific contributions created the need to couple these advances, leading researchers to develop inter-robot communication frameworks. Finding a framework for the cooperative coordination of multiple mobile robots that ensures the autonomy and the individual requirements of the involved robots has always been a challenge as well.

Moreover, considering all the possible environments where robots interact, disaster scenarios are among the most challenging. These scenarios, whether man-made or natural, have no specific structure and are highly dynamic, uncertain and inherently hostile. Disastrous events such as earthquakes, floods, fires, terrorist attacks, hurricanes, trapped populations, or chemical, biological, radiological or nuclear explosions (CBRN or CBRNE) have devastating effects on wildlife, biodiversity, agriculture, urban areas, human health, and the economy. Acting rapidly to save lives, avoid further environmental damage and restore basic infrastructure has therefore been among the most serious social issues for the intellectual community.

For that reason, technology-based solutions for disaster and emergency situations are main topics for relevant international associations, which have created specific divisions for research in this area, such as IEEE Safety, Security and Rescue Robotics (IEEE SSRR) and RoboCup Rescue, both active since 2002. Therefore, this dissertation focuses on improving disaster response and recovery, promoting the relationship between multiple robots as an important tool for mitigating disasters through cooperation, coordination and communication among robots and human operators.

1.1 Motivation

Historically, rescue robotics began in 1995 with one of the most devastating urban disasters of the 20th century: the Hanshin-Awaji earthquake of January 17 in Kobe, Japan. According to [267], this disaster claimed more than 6,000 human lives, affected more than 2 million people, damaged more than 785,000 houses, caused direct damage estimated above 100 billion USD, and produced death rates reaching 12.5% in some regions. The same year, robotics researchers in the US pushed the idea of the new research field while serving as rescue workers at the bombing of the Murrah federal building in Oklahoma City [91]. The 9/11 events then consolidated the area, being the first known real deployment in the world of rescue robots searching for victims and paths through the rubble, inspecting structures, and looking for hazardous materials [194]. Additionally, the 2005 World Disasters Report [283] indicates that between 1995 and 2004 more than 900,000 human lives were lost and direct damage costs surpassed 738 billion USD in urban disasters alone. All of this indicates that something needs to be done, and can be.

Furthermore, these incidents, as well as the other disasters mentioned, can also put rescuers at risk of injury or death. In Mexico City, the 1985 earthquake killed 135 rescuers during disaster response operations [69]. At the World Trade Center in 2001, 402 rescuers lost their lives [184]. More recently, in March 2011, during the nuclear disaster in Fukushima, Japan [227], rescuers were not even allowed to enter the ravaged area because it implied critical radiation exposure. The rescue task is thus dangerous and time consuming, with the risk of further problems arising on the site [37]. To reduce these additional risks to rescuers and victims, the search is carried out slowly and delicately, with a direct impact on the time to locate survivors.

Typically, the mortality rate increases and peaks on the second day, meaning that survivors who are not located in the first 48 hours after the event are unlikely to survive beyond a few weeks in the hospital [204]. Figure 1.1 shows the survivors rescued in the Kobe earthquake. As can be seen, beyond the third day almost no more victims are rescued. Figure 1.2 then shows the average survival chances in an urban disaster as a function of the days after the incident. After the first day the chances of surviving decrease dramatically, by more than 40%, and after the third day another critical decrease leaves no more than a 30% chance of survival. There is thus a clear urgency for rescuers to act in the first 3 days, when the chances of raising the survival rate are good, giving rise to the term popular among rescue teams: the "72 golden hours".

Figure 1.1: Number of survivors and casualties in the Kobe earthquake in 1995. Image from [267].

Figure 1.2: Percentage of survival chances according to when the victim is located. Based on [69].

Consequently, real catastrophes and international contributions within IEEE SSRR and RoboCup Rescue led researchers to define the main usage of robotics in so-called Urban Search and Rescue (USAR) missions. The essence of USAR is to save lives, but Robin Murphy and Satoshi Tadokoro, two of the major contributors to the area, refer to the following possibilities for robots operating in urban disasters [204, 267]:

Search. Aimed at gathering information on the disaster and locating victims, dangerous materials or any potential hazards faster, without increasing the risk of secondary damage.

Reconnaissance and mapping. For providing situational awareness. It is broader than search in that it creates a reference of the ravaged zone to aid the coordination of the rescue effort, thus increasing the speed of the search, decreasing the risk to rescue workers, and providing a quantitative investigation of the damage at hand.

Rubble removal. Robots can remove rubble faster than manual labor and with a smaller footprint (e.g., exoskeletons) than traditional construction cranes.

Structural inspection. Providing better viewing angles at closer distances without exposing the rescuers or the survivors.

In-situ medical assessment and intervention. Since medical doctors may not be permitted inside the critical ravaged area, called the hot zone, robotic medical aid ranges from verbal interaction, visual inspection and transporting medications to complete survivor diagnosis and telemedicine. This is perhaps the most challenging task for robots.

Acting as a mobile beacon or repeater. Serving as a landmark for localization and rendezvous purposes, or simply extending wireless communication ranges.

Serving as a surrogate. Decreasing the risk to rescue workers: robots may be used as sensor extensions enhancing rescuers' perception, enabling them to remotely gather information on the zone and monitor other rescuers' progress and needs.

Adaptively shoring unstable rubble. To prevent secondary collapse and avoid higher risks for rescuers and survivors.

Providing logistics support. Providing recovery actions and assistance by autonomously transporting equipment, supplies and goods from storage areas to distribution points and evacuation and assistance centres.

Instant deployment. Robots can go on site instantly, without the initial overall evaluations required before human rescuers are allowed in, thus improving the speed of operations in order to raise the survival rate.

Other. General uses may involve robots performing operations that are impossible or difficult for humans, as robots can enter smaller areas and operate without breaks. Robots can also operate for long periods in harsher conditions more efficiently than humans (e.g., they need no water, food or rest, have no distractions, and their only fatigue is power running low).


In the same line, multi-agent robotic systems (MARS, or simply MRS) have inherent characteristics that are of huge benefit for USAR implementations. According to [159], some remarkable properties of these systems are:

Diversity. They apply to a large range of tasks and domains. Thus, they are a versatile tool for disaster and emergency support, where tasks are plentiful.

Greater efficiency. In general, MRS exchanging information and cooperating tend to be more efficient than a single robot.

Improved system performance. It has been demonstrated that multiple robots finish tasks faster and more accurately than a single robot.

Fault tolerance. Using redundant units makes a system more tolerant of failures by enabling possible replacements.

Robustness. By introducing redundancy and fault tolerance, a task is less compromised and thus the system is more robust.

Lower economic cost. Multiple simpler robots are usually a better and more affordable option than one powerful and expensive robot, especially for research projects.

Ease of development. Having multiple agents allows developers to focus more precisely than when trying to build one almighty agent. This is helpful when the task is as complex as disaster response.

Distributed sensing and action. This feature allows for better and faster reconnaissance while being more flexible and adaptable to the current situation.

Inherent parallelism. The use of multiple robots at the same time will inherently search and cover faster than a single unit.

So, the essential motivation for this dissertation resides in the possibilities and capabilities that an MRS offers for disaster response and recovery. As referred above, there are plenty of applications for rescue robotics, and the complexity of USAR demands multiple robots. This multiplicity promises improved sensing and action performance, which are crucial in a disaster's race against time. It also provides a way to speed up operations by addressing diverse tasks at the same time. Finally, it represents an opportunity for instant deployment and for increasing the number of first responders in the critical 72 golden hours, which are essential for raising the survival rate and preventing larger damage.

Additionally, before getting into the specific problem statement, it is worth noting that choosing multiple robots keeps the developments herein aligned with international state-of-the-art trends, as shown in Figure 1.3. Finally, this topic provides us with an insight into the social, life and cognitive sciences, which, in the end, are all about us.


Figure 1.3: 70 years of autonomous control levels. Edited from [44].

1.2 Problem Statement and Context

The purpose of this section is to narrow the research field to the specific problem we are dealing with. To do so, it is important to give precise context on disasters and hazards and on mobile robotics. We will then be able to present an overview of search and rescue robotics (SAR, or simply rescue robotics) before finally stating the problem addressed herein.

1.2.1 Disaster Response

Every day, people around the world confront experiences that cause death and injuries, destroy personal belongings and interrupt daily activities. These incidents are known as accidents, crises, emergencies, disasters, or catastrophes. In particular, disasters are defined as deadly, destructive, and disruptive events that occur when hazards interact with human vulnerability [182]. The hazard is the threat, such as an earthquake, a CBRNE event, a terrorist attack, or the others referred to previously (a complete list of hazards is presented in [182]). This dissertation focuses on aiding in the emergencies and disasters classified in Table 1.1.

Once a disaster has occurred, it changes with time through 4 phases that characterize emergency management according to [182, 267] and [204]. Regarding the description presented below, it is worth noting that Mitigation and Preparedness are pre-incident activities, whereas Response and Recovery are post-incident. In particular, disaster and emergency response requires being as fast as possible in rescuing survivors and avoiding any further damage, while being cautious and delicate enough to prevent any additional risk. This dissertation is situated precisely in this phase, where the first responders' post-incident actions reside. The description of the 4 phases is now presented.

Ph. 1: Mitigation. Refers to disaster prevention and loss reduction.


Ph. 2: Preparedness. Efforts to increase readiness for a disaster.

Ph. 3: Response (Rescue). Actions immediately after the disaster for protecting lives and property.

Ph. 4: Recovery. Actions to restore the basic infrastructure of the community or, preferably, improved communities.

Table 1.1: Comparison of event magnitude. Edited from [182].

                            Accidents            Crises       Emergencies/Disasters  Calamities/Catastrophes
Injuries                    few                  many         scores                 hundreds/thousands
Deaths                      few                  many         scores                 hundreds/thousands
Damage                      minor                moderate     major                  severe
Disruption                  minor                moderate     major                  severe
Geographic Impact           localized            disperse     disperse/diffuse       disperse/diffuse
Availability of Resources   abundant             sufficient   limited                scarce
Number of Responders        few                  many         hundreds               hundreds/thousands
Recovery Time               minutes/hours/days   days/weeks   months/years           years/decades

During the response phase, search and rescue operations take place. In general, these operations consist of activities such as looking for lost individuals, locating and diagnosing victims, freeing trapped persons, providing first aid and basic medical care, and transporting the victims away from danger. The human operational procedure that persists across different disasters is described by D. McEntire in [182] as the following steps:

1) Gather the facts. Noting what happened, the estimated number of victims and rescuers, the type and age of constructions, potential environmental influences, the presence of other hazards, or any detail that improves situational awareness.

2) Assess damage. Determine the structural damage in order to define the best actions, basically including: entering with medical operation teams, evacuating and freeing victims, or securing the perimeter.

3) Identify and acquire resources. Includes the need for goods, personnel, tools, equipment and technology.

4) Establish rescue priorities. Determining the urgency of the situations in order to define which rescues must be done before others.

5) Develop a rescue plan. Who will enter the zone, how they will enter, which tools will be needed, how they will leave, how to ensure safety for rescuers and victims; everything necessary to follow a strategy.


6) Conduct disaster and emergency response operations. Search and rescue, cover the area, follow walls, analyse debris, listen for noises indicating survivors; do everything considered useful for saving lives. According to [267], this step is the one that takes the longest time.

7) Evaluate progress. Preventing further damage demands continuous monitoring of the situation, including checking whether the plan is working or a better strategy is needed.
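As an illustration only, the seven-step procedure above can be sketched as an ordered workflow. The enum names and the `next_step` helper below are hypothetical conveniences, not part of McEntire's formulation; step 7 loops back to operations or re-planning depending on the monitoring outcome.

```python
from enum import Enum


class RescueStep(Enum):
    """McEntire's search-and-rescue procedure [182] as an ordered sequence."""
    GATHER_FACTS = 1
    ASSESS_DAMAGE = 2
    ACQUIRE_RESOURCES = 3
    SET_PRIORITIES = 4
    DEVELOP_PLAN = 5
    CONDUCT_OPERATIONS = 6   # the longest-running step [267]
    EVALUATE_PROGRESS = 7


def next_step(current: RescueStep, plan_is_working: bool = True) -> RescueStep:
    """Advance through the procedure. After evaluating progress, return to
    operations while the plan works, or back to re-planning otherwise."""
    if current is RescueStep.EVALUATE_PROGRESS:
        return (RescueStep.CONDUCT_OPERATIONS if plan_is_working
                else RescueStep.DEVELOP_PLAN)
    return RescueStep(current.value + 1)
```

The loop between steps 6 and 7 mirrors the continuous-monitoring requirement of the procedure.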

In the described procedure, research has witnessed characteristic human behavior [182]. For example, the first volunteers to engage are typically untrained people. This results in a lack of skills: people willing to help but unable to handle equipment, coordinate efforts, or carry out data entry or efficient resource administration and/or distribution. Another example is that rescuers are emergent and spontaneous, so their number can be overwhelming to manage, causing division of labor and conflicting priorities: some are set on saving relatives, friends and neighbors without noticing other possible survivors. Additionally, professional rescuers are not always willing to use volunteers in their own operations, so from time to time there are huge crowds with just a few working hands. This situation leads to frustrations that compromise the safety of volunteers, professional rescue teams, and victims, decreasing survival rates while increasing the possibility of larger damage. The one good behavior that persists is that victims do cooperate with each other and with rescuers during the search and rescue.

Consequently, we can think of volunteer rescue robotic teams conducting the search and rescue operations of step 6, which constitute the most time-consuming disaster response activities. Robots do not feel emotions such as preference for relatives, they are typically built for a specific task, and they will surely not become frustrated. Moreover, robots have proven highly capable of search and coverage, wall following, and sensing in harsh environments. So, as R. Murphy et al. noted in [204]: there is a particular need to start using robots in tactical search and rescue, which covers how the field teams actually find, support, and extract survivors.

1.2.2 Mobile Robotics

Given the very broad definition of robot, it is important to state that we refer to a machine that has sensors, processing ability for emulating cognition and interpreting sensor signals (perceiving), and actuators that enable it to exert forces upon the environment to achieve some kind of locomotion; that is, a mobile robot. When considering a single mobile robot, designers must take into account at least an architecture upon which the robotic resources are settled in order to interact with the real world. Robotic control then takes place as a natural coupling of the hardware and software resources comprising the robotic system, which must carry out a specified task. This robotic control has received huge amounts of contributions from the robotics community, most of them focusing on at least one of the topics presented in Figure 1.4: perception and robot sensing (interpretation of the environment), localization and mapping (representation of the environment), intelligence and planning, and mobility control.

Furthermore, a good coupling of the blocks in Figure 1.4 should result in mobile robots capable of performing tasks with a degree of autonomy. Bekey defines autonomy in [29] as a system's

CHAPTER 1. INTRODUCTION 9

Figure 1.4: Mobile robot control scheme. Image from [255].

capability of operating in the real-world environment without any form of external control for extended periods of time; such systems must be able to survive dynamic environments, maintain their internal structures and processes, use the environment to locate and obtain materials for sustenance, and exhibit a variety of behaviors. This means that autonomous systems must perform some task while, within limits, being able to adapt to the environment's dynamics. This dissertation requires special efforts towards autonomy involving every block represented in Figure 1.4.

Moreover, when considering multiple mobile robots, additional factors intervene in having a successful autonomous system. First of all, the main intention of using multiple entities is to obtain some kind of cooperation, so it is important to define cooperative behavior. Cao et al. state in [63]: “given some task specified by a designer, a multiple-robot system displays cooperative behavior if, due to some underlying mechanism, there is an increase in the total utility of the system”. Pursuing this increase in utility (better performance), cooperative robotics addresses the major research axes [63] and coordination aspects [99] presented below.

Group Architecture. This is the basic element of a multi-robot system: the persistent structure that allows for variations in team composition such as the number of robots, the level of autonomy, the levels of heterogeneity and homogeneity, and the physical constraints. As with individual robot architectures, it refers to the set of principles organizing the control system (collective behaviors) and determining its capabilities, limitations and interactions (sensing, reasoning, communication and acting constraints). Key features of a group architecture for mobile robots are: multi-level control, centralization/decentralization, entity differentiation, communications, and the ability to model other agents.

Resource Conflicts. This is perhaps the principal aspect concerning MRS coordination (or control). Sharing space, tasks and resources such as information, knowledge, or hardware capabilities (e.g., cooperative manipulation) requires coordination among the actions of each robot, so that they do not interfere with each other and the resulting operations are autonomous, coherent and high-performance. This may additionally require that robots take into account the actions executed by others in order to be more efficient and faster at task development (e.g., avoiding the typical issue of “everyone going everywhere”). Typical resource conflicts also involve the rational division, distribution and allocation of tasks for achieving a specific goal, mission or global task.

Cooperation Level. This aspect considers specifically how robots cooperate in a given system. The usual arrangement is to have robots operating together towards a common goal, but there is also cooperation through competitive approaches. Cooperation may also be classified as innate (eusocial) or intentional, the latter involving communication either through actions in the environment or through explicit messaging.

Navigation Problems. Inherent problems for mobile robots in the physical world include geometrical navigational issues such as path planning, formation control, pattern generation, and collision avoidance, among others. Each robot in the team must have an individual architecture for correct navigation, but navigational control should be organized at the group-architecture level.

Adaptivity and Learning. This final element considers the capabilities to adapt to changes in the environment or in the MRS in order to optimize task performance and deal efficiently with dynamics and uncertainty. Typical approaches involve reinforcement learning techniques for automatically finding the correct values of the control parameters that lead to a desired cooperative behavior, which can be a difficult and time-consuming task for a human designer.

Perhaps the first important concern of this dissertation is the implementation of a group architecture that consolidates the infrastructure of a team of multiple robots for search and rescue operations. To this end, Appendix A provides deeper context on the topic. From those readings we derive the following list of characteristics that an architecture must have for successful performance and relevance in a multi-disciplinary research area such as rescue robotics, which involves rapidly-changing software and hardware technologies. An appropriate group architecture must consider:

• Robotic task and domain independence.

• Robot hardware and software abstraction.

• Extendibility and scalability.

• Reusability.

• Simple upgrading.

• Simple integration of new components and devices.

• Simple debugging and prototyping.

• Support for parallelism.

• Support for modularity.

• Use of standardized tools.

These characteristics are fully considered in the implementations concerning this dissertation and are detailed further in this document. What is more, the architectural design involves the need for a coordination and cooperation mechanism to confront the disaster response requirements. This implies solving not only individual robot control problems but also the resource conflicts and navigational problems that arise. To this end, information on robotic control is included below.

Mobile Robots Control and Autonomy

A typical issue when defining robotic control is to find where it fits within robotic software. According to [29] there are two basic perspectives: 1) Some designers refer exclusively to robot motion control, including maintaining velocities and accelerations at a given set point and orientation along a certain path. They consider this a “low-level” control, for which the keys are steady-state behavior, quick response time, and other control-theory aspects. 2) Other designers consider robotic control to be the ability of the robot to follow directions towards a goal. In this view, planning a path to follow constitutes a form of “high-level” control that constantly sends commands or directions to the robot controller in order to reach a defined goal. It is therefore difficult to find a clear division between the two perspectives.

Fortunately, a general definition for robotic control states: “it is the process of taking information about the environment, through the robot's sensors, processing it as necessary in order to make decisions about how to act, and then executing those actions in the environment”– Mataric [177]. Thus, robotic control typically requires the integration of multiple disciplines such as biology, control theory, kinematics, dynamics, computer engineering, and even psychology, organization theory and economics. This integration implies the need for multiple levels of control, supporting the idea that individual and group architectures are both necessary.

Accordingly, from the two perspectives and the definition, we can say that robotic control happens essentially at two major levels, for which we can embrace the concepts of platform control and activity control provided by R. Murphy in [204]. The first moves the robot fluidly and efficiently through any given environment by changing (and maintaining) kinematic variables such as velocity and acceleration. This control is usually achieved with classic control theory such as PID controllers, and thus can be classified as low-level control. The next level refers to navigational control, whose main concern is to keep the robot operational, in terms of avoiding collisions and dangerous situations, and to be able to take the robot from one location to another. This control typically includes additional problems such as localization and environment representation (mapping). It generally needs other control strategies drawn from artificial intelligence, such as behavior-based control and probabilistic methods, and is thus classified as high-level control.
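Low-level platform control of this kind can be illustrated with a minimal PID loop. The sketch below is a generic textbook form, not the controller of any robot in this work; the gains, time step, and toy wheel-velocity plant are hypothetical assumptions.

```python
# Minimal PID controller sketch for low-level platform control.
# Gains, time step, and the wheel-velocity example are illustrative
# assumptions, not values from this dissertation.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        """One control cycle: proportional + integral + derivative terms."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: regulate wheel velocity toward 0.5 m/s on a toy first-order plant.
pid = PID(kp=1.2, ki=0.4, kd=0.05, dt=0.02)
velocity = 0.0
for _ in range(2000):
    command = pid.step(0.5, velocity)
    velocity += 0.02 * (command - velocity)   # simplistic motor model
```

In practice the controller output would also be saturated to actuator limits and the integral term clamped to avoid wind-up.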

Consequently, we must clarify that this dissertation assumes there is already a robust, working low-level platform control for every robot. What remains is developing the high-level activity control for each unit and for the whole MRS to operate in search and rescue missions. This need for activity control leads us to three major design issues [159]:

1. It is not clear how a robot control system should be decomposed; intra-robot control (individuals) poses particular problems that differ from those of inter-robot control (group).

2. The interactions between separate subsystems are not limited to directly visible connecting links; interactions are also mediated via the environment, so that emergent behavior is a possibility.

3. As system complexity grows, the number of potential interactions between the components of the system also grows.

Moreover, the control system must address and demonstrate the characteristics presented in Table 1.2. What is important to notice is that coordination of multi-robot teams in dynamic environments is a very challenging task. Fundamentally, for a robotic team to be successfully controlled, every action performed by each robot during cooperative operations must take into account not only the robot's perceptions but also its properties, the task requirements, information flow, teammates' status, and the global and local characteristics of the environment. Additionally, there must exist a coordination mechanism for synchronizing the actions of the multiple robots. This mechanism should help in the exchange of the information necessary for mission accomplishment and task execution, as well as provide the flexibility and reliability for efficient and robust interoperability.

Furthermore, to fulfill controller needs, the robotics community has been highly concerned with creating standardized frameworks for developing robotic software. Since they are significant for this dissertation, information on them is included in Appendix B, particularly focusing on Service-Oriented Robotics (SOR). Robotic control, as well as the individual and group architectures, must consider the service-oriented approach as a way of promoting portability and reusability. In this way, the software developed for this dissertation can be implemented across different resources and circumstances, becoming a more interesting, relevant and portable solution with a better impact.

1.2.3 Search and Rescue Robotics

Having briefly covered disasters and mobile robots, it is appropriate to merge both research fields and discuss robotics intended for disaster response. In spite of all the previously referred possibilities for robotics in search and rescue operations, this technology is new, and its acceptance as well as its hardware and software completeness will take time. According to [204], as of 2006, rescue robotics had taken place in only four major disasters: the World Trade Center, and hurricanes Katrina, Rita and Wilma. Also, in the 2011 nuclear disaster at Fukushima, Japan, robots were barely used because of problems such as mobility in harsh environments where debris is scattered all over with tangled steel beams and collapsed structures, difficulties in communication because of thick concrete walls and lots of metal, and

Table 1.2: Important concepts and characteristics on the control of multi-robot systems. Basedon [53, 11, 2, 24].

Situatedness The robots are entities situated and surrounded by the real world. Theydo not operate upon abstract representations.

Embodiment Each robot has a physical presence (a body). This has consequences inits dynamic interactions with the world.

Reactivity The robots must take into account events with time bounds compatiblewith the correct and efficient achievement of their goals.

Coherence Robots should appear to an observer to have coherence of actions towards goals.

Relevance / Locality The active behavior should be relevant to the local situation residing on the robot's sensors.

Adequacy / Consistency The behavior selection mechanism must go towards mission accomplishment, guided by the tasks' objectives.

Representation Aspects of the world should be shared between behaviors and may also trigger new behaviors.

Emergence Given a group of behaviors there is an inherent global behavior withgroup and individual’s implications.

Synthesis To automatically derive a program for mission accomplishment.

Communication Increase performance by explicit information sharing.

Cooperation Proposing that robots should achieve more by operating together.

Interference Creation of protocols for avoiding unnecessary redundancies.

Density N number of robots should be able to do in 1 unit of time, what 1 robotshould in N units of time.

Individuality Interchangeability results in robustness because of repeatability or unnecessary robots operating.

Learning / Adaptability Automate the acquisition of new behaviors and the tuning and modification of existing ones according to the current situation.

Robustness The control should be able to exploit the redundancy of the processing functions. This implies being decentralized to some extent.

Programmability A useful robotic system should be able to achieve multiple tasks de-scribed at an abstract level. Its functions should be easily combinedaccording to the task to be executed.

Extendibility Integration of new functions and definition of new tasks should be easy.

Scalability The approach should easily scale to any number of robots.

Flexibility The behaviors should be flexible to support many social patterns.

Reliability The robot can act correctly in any given situation over time.

physical presence within adverse environments, because radiation affects electronics [227]. In short, the typical difficulty of sending robots inside major disasters is that overcoming the referred challenges requires a big and slow robot [217]. Not to mention the need for robots capable of performing specific complex tasks like opening and closing doors and valves, manipulating fire-fighting hoses, or even carefully handling rubble to find survivors.

It is worth mentioning that there are many types of robots proposed for search and rescue, including robots that can withstand radiation and fire-fighting robots that shoot water at buildings, but there is still no single all-capable unit. For that reason, most typical rescue robotics implementations in the United States and Japan address local incidents such as urban fires, and search with unmanned vehicles (UxVs). In fact, most of the real implementations used robotics only as the eyes of the rescue teams, gathering more information from the environment and monitoring its conditions for better decision making. And even then, all the real operations allowed only for teleoperated robots and no autonomy at all [204]. Nevertheless, these real implementations are the ones responsible for a better understanding of the sensing and acting requirements, as well as for listing the possible applications for robots in a search and rescue operation.

On the other hand, in the typical USAR scenarios where rescue robotics research is implemented, there are the contributions within the IEEE SSRR society and the RoboCup Rescue. Main tasks include mobility and autonomy (act), search for victims and hazards (sense), and simultaneous localization and mapping (SLAM) (reason). Human-robot interactions have also been deeply explored. The simulated software version of the RoboCup Rescue has shown interesting contributions in exploration, mapping and victim detection algorithms. Good sources describing some of these contributions can be found at [20, 19]. The real testbed version has not only validated the functionality of previously simulated contributions, but also pushed the design of unmanned ground vehicles (UGVs) that show complex abilities for mobility and autonomy. It has also leveraged better usage of proprioceptive instrumentation for localization, as well as exteroceptive instrumentation for mapping and for victim and hazard detection. Good examples of these contributions can be found at [224, 261].

So, even though the referred RoboCup contributions are simulated solutions far from reaching a real disaster response operation, they are pushing the idea of having UGVs that can enable rescuers to find victims faster and identify possibilities for secondary damage. They are also leveraging the possibility of other unmanned vehicles: larger UGVs able to remove rubble faster than humans do, unmanned aerial vehicles (UAVs) to extend the senses of the responders by providing a bird's-eye view of the situation, and unmanned underwater vehicles (UUVs) and unmanned surface vehicles (USVs) for similarly extending and enhancing the rescuers' senses [204].

In summary, some researchers are encouraging the development of practical technologies such as the design of rescue robots, intelligent sensors, information equipment, and human interfaces for assisting in urban search and rescue missions, particularly victim search, information gathering, and communications [267]. Other researchers are leveraging developments such as processing systems for monitoring and teleoperating multiple robots [108], and creating expert systems for simple triage and rapid medical treatment of victims [80]. And a few others pursue the analysis and design of real USAR robot teams for the RoboCup [261, 8], fire-fighting [206, 98], damaged building inspection [141], mine rescue [201], underwater exploration robots [203], and unmanned aerial systems for after-collapse

inspection [228]; but these are still in a premature phase, not fully implemented and with no autonomy at all. So, we can synthesize that researchers are addressing rescue robotics challenges in the following order of priority: mobility, teleoperation and wireless communications, human-robot interaction, and robotic cooperation [268]; and we can also note that the fundamental work is being led mainly by Robin Murphy, Satoshi Tadokoro, and Andreas Birk, among others (refer to Chapter 2 for full details).

The truth is that there are many open issues and fundamental problems in this barely explored and challenging research field of rescue robotics. There is an explicit need for robots helping to quickly locate, assess and even extricate victims who cannot be reached; and there is an urgency to extend the rescuers' ability to see and act in order to improve disaster response operations, reduce risks of secondary damage, and even raise survival rates. Also, there is an important number of robotics researchers around the globe focusing on particular problems in the area, but there seems to be little direct effort towards generating a collaborative rescue multi-robot system, which appears to lie further in the future. In fact, the RoboCup Rescue estimates a fully autonomous collaborative rescue robotic team by 2050, which sounds like a reasonable timeline.

1.2.4 Problem Description

At this point we have presented several possibilities and problems involving robotics for disaster and emergency response. We have mentioned that robots fit well as rescuer units for conducting search and rescue operations, but several needs must be met. First we defined the need for crafting an appropriate architecture for the individual robots as well as for the complete multi-robot team. Next we added the necessity for appropriate robotic control and the efficient coordination of units, in order to take advantage of the inherent characteristics of a MRS and provide efficient and robust interoperability in dynamic environments. Then we included the requirement for software design under the service-oriented paradigm. Finally, we noted that there is indeed a good number of relevant contributions using single robots for search and rescue, but that is not the case for multiple robots. Thus, in general, the central problem this dissertation addresses is the following:

HOW DO WE COORDINATE AND CONTROL MULTIPLE ROBOTS SO AS TO ACHIEVE COOPERATIVE BEHAVIOR FOR ASSISTING IN DISASTER AND EMERGENCY RESPONSE, SPECIFICALLY, IN URBAN SEARCH AND RESCUE OPERATIONS?

It has to be clear that this problem implies the use of multiple robotic agents working together in a highly uncertain and dynamic environment with special needs for quick convergence, robustness, intelligence and efficiency. Also, even though the essential purpose is to address navigational issues, other factors include: time, physical environmental conditions, communications management, security management, resources management, logistics management, information management, strategy, and adaptivity [83]. So, we can generalize by stating that the rescue robotic team must be prepared to navigate a hostile, dynamic environment where time is critical, sensitivity and multi-agent cooperation are crucial, and strategy is vital to scope the efforts towards supporting human rescuers in achieving faster and more secure USAR operations.

1.3 Research Questions and Objectives

Having stated the problem, the general idea of having a MRS efficiently assisting human first responders in a disaster scenario includes several objectives to complete. In Robin Murphy's words, the most pressing challenges for rescue robotics reside in:

“How to reduce mission times? How to localize, map, and integrate data from the robots into the larger geographic information systems used by strategic decision makers? How to make rescue robot operations more efficient in order to find more survivors or provide more timely information to responders? How to improve the overall reliability of rescue robots?”– Robin R. Murphy [204]

Consequently, we can state the following research questions addressed herein:

1. HOW TO FORMULATE, DESCRIBE, DECOMPOSE AND ALLOCATE USAR MISSIONS AMONG A MRS SO AS TO ACHIEVE FASTER COMPLETION?

2. HOW TO PROVIDE APPROPRIATE COMMUNICATION, INTERACTION, AND CONFLICT RECOGNITION AND RECONCILIATION BETWEEN THE MRS SO AS TO ACHIEVE EFFICIENT INTEROPERABILITY IN USAR?

3. HOW TO ENSURE ROBUSTNESS FOR USAR MISSION ACCOMPLISHMENT WITH CURRENT TECHNOLOGY, WHICH IS BETTER SUITED FOR SIMPLE BUT FAST CONTROL?

4. HOW TO MEASURE PERFORMANCE IN USAR SO AS TO LEARN AND ADAPT ROBOTIC BEHAVIORS?

5. HOW TO MAKE THE WHOLE SYSTEM EXTENDIBLE, SCALABLE, ROBUST AND RELIABLE?

In this way, we define the following objectives in order to develop an answer to the stated questions:

1. Modularize search and rescue missions.

(a) Identify main USAR requirements.

(b) Decompose USAR operations into fundamental tasks or subjects so as to allocate them among robots.

(c) Define robotic basic requirements for USAR.

2. Determine the basic structure for the multi-agent robotic system.

(a) Control architecture for the autonomous mobile robots.

(b) Control architecture for the rescue team.

3. Create a distributed system structure for coordination and control of a MRS for USAR.

(a) Identify possibilities for defining roles in accordance to fundamental tasks in USAR.

(b) Define appropriate robotic behaviors needed for the tasks and matching the definedroles.

(c) Decompose behaviors into observable disjoint actions.

4. Develop innovative algorithms and computational models for mobile robots coordina-tion and cooperation towards USAR operations.

(a) Create the mechanism for synchronization of the MRS actions in order to go co-herently and efficiently towards mission accomplishment.

(b) Create the robotic behaviors for USAR.

(c) Create the mechanism for coordinating behavioral outputs in individual robots (connect the actions).

(d) Identify the possibilities for an adaptivity feature so as to learn additional behav-iors and increase performance.

5. Demonstrate results.

(a) Make use of standardized tools for developing the robotic software for both simulation and real implementations.

(b) Implement experiments with real robots and testbed scenarios.

So, the next section provides an overview of how we fulfill these objectives so as to push forward the state of the art in rescue robotics.

1.4 Solution Overview

Perhaps the most important thing when working towards a long-term goal is to provide solutions with capabilities for continuity, in order to achieve increasing development and suitability for future technologies. The solutions provided herein therefore promote a modular development, fully integrating and adding new control elements as well as new software and hardware resources so as to permit upgrades. The main purpose is to have a solution that can be constantly improved according to current rescue robotics advances, so that performance and efficiency can be increased. In this section, general information characterizing our solution approach is presented: first the behavioral and coordination strategies, then the architectural and service-oriented design, and finally briefs on the typical testbeds for research experiments.

1.4.1 Dynamic Roles + Behavior-based Robotics

Considering human cognition, M. Minsky states in The Emotion Machine [188] that the human mind has many different ways of thinking that are used according to different circumstances. He considers emotions, intuitions and feelings as these different ways of thinking, which he calls selectors. Figure 1.5 shows how, given a set of resources, it depends on

the active selectors which resources are used. Note that some resources can be shared among multiple selectors.

Figure 1.5: Minsky’s interpretation of behaviors. Image from [188].

In robotics, these selectors become the frontiers for sets of actions that activate robotic resources according to different circumstances (perceptions). This approach was introduced by R. Brooks in a now-classic paper that suggests a control composition in terms of robotic behaviors [49]. This control strategy revolutionized the area of artificial intelligence by essentially characterizing a close coupling between perception and action, without an intermediate cognitive layer. Thus arose the classification of what are now known as classic and new artificial intelligence; refer to Figure 1.6. The major motivation for using this new AI is that it requires neither accurate knowledge of the robot's dynamics and kinematics, nor carefully constructed maps of the environment, the way classic AI and traditional methods do. It is therefore a well-suited strategy for addressing time-varying, unpredictable and unstructured situations [29].

Figure 1.6: Classic and new artificial intelligence approaches. Edited from [255].

Accordingly, in new AI, as stated by M. Mataric in [175], behavior-based control comes as an extension of any reactive architecture, making a compromise between a purely reactive

system and a highly deliberative system; it employs various forms of interpretation and representation of a given state, enabling relevance and locality. She notes that this strategy implements a basic unit of abstraction and control, limited to a specific mapping between a perception and a given response, while permitting the addition of more behaviors or control units. Behaviors thus work as the building blocks for robotic actions [11]. This inherent modularity is highly desirable for constructing increasingly complex systems, and for creating a distributed control that facilitates scalability, extendibility, robustness, flexibility, setup speed, and the feasibility and organization to design complex systems. Also, according to [52], using behavior-based control has a direct impact on situatedness, embodiment, reactivity, cooperation, learning and emergence (refer to Table 1.2). Finally, to ease understanding of these building blocks, Figure 1.7 represents the basic code structure of a given behavior.

Figure 1.7: Behavior in robotics control. Image from [138].
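In code, such a building block reduces to a releaser guarding a direct perception-to-action mapping. The following is a generic illustrative sketch; the sensor format, thresholds, and command fields are assumptions, not the structure from [138].

```python
# A behavior as a basic unit of control: a releaser (trigger condition)
# guarding a direct mapping from perception to response, with no
# intermediate world model. Sensor format and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class MotorCommand:
    linear: float    # forward velocity (m/s)
    angular: float   # turn rate (rad/s)

def avoid_obstacle(sonar_ranges):
    """Reactive avoid behavior: triggers when any reading is too close."""
    if min(sonar_ranges) < 0.5:          # releaser
        return MotorCommand(0.0, 1.0)    # response: stop and turn in place
    return None                          # not triggered; yield to others

cmd = avoid_obstacle([2.0, 0.3, 1.5])    # triggers: obstacle at 0.3 m
```

Returning `None` when the releaser is not satisfied lets a coordination layer decide among the behaviors that did trigger.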

So, the proposed solution herein considers the qualitative definition of the robotic behaviors needed for USAR operations, and their decomposition into robotic actions concerning multiple unmanned ground vehicles. In this way, individual robot architectures reside in a behavior-based “horizontal” structure, which is intended to be coordinated so as to show coherent performance towards mission accomplishment. Coordination is mainly addressed with the four approaches shown in Figure 1.8; their usage is described in Chapter 3.

Figure 1.8: Coordination methods for behavior-based control. Edited from [11].
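As a rough illustration of how such coordination methods differ, the sketch below contrasts two common families from the behavior-based literature [11]: competitive arbitration, where one behavior's output wins outright, and cooperative fusion, where outputs are blended. The behaviors, priorities, weights, and 2-D motion-vector output format are hypothetical, not the specific methods of Chapter 3.

```python
# Two families of behavior coordination (illustrative sketch):
# competitive arbitration picks a single winner; cooperative fusion
# blends all active outputs. Vectors are (vx, vy); numbers are made up.

def arbitrate(outputs):
    """Competitive: the highest-priority active behavior wins outright."""
    active = [(prio, vec) for prio, vec in outputs if vec is not None]
    return max(active)[1] if active else (0.0, 0.0)

def fuse(outputs, weights):
    """Cooperative: weighted vector sum of all active behavior outputs."""
    vx = sum(w * v[0] for (p, v), w in zip(outputs, weights) if v is not None)
    vy = sum(w * v[1] for (p, v), w in zip(outputs, weights) if v is not None)
    return (vx, vy)

# avoid-obstacle (priority 2) and goto-goal (priority 1), both active:
outputs = [(2, (0.0, 1.0)), (1, (1.0, 0.0))]
winner = arbitrate(outputs)           # competitive result
blend = fuse(outputs, [0.7, 0.3])     # cooperative result
```

Arbitration yields crisp, predictable switching; fusion yields smoother motion but can produce compromise commands that satisfy no behavior fully.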

What is more, to reduce the number of triggered behaviors in a given circumstance, and thus simplify single-robot action coordination, a dynamic role assignment is proposed.

As defined in [75], a role is a function that one or more robots perform during the execution of a cooperative task while certain internal and external conditions are satisfied. Which role to perform depends on the robot's internal state and on external states such as other robots, the environment, and the mission status. The role defines which controllers (behaviors) control the robot at that moment. The role-assignment mechanism thus allows the robots to assume and exchange roles during cooperation, dynamically changing their active behaviors during task execution.
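A minimal sketch of such a role-assignment rule is given below; the role names, behavior lists, and switching conditions are hypothetical illustrations in the spirit of [75], not the mechanism developed later in this dissertation.

```python
# Dynamic role assignment sketch: a role bundles the behaviors allowed to
# control the robot, and is re-evaluated from internal and external state.
# Role names, behaviors, and conditions are hypothetical.

ROLES = {
    "explorer": ["wander", "avoid_obstacle", "frontier_seek"],
    "rescuer":  ["goto_victim", "avoid_obstacle", "signal_base"],
}

def assign_role(internal, external):
    """Pick a role from the robot's internal state and the mission status."""
    if external.get("victim_detected") and internal.get("battery", 0) > 0.2:
        return "rescuer"
    return "explorer"

role = assign_role({"battery": 0.8}, {"victim_detected": True})
active_behaviors = ROLES[role]   # behaviors enabled while this role holds
```

Because only the current role's behaviors are active, the single-robot coordination problem at any instant involves far fewer candidate outputs.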

Additionally, to ensure the correct procedure towards mission accomplishment, a mechanism for specifying what robots should be doing at a given time or circumstance is proposed. This mechanism is the so-called finite state automaton (FSA) [192]. Its development requires defining a finite number of discrete states K, the stimuli Σ demanding a state change, the transition function δ selecting the appropriate state according to the given stimulus, and a pre-defined pair of states: initial s and final F. All of these result in the finite state machine (FSM) notation used as a reminder of what is needed to construct an FSA. It is commonly known as M for machine and is defined as in Equation 1.1. Table 1.3 shows the relationship between the FSM, the FSA, and behavior-based control (BBC).

M = {K,Σ, δ, s, F} (1.1)

Table 1.3: FSA, FSM and BBC relationships. Edited from [192].

FSM   FSA                                 Behavioral Analog
K     set of states                       set of behaviors
Σ     state stimulus                      behavior releaser/trigger
δ     function that computes new state    function that computes new behavior
s     initial state                       initial behavior
F     termination state                   termination behavior
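Equation 1.1 can be exercised directly in code. The following minimal sketch instantiates M = {K, Σ, δ, s, F}; the states and stimuli are hypothetical stand-ins for behaviors and their releasers, not the mission automaton developed in Chapter 3:

```python
# Minimal FSA M = {K, Sigma, delta, s, F} driving behavior switching.
K = {"search", "approach", "report"}          # states (behaviors)
SIGMA = {"victim_seen", "victim_reached"}     # stimuli (behavior releasers)
DELTA = {                                     # transition function delta
    ("search", "victim_seen"): "approach",
    ("approach", "victim_reached"): "report",
}
s, F = "search", {"report"}                   # initial state and final states

def run(stimuli):
    """Feed a stimulus sequence through M; undefined transitions keep the state."""
    state = s
    for sigma in stimuli:
        state = DELTA.get((state, sigma), state)
    return state, state in F
```

For instance, the stimulus sequence `["victim_seen", "victim_reached"]` drives the machine from the initial `search` behavior to the terminal `report` behavior.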

So, using these strategies, matched precisely to USAR robotic requirements, leads us to the goal and sequence diagrams that enabled us to completely define and decompose roles, behaviors and actions. Full detail is presented in Chapter 3.

1.4.2 Architecture + Service-Oriented Design

As referred in the previous section, the idea is for the individual robot architectures to fit the “horizontal” structure provided by the new AI and behavior-based robotics. This is mainly due to the advantages of focusing on and fully attending to local perceptions and quickly responding to the current circumstances. Nevertheless, there must be something that ensures reliable control and robust mission completion at the multi-robot level. To this end, we propose a classic AI mechanism providing plans and higher-level decision/supervision in the traditional “vertical” sense-think-act approach. Thus, the group architecture proposed herein falls into the classification of hybrid architectures, which are primarily characterized by providing the structure for merging deliberation and reaction [192].

Generally speaking, the proposed hybrid architecture concerns the elements present in AuRA and in Alami et al.’s work (refer to Appendix A), but at two levels: single-robot and multi-robot. These elements are properly defined by R. Murphy in [192] and are presented in Table 1.4 with their specific component at each level. It is worth mentioning that these components interact essentially at the Decisional, Executional, and Functional levels.

Table 1.4: Components of a hybrid-intelligence architecture. Based on [192].

Component          Single-Robot                Multi-Robot
Sequencer          FSM                         Task and Mission Supervisor
Resource Manager   Behavioral Management       Reports Database
Cartographer       Robot State                 Robots States Fusion
Planner            Behaviors Releasers         Mission Planner
Evaluator          Low-level Metrics           High-level Metrics
Emergence          Learning Behaviors Weights  Learning New Behaviors

Accordingly, a nomenclature based on [11] is shown in Table 1.5. In general terms, the idea is that from a given pool of robots we can form a rescue robotic team defined as ~X, where every element of the vector represents a physical robotic unit. Once we have the robots, a set of roles ~Hx can be defined for each robot xi, containing a subset of robotic behaviors ~Bxh, which basically refer to the mapping between the perceptions ~Sx and the responses or actions ~Rx ( ~Bxh : ~Sx 7→ ~Rx; the so-called β-mapping), both of which are linked to the physical robot capabilities. It is worth clarifying that these roles and behaviors are considered the abstraction units that facilitate the control and coordination of the robotic team, including aspects such as scalability and redundancy. These roles and behaviors also represent the capabilities of each robot and of the whole team for solving different tasks, thus yielding a measure of task and mission coverage.
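A hedged sketch of the β-mapping and the gain-weighted output selection follows. The two behaviors and the winner-take-all arbitration rule are illustrative assumptions, not the coordination scheme detailed in Chapter 3:

```python
# Each behavior beta maps perceptions ~S to a response r; the gains ~G_B scale
# the candidate outputs ~rho, and arbitration picks the specific output rho*.
def avoid(S):
    return -S["obstacle_proximity"]   # back away from a close obstacle

def advance(S):
    return S["goal_direction"]        # move toward the goal heading

BEHAVIORS = [avoid, advance]
GAINS = [2.0, 1.0]                    # ~G_B: one control gain per behavior

def arbitrate(S):
    """Scale each behavior's response by its gain, then select rho*
    (here: the largest-magnitude candidate, an illustrative choice)."""
    rho = [g * beta(S) for g, beta in zip(GAINS, BEHAVIORS)]   # ~rho_x
    return max(rho, key=abs)                                   # rho*_x
```

With an obstacle very close, the gain-scaled avoidance response dominates and becomes ρ*; tuning the gains shifts which behavior wins.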

The nomenclature representations are used in Figure 1.9 to graphically show an overview of the group architecture proposed herein. As can be seen, the architecture is divided into five principal divisions, allowing this research work to focus on the Decisional, Executional and Functional control levels. The Decisional Level is where the mission status, supervision reports and team behavior take place; at this level the mission is partitioned into tasks. The call for roles, behavior activation and individual behavior reports then take place at the Executional Level. It is at this level of control that task allocation and the coordination of robot roles ( ~H) occur. Finally, a coordinated output from the active robotic behaviors ( ~Bxh) is expected in the form of ρ* for each robotic unit at the Functional Level, which also includes the corresponding action reports. Below these levels are the wiring and hardware specifications, which are not main research topics of this dissertation.

Furthermore, as mentioned for the evaluator component in Table 1.4 and as shown in Figure 1.9, we consider some low-level and high-level metrics. These metrics are described in Table 1.6; their principal purpose is to provide a way of evaluating single-robot actions and team performance in order to enable learning. The intention is to automatically obtain better behavior parameters ( ~GB) according to operability, as well as to generate new emergent behaviors (β-mappings) to gain efficiency. Other particular metrics are described in Chapter 4.


Table 1.5: Nomenclature.

Description (Type)                  Representation
Set of Robots (INT)                 ~X = [x1, x2, x3, ..., xN] for N robots.
Set of Robot Roles (INT)            ~Hx = [h1, h2, h3, ..., hn], n roles for each robot x.
Set of Robot Behaviors (INT)        ~Bxh = [β1, β2, β3, ..., βM], M behaviors for h roles for x robots.
Set of Behavior Gains (FLOAT)       ~GB = [g1|β1, g2|β2, g3|β3, ..., gM|βM] for M behaviors, as their control parameters.
Set of Robot Perceptions (FLOAT)    ~Sx = [(P1, λ1)x, (P2, λ2)x, (P3, λ3)x, ..., (Pp, λp)x], p perceptions for x robots.
Set of Robot Responses (FLOAT)      ~Rx = [r1, r2, r3, ..., rm], m responses for x robots.
Set of Possible Outputs (FLOAT)     ~ρx = [g1 ⋆ r1, g2 ⋆ r2, g3 ⋆ r3, ..., gM ⋆ rM], M outputs with ⋆ as a special scaling operator, for x robots.
Specific Output (FLOAT)             ρ*x for x robots, from the arbitration of ~ρx.
Set of Tasks (INT)                  ~T = [t1, t2, t3, ..., tk] for k tasks.
Set of Capabilities (BOOL)          ~Ck = [(B1, H1)k, (B2, H2)k, (B3, H3)k, ..., (BN, HN)k] for k tasks and N robots.
Set of Neighbors (INT)              ~Nx = [n1, n2, n3, ..., nq], q neighbors for x robots.
Task Coverage (FLOAT)               TCi = |Ci| / √N for task i and N robots.
Mission Coverage (FLOAT)            MC = (1 / (√N · k)) · Σ_{i=1}^{k} |Ci| for k tasks and N robots.
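The two coverage formulas can be checked with a small worked example; the team size and the capability counts below are made up for illustration:

```python
import math

def task_coverage(C_i, N):
    """TC_i = |C_i| / sqrt(N): capabilities covering task i over team size N."""
    return len(C_i) / math.sqrt(N)

def mission_coverage(C, N):
    """MC = (1 / (sqrt(N) * k)) * sum_{i=1}^{k} |C_i| for k tasks."""
    k = len(C)
    return sum(len(C_i) for C_i in C) / (math.sqrt(N) * k)

# Example: N = 4 robots, k = 2 tasks. Task 1 is covered by 2 (behavior, role)
# capability pairs, task 2 by 4.
C = [
    [("B1", "H1"), ("B2", "H2")],
    [("B1", "H1"), ("B2", "H1"), ("B3", "H2"), ("B4", "H2")],
]
# task_coverage(C[0], 4) -> 2 / sqrt(4) = 1.0
# mission_coverage(C, 4) -> (2 + 4) / (sqrt(4) * 2) = 1.5
```

Note that with these definitions a task can score above 1 when redundantly covered, which is consistent with the redundancy aspect mentioned for Table 1.5.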

So, the last thing to mention is that every behavior is coded under the service-oriented paradigm. In this way, every single piece of code is highly reusable. The architecture and communications are also settled upon this SOR approach. Even though we mentioned both ROS and MSRDS as robotic frameworks promoting SOR design, we decided to go with MSRDS because of its two main additional features: the Concurrency and Coordination Runtime (CCR) and the Decentralized Software Services (DSS).

Essentially, the CCR is a programming model for automatic multi-threading and inter-task synchronization that helps prevent typical deadlocks while providing communication methods suited to robotics requirements such as asynchrony, concurrency, coordination and failure handling. The DSS provides the flexibility of distributed, loosely coupled services, including the tools to deploy lightweight controllers and web-based interfaces on non-high-spec computers such as commercial handhelds. Both features


Figure 1.9: Group architecture overview.

Table 1.6: Relevant metrics in multi-robot systems.

Level  ID   Name                       Description
Low    TTD  Task time development      Flexibility & Adaptivity. Time taken to complete the task.
Low    TTC  Task time communication    Flexibility & Adaptivity. Time used for communicating.
Low    FO   Fan out                    Robot utilization. Neglect time over interaction time.
High   TC   Task coverage              Robustness. Team capabilities over task needs.
High   MC   Mission coverage           Robustness. Team capabilities over mission needs.
High   TE   Task effectiveness         Reliability. Binary metric: completed / failed.


enable us to code more efficiently in a well-structured fashion. For a complete description of how they work and of MSRDS functionality, refer to [70].
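In spirit, a CCR join arbiter fires a handler only once all of its input ports have data. A rough analogy, not MSRDS/CCR code, can be written with Python's asyncio; the service names and values below are made up:

```python
import asyncio

# Rough analogy to a CCR join arbiter (not MSRDS code): two hypothetical
# sensor services post results asynchronously, and the handler proceeds
# only when both results are available.
async def sensor(name, delay, value):
    await asyncio.sleep(delay)        # simulate an asynchronous device read
    return name, value

async def joined_handler():
    # Like a join arbiter over two ports: gather waits for all inputs.
    results = dict(await asyncio.gather(
        sensor("laser", 0.01, 2.5),
        sensor("camera", 0.02, "frame"),
    ))
    return results
```

Running `asyncio.run(joined_handler())` yields both readings without explicit locks or threads; the CCR similarly coordinates handlers over message ports without manual thread management.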

In that way, Figure 1.10 shows the basic unit of representation of the infrastructure for organizing the MRS in the service-oriented approach. Every element there, such as system, subsystem and components, is intended to work as a service or group of services (an application). The complete description of its features and elements is presented in Chapter 3. For now, it is worth mentioning the important aspects of the proposed architecture:

• JAUS-compliant topology leveraging a clear distinction between levels of competence (individual robot (subsystem) and robotic team (system) intelligence) and the simple integration of new components and devices [106].

• Easy to upgrade, share, reuse, integrate, and to continue developing.

• Robotic platform independent, mission/domain independent, operator use independent (autonomous and semi-autonomous), computer resource independent, and global state independent (decentralized).

• Time-suitable communications with one-to-many control capabilities.

• Manageability of code heterogeneity by standardizing a service structure.

• Ease of integrating new robots into the network by self-identification, without reprogramming or reconfiguring (self-discoverable capabilities).

• Inherent negotiation structure where every robot can offer its services for interaction and ask for other robots’ running services.

• Fully meshed data interchange for robots in the network.

• Capability to handle communication disruption, where a disconnected out-of-communication-range robot can resynchronize and continue communications when the connection is recovered (association/dissociation).

• Easily extended in accordance with mission requirements and the available software and hardware resources by instantiating the current elements.

• Capability to have more interconnected system elements, each with a different level of functionality, leveraging distribution, modularity, extendibility and scalability.

1.4.3 Testbeds Overview

To demonstrate the feasibility of the solution proposed herein, simulations in MSRDS and results of real implementations using academic research robotic platforms are included. Even though Chapter 4 gives the complete detail of every test, it is worth mentioning the general experimentation idea here: multiple unmanned ground vehicles navigate maze-like arenas representing disaster aftermath scenarios, with the main purpose of gathering information from the environment and mapping it to a central station. Thus, testing the architecture for coupling the MRS, validating behaviors, and coordinating simultaneously triggered actions are our main tests. General assessment and deliberation on the type of aid to give to an entity (victim, hazard or endangered kin), as well as complete rounds of coordinated search and rescue operations, are out of the scope of this work.

Figure 1.10: Service-oriented group architecture.

1.5 Main Contributions

According to [182], tools and equipment are a key aspect of successful search and rescue operations, but they are usually disaster-specific. It is therefore outside our scope to generate such a specific robotic team; instead, we focus on the broader approach of coordinated navigation, assuming the same strategy can be implemented regardless of the robotic resources, which are particular to each specific disaster. It is important to remember that the attractiveness of robots for disasters resides in their potential to extend the senses of the responders into the interior of the rubble or through hazardous materials [204], thus implying the need for navigating.

So the principal benefit of the project resides in the expectations of robotics applied to disastrous events and the study of behavior emergence in rescue robotic teams. More specifically, the focus is to find and test the appropriate behaviors for multi-robot systems addressing a disaster scenario, in order to develop a strategy for choosing the best combination of roles, behaviors and actions (RBA) for mission accomplishment. The main contributions are the following:

• USAR modularization leveraging local perceptions and mission decomposition into subtasks concerning specific roles, behaviors and actions.

• Primitive and composite service-oriented behaviors, fully described, decomposed into robotic actions, and organized by roles for addressing USAR operations.


• A distributed USAR robotic coordinator based on an RBA-plus-FSM strategy, with a JAUS-compliant and SOR-based infrastructure focusing on features such as modularity, scalability and extendibility, among others.

• An emergent robotic behavior for single and multi-robot autonomous exploration of unknown environments, with essential features such as: coordinating without any deliberative process; a simple targeting/mapping technique with no need for a-priori knowledge of the environment or for calculating explicit resultant forces; robots free to leave line-of-sight; and task completion not compromised by any single robot’s functionality. Also, our algorithm decreases computational complexity from the typical O(n²T) (n robots, T frontiers) of deliberative systems and O(n²) (n×n grid world) of reactive systems, to O(1) when robots are dispersed and O(m²) whenever m robots need to disperse.

• Study of the emergence of rescue robotic team behaviors and their applicability in real disasters.

Consequently, we can summarize that the main purpose of this work is to create a coordinator mechanism that serves as an infrastructure for autonomous decisional and functional abilities, allowing robotic units to demonstrate cooperative behavior so as to coherently develop USAR operations. This includes the partition of a USAR mission into tasks that must be efficiently distributed among the robotic resources, and the resolution of their conflicts. It is also important to mention that there is no intended contribution in robots giving real aid such as medical treatment, rubble removal, fire extinguishing, deep structural inspection, or shoring unstable rubble; but there is a clear intention of emulating such aid whenever the system determines it is needed. The main contributions in robotic actions thus reside in search, reconnaissance and mapping, serving as surrogates, and even acting as mobile beacons/repeaters.

In the end, the ideal long-term solution would be a highly adaptive, fault-tolerant, heterogeneous multi-robot system able to flexibly handle different tasks and environments, which means: task allocation solving; obstacle/failure overcoming; and efficient autonomous decision, navigation and exploration. In other words, the ideal is to create a robotic team in which each unit behaves coherently and takes time to reorganize if a tactic or its performance is not working well, thus showing group tactical goals and/or team strategic decision-making, so as to achieve a crucial impact within the so-called “72 golden hours”: increasing the survival rate, avoiding further environmental damage, and restoring basic infrastructure.

1.6 Thesis Organization

This work is organized as follows. The next chapter discusses a literature review of the state of the art in rescue robotics, focusing on the major addressed issues, software contributions, robotic units and team designs, real and simulated implementations, and the standards given to date. Chapter 3 then details the provided solution, covering every procedure needed to fulfill the previously stated objectives, including USAR operations requirements, task decomposition and allocation, the hybrid intelligence approach, the dynamic role assignment and behavioral details, and the implemented service-oriented design. Chapter 4 describes the experiments as well as the results of the simulation tests and real implementations; this chapter includes the MRS proposed for experimentation. Finally, Chapter 5 presents the conclusions of this dissertation, including a summary of contributions, a final discussion, and possibilities for future work.


Chapter 2

Literature Review – State of the Art

“So even if we do find a complete set of basic laws, there will still be in the years ahead the intellectually challenging task of developing better approximation methods, so that we can make useful predictions of the probable outcomes in complicated and realistic situations.”

– Stephen Hawking. (Theoretical Physicist)

CHAPTER OBJECTIVES
— What robots do in rescue missions.
— Which are the major software contributions.
— Which are the major hardware contributions.
— Which are the major MRS contributions.
— How contributions are being evaluated.

A good starting point when looking for a solution is to identify what has been done: the state of the art and the worldwide trends around the problem of interest. Current technological innovations are important tools that can be used to improve disaster and emergency response and recovery, so knowing what technology is available is crucial when trying to enhance emergency management. The typical technology implemented in these situations includes [182, 267]:

• Radar devices such as Doppler radar for severe weather forecasting and microwaves for detecting respiration under debris.

• Traffic signal preemption devices for allowing responders to arrive without unnecessary delay.

• Detection equipment for detecting the presence of weapons of mass destruction.

• Listening devices and extraction equipment for locating and removing victims under the debris, including acoustic probes for listening to sound from victims.

• Communication devices such as amateur (ham) radios for sharing information when other communication systems fail. Also, equipment such as the ACU-1000 for linking all the present mobile radios, cell phones, satellite technology and regular phones into a single real-time communication system.


CHAPTER 2. LITERATURE REVIEW – STATE OF THE ART 29

• Global positioning systems (GPS) for plotting damage and critical assets.

• Video cameras and remote sensing devices, such as bending camera heads with light on a telescopic stick or cable for searching under rubble, and infrared cameras for human detection by means of thermal imaging, for providing information about the damage.

• Personal digital assistants (PDAs) and smartphones for communicating via phone, e-mail or messaging in order to contact resources and schedule activities.

• Geographic information systems (GIS) for organizing and accessing spatial information such as physical damage, economic loss, social impacts, and the location of resources and assets. Also, equipment such as HAZUS for analysing scientific and engineering information with GIS in order to estimate the hazard-related damage, including shelter and medical needs.

• A variety of tools such as pneumatic jacks for lifting structures, hydraulic spreader tools for opening narrow gaps, air/engine tools for cutting structures, and jack hammers for drilling holes in concrete structures.

• Teleoperated robots such as submarine vehicles for underwater search, ground vehicles to capture victims, ground vehicles for searching fire, ground vehicles for remote fire extinguishing, and air vehicles for video streaming.

Therefore, different sensing and communication devices are being implemented by human rescuers and mobile technology in order to reduce the impact of disastrous events. Rescue teams are also capable of using more technological tools than before because of the lower costs of computers, software, and other equipment. Thus, this chapter presents information on the incorporation of robotic technology for disaster response, including: the major problems addressed by mobile robots in disasters, the main rescue robotic software and hardware contributions, the most relevant teams of rescue robots, important tests and real implementations, and the international standards achieved to date.

2.1 Fundamental Problems and Open Issues

Intending to implement mobile robots in disaster scenarios implies a variety of challenges that must be addressed not only from a robotics perspective but also from other disciplines such as artificial intelligence and sensor networking. Having an MRS for collaboratively assisting a rescue mission implies several challenges that are consistent among different application domains, for which a generic diagram is presented in Figure 2.1. As can be seen, the main problems arise at the intersection of control, perception and communication, which are responsible for attaining the adaptivity, networking and decision making that provide the capabilities for efficient operations [150].

Figure 2.1: Major challenges for networked robots. Image from [150].

More precisely, concerning this work’s particular implementation domain, it is worth describing the structure of a typical USAR scenario in order to better understand the situation; an illustration is presented in Figure 2.2. It can be appreciated that, through time, the solution has been addressed with three main approaches: robots and systems, simulation, and human responders. Each of them represents a tool for gathering more data from the incident in order to record and map it at a central station (usually a GIS) for better decision making and more efficient search and rescue operations. Each of them also intends to provide parallel actions that can reduce operations time, reduce risks to humans, prevent secondary damage, and raise the survival rate. In particular, robots and systems are expected to improve the capability of advanced equipment and the methods of USAR, essentially by complementing human abilities and supporting difficult human tasks, with the intention of empowering responders’ ability and efficiency [267, 268]. According to [204], these expectations imply the previously described robotic applications such as search, reconnaissance and mapping, rubble removal, structural inspection, in-situ medical assessment and intervention, sensitive extrication and evacuation of victims, mobile repeaters, human surrogates, adaptive shoring, and logistics support. For complete details refer to [268].

Figure 2.2: Typical USAR Scenario. Image from [267].

Moreover, inside the USAR scenario robots are intended to operate in the hot zone of the disaster. Typically in the US, the hot zone is the rescue site, in which movement is restricted (confined spaces), there is poor ventilation, it is noisy and wet, and it is exposed to environmental conditions such as rain, snow, CBRNE materials, and natural lighting conditions [196]. Figure 2.3 shows an image taken from WTC Tower 2 with a robot in it, demonstrating the challenges imposed by the rubble and the difficulties of victim recognition.

Figure 2.3: Real pictures from WTC Tower 2. a) shows a rescue robot within the white box navigating the rubble; b) robot’s-eye view with three sets of victim remains. Image edited from [194] and [193].

So, based on the general challenge of developing an efficient MRS for disaster response operations, and on the particularities concerning networked robots and the typical USAR scenario, we are able to state the major issues addressed in robotic search and rescue. Each challenge is described below.

Control. As previously referred, platform control and activity control are challenging tasks because of the mechanical complexities of the different UxVs and the characteristics of the environments [204]. Motion control, in particular, has been developed for the purpose of improving communications [132], localization [119, 144, 286], information integration [165], deployment [76, 144], coverage/tracking [140, 129, 160, 149, 39, 89, 226, 7, 248], cooperative reconnaissance [285, 58, 130, 101, 131, 290, 205, 100, 164], cooperative manipulation [262], and coordination of groups of unmanned vehicles [199, 112, 202, 119, 120, 271, 93, 167], among other tasks. An overview of all the issues in controlling an MRS can be found in [130].

Communications. In order to enhance rescuers’ sensing capabilities and to record gathered information about the environment, robots rely on real-time communications, either through tether or wireless radio links [204]. At a lower level, communications enable state feedback of the MRS, which exchanges information for robot feedforward control; at a higher level, robots share information for planning and for coordination/cooperation control [150]. The challenge resides in the fact that large quantities of data, such as images and range-finder readings, are necessary for sufficient situation awareness and efficient task execution, but the communication infrastructure is typically destroyed, and ad hoc networks and satellite phones are likely to become saturated [204, 268]. Also, implementing lossy compression reduces bandwidth, but at the cost of losing information critical to computer vision enhancements and artificial intelligence augmentation. Moreover, using wireless communications demands encrypted video so that it cannot be intercepted by a news agency, violating a survivor’s privacy [194]. Examples of successful communication networks among multiple robots can be found in [119, 76, 130, 131]. However, implementations in disaster scenarios have not demonstrated solid contributions, but rather point to promising directions for future work in hybrid tether-wireless communication approaches, allowing for reduced computational costs and adequate bandwidth, latency and stability. It is worth mentioning that in the WTC disaster only one robot was intended to be wireless, and it was lost and never recovered [194].

Sensors and perception. According to [196], sensors for rescue robots fall into two main categories: control of the robot, and victim/hazard identification. For the first category, sensors must permit control of the robot through confined, cluttered spaces; perhaps localization and pose-estimation sensors are the greatest challenge. Thus, small-sized range finders are needed to attain good localization and mapping results, and to aid odometry and GPS sensors, which are not always available or sufficient. Relevant works in this category can be found in [130, 33]. On the other hand, victim and hazard detection and identification requires specific sensing devices and algorithms for which research is being carried out. Essentially, there is the need for a sensor that can perceive victims obscured by rubble, and another to report the victim’s status. For this, smaller and better sensors are not sufficient; improvements in sensing algorithms are also needed [204]. At this time, autonomous detection is considered well beyond the capabilities of computer vision, so humans are expected to interpret all sensing data in real time, which is still difficult (refer to Figure 2.3). Nevertheless, it has been demonstrated that video cameras are essential not only for detection purposes but also for navigation and teleoperation [196]. Color cameras have been successfully used to aid in finding victims [194], and black-and-white cameras for structural inspection [203]. Also, lighting for the cameras and special-purpose video devices, such as omni-cams or fish-eye cameras, 3D range cameras, and forward-looking infrared (FLIR) miniature cameras for thermal imaging, are of significant importance, but they may not always be useful and are typically large and noisy (at the WTC disaster, collapsed structures were so hot that FLIR readings were irrelevant [194]). Moreover, other personal protection sensors are being implemented, such as small-sized sensors for CBRNE materials, oxygen, hydrogen sulfide, methane, and carbon dioxide, which can be beneficial in preventing rescue workers from also becoming victims [196]. Additionally, rapid sampling, distributed sensing and data fusion are important problems to be solved [268]. Relevant works towards USAR detection tasks can be found in [163, 90, 246, 130, 116, 161], among others. In short, development of smaller and more robust sensing devices is a must. Also, interchangeable sensors between robotic platforms are desired, and thus standards and cost reduction are needed. Here comes the possibility of implementing artificial intelligence so as to take advantage of inexpensive sensors in order to alleviate problems such as the lack of depth perception, hard-to-interpret data, lack of peripheral vision or feedback, payload support, and unclear planar laser readings, among others.

Mobility. According to [204], the problem of mobility remains a major issue for all modalities of rescue robots (aerial, ground, underground, surface and underwater), but especially for ground robots. The essential challenge resides in the complexity of the environment, which currently lacks a useful characterization of rubble that would facilitate actuation and mechanical design. In general, robotic platforms need to be small enough to fit through voids but at the same time highly mobile, flexible, stable and self-righting (or, better, highly symmetrical with no side up). Also, real implementations have shown the need for not losing traction, tolerating moderate vertical drops, and sealed enclosures for dealing with harsh conditions [196, 194]. With these characteristics in mind, robots are expected to exhibit efficiency in their mechanisms, control and sensing, so as to improve navigational performance such as speed and power economy [268]. The most relevant robotic designs and mobility features for search and rescue are detailed in Section 2.3.

Power. Since the implementation domain implies inherent risks, flammable solutions such as combustion are left aside and electrical battery power is preferred. According to [204], the most important aspects concerning the power source are the robot’s payload capabilities and a location providing good vehicle stability and ease of replacement without special tools. Many suitable batteries exist, and the appropriate one depends on the particular robotic resources. So, choosing the right one and knowing the state of the art in batteries is the main challenge.

Human-robot interaction. Rescue robots interact with human rescuers and with human victims; they are part of a human-centric system. According to [68, 204], this produces four basic problems: 1) the human-to-robot ratio for safe and reliable operations, where nowadays a single robot requires multiple human operators; 2) humans teleoperating robots must be highly prepared and trained, a scarce resource in a response team; 3) user interfaces are insufficient, unfriendly and difficult to interpret; and 4) there is the need to control the robots so that they approach humans in an ‘affective robotics’ manner, so as to seem helpful. These four problems determine whether a robot can be used in a disaster scenario, as in the case of a robot at the WTC that was rejected because of the complexity of its interface [194]. Perhaps these implications, and the semi-autonomy desired to augment human rescuers’ abilities, motivated the RoboCup Rescue to suggest the information needed in a user interface: a) the robot’s perspective plus perceptions that enhance the impression of telepresence; b) the robot’s status and critical sensor information; and c) a map providing a bird’s-eye view of the locality. Moreover, relevant guidelines have been proposed, such as in [292]. The point is that human-robot interaction must provide a means of cooperation, with an interface that reduces fatigue and confusion, in order to achieve a more intelligent robot team [196]. What is more, acceptance of rescue robots within the existing social structure must be encouraged [193].

Localization and data integration. As previously noted, a robot must localize itself in order to operate efficiently, and this is a challenging task in USAR missions. In addition to the instrumentation problems, computation and robustness in the presence of noise and degraded sensor models are basic requirements for practical localization and data integration. As stated for USAR GIS, mapping is necessary to use information gathered by multiple robots and systems and to come up with a strategy and decision-making process, so it is of crucial importance to have an adequate distributed localization mechanism

Page 52: PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response

CHAPTER 2. LITERATURE REVIEW – STATE OF THE ART 34

and to deal with particular problems that arise when robot networks are used for identifying, localizing, and then tracking targets in a dynamic setting [150]. Field experience is needed to determine when sensor readings can be considered reliable and when it is better to discard data or use a fusion technique (typically Kalman filtering [288]). Relevant developments can be found in [130, 33].

Autonomy. This problem is perhaps the 'Holy Grail' of robotics and artificial intelligence, as stated by Birk and Carpin in [33]. It sits between the ideal autonomous robot rescue team that would traverse a USAR scenario, locate victims, and communicate with the home base [196], and the unrealistic and undesirable complete solution system for disaster response [194]. It is broadly accepted that a greater degree of autonomy, together with improved sensors and operator training, will greatly enhance the use of robots in USAR operations, but an issue of trust on the part of human rescuers must be solved first through further successful deployments and awareness of robotic tools that assist the rescue effort [37, 194, 33]. That is the main reason why all robots in the first real implementation at the WTC were teleoperated, as were those in the recent nuclear disaster in Fukushima. In fact, some forms of semi-autonomous control for USAR were demonstrated in [194], but their use was not allowed; the authors nevertheless stated that autonomous navigation with miniaturized range sensors was more likely to be achieved than autonomous detection of victims, which poses very challenging computer vision problems under unstructured lighting conditions. For autonomous navigation, typical path planning, path following, and other methodical algorithms might not be as helpful because of the diversity of the voids. Therefore, from a practical software perspective, autonomy must be adjustable (i.e., the degree of human interaction varies) so that rescuers can know what is going on and take appropriate override actions, while robots serve as tools enhancing rescue teams' capabilities [196]. What is more, research groups are working towards system intelligence that fits in on-board processing units, since communications may be intermittent or restricted.

Cooperation. As the mission is challenging enough, a heterogeneous solution to cover disaster areas becomes an invaluable tool. Robots, humans, and other technological systems must be used in a cooperative and collaborative manner so as to achieve efficient operations. Main developments concerning cooperation can be found in [199, 112, 202, 119, 120, 271, 93, 167, 58, 33, 130, 101, 131, 290, 222, 205, 100, 164].

Performance metrics. To date there are no standardized metrics, because the evaluation of rescue robots is complex. On one hand, disaster situations differ case by case, which prevents any simple characterization among them and leaves no room for performance comparison [268]. On the other hand, robots and their missions are also different and are highly dependent on human operators. So, for now, it has been proposed to evaluate post-mission results, such as video analysis for missed victims and avoidable collisions [194], and ad hoc qualitative disaster metrics [204]. It is worth noting that RoboCup Rescue evaluates quantitative metrics such as the number of victims found [19], traversing time [295], and map correctness [155, 6], but these metrics do not capture the value of a robot in establishing that there are no survivors or dangers in a particular area. Thus, metrics for measuring performance remain undefined.
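To make the distinction between qualitative and quantitative evaluation concrete, the quantitative metrics mentioned above (victims found, traversing time, map correctness) can be combined into a simple post-mission score vector. This is only an illustrative sketch: the weighting and time normalization below are assumptions, not a standardized metric from the literature.

```python
# Hedged sketch: a post-mission score vector combining the quantitative
# metrics the text attributes to RoboCup Rescue (victims found, traversing
# time, map correctness). Weights and normalization are illustrative
# assumptions, not a standardized evaluation.

def score_vector(victims_found, traverse_time_s, map_correctness):
    """Return the raw metric tuple plus a simple weighted scalar score."""
    time_score = 1.0 / (1.0 + traverse_time_s / 60.0)  # shorter is better
    scalar = 0.5 * victims_found + 0.3 * map_correctness + 0.2 * time_score
    return {"victims": victims_found, "time_s": traverse_time_s,
            "map": map_correctness, "scalar": round(scalar, 3)}

run = score_vector(victims_found=3, traverse_time_s=120.0, map_correctness=0.8)
```

Note that, as the text observes, no such scalar captures the value of verifying that an area contains no survivors; a score of zero victims found is ambiguous between "none present" and "none detected".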


Component performance. According to [268], research must be done on high-power actuators, stiff mechanisms, sensor miniaturization, light weight, battery performance, low energy consumption, and higher sensing ability (reliable data). These component technologies are the essential features that provide reliability, environmental resistance, and durability, including water-proofing, heat-proofing, dust-proofing, and anti-explosion protection, all of which are crucial for in-disaster operations.

So, we can conclude at this point that the research field of rescue robotics is large, with many different research areas open for investigation. It can also be deduced from the majority of the work in this area that mobile robots are an essential tool within USAR and that their utilisation will increase in the future [37, 194, 33, 204, 268]. For now, several problems remain to be solved, and robots are not ready because of size requirements, insufficient mobility, limited situation awareness, and constrained wireless communications and sensing capabilities. For example, UAVs have been successfully deployed for gathering overview information of a disaster, but they lack important capabilities such as robustness against bad weather and against obstacles such as birds and electric power lines, and they face wireless communication issues, payload limitations, and aviation regulations. On the other hand, UGVs successfully deployed for finding victims need a human operator to help decide whether a victim has been detected, and even though they are teleoperated, they still lack good mobility and actuation. The problems are about the same across different robot modalities, and Figure 2.4 depicts the most important ones. The important point is that there is a clearly open path towards researching and pushing forward worldwide trends such as ubiquitous systems that provide information from security sensors, fire detectors, and other sources, and the miniaturization of devices in order to reduce the robotic platforms' physical, computational, power, and communication constraints so as to facilitate autonomy.

Figure 2.4: Typical problems with rescue robots. Image from [268].

Last but not least, it is worth taking a look at the following list of the most relevant research contributions in rescue robotics. They are grouped by lead researcher and cover developments from 2000 to the present. After the list, Section 2.2 presents a description of the most relevant software contributions.


• Robin Murphy, Texas A&M, Center for Robot Assisted Search And Rescue (CRASAR).

– understandings of in-field USAR [69];

– mobile robots opportunities and sensing and mobility requirements in USAR [196];

– team of teleoperated heterogeneous robots for a mixed human-robot initiative for coordinated victim localization [199];

– recommendations and experiences towards the RoboCup Rescue and standardization of robots' potential tasks in USAR [198, 197];

– experiences in mobility, communications and sensing at the WTC implementations [194];

– recommendations and synopsis of HRI based on the findings, from the post-hoc analysis of 8 years of implementations, that impact the robotics, computer science, engineering, psychology, affective and rescue robotics fields [68, 193, 32];

– novel taxonomy of UGV failures according to WTC implementations and nine other relevant USAR studies [65];

– multi-touch techniques and device validation tests for HRI and teleoperation of robots in USAR [186, 185];

– survey on rescue robotics including robot design, concepts, methods of evaluation, fundamental problems and open issues [204];

– survey and experiences of rescue robots for mine rescue [200, 201];

– robots that diagnose and help victims with simple triage and rapid treatment (START) methods concerning mobility, respiration, blood pressure and mental state [80];

– underwater and aerial post-collapse structural inspections, including damage footprint and mapping of the debris [228, 203];

– study of the domain theory and robotics applicability and requirements for wildland firefighting [195];

– deployment of different robots for aiding in the Fukushima nuclear disaster [237].

• Satoshi Tadokoro, Tohoku University, Tadokoro Laboratory.

– understandings of the rescue process after the Kobe earthquake, explaining the opportunities for robots [269];

– understandings of the simulation, robotic, and infrastructure projects of the RoboCup Rescue [270];

– design of special video devices for USAR [123] and their implementation in the Fukushima nuclear disaster [237];

– robot hardware and control software design for USAR [215, 61];

– in-field demonstration experiments with robots training along with human first responders [276];

– guidelines for human interfaces for using rescue robots in different modalities [292];


– exploration and map building reports from RoboCup Rescue implementations [205];

– complete book on rescue robots, robotic teams for USAR, demonstrations and real implementations, and the unsolved problems and future roadmap [267];

– survey on the advances and contributions for USAR methods and rescue robot designs, including evaluation metrics and standardizations, and the open issues and challenges [268].

• Fumitoshi Matsuno, Kyoto University, Matsuno Laboratory.

– development of snake-like rescue robot platform [142];

– RoboCup Rescue experiences and recommendations on effective multiple-robot cooperative activities for USAR [246];

– robotic rescue platforms for USAR operations [245, 181];

– development of groups of rescue robot development platforms for building inspection [141];

– development of on-rubble rescue teams using tracked robots [180, 189];

– implementation of rescue robots in the Fukushima nuclear disaster [237];

– information infrastructures and ubiquitous sensing and information collection for rescue systems [14];

– generation of topological behavioral trace maps using multiple rescue robots [164];

– the HELIOS system for specialized USAR robotic operations [121].

• Andreas Birk, Jacobs University (International University Bremen), Robotics Group.

– individual rescue robot control architecture for ensuring semi-autonomous operations [34];

– understandings of software component reuse and its potential for rescue robots [145];

– merging technique for multiple noisy maps provided by multiple rescue robots [66];

– USARSim, a high fidelity robot simulation tool based on a commercial game engine, and intended to be the bridge between the RoboCup Rescue Simulation and Real Robot Leagues [67, 18, 17, 20];

– multiple rescue robots exploration while ensuring that every unit stays inside communications range [239];

– cooperative and decentralized mapping in the RoboCup Rescue Real Robot League and in USARSim implementations [33, 225];

– human-machine interface (HMI) for adjustable autonomy in rescue robots [35];

– mechatronic component design for adjusting the footprint of a rescue robot so as to maximize navigational performance [85];

– complete hardware and software framework for fully autonomous operations of a rescue robot implemented in the RoboCup Rescue Real Robot League [224];


– efficient semi-autonomous human-robot cooperative exploration [209];

– teleoperation and networking multi-leveled framework for the heterogeneous wireless traffic of USAR [36].

• Other relevant researchers, several institutions, several laboratories.

– an overview of the rescue robotics field [91];

– survey on rescue robots, deployment scenarios and autonomous rescue swarms, including an analysis of the gap between RoboCup Rescue and the real world [261, 212];

– metrics and evaluation methods for the RoboCup Rescue and general multi-robot teams [254, 143];

– rescue robot designs [282, 40, 158, 265, 8, 266, 84, 277, 187, 211, 216, 249, 87, 151, 252];

– system for continuous navigation of rescue teams [9];

– a multi-platform on-board system for teleoperating different modalities of unmanned vehicles [108];

– multi-robot systems for exploration and rescue including fire-fighting, temperature collection, reconnaissance and surveillance, target tracking and situational awareness [242, 140, 129, 76, 119, 149, 58, 120, 132, 144, 130, 101, 229, 131, 39, 290, 206, 98, 7, 226, 248, 126, 168, 100, 13, 57, 256, 232, 10, 43, 112, 295, 253, 60, 240, 114, 259, 280, 92, 169, 294, 25];

– useful coordination and swarm intelligence algorithms [241, 75, 74, 78, 112, 79, 271, 93, 89, 166, 167, 161, 162, 208, 118, 5].

2.2 Rescue Robotics Relevant Software Contributions

This section is intended to provide information on some of the most relevant software developments that have contributed towards the use of robotic technology for urban search and rescue. It is important to clarify that there have been plenty of successful algorithms for working with multiple robots in several application domains that could be useful for rescue implementations. Nevertheless, in spite of these indirect contributions, the information herein focuses essentially on solutions intended directly for the rescue domain and related tasks.

2.2.1 Disaster Engineering and Information Systems

Perhaps the most basic contributions towards using robotics to mitigate disasters reside in the identification of the factors involved in a rescue scenario. This provides a way to understand what we are dealing with and what must be taken into consideration when proposing solutions. This disaster analysis also creates a path for developing more precise tools such as expert systems and template-based methodologies for information management and task force definition.


An appropriate disaster engineering analysis, based on the 2004 Asian Tsunami, can be found in [83]. This particular disaster presented the opportunity for a profound analysis, not only because of the large damage it caused but also because the disaster response operations began with an important lack of organization. Every country tried to help in its own way, resulting in a sudden congregation of large amounts of resources that caused delays, provisions piling up, and aid not reaching victims. The lack of coordination among the various parties also provoked tensions between the on-site rescue teams, which differed at elemental human levels such as cultural, racial, religious, political, and other sensitivities that matter when conducting a team effort. Fortunately, the ability to adapt and improvise plans on the fly allowed the isolated countries to connect into a network of networks with assigned leaders coordinating the efforts. This made operations more structured, and aid could reach the victims more quickly. A lesson was thus learned: even with limited resources, a useful contribution can be made if the needs are well identified and the rescue efforts are properly coordinated. This resulted in a so-called Large Scale Systems Engineering framework concerning the conceptualization and planning of how disaster relief could be carried out. Most important is its definition of the most critical constraints affecting a disaster response, shown in Table 2.1.

Accordingly, in order to address constraints such as time, environment, information, and even people, different damage assessment systems have been created. The importance of determining the extent of damage to life, property, and the environment resides in the prioritization of relief efforts, in order to define a strategy that matches our intentions of raising the survival rate and reducing further damage. In [81], an expert system to assess damage for planning purposes is presented. This software helps to prepare initial damage maps by fusing data from Satellite Remote Sensing (SRS) and Geographic Information Systems. A typical technique consists of visual change algorithms that compare (by subtraction, ratio, correlativity, comparability. . . ) pre-disaster and post-disaster satellite images, but the authors created an expert system consisting of a human expert, a knowledge base, an inference engine based on decision trees, and a user interface. Using an experimental dataset, the system was fed with a set of rules such as "IF (IMAGE CHANGE=HIGH) AND (BUILDING DENSITY=HIGH) THEN (PIXEL=SEVERELY DAMAGED AREA)" and obtained over 60% accuracy in determining the real damage extent in all cases. The most important aspect of this kind of development is the additional information that can be used for planning and structuring information.
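The rule-firing logic of such a system can be sketched as a few lines of decision-tree-style code. This is a minimal illustrative sketch in the spirit of the expert system described in [81]: only the SEVERELY DAMAGED rule appears in the text, so the remaining rules and class names below are assumptions.

```python
# Hedged sketch: a minimal rule-based damage classifier in the spirit of the
# expert system of [81]. Only the first rule is quoted in the text; the other
# rules, thresholds, and class names are illustrative assumptions.

def classify_pixel(image_change: str, building_density: str) -> str:
    """Apply decision-tree-style rules to one pixel's fused attributes."""
    if image_change == "HIGH" and building_density == "HIGH":
        return "SEVERELY_DAMAGED"   # the rule quoted in the text
    if image_change == "HIGH":
        return "DAMAGED"            # assumed fallback rule
    if image_change == "MEDIUM":
        return "POSSIBLY_DAMAGED"   # assumed fallback rule
    return "UNDAMAGED"

# Example: fuse pre/post-disaster change detection with GIS building density.
pixels = [("HIGH", "HIGH"), ("HIGH", "LOW"), ("LOW", "HIGH")]
labels = [classify_pixel(c, d) for c, d in pixels]
```

In a real system, the inference engine would evaluate such rules over every pixel of the fused SRS/GIS raster to produce the initial damage map.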

In addition, relevant information structures have been defined to organize data for more efficient disaster response operations. These structures are in fact a template-based information system, which is expected to facilitate preparedness and improvisation by first gathering information from the ravaged zone and subsequently providing a protocol for coordinating rescue teams without compromising their autonomy and creativity. A template that is consistent across the literature is shown in Figure 2.5 [156, 56]. It matches different characteristics of the typical short-lasting (ephemeral) teams that emerge in a disaster scenario with the communication needs that must be met for efficient operations. Concerning the boundaries and membership characteristics, which refer to members entering and exiting different rescue groups, information is needed on what they should communicate among the groups, where they are, why and when they leave a group, and whom to communicate with. In the case of leadership, several leaders may help with coordination among


Table 2.1: Factors influencing the scope of the disaster relief effort, from [83].

Primary Boundaries

Time: How much time do we have to scope the efforts? What must be done to minimize the time needed to aid the survivors?

Political: What is the current political relationship between the affected nation and the aiding organizations? What is the current internal political state (potential civil/social unrest) of the affected country? How much assistance is the affected government willing to accept?

External Limitations

Environmental: What are the causes of the disaster? What is the extent of the damage due to the disaster? What are the environmental conditions that would limit the relief efforts (e.g. proximity to a helping country, accessibility to victims)?

Information: How much information on the disaster do we have? How accurate is the information provided to us?

Internal Limitations

Capability: How can technology enhance relief efforts? What extent and depth of training does the response team have? How far can this training be converted to relevant skill sets to carry out the rescue efforts? What is the extent of the coordination effort required?

Resources: What is the range and extent of the critical resources presently allocated to the response team? How are the resources contributing to the overall relief effectiveness in terms of reliability, maintainability, supportability, dependability and capability?

People: What is the state of the victims? What are the perceptions of the public of the affected country and of the aiding countries and organizations with regards to the disaster? How are recent world developments (e.g. frequency of events, economic climate, social relationships with the victims) shaping the willingness of people to assist in the relief efforts?


different groups, so they need to report whom to communicate with and what they are doing. The networking characteristic, or organizational morphology, must adapt to changing operational requirements, so groups must decide what to report just before changing in order not to lose focus and strategy. Work, tasks, and roles primarily concern where they should be done and why. Activities serve as organizational form and behavior triggered by rules of procedure, and thus deal with the what-to-do and who-to-report factors. Next, the ephemeral team is concerned with completing the task rather than adopting the best approach or even a better method, so the only way to quickly convert decisions into action is to act on an ad hoc basis, considering whom to communicate with, how to develop actions, and how to decompose activities. As for memory, it is practically impossible for rescue groups to replicate or base current operations on previous experiences, but there is an opportunity for using knowledge for future reference in order to develop best practices on how to act and on activity decomposition. The final characteristic is intelligence, which is very restricted for rescue teams because they intervene and act on the ground with only partial information or local intelligence, crucial for defining what to do and when to do it. This mapping produces the template that has been used in major disasters such as the WTC. Examples are shown in Figure 2.6.
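The mapping just described, from team characteristics to the communication factors each one must resolve, is essentially a lookup structure, and can be sketched as one. The field names and the exact factor lists below are illustrative assumptions distilled from the prose, not the literal template of [156, 56].

```python
# Hedged sketch of the template-based information system of [156, 56]:
# each ephemeral-team characteristic maps to the communication factors
# (who/what/where/when/why/how) it must resolve. The exact factor lists
# are illustrative readings of the text, not the original template.
from dataclasses import dataclass


@dataclass
class TemplateEntry:
    characteristic: str
    factors: list  # which of who/what/where/when/why/how apply


DISASTER_TEMPLATE = [
    TemplateEntry("boundaries/membership", ["what", "where", "why", "when", "who"]),
    TemplateEntry("leadership", ["who", "what"]),
    TemplateEntry("networking/morphology", ["what"]),
    TemplateEntry("work, tasks and roles", ["where", "why"]),
    TemplateEntry("activities", ["what", "who"]),
    TemplateEntry("ephemeral decision-to-action", ["who", "how"]),
    TemplateEntry("memory", ["how"]),
    TemplateEntry("intelligence", ["what", "when"]),
]


def factors_for(characteristic: str) -> list:
    """Look up the communication factors a characteristic must report."""
    for entry in DISASTER_TEMPLATE:
        if entry.characteristic == characteristic:
            return entry.factors
    return []
```

Such a structure is what makes the template machine-usable: an information system can check, for each team event (a leader change, a member leaving), which factors must be reported before the event is considered handled.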

Figure 2.5: Template-based information system for disaster response. Image based on [156, 56].

With this information in mind, other important contributions address the definition of information flow and management so as to achieve a productive disaster relief strategy. We have stated the importance of quickly collecting global information on the disaster area and on victims buried in the debris awaiting rescue. In [14], the authors provide their view of ideal information collection and sharing in disasters. It is based upon a ubiquitous device called the Rescue-Communicator (R-Comm) and RFID technologies working along with mobile robots


Figure 2.6: Examples of templates for disaster response. Image based on [156, 56].

and information systems. The R-Comm comprises a microprocessor, memory, three compact flash slots, a voice playback module including a speaker, a voice recording module including a microphone, a battery including a power control module, and two serial interfaces. One of the compact flash slots is equipped with wireless/wired communication. The system can operate for 72 h, which is the critical time for humans to survive. It is triggered by emergency situations (it senses vibrations or a voltage drop) and plays recorded messages in order to seek a human response at the microphone and send information to local or ad hoc R-Comm networks. Then, RFID technologies are used for marking the environment to ease mapping and to recognize which zones have already been covered, and even to denote whether they are safe or dangerous. Finally, additional information is collected with the deployment of mobile devices such as humans with PDAs and unmanned vehicles such as rescue robots. Figure 2.7 shows a graphic representation of what is intended for information collection using technology. Figure 2.8 shows a picture of an R-Comm, and Figure 2.9 shows example RFID devices used in rescue robotics experimentation. In the end, R-Comm, RFID, and mobile device information is sent through a network into an information system known as the Database for Rescue Management (DaRuMa), in order to integrate information and provide better situational awareness through an integrated map with different recognition marks.
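The R-Comm's sense-play-listen-report cycle can be sketched as a small state function. This is only an illustrative sketch of the behavior described in [14]: the trigger thresholds, function names, and message format are assumptions, not the device's firmware.

```python
# Hedged sketch of the R-Comm trigger logic described in [14]: the device
# wakes on vibration or a supply-voltage drop, plays a recorded message,
# listens for a human response, and forwards the result to the ad hoc
# network. Thresholds and names here are illustrative assumptions.

VIBRATION_THRESHOLD = 0.8    # assumed trigger level
VOLTAGE_DROP_THRESHOLD = 0.3  # assumed trigger level


def should_trigger(vibration: float, voltage_drop: float) -> bool:
    """Emergency detection: sensed vibration or supply-voltage drop."""
    return (vibration > VIBRATION_THRESHOLD
            or voltage_drop > VOLTAGE_DROP_THRESHOLD)


def rcomm_cycle(vibration, voltage_drop, microphone_heard_voice):
    """One sense-play-listen-report cycle; returns the network message."""
    if not should_trigger(vibration, voltage_drop):
        return None  # quiescent: conserve the 72 h battery budget
    # (stub) play the recorded message on the speaker, record the microphone
    return {"event": "emergency",
            "human_response": bool(microphone_heard_voice)}


msg = rcomm_cycle(vibration=0.9, voltage_drop=0.0, microphone_heard_voice=True)
```

The quiescent-until-triggered design matters because the device must survive on battery for the full 72 h critical window.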

According to [210], DaRuMa is a reference system that utilizes a protocol for rescue information sharing called the Mitigation Information Sharing Protocol (MISP), which provides functions to access and maintain geographical information databases over networks. Through a middleware layer, it translates MISP to SQL in order to obtain SQL tables from XML structures in a MySQL server database. The main advantage is that it is highly portable across operating systems and hardware and able to support multiple simultaneous connections, enabling the integration of information from multiple devices in parallel. Additionally, a tool has been developed for linking the created database with Google Earth, a popular GIS. Figure 2.10 shows a diagram representing how the DaRuMa system collects information from different devices and interacts with them for communication and sharing purposes.
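The essence of the middleware step, flattening an XML record into a SQL table row, can be sketched briefly. This is not the real MISP schema: the element names, table layout, and the use of SQLite in place of the MySQL backend are all illustrative assumptions.

```python
# Hedged sketch of the kind of XML-to-SQL translation the DaRuMa
# middleware performs [210]. The record format, table layout, and SQLite
# stand-in (instead of MySQL) are illustrative assumptions.
import sqlite3
import xml.etree.ElementTree as ET

MISP_RECORD = """
<observation>
  <robot_id>ugv-01</robot_id>
  <lat>35.68</lat>
  <lon>139.76</lon>
  <mark>victim</mark>
</observation>
"""


def misp_to_sql(xml_text: str, conn: sqlite3.Connection) -> None:
    """Flatten one XML record into a row of a geographic-information table."""
    root = ET.fromstring(xml_text)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS observations "
        "(robot_id TEXT, lat REAL, lon REAL, mark TEXT)"
    )
    conn.execute(
        "INSERT INTO observations VALUES (?, ?, ?, ?)",
        (root.findtext("robot_id"), float(root.findtext("lat")),
         float(root.findtext("lon")), root.findtext("mark")),
    )


conn = sqlite3.connect(":memory:")  # stand-in for the MySQL server backend
misp_to_sql(MISP_RECORD, conn)
```

Once records land in relational tables keyed by position, exporting them to a GIS front end such as Google Earth reduces to a query-and-convert step.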


Figure 2.7: Task force in rescue infrastructure. Image from [14].

Figure 2.8: Rescue Communicator, R-Comm: a) Long version, b) Short version. Image from [14].


Figure 2.9: Handy terminal and RFID tag. Image from [14].

Figure 2.10: Database for Rescue Management System, DaRuMa. Edited from [210].


2.2.2 Environments for Software Research and Development

We have previously mentioned the existence of the RoboCup Rescue, which comprises Simulation and Real Robot leagues. This competition has served as an important test bed for artificial intelligence and intelligent robotics research. As stated in [270], it is an initiative that intends to provide emergency decision and action support through the integration of disaster information, prediction, planning, and human interfaces in a virtual disaster world where various kinds of disasters are simulated. The Simulation League consists of a software world of simulated disasters in which different agents interact as victims and rescuers, allowing diverse algorithms to be tested so as to maximize virtual disaster experience, apply it in the human world, and perhaps reach transparent implementations towards real disaster mitigation. The overall concept of the RoboCup Rescue remains as shown in Figure 2.11. Nevertheless, the simulator has evolved, with the most recent implementations using the so-called USARSim.

USARSim is a software tool that has been internationally validated for robotics and automation research. It is a high-fidelity robot simulation tool based on a commercial game engine, which can be used as a bridge between the RoboCup Rescue Real Robot League and the RoboCup Rescue Simulation League [67]. Its main purpose is to provide an environment for the study of HRI, multi-robot coordination, true 3D mapping and exploration of environments by multi-robot teams, and the development of novel mobility modes for obstacle traversal, as well as practice and development for real robots that will compete in the physical league. Among its most relevant advantages are its capabilities for rendering video, representing robot automation and behavior, and accurately representing the remote environment that links the operator's awareness with the robot's behaviors. Today, USARSim includes several robot and sensor models (Figure 2.12), with the possibility of designing your own devices, as well as environmental models representing different disasters (Figure 2.13) and international standard arenas for research comparison and competition (see the section on standards). Robots in the simulator are used to develop typical rescue activities such as autonomously negotiating compromised and collapsed structures, finding victims and ascertaining their condition, producing practical maps of victim locations, delivering sustenance and communications to victims, identifying hazards, and providing structural shoring [18].

Furthermore, USARSim provides the infrastructure for comparing different developments in terms of score vectors [254]. The most important aspect of these vectors is that they are based upon the high-fidelity framework, so that the difference between implementations in simulation and on real robots remains minimal. As can be seen in Figure 2.14, the data collected from the sensor readings in the simulator (top) are very similar to those collected from the real version (bottom). This allows researchers to compare essentially the algorithms and intelligence behind their systems, working towards standardized missions in which they must find victims and extinguish fires while using communications and navigating efficiently.

On the other hand, according to [17], the main drawbacks are that the ability to create, import, and export textured models with arbitrarily complicated geometry in a variety of formats is of paramount importance, and that the ideal next-generation simulation engine should allow the simulation of tracked vehicles and sophisticated friction modelling. What is more,


Figure 2.11: RoboCup Rescue Concept. Image from [270].


Figure 2.12: USARSim Robot Models. Edited from [284, 67].

Figure 2.13: USARSim Disaster Snapshot. Edited from [18, 17].


Figure 2.14: Sensor Readings Comparison. Top: Simulation, Bottom: Reality. Image from [67].


it should be easy to add a new robot and to code novel components based on the available primitives, and backward compatibility with the standard USARSim interface should be assured. For complete details on this system, refer to [284].

2.2.3 Frameworks, Algorithms and Interfaces

As a barely explored research field, only a few contributions have been made directly to rescue robotics, but several other applications that serve search and rescue as well as other disaster response operations are being used in the field.

Control Architectures for Rescue Robots and Systems

Perhaps a good starting point is to note that until now there has been no single-robot or multi-robot architecture that serves as the default infrastructure for working with robots in disasters. In [3], the authors propose a generic architecture for rescue missions in which they divide the control blocks according to the level of intelligence or computational requirements. At the lowest level resides the sensor and actuator interfacing. Then, a reactive level is included, concerning basic robot behaviors for exploration and self-preservation, and essential sensing for self-localization. Next, an advanced reactive layer is included, concerning simultaneous localization and mapping (SLAM) and goal-driven navigation behaviors, as well as identification modules for target finding and feature classification. At the highest level are the learning capabilities and the coordination of the lower levels. Each level is linked via a user interface and a communication handler. Figure 2.15 shows a representation of the architecture. The relevance of this infrastructure is that it considers all the needs of a rescue scenario with an approach independent of robotic hardware, in a well-organized level distribution that enables researchers to focus on particular blocks while constructing the more complex system.
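The layer separation just described can be made concrete with a skeletal class per level. This is a hedged sketch of the architecture of [3]: the layer names follow the text, but the method names, the stubbed sensor data, and the simple one-pass dispatch are assumptions for illustration only.

```python
# Hedged sketch of the layered control architecture of [3]. Layer names
# follow the text; method names, stubbed data, and the dispatch order are
# illustrative assumptions, not the authors' implementation.

class SensorActuatorLayer:
    def read(self):  # lowest level: hardware interfacing (stubbed readings)
        return {"range": [1.0, 2.0], "pose": (0.0, 0.0, 0.0)}


class ReactiveLayer:
    def step(self, readings):  # exploration / self-preservation behaviors
        return "wander" if min(readings["range"]) > 0.5 else "avoid"


class AdvancedReactiveLayer:
    def step(self, readings):  # SLAM and goal-driven navigation (stub)
        return {"map_update": True, "goal": None}


class DeliberativeLayer:
    def coordinate(self, behavior, slam_state):  # learning + coordination
        return {"behavior": behavior, **slam_state}


def control_cycle():
    """One top-down pass: sense, react, map, coordinate."""
    hw, reactive = SensorActuatorLayer(), ReactiveLayer()
    advanced, deliberative = AdvancedReactiveLayer(), DeliberativeLayer()
    readings = hw.read()
    return deliberative.coordinate(reactive.step(readings),
                                   advanced.step(readings))
```

The point of the separation is the one the text makes: each layer can be developed and replaced independently of the robot hardware beneath it.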

Navigation and Mapping

Concerning the navigation of mobile robots, a huge amount of algorithms can be found in the literature for a wide variety of locomotion mechanisms, including different mobile modalities. Among the modern classic approaches are the behavior-based works inspired by R. Brooks' research [49, 50, 51, 54, 52, 53], which led to representative contributions that can be summarized in Table 2.2.

Moreover, more recent research developments include works such as automated exploration and mapping. The main goal in robotic exploration is to minimize the overall time for covering an unknown environment. It has been widely accepted that the key to efficient exploration is to carefully assign robots to sequential targets until the environment is covered, the so-called next-best-view (NBV) problem [115]. Typically, those targets are called frontiers: boundaries between open and unknown space that are gathered from range sensors and sophisticated mapping techniques [291, 127]. In [57, 58] a strategy is presented that became relevant because it was one of the first developments that did not use landmarks and sonars (as in [241]) but relied on the information from a laser scanner. The idea is to pick up the sensor readings, determine the frontiers and select the best one to navigate to.


Figure 2.15: Control Architecture for Rescue Robot Systems. Image from [3].


Table 2.2: A classification of robotic behaviors. Based on [178, 223].

Relative motion requirements | Multi-robot behaviors

Relative to other robots | Formations [220, 263, 264, 23, 24], flocking [170, 172], natural herding, schooling, sorting, clumping [28, 172], condensation, aggregation [109, 172], dispersion [183, 172].

Relative to the environment | Search [104, 105, 172], foraging [22, 172], grazing, harvesting, deployment [128], coverage [59, 39, 89, 226, 104], localization [191], mapping [117], exploration [31, 172], avoiding the past [21].

Relative to external agents | Pursuit [146], predator-prey [64], target tracking [27].

Relative to other robots and the environment | Containment, orbiting, surrounding, perimeter search [88, 168].

Relative to other robots, external agents, and the environment | Evasion, tactical overwatch, soccer [260].

To do this, the authors take the readings that indicate the maximum laser range and allocate their indexes in a vector. Once the frontiers have been determined, costs and utilities are calculated according to Equations 2.1 and 2.2. It is supposed that for every robot i and frontier t there exist a utility U_t and a cost V_t^i. The utility is calculated according to a probability P, which is subtracted from the initial utility value for the neighboring frontiers, within a distance d smaller than a user-defined max.range, that have previously been assigned to other robots. The cost is the calculated distance from the robot's position to the frontier cell, taking into consideration possible obstacles and a user-defined scaling factor β. So, maximizing the utility minus the cost is a strategy with complexity O(i²t) that leads to successful results, as shown in Figure 2.16. This approach has been demonstrated in simulation, with real robots, and with interesting variations in the formulations of costs and utilities, such as favoring targets that impact the robots' localization less, compromise communications less, or fulfill multiple criteria according to the current situation or local perceptions [256, 232, 10, 112, 295, 43, 101, 253, 240, 60, 280, 169, 25]. What is more, it has been extended to strategies that segment the environment by matching frontiers to segments, leading to O(n³) complexity, where n is the larger of the number of robots and the number of segments [290]; and even to strategies that learn from the structural composition of the environment, for example to choose between rooms and corridors [259].

(i, t) = argmax_{(i', t')} (U_{t'} − β · V_{t'}^{i'})    (2.1)

U(t_n | t_1, ..., t_{n−1}) = U_{t_n} − Σ_{i=1}^{n−1} P(‖t_n − t_i‖)    (2.2)
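The greedy assignment defined by Equations 2.1 and 2.2 can be sketched as follows; the linear drop-off used for P, the variable names, and the tie-breaking order are assumptions, since the original formulation leaves the probability function generic.

```python
import math

def assign_frontiers(robots, frontiers, costs, beta=1.0, max_range=2.0):
    """Greedy frontier assignment following Equations 2.1 and 2.2.

    robots    : list of robot ids
    frontiers : list of (x, y) frontier cells
    costs     : costs[i][t] = travel cost V_t^i from robot i to frontier t
    Returns {robot_id: frontier_index}. The discount P(d) is modeled as a
    linear drop-off inside max_range (an assumption for this sketch).
    """
    utility = {t: 1.0 for t in range(len(frontiers))}
    assignment = {}
    unassigned = set(robots)
    while unassigned:
        # Eq. 2.1: pick the pair (i, t) maximizing U_t' - beta * V_t'^i'
        i, t = max(((i, t) for i in sorted(unassigned)
                    for t in range(len(frontiers))),
                   key=lambda p: utility[p[1]] - beta * costs[p[0]][p[1]])
        assignment[i] = t
        unassigned.remove(i)
        # Eq. 2.2: discount utilities of frontiers near the assigned target
        for t2 in range(len(frontiers)):
            d = math.dist(frontiers[t], frontiers[t2])
            if d < max_range:
                utility[t2] -= 1.0 - d / max_range
    return assignment
```

With two robots and a cluster of nearby frontiers, the discount step pushes the second robot towards the distant frontier instead of the cluster already claimed by the first.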

Another strategy for multi-robot exploration has resided in the implementation of coverage algorithms [86]. These algorithms usually assign target positions to the robots according


Figure 2.16: Coordinated exploration using costs and utilities. Frontier assignment considering: a) only costs; b) costs and utilities; c) resulting paths of three robots. Edited from [58].

to their locality and use different motion control strategies to reach, and sometimes remain in, the assigned position. Also, when the knowledge of the environment is enough to have an a-priori map, the implementation of Voronoi tessellations [15] is very typical. Relevant literature on these can be found in [89, 7, 226].

The previous examples of multi-robot exploration share an important drawback: either they need an a-priori map or their results are highly compromised in dynamic environments. So, another attractive example of multi-robot exploration, one that does not rely on a fixed environment, is presented in [168]. In their work, the authors make use of simple behaviors such as reach.frontier, avoid.teammate, keep.going, stay.on.frontier, patrol.clockwise and patrol.counterclockwise. By coordinating those behaviors with a finite state automaton, they are able to conceive a fully decentralized algorithm for multi-robot border patrolling, which provided satisfactory results in extensive simulation tests and in real robot experiments. As can be appreciated in Figure 2.17, the states and triggering actions follow a very simple approach that results in efficient multi-robot operations.
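The coordination of such behaviors through a finite state automaton can be sketched as a transition table; the event names below are illustrative assumptions, not the exact triggering conditions of [168].

```python
# Transition table for a patrol supervisor in the spirit of [168]; the
# behavior names come from the text, the events are assumed for this sketch.
TRANSITIONS = {
    ("reach.frontier", "frontier_reached"):        "stay.on.frontier",
    ("reach.frontier", "teammate_close"):          "avoid.teammate",
    ("avoid.teammate", "teammate_gone"):           "keep.going",
    ("keep.going", "frontier_reached"):            "stay.on.frontier",
    ("stay.on.frontier", "start_patrol"):          "patrol.clockwise",
    ("patrol.clockwise", "teammate_close"):        "patrol.counterclockwise",
    ("patrol.counterclockwise", "teammate_close"): "patrol.clockwise",
}

class PatrolSupervisor:
    """Minimal finite state automaton over the patrolling behaviors."""
    def __init__(self, initial="reach.frontier"):
        self.state = initial

    def on_event(self, event: str) -> str:
        """Fire one event; unknown events leave the state unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Reversing patrol direction whenever a teammate comes close is what lets the robots spread along the border without any central coordination.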

Summarizing the autonomous exploration contributions, it can be stated that the more sophisticated works try to coordinate robots such that they do not tend to move toward the same unknown area, while keeping a balanced target location assignment with fewer interferences between robots. Furthermore, recent works tend to include communications, as well as other behavioral strategies for better MRS functionality, in the target allocation process. Nevertheless, the reality is that most of these NBV-based approaches still fall short of presenting an MRS that is reliable and efficient in exploring highly uncertain and unstructured environments, robust to robot failures and sensor uncertainty, and effective in exploiting the benefits of using a multi-robot platform.

Concerning map generation, it is acknowledged that mapping unstructured and dynamic environments is an open and challenging problem [33]. Several approaches exist, among which some generate abstract, topological maps, whereas others tend to produce more


Figure 2.17: Supervisor sketch for MRS patrolling. Image from [168].


detailed, metric maps. In this mapping problem, robot localization appears to be among the most challenging issues, even though there have been impressive contributions to solve it [274, 94]. Additionally, when the mapping entities are multiple robots, there are other important challenges such as map merging and multi-robot global localization. Recent research works as in [66, 33, 225] use different stochastic strategies to develop appropriate map merging from the readings of laser scanners and odometry, so as to produce a detailed, metric map based upon occupancy grids. In such a grid, a numerical value is assigned to each cell with respect to what has been perceived by the sensors from the current 2D pose (x, y, θ). These numerical values typically indicate, with a certain probability, the existence of an obstacle, an open space, or an unknown area. Figure 2.18 shows the algorithm for defining the occupancy grid that the authors use as the mapping procedure in [33]. Next, Figure 2.19 shows the graphical equivalent of the occupancy grid in a grayscale format, in which white is open space, black is an obstacle, and the gray shades are unknown areas [225]. In general, a very complete source on exploration and metric mapping can be found in [273].
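A common way to maintain such an occupancy grid is a Bayesian log-odds update per cell; the sketch below uses an assumed inverse sensor model (hit probability 0.7) and is not necessarily the exact procedure of Figure 2.18.

```python
import math

# Assumed inverse sensor model: a beam hit suggests p=0.7 occupied,
# a pass-through suggests p=0.3 occupied.
L_OCC = math.log(0.7 / 0.3)
L_FREE = math.log(0.3 / 0.7)

def update_cell(logodds: float, hit: bool) -> float:
    """Bayesian log-odds update of one grid cell (0.0 means unknown)."""
    return logodds + (L_OCC if hit else L_FREE)

def probability(logodds: float) -> float:
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))
```

Repeated hits on the same cell accumulate additively in log-odds space, which is why a few consistent observations quickly turn an unknown cell (probability 0.5) into a confident obstacle.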

Figure 2.18: Algorithm for determining occupancy grids. Image from [33].

On the other hand, other researchers work on the generation of different strategic maps that can better fit the necessities and constraints of a rescue mission. In [164], researchers show their development of behavioral trace maps (BTM), which they argue are representations of map information that are richer in content than traditional topological maps but less memory- and computation-intensive than SLAM or metric mapping. As shown in Figure 2.20, the maps represent a topological linkage of the behaviors used, from which a human operator can interpret what the robot has confronted in each situation, detailing the environment without the need for precise numerical values.

Finally, as sensor costs are reduced and collecting precise 3D information from an environment becomes possible, researchers have been able to produce more interesting 3D mapping solutions. In [20] this kind of mapping has been demonstrated using the


Figure 2.19: Multi-Robot generated maps in RoboCup Rescue 2007. Image from [225].

Figure 2.20: Behavioral mapping idea. Image from [164].


USARSim environment and a mobile robot with a laser scanner mounted on a tilt device, which enables three-dimensional readings. This work is interesting because the authors' main intention is to provide a working framework for testing 3D mapping algorithms and studying their possibilities. Also, as shown in Figure 2.21, the simulated robot is highly similar to its real counterpart, thus providing the opportunity for transparency and easy migration of code from simulated environments to the real world. In the same figure, on the right side there is a map resulting from the sensor readings, in which the color codes are as follows: black, obstacles in the map generated with the 2D data; white, free areas in the map generated with the 2D data; blue, unexplored areas in the map generated with the 2D data; gray, obstacles detected by the 3D laser; green, solid ground free of holes and 3D obstacles (traversable areas).

Figure 2.21: 3D mapping using USARSim. Left) Kurt3D and its simulated counterpart. Right) 3D color-coded map. Edited from [20].

Another example of 3D mapping using laser scanners is the work in [205], in which researchers report the results of map building in the RoboCup Rescue Real Robot League 2009. Nevertheless, the most recent approaches follow the trend of implementing the Microsoft Kinect [233], a sensing device that interprets 3D scene information from continuously-projected infrared structured light and an RGB camera, with a multi-array microphone, so as to provide full-body 3D motion capture, facial recognition and voice recognition capabilities. Also, for developers there is a software development kit (SDK) [233], which has been released as open source for accessing all the device capabilities. Until now there are only a few formal literature reports on the use of the Kinect since it is very recent, but taking a look at popular internet search engines is a good way to find the state of the art on its robotics usage (tip: try searching for “kinect robot mapping”).
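As a sketch of what such a depth sensor provides, a single depth pixel can be back-projected to a 3D point with the pinhole camera model; the intrinsic parameters below are typical assumed values for a 640x480 depth stream, and a real device must be calibrated.

```python
def depth_to_point(u, v, depth_m,
                   fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project one depth pixel (u, v) to a 3D camera-frame point.

    fx, fy, cx, cy are assumed pinhole intrinsics (focal lengths and
    principal point in pixels) for a 640x480 Kinect-style depth image.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

Applying this to every pixel of a depth frame yields the point cloud that 3D mapping pipelines consume.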

Recognition and Identification

Examples of detection and recognition contributions vary from object detection to more complex situational recognition. As for object detection, in [116] researchers make use of scale-invariant feature transform (SIFT) detectors [163] in the so-called speeded up robust features


(SURF) algorithm for recognizing danger signs. Even though their approach is a very simple usage of already developed algorithms, the implementation showed an appropriate application for efficient recognition in rescue missions. In addition, other researchers have developed precise facial recognition implementations in the USARSim environment [20] by using the famous work on robust real-time face recognition in [279]. This simulated face recognition has some drawbacks with false positives, as can be appreciated in Figure 2.22. The important point is that both danger sign and human face recognition have been successfully implemented and thus seem to be useful for USAR operations.

Figure 2.22: Face recognition in USARSim. Left) Successful recognition. Right) False positive. Image from [20].

Furthermore, in the process of identifying human victims and differentiating them from human rescue teams, other researchers have made important contributions. In [90], researchers present a successful algorithm for identifying human bodies by performing what they call robust “pedestrian detection”. Using a strategy called histograms of oriented gradients (HoG) and an SVM classifier in the process depicted in Figure 2.23, they are able to identify humans with impressive results. Figure 2.24 shows the pedestrian detection that can be achieved with the algorithm. What is more, this algorithm has been extended and tested for recognizing other objects such as cars, buses, motorcycles, bicycles, cows, sheep, horses, cats and dogs. The challenge resides in that, in rescue situations, recognition must be done on unstructured images. Also, in the case of humans, many of those around are not precisely victims or desired targets for detection. So, an algorithm like this must be aided in some way to distinguish victims from non-victims.
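The core of the HoG descriptor is an orientation histogram computed per image cell. The sketch below implements only that building block, omitting the block normalization and SVM stages of the full pipeline (in practice, OpenCV's ready-made HOGDescriptor with its default people detector covers the complete method).

```python
import numpy as np

def hog_cell_histogram(cell, bins=9):
    """Unsigned-orientation gradient histogram of one image cell, the
    basic building block of HoG [90]; normalization and classification
    stages are omitted in this simplified sketch."""
    gy, gx = np.gradient(cell.astype(float))        # image gradients
    magnitude = np.hypot(gx, gy)                    # gradient strength
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(bins)
    bin_width = 180.0 / bins
    for m, a in zip(magnitude.ravel(), angle.ravel()):
        hist[int(a // bin_width) % bins] += m       # magnitude-weighted vote
    return hist
```

A vertical edge in the cell, for example, produces purely horizontal gradients and therefore votes almost entirely into the 0-degree bin.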

Figure 2.23: Human pedestrian vision-based detection procedure. Image from [90].

Towards finding a solution for distinguishing human victims from non-victims, an interesting posture recognition and classification approach is proposed in [207]. This algorithm helps to detect whether the human body is in a normal action such as walking, standing or sitting, or in an abnormal event such as lying down or falling. The authors used a dataset of videos and images for teaching


Figure 2.24: Human pedestrian vision-based detection results. Image from hal.inria.fr/inria-00496980/en/.

their algorithm the actions or postures that represent a normal action. Then, every recognized posture that is outside the learned set is considered an abnormal event. Also, a stochastic method is used as an adaptivity feature for determining the most likely posture and then classifying it. Figure 2.25 shows the real-time results for a set of snapshots from a video signal. As can be seen, recognition ranges from green (normal) and yellow (not quite normal) actions, to orange (possibly abnormal) and red (abnormal) actions; the black bar for the normal actions indicates the probability of matching the learned postures, so when it is null an abnormal yellow, orange or red action must have been recognized.
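The classification step can be sketched as picking the maximum-likelihood learned posture and flagging an abnormal event when no posture matches well; the 0.3 threshold and the input format are illustrative assumptions, not values from [207].

```python
def classify_posture(likelihoods, threshold=0.3):
    """Pick the most likely learned posture, or flag an abnormal event.

    likelihoods maps posture name -> match probability (conceptually the
    black bar in Figure 2.25); when even the best match falls below the
    assumed threshold, the observation lies outside the learned set.
    """
    best = max(likelihoods, key=likelihoods.get)
    if likelihoods[best] < threshold:
        return ("abnormal", likelihoods[best])
    return (best, likelihoods[best])
```

A confidently matched posture is returned by name, while a uniformly poor match, such as a person lying down, is reported as abnormal.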

Figure 2.25: Human behavior vision-based recognition. Edited from [207].

In this way, the previously described use of SIFT and SURF for object detection, the human face and body recognition algorithms, and this last strategy for detecting human behavior can all be of important aid for the visual recognition of particular targets in a rescue mission


such as victims, rescuers, and hazards. Additionally, there are also other researchers focusing on the use of vision-based recognition and detection for navigational purposes. An impressive recent work presented in [103] demonstrates how, using stereo vision with positioning sensors such as GPS, a robot can learn and repeat paths. Figure 2.26 shows the implemented procedure: they start with a teach pass in which the robot records the stereo images and extracts their main features using the SURF algorithm, obtaining the stereo image coordinates, a 64-dimensional image descriptor, and the 3D position of the features, which are input to a localization system to create a traversal map. Once the map is built, they run the repeat pass, in which the mobile robot follows the same mapped path by controlling its movements according to the captured visual scenes and the localization provided by visual odometry and positioning sensors. Figure 2.27 presents the results of one teach pass and seven repeat passes made while building the route. All repeat passes were completed fully autonomously despite significant non-planar camera motion and the non-GPS localization sections shown in blue. So, even when full autonomy is not quite the short-term goal, this type of contribution allows human operators to be confident in the robot's capabilities and thus to focus on more important activities thanks to the augmented autonomy.

Figure 2.26: Visual path following procedure. Edited from [103].

Figure 2.27: Visual path following tests in 3D terrain. Edited from [103].


Last but not least for recognition and identification, there is a more directly rescue-oriented application presented in [80], in which researchers propose robot-assisted mass-casualty triage, or urgency prioritization, by means of recognizing the victims' health status. They argue for the implementation of a widely accepted triage system called Simple Triage and Rapid Treatment (START), which provides a simple algorithm for sorting victims on the basis of four signs: mobility, respiratory frequency, blood perfusion, and mental state. For mobility, moving commands are produced to see if the victim is able to follow them, in which case the victim is physically stable and mentally aware. For respiratory frequency, if a victim is not breathing it is a sign of death; if the victim is breathing more than 30 breaths per minute they are probably in shock; otherwise they are considered stable. For blood perfusion, the victim's radial pulse must be checked to determine whether blood irrigation is normal or has been affected. For mental state, commands are produced to see if the victim can follow them or whether there is a possible brain injury. According to the results of the assessment, victims can be classified into four categories: minor (green), indicating the victim can wait to receive treatment and even help other victims; delayed (yellow), indicating the victim is not able to move but is stable and can also wait for treatment; immediate (red), indicating the victim can be saved only if rapidly transported to medical care facilities; and expectant (black), for victims who have low chances of survival or are dead; refer to Figure 2.28. The researchers' idea is to develop robots that can assist in rescue missions by executing the START method, so as to help rescuers reach inaccessible victims and recognize their urgency, but this work is still under development. The main challenges reside in the robot's capabilities to interact with humans (physically and socially), its range of action and fine control of movements, sensor placement and design, compliant manipulators, and the human acceptance of a robotic unit intending to help.
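The START sorting logic described above can be sketched as a simple decision function; this is a simplification for illustration (for instance, it omits the airway-repositioning step of the full protocol) and not the authors' implementation.

```python
def start_triage(walking, breathing, resp_rate, radial_pulse, obeys_commands):
    """Classify one victim following the START flow described in the text.

    walking, breathing, radial_pulse, obeys_commands are booleans;
    resp_rate is breaths per minute. Simplified sketch: the full
    protocol also tries repositioning the airway before declaring death.
    """
    if walking:
        return "minor"        # green: can wait and even help others
    if not breathing:
        return "expectant"    # black: low or no chance of survival
    if resp_rate > 30:
        return "immediate"    # red: probably in shock, rapid transport
    if not radial_pulse:
        return "immediate"    # red: blood perfusion compromised
    if not obeys_commands:
        return "immediate"    # red: possible brain injury
    return "delayed"          # yellow: stable but unable to move
```

The ordering matters: mobility filters out the green category first, then each remaining vital sign can only escalate the victim towards the red or black categories.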

Teleoperation and Human-Robot Interfaces

As for teleoperation, several works have considered the simple approach of mapping joystick commands to motor activations. Nevertheless, in [36] the authors provide a complete framework for teleoperating robots for safety, security and rescue, considering important aspects such as behavior and mission levels, in which a single operator triggers short-time autonomous behaviors and supervises a whole team of autonomously operating robots, respectively. This means that they consider significant amounts of heterogeneous data to be transmitted between the robots and the adaptable operator control unit (OCU), such as video, maps, goal points, victim data and hazard data, among others. With this information the authors provide not only low-level motion teleoperation but also higher behavioral and goal-driven teleoperation commands; refer to Figure 2.29. This provides an environment with better robot autonomy and less user dependence, thus allowing operators to control several units with relative ease.

Moreover, the authors in [209, 36] enhance operations not only by improving teleoperation but also by providing augmented autonomy through a very complete, adaptable user interface (UI), such as the one presented in Figure 2.30. Their design follows general guidelines from the literature, based on intensive surveys of existing similar systems as well as evaluations of approaches in the particular domain of rescue robots. As can be seen, it provides the sensor readings (orientation, video, battery, position and speed) for the selected robot in the list of active robots, as well as the override commanding area for the manual triggering of behaviors


Figure 2.28: START Algorithm. Victims are sorted in: Minor, Delayed, Immediate and Expectant; based on the assessment of: Mobility, Respiration, Perfusion and Mental Status. Image from [80].

Figure 2.29: Safety, security and rescue robotics teleoperation stages. Image from [36].


or mission changes. In the center it includes the global representation of the information collected by the robots. It also includes a list of the victims that have been found during the mission. In general, this UI allows operators to access the local perceptions of every robot at any time, as well as to have a global map of the gathered information, thus providing better situational awareness and more tools for better decision making. What is more, the interface can be tuned with parameters and rules for automatically changing its display and control functions based on relevance measures, the current robot locality, and user preferences [35] (e.g., a non-selected robot has found a victim, so the display changes automatically to that robot). Their framework has proved its usefulness in different field tests, including USARSim and real robot operations, demonstrating that it is indeed beneficial to use a multi-robot network supervised by a single operator; this interface has led Jacobs University to the best results in RoboCup Rescue in recent years. Other similar interfaces have also demonstrated successful teleoperation of large multi-robot teams (24 robots) in USARSim [20].

Figure 2.30: Interface for multi-robot rescue systems. Image from [209].

Besides the presented characteristics, researchers in [292] recommend the following aspects as guidelines for designing UIs (or OCUs) for rescue robotics, looking towards standardization:

• Multiple image displays: it is important to include not only the robot's eye view but also an image that shows the robot itself and/or its surroundings, for ease of understanding where the robot is. Refer to Figure 2.31 a).

• Multiple environmental maps: if the environmental map is available in advance it is crucial to use it, even though it may have changed due to the disaster. If it is not available,


a map must be drawn in parallel to the search display. Also, it is important to have not only a global map but a local map for each robot. The orientation of the maps must be selected such that the operator's burden of mental rotation is minimized: the global map should be north-up in most cases and the local map should be consistent with the camera view. Refer to Figure 2.31 b).

• Window arrangement: the time to interpret information is crucial, so every image needs to be shown at the same moment. Rearranging and overlapping windows are key aspects to avoid.

• Visibility of display devices: the main interest of rescue robotics is to deploy robots within the 72 golden hours; this implies changing daylight conditions that must be considered when choosing display devices, so that visualization quality is good at any time of day.

• Pointing devices: the ideal pointing device for working with the control units is a touchscreen.

• Resistance of devices: as the intention is to use the devices outdoors, they should ideally be water- and dust-proof.

Figure 2.31: Desired information for rescue robot interfaces: a) multiple image displays, b) multiple map displays. Edited from [292].

Finally, another important work to mention on teleoperation and user interfaces is the one presented in [186, 185]. In these works, researchers make use of novel touch-screen devices for monitoring and controlling teams of robots in rescue applications. They have created a dynamically resizing, ergonomic, multi-touch controller called the DREAM controller. With this controller the human operator can control both the camera mounted on a mobile robot and the driving of the robot. It has particular features such as control of the pan-tilt unit (PTU) and automatic direction reversal (ADR), which toggles between driving the robot forwards and backwards. What is more, the same touch screen displays the robot's camera views and the generated map. The operator can also interact with this information by zooming and servoing, among other functions. Figure 2.32 shows the DREAM controller in detail on the left and the complete touch-screen interface device on the right. The main drawback of this interface is that its visibility is not optimal outdoors.


Figure 2.32: Touch-screen technologies for rescue robotics. Edited from [185].

Full Autonomy

In the end, it is important to remember that the main goal of rescue robotics software is to provide an integrated solution with fully autonomous, intelligent capabilities. Among the main contributions there is the work in [130], in which researchers present different experiments with teams of mobile robots for autonomous exploration, mapping, deployment and detection. Even though the environment is not as adverse as a rescue scenario, the experiments concerned integral operations with multiple heterogeneous robots (Figure 2.33) that explore a complete building, map the environment and deploy a sensor network covering as much open space as possible. For exploration they implement a frontier-based algorithm similar to the one previously described from [58]. For mapping, each robot uses SLAM to maintain an independent local pose estimate, which is sent to the remote operator and processed through a second SLAM algorithm to generate consistent global pose estimates for all robots. In between, an occupancy grid map combining data from all robots is generated and further used for the deployment operations. The deployment follows planned sensor positions generated to meet several criteria, including minimizing pathway obstruction, achieving a minimum distance between sensor robots, and maximizing visibility coverage. The researchers demonstrated successful operations with complete exploration, mapping and deployment, as shown in Figure 2.34.

Another example exhibiting full autonomy, but in a more complex scenario, is the work presented in [131]. In their work, researchers integrated several component technologies developed towards the establishment of a framework for deploying an adaptive system of heterogeneous robots for urban surveillance. With major contributions in


Figure 2.33: MRS for autonomous exploration, mapping and deployment. a) the complete heterogeneous team; b) sub-team with mapping capabilities. Image from [130].

Figure 2.34: MRS result for autonomous exploration, mapping and deployment. a) original floor map; b) robots' collected map; c) autonomous planned deployment. Edited from [130].


cooperative control strategies for the search, identification and localization of targets, the team of robots presented in Figure 2.35 is able to monitor a small village, and search for and localize human targets, while ensuring that the information from the team is available to a remotely located control unit. As an integral demonstration, the researchers developed a task with minimal human intervention in which all the robots start from a given position and begin to look for a human with a specified color uniform. When the human has been found, an alert is sent to the main operator control unit and images containing the human target are displayed. In between the visual recognition and the exploration of the environment, 3D mapping is carried out. A graphical representation of this demonstration and its results is shown in Figure 2.36. The most interesting aspect of this development is that the robots had different characteristics in software and hardware, and the developers came from different universities, implying the use of different control strategies. Nevertheless, they successfully demonstrated that diverse robots and robot control architectures can be reliably aggregated into a team with a single, uniform operator control station, able to perform tightly coordinated tasks such as distributed surveillance and coordinated movements in a real-world scenario.

Figure 2.35: MRS for search and monitoring: a) Piper J3 UAVs; b) heterogeneous UGVs. Edited from [131].


Figure 2.36: Demonstration of integrated search operations: a) robots at initial positions, b) robots searching for the human target, c) alert of target found, d) display of the nearest UGV's view of the target. Edited from [131].

A final software contribution to mention resides in the works from Jacobs University (formerly IUB) in the RoboCup Rescue Real Robot League, in which researchers have demonstrated one of the most relevant teams over the latest RoboCup years [19]. In [224], researchers present a version of an integrated hardware and software framework for the autonomous operation of an individual rescue robot. The software basically consists of two modules: a server program running on the robot, and a control unit running at the operator station. The server program runs several threads, among which the sensor thread is responsible for managing information from the sensors, the mapping thread develops occupancy grid mapping (2D and 3D) and a SLAM algorithm, and the autonomy thread analyzes sensor data and generates the appropriate movement commands. This autonomy thread is based upon robotic behaviors that are triggered according to the robot's perception and the current, detected, pre-defined situation (obstacle, dangerous pitch/roll, stuck, victim found, etc.). Each of these situations has its own level of importance and flags for triggering behaviors. At the same time, each behavior has its own priority. Thus, the most suitable actions are selected according to a given local perception, for which the most relevant detected situation triggers a set of behaviors that are coordinated according to their priorities. Among the possible actions are: avoid an obstacle, rotate towards the largest opening, back off, stop and wait for confirmation when a victim has been detected, and motion-plan towards unexplored areas according to the generated occupancy grid. With this simple behavioral strategy, the researchers are able to deal with the different problems that arise in the test arenas and perform efficiently in locating victims and generating maps of the environment.
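The situation-to-behavior arbitration described for the autonomy thread can be sketched as a priority lookup; the situation names come from the text, while the numeric priorities and action lists are illustrative assumptions, not the exact configuration of [224].

```python
# Priority-based behavior arbitration in the spirit of [224].
# Higher number = more important situation (priorities assumed).
SITUATION_BEHAVIORS = {
    "victim_found":    (4, ["stop", "wait_for_confirmation"]),
    "dangerous_pitch": (3, ["back_off"]),
    "stuck":           (2, ["back_off", "rotate_to_largest_opening"]),
    "obstacle":        (1, ["avoid_obstacle"]),
}

def select_actions(detected_situations):
    """Pick the behaviors of the most important detected situation;
    default to frontier-driven exploration when nothing is flagged."""
    if not detected_situations:
        return ["plan_towards_unexplored"]
    top = max(detected_situations, key=lambda s: SITUATION_BEHAVIORS[s][0])
    return SITUATION_BEHAVIORS[top][1]
```

Because a detected victim outranks an obstacle, the robot stops and waits for operator confirmation even while the obstacle-avoidance flag is raised.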

Page 86: PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response

CHAPTER 2. LITERATURE REVIEW – STATE OF THE ART 68

To summarize this section: we have presented information concerning important details in disaster engineering and information management, research software environments such as USARSim for testing diverse algorithms, and different frameworks, algorithms and interfaces useful for USAR operations. We have presented control architectures specially designed for rescue robots that have been proposed in the literature. Additionally, we included descriptions of relevant works in the three areas contributing most to rescue operations: navigation and mapping, recognition and identification, and teleoperation and human-robot interfaces. Finally, projects ranging from minimal human intervention to fully autonomous robot operations were described. The next section is dedicated to the major contributions concerning physical robot design proposed for rescue robotics.

2.3 Rescue Robotics Relevant Hardware Contributions

Having stated the principal advances in software for rescue robotics, it is now appropriate to include information on the robotic units that have demonstrated successful operations in terms of mobility, control, communications, sensing and other design guidelines. Some of the robots included herein have been applied in real-world disasters and others have been designed for the RoboCup Rescue Real Robot League. Both types reflect design aspects that have been stated by consensus in the relevant literature on the topic and which are included in Table 2.3.


Table 2.3: Recommendations for designing a rescue robot [37, 184, 194, 33, 158, 201, 267].

Characteristic / Description

Small

Even though design size depends highly on the robot modality (air, water, ground, etc.), in general the robot should be small in dimension and mass so as to be able to enter areas of a search environment that are typically inaccessible to humans. It is also useful for the robot to be man-packable for easier deployment and transportation.

Expendable

An important point of using robots in disaster scenarios is to avoid human exposure by sending robotic surrogates, which are themselves exposed to various challenges that will compromise their integrity. Hence, cheap expendable robots are required in order to keep replacement costs low and the approach affordable.

Usable

This means that human-robot interfaces must be user-friendly and that no extensive training or special equipment (such as power or communication links, among others) is required for operating the robots. Communications should be wireless and fast enough for transmitting real-time video and audio.

Hazards-protected

The rescue environment implies several hazards such as water, dust, fire, mud, or other contamination/decontamination agents that could adversely affect the robots and control units. Robotic equipment must therefore be protected in some way from these hazards. The use of safety ropes and communication tethers is also appropriate in terms of robot protection.

Instrumentation

Robots must have at least color and FLIR or black-and-white video cameras, two-way audio (to enable rescuers to talk with a survivor), control units capable of handling computer vision algorithms and perceptual cueing, and the possibility of hazardous-material, structural and victim assessments. It is typical to have robots equipped with laser scanners, stereo cameras, 3D ranging devices, CO2 sensors, contact sensors, force sensors, infrared sensors, encoders, gyroscopes, accelerometers, magnetic compasses, and other pose sensors.

Mobility

To date there is no known rubble terrain characterization that indicates the required clearances or specific mobility features. Nevertheless, any robot should take into consideration the possibility of flipping over, so invertibility (no side-up) or self-righting capabilities are desirable.


Some relevant ground robots that have been deployed in real major disasters, have won a category at RoboCup Rescue over the years, or are simply among the most novel ideas in rescue robot design are presented in Figures 2.37 to 2.63. Along with the picture of each robot, the details of its design are presented. It must be clear that the characteristics and capabilities of a robot are highly dependent on the application scenario, and thus there is no single all-mighty, best robot among those presented herein [204, 201]. All of them are developed with essential exploration (mobility) purposes in adverse terrains. Some include mapping capabilities, victim recognition systems, and even manipulators and camera masts. All of them use electrical power sources, and their weight and dimensions are considered man-packable.

Miniature Robots

Figure 2.37: CRASAR MicroVGTV and Inuktun [91, 194, 158, 201].

Figure 2.38: TerminatorBot [282, 281, 204].


Figure 2.39: Leg-in-Rotor Jumping Inspector [204, 267].

Figure 2.40: Cubic/Planar Transformational Robot [266].

Wheeled Robots

Figure 2.41: iRobot ATRV - FONTANA [199, 91, 158].


Figure 2.42: FUMA [181, 245].

Figure 2.43: Darmstadt University - Monstertruck [8].

Figure 2.44: Resko at UniKoblenz - Robbie [151].


Figure 2.45: Independent [84].

Figure 2.46: Uppsala University Sweden - Surt [211].

Tracked Robots

Figure 2.47: Taylor [199].


Figure 2.48: iRobot Packbot [91, 158].

Figure 2.49: SPAWAR Urbot [91, 158].

Figure 2.50: Foster-Miller Solem [91, 194, 158].


Figure 2.51: Shinobi - Kamui [189].

Figure 2.52: CEO Mission II [277].

Figure 2.53: Aladdin [215, 61].


Figure 2.54: Pelican United - Kenaf [204, 216].

Figure 2.55: Tehzeeb [265].

Figure 2.56: ResQuake Silver2009 [190, 187].


Figure 2.57: Jacobs Rugbot [224, 85, 249].

Figure 2.58: PLASMA-Rx [87].

Figure 2.59: MRL rescue robots NAJI VI and NAJI VII [252].


Figure 2.60: Helios IX and Carrier Parent and Child [121, 180, 267].

Figure 2.61: KOHGA: Kinesthetic Observation-Help-Guidance Agent [142, 181, 189, 276].

Figure 2.62: OmniTread OT-4 [40].


Figure 2.63: Hyper Souryu IV [204, 276].

As can be appreciated, the vast majority are tracked robots. According to the literature consensus, this is due to their high capability for confronting obstacles and their larger payload capacities. Nevertheless, the cost of these benefits resides in energy consumption and in overall robot weight, two aspects in which a wheeled robot tends to be more efficient. Complementary teams of robots and composite re-configurable serpentine systems are also among the most recent trends in rescue robots.

Finally, other robots worth mentioning include the Foster-Miller Talon, a tracked differential robot with flippers and an arm similar to the Solem's; the Remotec ANDROS Wolverine V-2, a tracked robot for bomb disposal and slow-speed, heavy-weight operations; the RHex hexapod, which is very proficient in different terrains and includes waterproofing and swimming capabilities [204]; the iSENSYS IP3 and other medium-sized UAVs for surveillance and search [181, 204, 228]; muFly and µDrones, fully autonomous micro helicopters for search and monitoring purposes [247, 157]; and several other bigger, commercial robots designed for fire-fighting, search and rescue [158, 204, 267, 201, 213]. Multimillion-dollar novel designs with military purposes are also worth mentioning, such as the Predator UAV, the T-HAWK UAV, and the Bluefin HAUV UUV, among others [287]. Refer to Figure 2.64 for some of the robots mentioned.

Besides robot designs, humanoid-modelled victims have been proposed for standard testing purposes [267]. There are also trends towards adapting the environments themselves through networked robots and devices [244, 14]. The intention of these trends is to simplify information collection, such as mapping, recognition and prioritization of exploration sites, by implementing ubiquitous devices (refer to section 2.2.1) that interact with rescue robotic systems when a disaster occurs.

2.4 Testbed and Real-World USAR Implementations

At this point, robotic units and software contributions have been described. This section now includes information on the use of rescue robots for developing disaster response operations. For ease of understanding, the described systems are classified into controlled testbeds and real-world implementations. The former constitutes mainly RoboCup Rescue Real Robot League equivalent developments, and the latter the most relevant uses of robots in recent disastrous events.


Figure 2.64: Rescue robots: a) Talon, b) Wolverine V-2, c) RHex, d) iSENSYS IP3, e) Intelligent Aerobot, f) muFly microcopter, g) Chinese firefighting robot, h) Teleoperated extinguisher, i) Unmanned surface vehicle, j) Predator, k) T-HAWK, l) Bluefin HAUV. Images from [181, 158, 204, 267, 287].


2.4.1 Testbed Implementations

Developing controlled tests shows the possibilities for realizing practically usable, high-performance search and rescue technology. It allows operating devices and evaluating their performance while discovering their real utility and drawbacks. For this reason, researchers at different laboratories build their own test arenas, such as those presented in Figure 2.65. These test scenarios provide the opportunity for several kinds of tests, such as multiple-robot reconnaissance and surveillance [242, 144, 132, 98] and navigation for exploration and mapping [117, 241, 239, 130, 148, 224, 225, 249, 205, 136, 103], among other international competition activities [212, 261] (refer to section 2.5).

Figure 2.65: Jacobs University rescue arenas. Image from [249].

In [205], researchers present one of the most recent and relevant developments validated within these simulated man-made scenarios. Using several homogeneous Kenaf robots (refer to Figure 2.54), their goal is to navigate autonomously in a stepped terrain and gather enough information for creating a complete, fully integrated 3D map of the environment. The developers argue that if rescue robots have the capability to search such an environment autonomously, the chances of rapid mapping in a large-scale disaster environment are increased. The main challenges reside in the robots' capabilities for collaboratively covering the environment autonomously and integrating their individual information into a unique map. Also, since the terrain is uneven, as Figure 2.66 shows, the necessity of stabilizing the robot and its sensors for correct readings represents an important challenge too. Using a 3D laser scanner, they implemented a frontier-based coverage and exploration algorithm (refer to section 2.2.3) for creating a digital elevation map (DEM). This exploration strategy is shown in Figure 2.67, with the generated map of the complete environment at its right. It consisted in segmenting the current global map and allocating the best frontier to each robot according to its distance; no coordination among the robots was carried out, so the situation of multiple robots exploring the same frontier was possible. Then,


the centralized map was created by fusing each robot's gathered data in DaRuMa (refer to section 2.2.1), updating the map into a new, corrected global map that must be segmented again until no unvisited frontiers are found; refer to Figure 2.68. Consequently, the researchers had the opportunity to successfully validate their hardware capabilities and software algorithms to fulfill their goals.
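The uncoordinated frontier allocation described above, where each robot simply takes its nearest frontier, can be sketched in a few lines. This is an illustrative greedy assignment under the assumptions stated in the comments, not the Kenaf team's implementation; the function name `allocate_frontiers` and the 2D Euclidean distance metric are assumptions.

```python
import math

def allocate_frontiers(robots, frontiers):
    """Greedy allocation: each robot is assigned its nearest frontier
    by Euclidean distance. As in the described system, no inter-robot
    coordination is enforced, so two robots may receive the same frontier.
    robots, frontiers: lists of (x, y) positions."""
    assignment = {}
    for i, (rx, ry) in enumerate(robots):
        best = min(range(len(frontiers)),
                   key=lambda j: math.hypot(frontiers[j][0] - rx,
                                            frontiers[j][1] - ry))
        assignment[i] = best
    return assignment
```

After each allocation round, the fused global map would be re-segmented and the process repeated until no unvisited frontiers remain.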

Figure 2.66: Arena in which multiple Kenafs were tested. Image from [205].

Figure 2.67: Exploration strategy and centralized, global 3D map: a) frontiers in current global map, b) allocation and path planning towards the best frontier, c) final 3D global map. Image from [205].


Figure 2.68: Mapping data: a) raw from individual robots, b) fused and corrected into a new global map. Image from [205].

On the other hand, more realistic implementations include building and real-world environment inspection for sensing and monitoring purposes. In [144], ground robots similar to Robbie (refer to Figure 2.44) are deployed for temperature reading, applied as a possible task for fire-fighting or toxic-environment missions. The main idea is to deploy humans and robots in an unknown building and disperse them while following gradients of temperature and concentration of toxins, looking for possible victims. Also, while moving forwards, static sensors must be deployed for maintaining information connectivity, visibility and always-in-range communications. Figure 2.69 shows a snapshot of the deployed robots and the resulting temperature map obtained from a burning building, an experimental exercise developed by several US universities. The main challenges reside in networking, sensing, and navigation strategy generation and control, including problems such as robot localization, information flow, real-time map updating, using the sensor data to update the coverage strategy for defining new target locations, and map integration. For localization and communications, the researchers automatically deployed RFID tags along with the temperature sensors, plus manually deployed repeaters. Consequently, the main benefits of this implementation are the validated algorithms for navigation strategy and control, reliable communications in adverse scenarios, and the temperature map integration.
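Two decision rules underpin the dispersion behavior just described: follow the local temperature gradient, and drop a static relay before the communication link degrades. A minimal sketch of both rules follows; the function names, the discrete neighbor-sampling approach, and the -80 dBm threshold are illustrative assumptions, not values from [144].

```python
def next_waypoint(neighbor_readings):
    """Pick the neighboring cell with the highest sensed temperature,
    approximating 'follow the gradient' with discrete local sensing.
    neighbor_readings: {(x, y): temperature} for reachable neighbor cells."""
    return max(neighbor_readings, key=neighbor_readings.get)

def should_drop_repeater(signal_strength_dbm, threshold_dbm=-80.0):
    """Deploy a static relay node when the link back to the last deployed
    node weakens past a threshold, keeping the communication chain in
    range. The threshold value is purely illustrative."""
    return signal_strength_dbm <= threshold_dbm
```

In a mission loop, a robot would call `next_waypoint` with fresh readings at each step and `should_drop_repeater` whenever link quality is re-measured.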


Figure 2.69: Building exploration and temperature gradient mapping: a) robots as mobile sensors navigating and deploying static sensors, b) temperature map. Image from [144].

Additionally, in [98] a similar building exploration and temperature mapping is done, but through aerial vehicles working as mobile sensor nodes. As illustrated in Figure 2.70, a three-floor building was simulated by means of the structure shown, and smoke and fire machines were used to simulate the fires. Different sensing strategies were carried out in order to fulfill the main goal, which consisted in evaluating the data readings from mobile and static sensor nodes. Sensor 14 is a human firefighter walking around the structure, sensor 6 is represented by a UAV, and the rest are statically deployed sensors. The researchers argue that, due to the open space and the wind, only some static sensors near the fires were able to perceive the temperature rises, but all sensing strategies worked well, even though the human was about 10 times slower than the UAV. The principal benefit of this implementation is the confirmation of the feasibility and reliability of their routing protocol and the different possibilities for appropriate sensing in firefighting missions, pushing towards their ultimate goal: to use the advantages of mobility with low-cost embedded devices and thus improve response time in mission-critical situations.

Figure 2.70: Building structure exploration and temperature mapping using static sensors, a human mobile sensor, and a UAV mobile sensor. Image from [98].


Furthermore, another building inspection testbed, with the objective of structural assessment and mapping, is presented in [121]. In their developments they use a set of multiple Helios Carriers and a Helios IX (refer to Figure 2.60) for teleoperated exploration and 3D mapping of a 60-meter hall and one of the Tokyo subway stations. They deploy multiple Helios Carriers to analyse the environment and send 3D images of the scenario, which are used by one Helios IX to open closed doors (refer to Figure 2.71) and remove obstacles of up to 8 kg so that the Carriers can complete the exploration. Another Helios IX is used for more specific search and rescue activities once the 3D map has been generated by the Carriers. For localization of the robots they use a technique they call a collaborative positioning system (CPS), which consists of sensors on each robot used to recognize the other robots so that they can help each other estimate their current poses. The major benefits of these controlled implementations are knowledge of the time demands for creating large 3D maps, the need for accurate planning of each robot's deployment so as to lessen the exploration and map-generation time, and the validation of CPS as a better localization method than typical dead reckoning (refer to Figure 2.72), among other important confirmations of the individual robots' features. The main drawback is the robots' lack of autonomy.
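The core geometric idea behind a collaborative positioning system, where a stationary robot localizes a moving teammate, can be sketched as a single range/bearing observation in the observer's frame. This is a simplified illustration of the concept, not the Helios implementation: the function name, the (x, y, theta) pose convention, and the absence of any noise model or fusion step are all assumptions.

```python
import math

def observe_teammate(observer_pose, rel_range, rel_bearing):
    """Estimate a teammate's world position from the observer's own pose
    plus a range/bearing measurement of the teammate. In a CPS-style
    scheme, robots take turns moving while stationary teammates observe
    them, bounding the drift that pure dead reckoning accumulates.
    observer_pose: (x, y, theta) in the world frame, theta in radians."""
    x, y, theta = observer_pose
    return (x + rel_range * math.cos(theta + rel_bearing),
            y + rel_range * math.sin(theta + rel_bearing))
```

A full system would fuse many such observations (and the robots' own odometry) over time; this sketch shows only the measurement geometry.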

Figure 2.71: Helios IX in a door-opening procedure. Image from [121].

Finally, more directed and realistic USAR operations for acquiring experience in the rescue robotics research field are presented in [276]. In these controlled experiments, robots such as the Kohga and Souryu (refer to Figures 2.61 and 2.63) are used along with Japanese rescue teams from the International Rescue System Institute (IRS-U) and the Kawasaki City Fire Department (K-CFD). Their main goals were to deploy the robots as scouting devices to search for remaining victims and to investigate the situation inside the town after a supposed earthquake. Both teleoperated robots found several victims, as shown in Figure 2.73. Once a robot detected a victim, it reported the situation to the rescue teams, asked for a human rescuer to assist the victim, and waited there with the two-way radio communications activated for voice-messaging between the victim and the human operators until the rescuer reached the location. Once the human arrived, the robot continued its operations, constantly transmitting video and sensor data. These experiments provided the


Figure 2.72: Real model and generated maps of the 60 m hall: a) real 3D model, b) generated 3D map with snapshots, c) 2D map with CPS, d) 2D map with dead reckoning. Image from [121].

opportunity areas for improving the robots, such as the additional back-view camera that is now in all Souryu robots. They were also useful for validating mobility, portability, and ease of operation, including the basic advantages and disadvantages of using a tether (Souryu) versus working wirelessly (Kohga). This communications comparison determined that the tether is very useful because it offers bidirectional aural communication like a telephone, avoiding the need to press a push-to-talk switch and thus avoiding the problem of momentarily stopping work while pressing the switch. It is argued that this strategy enables easy and uninterrupted communication between a victim, a rescuer and other rescuers on the ground. On the other hand, the Kohga was advantageous in terms of higher mobility, but there was a slight delay in receiving images from the camera because of delay in the wireless communication line. Moreover, a zoom capability in its video cameras was determined to be useful for enhancing the capability of standing up on the flippers for better sensor readings. In summary, this testbed provided several "first experiences" that led to important knowledge in terms of robotic hardware and underground communications technology, which highlighted the need to maintain high quality, wide bandwidth, high reliability, and no delay.


Figure 2.73: IRS-U and K-CFD real tests with rescue robots: a) deployment of Kohga and Souryu robots, b) Kohga finding a victim, c) operator being notified of victim found, d) Kohga waiting until a human rescuer assists the victim, e) Souryu finding a victim, f) Kohga and Souryu awaiting assistance, g) human rescuers aiding the victim, and h) both robots continuing exploration. Images from [276].

2.4.2 Real-World Implementations

Perhaps the first attempt at using rescue robots in real disasters is the specialized, teleoperated vehicle for mapping, sampling and monitoring radiation levels in the surroundings of Unit 4 of the Chernobyl nuclear plant [1]. Nevertheless, it was not until the WTC 9/11 disaster that scientists reported the implementation of rescue robots. According to [194], Inuktun and Solem robots (refer to Figures 2.37 and 2.50) were implemented as teleoperated, tethered tools for searching for victims and for paths through the rubble that would be quicker to excavate, for structural inspection, and for detection of hazardous materials. These robots are credited with finding multiple sets of human remains, but technical search is measured by the number of survivors found, so this statistic is meaningless within the rescue community. The primary lessons learned concerned: 1) the need for acceptance of robotic tools for USAR, because federal authorities heavily restricted the use of robots; 2) the need for a complete and user-friendly human-robot interface, because even when equipped with FLIR cameras the provided imaging was not very representative or easy to understand, thus demanding a lot of extra time; and 3) other hardware implications, such as specific mobility features for rolling over, self-righting, and freeing the robot when stuck. Reinforcing these hardware implications, several years later the same research group intended to use the Inuktun in the 2005 La Conchita mudslide in the US, but it completely failed within 2 to 4 minutes because of poor mobility [204]. So, the major benefit from these implementations has been the roadmap towards defining the needs and opportunities for developing more effective rescue robots.

Another set of disasters that have served rescue robotics research are hurricanes Katrina, Rita and Wilma in the US [204]. These scenarios showed that the dimensions of the ravaged area directly influence the choice of the type of robot that will serve best. In these events, UAVs such as the iSENSYS IP3 (refer to Figure 2.64 d)) were used because of their ease of deployment and transportation, and because they fly below regulated airspace.


These robots were intended for surveying and sending information directly to responders so as to reduce unnecessary delays. It is important to clarify that these UAVs were tetherless and this did not compromise the mission, as reported in [228]. Also, Inuktuns were successfully used for searching indoor environments considered unsafe for human entry, and showed that no one was trapped, as had been believed. So, in contrast with the La Conchita mudslide, these scenarios provided more favorable terrain for the robots to traverse.

Furthermore, rescue robots have been extensively used for mine rescue operations [201]. In the 2006 Sago Mine disaster in West Virginia, it was reported that reaching the victims required traversing heavy rubble in environments saturated with carbon monoxide and methane [204]. The Wolverine (refer to Figure 2.64 b)) was deployed, relying on the advantage of being able to enter a mine faster than a person while being less likely to cause an explosion. Unfortunately, it got stuck 2.3 km before reaching the victims, highlighting the need to maintain reliable wireless communications with more agile robots. Despite this setback, the Wolverine has demonstrated its abilities for surface entries (refer to Figure 2.74) in mine rescue and has been used widely. Nevertheless, other scenarios have different characteristics, such as the 2007 collapse of the Crandall Canyon mine in Utah, which prohibited the use of the Wolverine [200]. This scenario required a small-sized robot deployed through boreholes and void entries, descending more than 600 meters in order to begin searching (refer to Figure 2.74). The terrain demanded that the robot be waterproof, have good traction in mud and rubble, and carry its own lighting system. An Inuktun-like robot was used, but it was concluded that what was needed was a serpentine robot. So, mine rescue operations have shown a clear classification of entry types, each with its own characteristic physical challenges [201], which influence which robot to choose.

This lack of significant results due to ground mobility problems is not quite the case for underwater and aerial inspections. In [203], an underwater inspection mission after hurricane Ike is reported. The mission consisted in determining scour and locating debris without exposing human rescuers, so an unmanned underwater vehicle (UUV) was deployed. The robot autonomously navigated towards a bridge and, once near enough, was teleoperated for the inspection routines. It successfully completed the mission objectives and yielded important findings, such as the importance of control of unmanned vehicles in swift currents, the challenge of underwater localization and obstacle avoidance, the need for multiple camera views, the opportunity for collaboration between UUVs and unmanned surface vehicles (USVs), which must map the navigable zone for the UUV, and the important challenge of interpreting underwater video signals. As for aerial inspections, the most recent event in which UAVs successfully participated is the Fukushima nuclear disaster [227, 237]. This disastrous event prevented the rescuers from deploying any kind of ground robot because of the mechanical difficulties that the rubble implied. So, the use of UAVs for teleoperated damage assessment seemed to be the only opportunity for rescue robotics, and several T-HAWK robots (refer to Figure 2.64) were deployed [287].

In summary, real implementations have shown a lack of significant results to the rescue community, provoking the need for extending testbed implementations in a more standardized approach. The next section describes this effort.


Figure 2.74: Types of entries in mine rescue operations: a) Surface Entry (SE), b) Borehole Entry (BE), c) Void Entry (VE), d) Inuktun being deployed in a BE [201].


2.5 International Standards

Perhaps the last important thing to include in this chapter is a description of the achieved standards, in order to have a reference for comparing different research contributions and determining their relevance. According to [204], the E54.08 subcommittee on operational equipment, within ASTM International's E54 Homeland Security applications committee, started developing an urban search and rescue (USAR) robot performance standard with the National Institute of Standards and Technology (NIST) as a US Department of Homeland Security (DHS) program from 2005 to 2010. Thus, NIST created a test bed to aid research within robotic USAR, planning to cover sensing, mobility, navigation, planning, integration, and operator control under the extreme conditions of rescue [198, 212, 204]. Basically, this test bed constitutes the RoboCup Rescue competitions for the Simulation and Real Robot Leagues, offering zones to test mobile commercial and experimental robots and sensors with varying degrees of difficulty. In Figure 2.75, the main standard environmental models (arenas) of NIST are presented in their simulated (USARSim) and real versions. The arenas are described as follows [214]:

Simulated Victims. Simulated victims with several signs of life, such as form, motion, head, sound and CO2, are distributed throughout the arenas, requiring directional viewing through access holes at different elevations.

Yellow Arena. For robots capable of fully autonomous navigation and victim identification, this arena consists of random mazes of hallways and rooms with continuous 15◦ pitch and roll ramp flooring.

Orange Arena. For robots capable of autonomous or remote teleoperative navigation and victim identification, this arena consists of moderate terrains with crossing 15◦ pitch and roll ramps and structured obstacles such as stairs, inclined planes, and others.

Red Arena. For robots capable of autonomous or remote teleoperative navigation and victim identification, this arena consists of complex step-field terrains requiring advanced robot mobility.

Blue Arena. For robots capable of mobile manipulation on complex terrains to place simple block or bottle payloads carried in from the start or picked up within the arenas.

Black/Yellow Arena (RADIO DROP-OUT ZONE). For robots capable of autonomous navigation with reasonable mobility to operate on complex terrains.

Black Arena (Vehicle Collapse Scenario). For robots capable of searching a simulated vehicle collapse scenario accessible on each side from the RED ARENA and the ORANGE ARENA.

Aerial Arena. For small unmanned aerial systems under 2 kg with vertical take-off and landing (VTOL) capabilities that can perform station-keeping, obstacle avoidance, and line-following tasks with varying degrees of autonomy.


Figure 2.75: Standardized test arenas for rescue robotics: a) Red Arena, b) Orange Arena, c) Yellow Arena. Image from [67].

Furthermore, it is stated in [204] that the standards are intended to consist of performance measures that encompass basic functionality, adequacy and appropriateness for the task, interoperability, efficiency, sustainability and robotic components. Robotic component systems include platforms, sensors, operator interfaces, software, computational models and analyses, communication, and information. Nevertheless, the development of requirements, guidelines, performance metrics, test methods, certification, reassessment, and training procedures is still being planned. For now, the performance measuring standards reside in the characteristics and challenges of the described RoboCup Rescue arenas, and only for UGVs [268]. Further work on standardizing interfaces and providing guidelines for operator control units is also being carried out [292].

Although standardized performance measures are not yet ready, the main quantitative metrics used at RoboCup Rescue are based on locating victims (RFID-based technologies are used for simulating victims), providing information about the victims that have been located (readable data from RFID tags at 2 m ranges and pictures taken of victims), and developing a comprehensive map of the explored environment. A total score S is calculated as shown in Equation 2.3, in accordance with [19]. The variables V_ID, V_ST, and V_LO reward 10 points for each victim identified, victim status, and victim location reported, respectively. Then t is a scaling factor from 0 to 1 measuring the metric accuracy of the map M, which can represent up to 50 points according to reported scoring tags located, multi-robot data fusion into a single map, attributes over the map, groupings (e.g., recognizing rooms), accuracy, skeleton quality and utility. Next, up to 50 points can be awarded for the exploration efforts E, which are measured according to the logged positions of the robots and the total area of the environment in a range from 0 to 1. Finally, C stands for the number of collisions, B for a maximum 20-point bonus for additional information produced, and N for the number of human operators required, which typically is 1, implying a scaling divisor of 4; fully autonomous systems (N = 0) are not scaled. It is important to clarify that this evaluation scheme is for the Real Robot League; for the simulation version the score vector can be found at [254].

S = (V_ID · 10 + V_ST · 10 + V_LO · 10 + t · M + E · 50 − C · 5 + B) / (1 + N)²    (2.3)
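To make the evaluation of Equation 2.3 concrete, the scoring can be sketched as a small function. This is only an illustration; the function and parameter names are ours, not an official RoboCup scoring implementation:

```python
def rescue_score(v_id, v_st, v_lo, t, m, e, c, b, n):
    """Total score S of Equation 2.3 (Real Robot League).

    v_id, v_st, v_lo: victims identified / with status / with location (10 pts each)
    t: map accuracy factor in [0, 1];  m: map points, up to 50
    e: exploration effort in [0, 1], worth up to 50 points
    c: collisions (5-point penalty each);  b: bonus, up to 20 points
    n: human operators required (n = 0 for fully autonomous teams, no scaling)
    """
    numerator = 10 * v_id + 10 * v_st + 10 * v_lo + t * m + 50 * e - 5 * c + b
    return numerator / (1 + n) ** 2

# Two victims fully reported, a good map and coverage, one operator (divisor 4):
score = rescue_score(2, 2, 2, t=0.8, m=50, e=0.9, c=1, b=5, n=1)
# score == 36.25
```

Note how a single operator already quarters the score, which quantifies the strong incentive the rules give to autonomy.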

Finally, to better know the current standards, it is highly recommended to visit the following websites:

NIST - INTELLIGENT SYSTEMS DIVISION:
www.nist.gov/el/isd/
ROBOTICS PROGRAMS/PROJECTS IN INTELLIGENT SYSTEMS DIVISION:
www.nist.gov/el/isd/robotics.cfm
HOMELAND SECURITY PROGRAMS/PROJECTS IN INTELLIGENT SYSTEMS DIVISION:
www.nist.gov/el/isd/hs.cfm
DEPARTMENT OF HOMELAND SECURITY USAR ROBOT PERFORMANCE STANDARDS:
www.nist.gov/el/isd/ks/respons robot test methods.cfm
STANDARD TEST METHODS FOR RESPONSE ROBOTS:
www.nist.gov/el/isd/ks/upload/DHS NIST ASTM Robot Test Methods-2.pdf

Concluding this chapter, we have presented information on the worldwide developments towards an autonomous MRS for rescue operations. According to the presented works, and more precisely to Tadokoro in [267], the roadmap for 2015 is as follows:

Information collection. Multiple UAVs and UGVs will collaboratively search and gather information from disasters. This implies that sensing technology for characterizing and recognizing disasters and victims from the sky should be established. Also, broadband mobile communications should be of high performance and stable during disasters, in such a way that information collection by teleoperated and autonomous robots, distributed sensors, home networks, and ad hoc networks should be possible.

Exploration in confined spaces. Mini-actuator robots should be able to enter the rubble and navigate over and inside the debris. Also, miniaturized equipment such as computers and sensors is required so as to achieve semi-autonomy and localization with sufficient accuracy.

Victim triage and structural damage assessment. Robot emergency diagnosis of victims should be possible, as well as 3D mapping in real time. This demands adequate sensing for situational awareness among robots and human operators, and interfaces that reduce strain on operators while augmenting autonomy and intelligence on robots.

Hazard-protection. Robotic equipment should be heat and water resistant.

The use of multiple UGVs to collaboratively search and gather information from disasters is a primary goal of this dissertation. From now on, this document focuses on the description of the proposed solution and the tests developed for this dissertation. The next chapter specifies the addressed solution.


Chapter 3

Solution Detail

“I would rather discover a single fact, even a small one, than debate the greatissues at length without discovering anything at all.”

– Galileo Galilei. (Physicist, Mathematician, Astronomer and Philosopher)

“When we go to the field, it’s often like what we did at the La Conchita mudslide… It’s to take advantage of some of the down cycles that the rescuers have.”

– Robin R. Murphy. (Robotics Scientist)

CHAPTER OBJECTIVES
— Which tasks, which mission.
— Why and how a MRS for rescue.
— How behavior-based MRS.
— How hybrid intelligence.
— How service-oriented.

Concerning the core of this dissertation work, this chapter contains the deepest of our thoughts towards solving the problem: how do we coordinate and control multiple robots so as to achieve cooperative behavior for assisting in urban search and rescue operations? Each of the included sections is intended to answer and fulfill one of the research questions and objectives stated in section 1.3. First, information on the tasks and roles in a rescue mission is presented. Second, those tasks are matched to a team of multiple mobile robots. Third, each robot is given a set of generic capabilities so as to be able to address each described task. Fourth, those robots are coupled in a multi-robot architecture for ease of coordination, interaction and communication. And finally, a novel solution design is implemented so that the solution is not fixed but rather flexible and scalable.

It is worth mentioning that the solution procedure is based upon a popular analysis and design methodology called Multi-agent Systems Engineering (MaSE) [289], which, among other reasons, precisely matched our interest in coordinating local behaviors of individual agents to provide an appropriate system-level behavior. A graphical representation of this methodology is presented in Figure 3.1.


CHAPTER 3. SOLUTION DETAIL 94

Figure 3.1: MaSE Methodology. Image from [289].


3.1 Towards Modular Rescue: USAR Mission Decomposition

According to the MaSE methodology, the first requirement is to capture the goals. In order to do this we extracted the common objectives from the state-of-the-art developments, the most representative surveys, and the achieved standards and trends in rescue robotics. This mainly includes the developments listed on rescue robotics in section 2.1, as well as the references presented in section 2.5, both in Chapter 2.

Briefly, it is worth saying that the essence of rescue robotics (refer to section 1.1) denotes the main goal: to save human lives and reduce the damage. In order to do that, we found three main global tasks (or stages):

1) Exploration and Mapping. Navigate through the environment in order to get the structural design while trying to localize important features or objects such as threats or victims.

2) Recognize and Identify. Identify different entities such as teammates, threats or victims, and recognize their status for determining the appropriate actions towards aiding.

3) Support and Relief. Provide the appropriate aid for damage control and victim support and relief.

According to these global tasks, we determined that the particular goals for a team of robots in a rescue mission are the ones presented in Figure 3.2. It can be seen that there exists an inherent parallelism in terms of priorities when it comes to finding a threat or a victim, but there is also a very relevant issue, the map quality, which also determines the team's performance in the absence of threats or victims (refer to the performance metrics in section 2.1). Then, a characterization level is considered, which basically resides in the recognition stage and the sensor data interpretation so as to come up with a single map, a threat report or a victim report. At this level, maps are intended to have appropriate definition, for example, the number of rooms and corridors; while threats and victims are intended to be located, diagnosed and classified, with the possibility of additional information such as photos of the current situation. Lastly, actions corresponding to the threat or victim classification take place.

Once we had defined the goals and their hierarchy, we needed to reach the complete set of concurrent tasks that constitute a rescue mission. Following the MaSE methodology, we used different cases presented in the literature, mainly focusing on the different scenarios provided by the RoboCup and described previously in section 2.5. Using this information we defined three main sequence diagrams, described below:

Sequence Diagram I: Exploration and Mapping. This is the start-up diagram; here is where every robot in the team starts once deployment has been done or support and relief operations have ended for a given entity. Being the first diagram, it consists of an initialization stage and the information gathering (exploration) loop. This loop is an aggregation-dispersion action that is considered so that the robots can start exploring the


Figure 3.2: USAR Requirements (most relevant references to build this diagram include: [261, 19, 80, 87, 254, 269, 204, 267, 268]).


environment in a structured way (flock) just before they disperse to cover the distant points and meet again at a given point. This loop is considered important because of the relevance it has in the literature to aggregate the robots at a so-called rendezvous point, so as to reduce mapping errors and/or possible communication disruptions once every unit has been dispersed towards covering the environment [232, 101, 240, 92]. It is important to clarify that the coverage of distant points or the exploration strategies may vary according to the amount of information that has been gathered. Also, at any moment during the exploration loop, critical situations may be triggered, taking the robot out of the loop and into another set of operations. These critical situations include: victim/threat/endangered-kin detected, control message asking for a particular task, or damaged/stuck/low-battery robot. For a better understanding of these sequential operations, Figure 3.3 shows a graphical representation of this diagram. Details in the figure are described further in the document.

Sequence Diagram II: Recognize and Identify. This second diagram occurs whenever a critical situation has been triggered. It is composed of an initial triggering stage, which can happen either locally or remotely. Local refers to the robot's own sensors detecting, for example, a victim or a threat. Remote means that a message has been sent to the robot so that it can assist either with a threat, victim or endangered kin. This difference in triggering also makes a difference in the second step of the diagram, the approaching or pursuing stage. In the case of local triggering, this stage consists in the robot tracking and approaching the corresponding entity; in the case of remote triggering, it is assumed that the message contains the pose of the entity for the robot to seek it. Once the entity has been reached, there comes an analysis and inspection stage for fulfilling the recognition goals of classification and status, so that the data can be reported to a main station and the appropriate actions to take can then be deliberated. These actions will take the robot outside this diagram, either back to exploration and mapping, or forwards to support and relief. For a better understanding of these sequential operations, Figures 3.4 and 3.5 show graphical representations of these diagrams, local and remote, respectively. Details in the figures are described further in the document.

Sequence Diagram III: Support and Relief. This is the final operations diagram, so here is where the critical support and aiding actions occur. The first step is to determine if any kind of possible aid matches the current need of the entity, which can be the threat, victim or kin. If no action is possible, then an aid-failed report is generated so that a main station can send another robot or human rescuer to give appropriate support. But in the case an action is possible, the robot must carry out the corresponding operations, among which the most relevant literature refers to: rubble removal, in-situ medical assessment, acting as a mobile beacon or surrogate, adaptively shoring unstable rubble, entity transportation, displaying information to the victim, clearing a blockade, extinguishing a fire, alerting of risks, among others [204, 267]. The support and relief action can still fail and generate an aid-failed report, or succeed and generate an updated success report; either way, after making the report the last operation is to go back to the exploration and mapping stage. For a better understanding of these sequential operations, Figure 3.6 shows a graphical representation of this diagram. Details in the figure are described further in the document.

So, at this point we have established the USAR requirements and sequentially ordered the different operations that can be found among the most relevant literature in rescue robotics. We can say that this is a complete decomposition of the generic rescue operations that we will find among a pool of robots deployed in a USAR mission, independently of the nature of the disaster. Now, it is time to define the basic robotic requirements to fulfill these operations.

3.2 Multi-Agent Robotic System for USAR: Task Allocation and Role Assignment

Given the complete list of goals and tasks that constitute a rescue mission, presented in the previous section, it would be too ambitious to pretend to code everything and deploy a complete MRS that fulfills every task just within the reach of this dissertation. So, this section is intended to delimit the scope in terms of the robotic team in order to end up with a more integral solution; we are getting into the roles and concurrent tasks, the final phases of the MaSE analysis stage.

First of all, it becomes easier to think of allocating tasks and assigning roles among homogeneous robots because there are no additional capabilities to evaluate. Also, equipping the robots with the least instrumentation referred to in Table 2.3, such as a laser scanner, video camera, and pose sensors, simplifies the challenge while leaving room for more sophisticated developments and future work. In this way, the robotic resources concerning the solution herein include the middle-sized ground wheeled and tracked robots presented in Figure 3.7. Their main advantages and disadvantages are summarized in Table 3.1. It is assumed that with a team of 2-3 robots we still gain the advantages concerning a MRS presented in section 1.1, such as robustness by redundancy and superior performance by parallelism. Finally, it is worth clarifying that one of the main objectives of this work is to provide the ease of extending software solutions to upgraded and heterogeneous hardware; nevertheless, for the ease of demonstrations and because of our laboratory resources, the proposed MRS has been limited.


Figure 3.3: Sequence Diagram I: Exploration and Mapping (most relevant references to build this diagram include: [173, 174, 175, 176, 21, 221, 86, 232, 10, 58, 271, 101, 33, 240, 92, 126, 194, 204]).


Figure 3.4: Sequence Diagram IIa: Recognize and Identify - Local (most relevant references to build this diagram include: [170, 175, 221, 23, 242, 163, 90, 207, 89, 226]).


Figure 3.5: Sequence Diagram IIb: Recognize and Identify - Remote (most relevant references to build this diagram include: [170, 175, 221, 23, 242, 163, 90, 207, 89, 226]).


Figure 3.6: Sequence Diagram III: Support and Relief (most relevant references to build this diagram include: [58, 33, 80, 19, 226, 150, 267, 204, 87, 254]).


Figure 3.7: Robots used in this dissertation: to the left a simulated version of an Adept Pioneer 3DX, in the middle the real version of an Adept Pioneer 3AT, and to the right a Dr. Robot Jaguar V2.

Table 3.1: Main advantages and disadvantages for using wheeled and tracked robots [255, 192].

Mobile Mechanism   Advantages                    Disadvantages
Wheeled            High mobility                 Low obstacle performance
                   Energy efficient
Tracked            High obstacle performance     Heavy
                   Large payload                 High energy consumption
                                                 Cramped construction

Perhaps the main issue once we have defined the pool of robots is the task allocation problem, or the coordination of the team towards solving multiple tasks in a given mission. According to [29], an interesting task allocation problem arises in cases when a team of robots is tasked with a global goal, but the robots have only local information and multiple capabilities among which they must select the appropriate ones autonomously. This is precisely the situation we are dealing with, but including the already mentioned three main global tasks. These tasks, as well as relevant literature on the experiences within disaster response and rescue robotics testbeds (essentially [182, 9, 254]), led us to the definition of the following roles:

Police Force (PF). This role is responsible for the tasks concerning the exploration and mapping global task. It is the main role for gathering information from the environment.

Ambulance Team (AT). This role is responsible for the tasks concerning the victims, including tracking, approaching, seeking, diagnosing and aiding.

Firefighter Brigade (FB). This role is responsible for the tasks concerning the threats, including tracking, approaching, seeking, inspecting and aiding.

Team Rescuer (TR). This role is responsible for the tasks concerning the endangered kin, including seeking and aiding.

Trapped (T). This role is defined for identifying a damaged robot.


These roles simplify the task allocation process because they delimit the possible tasks a robot can perform. They can be dynamically assigned following the strategy presented in [75, 78]. This means that at any given moment a robot can change its role according to its local perceptions, but also that if a robot has not finished doing some task it may stick to its role until completing its duty. So, recalling Figures 3.3, 3.4, 3.5 and 3.6, it can be understood that a robot in a PF role can change to any other role according to its perceptions; for example, it can change to AT if a victim has been detected by its sensors, or to TR if it has received an endangered-kin alert message. Similarly, if a robot is currently in a FB role and its sensors identify a victim, it may send a victim-found message but will not change its role to AT until finishing the tasks corresponding to its current role, and only if the reported victim has not been attended yet.
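The role-switching rule just described can be sketched as follows. The event names and the busy flag are our illustrative simplification of the dynamic assignment strategy in [75, 78], not its actual implementation:

```python
# Sketch of the dynamic role rule: a robot switches roles on a local perception
# only when it has no unfinished duty for its current role; otherwise it reports
# the finding and keeps its role. Event names are illustrative.

ROLE_FOR_EVENT = {
    "victim_detected": "AT",      # Ambulance Team
    "threat_detected": "FB",      # Firefighter Brigade
    "kin_alert_received": "TR",   # Team Rescuer
    "robot_damaged": "T",         # Trapped
}

def next_role(current_role, event, busy, reports):
    """Return the (possibly unchanged) role after an event.

    busy: True while the robot still has tasks pending for its current role.
    reports: list collecting findings reported to the main station.
    """
    new_role = ROLE_FOR_EVENT.get(event)
    if new_role is None or new_role == current_role:
        return current_role
    reports.append(event)                 # always report the finding
    return current_role if busy else new_role

reports = []
# A free Police Force robot immediately becomes Ambulance Team...
assert next_role("PF", "victim_detected", busy=False, reports=reports) == "AT"
# ...while a busy Firefighter Brigade robot reports but keeps its role.
assert next_role("FB", "victim_detected", busy=True, reports=reports) == "FB"
```

Both findings end up in the report list regardless of whether the role changed, which is the property that keeps the main station informed even by busy robots.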

So, even though the roles have simplified the problem, there are still multiple tasks within each one of them. Thus, for each robot to know the current status of the mission and therefore the most relevant operations so as to be coherent (refer to Table 1.2), a finite state machine (FSM) is introduced (refer to Table 1.3 and Equation 1.1). Recalling again Figures 3.3, 3.4, 3.5 and 3.6, the operations in white boxes represent the set of states K, from which a robot can move according to the black arrows, which represent the function δ that computes the next state. It is worth mentioning that states have at most two possibilities for the following state, so δ always has one option according to an alternative flag; if the flag is set, the next state is the one indicated by the rightmost arrow. The stimuli Σ for changing from state to state are based upon the acquiescence and impatience concepts presented in [221]. We intend to be flexible so as to trigger the stimuli autonomously according to the local perceptions, enough gathered information, performance metrics or other learning approaches; or to trigger them manually by a human operator so as to end up with a semi-autonomous system, which is more likely to match the state of the art, where almost every real implementation has been fully teleoperated. The last concepts in the FSM are the initial state s and the final states F, both of which are clearly denoted in every sequence diagram at the top and the bottom, respectively.
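The two-successor convention with an alternative flag can be expressed as a minimal FSM sketch. The state names here are ours and purely illustrative; the thesis's actual states are the white-box operations of Figures 3.3-3.6:

```python
# Minimal FSM sketch: each state has at most two successors; an "alternative
# flag" in the stimulus selects the rightmost (alternative) arrow, otherwise
# the default arrow is taken. State names are illustrative.

TRANSITIONS = {
    # state: (default_next, alternative_next)
    "initialize": ("explore", None),
    "explore":    ("aggregate", "critical"),  # alt: critical situation triggered
    "aggregate":  ("disperse", "critical"),
    "disperse":   ("explore", "critical"),
    "critical":   ("report", None),
    "report":     ("explore", None),
}

def delta(state, alternative=False):
    """Next-state function δ: follow the alternative arrow when flagged."""
    default_next, alt_next = TRANSITIONS[state]
    if alternative and alt_next is not None:
        return alt_next
    return default_next

# Walk the exploration loop until a (simulated) critical stimulus arrives.
state = "initialize"
trace = [state]
for alt in [False, False, False, True]:   # e.g. victim detected on 4th step
    state = delta(state, alternative=alt)
    trace.append(state)
```

Running the walk yields the aggregation-dispersion loop being interrupted by the critical stimulus, exactly the exit path described for Sequence Diagram I.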

Furthermore, each of the states or operations in the sequence diagrams is finally decomposed into primitive or composite actions, which ultimately activate the corresponding robotic resources according to the different circumstances or robotic perceptions. These sets of actions are fully described in the next section.

3.3 Roles, Behaviors and Actions: Organization, Autonomy and Reliability

In section 1.4 an introduction to robotic behaviors was presented. It was stated that this control strategy is well-suited for unknown and unstructured situations because it enhances locality. Behaviors were described as the abstraction units that serve as building blocks towards complex systems, thus facilitating scalability and organization. Herein, behaviors come to constitute the operations referred to in the previous section, but now in terms of robotic control. This section is highly based upon the idea that it is not the belief which makes a better robot, but its behavior, and this is how we intend to define the agent classes, according to the next MaSE phase.


According to Maja Mataric and Ronald Arkin [175, 11], the challenge when defining a behavior-based system, and that which determines its effectiveness, is the design of each behavior. Mataric states that all the power, elegance and complexity of a behavior-based system reside in the particular way in which behaviors are defined and applied. She refers that the main issues reside in how to create them, which are the most adequate for a given situation, and how they must be combined in order to be productive and cooperative. Reinforcing the idea, Arkin refers that the main issue is to come up with the right behavioral building blocks, clearly identifying the primitive ones, effectively coordinating them, and finally grounding them to the robotic resources such as sensors and actuators. So, in this work we need a proper definition of primitive behaviors, including a clear control phase referring to the actions to do, a triggering or releasing phase, and the arbiters for coordinating simultaneous outputs. In the case of composite behaviors, the difference is to define the primitive behaviors that constitute their control phase.

With these requirements, and assuming that at the moment of deployment we are in an almost no-knowledge system, we have pre-defined the set of behaviors presented in Tables C.1-C.33, included in Appendix C. It is important to mention that the majority are based upon useful and practical behaviors reported in the literature. Also, even though it is not explicitly referred to in each of them, every behavior outside the initialization stage can be inhibited by acquiescent and impatient behaviors according to a state transition in the FSM (black arrows in the sequence diagrams), or even by the escape behavior if the robot has a problem. What is more, all behaviors consider 2D navigation and maps for the ease of development, and some of them are based on popular algorithms such as SURF [26] for visual recognition or the VFH [41] for autonomous navigation with obstacle avoidance. This is done in order to take advantage of the already existing software contributions, coding them in a state-of-the-art fashion as will be described in section 3.5, while reducing the amount of work towards a more integral solution concerning this dissertation. The central idea of all these behaviors is that with no specific strategy or plan, but with the simple emergence of efficient local behaviors, a complex global strategy can be achieved [52].

Most of these behaviors happen without interfering with each other because of the roles and finite state machine assembly. Thus, by controlling the triggering/releasing action of each behavior, we dismiss the arbitration stage. Nevertheless, for the cases where multiple behaviors trigger simultaneously, for example in the safe wander or field cover operations, where the avoid past, avoid obstacles and locate open area behaviors occur together, each behavior contributes an amount of its output in the way of a weighted summation, as in [21] (refer to fusion in Figure 1.8). This fusion coordination, as well as the manual triggering of behaviors, leaves room for better coordinating behaviors or creating new emergent ones according to the amount of gathered sensor data or measured performance, but this is out of the scope of this dissertation. We know that an ideal solution would have all behaviors transitioning and fusing autonomously while showing efficient operations towards mission completion, but full autonomy for USAR missions is still a long-term goal, so we must aim for operator use and semi-autonomous operations so as to reduce coordination complexity and increase the system's reliability, also known as sliding autonomy [124, 251]. In Chapter 4, implementations of individual and coordinated/fused behaviors will better explain what has been referred to here.
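The weighted summation above can be sketched as follows, assuming each active behavior emits a 2D motion vector together with a weight; the behavior names and weight values are illustrative, not the tuned values used in the system:

```python
# Sketch of weighted-summation behavior fusion: each simultaneously active
# behavior outputs a motion vector (vx, vy) and a weight; the fused actuator
# command is the weighted sum of all contributions.

def fuse(behavior_outputs):
    """Weighted vector summation over (weight, (vx, vy)) pairs."""
    fx = sum(w * v[0] for w, v in behavior_outputs)
    fy = sum(w * v[1] for w, v in behavior_outputs)
    return (fx, fy)

# Safe-wander example: avoid past + avoid obstacles + locate open area.
command = fuse([
    (0.2, (1.0, 0.0)),    # avoid past: push away from already-visited ground
    (0.5, (0.0, -1.0)),   # avoid obstacles: steer away from a sensed wall
    (0.3, (1.0, 1.0)),    # locate open area: head towards widest free space
])
# command == (0.5, -0.2)
```

Each behavior remains independent and purely local; only the final summation couples them, which is what lets new behaviors be added without touching the existing ones.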

Summarizing this section, Figures 3.8 and 3.9 show a graphical representation of the roles, behaviors, and actions organization, including some examples of possible robotic aid such as alerting humans or fire extinguishing. All this constitutes the functional level of our system, recalling Alami's architecture A.1, and gives definition to the reactive layer according to Arkin's AuRA A.2. So, the next step is to define the executional and decisional levels that correspond to the deliberative layer of our system. Following the MaSE methodology, the next section refers to the conversations and the architecture for completing the assembly of our rescue MRS.

Figure 3.8: Roles, behaviors and actions mappings.

3.4 Hybrid Intelligence for Multidisciplinary Needs: Control Architecture

At this point it must be clear that the control strategy for each individual robot is based on robotic behaviors. This constitutes its individual control architecture, which is represented in Figure 3.10. Among the activations we have the roles, the finite states, and also the current mission situation and the robots' local perceptions. For the stimuli, control and actions, we have the


Figure 3.9: Roles, behaviors and actions mappings.


inputs, the ballistic or servo control, and the resultant operations/actions for which the behavior was designed. Also, we have referred that for cases when multiple behaviors are giving a desired action, a weighted summation is done so as to end up with a fused, unique actuator response. So, among other already mentioned benefits, this control strategy enables us to closely couple perceptions and actions, so that we can come up with adequate, autonomous and in-time operations even when dealing with highly unpredictable and unstructured environments. Nevertheless, there is still the need for a higher-level control that ensures the appropriate cognition/planning at the multi-robot level for mission accomplishment. For this reason, a higher-level architecture was created for coupling the rescue team and providing the deliberative and supervision control layers.

Figure 3.10: Behavior-based control architecture for individual robots. Edited image from [178].

Providing a deliberative layer to a behavior-based layer, which is nearly reactive, is to create a hybrid architecture. According to [192], under this hybrid paradigm, the robot first plans (deliberates) how to best decompose a task into subtasks and then which behaviors are suitable to accomplish each subtask. In this work, the robot can autonomously choose the next best behavior according to its local perceptions, but its performance can also be enhanced if some global knowledge is provided, meaning that each robot knows something outside of itself so as to derive a better next best behavior. Using Figure 3.11, it is easier to understand that a hybrid approach provides our system the possibility to closely couple sensing and acting, but also to enhance the internal operations by some sort of planning. Through this we combine local control with higher-level control approaches to achieve both robustness and the ability to influence the entire team's actions through global goals, plans, or control, in order to end up with a much more reliable system [223].

Therefore, using information about the characteristics that make a relevant multi-robot architecture [218], being inspired by JAUS, the initiative towards standardization in unmanned systems composition and communications [106], and taking into account the most popular concepts on group architectures [63], we have created a multi-robot architecture with the following design guidelines:

Robotic hardware independent. Leveraging heterogeneity and reusability, hardware abstraction is essential, so the architecture shall not be limited to specific robots only.

Mission/domain independent. As a modular and portable architecture, the core should


Figure 3.11: The Hybrid Paradigm. Image from [192].

remain persistent, while team composition [99] and behavior vary according to differenttasks.

Sliding autonomy. The system can be autonomous or semi-autonomous; the human operator can control and monitor the robots but is not required for full functionality.

Computer resource independent. The architecture must provide flexibility in computing resource demands, ranging from high-spec computers to simple handhelds and microcontrollers.

Global centralized, local decentralized. The system can consider the global team state (centralized communication) to increase performance but should not require it for local decision-making; thus intelligence resides on the robot, see [153]. Decentralized multi-agent systems have advantages such as fault tolerance, natural exploitation of parallelism, reliability, and scalability. However, achieving global coherency in these systems can be difficult, thus requiring a central station that enhances global coordination [223].

Distributed. As shown in [175], distribution fits better for behavior-based control, which matches our long-term goal and the intended modularity. Also, team composition can be enhanced by distributing by hierarchies (sub-teams) or by peer agents through a network [63], according to the mission's needs. With distributed control it is assumed that close coupling of perception with action among robots, each working on local goals, can accomplish a global task.

Upgradeable. Leveraging extendibility and scalability, the architecture must allow rapid insertion of new technology, such as hardware (e.g. sensors) and software (e.g. behaviors) components. We want a system that balances being general enough for extendibility, scalability and upgrades with being specific enough for concrete contributions.

Interoperability. Three levels of interoperability are desired: human-human, human-robot and robot-robot.

Reliable communication. Time-suitable and robust communications are essential for multi-robot coordination. Nevertheless, for robustness' sake, communications in hazardous environments should not be essential for task completion. This way the job is guaranteed even in the event of a communications breakdown. Accordingly, our architecture should not rely on robots communicating with each other through explicit communication, but rather through the environment and sensing.

One-to-many control. Human operators must be able to command and monitor multiple robots at the same time.

The described architecture is represented in Figure 3.12 (for nomenclature refer to Tables 1.5 and 1.6). For ease of graphical representation we have distributed the levels horizontally, with the highest level to the left. At this level the mission is globally decomposed, as presented in section 3.1, so that according to a given task the executional level can derive the most appropriate role and start developing the corresponding behavioral sequence, taking into account behavior activations based mainly on the robot's local perceptions. When the corresponding behaviors have been triggered, simultaneous outputs are fused to derive the optimal command that is sent to the robot's actuators or physical resources. This happens for every robot in the team. It is worth mentioning that every robot has a capabilities vector intended to match a given task, but since this work is limited to homogeneous robots, we leave it expressed in the architecture but unused in tests. Finally, wherever a set of gears appears in the architecture, a coordination is being done, either inter-robot (roles and tasks) or intra-robot (behaviors and actions).

Figure 3.12: Group architecture.

Furthermore, to ground the architecture in hardware resources we decided to use a topology similar to JAUS [106] because of the clear distinction between levels of competence and the simple integration of new components and devices [218]. This topology is shown in Figure 3.13 and includes the following elements1:

1. System. At the top is the element representing the logical grouping of multiple robotic subsystems in order to gain cooperative and cognitive benefits. Here the planning, reasoning and decision-making for better team performance in a given mission is developed. This element also hosts the operator control unit (OCU, or user interface, UI) that enables a human operator to monitor and send higher-level commands to multiple subsystems, matching our one-to-many control design goal. Thus the whole system can perform in a fully autonomous or semi-autonomous way, independent of operator use. Finally, this element can also represent signal repeaters for wider area networks, OCUs for human-human interoperability, and local centralizations (sub-team coordinators) for larger systems.

2. Subsystems. These can be independent entities such as robots and sensor stations. In general, a subsystem is an entity composed of computer nodes and the software/hardware components that enable them to work.

3. Nodes. These contain the assets or components needed to provide a complete application ensuring appropriate entity behavior. They can be several types of interconnected computers, enabling distribution and better team organization, increasing modularity and simplifying the addition of reusable code as in [77].

4. Components. The place where the services operate. A service could be either a hardware-controlling driver or a more sophisticated software algorithm (e.g. a robotic behavior), and, since it is a class, it can be instantiated multiple times in a same node. By integrating different components we define the applications running at the nodes. It is worth saying that the number of components will be limited mainly by the node's capabilities.

5. Wireless TCP/IP Communications. Communication between subsystems and the system element is done through a common wireless area network using the TCP/IP transport protocol. The messaging between them corresponds to an echoed CCR port sent by the Service Forwarder. The Service Forwarder looks for the specified transport (TCP/IP) and then goes through the network until reaching the subscriber. This CCR port is part of the Main Port of standardized services. The message sent through this port corresponds to a user-defined State class containing the objects that characterize the subsystem's status. This class is also part of every service in MSRDS. By implementing this communication structure we gain an already settled messaging protocol that can easily be modified by the user to achieve specific robotic behavior and task requirements within a robust communications network. For details on this communication process refer to [70].

6. Serial Communications. Inside each subsystem a different communication protocol can be used among the existing nodes. This communication can be achieved by serial networks such as RS232 links, CAN buses, or even Ethernet. It is important to note that nodes can be microcontrollers, handhelds, laptops, or even workstations, where at least one of them must be running a Windows-based environment in order to handle communications within MSRDS.

1Some of the concepts needed to understand the description of these elements concerning service-oriented robotics and MSRDS were presented in Appendix B and in section 1.4.2, and are detailed in the next section.

Figure 3.13: Architecture topology: at the top, the system element communicating wirelessly with the subsystems. Subsystems include their nodes, which can be different types of computers. Finally, components represent the running software services, depending on the existing hardware and each node's capabilities.

In Figure 3.13 we show an explicit two-leveled approach allowing for the hybrid intelligence purpose (or mixed initiative, as in [199]), with the main focus on differentiating between individual robot intelligence (autonomous perception-action) and robotic team intelligence (human deliberation and planning), matching the decentralization and distribution guidelines. Moreover, this architecture can easily be extended according to mission requirements and available software and hardware resources by instantiating the current elements, fulfilling our mission/domain-independent and upgradeable design goals. It also allows more interconnected system elements, each with a different level of functionality, leveraging distribution, modularity, extendibility and scalability. It is worth reinforcing that even if the system element looks like a centralization, it exists to optimize global parameters and provide a monitoring central station rather than to ensure functionality.
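The System/Subsystem/Node/Component containment of Figure 3.13 can be pictured as a small data model. This is only an illustrative analogy in Python; the thesis implements these levels as MSRDS services, and all names here are invented:

```python
# Minimal sketch of the JAUS-like topology levels of Figure 3.13
# (names are invented for illustration, not from the thesis code).
from dataclasses import dataclass, field

@dataclass
class Component:        # a running service (hardware driver or behavior)
    name: str

@dataclass
class Node:             # a computer hosting components
    name: str
    components: list = field(default_factory=list)

@dataclass
class Subsystem:        # a robot or sensor station composed of nodes
    name: str
    nodes: list = field(default_factory=list)

@dataclass
class System:           # top element grouping subsystems (hosts the OCU)
    subsystems: list = field(default_factory=list)

    def services(self):
        """Enumerate every component running anywhere in the team."""
        return [c.name for s in self.subsystems for n in s.nodes
                for c in n.components]

pioneer = Subsystem("pioneer1",
                    [Node("laptop", [Component("drive"), Component("laser")])])
team = System([pioneer])
print(team.services())  # ['drive', 'laser']
```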

In summary, the architecture provides the infrastructure so that only the hardware to be used and the way the mission is to be solved (tasks) need re-coding. Thus, the system is set to couple the team composition, reasoning, decision-making, learning, and messaging for mission solving [63, 99]. Additionally, in fulfilling such objectives, using the Microsoft Robotics Developer Studio (MSRDS) robotic framework we meet the following design goals: robot hardware abstraction and rapid technology insertion, because of the service-oriented design; and distributed, computer-resource-independent, time-suitable communications and concurrent robotic processing, because of the CCR and DSS characteristics. It also provides the infrastructure for reusability within service standardization and an environment for simple debugging and prototyping, among other advantages described in [72]. The next section provides deeper information on the advantages of developing service-oriented systems plus the use of MSRDS.

3.5 Service-Oriented Design: Deployment, Extendibility and Scalability

Concerning the last phase of the MaSE methodology, we finish the design stage with this section, which establishes how the MRS is finally designed for successful deployment. Following state-of-the-art trends in robotic software frameworks, we choose to work under the service-oriented robotics (SOR) paradigm. It is important to recall Appendix B for a clear definition of services and the relevance of developing service-oriented solutions over other programming approaches. Also, section 1.4.2 briefly describes the MSRDS framework and its CCR and DSS components, which are key elements in this section.

In general, we choose a service-oriented approach because of its manageability of heterogeneity, its self-discoverable internet capabilities, its information exchange structure, and its high capacity for reusability and modularity without depending on fixed platforms, devices, protocols or technologies. All of these characteristics, among others, are present in MSRDS and ROS.

Nowadays it is perhaps more convenient to develop using ROS rather than MSRDS, essentially because of the recent growth of service repositories [107]. But at the time most of the algorithms in this dissertation were developed, MSRDS and ROS had very similar support in the robotics community. So, choosing between them was a matter of exploring the systems and identifying the one whose characteristics simplified or enhanced our intended implementations. In this way, the Visual Studio debugging environment, the Concurrency and Coordination Runtime (CCR), the Decentralized Software Services (DSS), the integrated simulation service, and the tutorials available at that time turned us towards MSRDS, as reported in [70].

3.5.1 MSRDS Functionality

MSRDS is a Windows-based system focused on facilitating the creation of robotics applications. It is built upon a lightweight service-oriented programming model that simplifies the development of asynchronous, state-driven applications. Its environment enables users to interact with and control robots using different programming languages. Moreover, its platform provides a common programming framework that enables code and skills transfer, including the integration of external applications [135]. Its main components are depicted in Figure 3.14 and described below.

Figure 3.14: Microsoft Robotics Developer Studio principal components.

CCR. This is a programming model for multi-threading and inter-task synchronization. Unlike past programming models, it supports the real-time robotics requirement of moving actuators at the same time sensors are being listened to, without classic and conventional complexities such as manual multi-threading, mutual exclusions (mutexes), locks, semaphores, and specific critical sections, thus preventing typical deadlocks while dealing with asynchrony, concurrency, coordination and failure handling, using a simple, open protocol. The basic tool of the CCR is called a Port. Through ports, messages from sensors and actuators are concurrently listened to (and/or modified) for developing actions and updating the robot's state. Ports can be independent or belong to a group called a PortSet. Once a message arrives at a portset, a specific Arbiter, which can receive single messages or compose logical operations between them, dispatches the corresponding task to be automatically multi-threaded by the CCR. Figure 3.15 shows this process graphically.
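The Port/receiver idea can be mimicked with a plain queue and a handler table. The sketch below is only a rough Python analogy of the mechanism described above, not the actual CCR API (which is .NET/C#); the real CCR dispatches tasks to a thread pool, while here dispatch is synchronous for clarity:

```python
# Rough Python analogy of a CCR Port: messages posted to the port are
# matched against registered receivers, and the matching handler runs.
from collections import deque

class Port:
    def __init__(self):
        self.queue = deque()
        self.receivers = {}          # message type -> (handler, persistent)

    def receive(self, msg_type, handler, persistent=True):
        self.receivers[msg_type] = (handler, persistent)

    def post(self, msg_type, payload):
        self.queue.append((msg_type, payload))
        self._dispatch()

    def _dispatch(self):
        while self.queue:
            msg_type, payload = self.queue.popleft()
            if msg_type in self.receivers:
                handler, persistent = self.receivers[msg_type]
                if not persistent:   # one-time receivers are removed
                    del self.receivers[msg_type]
                handler(payload)

# A persistent receiver keeps the robot state updated from laser scans.
state = {}
laser_port = Port()
laser_port.receive("scan", lambda d: state.update(min_range=min(d)))
laser_port.post("scan", [2.5, 0.8, 1.9])
print(state)  # {'min_range': 0.8}
```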

DSS. This provides the flexibility of distribution and loose coupling of services. It is built on top of the CCR, giving definition to Services or Applications. A DSS application is usually called a service too, because it is basically a program using multiple services or instances of a service. These services are mainly (but not limited to): hardware components such as sensors and actuators; software components such as user interfaces, orchestrators and repositories; or aggregations for sensor fusion and related tasks. Services can operate in the same hosting environment, or DSS Node, or be distributed over a network, giving flexibility for the execution of computationally expensive services on distributed computers. It is therefore worth describing the seven components of a service. The unique key for each service is the Service URI, the dynamic Universal Resource Identifier (URI) assigned to a service instantiated in a DSS node, enabling the service to be identified among other running instances of the same service. The second component is the Contract Identifier, which is created, static and unique, within the service for identifying it from other services, also enabling elements of its Main Port portset to be communicated among subscribed services. The reader should notice that when multiple instances of a service are running in the same application, each instance will have the same contract identifier but a different service URI. The third component of a service is the Service State, which carries the current contents of a service. This state can be useful for creating an FSM (finite state machine) for controlling a robot; it can also be accessed for basic information — for example, if the service is a laser range finder, the state must hold the angular range, distance measurements, and sensor resolution. The fourth component is formed by the Service Partners, which enable a DSS application to be composed of several services providing higher-level functions and conforming more complex applications. These partner definitions are the "cables" wiring up the services that must communicate. The fifth component is the Main Port, or operations port, a CCR portset where services can talk to each other. An important feature of this port is that it is a private member of a service with specific types of ports (defined at service creation) that serve as channels for specific information sharing, thus providing a well-organized infrastructure for coupling distributed services. The sixth component is formed by the Service Handlers, which must be consistent with each type of port defined in the Main Port. These handlers operate on the messages received in the main port, which can come as requested information or as notifications, in order to develop specific actions according to the type of port received. The last component is composed of the Event Notifications, which represent announcements resulting from changes to a service state. To listen to those notifications, a service must subscribe to the monitored service. Each subscription is represented by a message on a particular CCR port, differentiating notifications and enabling orchestration using CCR primitives. Additionally, since DSS applications can work in a distributed fashion through the network, there is a special port called the Service Forwarder, which is responsible for the linkage (partnering) of services and/or applications running on remote nodes. Figure 3.16 gives a graphic representation of services in the DSS architecture.
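The seven components just listed can be caricatured in a toy class. All identifiers below are invented for illustration and are not real MSRDS values; the point is only the relationship between contract identifier (shared), service URI (per instance), state, and notifications:

```python
# Toy model of the seven DSS service components; all identifiers are
# invented for illustration and are not real MSRDS contract strings.
import itertools

_instance_counter = itertools.count()

class Service:
    CONTRACT = "urn:example:laserrangefinder"   # 2. contract identifier (static, shared)

    def __init__(self):
        self.uri = f"dsshost/laser/{next(_instance_counter)}"  # 1. service URI (unique)
        self.state = {"angular_range": 180, "ranges": []}      # 3. service state
        self.partners = []                                     # 4. service partners
        self.main_port = []                                    # 5. main port (operations)
        self.handlers = {"replace": self._on_replace}          # 6. service handlers
        self.subscribers = []                                  # 7. event notification targets

    def post(self, op, body):
        self.main_port.append((op, body))
        self.handlers[op](body)

    def _on_replace(self, body):
        self.state.update(body)
        for notify in self.subscribers:     # announce the state change
            notify(self.state)

a, b = Service(), Service()
assert a.CONTRACT == b.CONTRACT and a.uri != b.uri  # same contract, unique URIs
seen = []
a.subscribers.append(seen.append)
a.post("replace", {"ranges": [1.2, 3.4]})
print(seen[0]["ranges"])  # [1.2, 3.4]
```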

VSE. This is an already developed service providing a simulation environment for rapid prototyping of software solutions. The simulator has a very realistic physics engine but lacks simulation of typical sensor errors.

VPL. This is a visual environment that enables programming with visual blocks, which correspond to already provided services. In this way, non-expert programmers can quickly start developing solutions or simple software services. This component also serves as a tool for easily composing robotics applications built upon the aggregation of multiple services. Even though it works in a drag-and-drop fashion, it also provides the option to generate C# code.

Samples and Tutorials. This is a set of already developed services demonstrating control of and interaction with simulated and popular academic robots. Popular algorithms such as visual tracking or recognition are also provided.

Visual Studio. Finally, this is the integrated development environment (IDE) that provides a good framework for rapid debugging and prototyping, simplifying error detection in service-oriented systems. It is worth mentioning that the coding of services is independent of languages and programming teams; thus services can be written in different programming languages, most commonly Python, VB, C++, and C#.

Page 134: PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response

CHAPTER 3. SOLUTION DETAIL 116

Figure 3.15: CCR Architecture: when a message is posted into a given Port or PortSet, triggered Receivers call the Arbiters subscribed to the messaged port so that a task is queued and dispatched to the threading pool. Ports defined as persistent are concurrently listened to, while non-persistent ports are listened to once. Image from [137].

Page 135: PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response

CHAPTER 3. SOLUTION DETAIL 117

Figure 3.16: DSS Architecture. The DSS is responsible for loading services and managing the communications between applications through the Service Forwarder. Services can be distributed on the same host and/or through the network. Image from [137].

Page 136: PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response

CHAPTER 3. SOLUTION DETAIL 118

Having explained the components, the typical schema under which MSRDS works is shown in Figure 3.17. This design is used repeatedly in this dissertation. In this way we are flexible to upgrade sensors or actuators while maintaining the core behavioral component (or user interface) that orchestrates operations from perceptions to actions. At the same time, we are able to plug in newly developed services or more sophisticated algorithms from repositories such as [243, 147, 133, 152, 275, 250, 73, 185], or even take our encapsulated developments towards newly proposed architectures for search and rescue such as [3]. Three graphic examples of how behaviors are coded under this design paradigm are shown in Figure 3.18: at the top the handle-collision behavior, in the middle the visual recognition behavior, and at the bottom the seek behavior, all with their generic inputs and outputs.

Figure 3.17: MSRDS Operational Schema. Even though DSS is on top of CCR, many services access CCR directly, which at the same time works at a low level as the mechanism through which orchestration happens, so it is placed beside the DSS. Image from [137].

Concluding this chapter, we have followed the Multi-agent Systems Engineering methodology to generate an MRS able to deal with urban search and rescue missions. This included listing the essential requirements and making a hierarchical diagram of the most relevant goals. Then we decomposed the goals into global and local tasks according to a defined team of robots. Additionally, we turned those tasks into robotic operations and clearly organized them as roles, behaviors, and actions. Next, we developed an architecture to couple those elements and provide robustness to our system by means of hybrid intelligence, leaving the deliberative parts to human operators (open for possible future autonomy) and the autonomous reactions to the robots. Finally, we have explained how everything herein was coded so that it can be completely reused and upgraded according to state-of-the-art possibilities and needs. Thus, we end this chapter with a proposed MRS for rescue missions that falls into the following classification according to [95, 63, 99, 110]:


Figure 3.18: Behavior examples designed as services. The top represents the handle-collision behavior which, according to a goal/current heading and the laser scanner sensor, evaluates possible collisions and outputs the corresponding steering and driving velocities. The middle represents the detection (victim/threat) behavior which, according to the attributes to recognize and the camera sensor, implements the SURF algorithm and outputs a flag indicating whether the object has been found and the corresponding attributes. The bottom represents the seek behavior which, according to a goal position, the current position and the laser scanner sensor, evaluates the best heading using the VFH algorithm and then outputs the corresponding steering and driving velocities.
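The three behaviors of Figure 3.18 share the same service shape: sensor inputs in, actuator commands out. The handle-collision case can be caricatured as below; the 0.5 m threshold and the turn-away rule are invented for illustration and greatly simplify the actual behavior:

```python
# Simplified caricature of the handle-collision behavior of Figure 3.18:
# laser scan + commanded heading in, steer/drive command out. Threshold
# and steering rule are hypothetical, not the thesis implementation.

def handle_collision(scan, heading, safe_dist=0.5):
    """scan: list of (angle_deg, range_m) pairs; heading: commanded angle."""
    threats = [a for a, r in scan if r < safe_dist]
    if not threats:
        return heading, 0.6                 # path clear: keep heading, cruise
    # Steer 90 degrees away from the mean threat direction and slow down.
    mean_threat = sum(threats) / len(threats)
    steer = heading - 90 if mean_threat >= heading else heading + 90
    return steer, 0.2

print(handle_collision([(-30, 2.0), (0, 0.3), (30, 1.8)], heading=0))
```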


• Single-task robots because each robot can develop at most one task at a time.

• Multi-robot tasks because even when some tasks require only one robot, performance is enhanced with multiple entities.

• Time-extended assignment because even though there can be instantaneous allocations according to robots’ local perceptions, we will consider a global model of how tasks are expected to arrive over time.

• SIZE-PAIR/LIM because we will use only 2-3 robots at most.

• COM-NONE because robots will not communicate explicitly with each other but rather through the environment and perceptions.

• TOP-TREE because the explicit communications topology will be delimited to a hierarchy tree with controlling humans or supervisors at the top.

• BAND-LOW because we will always assume that communications in hazardous environments imply a very hard cost, so the robots are highly independent.

• ARR-DYN because their collective configuration may change dynamically according to tasks.

• PROC-FSA because of the use of finite state models to simplify the reasoning.

• CMP-HOM because the robotic team is composed essentially of homogeneous (same physical characteristics) robots.

• Cooperative because there is a team of robots operating together to perform a global mission.

• Aware because robots have some kind of knowledge of their teammates (e.g. their roles and poses).

• Strong/Weak coordination because in some cases the robots follow a set of rules to interact with each other (e.g. flocking), while in other situations they develop weak coordination, each pursuing independent tasks (e.g. tracking an object).

• Distributed/Weakly-Centralized because even though communication occurs towards a central station controlled/supervised by human operators, robots are completely autonomous in the decision process with respect to each other and there is no leader. Weakly centralized is considered because, in the flocking example, one robot may assume a leader role just to assign proper positions to the other robots in the formation.

• Hybrid because the system is provided with an overall strategy (deliberation), while still enhancing locality for autonomous operations (reaction).

The next chapter includes simulated and real implementations of this proposed MRS, demonstrating the usefulness of our solution.


Chapter 4

Experiments and Results

“The central idea that I’ve been playing with for the last 12-15 years is that what we are and what biological systems are. It’s not what’s in the head, it’s in their interaction with the world. You can’t view it as the head, and the body hanging off the head, being directed by the brain, and the world being something else out there. It’s a complete system, coupled together.”

– Rodney Brooks. (Robotics Scientist)

CHAPTER OBJECTIVES
— Which simulated and real tests.
— What qualitative and quantitative results.
— How good is it.

It would be too ambitious to think that we could develop tests covering all three global tasks and every sequence diagram within this dissertation, even semi-autonomously. There are many open issues outside the scope of this dissertation that make it harder to develop full operations: among them, the simultaneous localization and mapping problem; reliable communications, sensor data, and actuator operations; robust low-level control for maintaining commanded steering and driving velocities; and even having computers powerful enough for human–multi-robot interfacing. We therefore delimited our tests to implementing the most relevant behaviors and developing autonomous operations that are easier to compare with state-of-the-art literature. This means, for example, that it is perhaps too soon to be testing everything related to the Support and Relief stage [80, 204], but it is still important to include it in our planned solution.

Accordingly, the experimentation phase consisted of simulations using the MSRDS VSE and of testing the architecture and the most relevant autonomous operations in real implementations. The following sections present details on these experiments.



4.1 Setting up the path from simulation to real implementation

This section is included as an argument for the validity of simulated tests versus real implementations. Here we demonstrate a quick way we created to obtain reliable 3D simulated environments and the fast process of moving to real hardware with highly transparent service interchange.

Using MSRDS, the easiest way we have found for creating simulated environments, besides modifying already created ones, is to save SimStates (scenes) into .XML files or into scripts from SPL (for more information on SPL refer to [125]), and then load them through C# or VPL. Basically, we developed the entities and environments with SPL. This software enables the programmer to create realistic worlds, taking simple polygons (for example a box) with appropriate meshes and making use of a realistic physics engine (MSRDS uses the AGEIA PhysX engine). SPL menus let users create the environments and entities in a script composed by click-based programming. Most typical actuators and sensors are included in the wide variety of SPL simulation tools. Besides the already built robot models, SPL also allows the easy creation of other robots, including joints and drives. Another way to create these entities is to follow the C# samples and import computer models for a specific robot or object, or even just import the models already provided with the MSRDS installation.

Once the environment and the entities are defined, the SPL script is exported into an XML file and then loaded from a C# DSS service, or the SPL script is saved and then loaded from a VPL file, ending up with the complete 3D simulated world. Figure 4.1 shows these two options graphically. What is more, we have created a service, adapting code from internet repositories, that creates 3D maze-like scenarios from simple image files, as shown in Figure 4.2. This and some other generic services developed within this dissertation are available online at http://erobots.codeplex.com/.

Figure 4.1: Process to quick simulation. Starting from a simple script in SPL we can decide which path is more useful for our robotic control needs and programming skills, either going through C# or VPL.


Figure 4.2: Created service for fast simulations with maze-like scenarios. Available at http://erobots.codeplex.com/.

Having briefly explained how we set up simulations, the important thing is how to take them transparently into real implementations. Here, the best aspect is that MSRDS already has working services for generic differential/skid drives, laser scanners, and webcam-based sensors. For the particular case of the Pioneer robots, MSRDS provides a complete simulated version and drivers for the real hardware, including every service needed to control each component of the robot. In this way, commands sent to the simulated robot are identical to those needed by the real hardware. Thus, going from simulation to reality, when services are properly designed, is a matter of changing a reference to the service name used in C#, or changing the corresponding service block in VPL. Figure 4.3 shows the simplicity of this process.
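The one-line swap can be pictured as selecting which drive service a behavior partners with. The service names below are placeholders, not actual MSRDS contract strings, and the Python classes only stand in for the real simulated and hardware drive services:

```python
# Sketch of the simulation-to-reality swap: the controller code is
# unchanged and only the partnered drive service reference differs.
# Service names are placeholders, not real MSRDS identifiers.

class SimulatedDrive:
    name = "simulateddifferentialdrive"
    def set_drive_power(self, left, right):
        return f"[sim] wheels {left}/{right}"

class PioneerDrive:
    name = "p3at.drive"
    def set_drive_power(self, left, right):
        return f"[serial] wheels {left}/{right}"

DRIVE_SERVICES = {cls.name: cls for cls in (SimulatedDrive, PioneerDrive)}

def make_controller(service_reference):
    """'Changing a line of code' amounts to changing this reference."""
    drive = DRIVE_SERVICES[service_reference]()
    return lambda left, right: drive.set_drive_power(left, right)

go = make_controller("simulateddifferentialdrive")  # swap string for the real robot
print(go(0.5, 0.5))  # [sim] wheels 0.5/0.5
```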

As may be inferred, one of the biggest issues in robotics research is that simulated hardware never behaves like real hardware. For this reason, the next section presents our experiences in simulating and implementing our behavior services among other technologies.

4.2 Testing behavior services

This section presents the tests we developed in order to explore the functionality of SOR systems through the implementation of services provided by different companies. We also developed experiments using different types of technologies in order to observe the system's performance. Lastly, we implemented the most relevant behaviors described in the previous chapter in a service-oriented fashion. All the experiments were developed both in simulation and in real implementations using the Pioneer robots. Additionally, tests were run locally, using a piggy-backed laptop on the real robots or running all the simulation services on the same computer, and remotely, using wirelessly connected computers; this is represented graphically in Figure 4.4 and was done so as to explore the real impact of the communications overhead among networked services on real-time performance [82, 73].

First, taking advantage of the MSRDS examples, we implemented a simple program

Page 142: PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response

CHAPTER 4. EXPERIMENTS AND RESULTS 124

Figure 4.3: Fast simulation-to-real implementation process. Going from a simulated C# service to the real hardware implementation is a matter of changing a line of code: the service reference. In VPL, simulated and real services are clearly identified, providing easy interchange for the desired test.

Figure 4.4: Local and remote approaches used for the experiments.


for achieving voice-commanded navigation in simulation and in real implementations using the MS Speech Recognition service. This application consisted in recognizing voice commands such as 'Turn Left', 'Turn Right', 'Move Forwards', 'Move Backwards', 'Stop', and alternative phrases for the same commands, in order to control the robot's movements. This experiment showed us the feasibility of developing applications using already-built services from the same company that provides the development framework. We showed that either way, in VPL or C#, the simulated and real implementations worked equally well. The real-time processing also met the needs for controlling a real Pioneer 3-AT via serial port without any inconvenience. Additionally, because an already-developed service was used, building the complete speech recognition application for teleoperated navigation was fast and easy. Figure 4.5 shows a snapshot of the speech recognition service in its simulated version.

Figure 4.5: Speech recognition service experiment for voice-commanded robot navigation. Available at http://erobots.codeplex.com/.

Second, considering that vision sensors require high computational processing time, we decided to test MSRDS with an off-the-shelf service provided by the company RoboRealm [238]. The main intention was to observe MSRDS's real-time behavior with a higher-processing-demand service which, at the same time, was created by an external-to-Microsoft provider. Therefore, we developed an approach for operating the RoboRealm vision system through MSRDS. One of the experiments consisted in a visual joystick, which provided the vision commands for the robot to navigate. It relied on a real webcam tracking an object and determining its center of gravity (COG). Depending on the COG location with respect to the center of the image, the speed of the wheels was


set as if using a typical hardware joystick, thus driving the robot forward, backward, turning and stopping. Code changes for moving between simulation and real implementation were very similar to those of the speech recognition experiment and the explanations of section 4.1. Figure 4.6 shows a snapshot of how the simulation looks when running MSRDS and RoboRealm. From this experiment we observed that MSRDS is well suited for operating with real-time vision processing and robot control. Results were essentially the same in simulation and in the real implementation tests. This test thus gave us an application for vision processing and robot control using SOA-based robotics, enabling us to implement services as in [275, 116, 279] with a very simple, fast and yet robust method. It is also worth mentioning that applications with RoboRealm are easy to build and very extensive, from simple feature recognition such as road signs for navigation to more complex situational recognition [207], all in a click-based programming language.
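The COG-to-wheel-speed mapping described above can be sketched as follows. This is an illustrative Python reconstruction with assumed image size, dead zone and gains, not the actual RoboRealm/MSRDS code:

```python
# Hedged sketch of the visual-joystick mapping: the tracked object's center
# of gravity (COG), relative to the image center, is turned into differential
# wheel speeds. Image size, dead zone and gains are illustrative assumptions.

def visual_joystick(cog_x, cog_y, width=320, height=240,
                    dead_zone=30, max_speed=0.5):
    """Map a COG pixel position to (left, right) wheel speeds."""
    dx = cog_x - width / 2    # positive: object right of center -> turn right
    dy = height / 2 - cog_y   # positive: object above center -> go forward
    if abs(dx) <= dead_zone and abs(dy) <= dead_zone:
        return 0.0, 0.0                      # object centered: stop
    forward = max(-1.0, min(1.0, dy / (height / 2))) * max_speed
    turn = max(-1.0, min(1.0, dx / (width / 2))) * max_speed
    return forward + turn, forward - turn    # (left, right)
```

Holding the object above the image center drives the robot forward, below drives it backward, and holding it to one side makes one wheel spin faster than the other, turning the robot exactly as a hardware joystick would.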

Figure 4.6: Vision-based recognition service experiment for visual-joystick robot navigation. Available at http://erobots.codeplex.com/.

Finally, even though every real implementation used the Pioneer services provided within MSRDS for controlling the motors, in this experiment we implemented autonomous mobile robot navigation with the Laser Range Finder sensor service and the MobileRobots ARCOS Bumper service as the external-to-Microsoft providers of hardware-controlling services. Keeping our exploratory purposes on SOA-based robotics, we created a boundary-follow behavior for testing both the simulated and the real version of it, as well as the capabilities for real-time orchestration between sensor and actuator services. Here, an interesting behavior was observed: while in simulation the robot followed the wall without any trouble, in the real experiments the robot sometimes started turning, trying to find the lost wall. The obvious answer is that real sensors are not as predictable and robust as simulated ones. This reinforced the advantage of SOA-based robotics for quickly reaching real experiments in order to deal with real, more relevant robotics problems. The most interesting observations from this experiment reside in the establishment of MSRDS as an orchestration service interacting with real sensor and actuator services provided by MobileRobots, the Pioneer manufacturer, and in the appropriate real-time behavior we observed, with instant reaction to minimal sensor changes and no communication problems either locally or remotely.
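As a rough illustration of the boundary-follow logic, the following Python sketch keeps a wall at a target distance using the closest laser return on one side. Distances, gains and the sector bounds are assumptions, not the actual service's parameters:

```python
# Hedged sketch of one step of a left-wall-follow controller, assuming a
# laser scan given as a list of (angle_deg, range_m) pairs. The target
# distance, speed and gain are illustrative.

def wall_follow_step(scan, target_dist=0.5, speed=0.3, k_p=1.0):
    """Return (left, right) wheel speeds keeping a wall on the left side."""
    # distance to the wall: closest return in the left sector (60..120 deg)
    left_ranges = [r for a, r in scan if 60 <= a <= 120]
    if not left_ranges:
        # wall lost: arc toward where the wall should be, mirroring the real
        # experiments where the robot "starts turning to find the lost wall"
        return speed * 0.2, speed
    error = min(left_ranges) - target_dist   # positive: too far from wall
    correction = max(-speed, min(speed, k_p * error))
    # too far -> slow the left wheel (steer left); too close -> steer right
    return speed - correction, speed + correction
```

Note how the "wall lost" branch reproduces, by design, the searching turn observed with the real robot: with noisy real sensors the left sector occasionally comes back empty, which never happens with the ideal simulated scanner.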

Therefore, having gained confidence in the SOR approach, we started developing the behaviors described in the previous chapter in a service-oriented fashion, intending to reduce


time and costs in development and deployment. The most relevant include: wall-follow, seek (used by 15 out of the 36 behaviors), flock (including safe wander, hold formation, lost, aggregate and every formation used), field cover1 (including disperse, safe wander, handle collisions, avoid past and move forward), and victim/threat (visual recognition). Figures 4.7-4.11 show snapshots of these robotic behavior services, all of which are also available at http://erobots.codeplex.com/. Other behaviors were not shown or not implemented because they involve more sophisticated operations, such as giving aid, which is a barely explored set of actions according to the state-of-the-art literature and out of the scope of this dissertation, or because they have no significant visual effect, such as wait or resume.
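For the seek behavior, the VFH [41] steering idea can be caricatured as follows. This Python sketch shows only the core mechanism, a coarse polar obstacle histogram and selection of the free sector nearest the goal bearing, not the full algorithm; sector width, range limit and threshold are assumptions:

```python
# Hedged sketch of the core VFH idea used by the seek behavior: build a
# polar obstacle-density histogram from (angle_deg, range_m) readings and
# steer toward the free sector closest to the goal bearing.

def vfh_pick_direction(scan, goal_deg, sector_width=30, max_range=2.0,
                       threshold=0.5):
    """Return a steering direction (sector center, deg), or None if blocked."""
    n_sectors = 360 // sector_width
    density = [0.0] * n_sectors
    for angle, rng in scan:
        sector = int(angle % 360) // sector_width
        # closer obstacles contribute more density to their sector
        density[sector] += max(0.0, max_range - min(rng, max_range))
    free = [i for i in range(n_sectors) if density[i] < threshold]
    if not free:
        return None                      # fully blocked: caller must stop
    centers = [i * sector_width + sector_width / 2 for i in free]
    # pick the free sector center with the smallest angular distance to goal
    return min(centers, key=lambda c: abs((c - goal_deg + 180) % 360 - 180))
```

With an empty scan the robot simply heads toward the goal sector; an obstacle near the goal bearing raises that sector's density above the threshold and pushes the choice to the nearest free sector instead.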

Figure 4.7: Wall-follow behavior service. Viewed from the top; the red path is made by a robot following the left (white) wall in the maze, while the blue one corresponds to another robot following the right wall.

Figure 4.8: Seek behavior service. Three robots in a maze viewed from the top, one static and the other two going to specified goal positions. The red and blue paths are generated by each of the navigating robots. To the left of the picture is a simple console for appreciating the VFH [41] algorithm operations.

1Refer to Appendix D for complete detail on this behavior.


Figure 4.9: Flocking behavior service. Three formations (left to right): line, column and wedge/diamond. In the specific case of 3 robots, a wedge looks just like a diamond. Red, green and blue represent the traversed paths of the robots.

Figure 4.10: Field-cover behavior service. At the top, two different global emergent behaviors for the same algorithm and the same environment, both showing appropriate field coverage or exploration. At the bottom, in two different environments, a single robot doing the same field-cover behavior, showing its traversed path in red. Appendix D contains complete detail on this behavior.


Figure 4.11: Victim and Threat behavior services. Being limited to vision-based detection, different figures were used to simulate threats and victims according to recent literature [116, 20, 275, 207]. To recognize them, already-coded algorithms were implemented, including SURF [26], HoG [90] and face detection [279] from the popular OpenCV [45] and EmguCV [96] libraries.


Closing the section, the best experience from these tests was achieving fast 3D simulation environments and then quickly moving into the real implementations using off-the-shelf services with MSRDS. Also, since we observed appropriate processing times under real robotic requirements, this gave us the confidence to implement our intended architecture without worrying about possible communication issues. The next section details the experiences with the implementation of our proposed infrastructure.

4.3 Testing the service-oriented infrastructure

At this point, the experiments led us to a well-integrated application containing all the behavior services that had been coded, plus additional features such as being able to create 3D simulation environments as fast as creating an image file, and even almost perfect localization and mapping, as can be appreciated in Figure 4.12. Nevertheless, in the words of Mong-ying A. Hsieh et al. in [131]: "Field-testing is expensive, tiring, and frustrating, but irreplaceable in moving the competency of the system forward. In the field, sensors and perceptual algorithms are pushed to their limits [. . . ]". Thus, achieving good localization is perhaps the biggest problem towards successfully implementing every coded behavior in real robots. So, in this section we describe the first step towards relevant real implementations: testing the infrastructure.

Figure 4.12: Simultaneous localization and mapping features for the MSRDS VSE. Robot 1 is the red path, robot 2 the green and robot 3 the blue. They are not only mapping the environment by themselves, but also contributing towards a team map. Nevertheless, localization is a simulation cheat, and the laser scanners have none of the uncertainty they will have in real hardware.

It is worth recalling that many architectures for MRS have been proposed [63, 223] and evaluated [218], but only a few work under the service-oriented paradigm and fulfill the architectural and coordination requirements we address. One example is SIRENA [38], a Java-based framework to seamlessly connect heterogeneous devices from the industrial, automotive, telecommunication and home automation domains. It is perhaps one of the first projects that pointed out the benefits of using a Service-Oriented Architecture (SOA). Even though in its current state of development it has shown its feasibility and functionality, communication has been limiting its scalability in the intended application for real-time


embedded networked devices. A second example is SENORA [231], a framework based on peer-to-peer technology that can accommodate a large number of mobile robots with limited effects on the quality of service. It has been tested on robots working cooperatively to obtain sensory information from remote locations, and its efficiency and scalability have been demonstrated. Nevertheless, a lack of adequate abstraction and standardization has caused difficulties in reusing and integrating services. A third example is [73], which consists in an instrumented industrial robot that must be able to localize itself, map its surroundings and navigate autonomously. The relevance of this project is that everything works as a service on demand, meaning that there were localization services, navigation services, kinematic control services, feature extraction services, SLAM services, and other operational services. This allows upgrading any of the services without demanding any changes in other parts of the system. Accordingly, in our work we want to demonstrate adequate abstractions as in [73] while already working with multiple robots as [231] intended, and while maintaining time-suitable communications for achieving good multi-robot interoperability.

Additionally, we want to fulfill architectural requirements such as robot hardware abstraction, extensibility and scalability, reusability, simple upgrading and integration of new components and devices, simple debugging, ease of prototyping, and the use of standardized tools to add relevance. We are also concerned with particular requirements for multi-robot coordination, such as having a persistent structure allowing for variations in team composition, an approach to hybrid intelligent control for decentralization and distribution, and the use of suitable messaging allowing the user to easily modify what needs to be communicated. In this way, the experiments are intended to demonstrate functionality and interoperability with a team of Pioneer robots achieving: time-suitable communications, individual and cooperative autonomous operations, semi-autonomous user-commanded operations, and ease of adding/removing robotic units to/from the working system. Our focus is to prove that the infrastructure facilitates the integration of current and new developments in terms of robotic software and hardware, while keeping a modular structure so that it remains flexible without demanding complete system modifications.

In this way, we implemented the architecture design and topology described in section 3.4. For the system element we used a laptop running Windows 7 with an Intel Core 2 Duo at 2.20 GHz and 3 GB RAM. For the (homogeneous) subsystems we used 3 RS232-connected nodes consisting of: 1) a laptop running Windows XP with an Intel Atom at 1.6 GHz and 1 GB RAM for organizing data and controlling the robot, including image processing and communications with the system element; 2) the Pioneer microcontroller with the embedded ARCOS software for managing the skid drive, encoders, compass, bumpers, and sonars; and 3) a SICK LMS200 sensor providing laser scanner readings. System and subsystems were connected through the WAN at our laboratory, which was in normal use by other colleagues. Now, the typical configuration when running this kind of infrastructure requires a human operator to log into an operator control unit (OCU) and then connect to the robots and communicate high-level data; finally, the robotic platforms receive the message and start operating. In our architecture the steps are similar:

1. Every node in the subsystem must be started; services will then load and start the specified partners, operating and subscribing all components.

2. Run the system service, specifying subscriptions to the existing subsystems. In this service, the human operator can monitor and issue commands if required.

3. Messaging within subsystems and the system starts autonomously after subscription completion, and everything is ready to work.
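The start-up and messaging flow above can be sketched in plain Python. Class and port names mirror the description in this section but are illustrative, not MSRDS identifiers:

```python
# Hedged sketch of the subscription flow: components push readings to their
# subsystem, the subsystem aggregates a state and reports it to the system
# element, and the system element answers commands. Names are illustrative.

class Subsystem:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.state = {}

    def component_update(self, component, reading):
        # components have no input: they only send data to the subsystem
        self.state[component] = reading

class SystemElement:
    def __init__(self):
        self.subsystems = {}
        self.states = {}

    def subscribe(self, subsystem):
        self.subsystems[subsystem.robot_id] = subsystem

    def replace(self, robot_id, state):
        # "Replace" port: receive a subsystem's full state
        self.states[robot_id] = dict(state)

    def update_success_msg(self, robot_id, command):
        # "UpdateSuccessMsg" port: answer a command to one subsystem
        return f"{robot_id} <- {command}"

# start-up order mirrors the steps above: subsystems first, then the
# system service subscribes and messaging flows upward
sub = Subsystem("pioneer1")
system = SystemElement()
system.subscribe(sub)
sub.component_update("laser", [1.2, 0.9, 2.0])
system.replace(sub.robot_id, sub.state)
```

Removing the `SystemElement` leaves the `Subsystem` fully operational, which is the point made next: subsystem robots can work without the high-level service, at the cost of supervision and team-level features.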

It is worth insisting that subsystem robots can start operations without the high-level system service running; however, supervision and additional team intelligence features may be lost. Also, since there is no explicit communication between subsystems, the absence of the high-level service could lead to a lack of interoperability. For ease of understanding these communication links between system and subsystems, Figure 4.13 exemplifies them with one subsystem. It is important to notice that components have no input and just send their data to the subsystem element. The subsystem then receives and organizes the information from the components to update its state and report it to the system element. Finally, the system element receives each subsystem's state through the Replace port and is able to answer any command to each subsystem through the UpdateSuccessMsg port.

Figure 4.13: Subscription process. MSRDS partnership is achieved in two steps: running the subsystems and then running the high-level controller asking for subscriptions.

Once the infrastructure was running, testing involved four different operations:

1. Single-robot manual. First, we considered transmitting the sensor readings to the system element from different locations. Second, joystick navigation through our building's corridors, moving the joystick at the system element and sending commands to the subsystem Pioneer robot.

2. Single-robot autonomous. First, the system element triggered the command for autonomous sequential navigation (e.g. a square path). Second, the system element commanded the autonomous wall-following behavior. Third, the system element commanded obstacle-avoidance navigation.

3. Multi-robot manual. Same as the single-robot manual but now with two subsystems.

4. Multi-robot autonomous. Same as the single-robot autonomous but now with two subsystems and a bit of negotiation for deciding which wall to follow, plus collision avoidance according to the robots' IDs.


Table 4.1: Experiments' results: average delays

                               Single-Robot (15 min)   Multi-Robot (30 min)
                                                       Subsystem 1   Subsystem 2
Messages sent from subsystem   4213                    8778          8789
Messages received in system    4210                    8762          8764
Total loss                     0.07%                   0.18%         0.28%
Messages per second            4.6778                  4.6890        4.6954
Highest delay                  0.219 s                 0.234 s       0.219 s
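The derived figures in Table 4.1 can be reproduced from the raw message counts, assuming loss = (sent − received)/sent and rate = received/duration. This reproduces the single-robot numbers exactly; the multi-robot rates appear to be averaged over a different interval, so only their losses are checked here:

```python
# Sanity check of Table 4.1's derived figures from the raw message counts.
# Assumed definitions (not stated explicitly in the text):
#   loss = (sent - received) / sent,  rate = received / duration

def loss_pct(sent, received):
    return 100.0 * (sent - received) / sent

def msgs_per_second(received, seconds):
    return received / seconds

single_loss = loss_pct(4213, 4210)             # ~0.07 %
single_rate = msgs_per_second(4210, 15 * 60)   # ~4.6778 msg/s over 15 min
sub1_loss = loss_pct(8778, 8762)               # ~0.18 %
sub2_loss = loss_pct(8789, 8764)               # ~0.28 %
```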

Despite the four basic differences in our experiments, and despite the fact that the number of colleagues using the network as well as the subsystems' positions kept changing, the resulting delays were practically the same. Some of these results are shown in Table 4.1.

These experiments showed the successful instantiation of the architecture using multiple Pioneer robots and a remote station. Preliminary quantitative results indicated that the architecture is task-independent and robot-number-independent with respect to time-suitable communications, including well-balanced messaging (less than 0.1% difference between 2 homogeneous robots). It also enabled us to fully control the robots and meet the requirements for concurrent robotic processing, while having appropriate communication times with the higher-level control during both manual and autonomous operations. Finally, it is worth emphasizing that even though non-SOA approaches could cut delays in half, as demonstrated in [4], the observed results suffice for good MRS interoperability, and thus the real impact cannot be considered a disadvantage.

In view of that, this architecture proves useful for our intended application in search and rescue missions, where robots need to exchange application-specific data or information such as capabilities, tasks, locations, sensor readings, etc. Also, even though run-time overhead matters less than it used to, because modern hardware is fast and cheap, CCR and DSS are essential for reducing complexity. Therefore, in the next section we detail more sophisticated operations using this infrastructure but with a different set of robots.

4.4 Testing more complete operations

Because of the huge number of operations comprising each of the described global tasks in a rescue mission, and the lack of a good way to evaluate our contributions against the literature, we decided to implement the most popular operation for a rescue MRS: the autonomous exploration of unknown environments. This operation has become very popular in the robotics community mainly because it is a challenging task with several potential applications. The


main goal in robotic exploration is to minimize the overall time for covering an unknown environment. So, we used our field-cover behavior to achieve single- and multi-robot autonomous exploration, evaluating essentially the time for covering a complete environment. For a complete description of how the algorithm works, refer to Appendix D and reference [71]. The simulated and real tests are presented below.

4.4.1 Simulation tests

For the simulation tests, we used a set of 3 Pioneer robots in their simulated version for MSRDS. Also, for better appreciation of our results, we implemented a 200 sq. m 3D simulated environment qualitatively equivalent to the one used in Burgard's work [58], one of the most relevant in recent literature. Robots are equipped with laser range scanners limited to 2 m and a 180° view, and have a maximum velocity of 0.5 m/s. As for metrics, we used the percentage of explored area over time as well as an exploration quality metric proposed to measure the balance of individual exploration within multiple robots [295]; refer to Table 4.2.

EXPLORATION (%). For single and multiple robots, measures the percentage of gathered locations from the total 1-meter-grid discrete environment. With this metric we know the total explored area in a given time and the speed of exploration. Example: in Figure 4.25, an average of 100% Exploration was achieved in 36 seconds.

EXPLORATION QUALITY (%). For multiple robots only, measures how much of the team's total exploration has been contributed by each teammate. With this metric we know our performance in terms of resource management and robot utilization. Example: in Figure 4.27(b), two robots reached 100% Exploration with approximately 50% Exploration Quality each.

Table 4.2: Metrics used in the experiments.
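The two metrics (and the team-redundancy figure used later in this section) can be sketched on a 1-meter grid as follows. Crediting each explored cell to the robot that first reported it is our reading of the exploration-quality metric, not a quote of [295]:

```python
# Hedged sketch of the exploration metrics on a 1-meter grid.
# visited_by_robot: {robot_id: set of visited grid cells}
# first_visits:     {robot_id: set of cells that robot reported first}
# The first-reporter attribution is an assumption of this sketch.

def exploration_pct(visited_by_robot, total_cells):
    """Percentage of the discrete environment gathered by the team."""
    explored = set().union(*visited_by_robot.values())
    return 100.0 * len(explored) / total_cells

def exploration_quality(first_visits):
    """Each robot's share of the team's total exploration, in percent."""
    total = sum(len(cells) for cells in first_visits.values())
    return {robot: 100.0 * len(cells) / total
            for robot, cells in first_visits.items()}

def team_redundancy(visited_by_robot):
    """Share of explored cells visited by two or more robots, in percent."""
    counts = {}
    for cells in visited_by_robot.values():
        for cell in cells:
            counts[cell] = counts.get(cell, 0) + 1
    explored = len(counts)
    return 100.0 * sum(1 for c in counts.values() if c >= 2) / explored
```

A perfectly balanced team of n robots would show an exploration quality near 100/n percent each, which is the trend reported after dispersion in the multi-robot experiments below.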

Single Robot Exploration

Since our algorithm may or may not perform a dispersion, depending on the robots' proximity, we decided to test it for an individual robot first. These tests first considered the Safe Wander behavior without the Avoid Past action, so as to evaluate the importance of the wandering factor [10]. Figure 4.14 shows representative results for multiple runs using different wander rates. Since we are plotting the percentage of exploration over time, flat zones in the curves indicate exploration redundancy (i.e. a period of time in which the robot did not reach unexplored areas). Consequently, in these results we want to minimize the flat zones in the graph, referring to minimum exploration redundancy, while gathering the highest percentage in the shortest time. It is worth mentioning that safe wandering alone cannot ensure total exploration, so we defined a fixed 3-minute period to compare the achieved explorations. We observed higher redundancy for the 15% and 5% wandering rates, as presented in Figures 4.14(a)


and 4.14(c), and better results for the 10% wandering rate presented in Figure 4.14(b). This 10% was later used in combination with Avoid Past to produce over 96% exploration of the simulated area in 3 minutes, as can be seen in Figure 4.14(d). This fusion enhances the wandering so as to ensure total coverage. A statistical analysis of 10 runs is presented in Table 4.3 to validate repeatability, while typical navigation using this method is presented in Figure 4.15 as a visual validation of the qualitative results. It is important to observe that, given the size of the environment and the robot's dimensions, one environment is characterized by open spaces while the other provides more cluttered paths. Nevertheless, this very simple algorithm is able to produce exploration as reliable and efficient as that of more complex counterparts in the literature, in both open and cluttered environments.
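The effect of Avoid Past can be illustrated on a small discrete grid: always moving to the least-visited neighboring cell guarantees eventual full coverage, which is what the fusion above achieves in continuous space. Grid size, the 4-neighborhood and the step cap are illustrative assumptions, not the thesis algorithm (see Appendix D):

```python
import random

# Hedged grid-world sketch of the Avoid Past idea: at each step the robot
# moves to the least-visited neighboring cell (random tie-break). Pure
# random wandering can leave long "flat zones" of redundancy; avoiding the
# past drives the walk into unvisited cells until the grid is covered.

def avoid_past_coverage(width=5, height=5, max_steps=10000, seed=1):
    rng = random.Random(seed)
    visits = {(x, y): 0 for x in range(width) for y in range(height)}
    pos = (0, 0)
    visits[pos] = 1
    for step in range(1, max_steps + 1):
        x, y = pos
        neighbors = [(nx, ny) for nx, ny in
                     [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                     if (nx, ny) in visits]
        least = min(visits[n] for n in neighbors)
        pos = rng.choice([n for n in neighbors if visits[n] == least])
        visits[pos] += 1
        if all(v > 0 for v in visits.values()):
            return step          # steps needed for full coverage
    return None                  # cap reached without full coverage
```

Running this with different seeds shows varying cover times but never a failure to cover, mirroring the over-96% exploration observed once Avoid Past is added to the 10% wandering rate.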


Figure 4.14: Single robot exploration simulation results: a) 15% wandering rate, with flat zones indicating high redundancy; b) better average results with less redundancy using a 10% wandering rate; c) a 5% wandering rate shows little improvement and higher redundancy; d) avoiding the past with a 10% wandering rate, resulting in over 96% completion of the 200 sq. m area exploration in every run using one robot.

Multi-Robot Exploration

In the literature-based environment, we tested an MRS using 3 robots starting inside a predefined nearby area, as in typical robot deployment in unknown environments. The first tests considered only Disperse and Safe Wander without Avoid Past, which is worth noting


RUNS    AVERAGE     STD. DEVIATION
10      177.33 s    6.8 s

Table 4.3: Average and Standard Deviation for full exploration time in 10 runs using Avoid Past + 10% wandering rate with 1 robot.


Figure 4.15: Typical navigation for qualitative appreciation: a) the environment based upon Burgard's work in [58]; b) a second, more cluttered environment. Snapshots are taken from the top view and the traversed paths are drawn in red. In both scenarios the robot efficiently traverses the complete area using the same algorithm. The black circle with a D indicates the deployment point.

because the results sometimes show quite efficient exploration, while at other times full exploration cannot be ensured. So, this combination may be appropriate in cases where it is preferable to get an initial rough model of the environment and then focus on improving potentially interesting areas in more specific detail (e.g. planetary exploration) [295].

Nevertheless, more efficient results for cases where guaranteed total coverage is necessary (e.g. surveillance and reconnaissance, land mine detection [204]) were achieved using our exploration algorithm with Avoid Past. In our first approach, we intended to be less dependent on communications, so each robot avoids its own past only. Figure 4.16 shows the typical results for a single run, with the total exploration in Figure 4.16(a) and the exploration quality in Figure 4.16(b). We seek the fewest flat zones in the robots' exploration as well as reduced team redundancy, which represents locations visited by two or more robots. We can see that for every experiment full exploration is achieved, averaging a time reduction to about 40% of the time required for single-robot exploration in the same environment, and even to about 30% when the dispersion time is not counted. This is highly coherent with what is appreciated in the exploration quality, which showed a trend towards a perfect balance just after dispersion occurred, meaning that with 3 robots we can explore almost 3 times faster. Additionally, team redundancy holds around 10%, representing good resource management. It must be clear that, because of the wandering factor, not every run gives the same results; but even when atypical cases occurred, such as one robot being trapped at dispersion, the team delays exploration while being redundant in its attempt to disperse and then develops a very efficient full exploration in about 50 seconds after dispersion, resulting in a perfectly balanced exploration quality. Table 4.4 presents the statistical analysis of 10 runs so


as to validate repeatability.

(a) Exploration. (b) Exploration Quality.

Figure 4.16: Autonomous exploration showing representative results in a single run for 3 robots avoiding their own past. Full exploration is completed almost 3 times faster than with a single robot, and the exploration quality shows a balanced result, meaning efficient resource (robot) management.

RUNS    AVERAGE    STD. DEVIATION
10      74.88 s    5.3 s

Table 4.4: Average and Standard Deviation for full exploration time in 10 runs using Avoid Past + 10% wandering rate with 3 robots.

The next approach also considered avoiding teammates' past. For this case, we assumed that every robot can communicate its past locations concurrently during exploration, which we know can be a difficult assumption in real implementations. Even though we were expecting a natural reduction in team redundancy, we observed a higher impact of interference and no improvement in redundancy. These virtual paths to be avoided tend to trap the robots, generating higher individual redundancy (flat zones) and thus producing an imbalanced exploration quality, which resulted in longer times for full exploration in typical cases; refer to Figures 4.17(a) and 4.17(b). In these experiments, atypical cases, such as when the robots dispersed as well as they could, resulted in exploration where each individual had practically just its own past to avoid, giving results similar to avoiding its own past only. Table 4.5 presents the statistical analysis of 10 runs of this algorithm. Finally, Figure 4.18 shows a visual qualitative comparison between Burgard's results and ours. A high similarity can be observed despite the very different algorithms.

An additional observation on the exploration results is shown in Figure 4.19: an emergent navigational behavior that results from running the exploration algorithm for a long time, which can be described as territorial exploration or even as in-zone coverage for surveillance tasks [204, 92]. What is more, in Figure 4.20 we present the navigation paths of the same autonomous exploration algorithm in different environments, including open areas, cluttered areas, dead-end corridors and rooms with minimal exits; all of them with inherent characteristics that challenge efficient multi-robot exploration. It can be observed that even in


(a) Exploration. (b) Exploration Quality.

Figure 4.17: Autonomous exploration showing representative results in a single run for 3 robots avoiding their own and teammates' past. Results show more interference and imbalance in exploration quality when compared to avoiding their own past only.

RUNS    AVERAGE    STD. DEVIATION
10      92.71 s    4.06 s

Table 4.5: Average and Standard Deviation for full exploration time in 10 runs using Avoid Kins' Past + 10% wandering rate with 3 robots.


Figure 4.18: Qualitative appreciation: a) navigation results from Burgard's work [58]; b) our gathered results. The path is drawn in red, green and blue for each robot. A high similarity, achieved with a much simpler algorithm, can be appreciated. The black circle with a D indicates the deployment point.


adverse scenarios, appropriate autonomous exploration is always achieved. In particular, we observed that when dealing with large open areas, such as in Figure 4.20(a), the robots achieve a quick overall exploration of the whole environment, but it takes more time to achieve in-zone coverage compared with other scenarios. We found that this could be enhanced by also avoiding kins' past, but that would imply full dependence on communications, which are highly compromised in large areas. Another example, shown in Figure 4.20(b), considers cluttered environments; these situations demand more coordination in the dispersion process and bring difficulties in exploring narrow gaps. Still, it can be observed that the robots were successfully distributed and practically achieved full exploration. Next, Figure 4.20(c) presents an environment that is particularly notable because it compromises typical potential-field solutions, which reach local minima or even get trapped between the avoided past and a dead-end corridor. With this experiment we observed that it took more time for the robots to disperse and to escape the dead-end corridors in order to explore the rooms; nevertheless, full exploration is not compromised and the robots successfully navigate autonomously through the complete environment. The final environment, shown in Figure 4.20(d), presents a scenario where the robots constantly get inside rooms with minimal exits, thus complicating efficient dispersion and spreading through the environment. In spite of that, it can be appreciated how the robots efficiently explore the complete environment. We observed that the most relevant action for successfully exploring this kind of environment is the dispersion that the robots keep performing each time two or more face each other.

Figure 4.19: The emergent in-zone coverage behavior after running the exploration algorithm for a long time. Each color (red, green and blue) shows an area explored by a different robot. Black circle with D indicates deployment point.

Summarizing, we have successfully demonstrated that our algorithm works for single- and multi-robot autonomous exploration. What is more, we have demonstrated that, even though it is far simpler, it achieves results similar to complex solutions in the literature. Finally, we have tested its robustness across different scenarios with consistently successful results. So, the next step is to demonstrate how it works with real robots.

4.4.2 Real implementation tests

For the field tests another set of robots was used: a pair of Jaguar V2 robots with the characteristics presented below. Further information can be found at DrRobot Inc. [134].

Power. Rechargeable LiPo battery at 22.2 V, 10 Ah.



Figure 4.20: Multi-robot exploration simulation results, appropriate autonomous exploration within different environments including: a) Open Areas; b) Cluttered Environments; c) Dead-end Corridors; d) Minimum Exits. Black circle with D indicates deployment point.


Mobility. Skid-steering differential drive with two motors for the tracks and one for the arms, all of them at 24 V with a rated current of 2.75 A. This yields a carrying capacity of 15 kg and a dragging capacity of 50 kg.

Instrumentation. Motion and sensing controller (PWM, position and speed control), 5 Hz GPS and 9-DOF IMU (gyro/accelerometer/compass), laser scanner (30 m), temperature sensing and voltage monitoring, headlights, and a color camera (640x480, 30 fps) with audio.

Dimensions. Height: 176 mm. Width: 700 mm. Length: 820 mm (extended arms) / 640 mm (folded arms). Weight: 25 kg.

Communications. WiFi 802.11g and Ethernet.

For controlling the robots, as well as for appropriately interfacing with a system element, two OCUs (or UIs) were created. Concerning the interface for robot control, i.e., the subsystem's control application where the behaviors are processed along with the local perceptions, Figure 4.21 shows how it is composed. The robot connection section specifies which robot the interface connects to. The override controls are for manually moving the robot when the computer is wirelessly linked to it. The mapping section uses a counting strategy for colouring a grayscale image file according to the laser scanner readings and the current pose at every received update (approximately 10 Hz). The positioning sensors section includes the gyroscope, accelerometer, compass, encoder, and GPS readings, plus a section for the robot's pose estimate. When operations are outdoors and the GPS is working properly, the satellite view section displays the current latitude and longitude readings as well as the orientation of the robot. Finally, the camera and laser display section includes the video stream and the laser readings in two different views: top and front.
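The counting strategy behind the mapping section can be sketched as follows. This is an illustrative reconstruction under assumed parameter names (resolution, saturation level) rather than the OCU's actual code:

```python
import math

def update_count_map(counts, pose, scan, resolution=0.05):
    """Counting-style map update: each laser endpoint increments a hit
    counter for its grid cell, so repeatedly observed obstacles accumulate.
    pose = (x, y, theta) in metres/radians; scan = list of (angle, range);
    a range of None means no return within the sensor limit."""
    x, y, theta = pose
    for angle, rng in scan:
        if rng is None:
            continue
        gx = round((x + rng * math.cos(theta + angle)) / resolution)
        gy = round((y + rng * math.sin(theta + angle)) / resolution)
        counts[(gx, gy)] = counts.get((gx, gy), 0) + 1
    return counts

def to_gray(count, saturate=10):
    """Shade a cell for the grayscale image: more hits -> darker pixel."""
    return max(0, 255 - min(count, saturate) * (255 // saturate))
```

At every ~10 Hz update the OCU would run `update_count_map` on the newest scan and re-shade the touched pixels of the image file.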

Concerning the interface for the system element, where the next state is commanded and the robots are monitored and possibly overridden by a human operator, Figure 4.22 shows how it is composed. The first thing to say is that this interface was based upon the works of Andreas Birk et al. reported in [36] and described in Chapter 2. The subsystems interfacing section has everything related to each robot in the team, including the override controls, the FSM monitoring, and the current status as well as the sensor readings. The override controls section includes a release button which enables the autonomous control mode, an override button for manually driving and steering the robot, and the impatience button together with the alternative checkbox for transitioning states in the active sequence diagram. The FSM monitoring section contains the sequence diagrams as they were presented in section 3.1, but with the current operation highlighted so that what each robot is carrying out can be supervised. The individual robot data section includes information on the current state of the robot as well as its pose and sensor readings. Finally, the mission status and global team data section includes the overall evaluations of the team performance, with a space for a fused map and another for the reports list, followed by buttons for commanding a robot to attend a given report, such as an endangered kin or a failed aid to a victim or threat. It is worth mentioning that these reports are predefined structures that are fully compliant with relevant works, particularly [156, 56]. Thus, predefined options for filling these reports were defined and are graphically displayed in Figure 4.23.


Figure 4.21: Jaguar V2 operator control unit. This is the interface for the application where autonomous operations occur, including local perceptions and behavior coordination. Thus, it is the reactive part of our proposed solution.

Figure 4.22: System operator control unit. This is the interface for the application where manual operations occur, including state changes and human supervision. Thus, it is the deliberative part of our proposed solution.


Figure 4.23: Template structure for creating and managing reports. Based on [156, 56].

The last step before the field tests was to solve the localization problem [94]. In order to simplify the tests, to focus on the performance of our proposed algorithm, and taking into account that even the most sophisticated localization algorithms are not good enough for the intended real scenarios, we created a very robust localization service using an external camera that continuously tracks the robots' poses and sends them to our system-level OCU. Each message is then forwarded to both robots, so that both know with good precision where they are at any moment. Another important thing to mention is that the laser scanner was limited to 2 m of range and a 130° field of view, and the maximum velocity was set to 0.25 m/s, half of the limit used in the simulations. The environment was an approximately 1:10 scaled version of the simulation scenario, so that by using the same metrics (refer to Table 4.2), expected results were available at hand.
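The forwarding step of this localization service can be sketched as below. The message layout and the `send` callback are assumptions for illustration, not the actual wire format used in the tests:

```python
import json

def broadcast_pose_update(tracked, send):
    """System-level OCU step: take the external camera's latest pose
    estimates and forward the full set to every robot, so each robot
    knows both poses at any moment.
    tracked: dict robot_id -> (x, y, theta); send(robot_id, message)
    abstracts the wireless link."""
    msg = json.dumps({"type": "pose_update",
                      "poses": {rid: list(p) for rid, p in tracked.items()}})
    for robot_id in tracked:
        send(robot_id, msg)
    return msg
```

Each camera update thus triggers one serialization and one send per robot, keeping the robots' localization independent of their own (unreliable) on-board estimates.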


Single Robot Exploration

For the single robot exploration experiments, a Jaguar V2 was wirelessly connected to an external computer, which received the localization data and the human operator commands for starting the autonomous operations (subsystem and system elements). The robot was deployed inside the exploration maze and, once the communications link was ready, it started exploring autonomously. Figure 4.24 shows a screenshot of the robot in the environment, including the tracking and markers for localization, and a typical autonomous navigation pattern resulting from our exploration algorithm.

We have stated that the maximum speed was set to half that of the simulation experiments and the environment area was reduced to approximately 10%. So, the expected time to explore over 96% of the area must be around 36 seconds (2 · 180 s / 10 = 36 s; refer to Figure 4.14(d)). Figure 4.25 demonstrates coherent results for 3 representative runs, validating our proposed exploration algorithm for single robot operations. It can be appreciated that there are very few flat zones (redundancy) and close results among multiple runs, indicating the robustness of the exploration algorithm.
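The scaling argument behind the 36-second expectation assumes exploration time is proportional to area and inversely proportional to speed, which can be made explicit:

```python
def expected_full_exploration_time(sim_time_s, area_scale, speed_scale):
    """Scale a simulated full-exploration time to the testbed under the
    assumption time ~ area / speed, so t_real = t_sim * area_scale / speed_scale.
    With the thesis settings (10% of the area, half the speed):
    180 * 0.1 / 0.5 = 36 seconds."""
    return sim_time_s * area_scale / speed_scale
```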

Figure 4.24: Deployment of a Jaguar V2 for single robot autonomous exploration experiments.

Multi-Robot Exploration

For the case of multiple robots, a second robot was included as an additional subsystem element, as referred to in section 3.4 and detailed in [72]. Figure 4.26 shows a screenshot of the typical deployment used during the experiments, including the tracking and markers for localization, and an example of the navigational pattern when the robots meet during the exploration task.

This time, considering the average results from the single robot real experiments, the ideal expected result when using two robots must be around half of the time, in order to validate the algorithm's functionality. Figure 4.27(a) shows the results from a representative run, including each robot's exploration and the team's redundancy. It can be appreciated that full exploration is achieved in almost half the time of using only one robot, and that redundancy stays very close to 10%. What is more, Figure 4.27(b) presents an adequate balance in the exploration


Figure 4.25: Autonomous exploration showing representative results implementing the exploration algorithm in one Jaguar V2. An average of 36 seconds for full exploration demonstrates coherent operations considering the simulation results.

Figure 4.26: Deployment of two Jaguar V2 robots for multi-robot autonomous exploration experiments.


quality for each robot. Thus, these results demonstrate the validity of our proposed algorithm when implemented in a team of multiple robots.

(a) Exploration. (b) Exploration Quality.

Figure 4.27: Autonomous exploration showing representative results for a single run using 2 robots avoiding their own past. Achieving full exploration in almost half the time of the single robot runs demonstrates efficient resource management. The resultant exploration quality shows a trend towards perfect balance between the two robots.

Summarizing these experiments, we have presented an efficient robotic exploration method using single and multiple robots in 3D simulated environments and in a real testbed scenario. Our approach achieves navigational behavior similar to the most relevant papers in the literature, including [58, 290, 101, 240, 259]. Since there are no standard metrics and benchmarks, it is difficult to quantitatively compare our approach with others. In spite of that, we can conclude that our approach presented very good results, with the advantages of using less computational power, coordinating without any bidding/negotiation process, and not requiring any sophisticated targeting/mapping technique. Furthermore, we differ from similar reactive approaches such as [21, 10, 114] in that we use a reduced-complexity algorithm with no a priori knowledge of the environment and without calculating explicit resultant forces. Additionally, we need neither static roles nor relay robots, so we are free to leave line-of-sight, and task completion does not depend on every robot's functionality. Moreover, we need no specific world structure and no significant deliberation process; thus our algorithm decreases computational complexity from the typical O(n²T) (n robots, T frontiers) in deliberative systems and O(n²) (n×n grid world) in reactive systems, to O(1) when the robots are dispersed and O(m²) whenever m robots need to disperse, and still achieves efficient exploration times. This is largely because all operations are composed of simple conditional checks, with no complex calculations (refer to [71] for the full details). In short, we use a very simple approach with far fewer operations, as shown in Figure 4.28, and still gather similar and/or better results.
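One control cycle of such a reduced loop can be sketched as a short chain of checks between sensing and acting. The action labels below are illustrative, not the thesis' actual behavior names:

```python
def reactive_explore_cycle(kin_in_view, front_blocked, front_visited):
    """One cycle of a reduced exploration loop: sensing feeds a handful
    of conditional checks that pick an action directly, with no map
    building, frontier extraction, bidding, or path planning stage."""
    if kin_in_view:
        return "disperse"             # O(m): react only to robots in view
    if front_blocked:
        return "turn_to_free_heading" # O(1): obstacle ahead
    if front_visited:
        return "prefer_novel_heading" # O(1): avoid our own past
    return "go_forward"               # default: keep exploring
```

Because no step enumerates frontiers or the full grid, the per-robot cost stays constant except during dispersion among the m robots that meet.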

We have demonstrated with these tests that the essence of efficient exploration is to appropriately remember the traversed locations so as to avoid being redundant and wasting time. Also, by observing efficient robot dispersion and the effect of avoiding teammates' past, we demonstrated that interference is a key issue to be avoided. Hence, our critical need is a reliable localization that enables the robots to appropriately allocate spatial information


Figure 4.28: Comparison between: a) the typical literature exploration process and b) our proposed exploration. A clear reduction in steps and complexity between sensing and acting can be appreciated.

(waypoints). In this way, perhaps a mixed strategy combining our algorithm with the periodic target allocation method presented in [43] could be interesting. What is more, the presented exploration strategy could be extended with additional behaviors, resulting in a more flexible, multi-objective autonomous exploration strategy, as the authors suggest in [25]. The challenge here resides in defining the appropriate weights for each action so that the emergent behavior performs efficiently.

Concluding this chapter, we have developed a series of experiments to test the proposed solution. We have demonstrated the functionality of most of the autonomous behaviors, which constituted the coordination of the actions carried out by the robots. Also, we implemented an instance of the proposed infrastructure for coupling our MRS and giving it the additional ability to deliberate and follow a plan, supervised and controlled by human operators. This constituted the coordination of the actions carried out by the team of robots. Finally, while testing the infrastructure, we contributed an alternative solution to the autonomous exploration problem with single and multiple robots. So, the last thing needed to complete this dissertation is to summarize the contributions and settle the path towards future work.


Chapter 5

Conclusions and Future Work

“It’s not us saving people. It’s us getting the technology to the people who willuse it to save people. I always hate it when I hear people saying that we thinkwe’re rescuers. We’re not. We’re scientists. That’s our role.”

– Robin R. Murphy. (Robotics Scientist)

CHAPTER OBJECTIVES
— Summarize contributions.
— Establish further work plans.

In this last chapter we present a summary of the accomplished work, highlighting its most relevant contributions and the real impact of this dissertation. We then finish the chapter with a discussion of future directions and possibilities for this dissertation project.

5.1 Summary of Contributions

This dissertation focused on the rescue robotics research area, which has received particular attention from the research community since 2002. Being almost 10 years old, its most relevant contributions have been limited to understanding the complexity of conducting search and rescue operations and the possibilities for empowering rescuers' abilities and efficiency by using mobile robots. On the other hand, the mobile robotics research area has been receiving relevant contributions for more than 30 years. Therefore, we tried to take advantage of this contrast so as to derive a clear path towards the possibilities for mobile robots in disaster response operations, while bringing some of the most relevant software solutions in the literature to rescue robotics. Here we describe what we have accomplished by following this strategy.

First of all, we carried out very thorough research concerning the multiple disciplines that conform the rescue robotics research field. From these readings, we were able to follow inductive reasoning in order to derive a synthesis and comprehend the most relevant and popular tasks that are being addressed by the robotics community and that could fit into the concept of disaster and emergency response operations. In this way, we ended up with the very concise and generic goals diagram presented in Chapter 3. This diagram not only



CHAPTER 5. CONCLUSIONS AND FUTURE WORK 149

provides a clear panorama of what matters most in search and rescue operations, but also served as a map towards easily identifying the main USAR requirements, so that we were able to decompose disaster response operations into fundamental robotic tasks ready to be allocated among a pool of robots, specifically the type of robots presented in Chapter 2, section 2.3.

Accordingly, once we had the list of requirements and robotic tasks, we were able to organize them in sequential order, finding three major tasks or sequence diagrams that compose a complete strategy, including the fundamental actions describing the major possibilities for ground robots in disaster response operations. These actions, included in Chapter 3, section 3.1, constitute a very valuable deduction from a vast body of research in autonomous mobile robot operations that is considered to have a relevant impact in disastrous events. That is the main reason we have not only listed them in this dissertation but also organized them according to the roles found in the most complete demonstrations at RoboCup Rescue, and the most relevant behavior-based contributions found in the literature (refer to Figures 3.8 and 3.9). In short, with the development of very thorough research, we have achieved USAR modularization leveraging local perceptions, literature-based operations at which robots are good, and rescue mission decomposition into subtasks concerning specific robotic roles, behaviors and actions.

The next step was to turn the philosophical and theoretical understanding into practical contributions. In order to do this, we developed a thorough study of the different frameworks for developing robotic software (refer to Appendix B), intending to increase the impact and relevance of our real-world robotic developments. Thus, we have defined and created a very integral set of primitive and composite, service-oriented robotic behaviors, addressing the previously deduced requirements and actions for disaster response operations. These behaviors have been fully described and decomposed into robotic, observable, disjoint actions. This detailing is also a very valuable tool that served not only for completing this dissertation, but also for future developments requiring the control characteristics heavily addressed herein, such as situatedness, embodiment, reactivity, relevance, locality, consistency, representation, synthesis, cooperation, interference, individuality, adaptability, extendibility, programmability, emergence, reliability and robustness (refer to Table 1.2). It is worth mentioning that not all behaviors were coded or demonstrated herein, mainly because, while they are an important set of actions concerning disaster response operations, they remain an open issue to this day. Nevertheless, the ones that were coded can be easily reused independently of constantly updated hardware (i.e., more affordable or better sensors). This characteristic is perhaps the most important path towards easily continuing the work herein.

Following these developments, we implemented a pair of architectures to fulfill the need of coupling, at one level, the robotic behaviors that compose the robot control, and at a higher level the robots that compose the multi-robot system. The essence of these architectures lies in taking as much advantage as possible of current technology, which is better suited for simple, fast, reactive control. Thus, we have exploited the capabilities of the service-oriented design to couple our system at both levels, resulting in a careful integration characterized by a very relevant set of features: modularity, flexibility, extendibility, scalability, ease of upgrade, heterogeneity management, an inherent negotiation structure, fully meshed data interchange, handling of communication disruption, high reusability,


and robustness and reliability for efficient interoperability (refer to Chapter 1, section 1.4.2, and Appendix B). The experimentation included in Chapter 4 demonstrates these characteristics, which are inherently present in the different tests involving different and multiple robots connected through a wireless network.

Finally, the last concise contribution is the inherent study of the emergence of rescue robotic behaviors and their applicability in real disaster response operations. By implementing distributed autonomous behaviors, we recognized that there is huge potential for performance evaluation, and thus an opportunity for adding adaptivity features so as to learn additional behaviors and possibly increase the performance and capabilities of robots in search and rescue operations. As described in Chapter 4, section 4.4, and in Appendix D, the field cover behavior is an excellent example of this contribution. In the particular case of autonomous exploration, the field cover emergent behavior resulted in a simple and robust algorithm with very relevant features for highly uncertain and dynamic environments: coordination without any deliberative process; a simple targeting/mapping technique with no need for a priori knowledge of the environment or calculation of explicit resultant forces; robots free to leave line-of-sight; and task completion not dependent on every robot's functionality. Also, the algorithm decreases computational complexity from the typical O(n²T) (n robots, T frontiers) in deliberative systems and O(n²) (n×n grid world) in reactive systems, to O(1) when the robots are dispersed and O(m²) whenever m robots need to disperse. So, with this composite behavior it is demonstrated that the right combination of primitive behaviors can lead to several advantages resulting in simpler solutions with very robust performance. Thus, the possibilities for extending this work, concerning not only the service-oriented design but also the different behaviors that can be combined, end up being one of its most important and interesting contributions.

In short, we can summarize contributions as follows:

• USAR modularization leveraging local perceptions, literature-based operations at which robots are good, and mission decomposition into subtasks concerning specific robotic roles, behaviors and actions.

• Primitive and composite, service-oriented, robotic behaviors for addressing USAR operations.

• Behavior-based control architecture for coordinating autonomous mobile robot actions.

• Hybrid system infrastructure that served for synchronization of the MRS as a USAR, distributed, semi-autonomous robotic coordinator, based on the organizational strategy of roles, behaviors and actions (RBA) and working under a finite state machine (FSM).

• A study of the emergence of rescue robotic team behaviors and their applicability in real search and rescue operations.

Besides these contributions, it is also important to note that Chapter 2 contains a vast survey of rescue robotics research, covering the most relevant literature from the field's beginning until today. This is very valuable information, not only in terms of this dissertation but because it filters 10 years (perhaps more) of research. Then, in Chapter 4 we


demonstrated a methodology for the quick setup of robotics simulations and a fast path towards real implementations, intending to reduce the time costs in the development and deployment of robotic systems. This resulted in a relevant contribution reported in [70]. Likewise, the demonstrated functionality of the service-oriented, generic architecture for the MRS, essentially its scalability and extendibility features, resulted in another relevant contribution reported in [72]. Finally, we demonstrated that the essence of efficient exploration is to appropriately remember the traversed locations so as to avoid being redundant and wasting time, rather than to optimally define the next best target location. This simplification also resulted in a relevant contribution reported in [71].

5.2 Future Work

Having stated what has been accomplished, it is time to describe the future steps for this work. Perhaps the best starting point is the possibilities for scalability and extendibility. Regarding scalability, it will be interesting to test the team architecture using more real robots. Also, instantiating multiple system elements and interconnecting them so as to have sub-teams of rescue robots seems like a first step towards much more complex multi-robot systems. Regarding extendibility, the behavioral architecture of the robots provides a very simple way of adding more behaviors so as to address different or additional tasks. Also, if the robots' characteristics change, the service-oriented design facilitates the process of adding or modifying behaviors by enabling developers to change focused parts of the software application. Moreover, considering the sequence diagrams and the manual triggering of the next state, adding more states to the FSM is a simple task. Conflicts may arise when transitioning becomes autonomous. These characteristics are perhaps the most important reasons we proposed a nomenclature in Chapter 1 that was not completely exploited in this dissertation: we intended to provide a clear path towards the applicability of our system to diverse missions/tasks using diverse robotic resources.

Another important step towards the future is implementing more complete operations in more complete/real scenarios. The most important constraints here are time and laboratory resources. For example, at the beginning of this dissertation we did not even have one working mobile robot, let alone a team of them. This situation severely limited the work, resulting in a lack of more realistic implementations. Nowadays, the possibilities for software resources are much broader as the popularity of ROS [107] continues to rise, so integrating complex algorithms and even having robust 3D localization systems is within reach. The challenge resides in setting up a team of mobile robots and generating diverse scenarios such as those described in [267]. Then, it will be interesting to pursue relevant goals such as autonomously mapping an environment with characteristics identifying simulated victims, hazards and damaged kins. Also, a good challenge could be to provide a general deliberation on the type of aid required according to the victim, hazard or damaged-kin status in order to simulate a response action. In this way, complete rounds of coordinated search and rescue operations could be developed.

Furthermore, in such a young research area, where there are no standardized evaluation metrics, knowing that a system is performing well is typically a qualitative matter. Within this dissertation we argue that evaluating the use of behaviors could lead to learning so as to increase


performance. What is more, in Chapter 1 we even proposed a table of metrics that was not used because it was intended for complete rounds of coordinated operations. In [268], the authors propose a list of more than 20 possible metrics for evaluating rescue robots' performance. Also, RoboCup Rescue promotes its own metrics and score vectors. So, this turns out to be a good opportunity area for future work: implementing some of the metrics proposed herein or in the literature, or even defining new ones that can become standards, or at least provide a generic evaluation method so that the real impact of contributions can be quantitatively measured. Additionally, once these evaluators/metrics exist, systems could tend to be more autonomous because of their capability to learn from what they have done.

More specific enhancements to this work could include testing the service-oriented property of dynamic discoverability so as to enhance far-reaches exploration [92], by allowing the individual robots to connect and disconnect automatically according to communication ranges and dynamically defined rendezvous/aggregation points, as in [232]. With this approach, robots can leave communications range for a certain time and then autonomously reconnect with more data from the far reaches of the unknown environment. Also, we need to dispense with the camera-based localization so as to give more precise quantitative evaluations, such as map quality/utility as referred to in [155, 6].

In general, there is still a long way to go in terms of mobility, uncertainty and 3D location management. All of these are essential for appropriately coordinating single and multi-robot systems. Nevertheless, we believe it is by providing these alternative approaches that we can have a good resource for evaluation purposes, which will lead us to address complex problems and effectively resolve them as they are. In the end, we think that if more people start working with this trend of SOA-based robotics, and thus more independent service providers become active, robotics research could step forward in a faster and more effective way, with more sharing of solutions. We see services as the modules for building complex and perhaps cognitive robotic systems.

Having stated the contributions and the future work, the last thing worth including is a quote with which we feel very empathetic after having completed this work. It is from Joseph Engelberger, the "Father of Robotics".

“You end up with a tremendous respect for a human being if you’re a roboti-cist”

– Joseph Engelberger, quoted in Robotics Age, 1985.


Appendix A

Getting Deeper in MRS Architectures

In order to better understand group architectures, it is important to first describe a single robot architecture. In this dissertation, both concepts refer to the software organization of a robotic system, either for one or multiple robots. A robot architecture typically involves multiple control levels for generating the desired actions from perceptions in order to achieve a given state or goal. For ease of understanding, we include two relevant examples that have demonstrated functionality, appropriate control organization, and successful tests on different robotic platforms.

First, there is the development of Alami et al. in [2], described as a generic architecture suitable for autonomy and intelligent robotic control. This architecture is designed to be task- and domain-independent and extendible at the robot and behavior levels, meaning it can be used for different purposes with different robotic resources. Also, its modular structure allows developers to easily build what is needed for a specific task, enabling simplicity and focus. Figure A.1 shows an illustration of the referred single robot architecture. An important aspect to notice is the separation of control levels into blocks according to differences in operational frequency and complexity. The highest level, called Decisional, is in charge of monitoring and supervising progress in order to update the mission's status or modify plans. The Executional level then receives the updates from the supervisor and calls for executing the required functional module(s). The Functional level takes care of the perceptions that are reported to higher levels and used for controlling the active module(s). This functional modularity enables dealing with different tasks and robotic resources. Finally, the Logical and Physical levels represent the electrical signals and other physical interactions between sensors, actuators and the environment.
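The decisional/executional/functional split described above can be sketched as a minimal controller. Class and method names here are illustrative assumptions, not taken from [2]:

```python
class LayeredController:
    """Minimal sketch of a layered robot architecture: the decisional
    level installs and supervises a plan, the executional level
    dispatches one functional module per step, and the functional level
    is a set of callables that turn perceptions into commands."""

    def __init__(self, modules):
        self.modules = modules  # functional level: name -> callable
        self.plan = []          # decisional level: ordered task names
        self.status = "idle"

    def set_mission(self, plan):
        # Decisional level: the supervisor installs (or replaces) a plan.
        self.plan = list(plan)
        self.status = "running"

    def step(self, perceptions):
        # Executional level: dispatch the next functional module,
        # reporting mission completion back to the supervisor.
        if not self.plan:
            self.status = "done"
            return None
        task = self.plan.pop(0)
        return self.modules[task](perceptions)
```

The Logical/Physical levels would sit below the functional callables, translating their outputs into actuator signals.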

Figure A.1: Generic single robot architecture. Image from [2].

Another relevant example designed under the same guidelines is provided by Arkin and Balch in [12], shown in Figure A.2. Their architecture, known as the Autonomous Robot Architecture (AuRA), has served as inspiration for many other works and implementations requiring autonomous robots. Although perhaps looking less organized than Alami et al.'s work, the idea of having multiple control levels is basically the same. It has an equivalent decisional level, with the Cartographer and Planner entities maintaining spatial information and monitoring the status of the mission and its tasks. The executional level is the Sequencer, which triggers the modules at the functional level, called motor schemas (robot behaviors). These modules can also be triggered by sensor perceptions, including the spatial information stored in the Cartographer block. A coordinated output from the triggered motor schemas is then sent to the actuators, working at the physical level and interacting with the environment. An important additional aspect is the Homeostatic control, which manages the integrity of and relationship among motor schemas by modifying their gains, thus enabling adaptation and learning. Finally, there is an explicit division of layers into deliberative and reactive, which implies specific characteristics for the elements that reside in each of them. This strategy is known as a hybrid architecture; a complete description, including purely reactive and purely deliberative approaches, can be found in [192].
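The coordinated output of motor schemas amounts to a gain-weighted vector sum of the active behaviors' outputs. The sketch below assumes two toy schemas and fixed gains; in AuRA the homeostatic control would adapt the gains online.

```python
# Sketch of AuRA-style motor-schema coordination: each active schema emits a
# velocity vector, and the final command is their gain-weighted sum. The
# schema functions and gain values here are illustrative.

def move_to_goal(robot, goal):
    # Unit vector pointing toward the goal.
    dx, dy = goal[0] - robot[0], goal[1] - robot[1]
    norm = (dx ** 2 + dy ** 2) ** 0.5 or 1.0
    return (dx / norm, dy / norm)

def avoid_obstacle(robot, obstacle):
    # Unit vector pointing away from the obstacle.
    dx, dy = robot[0] - obstacle[0], robot[1] - obstacle[1]
    norm = (dx ** 2 + dy ** 2) ** 0.5 or 1.0
    return (dx / norm, dy / norm)

def coordinate(schema_outputs, gains):
    # Homeostatic control would adapt these gains online; here they are fixed.
    vx = sum(g * v[0] for g, v in zip(gains, schema_outputs))
    vy = sum(g * v[1] for g, v in zip(gains, schema_outputs))
    return (vx, vy)

robot = (0.0, 0.0)
outputs = [move_to_goal(robot, goal=(10.0, 0.0)),
           avoid_obstacle(robot, obstacle=(2.0, 2.0))]
command = coordinate(outputs, gains=[1.0, 0.5])
```

Because arbitration is a vector sum rather than a winner-take-all selection, conflicting behaviors blend smoothly instead of switching abruptly.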

Figure A.2: Autonomous Robot Architecture - AuRA. Image from [12].

Accordingly, organizing a multiple-robot control system requires extending the idea of managing multiple levels of control and functionality in order to form a group. Robots in a given MRS must have their individual architectures, such as the ones mentioned above, coupled within a group architecture. This higher-level structure typically requires additional information and control, essentially at the decisional and executional control levels, which are responsible for addressing task allocation and other resource conflicts. Some historical examples of representative general-purpose architectures for building and controlling multiple autonomous mobile robots are briefly described below.

NERD HERD [174]. This architecture is one of the first studies in behavior-based robotics for multiple robots, in which simple ballistic behaviors are combined to form more complex team behaviors. Its key features are distributed and decentralized control, and capabilities for extensibility and scalability. Practically an evolution of the authors' previous work on behavior-based architectures, the MURDOCH [111] project then modularized not only control but also tasks by implementing subject-based control strategies, which allowed for sub-scenarios and directed communications. The main features of this evolution are a publish/subscribe-based messaging system for task allocation, and negotiations in multi-robot systems using multi-agent theory (ContractNet).
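MURDOCH's subject-based allocation can be approximated by a single-round auction over a publish/subscribe channel: a task is announced on a subject, only capable (subscribed) robots bid, and the best bidder wins the contract. The sketch below is a simplified illustration; the robot names, the distance-based bid metric and the API are assumptions, not MURDOCH's actual interfaces.

```python
# Sketch of subject-based task allocation via a single-round ContractNet
# auction over publish/subscribe (illustrative, not MURDOCH's real API).

class Auctioneer:
    def __init__(self):
        self.subscribers = {}  # subject -> list of subscribed robots

    def subscribe(self, subject, robot):
        self.subscribers.setdefault(subject, []).append(robot)

    def publish_task(self, subject, task):
        # Only robots subscribed to the subject (i.e., capable of the task)
        # receive the announcement and return a bid.
        bidders = self.subscribers.get(subject, [])
        bids = [(robot.bid(task), robot) for robot in bidders]
        if not bids:
            return None
        _, winner = min(bids, key=lambda b: b[0])  # lowest cost wins
        return winner.name

class Robot:
    def __init__(self, name, position):
        self.name, self.position = name, position

    def bid(self, task):
        # Bid = Euclidean distance to the task site (lower is better).
        tx, ty = task["site"]
        return ((tx - self.position[0]) ** 2 +
                (ty - self.position[1]) ** 2) ** 0.5

auctioneer = Auctioneer()
auctioneer.subscribe("inspect", Robot("r1", (0.0, 0.0)))
auctioneer.subscribe("inspect", Robot("r2", (5.0, 5.0)))
winner = auctioneer.publish_task("inspect", {"site": (6.0, 6.0)})
```

Subject-based addressing is what gives the directed communications mentioned above: a message reaches only the robots that declared the matching capability.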

Task Control Architecture (TCA) [257]. This work was inspiring for its ability to handle concurrent planning, execution and perception for several tasks in parallel using multiple robots. Its key features are an efficient resource management mechanism for task allocation and failure recovery, task trees for interleaving planning and execution, and concurrent system status monitoring. Nowadays it is discontinued, but its authors have created the Distributed Robot Architecture (DIRA) [258], in which individual autonomy and explicit coordination among multiple robots are achieved via a 3-layered infrastructure: planner, executive and behavioral.

ACTRESS [179]. Considering that every task has its own needs, this work's design focuses on distribution, communication protocols, and negotiation, in order to enable robots to work separately or cooperatively as the task demands. Its key features are a message protocol designed for distributed/decentralized cooperation, a separation of problem-solving strategies according to a leveled communication system, and multi-robot negotiation at the task, cooperation and communication levels.

CEBOT [102]. Taking its name from cellular robotics, this work deals with a self-organizing robotic system consisting of a number of autonomous robots organized in cells, which can communicate, approach, connect and cooperate with each other. Its key features are modular structures for collective intelligence and self-organizing robotic systems, and robot self-recognition used for coordinating efforts towards a goal.

ALLIANCE [221]. Perhaps the most popular and representative work, it is a distributed, fault-tolerant, behavior-based cooperative architecture for heterogeneous mobile robots. It is characterized by implementing a fixed set of motivational controllers for behavior selection, which at the same time have priorities (the subsumption idea from [49]). Controllers use sensor data, communications and models of each robot's actions for better decision making. Its key features are robustness in mission accomplishment, fault tolerance through the concepts of robot impatience and acquiescence, coherent cooperation between robots, and automatic adjustment of the controllers' parameters.
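The impatience/acquiescence mechanism can be sketched as a simple motivational counter: a behavior set activates once its motivation crosses a threshold, and a robot gives a task up after too long without progress. The rates and thresholds below are illustrative, not ALLIANCE's actual update rules.

```python
# Sketch of an ALLIANCE-style motivational controller (illustrative rates
# and thresholds, not the actual ALLIANCE update equations).

class MotivationalBehavior:
    def __init__(self, name, impatience_rate, threshold, acquiescence_time):
        self.name = name
        self.impatience_rate = impatience_rate
        self.threshold = threshold
        self.acquiescence_time = acquiescence_time
        self.motivation = 0.0
        self.time_active = 0

    def update(self, another_robot_doing_it):
        # Motivation resets while a teammate handles the task; otherwise it
        # grows with the robot's impatience. Returns True when the behavior
        # set should activate.
        if another_robot_doing_it:
            self.motivation = 0.0
        else:
            self.motivation += self.impatience_rate
        return self.motivation >= self.threshold

    def step_active(self, made_progress):
        # Acquiesce (release the task) after too long without progress.
        self.time_active = 0 if made_progress else self.time_active + 1
        return self.time_active >= self.acquiescence_time

b = MotivationalBehavior("push_box", impatience_rate=1.0,
                         threshold=3.0, acquiescence_time=2)
# Motivation reaches the threshold on the third update...
activated = [b.update(another_robot_doing_it=False) for _ in range(3)]
# ...and two consecutive active steps without progress trigger acquiescence.
gave_up = [b.step_active(made_progress=False) for _ in range(2)]
```

This pair of counters is what makes the architecture fault tolerant: an idle robot eventually takes over a stalled task (impatience), and a failing robot eventually releases it (acquiescence).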

M+ System [42]. Based on opportunistic re-scheduling, this work is similar to TCA in the way it does concurrent planning. Its key features are robots concurrently detecting and solving coordination issues, and effective cooperation through a “round-robin” mechanism.

A more complete description of some of the mentioned architectures, along with other popular ones such as GOFER [62] and SWARMS [30], can be found in [63, 223, 16]. Also, a good evaluation of some of them is presented in [218] and [11].


Appendix B

Frameworks for Robotic Software

According to [55], in recent years there has been a growing concern in the robotics community for developing better software for mobile robots. Issues such as simplicity, consistency, modularity, code reuse, integration, completeness and hardware abstraction have become key points. With these general objectives in mind, different robotic programming frameworks have been proposed, such as Player [113], ROCI [77], ORCA [47], and more recently ROS [230, 107] and Microsoft Robotics Developer Studio (MSRDS) [234, 135] (an overview of some of these frameworks can be found in [55]).

In a parallel path, a state-of-the-art trend is to bring Service-Oriented Architectures (SOA), or Service-Oriented Computing (SOC), into the area of robotics. Yu et al. define SOA in [293] as “a new paradigm in distributed systems aiming at building loosely-coupled systems that are extendible, flexible and fit well with existing legacy systems”. SOA promotes cost-efficient development of complex applications because it leverages service exchange and strongly supports concurrent and collaborative design. Thus, applications built upon this strategy are developed faster, and are reusable and upgradeable. Among the previously referred programming frameworks, ROS and MSRDS use SOA for developing a networkable framework for mobile robots, giving definition to Service-Oriented Robotics (SOR).

Thus, in a brief timeline, we can place these frameworks and trends as follows:

Before. Robotics software was developed using 0's and 1's, assembly, and procedural programming languages, limiting its reusability and tying it to particular hardware. It was very difficult to upgrade code and give continuity to sophisticated solutions.

2001 [260, 113]. The Player/Stage framework was introduced by Brian Gerkey and personnel from the University of Southern California (USC). This system promoted object-oriented computing (OOC) towards reusable code, modularity, scalability, and ease of update and maintenance. It implies instantiating Player modules/classes and connecting them through the system's own communication sockets. The essential disadvantage of Player's object-oriented development is that it requires tightly coupled classes based on inheritance relationships, so developers must have knowledge of both the application domain and programming. Also, reuse by inheritance requires library functions to be imported at compilation time (only offline upgrading) and is platform dependent.


2003 [77]. ROCI (Remote Objects Control Interface) was introduced by Chaimowicz and personnel from the University of Pennsylvania (UPenn) as a self-describing, object-oriented programming framework that facilitates the development of robust applications for dynamic multi-robot teams. It consists of a kernel that coordinates multiple self-contained modules serving as building blocks for complex applications. This was a very nice implementation of hardware abstraction and encapsulation of generic mobile robotics processes, but it still resided in object-oriented computing.

2006 [135, 234]. From the private sector, the first version of the Microsoft Robotics Developer Studio (MSRDS) was released. It was a novel framework because it was the first to introduce service-oriented systems engineering (SOSE) into robotics research, but relying on Windows and not being open-source limited its popularity. Nevertheless, for the first time code reuse happened at the service level. Services have standard interfaces and are published in an Internet repository; they are platform-independent and can be searched and remotely accessed. Service brokerage enables systematic sharing of services, meaning that service providers can program without having to understand the applications that use their services, while service consumers may use services without having to understand their code deeply. Additionally, the possibility for services to be discovered after the application has been deployed allows an application to be recomposed at runtime (online upgrading and maintenance).

2007 [47, 48]. This was the time for component-based systems engineering (CBSE), with the rise of ORCA by Makarenko and personnel from the University of Sydney. Following the same guidelines as Player, ORCA provides a more useful programming approach in terms of modularity and reuse. This framework consists of developing components, under certain pre-defined models, as the encapsulated software to be reused. There is no need to fully understand application or component code if they follow homogeneous models. So, it is more promising than the object-oriented approach, but it still lacked some important features of the service-oriented one.

2009 [230, 107]. The Robot Operating System (ROS) started to be hugely promoted by the designers of Player, essentially Brian Gerkey and personnel from Willow Garage. It appeared as an evolution of Player and ORCA, offering a framework with the advantages of both, plus being friendlier to diverse technologies and highly capable of network distribution. This was the first service-oriented robotics framework released as open-source.

Today. MSRDS and ROS are the most popular service-oriented robotic frameworks. MSRDS is now in its fourth release (RDS 4), but it is still not open-source and only available for Windows. ROS has grown incredibly, being supported by a huge robotics community and thus providing very large service repositories. Also, both contributions show an explicit trend towards what is now known as cloud robotics [122].

Being more precise, a service is mainly a defined class whose instance is a remote object connected through a proxy in order to reach a desired behavior. A service-oriented architecture is then essentially a collection of services. In robotics, these services are mainly (but not limited to): hardware components, such as drivers for sensors and actuators; software components, such as user interfaces, orchestrators (robot control algorithms), and repositories (databases); or aggregations, referring to sensor fusion, filtering and related tasks. The main advantage of this approach is that pre-developed services exist in repositories that developers can use for their specific applications. Also, if a service is not available, developers can build their own and contribute it to the community. In this way, SOR is composed of independent providers all around the globe, allowing robotics software to be built by distributed teams with large code bases, without a single person crafting the entire software, enabling faster setup and easier development of complex applications [82]. Other benefits of using SOR are the following [4]:

• Manageability of heterogeneity by standardizing a service structure.

• Ease of integrating new robots into the network by self-identification, without reprogramming or reconfiguring (self-discoverable capabilities).

• An inherent negotiation structure where every robot can offer its services for interaction and ask for other robots' running services.

• Fully meshed data interchange for robots in the network.

• Ability to handle communication disruption, where a disconnected, out-of-communication-range robot can resynchronize and continue communications when the connection is recovered.

• Mechanisms for making reusability more direct than in traditional approaches, enabling the use of the same robot code for different applications.
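The service pattern underlying these benefits, a repository for discovery plus a proxy for invocation, can be sketched as follows. All names are illustrative assumptions; real frameworks such as ROS or MSRDS add networking, typing and brokerage on top.

```python
# Sketch of the service-oriented pattern: providers register services in a
# repository, and a consumer discovers and invokes them through a proxy
# without knowing their implementation (illustrative names only).

class ServiceProxy:
    """Stands in for the remote object; a real proxy would marshal the
    call over the network."""
    def __init__(self, handler):
        self._handler = handler

    def call(self, **request):
        return self._handler(**request)

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        self._services[name] = handler

    def lookup(self, name):
        if name not in self._services:
            raise KeyError(f"service '{name}' not found in repository")
        return ServiceProxy(self._services[name])

# A hardware-driver service registered by one provider...
registry = ServiceRegistry()
registry.register("laser_scan", lambda n_beams: [1.0] * n_beams)

# ...and consumed elsewhere purely through discovery.
scan = registry.lookup("laser_scan").call(n_beams=4)
```

Note that the consumer never sees the provider's code, only the service name and its request interface; this is the loose coupling that the benefits above rely on.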

On the other hand, the well-known disadvantage of implementing SOR is reduced efficiency when compared to classical software solutions, because of the additional layer of standard interfaces necessary to guarantee concurrent coordination among services [73, 82]. The crucial effect resides in the communication overhead among networked services, which has an important impact on real-time performance. Fortunately, nowadays the run-time overhead is not as important as it used to be, because modern hardware is fast and cheap [218].

Summarizing, Table B.1 synthesizes the main characteristics of the different programming approaches that are popular among the most relevant frameworks for robotic software.


Table B.1: Comparison among different software systems engineering techniques [219, 46, 82, 293, 4].

                                                                Object-    Component-  Service-
                                                                Oriented   Based       Oriented
Reusability                                                                √           √
Modularity                                                      √          √           √
Module unit                                                     library    component   service
Management of complexity                                                               √
Shorten deployment time                                                    √           √
Assembly and integration of parts                               √          √           √
Loose coupling                                                             √           √
Tight coupling                                                  √
Stateless                                                                              √
Stateful                                                        √          √
Platform independent                                                                   √
Protocol independent                                                                   √
Device independent                                                                     √
Technology independent                                                                 √
Internet search/discovery                                                              √
Easy maintenance and upgrades                                                          √
Self-describing modules                                                    √           √
Self-contained modules                                                     √
Feasible organization                                                      √           √
Feasible module sharing/substitutability                                               √
Feasible information exchange among modules                                            √
Run-time dynamic discovery/upgrade (online composition)                                √
Compilation-time static module discovery (offline composition)  √          √
White-box encapsulation                                         √          √
Black-box encapsulation                                                    √           √
Heterogeneous providers/composition of modules                                         √
Developers may not know the application                                                √


Appendix C

Set of Actions Organized as Robotic Behaviors

The classification, types and descriptions of behaviors are essentially based upon [172, 175, 11, 192]. The ballistic control type implies a fixed sequence of steps, while servo control refers to “in-flight” corrections for a closed-loop control.
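The two control types can be sketched with two toy controllers, one open-loop and one closed-loop; the 1-D robot, step counts and gain below are illustrative.

```python
# Sketch of the two control types used to classify the behaviors below:
# a ballistic behavior runs a fixed sequence of steps open-loop, while a
# servo behavior corrects its command from feedback on every cycle.

def ballistic_turn(steps=3, step_angle=30.0):
    # Fixed sequence: once triggered, no in-flight correction.
    angle = 0.0
    for _ in range(steps):
        angle += step_angle
    return angle

def servo_drive(position, goal, gain=0.5, cycles=20):
    # Closed loop: each cycle re-measures the error and corrects.
    for _ in range(cycles):
        error = goal - position
        position += gain * error
    return position

final_angle = ballistic_turn()           # always 90.0, whatever the outcome
final_position = servo_drive(0.0, 10.0)  # converges toward the goal
```

The ballistic routine commits to its sequence regardless of what the environment does, while the servo routine shrinks its error geometrically as long as feedback keeps arriving.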

Table C.1: Wake up behavior.

Behavior Name (ID): Wake up (WU)
Literature aliases: Initialize, Setup, Ready, Start, Deploy
Classification: Protective
Control type: Ballistic
Inputs: -
Actions: Enable motors; Initialize state variables; Set Police Force (PF) role; Call for Safe Wander behavior
Releasers: Initial deployment
Inhibited by: Resume, Safe Wander
Sequence diagram operations: Initialization stage
Main references: -
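The Releasers and Inhibited by fields of these tables suggest a simple arbitration rule: a behavior activates when one of its releasers fires and no listed inhibitor is currently running. A minimal sketch follows; the behavior names are taken from the tables, but the dispatch logic itself is an assumption, not the dissertation's actual implementation.

```python
# Sketch of releaser/inhibitor arbitration over a few behaviors from these
# tables (the dispatch logic is illustrative, not the thesis implementation).

BEHAVIORS = {
    # name: (releasers, inhibitors)
    "Wake up": ({"initial deployment"}, {"Resume", "Safe Wander"}),
    "Safe Wander": ({"wake up done"}, {"Report"}),
    "Report": ({"victim inspected"}, {"Resume"}),
}

def active_behaviors(events, running):
    """Return the behaviors released by `events` and not inhibited by any
    behavior currently in `running`."""
    active = []
    for name, (releasers, inhibitors) in BEHAVIORS.items():
        released = bool(releasers & events)
        inhibited = bool(inhibitors & running)
        if released and not inhibited:
            active.append(name)
    return active

# On first deployment only Wake up fires...
first = active_behaviors({"initial deployment"}, running=set())
# ...but it stays suppressed while Safe Wander is already running.
later = active_behaviors({"initial deployment"}, running={"Safe Wander"})
```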


Table C.2: Resume behavior.

Behavior Name (ID): Resume (RES)
Literature aliases: Restart, Reset
Classification: Protective
Control type: Ballistic
Inputs: -
Actions: Re-initialize state variables; Set Police Force (PF) role; Call for Safe Wander behavior
Releasers: Finished reporting or updating report
Inhibited by: Safe Wander
Sequence diagram operations: Initialization stage, Re-establishing stage
Main references: -

Table C.3: Wait behavior.

Behavior Name (ID): Wait (WT)
Literature aliases: Halt, Queue, Stop
Classification: Cooperative, Protective
Control type: Servo
Inputs: Number of lost kins
Actions: Stop motors until every robot in Police Force (PF) role is docked and holding formation
Releasers: Lost robot
Inhibited by: Hold Formation, Flocking ready
Sequence diagram operations: Flocking surroundings stage
Main references: [167]


Table C.4: Handle Collision behavior.

Behavior Name (ID): Handle Collision (HC)
Literature aliases: Avoid Obstacles
Classification: Protective
Control type: Servo
Inputs: Distance and obstacle type
Actions: Avoid sides; Avoid corners; Avoid kins
Releasers: Always on
Inhibited by: Wall Follow, Inspect, Aid Blockade
Sequence diagram operations: All
Main references: [11, 236, 278]

Table C.5: Avoid Past behavior.

Behavior Name (ID): Avoid Past (AP)
Literature aliases: Motion Planner, Waypoint Manager
Classification: Explorative
Control type: Servo
Inputs: Waypoints list
Actions: Evaluate neighbor waypoints; Add waypoint to waypoint list; Increase waypoint visit count; Steer away from most visited waypoint
Releasers: Field Cover and visited waypoint
Inhibited by: Seek, Wall Follow, Path Planning, Report
Sequence diagram operations: Covering distants stage, Approaching stage
Main references: [21]


Table C.6: Locate behavior.

Behavior Name (ID): Locate (LOC)
Literature aliases: Adjust Heading
Classification: Explorative, Protective
Control type: Servo
Inputs: Current heading, goal type and location
Actions: Identify goal type; Calculate goal heading; Steer until achieving desired heading
Releasers: Safe Wander or Field Cover and wander rate
Inhibited by: Handle Collision, Victim/Threat/Kin
Sequence diagram operations: Covering distants stage
Main references: [7]

Table C.7: Drive Towards behavior.

Behavior Name (ID): Drive Towards (DT)
Literature aliases: Arrive, Cruise, Approach
Classification: Explorative
Control type: Servo
Inputs: Distance to goal
Actions: Determine zone according to distance; Adjust driving velocity
Releasers: Approach
Inhibited by: Inspect, Handle Collision
Sequence diagram operations: Approaching stage
Main references: [23]


Table C.8: Safe Wander behavior.

Behavior Name (ID): Safe Wander (SW)
Literature aliases: Random Explorer
Classification: Explorative
Control type: Ballistic
Inputs: Distance to objects nearby
Actions: Move forward; Locate open area; Handle collision; Avoid Past
Releasers: Wake up, Resume, or Field Cover ended
Inhibited by: Aggregate, Wall Follow, Report, Victim/Threat/Kin
Sequence diagram operations: Initialization stage, Covering distants stage
Main references: [175]

Table C.9: Seek behavior.

Behavior Name (ID): Seek (SK)
Literature aliases: Homing, Attract, GoTo, Local Path Planner
Classification: Appetitive, Explorative
Control type: Servo
Inputs: Goal position (X,Y)
Actions: Create Vector Field Histogram; Motion control towards goal
Releasers: Aggregate, Hold Formation, Seeking
Inhibited by: Inspect, Disperse, Victim/Threat/Kin
Sequence diagram operations: Approaching, Rendezvous, and Flocking surroundings stages
Main references: [171, 175, 236, 41]


Table C.10: Path Planning behavior.

Behavior Name (ID): Path Planning (PP)
Literature aliases: Motion Planner
Classification: Explorative
Control type: Servo
Inputs: Goal position (X,Y)
Actions: Determine the wavefront propagation; List target waypoints to goal; Seek to each waypoint
Releasers: Field Cover ended plus enough 2D map to plan
Inhibited by: Safe Wander, Wall Follow, Report, Victim/Threat/Kin
Sequence diagram operations: Covering distants stage
Main references: [10, 154, 224]

Table C.11: Aggregate behavior.

Behavior Name (ID): Aggregate (AG)
Literature aliases: Cohesion, Dock, Rendezvous
Classification: Appetitive
Control type: Servo
Inputs: Police Force robots' poses
Actions: Determine centroid of all PF robots' poses; Seek towards centroid
Releasers: Safe Wander, Resume, Call for formation
Inhibited by: Disperse, Victim/Threat/Kin
Sequence diagram operations: Rendezvous stage
Main references: [171, 175, 23]

Table C.12: Unit Center Line behavior.

Behavior Name (ID): Unit Center Line (UCL)
Literature aliases: Form Line
Classification: Cooperative
Control type: Servo
Inputs: Robot ID and number of PF robots
Actions: Aggregate; Determine pose according to line formation; Seek position
Releasers: Aggregation/Rendezvous, Structured Exploration
Inhibited by: Hold Formation, Disperse, Victim/Threat/Kin
Sequence diagram operations: Rendezvous and Flocking surroundings stages
Main references: [23]


Table C.13: Unit Center Column behavior.

Behavior Name (ID): Unit Center Column (UCC)
Literature aliases: Form Column
Classification: Cooperative
Control type: Servo
Inputs: Robot ID and number of PF robots
Actions: Aggregate; Determine pose according to column formation; Seek position
Releasers: Aggregation/Rendezvous, Structured Exploration
Inhibited by: Hold Formation, Disperse, Victim/Threat/Kin
Sequence diagram operations: Rendezvous and Flocking surroundings stages
Main references: [23]

Table C.14: Unit Center Diamond behavior.

Behavior Name (ID): Unit Center Diamond (UCD)
Literature aliases: Form Diamond
Classification: Cooperative
Control type: Servo
Inputs: Robot ID and number of PF robots
Actions: Aggregate; Determine pose according to diamond formation; Seek position
Releasers: Aggregation/Rendezvous, Structured Exploration
Inhibited by: Hold Formation, Disperse, Victim/Threat/Kin
Sequence diagram operations: Rendezvous and Flocking surroundings stages
Main references: [23]


Table C.15: Unit Center Wedge behavior.

Behavior Name (ID): Unit Center Wedge (UCW)
Literature aliases: Form Wedge
Classification: Cooperative
Control type: Servo
Inputs: Robot ID and number of PF robots
Actions: Aggregate; Determine pose according to wedge formation; Seek position
Releasers: Aggregation/Rendezvous, Structured Exploration
Inhibited by: Hold Formation, Disperse, Victim/Threat/Kin
Sequence diagram operations: Rendezvous and Flocking surroundings stages
Main references: [23]

Table C.16: Hold Formation behavior.

Behavior Name (ID): Hold Formation (HF)
Literature aliases: Align, Keep Pose
Classification: Cooperative
Control type: Servo
Inputs: Position to hold
Actions: Seek position; Call for Lost
Releasers: Docked in formation, Flocking ready
Inhibited by: Lost, Disperse, Victim/Threat/Kin
Sequence diagram operations: Rendezvous and Flocking surroundings stages
Main references: [23, 271, 208]

Table C.17: Lost behavior.

Behavior Name (ID): Lost (L)
Literature aliases: Undocked, Unaligned
Classification: Cooperative
Control type: Servo
Inputs: Position to hold
Actions: Message of lost robot; Seek towards position
Releasers: Hold Formation failed
Inhibited by: Disperse, Hold Formation, Flocking ready
Sequence diagram operations: Flocking surroundings stage
Main references: [167]


Table C.18: Flocking behavior.

Behavior Name (ID): Flock (FL)
Literature aliases: Joint Explore, Sweep Cover, Structured Exploration
Classification: Cooperative
Control type: Ballistic
Inputs: Robot ID
Actions: Determine the leader; If leader, then Safe Wander; If not leader, then Hold Formation
Releasers: Flocking ready
Inhibited by: Disperse, Victim/Threat/Kin
Sequence diagram operations: Flocking surroundings stage
Main references: [105, 171, 23, 236, 235]


Table C.19: Disperse behavior.

Behavior Name (ID): Disperse (DI)
Literature aliases: Separate
Classification: Appetitive
Control type: Servo
Inputs: Police Force robots' poses
Actions: Locate PF robots' centroid; Turn 180 degrees away; Move forward until comfort zone
Releasers: Field Cover, Flocking ended
Inhibited by: Dispersion ready, Victim/Threat/Kin
Sequence diagram operations: Covering distants stage
Main references: [171, 23]

Table C.20: Field Cover behavior.

Behavior Name (ID): Field Cover (FC)
Literature aliases: Survey, Patrol, Swipe
Classification: Cooperative
Control type: Ballistic
Inputs: Waypoints list
Actions: Disperse; Locate open area; Safe Wander
Releasers: Dispersion ready
Inhibited by: Path Plan, Wall Follow, Report, Victim/Threat/Kin
Sequence diagram operations: Covering distants stage
Main references: [58]


Table C.21: Wall Follow behavior.

Behavior Name (ID): Wall Follow (WF)
Literature aliases: Boundary Follow
Classification: Explorative
Control type: Servo
Inputs: Laser readings, side to follow
Actions: Search for wall; Move forward
Releasers: Room detected
Inhibited by: Report, Victim/Threat/Kin
Sequence diagram operations: Covering distants stage
Main references: -

Table C.22: Escape behavior.

Behavior Name (ID): Escape (ESC)
Literature aliases: Stuck, Stall, Stasis, Low Battery, Damage
Classification: Protective
Control type: Ballistic
Inputs: Odometry data, battery level
Actions: If odometry anomaly, Locate open area; If located open area, translate a safe distance; If low battery, Seek home; If no improvement, set Trapped role
Releasers: Odometry anomaly, low battery
Inhibited by: Trapped role
Sequence diagram operations: All
Main references: [224]

Table C.23: Report behavior.

Behavior Name (ID): Report (REP)
Literature aliases: Communicate, Message
Classification: Cooperative
Control type: Ballistic
Inputs: Report content
Actions: Generate report template message using content; Send it to central station
Releasers: Victim/Threat/Kin inspected or aided
Inhibited by: Resume, Give Aid
Sequence diagram operations: All
Main references: [156, 272, 56, 222, 168]


Table C.24: Track behavior.

Behavior Name (ID): Track (TRA)
Literature aliases: Pursue, Hunt
Classification: Perceptive, Appetitive
Control type: Servo
Inputs: Object to track
Actions: Locate attribute/object; Hold attribute in line of sight (AVM or SURF); Drive Towards; Handle Collisions; Call for Inspect
Releasers: Victim/Threat found
Inhibited by: Inspect, Report
Sequence diagram operations: Approaching/Pursuing stage
Main references: [278], AVM tracking [97], SURF tracking [26]

Table C.25: Inspect behavior.

Behavior Name (ID): Inspect (INS)
Literature aliases: Analyze, Orbit, Extract Features
Classification: Perceptive
Control type: Ballistic
Inputs: Object to inspect
Actions: Predefined navigation routine surrounding object; Report attributes; Wait for central station decision
Releasers: Object to inspect reached
Inhibited by: Report, Give Aid
Sequence diagram operations: Analysis/Examination stage
Main references: -


Table C.26: Victim behavior.

Behavior Name (ID): Victim (VIC)
Literature aliases: Human Recognition, Face Recognition
Classification: Supportive
Control type: Ballistic
Inputs: Object attributes
Actions: Evaluate reported objects; If not reported, switch to Ambulance Team role; Call for Seek/Track, Approach, Inspect routine
Releasers: Visual recognition of victim
Inhibited by: Resume, Give Aid
Sequence diagram operations: Triggering recognition stage
Main references: [90, 224, 32, 20, 207]

Table C.27: Threat behavior.

Behavior Name (ID): Threat (TH)
Literature aliases: Threat Detected, Fire Detected, Hazmat Found
Classification: Supportive
Control type: Ballistic
Inputs: Object attributes
Actions: Evaluate reported objects; If not reported, switch to Firefighter Brigade role; Call for Seek/Track, Approach, Inspect routine
Releasers: Visual recognition of threat
Inhibited by: Resume, Give Aid
Sequence diagram operations: Triggering recognition stage
Main references: [224, 32, 116, 20]


Table C.28: Kin behavior.

Behavior Name (ID): Kin (K)
Literature aliases: Trapped Kin, Endangered Kin
Classification: Supportive
Control type: Ballistic
Inputs: Object attributes
Actions: Evaluate reported objects; If not reported, switch to Team Rescuer role; Call for Seek, Inspect routine
Releasers: Message of endangered kin
Inhibited by: Resume, Give Aid
Sequence diagram operations: Triggering recognition stage
Main references: [224]

Table C.29: Give Aid behavior.

Behavior Name (ID): Give Aid (GA)
Literature aliases: Help, Support, Relief
Classification: Supportive
Control type: Ballistic
Inputs: Object attributes and robot role
Actions: Determine appropriate aid; If available/possible, call for corresponding Aid-; If unavailable, call for Report
Releasers: Central station accepts to evaluate aid
Inhibited by: Aid-, Report
Sequence diagram operations: Aid determining stage
Main references: [80, 224, 204]


Table C.30: Aid- behavior.

Behavior Name (ID): Aid- (Ax)
Literature aliases: -
Classification: Supportive
Control type: Servo
Inputs: Object attributes
Actions: Include the possibility of rubble removal, fire extinguishing, displaying info, enabling two-way communications, sending alerts, transporting objects, or even in-situ medical assessment.
Releasers: Aid determined
Inhibited by: Aid finished or failed, Report
Sequence diagram operations: Support and Relief stage
Main references: [224, 204, 20, 268]

Table C.31: Impatient behavior.

Behavior Name (ID): Impatient (IMP)
Literature aliases: Timeout
Classification: Cooperative
Control type: Ballistic
Inputs: Current behavior, robot role, current global task
Actions: Increase impatience count. Call for Acquiescence.
Releasers: Manual triggering, reached timeout
Inhibited by: Acquiescent
Sequence diagram operations: All
Main references: [221]

Table C.32: Acquiescent behavior.

Behavior Name (ID): Acquiescent (ACQ)
Literature aliases: Relinquish
Classification: Cooperative
Control type: Ballistic
Inputs: Current behavior, robot role, current global task
Actions: Determine next behavior or state. Change to the new behavior.
Releasers: Impatient
Inhibited by: -
Sequence diagram operations: All
Main references: [221]


Table C.33: Unknown behavior.

Behavior Name (ID): Unknown (U)
Literature aliases: Failure, Damage, Malfunction, Trapped
Classification: Protective
Control type: Ballistic
Inputs: Error type
Actions: Stop motors. Report.
Releasers: Failure detected, Escape failed
Inhibited by: Manual triggering
Sequence diagram operations: All
Main references: [224]


Appendix D

Field Cover Behavior Composition

For this behavior we focus on the very basis of robotic exploration according to Yamauchi: "Given what you know about the world, where should you move to gain as much new information as possible?" [291]. Accordingly, we propose a behavior-based approach for multi-robot exploration that combines the simplicity and good performance of purely reactive control with some of the benefits of deliberative approaches, namely the ability to reason about the environment.

The proposed solution makes use of four robotic behaviors and a resulting emergent behavior.

D.1 Behavior 1: Avoid Obstacles

The first behavior is Avoid Obstacles. This protective behavior considers three particular conditions for maintaining the robot's integrity. The first condition is to check for possible corners in order to avoid getting stuck or spending unnecessary time there because of the avoid-past effect. The methodology for detecting corners is to check the distance measurements of 6 fixed laser points for each side (left, right, front) and, according to their values, determine whether there is a high probability of being at a corner. Multiple corner cases are considered: 1) if the corner has been detected at the left, the robot must turn right with a steering speed proportional to the angle at which the corner was detected; 2) if it has been detected at the right, the robot must turn left with a steering speed proportional to the angle at which the corner was detected; and 3) if the corner has been detected at the front, the robot must turn randomly to the right or left with a steering speed proportional to the distance to the corner. The next condition is to keep a safe distance to obstacles, steering away from them if it is still possible to avoid a collision, or translating backwards a fixed safe distance if obstacles are already too close. The third and final condition is to avoid teammates so as not to interfere or collide with them. Most of the time this is done by steering away from the nearby robot, but in some cases we found it useful to translate a fixed distance instead. The main reason for differentiating between teammates and moving obstacles is that a teammate can also be controlled, allowing a more efficient avoidance. Pseudocode for these operations is presented in Algorithm 1.


AvoidingObstacleAngle = 0;
Check the distance measurements of 18 different laser points (6 for left, 6 for front, and 6 for right) that imply a high probability of CornerDetected either in front, left or right;
if CornerDetected then
    AvoidingObstacleAngle = an orthogonal angle towards the detected corner side;
else
    Find the nearest obstacle location and distance within the laser scanner data;
    if Nearest Obstacle Distance < Aware of Obstacles Distance then
        if Nearest Obstacle Distance is too close then
            Do a fixed backwards translation to preserve the robot's integrity;
        else
            AvoidingObstacleAngle = an orthogonal angle towards the nearest obstacle location;
        end
    else
        if Any Kin's Distance < Aware of Kin Distance then
            With 30% chance, do a fixed translation to preserve the robot's integrity;
            With 70% chance, AvoidingObstacleAngle = an orthogonal angle towards the nearby kin's location;
        else
            Do nothing;
        end
    end
end
return AvoidingObstacleAngle;

Algorithm 1: Avoid Obstacles Pseudocode.
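The decision cascade of Algorithm 1 can be sketched in Python as follows. The distance thresholds, the corner-detector output, and the (action, angle) command encoding are illustrative assumptions for this sketch, not values specified in the thesis:

```python
import random

# Illustrative thresholds (meters), not taken from the thesis
AWARE_OF_OBSTACLES_DIST = 1.0
TOO_CLOSE_DIST = 0.3
AWARE_OF_KIN_DIST = 1.5

def avoid_obstacles(laser, kin_distances, corner_side=None, rng=random):
    """Return an avoidance command as (action, angle_deg).

    laser: list of (bearing_deg, distance) pairs from the scanner.
    kin_distances: list of distances to teammates.
    corner_side: 'left', 'right', 'front', or None (corner-detector output).
    """
    if corner_side == 'left':
        return ('steer', -90.0)                       # turn right, away from the corner
    if corner_side == 'right':
        return ('steer', 90.0)                        # turn left
    if corner_side == 'front':
        return ('steer', rng.choice((-90.0, 90.0)))   # random side

    bearing, dist = min(laser, key=lambda p: p[1])    # nearest obstacle
    if dist < AWARE_OF_OBSTACLES_DIST:
        if dist < TOO_CLOSE_DIST:
            return ('translate_back', 0.0)            # fixed backwards translation
        # orthogonal angle relative to the nearest obstacle bearing
        return ('steer', bearing - 90.0 if bearing >= 0 else bearing + 90.0)

    if kin_distances and min(kin_distances) < AWARE_OF_KIN_DIST:
        if rng.random() < 0.3:
            return ('translate_back', 0.0)            # 30% chance: fixed translation
        return ('steer', 90.0)                        # 70% chance: steer away from kin

    return ('steer', 0.0)                             # nothing to avoid
```

Corner detection subsumes the other checks, mirroring the pseudocode's ordering: corners first, then obstacles, then teammates.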


D.2 Behavior 2: Avoid Past

The second behavior, Avoid Past, is responsible for gathering the newest locations. This kind of explorative behavior was introduced by Balch and Arkin in [21] as a mechanism for avoiding local minima when navigating towards a goal. It was also proposed for autonomous exploration, but it led to a constant conflict of getting stuck in corners, hence the importance of the anticipated corner avoidance in the previous behavior. Additionally, the algorithm required a static discrete environment grid that must be known beforehand, which is not possible for unknown environments. Furthermore, the complexity of computing the vector that derives the updated potential field goes up to O(n²) for an n × n grid world. Thus, the higher the resolution of the world (smaller grid-cell size), the more computational power is required. Nevertheless, it is from this work, and from the experience reported in works such as [114], that we took the idea of enhancing reactivity with local spatial memory to produce our own algorithm.

Our Avoid Past does not suffer from the aforementioned problems. First of all, because of the simple corner recognition provided by Avoid Obstacles, we never get stuck nor spend unnecessary time in corners. Next, we use a hashtable data structure for storing the robot's traversed locations (the past). Basically, considering the size of the robots used, we adopt an implicit 1-meter grid discretization to which the actual robot position (x, y) is rounded. We then use a fixed number of digits for x and y to create the string "xy" as a key to the hashtable, which is queried and updated whenever the robot visits that location. Thus, each location has a unique key, allowing the hashtable to look up an element with complexity O(1), which is a property of this data structure. It is important to mention that this discretization can accommodate imperfect localization within the grid resolution, and we do not require any a-priori knowledge of the environment. To set the robot direction, a steering speed reaction is computed by evaluating the number of visits of the 3 front-neighbor (x, y) locations in the hashtable. These 3 neighbors depend on the robot orientation according to 8 possible 45° heading cases (ABC, BCD, CDE, DEF, EFG, FGH, GHA, HAB) shown in Figure D.1. Notice that evaluating 3 neighbors without a hashtable data structure would turn our location search complexity into O(n) for n stored locations, where n grows as exploration proceeds; thus the hashtable is very helpful. Additionally, we keep all operations with the 3 neighbors within IF-THEN conditional checks, leveraging simplicity and reduced computational cost. Pseudocode for these operations is presented in Algorithm 2.
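The visited-locations store described above can be sketched in a few lines of Python. The thesis only specifies "a fixed number of digits" for the key, so the exact format here (a sign plus four digits per coordinate) is an illustrative choice:

```python
visits = {}  # the "past": location key -> visit count, O(1) lookup/update

def location_key(x, y):
    """Round a pose to the implicit 1 m grid and build the fixed-width "xy" key."""
    gx, gy = int(round(x)), int(round(y))
    return f"{gx:+05d}{gy:+05d}"   # e.g. (1.2, 3.6) -> "+0001+0004"

def record_visit(x, y):
    """Query/update the hashtable whenever the robot visits a location."""
    key = location_key(x, y)
    visits[key] = visits.get(key, 0) + 1
    return visits[key]
```

Because every pose within the same 1 m cell maps to the same key, imperfect localization inside the grid resolution is absorbed by the rounding, as noted in the text.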

D.3 Behavior 3: Locate Open Area

The third behavior, named Locate Open Area, consists of an algorithm for locating the largest open area in which the robot's width fits. It uses a wandering rate that represents the frequency at which the robot must locate the open area, which is basically the biggest surface without obstacles perceived by the laser scanner. When this behavior is triggered, the robot stops moving and turns towards the open area to continue its navigation. This behavior represents the wandering factor of our exploration algorithm and proved very important for the obtained performance. For example, when the robot enters a small room, it


Figure D.1: 8 possible 45° heading cases with 3 neighbor waypoints to evaluate so as to define a CCW, CW or ZERO angular acceleration command. For example, if heading in the -45° case, the neighbors to evaluate are B, C and D, as left, center and right, respectively.

AvoidingPastAngle = 0;
Evaluate the neighbor waypoints according to the current heading angle;
if Neighbor Waypoint at the Center is Free and Unvisited then
    AvoidingPastAngle = 0;
else
    if Neighbor Waypoint at the Left is Free and Unvisited then
        AvoidingPastAngle = 45;
    else
        if Neighbor Waypoint at the Right is Free and Unvisited then
            AvoidingPastAngle = -45;
        else
            AvoidingPastAngle = an angle between -115 and 115 according to the visit-count proportions of the left, center and right neighbor waypoints;
        end
    end
end
return AvoidingPastAngle;

Algorithm 2: Avoid Past Pseudocode.


tends to be trapped within its past and the corners of the room; if this happens, there is still the chance of locating the exit as the largest open area and escaping from this situation in order to continue exploring. Pseudocode for these operations is presented in Algorithm 3.

Find the best heading as the middle laser point of a set of consecutive laser points that fit a safe width for the robot to traverse and have the biggest distance measurements;
if DistanceToBestHeading > SafeDistance then
    Do a turning action towards the determined best heading;
else
    Do nothing;
end

Algorithm 3: Locate Open Area Pseudocode.
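A minimal sketch of Algorithm 3's search over a laser scan. The beam geometry, the scan layout (middle reading facing front), and the parameter names are assumptions for illustration:

```python
import math

def locate_open_area(ranges, angle_step_deg, robot_width, safe_distance):
    """Return the best heading in degrees (0 = straight ahead), or None.

    ranges: laser distances ordered by bearing, middle reading facing front.
    A window of consecutive beams qualifies only if it spans the robot's
    width at safe_distance; its depth is its *smallest* reading.
    """
    # beams needed so the angular window clears robot_width at safe_distance
    span_deg = math.degrees(2 * math.atan2(robot_width / 2, safe_distance))
    needed = max(1, int(span_deg / angle_step_deg))

    best_heading, best_depth = None, safe_distance
    for i in range(len(ranges) - needed + 1):
        depth = min(ranges[i:i + needed])   # window is only as open as its nearest beam
        if depth > best_depth:
            best_depth = depth
            mid = i + needed // 2           # middle laser point of the set
            best_heading = (mid - len(ranges) // 2) * angle_step_deg
    return best_heading                     # None => "do nothing"
```

Returning `None` corresponds to the pseudocode's else-branch: if no window is deeper than the safe distance, the robot holds its current motion.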

D.4 Behavior 4: Disperse

The next operation is our cooperative behavior called Disperse. This behavior is inspired by the work of Mataric [173]. It activates only when two or more robots get into a predefined comfort zone. Thus, for m robots near each other in a pool of n robots, where m ≤ n, we call simple conditional checks to derive an appropriate dispersion action. This operation serves as the coordination mechanism for efficiently spreading the robots as well as for avoiding teammate interference. Even though it is not active at all times, if (and only if) it is triggered, a temporary O(m²) complexity is added to the model, which is dropped once the m involved robots have dispersed. The frequency of activation depends on the number of robots and the relative physical dimensions of the robots and the environment, which is important to consider before deployment decisions. Actions concerning this behavior include steering away from the nearest robot if m = 1, or steering away from the centroid of the group if m > 1; then a move-forward action is triggered until the robot leaves the defined near area or comfort zone. This behavior first checks for any possible avoid-obstacles action; if one exists, the dispersion effect is overridden until the robot's integrity is ensured. Pseudocode for these operations is presented in Algorithm 4.

D.5 Emergent Behavior: Field Cover

Last, with a Finite State Automaton (FSA) we achieve our Field Cover emergent behavior. In this emergent behavior, we fuse the outputs of the triggered behaviors with different strategies (either subsumption [49] or weighted summation [21]) according to the current state. As shown in Figure D.2, 2 states conform the FSA that results in coordinated autonomous exploration: Dispersing and ReadyToExplore. Initially, assuming that the robots are deployed together, the <if m robots near> condition is triggered so that the initial state is Dispersing. During this state, the Disperse and Avoid Obstacles behaviors take control of the outputs. As can be seen in Algorithm 4, the Avoid Obstacles behavior overrides (subsumes) any action from the Disperse behavior. This means that if any obstacle is detected, the main dispersion actions are suspended. An important thing to mention is that for this particular


if Any Avoid Obstacles condition is triggered then
    Do the avoiding-obstacle turning or translating action immediately (do not return an AvoidingObstacleAngle, but stop and turn the robot in-situ);
    // Doing this operation immediately, instead of fusing it with the Disperse behavior, resulted in a more efficient dispersion effect; this is why it is not treated as in the Avoid Obstacles behavior implementation.
else
    Determine the number of kins inside the Comfort Zone distance parameter;
    if Number of Kins inside Comfort Zone == 0 then
        return Status = ReadyToExplore;
    else
        Status = Dispersing;
        if Number of Kins inside Comfort Zone > 1 then
            Determine the centroid of all robots' poses;
            if Distance to Centroid < Dead Zone then
                Set DrivingSpeed equal to 1.5 * MaxDrivingSpeed, and do a turning action to an orthogonal angle towards the centroid location;
            else
                Set DrivingSpeed equal to MaxDrivingSpeed, and do a turning action to an orthogonal angle towards the centroid location;
            end
        else
            if Distance to Kin < Dead Zone then
                Set DrivingSpeed equal to 1.5 * MaxDrivingSpeed, and do a turning action to an orthogonal angle towards the kin location;
            else
                Set DrivingSpeed equal to MaxDrivingSpeed, and do a turning action to an orthogonal angle towards the kin location;
            end
        end
    end
end

Algorithm 4: Disperse Pseudocode.
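The core of Algorithm 4, once obstacle avoidance has been ruled out, can be sketched as below. For simplicity this sketch steers directly away from the kin or centroid rather than computing the thesis's orthogonal turning angle, and the parameter names are illustrative:

```python
import math

def disperse(my_pose, kin_poses, comfort_zone, dead_zone, max_speed):
    """Return (status, driving_speed, heading_away_deg) per Algorithm 4."""
    near = [p for p in kin_poses
            if math.hypot(p[0] - my_pose[0], p[1] - my_pose[1]) < comfort_zone]
    if not near:
        return ('ReadyToExplore', 0.0, 0.0)

    if len(near) > 1:                     # m > 1: flee the centroid of the group
        pts = near + [my_pose]
        target = (sum(x for x, _ in pts) / len(pts),
                  sum(y for _, y in pts) / len(pts))
    else:                                 # m == 1: flee the single nearby kin
        target = near[0]

    dist = math.hypot(target[0] - my_pose[0], target[1] - my_pose[1])
    speed = 1.5 * max_speed if dist < dead_zone else max_speed  # boost inside dead zone
    away = math.degrees(math.atan2(my_pose[1] - target[1],
                                   my_pose[0] - target[0]))     # heading away from target
    return ('Dispersing', speed, away)
```

The 1.5x speed boost inside the dead zone mirrors the pseudocode's more aggressive escape when robots are dangerously close.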


state, we observed that immediately stopping and turning towards the AvoidingObstacleAngle (or translating to safety as the Avoid Obstacles behavior commands) was more efficient for getting all robots dispersed than returning a desired angle as the behavior is normally implemented.

Then, once all the robots have been dispersed, the <if m robots dispersed> condition is triggered so that the new state becomes ReadyToExplore. In this state, two main actions can happen. First, if the wandering rate is triggered, the Locate Open Area behavior is activated, subsuming any other action apart from turning towards the determined best heading if appropriate, or holding the current driving and steering speeds, which means doing/changing nothing (refer to Algorithm 3). Second, if the wandering rate is not triggered, we fuse the outputs of the Avoid Obstacles and Avoid Past behaviors in a weighted summation. This summation requires a careful balance between the behaviors' gains, for which the most important condition is to establish an appropriate AvoidPastGain < AvoidObstaclesGain relation [21]. In this way, with this simple 2-state FSA, we ensure that robots are constantly commanded to spread and explore the environment. Thus, this FSA constitutes the deliberative part of our algorithm, since it decides which behaviors are best for a given situation; combining it with the behaviors' outputs leads to a hybrid solution such as the one presented in [139], with the main difference that we do not calculate any forces or potential fields, nor have any sequential targets, thus reducing complexity and avoiding typical local-minima problems. Pseudocode for these operations is presented in Algorithm 5.

Figure D.2: Implemented 2-state Finite State Automaton for autonomous exploration.


if Status = Dispersing then
    Disperse;
else
    if Wandering Rate triggers then
        LocateOpenArea;
    else
        Get the current AvoidingPastAngle and AvoidingObstacleAngle;
        // This is to do smoother turning reactions with larger distances towards obstacles
        if Distance to Nearest Obstacle in Front < Aware of Obstacles Distance then
            DrivingSpeedFactor = DistanceToNearestObstacleInFront / AwareOfObstaclesDistance;
        else
            DrivingSpeedFactor = 0;
        end
        DrivingSpeed = DrivingGain * MaxDrivingSpeed * (1 - DrivingSpeedFactor);
        // Here is the fusion (weighted summation) for simultaneous obstacle and past avoidance
        SteeringSpeed = SteeringGain * ((AvoidingPastAngle * AvoidPastGain + AvoidingObstacleAngle * AvoidObstaclesGain) / 2);
        Ensure driving and steering velocities are within max and min possible values;
        Set the driving and steering velocities;
    end
    if m robots near then
        Status = Dispersing;
    end
end

Algorithm 5: Field Cover Pseudocode.
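The weighted-summation branch of Algorithm 5 (the ReadyToExplore state when the wandering rate does not trigger) can be sketched as follows. The gain values and velocity limits are illustrative defaults, chosen only to satisfy the AvoidPastGain < AvoidObstaclesGain relation; the speed-factor convention is reproduced as written in the pseudocode:

```python
def fuse_exploration_outputs(avoiding_past_angle, avoiding_obstacle_angle,
                             dist_front, aware_dist,
                             driving_gain=1.0, steering_gain=1.0,
                             avoid_past_gain=0.4, avoid_obstacles_gain=1.0,
                             max_driving_speed=0.5, max_steering_speed=90.0):
    """Return (driving_speed, steering_speed) from the weighted summation."""
    # speed factor as written in Algorithm 5: zero when no frontal obstacle is near
    factor = dist_front / aware_dist if dist_front < aware_dist else 0.0
    driving = driving_gain * max_driving_speed * (1.0 - factor)
    # fusion (weighted summation) for simultaneous obstacle and past avoidance
    steering = steering_gain * ((avoiding_past_angle * avoid_past_gain +
                                 avoiding_obstacle_angle * avoid_obstacles_gain) / 2.0)
    # ensure velocities stay within the allowed bounds
    driving = max(0.0, min(driving, max_driving_speed))
    steering = max(-max_steering_speed, min(steering, max_steering_speed))
    return driving, steering
```

Because the obstacle gain dominates, a large AvoidingObstacleAngle pulls the fused steering command towards safety even when the past-avoidance term points elsewhere.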


Bibliography

[1] ABOUAF, J. Trial by fire: teleoperated robot targets Chernobyl. Computer Graphics and Applications, IEEE 18, 4 (jul/aug 1998), 10–14.

[2] ALAMI, R., CHATILA, R., FLEURY, S., GHALLAB, M., AND INGRAND, F. An architecture for autonomy. International Journal of Robotics Research 17 (1998), 315–337.

[3] ALI, S., AND MERTSCHING, B. Towards a generic control architecture of rescue robot systems. In Safety, Security and Rescue Robotics, 2008. SSRR 2008. IEEE International Workshop on (oct. 2008), pp. 89–94.

[4] ALNOUNOU, Y., HAIDAR, M., PAULIK, M., AND AL-HOLOU, N. Service-oriented architecture: On the suitability for mobile robots. In Electro/Information Technology (EIT), 2010 IEEE International Conference on (may 2010), pp. 1–5.

[5] ALTSHULER, Y., YANOVSKI, V., WAGNER, I., AND BRUCKSTEIN, A. Swarm ant robotics for a dynamic cleaning problem - analytic lower bounds and impossibility results. In Autonomous Robots and Agents, 2009. ICARA 2009. 4th International Conference on (feb. 2009), pp. 216–221.

[6] AMIGONI, F. Experimental evaluation of some exploration strategies for mobile robots. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on (may 2008), pp. 2818–2823.

[7] ANDERSON, M., AND PAPANIKOLOPOULOS, N. Implicit cooperation strategies for multi-robot search of unknown areas. Journal of Intelligent Robotics Systems 53 (December 2008), 381–397.

[8] ANDRILUKA, M., FRIEDMANN, M., KOHLBRECHER, S., MEYER, J., PETERSEN, K., REINL, C., SCHAUSS, P., SCHNITZPAN, P., STROBEL, A., THOMAS, D., AND VON STRYK, O. Robocuprescue 2009 - robot league team: Darmstadt rescue robot team (Germany), 2009. Institut für Flugsysteme und Regelungstechnik.

[9] ANGERMANN, M., KHIDER, M., AND ROBERTSON, P. Towards operational systems for continuous navigation of rescue teams. In Position, Location and Navigation Symposium, 2008 IEEE/ION (may 2008), pp. 153–158.


[10] ARKIN, R., AND DIAZ, J. Line-of-sight constrained exploration for reactive multiagent robotic teams. In Advanced Motion Control, 2002. 7th International Workshop on (2002), pp. 455–461.

[11] ARKIN, R. C. Behavior-Based Robotics. The MIT Press, 1998.

[12] ARKIN, R. C., AND BALCH, T. Aura: Principles and practice in review. Journal of Experimental and Theoretical Artificial Intelligence 9 (1997), 175–189.

[13] ARRICHIELLO, F., HEIDARSSON, H., CHIAVERINI, S., AND SUKHATME, G. S. Cooperative caging using autonomous aquatic surface vehicles. In Robotics and Automation (ICRA), 2010 IEEE International Conference on (may 2010), pp. 4763–4769.

[14] ASAMA, H., HADA, Y., KAWABATA, K., NODA, I., TAKIZAWA, O., MEGURO, J., ISHIKAWA, K., HASHIZUME, T., OHGA, T., TAKITA, K., HATAYAMA, M., MATSUNO, F., AND TADOKORO, S. Rescue Robotics. DDT Project on Robots and Systems for Urban Search and Rescue. Springer, March 2009, ch. 4. Information Infrastructure for Rescue System, pp. 57–70.

[15] AURENHAMMER, F., AND KLEIN, R. Voronoi diagrams. In Handbook of Computational Geometry, J.-R. Sack and J. Urrutia, Eds. Elsevier Science B.V. / North-Holland, Amsterdam, 2000, ch. 5, pp. 201–290.

[16] BADANO, B. M. I. A Multi-Agent Architecture with Distributed Coordination for an Autonomous Robot. PhD thesis, Universitat de Girona, 2008.

[17] BALAGUER, B., BALAKIRSKY, S., CARPIN, S., LEWIS, M., AND SCRAPPER, C. Usarsim: a validated simulator for research in robotics and automation. In IEEE/RSJ IROS (2008).

[18] BALAKIRSKY, S. Usarsim: Providing a framework for multi-robot performance evaluation. In Proceedings of PerMIS (2006), pp. 98–102.

[19] BALAKIRSKY, S., CARPIN, S., KLEINER, A., LEWIS, M., VISSER, A., WANG, J., AND ZIPARO, V. A. Towards heterogeneous robot teams for disaster mitigation: Results and performance metrics from robocup rescue. Journal of Field Robotics 24, 11-12 (2007), 943–967.

[20] BALAKIRSKY, S., CARPIN, S., AND LEWIS, M. Robots, games, and research: success stories in usarsim. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (Piscataway, NJ, USA, 2009), IROS'09, IEEE Press, pp. 1–1.

[21] BALCH, T. Avoiding the past: a simple but effective strategy for reactive navigation. In Robotics and Automation, 1993. Proceedings., 1993 IEEE International Conference on (may 1993), vol. 1, pp. 678–685.


[22] BALCH, T. The impact of diversity on performance in multi-robot foraging. In Proc. Autonomous Agents 99 (1999), ACM Press, pp. 92–99.

[23] BALCH, T., AND ARKIN, R. Behavior-based formation control for multirobot teams. Robotics and Automation, IEEE Transactions on 14, 6 (dec 1998), 926–939.

[24] BALCH, T., AND HYBINETTE, M. Social potentials for scalable multi-robot formations. In Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on (2000), vol. 1, pp. 73–80.

[25] BASILICO, N., AND AMIGONI, F. Defining effective exploration strategies for search and rescue applications with multi-criteria decision making. In Robotics and Automation (ICRA), 2011 IEEE International Conference on (may 2011), pp. 4260–4265.

[26] BAY, H., ESS, A., TUYTELAARS, T., AND VAN GOOL, L. Speeded-up robust features (surf). Comput. Vis. Image Underst. 110, 3 (June 2008), 346–359.

[27] BEARD, R., MCLAIN, T., GOODRICH, M., AND ANDERSON, E. Coordinated target assignment and intercept for unmanned air vehicles. Robotics and Automation, IEEE Transactions on 18, 6 (dec 2002), 911–922.

[28] BECKERS, R., HOLL, O. E., AND DENEUBOURG, J. L. From local actions to global tasks: Stigmergy and collective robotics. In Proc. 14th Int. Workshop Synth. Simul. Living Syst. (1994), R. Brooks and P. Maes, Eds., MIT Press, pp. 181–189.

[29] BEKEY, G. A. Autonomous Robots: From Biological Inspiration to Implementation and Control. The MIT Press, 2005.

[30] BENI, G. The concept of cellular robotic system. In Intelligent Control, 1988. Proceedings., IEEE International Symposium on (aug 1988), pp. 57–62.

[31] BERHAULT, M., HUANG, H., KESKINOCAK, P., KOENIG, S., ELMAGHRABY, W., GRIFFIN, P., AND KLEYWEGT, A. Robot exploration with combinatorial auctions. In Intelligent Robots and Systems, 2003. (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on (oct. 2003), vol. 2, pp. 1957–1962.

[32] BETHEL, C., AND MURPHY, R. R. Survey of non-facial/non-verbal affective expressions for appearance-constrained robots. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 38, 1 (jan. 2008), 83–92.

[33] BIRK, A., AND CARPIN, S. Rescue robotics - a crucial milestone on the road to autonomous systems. Advanced Robotics Journal 20, 5 (2006), 595–605.

[34] BIRK, A., AND KENN, H. A control architecture for a rescue robot ensuring safe semi-autonomous operation. In RoboCup-02: Robot Soccer World Cup VI, G. Kaminka, P. Lima, and R. Rojas, Eds., LNAI. Springer, 2002.


[35] BIRK, A., AND PFINGSTHORN, M. An HMI supporting adjustable autonomy of rescue robots. In RoboCup 2005: Robot World Cup IX, I. Noda, A. Jacoff, A. Bredenfeld, and Y. Takahashi, Eds., vol. 4020 of Lecture Notes in Artificial Intelligence (LNAI). Springer, 2006, pp. 255–266.

[36] BIRK, A., SCHWERTFEGER, S., AND PATHAK, K. A networking framework for teleoperation in safety, security, and rescue robotics. Wireless Communications, IEEE 16, 1 (february 2009), 6–13.

[37] BLITCH, J. G. Artificial intelligence technologies for robot assisted urban search and rescue. Expert Systems with Applications 11, 2 (1996), 109–124. Army Applications of Artificial Intelligence.

[38] BOHN, H., BOBEK, A., AND GOLATOWSKI, F. Sirena - service infrastructure for real-time embedded networked devices: A service oriented framework for different domains. In International Conference on Networking (ICN) (2006).

[39] BOONPINON, N., AND SUDSANG, A. Constrained coverage for heterogeneous multi-robot team. In Robotics and Biomimetics, 2007. ROBIO 2007. IEEE International Conference on (dec. 2007), pp. 799–804.

[40] BORENSTEIN, J., AND BORRELL, A. The omnitread ot-4 serpentine robot. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on (may 2008), pp. 1766–1767.

[41] BORENSTEIN, J., AND KOREN, Y. The vector field histogram - fast obstacle avoidance for mobile robots. Robotics and Automation, IEEE Transactions on 7, 3 (jun 1991), 278–288.

[42] BOTELHO, S. C., AND ALAMI, R. A multi-robot cooperative task achievement system. In Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on (2000), vol. 3, pp. 2716–2721.

[43] BOURGAULT, F., MAKARENKO, A., WILLIAMS, S., GROCHOLSKY, B., AND DURRANT-WHYTE, H. Information based adaptive robotic exploration. In Intelligent Robots and Systems, 2002. IEEE/RSJ International Conference on (2002), vol. 1, pp. 540–545.

[44] BOWEN, D., AND MACKENZIE, S. Autonomous collaborative unmanned vehicles: Technological drivers and constraints. Tech. rep., Defence Research and Development Canada, 2003.

[45] BRADSKI, G. The OpenCV Library. Dr. Dobb's Journal of Software Tools (2000).

[46] BREIVOLD, H., AND LARSSON, M. Component-based and service-oriented software engineering: Key concepts and principles. In Software Engineering and Advanced Applications, 2007. 33rd EUROMICRO Conference on (aug. 2007), pp. 13–20.


[47] BROOKS, A., KAUPP, T., MAKARENKO, A., WILLIAMS, S., AND OREBACK, A. Towards component-based robotics. In Intelligent Robots and Systems (IROS). IEEE/RSJ International Conference on (aug. 2005), pp. 163–168.

[48] BROOKS, A., KAUPP, T., MAKARENKO, A., WILLIAMS, S., AND OREBACK, A. Orca: A component model and repository. In Software Engineering for Experimental Robotics, D. Brugali, Ed., vol. 30 of Springer Tracts in Advanced Robotics. Springer-Verlag, Berlin / Heidelberg, April 2007.

[49] BROOKS, R. A robust layered control system for a mobile robot. Robotics and Automation, IEEE Journal of 2, 1 (mar 1986), 14–23.

[50] BROOKS, R. Intelligence without representation. MIT Artificial Intelligence Report 47 (1987), 1–12.

[51] BROOKS, R. A robot that walks; emergent behaviors from a carefully evolved network. In Robotics and Automation, 1989. Proceedings., 1989 IEEE International Conference on (may 1989), vol. 2, pp. 692–698.

[52] BROOKS, R. Elephants don't play chess. Robotics and Autonomous Systems 6, 1-2 (1990), 3–15.

[53] BROOKS, R. Intelligence without reason. In Computers and Thought, IJCAI-91 (1991), Morgan Kaufmann, pp. 569–595.

[54] BROOKS, R., AND FLYNN, A. M. Fast, cheap and out of control: A robot invasion of the solar system. The British Interplanetary Society 42, 10 (1989), 478–485.

[55] BRUGALI, D., Ed. Software Engineering for Experimental Robotics, vol. 30 of Springer Tracts in Advanced Robotics. Springer-Verlag, Berlin / Heidelberg, April 2007.

[56] BUI, T., AND TAN, A. A template-based methodology for large-scale ha/dr involving ephemeral groups - a workflow perspective. In System Sciences, 2007. HICSS 2007. 40th Annual Hawaii International Conference on (jan. 2007), p. 34.

[57] BURGARD, W., MOORS, M., FOX, D., SIMMONS, R., AND THRUN, S. Collaborative multi-robot exploration. In Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on (2000), vol. 1, pp. 476–481.

[58] BURGARD, W., MOORS, M., STACHNISS, C., AND SCHNEIDER, F. Coordinated multi-robot exploration. Robotics, IEEE Transactions on 21, 3 (june 2005), 376–386.

[59] BUTLER, Z., RIZZI, A., AND HOLLIS, R. Cooperative coverage of rectilinear environments. In Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on (2000), vol. 3, pp. 2722–2727.

[60] CALISI, D., FARINELLI, A., IOCCHI, L., AND NARDI, D. Multi-objective exploration and search for autonomous rescue robots. J. Field Robotics 24, 8-9 (2007), 763–777.


[61] CALISI, D., NARDI, D., OHNO, K., AND TADOKORO, S. A semi-autonomous tracked robot system for rescue missions. In SICE Annual Conference, 2008 (aug. 2008), pp. 2066–2069.

[62] CALOUD, P., CHOI, W., LATOMBE, J. C., LE PAPE, C., AND YIM, M. Indoor automation with many mobile robots. In Intelligent Robots and Systems '90. 'Towards a New Frontier of Applications', Proceedings. IROS '90. IEEE International Workshop on (jul 1990), pp. 67–72.

[63] CAO, Y. U., FUKUNAGA, A. S., AND KAHNG, A. Cooperative mobile robotics: Antecedents and directions. Autonomous Robots 4 (1997), 7–27.

[64] CAO, Z., TAN, M., LI, L., GU, N., AND WANG, S. Cooperative hunting by distributed mobile robots based on local interaction. Robotics, IEEE Transactions on 22, 2 (april 2006), 402–406.

[65] CARLSON, J., AND MURPHY, R. R. How UGVs physically fail in the field. Robotics, IEEE Transactions on 21, 3 (june 2005), 423–437.

[66] CARPIN, S., AND BIRK, A. Stochastic map merging in noisy rescue environments. In RoboCup 2004: Robot Soccer World Cup VIII, D. Nardi, M. Riedmiller, and C. Sammut, Eds., vol. 3276 of Lecture Notes in Artificial Intelligence (LNAI). Springer, 2005, p. 483ff.

[67] CARPIN, S., WANG, J., LEWIS, M., BIRK, A., AND JACOFF, A. High fidelity tools for rescue robotics: Results and perspectives. In RoboCup (2005), A. Bredenfeld, A. Jacoff, I. Noda, and Y. Takahashi, Eds., vol. 4020 of Lecture Notes in Computer Science, Springer, pp. 301–311.

[68] CASPER, J., AND MURPHY, R. R. Human-robot interactions during the robot-assisted urban search and rescue response at the world trade center. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on 33, 3 (june 2003), 367–385.

[69] CASPER, J. L., MICIRE, M., AND MURPHY, R. R. Issues in intelligent robots for search and rescue. In Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series (jul 2000), G. R. Gerhart and R. W. Gunderson, Eds., vol. 4024, pp. 292–302.

[70] CEPEDA, J. S., CHAIMOWICZ, L., AND SOTO, R. Exploring Microsoft Robotics Studio as a mechanism for service-oriented robotics. Latin American Robotics Symposium and Intelligent Robotics Meeting 0 (2010), 7–12.

[71] CEPEDA, J. S., CHAIMOWICZ, L., SOTO, R., GORDILLO, J., ALANIS-REYES, E., AND CARRILLO-ARCE, L. C. A behavior-based strategy for single and multi-robot autonomous exploration. Sensors Special Issue: New Trends towards Automatic Vehicle Control and Perception Systems (2012), 12772–12797.

[72] CEPEDA, J. S., SOTO, R., GORDILLO, J., AND CHAIMOWICZ, L. Towards a service-oriented architecture for teams of heterogeneous autonomous robots. In Artificial Intelligence (MICAI), 2011 10th Mexican International Conference on (nov. 26 - dec. 4 2011), pp. 102–108.

[73] CESETTI, A., SCOTTI, C. P., DI BUO, G., AND LONGHI, S. A service oriented architecture supporting an autonomous mobile robot for industrial applications. In Control Automation (MED), 8th Mediterranean Conference on (june 2010), pp. 604–609.

[74] CHAIMOWICZ, L. Dynamic Coordination of Cooperative Robots: A Hybrid Systems Approach. PhD thesis, Universidade Federal de Minas Gerais, 2002.

[75] CHAIMOWICZ, L., CAMPOS, M., AND KUMAR, V. Dynamic role assignment for cooperative robots. In Robotics and Automation, 2002. Proceedings. ICRA '02. IEEE International Conference on (2002), vol. 1, pp. 293–298.

[76] CHAIMOWICZ, L., COWLEY, A., GROCHOLSKY, B., HSIEH, M. A., KELLER, J. F., KUMAR, V., AND TAYLOR, C. J. Deploying air-ground multi-robot teams in urban environments. In Proceedings of the Third Multi-Robot Systems Workshop (Washington D. C., March 2005).

[77] CHAIMOWICZ, L., COWLEY, A., SABELLA, V., AND TAYLOR, C. J. Roci: a distributed framework for multi-robot perception and control. In Intelligent Robots and Systems, 2003. (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on (oct. 2003), vol. 1, pp. 266–271.

[78] CHAIMOWICZ, L., KUMAR, V., AND CAMPOS, M. F. M. A paradigm for dynamic coordination of multiple robots. Autonomous Robots 17 (2004), 7–21.

[79] CHAIMOWICZ, L., MICHAEL, N., AND KUMAR, V. Controlling swarms of robots using interpolated implicit functions. In Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on (april 2005), pp. 2487–2492.

[80] CHANG, C., AND MURPHY, R. R. Towards robot-assisted mass-casualty triage. In Networking, Sensing and Control, 2007 IEEE International Conference on (april 2007), pp. 267–272.

[81] CHEEMA, U. Expert systems for earthquake damage assessment. Aerospace and Electronic Systems Magazine, IEEE 22, 9 (sept. 2007), 6–10.

[82] CHEN, Y., AND BAI, X. On robotics applications in service-oriented architecture. In Distributed Computing Systems Workshops, 2008. ICDCS '08. 28th International Conference on (june 2008), pp. 551–556.

[83] CHIA, E. S. Engineering disaster relief. Technology and Society Magazine, IEEE 26, 3 (fall 2007), 24–29.

[84] CHOMPUSRI, Y., KHUEANSUWONG, P., DUANGKAW, A., PHOTSATHIAN, T., JUNLEE, S., NAMVONG, N., AND SUTHAKORN, J. Robocuprescue 2006 - robot league team: Independent (thailand), 2006.

[85] CHONNAPARAMUTT, W., AND BIRK, A. A new mechatronic component for adjusting the footprint of tracked rescue robots. In RoboCup 2006: Robot Soccer World Cup X, G. Lakemeyer, E. Sklar, D. Sorrenti, and T. Takahashi, Eds., vol. 4434 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg, 2007, pp. 450–457.

[86] CHOSET, H. Coverage for robotics - a survey of recent results. Annals of Mathematics and Artificial Intelligence 31, 1-4 (May 2001), 113–126.

[87] CHUENGSATIANSUP, K., SAJJAPONGSE, K., KRUAPRADITSIRI, P., CHANMA, C., TERMTHANASOMBAT, N., SUTTASUPA, Y., SATTARATNAMAI, S., PONGKAEW, E., UDSATID, P., HATTHA, B., WIBULPOLPRASERT, P., USAPHAPANUS, P., TULYANON, N., WONGSAISUWAN, M., WANNASUPHOPRASIT, W., AND CHONGSTITVATANA, P. Plasma-rx: Autonomous rescue robots. In Robotics and Biomimetics, 2008. ROBIO 2008. IEEE International Conference on (feb. 2009), pp. 1986–1990.

[88] CLARK, J., AND FIERRO, R. Cooperative hybrid control of robotic sensors for perimeter detection and tracking. In American Control Conference, 2005. Proceedings of the 2005 (june 2005), vol. 5, pp. 3500–3505.

[89] CORRELL, N., AND MARTINOLI, A. Robust distributed coverage using a swarm of miniature robots. In Robotics and Automation, 2007 IEEE International Conference on (april 2007), pp. 379–384.

[90] DALAL, N., AND TRIGGS, B. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05) (2005), vol. 1, pp. 886–893.

[91] DAVIDS, A. Urban search and rescue robots: from tragedy to technology. Intelligent Systems, IEEE 17, 2 (march-april 2002), 81–83.

[92] DE HOOG, J., CAMERON, S., AND VISSER, A. Role-based autonomous multi-robot exploration. In Future Computing, Service Computation, Cognitive, Adaptive, Content, Patterns, 2009. COMPUTATIONWORLD '09. Computation World (nov. 2009), pp. 482–487.

[93] DIAS, M., ZLOT, R., KALRA, N., AND STENTZ, A. Market-based multirobot coordination: A survey and analysis. Proceedings of the IEEE 94, 7 (july 2006), 1257–1270.

[94] DISSANAYAKE, M., NEWMAN, P., CLARK, S., DURRANT-WHYTE, H., AND CSORBA, M. A solution to the simultaneous localization and map building (slam) problem. Robotics and Automation, IEEE Transactions on 17, 3 (jun 2001), 229–241.

[95] DUDEK, G., JENKIN, M. R. M., MILIOS, E., AND WILKES, D. A taxonomy for multi-agent robotics. Autonomous Robots 3, 4 (1996), 375–397.

[96] EMGUCV. Emgu cv, a cross platform .net wrapper to the opencv image processing library [online]: http://www.emgu.com/, 2012.

[97] EREMEEV, D. Library avm sdk simple.net [online]: http://edv-detail.narod.ru/library_avm_sdk_simple_net.html, 2012.

[98] ERMAN, A., HOESEL, L., HAVINGA, P., AND WU, J. Enabling mobility in heterogeneous wireless sensor networks cooperating with uavs for mission-critical management. Wireless Communications, IEEE 15, 6 (december 2008), 38–46.

[99] FARINELLI, A., IOCCHI, L., AND NARDI, D. Multirobot systems: a classification focused on coordination. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on 34, 5 (oct. 2004), 2015–2028.

[100] FLOCCHINI, P., KELLETT, M., MASON, P., AND SANTORO, N. Map construction and exploration by mobile agents scattered in a dangerous network. In Parallel Distributed Processing, 2009. IPDPS 2009. IEEE International Symposium on (may 2009), pp. 1–10.

[101] FOX, D., KO, J., KONOLIGE, K., LIMKETKAI, B., SCHULZ, D., AND STEWART, B. Distributed multirobot exploration and mapping. Proceedings of the IEEE 94, 7 (july 2006), 1325–1339.

[102] FUKUDA, T., AND IRITANI, G. Evolutional and self-organizing robots - artificial life in robotics. In Emerging Technologies and Factory Automation, 1994. ETFA '94., IEEE Symposium on (nov 1994), pp. 10–19.

[103] FURGALE, P., AND BARFOOT, T. Visual path following on a manifold in unstructured three-dimensional terrain. In Robotics and Automation (ICRA), 2010 IEEE International Conference on (may 2010), pp. 534–539.

[104] GAGE, D. W. Sensor abstractions to support many-robot systems. In Proceedings of SPIE Mobile Robots VII (1992), pp. 235–246.

[105] GAGE, D. W. Randomized search strategies with imperfect sensors. In Proceedings of SPIE Mobile Robots VIII (1993), pp. 270–279.

[106] GALLUZZO, T., AND KENT, D. The joint architecture for unmanned systems (jaus) [online]: http://www.openjaus.com, 2012.

[107] WILLOW GARAGE. Ros framework [online]: http://www.ros.org/, 2012.

[108] GARCIA, R. D., VALAVANIS, K. P., AND KONTITSIS, M. A multiplatform on-board processing system for miniature unmanned vehicles. In ICRA (2006), pp. 2156–2163.

[109] GAZI, V. Swarm aggregations using artificial potentials and sliding-mode control. Robotics, IEEE Transactions on 21, 6 (dec. 2005), 1208–1214.

[110] GERKEY, B. P. A formal analysis and taxonomy of task allocation in multi-robot systems. The International Journal of Robotics Research 23, 9 (2004), 939–954.

[111] GERKEY, B. P., AND MATARIC, M. J. Murdoch: Publish/Subscribe Task Allocation for Heterogeneous Agents. ACM Press, 2000, pp. 203–204.

[112] GERKEY, B. P., AND MATARIC, M. J. Sold!: auction methods for multirobot coordination. Robotics and Automation, IEEE Transactions on 18, 5 (oct 2002), 758–768.

[113] GERKEY, B. P., VAUGHAN, R. T., STØY, K., HOWARD, A., SUKHATME, G. S., AND MATARIC, M. J. Most valuable player: A robot device server for distributed control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robotic Systems (IROS) (Wailea, Hawaii, November 2001), IEEE.

[114] GIFFORD, C., WEBB, R., BLEY, J., LEUNG, D., CALNON, M., MAKAREWICZ, J., BANZ, B., AND AGAH, A. Low-cost multi-robot exploration and mapping. In Technologies for Practical Robot Applications, 2008. TePRA 2008. IEEE International Conference on (nov. 2008), pp. 74–79.

[115] GONZALEZ-BANOS, H. H., AND LATOMBE, J.-C. Navigation strategies for exploring indoor environments. I. J. Robotic Res. 21, 10-11 (2002), 829–848.

[116] GOSSOW, D., PELLENZ, J., AND PAULUS, D. Danger sign detection using color histograms and surf matching. In Safety, Security and Rescue Robotics, 2008. SSRR 2008. IEEE International Workshop on (oct. 2008), pp. 13–18.

[117] GRABOWSKI, R., NAVARRO-SERMENT, L., PAREDIS, C., AND KHOSLA, P. Heterogeneous teams of modular robots for mapping and exploration. Autonomous Robots - Special Issue on Heterogeneous Multirobot Systems 8, 3 (1999), 271–298.

[118] GRANT, L. L., AND VENAYAGAMOORTHY, G. K. Swarm Intelligence for Collective Robotic Search. No. 177. Springer, 2009, p. 29.

[119] GROCHOLSKY, B., BAYRAKTAR, S., KUMAR, V., TAYLOR, C. J., AND PAPPAS, G. Synergies in feature localization by air-ground robot teams. In Proc. 9th Int. Symp. Experimental Robotics (ISER '04) (2004), pp. 353–362.

[120] GROCHOLSKY, B., SWAMINATHAN, R., KELLER, J., KUMAR, V., AND PAPPAS, G. Information driven coordinated air-ground proactive sensing. In Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on (april 2005), pp. 2211–2216.

[121] GUARNIERI, M., KURAZUME, R., MASUDA, H., INOH, T., TAKITA, K., DEBENEST, P., HODOSHIMA, R., FUKUSHIMA, E., AND HIROSE, S. Helios system: A team of tracked robots for special urban search and rescue operations. In Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on (oct. 2009), pp. 2795–2800.

[122] GUIZZO, E. Robots with their heads in the clouds. Spectrum, IEEE 48, 3 (march 2011), 16–18.

[123] HATAZAKI, K., KONYO, M., ISAKI, K., TADOKORO, S., AND TAKEMURA, F. Active scope camera for urban search and rescue. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on (oct. 29 - nov. 2 2007), pp. 2596–2602.

[124] HEGER, F., AND SINGH, S. Sliding autonomy for complex coordinated multi-robot tasks: Analysis & experiments. In Proceedings of Robotics: Science and Systems (Philadelphia, USA, August 2006).

[125] HELLOAPPS. Ms robotics helloapps [online]: http://www.helloapps.com/, 2012.

[126] HOLLINGER, G., SINGH, S., AND KEHAGIAS, A. Efficient, guaranteed search with multi-agent teams. In Proceedings of Robotics: Science and Systems (Seattle, USA, June 2009).

[127] HOLZ, D., BASILICO, N., AMIGONI, F., AND BEHNKE, S. Evaluating the efficiency of frontier-based exploration strategies. In Robotics (ISR), 2010 41st International Symposium on and 2010 6th German Conference on Robotics (ROBOTIK) (june 2010), pp. 1–8.

[128] HOWARD, A., MATARIC, M. J., AND SUKHATME, G. S. An incremental self-deployment algorithm for mobile sensor networks. Auton. Robots 13 (September 2002), 113–126.

[129] HOWARD, A., MATARIC, M. J., AND SUKHATME, G. S. Mobile sensor network deployment using potential fields: A distributed, scalable solution to the area coverage problem. In Distributed Autonomous Robotic Systems (2002).

[130] HOWARD, A., PARKER, L. E., AND SUKHATME, G. S. Experiments with a large heterogeneous mobile robot team: Exploration, mapping, deployment and detection. The International Journal of Robotics Research 25, 5-6 (2006), 431–447.

[131] HSIEH, M. A., COWLEY, A., KELLER, J. F., CHAIMOWICZ, L., GROCHOLSKY, B., KUMAR, V., TAYLOR, C. J., ENDO, Y., ARKIN, R. C., JUNG, B., ET AL. Adaptive teams of autonomous aerial and ground robots for situational awareness. Journal of Field Robotics 24, 11-12 (2007), 991–1014.

[132] HSIEH, M. A., COWLEY, A., KUMAR, V., AND TAYLOR, C. Towards the deployment of a mobile robot network with end-to-end performance guarantees. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on (may 2006), pp. 2085–2090.

[133] HUNG, W.-H., LIU, P., AND KANG, S.-C. Service-based simulator for security robot. In Advanced Robotics and Its Social Impacts, 2008. ARSO 2008. IEEE Workshop on (aug. 2008), pp. 1–3.

[134] DR ROBOT, INC. Extend your imagination: Jaguar platform specification [online]: http://jaguar.drrobot.com/specification.asp, 2012.

[135] JACKSON, J. Microsoft robotics studio: A technical introduction. Robotics Automation Magazine, IEEE 14, 4 (dec. 2007), 82–87.

[136] JAYASIRI, A., MANN, G., AND GOSINE, R. Mobile robot navigation in unknown environments based on supervisory control of partially-observed fuzzy discrete event systems. In Advanced Robotics, 2009. ICAR 2009. International Conference on (june 2009), pp. 1–6.

[137] JOHNS, K., AND TAYLOR, T. Professional Microsoft Robotics Developer Studio. Wiley Publishing, Inc., 2008.

[138] JONES, J. L. Robot Programming: A Practical Guide to Behavior-Based Robotics. McGraw-Hill, 2004.

[139] JULIA, M., REINOSO, O., GIL, A., BALLESTA, M., AND PAYA, L. A hybrid solution to the multi-robot integrated exploration problem. Engineering Applications of Artificial Intelligence 23, 4 (2010), 473–486.

[140] JUNG, B., AND SUKHATME, G. S. Tracking targets using multiple robots: The effect of environment occlusion. Autonomous Robots 13 (November 2002), 191–205.

[141] KAMEGAWA, T., SAIKAI, K., SUZUKI, S., GOFUKU, A., OOMURA, S., HORIKIRI, T., AND MATSUNO, F. Development of grouped rescue robot platforms for information collection in damaged buildings. In SICE Annual Conference, 2008 (aug. 2008), pp. 1642–1647.

[142] KAMEGAWA, T., YAMASAKI, T., IGARASHI, H., AND MATSUNO, F. Development of the snake-like rescue robot. In Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 IEEE International Conference on (april-1 may 2004), vol. 5, pp. 5081–5086.

[143] KANNAN, B., AND PARKER, L. Metrics for quantifying system performance in intelligent, fault-tolerant multi-robot teams. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on (oct. 29 - nov. 2 2007), pp. 951–958.

[144] KANTOR, G., SINGH, S., PETERSON, R., RUS, D., DAS, A., KUMAR, V., PEREIRA, G., AND SPLETZER, J. Distributed Search and Rescue with Robot and Sensor Teams. Springer, 2006, pp. 529–538.

[145] KENN, H., AND BIRK, A. From games to applications: Component reuse in rescue robots. In RoboCup 2004: Robot Soccer World Cup VIII, Lecture Notes in Artificial Intelligence (LNAI) (2005), Springer.

[146] KIM, J., ESPOSITO, J. M., AND KUMAR, V. An rrt-based algorithm for testing and validating multi-robot controllers. In Robotics: Science and Systems '05 (2005), pp. 249–256.

[147] KIM, S. H., AND JEON, J. W. Programming lego mindstorms nxt with visual programming. In Control, Automation and Systems, 2007. ICCAS '07. International Conference on (oct. 2007), pp. 2468–2472.

[148] KOES, M., NOURBAKHSH, I., AND SYCARA, K. Constraint optimization coordination architecture for search and rescue robotics. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on (may 2006), pp. 3977–3982.

[149] KONG, C. S., PENG, N. A., AND REKLEITIS, I. Distributed coverage with multi-robot system. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on (may 2006), pp. 2423–2429.

[150] KUMAR, V., RUS, D., AND SUKHATME, G. S. Networked Robots. Springer, 2008, ch. 41, pp. 943–958.

[151] LANG, D., HASELICH, M., PRINZEN, M., BAUSCHKE, S., GEMMEL, A., GIESEN, J., HAHN, R., HARAKE, L., REIMCHE, P., SONNEN, G., VON STEIMKER, M., THIERFELDER, S., AND PAULUS, D. Robocuprescue 2011 - robot league team: resko@unikoblenz (germany), 2011.

[152] LANG, H., WANG, Y., AND DE SILVA, C. Mobile robot localization and object pose estimation using optical encoder, vision and laser sensors. In Automation and Logistics, 2008. ICAL 2008. IEEE International Conference on (sept. 2008), pp. 617–622.

[153] LATHROP, S., AND KORPELA, C. Towards a distributed, cognitive robotic architecture for autonomous heterogeneous robotic platforms. In Technologies for Practical Robot Applications, 2009. TePRA 2009. IEEE International Conference on (nov. 2009), pp. 61–66.

[154] LAVALLE, S. M. Planning Algorithms. Cambridge University Press, 2006.

[155] LEE, D., AND RECCE, M. Quantitative evaluation of the exploration strategies of a mobile robot. Int. J. Rob. Res. 16, 4 (Aug. 1997), 413–447.

[156] LEE, J., AND BUI, T. A template-based methodology for disaster management information systems. In System Sciences, 2000. Proceedings of the 33rd Annual Hawaii International Conference on (jan. 2000), vol. 2, 7 pp.

[157] LEROUX, C. Microdrones: Micro drone autonomous navigation of environment sensing [online]: http://www.ist-microdrones.org, 2011.

[158] LIU, J., WANG, Y., LI, B., AND MA, S. Current research, key performances and future development of search and rescue robots. Frontiers of Mechanical Engineering in China 2 (2007), 404–416.

[159] LIU, J., AND WU, J. Multi-Agent Robotic Systems. CRC Press, 2001.

[160] LIU, Z., ANG, M. H., JR., AND SEAH, W. Reinforcement learning of cooperative behaviors for multi-robot tracking of multiple moving targets. In Intelligent Robots and Systems, 2005. (IROS 2005). 2005 IEEE/RSJ International Conference on (aug. 2005), pp. 1289–1294.

[161] LOCHMATTER, T., AND MARTINOLI, A. Simulation experiments with bio-inspired algorithms for odor source localization in laminar wind flow. In Machine Learning and Applications, 2008. ICMLA '08. Seventh International Conference on (dec. 2008), pp. 437–443.

[162] LOCHMATTER, T., RODUIT, P., CIANCI, C., CORRELL, N., JACOT, J., AND MARTINOLI, A. Swistrack - a flexible open source tracking software for multi-agent systems. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on (sept. 2008), pp. 4004–4010.

[163] LOWE, D. G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60, 2 (2004), 91–110.

[164] MANO, H., MIYAZAWA, K., CHATTERJEE, R., AND MATSUNO, F. Autonomous generation of behavioral trace maps using rescue robots. In Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on (oct. 2009), pp. 2809–2814.

[165] MANYIKA, J., AND DURRANT-WHYTE, H. Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1995.

[166] MARCOLINO, L., AND CHAIMOWICZ, L. A coordination mechanism for swarm navigation: experiments and analysis. In AAMAS (3) (2008), pp. 1203–1206.

[167] MARCOLINO, L., AND CHAIMOWICZ, L. No robot left behind: Coordination to overcome local minima in swarm navigation. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on (may 2008), pp. 1904–1909.

[168] MARINO, A., PARKER, L. E., ANTONELLI, G., AND CACCAVALE, F. Behavioral control for multi-robot perimeter patrol: A finite state automata approach. In Robotics and Automation, 2009. ICRA '09. IEEE International Conference on (may 2009), pp. 831–836.

[169] MARJOVI, A., NUNES, J., MARQUES, L., AND DE ALMEIDA, A. Multi-robot exploration and fire searching. In Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on (oct. 2009), pp. 1929–1934.

[170] MATARIC, M. J. Designing emergent behaviors: From local interactions to collective intelligence. In Proceedings of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats (1992), vol. 2, pp. 432–441.

[171] MATARIC, M. J. Group behavior and group learning. In From Perception to Action Conference, 1994., Proceedings (sept. 1994), pp. 326–329.

[172] MATARIC, M. J. Interaction and Intelligent Behavior. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1994.

[173] MATARIC, M. J. Designing and understanding adaptive group behavior. Adaptive Behavior 4 (1995), 51–80.

[174] MATARIC, M. J. Issues and approaches in the design of collective autonomous agents. Robotics and Autonomous Systems 16, 2-4 (1995), 321–331.

[175] MATARIC, M. J. Behavior-based control: Examples from navigation, learning, and group behavior. Journal of Experimental and Theoretical Artificial Intelligence 9 (1997), 323–336.

[176] MATARIC, M. J. Coordination and learning in multirobot systems. Intelligent Systems and their Applications, IEEE 13, 2 (mar/apr 1998), 6–8.

[177] MATARIC, M. J. Situated robotics. In Encyclopedia of Cognitive Science. Nature Publishing Group, 2002.

[178] MATARIC, M. J., AND MICHAUD, F. Behavior-Based Systems. Springer, 2008, ch. 38, pp. 891–909.

[179] MATSUMOTO, A., ASAMA, H., ISHIDA, Y., OZAKI, K., AND ENDO, I. Communication in the autonomous and decentralized robot system actress. In Intelligent Robots and Systems '90. 'Towards a New Frontier of Applications', Proceedings. IROS '90. IEEE International Workshop on (jul 1990), vol. 2, pp. 835–840.

[180] MATSUNO, F., HIROSE, S., AKIYAMA, I., INOH, T., GUARNIERI, M., SHIROMA, N., KAMEGAWA, T., OHNO, K., AND SATO, N. Introduction of mission unit on information collection by on-rubble mobile platforms of development of rescue robot systems (ddt) project in japan. In SICE-ICASE, 2006. International Joint Conference (oct. 2006), pp. 4186–4191.

[181] MATSUNO, F., AND TADOKORO, S. Rescue robots and systems in japan. In Robotics and Biomimetics, 2004. ROBIO 2004. IEEE International Conference on (aug. 2004), pp. 12–20.

[182] MCENTIRE, D. A. Disaster Response and Recovery. Wiley Publishing, Inc., 2007.

[183] MCLURKIN, J., AND SMITH, J. Distributed algorithms for dispersion in indoor environments using a swarm of autonomous mobile robots. In 7th Distributed Autonomous Robotic Systems (2004).

[184] MICIRE, M. Analysis of the robotic-assisted search and rescue response to the world trade center disaster. Master's thesis, University of South Florida, May 2002.

[185] MICIRE, M., DESAI, M., DRURY, J. L., MCCANN, E., NORTON, A., TSUI, K. M., AND YANCO, H. A. Design and validation of two-handed multi-touch tabletop controllers for robot teleoperation. In IUI (2011), pp. 145–154.

[186] MICIRE, M., AND YANCO, H. Improving disaster response with multi-touch technologies. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on (oct. 29 - nov. 2 2007), pp. 2567–2568.

[187] MIHANKHAH, E., ABOOSAEEDAN, E., KALANTARI, A., SEMSARILAR, H., MOTTAGHI, S., ALIZADEHARJMAND, M., FOROUZIDEH, A., SHARH, M. A. M., SHAHRYARI, S., AND MOGHADMNEJAD, N. Robocuprescue 2009 - robot league team: Resquake (iran), 2009.

[188] MINSKY, M. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon & Schuster, 2006.

[189] MIZUMOTO, H., MANO, H., KON, K., SATO, N., KANAI, R., GOTO, K., SHIN, H., IGARASHI, H., AND MATSUNO, F. Robocuprescue 2009 - robot league team: Shinobi (japan), 2009.

[190] MOOSAVIAN, S. A. A., KALANTARI, A., SEMSARILAR, H., ABOOSAEEDAN, E., AND MIHANKHAH, E. Resquake: A tele-operative rescue robot. Journal of Mechanical Design 131, 8 (2009), 081005.

[191] MOURIKIS, A., AND ROUMELIOTIS, S. Performance analysis of multirobot cooperative localization. Robotics, IEEE Transactions on 22, 4 (aug. 2006), 666–681.

[192] MURPHY, R. R. Introduction to AI Robotics. The MIT Press, 2000.

[193] MURPHY, R. R. Human-robot interaction in rescue robotics. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 34, 2 (may 2004), 138–153.

[194] MURPHY, R. R. Trial by fire. Robotics Automation Magazine, IEEE 11, 3 (sept. 2004), 50–61.

[195] MURPHY, R. R., BROWN, R., GRANT, R., AND ARNETT, C. Preliminary domain theory for robot-assisted wildland firefighting. In Safety, Security Rescue Robotics (SSRR), 2009 IEEE International Workshop on (nov. 2009), pp. 1–6.

[196] MURPHY, R. R., CASPER, J., HYAMS, J., MICIRE, M., AND MINTEN, B. Mobility and sensing demands in usar. In Industrial Electronics Society, 2000. IECON 2000. 26th Annual Conference of the IEEE (2000), vol. 1, pp. 138–142.

[197] MURPHY, R. R., CASPER, J., AND MICIRE, M. Potential tasks and research issues for mobile robots in robocup rescue. In RoboCup 2000: Robot Soccer World Cup IV (London, UK, 2001), Springer-Verlag, pp. 339–344.

[198] MURPHY, R. R., CASPER, J., MICIRE, M., AND HYAMS, J. Assessment of the nist standard test bed for urban search and rescue, 2000.

[199] MURPHY, R. R., CASPER, J. L., MICIRE, M. J., AND HYAMS, J. Mixed-initiative control of multiple heterogeneous robots for urban search and rescue, 2000.

[200] MURPHY, R. R., KRAVITZ, J., PELIGREN, K., MILWARD, J., AND STANWAY, J. Preliminary report: Rescue robot at crandall canyon, utah, mine disaster. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on (may 2008), pp. 2205–2206.

[201] MURPHY, R. R., KRAVITZ, J., STOVER, S., AND SHOURESHI, R. Mobile robots in mine rescue and recovery. Robotics Automation Magazine, IEEE 16, 2 (june 2009), 91–103.

[202] MURPHY, R. R., LISETTI, C. L., TARDIF, R., IRISH, L., AND GAGE, A. Emotion-based control of cooperating heterogeneous mobile robots. Robotics and Automation, IEEE Transactions on 18, 5 (oct 2002), 744–757.

[203] MURPHY, R. R., STEIMLE, E., HALL, M., LINDEMUTH, M., TREJO, D., HURLEBAUS, S., MEDINA-CETINA, Z., AND SLOCUM, D. Robot-assisted bridge inspection after hurricane ike. In Safety, Security Rescue Robotics (SSRR), 2009 IEEE International Workshop on (nov. 2009), pp. 1–5.

[204] MURPHY, R. R., TADOKORO, S., NARDI, D., JACOFF, A., FIORINI, P., CHOSET, H., AND ERKMEN, A. M. Search and Rescue Robotics. Springer, 2008, ch. 50, pp. 1151–1173.

[205] NAGATANI, K., OKADA, Y., TOKUNAGA, N., YOSHIDA, K., KIRIBAYASHI, S., OHNO, K., TAKEUCHI, E., TADOKORO, S., AKIYAMA, H., NODA, I., YOSHIDA, T., AND KOYANAGI, E. Multi-robot exploration for search and rescue missions: A report of map building in robocuprescue 2009. In Safety, Security Rescue Robotics (SSRR), 2009 IEEE International Workshop on (nov. 2009), pp. 1–6.

[206] NAGHSH, A., GANCET, J., TANOTO, A., AND ROAST, C. Analysis and design of human-robot swarm interaction in firefighting. In Robot and Human Interactive Communication, 2008. RO-MAN 2008. The 17th IEEE International Symposium on (aug. 2008), pp. 255–260.

[207] NATER, F., GRABNER, H., AND VAN GOOL, L. Exploiting simple hierarchies for unsupervised human behavior analysis. In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2010).

[208] NAVARRO, I., PUGH, J., MARTINOLI, A., AND MATIA, F. A distributed scalable approach to formation control in multi-robot systems. In Proceedings of the International Symposium on Distributed Autonomous Robotic Systems (2008).

[209] NEVATIA, Y., STOYANOV, T., RATHNAM, R., PFINGSTHORN, M., MARKOV, S., AMBRUS, R., AND BIRK, A. Augmented autonomy: Improving human-robot team performance in urban search and rescue. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on (sept. 2008), pp. 2103–2108.

[210] NODA, I., HADA, Y., MEGURO, J.-I., AND SHIMORA, H. Rescue Robotics. DDT Project on Robots and Systems for Urban Search and Rescue. Springer, 2009, ch. 8, Information Sharing and Integration Framework Among Rescue Robots Information Systems, pp. 145–160.

[211] NORDFELTH, A., WETZIG, C., PERSSON, M., HAMRIN, P., KUIVINEN, R., FALK, P., AND LUNDGREN, B. Robocuprescue 2009 - robot league team: Robocuprescue team (rrt) uppsala university (sweden), 2009.

[212] NOURBAKHSH, I., SYCARA, K., KOES, M., YONG, M., LEWIS, M., AND BURION, S. Human-robot teaming for search and rescue. Pervasive Computing, IEEE 4, 1 (jan.-march 2005), 72–79.

[213] ISE GROUP OF COMPANIES. International submarine engineering ltd. [online]: http://www.ise.bc.ca/products.html, 2012.

[214] NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY. Performance metrics and test arenas for autonomous mobile robots [online]: http://www.nist.gov/el/isd/testarenas.cfm, 2011.

[215] OHNO, K., MORIMURA, S., TADOKORO, S., KOYANAGI, E., AND YOSHIDA, T. Semi-autonomous control of 6-dof crawler robot having flippers for getting over unknown steps. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on (oct. 29 - nov. 2 2007), pp. 2559–2560.

[216] OHNO, K., AND YOSHIDA, T. Robocuprescue 2010 - robot league team: Pelican united (japan), 2010.

[217] OLSON, G. M., SHEPPARD, S. B., AND SOLOWAY, E. Can japan send in robots to fix troubled nuclear reactors? [online]: http://spectrum.ieee.org/automaton/robotics/industrial-robots/japan-robots-to-fix-troubled-nuclear-reactors, 2011. Electronic document. Date of publication: March 22, 2011. Date retrieved: June 23, 2011. Date last modified: [date unavailable].

[218] OREBACK, A., AND CHRISTENSEN, H. I. Evaluation of architectures for mobile robotics. Autonomous Robots 14 (2003), 33–49.

[219] PAPAZOGLOU, M., TRAVERSO, P., DUSTDAR, S., AND LEYMANN, F. Service-oriented computing: State of the art and research challenges. Computer 40, 11 (nov. 2007), 38–45.

[220] PARKER, L. E. Designing control laws for cooperative agent teams. In Robotics and Automation, 1993. Proceedings., 1993 IEEE International Conference on (may 1993), vol. 3, pp. 582–587.

Page 222: PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response

BIBLIOGRAPHY 204

[221] PARKER, L. E. Alliance: an architecture for fault tolerant multirobot cooperation. Robotics and Automation, IEEE Transactions on 14, 2 (apr 1998), 220–240.

[222] PARKER, L. E. Distributed intelligence: Overview of the field and its application in multi-robot systems. Journal of Physical Agents 2, 1 (2008), 5–14.

[223] PARKER, L. E. Multiple Mobile Robot Systems. Springer, 2008, ch. 40. Multiple Mobile Robot Systems, pp. 921–942.

[224] PATHAK, K., BIRK, A., SCHWERTFEGER, S., DELCHEF, I., AND MARKOV, S. Fully autonomous operations of a jacobs rugbot in the robocup rescue robot league 2006. In Safety, Security and Rescue Robotics, 2007. SSRR 2007. IEEE International Workshop on (sept. 2007), pp. 1–6.

[225] PFINGSTHORN, M., NEVATIA, Y., STOYANOV, T., RATHNAM, R., MARKOV, S., AND BIRK, A. Towards cooperative and decentralized mapping in the jacobs virtual rescue team. In RoboCup (2008), pp. 225–234.

[226] PIMENTA, L. C. A., SCHWAGER, M., LINDSEY, Q., KUMAR, V., RUS, D., MESQUITA, R. C., AND PEREIRA, G. Simultaneous coverage and tracking (scat) of moving targets with robot networks. In WAFR (2008), pp. 85–99.

[227] POOL, R. Fukushima: the facts. Engineering Technology 6, 4 (may 2011), 32–36.

[228] PRATT, K., MURPHY, R. R., BURKE, J., CRAIGHEAD, J., GRIFFIN, C., AND STOVER, S. Use of tethered small unmanned aerial system at berkman plaza ii collapse. In Safety, Security and Rescue Robotics, 2008. SSRR 2008. IEEE International Workshop on (oct. 2008), pp. 134–139.

[229] PUGH, J., AND MARTINOLI, A. Inspiring and modeling multi-robot search with particle swarm optimization. In Swarm Intelligence Symposium, 2007. SIS 2007. IEEE (april 2007), pp. 332–339.

[230] QUIGLEY, M., CONLEY, K., GERKEY, B. P., FAUST, J., FOOTE, T., LEIBS, J., WHEELER, R., AND NG, A. Y. Ros: an open-source robot operating system. In ICRA Workshop on Open Source Software (2009).

[231] RAHMAN, M., MIAH, M., GUEAIEB, W., AND SADDIK, A. Senora: A p2p service-oriented framework for collaborative multirobot sensor networks. Sensors Journal, IEEE 7, 5 (may 2007), 658–666.

[232] REKLEITIS, I., DUDEK, G., AND MILIOS, E. Multi-robot collaboration for robust exploration. Annals of Mathematics and Artificial Intelligence 31 (2001), 7–40.

[233] MICROSOFT RESEARCH. Kinect for windows sdk beta [online]: http://www.microsoft.com/en-us/kinectforwindows/, 2012.

[234] MICROSOFT RESEARCH. Microsoft robotics [online]: http://www.microsoft.com/robotics/, 2012.

[235] REYNOLDS, C. Red 3d, steering behaviors, boids and opensteer [online]: http://red3d.com/cwr/, 2012.

[236] REYNOLDS, C. W. Steering behaviors for autonomous characters. In Game Developers Conference (San Jose, 1999), pp. 763–782.

[237] RICHARDSON, D. Robots to the rescue? Engineering Technology 6, 4 (may 2011), 52–54.

[238] ROBOREALM. Roborealm vision for machines [online]: http://www.roborealm.com/, 2012.

[239] ROOKER, M. N., AND BIRK, A. Combining exploration and ad-hoc networking in robocup rescue. In RoboCup 2004: Robot Soccer World Cup VIII, D. Nardi, M. Riedmiller, and C. Sammut, Eds., vol. 3276 of Lecture Notes in Artificial Intelligence (LNAI). Springer, 2005, pp. 236–246.

[240] ROOKER, M. N., AND BIRK, A. Multi-robot exploration under the constraints of wireless networking. Control Engineering Practice 15, 4 (2007), 435–445.

[241] ROY, N., AND DUDEK, G. Collaborative robot exploration and rendezvous: Algorithms, performance bounds and observations. Autonomous Robots 11, 2 (2001), 117–136.

[242] RYBSKI, P., PAPANIKOLOPOULOS, N., STOETER, S., KRANTZ, D., YESIN, K., GINI, M., VOYLES, R., HOUGEN, D., NELSON, B., AND ERICKSON, M. Enlisting rangers and scouts for reconnaissance and surveillance. Robotics Automation Magazine, IEEE 7, 4 (dec 2000), 14–24.

[243] SALLE, D., TRAONMILIN, M., CANOU, J., AND DUPOURQUE, V. Using microsoft robotics studio for the design of generic robotics controllers: the robubox software. In IEEE ICRA 2007 Workshop on Software Development and Integration in Robotics (SDIR-II) (April 2007), D. Brugali, C. Schlegel, I. A. Nesnas, W. D. Smart, and A. Braendle, Eds., SDIR-II, IEEE Robotics and Automation Society.

[244] SANFELIU, A., ANDRADE, J., EMDE, W. R., AND ILA, V. S. Ubiquitous networking robotics in urban settings [online]: http://www.urus.upc.es/, http://www.urus.upc.es/nuevooutcomes.html, 2011.

[245] SATO, N., MATSUNO, F., AND SHIROMA, N. Fuma: Platform development and system integration for rescue missions. In Safety, Security and Rescue Robotics, 2007. SSRR 2007. IEEE International Workshop on (sept. 2007), pp. 1–6.

[246] SATO, N., MATSUNO, F., YAMASAKI, T., KAMEGAWA, T., SHIROMA, N., AND IGARASHI, H. Cooperative task execution by a multiple robot team and its operators in search and rescue operations. In Intelligent Robots and Systems, 2004. (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on (28 sept. - 2 oct. 2004), vol. 2, pp. 1083–1088.

[247] SCHAFROTH, D., BOUABDALLAH, S., BERMES, C., AND SIEGWART, R. From the test benches to the first prototype of the mufly micro helicopter. Journal of Intelligent and Robotic Systems 54 (2009), 245–260.

[248] SCHWAGER, M., MCLURKIN, J., SLOTINE, J.-J. E., AND RUS, D. From theory to practice: Distributed coverage control experiments with groups of robots. In ISER (2008), pp. 127–136.

[249] SCHWERTFEGER, S., POPPINGA, J., PATHAK, K., BULOW, H., VASKEVICIUS, N., AND BIRK, A. Robocuprescue 2009 - robot league team: Jacobs university (germany), 2009.

[250] SCOTTI, C. P., CESETTI, A., DI BUO, G., AND LONGHI, S. Service oriented real-time implementation of slam capability for mobile robots, 2010.

[251] SELLNER, B., HEGER, F., HIATT, L., SIMMONS, R., AND SINGH, S. Coordinated multiagent teams and sliding autonomy for large-scale assembly. Proceedings of the IEEE 94, 7 (july 2006), 1425–1444.

[252] SHAHRI, A. M., NOROUZI, M., KARAMBAKHSH, A., MASHAT, A. H., CHEGINI, J., MONTAZERZOHOUR, H., RAHMANI, M., NAMAZIFAR, M. J., ASADI, B., MASHAT, M. A., KARIMI, M., MAHDIKHANI, B., AND AZIZI, V. Robocuprescue 2010 - robot league team: Mrl rescue robot (iran), 2010.

[253] SHENG, W., YANG, Q., TAN, J., AND XI, N. Distributed multi-robot coordination in area exploration. Robotics and Autonomous Systems 54, 12 (2006), 945–955.

[254] SIDDHARTHA, H., SARIKA, R., AND KARLAPALEM, K. Score vector: A new evaluation scheme for robocup rescue simulation competition 2009, 2009.

[255] SIEGWART, R., AND NOURBAKHSH, I. R. Introduction to Autonomous Mobile Robots. The MIT Press, 2004.

[256] SIMMONS, R., APFELBAUM, D., BURGARD, W., FOX, D., MOORS, M., ET AL. Coordination for multi-robot exploration and mapping. In Proceedings of the AAAI National Conference on Artificial Intelligence (2000), AAAI.

[257] SIMMONS, R., LIN, L. J., AND FEDOR, C. Autonomous task control for mobile robots. In Intelligent Control, 1990. Proceedings., 5th IEEE International Symposium on (sep 1990), vol. 2, pp. 663–668.

[258] SIMMONS, R., SINGH, S., HERSHBERGER, D., RAMOS, J., AND SMITH, T. First results in the coordination of heterogeneous robots for large-scale assembly. In Experimental Robotics VII, vol. 271 of Lecture Notes in Control and Information Sciences. Springer Berlin / Heidelberg, 2001, pp. 323–332.

[259] STACHNISS, C., MARTINEZ MOZOS, O., AND BURGARD, W. Efficient exploration of unknown indoor environments using a team of mobile robots. Annals of Mathematics and Artificial Intelligence 52 (2008), 205–227.

[260] STONE, P., AND VELOSO, M. A layered approach to learning client behaviours in robocup soccer server. Applied Artificial Intelligence 12 (December 1998), 165–188.

[261] STORMONT, D. P. Autonomous rescue robot swarms for first responders. In Computational Intelligence for Homeland Security and Personal Safety, 2005. CIHSPS 2005. Proceedings of the 2005 IEEE International Conference on (march 31 - april 1 2005), pp. 151–157.

[262] SUGAR, T., DESAI, J., KUMAR, V., AND OSTROWSKI, J. Coordination of multiple mobile manipulators. In Robotics and Automation, 2001. Proceedings 2001 ICRA. IEEE International Conference on (2001), vol. 3, pp. 3022–3027.

[263] SUGIHARA, K., AND SUZUKI, I. Distributed motion coordination of multiple mobile robots. In Intelligent Control, 1990. Proceedings., 5th IEEE International Symposium on (sep 1990), vol. 1, pp. 138–143.

[264] SUGIHARA, K., AND SUZUKI, I. Distributed algorithms for formation of geometric patterns with many mobile robots. Journal of Robotic Systems 13, 3 (1996), 127–139.

[265] SUTHAKORN, J., SHAH, S., JANTARAJIT, S., ONPRASERT, W., SAENSUPO, W., SAEUNG, S., NAKDHAMABHORN, S., SA-ING, V., AND REAUNGAMORNRAT, S. On the design and development of a rough terrain robot for rescue missions. In Robotics and Biomimetics, 2008. ROBIO 2008. IEEE International Conference on (feb. 2009), pp. 1830–1835.

[266] TABATA, K., INABA, A., ZHANG, Q., AND AMANO, H. Development of a transformational mobile robot to search victims under debris and rubbles. In Intelligent Robots and Systems, 2004. (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on (28 sept. - 2 oct. 2004), vol. 1, pp. 46–51.

[267] TADOKORO, S. Rescue Robotics. DDT Project on Robots and Systems for Urban Search and Rescue. Springer, 2009.

[268] TADOKORO, S. Rescue robotics challenge. In Advanced Robotics and its Social Impacts (ARSO), 2010 IEEE Workshop on (oct. 2010), pp. 92–98.

[269] TADOKORO, S., TAKAMORI, T., OSUKA, K., AND TSURUTANI, S. Investigation report of the rescue problem at hanshin-awaji earthquake in kobe. In Intelligent Robots and Systems, 2000. (IROS 2000). Proceedings. 2000 IEEE/RSJ International Conference on (2000), vol. 3, pp. 1880–1885.

[270] TAKAHASHI, T., AND TADOKORO, S. Working with robots in disasters. Robotics Automation Magazine, IEEE 9, 3 (sep 2002), 34–39.

[271] TAN, J. A scalable graph model and coordination algorithms for multi-robot systems. In Advanced Intelligent Mechatronics. Proceedings, 2005 IEEE/ASME International Conference on (july 2005), pp. 1529–1534.

[272] TANG, F., AND PARKER, L. E. Asymtre: Automated synthesis of multi-robot task solutions through software reconfiguration. In Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on (april 2005), pp. 1501–1508.

[273] THRUN, S. A probabilistic online mapping algorithm for teams of mobile robots.International Journal of Robotics Research 20, 5 (2001), 335–363.

[274] THRUN, S., FOX, D., BURGARD, W., AND DELLAERT, F. Robust monte carlo localization for mobile robots. Artificial Intelligence 128, 1-2 (2000), 99–141.

[275] TRUNG, P., AFZULPURKAR, N., AND BODHALE, D. Development of vision service in robotics studio for road signs recognition and control of lego mindstorms robot. In Robotics and Biomimetics, 2008. ROBIO 2008. IEEE International Conference on (feb. 2009), pp. 1176–1181.

[276] TSUBOUCHI, T., OSUKA, K., MATSUNO, F., ASAMA, H., TADOKORO, S., ONOSATO, M., YOKOKOHJI, Y., NAKANISHI, H., DOI, T., MURATA, M., KABURAGI, Y., TANIMURA, I., UEDA, N., MAKABE, K., SUZUMORI, K., KOYANAGI, E., YOSHIDA, T., TAKIZAWA, O., TAKAMORI, T., HADA, Y., AND NODA, I. Rescue Robotics. DDT Project on Robots and Systems for Urban Search and Rescue. Springer, 2009, ch. 9. Demonstration Experiments on Rescue Search Robots and On-Scenario Training in Practical Field with First Responders, pp. 161–174.

[277] TUNWANNARUX, A., AND TUNWANNARUX, S. The ceo mission ii, rescue robot with multi-joint mechanical arm. World Academy of Science, Engineering and Technology 27, 2007.

[278] VADAKKEPAT, P., MIIN, O. C., PENG, X., AND LEE, T. H. Fuzzy behavior-based control of mobile robots. Fuzzy Systems, IEEE Transactions on 12, 4 (aug. 2004), 559–565.

[279] VIOLA, P., AND JONES, M. J. Robust real-time face detection. Int. J. Comput. Vision57 (May 2004), 137–154.

[280] VISSER, A., AND SLAMET, B. Including communication success in the estimation of information gain for multi-robot exploration. In Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks and Workshops, 2008. WiOPT 2008. 6th International Symposium on (april 2008), pp. 680–687.

[281] VOYLES, R., GODZDANKER, R., AND KIM, T.-H. Auxiliary motive power for terminatorbot: An actuator toolbox. In Safety, Security and Rescue Robotics, 2007. SSRR 2007. IEEE International Workshop on (sept. 2007), pp. 1–5.

[282] VOYLES, R., AND LARSON, A. Terminatorbot: a novel robot with dual-use mechanism for locomotion and manipulation. Mechatronics, IEEE/ASME Transactions on 10, 1 (feb. 2005), 17–25.

[283] WALTER, J. International federation of red cross and red crescent societies: World disasters report. Kumarian Press, Bloomfield, 2005.

[284] WANG, J., AND BALAKIRSKY, S. Usarsim [online]: http://sourceforge.net/projects/usarsim/, 2012.

[285] WANG, J., LEWIS, M., AND SCERRI, P. Cooperating robots for search and rescue. In Proceedings of the AAMAS 1st International Workshop on Agent Technology for Disaster Management (2004), pp. 92–99.

[286] WANG, Q., XIE, G., WANG, L., AND WU, M. Integrated heterogeneous multi-robot system for collaborative navigation. In Frontiers in the Convergence of Bioscience and Information Technologies, 2007. FBIT 2007 (oct. 2007), pp. 651–656.

[287] WEISS, L. G. Autonomous robots in the fog of war [online]: http://spectrum.ieee.org/robotics/military-robots/autonomous-robots-in-the-fog-of-war/0, 2011. This is an electronic document. Date of publication: [August 1, 2011]. Date retrieved: August 3, 2011. Date last modified: [Date unavailable].

[288] WELCH, G., AND BISHOP, G. An introduction to the kalman filter. Tech. rep., University of North Carolina at Chapel Hill, Department of Computer Science, 2001.

[289] WOOD, M. F., AND DELOACH, S. A. An overview of the multiagent systems engineering methodology. Agent-Oriented Software Engineering 1957, January (2001), 207–221.

[290] WURM, K., STACHNISS, C., AND BURGARD, W. Coordinated multi-robot exploration using a segmentation of the environment. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on (sept. 2008), pp. 1160–1165.

[291] YAMAUCHI, B. A frontier-based approach for autonomous exploration. In Computational Intelligence in Robotics and Automation, 1997. CIRA'97., Proceedings., 1997 IEEE International Symposium on (jul 1997), pp. 146–151.

[292] YOKOKOHJI, Y., TUBOUCHI, T., TANAKA, A., YOSHIDA, T., KOYANAGI, E., MATSUNO, F., HIROSE, S., KUWAHARA, H., TAKEMURA, F., INO, T., TAKITA, K., SHIROMA, N., KAMEGAWA, T., HADA, Y., OSUKA, K., WATASUE, T., KIMURA, T., NAKANISHI, H., HORIGUCHI, Y., TADOKORO, S., AND OHNO, K. Rescue Robotics. DDT Project on Robots and Systems for Urban Search and Rescue. Springer, 2009, ch. 7. Design Guidelines for Human Interface for Rescue Robots, pp. 131–144.

[293] YU, J., CHA, J., LU, Y., AND YAO, S. A service-oriented architecture framework for the distributed concurrent and collaborative design, vol. 1. IEEE, 2008, pp. 872–876.

[294] ZHAO, J., SU, X., AND YAN, J. A novel strategy for distributed multi-robot coordination in area exploration. In Measuring Technology and Mechatronics Automation, 2009. ICMTMA '09. International Conference on (april 2009), vol. 2, pp. 24–27.

[295] ZLOT, R., STENTZ, A., DIAS, M., AND THAYER, S. Multi-robot exploration controlled by a market economy. In Robotics and Automation, 2002. Proceedings. ICRA '02. IEEE International Conference on (2002), vol. 3, pp. 3016–3023.