
A Ground Control Station for a Multi-UAV Surveillance System: Design and Validation in Field Experiments

Daniel Perez-Rodriguez, Ivan Maza, Fernando Caballero, David Scarlatti, Enrique Casado and Anibal Ollero

Abstract— This paper presents the ground control station developed for a platform composed of multiple unmanned aerial vehicles in surveillance missions. The software application is fully based on open source libraries and has been designed as a robust and decentralized system. It allows the operator to dynamically allocate different tasks to the UAVs and to display their operational information in a realistic 3D environment in real time.

The ground control station has been designed to assist the operator in the challenging task of managing a system with multiple UAVs, aiming to reduce the operator's workload. The multi-UAV surveillance system has been demonstrated in field experiments with two quadrotors equipped with visual cameras.

Index Terms— multi-UAS ground control station; multi-UAS platforms; decentralized architectures.

I. INTRODUCTION

The progress in miniaturization technologies, together with new sensors, embedded control systems and communications, has boosted the development of many new small and relatively low cost Unmanned Aerial Vehicles (UAVs). However, constraints such as power consumption, weight and size play an important role in the UAVs' performance, particularly in small, light and low cost UAVs. Hence, the cooperation of many of these vehicles is the most suitable approach for many applications. A single powerful aerial vehicle equipped with a large array of sensors of different modalities is limited at any one time to a single viewpoint. However, a team of aerial vehicles can simultaneously collect information from multiple locations and exploit the information derived from multiple disparate points to build models that can be used to make decisions. Team members can exchange sensor information, collaborate to identify and track targets, and perform detection and monitoring activities, among other tasks [1], [2]. Thus, for example, a team of aerial vehicles can be used for exploration, detection, precise localization, and monitoring and measuring the evolution of natural disasters, such as forest fires. Furthermore, the multi-UAV approach leads to redundant solutions offering greater fault tolerance and flexibility.

This work has been developed in the framework of the ADAM and CLEAR (DPI2011-28937-C02-01) Spanish National Research projects.

Daniel Perez-Rodriguez, Ivan Maza, Fernando Caballero and Anibal Ollero are with the Robotics, Vision and Control Group, University of Seville, Avd. de los Descubrimientos s/n, 41092, Sevilla, Spain. Email: [email protected], [email protected], [email protected], [email protected]

David Scarlatti and Enrique Casado are with Boeing Research & Technology Europe, Avenida Sur del Aeropuerto de Barajas 38, Bldg 4, 28042 Madrid, Spain. Email: [email protected], [email protected]

Anibal Ollero is also with the Center for Advanced Aerospace Technology (CATEC), Seville, Spain. Email: [email protected]

An efficient, user-friendly Ground Control Station (GCS) is a crucial component in any Unmanned Aerial System (UAS) based platform. The GCS gathers all the information about the UAV status and allows the operator to send commands according to the specified missions. Notably, most stations include a common set of components, such as artificial horizons, battery and IMU indicators and, lately, 3D environments [3], [4], [5], as generally accepted useful elements for the operators.

The operator's workload grows exponentially with the number of UAVs operating in the platform. Due to the critical nature of unmanned flights, there has been a continuous effort to improve the capabilities of GCSs managing multiple UAVs. The use of multimodal technologies is becoming common in current GCSs [6], [7], involving several modalities such as positional sound, speech recognition, text-to-speech synthesis or head-tracking. The level of interaction between the operator and the GCS increases with the number of information channels, but these channels should be properly arranged in order to avoid overloading the operator.

In [8] some of the emerging input modalities for human-computer interaction (HCI) are presented and the fundamental issues in integrating them at various levels - from the early “signal” level to the intermediate “feature” level and to the late “decision” level - are discussed. The different computational approaches that may be applied at the different levels of modality integration are presented, along with a brief review of several demonstrated multimodal HCI systems and applications.

Reference [9] provides a survey on relevant aspects such as the perceptual and cognitive issues related to the interface of the UAV operator, including the application of multimodal technologies to compensate for the dearth of available sensory information. The decrease in operator reaction time obtained by adding modalities such as auditory information (3D audio), speech synthesis, haptic feedback and touch screens is analyzed in [10].

Commercial GCSs for multi-UAV systems range from the advanced proprietary and closed solution by Boeing for the X-45 [11], [12] to open source solutions such as QGroundControl [5] or the Paparazzi System [13] used in [14].

This paper presents the ground control station developed for a platform composed of multiple unmanned aerial vehicles in surveillance missions demonstrated in field experiments. The application is fully based on open source libraries and has been designed as a robust and decentralized system.



Fig. 1. Decentralized software architecture. The boxes and ellipses represent the ground and flying segment processes respectively. In each UAV, there are two main levels: the Automatic Decision Level (ADL) and the proprietary Executive Level (EL). The gray arrows represent XBee communication links. Finally, the YARP based communications are represented by black (video transmission) and white (non-video) arrows.

It allows the operator to allocate different tasks to the UAVs and shows their operational information in a realistic 3D environment in real time.

II. MULTI-UAV SURVEILLANCE SYSTEM ARCHITECTURE

The system used to validate the ground control station is composed of a team of UAVs, a network of ground cameras and the station itself. The overall software architecture of the system is shown in Fig. 1, where different independent processes can be identified. Each software component can be transparently executed on the same machine or on different computers thanks to the communication functions provided by the YARP library. The only exception is the UAV position/orientation controller, which must be executed on its corresponding on-board computer.
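As an illustration of how this transparent distribution can be achieved, the following minimal C++ sketch publishes a UAV state message through a YARP port that the GCS can read from any machine reachable through the YARP name server. The port name, message fields and rate are hypothetical; the actual message formats of the platform are not detailed in this paper.

// Minimal sketch (hypothetical port name and fields): a flying-segment process
// publishes UAV state over YARP; the GCS reads it once the ports are connected.
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>
#include <yarp/os/Time.h>

int main() {
    yarp::os::Network yarp;                          // registers with the YARP name server
    yarp::os::BufferedPort<yarp::os::Bottle> statePort;
    statePort.open("/uav1/adl/state");               // hypothetical port name

    while (true) {
        yarp::os::Bottle& msg = statePort.prepare();
        msg.clear();
        msg.addDouble(37.410);                       // latitude  [deg] (example value)
        msg.addDouble(-6.002);                       // longitude [deg] (example value)
        msg.addDouble(50.0);                         // altitude  [m]   (example value)
        statePort.write();
        yarp::os::Time::delay(0.1);                  // publish at roughly 10 Hz
    }
    return 0;
}

On the GCS side a matching BufferedPort would be opened and the two ports connected (e.g. with the standard command "yarp connect /uav1/adl/state /gcs/uav1/state"), so the same code runs unchanged whether both processes share a machine or not.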

In each UAV, there are two main levels: the Automatic Decision Level (ADL) and the proprietary Executive Level (EL). The former deals with high-level decision-making, whereas the latter is in charge of the execution of the so-called elementary tasks (land, take-off and go-to). In the interface between both levels, the ADL sends task requests and receives the execution state of each task and the UAV state. For distributed decision-making purposes, interactions among the ADLs of different UAVs are required. Finally, the position/orientation controller runs on the hardware on board the UAV. The main elements of this architecture are described in [1], which summarizes the functionalities provided at each level.

The ground control station allows the user to specify the missions and tasks to be executed by the platform, and also to monitor the execution state of the tasks and the status of the different components of the platform. On the other hand, the ground camera network provides images from the area of interest for surveillance purposes.

The next section focuses on the design and implementation of the GCS and its interaction with the rest of the components.

III. GROUND CONTROL STATION DESIGN AND IMPLEMENTATION

The main goal in the design of the GCS was to simplify the command and control of a multi-UAV surveillance system in order to keep the workload of the operator below a certain level. The system was designed to be controlled by a single user, who must be able to command several interacting UAVs in order to perform coordinated missions. In addition, the following aspects have been considered in the design:

• The GCS should automatically detect insertions or removals of UAVs in the system, following a plug-and-play paradigm.

• The GCS should provide visual alerts as a response to different events that may happen during the operation of the system, such as a low battery level or a problem with the video transmission of a ground camera.

• The GCS should display all the information required to be aware of the state of each UAV. However, the information must be carefully selected in order to avoid overloading the operator.

• A certain level of decisional autonomy is assumed on board each UAV (see Sect. II), so that the user does not need to pay continuous attention to the state of all the UAVs in the system.

The resulting layout of the GCS is depicted in Fig. 2, where four main areas have been numbered:

1) UAV selector. One click on the UAV's identifier (or its shortcut) selects the UAV, updating area 2 in Fig. 2 with its information and automatically zooming to its location on the map. The layout of the current version of the GCS has been designed to support four UAVs in terms of selectors, shortcuts, etc. However, the software can be easily adapted to support more UAVs, the workload of the operator being the limiting factor on the maximum number of UAVs that can be handled from the GCS.

2) Detailed information of the selected UAV. These indicators show all the information required to localize the selected UAV at any point on the Earth. They contain the GPS latitude, longitude and altitude, the heading, the barometric altitude and the number of GPS satellites being used by the UAV on-board controller. The UAV status shows its current mode of operation and the level of the battery (voltage and current). A visual/sound alarm is triggered when the battery reaches a critical level and a red LED is turned on. Additionally, information coming from the Inertial Measurement Unit (IMU) is shown; a possible grouping of these fields into a single telemetry record is sketched after this list.

3) Map area. This is an interactive map that displays important information such as the locations of UAVs, alarms or motion tasks.



Fig. 2. Ground control station main layout. Four main areas have been numbered: 1) UAV selector, 2) Detailed UAV information, 3) Map area and 4) Tab widget.

Fig. 3. A scheme of the main execution loop of the ground control station software. The different events or tasks are treated in their specific slots, in parallel to the main loop execution.

4) Tab widget. It contains all the widgets of the application (logging, sensor visualization, etc.), except the map widget.
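As referenced in item 2 above, the per-UAV indicators can be thought of as a single telemetry record. The following struct is only a hypothetical grouping of the fields listed in the text, not a data type taken from the actual implementation:

// Hypothetical grouping of the per-UAV fields shown in area 2 of the GCS layout.
struct UavTelemetry {
    // Localization
    double latitudeDeg;          // GPS latitude  [deg]
    double longitudeDeg;         // GPS longitude [deg]
    double gpsAltitudeM;         // GPS altitude  [m]
    double barometricAltitudeM;  // barometric altitude [m]
    double headingDeg;           // heading [deg]
    int    gpsSatellites;        // satellites used by the on-board controller
    // Status
    int    operationMode;        // current mode of operation
    double batteryVoltageV;      // battery voltage [V]
    double batteryCurrentA;      // battery current [A]
    // Attitude from the IMU
    double rollDeg, pitchDeg, yawDeg;
};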

The common cycle of operation is simple. After the initial setup (detecting the screens available on the station, activating the sound, etc.), an automatic updating process is executed every ten seconds in order to detect the addition or removal of components (such as UAVs and cameras) in the system. If a new component is detected, a thread showing its information is launched with a refresh rate of 10 Hz, while in parallel the GCS updates the different widgets. The overall process is shown in Fig. 3.
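A minimal Qt sketch of this cycle is given below; the class and slot names are hypothetical, the component discovery itself is omitted, and only the two rates mentioned in the text (a 10 s detection period and a 10 Hz refresh) are taken from the paper:

// Hypothetical sketch of the GCS update cycle: a 10 s timer scans for
// added/removed components and each detected component is refreshed at 10 Hz.
#include <QObject>
#include <QTimer>

class GcsCore : public QObject {
    Q_OBJECT
public:
    GcsCore(QObject* parent = 0) : QObject(parent), refreshTimer(0) {
        connect(&discoveryTimer, SIGNAL(timeout()), this, SLOT(detectComponents()));
        discoveryTimer.start(10 * 1000);              // component detection every ten seconds
    }

private slots:
    void detectComponents() {
        // Query the network for added/removed UAVs and cameras (details omitted).
        // For every newly detected component a dedicated 10 Hz refresh is started;
        // a single hypothetical one is shown here.
        if (!refreshTimer) {
            refreshTimer = new QTimer(this);
            connect(refreshTimer, SIGNAL(timeout()), this, SLOT(updateWidgets()));
            refreshTimer->start(100);                 // 10 Hz refresh rate
        }
    }
    void updateWidgets() {
        // Push the latest telemetry of each component into the GUI widgets.
    }

private:
    QTimer discoveryTimer;
    QTimer* refreshTimer;
};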

The next section details the libraries chosen for the development of the ground control station.

A. Open Source Libraries

The selection criteria for the libraries used in the software implementation were based on:

• Widespread, well-proven usage in the community and reliability.

• Well documented APIs.

• Open source.

• Multiplatform.

• C++ based.

According to these prerequisites, the following design decisions were taken:

• The Graphical User Interface (GUI) was developed using the Qt library [15], which allows an easy development of user interfaces thanks to its signals and slots connection mechanism [16]. It provides its own IDE and easy integration with other C++ libraries.

• The video image processing was based on Intel's OpenCV library [17], a de facto standard in digital image processing.

• The integration of the map was solved using the Marble library [18], which provides a fully compatible Qt widget.

• A realistic 3D environment was developed using the OpenSceneGraph [19] and the OSG Earth [20] libraries.

• Finally, interprocess communication was solved by using the YARP library [21]. YARP is an open source, multiplatform framework, written almost entirely in C++, that supports distributed computation in a very efficient way.

During the software development stage, the main difficulties were found in the asynchronous nature of the system, with multiple UAVs, tasks and associated events. In addition, several issues related to the integration of the different libraries had to be solved.

In the following, the main widgets of the GCS are described along with their usage by the operator to control several UAVs.

B. Marble Map Widget

The map widget is an important component in any GCS. It shows the position and orientation of the UAVs on a global map, updating them in real time, and also displays additional information such as areas of interest, intruders, waypoints, trajectories, video streaming from the UAVs, etc.

The widget is based on the Marble library, and the geographical information is provided by the OpenStreetMap project [22]. This combination has proven to be an interesting option compared to Google Maps, with an important advantage: it does not require Internet access during the operation of the system (it is possible to pre-cache the areas of interest on disk).

The map widget is completely interactive and the user can perform different operations: zoom in/out, move around, follow the selected UAV, generate missions and pathways, choose the elements to be displayed (UAVs, waypoints, camera projections, intruders or surveillance zones), get coordinates, etc.
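A minimal sketch of how such a map can be embedded with Marble is shown below; the map theme identifier corresponds to the OpenStreetMap theme shipped with Marble, while the centering coordinates are just example values and the UAV overlays are omitted:

// Minimal sketch: an OpenStreetMap-based Marble widget in a Qt application.
// Pre-downloaded tiles in Marble's local cache allow operation without Internet access.
#include <QApplication>
#include <marble/MarbleWidget.h>
#include <marble/GeoDataCoordinates.h>

int main(int argc, char** argv) {
    QApplication app(argc, argv);

    Marble::MarbleWidget map;
    map.setMapThemeId("earth/openstreetmap/openstreetmap.dgml");   // OSM map theme
    map.centerOn(Marble::GeoDataCoordinates(-6.0, 37.41, 0.0,
                 Marble::GeoDataCoordinates::Degree));             // example: Seville area
    map.show();                                                    // interactive pan/zoom by default
    return app.exec();
}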



C. SVG Widgets

Some information components, such as the battery levels, are represented using SVG widgets (see Fig. 4) to simulate a cockpit environment that can be more intuitive for the operator. These widgets are based on the Qt Embedded Widgets Demo [15].

D. OSG Earth Widget

The OSG Earth widget shows a real-size 3D model of the Earth including the 3D representation of the UAVs and other elements relevant to the mission, such as waypoints, flight paths, etc. Additional 3D models of buildings, bridges, monuments, etc. can be displayed on the terrain to increase the realism of the environment and to improve the situational awareness of the remote operator. These objects can be loaded from a COLLADA model [23]. Figure 5 shows a screenshot of the widget.
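A minimal sketch of how such a globe can be loaded with OpenSceneGraph and osgEarth is given below; the .earth file name is hypothetical and the integration into a Qt widget is omitted:

// Minimal sketch: render a globe described by an osgEarth ".earth" file in an
// OpenSceneGraph viewer. UAV models, waypoints and COLLADA buildings would be
// added as additional nodes of the same scene graph.
#include <osg/Node>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main() {
    // "surveillance_area.earth" is a hypothetical map description file.
    osg::ref_ptr<osg::Node> globe = osgDB::readNodeFile("surveillance_area.earth");
    if (!globe)
        return 1;                        // the osgEarth plugin could not load the file

    osgViewer::Viewer viewer;
    viewer.setSceneData(globe.get());
    return viewer.run();
}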

E. Multi-UAV Control Panel Widget

When a UAV is added to the system, a new UAV control panel widget (see Fig. 6) is created with a unique color code identifier. The control panel shows the UAV number and a LED with the status of the communication link. There are also eight buttons to send the following tasks to the UAV (from left to right according to Fig. 6):

• Row 1: TAKE-OFF, LAND, RETURN-HOME, GO-TO.

• Row 2: SURVEILLANCE, WAIT, TAKE-IMAGES, RECHARGE.

It is important to highlight that the layout of the command buttons and the displayed information simplifies the work of the operator. The task buttons included in the panel allow the operator to build complex missions based on the combination of different tasks. In addition, the ADL and EL on board the UAV (see Sect. II) check the specified missions for inconsistencies and unsafe trajectories.

Fig. 4. Example of the SVG widgets used in the ground control station.

Fig. 5. Screenshot of the OSG Earth widget used in the ground control station. The camera can be configured to be controlled by the operator or to follow the UAV automatically from a chase or top view. It is also possible to switch to a virtual view of any of the available ground cameras.

Fig. 6. UAV Control Panel that integrates buttons for the different tasks, a mini-HUD and a classical mission time counter.


Finally, a mini Head-Up Display (HUD) with a classical artificial horizon has been developed and integrated. It shows the orientation of the UAV with respect to the horizon, together with altitude, heading and speed indicators. This information is overlaid on the images from an on-board camera if one is available. This widget has been developed from scratch using the graphical capabilities provided by the Qt library [15].
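One possible way to render the artificial horizon of such a mini-HUD with Qt is sketched below; the widget, the pitch scale factor and the attitude handling are hypothetical and only the basic roll/pitch drawing is shown:

// Hypothetical sketch of a mini-HUD artificial horizon: the sky/ground split is
// rotated by the roll angle and shifted by the pitch angle.
#include <QWidget>
#include <QPainter>

class MiniHud : public QWidget {
public:
    MiniHud(QWidget* parent = 0) : QWidget(parent), roll(0.0), pitch(0.0) {}
    void setAttitude(double rollDeg, double pitchDeg) {
        roll = rollDeg; pitch = pitchDeg; update();    // repaint with the new attitude
    }
protected:
    void paintEvent(QPaintEvent*) {
        QPainter p(this);
        p.setRenderHint(QPainter::Antialiasing);
        p.translate(width() / 2.0, height() / 2.0);    // origin at the widget center
        p.rotate(-roll);                               // roll tilts the horizon line
        double offset = pitch * 2.0;                   // arbitrary pixels-per-degree scale
        p.fillRect(QRectF(-width(), -height() + offset, 2.0 * width(), height()),
                   QColor(80, 150, 230));              // sky
        p.fillRect(QRectF(-width(), offset, 2.0 * width(), height()),
                   QColor(150, 100, 60));              // ground
        p.setPen(Qt::white);
        p.drawLine(QPointF(-width(), offset), QPointF(width(), offset));  // horizon line
    }
private:
    double roll, pitch;
};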

F. Graphs Widget

The graphs widget plots data from different sensors of the selected UAV in real time. Available measurements are the yaw, pitch and roll angles, the speed vector and the battery levels. It also plots the trajectory followed by the UAV in Cartesian coordinates [x, y] (as can be seen in Fig. 7), so the operator can track its flight path during the execution of the mission.

G. Log Player Widget

The log player widget is a very useful tool to replay previous flights and experiments for mission debriefing purposes. The operator only has to select a log file corresponding to a previous mission to replay it using the common multimedia controls (play, stop, etc.).

Fig. 7. An example of a graph widget showing the trajectory followed by the UAV in Cartesian coordinates [x, y].



Fig. 8. An example of the OSD notification system, which uses a partially transparent window to keep the operator informed about the different events.

H. OSD Notifications and Alarms

The different events that may occur during the execution of a mission are notified to the operator in different ways. At the top level, an On Screen Display (OSD) system has been developed, popping up the notifications to the user in a partially transparent window, as can be seen in Fig. 8.
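One possible way to implement such a partially transparent notification with Qt is sketched below; the styling, opacity and timeout are hypothetical:

// Hypothetical sketch of an OSD-style notification: a frameless, always-on-top,
// partially transparent label that disappears after a few seconds.
#include <QLabel>
#include <QTimer>

void showOsdNotification(const QString& text, QWidget* parent = 0) {
    QLabel* osd = new QLabel(text, parent,
                             Qt::FramelessWindowHint | Qt::WindowStaysOnTopHint | Qt::Tool);
    osd->setAttribute(Qt::WA_DeleteOnClose);           // free the label once it is closed
    osd->setWindowOpacity(0.7);                        // partially transparent window
    osd->setStyleSheet("background: black; color: white; padding: 8px;");
    osd->adjustSize();
    osd->show();
    QTimer::singleShot(4000, osd, SLOT(close()));      // hide the notification after 4 s
}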

Events with higher priority, or those requiring an interactive answer by the operator, are displayed in a pop-up window, as shown in Fig. 11. Sounds can also be linked to both types of alarms.

IV. GCS TESTING IN FIELD EXPERIMENTS

The ground control station was tested in field experiments in November 2011 in Seville. The overall multi-UAV surveillance system architecture described previously, along with the developed GCS, was used in two types of missions with two customized quadrotors.

A. Hardware Setup

The UAVs integrated in the system were two quadrotors of the Pelican model manufactured by Ascending Technologies [24]. This quadrotor has enough payload capacity (500 g) for executing different missions. It is equipped with an Intel Atom on-board computer, which allows add-on sensors and cameras to be connected, mainly via mini USB interfaces. The UAV has a flight endurance of approximately fifteen minutes at full payload.

A customized Ubuntu Linux distribution was installed on the Atom board and configured with several watchdog processes in order to guarantee the wireless LAN communication and the allocation of identifiers to the on-board devices. Each quadrotor was equipped with a high definition webcam, an XBee-PRO radio data link and a GPS receiver for telemetry, all of them connected to the Atom board with customized mini USB connections (see Fig. 9). The captured images were transmitted to the ground through the wireless LAN connection, whereas the on-board position and orientation controller used the XBee radio link to send the telemetry data to its EL/ADL processes (running on a laptop). This hardware setup is summarized in Fig. 10.

Due to the decentralized software design, it is also possible to run all the processes (GCS and both EL/ADLs) on a single laptop. However, looking for improved supervision during the field experiments, it was decided to use three different laptops.

TABLE I
LINKS TO THE VIDEOS OF THE MISSIONS CARRIED OUT IN NOVEMBER 2011 USING A TEAM OF TWO QUADROTORS AND THE GCS DESCRIBED IN THIS PAPER.

Mission 1: http://grvc.us.es/GCS/icuas_mission1.wmv
Mission 2: http://grvc.us.es/GCS/icuas_mission2.wmv

Two of them ran the EL/ADL processes associated with each quadrotor, and the third one executed the GCS.

The next sections describe the two types of missions carried out in Seville in November 2011 to test the developed multi-UAV GCS. Table I shows the links to the videos of both missions.

B. Mission 1: Cooperative Area Surveillance

In this mission, the area of interest was automatically split into subareas, taking into account the number of available UAVs, in order to patrol it efficiently. The mission was launched by the GCS (under the operator's supervision) when an intruder was detected and localized by the ground camera network. The intruder alarm was displayed on the interactive map of the GCS, as shown in Fig. 11. Then the operator had two options:

1) Allow the automatic generation of a surveillance task around the area in which the intruder was detected.

2) Set different waypoints manually around the area of interest.

In the field experiments conducted, the operator always selected the first option in order to test the autonomous capabilities of the system. Thus, a surveillance area task was automatically requested by the GCS to the ADLs, which replied with the subareas and waypoints for each UAV considering the operational constraints, such as battery levels or distances to the subareas. These subareas and waypoints were displayed on the GCS to be validated (see Fig. 12). Then, the operator could have modified or removed the tasks allocated to each UAV.

Fig. 9. Customized quadrotor of the Pelican model by Ascending Technologies, equipped with a high definition webcam, an XBee-PRO radio data link and a GPS receiver for telemetry, all of them connected to the Atom board with customized mini USB connections.



Fig. 10. Hardware setup used in the field experiments. The captured images were transmitted to the ground through the wireless LAN connection, whereas the on-board position and orientation controller used the XBee radio link to send the telemetry data to its EL/ADL processes (running on a laptop). The Linksys E4200 dual band router was used to support LAN communication between the ground computers/cameras and wireless communication with the on-board IP cameras.

Once the plan was completed, the operator validated it and the autonomous execution of the mission started.

The on-board IP cameras pointed downwards, sending images through the wireless link to the GCS, and the operator could supervise the area searching for other potential intruders. The workload of the operator was low during the execution, as the operator only had to attend to the images from the cameras displayed on the map.

Fig. 11. Automatic intruder alarm window. The operator has the option to allow the automatic generation of a surveillance task around the area in which the intruder was detected.

Fig. 12. Resulting automatic area partition and list of waypoints.

Fig. 13. If a camera of the ground network (or its datalink) is lost, an alarm is triggered on the GCS map. The operator is asked to confirm an automatic task for the UAVs in order to cover the area lost by the camera.

C. Mission 2: Camera Network Dynamic Repairing

The objective of this mission is to use the IP cameras on board the UAVs to provide images of an area in which a ground camera is damaged or temporarily offline. Thus, the system must be able to detect potential failures of the ground cameras and to schedule the required tasks to allow a UAV to cover the area around the non-working camera location.

If the signal from one camera of the network (or its datalink) is lost, an alarm is triggered on the GCS map, as shown in Fig. 13. Then, the operator is asked to confirm an automatic task for the UAVs in order to cover the area lost by the camera.

The task is allocated to one of the UAVs according to its available capabilities, which depend on its location, flight endurance and on-board sensors. The UAV takes off, flies to the selected location and sends the images to the GCS until the ground camera link is restored (see Fig. 14). If the UAV's remaining endurance falls below a certain threshold, the GCS detects it and sends a RETURN-HOME task to bring the UAV back. In this case, the original task can be automatically transferred to another UAV in order to continue with the mission.
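The supervision logic described in this paragraph can be summarized by the following C++ sketch; the threshold value, task names and helper function are hypothetical and do not correspond to the actual interfaces of the platform:

// Hypothetical sketch of the GCS-side supervision described in the text: if the
// remaining endurance of the UAV covering the camera area drops below a threshold,
// a RETURN-HOME task is sent and the coverage task is handed over to another UAV.
#include <cstdio>
#include <vector>

struct Uav {
    int id;
    double remainingEnduranceMin;   // estimated remaining flight time [min]
    bool available;                 // idle and able to take over a task
};

enum TaskType { TASK_RETURN_HOME, TASK_COVER_CAMERA_AREA };

void sendTask(int uavId, TaskType task) {
    // Stub: in the real system this would be a task request sent to the UAV's ADL.
    std::printf("UAV %d <- task %d\n", uavId, static_cast<int>(task));
}

void superviseCameraRepair(Uav& active, std::vector<Uav>& team,
                           double enduranceThresholdMin) {
    if (active.remainingEnduranceMin >= enduranceThresholdMin)
        return;                                   // enough endurance left, keep covering

    sendTask(active.id, TASK_RETURN_HOME);        // bring the low-endurance UAV back
    for (std::vector<Uav>::iterator it = team.begin(); it != team.end(); ++it) {
        if (it->id != active.id && it->available) {
            sendTask(it->id, TASK_COVER_CAMERA_AREA);   // transfer the original task
            break;
        }
    }
}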



Fig. 14. UAV-1 is hovering over the area previously covered by the lost camera to restore the surveillance.


V. CONCLUSIONS AND FUTURE WORK

The design adopted for the system architecture has proven to be an efficient solution to achieve a rapid deployment of a multi-UAV surveillance system. The design of the GCS exploited the autonomous capabilities of the multi-UAV system to decrease the workload of the operator. Thus, the developed GCS allowed the operator to exploit most of the capabilities that the autonomous multi-UAV system could provide, while at the same time offering a user-friendly and easy-to-operate interface.

Future work will consider the integration of information from other sensors (not only ground cameras) in the GCS. The possibility of executing the GCS from a remote location through the Internet will also be studied.

VI. ACKNOWLEDGMENTS

The authors would like to thank the Boeing Research & Technology Europe company for their financial and technical support.

REFERENCES

[1] I. Maza, F. Caballero, J. Capitan, J. M. de Dios, and A. Ollero, “A distributed architecture for a robotic platform with aerial sensor transportation and self-deployment capabilities,” Journal of Field Robotics, vol. 28, no. 3, pp. 303–328, 2011. [Online]. Available: http://dx.doi.org/10.1002/rob.20383

[2] M. Bernard, K. Kondak, I. Maza, and A. Ollero, “Autonomous transportation and deployment with aerial robots for search and rescue missions,” Journal of Field Robotics, vol. 28, no. 6, pp. 914–931, 2011. [Online]. Available: http://dx.doi.org/10.1002/rob.20401

[3] M. Jovanovic and D. Starcevic, “Software architecture for ground control station for unmanned aerial vehicle,” in Computer Modeling and Simulation, 2008. UKSIM 2008. Tenth International Conference on, April 2008, pp. 284–288.

[4] M. Dong, B. Chen, G. Cai, and K. Peng, “Development of a real-time onboard and ground station software system for a UAV helicopter,” Journal of Aerospace Computing, Information and Communication, vol. 4, no. 8, pp. 933–955, 2007.

[5] (2010) QGroundControl: open source MAV ground control station. [Online]. Available: http://qgroundcontrol.org

[6] O. Lemon, A. Bracy, A. Gruenstein, and S. Peters, “The WITAS multi-modal dialogue system I,” in Proceedings of the 7th European Conference on Speech Communication and Technology (EUROSPEECH), Aalborg, Denmark, September 2001, pp. 1559–1562.

[7] A. Ollero and I. Maza, Eds., Multiple Heterogeneous Unmanned Aerial Vehicles, ser. Springer Tracts on Advanced Robotics. Springer, 2007, ch. Teleoperation Tools, pp. 189–206.

[8] R. Sharma, V. I. Pavlovic, and T. S. Huang, “Toward multimodal human-computer interface,” Proceedings of the IEEE, vol. 86, no. 5, pp. 853–869, 1998.

[9] J. S. McCarley and C. D. Wickens, “Human factors implications of UAVs in the national airspace,” Institute of Aviation, Aviation Human Factors Division, University of Illinois at Urbana-Champaign, Tech. Rep. AHFD-05-5/FAA-05-1, 2005.

[10] I. Maza, F. Caballero, R. Molina, N. Peña, and A. Ollero, “Multimodal interface technologies for UAV ground control stations. A comparative analysis,” Journal of Intelligent and Robotic Systems, vol. 57, no. 1–4, pp. 371–391, 2010. [Online]. Available: http://dx.doi.org/10.1007/s10846-009-9351-9

[11] Boeing. X-45 unmanned aerial combat system. [Online]. Available: http://www.boeing.com/history/boeing/x45_jucas.html

[12] ——. Ground control station for multiple X-45 unmanned aerial combat systems. [Online]. Available: http://www.youtube.com/watch?v=ilyUNkjKlPM

[13] (2012) The Paparazzi project: free and open-source hardware and software autopilot system. [Online]. Available: http://paparazzi.enac.fr

[14] P. Brisset and G. Hattenberger, “Multi-UAV control with the Paparazzi system,” in Proceedings of the First Conference on Humans Operating Unmanned Systems (HUMOUS'08), Brest, France, 3–4 Sept. 2008.

[15] Nokia Corporation. (2010) Qt reference documentation. [Online]. Available: http://doc.qt.nokia.com/4.6

[16] J. Blanchette and M. Summerfield, C++ GUI Programming with Qt 4. Prentice Hall, 2006.

[17] OpenCV: open source computer vision. [Online]. Available: http://opencv.willowgarage.com

[18] (2005–2011) The Marble project. KDE Educational Project. [Online]. Available: http://edu.kde.org/marble

[19] (2005–2011) OpenSceneGraph: open source 3D graphics toolkit. [Online]. Available: http://www.openscenegraph.org

[20] (2005–2011) osgEarth: terrain rendering toolkit. Pelican Mapping. [Online]. Available: http://osgearth.org/

[21] G. Metta, P. Fitzpatrick, and L. Natale, “YARP: Yet Another Robot Platform,” International Journal of Advanced Robotic Systems, vol. 3, no. 1, pp. 043–048, 2006.

[22] (2005–2011) The OpenStreetMap project. [Online]. Available: http://www.openstreetmap.org

[23] COLLADA: digital asset and FX exchange schema. [Online]. Available: https://collada.org

[24] (2011) Ascending Technologies. [Online]. Available: http://www.asctec.de

The original publication is available at www.springerlink.com at this link: http://dx.doi.org/10.1007/s10846-012-9759-5