

Development of an autonomous rough-terrain robot

Alina Conduraru and Ionel Conduraru
Gheorghe Asachi Technical University of Iasi
B-dul D. Mangeron, 61-63, 700050 Iasi, Romania
[email protected]

Emanuel Puscalau
Technical University of Civil Engineering Bucharest
Lacul Tei Bvd., no. 122-124, RO 020396, sector 2, Bucharest
[email protected]

Geert De Cubber, Daniela Doroftei and Haris Balta
Royal Military Academy, Unmanned Vehicle Centre
Av. De La Renaissance 30, B1000 Brussels, Belgium
[email protected]

Abstract—In this paper, we discuss the development process of a mobile robot intended for environmental observation applications. The paper describes how a standard tele-operated Explosive Ordnance Disposal (EOD) robot was upgraded with electronics, sensors, computing power and autonomous capabilities, such that it becomes able to execute semi-autonomous missions, e.g. for search & rescue or humanitarian demining tasks. The aim of this paper is not to discuss the details of the navigation algorithms (as these are often task-dependent), but rather to concentrate on the development of the platform and its control architecture as a whole.

I. INTRODUCTION

A. Problem Statement

Mobile robots are increasingly leaving the protected lab environment and entering the unstructured and complex outside world, e.g. for applications such as environmental monitoring. However, recent events like the Tohoku earthquake in Japan, where robots could in theory have helped a lot with disaster relief but were hardly used at all in practice [3], have shown that there exists a large discrepancy between robotic technology developed in science labs and the use of such technology in the field. The rough outside world poses several constraints on the mechanical structure of the robotic system, on the electronics and the control architecture, and on the robustness of the autonomous components. The main factors to take into consideration are [2]:
• Mobility on difficult terrain and different soils
• Resistance to rain and dust
• Capability of working in changing illumination conditions and in direct sunlight
• Capability of dealing with unreliable communication links, requiring autonomous navigation capabilities

In this paper, we present a robotic system which was developed to deal with these constraints. The platform is to be used as an environmental monitoring robot for two main application areas: humanitarian demining (when equipped with a ground penetrating radar and a metal detector) and search and rescue (when equipped with human victim detection sensors).

B. Platform Description

Taking into account the different constraints and tasks for outdoor environmental monitoring applications, a robotic system as shown in Figure 1 was developed. The base vehicle of this unmanned platform consists of a Telerob Teodor Explosive Ordnance Disposal (EOD) robot [6]. We chose to use a standard EOD robot platform for several reasons:
• As a platform, it has proven its usefulness in dealing with rough terrain.
• Recycling a standardized platform is a good means of saving costs, as rescue or demining teams do not have the financial resources to buy expensive dedicated platforms.
• The rugged design of the platform makes it capable of handling unfriendly environmental conditions.

Fig. 1. The robotic system, consisting of the Teodor base UGV, a quadrotor UAS, and an integrated active stereo/time-of-flight depth sensing system (a pan-tilt unit carrying the ToF camera and the stereo camera).

An important drawback of the standard Teodor platform is that it does not feature any autonomous capabilities. As discussed in [2], end-users of these systems do require the robotic systems to have autonomous capabilities, e.g. for entering semi-collapsed structures, where communication lines may fail. On the other hand, the end-users also want to always have the capability to remote control the robotic systems. For this reason, a hybrid control architecture, sketched in Figure 2 and explained in section II, was developed, giving the user the choice between direct tele-operation and autonomous operation.


In order to provide data input to the autonomous control system, an active depth sensing system was integrated on the platform. This 3D sensing system consists of a time-of-flight (TOF) camera and a stereo camera, mounted on a pan-tilt unit. This active vision system - further discussed in section IV-B - provides the required input for the terrain traversability and path negotiation algorithms.

Finally, the last component of the unmanned system is a quadrotor-type helicopter, able to land on top of the ground robot. The idea of using this unmanned aerial vehicle (UAV) is to pair the advantage of a UAV (the possibility to obtain a good overview of the environment from above) with the advantages of a UGV (the possibility of interacting on the terrain). The control system of the quadrotor is integrated in the global control architecture, making this robotic system an integrated UAV/UGV.

The remainder of this paper is organized as follows: Section II discusses the global control architecture. Section III focuses on the remote-operation functionalities, whereas section IV discusses the autonomous capabilities.

II. GLOBAL CONTROL ARCHITECTURE

The global control architecture is shown in Figure 2. As can be clearly noticed from Figure 2, the architecture foresees multiple levels of control:

1) The first (bottom) layer is the hardware layer, consisting of the robots (UGV + UAV) themselves and the different installed devices (sensors).
2) In a second layer, a series of drivers provide interfaces to these devices.
3) In a third, abstract sensing layer, information is transferred at a higher level by means of data fusion and command decomposition algorithms. It is also here that the remote control interface can be found.
4) In a fourth and final layer, the robot intelligence modules and algorithms can be found.

It must be noted that, in Figure 2, the boxes with a green background represent ROS (Robot Operating System) modules. This software architecture was chosen as the base system to develop all autonomous capabilities upon, due to the large repository of pre-existing material which can be put to good use on this robot system.

As can be noted from Figure 2, there are two means of controlling the robotic system: tele-operation (white boxes / left side) and using autonomy (green boxes / right side). In the following sections, we will discuss each of these possibilities in more detail.

III. REMOTE-OPERATION FUNCTIONALITIES

Mobile robot tele-operation requires an intuitive human-machine interface (HMI), which is flexible and efficient. The design and implementation of such an HMI is a difficult task, in particular when the mobile robot is used for operations or interventions in complex environments, where both safety and precision need to be assured. In most cases, mobile robots are equipped with sensors which are capable of providing an impressive volume of data to the user. This massive data stream risks causing a cognitive overload for the human operator when transferred unfiltered. The HMI must see to it that the operator is presented a comprehensive overview, which presents all required sensor data and all meaningful control modalities, but not too much.

Fig. 2. Global Control Architecture. Green boxes represent ROS nodes. (The diagram connects the Teodor robot, stereo camera, time-of-flight camera, web camera and quadrotor with HD camera, through their drivers - base/LabView driver, Bumblebee driver, PMD driver and ROS drivers - to the Remote Control interface, an iOS/Android application, the Integrated 3D Reconstruction and Terrain Traversability Estimation modules, and the task-dependent Intelligence module for search & rescue / demining.)


Fig. 3. The control panel for remote robot operation, subdivided in 8 zones.

For this remote operation application, LabView was chosen as the design environment. Following the LabView design formalism, the remote operation module is built up as a virtual instrument. The virtual instrument provides a solution for integrating all the control elements into a unitary system which is compact and highly mobile. Combining these virtual instruments in the LabView graphical programming environment significantly reduces the development time and the solution validation effort.

Figure 3 shows the control panel which is presented to the remote human operator. As shown in Figure 3, the front panel is composed of eight areas, integrating the tools used by the operator to control the robot movement:

1) Here, a connection with the robot can be established and commands can be sent to the robot. Commands are transmitted by the computer over a serial port interface, in the example format MV 150RT150. The characters MV identify the type of command (linear displacement), and the following three characters represent the variable value of the MV command. The characters RT again form a character string, identifying the rotation command for the robot, with a value determined by the following three characters (see the sketch after this list).
2) Here, a connection to a (remote controlled) joystick or gamepad can be made. When this is turned on, the operator can use the gamepad connected to the computer to control the robot.
3) In this area, the user can select the speed of the different actuators.
4) Here, the user can control the robot using a mouse or keyboard, in the absence of a joystick or gamepad.
5) On this panel, the current movement speed and turning velocity are displayed.
6) In this area, the position of the robot is shown.
7) Here, a history of all previous commands is shown.
8) Finally, the camera image is streamed, such that the user has a view of the robot environment.
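As an illustration of the command format in zone 1, the following minimal sketch encodes such motion commands. The helper names and the pyserial transport are our own assumptions, not part of the original LabView implementation; only the MV/RT string format is taken from the description above.

```python
# Hypothetical encoder for the serial motion commands of zone 1;
# function names and the pyserial usage are assumptions.
import serial  # pyserial


def encode_motion_command(linear: int, rotation: int) -> str:
    """Build a command such as 'MV 150RT150': MV plus three characters
    for the linear-displacement value, then RT plus three characters
    for the rotation value."""
    return f"MV {linear:03d}RT{rotation:03d}"


def send_motion_command(port: serial.Serial, linear: int, rotation: int) -> None:
    """Transmit one motion command over the serial port interface."""
    port.write(encode_motion_command(linear, rotation).encode("ascii"))


if __name__ == "__main__":
    print(encode_motion_command(150, 150))  # -> MV 150RT150
```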

IV. AUTONOMOUS FUNCTIONALITIES

A. Requirements

Unmanned vehicles used for environmental monitoring require autonomous capabilities for the following reasons:
• During indoor operations, they must be able to cope with a loss of communication. In this case, they must be able to perform a high-level task (e.g. searching for survivors of a collapse) without human operator intervention.
• The complexity of the unstructured environment makes it difficult for remote human operators to assess the traversability of the terrain. Therefore, the robotic systems should be equipped with at least some local obstacle avoidance capacity.

Autonomous reasoning requires the correct assessment (or "understanding") of the environment by the robot's artificial intelligence. A first step in this process is perception. As can be noted from Figure 1, the proposed unmanned environmental monitoring system is equipped with an active 3D sensing system, consisting of a stereo camera and a time-of-flight camera, mounted on a pan-tilt unit. Note that the TOF camera used here is capable of working in outdoor conditions (also in heavy sunlight), unlike modern consumer-grade depth sensors, which is a requirement for environmental monitoring applications.

B. 3D Sensing

The objective of this 3D sensing system is to provide high-quality, real-time dense depth data by combining the advantages of a time-of-flight camera and a stereo camera. Individually, both the stereo and the time-of-flight sensing systems suffer from restrictive usage constraints:
• A stereo system has difficulties with reconstruction in untextured areas.
• A TOF camera has a very limited resolution (here: 200 x 200 pixels).

These limitations are also visible in Figure 4, showing the 3D view of both the TOF and the stereo camera. It can be noted that the stereo-based reconstruction features some holes where reconstruction was not possible due to a lack of texture, which causes the left-to-right matching to fail. The TOF-based reconstruction on the right of Figure 4 is dense. However, it features only a limited resolution and a limited field of view, as depicted by the red rectangle in Figure 4a.

To lift these disadvantages, we propose a data fusion approach which combines the TOF-based and stereo-based reconstruction results in real time.


Fig. 4. Visualisation of the points provided by the depth cameras: (a) stereo-based reconstruction, (b) TOF-based reconstruction.

As both stereo and TOF depth sensors provide similar types of output (a depth map and/or a 3D point cloud), it is possible to perform this data fusion in a straightforward manner using standard ICP approaches. Whereas classical ICP approaches can be notoriously slow (which would be a problem in the envisaged application), this problem can be circumvented here: as both sensors are rigidly attached to each other, there is a good initial guess for the translation and rotation between both point clouds.
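As a rough illustration of such rigidly-initialized registration, the sketch below uses the open-source Open3D library. The library choice, the 5 cm correspondence tolerance and all variable names are our assumptions; the paper does not specify which ICP implementation was used.

```python
# Minimal sketch: refine a known rigid mounting transform with ICP
# and merge the two clouds. Open3D is our assumed library choice.
import numpy as np
import open3d as o3d


def fuse_clouds(stereo_cloud: o3d.geometry.PointCloud,
                tof_cloud: o3d.geometry.PointCloud,
                T_tof_to_stereo: np.ndarray) -> o3d.geometry.PointCloud:
    """Register the TOF cloud onto the stereo cloud, starting from the
    rigid-mounting transform as initial guess, then merge them."""
    result = o3d.pipelines.registration.registration_icp(
        tof_cloud, stereo_cloud,
        max_correspondence_distance=0.05,  # 5 cm tolerance (assumed)
        init=T_tof_to_stereo,              # rigid-mounting initial guess
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    # Transform the TOF cloud into the stereo frame and concatenate.
    return stereo_cloud + tof_cloud.transform(result.transformation)
```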

The result of this data fusion operation is a clean and high-resolution 3D reconstruction, serving as input for subsequent data processing algorithms, notably for traversability analysis.

C. Terrain Traversability Analysis

Traversability estimation is a challenging problem, as traversability is a complex function of both the terrain characteristics (slopes, vegetation, rocks, etc.) and the robot mobility characteristics (locomotion method, wheels, etc.). It is thus required to analyze the 3D characteristics of the terrain in real time and to pair this data with the robot capabilities.

The methodology for stereo and time-of-flight-based terrain traversability analysis extends our previous work on stereo-based terrain classification approaches [1], [4]. Following this strategy, the RGB data stream from the stereo camera is segmented to group pixels belonging to the same physical objects. From the depth data streams of the TOF and stereo cameras, the v-disparity [5] is calculated to estimate the ground plane, which leads to a first estimation of the terrain traversability. From this estimation, a number of pixels are selected which have a high probability of belonging to the ground plane (low distance to the estimated ground plane). The mean a and b color values of these pixels in the Lab color space are recorded as c.
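For illustration, a v-disparity image is simply a per-row histogram of disparities, in which the ground plane appears as a dominant slanted line. This minimal numpy sketch follows the definition from [5]; the variable names are ours, not the authors' code.

```python
# Sketch of a v-disparity image from a dense disparity map (H x W).
import numpy as np


def v_disparity(disp: np.ndarray, max_disp: int) -> np.ndarray:
    """Entry (v, d) counts how many pixels in image row v have
    disparity d; the ground plane shows up as a slanted line."""
    h, _ = disp.shape
    vdisp = np.zeros((h, max_disp + 1), dtype=np.int32)
    for v in range(h):
        row = disp[v]
        valid = (row >= 0) & (row <= max_disp)  # ignore invalid pixels
        vdisp[v] = np.bincount(row[valid].astype(int),
                               minlength=max_disp + 1)
    return vdisp
```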

The presented methodology then classifies all image pixels as traversable or not by estimating for each pixel a traversability score, based upon the analysis of the segmented color image and the v-disparity depth image. For each pixel i in the image, the color difference ‖c_i − c‖ and the obstacle density of the region to which the pixel belongs are calculated. The obstacle density δ_i is here defined as

δ_i = |{o ∈ A_i}| / |A_i|

where o denotes the pixels marked as obstacles (high distance to the estimated ground plane) and A_i denotes the segment to which pixel i belongs. This allows us to define a traversability score τ_i = δ_i ‖c_i − c‖, which is used for classification. This is done by setting up a dynamic threshold, as a function of the measured distance. Indeed, as the error on the depth measurement increases with the distance, it is required to increase the tolerance on the terrain classification as a function of the distance. An important issue when dealing with data from a time-of-flight sensor is the correct assessment of erroneous input data and noise. Therefore, the algorithm automatically detects regions with low intensities and large variances in the distance measurements and marks these as "suspicious".
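A compact sketch of this scoring, assuming a per-pixel segment labelling, an obstacle mask from the ground-plane test, and the Lab a/b channels as inputs (all names are our own illustration of the formula above, not the authors' implementation):

```python
# Per-pixel traversability score tau_i = delta_i * ||c_i - c||.
import numpy as np


def traversability_scores(segments: np.ndarray,      # H x W segment ids
                          obstacle_mask: np.ndarray,  # H x W bool
                          lab_ab: np.ndarray,         # H x W x 2 (a, b)
                          ground_ab: np.ndarray) -> np.ndarray:  # (2,) = c
    """Obstacle density of the pixel's segment times the pixel's
    Lab a/b color distance to the mean ground color c."""
    n_segments = int(segments.max()) + 1
    seg_pixels = np.bincount(segments.ravel(), minlength=n_segments)
    seg_obstacles = np.bincount(segments.ravel(),
                                weights=obstacle_mask.ravel().astype(float),
                                minlength=n_segments)
    density = seg_obstacles / np.maximum(seg_pixels, 1)  # delta per segment
    color_dist = np.linalg.norm(lab_ab - ground_ab, axis=-1)  # ||c_i - c||
    return density[segments] * color_dist  # tau, shape H x W
```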

Using the traversability data, it is possible to steer the robot around non-traversable obstacles and execute high-level tasks.

D. Implementation & Results

In order to have all the physical subsystems working together autonomously, and to achieve a control framework like the one depicted in Figure 2, we need to integrate all capabilities in a suitable framework. Here, we used ROS (Robot Operating System), an open-source meta-operating system for robots. One of the main advantages of ROS is that it contains a lot of pre-made packages and libraries, providing access to sensors and actuators, next to a whole set of data processing and control algorithms.


As an example, the low-level hardware drivers for the pan-tilt unit, stereo camera, TOF camera and quadrotor are available in ROS. As such, we only needed to develop a low-level driver for the robot, supporting our custom serial commands.
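A hedged sketch of what such a low-level driver node could look like, translating standard velocity commands into the serial protocol of section III. The topic name, value scaling, port settings and the use of rospy are our assumptions; only the command string format comes from the paper.

```python
# Illustrative base-driver node: geometry_msgs/Twist -> serial commands.
import rospy
import serial
from geometry_msgs.msg import Twist

port = serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=0.1)  # assumed


def on_cmd_vel(msg: Twist) -> None:
    # Map normalized velocities onto three-character command values;
    # the offset/scale of 150/100 is an invented example mapping.
    linear = int(max(0, min(999, 150 + 100 * msg.linear.x)))
    rotation = int(max(0, min(999, 150 + 100 * msg.angular.z)))
    port.write(f"MV {linear:03d}RT{rotation:03d}".encode('ascii'))


rospy.init_node('teodor_base_driver')
rospy.Subscriber('/cmd_vel', Twist, on_cmd_vel)
rospy.spin()
```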

Both the stereo camera and the TOF camera publish ROS topics in the PointCloud format towards the Integrated 3D Reconstruction module, which combines the data and sends the unified 3D data to a Terrain Traversability Estimation node, outputting a map of the environment indicating the traversable and non-traversable areas.
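The wiring of these nodes might look as follows in rospy; the topic names, the use of sensor_msgs/PointCloud2 and the approximate-time synchronization are illustrative assumptions rather than the authors' actual configuration.

```python
# Sketch of the Integrated 3D Reconstruction node's topic wiring.
import rospy
import message_filters
from sensor_msgs.msg import PointCloud2


def fuse(stereo_msg: PointCloud2, tof_msg: PointCloud2) -> None:
    # Placeholder: the real node would apply the ICP-based fusion of
    # section IV-B here before republishing the unified cloud.
    pub.publish(stereo_msg)


rospy.init_node('integrated_3d_reconstruction')
pub = rospy.Publisher('/reconstruction/points', PointCloud2, queue_size=1)
stereo_sub = message_filters.Subscriber('/stereo/points2', PointCloud2)
tof_sub = message_filters.Subscriber('/tof/points', PointCloud2)
sync = message_filters.ApproximateTimeSynchronizer(
    [stereo_sub, tof_sub], queue_size=5, slop=0.1)
sync.registerCallback(fuse)
rospy.spin()
```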

V. FUTURE WORK & CONCLUSION

In this paper, we discussed the development process of a robotic system for environmental monitoring in search & rescue and demining applications. The system consists of an outdoor-capable UGV equipped with an active 3D sensing system and a UAS. By combining both types of vehicles, it is possible to rapidly get a good overview of the situation in the environment and to perform a life-saving task (finding and rescuing victims or detecting land mines).

It is clear that this is still a work in progress: e.g., the robotic system does not yet contain task-specific sensors (human detector / mine detector). Also, a GPS system still needs to be integrated in the system for localisation purposes. From a research point of view, the objective is to completely integrate the UAS in the control system, such that the UAS will also assist in mapping the (traversability of the) environment, helping the UGV to navigate.

ACKNOWLEDGMENT

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement numbers 285417 (ICARUS) and 284747 (TIRAMISU).

REFERENCES

[1] G. De Cubber, D. Doroftei, H. Sahli and Y. Baudoin, "Outdoor Terrain Traversability Analysis for Robot Navigation using a Time-Of-Flight Camera", RGB-D Workshop on 3D Perception in Robotics, 2011.
[2] D. Doroftei, G. De Cubber and K. Chintanami, "Towards collaborative human and robotic rescue workers", Human Friendly Robotics, 2012.
[3] K. Richardson, "Rescue robots - where were they in the Japanese quake relief efforts?", Engineering and Technology Magazine, vol. 6, no. 4, 2011.
[4] G. De Cubber, "Multimodal Terrain Analysis for an All-terrain Crisis Management Robot", in Proc. IARP HUDEM Workshop on Humanitarian Demining, 2011.
[5] R. Labayrade and D. Aubert, "In-vehicle obstacles detection and characterization by stereovision", Int. Workshop on In-Vehicle Cognitive Comp. Vision Systems, 2003.
[6] telerob GmbH, "EOD Robot tEODor - Product Description", http://www.xtek.net/assets/DOL/PDF/302601.pdf.