Australasian Conference on Robotics and Automation (ACRA), December 2-4, 2009, Sydney, Australia


Semi-Autonomous Robots for RoboCupRescue

    Raymond Sheh, Adam Milstein, Matthew McGill,

    Rudino Salleh, Bernhard Hengst, Claude Sammut

ARC Centre of Excellence for Autonomous Systems
School of Computer Science & Engineering
University of New South Wales, Sydney NSW 2052, Australia

    {rsheh,amilstein,mmcgill,rudinos,bernhardh,claude}@cse.unsw.edu.au

    Abstract

Search and rescue robots are intended to work as aids to human rescuers. One of the research challenges is how to combine autonomous operation with operator control. This paper presents a single software system which independently operates several physically distinct robots. One operator can coordinate several robots by switching their individual mode between autonomous and remote operation. For simple regions, a robot may travel autonomously, while in more complex areas the operator may assert control. The mixture of autonomous and remote operation also allows the robot to continue to operate when wireless communication to the operator is lost.

    1 Introduction

Urban search and rescue is a significant application area for robotics, both in terms of its social importance and the research challenges that this domain presents. These robots are intended to work as aids to human rescuers; however, all currently deployed robots are tele-operated. This means that one operator is required for each robot. One of the research challenges for search and rescue robots is how to combine autonomous operation with operator control, freeing the operator from close supervision of an individual robot and allowing a single operator to supervise several robots. This paper presents a system in which a team of physically different robots run the same software that enables both autonomous and remote operation. A single operator can coordinate several robots by switching modes. When a robot is in a relatively benign part of the environment, the operator may let the robot run on its own but, if it reaches an area that is beyond its autonomous capabilities, the operator can take over. As well as making an operator more efficient, the mixture of autonomous and remote operation also allows the robot to continue to operate when wireless communication to the operator is lost. This happens frequently in disaster sites because of interference from building structures and from heavy radio traffic amongst emergency services.

The US National Institute of Standards and Technology has designed standardised simulated disaster sites to test the capabilities of robots and to certify them for operation. These standards are also used as the basis for the RoboCup Rescue competition. Rescue robot teams have the task of exploring a simulated disaster site and mapping it while searching for victims. The site contains a variety of elements to test the robots' locomotion, sensing and decision making (either onboard or on the part of the operator). The competition consists of an "open" challenge in which teams may field either tele-operated or autonomous robots. In addition, there are challenges specific to autonomous robots, mobility and manipulation. Our team won the award for best autonomous robot and won an award for innovation in operator interface because of our ability to combine autonomous and remote operation. We also came second in the mobility challenge. In the following sections, we describe the hardware and software systems that were deployed in RoboCup 2009.

    2 Hardware

Our team consisted of two mobile robots: a highly mobile tracked robot called Negotiator and a four-wheel-drive robot called Emu.

These robots carried identical sensor and computation payloads and their mechanical differences, whilst complementary in abilities, were abstracted away in our software infrastructure to present a unified interface, described in Section 3. The system components are shown in Figures 1 and 2.

    2.1 Sensors

Our sensor package is an evolution of the sensor packages that we have fielded in the competition since 2005 [Kadous et al., 2005; 2006a; Sheh et al., 2007]. Our focus is on rich sensing and mapping for good situational awareness, the ability to flexibly direct the perception of the victim identification sensors and the need to map and drive autonomously in an intrinsically 3D environment.

Figure 1: Emu (left) and Negotiator (right), the two robots forming our 2009 RoboCup Rescue Robot League entry, plus an enlargement of the sensor head. Major sensors and system components are shown (sensor head, base joint, onboard computer, auto-levelled laser rangefinder, heading-attitude sensors, movable flippers, wide and narrow angle cameras, tilt-pan unit, range imager and thermal camera).

Figure 2: Schematic diagram of the robot's systems with data connections shown. Whilst the robot mobility platforms are different, the payload and sensor heads are identical.

The sensor head contains a CSEM/MESA SwissRanger SR-3100 time-of-flight LIDAR range imager for 3D mapping, an Indigo Omega/FLIR ThermoVision A10 thermal camera for victim identification, a 190° wide angle camera for driving and situational awareness and a 10° narrow angle camera for identifying distant objects and fine features such as victim tags. The near-IR floodlights on the range imager also provide illumination for the wide and narrow angle cameras.

The sensor head is mounted on an arm that consists of a high torque joint at the base and two Dynamixel RX-64 "smart" servo motors that form a tilt-pan unit. The tilt and base joints are linked in software to keep the tilt joint's frame of reference level. This arrangement of joints is simple and yet allows for considerable flexibility in sensor placement. The base joints, arm and sensor head housings are built to be strong enough to take the robot's weight, so the arm can be used to improve mobility by shifting the robot's center of mass and by pushing off the floor and walls.

An XSens MTi heading-attitude sensor and a Hokuyo URG-04LX laser rangefinder (LRF) are fitted to the robot base and used for mapping and autonomy. The heading-attitude sensor uses an accelerometer and a magnetometer, combined with an embedded data fusion and Kalman filtering processor, to provide a quaternion that represents the rotation of the robot relative to the earth. This sensor is relatively immune to the robot's translational and rotational accelerations. A second unit can be mounted on top of the sensor head to reduce interference from underground cables.

The RoboCup Rescue arenas consist of large areas of pitch and roll ramps, as shown in Figure 3. An LRF mounted rigidly to the robot tilts and rolls as the robot moves over the ramps, causing its scan plane to intersect with the floor or point up over walls. As a result, there can be "phantom" and "missing" observations. To avoid this, the LRF is mounted on an auto-levelling platform consisting of two Dynamixel RX-28 "smart" servo motors. These are controlled in software to counteract the pitch and roll of the robot as measured by the heading-attitude sensor on the robot base. The stability of the measurements from the heading-attitude sensor, combined with the accuracy of the servos, resulted in very good performance, with reaction times significantly less than one second and no oscillations.
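A minimal sketch of the levelling loop described above (not the team's code): it assumes the heading-attitude sensor delivers a unit quaternion and that each levelling servo is wrapped in a hypothetical object with a set_angle() method. The servos are simply commanded to the negatives of the measured roll and pitch, clipped to an assumed mechanical range.

import math

def quaternion_to_roll_pitch(w, x, y, z):
    # Standard conversion from a unit quaternion to roll and pitch (radians).
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    return roll, pitch

def level_lrf(imu_quaternion, roll_servo, pitch_servo, limit=math.radians(40)):
    # Drive the two levelling servos to cancel the measured base roll and pitch.
    roll, pitch = quaternion_to_roll_pitch(*imu_quaternion)
    roll_servo.set_angle(max(-limit, min(limit, -roll)))    # counter-rotate, clipped
    pitch_servo.set_angle(max(-limit, min(limit, -pitch)))  # to the servo range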

Figure 3: 15° pitch and roll ramps, present through much of the arena. Note that they do not always match up.

Figure 4: Emu running an autonomous left-wall-following policy in the Radio Dropout Zone. Note the debris-strewn floor.

The auto-leveller does not rotate about the LRF's optical centre and thus induces a shift as the robot pitches and rolls. Our auto-levelling software provides the 3D position and rotation of the LRF's optical centre relative to the robot's local co-ordinate system in order to correct for this. With these corrections, 3D-aware algorithms can use the LRF even when the auto-leveller is not functional, such as when the platform hits its limits. Although not implemented in time for the competition, this also allows the auto-levelling mechanism to deliberately tilt the LRF and produce accurate 3D point clouds, such as in [Sheh et al., 2006], despite not rotating the LRF about its optical centre.
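As an illustration of this correction, the sketch below computes the pose of the LRF optical centre in the robot frame under an assumed geometry: the platform pivots about a fixed point in the robot frame and the optical centre sits at a fixed lever-arm offset from that pivot when the platform is level. The names and the roll-then-pitch convention are ours, not the team's.

import numpy as np

def rotation_rp(roll, pitch):
    # Rotation matrix for a roll about x followed by a pitch about y.
    cr, sr, cp, sp = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return ry @ rx

def lrf_optical_centre(pivot, lever, platform_roll, platform_pitch):
    # 3D position and rotation of the LRF optical centre in the robot frame.
    R = rotation_rp(platform_roll, platform_pitch)
    position = np.asarray(pivot) + R @ np.asarray(lever)
    return position, R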

The LRF is mounted high to reduce the probability that ramps would be detected as obstacles. This leaves it vulnerable to damage if the robot rolls over. Our software detects when the robot tilts dangerously and moves the LRF to a stowed position to protect it. This is reflected in the corrections above so, if necessary, the readings from the LRF can be discarded. Once the heading-attitude sensor detects that the robot is close to level again, the platform unstows and resumes operation.

    2.2 Computation and Communication

Onboard computation on both robots is provided by identically configured Sony Vaio UX92 handheld PCs, which have 1.2GHz Intel Core 2 Solo processors, 1GB of RAM and a solid-state hard disk. These are mounted in their docking cradles to provide access to the firewire ports needed to communicate with the thermal and narrow angle cameras. The internal 802.11a wireless LAN adapters and antennas are used for communication.

The operator control units (OCUs) are standard laptop PCs with high resolution (1920x1200) screens and dedicated graphics adapters to run the operator interface. An 802.11a wireless access point is used for communication between the four computers.

    2.3 Power and Data

The primary communication system for the various components is USB. Ordinary consumer USB 2.0 hubs were found to have insufficient bandwidth and stability, so Digi HubPort industrial USB hubs are used to connect the various sensors and devices. The thermal and narrow angle cameras are connected to the onboard computer via a firewire hub that allows for power injection. The mobility platforms, joints and heading-attitude sensors communicate via FTDI USB-to-serial interfaces to ensure a predictable ordering of the serial devices from the onboard computer's perspective.

The sensor and computation package requires an 18V-24V power supply. This directly powers the USB hub and the servo motors for the arm, pan-tilt unit and auto-levellers. It is regulated down for the firewire bus and range imager. 5V for the LRFs is drawn from the USB hub's internal regulator.

    2.4 Mobility platforms

    Negotiator

Negotiator is based on a RoboticFX Negotiator robot base [RoboticFX, 2007]. It has a pair of main tracks and a pair of linked, tracked flippers on the front that may be continuously rotated forward or backward. The flippers and low center of gravity allow it to overcome a wide variety of obstacles including human-sized stairs and stepfields. It also has a large payload area and many mounting points for additional equipment.

The robot base is controlled via an RS-232 port on its expansion connector. Although it is also possible to access the robot's onboard nickel-metal hydride batteries through this connector, we elected to use an additional high-capacity lithium polymer battery to power the sensors and arm. This provides an active runtime of over one hour. A custom worm drive, combined with a potentiometer and an embedded motor controller, is used for the arm joint in order to minimise weight.

    Emu

Emu is based on a Volksbot RT mobile base [Volksbot, 2007]. This is a four-wheel-drive, skid-steering robot with two high torque motors that drive the four pneumatic wheels via a chain system. It has absolute speed control which, along with its high 70mm ground clearance, makes its behaviour very predictable over much of the Rescue arena, including the pitch/roll ramps (Figure 3) and the debris-strewn floors in the radio dropout zone (Figure 4). Tracked vehicles tended to slide on the sloping floors and got "railroaded" by debris on the floor.

The motors are controlled via commands sent through an RS-232 connection. The two on-board lead-acid batteries are tapped to power the sensors and arm, providing an active runtime in excess of 3 hours. A powerful 24V Amtec Powercube harmonic drive servo is used for the arm joint.

    3 Software Infrastructure

    Figure 5: Overview of Software Infrastructure

Due to the distributed nature of a rescue scenario, with an operator controlling robots remotely and the robots operating independently of one another, a networked client/server architecture was used, with a server (RescueServ) running on the robot and a client (RescueGUI) running on the OCU.

For the sake of simplicity, all of the computers used in the competition ran the same operating system, Ubuntu 8.10 32-bit. The exception was one of the OCUs, which ran Ubuntu 9.04 64-bit to take advantage of its higher performance.

While the robots used in the competition were dissimilar, the software that was installed on the machines was virtually identical. This meant that if one of the onboard computers broke, it could be replaced with the backup or, if needed, the computer from the other robot. The robots ran the same software with their own individual configuration file.

While it was possible to operate multiple robots using a single OCU, each robot was given its own OCU so that the operator could operate both robots simultaneously.

    3.1 Robot

The server running on the robot was divided into two parts: Player [Gerkey et al., 2001] and RescueServ.

    Player

Despite the mechanical differences between the two robots, we were able to treat them as identical from a software interface perspective by using the Player robot middleware [Gerkey et al., 2001]. Drivers had also already been written for most of the actuators and sensors, and the structure of the Player config file allowed us to easily add and remove sensors as required during the course of preparations and the competition. We also made use of a modified version of the MBICP position tracking driver present in Player (see Section 4.1).

    RescueServ

Most of the processing done on the robot was done in RescueServ, which was responsible for streaming the images captured from the robot's various cameras to the client. It also managed which tasks the robot was currently performing and maintained a connection to Player so that tasks could receive data from and send commands to the robot's hardware.

Tasks in RescueServ were separated into two types: foreground and background. Each background task ran in its own thread and any number of background tasks could run concurrently. However, only one foreground task could run at a time. If multiple foreground tasks needed to be run, they were queued and run one at a time. It was possible for background tasks to be notified when foreground tasks started and finished.

In the competition two main background tasks were used: position tracking and autonomy. Position tracking kept track of the robot's position and fused the sensor data for use in the map; see Section 4.1 for more detail. The autonomy task drove the robot around the arena and is described in Section 5. It kept track of foreground tasks and slept while the robot was tele-operated or while a foreground task was running.

A foreground task was used to collect a victim snap, which fused all current sensor data of the victim that was currently in front of the sensors and sent it to the map.
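The foreground/background task scheme can be pictured with a short sketch; the class and method names below are illustrative only and are not RescueServ's actual API. Background tasks each run in their own thread and may ask to be notified of foreground activity, while foreground tasks are queued and executed strictly one at a time.

import queue
import threading

class TaskManager:
    def __init__(self):
        self._foreground = queue.Queue()   # foreground tasks run one at a time
        self._listeners = []               # background tasks wanting notifications
        threading.Thread(target=self._run_foreground, daemon=True).start()

    def add_background(self, task):
        # Background tasks each get their own thread and run concurrently.
        if hasattr(task, "on_foreground_started") and hasattr(task, "on_foreground_finished"):
            self._listeners.append(task)
        threading.Thread(target=task.run, daemon=True).start()

    def queue_foreground(self, task):
        self._foreground.put(task)

    def _run_foreground(self):
        while True:
            task = self._foreground.get()  # blocks until a task is queued
            for listener in self._listeners:
                listener.on_foreground_started(task)
            task.run()                     # only one foreground task at a time
            for listener in self._listeners:
                listener.on_foreground_finished(task)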

    Australasian Conference on Robotics and Automation (ACRA), December 2-4, 2009, Sydney, Australia

Two other background tasks, to perform victim identification and autonomous search, were developed but were not used, as they proved unreliable in the early rounds of the competition.

    3.2 Networking

The OCU and robot communicated using three separate communication channels. The first was through Player, which gave low latency access to the robot's motors and servos. The second was the compressed video streams from the robot's cameras. By using a highly loss tolerant codec and UDP as the underlying transport, we could maintain the video stream despite network instability. The third channel was an Ice [Ice, 2009] connection over which all other communication between robot and OCU took place.
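The paper does not give the video wire format; the sketch below only illustrates why UDP suits this channel, assuming each compressed frame fits in a single datagram carrying a sequence number (large frames would need fragmentation in practice). A lost or late datagram costs one frame rather than stalling the stream, matching the loss-tolerant behaviour described above.

import socket
import struct

def send_frame(sock, address, seq, jpeg_bytes):
    # One frame per datagram, prefixed with a 32-bit sequence number.
    sock.sendto(struct.pack("!I", seq) + jpeg_bytes, address)

def receive_frames(port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    last_seq = -1
    while True:
        packet, _ = sock.recvfrom(65535)
        seq = struct.unpack("!I", packet[:4])[0]
        if seq <= last_seq:
            continue          # drop late or duplicate frames rather than stall
        last_seq = seq
        yield packet[4:]      # compressed frame payload for display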

    3.3 GUI

    Figure 6: Negotiator's User Interface

The user interface the operator used at RoboCup Rescue 2009 (Figure 6) was divided into two sections. The map panel, on the left, showed the map and victims, whilst the robot panel, on the right, showed the state of the robot being controlled.

    Robot

The user interface for operating the robots was based on our earlier interfaces, which were inspired by computer games [Kadous et al., 2006b; 2006a].

The main body of the robot panel displayed the current view from the robot's wide angle camera. Superimposed on this at the top of the screen were the robot's current orientation from the heading-attitude sensor, driving speed and battery voltage. The robot's forward speed could be adjusted using the scroll wheel on the mouse. Holding the shift key while moving the scroll wheel changed the turn rate.

Overlaid on the wide angle camera were a crosshair showing the middle of the panel and a number of rectangles showing the approximate fields of view of the range imager and narrow angle camera.

It was possible to bring up the view from the narrow angle camera or to overlay the latest reading from the thermal camera. The range of temperatures in the thermal image that were overlaid could be adjusted using a few key strokes.

Along the bottom of the robot panel were the current configuration of the robot's arm and flippers and preset configurations that could be selected using the numerical keys 0-9. As these take time to move, both the actual and desired configurations are displayed in different colours.

Clicking on the robot panel with the mouse controlled the position of the tilt-pan unit. Right clicking returned the head to a tilt-pan of (0, 0). Left clicking moved the head to look at the point in the wide angle camera that was clicked.

Autonomous driving for the robot was enabled using a few keys that set the type of algorithm to employ, with the spacebar designated as the E-stop. The 'V' key was used to mark a victim and to initiate the victim snap foreground task on the robot.

    Map

The OCU was responsible for maintaining the map (see Section 4) and displaying it to the user. Visible in the map were the walls and obstacles detected, the robots' paths and current location and any victims that had been marked. The display of these items could be toggled on and off, and the map could be exported to GeoTIFF format. The map interface also allowed the map to be saved and maps from previous runs to be loaded.

In a second tab in the map section was a list of all of the victims in the current map. This also showed the images of the victims that were captured by the various cameras onboard the robot that located them.

    4 Mapping

Creating a map while simultaneously localising a robot within that map is a class of problem called Simultaneous Localisation and Mapping (SLAM) [Thrun et al., 2005]. Fundamentally, the SLAM problem is represented as estimating the pose and map given the sensor readings and robot odometry:

    p(x_t, m | z^t, u^t)                                    (1)

where x_t is the robot's position at time t, m is the map, z_t is the robot's sensor reading at time t and u_t is the robot's odometry reading at time t. As a standard notation we use a superscript to denote a series, for example x^t = {x_1, ..., x_t}. Equation 1 is called the online SLAM problem because it estimates only the robot's current pose. However, we are often interested in updating our knowledge of the robot's past poses as the map becomes more accurate. This problem is called the full SLAM problem and is represented as:


    p(x^t, m | z^t, u^t)                                    (2)

Obviously, after completion this gives a more accurate path for the robot; however, it usually involves too much processing to be calculated in real time.

    4.1 Position Tracking

The SLAM problem definition requires an estimate of the robot's odometry, usually obtained from wheel encoders. In the rescue environment, the uneven surface severely reduces the effectiveness of wheel encoders, especially for tracked robots, which experience much more slippage than wheeled robots in these environments. In fact, Negotiator did not even have encoders during the competition.

We use an implementation of the iterative closest point algorithm called MBICP [Minguez et al., 2005] to align successive scans from the auto-levelled LRF. The offset between each scan is used as the odometry data, instead of wheel encoder values.

ICP works by taking two range scans and, for each point in the new scan, finding the closest point in the previous scan. The new scan is then adjusted by the average vector between its points and the previous points. This process is repeated until the size of the motion is reduced below a threshold. MBICP is an implementation of ICP which defines the closest point using a special distance metric, rather than simple Euclidean distance. The metric used is the rigid body transformation necessary to align the two points:

    ||q|| = sqrt(x^2 + y^2 + L^2 θ^2)                       (3)

L is a parameter which determines the relative weight of the linear and rotational components. The benefit of MBICP is that it determines the robot's translation (x, y) and rotation (θ) in a single algorithm.
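For illustration, a translation-only version of the ICP loop described above might look as follows. This is a teaching sketch, not MBICP itself: MBICP additionally estimates rotation and replaces the Euclidean nearest-neighbour test with the metric of Equation 3.

import numpy as np

def icp_translation(prev_scan, new_scan, threshold=1e-4, max_iters=50):
    # prev_scan, new_scan: (N, 2) arrays of 2D scan points. Returns the offset.
    new_scan = np.array(new_scan, dtype=float)
    prev_scan = np.asarray(prev_scan, dtype=float)
    total_offset = np.zeros(2)
    for _ in range(max_iters):
        # For each point in the new scan, find the closest point in the previous scan.
        dists = np.linalg.norm(new_scan[:, None, :] - prev_scan[None, :, :], axis=2)
        matches = prev_scan[np.argmin(dists, axis=1)]
        # Shift the new scan by the average vector between matched point pairs.
        offset = (matches - new_scan).mean(axis=0)
        new_scan += offset
        total_offset += offset
        if np.linalg.norm(offset) < threshold:
            break
    return total_offset   # used in place of wheel odometry (translation only here)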

    4.2 SLAM Solution

To produce the 2.5D map and position, we elected to use an implementation of the FastSLAM algorithm for occupancy grid maps [Montemerlo et al., 2002; Thrun et al., 2005]. Using an occupancy grid map makes it unnecessary to extract and identify specific features from the raw sensor readings, removing a source of error and reducing the processing required. Also, FastSLAM is one of the few real-time algorithms which provides a solution to the full SLAM problem, correcting the past poses of the robot as new data arrives. Another benefit is that the run-time does not increase with the size of the map, allowing the algorithm to be used in large environments. Finally, because the FastSLAM algorithm is independent of the underlying motion and sensor models, the same mapping implementation can handle multiple different types of sensors for position tracking and map building.

Algorithm 1 Occupancy Grid FastSLAM

For each particle k in the sample set:

1. Generate a new particle according to the motion model and the odometry: x_t^[k] ~ p(x_t | u_t, x_{t-1}^[k])

2. Set the weight of the particle by comparing the actual sensor readings to the map: w_t^[k] = p(z_t | x_t^[k], m_{t-1}^[k])

3. Update the map for the particle according to the sensor readings: m_t^[k] = update(m_{t-1}^[k], x_t^[k], z_t)

The final step is to resample these weighted particles by randomly selecting K new particles from the set, with replacement, according to their weights from step 2.

The basis of FastSLAM is to factor Equation 2 into a position update and a map update. The robot's state is then estimated by a set of samples, as in Monte Carlo Localisation (MCL) [Dellaert et al., 1999; Thrun et al., 2005], with the addition of a map to each particle and a map update step (Algorithm 1).

The effect of resampling is to replace the weight of each particle with the number of particles at that location. Thus, on the next update, when the particles are moved probabilistically, there should be enough samples at the correct location to track the robot's motion.
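A schematic version of this update, following Algorithm 1, is sketched below. The motion model, measurement likelihood and map update are passed in as placeholder functions standing in for models the paper does not spell out, and the resampling uses simple weighted sampling with replacement.

import random

def fastslam_update(particles, u_t, z_t,
                    sample_motion, measurement_likelihood, update_map):
    # particles: list of (pose, map) pairs. Returns the resampled particle set.
    weights, updated = [], []
    for pose, grid in particles:
        new_pose = sample_motion(pose, u_t)                            # step 1: motion model
        weights.append(measurement_likelihood(z_t, new_pose, grid))    # step 2: weight
        updated.append((new_pose, update_map(grid, new_pose, z_t)))    # step 3: map update
    # Final step: resample K particles with replacement according to the weights.
    return random.choices(updated, weights=weights, k=len(particles))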

    Map Update

Each particle's map is updated by raytracing along the path of each sensor ray and updating the cells as they are encountered. Although the rescue environment is three dimensional, we simplify the map to a two dimensional occupancy grid height map in order to reduce space and processing requirements. Thus, when a sensor ray detects an occupied cell, the cell's occupancy probability up to that height is increased. In contrast, when a ray passes through a cell, the occupancy probability is decreased only if the ray is below the occupied height of the cell.

The particle weight is calculated in a similar manner. Each sensor ray is compared to the distance to the nearest wall along the direction of that ray in the particle's map. A probability is generated from these two values using a probabilistic model of the sensor.
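A simplified sketch of this height-aware update is given below. It assumes a sparse grid keyed by cell indices, a constant ray height (appropriate for the planar LRF), and illustrative log-odds increments; it is not the team's implementation.

import math

CELL = 0.05              # cell size in metres (illustrative)
HIT, MISS = 0.9, -0.4    # log-odds increments (illustrative)

def update_ray(grid, heights, x0, y0, ray_length, ray_bearing, ray_height, hit):
    # Trace one sensor ray from (x0, y0) and update the cells it crosses.
    steps = int(ray_length / CELL)
    for i in range(steps + 1):
        d = i * CELL
        cell = (int((x0 + d * math.cos(ray_bearing)) / CELL),
                int((y0 + d * math.sin(ray_bearing)) / CELL))
        if i == steps and hit:
            # The ray ended on an obstacle: raise occupancy and record its height.
            grid[cell] = grid.get(cell, 0.0) + HIT
            heights[cell] = max(heights.get(cell, 0.0), ray_height)
        elif ray_height < heights.get(cell, float("inf")):
            # The ray passed below the cell's recorded occupied height: free evidence.
            grid[cell] = grid.get(cell, 0.0) + MISS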

    Sensors

Although we use only an auto-levelled planar LRF, as detailed in Section 4.1, to perform the position tracking, the FastSLAM algorithm can make use of any occupancy sensor. Our implementation uses the data from a range imager as well as the LRF in order to determine the height data of the map cells. Since the LRF is a levelled, planar sensor, it could not be used to determine the height of the robot, and also could not observe the environment except at the robot's fixed height. Using the range imager allows us to observe the height of the walls and so generate a more accurate map which, in turn, produces a more accurate localisation.

FastSLAM estimated the robot's position in only (x, y, θ), as the roll and pitch angles reported from the heading-attitude sensor were found to be highly accurate. The runtime of FastSLAM increases exponentially with the number of state variables, so removing these angles provides a significant performance saving. Additionally, the planar LRF could not provide roll and pitch estimates anyway.

Finally, we have no motion data for the vertical z axis and so we assume that the robot does not move vertically. This is our most dubious assumption, especially in the rescue environment; however, in practice there is usually not enough vertical change to cause a localisation failure. Developing a system to estimate the vertical change in the robot's position would be a significant improvement to our system.

    5 Autonomy

Our focus is on autonomy that could assist the single operator by performing tasks such as autonomously driving ahead, avoiding obstacles and following passageways whilst the operator could be busy with another robot. The goal is to save time by keeping both robots advancing towards their goals as much as possible, rather than always having one robot stationary, as would be the case for purely tele-operated robots. Unlike fully autonomous exploration, assisted autonomy assumes the presence of an operator who can provide high level commands. We chose to implement a controller based on wall following, where the high level commands are to follow the left wall, to follow the right wall, and to stop and let the operator take over.

In addition to having an easy-to-anticipate behaviour, wall following can be implemented reactively, without relying on an existing map, so transient sensing anomalies, such as a referee walking in front of the robot, are less likely to confuse the policy. The algorithm is also immune to failures in the position tracker, which are a real risk in this environment (Sections 4.1 and 6.3). We also implemented a second policy, called bump-and-go; however, this was not used in the competition.

    obstacle detection sensors are placed on a local 2D oc-cupancy grid centred on the robot (Figure 7(a,b)). Theuse of the auto-levelled LRF allows us to abstract awaythe fact that the environment is 3D, albeit with increasedvariability in the response of the robot as it interacts withthe real 3D terrain. This occupancy grid must be largeenough to encompass obstacles that the robot should re-act to in this time step. As the centres of the grid cellsare used as goal points for the robot, the resolution gov-

    a b c

    d e f

    Figure 7: Several stages in the �almost-Voronoi� auton-omy controller. (a) shows the robot with the actual po-sition of the surrounding walls (grey) and the range ofthe LRF (red). The path that the robot should follow ina left-wall-following policy is marked by the black arrow.Note that the gap in the wall on the left is too narrowfor the robot to safely �t through. See text for detailson parts (b) to (f).

    erns the smoothness of the resulting policy. We chose3m and 0.03m for the dimensions of the occupancy gridand the size of each cell, respectively. In competition,the obstacles were detected using the auto-levelled LRF.We have also implemented obstacle detection using therange camera, which allows the robot to detect obstaclesthat might be missed by the LRF, such as small wallsand negative obstacles.

The obstacles are then expanded by a distance equal to the safety distance of the robot. In competition configuration, this distance was a little more than the radius of the robot's bounding circle and represents the closest that a robot should attempt to drive to a wall. This process leaves the areas where it is safe for the center of the robot to move to and closes off passageways that are physically too narrow for the robot to drive through (Figure 7(c)). This occupancy grid could be used to directly build a wall following policy. However, doing so would result in a policy that always drove as close to walls as possible. This is necessary for squeezing through gaps but undesirable in more open areas of the arena, where the robot can drive more quickly if it keeps a greater distance from walls. This would also cause the robot to follow every indentation in the walls, resulting in unnecessary manoeuvring.
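This first expansion step amounts to a plain obstacle inflation over the occupancy grid; in the sketch below the 0.03 m cell size is taken from the text, while the 0.35 m safety distance is an assumed placeholder rather than the competition value.

import numpy as np

def inflate(grid, cell_size=0.03, safety_distance=0.35):
    # Return a copy of the boolean occupancy grid with every obstacle grown by
    # the safety distance, so remaining free cells are safe goals for the robot's centre.
    radius = int(round(safety_distance / cell_size))
    inflated = grid.copy()
    ys, xs = np.nonzero(grid)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(grid.shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(grid.shape[1], x + radius + 1)
        yy, xx = np.ogrid[y0:y1, x0:x1]
        inflated[y0:y1, x0:x1] |= (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
    return inflated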

To solve this problem, the obstacles in the occupancy grid are further expanded, but with additional logic to avoid closing off existing gaps that the robot might be able to squeeze through. This process is similar to growing a Voronoi graph but is stopped after a predefined distance, equal to the extra clearance that the robot should give walls when possible. This was generally around 1/2 to 1/3 of the safety distance (Figure 7(d)). We do not use a full Voronoi graph as this results in faraway objects having an undue influence on the robot's path, resulting in hard-to-anticipate behaviour. A final step smooths out "stubs" formed by this "almost-Voronoi" process (Figure 7(e)).

The resulting occupancy grid is used to determine an appropriate steering angle and drive speed. The grid is scanned in a circle around the robot and the angles at which the scan observed transitions from "occupied" to "clear" are used to determine the desired goal angle. The left-wall-following policy, shown in Figure 7(f), starts its scan at 90° to the left and proceeds clockwise. The right-wall-following policy starts from 90° to the right and proceeds counterclockwise. The choice of the radius of this circle is important. Too small and the scan may not see a transition at all; even if it does, the robot may not drive smoothly, as small errors in the robot's position will result in large changes in goal angle. Too far and the robot may attempt to cut corners. This radius was chosen to be roughly the safety distance. The act of rotating often causes objects to appear and disappear from the LRF's view due to occlusions and sensing distance. If the first transition is always chosen, the robot will "thrash" between two angles. To prevent this, the angle closest to the last goal angle is reported as the new goal angle.

The two-stage process in which the obstacles in the occupancy grid are expanded also provides a natural way of determining the drive speed. If the transition point picked in the scan process is adjacent to a cell that is the result of the initial safety radius stage (Figure 7(c)), that goal point is in an area where the robot has little clearance and thus the robot should drive slowly. In contrast, if the transition is adjacent to a cell that is the result of the extra expansion stage (Figure 7(d)), it can be assumed that the goal point is far from obstacles and thus the robot can be driven more quickly. Likewise, the robot should drive slowly for large steering angles, especially since turning on the uneven and loose surfaces can be unpredictable. Additional heuristics are also used for drive speed, including subsequent circular scans to determine if there is a turn in the distance and adjusting drive speed accordingly.
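The circular scan and the anti-thrashing rule can be sketched as follows; the grid query function, the number of scan steps and the fallback when no transition is seen are our assumptions rather than details from the paper.

import math

def goal_angle(is_occupied, scan_radius, last_goal, left_wall=True, steps=360):
    # is_occupied(x, y) -> bool queries the expanded occupancy grid (robot at origin).
    start = math.pi / 2 if left_wall else -math.pi / 2
    step = -2 * math.pi / steps if left_wall else 2 * math.pi / steps   # cw vs ccw
    transitions = []
    prev_occupied = None
    for i in range(steps + 1):
        angle = start + i * step
        occupied = is_occupied(scan_radius * math.cos(angle),
                               scan_radius * math.sin(angle))
        if prev_occupied and not occupied:
            transitions.append(angle)        # occupied -> clear crossing
        prev_occupied = occupied
    if not transitions:
        return last_goal                     # no opening seen; keep the old heading
    # Keep the crossing nearest the previous goal so the robot does not thrash.
    return min(transitions, key=lambda a: abs(a - last_goal))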

    6 Results

The robots were evaluated in the context of participation in the 2009 RoboCup Rescue Robot League competition. We won the Best-in-Class Autonomy competition and the award for Innovative Operator Interface, and placed second in the Best-in-Class Mobility competition. The following subsections describe our evaluation of the robots and infrastructure in further detail.

    6.1 Hardware

In general the hardware performed well. The ability to move the robots' centres of gravity by moving their arms resulted in some very impressive manoeuvres, including the ability to drive Emu up 45° ramps, something that few observers believed was possible.

The Best-in-Class Mobility award focused on the ability of the robot to traverse unstructured terrain whilst racing against the clock. Mapping and victim identification were not required, so Negotiator was stripped of its sensor and computation payload and controlled via its original OCU. Performance improved significantly due to the decreased weight and the responsiveness of the controller, although situational awareness was limited, with a single wide angle camera mounted low on the base of the robot as the only sensor.

Negotiator's performance during the main competition, however, was found to be severely limited by the fact that Negotiator's motors are effectively torque controlled, rather than speed controlled. Combined with the round-trip delays between RescueServ and RescueGUI of over 1 second at times, this made traversing the pitch and roll ramps very difficult. The speed of the robot became heavily dependent on the slope of the terrain and the robot lost traction regularly. Although the original OCU, which was used to control the robot during the final rounds and the Best-in-Class Mobility run, was also limited to torque control, there were virtually no delays and thus it was possible for the operator to react and compensate in real time.

    6.2 Software Infrastructure

The software infrastructure proved to be very robust and flexible and was one of the highlights of our entry. The ability to use the same software on both robots and switch between them in both tele-operative and assisted-autonomous modes proved valuable and allowed us to take advantage of the introduction of the radio dropout zone. The fact that only the configuration files changed between the two robots was also valuable: when the length of the arm or the position of the laser on Emu changed, for instance, a single change in the configuration file was able to update all the infrastructure, from SLAM to the renderings of the robot in the GUI.

The shortcomings of the infrastructure were not evident during the operation of the robots in the competition, but rather during the development of the innovative features that were deployed. The more significant problems were the lack of integrated logging and replay of data, static connections between the robot and OCU, and no inherent method for communicating between the OCU and background tasks.


User Interface

Although the user interface provided good situational awareness, considerable delays were experienced, primarily in the video feed from the robot. This was mainly due to queues within the network infrastructure and delays in image compression. Emu moves predictably and, often, autonomously, and was thus less affected by the delays. However, Negotiator was much harder to control with these delays and, for much of the competition, was unable to reliably traverse the pitch-and-roll ramps at an angle, as it was very difficult to compensate for the robot spinning and sliding down the ramps.

During the final few runs, we reverted to using the original OCU that shipped with the robot, which uses an analog video transmitter and thus has no perceptible delay. Our sensor and computation package was used to collect mapping data and victim information. This significantly improved performance and reliability and more than doubled the number of victims we were able to identify with this robot.

We were presented with the Innovative User Interfaces award for our ability to easily run both robots at the same time with only one operator. In particular, our ability to switch between tele-operation and assisted autonomy in an intuitive fashion was recognised.

    6.3 Mapping

Figure 8: The map created by Emu during the Best-in-Class Autonomy run; each shaded square represents 1m. The 45° mismatch in the centre of the map is not an error: the upper and lower sections of the maze were connected by a region that was set at 45° to the rest of the maze. The robot's start point was at the yellow arrow at the left and it was executing a right-wall-following policy. The roughness of the trace was due to uneven flooring and rubble tipping the robot from side to side.

The mapping algorithm was effective, producing a recognizable map of the environment when projected into a two dimensional GeoTIFF. FastSLAM has the feature of usually either converging to a correct map or failing catastrophically. Our mapping failures were generally caused by the problem that the position tracking was based on the same sensor as the map, as described in Section 4.1.

Position tracking tended to fail in situations where rapid motion of the robot was accompanied by severe sensor errors. Such errors were usually caused by the robot changing its vertical position so that low obstacles, such as raised sections of the floor, were no longer observed. These problems were exacerbated by the limited processing power available to the processor-intensive MBICP algorithm on-board the robot, which reduced the possible frequency of position tracking updates below the rate of the sensor updates. Aside from these specific situations, the maps were an accurate representation of the environment, especially on Emu, which tended to have a smoother motion.

Figure 8 shows the map from the autonomous challenge. This challenge was particularly conducive to the mapping algorithms as a result of the removal of the arm, which allowed an unobstructed sensor view, and the autonomous algorithm's uninterrupted control, which provided smoother driving than the operator.

    6.4 Autonomy

The introduction of the Radio Dropout Zone in this year's competition allowed us to showcase the assisted autonomy features of our robot. A representative picture of this area is shown in Figure 4. This zone consisted of a winding passageway with short dead-end branches and simulated an area through which there was no radio communication. The robot entered at one end and had to reach the other end without operator intervention. At the other end, the operator could resume tele-operation and score points for the victim. If the robot was able to return to the start autonomously, this score was doubled. Using our wall following controller, we were able to complete this zone in every run, doubling our victim score each time except for one case where Emu completed the zone less than one minute late.

In addition to being non-straight, the Radio Dropout Zone was also non-flat, with the floor strewn with fixed and movable debris, the difficulty of which increased through the competition. Apart from one case where the robot became stuck and damaged its motor controller, Emu's performance was reliable and largely deterministic. Very few other teams were able to even attempt the Radio Dropout Zone, as it required a combination of some rough terrain mobility (reaching the zone required descending two 0.2m steps and driving over debris) and reliable autonomy. As a result, the next best team achieved less than half our success rate.

Emu also performed very well during the Best-in-Class Autonomy challenge. The arena was specifically set up for this challenge, with difficult narrow passageways coupled with many non-vertical obstacles that required the robot to keep a safe distance from walls. Using the wall-following controller with adjusted safety and extra expansion distances, Emu was able to cover almost all of the arena on its first attempt with time to spare and produced a good quality map, as shown in Figure 8.

    7 Conclusion

At present, our mapping is 2D, due to limitations in our sensing. The SwissRanger 3D imager gives us depth information but with a narrow field of view. To gain adequate situational awareness, the sensor head must be scanned, which is slow and often impractical. New laser rangefinders are now small enough and have high enough scan rates that we can mount a scanning laser to give us an almost completely spherical point cloud. These data will allow our mapping software to create 3D maps.

Due to limitations in onboard computing power, position tracking and navigation are separated from mapping. We designed the navigation algorithms so that they would not depend on mapping, in case communication to the OCU is lost. However, this deprives the navigation system of much useful information. Upgrading the onboard computers will allow us to run mapping alongside navigation and permit the implementation of more intelligent navigation algorithms.

The current sensor head is large and heavy and requires a powerful, and heavy, arm to carry it. Moving the navigation sensors off the head would mean that the arm only needs to carry a few light-weight cameras. This will allow us to design a new arm that is lighter, has more degrees of freedom and can be used to point the cameras into holes and hard-to-reach places. A more dexterous arm may eventually be needed for manipulation.

    References

[Dellaert et al., 1999] F. Dellaert, D. Fox, W. Burgard, and S. Thrun. Monte Carlo localization for mobile robots. In IEEE International Conference on Robotics and Automation, pages 1322-1328. IEEE, 1999.

[Gerkey et al., 2001] B. Gerkey, R. Vaughan, K. Støy, A. Howard, G. Sukhatme, and M. Mataric. Most valuable player: A robot device server for distributed control. In Proc IEEE/RSJ Intl Conf on Intelligent Robots and Systems, Wailea, Hawaii, October 2001.

[Ice, 2009] ZeroC: The Internet Communications Engine. www.zeroc.com/ice.html, August 2009.

[Kadous et al., 2005] M. Waleed Kadous, Raymond Sheh, and Claude Sammut. Caster: A robot for urban search and rescue. In 2005 Australasian Conf on Robotics and Automation, 2005.

[Kadous et al., 2006a] M. Waleed Kadous, Raymond Sheh, and Claude Sammut. Controlling heterogeneous semi-autonomous rescue robot teams. In 2006 IEEE Intl Conf on Systems, Man, and Cybernetics, 2006.

[Kadous et al., 2006b] M. Waleed Kadous, Raymond Sheh, and Claude Sammut. Effective user interface design for rescue robotics. In 2006 Conf on Human-Robot Interaction, 2006.

[Minguez et al., 2005] J. Minguez, F. Lamiraux, and L. Montesano. Metric-based scan matching algorithms for mobile robot displacement estimation. In Proc 2005 IEEE Intl Conf on Robotics and Automation, pages 3557-3563, 2005.

[Montemerlo et al., 2002] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM: A factored solution to the simultaneous localization and mapping problem. In Proc Nat Conf on Artificial Intelligence, pages 593-598. MIT Press, 2002.

[RoboticFX, 2007] RoboticFX. Negotiator Tactical Surveillance Robot. www.roboticfx.com, 2007.

[Sheh et al., 2006] Raymond Sheh, Nawid Jamali, M. Waleed Kadous, and Claude Sammut. A low-cost, compact, lightweight 3D range sensor. In 2006 Australasian Conf on Robotics and Automation, 2006.

[Sheh et al., 2007] Raymond Sheh, M. Waleed Kadous, and Claude Sammut. RoboCup Rescue 2007 - Team CASualty (Australia). In RoboCup 2007 Team Description Papers, 2007.

[Thrun et al., 2005] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). MIT Press, 2005.

[Volksbot, 2007] Volksbot. Volksbot RT: A modular robot construction kit for Rough Terrain. www.volksbot.de/pdfs/VolksBot-en.pdf, 2007.
