AUTONOMOUS LONG RANGE NAVIGATION IN PLANETARY-LIKE ENVIRONMENTS

Simon Lacroix, Anthony Mallet, David Bonnafous, Sara Fleury, Mauro Di Marco and Raja Chatila
LAAS/CNRS
7, av. du Colonel Roche
F-31077 Toulouse Cedex 4, France

Abstract

Autonomous long range navigation in partially known planetary-like terrain is still quite challenging for roboticists. Navigating several hundreds of meters without any operator intervention requires the ability to build various representations of the perceived environment, to plan and execute trajectories according to the kind of terrain traversed, to localize the robot as it executes its mission, and to schedule, trigger, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently being integrated on board the Marsokhod robot Lama at LAAS/CNRS. We then focus on the necessity of integrating various instances of the perception and decision functionalities, and on the difficulties raised by this integration.

1 Introduction

To foster ambitious exploration missions, future planetary rovers will have to fulfill tasks described at a high abstraction level, such as "reach the top of that hill" or "explore this area". This calls for the ability to navigate for several hundreds of meters, dealing with various and complex situations, without any operator intervention. Such an ability is still quite an open challenge for roboticists: it requires the integration and control of a wide variety of autonomous processes, ranging from the lowest-level servoings to the highest-level decisions, under time and resource constraints. We are convinced that no simple autonomy concept can lead to the development of robots able to tackle such complex tasks: we believe in the efficiency of deliberative approaches [4], which are able to plan and control a wide variety of processes.
Following such a paradigm, and according to a general economy-of-means principle, we want the robot to adapt its decisions and behavior to the environment and to the task it has to achieve [5]. This requires the development of:

- various methods to implement each of the perception, decision and action functionalities, adapted to given contexts;
- an architecture that allows the integration of these methods, in which deliberative and reactive processes can coexist;
- specific decision-making processes that dynamically select the appropriate decision, perception and action processes among the ones the robot is endowed with.

In this paper, we present the current state of development of the robot Lama, an experimental platform within which our developments related to autonomous long range navigation are integrated and tested. We especially focus on the necessity of integrating various implementations of each of the main functionalities required by autonomous navigation (i.e. environment modeling, localization, path and trajectory generation). After a brief description of Lama and its equipment, the rest of the paper is split into two parts: the first briefly presents the main navigation functionalities we currently consider (terrain modeling, path and trajectory planning, rover localization), while the second insists on the problems raised by the integration of these functionalities.

2 The robot Lama

Lama is a 6-wheel Marsokhod chassis [10] that has been fully equipped at LAAS (1). The chassis is composed of three pairs of independently driven wheels, mounted on axes that can roll relative to one another, giving the robot high obstacle traversability capacities. The robot's length varies with the axes configuration, and each motor is driven by a servo-control card.
Lama is equipped with the following sensors:

- each wheel carries a very high resolution optical encoder, allowing fine speed control and odometry;
- five potentiometers provide the chassis configuration;
- a 2-axis inclinometer provides the robot attitude, while a magnetic fluxgate compass and an optical fiber gyrometer provide the robot orientation and rotational speed;
- a first stereo bench, supported by a pan-and-tilt unit, is mounted on top of a mast rigidly tied to the middle axis; it is mainly dedicated to goal and landmark tracking;
- a second stereo bench, also supported by a PTU, is mounted upon the front axis; it is mainly dedicated to terrain modeling in front of the robot;
- a differential carrier-phase GPS receiver (2) is used to qualify the localization algorithms.

All the computing equipment is in a VME rack mounted on the rear axis of the robot. The rack contains four CPUs operated by the real-time OS VxWorks: two PowerPCs run all the environment modeling and planning functionalities, while the two other boards are in charge of the data acquisitions (except the camera images) and of the locomotion and PTU control.

(1) Lama is owned by Alcatel Space Industries, and is currently lent to LAAS.
(2) Currently lent to us by CNES.



Figure 1: The robot Lama on our experimentation site

The terrain on which we test our navigation algorithms contains a wide variety of navigation classes, ranging from flat obstacle-free areas to rough areas, including gentle and steep slopes, rocks, gravel, trees and bushes.

Part A: Navigation functionalities

3 Environment modeling

Perceiving and modeling the environment is of course a key functionality for the development of autonomous navigation. Environment models are actually required for several different functionalities: to plan paths, trajectories and perception tasks (section 4), to localize the robot (section 5), and also to servo the execution of trajectories. Aiming at building a "universal" terrain model that contains all the necessary information for these various processes is extremely difficult, inefficient, and moreover not really necessary. It is more direct and easier to build different representations adapted to their use. The environment model is then multi-layered and heterogeneous, and perception is multi-purpose: several modeling processes coexist in the system, each dedicated to the building of specific representations.

3.1 Qualitative modeling

We developed a method that produces a description of the terrain in terms of navigability classes, on the basis of stereovision data [7]. Most of the existing contributions to producing similar terrain models amount to a data segmentation procedure (e.g. [15, 9]) that produces a binary description of the environment, in terms of traversable and non-traversable areas. Our method is a classification procedure that produces a probabilistically labeled polygonal map. It is an identification process, and does not require any threshold determination (a tedious problem with segmentation algorithms).

Our method relies on a specific discretisation of the perceived area, which defines a cell image. The discretisation corresponds to the central projection, on a virtual horizontal ground, of a regular (Cartesian) discretisation in the sensor frame (figure 2). It "respects" the sensor characteristics: the cells' resolution decreases with the distance, according to the decrease of the data resolution.

Features are computed for each cell and are used to label the cells thanks to a supervised Bayesian classifier: a probability for each cell to correspond to a pre-defined traversability class is estimated. Figure 3 shows a classification result, with two

Figure 2: Discretisations of a 3D stereo image. Left: regular Cartesian discretisation in the sensor frame; right: its projection on the ground (the actual discretisation is much finer)

terrain classes considered (flat and obstacle). There are several extensions to the method: the discretisation can be dynamically controlled to allow a finer description, and the classification results can be combined with a terrain physical nature classifier using texture or color attributes. One of its great advantages is that, thanks to the probabilistic description, local maps perceived from different viewpoints can very easily be merged into a global description. The produced terrain model can either be used to generate elementary motions on rather obstacle-clear terrains (section 4.1), or to reason at the path level (section 4.3).
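The cell labeling described above can be sketched as a standard supervised Bayesian classifier. In this minimal Python sketch, the two per-cell features, the Gaussian class-conditional models and the priors are illustrative assumptions, not the models actually trained for Lama:

```python
import numpy as np

# Hypothetical per-class Gaussian feature models (illustrative values):
# each cell is described by two features, e.g. elevation variance and mean slope.
CLASSES = {
    "flat":     {"mean": np.array([0.02, 0.05]), "cov": np.diag([0.01, 0.02])},
    "obstacle": {"mean": np.array([0.30, 0.40]), "cov": np.diag([0.05, 0.08])},
}
PRIORS = {"flat": 0.7, "obstacle": 0.3}

def gaussian_pdf(x, mean, cov):
    d = x - mean
    norm = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(cov))
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm)

def classify_cell(features):
    """Bayes rule: P(class | features) for each traversability class."""
    joint = {c: PRIORS[c] * gaussian_pdf(features, m["mean"], m["cov"])
             for c, m in CLASSES.items()}
    total = sum(joint.values())
    return {c: v / total for c, v in joint.items()}
```

Because each cell keeps a full probability vector rather than a hard label, maps perceived from different viewpoints can be fused by combining the per-class probabilities.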


Figure3: An exampleof classificationresult.Fromleft to right: imageof the stereovisionpair, partial probabilitiesof the definedcells to beanobstacle(representedasgraylevels),andreprojectionof thecellsinthesensorframe,after theapplicationof a symmetricdecisionfunction

3.2 Digital elevation map building

Digital elevation maps, i.e. ground elevations computed on a regular Cartesian grid, are a very common way to model the environment [11, 2]. Although there have been several contributions to this problem, we think that it has still not been addressed in a very satisfactory way: the main difficulty comes from the uncertainties on the 3D input data, which can be fairly well estimated, but hardly propagated throughout the computations and represented in the grid structure.

However, a quite realistic model can easily be built by computing the mean elevation of the data points over the grid cells, using only the points that are provided with precise coordinates. With our stereovision algorithm for instance, only the 3D points whose depth is below a given threshold are used to build a realistic digital elevation map. Provided the robot is localized with a precision of the order of the cell size, data acquired from several viewpoints can be merged into a global map (figure 4). This model is used to detect landmarks (section 3.3) and to generate elementary trajectories (section 4.2).
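The mean-elevation scheme can be sketched in a few lines; the cell size, map extent and the NaN convention for unobserved cells are illustrative choices, and the depth filtering of the 3D points is assumed to be done by the caller:

```python
import numpy as np

def build_dem(points, cell=0.1, size=10.0):
    """Mean-elevation digital elevation map on a regular Cartesian grid.

    points: (N, 3) array of 3D points, already filtered for depth accuracy.
    cell and size (in meters) are illustrative parameters.
    """
    n = int(round(size / cell))
    acc = np.zeros((n, n))
    cnt = np.zeros((n, n), dtype=int)
    for x, y, z in points:
        i, j = int(x / cell), int(y / cell)
        if 0 <= i < n and 0 <= j < n:
            acc[i, j] += z
            cnt[i, j] += 1
    dem = np.full((n, n), np.nan)      # NaN marks unobserved cells
    mask = cnt > 0
    dem[mask] = acc[mask] / cnt[mask]
    return dem
```

Merging a new acquisition into a global map then amounts to accumulating sums and counts per cell, provided the points are first expressed in the global frame.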

3.3 Finding landmarks

An efficient way to localize a rover is to rely on particular elements present in the environment, referred to as landmarks (section 5.3). A pertinent landmark for position refinement should have the following properties: (i) it must be easy to identify, so that landmark association in different images can be performed;

Figure 4: A digital elevation map built by Lama during a 20 meter long run, using 50 stereovision pairs

(ii) it must be "geometrically rich enough" to allow the refinement of all the parameters of the initial position estimation; (iii) its perceived geometric attributes must be as stable as possible, i.e. independent of viewpoint changes. The first requirement can be relaxed when a sufficiently good initial position estimation is available (which is the case we consider in section 5): matching landmarks extracted from different images is then simply done by comparing their estimated positions. The second property can be bypassed when several landmarks are perceived. Local peaks, such as the summits of obstacles, satisfy the third property, are often numerous in planetary-like environments and can quite easily be extracted: we therefore opted to extract such features as landmarks.

Various authors presented 3D data segmentation procedures to extract salient objects, and our first attempts were based on a similar principle [3]. However, such techniques are efficient only in simple cases, i.e. on scenes where sparse rocks lie on a very flat terrain, but are rather fragile on rough or highly cluttered terrains for instance. To robustly detect such local peaks, we are currently investigating a technique that relies on the computation of similarity scores between a digital elevation map area and a pre-defined 3D peak-like pattern (a paraboloid for instance), at various scales. First results are encouraging (figure 5), and the detected landmarks could be used to feed a position estimation technique (section 5.3).
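The similarity-score idea can be illustrated with a normalized cross-correlation between DEM windows and a peak-like template. The paraboloid radius, cell size and single-scale search below are simplifying assumptions (the technique described above works at various scales):

```python
import numpy as np

def paraboloid_patch(radius, cell):
    """Peak-like 3D pattern: an inverted paraboloid sampled on the grid."""
    r = np.arange(-radius, radius + cell / 2, cell)
    xx, yy = np.meshgrid(r, r)
    return -(xx ** 2 + yy ** 2)

def peak_scores(dem, radius=0.3, cell=0.1):
    """Normalized cross-correlation of each DEM window with the pattern.
    High scores indicate a local peak, i.e. a candidate landmark."""
    patt = paraboloid_patch(radius, cell)
    k = patt.shape[0] // 2
    patt = patt - patt.mean()
    scores = np.full(dem.shape, -1.0)
    for i in range(k, dem.shape[0] - k):
        for j in range(k, dem.shape[1] - k):
            win = dem[i - k:i + k + 1, j - k:j + k + 1]
            w = win - win.mean()
            denom = np.linalg.norm(w) * np.linalg.norm(patt)
            if denom > 1e-9:               # skip flat, textureless windows
                scores[i, j] = float(w.flatten() @ patt.flatten() / denom)
    return scores
```

Thresholding the score map and keeping local maxima yields the peak landmarks; repeating the computation with templates of several radii gives the multi-scale variant.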

4 Trajectory generation

Natural terrains being unstructured, specific trajectory generation algorithms have to be developed. A generic trajectory planner able to deal with any situation should take into account all the constraints, such as rover stability, rover body collisions with the ground, and kinematic and even dynamic constraints. The difficulty of the problem calls for highly time-consuming algorithms, which would actually be quite inefficient in situations where much simpler techniques are applicable. We therefore think it is worth endowing the rover with various trajectory generation algorithms, dedicated to the kind of terrain to traverse. Section 8 describes how they are actively triggered and controlled.

4.1 On easy terrains

On easy terrains, i.e. rather flat and lightly cluttered, dead ends are very unlikely to occur. Therefore, the robot can efficiently move just on the basis of a goal to reach (3) and of a terrain model

(3) Not necessarily the distant global goal: it can be a sub-goal formerly selected (see section 4.3).

Figure 5: Landmarks (black + signs) detected on the locally built digital elevation maps (left), and reprojected in the camera frame (right). Three meters separate the two image acquisitions, and landmarks are here from 3 to 10 meters away.

that exhibits non-traversable areas, using techniques that evaluate elementary motions [7].

To generate motions in such terrains, we use an algorithm that evaluates circle arcs on the basis of the global qualitative probabilistic model. The algorithm is run every time the terrain model is updated; it consists in evaluating the interest (in terms of reaching the goal) and the risk (in terms of terrain traversability) of a set of circle arcs (figure 6). The risk of an arc is defined in terms of the probability of encountering an obstacle; arcs whose risk is bigger than a chosen threshold are discarded, and the arc that maximizes the interest/risk ratio is chosen.

Figure 6: A set of circle arcs to evaluate on the global probabilistic model (left), and reprojection of the arcs in the current camera view (right)
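The arc evaluation loop can be sketched as follows; the risk model (probability of meeting at least one obstacle among the crossed cells), the interest measure and the threshold value are illustrative assumptions, not the tuning used on Lama:

```python
import math

def select_arc(arcs, goal, risk_threshold=0.2):
    """Pick the circle arc maximizing an interest/risk ratio.

    Each arc is summarized by its endpoint and the obstacle probabilities
    of the terrain cells it crosses (from the probabilistic terrain model).
    """
    best, best_ratio = None, -1.0
    for arc in arcs:
        # Risk: probability of meeting at least one obstacle along the arc.
        p_free = 1.0
        for p in arc["cell_obstacle_probs"]:
            p_free *= (1.0 - p)
        risk = 1.0 - p_free
        if risk > risk_threshold:
            continue                      # too dangerous: arc is discarded
        ex, ey = arc["endpoint"]
        # Interest: the closer the arc endpoint brings us to the goal, the better.
        interest = 1.0 / (1.0 + math.hypot(goal[0] - ex, goal[1] - ey))
        ratio = interest / (risk + 1e-3)  # regularized interest/risk ratio
        if ratio > best_ratio:
            best, best_ratio = arc, ratio
    return best
```

Running this selection each time the terrain model is updated yields the reactive arc-following behavior described above.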

4.2 On rough terrains

On uneven terrain, the notion of obstacle clearly depends on the capacity of the locomotion system to overcome terrain irregularities, and on specific constraints acting on the placement of the robot over the terrain. These constraints are the stability and collision constraints, plus, if the chassis is articulated, the configuration constraints (figure 7). To evaluate such constraints, the probabilistic qualitative model is not sufficient anymore: the digital elevation map is required.

We developed a planner [8] that computes motions verifying such constraints by exploring a three-dimensional configuration


Figure 7: The chassis internal configuration angles checked on the digital elevation map

space (x, y, θ) on the digital elevation map. This planner builds a graph of discrete configurations that can be reached from the initial position by applying sequences of discrete controls.

It is however quite time-consuming: we therefore evaluate elementary trajectories, in a way very similar to section 4.1. A set of circle arcs is produced, and for each arc, a discrete set of configurations is evaluated. Each arc is then given a cost that integrates the dangerousness of the successive configurations it contains, the arc to execute being the one that maximizes the interest/cost ratio.

Figure 8: A trajectory resulting from the application of the rough terrain local planner (approximately 30 cycles)

4.3 Planning paths

The two techniques described above only evaluate elementary local trajectories, and are therefore not able to efficiently deal with highly cluttered areas and dead ends. For that purpose, we use a path planner that reasons on the global qualitative model to find sub-goals and perception tasks [12].

The global qualitative model,which is built upona bitmapstructure,is segmentedto producea region map.This mapde-finesa graph,in which a searchalgorithmprovidesanoptimalpathto reachtheglobalgoal. The“optimality” criterion takeshereacrucialimportance:it is a linearcombinationof timeandenergy consumed,weightedby the terrain classto crossandthe confidenceof the terrain labeling. Introducingthe label-ing confidencein thecrossingcostof anarccomesto implicitlyconsiderthe modelingcapabilitiesof the robot: toleratingtocrossobstacleareaslabeledwith a low confidencemeansthattherobotis ableto easilyacquireinformationsonthisarea.Thereturnedpathis thenanalyzed,to producea sub-goalto reach:it is thelastnodeof thepaththatlies in a traversablearea.

5 Localization

A position estimate is not only necessary to build coherent environment models, it is also required to ensure that the given mission is successfully being achieved, or to servo motions along a

defined trajectory. Robot self-localization is actually one of the most important issues in tackling autonomous navigation.

The varioustechniquesrequiredto computethe robotposi-tion as it navigatesrangefrom inertial or odometrydatainte-grationto absolutelocalizationwith respectto aninitial model.Onecandistinguishvariousalgorithmcategories:(i) motiones-timation techniques,that integratesdataasa very high paceasthe robot moves (odometry, inertial navigation, visual motionestimation- sections5.1and5.2), (ii) positionrefinementtech-niques,that rely on thematchingof landmarksperceivedfromdifferentpositions,and(iii) absolutelocalizationwith respecttoaninitial globalmodelof theenvironment.All thesealgorithmsarecomplementary, andprovide positionestimateswith differ-entcharacteristics:weareconvincedthatanautonomousrovershouldbeendowedwith at leastoneinstanceof eachcategory.

5.1 Odometry on natural terrains

Odometry on natural terrains is of course much less precise than on a perfectly flat ground, but it can nevertheless bring some useful information. We use 3D odometry with Lama by incorporating the attitude information provided by the 2-axis inclinometer (4) into the translations measured by the center wheel encoders. Due to skid-steering, the angular orientation measured by the odometers is not reliable: the information provided by the integration of the gyrometer data is much better, and does not drift significantly before a few tens of minutes.
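One step of such 3D odometry can be sketched as follows; the ZYX angle convention and axis assignments are assumptions of this sketch, not necessarily those used on Lama:

```python
import numpy as np

def integrate_odometry(pose, d, roll, pitch, yaw):
    """One 3D odometry step: advance along the robot's x axis by the
    encoder-measured translation d, oriented by the attitude (roll and
    pitch from the inclinometer) and the heading (yaw from the gyrometer
    integration)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # ZYX rotation matrix (yaw, then pitch, then roll).
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    return pose + R @ np.array([d, 0.0, 0.0])
```

On a slope, the attitude correction makes the integrated path climb or descend instead of staying in the horizontal plane, which is the whole point of 3D odometry.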

To have a quantitative idea of the precision of odometry, we gathered some statistics, using a carrier-phase DGPS as a reference. Figure 9 presents a histogram of the translation errors of odometry, measured at regular distance steps. What is noticeable is the secondary peak in the error distribution, which appeared during the traverse of a gravel area, where some longitudinal and lateral slippages occurred.


Figure 9: Histogram of odometry errors, measured at regular steps during a 50 meter run with Lama, on various kinds of ground.

These preliminary figures show that odometry can hardly be modeled as an estimator with Gaussian uncertainties: some gross errors actually occur quite often. We are currently investigating the possibility of analyzing on line a set of proprioceptive data in order to dynamically qualify the odometry, and especially to detect such errors. These data are the 6 wheel encoders, the measured currents, the two attitude parameters and the five chassis configuration parameters.

5.2 Visual motion estimation

We developed an exteroceptive position estimation technique that is able to estimate the 6 parameters of the robot displacements in any kind of environment, provided it is textured

(4) After the application of a slight smoothing filter on its data.

enough so that pixel-based stereovision works well: no particular landmark is required [13]. The technique computes the motion parameters between two stereo frames on the basis of a set of 3D point to 3D point matches, established by tracking the corresponding pixels in the image sequence acquired while the robot moves.

The principle of the approachis extremely simple, but wepaid a lot of attentionto theselectionof the pixel to track: inorderto avoid wrongcorrespondences,onemustmakesurethatthey can be faithfully tracked,and in order to have a preciseestimationof themotion,onemustchoosepixelswhosecorre-sponding3D point is known with a goodaccuracy. Pixel se-lection is donein threesteps:an a priori selectionis doneonthebasisof thestereoimages;anempiricalmodelof thepixeltracking algorithm is usedto discardthe dubiouspixels dur-ing the trackingphase;and finally an outlier rejectionis per-formedwhencomputingan estimateof displacementbetweentwo stereoframes(a posterioriselection).

Figure 10 presents a set of positions visually estimated on a long run. On this run, the algorithm gives precise translation estimates, and a similar precision has been obtained over several experiments, processing several hundreds of images. Work related to this algorithm is still under way, and we hope that it might reach an even better precision on translation estimates.


Figure 10: Comparison of the position measured by odometry, the visual motion estimator, and the reference

5.3 Landmark based localization

Landmark-based localization algorithms refine both the robot and landmark position estimates. Most of the contributions to this problem, often referred to as Simultaneous Localization and Map Building (SLAM), rely on the application of extended Kalman filters. However, we believe that the hypotheses required to apply these filtering techniques (mainly the statistical assumptions on the landmark perception and robot motion errors) are seldom satisfied on natural terrains. For that reason, we are currently investigating Set Theoretic approaches to tackle this problem [14]. Within such an approach, no statistical assumption is made on the errors that affect the sensors: the only hypothesis is that the errors are bounded in norm by some quantity. Estimates of the robot and landmark positions are derived in terms of feasible uncertainty sets, defined as regions in which the robot and the landmarks are guaranteed to lie, according to all the available information.

Some simulation results using realistic bounds, together with the landmark detection algorithms presented in section 3.3, are promising. The integration of these algorithms on board Lama is currently under way.
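The set-theoretic flavor of the estimate can be illustrated with axis-aligned boxes, a strong simplification of the feasible uncertainty sets (which are generally not boxes); no statistical model is used, only bounds on the errors:

```python
import numpy as np

def box_intersect(a, b):
    """Intersection of two axis-aligned uncertainty boxes, each given as
    a (lower_corner, upper_corner) pair; None if they are disjoint."""
    lo = np.maximum(a[0], b[0])
    hi = np.minimum(a[1], b[1])
    return (lo, hi) if np.all(lo <= hi) else None

def refine_position(motion_box, measurement_boxes):
    """Feasible robot position set: intersect the box predicted by the
    bounded-error motion estimate with the boxes implied by each
    bounded-error landmark measurement."""
    feasible = motion_box
    for m in measurement_boxes:
        feasible = box_intersect(feasible, m)
        if feasible is None:
            break                          # inconsistent data: empty set
    return feasible
```

Each new bounded-error measurement can only shrink the feasible set, so the robot position is guaranteed to lie in the result as long as the error bounds hold.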

Part B: Integration

6 A general architecture for autonomy

Our research group has been working for several years on the definition and development of a generic software and decisional architecture for autonomous machines. We briefly present here the concepts of this architecture, which allows the integration of both decision-making and reactive capabilities (a detailed presentation can be found in [1]). This architecture has been successfully instantiated in multi-robot cooperation experiments, indoor mobile robotics experiments, and autonomous satellite simulations [6]. The architecture, sketched in figure 11, contains three different layers:

- The functional level includes all the basic robot action and perception capacities. These processing functions and control loops (image processing, obstacle avoidance, motion control, etc.) are encapsulated into controllable communicating modules. A module may read data exported by other modules, and output its own processing results in exported data structures. The organization of the modules is not fixed; their interactions depend on the task being executed and on the environment state. This is an important property that makes it possible to achieve a flexible, reconfigurable robot behavior. Modules fit a standard structure, and are implemented thanks to a development environment, Genom.

Note that in order to make this level as hardware-independent as possible, and hence portable from one robot to another, the functional level is interfaced with the sensors and effectors through a logical robot level.

- The executive controls and coordinates the execution of the functions distributed in the modules according to the task requirements. Its function is to fill the gap between the decision and functional levels, i.e. between the slow-rate logical reasoning on symbolic data and the higher-bandwidth computation on numerical data. It is a purely reactive system, with no planning capability. It receives from the decision level the sequences of actions to be executed, and dynamically selects, parameterizes and synchronizes the adequate functions of the functional level.

- The decision level includes the capacities of producing the task plans and supervising their execution, while being at the same time reactive to events from the previous level. This level may be decomposed into two or more layers, based on the same conceptual design, but using different representation abstractions or different algorithmic tools, and having different temporal properties. This choice is mainly application-dependent.

Up to now, all the algorithms are integrated as modules on board Lama, while the executive is simply implemented as scripts written in Tcl.

7 Integration of concurrent localization algorithms

Some particular integration problems are related to the coexistence of several localization algorithms running in parallel on board the robot. To tackle this in a generic and reconfigurable way, we developed a particular module named PoM (position


Figure 11: LAAS architecture for robot autonomy.

manager), which receives all the position estimates produced by the localization modules as inputs, and produces a single consistent position estimate as output. PoM addresses the following issues:

- Sensor geometrical distribution. The robot's sensors being distributed all over the robot, one must know their relative positions, which can dynamically evolve. Indeed, we want all the modules to be generic, i.e. to handle data without having to consider the position from which they were acquired. This is particularly true for video images, since Lama is equipped with two orientable stereo benches: we may want to switch benches whenever required, with minimal effort. For this purpose, we developed a framework named "InSitu" (internal situation, section 7.1), which is part of PoM.

- Localization module asynchronism. The localization algorithms have individual time properties: some produce a position estimate at a regular high frequency, while others require some non-constant computation time. To be able to provide a consistent position estimate at any time, the "time management" part of PoM has been developed (section 7.2).

- Fusion of the various position estimates. The fusion of various position estimates is of course a key issue in robot localization. This is done within the "fusion" part of PoM (discussed in section 7.2).

7.1 Internal situation

Some problems arise when the sensors of a robot are geometrically distributed. For instance, the vision-based localization module has to know the orientation of the cameras for each image it processes, whereas the digital elevation map module needs the relative and/or absolute positions of the 3-D images it is using, in a predefined coordinate frame.

Distributing the geometrical information in the modules is not satisfying: some information would be hard-coded and duplicated within the modules, and it complicates the porting of a module to another robot or another sensor. For that purpose, we use InSitu, a centralized geometrical description of the robot. It reads a configuration file upon startup, and provides the necessary frame coordinates to any module while the robot navigates. All the data acquisition modules use this information to tag the data they produce.


Figure 12: The current geometrical graph used on the robot Lama. Rectangular boxes are frame configurations that are exported in the system. The thick box (named Robot in the figure) is the main frame. Solid lines are links (either rigid or mobile) that are declared in the configuration file. Each mobile link has an associated function that gets the link parameters from the underlying hardware, computes the current transformation matrix and sends it back to PoM.

The configuration file is the textual description of a geometrical graph (figure 12). The nodes of the graph are frame coordinates that need to be exported. They usually correspond to sensor locations, but can also be a convenient way to split otherwise complex links. This graph is the only robot-specific part of PoM.

The links between frames are either static (rigid) or dynamic (mobile). A static link cannot change during execution and is usually related to some mechanical part of the robot. On the other hand, dynamic links, which depend for instance on the chassis configuration or on a PTU configuration, are updated continuously at a predefined frequency. Updates are made possible by the definition of link-specific functions that get the link parameters from the underlying hardware system. These functions are gathered together in a library associated with the configuration file. Last, the configuration file defines a common frame (also called the main frame). To facilitate and homogenize inter-module data transfers, the data of the various modules are expressed in this frame.
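The rigid/mobile link mechanism can be sketched as follows. This is a minimal Python illustration, not the actual PoM code: all class and function names are invented, and each mobile link carries a callback that plays the role of the link-specific hardware function.

```python
import numpy as np

def translation(x, y, z):
    """Homogeneous 4x4 translation matrix (illustration helper)."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

class Link:
    """One edge of the geometrical graph: a child-to-parent transform."""
    def __init__(self, transform, update_fn=None):
        self.transform = transform      # 4x4 child-to-parent matrix
        self.update_fn = update_fn      # None for rigid links

    def refresh(self):
        if self.update_fn is not None:  # mobile link: query the hardware
            self.transform = self.update_fn()

class FrameGraph:
    """Frames are nodes; each frame stores its parent and the link to it."""
    def __init__(self):
        self.links = {}                 # frame name -> (parent name, Link)

    def add_link(self, frame, parent, transform, update_fn=None):
        self.links[frame] = (parent, Link(transform, update_fn))

    def frame_to_main(self, frame, main="Robot"):
        """Compose link transforms along the chain frame -> ... -> main."""
        t = np.eye(4)
        while frame != main:
            parent, link = self.links[frame]
            link.refresh()              # mobile links are re-read here
            t = link.transform @ t
            frame = parent
        return t
```

A rigid mast link composed with a mobile pan-tilt link would then yield the current "Camera to Main" transform on demand, which is what the tagging mechanism below relies on.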

To ease data manipulation and transfer among the modules, InSitu continuously exports all the frame configurations found in the configuration file, with the structure shown in figure 13.

The header of the exported data contains three fields: the first field is the current PoM date, expressed in ticks since boot time. The next two fields are positions: the Main to Origin transformation is the current absolute position of the main frame relative to an absolute frame. The Origin frame is usually the position of the robot at boot time, but it can be specified anywhere. The Main to Base transformation is a relative position which cannot be used as such: the sole operation permitted with this frame is composition with another Main to Base transformation (see section 7.2 for a detailed description). Base is a virtual frame that is maintained and computed by PoM.

After the header, there is a variable number of transformations, which are the current frame configurations. They are named according to the scheme "Frame to Main" and are the transformation matrices that map 3-D points in the "Frame" frame to 3-D points in the "Main" frame.
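As a small worked example of this mapping (the matrix values are made up for illustration), a "Camera to Main" matrix sends homogeneous 3-D points from the camera frame into the main frame:

```python
import numpy as np

# Hypothetical "Camera to Main" matrix: the camera sits 0.1 m ahead of and
# 1.2 m above the main (Robot) frame, with no rotation.
camera_to_main = np.array([[1.0, 0.0, 0.0, 0.1],
                           [0.0, 1.0, 0.0, 0.0],
                           [0.0, 0.0, 1.0, 1.2],
                           [0.0, 0.0, 0.0, 1.0]])

p_camera = np.array([2.0, 0.5, 4.0, 1.0])   # point seen in the camera frame
p_main = camera_to_main @ p_camera           # same point in the main frame
```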

Every data acquisition module reads the InSitu frame to which it relates, and associates to its data a "tag" structure (figure 14).

Figure 13: Structure exported by InSitu: the PoM date, the Main to Base and Main to Origin positions, and one "Frame to Main" transformation per exported frame (e.g. Camera to Main for each stereo bench camera, GPS to Main).

Figure 14: Structure for data tagging: the PoM date, the Main to Base and Main to Origin positions, and the Frame to Main transformation of the acquiring sensor.

Thanks to this tagging, data acquisition modules only have to know the name of the frame they are connected to, which can be specified dynamically. Once tagging is done, clients using such data do not have to care where the data comes from, since all the necessary geometrical and time information is contained in the data itself. The tag is propagated along with the data between modules, thus making inter-module data communication very flexible.
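The tag structure of figure 14 and its use can be sketched as follows (field and type names are our own invention; PoM's actual encoding may differ):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Tag:
    """Geometrical and time context attached to every acquired datum."""
    date: int                   # PoM date, in ticks since boot time
    main_to_base: np.ndarray    # relative position (composition only)
    main_to_origin: np.ndarray  # absolute position of the main frame
    frame_to_main: np.ndarray   # acquiring sensor frame -> main frame

@dataclass
class TaggedImage:
    """A datum travels with its tag, so any client can interpret it."""
    tag: Tag
    pixels: np.ndarray

def point_to_origin(tag: Tag, p_frame: np.ndarray) -> np.ndarray:
    """Express a homogeneous point from the sensor frame in the Origin
    frame, using only information carried by the tag itself."""
    return tag.main_to_origin @ tag.frame_to_main @ p_frame
```

The point of the sketch is that `point_to_origin` needs no knowledge of which sensor, bench, or module produced the data: everything required is in the tag.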

7.2 Position management

In addition to internal frame configurations, PoM gathers the various position estimators present in the system. It is the place where positional information is centralized and made available to any module that requires it.

Positions computed by the various position estimators are always produced with some delay, which depends on the computation time required to produce a particular position. Thus the robot always has to deal with outdated positions. One of the roles of PoM is to maintain the time consistency between the various position estimates.

Internally, there is one time chart for each position estimator handled by PoM, plus one particular chart for the fused position. The fused position is the result of the fusion of every position estimator (see section 7.2). All charts have the same length and hold every produced position since the beginning of the chart, up to the current date. The length is variable, but is computed so that we always have at least two positions from every motion estimator (figure 15).

PoM periodically polls every position estimator and looks whether a position has been updated. When a new position is found, it is stored in the chart of the corresponding motion estimator, at its corresponding date ("updates" arrows in figure 15). If the chart is not long enough to store a too-old date, it is enlarged as needed. Once every motion estimator has been polled, every fused position from the oldest update to the current time is marked for re-estimation.
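The per-estimator time charts and the re-estimation marking can be sketched like this (a simplified illustration, not the actual PoM code; all names are invented):

```python
import bisect

class TimeChart:
    """Positions of one estimator, kept ordered by date."""
    def __init__(self):
        self.dates, self.positions = [], []

    def store(self, date, position):
        # Insert at the right place: a late estimate may carry an old date.
        i = bisect.bisect_left(self.dates, date)
        self.dates.insert(i, date)
        self.positions.insert(i, position)

class PositionManager:
    """One chart per estimator, plus a chart for the fused position."""
    def __init__(self, estimator_names):
        self.charts = {n: TimeChart() for n in estimator_names}
        self.fusion = TimeChart()
        self.dirty_from = None   # oldest date whose fusion must be redone

    def poll(self, name, date, position):
        """Record a new estimate; mark fused positions from its date on
        for re-estimation, since a late update changes past fusions."""
        self.charts[name].store(date, position)
        if self.dirty_from is None or date < self.dirty_from:
            self.dirty_from = date
```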

Properly speaking, no data fusion algorithm is currently implemented within PoM: given the individual position estimator characteristics, we indeed consider that a consistency check (fault detection) has to be performed prior to any fusion. Up to now, an estimate selection is performed on the basis of a confidence (a real value between 0 and 1, 1 being the best) that is hard-coded for each position estimator.
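This confidence-based selection amounts to something like the following sketch (the estimator names and confidence values are illustrative only, not those used on Lama):

```python
# Hard-coded confidences in [0, 1], 1 being the best (illustrative values).
CONFIDENCE = {"odometry": 0.3, "visual_motion": 0.7, "gps": 0.9}

def select_position(estimates):
    """Pick the estimate of the most trusted estimator that produced one.

    estimates: dict mapping estimator name -> its latest position.
    This is a selection, not a fusion: only one estimate survives.
    """
    best = max(estimates, key=lambda name: CONFIDENCE.get(name, 0.0))
    return estimates[best]
```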

Once the current best position is computed, PoM stores it in the fusion time chart (figure 15). The important thing to note is that we cannot simply store the new position as such, because it might be very different from the previous position. Since some modules may use the robot position continuously (for instance when servoing on a computed trajectory), virtual jumps are not permitted. This is why we defined a reference position estimator, and a virtual frame named Base. The fused position computed by PoM is the Robot to Origin position. This one is exported as such and used by modules that require an absolute position. The second exported position, Robot to Base, is the position of the reference estimator and can only be used locally, i.e. to compute a delta position. PoM computes a Base to Origin transformation for every position estimator found in the system. This transformation is an indicator of the drift of each position estimator.

Figure 15: Time management and internal representation of motion estimators in PoM (here GPS, odometry, landmark-based localization and the fused position; the position estimators shown are only examples). Black dots show previously stored positions. Dashed dots show new upcoming positions.
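The reason Main to Base supports only composition can be illustrated as follows (a sketch under our own naming conventions, where an "A to B" matrix maps points of frame A into frame B): two successive readings of the continuous reference estimator yield a jump-free delta, which is then applied to the absolute position.

```python
import numpy as np

def translation(x, y, z):
    """Homogeneous 4x4 translation matrix (illustration helper)."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

def delta(base_to_main_t0_inv_input, base_to_main_t1_input):
    """Relative motion between two dates, composed from two successive
    Main to Base readings of the reference estimator."""
    return np.linalg.inv(base_to_main_t0_inv_input) @ base_to_main_t1_input

def propagate_origin(main_to_origin_t0, main_to_base_t0, main_to_base_t1):
    """Propagate the absolute Main to Origin position with the continuous
    delta, so that clients servoing on the position never see a jump."""
    return main_to_origin_t0 @ delta(main_to_base_t0, main_to_base_t1)
```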

8 Navigation strategies

The integration of the various algorithms presented in the first part of the paper requires specific decisional abilities, which are currently instantiated as Tcl scripts. The following simple strategy is currently applied: the three environment models (qualitative map, digital map and landmark map) are continuously updated every time new data are gathered, and the two integrated localization algorithms (odometry and visual motion estimation) are also continuously running (see footnote 5). The selection of the trajectory generation algorithm is as follows: given a global goal to reach, the easy terrain algorithm, which requires negligible CPU time, is applied until no feasible arcs can be found. In such cases, the rough terrain algorithm is applied. It is run until either the easy terrain algorithm succeeds again, or no feasible arcs are found in the digital map. In the latter case (which can be assimilated to a dead end), the path planning algorithm is run to select a sub-goal to reach. The whole strategy is then applied to reach the sub-goal, and so on.
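The planner-selection logic described above can be sketched as a simple decision function (a hypothetical Python illustration; on Lama the strategy is actually implemented as Tcl scripts):

```python
def select_planner(easy_arc, rough_arc):
    """Apply the selection strategy: prefer the cheap easy-terrain
    algorithm when it finds a feasible arc, fall back to the rough-terrain
    algorithm otherwise, and signal a dead end (so that the path planner
    can select a sub-goal) when neither finds a feasible arc.

    easy_arc, rough_arc: the arcs proposed by the two trajectory
    generators for the current goal, or None when none is feasible.
    """
    if easy_arc is not None:
        return "easy", easy_arc
    if rough_arc is not None:
        return "rough", rough_arc
    return "dead_end", None
```

Once the path planner returns a sub-goal, the same function is applied again toward that sub-goal, which matches the recursive "and so on" of the strategy.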

9 Conclusion

We insisted on the fact that to efficiently achieve autonomous long range navigation, various algorithms have to be developed for each of the basic navigation functions (environment modeling, localization and motion generation). Such a paradigm eventually leads to the development of a complex integrated system, thus requiring the development of integration tools, at both the functional and decisional levels. We are convinced that such tools are the key to implementing efficient autonomy over large time and space ranges.

5 Landmark-based localization integration is under way; it should also be run continuously, while actively controlling the image acquisition.

There are however several open issues. Among these, we believe that the most important one is still localization. In particular, the system must be robust to the extremely large uncertainties on the position estimates that will eventually occur: this requires the development of landmark recognition abilities to tackle the data association problem, and also the development of terrain model structures that tolerate large distortions. Note that both problems should benefit from the availability of an initial terrain map, such as one provided by an orbiter, whose spatial consistency is ensured. Indeed, the development of algorithms that match locally built terrain models with such an initial map would guarantee bounds on the error of the position estimates.

References

[1] R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand. An architecture for autonomy. Special Issue of the International Journal of Robotics Research on Integrated Architectures for Robot Control and Programming, 17(4):315-337, April 1998. Also Rapport LAAS N97352, September 1997, 46p.

[2] P. Ballard and F. Vacherand. The Manhattan method: A fast cartesian elevation map reconstruction from range data. In IEEE International Conference on Robotics and Automation, San Diego, Ca. (USA), pages 143-148, 1994.

[3] S. Betge-Brezetz, R. Chatila, and M. Devy. Object-based modelling and localization in natural environments. In IEEE International Conference on Robotics and Automation, Nagoya (Japan), pages 2920-2927, May 1995.

[4] R. Chatila. Deliberation and reactivity in autonomous mobile robots. Robotics and Autonomous Systems, 16(2-4):197-211, 1995.

[5] R. Chatila and S. Lacroix. A case study in machine intelligence: Adaptive autonomous space rovers. In A. Zelinsky, editor, Field and Service Robotics, number XI in Lecture Notes in Control and Information Science. Springer, July 1998.

[6] J. Gout, S. Fleury, and H. Schindler. A new design approach of software architecture for an autonomous observation satellite. In 5th International Symposium on Artificial Intelligence, Robotics and Automation in Space, Noordwijk (The Netherlands), June 1999.

[7] H. Haddad, M. Khatib, S. Lacroix, and R. Chatila. Reactive navigation in outdoor environments using potential fields. In International Conference on Robotics and Automation, Leuven (Belgium), pages 1232-1237, May 1998.

[8] A. Hait, T. Simeon, and M. Taix. Robust motion planning for rough terrain navigation. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju (Korea), pages 11-16, Oct. 1999.

[9] L. Henriksen and E. Krotkov. Natural terrain hazard detection with a laser rangefinder. In IEEE International Conference on Robotics and Automation, Albuquerque, New Mexico (USA), pages 968-973, April 1997.

[10] A. Kemurdjian, V. Gromov, V. Mishkinyuk, V. Kucherenko, and P. Sologub. Small marsokhod configuration. In IEEE International Conference on Robotics and Automation, Nice (France), pages 165-168, May 1992.

[11] I.S. Kweon and T. Kanade. High-resolution terrain map from multiple sensor data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):278-292, Feb. 1992.

[12] S. Lacroix and R. Chatila. Motion and perception strategies for outdoor mobile robot navigation in unknown environments. In 4th International Symposium on Experimental Robotics, Stanford, California (USA), July 1995.

[13] A. Mallet, S. Lacroix, and L. Gallo. Position estimation in outdoor environments using pixel tracking and stereovision. In IEEE International Conference on Robotics and Automation, San Francisco, Ca (USA), pages 3519-3524, April 2000.

[14] M. Di Marco, A. Garulli, S. Lacroix, and A. Vicino. A set theoretic approach to the simultaneous localization and map building problem. In 39th IEEE Conference on Decision and Control, Sydney (Australia), Dec. 2000.

[15] L. Matthies, A. Kelly, and T. Litwin. Obstacle detection for unmanned ground vehicles: A progress report. In International Symposium of Robotics Research, Munich (Germany), Oct. 1995.