
Collaborative Tracking for Robotic Tasks

Antonio Marín-Hernández¹, Víctor Ayala-Ramírez² and Michel Devy¹

¹LAAS-CNRS
7, Av. du Col. Roche, 31077 Toulouse CEDEX 4
France

²Universidad de Guanajuato - FIMEE
Tampico No. 912, 36730 Salamanca, Gto.
Mexico

Abstract—This paper describes how to integrate tracking functions on a mobile robot. Tracking is required during autonomous robot navigation in different situations: landmark tracking to guarantee real-time robot localization, target tracking for a sensor-based motion, or obstacle tracking to control an obstacle avoidance procedure. These objects (landmarks, targets, and obstacles) have different characteristics: this paper shows that a single tracking method cannot overcome all the tracking tasks. Several methods must be integrated on the robot. A tracker controller is in charge of activating a specific method with respect to the object type and to a transition model used to recover from tracking failures. This method is validated using four trackers, based on: template differences, sets of points, lines and snakes. Experimental results obtained on numerous image sequences are presented.

I. INTRODUCTION

Under real-world environmental conditions, a single tracking method cannot overcome all the different tasks and situations presented in them. Moreover, tracking methods are not robust enough to work under varying environmental conditions. Tracking often fails when illumination undergoes significant changes. Other situations where tracking methods fail are due to cluttered backgrounds, changes in model pose, occlusions of the target image or motion discontinuities in object dynamics. In order to overcome these limitations, we propose to integrate a collaborative tracking controller. This controller is in charge of selecting the tracking method to be used, as a function of the target and environment conditions. In addition, when a specific method has failed, it has to select another method and then try to recover the target, or, if that is not possible, to select another target, depending on the task to achieve. Method switching is determined by a tracking transition model that uses performance evaluation of the currently selected method and context identification to guarantee robust tracking on a mobile robot. Each tracking method is associated with a different type of model, so model coherence is a fundamental issue in tracking integration.

The rest of this paper is organized as follows. We review related work on tracking integration and active control in section II. In section III, we present our visual functionality model. Tracking methods included in this functionality are described in section IV and characterized in section V. We discuss the tracking transition model in section VI, where results of our experiments illustrate the strategies applied to keep model coherence after tracking method switching. Finally, section VII presents our conclusions and the future work to be done.

1 Corresponding author: amarin@laas.fr, Phone: +33 5 61 33 64 42, Fax: +33 5 61 33 64 55. Ph.D. student funded by the Mexican government through a CONACyT grant.

2 This work has been partially funded by the Ecos-Nord action M99M01.

II. RELATED WORK ON COLLABORATIVE TRACKING

Integration of tracking methods intends to increase the robustness of robotic vision systems for visual tracking tasks. Toyama and Hager [17] have proposed to use a layered hierarchy of visual tracking methods. Others have proposed fusion of information from different tracking methods to improve the reliability of the visual tracker: Kruppa et al. [13] take advantage of a mutual information approach to fuse different tracking models from several tracking methods executed simultaneously. Rasmussen and Hager [16] use a probabilistic approach to control a hierarchy of tracking strategies. Jepson et al. [10] perform the integration of several tracking methods, limiting its scope to appearance models for face tracking in cluttered natural environments.

Some other works have focused on performance evaluation to control one or several tracking methods in an active way. Barreto et al. [2][4] have developed performance evaluation measures in order to control active tracking using a stereo head. Resolution control has been developed by Ferrier [6] in order to achieve a stable visual tracking system. Ayala et al. [1] have proposed an active tracking functionality that controls acquisition parameters for a Hausdorff-based tracking method. Guibas et al. [7] show how to integrate tracking methods in a global robotic task.

III. VISUAL FUNCTIONALITY

A visual functionality is a block where a visual task is performed. Our visual functionality is modeled as depicted in Fig. 1.

The elements of this model are as follows:

- A set of $N$ visual functions $\{f_1, f_2, \ldots, f_N\}$ able to perform the visual task. Each function $f_i$ is associated with a set of parameters $\{\pi_{ik}\}$, $k \in [1, n_i]$, that can be adapted according to the current image. Each visual function also has an enable signal $e_i$ that determines whether the method has to be executed on the current input data. A local adaptation mechanism is provided for local adaptation of the parameter set. This adaptation is based on a performance evaluation measure of the application of the method to the current image.


- A mode selector that determines which methods have to be executed on the current input data.
- A modality controller that determines a set of parameters for the selected visual function to perform the visual task.
- A global adaptation mechanism that controls both the mode selector and the modality controller using performance evaluation information from all the currently active methods.

Fig. 1. Visual functionality block diagram: input and output are connected through visual functions 1 to N, governed by a mode selector, a modality controller and a global adaptation mechanism.

In this paper, we apply this model for a visual target tracking task that can be concurrently performed by several tracking methods.
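To make the roles of these blocks concrete, the following Python sketch shows one possible software organization of such a visual functionality. The class and method names (VisualFunction, select_mode, set_modality, etc.) are our own illustration under the model above, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class VisualFunction:
    """One visual function f_i: adaptable parameters, enable signal e_i, performance measure."""
    name: str
    run: Callable                      # run(image, params) -> (result, performance_score)
    params: Dict[str, float] = field(default_factory=dict)
    enabled: bool = False
    performance: float = 0.0


class VisualFunctionality:
    """Mode selector + modality controller + global adaptation around N visual functions."""

    def __init__(self, functions: List[VisualFunction]):
        self.functions = {f.name: f for f in functions}

    def select_mode(self, enabled_names: List[str]) -> None:
        # Mode selector: decide which functions run on the current input data.
        for f in self.functions.values():
            f.enabled = f.name in enabled_names

    def set_modality(self, name: str, params: Dict[str, float]) -> None:
        # Modality controller: push a parameter set to a selected visual function.
        self.functions[name].params.update(params)

    def process(self, image):
        # Run the enabled functions and collect their performance measures; a global
        # adaptation step could use these scores to update the mode selector and the
        # modality controller before the next image.
        results = {}
        for f in self.functions.values():
            if f.enabled:
                result, score = f.run(image, f.params)
                f.performance = score
                results[f.name] = result
        return results
```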

IV. THE TRACKING METHODS

Tracking is used on a mobile robot during a navigation task, in order to follow a target or to keep a landmark in the field of view. Four trackers are considered here, based on template differences, sets of interest points, lines and snakes.

For every tracker, only a small image region is examined to obtain the new target position, as opposed to the entire image. In this manner, the computation time is decreased significantly. The idea behind a local exploration of the image is that, if the execution of the code is quick enough, the new target position will lie within a vicinity of the previous one. In this way, the robustness of the method to target deformations is increased, since it is less likely that the shape of the model will change significantly within a small $\Delta t$. We do not describe here the prediction step, based on Kalman filtering, used to estimate the predicted target position at $t+1$ knowing its position at $t$.

A. Template difference tracker

Template tracking is an interesting region-based approach relying on template differences ([8], [11]). This method considers that the processing rate is high (about 20 Hz), so that linear approximations of the movements of a planar patch can be made. Let $R = \{x_1, x_2, \ldots, x_N\}$ be the set of $N$ image locations which define the target region, and let $I(R, t)$ be the vector of the intensities of region $R$ at time $t$. Taking the reference image at time $t_0 = 0$, we define the reference template as $I(R, t_0)$. Motion of the reference template over time produces deformations, rotations and/or translations in the images at $t > 0$. We consider that the motion can be modeled by a parametric function $f(x; \mu)$, called the motion model, parameterized by $\mu = (\mu_1, \mu_2, \ldots, \mu_n)$ with initial condition $\mu_0$ at $t_0$. To track an object in the image sequence is then to determine the vector $\mu(t)$. Considering that $n \ll N$ and that $f$ and $\mu$ are both differentiable, the motion parameter vector $\mu$ can be estimated at time $t$ by minimizing the following objective function:

$$O(\mu) = \sum_{x \in R} \left[ I(f(x; \mu), t) - I(f(x; \mu_0), t_0) \right]^2$$

Considering that, at time $t > t_0$, the motion parameter vector can be approximated by $\mu(t + \tau) = \mu(t) + \Delta\mu$, with $\tau$ a very small increase of time $t$, the objective function $O$ can be expressed in terms of $\Delta\mu$ as:

$$O(\Delta\mu) = \left\| I(\mu(t) + \Delta\mu,\; t + \tau) - I(\mu_0, t_0) \right\|$$

We can linearize the problem by expanding $I(\mu(t) + \Delta\mu,\; t + \tau)$ in a Taylor series around $\mu$ and $t$, as described in [8]. Solving the set of equations for $\partial O = 0$, we get:

$$\mu(t + \tau) = \mu(t) - (M^{\top} M)^{-1} M^{\top} \Delta i$$

where $M$ is the Jacobian matrix of $I$ with respect to $\mu$, and $\Delta i$ is the intensity error vector defined as:

$$\Delta i = I(\mu(t),\; t + \tau) - I(\mu_0, t_0)$$

Hager et al. [8] propose different models to reduce the computation of the Jacobian matrix. Jurie et al. [11] consider an interaction matrix $A$, defined by:

$$\mu(t + \tau) = \mu(t) + A\, \Delta i$$

which can be learnt in an offline computation stage, by making a set of $N_p$ small perturbations $\Delta\mu$ and calculating the intensity error vector $\Delta i$ for each one, in a local reference system, which makes the computation of $A$ easier. Taking $N_p > n$, it is possible to obtain the interaction matrix $A$ from the $N_p$ couples $(\Delta i, \Delta\mu)$, such that the following term is minimal:

$$\sum_{j=1}^{N_p} \left( \Delta\mu_j - A\, \Delta i_j \right)^2$$

We have implemented this learning method: figure 2 shows an example of such a target, here a poster.

Fig. 2. (a) The initial template; (b) the template after a small motion $\Delta\mu$; (c) the template difference.
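For concreteness, here is a minimal NumPy sketch of this offline learning scheme: apply $N_p$ small perturbations $\Delta\mu$ to the reference template, record the intensity errors $\Delta i$, solve for the interaction matrix $A$ in the least-squares sense, and use $\mu \leftarrow \mu + A\,\Delta i$ online. The warp/sampling callables are placeholders, not the authors' implementation.

```python
import numpy as np


def learn_interaction_matrix(sample_intensities, reference, n_params,
                             n_perturb=200, scale=2.0, rng=None):
    """Offline stage: learn A such that d_mu ~= A @ d_i in the least-squares sense.

    sample_intensities(d_mu) -> intensity vector of the reference region warped by
                                the perturbed motion parameters (placeholder callable).
    reference                -> intensity vector of the reference template I(R, t0).
    """
    rng = np.random.default_rng() if rng is None else rng
    d_mu = rng.uniform(-scale, scale, size=(n_perturb, n_params))        # small perturbations
    d_i = np.stack([sample_intensities(p) - reference for p in d_mu])    # intensity errors
    # Solve d_i @ X = d_mu for X, then A = X^T, minimizing sum_j ||d_mu_j - A d_i_j||^2.
    X, *_ = np.linalg.lstsq(d_i, d_mu, rcond=None)
    return X.T


def track_step(mu, sample_intensities_at, reference, A):
    """Online update: mu(t + tau) = mu(t) + A * delta_i."""
    delta_i = sample_intensities_at(mu) - reference
    return mu + A @ delta_i
```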


B. The set of points tracker

This tracking method has been presented in [9], [1]. The tracking is done using a comparison between two sets of points: on the one hand, points extracted from the current image (discontinuity points detected by a classical edge detector, or interest points extracted by the Susan or Harris detectors); on the other hand, points given as the target model. The Hausdorff distance is used to measure the similarity between these sets of points. Other works have proposed to track only points, without applying constraints on their relative positions, by using a correlation function independently for several small windows (typically 10 x 10 pixels) selected around interest points.

A partial Hausdorff distance is used as a resemblance measurement between the target model and its predicted position in an image. Given two sets of points $P$ and $Q$, the partial Hausdorff distance is defined as:

$$H_K(P, Q) = \max\left( h_K(P, Q),\; h_K(Q, P) \right)$$

where

$$h_K(P, Q) = K^{th}_{p \in P} \min_{q \in Q} \| p - q \|$$

$\| p - q \|$ is a given distance between two points $p$ and $q$, and $K^{th}_{p \in P} f(p)$ denotes the $K$-th ranked value of $f(p)$ over the set $P$.

The function $h_K(P, Q)$ (distance from set $P$ to $Q$) is a measure of the degree to which each point in $P$ is near to some point in $Q$. The Hausdorff distance is the maximum of $h_K(P, Q)$ and $h_K(Q, P)$; it gives the distance between the most mismatched points of the two sets, rejecting the $K$ worst matchings, considered as outliers.
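A minimal NumPy sketch of the partial (rank-based) Hausdorff distance defined above; the rank fraction plays the role of rejecting the K worst matches as outliers. This is an illustrative implementation, not the code used in [1], [9].

```python
import numpy as np


def directed_partial_hausdorff(P, Q, rank_fraction=0.8):
    """h_K(P, Q): the K-th ranked value, over p in P, of min_{q in Q} ||p - q||.

    P, Q: (n, 2) and (m, 2) arrays of image points.
    rank_fraction: fraction of the best matches kept (the rest treated as outliers).
    """
    # For every point of P, distance to its closest point in Q.
    diffs = P[:, None, :] - Q[None, :, :]                       # (n, m, 2)
    nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)     # (n,)
    k = max(int(rank_fraction * len(nearest)) - 1, 0)
    return np.partition(nearest, k)[k]                          # K-th ranked value


def partial_hausdorff(P, Q, rank_fraction=0.8):
    """H_K(P, Q) = max(h_K(P, Q), h_K(Q, P)), with the worst matches rejected."""
    return max(directed_partial_hausdorff(P, Q, rank_fraction),
               directed_partial_hausdorff(Q, P, rank_fraction))
```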

At step $t+1$ of the sequence, the first task is to search for the position of the model $M_t$ in the next image $I_{t+1}$, around the predicted position. The minimum value of the unidirectional partial distance from the model to the image, $h_K(M_t, I_{t+1})$, identifies the best "position" of $M_t$ in $I_{t+1}$, under the action of some group of translations $G$. The target search is stopped at the first translation $g$ such that its associated $h_K(g(M_t), I_{t+1})$ is no larger than a given threshold, using a target search strategy that sweeps the space of possible translations along a spiral trajectory around the predicted position.

Having found the position of the model $M_t$ in the image $I_{t+1}$ of the sequence, the second task consists in updating the target model: the new model $M_{t+1}$ is built by determining which pixels of the image $I_{t+1}$ are part of the target. The model is updated by using the unidirectional partial distance from the image to the model as a criterion for selecting the subset of points of $I_{t+1}$ that belong to $M_{t+1}$. In order to allow scale changes, the scale is increased whenever there is a significant number of nonzero pixels near the boundary, and decreased in the contrary case.

The model $M_{t+1}$ can be filtered by applying some shape constraints on the target with morphological operators (for example, a rather elliptical shape to track a face). Fig. 3 presents an example of this tracker on a person.

Fig. 3. Tracking on a person: (a) image; (b) target.

C. Line tracker

The line tracker [5] is focused on straight lines; in our implementation, the tracker can search, in every image $I_{t+1}$ of a sequence, for $L$ lines $(S_j,\ j = 1, \ldots, L)$ without verifying constraints between these lines (for example, parallelism or convergence to a vanishing point).

Fig. 4. 1D correlation principle.

The method is based on 1D correlation to look for a line $S_j$ in a predicted position (figure 4). (a) Points are regularly sampled on $S_j$. (b) Every point is matched with the closest discontinuity found on the normal to the line at this point: the optimal discontinuity position is found using a correlation along this normal, with a generic step model. (c) A RANSAC fitting method generates the new line position, keeping only the aligned points and rejecting the outliers. The line is tracked if at least 50% of the sampled points are matched with aligned points.
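The sketch below illustrates step (c): given the candidate edge points found along the normals, a RANSAC loop keeps only the aligned points and returns the refitted line. The point search along the normals (steps (a) and (b)) is assumed to be done elsewhere; the function name and thresholds are illustrative, not the authors' code.

```python
import numpy as np


def ransac_line(points, n_iters=100, inlier_tol=1.5, min_inlier_ratio=0.5):
    """Fit a line to 2D candidate points, rejecting outliers.

    Returns (point_on_line, unit_direction, inlier_mask), or None when fewer than
    min_inlier_ratio of the sampled points are aligned (the line is then lost).
    """
    pts = np.asarray(points, dtype=float)
    best_mask, best_count = None, 0
    for _ in range(n_iters):
        i, j = np.random.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        normal = np.array([-d[1], d[0]])
        dist = np.abs((pts - pts[i]) @ normal)      # point-to-line distances
        mask = dist < inlier_tol
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
    if best_mask is None or best_count < min_inlier_ratio * len(pts):
        return None                                  # line considered lost
    # Refit on the inliers: centroid + principal direction (via SVD).
    inliers = pts[best_mask]
    centroid = inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(inliers - centroid)
    return centroid, vt[0], best_mask
```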

In contrast to a classical segment tracker, this method is very robust: it can also be applied to track curves defined by some analytical model in order to apply the RANSAC fitting (a quadric or a cubic curve, for example). The initial lines $(S_{j,0},\ j = 1, \ldots, L)$ are generally provided by the projection of 3D segments on the image (for example, the wall-ground or wall-ceiling edges), or by the edges of an object recognized in an initial image.

D. Snake tracker

Snakes or active contours [12] have been extensively used for object tracking. This method is based on energy minimization along a curve, which is subject to internal and external forces. The total energy for an active contour $v$ described with a parametric representation $v(s) = (x(s), y(s))$ can be written as:

$$E_{total}(v) = \int \left[ E_{int}(v(s)) + E_{ext}(v(s)) \right] ds$$

Marin et al. [15] propose to consider the active contour as the surface of an electrically charged conductor; these electrical charges generate a new internal force. As a result of these repulsive forces, the control points are redistributed along the curve with a higher density in zones of high convex curvature. The internal energy can then be written as:

$$E_{int}(v) = \int \left[ \kappa_1(s) \left( \frac{\partial v}{\partial s} \right)^2 + \kappa_2(s) \left( \frac{\partial^2 v}{\partial s^2} \right)^2 + k\,\rho^2 \right] ds$$

where $\kappa_1(s)$ and $\kappa_2(s)$ are the weights given to the elasticity and smoothness energies, $k$ is a constant and $\rho$ the charge density.

The evolution of the curve is determined by the equation of movement derived from the energy terms.
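As an illustration, here is a compact sketch of one explicit evolution step for a discrete closed snake, minimizing elasticity and smoothness terms plus an external image force. The charge-density term of [15] is omitted, and the finite-difference scheme and weight names are only indicative.

```python
import numpy as np


def snake_step(points, external_force, alpha=0.1, beta=0.05, step=0.5):
    """One gradient-descent update of a closed active contour.

    points:          (n, 2) control points of the snake.
    external_force:  callable (n, 2) -> (n, 2), e.g. image-gradient force at each point.
    alpha, beta:     elasticity and smoothness weights (kappa_1, kappa_2 above).
    """
    prev_pts = np.roll(points, 1, axis=0)
    next_pts = np.roll(points, -1, axis=0)
    # Second difference approximates d2v/ds2 (elasticity force).
    elasticity = prev_pts - 2.0 * points + next_pts
    prev2 = np.roll(points, 2, axis=0)
    next2 = np.roll(points, -2, axis=0)
    # Negative fourth difference approximates -d4v/ds4 (smoothness force).
    smoothness = -(prev2 - 4 * prev_pts + 6 * points - 4 * next_pts + next2)
    internal = alpha * elasticity + beta * smoothness
    return points + step * (internal + external_force(points))
```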

A parametric shape model (deformable template) can be defined when the object to be tracked is well known; it is then matched to an image in a way very similar to snakes [3]. Another way to introduce shape information in active contours is to apply constraints in order to deform the snake in a suitable way. Relationships between control points can be incorporated as energy terms, and multiple control points can be introduced to describe vertices or discontinuities [14].

Fig. 5 presents an example of tracking a person using a snake initialized by an operator.

Fig. 5. Tracking on a face by the snake tracker.

V. TRACKER CHARACTERIZATION

The tracking methods must be characterized, mainly regarding their initialization conditions, the confidence measurement to be used for failure detection, and the information required by the tracker selection mechanism. The selection is based on intrinsic properties of the tracker (for example, template differences tracking must use a textured target), but also on more contextual information (for example, whether the image background is textured or uniform).

A. Template differences tracker

Template differences tracking can be used only for planar textured templates, or for quasi-planar textured templates (for example, a face), when the movements do not change the planar perspective much; it is a fast method (at least 20 Hz).

A knowledge base must be built on the planar templates to be tracked, during an offline computation; the interaction matrix $A$ must be learnt off line, because the execution time is too long for on-line computation. In this case, the image from which $A$ is learnt needs to be acquired from a viewpoint close to the first online camera position.

One of the major problems with the template tracker is that, once the target is lost, it is nearly impossible for the method itself to find it again. A confidence measure is the norm of the $\mu$ vector: when this norm exceeds a threshold (typically, the largest learnt variation), it can be considered a tracker failure.

B. Set of points tracker

This tracker has good performance in non-structured environments or with deformable objects, the appearance of which may change gradually. This method is based on a matching process, so it is rather slow (from 3 to 5 Hz), but a lost target can be detected again.

The target is only defined by a region of interest in the image, without any a priori learning process. The confidence measure is given by the Hausdorff distance between the current image and the target model.

When the tracking is performed over very complex images (too much texture), some errors can happen, because the target model will be polluted by outliers.

C. Line tracker

This tracker is the exact opposite of the previous one: it has good performance with structured targets, it is quick (10 Hz), and it can detect by itself that a target is lost.

The initial conditions are a set of segments to be tracked; one problem is that, due to the RANSAC fitting, the length of these segments is always decreasing along a sequence, and some high-level information must be used to restore the segment sizes.

This method works very well with either a uniform target on a textured background, or a textured target on a uniform background. Nevertheless, if the image is too complex, some segments could be lost in spite of the RANSAC fitting. The number of points integrated in the segments gives the confidence measure.

D. Snake tracker

This tracker is normally used for deformable or free-form shapes, with high-gradient edges between the background and the target; this means that the environment cannot be very complex. The speed depends on the number of control points on the snake, but typically it runs at 12 Hz.

The control points can be initialized close to the silhouette of the target. In order to guarantee the convergence of the method, the internal weight parameters $\kappa_1$, $\kappa_2$ and $k$ can be modified to adapt to the current situation, but such an adaptation needs to be characterized very well in a learning stage.

As with the template tracker, once a snake has lost its target, it is nearly impossible to recover it without a recognition stage. We consider that a snake has lost its target when the first area moments of the enclosed region show a high variation.

VI. TRACKING TRANSITION MODEL

Once every tracker has been characterized, the system can associate a target with the best tracker, which will be able to maintain the target position along an image sequence. When it is required in a sensor-based navigation system, the decisional level will generate a target-tracking request. Depending on the target type, a tracker controller will activate the best method. The transition between different trackers can occur in two situations (a schematic controller loop is sketched after this list):

- An error is detected, either by the tracker itself, using the confidence measures described in the previous section, or by a low-frequency interpretation function (double loop paradigm).
- The tracked target is occluded or moves out of the image due to the target or camera motion. Depending on the task, another target must be selected and tracked.
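The transition logic above can be summarized as a small controller loop; the sketch below is our own schematic reading of it, with a hypothetical tracker interface (track, min_confidence, last_good_window) and target-selection helpers that are not taken from the paper.

```python
def control_loop(target, select_best, recover, frames):
    """Schematic collaborative-tracking controller.

    target:      object describing what to track; leaving_image() and next_target()
                 stand in for the decisional level providing another target.
    select_best: picks the most suitable tracker for a target type / context.
    recover:     slower but more robust tracker used to find the target again.
    Each tracker is assumed to expose track(frame) -> (estimate, confidence),
    a min_confidence threshold and last_good_window().
    """
    active = select_best(target)
    for frame in frames:
        estimate, confidence = active.track(frame)
        if estimate is None or confidence < active.min_confidence:
            # Failure detected through the tracker's own confidence measure:
            # switch to the recovery tracker, initialized from the last good window.
            recover.init_from(active.last_good_window())
            estimate, confidence = recover.track(frame)
            active = recover
        elif target.leaving_image(estimate):
            # Target occluded or moving out of view: select another target and
            # activate the tracker best suited to it.
            target = target.next_target()
            active = select_best(target)
        yield estimate
```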

Two examples are presented in figures 6 and 7: the red lines are the boundaries of the predicted position, the green lines show the tracker results.

A. Recovering from a tracking error

Fig. 6 presents a transition due to a tracker failure. The target is a poster, moved here by an operator, on a very complex background. The template differences tracker is selected for this planar target. Tracking is correct on the first 60 images ((a) to (d)), but due to the background, the computed $\Delta\mu$ on image 62 is wrong (e) and the tracker diverges totally on image 65 (f). The set of points tracker is then activated, using as initial poster model the interest points extracted from the last good window detected by the template tracker (g). Thanks to the Hausdorff distance, the poster model is found and the tracking can continue at a low frequency (h). As the template differences tracker is faster, the tracker controller will activate it again and it will be executed concurrently with the set of points tracker during some frames, to verify that it converges again.

B. Transition between targets

Fig. 7 presents a transition between trackers and between landmarks: the robot is navigating in a corridor. When it enters the visibility area of the poster on the left wall, it tracks this poster in figures (a) to (c), and concurrently computes the relative camera-poster position, and then the robot-environment position, using a metric map built off line. We use the template differences tracker in this sequence; the line tracker could also be used, because the wall is uniform.

When the robot goes forward, the boundary of the poster region in the image comes closer to the image limit; when it is too close in (d), the tracker controller selects another target to be tracked, here the lines corresponding to the wall-ceiling edges. The line tracker is initialized with the projections of the two 3D edges on the current image: the two trackers are executed concurrently over 10 frames (because the template tracker is very fast), and then the poster tracking is stopped to avoid a failure due to the poster leaving the image.

VII. CONCLUSIONS AND FUTURE WORK

This paper has described the tracking module integrated on our mobile robots. Tracking is required for several tasks a robot must perform: object following (another robot, a person), visual-based navigation (trajectory defined as a sequence of visual servoing commands) or obstacle following during the execution of a motion (trajectory defined as a curve on the ground).

A collaborative tracking method has been proposed so that the robot can select the best tracking method depending on the target type or on some contextual knowledge (nature of the background, etc.). A target controller has been described. This module will request tracker switches: either a target is lost by one quick but not so robust method and a more robust method is activated to find the target again using local image measurements, or a target switch is necessary, for example because the robot must turn around an edge after a corridor following execution. Every target is associated with the most adapted tracker and with a recovery procedure in case of failure.

Only sequential executions are proposed; another strategy could be to activate several tracking methods concurrently when a target must be tracked, and to fuse, at the control level, the different target representations and positions. This solution could be the best one, but at this time it is not compatible with the real-time constraints.

VIII. REFERENCES

[1] V. Ayala-Ramírez, C. Parra, and M. Devy. Active tracking based on Hausdorff matching. In Proc. of ICPR'2000, volume 4, pages 706–709, Barcelona, Spain, September 2000. IAPR.

[2] J. P. Barreto, P. Peixoto, J. Batista, and H. Araujo. Robust vision for vision-based control of motion, chapter Evaluation of the robustness of visual behaviors through performance characterization. IEEE Press, 1999.

[3] A. Blake and M. Isard. Active Contours. Springer, 1998.

[4] S. Chroust, J. Barreto, H. Araujo, and M. Vincze. Comparison of control structures for visual servoing. In Proc. ICAR'2001, pages 235–240, Budapest, August 2001.

[5] T. Drummond and R. Cipolla. Real-time visual tracking of complex structures. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(7):932–946, July 2002.

[6] N. J. Ferrier. Workshop on Modelling and Planning for Intelligent Systems, chapter Active control of resolution for stable visual tracking. LNCS. Springer, 1999.

[7] L. G. Guibas et al. Model building, target finding and target tracking: Performance data on the Stanford TMR Project. Monthly report of the Tactical Mobile Robots project, Stanford Robotics Lab, August 1999.

[8] G. Hager and P. Belhumeur. Efficient region tracking with parametric models of geometry and illumination. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(10):1025–1039, October 1998.

[9] D. Huttenlocher, G. Klanderman, and W. J. Rucklidge. Visually guided navigation by comparing two dimensional edge images. Technical Report TR-94-1407, Stanford University, Stanford, California, January 1994.


Fig. 6. A transition after the detection of a tracker failure ((a)-(h)).

Fig. 7. A transition between tracked features: a poster, then corridor corners ((a)-(f)).

[10] A. D. Jepson, D. J. Fleet, and T. F. El-Maraghi. Robust online appearance models for visual tracking. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'01), volume 1, pages 415–422, Kauai, 2001. IEEE Computer Society.

[11] F. Jurie and M. Dhome. A simple and efficient matching algorithm. In Proc. Int. Conference on Computer Vision (ICCV'99), pages 265–275, 1999.

[12] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. International Journal of Computer Vision, 1(4):321–331, 1988.

[13] H. Kruppa, M. Spengler, and B. Schiele. Context-driven model switching for visual tracking. In Proc. 8th Int. Symp. on Intelligent Robotic Systems (SIRS'00), 2000.

[14] A. Marin-Hernandez and M. Devy. Model-based active contour for real time tracking. In Proc. International Symposium on Robotics and Automation 2002 (ISRA'02), Toluca, Mexico, September 2002.

[15] A. Marin-Hernandez and H. Rios-Figueroa. Eels: Electric snakes. Computación y Sistemas, 2:87–94, 1999.

[16] C. Rasmussen and G. Hager. Probabilistic data association methods for tracking complex visual objects. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(6):560–576, June 2001.

[17] K. Toyama and G. Hager. Incremental focus of attention. In Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition. IEEE Computer Society, 1996.

