http://www.diva-portal.org

This is the published version of a paper presented at the 13th International Conference on Informatics in Control, Automation and Robotics, Lisbon, Portugal, 29-31 July, 2016.

Citation for the original published paper:

Mashad Nemati, H., Gholami Shahbandi, S., Åstrand, B. (2016) Human Tracking in Occlusion based on Reappearance Event Estimation. In: Oleg Gusikhin, Dimitri Peaucelle & Kurosh Madani (ed.), ICINCO 2016: 13th International Conference on Informatics in Control, Automation and Robotics: Proceedings, Volume 2 (pp. 505-511). SCITEPRESS

N.B. When citing this work, cite the original published paper.

Permanent link to this version: http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-31709

Human Tracking in Occlusion based on Reappearance Event Estimation

Hassan M. Nemati, Saeed Gholami Shahbandi and Bjorn Astrand
School of Information Technology, Halmstad University, Halmstad, Sweden

{hassan.nemati, saesha, bjorn.astrand}@hh.se

Keywords: Detection and Tracking Moving Objects, Extended Kalman Filter, Human Tracking, Occlusion, Intelligent Vehicles, Mobile Robots.

Abstract: Relying on the commonsense knowledge that the trajectory of any physical entity in the spatio-temporal domain is continuous, we propose a heuristic data association technique. The technique is used in conjunction with an Extended Kalman Filter (EKF) for human tracking under occlusion. Our method is capable of tracking moving objects, maintaining their state hypotheses even during periods of occlusion, and associating a target reappearing from occlusion with its existing hypothesis. The technique relies on estimating the reappearance event both in time and location, accompanied by an alert signal that enables more intelligent behavior (e.g. in path planning). We implemented the proposed method and evaluated its performance on real-world data. The results validate the expected capabilities, even in the case of tracking multiple humans simultaneously.

1 INTRODUCTION

An autonomous mobile robot is expected not only to self-localize and navigate through the environment, but also to perceive and understand the dynamics of its surroundings in order to avoid collisions. Becoming aware of moving objects is a contributing factor to this objective. From this perspective, object detection and tracking is a crucial requirement for safe operation, especially in environments where humans and robots share the work space. In the context of intelligent vehicles, human detection and tracking is an important concern for automation safety. Examples of such contexts are self-driving cars in cities and auto-guided lift trucks in warehouses (see figure 1). The challenge is often referred to as Detection and Tracking Moving Objects (DTMO), which also contributes to the performance of SLAM algorithms (Wang and Thorpe, 2002).

Related Works: Various techniques and algorithms have been proposed for movement and human detection using different types of sensors. Some approaches are based on 2D and 3D laser scanners (Castro et al., 2004), (Xavier et al., 2005), (Arras et al., 2007), (Nemati and Astrand, 2014), (Lovas and Barsi, 2015), visual sensors and cameras (Papageorgiou and Poggio, 2000), (Dalal and Triggs, 2005), (Tuzel et al., 2007), (Schiele et al., 2009), (Zitouni et al., 2015), or a combination of both laser scanners and visual sensors (Zivkovic and Krose, 2007), (Arras and Mozos, 2009), (Linder and Arras, 2016).

The accuracy, robustness, and metric results of range scanner sensors make them more suitable, and consequently the favorable choice. The number of employed sensors varies depending on the approach. For instance, (Mozos et al., 2010) uses three laser scanners at different heights to detect a human's legs, torso, and head. Carballo et al. (Carballo et al., 2009) employ a double-layered laser scanner to detect legs and torso. In these approaches, the pose of the tracked object (what we call the “target”) is estimated after fitting a model to the sensors' measurements. While using multiple sensors improves the performance of the system, it comes with a trade-off in the cost and maintenance of more sensors.

For many tasks, such as path planning and obstacle avoidance, detection of the objects alone does not suffice. Such tasks require the ability to track the motion and predict the future state of the moving objects. In order to reliably track a moving object's motion, one must tackle different challenges, such as a non-deterministic motion model and occlusion. Under such circumstances, using a probabilistic framework is crucial (Schulz et al., 2001).

The human tracking problem can be reduced to a search task and formulated as an optimization problem. Accordingly, it can be treated as either a deterministic or a stochastic problem. In the deterministic methods, the tracking results are often obtained by optimizing an objective function based on distance, similarity or classification measures. The Kanade-Lucas-Tomasi algorithm (Lucas et al., 1981), the mean-shift tracking algorithm (Comaniciu et al., 2003), the Kalman Filter (Fod et al., 2002), (Castro et al., 2004), (Xavier et al., 2005), (Le Roux, 1960), and the Extended Kalman Filter (EKF) (Grewal and Andrews, 1993), (Kalman, 1960), (Welch and Bishop, 2004), (Kmiotek and Ruichek, 2008), (Rebai et al., 2009) are some examples of deterministic methods. Stochastic methods, on the other hand, usually optimize the objective function by considering observations over multiple scans using a Bayesian rule, which improves robustness over deterministic methods. The condensation algorithm (Isard and Blake, 1998) and Particle Filters (PF) (Schulz et al., 2001), (Thrun et al., 2005), (Almeida et al., 2005), (Braunl, 2008), (Arras and Mozos, 2009) are some examples of stochastic approaches.

(a) occlusion in an urban scenario (b) occlusion in a warehouse scenario
Figure 1: Two examples where the target (tracked object) disappears momentarily.

Observability of the target has often been an underlying assumption in the formulation of the human tracking problem. A few works have taken into account the problem of partially occluded targets (Arras et al., 2008), (Leigh et al., 2015), but not the problem of a fully occluded target. Figure 1 demonstrates two scenarios where the target disappears and might reappear again from behind an obstacle. In these examples, the target momentarily becomes hidden from the laser scanner due to obstacles in between. If the autonomous vehicle (what we call the “agent”) is not capable of maintaining the hypothesis associated with the to-be-occluded target, the probable reappearance event might surprise the agent.

Our Approach: In this paper we propose a novel approach for human (“target”) tracking under occlusion. Our heuristic method relies on the commonsense knowledge that the trajectory of any physical entity in the spatio-temporal domain is continuous. By detecting occluded regions caused by stationary objects, the method is able to associate an upcoming occlusion event with a target. The time and location of the reappearance event are estimated according to the last observed velocity and direction of the target, and the relative locations of the agent, the occluding obstacle and the target. Awareness of the upcoming occlusion event provides probabilistic insight into the future state of the target, improving the data association between the observation of the reappeared target and the agent's hypothesis of the target's state. Our approach enables the agent to detect the occlusion event and handle the upcoming reappearance event more reliably, and consequently improves decisions towards safe operation, such as adaptive path planning to avoid collision.

In the rest of this paper, we review the architecture of a DTMO system in section 2. This section also contains a full description of our proposed method, along with its integration with an EKF algorithm. The performance of our proposal is evaluated in section 3 through a series of real-world experiments with different setups. In the last section, we conclude the paper by reviewing the advantages and limitations of our method, and presenting our plans for future work.

2 APPROACH

A general DTMO system for human detection and tracking is composed of several sequential steps: i) segmentation of objects; ii) modeling of the objects; iii) human detection; iv) pose estimation; and v) associating a hypothesis with each target's state and tracking the hypothesis. The first step is the segmentation of the sensor's measurements into a set of objects based on the connectivity of data points (Castro et al., 2004), (Xavier et al., 2005), (Premebida and Nunes, 2005). The segmentation step is followed by modeling each distinct object with a geometrical model of a line or a circle. In order to identify potential targets (i.e. humans), we take two criteria into consideration: i) size; and ii) motion. The length of the lines, or the diameter of the circles, denotes the size of the objects. Small objects with a length less than an empirical threshold (50 cm) are classified as potential targets (i.e. human legs). If two legs are close enough to each other (less than 40 cm), they are grouped into a single target. The position of the human body is then calculated as the mean of the centers of gravity of the two legs. Additionally, we consider the motion of the potential targets to distinguish between stationary and moving objects. The motion of each object is estimated over consecutive frames of measurement. Associating a hypothesis with a target's state, and tracking the hypothesis through observations, makes it possible to predict the next position of the target and consequently to alleviate the consequences of occlusion.
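As a concrete sketch of the size criterion above: the 50 cm and 40 cm thresholds are taken from the text, but the function names, the endpoint-based size estimate, and the greedy pairing are illustrative assumptions, not the authors' implementation.

```python
import math

# Illustrative thresholds from the text: segments shorter than 50 cm are
# potential legs; two legs closer than 40 cm are grouped into one target.
LEG_MAX_SIZE = 0.50   # meters
PAIR_MAX_GAP = 0.40   # meters

def centroid(points):
    """Center of gravity of a segment's scan points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def segment_size(points):
    """Approximate segment size as the distance between its endpoints."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    return math.hypot(x1 - x0, y1 - y0)

def detect_targets(segments):
    """Pair nearby leg-sized segments into human candidates.

    `segments` is a list of point lists from the segmentation step.
    Returns a list of (x, y) positions, each the mean of two leg centroids.
    """
    legs = [centroid(s) for s in segments if segment_size(s) < LEG_MAX_SIZE]
    targets, used = [], set()
    for i, (xi, yi) in enumerate(legs):
        for j in range(i + 1, len(legs)):
            if i in used or j in used:
                continue
            xj, yj = legs[j]
            if math.hypot(xi - xj, yi - yj) < PAIR_MAX_GAP:
                targets.append(((xi + xj) / 2, (yi + yj) / 2))
                used.update({i, j})
    return targets
```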

The most common approach to the tracking problem is the Kalman filter, and in nonlinear situations the EKF. The general procedure of EKF tracking is illustrated in Figure 2. The EKF procedure starts by associating each target with a single modal hypothesis of that target's state. In other words, the hypothesis is the agent's “belief” about the target's state. The state includes the pose and velocity of the target. Each hypothesis carries an uncertainty denoted by a covariance matrix. The uncertainty indicates how sure the agent is of the target's state. Based on the motion model and the hypothesis, the EKF predicts the next state of the target. The EKF then updates the hypothesis with new sensor measurements (observations), and consequently decreases the uncertainty.

Figure 2: Extended Kalman Filter.
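The predict/update cycle described above can be sketched with a linear, constant-velocity Kalman filter; the paper's EKF linearizes a nonlinear model, but the structure of the cycle is the same. All matrices and noise values below are assumed placeholders.

```python
import numpy as np

dt = 0.1  # scan period in seconds (an assumed value)

# Constant-velocity state [x, y, vx, vy]; the paper's EKF would linearize
# nonlinear motion/observation models, but the cycle is identical.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = np.eye(4) * 1e-3                        # process noise (assumed)
R = np.eye(2) * 1e-2                        # measurement noise (assumed)

def predict(x, P):
    """Propagate the hypothesis; the uncertainty P grows by Q."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the hypothesis with an observation z; uncertainty shrinks."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

During occlusion only `predict` runs, so the covariance grows every scan; this is precisely the behavior the proposed method modifies.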

2.1 Hypothesis Tracking in Occlusion

The challenge arises when the agent cannot observe the target (loses visibility) and is therefore unable to update its hypothesis. The consequence of occlusion is an increase of uncertainty over the iterations of the prediction stage without update cycles (see figure 3b). The increase in uncertainty makes it impossible for the agent to recover its hypothesis even after the reappearance of the target. That is to say, the agent would not be able to associate new observations of the reappeared target with the existing, but highly uncertain, hypothesis. More importantly, the agent would become less aware of the target's location as time passes.

Here we propose an approach to maintain the hypothesis associated with an occluded target, so that the agent becomes aware of the upcoming reappearance event. The novelty in this work is the heuristic assumption that our method relies on: the continuity of the target's trajectory in the spatio-temporal domain. In this view, if an object becomes invisible to an observer, it is out of sight or occluded, but that does not mean it has disappeared to “nowhere”. By formulating the occlusion event and its consequences in terms of observations, our approach enables the agent to detect the occlusion event, and consequently handle the upcoming reappearance event more reliably.

Occluded Region: Every stationary object counts as an obstacle which can cause an occluded region. Each occluded region is bounded by the causing obstacle, the lines of sight from the agent, and the range of the sensor (see figure 3a). Note that in the results section, we only highlight, among all the occluded regions, the potential candidate in which the target is about to hide.
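A minimal geometric test for membership in such an occluded region might look as follows; representing the obstacle as a list of scan points, and ignoring angle wrap-around at ±π, are simplifying assumptions of this sketch.

```python
import math

def in_occluded_region(agent, obstacle_pts, point, sensor_range):
    """Check whether `point` lies in the region occluded by an obstacle.

    The region is bounded by the obstacle itself, the lines of sight from
    the agent grazing the obstacle's extremes, and the sensor range.
    All inputs are (x, y) in a common frame; angle wrap-around at +-pi
    is not handled in this sketch.
    """
    def polar(p):
        dx, dy = p[0] - agent[0], p[1] - agent[1]
        return math.hypot(dx, dy), math.atan2(dy, dx)

    r_pt, a_pt = polar(point)
    if r_pt > sensor_range:
        return False                      # beyond the sensor range
    polar_obs = [polar(p) for p in obstacle_pts]
    a_min = min(a for _, a in polar_obs)  # lines of sight bounding
    a_max = max(a for _, a in polar_obs)  # the angular extent
    r_obs = min(r for r, _ in polar_obs)  # nearest obstacle point
    # Occluded: inside the angular extent and behind the obstacle.
    return a_min <= a_pt <= a_max and r_pt > r_obs
```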

Hypothesis Tracking: The tracking algorithm is modified to update the hidden target's state (i.e. pose = (x, y, θ)) based on the last observed direction and velocity of the target. Instead of expanding the uncertainty region associated with the state of the hidden target in every direction (the default in EKF), the modified updating procedure increases the uncertainty only along the line of sight (see figures 3b and 3c). This imposed restriction on the uncertainty update improves the agent's ability to associate the target at reappearance with its existing hypothesis. This in turn provides the agent with more knowledge of its surroundings (i.e. occluded targets), which can improve decisions on path planning and obstacle avoidance if required. In addition, the modification does not demand any extra computation.

(a) occlusion and events (b) default prediction (c) modified prediction
Figure 3: In this illustration we present an occlusion scenario in detail. Figure 3a shows the relative locations of the agent, target and the occluding obstacle; the concepts of “occluded region” and “lines of sight” are also depicted. Figures 3b and 3c represent the difference between the default hypothesis tracking of the EKF and our proposed method.

Bounding Hypothesis Location: We pose a constraint on the estimated position of the occluded target. The bound of this constraint is the estimated reappearance location on the line of sight (see figure 3a). This is also motivated by the continuity of the target's trajectory in the spatio-temporal domain. In other words, the target is not expected to appear anywhere far from the line of sight. We do not address the possibility of drastic changes in the motion model (i.e. velocity and trajectory) in this paper.

In this case the covariance matrix R (in the correction phase of the EKF) is modified to R_h based on the bounding hypothesis, and is expanded along the line of sight (see figure 3a) according to the following formulas:

\[
R_h = \mathrm{Rot}(\theta)\cdot\begin{bmatrix}\sigma_x^2 & \sigma_x\sigma_y\\ \sigma_x\sigma_y & \sigma_y^2\end{bmatrix},
\qquad
\mathrm{Rot}(\theta) = \begin{bmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{bmatrix},
\qquad
\theta = \arctan\!\left(\frac{y_{obstacle}-y_{agent}}{x_{obstacle}-x_{agent}}\right)
\]

where σ_x = λ·σ_y, with λ > 1 varying depending on the distance between the agent and the target.
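The formula above can be sketched in code as follows. The value λ = 2.0 is an arbitrary placeholder (the paper only states that λ > 1 varies with the agent-target distance), and the trailing Rot(θ)ᵀ factor is an assumption added here so that R_h stays a symmetric covariance, as the EKF correction requires.

```python
import math
import numpy as np

def bounded_measurement_cov(agent, obstacle, sigma_y, lam=2.0):
    """Measurement covariance R_h stretched along the line of sight.

    theta is the bearing from the agent to the occluding obstacle;
    sigma_x = lam * sigma_y with lam > 1 (lam=2.0 is a placeholder).
    The trailing rot.T is an assumption: it keeps R_h symmetric.
    """
    theta = math.atan2(obstacle[1] - agent[1], obstacle[0] - agent[0])
    rot = np.array([[math.cos(theta), -math.sin(theta)],
                    [math.sin(theta),  math.cos(theta)]])
    sx = lam * sigma_y
    core = np.array([[sx**2,         sx * sigma_y],
                     [sx * sigma_y,  sigma_y**2 ]])
    return rot @ core @ rot.T
```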

Hypothesis Rejection: When should the agent reject the hypothesis associated with a hidden target? We consider two criteria for rejecting a hypothesis: i) when the agent moves and the estimated trajectory of the target in the occluded region becomes visible; and ii) an upper bound in time (thr), relative to the estimated occlusion period (see figure 4). The hypothesis rejection time depends on the estimate of the reappearance event, which in turn is adaptive to the relative pose of the agent and the obstacle. Consequently, we can assume that the first criterion for rejecting the hypothesis is subsumed by the second.
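The two criteria reduce to a simple test; the function name and signature here are hypothetical, not from the paper.

```python
def should_reject(t, t_hr, trajectory_visible):
    """Rejection test for an occluded target's hypothesis.

    Criterion (i): the target's estimated trajectory inside the occluded
    region has become visible (e.g. the agent moved) with no matching
    observation. Criterion (ii): the time upper bound t_hr, derived from
    the estimated occlusion period, has passed. As noted in the text,
    (i) is effectively subsumed by (ii), since t_hr adapts to the
    relative pose of the agent and the obstacle.
    """
    return trajectory_visible or t > t_hr
```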

Alert Signal: Furthermore, for more reliable behavior, we estimate the location and the time of the reappearance event for the hidden target. This estimate is accompanied by an alert signal which warns the agent of the reappearance event beforehand. The signal is defined as:

\[
\mathrm{Alert}_a(t) =
\begin{cases}
\dfrac{t - t_o}{t_r - t_o} & t_o < t < t_r\\[4pt]
1 & t_r < t < t_{hr}\\
0 & \text{otherwise}
\end{cases}
\]
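A direct transcription of the piecewise signal, assuming the ramp rises from 0 at the occlusion time t_o to 1 at the estimated reappearance time t_r:

```python
def alert(t, t_o, t_r, t_hr):
    """Alert signal for an occluded target (cf. Figure 4c).

    Ramps from 0 to 1 between the occlusion time t_o and the estimated
    reappearance time t_r, stays at 1 until the hypothesis-rejection
    time t_hr, and is 0 otherwise.
    """
    if t_o < t < t_r:
        return (t - t_o) / (t_r - t_o)
    if t_r < t < t_hr:
        return 1.0
    return 0.0
```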

3 EXPERIMENTS AND RESULTS

To evaluate the performance of our proposed approach, several experiments with multiple moving objects in different situations are performed. Even though the experiments are done from the perspective of a stationary agent's sensor, the method proposed in this paper is intended to be employed by mobile agents. The conversion between the two cases can simply be done by transforming observations from an ego-centric to a world-centric reference frame. The experiments were carried out at the Halmstad University laboratory. The laser scanner (SICK S300) is mounted at a height of about 30 cm above the ground, and two obstacles are placed in the sensor's field of view.

(a) events of interest (b) visibility of the target (c) the alert signal
Figure 4: A temporal analysis of the occlusion scenario. In (a) the two events of occlusion and reappearance are marked by upward arrows: to is the time of occlusion, tr is the estimated time of reappearance, and thr is the time of hypothesis rejection. (b) shows the visibility of the target from the agent's point of view with respect to time, and (c) illustrates the proposed “alert signal” that could be used for cautious behavior planning of the autonomous vehicle.
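The ego-centric to world-centric conversion mentioned above amounts to a single planar (SE(2)) transform; this sketch assumes the agent's pose is given as (x, y, heading).

```python
import math

def ego_to_world(agent_pose, obs_ego):
    """Convert an observation from the agent's ego-centric frame to the
    world frame, so the tracking method also applies to a moving agent.

    agent_pose: (x, y, heading) of the agent in the world frame.
    obs_ego:    (x, y) of the detected target in the agent's frame.
    """
    ax, ay, th = agent_pose
    ox, oy = obs_ego
    # Rotate the observation by the agent's heading, then translate.
    return (ax + ox * math.cos(th) - oy * math.sin(th),
            ay + ox * math.sin(th) + oy * math.cos(th))
```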

In the first experiment, a person is moving in the sensor's field of view and walking behind an obstacle. Two examples of applying our approach in this setup are shown in figures 5a and 5b.

In the second setup, two people are moving in the field. In the case of multiple targets, one target might be occluded by the other. Such occlusion is known as a partially occluded situation in the tracking and detection domain. Therefore, in these experiments a target might become hidden partly (by the other target) or fully (by an obstacle) from the laser scanner. Figures 5c and 5d show successful tracking results applying our approach under these circumstances.

In the next experiment, a more complex situation is investigated, where multiple people are walking in the sensor's field of view. Figure 5e shows an example of three people walking in the environment. The results verify how efficiently our approach can handle human tracking in occlusion situations.

The last experiment is devoted to one of the most challenging situations, in which two targets are approaching each other and the meeting point is hidden behind an obstacle. In fact, they pass each other while hidden by the obstacle, so the sensor does not receive any measurements to update its hypotheses. The question is whether the approach can correctly continue tracking the targets after they reappear, and associate them with the correct hypotheses. The result, shown in figure 5f, verifies that both targets are correctly tracked as they reappear from behind the obstacle.

In classic EKF tracking, when the target is hidden for a certain time, the uncertainty area of the occluded target increases at each scan. This results in a drastic drop of certainty before the target reappears from behind the obstacle, and consequently it is nearly impossible for the system to recover. In addition, any change in the target's velocity results in a mismatch between the target's location and the hypothesis. In such cases, even if the uncertainty is not too high, the association would fail due to the mismatch between the target's real location and the hypothesis.

Our proposed approach does not fall into the same pitfall, since the hypothesis is not expanded as in the classical EKF. Instead, the hypothesis is adjusted, with an increased uncertainty but only in the direction of the sensor's line of sight. Furthermore, according to the aforementioned heuristic (trajectory continuity in the spatio-temporal domain), the location of the hypothesis is adjusted to where the object is estimated to reappear. This improves the ability of the intelligent vehicle to associate the person at reappearance with its existing hypothesis.

4 CONCLUSION

In an environment shared between humans and autonomous vehicles, detection and tracking of moving objects is one of the most crucial requirements for safe path planning (i.e. collision avoidance). This challenge becomes more of a concern when a moving object hides momentarily behind an obstacle. We tackled this problem by proposing a novel approach to maintaining hypotheses under occlusion. Our approach detects the occlusion event and predicts the reappearance event both in time and location. Relying on this information, the agent is able to maintain the state hypothesis of the occluded target. Consequently, the agent becomes aware of the upcoming reappearance and can take appropriate action to avoid hazards. The performance of the approach is evaluated through a series of real-world experiments. The experiments vary in environment configuration, number of targets and their trajectories. Our proposed approach can handle human tracking in occlusion situations more efficiently than the classical EKF. It does not require additional computational power, which makes it faster than a PF, especially in crowded environments with multiple targets. Due to the confined space of common robotic labs, our experimental results could not reflect the ability to tackle the problem in the case of a moving agent. Nevertheless, our proposed approach can be used by moving agents, using a conversion of observations from the ego-centric to the world-centric frame of reference. A video compilation of the results is available at https://www.youtube.com/watch?v=16TyoN-LxzA.

(a) One moving human and two obstacles (b) One moving human and two obstacles
(c) Two moving humans and two obstacles (d) Two moving humans and two obstacles
(e) Multiple moving humans and two obstacles (f) Two moving humans and one obstacle
Figure 5: In each sub-figure, the laser scanner's field of view, with the result of object segmentation and modeling, is shown on the left-hand side (single bigger image). The tracking results based on the proposed method are shown on the right in a column of two plots; the bottom one also shows the uncertainties associated with the hypotheses. Black dots are the scan points, dashed red and green lines are static objects, the red area is the occluded region, the blue circle is the estimated human position, black ellipses are the uncertainty regions for each scan, and the blue, cyan, and green lines are the tracking results.

Prospective: In future work we plan to improve our target tracking system from different perspectives. We plan to employ more advanced techniques for estimating the trajectory of the hidden target, instead of the straight-line trajectory assumption. In addition, we will integrate a mutation-based trajectory bifurcation to expand the hypothesis over a trellis, to account for possible radical changes in the trajectory of the target. Furthermore, we plan to implement a more comprehensive system, composed of different sensory modalities, so that the data association can benefit from visual cues. The continuation of this work will be evaluated over more experimental results, which we plan to collect in a real warehouse environment.

REFERENCES

Almeida, A., Almeida, J., and Araujo, R. (2005). Real-time tracking of multiple moving objects using particle filters and probabilistic data association. AUTOMATIKA: casopis za automatiku, mjerenje, elektroniku, racunarstvo i komunikacije, 46(1-2):39-48.

Arras, K. O., Grzonka, S., Luber, M., and Burgard, W. (2008). Efficient people tracking in laser range data using a multi-hypothesis leg-tracker with adaptive occlusion probabilities. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages 1710-1715. IEEE.

Arras, K. O. and Mozos, O. M. (2009). People detection and tracking.

Arras, K. O., Mozos, O. M., and Burgard, W. (2007). Using boosted features for the detection of people in 2D range data. In Robotics and Automation, 2007 IEEE International Conference on, pages 3402-3407. IEEE.

Braunl, T. (2008). Embedded robotics: mobile robot design and applications with embedded systems. Springer Science & Business Media.

Carballo, A., Ohya, A., and Yuta, S. (2009). Multiple people detection from a mobile robot using double layered laser range finders. In ICRA Workshop.

Castro, D., Nunes, U., and Ruano, A. (2004). Feature extraction for moving objects tracking system in indoor environments. In Proc. 5th IFAC/EURON Symposium on Intelligent Autonomous Vehicles. Citeseer.

Comaniciu, D., Ramesh, V., and Meer, P. (2003). Kernel-based object tracking. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 25(5):564-577.

Dalal, N. and Triggs, B. (2005). Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 886-893. IEEE.

Fod, A., Howard, A., and Mataric, M. J. (2002). A laser-based people tracker. In Robotics and Automation, 2002. Proceedings. ICRA'02. IEEE International Conference on, volume 3, pages 3024-3029. IEEE.

Grewal, M. S. and Andrews, A. P. (1993). Kalman filtering: theory and practice.

Isard, M. and Blake, A. (1998). Condensation: conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1):5-28.

Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35-45.

Kmiotek, P. and Ruichek, Y. (2008). Representing and tracking of dynamics objects using oriented bounding box and extended Kalman filter. In Intelligent Transportation Systems, 2008. ITSC 2008. 11th International IEEE Conference on, pages 322-328. IEEE.

Le Roux, J. (1960). An introduction to Kalman filtering: Probabilistic and deterministic approaches. ASME J. Basic Engineering, 82:34-45.

Leigh, A., Pineau, J., Olmedo, N., and Zhang, H. (2015). Person tracking and following with 2D laser scanners. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 726-733. IEEE.

Linder, T. and Arras, K. O. (2016). People detection, tracking and visualization using ROS on a mobile service robot. In Robot Operating System (ROS), pages 187-213. Springer.

Lovas, T. and Barsi, A. (2015). Pedestrian detection by profile laser scanning. In Models and Technologies for Intelligent Transportation Systems (MT-ITS), 2015 International Conference on, pages 408-412. IEEE.

Lucas, B. D., Kanade, T., et al. (1981). An iterative image registration technique with an application to stereo vision. In IJCAI, volume 81, pages 674-679.

Mozos, O. M., Kurazume, R., and Hasegawa, T. (2010). Multi-part people detection using 2D range data. International Journal of Social Robotics, 2(1):31-40.

Nemati, H. and Astrand, B. (2014). Tracking of people in paper mill warehouse using laser range sensor. In Modelling Symposium (EMS), 2014 European, pages 52-57. IEEE.

Papageorgiou, C. and Poggio, T. (2000). A trainable system for object detection. International Journal of Computer Vision, 38(1):15-33.

Premebida, C. and Nunes, U. (2005). Segmentation and geometric primitives extraction from 2D laser range data for mobile robot applications. Robotica, 2005:17-25.

Rebai, K., Benabderrahmane, A., Azouaoui, O., and Ouadah, N. (2009). Moving obstacles detection and tracking with laser range finder. In Advanced Robotics, 2009. ICAR 2009. International Conference on, pages 1-6. IEEE.

Schiele, B., Andriluka, M., Majer, N., Roth, S., and Wojek, C. (2009). Visual people detection: Different models, comparison and discussion. In IEEE International Conference on Robotics and Automation.

Schulz, D., Burgard, W., Fox, D., and Cremers, A. B. (2001). Tracking multiple moving targets with a mobile robot using particle filters and statistical data association. In Robotics and Automation, 2001. Proceedings 2001 ICRA. IEEE International Conference on, volume 2, pages 1665-1670. IEEE.

Thrun, S., Burgard, W., and Fox, D. (2005). Probabilistic robotics. MIT Press.

Tuzel, O., Porikli, F., and Meer, P. (2007). Human detection via classification on Riemannian manifolds. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1-8. IEEE.

Wang, C.-C. and Thorpe, C. (2002). Simultaneous localization and mapping with detection and tracking of moving objects. In Robotics and Automation, 2002. Proceedings. ICRA'02. IEEE International Conference on, volume 3, pages 2918-2924. IEEE.

Welch, G. and Bishop, G. (2004). An introduction to the Kalman filter. Technical report, TR 95-041, University of North Carolina at Chapel Hill, Department of Computer Science.

Xavier, J., Pacheco, M., Castro, D., Ruano, A., and Nunes, U. (2005). Fast line, arc/circle and leg detection from laser scan data in a player driver. In Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on, pages 3930-3935. IEEE.

Zitouni, M. S., Dias, J., Al-Mualla, M., and Bhaskar, H. (2015). Hierarchical crowd detection and representation for big data analytics in visual surveillance. In Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on, pages 1827-1832. IEEE.

Zivkovic, Z. and Krose, B. (2007). Part based people detection using 2D range data and images. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, pages 214-219. IEEE.