Lecture 8: Map Building 1

TRANSCRIPT

  • Slide 1/26

    CpE 521

    Part 2

    Yan Meng

    Department of Electrical and Computer Engineering

    Stevens Institute of Technology

  • Slide 2/26

    Today's Content

    Probabilistic map-based localization

    Kalman filter localization

    Other examples of localization systems

    Landmark-based localization

    Position beacon systems

    Autonomous map building

    Stochastic map technique

  • Slide 3/26

    General robot localization problem and solution strategy

    Consider a mobile robot moving in a known environment.

    As it starts to move, say from a precisely known location, it might

    keep track of its location using odometry.

    However, after a certain movement, the robot will get very uncertain about its position. It therefore has to update its position estimate using an observation of its environment.

    The observation leads to an estimate of the robot's position which can be fused with the odometric estimate to get the best possible update of the robot's actual position.

  • Slide 4/26

    Two-step robot position update

    Action update: action model ACT

        s'_t = Act(o_t, s_{t-1})

    with o_t: encoder measurement, s_{t-1}: prior belief state

    Action updates increase uncertainty.

    Perception update: perception model SEE

        s_t = See(i_t, s'_t)

    with i_t: exteroceptive sensor input, s'_t: belief state after the action update

    Perception updates decrease uncertainty.

  • Slide 5/26

    Probabilistic Map-Based Localization: Problem Statement

    Given

    the position estimate p(k|k) and its covariance Σ_p(k|k) for time k,

    the current control input u(k),

    the current set of observations Z(k+1), and

    the map M(k),

    compute the new position estimate p(k+1|k+1) and its covariance Σ_p(k+1|k+1).

    Such a procedure usually involves five steps:

  • Slide 6/26

    The Five Steps for Map-Based Localization

    [Block diagram: position prediction (from encoder data), observation (on-board sensors), measurement prediction (from the map database), matching of predicted and actual observations (YES branch), and estimation (fusion) producing the position estimate.]

    1. Prediction of position based on the previous estimate and odometry (encoder data)

    2. Observation with on-board sensors (raw sensor data or extracted features)

    3. Measurement prediction based on the predicted position and the map

    4. Matching of observations and predictions

    5. Estimation (fusion) of the matched predictions and observations to update the position estimate
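    These five steps can be made concrete with a small worked example. Below is a minimal, runnable Python sketch of one localization cycle on a 1D corridor, using a scalar Kalman-style fusion for step 5; the door map, noise values and the 3-sigma matching gate are illustrative assumptions, not taken from the lecture.

```python
# Minimal runnable sketch of the five steps for a robot driving along a
# 1D corridor with a map of door positions. The map, the noise values and
# the 3-sigma matching gate are illustrative assumptions, not lecture material.
import math

DOOR_MAP = [2.0, 5.0, 9.0]   # map M: door positions along the corridor (m)
SIGMA_ODO = 0.05             # variance added by odometry per cycle (m^2)
SIGMA_MEAS = 0.04            # variance of the range-to-door measurement (m^2)

def localize_step(x, P, odo, z):
    """One cycle: x, P = position estimate and variance, odo = odometric
    displacement since the last cycle, z = measured distance to the door
    ahead (None if no feature was observed)."""
    # 1. Prediction based on the previous estimate and odometry
    x_pred, P_pred = x + odo, P + SIGMA_ODO

    # 2. Observation with on-board sensors (here: the extracted feature z)
    if z is None:
        return x_pred, P_pred

    # 3. Measurement prediction from the predicted position and the map:
    #    expected distance to the nearest door still ahead of the robot
    doors_ahead = [d for d in DOOR_MAP if d >= x_pred]
    if not doors_ahead:
        return x_pred, P_pred
    z_hat = min(doors_ahead) - x_pred

    # 4. Matching: accept the observation only if it is consistent with
    #    the prediction (3-sigma gate on the innovation)
    innovation = z - z_hat
    if abs(innovation) > 3.0 * math.sqrt(P_pred + SIGMA_MEAS):
        return x_pred, P_pred

    # 5. Estimation (fusion): scalar Kalman update, measurement model
    #    z = door - x, hence H = dz/dx = -1
    H = -1.0
    S = H * P_pred * H + SIGMA_MEAS   # innovation covariance
    K = P_pred * H / S                # Kalman gain
    return x_pred + K * innovation, (1.0 - K * H) * P_pred

# Example run: start at x = 0 with small uncertainty, move 1 m per cycle.
x, P = 0.0, 0.01
for odo, z in [(1.0, 1.05), (1.0, None), (1.0, 2.10)]:
    x, P = localize_step(x, P, odo, z)
    print(f"estimate {x:.2f} m, variance {P:.4f}")
```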

  • Slide 7/26

    Markov Localization vs. Kalman Filter Localization

    Markov localization: localization starting from any unknown position; it can recover from ambiguous situations. However, updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid). The required memory and calculation power can thus become very important if a fine grid is used.

    Kalman filter localization: tracks the robot and is inherently very precise and efficient. However, if the uncertainty of the robot becomes too large (e.g. after a collision with an object), the Kalman filter will fail and the position is definitively lost.

  • Slide 8/26

    Markov Localization

    Markov localization uses an explicit, discrete representation for the probability distribution over all possible robot positions.

    This is usually done by representing the environment by a grid or a

    topological graph with a finite number of possible states (positions).

    During each update, the probability for each state (element) of the

    entire space is updated.

  • Slide 9/26

    Probability theory

    P(A): probability that A is true.

    e.g. p(r_t = l): probability that the robot r is at position l at time t

    We wish to compute the probability of each individual robot position given actions and sensor measurements.

    P(A|B): conditional probability of A given that we know B.

    e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t

    Product rule: p(A ∧ B) = p(A|B) p(B) = p(B|A) p(A)

    Bayes rule: p(A|B) = p(B|A) p(A) / p(B)
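    As a quick numeric illustration of Bayes rule (all values invented for the example):

```python
# Tiny numeric illustration of Bayes rule with invented values:
# prior p(l) that the robot is at location l, and a sensor model p(i|l).
p_l = 0.2            # prior: robot is at location l
p_i_given_l = 0.9    # reading i is likely if the robot really is at l
p_i = 0.3            # overall probability of reading i (normalizer)

# Bayes rule: p(l|i) = p(i|l) p(l) / p(i)
p_l_given_i = p_i_given_l * p_l / p_i
print(round(p_l_given_i, 2))   # 0.6: the reading makes location l much more likely
```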

  • Slide 10/26

    Markov Localization

    Applying Bayes rule, a mapping from a belief state and a sensor input i to a refined belief state (SEE):

        p(l | i) = p(i | l) p(l) / p(i)

    p(l): belief state before the perceptual update process

    p(i | l): probability of getting measurement i at position l; to compute it, consult the robot's map and identify the probability of a certain sensor reading for each possible position in the map

    p(i): normalizing factor, ensuring the updated belief sums to 1
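    A minimal sketch of this SEE update over a discrete set of candidate locations, assuming the likelihoods p(i|l) have already been obtained from the map and sensor model (all numbers are illustrative):

```python
# Minimal sketch of the SEE update over a discrete set of candidate locations.
# The likelihoods p(i|l) are assumed to come from the map/sensor model; the
# numbers are purely illustrative.
def see_update(belief, likelihood):
    """belief[l] = p(l), likelihood[l] = p(i|l); returns the refined p(l|i)."""
    posterior = [p_il * p_l for p_il, p_l in zip(likelihood, belief)]
    total = sum(posterior)                 # this is p(i), the normalizing factor
    return [p / total for p in posterior]

# Three candidate locations with a uniform prior; the sensor reading is
# most consistent with location 1.
belief = [1.0 / 3.0] * 3
likelihood = [0.1, 0.7, 0.2]
print([round(p, 2) for p in see_update(belief, likelihood)])   # [0.1, 0.7, 0.2]
```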

  • Slide 11/26

    Markov Localization

    Applying the theorem of total probability, a mapping from a belief state and an action (odometry input) o_t to a new belief state (ACT):

        p(l_t | o_t) = Σ_{l'} p(l_t | l', o_t) p(l')

    summing over all possible previous positions l', i.e. all possible ways in which the robot may have reached l.

    Markov assumption: the update only depends on the previous state and the most recent actions and perception.
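    A minimal sketch of the ACT update on a 1D grid, assuming a simple motion model for a "move one cell" command in which the robot undershoots, succeeds, or overshoots (probabilities are illustrative, not from the lecture):

```python
# Minimal sketch of the ACT update on a 1D grid of cells, using an assumed
# motion model for a "move one cell right" command: the robot stays put,
# moves one cell, or overshoots by one cell (probabilities are illustrative).
MOTION_MODEL = {0: 0.1, 1: 0.8, 2: 0.1}    # offset -> p(offset | action)

def act_update(belief):
    """p(l_t | o_t) = sum over previous cells l' of p(l_t | l', o_t) p(l')."""
    n = len(belief)
    new_belief = [0.0] * n
    for l_prev, p_prev in enumerate(belief):
        for offset, p_move in MOTION_MODEL.items():
            l_new = l_prev + offset
            if 0 <= l_new < n:
                new_belief[l_new] += p_move * p_prev
    total = sum(new_belief)                # renormalize (mass can be lost at the border)
    return [p / total for p in new_belief]

# A belief concentrated on cell 2 spreads out after the move, illustrating
# that action updates increase uncertainty.
print([round(p, 2) for p in act_update([0.0, 0.0, 1.0, 0.0, 0.0])])   # [0.0, 0.0, 0.1, 0.8, 0.1]
```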

  • Slide 12/26

    Markov Localization: Case Study 1 - Topological Map (1)

    1994 AAAI National Robot Contest: winner Dervish Robot

    Topological localization with sonar

  • Slide 13/26

    Markov Localization: Case Study 1 - Topological Map (2)

    Topological map of office-type environment

  • Slide 14/26

    Markov Localization: Case Study 1 - Topological Map (3)

    Update of the belief state for position n given the percept-pair i:

        p(n | i) ∝ p(i | n) p(n)

    p(n | i): new likelihood for being in position n

    p(n): current belief state

    p(i | n): probability of seeing i in n (see table)

    No explicit action update! However, the robot is moving, and therefore we can apply a combination of action and perception update:

        p(n_t | i_t) = Σ_{n'} p(n_t | n'_{t-i}, i_t) p(n'_{t-i})

    t-i is used instead of t-1 because the topological distance between n' and n can vary depending on the specific topological map.

  • Slide 15/26

    Markov Localization: Case Study 1 - Topological Map (4)

    The calculation of p(n_t | n'_{t-i}, i_t) is performed by multiplying the probability of generating perceptual event i at node n by the probability of having failed to generate perceptual events at all nodes between n' and n.

  • Slide 16/26

    Markov Localization: Case Study 1 - Topological Map (5)

    Example calculation: assume that the robot has two nonzero belief states,

    o p(1-2) = 1.0 ; p(2-3) = 0.2, *

    and that it is facing east with certainty.

    State 2-3 will potentially progress to states 3, 3-4 and 4.

    States 3 and 3-4 can be eliminated because the likelihood of detecting an open door is zero.

    The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2, (a) the likelihood of not detecting anything at node 3, and (b) the likelihood of detecting a hallway on the left and a door on the right at node 4. (For simplicity we assume that the likelihood of detecting nothing at node 3-4 is 1.0.) This leads to:

    o 0.2 · [0.6 · 0.4 + 0.4 · 0.05] · 0.7 · [0.9 · 0.1]  →  p(4) = 0.003

    o A similar calculation applies to the progress from state 1-2.

    * Note that the probabilities do not sum up to one. For simplicity, normalization was avoided in this example.
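    The product on this slide can be checked directly; a two-line computation reproduces p(4) ≈ 0.003:

```python
# Checking the example calculation from the slide:
# 0.2 * [0.6*0.4 + 0.4*0.05] * 0.7 * [0.9*0.1]  ->  p(4)
p4 = 0.2 * (0.6 * 0.4 + 0.4 * 0.05) * 0.7 * (0.9 * 0.1)
print(round(p4, 3))   # 0.003
```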

  • Slide 17/26

    Markov Localization: Case Study 2 - Grid Map (1)

    Fine fixed decomposition grid (x, y, θ), 15 cm × 15 cm × 1°

    Action update:

    sum over previous possible positions and the motion model (discrete version of eq. 5.22)

    Perception update:

    given perception i, what is the probability of being at location l

    Courtesy of W. Burgard
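    To get a feel for the size of such a grid, a quick back-of-the-envelope computation (the 30 m × 30 m environment size is an assumed example, not from the lecture):

```python
# Back-of-the-envelope size of a fine fixed-decomposition grid over (x, y, theta).
# The 30 m x 30 m environment is an assumed example; cell sizes are from the slide.
env_x, env_y = 30.0, 30.0        # environment size in metres (assumption)
cell_xy = 0.15                   # 15 cm x 15 cm position cells
cell_theta = 1                   # 1 degree heading resolution

nx = round(env_x / cell_xy)      # 200 cells along x
ny = round(env_y / cell_xy)      # 200 cells along y
ntheta = 360 // cell_theta       # 360 heading bins

states = nx * ny * ntheta
print(f"{nx} x {ny} x {ntheta} = {states:,} states per update")   # 14,400,000
```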

  • Slide 18/26

    Markov Localization: Case Study 2 - Grid Map (2)

    The critical challenge is the calculation of p(i|l).

    p(i|l) is computed using a model of the robot's sensor behavior, its position l, and the local environment metric map around l.

    Assumptions:

    o Measurement error can be described by a distribution with a mean at the correct reading

    o Non-zero chance for any measurement

    Courtesy of W. Burgard
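    A minimal sketch of a sensor model with both properties: a Gaussian centred on the reading the map predicts, mixed with a small uniform term so that every measurement keeps a non-zero probability. All parameter values are illustrative assumptions, not from the lecture.

```python
# Sketch of p(i|l) for a single range reading: a Gaussian centred on the value
# the map predicts at location l, mixed with a uniform floor so that every
# measurement keeps a non-zero probability. All parameter values are illustrative.
import math

Z_MAX = 8.0                 # maximum sensor range (m)
SIGMA = 0.2                 # standard deviation of the measurement error (m)
W_HIT, W_RAND = 0.9, 0.1    # mixture weights (sum to 1)

def p_measurement(z, z_expected):
    """Probability density of measuring z when the map predicts z_expected."""
    gauss = math.exp(-0.5 * ((z - z_expected) / SIGMA) ** 2) / (SIGMA * math.sqrt(2.0 * math.pi))
    uniform = 1.0 / Z_MAX   # non-zero chance for any reading within range
    return W_HIT * gauss + W_RAND * uniform

print(p_measurement(2.0, 2.0))   # large: the reading matches the map prediction
print(p_measurement(5.0, 2.0))   # small but non-zero: an unexpected reading
```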

  • Slide 19/26

    Markov Localization: Case Study 2 - Grid Map (3)

    The 1D case

    1. Start: no knowledge at start, thus we have a uniform probability distribution.

    2. Robot perceives the first pillar: seeing only one pillar, the probability of being at pillar 1, 2 or 3 is equal.

    3. Robot moves: the action model enables us to estimate the new probability distribution based on the previous one and the motion.

    4. Robot perceives the second pillar: the probability of being at pillar 2 becomes dominant.
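    A runnable toy version of this 1D scenario, combining the SEE and ACT updates from the previous slides; the corridor layout, motion and sensor probabilities are invented for illustration, not taken from the lecture.

```python
# Toy 1D pillar scenario: uniform prior -> perceive pillar -> move -> perceive
# pillar again. Corridor layout and probabilities are illustrative assumptions.
corridor = [c in (1, 4, 6) for c in range(8)]   # True where a pillar stands

def see(belief, pillar_seen):
    """Perception update: weight each cell by how well it explains the percept."""
    post = [(0.8 if corridor[i] == pillar_seen else 0.2) * b for i, b in enumerate(belief)]
    total = sum(post)
    return [p / total for p in post]

def act(belief, step):
    """Action update: shift the belief by `step` cells (perfect motion, for simplicity)."""
    n = len(belief)
    shifted = [belief[i - step] if 0 <= i - step < n else 0.0 for i in range(n)]
    total = sum(shifted)
    return [p / total for p in shifted]

b = [1.0 / 8.0] * 8      # 1. start: uniform belief, no knowledge
b = see(b, True)         # 2. first pillar perceived: cells 1, 4 and 6 equally likely
b = act(b, 3)            # 3. robot moves three cells
b = see(b, True)         # 4. second pillar perceived
print([round(p, 2) for p in b])   # cell 4 (at the second pillar) now dominates, ~0.62
```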

  • Slide 20/26

    Markov Localization: Case Study 2 - Grid Map (4)

    Example 1: Office Building

    Courtesy of W. Burgard

    [Figure panels: Position 3, Position 4, Position 5]

  • Slide 21/26

    Markov Localization: Case Study 2 - Grid Map (5)

    Example 2: Museum

    Courtesy of W. Burgard

  • Slide 22/26

    Markov Localization: Case Study 2 - Grid Map (6)

    Example 2: Museum

    Courtesy of W. Burgard

  • Slide 23/26

    Markov Localization: Case Study 2 - Grid Map (7)

    Example 2: Museum

    Courtesy of W. Burgard

  • Slide 24/26

  • Slide 25/26

  • Slide 26/26

    Markov Localization: Case Study 2 - Grid Map (10)

    Fine fixed decomposition grids result in a huge state space:

    very extensive processing power needed

    large memory requirement

    Reducing complexity

    Various approaches have been proposed for reducing complexity. The main goal is to reduce the number of states that are updated in each step.

    Randomized Sampling / Particle Filter / Monte Carlo Algorithm

    Approximate the belief state by representing only a representative subset of all states (possible locations), e.g. update only 10% of all possible locations.

    The sampling process is typically weighted, e.g. put more samples around the local peaks of the probability distribution.

    However, you have to ensure that some less likely locations are still tracked, otherwise the robot might get lost.
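    A minimal sketch of such a weighted sampling scheme (a simple Monte Carlo localization cycle for a 1D robot); the door map, noise levels and sensor likelihoods are invented for illustration, not taken from the lecture.

```python
# Minimal particle-filter (Monte Carlo localization) sketch for a 1D robot.
# The door map, noise levels and sensor likelihoods are illustrative assumptions.
import random

DOORS = [2.0, 5.0, 9.0]     # map: door positions along a corridor (m)
N_PARTICLES = 200

def likelihood(x, saw_door):
    """Crude p(i|l): seeing a door is likely near a door, unlikely elsewhere."""
    near_door = any(abs(x - d) < 0.3 for d in DOORS)
    if saw_door:
        return 0.9 if near_door else 0.1
    return 0.1 if near_door else 0.9

def mcl_step(particles, odo, saw_door):
    # ACT: move every particle according to odometry plus noise
    particles = [x + odo + random.gauss(0.0, 0.1) for x in particles]
    # SEE: weight each particle by the measurement likelihood
    weights = [likelihood(x, saw_door) for x in particles]
    # Resample: favour high-weight particles, but since no weight is exactly
    # zero, some less likely locations are still tracked.
    return random.choices(particles, weights=weights, k=N_PARTICLES)

# Start with no knowledge: particles spread uniformly over the corridor.
particles = [random.uniform(0.0, 10.0) for _ in range(N_PARTICLES)]
particles = mcl_step(particles, odo=1.0, saw_door=True)
print(sum(particles) / N_PARTICLES)   # rough position estimate (particle mean)
```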