Biologically inspired Mobile Robot Vision Localization

Presenter: Folami Alamudun

Authors: Christian Siagian, Laurent Itti

Outline
◦ Introduction
◦ Vision-based Localization
◦ Scene Recognition
◦ Topological Maps
◦ Biological Vision Localization System
◦ Experimental Results
◦ Discussion
◦ Related Work

Introduction

What?
◦ A robot localization system using biologically inspired vision.

Why?
◦ To provide machines with a human-like perceptual system capable of conducting intelligent localization in an unstructured environment.

How?
◦ Biologically inspired scene summarization (gist) and landmark identification (saliency).

Vision-based Localization

Vision
◦ The primary perceptual system for localization in most animals (including humans).
◦ Effective in most environments where sonar, radar, and GPS are unavailable or inoperable.

Humans localize using two processes:
◦ Gist – a holistic statistical signature of the image, yielding an abstract scene classification and layout.
◦ Saliency – a measure of interest at every image location, used for landmark identification.

Vision-based Localization

Vision-based localization systems classify locations from visual information using:

◦ Global features – a general summary of information over the entire image.

◦ Local features – features computed over a limited area of the image.

Vision-based Localization – Global Features

Global feature methods generally consider an input image as a whole and extract a low-dimensional signature.

Advantages:
◦ Provide a summary of the image statistics or semantics.
◦ Robust, because random local pixel noise averages out at the global scale.

Disadvantages:
◦ Sacrifice spatial information such as feature location and orientation.
◦ Cannot produce accurate pose estimates.
◦ Make it harder to deduce positional change, even with significant robot movement.

Vision-based Localization – Local Features

Local feature methods limit their scope to image regions, and to the configuration relationships among those regions, to form a signature of a location.

Advantages:
◦ Local features encode scene characteristics that are more focused in scope.
◦ Invariant to scale, in-plane rotation, viewpoint, and lighting changes.

Disadvantage:
◦ Computationally very slow.

Scene Recognition

The human visual processing system attends to visually interesting regions within the field of view.

Saliency-based selection identifies the landmarks that are most reliable in a particular environment.

Focusing on specific regions when comparing different images makes the process less computationally expensive.

Topological Maps

A topological map is a graph representation of an environment.

Topological maps assign nodes to particular places and add edges as paths wherever a direct passage exists between a pair of places (the edge's end nodes).

Humans manage spatial knowledge primarily by topological information.

This information is used to construct a hierarchical topological map that describes the environment.
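As a rough illustration of this kind of map, the sketch below stores places as nodes and traversable passages as weighted edges in a plain Python class; the class and method names (`TopologicalMap`, `add_place`, `add_path`) and the example junction names are illustrative, not from the paper.

```python
# Minimal sketch of a topological map as a graph: nodes are places, edges are
# traversable paths annotated with their length. All names are illustrative.
class TopologicalMap:
    def __init__(self):
        self.places = {}   # place name -> optional metadata
        self.paths = {}    # place name -> {neighbor: path length}

    def add_place(self, name, **metadata):
        self.places[name] = metadata
        self.paths.setdefault(name, {})

    def add_path(self, a, b, length):
        # An edge exists only where direct passage between the two end nodes exists.
        self.paths[a][b] = length
        self.paths[b][a] = length

# Usage: two junctions connected by a corridor segment.
env = TopologicalMap()
env.add_place("junction_A")
env.add_place("junction_B")
env.add_path("junction_A", "junction_B", length=12.5)
```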

Biological Vision Localization System

The localization system is divided into three stages:

Feature extraction – processes the image to produce gist features and salient regions.

Recognition – compares the extracted features with memorized visual information about the environment.

Localization – computes where the robot is situated.

Biological Vision Localization System – Feature extraction

Feature extraction processes raw low-level filter outputs in the gist and saliency modules.

Gist feature extraction
◦ Computes average values from sub-regions of the feature maps.
◦ Reduces dimensionality using PCA/ICA (sketched below).

Salient region selection and segmentation
◦ Uses the feature maps to detect conspicuous regions in each channel.

Biological Vision Localization System – Gist Feature extraction
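A rough, non-authoritative sketch of the gist computation outlined above: average each feature map over a coarse grid of sub-regions and concatenate the means, then reduce the dimensionality. The 4x4 grid, the plain-PCA stand-in for PCA/ICA, and the map counts and sizes in the usage lines are assumptions for illustration.

```python
import numpy as np

# Sketch of gist feature extraction: average each low-level feature map over a
# coarse grid of sub-regions, then concatenate. The 4x4 grid and the plain PCA
# below are illustrative assumptions, not the paper's exact settings.
def gist_vector(feature_maps, grid=4):
    values = []
    for fmap in feature_maps:                       # one map per visual channel
        h, w = fmap.shape
        hs, ws = h // grid, w // grid
        values.extend(fmap[r*hs:(r+1)*hs, c*ws:(c+1)*ws].mean()
                      for r in range(grid) for c in range(grid))
    return np.array(values)

def reduce_gist(raw_gists, n_components=80):
    # PCA stand-in for the PCA/ICA dimensionality-reduction step.
    centered = raw_gists - raw_gists.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Usage: a stack of feature maps from one frame (counts and sizes are made up).
maps = [np.random.rand(120, 160) for _ in range(34)]
raw = gist_vector(maps)            # 34 maps x 16 sub-region means = 544 values
```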

Biological Vision Localization System – Segment and Salient Region Recognition

This stage attempts to match salient regions and gist features with stored environment information.

Segment estimator:
◦ A three-layer neural network classifier trained with back-propagation on gist features (a sketch follows below).

Salient region recognition:
◦ Recalls stored salient regions.
◦ Uses SIFT keypoints and the salient feature vector to recognize regions.
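The slide only specifies a three-layer, back-propagation-trained classifier, so the forward pass below is a hedged sketch: the layer sizes, the tanh/softmax choices, and the random initial weights are assumptions, and training is not shown.

```python
import numpy as np

# Sketch of the segment estimator: a three-layer feed-forward network mapping a
# gist vector to per-segment likelihoods. Layer sizes, tanh/softmax choices, and
# the random weights are illustrative; back-propagation training is omitted.
class SegmentEstimator:
    def __init__(self, n_gist=80, n_hidden=200, n_segments=9, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_gist, n_hidden))
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_segments))

    def predict(self, gist):
        hidden = np.tanh(gist @ self.w1)        # hidden layer activations
        scores = hidden @ self.w2               # output layer scores
        e = np.exp(scores - scores.max())
        return e / e.sum()                      # likelihood of each map segment

# Usage: estimate which segment one gist vector belongs to.
probs = SegmentEstimator().predict(np.random.rand(80))
```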

Biological Vision Localization System

Segment estimation computes the likelihood that a scene belongs to each segment.

Salient region localization produces a saliency map that highlights the coordinates of peak values (salient points).
◦ These points are used for identification in subsequent viewings.

Biological Vision Localization System – Salient Region Recognition

Recollection of stored salient regions for localization involves:

SIFT keypoints
◦ A SIFT recognition system with its matching parameters and thresholds.

Salient feature vector
◦ A set of values taken from a 5x5 window centered at the salient point location (sketched below).
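A small sketch of the salient feature vector described above: read the 5x5 window centered at the salient point from each feature map and flatten the values into one vector. Sampling every channel map, ignoring image borders, and the map sizes in the usage lines are simplifying assumptions.

```python
import numpy as np

# Sketch of building a salient feature vector: a 5x5 window centered at the
# salient point, read from each feature map and flattened into one vector.
# Border handling is omitted, and sampling every map is an assumption.
def salient_feature_vector(feature_maps, point, win=5):
    r, c = point
    half = win // 2
    values = []
    for fmap in feature_maps:
        patch = fmap[r - half:r + half + 1, c - half:c + half + 1]
        values.extend(patch.ravel())
    return np.array(values)

# Usage: one salient point on a stack of (made-up) 120x160 feature maps.
maps = [np.random.rand(120, 160) for _ in range(7)]
vec = salient_feature_vector(maps, point=(60, 80))
```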

Biological Vision Localization System – Salient Region Recognition

(continued) Salient feature vectors from two salient regions (sreg1, sreg2) are compared using two criteria (see the sketch below):
◦ Similarity
◦ Proximity
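As referenced above, a hedged sketch of comparing two salient regions on these two criteria; the Euclidean distance measures and the thresholds are assumptions, not the paper's values.

```python
import numpy as np

# Sketch of the two comparison criteria: similarity of the salient feature
# vectors and proximity of the salient points. Distance measures and thresholds
# are illustrative assumptions.
def regions_match(sreg1, sreg2, sim_thresh=0.1, prox_thresh=20.0):
    (f1, p1), (f2, p2) = sreg1, sreg2          # (feature vector, salient point)
    similarity = np.linalg.norm(f1 - f2) / np.sqrt(f1.size)   # feature distance
    proximity = np.hypot(p1[0] - p2[0], p1[1] - p2[1])        # pixel distance
    return similarity < sim_thresh and proximity < prox_thresh

# Usage with two hypothetical regions (feature vector, salient point).
sreg1 = (np.random.rand(175), (60, 80))
sreg2 = (np.random.rand(175), (62, 84))
same = regions_match(sreg1, sreg2)
```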

Biological Vision Localization System – Monte Carlo Localization

When a landmark is recognized, its associated location is used to deduce the robot's location.

Accumulated temporal context is used to distinguish between identical-looking landmarks.

The robot's position is estimated with Monte Carlo Localization (MCL), which uses Sampling Importance Resampling (SIR).

Biological Vision Localization System – Monte Carlo Localization

S_t is the set of weighted particles at time t:
◦ S_t = {(x_t,i , w_t,i)}, i = 1, ..., N
◦ x_t,i = (snum_t,i , ltrav_t,i) – a possible robot location, where snum is the segment number and ltrav is the length traveled along the segment edge
◦ w_t,i – the particle's weight (likelihood)
◦ N – the number of particles
◦ Bel(S_t) – the location belief at time t
◦ u_t – the motion measurement

Biological Vision Localization System – Monte Carlo Localization

Belief estimation algorithm:
1. Apply the motion model to S_t−1 to create S_t'.
2. Apply the segment observation model to S_t' to create S_t''.
3. If M_t > 0 (salient regions were matched), apply the salient region observation model to S_t'' to yield S_t; else set S_t = S_t''. (A particle-filter sketch of this update follows below.)

Where:
◦ S_t' – the belief state after the motion model is applied
◦ S_t'' – the state after the segment observation model is applied
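A hedged particle-filter sketch of the update above, using the particle definition (snum, ltrav, weight) from the previous slide. The noise level, the observation weighting functions, the `matched_regions` and `segment_lengths` inputs, and the resampling details are illustrative assumptions, not the paper's implementation.

```python
import random

# Hedged sketch of one SIR-based MCL update following the steps above. A
# particle is (snum, ltrav, weight); noise, weighting, and resampling choices
# are illustrative assumptions.
def mcl_step(particles, u_t, segment_probs, matched_regions, segment_lengths):
    # 1. Motion model: advance each particle along its segment by u_t plus noise.
    moved = [(snum,
              min(max(ltrav + u_t + random.gauss(0.0, 0.1), 0.0), segment_lengths[snum]),
              w)
             for snum, ltrav, w in particles]

    # 2. Segment observation model: reweight by the segment estimator's output.
    weighted = [(snum, ltrav, w * segment_probs[snum]) for snum, ltrav, w in moved]

    # 3. Salient region observation model, applied only when M_t > 0 regions matched.
    #    matched_regions holds the (segment, distance-along-edge) of each match.
    for snum_obs, ltrav_obs in matched_regions:
        weighted = [(snum, ltrav,
                     w * (1.0 if snum == snum_obs else 0.1)
                       / (1.0 + abs(ltrav - ltrav_obs)))
                    for snum, ltrav, w in weighted]

    # 4. Importance resampling: draw N new particles in proportion to weight.
    total = sum(w for _, _, w in weighted)
    probs = [w / total if total > 0 else 1.0 for _, _, w in weighted]
    drawn = random.choices(weighted, weights=probs, k=len(weighted))
    return [(snum, ltrav, 1.0 / len(drawn)) for snum, ltrav, _ in drawn]
```

In the deck's terms, steps 1 to 3 produce S_t', S_t'', and S_t, and step 4 is the SIR resampling.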

Experimental Results – rigid environment

Lighting conditions test

Test of system response on sparser scenes

Discussion

The paper introduces new ideas, chiefly the use of complementary gist and saliency features.

The saliency model lets the system automatically select persistent salient regions as localization cues.

Low-computation-cost gist features approximate the image layout and provide the segment estimation.

The system is able to compute coordinate-level localization in multiple environments.

Performance is comparable to that of GPS-database-guided systems.

Related Work

D. Parikh et al. Determining Patch Saliency Using Low-Level Context. European Conference on Computer Vision (ECCV), 2008.
