# State Estimation and Kalman Filtering

Post on 01-Feb-2016




## Sensors and Uncertainty

- Real-world sensors are noisy and suffer from missing data (e.g., occlusions, GPS blackouts)
- Use sensor models to estimate ground truth and unobserved variables, and to make forecasts

## How Does This Relate to Motion?

A robot must:

- Know where it is (localization)
- Model its environment to avoid collisions (mapping, model building)
- Move in order to get a better view of an object (view planning, inspection)
- Predict how other agents move (tracking)

## Intelligent Robot Architecture

[Diagram: Percept (raw sensor data) → Sensing algorithms (state estimation, mapping, tracking, calibration, object recognition, …) → Models → Planning → Action]

## State Estimation Framework

- Sensor model (aka observation model): state => observation. Observations may be partial and noisy.
- Dynamics model (aka transition model): state => next state. Transitions may be noisy and control-dependent.
- Probabilistic state estimation: given observations, estimate a probability distribution over the state.

[Diagram: x_t → z_t via the sensor model; x_t → x_{t+1} via the dynamics model]

## Markov Chain

Given an initial distribution and the dynamics model, we can predict how the state evolves (as a probability distribution) over time.

X0 → X1 → X2 → X3 → …

## Hidden Markov Model

Use observations to get a better idea of where the robot is at time t.

[Diagram: hidden state variables X0 → X1 → X2 → X3 in a chain, with observed variables z1, z2, z3 attached to X1, X2, X3]

Predict – observe – predict – observe – …

## General Bayesian Filtering

Compute P(x_t | z_t), given z_t and a prior on x_{t-1}. By Bayes rule:

P(x_t | z_t) = P(z_t | x_t) P(x_t) / P(z_t)

where the prediction comes from the dynamics model,

P(x_t) = ∫ P(x_t | x_{t-1}) P(x_{t-1}) dx_{t-1}

and the normalizer is

P(z_t) = ∫ P(z_t | x_t) P(x_t) dx_t

Nasty, hairy integrals.
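In general these integrals have no closed form. One concrete workaround is to discretize the state space so the integrals become sums; a minimal sketch of such a histogram filter, where the grid size, transition probabilities, and sensor noise are all illustrative assumptions:

```python
import numpy as np

# Discrete Bayes filter on a 1D grid: the integrals become sums.
grid = np.arange(10)                      # 10 discrete positions
prior = np.full(10, 0.1)                  # uniform prior P(x_0)

def predict(belief):
    # Transition model: stay with prob 0.5, move +1 with prob 0.5 (wraps around).
    return 0.5 * belief + 0.5 * np.roll(belief, 1)

def update(belief, z, sd=1.0):
    # Gaussian-shaped likelihood P(z|x) on the grid, then normalize (Bayes rule).
    lik = np.exp(-0.5 * ((grid - z) / sd) ** 2)
    post = lik * belief
    return post / post.sum()

belief = predict(prior)
belief = update(belief, z=4)
print(belief.argmax())  # mode of the posterior sits at the observation
```

The Kalman filter below avoids the discretization entirely by keeping the distribution Gaussian in closed form.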

## Key Representational Issue

How to represent and perform calculations on probability distributions?

## Kalman Filtering

In a nutshell:

- Efficient filtering in continuous state spaces
- Gaussian transition and observation models
- Ubiquitous for tracking with noisy sensors

## Kalman Filtering

- Closed-form solution for the HMM, assuming a linear Gaussian process
- Transition and observation functions are a linear transformation plus multivariate Gaussian noise:

y = A x + ε, ε ~ N(μ, Σ)

## Linear Gaussian Transition Model for a Moving 1D Point

Consider position and velocity x_t, v_t and time step h.

Without noise:

x_{t+1} = x_t + h v_t
v_{t+1} = v_t

With Gaussian noise of standard deviation σ:

P(x_{t+1} | x_t) ∝ exp(−(x_{t+1} − (x_t + h v_t))² / (2σ²))

i.e., x_{t+1} ~ N(x_t + h v_t, σ²)
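Sampling this transition model confirms the mean and spread; the values of h, σ, and the state are illustrative assumptions:

```python
import numpy as np

# Sample x_{t+1} ~ N(x_t + h*v_t, sigma^2) and check its statistics.
rng = np.random.default_rng(0)
h, sigma = 0.1, 0.05
x, v = 1.0, 2.0                      # current position and velocity

samples = x + h * v + sigma * rng.standard_normal(100_000)
print(samples.mean())                # close to x + h*v = 1.2
print(samples.std())                 # close to sigma = 0.05
```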

## Linear Gaussian Transition Model

If the prior on position is Gaussian, then the posterior after the transition is also Gaussian: the mean shifts by v h and the variance grows by the dynamics noise.

N(μ, σ²) → N(μ + v h, σ² + σ₁²)

where σ₁ is the standard deviation of the dynamics noise.

## Linear Gaussian Observation Model

- Position observation z_t
- Gaussian noise of standard deviation σ₂:

z_t ~ N(x_t, σ₂²)

## Linear Gaussian Observation Model

If the prior on position is N(μ, σ²), then the posterior after observing z_t = z is also Gaussian, with

mean (σ² z + σ₂² μ) / (σ² + σ₂²)
variance σ² σ₂² / (σ² + σ₂²)

[Figure: position prior, observation probability, and the resulting posterior probability]
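These 1D fusion formulas can be checked numerically; the prior and observation values below are illustrative assumptions:

```python
# Fuse a Gaussian prior N(mu, s2) with an observation z of noise variance s2_2.
mu, s2 = 0.0, 4.0        # prior mean and variance
z, s2_2 = 2.0, 1.0       # observation and its noise variance

post_mean = (s2 * z + s2_2 * mu) / (s2 + s2_2)
post_var = s2 * s2_2 / (s2 + s2_2)
print(post_mean, post_var)  # → 1.6 0.8
```

Note that the posterior mean lies between the prior mean and the observation (weighted by the inverse variances), and the posterior variance is smaller than both inputs.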

## Kalman Filter

x_t ~ N(μ_x, Σ_x)
x_{t+1} = F x_t + g + v, with dynamics noise v ~ N(0, Σ_v)
z_{t+1} = H x_{t+1} + w, with observation noise w ~ N(0, Σ_w)

## Two Steps

Maintain μ_t, Σ_t, the Gaussian distribution over the state at time t.

- Predict: compute the distribution of x_{t+1} using the dynamics model alone
- Update: compute x_{t+1} | z_{t+1} with Bayes rule

## Multivariate Computations

Linear transformations of Gaussians:

- If x ~ N(μ, Σ) and y = A x + b, then y ~ N(A μ + b, A Σ Aᵀ)
- Consequence: if x ~ N(μ_x, Σ_x) and y ~ N(μ_y, Σ_y) are independent and z = x + y, then z ~ N(μ_x + μ_y, Σ_x + Σ_y)

Conditional of a Gaussian:

- If [x₁, x₂] ~ N([μ₁, μ₂], [Σ₁₁, Σ₁₂; Σ₂₁, Σ₂₂]), then on observing x₂ = z we have
  x₁ ~ N(μ₁ + Σ₁₂ Σ₂₂⁻¹ (z − μ₂), Σ₁₁ − Σ₁₂ Σ₂₂⁻¹ Σ₂₁)
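The conditioning formula is easy to verify on a small joint Gaussian; the 2×2 covariance and the observed value below are illustrative assumptions:

```python
import numpy as np

# Condition [x1, x2] ~ N(mu, Sigma) on x2 = z:
# x1 | x2=z ~ N(mu1 + S12 S22^-1 (z - mu2), S11 - S12 S22^-1 S21)
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
z = 2.0

S11, S12, S21, S22 = Sigma[0, 0], Sigma[0, 1], Sigma[1, 0], Sigma[1, 1]
cond_mean = mu[0] + S12 * (z - mu[1]) / S22   # 0 + 0.8 * 1 / 1 = 0.8
cond_var = S11 - S12 * S21 / S22              # 2 - 0.64 = 1.36
print(cond_mean, cond_var)
```

Observing x₂ above its mean pulls the estimate of x₁ up (positive cross-covariance), and the conditional variance is always no larger than the marginal variance.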

## Predict Step

By the linear transformation rule:

x_{t+1} ~ N(F μ_t + g, F Σ_t Fᵀ + Σ_v)

Let these be μ′ and Σ′.

## Update Step

Given μ′ and Σ′ and a new observation z, compute the final distribution μ_{t+1} and Σ_{t+1} using the conditional distribution formulas.


## Deriving the Update Rule

Treat x_t and z_t as jointly Gaussian:

(1) Assumption (unknowns a, B, C): [x_t; z_t] ~ N([μ′; a], [Σ′, B; Bᵀ, C])
(2) Assumption: x_t ~ N(μ′, Σ′)
(3) Assumption: z_t | x_t ~ N(H x_t, Σ_w)
(4) Conditioning (1): z_t | x_t ~ N(a + Bᵀ Σ′⁻¹ (x_t − μ′), C − Bᵀ Σ′⁻¹ B)
(5) Set mean (4) = (3): a + Bᵀ Σ′⁻¹ (x_t − μ′) = H x_t => a = H μ′, Bᵀ = H Σ′
(6) Set covariance (4) = (3): C − Bᵀ Σ′⁻¹ B = Σ_w => C = H Σ′ Hᵀ + Σ_w
(7) Conditioning (1): x_t | z_t ~ N(μ′ + B C⁻¹ (z_t − a), Σ′ − B C⁻¹ Bᵀ)
(8, 9) Kalman filter:

μ_t = μ′ + Σ′ Hᵀ C⁻¹ (z_t − H μ′)
Σ_t = Σ′ − Σ′ Hᵀ C⁻¹ H Σ′
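The predict step and the update rule in (8, 9) combine into a complete filter. A minimal sketch for the moving 1D point, where h, the noise levels, and the measurement sequence are all illustrative assumptions:

```python
import numpy as np

# Kalman filter for state [position, velocity] with position-only observations.
h = 1.0
F = np.array([[1.0, h], [0.0, 1.0]])        # x_{t+1} = x_t + h v_t; v_{t+1} = v_t
H = np.array([[1.0, 0.0]])                  # observe position only
V = 0.01 * np.eye(2)                        # dynamics noise Sigma_v
W = np.array([[0.25]])                      # observation noise Sigma_w

def kf_step(mu, Sigma, z):
    # Predict: mu' = F mu, Sigma' = F Sigma F^T + Sigma_v   (g = 0 here)
    mu_p = F @ mu
    S_p = F @ Sigma @ F.T + V
    # Update: C = H Sigma' H^T + Sigma_w, gain K = Sigma' H^T C^-1
    C = H @ S_p @ H.T + W
    K = S_p @ H.T @ np.linalg.inv(C)
    mu_new = mu_p + K @ (z - H @ mu_p)      # step (8)
    S_new = S_p - K @ H @ S_p               # step (9)
    return mu_new, S_new

mu, Sigma = np.zeros(2), 10.0 * np.eye(2)   # vague prior
for z in [1.0, 2.1, 2.9, 4.2]:              # noisy positions, roughly unit velocity
    mu, Sigma = kf_step(mu, Sigma, np.array([z]))
print(mu)   # position estimate near 4, velocity estimate near 1
```

Note the filter infers the (never directly observed) velocity purely from the cross-covariance between position and velocity built up by the dynamics model.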

## Filtering Variants

- Extended Kalman Filter (EKF)
- Unscented Kalman Filter (UKF)
- Particle Filtering

## Extended/Unscented Kalman Filter

The dynamics/sensing model is nonlinear:

x_{t+1} = f(x_t) + v
z_{t+1} = h(x_{t+1}) + w

with Gaussian noise and a Gaussian state estimate as before.

Strategy: propagate means as usual. What about covariances and conditioning?

## Extended Kalman Filter

Approach: linearize about the current estimate:

F = ∂f/∂x (x_t), H = ∂h/∂x (x_{t+1})

Use these Jacobians to propagate the covariance matrices and perform conditioning. (A Jacobian is a matrix of partial derivatives.)
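As a sketch of the linearization, consider a range-only sensor h(x) = ‖x‖, which is nonlinear in the 2D position; the state, covariance, and noise values below are illustrative assumptions:

```python
import numpy as np

# One EKF update with a nonlinear range sensor, linearized via its Jacobian.
def h(x):
    return np.array([np.hypot(x[0], x[1])])     # range measurement ||x||

def H_jac(x):
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r]])     # dh/dx, a 1x2 Jacobian

mu = np.array([3.0, 4.0])            # predicted mean (true range 5)
S = np.eye(2)                        # predicted covariance
W = np.array([[0.1]])                # sensor noise
z = np.array([5.5])                  # observed range

Hm = H_jac(mu)                       # linearize about the current estimate
C = Hm @ S @ Hm.T + W                # innovation covariance
K = S @ Hm.T @ np.linalg.inv(C)      # Kalman gain
mu_new = mu + K @ (z - h(mu))        # EKF update: nonlinear h in the residual,
print(mu_new)                        # linearized H everywhere else
```

The residual uses the true nonlinear h, while the gain and covariance use the Jacobian; this is exactly where the EKF's approximation error enters.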

## Unscented Kalman Filter

Approach: propagate a set of points with the same mean/covariance as the input, then reconstruct a Gaussian from the transformed points.

## Next Class

Particle filter:

- Non-Gaussian distributions
- Nonlinear observation/transition functions