
Page 1

Lecture 11: Kalman Filters

CS 344R: Robotics

Benjamin Kuipers

Page 2

Up To Higher Dimensions

• Our previous Kalman Filter discussion was of a simple one-dimensional model.

• Now we go up to higher dimensions:
  – State vector: $x \in \mathbb{R}^n$
  – Sense vector: $z \in \mathbb{R}^m$
  – Motor vector: $u \in \mathbb{R}^l$

• First, a little statistics.

Page 3

Expectations

• Let x be a random variable.

• The expected value E[x] is the mean:
$$E[x] = \int x\,p(x)\,dx \;\approx\; \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$$
  – The probability-weighted mean of all possible values. The sample mean approaches it.

• The expected value of a vector x is taken component-wise:
$$E[\mathbf{x}] = \bar{\mathbf{x}} = [\bar{x}_1, \ldots, \bar{x}_n]^T$$
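
A quick numerical illustration of the sample mean approaching E[x] (a minimal sketch assuming Python with NumPy, which the slides do not prescribe):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw N samples from a distribution whose true mean E[x] is 5.0.
N = 100_000
x = rng.normal(loc=5.0, scale=2.0, size=N)

# The probability-weighted mean is approximated by the sample mean.
print(x.mean())  # close to 5.0, and closer as N grows
```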

Page 4

Variance and Covariance

• The variance is E[(x − E[x])²]:
$$\sigma^2 = E[(x-\bar{x})^2] = \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2$$

• The covariance matrix is E[(x − E[x])(x − E[x])ᵀ]:
$$C_{ij} = \frac{1}{N}\sum_{k=1}^{N}(x_{ik} - \bar{x}_i)(x_{jk} - \bar{x}_j)$$
  – Divide by N−1 instead of N to make the sample variance an unbiased estimator for the population variance.
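
As a sanity check, the covariance formula above can be computed directly and compared against NumPy's built-in (a sketch; NumPy is an assumed tool, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 500))          # 3 variables, N = 500 samples each

# C_ij = (1/N) * sum_k (x_ik - xbar_i)(x_jk - xbar_j)
xbar = X.mean(axis=1, keepdims=True)
D = X - xbar
C = (D @ D.T) / X.shape[1]

# np.cov divides by N-1 by default; bias=True selects the 1/N version above.
print(np.allclose(C, np.cov(X, bias=True)))   # True
```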

Page 5: Week11 Ben Kalman

Biased and Unbiased Estimators• Strictly speaking, the sample variance

is a biased estimate of the population variance. An unbiased estimator is:

• But: “If the difference between N and N1 ever matters to you, then you are probably up to no good anyway …” [Press, et al]

σ 2 =E[(x−x )2]=1N

(xi −x )2

1

N

s2 =1

N −1(x i − x )2

1

N
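
The two estimators differ only in the divisor, which NumPy exposes via the ddof argument (a small illustrative example; the printed values hold for this particular dataset):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

print(np.var(x))           # 4.0   -> divides by N     (biased)
print(np.var(x, ddof=1))   # ~4.57 -> divides by N - 1 (unbiased)
```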

Page 6

Covariance Matrix

• Along the diagonal, the Cii are variances.

• The off-diagonal Cij are essentially correlations.

$$C = \begin{bmatrix}
C_{1,1} = \sigma_1^2 & C_{1,2} & \cdots & C_{1,N} \\
C_{2,1} & C_{2,2} = \sigma_2^2 & & \vdots \\
\vdots & & \ddots & \\
C_{N,1} & \cdots & & C_{N,N} = \sigma_N^2
\end{bmatrix}$$

Page 7

Independent Variation

• x and y are Gaussian random variables (N = 100).

• Generated with $\sigma_x = 1$, $\sigma_y = 3$.

• Covariance matrix:
$$C_{xy} = \begin{bmatrix} 0.90 & 0.44 \\ 0.44 & 8.82 \end{bmatrix}$$

Page 8

Dependent Variation

• c and d are random variables.

• Generated with $c = x + y$ and $d = x - y$.

• Covariance matrix:
$$C_{cd} = \begin{bmatrix} 10.62 & -7.93 \\ -7.93 & 8.84 \end{bmatrix}$$
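
The two experiments are easy to reproduce (a sketch; the slides do not give the generation code, and reading the parameters as standard deviations sigma_x = 1, sigma_y = 3 is my assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100

# Independent case: x and y drawn separately.
x = rng.normal(0.0, 1.0, N)    # sigma_x = 1
y = rng.normal(0.0, 3.0, N)    # sigma_y = 3
print(np.cov(x, y))            # off-diagonal entries near 0

# Dependent case: c and d are both built from x and y.
c = x + y
d = x - y
print(np.cov(c, d))            # off-diagonal near Var(x) - Var(y) = 1 - 9 = -8
```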

Page 9

Discrete Kalman Filter

• Estimate the state $x \in \mathbb{R}^n$ of a linear stochastic difference equation
$$x_k = A x_{k-1} + B u_k + w_{k-1}$$
  – process noise w is drawn from N(0, Q), with covariance matrix Q.

• with a measurement $z \in \mathbb{R}^m$
$$z_k = H x_k + v_k$$
  – measurement noise v is drawn from N(0, R), with covariance matrix R.

• A and Q are n×n. B is n×l. R is m×m. H is m×n.
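
As a concrete, hypothetical instance of these matrices (not from the lecture), here is a 1-D constant-velocity model with n = 2, l = 1, m = 1; the numeric values are illustrative only:

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])     # n x n state transition (position, velocity)
B = np.array([[0.0],
              [dt]])           # n x l control input (commanded acceleration)
H = np.array([[1.0, 0.0]])     # m x n measurement: we observe position only
Q = 0.01 * np.eye(2)           # n x n process noise covariance
R = np.array([[0.25]])         # m x m measurement noise covariance
```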

Page 10

Estimates and Errors

• $\hat{x}_k \in \mathbb{R}^n$ is the estimated state at time-step k.

• $\hat{x}_k^- \in \mathbb{R}^n$ is the estimate after prediction, before observation.

• Errors:
$$e_k^- = x_k - \hat{x}_k^-, \qquad e_k = x_k - \hat{x}_k$$

• Error covariance matrices:
$$P_k^- = E[e_k^- (e_k^-)^T], \qquad P_k = E[e_k\, e_k^T]$$

• The Kalman Filter’s task is to update $\hat{x}_k$ and $P_k$.

Page 11

Time Update (Predictor)

• Update the expected value of x:
$$\hat{x}_k^- = A \hat{x}_{k-1} + B u_k$$

• Update the error covariance matrix P:
$$P_k^- = A P_{k-1} A^T + Q$$

• Previous statements were simplified versions of the same idea:
$$\hat{x}(t_3^-) = \hat{x}(t_2) + u\,[t_3 - t_2]$$
$$\sigma^2(t_3^-) = \sigma^2(t_2) + \sigma_\varepsilon^2\,[t_3 - t_2]$$
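
The predictor step translates directly into code (a minimal sketch, reusing the A, B, Q conventions from the hypothetical model above):

```python
import numpy as np

def predict(x_hat, P, A, B, u, Q):
    """Time update: propagate the state estimate and its error covariance."""
    x_pred = A @ x_hat + B @ u      # x_k^- = A x_{k-1} + B u_k
    P_pred = A @ P @ A.T + Q        # P_k^- = A P_{k-1} A^T + Q
    return x_pred, P_pred
```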

Page 12

Measurement Update (Corrector)

• Update the expected value:
$$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$$
  – the innovation is $z_k - H \hat{x}_k^-$

• Update the error covariance matrix:
$$P_k = (I - K_k H) P_k^-$$

• Compare with the previous one-dimensional form:
$$\hat{x}(t_3) = \hat{x}(t_3^-) + K(t_3)\,(z_3 - \hat{x}(t_3^-))$$
$$\sigma^2(t_3) = (1 - K(t_3))\,\sigma^2(t_3^-)$$

Page 13

The Kalman Gain

• The optimal Kalman gain Kk is
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1} = \frac{P_k^- H^T}{H P_k^- H^T + R}$$

• Compare with the previous one-dimensional form:
$$K(t_3) = \frac{\sigma^2(t_3^-)}{\sigma^2(t_3^-) + \sigma_3^2}$$
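
Combining the gain with the corrector step from the previous slide gives the full measurement update (a sketch; np.linalg.inv is used for clarity, though solving the linear system is numerically preferable):

```python
import numpy as np

def correct(x_pred, P_pred, z, H, R):
    """Measurement update: fold the observation z into the prediction."""
    S = H @ P_pred @ H.T + R                 # innovation covariance H P^- H^T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # K_k = P^- H^T S^{-1}
    x_hat = x_pred + K @ (z - H @ x_pred)    # x_k = x^- + K_k (z_k - H x^-)
    I = np.eye(P_pred.shape[0])
    P = (I - K @ H) @ P_pred                 # P_k = (I - K_k H) P^-
    return x_hat, P
```

One filter cycle is then simply predict followed by correct.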

Page 14

Extended Kalman Filter

• Suppose the state-evolution and measurement equations are non-linear:
$$x_k = f(x_{k-1}, u_k) + w_{k-1}$$
$$z_k = h(x_k) + v_k$$
  – process noise w is drawn from N(0, Q), with covariance matrix Q.
  – measurement noise v is drawn from N(0, R), with covariance matrix R.

Page 15

The Jacobian Matrix

• For a scalar function y = f(x),
$$\Delta y = f'(x)\,\Delta x$$

• For a vector function y = f(x),
$$\Delta y = J\,\Delta x =
\begin{bmatrix} \Delta y_1 \\ \vdots \\ \Delta y_n \end{bmatrix} =
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1}(x) & \cdots & \frac{\partial f_1}{\partial x_n}(x) \\
\vdots & & \vdots \\
\frac{\partial f_n}{\partial x_1}(x) & \cdots & \frac{\partial f_n}{\partial x_n}(x)
\end{bmatrix}
\begin{bmatrix} \Delta x_1 \\ \vdots \\ \Delta x_n \end{bmatrix}$$
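
When an analytic Jacobian is inconvenient, a finite-difference approximation follows straight from the Δy ≈ J Δx relation (a hedged sketch; forward differences with a fixed eps are the simplest choice, not the most accurate):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate J_ij = df_i/dx_j at x by forward differences."""
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - y0) / eps
    return J

# Check against a function with a known Jacobian:
# f(x) = [x0*x1, sin(x0)]  =>  J = [[x1, x0], [cos(x0), 0]]
f = lambda x: np.array([x[0] * x[1], np.sin(x[0])])
print(numerical_jacobian(f, np.array([1.0, 2.0])))
```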

Page 16

Linearize the Non-Linear

• Let A be the Jacobian of f with respect to x:
$$A_{ij} = \frac{\partial f_i}{\partial x_j}(x_{k-1}, u_k)$$

• Let H be the Jacobian of h with respect to x:
$$H_{ij} = \frac{\partial h_i}{\partial x_j}(x_k)$$

• Then the Kalman Filter equations are almost the same as before!

Page 17

EKF Update Equations

• Predictor step:
$$\hat{x}_k^- = f(\hat{x}_{k-1}, u_k)$$
$$P_k^- = A P_{k-1} A^T + Q$$

• Kalman gain:
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$

• Corrector step:
$$\hat{x}_k = \hat{x}_k^- + K_k (z_k - h(\hat{x}_k^-))$$
$$P_k = (I - K_k H) P_k^-$$
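
Put together, one EKF cycle looks like this (a minimal sketch; f, h, and their Jacobians are supplied by the user, as in the assignment on the next slide):

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, jac_f, jac_h, Q, R):
    """One predict/correct cycle of the Extended Kalman Filter."""
    # Predictor: propagate through the non-linear model; linearize to update P.
    x_pred = f(x_hat, u)                    # x_k^- = f(x_{k-1}, u_k)
    A = jac_f(x_hat, u)                     # A_ij = df_i/dx_j, evaluated here
    P_pred = A @ P @ A.T + Q

    # Corrector: linearize h around the prediction, but use h itself
    # in the innovation.
    H = jac_h(x_pred)                       # H_ij = dh_i/dx_j
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))    # innovation z_k - h(x_k^-)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```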

Page 18

“Catch The Ball” Assignment

• State evolution is linear (almost).
  – What is A?
  – B = 0.

• The sensor equation is non-linear.
  – What is y = h(x)?
  – What is the Jacobian H(x) of h with respect to x?

• Errors are treated as additive. Is this OK?
  – What are the covariance matrices Q and R?

Page 19

TTD

• Intuitive explanations for $APA^T$ and $HPH^T$ in the update equations.