The Kalman Filter
Prof. Aaron D. Lanterman
School of Electrical & Computer Engineering, Georgia Institute of Technology
AL: 404-385-2548, <[email protected]>
ECE 7251: Spring 2004, Lecture 17, 2/16/04
Copyright 2004 Aaron Lanterman



Page 2:

The Setup for Kalman Filtering

• State equation: $\theta_{k+1}\,[m\times 1] = A\,[m\times m]\;\theta_k\,[m\times 1] + U_k\,[m\times 1]$
• Measurement equation: $Y_k\,[n\times 1] = C\,[n\times m]\;\theta_k\,[m\times 1] + W_k\,[n\times 1]$
• Process "noise" covariance: $U_k \sim N(0, K_U)$
• Measurement noise covariance: $W_k \sim N(0, K_W)$
• Initial guess, before taking any data, with a covariance indicating confidence in that guess: $\theta_1 \sim N(\hat\theta_{1|0}, P_{1|0})$
• $U_k$, $W_k$, and $\theta_1$ are uncorrelated with each other and for different $k$
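As a concrete sketch of this setup, the model can be simulated directly; the particular matrices, dimensions, and noise levels below are illustrative assumptions, not values from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (assumptions, not from the slides)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # state transition, m x m
C = np.array([[1.0, 0.0]])        # measurement matrix, n x m
K_U = 0.01 * np.eye(2)            # process "noise" covariance
K_W = np.array([[0.25]])          # measurement noise covariance

theta_hat_10 = np.zeros(2)        # initial guess, before any data
P_10 = np.eye(2)                  # confidence in the initial guess

# Draw theta_1 ~ N(theta_hat_{1|0}, P_{1|0}), then run the recursions
theta = rng.multivariate_normal(theta_hat_10, P_10)
states, measurements = [], []
for k in range(50):
    states.append(theta)
    measurements.append(C @ theta + rng.multivariate_normal(np.zeros(1), K_W))
    theta = A @ theta + rng.multivariate_normal(np.zeros(2), K_U)

states = np.array(states)              # 50 x 2
measurements = np.array(measurements)  # 50 x 1
```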

Page 3:

Constant Velocity Model

• For small sample intervals T, the following model is commonly used (Blackman & Popoli, Sec. 4.2.2):

$$\begin{bmatrix} p_{k+1} \\ v_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} p_k \\ v_k \end{bmatrix} + U_k, \qquad K_U = q \begin{bmatrix} T^3/3 & T^2/2 \\ T^2/2 & T \end{bmatrix}$$

• Bar-Shalom & Li's rule of thumb for selecting $q$: $\tfrac{1}{2}\,\text{max\_acceleration} \le \sqrt{q} \le \text{max\_acceleration}$
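The constant-velocity matrices are easy to build and sanity-check in code; the $T$ and $q$ values below are illustrative:

```python
import numpy as np

def constant_velocity_model(T, q):
    """Discrete constant-velocity model (Blackman & Popoli, Sec. 4.2.2)."""
    A = np.array([[1.0, T],
                  [0.0, 1.0]])
    K_U = q * np.array([[T**3 / 3, T**2 / 2],
                        [T**2 / 2, T       ]])
    return A, K_U

A, K_U = constant_velocity_model(T=0.5, q=1.0)

# K_U must be a valid covariance: symmetric positive semidefinite
eigvals = np.linalg.eigvalsh(K_U)
```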

Page 4:

Constant Acceleration Model

• For small sample intervals T, another common model is (Blackman & Popoli, p. 202):

$$\begin{bmatrix} p_{k+1} \\ v_{k+1} \\ a_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p_k \\ v_k \\ a_k \end{bmatrix} + U_k, \qquad K_U = q \begin{bmatrix} T^5/20 & T^4/8 & T^3/6 \\ T^4/8 & T^3/3 & T^2/2 \\ T^3/6 & T^2/2 & T \end{bmatrix}$$
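One way to see where this $K_U$ comes from: it matches the discretization of continuous white noise driving the jerk, $K_U = \int_0^T \Phi(s)\,G\,q\,G^T\Phi(s)^T\,ds$ with $G = [0\;\;0\;\;1]^T$. This continuous-time interpretation is an assumption consistent with standard tracking texts, not something stated on the slide; a sketch verifying the closed form numerically:

```python
import numpy as np

T, q = 0.5, 2.0

# Closed-form K_U from the slide
K_U = q * np.array([[T**5/20, T**4/8, T**3/6],
                    [T**4/8,  T**3/3, T**2/2],
                    [T**3/6,  T**2/2, T     ]])

# Numerically integrate  int_0^T Phi(s) G q G^T Phi(s)^T ds
def Phi(s):
    return np.array([[1.0, s,   s**2/2],
                     [0.0, 1.0, s     ],
                     [0.0, 0.0, 1.0   ]])

G = np.array([[0.0], [0.0], [1.0]])
n = 2000
s_grid = (np.arange(n) + 0.5) * T / n   # midpoint rule
K_num = sum(Phi(s) @ G @ G.T @ Phi(s).T for s in s_grid) * q * T / n
```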

Page 5:

Singer Maneuver Model

• It is often a good idea to encourage the acceleration process to tend back toward zero (Blackman & Popoli, Sec. 4.2.1):

$$\begin{bmatrix} p_{k+1} \\ v_{k+1} \\ a_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & \rho \end{bmatrix} \begin{bmatrix} p_k \\ v_k \\ a_k \end{bmatrix} + U_k, \quad \text{where } 0 \le \rho \le 1$$

• R.A. Singer, "Estimating Optimal Tracking Filter Performance for Manned Maneuvering Targets," IEEE Trans. on Aerospace and Electronic Systems, vol. 5, pp. 473-483, July 1970.
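A sketch of the Singer-style transition, assuming the common parameterization $\rho = e^{-T/\tau}$ with maneuver time constant $\tau$ (the $\tau$ values here are illustrative assumptions); as $\tau \to \infty$, $\rho \to 1$ and the model reduces to the constant-acceleration transition:

```python
import numpy as np

def singer_transition(T, tau):
    """Singer-style transition: acceleration decays toward zero.
    rho = exp(-T/tau), 0 < rho <= 1; tau is the maneuver time constant.
    (Assumed small-T form; see Blackman & Popoli, Sec. 4.2.1.)"""
    rho = np.exp(-T / tau)
    return np.array([[1.0, T,   T**2 / 2],
                     [0.0, 1.0, T       ],
                     [0.0, 0.0, rho     ]])

A_singer = singer_transition(T=0.1, tau=20.0)

# Constant-acceleration transition for the same T
A_ca = np.array([[1.0, 0.1, 0.005],
                 [0.0, 1.0, 0.1  ],
                 [0.0, 0.0, 1.0  ]])

# Long maneuver time constant -> essentially the constant-acceleration model
gap = np.abs(singer_transition(0.1, 1e6) - A_ca).max()
```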

Page 6:

Extending to Higher Dimensions

• Easily extended to two or more dimensions by making processes in the different coordinates independent:

$$\begin{bmatrix} p^x_{k+1} \\ v^x_{k+1} \\ p^y_{k+1} \\ v^y_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p^x_k \\ v^x_k \\ p^y_k \\ v^y_k \end{bmatrix} + U_k$$

• This is just a convenient mathematical model; of course, for real aircraft, the motion in the different coordinates is not independent!
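A sketch of the block-diagonal construction, reusing the constant-velocity blocks from earlier (values illustrative); `np.kron` with an identity places the same block on the diagonal once per coordinate:

```python
import numpy as np

T, q = 0.5, 1.0
A_cv = np.array([[1.0, T],
                 [0.0, 1.0]])
K_cv = q * np.array([[T**3/3, T**2/2],
                     [T**2/2, T     ]])

# Independent x/y coordinates -> block-diagonal system matrices,
# with state ordered [p_x, v_x, p_y, v_y]
A_2d = np.kron(np.eye(2), A_cv)
K_2d = np.kron(np.eye(2), K_cv)
```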

Page 7:

Goal of Kalman Filtering

• Goal: Find MMSE (conditional mean) estimates
$\hat\theta_{k|k}(y) = E[\theta_k \mid y_1, \ldots, y_k]$
$\hat\theta_{k+1|k}(y) = E[\theta_{k+1} \mid y_1, \ldots, y_k]$ (prediction)
• Define the filter error covariance
$P_{k|k} = E[(\theta_k - \hat\theta_{k|k}(Y))(\theta_k - \hat\theta_{k|k}(Y))^T \mid y_1, \ldots, y_k]$
and the prediction covariance
$P_{k+1|k} = E[(\theta_{k+1} - \hat\theta_{k+1|k}(Y))(\theta_{k+1} - \hat\theta_{k+1|k}(Y))^T \mid y_1, \ldots, y_k]$

Page 8:

A Tale of Two Systems

[Block diagram: the true system, a delay with feedback through $A$ and output through $C$, driven by process noise $u_k$ and producing $y_k$ via additive measurement noise $w_k$; beside it, the filter, a copy of the same delay/$A$/$C$ structure driven by the gain $L$ acting on the residual $y_k - \hat y_{k|k-1}$, producing $\hat\theta_{k|k}$, $\hat\theta_{k|k-1}$, and $\hat\theta_{k-1|k-1}$.]

(based on a lecture by J.A. O'Sullivan)

Page 9:

Step 1: Prediction

• Recall the state equation: $\theta_{k+1} = A\theta_k + U_k$
• Predicted state: $\hat\theta_{k+1|k}(y) = E[\theta_{k+1} \mid y_1, \ldots, y_k] = A\hat\theta_{k|k}$
• Predicted covariance: $P_{k+1|k}(y) = A P_{k|k} A^T + K_U$
• Putting a Gaussian random vector through a linear transformation yields another Gaussian random vector:
$\theta_{k+1|k}(y) \sim N(\hat\theta_{k+1|k}, P_{k+1|k}) = N(A\hat\theta_{k|k},\; A P_{k|k} A^T + K_U)$
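The prediction step is two lines of code; the numbers below are illustrative:

```python
import numpy as np

def kalman_predict(theta_hat, P, A, K_U):
    """Step 1: propagate the filtered estimate through the state equation.
    theta_hat_{k+1|k} = A theta_hat_{k|k};  P_{k+1|k} = A P_{k|k} A^T + K_U."""
    theta_pred = A @ theta_hat
    P_pred = A @ P @ A.T + K_U
    return theta_pred, P_pred

# Illustrative numbers (assumptions, not from the slides)
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
theta_pred, P_pred = kalman_predict(np.array([2.0, 1.0]), np.eye(2), A,
                                    0.1 * np.eye(2))
```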

Page 10:

Step 2A: State Update

• Recall the measurement equation: $Y_{k+1} = C\theta_{k+1} + W_{k+1}$
• Consider the predicted data, conditioned on all data up to $k$:
$Y_{k+1|k} \sim N(C\hat\theta_{k+1|k},\; C P_{k+1|k} C^T + K_W)$
• The conditional-Gaussian formula gives the state update:
$\hat\theta_{k+1|k+1} = E[\theta_{k+1|k}] + \mathrm{Cov}[\theta_{k+1|k}, Y_{k+1|k}]\,\mathrm{Cov}[Y_{k+1|k}]^{-1}\,(y_{k+1} - E[Y_{k+1|k}])$
$\phantom{\hat\theta_{k+1|k+1}} = \hat\theta_{k+1|k} + P_{k+1|k}C^T (C P_{k+1|k} C^T + K_W)^{-1} (y_{k+1} - C\hat\theta_{k+1|k})$
• The matrix $P_{k+1|k}C^T (C P_{k+1|k} C^T + K_W)^{-1}$ is the Kalman gain

Page 11:

Step 2B: Covariance Update

• Similarly, conditioning the Gaussian prediction on the new data updates the covariance:
$P_{k+1|k+1} = \mathrm{Cov}[\theta_{k+1|k}] - \mathrm{Cov}[\theta_{k+1|k}, Y_{k+1|k}]\,\mathrm{Cov}[Y_{k+1|k}]^{-1}\,\mathrm{Cov}[Y_{k+1|k}, \theta_{k+1|k}]$
$\phantom{P_{k+1|k+1}} = P_{k+1|k} - P_{k+1|k}C^T (C P_{k+1|k} C^T + K_W)^{-1} C P_{k+1|k}$, where $P_{k+1|k} = A P_{k|k} A^T + K_U$
• The covariance matrix tells us how much confidence we should have in our state estimates
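A sketch of the combined state and covariance update (Steps 2A and 2B); the example numbers are illustrative:

```python
import numpy as np

def kalman_update(theta_pred, P_pred, y, C, K_W):
    """Steps 2A/2B: condition the Gaussian prediction on the new measurement.
    Gain L = P C^T (C P C^T + K_W)^{-1}; innovation nu = y - C theta_pred."""
    S = C @ P_pred @ C.T + K_W            # innovation covariance
    L = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    theta_hat = theta_pred + L @ (y - C @ theta_pred)
    P = P_pred - L @ C @ P_pred           # == P_pred - P C^T S^{-1} C P
    return theta_hat, P

# Illustrative numbers (assumptions, not from the slides)
C = np.array([[1.0, 0.0]])
K_W = np.array([[1.0]])
theta_hat, P = kalman_update(np.zeros(2), np.eye(2), np.array([2.0]), C, K_W)
```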

Page 12:

Putting it All Together

• The Kalman filter state and covariance updates (dropping the $k|k$ notation: $\hat\theta_k \equiv \hat\theta_{k|k}$, $P_k \equiv P_{k|k}$):
$\hat\theta_{k+1} = A\hat\theta_k + L_{k+1}\,(y_{k+1} - CA\hat\theta_k)$, where $y_{k+1} - CA\hat\theta_k$ is the innovation
$L_{k+1} = P_{k+1|k}C^T (C P_{k+1|k}C^T + K_W)^{-1}$, where $C P_{k+1|k}C^T + K_W = \mathrm{Cov}[Y_{k+1|k}]$
• Note that the Kalman gains don't involve the data, and can hence be computed offline ahead of time:
$P_{k+1|k} = A P_{k|k}A^T + K_U$
$P_{k+1} = P_{k+1|k+1} = P_{k+1|k} - P_{k+1|k}C^T (C P_{k+1|k}C^T + K_W)^{-1} C P_{k+1|k}$  (♥)
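A sketch of this offline/online split: the gain and covariance recursion runs first, without any data, and the precomputed gains are then applied to simulated measurements (the model parameters are illustrative):

```python
import numpy as np

T, q = 1.0, 0.1
A = np.array([[1.0, T], [0.0, 1.0]])
K_U = q * np.array([[T**3/3, T**2/2], [T**2/2, T]])
C = np.array([[1.0, 0.0]])
K_W = np.array([[1.0]])
N = 30

# Offline: the gain/covariance recursion never touches the data
P = np.eye(2)                       # P_{1|0}
gains = []
for _ in range(N):
    P_pred = A @ P @ A.T + K_U
    S = C @ P_pred @ C.T + K_W
    L = P_pred @ C.T @ np.linalg.inv(S)
    P = P_pred - L @ C @ P_pred
    gains.append(L)

# Online: apply the precomputed gains to measurements
rng = np.random.default_rng(1)
theta = np.array([0.0, 1.0])
theta_hat = np.zeros(2)
errors = []
for L in gains:
    theta = A @ theta + rng.multivariate_normal(np.zeros(2), K_U)
    y = C @ theta + rng.multivariate_normal(np.zeros(1), K_W)
    theta_hat = A @ theta_hat + L @ (y - C @ A @ theta_hat)
    errors.append(np.linalg.norm(theta - theta_hat))
```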

Page 13:

The Innovation Sequence

• Let the prediction of the data given the previous data be given by $\hat y_{k+1|k} = E[Y_{k+1} \mid y_1, \ldots, y_k] = CA\hat\theta_k$
• The innovations are defined as: $\nu_{k+1} = y_{k+1} - \hat y_{k+1|k} = y_{k+1} - CA\hat\theta_k$
• The innovations process is white: $E[\nu_k \nu_l^T] = 0$ for $k \ne l$
• The innovations are orthogonal to the data: $E[\nu_k Y_l^T] = 0$ for $l < k$
• When trying this on real data, testing the innovations for "whiteness" tells you how accurate your models are
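A sketch of such a whiteness test: run the filter on data simulated from the matching model and check that the sample lag-one autocorrelation of the (scalar) innovations is near zero; all parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
T, q = 1.0, 0.1
A = np.array([[1.0, T], [0.0, 1.0]])
K_U = q * np.array([[T**3/3, T**2/2], [T**2/2, T]])
C = np.array([[1.0, 0.0]])
K_W = np.array([[1.0]])

theta = np.array([0.0, 1.0])
theta_hat = np.zeros(2)
P = np.eye(2)
innovations = []
for _ in range(2000):
    theta = A @ theta + rng.multivariate_normal(np.zeros(2), K_U)
    y = C @ theta + rng.multivariate_normal(np.zeros(1), K_W)
    # Predict, innovate, update
    theta_pred = A @ theta_hat
    P_pred = A @ P @ A.T + K_U
    nu = y - C @ theta_pred                 # innovation
    S = C @ P_pred @ C.T + K_W
    L = P_pred @ C.T @ np.linalg.inv(S)
    theta_hat = theta_pred + L @ nu
    P = P_pred - L @ C @ P_pred
    innovations.append(nu.item())

nu_arr = np.array(innovations) - np.mean(innovations)
lag1 = (nu_arr[:-1] * nu_arr[1:]).mean() / (nu_arr**2).mean()
```

With a matched model the lag-one correlation should be within sampling error of zero; a model mismatch (e.g. wrong $A$ or $q$) shows up as correlated innovations.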

Page 14:

Assorted Tidbits

• For non-Gaussian statistics (process and measurement noise), the Kalman filter is the best linear MMSE estimator
• Combining the covariance update steps yields the discrete Riccati equation: $P_{k+1|k+1} = $ a messy function of $P_{k|k}$
• Under some conditions, the DRE has a fixed point $\bar P$ and $P_k \to \bar P$; in this case, the Kalman filter acts like a Wiener filter for large $k$
• $P_{k|k}$ tells us our confidence in our state estimates. If it is small, then $L_k$ is small, and the filter is saturated; we pay little attention to the measurements.
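The fixed point can be seen numerically: iterating the Riccati recursion from two different initial covariances converges to the same $\bar P$ (the model parameters below are illustrative):

```python
import numpy as np

T, q = 1.0, 0.1
A = np.array([[1.0, T], [0.0, 1.0]])
K_U = q * np.array([[T**3/3, T**2/2], [T**2/2, T]])
C = np.array([[1.0, 0.0]])
K_W = np.array([[1.0]])

def riccati_step(P):
    """One sweep of the discrete Riccati recursion: predict, then update."""
    P_pred = A @ P @ A.T + K_U
    S = C @ P_pred @ C.T + K_W
    return P_pred - P_pred @ C.T @ np.linalg.inv(S) @ C @ P_pred

# Iterate from two different initial conditions; both reach the same P_bar
P1, P2 = np.eye(2), 100.0 * np.eye(2)
for _ in range(200):
    P1, P2 = riccati_step(P1), riccati_step(P2)
```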

Page 15:

Pseudomeasurement Approach

• Problem with the Kalman filter applied to radar: the data is rarely a linear function of the parameters
• Canonical example: radar measuring range and angle
– Here we consider a 2-D scenario, and consider just the azimuth angle; this could be extended to include an elevation angle
• One possible solution: convert the raw polar data into Cartesian-coordinate "pseudomeasurements", then process them using the Kalman filter:
$g_{CAR \leftarrow POL}(y) = \begin{bmatrix} r\cos\phi \\ r\sin\phi \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$
• Need a covariance on the new data; use CRB-type transformation formulas
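A sketch of the conversion with a linearized (Jacobian-based, CRB-type) covariance transformation; the measurement standard deviations below are illustrative assumptions:

```python
import numpy as np

def polar_to_cartesian(r, phi, sigma_r, sigma_phi):
    """Convert a (range, azimuth) measurement into a Cartesian
    pseudomeasurement, with a linearized (Jacobian-based) covariance.
    (Standard conversion sketch; sigma values are illustrative.)"""
    x, y = r * np.cos(phi), r * np.sin(phi)
    # Jacobian of (x, y) with respect to (r, phi)
    J = np.array([[np.cos(phi), -r * np.sin(phi)],
                  [np.sin(phi),  r * np.cos(phi)]])
    R_polar = np.diag([sigma_r**2, sigma_phi**2])
    R_cart = J @ R_polar @ J.T
    return np.array([x, y]), R_cart

z, R = polar_to_cartesian(r=1000.0, phi=np.pi/6, sigma_r=10.0, sigma_phi=0.01)
```

Note how the angle error $\sigma_\phi$ is scaled by the range in the Jacobian; this is exactly the large cross-range error effect discussed on the next slide.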

Page 16:

Trouble with Pseudomeasurements

• Accuracy of the covariance transformation depends on the accuracy of the state estimate
– Can have problems with significant bias for large cross-range errors (Bar-Shalom & Li, Sec. 1.6.2)
– Bar-Shalom & Li offer an improved approach with "debiased conversion"
• The original measurement errors may be Gaussian, but the converted measurement errors are not
– The Kalman filter is only truly optimal for Gaussian measurement errors
• In more general problems, we may not always have a nice invertible mapping from the original parameter space to the sensor space!