
Kalman Filter Derivation

Kalman Filter Derivation: Overview

1. Discuss several useful matrix identities.

2. Derive Kalman filter algorithms.

3. Discuss the alternate form (Alternate Gain Expression) of the Kalman filter.

Kalman Filter Derivation: References

1. Applied Optimal Estimation. Arthur Gelb (ed.). M.I.T. Press, 1986.

2. Introduction to Random Signals and Applied Kalman Filtering, 2nd Edition. R. G. Brown and P. Y. C. Hwang. John Wiley and Sons, Inc., New York, 1992.

Kalman Filter Derivation: Kalman Filter Equations

In this section, we will derive the five Kalman filter equations:

1. State Extrapolation

2. Covariance Extrapolation

3. Kalman Gain Computation

4. State Update

5. Covariance Update

$$\hat{x}^-_{k+1} = \Phi_k\,\hat{x}_k \tag{1}$$

$$P^-_{k+1} = \Phi_k P_k \Phi_k^T + Q_k \tag{2}$$

$$K_{k+1} = P^-_{k+1} H_{k+1}^T \left( H_{k+1} P^-_{k+1} H_{k+1}^T + R_{k+1} \right)^{-1} \tag{3}$$

$$\hat{x}_{k+1} = \hat{x}^-_{k+1} + K_{k+1}\left( z_{k+1} - H_{k+1}\,\hat{x}^-_{k+1} \right) \tag{4}$$

$$P_{k+1} = \left( I - K_{k+1} H_{k+1} \right) P^-_{k+1} \tag{5}$$
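To make the recursion concrete, here is a minimal NumPy sketch of one predict/update cycle using these five equations. The 2-state constant-velocity model and all numerical values are illustrative choices for this example, not part of the original notes.

```python
import numpy as np

# Illustrative system: 2-state constant-velocity model, scalar position measurement.
Phi = np.array([[1.0, 1.0],
                [0.0, 1.0]])        # state transition Phi_k
Q   = np.diag([0.01, 0.01])         # process noise covariance Q_k
H   = np.array([[1.0, 0.0]])        # measurement matrix H_{k+1}
R   = np.array([[0.5]])             # measurement noise covariance R_{k+1}

x_hat = np.array([0.0, 1.0])        # current estimate x_hat_k
P     = np.eye(2)                   # current covariance P_k
z     = np.array([1.2])             # measurement z_{k+1}

# 1. State extrapolation
x_pred = Phi @ x_hat
# 2. Covariance extrapolation
P_pred = Phi @ P @ Phi.T + Q
# 3. Kalman gain
K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
# 4. State update
x_hat_new = x_pred + K @ (z - H @ x_pred)
# 5. Covariance update
P_new = (np.eye(2) - K @ H) @ P_pred

print(x_hat_new, P_new)
```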

Kalman Filter Derivation: Definitions and Identities

Vector and Matrix Differentiation

Let $z$ be a scalar and $x$ be an $n \times 1$ column vector. Then

$$\frac{\partial z}{\partial x} = \begin{bmatrix} \dfrac{\partial z}{\partial x_1} \\ \vdots \\ \dfrac{\partial z}{\partial x_n} \end{bmatrix}$$

Likewise, for the $m \times n$ matrix $A$,

$$\frac{\partial z}{\partial A} = \begin{bmatrix} \dfrac{\partial z}{\partial a_{11}} & \cdots & \dfrac{\partial z}{\partial a_{1n}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial z}{\partial a_{m1}} & \cdots & \dfrac{\partial z}{\partial a_{mn}} \end{bmatrix}$$

Kalman Filter Derivation: Definitions and Identities

1. $\dfrac{\partial\,(x^T y)}{\partial x} = \dfrac{\partial\,(y^T x)}{\partial x} = y$

2. $\dfrac{\partial\,(x^T x)}{\partial x} = 2x$

3. $\dfrac{\partial\,(x^T N x)}{\partial x} = Nx + N^T x = 2Nx$, where $N$ is symmetric

4. $\dfrac{\partial}{\partial x}\left[(Ax + b)^T M (Ax + b)\right] = 2A^T M A x + 2A^T M b = 2A^T M (Ax + b)$, where $M$ is symmetric

5. $\dfrac{\partial\,\mathrm{trace}(AC)}{\partial A} = C^T$  (Note: for $AC$ to be square, $\dim C = \dim A^T$)

   $\dfrac{\partial\,\mathrm{trace}(ABA^T)}{\partial A} = 2AB$, where $B$ is symmetric
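These differentiation identities are easy to spot-check numerically. The sketch below (my own check, with arbitrary random test matrices and a finite-difference tolerance chosen for illustration) verifies identity 3 and the second trace identity in 5.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 1e-6

# Identity 3: d(x^T N x)/dx = 2 N x for symmetric N
N = rng.normal(size=(n, n)); N = (N + N.T) / 2
x = rng.normal(size=n)
grad_fd = np.array([
    ((x + eps * e) @ N @ (x + eps * e) - (x - eps * e) @ N @ (x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])
assert np.allclose(grad_fd, 2 * N @ x, atol=1e-5)

# Identity 5: d trace(A B A^T)/dA = 2 A B for symmetric B
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n)); B = (B + B.T) / 2
grad_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        grad_fd[i, j] = (np.trace((A + E) @ B @ (A + E).T)
                         - np.trace((A - E) @ B @ (A - E).T)) / (2 * eps)
assert np.allclose(grad_fd, 2 * A @ B, atol=1e-5)
print("identities 3 and 5 verified numerically")
```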

Kalman Filter Derivation: Definitions and Identities

6. Gradient Expression: $\dfrac{\partial}{\partial x}\left[(Hx - z)^T W (Hx - z)\right] = 2H^T W (Hx - z)$, for symmetric $W$

7. Gain Expression: $\left(P^{-1} + H^T R^{-1} H\right)^{-1} H^T R^{-1} = P H^T\left(H P H^T + R\right)^{-1}$

   Proof:
   $$H^T R^{-1}\left(H P H^T + R\right) = H^T R^{-1} H P H^T + H^T = \left(P^{-1} + H^T R^{-1} H\right) P H^T$$
   so that
   $$\left(P^{-1} + H^T R^{-1} H\right)^{-1} H^T R^{-1} = P H^T\left(H P H^T + R\right)^{-1}$$

8. Matrix Inversion Lemma: $\left(P^{-1} + H^T R^{-1} H\right)^{-1} = P - P H^T\left(H P H^T + R\right)^{-1} H P$

   Proof:
   $$\left(P^{-1} + H^T R^{-1} H\right) P = I + H^T R^{-1} H P$$
   so that
   $$P = \left(P^{-1} + H^T R^{-1} H\right)^{-1} + \left(P^{-1} + H^T R^{-1} H\right)^{-1} H^T R^{-1} H P$$
   Rearranging,
   $$\left(P^{-1} + H^T R^{-1} H\right)^{-1} = P - \left(P^{-1} + H^T R^{-1} H\right)^{-1} H^T R^{-1} H P$$
   Use the Gain Expression identity above on the second term to obtain the Inversion Lemma:
   $$\left(P^{-1} + H^T R^{-1} H\right)^{-1} = P - P H^T\left(H P H^T + R\right)^{-1} H P$$
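Identities 7 and 8 can likewise be confirmed numerically. The following sketch (an added check, assuming arbitrary symmetric positive-definite P and R) compares the left- and right-hand sides of both identities.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2                                                  # n = dim x, m = dim z

A = rng.normal(size=(n, n)); P = A @ A.T + n * np.eye(n)     # symmetric positive definite
B = rng.normal(size=(m, m)); R = B @ B.T + m * np.eye(m)
H = rng.normal(size=(m, n))

Pinv, Rinv = np.linalg.inv(P), np.linalg.inv(R)

# Identity 7 (Gain Expression)
lhs7 = np.linalg.inv(Pinv + H.T @ Rinv @ H) @ H.T @ Rinv
rhs7 = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
assert np.allclose(lhs7, rhs7)

# Identity 8 (Matrix Inversion Lemma)
lhs8 = np.linalg.inv(Pinv + H.T @ Rinv @ H)
rhs8 = P - P @ H.T @ np.linalg.inv(H @ P @ H.T + R) @ H @ P
assert np.allclose(lhs8, rhs8)
print("gain expression and inversion lemma verified")
```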

Kalman Filter Derivation: Assumptions

Assume the following form of the estimator:

• linear

• recursive

The goal is to show that the Kalman Filter Equations provide the minimum variance estimator over all unbiased estimators which have this form.

No assumptions are made concerning the particular distribution of the process or measurement noise.

Kalman Filter Derivation: Model

Process: $x_{k+1} = \Phi_k x_k + w_k$

Measurement: $z_k = H_k x_k + v_k$

Assumptions:

$E[x_0] = \hat{x}_0$, $\mathrm{cov}(x_0, x_0) = P_0$

$E[w_k] = 0$, $\mathrm{cov}(w_k, w_j) = Q_k\,\delta_{kj}$ for all $k, j$

$E[v_k] = 0$, $\mathrm{cov}(v_k, v_j) = R_k\,\delta_{kj}$ for all $k, j$

$\mathrm{cov}(w_k, v_j) = 0$ for all $k, j$

$\mathrm{cov}(x_0, w_k) = 0$ for all $k$

$\mathrm{cov}(x_0, v_k) = 0$ for all $k$
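For intuition, a short sketch of simulating this model is given below. Gaussian noise is assumed purely for convenience of sampling; as noted above, the derivation itself does not require any particular distribution, only the stated means and covariances. All matrices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])    # Phi_k (held constant for simplicity)
H   = np.array([[1.0, 0.0]])                # H_k
Q   = np.diag([0.02, 0.02])                 # cov(w_k)
R   = np.array([[0.25]])                    # cov(v_k)
x0_hat, P0 = np.array([0.0, 1.0]), np.eye(2)

x = rng.multivariate_normal(x0_hat, P0)     # x_0 ~ (x0_hat, P0)
xs, zs = [], []
for k in range(50):
    w = rng.multivariate_normal(np.zeros(2), Q)   # process noise w_k
    v = rng.multivariate_normal(np.zeros(1), R)   # measurement noise v_{k+1}
    x = Phi @ x + w                               # x_{k+1} = Phi_k x_k + w_k
    z = H @ x + v                                 # z_{k+1} = H_{k+1} x_{k+1} + v_{k+1}
    xs.append(x); zs.append(z)                    # true states and measurements
```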

Kalman Filter Derivation: Assumptions

1. Assume that at time $k$ we have available an unbiased estimate $\hat{x}_k$ of the state at time $k$:
   $$\tilde{x}_k = x_k - \hat{x}_k \sim \left(0, P_k\right)$$
   The error term, in addition to having zero mean, has covariance $P_k$.

2. At time $k+1$ we have a measurement $z_{k+1}$ available, where
   $$z_{k+1} = H_{k+1} x_{k+1} + v_{k+1}$$

Kalman Filter Derivation: Goal

The goal is to find the unbiased, minimum variance estimator of the state at time $k+1$, of the form
$$\hat{x}_{k+1} = K'_{k+1}\,\hat{x}_k + K_{k+1} z_{k+1}$$
Note that this estimator is both

• linear in $\hat{x}_k$ and $z_{k+1}$

• recursive: it only processes the current measurement $z_{k+1}$

Kalman Filter Derivation: Derivation Steps

Step 1. For an unbiased $\hat{x}_{k+1}$, develop an expression for $K'_{k+1}$. Substitute for $K'_{k+1}$ to obtain the state update expression (equation 4).

Step 2. Develop an expression for $K_{k+1}$ that minimizes the variance.

   a. Define the $\hat{x}^-_{k+1}$ equation (equation 1) and the $P^-_{k+1}$ equation (equation 2).

   b. Define the $P_{k+1}$ equation (equation 5, Joseph form).

   c. Define the $K_{k+1}$ equation (equation 3) and the $P_{k+1}$ equation (equation 5, short form).

Kalman Filter Derivation: Step 1

The unbiased criterion forces a certain relationship between $K'_{k+1}$ and $K_{k+1}$. What we will see over the next few pages is that this criterion requires that
$$K'_{k+1} = \Phi_k - K_{k+1} H_{k+1}\Phi_k$$
For $\hat{x}_{k+1}$ to be unbiased:
$$E\left[\hat{x}_{k+1} - x_{k+1}\right] = 0$$

Kalman Filter Derivation: Step 1

$$\hat{x}_{k+1} - x_{k+1} = K'_{k+1}\,\hat{x}_k + K_{k+1} z_{k+1} - x_{k+1}$$
Substituting for the terms in the brackets gives
$$= K'_{k+1}\,\hat{x}_k + K_{k+1}\left(H_{k+1} x_{k+1} + v_{k+1}\right) - x_{k+1}$$
Adding and subtracting two terms, and further substitution, gives
$$= K'_{k+1}\,\hat{x}_k + K_{k+1} H_{k+1}\left(\Phi_k x_k + w_k\right) + K_{k+1} v_{k+1} - \left(\Phi_k x_k + w_k\right) + \underbrace{K'_{k+1} x_k - K'_{k+1} x_k}_{\text{Add and Subtract}}$$
Rearranging terms gives
$$= K'_{k+1}\left(\hat{x}_k - x_k\right) + \left[K'_{k+1} + K_{k+1} H_{k+1}\Phi_k - \Phi_k\right] x_k + \left(K_{k+1} H_{k+1} - I\right) w_k + K_{k+1} v_{k+1}$$

Kalman Filter Derivation: Step 1

The final step is to take the expectation of this expression and set it equal to zero. For the right-hand side to be equal to zero, the following must be true:
$$E\left[\hat{x}_{k+1} - x_{k+1}\right] = \left[K'_{k+1} + K_{k+1} H_{k+1}\Phi_k - \Phi_k\right] E\left[x_k\right] = 0$$
which implies
$$K'_{k+1} = \Phi_k - K_{k+1} H_{k+1}\Phi_k$$
or
$$K'_{k+1} = \left(I - K_{k+1} H_{k+1}\right)\Phi_k$$

Kalman Filter Derivation: Step 1

Thus, to satisfy the unbiased criterion:
$$\hat{x}_{k+1} = \left(I - K_{k+1} H_{k+1}\right)\Phi_k\,\hat{x}_k + K_{k+1} z_{k+1}$$
or equivalently
$$\hat{x}_{k+1} = \underbrace{\Phi_k\,\hat{x}_k}_{\text{extrapolated state}} + K_{k+1}\underbrace{\left(z_{k+1} - H_{k+1}\Phi_k\,\hat{x}_k\right)}_{\text{residual of measurement and prediction of measurement}}$$
which is the state update equation (equation 4).

It remains to find the value of $K_{k+1}$ which minimizes the covariance of the estimation error.

Kalman Filter Derivation: Step 1

The estimation error is
$$\tilde{x}_{k+1} = x_{k+1} - \hat{x}_{k+1}$$
The covariance of this error,
$$P_{k+1} = E\left[\tilde{x}_{k+1}\tilde{x}_{k+1}^T\right],$$
looks like
$$P_{k+1} = \begin{bmatrix} \sigma_1^2 & & \cdots \\ & \ddots & \\ \cdots & & \sigma_n^2 \end{bmatrix}$$
The goal will be to find $K_{k+1}$ such that
$$\mathrm{Trace}\left(P_{k+1}\right) = \sigma_1^2 + \cdots + \sigma_n^2$$
is minimized (i.e., minimum variance).

Kalman Filter Derivation: Step 2

Step 2 is to find $K_{k+1}$ which minimizes $\mathrm{Trace}\left(P_{k+1}\right)$, where
$$P_{k+1} = \mathrm{cov}\left(\tilde{x}_{k+1}, \tilde{x}_{k+1}\right) = E\left[\tilde{x}_{k+1}\tilde{x}_{k+1}^T\right]$$

A. First find the covariance of the extrapolated estimate error, $P^-_{k+1}$ (equation 2).

The extrapolated estimate is defined as (equation 1)
$$\hat{x}^-_{k+1} = \Phi_k\,\hat{x}_k$$

Kalman Filter Derivation: Step 2

The extrapolated estimate error is then
$$\tilde{x}^-_{k+1} = x_{k+1} - \hat{x}^-_{k+1} = \Phi_k x_k + w_k - \Phi_k\,\hat{x}_k = \underbrace{\Phi_k\tilde{x}_k}_{\tilde{x}_k \text{ has cov } P_k} + \underbrace{w_k}_{w_k \text{ has cov } Q_k}$$
To obtain equation 2, take the expected value of both sides (the cross terms vanish because $\tilde{x}_k$ and $w_k$ are uncorrelated):
$$P^-_{k+1} = E\left[\tilde{x}^-_{k+1}\left(\tilde{x}^-_{k+1}\right)^T\right] = E\left[\left(\Phi_k\tilde{x}_k + w_k\right)\left(\Phi_k\tilde{x}_k + w_k\right)^T\right] = \Phi_k\,E\left[\tilde{x}_k\tilde{x}_k^T\right]\Phi_k^T + E\left[w_k w_k^T\right]$$
Thus, equation 2 is
$$P^-_{k+1} = \Phi_k P_k\Phi_k^T + Q_k$$
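Equation 2 can be sanity-checked by Monte Carlo: sample the extrapolated error $\Phi_k\tilde{x}_k + w_k$ many times and compare its sample covariance to $\Phi_k P_k\Phi_k^T + Q_k$. The sketch below is an added check using arbitrary illustrative matrices.

```python
import numpy as np

rng = np.random.default_rng(3)
Phi = np.array([[1.0, 0.5], [0.0, 1.0]])
P   = np.array([[0.4, 0.1], [0.1, 0.3]])     # cov of x_tilde_k
Q   = np.diag([0.05, 0.02])                  # cov of w_k

N = 200_000
x_tilde = rng.multivariate_normal(np.zeros(2), P, size=N)
w       = rng.multivariate_normal(np.zeros(2), Q, size=N)
err_pred = x_tilde @ Phi.T + w               # Phi x_tilde_k + w_k for each sample

P_pred_mc = np.cov(err_pred.T)               # sample covariance
P_pred    = Phi @ P @ Phi.T + Q              # equation 2
print(np.max(np.abs(P_pred_mc - P_pred)))    # should be small (sampling error only)
```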

Kalman Filter Derivation: Step 2

B. Find the covariance of the final estimation error, $P_{k+1} = \mathrm{cov}\left(\tilde{x}_{k+1}, \tilde{x}_{k+1}\right)$ (equation 5). It will be a function of $P^-_{k+1}$ and $K_{k+1}$.

Using the identity $\hat{x}_{k+1} = \hat{x}^-_{k+1} + K_{k+1}\left(z_{k+1} - H_{k+1}\,\hat{x}^-_{k+1}\right)$ in
$$\tilde{x}_{k+1} = x_{k+1} - \hat{x}_{k+1}$$
gives
$$\tilde{x}_{k+1} = x_{k+1} - \hat{x}^-_{k+1} - K_{k+1}\left(z_{k+1} - H_{k+1}\,\hat{x}^-_{k+1}\right)$$
$$= x_{k+1} - \hat{x}^-_{k+1} - K_{k+1}\left(H_{k+1} x_{k+1} + v_{k+1} - H_{k+1}\,\hat{x}^-_{k+1}\right)$$
$$= \left(x_{k+1} - \hat{x}^-_{k+1}\right) - K_{k+1} H_{k+1}\left(x_{k+1} - \hat{x}^-_{k+1}\right) - K_{k+1} v_{k+1}$$
$$= \left(I - K_{k+1} H_{k+1}\right)\tilde{x}^-_{k+1} - K_{k+1} v_{k+1}$$

Kalman Filter Derivation: Step 2

Thus
$$\tilde{x}_{k+1} = \left(I - K_{k+1} H_{k+1}\right)\tilde{x}^-_{k+1} - K_{k+1} v_{k+1}$$
Taking the expected value of both sides will provide us with an expression for $P_{k+1}$:
$$P_{k+1} = \mathrm{Cov}\left(\tilde{x}_{k+1}\right) = E\left[\tilde{x}_{k+1}\tilde{x}_{k+1}^T\right] = \left(I - K_{k+1} H_{k+1}\right) P^-_{k+1}\left(I - K_{k+1} H_{k+1}\right)^T + K_{k+1} R_{k+1} K_{k+1}^T$$
Now we have found the expression for the covariance update, equation 5 (the Joseph form). Note that $P_{k+1}$ is a function of $P^-_{k+1}$, $K_{k+1}$, and $R_{k+1}$.

Kalman Filter Derivation: Step 2

c. The final step is to find an expression for $K_{k+1}$ (equation 3) which minimizes the trace of $P_{k+1}$.

Using as shorthand notation $P$ for $P^-_{k+1}$, $K$ for $K_{k+1}$, $R$ for $R_{k+1}$, and $H$ for $H_{k+1}$:
$$P_{k+1} = \left(I - KH\right) P\left(I - KH\right)^T + K R K^T = \left(I - KH\right) P\left(I - H^T K^T\right) + K R K^T$$
$$= P - KHP - PH^T K^T + KHPH^T K^T + KRK^T$$
$$\mathrm{Tr}\left(P_{k+1}\right) = \mathrm{Tr}\left(P\right) - \mathrm{Tr}\left(KHP\right) - \mathrm{Tr}\left(PH^T K^T\right) + \mathrm{Tr}\left(KHPH^T K^T\right) + \mathrm{Tr}\left(KRK^T\right)$$

Kalman Filter Derivation: Step 2

Using the identities
$$\mathrm{Tr}\left(PH^T K^T\right) = \mathrm{Tr}\left(KHP\right) \quad (P \text{ symmetric})$$
$$\frac{\partial\,\mathrm{Tr}\left(AB\right)}{\partial A} = B^T$$
$$\frac{\partial\,\mathrm{Tr}\left(ACA^T\right)}{\partial A} = 2AC, \quad \text{where } C \text{ is symmetric}$$
we obtain the partial of $\mathrm{Tr}\left(P_{k+1}\right)$ with respect to $K$:
$$\frac{\partial\,\mathrm{Tr}\left(P_{k+1}\right)}{\partial K} = -2PH^T + 2KHPH^T + 2KR$$

Kalman Filter Derivation: Step 2

Taking the partial with respect to $K$,
$$\frac{\partial\,\mathrm{Tr}\left(P_{k+1}\right)}{\partial K} = -2PH^T + 2KHPH^T + 2KR$$
and setting this equal to zero and solving for $K$ gives
$$K = PH^T\left(HPH^T + R\right)^{-1}$$
which is the Kalman gain (equation 3)!

It can be verified that this is indeed a minimum (reference: Gelb, pg. 109).
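That the gain is a minimizer can also be checked numerically. The sketch below (an added check with arbitrary test matrices) evaluates the Joseph-form trace at the derived gain and at random perturbations of it; the perturbed gains should never give a smaller trace.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 3, 2
A = rng.normal(size=(n, n)); P = A @ A.T + np.eye(n)     # P^-_{k+1}, symmetric positive definite
B = rng.normal(size=(m, m)); R = B @ B.T + np.eye(m)     # R_{k+1}, symmetric positive definite
H = rng.normal(size=(m, n))

def trace_joseph(K):
    """Trace of (I - KH) P (I - KH)^T + K R K^T."""
    IKH = np.eye(n) - K @ H
    return np.trace(IKH @ P @ IKH.T + K @ R @ K.T)

K_star = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
t_star = trace_joseph(K_star)

# Any perturbed gain should give a trace at least as large.
assert all(trace_joseph(K_star + 0.1 * rng.normal(size=(n, m))) >= t_star
           for _ in range(100))
print("derived gain minimizes the trace:", t_star)
```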

Kalman Filter Derivation: Step 2

Now we have an expression for $K_{k+1}$ that optimizes the estimate:
$$K_{k+1} = P^-_{k+1} H_{k+1}^T\left(H_{k+1} P^-_{k+1} H_{k+1}^T + R_{k+1}\right)^{-1}$$
We can substitute $K_{k+1}$ in our $P_{k+1}$ expression
$$P_{k+1} = \left(I - K_{k+1} H_{k+1}\right) P^-_{k+1}\left(I - K_{k+1} H_{k+1}\right)^T + K_{k+1} R_{k+1} K_{k+1}^T$$
to get the short-form $P_{k+1}$ expression
$$P_{k+1} = \left(I - K_{k+1} H_{k+1}\right) P^-_{k+1}$$
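The simplification from the Joseph form to this short form holds at the optimal gain. The following sketch (illustrative matrices, added here as a check) confirms the two expressions agree at that gain.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 3, 1
A = rng.normal(size=(n, n)); P_pred = A @ A.T + np.eye(n)   # P^-_{k+1}
R = np.array([[0.3]])                                       # R_{k+1}
H = rng.normal(size=(m, n))                                 # H_{k+1}

K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)      # optimal gain
I = np.eye(n)

P_joseph = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
P_short  = (I - K @ H) @ P_pred

assert np.allclose(P_joseph, P_short)
print("Joseph form and short form agree at the optimal gain")
```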

Kalman Filter Derivation: Summary

Thus we have the recursive algorithm
$$\hat{x}_{k+1} = \hat{x}^-_{k+1} + K_{k+1}\left(z_{k+1} - H_{k+1}\,\hat{x}^-_{k+1}\right)$$
where
$$\hat{x}^-_{k+1} = \Phi_k\,\hat{x}_k$$
$$P^-_{k+1} = \Phi_k P_k\Phi_k^T + Q_k$$
$$K_{k+1} = P^-_{k+1} H_{k+1}^T\left(H_{k+1} P^-_{k+1} H_{k+1}^T + R_{k+1}\right)^{-1}$$
$$P_{k+1} = \left(I - K_{k+1} H_{k+1}\right) P^-_{k+1}$$
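A compact sketch of the full recursive loop is given below. Constant Φ, H, Q, and R are assumed for brevity, and the function name kalman_filter and the example measurements are illustrative; the measurement generator from the Model section could supply zs.

```python
import numpy as np

def kalman_filter(zs, x0_hat, P0, Phi, H, Q, R):
    """Run the recursive algorithm above over a sequence of measurements zs."""
    x_hat, P = x0_hat, P0
    n = len(x0_hat)
    estimates = []
    for z in zs:
        # Extrapolation (equations 1 and 2)
        x_pred = Phi @ x_hat
        P_pred = Phi @ P @ Phi.T + Q
        # Gain (equation 3)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        # Update (equations 4 and 5)
        x_hat = x_pred + K @ (z - H @ x_pred)
        P = (np.eye(n) - K @ H) @ P_pred
        estimates.append(x_hat)
    return estimates

# Example usage with the illustrative 2-state model used earlier
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
H, Q, R = np.array([[1.0, 0.0]]), np.diag([0.01, 0.01]), np.array([[0.5]])
zs = [np.array([1.1]), np.array([2.3]), np.array([2.9])]
print(kalman_filter(zs, np.array([0.0, 1.0]), np.eye(2), Phi, H, Q, R))
```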

Kalman Filter Derivation: Alternate Gain Expression

The standard Kalman Filter algorithm computes the gain $K_{k+1}$, then computes the updated covariance $P_{k+1}$ as a function of the gain:
$$K_{k+1} = P^-_{k+1} H_{k+1}^T\left(H_{k+1} P^-_{k+1} H_{k+1}^T + R_{k+1}\right)^{-1}$$
$$P_{k+1} = \left(I - K_{k+1} H_{k+1}\right) P^-_{k+1}$$
This computation involves taking the inverse of an $m \times m$ matrix, where $m = \dim z$.

Usually $\dim z < \dim x$ (the measurement vector is smaller than the number of states), so this formulation is desirable.

Kalman Filter Derivation: Alternate Gain Expression

Another formulation exists which involves reversing this, i.e. computing the updated covariance $P_{k+1}$ first, and finding $K_{k+1}$ as a function of $P_{k+1}$.

This form is
$$P_{k+1} = \left[\left(P^-_{k+1}\right)^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}\right]^{-1}$$
$$K_{k+1} = P_{k+1} H_{k+1}^T R_{k+1}^{-1}$$
Note that this involves computing the inverse of $n$-dimensional matrices, where $n = \dim x$, the state dimension (in addition to $R_{k+1}^{-1}$).

There are situations where this would be computationally preferable: if $\dim x < \dim z$ and $R_{k+1}$ is of a simple form ($I$, diagonal, etc.).

Kalman Filter Derivation: Alternate Gain Expression

Derivation of the alternate formulation follows directly from the Matrix Inversion Lemma (MIL) and the Gain Expression (GE) identity given earlier.
$$P_{k+1} = \left(I - K_{k+1} H_{k+1}\right) P^-_{k+1} = P^-_{k+1} - P^-_{k+1} H_{k+1}^T\left(H_{k+1} P^-_{k+1} H_{k+1}^T + R_{k+1}\right)^{-1} H_{k+1} P^-_{k+1}$$
$$= \left[\left(P^-_{k+1}\right)^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}\right]^{-1} \quad \text{(MIL)}$$
$$K_{k+1} = P^-_{k+1} H_{k+1}^T\left(H_{k+1} P^-_{k+1} H_{k+1}^T + R_{k+1}\right)^{-1} = \left[\left(P^-_{k+1}\right)^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}\right]^{-1} H_{k+1}^T R_{k+1}^{-1} \quad \text{(GE)}$$
$$= P_{k+1} H_{k+1}^T R_{k+1}^{-1}$$
