Singular Value Decomposition


Page 1: Singular Value Decomposition

Singular Value Decomposition

\[ A_{m\times n} = U_{m\times n}\,\Sigma_{n\times n}\,V_{n\times n}^{\top} \qquad (m \ge n) \]

$U^{\top} U = I$, $V^{\top} V = I$ (the columns of $U$ and $V$ are orthonormal)

\[ \Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n), \qquad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0 \]

\[ \|A X\| = \|U \Sigma V^{\top} X\| = \|\Sigma V^{\top} X\|, \qquad \|X\| = \|V^{\top} X\| \]

\[ A = \sigma_1 U_1 V_1^{\top} + \sigma_2 U_2 V_2^{\top} + \cdots + \sigma_n U_n V_n^{\top} \]
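A numpy sketch checking these properties (numpy is an assumption; the slides name no software):

    import numpy as np

    m, n = 6, 4
    A = np.random.randn(m, n)

    # Thin SVD: U is m x n, s holds the n singular values, Vt is V^T (n x n)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    assert np.allclose(U.T @ U, np.eye(n))            # U^T U = I
    assert np.allclose(Vt @ Vt.T, np.eye(n))          # V^T V = I
    assert np.all(np.diff(s) <= 0) and s[-1] >= 0     # sigma_1 >= ... >= sigma_n >= 0
    assert np.allclose(A, U @ np.diag(s) @ Vt)        # A = U Sigma V^T

    # Rank-1 expansion: A = sum_i sigma_i * U_i V_i^T
    assert np.allclose(A, sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(n)))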

Page 2: Singular Value Decomposition

Singular Value Decomposition

• Homogeneous least-squares: minimize $\|A X\|$ subject to $\|X\| = 1$; the solution is $X = V_n$, the right singular vector of the smallest singular value.

• Span and null-space (example with $\sigma_3 = \sigma_4 = 0$): $S_L = \operatorname{span}(U_1, U_2)$, $N_L = \operatorname{span}(U_3, U_4)$; $S_R = \operatorname{span}(V_1, V_2)$, $N_R = \operatorname{span}(V_3, V_4)$.

• Closest rank-$r$ approximation: with $A = U \Sigma V^{\top}$, set $\tilde{\Sigma} = \operatorname{diag}(\sigma_1, \ldots, \sigma_r, 0, \ldots, 0)$; then $\tilde{A} = U \tilde{\Sigma} V^{\top}$ is the closest matrix of rank $r$.

• Pseudo-inverse: with $A = U \Sigma V^{\top}$, $A^{+} = V \Sigma^{+} U^{\top}$ where $\Sigma^{+} = \operatorname{diag}(\sigma_1^{-1}, \ldots, \sigma_r^{-1}, 0, \ldots, 0)$.
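A numpy sketch of the last two items, assuming the same conventions as above:

    import numpy as np

    A = np.random.randn(6, 4)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # Closest rank-r approximation: zero out all but the r largest singular values
    r = 2
    s_r = np.where(np.arange(len(s)) < r, s, 0.0)
    A_r = U @ np.diag(s_r) @ Vt

    # Pseudo-inverse: invert only the (numerically) nonzero singular values
    tol = 1e-12
    s_plus = np.where(s > tol, 1.0 / np.maximum(s, tol), 0.0)
    A_plus = Vt.T @ np.diag(s_plus) @ U.T
    assert np.allclose(A_plus, np.linalg.pinv(A))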

Page 3: Singular Value Decomposition

Singular Value Decomposition

Homogeneous least-squares: minimize $\|A X\|$ subject to $\|X\| = 1$.

With $A = U \Sigma V^{\top}$: $\|A X\| = \|U \Sigma V^{\top} X\| = \|\Sigma V^{\top} X\|$ and $\|X\| = \|V^{\top} X\|$. Substituting $Y = V^{\top} X$, we minimize $\|\Sigma Y\|$ subject to $\|Y\| = 1$, which (since the $\sigma_i$ are sorted in decreasing order) is solved by $Y = (0, \ldots, 0, 1)^{\top}$.

Solution: $X = V_n$, the last column of $V$.
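In code the whole recipe is one SVD; a numpy sketch (the 8×9 shape anticipates the DLT matrix of the later slides):

    import numpy as np

    A = np.random.randn(8, 9)        # e.g. a DLT design matrix (see below)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # V_n: last column of V, with ||X|| = 1

    # No other unit vector gives a smaller ||A X||
    for _ in range(100):
        v = np.random.randn(9)
        v /= np.linalg.norm(v)
        assert np.linalg.norm(A @ X) <= np.linalg.norm(A @ v) + 1e-12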

Page 4: Singular Value Decomposition

Parameter estimation

• 2D homography: given a set of $(x_i, x_i')$, compute $H$ ($x_i' = H x_i$)

• 3D to 2D camera projection: given a set of $(X_i, x_i)$, compute $P$ ($x_i = P X_i$)

• Fundamental matrix: given a set of $(x_i, x_i')$, compute $F$ ($x_i'^{\top} F x_i = 0$)

• Trifocal tensor: given a set of $(x_i, x_i', x_i'')$, compute $T$

Page 5: Singular Value Decomposition

Number of measurements required

• At least as many independent equations as degrees of freedom required

• Example:

$x' = H x$:

\[ \lambda \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]

2 independent equations per point, 8 degrees of freedom: $4 \times 2 \ge 8$, so at least 4 points are needed.

Page 6: Singular Value Decomposition

Approximate solutions

• Minimal solution: 4 points yield an exact solution for H

• More points:
  • No exact solution, because measurements are inexact ("noise")
  • Search for the "best" solution according to some cost function
  • Algebraic or geometric/statistical cost

Page 7: Singular Value Decomposition

Gold Standard algorithm

• A cost function that is optimal under some assumptions

• The computational algorithm that minimizes it is called the "Gold Standard" algorithm

• Other algorithms can then be compared to it

Page 8: Singular Value Decomposition

Direct Linear Transformation (DLT)

$x_i' = H x_i$ is an equality of homogeneous vectors (equal only up to scale), so $x_i' \times H x_i = 0$.

Writing $H$ in terms of its rows $h^{1\top}, h^{2\top}, h^{3\top}$:

\[ H x_i = \begin{pmatrix} h^{1\top} x_i \\ h^{2\top} x_i \\ h^{3\top} x_i \end{pmatrix} \]

and, with $x_i' = (x_i', y_i', w_i')^{\top}$,

\[ x_i' \times H x_i = \begin{pmatrix} y_i'\, h^{3\top} x_i - w_i'\, h^{2\top} x_i \\ w_i'\, h^{1\top} x_i - x_i'\, h^{3\top} x_i \\ x_i'\, h^{2\top} x_i - y_i'\, h^{1\top} x_i \end{pmatrix} = 0 \]

Rearranged as a linear system in the entries of $H$:

\[ \begin{bmatrix} 0^{\top} & -w_i' x_i^{\top} & y_i' x_i^{\top} \\ w_i' x_i^{\top} & 0^{\top} & -x_i' x_i^{\top} \\ -y_i' x_i^{\top} & x_i' x_i^{\top} & 0^{\top} \end{bmatrix} \begin{pmatrix} h^1 \\ h^2 \\ h^3 \end{pmatrix} = 0, \qquad \text{i.e.} \quad A_i h = 0 \]
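The matrix $A_i$ is a direct transcription of the above; a numpy sketch (the helper name correspondence_matrix is hypothetical and is reused in the DLT sketch on Page 13), with a sanity check that exact correspondences give $A_i h = 0$:

    import numpy as np

    def correspondence_matrix(x, xp):
        """3x9 matrix A_i for homogeneous x_i = (x, y, w) and x_i' = (x', y', w').
        Only two of the three rows are linearly independent."""
        xs, ys, ws = xp
        z = np.zeros(3)
        return np.array([
            np.concatenate([ z,      -ws * x,  ys * x]),
            np.concatenate([ ws * x,  z,      -xs * x]),
            np.concatenate([-ys * x,  xs * x,  z     ]),
        ])

    # Sanity check: if x' = Hx exactly, then A_i h = x' x (Hx) = 0
    H = np.random.randn(3, 3)
    x = np.array([1.0, 2.0, 1.0])
    A_i = correspondence_matrix(x, H @ x)
    assert np.allclose(A_i @ H.ravel(), 0.0)   # h = rows of H stacked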

Page 9: Singular Value Decomposition

Direct Linear Transformation (DLT)

• The equations are linear in $h$:

\[ A_i h = \begin{bmatrix} 0^{\top} & -w_i' x_i^{\top} & y_i' x_i^{\top} \\ w_i' x_i^{\top} & 0^{\top} & -x_i' x_i^{\top} \\ -y_i' x_i^{\top} & x_i' x_i^{\top} & 0^{\top} \end{bmatrix} \begin{pmatrix} h^1 \\ h^2 \\ h^3 \end{pmatrix} = 0 \]

• Only 2 of the 3 rows are linearly independent (indeed, 2 equations per point), since

\[ x_i'\, A_i^{(1)} + y_i'\, A_i^{(2)} + w_i'\, A_i^{(3)} = 0 \]

where $A_i^{(j)}$ denotes the $j$-th row of $A_i$.

Page 10: Singular Value Decomposition

• This holds for any homogeneous representation of $x_i'$, e.g. $(x_i', y_i', 1)$; only drop the third row if $w_i' \ne 0$:

\[ \begin{bmatrix} 0^{\top} & -w_i' x_i^{\top} & y_i' x_i^{\top} \\ w_i' x_i^{\top} & 0^{\top} & -x_i' x_i^{\top} \end{bmatrix} \begin{pmatrix} h^1 \\ h^2 \\ h^3 \end{pmatrix} = 0 \]

Page 11: Singular Value Decomposition

Direct Linear Transformation (DLT)

• Solving for H: stack the equations of 4 correspondences,

\[ A h = \begin{bmatrix} A_1 \\ A_2 \\ A_3 \\ A_4 \end{bmatrix} h = 0 \]

The size of $A$ is $8 \times 9$ (two rows per point) or $12 \times 9$ (three rows per point), but its rank is 8.

The trivial solution $h = 0_9^{\top}$ is not interesting. The 1-D null-space yields the solution of interest; pick, for example, the one with $\|h\| = 1$.

Page 12: Singular Value Decomposition

Direct Linear Transformation (DLT)

• Over-determined solution: stack all $n$ correspondences,

\[ A h = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_n \end{bmatrix} h = 0 \]

No exact solution exists, because the measurements are inexact ("noise"). Find an approximate solution instead:

- an additional constraint is needed to avoid $h = 0$, e.g. $\|h\| = 1$
- $A h = 0$ is not possible, so minimize $\|A h\|$

Page 13: Singular Value Decomposition

DLT algorithm

Objective

Given n≥4 2D to 2D point correspondences {xi↔xi'}, determine the 2D homography matrix H such that xi'=Hxi

Algorithm

(i) For each correspondence xi ↔ xi' compute Ai. Usually only the first two rows are needed.

(ii) Assemble the n 2×9 matrices Ai into a single 2n×9 matrix A

(iii) Obtain the SVD of A. The solution for h is the last column of V (the singular vector of the smallest singular value)

(iv) Determine H from h
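A compact numpy sketch of steps (i)-(iv), using only the two independent rows per point and taking $w_i = w_i' = 1$ (the name dlt_homography is hypothetical; it is reused in later sketches):

    import numpy as np

    def dlt_homography(pts, pts_p):
        """Basic DLT, no normalization: pts and pts_p are n x 2 arrays, n >= 4."""
        rows = []
        for (x, y), (xp, yp) in zip(pts, pts_p):
            X = np.array([x, y, 1.0])
            rows.append(np.concatenate([np.zeros(3), -X, yp * X]))   # row 1 of A_i
            rows.append(np.concatenate([X, np.zeros(3), -xp * X]))   # row 2 of A_i
        A = np.array(rows)                  # 2n x 9
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 3)         # last column of V, reshaped to H

    # Exact correspondences are recovered exactly (up to scale)
    H_true = np.array([[1.0, 0.2, 5.0], [-0.1, 1.1, -3.0], [1e-3, 2e-3, 1.0]])
    pts = np.random.rand(10, 2) * 100
    q = np.c_[pts, np.ones(10)] @ H_true.T
    pts_p = q[:, :2] / q[:, 2:]
    H = dlt_homography(pts, pts_p)
    assert np.allclose(H / H[2, 2], H_true, atol=1e-5)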

Page 14: Singular Value Decomposition

Inhomogeneous solution

Since h can only be computed up to scale, pick $h_j = 1$, e.g. $h_9 = 1$, and solve for the 8-vector $\tilde{h}$. Each correspondence then gives two inhomogeneous equations:

\[ \begin{bmatrix} 0 & 0 & 0 & -w_i' x_i & -w_i' y_i & -w_i' w_i & y_i' x_i & y_i' y_i \\ w_i' x_i & w_i' y_i & w_i' w_i & 0 & 0 & 0 & -x_i' x_i & -x_i' y_i \end{bmatrix} \tilde{h} = \begin{pmatrix} -y_i' w_i \\ x_i' w_i \end{pmatrix} \]

Solve using Gaussian elimination (4 points) or linear least-squares (more than 4 points).

However, if $h_9 = 0$ this approach fails, and it gives poor results if $h_9$ is close to zero. Therefore it is not recommended. Note that $h_9 = H_{33} = 0$ exactly when the origin is mapped to infinity:

\[ (0\ \ 0\ \ 1)\, H\, (0, 0, 1)^{\top} = H_{33} \]
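For illustration only, since the slide advises against it, a sketch with $w_i = w_i' = 1$:

    import numpy as np

    def dlt_inhomogeneous(pts, pts_p):
        """Fix h9 = 1 and solve the 2n x 8 system in least squares.
        Fails if H33 = 0 and is ill-conditioned when H33 is close to zero."""
        M, b = [], []
        for (x, y), (xp, yp) in zip(pts, pts_p):
            M.append([0, 0, 0, -x, -y, -1, x * yp, y * yp]); b.append(-yp)
            M.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        h8, *_ = np.linalg.lstsq(np.array(M, float), np.array(b, float), rcond=None)
        return np.append(h8, 1.0).reshape(3, 3)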

Page 15: Singular Value Decomposition

Degenerate configurations

[Figure: three four-point configurations, with $x_1, x_2, x_3$ collinear on a line $l$ in cases A and B; does an H (resp. H') exist?]

Constraints: $x_i' \times H x_i = 0$, $i = 1, 2, 3, 4$

Define $H^* = x_4'\, l^{\top}$. Then $H^* x_4 = x_4' (l^{\top} x_4) = k\, x_4'$, and $H^* x_i = x_4' (l^{\top} x_i) = 0$ for $i = 1, 2, 3$, since $x_1, x_2, x_3$ lie on $l$.

$H^*$ is a rank-1 matrix and thus not a homography.

If $H^*$ is the unique solution, then no homography maps $x_i \to x_i'$ (case B). If a further solution $H$ exists, then $\alpha H^* + \beta H$ is also a solution (case A): a 2-D null-space instead of a 1-D null-space.

Page 16: Singular Value Decomposition

Solutions from lines, etc.

2D homographies from 2D lines: a line correspondence $l_i \leftrightarrow l_i'$ gives $l_i = H^{\top} l_i'$, which again yields linear equations $A h = 0$. Minimum of 4 lines.

3D homographies (15 dof): minimum of 5 points or 5 planes.

2D affinities (6 dof): minimum of 3 points or lines.

A conic provides 5 constraints.

Mixed configurations?

Page 17: Singular Value Decomposition

Cost functions

• Algebraic distance
• Geometric distance
• Reprojection error
• Comparison
• Geometric interpretation
• Sampson error

Page 18: Singular Value Decomposition

Algebraic distance

DLT minimizes $\|A h\|$.

$e = A h$ is the residual vector; $e_i$, the partial vector for each correspondence $(x_i \leftrightarrow x_i')$, is the algebraic error vector:

\[ d_{\mathrm{alg}}(x_i', H x_i)^2 = \|e_i\|^2 = \left\| \begin{bmatrix} 0^{\top} & -w_i' x_i^{\top} & y_i' x_i^{\top} \\ w_i' x_i^{\top} & 0^{\top} & -x_i' x_i^{\top} \end{bmatrix} h \right\|^2 \]

Algebraic distance between two points: $d_{\mathrm{alg}}(x_1, x_2)^2 = a_1^2 + a_2^2$, where $a = (a_1, a_2, a_3)^{\top} = x_1 \times x_2$.

Summed over all correspondences:

\[ \sum_i d_{\mathrm{alg}}(x_i', H x_i)^2 = \sum_i \|e_i\|^2 = \|A h\|^2 = \|e\|^2 \]

Not geometrically/statistically meaningful, but given good normalization it works fine and is very fast (use for initialization).

Page 19: Singular Value Decomposition

Geometric distance

$x$: measured coordinates; $\hat{x}$: estimated coordinates; $\bar{x}$: true coordinates

Error in one image (e.g. a calibration pattern, where $\bar{x}_i$ is known):

\[ \hat{H} = \operatorname*{argmin}_H \sum_i d(x_i', H \bar{x}_i)^2 \]

Symmetric transfer error:

\[ \hat{H} = \operatorname*{argmin}_H \sum_i d(x_i, H^{-1} x_i')^2 + d(x_i', H x_i)^2 \]

$d(\cdot,\cdot)$ is the Euclidean distance (in the image).

Reprojection error:

\[ (\hat{H}, \hat{x}_i, \hat{x}_i') = \operatorname*{argmin}_{H, \hat{x}_i, \hat{x}_i'} \sum_i d(x_i, \hat{x}_i)^2 + d(x_i', \hat{x}_i')^2 \qquad \text{subject to } \hat{x}_i' = \hat{H} \hat{x}_i \]
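The first two cost functions translate directly into code; a numpy sketch with inhomogeneous n×2 point arrays (the helper names are hypothetical):

    import numpy as np

    def transfer(H, pts):
        """Map inhomogeneous n x 2 points through H."""
        q = np.c_[pts, np.ones(len(pts))] @ H.T
        return q[:, :2] / q[:, 2:]

    def error_one_image(H, pts_true, pts_p):
        # sum_i d(x_i', H xbar_i)^2
        return np.sum((pts_p - transfer(H, pts_true)) ** 2)

    def symmetric_transfer_error(H, pts, pts_p):
        # sum_i d(x_i, H^-1 x_i')^2 + d(x_i', H x_i)^2
        return (np.sum((pts - transfer(np.linalg.inv(H), pts_p)) ** 2) +
                np.sum((pts_p - transfer(H, pts)) ** 2))

    # The reprojection error additionally optimizes over the estimated points
    # {x_hat_i}, so it is a minimization problem rather than a closed formula.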

Page 20: Singular Value Decomposition

Reprojection error

221- Hx,xxHx, dd

22 x,xxx, dd

Page 21: Singular Value Decomposition

Comparison of geometric and algebraic distances

Error in one image, with $x_i' = (x_i', y_i', w_i')^{\top}$ measured and $\hat{x}_i' = (\hat{x}_i', \hat{y}_i', \hat{w}_i')^{\top} = H \bar{x}_i$ estimated:

\[ A_i h = e_i = \begin{pmatrix} y_i' \hat{w}_i' - w_i' \hat{y}_i' \\ w_i' \hat{x}_i' - x_i' \hat{w}_i' \end{pmatrix} \]

so

\[ d_{\mathrm{alg}}(x_i', \hat{x}_i')^2 = (y_i' \hat{w}_i' - w_i' \hat{y}_i')^2 + (w_i' \hat{x}_i' - x_i' \hat{w}_i')^2 \]

while the geometric distance is

\[ d(x_i', \hat{x}_i') = \left( (x_i'/w_i' - \hat{x}_i'/\hat{w}_i')^2 + (y_i'/w_i' - \hat{y}_i'/\hat{w}_i')^2 \right)^{1/2} = d_{\mathrm{alg}}(x_i', \hat{x}_i') / (w_i' \hat{w}_i') \]

Typically $w_i' = 1$, but $\hat{w}_i' = h^{3\top} \bar{x}_i \ne 1$, except for affinities.

For affinities DLT can therefore minimize geometric distance.

Possibility for an iterative algorithm.

Page 22: Singular Value Decomposition

Geometric interpretation of reprojection error

Estimating the homography is equivalent to fitting a surface to the points $X = (x, y, x', y')^{\top}$ in $\mathbb{R}^4$: the constraint $x_i' \times H x_i = 0$ represents 2 quadrics in $\mathbb{R}^4$ (quadratic in $X$).

\[ \|X - \hat{X}\|^2 = (x - \hat{x})^2 + (y - \hat{y})^2 + (x' - \hat{x}')^2 + (y' - \hat{y}')^2 = d(x, \hat{x})^2 + d(x', \hat{x}')^2 \]

\[ d(x_i, \hat{x}_i)^2 + d(x_i', \hat{x}_i')^2 = d_{\perp}(X_i, \nu_H)^2 \]

where $\nu_H$ is the variety of allowable measurements.

Analogous to conic fitting: $d_{\mathrm{alg}}(x, C)^2 = x^{\top} C x$ versus the geometric distance $d_{\perp}(x, C)^2$.

Page 23: Singular Value Decomposition

Sampson error

The vector $\hat{X}$ that minimizes the geometric error $\|X - \hat{X}\|^2$ is the closest point on the variety $\nu_H$ to the measurement $X$.

Sampson error: a first-order approximation of $\nu_H$, in between the algebraic and geometric errors. Write $A h = 0$ as $\mathcal{C}_H(X) = 0$ and expand to first order:

\[ \mathcal{C}_H(X + \delta_X) = \mathcal{C}_H(X) + \frac{\partial \mathcal{C}_H}{\partial X} \delta_X \]

Requiring $\mathcal{C}_H(\hat{X}) = 0$ for $\hat{X} = X + \delta_X$ gives

\[ J \delta_X = -e, \qquad \text{with } e = \mathcal{C}_H(X), \quad J = \frac{\partial \mathcal{C}_H}{\partial X} \]

Find the vector $\delta_X$ that minimizes $\|\delta_X\|$ subject to $J \delta_X = -e$.

Page 24: Singular Value Decomposition

Find the vector $\delta_X$ that minimizes $\|\delta_X\|$ subject to $J \delta_X = -e$.

Use Lagrange multipliers: minimize $\|\delta_X\|^2 - 2 \lambda^{\top} (J \delta_X + e)$.

Setting the derivatives to zero:

\[ \frac{\partial}{\partial \delta_X}: \; 2 \delta_X^{\top} - 2 \lambda^{\top} J = 0 \;\Rightarrow\; \delta_X = J^{\top} \lambda \]

\[ \frac{\partial}{\partial \lambda}: \; J \delta_X + e = 0 \;\Rightarrow\; J J^{\top} \lambda = -e \;\Rightarrow\; \lambda = -(J J^{\top})^{-1} e \]

so

\[ \delta_X = -J^{\top} (J J^{\top})^{-1} e, \qquad \|\delta_X\|^2 = \delta_X^{\top} \delta_X = e^{\top} (J J^{\top})^{-1} e \]

Page 25: Singular Value Decomposition

Sampson error

(This slide repeats the setup and derivation from Pages 23-24.) The minimizer $\delta_X = -J^{\top} (J J^{\top})^{-1} e$ gives

\[ \|\delta_X\|^2 = e^{\top} (J J^{\top})^{-1} e \qquad \text{(Sampson error)} \]

Page 26: Singular Value Decomposition

Sampson approximation

A few points:

(i) For a 2D homography, $X = (x, y, x', y')$

(ii) $e = \mathcal{C}_H(X) = A_i h$ is the algebraic error vector

(iii) $J = \partial \mathcal{C}_H / \partial X$ is a 2×4 matrix, e.g. $J_{11} = \partial \mathcal{C}_1 / \partial x = -w_i' h_{21} + y_i' h_{31}$

(iv) Similar to the algebraic error $e^{\top} e$; in fact the same up to the Mahalanobis-style weighting $(J J^{\top})^{-1}$

(v) The Sampson error is independent of linear reparametrization (it cancels out between $e$ and $J$)

(vi) Must be summed over all points: $\sum_i e_i^{\top} (J_i J_i^{\top})^{-1} e_i$

(vii) Close to the geometric error, but with much fewer unknowns
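Points (i)-(iii) combine into a per-correspondence function; a numpy sketch with $w = w' = 1$ (the helper name sampson_error is hypothetical):

    import numpy as np

    def sampson_error(H, pt, pt_p):
        """First-order (Sampson) approximation of the geometric error for one
        correspondence, with pt = (x, y), pt_p = (x', y') and w = w' = 1."""
        x, y = pt
        xp, yp = pt_p
        X = np.array([x, y, 1.0])
        h1, h2, h3 = H                              # rows of H
        # e = C_H(X): the two independent rows of A_i h
        e = np.array([-h2 @ X + yp * (h3 @ X),
                       h1 @ X - xp * (h3 @ X)])
        # J = dC_H/dX, a 2 x 4 matrix in the variables (x, y, x', y')
        J = np.array([
            [-h2[0] + yp * h3[0], -h2[1] + yp * h3[1],  0.0,       h3 @ X],
            [ h1[0] - xp * h3[0],  h1[1] - xp * h3[1], -(h3 @ X),  0.0  ],
        ])
        return e @ np.linalg.solve(J @ J.T, e)       # e^T (J J^T)^-1 e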

Page 27: Singular Value Decomposition

Statistical cost function and Maximum Likelihood Estimation

• The optimal cost function is related to the noise model
• Assume zero-mean isotropic Gaussian noise (assume outliers removed):

\[ \Pr(x) = \frac{1}{2\pi\sigma^2}\, e^{-d(x, \bar{x})^2 / (2\sigma^2)} \]

Error in one image:

\[ \Pr(\{x_i'\} \mid H) = \prod_i \frac{1}{2\pi\sigma^2}\, e^{-d(x_i', H \bar{x}_i)^2 / (2\sigma^2)} \]

\[ \log \Pr(\{x_i'\} \mid H) = -\frac{1}{2\sigma^2} \sum_i d(x_i', H \bar{x}_i)^2 + \text{constant} \]

The Maximum Likelihood Estimate minimizes $\sum_i d(x_i', H \bar{x}_i)^2$.

Page 28: Singular Value Decomposition

Statistical cost function and Maximum Likelihood Estimation

• The optimal cost function is related to the noise model
• Assume zero-mean isotropic Gaussian noise (assume outliers removed):

\[ \Pr(x) = \frac{1}{2\pi\sigma^2}\, e^{-d(x, \bar{x})^2 / (2\sigma^2)} \]

Error in both images:

\[ \Pr(\{x_i, x_i'\} \mid H) = \prod_i \frac{1}{(2\pi\sigma^2)^2}\, e^{-\left( d(x_i, \bar{x}_i)^2 + d(x_i', H \bar{x}_i)^2 \right) / (2\sigma^2)} \]

The Maximum Likelihood Estimate minimizes $\sum_i d(x_i, \hat{x}_i)^2 + d(x_i', \hat{x}_i')^2$.

Page 29: Singular Value Decomposition

Mahalanobis distance

• General Gaussian case: measurement $X$ with covariance matrix $\Sigma$:

\[ \|X - \bar{X}\|_{\Sigma}^2 = (X - \bar{X})^{\top} \Sigma^{-1} (X - \bar{X}) \]

Error in two images (independent):

\[ \|X - \bar{X}\|_{\Sigma}^2 + \|X' - \bar{X}'\|_{\Sigma'}^2 \]

Varying covariances per point:

\[ \sum_i \|x_i - \bar{x}_i\|_{\Sigma_i}^2 + \|x_i' - \bar{x}_i'\|_{\Sigma_i'}^2 \]

Page 30: Singular Value Decomposition

Invariance to transforms?

Change coordinates: $\tilde{x} = T x$, $\tilde{x}' = T' x'$. If $x' = H x$, then

\[ \tilde{x}' = T' H x = T' H T^{-1} \tilde{x}, \qquad \text{i.e.} \quad \tilde{H} = T' H T^{-1} \]

Will the result change? For which algorithms? For which transformations?

Page 31: Singular Value Decomposition

Non-invariance of DLT

Given $x_i \leftrightarrow x_i'$ and $H$ computed by DLT, and the transformed points $\tilde{x}_i = T x_i$, $\tilde{x}_i' = T' x_i'$:

Does the DLT algorithm applied to $\tilde{x}_i \leftrightarrow \tilde{x}_i'$ yield $\tilde{H} = T' H T^{-1}$?

Page 32: Singular Value Decomposition

Effect of a change of coordinates on the algebraic error:

\[ \tilde{e}_i = \tilde{x}_i' \times \tilde{H} \tilde{x}_i = T' x_i' \times T' H T^{-1} T x_i = T'^{*} (x_i' \times H x_i) = T'^{*} e_i \]

where $T'^{*}$ is the cofactor matrix of $T'$. For similarities

\[ T' = \begin{bmatrix} s R & t \\ 0^{\top} & 1 \end{bmatrix}, \qquad T'^{*} = s \begin{bmatrix} R & 0 \\ -t^{\top} R & s \end{bmatrix} \]

so $(\tilde{e}_{i1}, \tilde{e}_{i2})^{\top} = s R\, (e_{i1}, e_{i2})^{\top}$, and therefore

\[ d_{\mathrm{alg}}(\tilde{x}_i', \tilde{H} \tilde{x}_i) = s\, d_{\mathrm{alg}}(x_i', H x_i) \]

Page 33: Singular Value Decomposition

Non-invariance of DLT

Given $x_i \leftrightarrow x_i'$ and $H$ computed by DLT, and $\tilde{x}_i = T x_i$, $\tilde{x}_i' = T' x_i'$: does the DLT algorithm applied to $\tilde{x}_i \leftrightarrow \tilde{x}_i'$ yield $\tilde{H} = T' H T^{-1}$?

In general, no:

minimize $\sum_i d_{\mathrm{alg}}(\tilde{x}_i', \tilde{H} \tilde{x}_i)^2$ subject to $\|\tilde{H}\| = 1$

⇔ minimize $\sum_i d_{\mathrm{alg}}(x_i', H x_i)^2$ subject to $\|T' H T^{-1}\| = 1$

which is not the same problem as

minimize $\sum_i d_{\mathrm{alg}}(x_i', H x_i)^2$ subject to $\|H\| = 1$

Page 34: Singular Value Decomposition

Invariance of geometric error

Given $x_i \leftrightarrow x_i'$ and $H$, and $\tilde{x}_i = T x_i$, $\tilde{x}_i' = T' x_i'$, $\tilde{H} = T' H T^{-1}$. Assume $T'$ is a similarity transformation. Then

\[ d(\tilde{x}_i', \tilde{H} \tilde{x}_i) = d(T' x_i', T' H T^{-1} T x_i) = d(T' x_i', T' H x_i) = s\, d(x_i', H x_i) \]

so the geometric error is scaled uniformly by $s$ and its minimizer is unchanged.

Page 35: Singular Value Decomposition

Normalizing transformations

• Since DLT is not invariant, what is a good choice of coordinates? E.g.:
  • Translate the centroid to the origin
  • Scale so that the average distance to the origin is $\sqrt{2}$
  • Do this independently for both images

Or, based on the image width $w$ and height $h$:

\[ T_{\mathrm{norm}} = \begin{bmatrix} w + h & 0 & w/2 \\ 0 & w + h & h/2 \\ 0 & 0 & 1 \end{bmatrix} \]

Page 36: Singular Value Decomposition

Importance of normalization

Writing out the two equations per point with $w_i = w_i' = 1$:

\[ \begin{bmatrix} 0 & 0 & 0 & -x_i & -y_i & -1 & x_i y_i' & y_i y_i' & y_i' \\ x_i & y_i & 1 & 0 & 0 & 0 & -x_i x_i' & -y_i x_i' & -x_i' \end{bmatrix} \begin{pmatrix} h^1 \\ h^2 \\ h^3 \end{pmatrix} = 0 \]

For typical image coordinates the column magnitudes are roughly $10^2$, $10^2$, $1$, $10^2$, $10^2$, $1$, $10^4$, $10^4$, $10^2$: orders of magnitude difference!

Page 37: Singular Value Decomposition

Normalized DLT algorithm

Objective

Given n≥4 2D to 2D point correspondences {xi↔xi'}, determine the 2D homography matrix H such that xi'=Hxi

Algorithm

(i) Normalize the points: $\tilde{x}_i = T_{\mathrm{norm}}\, x_i$, $\tilde{x}_i' = T'_{\mathrm{norm}}\, x_i'$

(ii) Apply the DLT algorithm to $\tilde{x}_i \leftrightarrow \tilde{x}_i'$, yielding $\tilde{H}$

(iii) Denormalize the solution: $H = T'^{-1}_{\mathrm{norm}} \tilde{H}\, T_{\mathrm{norm}}$
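A sketch of all three steps, reusing the hypothetical dlt_homography from the Page 13 sketch; the $\sqrt{2}$ scaling follows the normalization described on Page 35:

    import numpy as np

    def normalizing_transform(pts):
        """Similarity mapping the centroid to the origin and the average
        distance to the origin to sqrt(2)."""
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        return np.array([[s, 0, -s * c[0]],
                         [0, s, -s * c[1]],
                         [0, 0, 1.0]])

    def normalized_dlt(pts, pts_p):
        T, Tp = normalizing_transform(pts), normalizing_transform(pts_p)
        xn  = (np.c_[pts,   np.ones(len(pts))]   @ T.T)[:, :2]   # w stays 1
        xpn = (np.c_[pts_p, np.ones(len(pts_p))] @ Tp.T)[:, :2]
        H_tilde = dlt_homography(xn, xpn)
        return np.linalg.inv(Tp) @ H_tilde @ T                   # denormalize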

Page 38: Singular Value Decomposition

Iterative minimization methods

Required to minimize the geometric error:
(i) Often slower than DLT
(ii) Require initialization
(iii) No guaranteed convergence, local minima
(iv) Stopping criterion required

Therefore, careful implementation is required:
(i) Cost function
(ii) Parameterization (minimal or not)
(iii) Cost function (parameters)
(iv) Initialization
(v) Iterations

Page 39: Singular Value Decomposition

Parameterization

Parameters should cover the complete space and allow efficient estimation of the cost.

• Minimal or over-parameterized? E.g. 8 or 9 parameters (minimal is often more complex, as is its cost surface)

(good algorithms can deal with over-parameterization)

(sometimes a local parameterization is also used)

• Parametrization can also be used to restrict the transformation to a particular class, e.g. affine

Page 40: Singular Value Decomposition

Function specifications

(i) Measurement vector $X \in \mathbb{R}^N$ with covariance $\Sigma$

(ii) Set of parameters represented by a vector $P \in \mathbb{R}^M$

(iii) Mapping $f: \mathbb{R}^M \to \mathbb{R}^N$. The range of the mapping is a surface $S$ representing the allowable measurements

(iv) Cost function: the squared Mahalanobis distance

\[ \|X - f(P)\|_{\Sigma}^2 = (X - f(P))^{\top} \Sigma^{-1} (X - f(P)) \]

The goal is to achieve $f(P) = X$, or to get as close as possible in terms of Mahalanobis distance.

Page 41: Singular Value Decomposition

Error in one image: minimize $\sum_i d(x_i', H \bar{x}_i)^2$ with

\[ f: h \mapsto (H x_1, H x_2, \ldots, H x_n), \qquad X = f(h) \]

Symmetric transfer error: minimize $\sum_i d(x_i, H^{-1} x_i')^2 + d(x_i', H x_i)^2$ with

\[ f: h \mapsto (H^{-1} x_1', \ldots, H^{-1} x_n', H x_1, \ldots, H x_n) \]

Reprojection error: minimize $\sum_i d(x_i, \hat{x}_i)^2 + d(x_i', \hat{x}_i')^2$ with

\[ f: (h, \hat{x}_1, \ldots, \hat{x}_n) \mapsto (\hat{x}_1, \hat{x}_1', \ldots, \hat{x}_n, \hat{x}_n') \]

Page 42: Singular Value Decomposition

Initialization

• Typically, use the linear solution
• If there are outliers, use a robust algorithm
• Alternatively, sample the parameter space

Page 43: Singular Value Decomposition

Iteration methods

Many algorithms exist:
• Newton's method
• Levenberg-Marquardt
• Powell's method
• Simplex method

Page 44: Singular Value Decomposition

Gold Standard algorithm

Objective

Given n≥4 2D to 2D point correspondences {xi↔xi'}, determine the Maximum Likelihood Estimate of H

(this also implies computing the optimal $\hat{x}_i' = \hat{H} \hat{x}_i$)

Algorithm

(i) Initialization: compute an initial estimate using the normalized DLT or RANSAC

(ii) Geometric minimization, either of the Sampson error:

● minimize the Sampson error

● minimize using Levenberg-Marquardt over the 9 entries of h

or of the Gold Standard error:

● compute an initial estimate for the optimal $\{\hat{x}_i\}$

● minimize the cost $\sum_i d(x_i, \hat{x}_i)^2 + d(x_i', \hat{x}_i')^2$ over $\{H, \hat{x}_1, \ldots, \hat{x}_n\}$

● if there are many points, use a sparse method
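A sketch of the Sampson branch, assuming scipy is available (the slides name no software) and reusing the hypothetical sampson_error from the Page 26 sketch:

    import numpy as np
    from scipy.optimize import least_squares

    def refine_sampson(H0, pts, pts_p):
        """Levenberg-Marquardt over the 9 entries of h; method='lm' needs
        at least as many residuals as parameters, i.e. n >= 9 points."""
        def residuals(h):
            H = h.reshape(3, 3)
            return np.array([np.sqrt(sampson_error(H, p, pp))
                             for p, pp in zip(pts, pts_p)])
        h = least_squares(residuals, H0.ravel(), method='lm').x
        return h.reshape(3, 3) / np.linalg.norm(h)

Squaring the residuals recovers the summed Sampson error, so this minimizes exactly the cost named above.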

Page 45: Singular Value Decomposition

Robust estimation

• What if the set of matches contains gross outliers?

Page 46: Singular Value Decomposition

RANSAC

Objective

Robust fit of model to data set S which contains outliers

Algorithm

(i) Randomly select a sample of s data points from S and instantiate the model from this subset.

(ii) Determine the set of data points Si which are within a distance threshold t of the model. The set Si is the consensus set of samples and defines the inliers of S.

(iii) If the size of Si is greater than some threshold T, re-estimate the model using all the points in Si and terminate

(iv) If the size of Si is less than T, select a new subset and repeat the above.

(v) After N trials the largest consensus set Si is selected, and the model is re-estimated using all the points in the subset Si
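A minimal sketch of this loop for a homography (s = 4), reusing the hypothetical dlt_homography and transfer helpers from earlier sketches; the default threshold anticipates the next slide ($t^2 = 5.99\sigma^2$ with $\sigma = 1$), and the simple one-image transfer distance stands in for the full distance measure:

    import numpy as np

    def ransac_homography(pts, pts_p, t=np.sqrt(5.99), n_trials=500, seed=0):
        rng = np.random.default_rng(seed)
        best = np.zeros(len(pts), dtype=bool)
        for _ in range(n_trials):
            idx = rng.choice(len(pts), size=4, replace=False)
            H = dlt_homography(pts[idx], pts_p[idx])
            d = np.linalg.norm(pts_p - transfer(H, pts), axis=1)   # transfer
            inliers = d < t                                        # distance
            if inliers.sum() > best.sum():
                best = inliers
        # step (v): re-estimate from the largest consensus set
        return dlt_homography(pts[best], pts_p[best]), best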

Page 47: Singular Value Decomposition

Distance threshold

Choose t so probability for inlier is α (e.g. 0.95)

• Often empirically• Zero-mean Gaussian noise σ then

follows distribution with m=codimension of model

2d

2m

(dimension+codimension=dimension space)

Codimension

Model t 2

1 l,F 3.84σ2

2 H,P 5.99σ2

3 T 7.81σ2

Page 48: Singular Value Decomposition

How many samples?

Choose N so that, with probability p, at least one random sample is free from outliers, e.g. p = 0.99:

\[ \left(1 - (1 - e)^s\right)^N = 1 - p \quad\Rightarrow\quad N = \log(1 - p) / \log\!\left(1 - (1 - e)^s\right) \]

Sample size s vs. proportion of outliers e:

s \ e   5%   10%   20%   25%   30%   40%   50%
2        2     3     5     6     7    11    17
3        3     4     7     9    11    19    35
4        3     5     9    13    17    34    72
5        4     6    12    17    26    57   146
6        4     7    16    24    37    97   293
7        4     8    20    33    54   163   588
8        5     9    26    44    78   272  1177
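The formula in code (the assert reproduces one table entry):

    import numpy as np

    def n_samples(e, s, p=0.99):
        """N = log(1 - p) / log(1 - (1 - e)^s), rounded up."""
        return int(np.ceil(np.log(1 - p) / np.log(1 - (1 - e) ** s)))

    assert n_samples(e=0.5, s=4) == 72      # row s = 4, column 50%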

Page 49: Singular Value Decomposition

Acceptable consensus set?

• Typically, terminate when the size of the consensus set reaches the expected number of inliers:

\[ T = (1 - e)\, n \]

Page 50: Singular Value Decomposition

Adaptively determining the number of samples

e is often unknown a priori, so pick the worst case, e.g. 50%, and adapt if more inliers are found; e.g. 80% inliers would yield e = 0.2.

• N = ∞, sample_count = 0
• While N > sample_count, repeat:
  • Choose a sample and count the number of inliers
  • Set e = 1 − (number of inliers)/(total number of points)
  • Recompute N from e: N = log(1 − p) / log(1 − (1 − e)^s)
  • Increment sample_count by 1
• Terminate
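As a sketch, the loop above with n_samples from the previous page; counting inliers is abstracted into a stream of per-sample inlier counts, and e is clipped for numerical stability (both are assumptions of this sketch):

    import numpy as np

    def adaptive_trial_count(inlier_counts, total, s=4, p=0.99):
        """inlier_counts yields the inlier count of each successive random
        sample; returns how many samples were needed."""
        N, sample_count, best = np.inf, 0, 0
        for count in inlier_counts:
            sample_count += 1
            best = max(best, count)
            e = np.clip(1.0 - best / total, 0.01, 0.99)   # outlier ratio
            N = n_samples(e, s, p)                        # recompute N from e
            if sample_count >= N:
                break
        return sample_count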

Page 51: Singular Value Decomposition

Robust Maximum Likelihood Estimation

The previous MLE algorithm considers a fixed set of inliers.

Better: a robust cost function (which reclassifies points):

\[ D_R = \sum_i \rho(e_i^2) \qquad \text{with} \quad \rho(e^2) = \begin{cases} e^2 & e^2 < t^2 \ \text{(inlier)} \\ t^2 & e^2 \ge t^2 \ \text{(outlier)} \end{cases} \]

Page 52: Singular Value Decomposition

Other robust algorithms

• RANSAC maximizes the number of inliers

• LMedS minimizes the median error

• Not recommended: case deletion, iterative least-squares, etc.

Page 53: Singular Value Decomposition

Automatic computation of H

Objective

Compute the homography between two images

Algorithm

(i) Interest points: Compute interest points in each image

(ii) Putative correspondences: Compute a set of interest point matches based on some similarity measure

(iii) RANSAC robust estimation: Repeat for N samples

(a) Select 4 correspondences and compute H

(b) Calculate the distance d for each putative match

(c) Compute the number of inliers consistent with H (d<t)

Choose H with most inliers

(iv) Optimal estimation: re-estimate H from all inliers by minimizing ML cost function with Levenberg-Marquardt

(v) Guided matching: Determine more matches using prediction by computed H

Optionally iterate last two steps until convergence

Page 54: Singular Value Decomposition

Determine putative correspondences

• Compare interest points. Similarity measure:
  • SAD, SSD, ZNCC on a small neighborhood

• If motion is limited, only consider interest points with similar coordinates

• More advanced approaches exist, based on invariance…

Page 55: Singular Value Decomposition

Example: robust computation

[Figures: interest points (500/image); putative correspondences (268); outliers (117); inliers (151); final inliers (262)]

Page 56: Singular Value Decomposition

• Maximum Likelihood Estimation:

\[ \sum_i \|x_i - \bar{x}_i\|_{\Sigma_i}^2 + \|x_i' - \bar{x}_i'\|_{\Sigma_i'}^2 \]

• DLT is not invariant ⇒ normalization; geometric minimization is invariant

• Iterative minimization: cost function, parameterization, initialization, minimization algorithm

Page 57: Singular Value Decomposition

(This slide repeats the "Automatic computation of H" objective and algorithm from Page 53.)

Page 58: Singular Value Decomposition

Algorithm Evaluation and Error Analysis

• Bounds on performance
• Covariance estimation

[Figure: residual error vs. uncertainty of the estimate]

Page 59: Singular Value Decomposition

Algorithm evaluation

$x, x'$: measured coordinates; $\hat{x}, \hat{H}$: estimated quantities; $\bar{x}, \bar{x}'$: true coordinates

Test on real data, or test on synthetic data:

• Generate synthetic correspondences $\bar{x}_i \leftrightarrow \bar{x}_i'$

• Add Gaussian noise, yielding $x_i \leftrightarrow x_i'$

• Estimate $\hat{H}$ from $x_i \leftrightarrow x_i'$ using the algorithm (maybe also $\hat{x}_i, \hat{x}_i'$)

• Verify how well $x_i' \approx \hat{H} x_i$ or $\bar{x}_i' \approx \hat{H} \bar{x}_i$

• Repeat many times (different noise, same $\sigma$)

Page 60: Singular Value Decomposition

• Error in one image: $x_i = \bar{x}_i$, $x_i' = \bar{x}_i' + \Delta x_i'$ with $\Delta x_i' \sim G(0, \sigma^2)$. Estimate $\hat{H}$, then

\[ e_{\mathrm{res}} = \left( \frac{1}{2n} \sum_{i=1}^{n} d(x_i', \hat{H} x_i)^2 \right)^{1/2} \]

• Error in two images: $x_i = \bar{x}_i + \Delta x_i$, $x_i' = \bar{x}_i' + \Delta x_i'$, both $\sim G(0, \sigma^2)$. Estimate $\hat{H}, \hat{x}_i, \hat{x}_i'$ so that $\hat{x}_i' = \hat{H} \hat{x}_i$, then

\[ e_{\mathrm{res}} = \left( \frac{1}{4n} \sum_{i=1}^{n} d(x_i, \hat{x}_i)^2 + d(x_i', \hat{x}_i')^2 \right)^{1/2} \]

Note: the residual error is not an absolute measure of the quality of $\hat{H}$. E.g. estimation from 4 points yields $e_{\mathrm{res}} = 0$; more points give better results, but $e_{\mathrm{res}}$ will increase.

Page 61: Singular Value Decomposition

Optimal estimators (MLE)

$f: \mathbb{R}^M \to \mathbb{R}^N$ (parameter space to measurement space), $X \in \mathbb{R}^N$, $P \in \mathbb{R}^M$, $\hat{X} = f(\hat{P})$

Estimate the expected residual error of the MLE; other algorithms can then be judged against this standard.

The range of $f$ is a submanifold $S_M \subset \mathbb{R}^N$; the dimension of $S_M$ equals the number of essential parameters.

Page 62: Singular Value Decomposition

$\bar{X} \in S_M$ is the true measurement, $X \sim G(\bar{X}, \sigma^2 I_N)$ the noisy one, and $\hat{X} = \mathrm{MLE}(X)$ its estimate on $S_M$.

Assume $S_M$ is locally planar around $\bar{X}$: the projection of an isotropic Gaussian distribution on $\mathbb{R}^N$ with total variance $N\sigma^2$ onto a subspace of dimension $s$ is an isotropic Gaussian distribution with total variance $s\sigma^2$.

Page 63: Singular Value Decomposition

For $N$ measurements (independent Gaussian noise $\sigma$) and a model with $d$ essential parameters (use $s = d$ and $s = N - d$):

(i) RMS residual error for the ML estimator:

\[ e_{\mathrm{res}} = E\left[ \|\hat{X} - X\|^2 / N \right]^{1/2} = \sigma \left(1 - d/N\right)^{1/2} \]

(ii) RMS estimation error for the ML estimator:

\[ e_{\mathrm{est}} = E\left[ \|\hat{X} - \bar{X}\|^2 / N \right]^{1/2} = \sigma \left(d/N\right)^{1/2} \]

Page 64: Singular Value Decomposition

Error in one image ($d = 8$, $N = 2n$):

\[ e_{\mathrm{res}} = \sigma \left(1 - 4/n\right)^{1/2}, \qquad e_{\mathrm{est}} = \sigma \left(4/n\right)^{1/2} \]

Error in two images ($d = 8 + 2n$, $N = 4n$):

\[ e_{\mathrm{res}} = \sigma \left( \frac{n - 4}{2n} \right)^{1/2}, \qquad e_{\mathrm{est}} = \sigma \left( \frac{n + 4}{2n} \right)^{1/2} \]

Page 65: Singular Value Decomposition

Covariance of estimated model

• Previous question: how close is the error to the smallest possible error? (Independent of the point configuration.)

• Real question: how close is the estimated model to the real model? (Dependent on the point configuration, e.g. 4 points close to a line.)

Page 66: Singular Value Decomposition

Forward propagation of covariance

Let $v$ be a random vector in $\mathbb{R}^M$ with mean $\bar{v}$ and covariance matrix $\Sigma$, and suppose that $f: \mathbb{R}^M \to \mathbb{R}^N$ is an affine mapping defined by $f(v) = f(\bar{v}) + A(v - \bar{v})$. Then $f(v)$ is a random variable with mean $f(\bar{v})$ and covariance matrix $A \Sigma A^{\top}$.

Note: this does not assume that A is a square matrix.

Page 67: Singular Value Decomposition

Example:

$x \sim G(0, 1^2)$, $y \sim G(0, 2^2)$

$x' = f(x, y) = 3x + 2y - 7$

$\bar{x}' = f(0, 0) = -7$, $A = (3\ \ 2)$, $\Sigma = \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix}$

\[ A \Sigma A^{\top} = (3\ \ 2) \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix} \begin{pmatrix} 3 \\ 2 \end{pmatrix} = 9 + 16 = 25, \qquad \mathrm{std}(x') = 5 \]
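A quick numerical confirmation of the example (numpy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    v = np.c_[rng.normal(0, 1, 100_000), rng.normal(0, 2, 100_000)]
    xp = v @ np.array([3.0, 2.0]) - 7          # f(x, y) = 3x + 2y - 7

    A, Sigma = np.array([3.0, 2.0]), np.diag([1.0, 4.0])
    print(A @ Sigma @ A)                       # predicted variance: 25.0
    print(xp.mean(), xp.var())                 # approx -7 and 25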

Page 68: Singular Value Decomposition

Example:

$x \sim G(0, 1^2)$, $y \sim G(0, 2^2)$, now with a 2-D output:

\[ x' = 3x + 2y, \qquad y' = -3x + 2y \]

\[ A = \begin{bmatrix} 3 & 2 \\ -3 & 2 \end{bmatrix}, \qquad A \Sigma A^{\top} = \begin{bmatrix} 3 & 2 \\ -3 & 2 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix} \begin{bmatrix} 3 & -3 \\ 2 & 2 \end{bmatrix} = \begin{bmatrix} 25 & 7 \\ 7 & 25 \end{bmatrix} \]

\[ E[x'x'] = 25, \qquad E[y'y'] = 25, \qquad E[x'y'] = 7 \]

Page 69: Singular Value Decomposition

Non-linear propagation of covariance

Let $v$ be a random vector in $\mathbb{R}^M$ with mean $\bar{v}$ and covariance matrix $\Sigma$, and suppose that $f: \mathbb{R}^M \to \mathbb{R}^N$ is differentiable in the neighborhood of $\bar{v}$:

\[ f(v) \approx f(\bar{v}) + J (v - \bar{v}), \qquad J_{ji} = \frac{\partial f_j}{\partial v_i} \]

Then, up to a first-order approximation, $f(v)$ is a random variable with mean $f(\bar{v})$ and covariance matrix $J \Sigma J^{\top}$, where $J$ is the Jacobian matrix evaluated at $\bar{v}$.

Note: this is a good approximation if $f$ is close to linear within the variability of $v$.

Page 70: Singular Value Decomposition

Example:

$x = (x, y)^{\top} \sim G\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix},\ \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix} \right)$, so

\[ P(x, y) = \frac{1}{4\pi}\, e^{-(x^2/2 + y^2/8)} \]

$x' = f(x, y) = 3x^2 + 2y - 5$

Exact moments:

\[ \bar{x}' = \iint f(x, y) P(x, y)\, dx\, dy = 3\sigma_x^2 - 5 = -2 \]

\[ \sigma_{x'}^2 = \iint f(x, y)^2 P(x, y)\, dx\, dy - \bar{x}'^2 = 18\sigma_x^4 + 4\sigma_y^2 = 34 \]

First-order approximation, with $J = (\partial f/\partial x,\ \partial f/\partial y)\big|_{(0,0)} = (6x,\ 2)\big|_{(0,0)} = (0,\ 2)$:

\[ \bar{x}' \approx f(0, 0) = -5, \qquad \sigma_{x'}^2 \approx J \Sigma J^{\top} = 4\sigma_y^2 = 16 \]
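A Monte Carlo check of this gap (a sketch; the exact values -2 and 34 follow the reconstruction above):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(0, 1, 1_000_000)
    y = rng.normal(0, 2, 1_000_000)
    xp = 3 * x**2 + 2 * y - 5

    print(xp.mean(), xp.var())    # approx -2 and 34 (the exact values)
    # first-order prediction: mean f(0,0) = -5, variance J Sigma J^T = 16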

Page 71: Singular Value Decomposition

Example:

$x' = f(x, y) = a x^2 + b x y + c y^2 + d x + e y + f$

mean: $\bar{x}' = a\sigma_x^2 + c\sigma_y^2 + f$

variance: $\sigma_{x'}^2 = 2a^2\sigma_x^4 + b^2\sigma_x^2\sigma_y^2 + 2c^2\sigma_y^4 + d^2\sigma_x^2 + e^2\sigma_y^2$

First-order estimates: mean $f$; variance $d^2\sigma_x^2 + e^2\sigma_y^2$

Page 72: Singular Value Decomposition

Backward propagation of covariance

$f: \mathbb{R}^M \to \mathbb{R}^N$, $X \in \mathbb{R}^N$, $P \in \mathbb{R}^M$, $P = f^{-1}(X)$

Given the covariance $\Sigma_X$ of the measurement $X$, what is the covariance $\Sigma_P$ of the estimated $P$? A noisy $X$ generally does not lie on the surface $S_M$, so it is first mapped onto it by an estimation mapping $\eta$, and $P = f^{-1}(\eta(X))$.

Page 73: Singular Value Decomposition

Backward propagation of covariance

Assume $f$ is affine: $f(P) = f(\bar{P}) + J (P - \bar{P})$, so on the surface $X - \bar{X} = f(P) - f(\bar{P}) = J (P - \bar{P})$. Minimizing the Mahalanobis distance $\|X - f(P)\|_{\Sigma_X}^2$ gives the solution

\[ P - \bar{P} = (J^{\top} \Sigma_X^{-1} J)^{-1} J^{\top} \Sigma_X^{-1} (X - \bar{X}) \]

i.e. $f^{-1} \circ \eta$ is itself affine, with matrix $A = (J^{\top} \Sigma_X^{-1} J)^{-1} J^{\top} \Sigma_X^{-1}$.

Page 74: Singular Value Decomposition

Backward propagation of covariance

Applying forward propagation with $A = (J^{\top} \Sigma_X^{-1} J)^{-1} J^{\top} \Sigma_X^{-1}$:

\[ \Sigma_P = A \Sigma_X A^{\top} = (J^{\top} \Sigma_X^{-1} J)^{-1} J^{\top} \Sigma_X^{-1}\, \Sigma_X\, \Sigma_X^{-1} J (J^{\top} \Sigma_X^{-1} J)^{-1} = (J^{\top} \Sigma_X^{-1} J)^{-1} \]

Page 75: Singular Value Decomposition

Backward propagation of covariance

If $f$ is affine, then $\Sigma_P = (J^{\top} \Sigma_X^{-1} J)^{-1}$.

In the non-linear case, obtain a first-order approximation by using the Jacobian of $f$.

Page 76: Singular Value Decomposition

Over-parameterization

In this case $f$ is not one-to-one and $\operatorname{rank} J < M$, so $\Sigma_P = (J^{\top} \Sigma_X^{-1} J)^{-1}$ cannot hold: e.g. the scale ambiguity would mean infinite variance!

However, if constraints are imposed, then it works. With $A$ a matrix whose columns span the constrained directions:

\[ \Sigma_P = A \left( A^{\top} J^{\top} \Sigma_X^{-1} J A \right)^{-1} A^{\top} \]

Invert $d \times d$ instead of $M \times M$.

Page 77: Singular Value Decomposition

Over-parameterization

When the constraint surface is locally orthogonal to the null space of $J$, e.g. for the usual constraint $\|P\| = 1$:

\[ \Sigma_P = \left( J^{\top} \Sigma_X^{-1} J \right)^{+} \qquad \text{(pseudo-inverse)} \]

Page 78: Singular Value Decomposition

Example: error in one image

(i) Estimate the transformation $\hat{H}$ from the data

(ii) Compute the Jacobian $J_f = \partial X / \partial h$, evaluated at $\hat{h}$

(iii) The covariance matrix of the estimated $\hat{h}$ is given by $\Sigma_h = (J_f^{\top} \Sigma_X^{-1} J_f)^{+}$

Per point, with $\hat{x}_i' = (\hat{x}_i', \hat{y}_i', \hat{w}_i')^{\top} = H \bar{x}_i$:

\[ J_i = \frac{\partial x_i'}{\partial h} = \frac{1}{\hat{w}_i'} \begin{bmatrix} \bar{x}_i^{\top} & 0^{\top} & -(\hat{x}_i'/\hat{w}_i')\, \bar{x}_i^{\top} \\ 0^{\top} & \bar{x}_i^{\top} & -(\hat{y}_i'/\hat{w}_i')\, \bar{x}_i^{\top} \end{bmatrix} \]

and, summing over the points,

\[ \Sigma_h = \left( \sum_i J_i^{\top} \Sigma_{x_i'}^{-1} J_i \right)^{+} \]

Page 79: Singular Value Decomposition

Example: error in both images

Separate the parameters into homography and point parameters: $J = [A \mid B]$. Then

\[ J^{\top} \Sigma_X^{-1} J = \begin{bmatrix} A^{\top} \Sigma_X^{-1} A & A^{\top} \Sigma_X^{-1} B \\ B^{\top} \Sigma_X^{-1} A & B^{\top} \Sigma_X^{-1} B \end{bmatrix} \]

Page 80: Singular Value Decomposition

Using covariance matrix in point transfer

Error in one image:

\[ \Sigma_{x'} = J_h \Sigma_h J_h^{\top} \]

Error in two images:

\[ \Sigma_{x'} = J_h \Sigma_h J_h^{\top} + J_x \Sigma_x J_x^{\top} \]

(if h and x are independent, i.e. new points)

Page 81: Singular Value Decomposition

Example (Criminisi '97): [figure; … = 1 pixel, … = 0.5 cm]

Page 82: Singular Value Decomposition

Example (Criminisi '97): [figure; … = 1 pixel, … = 0.5 cm]

Page 83: Singular Value Decomposition

Example (Criminisi '97): [figure]

Page 84: Singular Value Decomposition

Monte Carlo estimation of covariance

• To be used when the previous assumptions do not hold (e.g. the surface is not flat within the variance) or are too complicated to compute

• Simple and general, but expensive

• Generate samples according to the assumed noise distribution, carry out the computation, and observe the distribution of the results
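A sketch for the homography case, reusing the hypothetical normalized_dlt from the Page 37 sketch; the scale and sign of h are fixed before taking statistics (cf. the over-parameterization slides):

    import numpy as np

    def monte_carlo_covariance(H_true, pts, sigma, n_runs=1000, seed=0):
        """Sample noisy correspondences, re-estimate H each time, and return
        the empirical 9 x 9 covariance of the unit vector h."""
        rng = np.random.default_rng(seed)
        q = np.c_[pts, np.ones(len(pts))] @ H_true.T
        pts_p = q[:, :2] / q[:, 2:]
        href = H_true.ravel() / np.linalg.norm(H_true)
        hs = []
        for _ in range(n_runs):
            noisy = pts_p + rng.normal(0, sigma, pts_p.shape)
            H = normalized_dlt(pts, noisy)
            h = H.ravel() / np.linalg.norm(H)       # fix the scale ambiguity
            hs.append(h if h @ href > 0 else -h)    # fix the sign
        return np.cov(np.array(hs).T)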