
Page 1:

QR Factorization and Singular Value Decomposition

COS 323

Page 2:

Last time

•  Solving non-linear least squares
–  Newton, Gauss-Newton, Levenberg-Marquardt methods
–  Intro to logistic regression

•  Dealing with outliers and bad data:
–  Robust regression, least absolute deviation, and iteratively re-weighted least squares

•  Practical considerations

•  Solving with Excel and Matlab

Page 3:

Today

•  How do we solve least squares…
–  without incurring the condition-squaring effect of the normal equations (ATAx = ATb)?
–  when A is singular, “fat”, or otherwise poorly specified?

•  QR Factorization
–  Householder method

•  Singular Value Decomposition

•  Total least squares

•  Practical notes

Page 4:

Review: Condition Number

•  cond(A) is a function of A

•  cond(A) ≥ 1, bigger is bad

•  Measures how a change in the input is propagated to a change in the output:

\[ \frac{\|\Delta x\|}{\|x\|} \le \mathrm{cond}(A)\, \frac{\|\Delta A\|}{\|A\|} \]

•  E.g., if cond(A) = 451, then we can lose log10(451) ≈ 2.65 digits of accuracy in x, compared to the precision of A

Page 5:

Normal Equations are Bad

•  The normal equations involve solving ATAx = ATb

•  cond(ATA) = [cond(A)]^2

•  E.g., if cond(A) = 451, then we can lose log10(451^2) ≈ 5.3 digits of accuracy, compared to the precision of A

\[ \frac{\|\Delta x\|}{\|x\|} \le \mathrm{cond}(A)\, \frac{\|\Delta A\|}{\|A\|} \]
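To see the squaring numerically, here is a minimal Matlab sketch; the matrix below is a made-up illustration, not one from the lecture:

    A = [1 1; 1 1.001; 1 0.999];   % hypothetical, mildly ill-conditioned design matrix
    cond(A)                        % condition number of A
    cond(A' * A)                   % roughly cond(A)^2 -- the digits we can lose double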

Page 6:

QR Decomposition

Page 7:

What if we didn’t have to use ATA?

•  Suppose we are “lucky”:

•  Upper triangular matrices are nice!

\[
\begin{bmatrix}
\# & \# & \# & \# \\
0 & \# & \# & \# \\
0 & 0 & \# & \# \\
0 & 0 & 0 & \# \\
0 & 0 & 0 & 0
\end{bmatrix} x \cong
\begin{bmatrix} \# \\ \# \\ \# \\ \# \\ \# \end{bmatrix},
\qquad \text{i.e.} \qquad
\begin{bmatrix} R \\ O \end{bmatrix} x = b
\]

Page 8:

How to make A upper-triangular?

•  Gaussian elimination?
–  Applying elimination yields MAx = Mb
–  Want to find the x that minimizes ||Mb − MAx||2
–  Problem: ||Mv||2 ≠ ||v||2 (i.e., M might “stretch” a vector v)
–  Another problem: M may stretch different vectors differently
–  i.e., M does not preserve the Euclidean norm
–  i.e., the x that minimizes ||Mb − MAx||2 may not be the same x that minimizes ||b − Ax||2

Page 9:

QR Factorization

•  Can’t usually find R such that
\[ A = \begin{bmatrix} R \\ O \end{bmatrix} \]

•  Can find Q, R such that
\[ A = Q \begin{bmatrix} R \\ O \end{bmatrix}, \qquad \text{so} \qquad \begin{bmatrix} R \\ O \end{bmatrix} x = Q^T b \]

•  If Q is orthogonal, it doesn’t change the least-squares solution
–  QTQ = I, columns of Q are orthonormal
–  i.e., Q preserves the Euclidean norm: ||Qv||2 = ||v||2

Page 10:

Goal of QR

\[
\underbrace{A}_{m \times n} = \underbrace{Q}_{m \times m} \underbrace{\begin{bmatrix} R \\ O \end{bmatrix}}_{m \times n}
= Q \begin{bmatrix}
? & ? & \cdots & ? \\
0 & ? & \cdots & ? \\
0 & 0 & \ddots & \vdots \\
0 & 0 & \cdots & ? \\
0 & 0 & \cdots & 0
\end{bmatrix}
\]

R: n×n, upper triangular

O: (m−n)×n, all zeros

Page 11:

Reformulating Least Squares using QR

\[
\begin{aligned}
\|r\|_2^2 = \|b - Ax\|_2^2
&= \left\| b - Q \begin{bmatrix} R \\ O \end{bmatrix} x \right\|_2^2
&& \text{because } A = Q \begin{bmatrix} R \\ O \end{bmatrix} \\
&= \left\| Q^T b - Q^T Q \begin{bmatrix} R \\ O \end{bmatrix} x \right\|_2^2
&& \text{because } Q \text{ is orthogonal } (Q^T Q = I) \\
&= \left\| Q^T b - \begin{bmatrix} R \\ O \end{bmatrix} x \right\|_2^2 \\
&= \|c_1 - Rx\|_2^2 + \|c_2\|_2^2
&& \text{if we call } Q^T b = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} \\
&= \|c_2\|_2^2
&& \text{if we choose } x \text{ such that } Rx = c_1
\end{aligned}
\]
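As a concrete Matlab sketch of this pipeline (assuming A is m-by-n with full column rank and b is m-by-1):

    [Q, R] = qr(A, 0);   % economy-size QR: Q is m-by-n, R is n-by-n
    c1 = Q' * b;         % with the thin Q, Q'*b is exactly c1
    x  = R \ c1;         % back-substitution solves Rx = c1
    norm(b - A*x)        % the leftover residual, ||c2||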

Page 12:

Householder Method for Computing QR Decomposition

Page 13:

Orthogonalization for Factorization

•  Rough idea:
–  For each i-th column of A, “zero out” rows i+1 and lower
–  Accomplish this by multiplying A with an orthogonal matrix Hi
–  Equivalently, apply an orthogonal transformation to the i-th column (e.g., rotation, reflection)
–  Q becomes the product H1·…·Hn; R contains the zeroed-out columns

\[ A = Q \begin{bmatrix} R \\ O \end{bmatrix} \]

Page 14:

Householder Transformation

•  Accomplishes the critical sub-step of factorization:
–  Given any vector (e.g., a column of A), reflect it so that its last p elements become 0
–  Reflection preserves length (Euclidean norm)

Page 15:

Computing Householder

•  If a is the k-th column, write
\[ a = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \]
where a2 contains the entries from row k down, and set
\[ v = \begin{bmatrix} 0 \\ a_2 \end{bmatrix} - \alpha e_k, \qquad \alpha = -\operatorname{sign}(a_k)\, \|a_2\|_2 \]

•  Apply H to a and the columns to the right:
\[ Hu = u - 2 \left( \frac{v^T u}{v^T v} \right) v \quad \text{(*with some shortcuts – see p. 124)} \]

•  Exercise: show H is orthogonal (HTH = I)
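As a sketch, one step of this in Matlab (house_step is a hypothetical helper name, not from the slides; production code uses the p. 124 shortcuts and handles the sign(0) edge case):

    function A = house_step(A, k)
        % Reflect column k of A so its entries below row k become zero,
        % and apply the same reflection to the columns to its right.
        a = A(:, k);
        v = [zeros(k-1, 1); a(k:end)];           % v = [0; a2]
        alpha = -sign(a(k)) * norm(a(k:end));    % alpha = -sign(a_k)*||a2||; assumes a(k) ~= 0
        v(k) = v(k) - alpha;                     % v = [0; a2] - alpha*e_k
        A(:, k:end) = A(:, k:end) ...
            - 2 * v * (v' * A(:, k:end)) / (v' * v);   % Hu = u - 2(v'u / v'v)v
    end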

Page 16:

Outcome of Householder

\[
H_n \cdots H_1 A = \begin{bmatrix} R \\ O \end{bmatrix}
\quad \text{where } Q^T = H_n \cdots H_1,
\quad \text{so } Q = H_1 \cdots H_n,
\quad \text{so } A = Q \begin{bmatrix} R \\ O \end{bmatrix}
\]

Page 17:

Review: Least Squares using QR

\[
\begin{aligned}
\|r\|_2^2 = \|b - Ax\|_2^2
&= \left\| b - Q \begin{bmatrix} R \\ O \end{bmatrix} x \right\|_2^2
&& \text{because } A = Q \begin{bmatrix} R \\ O \end{bmatrix} \\
&= \left\| Q^T b - Q^T Q \begin{bmatrix} R \\ O \end{bmatrix} x \right\|_2^2
&& \text{because } Q \text{ is orthogonal } (Q^T Q = I) \\
&= \left\| Q^T b - \begin{bmatrix} R \\ O \end{bmatrix} x \right\|_2^2 \\
&= \|c_1 - Rx\|_2^2 + \|c_2\|_2^2
&& \text{if we call } Q^T b = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} \\
&= \|c_2\|_2^2
&& \text{if we choose } x \text{ such that } Rx = c_1
\end{aligned}
\]

Page 18:

Using Householder

•  Iteratively compute H1, H2, …, Hn and apply to A to get R
–  also apply to b to get
\[ Q^T b = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} \]

•  Solve Rx = c1 using back-substitution
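Putting the pieces together, a minimal Matlab sketch reusing the hypothetical house_step helper from the previous slide's sketch (b is carried along so it receives the same reflections):

    [m, n] = size(A);
    Ab = [A b];                  % augment with b so it sees the same H's
    for k = 1:n
        Ab = house_step(Ab, k);  % zero out rows k+1..m of column k
    end
    R  = triu(Ab(1:n, 1:n));     % upper-triangular factor
    c1 = Ab(1:n, end);           % top n entries of Q'*b
    x  = R \ c1;                 % back-substitution solves Rx = c1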

Page 19:

Alternative Orthogonalization Methods

•  Givens:
–  Don’t reflect; rotate instead
–  Introduces zeroes into A one at a time
–  More complicated implementation than Householder
–  Useful when the matrix is sparse

•  Gram-Schmidt:
–  Iteratively express each new column vector as a linear combination of previous columns, plus some (normalized) orthogonal component
–  Conceptually nice, but suffers from subtractive cancellation

Page 20:

Singular Value Decomposition

Page 21:

Motivation #1

•  Diagonal matrices are even nicer than triangular ones:

\[
\begin{bmatrix}
\# & 0 & 0 & 0 \\
0 & \# & 0 & 0 \\
0 & 0 & \# & 0 \\
0 & 0 & 0 & \# \\
0 & 0 & 0 & 0
\end{bmatrix} x \cong
\begin{bmatrix} \# \\ \# \\ \# \\ \# \\ \# \end{bmatrix}
\]

Page 22:

Motivation #2

•  What if you have fewer data points than parameters in your function?
–  i.e., A is “fat”
–  Intuitively, can’t do standard least squares
–  Recall that the solution takes the form ATAx = ATb
–  When A has more columns than rows, ATA is singular: can’t take its inverse, etc.

Page 23:

Motivation #3

•  What if your data poorly constrains the function?

•  Example: fitting to y = ax^2 + bx + c

Page 24:

Underconstrained Least Squares

•  Problem: if the problem is very close to singular, roundoff error can have a huge effect
–  Even on “well-determined” values!

•  Can detect this:
–  Uncertainty proportional to the covariance C = (ATA)^-1
–  In other words, unstable if ATA has small values
–  More precisely, care if xT(ATA)x is small for any x

•  Idea: if part of the solution is unstable, set the answer to 0
–  Avoid corrupting good parts of the answer

Page 25:

Singular Value Decomposition (SVD)

•  Handy mathematical technique that has application to many problems

•  Given any m×n matrix A, there is an algorithm to find matrices U, W, and V such that
\[ A = U W V^T \]
–  U is m×n and orthonormal
–  W is n×n and diagonal
–  V is n×n and orthonormal

Page 26:

SVD

•  Treat as black box: code widely available

•  In Matlab: [U,W,V] = svd(A,0)
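A quick sanity check of the call and the factorization (the A below is just a random example):

    A = rand(5, 3);            % hypothetical matrix
    [U, W, V] = svd(A, 0);     % economy size: U is 5x3, W and V are 3x3
    norm(A - U * W * V')       % ~ machine epsilon
    diag(W)'                   % singular values, in decreasing order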

Page 27:

SVD

•  The wi are called the singular values of A

•  If A is singular, some of the wi will be 0

•  In general rank(A) = number of nonzero wi

•  SVD is mostly unique (up to permutation of singular values, or if some wi are equal)

Page 28:

SVD and Inverses

•  Why is SVD so useful?

•  Application #1: inverses

•  \( A^{-1} = (V^T)^{-1} W^{-1} U^{-1} = V W^{-1} U^T \)
–  Using the fact that inverse = transpose for orthogonal matrices
–  Since W is diagonal, W^-1 is also diagonal, with the reciprocals of the entries of W

Page 29:

SVD and the Pseudoinverse

•  \( A^{-1} = (V^T)^{-1} W^{-1} U^{-1} = V W^{-1} U^T \)

•  This fails when some wi are 0
–  It’s supposed to fail – singular matrix
–  Happens when a rectangular A is rank-deficient

•  Pseudoinverse: if wi = 0, set 1/wi to 0 (!)
–  “Closest” matrix to inverse
–  Defined for all matrices (even non-square, singular, etc.)
–  Equal to (ATA)^-1 AT if ATA is invertible
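A minimal Matlab sketch of this rule (the tolerance below is an assumption on my part; the built-in pinv applies a similar default cutoff):

    [U, W, V] = svd(A, 0);
    w = diag(W);
    tol = max(size(A)) * eps(max(w));   % treat tiny w_i as exactly 0
    winv = zeros(size(w));
    winv(w > tol) = 1 ./ w(w > tol);    % if w_i = 0, set 1/w_i to 0 (!)
    Aplus = V * diag(winv) * U';        % the pseudoinverse A+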

Page 30:

SVD and Condition Number

•  Singular values are used to compute the Euclidean (spectral) norm and the condition number of a matrix:

\[ \mathrm{cond}(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)} \]
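In Matlab this ratio is one line (the built-in cond(A) computes the same thing for the default 2-norm):

    w = svd(A);              % singular values, largest first
    condA = w(1) / w(end);   % sigma_max / sigma_min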

Page 31:

SVD and Least Squares

•  Solving Ax ≅ b by least squares:
–  ATAx = ATb  ⇒  x = (ATA)^-1 ATb
–  Replace (ATA)^-1 AT with the pseudoinverse A+:  x = A+b

•  Compute the pseudoinverse using SVD
–  Lets you see if the data is singular (< n nonzero singular values)
–  Even if not singular, the condition number tells you how stable the solution will be
–  Set 1/wi to 0 if wi is small (even if not exactly 0)
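A minimal sketch of least squares via the (truncated) pseudoinverse; the cutoff below is an assumed threshold, not a value from the slides:

    [U, W, V] = svd(A, 0);
    w = diag(W);
    keep = w > 1e-10 * w(1);                          % drop directions with tiny w_i
    x = V(:, keep) * ((U(:, keep)' * b) ./ w(keep));  % x = A+ * b, small-w_i terms zeroed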

Page 32:

SVD and Matrix Similarity

•  One common definition for the norm of a matrix is the Frobenius norm:
\[ \|A\|_F = \sqrt{\textstyle\sum_i \sum_j a_{ij}^2} \]

•  Frobenius norm can be computed from the SVD:
\[ \|A\|_F = \sqrt{\textstyle\sum_i w_i^2} \]

•  Euclidean (spectral) norm can also be computed:
\[ \|A\|_2 = \max\{\, \lambda : \lambda \in \sigma(A) \,\} \]

•  So changes to a matrix can be evaluated by looking at changes to singular values

Page 33:

SVD and Matrix Similarity

•  Suppose you want to find the best rank-k approximation to A

•  Answer: set all but the largest k singular values to zero

•  Can form compact representation by eliminating columns of U and V corresponding to zeroed wi
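In Matlab, a sketch of this compact rank-k form (k here is whatever target rank you choose):

    [U, W, V] = svd(A);
    k = 2;                                      % assumed target rank
    Ak = U(:, 1:k) * W(1:k, 1:k) * V(:, 1:k)';  % best rank-k approximation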

Page 34:

SVD and Eigenvectors

•  Let A = UWVT, and let xi be the i-th column of V

•  Consider ATAxi:
\[ A^T A x_i = V W U^T U W V^T x_i = V W^2 V^T x_i = w_i^2 x_i \]

•  So the elements of W are the square roots of the eigenvalues of ATA, and the columns of V are the eigenvectors of ATA
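A quick numerical check of this relationship:

    A = rand(5, 3);                     % hypothetical matrix
    [U, W, V] = svd(A, 0);
    [Veig, D] = eig(A' * A);
    sort(sqrt(diag(D)), 'descend')'     % matches diag(W)'
    % columns of Veig match columns of V, up to ordering and sign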

Page 35:

Total Least Squares

•  One final least squares application

•  Fitting a line: vertical vs. perpendicular error

Page 36:

Total Least Squares

•  Distance from a point x to the line:
\[ d = |n \cdot x - a| \]
where n is the (unit) normal vector to the line, and a is a constant

•  Minimize the sum of squared perpendicular distances:
\[ \sum_i (n \cdot x_i - a)^2 \]

Page 37:

Total Least Squares

•  First, let’s pretend we know n, and solve for a:
\[ \frac{\partial}{\partial a} \sum_i (n \cdot x_i - a)^2 = 0 \quad\Rightarrow\quad a = \frac{1}{m} \sum_i n \cdot x_i = n \cdot \bar{x} \]

•  Then the best-fit line passes through the centroid x̄ of the data

Page 38:

Total Least Squares

•  So, let’s define the centered points
\[ y_i = x_i - \bar{x} \]
and minimize
\[ \sum_i (n \cdot y_i)^2 \]

Page 39:

Total Least Squares

•  Write as a linear system (the rows of A are the yiT)

•  Have An = 0
–  Problem: lots of n are solutions, including n = 0
–  Standard least squares will, in fact, return n = 0

Page 40:

Constrained Optimization

•  Solution: constrain n to be unit length

•  So, try to minimize ||An||^2 subject to ||n||^2 = 1

•  Expand n in the eigenvectors ei of ATA:
\[ n = \sum_i \mu_i e_i \quad\Rightarrow\quad \|An\|^2 = n^T A^T A n = \sum_i \lambda_i \mu_i^2, \qquad \|n\|^2 = \sum_i \mu_i^2 \]
where the λi are the eigenvalues of ATA

Page 41:

Constrained Optimization

•  To minimize \( \sum_i \lambda_i \mu_i^2 \) subject to \( \sum_i \mu_i^2 = 1 \), put all the weight on the smallest eigenvalue: set its µ to 1, and all other µi to 0

•  That is, n is the eigenvector of ATA with the smallest corresponding eigenvalue
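A minimal Matlab sketch of the whole total-least-squares line fit (pts is an assumed m-by-2 array of data points; the smallest right singular vector of the centered data equals the smallest eigenvector of ATA):

    xbar = mean(pts, 1);            % centroid of the data
    Y = pts - xbar;                 % centered data: rows are y_i'
    [~, ~, V] = svd(Y, 0);
    n = V(:, end);                  % direction with smallest singular value
    a = n' * xbar';                 % the fitted line is n . x = a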

Page 42:

Comparison of Least Squares Methods

•  Normal equations (ATAx = ATb)
–  O(mn^2) (using Cholesky)
–  cond(ATA) = [cond(A)]^2
–  Cholesky fails if cond(A) ~ 1/sqrt(machine epsilon)

•  Householder
–  Usually the best orthogonalization method
–  O(mn^2 − n^3/3) operations
–  Relative error is the best possible for least squares
–  Breaks if cond(A) ~ 1/(machine epsilon)

•  SVD
–  Expensive: mn^2 + n^3 with a bad constant factor
–  Can handle rank deficiency and near-singularity
–  Handy for many different things

Page 43:

Matlab functions

•  qr: explicit QR factorization

•  svd: singular value decomposition

•  A\b (the ‘\’ operator)
–  Performs least squares if A is m-by-n
–  Uses QR decomposition

•  pinv: pseudoinverse

•  rank: Uses SVD to compute rank of a matrix
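A small sketch exercising these functions on a made-up line fit (all values here are illustrative):

    t = (1:10)';
    A = [ones(10, 1), t];            % design matrix for y = c0 + c1*t
    b = 3 + 2*t + 0.1*randn(10, 1);  % synthetic noisy data
    x1 = A \ b;                      % least squares via QR
    x2 = pinv(A) * b;                % least squares via the SVD pseudoinverse
    fprintf('rank = %d, cond = %.3g\n', rank(A), cond(A));
    % x1 and x2 agree whenever A has full column rank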