
L. Vandenberghe ECE133A (Fall 2019)

8. Least squares

• least squares problem

• solution of a least squares problem

• solving least squares problems



Least squares problem

given $A \in \mathbf{R}^{m \times n}$ and $b \in \mathbf{R}^m$, find vector $x \in \mathbf{R}^n$ that minimizes
\[
\|Ax - b\|^2 = \sum_{i=1}^m \Bigl( \sum_{j=1}^n A_{ij} x_j - b_i \Bigr)^2
\]

• “least squares” because we minimize a sum of squares of affine functions:
\[
\|Ax - b\|^2 = \sum_{i=1}^m r_i(x)^2, \qquad r_i(x) = \sum_{j=1}^n A_{ij} x_j - b_i
\]

• the problem is also called the linear least squares problem

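A minimal NumPy sketch of this objective, evaluated both directly and as a sum of squared residuals; the data is the small example worked out below:

```python
import numpy as np

# example data (the 3 x 2 problem worked out below)
A = np.array([[2.0, 0.0],
              [-1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])
x = np.array([0.5, -0.5])

# objective ||Ax - b||^2 evaluated directly ...
obj = np.linalg.norm(A @ x - b) ** 2

# ... and as the sum of squared affine residuals r_i(x)
r = A @ x - b
assert np.isclose(obj, np.sum(r ** 2))
```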


Example

\[
A = \begin{bmatrix} 2 & 0 \\ -1 & 1 \\ 0 & 2 \end{bmatrix}, \qquad
b = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}
\]

[figure: level curves $f(\hat x) + 1$ and $f(\hat x) + 2$ of $f$ in the $(x_1, x_2)$-plane]

• the least squares solution $\hat x$ minimizes
\[
f(x) = \|Ax - b\|^2 = (2x_1 - 1)^2 + (-x_1 + x_2)^2 + (2x_2 + 1)^2
\]

• to find $\hat x$, set derivatives with respect to $x_1$ and $x_2$ equal to zero:
\[
10x_1 - 2x_2 - 4 = 0, \qquad -2x_1 + 10x_2 + 4 = 0
\]

solution is $(\hat x_1, \hat x_2) = (1/3, -1/3)$

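As a quick check, the same example solved with NumPy's built-in least squares routine:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [-1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])

# minimize ||Ax - b||^2
x_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(x_hat)  # approximately [ 0.3333, -0.3333 ]
```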


Least squares and linear equations

\[
\text{minimize} \quad \|Ax - b\|^2
\]

• solution of the least squares problem: any $\hat x$ that satisfies
\[
\|A\hat x - b\| \le \|Ax - b\| \quad \text{for all } x
\]

• $\hat r = A\hat x - b$ is the residual vector

• if $\hat r = 0$, then $\hat x$ solves the linear equation $Ax = b$

• if $\hat r \neq 0$, then $\hat x$ is a least squares approximate solution of the equation

• in most least squares applications, $m > n$ and $Ax = b$ has no solution



Column interpretation

least squares problem in terms of the columns $a_1, a_2, \ldots, a_n$ of $A$:
\[
\text{minimize} \quad \|Ax - b\|^2 = \Bigl\| \sum_{j=1}^n a_j x_j - b \Bigr\|^2
\]

[figure: $b$, its projection $A\hat x$ on the plane $\mathrm{range}(A) = \mathrm{span}(a_1, \ldots, a_n)$, and the residual $\hat r = A\hat x - b$]

• $A\hat x$ is the vector in $\mathrm{range}(A) = \mathrm{span}(a_1, a_2, \ldots, a_n)$ closest to $b$

• geometric intuition suggests that $\hat r = A\hat x - b$ is orthogonal to $\mathrm{range}(A)$



Example: advertising purchases

• $m$ demographic groups; $n$ advertising channels

• $A_{ij}$ is the number of impressions (views) in group $i$ per dollar spent on ads in channel $j$

• $x_j$ is the amount of advertising purchased in channel $j$

• $(Ax)_i$ is the number of impressions in group $i$

• $b_i$ is the target number of impressions in group $i$

Example: $m = 10$, $n = 3$, $b = 10^3 \cdot \mathbf{1}$

[figures: impressions per group for each of the three columns of $A$ (channels 1, 2, 3); target $b$ and least squares result $A\hat x$ per group]

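The slide's impression data is not reproduced here; the sketch below sets up and solves the same kind of problem with stand-in random entries (an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 10, 3

# stand-in data: A[i, j] = impressions in group i per dollar spent on channel j
A = rng.uniform(0.5, 2.0, size=(m, n))
b = np.full(m, 1e3)   # target of 10^3 impressions in every group

# least squares spending plan and its deviation from the target
x_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(x_hat)                          # dollars per channel
print(np.linalg.norm(A @ x_hat - b))  # ||A x_hat - b||
```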


Example: illumination

• $n$ lamps at given positions above an area divided in $m$ regions

• $A_{ij}$ is the illumination in region $i$ if lamp $j$ is on with power 1 and other lamps are off

• $x_j$ is the power of lamp $j$

• $(Ax)_i$ is the illumination level at region $i$

• $b_i$ is the target illumination level at region $i$

Example: $m = 252$, $n = 10$; the figure shows the position and height of each lamp

[figure: $25\,\mathrm{m} \times 25\,\mathrm{m}$ area with the positions of the 10 lamps and their heights, ranging from 3.5 m to 6.0 m]



• left: illumination pattern for equal lamp powers ($x = \mathbf{1}$)

• right: illumination pattern for the least squares solution $\hat x$, with $b = \mathbf{1}$

[figures: illumination patterns over the area for the two power settings, on an intensity scale from 0.6 to 1.4, with histograms of the number of regions at each intensity]


Solution of a least squares problem

if $A$ has linearly independent columns (is left-invertible), then the vector
\[
\hat x = (A^T A)^{-1} A^T b = A^\dagger b
\]
is the unique solution of the least squares problem
\[
\text{minimize} \quad \|Ax - b\|^2
\]

• in other words, if $x \neq \hat x$, then $\|Ax - b\|^2 > \|A\hat x - b\|^2$

• recall from page 4.23 that
\[
A^\dagger = (A^T A)^{-1} A^T
\]
is called the pseudo-inverse of a left-invertible matrix

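A short NumPy sketch of this formula on the earlier example; `np.linalg.pinv` computes the same pseudo-inverse for a matrix with linearly independent columns:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [-1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])

# x_hat = (A^T A)^{-1} A^T b, written as a linear solve rather than an explicit inverse
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# the pseudo-inverse route gives the same answer
assert np.allclose(x_hat, np.linalg.pinv(A) @ b)
print(x_hat)  # approximately [ 0.3333, -0.3333 ]
```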


Proof

we show that $\|Ax - b\|^2 > \|A\hat x - b\|^2$ for $x \neq \hat x$:
\[
\begin{aligned}
\|Ax - b\|^2 &= \|A(x - \hat x) + (A\hat x - b)\|^2 \\
&= \|A(x - \hat x)\|^2 + \|A\hat x - b\|^2 \\
&> \|A\hat x - b\|^2
\end{aligned}
\]

• 2nd step follows from $A(x - \hat x) \perp (A\hat x - b)$:
\[
(A(x - \hat x))^T (A\hat x - b) = (x - \hat x)^T (A^T A \hat x - A^T b) = 0
\]

• 3rd step follows from linear independence of the columns of $A$:
\[
A(x - \hat x) \neq 0 \quad \text{if } x \neq \hat x
\]



Derivation from calculus

\[
f(x) = \|Ax - b\|^2 = \sum_{i=1}^m \Bigl( \sum_{j=1}^n A_{ij} x_j - b_i \Bigr)^2
\]

• partial derivative of $f$ with respect to $x_k$:
\[
\frac{\partial f}{\partial x_k}(x)
= 2 \sum_{i=1}^m A_{ik} \Bigl( \sum_{j=1}^n A_{ij} x_j - b_i \Bigr)
= 2 \bigl( A^T (Ax - b) \bigr)_k
\]

• gradient of $f$ is
\[
\nabla f(x) = \Bigl( \frac{\partial f}{\partial x_1}(x), \frac{\partial f}{\partial x_2}(x), \ldots, \frac{\partial f}{\partial x_n}(x) \Bigr) = 2 A^T (Ax - b)
\]

• minimizer $\hat x$ of $f(x)$ satisfies $\nabla f(\hat x) = 2 A^T (A\hat x - b) = 0$

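The gradient formula is easy to sanity-check numerically; the sketch below compares it against central finite differences on a random instance (the data is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

f = lambda x: np.linalg.norm(A @ x - b) ** 2
grad = lambda x: 2 * A.T @ (A @ x - b)   # gradient derived above

# central finite differences at a random point
x = rng.standard_normal(3)
h = 1e-6
fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(3)])
assert np.allclose(fd, grad(x), atol=1e-5)
```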


Geometric interpretation

the residual vector $\hat r = A\hat x - b$ satisfies $A^T \hat r = A^T (A\hat x - b) = 0$

[figure: $b$, its projection $A\hat x$ on $\mathrm{range}(A) = \mathrm{span}(a_1, \ldots, a_n)$, and the orthogonal residual $\hat r = A\hat x - b$]

• residual vector $\hat r$ is orthogonal to every column of $A$; hence, to $\mathrm{range}(A)$

• projection on $\mathrm{range}(A)$ is a matrix-vector multiplication with the matrix
\[
A (A^T A)^{-1} A^T = A A^\dagger
\]

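A small sketch verifying the projector properties on the earlier example: $AA^\dagger$ is symmetric and idempotent, and the residual it produces is orthogonal to the columns of $A$:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [-1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])

P = A @ np.linalg.pinv(A)    # projection on range(A): A (A^T A)^{-1} A^T
r_hat = P @ b - b            # residual A x_hat - b

assert np.allclose(P, P.T)          # projector is symmetric
assert np.allclose(P @ P, P)        # projecting twice changes nothing
assert np.allclose(A.T @ r_hat, 0)  # residual orthogonal to every column of A
```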


Normal equations

\[
A^T A x = A^T b
\]

• these equations are called the normal equations of the least squares problem

• coefficient matrix $A^T A$ is the Gram matrix of $A$

• equivalent to $\nabla f(x) = 0$ where $f(x) = \|Ax - b\|^2$

• all solutions of the least squares problem satisfy the normal equations

if $A$ has linearly independent columns, then:

• $A^T A$ is nonsingular

• normal equations have a unique solution $\hat x = (A^T A)^{-1} A^T b$

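Since $A^T A$ is positive definite when $A$ has linearly independent columns, one standard way to solve the normal equations is a Cholesky factorization; a sketch (but note the stability caveat at the end of this section):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [-1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])

# A^T A = L L^T with L lower triangular, then two triangular solves
L = np.linalg.cholesky(A.T @ A)
y = np.linalg.solve(L, A.T @ b)    # L y = A^T b
x_hat = np.linalg.solve(L.T, y)    # L^T x = y

assert np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0])
```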


QR factorization method

rewrite the least squares solution using the QR factorization $A = QR$:
\[
\begin{aligned}
\hat x = (A^T A)^{-1} A^T b &= \bigl( (QR)^T (QR) \bigr)^{-1} (QR)^T b \\
&= (R^T Q^T Q R)^{-1} R^T Q^T b \\
&= (R^T R)^{-1} R^T Q^T b \\
&= R^{-1} R^{-T} R^T Q^T b \\
&= R^{-1} Q^T b
\end{aligned}
\]

Algorithm

1. compute QR factorization $A = QR$ ($2mn^2$ flops if $A$ is $m \times n$)

2. matrix-vector product $d = Q^T b$ ($2mn$ flops)

3. solve $Rx = d$ by back substitution ($n^2$ flops)

complexity: $2mn^2$ flops
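A direct transcription of the three steps in NumPy (`np.linalg.qr` returns the reduced factorization; the generic `solve` stands in here for back substitution):

```python
import numpy as np

def lstsq_qr(A, b):
    """Least squares via QR: x = R^{-1} Q^T b."""
    Q, R = np.linalg.qr(A)        # step 1: reduced QR factorization
    d = Q.T @ b                   # step 2: d = Q^T b
    return np.linalg.solve(R, d)  # step 3: solve the triangular system R x = d

A = np.array([[3.0, -6.0],
              [4.0, -8.0],
              [0.0, 1.0]])
b = np.array([-1.0, 7.0, 2.0])
print(lstsq_qr(A, b))  # [5. 2.], matching the worked example below
```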


Example

\[
A = \begin{bmatrix} 3 & -6 \\ 4 & -8 \\ 0 & 1 \end{bmatrix}, \qquad
b = \begin{bmatrix} -1 \\ 7 \\ 2 \end{bmatrix}
\]

1. QR factorization: $A = QR$ with
\[
Q = \begin{bmatrix} 3/5 & 0 \\ 4/5 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
R = \begin{bmatrix} 5 & -10 \\ 0 & 1 \end{bmatrix}
\]

2. calculate $d = Q^T b = (5, 2)$

3. solve $Rx = d$:
\[
\begin{bmatrix} 5 & -10 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 5 \\ 2 \end{bmatrix}
\]
solution is $x_1 = 5$, $x_2 = 2$



Solving the normal equations

why not solve the normal equations
\[
A^T A x = A^T b
\]
as a set of linear equations?

Example: a $3 \times 2$ matrix with “almost linearly dependent” columns
\[
A = \begin{bmatrix} 1 & -1 \\ 0 & 10^{-5} \\ 0 & 0 \end{bmatrix}, \qquad
b = \begin{bmatrix} 0 \\ 10^{-5} \\ 1 \end{bmatrix}
\]
we round intermediate results to 8 significant decimal digits


Method 1: form the Gram matrix $A^T A$ and solve the normal equations
\[
A^T A = \begin{bmatrix} 1 & -1 \\ -1 & 1 + 10^{-10} \end{bmatrix}
\approx \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}, \qquad
A^T b = \begin{bmatrix} 0 \\ 10^{-10} \end{bmatrix}
\]
after rounding, the Gram matrix is singular; hence the method fails

Method 2: QR factorization of $A$ is
\[
Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad
R = \begin{bmatrix} 1 & -1 \\ 0 & 10^{-5} \end{bmatrix}
\]
rounding does not change any values (in this example)

• the problem with method 1 occurs when forming the Gram matrix $A^T A$

• the QR factorization method is more stable because it avoids forming $A^T A$

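The slides round to 8 digits by hand; the same failure shows up in ordinary double precision if $10^{-5}$ is shrunk to $10^{-8}$ (so that $1 + 10^{-16}$ rounds to exactly 1), an adaptation of the example above:

```python
import numpy as np

eps = 1e-8   # small enough that 1 + eps**2 rounds to exactly 1 in float64
A = np.array([[1.0, -1.0],
              [0.0, eps],
              [0.0, 0.0]])
b = np.array([0.0, eps, 1.0])

# method 2: QR factorization recovers the solution (1, 1)
Q, R = np.linalg.qr(A)
print(np.linalg.solve(R, Q.T @ b))   # approximately [1. 1.]

# method 1: the computed Gram matrix [[1, -1], [-1, 1]] is exactly singular
try:
    np.linalg.solve(A.T @ A, A.T @ b)
except np.linalg.LinAlgError:
    print("normal equations fail: Gram matrix is singular after rounding")
```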