Sparse and Redundant Representations
Ph. Grohs, ETH Zurich, Seminar for Applied Mathematics
09-27-2012


Introduction

Basic Theme
Let A ∈ R^{n×m} with n < m. We would like to solve the linear system

Ax = b,  x ∈ R^m,  b ∈ R^n.

Clearly this is not uniquely solvable: the representation of b is redundant!
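The non-uniqueness is easy to see numerically. The following sketch (a hypothetical 2×4 system, using NumPy) finds one exact solution and then produces another by adding vectors from the null space of A:

```python
import numpy as np

# A hypothetical underdetermined system: n = 2 equations, m = 4 unknowns.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
b = np.array([3.0, 5.0])

# One exact solution (for a full-row-rank A, lstsq returns the minimum-norm one).
x_p, *_ = np.linalg.lstsq(A, b, rcond=None)

# The null space of A has dimension m - n = 2; adding any null-space
# vector to x_p yields another exact solution, so b's representation is redundant.
_, _, Vt = np.linalg.svd(A)
null = Vt[2:]                                   # rows spanning the null space
x_other = x_p + 1.7 * null[0] - 0.3 * null[1]

print(np.allclose(A @ x_p, b), np.allclose(A @ x_other, b))  # True True
```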


X-Ray CT
Mathematically, the measurements are line integrals

Rf(r, θ) = ∫_L f(x, y) dl,

where L = L(r, θ) is the line with offset r and angle θ. The measurements are only available for a few angles θ. How can f be reconstructed?


Projection-Slice Theorem
Theorem. F_r Rf(ω, θ) = f̂(ω cos(θ), ω sin(θ)),
where F_r is the one-dimensional Fourier transform in the variable r.

A measurement Rf(·, θ) for a fixed angle θ thus gives us access to the Fourier transform of f along a ray with polar angle θ.

For 22 different angles we get the Fourier transform of f ∈ R^{256×256} on the rays shown in the picture. This leads to a redundant linear system for f!
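The theorem can be checked numerically in the simplest discrete case, the angle θ = 0, where the line integrals Rf(·, 0) become sums along one image axis (a sketch using NumPy's FFT; the random array is a stand-in for an image):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((256, 256))   # stand-in for an image

# Radon projection at angle 0: integrate (sum) along one axis.
proj = f.sum(axis=1)

# Discrete projection-slice theorem: the 1D Fourier transform of the
# projection equals the central slice of the 2D Fourier transform.
slice_from_projection = np.fft.fft(proj)
central_slice = np.fft.fft2(f)[:, 0]

print(np.allclose(slice_from_projection, central_slice))  # True
```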


Solution Strategies I: Pseudoinverse
A common strategy to solve Ax = b with A ∈ R^{n×m}, n < m, is to use the Moore-Penrose pseudoinverse, which yields the solution

x* = A†b = argmin_{Ax=b} ‖x‖_{ℓ2}.

Let's try it for the X-ray problem, on a typical benchmark image (the Shepp-Logan phantom).
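As a sketch of the strategy (on a small hypothetical random system rather than the actual X-ray operator), the pseudoinverse solution both solves Ax = b exactly and has the smallest ℓ2-norm among all solutions:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 12))      # n = 5 < m = 12, full row rank
b = rng.standard_normal(5)

x_star = np.linalg.pinv(A) @ b        # x* = A† b

# x* is an exact solution ...
print(np.allclose(A @ x_star, b))     # True

# ... and since x* is orthogonal to the null space of A, perturbing it
# inside the null space gives another exact solution with a larger norm.
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[-1]                     # a direction with A @ null_vec ≈ 0
x_alt = x_star + 0.5 * null_vec
print(np.allclose(A @ x_alt, b), np.linalg.norm(x_alt) > np.linalg.norm(x_star))
```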


Solution
[Reconstruction images shown in the original slides.]


Solution Strategies II: TV-regularization
The pseudoinverse approach seeks the solution with the smallest ℓ2-norm. But why should the sought solution have this property?

Define the discrete gradient of an image f ∈ R^{256×256} by

Dhf(i, j) = f(i+1, j) − f(i, j) for i < 256, and Dhf(i, j) = 0 for i = 256,
Dvf(i, j) = f(i, j+1) − f(i, j) for j < 256, and Dvf(i, j) = 0 for j = 256,
Df(i, j) = (Dhf(i, j), Dvf(i, j)).

The vertical gradient of the image is sparse! So instead of searching for the solution with minimal ℓ2-norm, look for the solution with the sparsest gradient!
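The definitions above translate directly into array operations. In this sketch a piecewise-constant test image stands in for the phantom; its gradient has only a few hundred nonzero entries out of 65536:

```python
import numpy as np

def discrete_gradient(f):
    """Dh, Dv as defined above, with zeros in the last row/column."""
    Dh = np.zeros_like(f)
    Dv = np.zeros_like(f)
    Dh[:-1, :] = f[1:, :] - f[:-1, :]   # f(i+1, j) - f(i, j), zero at the boundary
    Dv[:, :-1] = f[:, 1:] - f[:, :-1]   # f(i, j+1) - f(i, j), zero at the boundary
    return Dh, Dv

# A piecewise-constant 256x256 image: a bright square on a dark background.
f = np.zeros((256, 256))
f[64:192, 64:192] = 1.0

Dh, Dv = discrete_gradient(f)
nonzero = np.count_nonzero(Dh) + np.count_nonzero(Dv)
print(nonzero, f.size)   # 512 65536 -- the gradient is sparse
```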


Solution Strategies II: TV-regularization
Following these ideas we solve the sparse optimization problem

x0 = argmin_{Ax=b} ‖Dx‖_0 := #{(i, j) : Dx(i, j) ≠ 0}.

Unfortunately, this optimization problem is NP-hard. So we relax it and solve instead

x1 = argmin_{Ax=b} ‖Dx‖_1 := Σ_{i,j} √(Dhx(i, j)² + Dvx(i, j)²) = Σ_{i,j} ‖Dx(i, j)‖_2.

This can be recast as an SOCP (Second Order Cone Program),

min_{t,x} Σ_{i,j} t(i, j)  s.t.  Ax = b,  ‖Dx(i, j)‖_2 ≤ t(i, j),

and solved efficiently.
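To illustrate the relaxation idea in the simplest setting, here is a sketch that minimizes the plain ℓ1-norm ‖x‖_1 subject to Ax = b, recast as a linear program via the standard splitting x = u − v with u, v ≥ 0 (SciPy's linprog; the TV/SOCP problem above follows the same pattern, with cone constraints instead of an LP):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 80, 5
A = rng.standard_normal((n, m))

# A k-sparse ground truth and its (redundant) measurements.
x_true = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# min ||x||_1 s.t. Ax = b, rewritten as an LP in (u, v) with x = u - v:
#   min sum(u) + sum(v)  s.t.  A u - A v = b,  u, v >= 0.
c = np.ones(2 * m)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_rec = res.x[:m] - res.x[m:]

# With enough measurements, the l1 relaxation recovers the sparse x exactly.
print(np.allclose(x_rec, x_true, atol=1e-6))
```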


Solution
The original image is reconstructed exactly!


Summary

A redundant system Ax = b can be solved exactly for sparse solutions x.


Learning Objectives

Ability to...
◦ understand the theoretical analysis of sparse optimization algorithms,
◦ critique current research publications,
◦ implement basic models and methods in signal processing,
◦ summarize and explain research publications in the field of sparse optimization.


Main Literature Source
[Book cover shown in the original slides: M. Elad, Sparse and Redundant Representations.]


Modus Operandi

Interactivity!

Everybody is required to
...give a ∼50-minute talk on 1-2 chapters of Elad's book
...attend the talks of others
...critique and discuss the talks of others (each time with a different focus, e.g., logical coherence, body language, slides, speech)


Hints

When giving a talk
...practice the right speed
...think thoroughly about the balance of detail vs. big picture
...keep a "red thread" throughout the talk
...give a "take-home message"
