
Page 1:

Model-Based Compressive Sensing

Presenter: Jason David Bonior

ECE / CMR

Tennessee Technological University

November 5, 2010

Reading Group

(Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, Chinmay Hegde)

Page 2:

Outline

■ Introduction
■ Compressive Sensing
■ Beyond Sparse and Compressible Signals
■ Model-Based Signal Recovery Algorithms
■ Example: Wavelet Tree Model
■ Example: Block-Sparse Signals and Signal Ensembles
■ Conclusions

Page 3:

Introduction

■ Shannon/Nyquist Sampling
□ Sampling rate must be 2x the Fourier bandwidth
□ Not always feasible

■ Reduction of dimensionality by representing the signal as a sparse set of coefficients in a basis expansion
□ Sparse means that K << N coefficients are nonzero and need to be transmitted/stored/etc.

■ Compressive sensing can be used instead of Nyquist sampling when the signal is known to be sparse or compressible

Page 4:

Background on Compressive Sensing
Sparse Signals

■ We can represent any signal $x \in \mathbb{R}^N$ in terms of the coefficients of a basis expansion:
$x = \Psi\alpha = \sum_{i=1}^{N} \alpha_i \psi_i$

■ A signal is K-sparse iff only $K \ll N$ entries of $\alpha$ are nonzero
■ The support of x, supp(x), is the list of the indices of the nonzero entries
■ The set of all K-sparse signals is the union of the $\binom{N}{K}$ K-dimensional subspaces aligned with the coordinate axes in $\mathbb{R}^N$
□ Denote this union of subspaces by $\Sigma_K$ (a small numerical illustration follows)
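A minimal numpy sketch of these definitions (not from the slides; the sizes N and K are arbitrary illustrative choices): it builds a K-sparse vector, reads off supp(x), and counts the subspaces making up $\Sigma_K$.

```python
import numpy as np
from math import comb

N, K = 32, 4                        # ambient dimension and sparsity level
rng = np.random.default_rng(0)
x = np.zeros(N)
idx = rng.choice(N, K, replace=False)
x[idx] = rng.standard_normal(K)     # K nonzero coefficients, rest exactly zero

print("supp(x):", np.flatnonzero(x))        # indices of the nonzero entries
print("subspaces in Sigma_K:", comb(N, K))  # C(32, 4) = 35960 axis-aligned subspaces
```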

Page 5:

Background on Compressive Sensing
Compressible Signals

■ Many signals are not sparse but are well approximated by sparse signals
□ Called "compressible signals"

■ A signal is compressible if its coefficients, sorted in order of decreasing magnitude, decay according to a power law:
$|\alpha|_{(i)} \le S\, i^{-1/r}, \quad i = 1, \dots, N$

□ Because of the rapid decay of the coefficients, such signals can be approximated as K-sparse
▪ The error of such an approximation (checked numerically below) is
$\|x - x_K\|_2 \le C\, S\, K^{-s}, \quad s = \tfrac{1}{r} - \tfrac{1}{2}$
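A quick numerical check of this decay relationship (an illustrative sketch; the values of S, r, and N are arbitrary choices, not values from the slides):

```python
import numpy as np

N, S, r = 1024, 1.0, 0.5            # power law |x_(i)| <= S * i**(-1/r)
i = np.arange(1, N + 1)
x = S * i ** (-1.0 / r)             # coefficients already sorted by magnitude

s = 1.0 / r - 0.5                   # predicted decay exponent of the K-term error
for K in (16, 64, 256):
    err = np.linalg.norm(x[K:])     # ||x - x_K||_2: energy outside the top K terms
    print(f"K={K:4d}  error={err:.5f}  S*K^(-s)={S * K ** (-s):.5f}")
```

The printed error tracks $S K^{-s}$ up to a constant factor, which is exactly what the bound asserts.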


Page 6:

Background on Compressive Sensing
Compressible Signals

■ Expressing a compressible signal as K-sparse is known as transform coding (sketched below):
□ Record the signal's full N samples
□ Express the signal in terms of basis functions
□ Discard all but the K largest coefficients
□ Encode the coefficients and their locations

■ Transform coding has drawbacks:
□ Must start with the full N samples
□ Must compute all N coefficients
□ Must encode the locations of the coefficients we keep
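The four encoding steps can be sketched in a few lines; this illustration uses a DCT basis via scipy, which is an arbitrary demonstration choice rather than the basis assumed on the slides:

```python
import numpy as np
from scipy.fft import dct, idct

N, K = 256, 16
t = np.linspace(0, 1, N)
x = np.sin(6 * np.pi * t) + 0.3 * np.sin(22 * np.pi * t)  # smooth test signal

alpha = dct(x, norm="ortho")            # 1) record N samples, 2) compute all N coefficients
keep = np.argsort(np.abs(alpha))[-K:]   # 3) locate the K largest coefficients
alpha_K = np.zeros(N)
alpha_K[keep] = alpha[keep]             # 4) encode only these values and their locations
x_K = idct(alpha_K, norm="ortho")       # decoder rebuilds from the K kept coefficients

print("relative error:", np.linalg.norm(x - x_K) / np.linalg.norm(x))
```

Note that the encoder had to touch all N samples and all N coefficients, which is exactly the drawback compressive sensing avoids.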


Page 7:

Background on Compressive Sensing
Restricted Isometry Property (RIP)

■ Compressive sensing combines signal acquisition and compression by taking $M < N$ linear measurements with a measurement matrix $\Phi$: $y = \Phi x$

■ In order to recover a good estimate of our signal x from M compressive measurements, our measurement matrix must satisfy the Restricted Isometry Property (a Monte Carlo illustration follows): for all K-sparse x,
$(1 - \delta_K)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_K)\|x\|_2^2$
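A Monte Carlo sanity check of this near-isometry on random K-sparse vectors (illustrative only: sampling random sparse vectors does not verify the RIP over all of $\Sigma_K$, it merely shows the concentration behavior):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K, trials = 256, 80, 5, 2000
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian, unit expected column norm

ratios = []
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

# RIP asks (1 - delta_K) <= ||Phi x||^2 / ||x||^2 <= (1 + delta_K) for ALL K-sparse x;
# empirically the ratios cluster tightly around 1 for a random Gaussian Phi.
print("empirical range:", min(ratios), max(ratios))
```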


Page 8:

Background on Compressive Sensing
Recovery Algorithms

■ We can conceive of infinitely many signal coefficient vectors that produce the same set of compressive measurements. If we seek the sparsest x consistent with y,
$\hat{x} = \arg\min_x \|x\|_0 \quad \text{s.t.} \quad y = \Phi x,$
we can recover a K-sparse signal from M = 2K compressive measurements. This is a combinatorial NP-complete problem and is not stable in the presence of noise (a brute-force illustration follows).
□ Need to find another way to solve this problem
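The combinatorial nature of this search is easy to see in code: a brute-force $\ell_0$ solver must sweep all $\binom{N}{K}$ candidate supports, which is only feasible for toy sizes (a sketch; all dimensions here are arbitrary small choices):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N, M, K = 12, 6, 2                      # tiny on purpose: C(12, 2) = 66 supports to test
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[[3, 7]] = [1.0, -2.0]
y = Phi @ x_true

best, best_res = None, np.inf
for supp in combinations(range(N), K):  # exhaustive search over all K-subsets
    cols = Phi[:, supp]
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
    res = np.linalg.norm(cols @ coef - y)
    if res < best_res:
        best, best_res = supp, res

print("recovered support:", best)       # (3, 7) in the noiseless case
```

Doubling N roughly squares the number of supports for fixed K, which is why practical recovery turns to convex relaxations and greedy methods instead.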

Page 9:

Background on Compressive Sensing
Recovery Algorithms

■ Convex Optimization
$\hat{x} = \arg\min_x \|x\|_1 \quad \text{s.t.} \quad y = \Phi x$
□ Linear program, polynomial time
□ Adaptations exist to handle noise
▪ Basis Pursuit with Denoising (BPDN), Complexity-Based Regularization, and the Dantzig Selector

■ Greedy Search (a minimal IHT sketch follows)
□ Matching Pursuit, Orthogonal Matching Pursuit, StOMP, Iterative Hard Thresholding (IHT), CoSaMP, Subspace Pursuit (SP)
▪ All use a best L-term approximation for the estimated signal
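As one concrete member of the greedy family, here is a minimal IHT sketch (the step size derived from the spectral norm of Φ and all problem sizes are illustrative assumptions, not settings from the slides):

```python
import numpy as np

def iht(y, Phi, K, iters=200):
    """Iterative Hard Thresholding: gradient step + best K-term approximation."""
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2     # conservative step size
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + mu * Phi.T @ (y - Phi @ x)     # gradient step on ||y - Phi x||^2
        x[np.argsort(np.abs(x))[:-K]] = 0.0    # hard threshold: keep the K largest entries
    return x

rng = np.random.default_rng(0)
N, M, K = 256, 80, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = iht(Phi @ x0, Phi, K)
print("relative recovery error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```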

Page 10:

Background on Compressive Sensing
Performance Bounds on Signal Recovery

■ For $M = O(K \log(N/K))$ compressive measurements:
□ All $\ell_1$ techniques and the CoSaMP, SP, and IHT iterative techniques offer stable recovery with performance close to the optimal K-term approximation
□ With random $\Phi$, all results hold with high probability
▪ In a noise-free setting these offer perfect recovery
▪ In the presence of noise the error is bounded by
$\|x - \hat{x}\|_2 \le C\,\|n\|_2$
▪ For an s-compressible signal with noise of bounded norm, the mean-square error obeys a bound of the form
$\|x - \hat{x}\|_2 \le C_1\|x - x_K\|_2 + C_2 \tfrac{1}{\sqrt{K}}\|x - x_K\|_1 + C_3\|n\|_2,$
where both approximation terms scale as $S K^{-s}$

Page 11:

Beyond Sparse and Compressible Signals

■ Coefficients of both natural and manmade signals often exhibit interdependency
□ We can model this structure in order to:
▪ Reduce the degrees of freedom
▪ Reduce the number of compressive measurements needed to reconstruct the signal

Page 12:

Beyond Sparse and Compressible Signals
Model-Sparse Signals


Page 13:

Beyond Sparse and Compressible Signals
Model-Based RIP

■ If x is model-sparse, we can relax the RIP constraint on Φ: the near-isometry needs to hold only for signals in the model $\mathcal{M}_K$ (the $\mathcal{M}_K$-RIP), rather than for all of $\Sigma_K$.

Page 14:

Beyond Sparse and Compressible Signals
Model-Compressible Signals


Page 15:

Beyond Sparse and Compressible Signals

■ Nested Model Approximations and Residual Subspaces

■ Restricted Amplification Property (RAmP)
□ The number of compressive measurements M required for a random matrix to have the $\mathcal{M}_K$-RIP is determined by the number of canonical subspaces $m_K$. This does not extend to model-compressible signals.
□ We can analyze robustness by treating the part of the signal outside its K-term approximation as noise

Page 16:

Beyond Sparse and Compressible Signals

■ Restricted Amplification Property (RAmP)
□ A matrix Φ has the $(\epsilon_K, r)$-RAmP for the residual subspaces $R_{j,K}$ of model $\mathcal{M}$ if
$\|\Phi u\|_2^2 \le (1 + \epsilon_K)\, j^{2r}\, \|u\|_2^2 \quad \text{for all } u \in R_{j,K},\ 1 \le j \le \lceil N/K \rceil$

■ We can determine the number of measurements M required for a random measurement matrix Φ to have the RAmP with high probability; the bound takes the form
$M \ge \max_{1 \le j \le \lceil N/K \rceil} \frac{2K_j + 4\ln R_j + 2t}{\left(j^r \sqrt{1 + \epsilon_K} - 1\right)^2},$
where $K_j$ and $R_j$ are the dimension and number of the residual subspaces $R_{j,K}$, and the property then holds with probability $1 - e^{-t}$

Page 17:

Model-Based Signal Recovery Algorithms

■ For greedy algorithms, just replace the best K-term approximation step with the corresponding K-term model-based approximation (see the sketch below)

■ These algorithms have fewer subspaces to search, so fewer measurements are required to obtain the same accuracy as conventional CS
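A sketch of that single-step substitution, using IHT as the host algorithm (the `project` callback is a hypothetical interface introduced here for illustration; any structured projection can be plugged in):

```python
import numpy as np

def best_kterm(x, K):
    """Conventional step: keep the K largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]
    out[idx] = x[idx]
    return out

def model_iht(y, Phi, K, project=best_kterm, iters=200):
    """IHT in which the K-term approximation is replaced by a model projection."""
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = project(x + mu * Phi.T @ (y - Phi @ x), K)  # <- the only change vs. standard IHT
    return x

# `project` can be best_kterm (standard CS) or a tree/block-structured projection,
# which searches far fewer subspaces and hence needs fewer measurements.
```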


Page 18:

Model-Based Signal Recovery Algorithms
Model-Based CoSaMP

■ CoSaMP was chosen because:
□ It offers robust recovery on par with the best convex-optimization approaches
□ It has a simple iterative greedy structure which can be easily modified for the model-based case (sketched below)
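A structural sketch of model-based CoSaMP under that modification (simplified: halting tests and details of the full published algorithm are omitted; `approx` is a hypothetical callback returning the support of a model-based approximation):

```python
import numpy as np

def kterm_support(x, K):
    """Conventional choice: indices of the K largest-magnitude entries."""
    return np.argsort(np.abs(x))[-K:]

def model_cosamp(y, Phi, K, approx=kterm_support, iters=20):
    """CoSaMP skeleton in which both best K-term approximation steps are
    replaced by a model-based approximation `approx(x, K)`."""
    N = Phi.shape[1]
    x = np.zeros(N)
    for _ in range(iters):
        proxy = Phi.T @ (y - Phi @ x)              # proxy for the residual signal
        omega = approx(proxy, 2 * K)               # model approximation of the proxy
        T = np.union1d(omega, np.flatnonzero(x))   # merge with the current support
        coef, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        b = np.zeros(N)
        b[T] = coef                                # least-squares estimate on T
        supp = approx(b, K)                        # prune back onto the model
        x = np.zeros(N)
        x[supp] = b[supp]
    return x

# Passing kterm_support reproduces plain CoSaMP; passing a tree- or
# block-structured `approx` yields the model-based variant.
```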


Page 19:

Model-Based Signal Recovery Algorithms
Performance of Model-Sparse Signal Recovery


Page 20:

Model-Based Signal Recovery Algorithms
Performance of Model-Compressible Signal Recovery

■ We use the RAmP as a condition on our measurement matrix Φ to obtain a robustness guarantee for signal recovery with noisy measurements.

Page 21:

Model-Based Signal Recovery Algorithms
Robustness to Model Mismatch

■ A model mismatch occurs when the chosen model does not exactly match the signal we are trying to recover.

■ We start with the best-case possibility:
□ Model-based CoSaMP (sparsity mismatch): the recovery error is controlled by how far the signal lies from the model plus the measurement noise, a bound of the form
$\|x - \hat{x}\|_2 \le C_1 \|x - x_{\mathcal{M}_K}\|_2 + C_2 \tfrac{1}{\sqrt{K}}\|x - x_{\mathcal{M}_K}\|_1 + C_3 \|n\|_2$
□ (Compressibility mismatch): for an s-model-compressible signal the approximation terms decay as $K^{-s}$, giving an analogous bound with an additional logarithmic factor in $N/K$

■ Worst case: we end up requiring the same number of measurements required for conventional CS

Page 22:

Model-Based Signal Recovery Algorithms
Computational Complexity of Model-Based Recovery

■ Model-based algorithms differ from the standard forms of the algorithms in two ways:
□ There is a reduction in the number of required measurements, which reduces the computational complexity.
□ The K-term approximation can be implemented using a simple sorting algorithm (a low-cost implementation).

Page 23:

Example: Wavelet Tree Model

■ Wavelet coefficients can be naturally organized into a tree structure, with the largest coefficients clustering together along the branches of the tree.
□ This motivated the authors toward a connected tree model for wavelet coefficients.
▪ Previous work did not provide bounds on the number of compressive measurements.

Page 24:

Example: Wavelet Tree Model
Tree-Sparse Signals

■ The wavelet representation of a signal x is given in terms of its scaling coefficient $v_0$ and wavelet coefficients $w_{i,j}$:
$x = v_0\,\nu + \sum_{i=1}^{I} \sum_{j=1}^{2^{i-1}} w_{i,j}\, \psi_{i,j}$

■ Nested supports create a parent/child relationship between the wavelet coefficients at different scales (the index arithmetic is sketched below).

■ Discontinuities create large coefficients, which results in a chain from root to leaf.
□ This relationship has been exploited in many wavelet processing and compression algorithms.
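A minimal sketch of the parent/child index arithmetic, assuming a heap-style dyadic layout of the coefficients (index 0 holds the scaling coefficient, index 1 the root wavelet coefficient, and level i occupies indices $2^{i-1}$ through $2^i - 1$; this layout is an assumption for illustration, not mandated by the slides):

```python
def children(n, N):
    """Wavelet-tree children of coefficient n at the next finer scale (n >= 1)."""
    return [c for c in (2 * n, 2 * n + 1) if c < N]

def parent(n):
    """Parent of coefficient n at the next coarser scale; the root has none."""
    return n // 2 if n > 1 else None

N = 16
for n in (1, 2, 3, 5):
    print(f"node {n}: parent={parent(n)}, children={children(n, N)}")
```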


Page 25:

Example: Wavelet Tree Model
Tree-Sparse Signals


Page 26:

Example: Wavelet Tree Model
Tree-Based Approximation

■ The optimal tree-based approximation for signal recovery solves
$\hat{x} = \arg\min_{\bar{x}:\ \mathrm{supp}(\bar{x}) \in \mathcal{T}_K} \|x - \bar{x}\|_2,$
where $\mathcal{T}_K$ is the set of supports that form rooted connected subtrees with K nodes

□ An efficient algorithm exists: the Condensing Sort and Select Algorithm (CSSA).
▪ CSSA solves the problem by condensing nonmonotonic segments of the branches using an iterative sort-and-average.

□ Subtree approximations coincide with K-term approximations when the wavelet coefficients are monotonically nonincreasing along the tree branches out from the root.

Page 27:

Example: Wavelet Tree Model
Tree-Based Approximation

■ CSSA solves the tree-approximation problem by condensing nonmonotonic segments of the branches using an iterative sort-and-average.
□ Condensed nodes are called supernodes
□ This can also be implemented as a greedy search among nodes (a simplified sketch follows):
▪ The algorithm calculates the average wavelet coefficient for the subtree rooted at each node
▪ It records the largest average among all the subtrees as the energy for that node
▪ It searches for the unselected node with the largest energy and adds the subtree corresponding to that node's energy to the estimated support as a supernode
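A simplified greedy stand-in for this search (it grows a rooted connected subtree by repeatedly adding the largest-magnitude coefficient adjacent to the current tree; it omits CSSA's supernode averaging, so it is an illustration of connected-subtree selection rather than the full algorithm):

```python
import numpy as np

def greedy_tree_support(w, K):
    """Return a connected subtree support of K nodes, grown greedily from the root.
    Assumes the heap-style coefficient layout sketched earlier (root at index 1)."""
    N = len(w)
    support = {1}                                   # start at the root wavelet coefficient
    frontier = {c for c in (2, 3) if c < N}         # nodes adjacent to the current tree
    while len(support) < K and frontier:
        n = max(frontier, key=lambda j: abs(w[j]))  # best candidate touching the tree
        frontier.remove(n)
        support.add(n)
        frontier |= {c for c in (2 * n, 2 * n + 1) if c < N and c not in support}
    return sorted(support)

w = np.random.default_rng(3).standard_normal(32)
print(greedy_tree_support(w, 6))                    # always a rooted connected subtree
```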


Page 28:

Example: Wavelet Tree Model
Tree-Based Approximation


Page 29:

Example: Wavelet Tree Model
Tree-Compressible Signals

■ Tree-approximation classes contain signals whose wavelet coefficients exhibit a loose decay from coarse to fine scales.

Page 30:

Example: Wavelet Tree Model
Stable Tree-Based Recovery from Compressive Measurements


Page 31:

Example: Wavelet Tree Model
Experiments


Page 32:

Example: Wavelet Tree Model
Experiments

■ Monte Carlo simulation study of the impact of the number of measurements M on model-based and conventional recovery for a class of tree-sparse piecewise polynomials

■ Each point is the normalized recovery error measured over 500 sample trials

■ For each trial (the per-trial setup is sketched below):
□ Generate a new piecewise-polynomial signal with five cubic polynomial pieces and randomly placed discontinuities
□ Compute the K-term tree approximation using CSSA
□ Measure the resulting signal using a matrix with i.i.d. Gaussian entries
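A sketch of that per-trial setup (the recovery step is elided; the piecewise-cubic generator, the random seed, and all sizes are illustrative stand-ins for the authors' exact experimental settings):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, pieces = 1024, 200, 5

def piecewise_cubic(N, pieces, rng):
    """Cubic polynomial pieces separated by randomly placed discontinuities."""
    knots = np.sort(rng.choice(np.arange(1, N), pieces - 1, replace=False))
    edges = np.concatenate(([0], knots, [N]))
    x = np.empty(N)
    for a, b in zip(edges[:-1], edges[1:]):
        t = np.linspace(-1, 1, b - a)
        x[a:b] = np.polyval(rng.standard_normal(4), t)  # random cubic on each piece
    return x

x = piecewise_cubic(N, pieces, rng)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)          # i.i.d. Gaussian measurement matrix
y = Phi @ x
# each trial would then run CoSaMP and model-based CoSaMP on (y, Phi)
# and record the normalized error ||x - x_hat|| / ||x||
```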


Page 33:

Example: Wavelet Tree Model
Experiments


Page 34:

Example: Wavelet Tree Model
Experiments

■ Generated sample piecewise-polynomial signals as before
■ Computed the K-term tree approximation
■ Computed M measurements of each approximation
■ Added Gaussian noise of expected norm
■ Recovered the signal using CoSaMP and model-based recovery
■ Measured the error for each case

Page 35:

Example: Wavelet Tree Model
Experiments


Page 36:

Example: Wavelet Tree Model
Experiments


Page 37:

Example: Block-Sparse Signals and Signal Ensembles

■ The locations of significant coefficients cluster in blocks under a specific sorting order

■ This has been investigated in CS applications:
□ DNA microarrays
□ Magnetoencephalography

■ There is a similar problem in CS for signal ensembles, such as sensor networks and MIMO communication
□ Several signals share a common coefficient support set
□ The signals can be reshaped into a single vector by concatenation and the coefficients rearranged so that the vector has block sparsity

Page 38:

Example: Block-Sparse Signals and Signal Ensembles


■ Block-Sparse Signals

■ Block-Based Approximation
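A minimal sketch of block-based approximation: keep the K blocks of length J with the largest $\ell_2$ energy and zero out the rest (the specific layout coded here, a vector whose length is a multiple of J, is an assumption consistent with the following slides):

```python
import numpy as np

def block_approx(x, J, K):
    """Best block-based approximation: keep the K most energetic length-J blocks."""
    blocks = x.reshape(-1, J)                     # assumes len(x) is a multiple of J
    energy = np.linalg.norm(blocks, axis=1)       # l2 energy of each block
    keep = np.argsort(energy)[-K:]                # indices of the K largest blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

x = np.array([0.1, 0.2, 5.0, 4.0, 0.0, 0.1, 3.0, 2.0])
print(block_approx(x, J=2, K=2))                  # keeps blocks [5, 4] and [3, 2]
```

This plays the same role for block models that hard thresholding plays for plain sparsity, and it plugs directly into the model-based recovery skeletons sketched earlier.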

Page 39:

Example: Block-Sparse Signals and Signal Ensembles

■ Block-Compressible Signals


Page 40:

Example: Block-Sparse Signals and Signal Ensembles

■ Stable Block-Based Recovery from Compressive Measurements
□ The same number of measurements is required for block-sparse and block-compressible signals.
□ The bound on the number of measurements required is
$M = O\!\left(K \log(N/K) + KJ\right)$
□ The first term of this bound matches the order of the bound for conventional CS.
□ The second term represents a linear dependence on the block size J.
▪ An improvement over conventional CS for signal ensembles

Page 41:

Example: Block-Sparse Signals and Signal Ensembles

■ Stable Block-Based Recovery from Compressive Measurements
□ In a distributed setting we can break an M x JN dense matrix into J pieces of size M x N, compute the compressive measurements at each sensor, and then sum the results to obtain the complete measurement vector
□ According to our bound, the joint requirement $M = O\!\left(K \log(N/K) + KJ\right)$ should be compared with the $O\!\left(JK \log(N/K)\right)$ total measurements needed to recover each signal independently; for large values of J, the number of measurements required is lower than for independent recovery (see the comparison below)


Page 42:

Example: Block-Sparse Signals and Signal Ensembles

■ Experiments
□ Comparison of model-based recovery to CoSaMP for block-sparse signals.
□ The model-based procedures are several times faster than convex-optimization-based procedures.

Page 43:

Example: Block-Sparse Signals and Signal Ensembles


Page 44:

Example: Block-Sparse Signals and Signal Ensembles


Page 45:

Conclusions

■ Signal models can produce significant performance gains over conventional CS

■ The wavelet-tree procedure offers a considerable speed-up
■ The block-sparse procedure can recover signals with fewer measurements than each sensor recovering its signal independently

Future Work:
□ The authors have only considered models that are geometrically described as unions of subspaces. There may be potential to extend these models to more complex geometries.
□ It may be possible to integrate these models into other iterative algorithms

Page 46:

Thank you!
