Richard Baraniuk Rice University Model-based Sparsity

TRANSCRIPT

  • Slide 1
  • Richard Baraniuk Rice University Model-based Sparsity
  • Slide 2
  • Marco Duarte, Chinmay Hegde, Volkan Cevher
  • Slide 3
  • The Digital Universe. Size: 281 billion gigabytes generated in 2007; digital bits > stars in the universe; growing by a factor of 10 every 5 years; > Avogadro's number (6.02x10^23) in 15 years. Growth fueled by multimedia data: audio, images, video, surveillance cameras, sensor nets (2B photos on Flickr, 4.2M security cameras in the UK). In 2007, digital data generated > total storage; by 2011, nearly half of the digital universe will have no home. [Source: IDC whitepaper "The Diverse and Exploding Digital Universe," March 2008]
  • Slide 4
  • Three Challenges 1. Data acquisition 2. Data compression 3. Data processing
  • Slide 5
  • One Solution to 1. Data acquisition, 2. Data compression, 3. Data processing: sparsity
  • Slide 6
  • Sparsity / Compressibility. [figure: pixels and their large wavelet coefficients (blue = 0); wideband signal samples and their large Gabor (time-frequency) coefficients]
  • Slide 7
  • Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero; model: union of K-dimensional subspaces aligned with the coordinate axes. [figure: sorted index]
  • Slide 8
  • Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero; model: union of K-dimensional subspaces. Compressible signal: sorted coordinates decay rapidly to zero; well-approximated by a K-sparse signal (simply by thresholding). [figure: sorted index]
  • Slide 9
  • Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero; model: union of K-dimensional subspaces. Compressible signal: sorted coordinates decay rapidly to zero with a power law, |x|_(i) <= R i^(-1/p); model: ell_p ball. [figure: sorted index, power-law decay] (a numerical sketch of the decay follows)
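A quick numerical illustration of this power-law model (a minimal sketch, not from the slides; N and p are arbitrary choices): for sorted coefficients decaying as i^(-1/p) with p < 1, the best K-term approximation error in the ell-2 norm falls off as K^(1/2 - 1/p).

```python
import numpy as np

# Sorted coefficients with power-law decay |x|_(i) = i^(-1/p), p < 1
N, p = 2**14, 0.7
x_sorted = np.arange(1, N + 1) ** (-1.0 / p)

for K in (16, 64, 256):
    tail = np.linalg.norm(x_sorted[K:])   # best K-term approximation error
    print(K, tail, K ** (0.5 - 1.0 / p))  # error tracks K^(1/2 - 1/p) up to a constant
```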
  • Slide 10
  • Compressive Sensing. Sensing via randomized dimensionality reduction; sub-Nyquist sampling without aliasing. [figure: y = Phi x; M random measurements of a sparse signal with K nonzero entries]
  • Slide 11
  • Compressive Sensing. Sensing via randomized dimensionality reduction. [figure: y = Phi x measurement model] Recovery: solve this ill-posed inverse problem via sparse approximation; exploit the geometrical structure of sparse/compressible signals.
  • Slide 12
  • Restricted Isometry Property (RIP). Preserve the structure of sparse/compressible signals. [figure: K-dim subspaces]
  • Slide 14
  • Restricted Isometry Property (RIP). Stable embedding: RIP of order 2K implies, for all K-sparse x1 and x2, (1 - delta) ||x1 - x2||_2^2 <= ||Phi x1 - Phi x2||_2^2 <= (1 + delta) ||x1 - x2||_2^2. [figure: K-dim subspaces] (a numerical check follows)
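As a quick sanity check of the stable-embedding claim (a minimal sketch, not from the slides; the matrix sizes are arbitrary choices), one can draw a Gaussian matrix and verify that it approximately preserves the lengths of random sparse difference vectors. Random draws only probe typical behavior; the RIP proper is a uniform statement over all K-sparse pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 128, 1024, 10
Phi = rng.normal(size=(M, N)) / np.sqrt(M)  # iid Gaussian, unit expected column norm

ratios = []
for _ in range(1000):
    x = np.zeros(N)
    idx = rng.choice(N, 2 * K, replace=False)  # x1 - x2 is at most 2K-sparse
    x[idx] = rng.normal(size=2 * K)
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

print(min(ratios), max(ratios))  # both close to 1 for these M, N, K
```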
  • Slide 15
  • Compressive Sensing. A random subGaussian matrix Phi has the RIP with high probability if M = O(K log(N/K)). [figure: y = Phi x; M random measurements, K nonzero entries]
  • Slide 16
  • Compressive Sensing. Signal recovery via ell_1 optimization: min ||x||_1 subject to y = Phi x [Candes, Romberg, Tao; Donoho]. [figure: y = Phi x measurement model] (a basis-pursuit sketch follows)
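This program (basis pursuit) is a linear program in disguise. A minimal SciPy sketch (my formulation, not code from the talk): split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v).

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 subject to Phi x = y as an LP via x = u - v, u, v >= 0."""
    M, N = Phi.shape
    c = np.ones(2 * N)             # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])  # equality constraint: Phi u - Phi v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:N] - res.x[N:]
```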
  • Slide 17
  • Compressive Sensing. Signal recovery via iterative greedy algorithms: (orthogonal) matching pursuit [Gilbert, Tropp]; iterated thresholding [Nowak, Figueiredo; Kingsbury, Reeves; Daubechies, Defrise, De Mol; Blumensath, Davies; ...]; CoSaMP [Needell and Tropp] (a CoSaMP sketch follows). [figure: y = Phi x measurement model]
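For concreteness, here is a compact CoSaMP sketch following the published steps of Needell and Tropp (the fixed iteration count and the absence of a stopping rule are my simplifications):

```python
import numpy as np

def cosamp(Phi, y, K, n_iter=30):
    """CoSaMP [Needell and Tropp]: greedy recovery of a K-sparse signal."""
    N = Phi.shape[1]
    x = np.zeros(N)
    r = y.copy()
    for _ in range(n_iter):
        proxy = Phi.T @ r                           # form signal proxy from residual
        omega = np.argsort(np.abs(proxy))[-2 * K:]  # 2K largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))    # merge with current support
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]  # least squares on T
        keep = np.argsort(np.abs(b))[-K:]           # prune to best K-term approximation
        x = np.zeros(N)
        x[keep] = b[keep]
        r = y - Phi @ x                             # update residual
    return x
```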
  • Slide 18
  • Iterated Thresholding: update signal estimate; prune signal estimate (best K-term approx); update residual (see the sketch below).
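Those three steps map directly onto a few lines of NumPy. A minimal sketch (the unit step size mu assumes Phi is scaled so its restricted spectral norm is near 1, an assumption on my part):

```python
import numpy as np

def iht(Phi, y, K, n_iter=100, mu=1.0):
    """Iterative hard thresholding: x <- H_K(x + mu * Phi^T (y - Phi x))."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + mu * Phi.T @ (y - Phi @ x)  # update signal estimate
        small = np.argsort(np.abs(x))[:-K]  # all entries except the K largest
        x[small] = 0                        # prune: best K-term approximation
    return x
```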
  • Slide 19
  • Performance. Using ell_1 methods, CoSaMP, or IHT. Sparse signals: exact recovery from noise-free measurements, stable recovery from noisy measurements. Compressible signals: recovery as good as the best K-sparse approximation, i.e., CS recovery error <= signal K-term approx error + noise (up to constants).
  • Slide 20
  • Single-Pixel Camera. [figure: target, N=65536 pixels; recovery from M=1300 randomized measurements (2%) and M=11000 randomized measurements (16%)]
  • Slide 21
  • From Sparsity to Structured Sparsity
  • Slide 22
  • Beyond Sparse Models. Sparse signal model captures simplistic primary structure: wavelets for natural images; Gabor atoms for chirps/tones; pixels for background-subtracted images.
  • Slide 23
  • Beyond Sparse Models. Sparse signal model captures simplistic primary structure. Modern compression/processing algorithms capture richer secondary coefficient structure: wavelets for natural images; Gabor atoms for chirps/tones; pixels for background-subtracted images.
  • Slide 24
  • Sparse Signals K-sparse signals comprise a particular set of K-dim subspaces
  • Slide 25
  • Structured-Sparse Signals. A structured K-sparse signal model comprises a particular (reduced) set of K-dim subspaces [Blumensath and Davies]. Fewer subspaces means a relaxed RIP, hence stable recovery using fewer measurements M.
  • Slide 26
  • Model-based CS Running Example: Tree-Sparse Signals
  • Slide 27
  • Wavelet Sparse Typical of wavelet transforms of natural signals and images (piecewise smooth)
  • Slide 28
  • Tree-Sparse. Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. Typical of wavelet transforms of natural signals and images (piecewise smooth).
  • Slide 29
  • Tree-Sparse. Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. Sparse approx: find the best set of K coefficients by sorting / hard thresholding. Tree-sparse approx: find the best rooted subtree of coefficients via CSSA [B] or dynamic programming [Donoho] (a greedy stand-in is sketched below).
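To make the tree-projection step concrete, here is a simple greedy stand-in (my sketch: it always returns a valid rooted subtree, but it is not the exact CSSA or dynamic-programming solution the slide cites). It assumes a binary wavelet tree stored in heap order, where node i has children 2i+1 and 2i+2.

```python
import heapq
import numpy as np

def greedy_tree_approx(x, K):
    """Greedy K-term rooted-subtree approximation for a heap-ordered binary tree."""
    N = len(x)
    selected = np.zeros(N, dtype=bool)
    frontier = [(-abs(x[0]), 0)]          # max-heap on |coefficient|, start at root
    while frontier and selected.sum() < K:
        _, i = heapq.heappop(frontier)
        selected[i] = True                # grow the subtree by its largest neighbor
        for c in (2 * i + 1, 2 * i + 2):  # expose children of the chosen node
            if c < N:
                heapq.heappush(frontier, (-abs(x[c]), c))
    out = np.zeros_like(x)
    out[selected] = x[selected]
    return out
```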
  • Slide 30
  • Wavelet Sparse. Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. RIP: stable embedding requires M = O(K log(N/K)). [figure: K-planes]
  • Slide 31
  • Tree-Sparse. Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. Tree-RIP: stable embedding with M = O(K) [Blumensath and Davies]. [figure: K-planes]
  • Slide 32
  • Tree-Sparse. Model: K-sparse coefficients + significant coefficients lie on a rooted subtree. Tree-RIP: stable embedding [Blumensath and Davies]. Recovery: inject the tree-sparse approx into IHT/CoSaMP.
  • Slide 33
  • Recall: Iterated Thresholding: update signal estimate; prune signal estimate (best K-term approx); update residual.
  • Slide 34
  • Iterated Model Thresholding: update signal estimate; prune signal estimate (best K-term model approx); update residual (see the sketch below).
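The only change from plain iterated thresholding is the prune step, which becomes a pluggable model projection. A minimal sketch (the `model_approx` interface is my choice; any function returning the best model-based approximation will do):

```python
import numpy as np

def model_iht(Phi, y, model_approx, n_iter=100, mu=1.0):
    """Model-based IHT: IHT with the K-term prune replaced by a model projection."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + mu * Phi.T @ (y - Phi @ x)  # update signal estimate
        x = model_approx(x)                 # prune onto the structured-sparsity model
    return x
```

Passing `lambda z: greedy_tree_approx(z, K)` from the earlier sketch yields a tree-sparse variant, while plain hard thresholding recovers standard IHT.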
  • Slide 35
  • Tree-Sparse Signal Recovery. Signal length N=1024, random measurements M=80. [figure: target signal; CoSaMP (RMSE=1.12); L1-minimization (RMSE=0.751); tree-sparse CoSaMP (RMSE=0.037)]
  • Slide 36
  • Compressible Signals. Real-world signals are compressible, not sparse. Recall: compressible means approximable by sparse; compressible signals lie close to a union of subspaces, i.e., the K-term approximation error decays rapidly as K grows. The approximations are nested, in that the best K-term support is contained in the best 2K-term support. If Phi has the RIP, then both sparse and compressible signals are stably recoverable via LP or greedy algorithms. [figure: sorted index]
  • Slide 37
  • Model-Compressible Signals. Model-compressible means approximable by model-sparse; model-compressible signals lie close to a reduced union of subspaces, i.e., the model-approximation error decays rapidly as K grows. Nested approximation property (NAP): model approximations are nested, in that the best K-term model approximation is contained in the best 2K-term model approximation.
  • Slide 41
  • Model-Compressible Signals. New result: while the model-RIP enables stable model-sparse recovery, the model-RIP is not sufficient for stable model-compressible recovery!
  • Slide 42
  • Model-Compressible Signals. Ex: if Phi has the Tree-RIP, then tree-sparse signals can be stably recovered with M = O(K). However, tree-compressible signals cannot be stably recovered.
  • Slide 43
  • Stable Recovery. Result: stable model-compressible signal recovery requires that Phi have both the RIP and the Restricted Amplification Property (RAmP). The RAmP controls the anisometry of Phi on the residual subspaces of the model approximations. [figure: optimal K-term model recovery (error controlled by RIP); optimal 2K-term model recovery (error controlled by RIP); residual subspace (error not controlled by RIP)]
  • Slide 44
  • Tree-RIP, Tree-RAmP. Theorem: an MxN iid subGaussian random matrix has the Tree(K)-RIP with high probability if M = O(K). Theorem: an MxN iid subGaussian random matrix has the Tree(K)-RAmP with high probability if M = O(K). (a counting sketch follows)
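The reason the log(N/K) factor disappears is a counting argument: the number of measurements scales with the log of the number of subspaces in the union, and there are far fewer rooted subtrees of size K than arbitrary size-K supports. A small numerical illustration (my sketch; the Catalan number bounds the count of rooted binary subtrees):

```python
from math import comb, log

N, K = 1024, 50
n_sparse = comb(N, K)               # arbitrary K-sparse supports
n_tree = comb(2 * K, K) // (K + 1)  # Catalan number: rooted binary subtrees of size K

# M scales with log(#subspaces): roughly K log(N/K) vs. a constant times K
print(f"log #sparse supports = {log(n_sparse):.1f} (K log(N/K) = {K * log(N / K):.1f})")
print(f"log #tree supports   = {log(n_tree):.1f} (K log 4 = {K * log(4):.1f})")
```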
  • Slide 45
  • Performance. Using model-based IHT or CoSaMP with the RIP and RAmP. Model-sparse signals: exact recovery from noise-free measurements, stable recovery from noisy measurements. Model-compressible signals: recovery as good as the best K-term model approximation, i.e., CS recovery error <= signal model K-term approx error + noise (up to constants).
  • Slide 46
  • Simulation. Recovery performance (MSE) vs. number of measurements. Piecewise cubic signals + wavelets. Models/algorithms: sparse (CoSaMP) vs. tree-sparse (tree-CoSaMP).
  • Slide 47
  • Simulation. Number of samples required for correct recovery. Piecewise cubic signals + wavelets. Models/algorithms: sparse (CoSaMP) vs. tree-sparse (tree-CoSaMP).
  • Slide 48
  • Other Useful Models. Clustered coefficients [C, Duarte, Hegde, B], [C, Indyk, Hegde, Duarte, B]; dispersed coefficients [Tropp, Gilbert, Strauss], [Stojnic, Parvaresh, Hassibi], [Eldar, Mishali], [Baron, Duarte et al.], [B, C, Duarte, Hegde].
  • Slide 49
  • Clustered Signals. Probabilistic approach via a graphical model: model clustering of significant pixels in the space domain using an Ising Markov random field; the Ising model approximation is performed efficiently using graph cuts [Cevher, Duarte, Hegde, B 08]. [figure: target; Ising-model recovery; CoSaMP recovery; LP (FPC) recovery]
  • Slide 50
  • Block-Sparse Model. N = 4096, K = 6 active blocks, block length J = 64, M = 2.5JK = 960 measurements [Stojnic, Parvaresh, Hassibi], [Eldar, Mishali], [B, Cevher, Duarte, Hegde]. [figure: target; CoSaMP (MSE = 0.723); block-sparse model recovery (MSE = 0.015)] (a block-projection sketch follows)
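The model projection for block sparsity is simple: keep the K blocks with the most energy. A minimal sketch (assumes the signal length is a multiple of the block length J):

```python
import numpy as np

def block_approx(x, K, J):
    """Best block-sparse approximation: keep the K length-J blocks with largest energy."""
    blocks = x.reshape(-1, J)
    energy = np.sum(blocks ** 2, axis=1)  # per-block energy
    keep = np.argsort(energy)[-K:]        # indices of the K strongest blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)
```

Plugging `lambda z: block_approx(z, K, J)` into the `model_iht` sketch above gives a block-sparse recovery algorithm in the spirit of this slide.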
  • Slide 51
  • Block-Compressible Model. [figure: target; CoSaMP (MSE=0.711); block-sparse recovery (MSE=0.195); best 5-block approximation (MSE=0.116)]
  • Slide 52
  • Sparse Spike Trains. Sequence of pulses. Simple model: a sequence of Dirac pulses with a refractory period of Delta samples between each pulse. The model-based RIP holds with fewer measurements M. Stable recovery via an iterative algorithm that exploits total unimodularity [Hegde, Duarte, Cevher 09] (a greedy projection stand-in is sketched below).
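As an illustration of the model projection for this signal class, here is a greedy stand-in (my sketch: it enforces the refractory period but is not the total-unimodularity-based algorithm the slide cites):

```python
import numpy as np

def spike_approx(x, K, delta):
    """Keep up to K largest-magnitude spikes separated by at least delta samples."""
    out = np.zeros_like(x)
    taken = []
    for i in np.argsort(np.abs(x))[::-1]:           # descending magnitude
        if all(abs(i - j) > delta for j in taken):  # respect the refractory period
            taken.append(i)
            out[i] = x[i]
            if len(taken) == K:
                break
    return out
```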
  • Slide 53
  • Sparse Spike Trains. N=1024, K=50, Delta=10, M=150. [figure: original; model-based recovery error; CoSaMP recovery error]
  • Slide 54
  • Sparse Pulse Trains. More realistic model: a sequence of Dirac pulses convolved with a pulse shape of fixed length, with a refractory period between each pulse. The model-based RIP holds with fewer measurements M.
  • Slide 55
  • Sparse Pulse Trains. Same model: Dirac pulses convolved with a pulse shape, refractory period between pulses. N=4076, K=7, pulse length 25, Delta=10, M=290. [figure: original; CoSaMP; model-based algorithm]
  • Slide 56
  • Summary. Why CS works: stable embedding for signals with concise geometric structure. Sparse signals >> model-sparse signals; compressible signals >> model-compressible signals. Greedy model-based signal recovery algorithms; upshot: provably fewer measurements and more stable recovery. New concept: RIP >> RAmP.
  • Slide 57
  • dsp.rice.edu/cs
  • Slide 58
  • Slide 59
  • Manifold Models. Why CS works: stable embedding for signals with concise geometric structure. Manifolds: a K-dimensional smooth manifold in N-dimensional space is stably embedded by a subGaussian random projection if M is linear in K and logarithmic in N [B, Wakin, FOCM, 2007]. Compressive inference: many inference problems have a manifold interpretation and can be solved directly on compressive measurements (this obviates recovery). (a numerical sketch follows)
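A small numerical illustration of the embedding claim (a minimal sketch; the manifold, a family of circularly shifted Gaussian pulses, and all sizes are arbitrary choices):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
N, M = 512, 32
Phi = rng.normal(size=(M, N)) / np.sqrt(M)  # random projection

# 1-D manifold: circular shifts of a Gaussian pulse (articulation = shift)
pulse = np.exp(-0.5 * ((np.arange(N) - N // 2) / 8.0) ** 2)
X = np.stack([np.roll(pulse, s) for s in range(0, N, 8)])  # 64 points on the manifold
Y = X @ Phi.T                                # compressive measurements

# pairwise distances before and after projection should agree closely
print(np.corrcoef(pdist(X), pdist(Y))[0, 1])  # correlation near 1
```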
  • Slide 60
  • Matched Filtering Object classification with K unknown camera articulation parameters Each set of articulated images lives on a K-dim manifold
  • Slide 61
  • Smashed Filtering Object classification with K unknown camera articulation parameters Each set of articulated images lives on a K-dim manifold
  • Slide 62
  • Open Positions. Open postdoc positions in sparsity / compressive sensing at Rice University: dsp.rice.edu, [email protected]
  • Slide 63
  • Connexions (cnx.org). Non-profit open publishing project. Goal: make high-quality educational content available to anyone, anywhere, anytime, for free on the web and at very low cost in print. Open-licensed repository of Lego-block modules for authors, instructors, and learners to create, rip, mix, burn. Global reach: >1M users monthly from 200 countries. Collaborators: IEEE (IEEEcnx.org), Govt. of Vietnam, TI, NI, ...
  • Slide 64
  • Why We Want RIP. [figure: Phi maps signals x1, x2 separated by distance d to measurements y1, y2 separated by approximately d]