
Richard Baraniuk

Rice University

compressive nonsensing

Chapter 1

The Problem

challenge 1: data too expensive

Case in Point: MR Imaging

• Measurements very expensive

• $1-3 million per machine

• 30 minutes per scan

Case in Point: IR Imaging

challenge 2: too much data

Case in Point: DARPA ARGUS-IS

• 1.8 Gpixel image sensor
  – video rate output: 444 Gbits/s
  – comm data rate: 274 Mbits/s

a factor of 1600x: way out of reach of existing compression technology

• Reconnaissance without conscience
  – too much data to transmit to a ground station
  – too much data to make effective real-time decisions

Chapter 2

The Promise

COMPRESSIVE SENSING

innovation 1: sparse signal models

Sparsity

[Figure: pixels and their few large wavelet coefficients (blue = 0); wideband signal samples and their few large Gabor (time-frequency) coefficients]

Sparsity

[Figure: pixels and their few large wavelet coefficients (blue = 0); a sparse signal with few nonzero entries]

nonlinear signal model
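For readers of this transcript, the sparse signal model the slides allude to is usually written as follows (notation assumed here, not shown on the slides):

$$ x = \Psi \alpha, \qquad \|\alpha\|_0 = K \ll N, $$

where $x \in \mathbb{R}^N$ is the signal, $\Psi$ is a sparsity basis or frame (wavelets, DFT, Gabor), and only $K$ of the coefficients in $\alpha$ are nonzero. The set of all such signals is a union of $K$-dimensional subspaces rather than a single subspace, which is why the slide calls it a nonlinear signal model.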

innovation 2: dimensionality reduction for sparse signals

Dimensionality Reduction

• When data is sparse/compressible, can directly acquire a compressed representation with no/little information loss through linear dimensionality reduction

[Figure: measurement vector obtained by linearly projecting a sparse signal with few nonzero entries]
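In the standard notation (again assumed, not spelled out on the slide), the acquisition step is a linear projection to a lower dimension:

$$ y = \Phi x, \qquad \Phi \in \mathbb{R}^{M \times N}, \quad M \ll N, $$

and for suitable $\Phi$ on the order of $M = O(K \log(N/K))$ measurements suffice to capture a $K$-sparse $x$.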

Stable Embedding

• An information-preserving projection preserves the geometry of the set of sparse signals (a union of K-dim subspaces)

• A stable embedding (SE) ensures that distances between sparse signals are approximately preserved under the projection
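The condition usually displayed at this point (my notation, not copied from the slide) is a restricted-isometry-style bound:

$$ (1-\delta)\,\|x_1 - x_2\|_2^2 \;\le\; \|\Phi x_1 - \Phi x_2\|_2^2 \;\le\; (1+\delta)\,\|x_1 - x_2\|_2^2 $$

for all $K$-sparse $x_1, x_2$ and some small constant $\delta > 0$.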


Random Embedding is Stable

• Measurements = random linear combinations of the entries of the signal

• No information loss for sparse vectors whp

[Figure: measurement vector obtained by randomly projecting a sparse signal with few nonzero entries]

innovation 3: sparsity-based signal recovery

Signal Recovery

• Goal: Recover signal from measurements

• Problem: Random projection is not full rank (ill-posed inverse problem)

• Solution: Exploit the sparse/compressible geometry of the acquired signal

• Recovery via a (convex) sparsity penalty or greedy algorithms [Donoho; Candes, Romberg, Tao, 2004]
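As a toy illustration of this recovery pipeline (random Gaussian measurements followed by a greedy solver, here scikit-learn's orthogonal matching pursuit; the sizes and names are my own choices, not from the talk, and an L1/basis-pursuit solver could be substituted):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, M, K = 1024, 128, 10                 # ambient dimension, measurements, sparsity

# K-sparse signal: K random locations with random amplitudes
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian measurement matrix (M << N) and compressive measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Greedy sparse recovery (OMP); exploits sparsity to solve the ill-posed inverse problem
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_

print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```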


“Single-Pixel” CS Camera

[Diagram: scene → random pattern on DMD array → single photon detector → image reconstruction or processing]

w/ Kevin Kelly


• Flip mirror array M times to acquire M measurements
• Sparsity-based recovery
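A minimal simulation of that acquisition loop (pseudorandom on/off DMD patterns, one photodetector reading per pattern; the scene and the sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 64 * 64            # vectorized scene resolution
M = 800                       # number of mirror-array flips = number of measurements

scene = np.zeros(n_pixels)
scene[1000:1040] = 1.0        # toy "image": a small bright patch

patterns = np.empty((M, n_pixels))
measurements = np.empty(M)
for m in range(M):
    # Random binary DMD pattern: each mirror reflects toward (1) or away from (0) the detector
    patterns[m] = rng.integers(0, 2, n_pixels)
    # The single photodetector integrates the light from all "on" mirrors
    measurements[m] = patterns[m] @ scene

# (patterns, measurements) is what a sparsity-based solver would use to reconstruct the scene
```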

Random Demodulator

• CS enables sampling near signal’s (low) “information rate” rather than its (high) Nyquist rate

[Relation: the A2I sampling rate scales with the number of tones per window and only logarithmically with the Nyquist bandwidth]

• Problem: In contrast to Moore’s Law, ADC performance doubles only every 6-8 years
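For reference, the scaling usually quoted for the random demodulator (not printed on the slide; constants omitted) is

$$ R \;\gtrsim\; K \,\log\!\frac{W}{K}, $$

where $R$ is the A2I sampling rate, $K$ the number of active tones per window, and $W$ the Nyquist bandwidth, so the required rate tracks the information level $K$ rather than the bandwidth $W$.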

Example: Frequency Hopper

• Sparse in time-frequency

[Figure: spectrogram from Nyquist-rate sampling vs. sparsogram from 20x sub-Nyquist sampling]

challenge 1: data too expensive

means fewer expensive measurements needed for the same resolution scan

challenge 2: too much data

means we compress on the fly as we acquire data

EXCITING!!!

2004—2014:
• 6640 citations; 9797 citations
• dsp.rice.edu/cs archive: >1500 papers
• nuit-blanche.blogspot.com: >1 posting/sec

Chapter 3

The Hype

CS is Growing Up

[Images: Gerhard Richter, “4096 Colours”; muralsoflajolla.com/roy-mcmakin-mural]

“L1 is the new L2”

- Stan Osher

Exponential Growth

?

Chapter 4

The Fallout

“L1 is the new L2”

- Stan Osher

CS for “Face Recognition”

From: M. V.
Subject: Interesting application for compressed sensing
Date: June 10, 2011 at 11:37:31 PM EDT
To: [email protected], [email protected]

Drs. Candes and Romberg,

You may have already been approached about this, but I feel I should say something in case you haven't. I'm writing to you because I recently read an article in Wired Magazine about compressed sensing.

I'm excited about the applications CS could have in many fields, but today I was reminded of a specific application where CS could conceivably settle an area of dispute between mainstream historians and Roswell UFO theorists.  As outlined in the linked video below, Dr. Rudiak has analyzed photos from 1947 in which a General Ramey appears holding a typewritten letter from which Rudiak believes he has been able to discern a number of words which he believes substantiate the extraterrestrial hypothesis for the Roswell Incident).  For your perusal, I've located a "hi-res" copy of the cropped image of the letter in Ramey's hand.

I hope to hear back from you.  Is this an application where compressed sensing could be useful?  Any chance you would consider trying it?

Thank you for your time,
M. V.

P.S. - Out of personal curiosity, are there currently any commercial entities involved in developing CS-based software for use by the general public?


Chapter 5

Back to Reality

Back to Reality

• “There's no such thing as a free lunch”

• “Something for Nothing” theorems

• Dimensionality reduction is no exception

• Result: Compressive Nonsensing

Nonsense 1

Robustness

Measurement Noise

• Stable recovery with additive measurement noise

• Noise is added to the measurements

• Stability: noise only mildly amplified in recovered signal

Signal Noise

• Often seek recovery with additive signal noise

• Noise is added to the signal itself

• Noise folding: signal noise is amplified in the recovered signal by 3dB for every doubling of the subsampling factor N/M

• Same effect seen in classical “bandpass subsampling”

[Davenport, Laska, Treichler, B 2011]
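A compact way to state the noise-folding effect (my notation, chosen to match the -3 slope on the next slide):

$$ y = \Phi\,(x + n) \quad\Longrightarrow\quad \mathrm{SNR}_{\mathrm{recovered}} \;\approx\; \mathrm{SNR}_{\mathrm{signal}} - 10\log_{10}\!\frac{N}{M}\ \mathrm{dB}, $$

i.e. roughly 3dB of SNR is lost each time the subsampling factor $N/M$ doubles.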

Noise Folding in CS

[Figure: CS recovered signal SNR vs. subsampling; slope = -3]

“Tail Folding”

• Can model compressible (approx. sparse) signals as “signal” + “tail”

• The tail “folds” into the signal as the amount of subsampling increases

[Figure: sorted coefficient magnitudes vs. sorted index, partitioned into “signal” and “tail”]

[Davies, Guo, 2011; Davenport, Laska, Treichler, B 2011]

All Is Not Lost – Dynamic Range

• In wideband ADC apps

• As amount of subsampling grows, can employ an ADC with a lower sampling rate and hence higher-resolution quantizer

Dynamic Range

• CS can significantly boost the ENOB of an ADC system for sparse signals

[Figure: stated number of bits vs. log sampling frequency for a conventional ADC vs. a CS ADC w/ sparsity]

Dynamic Range

• As amount of subsampling grows, can employ an ADC with a lower sampling rate and hence higher-resolution quantizer

• Thus the dynamic range of a CS ADC can significantly exceed that of a Nyquist ADC

• With current ADC trends, the dynamic range gain is theoretically 7.9dB for each doubling of the subsampling factor N/M

Dynamic Range

[Figure: dynamic range vs. subsampling; slope = +5 (almost 7.9)]

Tradeoff

SNR: 3dB loss for each doubling of the subsampling factor N/M

Dynamic Range: up to 7.9dB gain for each doubling of the subsampling factor N/M
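Writing both effects against the same knob, the subsampling factor $N/M$ (my notation, consistent with the two slopes quoted above):

$$ \Delta\mathrm{SNR} \;\approx\; -3\,\mathrm{dB}\times\log_2\!\frac{N}{M}, \qquad \Delta\mathrm{DR} \;\lesssim\; +7.9\,\mathrm{dB}\times\log_2\!\frac{N}{M}. $$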

Adaptivity

• Say we know the locations of the nonzero entries in the signal

• Then we can boost the SNR by restricting sensing to the corresponding columns of the measurement matrix

• Motivates adaptive sensing strategies that bypass the noise-folding tradeoff [Haupt, Castro, Nowak, B 2009; Candes, Davenport 2011]

Nonsense 2

Quantization

CS and Quantization

• Vast majority of work in CS assumes the measurements are real-valued

• In practice, measurements must be quantized (nonlinear)

• Should measure CS performance in terms of the number of measurement bits rather than the number of (real-valued) measurements

• Limited progress
  – large number of bits per measurement
  – 1 bit per measurement

CS and Quantization

[Figure: recovery performance vs. total number of measurement bits, for 1, 2, 4, 6, 8, 10, and 12 bits per measurement; N=2000, K=20, M = (total bits)/(bits per meas)]
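A minimal sketch of the bookkeeping point being made here: once each measurement passes through a B-bit quantizer, the honest resource count is M × B bits rather than M measurements (uniform midrise quantizer; the function name and ranges are my own):

```python
import numpy as np

def quantize(y, n_bits, y_max):
    """Uniform midrise quantizer on [-y_max, y_max]: the nonlinearity most CS theory ignores."""
    n_levels = 2 ** n_bits
    step = 2.0 * y_max / n_levels
    idx = np.clip(np.floor(y / step), -n_levels // 2, n_levels // 2 - 1)
    return (idx + 0.5) * step

rng = np.random.default_rng(0)
M, n_bits = 128, 4
y = rng.standard_normal(M)        # stand-in for real-valued CS measurements
y_q = quantize(y, n_bits, y_max=3.0)

total_bits = M * n_bits           # the budget the slide argues performance should be plotted against
print("total bits:", total_bits, " max quantization error:", np.max(np.abs(y - y_q)))
```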

Nonsense 3

Weak Models

Weak Models

• Sparsity models in CS emphasize discrete bases and frames
  – DFT, wavelets, …

• But in real data acquisition problems, the world is continuous, not discrete

The Grid Problem

• Consider a “frequency sparse” signal
  – suggests the DFT sparsity basis

• Easy CS problem: K=1 on-grid frequency

• Hard CS problem: K=1 off-grid frequency
  – slow decay due to sinc interpolation of off-grid sinusoids (asymptotically, the signal is not even in L1)
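A quick numerical illustration of why the off-grid case is hard (frequencies and sizes chosen arbitrarily): an on-grid sinusoid occupies exactly two DFT bins, while one that falls halfway between grid points leaks energy across the whole spectrum.

```python
import numpy as np

N = 512
n = np.arange(N)
on_grid  = np.cos(2 * np.pi * 10.0 / N * n)   # frequency exactly on the DFT grid
off_grid = np.cos(2 * np.pi * 10.5 / N * n)   # frequency halfway between grid points

for name, x in [("on-grid ", on_grid), ("off-grid", off_grid)]:
    mags = np.sort(np.abs(np.fft.fft(x)))[::-1]        # DFT magnitudes, largest first
    frac = np.sum(mags[:4] ** 2) / np.sum(mags ** 2)   # energy captured by the 4 largest coefficients
    print(name, "energy fraction in the 4 largest DFT coefficients:", round(frac, 4))
```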

Going Off the Grid

• Spectral CS [Duarte, B, 2010]
  – discrete formulation

• CS Off the Grid [Tang, Bhaskar, Shah, Recht, 2012]
  – continuous formulation

[Figure: Spectral CS recovery performance (best case, average case, worst case); 20dB]

Nonsense 4

Focus on Recovery

Misguided Focus on Recovery

• Recall the data deluge problem in sensing
  – ex: large-scale imaging, HSI, video, ultrawideband ADC, …
  – data ambient dimension N too large

• When N ~ billions, signal recovery becomes problematic, if not impossible

• Solution: Perform signal exploitation directly on the compressive measurements

Compressive Signal Processing

• Many applications involve signal inference and not reconstruction

detection < classification < estimation < reconstruction

• Good news: CS supports efficient learning, inference, processing directly on compressive measurements

• Random projections ~ sufficient statistics for signals with concise geometrical structure

Classification

• Simple object classification problem
  – AWGN: nearest neighbor classifier

• Common issue:
  – L unknown articulation parameters

• Common solution: matched filter
  – find nearest neighbor under all articulations

CS-based Classification

• Target images form a low-dimensional manifold as the target articulates
  – random projections preserve the information in these manifolds (given enough measurements M)

• CS-based classifier: smashed filter
  – find nearest neighbor under all articulations, under random projection [Davenport, B, et al 2006]
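A toy version of the smashed filter (nearest neighbor among randomly projected templates; the "articulations" here are just shifts of a synthetic pulse, not the image manifolds of the actual experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 64                                  # ambient dimension, compressive measurements

def template(shift, width=32):
    """Hypothetical articulated target: a rectangular pulse at the given shift."""
    t = np.zeros(N)
    t[shift:shift + width] = 1.0
    return t

shifts = range(0, N - 32, 8)                     # candidate articulation parameters
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # one random projection, shared by data and templates

# Unknown scene: a shifted pulse plus white Gaussian noise, observed only through Phi
true_shift = 400
y = Phi @ (template(true_shift) + 0.1 * rng.standard_normal(N))

# Smashed filter: nearest neighbor over all articulations, computed in the compressive domain
est_shift = min(shifts, key=lambda s: np.linalg.norm(y - Phi @ template(s)))
print("estimated shift:", est_shift)             # should be 400 or very close
```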

Smashed Filter

• Random shift and rotation (L=3 dim. manifold)
• White Gaussian noise added to measurements
• Goals: identify most likely shift/rotation parameters; identify most likely class

[Figure: average shift-estimate error and classification rate (%) vs. number of measurements M, for increasing noise levels]

Frequency Tracking

• Compressive Phase Locked Loop (PLL)
  – key idea: the phase detector in a PLL computes the inner product between the signal and the oscillator output
  – RIP ensures we can compute this inner product between the corresponding low-rate CS measurements

CS-PLL w/ 20x undersampling
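The property being leaned on is the approximate inner-product preservation that comes with RIP/JL-style random embeddings (stated loosely, in my notation):

$$ \langle \Phi x, \Phi v \rangle \;\approx\; \langle x, v \rangle, $$

so the phase detector can be driven by the inner product of the low-rate CS measurements of the signal and of the oscillator output instead of the full-rate inner product.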

Nonsense 5

Weak Guarantees

Performance Guarantees

• CS performance guarantees
  – RIP, incoherence, phase transition

• To date, rigorous results only for random matrices
  – practically not useful
  – often pessimistic

• Need rigorous guarantees for non-random, structured sampling matrices with fast algorithms
  – analogous to the progress in coding theory from Shannon’s original random codes to modern codes

Chapter 6

All Is Not Lost !

Sparsity

Convex optimization

Dimensionality reduction

12-Step Program to End Compressive Nonsensing

1. Don’t give in to the hype surrounding CS

2. Resist the urge to blindly apply L1 minimization

3. Face up to robustness issues

4. Deal with measurement quantization

5. Develop more realistic signal models

6. Develop practical sensing matrices beyond random

7. Develop more efficient recovery algorithms

8. Develop rigorous performance guarantees for practical CS systems

9. Exploit signals directly in the compressive domain

10. Don’t give in to the hype surrounding CS

11. Resist the urge to blindly apply L1 minimization

12. Don’t give in to the hype surrounding CS