TRANSCRIPT
Sparse Recovery and Fourier Sampling
Eric Price
MIT
Eric Price (MIT) Sparse Recovery and Fourier Sampling 1 / 37
The Fourier Transform
Conversion between time and frequency domains
[Figure: the Fourier transform maps displacement of air (time domain) to Concert A (frequency domain)]
Eric Price (MIT) Sparse Recovery and Fourier Sampling 2 / 37
The Fourier Transform is Ubiquitous
Audio Video Medical Imaging
Radar GPS Oil Exploration
Eric Price (MIT) Sparse Recovery and Fourier Sampling 3 / 37
Computing the Discrete Fourier Transform
How to compute x̂ = Fx?
Naive multiplication: O(n²).
Fast Fourier Transform: O(n log n) time. [Cooley-Tukey, 1965]
"[T]he method greatly reduces the tediousness of mechanical calculations."
– Carl Friedrich Gauss, 1805
By hand: 22 n log n seconds. [Danielson-Lanczos, 1942]
Can we do much better?
When can we compute the Fourier Transform in sublinear time?
Eric Price (MIT) Sparse Recovery and Fourier Sampling 4 / 37
Idea: Leverage Sparsity
Often the Fourier transform is dominated by a small number of peaks:
[Figure: time signal; frequency spectrum (exactly sparse); frequency spectrum (approximately sparse)]
Sparsity is common:
Audio Video Medical Imaging
Radar GPS Oil Exploration
Goal of this work: a sparse Fourier transform
Faster Fourier Transform on sparse data.
Eric Price (MIT) Sparse Recovery and Fourier Sampling 5 / 37
Talk Outline
1 Sparse Fourier Transform
  - Overview
  - Technical Details
2 Beyond: Sparse Recovery / Compressive Sensing
  - Overview
  - Adaptivity
  - Conclusion
Eric Price (MIT) Sparse Recovery and Fourier Sampling 6 / 37
My Contributions
Goal: Compute the Fourier transform x̂ = Fx when x̂ is k-sparse.
Theory:
  - The fastest algorithm for Fourier transforms of sparse data.
  - The only algorithms faster than FFT for all k = o(n).
Practice:
  - Implementation is faster than FFTW for a wide range of inputs.
  - Orders of magnitude faster than previous sparse Fourier transforms.
  - Useful in multiple applications.
Eric Price (MIT) Sparse Recovery and Fourier Sampling 8 / 37
Applications of ideas
http://groups.csail.mit.edu/netmit/sFFT/workshop.html
GPS [HAKI]: 2× faster
Spectrum sensing [HSAHK]: 6× lower sampling rate
Dense FFT over clusters [TPKP]: 2× faster
...
Eric Price (MIT) Sparse Recovery and Fourier Sampling 9 / 37
Talk Outline
1 Sparse Fourier Transform
  - Overview
  - Technical Details
2 Beyond: Sparse Recovery / Compressive Sensing
  - Overview
  - Adaptivity
  - Conclusion
Eric Price (MIT) Sparse Recovery and Fourier Sampling 10 / 37
Theoretical Results
For a signal of size n with k large frequencies:
First on the Boolean cube [GL89, KM92, L93]
Adapted to the complexes [Mansour '92, GGIMS02, AGS03, GMS05, Iwen '10, Akavia '10]
  - All take at least k log⁴ n time.
  - Only better than FFT if k ≪ n / log³ n.
Our results [HIKP12a, HIKP12b]
  - Exactly k-sparse: O(k log n)
      Optimal if FFT is optimal.
  - Approximately k-sparse: O(k log(n/k) log n), with
      ‖result − x̂‖₂ ≤ (1 + ε) · min over k-sparse x̂(k) of ‖x̂(k) − x̂‖₂
  - Better than FFT for any k = o(n)
Eric Price (MIT) Sparse Recovery and Fourier Sampling 11 / 37
Discrete Fourier Transform (DFT) Definition
Given x ∈ Cⁿ, compute the Fourier transform x̂:
    x̂_i = (1/n) ∑_j ω^(−ij) x_j   for ω = e^(τi/n)
    (where τ is the circle constant 6.283...)
    x̂ = F x for F_ij = ω^(−ij)/n
Inverse transform almost identical:
    x_i = ∑_j ω^(ij) x̂_j
    (ω → ω^(−1), rescale)
Lots of nice properties:
    Convolution ←→ Multiplication
Eric Price (MIT) Sparse Recovery and Fourier Sampling 12 / 37
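The definition above translates directly into code; a minimal sketch of the naive O(n²) transform under the slide's normalization (the function names are illustrative, not from the thesis):

```python
import cmath

TAU = 2 * cmath.pi  # the circle constant from the slide

def dft(x):
    """Naive O(n^2) DFT with the slide's normalization:
    x_hat[i] = (1/n) * sum_j w**(-i*j) * x[j], where w = e^(TAU*1j/n)."""
    n = len(x)
    w = cmath.exp(1j * TAU / n)
    return [sum(x[j] * w ** (-i * j) for j in range(n)) / n for i in range(n)]

def idft(xh):
    """Inverse transform: swap w -> w**-1 and drop the 1/n scale."""
    n = len(xh)
    w = cmath.exp(1j * TAU / n)
    return [sum(xh[j] * w ** (i * j) for j in range(n)) for i in range(n)]
```

Round-tripping idft(dft(x)) returns x up to floating-point error; the FFT computes the same quantity in O(n log n).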
Algorithm
Simpler case: x̂ is exactly k-sparse.
Theorem: We can compute x̂ in O(k log n) expected time.
Still kind of hard.
Simplest case: x̂ is exactly 1-sparse.
Lemma: We can compute a 1-sparse x̂ in O(1) time.
Eric Price (MIT) Sparse Recovery and Fourier Sampling 13 / 37
Algorithm for k = 1
[Figure: spectrum x̂ with a single spike of value a at frequency t]
Lemma: We can compute a 1-sparse x̂ in O(1) time.
x̂_i = a if i = t, and 0 otherwise.
Then x = (a, a ω^t, a ω^(2t), a ω^(3t), . . . , a ω^((n−1)t)).
x_0 = a, x_1 = a ω^t
x_1 / x_0 = ω^t  =⇒  t.
(Related to OFDM, Prony's method, matrix pencil.)
Eric Price (MIT) Sparse Recovery and Fourier Sampling 14 / 37
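The two-sample recovery on this slide can be sketched directly; `recover_one_sparse` is a hypothetical helper name, and the sketch assumes exact, noiseless samples of an exactly 1-sparse spectrum:

```python
import cmath

def recover_one_sparse(x0, x1, n):
    """Given x_j = a * w**(t*j) (the spectrum is a single spike a at frequency t,
    with w = e^(2*pi*1j/n)), two time samples determine the spike:
    x0 = a, and the phase of x1/x0 = w**t reveals t."""
    a = x0
    t = round(cmath.phase(x1 / x0) * n / (2 * cmath.pi)) % n
    return t, a
```

With noise the phase estimate degrades, so the full algorithm uses additional sample pairs; the constant-samples idea is the same.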
Algorithm for general k
Reduce general k to k = 1.
"Filters": partition frequencies into O(k) buckets.
  - Sample from the time domain of each bucket with O(log n) overhead.
  - Recover each bucket with the k = 1 algorithm.
After a random permutation, most frequencies are alone in their bucket.
[Diagram: x → Permute → Filters into O(k) buckets → 1-sparse recovery on each bucket → x̂′]
Recovers most of x̂:
Lemma (Partial sparse recovery): In O(k log n) expected time, we can compute an estimate x̂′ such that x̂ − x̂′ is k/2-sparse.
Eric Price (MIT) Sparse Recovery and Fourier Sampling 15 / 37
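The random-permutation step relies on a standard identity: reindexing time by j → aj (mod n), for a invertible mod n, permutes the spectrum by i → a⁻¹i (mod n). A minimal numeric check of that identity alone, not of the full bucketing pipeline (names are illustrative):

```python
import cmath
import random

def dft_unnorm(x):
    """Unnormalized DFT: X[i] = sum_j w**(-i*j) * x[j], with w = e^(2*pi*1j/n)."""
    n = len(x)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(x[j] * w ** (-i * j) for j in range(n)) for i in range(n)]

n = 16
x = [complex(random.random(), random.random()) for _ in range(n)]
a = 5                    # any a invertible mod n works (here: odd, since n = 16)
a_inv = pow(a, -1, n)    # 5 * 13 = 65 = 1 (mod 16)
y = [x[(a * j) % n] for j in range(n)]   # reindex time by j -> a*j mod n
X, Y = dft_unnorm(x), dft_unnorm(y)
# Frequencies are permuted: Y[i] = X[a^(-1) * i mod n]
assert all(abs(Y[i] - X[(a_inv * i) % n]) < 1e-6 for i in range(n))
```

This is why a spike at a worst-case frequency lands in a uniformly random bucket once a is chosen at random.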
Overall outline
[Diagram: x (the residual x − x′ on later rounds) → Permute → Filters into O(k) buckets → 1-sparse recovery on each bucket → partial k-sparse recovery → x̂′]
Lemma (Partial sparse recovery): In O(k log n) expected time, we can compute an estimate x̂′ such that x̂ − x̂′ is k/2-sparse.
Repeat on the residual: k → k/2 → k/4 → · · ·
Theorem: We can compute x̂ in O(k log n) expected time.
Eric Price (MIT) Sparse Recovery and Fourier Sampling 16 / 37
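The O(k log n) total follows from a geometric series: each application of the partial-recovery lemma halves the residual sparsity, so (eliding constants) the round costs sum as

```latex
\sum_{i \ge 0} O\!\left(\frac{k}{2^i}\,\log n\right)
  \;=\; O(k \log n) \sum_{i \ge 0} 2^{-i}
  \;=\; O(k \log n).
```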
How can you isolate frequencies?
[Figure: time-domain windows (×, =) and the corresponding frequency-domain convolutions (∗, =)]
n-dimensional DFT: O(n log n). x → x̂
n-dimensional DFT of the first k terms: O(n log n). x · rect → x̂ ∗ sinc
B-dimensional DFT of the first k terms: O(B log B). alias(x · rect) → subsample(x̂ ∗ sinc)
Eric Price (MIT) Sparse Recovery and Fourier Sampling 17 / 37
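The aliasing step in the last bullet has a clean discrete form: folding a length-n signal into B summed terms subsamples its spectrum. A minimal check of that identity with an unnormalized DFT (the slide applies it to the windowed signal x · rect; here the full x is folded, and names are illustrative):

```python
import cmath
import random

def dft_unnorm(x):
    """Unnormalized DFT: X[i] = sum_j w**(-i*j) * x[j], with w = e^(2*pi*1j/n)."""
    n = len(x)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(x[j] * w ** (-i * j) for j in range(n)) for i in range(n)]

n, B = 16, 4
x = [complex(random.random(), random.random()) for _ in range(n)]
# Aliasing in time: fold the n samples down to B terms.
y = [sum(x[j + m * B] for m in range(n // B)) for j in range(B)]
# Effect in frequency: the B-point DFT of y subsamples the n-point spectrum.
X, Y = dft_unnorm(x), dft_unnorm(y)
assert all(abs(Y[i] - X[i * (n // B)]) < 1e-6 for i in range(B))
```

This is what makes the bucketed transform cost O(B log B) instead of O(n log n): only the short folded signal is transformed.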
The issue
[Figure: sinc bucket responses over frequency, leaking into neighboring buckets]
We want to isolate frequencies.
The sinc filter "leaks": contamination from other buckets.
We introduce a better filter: a Gaussian (or prolate spheroidal sequence) convolved with a rectangle.
Eric Price (MIT) Sparse Recovery and Fourier Sampling 18 / 37
Algorithm for exactly sparse signals
[Figure sequence: original signal x and goal x̂; computed F · x and filtered signal F̂ ∗ x̂; F · x aliased to k terms; computed samples of F̂ ∗ x̂; resulting knowledge about x̂]
Lemma: If t is isolated in its bucket and in the "super-pass" region, the value b we compute for its bucket satisfies b = x̂_t.
Computing the b for all O(k) buckets takes O(k log n) time.
Eric Price (MIT) Sparse Recovery and Fourier Sampling 19 / 37
Eric Price (MIT) Sparse Recovery and Fourier Sampling 19 / 37
Algorithm for exactly sparse signals

Lemma. For most t, the value b we compute for its bucket satisfies b = x̂_t.

Computing the b for all O(k) buckets takes O(k log n) time.

- Time-shift x by one and repeat: b′ = x̂_t ω^t.
- Divide to get b′/b = ω^t =⇒ can compute t.
  - Just like our 1-sparse recovery algorithm, x_1/x_0 = ω^t.
- Gives partial sparse recovery: x̂′ such that x̂ − x̂′ is k/2-sparse.

[Diagram: x → permute → filters → O(k) parallel 1-sparse recoveries → x̂′.]

Repeat k → k/2 → k/4 → · · ·
O(k log n) time sparse Fourier transform.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 20 / 37
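The b′/b = ω^t step is easy to demonstrate for a single frequency. Below is a toy numpy sketch (the setup and names are mine), using the fact that an exactly 1-sparse spectrum gives x_j = x̂_t ω^{tj}/n, so one time shift reveals t through a phase angle:

```python
import numpy as np

n = 256
t_true = 37                       # the one nonzero frequency
coeff = 2.0 - 1.5j                # its coefficient, x̂_t
omega = np.exp(2j * np.pi / n)

# With an exactly 1-sparse spectrum, x[j] = x̂_t * omega**(t*j) / n.
x = lambda j: coeff * omega ** (t_true * j) / n

b = x(0)                          # bucket value:            b  = x̂_t / n
b_shift = x(1)                    # after a time shift by 1: b' = x̂_t * omega**t / n

# b'/b = omega**t, so the phase angle of the ratio reveals t.
t = int(round(np.angle(b_shift / b) * n / (2 * np.pi))) % n
print(t)                          # 37
```

Once t is known, the coefficient itself is just b rescaled, which is how each round of the algorithm peels off recovered frequencies.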
Algorithm for approximately sparse signals

What changes with noise? Identical architecture:

[Diagram: x → permute → filters → O(k) parallel 1-sparse recoveries → partial sparse recovery → x̂′.]

Just requires robust 1-sparse recovery.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 21 / 37
Algorithm for approximately sparse signals: k = 1

Lemma. Suppose x̂ is approximately 1-sparse: |x̂_t|/‖x̂‖_2 > 90%. Then we can recover it with O(log n) samples and O(log² n) time.

For a time shift c: x_c/x_0 = ω^{ct} + noise.

- With exact sparsity: log n bits in a single measurement.
- With noise: only a constant number of useful bits.
- Choose Θ(log n) time shifts c to recover t.
- An error-correcting code with efficient recovery =⇒ Lemma.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 22 / 37
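The thesis decodes Θ(log n) random time shifts through an error-correcting code; as a simpler stand-in, here is a deterministic bit-by-bit variant of the same idea (entirely my own sketch): the shift c = n/2^(r+1) makes the phase of x_c/x_0 depend only on t mod 2^(r+1), so each noisy ratio yields one more bit of t.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t_true = 613                      # hidden frequency to locate
coeff = 1.0 + 0.5j

def sample(j, noise=0.05):
    """Noisy time sample of a signal whose spectrum is ~1-sparse at t_true."""
    clean = coeff * np.exp(2j * np.pi * t_true * j / n) / n
    return clean + (rng.normal(scale=noise) + 1j * rng.normal(scale=noise)) / n

# Recover t least-significant bit first: with shift c = n / 2**(r+1),
# x_c / x_0 = omega**(c*t) depends only on t mod 2**(r+1).
t_low = 0
for r in range(int(np.log2(n))):
    c = n >> (r + 1)
    ratio = sample(c) / sample(0)
    # Cancel the phase contributed by the bits already recovered; what
    # remains is near +1 or -1 according to bit r of t.
    z = ratio * np.exp(-2j * np.pi * t_low * c / n)
    t_low += int(z.real < 0) << r

print(t_low)                      # 613
```

Each bit decision tolerates phase error up to π/2, which is why a constant noise level per measurement suffices; the real algorithm gets the stronger O(log n)-sample guarantee by randomizing the shifts and decoding them jointly.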
Algorithm for approximately sparse signals: general k

Lemma. If x̂ is approximately 1-sparse, we can recover it with O(log n) samples and O(log² n) time.

[Diagram: x → permute → filters → O(k) parallel 1-sparse recoveries → x̂′.]

Reduce k-sparse to 1-sparse on buckets of size n/k, with log n overhead per sample.

Theorem. If x̂ is approximately k-sparse, we can recover it in O(k log(n/k) log n) time.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 23 / 37
Empirical performance

Compare to:
- FFTW, the "Fastest Fourier Transform in the West"
- AAFFT, the [GMS05] sparse Fourier transform

[Plots: "Run Time vs Signal Sparsity" (n = 2^22) and "Run Time vs Signal Size" (k = 50), comparing sFFT 3.0 (Exact), FFTW, and AAFFT 0.9 on log-log axes.]

Faster than FFTW for a wide range of values.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 24 / 37
Recap of Sparse Fourier Transform

Theory:
- The fastest algorithm for Fourier transforms of sparse data.
- The only algorithms faster than FFT for all k = o(n).

Practice:
- Implementation is faster than FFTW for a wide range of inputs.
- Orders of magnitude faster than previous sparse Fourier transforms.
- Useful in multiple applications.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 25 / 37
Talk Outline

1. Sparse Fourier Transform: Overview; Technical Details
2. Beyond: Sparse Recovery / Compressive Sensing: Overview; Adaptivity; Conclusion

Eric Price (MIT) Sparse Recovery and Fourier Sampling 26 / 37
Sparse Recovery / Compressive Sensing

Robustly recover sparse x from linear measurements y = Ax.

[Figure: y = Ax. The sparse Fourier transform is the special case where A samples a set S of coordinates, A = F⁻¹_S.]

Applications:
- Sparse Fourier
- MRI
- Single-Pixel Camera
- Streaming Algorithms: A(x + ∆) = Ax + A∆
- Genetic Testing

Eric Price (MIT) Sparse Recovery and Fourier Sampling 27 / 37
My Contributions

- Sparse Fourier: minimize time complexity [HIKP12b, HIKP12a]
- MRI: minimize Fourier sample complexity [GHIKPS13, IKP14]
- Camera: use Earth-Mover Distance metric [IP11, GIP10, GIPR11]
- Streaming: improved analysis of Count-Sketch [MP14, PW11, P11]
- Genetic testing: first asymptotic gain using adaptivity [IPW11, PW13]

Eric Price (MIT) Sparse Recovery and Fourier Sampling 28 / 37
Adaptive Sparse Recovery Model

- Unknown approximately k-sparse vector x ∈ R^n.
- Choose v ∈ R^n, observe y = ⟨v, x⟩.
- Choose another v and repeat as needed.
- Output x′ satisfying

      ‖x′ − x‖_2 < (1 + ε) min_{k-sparse x^(k)} ‖x − x^(k)‖_2.

- Nonadaptively: Θ(k log(n/k)) measurements are necessary and sufficient. [Candes-Romberg-Tao '06, DIPW '10]
- Natural question: does adaptivity help?
  - Studied in [MSW08, JXC08, CHNR08, AWZ08, HCN09, ACD11, ...]
- First asymptotic improvement: O(k log log(n/k)) measurements. [IPW '11]

Eric Price (MIT) Sparse Recovery and Fourier Sampling 29 / 37
Applications of Adaptivity
Eric Price (MIT) Sparse Recovery and Fourier Sampling 30 / 37
Outline of Algorithm

Theorem. Adaptive k-sparse recovery is possible with O(k log log(n/k)) measurements.

[Diagram: x → permute → partition → O(k) parallel 1-sparse recoveries → x′.]

Suffices to solve for k = 1:

Lemma. Adaptive 1-sparse recovery is possible with O(log log n) measurements.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 31 / 37
1-sparse recovery: non-adaptive lower bound

Lemma. Adaptive 1-sparse recovery is possible with O(log log n) measurements.

Non-adaptive lower bound: why is this hard?
- Hard case: x is a random e_i plus Gaussian noise w with ‖w‖_2 ≈ 1.
- Robust recovery must locate i.
- Observations: ⟨v, x⟩ = v_i + ⟨v, w⟩ = v_i + (‖v‖_2/√n) z, for z ∼ N(0, 1).

Eric Price (MIT) Sparse Recovery and Fourier Sampling 32 / 37
1-sparse recovery: non-adaptive lower bound

Observe ⟨v, x⟩ = v_i + (‖v‖_2/√n) z, where z ∼ N(0, 1).

Shannon 1948: information capacity

      I(i; ⟨v, x⟩) ≤ (1/2) log(1 + SNR),

where SNR denotes the "signal-to-noise ratio,"

      SNR = E[signal²] / E[noise²] = E[v_i²] / (‖v‖_2²/n) = 1.

Finding i needs Ω(log n) non-adaptive measurements.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 33 / 37
1-sparse recovery: changes in adaptive setting

Information capacity

      I(i; ⟨v, x⟩) ≤ (1/2) log(1 + SNR),

where SNR denotes the "signal-to-noise ratio,"

      SNR = E[v_i²] / (‖v‖_2²/n).

- If i is independent of v, this is O(1).
- As we learn about i, we can increase the SNR.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 34 / 37
1-sparse recovery: idea

x = e_i + w. Observe 〈v, x〉 = v_i + 〈v, w〉.

[Figure: v concentrated on a shrinking candidate set for the signal coordinate i.]

0 bits known: SNR = 2,    I(i; 〈v, x〉) ≤ log SNR = 1
1 bit known:  SNR = 2²,   I(i; 〈v, x〉) ≤ log SNR = 2
2 bits known: SNR = 2⁴,   I(i; 〈v, x〉) ≤ log SNR = 4
4 bits known: SNR = 2⁸,   I(i; 〈v, x〉) ≤ log SNR = 8
8 bits known: SNR = 2¹⁶,  I(i; 〈v, x〉) ≤ log SNR = 16

Eric Price (MIT) Sparse Recovery and Fourier Sampling 35 / 37
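The progression above — each measurement doubling the number of known bits of i — can be sketched as a round count. This is an illustration of the doubling argument only, not the IPW11 algorithm itself, and the "first measurement learns 1 bit" starting point is an idealization:

```python
import math

def adaptive_rounds(n):
    """Rounds to identify i in [n] when each round doubles the bits known.

    Once b bits of i are known, v can concentrate on a candidate set of
    size n / 2**b, squaring the SNR, so log SNR (bits per measurement)
    doubles each round.
    """
    bits_known = 1   # idealized: first measurement learns ~1 bit (SNR = 2)
    rounds = 1
    while bits_known < math.log2(n):
        bits_known *= 2
        rounds += 1
    return rounds

for n in [2**8, 2**16, 2**32]:
    print(n, adaptive_rounds(n))  # rounds grow like log log n: 4, 5, 6
```

Doubling the known bits each round reaches the log₂ n bits of i in O(log log n) rounds, which is the intuition behind the lemma on the next slide.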
1-sparse recovery

Lemma (IPW11, PW13)
Adaptive 1-sparse recovery takes Θ(log log n) measurements.

[Figure: number of measurements m as a function of n (SNR = 10 dB, k = 1), and as a function of SNR (n = 8192, k = 1); Gaussian measurements with L1 minimization versus adaptive measurements.]

Gives Θ(k log log(n/k)) k-sparse recovery via general framework.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 36 / 37
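To see the scale of the adaptive gain, a quick comparison of the standard non-adaptive k·log(n/k) measurement bound against the adaptive k·log log(n/k) bound from the lemma (a sketch: constants are ignored, and the particular n and k are illustrative):

```python
import math

def nonadaptive_measurements(n, k):
    """Standard non-adaptive sparse-recovery bound, up to constants."""
    return k * math.log2(n / k)

def adaptive_measurements(n, k):
    """Adaptive bound from the lemma, up to constants."""
    return k * math.log2(math.log2(n / k))

n, k = 1 << 30, 100
print(round(nonadaptive_measurements(n, k)))  # → 2336
print(round(adaptive_measurements(n, k)))     # → 455
```

For n = 2³⁰ and k = 100, the log(n/k) factor of about 23 shrinks to a log log(n/k) factor of about 4.5 — roughly a 5× reduction even before constants.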
Summary

Sparse Fourier transform
  ▸ Fastest algorithm for Fourier transforms on sparse data
  ▸ Already has applications with substantial improvements

Broader sparse recovery theory
  ▸ Sparse Fourier: minimize time complexity [HIKP12]
  ▸ MRI: minimize Fourier sample complexity [GHIKPS13, IKP14]
  ▸ Camera: use Earth-Mover Distance metric [IP11, GIP10, GIPR11]
  ▸ Streaming: improved analysis of Count-Sketch [MP14, PW11, P11]
  ▸ Genetic testing: first asymptotic gain using adaptivity [IPW11, PW13]

Lower bounds
  ▸ Based on Gaussian channel capacity: tight bounds, extensible to adaptive settings.
  ▸ Based on communication complexity: extends to ℓ1 setting.

Thank You

Eric Price (MIT) Sparse Recovery and Fourier Sampling 37 / 37
The Future

Make sparse Fourier applicable to more problems
  ▸ Better sample complexity
  ▸ Incorporate stronger notions of structure

Tight constants in compressive sensing
  ▸ Analogous to channel capacity in coding theory.
  ▸ Lower bound techniques, from information theory, should be strong enough.

Eric Price (MIT) Sparse Recovery and Fourier Sampling 38 / 37