TRANSCRIPT
Compressed Sensing using Generative Models
Ashish Bora Ajil Jalal Eric Price Alex Dimakis
UT Austin
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 1 / 20
Compressed Sensing
Want to recover a signal (e.g. an image) from noisy measurements.
Applications: medical imaging, astronomy, single-pixel cameras, oil exploration.
Linear measurements: choose A ∈ R^{m×n}, observe y = Ax.
How many measurements m do we need to learn the signal?
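The measurement setup above can be sketched in a few lines of NumPy (the dimensions and the Gaussian measurement matrix are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64 * 64          # e.g. a 64x64 image, flattened
m = 500              # number of linear measurements, with m < n

x = rng.standard_normal(n)        # the unknown signal
A = rng.standard_normal((m, n))   # measurement matrix we get to choose
y = A @ x                         # what we actually observe

assert y.shape == (m,)            # m numbers instead of n pixel values
```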
Compressed Sensing
Given linear measurements y = Ax, for A ∈ R^{m×n}.
How many measurements m to learn the signal x?
- Naively we need m ≥ n, or else the system is underdetermined: multiple x are possible.
- But most x aren't plausible. [Figure: two images, 5 MB vs. 36 MB.]
- This is why compression is possible.
Ideal answer: (information in image) / (new information per measurement)
- Image "compressible" ⟹ information in image is small.
- Measurements "incoherent" ⟹ most information is new.
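The underdetermined case can be checked numerically (same illustrative Gaussian setup as before): when m < n, the measurement matrix has a nontrivial null space, so distinct signals produce identical measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 100                     # fewer measurements than unknowns
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = A @ x

# With m < n, A has a nontrivial null space: rows m..n-1 of Vt from the
# SVD span it, so x and x + v are indistinguishable from measurements.
_, _, Vt = np.linalg.svd(A)
v = Vt[-1]
assert np.allclose(A @ (x + v), y)
```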
Compressed Sensing
Want to estimate x ∈ R^n from m ≪ n linear measurements.
Suggestion: find the "most compressible" image that fits the measurements.
Three questions:
- How should we formalize that an image is "compressible"?
- What algorithm should we use for recovery?
- How should we choose the measurement matrix A?
Compressed Sensing
Standard compressed sensing: sparsity in some basis.
- Also: sparsity plus other constraints ("structured sparsity").
This talk: a new method.
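As a minimal illustration of "sparsity in some basis" (using the DCT as the basis; that choice is illustrative, not from the slides): a signal built from a few transform coefficients looks dense in signal space but is exactly k-sparse in the transform domain.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, k = 256, 10

# Build a signal that is exactly k-sparse in the DCT basis.
c_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
c_true[support] = rng.standard_normal(k)
x = idct(c_true, norm="ortho")     # looks dense as a raw signal

# Transforming back reveals the sparsity: only k nonzero coefficients.
c = dct(x, norm="ortho")
assert np.sum(np.abs(c) > 1e-8) == k
```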
Standard Compressed Sensing Formalism: "compressible" = "sparse"
Want to estimate x from y = Ax + η, for A ∈ R^{m×n}.
- For this talk: ignore the noise η, so y = Ax.
Algorithm for recovery: LASSO
    min_x ‖Ax − y‖₂² + λ‖x‖₁
Goal: an estimate x̂ with
    ‖x − x̂‖₂ ≤ O(1) · min_{k-sparse x′} ‖x − x′‖₂    (1)
- Reconstruction accuracy is proportional to model accuracy.
Theorem [Candès-Romberg-Tao 2006]
- A satisfies the REC (Restricted Eigenvalue Condition) if for all k-sparse x, ‖Ax‖₂ ≥ γ‖x‖₂.
- If A satisfies the REC, then LASSO recovers an x̂ satisfying (1).
- m = O(k log(n/k)) suffices for A to satisfy the REC.
Theorem [Do Ba-Indyk-Price-Woodruff 2010]
- m = Θ(k log(n/k)) is optimal.
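The LASSO recovery above can be sketched with a plain iterative soft-thresholding (ISTA) solver. The problem sizes, the regularization weight, and the use of ISTA are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def ista(A, y, lam, steps=5000):
    """Minimize ||Ax - y||_2^2 + lam * ||x||_1 by iterative soft thresholding."""
    L = 2 * np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = 2 * A.T @ (A @ x - y)           # gradient of the quadratic term
        z = x - g / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, k = 200, 5
m = 60                                       # on the order of k log(n/k), far below n
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m) # i.i.d. Gaussian rows: satisfy REC w.h.p.
y = A @ x_true                               # noiseless measurements

x_hat = ista(A, y, lam=0.01)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
assert rel_err < 0.1                         # near-exact recovery from m << n measurements
```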
![Page 41: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from](https://reader031.vdocument.in/reader031/viewer/2022041900/5e5fdde0102e751cf4332d9f/html5/thumbnails/41.jpg)
Alternatives to sparsity?

- The basis needs to be handcrafted.
- Sparsity does not capture the set of plausible vectors tightly: it is too simplistic.
- It ignores a lot of domain-dependent structure.
- Billions of natural images and millions of MRIs are collected each year.
- This is an opportunity to improve our structural understanding.
- Better structural understanding should give fewer measurements.
- Best way to model images in 2017?
  - Deep convolutional neural networks.
  - In particular: generative models.
Generative Models

- Want to model a distribution D of images.
- Use a function G : R^k → R^n.
- When z ∼ N(0, I_k), ideally G(z) ∼ D.
- An active area of machine learning research over the last few years.
- Generative Adversarial Networks (GANs) [Goodfellow et al. 2014]:
  - A competition between a generator and a discriminator.
  - W-GAN, BEGAN, InfoGAN, DCGAN, ...
- Variational Auto-Encoders (VAEs) [Kingma & Welling 2013]:
  - Blurrier samples, but maybe better coverage of the space.

Suggestion for compressed sensing: replace "x is k-sparse" by "x is in the range of G : R^k → R^n".
Our Results: "Compressible" = "near range(G)"

- Want to estimate x from y = Ax, for A ∈ R^{m×n}.
- We are given a generative model G : R^k → R^n.
- Algorithm for recovery:
  - Solve min_z ‖AG(z) − y‖₂² + λ‖z‖₂².
  - Backprop to get gradients with respect to z.
  - Optimize with gradient descent.
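This recovery loop can be sketched in a few lines of NumPy. Everything here is an illustrative stand-in, not from the slides: a one-layer random-ReLU "generator" replaces a trained DCGAN/VAE, the gradient of the objective is written out by hand instead of using autodiff, and the dimensions, λ, step size, and restart count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k, m = 200, 5, 40                  # signal dim, latent dim, measurements
lam = 1e-3                            # weight of the ||z||_2^2 regularizer

# toy one-layer ReLU "generator" G(z) = relu(W z), W fixed at random
W = rng.normal(size=(n, k)) / np.sqrt(k)
relu = lambda u: np.maximum(u, 0.0)
G = lambda z: relu(W @ z)

# ground truth in range(G); Gaussian measurements y = A G(z*)
z_star = rng.normal(size=k)
x = G(z_star)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

def loss(z):
    r = A @ G(z) - y
    return r @ r + lam * (z @ z)

def grad(z):
    u = W @ z
    r = A @ G(z) - y
    # chain rule through the ReLU: J_G(z)^T v = W^T (1[u > 0] * v)
    return 2 * W.T @ ((u > 0) * (A.T @ r)) + 2 * lam * z

step = 0.5 / (np.linalg.norm(A, 2) * np.linalg.norm(W, 2)) ** 2
best_z, best_val = None, np.inf
for _ in range(3):                    # a few random restarts: the problem is nonconvex
    z = rng.normal(size=k)
    for _ in range(1000):
        z = z - step * grad(z)
    if loss(z) < best_val:
        best_z, best_val = z, loss(z)

x_hat = G(best_z)
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

In practice the slides' point holds even in this toy setting: plain gradient descent on z, with a couple of restarts, drives the objective far below that of the trivial guess z = 0.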
Our Results: "Compressible" = "near range(G)"

- Goal: find x̂ with
      ‖x − x̂‖₂ ≤ O(1) · min_{x′=G(z′), ‖z′‖₂≤r} ‖x − x′‖₂ + δ    (2)
  - Reconstruction accuracy is proportional to model accuracy.
- Main Theorem I: m = O(kd log n) suffices for (2),
  - when G is a d-layer ReLU-based neural network
  - and A is a random Gaussian matrix.
- Main Theorem II:
  - For any L-Lipschitz G, m = O(k log(rL/δ)) suffices.
  - Morally the same O(kd log n) bound: L, r, δ⁻¹ ∼ n^{O(d)}.
- Convergence:
  - Just like training, there is no proof that gradient descent converges.
  - An approximate solution approximately gives (2).
  - Can check that ‖x − x̂‖₂ is small.
  - In practice, the optimization error is negligible.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20
![Page 77: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from](https://reader031.vdocument.in/reader031/viewer/2022041900/5e5fdde0102e751cf4332d9f/html5/thumbnails/77.jpg)
Our Results“Compressible” = “near range(G)”
Goal: x with
‖x − x‖2 ≤ O(1) · minx ′=G(z ′),‖z ′‖2≤r
‖x − x ′‖2+δ (2)
I Reconstruction accuracy proportional to model accuracy.
Main Theorem I: m = O(kd log n) suffices for (2).I G is a d-layer ReLU-based neural network.I When A is random Gaussian matrix.
Main Theorem II:I For any Lipschitz G , m = O(k log rL
δ ) suffices.I Morally the same O(kd log n) bound: L, r , δ−1 ∼ nO(d)
Convergence:
I Just like training, no proof that gradient descent convergesI Approximate solution approximately gives (2)I Can check that ‖x − x‖2 is small.I In practice, optimization error is negligible.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20
![Page 78: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from](https://reader031.vdocument.in/reader031/viewer/2022041900/5e5fdde0102e751cf4332d9f/html5/thumbnails/78.jpg)
Our Results“Compressible” = “near range(G)”
Goal: x with
‖x − x‖2 ≤ O(1) · minx ′=G(z ′),‖z ′‖2≤r
‖x − x ′‖2+δ (2)
I Reconstruction accuracy proportional to model accuracy.
Main Theorem I: m = O(kd log n) suffices for (2).I G is a d-layer ReLU-based neural network.I When A is random Gaussian matrix.
Main Theorem II:I For any Lipschitz G , m = O(k log rL
δ ) suffices.I Morally the same O(kd log n) bound: L, r , δ−1 ∼ nO(d)
Convergence:
I Just like training, no proof that gradient descent convergesI Approximate solution approximately gives (2)I Can check that ‖x − x‖2 is small.I In practice, optimization error is negligible.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20
![Page 79: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from](https://reader031.vdocument.in/reader031/viewer/2022041900/5e5fdde0102e751cf4332d9f/html5/thumbnails/79.jpg)
Our Results“Compressible” = “near range(G)”
Goal: x with
‖x − x‖2 ≤ O(1) · minx ′=G(z ′),‖z ′‖2≤r
‖x − x ′‖2+δ (2)
I Reconstruction accuracy proportional to model accuracy.
Main Theorem I: m = O(kd log n) suffices for (2).I G is a d-layer ReLU-based neural network.I When A is random Gaussian matrix.
Main Theorem II:I For any Lipschitz G , m = O(k log rL
δ ) suffices.I Morally the same O(kd log n) bound: L, r , δ−1 ∼ nO(d)
Convergence:I Just like training, no proof that gradient descent converges
I Approximate solution approximately gives (2)I Can check that ‖x − x‖2 is small.I In practice, optimization error is negligible.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20
![Page 80: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from](https://reader031.vdocument.in/reader031/viewer/2022041900/5e5fdde0102e751cf4332d9f/html5/thumbnails/80.jpg)
Our Results“Compressible” = “near range(G)”
Goal: x with
‖x − x‖2 ≤ O(1) · minx ′=G(z ′),‖z ′‖2≤r
‖x − x ′‖2+δ (2)
I Reconstruction accuracy proportional to model accuracy.
Main Theorem I: m = O(kd log n) suffices for (2).I G is a d-layer ReLU-based neural network.I When A is random Gaussian matrix.
Main Theorem II:I For any Lipschitz G , m = O(k log rL
δ ) suffices.I Morally the same O(kd log n) bound: L, r , δ−1 ∼ nO(d)
Convergence:I Just like training, no proof that gradient descent convergesI Approximate solution approximately gives (2)
I Can check that ‖x − x‖2 is small.I In practice, optimization error is negligible.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20
![Page 81: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from](https://reader031.vdocument.in/reader031/viewer/2022041900/5e5fdde0102e751cf4332d9f/html5/thumbnails/81.jpg)
Our Results: "Compressible" = "near range(G)"

Goal: find x̂ with

‖x̂ − x‖2 ≤ O(1) · min_{x′ = G(z′), ‖z′‖2 ≤ r} ‖x − x′‖2 + δ    (2)

- Reconstruction accuracy is proportional to model accuracy.

Main Theorem I: m = O(kd log n) suffices for (2), when
- G is a d-layer ReLU-based neural network, and
- A is a random Gaussian matrix.

Main Theorem II:
- For any L-Lipschitz G, m = O(k log(rL/δ)) suffices.
- Morally the same O(kd log n) bound: L, r, δ^{-1} ∼ n^{O(d)}.

Convergence:
- Just like training, there is no proof that gradient descent converges.
- An approximate solution approximately achieves (2).
- Can check that the residual ‖Ax̂ − y‖2 is small.
- In practice, the optimization error is negligible.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20
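The recovery behind (2) amounts to minimizing ‖AG(z) − y‖² over z by gradient descent. A minimal numpy sketch of that loop, using a toy one-layer ReLU generator whose weights W are random stand-ins (not a trained model), with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 5, 100, 25

# Toy one-layer ReLU "generator" G(z) = ReLU(W z).
# W is a random stand-in for a trained network's weights.
W = rng.standard_normal((n, k))
G = lambda z: np.maximum(W @ z, 0.0)

x = G(rng.standard_normal(k))                  # true signal, in range(G)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurements
y = A @ x                                      # noiseless observations

def recover(steps=5000, lr=2e-4):
    """Gradient descent on f(z) = ||A G(z) - y||^2.
    As with training, there is no convergence proof; we just run it."""
    z = rng.standard_normal(k)
    for _ in range(steps):
        h = W @ z
        r = A @ np.maximum(h, 0.0) - y
        grad = 2 * W.T @ ((h > 0) * (A.T @ r))  # chain rule through the ReLU
        z -= lr * grad
    return z

# The objective is non-convex, so use a few random restarts and keep the
# z with the smallest measurement residual (which we *can* evaluate).
z_hat = min((recover() for _ in range(3)),
            key=lambda z: np.linalg.norm(A @ G(z) - y))
print("relative error:", np.linalg.norm(G(z_hat) - x) / np.linalg.norm(x))
```

The restart-and-keep-best step reflects the slide's point: although there is no convergence proof, the achieved residual is checkable, and small residual certifies an approximately optimal solution.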
Experimental Results

Faces: n = 64 × 64 × 3 = 12288, m = 500.

[Image grid: rows show Original, Lasso (DCT), Lasso (Wavelet), and DCGAN reconstructions.]

MNIST: n = 28 × 28 = 784, m = 100.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 11 / 20
Experimental Results

[Plot, MNIST: reconstruction error (per pixel) vs. number of measurements (10 to 750), comparing Lasso, VAE, and VAE+Reg.]

[Plot, Faces: reconstruction error (per pixel) vs. number of measurements (20 to 10000), comparing Lasso (DCT), Lasso (Wavelet), DCGAN, and DCGAN+Reg.]

For fixed G, we have fixed k, so the error stops improving after some point.

Larger m should use a higher-capacity G, so that min_z ‖x − G(z)‖ is smaller.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 12 / 20
Proof Outline (ReLU-based networks)

Show range(G) lies within a union of n^{dk} k-dimensional hyperplanes.
- Then analogous to the proof for sparsity: (n choose k) ≤ 2^{k log(n/k)} hyperplanes.
- So dk log n Gaussian measurements suffice.

ReLU-based network:
- Each layer is z → ReLU(A_i z).
- ReLU(y)_i = y_i if y_i ≥ 0, and 0 otherwise.

Input to layer 1: a single k-dimensional hyperplane.

Lemma: Layer 1's output lies within a union of n^k k-dimensional hyperplanes.

Induction: the final output lies within a union of n^{dk} k-dimensional hyperplanes.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 13 / 20
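The counting step can be checked numerically: sampling many inputs z and recording which sign patterns sign(A_1 z) actually occur shows the count stays polynomial in n, far below the 2^n patterns that are a priori possible. A small sketch with illustrative sizes (n = 8, k = 2):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 2
A1 = rng.standard_normal((n, k))

# Sample many z in R^k and record the distinct sign patterns of A1 z.
Z = rng.standard_normal((100_000, k))
patterns = {tuple(row) for row in (Z @ A1.T > 0)}

# Arrangement bound for k = 2 from the next slide: n lines partition the
# plane into at most (n^2 + n + 2)/2 regions, each with one sign pattern.
bound = (n * n + n + 2) // 2
print(len(patterns), "patterns observed; bound:", bound, "; 2^n =", 2 ** n)
```

The observed pattern count sits well under both the polynomial region bound and 2^n, which is exactly what makes the union bound over hyperplanes affordable.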
Proof of Lemma

Lemma: Layer 1's output lies within a union of n^k k-dimensional hyperplanes.

- A_1 z is a k-dimensional hyperplane in R^n.
- ReLU(A_1 z) is linear within any region where sign(A_1 z) is constant.
- How many different patterns can sign(A_1 z) take?
- k = 2 version: how many regions can n lines partition the plane into?
  - 1 + (1 + 2 + ... + n) = (n^2 + n + 2)/2.
  - In general, n half-spaces divide R^k into fewer than n^k regions.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 14 / 20
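The region count generalizes: n hyperplanes in general position partition R^k into at most sum_{i=0}^{k} C(n, i) regions, which reduces to the slide's (n^2 + n + 2)/2 when k = 2 and stays below n^k for n, k ≥ 2. A quick check of both facts:

```python
from math import comb

def max_regions(n, k):
    """Maximum number of regions that n hyperplanes in general position
    can partition R^k into: sum_{i=0}^{k} C(n, i)."""
    return sum(comb(n, i) for i in range(k + 1))

# k = 2 matches the slide's formula (n^2 + n + 2) / 2 ...
for n in range(1, 50):
    assert max_regions(n, 2) == (n * n + n + 2) // 2

# ... and the count stays below n^k (checked here for n, k >= 2).
for n in range(2, 30):
    for k in range(2, 6):
        assert max_regions(n, k) <= n ** k
```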
Proof Outline (Lipschitz networks)

Need: if x1, x2 ∈ range(G) are very different, then ‖Ax1 − Ax2‖ is large.
- Hence we can distinguish them despite noise.

True for fixed x1, x2 with probability 1 − e^{−Ω(m)}.

Apply to a δ-cover of range(G).
- Comes from a δ/L-cover of domain(G).
- Size (rL/δ)^k: the union bound works for m = O(k log(rL/δ)).

x1, x2 lie close to the cover ⇒ additive δ‖A‖ loss:

‖Ax1 − Ax2‖ ≥ 0.9 ‖x1 − x2‖ − O(δn)

Hence

‖x̂ − x‖2 ≤ C · min_{x′ ∈ range(G)} ‖x′ − x‖2 + δn
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 15 / 20
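The "fixed pair" step is standard Gaussian concentration: for A with N(0, 1/m) entries and a fixed vector v = x1 − x2, ‖Av‖ concentrates around ‖v‖ with failure probability e^{−Ω(m)}. A quick numerical illustration (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 2000

# A with N(0, 1/m) entries, so E||Av||^2 = ||v||^2 for any fixed v.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Check distance preservation on random pairs x1, x2.
worst = 0.0
for _ in range(50):
    v = rng.standard_normal(n) - rng.standard_normal(n)   # x1 - x2
    ratio = np.linalg.norm(A @ v) / np.linalg.norm(v)
    worst = max(worst, abs(ratio - 1.0))
print("largest deviation of ||A(x1 - x2)|| / ||x1 - x2|| from 1:", worst)
```

Because the failure probability per pair is exponentially small in m, a union bound over the (rL/δ)^k cover points only costs m = O(k log(rL/δ)) measurements.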
Non-Gaussian measurements

- A = subset of the Fourier matrix: MRI
- A = Gaussian blur: super-resolution
- A = diagonal with zeros: inpainting
- The same algorithm applies to all of these, unifying the problems.
  - The guarantee only holds if G and A are "incoherent".
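Each of these measurement processes is linear, so all three fit the same y = Ax recovery setup. A toy sketch (1-D signals, hypothetical sizes) constructing each A explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = rng.standard_normal(n)  # toy 1-D "image"

# Inpainting: A is diagonal with zeros -- only unmasked pixels are observed.
mask = rng.random(n) < 0.5
A_inpaint = np.diag(mask.astype(float))

# MRI: keep a random subset of rows of the (unitary) DFT matrix.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A_mri = F[rng.choice(n, size=n // 2, replace=False)]

# Super-resolution: Gaussian blur is a circular convolution, hence linear;
# row s of A is the kernel centered at position s.
t = np.arange(n) - n // 2
kernel = np.exp(-t**2 / 2.0)
kernel /= kernel.sum()
A_blur = np.array([np.roll(kernel, s - n // 2) for s in range(n)])

# All three are just y = Ax with a different A.
y_inpaint, y_mri, y_blur = A_inpaint @ x, A_mri @ x, A_blur @ x
```

Because the forward operator is a matrix in every case, the same gradient-based recovery over z can be reused unchanged, only swapping A.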
Inpainting
Super-resolution
Summary

- Number of measurements required to estimate an image
  = (information content of image) / (new information per measurement).
- Generative models bound the information content by O(kd log n).
  - Open: do real networks require the linear dependence on d?
- Generative models are differentiable ⟹ we can optimize in practice.
- Gaussian measurements ensure each measurement carries independent information.
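To make the counting concrete, here is a rough plug-in of the O(kd log n) bound, with hypothetical sizes (a DCGAN-style generator on 64×64 RGB images) and the constant factor dropped:

```python
import math

n = 64 * 64 * 3   # ambient dimension: pixels in a 64x64 RGB image
k = 100           # latent dimension of the generator (DCGAN-style)
d = 5             # network depth

# O(kd log n) with the constant dropped -- far fewer than n measurements.
m_bound = k * d * math.log(n)
print(round(m_bound), "measurements vs", n, "pixels")
```

Even with the depth factor included, the bound is well under the ambient dimension, which is the whole point of the compressed-sensing setup.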
Future work

- More uses of differentiable compression?
- More applications of the distance to range(G)?
  - A metric for how good a generative model is.
  - An objective for training new generative models.
    - Bojanowski et al. 2017: impressive results with this.
- A computable proxy for the Lipschitz parameter?
  - Real networks may be much better than the worst case n^d.
- A competitive guarantee for non-Gaussian A?
  - Even nonlinear A?
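The "distance to range(G)" quantity is itself computable by gradient descent, which is what makes it usable as a metric or training objective. A minimal sketch with a linear toy generator G(z) = Bz, chosen so the answer has a closed form to check against; a real G would be a neural net, with the gradient supplied by autodiff:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 4, 64
B = rng.standard_normal((n, k))       # toy linear "generator" G(z) = Bz

def dist_to_range(x, steps=2000, lr=1e-3):
    """min_z ||G(z) - x|| via gradient descent on 0.5 * ||Bz - x||^2."""
    z = np.zeros(k)
    for _ in range(steps):
        z -= lr * B.T @ (B @ z - x)   # gradient of the squared residual
    return np.linalg.norm(B @ z - x)

x = rng.standard_normal(n)

# Closed-form check: residual of the orthogonal projection onto col(B).
proj = B @ np.linalg.solve(B.T @ B, B.T @ x)
exact = np.linalg.norm(x - proj)
print(dist_to_range(x), exact)
```

For the linear toy the descent matches the projection residual; for a neural G the same loop is nonconvex, so random restarts over z are typically used.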
Thank You