
Page 1

AN INTRODUCTION TO COMPRESSIVE SENSING

Rodrigo B. Platte

School of Mathematical and Statistical Sciences

APM/EEE598 Reverse Engineering of Complex Dynamical Networks

Page 2

OUTLINE

1 INTRODUCTION

2 INCOHERENCE

3 RIP

4 POLYNOMIAL MATRICES

5 DYNAMICAL SYSTEMS


Page 3

THE RICE DSP WEBSITE

Resources for papers, codes, and more:

http://www.dsp.ece.rice.edu/cs/

References:
- Emmanuel Candès, Compressive sampling. (Proc. International Congress of Mathematicians, 3, pp. 1433–1452, Madrid, Spain, 2006)
- Richard Baraniuk, A lecture on compressive sensing. (IEEE Signal Processing Magazine, July 2007)
- Emmanuel Candès and Michael Wakin, An introduction to compressive sampling. (IEEE Signal Processing Magazine, 25(2), pp. 21–30, March 2008)

m-files and some links are available on the course page.


Page 4

VIDEO LECTURES

Some well-known CS people:
- Emmanuel Candès (Stanford University): sequence of papers with Terence Tao and Justin Romberg in 2004.
- David Donoho (Stanford University)
- Richard Baraniuk (Rice University)
- Ronald A. DeVore (Texas A&M)
- Anna C. Gilbert (Univ. of Michigan)
- Jared Tanner (University of Edinburgh)
- . . .

A good way to learn the basics of CS is to watch these IMA video lectures:

http://www.ima.umn.edu/videos/ → IMA New Directions short courses → Compressive Sampling and Frontiers in Signal Processing (two weeks long)


Page 5

UNDERDETERMINED SYSTEMS

[Image: merchandise from cafepress.com, $20]

Page 6

UNDERDETERMINED SYSTEMS

From the IEEE Signal Processing Magazine [p. 119], July 2007:

SOLUTION

DESIGNING A STABLE MEASUREMENT MATRIX

The measurement matrix Φ must allow the reconstruction of the length-N signal x from M < N measurements (the vector y). Since M < N, this problem appears ill-conditioned. If, however, x is K-sparse and the K locations of the nonzero coefficients in s are known, then the problem can be solved provided M ≥ K. A necessary and sufficient condition for this simplified problem to be well conditioned is that, for any vector v sharing the same K nonzero entries as s and for some ε > 0,

1 − ε ≤ ‖Θv‖₂ / ‖v‖₂ ≤ 1 + ε.  (3)

That is, the matrix Θ must preserve the lengths of these particular K-sparse vectors. Of course, in general the locations of the K nonzero entries in s are not known. However, a sufficient condition for a stable solution for both K-sparse and compressible signals is that Θ satisfies (3) for an arbitrary 3K-sparse vector v. This condition is referred to as the restricted isometry property (RIP) [1]. A related condition, referred to as incoherence, requires that the rows {φⱼ} of Φ cannot sparsely represent the columns {ψᵢ} of Ψ (and vice versa).

Direct construction of a measurement matrix Φ such that Θ = ΦΨ has the RIP requires verifying (3) for each of the (N choose K) possible combinations of K nonzero entries in the vector v of length N. However, both the RIP and incoherence can be achieved with high probability simply by selecting Φ as a random matrix. For instance, let the matrix elements φⱼ,ᵢ be independent and identically distributed (iid) random variables from a Gaussian probability density function with mean zero and variance 1/N [1], [2], [4]. Then the measurements y are merely M different randomly weighted linear combinations of the elements of x, as illustrated in Figure 1(a). The Gaussian measurement matrix Φ has two interesting and useful properties:

- The matrix Φ is incoherent with the basis Ψ = I of delta spikes with high probability. More specifically, an M × N iid Gaussian matrix Θ = ΦI = Φ can be shown to have the RIP with high probability if M ≥ cK log(N/K), with c a small constant [1], [2], [4]. Therefore, K-sparse and compressible signals of length N can be recovered from only M ≥ cK log(N/K) ≪ N random Gaussian measurements.
- The matrix Φ is universal in the sense that Θ = ΦΨ will be iid Gaussian and thus have the RIP with high probability regardless of the choice of orthonormal basis Ψ.

DESIGNING A SIGNAL RECONSTRUCTION ALGORITHM

The signal reconstruction algorithm must take the M measurements in the vector y, the random measurement matrix Φ (or the random seed that generated it), and the basis Ψ and reconstruct the length-N signal x or, equivalently, its sparse coefficient vector s. For K-sparse signals, since M < N in (2) there are infinitely many s′ that satisfy Θs′ = y. This is because if Θs = y then Θ(s + r) = y for any vector r in the null space N(Θ) of Θ. Therefore, the signal reconstruction algorithm aims to find the signal's sparse coefficient vector in the (N − M)-dimensional translated null space H = N(Θ) + s.

- Minimum ℓ2 norm reconstruction: Define the ℓp norm of the vector s as (‖s‖p)^p = Σᵢ₌₁ᴺ |sᵢ|^p. The classical approach to inverse problems of this type is to find the vector in the translated null space with the smallest ℓ2 norm (energy) by solving

  ŝ = argmin ‖s′‖₂ such that Θs′ = y.  (4)

  This optimization has the convenient closed-form solution ŝ = Θᵀ(ΘΘᵀ)⁻¹y. Unfortunately, ℓ2 minimization will almost never find a K-sparse solution, returning instead a nonsparse ŝ with many nonzero elements.

- Minimum ℓ0 norm reconstruction: Since the ℓ2 norm measures signal energy and not signal sparsity, consider the ℓ0 norm that counts the number of non-zero entries in s. (Hence a K-sparse vector has ℓ0 norm equal to K.) The modified optimization

  ŝ = argmin ‖s′‖₀ such that Θs′ = y  (5)

  can recover a K-sparse signal exactly with high probability using only M = K + 1 iid Gaussian measurements [5]. Unfortunately, solving (5) is both numerically unstable and NP-complete, requiring an exhaustive enumeration of all (N choose K) possible locations of the nonzero entries in s.

- Minimum ℓ1 norm reconstruction: Surprisingly, optimization based on the ℓ1 norm

  ŝ = argmin ‖s′‖₁ such that Θs′ = y  (6)

[FIG1] (a) Compressive sensing measurement process with a random Gaussian measurement matrix Φ and discrete cosine transform (DCT) matrix Ψ. The vector of coefficients s is sparse with K = 4. (b) Measurement process with Θ = ΦΨ. There are four columns that correspond to nonzero sᵢ coefficients; the measurement vector y is a linear combination of these columns.

Solve

Ax = b,  where A is m × N and m < N.

In CS we want to obtain sparse solutions, i.e., xⱼ ≈ 0 for several j's.

One option: minimize ‖x‖ℓ1 subject to Ax = b.

‖x‖ℓp = (|x₀|^p + |x₁|^p + ⋯ + |x_N|^p)^{1/p}

Why p = 1? Remark: the locations of the nonzero xⱼ's are not known in advance.
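A minimal MATLAB sketch of this ℓ1 minimization using the CVX toolbox; the problem sizes and the random data below are illustrative, not from the slides:

```matlab
% l1 minimization (basis pursuit) with CVX -- illustrative sizes and data.
m = 50; N = 256; K = 5;
A  = randn(m, N);                    % random measurement matrix
x0 = zeros(N, 1);
x0(randperm(N, K)) = randn(K, 1);    % K-sparse ground truth
b  = A * x0;                         % noiseless measurements

cvx_begin quiet
    variable x(N)
    minimize( norm(x, 1) )           % l1 objective promotes sparsity
    subject to
        A * x == b;                  % exact data fit
cvx_end

fprintf('relative error: %.2e\n', norm(x - x0) / norm(x0));
```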


Page 9

WHY ℓ1?

Unit balls: ℓ0, ℓ1/2, ℓ1, ℓ2, ℓ4, ℓ∞

‖x‖ℓp = (|x₀|^p + ⋯ + |x_N|^p)^{1/p}, or, for 0 ≤ p < 1, ‖x‖ℓp = |x₀|^p + ⋯ + |x_N|^p.

- ‖x‖ℓ0 = number of nonzero entries in x: ideal (?) but leads to an NP-complete problem.
- ℓp with p < 1 is not a norm (the triangle inequality fails). Also not practical.
- ℓ2 is computationally easy but does not lead to sparse solutions. The unique solution of minimum ℓ2 norm is (pseudo-inverse)

  x = A′(AA′)⁻¹ b  (checked numerically below).
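A quick MATLAB check of this closed form; A and b are illustrative:

```matlab
% Minimum l2-norm solution of an underdetermined system (illustrative data).
m = 50; N = 256;
A = randn(m, N); b = randn(m, 1);
x_l2 = A' * ((A * A') \ b);      % x = A'(AA')^{-1} b
% More robust equivalent: x_l2 = pinv(A) * b;
norm(A * x_l2 - b)               % ~1e-13: the constraint holds
```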

Page 10

SPARSITY AND THE `1-NORM (2D CASE)

EXAMPLE

a1x1 + a2x2 = b1

[Figure: the solution set of a₁x₁ + a₂x₂ = b₁, a line in the (x₁, x₂) plane; both axes range over [−1.5, 1.5]]


Page 11

SPARSITY AND THE `1-NORM (2D CASE)

EXAMPLE – `2

min_{x₁,x₂} √(x₁² + x₂²) subject to a₁x₁ + a₂x₂ = b₁

[Figure: the constraint line and the ℓ2 ball √(x₁² + x₂²) = 0.8944; the regions √(x₁² + x₂²) < 0.8944 and √(x₁² + x₂²) > 0.8944 are marked]


Page 12

SPARSITY AND THE `1-NORM (2D CASE)

EXAMPLE – `1

min_{x₁,x₂} |x₁| + |x₂| subject to a₁x₁ + a₂x₂ = b₁

[Figure: the constraint line and the ℓ1 ball; the regions |x₁| + |x₂| < 1 and |x₁| + |x₂| > 1 are marked. The smallest ℓ1 ball touching the line does so at a vertex, i.e., at a sparse point.]


Page 13

MINIMIZING ‖x‖ℓ2

Recall Parseval's formula: if f(t) = Σ_{k=0}^{N} x_k φ_k(t), with the φ_k orthonormal in L², then

‖f‖₂² = Σ_{k=0}^{N} |x_k|².

Also, ℓ2 penalizes large values heavily, while small values barely affect the norm. In general it will not give a sparse representation!

See the MATLAB experiment (Test-l1-l2.m); a sketch follows below.
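The actual Test-l1-l2.m is on the course page; a hedged sketch of the comparison it presumably makes (sizes assumed):

```matlab
% Contrast minimum-l2 and minimum-l1 reconstructions (assumed sizes).
m = 40; N = 200; K = 4;
A  = randn(m, N);
x0 = zeros(N, 1); x0(randperm(N, K)) = 1;   % sparse ground truth
b  = A * x0;

x_l2 = pinv(A) * b;              % minimum-energy solution: many small nonzeros

cvx_begin quiet                  % minimum-l1 solution
    variable x_l1(N)
    minimize( norm(x_l1, 1) )
    subject to
        A * x_l1 == b;
cvx_end

stem(x_l2); hold on; stem(x_l1); % l1 recovers the spikes, l2 smears the energy
legend('min l2', 'min l1');
```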


Page 14

MINIMIZING ‖x‖ℓ1

- MATLAB experiment (Test-l1-l2.m). Note: the solution may not be unique!
- Solve an optimization problem (in practice O(N³) operations).
- Several codes are available for CS; see: http://www.dsp.ece.rice.edu/cs/


Page 15

A SIMPLE EXAMPLE

[Figure: a signal f(t) on [0, 1] with its sample points]

f(t) = (1/√N) Σ_{k=1}^{N} x_k sin(πkt)

N = 1024, number of samples: m = 50


Page 16

A SIMPLE EXAMPLE

System of equations:

f(t_j) = (1/√1024) Σ_{k=1}^{1024} x_k sin(πk t_j),  j = 1, …, 50.

SOLVE:

min ‖x‖ℓ1 subject to Ax = b,

where A has 50 rows and 1024 columns,

A_{j,k} = (1/√1024) sin(πk t_j),  b_j = f(t_j).

MATLAB code on Blackboard: "SineExample.m" (uses CVX); a sketch is below.
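A sketch along the lines of SineExample.m; the sample points, sparsity level, and random coefficients are my assumptions:

```matlab
% Sketch of SineExample.m: recover sparse sine coefficients from 50 samples.
N = 1024; m = 50; K = 6;
t  = rand(m, 1);                         % sample points t_j in (0,1) (assumed)
A  = sin(pi * t * (1:N)) / sqrt(N);      % A(j,k) = sin(pi*k*t_j)/sqrt(1024)
x0 = zeros(N, 1); x0(randperm(N, K)) = randn(K, 1);
b  = A * x0;                             % b_j = f(t_j)

cvx_begin quiet
    variable x(N)
    minimize( norm(x, 1) )
    subject to
        A * x == b;
cvx_end

norm(x - x0) / norm(x0)                  % ~1e-11 on the slides
```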


Page 17

A SIMPLE EXAMPLE

[Figure: original and decoded coefficient vectors, indices 0–1024]

Recovery of the coefficients is accurate to almost machine precision!

‖x − x₀‖₂ / ‖x₀‖₂ = 7.9611… × 10⁻¹¹


Page 18

WHY SPARSITY?

Sparsity is often a good regularization criterion because most signals have structure.

Take a picture! (This one has 512 × 512 pixels.)


Page 19

WHY SPARSITY?

Sparsity is often a good regularization criterion because most signals have structure.

Gray scale please!


Page 20

WHY SPARSITY?

Sparsity is often a good regularization criterion because most signals have structure.

[Figure: map of the wavelet coefficients of the 512 × 512 image]

Find the wavelet coefficients: Daubechies (6,2), 3 vanishing moments.


Page 21

WHY SPARSITY?

Sparsity is often a good regularization criterion because most signals have structure.

Make 75% of the coefficients zero.


Page 22

WHY SPARSITY?

Sparsity is often a good regularization criterion because most signals have structure.

Restored image from 25% of the coefficients.


Page 23

WHY SPARSITY?

Sparsity is often a good regularization criterion because most signals have structure.

[Figure: 512 × 512 image]

Relative error ≈ 3%.


Page 24

WHY SPARSITY?

Sparsity is often a good regularization criterion because most signals have structure.

[Figure: 512 × 512 image]

Keep only 2% of the coefficients, set 98% to zero.


Page 25

WHY SPARSITY?

Sparsity is often a good regularization criterion because most signals have structure.

Reconstructed image from 2% of the coefficients.


Page 26

SPARSITY IS NOT SUFFICIENT FOR CS TO WORK!

Example: A is a finite-difference matrix; A maps a sparse vector x into another sparse vector y.

\[
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \\ -1 \\ 0 \\ \vdots \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
-1 & 1 & 0 & \cdots & 0 \\
0 & -1 & 1 & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & -1 & 1
\end{pmatrix}
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \\ 0 \\ 0 \\ \vdots \end{pmatrix}
\]

A few samples of y are likely to be all zeros!
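A tiny MATLAB illustration of this failure mode; the sizes and sampled rows are arbitrary choices:

```matlab
% Sampling y = A*x at random positions usually hits only zeros.
N = 100;
A = spdiags([-ones(N,1), ones(N,1)], [-1, 0], N, N);  % difference matrix
x = zeros(N, 1); x(40) = 1;      % 1-sparse input
y = A * x;                       % output is 2-sparse: y(40)=1, y(41)=-1
rows = randperm(N, 5);           % five random measurements of y
full(y(rows))                    % almost surely all zeros
```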


Page 28

SPARSITY IS NOT SUFFICIENT FOR CS TO WORK!

The image below is sparse in both the physical domain and in its Haar wavelet coefficients.


Page 29

A GENERAL APPROACH

Sample coefficients in a representation by random vectors.

y = Σ_{k=1}^{N} ⟨y, ψ_k⟩ ψ_k,

where the ψ_k are obtained from orthogonalized Gaussian matrices.

Ax = y ⇒ Ψ′Ax = Ψ′y ⇒ Θx = z
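A minimal sketch of this projection step, assuming Ψ is built by orthonormalizing a Gaussian matrix and m of its columns are used as measurement directions:

```matlab
% Project Ax = y onto m directions from an orthogonalized Gaussian matrix.
N = 256; m = 50;
A    = eye(N);              % placeholder system (signal sparse in this basis)
y    = randn(N, 1);         % placeholder data
Psi  = orth(randn(N));      % orthonormalized Gaussian matrix
PsiM = Psi(:, 1:m);         % keep m columns as measurement vectors
Theta = PsiM' * A;          % m x N compressed system  Theta * x = z
z     = PsiM' * y;
```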


Page 30

INCOHERENCE + SPARSITY IS NEEDED

INCOHERENCE

[Figure: a signal with a "sparse representation" in one basis should be measured ("sample here") in an incoherent basis]


Page 31

INCOHERENCE + SPARSITY IS NEEDED


THEOREM (CANDÈS, ROMBERG, TAO)
Assume that x is S-sparse and that we are given K Fourier coefficients with frequencies selected uniformly at random. Suppose that the number of observations obeys

K ≥ C · S · log N.

Then minimizing ℓ1 reconstructs x exactly with overwhelming probability. In detail, if the constant C is of the form 22(δ + 1), then the probability of success exceeds 1 − O(N^{−δ}).


Page 32

INCOHERENCE + SPARSITY IS NEEDED

NUMERICAL EXPERIMENT

Signal recovered from Fourier coefficients:

[Figure: original and decoded signal recovered from random Fourier coefficients]

Code "FourierSampling.m"; a sketch is below.
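The actual FourierSampling.m is on the course page; a sketch in its spirit (sizes are placeholders):

```matlab
% Sketch of FourierSampling.m: l1 recovery from K random DFT coefficients.
n = 512; S = 10; K = 120;
x0 = zeros(n, 1); x0(randperm(n, S)) = randn(S, 1);
F  = fft(eye(n)) / sqrt(n);          % unitary DFT matrix
idx = randperm(n, K);                % frequencies chosen uniformly at random
y   = F(idx, :) * x0;                % observed Fourier coefficients

cvx_begin quiet
    variable x(n)
    minimize( norm(x, 1) )
    subject to
        F(idx, :) * x == y;          % complex equality constraints
cvx_end

norm(x - x0) / norm(x0)
```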


Page 33

INCOHERENT SAMPLING

Let (Φ, Ψ) be orthonormal bases of ℝⁿ,

f(t) = Σ_{i=1}^{n} x_i ψ_i(t) and y_k = ⟨f, φ_k⟩, k = 1, …, m.

Representation matrix: Ψ = [ψ₁ ψ₂ ⋯ ψ_n]. Sensing matrix: Φ = [φ₁ φ₂ ⋯ φ_n].

COHERENCE BETWEEN Φ AND Ψ

µ(Φ, Ψ) = √n · max_{1≤j,k≤n} |⟨φ_k, ψ_j⟩|.

Remark: µ(Φ, Ψ) ∈ [1, √n].
Upper bound: Cauchy–Schwarz.
Lower bound: ΨᵀΦ is also orthonormal, hence Σ_j |⟨φ_k, ψ_j⟩|² = 1 ⇒ max_j |⟨φ_k, ψ_j⟩| ≥ 1/√n.
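Coherence is easy to compute numerically; a short MATLAB sketch for any pair of orthonormal bases stored as columns (the spike/Fourier pair below should give µ = 1):

```matlab
% Mutual coherence mu(Phi, Psi) = sqrt(n) * max_{j,k} |<phi_k, psi_j>|.
n   = 64;
Phi = eye(n);                        % spike (identity) basis
Psi = fft(eye(n)) / sqrt(n);         % unitary Fourier basis
mu  = sqrt(n) * max(max(abs(Phi' * Psi)))   % = 1 for the time-frequency pair
```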


Page 34

A GENERAL RESULT FOR SPARSE RECOVERY

f(t) = Σ_{i=1}^{n} x_i ψ_i(t) and y_k = ⟨f, φ_k⟩, k = 1, …, m.

Consider the optimization problem:

min_{x∈ℝⁿ} ‖x‖ℓ1 subject to y_k = ⟨Ψx, φ_k⟩, k = 1, …, m.

THEOREM (CANDÈS AND ROMBERG, 2007)
Fix f ∈ ℝⁿ and suppose that the coefficient sequence x of f in the basis Ψ is S-sparse. Select m measurements in the Φ domain uniformly at random. Then if

m ≥ C · µ²(Φ, Ψ) · S · log(n/δ)

for some positive constant C, the solution of the problem above is exact with probability exceeding 1 − δ.


Page 35

EXAMPLES OF INCOHERENT BASES

- Φ the identity (φ_k(t) = δ(t − k)) and Ψ the Fourier basis: the time–frequency pair obeys µ(Φ, Ψ) = 1.
- Noiselets and Haar wavelets have coherence √2.
- Random matrices are largely incoherent with any fixed basis Ψ (coherence about √(2 log n)).

Matlab example: ’measurementsl1.m’


Page 36

MULTIPLE SOLUTIONS OF MIN `1-NORM

f(t) = a₀/2 + Σ_{k=1}^{N} a_k cos(πkt) + Σ_{k=1}^{N} b_k sin(πkt),  t ∈ [−1, 1]

Data: f (−1) = 1, f (0) = 1, f (1) = 1

Even function: b_k = 0. Solutions of min ℓ1: {a₂ = 1, a_k = 0 (k ≠ 2)}, {a₄ = 1, a_k = 0 (k ≠ 4)}, …


Page 38

THE RESTRICTED ISOMETRY PROPERTY (RIP)

How about signals that are not exactly sparse?

ISOMETRY CONSTANTS

For each s = 1, 2, …, define the isometry constant δ_s of a matrix A as the smallest number such that

(1 − δ_s)‖x‖²ℓ2 ≤ ‖Ax‖²ℓ2 ≤ (1 + δ_s)‖x‖²ℓ2

holds for all s-sparse vectors x.
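Computing δ_s exactly is combinatorial; the Monte Carlo sketch below only lower-bounds it by testing random supports (all sizes illustrative):

```matlab
% Monte Carlo lower bound on the isometry constant delta_s (sizes assumed).
m = 40; N = 120; s = 5; trials = 2000;
A = randn(m, N) / sqrt(m);           % normalized Gaussian matrix
delta = 0;
for t = 1:trials
    S  = randperm(N, s);             % random support of size s
    sv = svd(A(:, S));               % extreme singular values on this support
    delta = max([delta, sv(1)^2 - 1, 1 - sv(end)^2]);
end
delta                                % lower bound on delta_s
```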


Page 39

THE RESTRICTED ISOMETRY PROPERTY (RIP)

THEOREM (CANDÈS, 2008)

Assume δ_{2s} < √2 − 1. Then

x* := argmin_{x∈ℝⁿ} ‖x‖ℓ1 subject to y = Ax

satisfies

‖x* − x‖ℓ2 ≤ C ‖x − x_s‖ℓ1 / √s,

where x_s is the vector x with all but the largest s components set to 0.

If x is s-sparse (exactly), then the recovery is exact.


Page 40

RIP - BASIC IDEA

- We want A to preserve the norm of s-sparse vectors.
- ‖Ax₁ − Ax₂‖₂² should not be small for s-sparse vectors x₁, x₂.
- We want 0 < c‖x₁ − x₂‖₂² ≤ ‖A(x₁ − x₂)‖₂² for all s-sparse x₁, x₂.
- If δ_{2s} = 1, then ‖Az‖₂ = 0 for some 2s-sparse z; take z = x₁ − x₂ with x₁ and x₂ both s-sparse.


Page 41

RIP - REMARKS

- The theorem above is deterministic.
- How does one show that column vectors taken from arbitrary subsets are nearly orthogonal?
- Isometry constants are established for random matrices (randomness is back).
- For the Fourier basis: m ≥ C s log⁴ n.
- RIP is too conservative (Donoho, Tanner 2010).


Page 42

POLYNOMIAL MATRICES

Back to Dr. Lai’s dynamical system problem:

dx/dt = F(x(t)),

with

[F(x(t))]_j = Σ_{k₁} Σ_{k₂} ⋯ Σ_{k_m} (a_j)_{k₁k₂⋯k_m} x₁^{k₁}(t) ⋯ x_m^{k_m}(t).

- This does not fit classical CS results.
- The monomial basis becomes ill-conditioned even for small powers.
- We know that condition numbers of Vandermonde matrices depend on where x is evaluated.
- Some CS results are available for orthogonal polynomials.


Page 43

ORTHOGONAL POLYNOMIALS

For Chebyshev polynomial expansions we have

f(x) ≈ Σ_{k=0}^{N} λ_k cos(k arccos(x)).

If we let y = arccos(x), or x = cos(y),

f(cos(y)) ≈ Σ_{k=0}^{N} λ_k cos(ky).

A Chebyshev expansion is equivalent to a cosine expansion in the variable y. Results carry over from Fourier expansions, but with samples chosen independently according to the Chebyshev measure

dν(x) = π⁻¹(1 − x²)^{−1/2} dx.
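Sampling from the Chebyshev measure is one line via the inverse transform x = cos(πu); a sketch:

```matlab
% Draw m points from the Chebyshev measure dv(x) = dx / (pi*sqrt(1 - x^2)).
m = 1000;
u = rand(m, 1);       % uniform on (0,1)
x = cos(pi * u);      % Chebyshev-distributed on (-1,1)
histogram(x, 40)      % density accumulates near the endpoints +-1
```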


Page 44

SPARSE LEGENDRE EXPANSIONS

Rauhut and Ward (2010) proved that the same type of sampling applies to Legendre expansions.

How about polynomial expansions as power series?


Page 45

ROBERT THOMPSON’S EXPERIMENTS

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Sparse Polynomial Discovery

Discovering Sparse Polynomials

How about if we choose just a few function values?

Φ_m: m randomly chosen rows of the identity matrix. Assume that x is K-sparse. Sample points t_d are chosen according to some distribution in (−1, 1).

\[
y_m = \begin{pmatrix} f(t_{d_1}) \\ f(t_{d_2}) \\ \vdots \\ f(t_{d_m}) \end{pmatrix}
= \Phi_m
\begin{pmatrix}
t_0^0 & t_0^1 & \cdots & t_0^N \\
t_1^0 & t_1^1 & \cdots & t_1^N \\
\vdots & \vdots & & \vdots \\
t_N^0 & t_N^1 & \cdots & t_N^N
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_N \end{pmatrix}
\]
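A hedged sketch of this monomial (Vandermonde) experiment using CVX; K, m, and the sampling distribution are placeholders:

```matlab
% Sparse polynomial recovery in the monomial (Vandermonde) basis.
N = 36; m = 18; K = 4;                    % m and K are assumed
t = 2 * rand(N + 1, 1) - 1;               % grid in (-1,1); uniform here
V = t .^ (0:N);                           % V(j,i) = t_j^(i-1)
rows = randperm(N + 1, m);                % Phi_m: m random rows of identity
x0 = zeros(N + 1, 1); x0(randperm(N + 1, K)) = randn(K, 1);
y  = V(rows, :) * x0;                     % m sampled function values

cvx_begin quiet
    variable x(N + 1)
    minimize( norm(x, 1) )
    subject to
        V(rows, :) * x == y;
cvx_end

norm(x - x0) / norm(x0)
```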


Page 46

ROBERT THOMPSON’S EXPERIMENTS

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Sparse Polynomial Discovery

How Well Does It Work?

[Figures: empirical recovery rate as a function of m/N and k/m for 1-D polynomial recovery, N = 36, with uniform sampling (left) and Chebyshev sampling (right); gray scale from 0 to 0.9]

Each pixel averages 50 experiments: choose a random polynomial with k non-zero Gaussian i.i.d. coefficients, measure m samples, and attempt to recover the polynomial coefficients. Sampling at Chebyshev points gives (very) slightly better results than uniform points.

Increasing m doesn't make as much difference as might be expected.


Page 47

ROBERT THOMPSON’S EXPERIMENTS

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Sparse Polynomial Discovery

Comparison With Chebyshev-Sparse Polynomials

Consider linear combinations of Chebyshev polynomials:

y = Σ_{i=0}^{N} x_i T_i(t),  T_i(t) = cos(i arccos(t)).

Φ_m: m randomly chosen rows of the identity matrix. Assume that x is K-sparse. Sample points t_d are chosen according to some distribution in (−1, 1).

\[
y_m = \begin{pmatrix} f(t_{d_1}) \\ f(t_{d_2}) \\ \vdots \\ f(t_{d_m}) \end{pmatrix}
= \Phi_m
\begin{pmatrix}
T_0(t_0) & T_1(t_0) & \cdots & T_N(t_0) \\
T_0(t_1) & T_1(t_1) & \cdots & T_N(t_1) \\
\vdots & \vdots & & \vdots \\
T_0(t_N) & T_1(t_N) & \cdots & T_N(t_N)
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_N \end{pmatrix}
\]
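Only the basis matrix changes relative to the Vandermonde sketch; a short assumed construction of the Chebyshev system:

```matlab
% Chebyshev system: C(j,i) = T_{i-1}(t_j) = cos((i-1) * acos(t_j)).
N = 36;
t = cos(pi * rand(N + 1, 1));     % Chebyshev-distributed points (assumed)
C = cos(acos(t) * (0:N));         % (N+1) x (N+1) Chebyshev matrix
% Replace V by C in the Vandermonde recovery sketch above.
```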


Page 48

ROBERT THOMPSON’S EXPERIMENTS

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Sparse Polynomial Discovery

How Well Does It Work?

[Figures: recovery rate vs. m/N and k/m for N = 36: Vandermonde basis with Chebyshev sampling (left) and sparse 1-D Chebyshev polynomial recovery (right)]

Using Chebyshev basis functions, we realize improvement as m increases.


Page 49

ROBERT THOMPSON’S EXPERIMENTS

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Sparse Polynomial Discovery

Increasing m helps

- Columns of C are orthogonal.
- All vectors will be distinguishable if we use the full C.
- If we use less than the full C, orthogonality is lost and some vectors start to become indistinguishable.

[Figures: column inner products of the Vandermonde matrix V and the Chebyshev matrix C; color scale from −0.8 to 1]


Page 50

ROBERT THOMPSON’S EXPERIMENTS

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Sparse Polynomial Discovery

Discovering 2-D Sparse Polynomials

What about 2-D polynomials?

In the natural basis: f(t, u) = Σ_{i+j=0,…,Q} x_{ij} t^i u^j

Sample points (t_d, u_d) are chosen according to some distribution in (−1, 1) × (−1, 1).

\[
y_m = \begin{pmatrix} f(t_{d_1}, u_{d_1}) \\ f(t_{d_2}, u_{d_2}) \\ \vdots \\ f(t_{d_m}, u_{d_m}) \end{pmatrix}
= \Phi_m
\begin{pmatrix}
1 & t_0 & u_0 & t_0 u_0 & t_0^2 & u_0^2 & \cdots \\
1 & t_1 & u_1 & t_1 u_1 & t_1^2 & u_1^2 & \cdots \\
\vdots & & & & & & \vdots \\
1 & t_N & u_N & t_N u_N & t_N^2 & u_N^2 & \cdots
\end{pmatrix}
\begin{pmatrix} x_{00} \\ x_{10} \\ x_{01} \\ x_{11} \\ x_{20} \\ \vdots \end{pmatrix}
\]


Page 51

ROBERT THOMPSON’S EXPERIMENTS

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Sparse Polynomial Discovery

How Well Does It Work?

[Figure: recovery rate vs. m/N and k/m for 2-D polynomial recovery, N = 36]

Similar to 1-d results.

Again increasing m doesn’t change much.


Page 52

ROBERT THOMPSON’S EXPERIMENTS (BACK TO DYNAMICAL SYSTEMS)

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Dynamical System Discovery

Example - Logistic Map

x_{n+1} = f(x_n) = r x_n(1 − x_n)

Coefficient vector: (0, r,−r, 0, . . . )

We can recover the system equation in the chaotic regime from about 10 sample pairs or more (a sketch follows).
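A sketch of the logistic-map discovery; the degree bound Q, initial condition, and CVX call are my assumptions, while r = 3.7 and m ≈ 10 follow the slides:

```matlab
% Recover x_{n+1} = r*x_n*(1 - x_n) from a short orbit (sketch).
r = 3.7; m = 10; Q = 20;                 % degree bound Q is assumed
x = 0.3;                                 % initial condition (assumed)
for n = 1:m, x(n + 1) = r * x(n) * (1 - x(n)); end

A = x(1:m)' .^ (0:Q);                    % monomials of the current state
y = x(2:m + 1)';                         % next state

cvx_begin quiet
    variable c(Q + 1)
    minimize( norm(c, 1) )
    subject to
        A * c == y;
cvx_end

c(1:4)'                                  % ideally (0, r, -r, 0)
```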

[Figures: sampling the logistic map, m = 10 (left); recovery error ‖c* − c‖₂ vs. m for the logistic map, r = 3.7 (right)]


Page 53

ROBERT THOMPSON’S EXPERIMENTS (BACK TO DYNAMICAL SYSTEMS)

Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques

Dynamical System Discovery

How Well Does It Work?

[Figure: recovery success rate (gray scale 0 to 0.9) as a function of r, 2.4 ≤ r ≤ 4]

Sensitive to the dynamics determined by r.

(Bifurcation diagram: Wikipedia).


Page 54

FINAL REMARKS

- As previously pointed out by Dr. Lai, recovery seems impractical with a monomial basis of large degree; a change of basis to orthogonal polynomials results in full (non-sparse) coefficient vectors.
- Considering small-degree expansions in high dimensions: what is the optimal sampling strategy?
- How about a system of PDEs? For example,

  u_t = u(1 − u) − uv + Δu
  v_t = v(1 − v) + uv + Δv

Thanks! In particular to Robert Thompson and Wen Xu.
