
Page 1: Exploiting Statistical Dependencies in Sparse Representations

Exploiting Statistical Dependencies in Sparse Representations

Michael Elad, The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel

Joint work with Tomer Faktor & Yonina Eldar

This research was supported by the European Community's FP7-FET program SMALL under grant agreement no. 225913.

Page 2: Exploiting Statistical Dependencies in Sparse Representations

Agenda

- Part I – Motivation for Modeling Dependencies
- Part II – Basics of the Boltzmann Machine
- Part III – Sparse Recovery Revisited
- Part IV – Constructing Practical Pursuit Algorithms
- Part V – First Steps Towards Adaptive Recovery

Today we will show that, using the BM, sparse and redundant representation modeling can be extended naturally and comfortably to cope with dependencies within the representation vector, while keeping most of the simplicity of the original model.

Page 3: Exploiting Statistical Dependencies in Sparse Representations

Part I: Motivation for Modeling Dependencies

Page 4: Exploiting Statistical Dependencies in Sparse Representations

Sparse & Redundant Representations

We model a signal x as the multiplication of a (given) dictionary D by a sparse vector α.

The classical (Bayesian) approach models α as a Bernoulli-Gaussian random vector, implying that the entries of α are i.i.d.

Typical recovery: find α from y = Dα + e. This is done with pursuit algorithms (OMP, BP, ...) that assume this i.i.d. structure.

Is this fair? Is it correct?

[Figure: the signal model y = x + e = Dα + e, with a dictionary D of size N×K and a sparse vector α of length K.]
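To make the i.i.d. baseline concrete, here is a minimal sketch (not the authors' code) of drawing a Bernoulli-Gaussian representation and its noisy measurement; the activation probability p and the function name are illustrative assumptions.

```python
import numpy as np

def bernoulli_gaussian_signal(D, p=0.05, sigma_alpha=1.0, sigma_e=0.1, rng=None):
    """Draw alpha with i.i.d. Bernoulli-Gaussian entries and return y = D @ alpha + e."""
    rng = np.random.default_rng(rng)
    N, K = D.shape
    support = rng.random(K) < p                       # each atom is active independently
    alpha = np.zeros(K)
    alpha[support] = sigma_alpha * rng.standard_normal(support.sum())
    x = D @ alpha                                     # ideal signal
    y = x + sigma_e * rng.standard_normal(N)          # noisy measurement
    return y, x, alpha, support

# Example with a random normalized dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
y, x, alpha, s = bernoulli_gaussian_signal(D, rng=rng)
```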

Page 5: Exploiting Statistical Dependencies in Sparse Representations

Let's Apply a Simple Experiment

We gather a set of 100,000 randomly chosen image patches of size 8×8, extracted from the image 'Barbara'. These are our signals {y_k}.

We apply sparse coding to these patches using the OMP algorithm (error mode with ε=2) and the redundant DCT dictionary (256 atoms). Thus, we approximate the solution of

$\hat{\alpha}_k = \arg\min_{\alpha}\ \|\alpha\|_0 \quad \text{s.t.} \quad \|y_k - D\alpha\|_2^2 \le N\varepsilon^2$

Let's observe the obtained representations' supports. Specifically, let's check whether the i.i.d. assumption is true.
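A greedy error-mode OMP of the kind used here can be sketched as follows; this is an illustrative implementation of the stopping rule above, not the authors' code.

```python
import numpy as np

def omp_error_mode(D, y, eps):
    """Greedy OMP that stops once ||y - D @ alpha||_2^2 <= N * eps^2."""
    N, K = D.shape
    residual = y.copy()
    support = []
    alpha = np.zeros(K)
    while residual @ residual > N * eps**2 and len(support) < N:
        k = int(np.argmax(np.abs(D.T @ residual)))     # most correlated atom
        if k in support:
            break
        support.append(k)
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)  # LS fit on the current support
        alpha = np.zeros(K)
        alpha[support] = coef
        residual = y - Ds @ coef
    return alpha, np.array(sorted(support))
```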

Page 6: Exploiting Statistical Dependencies in Sparse Representations

The Experiment's Results (1)

[Plot: the empirical probability of each of the 256 atoms participating in the support.]

The supports are denoted by s, a vector of length K (=256), with +1 for on-support elements and -1 for off-support ones:

$s = [s_1, s_2, \ldots, s_K]^T \in \{-1, +1\}^K$

First, we check the probability of each atom participating in the support, $P(s_i = +1)$ for $1 \le i \le K$.

As can be seen, these probabilities are far from uniform, even though OMP gave all atoms the same weight.
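A tiny sketch of this first check (assuming the collected supports are stacked as an (M, K) array of ±1 entries; names are illustrative):

```python
import numpy as np

def empirical_marginals(supports):
    """Empirical P(s_i = +1) per atom, from an (M, K) array of +/-1 support vectors."""
    return (supports == 1).mean(axis=0)

# A flat curve would indicate uniform atom usage; the experiment shows it is far from flat.
```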

Page 7: Exploiting Statistical Dependencies in Sparse Representations

The Experiment's Results (2)

[Plot: the 256×256 map of the pairwise dependency measure defined below; values away from zero indicate dependencies between atoms.]

Next, we check whether there are dependencies between the atoms, by evaluating

$\frac{P(s_i = 1, s_j = 1)}{P(s_i = 1)\,P(s_j = 1)}, \quad 1 \le i, j \le K$

This ratio is 1 if s_i and s_j are independent, and away from 1 when there is a dependency. We therefore display

$\left|\log_{10}\frac{P(s_i = 1, s_j = 1)}{P(s_i = 1)\,P(s_j = 1)}\right|$

so that, after the abs-log, a value away from zero indicates a dependency.

As can be seen, there are some dependencies between the elements, even though OMP did not promote them.
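The same measure can be computed directly from the collected supports; a minimal sketch (the clipping constant is an illustrative choice to avoid taking the log of zero):

```python
import numpy as np

def dependency_map(supports, tiny=1e-12):
    """Abs-log ratio between joint and product-of-marginals activation probabilities."""
    on = (supports == 1).astype(float)            # (M, K) indicator of active atoms
    p = on.mean(axis=0)                           # P(s_i = 1)
    joint = (on.T @ on) / on.shape[0]             # P(s_i = 1, s_j = 1)
    ratio = joint / np.outer(p, p).clip(tiny)
    return np.abs(np.log10(ratio.clip(tiny)))     # ~0 for independent pairs
```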

Page 8: Exploiting Statistical Dependencies in Sparse Representations

The Experiment's Results (3)

[Plot: the 256×256 map of the block-structure measure defined below; values away from zero indicate a non-block structure.]

We saw that there are dependencies within the support vector. Could we claim that these form a block-sparsity pattern?

If s_i and s_j belong to the same block, we should get that

$P(s_i = 1 \mid s_j = 1) \approx 1$

Thus, after the abs-log, $\left|\log_{10} P(s_i = 1 \mid s_j = 1)\right|$, any value away from 0 indicates a non-block structure.

We can see that the supports do not form a block-sparsity structure, but rather a more complex one.

Page 9: Exploiting Statistical Dependencies in Sparse Representations

Our Goals

We would like to (statistically) model these dependencies somehow.

We will adopt a generative model for α of the following form:
- Generate the support vector s (of length K) by drawing it from the distribution P(s).
- Generate the non-zero values for the on-support entries by drawing them from the distributions N(0, σ_i²).

What should P(s) be? We shall follow recent work and choose the Boltzmann Machine [Garrigues & Olshausen '08] [Cevher, Duarte, Hegde & Baraniuk '09].

Our goals in this work are:
- Develop general-purpose pursuit algorithms for this model.
- Estimate this model's parameters for a given bulk of data.

Page 10: Exploiting Statistical Dependencies in Sparse Representations

Part II: Basics of the Boltzmann Machine

Page 11: Exploiting Statistical Dependencies in Sparse Representations

BM – Definition

Recall that the support vector s is a binary vector of length K, indicating the on-support and off-support elements:

$s = [s_1, s_2, \ldots, s_K]^T \in \{-1, +1\}^K$

The Boltzmann Machine assigns the following probability to this vector:

$P(s) = \frac{1}{Z(W, b)}\exp\left(b^T s + \frac{1}{2}s^T W s\right)$

The Boltzmann Machine (BM) is a special case of the more general Markov Random Field (MRF) model.
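Up to the (generally intractable) partition function Z, the BM prior is easy to evaluate; a small sketch:

```python
import numpy as np

def bm_log_prob_unnormalized(s, W, b):
    """log P(s) up to the constant -log Z(W, b), for s with entries in {-1, +1}."""
    s = np.asarray(s, dtype=float)
    return b @ s + 0.5 * s @ (W @ s)
```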

Page 12: Exploiting Statistical Dependencies in Sparse Representations

The Vector b

If we assume that W = 0, the BM becomes

$P(s) = \frac{1}{Z}\exp\left(b^T s + \frac{1}{2}s^T W s\right) = \frac{1}{Z}\exp\left(b^T s\right) = \prod_{k=1}^{K}\frac{1}{Z_k}\exp\left(b_k s_k\right)$

In this case the entries of s are independent, and the probability of s_k = +1 is given by

$p_k = P(s_k = +1) = \frac{\exp(b_k)}{\exp(b_k) + \exp(-b_k)}$

Thus, p_k becomes very small (promoting sparsity) if b_k << 0.
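A one-line numerical check of this marginal (a logistic function of b_k):

```python
import numpy as np

def marginal_on_probability(b):
    """P(s_k = +1) in the independent (W = 0) case; equals 1 / (1 + exp(-2 b_k))."""
    b = np.asarray(b, dtype=float)
    return np.exp(b) / (np.exp(b) + np.exp(-b))

# marginal_on_probability(np.array([-3.0, 0.0]))  ->  approx [0.0025, 0.5]
```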

Page 13: Exploiting Statistical Dependencies in Sparse Representations

The Matrix W

W introduces the inter-dependencies between the entries of s:
- A zero value W_ij implies 'independence' between s_i and s_j.
- A positive value W_ij implies an 'excitatory' interaction between s_i and s_j.
- A negative value W_ij implies an 'inhibitory' interaction between s_i and s_j.
- The larger the magnitude of W_ij, the stronger these forces are.

W is symmetric, since the interaction between i and j is the same as the one between j and i.

We assume that diag(W) = 0, since the diagonal only contributes a constant (absorbed into Z):

$P(s) = \frac{1}{Z}\exp\left(b^T s + \frac{1}{2}s^T W s\right) = \frac{1}{Z}\exp\left(\frac{1}{2}\mathrm{trace}\big(\mathrm{diag}(W)\big)\right)\exp\left(b^T s + \frac{1}{2}s^T\big(W - \mathrm{diag}(W)\big)s\right)$

Page 14: Exploiting Statistical Dependencies in Sparse Representations

Special Cases of Interest

When W=0 and b=c·1, we return to the iid case we started with. If c<<0, this model promotes sparsity.

When W is block-diagonal (up to permutation) with very high and positive entries, this corresponds to a block-sparsity structure.

A banded W (up to permutation) with 2L+1 main diagonals corresponds to a chordal graph that leads to a decomposable model.

A tree-structure can also fit this model easily.

[Figure: example W patterns for the iid, block-sparsity, chordal (banded), and tree cases.]

Page 15: Exploiting Statistical Dependencies in Sparse Representations

Part III: MAP/MMSE Sparse Recovery Revisited

Page 16: Exploiting Statistical Dependencies in Sparse Representations

The Signal Model

[Figure: the ideal signal model x = Dα, with a fixed dictionary D of size N×K.]

D is fixed and known. We assume that α is built by:
- Choosing the support s with BM probability P(s) from all the 2^K possibilities Ω.
- Choosing the α_s coefficients as independent Gaussian entries N(0, σ_i²); we shall assume that the σ_i are all the same (= σ_α).

The ideal signal is x = Dα = D_s α_s. The p.d.f.'s P(α) and P(x) are clear and known.

Page 17: Exploiting Statistical Dependencies in Sparse Representations

Adding Noise

[Figure: the measurement model y = x + e = Dα + e, with a fixed dictionary D of size N×K.]

The noise e is assumed to be an additive white Gaussian vector, with

$P(y \mid x) = C\exp\left(-\frac{\|y - x\|_2^2}{2\sigma_e^2}\right)$

The conditional p.d.f.'s P(y|α), P(α|y), and even P(y|s), P(s|y), are all clear and well defined (although they may appear nasty).

Page 18: Exploiting Statistical Dependencies in Sparse Representations

Pursuit via the Posterior

We have access to the posterior P(s | y), and consider two estimators:

$\hat{s}_{MAP} = \arg\max_{s} P(s \mid y) \qquad\qquad \hat{\alpha}_{MMSE} = E\left[\alpha \mid y\right]$

There is a delicate problem due to the mixture of continuous and discrete p.d.f.'s. This is why we estimate the support using MAP first.

MAP estimation of the representation is then obtained using the oracle's formula (the oracle knows the support s):

$\hat{\alpha}_{MAP} = \hat{\alpha}_{oracle}\left(\hat{s}_{MAP}\right)$

Page 19: Exploiting Statistical Dependencies in Sparse Representations

The Oracle

$P(\alpha_s) \propto \exp\left(-\frac{\|\alpha_s\|_2^2}{2\sigma_\alpha^2}\right), \qquad P(y \mid \alpha_s) \propto \exp\left(-\frac{\|y - D_s\alpha_s\|_2^2}{2\sigma_e^2}\right)$

$P(\alpha_s \mid y, s) = \frac{P(y \mid \alpha_s)\,P(\alpha_s)}{P(y)} \propto \exp\left(-\frac{\|y - D_s\alpha_s\|_2^2}{2\sigma_e^2} - \frac{\|\alpha_s\|_2^2}{2\sigma_\alpha^2}\right)$

$\hat{\alpha}_s = Q_s^{-1}h_s, \qquad Q_s = \frac{1}{\sigma_e^2}D_s^T D_s + \frac{1}{\sigma_\alpha^2}I, \qquad h_s = \frac{1}{\sigma_e^2}D_s^T y$

Comments:
- This estimate is both the MAP and the MMSE (for a known support).
- The oracle estimate of x is obtained by multiplication by D_s.
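A minimal sketch of the oracle estimator under the conventions above (the function name is illustrative):

```python
import numpy as np

def oracle_estimate(D, y, support, sigma_e, sigma_alpha):
    """Oracle coefficients on a known support: alpha_s = Qs^{-1} hs."""
    Ds = D[:, support]
    Qs = Ds.T @ Ds / sigma_e**2 + np.eye(len(support)) / sigma_alpha**2
    hs = Ds.T @ y / sigma_e**2
    alpha_s = np.linalg.solve(Qs, hs)
    x_hat = Ds @ alpha_s                 # oracle estimate of the signal
    return alpha_s, x_hat
```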

Page 20: Exploiting Statistical Dependencies in Sparse Representations

MAP Estimation

Marginalizing over the on-support coefficients gives

$P(y \mid s) = \int P(y \mid s, \alpha_s)\,P(\alpha_s)\,d\alpha_s \;\propto\; \exp\left(\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right)\right)$

Based on our prior for generating the support, $P(s) = C\exp\left(b^T s + \frac{1}{2}s^T W s\right)$, we get

$\hat{s}_{MAP} = \arg\max_{s} P(s \mid y) = \arg\max_{s}\frac{P(y \mid s)\,P(s)}{P(y)}$

$P(s \mid y) \propto \exp\left(\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right) + b^T s + \frac{1}{2}s^T W s\right)$
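This (unnormalized) log-posterior of a support is the score used by all the pursuit variants below; a sketch under the same conventions (a sketch, not the authors' implementation):

```python
import numpy as np

def support_log_posterior(D, y, s, W, b, sigma_e, sigma_alpha):
    """log P(s | y) up to an additive constant, for s with entries in {-1, +1}."""
    s = np.asarray(s)
    score = b @ s + 0.5 * s @ (W @ s)                 # Boltzmann prior term
    idx = np.flatnonzero(s == 1)
    if idx.size:
        Ds = D[:, idx]
        Qs = Ds.T @ Ds / sigma_e**2 + np.eye(idx.size) / sigma_alpha**2
        hs = Ds.T @ y / sigma_e**2
        score += 0.5 * hs @ np.linalg.solve(Qs, hs)   # data-fit term
        score -= 0.5 * np.linalg.slogdet(sigma_alpha**2 * Qs)[1]
    return score
```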

Page 21: Exploiting Statistical Dependencies in Sparse Representations


The MAP

Implications:

The MAP estimator requires testing all possible supports for the maximization. For the found support, the oracle formula is used.

In typical problems, this process is impossible as there is a combinatorial set of possibilities.

In order to use the MAP approach, we shall turn to approximations.

$\hat{s}_{MAP} = \arg\max_{s}\left\{\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right) + b^T s + \frac{1}{2}s^T W s\right\}$
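For intuition about the combinatorial cost, here is a brute-force MAP that is feasible only for tiny K; it reuses support_log_posterior() sketched earlier.

```python
import numpy as np
from itertools import product

def exact_map_support(D, y, W, b, sigma_e, sigma_alpha):
    """Scan all 2^K supports and return the one maximizing the log-posterior."""
    K = D.shape[1]
    best_s, best_score = None, -np.inf
    for bits in product([-1, 1], repeat=K):           # 2^K candidates
        s = np.array(bits)
        score = support_log_posterior(D, y, s, W, b, sigma_e, sigma_alpha)
        if score > best_score:
            best_s, best_score = s, score
    return best_s
```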

Page 22: Exploiting Statistical Dependencies in Sparse Representations


The MMSE

$E\left[\alpha_s \mid y, s\right] = Q_s^{-1}h_s$ — this is the oracle for the support s, as we have seen before.

$\hat{\alpha}_{MMSE} = E\left[\alpha \mid y\right] = \sum_{s}P(s \mid y)\,E\left[\alpha \mid y, s\right] = \sum_{s}P(s \mid y)\,\hat{\alpha}_s$

where, as before,

$P(s \mid y) \propto \exp\left(\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right) + b^T s + \frac{1}{2}s^T W s\right)$

Page 23: Exploiting Statistical Dependencies in Sparse Representations


The MMSE

$\hat{\alpha}_{MMSE} = E\left[\alpha \mid y\right] = \sum_{s}P(s \mid y)\,E\left[\alpha \mid y, s\right] = \sum_{s}P(s \mid y)\,\hat{\alpha}_s$

Implications:

The best estimator (in terms of L2 error) is a weighted average of many sparse representations!!!

As in the MAP case, in typical problems one cannot compute this expression, as the summation is over a combinatorial set of possibilities. We should propose approximations here as well.
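For tiny K the exact MMSE can still be computed by exhaustive summation, which makes the "weighted average of oracles" reading explicit; this sketch reuses support_log_posterior() and oracle_estimate() from above.

```python
import numpy as np
from itertools import product

def exact_mmse(D, y, W, b, sigma_e, sigma_alpha):
    """Posterior-weighted average of the oracle solutions over all 2^K supports."""
    K = D.shape[1]
    scores, estimates = [], []
    for bits in product([-1, 1], repeat=K):
        s = np.array(bits)
        scores.append(support_log_posterior(D, y, s, W, b, sigma_e, sigma_alpha))
        alpha = np.zeros(K)
        idx = np.flatnonzero(s == 1)
        if idx.size:
            alpha[idx], _ = oracle_estimate(D, y, idx, sigma_e, sigma_alpha)
        estimates.append(alpha)
    w = np.exp(np.array(scores) - max(scores))        # normalized posterior weights
    w /= w.sum()
    return (w[:, None] * np.array(estimates)).sum(axis=0)
```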

Page 24: Exploiting Statistical Dependencies in Sparse Representations

Part IV: Constructing Practical Pursuit Algorithms

Page 25: Exploiting Statistical Dependencies in Sparse Representations


An OMP-Like Algorithm

The core idea – gather s one element at a time, greedily:
- Initialize all entries to -1.
- Sweep through all entries and check which one gives the maximal growth of the MAP expression – set it to +1.
- Proceed this way until the overall expression decreases for any additional +1.

$\hat{s}_{MAP} = \arg\max_{s}\left\{\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right) + b^T s + \frac{1}{2}s^T W s\right\}$
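A sketch of this OMP-like greedy MAP pursuit, reusing support_log_posterior() from above (an illustrative implementation, not the authors' code):

```python
import numpy as np

def omp_like_map(D, y, W, b, sigma_e, sigma_alpha):
    """Greedily flip the -1 entry that most increases the log-posterior."""
    K = D.shape[1]
    s = -np.ones(K, dtype=int)
    current = support_log_posterior(D, y, s, W, b, sigma_e, sigma_alpha)
    while True:
        best_k, best_score = None, current
        for k in np.flatnonzero(s == -1):
            trial = s.copy()
            trial[k] = 1
            score = support_log_posterior(D, y, trial, W, b, sigma_e, sigma_alpha)
            if score > best_score:
                best_k, best_score = k, score
        if best_k is None:                    # no additional +1 improves the objective
            return s
        s[best_k] = 1
        current = best_score
```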

Page 26: Exploiting Statistical Dependencies in Sparse Representations


A THR-Like Algorithm

The core idea – find the on-support in one shot:
- Initialize all entries to -1.
- Sweep through all entries and check, for each, the obtained growth of the MAP expression – sort the elements by this growth in descending order.
- Set the first |s| elements to +1. Their number is chosen so as to maximize the MAP expression (by checking 1, 2, 3, ...).

$\hat{s}_{MAP} = \arg\max_{s}\left\{\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right) + b^T s + \frac{1}{2}s^T W s\right\}$
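A corresponding THR-like sketch, ranking entries once and then choosing the best prefix size (again reusing support_log_posterior()):

```python
import numpy as np

def thr_like_map(D, y, W, b, sigma_e, sigma_alpha):
    """Rank single-flip scores once, then pick the prefix maximizing the log-posterior."""
    K = D.shape[1]
    base = -np.ones(K, dtype=int)
    base_score = support_log_posterior(D, y, base, W, b, sigma_e, sigma_alpha)
    flip_scores = []
    for k in range(K):
        trial = base.copy()
        trial[k] = 1
        flip_scores.append(support_log_posterior(D, y, trial, W, b, sigma_e, sigma_alpha))
    order = np.argsort(flip_scores)[::-1]             # descending single-flip score
    best_s, best_score = base, base_score
    for m in range(1, K + 1):                         # candidate support sizes 1, 2, 3, ...
        s = -np.ones(K, dtype=int)
        s[order[:m]] = 1
        score = support_log_posterior(D, y, s, W, b, sigma_e, sigma_alpha)
        if score > best_score:
            best_s, best_score = s, score
    return best_s
```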

Page 27: Exploiting Statistical Dependencies in Sparse Representations

A Subspace-Pursuit-Like Algorithm

The core idea – add and remove entries from s iteratively:
- Operate as in the THR algorithm, and choose the first |s|+ (slightly more than |s|) entries as +1, where |s| is the size that maximizes the posterior.
- Consider removing each of these elements, and see the obtained decrease in the posterior – remove the weakest of them.
- Add/remove entries ... similar to [Needell & Tropp '09], [Dai & Milenkovic '09].

$\hat{s}_{MAP} = \arg\max_{s}\left\{\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right) + b^T s + \frac{1}{2}s^T W s\right\}$

Page 28: Exploiting Statistical Dependencies in Sparse Representations


An MMSE Approximation Algorithm

The core idea – draw random supports from the posterior and average their oracle estimates:
- Operate like the OMP-like algorithm, but choose the next +1 entry by drawing it randomly based on the marginal posteriors.
- Repeat this process several times, and average the resulting oracle solutions.

This is an extension of the Random-OMP algorithm [Elad & Yavneh, '09].

$\hat{\alpha}_{MMSE} = \sum_{s}P(s \mid y)\,Q_s^{-1}h_s \;\approx\; \frac{1}{J}\sum_{j=1}^{J}Q_{s_j}^{-1}h_{s_j}, \qquad s_j \sim P(s \mid y)$

$P(s \mid y) \propto \exp\left(\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right) + b^T s + \frac{1}{2}s^T W s\right)$
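A sketch of this randomized, Rand-OMP-like averaging. The rule used below (drawing the next +1 entry with probability proportional to the resulting posterior value) is an illustrative assumption, not necessarily the authors' exact scheme; it reuses support_log_posterior() and oracle_estimate() from above.

```python
import numpy as np

def rand_omp_mmse(D, y, W, b, sigma_e, sigma_alpha, n_draws=20, rng=None):
    """Average oracle estimates over randomly grown supports (Rand-OMP-like)."""
    rng = np.random.default_rng(rng)
    K = D.shape[1]
    estimates = []
    for _ in range(n_draws):
        s = -np.ones(K, dtype=int)
        current = support_log_posterior(D, y, s, W, b, sigma_e, sigma_alpha)
        while True:
            candidates = np.flatnonzero(s == -1)
            scores = []
            for k in candidates:
                trial = s.copy()
                trial[k] = 1
                scores.append(support_log_posterior(D, y, trial, W, b, sigma_e, sigma_alpha))
            scores = np.array(scores)
            if scores.size == 0 or scores.max() <= current:
                break                                  # no candidate improves the posterior
            w = np.exp(scores - scores.max())
            k = rng.choice(candidates, p=w / w.sum())  # random pick, weighted by posterior
            s[k] = 1
            current = support_log_posterior(D, y, s, W, b, sigma_e, sigma_alpha)
        alpha = np.zeros(K)
        idx = np.flatnonzero(s == 1)
        if idx.size:
            alpha[idx], _ = oracle_estimate(D, y, idx, sigma_e, sigma_alpha)
        estimates.append(alpha)
    return np.mean(estimates, axis=0)
```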

Page 29: Exploiting Statistical Dependencies in Sparse Representations


Let's Experiment

[Flow diagram: the synthetic experiment pipeline]
- Choose W & b, and generate Boltzmann Machine supports {s_i} using a Gibbs sampler.
- Choose σ_α, and generate the representations {α_i} using these supports.
- Choose D, and generate the signals {x_i} by multiplication by D.
- Choose σ_e, and generate the measurements {y_i} by adding noise.
- Recover with the Oracle, OMP-like, THR-like, Rand-OMP, and plain OMP, and compare.

Page 30: Exploiting Statistical Dependencies in Sparse Representations

Chosen Parameters

[Figure: the chosen Boltzmann parameters b and W, the dictionary D, and the signal and noise STDs σ_α and σ_e used in the synthetic experiment.]

Page 31: Exploiting Statistical Dependencies in Sparse Representations


Gibbs Sampler for Generating s

Each entry is resampled from its conditional distribution given the rest:

$P\left(s_k \mid s_{k^c}\right) = \frac{P\left(s_k, s_{k^c}\right)}{P\left(s_{k^c}\right)} = \frac{\exp\left(s_k\left(b_k + W_{k,k^c}\,s_{k^c}\right)\right)}{2\cosh\left(b_k + W_{k,k^c}\,s_{k^c}\right)}$

where $s_{k^c}$ stands for all entries of s apart from the k-th, and $W_{k,k^c}$ is the k-th row of W without its (zero) diagonal entry.

- Initialize s randomly.
- Set k = 1.
- Fix all entries of s apart from s_k, and draw s_k randomly from the marginal above.
- Set k = k+1, until k = K.
- Repeat J (=100) times.

[Figure: obtained vs. true histograms over the possible supports for K=5, validating the Gibbs sampler.]
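A sketch of this Gibbs sampler (assuming diag(W) = 0; an illustrative implementation, not the authors' code):

```python
import numpy as np

def gibbs_sample_support(W, b, n_sweeps=100, rng=None):
    """Sample a support vector s in {-1, +1}^K from the Boltzmann Machine P(s)."""
    rng = np.random.default_rng(rng)
    K = len(b)
    s = rng.choice([-1, 1], size=K)                  # random initialization
    for _ in range(n_sweeps):                        # J sweeps over all entries
        for k in range(K):
            field = b[k] + W[k] @ s - W[k, k] * s[k] # b_k + W_{k,k^c} s_{k^c}
            p_on = np.exp(field) / (2 * np.cosh(field))
            s[k] = 1 if rng.random() < p_on else -1
    return s
```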

Page 32: Exploiting Statistical Dependencies in Sparse Representations

Results – Representation Error

[Plot: the relative representation error $E\left[\|\hat{\alpha}-\alpha\|_2^2\right]/E\left[\|\alpha\|_2^2\right]$ as a function of σ_e, for the Oracle, MAP-OMP, MAP-THR, Rand-OMP, and plain OMP.]

As expected, the oracle performs the best.

The Rand-OMP is the second best, being an MMSE approximation.

MAP-OMP is better than MAP-THR.

There is a strange behavior in MAP-THR for weak noise – should be studied.

The plain-OMP is the worst, with practically useless results.

Page 33: Exploiting Statistical Dependencies in Sparse Representations

Results – Support Error

[Plot: the average support error, normalized by max(|ŝ|, |s|), as a function of σ_e, for the Oracle, MAP-OMP, MAP-THR, Rand-OMP, and plain OMP.]

Generally the same story. The Rand-OMP result is not sparse! It is compared by choosing its first |s| dominant elements, where |s| is the number of elements chosen by MAP-OMP. As can be seen, it is not better than MAP-OMP.

Page 34: Exploiting Statistical Dependencies in Sparse Representations


The Unitary Case

What happens when D is square and unitary? We know that in such a case, if W=0 (i.e., no interactions), then MAP and MMSE have closed-form solutions, in the form of a shrinkage [Turek, Yavneh, Protter, Elad, '10].

Can we hope for something similar for the more general W? The answer is (partly) positive!

[Plot: the MMSE and MAP shrinkage curves, i.e., the estimates as a function of $D^T y$, in the unitary case.]

Page 35: Exploiting Statistical Dependencies in Sparse Representations


The Posterior Again

$P(s \mid y) \propto \exp\left(\frac{1}{2}h_s^T Q_s^{-1}h_s - \frac{1}{2}\log\det\left(\sigma_\alpha^2 Q_s\right) + b^T s + \frac{1}{2}s^T W s\right)$

When D is unitary, $D_s^T D_s = I$, and therefore

$Q_s = \frac{1}{\sigma_e^2}D_s^T D_s + \frac{1}{\sigma_\alpha^2}I = \frac{\sigma_e^2 + \sigma_\alpha^2}{\sigma_e^2\,\sigma_\alpha^2}\,I, \qquad h_s = \frac{1}{\sigma_e^2}D_s^T y$

so the posterior simplifies to

$P(s \mid y) \propto \exp\left(q^T s + \frac{1}{2}s^T W s\right), \qquad q = b + \frac{\sigma_\alpha^2}{4\,\sigma_e^2\left(\sigma_e^2 + \sigma_\alpha^2\right)}\left(D^T y\right)^2 - \frac{1}{4}\log\left(1 + \frac{\sigma_\alpha^2}{\sigma_e^2}\right)\mathbf{1}$

where the square of $D^T y$ is taken elementwise. When D is unitary, the BM prior and the resulting posterior are conjugate distributions.
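A small sketch of the unitary-case simplification, computing the data-modified bias q under the conventions used above (the exact constants follow the reconstruction above, so treat them as an assumption):

```python
import numpy as np

def unitary_posterior_bias(D, y, b, sigma_e, sigma_alpha):
    """q such that P(s | y) is proportional to exp(q^T s + 0.5 * s^T W s)."""
    beta = D.T @ y                                    # transform-domain coefficients
    gain = sigma_alpha**2 / (4 * sigma_e**2 * (sigma_e**2 + sigma_alpha**2))
    penalty = 0.25 * np.log(1 + sigma_alpha**2 / sigma_e**2)
    return b + gain * beta**2 - penalty
```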

Page 36: Exploiting Statistical Dependencies in Sparse Representations


Implications

The MAP estimation becomes the following Boolean Quadratic Problem (NP-hard!):

$\hat{s} = \arg\max_{s\in\{-1,+1\}^K} P(s \mid y) = \arg\max_{s\in\{-1,+1\}^K} \exp\left(q^T s + \frac{1}{2}s^T W s\right)$

Ways to handle it:
- Approximate: multi-user detection in communication [Verdu, 1998].
- Assume W ≥ 0: graph-cuts can be used for an exact solution [Cevher, Duarte, Hegde & Baraniuk, 2009].
- Assume a banded W: our work – Belief Propagation for an exact solution with O(K·2^L) complexity.

What about exact MMSE? It is possible too! We are working on it NOW.

Page 37: Exploiting Statistical Dependencies in Sparse Representations

Results

[Plot: the relative representation error $E\left[\|\hat{\alpha}-\alpha\|_2^2\right]/E\left[\|\alpha\|_2^2\right]$ as a function of σ_e, for the Oracle, MAP-OMP, Exact-MAP, Rand-OMP, and plain OMP.]

W (of size 64×64) in this experiment has 11 diagonals.

All algorithms behave as expected.

Interestingly, the Exact-MAP outperforms the approximated MMSE algorithm.

The plain OMP (which would have been the ideal MAP for the iid case) performs very poorly.

Page 38: Exploiting Statistical Dependencies in Sparse Representations

Results

[Plot: the average support error, normalized by max(|ŝ|, |s|), as a function of σ_e, for the Oracle, MAP-OMP, Exact-MAP, Rand-OMP, and plain OMP.]

As before …

Page 39: Exploiting Statistical Dependencies in Sparse Representations

Part V: First Steps Towards Adaptive Recovery

Page 40: Exploiting Statistical Dependencies in Sparse Representations


The Grand-Challenge

Given the measurements $\{y_i\}_{i=1}^N$, estimate the model parameters W, b, σ_α, σ_e and the dictionary D, together with the representations $\{\alpha_i\}_{i=1}^N$.

This grand task splits into the following sub-problems:
- Assuming W, b, σ_α, σ_e and D are known, estimate the representations {α_i} (and their supports {s_i}).
- Assuming the supports {s_i} are known, estimate the BM parameters W and b.
- Assuming {α_i}, {s_i}, W and b are known, estimate the dictionary D.
- Assuming the rest is known, estimate the signal STDs.

Page 41: Exploiting Statistical Dependencies in Sparse Representations


Maximum Likelihood?

ML estimate of the BM parameters (W, b) from the supports $\{s_i\}_{i=1}^N$:

$\left(\hat{W},\hat{b}\right) = \arg\max_{W,b}\prod_{i=1}^{N}P\left(s_i \mid W,b\right) = \arg\max_{W,b}\prod_{i=1}^{N}\frac{1}{Z(W,b)}\exp\left(b^T s_i + \frac{1}{2}s_i^T W s_i\right)$

Problem!!! We do not have access to the partition function Z(W, b).

Page 42: Exploiting Statistical Dependencies in Sparse Representations


Solution: Maximum Pseudo-Likelihood

MPL estimate of the BM parameters (W, b) from the supports $\{s_i\}_{i=1}^N$: replace the likelihood function by the pseudo-likelihood function

$P\left(s \mid W,b\right) \;\approx\; \prod_{k=1}^{K}P\left(s_k \mid W,b,s_{k^c}\right), \qquad P\left(s_k \mid s_{k^c}\right) = \frac{\exp\left(s_k\left(b_k + W_{k,k^c}\,s_{k^c}\right)\right)}{2\cosh\left(b_k + W_{k,k^c}\,s_{k^c}\right)}$

Maximizing the log pseudo-likelihood over the N training supports (stacked as the columns of the K×N matrix S) gives

$\left(\hat{W},\hat{b}\right) = \arg\max_{W,b}\left\{\mathrm{tr}\left(S^T W S\right) + b^T S\,\mathbf{1} - \mathbf{1}^T\varphi\left(W S + b\,\mathbf{1}^T\right)\mathbf{1}\right\}$

- The function φ(∙) is log(cosh(∙)), applied elementwise.
- This is a convex programming task.
- MPL is known to be a consistent estimator.
- We solve this problem using SESOP+SD.
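A sketch of the MPL objective (up to additive constants); a generic off-the-shelf optimizer is shown in the comments as a stand-in for the SESOP+SD solver mentioned in the slides.

```python
import numpy as np

def mpl_objective(W, b, S):
    """Log pseudo-likelihood of the BM, with S a (K, N) matrix of +/-1 supports
    and diag(W) assumed to be zero."""
    field = W @ S + b[:, None]                        # (K, N) local fields b_k + (W s_i)_k
    return np.trace(S.T @ W @ S) + b @ S.sum(axis=1) - np.log(np.cosh(field)).sum()

# Illustrative maximization with scipy (minimizing the negative objective),
# symmetrizing W and zeroing its diagonal inside the wrapper:
# from scipy.optimize import minimize
# def neg(theta, S, K):
#     W = theta[:K * K].reshape(K, K)
#     W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)
#     return -mpl_objective(W, theta[K * K:], S)
# K = S.shape[0]
# res = minimize(neg, np.zeros(K * K + K), args=(S, K), method="L-BFGS-B")
```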

Page 43: Exploiting Statistical Dependencies in Sparse Representations


Simple Experiment & Results


We perform the following experiment:
- W is 64×64 and banded, with L=3 and all non-zero values fixed (0.2).
- b is a constant vector (-0.1).
- 10,000 supports are created from this BM distribution – the average cardinality is 16.5 (out of 64).
- We employ 50 iterations of MPL-SESOP to estimate the BM parameters.
- Initialization: W=0, and b is set by the scalar (per-entry) probabilities.

Page 44: Exploiting Statistical Dependencies in Sparse Representations


Simple Experiment & Results

[Plots: the MPL objective as a function of the iterations, and the relative estimation errors $\|\hat{W}-W\|_2^2/\|W\|_2^2$ and $\|\hat{b}-b\|_2^2/\|b\|_2^2$ as a function of the iterations.]

Page 45: Exploiting Statistical Dependencies in Sparse Representations


Testing on Image Patches

Given image patches $\{y_i\}_{i=1}^N$ of size 8×8 pixels and a unitary DCT dictionary, we alternate (2 rounds) between:
- Assuming W, b, σ_α, σ_e and D are known, estimate the representations {α_i} – using Exact-MAP.
- Assuming the supports {s_i} are known, estimate the BM parameters W and b – using MPL-SESOP.
- Assuming the rest is known, estimate the signal STDs – using simple ML.

Denoising performance? This experiment was done for patches from 5 images (Lenna, Barbara, House, Boat, Peppers), with σ_e varying in the range [2, 25].

The results show an improvement of ~0.9dB over plain OMP.

Note: we assume that W is banded, and estimate it as such by searching (greedily) for the best permutation.

Page 46: Exploiting Statistical Dependencies in Sparse Representations

Part VI: Summary and Conclusion

Page 47: Exploiting Statistical Dependencies in Sparse Representations

Today We Have Seen that ...

Sparsity and redundancy are important ideas that can be used in designing better tools in signal/image processing.

Is it enough? No! We know that the found representations exhibit dependencies. Taking these into account could further improve the model and its behavior.

In this work we use the Boltzmann Machine and propose:
- A general set of pursuit methods.
- Exact MAP pursuit for the unitary case.
- Estimation of the BM parameters.

What next? We intend to explore:
- Other pursuit methods.
- Theoretical study of these algorithms.
- K-SVD + this model.
- Applications ...