Wiener Filtering for Image Restoration & Basics on Image Compression


Page 1: Wiener Filtering for Image Restoration & Basics on Image Compression


Wiener Filtering for Image Restoration & Basics on Image Compression

Spring ’09 Instructor: Min Wu

Electrical and Computer Engineering Department, University of Maryland, College Park

bb.eng.umd.edu (select ENEE631 S’09) [email protected]

ENEE631 Spring'09 — Lecture 8 (2/18/2009)

Page 2: Wiener Filtering for Image Restoration & Basics on Image Compression


Overview

Last Time: image restoration

– Power spectral density for a 2-D stationary random field
– A few commonly seen linear distortions in imaging systems
– Deconvolution: inverse filtering, pseudo-inverse filtering

Today:
– Wiener filtering: balance between inverse filtering & noise removal
– Basic compression techniques


Block diagram: u(n1,n2) → [ H ] → w(n1,n2); w(n1,n2) + η(n1,n2) = v(n1,n2); v(n1,n2) → [ G ] → u'(n1,n2)

Pseudo-inverse filter: choose G(ω1,ω2) = 1 / H(ω1,ω2) for |H(ω1,ω2)| ≠ 0

Page 3: Wiener Filtering for Image Restoration & Basics on Image Compression


Handling Noise in Deconvolution

Inverse filtering is sensitive to noise
– Does not explicitly model or handle noise

Balance undoing the degradation H against suppressing noise
– Minimize the MSE between the original and the restored image:
  e = E{ [ u(n1,n2) – u'(n1,n2) ]² }, where u'(n1,n2) is a function of { v(m1,m2) }
– The best estimate is the conditional mean E[ u(n1,n2) | all v(m1,m2) ] (see EE621), but it is usually difficult to solve for general restoration (it needs the conditional probability distribution, and the estimator is nonlinear in general)

Get the best linear estimate instead => Wiener filtering
– Treat the (desired) image and the noise as random fields
– Produce a linear estimate from the observed image that minimizes the MSE

Block diagram (same as above): u(n1,n2) → [ H ] → w(n1,n2); w + η(n1,n2) = v(n1,n2); v → [ G ] → u'(n1,n2)


Page 4: Wiener Filtering for Image Restoration & Basics on Image Compression


EE630 Review: Principle of Orthogonality

“Orthogonal” in a statistical sense: i.e. the optimal error signal and each observation sample used in the filtering (and also their combinations) are statistically uncorrelated

– Plugging e[n] into the orthogonality principle leads to the normal equations.
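To make the review concrete, here is the standard 1-D FIR form in generic notation (the desired signal d[n], observation x[n], taps a_k, and length N are placeholder names, not from the slides). With the estimate \hat d[n] = \sum_{k=0}^{N-1} a_k x[n-k] and error e[n] = d[n] - \hat d[n], orthogonality states

E\{\, e[n]\, x^*[n-m] \,\} = 0, \quad m = 0, \dots, N-1
\;\Longrightarrow\; \sum_{k=0}^{N-1} a_k\, R_x(m-k) = R_{dx}(m) \quad \text{(normal equations)}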

Page 5: Wiener Filtering for Image Restoration & Basics on Image Compression


Wiener Filtering

Get the best linear estimate minimizing the MSE

Assume: a spatially invariant restoration filter, u'(n1,n2) = g(n1,n2) ⊛ v(n1,n2) (2-D convolution); wide-sense stationary original signal and noise; noise is zero-mean and uncorrelated with the original signal.

Solution
– Principle of orthogonality: E{ [ u(n1,n2) – u'(n1,n2) ] v*(m1,m2) } = 0
  => E[ u(n1,n2) v*(m1,m2) ] = E[ u'(n1,n2) v*(m1,m2) ] => Ruv(k,l) = Ru'v(k,l),
  i.e. the restored image should have stochastic properties similar to the original's.

Find expressions for the two cross-correlation functions:
– Extending the 1-D result to v(n1,n2) = w(n1,n2) + η(n1,n2): Ruv(k,l) = Ruw(k,l) + Ruη(k,l);
  if w(n1,n2) and η(n1,n2) are uncorrelated: Rvv(k,l) = Rww(k,l) + Rηη(k,l)
– Ru'v(k,l) = g(k,l) ⊛ Rvv(k,l) = g(k,l) ⊛ [ Rww(k,l) + Rηη(k,l) ]
– Ruv(k,l) = Ruw(k,l) + Ruη(k,l) = h*(−k,−l) ⊛ Ruu(k,l) + 0 (so that Suv = H* Suu)



Page 6: Wiener Filtering for Image Restoration & Basics on Image Compression


Wiener Filter in Frequency-Domain Representation

Set Ruv(k,l) = Ru'v(k,l), where
– Ru'v(k,l) = g(k,l) ⊛ Rvv(k,l) = g(k,l) ⊛ [ Rww(k,l) + Rηη(k,l) ]
– Ruv(k,l) = Ruw(k,l) + Ruη(k,l) = h*(−k,−l) ⊛ Ruu(k,l) + 0

Take the DFT to get the representation in terms of power spectral densities:

G(ω1,ω2) = Suv(ω1,ω2) / Svv(ω1,ω2)

Note: Svv(ω1,ω2) = |H(ω1,ω2)|² Suu(ω1,ω2) + Sηη(ω1,ω2),
Suv(ω1,ω2) = H*(ω1,ω2) Suu(ω1,ω2)

=> G_wiener(ω1,ω2) = H*(ω1,ω2) Suu(ω1,ω2) / [ |H(ω1,ω2)|² Suu(ω1,ω2) + Sηη(ω1,ω2) ]


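As an illustration (not from the slides), a minimal NumPy sketch of this frequency-domain Wiener deconvolution; the PSF h and the PSD estimates psd_u (for Suu) and psd_n (for Sηη) are assumed to be given:

import numpy as np

def wiener_deconvolve(v, h, psd_u, psd_n):
    # v: observed image, v = h (convolved with) u + noise (2-D array)
    # h: point-spread function with origin at index (0, 0); zero-padded to v.shape below
    # psd_u, psd_n: assumed-known PSDs Suu and S_eta_eta (2-D arrays, psd_n > 0)
    H = np.fft.fft2(h, s=v.shape)
    G = np.conj(H) * psd_u / (np.abs(H) ** 2 * psd_u + psd_n)  # G_wiener from above
    return np.real(np.fft.ifft2(G * np.fft.fft2(v)))

In the noiseless limit (psd_n → 0) this tends to the pseudo-inverse filter, and for H = 1 it reduces to the Wiener smoothing gain Suu / (Suu + Sηη) discussed on the next slide.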

Page 7: Wiener Filtering for Image Restoration & Basics on Image Compression


Wiener Filtering: Special Cases

Balancing between two jobs when deblurring a noisy image:
– HPF behavior for de-blurring (undoing the H distortion)
– LPF behavior for suppressing noise

Noiseless case ~ Sηη = 0
– The Wiener filter becomes the pseudo-inverse filter as Sηη → 0

No-blur case ~ H = 1 (Wiener smoothing filter)
– A zero-phase filter that attenuates noise according to the SNR at each frequency

In the limit Sηη → 0:

G(ω1,ω2) = H*(ω1,ω2) / [ |H(ω1,ω2)|² + Sηη(ω1,ω2)/Suu(ω1,ω2) ]
→ 1 / H(ω1,ω2) if |H(ω1,ω2)| ≠ 0, and 0 if |H(ω1,ω2)| = 0

For H = 1, with SNR(ω1,ω2) = Suu(ω1,ω2) / Sηη(ω1,ω2):

G_wiener(ω1,ω2) = Suu(ω1,ω2) / [ Suu(ω1,ω2) + Sηη(ω1,ω2) ] = SNR(ω1,ω2) / [ SNR(ω1,ω2) + 1 ]


Page 8: Wiener Filtering for Image Restoration & Basics on Image Compression


Comparisons

From Jain Fig.8.11



Page 9: Wiener Filtering for Image Restoration & Basics on Image Compression


Example: Wiener Filtering vs. Inverse Filtering

Figure is from slides at Gonzalez/ Woods DIP book website (Chapter 5)

Page 10: Wiener Filtering for Image Restoration & Basics on Image Compression


Example (2): Wiener Filtering vs. Inverse Filtering


Figure is from slides at Gonzalez/ Woods DIP book website (Chapter 5)

Page 11: Wiener Filtering for Image Restoration & Basics on Image Compression


To Explore Further on Wiener Filter

Recall the assumptions:
– the p.s.d. of the image and noise random fields are known
– the frequency response of the distortion filter is known

Are these reasonable assumptions?

What do they imply in the implementation of Wiener filter?


Page 12: Wiener Filtering for Image Restoration & Basics on Image Compression


Wiener Filter: Issues to Be Addressed

Wiener filter's size
– Theoretically the p.s.d.-based formulation can have an infinite impulse response ~ requires large-size DFTs
– Impose a filter-size constraint: find the best FIR filter that minimizes the MSE

Need to estimate the power spectral density of the original signal?
– Avoid an explicit estimate by using an (adaptive) constant for the SNR (see the sketch after this list)
– Estimate the p.s.d. of the blurred image v and compensate for the variance due to noise
– Estimate from a representative image set (similar to the images to be restored)
– Or use a statistical model for the original image and estimate its parameters

Constrained least squares filter ~ see Gonzalez Sec. 5.9
– Optimize smoothness in the restored image (least squares on the rough transitions)
– Constrain the differences between the blurred image and the blurred version of the reconstructed image
– Estimate the restoration filter w/o estimating the p.s.d.

Unknown distortion H ~ Blind Deconvolution


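As a sketch of the constant-SNR shortcut from the list above (this parametric form appears in Gonzalez/Woods; the constant K stands in for the unknown ratio Sηη/Suu and is tuned by hand or adaptively):

import numpy as np

def parametric_wiener(v, h, K=0.01):
    # Wiener-style restoration with the PSD ratio S_eta_eta/Suu replaced by a constant K
    H = np.fft.fft2(h, s=v.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + K)  # G = H* / (|H|^2 + K)
    return np.real(np.fft.ifft2(G * np.fft.fft2(v)))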

Page 13: Wiener Filtering for Image Restoration & Basics on Image Compression


Basic Ideas of Blind Deconvolution

Three ways to estimate H: observation, experimentation, mathematical modeling

Estimate H via the spectrum's zero patterns
– Two major classes of blur (motion blur and out-of-focus)
– H has nulls related to the type and the parameters of the blur

Maximum-likelihood blur estimation
– Each set of image-model and blur parameters gives a "typical" blurred output; probability comes into the picture because of the noise
– Given the observed blurred image, try to find the set of parameters that is most likely to have produced that blurred output

Iteration ~ Expectation-Maximization (EM) approach
– Given the estimated parameters, restore the image via Wiener filtering
– Examine the restored image and refine the parameter estimation
– Yields local optima (a control-flow sketch follows)
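A highly simplified control-flow sketch of that alternating loop; estimate_blur_params (e.g. an ML fit of motion-blur length/angle to spectral nulls) and psf_from_params are hypothetical helpers, and wiener_deconvolve is the earlier sketch:

def blind_restore(v, estimate_blur_params, psf_from_params, psd_u, psd_n, n_iter=10):
    # Alternate: (re-)estimate the blur parameters, then Wiener-restore with them
    u_hat = v.copy()                                   # start from the observation
    theta = None
    for _ in range(n_iter):
        theta = estimate_blur_params(u_hat, v)         # hypothetical ML parameter fit
        h = psf_from_params(theta, v.shape)            # hypothetical PSF builder
        u_hat = wiener_deconvolve(v, h, psd_u, psd_n)  # restore with current estimate
    return u_hat, theta

As the slide notes, such iterations generally converge only to local optima.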

To explore more: Bovik's Handbook Sec. 3.5 (subsections 3 & 4)

“Blind Image Deconvolution” by Kundur et al, IEEE Sig. Proc. Magazine, vol.13(3), 1996


Page 14: Wiener Filtering for Image Restoration & Basics on Image Compression


Basic Techniques for Data Compression


Page 15: Wiener Filtering for Image Restoration & Basics on Image Compression


Why Need Compression?

Savings in storage and transmission
– multimedia data (esp. image and video) have large data volumes
– difficult to send real-time uncompressed video over current networks

Accommodate relatively slow storage devices
– they do not allow playing back uncompressed multimedia data in real time
  1x CD-ROM transfer rate ~ 150 kB/s
  320 x 240 color video (24 bits/pixel) at 24 fps ~ 5.5 MB/s
  => ~36 seconds needed to transfer 1 second of uncompressed video from CD (checked in the snippet below)
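A quick sanity check of those numbers:

rate_video = 320 * 240 * 3 * 24  # bytes/s: 320x240 pixels, 3 bytes (24 bits) per pixel, 24 fps
rate_cdrom = 150 * 1024          # bytes/s: 1x CD-ROM at ~150 kB/s
print(rate_video / 1e6)          # -> 5.53, i.e. ~5.5 MB/s
print(rate_video / rate_cdrom)   # -> 36.0 seconds of transfer per second of video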


Page 16: Wiener Filtering for Image Restoration & Basics on Image Compression


Example: Storing an Encyclopedia

– 500,000 pages of text (2 kB/page) ~ 1 GB => 2:1 compression
– 3,000 color pictures (640×480×24 bits) ~ 3 GB => 15:1
– 500 maps (640×480×16 bits = 0.6 MB/map) ~ 0.3 GB => 10:1
– 60 minutes of stereo sound (176 kB/s) ~ 0.6 GB => 6:1
– 30 animations, average 2 minutes long (640×320×16 bits × 16 frames/s = 6.5 MB/s) ~ 23.4 GB => 50:1
– 50 digitized movies, average 1 minute long (640×480×24 bits × 30 frames/s = 27.6 MB/s) ~ 82.8 GB => 50:1

Requires a total of 111.1 GB of storage without compression; reduces to 2.96 GB with compression

From Ken Lam's DCT talk 2001 (HK Polytech)

Page 17: Wiener Filtering for Image Restoration & Basics on Image Compression


PCM Coding

How to encode a digital image into bits?
– Sample, then perform uniform quantization: "Pulse Code Modulation" (PCM)
  8 bits per pixel ~ good for grayscale images/video
  10-12 bpp ~ needed for medical images

Reduce the # of bpp for reasonable quality via quantization
– Quantization reduces the # of possible levels to encode
– Visual quantization: dithering, companding, etc.
  Halftoning uses 1 bpp but usually requires upsampling ~ saves less than 2:1

Encoder-decoder pair => "codec"

Pipeline: input image I(x,y) from the image capturing device → Sampler → Quantizer → Encoder → transmit
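A minimal sketch of the uniform-quantization step of PCM, assuming an input image normalized to [0, 1):

import numpy as np

def pcm_quantize(img, bits=8):
    # Map values in [0, 1) to integer level indices 0 .. 2**bits - 1
    levels = 2 ** bits
    return np.clip((img * levels).astype(int), 0, levels - 1)

def pcm_reconstruct(idx, bits=8):
    # Mid-interval reconstruction values for the level indices
    levels = 2 ** bits
    return (idx + 0.5) / levels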


Page 18: Wiener Filtering for Image Restoration & Basics on Image Compression


Discussions on Improving PCM

Quantized PCM values may not be equally likely
– Can we do better than encoding each value using the same # of bits?

Example
– P("0") = 0.5, P("1") = 0.25, P("2") = 0.125, P("3") = 0.125
– If using the same # of bits for all values: need 2 bits to represent the four possibilities
– If using fewer bits for the likely value "0" ~ Variable Length Codes (VLC):
  "0" => [0], "1" => [10], "2" => [110], "3" => [111]
  uses Σi pi li = 1.75 bits on average ~ saves 0.25 bpp!

Bring probability into the picture
– Use the probability distribution to reduce the average # of bits per quantized sample


Page 19: Wiener Filtering for Image Restoration & Basics on Image Compression


Entropy Coding

Idea: use fewer bits for commonly seen values

At least how many bits are needed?
– Limit of compression => "Entropy"
  Measures the uncertainty or average amount of information of a source
  Definition: H = Σi pi log2(1/pi) bits
  e.g., the entropy of the previous example is 1.75 bits

Can't represent a source perfectly with fewer than H bits per sample on average;
can represent a source perfectly with H + ε bits per sample on average (Shannon lossless coding theorem)
– "Compressibility" depends on the statistical nature of the information source

Important to design the codebook so the coded stream can be decoded efficiently and without ambiguity

See the info. theory course (EE721) for more theoretical details
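A quick check of the numbers from the previous two slides, using the example distribution:

import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])  # P("0"), P("1"), P("2"), P("3")
lengths = np.array([1, 2, 3, 3])         # lengths of the VLC [0], [10], [110], [111]
H = np.sum(p * np.log2(1.0 / p))         # entropy: sum of p_i log2(1/p_i)
avg_len = np.sum(p * lengths)            # expected codeword length
print(H, avg_len)                        # both 1.75 -> this VLC reaches the entropy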


Page 20: Wiener Filtering for Image Restoration & Basics on Image Compression


E.g. of Entropy Coding: Huffman Coding

Variable length code
– Assign about log2(1/pi) bits to the i-th value; the # of bits per codeword has to be an integer

Step 1
– Arrange the pi in decreasing order and consider them as tree leaves

Step 2
– Merge the two nodes with the smallest probabilities into a new node and sum up their probabilities
– Arbitrarily assign 1 and 0 to each pair of merging branches

Step 3
– Repeat until no more than one node is left
– Read out each codeword sequentially from root to leaf (a sketch in code follows)
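A compact Python sketch of this procedure (the 0/1 choices at each merge are arbitrary, so the codewords may differ from the slides' up to bit swaps, but the lengths match):

import heapq
import itertools

def huffman_code(probs):
    # probs: {symbol: probability}; returns {symbol: bitstring}
    tiebreak = itertools.count()  # prevents heapq from ever comparing the dicts
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # two smallest-probability nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}   # prepend branch bit 0
        merged.update({s: "1" + w for s, w in c1.items()})  # branch bit 1
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

print(huffman_code({"0": 0.5, "1": 0.25, "2": 0.125, "3": 0.125}))
# e.g. {"0": "0", "1": "10", "2": "110", "3": "111"}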


Page 21: Wiener Filtering for Image Restoration & Basics on Image Compression


Huffman Coding (cont'd)

Symbol   Probability   PCM   Huffman (trace from root)
S0       0.25          000   00
S1       0.21          001   10
S2       0.15          010   010
S3       0.14          011   011
S4       0.0625        100   1100
S5       0.0625        101   1101
S6       0.0625        110   1110
S7       0.0625        111   1111

(Tree merges: 0.0625 + 0.0625 = 0.125, twice; 0.125 + 0.125 = 0.25; 0.15 + 0.14 = 0.29; 0.25 + 0.29 = 0.54; 0.21 + 0.25 = 0.46; 0.54 + 0.46 = 1.0, with 0/1 assigned to each pair of merging branches.)


Page 22: Wiener Filtering for Image Restoration & Basics on Image Compression


Huffman Coding: Pros & Cons

Pros
– Simplicity of implementation (table lookup)
– For a given alphabet size, Huffman coding gives the best coding efficiency (i.e. no other symbol code gives a lower expected code length)

Cons
– Need to obtain the source statistics
– The length of each codeword has to be an integer => leads to a gap between the average code length and the entropy

Improvements (Ref: Cover-Thomas)
– Code a group of symbols as a whole: allows a fractional # of bits/symbol
– Arithmetic coding: fractional # of bits/symbol
– Lempel-Ziv coding (LZW algorithm):
  "universal", no need to pre-estimate the source statistics;
  fixed-length codewords for variable-length source symbols


Page 23: Wiener Filtering for Image Restoration & Basics on Image Compression


Run-Length Coding

How to efficiently encode, e.g., a row in a binary document image:
" 0 0 0 0 0 0 0 1 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 1 …"

Run-length coding (RLC)
– Code the length of each run of "0" between successive "1"s
  run-length of "0" ~ # of "0"s between "1"s
  good when large runs of "0" are frequent and "1"s are sparse
– E.g., the row above => (7) (0) (3) (1) (6) (0) (0) … …
– Assign a fixed-length codeword to run-lengths in a range (e.g. 0~7)
– Or use a variable-length code like Huffman to improve further (see the encoder sketch below)

RLC is also applicable to general data sequences with many consecutive "0"s (or long runs of other values)
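A minimal sketch of the zero-run encoder described above (a run of M zeros is emitted as M without a terminating "1", matching the convention in the analysis two slides ahead):

def rlc_encode(bits, M=15):
    # Emit the lengths of 0-runs between successive 1s
    runs, count = [], 0
    for b in bits:
        if b == 0:
            count += 1
            if count == M:    # maximal run: emit and restart the count
                runs.append(M)
                count = 0
        else:                 # a 1 terminates the current 0-run
            runs.append(count)
            count = 0
    return runs

row = [0,0,0,0,0,0,0,1,1,0,0,0,1,0,1,0,0,0,0,0,0,1,1,1]
print(rlc_encode(row))  # -> [7, 0, 3, 1, 6, 0, 0]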


Page 24: Wiener Filtering for Image Restoration & Basics on Image Compression


RLC Example

Figure is from slides at Gonzalez/ Woods DIP book website (Chapter 8)

Page 25: Wiener Filtering for Image Restoration & Basics on Image Compression


Analyzing the Coding Efficiency of Run-Length Coding

Simplified assumption: "0" occurs independently w.p. p (close to 1)

Probability of getting an L-run of "0" (possible runs L = 0, 1, …, M):
– P(L = l) = p^l (1 − p) for 0 ≤ l ≤ M − 1 (geometric distribution)
– P(L = M) = p^M (when having M or more "0"s)

Average # of binary symbols carried by each run of zeros:
– Savg = Σ_{l=0}^{M−1} (l + 1) p^l (1 − p) + M p^M = (1 − p^M) / (1 − p)

Compression ratio C = Savg / log2(M + 1) = (1 − p^M) / [ (1 − p) log2(M + 1) ]

Example: p = 0.9, M = 15, 4 bits per run-length symbol
– Savg = 7.94; average run-length coding rate Bavg = 4 bits / 7.94 ≈ 0.504 bpp; compression ratio C = 1 / Bavg ≈ 1.985
– Source entropy H = 0.469 bpp => coding efficiency = H / Bavg ≈ 93% (numbers verified in the snippet below)
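A quick check of the example's numbers:

import math

p, M, bits_per_run = 0.9, 15, 4
S_avg = (1 - p ** M) / (1 - p)                      # avg binary symbols per run
B_avg = bits_per_run / S_avg                        # coded bits per source symbol
C = S_avg / math.log2(M + 1)                        # compression ratio
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # entropy of the binary source
print(S_avg, B_avg, C, H / B_avg)                   # ~7.94, ~0.504, ~1.985, ~0.93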


Page 26: Wiener Filtering for Image Restoration & Basics on Image Compression


Summary of Today's Lecture

Wiener filtering for image restoration
– More on advanced restoration & applications later in the course if time allows

Basic compression techniques
– PCM coding; entropy coding; run-length coding

Next time: continue on image compression => quantization, etc.

Take-home exercise: derive the optimal quantizers (1) to minimize the maximum error; (2) to minimize the MSE

Readings
– Gonzalez's 3/e book 5.5-5.8; 8.1, 8.2.1-8.2.7
– For further reading: Woods' book 7.1, 7.2, (7.7); 3.1, 3.2, 3.5.0;
  Jain's book 8.1-8.4; Bovik's Handbook Sec. 3.5 (subsections 3 & 4);
  "Blind Image Deconvolution" by Kundur et al., IEEE Sig. Proc. Magazine, vol. 13(3), 1996


Page 27: Wiener Filtering for Image Restoration & Basics on Image Compression


Revisit: Quantization Concept

L-level quantization
– Minimize errors for this lossy process
– What L values to use?
– Map what range of continuous values to each of the L values?

(Figure: input range [tmin, tmax] partitioned by decision levels t1 … tL+1, with reconstruction values r1 … rL under the p.d.f. pu(x).)

What quantizer minimizes the maximum error?

What conditions on {tk} and {rk} minimize the MSE?



Page 28: Wiener Filtering for Image Restoration & Basics on Image Compression


Quantization: A Close Look


Page 29: Wiener Filtering for Image Restoration & Basics on Image Compression


Review of Quantization Concept

L-level quantization
– Minimize errors for this lossy process
– What L values to use?
– Map what range of continuous values to each of the L values?

Uniform partition over a dynamic range of A = tmax − tmin
– Maximum error = (tmax − tmin) / 2L = A / 2L
– Best solution? Consider minimizing the maximum absolute error (min-max) vs. the MSE: what if values in some interval [a, b] are more likely than in other intervals?

(Figure: uniform decision levels tk, tk+1 over [tmin, tmax]; the quantization error is bounded by (tmax − tmin) / 2L.)


Page 30: Wiener Filtering for Image Restoration & Basics on Image Compression


Bring in Probability Distribution

Minimize error in a probability sense
– MMSE (minimum mean squared error):
  assigns a high penalty to large errors and to likely occurring values;
  squared error is mathematically convenient (differentiable, etc.)

An optimization problem
– What {tk} and {rk} to use?
– Necessary conditions: obtained by setting the partial derivatives to zero

(Figure: p.d.f. pu(x) over [t1, tL+1] with reconstruction values r1 … rL.)

Allocate more reconstruction values in more probable ranges


Page 31: Wiener Filtering for Image Restoration & Basics on Image Compression


MMSE Quantizer (Lloyd-Max)

The reconstruction and decision levels need to satisfy
– tk = (rk−1 + rk) / 2: each decision level is midway between the neighboring reconstruction values
– rk = E[ u | u ∈ [tk, tk+1) ]: each reconstruction value is the centroid of its decision interval under pu(x)

Solve iteratively
– Choose initial values {tk}(0), compute {rk}(0)
– Compute new values {tk}(1) and {rk}(1), …… (a sketch in code follows)

For a large number of quantization levels
– Approximately constant pdf within [tk, tk+1), i.e. p(t) ≈ p(tk') for tk' = (tk + tk+1)/2

Reference: S.P. Lloyd, "Least Squares Quantization in PCM," IEEE Trans. Info. Theory, vol. IT-28, March 1982, pp. 129-137
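A minimal sketch of the iteration for an empirical set of 1-D samples (batch Lloyd updates; the initialization and fixed iteration count are simplistic choices):

import numpy as np

def lloyd_max(samples, L=8, n_iter=50):
    # Returns decision levels t (length L-1) and reconstruction levels r (length L)
    r = np.linspace(samples.min(), samples.max(), L)  # initial reconstruction levels
    t = 0.5 * (r[:-1] + r[1:])
    for _ in range(n_iter):
        t = 0.5 * (r[:-1] + r[1:])                    # decision levels: midpoints
        idx = np.digitize(samples, t)                 # assign samples to intervals
        r = np.array([samples[idx == k].mean() if np.any(idx == k) else r[k]
                      for k in range(L)])             # reconstruction: interval centroids
    return t, r

# e.g.: t, r = lloyd_max(np.random.randn(10000), L=4)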


Page 32: Wiener Filtering for Image Restoration & Basics on Image Compression


Page 33: Wiener Filtering for Image Restoration & Basics on Image Compression


MMSE Quantizer for Uniform Distribution

Uniform quantizer
– Optimal for a uniformly distributed r.v. in the MMSE sense
– MSE = q²/12 with step size q = A/L

SNR of the uniform quantizer
– Variance of a uniformly distributed r.v. = A²/12
– SNR = 10 log10( (A²/12) / (q²/12) ) = 10 log10(A²/q²) = 20 log10 L (dB)
– If L = 2^B: SNR = (20 log10 2)·B ≈ 6B (dB), i.e. "1 bit is worth 6 dB" (derivation of q²/12 below)
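For completeness, the standard derivation behind the q²/12 figure (within each interval the quantization error e is uniform on [−q/2, q/2]):

\mathrm{MSE} = E[e^2] = \int_{-q/2}^{q/2} e^2 \cdot \frac{1}{q}\, de = \frac{q^2}{12},
\qquad
\mathrm{SNR} = 10 \log_{10}\frac{A^2/12}{q^2/12} = 20 \log_{10} L \approx 6.02\,B \ \mathrm{dB}\ \text{for } L = 2^B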

Rate-Distortion tradeoff

(Figure: p.d.f. of the uniform distribution, height 1/A over [t1, tL+1], where tL+1 − t1 = A.)


Page 34: Wiener Filtering for Image Restoration & Basics on Image Compression


Quantization – A "Lossy Step" in Source Coding

The quantizer achieves compression in a lossy way
– The Lloyd-Max quantizer minimizes the MSE distortion for a given rate

At least how many bits are needed for a certain amount of error?
– (Information-theoretic) rate-distortion theory

Rate-distortion function of a r.v.
– The minimum average rate RD bits/sample required to represent this r.v. while allowing a fixed distortion D
– R(D) = min I(X; X*), minimized over p(X*|X) given a source p(X)
– For a Gaussian r.v. and MSE distortion: 1 more bit cuts the distortion down to ¼ => 6 dB

R_D = (1/2) log2(σ²/D) for 0 ≤ D ≤ σ²; R_D = 0 for D > σ² (just use the mean)

(Figure: R_D as a decreasing function of D, reaching 0 at D = σ².)

See Info. Theory course/books for detailed proof of R-D theorem
