Image Compression (Chapter 8) CSC 446 Lecturer: Nada ALZaben


Page 1: Image Compression (Chapter 8)

Image Compression (Chapter 8)

CSC 446 Lecturer: Nada ALZaben

Page 2: Image Compression (Chapter 8)

Outline:
- Introduction
- Image Compression Model
- Compression Types
- Data Redundancy
- Redundancy Types
- Coding Redundancy
- Lossless Compression

Page 3: Image Compression (Chapter 8)

Introduction
Most data nowadays are available online, and because of limited storage space and communication requirements, methods of compressing data prior to storage and/or transmission have become an interesting field of study.

Image compression addresses the problem of reducing the amount of data required to represent a digital image.

Compression is applied prior to storage and/or transmission, and the data are then decompressed to reconstruct the original image.

Page 4: Image Compression (Chapter 8)

Image Compression Model

Source encoder: removes input redundancy.
Channel encoder: increases the noise immunity of the source encoder's output.
Channel: if the channel is noise free, the channel encoder and channel decoder are omitted.
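To make the flow of this model concrete, here is a minimal Python sketch of the pipeline; every function below is a hypothetical placeholder introduced for illustration, not something defined in the chapter.

```python
def source_encode(image):
    """Remove input redundancy (placeholder)."""
    return image

def channel_encode(data):
    """Add protection so the data survive channel noise (placeholder)."""
    return data

def channel_decode(data):
    """Undo the channel encoder's protection (placeholder)."""
    return data

def source_decode(data):
    """Reconstruct the image from the source-encoded data (placeholder)."""
    return data

def send(image, channel_is_noisy=True):
    data = source_encode(image)
    if channel_is_noisy:      # for a noise-free channel, the channel encoder/decoder are omitted
        data = channel_encode(data)
    # ... transmission or storage happens here ...
    if channel_is_noisy:
        data = channel_decode(data)
    return source_decode(data)
```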

Page 5: Image Compression (Chapter 8)

Compression Types
1) Lossy image compression: useful in applications such as broadcast television and video conferencing, in which a certain amount of error is an acceptable trade-off for increased compression performance.

2) Lossless image compression: useful in image archiving, such as medical records, where the image must be compressed and decompressed without losing any information.

Page 6: Image Compression (Chapter 8)

Data redundancy [1]

It is a central issue in digital image compression.

If n1 and n2 denote the number of information-carrying units in two data sets that represent the same information, the relative data redundancy R_D of the first data set can be defined as:

R_D = 1 - 1/C_R

where the compression ratio C_R = n1/n2, n1 is the total size in bits of the original image, and n2 is the total size in bits of the compressed image.
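As a quick illustration (not part of the slides), the following Python sketch computes C_R and R_D from the original and compressed sizes; the example sizes are assumed values.

```python
def compression_stats(n1_bits, n2_bits):
    """Return (C_R, R_D) given original size n1 and compressed size n2, in bits."""
    c_r = n1_bits / n2_bits      # compression ratio C_R = n1 / n2
    r_d = 1 - 1 / c_r            # relative data redundancy R_D = 1 - 1/C_R
    return c_r, r_d

# Assumed example: a 256 x 256 8-bit image compressed to half its size.
c_r, r_d = compression_stats(256 * 256 * 8, 256 * 256 * 4)
print(c_r, r_d)                  # 2.0 0.5 -> half of the original data is redundant
```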

Page 7: Image Compression (Chapter 8)

Data redundancy [2]

If n2 = n1, then C_R = 1 and R_D = 0, indicating that the first representation of the information contains no redundant data.

If n2 << n1, then C_R → ∞ and R_D → 1, indicating significant compression and highly redundant data.

If n2 >> n1, then C_R → 0 and R_D → −∞, indicating that the compressed data contain much more data than the original representation.

Page 8: Image Compression (Chapter 8)

Redundancy Types
1) Coding redundancy.
2) Interpixel redundancy.
3) Psychovisual redundancy.

Data compression is achieved when one or more of these types of redundancy is reduced or eliminated.

Page 9: Image Compression (Chapter 8)

Coding Redundancy
Assume the discrete random variable r_k in the interval [0, 1] represents the gray levels of an image and that each r_k occurs with probability

p_r(r_k) = n_k / n,   k = 0, 1, ..., L-1

where L is the number of gray levels, n_k is the number of times that gray level r_k appears in the image, and n is the total number of pixels in the image.

If each value r_k is represented by l(r_k) bits, then the average number of bits required to represent each pixel is

L_avg = Σ_k l(r_k) p_r(r_k)

The total number of bits required to code an M x N image is M x N x L_avg.

Page 10: Image Compression (Chapter 8)

Coding Redundancy [example]
An 8-level image has the following gray-level distribution (l_1 and l_2 are the code lengths of code 1 and code 2 used below):

r_k    p_r(r_k)   l_1(r_k)   l_2(r_k)
r_0    0.19       3          2
r_1    0.25       3          2
r_2    0.21       3          2
r_3    0.16       3          3
r_4    0.08       3          4
r_5    0.06       3          5
r_6    0.03       3          6
r_7    0.02       3          6

If a natural 3-bit binary code (code 1) is used to represent the 8 possible gray levels, then L_avg = 3 bits, because l_1(r_k) = 3 bits for every gray level.

Page 11: Image Compression (Chapter 8)

Coding Redundancy [example]

For code 2, the average number of bits required to code the image is reduced to:

L_avg = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02) = 2.7 bits

The resulting compression ratio is C_R = 3/2.7 ≈ 1.11, and the level of redundancy is R_D = 1 - 1/1.11 ≈ 0.099, so approximately 10% of the data produced by code 1 is redundant.
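The arithmetic on these two slides can be reproduced with a short sketch; the probabilities and code lengths are taken from the example above (the variable names are my own).

```python
# Gray-level probabilities p_r(r_k) for r_0 ... r_7, from the example table.
p  = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
l1 = [3] * 8                     # code 1: natural 3-bit binary code
l2 = [2, 2, 2, 3, 4, 5, 6, 6]    # code 2: variable-length code lengths

def l_avg(lengths, probs):
    """Average number of bits per pixel: sum of l(r_k) * p_r(r_k)."""
    return sum(l * pk for l, pk in zip(lengths, probs))

avg1 = l_avg(l1, p)              # 3.0 bits/pixel
avg2 = l_avg(l2, p)              # 2.7 bits/pixel
c_r  = avg1 / avg2               # compression ratio, about 1.11
r_d  = 1 - 1 / c_r               # relative redundancy, about 0.099 (roughly 10%)
print(avg1, avg2, round(c_r, 2), round(r_d, 3))
```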

Page 12: Image Compression (Chapter 8)

Lossless Compression
Assigning fewer bits to the more probable gray levels than to the least probable ones achieves data compression; this approach is called "variable-length coding" and is a form of lossless compression.

The "Huffman code" is one kind of variable-length coding.

Page 13: Image Compression (Chapter 8)

Lossless Compression "Huffman code example"
The letters A, B, C, D, and E are to be encoded and have the following relative probabilities of occurrence:

p(A) = 0.16, p(B) = 0.51, p(C) = 0.09, p(D) = 0.13, p(E) = 0.11

The two characters with the lowest probabilities are combined first into a binary tree that has the characters as leaves: p(CE) = 0.09 + 0.11 = 0.20.

Each right branch is labeled 1 and each left branch 0.

Page 14: Image Compression (Chapter 8)

"Huffman code example."

[Slide shows the Huffman tree and the resulting Huffman table.] From the Huffman table, the average code length is obtained as L_avg = Σ_k l(a_k) p(a_k).
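For reference, here is a generic Huffman-coding sketch applied to the five letters above; it is not the exact tree drawn on the slide, but with these probabilities there are no ties, so the code lengths (and hence L_avg) come out the same.

```python
import heapq
from itertools import count

def huffman_codes(probs):
    """Build Huffman codes for a dict {symbol: probability}."""
    tie = count()                                  # tie-breaker keeps heap entries comparable
    heap = [(p, next(tie), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)          # combine the two least probable nodes
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tie), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):                # internal node: left branch 0, right branch 1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                      # leaf: record the accumulated code word
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

p = {"A": 0.16, "B": 0.51, "C": 0.09, "D": 0.13, "E": 0.11}
codes = huffman_codes(p)
l_avg = sum(len(codes[s]) * p[s] for s in p)
print(codes, round(l_avg, 2))   # B gets a 1-bit code, the others 3 bits; L_avg = 1.98 bits/symbol
```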

Page 15: Image Compression (Chapter 8)

Lossless Compression "Run-length encoding"
The idea of run-length encoding (RLE) is to replace long sequences (runs) of identical samples with a special code that indicates the value to be repeated and the number of times it is repeated.

Example: 1110010000 → RLE: (3,1)(2,0)(1,1)(4,0)
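A minimal run-length encoder/decoder matching the (count, value) pairs in this example might look like the following sketch (function names are my own).

```python
from itertools import groupby

def rle_encode(samples):
    """Encode a sequence as (run_length, value) pairs."""
    return [(len(list(run)), value) for value, run in groupby(samples)]

def rle_decode(pairs):
    """Expand (run_length, value) pairs back into the original sequence."""
    return [value for run_length, value in pairs for _ in range(run_length)]

bits = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]          # the sequence 1110010000
encoded = rle_encode(bits)
print(encoded)                                  # [(3, 1), (2, 0), (1, 1), (4, 0)]
assert rle_decode(encoded) == bits              # lossless: decoding recovers the input
```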