

An Efficient and Selective Image Compression Scheme using Huffman and Adaptive Interpolation

Sunil Bhooshan
Department of ECE, Jaypee University of Information Technology, Solan, INDIA
Email: [email protected]

Shipra Sharma
Department of CSE and IT, Jaypee University of Information Technology, Solan, INDIA
Email: [email protected]

Abstract—This paper proposes a hybrid approach to compression. It incorporates lossy as well as lossless compression. Different parts of the image are compressed in one way or the other, depending on the amount of information held in that part. The scheme consists of two stages. In the first stage the image is filtered by a high-pass filter to find the areas that contain detail. In the second stage, an effective scheme based on Huffman coding and adaptive interpolation is developed to encode the original image. With this algorithm, a good compression ratio is obtained, while PSNR and SSIM are better than those of other methods available in the literature. In other words, the newly proposed algorithm provides an efficient means for image compression.

I. Introduction

Image compression literally means reducing the size of a graphics file without compromising its quality. Depending on whether the reconstructed image has to be exactly the same as the original or some loss may be incurred, two classes of compression techniques exist: lossless compression for the former and lossy compression for the latter. Lossy compression techniques achieve very high compression ratios, but the decompressed image is not exactly the same as the original. These methods exploit the fact that, to a certain extent, the human eye cannot differentiate between the two images even though noise exists in the decompressed one. Lossless methods, on the other hand, give much lower compression ratios but recover the original image exactly.

Most advances in the compression field are in lossy compression [1]. Recently proposed lossy compression methods use wavelet transforms [2], [3], but wavelets are computationally expensive and the problem of edge pixels persists. For the proposed algorithm we use lossy compression as proposed in [4]; it is a computationally inexpensive method and gives visibly good results as far as lossy compression is concerned.

Lossless compression methods such as [5], [6], [7], [8] have much lower compression performance than lossy compression [9]. The Huffman coding scheme [10] is an entropy-encoding, lossless method of compression. It produces the fewest bits per symbol on average, and it has been extensively researched over the last five decades [11]. This coding method has been used in one version of Lossless JPEG, JPEG-LS [12], and is computationally less expensive than the arithmetic version of JPEG [1].

Various methods exist in the literature which combine these two approaches. The one in [13] targets different bit rates, while [14] focusses on a multiresolution transformation. Others, like [15], [16], concentrate on optimizing a particular technique.

In this paper we present a simple algorithm incorporating the Huffman coding scheme and adaptive interpolation. We then compare it with JPEG2000, which is chosen because it provides better compression than the traditional JPEG [17].

The paper is organized as follows. Section II deals with the proposed method: the first part of the section describes the compression technique and the second part deals with decompression. Section III presents computational results which show that our method indeed gives good-quality images while achieving a very high compression ratio. Lastly, Section IV takes up the conclusion and future work.

II. The Method

The method considered in this paper is outlined for grayscale images but can be extended to colored ones. The image is treated as a smooth function of x and y, even though it is not so.

A b-bit-per-pixel grayscale image of size m × n pixels is considered for compression. Since we are considering b bits, the gray-level values of the pixels in this image range from 0 to 2^b − 1, where 0 represents black, 2^b − 1 represents white, and intermediate values represent the transition from black to white.



A. Compression

The stepwise procedure to compress a given image is as follows:

Step 1 Let us denote the image as a matrix of gray-level intensity values and represent it by I.

Step 2 Pass I through a high-pass filter and call the result IHP. The passband frequency of the high-pass filter must be chosen such that the filtered image retains enough detail; in other words, the order of the filter is decided on the basis of the amount of information to be retained. Refer to [18] for an in-depth discussion.

Step 3 IHP is divided into square blocks of some size, say 9 × 9. We take ⌈m/9⌉ blocks along the rows and ⌈n/9⌉ along the columns, where ⌈·⌉ denotes the ceiling function.

Step 4 For each such block:

1) The gray value at each position (x, y) is obtained; there are 81 such values, one for each pixel position.

2) If not more than half of these values, i.e., 42 (the threshold parameter), are zero, then the block is marked for Huffman encoding; otherwise it is marked for adaptive interpolation. (A sketch of this classification step is given after this list.)

Step 5 Computations from now onwards are carried out on the original image and not on the filtered one. The original image is also divided into 9 × 9 blocks, like the filtered image. These blocks are numbered so that we can keep track of which block is marked for which type of compression.

Step 6 Each block, starting from block number 1, is checked to see which method it was marked for in Step 4.

Step 7 All the blocks marked for Huffman encoding are placed together row-wise in a new image matrix, ImgHuff.

Step 8 In ImgHuff:

1) For each position (x, y), the gray value Gn is obtained.

2) The number of occurrences of each Gn is calculated.

3) ImgHuff is encoded using the above calculated frequencies.

4) This encoded matrix, which can be further compressed using LZW or arithmetic coding, is represented as ComHuff.

Step 9 All the blocks marked for adaptive interpolation are placed together row-wise in an image matrix, ImgInt.

1) ImgInt is divided into 3 × 3 blocks.

2) The centre pixel of each block is chosen, as in [4].

3) The chosen pixels form the compressed image, say ComInt.
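As a rough illustration of Steps 2-4, the following Python sketch classifies blocks from a high-pass filtered copy of the image. The Laplacian kernel, the use of SciPy's convolve, and the function and variable names are illustrative assumptions, not the paper's actual filter, whose design is discussed in [18].

```python
import numpy as np
from scipy.ndimage import convolve

def classify_blocks(image, block=9, zero_threshold=42):
    """Mark each 9x9 block for lossless (Huffman, 'h') or lossy
    (adaptive interpolation, 'i') coding, based on a high-pass
    filtered copy of the image."""
    # A simple Laplacian high-pass kernel stands in for the
    # designed 2-D FIR filter of [18].
    hp_kernel = np.array([[0, -1, 0],
                          [-1, 4, -1],
                          [0, -1, 0]], dtype=float)
    ihp = convolve(image.astype(float), hp_kernel, mode='nearest')

    m, n = image.shape
    rows, cols = int(np.ceil(m / block)), int(np.ceil(n / block))
    flags = np.empty((rows, cols), dtype='<U1')

    for r in range(rows):
        for c in range(cols):
            blk = ihp[r*block:(r+1)*block, c*block:(c+1)*block]
            zeros = np.count_nonzero(blk == 0)
            # Few zeros in the filtered block => detail present
            # => code the block losslessly.
            flags[r, c] = 'h' if zeros <= zero_threshold else 'i'
    return flags
```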

Fig. 1: Proposed Compression Scheme. [Flowchart: the original image is passed through a high-pass filter and divided into 9×9 blocks; based on the number of zeros z in each filtered block, a block of the original image is flagged h (lossless Huffman coding, compressed image 1) or i (lossy coding, one pixel kept per k×k block, compressed image 2).]

Hence, the two images, ComHuff and ComInt, are the resultant compressed images corresponding to the original image. The overall flow of the proposed compression method is depicted in Figure 1.
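The two coding branches of Steps 7-9 can be sketched as follows. The heap-based Huffman table construction is the standard algorithm of [10], and the lossy branch keeps the centre pixel of each 3 × 3 sub-block as in [4]; the function names and the bit-string representation of the code stream are illustrative choices, not the authors' implementation.

```python
import heapq
from collections import Counter
from itertools import count

def build_huffman_codes(values):
    """Standard Huffman code construction [10]: {symbol: bitstring}."""
    freq = Counter(values)
    tiebreak = count()                       # keeps heap comparisons well-defined
    heap = [(f, next(tiebreak), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: a single symbol
        return {heap[0][2]: '0'}
    codes = {sym: '' for sym in freq}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        for sym in _leaf_symbols(left):      # prepend a bit to every leaf below
            codes[sym] = '0' + codes[sym]
        for sym in _leaf_symbols(right):
            codes[sym] = '1' + codes[sym]
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    return codes

def _leaf_symbols(node):
    """Yield the leaf symbols under a (possibly nested) merged node."""
    if isinstance(node, tuple):
        for child in node:
            yield from _leaf_symbols(child)
    else:
        yield node

def encode_huffman_blocks(img_huff):
    """Step 8: Huffman-encode the matrix of losslessly coded blocks."""
    pixels = img_huff.ravel().tolist()
    codes = build_huffman_codes(pixels)
    bitstream = ''.join(codes[p] for p in pixels)
    return bitstream, codes                  # ComHuff (plus the table for decoding)

def decimate_blocks(img_int):
    """Step 9: keep the centre pixel of every 3x3 sub-block, as in [4]."""
    return img_int[1::3, 1::3]               # ComInt
```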

B. Variants of Compression (Based on threshold parameter)

The proposed algorithm gives the user flexibility in the degree of lossless and lossy compression to apply. If the compression ratio is to be changed, the threshold parameter (the number of zeros used to decide whether a block is encoded losslessly or in a lossy manner) should be changed. For example, we can increase it to more than half if the compression ratio is to be decreased. In other words, the threshold parameter can be varied to increase or decrease the amount of lossless (and lossy) compression. This feature is demonstrated further in the results section.
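As a usage note, and assuming the hypothetical classify_blocks() function from the earlier sketch together with a loaded grayscale image img (both names are illustrative), this trade-off can be explored by sweeping the threshold over the values later used in Table II:

```python
# A higher threshold routes more blocks to lossless Huffman coding,
# which lowers the compression ratio but raises PSNR/SSIM.
for t in (10, 20, 30, 40, 50, 60, 70):
    flags = classify_blocks(img, zero_threshold=t)
    lossless_fraction = (flags == 'h').mean()
    print(f"threshold={t}: {lossless_fraction:.1%} of blocks coded losslessly")
```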

C. Decompression

To reconstruct the original image, the two compressed images, ComHuff and ComInt, are considered. The procedure is outlined below:

Step 1 We start with block number 1 and check whether it belongs to the ComHuff or the ComInt matrix.



Fig. 2: Decompressing Block by Adaptive Interpolation. [Legend: initial pixels and interpolated pixels.]

Step 2 If the nth block belongs to ComHuff:

1) It is stored in a new image matrix, ImgHuf1. This is done until all blocks compressed by the Huffman method are stored in ImgHuf1.

2) ImgHuf1 is decoded back to the original pixel values using the Huffman decoding algorithm.

3) The decoded blocks are placed in the reconstructed image, say DecomImg, according to their numbers.

Step 3 If the nth block belongs to ComInt:

1) We consider the 9 pixels in a block of size 3 × 3.

2) Two pixels are interpolated, as in [4], between every two adjacent pixels, as depicted in Figure 2.

3) As can be observed from Figure 2, the above step returns a block of 7 × 7 instead of 9 × 9. For the time being, the values of the adjacent row and column are copied to make it 9 × 9 in size.

4) This block is placed in the reconstructed image, DecomImg, according to its number. (A sketch of this block expansion is given after the procedure.)

Step 4 To bring DecomImg closer to the original image, we consider all those blocks which were decompressed using adaptive interpolation. This is done because image data contains a large amount of local redundancy [19]. To explain, consider an interpolated block starting at position (p, q) in DecomImg (depicted in Figure 3):

1) We take the gray values of the pixels from (p − 1, q) to (p − 1, q + 9) and from (p + 1, q) to (p + 1, q + 9), obtain their average, and place it in positions (p, q) to (p, q + 9). Similarly, the average of (p + 8, q) to (p + 8, q + 9) and (p + 10, q) to (p + 10, q + 9) is placed in positions (p + 9, q) to (p + 9, q + 9).

2) Likewise, we take the gray values of the pixels from (p, q − 1) to (p + 9, q − 1) and from (p, q + 1) to (p + 9, q + 1), obtain their average, and place it in positions (p, q) to (p + 9, q). Similarly, the average of (p, q + 8) to (p + 9, q + 8) and (p, q + 10) to (p + 9, q + 10) is placed in positions (p, q + 9) to (p + 9, q + 9).

Fig. 3: One Block in Reconstructed Image. [One interpolated block, starting at position (p, q), within the decompressed image.]

Fig. 4: Proposed Decompression Method. [Flowchart: for each block number m up to n, a block flagged h is taken from the Huffman-coded data and Huffman decoded, while a block flagged i is taken from compressed image 2, interpolated to 7×7, and padded back to 9×9; decoded blocks are placed in the decompressed image, and the padded edges of all interpolated blocks are then replaced by averages of neighbouring pixels.]

3) The above two steps are repeated for all interpolated blocks in DecomImg, and the resultant image, say FinDecomImg, is obtained.

Step 5 FinDecomImg is the final reconstructed image. The overall flow of decompression is pictorially represented in Figure 4.
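A minimal sketch of the lossy-branch reconstruction of Step 3 is given below. Plain linear interpolation stands in for the adaptive interpolation of [4], the border refinement of Step 4 is omitted, and the names are illustrative.

```python
import numpy as np

def expand_block(block3):
    """Rebuild one 9x9 block from its 3x3 lossy representation (Step 3).

    Two pixels are inserted between every pair of adjacent kept pixels,
    giving a 7x7 block; linear interpolation is used here as a stand-in
    for the adaptive interpolation of [4]. The last row and column are
    then replicated to restore the 9x9 size (Step 3.3)."""
    block3 = np.asarray(block3, dtype=float)
    kept = np.arange(0, 7, 3)          # kept-pixel positions 0, 3, 6
    full = np.arange(7)                # all positions in the 7x7 block
    # Interpolate along rows: 3 columns -> 7 columns.
    rows7 = np.array([np.interp(full, kept, row) for row in block3])
    # Interpolate along columns: 3 rows -> 7 rows.
    block7 = np.array([np.interp(full, kept, col) for col in rows7.T]).T
    # Pad to 9x9 by replicating the adjacent row/column.
    return np.pad(block7, ((0, 2), (0, 2)), mode='edge')
```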

III. Computational Results

The experiments were performed using the scheme described in the previous section. All experiments were done on a COMPAQ PC with the Windows XP OS, using MATLAB® 7.1.1.



Fig. 5: Scenery. (a) Original Image. (b) Decompressed Image.

Different facets of the algorithm are taken into consideration. Tests are conducted on a large set of images, based on the different amounts of edge information available to make a region significant and on different values of the threshold parameter. The following subsections take up various aspects of applying the algorithm. The images used in this paper are:

• 2495 × 3011 man-and-horse image with 256 gray levels
• 1530 × 1530 scenic image with 256 gray levels
• 1153 × 1153 text image with 256 gray levels
• 500 × 500 frog image with 256 gray levels

A. Results

Figure 5a shows an original image of size 2.7MB. When our compression algorithm is applied to it, two compressed images of sizes 4 bytes and 77.2KB are obtained. These compressed images, when passed through the reconstruction algorithm, result in the image shown in Figure 5b.

As we can observe from the results, although both lossy and lossless compression are used, a high compression ratio, 1 : 84, is obtained. The loss of information is very small in comparison to purely lossy compression techniques, and the compression ratio is high with respect to purely lossless techniques.

Table I shows the compression ratio for images of different sizes. As can be noticed in the table, the text image has a lower compression ratio, as it has more fine detail than the other images.

B. Comparison with JPEG2000 (.JP2)

To compress images as .jp2, a freely available JPEG2000 compressor, A3D compressor 1.0, was used. The comparison is based on PSNR and SSIM for the same compression ratio obtained for a particular image.

PSNR stands for Peak Signal-to-Noise Ratio. It is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the quality of its representation; PSNR is a measure of peak error [20]. A higher PSNR indicates that the reconstructed image is of higher quality. To calculate PSNR, first the MSE (Mean Square Error) is calculated as:

$$\mathrm{MSE} = \frac{1}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}\left\|O(i,j) - D(i,j)\right\|^{2} \qquad (1)$$

where O is the original image and D is the decompressed one. Using MSE, PSNR is calculated as:

$$\mathrm{PSNR} = 10 \times \log_{10}\!\left(\frac{255 \times 255}{\mathrm{MSE}}\right) \qquad (2)$$
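A direct transcription of Equations (1) and (2) in Python, assuming 8-bit images so that the peak value is 255, might look like this:

```python
import numpy as np

def psnr(original, decompressed):
    """PSNR from Equations (1) and (2), assuming 8-bit images (peak = 255)."""
    o = original.astype(float)
    d = decompressed.astype(float)
    mse = np.mean((o - d) ** 2)                 # Equation (1)
    if mse == 0:
        return float('inf')                     # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)    # Equation (2)
```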

SSIM measures the similarity between images. It is a reference metric which measures the quality of an image with respect to the original image. It is calculated on sections of the image. If x and y are two sections, then it is calculated as in Equation 3:

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\,\mathrm{cov}_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \qquad (3)$$

where μx is the average of x, μy the average of y, σx² the variance of x, σy² the variance of y, cov_xy the covariance of x and y, c1 = (k1 L)² and c2 = (k2 L)² are two variables that stabilize the division when the denominator is weak, L is the dynamic range of the pixel values, and k1 = 0.01 and k2 = 0.03 by default.
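Equation (3) can likewise be transcribed directly; note that in practice SSIM is usually computed over local windows and averaged, which this whole-section sketch omits:

```python
import numpy as np

def ssim_section(x, y, L=255, k1=0.01, k2=0.03):
    """Equation (3) evaluated on two image sections x and y."""
    x = x.astype(float)
    y = y.astype(float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```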

Table I shows that for the same compression ratio, i.e., the same amount of compression, our method gives a better-quality decompressed image in almost all cases. One exceptional case is when the image contains text; then the performance of our method degrades slightly. Even so, the performance can be improved by increasing the zero-count threshold in our algorithm, so that more blocks are Huffman coded and the image quality improves.

Our decompressed images are of higher quality than those of JPEG2000. This is made clearer by taking a part of the decompressed image obtained by our algorithm, zooming in on it, and doing the same with the image obtained by JPEG2000. Figure 6a depicts the image obtained by the proposed algorithm and Figure 6b shows that obtained by JPEG2000.



TABLE I: Compression Ratios and Comparison.

Image Name   OS*        CS**       CR***   PSNR (A)   PSNR (J)   SSIM (A)   SSIM (J)
M&H          21.5 MB    334.3 KB   1/65    45.113     37.994     0.989      0.798
Scenery      2.7 MB     77.3 KB    1/84    36.747     27.346     0.974      0.794
Text         1.28 MB    84.7 KB    1/46    26.734     21.553     0.613      0.523
Frog         136.2 KB   7.2 KB     1/54    27.137     20.233     0.869      0.769

*OS = Original Size, **CS = Compressed Size, ***CR = Compression Ratio, A = Our Algorithm, J = JPEG2000.

Fig. 6: Zoomed Images. (a) Decompressed and zoomed image obtained from our algorithm. (b) Decompressed and zoomed image obtained from the JPEG2000 algorithm.

TABLE II: Varying Threshold Parameter.

Threshold Parameter   CR      PSNR     SSIM
10 (< half)           1/146   23.1     0.580
20 (< half)           1/143   23.105   0.582
30 (< half)           1/138   25.157   0.613
40 (= half)           1/84    36.7     0.974
50 (> half)           1/30    43.037   0.993
60 (> half)           1/27    57.776   0.999
70 (> half)           1/25    58.1     0.999

As is clearly visible, details are lost in the latter. This shows that the proposed algorithm maintains a much better quality of reconstructed image than JPEG2000 for the same compression ratio.

C. Variants of Compression

As discussed earlier, we now show the effect of changing the threshold parameter. For the scenery image of Figure 5, Table II shows the effect of varying the threshold parameter on the compression ratio. The effect of the threshold value on PSNR and SSIM can also be observed. Depending on the requirements, we can therefore select a trade-off between the quality of the decompressed image and the compression ratio.

D. Edge Information

Depending on the smoothness of the image obtained from the filter, the threshold value can be varied. If we have an image with sharp edges, that tells us that details have been removed, and hence the threshold parameter can be decreased, and vice versa.

IV. Conclusion and Future Work

We have proposed a new compression technique which accommodates both lossy and lossless compression. Common compression techniques focus on either a lossless or a lossy mechanism; the proposed method is a combination of both. In addition, we can decide the degree of lossy and lossless compression. Block coding methods often apply the same coding method to all blocks.



Here, how a block is coded depends on the detail it carries. Adaptive interpolation and Huffman coding were the two methods used to implement this.

The proposed method can be used for any type of image. For example, if we have a medical image, where no data loss can be tolerated, then Huffman coding will be applied to almost all blocks. In another case, where some loss of data is tolerable, interpolation will be used more often. Therefore, based on the type of image and where it is to be used, we can decide what quality of compression we require.

It can be observed that the method is not computationally complex. The results show that it gives better-quality images than JPEG2000 for the same compression ratio.

We are working towards making a neural network learn the threshold at which to decide whether a block is to be compressed in a lossy or lossless manner.

References

[1] X. Li, Y. Shen, and J. Ma, "An efficient medical image compression," in Proc. IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1-4, 2005.

[2] J. W. Woods, Ed., Subband Image Coding. Boston, MA: Kluwer Academic Publishers, 1991.

[3] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Trans. Signal Processing (Special Issue on Wavelets and Signal Processing), vol. 41, no. 12, pp. 3445–3462, Dec. 1993.

[4] S. Bhooshan and S. Sharma, "Image compression and decompression using adaptive interpolation," in Proc. WSEAS International Conference on Signal Processing, Robotics and Automation, University of Cambridge, Cambridge, UK, Feb. 21-23, 2009.

[5] M. Rabbani and P. Jones, "Digital image compression techniques," SPIE Opt. Eng. Press, Bellingham, Washington, Tech. Rep., 1991.

[6] G. Kuduvalli and R. Rangayyan, "Performance analysis of reversible image compression techniques for high-resolution digital teleradiology," IEEE Trans. Med. Imaging, vol. 11, pp. 430–445, Sept. 1992.

[7] "Progressive bi-level image compression," CCITT Draft Recommendation T.82, ISO/IEC Committee Draft 11544, Sept. 1991.

[8] M. Rabbani and P. W. Melnychuck, "Conditioning contexts for the arithmetic coding of bit planes," IEEE Trans. Inform. Theory, vol. 40, pp. 108–117, 1994.

[9] A. Said and W. A. Pearlman, "An image multiresolution representation for lossless and lossy compression," to appear in IEEE Transactions on Image Processing.

[10] D. Huffman, "A method for the construction of minimum-redundancy codes," Proceedings of the IRE, vol. 40, no. 9, pp. 1098–1101, Sept. 1952.

[11] R. Ponalagusamy, E. Kannan, and M. Arock, "A Huffman decoding algorithm in mobile robot platform," Information Technology Journal, vol. 6, no. 5, pp. 776–779, 2007.

[12] W. B. Pennebaker and J. L. Mitchell, JPEG: Still Image Data Compression Standard. New York: Van Nostrand Reinhold, 1993.

[13] D. Marpe, G. Blättermann, J. Ricke, and P. Maaß, "A two-layered wavelet-based algorithm for efficient lossless and lossy image compression," IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, no. 7, 2000.

[14] A. Said and W. A. Pearlman, "An image multiresolution representation for lossless and lossy compression," IEEE Transactions on Image Processing, vol. 5, no. 9, Sept. 1996.

[15] S.-G. Miaou and S.-N. Chao, "Wavelet-based lossy-to-lossless ECG compression in a unified vector quantization framework," IEEE Transactions on Biomedical Engineering, vol. 52, no. 3, March 2005.

[16] W. Philips, "The lossless DCT for combined lossy/lossless image coding," in Proc. ICIP, vol. 3, October 1998, pp. 871–875.

[17] S. Haseeb and O. O. Khalifa, "Comparative performance analysis of image compression by JPEG2000: A case study on medical images," Information Technology Journal, vol. 5, no. 1, pp. 35–39, 2006.

[18] S. Bhooshan and V. Kumar, "Design of two-dimensional linear phase Chebyshev FIR filters," in Proc. 9th IEEE/IET International Conference on Signal Processing, October 2008.

[19] S.-K. Kil, J.-S. Lee, D.-F. Shen, J.-G. Ryu, E.-H. Lee, H.-K. Min, and S.-H. Hong, "Lossless medical image compression using redundancy analysis," IJCSNS International Journal of Computer Science and Network Security, vol. 6, no. 1A, pp. 50–56, January 2006.

[20] N. R. Wanigasekara, S. Zuangzhi, and Y. Zeng, "Quality evaluation for JPEG 2000 based medical image compression," in IEEE Trans. Image Proc., vol. 8, 2003, pp. 1687–1697.
