7Main Report
Posted on 03-Apr-2018
CHAPTER 1
INTRODUCTION
1.1 FUNDAMENTALS OF DIGITAL IMAGE PROCESSING
Digital image processing (DIP) encompasses processes whose inputs and outputs are
images; these processes fall into three levels. Low-level processes involve
image pre-processing such as noise reduction, contrast enhancement and image sharpening.
Mid-level processing involves segmentation and recognition of individual objects within
images. High-level processing involves image analysis and functions associated with vision.
1.2 NEED FOR COMPRESSION
Compression is needed simply to reduce the amount of space an image would otherwise
take to store. There are many factors to consider when choosing a compression technique:
REAL TIME/NON-REAL TIME
Real time refers to capturing, compressing, decompressing and playing back all in real time,
with no delays. Non-real time involves delays, where the process is carried out on stored
content.
COMPRESSION RATIO
The compression ratio is the size of the original image divided by the size of the
compressed image. Generally, the higher the compression ratio, the poorer the image
quality.
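As a concrete illustration, both the ratio and the resulting bit rate follow directly from the two sizes (the numbers below are illustrative, not measurements from this work):

```python
def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """Ratio of original size to compressed size; higher means more compression."""
    return original_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes: int, width: int, height: int) -> float:
    """Average bits spent per pixel in the compressed representation."""
    return compressed_bytes * 8 / (width * height)

# A hypothetical 512x512 8-bit grayscale image (262144 bytes) compressed to 32768 bytes:
print(compression_ratio(262144, 32768))   # 8.0  (an 8:1 ratio)
print(bits_per_pixel(32768, 512, 512))    # 1.0 bit per pixel
```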
LOSSY/LOSSLESS
The loss factor determines whether there is a loss of quality between the original image and the
image after it has been compressed and played back (decompressed). Lossy compression
(e.g., transform coding) discards a considerable amount of image content, whereas lossless
compression (e.g., predictive coding) loses none. Again, this is affected by
the amount of compression.
1.3 PRINCIPLES OF COMPRESSION
A common characteristic of most images is that the neighboring pixels are highly
correlated and therefore contain highly redundant information. The foremost task then is to
find an image representation in which the image pixels are decorrelated. Redundancy and
irrelevancy reductions are two fundamental principles used in compression. Whereas
redundancy reduction aims at removing parts of the redundancy from the signal source
(image/video), irrelevancy reduction omits parts of the signal that will not be noticed by the
signal receiver. In general, three types of redundancy in digital images can be identified:
Spatial Redundancy or correlation between neighboring pixel values
Spectral Redundancy or correlation between different color planes or spectral bands.
Temporal Redundancy or correlation between adjacent frames in a sequence of images.
Image compression research aims at reducing the number of bits needed to represent an image
by removing the spatial and spectral redundancies as much as possible.
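This spatial redundancy is easy to measure. The sketch below (illustrative, not taken from any referenced work) estimates the correlation between horizontally adjacent pixels of a synthetic smooth image:

```python
import numpy as np

def horizontal_correlation(img: np.ndarray) -> float:
    """Pearson correlation between each pixel and its right neighbour."""
    left = img[:, :-1].ravel().astype(float)
    right = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(left, right)[0, 1])

# A smooth synthetic gradient: neighbouring pixels are almost identical, so the
# correlation is close to 1 -- this is the spatial redundancy a coder removes.
x = np.linspace(0, 255, 64)
img = np.tile(x, (64, 1)) + np.random.default_rng(0).normal(0, 1, (64, 64))
print(horizontal_correlation(img))  # close to 1.0
```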
1.4 IMAGE COMPRESSION
Compressing an image is significantly different than compressing raw binary data. Of course,
general purpose compression programs can be used to compress images, but the result is less
than optimal. This is because images have certain statistical properties which can be exploited
by encoders specifically designed for them. Also, some of the finer details in the image can be
sacrificed for the sake of saving a little more bandwidth or storage space. This also means that
lossy compression techniques can be used in this area.
Lossless compression involves compressing data which, when decompressed, will
be an exact replica of the original data. This is the case when binary data such as executables
and documents are compressed: they need to be reproduced exactly when decompressed. On
the other hand, images (and music too) need not be reproduced exactly. An approximation of
the original image is enough for most purposes, as long as the error between the original and
the compressed image is tolerable.
Image compression is one of the most important and successful applications of the wavelet
transform. Since the emergence of digital acquisition in medical imaging, data production has
been growing continuously. The goal of image compression is to reduce the amount of
data required to represent a digital image.
Figure 1.1: Image Compression
1.5 LOSSLESS IMAGE COMPRESSION
In recent years, medical image data volume has grown quasi-exponentially, in particular
because of the extensive use of MRI and, even more, computed tomography (CT).
These are both volume modalities that can be viewed as a sequence of 2-D images (slices).
Figure 1.2 : Lossless Image Compression
The successive improvements of acquisition equipment tend to increase the resolution
of those images, which intensifies the mass of data to archive. All this makes them much
more cumbersome than other imaging modalities, which is why we focus on CT and MRI.
Legally, the diagnostic information must be kept in the same state as during the initial
diagnosis stage, to allow its reconsideration in case of judicial proceedings. Therefore, if
losses were introduced as a consequence of compression, radiologists would have to make
their diagnoses from degraded images.
1.6 MEDICAL IMAGE PROCESSING
Biomedical image processing has experienced dramatic expansion and has been an
interdisciplinary research field attracting expertise from applied mathematics, computer
sciences, engineering, statistics, physics, biology and medicine. Computer-aided
diagnostic processing has already become an important part of clinical routine.
Accompanied by a rush of new high-technology development and the use of various
imaging modalities, more challenges arise: for example, processing and analyzing a
significant volume of images so that high-quality information can be produced for disease
diagnosis and treatment. The principal objectives of this course are to provide an
introduction to basic concepts and techniques for medical image processing and to
promote interest in further study and research in medical image processing.
1.7 APPLICATIONS OF IMAGE COMPRESSION
Internet
Digital Photography
Medical Imaging
Wireless imaging
Document imaging
Pre-Press
Remote sensing and GIS
Cultural Heritage
Scientific and Industrial
Digital Cinema
Image archives and databases
Surveillance
Printing and scanning
Facsimile
1.8 OBJECTIVE
The technical goals of our research are inspired by the need for image compression in
the radiology department of a hospital. One goal is to centralize the processing and storage of
medical data and to provide fast access to the data through a network. Another goal is
teleradiology, or teleconsultancy.
1.9 OVERVIEW
The thesis is organized such that most chapters are self-contained, in the sense that they
cover different topics within the same framework. The remaining work is organized as
follows: Chapter 2 describes the existing system and Chapter 3 the system specification.
Chapter 4 gives the system description and Chapter 5 the project description. Chapter 6
covers the implementation of this work and Chapter 7 the conclusion and future
enhancements.
CHAPTER 2
EXISTING SYSTEM
2.1 HIERARCHICAL ORIENTED PREDICTIONS FOR RESOLUTION SCALABLE LOSSLESS AND NEAR-LOSSLESS COMPRESSION OF CT AND MRI
BIOMEDICAL IMAGES
2.1.1 INTRODUCTION
Although much work has been done in this field of interest, compression of biomedical
images remains an important issue. Since the emergence of digital acquisition in medical
imaging, data production has been growing continuously. In recent years, it has been subject
to a quasi-exponential increase, in particular because of an extensive use of MRI and, even
more, computed tomography (CT). These are both volume modalities that can be viewed as a
sequence of 2-D images (slices). The successive improvements of acquisition equipment tend
to increase the resolution of those images, which intensifies the mass of data to archive. All
this makes them much more cumbersome than other imaging modalities, which is why we
focused on CT and MRI. They are stored in picture archiving and communication systems,
for which efficient compression algorithms are of great interest.
2.1.2 HIERARCHICAL DECOMPOSITIONS
To hierarchically decompose an image, a prediction level of IHINT can be summarized
in the two prediction steps shown in the figure. Let L be the set of horizontally even-indexed
pixel values, and let H be the set of horizontally odd-indexed pixel values; the first step
(HStep) consists of predicting the pixels of H using an interpolative finite impulse response
filter on L. H then contains the residual values of the prediction. The second step (VStep) is
the mathematical transposition of HStep, applied independently on L to obtain two sets LL and LH,
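The exact filter taps of IHINT are not reproduced here; as an illustration only, a minimal HStep with a 2-tap averaging predictor might look like:

```python
import numpy as np

def h_step(row):
    """One horizontal prediction step (HStep) of a hierarchical decomposition:
    even-indexed samples form the low-pass set L; each odd-indexed sample is
    predicted as the average of its two even neighbours and replaced by its
    residual (illustrative 2-tap interpolative filter, edge replicated)."""
    row = np.asarray(row, dtype=int)
    L = row[0::2]
    H = row[1::2]
    right = np.append(L[1:], L[-1])[: len(H)]   # right even neighbour, edge replicated
    pred = (L[: len(H)] + right) // 2
    return L, H - pred

# On a near-linear ramp the residuals are close to zero, so they cost few bits:
print(h_step([10, 12, 14, 16, 18, 20]))  # low-pass [10 14 18], residuals [0 0 2]
```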
Residual remapping is often used by predictive coders to reduce the alphabet size
(by a factor of about 2 compared with the full residual range) and thereby make the entropy
coding easier.
Fig 2.3 : Residual Remapping
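One common remapping (an illustrative choice, not necessarily the exact one used here) exploits the fact that the true pixel value is bounded, so the residual taken modulo the alphabet size is still invertible:

```python
def remap(residual: int, M: int = 256) -> int:
    """Modulo residual remapping: raw residuals x - pred span 2*M - 1 values,
    but since the true pixel x lies in [0, M-1], the residual modulo M
    identifies x uniquely -- roughly halving the alphabet the entropy coder sees."""
    return residual % M

def unmap(code: int, pred: int, M: int = 256) -> int:
    """Invert the remapping: recover the pixel, then the true residual."""
    x = (pred + code) % M
    return x - pred

# Hypothetical example: pred = 200, actual pixel x = 10 -> raw residual -190
code = remap(10 - 200)        # 66, a value inside the original alphabet [0, 255]
assert unmap(code, 200) == -190
```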
The time complexity is then around twice that of the lossless decomposition, which
only requires the pyramidal ascent, but the memory consumption stays the same. Another
implementation, using temporary storage of the residual data obtained during the pyramidal
descent, would achieve the same time complexity as the lossless case, but at a memory cost
of the size of the image.
2.1.5 RESULT
On CTs, CALIC always gives the best compression performance, except for the smooth-image
data set (MeDEISA), where least-square dynamically optimized predictors perform
better. On MRI, except on the smooth data set (Harvard-3D), it often achieves compression
equivalent to HOP. Among scalable coders only, J2K is most often the worst, or not far from
the worst, coding algorithm, leaving out the MeDEISA CT images for which HOP is not
efficient. However, except on smooth data sets, HOP is always better than SPIHT, J2K, and
IHINT. IHINT obtains results similar to J2K on CT images but is competitive with CALIC
and HOP on MRI.
The proposed least-square optimization of the predictors allows us to bypass the
inefficiency of HOP on smooth images.
Table 2.1: Lossless Rates Averages
2.1.6 DRAWBACK
HOP is not efficient for smooth images.
2.2 THE LOCO-I LOSSLESS IMAGE COMPRESSION ALGORITHM: PRINCIPLES
AND STANDARDIZATION INTO JPEG-LS
2.2.1 INTRODUCTION
LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core
of JPEG-LS, the standard for lossless and near-lossless compression of continuous-tone
images. The algorithm was originally introduced in an abridged format; the standard
reference is quite obscure and skips the theoretical background that explains the algorithm's
success. This paper discusses the theoretical foundations of LOCO-I and presents a full
description of the main algorithmic components of JPEG-LS. Image compression models
customarily consisted of a fixed structure, for which parameter values were adaptively
learned.
2.2.2 LOCO-I TECHNIQUES
Lossless data compression schemes often consist of two distinct and independent
components: modeling and coding. The modeling part can be formulated as an inductive
inference problem, in which the data (e.g., an image) is observed sample by sample in some
predefined order (e.g., raster scan, the assumed order for images in the sequel). The CALIC
algorithm, developed in parallel with LOCO-I, seems to confirm a pattern of diminishing
returns: by tuning the model more carefully to the image compression application, some
compression gains are obtained, but at a considerable cost in complexity. This observation
suggested that judicious modeling, which seemed to be reaching a point of diminishing
returns in terms of compression ratios, should rather be applied to obtain competitive
compression at significantly lower complexity levels.

For multi-component (color) images, the JPEG-LS syntax supports both interleaved
and non-interleaved (i.e., component-by-component) modes. The prediction and modeling
units in JPEG-LS are based on the causal template depicted in the figure, and JPEG-LS limits
its image buffering requirement to one scan line. A very simple context model, determined by
quantized gradients, aims at approaching the capability of the more complex universal
context-modeling techniques for capturing high-order dependencies. This quantization aims
at maximizing the mutual information between the current sample value and its context: an
information-theoretic measure of the amount of information the conditioning context
provides about the sample value to be modeled. The desired small number of free statistical
parameters is achieved by adopting, here as well, a TSGD model, which yields two free
parameters per context. In adaptive mode, a structured family of codes further relaxes the
need to dynamically update code tables due to possible variations in the estimated
parameters. JPEG-LS also offers a lossy mode of operation, termed near-lossless, in which
every sample value in a reconstructed image component is guaranteed to differ from the
corresponding value in the original image by no more than a preset (small) amount. The
paper reviews the JPEG-LS lossless encoding procedures for a single component of an image.
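The near-lossless guarantee is obtained by uniformly quantizing the prediction residual. A sketch of the standard JPEG-LS quantization rule, where `delta` is the preset error bound:

```python
def nl_quantize(e: int, delta: int) -> int:
    """Near-lossless residual quantization as in JPEG-LS: the quantized index,
    once dequantized, differs from the true residual e by at most delta."""
    if e >= 0:
        return (e + delta) // (2 * delta + 1)
    return -((-e + delta) // (2 * delta + 1))

def nl_dequantize(q: int, delta: int) -> int:
    """Reconstruct the residual from its quantized index."""
    return q * (2 * delta + 1)

# Every residual is reconstructed within the preset bound; delta = 0 is lossless.
delta = 2
assert all(abs(e - nl_dequantize(nl_quantize(e, delta), delta)) <= delta
           for e in range(-255, 256))
```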
Figure 2.4 : JPEG-LS Block Diagram
2.2.3 RESULT
These results are compared with those obtained with other relevant schemes reported in
the literature, over a wide variety of images. The compressed data format for JPEG-LS closely
follows the one specified for JPEG. The bit stream organized into frames, scans, and restart
intervals within a scan, markers specifying the various structural parts, and marker segments
specifying the various parameters.
Table 2.2: Compression Results on New Image Test Set (in Bits/Sample)

S.NO.  TECHNIQUE USED          COMPRESSION RESULT (bits/sample)
1.     LOCO-I                  3.18
2.     JPEG-LS                 3.19
3.     FELICS                  3.76
4.     Lossless JPEG Huffman   4.08
Table II shows (lossless) compression results of LOCO-I, JPEG-LS, and LOCO-A,
compared with other popular schemes. LOCO-I/JPEG-LS decompression is about 10% slower
than compression, making it a fairly symmetric system.
2.2.4 DRAWBACKS
Coding is a lengthy and difficult process.
LOCO-I/JPEG-LS decompression is about 10% slower than compression.
2.3 THE JPEG 2000 STILL IMAGE COMPRESSION STANDARD IMAGE
PROCESSING
2.3.1 INTRODUCTION
JOINT PHOTOGRAPHIC EXPERTS GROUP
The term "JPEG" is an acronym for the Joint Photographic Experts Group, which created
the standard. JPEG is a commonly used method of lossy compression for digital
photography. The degree of compression can be adjusted, allowing a selectable tradeoff
between storage size and image quality; JPEG typically achieves 10:1 compression with little
perceptible loss in image quality. JPEG 2000 supports multiple-component images. Different
components need not have the same bit depth, nor need all be signed or unsigned. For
reversible (i.e., lossless) systems, the only requirement is that the bit depth of each output
image component must be identical to the bit depth of the corresponding input image
component.
JPEG COMPRESSION
The compression method is usually lossy, meaning that some original image
information is lost and cannot be restored, possibly affecting image quality. There is an
optional lossless mode defined in the JPEG standard. Image files that employ JPEG
compression are commonly called "JPEG files", and are stored in variants of the JIF image
format. JPEG compression artifacts blend well into photographs with detailed non-uniform
textures, allowing higher compression ratios. Notice how a higher compression ratio first
affects the high-frequency textures in the upper-left corner of the image, and how the
contrasting lines become fuzzier. The very high compression ratio severely affects the quality
of the image, although the overall colors and image form are still recognizable. The JPEG 2000
compression engine (encoder and decoder) is illustrated in the block diagram: the discrete
wavelet transform is first applied to the source image data, and the transform coefficients are
then quantized and entropy coded to form the output code stream (bit stream).
Figure 2.5: General Block Diagram of the JPEG 2000 (A) Encoder and (B) Decoder
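The quantization stage can be sketched as follows (the step size below is an illustrative parameter, not a value from the standard); JPEG 2000 uses scalar quantization with a central deadzone:

```python
import math

def deadzone_quantize(c: float, step: float) -> int:
    """Scalar quantization with a central deadzone, JPEG 2000 style:
    coefficients with |c| < step map to zero."""
    return int(math.copysign(math.floor(abs(c) / step), c))

def dequantize(q: int, step: float, r: float = 0.5) -> float:
    """Midpoint reconstruction (r = 0.5) of a quantized coefficient."""
    return 0.0 if q == 0 else math.copysign((abs(q) + r) * step, q)

# Small wavelet coefficients fall into the deadzone and cost almost nothing to code:
coeffs = [-12.7, -0.4, 0.2, 3.9, 25.1]
print([deadzone_quantize(c, 4.0) for c in coeffs])  # [-3, 0, 0, 0, 6]
```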
Entropy coding of the quantized coefficients is performed within code blocks. Since
encoding and decoding of the code blocks are independent processes, bit errors in the bit
stream of a code block will be restricted within that code block. To increase error resilience,
termination of the arithmetic coder is allowed after every coding pass and the contexts may be
reset after each coding pass. This allows the arithmetic decoder to continue the decoding
process even if an error has occurred. The decoder is the reverse of the encoder. The code
stream is first entropy decoded, dequantized, and inverse discrete transformed, thus resulting in
the reconstructed image data. Although this general block diagram looks like the one for the
conventional JPEG, there are radical differences in all of the processes of each block of the
diagram. In addition to specifying the color space, the standard allows for the decoding of
single-component images, where the value of that single component represents an index into
a palette of colors. Feeding a decompressed sample into the palette converts the single value
to a multiple-component tuple, whose value represents the color of the sample.
JPEG 2000 provides bit streams that are parseable and can easily be reorganized by a
transcoder on the fly. JPEG 2000 also allows random access (with minimal decoding) to the
block level of each subband, making it possible to decode a region of an image without
decoding the whole image.
WAVELET TRANSFORM
The wavelet transform is used for the analysis of the tile components into different
decomposition levels. These decomposition levels contain a number of subbands, which
consist of coefficients describing the horizontal and vertical spatial-frequency characteristics
of the original tile component. SNR scalability involves generating at least two image layers
of the same spatial resolution, but of different qualities, from a single image source.
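A one-level decomposition into subbands can be sketched with the simplest wavelet, the Haar transform (an illustration only; JPEG 2000 itself uses the 5/3 or 9/7 filters, and subband naming conventions vary):

```python
import numpy as np

def haar2d_level(img: np.ndarray):
    """One decomposition level with the (unnormalised) Haar wavelet: returns
    the LL approximation plus the detail subbands that carry the horizontal,
    vertical and diagonal spatial-frequency content of the tile."""
    a = img.astype(float)
    # Horizontal pass: averages and differences of column pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Vertical pass on each result: row pairs.
    LL = (lo[0::2] + lo[1::2]) / 2
    LH = (lo[0::2] - lo[1::2]) / 2
    HL = (hi[0::2] + hi[1::2]) / 2
    HH = (hi[0::2] - hi[1::2]) / 2
    return LL, LH, HL, HH

# On a constant tile every detail subband is exactly zero -- the information
# concentrates in LL, which is what makes wavelet compression effective.
LL, LH, HL, HH = haar2d_level(np.full((8, 8), 100))
print(LL[0, 0], LH.max(), HL.max(), HH.max())  # 100.0 0.0 0.0 0.0
```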
2.3.2 RESULT

Table 2.3: JPEG Compression Test Set (in Bits/Sample)

S.NO.  TECHNIQUE   PSNR   ENCODER TIME   DECODER TIME
1.     JPEG 2000   23.81  12.51          5.85
2.     SPIHT       23.44  8.44           8.69
3.     JPEG LS     20.61  1.79           0.56
4.     Wavelet     20.21  1.32           0.43
The lossless compression efficiency of the reversible JPEG 2000 (J2KR), JPEG-LS,
lossless JPEG (L-JPEG), and PNG is reported in the table. JPEG 2000 performs equivalently
to JPEG-LS in the case of natural images, with the added benefit of scalability; JPEG-LS,
however, is advantageous in the case of the compound image. Error resilience is one
of the most desirable properties in mobile and Internet applications. JPEG 2000 also supports
a combination of spatial and SNR scalability, and uses a variable-length (arithmetic) coder to
compress the quantized wavelet coefficients.
2.3.3 DRAWBACKS
Slow process in compression and decompression
Error Resilience is poor
Difficult in Progressive bit streams
2.4 WAVELET-BASED MEDICAL IMAGE COMPRESSION WITH ADAPTIVE PREDICTION
2.4.1 INTRODUCTION
The scheme consists of a wavelet-based lossy layer followed by arithmetic coding of the
quantized residual, to guarantee a given error bound in the pixel domain. This paper focuses
on the selection of the optimum bit rate for the lossy layer so as to achieve the minimum total
bit rate. Unlike other similar lossy-plus-lossless approaches using a wavelet-based lossy layer,
the proposed method does not require iterating decoding and the inverse discrete wavelet
transform to locate the optimum bit rate. It proposes a simple method to estimate the optimal
bit rate, with a theoretical justification based on the critical-rate argument from rate-
distortion theory and the independence of the residual error.
2.4.1.1 JPEG2000 TECHNIQUE
Lossless compression for medical images has been investigated by examining
dependencies among wavelet coefficients. Set Partitioning in Hierarchical Trees (SPIHT) is a
powerful wavelet-based image compression method. JPEG2000 is a commonly used method
of lossy compression for digital photography; the degree of compression can be adjusted,
allowing a selectable tradeoff between storage size and image quality, and it typically
achieves 10:1 compression with little perceptible loss in image quality. The discrete wavelet
transform (DWT) produces a multi-scale image decomposition. By employing filtering and
subsampling, a decomposition image (for the classical dyadic approach) is produced, very
effectively revealing data redundancy at several scales.
2.4.1.2 WAVELET BASED APPROACH
Figure 2.6: Proposed Two-Stage Near-Lossless Wavelet Coder

Since there is no simple relationship between the quantization error in the wavelet domain
and the error in the pixel domain, it is generally presumptuous to expect that a wavelet coder
will also perform well in terms of the pixel-domain error. In fact, there is a rather
complicated relationship between the quantization step size for the wavelet coefficients and
the bound on the error in the pixel domain. The method presented here is the only one that
determines this optimal first-stage lossy rate during encoding without exhaustive iteration.
2.4.2 RESULT
This project presents an analysis of the convergence phenomena in the probability
distribution of encoding residuals in both the wavelet and the pixel domains. This
demonstrates a possible way of further improving the performance of the proposed method,
which is quite flexible in the sense that any improvement can be incorporated into the lossy
layer. The total rates in parentheses in these rows refer to the actual total rates obtained by
stopping the SPIHT encoding at the estimated optimal lossy rate and coding the quantized
residual with an arithmetic coder.
Table 2.4: Compression Results on JPEG2000 (in Bits/Sample)

S.NO.  METHOD                    PSNR   BPP
1.     JPEG                      49.90  3.30
2.     CALIC                     49.89  3.07
3.     Prequant, S+P             49.90  3.27
4.     Iter. SPIHT + Context AC  49.90  3.31
5.     SPIHT + AC (proposed)     49.90  3.38
2.4.3 DRAWBACKS
Difficult to balance bit rate and complexity.
It cannot offer support for multi-context image processing in image compression fields.
Less efficient.
2.5 NEAR-LOSSLESS AND SCALABLE COMPRESSIONS FOR MEDICAL
IMAGING USING A NEW ADAPTIVE HIERARCHICAL ORIENTED
PREDICTION
2.5.1 INTRODUCTION
We propose a new hierarchical approach to resolution-scalable lossless and near-
lossless (NLS) compression. It combines the adaptability of DPCM schemes with new
hierarchical oriented predictors to provide resolution scalability with better compression
performance than the usual hierarchical interpolation predictor or the wavelet transform. The
HOP algorithm is also well suited for NLS compression, providing an interesting rate-
distortion tradeoff compared with JPEG-LS and an equivalent or better PSNR. CT and MRI
are both volume modalities that can be viewed as a sequence of 2-D images (slices).
All of these medical images have wide application in telemedicine, the provision
of health care services via interactive audio and data communication. It is a digitized and
computerized process incorporating many technologies, such as communication, databases,
user interfaces and medical science, with communication as its foundation. As medical
images are very large, their transmission and storage often cause difficulty.
2.5.1.1 HOP TECHNIQUES
Medical images are most often stored without any loss, even if they always contain
unnecessary noisy information that could be removed by using a less drastic lossy compression
that can ensure a control on the losses, such as near-lossless (NLS) algorithms, to preserve a
visually lossless quality. Focusing on 2-D algorithms, the best lossless compression results are
usually obtained with efficient DPCM schemes. They follow a row-scan-ordered prediction
and use adaptive methods exploiting causal information. JPEG-LS (JLS) standard and CALIC
are often used as references. Such coders lack a progressive model, which is important for
distant access of biomedical images. NLS compression is performed by predicting pixels from
the NLS causal reconstructed values.
In this paper, three near-lossless image compression schemes have been investigated.
The first is NLIC (near-lossless image compression), which performs an initial lossy
preparation of the image with the DCT (discrete cosine transform) followed by lossless
Huffman coding. The second is RLE with DCT, which performs an initial lossy preparation
with the DCT followed by lossless run-length coding. The last is SPIHT with DWT, which
performs an initial lossy preparation with the DWT followed by lossless encoding based on
the SPIHT technique. These techniques are tested on various kinds of square photographic
and medical images and compared by evaluating performance parameters such as
compression ratio, peak signal-to-noise ratio and root mean square error. Set partitioning in
hierarchical trees (SPIHT) improves the peak signal-to-noise ratio (PSNR) by about 0.5 dB.
Although the theory and program code of arithmetic coding (AC) are mature, its complicated
internal operations limit its application in some real-time fields, such as satellite image and
high-speed camera image compression.
This paper has shown that, even if providing resolution scalability, some compression
improvements could be obtained on noisy native medical images both in lossless and NLS
modes compared with the reference algorithms. The least square optimization has allowed us
to boost the prediction on smooth images, where HOP was not really efficient.
Coding redundancy is present when less-than-optimal code words are used. Interpixel
redundancy results from correlations between the pixels of an image. Psychovisual
redundancy is due to data that is ignored by the human visual system (i.e., visually
non-essential information).
2.5.2 RESULT
A new sequential context-based bias cancellation method was proposed and analyzed to
improve the prediction efficiency. The last original contribution was an entropy coding
technique based on a two-stage coder designed to improve compression in the resolution-
scalable context. Preliminary tests on those images have given promising results: HOP
obtained lossless compression improvements of 10% compared with CALIC.
Table 2.4: Compression Results on HOP (in Bits/Sample)

S.NO.  METHOD     PSNR   BPP
1.     JPEG-2000  23.34  4.91
2.     JPEG-LS    43.57  4.75
3.     CALIC      49.78  4.67
4.     SPIHT      64.34  4.86
5.     HOP        64.78  4.80
2.5.3 DRAWBACKS
1. It reduces storage requirements but not overall execution time.
2. It is prone to transmission errors, since fewer bits are transferred.
3. The proposed hierarchical oriented prediction is not efficient for all images.
In the existing system, medical images may need to be saved for periods of over 30
years. They are stored in picture archiving and communication systems, for which efficient
compression algorithms are of great interest. The diagnostic information must be kept in the
same state as during the initial diagnosis stage to allow its reconsideration.
The proposed HOP and the least-square optimization of its predictors allow us to bypass
the inefficiency of HOP on smooth images and on CT images for which HOP is not efficient.
The LOCO-I lossless image compression algorithm discussed the theoretical foundations
of LOCO-I and presented a full description of the main algorithmic components of JPEG-LS.
Image compression models customarily consisted of a fixed structure, for which parameter
values were adaptively learned. JPEG 2000 also supports a combination of spatial and
SNR scalability.
The HOP algorithm is also well suited for NLS compression, providing an interesting rate-
distortion tradeoff compared with JPEG-LS and an equivalent or better PSNR. CT and MRI
are both volume modalities that can be viewed as a sequence of 2-D images (slices).
A 1-D addressing method is used instead of the original 2-D arrangement for wavelet
coefficients, and a fixed memory allocation for the data lists instead of the dynamic
allocation required in the original SPIHT.
The EBCOT algorithm offers state-of-the-art compression performance with a rich set of
bit-stream features, including resolution scalability, SNR scalability and the random access
property.
Disadvantage
HOP is not efficient for smooth images.
Loss of medical image content may appear.
2.5.4 PROPOSED SYSTEM

Spatial redundancy depends on the correlation between pixels belonging to the same frame.
In the proposed scheme, we use a simple but robust spatial predictor, the median edge
detector (MED), as used in JPEG-LS. MED estimates the symbol to be encoded from the
values of three previously encoded neighboring symbols. We use p(x, y) to represent the
symbol to be encoded, located at (x, y) in the frame. Context modeling is used for efficient
coding of the prediction residuals: by utilizing suitable context models, the prediction
residual can be encoded by switching between different probability models according to the
already-encoded neighbors of the symbol to be encoded.
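The MED predictor is simple enough to state directly. A sketch, where a, b and c denote the left, upper and upper-left causal neighbors of the current pixel:

```python
def med_predict(a: int, b: int, c: int) -> int:
    """Median edge detector (MED) as used in JPEG-LS: a is the left neighbour,
    b the upper neighbour, c the upper-left neighbour of the current pixel."""
    if c >= max(a, b):
        return min(a, b)   # edge detected: pick the side away from the jump
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar (gradient) prediction

# A vertical edge: left = 50, up = 200, up-left = 200. The predictor picks 50,
# guessing that the current pixel sits on the dark side of the edge.
print(med_predict(50, 200, 200))  # 50
```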
Advantages
High compression ratio
Excellent reconstruction quality for video rate
CHAPTER 3
SYSTEM SPECIFICATION
3.1. HARDWARE REQUIREMENT
CPU type : Intel Pentium 4
Clock speed : 3.0 GHz
RAM size : 512 MB
Hard disk capacity : 40 GB
Monitor type : 15 Inch Color Monitor
Keyboard type : Internet Keyboard
CD-drive type : 52x max
3.2. SOFTWARE REQUIREMENT
Operating System : Windows XP
Front End : Matlab
Back End : MS-ACCESS
Documentation : MS-Office
CHAPTER 4
SYSTEM DESCRIPTION
4.1 INTRODUCTION
The main objective of the proposed approach is an efficient prediction method for
medical image compression. An overall compression ratio of 6-14 is obtained for images
with the proposed methods, whereas compressing the same images with lossless JPEG2K
and Huffman coding yields a compression ratio of at most 2. The main contribution of the
research is higher compression ratios than standard techniques in the lossless scenario. This
result will be of great importance for data management in a hospital and for teleradiology.
The first proposed technique is Region of Interest Coding (RIC), a region-of-interest-based
compression scheme. In the first section, the Region of Interest (ROI) is described with
examples, and the literature is thoroughly surveyed for ROI compression schemes. A
discussion on blocky artifacts and quality-assessment indices for NNVQ is carried out. The
effects of the shape and size of the ROI on compression capability are discussed next. RIC is
a lossy technique and is therefore compared with JPEG, which is also lossy; these
comparisons cover compression ratio and objective and subjective quality. The second
proposed technique, DIC, is a lossless compression scheme. The goal of lossless image
compression is to generate an absolutely equivalent, but shorter, representation of the
original image. This is an important requirement for medical imaging, where not only is high
quality in demand, but unaltered archiving is a legal requirement. The method exploits the fact that difference
images contain less data, which enhances compression capacity. Statistical analysis of the difference
image is described by parameters such as probability distribution, entropy and variance.
DIC is compared with lossless JPEG2000. These comparisons are shown graphically and in
tabular form for easy understanding.
4.2 SOFTWARE DESCRIPTION
4.2.1 INTRODUCTION
MATLAB is a high-performance language for technical computing that integrates
computation, visualization, and programming in an easy-to-use environment where problems
and solutions are expressed in familiar mathematical notation. It is a prototyping environment:
it focuses on ease of development, with language flexibility, interactive debugging,
and other conveniences lacking in performance-oriented languages like C and FORTRAN. While MATLAB may not be as fast as C, there are ways to bring it closer. The goal is to minimize the total time spent developing, debugging and running code before obtaining results.
It is a numerical computing environment suited to matrix manipulation, plotting of
functions and data, and image processing, and it can interface with programs written in
other languages such as C and C++. Originally adopted by control design engineers, it is
now widely used for linear algebra and numerical analysis. Using MATLAB, you can solve
technical computing problems faster than with traditional languages such as C and C++.
It provides a development environment for managing code, files and data, and mathematical
functions for statistics, Fourier analysis, optimization and numerical integration.
It is an interactive system whose basic data element is an array that does not require
dimensioning. It allows you to solve many technical computing problems, especially those
with matrix and vector formulations, in a fraction of the time it would take to write a program
in a scalar non-interactive language such as C or FORTRAN. The name MATLAB stands for
matrix laboratory. MATLAB was originally written to provide easy access to matrix software
developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate
the LAPACK and BLAS libraries, embedding the state of the art in software for matrix
computation.
It has evolved over a period of years with input from many users. In university
environments, it is the standard instructional tool for introductory and advanced courses in
mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-
productivity research, development, and analysis.
It features a family of add-on application-specific solutions called toolboxes. Important
to most users of MATLAB, toolboxes allow you to learn and apply specialized
technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that
extend the MATLAB environment to solve particular classes of problems. You can add on
toolboxes for signal processing, control systems, neural networks, fuzzy logic, wavelets,
simulation, and many other areas.
The MATLAB System
The MATLAB system consists of these main parts:
(1) Desktop Tools and Development Environment
This part of MATLAB is the set of tools and facilities that help you use and become
more productive with MATLAB functions and files. Many of these tools are graphical user
interfaces. It includes: the MATLAB desktop and Command Window, an editor and debugger,
a code analyzer, browsers for viewing help, the workspace, and files, and other tools.
(2) Mathematical Function Library
This library is a vast collection of computational algorithms ranging from elementary
functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like
matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
The Language
The MATLAB language is a high-level matrix/array language with control flow
statements, functions, data structures, input/output, and object-oriented programming features.
It allows both "programming in the small", to rapidly create quick programs you do not intend
to reuse, and "programming in the large", to create complex application programs
intended for reuse.
(1) Graphics
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well
as annotating and printing these graphs. It includes high-level functions for two-dimensional
and three-dimensional data visualization, image processing, animation, and presentation
graphics. It also includes low-level functions that allow you to fully customize the appearance
of graphics as well as to build complete graphical user interfaces on your MATLAB
applications.
(2) External Interfaces
The external interfaces library allows you to write C and Fortran programs that interact
with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking),
for calling MATLAB as a computational engine, and for reading and writing MAT-files.
(3) Array Preallocation
MATLAB's matrix variables can grow dynamically by rows and columns.
For example,
>> a = 2
a =
2
>> a(2,6) = 1
a =
2 0 0 0 0 0
0 0 0 0 0 1
MATLAB automatically resizes the matrix. Internally, the matrix data memory must be
reallocated with a larger size. If a matrix is resized repeatedly, for example within a loop, this overhead
can be significant. To avoid frequent reallocations, preallocate the matrix with the zeros
command.
(4) JIT Acceleration
MATLAB 6.5 (R13) and later feature the Just-In-Time (JIT) Accelerator for improving the
speed of M-functions, particularly those with loops. By knowing a few things about the accelerator,
you can improve its performance. The JIT Accelerator is enabled by default. To disable it,
type "feature accel off" in the console, and "feature accel on" to enable it again. As of MATLAB
R2008b, only a subset of the MATLAB language is supported for acceleration. Upon encountering
an unsupported feature, processing falls back to non-accelerated evaluation.
Acceleration is most effective when significant contiguous portions of code are supported.
Data types: Code must use supported data types for acceleration: double (both real and
complex), logical, char, int8-int32, uint8-uint32. Some struct, cell, classdef, and function handle
usage is supported. Sparse arrays are not accelerated.
Array shapes: Array shapes of any size with 3 or fewer dimensions are supported.
Changing the shape or data type of an array interrupts acceleration. A few limited situations
with 4D arrays are accelerated.
Function calls: Calls to built-in functions and M-functions are accelerated. Calling MEX
functions and Java interrupts acceleration. (See also page 14 on inlining simple functions.)
Conditionals and loops: The conditional statements if, elseif, and simple switch statements
are supported if the conditional expression evaluates to a scalar. Loops of the form for
k=a:b, for k=a:b:c, and while loops are accelerated if all code within the loop is supported.
(5) In-Place Computation
Introduced in MATLAB 7.3 (R2006b), the element-wise operators (+, .*, etc.) and some other
functions can be computed in-place. That is, a computation like
x = 5*sqrt(x.^2 + 1);
is handled internally without needing temporary storage for accumulating the result. An M-function
can also be computed in-place if its output argument matches one of the input
arguments.
x = myfun(x);
function x = myfun(x)
x = 5*sqrt(x.^2 + 1);
return;
To enable in-place computation, the in-place operation must be within an M-function (and for
an in-place function, the function itself must be called within an M-function). Currently, there
is no support for in-place computation with MEX-functions.
(6) Multithreaded Computation
MATLAB 7.4 (R2007a) introduced multithreaded computation for multicore and
multiprocessor computers. Multithreaded computation accelerates some per-element functions
when applied to large arrays (for example ^, sin, exp) and certain linear algebra functions in the
BLAS library. To enable it, select File > Preferences > General > Multithreading and select
"Enable multithreaded computation". Further control over parallel computation is possible with
the Parallel Computing Toolbox, using parfor and spmd.
Working formats in MATLAB
If an image is stored as a JPEG image on disc, we first read it into MATLAB.
However, in order to start working with the image, for example to perform a wavelet transform on
it, we must convert it into a suitable format. This section explains four common
formats.
Intensity image (gray scale image)
This is the equivalent of a gray-scale image and is the image we will mostly work
with in this course. It represents an image as a matrix where every element has a value
corresponding to how bright or dark the pixel at the corresponding position should be.
There are two classes (data types) for representing the brightness of a pixel. The double
class assigns a floating-point number between 0 and 1 to each pixel; the value 0 corresponds
to black and the value 1 corresponds to white. The other class, called uint8, assigns an
integer between 0 and 255 to represent the brightness of a pixel; the value 0 corresponds
to black and 255 to white. The class uint8 only requires roughly 1/8 of the storage compared
to the class double. On the other hand, many
mathematical functions can only be applied to the double class. We will see later how to
convert between double and uint8.
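The mapping between the two classes is a simple scaling. A minimal sketch follows (Python for illustration; the function names are ours, mirroring the roles of MATLAB's im2double and im2uint8):

```python
def to_double(u):
    """uint8 intensity (0..255) -> double-style intensity (0.0..1.0)."""
    return u / 255.0

def to_uint8(d):
    """double-style intensity (0.0..1.0) -> uint8 intensity (0..255)."""
    return int(round(d * 255.0))
```

Black and white map to the ends of both ranges, so converting back and forth preserves pixel values.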
Binary image
This image format also stores an image as a matrix but can only color a pixel black or white (and nothing in between). It assigns a 0 for black and a 1 for white.
Indexed image
This is a practical way of representing color images. (In this course we will mostly
work with gray-scale images, but once you have learned how to work with a gray-scale image
you will also know the principle of working with color images.) An indexed image stores an
image as two matrices. The first matrix has the same size as the image, with one number for
each pixel. The second matrix is called the colormap and its size may differ from that of the
image. The numbers in the first matrix are indices into the colormap
matrix.
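The two-matrix representation can be illustrated with a toy example (Python; the colormap values are illustrative):

```python
# Indexed image: each pixel stores an index into a colormap of RGB rows.
colormap = [
    (0.0, 0.0, 0.0),  # index 0: black
    (1.0, 0.0, 0.0),  # index 1: red
    (1.0, 1.0, 1.0),  # index 2: white
]

index_image = [
    [0, 1],
    [2, 1],
]

# Expanding to a true-color image replaces each index by its colormap row.
rgb_image = [[colormap[i] for i in row] for row in index_image]
```

Note that the colormap has only as many rows as there are distinct colors, which is why its size may differ from the image size.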
RGB image
This is another format for color images. It represents an image with three matrices of
sizes matching the image format. Each matrix corresponds to one of the colors red, green or
blue and gives an instruction of how much of each of these colors a certain pixel should use.
Multiframe image
In some applications we want to study a sequence of images. This is very common in
biological and medical imaging where you might study a sequence of slices of a cell. For these
cases, the multiframe format is a convenient way of working with a sequence of images. In
case you choose to work with biological imaging later on in this course, you may use this
format.
Fundamentals
A digital image is composed of pixels, which can be thought of as small dots on the
screen. A digital image is an instruction for how to color each pixel. We will see in detail later
on how this is done in practice. A typical size of an image is 512-by-512 pixels. Later on in the
course you will see that it is convenient to let the dimensions of the image be a power of 2,
for example 2^9 = 512. In the general case we say that an image is of size m-by-n if it is
composed of m pixels in the vertical direction and n pixels in the horizontal direction.
Let us say that we have an image in the format 512-by-1024 pixels. This means that the
data for the image must contain information about 524,288 pixels, which requires a lot of
memory! Hence, compressing images is essential for efficient image processing. You will later
on see how Fourier analysis and wavelet analysis can help us compress an image
significantly. There are also a few computer-science tricks (for example, entropy coding) to
reduce the amount of data required to store an image. There are many different data types, or
classes, that you can work with in the MATLAB software. You can build matrices and arrays
of floating-point and integer data, characters and strings, and logical true and false states.
Function handles connect your code with any MATLAB function regardless of the current
scope. Structures and cell arrays provide a way to store dissimilar types of data in the same
array. There are 15 fundamental classes in MATLAB. Each of these classes is in the form of a
matrix or array. With the exception of function handles, this matrix or array is a minimum of 0-
by-0 in size and can grow to an n-dimensional array of any size. A function handle is always
scalar (1-by-1).
Numeric classes in the MATLAB software include signed and unsigned integers, and
single- and double-precision floating-point numbers. By default, MATLAB stores all numeric
values as double-precision floating point. (You cannot change the default type and precision.)
You can choose to store any number, or array of numbers, as integers or as single-precision.
Integer and single-precision arrays offer more memory-efficient storage than double-precision.
All numeric types support basic array operations, such as subscripting, reshaping, and
mathematical operations.
How to display an image in MATLAB
Here are a couple of basic MATLAB commands (which do not require any toolbox) for
displaying an image.
Displaying an image given on matrix form
Sometimes your image may not be displayed in gray scale even though you might have
converted it into a gray-scale image. You can then use the command colormap(gray) to
force MATLAB to use a gray scale when displaying an image. If you are using MATLAB
with an Image Processing Toolbox installed, I recommend using the command imshow to
display an image.
Operation: Display an image represented as the matrix X. MATLAB command: imagesc(X)
Operation: Adjust the brightness; s is a parameter such that -1 < s < 1.
FEASIBILITY STUDY
Technology and system feasibility
The assessment is based on an outline design of system requirements in terms of input,
processes, output, fields, programs, and procedures. This can be quantified in terms of
volumes of data, trends, frequency of updating, etc., in order to estimate whether the new
system will perform adequately or not. Technological feasibility is carried out to determine
whether the company has the capability, in terms of software, hardware, personnel and
expertise, to handle the completion of the project.
Economic feasibility
Economic analysis is the most frequently used method for evaluating the effectiveness
of a new system. More commonly known as cost/benefit analysis, the procedure is to
determine the benefits and savings that are expected from a candidate system and compare
them with costs. If benefits outweigh costs, then the decision is made to design and implement
the system. An entrepreneur must accurately weigh the cost versus benefits before taking an
action.
Cost Based Study: It is important to identify cost and benefit factors, which can be categorized
as follows: 1. Development costs; and 2. Operating costs. This is an analysis of the costs to be
incurred in the system and the benefits derivable out of the system.
Time Based Study: This is an analysis of the time required to achieve a return on
investment and of the benefits derived from the system. The future value of the project is also a
factor. As per the cost-based study, this system requires the design and implementation
environment listed below:
.NET
MS-Office Access
Legal Feasibility
Legal feasibility determines whether the proposed system conflicts with legal requirements; for example, a data
processing system must comply with the local software protection acts. This system satisfies
all the legal requirements and also complies with the local data protection act.
Operational Feasibility
Operational feasibility is a measure of how well a proposed system solves the problems, takes advantage
of the opportunities identified during scope definition, and satisfies the requirements
identified in the requirements analysis phase of system development. This system operates well
in the running environment and runs as per the definition provided in the system definition.
Schedule Feasibility
A project will fail if it takes too long to complete before it becomes useful. Typically this
means estimating how long the system will take to develop and whether it can be completed in a
given time period, using methods like the payback period. Schedule feasibility is a measure
of how reasonable the project timetable is. Given our technical expertise, are the project
deadlines reasonable? Some projects are initiated with specific deadlines.
CHAPTER 5
PROJECT DESCRIPTION
5.1 INTRODUCTION
In this chapter, a region of interest based compression scheme is proposed. In the first
section, Region of Interest (ROI) is described with examples. The literature is thoroughly
surveyed for ROI compression schemes. The chapter explores the idea, process, experiments
and results for the proposed scheme Region of Interest Image Coding (RIC).
5.1.1 REGION OF INTEREST
The majority of present compression techniques compress the entire image. However, most
medical images contain large backgrounds (up to 50% or more of the image size) which are
not used in the diagnosis. Only a small region is diagnostically relevant, while the remaining
area is much less important. The proposed approach is to compress the
important region strictly losslessly, and to compress the remaining regions of the image with
some loss, thus yielding an overall high compression ratio.
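The benefit of this mixed lossless/lossy strategy can be seen with hypothetical numbers (the ratios below are illustrative, not measured results from this work):

```python
def overall_ratio(roi_fraction, roi_ratio, bg_ratio):
    """Overall compression ratio when the ROI and the background
    are coded at different ratios (sizes measured in pixels)."""
    compressed = roi_fraction / roi_ratio + (1 - roi_fraction) / bg_ratio
    return 1 / compressed

# e.g. a 10% ROI coded losslessly at 2:1 and a 90% background coded
# lossily at 20:1 give an overall ratio of about 10.5:1.
```

So even a modest lossless ratio on the ROI leaves the overall ratio close to the lossy background ratio, because the background dominates the image area.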
In region-based encoding, the image is first segmented, or divided, into spatial regions.
This segmentation can be done by identifying regions with different gray-scale
characteristics, either automatically or manually. Diagnostically important regions are called the Region
of Interest (ROI). Fig. 4.1 shows an image with its ROI and non-ROI. Medical image
compression is divided into three applications: compression before primary diagnosis (for rapid
transmission), compression after primary diagnosis (for long-term archiving) and compression
for database browsing (progressive transmission). The research motivation is basically to
centralize processing and storage of medical data in the radiological department of a hospital,
which requires compression after primary diagnosis. This choice makes it feasible for a
radiologist or a doctor to segment the ROI manually at the primary diagnosis stage.
Figure 4.1 An Image Indicating ROI and NOT ROI
The proposed scheme, called Region of Interest Coding (RIC), is shown in Fig. 4.2. In the
scheme, the ROI is extracted from the original image. The ROI can have an arbitrary polygonal
shape. The ROI is compressed losslessly by Huffman coding. Run-length coding is used to compress
the large consecutive zeros and ones in the Region of Interest Window (ROIW).
Figure 4.2 Region of Interest Coding for Medical Images
The ROI is user defined, and the performance of the proposed scheme also depends on the
area of the selected ROI. The compression ratio is inversely related to the percentage size of the ROI
with respect to the original image size. For this purpose, the size of the ROI as a percentage of the
original image area is also calculated. Images are preprocessed and divided into blocks. Block
sizes of 4×4 and 8×8 are used. These sizes are chosen for minimum perceptual
ambiguity. As the block size is increased, for example to 12×12, blocky artifacts and loss of
perceptual quality are observed. With smaller block sizes like 2×2, the code book becomes
large, which leads to smaller compression ratios.
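Run-length coding of the long runs of zeros and ones in the binary ROI window can be sketched as follows (Python; the helper names are ours, not from the proposed implementation):

```python
def rle_encode(bits):
    """Encode a binary sequence as (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the bit sequence."""
    return [v for v, n in runs for _ in range(n)]
```

A mostly-zero mask row such as [0, 0, 0, 1, 1, 0] collapses into the three pairs (0, 3), (1, 2), (0, 1), which is why the scheme is effective on large, mostly empty ROI windows.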
5.2 SYSTEM ARCHITECTURE
The ROI is compressed losslessly and therefore its quality is not questionable. Subjective tests
were also conducted for the reconstruction quality of the proposed method. A limitation of RIC is
the manual selection of a single ROI; multiple ROIs selected automatically may be included in future
prospects of the research. Medical images are large, data-rich files; therefore they require a
compression scheme with not only a higher compression ratio but lossless diagnostic quality as
well. Hence there is a need for a lossless compression scheme with higher compression ratios
in the lossless scenario.
Figure 4.3: Block Diagram of Lossless Image Compression
5.2.1 MODULES
Get the Image Frames
Edge Masking Generation
Generate the Intensity mask
Noise Removal
ROI detection
ROI Masking
Non-ROI Masking
Context Modeling
5.2.2 MODULE DESCRIPTION
Get the Image Frames
In this module, the images are stored in a particular folder; they are then read from the
folder and displayed in different frames using a pushbutton.
Edge Masking Generation
Edges are often associated with the boundaries of objects in a scene. In this module,
edge detection is used to identify the edges in an image.
Generate the Intensity mask
In this module, morphological reconstruction of the marker image under the mask image is
performed. Marker and mask can be two intensity images or two binary images of the same
size; the returned image IM is, correspondingly, an intensity or binary image.
Noise Removal
In this module, noise is removed. Noise is the result of errors in the image acquisition process
that produce pixel values which do not reflect the true intensities of the real scene. Noise can also be the result of damage to the film, or be introduced by the scanner itself.
ROI detection
A region of interest (ROI) is a portion of an image on which you want to filter or perform
some other operation. You define an ROI by creating a binary mask: a binary image of the
same size as the image you want to process, whose nonzero pixels define the ROI.
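For a simple rectangular ROI, such a binary mask can be sketched as follows (Python for illustration; a polygonal ROI would use the same idea with a point-in-polygon test instead of the bounds check):

```python
def rect_roi_mask(height, width, top, left, bottom, right):
    """Binary mask of the image size: 1 inside the ROI, 0 outside."""
    return [[1 if top <= y < bottom and left <= x < right else 0
             for x in range(width)]
            for y in range(height)]

# A 4x5 image with a 2x3 rectangular ROI in the middle:
mask = rect_roi_mask(4, 5, top=1, left=1, bottom=3, right=4)
```

Multiplying the image by this mask (element-wise) isolates the ROI pixels; the complementary mask isolates the non-ROI region.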
Non-ROI Masking
In this module, the non-ROI mask is generated: the ROI specifies only the objects of
interest, whereas the non-ROI specifies unwanted noise and background. The regions can be
geographic in nature, such as polygons that encompass contiguous pixels, or they can be
defined by a range of intensities.
Context Modeling
In this module, the given prediction residual is encoded by switching between
different probability models according to the already encoded neighboring symbols of the symbol
to be encoded.
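The switching idea can be sketched as follows (Python; the gradient quantizer and its thresholds are illustrative, not the exact JPEG-LS context model): quantize local gradients of already-encoded neighbors into a small context index, and keep a separate adaptive symbol-frequency table per context.

```python
from collections import defaultdict

def context_index(left, above, upper_left, thresholds=(2, 8)):
    """Quantize local gradients of encoded neighbors into a context id."""
    def q(g):
        sign = 1 if g >= 0 else -1
        g = abs(g)
        for level, t in enumerate(thresholds):
            if g < t:
                return sign * level
        return sign * len(thresholds)
    return (q(left - upper_left), q(upper_left - above))

# One adaptive frequency table per context; an entropy coder would
# turn these counts into the probability model for the next residual.
models = defaultdict(lambda: defaultdict(int))

def update(ctx, residual):
    models[ctx][residual] += 1
```

Flat neighborhoods and edge neighborhoods thus land in different contexts, so each probability model stays sharply peaked and the residuals code more compactly.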
CHAPTER 6
RESULT & IMPLEMENTATION
6.1 RESULT
Image Frames
Figure 6.1: Image Frames
Edge Masking Generation
Figure 6.2: Edge Masking Generation
Generate the Intensity mask
Figure 6.3: Intensity mask
Noise Removal
Figure 6.4: Noise Removal
ROI Detection
Figure 6.5: ROI Detection
ROI and Non-ROI Masking
Figure 6.6: ROI and Non-ROI Masking
Context Modeling
Figure 6.7: Context Modeling
CHAPTER 7
CONCLUSION AND FUTURE SCOPE
7.1 CONCLUSION
Medical images are very important for diagnostics and therapy. However, digital
imaging generates large amounts of data, which need to be compressed, without loss of relevant
information, to economize storage space and allow speedy transfer. In this research three
techniques are implemented for medical image compression, which provide high compression
ratios with no loss of diagnostic quality. The proposed techniques are Region of Interest
Coding (RIC), Difference Image Coding (DIC) and Similar Image Coding (SIC); RIC is an
ROI-based coding scheme.
In Region of Interest Coding, the ROI is selected from an image and compressed losslessly,
whereas the background region is compressed by a lossy method. An ROI window is also
compressed via run-length coding to locate the ROI after decompression. For this method,
compression ratio, PSNR and subjective quality tests are conducted and a comparison is
performed with the lossy JPEG compression technique. The compression ratio of the proposed
technique RIC is at least twice that of JPEG. As a result, the storage
requirement and transmission times are halved.
A generic code book is designed to train the NNVQ. Results show that it has better
compression ratios than JPEG. The two quality indices used for quantitative assessment also
show that the image quality in the non-ROI region is better than in images compressed by lossy
JPEG. The ROI is compressed losslessly and therefore its quality is not questionable. Subjective
tests were also conducted for the reconstruction quality of the proposed method. A limitation of
RIC is the manual selection of a single ROI; multiple ROIs selected automatically may be
included in future prospects of the research.
The proposed techniques perform well in terms of compression ratio and reconstruction
quality. This makes the proposed methods good candidates for compression of medical
images. Radiologists and doctors can use these methods for diagnosis and to keep records
for future reference. Diagnostic centers and radiology departments of hospitals can use these schemes for their image management and storage.
7.2 FUTURE SCOPE
We have used existing methods for compression in new ways, but there are also new areas
for research. We have done experiments with a single ROI under the consideration
that the size of the ROI should be less than that of the original image. There is a possibility of
using multiple ROIs and comparing the performance with the single-ROI case.
The selection of the ROI is manual in our research. Automatic selection of the ROI has been
done by segmentation of different gray-scale areas, which often resulted in misdiagnosis.
There is scope for research into better segmentation methods for ROI
selection that the medical community will trust. In SIC the reference image is very important. It
is chosen automatically on the basis of Euclidean distance or the cross-correlation coefficient. This
requires an iterative calculation for each image within the set; the process is very cumbersome
and there is a need to optimize it. An image with large cross-correlation coefficients
and concentrated scatter plots with the other images is a good choice. The more similar the images,
the smaller the difference, which leads to better compression performance.
The proposed schemes are hybrids of lossless and lossy compression schemes. We have used
VQ for its decoder simplicity and Huffman coding for ease of implementation. However, other
combinations can also be tried, and the performance in terms of compression ratio vs. complexity
can be evaluated.
We have used SOFM to generate code books. A comparative study can be conducted
for code book generation by the backpropagation and Hebbian algorithms. The results will
improve if the number of training vectors presented to the NNVQ and the number of epochs used
to train it are increased. It is assumed that, once trained, code books are ready to use for any
subsequent test data.
REFERENCES
1. Alfred Bruckmann, Andreas, "Selective Medical Image Compression Techniques for Telemedical and Archiving Applications", Image Processing, vol. 9, no. 8, 2000.
2. Anil K. Jain, "Fundamentals of Digital Image Processing", Pearson Education, 2007.
3. Annadurai, "Fundamentals of Digital Image Processing".
4. M. A. Ansari and R. S. Ananda, "Context based medical image compression for ultrasound images with contextual set partitioning in hierarchical trees algorithm", Adv. Eng. Softw., vol. 40, no. 7, pp. 487-496, Jul. 2009.
5. M. Akter, M. B. I. Reaz, F. Mohd-Yasin, and F. Choong, "A modified set partitioning in hierarchical trees algorithm for real-time image compression", J. Commun. Technol. Electron., vol. 53, no. 6, pp. 642-650, Jun. 2008.
6. Bernd Jähne, "Digital Image Processing", Pearson Education, 2nd Edition, 2009.
7. R. Calderbank, I. Daubechies, W. Sweldens, and B. L. Yeo, "Lossless image compression using integer to integer wavelet transforms", in Proc. ICIP-97, IEEE International Conference on Image Processing, vol. 1, pp. 596-599, Santa Barbara, California, Oct. 1997.
8. A. A. Kassim, N. Yan, and D. Zonoobi, "Wavelet packet transform basis selection method for set partitioning in hierarchical trees", J. Electron. Imag., vol. 17, no. 3, p. 033007, Jul. 2008.
9. A. Skodras, C. Christopoulos, and T. Ebrahimi, "The JPEG 2000 Still Image Compression Standard", IEEE Signal Processing Magazine, pp. 36-58, September 2001.
10. D. A. Karras, S. A. Karkanis and D. E. Maroulis, "Efficient Image Compression of Medical Images Using the Wavelet Transform and Fuzzy c-means Clustering on Regions of Interest", University of Piraeus, Dept. of Business Administration, Rodu 2, Ano Iliupolis, Athens, 2002.
11. F. Sepehrband, M. Mortazavi, S. Ghorshi, "Efficient DPCM Predictor for Hardware Implementation of Lossless Medical Brain CT Image Compression", 7th IEEE International Conference on Signals and Electronic Systems (ICSES '10), Gliwice, Poland, September 2010.
12. Gonzalez, "Digital Image Processing", Pearson Education India, 3rd Edition.
13. http://searchcio-midmarket.techtarget.com/definition/image-compression
14. http://www.amazon.com/Digital-Compression-Techniques-Tutorial-Engineering/dp/0819406481#reader_0819406481
15. http://www.pvamu.edu/pages/2819.asp
16. http://www.rimtengg.com/coit2007/proceedings/pdfs/43.pdf
17. Jonathan Taquet and Claude Labit, "Hierarchical Oriented Predictions for Resolution Scalable Lossless and Near-Lossless Compression of CT and MRI Biomedical Images", IEEE Transactions on Image Processing, 2012.
18. Lihong Zhao, Yanan Tian, Yonggang Sha, Jinghua Li, "Medical image lossless compression based on combining an integer wavelet transform with DPCM", Electr. Electron. Eng., Springer, China, 2009.
19. R. C. Gonzalez, R. E. Woods, "Digital Image Processing", pp. 525-626, Pearson Prentice Hall, Upper Saddle River, New Jersey, 2008.
20. R. Sumalatha, M. V. Subramanyam, "Region based Coding of 3D Magnetic Resonance Images for Telemedicine Applications", International Journal of Computer Applications (0975-8887), vol. 5, no. 12, August 2010.
21. F. W. Wheeler and W. A. Pearlman, "SPIHT image compression without lists", in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Istanbul, Turkey, Jun. 2000, pp. 2047-2050.
22. M. J. Weinberger, G. Seroussi, and G. Sapiro, "The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS", IEEE Trans. Image Processing, pp. 1309-1324, August 2000.
23. J. Weinberger, "The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS", IEEE Transactions on Image Processing, vol. 9, no. 8, August 2000.