
Journal of Analysis and Computation (JAC) (An International Peer Reviewed Journal), www.ijaconline.com, ISSN 0973-2861

Volume XI, Issue I, Jan–December 2018

A NEOTERIC HYBRID FIREFLY ALGORITHM AND COMBINED TREE DATA STRUCTURE FOR THE PURSUIT OF ACCURATE IMAGE COMPRESSION

Ruhiat Sultana 1, Syed Abdul Sattar 2

1 Research Scholar, Rayalaseema University, Kurnool, Andhra Pradesh, India
2 Principal, Nawab Shah Alam Khan Engineering College, Hyderabad, Telangana, India

    ABSTRACT:

This paper proposes a hybrid firefly clustering algorithm with a dual-tree data structure to address the low image quality, low compression ratio, and high compression time that occur during lossless image compression. The proposed approach has three phases that precede compression: segmentation, feature extraction, and classification. Segmentation is performed with the firefly clustering algorithm, features are extracted with texture-based techniques, and the extracted features are classified with a decision-tree classifier. Compression and encoding are then performed using a quad-tree. Because exact feature values are extracted, classified, and compressed, the method yields better compression results than previous approaches. The proposed technique is implemented in MATLAB, and the experimental results demonstrate its effectiveness in terms of compression ratio and reconstruction quality when compared with existing techniques.

    Keywords: medical imaging, information system, firefly clustering, quad-tree.

    [1] INTRODUCTION

With the important advances in multimedia and networking, including telemedicine applications, the amount of information to store and transmit has increased dramatically over the last decade. To overcome the bandwidth limitations of transmission channels and storage systems, data compression is a useful tool [1]. Image compression may

    be lossy or lossless. Lossy compression is the class of data encoding methods that uses inexact

    approximations and partial data discarding to represent the content. These techniques are used to

    reduce data size for storage, handling, and transmitting content. Lossy compression is most

    commonly used to compress multimedia data (audio, video, and images), especially in

    applications such as streaming media and internet telephony [2]. By contrast, lossless

    compression is typically required for text and data files, such as bank records and text articles.

    Lossless compression is a class of data compression algorithms that allows the original data to



be perfectly reconstructed from the compressed data. Lossless compression is used in cases where it is important that the original and the decompressed data be identical [3]. Lossless compression is preferred for archival purposes and is often used for medical imaging, technical drawings, clip art, and comics. The best image quality is associated with a better compression ratio and a low noise ratio. Enhanced image quality is the main goal of image compression; however, image compression schemes have other important properties as well: scalability, region-of-interest coding, meta-information, and processing power [4].

Image compression has become a vital practical issue because image-based representations are typically image intensive. Rendering techniques fall into three major categories: rendering with no geometry, rendering with implicit geometry, and rendering with explicit geometry [5]. Image compression is a critical tool that reduces the burden of storage and transmission; its main difficulty is improving the compression rate while reducing computation time significantly. Compression work has been traditionally

    carried out in the image and video communities, and many algorithms and techniques have been

proposed in the literature to achieve high compression ratios [6]. LOCO-I (LOw COmplexity LOssless COmpression for Images) is an efficient compression algorithm for continuous-tone lossless images that integrates the simplicity of Huffman coding with the compression potential of context models. The algorithm relies on a simple fixed context model, which approaches the capability of complex universal context-modeling techniques in capturing high-order dependencies [7]. Selective encryption and modified entropy coders with multiple statistical models have been used to perform both encryption and compression. Another approach employing multiple statistical models transforms the entropy coders into an encoded format. It has been shown that security is obtained without giving up compression performance or computational speed [8].

For the compression of lossless images, an effective information-hiding technique was presented, in which information is hidden secretly using index-modifying and side-match vector-quantization techniques and the encoded information is extracted at the decoder side [9]. A new coding scheme was designed for transmitting image data under the constraints of cloud gaming; results show that this approach extends mobile battery life while preserving an acceptable quality of the transmitted image [10]. A novel lossless color image compression scheme was presented based on the reversible color transform (RCT) and the Burrows–Wheeler compression algorithm (BWCA). The method combines RCT with a bi-level BWT, which leads to better compression by exploiting the redundancy in the grey levels introduced by the YUV color space [11]. Another lossless compression technique was presented to overcome the drawbacks in the real-time transmission of aurora spectral images. This method decorrelates the spatial and spectral domains


bi-dimensionally and effectively eliminates the side information of recursively computed coefficients to obtain high-quality rapid compression [12].

With the advent of new technology, image data now exists in a variety of formats. This change in technology and the existence of different formats produce high-resolution images, which require more memory for storage. To solve this problem, a lossless image-processing technique was introduced based on Haar wavelet and vector transform techniques [13]. An innovative lossless compression scheme has been discussed for 3D medical images: after pre-processing, the image is encoded using the embedded zerotree wavelet technique [14]. High capacity and high image quality are prominent research topics in data hiding and compression. An adaptive image steganography using AMBTC compression and interpolation (ASAI), which combines absolute moment block truncation coding compression with an interpolation technique, was presented to improve the performance of data-hiding schemes; it achieves a high embedding capacity with low computational complexity and good image quality [15]. Another algorithm was developed to obtain large-capacity image steganography. In this approach, a halftoning algorithm transforms the gray-scale scanned document into a binary image, which is a sparse matrix. An algorithm then reads the halftone image and converts each bit-stream of the sparse matrix into meaningful decimal numbers, which are embedded in the 3 LSB bits of concealable pixels. The concealable pixels of the stego image are filtered, and the quality of the hidden image is preserved using the standard deviation [16].

    [2] RELATED WORK

This section provides an overview of the lossless image compression techniques available in the literature. Several algorithms and techniques have been proposed in the last decade, but they differ considerably with respect to the datasets used, segmentation objectives, and validation. A summary of the different approaches to lossless image compression and their features is presented below.

Nanrun Zhou et al. [17] discussed an efficient image compression–encryption scheme based on a hyper-chaotic system and 2D compressive sensing, designed to reduce the transmission burden. Most existing image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encryption data expansion when adopting nonlinear transformations directly. To overcome this issue, the original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cyclic shift operation controlled by a hyper-chaotic system. The cyclic shift operation changes the values of



the pixels efficiently. The presented cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simultaneously simplifies key distribution. Simulation results verify the validity and reliability of the algorithm, with acceptable compression and security performance.

Venugopal et al. [18] presented a block-based lossless image compression algorithm using the Hadamard transform and Huffman encoding, a simple algorithm with low complexity. Medical images play a significant role in the diagnosis of diseases and require a simple and efficient compression technique. In this algorithm, the input image is first decomposed by an integer wavelet transform (IWT), and the LL sub-band is transformed by a lossless Hadamard transformation (LHT) to eliminate the correlation inside each block. DC prediction (DCP) is then used to remove correlation between adjacent blocks. The non-LL sub-bands are validated as non-transformed blocks (NTB) based on a threshold. The main contributions of this method are a simple DCP, effective NTB validation, and truncation. Based on the result of the NTB test, encoding is done either directly or after transformation by LHT and truncation. Finally, all coefficients are encoded with a Huffman encoder. Simulation results show that the algorithm yields better compression ratios than existing lossless compression algorithms such as JPEG2000. Notably, the algorithm was tested on standard non-medical images as well as a set of medical images, provided optimum compression ratios, and was quite efficient.

Jinlei Zhang et al. [19] discussed a novel distributed coding technique for hyperspectral images, whose important requirements are lossless compression, progressive transmission, and low-complexity onboard processing. The decoder reconstructs the spectral image efficiently because each individual image is compressed in slices. An adaptive region-based prediction algorithm is designed to eliminate the spatial and spectral redundancies of the images. This technique achieves accurate compression performance and low encoding complexity by exploiting spatial and spectral correlation simultaneously at the decoder side.

Seyun Kim and Nam Ik Cho [20] developed a novel compression algorithm for lossless color images. The algorithm is based on hierarchical prediction, which uses the upper, lower, and left pixels for pixel prediction, and on context-adaptive arithmetic coding, in which the prediction error is modeled by the context model and arithmetic coding is applied to the predicted error signal. Before compression, the given RGB image is transformed into a YCC image, after which a grayscale image compression method is applied for encoding. This algorithm reduces the bit rate compared with conventional JPEG images.

Atef Masmoudi et al. [21] designed a new geometric finite-mixture-model-based adaptive arithmetic coding (AAC) for lossless image compression. When AAC is applied to image compression, large compression gains can be achieved only with sophisticated models that provide accurate probabilistic descriptions of the image. In this work, the residual image is divided into non-overlapping blocks, and the statistics of each block are modeled by a mixture of geometric distributions whose parameters are estimated by maximum likelihood using the expectation–maximization algorithm. Moreover, a histogram tail-truncation method within each predicted-error block is used to


reduce the number of symbols in the arithmetic coding and therefore the effect of zero-occurrence symbols. Experiments showed that, with a convenient block size and number of mixture components used in conjunction with the median edge detector prediction technique, the method outperforms well-known lossless image compressors.

    [3] FIREFLY-CLUSTERING WITH BI-FOLD TREE DS

The purpose of image compression is to minimize the size of an image while maintaining good quality in the reconstructed image. The major issues in image compression are the decrease in image quality and compression ratio and the increase in compression time. To overcome these issues, a hybrid firefly clustering algorithm with a dual-tree data structure is proposed. There are three phases in the proposed approach. In the first phase, image segmentation is done using the firefly clustering algorithm. The second phase is feature extraction, which is done by texture-based techniques; the extracted features are classified with a decision-tree classifier. The third phase is the compression process, performed using a quad-tree. Encoding is carried out as part of the compression process to protect important images prior to their transmission to the recipients. Encoding and decoding are based on the Huffman technique, which makes the compression more secure.

[3.1] OPTIMAL IMAGE SEGMENTATION VIA FIREFLY CLUSTERING ALGORITHM

Segmentation is an important phase in image processing: the image is divided into several parts that contain information important to the user, and it is used to extract information from the image. Here, clustering is used to segment the image, grouping pixels with similar values and consistent characteristics. Clustering is the part of data mining that groups data into a given number of clusters.

In this paper we use a new hybrid of the firefly algorithm with k-means clustering for segmentation. The firefly algorithm has two components: the variation of light intensity and the formulation of attractiveness. Attractiveness depends on the brightness of a firefly, and brightness in turn is defined by the objective function.

The light intensity $I(r)$ varies with distance $r$ monotonically and exponentially:

$$I(r) = I_0 e^{-\gamma r^2} \qquad (1)$$

where $I_0$ is the initial light intensity and $\gamma$ is the light absorption coefficient. The attractiveness is

$$\beta(r) = \beta_0 e^{-\gamma r^2} \qquad (2)$$

where $\beta_0$ is the attractiveness at $r = 0$.



The Cartesian distance between two fireflies $a$ and $b$ at positions $x_a$ and $x_b$ is

$$r_{ab} = \lVert x_a - x_b \rVert = \sqrt{\sum_{k=1}^{D} (x_{a,k} - x_{b,k})^2} \qquad (3)$$

In the firefly process, the less bright firefly $a$ moves in the direction of the brighter firefly $b$. The movement is represented by

$$x_a \leftarrow x_a + \beta_0 e^{-\gamma r_{ab}^2}(x_b - x_a) + \alpha\,\epsilon_a \qquad (4)$$

where $\alpha$ weights a random perturbation vector $\epsilon_a$.

The first step of the firefly algorithm is the initialization of the firefly population. The population size determines the number of candidate solutions, and each firefly's light intensity encodes the quality of its solution. The distance between fireflies is the Cartesian distance, and the attractiveness function is defined from the light intensity and the absorption coefficient.
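As a minimal MATLAB sketch, one attraction step implementing Eqs. (2)–(4) could look like the following; the function name and the example parameter values are illustrative assumptions, not values from the paper:

    function xa = firefly_move(xa, xb, beta0, gamma, alpha)
    % Move the dimmer firefly a toward the brighter firefly b (Eq. 4).
    % xa, xb are 1-by-D position vectors (candidate solutions).
    r2   = sum((xa - xb).^2);                 % squared Cartesian distance, Eq. (3)
    beta = beta0 * exp(-gamma * r2);          % attractiveness, Eq. (2)
    xa   = xa + beta*(xb - xa) + alpha*(rand(size(xa)) - 0.5);  % attraction + random step
    end

For example, firefly_move(xa, xb, 1.0, 1.0, 0.2) performs one attraction step with typical parameter settings.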

K-means is used to partition the data into k clusters. Initially, the centroids are selected randomly. Each data point is then assigned to the cluster whose centroid is nearest, and the centroids are recomputed; these two steps repeat until convergence. In segmentation, similar pixels are thereby grouped together into clusters. The initial centroids determine much of the efficiency and performance of the k-means algorithm, and each time the algorithm is executed they are generated arbitrarily.

The hybrid algorithm has three steps (Figure 1):

    1. Initialization

    2. Cluster assignment

    3. Exploration and evaluation


Figure 1: Hybrid firefly clustering algorithm (flowchart: Start → Initialization (step 1) → Cluster assignment → Centroid update (step 2) → Exploration (step 3) → termination criteria attained? If no, return to cluster assignment; if yes, End)

Let the solution space $S$ contain a determinate number of fireflies $\{x_1, \dots, x_N\}$, where $N$ is the number of fireflies and $K$ is the number of clusters. The search space contains various attributes, and its dimension is denoted by $D$. The centroids are computed incrementally from the start to the end of execution so as to reach efficient centroids at each iteration. To obtain the best configuration of centroids, let cluster $j$ have centroid $c_j$; the weight matrix $W = [w_{ij}]$ is then defined as

$$w_{ij} = \begin{cases} 1 & \text{if data point } x_i \text{ belongs to cluster } j \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$

The formula to estimate a centroid is

$$c_j = \frac{\sum_{i=1}^{N} w_{ij}\, x_i}{\sum_{i=1}^{N} w_{ij}} \qquad (6)$$

The objective function is the Euclidean distance, which is minimized. The objective function for the firefly clustering is

$$J = \sum_{j=1}^{K} \sum_{i=1}^{N} w_{ij}\, \lVert x_i - c_j \rVert^2 \qquad (7)$$

The clustering matrix is defined by

$$W = [w_{ij}] \in \{0,1\}^{N \times K} \qquad (8)$$

When the clustering matrix is optimized, every data point is at minimum distance from its centroid.
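A sketch of one assignment-and-update step of Eqs. (5)–(7) in MATLAB, assuming X is an N-by-D matrix of pixel feature vectors and C is a K-by-D matrix of centroids encoded by one firefly (pdist2 is from the Statistics and Machine Learning Toolbox):

    function [J, C, idx] = cluster_step(X, C)
    % One k-means style step used inside the hybrid firefly clustering.
    K  = size(C, 1);
    D2 = pdist2(X, C).^2;              % squared distance of every point to every centroid
    [dmin, idx] = min(D2, [], 2);      % Eq. (5): w_ij = 1 for the nearest centroid only
    for j = 1:K                        % Eq. (6): recompute each centroid
        if any(idx == j)
            C(j,:) = mean(X(idx == j, :), 1);
        end
    end
    J = sum(dmin);                     % Eq. (7): objective to be minimized
    end

In the hybrid scheme, each firefly encodes one candidate set of centroids; its brightness can be taken as the reciprocal of J, so brighter fireflies correspond to tighter clusterings.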



    [3.2] FEATURE EXTRACTION BY TEXTURE BASED TECHNIQUES

The segmented image from the previous step is processed by a texture-based technique to extract features. Here, a Gabor wavelet filter is employed for feature extraction [23]. This texture-based technique extracts a feature vector from the segmented region of interest. The Gabor representation resembles the human visual system, specifically in its representation of frequency and orientation: the image is decomposed into many filtered images, each limited to a band of frequencies and orientations in which the intensity varies, which makes it well suited to distinguishing textures. The conversion can be regarded as a wavelet transform whose mother wavelet is the Gabor function. The segmented image is given as input to the Gabor filter for feature extraction, after which the transformed imaginary and real parts are combined. To reduce the size of the transformed image, only the texture features characterizing the image are retained. This procedure can be applied to both the query image and the database image. The output of the Gabor filter is the set of accurately extracted features. The equation for the Gabor filter is given by

    is the accurately extracted features. The equation for Gabor filter is given by

    pxbw

    y

    bw

    xExpyxg

    2cos*

    *2*

    *2),(

    2

    22

    2

    2

    Where, x and y are the gray image of x and y, bw is the bandwidth value, p is the phase value.
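A minimal sketch of the Gabor feature extraction in MATLAB (Image Processing Toolbox); the file name, wavelengths, and orientations are illustrative assumptions:

    I = im2single(imread('segmented_region.png'));   % hypothetical segmented input
    if size(I,3) == 3, I = rgb2gray(I); end
    g   = gabor([4 8 16], [0 45 90 135]);            % bank: 3 wavelengths x 4 orientations
    mag = imgaborfilt(I, g);                         % magnitude response of each filter
    feat = zeros(1, 2*numel(g));                     % texture feature vector
    for k = 1:numel(g)
        m = mag(:,:,k);
        feat(2*k-1) = mean(m(:));                    % mean response of filter k
        feat(2*k)   = std(m(:));                     % spread of the response
    end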

[3.3] FEATURE CLASSIFICATION USING DECISION TREE CLASSIFIER

The extracted features are given as input to the decision-tree classifier for classification of the feature values. The main advantage of a decision tree is that it runs quickly, reducing classification time. A decision tree is a set of simple rules, and it is non-parametric because it needs no assumptions about the distribution of the variables in each group. In the first step, the feature set is partitioned into two parts, with the feature value of highest importance taken into consideration; this process is repeated for each subset until no more splitting is possible. After each decision, the next feature is found that splits the data optimally into two parts. All non-terminal nodes contain splits. Followed from the root to a leaf node, a decision tree is thus a rule-based classifier. An advantage of decision-tree classifiers is their simple structure, which allows interpretation and visualization. The decision tree is built from a training set of objects, each described by a set of attributes and a class label. Attributes are a collection of properties containing all the information about one object; unlike classes, each attribute may have either ordered or unordered values. The class is associated with the leaf, and the output is obtained from the tree. A tree misclassifies an image if the class label it assigns differs from the image's true class. The proportion of images correctly partitioned by the tree is called accuracy, and the proportion incorrectly partitioned is called error. Here the features extracted in the previous phase are classified to find the optimal feature set [24]. The output of the decision tree is the optimal feature value.
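A sketch of this classification step with MATLAB's fitctree (Statistics and Machine Learning Toolbox), assuming F is a matrix of Gabor feature rows with a numeric label vector Y, and Fnew/Ytrue are a hypothetical test set:

    tree = fitctree(F, Y);             % grow the rule-based decision tree
    view(tree, 'Mode', 'text');        % inspect the learned split rules
    Yhat = predict(tree, Fnew);        % classify unseen feature vectors
    acc  = mean(Yhat == Ytrue);        % accuracy: proportion correctly classified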

[3.4] ACCURATE IMAGE COMPRESSION & ENCODING USING QUAD-TREE AND HUFFMAN ENCODING



The optimal feature value is finally compressed in this phase. Both compression and encoding are done using the Huffman technique after quad-tree decomposition. The optimal feature value found by the decision tree is given as the input to the quad-tree, and the output of the quad-tree stage is the compressed image. Encoding is also performed here by means of the Huffman technique, which is more secure than existing encoding techniques. Encoding is an effective method for protecting important images before transmission and reception: the image remains secure whether it is stored in secondary storage or transmitted over a network. The quad-tree data structure is used to represent the image. The motivation of this scheme is to combine image encoding and compression in one process. The flow of the compression process is shown below.

Figure 2: Compression process (input value → quad-tree decomposition → Huffman encoding → compression; Huffman decoding → decompressed image)

The quad-tree approach divides the optimal feature value into four equal-sized blocks, and various tests then check whether each block meets a criterion of homogeneity. If a block meets the criterion, it is not divided any further; otherwise it is subdivided and the test criterion is applied to the resulting sub-blocks. This process repeats iteratively until each block meets the criterion. The result may contain blocks of several different sizes.
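A sketch of this recursive homogeneity test using MATLAB's qtdecomp (Image Processing Toolbox); the input file name and the 0.2 split threshold are illustrative assumptions:

    I = im2double(imread('feature_image.png'));      % hypothetical input image
    if size(I,3) == 3, I = rgb2gray(I); end
    I = imresize(I, [256 256]);                      % qtdecomp requires power-of-two sides
    S = qtdecomp(I, 0.2);                            % split a block while its max-min range > 0.2
    leafSizes = full(S(S > 0));                      % blocks of several different sizes remain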

The Huffman encoding algorithm begins by building a list of all the alphabet symbols in descending order of their probabilities. It then constructs, from the bottom up, a binary tree with a symbol at each leaf. This is done in steps: at each step the two symbols with the smallest probabilities are selected, added to the top of the partial tree, deleted from the list, and replaced with an auxiliary symbol representing the two original symbols. When the list is reduced to a single auxiliary symbol (representing the entire alphabet), the tree is complete. The tree is then traversed to determine the code words of the symbols. Before compression begins, the encoder must determine the codes. This is done from the probabilities, or frequencies of occurrence, of the symbols in the images. The probabilities or frequencies must be written as side information on the output, so that any Huffman decoder will be able to decompress the data. This is straightforward when the frequencies are integers and the probabilities can be written as scaled integers, and it typically adds only a few hundred bytes to the output. It is also possible to write the variable-length codes themselves on the output, but this may be inconvenient because the codes have different sizes. Writing the Huffman tree on the output is possible as well, but this may require more space than simply the frequencies. In any case, the decoder must know what is at the start of the compressed file, read it, and reconstruct the Huffman tree for the alphabet. Decoding then starts at the root and reads the first bit of the input (the compressed file): if it is 0, the bottom edge of the tree is followed; if it is 1, the top edge. The next bit is read and another edge is followed toward the leaves of the tree. When the decoder arrives at a leaf, it finds there the original, uncompressed symbol, and that code is emitted by the decoder.
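A sketch of the Huffman stage using the Communications Toolbox functions huffmandict, huffmanenco, and huffmandeco, assuming v is the vector of quantized values produced by the quad-tree stage:

    v = v(:);
    symbols = unique(v);
    p = zeros(size(symbols));
    for k = 1:numel(symbols)
        p(k) = mean(v == symbols(k));    % empirical probability of each symbol
    end
    dict = huffmandict(symbols, p);      % build the Huffman tree / code words
    code = huffmanenco(v, dict);         % compressed bit stream
    % The dictionary (or the symbol frequencies) is the side information
    % that must accompany the stream so the decoder can rebuild the tree.
    vrec = huffmandeco(code, dict);
    assert(isequal(vrec(:), v))          % lossless: the input is recovered exactly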

Steps for encoding and compression:

Step 1: The image values are decomposed into minimum and maximum values based on the threshold value.

Step 2: The x and y positions, mean values, and block sizes from the quad-tree decomposition are recorded.

Step 3: Find the mean value.

Step 4: Encode the image values using the Huffman technique.

Step 5: Record the coding information.

Step 6: Compress the encoded values.

Step 7: Calculate the compression ratio and PSNR values.

[4] RESULTS

This section presents the performance of the proposed optimization clustering algorithm with the dual-tree data structure and the results obtained. The main objective of the proposed system is to compress the given medical image accurately and to overcome the problems that occur during image compression.

    Segmentation

During the segmentation process, the given input image is segmented using the combination of the firefly and clustering algorithms. The medical input image is shown below.

Figure 2: Input medical image. Figure 3: Lab image

Before segmentation, some initial steps are carried out. When the input image is fetched, it is transformed into a Lab image, where L represents lightness and a and b are the green–red and blue–yellow color opponents. The Lab color space covers a wider gamut than the RGB and CMYK color models. The input image transformed to the Lab model


is shown below. After the color transformation of the input image, labels are assigned before carrying out the segmentation process. The labelled image is shown below.

Figure 4: Label-assigned image. Figure 5: Edge image. Figure 6: Edge-segmented image
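As a small MATLAB sketch of the Lab color transformation described above (the file name is a hypothetical placeholder):

    rgb = imread('input_medical_image.png');  % hypothetical input file
    lab = rgb2lab(rgb);                       % L: lightness; a, b: color opponents
    L = lab(:,:,1);                           % lightness channel used for labeling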

    Segmentation

The image shown below demonstrates the computational efficiency of the firefly clustering algorithm. Segmentation is first attempted with the clustering technique alone; because of the local-optima problem, the firefly algorithm is then integrated. The firefly algorithm constantly selects efficient centroids throughout the search space using fireflies, and it successfully escapes local optima to reach the global optimum. An image-segment evaluation index such as the standard correlation coefficient is very effective for assessing the quality of segmentation results: a higher correlation coefficient signifies better segmentation, and it can be used to quantify the level of conformity between the images after segmentation. The segmented image produced by firefly clustering is shown below.

After segmentation, the next step is feature extraction. Here, texture-based features are extracted with the Gabor filter. The classified image after feature extraction is shown below.

Figure 8: LAB image

    COMPRESSION

During the compression process, some initial steps are carried out. First, a color transformation is applied to the classified image obtained from the decision tree; the transformed RGB color image is shown below. After the color transformation, down-sampling is performed. Down-sampling transforms the high-resolution image into a small image that retains all of the major information, and it is applied to the blue-channel image.

Figure 9: RGB channel

The down-sampled image is shown below.

Figure 10: Down-sampled image. Figure 11: DCT image

Next, the down-sampled image is organized into groups and the discrete cosine transform (DCT) is applied: the image is partitioned into blocks of pixels, and the DCT is applied to each block. The image after applying the DCT is shown above.
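A sketch of these pre-compression steps in MATLAB (blue-channel down-sampling followed by blockwise DCT); the file name, scale factor, and block size are illustrative assumptions:

    rgb = im2double(imread('classified_image.png'));  % hypothetical classified image
    B   = rgb(:,:,3);                                 % blue channel, as described above
    Bs  = imresize(B, 0.5);                           % down-sample to half resolution
    D   = blockproc(Bs, [8 8], @(b) dct2(b.data));    % DCT applied to each 8x8 block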

Finally, the image is compressed using the quad-tree and is encrypted before transmission. The compressed medical image, which is the output of the proposed system, is shown below.

Figure 12: Output image

    [4.1] COMPARISON RESULT

To evaluate the compression results, two measures are commonly applied: the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).
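For reference, the standard definitions are (with MAX the peak pixel value, e.g. 255 for 8-bit images, and MSE the mean squared error between the original image $I$ and the reconstruction $\hat{I}$):

$$\mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(I(i,j)-\hat{I}(i,j)\bigr)^2, \qquad \mathrm{CR} = \frac{\text{uncompressed size}}{\text{compressed size}}$$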


PSNR measures the quality of the reconstructed image; a higher PSNR indicates that the reconstruction is closer to the original. The PSNR of the proposed system compared with the existing FIC approach and the survey is shown below.

Figure 13: Comparison of PSNR with existing approaches

The compression ratio of the compressed image should be high in order to obtain a compact representation: compression eliminates redundant information in order to achieve a high compression ratio. The compression ratio of the proposed system compared with the existing FIC approach and the survey is shown below.

Figure 14: Comparison of compression ratio with existing approaches

[5] CONCLUSION

The proposed scheme is used to compress images. The objective of image compression is to reduce the irrelevance and redundancy of the image data so that the data can be stored or transmitted in an efficient form. Lossless compression is a class of data compression algorithms that allows the original data to be perfectly reconstructed from the compressed data. The major issues in image compression are the decrease in image quality and compression ratio and the increase in compression time. To overcome these issues, a hybrid firefly clustering algorithm with a dual-tree data structure is proposed. In the first phase, the image is segmented using the firefly clustering algorithm. In the second phase, features are extracted using texture-based techniques and classified with a decision-tree classifier. In the third phase, the quad-tree data structure is used to represent the image, and encryption is performed as part of the compression process. The proposed image encryption scheme is based on the principle of lossless compression. The



proposed approach overcomes the issues that occur during image compression. The results showed that the proposed technique outperformed previous work in terms of PSNR and compression ratio.

    REFERENCES

    [1] Brahimi T, Boubchir L, Fournier R and Naït-Ali A, “An improved multimodal signal-image

    compression scheme with application to natural images and biomedical data”, Multimedia Tools and

    Applications, Springer, pp. 1-23, 2016.

    [2] Pradhan A, Pati N, Rup S and Panda AS, “A modified framework for Image compression using

    Burrows-Wheeler Transform”, In Computational Intelligence and Networks (CINE), In 2nd International

    Conference on IEEE, pp. 150-153, 2016.

    [3] Conoscenti M, Coppola R and Magli E, “Constant SNR, rate control, and entropy coding for

    predictive lossy hyperspectral image compression”, IEEE Transactions on Geoscience and Remote

    Sensing, vol. 54, No. 12, pp. 7431-41, 2016.

    [4] Qureshi MA and Deriche M, “A new wavelet based efficient image compression algorithm using

    compressive sensing”, Multimedia Tools and Applications, Springer, vol. 75, No. 12, pp. 6737-54, 2016.

    [5] Shum HY, Kang SB and Chan SC, “Survey of image-based representations and compression

    techniques”, IEEE transactions on circuits and systems for video technology, vol. 13, No. 11, pp. 1020-

    37, 2003.

    [6] Karimi N, Samavi S, Soroushmehr SR, Shirani S and Najarian K, “Toward practical guideline for

    design of image compression algorithms for biomedical applications”, Expert Systems with Applications.

    Vol. 56, pp. 360-7, 2016.

    [7] Weinberger MJ, Seroussi G and Sapiro G, “LOCO-I: A low complexity, context-based, lossless

    image compression algorithm”, In Data Compression Conference Proceedings, IEEE, pp. 140-149, 1996.

    [8] Wu CP and Kuo CC, “Design of integrated multimedia compression and encryption systems”, IEEE

    Transactions on Multimedia, vol. 7, No. 5, pp. 828-39, 2005.

    [9] Aishwarya KM, Ramesh R, Sobarad PM and Singh V, “Lossy image compression using SVD coding

    algorithm”, In Wireless Communications, Signal Processing and Networking (WiSPNET), International

    Conference on IEEE, pp. 1384-1389, 2016.

    [10] Gharsallaoui R, Hamdi M and Kim TH, “Image compression with optimal traversal using wavelet

    and percolation theories”, In Software, Telecommunications and Computer Networks (SoftCOM), 24th

    International Conference on IEEE, pp. 1-6, 2016.

    [11] Khan A and Khan A, “Lossless colour image compression using RCT for bi-level BWCA”, Signal,

    Image and Video Processing. Vol. 10, No. 3, pp. 601-7, 2016.

    [12] Kong W, Wu J, Hu Z, Anisetti M, Damiani E and Jeon G, “Lossless compression for aurora spectral

    images using fast online bi-dimensional decorrelation method”, Information Sciences, vol. 381, pp. 33-

    45, 2017.


    [13] Sikka N, Singla S and Singh GP, “Lossless image compression technique using Haar wavelet and

    vector transform”, In Research Advances in Integrated Navigation Systems (RAINS), International

    Conference on IEEE, pp. 1-5, 2016.

    [14] Rajakumar K and Arivoli T, “Lossy Image Compression Using Multiwavelet Transform for

    Wireless Transmission”, Wireless Personal Communications, vol. 87, No. 2, pp. 315-33, 2016.

    [15] Tang M, Zeng S, Chen X, Hu J and Du Y, “An adaptive image steganography using AMBTC

    compression and interpolation technique”, Optik-International Journal for Light and Electron Optics, vol.

    127, No. 1, pp. 471-7, 2016.

    [16] Soleymani SH and Taherinia AH, “High capacity image steganography on sparse message of

    scanned document image (SMSDI)”, Multimedia Tools and Applications, pp. 1-21, 2016.

    [17] Zhou N, Pan S, Cheng S and Zhou Z, “Image compression–encryption scheme based on hyper-

    chaotic system and 2D compressive sensing”, Optics & Laser Technology, Elsevier, vol. 82, pp. 121-33,

    2016.

    [18] Venugopal D, Mohan S and Raja S, “An efficient block based lossless compression of medical

    images”, Optik-International Journal for Light and Electron Optics, vol. 127, No. 2, pp. 754-8, 2016.

    [19] Chaurasia V and Chaurasia V, “Statistical feature extraction based technique for fast fractal image

    compression”, Journal of Visual Communication and Image Representation, vol. 41, pp. 87-95, 2016.

    [20] Zhao D, Zhu S and Wang F, “Lossy hyperspectral image compression based on intra-band

    prediction and inter-band fractal encoding”, Computers & Electrical Engineering, Elsevier, Vol. 54, pp.

    494-505, 2016.

    [21] Masmoudi A, Chaoui S and Masmoudi A, “A finite mixture model of geometric distributions for

    lossless image compression”, Signal, Image and Video Processing. Vol. 10, No. 4, pp. 671-8, 2016.

    [22] Chang HK and Liu JL, “A linear quadtree compression scheme for image encryption”, Signal

    Processing Image Communication. Vol. 10, No. 4, pp. 279-90, Sep 1, 1997.

    [23] Tallapragada VS, Reddy DM, Kiran PS and Reddy DV, “A Novel Medical Image Segmentation and

    Classification using Combined Feature Set and Decision Tree Classifier”, International Journal of

Research in Engineering and Technology. Vol. 4, No. 9, pp. 83-6, 2016.

[24] Sharma A and Sehgal S, “Image segmentation using firefly algorithm”, In Information Technology (InCITe) - The Next Generation IT Summit on the Theme - Internet of Things: Connect your Worlds, International Conference, IEEE, pp. 99-102, Oct 6, 2016.

[25] Sundararaj GK and Balamurugan V, “An expert system based on texture features and decision tree classifier for diagnosis of tumor in brain MR images”, In Contemporary Computing and Informatics (IC3I), International Conference, IEEE, pp. 1340-1344, Nov 27, 2014.

    [26] Ergen B and Baykara M, “Texture based feature extraction methods for content based medical

    image retrieval systems”, Bio-medical materials and engineering. Vol.24, No. 6, pp. 3055-62, Jan 1,

    2014.

