
Research Article

Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

Zhigao Zeng (1,2), Zhiqiang Wen (1), Shengqiu Yi (1), Sanyou Zeng (3), Yanhui Zhu (1), Qiang Liu (1), and Qi Tong (1)

(1) College of Computer and Communication, Hunan University of Technology, Hunan 412007, China
(2) Intelligent Information Perception and Processing Technology, Hunan Province Key Laboratory, Hunan 412007, China
(3) Department of Computer Science, China University of Geosciences, Wuhan, Hubei 430074, China

Correspondence should be addressed to Zhigao Zeng; zzgzzg99@163.com

Received 21 January 2016; Revised 22 March 2016; Accepted 18 April 2016

Academic Editor: Stefanos Kollias

Copyright © 2016 Zhigao Zeng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We firstly design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then the spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with an added benefit of robustness in tackling noise.

1. Introduction

As a popular image processing technology, digital halftoning [1] has found wide application in converting a continuous-tone image into a binary halftone image for better display on binary devices, such as printers and computer screens. Usually, binary halftone images can only be obtained in the process of printing, image scanning, and fax, from which the original continuous-tone images need to be reconstructed [2, 3], using an inverse halftoning algorithm [4], for image processing, for example, image classification, image compression, image enhancement, and image zooming. However, it is difficult for inverse halftoning algorithms to obtain the optimal reconstruction quality due to unknown halftoning patterns in practical applications. Furthermore, a basic drawback of the existing inverse halftoning algorithms is that they do not distinguish the types of halftone images or can only coarsely divide halftone images into two major categories: error-diffused halftone images and orderly dithered halftone images. This inability to exploit a priori knowledge of the halftone images largely weakens the flexibility, adaptability, and effectiveness of the inverse halftoning techniques, making the study of the classification of halftone images imperative, not only for optimizing the existing inverse halftoning schemes but also for guiding the establishment of adaptive schemes for halftone image compression, halftone image watermarking, and so forth.

Motivated by the significance of classifying halftone images, several halftone image classification methods have been proposed. In 1998, Chang and Yu [5] classified halftone images into four types using an enhanced one-dimensional correlation function and a backpropagation (BP) neural network, for which the data sets in the experiments were limited to the halftone images produced by clustered-dot ordered dithering, dispersed-dot ordered dithering, constrained average, and error diffusion. Kong et al. [6, 7] used an enhanced one-dimensional correlation function and a gray level cooccurrence matrix to extract features from halftone images, based on which the halftone images are divided into nine categories using a decision tree algorithm. Liu et al. [8] combined the support region and least mean square (LMS) algorithm to divide halftone images into four categories. Subsequently, they [9] used LMS to extract features from the Fourier spectrum of nine categories of


halftone images and classified these halftone images using naive Bayes. Although these methods work well in classifying some specific halftone images, their performance largely decreases when classifying error-diffused halftone images produced by the Floyd-Steinberg filter, Stucki filter, Sierra filter, Burkes filter, Jarvis filter, and Stevenson filter, respectively. These filters are described as follows.

Different Error Diffusion Filters. Consider the following.

(a) Floyd-Steinberg filter:

$$\left(\frac{1}{16}\right) \begin{pmatrix} & \bullet & 7 \\ 3 & 5 & 1 \end{pmatrix} \qquad (1)$$

(b) Sierra filter:

$$\left(\frac{1}{32}\right) \begin{pmatrix} & & \bullet & 5 & 3 \\ 2 & 4 & 5 & 4 & 2 \\ 0 & 2 & 3 & 2 & 0 \end{pmatrix} \qquad (2)$$

(c) Burkes filter:

$$\left(\frac{1}{32}\right) \begin{pmatrix} & & \bullet & 8 & 4 \\ 2 & 4 & 8 & 4 & 2 \end{pmatrix} \qquad (3)$$

(d) Jarvis filter:

$$\left(\frac{1}{48}\right) \begin{pmatrix} & & \bullet & 7 & 5 \\ 3 & 5 & 7 & 5 & 3 \\ 1 & 3 & 5 & 3 & 1 \end{pmatrix} \qquad (4)$$

(e) Stevenson filter:

$$\left(\frac{1}{200}\right) \begin{pmatrix} & & & \bullet & & 32 & \\ 12 & & 26 & & 30 & & 16 \\ & 12 & & 26 & & 12 & \\ 5 & & 12 & & 12 & & 5 \end{pmatrix} \qquad (5)$$

The Error Diffusion of Stucki

(a) The error kernel of the Stucki filter is

$$\left(\frac{1}{42}\right) \begin{pmatrix} & & \bullet & 8 & 4 \\ 2 & 4 & 8 & 4 & 2 \\ 1 & 2 & 4 & 2 & 1 \end{pmatrix} \qquad (6)$$

(b) The matrix of the coefficient template is

$$\begin{pmatrix} 0 & 0 & \bullet & 8 & 4 \\ 2 & 4 & 8 & 4 & 2 \\ 1 & 2 & 4 & 2 & 1 \end{pmatrix} \qquad (7)$$

(c) $O$ denotes the pixel being processed, and $A$, $B$, $C$, and $D$ indicate four neighborhood pixels that receive diffused error; in particular, $A$ is the pixel immediately to the right of $O$ (coefficient 8) and $B$ is a corner pixel of the template (coefficient 1), consistent with the derivation in Section 3.1:

$$\left(\frac{1}{42}\right) \begin{pmatrix} 0 & 0 & O & 8 & 4 \\ 2 & 4 & 8 & 4 & 2 \\ 1 & 2 & 4 & 2 & 1 \end{pmatrix} \qquad (8)$$
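To make the templates concrete, the following sketch (our illustration, not code from the paper) applies Floyd-Steinberg error diffusion, equation (1), to a grayscale image normalized to [0, 1]; the function name, threshold default, and the common sign convention e = p_O - q_O are our own assumptions.

```python
import numpy as np

def floyd_steinberg_halftone(gray, threshold=0.5):
    """Error-diffuse a grayscale image (values in [0, 1]) with the
    Floyd-Steinberg kernel (1/16)[ . * 7 ; 3 5 1 ] of equation (1)."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= threshold else 0.0
            out[y, x] = new
            err = old - new  # quantization error to diffuse ahead
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

The other five filters differ only in the template and normalizing constant, exactly as the displays above suggest.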

These filters are based on different error diffusion kernels, as summarized in [10-12]. Moreover, the literature cited above did not consider all types of error-diffused halftone images; for example, only three error diffusion filters are included in [6, 7, 9], and only one is involved in [5, 8]. The idea of halftoning is quite similar for the six error diffusion filters, with the only difference lying in the templates used (shown on the right-hand side of each equation above). It is difficult to classify the error-diffused halftone images because of the almost inconspicuous differences among the halftone features extracted from the error-diffused halftone images produced by these six error diffusion filters. However, as a scalable algorithm, error diffusion has gradually become one of the most popular techniques due to its ability to provide a solution of good quality at a reasonable cost [13]. This creates an urgent need to study the classification mechanism for various error diffusion algorithms, with the hope of promoting the existing inverse halftoning techniques widely used in different application fields of graphics processing.

This paper proposes a new algorithm to classify error-diffused halftone images. We first extract the feature matrices of pixel pairs from the error-diffused halftone image patches according to the statistical characteristics of these patches. The class feature matrices are then obtained using a gradient descent method based on the feature matrices of pixel pairs [14]. After applying the spectral regression kernel discriminant analysis to reduce the dimension of the class feature matrices, we finally classify the error-diffused halftone images using an idea similar to the nearest centroids classifier [15, 16].

The structure of this paper is as follows. Section 2 presents the method of kernel discriminant analysis. Section 3 describes how to extract the features of pixel pairs from the error-diffused halftone images. Section 4 describes the proposed classification method for the error-diffused halftone images based on the spectral regression kernel discriminant analysis. Section 5 shows the experimental results. Some concluding remarks and possible future research directions are given in Section 6.

2. An Efficient Kernel Discriminant Analysis Method

It is well known that linear discriminant analysis (LDA) [17, 18] is effective in solving classification problems, but it fails for nonlinear problems. To deal with this limitation, the approach called kernel discriminant analysis (KDA) [19] has been proposed.

2.1. Overview of Kernel Discriminant Analysis. Suppose that we are given a sample set $x_1, x_2, \ldots, x_m$ of the error-diffused halftone images, with $x_1, x_2, \ldots, x_m$ belonging to classes $C_i$ ($i = 1, 2, \ldots, K$), respectively. Using a nonlinear mapping function $\phi$, the samples of the halftone images in the input space $R^n$ can be projected into a high-dimensional separable feature space $f$; namely, $R^n \to f$, $x \to \phi(x)$. After feature extraction, the error-diffused halftone images will be classified along the projection direction along which the within-class scatter is minimal and the between-class scatter is maximal. For a proper $\phi$, an inner product $\langle \cdot, \cdot \rangle$ can be defined on $f$ to form a reproducing kernel Hilbert space; that is to say, $\langle \phi(x_i), \phi(x_j) \rangle = \kappa(x_i, x_j)$, where $\kappa(\cdot, \cdot)$ is a positive semidefinite kernel function. In the feature space $f$, let $S_b^\phi$ be the between-class scatter matrix, $S_w^\phi$ the within-class scatter matrix, and $S_t^\phi$ the total scatter matrix:

$$S_b^\phi = \sum_{k=1}^{c} m_k \left(\mu_\phi^k - \mu_\phi\right)\left(\mu_\phi^k - \mu_\phi\right)^T,$$

$$S_w^\phi = \sum_{k=1}^{c} \left( \sum_{i=1}^{m_k} \left(\phi(x_i^k) - \mu_\phi^k\right)\left(\phi(x_i^k) - \mu_\phi^k\right)^T \right),$$

$$S_t^\phi = S_b^\phi + S_w^\phi = \sum_{i=1}^{m} \left(\phi(x_i) - \mu_\phi\right)\left(\phi(x_i) - \mu_\phi\right)^T, \qquad (9)$$

where $m_k$ is the number of samples in the $k$th class, $\phi(x_i^k)$ is the $i$th sample of the $k$th class in the feature space $f$, $\mu_\phi^k = (1/m_k) \sum_{i=1}^{m_k} \phi(x_i^k)$ is the centroid of the $k$th class, and $\mu_\phi = (1/m) \sum_{k=1}^{c} \sum_{i=1}^{m_k} \phi(x_i^k)$ is the global centroid. In the feature space, the aim of discriminant analysis is to seek the best projection direction, namely, the projective function $v$, maximizing the following objective function:

$$v_{\text{opt}} = \arg\max_v \frac{v^T S_b^\phi v}{v^T S_w^\phi v}. \qquad (10)$$

Equation (10) can be solved via the eigenproblem $S_b^\phi v = \lambda S_t^\phi v$. According to the theory of reproducing kernel Hilbert spaces, the eigenvectors are linear combinations of $\phi(x_i)$ in the feature space $f$: there exist weight coefficients $a_i$ ($i = 1, 2, \ldots, m$) such that $v_\phi = \sum_{i=1}^{m} a_i \phi(x_i)$. Let $a = [a_1, a_2, \ldots, a_m]^T$; then it can be proved that (10) can be rewritten as follows:

$$a_{\text{opt}} = \arg\max_a \frac{a^T K W K a}{a^T K K a}. \qquad (11)$$

The optimization problem (11) is equivalent to the eigenproblem

$$K W K a = \lambda K K a, \qquad (12)$$

where $K$ is the kernel matrix with $K_{ij} = \kappa(x_i, x_j)$ and $W$ is the weight matrix defined as follows:

$$w_{ij} = \begin{cases} 1/m_k, & \text{if } x_i, x_j \text{ belong to the } k\text{th class}, \\ 0, & \text{otherwise}. \end{cases} \qquad (13)$$

For a sample $x$, the projective function in the feature space $f$ can be described as

$$f^*(x) = \langle v, \phi(x) \rangle = \sum_{i=1}^{m} a_i \langle \phi(x_i), \phi(x) \rangle = \sum_{i=1}^{m} a_i \kappa(x_i, x) = a^T \kappa(\cdot, x). \qquad (14)$$
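As a concrete reading of (12)-(14), the sketch below builds a kernel matrix $K$ and evaluates the projective function $f^*(x) = \sum_i a_i \kappa(x_i, x)$; the Gaussian kernel and all names here are illustrative assumptions rather than choices stated in the paper.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    """K[i, j] = kappa(x_i, x_j) for a Gaussian (Mercer) kernel."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def project(a, X_train, x, sigma=1.0):
    """Projective function f*(x) = sum_i a_i * kappa(x_i, x), equation (14)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    return a @ np.exp(-d2 / (2.0 * sigma ** 2))
```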

2.2. Kernel Discriminant Analysis via Spectral Regression. To efficiently solve the eigenproblem of the kernel discriminant analysis in (12), the following theorem will be used.

Theorem 1. Let $y$ be an eigenvector of the eigenproblem $Wy = \lambda y$ with eigenvalue $\lambda$. If $K\alpha = y$, then $\alpha$ is an eigenvector of eigenproblem (12) with the same eigenvalue $\lambda$.

According to Theorem 1, the projective function of the kernel discriminant analysis can be obtained in the following two steps.

Step 1. Obtain $y$ by solving the eigenproblem $Wy = \lambda y$.

Step 2. Find the vector $\alpha$ which satisfies $K\alpha = y$, where $K$ is the positive semidefinite kernel matrix.

As we know, if $K$ is nonsingular, then for any given $y$ there exists a unique $\alpha = K^{-1}y$ satisfying the linear system described in Step 2. If $K$ is singular, then the linear system may have infinitely many solutions or no solution. In this case, we can approximate $\alpha$ by solving the following equation:

$$(K + \delta I)\alpha = y, \qquad (15)$$

where $\delta \ge 0$ is a regularization parameter and $I$ is the identity matrix. Combined with the projective function described in (14), we can easily verify that the solution $\alpha^* = (K + \delta I)^{-1} y$ given by (15) is the optimal solution of the following regularized regression problem:

$$\alpha^* = \arg\min_{f \in F} \sum_{i=1}^{m} \left(f(x_i) - y_i\right)^2 + \delta \|f\|_K^2, \qquad (16)$$

where $y_i$ is the $i$th element of $y$ and $F$ is the reproducing kernel Hilbert space induced from the Mercer kernel $K$, with $\|\cdot\|_K$ being the corresponding norm. Due to this essential combination of spectral analysis and regression techniques in the above two-step approach, the method is named spectral regression (SR) kernel discriminant analysis.
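A minimal sketch of the two-step SR solve follows, assuming class labels are given as an integer array: the response vectors are class indicators orthogonalized against the all-ones vector (compare equations (24)-(25) in Section 4), and each regularized system (15) is solved with one Cholesky factorization.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sr_kda(K, labels, delta=0.01):
    """Two-step SR-KDA: build c-1 orthogonalized class-indicator responses,
    then solve (K + delta*I) alpha = y for each response, equation (15)."""
    m = K.shape[0]
    classes = np.unique(labels)
    Y = np.stack([(labels == c).astype(float) for c in classes], axis=1)
    # Gram-Schmidt against the all-ones vector and previous responses;
    # the ones vector itself is discarded, leaving c-1 responses
    basis = [np.ones(m) / np.sqrt(m)]
    responses = []
    for k in range(Y.shape[1] - 1):
        y = Y[:, k].copy()
        for b in basis:
            y -= (y @ b) * b
        basis.append(y / np.linalg.norm(y))
        responses.append(y)
    Yr = np.stack(responses, axis=1)
    # one SPD factorization serves all right-hand sides
    factor = cho_factor(K + delta * np.eye(m))
    return cho_solve(factor, Yr)  # columns are the alpha_k
```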


3. Feature Extraction of the Error-Diffused Halftone Images

Since its introduction in 1976, the error diffusion algorithm has attracted widespread attention in the field of printing applications. It deals with the pixels of halftone images using neighborhood processing algorithms instead of point processing algorithms. We will now extract the features of the error-diffused halftone images produced using the six popular error diffusion filters mentioned in Section 1.

3.1. Statistic Characteristics of the Error-Diffused Halftone Images. Assume that $f_g(x, y)$ is the gray value of the pixel located at position $(x, y)$ in the original image and $F(x, y)$ is the value of the pixel located at position $(x, y)$ in the error-diffused halftone image. For the original image, all the pixels are first normalized to the range $[0, 1]$. Then the pixels of the normalized image are converted to the error-diffused image $F$ line by line; that is to say, if $f_g(x, y) \ge T_0$, then $F(x, y)$, the value of the pixel located at position $(x, y)$ in the error-diffused image $F$, is 1; otherwise $F(x, y)$ is 0, where $T_0$ is the threshold value. The quantization error at each pixel is diffused ahead to some subsequent, not yet processed pixels. Therefore, for such subsequent pixels, the comparison with $T_0$ is implemented on the sum of $f_g(x, y)$ and the diffusion error $e$. A template matrix can be built using the error diffusion modes and the error diffusion coefficients, as shown in the error diffusion of Stucki described above, with (a) the error diffusion filter and (b) the error diffusion coefficients, which represent the proportions of the diffusion errors. If a coefficient is zero, then the corresponding pixel does not receive any diffusion error. According to the error diffusion of Stucki described above, pixel $A$ receives more diffusion error than pixel $B$; that is to say, $O$-$A$ has a larger probability of becoming a 1-0 pixel pair than $O$-$B$. The reasons are as follows. Suppose that the pixel value of $O$ is $0 \le p_O \le 1$ and pixel $O$ has been processed by the thresholding method according to the following equation:

$$q_O = \begin{cases} 1, & p_O \ge T_0, \\ 0, & \text{else}. \end{cases} \qquad (17)$$

In general, the threshold $T_0$ is set to 0.5. According to the template shown in the error diffusion of Stucki described above, the diffusion error is $e = p_O - q_O$, the new value of pixel $A$ is $p_A = f_g(x_A, y_A) + 8e/42$, and the new value of pixel $B$ is $p_B = f_g(x_B, y_B) + e/42$, where $f_g(x_A, y_A)$ and $f_g(x_B, y_B)$ are the original values of pixels $A$ and $B$, respectively. For instance, if $p_O = 0.7$, then $q_O = 1$ and $e = -0.3$, so $A$ is pulled toward 0 by $8(0.3)/42 \approx 0.057$ while $B$ moves by only $0.3/42 \approx 0.007$; this is why $O$-$A$ forms a 1-0 pair more often than $O$-$B$.

Since the value of each pixel in the error-diffused halftone image can only be 0 or 1, there are four kinds of pixel pairs in the halftone image: 0-1, 0-0, 1-0, and 1-1. Pixel pairs 0-1 and 1-0 are collectively known as 1-0 pixel pairs because of their exchangeability. Therefore, there are essentially only three kinds of pixel pairs: 0-0, 1-0, and 1-1. In this paper, three statistical matrices are used to store the numbers of different pixel pairs with different neighboring distances and different directions; they are of size $L \times L$ and are referred to as $M_{00}$, $M_{10}$, and $M_{11}$, respectively ($L$ is an odd number satisfying $L = 2R + 1$, and $R$ is the maximum neighboring distance). Suppose that the center entry of the statistical matrix template covers pixel $O$ of the error-diffused halftone image of size $W \times H$ and the other entries overlap the neighborhood pixels $F(u, v)$. Then we can compute three statistics on the 1-0, 1-1, and 0-0 pixel pairs within the scope of this statistical matrix template. As the position $(i, j)$ of pixel $O$ changes continually, the matrices $M_{00}$, $M_{10}$, and $M_{11}$, initialized to zero, are updated according to

$$M_{11}(x, y) = M_{11}(x, y) + 1, \quad \text{if } F(i, j) = 1,\; F(u, v) = 1,$$
$$M_{00}(x, y) = M_{00}(x, y) + 1, \quad \text{if } F(i, j) = 0,\; F(u, v) = 0,$$
$$M_{10}(x, y) = M_{10}(x, y) + 1, \quad \text{if } F(i, j) = 1,\; F(u, v) = 0 \text{ or } F(i, j) = 0,\; F(u, v) = 1, \qquad (18)$$

where $u = i - R + x - 1$, $v = j - R + y - 1$, $1 \le i, u \le W$, $1 \le j, v \le H$, $1 \le x \le L$, and $1 \le y \le L$. After normalization, the three statistic matrices can ultimately be obtained as the statistical feature descriptor of the error-diffused halftone images.
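A direct transcription of the counting rule (18) could look like the following sketch; the vectorized offset loop and the function name are ours, boundary pixels are handled by clipping the overlap region, and the returned matrices are raw counts (the normalization happens in Step 5 of Section 3.2).

```python
import numpy as np

def pixel_pair_statistics(F, R=5):
    """Count 0-0, 1-0, and 1-1 pixel pairs between each pixel F(i, j) and
    every neighbor F(u, v) inside an L x L template, L = 2R + 1, cf. (18)."""
    L = 2 * R + 1
    H, W = F.shape
    M00 = np.zeros((L, L)); M10 = np.zeros((L, L)); M11 = np.zeros((L, L))
    for x in range(L):
        for y in range(L):
            du, dv = x - R, y - R                    # offset of entry (x, y)
            i0, i1 = max(0, -du), H - max(0, du)     # valid rows for (i, j)
            j0, j1 = max(0, -dv), W - max(0, dv)     # valid columns
            a = F[i0:i1, j0:j1]                      # pixels F(i, j)
            b = F[i0 + du:i1 + du, j0 + dv:j1 + dv]  # pixels F(u, v)
            M11[x, y] = np.sum((a == 1) & (b == 1))
            M00[x, y] = np.sum((a == 0) & (b == 0))
            M10[x, y] = np.sum(a != b)               # 0-1 and 1-0 pooled
    return M00, M10, M11
```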

3.2. Process of Statistical Feature Extraction of Halftone Images. According to the analysis described above, the process of statistical feature extraction of the error-diffused halftone images can be represented as follows (a compact sketch is given after the steps).

Step 1. Input the error-diffused halftone image $F$ and divide $F$ into several blocks $B_i$ ($i = 1, 2, \ldots, H$) of the same size $K \times K$.

Step 2. Initialize the statistical feature matrix $M$ (including $M_{00}$, $M_{10}$, $M_{11}$) as the zero matrix and let $i = 1$.

Step 3. Obtain the statistical matrix $M_i$ of block $B_i$ according to (18) and update $M$ using the equation $M = M + M_i$.

Step 4. Set $i = i + 1$. If $i \le H$, then return to Step 3. Otherwise, go to Step 5.

Step 5. Normalize $M$ such that it satisfies $\sum_{x=1}^{L}\sum_{y=1}^{L} M(x, y) = 1$ and $M(x, y) \ge 0$, where $K$ satisfies $K \ge \min(T_i)$ and $T_i$ ($1 \le i \le 6$) represents the size of the template matrix of the $i$th error diffusion filter.
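Chaining Steps 1-5 gives a short driver; it reuses the hypothetical pixel_pair_statistics sketch from Section 3.1, and the block size K = 16 is an arbitrary illustrative value (the paper only requires K >= min(T_i)).

```python
import numpy as np

def extract_block_features(F, K=16, R=5):
    """Steps 1-5: divide halftone image F into K x K blocks, accumulate the
    pixel-pair statistics of every block, and normalize to sum 1."""
    H, W = F.shape
    L = 2 * R + 1
    M00 = np.zeros((L, L)); M10 = np.zeros((L, L)); M11 = np.zeros((L, L))
    for r in range(0, H - K + 1, K):
        for c in range(0, W - K + 1, K):
            b00, b10, b11 = pixel_pair_statistics(F[r:r + K, c:c + K], R)
            M00 += b00; M10 += b10; M11 += b11
    # Step 5: normalize each accumulated matrix
    return tuple(M / M.sum() for M in (M00, M10, M11))
```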

According to the process described above, the statistical features of the error-diffused halftone image $F$ are extracted by dividing $F$ into image patches, which is significantly different from other patch-based feature extraction methods. For example, in [20], the brightness and contrast of the image patches are normalized by a $Z$-score transformation, and whitening (also called "sphering") is used to rescale the normalized data to remove the correlations between nearby pixels (i.e., low-frequency variations in the images), because these correlations tend to be very strong even after brightness and contrast normalization. In this paper, however, the features of the patches are extracted by counting statistical measures of the different pixel pairs (0-0, 1-0, and 1-1) within a moving statistical matrix template, and they are optimized using the method described in Section 3.3.

3.3. Extraction of the Class Feature Matrix. The statistics matrices $M_{00}^i, M_{10}^i, M_{11}^i$ ($i = 1, 2, \ldots, N$), after being extracted, can be used as the input of other algorithms, such as support vector machines and neural networks. However, the curse of dimensionality could occur due to the high dimension of $M_{00}, M_{10}, M_{11}$, making the classification effect possibly not significant. Thereby, six class feature matrices $G_1, G_2, \ldots, G_6$ are designed in this paper for the error-diffused halftone images produced by the six error diffusion filters mentioned above. Then a gradient descent method can be used to optimize these class feature matrices.

$N = 6 \times n$ error-diffused halftone images can be derived from $n$ original images using the six error diffusion filters, respectively. Then $N$ statistics matrices $M_i$ ($M_{00}^i, M_{10}^i, M_{11}^i$) ($i = 1, 2, \ldots, N$) can be extracted as samples from the $N$ error-diffused halftone images using the algorithm mentioned in Section 3.2. Subsequently, we label these matrices as $\text{label}(M_1), \text{label}(M_2), \ldots, \text{label}(M_N)$ to denote the types of the error diffusion filters used to produce the error-diffused halftone images. Given the $i$th sample $M_i$ as the input, the target output vector $t_i = [t_{i1}, \ldots, t_{i6}]$ ($i = 1, \ldots, N$), and the class feature matrices $G_1, G_2, \ldots, G_6$, the square error $e_i$ between the actual output and the target output can be derived according to

$$e_i = \sum_{j=1}^{6} \left(t_{ij} - M_i \bullet G_j\right)^2, \qquad (19)$$

where

$$t_{ij} = \begin{cases} 1, & \text{if } j = \text{label}(M_i),\; j = 1, 2, \ldots, 6, \\ 0, & \text{else}. \end{cases} \qquad (20)$$

The derivatives with respect to $G_j(x, y)$ in (19) can be explicitly calculated as

$$\frac{\partial e_i}{\partial G_j(x, y)} = -2 \left(t_{ij} - M_i \bullet G_j\right) M_i(x, y), \qquad (21)$$

where $1 \le x \le L$, $1 \le y \le L$, and $\bullet$ is the dot product of matrices, defined for any matrices $A$ and $B$ of the same size $C \times D$ as

$$A \bullet B = \sum_{u=1}^{C} \sum_{v=1}^{D} A(u, v) \times B(u, v). \qquad (22)$$

The dot product of matrices is commutative; that is to say, $A \bullet B = B \bullet A$. The iteration equation (23) can then be obtained using the gradient descent method:

$$G_j^{k+1}(x, y) = G_j^k(x, y) - \eta \frac{\partial e_i}{\partial G_j^k(x, y)} = G_j^k(x, y) + 2\eta \left(t_{ij} - M_i \bullet G_j^k\right) M_i(x, y), \qquad (23)$$

where $\eta$ is the learning factor and $k$ denotes the $k$th iteration. The purpose of learning is to seek the optimal matrices $G_j$ ($j = 1, 2, \ldots, 6$) by minimizing the total square error $e = \sum_{i=1}^{N} e_i$; the process of seeking the optimal matrices $G_j$ can be described as follows.

Step 1. Initialize the parameters: the numbers of iterations inner and outer, the iteration variables $t = 0$ and $i = 1$, the nonnegative thresholds $\varepsilon_1$ and $\varepsilon_2$ used to indicate the end of the iterations, the learning factor $\eta$, the total number of samples $N$, and the class feature matrices $G_j$ ($j = 1, 2, \ldots, 6$).

Step 2. Input the statistics matrices $M_i$ ($i = 1, 2, \ldots, N$), and let $G_j^0 = G_j$ ($j = 1, \ldots, 6$) and $k = 0$. The following three substeps are executed:

(1) According to (23), compute $G_j^{k+1}$ ($j = 1, \ldots, 6$).

(2) Compute $e_i^{k+1}$ and $\Delta e_i^{k+1} = |e_i^{k+1} - e_i^k|$.

(3) If $k > \text{inner}$ or $\Delta e_i^{k+1} < \varepsilon_1$, then set $e_i = e_i^{k+1}$ and go to Step 3; otherwise set $k = k + 1$ and return to substep (1).

Step 3. Set $G_j = G_j^{k+1}$ ($j = 1, \ldots, 6$). If $i = N$, then go to Step 4. Otherwise, set $i = i + 1$ and go to Step 2.

Step 4. Compute the total error $e^t = \sum_{i=1}^{N} e_i$. If $|e^t - e^{t-1}| < \varepsilon_2$ or $t = \text{outer}$, then end the algorithm. Otherwise, set $t = t + 1$ and go to Step 2.

4. Classification of Error-Diffused Halftone Images Using Nearest Centroids Classifier

This section describes the details of classifying error-diffused halftone images using the spectral regression kernel discriminant analysis, as follows.

Step 1. Input the $N$ error-diffused halftone images produced by the six error diffusion filters and extract the statistical feature matrices $M_i$, including $M_{00}^i, M_{10}^i, M_{11}^i$ ($i = 1, 2, \ldots, N$), using the method presented in Section 3.2.

Step 2. According to the steps described in Section 3.3, all the statistical feature matrices $M_i$ ($M_{00}^i, M_{10}^i, M_{11}^i$, $i = 1, 2, \ldots, N$) are converted to the class feature matrices $G_i$, including $G_{00}^i, G_{10}^i, G_{11}^i$ ($i = 1, 2, \ldots, N$), correspondingly. Then convert $G_{00}^i, G_{10}^i, G_{11}^i$ ($i = 1, 2, \ldots, N$) into one-dimensional vectors by columns, respectively.

Step 3. All the one-dimensional class feature vectors $G_{00}^i, G_{10}^i, G_{11}^i$ ($i = 1, 2, \ldots, N$) are used to construct the sample feature matrices $F_{00}, F_{10}, F_{11}$ of size $N \times M$ ($M = L \times L$), respectively.

Step 4. A label matrix of size $1 \times N$ is built to record the type to which each error-diffused halftone image belongs.

Step 5. The first $m$ rows $G_{00}^1, G_{00}^2, \ldots, G_{00}^m$ of the sample feature matrix $F_{00}$ are taken as the training samples (the first $m$ rows of $F_{10}$, $F_{11}$, or $F_{\text{all}}$, which is the composition of $F_{00}$, $F_{10}$, and $F_{11}$, can also be used as the training samples). Reduce the dimension of these training samples using the spectral regression discriminant analysis. The dimension reduction consists of three substeps:

(1) Produce orthogonal vectors. Let

$$y_k = \Big[\underbrace{0, \ldots, 0}_{\sum_{i=1}^{k-1} m_i},\; \underbrace{1, \ldots, 1}_{m_k},\; \underbrace{0, \ldots, 0}_{\sum_{i=k+1}^{c} m_i}\Big]^T, \quad k = 1, 2, \ldots, c, \qquad (24)$$

and let $y_0 = [1, 1, \ldots, 1]^T$ be the vector with all elements being 1. Take $y_0$ as the first vector and use the Gram-Schmidt process to orthogonalize the other eigenvectors of the weight matrix $W$. Remove the vector $y_0$, leaving $c - 1$ eigenvectors of $W$, denoted as follows:

$$\{y_k\}_{k=1}^{c-1} \quad \left(y_i^T y_0 = 0,\; y_i^T y_j = 0,\; i \ne j\right). \qquad (25)$$

(2) Append an element of 1 to the end of each input vector $G_{00}^i$ ($i = 1, 2, \ldots, m$) and obtain $c - 1$ vectors $\{a_k\}_{k=1}^{c-1} \in \mathbb{R}^{M+1}$, where $a_k$ is the solution of the following regularized least squares problem:

$$a_k = \arg\min_a \left( \sum_{i=1}^{m} \left(a^T G_{00}^i - y_k^i\right)^2 + \alpha \|a\|^2 \right), \qquad (26)$$

where $y_k^i$ is the $i$th element of eigenvector $y_k$ and $\alpha$ is the contraction (regularization) parameter.

(3) Let $A = [a_1, \ldots, a_{c-1}]$ be the transformation matrix of size $(M + 1) \times (c - 1)$. Perform dimension reduction on the sample features $G_{00}^1, G_{00}^2, \ldots, G_{00}^m$ by mapping them to the $(c - 1)$-dimensional subspace as follows:

$$x \longrightarrow z = A^T \begin{bmatrix} x \\ 1 \end{bmatrix}. \qquad (27)$$

Step 6. Compute the mean values of the samples in the different classes according to the following equation:

$$\text{aver}_i = \frac{\sum_{s=1}^{m_i} G_{00}^s}{m_i} \quad (i = 1, 2, \ldots, 6), \qquad (28)$$

where $m_i$ is the number of training samples in the $i$th class and $\sum_{i=1}^{6} m_i = m$. Let $\text{aver}_i$ be the class-centroid of the $i$th class.

Step 7. The remaining $N - m$ samples are taken as the testing samples, and the dimension reduction is implemented for them using the method described in Step 5.

Step 8. Compute the squared distance $|G_{00}^s - \text{aver}_i|^2$ ($s = m+1, m+2, \ldots, N$; $i = 1, 2, \ldots, 6$) between each testing sample $G_{00}^s$ and each class-centroid $\text{aver}_i$; according to the nearest centroids classifier, the sample $G_{00}^s$ is assigned to class $i$ if $i = \arg\min_i |G_{00}^s - \text{aver}_i|^2$.

In Step 8, the weak classifier (i.e., the nearest centroid classifier) is used to classify the error-diffused halftone images because this classifier is simple and easy to implement. At the same time, in order to show that the class feature matrices, extracted by the method of Section 3 and processed by the spectral regression discriminant analysis, are by themselves well suited to the classification of error-diffused halftone images, this weak classifier is used in this paper instead of a strong classifier [20], such as a support vector machine or a deep neural network classifier.
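In the reduced (c - 1)-dimensional space, Steps 6-8 amount to a few lines; this sketch assumes the training and testing samples have already been embedded by the transformation (27).

```python
import numpy as np

def nearest_centroid_classify(Z_train, y_train, Z_test):
    """Steps 6-8: aver_i is the mean embedded vector of class i, equation (28);
    each test vector goes to the class minimizing the squared distance."""
    classes = np.unique(y_train)
    centroids = np.stack([Z_train[y_train == c].mean(axis=0) for c in classes])
    d2 = ((Z_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d2, axis=1)]
```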

5. Experimental Analysis and Results

We implement various experiments to verify the efficiency of our method in classifying error-diffused halftone images. The computer processor is an Intel(R) Pentium(R) CPU G2030 at 3.00 GHz, the memory of the computer is 2.0 GB, the operating system is Windows 7, and the experimental simulation software is MATLAB R2012a. In our experiments, all the original images are downloaded from http://decsai.ugr.es/cvg/dbimagenes and http://msp.ee.ntust.edu.tw. About 4000 original images have been downloaded, and they are converted into 24000 error-diffused halftone images produced by the six different error diffusion filters.

5.1. Classification Accuracy Rate of the Error-Diffused Halftone Images

5.1.1. Effect of the Number of the Samples. This subsection analyzes the effect of the number of feature samples on classification. When L = 11 and the feature matrices F_00, F_10, F_11, and F_all are taken as the input data, respectively, the classification accuracy rates under different conditions are shown in Tables 1 and 2. Table 1 shows the classification accuracy rates under different numbers of training samples when the total number of samples is 12000; Table 2 shows them when the total number of samples is 24000. The numbers in the first row of each table are the sizes of the training sets.

According to Tables 1 and 2, the classification accuracy rates under 12000 samples are higher than those under 24000 samples. Moreover, the classification accuracy rates improve with the increase of the proportion of training samples when the number of training samples is lower than 80% of the sample size, and the highest classification accuracy rates are achieved when the number of training samples is about 80% of the sample size.

Table 1: Classification accuracy rates (%) (12000 samples).

         2000     3000     4000     5000     6000     7000     8000     9000     10000
F_00     97.750   97.500   99.050   99.150   99.150   99.150   99.100   98.950   98.900
F_10     99.950   99.950   100.00   100.00   100.00   100.00   100.00   100.00   100.00
F_11     99.500   99.250   99.350   99.400   99.200   99.350   99.300   99.300   99.450
F_all    99.100   99.100   99.450   99.500   99.650   99.650   99.650   99.650   99.600

Table 2: Classification accuracy rates (%) (24000 samples).

         14000    15000    16000    17000    18000    19000    20000    21000    22000
F_00     95.810   95.878   95.413   95.286   94.933   98.580   98.100   98.267   98.650
F_10     97.600   97.378   97.150   96.957   96.450   99.680   99.600   99.533   99.300
F_11     96.420   96.256   95.925   95.614   95.000   99.040   98.825   98.700   98.400
F_all    97.230   97.100   96.888   96.686   96.233   99.680   99.600   99.533   99.350

Table 3: Classification accuracy rates obtained by different algorithms (%).

F_00 + SR   F_10 + SR   F_11 + SR   F_all + SR   ECF + BP   LMS + Bayes   G_10 + ML   G_11 + ML
98.40       99.53       98.74       99.54        90.04      90.38         98.32       96.15

Table 4: The training and testing time under different sample sizes (in seconds).

                         14000    15000    16000    17000    18000    19000    20000    21000    22000
F_00    Training time    652.08   783.17   946.60   1104.7   1293.2   1501.8   1789.4   2051.1   5135.4
        Testing time     637.42   625.83   585.30   552.51   487.92   449.47   378.72   303.75   433.93
F_10    Training time    658.51   787.96   951.13   1107.0   1309.7   1507.6   1739.8   1989.5   2401.0
        Testing time     636.74   608.04   574.96   550.3    492.87   432.61   367.75   303.53   296.31
F_11    Training time    656.62   799.36   951.73   1106.9   1303.3   1506.3   1744.1   1983.4   2399.6
        Testing time     652.26   618.17   578.26   537.78   485.31   455.13   371.37   297.35   305.97
F_all   Training time    689.02   825.54   991.89   1152.7   1353.4   1567.1   1799.4   2048.5   2883.6
        Testing time     1020.0   983.93   931.12   872.12   801.6    690.2    585.56   493.63   369.21

In addition, from Tables 1 and 2 we can also see that F_00, F_10, and F_11 can each be used as the input data alone; they can also be combined into the input data F_all, based on which the classification accuracy rates would be high.

5.1.2. Comparison of Classification Accuracy Rate. To analyze the effectiveness of our classification algorithm, the mean values of the classification accuracy rates of the four data sets on the right-hand side of each row in Table 2 are computed. The algorithm SR outperforms the other baselines in achieving higher classification accuracy rates when compared with LMS + Bayes (the method composed of least mean square and the Bayes method), ECF + BP (the method based on the enhanced correlation function and a BP neural network), and ML (the maximum likelihood method). According to Table 3, the mean values of the classification accuracy rates obtained using SR and the different features F_00, F_10, F_11, and F_all, respectively, are higher than the mean values obtained by the other algorithms mentioned above.

5.2. Effect of the Size of the Statistical Feature Template on Classification Accuracy. Here, the different features F_00, F_10, F_11 of the 24000 error-diffused halftone images are used to test the effect of the size of the statistical feature template. F_00, F_10, F_11 are constructed using the corresponding class feature matrices G_00, G_10, G_11 with different sizes L x L (L = 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25). Figure 1 shows that the classification accuracy rate achieves its highest value when L = 11, no matter which feature is selected for the experiments.

5.3. Time-Consumption of the Classification. Now experiments are implemented to test the time-consumption of the error-diffused halftone image classification under the condition that the total number of samples is 24000 and L = 11. The time-consumption of the classification includes the training time and the testing time. From Table 4, we can see that the training time increases with the number of training samples; on the contrary, the testing time decreases with the increase of the number of training samples, since a larger training set leaves fewer of the 24000 samples to be classified at test time.

Figure 1: Classification accuracy rates based on different features with different sample sizes (horizontal axes: number of training samples, 14000-22000; vertical axes: classification accuracy rate). Panels: (a) rates based on F_00 (L = 5, 7, 9, 11, 13, 15); (b) rates based on F_00 (L = 17, 19, 21, 23, 25); (c) rates based on F_10 (L = 5, 7, 9, 11, 13, 15); (d) rates based on F_10 (L = 17, 19, 21, 23, 25); (e) rates based on F_11 (L = 5, 7, 9, 11, 13, 15); (f) rates based on F_11 (L = 17, 19, 21, 23, 25).

To compare the time-consumption of the classification method proposed in this paper with other algorithms, such as the backpropagation (BP) neural network, the radial basis function (RBF) neural network, and the support vector machine (SVM), all the experiments are implemented using F_10, which is divided into two parts: 12000 training samples and 12000 testing samples (L = 11). From the values listed in Table 5, we can see that SR achieves the minimal sum of the training and testing time.

In addition, Table 5 implies that the time-consumption of classifiers based on neural networks, such as those based on RBF or BP, is much higher than that of the other algorithms, especially SR.

Table 5: Time-consumption of different algorithms (in seconds).

                 ML       RBF      BP       SVM      SR
Training time    1104.2   7639.0   2966.7   4929.7   439.337
Testing time     120.00   276.00   360.00   624.00   631.61

Table 6: Classification accuracy rates under different variances (%).

Variance   14000    15000    16000    17000    18000    19000    20000    21000    22000
0.01       95.570   95.422   95.425   95.529   95.217   98.440   98.200   98.033   97.950
0.10       95.370   95.233   95.188   95.157   94.867   98.200   98.000   97.767   98.000
0.50       94.580   94.522   94.600   94.671   94.567   98.320   98.225   98.033   98.550
1.00       93.060   93.067   93.013   93.343   93.200   96.980   97.175   96.800   97.350

Table 7: Classification accuracy rates using other algorithms under different variances (%).

Variance   ECF + BP   LMS + Bayes   ML        Variance   ECF + BP   LMS + Bayes   ML
0.01       89.5000    88.7577       97.9981   0.50       79.7333    40.2682       82.9169
0.10       88.1333    78.4264       96.8315   1.00       61.5000    33.8547       64.3861

This is because these neural network algorithms essentially use gradient descent methods to optimize the associated nonconvex problems, which are well known to converge very slowly. However, the classifier based on SR performs the classification task by directly computing the squared distance between each testing sample and the different class-centroids. Hence its time-consumption is very low.

5.4. The Experiment of Noise Attack Resistance. In actual operation, the error-diffused halftone images are often polluted by noise before the inverse transform. In order to test the ability of SR to resist noise attacks, Gaussian noises with mean 0 and different variances are embedded into the error-diffused halftone images. Then classification experiments are carried out using the algorithm proposed in this paper, and the experimental results are listed in Table 6. According to Table 6, the accuracy rates decrease as the variance increases. Compared with the accuracy rates listed in Table 7, achieved by other algorithms such as ECF + BP, LMS + Bayes, and ML, our classification method has obvious advantages in resisting noise.

6. Conclusion

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then the spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with an added benefit of robustness in tackling noise. A very interesting direction is to address the disturbance possibly introduced by other attacks, such as image scaling and rotation, in the process of error-diffused halftone image classification.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (Grants nos. 61170102 and 61271140), the Scientific Research Fund of Hunan Provincial Education Department, China (Grant no. 15A049), the Education Department Fund of Hunan Province in China (Grants nos. 15C0402, 15C0395, and 13C036), and the Science and Technology Planning Project of Hunan Province in China (Grant no. 2015GK3024).

References

[1] Y.-M. Kwon, M.-G. Kim, and J.-L. Kim, "Multiscale rank-based ordered dither algorithm for digital halftoning," Information Systems, vol. 48, pp. 241-247, 2015.

[2] Y. Jiang and M. Wang, "Image fusion using multiscale edge-preserving decomposition based on weighted least squares filter," IET Image Processing, vol. 8, no. 3, pp. 183-190, 2014.

[3] Z. Zhao, L. Cheng, and G. Cheng, "Neighbourhood weighted fuzzy c-means clustering algorithm for image segmentation," IET Image Processing, vol. 8, no. 3, pp. 150-161, 2014.

[4] Z.-Q. Wen, Y.-L. Lu, Z.-G. Zeng, W.-Q. Zhu, and J.-H. Ai, "Optimizing template for lookup-table inverse halftoning using elitist genetic algorithm," IEEE Signal Processing Letters, vol. 22, no. 1, pp. 71-75, 2015.

[5] P. C. Chang and C. S. Yu, "Neural net classification and LMS reconstruction to halftone images," in Visual Communications and Image Processing '98, vol. 3309 of Proceedings of SPIE, pp. 592-602, The International Society for Optical Engineering, January 1998.

[6] Y. Kong, P. Zeng, and Y. Zhang, "Classification and recognition algorithm for the halftone image," Journal of Xidian University, vol. 38, no. 5, pp. 62-69, 2011 (Chinese).

[7] Y. Kong, A study of inverse halftoning and quality assessment schemes [Ph.D. thesis], School of Computer Science and Technology, Xidian University, Xi'an, China, 2008.

[8] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Inverse halftoning based on the Bayesian theorem," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 1077-1084, 2011.

[9] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Halftone image classification using LMS algorithm and naive Bayes," IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2837-2847, 2011.

[10] D. L. Lau and G. R. Arce, Modern Digital Halftoning, CRC Press, Boca Raton, Fla, USA, 2nd edition, 2008.

[11] Image Dithering: Eleven Algorithms and Source Code, http://www.tannerhelland.com/4660/dithering-elevenalgorithms-source-code.

[12] R. A. Ulichney, "Dithering with blue noise," Proceedings of the IEEE, vol. 76, no. 1, pp. 56-79, 1988.

[13] Y.-H. Fung and Y.-H. Chan, "Embedding halftones of different resolutions in a full-scale halftone," IEEE Signal Processing Letters, vol. 13, no. 3, pp. 153-156, 2006.

[14] Z.-Q. Wen, Y.-X. Hu, and W.-Q. Zhu, "A novel classification method of halftone image via statistics matrices," IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4724-4736, 2014.

[15] D. Cai, X. He, and J. Han, "Efficient kernel discriminant analysis via spectral regression," in Proceedings of the 7th IEEE International Conference on Data Mining (ICDM '07), pp. 427-432, Omaha, Neb, USA, October 2007.

[16] D. Cai, X. He, and J. Han, "Speed up kernel discriminant analysis," The International Journal on Very Large Data Bases, vol. 20, no. 1, pp. 21-33, 2011.

[17] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "A general soft label based Linear Discriminant Analysis for semi-supervised dimensionality reduction," Neural Networks, vol. 55, pp. 83-97, 2014.

[18] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "Soft label based Linear Discriminant Analysis for image recognition and retrieval," Computer Vision and Image Understanding, vol. 121, no. 1, pp. 86-99, 2014.

[19] L. Zhang and F.-C. Tian, "A new kernel discriminant analysis framework for electronic nose recognition," Analytica Chimica Acta, vol. 816, pp. 8-17, 2014.

[20] B. Gu, V. S. Sheng, K. Y. Tay, W. Romano, and S. Li, "Incremental support vector learning for ordinal regression," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1403-1416, 2015.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 2: Research Article Classification of Error-Diffused Halftone ...features from hal one images, based on which the hal one images are divided into nine categories using a decision tree

2 Advances in Multimedia

halftone images and classify these halftone images using naiveBayes Although these methods work well in classifying somespecific halftone images their performance largely decreaseswhen classifying error-diffused halftone images producedby Floyd-Steinberg filter Stucki filter Sierra filter Burkersfilter Jarvis filter and Stevenson filter respectively They aredescribed as follows

Different Error Diffusion Filters Consider the following

(a) Floyd-Steinberg filter

( 116)∙ 7

3 5 1 (1)

(b) Sierra filter

( 132)∙ 5 3

2 4 5 4 20 2 3 2 0

(2)

(c) Burkers filter

( 132)∙ 8 4

2 4 8 4 2 (3)

(d) Jarvis filter

( 148)∙ 7 5

3 5 7 5 31 3 5 3 1

(4)

(e) Stevenson filter

( 1200)

∙ 3212 26 30 16

12 26 125 12 12 5

(5)

The Error Diffusion of Stucki

(a) Error kernel of Stucki filter is

( 142)∙ 8 4

2 4 8 4 21 2 4 2 1

(6)

(b) Matrix of coefficients template is

0 0 421

4 8 4 22 4 2 1

80

(7)

(c) 119874 denotes the pixel being processed 119860 119861 119862 and 119863indicate the four neighborhood pixels

0 0 8 421

4 8 4 22 4 2 1

( 142)

0

A

B

C

D

O

(8)

based on different error diffusion kernels as summarizedin [10ndash12] Moreover these literatures did not consider alltypes of error diffusion halftone images For example onlythree error diffusion filters are included in [6 7 9] and onlyone is involved in [5 8] The idea of halftoning for the sixerror diffusion filters is quite similar with the only differencelying in the templates used (shown in different error diffusionfilters and the error diffusion of Stucki described abovethe templates are shown at the right-hand side in eachequation) It is difficult to classify the error-diffused halftoneimages because of the almost inconspicuous differencesamong various halftone features extracted from using thesesix error diffusion filters the error-diffused halftone imagesHowever as a scalable algorithm the error diffusion hasgradually become one of the most popular techniques dueto its ability to provide a solution of good quality at areasonable cost [13] This asks for an urgent requirement tostudy the classificationmechanism for various error diffusionalgorithms with the hope to promote the existing inversehalftone techniques widely used in different application fieldsof graphics processing

This paper proposes a new algorithm to classify error-diffused halftone images We first extract the feature matricesof pixel pairs from the error-diffused halftone image patchesaccording to statistical characteristics of these patches Theclass feature matrices are then subsequently obtained usinga gradient descent method based on the feature matrices ofpixel pairs [14] After applying the spectral regression kerneldiscriminant analysis to realize the dimension reduction inthe class featurematrices we finally classify the error-diffusedhalftone images using the idea similar to the nearest centroidsclassifier [15 16]

The structure of this paper is as follows Section 2 presentsthe method of kernel discriminant analysis Section 3describes how to extract the feature extraction of pixel pairsfrom the error-diffused halftone images Section 4 describesthe proposed classification method for the error-diffusedhalftone image based on the spectral regression kernel dis-criminant analysis Section 5 shows the experimental resultsSome concluding remarks and possible future research direc-tions are given in Section 6

2 An Efficient Kernel DiscriminantAnalysis Method

It is well known that linear discriminant analysis (LDA)[17 18] is effective in solving classification problems but it

Advances in Multimedia 3

fails for nonlinear problems To deal with this limitation theapproach called kernel discriminant analysis (KDA) [19] hasbeen proposed

21 Overview of Kernel Discriminant Analysis Suppose thatwe are given a sample set 119909

1 1199092 119909

119898 of the error-diffused

halftone images with 1199091 1199092 119909

119898belonging to different

class 119862119894(119894 = 1 2 119870) respectively Using a nonlinear

mapping function 120601 the samples of the halftone images inthe input space 119877119899 can be projected to a high-dimensionalseparable feature space 119891 namely 119877119899 rarr 119891 119909 rarr 120601(119909) Afterextracting features the error-diffused halftone images willbe classified along the projection direction along which thewithin-class scatter is minimal and the between-class scatteris maximal For a proper 120601 an inner product described as⟨sdot sdot⟩ can be defined on119891 to form a reproducing kernel Hilbertspace That is to say ⟨120601(119909

119894) 120601(119909119895)⟩ = 120581(119909

119894 119909119895) where 120581(sdot sdot) is

a positive semidefinite kernel function In the feature space119891let 119878120601119887be the between-class scattermatrix let 119878120601

119908be thewithin-

class scatter matrix and let 119878120601119905be the total scatter matrix

119878120601119887=119888

sum119896=1

119898119896(120583119896120601minus 120583120601) (120583119896120601minus 120583120601)119879

119878120601119908=119888

sum119896=1

(119898119896

sum119894=1

(120601 (119909119896119894) minus 120583119896120601) (120601 (119909119896

119894) minus 120583119896120601)119879)

119878120601119905= 119878120601119887+ 119878120601119908=119898

sum119894=1

(120601 (119909119894) minus 120583120601) (120601 (119909

119894) minus 120583120601)119879

(9)

where 119898119896is the number of the samples in the 119896th class

120601(119909119896119894) is the 119894th sample of the 119896th class in the feature space

119891 120583119896120601= (1119898

119896) sum119873119894119894=1120601(119909119896119894) is the centroid of the 119896th class

and 120583120601= (1119873)sum119888

119896=1sum119898119896119894=1120601(119909119896119894) is the global centroid In the

feature space the aim of the discriminant analysis is to seekthe best projection direction namely the projective functionV to maximize the following objective function

Vopt = argmaxV119879119878120601119887V

V119879119878120601119908V (10)

Equation (10) can be solved via the eigenproblem $S_b^\phi v = \lambda S_t^\phi v$. According to the theory of reproducing kernel Hilbert spaces, the eigenvectors are linear combinations of the $\phi(x_i)$ in the feature space $f$; that is, there exist weight coefficients $a_i$ ($i = 1, 2, \ldots, m$) such that $v_\phi = \sum_{i=1}^{m} a_i \phi(x_i)$. Let $a = [a_1, a_2, \ldots, a_m]^T$; then it can be proved that (10) can be rewritten as follows:
\[
a_{\mathrm{opt}} = \arg\max_{a} \frac{a^T K W K a}{a^T K K a}. \tag{11}
\]

The optimization problem of (11) is equivalent to the eigenproblem
\[
K W K a = \lambda K K a, \tag{12}
\]

where $K$ is the kernel matrix with $K_{ij} = \kappa(x_i, x_j)$ and $W$ is the weight matrix defined as follows:
\[
w_{ij} =
\begin{cases}
1/m_k, & \text{if } x_i, x_j \text{ belong to the } k\text{th class}, \\
0, & \text{otherwise}.
\end{cases} \tag{13}
\]

For a sample $x$, the projective function in the feature space $f$ can be described as
\[
f^*(x) = \langle v, \phi(x) \rangle = \sum_{i=1}^{m} a_i \langle \phi(x_i), \phi(x) \rangle = \sum_{i=1}^{m} a_i \kappa(x_i, x) = a^T \kappa(\cdot, x). \tag{14}
\]
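To make these quantities concrete, the following minimal NumPy sketch (our own illustration, not part of the original method; the Gaussian kernel choice, the width `sigma`, and all function names are assumptions) builds the kernel matrix $K$ of (12), the weight matrix $W$ of (13), and evaluates the projective function (14):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """K[i, j] = kappa(x_i, y_j) with a Gaussian (RBF) kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weight_matrix(labels):
    """W of (13): w_ij = 1/m_k if x_i and x_j share class k, else 0."""
    labels = np.asarray(labels)
    W = np.zeros((labels.size, labels.size))
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        W[np.ix_(idx, idx)] = 1.0 / idx.size
    return W

def project(a, X_train, x, sigma=1.0):
    """Projective function (14): f*(x) = sum_i a_i * kappa(x_i, x)."""
    return gaussian_kernel(X_train, x[None, :], sigma)[:, 0] @ a
```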

2.2. Kernel Discriminant Analysis via Spectral Regression. To efficiently solve the eigenproblem of the kernel discriminant analysis in (12), the following theorem will be used.

Theorem 1. Let $y$ be an eigenvector of the eigenproblem $Wy = \lambda y$ with eigenvalue $\lambda$. If $K\alpha = y$, then $\alpha$ is an eigenvector of eigenproblem (12) with the same eigenvalue $\lambda$.

According to Theorem 1, the projective function of the kernel discriminant analysis can be obtained in the following two steps.

Step 1. Obtain $y$ by solving the eigenproblem $Wy = \lambda y$.

Step 2. Find the vector $\alpha$ which satisfies $K\alpha = y$, where $K$ is the positive semidefinite kernel matrix.

As we know, if $K$ is nonsingular, then for any given $y$ there exists a unique $\alpha = K^{-1}y$ satisfying the linear equation described in Step 2. If $K$ is singular, then the linear equation may have infinitely many solutions or no solution. In this case, we can approximate $\alpha$ by solving the following equation:
\[
(K + \delta I)\,\alpha = y, \tag{15}
\]
where $\delta \ge 0$ is a regularization parameter and $I$ is the identity matrix. Combined with the projective function described in (14), we can easily verify that the solution $\alpha^* = (K + \delta I)^{-1} y$ given by (15) is the optimal solution of the following regularized regression problem:
\[
\alpha^* = \arg\min_{f \in F} \sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2 + \delta \|f\|_K^2, \tag{16}
\]
where $y_i$ is the $i$th element of $y$ and $F$ is the reproducing kernel Hilbert space induced from the Mercer kernel $K$, with $\|\cdot\|_K$ being the corresponding norm. Due to the essential combination of the spectral analysis and regression techniques in the above two-step approach, the method is named spectral regression (SR) kernel discriminant analysis.
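The two-step procedure can be sketched as follows (again our own illustration under the same assumptions as above). For the block-constant $W$ of (13), $Wy = y$ holds exactly for every class indicator vector, so the nontrivial eigenvectors (eigenvalue 1) are spanned by the indicators; the sketch therefore produces the responses directly and orthogonalizes them against the all-ones vector instead of calling an eigensolver, in the spirit of (24)-(25) in Section 4:

```python
import numpy as np

def sr_kda(K, labels, delta=0.01):
    """Spectral regression KDA: responses from W's eigenvectors,
    then one regularized solve (K + delta*I) alpha = y per response, cf. (15)."""
    labels = np.asarray(labels)
    m = K.shape[0]
    classes = np.unique(labels)
    # Step 1: eigenvectors of W. For W in (13) these are the class
    # indicator vectors; remove their component along the all-ones
    # vector y_0 and orthonormalize, then drop the redundant direction.
    Y = np.stack([(labels == k).astype(float) for k in classes], axis=1)
    ones = np.ones(m) / np.sqrt(m)
    Y -= np.outer(ones, ones @ Y)               # Gram-Schmidt against y_0
    Q, _ = np.linalg.qr(Y)
    Y = Q[:, :len(classes) - 1]                 # c - 1 orthogonal responses
    # Step 2: regularized linear systems (15), one per response column.
    A = np.linalg.solve(K + delta * np.eye(m), Y)
    return A                                    # columns are the alpha's

# The (c-1)-dimensional embedding of the training samples is Z = K @ A.
```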

3. Feature Extraction of the Error-Diffused Halftone Images

Since its introduction in 1976, the error diffusion algorithm has attracted widespread attention in the field of printing applications. It processes the pixels of halftone images with neighborhood processing algorithms instead of point processing algorithms. We now extract the features of the error-diffused halftone images, which are produced using the six popular error diffusion filters mentioned in Section 1.

3.1. Statistical Characteristics of the Error-Diffused Halftone Images. Assume that $f_g(x, y)$ is the gray value of the pixel located at position $(x, y)$ in the original image and $F(x, y)$ is the value of the pixel located at position $(x, y)$ in the error-diffused halftone image. For the original image, all the pixels are first normalized to the range $[0, 1]$. Then the pixels of the normalized image are converted to the error-diffused image $F$ line by line; that is to say, if $f_g(x, y) \ge T_0$, then $F(x, y)$ is 1; otherwise $F(x, y)$ is 0, where $T_0$ is the threshold value. The quantization error produced at $(x, y)$ is diffused ahead to some subsequent pixels that have not yet been processed. Therefore, for such a subsequent pixel, the comparison will be implemented between $T_0$ and the sum of $f_g(x, y)$ and the diffusion error $e$. A template matrix can be built from the error diffusion mode and the error diffusion coefficients, as shown for the Stucki error diffusion described in Section 1: (a) the error diffusion filter and (b) the error diffusion coefficients, which represent the proportions of the diffused errors. If a coefficient is zero, then the corresponding pixel does not receive any diffusion error. According to the Stucki error diffusion, pixel $A$ receives more diffusion error than pixel $B$; that is to say, $O$-$A$ has a larger probability of becoming a 1-0 pixel pair than $O$-$B$. The reasons are as follows. Suppose that the pixel value of $O$ is $0 \le p_O \le 1$ and pixel $O$ has been processed by the thresholding method according to the following equation:
\[
q_O =
\begin{cases}
1, & p_O \ge T_0, \\
0, & \text{else}.
\end{cases} \tag{17}
\]
In general, the threshold $T_0$ is set to 0.5. According to the template of the Stucki error diffusion, the diffusion error is $e = p_O - q_O$; the new value of pixel $A$ is $p_A = f_g(x_A, y_A) + 8e/42$ and the new value of pixel $B$ is $p_B = f_g(x_B, y_B) + e/42$, where $f_g(x_A, y_A)$ and $f_g(x_B, y_B)$ are the original values of pixels $A$ and $B$, respectively.

Since the value of each pixel in the error-diffused halftone image can only be 0 or 1, there are four kinds of pixel pairs in the halftone image: 0-1, 0-0, 1-0, and 1-1. Pixel pairs 0-1 and 1-0 are collectively treated as 1-0 pixel pairs because of their interchangeability; therefore, there are essentially only three kinds of pixel pairs: 0-0, 1-0, and 1-1. In this paper, three statistical matrices of size $L \times L$, referred to as $M_{00}$, $M_{10}$, and $M_{11}$, are used to store the numbers of the different pixel pairs with different neighboring distances and different directions ($L$ is an odd number satisfying $L = 2R + 1$, where $R$ is the maximum neighboring distance). Suppose that the center entry of the statistical matrix template covers pixel $O$ of the error-diffused halftone image of size $W \times H$ and the other entries overlap the neighborhood pixels $F(u, v)$. Then we can compute three statistics on the 1-0, 1-1, and 0-0 pixel pairs within the scope of this statistical matrix template. As the position $(i, j)$ of pixel $O$ moves over the image, the matrices $M_{00}$, $M_{10}$, and $M_{11}$, initialized to zero, are updated according to
\[
\begin{aligned}
M_{11}(x, y) &= M_{11}(x, y) + 1, && \text{if } F(i, j) = 1,\ F(u, v) = 1, \\
M_{00}(x, y) &= M_{00}(x, y) + 1, && \text{if } F(i, j) = 0,\ F(u, v) = 0, \\
M_{10}(x, y) &= M_{10}(x, y) + 1, && \text{if } F(i, j) = 1,\ F(u, v) = 0 \text{ or } F(i, j) = 0,\ F(u, v) = 1,
\end{aligned} \tag{18}
\]
where $u = i - R + x - 1$, $v = j - R + y - 1$, $1 \le i, u \le W$, $1 \le j, v \le H$, $1 \le x \le L$, and $1 \le y \le L$. After normalization, the three statistical matrices are ultimately obtained as the statistical feature descriptor of the error-diffused halftone images.
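For illustration, the counting rule (18) can be implemented directly as follows (a minimal, unoptimized sketch with hypothetical names; 0-based array indices replace the 1-based indices of (18)):

```python
import numpy as np

def pixel_pair_matrices(F, R=5):
    """Count 0-0, 1-0, and 1-1 pixel pairs per (18).
    F is a binary (0/1) halftone image of shape (W, H); L = 2R + 1."""
    L = 2 * R + 1
    W, H = F.shape
    M00 = np.zeros((L, L)); M10 = np.zeros((L, L)); M11 = np.zeros((L, L))
    for i in range(W):
        for j in range(H):
            for x in range(L):
                for y in range(L):
                    u, v = i - R + x, j - R + y      # neighbor coordinates
                    if 0 <= u < W and 0 <= v < H:
                        if F[i, j] == 1 and F[u, v] == 1:
                            M11[x, y] += 1
                        elif F[i, j] == 0 and F[u, v] == 0:
                            M00[x, y] += 1
                        else:                        # the mixed 1-0 case
                            M10[x, y] += 1
    return M00, M10, M11
```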

3.2. Process of Statistical Feature Extraction of Halftone Images. According to the analysis described above, the process of statistical feature extraction of the error-diffused halftone images can be represented as follows (a code sketch of these steps is given after Step 5).

Step 1. Input the error-diffused halftone image $F$ and divide $F$ into several blocks $B_i$ ($i = 1, 2, \ldots, H$) of the same size $K \times K$, where $K$ satisfies $K \ge \min(T_i)$ and $T_i$ ($1 \le i \le 6$) denotes the size of the template matrix of the $i$th error diffusion filter.

Step 2. Initialize the statistical feature matrix $M$ (including $M_{00}$, $M_{10}$, $M_{11}$) as the zero matrix and let $i = 1$.

Step 3. Obtain the statistical matrix $M_i$ of block $B_i$ according to (18) and update $M$ using the equation $M = M + M_i$.

Step 4. Set $i = i + 1$. If $i \le H$, then return to Step 3. Otherwise, go to Step 5.

Step 5. Normalize $M$ such that it satisfies $\sum_{x=1}^{L}\sum_{y=1}^{L} M(x, y) = 1$ and $M(x, y) \ge 0$.
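Under the same assumptions, Steps 1-5 can be sketched by reusing the hypothetical `pixel_pair_matrices` helper above; the block size `K` below is a placeholder value:

```python
import numpy as np

def block_statistics(F, K=16, R=5):
    """Steps 1-5 of Section 3.2: accumulate the statistics matrices
    over K x K blocks of F, then normalize each to sum to 1."""
    L = 2 * R + 1
    M = [np.zeros((L, L)) for _ in range(3)]       # M00, M10, M11
    W, H = F.shape
    for r0 in range(0, W - K + 1, K):              # Step 1: divide into blocks
        for c0 in range(0, H - K + 1, K):
            block = F[r0:r0 + K, c0:c0 + K]
            Mi = pixel_pair_matrices(block, R)     # Step 3: per-block counts
            for M_acc, M_blk in zip(M, Mi):        # M = M + M_i
                M_acc += M_blk
    return [Mk / Mk.sum() for Mk in M]             # Step 5: normalize
```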

According to the process described above, the statistical features of the error-diffused halftone image $F$ are extracted by dividing $F$ into image patches, which is significantly different from other patch-based feature extraction methods. For example, in [20], the brightness and contrast of the image patches are normalized by a $Z$-score transformation, and whitening (also called "sphering") is used to rescale the normalized data to remove the correlations between nearby pixels (i.e., low-frequency variations in the images), because these correlations tend to be very strong even after brightness and contrast normalization. In this paper, by contrast, the features of the patches are extracted by counting statistical measures of the different pixel pairs (0-0, 1-0, and 1-1) within a moving statistical matrix template, and they are optimized using the method described in Section 3.3.

3.3. Extraction of the Class Feature Matrix. The statistics matrices $M_{00}^i, M_{10}^i, M_{11}^i$ ($i = 1, 2, \ldots, N$), once extracted, can be used as the input of other algorithms such as support vector machines and neural networks. However, the curse of dimensionality could occur due to the high dimension of $M_{00}$, $M_{10}$, and $M_{11}$, possibly rendering the classification ineffective. Thereby, six class feature matrices $G_1, G_2, \ldots, G_6$ are designed in this paper for the error-diffused halftone images produced by the six error diffusion filters mentioned above. Then a gradient descent method can be used to optimize these class feature matrices.

$N = 6 \times n$ error-diffused halftone images can be derived from $n$ original images using the six error diffusion filters, respectively. Then $N$ statistics matrices $M_i$ ($M_{00}^i, M_{10}^i, M_{11}^i$) ($i = 1, 2, \ldots, N$) can be extracted as the samples from the $N$ error-diffused halftone images using the algorithm mentioned in Section 3.2. Subsequently, we label these matrices as $\mathrm{label}(M_1), \mathrm{label}(M_2), \ldots, \mathrm{label}(M_N)$ to denote the types of the error diffusion filters used to produce the error-diffused halftone images. Given the $i$th sample $M_i$ as the input, the target output vector $t_i = [t_{i1}, \ldots, t_{i6}]$ ($i = 1, \ldots, N$), and the class feature matrices $G_1, G_2, \ldots, G_6$, the square error $e_i$ between the actual output and the target output can be derived according to
\[
e_i = \sum_{j=1}^{6} \bigl(t_{ij} - M_i \bullet G_j\bigr)^2, \tag{19}
\]
where
\[
t_{ij} =
\begin{cases}
1, & \text{if } j = \mathrm{label}(M_i),\ j = 1, 2, \ldots, 6, \\
0, & \text{else}.
\end{cases} \tag{20}
\]

The derivatives with respect to $G_j(x, y)$ in (19) can be explicitly calculated as
\[
\frac{\partial e_i}{\partial G_j(x, y)} = -2\,\bigl(t_{ij} - M_i \bullet G_j\bigr)\, M_i(x, y), \tag{21}
\]
where $1 \le x \le L$, $1 \le y \le L$, and $\bullet$ is the dot product of matrices, defined for any matrices $A$ and $B$ of the same size $C \times D$ as
\[
A \bullet B = \sum_{u=1}^{C} \sum_{v=1}^{D} A(u, v) \times B(u, v). \tag{22}
\]

The dot product of matrices is commutative; that is to say, $A \bullet B = B \bullet A$. Then the iteration equation (23) can be obtained using the gradient descent method:
\[
G_j^{k+1}(x, y) = G_j^{k}(x, y) - \eta \frac{\partial e_i}{\partial G_j^{k}(x, y)} = G_j^{k}(x, y) + 2\eta\,\bigl(t_{ij} - M_i \bullet G_j^{k}\bigr)\, M_i(x, y), \tag{23}
\]

where $\eta$ is the learning factor and $k$ denotes the $k$th iteration. The purpose of learning is to seek the optimal matrices $G_j$ ($j = 1, 2, \ldots, 6$) by minimizing the total square error $e = \sum_{i=1}^{N} e_i$, and the process of seeking the optimal matrices $G_j$ can be described as follows (a code sketch follows the steps).

Step 1. Initialize the parameters: the numbers of iterations inner and outer, the iteration variables $t = 0$ and $i = 1$, the nonnegative thresholds $\varepsilon_1$ and $\varepsilon_2$ used to indicate the end of the iterations, the learning factor $\eta$, the total number of samples $N$, and the class feature matrices $G_j$ ($j = 1, 2, \ldots, 6$).

Step 2. Input the statistics matrix $M_i$, let $G_j^0 = G_j$ ($j = 1, \ldots, 6$) and $k = 0$, and execute the following three substeps:

(1) According to (23), compute $G_j^{k+1}$ ($j = 1, \ldots, 6$).
(2) Compute $e_i^{k+1}$ and $\Delta e_i^{k+1} = |e_i^{k+1} - e_i^{k}|$.
(3) If $k > $ inner or $\Delta e_i^{k+1} < \varepsilon_1$, then set $e_i = e_i^{k+1}$ and go to Step 3; otherwise set $k = k + 1$ and return to substep (1).

Step 3. Set $G_j = G_j^{k+1}$ ($j = 1, \ldots, 6$). If $i = N$, then go to Step 4. Otherwise, set $i = i + 1$ and go to Step 2.

Step 4. Compute the total error $e^t = \sum_{i=1}^{N} e_i$. If $|e^t - e^{t-1}| < \varepsilon_2$ or $t = $ outer, then end the algorithm. Otherwise, set $t = t + 1$, reset $i = 1$, and go to Step 2.
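A compact NumPy rendering of the update (23) and of Steps 1-4 might look as follows (our own sketch; the hyperparameter values are placeholders, each sample is represented here by a single statistics matrix, and labels are assumed 0-based):

```python
import numpy as np

def learn_class_features(M_list, labels, n_classes=6, L=11, eta=0.1,
                         inner=50, outer=20, eps1=1e-8, eps2=1e-8):
    """Seek G_1..G_6 minimizing e = sum_i e_i of (19) via the update (23)."""
    G = [np.zeros((L, L)) for _ in range(n_classes)]
    e_prev = np.inf
    for t in range(outer):                         # outer loop (Step 4)
        e_total = 0.0
        for M_i, lab in zip(M_list, labels):       # sweep samples (Steps 2-3)
            target = np.eye(n_classes)[lab]        # t_ij of (20)
            e_old = np.inf
            for k in range(inner):                 # inner loop
                out = np.array([(M_i * Gj).sum() for Gj in G])  # M_i . G_j
                for j in range(n_classes):         # gradient step (23)
                    G[j] = G[j] + 2.0 * eta * (target[j] - out[j]) * M_i
                out = np.array([(M_i * Gj).sum() for Gj in G])
                e_new = ((target - out) ** 2).sum()             # e_i^{k+1}
                if abs(e_new - e_old) < eps1:      # substep (3)
                    break
                e_old = e_new
            e_total += e_new                       # accumulate e_i
        if abs(e_total - e_prev) < eps2:           # stopping test of Step 4
            break
        e_prev = e_total
    return G
```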

4. Classification of Error-Diffused Halftone Images Using the Nearest Centroids Classifier

This section describes the details of classifying the error-diffused halftone images using the spectral regression kernel discriminant analysis, as follows.

Step 1. Input the $N$ error-diffused halftone images produced by the six error diffusion filters and extract the statistical feature matrices $M_i$, including $M_{00}^i, M_{10}^i, M_{11}^i$ ($i = 1, 2, \ldots, N$), using the method presented in Section 3.2.

Step 2. According to the steps described in Section 3.3, convert all the statistical feature matrices $M_i$ ($M_{00}^i, M_{10}^i, M_{11}^i$, $i = 1, 2, \ldots, N$) to the class feature matrices $G_i$, including $G_{00}^i, G_{10}^i, G_{11}^i$ ($i = 1, 2, \ldots, N$), correspondingly. Then vectorize $G_{00}^i, G_{10}^i, G_{11}^i$ ($i = 1, 2, \ldots, N$) column by column, respectively.

Step 3. All the vectorized class feature matrices $G_{00}^i, G_{10}^i, G_{11}^i$ ($i = 1, 2, \ldots, N$) are used to construct the sample feature matrices $F_{00}, F_{10}, F_{11}$ of size $N \times M$ ($M = L \times L$), respectively.

Step 4. A label matrix of size $1 \times N$ is built to record the type to which each error-diffused halftone image belongs.

Step 5. The first $m$ feature vectors $G_{00}^1, G_{00}^2, \ldots, G_{00}^m$ of the sample feature matrix $F_{00}$ are taken as the training samples (the first $m$ feature vectors of $F_{10}$, $F_{11}$, or $F_{\mathrm{all}}$, which is the composition of $F_{00}$, $F_{10}$, and $F_{11}$, can also be used as the training samples). Reduce the dimension of these training samples using the spectral regression discriminant analysis. The process of dimension reduction can be described by the following three substeps.

(1) Produce orthogonal vectors. Let
\[
y_k = [\underbrace{0, \ldots, 0}_{\sum_{i=1}^{k-1} m_i},\ \underbrace{1, \ldots, 1}_{m_k},\ \underbrace{0, \ldots, 0}_{\sum_{i=k+1}^{c} m_i}]^T, \quad k = 1, 2, \ldots, c, \tag{24}
\]
and let $y_0 = [1, 1, \ldots, 1]^T$ be the vector with all elements being 1. Take $y_0$ as the first eigenvector of the weight matrix $W$ and use the Gram-Schmidt process to orthogonalize the other eigenvectors. Remove the vector $y_0$, leaving $c - 1$ eigenvectors of $W$, denoted as follows:
\[
\{y_k\}_{k=1}^{c-1} \quad (y_i^T y_0 = 0,\ y_i^T y_j = 0,\ i \ne j). \tag{25}
\]

(2) Append an element 1 to the end of each input vector $G_{00}^i$ ($i = 1, 2, \ldots, m$) and obtain $c - 1$ vectors $\{a_k\}_{k=1}^{c-1} \in R^{M+1}$, where $a_k$ is the solution of the following regularized least squares problem:
\[
a_k = \arg\min_{a} \left( \sum_{i=1}^{m} \bigl(a^T G_{00}^i - y_k^i\bigr)^2 + \alpha \|a\|^2 \right), \tag{26}
\]
where $y_k^i$ is the $i$th element of the eigenvector $y_k$ and $\alpha$ is the contraction (shrinkage) parameter.

(3) Let $A = [a_1, \ldots, a_{c-1}]$ be the transformation matrix of size $(M + 1) \times (c - 1)$. Perform the dimension reduction on the sample features $G_{00}^1, G_{00}^2, \ldots, G_{00}^m$ by mapping them to the $(c-1)$-dimensional subspace as follows:
\[
x \longrightarrow z = A^T \begin{bmatrix} x \\ 1 \end{bmatrix}. \tag{27}
\]

Step 6. Compute the mean values of the samples in the different classes according to the following equation:
\[
\mathrm{aver}_i = \frac{\sum_{s=1}^{m_i} G_{00}^s}{m_i} \quad (i = 1, 2, \ldots, 6), \tag{28}
\]
where $m_i$ is the number of samples in the $i$th class and $\sum_{i=1}^{6} m_i = m$. Let $\mathrm{aver}_i$ be the class-centroid of the $i$th class.

Step 7. The remaining $N - m$ samples are taken as the testing samples, and the dimension reduction is implemented for them using the method described in Step 5.

Step 8. Compute the squared distance $|G_{00}^s - \mathrm{aver}_i|^2$ ($s = m + 1, m + 2, \ldots, N$; $i = 1, 2, \ldots, 6$) between each testing sample $G_{00}^s$ and the different class-centroids $\mathrm{aver}_i$; according to the nearest centroids classifier, the sample $G_{00}^s$ is assigned to class $i$ if $i = \arg\min_i |G_{00}^s - \mathrm{aver}_i|^2$.

In Step 8, the weak classifier (i.e., the nearest centroid classifier) is used to classify the error-diffused halftone images because it is simple and easy to implement. At the same time, in order to show that the class feature matrices, extracted by the method of Section 3 and processed by the spectral regression discriminant analysis, are well suited for the classification of error-diffused halftone images, this weak classifier is used in this paper instead of a strong classifier [20] such as a support vector machine or a deep neural network classifier. A sketch of the whole pipeline follows.
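The following schematic pipeline (our own sketch, reusing the hypothetical `gaussian_kernel` and `sr_kda` helpers from Section 2; note that it uses the kernelized embedding of Section 2, whereas Step 5 above regresses directly on the feature vectors with an appended constant, cf. (26)-(27)) illustrates Steps 5-8:

```python
import numpy as np

def classify_halftones(train_X, train_y, test_X, sigma=1.0, delta=0.01):
    """Steps 5-8: SR embedding, class centroids, nearest-centroid rule."""
    train_y = np.asarray(train_y)
    K = gaussian_kernel(train_X, train_X, sigma)   # kernel on training set
    A = sr_kda(K, train_y, delta)                  # Step 5: projections
    Z_train = K @ A                                # embedded training data
    classes = np.unique(train_y)
    centroids = np.stack([Z_train[train_y == k].mean(axis=0)
                          for k in classes])       # Step 6: class-centroids
    Z_test = gaussian_kernel(test_X, train_X, sigma) @ A   # Step 7: embed
    d2 = ((Z_test[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d2.argmin(axis=1)]              # Step 8: nearest centroid
```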

5. Experimental Analysis and Results

We implement various experiments to verify the efficiency of our methods in classifying error-diffused halftone images. The processor is an Intel(R) Pentium(R) CPU G2030 @ 3.00 GHz, the memory of the computer is 2.0 GB, the operating system is Windows 7, and the experimental simulation software is Matlab R2012a. In our experiments, all the original images are downloaded from http://decsai.ugr.es/cvg/dbimagenes and http://msp.ee.ntust.edu.tw. About 4000 original images have been downloaded, and they are converted into 24000 error-diffused halftone images produced by the six different error diffusion filters.

5.1. Classification Accuracy Rate of the Error-Diffused Halftone Images

5.1.1. Effect of the Number of Samples. This subsection analyzes the effect of the number of feature samples on the classification. When $L = 11$ and the feature matrices $F_{00}$, $F_{10}$, $F_{11}$, $F_{\mathrm{all}}$ are taken as the input data, respectively, the accuracy rate of the classification under different conditions is shown in Tables 1 and 2. Table 1 shows the classification accuracy rates under different numbers of training samples when the total number of samples is 12000; Table 2 shows them when the total number of samples is 24000. The numbers in the first line of each table are the sizes of the training sample sets.

According to Tables 1 and 2, the classification accuracy rates with 12000 samples are higher than those with 24000 samples. Moreover, the classification accuracy rates improve with the increase of the proportion of the training samples when the number of the training samples is lower than 80% of the sample size, and the highest classification accuracy rates are achieved when the number of the training samples is about 80% of the sample size.


Table 1: Classification accuracy rates (%) (12000 samples).

        2000    3000    4000    5000    6000    7000    8000    9000    10000
F_00    97.750  97.500  99.050  99.150  99.150  99.150  99.100  98.950  98.900
F_10    99.950  99.950  100.00  100.00  100.00  100.00  100.00  100.00  100.00
F_11    99.500  99.250  99.350  99.400  99.200  99.350  99.300  99.300  99.450
F_all   99.100  99.100  99.450  99.500  99.650  99.650  99.650  99.650  99.600

Table 2: Classification accuracy rates (%) (24000 samples).

        14000   15000   16000   17000   18000   19000   20000   21000   22000
F_00    95.810  95.878  95.413  95.286  94.933  98.580  98.100  98.267  98.650
F_10    97.600  97.378  97.150  96.957  96.450  99.680  99.600  99.533  99.300
F_11    96.420  96.256  95.925  95.614  95.000  99.040  98.825  98.700  98.400
F_all   97.230  97.100  96.888  96.686  96.233  99.680  99.600  99.533  99.350

Table 3: Classification accuracy rates obtained by different algorithms (%).

F_00 + SR  F_10 + SR  F_11 + SR  F_all + SR  ECF + BP  LMS + Bayes  G_10 + ML  G_11 + ML
98.40      99.53      98.74      99.54       90.04     90.38        98.32      96.15

Table 4: The training and testing times under different sample sizes (in seconds).

                     14000   15000   16000   17000   18000   19000   20000   21000   22000
F_00  Training time  6.5208  7.8317  9.4660  11.047  12.932  15.018  17.894  20.511  51.354
      Testing time   6.3742  6.2583  5.8530  5.5251  4.8792  4.4947  3.7872  3.0375  4.3393
F_10  Training time  6.5851  7.8796  9.5113  11.070  13.097  15.076  17.398  19.895  24.010
      Testing time   6.3674  6.0804  5.7496  5.503   4.9287  4.3261  3.6775  3.0353  2.9631
F_11  Training time  6.5662  7.9936  9.5173  11.069  13.033  15.063  17.441  19.834  23.996
      Testing time   6.5226  6.1817  5.7826  5.3778  4.8531  4.5513  3.7137  2.9735  3.0597
F_all Training time  6.8902  8.2554  9.9189  11.527  13.534  15.671  17.994  20.485  28.836
      Testing time   10.200  9.8393  9.3112  8.7212  8.016   6.902   5.8556  4.9363  3.6921

In addition, from Tables 1 and 2, we can also see that $F_{00}$, $F_{10}$, and $F_{11}$ can each be used as the input data alone; they can also be combined into the input data $F_{\mathrm{all}}$, based on which the classification accuracy rates are also high.

5.1.2. Comparison of Classification Accuracy Rates. To analyze the effectiveness of our classification algorithm, the mean values of the classification accuracy rates over the four rightmost columns of each row of Table 2 are computed. The algorithm SR outperforms the other baselines in achieving higher classification accuracy rates when compared with LMS + Bayes (the method composed of the least mean square and Bayes methods), ECF + BP (the method based on the enhanced correlation function and a BP neural network), and ML (the maximum likelihood method). According to Table 3, the mean values of the classification accuracy rates obtained using SR with the different features $F_{00}$, $F_{10}$, $F_{11}$, and $F_{\mathrm{all}}$, respectively, are higher than the mean values obtained by the other algorithms mentioned above.

5.2. Effect of the Size of the Statistical Feature Template on the Classification Accuracy. Here, the different features $F_{00}$, $F_{10}$, $F_{11}$ of the 24000 error-diffused halftone images are used to test the effect of the size of the statistical feature template. $F_{00}$, $F_{10}$, $F_{11}$ are constructed using the corresponding class feature matrices $G_{00}$, $G_{10}$, $G_{11}$ with different sizes $L \times L$ ($L = 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25$). Figure 1 shows that the classification accuracy rate achieves its highest value when $L = 11$, no matter which feature is selected for the experiments.

5.3. Time-Consumption of the Classification. Now experiments are implemented to test the time-consumption of the error-diffused halftone image classification under the condition that the total number of samples is 24000 and $L = 11$. The time-consumption of the classification includes the training time and the testing time. From Table 4, we can see that the training time increases with the number of training samples; on the contrary, the testing time decreases as the number of training samples increases.

[Figure 1: Classification accuracy rates based on different features with different sample sizes. (a) Rates based on $F_{00}$ ($L = 5, 7, 9, 11, 13, 15$); (b) rates based on $F_{00}$ ($L = 17, 19, 21, 23, 25$); (c) rates based on $F_{10}$ ($L = 5, 7, 9, 11, 13, 15$); (d) rates based on $F_{10}$ ($L = 17, 19, 21, 23, 25$); (e) rates based on $F_{11}$ ($L = 5, 7, 9, 11, 13, 15$); (f) rates based on $F_{11}$ ($L = 17, 19, 21, 23, 25$).]

To compare the time-consumption of the classification method proposed in this paper with other algorithms, such as the backpropagation (BP) neural network, the radial basis function (RBF) neural network, and the support vector machine (SVM), all the experiments are implemented using $F_{10}$, which is divided into two parts: 12000 training samples and 12000 testing samples ($L = 11$). From the values listed in Table 5, we can see that SR achieves the minimal sum of training and testing time.


Table 5: Time-consumption of different algorithms (in seconds).

               ML      RBF     BP      SVM     SR
Training time  110.42  763.90  296.67  492.97  43.9337
Testing time   120.00  276.00  360.00  624.00  63.161

Table 6: Classification accuracy rates under different variances (%).

Variance  14000   15000   16000   17000   18000   19000   20000   21000   22000
0.01      95.570  95.422  95.425  95.529  95.217  98.440  98.200  98.033  97.950
0.10      95.370  95.233  95.188  95.157  94.867  98.200  98.000  97.767  98.000
0.50      94.580  94.522  94.600  94.671  94.567  98.320  98.225  98.033  98.550
1.00      93.060  93.067  93.013  93.343  93.200  96.980  97.175  96.800  97.350

Table 7: Classification accuracy rates using other algorithms under different variances (%).

Variance  ECF + BP  LMS + Bayes  ML
0.01      89.5000   88.7577     97.9981
0.10      88.1333   78.4264     96.8315
0.50      79.7333   40.2682     82.9169
1.00      61.5000   33.8547     64.3861

In addition, Table 5 implies that the time-consumption of the classifiers based on neural networks, such as those based on RBF or BP, is much higher than that of the other algorithms, especially SR. This is because these neural network algorithms essentially use gradient descent methods to optimize the associated nonconvex problems, which are well known to converge very slowly. The classifier based on SR, however, performs the classification task by directly computing the squared distance between each testing sample and the different class-centroids. Hence, its time-consumption is very low.

5.4. The Experiment on Noise Attack Resistance. In actual operation, the error-diffused halftone images are often polluted by noise before the inverse transform. In order to test the ability of SR to resist noise attacks, Gaussian noises with mean 0 and different variances are embedded into the error-diffused halftone images. Then the classification experiments are carried out using the algorithm proposed in this paper, and the experimental results are listed in Table 6. According to Table 6, the accuracy rates decrease as the variance increases. Compared with the accuracy rates listed in Table 7, achieved by the other algorithms ECF + BP, LMS + Bayes, and ML, our classification method has obvious advantages in resisting noise.

6. Conclusion

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then the spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with the added benefit of robustness in tackling noise. A very interesting direction for future work is to handle the disturbances possibly introduced by other attacks, such as image scaling and rotation, in the process of error-diffused halftone image classification.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (Grants nos. 61170102 and 61271140), the Scientific Research Fund of Hunan Provincial Education Department, China (Grant no. 15A049), the Education Department Fund of Hunan Province in China (Grants nos. 15C0402, 15C0395, and 13C036), and the Science and Technology Planning Project of Hunan Province in China (Grant no. 2015GK3024).

References

[1] Y.-M. Kwon, M.-G. Kim, and J.-L. Kim, "Multiscale rank-based ordered dither algorithm for digital halftoning," Information Systems, vol. 48, pp. 241-247, 2015.
[2] Y. Jiang and M. Wang, "Image fusion using multiscale edge-preserving decomposition based on weighted least squares filter," IET Image Processing, vol. 8, no. 3, pp. 183-190, 2014.
[3] Z. Zhao, L. Cheng, and G. Cheng, "Neighbourhood weighted fuzzy c-means clustering algorithm for image segmentation," IET Image Processing, vol. 8, no. 3, pp. 150-161, 2014.
[4] Z.-Q. Wen, Y.-L. Lu, Z.-G. Zeng, W.-Q. Zhu, and J.-H. Ai, "Optimizing template for lookup-table inverse halftoning using elitist genetic algorithm," IEEE Signal Processing Letters, vol. 22, no. 1, pp. 71-75, 2015.
[5] P. C. Chang and C. S. Yu, "Neural net classification and LMS reconstruction to halftone images," in Visual Communications and Image Processing '98, vol. 3309 of Proceedings of SPIE, pp. 592-602, The International Society for Optical Engineering, January 1998.
[6] Y. Kong, P. Zeng, and Y. Zhang, "Classification and recognition algorithm for the halftone image," Journal of Xidian University, vol. 38, no. 5, pp. 62-69, 2011 (Chinese).
[7] Y. Kong, A study of inverse halftoning and quality assessment schemes [Ph.D. thesis], School of Computer Science and Technology, Xidian University, Xi'an, China, 2008.
[8] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Inverse halftoning based on the Bayesian theorem," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 1077-1084, 2011.
[9] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Halftone image classification using LMS algorithm and naive Bayes," IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2837-2847, 2011.
[10] D. L. Lau and G. R. Arce, Modern Digital Halftoning, CRC Press, Boca Raton, Fla, USA, 2nd edition, 2008.
[11] Image Dithering: Eleven Algorithms and Source Code, http://www.tannerhelland.com/4660/dithering-eleven-algorithms-source-code.
[12] R. A. Ulichney, "Dithering with blue noise," Proceedings of the IEEE, vol. 76, no. 1, pp. 56-79, 1988.
[13] Y.-H. Fung and Y.-H. Chan, "Embedding halftones of different resolutions in a full-scale halftone," IEEE Signal Processing Letters, vol. 13, no. 3, pp. 153-156, 2006.
[14] Z.-Q. Wen, Y.-X. Hu, and W.-Q. Zhu, "A novel classification method of halftone image via statistics matrices," IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4724-4736, 2014.
[15] D. Cai, X. He, and J. Han, "Efficient kernel discriminant analysis via spectral regression," in Proceedings of the 7th IEEE International Conference on Data Mining (ICDM '07), pp. 427-432, Omaha, Neb, USA, October 2007.
[16] D. Cai, X. He, and J. Han, "Speed up kernel discriminant analysis," The International Journal on Very Large Data Bases, vol. 20, no. 1, pp. 21-33, 2011.
[17] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "A general soft label based linear discriminant analysis for semi-supervised dimensionality reduction," Neural Networks, vol. 55, pp. 83-97, 2014.
[18] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "Soft label based linear discriminant analysis for image recognition and retrieval," Computer Vision and Image Understanding, vol. 121, no. 1, pp. 86-99, 2014.
[19] L. Zhang and F.-C. Tian, "A new kernel discriminant analysis framework for electronic nose recognition," Analytica Chimica Acta, vol. 816, pp. 8-17, 2014.
[20] B. Gu, V. S. Sheng, K. Y. Tay, W. Romano, and S. Li, "Incremental support vector learning for ordinal regression," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1403-1416, 2015.


Page 3: Research Article Classification of Error-Diffused Halftone ...features from hal one images, based on which the hal one images are divided into nine categories using a decision tree

Advances in Multimedia 3

fails for nonlinear problems To deal with this limitation theapproach called kernel discriminant analysis (KDA) [19] hasbeen proposed

21 Overview of Kernel Discriminant Analysis Suppose thatwe are given a sample set 119909

1 1199092 119909

119898 of the error-diffused

halftone images with 1199091 1199092 119909

119898belonging to different

class 119862119894(119894 = 1 2 119870) respectively Using a nonlinear

mapping function 120601 the samples of the halftone images inthe input space 119877119899 can be projected to a high-dimensionalseparable feature space 119891 namely 119877119899 rarr 119891 119909 rarr 120601(119909) Afterextracting features the error-diffused halftone images willbe classified along the projection direction along which thewithin-class scatter is minimal and the between-class scatteris maximal For a proper 120601 an inner product described as⟨sdot sdot⟩ can be defined on119891 to form a reproducing kernel Hilbertspace That is to say ⟨120601(119909

119894) 120601(119909119895)⟩ = 120581(119909

119894 119909119895) where 120581(sdot sdot) is

a positive semidefinite kernel function In the feature space119891let 119878120601119887be the between-class scattermatrix let 119878120601

119908be thewithin-

class scatter matrix and let 119878120601119905be the total scatter matrix

119878120601119887=119888

sum119896=1

119898119896(120583119896120601minus 120583120601) (120583119896120601minus 120583120601)119879

119878120601119908=119888

sum119896=1

(119898119896

sum119894=1

(120601 (119909119896119894) minus 120583119896120601) (120601 (119909119896

119894) minus 120583119896120601)119879)

119878120601119905= 119878120601119887+ 119878120601119908=119898

sum119894=1

(120601 (119909119894) minus 120583120601) (120601 (119909

119894) minus 120583120601)119879

(9)

where 119898119896is the number of the samples in the 119896th class

120601(119909119896119894) is the 119894th sample of the 119896th class in the feature space

119891 120583119896120601= (1119898

119896) sum119873119894119894=1120601(119909119896119894) is the centroid of the 119896th class

and 120583120601= (1119873)sum119888

119896=1sum119898119896119894=1120601(119909119896119894) is the global centroid In the

feature space the aim of the discriminant analysis is to seekthe best projection direction namely the projective functionV to maximize the following objective function

Vopt = argmaxV119879119878120601119887V

V119879119878120601119908V (10)

Equation (10) can be solved by the eigenproblem 119878120601119887V = 120582119878120601

119905V

According to the theory of reproducing kernel Hilbert spacewe know that the eigenvectors are linear combinations of120601(119909119894) in the feature space 119891 there exist weight coefficients

119886119894(119894 = 1 2 119898) such that V120601 = sum119898

119894=1119886119894120601(119909119894) Let 119886 =

[1198861 1198862 119886

119898]119879 then it can be proved that (10) can be

rewritten as follows

119886opt = argmax 119886119879119870119882119870119886119886119879119870119870119886 (11)

The optimization problem of (11) is equal to the eigenproblem

119870119882119870119886 = 120582119870119870119886 (12)

where 119870 is the kernel matrix and 119870119894119895= 120581(119909

119894 119909119895) 119882 is the

weight matrix defined as follows

119908119894119895=

1119898119896

if 119909119894 119909119895belong to the 119896th class

0 others(13)

For sample 119909 the projective function in the feature space 119891can be described as

119891lowast (119909) = ⟨V 120601 (119909)⟩ =119898

sum119894=1

119886119894⟨120601 (119909119894) 120601 (119909)⟩

=119898

sum119894=1

119886119894120581 (119909119894 119909) = 119886119879120581 (sdot 119909)

(14)

22 Kernel Discriminant Analysis via Spectral Regression Toefficiently solve the eigenproblem of the kernel discriminantanalysis in (12) the following theorem will be used

Theorem 1 Let 119910 be the eigenvector of the eigenproblem119882119910 =120582119910 with eigenvalue 120582 If 119870120572 = 119910 then 120572 is the eigenvector ofeigenproblem (12) with the same eigenvalue 120582

According to Theorem 1 the projective function of thekernel discriminant analysis can be obtained according to thefollowing two steps

Step 1 Obtain 119910 by solving the eigenproblem in (12)

Step 2 Search eigenvector 120572which satisfies119870120572 = 119910 where119870is the positive semidefinite kernel matrix

As we know if 119870 is nonsingular then for any given 119910there exists a unique 120572 = 119870minus1119910 satisfying the linear equationdescribed in Step 2 If119870 is singular then the linear equationmay have infinite solutions or have no solution In this casewe can approximate 120572 by solving the following equation

(119870 + 120575119868) 120572 = 119910 (15)

where 120575 ge 0 is a regularization parameter and 119868 is the identitymatrix Combined with the projective function describedin (14) we can easily verify that the solution 120572lowast = (119870 +120575119868)minus1119910 given by (15) is the optimal solution of the followingregularized regression problem

120572lowast = min119891isin119865

119898

sum119894=1

(119891 (119909119894) minus 119910119894)2 + 120575 100381710038171003817100381711989110038171003817100381710038172119870 (16)

where 119910119894is the 119894th element of 119910 and 119865 is the reproducing

kernel Hilbert space induced from the Mercer kernel 119870with sdot

119870being the corresponding norm Due to the

essential combination of the spectral analysis and regressiontechniques in the above two-step approach the methodis named as spectral regression (SR) kernel discriminantanalysis

4 Advances in Multimedia

3 Feature Extraction of the Error-DiffusedHalftone Images

Since its introduction in 1976 the error diffusion algorithmhas attracted widespread attention in the field of printingapplications It deals with pixels of halftone images usinginstead of point processing algorithms the neighborhoodprocessing algorithms Nowwe will extract the features of theerror-diffused halftone images which are produced using thesix popular error diffusion filters mentioned in Section 1

31 Statistic Characteristics of the Error-Diffused HalftoneImages Assume that 119891

119892(119909 119910) is the gray value of the pixel

located at position (119909 119910) in the original image and 119865(119909 119910)is the value of the pixel located at position (119909 119910) in theerror-diffused halftone image For the original image all thepixels are firstly normalized to the range [0 1] Then thepixels of the normalized image are converted to the error-diffused image 119865 line by line that is to say if 119891

119892(119909 119910) ge 119879

0

119865(119909 119910) which is the value of the pixel located at position(119909 119910) in error-diffused image 119865 is 1 otherwise 119865(119909 119910) is 0where 119879

0is the threshold value The error between 119865(119909 119910)

and 1198790is diffused ahead to some subsequent pixels not

necessary to deal withTherefore for some subsequent pixelsthe comparison will be implemented between 119879

0and the

value which is the sum of 119891119892(119909 119910) and the diffusion error

119890 A template matrix can be built using the error diffusionmodes and the error diffusion coefficients as shown in theerror diffusion of Stucki described above for example (a) theerror diffusion filter and (b) the error diffusion coefficientswhich represent the proportion of the diffusion errors Ifthe coefficient is zero then the corresponding pixel does notreceive any diffusion errors According to the error diffusionof Stucki described above pixel119860 suffers frommore diffusionerrors than pixel 119861 that is to say119874-119860 has a larger probabilityto become 1-0 pixel pair than119874-119861 The reasons are as followsSuppose that the pixel value of 119874 is 0 le 119901

119874le 1 and pixel 119874

has been processed by the thresholding method according tothe following equation

119902119874= 1 119901

119874ge 1198790

0 else (17)

In general threshold1198790is set as 05 According to the template

shown in the error diffusion of Stucki described above we canknow that the diffusion error is 119890 = 119902

119874minus119901119874 the new value of

pixel119860 is 119901119860= 119891(119909

119860+119910119860)+811989042 and the new value of pixel

119861 is119901119861= 119891(119909

119861+119910119861)+11989042 where119891

119892(119909119860+119910119860) and119891

119892(119909119861+119910119861)

are the original values of pixels 119860 and 119861 respectivelySince the value of each pixel in the error-diffused halftone

image can only be 0 or 1 there are 4 kinds of pixel pairs inthe halftone image 0-1 0-0 1-0 and 1-1 Pixel pairs 0-1 and1-0 are collectively known as 1-0 pixel pairs because of theirexchange abilityTherefore there are only three kinds of pixelpairs essentially 0-0 1-0 and 1-1 In this paper three statisticalmatrices are used to store the number of different pixel pairswith different neighboring distances and different directionswhich are of size 119871 times 119871 and are referred to as119872

0011987210 and

11987211 respectively (119871 is an odd number satisfying 119871 = 2119877 + 1

and 119877 is the maximum neighboring distance) Suppose thatthe center entry of the statistical matrix template covers pixel119874 of the error-diffused halftone image with the size119882 lowast 119867and other entries overlap other neighborhood pixels 119865(119906 V)Then we can compute three statistics on 1-0 1-1 and 0-0 pixelpairs within the scope of this statistics matrix template Ifthe position (119894 119895) of pixel119874 changes continually the matrices1198720011987210 and119872

11with zero being the initial values can be

updated according to

11987211(119909 119910) = 119872

11(119909 119910) + 1

if 119865 (119894 119895) = 1 119865 (119906 V) = 111987200(119909 119910) = 119872

00(119909 119910) + 1

if 119865 (119894 119895) = 0 119865 (119906 V) = 011987210(119909 119910) = 119872

00(119909 119910) + 1

if 119865 (119894 119895) = 1 119865 (119906 V) = 0 or 119865 (119894 119895) = 0 119865 (119906 V) = 1

(18)

where 119906 = 119894minus119877+119909minus1 V = 119895minus119877+119910minus1 1 le 119894 119906 le 119882 1 le 119895 V le119867 1 le 119909 le 119871 and 1 le 119910 le 119871 After normalization the threestatistic matrices can be ultimately obtained as the statisticalfeature descriptor of the error-diffused halftone images

32 Process of Statistical Feature Extraction ofHalftone ImagesAccording to the analysis described above the process ofstatistical feature extraction of the error-diffused halftoneimages can be represented as follows

Step 1 Input the error-diffused halftone image 119865 and divide119865 into several blocks 119861

119894(119894 = 1 2 119867) with the same size

119870 times 119870Step 2 Initialize the statistical feature matrix 119872 (including119872001198721011987211) as the zero matrix and let 119894 = 1

Step 3 Obtain the statistical matrix119872119894of block 119861

119894according

to (18) and update119872 using the equation119872 = 119872 +119872119894

Step 4 Set 119894 = 119894+1 If 119894 le 119867 then return to Step 3 Otherwisego to Step 5Step 5 Normalize 119872 such that it satisfiessum119871119909=1sum119871119909=1119872(119909 119910) = 1 and 119872(119909 119910) ge 0 where 119870

satisfies119870 ge min(119879119894) and 119879

119894(1 le 119894 le 6) represents the size of

the template matrix of the 119894th error diffusion filter

According to the process described above we knowthat the statistical features of the error-diffused halftoneimage 119865 are extracted based on the method that divides119865 into image patches which is significantly different withother feature extraction methods based on image patchesFor example in [20] the brightness and contrast of theimage patches are normalized by 119885-score transformationand whitening (also called ldquospheringrdquo) is used to rescale thenormalized data to remove the correlations between nearbypixels (ie low-frequency variations in the images) because

Advances in Multimedia 5

these correlations tend to be very strong even after brightnessand contrast normalization However in this paper featuresof the patches are extracted based on counting statisticalmeasures of different pixel pairs (00 10 and 11) within amoving statistical matrix template and are optimized usingthe method described in Section 33

33 Extraction of the Class Feature Matrix The statis-tics matrices 119872119894

001198721198941011987211989411(119894 = 1 2 119873) after being

extracted can be used as the input of other algorithms suchas support vector machines and neural networks Howeverthe curse of dimensionality could occur due to the highdimension of119872

001198721011987211 making the classification effect

possibly not significant Thereby six class feature matrices1198661 1198662 119866

6are designed in this paper for the error-

diffused halftone images produced by the six error diffusionfiltersmentioned aboveThen a gradient descentmethod canbe used to optimize these class feature matrices119873 = 6 times 119899 error-diffused halftone images can

be derived from 119899 original images using the six errordiffusion filters respectively Then 119873 statistics matrices119872119894(119872119894001198721198941011987211989411) (119894 = 1 2 119873) can be extracted as

the samples from 119873 error-diffused halftone images usingthe algorithm mentioned in Section 32 Subsequently welabel thesematrices as label(119872

1) label(119872

2) label(119872

119873) to

denote the types of the error diffusion filters used to producethe error-diffused halftone image Given the 119894th sample 119872

119894

as the input the target out vector 119905119894= [1199051198941 119905

1198946] (119894 =

1 119873) and the class feature matrices 1198661 1198662 119866

6 the

square error 119890119894between the actual output and the target

output can be derived according to

119890119894=6

sum119895=1

(119905119894119895minus119872119894∙ 119866119895)2 (19)

where

119905119894119895= 1 if 119895 = label (119872

119894) 119895 = 1 2 6

0 else (20)

The derivatives of 119866119895(119909 119910) in (19) can be explicitly calculated

as

120597119890119894

120597119866119895(119909 119910) = minus2 (V119894119895 minus119872119894 ∙ 119866119895)119872119894 (119909 119910) (21)

where 1 le 119909 le 119871 1 le 119910 le 119871 and ∙ is the dot product ofmatrices defined for anymatrices119860 and 119861with the same size119862 times 119863 as

119860 ∙ 119861 =119862

sum119906=1

119863

sumV=1119860 (119906 V) times 119861 (119906 V) (22)

Thedot product ofmatrices satisfies the commutative law andassociative law that is to say 119860 ∙ 119861 = 119861 ∙ 119860 and (119860 ∙ 119861) ∙ 119862 =

119860 ∙ (119861 ∙ 119862) Then the iteration equation (23) can be obtainedusing the gradient descent method

119866119896+1119895(119909 119910) = 119866119896

119895(119909 119910) minus 120578 120597119890

119894

120597119866119896119895(119909 119910)

= 119866119896119895(119909 119910)

+ 2120578 (V119894119895minus119872119894∙ 119866119895)119872119894(119909 119910)

(23)

where 120578 is the learning factor and 119896 means the 119896th iterationThe purpose of learning is to seek the optimal matrices119866119895(119895 = 1 2 6) by minimizing the total square error

119890 = sum119873119894=1119890119894 and the process of seeking the optimal matrices

119866119895can be described as follows

Step 1 Initialize parameters initialize the numbers of itera-tions inner and outer the iteration variables 119905 = 0 and 119894 = 1the nonnegative thresholds 120576

1and 120576

2used to indicate the

end of iterations the learning factor 120578 the total number ofsamples119873 and the class feature matrices119866

119895(119895 = 1 2 6)

Step 2 Input the statistics matrices119872119894(119894 = 1 2 119873) and

let 1198660119895= 119866119895(119895 = 1 6) and 119896 = 0 The following three

substeps are executed

(1) According to (23) 119866119896+1119895

(119895 = 1 6) can becomputed

(2) Compute 119890119896+1119894

and Δ119890119896+1119894= |119890119896+1119894minus 119890119896119894|

(3) If 119896 gt inner or Δ119890119896+1119894lt 1205761 then set 119890

119894= 119890119896+1119894

and go toStep 3 otherwise set 119896 = 119896 + 1 and return to (10)

Step 3 Set 119866119895= 119866119896+1119895

(119895 = 1 6) If 119894 = 119873 then go to Step4 Otherwise set 119894 = 119894 + 1 and go to Step 2Step 4 Compute the total error 119890119905 = sum119873

119894=1119890119894 If |119890119905 minus 119890119905minus1| lt 120576

2

or 119905 = outer then end the algorithm Otherwise set 119905 = 119905 + 1and go to Step 2

4 Classification of Error-Diffused HalftoneImages Using Nearest Centroids Classifier

This section describes the details on classifying error-diffusedhalftone images using the spectral regression kernel discrim-inant analysis as follows

Step 1 Input 119873 error-diffused halftone images produced bysix error-diffused filters and extract the statistical featurematrices119872

119894 including119872119894

001198721198941011987211989411 119894 = 1 2 119873 using

the method presented in Section 32

Step 2 According to the steps described in Section 33 allthe statistical feature matrices 119872

119894(11987211989400 11987211989410 11987211989411

and 119894 =1 2 119873) are converted to the class feature matrices 119866

119894

including 11986611989400 11986611989410 11986611989411and 119894 = 1 2 119873 correspondingly

Then convert 11986611989400 11986611989410 11986611989411(119894 = 1 2 119873) into one-

dimensional matrices by columns respectively

6 Advances in Multimedia

Step 3 All the one-dimensional class feature matrices11986611989400 11986611989410 11986611989411(119894 = 1 2 119873) are used to construct the

samples feature matrices 11986500 11986510 11986511

of size 119873 times 119872 (119872 =119871 times 119871) respectivelyStep 4 A label matrix information of the size 1 times119873 is built torecord the type to which the error-diffused halftone imagesbelong

Step 5 The first 119898 features 119866100 119866200 119866119898

00of the samples

featurematrices11986500are taken as the training samples (the first

119898 features of 11986510 11986511 or 119865all which is the composition of 119865

00

11986510 and 119865

11also can be used as the training samples) Reduce

the dimension of these training samples using the spectralregression discriminant analysis The process of dimensionreducing can be described by three substeps as follows

(1) Produce orthogonal vectors Let

119910119896= [[0 0 0⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟sum119896minus1

119894=1119898119894

1 1 1⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟119898119896

0 0 0⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟sum119888

119894=119896+1119898119894

]]

119879

119896 = 1 2 119888(24)

and 1199100= [1 1 1]119879 be the matrix with all elements

being 1 Let 1199100be the first vector of the weight matrix 119882

and use Gram-Schmidt process to orthogonalize the othereigenvectors Remove the vector 119910

0 leaving 119888minus1 eigenvectors

of119882 denoted as follows

119910119896119888minus1119896=1 (119910119879

1198941199100= 0 119910119879

119894119910119895= 0 119894 = 119895) (25)

(2) Add an element of 1 to the end of each input data(11986611989400(119894 = 1 2 119898)) and obtain 119888 minus 1 vectors

119896119888minus1119896=1

isinR119899+1 where

119896is the solution of the following regular least

squares problem

119896= argmin(

119898

sum119894=1

(11987911986611989400minus 119910119896119894)2 + 120572 2) (26)

Here 119910119896119894is the 119894th element of eigenvectors 119910

119896119888minus1119896=1

and 120572 is thecontraction parameter

(3) Let 119860 = [1198861 119886

119888minus1] be the transformation matrix

with the size (119872+ 1) times (119888 minus 1) Perform dimension reductionon the sample features 1198661

00 119866200 119866119898

00 by mapping them to

(119888 minus 1) dimensional subspace as follows

119909 997888rarr 119911 = 119860119879 [1199091] (27)

Step 6 Compute the mean values of samples in differentclasses according to following equation

aver119894= sum119898119894

119904=111986611990400

119898119894

(119894 = 1 2 6) (28)

where119898119894is the number of samples in the 119894th class sum6

119894=1119898119894=

119898 Let aver119894be the class-centroid of the 119894th class

Step 7. The remaining $N - m$ samples are taken as the testing samples, and dimension reduction is applied to them using the method described in Step 5.

Step 8. Compute the squared distance $|G^s_{00} - \mathrm{aver}_i|^2$ ($s = m+1, m+2, \ldots, N$ and $i = 1, 2, \ldots, 6$) between each testing sample $G^s_{00}$ and each class-centroid $\mathrm{aver}_i$ in the reduced subspace. According to the nearest centroids classifier, the sample $G^s_{00}$ is assigned to class $i$ if $i = \arg\min_i |G^s_{00} - \mathrm{aver}_i|^2$.
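Steps 6-8 then amount to computing per-class means in the reduced space and assigning each test sample to the nearest centroid. A short sketch continuing the illustrative code above (same assumptions on labels):

```python
import numpy as np

def class_centroids(Z_train, labels, c):
    """Step 6: per-class mean of the projected training samples."""
    return np.vstack([Z_train[labels == k].mean(axis=0)
                      for k in range(1, c + 1)])

def nearest_centroid(Z_test, centroids):
    """Step 8: squared Euclidean distance to every class-centroid;
    each sample gets the class index that minimizes that distance."""
    d2 = ((Z_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1) + 1           # classes are numbered 1..c

# Hypothetical usage, chaining the earlier sketch:
#   A = spectral_regression_fit(X_train, y_train, c=6)
#   Z_train, Z_test = project(X_train, A), project(X_test, A)
#   preds = nearest_centroid(Z_test, class_centroids(Z_train, y_train, 6))
```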

In Step 8, a weak classifier (i.e., the nearest centroid classifier) is used to classify the error-diffused halftone images because this classifier is simple and easy to implement. At the same time, in order to show that the class feature matrices, extracted according to the method described in Section 3 and processed by spectral regression discriminant analysis, are well suited for the classification of error-diffused halftone images, this weak classifier is used in this paper instead of a strong classifier [20], such as a support vector machine classifier or a deep neural network classifier.

5. Experimental Analysis and Results

We implement various experiments to verify the efficiency of our method in classifying error-diffused halftone images. The computer processor is an Intel(R) Pentium(R) CPU G2030 @ 3.00 GHz, the memory is 2.0 GB, the operating system is Windows 7, and the experimental simulation software is MATLAB R2012a. In our experiments, all the original images are downloaded from http://decsai.ugr.es/cvg/dbimagenes and http://msp.ee.ntust.edu.tw. About 4000 original images have been downloaded, and they are converted into 24000 error-diffused halftone images produced by six different error diffusion filters.

5.1. Classification Accuracy Rate of the Error-Diffused Halftone Images

5.1.1. Effect of the Number of the Samples. This subsection analyzes the effect of the number of feature samples on classification. When $L = 11$ and the feature matrices $F_{00}$, $F_{10}$, $F_{11}$, $F_{\text{all}}$ are taken as the input data, respectively, the classification accuracy rates under different conditions are shown in Tables 1 and 2. Table 1 shows the classification accuracy rates under different numbers of training samples when the total number of samples is 12000; Table 2 shows the corresponding rates when the total number of samples is 24000. The digits in the first line of each table are the numbers of training samples.

According to Tables 1 and 2, the classification accuracy rates with 12000 samples are higher than those with 24000 samples. Moreover, the classification accuracy rates improve as the proportion of training samples increases, as long as the number of training samples is below 80% of the sample size, and the highest classification accuracy rates are achieved when the number of training samples is about 80% of the sample size.


Table 1: Classification accuracy rates (%) (12000 samples).

       2000    3000    4000    5000    6000    7000    8000    9000    10000
F00    97.750  97.500  99.050  99.150  99.150  99.150  99.100  98.950  98.900
F10    99.950  99.950  100.00  100.00  100.00  100.00  100.00  100.00  100.00
F11    99.500  99.250  99.350  99.400  99.200  99.350  99.300  99.300  99.450
Fall   99.100  99.100  99.450  99.500  99.650  99.650  99.650  99.650  99.600

Table 2: Classification accuracy rates (%) (24000 samples).

       14000   15000   16000   17000   18000   19000   20000   21000   22000
F00    95.810  95.878  95.413  95.286  94.933  98.580  98.100  98.267  98.650
F10    97.600  97.378  97.150  96.957  96.450  99.680  99.600  99.533  99.300
F11    96.420  96.256  95.925  95.614  95.000  99.040  98.825  98.700  98.400
Fall   97.230  97.100  96.888  96.686  96.233  99.680  99.600  99.533  99.350

Table 3: Classification accuracy rates obtained by different algorithms (%).

F00 + SR   F10 + SR   F11 + SR   Fall + SR   ECF + BP   LMS + Bayes   G10 + ML   G11 + ML
98.40      99.53      98.74      99.54       90.04      90.38         98.32      96.15

Table 4: The training and testing time under different sample sizes (in seconds).

                      14000   15000   16000   17000   18000   19000   20000   21000   22000
F00   Training time   65.208  78.317  94.660  110.47  129.32  150.18  178.94  205.11  513.54
      Testing time    63.742  62.583  58.530  55.251  48.792  44.947  37.872  30.375  43.393
F10   Training time   65.851  78.796  95.113  110.70  130.97  150.76  173.98  198.95  240.10
      Testing time    63.674  60.804  57.496  55.03   49.287  43.261  36.775  30.353  29.631
F11   Training time   65.662  79.936  95.173  110.69  130.33  150.63  174.41  198.34  239.96
      Testing time    65.226  61.817  57.826  53.778  48.531  45.513  37.137  29.735  30.597
Fall  Training time   68.902  82.554  99.189  115.27  135.34  156.71  179.94  204.85  288.36
      Testing time    102.00  98.393  93.112  87.212  80.16   69.02   58.556  49.363  36.921

In addition, from Tables 1 and 2, we can also see that each of $F_{00}$, $F_{10}$, $F_{11}$ can be used as the input data alone. They can also be combined into the input data $F_{\text{all}}$, based on which the classification accuracy rates are likewise high.

5.1.2. Comparison of Classification Accuracy Rate. To analyze the effectiveness of our classification algorithm, the mean value of the rightmost four entries of each row in Table 2 is computed. The algorithm SR outperforms the other baselines, achieving higher classification accuracy rates than LMS + Bayes (the method combining least mean square and the Bayes method), ECF + BP (the method based on the enhanced correlation function and a BP neural network), and ML (the maximum likelihood method). According to Table 3, the mean values of the classification accuracy rates obtained using SR with the different features $F_{00}$, $F_{10}$, $F_{11}$, and $F_{\text{all}}$, respectively, are higher than the mean values obtained by the other algorithms mentioned above.

5.2. Effect of the Size of the Statistical Feature Template on Classification Accuracies. Here, the different features $F_{00}$, $F_{10}$, $F_{11}$ of the 24000 error-diffused halftone images are used to test the effect of the size of the statistical feature template. $F_{00}$, $F_{10}$, $F_{11}$ are constructed using the corresponding class feature matrices $G_{00}$, $G_{10}$, $G_{11}$ with different sizes $L \times L$ ($L = 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25$). Figure 1 shows that the classification accuracy rate achieves its highest value when $L = 11$, no matter which feature is selected for the experiments.

5.3. Time-Consumption of the Classification. Now experiments are implemented to test the time-consumption of the error-diffused halftone image classification under the condition that the total number of samples is 24000 and $L = 11$. The time-consumption of the classification consists of the training time and the testing time. From Table 4, we can see that the training time increases with the number of training samples; on the contrary, the testing time decreases as the number of training samples increases.

[Figure 1: Classification accuracy rates based on different features with different sample sizes. The x-axes give the number of training samples (14000-22000) and the y-axes the classification accuracy rate. Panels: (a) rates based on $F_{00}$ ($L = 5, 7, 9, 11, 13, 15$); (b) rates based on $F_{00}$ ($L = 17, 19, 21, 23, 25$); (c) rates based on $F_{10}$ ($L = 5, 7, 9, 11, 13, 15$); (d) rates based on $F_{10}$ ($L = 17, 19, 21, 23, 25$); (e) rates based on $F_{11}$ ($L = 5, 7, 9, 11, 13, 15$); (f) rates based on $F_{11}$ ($L = 17, 19, 21, 23, 25$).]

To compare the time-consumption of the classification method proposed in this paper with other algorithms, such as the backpropagation (BP) neural network, the radial basis function (RBF) neural network, and the support vector machine (SVM), all the experiments are implemented using $F_{10}$, which is divided into two parts: 12000 training samples and 12000 testing samples ($L = 11$). From the figures listed in Table 5, we can see that SR achieves the minimal sum of training and testing time.

In addition, Table 5 implies that the time-consumption of classifiers based on neural networks, such as those based on RBF or BP, is much higher than that of the other algorithms, especially SR. This is because these neural network algorithms essentially use gradient descent methods to


Table 5: Time-consumption of different algorithms (in seconds).

                ML      RBF     BP      SVM     SR
Training time   110.42  763.90  296.67  492.97  43.9337
Testing time    120.00  276.00  360.00  624.00  63.161

Table 6: Classification accuracy rates under different variances (%).

Variance  14000   15000   16000   17000   18000   19000   20000   21000   22000
0.01      95.570  95.422  95.425  95.529  95.217  98.440  98.200  98.033  97.950
0.10      95.370  95.233  95.188  95.157  94.867  98.200  98.000  97.767  98.000
0.50      94.580  94.522  94.600  94.671  94.567  98.320  98.225  98.033  98.550
1.00      93.060  93.067  93.013  93.343  93.200  96.980  97.175  96.800  97.350

Table 7: Classification accuracy rates using other algorithms under different variances (%).

Variance  ECF + BP  LMS + Bayes  ML       Variance  ECF + BP  LMS + Bayes  ML
0.01      89.5000   88.7577      97.9981  0.50      79.7333   40.2682      82.9169
0.10      88.1333   78.4264      96.8315  1.00      61.5000   33.8547      64.3861

optimize the associated nonconvex problems, which are well known to converge very slowly. By contrast, the classifier based on SR performs the classification task by directly computing the squared distance between each testing sample and the different class-centroids. Hence, its time-consumption is very low.

5.4. The Experiment of Noise Attack Resistance. In actual operation, error-diffused halftone images are often polluted by noise before the inverse transform. In order to test the ability of SR to resist noise attacks, Gaussian noise with mean 0 and different variances is embedded into the error-diffused halftone images. Classification experiments are then performed using the algorithm proposed in this paper, and the experimental results are listed in Table 6. According to Table 6, the accuracy rates decrease as the variance increases. Compared with the accuracy rates listed in Table 7, achieved by the other algorithms (ECF + BP, LMS + Bayes, and ML), our classification method has obvious advantages in resisting noise.
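As a sketch of this test protocol (our own illustration under stated assumptions, not the authors' code; in particular, re-binarizing the noisy image by thresholding at 0.5 is our assumption), zero-mean Gaussian noise of a chosen variance can be embedded into a binary halftone image before feature extraction:

```python
import numpy as np

def add_gaussian_noise(halftone, variance, seed=None):
    """Embed zero-mean Gaussian noise of the given variance into a
    0/1 halftone image; thresholding at 0.5 keeps the result binary
    (the re-thresholding step is an assumption, not from the paper)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, np.sqrt(variance), size=halftone.shape)
    return (halftone.astype(float) + noise >= 0.5).astype(np.uint8)

# Variances matching Table 6, applied to a toy binary image
F = (np.random.rand(256, 256) > 0.5).astype(np.uint8)
noisy = {v: add_gaussian_noise(F, v) for v in (0.01, 0.10, 0.50, 1.00)}
```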

6. Conclusion

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and achieves a high classification accuracy rate, with the added benefit of robustness to noise. An interesting direction for future work is to handle the disturbances possibly introduced by other attacks, such as image scaling and rotation, in the process of error-diffused halftone image classification.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (Grants nos. 61170102 and 61271140), the Scientific Research Fund of Hunan Provincial Education Department, China (Grant no. 15A049), the Education Department Fund of Hunan Province in China (Grants nos. 15C0402, 15C0395, and 13C036), and the Science and Technology Planning Project of Hunan Province in China (Grant no. 2015GK3024).

References

[1] Y.-M. Kwon, M.-G. Kim, and J.-L. Kim, "Multiscale rank-based ordered dither algorithm for digital halftoning," Information Systems, vol. 48, pp. 241-247, 2015.

[2] Y. Jiang and M. Wang, "Image fusion using multiscale edge-preserving decomposition based on weighted least squares filter," IET Image Processing, vol. 8, no. 3, pp. 183-190, 2014.

[3] Z. Zhao, L. Cheng, and G. Cheng, "Neighbourhood weighted fuzzy c-means clustering algorithm for image segmentation," IET Image Processing, vol. 8, no. 3, pp. 150-161, 2014.

[4] Z.-Q. Wen, Y.-L. Lu, Z.-G. Zeng, W.-Q. Zhu, and J.-H. Ai, "Optimizing template for lookup-table inverse halftoning using elitist genetic algorithm," IEEE Signal Processing Letters, vol. 22, no. 1, pp. 71-75, 2015.

[5] P. C. Chang and C. S. Yu, "Neural net classification and LMS reconstruction to halftone images," in Visual Communications and Image Processing '98, vol. 3309 of Proceedings of SPIE, pp. 592-602, The International Society for Optical Engineering, January 1998.

[6] Y. Kong, P. Zeng, and Y. Zhang, "Classification and recognition algorithm for the halftone image," Journal of Xidian University, vol. 38, no. 5, pp. 62-69, 2011 (Chinese).

[7] Y. Kong, A study of inverse halftoning and quality assessment schemes [Ph.D. thesis], School of Computer Science and Technology, Xidian University, Xi'an, China, 2008.

[8] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Inverse halftoning based on the Bayesian theorem," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 1077-1084, 2011.

[9] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Halftone image classification using LMS algorithm and naive Bayes," IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2837-2847, 2011.

[10] D. L. Lau and G. R. Arce, Modern Digital Halftoning, CRC Press, Boca Raton, Fla, USA, 2nd edition, 2008.

[11] Image Dithering: Eleven Algorithms and Source Code, http://www.tannerhelland.com/4660/dithering-eleven-algorithms-source-code/.

[12] R. A. Ulichney, "Dithering with blue noise," Proceedings of the IEEE, vol. 76, no. 1, pp. 56-79, 1988.

[13] Y.-H. Fung and Y.-H. Chan, "Embedding halftones of different resolutions in a full-scale halftone," IEEE Signal Processing Letters, vol. 13, no. 3, pp. 153-156, 2006.

[14] Z.-Q. Wen, Y.-X. Hu, and W.-Q. Zhu, "A novel classification method of halftone image via statistics matrices," IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4724-4736, 2014.

[15] D. Cai, X. He, and J. Han, "Efficient kernel discriminant analysis via spectral regression," in Proceedings of the 7th IEEE International Conference on Data Mining (ICDM '07), pp. 427-432, Omaha, Neb, USA, October 2007.

[16] D. Cai, X. He, and J. Han, "Speed up kernel discriminant analysis," The International Journal on Very Large Data Bases, vol. 20, no. 1, pp. 21-33, 2011.

[17] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "A general soft label based linear discriminant analysis for semi-supervised dimensionality reduction," Neural Networks, vol. 55, pp. 83-97, 2014.

[18] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "Soft label based linear discriminant analysis for image recognition and retrieval," Computer Vision and Image Understanding, vol. 121, no. 1, pp. 86-99, 2014.

[19] L. Zhang and F.-C. Tian, "A new kernel discriminant analysis framework for electronic nose recognition," Analytica Chimica Acta, vol. 816, pp. 8-17, 2014.

[20] B. Gu, V. S. Sheng, K. Y. Tay, W. Romano, and S. Li, "Incremental support vector learning for ordinal regression," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1403-1416, 2015.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 4: Research Article Classification of Error-Diffused Halftone ...features from hal one images, based on which the hal one images are divided into nine categories using a decision tree

4 Advances in Multimedia

3 Feature Extraction of the Error-DiffusedHalftone Images

Since its introduction in 1976 the error diffusion algorithmhas attracted widespread attention in the field of printingapplications It deals with pixels of halftone images usinginstead of point processing algorithms the neighborhoodprocessing algorithms Nowwe will extract the features of theerror-diffused halftone images which are produced using thesix popular error diffusion filters mentioned in Section 1

31 Statistic Characteristics of the Error-Diffused HalftoneImages Assume that 119891

119892(119909 119910) is the gray value of the pixel

located at position (119909 119910) in the original image and 119865(119909 119910)is the value of the pixel located at position (119909 119910) in theerror-diffused halftone image For the original image all thepixels are firstly normalized to the range [0 1] Then thepixels of the normalized image are converted to the error-diffused image 119865 line by line that is to say if 119891

119892(119909 119910) ge 119879

0

119865(119909 119910) which is the value of the pixel located at position(119909 119910) in error-diffused image 119865 is 1 otherwise 119865(119909 119910) is 0where 119879

0is the threshold value The error between 119865(119909 119910)

and 1198790is diffused ahead to some subsequent pixels not

necessary to deal withTherefore for some subsequent pixelsthe comparison will be implemented between 119879

0and the

value which is the sum of 119891119892(119909 119910) and the diffusion error

119890 A template matrix can be built using the error diffusionmodes and the error diffusion coefficients as shown in theerror diffusion of Stucki described above for example (a) theerror diffusion filter and (b) the error diffusion coefficientswhich represent the proportion of the diffusion errors Ifthe coefficient is zero then the corresponding pixel does notreceive any diffusion errors According to the error diffusionof Stucki described above pixel119860 suffers frommore diffusionerrors than pixel 119861 that is to say119874-119860 has a larger probabilityto become 1-0 pixel pair than119874-119861 The reasons are as followsSuppose that the pixel value of 119874 is 0 le 119901

119874le 1 and pixel 119874

has been processed by the thresholding method according tothe following equation

119902119874= 1 119901

119874ge 1198790

0 else (17)

In general threshold1198790is set as 05 According to the template

shown in the error diffusion of Stucki described above we canknow that the diffusion error is 119890 = 119902

119874minus119901119874 the new value of

pixel119860 is 119901119860= 119891(119909

119860+119910119860)+811989042 and the new value of pixel

119861 is119901119861= 119891(119909

119861+119910119861)+11989042 where119891

119892(119909119860+119910119860) and119891

119892(119909119861+119910119861)

are the original values of pixels 119860 and 119861 respectivelySince the value of each pixel in the error-diffused halftone

image can only be 0 or 1 there are 4 kinds of pixel pairs inthe halftone image 0-1 0-0 1-0 and 1-1 Pixel pairs 0-1 and1-0 are collectively known as 1-0 pixel pairs because of theirexchange abilityTherefore there are only three kinds of pixelpairs essentially 0-0 1-0 and 1-1 In this paper three statisticalmatrices are used to store the number of different pixel pairswith different neighboring distances and different directionswhich are of size 119871 times 119871 and are referred to as119872

0011987210 and

11987211 respectively (119871 is an odd number satisfying 119871 = 2119877 + 1

and 119877 is the maximum neighboring distance) Suppose thatthe center entry of the statistical matrix template covers pixel119874 of the error-diffused halftone image with the size119882 lowast 119867and other entries overlap other neighborhood pixels 119865(119906 V)Then we can compute three statistics on 1-0 1-1 and 0-0 pixelpairs within the scope of this statistics matrix template Ifthe position (119894 119895) of pixel119874 changes continually the matrices1198720011987210 and119872

11with zero being the initial values can be

updated according to

11987211(119909 119910) = 119872

11(119909 119910) + 1

if 119865 (119894 119895) = 1 119865 (119906 V) = 111987200(119909 119910) = 119872

00(119909 119910) + 1

if 119865 (119894 119895) = 0 119865 (119906 V) = 011987210(119909 119910) = 119872

00(119909 119910) + 1

if 119865 (119894 119895) = 1 119865 (119906 V) = 0 or 119865 (119894 119895) = 0 119865 (119906 V) = 1

(18)

where 119906 = 119894minus119877+119909minus1 V = 119895minus119877+119910minus1 1 le 119894 119906 le 119882 1 le 119895 V le119867 1 le 119909 le 119871 and 1 le 119910 le 119871 After normalization the threestatistic matrices can be ultimately obtained as the statisticalfeature descriptor of the error-diffused halftone images

32 Process of Statistical Feature Extraction ofHalftone ImagesAccording to the analysis described above the process ofstatistical feature extraction of the error-diffused halftoneimages can be represented as follows

Step 1 Input the error-diffused halftone image 119865 and divide119865 into several blocks 119861

119894(119894 = 1 2 119867) with the same size

119870 times 119870Step 2 Initialize the statistical feature matrix 119872 (including119872001198721011987211) as the zero matrix and let 119894 = 1

Step 3 Obtain the statistical matrix119872119894of block 119861

119894according

to (18) and update119872 using the equation119872 = 119872 +119872119894

Step 4 Set 119894 = 119894+1 If 119894 le 119867 then return to Step 3 Otherwisego to Step 5Step 5 Normalize 119872 such that it satisfiessum119871119909=1sum119871119909=1119872(119909 119910) = 1 and 119872(119909 119910) ge 0 where 119870

satisfies119870 ge min(119879119894) and 119879

119894(1 le 119894 le 6) represents the size of

the template matrix of the 119894th error diffusion filter

According to the process described above we knowthat the statistical features of the error-diffused halftoneimage 119865 are extracted based on the method that divides119865 into image patches which is significantly different withother feature extraction methods based on image patchesFor example in [20] the brightness and contrast of theimage patches are normalized by 119885-score transformationand whitening (also called ldquospheringrdquo) is used to rescale thenormalized data to remove the correlations between nearbypixels (ie low-frequency variations in the images) because

Advances in Multimedia 5

these correlations tend to be very strong even after brightnessand contrast normalization However in this paper featuresof the patches are extracted based on counting statisticalmeasures of different pixel pairs (00 10 and 11) within amoving statistical matrix template and are optimized usingthe method described in Section 33

33 Extraction of the Class Feature Matrix The statis-tics matrices 119872119894

001198721198941011987211989411(119894 = 1 2 119873) after being

extracted can be used as the input of other algorithms suchas support vector machines and neural networks Howeverthe curse of dimensionality could occur due to the highdimension of119872

001198721011987211 making the classification effect

possibly not significant Thereby six class feature matrices1198661 1198662 119866

6are designed in this paper for the error-

diffused halftone images produced by the six error diffusionfiltersmentioned aboveThen a gradient descentmethod canbe used to optimize these class feature matrices119873 = 6 times 119899 error-diffused halftone images can

be derived from 119899 original images using the six errordiffusion filters respectively Then 119873 statistics matrices119872119894(119872119894001198721198941011987211989411) (119894 = 1 2 119873) can be extracted as

the samples from 119873 error-diffused halftone images usingthe algorithm mentioned in Section 32 Subsequently welabel thesematrices as label(119872

1) label(119872

2) label(119872

119873) to

denote the types of the error diffusion filters used to producethe error-diffused halftone image Given the 119894th sample 119872

119894

as the input the target out vector 119905119894= [1199051198941 119905

1198946] (119894 =

1 119873) and the class feature matrices 1198661 1198662 119866

6 the

square error 119890119894between the actual output and the target

output can be derived according to

119890119894=6

sum119895=1

(119905119894119895minus119872119894∙ 119866119895)2 (19)

where

119905119894119895= 1 if 119895 = label (119872

119894) 119895 = 1 2 6

0 else (20)

The derivatives of 119866119895(119909 119910) in (19) can be explicitly calculated

as

120597119890119894

120597119866119895(119909 119910) = minus2 (V119894119895 minus119872119894 ∙ 119866119895)119872119894 (119909 119910) (21)

where 1 le 119909 le 119871 1 le 119910 le 119871 and ∙ is the dot product ofmatrices defined for anymatrices119860 and 119861with the same size119862 times 119863 as

119860 ∙ 119861 =119862

sum119906=1

119863

sumV=1119860 (119906 V) times 119861 (119906 V) (22)

Thedot product ofmatrices satisfies the commutative law andassociative law that is to say 119860 ∙ 119861 = 119861 ∙ 119860 and (119860 ∙ 119861) ∙ 119862 =

119860 ∙ (119861 ∙ 119862) Then the iteration equation (23) can be obtainedusing the gradient descent method

119866119896+1119895(119909 119910) = 119866119896

119895(119909 119910) minus 120578 120597119890

119894

120597119866119896119895(119909 119910)

= 119866119896119895(119909 119910)

+ 2120578 (V119894119895minus119872119894∙ 119866119895)119872119894(119909 119910)

(23)

where 120578 is the learning factor and 119896 means the 119896th iterationThe purpose of learning is to seek the optimal matrices119866119895(119895 = 1 2 6) by minimizing the total square error

119890 = sum119873119894=1119890119894 and the process of seeking the optimal matrices

119866119895can be described as follows

Step 1 Initialize parameters initialize the numbers of itera-tions inner and outer the iteration variables 119905 = 0 and 119894 = 1the nonnegative thresholds 120576

1and 120576

2used to indicate the

end of iterations the learning factor 120578 the total number ofsamples119873 and the class feature matrices119866

119895(119895 = 1 2 6)

Step 2 Input the statistics matrices119872119894(119894 = 1 2 119873) and

let 1198660119895= 119866119895(119895 = 1 6) and 119896 = 0 The following three

substeps are executed

(1) According to (23) 119866119896+1119895

(119895 = 1 6) can becomputed

(2) Compute 119890119896+1119894

and Δ119890119896+1119894= |119890119896+1119894minus 119890119896119894|

(3) If 119896 gt inner or Δ119890119896+1119894lt 1205761 then set 119890

119894= 119890119896+1119894

and go toStep 3 otherwise set 119896 = 119896 + 1 and return to (10)

Step 3 Set 119866119895= 119866119896+1119895

(119895 = 1 6) If 119894 = 119873 then go to Step4 Otherwise set 119894 = 119894 + 1 and go to Step 2Step 4 Compute the total error 119890119905 = sum119873

119894=1119890119894 If |119890119905 minus 119890119905minus1| lt 120576

2

or 119905 = outer then end the algorithm Otherwise set 119905 = 119905 + 1and go to Step 2

4 Classification of Error-Diffused HalftoneImages Using Nearest Centroids Classifier

This section describes the details on classifying error-diffusedhalftone images using the spectral regression kernel discrim-inant analysis as follows

Step 1 Input 119873 error-diffused halftone images produced bysix error-diffused filters and extract the statistical featurematrices119872

119894 including119872119894

001198721198941011987211989411 119894 = 1 2 119873 using

the method presented in Section 32

Step 2 According to the steps described in Section 33 allthe statistical feature matrices 119872

119894(11987211989400 11987211989410 11987211989411

and 119894 =1 2 119873) are converted to the class feature matrices 119866

119894

including 11986611989400 11986611989410 11986611989411and 119894 = 1 2 119873 correspondingly

Then convert 11986611989400 11986611989410 11986611989411(119894 = 1 2 119873) into one-

dimensional matrices by columns respectively

6 Advances in Multimedia

Step 3 All the one-dimensional class feature matrices11986611989400 11986611989410 11986611989411(119894 = 1 2 119873) are used to construct the

samples feature matrices 11986500 11986510 11986511

of size 119873 times 119872 (119872 =119871 times 119871) respectivelyStep 4 A label matrix information of the size 1 times119873 is built torecord the type to which the error-diffused halftone imagesbelong

Step 5 The first 119898 features 119866100 119866200 119866119898

00of the samples

featurematrices11986500are taken as the training samples (the first

119898 features of 11986510 11986511 or 119865all which is the composition of 119865

00

11986510 and 119865

11also can be used as the training samples) Reduce

the dimension of these training samples using the spectralregression discriminant analysis The process of dimensionreducing can be described by three substeps as follows

(1) Produce orthogonal vectors Let

119910119896= [[0 0 0⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟sum119896minus1

119894=1119898119894

1 1 1⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟119898119896

0 0 0⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟sum119888

119894=119896+1119898119894

]]

119879

119896 = 1 2 119888(24)

and 1199100= [1 1 1]119879 be the matrix with all elements

being 1 Let 1199100be the first vector of the weight matrix 119882

and use Gram-Schmidt process to orthogonalize the othereigenvectors Remove the vector 119910

0 leaving 119888minus1 eigenvectors

of119882 denoted as follows

119910119896119888minus1119896=1 (119910119879

1198941199100= 0 119910119879

119894119910119895= 0 119894 = 119895) (25)

(2) Add an element of 1 to the end of each input data(11986611989400(119894 = 1 2 119898)) and obtain 119888 minus 1 vectors

119896119888minus1119896=1

isinR119899+1 where

119896is the solution of the following regular least

squares problem

119896= argmin(

119898

sum119894=1

(11987911986611989400minus 119910119896119894)2 + 120572 2) (26)

Here 119910119896119894is the 119894th element of eigenvectors 119910

119896119888minus1119896=1

and 120572 is thecontraction parameter

(3) Let 119860 = [1198861 119886

119888minus1] be the transformation matrix

with the size (119872+ 1) times (119888 minus 1) Perform dimension reductionon the sample features 1198661

00 119866200 119866119898

00 by mapping them to

(119888 minus 1) dimensional subspace as follows

119909 997888rarr 119911 = 119860119879 [1199091] (27)

Step 6 Compute the mean values of samples in differentclasses according to following equation

aver119894= sum119898119894

119904=111986611990400

119898119894

(119894 = 1 2 6) (28)

where119898119894is the number of samples in the 119894th class sum6

119894=1119898119894=

119898 Let aver119894be the class-centroid of the 119894th class

Step 7 The remaining119873minus119898 samples are taken as the testingsamples and the dimension reduction is implemented forthem using the method described in Step 5Step 8 Compute the square of the distance |119866119904

00minusaver

119894|2 (119904 =

119898 + 1119898 + 2 119873 and 119894 = 1 2 6) between each testingsample119866119904

00and different class-centroid aver

119894 according to the

nearest centroids classifier the sample 11986611990400is assigned to the

class 119894 if 119894 = argmin |11986611990400minus aver

119894|2

In Step 8 the weak classifier (ie the nearest centroidclassifier) is used to classify error-diffused halftone imagesbecause this classifier is simple and easy to implementSimultaneously in order to prove that these class featurematrices which are extracted according to the methodmentioned in Section 3 and handled by the algorithm of thespectral regression discriminant analysis are well suited forthe classification of error-diffused halftone images this weakclassifier is used in this paper instead of a strong classifier[20] such as support vector machine classifiers and deepneural network classifiers

5 Experimental Analysis and Results

We implement various experiments to verify theefficiency of our methods in classifying error-diffusedhalftone images The computer processor is Intel(R)Pentium(R) CPU G2030 300GHz the memory of thecomputer is 20GB the operating system is Windows7 and the experimental simulation software is matlabR2012a In our experiments all the original images aredownloaded from httpdecsaiugrescvgdbimagenes andhttpmspeentustedutw About 4000 original imageshave been downloaded and they are converted into 24000error-diffused halftone images produced by six differenterror-diffused filters

51 Classification Accuracy Rate of the Error-DiffusedHalftone Images

511 Effect of the Number of the Samples This subsectionanalyzes the effect of the number on the feature sampleson classification When 119871 = 11 and feature matrices 119865

00

11986510 11986511 119865all are taken as the input data respectively the

accuracy rate of classification under different conditionsis shown in Tables 1 and 2 Table 1 shows the classificationaccuracy rates under different number of training sampleswhen the total number of samples is 12000 Table 2 showsthe classification accuracy rates under different number oftraining samples when the total number of samples is 24000The digits in the first line of each table are the size of thetraining samples

According to Tables 1 and 2 the classification accuracyrates under 12000 samples are higher than that under 24000samples Moreover the classification accuracy rates improvewith the increase of the proportion of the training sampleswhen the number of the training samples is lower than the80 of sample size And it achieves the highest classificationaccuracy rates when the number of the training samples is

Advances in Multimedia 7

Table 1 Classification accuracy rates () (12000 samples)

2000 3000 4000 5000 6000 7000 8000 9000 1000011986500

97750 97500 99050 99150 99150 99150 99100 98950 9890011986510

99950 99950 10000 10000 10000 10000 10000 10000 1000011986511

99500 99250 99350 99400 99200 99350 99300 99300 99450119865all 99100 99100 99450 99500 99650 99650 99650 99650 99600

Table 2 Classification accuracy rates () (24000 samples)

14000 15000 16000 17000 18000 19000 20000 21000 2200011986500

95810 95878 95413 95286 94933 98580 98100 98267 9865011986510

97600 97378 97150 96957 96450 99680 99600 99533 9930011986511

96420 96256 95925 95614 95000 99040 98825 98700 98400119865all 97230 97100 96888 96686 96233 99680 99600 99533 99350

Table 3 Classification accuracy rates obtained by different algorithms ()

11986500+ SR 119865

10+ SR 119865

11+ SR 119865all + SR ECF + BP LMS + Bayes 119866

10+ ML 119866

11+ ML

9840 9953 9874 9954 9004 9038 9832 9615

Table 4 The training and testing time under different sample sizes (in seconds)

14000 15000 16000 17000 18000 19000 20000 21000 22000

11986500

Training time 65208 78317 94660 11047 12932 15018 17894 20511 51354Testing time 63742 62583 58530 55251 48792 44947 37872 30375 43393

11986510

Training time 65851 78796 95113 11070 13097 15076 17398 19895 24010Testing time 63674 60804 57496 5503 49287 43261 36775 30353 29631

11986511

Training time 65662 79936 95173 11069 13033 15063 17441 19834 23996Testing time 65226 61817 57826 53778 48531 45513 37137 29735 30597

119865all Training time 68902 82554 99189 11527 13534 15671 17994 20485 28836Testing time 10200 98393 93112 87212 8016 6902 58556 49363 36921

about 80 of the sample size In addition from Tables 1 and2 we can also see that 119865

00 11986510 11986511

can be used as the inputdata alone Simultaneously they can also be combined intothe input data 119865all based on which the classification accuracyrates would be high

512 Comparison of Classification Accuracy Rate To analyzethe effectiveness of our classification algorithm the meanvalues of classification accuracy rate of the four data sets onthe right-hand side of each row in Table 2 are computed ThealgorithmSRoutperforms other baselines in achieving higherclassification accuracy rates when compared with LMS +Bayes (the method composed of least mean square and Bayesmethod) ECF + BP (the method based on the enhancedcorrelation function and BP neural network) and ML (themaximum likelihood method) According to Table 3 themean values of classification accuracy rates obtained usingSR and different features 119865

00 11986510 11986511 and 119865all respectively

are higher than themean values obtained by other algorithmsmentioned above

52 Effect of the Size of Statistical Feature Template onClassification Accuracies Here different features 119865

00 11986510

11986511

of the 24000 error-diffused halftone images are usedto test the effect of the size of statistical feature template11986500 11986510 11986511

are constructed using the corresponding classfeature matrices 119866

00 11986610 11986611

with deferent size 119871 times 119871 (119871 =5 7 9 11 13 15 17 19 21 23 25) Figure 1 shows that theclassification accuracy rate achieves the highest value when119871 = 11 no matter which feature is selected for experiments

53 Time-Consumption of the Classification Now the exper-iments are implemented to test the time-consumption ofthe error-diffused halftone image classification under thecondition that the total number of samples is 24000 and119871 = 11 It is well known that the time-consumption of theclassification includes the training time and the testing timeFrom Table 4 we can know that the training time increaseswith the increase of the number of training samples on thecontrary the testing time decreases with the increase of thenumber of training samples

8 Advances in Multimedia

14000 15000 16000 17000 18000 19000 20000 21000 220000808208408608809092094096098

1

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(a) Rates based on 11986500 (119871 = 5 7 9 11 13 15)

14000 15000 16000 17000 18000 19000 20000 21000 220000808208408608809092094096098

1

L = 17

L = 19

L = 21

L = 23

L = 25

(b) Rates based on 11986500 (119871 = 17 19 21 23 25)

14000 15000 16000 17000 18000 19000 20000 21000 220000808108208308408508608708808909

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(c) Rates based on 11986510 (119871 = 5 7 9 11 13 15)

08

09

14000 15000 16000 17000 18000 19000 20000 21000 22000

081082083084085086087088089

L = 17

L = 19

L = 21

L = 23

L = 25

(d) Rates based on 11986510 (119871 = 17 19 21 23 25)

09

14000 15000 16000 17000 18000 19000 20000 21000 22000082084086088

092094096098

1

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(e) Rates based on 11986511 (119871 = 5 7 9 11 13 15)

08

09

14000 15000 16000 17000 18000 19000 20000 21000 22000

082084086088

092094096098

1

L = 17

L = 19

L = 21

L = 23

L = 25

(f) Rates based on 11986511 (119871 = 17 19 21 23 25)

Figure 1 Classification accuracy rates based on deferent features with deferent sample size

To compare the time-consumption of the classificationmethod proposed in this paper with other algorithms suchas the backpropagation (BP) neural network radial basisfunction (RBF) neural network and support vector machine(SVM) all the experiments are implemented using119865

10 which

is divided into two parts 12000 training samples and 12000testing samples (119871 = 11) From the digits listed in Table 5

we can know that SR achieves the minimal summation of thetraining and testing time

In addition Table 5 implies that the time-consumptionof classifiers based on neural networks such as the classifierbased on RBF or BP are much more than that of otheralgorithms especially SR This is because these neural net-work algorithms essentially use gradient descent methods to

Advances in Multimedia 9

Table 5 Time-consumption of different algorithms (in second)

ML RBF BP SVM SRTraining time 11042 76390 29667 49297 439337Testing time 12000 27600 36000 62400 63161

Table 6 Classification accuracy rates under different variances ()

Variance 14000 15000 16000 17000 18000 19000 20000 21000 22000001 95570 95422 95425 95529 95217 98440 98200 98033 97950010 95370 95233 95188 95157 94867 98200 98000 97767 98000050 94580 94522 94600 94671 94567 98320 98225 98033 98550100 93060 93067 93013 93343 93200 96980 97175 96800 97350

Table 7 Classification accuracy rates using other algorithms under different variances

Variance ECF + BP LMS + Bayes ML Variance ECF + BP LMS + Bayes ML001 895000 887577 979981 050 797333 402682 829169010 881333 784264 968315 100 615000 338547 643861

optimize the associated nonconvex problems which are wellknown to converge very slowly However the classifier basedon SR performs the classification task through computing thesquare of the distance between each testing sample and differ-ent class-centroids directly Hence the time-consumption ofit is very cheap

54The Experiment of Noise Attack Resistance In the processof actual operation the error-defused halftone images areoften polluted by noise before the inverse transform In orderto test the ability of SR to resist the attack of noise differentGaussian noises with mean 0 and different variances areembedded into the error-defused halftone imagesThen clas-sification experiments have been done using the algorithmproposed in this paper and the experimental results are listedin Table 6 According to Table 6 the accuracy rates decreasewith the increase of the variances As compared with theaccuracy rates listed in Table 7 achieved by other algorithmssuch as ECF + BP LMS + Bayes and ML we find that ourclassification method has obvious advantages in resisting thenoise

6 Conclusion

This paper proposes a novel algorithm to solve the chal-lenging problem of classifying the error-diffused halftoneimages We firstly design the class feature matrices afterextracting the image patches according to their statisticalcharacteristics to classify the error-diffused halftone imagesThen the spectral regression kernel discriminant analysisis used for feature dimension reduction The error-diffusedhalftone images are finally classified using an idea similarto the nearest centroids classifier As demonstrated by theexperimental results our method is fast and can achieve ahigh classification accuracy rate with an added benefit ofrobustness in tackling noise A very interesting direction isto solve the disturbance possibly introduced by other attacks

such as image scaling and rotation in the process of error-diffused halftone image classification

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

This work is supported in part by the National Natural Sci-ence Foundation of China (Grants nos 61170102 61271140)the Scientific Research Fund of Hunan Provincial Educa-tion Department China (Grant no 15A049) the EducationDepartment Fund of Hunan Province in China (Grantsnos 15C0402 15C0395 and 13C036) and the Science andTechnology Planning Project of Hunan Province in China(Grant no 2015GK3024)

References

[1] Y-M Kwon M-G Kim and J-L Kim ldquoMultiscale rank-basedordered dither algorithm for digital halftoningrdquo InformationSystems vol 48 pp 241ndash247 2015

[2] Y Jiang and M Wang ldquoImage fusion using multiscale edge-preserving decomposition based on weighted least squaresfilterrdquo IET Image Processing vol 8 no 3 pp 183ndash190 2014

[3] Z Zhao L Cheng and G Cheng ldquoNeighbourhood weightedfuzzy c-means clustering algorithm for image segmentationrdquoIET Image Processing vol 8 no 3 pp 150ndash161 2014

[4] Z-Q Wen Y-L Lu Z-G Zeng W-Q Zhu and J-H AildquoOptimizing template for lookup-table inverse halftoning usingelitist genetic algorithmrdquo IEEE Signal Processing Letters vol 22no 1 pp 71ndash75 2015

[5] P C Chang and C S Yu ldquoNeural net classification and LMSreconstruction to halftone imagesrdquo in Visual Communicationsand Image Processing rsquo98 vol 3309 of Proceedings of SPIE pp592ndash602 The International Society for Optical EngineeringJanuary 1998

10 Advances in Multimedia

[6] Y Kong P Zeng and Y Zhang ldquoClassification and recognitionalgorithm for the halftone imagerdquo Journal of Xidian Universityvol 38 no 5 pp 62ndash69 2011 (Chinese)

[7] Y Kong A study of inverse halftoning and quality assessmentschemes [PhD thesis] School of Computer Science and Tech-nology Xidian University Xian China 2008

[8] Y-F Liu J-M Guo and J-D Lee ldquoInverse halftoning based onthe Bayesian theoremrdquo IEEE Transactions on Image Processingvol 20 no 4 pp 1077ndash1084 2011

[9] Y-F Liu J-MGuo and J-D Lee ldquoHalftone image classificationusing LMS algorithm and naive Bayesrdquo IEEE Transactions onImage Processing vol 20 no 10 pp 2837ndash2847 2011

[10] D L Lau andG R ArceModern Digital Halftoning CRC PressBoca Raton Fla USA 2nd edition 2008

[11] Image Dithering Eleven Algorithms and Source Code httpwwwtannerhellandcom4660dithering-elevenalgorithms-source-code

[12] R A Ulichney ldquoDithering with blue noiserdquo Proceedings of theIEEE vol 76 no 1 pp 56ndash79 1988

[13] Y-H Fung and Y-H Chan ldquoEmbedding halftones of differentresolutions in a full-scale halftonerdquo IEEE Signal ProcessingLetters vol 13 no 3 pp 153ndash156 2006

[14] Z-Q Wen Y-X Hu and W-Q Zhu ldquoA novel classificationmethod of halftone image via statistics matricesrdquo IEEE Trans-actions on Image Processing vol 23 no 11 pp 4724ndash4736 2014

[15] D Cai X He and J Han ldquoEfficient kernel discriminantanalysis via spectral regressionrdquo in Proceedings of the 7th IEEEInternational Conference on Data Mining (ICDM rsquo07) pp 427ndash432 Omaha Neb USA October 2007

[16] D Cai X He and J Han ldquoSpeed up kernel discriminantanalysisrdquo The International Journal on Very Large Data Basesvol 20 no 1 pp 21ndash33 2011

[17] M Zhao Z Zhang T W S Chow and B Li ldquoA general softlabel based Linear Discriminant Analysis for semi-superviseddimensionality reductionrdquo Neural Networks vol 55 pp 83ndash972014

[18] M Zhao Z Zhang T W S Chow and B Li ldquoSoft labelbased Linear Discriminant Analysis for image recognition andretrievalrdquo Computer Vision amp Image Understanding vol 121 no1 pp 86ndash99 2014

[19] L Zhang and F-C Tian ldquoA new kernel discriminant analysisframework for electronic nose recognitionrdquo Analytica ChimicaActa vol 816 pp 8ndash17 2014

[20] B Gu V S Sheng K Y TayW Romano and S Li ldquoIncrementalsupport vector learning for ordinal regressionrdquo IEEE Transac-tions on Neural Networks and Learning Systems vol 26 no 7pp 1403ndash1416 2015

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 5: Research Article Classification of Error-Diffused Halftone ...features from hal one images, based on which the hal one images are divided into nine categories using a decision tree

Advances in Multimedia 5

these correlations tend to be very strong even after brightnessand contrast normalization However in this paper featuresof the patches are extracted based on counting statisticalmeasures of different pixel pairs (00 10 and 11) within amoving statistical matrix template and are optimized usingthe method described in Section 33

33 Extraction of the Class Feature Matrix The statis-tics matrices 119872119894

001198721198941011987211989411(119894 = 1 2 119873) after being

extracted can be used as the input of other algorithms suchas support vector machines and neural networks Howeverthe curse of dimensionality could occur due to the highdimension of119872

001198721011987211 making the classification effect

possibly not significant Thereby six class feature matrices1198661 1198662 119866

6are designed in this paper for the error-

diffused halftone images produced by the six error diffusionfiltersmentioned aboveThen a gradient descentmethod canbe used to optimize these class feature matrices119873 = 6 times 119899 error-diffused halftone images can

be derived from 119899 original images using the six errordiffusion filters respectively Then 119873 statistics matrices119872119894(119872119894001198721198941011987211989411) (119894 = 1 2 119873) can be extracted as

the samples from 119873 error-diffused halftone images usingthe algorithm mentioned in Section 32 Subsequently welabel thesematrices as label(119872

1) label(119872

2) label(119872

119873) to

denote the types of the error diffusion filters used to producethe error-diffused halftone image Given the 119894th sample 119872

119894

as the input the target out vector 119905119894= [1199051198941 119905

1198946] (119894 =

1 119873) and the class feature matrices 1198661 1198662 119866

6 the

square error 119890119894between the actual output and the target

output can be derived according to

119890119894=6

sum119895=1

(119905119894119895minus119872119894∙ 119866119895)2 (19)

where

119905119894119895= 1 if 119895 = label (119872

119894) 119895 = 1 2 6

0 else (20)

The derivatives of 119866119895(119909 119910) in (19) can be explicitly calculated

as

120597119890119894

120597119866119895(119909 119910) = minus2 (V119894119895 minus119872119894 ∙ 119866119895)119872119894 (119909 119910) (21)

where 1 le 119909 le 119871 1 le 119910 le 119871 and ∙ is the dot product ofmatrices defined for anymatrices119860 and 119861with the same size119862 times 119863 as

119860 ∙ 119861 =119862

sum119906=1

119863

sumV=1119860 (119906 V) times 119861 (119906 V) (22)

Thedot product ofmatrices satisfies the commutative law andassociative law that is to say 119860 ∙ 119861 = 119861 ∙ 119860 and (119860 ∙ 119861) ∙ 119862 =

119860 ∙ (119861 ∙ 119862) Then the iteration equation (23) can be obtainedusing the gradient descent method

119866119896+1119895(119909 119910) = 119866119896

119895(119909 119910) minus 120578 120597119890

119894

120597119866119896119895(119909 119910)

= 119866119896119895(119909 119910)

+ 2120578 (V119894119895minus119872119894∙ 119866119895)119872119894(119909 119910)

(23)

where 120578 is the learning factor and 119896 means the 119896th iterationThe purpose of learning is to seek the optimal matrices119866119895(119895 = 1 2 6) by minimizing the total square error

119890 = sum119873119894=1119890119894 and the process of seeking the optimal matrices

119866119895can be described as follows

Step 1 Initialize parameters initialize the numbers of itera-tions inner and outer the iteration variables 119905 = 0 and 119894 = 1the nonnegative thresholds 120576

1and 120576

2used to indicate the

end of iterations the learning factor 120578 the total number ofsamples119873 and the class feature matrices119866

119895(119895 = 1 2 6)

Step 2 Input the statistics matrices119872119894(119894 = 1 2 119873) and

let 1198660119895= 119866119895(119895 = 1 6) and 119896 = 0 The following three

substeps are executed

(1) According to (23) 119866119896+1119895

(119895 = 1 6) can becomputed

(2) Compute 119890119896+1119894

and Δ119890119896+1119894= |119890119896+1119894minus 119890119896119894|

(3) If 119896 gt inner or Δ119890119896+1119894lt 1205761 then set 119890

119894= 119890119896+1119894

and go toStep 3 otherwise set 119896 = 119896 + 1 and return to (10)

Step 3 Set 119866119895= 119866119896+1119895

(119895 = 1 6) If 119894 = 119873 then go to Step4 Otherwise set 119894 = 119894 + 1 and go to Step 2Step 4 Compute the total error 119890119905 = sum119873

119894=1119890119894 If |119890119905 minus 119890119905minus1| lt 120576

2

or 119905 = outer then end the algorithm Otherwise set 119905 = 119905 + 1and go to Step 2

4 Classification of Error-Diffused HalftoneImages Using Nearest Centroids Classifier

This section describes the details on classifying error-diffusedhalftone images using the spectral regression kernel discrim-inant analysis as follows

Step 1 Input 119873 error-diffused halftone images produced bysix error-diffused filters and extract the statistical featurematrices119872

119894 including119872119894

001198721198941011987211989411 119894 = 1 2 119873 using

the method presented in Section 32

Step 2 According to the steps described in Section 33 allthe statistical feature matrices 119872

119894(11987211989400 11987211989410 11987211989411

and 119894 =1 2 119873) are converted to the class feature matrices 119866

119894

including 11986611989400 11986611989410 11986611989411and 119894 = 1 2 119873 correspondingly

Then convert 11986611989400 11986611989410 11986611989411(119894 = 1 2 119873) into one-

dimensional matrices by columns respectively

6 Advances in Multimedia

Step 3 All the one-dimensional class feature matrices11986611989400 11986611989410 11986611989411(119894 = 1 2 119873) are used to construct the

samples feature matrices 11986500 11986510 11986511

of size 119873 times 119872 (119872 =119871 times 119871) respectivelyStep 4 A label matrix information of the size 1 times119873 is built torecord the type to which the error-diffused halftone imagesbelong

Step 5 The first 119898 features 119866100 119866200 119866119898

00of the samples

featurematrices11986500are taken as the training samples (the first

119898 features of 11986510 11986511 or 119865all which is the composition of 119865

00

11986510 and 119865

11also can be used as the training samples) Reduce

the dimension of these training samples using the spectralregression discriminant analysis The process of dimensionreducing can be described by three substeps as follows

(1) Produce orthogonal vectors. Let

$$y_k = \Big[\underbrace{0, \ldots, 0}_{\sum_{i=1}^{k-1} m_i},\ \underbrace{1, \ldots, 1}_{m_k},\ \underbrace{0, \ldots, 0}_{\sum_{i=k+1}^{c} m_i}\Big]^T, \quad k = 1, 2, \ldots, c, \qquad (24)$$

and let $y_0 = [1, 1, \ldots, 1]^T$ be the vector with all elements being 1. Take $y_0$ as the first vector of the weight matrix $W$ and use the Gram–Schmidt process to orthogonalize the other eigenvectors. Remove the vector $y_0$, leaving $c - 1$ eigenvectors of $W$, denoted as follows:

$$\{y_k\}_{k=1}^{c-1} \quad (y_i^T y_0 = 0,\ y_i^T y_j = 0,\ i \neq j). \qquad (25)$$

(2) Append an element of 1 to the end of each input datum $G_{00}^i$ ($i = 1, 2, \ldots, m$) and obtain $c - 1$ vectors $\{a_k\}_{k=1}^{c-1} \in \mathbb{R}^{M+1}$, where $a_k$ is the solution of the following regularized least squares problem:

$$a_k = \arg\min_{a} \left( \sum_{i=1}^{m} \left(a^T G_{00}^i - y_k^i\right)^2 + \alpha \|a\|^2 \right). \qquad (26)$$

Here $y_k^i$ is the $i$th element of the eigenvector $y_k$, and $\alpha$ is the contraction (regularization) parameter.

(3) Let $A = [a_1, \ldots, a_{c-1}]$ be the transformation matrix of size $(M + 1) \times (c - 1)$. Perform dimension reduction on the sample features $G_{00}^1, G_{00}^2, \ldots, G_{00}^m$ by mapping them to the $(c - 1)$-dimensional subspace as follows:

$$x \longrightarrow z = A^T \begin{bmatrix} x \\ 1 \end{bmatrix}. \qquad (27)$$
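Substeps (1)–(3) amount to building class indicator responses, orthogonalizing them against the all-ones vector, solving one ridge regression per response, and projecting. The following is a minimal Python/NumPy sketch under two assumptions: the closed-form ridge solution stands in for whatever solver the authors used, and classes are indexed $0$ to $c-1$ with all classes present among the $m$ training samples.

```python
import numpy as np

def response_vectors(labels, c):
    """Indicator vectors y_k of (24), Gram-Schmidt-orthogonalized
    against the all-ones vector y_0 as in (25)."""
    labels = np.asarray(labels)
    m = len(labels)
    Y = np.stack([(labels == k).astype(float) for k in range(c)], axis=1)
    basis = [np.ones(m) / np.sqrt(m)]            # normalized y_0
    out = []
    for k in range(c):
        v = Y[:, k].copy()
        for b in basis:                          # remove projections
            v -= (b @ v) * b
        norm = np.linalg.norm(v)
        if norm > 1e-12:                         # the c-th vector degenerates
            v /= norm
            basis.append(v)
            out.append(v)
    return np.stack(out, axis=1)                 # m x (c-1), y_0 removed

def spectral_regression(X, labels, c, alpha=0.01):
    """Transformation matrix A of substeps (2)-(3), of size (M+1) x (c-1).

    X : m x M matrix whose rows are the vectorized training features G_00^i.
    """
    m, M = X.shape
    Xa = np.hstack([X, np.ones((m, 1))])         # append the constant element 1
    Y = response_vectors(labels, c)              # m x (c-1) targets
    # Closed-form solution of the regularized least squares problem (26),
    # solved for all c-1 response vectors at once.
    A = np.linalg.solve(Xa.T @ Xa + alpha * np.eye(M + 1), Xa.T @ Y)
    return A

def project(A, X):
    """Map samples into the (c-1)-dimensional subspace, as in (27)."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xa @ A
```

Both the training features of this step and the testing features of Step 7 are mapped through `project` with the same matrix $A$.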

Step 6. Compute the mean values of the samples in the different classes according to the following equation:

$$\mathrm{aver}_i = \frac{\sum_{s=1}^{m_i} G_{00}^s}{m_i}, \quad i = 1, 2, \ldots, 6, \qquad (28)$$

where $m_i$ is the number of samples in the $i$th class and $\sum_{i=1}^{6} m_i = m$. Let $\mathrm{aver}_i$ be the class-centroid of the $i$th class.

Step 7. The remaining $N - m$ samples are taken as the testing samples, and the dimension reduction is implemented for them using the method described in Step 5.

Step 8. Compute the squared distance $|G_{00}^s - \mathrm{aver}_i|^2$ ($s = m + 1, m + 2, \ldots, N$ and $i = 1, 2, \ldots, 6$) between each testing sample $G_{00}^s$ and each class-centroid $\mathrm{aver}_i$. According to the nearest centroids classifier, the sample $G_{00}^s$ is assigned to the class $i$ if $i = \arg\min_i |G_{00}^s - \mathrm{aver}_i|^2$.
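Steps 6–8 reduce to computing one mean per class in the reduced space and assigning each test point to the nearest mean. A sketch (classes indexed 0–5 instead of 1–6):

```python
import numpy as np

def nearest_centroid_classify(Z_train, train_labels, Z_test, c=6):
    """Nearest centroids classifier in the reduced space (Steps 6-8)."""
    train_labels = np.asarray(train_labels)
    # Class-centroids aver_i of (28), computed from the projected samples.
    centroids = np.stack([Z_train[train_labels == i].mean(axis=0)
                          for i in range(c)])
    # Squared Euclidean distance from every test sample to every centroid.
    d2 = ((Z_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)                 # Step 8: nearest class-centroid
```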

In Step 8, the weak classifier (i.e., the nearest centroid classifier) is used to classify the error-diffused halftone images because it is simple and easy to implement. At the same time, in order to show that the class feature matrices, extracted according to the method described in Section 3 and processed by the spectral regression discriminant analysis, are well suited for the classification of error-diffused halftone images, this weak classifier is used in this paper instead of a strong classifier [20], such as a support vector machine or a deep neural network classifier.

5. Experimental Analysis and Results

We implement various experiments to verify the efficiency of our methods in classifying error-diffused halftone images. The computer processor is an Intel(R) Pentium(R) CPU G2030 @ 3.00 GHz, the memory of the computer is 2.0 GB, the operating system is Windows 7, and the experimental simulation software is MATLAB R2012a. In our experiments, all the original images are downloaded from http://decsai.ugr.es/cvg/dbimagenes and http://msp.ee.ntust.edu.tw. About 4000 original images have been downloaded, and they are converted into 24000 error-diffused halftone images produced by the six different error diffusion filters.

5.1. Classification Accuracy Rate of the Error-Diffused Halftone Images

5.1.1. Effect of the Number of the Samples. This subsection analyzes the effect of the number of the feature samples on classification. When $L = 11$ and the feature matrices $F_{00}$, $F_{10}$, $F_{11}$, and $F_{\mathrm{all}}$ are taken as the input data, respectively, the accuracy rate of classification under different conditions is shown in Tables 1 and 2. Table 1 shows the classification accuracy rates under different numbers of training samples when the total number of samples is 12000; Table 2 shows them when the total number of samples is 24000. The digits in the first line of each table are the sizes of the training sample sets.

According to Tables 1 and 2, the classification accuracy rates under 12000 samples are higher than those under 24000 samples. Moreover, the classification accuracy rates improve with the increase of the proportion of the training samples when the number of the training samples is lower than 80% of the sample size, and the highest classification accuracy rates are achieved when the number of the training samples is


Table 1: Classification accuracy rates (%) (12000 samples).

            2000     3000     4000     5000     6000     7000     8000     9000     10000
F_00       97.750   97.500   99.050   99.150   99.150   99.150   99.100   98.950   98.900
F_10       99.950   99.950   100.00   100.00   100.00   100.00   100.00   100.00   100.00
F_11       99.500   99.250   99.350   99.400   99.200   99.350   99.300   99.300   99.450
F_all      99.100   99.100   99.450   99.500   99.650   99.650   99.650   99.650   99.600

Table 2: Classification accuracy rates (%) (24000 samples).

           14000    15000    16000    17000    18000    19000    20000    21000    22000
F_00      95.810   95.878   95.413   95.286   94.933   98.580   98.100   98.267   98.650
F_10      97.600   97.378   97.150   96.957   96.450   99.680   99.600   99.533   99.300
F_11      96.420   96.256   95.925   95.614   95.000   99.040   98.825   98.700   98.400
F_all     97.230   97.100   96.888   96.686   96.233   99.680   99.600   99.533   99.350

Table 3: Classification accuracy rates obtained by different algorithms (%).

F_00 + SR   F_10 + SR   F_11 + SR   F_all + SR   ECF + BP   LMS + Bayes   G_10 + ML   G_11 + ML
98.40       99.53       98.74       99.54        90.04      90.38         98.32       96.15

Table 4: The training and testing times under different sample sizes (in seconds).

                         14000     15000     16000     17000     18000     19000     20000     21000     22000
F_00   Training time    652.08    783.17    946.60    1104.7    1293.2    1501.8    1789.4    2051.1    5135.4
       Testing time     637.42    625.83    585.30    552.51    487.92    449.47    378.72    303.75    433.93
F_10   Training time    658.51    787.96    951.13    1107.0    1309.7    1507.6    1739.8    1989.5    2401.0
       Testing time     636.74    608.04    574.96    550.30    492.87    432.61    367.75    303.53    296.31
F_11   Training time    656.62    799.36    951.73    1106.9    1303.3    1506.3    1744.1    1983.4    2399.6
       Testing time     652.26    618.17    578.26    537.78    485.31    455.13    371.37    297.35    305.97
F_all  Training time    689.02    825.54    991.89    1152.7    1353.4    1567.1    1799.4    2048.5    2883.6
       Testing time     1020.0    983.93    931.12    872.12    801.60    690.20    585.56    493.63    369.21

about 80% of the sample size. In addition, from Tables 1 and 2 we can also see that $F_{00}$, $F_{10}$, and $F_{11}$ can each be used as the input data alone; they can also be combined into the input data $F_{\mathrm{all}}$, based on which the classification accuracy rates are also high.

5.1.2. Comparison of Classification Accuracy Rate. To analyze the effectiveness of our classification algorithm, the mean values of the classification accuracy rates of the four data sets on the right-hand side of each row in Table 2 are computed. The algorithm SR outperforms the other baselines in achieving higher classification accuracy rates when compared with LMS + Bayes (the method composed of the least mean square and Bayes methods), ECF + BP (the method based on the enhanced correlation function and a BP neural network), and ML (the maximum likelihood method). According to Table 3, the mean values of the classification accuracy rates obtained using SR with the different features $F_{00}$, $F_{10}$, $F_{11}$, and $F_{\mathrm{all}}$, respectively, are higher than the mean values obtained by the other algorithms mentioned above.

5.2. Effect of the Size of the Statistical Feature Template on Classification Accuracy. Here, the different features $F_{00}$, $F_{10}$, and $F_{11}$ of the 24000 error-diffused halftone images are used to test the effect of the size of the statistical feature template. $F_{00}$, $F_{10}$, and $F_{11}$ are constructed using the corresponding class feature matrices $G_{00}$, $G_{10}$, and $G_{11}$ with different sizes $L \times L$ ($L = 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25$). Figure 1 shows that the classification accuracy rate achieves its highest value when $L = 11$, no matter which feature is selected for the experiments.

5.3. Time-Consumption of the Classification. Now, experiments are implemented to test the time-consumption of the error-diffused halftone image classification under the condition that the total number of samples is 24000 and $L = 11$. The time-consumption of the classification includes the training time and the testing time. From Table 4, we can see that the training time increases with the number of training samples; on the contrary, the testing time decreases as the number of training samples increases.

[Figure 1: Classification accuracy rates based on different features with different sample sizes. Panels: (a) rates based on $F_{00}$ ($L = 5, 7, 9, 11, 13, 15$); (b) rates based on $F_{00}$ ($L = 17, 19, 21, 23, 25$); (c) rates based on $F_{10}$ ($L = 5, 7, 9, 11, 13, 15$); (d) rates based on $F_{10}$ ($L = 17, 19, 21, 23, 25$); (e) rates based on $F_{11}$ ($L = 5, 7, 9, 11, 13, 15$); (f) rates based on $F_{11}$ ($L = 17, 19, 21, 23, 25$). Horizontal axes: number of training samples (14000–22000); vertical axes: classification accuracy rate.]

To compare the time-consumption of the classification method proposed in this paper with other algorithms, such as the backpropagation (BP) neural network, the radial basis function (RBF) neural network, and the support vector machine (SVM), all the experiments are implemented using $F_{10}$, which is divided into two parts: 12000 training samples and 12000 testing samples ($L = 11$). From the figures listed in Table 5, we can see that SR achieves the minimal sum of training and testing time.

In addition, Table 5 implies that the time-consumption of the classifiers based on neural networks, such as those based on RBF or BP, is much higher than that of the other algorithms, especially SR. This is because these neural network algorithms essentially use gradient descent methods to


Table 5: Time-consumption of different algorithms (in seconds).

                  ML       RBF      BP       SVM      SR
Training time    1104.2   7639.0   2966.7   4929.7   439.337
Testing time     1200.0   2760.0   3600.0   6240.0   631.61

Table 6: Classification accuracy rates under different variances (%).

Variance    14000    15000    16000    17000    18000    19000    20000    21000    22000
0.01       95.570   95.422   95.425   95.529   95.217   98.440   98.200   98.033   97.950
0.10       95.370   95.233   95.188   95.157   94.867   98.200   98.000   97.767   98.000
0.50       94.580   94.522   94.600   94.671   94.567   98.320   98.225   98.033   98.550
1.00       93.060   93.067   93.013   93.343   93.200   96.980   97.175   96.800   97.350

Table 7: Classification accuracy rates using other algorithms under different variances (%).

Variance   ECF + BP   LMS + Bayes   ML        Variance   ECF + BP   LMS + Bayes   ML
0.01       89.5000    88.7577       97.9981   0.50       79.7333    40.2682       82.9169
0.10       88.1333    78.4264       96.8315   1.00       61.5000    33.8547       64.3861

optimize the associated nonconvex problems, which are well known to converge very slowly. In contrast, the classifier based on SR performs the classification task by directly computing the squared distance between each testing sample and the different class-centroids; hence its time-consumption is very low.

5.4. The Experiment of Noise Attack Resistance. In actual operation, the error-diffused halftone images are often polluted by noise before the inverse transform. In order to test the ability of SR to resist the attack of noise, Gaussian noise with mean 0 and different variances is embedded into the error-diffused halftone images. Then the classification experiments are repeated using the algorithm proposed in this paper, and the experimental results are listed in Table 6. According to Table 6, the accuracy rates decrease as the variance increases. Compared with the accuracy rates listed in Table 7, achieved by the other algorithms, such as ECF + BP, LMS + Bayes, and ML, our classification method has obvious advantages in resisting noise.
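The paper does not state how zero-mean Gaussian noise is embedded into a binary halftone image; a plausible sketch, assuming the noise is added to intensities in $[0, 1]$ and the result is re-thresholded at 0.5 so that it remains a valid halftone:

```python
import numpy as np

def add_gaussian_noise(halftone, variance, rng=None):
    """Pollute a binary (0/1) halftone image with zero-mean Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, np.sqrt(variance), halftone.shape)
    noisy = halftone.astype(float) + noise
    return (noisy >= 0.5).astype(np.uint8)   # re-threshold (an assumption)
```

The figures in Table 6 would then correspond to rerunning the classification pipeline on images noised at each variance.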

6. Conclusion

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, the spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with the added benefit of robustness in tackling noise. A very interesting direction is to handle the disturbances possibly introduced by other attacks, such as image scaling and rotation, in the process of error-diffused halftone image classification.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (Grants nos. 61170102 and 61271140), the Scientific Research Fund of Hunan Provincial Education Department, China (Grant no. 15A049), the Education Department Fund of Hunan Province in China (Grants nos. 15C0402, 15C0395, and 13C036), and the Science and Technology Planning Project of Hunan Province in China (Grant no. 2015GK3024).

References

[1] Y.-M. Kwon, M.-G. Kim, and J.-L. Kim, "Multiscale rank-based ordered dither algorithm for digital halftoning," Information Systems, vol. 48, pp. 241–247, 2015.

[2] Y. Jiang and M. Wang, "Image fusion using multiscale edge-preserving decomposition based on weighted least squares filter," IET Image Processing, vol. 8, no. 3, pp. 183–190, 2014.

[3] Z. Zhao, L. Cheng, and G. Cheng, "Neighbourhood weighted fuzzy c-means clustering algorithm for image segmentation," IET Image Processing, vol. 8, no. 3, pp. 150–161, 2014.

[4] Z.-Q. Wen, Y.-L. Lu, Z.-G. Zeng, W.-Q. Zhu, and J.-H. Ai, "Optimizing template for lookup-table inverse halftoning using elitist genetic algorithm," IEEE Signal Processing Letters, vol. 22, no. 1, pp. 71–75, 2015.

[5] P. C. Chang and C. S. Yu, "Neural net classification and LMS reconstruction to halftone images," in Visual Communications and Image Processing '98, vol. 3309 of Proceedings of SPIE, pp. 592–602, The International Society for Optical Engineering, January 1998.

[6] Y. Kong, P. Zeng, and Y. Zhang, "Classification and recognition algorithm for the halftone image," Journal of Xidian University, vol. 38, no. 5, pp. 62–69, 2011 (Chinese).

[7] Y. Kong, A study of inverse halftoning and quality assessment schemes, Ph.D. thesis, School of Computer Science and Technology, Xidian University, Xi'an, China, 2008.

[8] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Inverse halftoning based on the Bayesian theorem," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 1077–1084, 2011.

[9] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Halftone image classification using LMS algorithm and naive Bayes," IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2837–2847, 2011.

[10] D. L. Lau and G. R. Arce, Modern Digital Halftoning, CRC Press, Boca Raton, Fla, USA, 2nd edition, 2008.

[11] Image Dithering: Eleven Algorithms and Source Code, http://www.tannerhelland.com/4660/dithering-eleven-algorithms-source-code.

[12] R. A. Ulichney, "Dithering with blue noise," Proceedings of the IEEE, vol. 76, no. 1, pp. 56–79, 1988.

[13] Y.-H. Fung and Y.-H. Chan, "Embedding halftones of different resolutions in a full-scale halftone," IEEE Signal Processing Letters, vol. 13, no. 3, pp. 153–156, 2006.

[14] Z.-Q. Wen, Y.-X. Hu, and W.-Q. Zhu, "A novel classification method of halftone image via statistics matrices," IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4724–4736, 2014.

[15] D. Cai, X. He, and J. Han, "Efficient kernel discriminant analysis via spectral regression," in Proceedings of the 7th IEEE International Conference on Data Mining (ICDM '07), pp. 427–432, Omaha, Neb, USA, October 2007.

[16] D. Cai, X. He, and J. Han, "Speed up kernel discriminant analysis," The International Journal on Very Large Data Bases, vol. 20, no. 1, pp. 21–33, 2011.

[17] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "A general soft label based Linear Discriminant Analysis for semi-supervised dimensionality reduction," Neural Networks, vol. 55, pp. 83–97, 2014.

[18] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "Soft label based Linear Discriminant Analysis for image recognition and retrieval," Computer Vision and Image Understanding, vol. 121, no. 1, pp. 86–99, 2014.

[19] L. Zhang and F.-C. Tian, "A new kernel discriminant analysis framework for electronic nose recognition," Analytica Chimica Acta, vol. 816, pp. 8–17, 2014.

[20] B. Gu, V. S. Sheng, K. Y. Tay, W. Romano, and S. Li, "Incremental support vector learning for ordinal regression," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1403–1416, 2015.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 6: Research Article Classification of Error-Diffused Halftone ...features from hal one images, based on which the hal one images are divided into nine categories using a decision tree

6 Advances in Multimedia

Step 3 All the one-dimensional class feature matrices11986611989400 11986611989410 11986611989411(119894 = 1 2 119873) are used to construct the

samples feature matrices 11986500 11986510 11986511

of size 119873 times 119872 (119872 =119871 times 119871) respectivelyStep 4 A label matrix information of the size 1 times119873 is built torecord the type to which the error-diffused halftone imagesbelong

Step 5 The first 119898 features 119866100 119866200 119866119898

00of the samples

featurematrices11986500are taken as the training samples (the first

119898 features of 11986510 11986511 or 119865all which is the composition of 119865

00

11986510 and 119865

11also can be used as the training samples) Reduce

the dimension of these training samples using the spectralregression discriminant analysis The process of dimensionreducing can be described by three substeps as follows

(1) Produce orthogonal vectors Let

119910119896= [[0 0 0⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟sum119896minus1

119894=1119898119894

1 1 1⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟119898119896

0 0 0⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟sum119888

119894=119896+1119898119894

]]

119879

119896 = 1 2 119888(24)

and 1199100= [1 1 1]119879 be the matrix with all elements

being 1 Let 1199100be the first vector of the weight matrix 119882

and use Gram-Schmidt process to orthogonalize the othereigenvectors Remove the vector 119910

0 leaving 119888minus1 eigenvectors

of119882 denoted as follows

119910119896119888minus1119896=1 (119910119879

1198941199100= 0 119910119879

119894119910119895= 0 119894 = 119895) (25)

(2) Add an element of 1 to the end of each input data(11986611989400(119894 = 1 2 119898)) and obtain 119888 minus 1 vectors

119896119888minus1119896=1

isinR119899+1 where

119896is the solution of the following regular least

squares problem

119896= argmin(

119898

sum119894=1

(11987911986611989400minus 119910119896119894)2 + 120572 2) (26)

Here 119910119896119894is the 119894th element of eigenvectors 119910

119896119888minus1119896=1

and 120572 is thecontraction parameter

(3) Let 119860 = [1198861 119886

119888minus1] be the transformation matrix

with the size (119872+ 1) times (119888 minus 1) Perform dimension reductionon the sample features 1198661

00 119866200 119866119898

00 by mapping them to

(119888 minus 1) dimensional subspace as follows

119909 997888rarr 119911 = 119860119879 [1199091] (27)

Step 6 Compute the mean values of samples in differentclasses according to following equation

aver119894= sum119898119894

119904=111986611990400

119898119894

(119894 = 1 2 6) (28)

where119898119894is the number of samples in the 119894th class sum6

119894=1119898119894=

119898 Let aver119894be the class-centroid of the 119894th class

Step 7 The remaining119873minus119898 samples are taken as the testingsamples and the dimension reduction is implemented forthem using the method described in Step 5Step 8 Compute the square of the distance |119866119904

00minusaver

119894|2 (119904 =

119898 + 1119898 + 2 119873 and 119894 = 1 2 6) between each testingsample119866119904

00and different class-centroid aver

119894 according to the

nearest centroids classifier the sample 11986611990400is assigned to the

class 119894 if 119894 = argmin |11986611990400minus aver

119894|2

In Step 8 the weak classifier (ie the nearest centroidclassifier) is used to classify error-diffused halftone imagesbecause this classifier is simple and easy to implementSimultaneously in order to prove that these class featurematrices which are extracted according to the methodmentioned in Section 3 and handled by the algorithm of thespectral regression discriminant analysis are well suited forthe classification of error-diffused halftone images this weakclassifier is used in this paper instead of a strong classifier[20] such as support vector machine classifiers and deepneural network classifiers

5 Experimental Analysis and Results

We implement various experiments to verify theefficiency of our methods in classifying error-diffusedhalftone images The computer processor is Intel(R)Pentium(R) CPU G2030 300GHz the memory of thecomputer is 20GB the operating system is Windows7 and the experimental simulation software is matlabR2012a In our experiments all the original images aredownloaded from httpdecsaiugrescvgdbimagenes andhttpmspeentustedutw About 4000 original imageshave been downloaded and they are converted into 24000error-diffused halftone images produced by six differenterror-diffused filters

51 Classification Accuracy Rate of the Error-DiffusedHalftone Images

511 Effect of the Number of the Samples This subsectionanalyzes the effect of the number on the feature sampleson classification When 119871 = 11 and feature matrices 119865

00

11986510 11986511 119865all are taken as the input data respectively the

accuracy rate of classification under different conditionsis shown in Tables 1 and 2 Table 1 shows the classificationaccuracy rates under different number of training sampleswhen the total number of samples is 12000 Table 2 showsthe classification accuracy rates under different number oftraining samples when the total number of samples is 24000The digits in the first line of each table are the size of thetraining samples

According to Tables 1 and 2 the classification accuracyrates under 12000 samples are higher than that under 24000samples Moreover the classification accuracy rates improvewith the increase of the proportion of the training sampleswhen the number of the training samples is lower than the80 of sample size And it achieves the highest classificationaccuracy rates when the number of the training samples is

Advances in Multimedia 7

Table 1 Classification accuracy rates () (12000 samples)

2000 3000 4000 5000 6000 7000 8000 9000 1000011986500

97750 97500 99050 99150 99150 99150 99100 98950 9890011986510

99950 99950 10000 10000 10000 10000 10000 10000 1000011986511

99500 99250 99350 99400 99200 99350 99300 99300 99450119865all 99100 99100 99450 99500 99650 99650 99650 99650 99600

Table 2 Classification accuracy rates () (24000 samples)

14000 15000 16000 17000 18000 19000 20000 21000 2200011986500

95810 95878 95413 95286 94933 98580 98100 98267 9865011986510

97600 97378 97150 96957 96450 99680 99600 99533 9930011986511

96420 96256 95925 95614 95000 99040 98825 98700 98400119865all 97230 97100 96888 96686 96233 99680 99600 99533 99350

Table 3 Classification accuracy rates obtained by different algorithms ()

11986500+ SR 119865

10+ SR 119865

11+ SR 119865all + SR ECF + BP LMS + Bayes 119866

10+ ML 119866

11+ ML

9840 9953 9874 9954 9004 9038 9832 9615

Table 4 The training and testing time under different sample sizes (in seconds)

14000 15000 16000 17000 18000 19000 20000 21000 22000

11986500

Training time 65208 78317 94660 11047 12932 15018 17894 20511 51354Testing time 63742 62583 58530 55251 48792 44947 37872 30375 43393

11986510

Training time 65851 78796 95113 11070 13097 15076 17398 19895 24010Testing time 63674 60804 57496 5503 49287 43261 36775 30353 29631

11986511

Training time 65662 79936 95173 11069 13033 15063 17441 19834 23996Testing time 65226 61817 57826 53778 48531 45513 37137 29735 30597

119865all Training time 68902 82554 99189 11527 13534 15671 17994 20485 28836Testing time 10200 98393 93112 87212 8016 6902 58556 49363 36921

about 80 of the sample size In addition from Tables 1 and2 we can also see that 119865

00 11986510 11986511

can be used as the inputdata alone Simultaneously they can also be combined intothe input data 119865all based on which the classification accuracyrates would be high

512 Comparison of Classification Accuracy Rate To analyzethe effectiveness of our classification algorithm the meanvalues of classification accuracy rate of the four data sets onthe right-hand side of each row in Table 2 are computed ThealgorithmSRoutperforms other baselines in achieving higherclassification accuracy rates when compared with LMS +Bayes (the method composed of least mean square and Bayesmethod) ECF + BP (the method based on the enhancedcorrelation function and BP neural network) and ML (themaximum likelihood method) According to Table 3 themean values of classification accuracy rates obtained usingSR and different features 119865

00 11986510 11986511 and 119865all respectively

are higher than themean values obtained by other algorithmsmentioned above

52 Effect of the Size of Statistical Feature Template onClassification Accuracies Here different features 119865

00 11986510

11986511

of the 24000 error-diffused halftone images are usedto test the effect of the size of statistical feature template11986500 11986510 11986511

are constructed using the corresponding classfeature matrices 119866

00 11986610 11986611

with deferent size 119871 times 119871 (119871 =5 7 9 11 13 15 17 19 21 23 25) Figure 1 shows that theclassification accuracy rate achieves the highest value when119871 = 11 no matter which feature is selected for experiments

53 Time-Consumption of the Classification Now the exper-iments are implemented to test the time-consumption ofthe error-diffused halftone image classification under thecondition that the total number of samples is 24000 and119871 = 11 It is well known that the time-consumption of theclassification includes the training time and the testing timeFrom Table 4 we can know that the training time increaseswith the increase of the number of training samples on thecontrary the testing time decreases with the increase of thenumber of training samples

8 Advances in Multimedia

14000 15000 16000 17000 18000 19000 20000 21000 220000808208408608809092094096098

1

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(a) Rates based on 11986500 (119871 = 5 7 9 11 13 15)

14000 15000 16000 17000 18000 19000 20000 21000 220000808208408608809092094096098

1

L = 17

L = 19

L = 21

L = 23

L = 25

(b) Rates based on 11986500 (119871 = 17 19 21 23 25)

14000 15000 16000 17000 18000 19000 20000 21000 220000808108208308408508608708808909

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(c) Rates based on 11986510 (119871 = 5 7 9 11 13 15)

08

09

14000 15000 16000 17000 18000 19000 20000 21000 22000

081082083084085086087088089

L = 17

L = 19

L = 21

L = 23

L = 25

(d) Rates based on 11986510 (119871 = 17 19 21 23 25)

09

14000 15000 16000 17000 18000 19000 20000 21000 22000082084086088

092094096098

1

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(e) Rates based on 11986511 (119871 = 5 7 9 11 13 15)

08

09

14000 15000 16000 17000 18000 19000 20000 21000 22000

082084086088

092094096098

1

L = 17

L = 19

L = 21

L = 23

L = 25

(f) Rates based on 11986511 (119871 = 17 19 21 23 25)

Figure 1 Classification accuracy rates based on deferent features with deferent sample size

To compare the time-consumption of the classificationmethod proposed in this paper with other algorithms suchas the backpropagation (BP) neural network radial basisfunction (RBF) neural network and support vector machine(SVM) all the experiments are implemented using119865

10 which

is divided into two parts 12000 training samples and 12000testing samples (119871 = 11) From the digits listed in Table 5

we can know that SR achieves the minimal summation of thetraining and testing time

In addition Table 5 implies that the time-consumptionof classifiers based on neural networks such as the classifierbased on RBF or BP are much more than that of otheralgorithms especially SR This is because these neural net-work algorithms essentially use gradient descent methods to

Advances in Multimedia 9

Table 5 Time-consumption of different algorithms (in second)

ML RBF BP SVM SRTraining time 11042 76390 29667 49297 439337Testing time 12000 27600 36000 62400 63161

Table 6 Classification accuracy rates under different variances ()

Variance 14000 15000 16000 17000 18000 19000 20000 21000 22000001 95570 95422 95425 95529 95217 98440 98200 98033 97950010 95370 95233 95188 95157 94867 98200 98000 97767 98000050 94580 94522 94600 94671 94567 98320 98225 98033 98550100 93060 93067 93013 93343 93200 96980 97175 96800 97350

Table 7 Classification accuracy rates using other algorithms under different variances

Variance ECF + BP LMS + Bayes ML Variance ECF + BP LMS + Bayes ML001 895000 887577 979981 050 797333 402682 829169010 881333 784264 968315 100 615000 338547 643861

optimize the associated nonconvex problems which are wellknown to converge very slowly However the classifier basedon SR performs the classification task through computing thesquare of the distance between each testing sample and differ-ent class-centroids directly Hence the time-consumption ofit is very cheap

54The Experiment of Noise Attack Resistance In the processof actual operation the error-defused halftone images areoften polluted by noise before the inverse transform In orderto test the ability of SR to resist the attack of noise differentGaussian noises with mean 0 and different variances areembedded into the error-defused halftone imagesThen clas-sification experiments have been done using the algorithmproposed in this paper and the experimental results are listedin Table 6 According to Table 6 the accuracy rates decreasewith the increase of the variances As compared with theaccuracy rates listed in Table 7 achieved by other algorithmssuch as ECF + BP LMS + Bayes and ML we find that ourclassification method has obvious advantages in resisting thenoise

6 Conclusion

This paper proposes a novel algorithm to solve the chal-lenging problem of classifying the error-diffused halftoneimages We firstly design the class feature matrices afterextracting the image patches according to their statisticalcharacteristics to classify the error-diffused halftone imagesThen the spectral regression kernel discriminant analysisis used for feature dimension reduction The error-diffusedhalftone images are finally classified using an idea similarto the nearest centroids classifier As demonstrated by theexperimental results our method is fast and can achieve ahigh classification accuracy rate with an added benefit ofrobustness in tackling noise A very interesting direction isto solve the disturbance possibly introduced by other attacks

such as image scaling and rotation in the process of error-diffused halftone image classification

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

This work is supported in part by the National Natural Sci-ence Foundation of China (Grants nos 61170102 61271140)the Scientific Research Fund of Hunan Provincial Educa-tion Department China (Grant no 15A049) the EducationDepartment Fund of Hunan Province in China (Grantsnos 15C0402 15C0395 and 13C036) and the Science andTechnology Planning Project of Hunan Province in China(Grant no 2015GK3024)

References

[1] Y-M Kwon M-G Kim and J-L Kim ldquoMultiscale rank-basedordered dither algorithm for digital halftoningrdquo InformationSystems vol 48 pp 241ndash247 2015

[2] Y Jiang and M Wang ldquoImage fusion using multiscale edge-preserving decomposition based on weighted least squaresfilterrdquo IET Image Processing vol 8 no 3 pp 183ndash190 2014

[3] Z Zhao L Cheng and G Cheng ldquoNeighbourhood weightedfuzzy c-means clustering algorithm for image segmentationrdquoIET Image Processing vol 8 no 3 pp 150ndash161 2014

[4] Z-Q Wen Y-L Lu Z-G Zeng W-Q Zhu and J-H AildquoOptimizing template for lookup-table inverse halftoning usingelitist genetic algorithmrdquo IEEE Signal Processing Letters vol 22no 1 pp 71ndash75 2015

[5] P C Chang and C S Yu ldquoNeural net classification and LMSreconstruction to halftone imagesrdquo in Visual Communicationsand Image Processing rsquo98 vol 3309 of Proceedings of SPIE pp592ndash602 The International Society for Optical EngineeringJanuary 1998

10 Advances in Multimedia

[6] Y Kong P Zeng and Y Zhang ldquoClassification and recognitionalgorithm for the halftone imagerdquo Journal of Xidian Universityvol 38 no 5 pp 62ndash69 2011 (Chinese)

[7] Y Kong A study of inverse halftoning and quality assessmentschemes [PhD thesis] School of Computer Science and Tech-nology Xidian University Xian China 2008

[8] Y-F Liu J-M Guo and J-D Lee ldquoInverse halftoning based onthe Bayesian theoremrdquo IEEE Transactions on Image Processingvol 20 no 4 pp 1077ndash1084 2011

[9] Y-F Liu J-MGuo and J-D Lee ldquoHalftone image classificationusing LMS algorithm and naive Bayesrdquo IEEE Transactions onImage Processing vol 20 no 10 pp 2837ndash2847 2011

[10] D L Lau andG R ArceModern Digital Halftoning CRC PressBoca Raton Fla USA 2nd edition 2008

[11] Image Dithering Eleven Algorithms and Source Code httpwwwtannerhellandcom4660dithering-elevenalgorithms-source-code

[12] R A Ulichney ldquoDithering with blue noiserdquo Proceedings of theIEEE vol 76 no 1 pp 56ndash79 1988

[13] Y-H Fung and Y-H Chan ldquoEmbedding halftones of differentresolutions in a full-scale halftonerdquo IEEE Signal ProcessingLetters vol 13 no 3 pp 153ndash156 2006

[14] Z-Q Wen Y-X Hu and W-Q Zhu ldquoA novel classificationmethod of halftone image via statistics matricesrdquo IEEE Trans-actions on Image Processing vol 23 no 11 pp 4724ndash4736 2014

[15] D Cai X He and J Han ldquoEfficient kernel discriminantanalysis via spectral regressionrdquo in Proceedings of the 7th IEEEInternational Conference on Data Mining (ICDM rsquo07) pp 427ndash432 Omaha Neb USA October 2007

[16] D Cai X He and J Han ldquoSpeed up kernel discriminantanalysisrdquo The International Journal on Very Large Data Basesvol 20 no 1 pp 21ndash33 2011

[17] M Zhao Z Zhang T W S Chow and B Li ldquoA general softlabel based Linear Discriminant Analysis for semi-superviseddimensionality reductionrdquo Neural Networks vol 55 pp 83ndash972014

[18] M Zhao Z Zhang T W S Chow and B Li ldquoSoft labelbased Linear Discriminant Analysis for image recognition andretrievalrdquo Computer Vision amp Image Understanding vol 121 no1 pp 86ndash99 2014

[19] L Zhang and F-C Tian ldquoA new kernel discriminant analysisframework for electronic nose recognitionrdquo Analytica ChimicaActa vol 816 pp 8ndash17 2014

[20] B Gu V S Sheng K Y TayW Romano and S Li ldquoIncrementalsupport vector learning for ordinal regressionrdquo IEEE Transac-tions on Neural Networks and Learning Systems vol 26 no 7pp 1403ndash1416 2015

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 7: Research Article Classification of Error-Diffused Halftone ...features from hal one images, based on which the hal one images are divided into nine categories using a decision tree

Advances in Multimedia 7

Table 1 Classification accuracy rates () (12000 samples)

2000 3000 4000 5000 6000 7000 8000 9000 1000011986500

97750 97500 99050 99150 99150 99150 99100 98950 9890011986510

99950 99950 10000 10000 10000 10000 10000 10000 1000011986511

99500 99250 99350 99400 99200 99350 99300 99300 99450119865all 99100 99100 99450 99500 99650 99650 99650 99650 99600

Table 2 Classification accuracy rates () (24000 samples)

14000 15000 16000 17000 18000 19000 20000 21000 2200011986500

95810 95878 95413 95286 94933 98580 98100 98267 9865011986510

97600 97378 97150 96957 96450 99680 99600 99533 9930011986511

96420 96256 95925 95614 95000 99040 98825 98700 98400119865all 97230 97100 96888 96686 96233 99680 99600 99533 99350

Table 3 Classification accuracy rates obtained by different algorithms ()

11986500+ SR 119865

10+ SR 119865

11+ SR 119865all + SR ECF + BP LMS + Bayes 119866

10+ ML 119866

11+ ML

9840 9953 9874 9954 9004 9038 9832 9615

Table 4 The training and testing time under different sample sizes (in seconds)

14000 15000 16000 17000 18000 19000 20000 21000 22000

11986500

Training time 65208 78317 94660 11047 12932 15018 17894 20511 51354Testing time 63742 62583 58530 55251 48792 44947 37872 30375 43393

11986510

Training time 65851 78796 95113 11070 13097 15076 17398 19895 24010Testing time 63674 60804 57496 5503 49287 43261 36775 30353 29631

11986511

Training time 65662 79936 95173 11069 13033 15063 17441 19834 23996Testing time 65226 61817 57826 53778 48531 45513 37137 29735 30597

119865all Training time 68902 82554 99189 11527 13534 15671 17994 20485 28836Testing time 10200 98393 93112 87212 8016 6902 58556 49363 36921

about 80 of the sample size In addition from Tables 1 and2 we can also see that 119865

00 11986510 11986511

can be used as the inputdata alone Simultaneously they can also be combined intothe input data 119865all based on which the classification accuracyrates would be high

512 Comparison of Classification Accuracy Rate To analyzethe effectiveness of our classification algorithm the meanvalues of classification accuracy rate of the four data sets onthe right-hand side of each row in Table 2 are computed ThealgorithmSRoutperforms other baselines in achieving higherclassification accuracy rates when compared with LMS +Bayes (the method composed of least mean square and Bayesmethod) ECF + BP (the method based on the enhancedcorrelation function and BP neural network) and ML (themaximum likelihood method) According to Table 3 themean values of classification accuracy rates obtained usingSR and different features 119865

00 11986510 11986511 and 119865all respectively

are higher than themean values obtained by other algorithmsmentioned above

52 Effect of the Size of Statistical Feature Template onClassification Accuracies Here different features 119865

00 11986510

11986511

of the 24000 error-diffused halftone images are usedto test the effect of the size of statistical feature template11986500 11986510 11986511

are constructed using the corresponding classfeature matrices 119866

00 11986610 11986611

with deferent size 119871 times 119871 (119871 =5 7 9 11 13 15 17 19 21 23 25) Figure 1 shows that theclassification accuracy rate achieves the highest value when119871 = 11 no matter which feature is selected for experiments

53 Time-Consumption of the Classification Now the exper-iments are implemented to test the time-consumption ofthe error-diffused halftone image classification under thecondition that the total number of samples is 24000 and119871 = 11 It is well known that the time-consumption of theclassification includes the training time and the testing timeFrom Table 4 we can know that the training time increaseswith the increase of the number of training samples on thecontrary the testing time decreases with the increase of thenumber of training samples

8 Advances in Multimedia

14000 15000 16000 17000 18000 19000 20000 21000 220000808208408608809092094096098

1

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(a) Rates based on 11986500 (119871 = 5 7 9 11 13 15)

14000 15000 16000 17000 18000 19000 20000 21000 220000808208408608809092094096098

1

L = 17

L = 19

L = 21

L = 23

L = 25

(b) Rates based on 11986500 (119871 = 17 19 21 23 25)

14000 15000 16000 17000 18000 19000 20000 21000 220000808108208308408508608708808909

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(c) Rates based on 11986510 (119871 = 5 7 9 11 13 15)

08

09

14000 15000 16000 17000 18000 19000 20000 21000 22000

081082083084085086087088089

L = 17

L = 19

L = 21

L = 23

L = 25

(d) Rates based on 11986510 (119871 = 17 19 21 23 25)

09

14000 15000 16000 17000 18000 19000 20000 21000 22000082084086088

092094096098

1

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(e) Rates based on 11986511 (119871 = 5 7 9 11 13 15)

08

09

14000 15000 16000 17000 18000 19000 20000 21000 22000

082084086088

092094096098

1

L = 17

L = 19

L = 21

L = 23

L = 25

(f) Rates based on 11986511 (119871 = 17 19 21 23 25)

Figure 1 Classification accuracy rates based on deferent features with deferent sample size

To compare the time-consumption of the classificationmethod proposed in this paper with other algorithms suchas the backpropagation (BP) neural network radial basisfunction (RBF) neural network and support vector machine(SVM) all the experiments are implemented using119865

10 which

is divided into two parts 12000 training samples and 12000testing samples (119871 = 11) From the digits listed in Table 5

we can know that SR achieves the minimal summation of thetraining and testing time

In addition Table 5 implies that the time-consumptionof classifiers based on neural networks such as the classifierbased on RBF or BP are much more than that of otheralgorithms especially SR This is because these neural net-work algorithms essentially use gradient descent methods to

Advances in Multimedia 9

Table 5 Time-consumption of different algorithms (in second)

ML RBF BP SVM SRTraining time 11042 76390 29667 49297 439337Testing time 12000 27600 36000 62400 63161

Table 6 Classification accuracy rates under different variances ()

Variance 14000 15000 16000 17000 18000 19000 20000 21000 22000001 95570 95422 95425 95529 95217 98440 98200 98033 97950010 95370 95233 95188 95157 94867 98200 98000 97767 98000050 94580 94522 94600 94671 94567 98320 98225 98033 98550100 93060 93067 93013 93343 93200 96980 97175 96800 97350

Table 7 Classification accuracy rates using other algorithms under different variances

Variance ECF + BP LMS + Bayes ML Variance ECF + BP LMS + Bayes ML001 895000 887577 979981 050 797333 402682 829169010 881333 784264 968315 100 615000 338547 643861

optimize the associated nonconvex problems which are wellknown to converge very slowly However the classifier basedon SR performs the classification task through computing thesquare of the distance between each testing sample and differ-ent class-centroids directly Hence the time-consumption ofit is very cheap

54The Experiment of Noise Attack Resistance In the processof actual operation the error-defused halftone images areoften polluted by noise before the inverse transform In orderto test the ability of SR to resist the attack of noise differentGaussian noises with mean 0 and different variances areembedded into the error-defused halftone imagesThen clas-sification experiments have been done using the algorithmproposed in this paper and the experimental results are listedin Table 6 According to Table 6 the accuracy rates decreasewith the increase of the variances As compared with theaccuracy rates listed in Table 7 achieved by other algorithmssuch as ECF + BP LMS + Bayes and ML we find that ourclassification method has obvious advantages in resisting thenoise

6 Conclusion

This paper proposes a novel algorithm to solve the chal-lenging problem of classifying the error-diffused halftoneimages We firstly design the class feature matrices afterextracting the image patches according to their statisticalcharacteristics to classify the error-diffused halftone imagesThen the spectral regression kernel discriminant analysisis used for feature dimension reduction The error-diffusedhalftone images are finally classified using an idea similarto the nearest centroids classifier As demonstrated by theexperimental results our method is fast and can achieve ahigh classification accuracy rate with an added benefit ofrobustness in tackling noise A very interesting direction isto solve the disturbance possibly introduced by other attacks

such as image scaling and rotation in the process of error-diffused halftone image classification

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

This work is supported in part by the National Natural Sci-ence Foundation of China (Grants nos 61170102 61271140)the Scientific Research Fund of Hunan Provincial Educa-tion Department China (Grant no 15A049) the EducationDepartment Fund of Hunan Province in China (Grantsnos 15C0402 15C0395 and 13C036) and the Science andTechnology Planning Project of Hunan Province in China(Grant no 2015GK3024)

References

[1] Y-M Kwon M-G Kim and J-L Kim ldquoMultiscale rank-basedordered dither algorithm for digital halftoningrdquo InformationSystems vol 48 pp 241ndash247 2015

[2] Y Jiang and M Wang ldquoImage fusion using multiscale edge-preserving decomposition based on weighted least squaresfilterrdquo IET Image Processing vol 8 no 3 pp 183ndash190 2014

[3] Z Zhao L Cheng and G Cheng ldquoNeighbourhood weightedfuzzy c-means clustering algorithm for image segmentationrdquoIET Image Processing vol 8 no 3 pp 150ndash161 2014

[4] Z-Q Wen Y-L Lu Z-G Zeng W-Q Zhu and J-H AildquoOptimizing template for lookup-table inverse halftoning usingelitist genetic algorithmrdquo IEEE Signal Processing Letters vol 22no 1 pp 71ndash75 2015

[5] P C Chang and C S Yu ldquoNeural net classification and LMSreconstruction to halftone imagesrdquo in Visual Communicationsand Image Processing rsquo98 vol 3309 of Proceedings of SPIE pp592ndash602 The International Society for Optical EngineeringJanuary 1998

10 Advances in Multimedia

[6] Y Kong P Zeng and Y Zhang ldquoClassification and recognitionalgorithm for the halftone imagerdquo Journal of Xidian Universityvol 38 no 5 pp 62ndash69 2011 (Chinese)

[7] Y Kong A study of inverse halftoning and quality assessmentschemes [PhD thesis] School of Computer Science and Tech-nology Xidian University Xian China 2008

[8] Y-F Liu J-M Guo and J-D Lee ldquoInverse halftoning based onthe Bayesian theoremrdquo IEEE Transactions on Image Processingvol 20 no 4 pp 1077ndash1084 2011

[9] Y-F Liu J-MGuo and J-D Lee ldquoHalftone image classificationusing LMS algorithm and naive Bayesrdquo IEEE Transactions onImage Processing vol 20 no 10 pp 2837ndash2847 2011

[10] D L Lau andG R ArceModern Digital Halftoning CRC PressBoca Raton Fla USA 2nd edition 2008

[11] Image Dithering Eleven Algorithms and Source Code httpwwwtannerhellandcom4660dithering-elevenalgorithms-source-code

[12] R A Ulichney ldquoDithering with blue noiserdquo Proceedings of theIEEE vol 76 no 1 pp 56ndash79 1988

[13] Y-H Fung and Y-H Chan ldquoEmbedding halftones of differentresolutions in a full-scale halftonerdquo IEEE Signal ProcessingLetters vol 13 no 3 pp 153ndash156 2006

[14] Z-Q Wen Y-X Hu and W-Q Zhu ldquoA novel classificationmethod of halftone image via statistics matricesrdquo IEEE Trans-actions on Image Processing vol 23 no 11 pp 4724ndash4736 2014

[15] D Cai X He and J Han ldquoEfficient kernel discriminantanalysis via spectral regressionrdquo in Proceedings of the 7th IEEEInternational Conference on Data Mining (ICDM rsquo07) pp 427ndash432 Omaha Neb USA October 2007

[16] D Cai X He and J Han ldquoSpeed up kernel discriminantanalysisrdquo The International Journal on Very Large Data Basesvol 20 no 1 pp 21ndash33 2011

[17] M Zhao Z Zhang T W S Chow and B Li ldquoA general softlabel based Linear Discriminant Analysis for semi-superviseddimensionality reductionrdquo Neural Networks vol 55 pp 83ndash972014

[18] M Zhao Z Zhang T W S Chow and B Li ldquoSoft labelbased Linear Discriminant Analysis for image recognition andretrievalrdquo Computer Vision amp Image Understanding vol 121 no1 pp 86ndash99 2014

[19] L Zhang and F-C Tian ldquoA new kernel discriminant analysisframework for electronic nose recognitionrdquo Analytica ChimicaActa vol 816 pp 8ndash17 2014

[20] B Gu V S Sheng K Y TayW Romano and S Li ldquoIncrementalsupport vector learning for ordinal regressionrdquo IEEE Transac-tions on Neural Networks and Learning Systems vol 26 no 7pp 1403ndash1416 2015

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 8: Research Article Classification of Error-Diffused Halftone ...features from hal one images, based on which the hal one images are divided into nine categories using a decision tree

8 Advances in Multimedia

14000 15000 16000 17000 18000 19000 20000 21000 220000808208408608809092094096098

1

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(a) Rates based on 11986500 (119871 = 5 7 9 11 13 15)

14000 15000 16000 17000 18000 19000 20000 21000 220000808208408608809092094096098

1

L = 17

L = 19

L = 21

L = 23

L = 25

(b) Rates based on 11986500 (119871 = 17 19 21 23 25)

14000 15000 16000 17000 18000 19000 20000 21000 220000808108208308408508608708808909

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(c) Rates based on 11986510 (119871 = 5 7 9 11 13 15)

08

09

14000 15000 16000 17000 18000 19000 20000 21000 22000

081082083084085086087088089

L = 17

L = 19

L = 21

L = 23

L = 25

(d) Rates based on 11986510 (119871 = 17 19 21 23 25)

09

14000 15000 16000 17000 18000 19000 20000 21000 22000082084086088

092094096098

1

L = 5

L = 7

L = 9

L = 11

L = 13

L = 15

(e) Rates based on 11986511 (119871 = 5 7 9 11 13 15)

08

09

14000 15000 16000 17000 18000 19000 20000 21000 22000

082084086088

092094096098

1

L = 17

L = 19

L = 21

L = 23

L = 25

(f) Rates based on 11986511 (119871 = 17 19 21 23 25)

Figure 1 Classification accuracy rates based on deferent features with deferent sample size

To compare the time-consumption of the classificationmethod proposed in this paper with other algorithms suchas the backpropagation (BP) neural network radial basisfunction (RBF) neural network and support vector machine(SVM) all the experiments are implemented using119865

10 which

is divided into two parts 12000 training samples and 12000testing samples (119871 = 11) From the digits listed in Table 5

we can know that SR achieves the minimal summation of thetraining and testing time

In addition Table 5 implies that the time-consumptionof classifiers based on neural networks such as the classifierbased on RBF or BP are much more than that of otheralgorithms especially SR This is because these neural net-work algorithms essentially use gradient descent methods to

Advances in Multimedia 9

Table 5 Time-consumption of different algorithms (in second)

ML RBF BP SVM SRTraining time 11042 76390 29667 49297 439337Testing time 12000 27600 36000 62400 63161

Table 6: Classification accuracy rates (%) under different noise variances and sample sizes.

    Variance   14000    15000    16000    17000    18000    19000    20000    21000    22000
    0.01       95.570   95.422   95.425   95.529   95.217   98.440   98.200   98.033   97.950
    0.10       95.370   95.233   95.188   95.157   94.867   98.200   98.000   97.767   98.000
    0.50       94.580   94.522   94.600   94.671   94.567   98.320   98.225   98.033   98.550
    1.00       93.060   93.067   93.013   93.343   93.200   96.980   97.175   96.800   97.350

Table 7: Classification accuracy rates (%) using other algorithms under different variances.

    Variance   ECF + BP   LMS + Bayes   ML
    0.01       89.5000    88.7577       97.9981
    0.10       88.1333    78.4264       96.8315
    0.50       79.7333    40.2682       82.9169
    1.00       61.5000    33.8547       64.3861


5.4. The Experiment of Noise-Attack Resistance. In actual operation, error-diffused halftone images are often polluted by noise before the inverse transform. To test the ability of SR to resist noise attacks, Gaussian noise with mean 0 and different variances is added to the error-diffused halftone images. Classification experiments are then carried out using the algorithm proposed in this paper, and the experimental results are listed in Table 6. According to Table 6, the accuracy rates decrease as the variance increases. Compared with the accuracy rates achieved by other algorithms, such as ECF + BP, LMS + Bayes, and ML, listed in Table 7, our classification method has an obvious advantage in resisting noise.
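As an illustration of this noise attack, here is a brief sketch that adds zero-mean Gaussian noise of a given variance to a binary halftone image; the re-quantization threshold of 0.5 is our assumption, since the quantization step is not specified in the paper.

    import numpy as np

    def add_gaussian_noise(halftone, variance, seed=None):
        # Zero-mean Gaussian noise with the chosen variance (std = sqrt(variance)).
        rng = np.random.default_rng(seed)
        noisy = halftone.astype(np.float64) + rng.normal(0.0, np.sqrt(variance), size=halftone.shape)
        # Assumed threshold back to a binary image; the paper does not state this step.
        return (noisy > 0.5).astype(np.uint8)

    # Variances used in Table 6: 0.01, 0.10, 0.50, and 1.00.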

6. Conclusion

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and achieves a high classification accuracy rate, with the added benefit of robustness against noise. A very interesting direction for future work is to address the disturbances possibly introduced by other attacks, such as image scaling and rotation, in the process of error-diffused halftone image classification.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (Grants nos. 61170102 and 61271140), the Scientific Research Fund of Hunan Provincial Education Department, China (Grant no. 15A049), the Education Department Fund of Hunan Province in China (Grants nos. 15C0402, 15C0395, and 13C036), and the Science and Technology Planning Project of Hunan Province in China (Grant no. 2015GK3024).

References

[1] Y.-M. Kwon, M.-G. Kim, and J.-L. Kim, "Multiscale rank-based ordered dither algorithm for digital halftoning," Information Systems, vol. 48, pp. 241–247, 2015.

[2] Y. Jiang and M. Wang, "Image fusion using multiscale edge-preserving decomposition based on weighted least squares filter," IET Image Processing, vol. 8, no. 3, pp. 183–190, 2014.

[3] Z. Zhao, L. Cheng, and G. Cheng, "Neighbourhood weighted fuzzy c-means clustering algorithm for image segmentation," IET Image Processing, vol. 8, no. 3, pp. 150–161, 2014.

[4] Z.-Q. Wen, Y.-L. Lu, Z.-G. Zeng, W.-Q. Zhu, and J.-H. Ai, "Optimizing template for lookup-table inverse halftoning using elitist genetic algorithm," IEEE Signal Processing Letters, vol. 22, no. 1, pp. 71–75, 2015.

[5] P. C. Chang and C. S. Yu, "Neural net classification and LMS reconstruction to halftone images," in Visual Communications and Image Processing '98, vol. 3309 of Proceedings of SPIE, pp. 592–602, The International Society for Optical Engineering, January 1998.

[6] Y. Kong, P. Zeng, and Y. Zhang, "Classification and recognition algorithm for the halftone image," Journal of Xidian University, vol. 38, no. 5, pp. 62–69, 2011 (Chinese).

[7] Y. Kong, A study of inverse halftoning and quality assessment schemes [Ph.D. thesis], School of Computer Science and Technology, Xidian University, Xi'an, China, 2008.

[8] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Inverse halftoning based on the Bayesian theorem," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 1077–1084, 2011.

[9] Y.-F. Liu, J.-M. Guo, and J.-D. Lee, "Halftone image classification using LMS algorithm and naive Bayes," IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2837–2847, 2011.

[10] D. L. Lau and G. R. Arce, Modern Digital Halftoning, CRC Press, Boca Raton, Fla, USA, 2nd edition, 2008.

[11] "Image Dithering: Eleven Algorithms and Source Code," http://www.tannerhelland.com/4660/dithering-eleven-algorithms-source-code/.

[12] R. A. Ulichney, "Dithering with blue noise," Proceedings of the IEEE, vol. 76, no. 1, pp. 56–79, 1988.

[13] Y.-H. Fung and Y.-H. Chan, "Embedding halftones of different resolutions in a full-scale halftone," IEEE Signal Processing Letters, vol. 13, no. 3, pp. 153–156, 2006.

[14] Z.-Q. Wen, Y.-X. Hu, and W.-Q. Zhu, "A novel classification method of halftone image via statistics matrices," IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4724–4736, 2014.

[15] D. Cai, X. He, and J. Han, "Efficient kernel discriminant analysis via spectral regression," in Proceedings of the 7th IEEE International Conference on Data Mining (ICDM '07), pp. 427–432, Omaha, Neb, USA, October 2007.

[16] D. Cai, X. He, and J. Han, "Speed up kernel discriminant analysis," The International Journal on Very Large Data Bases, vol. 20, no. 1, pp. 21–33, 2011.

[17] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "A general soft label based linear discriminant analysis for semi-supervised dimensionality reduction," Neural Networks, vol. 55, pp. 83–97, 2014.

[18] M. Zhao, Z. Zhang, T. W. S. Chow, and B. Li, "Soft label based linear discriminant analysis for image recognition and retrieval," Computer Vision & Image Understanding, vol. 121, no. 1, pp. 86–99, 2014.

[19] L. Zhang and F.-C. Tian, "A new kernel discriminant analysis framework for electronic nose recognition," Analytica Chimica Acta, vol. 816, pp. 8–17, 2014.

[20] B. Gu, V. S. Sheng, K. Y. Tay, W. Romano, and S. Li, "Incremental support vector learning for ordinal regression," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1403–1416, 2015.
