Texture Image Classification Using Support Vector Machine
Mr. S. R. Suralkar, Associate Professor (E & TC Dept.), SSBT's COET, Bambhori, Jalgaon ([email protected])
Mr. A. H. Karode, Assistant Professor (E & TC Dept.), SSBT's COET, Bambhori, Jalgaon ([email protected])
Ms. Priti W. Pawade, M.E. 2nd Year (Digital Electronics), SSBT's COET, Bambhori, Jalgaon (priti_pawade6@rediffmail.com)
Abstract
Texture refers to properties that represent the
surface or structure of an object and is defined as
something consisting of mutually related elements.
The main focus in this study is to do texture
segmentation and classification for texture images.
Statistical features can be calculated based on the
grey level co-occurrence probabilities (GLCP)
generated. The statistical features used in this study
are uniformity, contrast, and entropy. The features
are obtained by using a combination of different
angles. For noise reduction, an appropriate moving
average is applied to the statistical features. To post-
process the image, support vector machines (SVM)
are used to classify the extracted features. The
kernel functions tested are the second-degree
polynomial, radial basis function (RBF), exponential
radial basis function (ERBF), sigmoid, and odd-order
B-spline. RBF and ERBF achieved the best
classification accuracy compared to the other kernels.
SVM also automatically defines the centres of the
RBF kernel during optimization. The Brodatz texture
album is used in this study to test the results. In the
study, combined GLCP with SVM post-processing
showed a marked improvement over other classifiers
in terms of classification accuracy.
Keywords: Support Vector Machines, Grey
Level Co-occurrence Probabilities, Image
segmentation, Texture Classification
1. INTRODUCTION
Texture is defined as a pattern that is repeated
on the surface or structure of an object. To
separate textures into single texture types, we
first need to preserve the spatial information of
each texture. For instance, manual grey level
thresholding does not provide spatial
information for each texture and can therefore
produce inappropriate segmentation results.
Edge detection techniques applied to texture
images can produce noisy and discontinuous
edges, which complicates the segmentation
process. The grey level co-occurrence
probabilities (GLCP) method is used as a
texture descriptor in the feature extraction
process. The selection of a certain texture is
possible because it is based on the distribution
in the grey level co-occurrence matrix (GLCM).
Boundaries between textures can be created by
searching for gradients in one-dimensional (1D)
GLCP statistical features. The GLCP extraction
process is computationally expensive and its
running time can vary, so some approaches have
modified the structure of the GLCP algorithm to
speed up the texture feature extraction process.
We present a novel texture classification
algorithm in which the Grey Level Co-occurrence
Probabilities (GLCP) method is used to extract
features from texture images and support vector
machines (SVM) classify them [4]. GLCP
statistics are used to preserve the spatial
characteristics of a texture, and the selection of
a certain texture is possible based on its
statistical features [5]. The statistical features
best suited for analysis are entropy, contrast,
homogeneity and correlation; however, further
analysis shows that correlation is not suitable
for texture segmentation. GLCP statistics can
also be used to discriminate between two
different textures. The feature vector is first
used for classification of the extracted features
using the GSVM (Gaussian SVM) classifier.
The experimental setup consists of images from
the Brodatz texture database and combinations
of some images therein. The proposed method
produces promising classification results for
both single and multiple class texture analysis
problems [4].
2. SVM – An Introductory Overview
In the context of supervised classification,
machine learning and pattern recognition are
concerned with extracting regularity, or some
sort of structure, from a collection of data.
Neural networks (NN) and Bayesian classifiers
are typical examples of methods that learn such
organization from given data observations.
Support Vector Machines (SVM) are a
relatively new classifier based on strong
foundations from the broad area of statistical
Priti W Pawade et al,Int.J.Comp.Tech.Appl,Vol 3 (1), 71-75
IJCTA | JAN-FEB 2012 Available [email protected]
71
ISSN:2229-6093
learning theory [4]. Since its inception in the
early 1990s, it has found applications in a wide
range of pattern recognition problems, to name
a few: handwritten character recognition, image
classification, financial time series prediction,
face detection, bioinformatics, biomedical
signal analysis, medical diagnostics, and data
mining.
SVM has become, in practice, the
classifier of choice of numerous researchers and
practitioners for several real-world classification
problems. This is because SVM is capable of
generalizing well (predicting unseen or
unknown samples with a good degree of
accuracy) compared to many traditional
classifiers (NN, etc.). It offers several
advantages which are typically not found in
other classifiers:
• Computationally much less intensive (esp. in
comparison to NN)
• Performs well in higher dimensional spaces
(a factor which limits many efficient
classifiers)
• Lack of training data is often not a severe
problem
• Based on minimizing an estimate of test error
rather than the training error (structural risk
minimization)
• Robust with noisy data (noise can severely
degrade the performance of NN)
• Does not suffer as much from the curse of
dimensionality and prevents overfitting
It is thus seen that the support vector machine is
a more powerful classifier than many others.
2.1 Introduction to support vector machines
A binary-class supervised classification problem
is usually formulated in the following way:
given n training samples (x_i, y_i), where
x_i = (x_i1, x_i2, ..., x_im) is an input feature
vector and y_i ∈ {−1, +1} is the target label, the
task of the discriminant function, or classifier, is
to learn the patterns in the training samples in
such a way that at a later stage it can reliably
predict y for an unseen x. SVM is fundamentally
developed for this binary classification case
and is extendable to the multi-class situation.
Like other linear classifiers, it attempts to
evaluate a linear decision boundary, or linear
hyperplane, between the two classes (Figure 1a),
assuming that the data is linearly separable.
Theoretically, when the data is linearly
separable, there may exist an infinite number of
hyperplanes (Figure 1b) which correctly classify
the training data. SVM, unlike other classifiers
of its kind, strives to find an optimal hyperplane
(Figure 1c). It is commonly believed that points
belonging to the two data classes often lie in
such a way that there is always some 'margin'
between them. SVM maximizes this margin
(2γ in Figure 1c) by treating the problem as a
quadratic program; see [4, 5] for the
mathematical formulation and derivation of the
solution.
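The maximum-margin idea above can be sketched with a few lines of scikit-learn (an illustrative toy example, not the paper's implementation; the data points are made up):

```python
import numpy as np
from sklearn.svm import SVC

# toy linearly separable training samples (x_i, y_i) with y_i in {-1, +1}
X = np.array([[0.0, 0.0], [0.5, 0.2], [2.0, 2.0], [2.5, 1.8]])
y = np.array([-1, -1, 1, 1])

# fit the maximum-margin linear hyperplane between the two classes
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# predict labels for two unseen points, one near each cluster
pred = clf.predict([[0.1, 0.1], [2.2, 2.0]])
print(pred)
```

Only the training points nearest the separating hyperplane become support vectors; the rest of the data does not affect the decision boundary.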
3. METHODOLOGY
This paper considers the problem of
texture classification only for a gray-level case
which is conventionally tackled in two stages of
feature extraction and classification.
3.1 GLCP Feature Extraction:
GLCP is a discrete function that represents the joint
probability, C_ij, of pairs of pixels having
particular grey levels, and is defined by

C_ij = F_ij / (Σ_{i=1}^{G} Σ_{j=1}^{G} F_ij)          (1)

where F_ij is the co-occurrence matrix
constructed from the frequencies with which two
grey levels occur at two related pixel positions,
and G is the grey level quantization. The distance
between the two related pixels is set to 1 for
micro-texture analysis. The common angle is
either 0°, 45°, 90° or 135°. To reduce the
computation time of GLCP feature extraction,
we set a window size, M×N, and treat each block
of pixels as one feature value.
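As a concrete sketch (not the paper's code), the normalized co-occurrence matrix of Equation (1) and some of the derived statistics can be computed in plain NumPy for the displacement (θ, d) = (0°, 1); the quantization step and feature names follow Section 3.1, everything else is illustrative:

```python
import numpy as np

def glcp_features(img, levels=64):
    """GLCP statistics for displacement (theta, d) = (0 deg, 1 pixel)."""
    # quantize the 8-bit image to `levels` grey levels (G in Equation 1)
    q = np.clip(np.floor(img.astype(np.float64) / 256.0 * levels), 0, levels - 1).astype(int)
    # accumulate co-occurrence frequencies F_ij of horizontal neighbour pairs
    F = np.zeros((levels, levels))
    np.add.at(F, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    C = F / F.sum()  # joint probabilities C_ij (Equation 1)
    i, j = np.indices(C.shape)
    nz = C > 0
    return {
        "contrast":    np.sum((i - j) ** 2 * C),
        "energy":      np.sum(C ** 2),                     # uniformity
        "homogeneity": np.sum(C / (1.0 + np.abs(i - j))),
        "entropy":     -np.sum(C[nz] * np.log2(C[nz])),
    }
```

For a perfectly uniform patch the matrix collapses to a single cell, giving zero contrast and entropy and maximal energy and homogeneity, which matches the intuition behind these statistics.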
3.2. SVM Classification
The purpose of SVM here is to map the feature
vectors into a higher-dimensional feature space
and then create a separating hyperplane with
maximum margin to group the GLCP features.
The support vectors (SVs) correspond to
highlighted pixels that help create the margins,
or boundaries, in an image. The higher-
dimensional space is defined by a kernel
function; the kernel functions we used in texture
discrimination are shown in Table 3.1. Learning
with kernels is described in more detail in
Schölkopf B. and Smola A. J. (2002).
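A minimal sketch of this classification stage, assuming scikit-learn's `SVC` and synthetic stand-ins for the GLCP feature vectors (the real features would come from the extraction step above):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic 3-D "GLCP feature" vectors for two texture classes (assumed data)
X = np.vstack([rng.normal(0.2, 0.05, (50, 3)),
               rng.normal(0.6, 0.05, (50, 3))])
y = np.array([0] * 50 + [1] * 50)

# RBF kernel: the support vectors found during optimization act as the
# kernel centres, so they need not be chosen by hand
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)

pred = clf.predict([[0.2, 0.2, 0.2], [0.6, 0.6, 0.6]])
print(pred)
```

The same code runs with `kernel="poly", degree=2` or `kernel="sigmoid"` to try the other kernels from Table 3.1.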
Type of classifier          Inner kernel function
Polynomial                  K(x, x') = (x · x' + 1)^d
Radial basis function       K(x, x') = exp(−||x − x'||² / (2σ²))
Tangent hyperbolic kernel   K(x, x') = tanh(κ(x · x') + θ)

Table 3.1: Kernel functions used in SVM training
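The inner kernel functions of Table 3.1 can be written directly in NumPy; the parameter values (d, sigma, kappa, theta) below are illustrative defaults, not values taken from the paper:

```python
import numpy as np

def poly_kernel(x, z, d=2):
    # polynomial kernel of degree d
    return (np.dot(x, z) + 1.0) ** d

def rbf_kernel(x, z, sigma=1.0):
    # Gaussian radial basis function kernel
    return np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2) / (2.0 * sigma ** 2))

def tanh_kernel(x, z, kappa=1.0, theta=0.0):
    # hyperbolic tangent (sigmoid) kernel
    return np.tanh(kappa * np.dot(x, z) + theta)
```

Each function returns the inner product of its two arguments in the implicitly defined higher-dimensional feature space; for the RBF kernel, K(x, x) is always 1.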
4. EXPERIMENTAL DESIGN
I. Test Images
Some of the Brodatz textures (Brodatz, 1966)
were used to test our methods. The 8-partite
texture image in Figure 4.1, containing one
sample of each image at a resolution of 989×98
pixels, was created to evaluate the GLCP
statistical approaches for identifying textures.
II. Parameter Settings
We used an adequate grey level quantization, G,
of 64 levels to construct the GLCM (Jobanputra
& Clausi, 2006). The displacement vector (θ, d)
is set to (0°, 1). The window size configuration
depends on the texture primitive size in the test
image: the bigger the window size, the more
spatial information we obtain. However, a
window size that is too large may cause overlap
between textures at the boundaries. Given the
resolution of a test image, R_i, and the window
size, M×N, the resolution of the feature space is
defined by

R_f = R_i / (M × N)          (2)

To gain sufficient spatial information, we
recommend that R_f be at least 50×50 pixels.
Thus, from Equation (2), we obtain

M × N ≤ R_i / (50 × 50)          (3)

to justify our window size settings. The larger
the required feature space resolution, the
smaller the window size that must be set. All
the statistical features are shown in Figures 4.1
to 4.4. Let R_f equal k×l, where k and l are the
numbers of rows and columns of the feature
space, respectively. The number of moving
average points, v, must satisfy v < k (4) if the
window scanning sequence runs from left to
right and then top to bottom, or v < l (5) if it
runs from top to bottom and then left to right.
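The moving-average noise reduction along a scan line can be sketched as follows (an illustrative helper, not the authors' code; the length check encodes the v < k, respectively v < l, condition):

```python
import numpy as np

def moving_average(features, v):
    """Length-v moving average over a 1-D scan of feature values."""
    features = np.asarray(features, dtype=float)
    # conditions (4)/(5): v must be smaller than the scan length (k or l)
    assert v < len(features)
    return np.convolve(features, np.ones(v) / v, mode="valid")

smoothed = moving_average([1.0, 2.0, 3.0, 4.0], 2)
print(smoothed)
```

Each output value averages v consecutive feature values, so the smoothed scan is v − 1 samples shorter than the input.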
Criterion                     Setting
Grey level quantization, G    64 levels
Distance, d                   1 pixel
Angle, θ                      0°
Window size, M×N              From Equation (3)
Statistical features          Features shown in Figures 4.1–4.4

Table 3.2: GLCP parameter configuration
4. RESULTS
Eight different textures, with one sample each,
were chosen from the Brodatz album (1966) to
measure the GLCP statistical features as stated
in Hammouche et al. (2006) [16]. The chosen
textures have varying characteristics in terms of
primitive pattern size, structural arrangement,
brightness, coarseness, and statistical
distribution (Sonka et al., 2007). These textures
are D1_01 and D1_02, D107_01 and D107_02,
D112_04 and D112_08, and D98_03 and
D98_04. Figure 4.1 shows the corresponding
GLCP contrast statistical feature; it can be seen
that the same image with different grey level
intensities produces a different graph. Figures
4.2, 4.3 and 4.4 show the correlation, energy
and homogeneity statistical features.
FIG. 4.1: D1_01 and D1_02 with their corresponding graphs of GLCP contrast
FIG. 4.2: D107_01 and D107_02 with their corresponding graphs of GLCP correlation
FIG. 4.3: D112_04 and D112_08 with their corresponding graphs of GLCP energy
FIG. 4.4: D98_03 and D98_04 with their corresponding graphs of GLCP homogeneity
Table 4.1: Measurements of each feature in the test images
In order to assess the performance of the
proposed approach, experiments with the
Brodatz database [16] were carried out. In the
experiments, each Brodatz texture constitutes a
separate class, and each texture has 640×640
pixels. The samples were separated into two
disjoint sets, one for training and the other for
testing the classifier.
The evaluation is based on accuracy (see
Equation 4), estimated over random partitions
of the training and test sets. The approach is
compared with several classifiers in Li et al.
[15].

Accuracy = (number of correctly classified samples / total number of samples) × 100%          (4)
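The random-partition evaluation can be sketched with scikit-learn's splitting and scoring utilities; the feature vectors below are synthetic stand-ins for the GLCP features of two Brodatz-like classes:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# hypothetical GLCP feature vectors for two well-separated texture classes
X = np.vstack([rng.normal(0.3, 0.05, (40, 4)),
               rng.normal(0.7, 0.05, (40, 4))])
y = np.array([0] * 40 + [1] * 40)

# random disjoint training/test partition, as in the experiments
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

# Equation (4): percentage of correctly classified test samples
acc = 100.0 * accuracy_score(y_te, clf.predict(X_te))
print(acc)
```

Averaging `acc` over several random partitions (different `random_state` values) gives the estimate reported in the experiments.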
Figure 4.5 summarizes the results of the
proposed method, along with the results
reported in [15] for the single and fused SVM
classifiers, the Bayes classifiers using Bayes
distance and Mahalanobis distance, and the
LVQ classifier. These measurements are
estimated with random partitions of the training
and test sets.
Features       D1_01    D107_02   D112_08   D98_03
Contrast       0.0410   0.1453    0.0655    0.0941
Correlation    0.7197   0.8482    0.5746    0.8287
Energy         0.8216   0.4380    0.7940    0.4097
Homogeneity    0.9795   0.9280    0.9672    0.9530
FIG.4.5: Accuracy of texture classification
5. CONCLUSIONS
In this paper, an approach for texture-based
image classification using the grey level co-
occurrence probabilities (GLCP) and support
vector machine (SVM) methods is presented.
To show the usefulness of the proposed
methodology, an application with a benchmark
data set was considered. The proposed approach
is evaluated in terms of accuracy and compared
with several classifiers in Li et al. [15]. Figure
4.5 shows the superiority of GLCP+SVM over
the single and fused SVM, the Bayes classifiers
using Bayes distance and Mahalanobis distance,
and the LVQ classifier.
6. REFERENCES
1. M. Tuceryan and A. K. Jain, "Texture analysis," Handbook of Pattern Recognition and Computer Vision, World Scientific, 1993.
2. J. Sklansky, "Image segmentation and feature extraction," IEEE Transactions on Systems, Man, and Cybernetics, SMC-8, pp. 237-247, 1978.
3. R. M. Haralick, "Statistical and structural approaches to texture," Proceedings of the IEEE, 67, pp. 786-804, 1979.
4. V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, 1995.
5. P. V. Ingole, A. H. Karode, and S. R. Suralkar, "Textured and non-textured image classification using wavelet transform for CBIR," National Conference on Emerging Trends in Electronics Engineering & Computing, Nagpur, Feb. 2010.
6. B. Heisele, "Hierarchical classification and feature reduction for fast face detection," Handbook of Pattern Recognition and Computer Vision (C. H. Chen and P. S. P. Wang, eds.), pp. 481-495, World Scientific, 2005.
7. K. M. Rajpoot and N. M. Rajpoot, "Wavelets and support vector machines for texture classification," Proceedings of INMIC 2004, 8th International Multitopic Conference, pp. 328-333, 2004.
8. K. I. Kim, K. Jung, and J. H. Kim, "Texture-based approach for text detection in images using support vector machines and continuously adaptive mean shift algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, December 2003.
9. L. Gupta, S. Das, and S. G. Rao, "Classification of textures in SAR images using multi-channel multi-resolution filters," NCIP-2005, NIAS, IISc Bangalore, India, pp. 198-201, March 2005.
10. H.-C. Ong and H.-K. Khoo, "Improved image texture classification using grey level co-occurrence probabilities with support vector machines post-processing," European Journal of Scientific Research, vol. 36, no. 1, pp. 56-64, 2009.
11. H.-K. Khoo, H.-C. Ong, and Y.-P. Wong, "Image texture classification using combined grey level co-occurrence probabilities and support vector machines," Fifth International Conference on Computer Graphics, Imaging and Visualisation, 2008.
12. K. I. Kim, K. Jung, S. H. Park, and H. J. Kim, "Support vector machines for texture classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 11, November 2002.
13. S. Zhang, X. Xue, and X. Zhang, "Feature extraction and classification with wavelet transform and support vector machines," IEEE International Geoscience and Remote Sensing Symposium, vol. 6, pp. 3795-3798, 2005.
14. S. Zheng, J. Liu, and J. W. Tian, "A new efficient SVM-based edge detection method," Pattern Recognition Letters, 25, pp. 1143-1154, 2004.
15. S. Li, J. T. Kwok, H. Zhu, and Y. Wang, "Texture classification using the support vector machines," Pattern Recognition, vol. 36, no. 12, pp. 2883-2893, 2003.
16. P. Brodatz, Textures: A Photographic Album for Artists and Designers, Dover Publications, 1966.