


Information Sciences 269 (2014) 60–72


Illumination-insensitive texture discrimination based on illumination compensation and enhancement

0020-0255/$ - see front matter © 2014 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.ins.2014.01.019

* Corresponding author. Tel.: +852 2766 6207. E-mail addresses: [email protected] (M. Jian), [email protected] (K.-M. Lam), [email protected] (J. Dong).

Muwei Jian a, Kin-Man Lam a,*, Junyu Dong b

a Centre for Signal Processing, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
b Department of Computer Science, Ocean University of China, Qingdao, China


Article history:
Received 14 December 2012
Received in revised form 3 December 2013
Accepted 12 January 2014
Available online 21 January 2014

Keywords:
Illumination compensation
Illumination enhancement
Illumination-effect matrix
Illumination-insensitive texture

As the appearance of a 3D surface texture is strongly dependent on the illumination direction, 3D surface-texture classification methods need to employ multiple training images captured under a variety of illumination conditions for each class. Texture images under different illumination conditions and directions still present a challenge for texture-image retrieval and classification. This paper proposes an efficient method for illumination-insensitive texture discrimination based on illumination compensation and enhancement. Features extracted from an illumination-compensated or -enhanced texture are insensitive to illumination variation; this can improve the performance of texture classification. The proposed scheme learns the average illumination-effect matrix for image representation under changing illumination, so as to compensate or enhance images and to eliminate the effect of different and uneven illuminations while retaining the intrinsic properties of the surfaces. The advantage of our method is that the assumption of a single-point light source is not required, so it overcomes the limitations of the Lambertian model and is also suitable for outdoor settings. We use a wide range of textures in the PhoTex database in our experiments to evaluate the performance of the proposed method. Experimental results demonstrate the effectiveness of our proposed methods.

© 2014 Elsevier Inc. All rights reserved.

1. Introduction

The appearance of rough surface textures may be dramatically different when they are lit from different directions. For example, Fig. 1 shows images of the same surface texture captured under varied lighting directions. They look dissimilar mainly due to the different illumination directions. Although it is well known that the appearance of a texture is strongly dependent on the illumination direction, dealing with illumination-insensitive texture is still an open issue and worth further investigation [8,9,37,39].

As a special type of image, texture can describe a wide variety of surface characteristics. Texture is very important for human visual perception and plays a key role in computer vision and pattern recognition. In addition, since texture can be effectively used for characterizing image regions, texture features have been extensively studied in image classification and content-based image retrieval, as well as in other fields related to pattern analysis [8,9,21,22,38].

Fig. 1. Textures under different illumination directions: five texture images of the same surface, "acg", from the PhoTex database.

Traditionally, texture-representation methods can be divided into three categories, namely structural [34], statistical [17,26], and multi-resolution filtering methods [14,19,24,27–29,36]. These methods have been effectively used for texture analysis, segmentation, retrieval and classification [1,8]. However, most of these previous methods focus on texture-feature representations, and seldom make further analysis of texture images captured under different illumination directions; yet variations in lighting direction can make the same texture look dissimilar. In this paper, we focus on the problems associated with the effect of illumination variations on texture images, and propose a method that can compensate for or enhance the illumination in an original texture image, so that features that are insensitive to illumination variation can be produced. According to our method, texture-feature extraction is performed after illumination compensation or enhancement has been applied to the original surface texture. This can alleviate the effect of different and uneven illuminations, and render the extracted features "illumination-insensitive". The main contributions of this paper are:

• two novel methods, namely the illumination-compensation algorithm (ICA) and the illumination-enhancement algorithm (IEA), are proposed for illumination-insensitive texture discrimination; and

• the assumption of a single-point light source is not required, which overcomes the limitations of the Lambertian model and makes the method suitable for outdoor circumstances.

The rest of the paper is organized as follows. Related work is presented in Section 2. In Section 3, we describe our proposed methods for illumination compensation and enhancement. Experimental results are presented in Section 4, and a conclusion and discussion are given in Section 5.

2. Related work

A surface texture captured under different illumination directions may involve some difficult problems, as the appearance of the same surface can change dramatically. Illumination variation is still one of the most prominent issues for appearance- or image-based recognition approaches, although a number of researchers have paid great attention to relevant solutions. The first type of method in this research field uses a number of texture images captured under different illumination directions to extract the three-dimensional shape of the texture for illumination-insensitive representation [7,10,12,23,30]. In [18], it was found that the ratio of two images of the same object is simpler than the ratio of images of different objects with Lambertian reflectance, and that the ratio also provides two of the three distinct values in the Hessian matrix used to represent the object's surface. Another method based on quotient images was introduced in [33], which assumes, based on the Lambertian model, that faces in the same class have a similar shape but different textures. In [6], it was found that, with surfaces of uniform albedo, it is possible to make two images of a surface observed under two different illumination directions have a similar appearance; Chen et al. [6] employed the joint probability of image gradient directions to compute the likelihood that two images come from the same surface. Chantler et al. [4] and Barsky [3] employed an illumination model to deal with this problem: they showed that the variance of the response of a filtered image under varying illumination tilt angles is sinusoidal. Barsky [3] computed statistical surface descriptors from photometric stereo data, and generalized Chantler's approach to non-uniform albedo materials and general lighting directions. Both of these works use the variance of images filtered by linear operators as features. Osadchy et al. [31] achieved illumination quasi-invariance on smooth surfaces using the whitening approach. Their assumptions are that the surface is Lambertian with uniform reflectance and shallow relief, and that the illumination direction is sufficiently inclined from the surface macro-normal. The method is applied to the classification of registered images of smooth objects. However, it is not easily extendable to texture recognition, as its effect increases the dissimilarity between images coming from different objects, rather than making the images produced by the same surface more similar [31]. Drbohlav and Chantler [11] proposed a novel method which differs from previous work in that it can make two images of the same surface virtually identical. The method hypothesizes that it can match image statistics of the same surface texture observed under different illumination conditions; this idea was applied to comparing texture images for classification. Recently, Barsky and Petrou [2] proposed statistical surface descriptors using photometric stereo data based on a generalized sinusoidal model, which can capture variations in texture features due to changed illumination directions. Traditional illumination-insensitive methods for image representation are based on the Lambertian assumption, and construct a 3-D surface representation by using a number of images captured under different illumination directions. However, there are two obvious drawbacks with the Lambertian model: a single-point light source placed at infinity is assumed, and multiple images need to be captured under a variety of illumination conditions for each surface so that a 3-D representation can be obtained. Even though the Lambertian model is suitable for some applications, it has been proven that it is difficult to build accurate 3-D models using images taken in uncontrolled circumstances only, and the assumptions make it non-trivial to apply to general object recognition in outdoor environments [18].

Some pre-processing methods have been proposed for alleviating the effect of illumination variance. Qi et al. [32] proposed a 3D surface texture classification method based on self-similarity maps, which are calculated directly from captured raw texture images. Later, inspired by [32], a method combining self-similarity maps and the support vector machine (SVM) was presented for texture classification/retrieval in [20].

Compared to these previous works, we propose in this paper a computationally efficient method for generating illumination-insensitive texture images using the scheme of illumination compensation and enhancement. We use an illumination model which is universal and does not require the assumption of a single-point light source, thereby overcoming the limitation of the Lambertian model. The proposed approach captures the mean illumination-effect matrix representations of images captured under a variety of illumination conditions, so as to compensate or enhance the images and, as a result, to achieve a better classification performance. In particular, we aim at devising a simple and effective scheme to compensate/enhance illumination, rather than obtaining a sophisticated and accurate representation of the texture-surface reflection.

3. Illumination compensation and enhancement for illumination-insensitive textures

3.1. An illumination model

Some methods have been proposed to handle varied illumination conditions based on the Lambertian model, with the assumption that a single illuminant source is placed at infinity, and the utilization of a number of images to construct 3D geometry and reflectance that are insensitive to illumination. However, in real situations, images are usually captured in outdoor, uncontrolled environments, with various illumination sources from different directions. To overcome the limitations of the Lambertian model, the illumination model should be universal (i.e. usable in multi-lighting circumstances), without requiring the assumption of a single-point light source.

According to the Retinex theory [16], the intensity of an image I(x, y) can be represented as the product of the illumination L(x, y) and the surface reflectance R(x, y). Based on this theory, in contrast to the previous work, a novel and effective scheme is proposed in this paper for illumination compensation and enhancement, which is efficient and does not require an image under even and frontal illumination to learn from or to serve as the reference image. Thus, our proposed algorithm is easy to implement. The intensity of an image I(x, y) is expressed as follows:

$I(x, y) = R(x, y)\,L(x, y),$  (1)

where R(x, y) is the surface reflectance and L(x, y) is the illumination. Such an illumination model could be advantageous for many computer vision algorithms. However, estimating this decomposition is a fundamentally ill-posed problem, because for every observed value there are multiple unknowns [15,35]. In this paper, we employ a mathematical framework that can solve the ill-posed problem and can be used to extract image representations for relighting. The framework is essentially based on the singular value decomposition (SVD) representation of images under multiple and different illumination directions. The illumination model in (1) is nonlinear. Hence, a logarithmic transformation is applied so as to convert (1) into a linear model, as follows:

$I_l(x, y) = \log(I(x, y) + b) = \log(R(x, y)L(x, y) + b) \approx \log(R(x, y)) + \log(L(x, y)) = R'(x, y) + L'(x, y),$  (2)

where b is a small positive integer. After the transformation, our proposed framework decomposes the image I_l(x, y) of size m × n into the eigenspace using SVD. SVD is commonly used in matrix analysis, and can be applied to analyze an image matrix based on the following theorem of linear algebra:

The image I_l(x, y) can be viewed as a matrix with m rows and n columns, and any such matrix whose number of rows m is greater than or equal to its number of columns n can be written as the product of an m × n column-orthogonal matrix U, an n × n diagonal matrix W with positive or zero elements, and the transpose of an n × n orthogonal matrix V. That is,

$I_l(x, y) = U W V^T,$  (3)

where $U^T U = V^T V = E$ and E is the unit matrix. The elements $w_i$ on the diagonal of W are called the singular values (the square roots of the eigenvalues), i.e.

$W = \mathrm{diag}(w_1, w_2, \ldots, w_i, \ldots, w_n).$  (4)

The singular-value vector S of the image I_l(x, y) is defined as follows:

$S = [w_1, w_2, \ldots, w_i, \ldots, w_n]^T,$  (5)

where $1 \le i \le n$, and $w_i$ is the i-th singular value of the image I_l(x, y) in the singular-value vector S, such that $w_i \ge w_{i+1}$. It can be observed that the singular values decrease dramatically, and the mathematical framework of SVD can be used to represent texture images effectively [10,23]. In general, the first k major eigenvectors mainly reflect variation in illumination.


Let

$W = \mathrm{diag}(w_1, w_2, \ldots, w_i, \ldots, w_n) = \mathrm{diag}(w_1, \ldots, w_k, 0, \ldots, 0) + \mathrm{diag}(0, \ldots, 0, w_{k+1}, \ldots, w_n) = W_1 + W_2.$  (6)

Then, (2) can be written as follows:

$I_l(x, y) = U W V^T = U(W_1 + W_2)V^T = U W_1 V^T + U W_2 V^T = L'(x, y) + R'(x, y),$  (7)

where $U W_1 V^T = L'(x, y)$ is formed from the first k major eigenvectors, which mainly reflect variation in illumination, and the other, residual component is the surface-reflectance-representation matrix $U W_2 V^T = R'(x, y)$.

This expression is similar to the formulation of the illumination model in (2). Specifically, $U W_2 V^T$ can be treated as the component of the surface-reflectance-representation matrix R′(x, y), while $U W_1 V^T$ can be seen as the component of the illumination-effect matrix L′(x, y) in the illumination model. Now, we can see that an image represented in matrix form can be described using the illumination model in (1) and SVD in (7). Fig. 2 shows an example of image decomposition based on the illumination model and SVD with different k. The next step is to select the value of k that gives an optimal image-decomposition representation, so as to solve the ill-posed problem.

3.2. The surface-reflectance-representation matrix in the illumination model

What makes images of the same surface reflectance structure look dissimilar, as illustrated in Fig. 1? Taking texture as an example, texture images of the same class have identical structures and patterns, sharing a similar surface-reflectance structure. It is therefore reasonable to assume that the surface-reflectance-representation matrix R(x, y) is a slowly-changing matrix whose elements have small variance, reflecting the intrinsic property of a texture surface. The dissimilarity between images of the same texture under different illumination conditions is mainly caused by differences in the illumination-effect matrix L(x, y); images under uneven illumination conditions produce shadows, and look different in those regions with insufficient illumination. That is to say, assuming that there are M texture images of the same surface, the differences between the components of the surface-reflectance-representation matrix R′(x, y) of the M texture images are small. The following root mean squared (RMS) value can be used to measure the differences between the components of the surface-reflectance-representation matrix R′(x, y) of the M texture images with different k:

$\mathrm{RMS}_k = \frac{1}{mn\,\sigma_k} \sqrt{ \sum_{\substack{1 \le a,\, b \le M \\ a \ne b}} \; \sum_{\substack{1 \le x \le m \\ 1 \le y \le n}} \left( R'_a(x, y) - R'_b(x, y) \right)^2 },$  (8)

where $\sigma_k$ is the standard deviation of the components of the surface-reflectance-representation matrices $R'_a(x, y)$ $(1 \le a \le M)$, and m and n are the numbers of rows and columns of the images I(x, y). $R'_a(x, y)$ and $R'_b(x, y)$ represent the components of the surface-reflectance-representation matrix of the original images $I_a(x, y)$ and $I_b(x, y)$ $(1 \le a, b \le M)$, respectively. Therefore,

$R'_a(x, y) = U_a W_{a,2} V_a^T = U_a\, \mathrm{diag}(0, \ldots, 0, w_{a,k+1}, \ldots, w_{a,n})\, V_a^T$ and $R'_b(x, y) = U_b W_{b,2} V_b^T = U_b\, \mathrm{diag}(0, \ldots, 0, w_{b,k+1}, \ldots, w_{b,n})\, V_b^T$.

Fig. 2. An example of image decomposition based on the illumination model and SVD with different k: (a) input image; the odd rows are the component of the illumination-effect matrix L′(x, y) in the illumination model, and the even rows are the component of the surface-reflectance-representation matrix R′(x, y).

Suppose that there are N distinct surfaces, and for each of these surfaces there are M texture images in the training set. The M images of each distinct surface in the training set are transformed in the same way, using Eq. (8). The average root mean squared (ARMS) value can be used to measure the overall differences between the N surfaces in the training set to determine the value of k:

$\mathrm{ARMS}_k = \frac{1}{N} \sum_{1 \le j \le N} \mathrm{RMS}_k^{\,j}.$  (9)

The range of k is set at $2 \le k \le 19$ empirically. As shown in Fig. 2, when k becomes larger, the energy of the surface-reflectance-representation matrix R′(x, y) becomes smaller, and it approaches zero when k is close to 20.


Fig. 3 shows the ARMS values with different k. The optimal k can be selected as the one giving the smallest differences between the components of the surface-reflectance-representation matrix R′(x, y):

$k = \arg\min_k \{ \mathrm{ARMS}_k,\; 2 \le k \le 19 \}.$  (10)
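A minimal sketch of this selection criterion follows, reusing the decompose() function sketched in Section 3.1; the input format of classes (a list of per-surface image lists) is our own assumption.

```python
# A sketch of the k-selection in Eqs. (8)-(10); decompose() is the function
# sketched in Section 3.1, and classes holds M images for each of N surfaces.
import numpy as np
from itertools import permutations

def rms_k(images, k):
    m, n = images[0].shape
    Rs = [decompose(im, k)[1] for im in images]        # R' of each image
    sigma_k = np.std(np.stack(Rs))                     # std of the R' components
    total = sum(np.sum((Ra - Rb) ** 2)                 # sum over all pairs a != b
                for Ra, Rb in permutations(Rs, 2))
    return np.sqrt(total) / (m * n * sigma_k)          # Eq. (8)

def best_k(classes, k_range=range(2, 20)):
    # Eq. (9): average the RMS over the N surfaces; Eq. (10): minimise over k
    arms = {k: np.mean([rms_k(imgs, k) for imgs in classes]) for k in k_range}
    return min(arms, key=arms.get)
```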

The globally optimal k can be determined using the training set (k = 7 for the "acg" class in our experiment). Fig. 4 illustrates the decomposition of the intensities of an image I(x, y) into the component of the surface-reflectance-representation matrix R′(x, y) and the component of the illumination-effect matrix L′(x, y) based on the proposed illumination model.

Fig. 3. The average root mean squared (ARMS) value of the training set with different k.

Fig. 4. The decomposition of an image into the two components of the proposed illumination model: (a) the input gray-scale image, (b) the component of the illumination-effect matrix L′(x, y), and (c) the component of the surface-reflectance-representation matrix R′(x, y) of the illumination model.

3.3. The illumination-compensation algorithm (ICA)

Real-world rough surfaces may have similar structures, but even images of the same surface do not look similar under different lighting conditions. Thus, it is reasonable to infer that the reflectance-representation matrices R′(x, y) of surfaces with a similar structure are only slightly different, while the illumination-effect matrix L′(x, y) can vary significantly, depending on the illumination conditions. This is due to the fact that images under uneven illumination conditions produce shadows, and look different in those regions with insufficient illumination. If we can learn a mean illumination-effect matrix L̄(x, y) to compensate the component of the illumination-effect matrix L′(x, y) of images with uneven lighting and shadows, it will make those shadowed regions in an image brighter and less shadowy. Fig. 5 illustrates the process of our algorithm for illumination compensation and enhancement.

Suppose that $I_c(x, y)$ is a texture image under uneven lighting and with shadows. Q images with a structure similar to $I_c(x, y)$ are retrieved, and these Q images are transformed in the same way as in Section 3.1. We can then learn the mean illumination-effect matrix L̄(x, y) to compensate the images with uneven illuminations and shadows. The Q images can be expressed as follows:

$I_t(x, y) = R_t(x, y)\,L_t(x, y), \quad 1 \le t \le Q,$  (11)

where $\log(L_t(x, y)) = L'_t(x, y) = U_t W_{t,1} V_t^T$ according to (7), with $1 \le t \le Q$. The mean illumination-effect matrix L̄(x, y) can be computed as follows:



$\bar{L}(x, y) = \frac{1}{Q} \sum_{t=1}^{Q} L_t(x, y).$  (12)

Decompose a novel image $I_c(x, y)$, which is under uneven and non-frontal illumination, using the illumination model as follows:

$I_c(x, y) = R_c(x, y)\,L_c(x, y).$  (13)

The mean illumination-effect matrix L̄(x, y) can then be used for illumination compensation as follows:

$I_c^{\mathrm{ICA}} = R_c (L_c + \bar{L}).$  (14)

Finally, $I_c^{\mathrm{ICA}}$ is normalized so that all its pixel values are within the range [0, 255]. The term $(L_c + \bar{L})$ means that we add lighting to compensate for uneven illumination conditions during image-illumination compensation. When an image $I_c(x, y)$ is under an uneven illumination condition, shadows may appear, and the image may look different in those regions with insufficient illumination. Therefore, the formulation of L̄(x, y) in (12) takes the values under different illumination conditions into account to generate a mean illumination-effect matrix for compensating images with uneven lighting and shadows. We call this the "illumination-compensation algorithm" (ICA).

Fig. 5. The illumination-insensitive schemes using illumination compensation and enhancement: (a and b) an image under two different illumination directions, where the red block arrows indicate the illumination directions for the respective images, (c) the training images, (d) the illumination-compensation results for the image in (a), (e) the illumination-enhancement results for the image in (a), (f) the illumination-compensation results for the image in (b), and (g) the illumination-enhancement results for the image in (b). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
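As a rough implementation sketch, ICA can be read entirely in the log domain, where the compensation of Eq. (14) becomes additive. The retrieval of the Q training images and the final min-max normalization to [0, 255] are our own assumptions about details left open above.

```python
# A sketch of ICA (Eqs. (11)-(14)) in the log domain; decompose() is the
# function from Section 3.1 and train_imgs are the Q retrieved images.
import numpy as np

def ica_compensate(img, train_imgs, k, b=1.0):
    # Eq. (12): mean illumination-effect matrix, averaged in the log domain
    L_bar = np.mean([decompose(t, k, b)[0] for t in train_imgs], axis=0)
    L_c, R_c = decompose(img, k, b)          # Eq. (13), via the split in Eq. (7)
    out = np.exp(R_c + L_c + L_bar)          # Eq. (14), addition in the log domain
    # min-max normalization of the pixel values to [0, 255]
    return 255.0 * (out - out.min()) / (out.max() - out.min())
```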

3.4. The illumination-enhancement algorithm (IEA)

Inspired by the shadowless lamp used in surgical operations to compensate for illumination and remove shadows, we propose an efficient method for image-illumination enhancement, which we call the "illumination-enhancement algorithm" (IEA). The difference between IEA and ICA is that the former sets the enhancement factor λ > 1, while the latter has λ = 1. In other words, IEA employs a stronger light than ICA to compensate for uneven illumination conditions.

Fig. 6. Texture images processed using different illumination-compensation and illumination-enhancement methods: (a) the original texture images from the class "abj" in the PhoTex database, (b) results using the histogram-equalization method [13], (c) results using the method in [32], (d) results using our illumination-compensation method, and (e) results using our illumination-enhancement method, with λ = 3.


The mean lighting matrix L̄(x, y) can be utilized for image-illumination enhancement, not only to compensate for uneven lighting but also to enhance an image by removing any shadows in the image $I_c(x, y)$ under uneven illumination conditions, as follows:


$I_c^{\mathrm{IEA}} = R_c (L_c + \lambda \bar{L}),$  (15)

where $\lambda \ge 1$ is called the "illumination-enhancement factor". When λ = 1, the illumination-enhancement algorithm (IEA) reduces to the illumination-compensation algorithm (ICA) described in Section 3.3.

Fig. 5 illustrates the proposed illumination-insensitive framework using illumination compensation (ICA) and enhancement (IEA). The effect of IEA can be seen in Figs. 6 and 7; experimental results in the next section will show the performance of this algorithm. The processed texture images $I_c^{\mathrm{IEA}}$ will have their illumination smoothed, and they will look similar under even and frontal light sources.

Fig. 7. Texture images processed using different illumination-compensation and illumination-enhancement methods: (a) the original texture images from the class "ace" in the PhoTex database, (b) results using the histogram-equalization method [13], (c) results using the method in [32], (d) results using our illumination-compensation method, and (e) results using our illumination-enhancement method, with λ = 3.

4. Experimental results

In this section, we evaluate our proposed illumination-compensation and illumination-enhancement algorithms. We carried out a large number of experiments to verify the effectiveness of the proposed methods. We performed experiments on the widely used PhoTex database [5], which contains images of rough surfaces that have been illuminated from various directions. The images in this database are textures of various surfaces, which are placed on a fixed plane and observed from a constant viewpoint for all illumination directions. In the following sections, we first show the visual quality of the texture images processed by our algorithms; then, our algorithms are evaluated in terms of general texture classification.

4.1. Comparison based on visual observation

To compare our algorithms with other state-of-the-art algorithms, histogram equalization was first employed to improve the visual appearance of all images used in the experiments [13]. A recent approach based on self-similarity maps was also employed for comparison [32]. Figs. 6 and 7 show the results for two distinct classes. Texture images of the same class look dissimilar under different illumination directions, as shown in Figs. 6(a) and 7(a). Figs. 6(b) and 7(b) show the images processed by histogram equalization. Although both the visual appearance and the contrast of the textures are enhanced, the results are not illumination-insensitive. Our algorithm is compared to the method proposed in [32], which computes a self-similarity map using a neighborhood size of 5 × 5. The results are illustrated in Figs. 6(c) and 7(c). Although the method can enhance images to some extent, the processed texture images under different illumination directions still look dissimilar, and are not illumination-insensitive. In addition, the method needs to choose a suitable neighborhood size (such as 3 × 3, 5 × 5, or 7 × 7) for calculating the self-similarity map for each training image, and also requires a reference point in each image. Figs. 6(d) and 7(d) are the results based on our method, which produces visually better results and alleviates the effect of illumination variations. Figs. 6(e) and 7(e) show the texture images after illumination enhancement using the illumination-enhancement factor λ = 3. We can observe that uneven lighting is compensated and the shadows are smoothed.

Experimental results using all texture images in the database show that our simple, non-iterative illumination-compensation and illumination-enhancement algorithms can achieve a good performance level, and can effectively reduce the illumination effects while retaining the primary structures and patterns of textures.

4.2. Performance in terms of texture classification

In this section, experiments are conducted to evaluate the effectiveness of the proposed schemes in terms of texture classification. We selected 20 types of textures from the PhoTex database, captured under different illumination conditions, to perform texture classification. Each class consists of 36 images captured under different illumination conditions, with the size of each texture being 512 × 512. For every class, 12 images are randomly selected to form the training set for computing the illumination-effect matrix L′(x, y) for the illumination-compensation and illumination-enhancement methods. Each texture image in the selected classes is divided into 4 non-overlapping sub-images with a size of 256 × 256, in order to perform classification. All texture images are processed using a 3-level wavelet decomposition, with the "db4" wavelet, to extract the feature vector [29]. At each level, the subbands of three directions are computed; this results in 10 subbands for each texture image. This feature vector based on wavelet decomposition has been proven to be effective for texture retrieval and classification [21,22].
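The feature pipeline just described can be sketched with the PyWavelets package (an assumed implementation choice; the paper does not name a library). The per-subband energy summary and the weight vector w of the weighted L2 distance [29], which is discussed in the next paragraph, are illustrative assumptions.

```python
# A sketch of the wavelet feature extraction and the weighted L2 matching,
# assuming the PyWavelets package (import pywt).
import numpy as np
import pywt

def texture_feature(patch, wavelet="db4", levels=3):
    # 3-level decomposition: 1 approximation + 3 detail subbands per level = 10
    coeffs = pywt.wavedec2(patch, wavelet, level=levels)
    bands = [coeffs[0]] + [band for level in coeffs[1:] for band in level]
    return np.array([np.sqrt(np.mean(np.square(sb))) for sb in bands])

def weighted_l2(f1, f2, w):
    # weighted L2 distance between two feature vectors, in the spirit of [29]
    return np.sqrt(np.sum(w * (f1 - f2) ** 2))
```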

In this paper, we focus on using illumination compensation and enhancement to improve the visual quality of texture images and the classification performance, rather than investigating texture-feature extraction and representation. Wavelet-based feature representation is appropriate for assessing the respective performances of different illumination-insensitive methods for texture classification. Our proposed method is a heuristic one that produces textures dominated by illumination-insensitive texture features, without explicitly identifying such features. A weighted L2 distance measure [29] is used for matching a query input and the images in the database; the best match is the one with the minimum distance. Our algorithm is compared to two state-of-the-art methods: the method based on the self-similarity map [32] and the method based on the statistical surface descriptors [2]. In the experiment, we evaluate the following six algorithms for texture classification, which are denoted as Algorithm a, Algorithm b, Algorithm c, Algorithm d, Algorithm e, and Algorithm f, respectively.

• Algorithm a: using the original textures, without any pre-processing.
• Algorithm b: the method based on histogram equalization [13].
• Algorithm c: the method in [32] based on the self-similarity map.
• Algorithm d: the method in [2] based on the statistical surface descriptors.
• Algorithm e: the proposed method with illumination compensation.
• Algorithm f: the proposed method with illumination enhancement.


Fig. 8 shows the average classification accuracies of the six algorithms for each class in the PhoTex database. It can be seen that our illumination-compensation and illumination-enhancement algorithms are effective for illumination-insensitive texture classification. The average classification accuracy over all 20 classes, based on the six different schemes, is shown in Table 1. It is obvious that Algorithm e and Algorithm f achieve a significantly better performance than the other four methods. The average recognition rates are 82% and 90.22% for the illumination-compensation algorithm and the illumination-enhancement algorithm, respectively. It should be noted that the illumination-enhancement scheme significantly outperforms the illumination-compensation scheme in terms of average classification accuracy. This is because, after illumination enhancement, the images of the same texture or class are more similar to each other than those processed by the illumination-compensation scheme only, as shown in Figs. 6 and 7. If no compensation/normalization scheme is employed, the average recognition rate drops to 48.80%. The average recognition rate increases slightly to 57% with all images enhanced by histogram equalization (Algorithm b). Algorithm c, based on the self-similarity map, further improves the accuracy to 71.40%. Our proposed Algorithm e and Algorithm f perform better than both Algorithm c and Algorithm d. This is because, although Algorithm c can enhance the texture information using the self-similarity map, it cannot eliminate the illumination effects completely. Algorithm d uses the statistical surface descriptors, which can handle rough surfaces with varying albedo, and achieves a better classification accuracy (77.90%) than Algorithm c (71.40%). However, in order to estimate the illumination direction reliably and accurately, Algorithm d needs all the quadratic illumination components to have significant magnitudes; this may violate the assumption in the sinusoidal model. Our proposed approaches can incorporate the detailed structural information about the same textures and, at the same time, smooth out the illumination influence by compensating and enhancing the illumination. The experiments on texture classification show that the proposed methods are effective.

In addition to the classification statistics shown in Table 1 and Fig. 8, and in order to illustrate the challenge of classifying texture images, Fig. 9 shows some representative texture images under different illumination conditions. It can be seen that the same texture looks very dissimilar under different illumination conditions.

In Fig. 10, we can observe that the first subset (aaa, aab, aam, aan) achieves a relatively low classification rate. This is mainly due to the fact that the four classes in this subset have textures that are visually similar, as shown in Fig. 10. In addition, when these images are divided into 4 non-overlapping sub-texture images, the sub-images of the different classes may become even more similar. It should be noted that the classification accuracy can be further improved by employing more complex similarity metrics and more sophisticated texture-feature representations (such as Gabor filters and wavelet packets), but this is beyond the scope of this paper.

In our experiments, we partitioned the database into a training set and a test set, and the algorithms were evaluated using 5 runs of 10-fold cross-validation. In general, 10-fold cross-validation is accepted as it can provide a highly accurate estimate of the generalization error of a model [25]. The procedure is as follows (a minimal sketch follows the list):

• Divide the data into 10 equal-sized groups.
• Test on a single group and train on the remaining 9 groups.
• Repeat the whole 10-fold procedure 5 times and compute the mean accuracy.
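A minimal sketch of this protocol, assuming scikit-learn for the fold generation; X, y, and the nearest-neighbour classify() routine are placeholders for the features, labels, and matcher described above.

```python
# A sketch of 5 runs of 10-fold cross-validation, assuming scikit-learn.
import numpy as np
from sklearn.model_selection import RepeatedKFold

def cross_validate(X, y, classify):
    rkf = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
    scores = [np.mean(classify(X[tr], y[tr], X[te]) == y[te])
              for tr, te in rkf.split(X)]
    # mean accuracy and its spread, as reported in Table 2
    return 100 * np.mean(scores), 100 * np.std(scores)
```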

Fig. 8. The classification rates of the six different algorithms for each of the 20 surface-texture classes, from the PhoTex database.

Table 1
The average classification rates of the six different schemes based on the PhoTex database.

                                    Algorithm a   Algorithm b   Algorithm c   Algorithm d   Algorithm e   Algorithm f
Average classification rate (%)     48.80         57.00         71.40         77.90         82.00         90.22


Fig. 9. Some representative texture images; each row shows images of the same texture under different illumination conditions.


Fig. 10. Misclassification of the four classes – aaa, aab, aam, and aan – in the PhoTex database.

Table 2
The average classification rates of the proposed schemes using 10-fold cross-validation.

                                    Illumination compensation   Illumination enhancement
Average classification rate (%)     81.65 ± 1.776               90.91 ± 1.813


Table 2 tabulates the average classification accuracies of the two proposed methods using 10-fold cross-validation. We can see that our proposed illumination-compensation and illumination-enhancement schemes are effective and stable for illumination-insensitive texture classification.

5. Conclusion and discussion

A rough surface lit under different illumination directions makes texture images look dissimilar. This phenomenon is a difficult challenge for forms of texture analysis such as texture classification, segmentation, and retrieval, because the appearance of a surface texture is strongly dependent on illumination directions and conditions. In this paper, we have proposed effective schemes for illumination compensation and enhancement to create illumination-insensitive texture images. In contrast, the traditional Lambertian model requires a number of texture images to reconstruct 3D models for illumination invariance, with the assumption of the existence of a single-point light source. The proposed approach overcomes these limitations, and is suitable for environments with no restriction on the light sources. The experiments have shown that the illumination problem is important for texture recognition. Through the illumination-compensation and illumination-enhancement schemes, the recognition rates can be improved dramatically; this demonstrates that the proposed schemes are an important pre-processing step for practical texture applications. Both the proposed illumination-compensation and illumination-enhancement methods can be effectively used for illumination-insensitive texture classification, which is important for appearance-based recognition.

References

[1] R.B. André, C. Dalcimar, M.B. Odemir, Texture analysis and classification: a complex network-based approach, Inform. Sci. 219 (2013) 168–180.
[2] S. Barsky, M. Petrou, Surface texture using photometric stereo data: classification and direction of illumination detection, J. Math. Imag. Vis. 29 (2–3) (2007) 185–204.
[3] S. Barsky, Surface Shape and Colour Reconstruction using Photometric Stereo, PhD thesis, University of Surrey, October 2003.
[4] M. Chantler, M. Schmidt, M. Petrou, G. McGunnigle, The effect of illuminant rotation on texture filters: Lissajous's ellipses, in: Proc. European Conference on Computer Vision, vol. 3, 2002, pp. 289–303.
[5] M. Chantler, The PhoTex database, Texture Lab, Heriot-Watt University, Edinburgh, UK. <http://www.macs.hw.ac.uk/texturelab/resources/databases/>.
[6] H. Chen, P. Belhumeur, D. Jacobs, In search of illumination invariants, in: Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 2000, pp. 254–261.
[7] E. Coleman, R. Jain, Obtaining 3-dimensional shape of textured and specular surfaces using four-source photometry, Comput. Graph. Image Process. 18 (4) (1982) 309–328.
[8] R. Datta, D. Joshi, J. Li, Z. Wang, Image retrieval: ideas, influences, and trends of the new age, ACM Comput. Surv. 40 (2) (2008) 1–60.
[9] J. Dong, M. Chantler, Capture and synthesis of 3D surface texture, Int. J. Comput. Vis. 62 (2) (2005) 177–194.
[10] M. Diker, A. Ugur, Textures and covering based rough sets, Inform. Sci. 184 (2012) 44–63.
[11] O. Drbohlav, M. Chantler, Illumination-invariant texture classification using single training images, IEEE Text. 1 (2005) 610–617.
[12] J. Fan, L. Wolff, Surface curvature and shape reconstruction from unknown multiple illumination and integrability, Comput. Vis. Image Understand. 65 (2) (1997) 347–359.
[13] R. Gonzalez, R. Woods, Digital Image Processing, third ed., Addison-Wesley Publishing Company, 2006 (Chapter 4).
[14] S.E. Grigorescu, N. Petkov, P. Kruizinga, Comparison of texture features based on Gabor filters, IEEE Trans. Image Process. 11 (10) (2002) 1160–1167.
[15] R. Grosse, M.K. Johnson, E.H. Adelson, W.T. Freeman, Ground truth dataset and baseline evaluations for intrinsic image algorithms, in: Proceedings of IEEE ICCV, 2009, pp. 2335–2342.
[16] B.K.P. Horn, Determining lightness from an image, Comput. Graph. Image Process. 3 (1974) 277–299.
[17] P. Howarth, S. Ruger, Evaluation of texture features for content-based image retrieval, in: Proceedings of the International Conference on Image and Video Retrieval, Springer-Verlag, 2004, pp. 326–334.
[18] D. Jacobs, P. Belhumeur, R. Basri, Comparing images under variable illumination, in: Proceedings IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, 1998, pp. 610–617.
[19] M. Jian, J. Dong, Y. Liu, New perceptual texture features based on wavelet transform, Int. J. Comput. Inform. Sci. 9 (1) (2008) 11–18.
[20] M. Jian, S. Chen, J. Dong, Illumination-invariant texture classification based on self-similarity and Gabor wavelet, in: International Symposium on Intelligent Information Technology Application, 2008, pp. 352–355.
[21] M. Jian, H. Guo, L. Liu, Texture image classification using visual perceptual texture features and Gabor wavelet features, J. Comput. 4 (8) (2009) 763–770.
[22] M. Jian, N. Hao, P. Ma, J. Dong, Integrating salient regions with new perceptual texture features based on wavelet transform for image retrieval, J. Softw. 4 (8) (2009) 851–858.
[23] M. Jian, J. Dong, Capture and fusion of 3D surface texture, Multim. Tools Appl. 53 (1) (2011) 237–251.
[24] M. Jian, J. Dong, J. Ma, Image retrieval using wavelet-based salient regions, Imag. Sci. J. 59 (4) (2011) 219–231.
[25] A. Liu, G. Jun, J. Ghosh, A self-training approach to cost sensitive uncertainty sampling, Mach. Learn. 76 (2009) 257–270.
[26] F. Liu, R.W. Picard, Periodicity, directionality, and randomness: Wold features for image modeling and retrieval, IEEE Trans. Pattern Anal. Mach. Intell. 18 (7) (1996) 722–733.
[27] W.Y. Ma, B.S. Manjunath, A comparison of wavelet transform features for texture image annotation, in: IEEE Intl. Conf. on Image Proc., vol. 2, 1995, pp. 256–259.
[28] B.S. Manjunath, W.Y. Ma, Texture features for browsing and retrieval of image data, IEEE Trans. Pattern Anal. Mach. Intell. 18 (8) (1996) 837–842.
[29] B.S. Manjunath, P. Wu, S. Newsam, H. Shin, A texture descriptor for browsing and similarity retrieval, J. Signal Process.: Image Commun. 16 (1) (2000) 33–43.
[30] H. Murase, S. Nayar, Visual learning and recognition of 3D objects from appearance, Int. J. Comput. Vis. 14 (1) (1995) 5–25.
[31] M. Osadchy, M. Lindenbaum, D. Jacobs, Whitening for photometric comparison of smooth surfaces under varying illumination, in: Proc. European Conference on Computer Vision, 2004, pp. 217–228.
[32] L. Qi, L. Zhang, J. Dong, Z. Yu, A. Yang, Self-similarity based classification of 3D surface textures, in: 2008 Congress on Image and Signal Processing, vol. 2, 2008, pp. 402–406.
[33] A. Shashua, T. Riklin-Raviv, The quotient image: class based re-rendering and recognition with varying illuminations, IEEE Trans. Pattern Anal. Mach. Intell. 23 (2) (2001) 129–139.
[34] H. Tamura, S. Mori, T. Yamawaki, Texture features corresponding to visual perception, IEEE Trans. Syst., Man, Cybern. 8 (6) (1978) 460–473.
[35] Y. Weiss, Deriving intrinsic images from image sequences, in: Proc. of the Int. Conf. on Computer Vision, vol. 2, 2001, pp. 68–75.
[36] G. Werner, L. Edwin, T. Stefan, Associating visual textures with human perceptions using genetic algorithms, Inform. Sci. 180 (2010) 2065–2084.
[37] C. Yeh, C. Lin, K. Muchtar, L. Kang, Real-time background modeling based on a multi-level texture description, Inform. Sci. 269 (2014) 106–127.
[38] B. Zhang, X. Wang, F. Karray, Z. Yang, D. Zhang, Computerized facial diagnosis using both color and texture features, Inform. Sci. 221 (2013) 49–59.
[39] C. Zhu, R. Wang, Local multiple patterns based multiresolution gray-scale and rotation invariant texture classification, Inform. Sci. 187 (2012) 93–108.