


Contents lists available at ScienceDirect

Signal Processing

Signal Processing 89 (2009) 2630–2643


journal homepage: www.elsevier.com/locate/sigpro

A novel iris segmentation using radial-suppression edge detection

Jing Huang a, Xinge You a,*, Yuan Yan Tang a,b, Liang Du a, Yuan Yuan c

a Department of Electronics and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
b Department of Computer Science, Hong Kong Baptist University, Kowloon, Hong Kong
c School of Engineering and Applied Science, Aston University, Birmingham B4 7ET, UK

Article info

Article history:
Received 1 November 2008
Received in revised form 13 March 2009
Accepted 2 May 2009
Available online 9 May 2009

Keywords:

Iris segmentation

Non-separable wavelet transform

Radial-suppression edge detection

0165-1684/$ - see front matter © 2009 Elsevier B.V. All rights reserved.
doi:10.1016/j.sigpro.2009.05.001
*Corresponding author. Tel.: +86 027 87544014x8269; fax: +86 027 87542831.
E-mail addresses: [email protected] (J. Huang), [email protected], [email protected] (X. You).

Abstract

Iris segmentation is a key step in an iris recognition system. The conventional methods of iris segmentation are based on the assumption that the inner and outer boundaries of an iris can be taken as circles, and the region of the iris is segmented by detecting these circular boundaries. However, we investigate the iris boundaries in the CASIA-IrisV3 database and find that the actual iris boundaries are not always circular. To solve this problem, a new approach for iris segmentation based on radial-suppression edge detection is proposed in this paper. In the radial-suppression edge detection, a non-separable wavelet transform is used to extract the wavelet transform modulus of the iris image. Then, a new method of radial non-maxima suppression is proposed to retain the annular edges and simultaneously remove the radial edges. Next, a thresholding operation is utilized to remove the isolated edges and produce the final binary edge map. Based on the binary edge map, a self-adaptive method of iris boundary detection is proposed to produce the final iris boundaries. Experimental results demonstrate that the proposed iris segmentation performs well.

© 2009 Elsevier B.V. All rights reserved.

1. Introduction

Human identification based on biometrics has been motivated by a growing need for security in recent years. In contrast to traditional security systems, which may be faked or cracked, current biometric technologies [7,8,48,49] utilize physiological or behavioral characteristics (such as the face, fingerprint, palmprint, iris, retina, voice, and gait) to accurately authenticate personal identity. Among these biometric technologies, iris recognition is one of the most stable and reliable systems [24,34]. Some desirable properties (i.e., uniqueness, stability, and noninvasiveness) make iris recognition suitable for highly reliable human identification [11,17,24].


A typical iris recognition system includes four procedures: acquisition, preprocessing, feature extraction and matching [24,30,31,36,47]. Preprocessing consists of three steps: segmentation, normalization and enhancement, as illustrated in Fig. 1. Acquisition captures a sequence of iris images from the subject using a specifically designed sensor. Preprocessing provides an effective iris region in a selected image for subsequent feature extraction and matching. Feature extraction is a common technique used to reduce the size of iris models and improve classifier accuracy. Finally, matching compares the features of the template iris with those of candidate irises to determine which iris is identical.

Iris segmentation is a critical step in an iris recognition system. Its purpose is to isolate the actual iris area from the human eye image. The steps after segmentation (normalization, enhancement, feature extraction and matching) are all based on the result of iris segmentation, so the performance of an iris recognition system depends mainly on the accuracy of iris segmentation [19,30].


Fig. 1. Iris recognition system.


However, the conventional methods of iris segmentation have the following two disadvantages, which lead to inaccurate segmentation of the iris.

1. The conventional methods of iris segmentation use edge-detector operators to produce edge maps of iris images, and then detect the boundaries of irises based on these edge maps [24,31,32,36]. However, these edge maps contain many edges of pseudo iris boundaries, such as eyelash edges and iris texture edges, which may degrade the accuracy of iris segmentation, as shown in Fig. 2. In the original image (Fig. 2(a)), many eyelashes occlude the iris near the outer boundary, so many edges of pseudo iris boundaries appear in the edge map (Fig. 2(b)), which leads to the inaccurate segmentation illustrated in Fig. 2(c).

2. The conventional methods of iris segmentation [2,6,10,14,16,18,20,24,31,32,36,43,47] are based on the assumption that both the inner boundary and the outer boundary of an iris can be taken as circles, and segment the iris using the two circular borders. However, we investigate the iris boundaries in the CASIA-IrisV3 database [1], and find that the actual iris boundaries are not always circular. Such a simple assumption leads to inaccurate iris segmentation. Fig. 3 shows the results of these methods. In the original image (Fig. 3(a)), we can see that the pupil is not circular. These methods fit the inner boundary with a circle, which is inaccurate, as illustrated in Fig. 3(b): a large piece of the pupil area is included in the iris area, so the normalized iris image (Fig. 3(c)) contains some pupil area. A considerable number of irises in the CASIA-IrisV3 database (about 20%) have non-circular pupils. Fig. 4 shows further results of inaccurate iris segmentation. Inaccurate iris segmentation inevitably results in poor performance of the iris recognition system. To address this problem, some researchers [30,35,51] assume that the inner and outer boundaries of irises are elliptical, and segment the iris using two elliptical boundaries. However, this assumption is also flawed. In non-ideal iris images, which are off-angle, motion-blurred, and noisy, irises and pupils appear non-elliptical; and even in ideal iris images, some pupil shapes are not perfect ellipses. Therefore, iris segmentation methods that assume elliptical boundaries are not good enough.

Fig. 2. Iris segmentation with the conventional edge detection: (a) original image; (b) edge map; (c) segmented image.

Fig. 3. The results of segmentation and normalization with the assumption of circular boundaries: (a) original image; (b) segmented image; and (c) normalized image.

To overcome the above disadvantages, a new approach for iris segmentation based on radial-suppression edge detection is proposed in this paper. The proposed approach consists of two phases: radial-suppression edge detection and iris boundary detection, as shown in Fig. 5.

The proposed radial-suppression edge detection includes three steps: (1) the modulus of the wavelet transform is calculated by the non-separable wavelet transform; compared with the separable wavelet transform, the high-frequency subbands of the non-separable wavelet transform reveal more singularities reflecting various orientations; (2) for the purpose of retaining annular edges and simultaneously removing radial edges, a method named radial non-maxima suppression is proposed; (3) edge thresholding is used to determine the final edge map, which removes isolated edges.

Fig. 4. The results of inaccurate segmentation with the assumption of circular boundaries.

Fig. 5. Phases of the proposed iris segmentation: radial-suppression edge detection (calculation of wavelet transform modulus, radial non-maximum suppression, edge thresholding) and iris boundary detection (inner and outer boundary detection).

Based on the radial-suppression edge detection, a self-adaptive method of iris boundary detection is proposed. No matter what shapes the iris boundaries take, the proposed method can segment the iris area accurately.

The rest of this paper is organized as follows. In Section 2, we propose the radial-suppression edge detection of the iris image. The algorithm of the proposed self-adaptive iris boundary detection is described in Section 3. Section 4 presents experimental results, and conclusions are drawn in Section 5.



2. Radial-suppression edge detection

Edge detection is an important tool in image segmentation [22,23,41,42,50]. To segment the iris more effectively and accurately, we propose the radial-suppression edge detection in this section.

2.1. Review of edge detection methods

At the early stage of research on edge detection, edge detection operators were based on convolutions with a set of kernels of directional derivatives [5]. The commonly used detection operators are the Roberts, Sobel, Hueckel, Prewitt, Kirsch and Laplacian operators [5,15,29,33]. These operators are very simple and can be implemented easily. However, they fail to handle noisy images.

To improve these methods, two smoothing techniques have been employed:

(1) Marr et al. [28,37] suggested filtering the images with a Gaussian function before detection.

(2) Two typical fitting-based edge detection methods have been proposed by Hueckel [15] and Haralick et al. [13,27].

The weakness of the above approaches is that the optimal result may not be obtained by using a smoothing operator. To combat this deficiency, Canny [3] developed a computational approach for edge detection, in which three criteria related to the performance of the edge detector were established, namely: (1) good detection, (2) good localization, and (3) low multiplicity of the response to a single edge. Thereafter, by conforming to these performance criteria, the optimal detection method can be derived for several common image features. In [3], an optimal detector was derived, which can be approximated by the first derivative of a Gaussian function. Thus, the Canny detector detects edges by locating the local maxima of $f * \theta'$ (where $f$ is the original image, $\theta$ is a Gaussian function, and the prime denotes the first-order derivative).

With the development of wavelet theory, the wavelet transform has been found to be a remarkable mathematical tool for analyzing singularities, including edges. Thus, a new approach for edge detection based on the wavelet transform has been established. This idea is similar to that of Canny [3]: a Gaussian function $\theta$ is selected in Canny's approach, while its derivative $\theta'$ is chosen as the wavelet function in the wavelet-based approach. Significant research related to this topic has been done by Mallat et al. [25,26]. They proved that the maxima of the modulus of the wavelet transform can detect the locations of irregular structures. In their work, the first derivative of a cubic spline function is utilized to detect the local extreme values of the wavelet transform as edge points. Tang et al. [38,39] improved Mallat's work and presented a mathematical characterization of three basic geometric structures of edges (i.e., step-structure, roof-structure, and Dirac-structure) with Lipschitz exponents.

A scale-independent algorithm has been developed to extract step-structure edges from multistructure edges in [38], and an algorithm to extract Dirac-structure edges by the wavelet transform has been developed in [39].

The above methods can detect edges well in most cases. However, in the case of iris segmentation, these methods detect not only the edges of the iris boundaries but also noise edges (such as eyelash edges and iris texture edges). These noise edges are invalid for iris segmentation and may make it inaccurate. In order to produce high-quality edge maps of iris images and segment irises accurately, we develop a new method of radial-suppression edge detection based on the non-separable wavelet transform.

2.2. Radial-suppression edge detection of iris image

Although various methods of radial edge detection have already been used by other researchers, the proposed radial-suppression edge detection is different from them. In Ref. [21], Li and his colleagues presented radial edge detection: the image is transformed to polar coordinates, and vertical Sobel edge detection is applied to the polar image to detect edges in the radial direction. In Fu's work [12], the image is transformed to polar coordinates and a horizontal edge detector is used to detect edges in the polar image. In contrast, our radial-suppression edge detection is based on the wavelet transform, and there is no need to transform the image to polar coordinates. The proposed method extends Canny's three criteria [3] to detect the edges of iris boundaries efficiently and accurately.

2.2.1. Selection of wavelet functions and calculation of the modulus of the wavelet transform

The selection of a suitable wavelet function is a key issue in radial-suppression edge detection.

In the work of Mallat et al. [25,26], the separable wavelet transform is used. The two-dimensional separable wavelet transform of an image $I(x,y) \in L^2(\mathbb{R}^2)$ at scale $s$ and in orientation $k$ is defined as

$$W_s^k I(x,y) = I * \psi_s^k(x,y), \quad k = 1, 2. \tag{1}$$

The two directional wavelets $\psi_s^k$ can be constructed as

$$\psi^1(x,y) = \frac{\partial \theta(x,y)}{\partial x} \quad \text{and} \quad \psi^2(x,y) = \frac{\partial \theta(x,y)}{\partial y}, \tag{2}$$

where $\theta(x,y)$ is a separable spline scaling function. It can be shown that the two-dimensional separable wavelet transform gives the gradient of $I(x,y)$ smoothed by $\theta(x,y)$ at scale $s$:

$$\nabla_s I(x,y) = \begin{pmatrix} W_s^1 I(x,y) \\ W_s^2 I(x,y) \end{pmatrix} = s \begin{pmatrix} \dfrac{\partial}{\partial x}(I * \theta_s)(x,y) \\[2mm] \dfrac{\partial}{\partial y}(I * \theta_s)(x,y) \end{pmatrix} = s\,\vec{\nabla}(I * \theta_s)(x,y). \tag{3}$$



The modulus and phase angle of the gradient vector are

$$M_s I(x,y) = \sqrt{|W_s^1 I(x,y)|^2 + |W_s^2 I(x,y)|^2}, \tag{4}$$

$$A_s I(x,y) = \arctan\left|\frac{W_s^1 I(x,y)}{W_s^2 I(x,y)}\right|. \tag{5}$$

Because of this construction scheme, separable wavelets are anisotropic and prefer certain directions (vertical and horizontal as well as diagonal) [44]. In the case of iris segmentation, however, the approximately annular iris boundary must be determined, which demands detecting edges in various orientations. Therefore, separable wavelets are unsuitable for detecting edges in iris images, and a new wavelet is needed to overcome their limitations.

In our previous work [45,46], a non-separable wavelet is constructed by using centrally symmetric matrices. The $4\times 4$ centrally symmetric and orthogonal matrix $U(\alpha,\beta)$ is defined as follows:

$$U(\alpha,\beta) := \frac{1}{2}\begin{pmatrix}
\cos\alpha+\cos\beta & -\sin\alpha+\sin\beta & -\sin\alpha-\sin\beta & \cos\alpha-\cos\beta \\
\sin\alpha-\sin\beta & \cos\alpha+\cos\beta & \cos\alpha-\cos\beta & \sin\alpha+\sin\beta \\
\sin\alpha+\sin\beta & \cos\alpha-\cos\beta & \cos\alpha+\cos\beta & \sin\alpha-\sin\beta \\
\cos\alpha-\cos\beta & -\sin\alpha-\sin\beta & -\sin\alpha+\sin\beta & \cos\alpha+\cos\beta
\end{pmatrix}, \tag{6}$$

where $\alpha$ and $\beta$ are arbitrary real numbers. From this centrally symmetric matrix, we may derive a class of non-separable product wavelet filters as follows.

The low-pass filter $m_0(z_1,z_2)$ is defined as

$$m_0(z_1,z_2) = \frac{1}{4}\,(1,\, z_1,\, z_2,\, z_1 z_2)\left(\prod_{k=1}^{N} U(\alpha_k,\beta_k)\, D(z_1^2, z_2^2)\, U^{T}(\alpha_k,\beta_k)\right)V_0, \quad (z_1,z_2)\in \partial D\times \partial D, \tag{7}$$

where $D = \{z : |z| \le 1\}$ and $\partial D = \{z : |z| = 1\}$. The three high-pass filters $m_j(z_1,z_2)$, $j=1,2,3$, associated with the above low-pass filter $m_0(z_1,z_2)$ are

$$m_j(z_1,z_2) = \frac{1}{4}\,(1,\, z_1,\, z_2,\, z_1 z_2)\left(\prod_{k=1}^{N} U(\alpha_k,\beta_k)\, D(z_1^2, z_2^2)\, U^{T}(\alpha_k,\beta_k)\right)V_j, \quad j=1,2,3, \tag{8}$$

where

$$V_0 = (1,1,1,1)^T,\quad V_1 = (1,-1,1,-1)^T,\quad V_2 = (1,1,-1,-1)^T,\quad V_3 = (1,-1,-1,1)^T. \tag{9}$$

$U(\alpha_k,\beta_k)$ is the centrally symmetric orthogonal matrix defined in Eq. (6), and $D(z_1,z_2)$ is the following matrix of trigonometric polynomials:

$$D(z_1,z_2) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & z_1 & 0 & 0 \\ 0 & 0 & z_2 & 0 \\ 0 & 0 & 0 & z_1 z_2 \end{pmatrix}, \quad (z_1,z_2)\in\partial D\times\partial D. \tag{10}$$

The low-pass filter $m_0$ defined in Eq. (7) and the high-pass filters $\{m_1, m_2, m_3\}$ defined in Eq. (8) form perfect-reconstruction FIR orthogonal filter banks.

Compared with the separable wavelet transform, the non-separable wavelet transform has the following desirable properties:

1. Since the non-separable wavelet transform is constructed by using centrally symmetric matrices, its high-frequency components can reveal more singularities reflecting various orientations of the image than the separable one can [44,45], as illustrated in Fig. 6.

2. The non-separable wavelet transform decomposes an image into one low-frequency component (corresponding to the filter $m_0$) and three high-frequency components (corresponding to the filters $m_1, m_2, m_3$). The high-frequency filters $m_1$ and $m_2$ are anti-symmetric. Torre and Poggio [40] pointed out that if an edge detector is to detect local extrema as edge points, it must be anti-symmetric with respect to the origin.

For the above reasons, we select the non-separable wavelet transform for radial-suppression edge detection.

In order to determine the rate of intensity change at each point in the iris image, we calculate the modulus of the wavelet transform. Assume that $I(x,y)$ is an acquired iris image of size $M\times N$ pixels. At each scale $j$ with $j>0$ and $m_0^0 I = I(x,y)$, the non-separable wavelet transform decomposes the image $m_0^{j-1} I$ into a low-pass band $m_0^j I$ and three high-pass bands $m_1^j I$, $m_2^j I$, $m_3^j I$. Since the bands of the wavelet transform are used to find edges, the wavelet decomposition is non-subsampled. Thus, the four subbands $(m_0^j I, m_1^j I, m_2^j I, m_3^j I)$ at scale $j$ are of size $M\times N$ pixels, the same as the original image. Since $m_1$ and $m_2$ are anti-symmetric filters, and anti-symmetric filters are suitable for detecting local extreme values as edge points [40], the modulus of the non-separable wavelet transform can be computed by

$$M_j I = \sqrt{|m_1^j I|^2 + |m_2^j I|^2}. \tag{11}$$

2.2.2. Radial non-maxima suppression and edge thresholding

Before performing radial non-maxima suppression, we have to find the approximate center point of the edges. In the case of iris segmentation, the center point of the edges is the pupillary center.

Some researchers [24] estimate the pupillary center by projections in the x and y directions. However, images in the CASIA-IrisV3 database have eight bright spots in the pupil, so the pupillary center estimated by this method may deviate far from its true location. We therefore improve this method and replace the intensity values of the bright spots with the values of points of the pupil. The algorithm can be expressed concisely by the following pseudocode:

Algorithm 1 (Estimating the pupillary center).

Input: I(x, y) (the iris image), and T_spots (the threshold for bright spots).
Output: (x_c, y_c) (the pupillary center).
1: x_p ← argmin_x ( Σ_y I(x, y) ); // find a point (x_p, y_p) of the pupil
2: y_p ← argmin_y ( Σ_x I(x, y) );
3: for all x_p − 50 < x < x_p + 50 AND y_p − 50 < y < y_p + 50 do
4:   if I(x, y) > T_spots then // detect the bright spots near the point (x_p, y_p)
5:     I(x, y) ← I(x_p, y_p); // replace the intensity values of the bright spots
6:   end if
7: end for
8: x_c ← argmin_x ( Σ_y I(x, y) );
9: y_c ← argmin_y ( Σ_x I(x, y) );
10: return (x_c, y_c).

Fig. 6. Wavelet decomposition (the high-frequency components are quantized to a binary image): (a) the separable wavelet (db2); (b) the non-separable wavelet.

Fig. 7. The radial direction: the direction from the center point (x_c, y_c) to the edge point P_i. In the eight-pixel neighbourhood of P_i, points C and F are the two neighbourhood points along the radial direction.
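Algorithm 1 can be sketched in a few lines of NumPy. The 50-pixel window follows the pseudocode; the pupil is assumed to be the darkest region, so the row and column with the minimal projection sums locate a pupil point.

```python
import numpy as np

def estimate_pupillary_center(img, t_spots):
    """Sketch of Algorithm 1 (assumes an 8-bit grayscale image with a dark
    pupil and bright reflection spots above t_spots)."""
    img = img.astype(float)                # work on a copy
    xp = int(np.argmin(img.sum(axis=0)))   # column (x) with minimal sum
    yp = int(np.argmin(img.sum(axis=1)))   # row (y) with minimal sum
    # Replace bright spots in a window around (xp, yp) with a pupil value.
    y0, y1 = max(yp - 50, 0), min(yp + 51, img.shape[0])
    x0, x1 = max(xp - 50, 0), min(xp + 51, img.shape[1])
    win = img[y0:y1, x0:x1]
    win[win > t_spots] = img[yp, xp]
    # Re-estimate the center on the cleaned image.
    xc = int(np.argmin(img.sum(axis=0)))
    yc = int(np.argmin(img.sum(axis=1)))
    return xc, yc
```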

Definition 1. The radial direction is defined as the direction from the center point to the edge point, as illustrated in Fig. 7.

Edges are located at the points with the maximal rate of intensity change; in the wavelet transform modulus of the iris image, these are the points with locally maximal moduli. The conventional method of non-maxima suppression [3] suppresses all points that do not have the peak value along the line of the gradient. It therefore detects all edges perpendicular to the gradient direction, which include not only edges of the iris boundaries but also eyelash and iris texture edges that are invalid for iris segmentation. Therefore, the conventional method is not suitable for detecting iris edges. In this study, we propose a radial non-maxima suppression method to detect iris edges. Different from the conventional non-maxima suppression [3], radial non-maxima suppression suppresses all points that do not have the peak value along the radial direction, as illustrated in Fig. 7. Therefore, it retains the annular edges and simultaneously removes the radial edges. The annular edges are potential edges of iris boundaries, while the radial edges are almost always noise (such as eyelash edges and iris texture edges). As such, radial non-maxima suppression is suitable for iris edge detection.

The algorithm of radial non-maxima suppression can be described by the following pseudocode:

Algorithm 2 (Radial non-maxima suppression).

Input: f (the modulus image), and (x_c, y_c) (the approximate center point of the edges).
Output: g (the modulus image after radial non-maxima suppression).
1: for all points P_i of the modulus image f do
2:   determine the radial direction of point P_i; // the direction from (x_c, y_c) to P_i
3:   determine two points N_1, N_2 in the eight-pixel neighborhood of P_i along the radial direction;
4:   if f(P_i) > f(N_1) AND f(P_i) > f(N_2) then
5:     g(P_i) ← f(P_i);
6:   else
7:     g(P_i) ← 0;
8:   end if
9: end for
10: return g.
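A direct (unoptimized) rendering of Algorithm 2 follows; the rounding used to quantize the radial direction onto the eight-pixel neighbourhood is one reasonable choice of this sketch, not necessarily the paper's.

```python
import numpy as np

def radial_nonmax_suppression(f, center):
    """Keep a modulus value only if it is a local maximum along the radial
    direction (center -> pixel), retaining annular edges and removing
    radial ones."""
    xc, yc = center
    g = np.zeros_like(f, dtype=float)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            dx, dy = x - xc, y - yc
            if dx == 0 and dy == 0:
                continue                     # the center has no radial direction
            # Quantize the radial direction to the nearest 8-neighbour step.
            n = max(abs(dx), abs(dy))
            sx, sy = int(round(dx / n)), int(round(dy / n))
            x1, y1 = x + sx, y + sy          # neighbour N1, farther from center
            x2, y2 = x - sx, y - sy          # neighbour N2, nearer to center
            if 0 <= x1 < w and 0 <= y1 < h and 0 <= x2 < w and 0 <= y2 < h:
                if f[y, x] > f[y1, x1] and f[y, x] > f[y2, x2]:
                    g[y, x] = f[y, x]
    return g
```

Applied to a ring-shaped modulus image, only the crest of the ring survives, while values on the radial slopes are zeroed.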

To further detect the edges of the iris boundary and remove noise, we combine the "hysteresis" thresholding [3] with Cao's edge-length method [4], and present a method of edge thresholding.

When we perform hysteresis thresholding, we also calculate the length of each edge curve. If the length of an edge curve is less than the threshold T_l of edge length, we remove the edge curve from the final binary edge map; otherwise, we retain the edge curve in the edge map and store the curve and its length. This method removes isolated edges, which are almost always noise in the iris edge map.
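The thresholding step can be sketched as hysteresis plus a curve-length filter. The connected-component treatment via `scipy.ndimage.label` is this sketch's stand-in for the paper's edge-curve tracing, and the threshold names are illustrative.

```python
import numpy as np
from scipy import ndimage

def edge_threshold(modulus, t_low, t_high, t_len):
    """Hysteresis thresholding [3] combined with an edge-length filter [4]:
    keep weak edges only if connected to a strong edge, then drop any
    remaining edge curve shorter than t_len pixels."""
    eight = np.ones((3, 3), dtype=int)          # 8-connectivity
    strong = modulus >= t_high
    weak = modulus >= t_low
    labels, n = ndimage.label(weak, structure=eight)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True      # components touching a strong pixel
    keep[0] = False                             # background stays off
    edges = keep[labels]
    # Length filter: remove isolated/short edge curves.
    labels2, n2 = ndimage.label(edges, structure=eight)
    sizes = ndimage.sum(edges, labels2, index=np.arange(n2 + 1))
    return (sizes[labels2] >= t_len) & edges
```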

3. Iris boundary detection

The accuracy of an iris recognition system is highly dependent on iris segmentation: the better the iris is segmented, the better the performance of the system will be. The conventional methods of iris segmentation are based on the assumption that the inner and outer boundaries of an iris can approximately be taken as circles, and segment the iris using two circular borders. However, as discussed above, iris boundaries are usually non-circular. Therefore, we present a self-adaptive method of iris boundary detection in this section. The method can segment the iris area accurately regardless of the shapes of the iris boundaries.

The algorithm of the self-adaptive boundary detection can be expressed concisely by the following pseudocode:

Algorithm 3 (Self-adaptive boundary detection).

Input: h (the final binary edge map), L (the array of lists of the edge curves), the range of diameters of the inner boundary, denoted (a_in, b_in), the range of diameters of the outer boundary, denoted (a_out, b_out), and C_l (the array of the lengths of the edge curves).
Output: B_in (the inner iris boundary), and B_out (the outer iris boundary).
1: isloop_in ← 0;
2: isloop_out ← 0;
3: for all edge curves L[i] of array L AND their lengths C_l[i] of array C_l do
4:   if L[i] is a loop curve AND a_in < max_{P_1,P_2 ∈ L[i]} |P_1 − P_2| < b_in then
5:     B_in ← L[i];
6:     isloop_in ← 1;
7:   else if L[i] is a loop curve AND a_out < max_{P_1,P_2 ∈ L[i]} |P_1 − P_2| < b_out then
8:     B_out ← L[i];
9:     isloop_out ← 1;
10:  end if
11: end for
12: if isloop_in = 0 then
13:   B_in ← HOUGH_ellipse(h, a_in, b_in); // elliptical Hough transform
14: end if
15: if isloop_out = 0 then
16:   B_out ← HOUGH_circle(h, a_out, b_out); // circular Hough transform
17: end if
18: return B_in, B_out.

Here, we explain Algorithm 3. In Lines 4–6, if the edge curve L[i] is a loop curve and the maximum distance between two of its points is within the diameter range of the inner boundary, we take it as the inner iris boundary B_in. Similarly, in Lines 7–9, if L[i] is a loop curve and the maximum distance between two of its points is within the diameter range of the outer boundary, we take it as the outer iris boundary B_out. In Lines 12–14, if no loop curve yields the inner iris boundary, the edge curve corresponding to the pupil may be disturbed by the reflection points, so we detect the inner iris boundary by the elliptical Hough transform. In a similar way, in Lines 15–17, if no loop curve yields the outer iris boundary, the outer boundary may be distorted by noise, so we detect it by the circular Hough transform.
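The loop-curve selection of Algorithm 3 can be sketched as follows; the endpoint-adjacency test for "loop curve" and the brute-force diameter computation are assumptions of this sketch, since the paper does not spell out either.

```python
import numpy as np

def pick_boundary(curves, d_min, d_max):
    """Select the first closed curve whose maximal point-to-point distance
    (its 'diameter') falls in (d_min, d_max). `curves` is a list of (n, 2)
    point arrays; a curve is treated as closed when its endpoints are
    8-neighbours (an assumed loop test)."""
    for pts in curves:
        closed = np.all(np.abs(pts[0] - pts[-1]) <= 1)
        if not closed:
            continue
        # Brute-force maximal pairwise distance.
        d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).max()
        if d_min < d < d_max:
            return pts
    return None   # the paper falls back to a Hough transform in this case
```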

Irises from different people may be captured at different sizes, and even for the same eye the size may change due to illumination variations and other factors. To compensate for this iris deformation, it is necessary to normalize the iris image after segmentation.

Our method normalizes the iris image without assuming that the boundary of the pupil is a circle or an ellipse. The barycenter of the inner iris boundary is taken as the reference point. From the reference point, a number of radial lines are drawn, which pass through the inner and outer iris boundaries. The intersection points of a radial line with the iris boundaries are $(x_I(\theta), y_I(\theta))$ and $(x_O(\theta), y_O(\theta))$, as shown in Fig. 8. The segmented iris can then be normalized into a rectangular block by the following mapping [47]:

$$I(x(r,\theta), y(r,\theta)) \longrightarrow I(r,\theta), \tag{12}$$

where $x(r,\theta)$ and $y(r,\theta)$ are linear combinations of the points on the inner boundary $(x_I(\theta), y_I(\theta))$ and the outer boundary $(x_O(\theta), y_O(\theta))$:

$$\begin{cases} x(r,\theta) = (1-r)\,x_I(\theta) + r\,x_O(\theta), \\ y(r,\theta) = (1-r)\,y_I(\theta) + r\,y_O(\theta), \end{cases} \quad r\in[0,1],\ \theta\in[0,2\pi). \tag{13}$$
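Eqs. (12)–(13) can be sketched as the following rubber-sheet mapping. Passing the boundaries as angle-to-point callables and using nearest-neighbour sampling are conveniences of this sketch; the paper stores sampled boundary points instead.

```python
import numpy as np

def normalize_iris(img, inner, outer, n_r=32, n_theta=256):
    """Rubber-sheet normalization of Eqs. (12)-(13): map the annular iris
    region onto an n_r x n_theta rectangular block."""
    out = np.zeros((n_r, n_theta), dtype=img.dtype)
    for j, theta in enumerate(np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)):
        xi, yi = inner(theta)          # inner-boundary point at this angle
        xo, yo = outer(theta)          # outer-boundary point at this angle
        for i, r in enumerate(np.linspace(0.0, 1.0, n_r)):
            x = (1 - r) * xi + r * xo  # Eq. (13)
            y = (1 - r) * yi + r * yo
            out[i, j] = img[int(round(y)), int(round(x))]  # nearest-neighbour sample
    return out
```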

4. Experimental results

In this section, a series of experiments is conducted to evaluate the performance of the proposed method of iris segmentation. Firstly, we evaluate the performance of the radial-suppression edge detection. Secondly, we compare the accuracy of pupil segmentation between the proposed iris segmentation method and the conventional one. Finally, we use the iris recognition system to further investigate the performance of the proposed method of iris segmentation.


Fig. 8. Unwrap the iris. A radial line at angle θ from the reference point intersects the inner boundary at (x_I(θ), y_I(θ)) and the outer boundary at (x_O(θ), y_O(θ)); r measures the normalized position along the line between the two boundaries.

J. Huang et al. / Signal Processing 89 (2009) 2630–2643

4.1. Iris database

To evaluate the performance of the proposed method, we use the public CASIA-IrisV3 database [1]. CASIA-IrisV3 includes three subsets, labeled CASIA-IrisV3-Interval, CASIA-IrisV3-Lamp and CASIA-IrisV3-Twins. It contains a total of 22,051 iris images from more than 700 subjects. All iris images are 8-bit gray-level JPEG files collected under near-infrared illumination.

As mentioned above, in this database more than 20% of the iris boundaries are not circular. The CASIA-IrisV3 database is an idealized iris image database, in which the person looks straight at the camera. In an actual iris recognition system, however, the person may not always look straight at the camera, so the proportion of non-circular boundaries may be even higher than in the database.

4.2. Performance of radial-suppression edge detection

To evaluate the performance of the proposed radial-suppression edge detection, 800 iris images in the CASIA-IrisV3 database are employed to test Ma's method [24] of iris segmentation and the proposed method. Ma's method segments the iris using Canny's edge detection and the Hough transform, while our method uses radial-suppression edge detection and self-adaptive boundary detection.

The correct rate of Ma's method is 96.38%, while the correct rate of the proposed method is 99.75%. We consider a segmentation correct when the parameters corresponding to the pupillary and outer boundaries fall onto the actual boundaries, as in the segmented image in Fig. 9(b). The segmented image in Fig. 9(a) exemplifies incorrect segmentation.

The main reason for incorrect segmentation in Ma's method is that the edge map contains a large amount of noise. Fig. 9 shows the segmentation results of a sample iris image using the two methods. In Ma's iris segmentation, Canny's edge detection is used to detect the edges of the iris image. The resulting edge map contains many edges of pseudo iris boundaries, such as the edges of eyelashes and of the iris texture, as shown in Fig. 9(a). These pseudo-boundary edges may lead to incorrect iris segmentation; in Fig. 9(a) we can see that some iris area falls outside the detected boundary. In Fig. 9(b), the edges of eyelashes and iris texture are removed in the edge map of the radial-suppression edge detection, and the iris boundaries are segmented correctly. Clearly, the radial-suppression edge detection performs better than Canny's edge detection for iris segmentation.

4.3. Accuracy of pupil segmentation

The conventional methods can detect the pupil correctly, but cannot segment the pupil area accurately, because pupils are often neither circular nor elliptical. Inaccurate pupil segmentation also reduces the correct recognition rate of the whole iris recognition system. To evaluate the accuracy of pupil segmentation, 200 iris images in the CASIA-IrisV3 database are employed to test the pupil segmentation obtained by our proposed method. We have also run the same experiments using Monro's method [31] (which assumes circular boundaries) and Miyazawa's method [30] (which assumes elliptical boundaries).

To quantitatively evaluate our results, we compared the segmented pupil area with the actual pupil area. Table 1 gives the percentages of covered pupil area for the three methods. Over-covered area is extra region covered beyond the pupil area; uncovered area is pupil area that we fail to cover; false-covered area consists of both over-covered and uncovered area. Figs. 10 and 11 show the visual results of the three methods. In Fig. 10(a), the method of Monro cannot segment the pupil accurately: a large piece of pupil area is segmented into the iris area. In Fig. 10(b), the method of Miyazawa is more accurate than the first, but still insufficient; some pupil area is again segmented into the iris area. Fig. 10(c) shows that the proposed method segments the pupil very accurately.
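One plausible way to compute the three area percentages from binary masks is sketched below. This is an assumption about the measurement, not the paper's stated procedure: each quantity is normalized here by the actual pupil area, and false-covered area is taken as the symmetric difference of the two masks.

```python
import numpy as np

def pupil_coverage_errors(segmented, actual):
    """Coverage errors of a segmented pupil mask against a ground-truth
    mask, as percentages of the actual pupil area. Over-covered:
    segmented pixels outside the true pupil; uncovered: true pupil
    pixels that were missed; false-covered: their union."""
    segmented = segmented.astype(bool)
    actual = actual.astype(bool)
    n = actual.sum()                       # actual pupil area in pixels
    over = (segmented & ~actual).sum() / n * 100.0
    under = (~segmented & actual).sum() / n * 100.0
    return over, under, over + under
```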

4.4. Recognition results of whole system

Since iris segmentation is one step in the iris recognition system, the results of segmentation should also be evaluated through the performance of the overall iris recognition system. In this experiment, we analyze the performance of Feng's [9] iris recognition system with different methods of iris segmentation (the proposed method and the iris segmentations of Monro [31] and Miyazawa [30]). Feng improved Daugman's iris recognition system [18] and used multiple snapshots (i.e., two iris images) to identify the subject. The system achieves an obvious


Fig. 9. The comparison between the iris segmentation of Ma and the proposed iris segmentation. (a) Iris segmentation of Ma: the edge map contains noisy edges and part of the iris area falls outside the detected boundary. (b) The proposed iris segmentation: the iris boundary is segmented accurately.

Table 1
Covered pupil area with different methods of iris segmentation.

Methods     Over-covered area (%)   Uncovered area (%)    False-covered area (%)
            Mean   Min.   Max.      Mean   Min.   Max.    Mean   Min.   Max.
Monro       2.36   0.43   7.32      2.73   0.56   6.12    3.24   0.78   8.97
Miyazawa    0.82   0.21   1.96      0.77   0.19   1.67    1.03   0.34   2.45
Proposed    0.16   0.13   0.24      0.18   0.15   0.23    0.21   0.18   0.28


performance improvement in comparison with the system designed by Daugman [18].

In this experiment, 1600 iris images in the CASIA-IrisV3 database are employed to test the performance of the iris recognition system with the different iris segmentations. The iris images are divided randomly into two subsets, each with 200 × 4 images; each iris has four images. We select two images of the same iris, one from each subset, to constitute a pair of patterns. In this way, 800 pairs of 200 irises are matched against each other in the iris recognition system. Thus 1200 genuine scores and 318,400 impostor scores are obtained. The min-combined classifier [9] is used in the experiment. Fig. 12 illustrates the receiver operating characteristic (ROC) curves of the systems with the different iris segmentations, and Table 2 shows their equal error rates (EER). The system using the proposed iris segmentation method achieves the best result.
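The EER values of Table 2 can be computed from the genuine and impostor score lists in a few lines. The sketch below is a generic illustration (not Feng's implementation), assuming dissimilarity scores where a lower score means a better match, and a simple sweep over the observed thresholds.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER of a verifier from genuine and impostor dissimilarity scores
    (lower = better match). Sweeps thresholds over all observed scores
    and returns the rate at which FAR and FRR are closest."""
    genuine = np.asarray(genuine, float)
    impostor = np.asarray(impostor, float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine > t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```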

5. Conclusion

In this paper, we propose an iris segmentation method based on radial-suppression edge detection. The proposed iris segmentation consists of two phases: radial-suppression edge detection and iris boundary detection.

In radial-suppression edge detection, a non-separable wavelet transform is used to extract the wavelet transform modulus of the iris image, and then a new radial non-maxima suppression is proposed to retain the annular edges and simultaneously remove the radial edges. Next, edge thresholding is utilized to remove the isolated edges and determine the final binary edge map.
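As a rough illustration of the suppression idea (not the paper's non-separable wavelet implementation), one can keep only the responses whose gradient direction is approximately radial with respect to an estimated pupil center: annular boundary edges have radial gradients, while radial edges such as eyelashes have tangential ones. The function name and the cosine threshold below are hypothetical.

```python
import numpy as np

def radial_suppression(modulus_x, modulus_y, center, cos_thresh=0.7):
    """Keep only edge responses whose gradient direction is roughly
    radial with respect to `center`; suppress the rest. `modulus_x`
    and `modulus_y` stand in for the horizontal/vertical components of
    the wavelet-transform modulus (any gradient field works)."""
    h, w = modulus_x.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    rx, ry = xx - center[0], yy - center[1]
    r_norm = np.hypot(rx, ry) + 1e-9
    g_norm = np.hypot(modulus_x, modulus_y) + 1e-9
    # |cosine| between the gradient direction and the radial direction
    cos = np.abs(modulus_x * rx + modulus_y * ry) / (g_norm * r_norm)
    out = np.hypot(modulus_x, modulus_y)
    out[cos < cos_thresh] = 0.0   # tangential gradient -> radial edge, drop it
    return out
```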

Based on the binary edge map, a self-adaptive iris boundary detection method is proposed, which can detect the iris boundaries adaptively regardless of the shapes of the boundaries.


Fig. 10. The results of segmentation and normalization. (a) The results of the method of Monro: part of the pupil area is segmented into the iris area. (b) The results of the method of Miyazawa: part of the pupil area is segmented into the iris area. (c) The results of the proposed method: the boundary of the pupil is segmented accurately.



Fig. 11. Examples of segmentation and normalization. Column 1 gives the results of the method of Monro; column 2 gives the results of the method of Miyazawa; and column 3 gives the results of the proposed method.



Table 2
The EER of Feng's iris recognition systems with different methods of iris segmentation.

Method of segmentation     EER (%)
The method of Monro        3.36
The method of Miyazawa     1.36
The proposed method        0.32

Fig. 12. ROC curves (false accept rate vs. false reject rate, both in %) of the iris recognition systems with the iris segmentation methods of Monro, Miyazawa, and the proposed method.


The experimental results presented in this paper show that the proposed iris segmentation is better than the conventional iris segmentation methods, and that an iris recognition system utilizing the proposed iris segmentation achieves a higher recognition rate than one using a conventional method.

Acknowledgments

This work is supported by Grants 60773187 and 60803056 from the NSFC, NCET-07-0338 from the Ministry of Education, and Grants 2006ABA023, 2007CA011 and 2007ABA036 from the Department of Science and Technology of Hubei Province, China. Xinge You is the corresponding author.

References

[1] CASIA-IrisV3, Institute of Automation, Chinese Academy of Sciences, http://www.cbsr.ia.ac.cn/IrisDatabase.htm, 2006.

[2] C. Belcher, Y. Du, A selective feature information approach for iris image-quality measure, IEEE Transactions on Information Forensics and Security 3 (3) (2008) 572–577.

[3] J. Canny, A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (6) (1986) 679–698.

[4] W. Cao, R. Che, D. Ye, An illumination-independent edge detection and fuzzy enhancement algorithm based on wavelet transform for non-uniform weak illumination images, Pattern Recognition Letters 29 (3) (2008) 192–199.

[5] K.R. Castleman, Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 1996.

[6] Y. Du, R.W. Ives, D.M. Etter, T.B. Welch, Use of one-dimensional iris signatures to rank iris pattern similarities, Optical Engineering 45 (3) (2006) 037201.

[7] B. Fang, Y.Y. Tang, Improved class statistics estimation for sparse data problems in offline signature verification, IEEE Transactions on Systems, Man, and Cybernetics Part C: Applications and Reviews 35 (3) (2005) 276–286.

[8] B. Fang, Y.Y. Tang, Elastic registration for retinal images based on reconstructed vascular trees, IEEE Transactions on Biomedical Engineering 53 (6) (2006) 1183–1187.

[9] X. Feng, X. Ding, Y. Wu, P.S.P. Wang, Classifier combination and its application in iris recognition, International Journal of Pattern Recognition and Artificial Intelligence 22 (3) (2008) 617–638.

[10] X. Feng, C. Fang, X. Ding, Y. Wu, Iris localization with dual coarse-to-fine strategy, in: 18th International Conference on Pattern Recognition, vol. 4, 2006, pp. 553–556.

[11] L. Flom, A. Safir, Iris recognition system, United States Patent 4641349, 1987.

[12] Z. Fu, W. Li, X. Li, F. Li, Y. Wang, Automatic tongue location and segmentation, in: International Conference on Audio, Language and Image Processing, 2008, pp. 1050–1055.

[13] R.M. Haralick, Digital step edges from zero-crossing of second directional derivatives, IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (1) (1984) 58–68.

[14] X. He, P. Shi, A new segmentation approach for iris recognition based on hand-held capture device, Pattern Recognition 40 (4) (2007) 1326–1333.

[15] M. Hueckel, An operator which locates edges in digital pictures, Journal of the ACM 18 (1) (1971) 113–125.

[16] J. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (11) (1993) 1148–1161.

[17] J. Daugman, C. Downing, Epigenetic randomness, complexity, and singularity of human iris patterns, Proceedings of the Royal Society of London Series B: Biological Sciences 268 (2001) 1737–1740.

[18] J.G. Daugman, The importance of being random: statistical principles of iris recognition, Pattern Recognition 36 (2) (2003) 279–291.

[19] J. Kim, S. Cho, J. Choi, Iris recognition using wavelet features, Journal of VLSI Signal Processing 38 (2) (2004) 147–156.

[20] A. Kumar, A. Passi, Comparison and combination of iris matchers for reliable personal identification, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008, pp. 1–7.

[21] H. Li, L. Ko, J.H. Lim, J. Liu, D.W.K. Wong, T.Y. Wong, Y. Sun, Automatic opacity detection in retro-illumination images for cortical cataract diagnosis, in: IEEE International Conference on Multimedia and Expo, 2008, pp. 553–556.

[22] X. Li, T. Yuan, N. Yu, Y. Yuan, Adaptive color quantization based on perceptive edge protection, Pattern Recognition Letters 24 (16) (2003) 3165–3176.

[23] J. Lia, X. Li, D. Tao, KPCA for semantic object extraction in images, Pattern Recognition 41 (10) (2008) 3244–3250.

[24] L. Ma, T. Tan, Y. Wang, D. Zhang, Personal identification based on iris texture analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (12) (2003) 1519–1533.

[25] S. Mallat, W.L. Hwang, Singularity detection and processing with wavelets, IEEE Transactions on Information Theory 38 (1992) 617–643.

[26] S. Mallat, S. Zhong, Characterization of signals from multiscale edges, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (7) (1992) 710–732.

[27] D. Marr, E.C. Hildreth, Theory of edge detection, Proceedings of the Royal Society of London Series B: Biological Sciences 207 (1980) 187–217.

[28] D. Marr, S. Ullman, Directional selectivity and its use in early visual processing, Proceedings of the Royal Society of London Series B: Biological Sciences 211 (1981) 151–180.

[29] L. Mero, A simplified and fast version of the Hueckel operator for finding optimal edges in pictures, in: Proceedings of the 4th International Joint Conference on Artificial Intelligence, 1975, pp. 650–655.

[30] K. Miyazawa, K. Ito, T. Aoki, K. Kobayashi, H. Nakajima, An effective approach for iris recognition using phase-based image matching, IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (10) (2008) 1741–1756.

[31] D.M. Monro, S. Rakshit, D. Zhang, DCT-based iris recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (4) (2007) 586–595.



[32] M. Nabti, A. Bouridane, An effective and fast iris recognition system based on a combined multiscale, Pattern Recognition 41 (3) (2008) 868–879.

[33] R. Nevatia, Evaluation of simplified Hueckel edge-line detector, Computer Graphics and Image Processing 6 (6) (1977) 582–588.

[34] H.-A. Park, K.R. Park, Iris recognition based on score level fusion by using SVM, Pattern Recognition Letters 28 (15) (2007) 2019–2028.

[35] S.J. Pundlik, D.L. Woodard, S.T. Birchfield, Non-ideal iris segmentation using graph cuts, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 2008, pp. 1–6.

[36] R.P. Wildes, Iris recognition: an emerging biometric technology, Proceedings of the IEEE 85 (9) (1997) 1348–1363.

[37] M.A. Shah, R. Jain, Detecting time-varying corners, in: Proceedings of the 7th International Conference on Pattern Recognition, 1984, pp. 2–5.

[38] Y.Y. Tang, L.H. Yang, L. Feng, Characterization and detection of edges by Lipschitz exponent and MASW wavelet transform, in: Proceedings of the 14th International Conference on Pattern Recognition, Brisbane, Australia, 1998, pp. 1572–1574.

[39] Y.Y. Tang, L. Yang, J. Liu, Characterization of Dirac-structure edges with wavelet transform, IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics 30 (1) (2000) 93–109.

[40] V. Torre, T. Poggio, On edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (2) (1986) 147–163.

[41] D. Xu, X. Li, Z. Liu, Y. Yuan, Cast shadow detection in video segmentation, Pattern Recognition Letters 26 (1) (2005) 91–99.

[42] D. Xu, J. Liu, X. Li, Z. Liu, X. Tang, Insignificant shadow detection for video segmentation, IEEE Transactions on Circuits and Systems for Video Technology 15 (8) (2005) 1058–1064.

[43] G.Z. Xu, Z.F. Zhang, Y.D. Ma, Automatic iris segmentation based on local areas, in: 18th International Conference on Pattern Recognition, vol. 4, 2006, pp. 505–508.

[44] J. Yang, X. You, Y.Y. Tang, B. Fang, A watermarking scheme based on discrete non-separable wavelet transform, Pattern Recognition and Image Analysis (2005) 427–434.

[45] X. You, Q. Chen, Y.Y. Tang, Construction of non-tensor product wavelet and its application, Technical Report, Department of Computer Science, Hong Kong Baptist University, 2007.

[46] X. You, D. Zhang, Q. Chen, Face representation by using non-tensor product wavelets, in: 18th International Conference on Pattern Recognition, 2006, pp. 503–506.

[47] L. Yu, D. Zhang, K. Wang, The relative distance of key point based iris recognition, Pattern Recognition 40 (2) (2007) 423–430.

[48] T. Zhang, B. Fang, W. Liu, Y.Y. Tang, G. He, J. Wen, Total variation norm-based nonnegative matrix factorization for identifying discriminant representation of image patterns, Neurocomputing 71 (2008) 1824–1831.

[49] T. Zhang, B. Fang, Y.Y. Tang, G. He, J. Wen, Topology preserving non-negative matrix factorization for face recognition, IEEE Transactions on Image Processing 17 (4) (2008) 574–584.

[50] H. Zhou, Y. Yuan, F. Lin, T. Liu, Level set image segmentation with Bayesian analysis, Neurocomputing 71 (2008) 1994–2000.

[51] J. Zuo, N.K. Ratha, J.H. Connell, A new approach for iris segmentation, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 2008, pp. 1–6.