
Digital Signal Processing 22 (2012) 971–986


Iris localization in frontal eye images for less constrained iris recognition systems

Farmanullah Jan, Imran Usman ∗, Shahrukh Agha

Department of Electrical Engineering, COMSATS Institute of Information Technology, Islamabad, Pakistan

∗ Corresponding author. E-mail address: [email protected] (I. Usman).
1051-2004/$ – see front matter © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.dsp.2012.06.001


Article history: Available online 5 June 2012

Keywords: Near infrared; Visible wavelength; Iris localization; Biometrics; Iris recognition; Iris segmentation

Commercial iris recognition systems do not perform well for non-ideal data, because their iris localization algorithms are specifically developed for controlled data. This paper presents a robust iris localization algorithm for less constrained data. It includes: (i) suppressing specular reflections; (ii) localizing the iris inner (pupil circle) and outer (iris circle) boundaries in a two-phase strategy. In the first phase, we use the Hough transform, gray level statistics, adaptive thresholding, and a geometrical transform to extract the pupil circle in a sub-image containing a coarse pupil region. After that, we localize the iris circle in a sub-image centered at the pupil circle. However, if the first phase fails, the second phase starts, where we first localize a coarse iris region in the eye image. Next, we extract the pupil circle within the coarse iris region by reusing the procedure of the first phase. Following that, we localize the iris circle. In either of the two phases, we validate the pupil location by using an effective occlusion transform; and (iii) regularizing the iris circular boundaries by using radial gradients and active contours. Experimental results show that the proposed technique is tolerant to off-axis eye images, specular reflections, non-uniform illumination, and occlusions from glasses, contact lenses, hair, eyelashes, and eyelids.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

Robust and reliable security measures have long been a major concern of the government and public sectors, such as e-banking, law enforcement agencies, border control, and travel agencies. In recent years, the field of information technology has gone through tremendous development, including innovations and maturity in both software and hardware; auto-teller machines, cellular phones, and Google Maps are a few examples. However, despite these developments, the life and assets of an individual are still exposed to criminal hazards. For instance, events such as cyber crimes, bank frauds, and human trafficking are common in the news media today. This is mostly because of the conventional security systems deployed to identify and verify humans. As these systems are based on knowledge and tokens [1], i.e. words, phrases, keys, identity cards, personal identification numbers, etc., they can be fooled by professional culprits. Due to these flaws, the emphasis is now on devising non-invasive and robust automated human identification and verification systems such as biometrics.

Biometric technology uses automated techniques to recognize and verify an individual from his or her behavioral and physical characteristics, such as DNA, gait, smell, hand geometry, signature, voice, retina, ear, iris, face, etc. [2–5]. Among these traits, the iris has profoundly enticed the research community because of its stable and unique characteristics over the lifetime of a subject, except for some minor changes that occur in early childhood [6,7]. The iris is an externally visible internal organ protected by the cornea. It is an annulus located between the sclera and the pupil [8,9]. Moreover, it has a quite complex structure that comprises features such as ridges, corona, furrows, crypts, freckles, and the arching ligaments [1,6,7].

The literature reveals that the irises of any two individuals are not identical; this holds even for the left and right eyes of the same subject [1,7,9]. As the iris is very stable, unique, and non-invasive in nature, it is now considered the most secure and reliable biometric by the research community. Hence, this technology can be deployed in various venues requiring secure and reliable security measures, for example, criminal investigation and citizen identification agencies; financial services, health care, and law enforcement departments; travel and immigration venues; and access to PC- and network-based systems, to name a few. A state-of-the-art iris recognition system comprises the following four basic modules [10,11]:

I. Image acquisition,
II. Iris segmentation,
III. Normalization and iris code generation, and
IV. Comparison and recognition.

Most commercially available iris recognition systems are based on the pioneering algorithms of Daugman [6] and Wildes [7]. These perform well under ideal conditions but may fail for non-ideal data. In ideal systems, the eye images are acquired in a very constrained environment (i.e. stop-and-stare) [6,9]. For example, the subject (wearing no glasses and/or contact lenses) should stand at a short distance and look directly at the camera. This way, the acquired eye image is of relatively good quality. On the other hand, in non-ideal systems, the eye images may contain multiple issues such as specular reflections, low contrast, blurring, defocus, non-uniform illumination, glasses and contact lenses, off-axis and off-angle eyes, and occlusions such as eyelashes, eyelids, and hair [12,13].

Non-ideal data come from two types of systems: less-constrained iris recognition systems and unconstrained iris recognition systems [10,13]. In unconstrained systems, the eye images are acquired on the move and/or at a distance using visible wavelength (VW) or near infrared (NIR) illumination sources. In this regard, there are some well-known public iris databases, such as CASIA v4.0 (Distance) [14] and UBIRIS v2.0 [15], which are used to simulate unconstrained systems. A number of algorithms [12,16–21] have been proposed for iris segmentation in unconstrained data, but they have been tested only on a set of 500 eye images from UBIRIS v2.0 [15]. Moreover, some of them [12,16,22] take longer to segment an iris. Similarly, in less-constrained systems, the eye images may contain all the issues mentioned above, but they are not necessarily acquired on the move and/or at a distance. We recommend our proposed work for less constrained systems, especially NIR systems, because the NIR range is useful for individuals having dark irises and wearing contact lenses and/or glasses [10,23].

A typical iris segmentation process includes the following steps: demarcating the iris inner and outer boundaries at the pupil and sclera; demarcating the upper and/or lower eyelids if they occlude; and detecting and excluding any superimposed occlusions such as eyelashes, eyelids, shadows, or reflections [6,7,10,11]. The literature reveals that various techniques, such as genetic algorithms [24], active contour models [25], gradients and circular edge detectors [6,7], thresholding and histograms [11], and fuzzy logic [26,27], are commonly used for iris segmentation. Iris segmentation algorithms based on thresholding [10,11] work better for ideal data but may fail for non-ideal data. In contrast, active contour models and gradient-based techniques [10,28] are now much more common for iris segmentation in less constrained or unconstrained systems. The literature shows that the non-ideal issues mentioned earlier affect most contemporary iris segmentation algorithms.

Basit et al. [1] first localize the pupil circle by searching for a region having relatively low gray level intensity in a gray eye image. This process may fail for an eye image containing other low intensity regions such as eyebrows, hair, or glasses, because any one of them may trap this technique [29]. Next, to extract the pupil circle center and radius parameters, they suppress specular reflections by deleting small edges in an edge image, a process that is very specific to a particular iris database. Finally, they extract the iris circle by doing gradient analysis in the horizontal direction, which may not work for an off-axis eye image (an image in which the eye is not oriented horizontally). Similarly, Khan et al. [11] remove specular reflections by passing an eye image through a min filter followed by a median filter. This process is sensitive to the window size of the median filter; that is, an object (be it the pupil or a specular reflection) whose area is less than one-half the filter area may be removed [30]. In addition, they also extract the radius of the iris circle by doing gradient analysis in the horizontal direction, which may not work for the off-axis issue. We have found that most contemporary iris localization techniques [6,7,17,20,21,25,31–34] are not tolerant to the off-axis issue, because their analysis assumes a horizontally oriented eye image.

To resolve the issues highlighted above, we propose a robust iris localization algorithm. It includes suppressing the specular reflections in the eye image; localizing the pupil and iris circles using a two-phase strategy; and regularizing the circular iris boundaries by using radial gradients and active contours. In the proposed technique, we focus on demarcating the iris inner and outer non-circular contours only, as done in [11,28,35–37]. However, to exclude other unwanted iris parts, such as eyelashes, eyelids, and hair, refer to the published work [1,10]. The remainder of this paper is organized as follows. Section 2 explains the basic modules of the proposed technique in explicit detail. Experimental results and discussion are presented in Section 3, and finally, Section 4 concludes the work.

2. Proposed iris localization algorithm

The procedure of the proposed algorithm is shown in Fig. 1. It comprises four basic modules: Image Preprocessing (IP), Phase-1 Iris Localization (P1_IL), Phase-2 Iris Localization (P2_IL), and Fitting Non-circular Contours (FNC). We experimentally observed that in almost all the standard iris databases [14,15,38,39], the eye images contain specular reflections (white dots) that hinder the iris localization process [6,17,21,22,40]. Therefore, module IP effectively suppresses these white dots in the eye image. The resultant image is called the preprocessed eye image.

The literature reveals that the circular Hough transform (CHT) is tolerant [32] to broken object contours in ideal images. However, this may not hold for non-ideal data [40], because of non-uniform illumination, non-circular iris contours, and occlusions such as hair, glasses, contact lenses, eyelids, and eyelashes. To make the CHT robust for non-ideal data as well, we augment it with image gray level statistics (IGS), for example, the global average gray level intensity and the lower and upper saturated gray level limits of the eye image. This combination of CHT and IGS results in a two-fold strategy: a circular region in an eye image is considered an iris/pupil region only if the following two conditions are true: (i) a peak (i.e. votes) corresponding to the pupil/iris circle is present in the CHT accumulator; and (ii) the gray level intensity of that circular region is relatively low with respect to some threshold value.

Next, module P1_IL localizes the pupil and iris circles in the preprocessed eye image using an effective scheme. If it fails, then P2_IL is invoked to handle the iris localization process using a strategy different from P1_IL. However, if it fails too, then the iris localization process is aborted, because the eye image may not contain an eye or its quality may be low [6]. Finally, FNC regularizes the non-circular iris contours by using a combination of radial gradients and active contours. As mentioned earlier, our focus is on precise localization of the iris inner and outer contours; therefore, the final localized iris may contain eyelid and/or eyelash occlusions, as in [11,28,35–37]. The following sections detail the basic modules of the proposed algorithm.

2.1. Image preprocessing (IP)

In most standard iris databases [14,15,38,39], the input eye image Iim(x, y) may contain white dots that hinder the iris localization process if not properly suppressed [6,17,21,22,40]. Herein, we propose an effective scheme to suppress the white dots and any other spiky noise, such as hair threads or salt-and-pepper noise, in the eye image as follows:

Fig. 1. Block diagram of the proposed algorithm.

Fig. 2. (a) Gray level eye image Igray(x, y). (b) Binary image b(x, y). (c) Image bhf(x, y) showing no holes. (d) Image binf(x, y). (e) Image M(x, y). (f) Image Ir(x, y), and (g) preprocessed eye image Ipre(x, y). Eye image is taken from CASIA v3.0 (Interval) [14].

To begin with, we convert Iim(x, y) to a gray level format Igray(x, y), if it is not one already. After that, we estimate its lower and upper saturated gray level limits [41,42], αL and αU respectively. Here, αL and αU represent the saturated gray level values of the bottom and top 1% of all the gray values in Igray(x, y), respectively [42]. Next, we convert Igray(x, y) to a binary image b(x, y) by using a threshold Tb as follows:

$$T_b = \alpha_U - \alpha_L - 10,$$

$$b(x, y) = \begin{cases} 1, & \text{if } I_{gray}(x, y) < T_b, \\ 0, & \text{otherwise.} \end{cases} \tag{1}$$

Here, Tb is empirically set after carrying out exhaustive experimentation on the standard iris databases. In b(x, y), a black region (0s) surrounded by a white region (1s) is called a hole, and the white region is called an object.

After that, we use a 4-connectivity method [30] to first detect and then complement the gray level intensity of holes in b(x, y); the resultant image is bhf(x, y). Further, we subtract b(x, y) from bhf(x, y); the resultant image binf(x, y) contains information regarding the holes and any other white regions, possibly the sclera and skin. We experimentally observed that in most iris databases, the area of a white dot, excluding the sclera and skin parts, is less than 80 pixels. Hence, we delete any object in binf(x, y) whose area is greater than 80 pixels, which prevents unwanted editing of the sclera and/or skin regions; the resultant image is M(x, y).

Finally, we use the position coordinates (x, y) of the white pixels (1s) in M(x, y) to replace, with αL, the gray level values of only those pixels in Igray(x, y) that share the same coordinates. The resultant image is Ir(x, y). We also pass the image Ir(x, y) through a median filter with window size [7 × 7] to reduce any boundary artifacts [30]; the resultant image is called the preprocessed eye image Ipre(x, y). Fig. 2 shows the results of this module for an eye image from CASIA v3.0 (Interval) [14]. This is the only CASIA database that contains multiple specular reflections (i.e. 8 white dots) in the iris/pupil region. It is obvious from the results that the proposed technique effectively suppresses these white dots.
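As a concrete illustration of the IP module, the following is a minimal sketch in Python with NumPy, OpenCV, and SciPy (the paper's own implementation is in MATLAB). All function and variable names are ours; the 1% saturation limits, the 80-pixel area bound, and the [7 × 7] median window come from the text above.

```python
import cv2
import numpy as np
from scipy.ndimage import binary_fill_holes, label

def suppress_reflections(I_gray, max_area=80):
    """Sketch of module IP: suppress white dots (specular reflections)."""
    # lower/upper saturated gray limits: bottom and top 1% of gray values
    aL, aU = np.percentile(I_gray, [1, 99])
    Tb = aU - aL - 10                        # threshold Tb of Eq. (1)
    b = I_gray < Tb                          # dark objects -> 1
    b_hf = binary_fill_holes(b)              # fill holes (white dots)
    b_inf = b_hf & ~b                        # holes + other white regions
    lab, n = label(b_inf)                    # 4-connected components
    M = np.zeros_like(b_inf)
    for i in range(1, n + 1):                # keep only small components,
        comp = lab == i                      # i.e. likely reflections
        if comp.sum() <= max_area:
            M |= comp
    I_r = I_gray.copy()
    I_r[M] = int(aL)                         # paint reflections with aL
    return cv2.medianBlur(I_r.astype(np.uint8), 7)   # 7x7 median smoothing
```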


Fig. 3. Block diagram of P1_IL.

2.2. Phase-1 iris localization (P1_IL)

Module P1_IL localizes the pupil and iris circles in the preprocessed eye image Ipre(x, y); its procedure is shown in Fig. 3. It involves two sub-modules, Phase-1 pupil circle localization (P1_PCL) and Phase-1 iris circle localization (P1_ICL), to achieve its set goals. The following sections explain these modules explicitly.

2.2.1. Phase-1 pupil circle localization (P1_PCL)

Module P1_PCL extracts the pupil circle in the preprocessed eye image Ipre(x, y) by using the CHT of the pupil circle and the IGS of Ipre(x, y). It involves several tasks, namely the Hough accumulator, pupil region localization, pupil circle extraction, and the pupil location validation test, to extract the center (xp, yp) and radius rp of the pupil circle. It works as follows:

2.2.1.1. Hough accumulator. The Hough transform is most often used in machine vision to detect lines, circles, parabolas, and other generic shapes in an image [30]. It is evident from the literature that most researchers have used the Hough transform in their recent iris segmentation algorithms [17,18,31,32,35,43–45] for non-ideal systems. Herein, we develop the CHT accumulator hcp for the pupil circle as follows:

To begin with, we scale down Ipre(x, y) by a scaling factor σ [1,10,34]; the resultant image is Isld(x, y). This simply speeds up the circle localization process, as the CHT is a computationally expensive algorithm [10]. Based on experimental observations, we empirically set σ to 0.5. Next, we pass Isld(x, y) through a modified Canny edge detector to get an edge image Iedge(x, y) [1,7,30,34]. After that, we utilize the expression given in Eq. (2) to develop hcp for a pupil circle radii range (rp min ∼ rp max) [7,10,34], where rp min and rp max represent the lower and upper radii limits of the pupil circle for a specific iris database (see Table 2). For optimal performance, this range is necessary [17,18,31,32,35,43–45]. Finally, we cast votes in the CHT accumulator hcp for each edge pixel (xi, yi) from Iedge(x, y) [34]:

$$h_{cp}(x_{cp}, y_{cp}, r_{cp}) = \sum_{i=1}^{n} h(x_i, y_i, x_{cp}, y_{cp}, r_{cp}), \tag{2}$$

with

$$h(x_i, y_i, x_{cp}, y_{cp}, r_{cp}) = \begin{cases} 1, & \text{if } (x_i, y_i) \text{ lies on the circle } (x_{cp}, y_{cp}, r_{cp}), \\ 0, & \text{otherwise.} \end{cases}$$

Now the location (xcp, ycp, rcp) for which hcp is maximum represents a potential parameter vector of the pupil circle, where (xcp, ycp) and rcp represent the center and radius of a coarse pupil circle, respectively.
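For illustration, the accumulator of Eq. (2) can be built by brute force as below. This is a sketch under our own discretization (a fixed number of angular votes per radius), not the authors' code; a peak acc[y, x, k] then plays the role of Ψcp at (xcp, ycp, rcp) = (x, y, radii[k]).

```python
import numpy as np

def cht_accumulator(edges, r_min, r_max, n_theta=64):
    """Brute-force circular Hough accumulator in the spirit of Eq. (2)."""
    H, W = edges.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((H, W, radii.size), dtype=np.int32)
    ys, xs = np.nonzero(edges)                     # edge pixels (x_i, y_i)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    for k, r in enumerate(radii):
        for t in thetas:
            # each edge pixel votes for centres at distance r from it
            xc = np.round(xs - r * np.cos(t)).astype(int)
            yc = np.round(ys - r * np.sin(t)).astype(int)
            ok = (xc >= 0) & (xc < W) & (yc >= 0) & (yc < H)
            np.add.at(acc, (yc[ok], xc[ok], k), 1)
    return acc, radii
```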

2.2.1.2. Pupil region localization. As mentioned earlier, for non-ideal data the CHT alone may not be enough to robustly localize a circle, for the following possible reasons: non-uniform illumination; the non-circular nature of the iris contours; and occlusions such as eyelids, glasses, hair, etc. Therefore, we augment it with an adaptive threshold λ to get a coarse location of the pupil region in Ipre(x, y):

$$\lambda = \alpha_L + \omega, \tag{3}$$

where ω is a biasing constant (see Table 2). The argument for using ω is to bias the value of λ. We do so because in non-ideal data the pupil region may not always appear darker than other low intensity regions such as hair, eyebrows, and eyelashes; this occurs in images containing heavily pigmented (dark) irises and in images with serious noise effects [6,32]. We experimentally observed that the average gray level intensity of the pupil region is always less than λ. Based on this assumption, we use Algorithm 1 to localize a potential location of the pupil region in Ipre(x, y).

Working of Algorithm 1: Herein, the proposed technique uses the CHT accumulator hcp and image gray level statistics (IGS), such as μcp and αL, to robustly localize the pupil region. First, we extract a peak Ψcp and its corresponding location (xcp, ycp, rcp) from hcp. Next, we compute the average gray level intensity μcp of the circular region described by (xcp, ycp, rcp) in Ipre(x, y).


Algorithm 1: Pupil region localization.

Input: Preprocessed eye image Ipre(x, y), CHT threshold Ωcp, and the CHT accumulator hcp.

Step 1. Extract a peak Ψcp from hcp: Ψcp ← max(hcp).
Step 2. Extract the location (xcp, ycp, rcp) of Ψcp from hcp.
Step 3. Compute the average gray level intensity μcp of the circular region described by (xcp, ycp, rcp) in Ipre(x, y).
Step 4. if (μcp < λ) then
            (xcp, ycp, rcp) represents a potential pupil region.
        else if (Ψcp > Ωcp) then
            suppress Ψcp to zero in hcp and repeat from Step 1.
        else
            abort module P1_IL.
        end

Output: Coarse circle parameters (xcp, ycp, rcp) of a potential pupil region in Ipre(x, y).

Now, if μcp is less than λ, then (xcp, ycp, rcp) represents a potential location of the pupil region. Otherwise, if Ψcp is greater than a lower CHT threshold Ωcp (see Eq. (4)) of hcp, then we suppress it to zero in hcp and repeat the whole process from Step 1 of Algorithm 1. However, if hcp is scanned down to Ωcp and no circle is found, then we abort P1_IL and transfer control to phase P2_IL.

CHT threshold Ωcp: As hcp may contain a large number of candidates (i.e. peaks) for a target circle, trying each one of them would not be practicable for real time applications. We experimentally observed that the top 50% of the peaks of the CHT accumulator usually contain a right choice for the desired circle; however, for the worst-case scenario, we extend this range to the top 70% of hcp. Peaks below Ωcp are likely useless, because spurious edges may contribute to them. We use the following expression to set the value of Ωcp, which is then kept fixed throughout the iris localization process:

$$\Omega_{cp} = 0.3 \cdot \max(h_{cp}). \tag{4}$$
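Algorithm 1 translates almost line for line into code. The sketch below assumes the accumulator layout of the previous sketch (acc[y, x, radius index]); the names localize_pupil_region, lam, and Omega_cp are ours.

```python
import numpy as np

def localize_pupil_region(I_pre, acc, radii, alpha_L, omega):
    """Sketch of Algorithm 1: scan CHT peaks down to Omega_cp until a
    sufficiently dark circular region is found."""
    lam = alpha_L + omega                     # adaptive threshold, Eq. (3)
    Omega_cp = 0.3 * acc.max()                # lower CHT threshold, Eq. (4)
    acc = acc.copy()
    yy, xx = np.mgrid[0:I_pre.shape[0], 0:I_pre.shape[1]]
    while acc.max() > Omega_cp:               # Step 1: peak Psi_cp
        yc, xc, k = np.unravel_index(acc.argmax(), acc.shape)   # Step 2
        r = radii[k]
        disc = (xx - xc) ** 2 + (yy - yc) ** 2 <= r ** 2
        if I_pre[disc].mean() < lam:          # Steps 3-4: dark enough?
            return xc, yc, r                  # potential pupil region
        acc[yc, xc, k] = 0                    # suppress peak and repeat
    return None                               # abort P1_IL, go to P2_IL
```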

Fig. 4 shows the results of Algorithm 1 on the gray level eye image Igray(x, y). It demonstrates three specific cases. (i) As the pupil region in Fig. 4(a) is nearly circular, it is marked with fairly good fidelity; we experimentally observed that this is the most frequently occurring case. (ii) In Fig. 4(b), the pupil region is less circular and, therefore, a significant part of the pupil region is not included (a less frequently occurring case). (iii) Finally, in Fig. 4(c), the pupil in the iris region is elliptical and the eye image contains other low intensity regions as well; here, a thick strip of eyelashes on the upper eyelid traps the proposed technique (a rarely occurring case). In the following tasks, i.e. pupil circle extraction and the pupil location validation test, these issues are resolved successfully.

2.2.1.3. Pupil circle extraction. Algorithm 1 (pupil region localization) simply locates a potential pupil region in the preprocessed eye image Ipre(x, y). However, this region may not be an accurate circular estimate of the pupil region, as shown in Fig. 4. This may be because of the non-circular nature of the iris inner and outer contours, the pupil region being partly obscured by the upper and/or lower eyelid, and the tolerance of the Hough transform to broken object contours. To extract a fine parameter (xp, yp, rp) for the pupil circle, we proceed as follows:

First, we extract a sub-image Isub(x, y) centered at (xcp, ycp) and having each side length at most equal to (2rcp + Δr) pixels in Ipre(x, y) (see Fig. 5(a)), where 2rcp represents the diameter of the coarse pupil circle. The parameter Δr is a compensation factor, which is set to 10 experimentally. It provides some extra marginal space around the coarse pupil region (see Fig. 5(a)). In general, the coarse pupil region may exclude a part of the actual pupil region (see Fig. 4(e)) or it may include a part of the iris as well (see Fig. 5(a)). Therefore, this marginal space enables the proposed technique to properly rectify the coarse pupil region in the sub-image, which possibly contains the actual pupil region inside.

Second, we compute an array of adaptive thresholds δ as δ(κ) = αL + κ, for κ = 1, 2, 3, . . . , ϕ, where ϕ is a biasing weight that is set experimentally (see Table 2). The argument behind ϕ is that for ideal data, the assumption that the pupil is darker than the iris and the iris darker than the sclera [1,10,11] seems reasonable. However, it may not hold for non-ideal data [6,32], where an eye image may contain other low intensity regions [29] such as eyebrows, glasses, hair, etc.

Fig. 4. Potential location of the pupil region in the gray level eye image Igray(x, y): (a)–(c) Pupil region marked with (xcp, ycp, rcp). (d)–(f) Magnified views of (a)–(c) respectively. Eye images are taken from IITD v1.0 [38], CASIA v3.0 (Interval) [14], and MMU (new) v2.0 [39] respectively.


Fig. 5. Results of pupil circle extraction: (a) Sub-image Isub(x, y) showing the coarse pupil region marked with (xcp, ycp, rcp). (b)–(f) Detected binary object, for some intermediate values of κ, in the image Ibin(x, y). (g)–(k) Their corresponding coarse circles (xo, yo, ro) in Isub(x, y). (l) Iris inner boundary marked with the fine pupil circle parameter (xp, yp, rp). Eye image is taken from CASIA v3.0 (Interval) [14].

In non-ideal data, the average gray level intensity of a pupil region may not necessarily be lower than that of these low intensity regions, because the gray level intensity of the pupil region may be biased by some non-ideal factors, for example, non-uniform illumination as in CASIA v3.0 (Lamp) [14]. Hence, this biasing weight provides room for the pupil object to grow properly in a binary image (explained in the following text), which may lead to a successful extraction of the parameter (xp, yp, rp).

Third, we convert Isub(x, y) to a binary image Ibin(x, y) by using δ(κ) (the initial value of κ is 1) as follows:

$$I_{bin}(x, y) = \begin{cases} 1, & \text{if } I_{sub}(x, y) < \delta(\kappa), \\ 0, & \text{otherwise.} \end{cases}$$

Next, we pass Ibin(x, y) through a morphological opening operation [30,41] to isolate any loosely connected objects by a depth-size β (the number of pixels by which two loosely connected objects are separated; see Table 2). Following this, we use a 4-connectivity procedure [30,41] to detect and then invert the gray level intensity of holes in Ibin(x, y). Now, starting from the center of Ibin(x, y), we reuse the 4-connectivity procedure to detect any object in Ibin(x, y) and mark its boundary B(X, Y) if found, where B(X, Y) is a two-dimensional boundary array holding the x and y coordinates of the boundary pixels. However, if we detect no object, then we increment κ by 1 and repeat the whole process for the next value of δ, provided κ is not greater than ϕ. Otherwise, if no object is detected for all values of κ, then we suppress Ψcp to zero in hcp and repeat the whole process for the next peak from hcp by reusing Algorithm 1.
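One iteration of this threshold-open-fill-detect cycle might look as follows. This is a hedged sketch: we realize the depth-size β as the radius of an elliptical structuring element, which is our reading of the text rather than a detail the paper fixes.

```python
import cv2
import numpy as np
from scipy.ndimage import binary_fill_holes, label

def detect_central_object(I_sub, alpha_L, kappa, beta):
    """One iteration of the adaptive-threshold object search (sketch)."""
    delta = alpha_L + kappa                              # delta(kappa)
    Ibin = (I_sub < delta).astype(np.uint8)
    # morphological opening isolates loosely connected objects (depth beta)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                   (2 * beta + 1, 2 * beta + 1))
    Ibin = cv2.morphologyEx(Ibin, cv2.MORPH_OPEN, se)
    Ibin = binary_fill_holes(Ibin).astype(np.uint8)      # invert holes
    lab, _ = label(Ibin)                                 # 4-connected labeling
    cy, cx = Ibin.shape[0] // 2, Ibin.shape[1] // 2
    if lab[cy, cx] == 0:
        return None                                      # no object at center
    ys, xs = np.nonzero(lab == lab[cy, cx])
    # boundary extremes (x_min, x_max, y_min, y_max) of the detected object
    return xs.min(), xs.max(), ys.min(), ys.max()
```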

Fourth, from B(X, Y), we estimate the lower and upper extreme values, i.e. (xmin, xmax) and (ymin, ymax), of the x and y coordinates respectively. Next, we use these extreme values to derive the following coarse parameters for the detected binary object:

Δx = (xmax − xmin);
Δy = (ymax − ymin);
ς = max(Δx, Δy), the coarse length of the binary object;
υ = min(Δx, Δy), the coarse width of the binary object;
ro = (ς/2 + υ/2)/2, the coarse radius of the binary object;
(xo, yo) = ((xmin + xmax)/2, (ymin + ymax)/2), the coarse center of the binary object.

Now, we propose a new geometrical transform χ to test the circular geometry of the detected object:

$$\chi = \gamma_1 \,\&\, \gamma_2 \,\&\, \gamma_3 \,\&\, \gamma_4, \tag{5}$$

where γ1 = (ς ⩽ 2rp max), γ2 = (υ ⩾ 0.6ς), γ3 = (ro ⩽ rp max), and γ4 = (ro ⩾ rp min). Now, if χ is 1, then we register the parameters xo, yo, and ro into the arrays Xgt, Ygt, and Rgt respectively, and repeat the whole process for all other values of κ.

Finally, we extract the pupil circle parameter (xp, yp, rp) as follows. If Rgt is not empty, then

rp = max[Rgt(κ)], for κ = 1, 2, 3, . . . , ϕ, the radius of the pupil circle;
(xp, yp) = (Xgt(κo), Ygt(κo)), the center of the pupil circle,

where κo is the value of κ for which Rgt(κo) is maximum.
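The geometrical transform of Eq. (5) and the coarse object parameters then reduce to a few comparisons, as in the sketch below, which consumes the boundary extremes returned by the previous sketch; the inequality directions are as reconstructed above.

```python
def chi_test(xmin, xmax, ymin, ymax, rp_min, rp_max):
    """Geometrical transform chi of Eq. (5) on a detected object (sketch)."""
    dx, dy = xmax - xmin, ymax - ymin
    length, width = max(dx, dy), min(dx, dy)        # coarse length and width
    ro = (length / 2 + width / 2) / 2               # coarse radius
    xo, yo = (xmin + xmax) / 2, (ymin + ymax) / 2   # coarse center
    chi = (length <= 2 * rp_max                     # gamma_1
           and width >= 0.6 * length                # gamma_2
           and rp_min <= ro <= rp_max)              # gamma_3 and gamma_4
    return chi, (xo, yo, ro)
```

Candidates passing the test for the different values of κ are registered, and the one with the largest radius is reported as (xp, yp, rp).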

2.2.1.4. Large and small objects issues. The proposed geometrical transform χ rejects an object that does not comply with its criterion. This may occur for one of the following reasons: low contrast between the pupil and iris regions, as in dark irises [6,32]; the pupil being partially occluded by the lower and/or upper eyelid having relatively low gray level intensity [29]; and non-uniform illumination [10]. We experimentally observed that this issue happens rarely. However, to resolve it, we propose an adaptive strategy as follows:

If γ4 is zero, then the detected binary object in Ibin(x, y) is small. This may happen if a pupil region has non-uniform gray level intensity [14,15]; it was experienced for UBIRIS v1.0 [15], which was acquired with a VW illumination source. To rectify this problem, we decrement β by 1, increment ϕ by 1, and repeat the current task while β ⩾ 1. The logic behind decrementing β is to decrease the depth of the morphological opening operation (i.e. to reduce the number of pixels by which the binary object is eroded); this prevents a small object from being completely eroded. Similarly, the argument for incrementing ϕ is to inflate the value of δ, which provides more room for a small object to grow properly.

Similarly, if γ2 is zero, then the coarse width of the detected object is smaller than 60% of its coarse length. This may happen when a pupil region is connected either with the lower and/or upper eyelid or with a thick strip of eyelashes having relatively low gray level intensity. To resolve this issue, we decrement ϕ by 1, increment β by 1, and repeat the current task while the condition ((β ⩽ 8) & (ϕ ⩾ 1)) holds. Herein, incrementing β may isolate a pupil region that is loosely connected with other low intensity regions, while decrementing ϕ may prevent the pupil object in the binary image from growing so much that it connects with the occlusions. In this way, a pupil that is loosely connected with low intensity regions is successfully localized.

Fig. 5 shows the results of this section. Fig. 5(a) illustrates the coarse location of the pupil region in Isub(x, y) marked with (xcp, ycp, rcp).


Fig. 6. Results of the pupil location validation test: (a) Pupil wrongly localized on eyelashes and (b) correct localization of the pupil in (a). Eye image is taken from MMU (new) v2.0 [39].

Here, the coarse pupil circle encloses a significant part of the iris region. Figs. 5(b)–(f) show the successive refinement of the detected binary object for some intermediate values of κ in Ibin(x, y). In each case, the black dot at the center of the binary object represents the coarse center (xo, yo). Figs. 5(g)–(k) illustrate their corresponding coarse circles (xo, yo, ro) in Isub(x, y). Finally, Fig. 5(l) shows the iris inner boundary marked with the pupil circle (xp, yp, rp), which is improved compared to the one marked with (xcp, ycp, rcp).

2.2.1.5. Pupil location validation test. In the above tasks, the parameter vector (xp, yp, rp) of the pupil circle is extracted. However, we experimentally observed that in rare cases the pupil region might be wrongly localized on occlusions. As mentioned earlier, this may happen when the eye image contains other low intensity regions such as eyebrows, a thick strip of eyelashes, glasses, or hair [29], and when the iris has a more elliptical shape (see Fig. 6). We resolve this issue as follows:

To begin with, we compute the average gray level value μ̂ of a small region of size [3 × 3] centered at (xp, yp) in the preprocessed eye image Ipre(x, y):

$$\hat{\mu} = \frac{1}{9} \sum_{q=-1}^{1} \sum_{w=-1}^{1} I_{pre}(x_p + q, y_p + w).$$

Next, we also compute the average gray level value μ̄f of a small region of size [3 × 3] centered at location (xf, yf), for f = 0, 1, 2, . . . , 15, around the pupil circle:

$$\bar{\mu}_f = \frac{1}{9} \sum_{q=-1}^{1} \sum_{w=-1}^{1} I_{pre}(x_f + q, y_f + w), \quad f = 0, 1, 2, \ldots, 15,$$

where xf = xp + (rp + 5)·cos θf, yf = yp + (rp + 5)·sin θf, and θf = f(2π/16). The point (xf, yf) represents the center of each small region, and (rp + 5) indicates that each region is located 5 pixels outside the pupil circle. After that, we propose the following occlusion transform Υ to validate the location of the pupil region in Ipre(x, y):

$$\Upsilon_j = (\bar{\mu}_j - 10 < \hat{\mu} < \bar{\mu}_j + 10) \,\&\, (\bar{\mu}_{j+8} - 10 < \hat{\mu} < \bar{\mu}_{j+8} + 10), \quad j = 0, 1, 2, \ldots, 7. \tag{6}$$

Here, the value 10 is set experimentally; it implies that if the average gray level values of two opposite small regions around the pupil circle are both similar to μ̂ within an offset of 10 gray levels, then this signals a wrong pupil location. Thus, if Υ is 1 for some j, then the pupil-like region is not limited to the inside of the pupil circle (xp, yp, rp) but exists outside as well, which is a sign of occlusions (see Fig. 6(a)). Therefore, we suppress the peak Ψcp to zero in hcp and repeat the entire process for the next peak from hcp by reusing Algorithm 1 (pupil region localization). Otherwise, the parameter (xp, yp, rp) represents the correct pupil location. Fig. 6 shows the results of this section; it is evident that the proposed technique handles the stated issue successfully.
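A direct transcription of the occlusion transform of Eq. (6) follows; it is a sketch in Python/NumPy with our own names, and probes falling outside the image border are not guarded against.

```python
import numpy as np

def occlusion_test(I_pre, xp, yp, rp, tol=10):
    """Occlusion transform of Eq. (6): True flags a suspect pupil location."""
    def mean3x3(x, y):
        x, y = int(round(x)), int(round(y))
        return I_pre[y - 1:y + 2, x - 1:x + 2].mean()   # 3x3 average
    mu_hat = mean3x3(xp, yp)                  # center of the candidate pupil
    mus = []
    for f in range(16):                       # 16 probes 5 px outside circle
        th = f * 2 * np.pi / 16
        mus.append(mean3x3(xp + (rp + 5) * np.cos(th),
                           yp + (rp + 5) * np.sin(th)))
    for j in range(8):                        # opposite probe pairs
        if abs(mu_hat - mus[j]) < tol and abs(mu_hat - mus[j + 8]) < tol:
            return True                       # pupil-like gray outside circle
    return False
```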

2.2.2. Phase-1 iris circle localization (P1_ICL)

After localizing the pupil circle (xp, yp, rp) in module P1_PCL successfully, the next step is to localize the iris circle (xi, yi, ri) in the preprocessed eye image Ipre(x, y). We experimentally observed that the average gray level value of an actual iris region in Ipre(x, y) is smaller than the global average gray level value μpre of the image Ipre(x, y). Herein, we use a combination of the CHT and IGS (i.e. average gray level intensity) to localize the iris circle as follows:

First, we extract a sub-image Isiris(x, y) from Ipre(x, y), which is centered at (xp, yp) and has each side length at most equal to 2ri max (see Fig. 7(b)), where ri max represents the upper radius limit of the iris circle. Next, we develop a CHT accumulator hsiris for the iris circle, for a radii range (ri min ∼ ri max) (see Table 2), by reusing the procedure of the task Hough accumulator (P1_PCL). After that, we extract a peak Ψi and its corresponding location (xi, yi, ri) from hsiris. Following this, we perform the following test to check the validity of (xi, yi, ri):

$$\rho = \zeta_1 \,\&\, \zeta_2 \,\&\, \zeta_3, \tag{7}$$

where

ζ1 = (μiris < μpre),
ζ2 = ((ri − rp) > 10), and
ζ3 = ((xp − 5) < xi < (xp + 5)) & ((yp − 5) < yi < (yp + 5)).

Here, μiris is the average gray level value of a circular region centered at (xi, yi) with radius ri in Ipre(x, y). The condition ζ1 ensures that the localized iris region is a relatively dark region in Ipre(x, y). Similarly, the condition ζ2 ensures that the pupil circle is always smaller than the iris circle [16,18,20,25,46]. Finally, the condition ζ3 ascertains that the two circles are concentric within an offset of ±5 pixels [6,7,11,21,29,40].

Finally, if ρ is 1, then the location (xi, yi, ri) represents the correct iris circle. Otherwise, if Ψi is greater than a lower CHT threshold Ωi (set by using Eq. (4)), then we suppress it to zero in hsiris and repeat the whole process for the next peak from hsiris. Otherwise, we transfer control to phase P2_IL. Fig. 7 illustrates the results of module P1_ICL.
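The validity test of Eq. (7) is again a handful of comparisons; the sketch below assumes an 8-bit preprocessed image and uses our own helper names.

```python
import numpy as np

def iris_circle_test(I_pre, xi, yi, ri, xp, yp, rp):
    """Validity test rho of Eq. (7) for an iris circle candidate (sketch)."""
    yy, xx = np.mgrid[0:I_pre.shape[0], 0:I_pre.shape[1]]
    disc = (xx - xi) ** 2 + (yy - yi) ** 2 <= ri ** 2
    z1 = I_pre[disc].mean() < I_pre.mean()      # iris darker than global mean
    z2 = (ri - rp) > 10                         # iris clearly larger than pupil
    z3 = abs(xi - xp) < 5 and abs(yi - yp) < 5  # near-concentric, +/- 5 px
    return z1 and z2 and z3
```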


Fig. 7. Results of P1_ICL: (a) Pupil circle marked with (xp, yp, rp) in the preprocessed eye image Ipre(x, y). (b) Sub-image Isiris(x, y) (the white dot in the pupil region represents (xp, yp)). (c) Iris circle localized in Isiris(x, y); here, the two white dots in the pupil region indicate the pupil and iris centers. (d) Iris localized in the gray level eye image Igray(x, y), with circular estimation. Eye image is taken from MMU (new) v2.0 [39].

Fig. 8. Block diagram of module P2_IL.

2.3. Phase-2 iris localization (P2_IL)

The previous module, P1_IL, localizes an iris in the eye image with a circle approximation. However, we experimentally observed that if occlusions such as the lower and/or upper eyelid obscure the pupil region severely, then the Hough accumulator for the pupil circle might not get enough votes, which may cause P1_IL to fail. Generally, as the iris circle is always bigger than the pupil circle [18,25], more votes may go to the CHT accumulator of the iris circle than to that of the pupil circle. Following this argument, we localize the pupil circle within a coarse iris region, as done in [6,21]. As shown in Fig. 8, P2_IL comprises two sub-modules: Phase-2 pupil circle localization (P2_PCL) and Phase-2 iris circle localization (P2_ICL). The following sections explain these modules explicitly.

2.3.1. Phase-2 pupil circle localization (P2_PCL)

In this section, we localize the pupil circle parameter (xp, yp, rp) within a coarse iris region. To proceed, we reuse the task Hough accumulator (P1_PCL) to develop a CHT accumulator hi2 for the iris circle, for a radii range (ri min ∼ ri max) (see Table 2). Next, to localize a coarse iris region in the preprocessed eye image Ipre(x, y), we extract a peak Ψi2 and its corresponding location (xci, yci, rci) from hi2. After that, we compute the average gray level value μi2 of the circular region described by (xci, yci, rci) in Ipre(x, y). Now, if μi2 is less than μpre, then (xci, yci, rci) represents a potential coarse iris region (see Fig. 9(a)). Here, μpre is the average gray level value of Ipre(x, y). Otherwise, if Ψi2 is greater than a lower CHT threshold Ωi2 (set by Eq. (4)), then we suppress it to zero in hi2 and repeat the whole process for another peak. Otherwise, we abort P2_IL, because either the eye image does not contain an eye or its quality is poor; for instance, a closed eye or an image with no eye at all.

Now, for the pupil circle localization, we extract a sub-image Ii2(x, y) centered at (xci, yci) and having each side length at most equal to 2rci, as shown in Fig. 9(b). We then reuse the task pupil circle extraction (P1_PCL) to extract the pupil circle (xp, yp, rp) in Ii2(x, y), and we also reuse the task pupil location validation test (P1_PCL) to check the location of the pupil region. However, if no pupil circle is localized, then we suppress Ψi2 to zero in hi2 and repeat the entire process for the next peak from hi2, provided Ψi2 is greater than Ωi2; otherwise, we abort phase P2_IL for the reasons just mentioned. Fig. 9 illustrates the results of this section.
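The first stage of P2_PCL, finding a dark coarse iris disc and cropping the sub-image Ii2(x, y), can be sketched as below; the pupil extraction and validation inside Ii2 then reuse the earlier P1_PCL sketches. The names and accumulator layout are our assumptions.

```python
import numpy as np

def coarse_iris_region(I_pre, acc_iris, radii):
    """P2_PCL first stage (sketch): scan iris CHT peaks for a dark disc,
    then crop the sub-image Ii2 around it."""
    H, W = I_pre.shape
    yy, xx = np.mgrid[0:H, 0:W]
    acc = acc_iris.copy()
    Omega_i2 = 0.3 * acc.max()                       # Eq. (4)
    mu_pre = I_pre.mean()
    while acc.max() > Omega_i2:
        yci, xci, k = np.unravel_index(acc.argmax(), acc.shape)
        rci = int(radii[k])
        disc = (xx - xci) ** 2 + (yy - yci) ** 2 <= rci ** 2
        if I_pre[disc].mean() < mu_pre:              # dark disc: coarse iris
            x0, y0 = max(xci - rci, 0), max(yci - rci, 0)
            Ii2 = I_pre[y0:yci + rci, x0:xci + rci]  # side length <= 2*rci
            return (xci, yci, rci), Ii2
        acc[yci, xci, k] = 0                         # suppress peak, retry
    return None                                      # abort P2_IL
```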

2.3.2. Phase-2 iris circle localization (P2_ICL)

After extracting the pupil circle (xp, yp, rp) within a coarse iris region in the preprocessed eye image Ipre(x, y) successfully, the next step is to localize the iris circle (xi, yi, ri). To do so, we reuse the procedure of module P1_ICL to localize the iris circle in Ipre(x, y). However, this time we do not compute a new CHT accumulator for the iris circle; instead, we reuse hi2. This greatly speeds up the iris localization process, because the Hough transform is a computationally expensive algorithm [10].

To start with, we extract a peak Ψi2 and its corresponding location (xi, yi, ri) from hi2. Next, we pass the parameter (xi, yi, ri) through the test given in Eq. (7). Finally, if that test is passed successfully, then we consider (xi, yi, ri) to be the correct iris circle parameter. Otherwise, we repeat this process for the next peak from hi2, provided Ψi2 is greater than the lower CHT threshold Ωi2 (set by Eq. (4)); otherwise, we abort P2_ICL for the reasons mentioned in the above tasks.


Fig. 9. Results of P2_PCL: (a) Coarse location (xci, yci, rci) of the iris region marked in the preprocessed eye image Ipre(x, y). (b) Sub-image Ii2(x, y) centered at (xci, yci) and having each side equal to 2rci. (c) Pupil circle (xp, yp, rp) marked in Ii2(x, y). (d) Pupil circle marked in the gray eye image Igray(x, y) with (xp, yp, rp). The white dots in (a) and (b) represent the center of the coarse iris region; in (c), the white dot represents the center of the pupil region. The eye image is taken from MMU (new) v2.0 [39].


2.4. Fitting non-circular contours (FNC)

Either phase P1_IL or P2_IL localizes the iris contours with a circle approximation. However, in reality, these boundaries are neither circular nor elliptical [9,16,28]; therefore, they may affect the overall system accuracy if not localized with reasonable fidelity. Herein, we use an effective strategy based on radial gradients and active contours (in terms of a Fourier series [9,16]) to regularize the iris inner and outer contours; a similar approach is also used in [9,16]. We regularize the iris inner contour in the preprocessed eye image Ipre(x, y) by using the pupil circle parameter (xp, yp, rp) as follows:

First, consider a sub-annular image Iann(x, y) in Ipre(x, y), which is centered at (xp, yp) and has its inner and outer circular boundaries located at (rp − ε) and (rp + ε) respectively (see Fig. 10(a)). Here, ε is set empirically to 5, which results in an 11 pixel width for Iann(x, y). Next, assume N radial segments in Iann(x, y), each having a length of (2ε + 1) pixels and oriented at an angle θm = (2πm)/N, for m = 0, 1, 2, 3, . . . , N − 1, where N is set experimentally to 2πrp. Then, use the following parametric equations to collect, into an array Ag, the gray level values of the pixels located on each radial segment:

$$A_g(m, t) = I_{ann}(x_t, y_t), \quad m = 0, 1, \ldots, N - 1, \ t = 0, 1, \ldots, 2\varepsilon,$$

where

$$x_t = x_p + r_t \cos\theta_m, \quad y_t = y_p + r_t \sin\theta_m, \quad r_t = (r_p - \varepsilon) + t.$$

After that, we use a first-order difference equation [30] to compute a gradient array Δg:

$$\Delta_g(m, t) = A_g(m + 1, t + 1) - A_g(m, t), \quad m = 0, 1, \ldots, N - 1, \ t = 0, 1, \ldots, 2\varepsilon.$$

Following that, we blur Δg with a Gaussian filter Gf (size [3 × 3], σ = 3) [6,30,34]:

$$B_G(m, t) = \Delta_g(m, t) * G_f(m, t), \quad m = 0, 1, \ldots, N - 1, \ t = 0, 1, \ldots, 2\varepsilon,$$

where the symbol * represents the convolution operation [30], BG is a 2-D array containing the blurred values of the radial gradients, and

$$G_f(m, t) = \exp\left(-\left(m^2 + t^2\right)/2\sigma^2\right).$$

Moreover, we extract into an array D the radial distances, with respect to (xp, yp), of the gradient maxima around the iris inner contour in Iann(x, y):

$$D(m) = r_{t_o}, \quad m = 0, 1, \ldots, N - 1,$$

where to is the value of t ∈ {0, 1, . . . , 2ε} for which BG(m, to) is maximum. We also pass D through a 1-D median filter, with window size [1 × 5], to smooth any rapid transitions in the radial distances; the resultant array is r̂.

Finally, we use a Fourier series method (active contours [9,16]) to fine-tune the radial distances in r̂ and obtain a more regularized, closed contour for the iris inner boundary. To proceed, we use the following expression to obtain M discrete Fourier series coefficients, where M is set experimentally to 20 for optimum fidelity of the iris inner contour [9]:

$$c_k = \sum_{m=0}^{N-1} \hat{r}_m \exp(-2\pi i mk/N), \quad k = 0, 1, \ldots, M - 1.$$

Next, we use these coefficients in the following expression to get a more refined approximation r̃ of the iris inner boundary, whose resolution is controlled by M:

$$\tilde{r}_m = \frac{1}{N} \sum_{k=0}^{M-1} c_k \exp(2\pi i mk/N), \quad m = 0, 1, \ldots, N - 1.$$

Lastly, we compute the x- and y-coordinates of the iris inner boundary points as follows:

$$x_m = x_p + \tilde{r}_m \cos\theta_m, \quad y_m = y_p + \tilde{r}_m \sin\theta_m, \quad m = 0, 1, \ldots, N - 1,$$

where θm = 2πm/N and N = 2πrp, with rp the pupil circle radius, as mentioned earlier.

We use the same procedure to regularize the iris outer circular contour, with M = 6 set empirically for optimal fidelity of the iris outer boundary, as in [9]. Fig. 10 illustrates the regularization of the iris inner and outer circular boundaries into non-circular ones. It is evident from Figs. 10(b) and (d) that the iris inner and outer contours are now properly regularized.
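Putting the FNC steps together, a compact sketch (Python with NumPy, OpenCV, and SciPy) follows. The nearest-neighbour sampling, the purely radial first-order difference (the paper's difference is taken across both m and t), and the mapping of each gradient to the outer sample of its difference are our simplifications; the same function with M = 6 would serve the outer contour.

```python
import cv2
import numpy as np
from scipy.signal import medfilt

def regularize_inner_contour(I_pre, xp, yp, rp, eps=5, M=20):
    """FNC sketch: radial gradient maxima plus Fourier-series smoothing."""
    N = int(round(2 * np.pi * rp))            # one segment per boundary pixel
    thetas = 2 * np.pi * np.arange(N) / N
    rs = (rp - eps) + np.arange(2 * eps + 1)  # r_t = (rp - eps) + t
    # A_g(m, t): gray levels sampled along each radial segment
    xs = np.clip(np.round(xp + np.outer(np.cos(thetas), rs)).astype(int),
                 0, I_pre.shape[1] - 1)
    ys = np.clip(np.round(yp + np.outer(np.sin(thetas), rs)).astype(int),
                 0, I_pre.shape[0] - 1)
    Ag = I_pre[ys, xs].astype(float)          # shape (N, 2*eps + 1)
    grad = np.diff(Ag, axis=1)                # first-order radial difference
    grad = cv2.GaussianBlur(grad, (3, 3), 3)  # blurred gradients B_G, sigma=3
    D = rs[1:][np.argmax(grad, axis=1)]       # radial distance of maxima
    r_hat = medfilt(D.astype(float), 5)       # 1-D median filter, [1 x 5]
    # keep only the first M Fourier coefficients of the radius function
    m, k = np.arange(N), np.arange(M)
    c = r_hat @ np.exp(-2j * np.pi * np.outer(m, k) / N)        # c_k, k < M
    r_reg = (np.exp(2j * np.pi * np.outer(m, k) / N) @ c).real / N
    # boundary points of the regularized, closed inner contour
    return xp + r_reg * np.cos(thetas), yp + r_reg * np.sin(thetas)
```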

3. Experimental results and discussion

We use MATLAB version 7.1, installed on a PC with a 2.33 GHz CPU and 1 GB RAM, to simulate our proposed algorithm on a set of standard iris databases (see Table 1). These databases were acquired with NIR illumination sources, except UBIRIS v1.0 [15], which was acquired with a VW light source to simulate less constrained systems. Collectively, they offer non-ideal factors such as specular reflections, low contrast, non-uniform illumination, off-axis and off-angle eye images; occlusions such as eyelashes, eyelids, hair, glasses, and contact lenses; and synthesized iris images.


Fig. 10. Results of module FNC: (a) Eye image showing a sample structure of the sub-annular image Iann(x, y); here, the iris inner boundary is marked with the pupil circle (xp, yp, rp). (b) Corresponding non-circular boundary of the iris in (a). (c) Iris outer boundary marked with the iris circle (xi, yi, ri). (d) Corresponding non-circular boundary of the iris in (c). Eye images are taken from CASIA v1.0 [14] and MMU (new) v2.0 [39] respectively.

Table 1
Standard iris databases used in experimentation.

Iris database              | Total images | Images used in experimentation | Image resolution | Illumination type
MMU v1.0 [39]              | 450          | All                            | 320 × 240        | NIR
MMU (new) v2.0 [39]        | 995          | All                            | 320 × 240        | NIR
CASIA v1.0 [14]            | 756          | All                            | 280 × 320        | NIR
CASIA v3.0 (Interval) [14] | 2639         | 1000 (randomly selected)       | 320 × 280        | NIR
CASIA v3.0 (Twins) [14]    | 3183         | 1000 (randomly selected)       | 640 × 480        | NIR
CASIA v3.0 (Lamp) [14]     | 16,212       | 1000 (randomly selected)       | 640 × 480        | NIR
CASIA v4.0 (Thousand) [14] | 20,000       | 1000 (randomly selected)       | 640 × 480        | NIR
CASIA v4.0 (Syn) [14]      | 10,000       | 1000 (randomly selected)       | 640 × 480        | NIR
UBIRIS v1.0 [15]           | 1877         | 1000 (randomly selected)       | 220 × 150        | VW
IITD v1.0 [38]             | 1120         | All                            | 320 × 240        | NIR

Refer to Table 1 for the number of images involved in our experimentation and other details, such as image resolution, illumination source, and the total number of images per database. The following text details these databases:

MMU iris databases [39]: MMU v1.0 contains 450 iris images, collected from 45 subjects using an LG IrisAccess® 2200 camera; each subject contributed 10 images. MMU (new) v2.0 contains 995 iris images captured with a Panasonic BM-ET100US Authenticam. These images were contributed by 100 volunteers of different ages and nationalities from Asia, the Middle East, Africa, and Europe. Each volunteer contributed 10 iris images, i.e. 5 images per eye; 5 left-eye iris images were excluded from this database due to cataract disease. For each database (i.e. MMU v1.0 and MMU (new) v2.0), the image resolution is 320 × 240 and the images are stored in the BMP format. Both databases collectively offer issues such as specular reflections, off-axis and off-angle images, blur, defocus, non-uniform illumination; and occlusions such as eyelids, eyelashes, glasses, contact lenses, and hair.

CASIA iris databases [14]: CASIA v1.0 includes 756 iris images from 108 eyes, captured using the CASIA close-up iris camera. For each eye, three samples were collected in the first session and four in the second session. In each eye image, the pupil region was manually edited, i.e. the specular reflections were suppressed. The CASIA v3.0 (Interval) iris database includes 2630 iris images collected from 249 subjects using the same CASIA close-up iris camera. Most of the images were acquired in two sessions from graduate students of CASIA. In this database, there are eight white dots (i.e. specular reflections) in a circular orientation in the pupil region. All images of CASIA v1.0 and CASIA v3.0 (Interval) are stored in the BMP format with resolution 320 × 280. CASIA v3.0 (Twins) includes 3118 iris images collected from 200 subjects (100 pairs) using an OKI IRISPASS-h in one session; most of the subjects were children participating in the Beijing Twins Festival. Similarly, CASIA v3.0 (Lamp) contains 16,212 iris images collected from 411 subjects in one session using the same OKI IRISPASS-h device.


Table 2
Constants and parameters.

Iris database              | Pupil circle radii range (rp min ∼ rp max) | Iris circle radii range (ri min ∼ ri max) | Biasing constant ω | Biasing weight ϕ | Depth-size β
MMU v1.0 [39]              | 0.05kw ∼ 0.20kw | 0.15kw ∼ 0.30kw | 10 | 14 | 4
MMU (new) v2.0 [39]        | 0.05kw ∼ 0.20kw | 0.15kw ∼ 0.30kw | 10 | 14 | 5
CASIA v1.0 [14]            | 0.05kw ∼ 0.33kw | 0.15kw ∼ 0.45kw | 10 | 10 | 4
CASIA v3.0 (Interval) [14] | 0.05kw ∼ 0.33kw | 0.15kw ∼ 0.45kw | 40 | 50 | 5
CASIA v3.0 (Twins) [14]    | 0.05kw ∼ 0.20kw | 0.15kw ∼ 0.40kw | 10 | 10 | 4
CASIA v3.0 (Lamp) [14]     | 0.05kw ∼ 0.20kw | 0.15kw ∼ 0.40kw | 10 | 10 | 4
CASIA v4.0 (Thousand) [14] | 0.05kw ∼ 0.20kw | 0.15kw ∼ 0.40kw | 10 | 14 | 4
CASIA v4.0 (Syn) [14]      | 0.05kw ∼ 0.20kw | 0.15kw ∼ 0.40kw | 10 | 14 | 4
IITD v1.0 [38]             | 0.05kw ∼ 0.25kw | 0.15kw ∼ 0.45kw | 10 | 16 | 5
UBIRIS v1.0 [15]           | 0.05kw ∼ 0.20kw | 0.15kw ∼ 0.45kw | 20 | 8  | 2

Table 3
Temporal results of different modules of the proposed algorithm (average time in seconds).

Iris database              | IP    | P1_PCL | P1_ICL | P2_PCL | P2_ICL | FNC   | P1_IL | P2_IL
MMU v1.0 [39]              | 0.030 | 2.000  | 0.321  | 1.200  | 0.120  | 0.280 | 2.651 | 1.640
MMU (new) v2.0 [39]        | 0.056 | 2.100  | 0.250  | 1.500  | 0.124  | 0.281 | 2.697 | 1.970
CASIA v1.0 [14]            | 0.060 | 4.000  | 2.500  | 4.450  | 0.200  | 0.550 | 7.116 | 5.264
CASIA v3.0 (Interval) [14] | 0.075 | 4.800  | 2.300  | 4.000  | 0.150  | 0.550 | 7.745 | 4.780
CASIA v3.0 (Twins) [14]    | 0.140 | 2.123  | 1.000  | 3.200  | 0.200  | 0.600 | 3.893 | 4.200
CASIA v3.0 (Lamp) [14]     | 0.134 | 4.111  | 1.544  | 3.120  | 0.250  | 0.560 | 6.359 | 4.064
CASIA v4.0 (Thousand) [14] | 0.154 | 3.056  | 1.111  | 4.000  | 0.320  | 0.550 | 4.881 | 5.030
CASIA v4.0 (Syn) [14]      | 0.152 | 3.321  | 1.445  | 4.320  | 0.222  | 0.666 | 5.594 | 5.370
UBIRIS v1.0 [15]           | 0.080 | 0.460  | 0.330  | 0.800  | 0.034  | 0.230 | 1.134 | 1.150
IITD v1.0 [38]             | 0.056 | 5.000  | 2.801  | 4.560  | 0.143  | 0.554 | 8.451 | 5.314

Herein, a lamp was turned on/off close to the subject to introduce more intra-class variations. Most participants were CASIA graduate students. All images of CASIA v3.0 (Lamp and Twins) are stored in the JPEG format with resolution 640 × 480. Collectively, these databases offer issues such as specular reflections, off-axis eye images, blur, low contrast, non-uniform illumination; and occlusions such as eyelids, eyelashes, and hair.

CASIA v4.0 iris databases [14]: CASIA v4.0 (Thousand) includes 20,000 iris images collected from 1000 subjects in one session using the Irisking IKEMB-100 device. The participants were students, workers, and farmers with a wide distribution of ages. A lamp was turned on/off to introduce reflections in the iris images. Similarly, CASIA-Iris-Syn contains 10,000 synthesized iris images of 1000 classes; these were developed from a subset of CASIA v1.0. All images of these two databases are stored in the JPEG format with resolution 640 × 480. Non-uniform illumination, blur, synthesized images, highlights, specular reflections, and eyelash, eyelid, and hair occlusions are common problems in these two databases.

UBIRIS v1.0 [15]: It includes 1877 images collected from 241 persons in two sessions using a Nikon E5700 camera with software E5700 v1.0. This camera uses a VW illumination source. All images are stored in the JPEG format with resolution 200 × 150. This database is specifically designed to simulate less constrained systems. Issues such as reflections, low contrast, eyelids, and eyelashes are common in this database.

IITD v1.0 [38]: It includes 1120 iris images collected from 224 subjects, mostly students and staff at IIT Delhi, New Delhi, India. All subjects in the database are in the age group 14–55 years, comprising 176 males and 48 females. Images were acquired using a JIRIS JPC1000 digital CMOS camera and stored in the BMP format with resolution 320 × 240. This database offers issues such as defocus, specular reflections, off-axis eye images, contact lenses, and occlusions such as eyelashes, eyelids, and eyebrows.

For optimal performance, the proposed algorithm uses some tuning parameters, as shown in Table 2. These include the pupil circle radii range (rp min ∼ rp max), the iris circle radii range (ri min ∼ ri max), the biasing constant ω, the biasing weight ϕ, and the depth-size β. Among these parameters, the radii ranges play a crucial role, since the computational time of the CHT depends on them [10]: a larger radii range means more time consumption, and vice versa. Therefore, for optimal performance, we use the specific windows for these radii ranges shown in Table 2. For generalization, let kw represent the width of the input eye image. We then use the following scheme to obtain the generalized ranges:

Λ = 0.5(kw),

(ri min ∼ ri max) = (0.10Λ ∼ 0.70Λ),

(rp min ∼ rp max) = (5 ∼ 0.33ri max).

In the above expressions, the iris radii range is set from 10% to 70% of the half-width Λ of the input eye image. The literature reveals that the pupil circle radius has roughly a 1:3 ratio with the iris circle radius [10,16,18,20,25,46]; therefore, we set the pupil circle radii range from 5 pixels to 33% of ri max. The other parameters, ω, ϕ, and β, were set experimentally; we did not use any systematic scheme to extract their values, because for a specific iris recognition system the image acquisition device always remains the same, which implies that the image statistics do not change significantly for a particular setup.
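To make this scaling concrete, the following MATLAB sketch (a minimal illustration with hypothetical names, not the authors' code) computes the generalized radii ranges from the width of an input eye image:

    % Generalized radii ranges from the eye-image width kw (pixels);
    % radii_ranges is a hypothetical helper name.
    function [rpRange, riRange] = radii_ranges(kw)
        lambda  = 0.5 * kw;                        % half-width of the eye image
        riRange = [0.10 * lambda, 0.70 * lambda];  % iris circle radii range
        rpRange = [5, 0.33 * riRange(2)];          % pupil radii: 5 px to 0.33*ri_max
    end

For a 640 × 480 image, for instance, this gives an iris radii range of 32–224 pixels and a pupil radii range of 5 to about 74 pixels.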

In addition, we perform a temporal analysis of the basic modules of the proposed algorithm (see Table 3). The result for each module is computed as the average time, in seconds, over 400 randomly selected iris images from each iris database. The average time for phase P2_IL is computed with phase P1_IL disabled; P2_IL therefore includes the timings of the sub-modules IP, P2_PCL, P2_ICL, FNC, and a few connecting code lines. In a real scenario, however, it may also include some timing from phase P1_IL, depending on the step at which P1_IL fails. For the temporal analysis, we use the MATLAB built-in profile function [41] in real-timer mode, which supports debugging and optimization of MATLAB code files by tracking their execution time.


Table 4
Comparison with other methods for CASIA iris databases (results for comparison are taken from the published work).

Method             Accuracy (%)
                   CASIA v1.0   CASIA v3.0 (Lamp)   CASIA v3.0 (Interval)
Basit [50]         98.94        –                   99.25
Khan et al. [11]   100.00       –                   –
Zuniga [53]        94.84        –                   –
Koh et al. [43]    –            99.00               –
Daugman [43]       –            96.00               –
Proposed           100.00       98.00               99.50

Table 5
Comparison with other methods for MMU and UBIRIS iris databases (results for comparison are taken from the published work).

Method              Accuracy (%)
                    MMU v1.0   MMU (new) v2.0   UBIRIS v1.0 (only session-1 images)
Teo et al. [54]     –          98.85            –
Basit [50]          98.10      –                –
Daugman new [11]    98.23      –                –
Khan et al. [11]    98.22      –                –
Li et al. [55]      –          –                98.13
Wildes [55]         –          –                86.64
Proposed            100.00     99.90            93.50

For example, for every function in a MATLAB file, the profiler records information about execution time, number of calls, parent and child functions, line hit-counts, and per-line execution time [41].
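As an illustration, the profiling loop can be set up as in the following sketch (our own minimal example; localize_iris and the file naming scheme are hypothetical placeholders):

    % Profile 400 randomly selected images in real-timer (wall-clock) mode;
    % localize_iris is a hypothetical stand-in for the top-level routine.
    profile on -timer real
    for k = 1:400
        eyeImg = imread(sprintf('eye_%03d.jpg', k));  % assumed file naming
        localize_iris(eyeImg);
    end
    stats = profile('info');   % per-function times, call counts, line hit-counts
    profile off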

For the mentioned number of eye images, we also evaluated the average time per eye image of the proposed algorithm. It takes 2.67, 2.70, 7.20, 7.75, 3.91, 6.40, 4.93, 5.61, 1.14, and 8.52 seconds for MMU v1.0, MMU (new) v2.0, CASIA v1.0, CASIA v3.0 (Interval), CASIA v3.0 (Twins), CASIA v3.0 (Lamp), CASIA v4.0 (Thousand), CASIA v4.0 (Syn), UBIRIS v1.0, and IITD v1.0, respectively. The average time per eye image over all the databases is thus about 5 seconds, which is acceptable for a research prototype for the following reasons. First, as mentioned earlier, we implemented the proposed algorithm in MATLAB, mostly using built-in functions; these functions carry implicit overhead because they are generic, so a MATLAB program generally takes longer than the same algorithm written in a low-level language such as C/C++. Second, in the mentioned experimental setup, MATLAB runs on a general purpose operating system (GPO [47]) and on a generic microprocessor. Therefore, for real-time applications, we recommend porting the proposed technique onto reconfigurable media (e.g., CPLD, FPLD, FPGA, SoC, etc. [48]), which offer much higher throughput than generic processors; such hardware could localize an iris in a fraction of a second using our proposed algorithm.

As the circular iris contours, localized either in P1_IL or P2_IL, are regularized in module FNC, the final iris contours in the sclera region are close to the real boundaries. To obtain accuracy results for each database, we use subjective observation as the decision metric, an approach also used in [6,11,20,35,49–52]: if the iris inner and outer boundaries are found within a confidence of 1 pixel in the non-occluded part of the iris, the localization is considered accurate; otherwise, it is rejected. As stated at the very beginning, our focus is on precise localization of the iris inner and outer contours only, as in [11,28,35–37]; the occluded part of the iris would be removed as noise [10] before the subsequent stages of the iris recognition process.

Tables 4 and 5 show comparison results, in percent, of the proposed algorithm against some contemporary techniques for the

Table 6
Localization results for IITD and CASIA iris databases.

Method     Accuracy (%)
           IITD v1.0   CASIA v3.0 (Twins)   CASIA v4.0 (Thousand)   CASIA v4.0 (Syn)
Proposed   99.40       97.00                99.50                   100.00

CASIA, MMU, and UBIRIS databases. Similarly, Table 6 shows accuracy results of the proposed algorithm on some databases without comparison, because the results available [10] for them are either based on recognition accuracy or do not comply with the decision criterion just mentioned. The methods used for comparison localize the iris with either circular or non-circular descriptors; however, we consider only those methods whose authors claim the iris inner and outer boundaries to be within a confidence of at most 2 pixels. The comparison results show that the performance of our proposed algorithm is satisfactory.

Figs. 11–14 show some correct iris localization results for the MMU, CASIA, IITD, and UBIRIS iris databases, respectively. It is evident that the proposed algorithm is tolerant to specular reflections, off-axis eye images, non-uniform illumination, and occlusions such as glasses, contact lenses, hair, eyelashes, and eyelids. However, we observed experimentally that it does not properly localize an iris in two situations: first, a thick strip of eyelashes that severely occludes the pupil region; and second, low contrast between the pupil and iris regions (such as dark irises [23]), as shown in Fig. 15.

3.1. Discussion

Our proposed technique employs a two-phase strategy (P1_IL and P2_IL) to extract the pupil and iris circles in the preprocessed eye image. Each phase uses a combination of the CHT and IGS for robust localization of these circles. However, we observed experimentally that specular reflections in an input eye image can mislead this technique if they are not properly suppressed, for two reasons: (i) specular reflections contribute spurious edges to the edge image; and (ii) because they have relatively high gray level values, they inflate the average gray level intensity of the region of interest (i.e., the iris). Therefore, for optimal performance, we suppress these specular reflections prior to the iris localization process. The resulting preprocessed eye image may also contain some suppressed small white holes outside the iris region. Nevertheless, this does not harm our technique, because the combination of CHT and IGS is two-fold in nature: for a coarse localization of the pupil or iris region, two conditions must both be satisfied. First, a peak corresponding to the pupil/iris circle must be present in the CHT; second, the gray level intensity of the region corresponding to this peak must be relatively low in the preprocessed eye image. In this way, our strategy is not misled by other low intensity regions such as those mentioned above.
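A minimal MATLAB sketch of this two-fold check is given below; eyeImg is assumed to be a double grayscale image (e.g., via im2double), and edgeImg, rRange, peakThresh, and findPeakCHT are hypothetical names, not the paper's actual code:

    % Two-fold acceptance test for a coarse pupil/iris candidate.
    [xc, yc, r, peak] = findPeakCHT(edgeImg, rRange);      % strongest CHT peak
    [X, Y] = meshgrid(1:size(eyeImg, 2), 1:size(eyeImg, 1));
    inside = (X - xc).^2 + (Y - yc).^2 <= r^2;             % disk enclosed by circle
    isCandidate = (peak >= peakThresh) && ...              % condition 1: CHT peak
                  (mean(eyeImg(inside)) < mean(eyeImg(:))); % condition 2: dark region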

In addition, if either phase (i.e., P1_IL or P2_IL) scans the CHT accumulator down to its lower threshold limit (see Eq. (4)) without localizing a pupil or iris circle, that phase is aborted. This may happen if an eye image contains no eye at all, or if its quality is poor (e.g., a closed eye). We also observed experimentally that, in rare cases, the pupil circle localized by P1_IL or P2_IL is not a close estimate of the pupil region: it may exclude part of the actual pupil region and/or include part of the iris region. We resolve this issue in the pupil circle extraction task, which extracts a fine circular estimate of the pupil region; this is an advantage over contemporary techniques that rely on CHT localization results alone.


Fig. 11. (a)–(h) Some correct iris localization results for MMU v1.0 and MMU (new) v2.0 [39].

Fig. 12. (a)–(p) Some correct iris localization results for CASIA v1.0, CASIA v3.0 (Interval, Lamp, Twins), and CASIA v4.0 (Thousand, Syn) [14].


Finally, we use module FNC to regularize the iris contours in a local region using the radial gradients and active contours based on the Fourier series. If the upper and/or lower eyelids do not occlude the iris region, the iris inner and outer contours are regularized completely; otherwise, the non-occluded parts of the iris contours are regularized and the occluded parts retain a circular estimate. Noise such as eyelashes and/or eyelids may still be present in the final localized iris, because we do not propose any scheme for noise removal; numerous methods for removing this type of noise exist in the literature [10].
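To illustrate the Fourier-series part of this regularization, the following sketch (our own simplification, not the exact FNC implementation) fits a low-order Fourier expansion r(θ) = a0 + Σ(am cos mθ + bm sin mθ) to boundary radius samples detected along radial gradients:

    % Least-squares fit of a low-order Fourier series r(theta) to boundary
    % samples; theta and r are N-by-1 vectors of angles (radians) and radii
    % obtained from radial gradients. M harmonics control the smoothness.
    M = 4;                                 % assumed harmonic order
    A = ones(numel(theta), 2*M + 1);       % design matrix: [1, cos, sin, ...]
    for m = 1:M
        A(:, 2*m)     = cos(m * theta);
        A(:, 2*m + 1) = sin(m * theta);
    end
    c    = A \ r;                          % Fourier coefficients
    rFit = A * c;                          % regularized boundary radii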

4. Conclusion

To conclude this work, we summarize the following key points:


Fig. 13. (a)–(d) Some correct iris localization results for IITD v1.0 [38].

Fig. 14. (a)–(h) Some correct iris localization results for UBIRIS v1.0 [15].

Fig. 15. Some incorrect iris localization results for: (a) MMU v2.0 [39], (b) CASIA v4.0 (Thousand) [14], and (c) and (d) UBIRIS v1.0 [15] iris databases.

• Specular reflections defeat most contemporary iris localization algorithms for the following possible reasons: they introduce local maxima that can trap techniques based on the integro-differential operator, and they contribute spurious edges that can mislead techniques based on the Hough transform. To resolve this issue, we propose an effective scheme to suppress these reflections in the input eye image.

• Robust localization of an iris region in noisy eye images is always a challenging task. To address it, we propose a two-phase strategy that uses a combination of the circular Hough transform and image gray level statistics to locate a potential coarse region of the iris or pupil in the input eye image.

• Because the circular Hough transform is tolerant to broken object contours in an image, it may not accurately localize the pupil region in non-ideal eye images. To resolve this issue, we propose a new technique based on adaptive thresholding, image gray level statistics, and circle geometry to extract a close circular estimate of the pupil boundary. In addition, we propose an occlusion transform to validate the location of the pupil region.

• We also regularize the circular iris inner and outer contours by using a combination of radial gradients and active contours (Fourier series) in a local region of the eye image.

• Localization of an iris in off-axis eye images is still an issue for most contemporary algorithms; our proposed algorithm resolves it owing to the use of the Hough transform and image gray level statistics for this purpose.

• In addition, the proposed algorithm is tolerant to issues such as non-uniform illumination, glasses, contact lenses, hair, eyelashes, and eyelids.

Experimental results, obtained on a set of different standard iris databases, indicate that the proposed technique is suitable for less constrained iris recognition systems, specifically for systems using near infrared illumination sources.

Acknowledgments

The authors gratefully acknowledge COMSATS Institute of Information Technology, Islamabad campus, Pakistan, for its in-house PhD program. We also thank the following for granting free access to their standard iris databases: Multimedia University, Malaysia; Indian Institute of Technology Delhi; the SOCIA Lab, Department of Computer Science, University of Beira Interior; and the Chinese Academy of Sciences.


References

[1] A. Basit, M.Y. Javed, Localization of iris in gray scale images using intensity gradient, Opt. Lasers Eng. 45 (2007) 1107–1114.
[2] F.N. Sibai, H.I. Hosani, R.M. Naqbi, S. Dhanhani, S. Shehhi, Iris recognition using artificial neural networks, Expert Syst. Appl. 38 (2011) 5940–5946.
[3] W. Al-Mayyan, H.S. Own, H. Zedan, Rough set approach to online signature identification, Digital Signal Process. 21 (2011) 477–485.
[4] Y. Wen, L. He, P. Shi, Face recognition using difference vector plus KPCA, Digital Signal Process. 22 (2012) 140–146.
[5] D.Y. Huang, C.J. Lin, W.C. Hu, Learning-based face detection by adaptive switching of skin color models and AdaBoost under varying illumination, J. Inf. Hiding Multimedia Signal Process. 2 (2011) 204–216.
[6] J.G. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Trans. Pattern Anal. Mach. Intell. 15 (1993) 1148–1161.
[7] R.P. Wildes, Iris recognition: an emerging biometric technology, Proc. IEEE 85 (1997) 1348–1363.
[8] C.H. Chen, C.T. Chu, High performance iris recognition based on 1-D circular feature extraction and PSO–PNN classifier, Expert Syst. Appl. 36 (2009) 10351–10356.
[9] J. Daugman, New methods in iris recognition, IEEE Trans. Syst. Man Cybern. Part B 37 (2007) 1167–1175.
[10] K.W. Bowyer, K. Hollingsworth, P.J. Flynn, Image understanding for iris biometrics: a survey, Comput. Vis. Image Underst. 110 (2008) 281–307.
[11] T.M. Khan, M.A. Khan, S.A. Malik, S.A. Khan, T. Bashir, A.H. Dar, Automatic localization of pupil using eccentricity and iris using gradient based method, Opt. Lasers Eng. 49 (2011) 177–187.
[12] H. Proença, L.A. Alexandre, Introduction to the special issue on the segmentation of visible wavelength iris images captured at-a-distance and on-the-move, Image Vis. Comput. 28 (2010) 213–214.
[13] H. Proença, S. Filipe, R. Santos, J. Oliveira, L.A. Alexandre, The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance, IEEE Trans. Pattern Anal. Mach. Intell. 32 (2010) 1529–1535.
[14] CASIA iris database, http://www.idealtest.org/findTotalDbByMode.do?mode=Iris, last accessed April 22, 2012.
[15] UBIRIS iris database, http://iris.di.ubi.pt/, last accessed April 22, 2012.
[16] R.D. Labati, F. Scotti, Noisy iris segmentation with boundary regularization and reflections removal, Image Vis. Comput. 28 (2010) 270–277.
[17] D.S. Jeong, J.W. Hwang, B.J. Kang, K.R. Park, C.S. Won, D.K. Park, J. Kim, A new iris segmentation method for non-ideal iris images, Image Vis. Comput. 28 (2010) 254–260.
[18] P. Li, X. Liu, L. Xiao, Q. Song, Robust and accurate iris segmentation in very noisy iris images, Image Vis. Comput. 28 (2010) 246–253.
[19] J.R. Matey, R. Broussard, L. Kennell, Iris image segmentation and sub-optimal images, Image Vis. Comput. 28 (2010) 215–222.
[20] S. Pundlik, D. Woodard, S. Birchfield, Iris segmentation in non-ideal images using graph cuts, Image Vis. Comput. 28 (2010) 1671–1681.
[21] W. Sankowski, K. Grabowski, M. Napieralska, M. Zubert, A. Napieralski, Reliable algorithm for iris segmentation in eye image, Image Vis. Comput. 28 (2010) 231–237.
[22] P.d. Almeida, A knowledge-based approach to the iris segmentation problem, Image Vis. Comput. 28 (2010) 238–245.
[23] Y. Du, R.W. Ives, D.M. Etter, T.B. Welch, Use of one-dimensional iris signatures to rank iris pattern similarities, Opt. Eng. 45 (2006) 037201.
[24] T.C. Lin, H.C. Huang, B.Y. Liao, J.S. Pan, An optimized approach on applying genetic algorithm to adaptive cluster validity index, Int. J. Comput. Sci. Eng. Syst. 1 (2007) 253–257.
[25] A. Ross, S. Shah, Segmenting non-ideal irises using geodesic active contours, in: 2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference, Baltimore, MD, 19–21 September 2006, pp. 1–6.
[26] A.d.S. Sierra, J.G. Casanova, C.S. Avila, V.J. Vera, Iris segmentation based on fuzzy mathematical morphology, neural networks and ontologies, in: 43rd Annual 2009 International Carnahan Conference on Security Technology, Zurich, 5–8 October 2009, pp. 355–360.
[27] P. Puranik, P. Bajaj, A. Abraham, P. Palsodkar, A. Deshmukh, Human perception-based color image segmentation using comprehensive learning particle swarm optimization, J. Inf. Hiding Multimedia Signal Process. 2 (2011) 227–235.
[28] K. Nguyen, C. Fookes, S. Sridharan, Fusing shrinking and expanding active contour models for robust iris segmentation, in: Proc. 10th Int. Conf. Information Sciences, Signal Processing and Their Applications (ISSPA), Kuala Lumpur, Malaysia, 10–13 May 2010, pp. 185–188.
[29] T. Tan, Z. He, Z. Sun, Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition, Image Vis. Comput. 28 (2010) 223–230.
[30] R.C. Gonzalez, R.E. Woods, Digital Image Processing, second edition, Prentice Hall, Upper Saddle River, NJ, 2001.
[31] B.J. Kang, K.R. Park, J.H. Yoo, K. Moon, Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images, Opt. Eng. 49 (2010) 067001.
[32] Y. Chen, M. Adjouadi, C. Han, J. Wang, A. Barreto, N. Rishe, J. Andrian, A highly accurate and computationally efficient approach for unconstrained iris segmentation, Image Vis. Comput. 28 (2010) 261–269.
[33] X. Liu, K.W. Bowyer, P.J. Flynn, Experiments with an improved iris segmentation algorithm, in: 4th IEEE Workshop on Automatic Identification Advanced Technologies, Buffalo, New York, 17–18 October 2005, pp. 118–123.
[34] L. Masek, Recognition of human iris patterns for biometric identification, Thesis, School of Computer Science and Software Engineering, The University of Western Australia, 2003.
[35] J. Huang, X. You, Y.Y. Tang, L. Du, Y. Yuan, A novel iris segmentation using radial-suppression edge detection, Signal Process. 89 (2009) 2630–2643.
[36] N. Tajbakhsh, B.N. Araabi, H. Soltanian-Zadeh, Robust iris verification based on local and global variations, EURASIP J. Adv. Signal Process. 2010 (2010) 1–12.
[37] S. Venugopalan, M. Savvides, Unconstrained iris acquisition and recognition using COTS PTZ camera, EURASIP J. Adv. Signal Process. 2010 (2010) 1–20.
[38] IITD iris database, http://www.iitd.ac.in/, last accessed April 22, 2012.
[39] MMU iris databases, http://pesona.mmu.edu.my/~ccteo/, last accessed April 22, 2012.
[40] Z. He, T. Tan, Z. Sun, X. Qiu, Toward accurate and fast iris segmentation for iris biometrics, IEEE Trans. Pattern Anal. Mach. Intell. 31 (2009) 1670–1684.
[41] Mathworks, http://www.mathworks.com/, last accessed April 22, 2012.
[42] Stretchlim, http://www.mathworks.com/help/toolbox/images/ref/stretchlim.html, last accessed April 22, 2012.
[43] J. Koh, V. Govindaraju, V. Chaudhary, A robust iris localization method using an active contour model and Hough transform, in: 20th Int. Conf. Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010, pp. 2852–2856.
[44] C. Rathgeb, A. Uhl, Context-based biometric key generation for iris, IET Comput. Vis. 5 (2011) 389–397.
[45] N. Sudha, N.B. Puhan, H. Xia, X. Jiang, Iris recognition on edge maps, IET Comput. Vis. 3 (2009) 1–7.
[46] A. Abhyankar, L. Hornak, S. Schuckers, Off-angle iris recognition using bi-orthogonal wavelet network system, in: Proc. 4th IEEE Workshop on Automatic Identification Advanced Technologies, IEEE Computer Society, Buffalo, NY, 17–18 October 2005, pp. 239–244.
[47] General purpose operating system, http://www.engineersgarage.com/articles/rtos-real-time-operating-system, last accessed April 22, 2012.
[48] A. Dasu, S. Panchanathan, Reconfigurable media processing, in: Proc. Int. Conf. Information Technology: Coding and Computing, Las Vegas, NV, 2–4 April 2001, pp. 300–304.
[49] A. Basit, M.Y. Javed, S. Masood, Non-circular pupil localization in iris images, in: Proc. Int. Conf. Emerging Technologies, Rawalpindi, Pakistan, 18–19 October 2008.
[50] A. Basit, Iris localization using graylevel texture analysis and recognition using bit planes, PhD thesis, Department of Computer Engineering, College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Rawalpindi, Pakistan, 2009.
[51] Y. Du, C. Belcher, Z. Zhou, R. Ives, Feature correlation evaluation approach for iris feature quality measure, Signal Process. 90 (2010) 1176–1187.
[52] X. Ren, Z. Peng, Q. Zeng, C. Peng, J. Zhang, S. Wu, Y. Zeng, An improved method for Daugman's iris localization algorithm, Comput. Biol. Med. 38 (2008) 111–115.
[53] A.M.G. Zuniga, A fast and robust approach for iris segmentation, in: Proc. Peruvian 2nd Symp. Comput. Graph. Image Process., Arequipa, Peru, 7–28 December 2008, pp. 1–10.
[54] C.C. Teo, H.F. Neo, G.K.O. Michael, C. Tee, K.S. Sim, A robust iris segmentation with fuzzy supports, in: 17th Int. Conf. on Neural Information Processing (ICONIP 2010), Sydney, Australia, 22–25 November 2010, pp. 532–539.
[55] P. Li, X. Liu, An incremental method for accurate iris segmentation, in: 19th Int. Conf. on Pattern Recognition (ICPR 2008), Tampa, FL, 8–11 December 2008, pp. 1–4.

Mr. Farmanullah Jan received the MSc (Electronics) degree from the University of Peshawar, Khyber Pukhtoonkhwa, Pakistan, in 1998, and the MS (Computer Engineering) degree from the University of Engineering and Technology, Taxila, Pakistan, in 2009. Currently, he is pursuing the PhD degree at the Electrical Engineering Department, COMSATS Institute of Information Technology (CIIT), Islamabad, Pakistan. He worked as an Electronic Engineer at EDC, SIDB, Khyber Pukhtoonkhwa, Pakistan, for 5 years. In 2003, he joined CIIT as a lecturer and was promoted to Assistant Professor by 2009. His research interests include image processing, machine vision, VLSI, analog and digital design, and embedded systems.


Dr. Imran Usman received his BS (Software Engineering) degree from Foundation University, Islamabad, his MS (Computer Systems Engineering, 2006) from Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi, Pakistan, and his PhD from Pakistan Institute of Engineering and Applied Sciences in 2010. From 2002 to 2003, he worked at LMKR as a software developer. From 2003 to 2004, he served at Iqra University, Pakistan, as a lecturer. Currently, he is working as an assistant professor at the Department of Electrical Engineering, COMSATS Institute of Information Technology, Islamabad, Pakistan. His research interests include image processing, evolutionary algorithms, watermarking, and machine learning.

Dr. Shahrukh Agha received the BSc (Electronics Engineering) degree from the University of Engineering and Technology, Taxila, Pakistan, in 2001, and the MSc (Digital Communication Systems) and PhD degrees from Loughborough University, Loughborough, UK, in 2002 and 2006, respectively. His research interests include software and hardware techniques for accelerating the MPEG2 motion estimation process, real-time low-power VLSI architectures, SoC (system-on-chip) design of motion estimation for MPEG compression, video encoding, configurable processors, VLSI architectures of video encoders, digital signal processing, image processing, computer vision, and machine learning.