
CHAPTER 3

EFFICIENT MULTIMODAL BIOMETRIC

AUTHENTICATION USING FAST FINGERPRINT

VERIFICATION AND ENHANCED IRIS FEATURES

3.1 OVERVIEW

In many real-world applications, unimodal biometric systems face significant limitations due to sensitivity to noise, intra-class variability, poor data quality, non-universality and other factors. In such situations, attempting to improve the performance of the individual matchers alone does not prove effective. Multimodal biometric systems, shown in Figure 3.1, seek to alleviate some of these problems by providing multiple pieces of evidence of the same identity.

Figure 3.1 Overview of multimodal system using fingerprint and iris


In this work, a multimodal biometric system that overcomes these limitations by using multiple pieces of evidence of the same identity is implemented. However, because of its multiple processing stages, a multimodal biometric system is constrained by time. To improve the speed of authentication while retaining acceptable accuracy, a dynamic fingerprint verification technique fused with enhanced iris recognition using an adaptive rank level fusion method is introduced. Various fusion techniques, including the highest rank, Borda count and logistic regression methods, were implemented. The system shows improvement in the False Acceptance Rate (FAR) and Equal Error Rate (EER) curves when tested on a standard biometric dataset.

3.2 FINGERPRINT AUTHENTICATION SYSTEM

The two main topics of basic research in this area are: (i) measuring the amount of detail in a single fingerprint that is available for comparison, and (ii) measuring the amount of detail in correspondence between two fingerprints. The problem of fingerprint individuality can be formulated in different ways depending on which of the following aspects is under examination: (i) the individuality problem may be cast as determining the probability that any two individuals in a given target population have sufficiently similar fingerprints; (ii) given a sample fingerprint as input, determining the probability of finding a sufficiently similar fingerprint in the target population. Figure 3.2 shows the sequential activities of a general fingerprint identification system.

3.2.1 Segmentation

Figure 3.3 illustrates the result of segmenting a fingerprint image based on a variance threshold. The variance image in Figure 3.3(b), computed for the original image in Figure 3.3(a), shows that a very high variance is exhibited by the central fingerprint area, whereas a very low variance is exhibited by the regions outside this area. Hence, a variance threshold method is used to separate the fingerprint foreground area from the background regions. As shown in Figure 3.3(c), the final segmented image is formed by assigning the regions with a variance value below the threshold a grey-level value of zero. There is a trade-off involved when determining the threshold value used to segment the image: if the threshold value is too large, foreground regions may be incorrectly assigned as background regions; conversely, if the threshold value is too small, background regions may be mistakenly assigned as part of the fingerprint foreground area. A variance threshold of 100 gives the optimal results in terms of differentiating between the foreground and background regions.

Figure 3.2 Flowchart for fingerprint identification system

Figure 3.3 Segmentation using a variance threshold of 100: (a) Original Image (b) Variance Image (c) Segmented Image


Hence, the variance threshold method is effective for discriminating the foreground area from the background regions.
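As a rough illustration of this block-wise variance segmentation, the following sketch assumes an 8-bit grey-scale image held in a NumPy array and a 16 x 16 block size (the block size is an assumption; the chapter specifies only the variance threshold of 100).

import numpy as np

def segment_by_variance(image, block_size=16, threshold=100):
    """Mask out low-variance background blocks of a grey-scale fingerprint image."""
    h, w = image.shape
    mask = np.zeros_like(image, dtype=bool)
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = image[i:i + block_size, j:j + block_size]
            if block.var() >= threshold:          # high variance: ridge (foreground) area
                mask[i:i + block_size, j:j + block_size] = True
    segmented = np.where(mask, image, 0)          # background pixels set to grey level zero
    return segmented, mask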

3.2.2 Fingerprint Image Enhancement

An important characteristic in a fingerprint image is the quality of

the ridge structures, as the ridges carry the information of characteristic

features required for minutiae extraction. Ideally, in a well-defined fingerprint

image, the ridges and valleys should alternate and flow in a locally constant direction. This regularity facilitates the detection of ridges and, consequently, allows minutiae to be precisely extracted from the thinned ridges. In this work, histogram equalization is applied for image enhancement.

3.2.2.1 Histogram equalization

The histogram of an image represents the relative frequency of

occurrence of the various gray levels in the image. Histogram modeling

techniques modify an image so that its histogram has a desired shape. This is

useful in stretching the low contrast levels of images with narrow histograms.

Histogram modeling has been found to be a powerful technique for image enhancement.


Histogram equalization transforms the gray levels so that the resulting histogram is approximately uniform. This mapping stretches the contrast (expands the range of gray levels) for gray levels near the histogram maxima, and the transformation improves the detectability of many image features by expanding the contrast for most of the image pixels. The probability density function of a pixel intensity level r_k is given by:

p_r(r_k) = n_k / n                                                   (3.1)

where 0 <= r_k <= 1, n_k is the number of pixels at intensity level r_k, and n is the total number of pixels. The histogram is derived by plotting p_r(r_k) against r_k. A new intensity s_k of level k is defined as:

s_k = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n                     (3.2)

The histogram equalization is applied locally by using a local

window of 11x11 pixels. This results in expanding the contrast locally, and

changing the intensity of each pixel according to its local

neighborhood. Figure 3.4 presents the improvement in the image contrast

obtained by applying the local histogram equalization. Figure 3.5 represents

the quality improvement after applying histogram equalization.
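The following is a minimal sketch of this local histogram equalization, applying Equations (3.1) and (3.2) within an 11 x 11 neighbourhood of each pixel; it assumes an 8-bit grey-scale image and favours clarity over speed.

import numpy as np

def local_hist_equalize(image, window=11):
    """Equalize each pixel against its (window x window) neighbourhood (Eqs. 3.1-3.2)."""
    pad = window // 2
    padded = np.pad(image, pad, mode='reflect')
    out = np.empty_like(image, dtype=np.uint8)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            block = padded[i:i + window, j:j + window]
            # p_r(r_k) = n_k / n over the local neighbourhood
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            cdf = hist.cumsum() / block.size        # s_k = sum_j n_j / n
            out[i, j] = np.uint8(255 * cdf[image[i, j]])
    return out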

3.2.3 Fingerprint Enrolment

As noted in Section 3.2.2, the ridges carry the characteristic features required for minutiae extraction, so the quality of the ridge structures remains equally important at the enrolment stage: in a well-defined fingerprint image, the ridges and valleys alternate and flow in a locally constant direction, which allows minutiae to be precisely extracted from the thinned ridges.

Figure 3.4 Histogram equalization

Figure 3.5 Output of histogram equalization



3.2.4 Reference Point Location

Since fingerprints have many conspicuous landmarks, any combination of them could be used to establish a reference point. Here, the point of maximum curvature of the ridges in the fingerprint image is defined as the reference point of a fingerprint.

To align two fingerprint images, a reference point as well as the

orientation of each image must be located. The most commonly used

reference point is the core point. A core point is defined as the point at which

a maximum direction change is detected in the orientation field of a

fingerprint image or the point at which the directional field becomes

discontinuous. Several methods have been proposed for core point detection.

3.2.4.1 Orientation estimation

The orientation field of a fingerprint image defines the local orientation of the ridges contained in the fingerprint, as in Figure 3.7; Figure 3.8 shows the orientation estimation for a fingerprint image. The least mean square estimation method of Hong et al (1999) is used to compute the orientation image. This is a gradient-based method which is simple and accurate for high-quality images. However, instead of estimating the orientation block-wise, the method has been extended into a pixel-wise scheme, which produces a finer and more accurate estimation of the orientation field.


Figure 3.6 Regions for integrating pixel intensities for computing A(i, j)

Figure 3.7 The orientation of a ridge pixel in a fingerprint.

Figure 3.8 Orientation estimation of fingerprint image: (a) Original Image (b) Orientation Image

1. A block of size W x W is centered at pixel (i, j) in the normalized fingerprint image.

2. For each pixel in the block, compute the gradients ∂x(i, j) and ∂y(i, j), which are the gradient magnitudes in the x and y directions, respectively. The horizontal Sobel operator is used to compute ∂x(i, j):



The vertical Sobel operator is used to compute ∂y(i, j).

3. The local orientation at pixel (i, j) can then be estimated using equations 3.3 to 3.5 as:

(3.3)

(3.4)

(3.5)

where θ(i, j) is the least squares estimate of the local orientation of the block centered at pixel (i, j).

4. Smooth the orientation field in a local neighborhood using a Gaussian filter. The orientation image is first converted into a continuous vector field, defined as:

(3.6)

(3.7)

where Φx and Φy are the x and y components of the vector field, respectively. After the vector field has been computed, Gaussian smoothing is performed as follows:

(3.8)


(3.9)

where G is a Gaussian low-pass filter.

5. The final smoothed orientation field O at pixel (i, j) is defined as:

(3.10)
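A compact sketch of steps 1 to 5 is given below. It follows one common formulation of the gradient-based least-squares estimate, with Gaussian weighting standing in for the W x W block sums and with illustrative filter widths; the exact equation forms and parameter values used here may differ from those of the thesis.

import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def orientation_field(image, block_sigma=7, smooth_sigma=3):
    """Pixel-wise least-squares ridge orientation with Gaussian smoothing (Eqs. 3.3-3.10)."""
    img = image.astype(np.float64)
    gx = sobel(img, axis=1)                  # horizontal gradient (x direction)
    gy = sobel(img, axis=0)                  # vertical gradient (y direction)
    # Local sums over the block, approximated here by Gaussian weighting
    vx = gaussian_filter(2.0 * gx * gy, block_sigma)
    vy = gaussian_filter(gx ** 2 - gy ** 2, block_sigma)
    theta = 0.5 * np.arctan2(vx, vy)         # least-squares orientation estimate
    # Convert to a continuous vector field and smooth it (Eqs. 3.6-3.9)
    phi_x = gaussian_filter(np.cos(2.0 * theta), smooth_sigma)
    phi_y = gaussian_filter(np.sin(2.0 * theta), smooth_sigma)
    return 0.5 * np.arctan2(phi_y, phi_x)    # final smoothed orientation field (Eq. 3.10)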

Reference Point Detection: The reference point, or core point, of the fingerprint image is obtained using the following algorithm. First, compute the sine component ε(i, j) of the smoothed orientation field:

(3.11)

The sine component possesses an attractive characteristic in that it reflects the local ridge direction: it is smallest for perfectly horizontal ridges and largest for ridges oriented vertically. Due to the discontinuity property, the sine component value always changes abruptly in areas near a reference point. Because of these properties, the following procedure is added.

Initialize a two-dimensional (2-D) array A and set all its entries to 0. Scan the sine component map in a top-to-bottom, left-to-right manner. For each sine component ε(i, j) that exceeds a pre-defined threshold, compute the difference D and the corresponding Ci(i, j) value.

Figure 3.9 Examples of the results of reference point location algorithm

The reference points found in arch-type fingers are shown in Figure 3.9 (a), (b), (c) and (d). It can be observed that the locations of the reference points are consistent across different impressions of the same finger.

For each pixel (i, j) in E, integrate pixel intensities (sine component

of the orientation field) in regions RI and RII shown in Figure 3.6 and assign

the corresponding pixels in A the value of their difference:

(3.12)

The regions RI and RII were determined empirically by applying the reference point location algorithm over a large database. The radius of the semi-circular region was set equal to the window size w. The geometry of

regions RI and RII is designed to capture the maximum curvature in concave

ridges. Although this approach successfully detects the reference point in

most of the cases, including double loops, the present implementation is not

very precise and consistent for the arch type fingerprints because it is difficult

to localize points of high curvature in arch type fingerprint images.


The entry Ci(i,j) is used to compute the continuity of a possible

reference point candidate and is defined as shown below:

(3.13)

The difference D in the circular mask indicates the extent of the

change of direction for the concave ridges. The position with the maximum

value is obtained after all the sine components have been scanned. In other

words, the location with the sharpest change in the orientation of the ridge

direction becomes a reference point.

Due to the presence of noise in a fingerprint image, it is not uncommon for a location with an abrupt change in the orientation field to be mistaken for a reference point. To alleviate the problem, the following conditions are checked to verify the genuineness of a reference point. First, owing to the convergence of the ridge curvature near the reference point, a reference point should be located in a block (i, j) whose Ci(i, j) value exceeds the Ci threshold. Second, if two reference point candidates have the same D value, the one located at the bottom should, in general, be taken as the true reference point.

The above procedure is applied first with a larger grid size (w = 8) and then refined with a smaller grid size (w = 3) to restrict the search to a localized region of the fingerprint image. This not only increases the processing speed, but also reduces possible errors due to scars or noise in the fingerprint image.
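The reference point search described above can be sketched as follows: the sine component of the smoothed orientation field is integrated over two regions above and below each candidate pixel, and the position with the largest difference is taken as the core point. The rectangular regions used here are a simplification of the semi-circular regions RI and RII of Figure 3.6, so this is an illustrative approximation rather than the exact procedure.

import numpy as np

def core_point(orientation, w=8):
    """Locate the core point as the position with the sharpest change in the
    sine component of the smoothed orientation field (Eqs. 3.11-3.12)."""
    eps = np.sin(orientation)                           # sine component e(i, j)
    h, wd = eps.shape
    best, core = -np.inf, (h // 2, wd // 2)
    for i in range(w, h - w):
        for j in range(w, wd - w):
            # Simplified RI / RII: regions just below and just above (i, j)
            r1 = eps[i:i + w, j - w:j + w].sum()
            r2 = eps[i - w:i, j - w:j + w].sum()
            diff = r1 - r2                              # A(i, j) in Eq. (3.12)
            if diff > best:
                best, core = diff, (i, j)
    return core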


3.2.5 Minutiae Extraction

The ridge endings and bifurcations of a fingerprint image are known as minutiae; examples are shown in Figure 3.10.

Figure 3.10 Example of a ridge ending and a bifurcation

The most commonly employed method of minutiae extraction is the Crossing Number (CN) concept. This method operates on the skeleton image, in which the ridge flow pattern is eight-connected. The minutiae are extracted by scanning the local neighborhood of each ridge pixel in the image using a 3 x 3 window. Figure 3.11 shows the result of performing minutiae extraction on a fingerprint image and the various processes involved.

3.2.5.1 Binarization

Most minutiae extraction algorithms operate on binary images

where there are only two levels of interest: the black pixels that represent

ridges, and the white pixels that represent valleys. The conversion of a grey-level image into a binary image is called binarization. This improves the

contrast between the ridges and valleys in a fingerprint image, and

consequently facilitates the extraction of minutiae. The output of the binarized

image from the enhanced image (Figure 3.12 (a)) is shown in Figure 3.12 (c).


Figure 3.11 Results of performing minutiae extraction on a fingerprint

image

The binarization process involves examining the grey-level value of each pixel in the enhanced image, and, if the value is greater than the global threshold, then the pixel value is set to a binary value one; otherwise, it is set to zero. The outcome is a binary image containing two levels of information, the foreground ridges and the background valleys.

3.2.5.2 Thinning

The final image enhancement step typically performed prior to minutiae extraction is thinning. The morphological operation that successively erodes away the foreground pixels until they are one pixel wide is called thinning. A standard thinning algorithm is employed, which performs the thinning operation using two sub-iterations.
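A brief sketch of the binarization and thinning stages is shown below, with scikit-image's skeletonize standing in for the two-subiteration thinning algorithm; the chapter does not name a particular implementation, so the threshold value and ridge polarity are assumptions.

import numpy as np
from skimage.morphology import skeletonize

def binarize_and_thin(enhanced, global_threshold=128):
    """Global-threshold binarization followed by thinning to one-pixel-wide ridges."""
    # Following Section 3.2.5.1: pixels above the global threshold are set to 1.
    # This assumes ridges appear as the brighter level in the enhanced image;
    # if ridges are dark, invert the comparison.
    binary = enhanced > global_threshold
    skeleton = skeletonize(binary)            # stand-in for the two-subiteration thinning
    return binary.astype(np.uint8), skeleton.astype(np.uint8)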



The application of the thinning algorithm to a fingerprint image

preserves the connectivity of the ridge structures while forming a skeletonized

version of the binary image. This skeleton image is then used in the

subsequent extraction of minutiae.

Figure 3.12 demonstrates that the global thresholding technique is

effective in separating the ridges (black pixels) from the valleys (white

pixels). The results of thinning show that the connectivity of the ridge

structures is well preserved, and that the skeleton is eight-connected

throughout the image. In particular, Figure 3.12 (c) shows that the thinning

algorithm is able to accurately extract the skeleton of minutia points without

disrupting the continuity of the ridge flow pattern. Figure 3.13 shows the

output after applying binarization and thinning to the given input.

The CN value is then computed. Half the sum of the differences

between pairs of adjacent pixels in the eight-neighborhood is defined as the

CN value. Using the properties of the CN, the ridge pixel can then be

classified as a ridge ending, bifurcation or non-minutiae point. For example, a

ridge pixel with a CN of one corresponds to a ridge ending, and a CN of three

corresponds to a bifurcation.

Figure 3.12 Results of applying binarization and thinning to the enhanced image: (a) Enhanced Image (b) Binary Image (c) Thinned Image



Figure 3.13 Results of applying binarization and thinning

The CN method is used to perform minutiae extraction. This

method extracts the ridge endings and bifurcations from the skeleton image

by examining the local neighbourhood of each ridge pixel using a 3X3

window. The CN for a ridge pixel P is given by

CN = 0.5 Σ_{i=1}^{8} | Pi − Pi+1 |,  with P9 = P1                    (3.14)

where Pi is the pixel value in the eight-neighborhood of P. For a pixel P, its eight neighboring pixels are scanned in an anti-clockwise direction.

The pixel can then be classified according to the property of its CN

value after the CN for a ridge pixel has been computed. A ridge pixel with a

CN of one corresponds to a ridge ending, and a CN of three corresponds to a

bifurcation. For each extracted minutiae point, the following information is

recorded:

x and y coordinates,

orientation of the associated ridge segment, and

type of minutiae (ridge ending or bifurcation).
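The following sketch applies Equation (3.14) to a 0/1 skeleton image: the eight neighbours of each ridge pixel are visited in a fixed cyclic order, and a pixel with CN = 1 is recorded as a ridge ending and CN = 3 as a bifurcation, together with its coordinates and ridge orientation.

import numpy as np

# Offsets of the eight neighbours of P in cyclic order (P1..P8); P9 wraps to P1.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]

def extract_minutiae(skeleton, orientation):
    """Crossing Number minutiae extraction (Eq. 3.14) on a 0/1 skeleton image."""
    minutiae = []
    rows, cols = skeleton.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if skeleton[i, j] != 1:
                continue
            p = [skeleton[i + di, j + dj] for di, dj in NEIGHBOURS]
            p.append(p[0])                                   # P9 = P1
            cn = sum(abs(p[k] - p[k + 1]) for k in range(8)) // 2
            if cn == 1:
                minutiae.append((j, i, orientation[i, j], 'ending'))
            elif cn == 3:
                minutiae.append((j, i, orientation[i, j], 'bifurcation'))
    return minutiae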


The extracted minutiae points are superimposed on the original

image. Visual inspection of the image indicates that the majority of the

marked minutiae points from the skeleton image correspond to valid minutiae

points in the original image.

3.2.6 Fingerprint Verification

Once the reference point is located, all minutiae extracted from a

master fingerprint image can be aligned with the reference point to generate a

circular sub region in the original image. This sub region contains a fixed

number of minutiae to be matched with similar minutiae contained in a live

template during an authentication process.

First, the Cartesian coordinates of the extracted minutiae in a

master fingerprint image are converted into Polar coordinates using the

following equations:

(3.15)

(3.16)

(3.17)

Figure 3.14 The first N minutiae and their reference point


where (xi, yi) - Cartesian coordinates of minutia i
θi - orientation of minutia i
(ri, φi) - polar coordinates of minutia i
θi' - normalized minutia orientation
(corex, corey) - Cartesian coordinates of the reference point
coreorient - orientation of the reference point

Figure 3.14 shows the minutiae and their reference point formed by applying the minutiae algorithm on a fingerprint image.

In the polar coordinate representation, the minutiae are rotation and translation invariant with respect to their reference point. After the transformation, the minutiae are sorted in ascending order according to their distances from the reference point. To obtain a minimum area that covers a predetermined number of minutiae points, the first N minutiae from the sorted list are selected to form a master feature template (Maio et al 2002). Especially in arch fingerprints, some reference points are located near the boundaries of the images. Such cases can lead to a large bounding circle, as shown in Figure 3.15. As a remedy, an average center (Xcenter, Ycenter) is constructed:

Xcenter = (1/N) Σ_{i=1}^{N} Xi,   Ycenter = (1/N) Σ_{i=1}^{N} Yi                 (3.18)

where (Xi, Yi) are the Cartesian coordinates of a minutia in the feature template and (Xcenter, Ycenter) is the new centre of the feature template.

It should be noted that a pre-defined constant Rd is added to the bounding radius to tolerate elastic distortion errors during the image capture process. Subsequently,


only the minutiae points lying within the bounding circle centered at the average center are retained.

Figure 3.15 (a) Size of the bounding circle is large if the reference point is near the boundary (b) Size of the bounding circle decreases
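A sketch of the template construction in Equations (3.15) to (3.18) is given below: minutiae are converted to polar coordinates relative to the core point, the N nearest minutiae are retained, and the bounding circle is recentred on the average centre with the tolerance margin Rd. The exact equation forms and parameter values are assumptions for illustration.

import numpy as np

def build_feature_template(minutiae, core, core_orient, n_keep=20, r_d=10.0):
    """Polar conversion (Eqs. 3.15-3.17), selection of the N nearest minutiae,
    and bounding-circle recentring (Eq. 3.18)."""
    cx, cy = core
    feats = []
    for x, y, theta, mtype in minutiae:
        r = np.hypot(x - cx, y - cy)                       # radial distance from core
        phi = np.arctan2(y - cy, x - cx) - core_orient     # angle w.r.t. core orientation
        feats.append((r, phi, theta - core_orient, x, y, mtype))
    feats.sort(key=lambda f: f[0])                         # ascending distance from core
    feats = feats[:n_keep]                                 # keep the first N minutiae
    xs = np.array([f[3] for f in feats])
    ys = np.array([f[4] for f in feats])
    x_center, y_center = xs.mean(), ys.mean()              # average centre (Eq. 3.18)
    radius = np.max(np.hypot(xs - x_center, ys - y_center)) + r_d   # tolerance margin Rd
    return feats, (x_center, y_center), radius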

3.3 IRIS RECOGNITION

The iris is the plainly visible, colored ring that surrounds the pupil.

It is a muscular structure that controls the amount of light entering the eye,

with intricate details that can be measured, such as striations, pits, and

furrows. The iris is not to be confused with the retina, which lines the inside

of the back of the eye as in Figure 3.16. No two irises are alike. There is no

detailed correlation between the iris patterns of even identical twins, or the

right and left eye of an individual. The amount of information that can be

measured in a single iris is much greater than fingerprints. An iris recognition

camera takes a black and white picture from 5 to 24 inches away, depending

on the type of camera. The camera uses non-invasive, near-infrared

illumination (similar to a TV remote control) that is barely visible and very

safe.

Unlike other biometric technologies that can be used in surveillance

mode, iris recognition is an opt-in technology. In order to use the technology,


one must first glance at a camera. Iris recognition cannot take place without

permission. The picture of an eye is first processed by software that

localizes the inner and outer boundaries of the iris, and the eyelid contours, in

order to extract just the iris portion. Eyelashes and reflections that may cover

parts of the iris are detected and discounted.

Being the first step in iris recognition, iris segmentation defines the

image contents used for feature extraction and matching, which is directly

related to the recognition accuracy. Speed is often a bottleneck in practical

applications, and iris segmentation is often found to be the most time-

consuming module in an iris recognition system. It is reported that most

failures to match in iris recognition result from inaccurate iris segmentation.

Figure 3.16 Eye Structure and Iris features


Iris segmentation locates the valid part of the iris for iris biometrics: it finds the pupillary and limbic boundaries of the iris, localizes the upper and lower eyelids if they occlude the iris, and detects and excludes any superimposed occlusions of eyelashes, shadows, or reflections. Daugman (2007) used the following integro-differential operator to find the circular boundaries of an iris:

(3.19)

This operator serves as a circle finder that searches for the maximum angular integral of the radial derivative of image intensity over the candidate circle parameters. A rough initial estimate of the iris position can be obtained by applying a k-means clustering algorithm to the position and intensity feature vector of the iris image.
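A coarse sketch of the integro-differential search in Equation (3.19) is shown below: for each candidate centre, the mean intensity is sampled on circles of increasing radius, the resulting profile is blurred, and the circle with the largest radial derivative is kept. It is a grid search for illustration only, not an optimised implementation.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def integro_differential(image, centers, radii, n_angles=64):
    """Coarse search for the circle maximizing the blurred radial derivative
    of the mean intensity along the circle (Eq. 3.19)."""
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    best, best_circle = -np.inf, None
    h, w = image.shape
    for (cx, cy) in centers:
        # Mean intensity on circles of increasing radius around (cx, cy)
        profile = []
        for r in radii:
            xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, w - 1)
            ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, h - 1)
            profile.append(image[ys, xs].mean())
        deriv = np.abs(np.gradient(gaussian_filter1d(np.asarray(profile), sigma=1.0)))
        k = int(np.argmax(deriv))
        if deriv[k] > best:
            best, best_circle = deriv[k], (cx, cy, radii[k])
    return best_circle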

3.3.1 Iris Detection After Reflection Removal

The objective of iris detection is not only to identify the presence of an iris in the input image but also to determine its position and scale, as shown in Figure 3.17 (a), (b), (c) and (d), which illustrates the steps involved in this process.

Figure 3.17 Flowchart of iris segmentation algorithm.


An adaptive threshold Tref is used to detect the reflection regions R(x, y) of the image I(x, y).

3.3.2 Pupillary and Limbic Boundary Localization

A novel iterative Pulling and Pushing (PP) method is used here. After a brief introduction to the method, several important considerations involved in its effective implementation are presented.

Figure 3.18 PP method

3.3.2.1 The pulling and pushing method

The PP method is based on a simple spring system (Young et al 2003). Its mechanics are illustrated in Figures 3.18(a) and 3.18(b), where the system consists of N identical massless springs with equilibrium length R and spring constant k. One end of each spring is attached to a circle of radius R and the other ends join at the point O.

At the beginning, all the springs are relaxed and O is the equilibrium position, as shown in Figure 3.18(a). When an appended force displaces O to a new position O', each spring generates a restoring force to resist the introduced deformation:


(3.20)

where li is the current length of the i-th spring and ei is the unit direction radiating from O'.

As illustrated in Figure 3.18(b), the composition of these restoring forces will push O' back to its equilibrium position after the appended force is removed. The composed force is obtained by equation 3.21:

(3.21)

Figure 3.19 Flowchart of the PP method with an illustration. (a) The

result of iris detection (b) Edge detection in polar

coordinates. (c) The PP forces in cartesian coordinates

(d) The new estimation driven by the forces in (c)

Based on these mechanics, the PP method is developed as illustrated in Figure 3.19 (a), (b), (c) and (d). The localization of the pupillary boundary is taken as an example (the limbic boundary can be located similarly).


Suppose O0P is the rough position of the pupil center obtained by iris detection; the PP method then works as follows (a sketch is given after this list):

1. Perform vertical edge detection on the image after transforming the original iris image into polar coordinates (centered at O0P). Only one edge point along each column is preserved, to avoid most of the noisy edge points, as shown in Figure 3.19(b). In addition, only the [ ] sector is used, to avoid the influence of the upper eyelid occlusion.

2. Join each resulting edge point

(3.22)

and the center point O0P with an imaginary spring, so that a spring system attached to a circle and meeting at O0P, as shown in Figure 3.19(c), is obtained. The composed restoring force and the updated center estimate are then given by

(3.23)

(3.24)
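The sketch below illustrates the pulling-and-pushing update of Equations (3.20) to (3.24) under simplifying assumptions: one edge point is detected per angular direction from the radial intensity profile, each edge point is treated as the far end of a spring, and the composed restoring force moves the centre estimate. The spring constant and iteration count are illustrative values.

import numpy as np

def pulling_and_pushing(image, center, r_max=120, n_springs=64, n_iter=10, k=0.05):
    """Iteratively refine a pupil-centre estimate with spring-like restoring forces."""
    cx, cy = float(center[0]), float(center[1])
    angles = np.linspace(0, 2 * np.pi, n_springs, endpoint=False)
    radii = np.arange(5, r_max)
    h, w = image.shape
    for _ in range(n_iter):
        lengths = []
        for a in angles:
            xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, w - 1)
            ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, h - 1)
            profile = image[ys, xs].astype(float)
            edge = int(np.argmax(np.abs(np.diff(profile))))   # one edge point per direction
            lengths.append(radii[edge])
        lengths = np.asarray(lengths, dtype=float)
        R = lengths.mean()                                    # equilibrium (boundary) radius
        # Composed restoring force of the N springs (Eq. 3.21) pushes the centre
        fx = k * np.sum((lengths - R) * np.cos(angles))
        fy = k * np.sum((lengths - R) * np.sin(angles))
        cx, cy = cx + fx, cy + fy
    return (cx, cy), R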

3.3.3 Eyelid Localization

Locating the upper and lower eyelids is an even harder problem in iris segmentation. It is impossible to fit them with simple shape assumptions, since the shape of the eyelids is irregular. In addition, the upper eyelid tends to be partially covered with eyelashes, making the localization more difficult. Fortunately, these problems can be addressed by a 1D rank filter and a histogram filter: the 1D rank filter removes the eyelashes, while the histogram filter addresses the shape irregularity.


3.3.4 Eyelid Models

To tackle the shape irregularity of eyelids, three typical models of the upper eyelid, shown in Figure 3.20, are statistically established by clustering manually labeled eyelids in a training set. Given a probe upper eyelid, its shape similarity with each model is calculated, and the model with the highest similarity score is taken as an initial guess of its shape. Although inaccurate, this initial guess provides cues for noise elimination.

Figure 3.20 Examples of upper eyelid models

The flowchart of the proposed Eyelid Localization (EL) method is depicted in Figure 3.21. Taking the localization of the upper eyelid as an example, the method works as follows (a sketch is given after this list):

Figure 3.21 Flowchart of eyelid localization


Crop the image Region of Interest (ROI): the ROI Iroi of the iris image is cropped based on the localization results (Figure 3.21 (a)).

Filter Iroi with a 1D horizontal rank filter (Figure 3.21 (b)): with the observation that eyelashes are mostly thin, dark, vertical lines, Iroi is filtered horizontally with a 1D rank filter, giving Iranked.

Calculate a raw eyelid edge map: edge detection is performed on the upper region of Iranked along the vertical direction, as shown in Figure 3.21 (c), resulting in a raw eyelid edge map Eraw.

Eliminate noisy edge points through shape similarity calculation, as shown in Figure 3.21 (d).

Fit the eyelid with a parabolic curve: the exact shape of the eyelid is obtained by parabolic curve fitting, as shown in Figure 3.21 (e).
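A condensed sketch of these five steps is given below; the rank-filter width, the edge rule and the noise-rejection criterion are simplifications of the method described above.

import numpy as np
from scipy.ndimage import rank_filter

def locate_upper_eyelid(iris_roi, rank_size=7):
    """1D horizontal rank filtering followed by a parabolic fit to eyelid edge points."""
    # Step 2: suppress thin, dark, vertical eyelashes with a 1D horizontal rank filter
    ranked = rank_filter(iris_roi, rank=rank_size // 2, size=(1, rank_size))
    # Step 3: raw eyelid edge map -- strongest vertical intensity change per column
    grad = np.abs(np.diff(ranked.astype(float), axis=0))
    ys = np.argmax(grad, axis=0)
    xs = np.arange(ranked.shape[1])
    # Step 4 (simplified): drop columns whose edge response is weak
    strong = grad[ys, xs] > grad[ys, xs].mean()
    # Step 5: fit the remaining edge points with a parabola y = a*x^2 + b*x + c
    a, b, c = np.polyfit(xs[strong], ys[strong], deg=2)
    return a, b, c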

3.3.5 Eyelash and Shadow Detection

Eyelashes and Shadows (ES) (Figure 3.22) are another source of

occlusion that challenges the iris segmentation method. The basic idea of the proposed solution is to derive an appropriate threshold for ES detection via a statistically established prediction model.

The χ2 distance is adopted to measure the dissimilarity between the two considered histograms h1 and h2 as follows:

(3.25)
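A small sketch of Equation (3.25) in its common symmetric form is shown below; the thesis may use a slightly different scaling factor.

import numpy as np

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-square dissimilarity between two histograms (Eq. 3.25)."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1 /= h1.sum()
    h2 /= h2.sum()                                # compare normalized histograms
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))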


Finally, the detection result is refined by checking the connectivity

of the candidate points to the upper eyelid. The idea is that most eyelashes

and shadows appear near the upper eyelid. This refinement is necessary

because it relaxes the burden of selecting the detection threshold: rather than spending much effort searching for an optimal threshold, a moderately good and loose one suffices.

Figure 3.22 Eyelash detection

3.4 MULTIMODAL BIOMETRIC AUTHENTICATION

It is often not possible to achieve a higher recognition rate by attempting to improve the performance of single matchers alone. In such situations, a single recognizer may not prove effective due to its inherent problems. By utilizing a multibiometric system, these problems can be alleviated through multiple pieces of evidence of the same human subject, thus achieving higher and more reliable recognition.

In this work, the results of fingerprint and iris authentication are combined to improve system performance. Since the raw scores of the fingerprint and iris matchers have different distributions, a sigmoid function is applied to normalize the raw scores to the range 0 to 1. Finally, a multimodal biometric authentication system is proposed that fuses these normalized scores using an adaptive rank level fusion method. The fused score is used to classify the unknown user as accepted or rejected.


3.4.1 Score Normalization

Transforming the raw scores obtained from different modalities into a common domain using a mapping function is called normalization. Common normalization methods include z-score, min-max, decimal scaling, and the sigmoid function. The sigmoid function is used in this work to normalize the raw scores of fingerprint and iris, since it accommodates outlier data while still preserving the significance of data within one standard deviation of the mean. The sigmoid normalization maps the raw scores to the [0, 1] interval and is defined by

(3.26)

where si is the raw score of the i-th modality, ni is the corresponding normalized score, and μi and σi are the mean and standard deviation of the raw scores.
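A minimal sketch of the sigmoid normalization of Equation (3.26) is given below, mapping each raw score to [0, 1] using the mean and standard deviation of that modality's scores; the exact scaling inside the exponent is an assumption.

import numpy as np

def sigmoid_normalize(raw_scores):
    """Map raw matcher scores to [0, 1] with a sigmoid centred on the score mean."""
    s = np.asarray(raw_scores, dtype=float)
    mu, sigma = s.mean(), s.std()
    return 1.0 / (1.0 + np.exp(-(s - mu) / sigma))   # outliers saturate near 0 or 1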

3.4.2 Adaptive Rank Level Fusion

Fusion can be done at the rank level when the output of each biometric matcher is a subset of possible matches sorted in decreasing order of confidence. The goal of rank-level fusion is to consolidate the ranks output by the individual biometric subsystems (matchers) in order to derive a consensus rank for each identity. An effective adaptive rank level fusion scheme (Monwar and Gavrilova 2009), shown in Figure 3.23 below, which combines information presented by multiple domain experts based on the rank-level fusion integration method, is implemented.


3.4.2.1 Highest rank fusion

In the highest rank method, the fused rank of a user is computed as the lowest rank generated by the different classifiers:

Rj = min_k rj,k                                                      (3.27)

where rj,k is the rank assigned to user j by the k-th classifier.

This rank fusion technique is similar to applying the max rule for

fusion at the score level. As a consequence of applying this fusion rule, ties

between users may be randomly broken.

Figure 3.23 Schematic representation of adaptive rank level fusion

3.4.2.2 Borda count fusion

In the Borda count method, the fused rank is estimated as the sum of the ranks assigned by the individual classifiers:

Rj = Σ_k rj,k                                                        (3.28)


The advantage of the Borda count method over the highest rank is

its ability to account for the variability in ranks due to the use of a large

number of classifiers.

3.4.2.3 Logistic regression method

In the logistic regression method, a weighted sum of the individual ranks is calculated. This method is very efficient when the different matching modules have significant differences in their accuracies, but it requires a training phase to determine the weights.

The logistic response function is given by

(3.29)

(3.30)

where α and β are the parameters of the logistic regression model.
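The three fusion rules compared in this section can be sketched together as follows; the rank matrix layout and the logistic weights are illustrative assumptions.

import numpy as np

def fuse_ranks(rank_matrix, method="borda", weights=None):
    """Fuse per-matcher ranks (rows = matchers, columns = enrolled identities).
    A lower fused rank means a stronger consensus match."""
    ranks = np.asarray(rank_matrix, dtype=float)
    if method == "highest":                 # Eq. (3.27): best (lowest) rank per identity
        fused = ranks.min(axis=0)
    elif method == "borda":                 # Eq. (3.28): sum of ranks per identity
        fused = ranks.sum(axis=0)
    elif method == "logistic":              # Eqs. (3.29)-(3.30): weighted sum of ranks
        w = np.asarray(weights, dtype=float)
        fused = w @ ranks
    else:
        raise ValueError("unknown fusion method")
    return fused, int(np.argmin(fused))     # fused ranks and the best-matching identity

# Example: two matchers (fingerprint, iris) ranking three enrolled identities
ranks = [[1, 3, 2],      # fingerprint matcher
         [2, 1, 3]]      # iris matcher
print(fuse_ranks(ranks, "borda"))
print(fuse_ranks(ranks, "logistic", weights=[0.7, 0.3]))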

To evaluate the multimodal results of fingerprint and iris, the various rank level fusion techniques, namely the highest rank, Borda count and logistic regression methods, are compared in terms of Genuine Acceptance Rate (GAR). It is evident from the results that the multimodal authentication system with the logistic regression fusion technique has better error rates than the highest rank and Borda count methods. Further, the training and recognition times of the various rank level fusion approaches are compared. It can be inferred that the highest rank method shows a 10% improvement in recognition time compared with the other methods. The detailed experimental procedure and the result comparisons are given in Section 6.2.