DEVELOPMENT OF EFFICIENT BIOMETRIC
RECOGNITION ALGORITHMS BASED ON
FINGERPRINT AND FACE
A thesis submitted to the Christ University for the Degree of
DOCTOR OF PHILOSOPHY
IN
COMPUTER SCIENCE
BY
JOSSY P. GEORGE
UNDER THE GUIDANCE OF
Dr. K. B. RAJA
Centre for Research
Christ University, Bangalore - 560029
MARCH – 2012
CERTIFICATE
This is to certify that the thesis entitled DEVELOPMENT OF
EFFICIENT BIOMETRIC RECOGNITION ALGORITHMS
BASED ON FINGERPRINT AND FACE submitted by Jossy P.
George to Christ University, Bangalore for the award of the degree of
Doctor of Philosophy is a bonafide record of research work carried out by
Jossy P. George under my supervision. The contents of this thesis, in
full or its parts, have not been submitted to any other University for the
award of any degree or diploma.
Place: Bangalore Dr. K. B. Raja
Date: 23 March, 2012 Associate Professor
Department of Electronics and
Communication Engineering
University Visvesvaraya College of
Engineering, Bangalore
DECLARATION
I hereby declare that the Ph.D. thesis titled DEVELOPMENT OF
EFFICIENT BIOMETRIC RECOGNITION ALGORITHMS
BASED ON FINGERPRINT AND FACE is an original research work
done by me under the guidance and supervision of Dr. K. B. Raja,
Associate Professor, Department of Electronics and Communication,
University Visvesvaraya College of Engineering. This thesis is submitted
to Christ University, Bangalore, for the award of the degree of DOCTOR
OF PHILOSOPHY IN COMPUTER SCIENCE.
I also declare that neither this thesis nor any part of it has been
submitted to any other university for the award of any degree.
Place : Bangalore Jossy P. George
Date : 23 March, 2012
ACKNOWLEDGEMENTS
First and foremost I raise my heart in gratitude to God my Father for
His inspiration and unending love.
This research is incomplete if I do not place on record my deepest
sense of gratitude to all those who inspired me, guided and assisted me to
take up and conduct this research.
I wish to express my heartfelt and deep gratitude to my research
supervisor and guide Dr. K. B. Raja, Associate Professor, Department of
Electronics and Communication, University Visvesvaraya College of
Engineering, for his invaluable guidance and direction throughout my
research. He has been a constant source of inspiration, and it would not
have been possible to complete this thesis but for his untiring effort in
guiding me. His patience in going through my work, his gentle
encouragement and his continuous guidance have inspired me a lot. His
clarity of thought, incisive analysis and taste for perfection are qualities
worth emulating.
I accord my sincere thanks to Dr. Fr. Thomas C. Mathew, Vice
Chancellor, Christ University, Dr. Fr. Abraham V. M., Pro Vice
Chancellor, Christ University, Prof. J. Subramanian, Registrar, Christ
University, Dr. Fr. Varghese K. J, Chief Finance Officer and all other
CMI fathers of Christ University for their constant support and
encouragement.
I owe my heartfelt and special gratitude to Dr. Srikanta Swamy,
Additional Director, Centre for Research, Christ University, for his deep
concern and timely directions throughout my research work.
I thank my beloved father Late Mr. Paul George and my mother Mrs.
Mary George, who moulded me in my academic pursuit and character
formation. I express my sincere gratitude to my brothers and sisters for
their encouragement. I would like to thank the Librarian, Christ
University, all the faculty members of my department, and my colleagues
and friends in Christ University for their wholehearted support.
Finally, my sincere acknowledgements to all who directly or
indirectly helped me in completion of this work.
Jossy P. George
PREFACE
Reliable verification systems are required to confirm the identity of
an individual requesting a service. Secure access to buildings, laptops,
cellular phones, ATMs etc. are examples of such applications. In the
absence of robust verification, these systems are vulnerable to the wiles
of an impostor. The traditional means of authentication are passwords
(knowledge-based security) and ID cards (token-based security). These
methods are easily breached, because a password may be forgotten and a
card stolen or lost. With the development and progress of biometric
technology, this fear can be eliminated. Biometrics refers to the automatic
identification (or verification) of an individual (or a claimed identity) by
using certain physiological or behavioural traits associated with the
person.
A biometric system identifies a person from a feature vector derived
from physiological or behavioural characteristics; a good biometric trait
offers uniqueness, permanence, accessibility and collectability at
minimum cost. The physiological biometrics include Fingerprint, Hand
Scan, Iris Scan, Facial Scan and Retina Scan, and the behavioural
biometrics include Voice, Keystroke, Gait and Signature. Physiological
biometrics measure the structure or shape of a specific part of the
subject's body, whereas behavioural biometrics are more influenced by
mood and environment.
Chapter one presents an introduction to biometrics and its various
traits. The structure of the biometric system and its different approaches
are described. Design issues in biometric systems, such as universality,
collectability, distinctiveness, permanence, acceptability, uniqueness,
performance and circumvention, are also discussed.
Chapter two gives a detailed survey of biometric techniques. It
includes a literature survey of the fingerprint and face biometric traits
and the various approaches to them.
In Chapter three, an algorithm for Fingerprint Verification based on
the Dual Tree Complex Wavelet Transform (DTCWT) is proposed. The
original fingerprint is cropped and resized before applying the DTCWT.
The fingerprint features are obtained by applying different levels of
DTCWT. Performance is analysed in terms of FRR, FAR and TSR.
Chapter four discusses another widely used source of authentication,
face recognition. In this chapter, the algorithm Performance Comparison
of Face Recognition using Transform Domain Techniques (PCFTD) is
proposed. The face databases L-Spacek, JAFFE and NIR are considered.
The face features are generated using the wavelet families Haar, Symlet
and DB1, considering the approximation band only, and also using the
magnitudes of FFTs. The test image features are compared with the
database features using Euclidean Distance (ED). The performance
parameters FAR, FRR, TSR and EER are computed for the wavelet
families and the FFT. The methodology described in this thesis is
accurate, simple, fast and better than the existing algorithms. Chapter
five presents conclusions and future work.
LIST OF TABLES
Sl. Table Title Page
No No No
01 3.1 Scanners / Technologies used for FVC 2004 63
02 3.2 Proposed TDFID Algorithm 70
03 3.3 EER and TSR for Different Levels of DTCWT 71
04 3.4 Values of FRR, FAR and TSR for Thresholds 73
05 4.1 Algorithm of PCFTD 89
06 4.2 Performance on Different Face Databases with FFT 91
07 4.3 Performance Parameters of L-Spacek Databases 95
08 4.4 Performance Parameters of JAFFE Databases 96
09 4.5 Performance Parameters of NIR Databases 97
10 4.6 EER Values for Different Transforms 103
LIST OF FIGURES
Sl Figure Title Page
No No No
01 1.1 Classification of Biometrics 03
02 1.2 Fingerprint Ridge Patterns 05
03 1.3 Fingerprint Characteristics 05
04 1.4 A Face Scan 06
05 1.5 Ear Structure 06
06 1.6 DNA Structure 08
07 1.7 Structure of Palm Image 08
08 1.8 How Iris Scanners Record Identities 10
09 1.9 Front View of Iris 10
10 1.10 Retina 11
11 1.11 Gait Cycle 12
12 1.12 Signature 12
13 1.13 Keystroke Dynamics 13
14 1.14 Voice File 14
15 1.15 General Biometrics System 14
16 1.16 The Original Histogram of a Fingerprint Image 18
17 1.17 Histogram after the Histogram Equalization 18
18 1.18 The Fingerprint Image after Adaptive Binarization 19
19 3.1 TDFID Model 61
20 3.2 One Fingerprint Image from Each Database 64
21 3.3 A Sample of Finger Print of DB3_A 64
22 3.4 Real and Imaginary Parts - Complex Coefficients 66
23 3.5 DTCWT Images at Different Levels. 66
24 3.6 Variations of FRR, FAR and TSR with Threshold 72
25 4.1 The Block Diagram of PCFTD Model 76
26 4.2 Samples of NIR Face Images of a Person 78
27 4.3 Samples of L-Spacek Face Images of a Person 79
28 4.4 Samples of JAFFE Face Images of a Person 80
29 4.5 Wavelet Families 82
30 4.6 Wavelet Decomposition 86
31 4.7 Block Diagram of 2 DWT Decomposition Process 87
32 4.8 FAR and FRR with Threshold (L-Spacek with FFT) 92
33 4.9 FAR and FRR with Threshold (JAFFE with FFT) 93
34 4.10 FAR and FRR with Threshold (NIR with FFT) 93
35 4.11 FAR and FRR with Threshold for L–Spacek
Databases with DWT (Haar) 98
36 4.12 FAR and FRR with Threshold for L–Spacek
Databases with DWT (Symlet) 98
37 4.13 FAR and FRR with Threshold for L–Spacek
Databases with DWT (DB1) 99
38 4.14 FAR and FRR with Threshold for JAFFE
Databases with DWT (Haar) 100
39 4.15 FAR and FRR with Threshold for JAFFE
Databases with DWT (Symlet) 100
40 4.16 FAR and FRR with Threshold for JAFFE
Databases with DWT (DB1) 101
41 4.17 FAR and FRR with Threshold for NIR
Databases with DWT (Haar) 101
42 4.18 FAR and FRR with Threshold for NIR
Databases with DWT (Symlet) 102
43 4.19 FAR and FRR with Threshold for NIR
Databases with DWT (DB1) 102
TABLE OF CONTENTS
Certificate I
Declaration II
Acknowledgement III
Preface V
List of Tables VIII
List of Figures IX
CHAPTER 1
INTRODUCTION
1.1 Introduction 01
1.2 Types of Biometrics 02
1.3 The Biometric System 14
1.4 Design Issues in Biometric System 25
1.5 Applications of Biometrics System 26
1.6 Definitions 28
1.7 Motivation 30
1.8 Organization of the Thesis 30
CHAPTER 2
LITERATURE SURVEY
2.1 Introduction 31
2.2 Review of Fingerprint Biometric Trait 31
2.3 Review of Face Biometric Recognition 49
2.4 Summary 59
CHAPTER 3
TRANSFORM DOMAIN FINGERPRINT
IDENTIFICATION BASED ON DTCWT
3.1 Introduction 60
3.2 Proposed Model 60
3.3 Algorithm 69
3.4 Performance Analysis 70
3.5 Summary 74
CHAPTER 4
PERFORMANCE COMPARISON OF FACE
RECOGNITION USING TRANSFORMS DOMAIN
TECHNIQUES (PCFTD)
4.1 Introduction 75
4.2 Proposed PCFTD Model 76
4.3 Algorithm 89
4.4 Performance Analysis 90
4.5 Summary 104
CHAPTER 5
CONCLUSIONS
5.1 Introduction 105
5.2 Contribution of this Work 106
5.3 Future Work 107
Bibliography 108
List of Publications 129
Appendix A 130
Appendix B 145
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
Biometric algorithms analyse human body parts to authenticate a
person for multiple applications. The word biometric is derived from the
Greek words [1] bios (life) and metrikos (measure). A biometric system
captures and stores certain human body traits, i.e., biometric
information, in a database and compares test biometric information with
the stored biometric information. Biometric systems have received
considerable attention and have been used successfully in many
applications. The earlier biometric systems used for authentication were
unimodal, depending on only a single source of biometric information,
whereas nowadays multimodal systems that depend on multiple
biometrics are used. Biometrics is becoming more popular due to the
security requirements in the fields of information, business, military,
e-commerce, internet and electronic transfers. In the mid nineteenth
century the police criminal identification division in Paris [2] developed
and practised the idea of using features of human body parts and
behavioural characteristics to identify criminals. Since then, biometric
recognition technology has emerged rapidly in law enforcement to
identify criminals. Personal identification based on biometrics is
essential to create a Unique Identification (UID) card, which can be used
for voting in the electoral system, for accessing secured areas, and for
identification to avail government and non-government facilities.
Traditional identification mechanisms such as passwords, employee
keys, cards and Personal Identification Numbers (PINs) can easily be
stolen, forgotten or lost, whereas biometrics cannot; hence biometrics is
more secure compared to traditional methods. A good biometric system
should offer a high recognition rate, accuracy and speed, a low False
Acceptance Rate (FAR) and False Rejection Rate (FRR), robustness to
fraudulent techniques and attacks, harmlessness to the users and
acceptance by the intended users.
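These error rates can be sketched numerically. The following NumPy snippet is an illustrative sketch, not the thesis protocol: the function name, the score values and the convention that matcher outputs are distances (smaller means a better match) are all assumptions made here for demonstration.

```python
import numpy as np

def far_frr_tsr(genuine, impostor, threshold):
    """Compute FAR, FRR and TSR for distance scores at one threshold.

    Illustrative sketch: scores are distances, so smaller = better match.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    # FAR: fraction of impostor attempts wrongly accepted (distance <= threshold)
    far = np.mean(impostor <= threshold)
    # FRR: fraction of genuine attempts wrongly rejected (distance > threshold)
    frr = np.mean(genuine > threshold)
    # TSR: fraction of genuine attempts correctly accepted
    tsr = 1.0 - frr
    return far, frr, tsr

# Hypothetical genuine and impostor distance scores
far, frr, tsr = far_frr_tsr([0.1, 0.2, 0.9], [0.3, 0.8, 1.2], threshold=0.5)
```

Sweeping the threshold over a range of values traces out the FAR/FRR curves whose crossing point gives the EER reported later in the thesis.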
1.2 TYPES OF BIOMETRICS
The biometrics are broadly classified into physiological and
behavioral characteristics [3] used for automated recognition that are
based on features of fixed human body parts and behavior of a person
respectively. Classification of Biometrics is shown in the Figure 1.1. The
choice of biometrics depends on user acceptance, level of security
required, accuracy, and implementation cost and time. Among these
biometrics traits, some are required the user’s cooperation to acquire the
images for the process.
Fig.1.1: Classification of Biometrics
1.2.1 Physiological Parameters
Physiological characteristics are related to parts of the body such as
Fingerprint, Face, Ear, Facial Thermogram, Deoxyribo Nucleic Acid
(DNA), Hand and Palm print, Iris and Retinal blood vessel patterns.
(In Figure 1.1, biometrics branch into physiological traits: Fingerprint,
Iris, Face, Hand Geometry and DNA; and behavioural traits: Signature,
Keystroke, Voice and Gait.)
(i) Fingerprint
The biometric fingerprint of each person is unique and contains
minutiae, pores and a pattern of valleys and ridges. Fingerprints have
been used for identification in forensic science, criminal investigations
and civil and commercial identification since the nineteenth century.
Fingerprint verification is simple, fast, cheap and reliable for identifying
individuals compared to iris, voice, retina, face and other recognition
systems. The disadvantages of fingerprint recognition are that fingerprint
features may sometimes fail to be distinctive, since an injury to the finger
or work in the chemical industry may change the fingerprint, and that the
database requires maintenance and protection from fraudulent users. The
fingerprint has Plain Arch, Tented Arch, Radial Loop, Ulnar Loop, Plain
Whorl, Central Pocket Whorl, Double Loop and Accidental ridge
patterns, as shown in Figure 1.2. The ridge patterns are used to
differentiate the images of two persons. The fingerprint characteristics
Core, Ending Ridge, Short Ridge, Fork or Bifurcation, Delta, Hook, Eye,
Dot or Island, Crossover, Bridge, Enclosure and Speciality, shown in
Figure 1.3, are used to compute the features of a fingerprint image.
Fig.1.2: Fingerprint Ridge Patterns.
Fig.1.3: Fingerprint Characteristics
(ii) Face
The oldest person identification mechanisms are based on facial
features. The Human Visual System (HVS) provides an effective way of
recognizing other people's expressions and facial features. The HVS
normally identifies people by their faces, sometimes much better than
computer-based recognition systems. The face recognition system shown
in Figure 1.4 is nonintrusive and is the most common biometric system
used nowadays for personal identification.
Fig.1.4: A Face Scan
(iii) Ear
The shape and structure of the ear are distinct. The features of ear
size and the pattern of the ear structure are shown in Figure 1.5.
Recognition is based on matching landmark locations on the ear against
salient points on the pinna. The disadvantage is that the features of an
ear are not strictly unique.
Fig. 1.5: Ear Structure
(iv) Facial Thermogram
The heat radiated by the human body is considered as a feature for
recognition and is captured by an infrared camera. The application of the
facial thermogram is largely limited to covert recognition.
(v) DNA
The biometric based on DNA requires a sample of blood, tissue or
other bodily material and is considered a unique feature for recognition.
DNA is probably the most reliable biometric; it is in fact a
one-dimensional code unique to each person, although the DNA
patterns shown in Figure 1.6 are not unique for identical twins. DNA
sampling is rather intrusive at present, and DNA analysis has not been
sufficiently automated to rank as a real-time biometric technology. The
method has further drawbacks: (i) contamination and sensitivity, since it
is easy to steal a piece of DNA from an individual and use it for an
ulterior purpose; (ii) no real-time application is possible, because DNA
matching requires complex chemical methods involving expert skills; and
(iii) privacy issues, since a DNA sample taken from an individual is likely
to reveal susceptibility to some diseases.
Fig. 1.6: DNA Structure
(vi) Hand and Palmprint
The structures of a person's hand and palm, shown in Figure 1.7, are
characteristic, although not as distinctive as the iris or fingerprints.
Hand scanner and finger reader recognition systems measure and
analyse the pattern of the palm and hand, such as ridge length and the
orientation of ridges and valleys. The technique is relatively easy, simple
and inexpensive. The disadvantage is that hand and palm geometry is
not unique and is not stable over the years.
Fig. 1.7: Structure of Palm Image
(vii) Iris
The human iris is highly distinctive compared to other biometrics,
and the iris image can be captured with a camera. The iris pattern
contains a large amount of randomness, which makes its features
unique, and is normally formed between the third and eighth month of
fetal growth. The iris pattern remains the same throughout the life of an
individual. The pattern of an iris contains many distinctive features such
as arching ligaments, furrows, ridges, crypts, rings, corona, freckles and
a zigzag collarette. Iris scanning is more comfortable than retinal
scanning because the iris is visible from several meters away, so the
image can be collected very easily. Work on iris pattern recognition [4] is
gaining importance, and it is widely used in the medical field for
classification and localization of diabetes-related eye diseases, fatigue
testing, security monitoring, automated access control, user-specific
man-machine interfaces etc. An iris recognition system basically has
three stages, viz., preprocessing, feature extraction and matching.
Preprocessing comprises three steps: iris localization, iris normalization
and image enhancement. Localization is the process in which the inner
and outer boundaries of the iris are detected; during this process the
eyelid and eyelashes covering portions of the iris are removed.
Normalization is the technique in which Cartesian coordinates are
converted to polar coordinates, known as Daugman's rubber sheet model
[5]. Low contrast and non-uniform illumination make feature extraction
difficult, so an image enhancement technique is used to provide a
uniform distribution of illumination. The two important iris recognition
methods are Daugman's method and Wildes' method [6]. Figure 1.8
explains how iris scanners record identities.
Fig. 1.8: How Iris Scanners Record Identities
A front view of the iris is shown in Figure 1.9. The iris is perforated
close to its centre by a circular aperture known as the pupil. The function
of the iris is to control the amount of light entering through the pupil and
this is done by the sphincter and the dilator muscles, which adjust the
size of the pupil.
Fig. 1.9: Front View of Iris
(Figure 1.9 labels the crypts, radial furrows, pigment frill, pupillary
area, ciliary area and collarette.)
(viii) Retinal blood vessel patterns
The retinal vasculature is characteristic of each individual eye and is
not easy to change or replicate, as shown in Figure 1.10. Acquisition of
the biometric sample is difficult, so its use is limited compared to other
biometrics.
Fig. 1.10: Retina
1.2.2 Behavioral Parameters
Behavioural biometrics relate to the behaviour of a person and
examine individual actions such as Gait, Signature, Keystroke and
Voice.
(i) Gait
Gait, the way a person walks, is a combined spatio-temporal
biometric, as in Figure 1.11. It is not very distinctive but can
satisfactorily distinguish persons in low-security applications. The
biometric may differ over a long period of time due to fluctuations in
body weight and major injuries to the joints or brain.
Fig. 1.11: Gait Cycle
(ii) Signature
The way a person signs his or her name is acknowledged to be a
characteristic of that person. Signatures have been accepted in legal and
business communication and in government as a technique of person
verification. The signature, shown in Figure 1.12, is a behavioural
biometric that usually changes over a period of time and is influenced by
the physical and emotional conditions of the person. Automatic
signature verification systems have several applications as a symbol of
approval, predominantly in credit card validation, bank cheques, land
purchases, legal documents and security systems. The disadvantage of
such systems is signature forgery, whether random, casual or skilled.
Fig. 1.12: Signature
(iii) Keystroke
The pressure and rhythm applied by a person while typing on a
keyboard, which are characteristic of that person, are referred to as
keystroke dynamics, given in Figure 1.13. Keystroke dynamics are not
always unique to each person but provide sufficient discriminatory
information to recognize an individual, and a deviation from the typical
typing pattern is observed for some persons. The weakness is that the
keystrokes of a person using a system could be monitored and imitated.
Fig. 1.13: Keystroke dynamics
(iv) Voice
The voice biometric is a combination of behavioural and
physiological characteristics. An individual's recorded voice waveform is
shown in Figure 1.14. The features are based on the vocal tract, mouth,
nasal cavity and lips, so the shape and size of these appendages are used
in sound synthesis. The physiological characteristics of human speech
are invariant for a person, but the behavioural element changes over
time due to age, medical conditions and emotional state. Every voice is
different, even between twins, and cannot be exactly duplicated; yet
voice alone is not appropriate for exact authentication of an individual
because it is not very distinctive. A further disadvantage of the voice
biometric system is that the features of speech may vary due to noise
introduced into the system.
Fig. 1.14: Voice File
1.3 THE BIOMETRIC SYSTEM
A biometric system normally operates in one of two modes,
verification mode or identification mode, depending on the application.
Fig. 1.15: General Biometrics System
(Figure 1.15 shows an enrollment section and a test section, each
performing preprocessing and feature extraction; the enrollment section
populates the biometric database, and a classification section matches
the test features against it to give an accept/reject decision.)
Verification - a one-to-one comparison to confirm the claimed
identity of a person. The test image is compared with that person's
image in the stored database.
Identification - a one-to-many comparison of the test biometric with
the biometric database to identify an unknown person. If the
comparison of the test biometric sample with some template in the
database falls within a predefined threshold, the individual is identified.
The general biometric system is divided into three sections, viz., the
enrollment section, the test section and the classification section, as
shown in Figure 1.15. The biometric database is created and each
biometric sample is preprocessed to derive a good quality image
appropriate for processing. The feature vectors are extracted from each
biometric sample over the whole biometric database in the enrollment
section.
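The two matching modes can be sketched as follows. This is a minimal illustration, assuming feature vectors compared by Euclidean distance; the function names, identities and threshold value are hypothetical, not taken from the thesis.

```python
import numpy as np

def verify(test_vec, claimed_template, threshold):
    """1:1 comparison: accept if the distance to the claimed template is small."""
    return np.linalg.norm(test_vec - claimed_template) <= threshold

def identify(test_vec, database, threshold):
    """1:N comparison: return the best-matching identity, or None if no match."""
    best_id, best_d = None, np.inf
    for person_id, template in database.items():
        d = np.linalg.norm(test_vec - template)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id if best_d <= threshold else None

# Hypothetical two-person template database
db = {"alice": np.array([1.0, 2.0]), "bob": np.array([5.0, 5.0])}
assert verify(np.array([1.1, 2.0]), db["alice"], threshold=0.5)
assert identify(np.array([4.9, 5.1]), db, threshold=0.5) == "bob"
assert identify(np.array([9.0, 9.0]), db, threshold=0.5) is None
```

The threshold plays the same role in both modes; it is the quantity swept when computing the FAR/FRR curves described earlier.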
1.3.1 Enrolment Section
The biometric database is enrolled and features are extracted in this
section. The enrollment stage performs preprocessing and feature
extraction on the database biometrics. Since fingerprint images acquired
from sensors or other media are not assured of perfect quality,
preprocessing methods help to increase the contrast between ridges and
furrows and to keep the accuracy of fingerprint recognition high.
1.3.1.1 Biometric Database
A significant part of any biometric system for person identification is
data collection and the creation of the database. Each component of the
database is created from biometrics collected with different image
sensors, such as mobile cameras, digital cameras, iPods and web
cameras, at different sessions separated by at least two weeks, at
different intensities, angles and expressions, and with or without
accessories. The database contains different images of the same person,
and some databases are readily available on the internet.
The popular face biometric databases [7] available for research work
are Libor Spacek, AT&T (formerly Olivetti Research Laboratory), Oulu
Physics, XM2VTS, Yale, Yale-B, MIT, CMU-PIE (Pose, Illumination
and Expression), UMIST, the Bern University face database, Purdue
AR, the University of Stirling online database, FERET and the Kuwait
University face database. Fingerprint biometric databases include the
Chinese Academy of Sciences Institute of Automation (CASIA)
database, SFinGe, FVC2006, FVC2004, FVC2002 and FVC2000. Iris
biometric databases include CASIA V2 and UBIRIS; palm print
databases include the CASIA palm print image database and the Hong
Kong Polytechnic University 3D palm print database. The signature
databases are GPDS300 and SVC 2004.
1.3.1.2 Preprocessing
(i) Re-sizing the Image
The image is resized to the required size, or a Region Of Interest
(ROI) of the image is extracted.
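A minimal sketch of this step, assuming a NumPy image array: the ROI is taken by slicing and then resampled to a target size. Nearest-neighbour sampling is used here purely for illustration; real systems typically use library interpolation (e.g. bilinear).

```python
import numpy as np

def crop_resize(img, top, left, height, width, out_h, out_w):
    """Crop a region of interest and resize it by nearest-neighbour sampling."""
    roi = img[top:top + height, left:left + width]
    # Map each output coordinate back to a source coordinate in the ROI
    rows = np.arange(out_h) * roi.shape[0] // out_h
    cols = np.arange(out_w) * roi.shape[1] // out_w
    return roi[np.ix_(rows, cols)]

# Hypothetical 10x10 "image": crop a 6x6 ROI and shrink it to 3x3
img = np.arange(100).reshape(10, 10)
patch = crop_resize(img, 2, 2, 6, 6, 3, 3)
```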
(ii) Morphological Operation
Morphology is the study of the shape and form of objects.
Morphological image analysis [8] can be used to perform object
extraction; image filtering operations, such as removal of small objects or
noise from an image; image segmentation operations, such as separating
connected objects; and measurement operations, such as texture analysis
and shape description.
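The two basic morphological operations, dilation and erosion, can be sketched with plain NumPy as below. This is an illustrative implementation for binary images with a square structuring element; libraries such as scipy.ndimage provide optimized equivalents.

```python
import numpy as np

def dilate(binary, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(binary.astype(bool), pad)  # zero (False) border
    out = np.zeros_like(binary, dtype=bool)
    # A pixel becomes True if any pixel in its k x k neighbourhood is True
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

def erode(binary, k=3):
    """Erosion via duality: erode(x) = NOT dilate(NOT x) for a symmetric element."""
    return ~dilate(~binary.astype(bool), k)

img = np.zeros((5, 5), dtype=bool)
img[2, 2] = True
grown = dilate(img)            # the single pixel grows to a 3x3 block
restored = erode(grown)        # eroding the block shrinks it back
```

Combinations of these two operations (opening, closing) give the noise-removal and object-separation filters mentioned above.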
(iii) Histogram Equalization
The objective is to enhance the contrast of images using histogram
equalization. Histogram equalization expands the pixel value
distribution of an image so as to increase its perceptual information [9,
10, 11]. The original histogram of a fingerprint image is shown in Figure
1.16. The histogram after histogram equalization occupies the whole
range from 0 to 255 and the visual effect is enhanced, as shown in
Figure 1.17.
Fig. 1.16: The Original Histogram
of a Fingerprint Image
Fig. 1.17: Histogram after the
Histogram Equalization
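The spreading of the histogram over the full 0 to 255 range can be sketched as follows: intensities are mapped through the normalized cumulative histogram. This is a plain NumPy illustration of the classic formula, not the thesis implementation.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    if cdf[-1] == cdf_min:          # constant image: nothing to equalize
        return img.copy()
    # Classic equalization: map through the normalized CDF, scaled to 0..255
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Hypothetical low-contrast patch: intensities cluster around 50
img = np.array([[50, 50], [51, 52]], dtype=np.uint8)
out = equalize_hist(img)        # output now spans the full 0..255 range
```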
(iv) Binarization
Binarization converts each pixel of a given image into one bit,
assigned the value '1' or '0' depending on the mean value of all the
pixels: '1' if the pixel is greater than the mean, and '0' otherwise. Image
binarization changes an image of up to 256 gray levels into a black and
white image. The simplest way to binarize an image is to choose a
threshold value and classify all pixels with values above this threshold as
white and all other pixels as black. Fingerprint image binarization
converts the 8-bit gray fingerprint image to a 1-bit image with value 0
for ridges and value 1 for furrows. After the operation, ridges in the
fingerprint are highlighted in black while furrows are in white, as shown
in Figure 1.18.
Fig. 1.18: The Fingerprint Image after Adaptive Binarization
(a) Binarized Image (left), (b) Enhanced Gray Image (right)
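The mean-threshold rule described above is one line of NumPy. This is an illustrative sketch with a hypothetical test patch; adaptive binarization as used in the thesis would instead compute the threshold over local windows.

```python
import numpy as np

def binarize(gray):
    """Convert a grayscale image to 1-bit using its global mean as threshold.

    Pixels above the mean map to 1, the rest to 0.
    """
    return (gray > gray.mean()).astype(np.uint8)

# Hypothetical 2x2 patch: mean is 120, so 200 and 240 map to 1
gray = np.array([[10, 200], [30, 240]], dtype=np.uint8)
bits = binarize(gray)
```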
(v) Fingerprint Image Enhancement
The quality of the fingerprint image is enhanced to make the image
clearer for further processing. Because images acquired from sensors or
other media are not assured of perfect quality, enhancement methods
that improve the contrast between ridges and furrows are very useful for
keeping fingerprint recognition accurate. Through fingerprint
enhancement, the image becomes clearer for further processing and
helps to produce accurate results.
1.3.1.3 Feature Extraction
Feature extraction is the process by which the key features of the
samples are selected. The feature extraction process depends on a set of
algorithms, and the method chosen is based on the type of biometric
identification used [12]. A fingerprint feature extraction program
locates, measures and encodes ridge endings and bifurcations in the
fingerprint.
Some examples of feature extraction are:
In a voice recording, filtering out certain frequencies and patterns.
From a digital picture, extracting particular measurements of image
patterns.
For iris prints, encoding the mapping of furrows and striations in the
iris [13].
The features of each biometric used to identify a person are computed
based on spatial domain, transform domain and fusion techniques.
(i) Spatial Domain Techniques
Biometric features are extracted directly in the spatial domain by
applying spatial domain techniques to the image pixels, without
converting to the frequency domain.
Edge detection techniques: A colour image is converted into a gray
scale image for easy processing and then into a binary image for edge
detection. Edge detection operators such as Canny, Sobel, Prewitt and
Roberts are used to obtain the edges of an image. Features are extracted
by measuring the different lengths of edges in an image [14].
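As one concrete instance of these operators, a Sobel gradient-magnitude map can be sketched with plain NumPy. This is an illustrative valid-region implementation (the output shrinks by one pixel per side); production code would use a library convolution with border handling.

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude of a 2-D image using the 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                       # vertical-gradient kernel
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Correlate the interior with both kernels by summing shifted slices
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Hypothetical image with a vertical step edge between columns 1 and 2
img = np.zeros((5, 5))
img[:, 2:] = 1.0
mag = sobel_magnitude(img)          # large responses along the edge
```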
Principal Component Analysis (PCA): The leading eigenvectors of
the covariance matrix of high-dimensional data define the best
low-dimensional subspace and are called the principal components of
the covariance matrix [15].
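An eigenfaces-style projection onto these components can be sketched as below, assuming samples are stored as rows. The data and the choice of two components are hypothetical; only the covariance-eigendecomposition idea comes from the text.

```python
import numpy as np

def pca_project(X, k):
    """Project row-vector samples onto the top-k principal components.

    X is (n_samples, n_features); components are eigenvectors of the
    covariance matrix with the largest eigenvalues.
    """
    Xc = X - X.mean(axis=0)                   # center the data
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    components = vecs[:, ::-1][:, :k]         # take the top-k, descending
    return Xc @ components

# Hypothetical data: 20 samples of 5 features, reduced to 2 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
Y = pca_project(X, 2)
```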
Independent Component Analysis (ICA): ICA [16] is a statistical
latent variable model for computing independent components.
Line Edge Map (LEM): LEM [17] extracts lines from an image edge
map and performs geometrical feature matching. Edge information is a
useful object representation feature owing to its invariance to
illumination changes, its low memory requirement and the high
recognition performance of template matching. LEM integrates the
structural information of an image with its spatial information by
grouping the pixels of the image edges into line segments.
3D Morphable Model: A morphable model [18] is a vector
representation of images constructed such that any convex combination
of the shape and texture vectors of a set of examples describes a realistic
image.
Hidden Markov Model (HMM): A discrete HMM [19] is viewed as a
probabilistic model whose states are not explicitly observed. In an
HMM, each state has a probability distribution function, and symbols
are emitted based on the probability of occurrence of that symbol given
the previous states.
(ii) Transform Domain Techniques:
The biometric traits are converted from the spatial domain into a
transform domain, i.e., the frequency domain.
Fast Fourier Transform (FFT): The FFT is applied to the spatial
domain image to obtain FFT coefficients. The features extracted from
the FFT coefficients [20] are the real part, imaginary part, magnitude
and phase angle. The FFT computation is fast compared to the direct
Discrete Fourier Transform (DFT), since the number of multiplications
required to compute an N-point transform is only (N/2)log₂N for the
FFT as against N² for the direct DFT.
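FFT-based feature extraction can be sketched as follows: compute the 2-D FFT, take the magnitude spectrum, and keep a low-frequency block as the feature vector. The helper name and the choice of a 16 x 16 central block are illustrative assumptions; the thesis selects its own coefficients.

```python
import numpy as np

def fft_features(img, k=16):
    """Magnitude-spectrum features of an image via the 2-D FFT.

    Keeps the k x k low-frequency corner of the shifted magnitude
    spectrum as a compact feature vector.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(img))   # DC moved to the center
    mag = np.abs(spectrum)                         # magnitude; np.angle gives phase
    cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
    half = k // 2
    return mag[cy - half:cy + half, cx - half:cx + half].ravel()

# Hypothetical 32x32 image block
img = np.random.default_rng(1).random((32, 32))
feat = fft_features(img, k=16)
```

The real part, imaginary part and phase angle mentioned above are obtained from the same `spectrum` array via `.real`, `.imag` and `np.angle`.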
Discrete Cosine Transform (DCT): The technique is widely used for image compression. The DCT [21], [22] is a linear, invertible frequency domain transform that expresses the pixel intensity values of an image as a sum of cosine functions oscillating at different frequencies. The original spatial domain image is converted into the frequency domain using the DCT, and the original image is reconstructed from the DCT coefficients by applying the inverse DCT. The transform domain image represents the original image in terms of its DCT coefficients and reflects the frequencies present in it. The first DCT coefficient has the lowest frequency, forms the DC coefficient, and normally carries the most significant information of the original image. The last DCT coefficients correspond to the highest frequencies, contain the detailed information of the signal, and are usually dominated by noise. The remaining coefficients carry the frequency components of the original signal between these very low and very high frequencies.
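The forward/inverse relationship can be sketched with a hand-rolled orthonormal 1-D DCT-II and its inverse; the sample values are illustrative.

```python
import numpy as np

# Sketch of the 1-D DCT-II and its inverse: the original signal is
# recovered exactly from the DCT coefficients.
def dct(x):
    N = len(x)
    k = np.arange(N)
    # basis[k, n] = cos(pi * (2n + 1) * k / (2N))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C = basis @ x
    C[0] *= np.sqrt(1 / N)            # DC coefficient scaling
    C[1:] *= np.sqrt(2 / N)           # AC coefficient scaling
    return C

def idct(C):
    N = len(C)
    n = np.arange(N)
    basis = np.cos(np.pi * (2 * n[:, None] + 1) * n[None, :] / (2 * N))
    scale = np.full(N, np.sqrt(2 / N)); scale[0] = np.sqrt(1 / N)
    return basis @ (C * scale)

x = np.array([52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0])
C = dct(x)
# C[0] is the DC coefficient, carrying most of the signal's energy.
print(np.allclose(idct(C), x))
```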
Discrete Wavelet Transform (DWT): The wavelet transform represents a signal in terms of a mother wavelet using dilations and translations [23]. Wavelets are oscillatory functions of finite duration in both time and frequency, and hence represent a signal in both the spatial and frequency domains. The features extracted by the wavelet transform give better recognition results and separate the low-frequency and high-frequency components into the approximation band and the detail bands respectively.
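The split into an approximation band and detail bands can be sketched with a single-level 2-D Haar DWT; the 4×4 image is an illustrative placeholder.

```python
import numpy as np

# Single-level 2-D Haar DWT sketch: averages form the approximation
# (low-frequency) band, differences form the detail (high-frequency) bands.
def haar_dwt2(img):
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # low-pass along rows
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # high-pass along rows
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation band
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal details
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical details
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal details
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)
print(LL.shape)   # each band is half the original size in each dimension
```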
Complex Wavelet Transform (CWT): The CWT [24] is a two-dimensional wavelet transform that provides multiresolution analysis and improves on the DWT. The CWT provides a high degree of shift invariance, at the cost of more redundancy.
Dual Tree Complex Wavelet Transform (DT-CWT): The DT-CWT decomposition eliminates disadvantages of the DWT, DCT and Gabor Wavelet Transform, and gives better results in feature extraction [25], [26]. Two wavelet trees are computed in parallel, forming Hilbert pairs: the two trees of the DT-CWT give the real and imaginary parts of the complex wavelet, and their filters satisfy the half-sample delay condition.
(iii) Fusion Techniques:
The spatial domain features extracted by different techniques are fused to obtain the final spatial domain feature vector.
The transform domain features are extracted using various transformation techniques, and the different features are combined to generate the final feature vector.
The spatial and transform domain features are combined with different techniques to obtain hybrid features.
The different biometric traits such as speech, face, iris and fingerprint are also fused to identify a person [27].
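Feature-level fusion of two traits can be sketched as normalisation followed by concatenation; the feature vectors below are illustrative placeholders, not outputs of any particular extractor.

```python
import numpy as np

# Feature-level fusion sketch: min-max normalise each trait's features,
# then concatenate them into one final feature vector.
def minmax(v):
    lo, hi = v.min(), v.max()
    return (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)

finger_feat = np.array([10.0, 250.0, 37.0, 190.0])   # e.g. transform-domain features
face_feat = np.array([0.2, 0.9, 0.4])                # e.g. spatial-domain features

fused = np.concatenate([minmax(finger_feat), minmax(face_feat)])
print(fused.shape)    # one combined vector passed to the classifier
```

Normalising before concatenation keeps one trait's larger numeric range from dominating the fused vector.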
1.3.2 Test Section
In this section, one sample of the biometric test image is considered to authenticate a person. The pre-processing and feature extraction on the test image are the same as in the enrolment section.
1.3.3 Classification Section
This section discusses how the given test biometric data is compared with the database biometrics to authenticate a person. Distance measures such as Euclidean Distance (ED) [28], Hamming distance [29] and Chi-square [30] are used for comparisons. Classifiers such as Support Vector Machine (SVM) [31], Artificial Neural Network (ANN) [32], Random Forest (RF) [33], [34], [35], Multiple Classifier Systems (MCS) [36], Template Matching [37], Graph Matching [38] and Mahalanobis Distance [39] are used for matching.
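Distance-based matching can be sketched as a nearest-neighbour search over the enrolled feature vectors using the Euclidean distance; the database entries here are illustrative.

```python
import numpy as np

# Euclidean-distance matching sketch: the test feature vector is compared
# with every enrolled vector and the nearest identity is returned.
db = {
    "person_A": np.array([0.1, 0.9, 0.3]),
    "person_B": np.array([0.8, 0.2, 0.7]),
}

def match(test_vec):
    dists = {pid: np.linalg.norm(test_vec - v) for pid, v in db.items()}
    return min(dists, key=dists.get)      # smallest distance = best match

print(match(np.array([0.15, 0.85, 0.35])))
```

In a verification system the smallest distance would additionally be compared against a threshold before the match is accepted.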
1.4 DESIGN ISSUES IN BIOMETRIC SYSTEM
A biometric trait can be used as a biometric characteristic to identify an individual as long as it satisfies the following parameters. Several biometric techniques are commercially available, and new techniques are at the research and development stage. Any human physiological or behavioural characteristic can become a biometric provided the following properties are fulfilled:
Universality: Human beings must have some common body parts such as the face, finger, palm, iris etc. It is really difficult to get 100% coverage: there are mute people, people without fingers and people with injured eyes. A good biometric system should handle all types of people.
Collectability: The features of human body parts must be acquirable and quantitatively measurable. Face recognition systems are not intrusive and obtaining a face image is easy. In contrast, DNA analysis requires blood or another bodily sample, and the retina scan is rather intrusive as well.
Distinctiveness: The patterns of each biometric trait of any two
persons in the world should be distinct and different in terms of both
physiological and behavioral characteristics.
Permanence: The physiological and behavioural characteristics of biometric traits should not change over a period of time pertaining to the recognition criterion. While the iris usually remains stable over decades, a person's face changes significantly with time. The signature and its dynamics may change as well, and the finger is subject to injuries.
Acceptability: In general people need to accept a particular biometric
identifier for day-to-day business or any related transactions.
Uniqueness: The biometric characteristic should differentiate effectively between persons. Biometric traits such as the face and DNA may not be useful for identical twins.
Circumvention: This refers to how difficult it is to fool the system by
fraudulent techniques. An automated access control system that can be
easily fooled with a fingerprint model or a picture of a user’s face does
not provide much security.
1.5 APPLICATIONS OF BIOMETRIC SYSTEMS
The need for biometric security systems is going up; hence fully automated personal identification and authentication of human beings has attracted extensive attention over the past two decades. Some of the biometric system applications are listed below.
(i) The biometric systems have wide range of applications in different
areas such as human-computer interaction, image processing, film
processing, security applications, computer access control, criminal
screening and surveillance.
(ii) Banking systems
(iii) Regular attendance monitoring and authentication of the
employees using any of the biometric traits.
(iv) Airport checking for personal authentication
(v) Home security applications
(vi) Electronic voting system
(vii) Military forces to authenticate refugees
(viii) Using a pre-stored image database, the biometric recognition system
is able to verify and authenticate one or more persons in the
database.
(ix) Biometrics is one of the major research topics in current fields such as neural networks, man-machine intelligence systems, robotics, computational vision, computer graphics, image processing and psychology.
1.6 DEFINITIONS
(i) Pixels: A pixel (picture element, pel) is a single addressable point of an image, the smallest controllable screen element, represented using dots or squares. Each pixel has its own address, which corresponds to its coordinates in a two-dimensional grid.
(ii) Histogram: The histogram of an image is the graphical representation of its digital pixels, showing the number of pixels at each intensity level found in the image.
(iii) Histogram Equalization: Histogram equalization is a technique which enhances the dynamic range of an image by reassigning the intensity values of the pixels in the input image. The image obtained after histogram equalization has an approximately uniform distribution of pixel intensities.
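Histogram equalization can be sketched by mapping each pixel through the cumulative distribution function (CDF) of the intensity histogram; the 4×4 image is an illustrative placeholder.

```python
import numpy as np

# Histogram equalization sketch: build the intensity histogram, take its
# CDF, and use it as a lookup table to remap every pixel.
def hist_equalize(img, levels=256):
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size                   # cumulative distribution
    lut = np.round((levels - 1) * cdf).astype(img.dtype)
    return lut[img]                                  # map each pixel through the CDF

img = np.array([[52, 55, 61, 59],
                [79, 61, 76, 61],
                [110, 145, 41, 70],
                [92, 120, 101, 74]], dtype=np.uint8)
out = hist_equalize(img)
print(out.min(), out.max())   # intensities now stretched towards the full range
```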
(iv) Threshold: The distances between the test image and the database images are recorded as an Error Vector (EV) using a distance formula, and the average of the EV is taken as the threshold value for declaring recognition.
(v) False Rejection Rate (FRR): The ratio of the number of false rejections to the number of identification attempts, given in Equation 1.1.

FRR = (No. of authorised persons rejected by the system) / (Total no. of identification attempts) ... (1.1)
(vi) False Acceptance Rate (FAR): The ratio of the number of false acceptances to the number of identification attempts, given in Equation 1.2.

FAR = (No. of unauthorized test images accepted) / (Total no. of identification attempts) ... (1.2)
(vii) Equal Error Rate (EER): The rate at which the accept and reject errors are equal; it is read from the Receiver Operating Characteristic (ROC) curve at the point where FAR and FRR have the same value. The system with the least EER is the most accurate.
(viii) Correct Recognition Rate (CRR): The CRR measures the percentage match rate regardless of the FRR, and is given in Equation 1.3.

CRR = (No. of test images matched correctly) / (Total no. of persons in the database) ... (1.3)
(ix) Fingerprint: The impression of a finger acquired by a digital scanner.
(x) Minutiae: The ridge bifurcations and ridge endings in a fingerprint image.
(xi) Ridge bifurcation: The point where a ridge splits into two ridges.
(xii) Ridge termination: The point where a ridge ends.
(xiii) False Minutiae: Points which are incorrectly identified as minutiae.
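The FRR and FAR of Equations 1.1 and 1.2 can be sketched by thresholding match distances for genuine and impostor attempts; the distance values below are illustrative.

```python
import numpy as np

# FRR/FAR sketch: a genuine attempt is rejected if its distance exceeds
# the threshold; an impostor attempt is accepted if its distance does not.
def far_frr(genuine_d, impostor_d, threshold):
    frr = np.mean(np.asarray(genuine_d) > threshold)    # authorised persons rejected
    far = np.mean(np.asarray(impostor_d) <= threshold)  # unauthorised images accepted
    return far, frr

genuine_d = [0.10, 0.20, 0.35, 0.15]    # distances for authorised attempts
impostor_d = [0.60, 0.55, 0.30, 0.80]   # distances for unauthorised attempts

far, frr = far_frr(genuine_d, impostor_d, threshold=0.32)
print(far, frr)
```

At the threshold chosen here FAR and FRR coincide, which is exactly the operating point the EER definition describes.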
1.7 MOTIVATION
The biometric validation of a person has several advantages over earlier validation systems such as smart cards, PINs, passwords, visiting cards, credit cards and debit cards. Biometric identification is based on several parts of the human body and on behavioural characteristics, such as the face, fingerprint, iris, retina, palm print, hand geometry, DNA, voice and signature. Hence I have been motivated to design and implement a robust fingerprint and face biometric system with a high recognition rate and low FRR and FAR under variations in the biometric images.
1.8 ORGANIZATION OF THE THESIS
The organization of the thesis is as follows. Chapter one presents a detailed introduction to biometrics, the biometric system, applications of biometrics, design issues related to biometrics and the motivation for the research work. A detailed literature survey of existing fingerprint and face recognition models using different techniques is presented in chapter two. Transform domain fingerprint identification based on the DT-CWT is described in chapter three. Face recognition based on a transform domain technique is given in chapter four. Finally, conclusions, contributions and future work are presented in chapter five.
CHAPTER 2
LITERATURE SURVEY
2.1 INTRODUCTION
In this chapter a review of the current literature on fingerprint and face recognition is presented. It includes a review of the different techniques and algorithms used for authenticating and identifying a human being using the fingerprint and face biometrics.
2.2 REVIEW OF FINGERPRINT BIOMETRIC TRAIT
Michael Kucken and Newell [40] discussed hypotheses on the development of epidermal ridges, viz., (i) the epidermal ridge pattern is established as a result of a buckling instability acting on the basal layer of the epidermis, resulting in the primary ridges; (ii) the buckling process underlying fingerprint development is controlled by the stresses formed in the basal layer and not by the curvatures of the skin surface; and (iii) the stresses that determine ridge direction are themselves determined by boundary forces acting at the creases and nail furrow, and by normal displacements, which are most pronounced close to the ridge. Shlomo Greenberg et al. [41] proposed two methods for fingerprint image enhancement. The first is carried out using local histogram equalization, Wiener filtering and image binarization. The second method is a unique anisotropic filter for direct grayscale enhancement.
Bazen and Gerez [42] presented methods for the estimation of a high resolution directional field from fingerprints. The directional field detects the singular points and their orientations. Yun and Cho [43] proposed an adaptive preprocessing method, which extracts five features from the fingerprint images, analyzes image quality with a clustering method, and enhances the images according to their characteristics. The preprocessing is performed after distinguishing the fingerprint image quality according to its characteristics.
Brankica Popović and Maskovic [44] used multiscale directional information obtained from the orientation field image to filter the spurious minutiae. Feature extraction in a pattern recognition system extracts information from the input data and depends greatly on the quality of the images; the multiscale directional information is estimated based on orientation field estimation. Afsar et al. [45] presented a minutiae based Automatic Fingerprint Identification System. The technique is based on the extraction of minutiae from the thinned, binarized and segmented version of a fingerprint image. The system uses fingerprint classification for indexing during fingerprint matching. Jagadeeswar Reddy et al. [46] presented fingerprint denoising using both the Wavelet and Curvelet Transforms. The search-rearrangement method performs better than minutiae based matching for fingerprint binary constraint graph matching, since it does not require implicit alignment of the two fingerprint images.
Zebbiche and Khelifi [47] presented biometric images as one Region of Interest (ROI). The scheme consists of embedding the watermark into the ROI of fingerprint images; the Discrete Wavelet Transform and Discrete Fourier Transform are used for the proposed algorithm. Bhupesh Gour et al. [48] introduced a midpoint ridge contour representation in order to extract the minutiae from fingerprint images. A colour coding scheme is used to scan each ridge only once. Seung Hoon Chae and Jong Ku Kim [49] proposed fingerprint verification in which both minutiae and ridge information are used to reduce the errors due to incomplete alignment or distortion.
Aparecido Nilceu Marana and Jain [50] proposed ridge based fingerprint matching using the Hough transform. The major straight lines that match the fingerprint ridges are used to estimate the rotation and translation parameters. Anil Jain et al. [51] described the use of the logistic regression method to integrate multiple fingerprint matching algorithms. The integration of Hough transform based matching, string distance based matching and 2D dynamic programming based matching using logistic regression minimizes the False Rejection Rate for a specified level of False Acceptance Rate. Fanglin Chen and Jie Zhou [52] proposed an algorithm for reconstructing the fingerprint orientation field from the saved minutiae, which is used in the matching stage for comparison with the minutiae of the query fingerprint. The orientation field computed from the saved minutiae is a global feature and the saved minutiae are local features; together they provide more information.
Chunxian Ren and Yilong Yin [53] used a hybrid algorithm based on a linear classifier to segregate foreground and background blocks. The pixel-wise classifier uses three pixel features: coherence, mean and variance. Hartwig Fronthaler and Klaus Kollreider [54] used a multigrid representation of a discrete differential scale space as the enhancement strategy of a fingerprint recognition system. The fingerprint image is decomposed using a Laplacian pyramid, as the relevant information is concentrated within a few frequency bands. Gaussian directional filtering is used to enhance the ridge-valley pattern of the fingerprint using 1-D filtering on the higher pyramid levels, and linear symmetry features are used to extract the local ridge-valley orientation. Chaohong Wu and Sergey Tulyakov [55] proposed a Harris corner point based fingerprint segmentation method which strongly discriminates between the foreground and background features of the fingerprint. Liu Wei and Zhou Cong [56] proposed a Gradual Segmentation algorithm and multi-segmentation features for fingerprint image segmentation. The fingerprint region is obtained using Gradual Segmentation, and the recoverable regions are segmented using the Multi Segmentation feature algorithm.
Hemanth Krishnappa and Hongyu Guo [57] proposed an algorithm for fingerprint verification using mutual information. The input image is subjected to preprocessing using filtering and normalization, which involves translation and rotation. The mutual information at each step of rotation and translation is calculated to find the best alignment of the fingerprints by maximizing the mutual information. Swapnali Mahadik et al. [58] described an alignment based minutiae matching algorithm. The minutiae extraction involves filtering, binarization, orientation estimation, region of interest selection, thinning and minutiae extraction. In the matching stage the images are subjected to translation, rotation and scaling.
Yi Chen and Anil K. Jain [59] proposed an algorithm based on the fingerprint features, viz., minutiae and ridges, pattern, and pores. The correlation among the fingerprint features and their distributions are considered for the model. Jong Ku Kim et al. [60] presented a study on hybrid fingerprint matching methods in which the minutiae based and image based fingerprint verification methods are implemented together. Shapes in the fingerprint such as square, diamond, cross and dispersed cross are used for matching. Manvjeet Kaur et al. [61] proposed a fingerprint verification system adopting many methods to build a minutiae extractor and a minutiae matcher. The method employs some changes, such as segmentation using morphological operations, improved thinning, minutiae marking with special Triple Branch Counting, minutiae unification by decomposing a branch into three terminations, and matching in a unified x-y co-ordinate system. Liu Wei [62] described Rapid Singularity Searching for fingerprint classification. The algorithm uses the Delta Poincare Index and a Rapid Classification algorithm to classify the fingerprints into five classes. The singularity is obtained by a detection algorithm which searches the direction field for the larger directional change. Arun Ross et al. [63] proposed a hybrid fingerprint matcher which employs the combination of ridge strengths and a set of minutiae points.
Haiyun Xu et al. [64] introduced two feature reduction algorithms, Column Principal Component Analysis and Line Discrete Fourier Transform feature reduction, which can efficiently compress the template size at a rate of 94%. They also presented the performance of the spectral minutiae fingerprint recognition system and showed a matching speed of 125,000 comparisons per second on a PC with an Intel Pentium D 2.80 GHz processor and 1 GB of RAM. Jian-wei Du et al. [65] proposed fractal features for the glycyrrhiza fingerprints of medicinal herbs, using Empirical Mode Decomposition (EMD) to obtain the Intrinsic Mode Functions (IMFs) from high to low frequency. The EMD fractal features of the medicinal herb fingerprints are then extracted by computing the fractal dimensions of every IMF, and a novel approach is applied for the recognition of the glycyrrhiza fingerprints. Xiao Xiaohong and Niu Jiping [66] described the principle of EMD and different types of its extensions, such as BEMD, CEMD and EEMD, gave its application in image fusion, and described the advantages of and difficulties in fusing images. Li Lin and Ji Hongbing [67] used an improved EMD method for signal feature extraction. The optimal envelope mean is obtained by an inverse EMD filter scheme, and a new sifting stop criterion is proposed to guarantee the orthogonality of the sifting results. Numerical simulation and experimental results demonstrate the validity of the improved method.
Tachaphetpiboont and Amornraksa [68] proposed a feature extraction method based on the FFT for fingerprint matching. The recognition rate obtained from the proposed method is evaluated by a k-NN classifier, and the amount of time required for extraction and verification is small in this approach. Keming Mao et al. [69] proposed a fingerprint image segmentation method aimed at dealing with low quality fingerprint images using two new features, intra-consistency and extra-consistency. In the segmentation stage, the frequency and orientation of the local ridges are obtained. Intra-consistency is used for further partitioning, while extra-consistency reflects the relationship between a centre block and its neighbouring blocks and is used to validate the shadow and boundary areas between foreground and background.
A Gabor filter method is utilized to reduce the computing cost. Baig et al. [70] proposed a robust segmentation algorithm which uses the strength of Harris corners for segmentation. The algorithm employs dynamic thresholding and simple binary operations to provide highly accurate segmentation of the image, even for low quality images. Furthermore, the algorithm is also effective in segmenting corrupt areas within fingerprint images.
Aayush Sharma et al. [71] proposed an approach for performance optimization that fuses two or more biometric matcher technologies to add to the performance of the individual systems. The authors present a new step-integration based fusion method for multimodal biometric technologies, based upon eliminative machine learning. The individual matcher systems are integrated in steps, eliminating unwanted user classes at each matcher step. The method achieves high accuracies and recognition rates while achieving low processing times at the same time.
Teoh and Pang [72] proposed a touch-less fingerprint recognition system using a digital camera. They propose a pre-processing technique comprising low-pass filtering, segmentation and Gabor enhancement for their own-designed touch-less sensor. The feature extraction is done by a Gabor filter, and favourable verification results are attained with a Support Vector Machine.
Tiande Guo et al. [73] proposed a novel segmentation approach which is very different from the traditional segmentation methods. The method is based on the Local Binary Pattern (LBP), which has good texture discrimination performance. It uses the LBP operator to transform the image signal and analyzes the histogram of uniform LBP information to decide whether the corresponding image block should be segmented or not. The most attractive advantages of LBP are its invariance to monotonic gray scale changes, low computational complexity and convenient multi-scale extension.
Chen Yu et al. [74] proposed a system to authenticate persons in a large scale online examination system, presenting a novel principal component analysis neural network algorithm for fingerprint recognition. The algorithm meets the convergence conditions and simplifies the complex pre-processing steps, greatly reducing the computation and improving the recognition speed, and provides a new and effective method to obtain a higher recognition rate for online fingerprint examination.
Conti et al. [75] proposed an interface for the use and management of biometric recognition systems. They present an interface which allows an intuitive parameterization of the functions and procedures of the system algorithms, for the optimized management of large databases by unspecialized operators. It is based on an ad-hoc language, a subset of natural language, that makes accessible all the functionalities offered by a biometric recognition system, and it also allows a parameterization of the functions and procedures of the system algorithms for better management of large databases by specialized users. Raffaele Cappelli et al. [76] studied the spatial distributions of singularity locations in nature and derived, from a representative data set of labeled samples, the probability density functions of the four main fingerprint classes. The singularity positions are aligned with respect to the fingerprint centre and scaled according to the average width.
Chunfeng Hu et al. [77] proposed fingerprint alignment using special ridges. The ridge with the maximum sampled curvature is used for the initial alignment, and the other corresponding ridges are then aligned using different features. The speed of authentication depends on the algorithm used for the alignment. The two kinds of characteristics in a fingerprint authentication method are global features, the macro details of the whole image, and local features, those of a point, ridge or block. Compared to local features, global features are fast but not robust: some local features may be missing, which affects the accuracy of the alignment. In fingerprint verification the minutiae features are more important, since they contain most of the fingerprint's individuality. Dadgostar et al. [78] employed wavelet based features for fingerprint recognition. The proposed method is assessed on images from the biodata database and has lower computational complexity and higher accuracy rates than conventional methods based on texture features. The method has both frequency-selective and orientation-selective properties and has optimal joint resolution in both the spatial and frequency domains, so Gabor filters are used to remove noise and preserve the true ridge structures.
Young Li et al. [79] proposed two algorithms based on match scores, namely maximum match scores and greedy maximum match scores. These algorithms are flexible and can be used in various biometric systems. Registration and authentication are the two stages involved in a biometric system; in registration, multiple samples of the same biometric trait are captured. Xiaohui Ren et al. [80] implemented multi-fingerprint fusion based on Dempster-Shafer (D-S) evidence theory with an improved D-S combination rule (D-SCR), combining fingerprint information for personal identification. The method improves on the traditional D-SCR, and the experimental results show that it can effectively improve the correct recognition rate of a multi-fingerprint identification system. Anil Jain et al. [81] proposed a hierarchical matching system that utilizes features at all three levels, extracted from 1,000 ppi fingerprint scans. Level 3 features, including pores and ridge contours, are automatically extracted using Gabor filters and the wavelet transform, and are locally matched using the Iterative Closest Point (ICP) algorithm. The experiments show that Level 3 features carry significant discriminatory information: there is a relative reduction of 20 percent in the equal error rate (EER) of the matching system when Level 3 features are employed in combination with Level 1 and Level 2 features. The experimental results demonstrate that Level 3 features should be examined to refine the minutia correspondences established at Level 2. Consistent performance gains were observed in both high quality and low quality images, suggesting that automatically extracted Level 3 features can be informative and robust.
Sunny Arief Sudiro et al. [82] proposed a technique to enhance fingerprint images so as to obtain good quality images and good results. The enhancement process, in a simple minutiae detection algorithm using crossing numbers on the valley structure, improves the detection of true minutiae. In the experiment, the process reduces the average number of minutiae points from 854 to 59 and retains the good minutiae points, even though the computation time increases 2.5 times. In order to speed up the computation, the authors intend to implement the algorithm on an embedded hardware system using an FPGA device.
Raffaele Cappelli et al. [83] proposed a novel approach to reconstruct fingerprint images from standard templates and investigated to what extent the reconstructed images are similar to the originals. The proposed reconstruction approach could be further improved to make the success probability of a masquerade attack even higher; the authors encourage developers of algorithms and systems to take this kind of attack seriously and to implement specific protections and countermeasures. Thi Hoi Le et al. [84] proposed an approach to improve the performance of the fingerprint indexing process. Moreover, the technique itself provides a privacy property for the fingerprint system which is not addressed by previous indexing techniques. A fingerprint indexing algorithm selects the most likely fingerprint candidates and sorts them by their similarity to the query fingerprint template.
Seong-Jin Kim et al. [85] proposed a 200 × 160 pixel CMOS fingerprint System-on-Chip (SoC) with a local adaptive pixel scheme and embedded column-parallel processors for performing 2-D digital image processing for fingerprint recognition. The SoC can generate robust fingerprint images enhanced by the local adaptive scheme and on-chip signal processing in both the analogue and digital domains. The on-chip self-configurable column-parallel image processors can provide adaptive filter operations in a small form factor by leveraging the advantages of parallelism and flexibility. Liu Wenzhou et al. [86] proposed a method for extracting fingerprint minutiae that is applied, after acquiring the binary image, in a fingerprint verification system for examinees, whose principle, basic structure, and software and hardware design are discussed. They introduced a method that thins the ridge and valley points when the fingerprint feature points are extracted, and an eight-field fingerprint trail algorithm is used to extract the fingerprint feature points concretely. Faisal Farooq et al. [87] proposed a method to address the lack of anonymity and revocability of biometric templates. They introduced binary string representations of fingerprints that obviate the need for registration and can be directly matched. It is computationally infeasible to invert the representation to obtain the original fingerprint, and the method indeed has the properties of verifiability and revocability; the template can be further secured by hashing it using an existing technique such as MD5. Baig et al. [88] proposed a robust segmentation algorithm which uses the strength of Harris corners for segmentation. The algorithm employs dynamic thresholding and simple binary operations to provide highly accurate segmentation of the image, even for low quality images. Furthermore, the algorithm is also effective in segmenting corrupt areas within fingerprint images.
Nicolas Galy et al. [89] proposed a full fingerprint verification system which is composed of a tactile fingerprint sensor, integrated read-out and conversion circuits, and dedicated recognition algorithms. The use of a single sensing line requires the user to sweep a finger along the sensor. This sensing scheme produces fingerprint images with several distortions that need further image processing to allow efficient fingerprint recognition. To get rid of the distortion, a bank of directional Log-Gabor masks is used and a new distortion model has been implemented. Performance evaluation has given encouraging results and has shown that some improvements have to be made regarding the quality of the images produced by the sensor. Falguera et al. [90] proposed the fusion of a minutiae based and a ridge based fingerprint recognition method at the rank, decision and score levels. They emphasized that the combination of methods that use different kinds of information could be an interesting research area. The authors intend to evaluate the fusion technique on a database whose images were obtained by a fingerprint sensor with a small sensing area, and to test the fusion using other minutiae based methods.
Lopez et al. [91] proposed the implementation of a whole minutiae extraction fingerprint algorithm using a Spartan-3 FPGA, as an appropriate solution for portable devices and the low cost consumer market. The experimental results show that the minutiae of a fingerprint are obtained in 988 ms when an image of 256 × 256 pixels is analysed. They present a hardware-software co-design. Daniel Ashlock et al. [92] analysed the evolution of strategies for playing the iterated prisoner's dilemma (IPD) at three different noise levels using fingerprinting and other techniques, including a novel quantity, evolutionary velocity, derived from fingerprinting. A substantially lengthened version of the experiments was rerun to permit the gathering of new types of data, and several additional analyses were performed. The new quantity, evolutionary velocity, is used to expose an unexpected and critical difference between agents evolved with and without noise using the finite-state representation. The paper also highlights two distinct forms of competitive ability, the ability to dominate an evolving population and the ability to win a contest against a diverse selection of opponents; these two abilities, while not opposites, are shown to involve some degree of trade-off.
Dalwon Jang et al., [93] propose a method for learning a distance
metric in a fingerprinting system which identifies a query content by
measuring the distance between its fingerprint and a fingerprint stored in
a database. By learning a distance metric from training data consisting of
original and distorted contents, the identification performance can be
improved. Fingerprints of original contents are assumed to be the
fingerprints stored in the database, and fingerprints of distorted contents
are assumed to be the query fingerprints. For correct identification, the
distance between the fingerprint of a distorted content and the fingerprint
of the original content from which it was obtained should be small, and
a large distance margin should be established between the fingerprints of
distorted and non-corresponding contents. The paper shows that distance
metric learning improves the fingerprinting performance. Woo Kyu Lee
et al., [94] propose a fingerprint recognition algorithm developed using
the wavelet transform and the dominant local orientation, which is
derived from the coherence and the gradient of Gaussian. By using the
wavelet transform, the algorithm does not require conventional
preprocessing procedures such as smoothing, binarization, thinning and
restoration. Their study has shown that while the rate of Type II error is
held at 0.0%, the rate of Type I error turns out to be 2.5% in real time.
Xiangping Meng et al., [95] propose an effective fingerprint
recognition algorithm composed of fingerprint image pre-treatment,
fingerprint characteristic point extraction and fingerprint matching. The
acquired binary image is thinned along both ridges and valleys; ridge
bifurcations and valley bifurcations are then extracted as the key
minutiae from the ridge-thinned and valley-thinned images. The ridge-line
and valley-line branch points are used to match fingerprints in the
matching process, where a coordinate transformation maps the
unmatched detail points and the template detail points into the same
coordinate system, the match score between them is calculated, and
finally it is decided whether the two fingerprints come from the same
finger or not.
Jiao Ruili et al., [96] propose an automatic fingerprint acquisition and
preprocessing system built on a fixed-point DSP, the TMS320VC5509A,
and a fingerprint sensor, the MBF200. The system is compact and
flexible. The authors present a VC5509A-based fingerprint
preprocessing system that accomplishes fingerprint image acquisition,
with preprocessing performed by properly selected algorithms on the
DSP platform. By comparing the results of candidate algorithms,
appropriate ones are selected for fingerprint identification
preprocessing: Median Filtering, Directional Filtering Enhancement,
Fixed Threshold Binarization, and Hilditch Thinning. Han Xi et al., [97]
propose a universal laboratory management system, which provides
attendance checking, student information management, querying and
course management functions. The system records the exact time and
the number of times every student enters or leaves the lab, affords the
opportunity to choose or modify courses, and allows querying of student
information related to the experiment content, times and duration. Using
this system, the situation in which students imitate others to join an
experiment is avoided, and the rates of laboratory and equipment use can
be confirmed. By utilizing fingerprint recognition technology and
choosing the student's fingerprint as the sole identifier for lab use, the
reliability of recognition is greatly enhanced; while guaranteeing the
normal management order of the lab, the experimental interest of the
students and the enthusiasm of the lab assistants are also greatly
stimulated.
2.3 REVIEW OF FACE BIOMETRIC RECOGNITION
Jeffery and Masatoshi [98] proposed a new data structure known as
the Haar Spectral Diagram (HSD), which is useful for representing the
Haar spectrum of Boolean functions. Representing the Haar transform
matrix in terms of a Kronecker product yields a natural decision-diagram-based
representation with an alternative ordering of the Haar
coefficients. The resulting graph is a point-decomposition of the Haar
spectrum using 0-element edge values. Kun Ma and Xiaoou Tang [99]
proposed an algorithm using a discrete wavelet face graph, which is
similar to the Gabor face graph. They used the elastic bunch graph
matching process to locate fiducial points and 2340 face images to
compare the recognition performance of the two methods, concluding
that the DWT face graph has performance comparable to the Gabor face
graph.
Duan and Zheng [100] proposed a concept of gray-like images from
which generalized Haar-like features can be extracted. The process
makes use of other forms of images, in addition to the gray-level image,
in the Haar-AdaBoost scheme. Using the gray-like images, generalized
Haar-like features are constructed for fast face detection. The results
show that the boosted face detector using the generalized Haar-like
features significantly outperforms the original detector using the basic
Haar-like features. Paul and Abbes [101] proposed a method to
determine the most discriminative coefficients in a DWT/PCA-based
face recognition system, based on their inter-class and intra-class
standard deviations; the eigenfaces used for recognition are chosen
based on the value of their associated eigenvalues. Jun Ying Gan and
Jun Feng Liu [102] described a novel approach to the fusion and
recognition of face and iris images based on wavelet features, using
Kernel Fisher Discriminant Analysis (KFDA). In the algorithm,
applying the Discrete Wavelet Transform (DWT) to the face and iris
images reduces the dimension, eliminates noise, saves storage space
and improves efficiency. The face and iris features are then extracted by
KFDA-based fusion, and a nearest-neighbour classifier performs the
recognition. Experimental results show that KFDA not only overcomes
the small-sample problem but also achieves a correct recognition rate
higher than that of face recognition or iris recognition alone.
Sudha and Mohan [103] proposed a hardware-oriented algorithm for
eigenface-based face detection using the FFT. Eigenfaces have long
been used for face detection and recognition and are known to give
reasonably good results. They give an FFT-based computation of the
distance measure which facilitates hardware implementation and fast
face detection, and extend the face detection framework by training with
the whole face as well as with other facial features, such as the eyes and
mouth, separately. Satiyan et al., [104] investigated the performance of
the Daubechies wavelet family in recognizing facial expressions. A set
of luminance stickers was fixed on the subject's face, the subject was
instructed to perform the required facial expressions, and the expressions
were recorded on video. A set of 2-D coordinate values was obtained by
tracking the movements of the stickers in the video using tracking
software, and the standard deviation was derived from the wavelet
approximation coefficients for each Daubechies wavelet order.
Hengliang Tang et al., [105] proposed a novel face representation
approach known as the Haar Local Binary Pattern Histogram (HLBPH).
The face image is decomposed into four-channel sub-images in the
frequency domain by the Haar wavelet transform, and the LBP operator
is then applied on each sub-image to extract the face features. Hafiz
Imtiaz and Shaikh Anowarul Fattah [106] proposed a multi-resolution
feature extraction algorithm for face recognition based on the 2D-DWT.
For feature extraction, an entropy-based local band selection criterion is
developed, and a very high degree of recognition accuracy is achieved
by the proposed method. Ramesh and Raja [107] proposed a
performance evaluation of face recognition based on DWT and DT-CWT
using multi-matching classifiers. The face images are resized to
the size required for DT-CWT, and two-level DWT is applied on the
face images to generate four subbands. Euclidean Distance, Random
Forest and Support Vector Machine matching algorithms are used for
matching. Masashi Nishiyama and Osamu Yamaguchi [108] proposed a
method for synthesizing an illumination-normalized image with diffuse
reflection, specular reflection, attached shadow and cast shadow using
the Self-Quotient Image, i.e., the ratio of a pixel value to a locally
smoothed pixel value. Kekre et al., [109] proposed a novel Walshlets-Pyramid-based
face recognition technique, evaluated on two image
databases of 100 images each. The feature sets of an image are extracted
from Walshlets applied on the image at different levels on the gray plane
to improve performance. Kailash Karande and Sanjay Talbar [110]
developed a method for face recognition using edge information, with
the Laplacian of Gaussian and Canny edge detection techniques
generating the edge information; distance classifiers are used for the
testing of images.
Mohamed Aroussi et al., [111] proposed a method for face
recognition based on steerable pyramid (SP) feature extraction. Local
information is extracted from the SP sub-bands using Local Binary
Patterns, which makes it a more effective technique for face recognition.
Sanqiang Zhao and Yongsheng Gao [112] proposed a Multidirectional
Binary Pattern (MBP) method for face recognition. The MBP algorithm
is applied on a sparse set of shape-driven points to extract more
discriminative features, and an MBP measurement is proposed to
evaluate binary patterns and establish point correspondences for face
recognition. Rerkchai Fooprateepsiri and Werasak Kurutach [113]
proposed a robust method for face recognition under varying
illumination, scaling and rotation. The features are extracted by
combining the Trace Transform and a Fast Active Contour. The
Hausdorff distance and a Modified Shape Context are used to determine
the similarity between models and test images, and the experimental
results give the average accuracy rate of face recognition under varying
illumination, scaling and rotation.
Vitomir Struc and Nikola Pavesic [114] present a novel method for
facial feature extraction based on the Gabor-based kernel partial-least-squares
discrimination method. Gabor wavelets are applied to extract
discriminative and robust facial features, and the kernel partial-least-squares
discrimination method is then used to reduce the dimensionality
of the Gabor feature vector and further enhance its discriminatory
power; the cosine distance measure is employed at the matching stage
for the calculation of matching scores. Taskeed Jabid et al., [115]
proposed a feature descriptor, the Local Directional Pattern (LDP), to
represent facial geometry and analyze its performance. LDP features are
obtained by computing the edge response values in eight directions at
each pixel and encoding them into an 8-bit binary number using the
relative strengths of these edge responses. Arif Muntasa et al., [116]
proposed a model for face image recognition in which global structure
features are extracted using Principal Component Analysis and Linear
Discriminant Analysis, local structure features are extracted using
Locality Preserving Projection and Orthogonal Laplacianfaces, and the
global and local structure features are then fused, based on appearance,
to achieve a higher recognition rate.
Kelsey Ramirez-Gutierrez et al., [117] developed an algorithm for
face recognition using support vector machines and histogram
equalization. Principal Component Analysis and histogram equalization
are used to extract features, and the technique provides a higher
recognition rate. Jaya Priya and Rajesh [118] proposed a novel local-appearance
feature extraction method using the multi-resolution Dual
Tree Complex Wavelet Transform (DTCWT) to generate coefficients
that characterize the face texture. Recognition is done on the basis of the
Euclidean Distance (ED) measure on block-based feature vectors; the
proposed method performs better and has low computational
complexity. Reza Ebrahimpour et al., [119] proposed a two-dimensional,
expression-independent face recognition method based on
features inspired by the human visual ventral stream. A feature set
containing illumination- and view-invariant C2 features is extracted
from all images in the dataset; these C2 feature vectors, derived from a
cortex-like mechanism, are passed to a standard nearest-neighbour
classifier to maintain a higher recognition rate. John Adedapo and
Adeniran [120] proposed an algorithm for recognition and verification
with one sample image per class; features are extracted from the images
using the two-dimensional discrete wavelet transform (2D-DWT), and a
hidden Markov model (HMM) is used for training and recognition. A
correct classification (hit) rate of 90% and a False Acceptance Rate
(FAR) of 0.02% were achieved. Timo Ojala et al., [121] present a very
simple, efficient, multi-resolution approach to gray-scale and rotation-invariant
texture classification based on local binary patterns and
nonparametric discrimination of sample and prototype distributions. The
approach is very robust to gray-scale variations and invariant against
any monotonic transformation of the gray scale. Another advantage is
computational simplicity, as the operator can be realized with a few
operations in a small neighborhood and a lookup table.
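To make the LBP idea concrete, the following is a minimal sketch (in Python with NumPy) of the basic 3 X 3 LBP operator and the histogram descriptor built from it; it illustrates the general technique only, not the rotation-invariant multi-resolution variant of Ojala et al.:

```python
import numpy as np

def lbp_3x3(image):
    """Basic local binary pattern: threshold the 8 neighbours of each
    interior pixel against the centre and pack the results into an
    8-bit code."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    # Neighbour offsets, clockwise from top-left, one bit each.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(int) << bit
    return codes.astype(np.uint8)

def lbp_histogram(image):
    """Normalised 256-bin histogram of LBP codes: a simple texture
    descriptor for a face (or fingerprint) region."""
    hist, _ = np.histogram(lbp_3x3(image), bins=256, range=(0, 256))
    return hist / hist.sum()
```

A face image is then described by concatenating such histograms over a grid of sub-regions, which is how LBP is commonly applied to recognition.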
Ramesh et al., [122] proposed template-based mole detection for face
recognition, in which a person is identified by the presence of a mole on
the face. Homomorphic filtering is used for illumination compensation,
and Normalized Cross Correlation (NCC) matching is used to detect the
intensity value and position of the mole with respect to the NCC
threshold values. Validation is done by comparing the value of the mole
with the grab-cut segmented image.
Murugan et al., [123] proposed a performance evaluation of face
recognition using the Gabor filter, the Log-Gabor filter and the discrete
wavelet transform. The PCA used to reduce the dimensionality of the
face images also discards important information needed for matching;
hence the three multiscale techniques, the Gabor filter, the Log-Gabor
filter and the DWT, are applied before PCA to preserve this information.
The Gabor filter exploits salient visual properties such as spatial
localization, orientation selectivity and spatial-frequency characteristics;
it works as a band-pass filter to achieve optimal resolution in both the
spatial and frequency domains. In the Log-Gabor filter, the Log-Gabor
frequency response is multiplied with the Fourier transform of the
original image; after the inverse Fourier transform, the prominent
information appears in the high-amplitude signal and the less prominent
information in the low-amplitude signal. The image is decomposed into
four sub-images, Low-Low, Low-High, High-Low and High-High, by
applying the DWT along the rows and columns. The DWT gives a high
compression ratio and good reconstruction quality.
Ramesh et al., [124] proposed Dual Transform based Feature
Extraction for Face Recognition. The images in the database are of
different sizes and formats, and hence are converted to a standard
dimension appropriate for applying DT-CWT. Variations due to
expression and illumination are compensated by applying the DWT on
an image, which also reduces the image dimension by decomposition.
The DT-CWT is applied on the LL subband generated after two-level
DWT to produce the DT-CWT coefficients that form the feature vectors.
The feature vectors of the database and test faces are compared using
Random Forest, Euclidean Distance and Support Vector Machine
matching algorithms.
Nick G. Kingsbury et al., [125] present the dual-tree CWT, a valuable
enhancement of the traditional real wavelet transform that is nearly shift
invariant and, in higher dimensions, directionally selective. Since the
real and imaginary parts of the dual-tree CWT are, in fact, conventional
real wavelet transforms, the CWT benefits from the vast theoretical,
practical and computational resources that have been developed for the
standard DWT. Baochang Zhang et al., [126] introduced the PolyU
near-infrared face database (PolyU-NIRFD), one of the largest NIR face
databases; an NIR-based face recognition method, namely the
Directional Binary Code (DBC), was also proposed to capture the
directional edge information in NIR face images more efficiently.
Haihong Zhang and Yan Guo [127] proposed a method based on the
Facial Motion Graph (FMG) for Facial Expression Recognition (FER),
which relies on feature points and muscle movements. FER is achieved
by analysing the similarity between unknown and known expressions.
Furthermore, they propose a method to evaluate edge weights in the
FMG similarity calculation to achieve a more accurate and robust
system. Praseeda Lekshmi and Sasikumar [128] developed a method for
extracting face regions from the detected skin regions; facial
expressions are analyzed from the facial images using the Gabor
Wavelet Transform (GWT) and the Discrete Cosine Transform (DCT),
and a Radial Basis Function (RBF) network is used to identify the
person.
Gandhe et al., [129] developed a contour-matching-based face
recognition system which uses contours for the identification of faces.
The input contour is matched with the registered contour using matching
algorithms to achieve a high recognition rate. Dakshina Ranjan Kisku
et al., [130] proposed a Linear Discriminant Analysis model for multi-view
face recognition; a Gabor filter bank is used to extract facial
features characterized by spatial frequency, spatial locality and
orientation. Canonical covariates are applied to the Gabor faces to
reduce the high-dimensional feature spaces to low-dimensional
subspaces, and support vector machines are trained on the canonical
subspaces, containing sets of features, to perform the recognition task.
The experimental results demonstrate the efficiency and robustness of
the system, with high recognition rates.
2. 4 SUMMARY
In this chapter a brief survey of the related works in the area of
biometrics is presented. The literature on fingerprint recognition and
face recognition is discussed in detail. Various algorithms for these
biometric traits, such as the development of epidermal ridges, image
enhancement, ridge-based fingerprint matching using the Hough
transform, and algorithms based on EMD, DWT, match scores and rapid
Haar wavelet decomposition, are also discussed.
CHAPTER 3
TRANSFORM DOMAIN FINGERPRINT
IDENTIFICATION BASED ON DTCWT
3.1 INTRODUCTION
In this chapter, Transform Domain Fingerprint Identification based
on DTCWT is discussed. Physiological biometric characteristics are
better suited than behavioural characteristics for identifying a person.
Fingerprint recognition products account for a high percentage of the
total sales of biometric technology. For Transform Domain Fingerprint
Identification based on DTCWT, the original fingerprint is cropped and
resized to a suitable dimension for applying DTCWT. The DTCWT is
applied on the fingerprint to generate coefficients which form the
features. The performance analysis is discussed for different levels of
DTCWT and for different sizes of the fingerprint database. It is observed
that the recognition rate is better at level 7 than at the other levels of
DTCWT.
3.2 PROPOSED MODEL
The block diagram of Transform Domain Fingerprint Identification
using DTCWT (TDFID) is given in Figure 3.1.
Fig. 3.1: TDFID Model
3.2.1 Fingerprint Database
The first and second international Competitions on fingerprint
verification (FVC2000 and FVC 2002) were organized in 2000 and 2002
respectively. The FVC 2004 competition [131] is mainly based on the
fingerprint verification. The Database is collected with various sensors
and different timings are provided by the competition organizers.
A total of ninety student volunteers (24 years old on average),
enrolled in the Computer Science degree program at the University of
Bologna, were recruited. The volunteers were randomly divided into
three groups of 30 persons, and each group was associated with one
database so that different types of fingerprint scanner could be used.
Each volunteer was invited, with sessions at least two weeks apart, to
present himself or herself at the collection place in three distinct
sessions. The forefinger and middle finger of both hands (four fingers in
total) of each person were acquired, interleaving the acquisition of the
different fingers to maximize differences in finger placement. Image
quality was not taken into consideration, so the sensor platens were not
systematically cleaned. At each session, four impressions were acquired
of each of the four fingers of each person. During the second session,
individuals were requested to exaggerate skin distortion of the finger;
during the third session, fingers were dried and moistened. At the end of
the data collection, each database contained a total of 120 fingers and 12
impressions per finger, collected from 30 volunteers.
Four distinct databases, DB1, DB2, DB3 and DB4, constitute the
FVC 2004 benchmark; the details of the collected fingerprints are shown
in Table 3.1. Each database is 110 fingers wide and 8 impressions per
finger deep (i.e., it consists of 880 fingerprint images).
Table 3.1: Scanners/Technologies used for FVC 2004 Database

Database   Sensor Type               Image Size   Set A (w x d)   Set B (w x d)   Resolution
DB1        Optical sensor            640 X 480    100 X 8         10 X 8          500 dpi
DB2        Optical sensor            640 X 480    100 X 8         10 X 8          500 dpi
DB3        Thermal sweeping sensor   640 X 480    100 X 8         10 X 8          512 dpi
DB4        SFinGe v3.0               640 X 480    100 X 8         10 X 8          about 500 dpi
Each database will be partitioned in two disjoint subsets A and B. The
subsets DB1-A, DB2-A, DB3-A and DB4-A, which contain the first 100
fingers (800 images) of DB1, DB2, DB3 and DB4, respectively, will be
used for the algorithm performance evaluation. The subsets DB1-B, DB2-
B, DB3-B and DB4-B, containing the last 10 fingers (80 images) of DB1,
DB2, DB3 and DB4, respectively, will be made available to the
participants to allow parameter tuning before executable(s) submission.
During performance evaluation, only homogeneous fingerprints, i.e.
those belonging to the same database, will be matched against each other.
The image format is TIF, 256 gray-level, uncompressed.
The image resolution, which could slightly change depending on the
database, is about 500 dpi.
The image size varies depending on the database.
The orientation of fingerprint is approximately in the range [-30,
+30] with respect to the vertical orientation.
Each pair of images of the same finger will have a non-null overlap,
but the presence of the fingerprint cores and deltas is not guaranteed.
Figure 3.2 shows a sample fingerprint image from each of the
databases DB1, DB2, DB3 and DB4. A sample fingerprint with eight
impressions from DB3_A is shown in Figure 3.3.
Fig. 3.2: One Fingerprint Image From Each Database
Fig.3.3: A Sample of Fingerprint of DB3_A
(i) Source Database: The first seven fingerprint images of each
person from the DB3_A database of FVC 2004 are stored, i.e., seven
hundred samples.
(ii) Test Template: The eighth fingerprint of each person from the
DB3_A database of FVC 2004 is used as the test template and is
compared with the source database to compute FRR and TSR.
(iii) Mismatch Template Database: The DB3_B database of FVC
2004, containing 10 fingers, is stored as the mismatch template database
and compared with the source database to compute FAR.
3.2.2 Pre-processing
The original fingerprint image is of size 480 X 300. On observing the
DB3_A database of FVC 2004, the input image is cropped to the size
401 X 201 in order to remove the unwanted portion of the image. The
cropped image is then resized to 512 X 256, as required by the DTCWT.
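A minimal sketch of this pre-processing step is given below; the crop offsets are an assumption (the thesis does not state where the 401 X 201 window is placed, so a centred crop is used), and nearest-neighbour interpolation stands in for the unspecified resizing method:

```python
import numpy as np

def preprocess_fingerprint(img, crop_hw=(401, 201), out_hw=(512, 256)):
    """Centre-crop the 480 X 300 fingerprint to 401 X 201 (crop
    position assumed), then resize to 512 X 256 by nearest-neighbour
    interpolation (a stand-in for the unspecified resizing method)."""
    img = np.asarray(img)
    h, w = img.shape
    ch, cw = crop_hw
    top, left = (h - ch) // 2, (w - cw) // 2
    cropped = img[top:top + ch, left:left + cw]
    oh, ow = out_hw
    rows = np.arange(oh) * ch // oh   # source row for each output row
    cols = np.arange(ow) * cw // ow   # source column for each output column
    return cropped[np.ix_(rows, cols)]
```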
3.2.3 Dual Tree Complex Wavelet Transform (DTCWT):
The DTCWT is a recent enhancement of the DWT with some
additional properties and changes. It is an effective method for
implementing an analytic wavelet transform, introduced by Kingsbury
in 1998 [132, 133, 134]. The DTCWT gives the complex transform of a
signal using two separate DWT decompositions, i.e., tree a and tree b. It
produces complex coefficients by using a dual tree of wavelet filters and
gives real and imaginary parts, as shown in Figure 3.4.
Fig. 3.4: Real and Imaginary Parts of the Complex Coefficients
As the DTCWT is applied at successive levels, the number of
features and the dimensions are reduced. The fingerprint images for
levels 1 to 7 are shown in Figure 3.5.
Fig. 3.5: DTCWT Images at Different Levels.
3.2.4 Feature Extraction by DT-CWT
The feature representation should carry sufficient information to
distinguish different fingerprints while being insensitive to noise. Only
the significant features of the image need to be encoded so that
comparisons between templates can be made. The feature elements
capture local information, and their ordered sequence captures the
invariant global relationships among the local patterns.
For many natural signals, the wavelet transform is a more effective
tool than the Fourier transform. The wavelet transform provides a multi
resolution representation using a set of analysing functions that are
dilations and translations of a few functions (wavelets). The wavelet
transform comes in several forms. The critically-sampled form of the
wavelet transform provides the most compact representation; however, it
has several limitations. For example, it lacks the shift-invariance property,
and in multiple dimensions it does a poor job of distinguishing
orientations, which is important in image processing. For these reasons, it
turns out that for some applications improvements can be obtained by
using an expansive wavelet transform in place of a critically-sampled one.
(An expansive transform is one that converts an N-point signal into M
coefficients with M > N.) There are several kinds of expansive DWTs;
here we describe and provide an implementation of the dual-tree
complex discrete wavelet transform (DTCWT).
3.2.4.1 The DTCWT has the following properties:
(i) Approximate shift invariance;
(ii) Good directional selectivity in two dimensions (2-D) with Gabor-like
filters (also true for higher dimensionality, m-D);
(iii) Perfect reconstruction (PR) using short linear-phase filters;
(iv) Limited redundancy, independent of the number of scales: 2:1 for
1-D (2^m:1 for m-D);
(v) Efficient order-N computation, only 2^m times that of the simple
DWT for m-D signals.
The DTCWT differentiates positive and negative frequencies and
generates six subbands oriented at ±15°, ±45° and ±75°.
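The dual-tree idea, two parallel real filter banks whose outputs are paired as the real and imaginary parts of complex coefficients, can be illustrated with a deliberately simplified one-level, 1-D toy using Haar filters. Kingsbury's actual design uses carefully matched q-shift filter pairs, so this is only a sketch of the concept, not the real transform:

```python
import numpy as np

def toy_dual_tree_level(x):
    """One analysis level of a toy 'dual tree' on a 1-D signal.

    Both trees share a Haar filtering step; tree (a) keeps the even-phase
    decimated samples and tree (b) the odd-phase ones (a one-sample,
    i.e. half-sample post-decimation, offset). The two real detail
    signals are paired as real and imaginary parts of complex
    coefficients, mimicking the structure (not the filters) of the DTCWT.
    """
    x = np.asarray(x, dtype=float)
    lo = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar lowpass
    hi = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar highpass
    d = np.convolve(x, hi, mode="same")
    a = d[0::2]            # tree a: even-phase decimation
    b = d[1::2]            # tree b: odd-phase decimation
    detail = a + 1j * b    # complex detail coefficients
    approx = np.convolve(x, lo, mode="same")[0::2]  # input to next level
    return approx, detail
```

The magnitude of the complex detail coefficients varies far more gently under small input shifts than either real coefficient set alone, which is the key motivation for the dual-tree construction.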
3.2.4.2 DTCWT Applications
Applications include image segmentation [135, 136],
classification [137], deconvolution [138, 139], image sharpening [140],
motion estimation [141], coding [142, 143, 144], watermarking [145,
146], texture analysis and synthesis [147, 148], feature extraction [149,
150 ], seismic imaging [ 151 ], and the extraction of evoked potential
responses in EEG signals [152].
3.2.5 Matching:
The Euclidean Distance (ED) is used to compare the test image with
the database images using Equation 3.1.

ED(P, Q) = sqrt( sum_{i=1}^{M} (P_i - Q_i)^2 )          ….. (3.1)

where M is the dimension of the feature vector, P_i is the database
feature vector and Q_i is the test feature vector.
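In code, identification by nearest Euclidean distance can be sketched as follows (the feature vectors and threshold below are placeholders, not values from the thesis):

```python
import numpy as np

def euclidean_distance(p, q):
    """ED of Equation 3.1: square root of the summed squared
    differences between database features p and test features q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((p - q) ** 2)))

def identify(database, test_features, threshold):
    """Return the identity of the nearest database entry, or None
    (reject) if even the best match exceeds the threshold."""
    best_id, best_d = None, float("inf")
    for person_id, feats in database.items():
        d = euclidean_distance(feats, test_features)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id if best_d <= threshold else None
```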
3.3 ALGORITHM
The physiological trait fingerprint is used to identify a person using
the features obtained from the DTCWT coefficients. The proposed
algorithm for the performance analysis of fingerprint identification at
different levels of DTCWT is given in Table 3.2.
The objectives are:
(i) fingerprint verification to authenticate a person using DTCWT;
(ii) to achieve a high TSR;
(iii) to keep FRR and FAR very low.
Table 3.2: Proposed TDFID Algorithm
Input: Fingerprint database, test fingerprint
Output: Person is identified.
1. The FVC 2004 DB3_A database is considered.
2. Pre-processing is done by cropping the input fingerprint image to size
401 X 201.
3. The cropped image is resized to 512 X 256 for the DTCWT requirement.
4. DTCWT is applied on the fingerprint with levels 5, 6 and 7.
5. The magnitude and phase resulting from DTCWT are concatenated and
considered as features.
6. The source database is created with the features obtained in step 5.
7. For the test fingerprint, DTCWT is applied and features are obtained
using step 5.
8. The test fingerprint is compared with the database fingerprints using
ED to identify a person.
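Step 5 of the algorithm, concatenating the magnitude and phase of the complex DTCWT coefficients into a feature vector, can be sketched as follows (the coefficient array here is a placeholder for the actual DTCWT output):

```python
import numpy as np

def features_from_coeffs(coeffs):
    """Given complex transform coefficients (any shape), flatten them
    and concatenate their magnitudes and phases into one feature
    vector, as in step 5 of the TDFID algorithm."""
    c = np.asarray(coeffs).ravel()
    return np.concatenate([np.abs(c), np.angle(c)])
```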
3.4 PERFORMANCE ANALYSIS
For the performance analysis, the DB3_A database of FVC 2004 is
considered. The number of Persons Inside the DataBase (PIDB), used to
compute FRR and TSR, is varied from 30 to 90, and the number of
Persons Outside the DataBase (PODB), used to compute FAR, is 10, as
given in Table 3.3.
Table 3.3: EER and TSR for Different Levels of DTCWT

                          PIDB:PODB
Level   Metric   30:10   40:10   60:10   70:10   80:10   90:10
5       EER      0.5     0.2     0.573   0.34    0.36    0.33
        % TSR    50      80      42.7    66      64      67
6       EER      0.45    0.2     0.59    0.3     0.282   0.3
        % TSR    55      80      41      70      71.8    70
7       EER      0.36    0.15    0.228   0.21    0.197   0.197
        % TSR    64      85      77.2    79      80.3    82.1
It is observed from Table 3.3 that the values of EER and TSR depend
more on the quality of the fingerprint images than on the number of
images in PIDB and PODB. The values of EER and TSR are best for a
PIDB:PODB of 40:10. The recognition rate at DTCWT level 7 is better
than at the lower levels; the TSR and EER are 85% and 0.15
respectively for DTCWT level 7 with a PIDB:PODB of 40:10.
The graph of FRR, FAR and TSR is given in Figure 3.6, and the
variations of FRR, FAR and TSR with threshold for a PIDB:PODB of
40:10 are tabulated in Table 3.4. It is noticed that as the threshold
increases, the value of FRR decreases, whereas the values of FAR and
TSR increase. The highest recognition success rate, 85%, is achieved at
the threshold value of 2.4.
Fig.3.6 Variations of FRR, FAR and TSR with Threshold Values
Table 3.4. Values of FRR, FAR and TSR for Different Thresholds
Threshold FRR FAR % TSR
0 1 0 0
0.3 1 0 0
0.6 1 0 0
0.9 1 0 0
1.2 0.85 0 50
1.5 0.6 0 60
1.8 0.35 0 64
2.1 0.15 0.1 72
2.4 0.1 0.4 85
2.7 0.05 0.4 85
3 0 0.4 85
3.3 0 0.5 85
3.6 0 0.8 85
3.9 0 0.8 85
4.2 0 0.8 85
4.3 0 0.9 85
4.8 0 0.9 85
5.1 0 1 85
5.4 0 1 85
5.7 0 1 85
6 0 1 85
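Given lists of genuine and impostor matching distances, the FRR and FAR values at each threshold can be computed as in the following sketch (the definitions used here are the standard ones; the thesis's exact counting procedure for TSR is not specified, so only FRR and FAR are shown):

```python
import numpy as np

def error_rates(genuine_distances, impostor_distances, threshold):
    """FRR: fraction of genuine comparisons wrongly rejected
    (distance above the threshold). FAR: fraction of impostor
    comparisons wrongly accepted (distance at or below it)."""
    genuine = np.asarray(genuine_distances, dtype=float)
    impostor = np.asarray(impostor_distances, dtype=float)
    frr = float(np.mean(genuine > threshold))
    far = float(np.mean(impostor <= threshold))
    return frr, far
```

Sweeping the threshold from 0 upwards traces out the FRR and FAR curves of Figure 3.6, and the EER is read off where the two curves cross.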
3.5 SUMMARY
Biometrics is used to authenticate a person. The performance analysis
of fingerprint identification using different levels of DTCWT is
presented in this chapter. The fingerprint is preprocessed to a size that
suits the DTCWT, and the fingerprint features are obtained by applying
the DTCWT at different levels. The test image features are compared
with the database images using Euclidean Distance. It is observed that
the recognition rate is better for DTCWT level 7 than for lower levels,
with a PIDB:PODB of 40:10.
CHAPTER 4
PERFORMANCE COMPARISON OF FACE
RECOGNITION USING TRANSFORM DOMAIN TECHNIQUES (PCFTD)
4.1 INTRODUCTION
Biometrics is a powerful tool to authenticate a person for multiple
applications. Face recognition is among the better biometrics, and
recognizing people by their facial features is the oldest
identification mechanism compared to other biometric traits, since the
image can be captured without the knowledge or cooperation of the
person. For this reason face recognition has gained wide popularity.
Face recognition is a non-invasive process in which a portion of the
subject's face is photographed and reduced to a digital code for
further processing. Facial recognition records the spatial geometry of
the distinguishing features of the face. This chapter explains the
Performance Comparison of Face Recognition using Transform Domain
Techniques (PCFTD). The face databases L-Spacek, JAFFE and NIR are
considered. The features of the face are generated using wavelet
families such as Haar, Symlet and DB1 by considering the approximation
band only. The face features are also generated using the magnitudes
of the FFT coefficients. The test image features are compared with the
database features using Euclidean Distance (ED). The performance
parameters FAR, FRR, TSR and EER are computed using the wavelet
families and the FFT.
4.2 PROPOSED PCFTD MODEL
In this section, the proposed model for the Performance Comparison of
Face Recognition using Transform Domain Techniques is discussed. In
the proposed model, the Haar, Symlet and DB1 DWTs and the FFT are
applied to generate features of face images to identify a person
effectively. The block diagram of the proposed PCFTD model is shown in
Figure 4.1.
Fig. 4.1: The Block Diagram of the PCFTD Model
[Blocks: the Face Database image and the Test Image each pass through
Preprocessing and DWT/FFT feature extraction; the two feature sets are
compared in the Matching block to produce the Result.]
4.2.1 Face Databases:
(i) Near Infrared (NIR)
The NIR database is considered due to its variations in pose,
expression, illumination, scale and blurring, and combinations of
these. The database consists of 120 persons with 15 images per person.
The database is created by considering the first 60 of the 120
persons, with the first 10 images per person, which leads to 600
images in the database; the thirteenth image of each of the first 60
persons is considered as a test image to compute FRR and TSR. The
remaining 60 persons are considered as out of database to compute FAR.
Samples of NIR face images are shown in Figure 4.2.
(ii) L-Spacek
The total number of persons in L-Spacek is 120. The first 65 persons
are considered for the database and the remaining 55 persons are
considered out of database. Each person has 19 images, of which the
first 10 per person are used to create the database, leading to a
total of 650 images; the thirteenth image of each of the first 65
persons is taken as a test image to compute FRR and TSR. The FAR is
computed using the 55 out-of-database persons. Samples of L-Spacek
face images are shown in Figure 4.3.
Fig. 4.2: Samples of NIR Face Images of a Person
Fig. 4.3: Samples L- Spacek Face Images of a Person
(iii) JAFFE:
The face database consists of 10 persons with approximately 20 images
per person. The database is created by considering the first 5 of the
10 persons, with the first 10 images per person, which leads to 50
images in the database; the fourteenth image of each of the first 5
persons is taken as a test image to compute FRR and TSR. The remaining
5 persons are considered as out of database to compute FAR. Samples of
the JAFFE database are shown in Figure 4.4.
Fig. 4.4: Samples of JAFFE Face Images of a Person
4.2.2 Preprocessing
The color images are converted into gray scale images. The face images
are then resized from their original size to the required sizes.
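A minimal preprocessing sketch is given below, assuming standard luma weights for the grayscale conversion and nearest-neighbour resizing; a library routine (e.g. OpenCV) would normally be used, and the sizes shown are illustrative.

```python
import numpy as np

# Preprocessing sketch: RGB -> grayscale, then nearest-neighbour resize.

def to_gray(rgb):
    # ITU-R BT.601 luma weights (a common convention, assumed here)
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def resize_nn(img, out_h, out_w):
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

rgb = np.random.rand(480, 300, 3)          # illustrative image
gray = to_gray(rgb)
face = resize_nn(gray, 128, 128)
print(face.shape)   # (128, 128)
```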
4.2.3 Wavelet Families
The wavelet transform represents a signal in terms of mother wavelets
using dilation and translation. Wavelets are oscillatory functions of
finite duration in both time and frequency, and hence represent a
signal in both the spatial and frequency domains. The features
extracted by the wavelet transform give better recognition results and
bifurcate the low and high frequency components into the approximation
band and the detail bands respectively.
The advantages of the Discrete Wavelet Transform are:
• It gives information about both the time and the frequency of the signal.
• The transform of a non-stationary signal is obtained efficiently.
• It reduces the size without losing much resolution.
• It reduces redundancy and computational time.
The disadvantages of the DWT are:
• Lack of shift invariance.
• Lack of directional selectivity in higher dimensions.
• Unsatisfactory reconstruction.
• More redundancy compared to DTCWT.
There are a number of basis functions that can be used as the
mother wavelet for Wavelet Transformation. Since the mother wavelet
produces all wavelet functions used in the transformation through
translation and scaling, it determines the characteristics of the resulting
Wavelet Transform. Therefore, the details of the particular
application should be taken into account and the appropriate mother
wavelet should be chosen in order to use the Wavelet Transform
effectively.
Fig. 4.5: Wavelet families (a) Haar (b) Daubechies (c) Symlet2
Figure 4.5 illustrates some of the commonly used wavelet functions.
The Haar wavelet is one of the oldest and simplest wavelets;
therefore, any discussion of wavelets starts with it. The Daubechies
wavelets are the most popular wavelets. They represent the foundations
of wavelet signal processing and are used in numerous applications.
They are also called Maxflat wavelets, as their frequency responses
have maximum flatness at frequencies 0 and π. This is a very desirable
property in some
applications. The Haar, Daubechies, Symlets and Coiflets are
compactly supported orthogonal wavelets. These wavelets along with
Meyer wavelets are capable of perfect reconstruction.
In DWT, the most prominent information in the signal appears in high
amplitudes and the less prominent information appears in very low
amplitudes. Data compression can be achieved by discarding these low
amplitudes. The wavelet transform enables high compression ratios with
good quality of reconstruction. At present, the application of
wavelets to image compression is one of the most active areas of
research, and the Wavelet Transform has been chosen for the JPEG 2000
compression standard.
4.2.4 Wavelet Transform
A wavelet is an irregular, asymmetric waveform of effectively limited
duration that has an average value of zero. The Wavelet Transform is
used to analyse non-stationary signals, i.e., signals whose frequency
response varies in time, and is capable of providing time and
frequency information simultaneously. The wavelet transform is created
by repeatedly filtering the image coefficients on a row-by-row and
column-by-column basis. The usefulness of wavelets in image analysis
lies in the fact that the wavelet transform clearly separates
high-frequency and low-frequency information on a pixel-by-pixel
basis. The input image is passed through a wavelet filter bank. The
image convolved with the wavelet low pass filter gives a smooth
version of the input image, and that with the high pass filter results
in the detail band. This decomposition can be carried out up to
log2{min(height, width)} levels. The low pass coefficients of the
final level of decomposition constitute the approximation band. The
low-frequency wavelet coefficients (approximation band coefficients
Ai) are generated by averaging two pixel values as given in Equation
4.1, and the high frequency coefficients (detail band coefficients Di)
are generated by taking half of the difference of the same two pixels
as given in Equation 4.2.
Ai = (P2i-1 + P2i) / 2   …. (4.1)

Di = (P2i-1 − P2i) / 2   …. (4.2)

where Pi is the i-th pixel value in the input spatial domain signal
sequence. An image is decomposed into various wavelet sub-bands, shown
in Figure 4.6, namely the Approximation band, the Vertical Detail
band, the Horizontal Detail band and the Diagonal Detail band. The
Approximation band consists of the low frequency wavelet coefficients,
which contain a significant part of the spatial domain image. A detail
band consists of high frequency coefficients, which contain the edge
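Equations 4.1 and 4.2 can be illustrated on a short one-dimensional sequence as follows (a sketch, not the thesis implementation):

```python
import numpy as np

# One level of the averaging/differencing scheme of Equations 4.1/4.2:
#   A_i = (P_{2i-1} + P_{2i}) / 2,   D_i = (P_{2i-1} - P_{2i}) / 2

def haar_level(p):
    p = np.asarray(p, dtype=float)
    a = (p[0::2] + p[1::2]) / 2.0   # approximation coefficients
    d = (p[0::2] - p[1::2]) / 2.0   # detail coefficients
    return a, d

a, d = haar_level([4, 6, 10, 12])
print(a)   # [ 5. 11.]
print(d)   # [-1. -1.]
# Perfect reconstruction: P_{2i-1} = A_i + D_i and P_{2i} = A_i - D_i
```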
details of the spatial domain image. There are two types of wavelet
transforms, viz., the Continuous Wavelet Transform (CWT) and the
Discrete Wavelet Transform (DWT).
The Continuous Wavelet Transform was developed as an alternative to
the short-term Fourier Transform to overcome the resolution problem,
whereas the Discrete Wavelet Transform provides sufficient information
for both analysis and synthesis of the signal with a significant
reduction in computation time.
Figure 4.7 gives the two-dimensional Discrete Wavelet Transform
decomposition process. The rows of an image are convolved with the
system functions of the low pass and high pass filters. These
convolved signals are down sampled by keeping the even-indexed
columns, and are then convolved again with the transfer functions of
the low pass and high pass filters. The final convolved signals are
down sampled along the rows to generate the approximation band and the
detail bands.
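The row/column decomposition of Figure 4.7 can be sketched at one level using the Haar averaging/differencing filters of Equations 4.1 and 4.2; the filter choice and sampling convention here are illustrative assumptions, not the thesis code.

```python
import numpy as np

# One-level 2D decomposition in the row/column fashion of Figure 4.7.

def haar_1d(x, axis):
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / 2.0, (even - odd) / 2.0   # low band, high band

def dwt2_haar(img):
    lo, hi = haar_1d(img, axis=1)     # filter and down sample the columns
    ll, lh = haar_1d(lo, axis=0)      # then the rows of the low band
    hl, hh = haar_1d(hi, axis=0)      # and the rows of the high band
    return ll, lh, hl, hh             # approximation + three detail bands

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = dwt2_haar(img)
print(ll.shape)   # (2, 2)
```

Each subband is half the size of the input in both dimensions; applying `dwt2_haar` repeatedly to the `ll` band gives the multi-level decomposition described above.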
Fig. 4.6: Wavelet Decomposition
Fig. 4.7: Block Diagram of 2 DWT Decomposition Process
4.2.5 Fast Fourier Transform (FFT)
The FFT is applied on the spatial domain image to obtain the FFT
coefficients. The features extracted from the FFT [153] coefficients
are the real part, the imaginary part, the magnitude and the phase
angle. The FFT computation is fast compared to the direct Discrete
Fourier Transform (DFT) [154], since the number of multiplications
required is only (N/2)log2(N) for an N-point FFT as against N^2 for
the direct DFT. The efficiency of the FFT algorithm can be enhanced
for real input signals by forming complex-valued sequences from the
real-valued sequences prior to the computation of the FFT. The value
of the FFT can be obtained using Equation 4.3.
F(m, n) = (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) e^{-j2π(mx/M + ny/N)}   ... (4.3)

where f(x, y) is the spatial domain image of size M × N, and
0 ≤ m ≤ M - 1, 0 ≤ n ≤ N - 1.
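A sketch of this feature extraction, assuming the plain magnitudes of the unnormalized 2D FFT are used as the feature vector (the function name and image size are illustrative):

```python
import numpy as np

# Feature extraction per Section 4.2.5: 2D FFT of the image, keeping
# the coefficient magnitudes as the feature vector.

def fft_features(img):
    coeffs = np.fft.fft2(img)        # complex FFT coefficients
    return np.abs(coeffs).ravel()    # magnitude features, flattened

img = np.random.rand(64, 64)
feat = fft_features(img)
print(feat.shape)   # (4096,)
```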
4.2.6 Features
The DWT features are obtained from the approximation band only. The
FFT features are computed using the magnitude values.
4.2.7 Matching
The features of the test image are compared with the features of the
database images using the Euclidean Distance given in Equation 4.4.

ED = sqrt( Σ_{i=1}^{M} (pi − qi)^2 )   ... (4.4)

where M is the dimension of the feature vector, pi is the database
feature vector and qi is the test feature vector.
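A matching sketch based on Equation 4.4, with illustrative feature vectors and threshold (not the thesis data):

```python
import numpy as np

# Compare a test feature vector with each database vector by Euclidean
# distance (Equation 4.4) and accept the closest match if it falls
# below a threshold.

def euclidean(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(np.sum((p - q) ** 2)))

def match(test, database, threshold):
    dists = [euclidean(test, db) for db in database]
    best = int(np.argmin(dists))
    if dists[best] < threshold:
        return best, dists[best]     # index of matched identity
    return None, dists[best]         # no match below threshold

db = [[1.0, 2.0], [5.0, 5.0]]
idx, d = match([1.2, 2.1], db, threshold=1.0)
print(idx, round(d, 3))   # 0 0.224
```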
4.3 ALGORITHM
Problem Definition
The proposed algorithm, given in Table 4.1, analyses the performance
of face recognition using different wavelet families and the FFT on
different face databases.
The objectives are:
• Face verification to authenticate a person.
• To achieve a high TSR.
• To keep FRR and FAR very low.
Table 4.1: Algorithm of PCFTD
Input: Face Database, Test Face Image
Output: Recognition of a person
Step 1: A face image is read from the database.
Step 2: The color image is converted into gray scale.
Step 3: The image is resized.
Step 4: The Haar, Symlet and DB1 DWTs and the FFT are applied to generate features.
Step 5: Steps 1 to 4 are repeated for the test image.
Step 6: The test features are compared with the database features using Euclidean distance.
Step 7: An image with Euclidean distance less than the threshold value is considered a matched image; otherwise it is not matched.
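The steps of Table 4.1 can be sketched end-to-end with the FFT branch; the data, function names and threshold below are assumptions for demonstration only, not the thesis implementation.

```python
import numpy as np

# End-to-end sketch of the Table 4.1 steps (FFT branch).

def features(img):
    return np.abs(np.fft.fft2(img)).ravel()            # Step 4 (FFT magnitudes)

def recognise(test_img, db_imgs, threshold):
    test = features(test_img)                          # Steps 1-5
    dists = [np.linalg.norm(test - features(d)) for d in db_imgs]  # Step 6
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None   # Step 7

rng = np.random.default_rng(0)
db = [rng.random((32, 32)) for _ in range(3)]          # toy "database"
probe = db[1] + 0.01 * rng.random((32, 32))            # near-copy of image 1
print(recognise(probe, db, threshold=50.0))
```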
The original Fingerprint image is of size 480 × 300. On observing
DB3_A of FVC 2004, the input image is cropped to 401 × 201 in order to
remove the unwanted portion of the image, and the cropped image is
then resized to 512 × 256 as required by DTCWT.
4.4 PERFORMANCE ANALYSIS
The face databases, viz., JAFFE, L-Spacek and NIR, are considered to
test the algorithm for performance analysis. The frequency domain
transformation FFT and the transform domain DWT with different
wavelets are used to compute FAR, FRR and TSR. The values are compared
to draw conclusions.
(i) Performance Using FFT
Table 4.2 gives the variations of FAR, FRR and TSR with respect to the
threshold values for the different face databases with the FFT. As the
threshold increases from 0 to 5, FRR decreases while FAR and TSR
increase from 0 toward their maximum values.
Table 4.2: Performance on Different Face Databases with FFT

Threshold   L-Spacek              NIR                   JAFFE
            FAR   FRR   %TSR      FAR   FRR   %TSR      FAR   FRR   %TSR
0 0 1 0 0 1 0 0 1 0
0.25 0 1 0 0 0.85 15.38 0 1 0
0.5 0 0.98 1.54 0 0.63 36.92 0 1 0
0.75 0 0.89 10.77 0 0.49 50.77 0 1 0
1 0 0.80 20 0 0.34 66.15 0 1 0
1.25 0 0.70 29.23 0.02 0.28 72.31 0 1 0
1.5 0 0.55 44.62 0.11 0.25 75.38 0 1 0
1.75 0 0.40 60 0.19 0.22 78.46 1 0.8 20
2 0 0.30 69.23 0.22 0.17 83.08 2 0.6 40
2.25 0 0.16 83.08 0.37 0.15 84.62 2 0.6 40
2.5 0 0.15 84.62 0.44 0.14 86.15 2 0.6 40
2.75 0 0.13 86.15 0.52 0.12 86.15 2 0.6 40
3 0 0.07 92.31 0.69 0.06 90.77 2 0.6 40
3.25 0 0.06 93.85 0.74 0.06 90.77 2 0.6 40
3.5 0 0.04 95.38 0.83 0.03 93.85 2 0.6 40
3.75 0 0.04 95.38 0.91 0.03 93.85 2 0.6 40
4 0 0.04 95.38 0.93 0.03 93.85 3 0.4 60
4.25 0 0.03 96.92 0.93 0.02 93.85 5 0 100
4.5 0 0 100 0.93 0 95.38 5 0 100
4.75 0 0 100 0.94 0 95.38 5 0 100
5 0 0 100 0.94 0 95.38 5 0 100
The recognition success rate is 100% in the case of the L-Spacek and
JAFFE face images, while the success rate is 95% in the case of the
NIR face database. Hence the FFT is better for the L-Spacek and JAFFE
face databases. The variations of FAR and FRR with threshold for the
L-Spacek, JAFFE and NIR face databases with FFT are shown in Figures
4.8, 4.9 and 4.10.
Fig. 4.8: FAR and FRR with Threshold for L-Spacek Database
Fig. 4.9: FAR and FRR with Threshold for JAFFE Database
Fig. 4.10: FAR and FRR with Threshold for NIR Database
(ii) Performance Using Wavelet Families
The performance parameters FAR, FRR and TSR vary with the threshold
values; their values for the L-Spacek, JAFFE and NIR databases with
the DWT families are given in Tables 4.3, 4.4 and 4.5 respectively.
The success rate for the L-Spacek and JAFFE databases is 100%,
compared to a 95% success rate for the NIR database.
The variations of FAR and FRR with the threshold values for the
L-Spacek face database using the Haar, Symlet and DB1 wavelets are
shown in Figures 4.11, 4.12 and 4.13 respectively. The FRR values
decrease and the FAR values increase as the threshold increases. The
value of EER is 0.01 for the Haar and DB1 wavelets compared to an EER
of 0.2 in the case of Symlet. Hence Haar and DB1 are better wavelets
than Symlet for the L-Spacek face database.
Table 4.3: Performance Parameters of the L-Spacek Database

Threshold   Haar                  Symlet                DB1
            FAR   FRR   %TSR      FAR   FRR   %TSR      FAR   FRR   %TSR
0 0 1 0 0 1 0 0 1 0
0.25 0 0.98 1.54 0 1 0 0 0.98 1.54
0.5 0 0.88 12.31 0 1 0 0 0.88 12.31
0.75 0 0.71 29.23 0 1 0 0 0.71 29.23
1 0 0.51 49.23 0 1 0 0 0.51 49.23
1.25 0 0.31 69.23 0 0.8 20 0 0.31 69.23
1.5 0 0.25 75.38 0 0.6 40 0 0.25 75.38
1.75 0 0.18 81.54 0 0.6 40 0 0.18 81.54
2 0 0.09 90.77 0 0.6 40 0 0.09 90.77
2.25 0 0.06 93.85 0 0.6 40 0 0.06 93.85
2.5 0 0.05 95.38 0 0.6 40 0 0.05 95.38
2.75 0 0.05 95.38 0 0.6 40 0 0.05 95.38
3 0 0.05 95.38 0.25 0.2 80 0 0.05 95.38
3.25 0 0.03 96.92 0.25 0.2 80 0 0.03 96.92
3.5 0.02 0.03 96.92 0.25 0.2 80 0.02 0.03 96.92
3.75 0.02 0 100 0.25 0.2 80 0.02 0 100
4 0.06 0 100 0.25 0 100 0.06 0 100
4.25 0.15 0 100 0.25 0 100 0.15 0 100
4.5 0.17 0 100 0.25 0 100 0.17 0 100
4.75 0.24 0 100 0.25 0 100 0.24 0 100
5 0.31 0 100 0.25 0 100 0.31 0 100
Table 4.4: Performance Parameters of the JAFFE Database

Threshold   Haar                  Symlet                DB1
            FAR   FRR   %TSR      FAR   FRR   %TSR      FAR   FRR   %TSR
0 0 1 0 0 1 0 0 1 0
0.25 0 1 0 0 1 0 0 1 0
0.5 0 1 0 0 1 0 0 1 0
0.75 0 1 0 0 1 0 0 1 0
1 0 1 0 0 1 0 0 1 0
1.25 0 1 0 0 0.8 20 0 0.8 20
1.5 0 0.8 20 0 0.6 40 0 0.6 40
1.75 0 0.8 20 0 0.6 40 0 0.6 40
2 0 0.6 40 0 0.6 40 0 0.6 40
2.25 0 0.6 40 0 0.6 40 0 0.6 40
2.5 0 0.6 40 0 0.6 40 0 0.6 40
2.75 0 0.6 40 0 0.6 40 0 0.4 60
3 0 0.6 40 0.25 0.2 80 0.25 0.2 80
3.25 0 0.6 40 0.25 0.2 80 0.25 0.2 80
3.5 0 0.4 60 0.25 0.2 80 0.25 0.2 80
3.75 0.25 0.2 80 0.25 0.2 80 0.25 0.2 80
4 0.25 0.2 80 0.25 0 100 0.25 0 100
4.25 0.25 0.2 80 0.25 0 100 0.25 0 100
4.5 0.25 0.2 80 0.25 0 100 0.25 0 100
4.75 0.25 0 100 0.25 0 100 0.25 0 100
5 0.25 0 100 0.25 0 100 0.25 0 100
Table 4.5: Performance Parameters of the NIR Database

Threshold   Haar                  Symlet                DB1
            FAR   FRR   %TSR      FAR   FRR   %TSR      FAR   FRR   %TSR
0 0 1 0 0 1 0 0 1 0
0.25 0 0.69 30.77 0 0.72 27.69 0 0.69 30.77
0.5 0 0.45 55.38 0 0.51 49.23 0 0.45 55.38
0.75 0 0.28 72.31 0 0.29 70.77 0 0.28 72.31
1 0.15 0.22 78.46 0.13 0.22 78.46 0.15 0.22 78.46
1.25 0.35 0.17 83.08 0.33 0.17 83.08 0.35 0.17 83.08
1.5 0.52 0.08 90.77 0.52 0.08 90.77 0.52 0.08 90.77
1.75 0.8 0.03 93.85 0.78 0.03 93.85 0.8 0.03 93.85
2 0.93 0.02 93.85 0.93 0.02 93.85 0.93 0.02 93.85
2.25 0.93 0.02 93.85 0.93 0.02 93.85 0.93 0.02 93.85
2.5 0.94 0 93.85 0.94 0 95.38 0.94 0 93.85
2.75 0.96 0 93.85 0.94 0 95.38 0.96 0 93.85
3 0.96 0 93.85 0.96 0 95.38 0.96 0 93.85
3.25 0.98 0 93.85 0.98 0 95.38 0.98 0 93.85
3.5 0.98 0 93.85 0.98 0 95.38 0.98 0 93.85
3.75 0.98 0 93.85 0.98 0 95.38 0.98 0 93.85
4 0.98 0 93.85 0.98 0 95.38 0.98 0 93.85
4.25 0.98 0 93.85 0.98 0 95.38 0.98 0 93.85
4.5 0.98 0 93.85 0.98 0 95.38 0.98 0 93.85
4.75 0.98 0 93.85 0.98 0 95.38 0.98 0 93.85
5 0.98 0 93.85 0.98 0 95.38 0.98 0 93.85
Fig. 4.11: FAR and FRR with Threshold for L-Spacek with DWT Haar Wavelet
Fig. 4.12: FAR and FRR with Threshold for L-Spacek with DWT Symlet Wavelet
Fig. 4.13: FAR and FRR with Threshold for L-Spacek with DWT DB1 Wavelet
The variations of FAR and FRR with the threshold values for the JAFFE
face database using the Haar, Symlet and DB1 wavelets are shown in
Figures 4.14, 4.15 and 4.16 respectively. The FRR values decrease and
the FAR values increase as the threshold increases. The value of EER
is 0.2 for the Haar, Symlet and DB1 wavelets; hence the three wavelets
perform equally on the JAFFE face database.
Fig. 4.14: FAR and FRR with Threshold for JAFFE with DWT Haar Wavelet
Fig. 4.15: FAR and FRR with Threshold for JAFFE with DWT Symlet Wavelet
Fig. 4.16: FAR and FRR with Threshold for JAFFE with DWT DB1 Wavelet
Fig. 4.17: FAR and FRR with Threshold for NIR with DWT Haar Wavelet
Fig. 4.18: FAR and FRR with Threshold for NIR with DWT Symlet Wavelet
Fig. 4.19: FAR and FRR with Threshold for NIR with DWT DB1 Wavelet
The variations of FAR and FRR with the threshold values for the NIR
face database using the Haar, Symlet and DB1 wavelets are shown in
Figures 4.17, 4.18 and 4.19 respectively. The FRR values decrease and
the FAR values increase as the threshold increases. The value of EER
is 0.2 for the Haar, Symlet and DB1 wavelets; hence the three wavelets
perform equally on the NIR face database.
The EER values for the different transformations and face databases
are tabulated in Table 4.6. It is observed that the EER values are
better in the case of the FFT compared to the DWTs. The performance on
the L-Spacek database is better than on JAFFE and NIR with both the
DWT and FFT transformations.
Table 4.6: EER Values for Different Transforms

Database    DWT                          FFT
            Haar    Symlet    DB1
L-Spacek    0.01    0.2       0.01       0
JAFFE       0.2     0.2       0.2        0.15
NIR         0.2     0.2       0.2        0.2
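The EER reported in Table 4.6 is the operating point where FAR and FRR are equal. A sketch of its computation from a threshold sweep follows; the score lists are illustrative, not the thesis data.

```python
import numpy as np

# EER sketch: sweep the threshold, compute FAR and FRR at each point,
# and take the point where the two curves are closest to crossing.

def eer(genuine, impostor, thresholds):
    best = None
    for t in thresholds:
        frr = np.mean([d >= t for d in genuine])   # false reject rate
        far = np.mean([d < t for d in impostor])   # false accept rate
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2.0, t)
    return best[1], best[2]    # (EER estimate, threshold at EER)

genuine = [0.5, 0.8, 1.0, 1.2, 1.5]    # illustrative match distances
impostor = [1.1, 1.6, 2.0, 2.4, 3.0]   # illustrative non-match distances
rate, t = eer(genuine, impostor, np.linspace(0, 3, 301))
print(round(rate, 2))   # 0.2
```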
4.5 SUMMARY
Face recognition is a physiological biometric trait. Different face
databases are considered for the performance analysis. The PCFTD model
of Face Recognition using the Haar, Symlet and DB1 DWTs and the FFT is
proposed. The features of the face images are obtained using the Haar,
Symlet and DB1 wavelets as well as the FFT. The features of the test
image are compared with the database images using Euclidean Distance
(ED). The performance parameters FAR, FRR and TSR are computed using
the different transforms on the different face databases. It is
observed that the performance of the FFT is better compared to the
DWT.
CHAPTER 5
CONCLUSIONS
5.1 INTRODUCTION
Biometrics identifies people by measuring some aspect of individual
anatomy or physiology, such as hand geometry or a fingerprint; some
deeply ingrained skill or other behavioral characteristic, such as a
handwritten signature; or something that is a combination of the two,
such as voice. By implementing biometric technology, the risk of
traditional authentication credentials being stolen, lost or forgotten
can be eliminated. Biometrics refers to the automatic identification
(or verification) of an individual (or a claimed identity) using
certain physiological or behavioral traits associated with the person.
The proposed Transform Domain Fingerprint Identification based on
DTCWT is presented in chapter 3. The fingerprint is cropped and resized
to dimensions of 2^m × 2^n, which is suitable for DTCWT. The features of
fingerprint are extracted by applying seven levels of DTCWT. The
features are generated by concatenating magnitude and phase of DTCWT.
The test image features are compared with the database image features
using Euclidean Distance (ED). Level seven of DTCWT gives the best
results in terms of EER and TSR.
In chapter 4, the Performance Comparison of Face Recognition using
Transform Domain Techniques is proposed. The face databases L-Spacek,
JAFFE and NIR are considered for the performance analysis. The face
images are resized to the required dimensions and the color images are
converted into gray scale images. The wavelet families, viz., Haar,
Symlet and DB1, are applied on the face images to derive the four
subbands LL, LH, HL and HH. The face image features are obtained using
the LL subband only. The FFT is also applied on the face images to
extract features from the magnitudes. The test image features are
compared with the database images using ED. The performance parameters
FRR, FAR, TSR and EER are evaluated using the wavelet families and the
FFT on the L-Spacek, JAFFE and NIR face databases. The performance of
the algorithm is compared across the wavelet families and the FFT. The
EER values are better in the case of the FFT compared to the wavelet
families on the different face databases.
5.2 CONTRIBUTIONS OF THIS WORK
The different levels of DTCWT are applied on the cropped and resized
fingerprint images. The fingerprint features are extracted by
concatenating the magnitude and phase of the DTCWT coefficients. The
test image features are compared with the database images using ED.
The performance results are compared across seven levels of DTCWT. The
proposed algorithm gives the best results at level seven of DTCWT.
The different wavelet families and the FFT are used to verify the face
recognition algorithm. The L-Spacek, JAFFE and NIR face databases are
used to test the algorithm. The face images are resized, and the
wavelet families Haar, Symlet and DB1 are applied to derive the low
and high frequency components. The features of the face images are
extracted by considering the low frequency components only.
The FFT is also applied on the face images to extract features from
the magnitudes. The features of the test image are compared with the
database images using ED. The performance of the algorithm is compared
across the wavelet families and the FFT. The proposed algorithm gives
better results with the FFT than with the wavelet families.
5.3 FUTURE WORK
Fingerprint verification can be tested with a combination of Spatial
and Transform Domain techniques.
The fingerprint can be segmented into small parts, and Spatial or
Transform Domain techniques can be applied on the segments
simultaneously to generate features for better results.
Face identification can be tested with Hybrid Domain techniques.
The fingerprint and face features can be combined to identify a person
effectively.
BIBLIOGRAPHY
[1] Marcos Faundez-Zanuy, “Biometric Security Technology,”
Encyclopedia of Artificial Intelligence, pp. 262-264, 2009.
[2] Anil K Jain, Arun Ross and Salil Prabhakar, “An Introduction to
Biometric Recognition,” IEEE Transactions on Circuits and
Systems for Video Technology, vol. 14, no.1, pp. 4-20, 2004.
[3] Parvinder S Sandhu, Iqbaaldeep Kaur, Amit Verma, Samriti Jindal
and Shailendra Singh, “Biometric Methods and Implementation of
Algorithms,” International Journal of Electrical and Electronics
Engineering, vol. 3, no. 8, pp. 492-497, 2009.
[4] Debnath Bhattacharyya, Poulami Das, Samir Kumar
Bandyopadhyay and Tai-hoon Kim, “IRIS Texture Analysis and
Feature Extraction for Biometric Pattern Recognition,”
International Journal of Database Theory and Application, vol. 1,
pp. 53-61, 2007.
[5] J Daugman, “How Iris Recognition Works,” IEEE Transactions on
Circuits and Systems for Video Technology, vol. 14, pp. 21-30, 2004.
[6] R Wildes, “Iris Recognition: An Emerging Biometric Technology,”
Proceedings of the IEEE, vol. 85, pp. 1348-1363, 1997.
[7] Tolba A S, El-Baz, A H and El-Harby A A, “Face Recognition: A
Literature Review,” International Journal of Information and
Communication Engineering, vol. 2, no. 2, pp. 88-103, 2006.
[8] K Delac, M Grgic, and T Kos, “Sub-Image Homomorphic Filtering
Technique for Improving Facial Identification under Difficult
Illumination Conditions,” Thirteenth International Conference on
Systems, Signals and Image Processing, pp.95-98, 2006.
[9] A Abbas, M I Khalil, S Abdel-Hay and H M Fahmy, “Expression
and Illumination Invariant Preprocessing Technique for Face
Recognition,” Proceedings of the International Conference on
Computer Engineering and System, pp. 59-64, 2008.
[10] M Savvides and V Kumar, “Illumination Normalization using
Logarithm Transforms for Face Authentication,” Proceedings of
the Fifth International Conference on Audio and Video based
Biometric Person Authentication, pp. 549 -556, 2003.
[11] Jukka Holappa, Timo Ahonen and Matti Pietikainen, “An
Optimized Illumination Normalization Method for Face
Recognition,” Proceedings of the IEEE Second International
Conference on Biometrics: Theory, Applications and Systems, pp.
1-6, 2008.
[12] Debnath Bhattacharyya, Rahul Ranjan, Farkhod Alisherov A, and
Minkyu Choi, “Biometric Authentication: A Review,” International
Journal of u- and e- Service, Science and Technology, vol. 2, No. 3,
2009.
[13] John G. Daugman, “High Confidence Visual Recognition of Persons
by a Test of Statistical Independence,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 15, pp. 1148-1160, 1993.
[14] Ramesha K, K B Raja, Venugopal K R and L M Patnaik, “Feature
Extraction based Face Recognition, Gender and Age
Classification,” International Journal on Computer Science and
Engineering, vol. 02, pp. 14-23, 2010.
[15] Ramesha K and K B Raja, “Face Recognition System using
Discrete Wavelet Transform and Fast PCA,” Proceedings of the
International Conference on Advances in Information Technology
and Mobile Communication, pp. 13-18, 2011.
[16] Marian Stewart Bartlett, Javier R Movellan and Terrence J
Sejnowski, “Face Recognition by Independent Component
Analysis,” IEEE Transactions on Neural Networks, vol. 13, no. 6,
pp. 1450-1464, 2002.
[17] Yongsheng Gao and Leung M K H, “Face Recognition using Line
Edge Map,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 24, pp. 764-779, 2002.
[18] B Weyrauch J, Huangy, B Heisele and V Blanz, “Component-based
Face Recognition with 3D Morphable Models,” Proceedings of
First IEEE Workshop on Face Processing in Video, pp. 1-5, 2004.
[19] Ara V Nefian and Monson H Hayes, “Maximum Likelihood
Training of the Embedded HMM for Face Detection and
Recognition,” Proceedings of the IEEE International Conference
on Image Processing, vol. 1, pp. 33-36, 2000.
[20] Abdu Rahiman V and Jiji C V, “Face Hallucination using Eigen
Transformation in Transform Domain,” International Journal of
Image Processing, vol. 3, issue. 6, pp. 265-281, 2011.
[21] Sudha N and Mohan A R, “Hardware Directed Fast Eigenface
based Face Detection Algorithm using FFT,” Proceedings of IEEE
International Symposium on Industrial Electronics, pp. 915-919,
2009.
[22] Marios Savvides, Jingu Heo, Ramzi Abiantun, Chunyan Xie and
Vijay Kumar, “Class Dependent Kernel Discrete Cosine
Transform Features for Enhanced Holistic Face Recognition in
FRGC-II,” Proceedings of the IEEE International Conference on
Acoustics, Speech and Signal Processing, vol. 5, no. 2, pp. 185-
188, 2006.
[23] Derzu Omaia, Jankees V D Poel and Leonardo V Batista, “2D-DCT
Distance based Face Recognition using a Reduced Number of
Coefficients,” Proceedings of Twenty Second Brazilian Symposium
on Computer Graphics and Image Processing, pp. 291-298, 2009.
[24] Meihua Wang, Hong Jiang and Ying Li, “Face Recognition based
on DWT/DCT and SVM,” First International Conference on
Computer Application and System Modelling, pp. V3-507 - V3-
510, 2010.
[25] Alaa Eleyan, Huseyin Ozkaramanli and Hasan Demirel, “Complex
Wavelet Transform based Face Recognition,” Eurasip Journal on
Advances in Signal Processing, pp. 1-13, 2008.
[26] Chao-Chun Liu and Dao-Qing Dai, “Face Recognition using Dual-
Tree Complex Wavelet Features,” IEEE Transactions on Image
Processing, vol. 18, issue 11, pp. 2593-2599, 2009.
[27] Gunawan Sugiarta Y B, Riyanto Bambang, Hendrawan and
Suhardi, “Feature Level Fusion of Speech and Face Image based
Person Identification System,” Proceedings of the Second
International Conference on Computer Engineering and
Applications, pp. 221-225, 2010.
[28] Zhong Qu and Zheng yong Wang, “Research on preprocessing of
palmprint image based on adaptive threshold and Euclidian
distance,” International Conference on Natural Computation, vol.
8, pp. 4238-4242, 2010
[29] Cox I J, Ghosn J and Yianilos P N, “ Feature-based Face
Recognition using Mixture-distance,” Proceedings of the IEEE
Computer Society Conference on Computer Vision and Pattern
Recognition, pp. 209-216, 1996.
[30] Hong Yang and Yiding Wang, “A LBP-based Face Recognition
Method with Hamming Distance Constraint,” Proceedings of the
Fourth International Conference on Image and Graphics, pp. 645-
649, 2007.
[31] Vytautas Perlibakas, “Distance Measures for PCA-based Face
Recognition,” Pattern Recognition Letters, vol. 25, issue 6, pp.
711-724, 2004.
[32] Tefas A, Kotropoulos C and Pitas I, “Using Support Vector
Machines to Enhance the Performance of Elastic Graph Matching
for Frontal Face Authentication,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 23, pp. 735-746, 2001.
[33] S S Ranawade, “Face Recognition and Verification using Artificial
Neural Network,” International Journal of Computer Applications,
vol.1, no. 14, pp. 21-25, 2010.
[34] Albert Montillo and Haibin Ling, “Age Regression from Faces
using Random Forests,” Proceedings of the IEEE International
Conference on Image Processing, pp. 2465-2468, 2009.
[35] Gerard Biau, Luc Devroye and Gabor Lugosi, “Consistency of
Random Forests and Other Averaging Classifiers,” Journal of
Machine Learning Research, vol. 9, pp. 2015-2033, 2008.
[36] Nitesh V Chawla and Kevin W Bowyer, “Designing Multiple
Classifier Systems for Face Recognition,” Proceedings of Springer
International Workshop on Multiple Classifier Systems, pp. 407-416,
2005.
[37] Xiaoyan Mu, Mehmet Artiklar, Metin Artiklar, Mohamad H
Hassoun and Paul Watta, “Training Algorithms for Robust Face
Recognition using a Template Matching Approach,” Proceedings
of the IEEE International Joint Conference on Neural Networks,
vol. 4, pp. 2877-2882, 2001.
[38] Hochul Shin, Seong-Dae Kim and Hae-Chul Choi, “Generalized
Elastic Graph Matching for Face Recognition,” Pattern
Recognition Letters, vol. 28, issue 9, pp. 1077-1082, 2007.
[39] Yangfeng Ji, Tong Lin and Hongbin Zha, “Mahalanobis Distance
based Non-negative Sparse Representation for Face Recognition,”
Proceedings of the IEEE International Conference on Machine
Learning and Applications, pp. 41-46, 2009.
[40] M. Kucken and A. C. Newell, “Fingerprint Formation,”
Proceedings of Journal of Theoretical Biology, pp. 71–83, 2005.
[41] S. Greenberg, M. Aladjem and D Kogan, “Fingerprint Image
Enhancement using Filtering Techniques,” National Conference on
Real-Time Imaging , pp. 227-236, 2002.
[42] Bazen and Gerez, “Extraction of Singular Points from Directional
Fields of Fingerprints,” Annual Centre for Telematics and
Information Technology Workshop, vol. 24, pp. 905-919, July 2002.
[43] E. K. Yun and S. B. Cho, “Adaptive Fingerprint Image
Enhancement with Fingerprint Image Quality Analysis,”
International Conference on Image and Vision Computing, pp. 101–
110, 2006.
[44] M. P. Brankica and L. Maskovic, “Fingerprint Minutiae Filtering
Based on Multiscale Directional Information,” FACTA
Universitatis-Series: Electronics and Energetics, vol. 20, pp. 233-
244, 2007.
[45] F. A. Afsar, M. Arif and M. Hussain, “Fingerprint Identification
and Verification System using Minutiae Matching,” National
Conference on Emerging Technologies, pp. 141-146, 2004.
[46] Jagadeeswar Reddy, Jaya Chandra Prasad and Giri Prasad M N,
“Image Denoising by Curvelets,” International Journal of Image
Processing, pp. 154-160, 2003.
[47] K Zebbiche and F Khelifi, “Region-Based Watermarking of
Biometrics Images: Case Study in Fingerprint Images,”
International Journal of Digital Multimedia
Broadcasting, pp. 1-13, 2008.
[48] Bhupesh Gour, T K Bandopadhyaya and Sudhir Sharma,
“Fingerprint Feature Extraction using Midpoint Ridge Contour
Method and Neural Network,” International
Journal of Computer Science and Network Security, vol. 8, no.7,
pp. 99-103, 2008.
[49] Seung-Hoon Chae and Jong Ku Kim, “Ridge-Based Fingerprint
Verification for Enhanced Security,” Digest of Technical Papers
International Conference on Consumer Electronics, pp. 1-2, 2009.
[50] A N Marana and A K Jain, “Ridge-Based Fingerprint Matching
using Hough Transform,” Proceedings of the IEEE Brazilian
Symposium on Computer Graphics and Image Processing, pp. 112-
119, 2005.
[51] A K Jain, S Prabhakar and A Chen, “Combining Multiple Matchers
for a High Security Fingerprint Verification System,” Pattern
Recognition Letters, Elsevier Science Direct, vol. 20, pp. 1371-
1379, 1999.
[52] Fanglin Chen and Jie Zhou, “Reconstructing Orientation Field from
Fingerprint Minutiae to Improve Minutiae-Matching Accuracy,”
IEEE Transactions on Image Processing, vol. 18, no 4, pp. 1665-
1670, 2009.
[53] Chunxiao Ren and Yilong Yin, “A Linear Hybrid Classifier for
Fingerprint Segmentation,” Fourth International Conference on
Neural Computation, pp. 33-37, 2008.
[54] Hartwig Fronthaler and Klaus Kollreider, “Local Features for
Enhancement and Minutiae Extraction in Fingerprints,” IEEE
Transactions on Image Processing, vol. 17, no. 3, pp. 354-363,
2008.
[55] Chaohong Wu and Sergey Tulyakov, “Robust Point-Based Feature
Fingerprint Segmentation Algorithm,” International Conference on
Biometrics, pp. 1095-1103, 2007.
[56] Liu Wei and Zhou Cong, “Efficient Gradual Segmentation of
Fingerprint Images,” Proceedings of Sixth WSEAS International
Conference on Multimedia Systems and Signal Processing, pp. 67-
70, 2006.
[57] Hemanth Krishnappa and Hongyu Guo, “Fingerprint Verification
using Mutual Information,” Journal of Consortium for Computing
Sciences in Colleges, pp. 15-22, 2008.
[58] Swapnali Mahadik, K Narayanan, D V Bhoir and Darshana Shah,
“Access Control System using Fingerprint Recognition,”
International Conference on Advances in Computing,
Communication and Control, pp. 306-311, 2009.
[59] Yi Chen and A K Jain, “Beyond Minutiae: A Fingerprint
Individuality Model with Pattern, Ridge and Pore Features,”
International Conference on Biometrics, pp. 523-533, 2009.
[60] Jong Ku Kim, Seung-Hoon Chae, Sung Jin Lim and Sung Bum
Pan, “A Study on the Performance Analysis of Hybrid Fingerprint
Matching Methods,” International Journal of Future Generation
Communication and Networking, pp. 23-28, 2008.
[61] Manvjeet Kaur, Mukhwinder Singh, Akshay Giridhar and Parvinder
S Sandhu, “Fingerprint Verification System using Minutiae
Extraction Technique,” Proceedings of World Academy of Science,
Engineering and Technology, vol. 36, pp. 497-502, 2008.
[62] Liu Wei, “Fingerprint Classification using Singularities Detection,”
International Journal of Mathematics and Computers in
Simulation, vol. 2, issue 2, pp. 158-162, 2008.
[63] Arun Ross, Anil Jain and James Reisman, “A Hybrid Fingerprint
Matcher,” Proceedings of International Conference on Pattern
Recognition, pp. 1661-1673, 2003.
[64] Haiyun Xu, Raymond N J Veldhuis, Tom A M Kevenaar, and Ton
A H M Akkermans, “A Fast Minutiae-Based Fingerprint
Recognition System,” IEEE Systems Journal, vol.
3, no. 4, 2009.
[65] Du Jain, Mu Zhi Chun, T Yuan-Yan, D Tian and C Li-Min,
“Application of EMD and Fractal Technique in Fingerprint of
Medicinal Herbs,” Proceedings of the IEEE International
Conference on Wavelet Analysis and Pattern Recognition, pp. 10-
13, 2011.
[66] X Xiao and J Niu, “Review of EMD-based Image Fusion,” IEEE
International Conference on Intelligence Science and Information
Engineering, pp. 282-285, 2011.
[67] L Li and J Hongbing, “Signal Feature Extraction based on an
Improved EMD Method,” Elsevier Journal of Measurement, vol.
32, 2009.
[68] S Tachaphetpiboont and T Amornraksa, “Applying FFT Features
for Fingerprint Matching,” Proceedings of the IEEE Conference on
Wireless Pervasive Computing, pp. 1-5, 2006.
[69] M Keming, W Guoren and Y Changyong, “A Multi-Stage
Fingerprint Segmentation Method,” Proceedings of International
Conference on Intelligent System Knowledge Engineering, pp.
1141-1145, 2008.
[70] A Baig, A Bouridane and F Kurugollu, “A Corner Strength Based
Fingerprint Segmentation Algorithm with Dynamic Thresholding,”
IEEE International Conference on Pattern Recognition, pp. 1-4,
2008.
[71] S Aayush, “Step Integration Based Information Fusion For
Multimodal Biometrics,” IEEE International Workshop on Systems,
Signals and Image Processing, pp. 213-216, 2007.
[72] B Y Teoh and Pang Y H, “Touch-less Fingerprint Recognition
System,” IEEE Workshop on Automatic Identification Advanced
Technologies, vol. 7, pp. 24 – 29, 2007.
[73] Tiande Guo, Chengming Wen and Yuyang Zhou, “A Novel and
Efficient Algorithm for Segmentation of Fingerprint Image Based
on Local Binary Pattern Operator,” IEEE International Conference
on Information Technology and Computer Science, pp. 200-204,
2009.
[74] Chen Yu, Zhang Jian, Yi Bo and Chen Deyun, “A Novel Principal
Component Analysis Neural Network Algorithm for Fingerprint
Recognition in Online Examination System,” Asia-Pacific
Conference on Information Processing, pp. 182 – 186, 2009.
[75] Conti V, Militello, Sorbello F and Vitabile S, “A User-Friendly
Interface for Fingerprint Recognition Systems Based on Natural
Language Processing,” International Conference on Complex,
Intelligent and Software Intensive Systems, pp. 736 – 741, 2009.
[76] Cappelli R and Maltoni D, “On the Spatial Distribution of
Fingerprint Singularities,” IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 31, pp. 742 – 448, 2009.
[77] Chunfeng Hu, Jianping Yin, En Zhu, Hui Chen and Yong Li,
“Fingerprint Alignment using Special Ridges,” International
Conference on Pattern Recognition, pp. 1-4, 2008.
[78] Dadgostar, M Tabrizi, P R Fatemizadeh and E Soltanian-Zadeh,
“Feature Extraction Using Gabor-Filter and Recursive Fisher
Linear Discriminant with Application in Fingerprint Identification,”
Seventh International Conference on Advances in Pattern
Recognition, pp. 217 – 220, 2009.
[79] Yong Li, Jianping Yin, En Zhu, Chunfeng Hu and Hui Chen,
“Score Based Biometric Template Selection and Update,”
International Conference on Future Generation Communication
and Networking, pp. 35 – 40, 2008.
[80] Xiaohui Ren, Jinfeng Yang, Henghui Li and Renbiao Wu, “Multi-
Fingerprint Information Fusion for Personal Identification Based on
Improved Dempster-Shafer Evidence Theory,” International
Conference on Electronic Computer Technology, pp. 281 – 285,
2009.
[81] Jain, A K, Yi Chen and Demirkus, “Pores and Ridges: High-
Resolution Fingerprint Matching Using Level 3 Features,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, pp. 15-
17, 2007.
[82] Sudiro S A, Paindavoine M and Maulana Kusuma, “Image
Enhancement in Simple Fingerprint Minutiae Extraction Algorithm
using Crossing Number on Valley Structure,” International
Conference on Intelligent and Advanced Systems, pp. 655 – 659,
2007.
[83] Cappelli R, Lumini A, Maio D and Maltoni D, “Fingerprint Image
Reconstruction from Standard Templates,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, pp. 1489 - 1503, 2007.
[84] Thi Hoi Le and Bui T D, “A Codeword-Based Indexing Scheme for
Fingerprint Identification”, International Conference on Control,
Automation, Robotics and Vision, pp. 1352 – 1356, 2008.
[85] Seong-Jin Kim, Kwang-Hyun Lee, Sang-Wook Han and Euisik
Yoon, “A CMOS Fingerprint System-On-a-Chip With Adaptable
Pixel Networks and Column-Parallel Processors for Image
Enhancement and Recognition,” IEEE Journal of Solid-State
Circuits, vol. 43, pp. 2558 – 2567, 2008.
[86] Liu Wenzhou, Meng Xiangping, Linna, L and Yuan Quande, “A
Kind of Effective Fingerprint Recognition Algorithm and
Application in Examinee Identity Recognition,” International
Conference on Computer Science and Software Engineering, pp.
1035 – 1039, 2008.
[87] Farooq F, Bolle R M, Tsai-Yang Jea and Ratha N, “Anonymous
and Revocable Fingerprint Recognition,” IEEE Conference on
Computer Vision and Pattern Recognition, pp. 1-7, 2007.
[88] Baig A, Bouridane A and Kurugollu F, “A Corner Strength Based
Fingerprint Segmentation Algorithm with Dynamic Thresholding,”
International Conference on Pattern Recognition, pp. 1-4, 2008.
[89] Galy N, Charlot B and Courtois B, “A Full Fingerprint Verification
System for a Single-Line Sweep Sensor,” IEEE Sensors Journal,
vol. 7, pp. 1054 – 1065, 2007.
[90] Falguera, Marana A N and Falguera, J R, “Fusion of Fingerprint
Recognition Methods for Robust Human Identification,” IEEE
International Conference on Computational Science and
Engineering, pp. 413 – 420, 2008.
[91] Lopez M and Canto E, “FPGA Implementation of a Minutiae
Extraction Fingerprint Algorithm,” pp. 1920 – 1925, 2008.
[92] Daniel Ashlock, Eun Youn Kim and Ashlock W, “Fingerprint
Analysis of the Noisy Prisoner's Dilemma Using a Finite-State
Representation,” IEEE Transactions on Computational Intelligence
and AI in Games, vol. 1, pp. 154 – 167, 2009.
[93] Dalwon Jang and Yoo C D, “Fingerprint Matching Based on
Distance Metric Learning,” IEEE International Conference on
Acoustics, Speech and Signal Processing, pp. 1529 – 1532, 2009.
[94] Woo Kyu Lee and Jae Ho Chung, “Fingerprint Recognition
Algorithm Development Using Directional Information in Wavelet
Transform Domain,” IEEE International Symposium on Circuits
and Systems, vol. 2, pp. 1201 – 1204, 1997.
[95] Xiangping Meng, Zengguang Wu and Yulan Zhao, “New
Algorithm of Automation Fingerprint Recognition,” IEEE
International Conference on Automation and Logistics, pp. 838 –
842, 2008.
[96] Jiao Ruili and Fan Jing, “VC5509A Based Fingerprint
Identification Preprocessing System,” International Conference on
Signal Processing, pp. 2859 – 2863, 2008.
[97] Han Xi, Niu Wenliang and Li Zheying, “Application of Fingerprint
Recognition on the Laboratory Management”, International
Conference on Signal Processing, pp. 2960 – 2963, 2008.
[98] Jeffery P Hansen and Masatoshi Sekine, “Decision Diagram Based
Techniques for the Haar Wavelet Transform,” Proceedings of IEEE
International Conference on Information, Communications and
Signal Processing, pp. 59 – 64, September 1997.
[99] Kun Ma and Xiaoou Tang, “Face Recognition Using Discrete
Wavelet Graph,” IEEE International Conference on Signal and
Image Processing, pp. 117-121, 2003.
[100] Duan-Sheng Chen and Zheng Kai Liu, “Generalized Haar-Like
Features for Fast Face Detection,” Proceedings of the Sixth
International Conference on Machine Learning and Cybernetics,
pp. 2131-2136, August 2007.
[101] Paul Nicholl and Abbes Amira, “DWT/PCA Face Recognition
using Automatic Coefficient Selection,” IEEE International
Symposium on Electronic Design, Test & Applications, pp. 390-
394, 2008.
[102] Jun Ying Gan and Jun Feng Liu, “Fusion and Recognition of Face
and Iris Feature Based on Wavelet Feature and KFDA,”
Proceedings of International Conference on Wavelet Analysis and
Pattern Recognition, pp. 47-52, July 2009.
[103] Sudha N and Mohan A R, “Hardware Directed Fast Eigenface
based Face Detection Algorithm using FFT,” IEEE International
Symposium on Industrial Electronics, pp. 915-920, July 2009.
[104] Satiyan M, Hariharan M and Nagarajan R, “Comparison of
Performance using Daubechies Wavelet Family for Facial
Expression Recognition,” IEEE International Colloquium on
Signal Processing & Its Applications, pp. 354-359, 2010.
[105] Hengliang Tang, Yanfeng Sun, Baocai Yin and Yun Ge, “Face
Recognition Based on Haar LBP Histogram,” IEEE International
Conference on Advanced Computer Theory and Engineering, pp.
235-239, 2010.
[106] Hafiz Imtiaz and Shaikh Anowarul Fattah, “A Face Recognition
Scheme Using Wavelet-based Local Features,” IEEE Symposium
on Computers & Informatics, pp. 313-318, 2011.
[107] Ramesha K and Raja K B, “Performance Evaluation of Face
Recognition based on DWT and DT-CWT using Multi-matching
Classifiers,” IEEE International Conference on Computational
Intelligence and Communication Systems, pp. 601-606, 2011.
[108] Masashi Nishiyama and Osamu Yamaguchi, “Face Recognition
Using the Classified Appearance-based Quotient Image,” IEEE
International Conference on Automatic Face and Gesture
Recognition, pp. 49-54, 2006.
[109] H B Kekre, Sudeep D Thepade and Akshay Maloo, “Face
Recognition using Texture Features Extracted from Walshlet
Pyramid,” International Journal on Recent Trends in Engineering
and Technology, vol. 5, no. 1, pp. 185-190, 2011.
[110] Kailash J K and Sanjay N T, “Independent Component Analysis of
Edge Information for Face Recognition,” International Journal of
Image Processing, vol. 3, pp. 120-130, 2009.
[111] Mohamed El Aroussi, Mohammed El Hassouni, Sanaa Ghouzali,
Mohammed Rziza and Driss Aboutajdine, “Local Steerable
Pyramid Binary Pattern Sequence LSPBPS for Face Recognition
Method,” International Journal of Signal Processing, pp. 281-284,
2009.
[112] Sanqiang Zhao and Yongsheng Gao, “Establishing Point
Correspondence Using Multidirectional Binary Pattern for Face
Recognition,” IEEE International Conference on Pattern
Recognition, pp. 1-4, 2008.
[113] Rerkchai Fooprateepsiri and Werasak Kurutach, “Facial
Recognition using Hausdorff-Shape-Radon Transform,”
International Journal of Digital Content Technology and its
Applications, vol. 3, pp. 67-74, 2009.
[114] Vitomir Struc and Nikola Pavesic, “Gabor-Based Kernel Partial-
Least-Squares Discrimination Features for Face Recognition,”
Institute of Mathematics and Informatics, vol. 20, no. 1, pp. 115-
138, 2009.
[115] Taskeed Jabid, Md Hasanul Kabir and Oksam Chae, “Robust Facial
Expression Recognition Based on Local Directional Pattern,”
Journal of Electronics and Telecommunications Research Institute,
vol. 32, pp. 784-794, 2010.
[116] Arif Muntasa, Indah Agustien Sirajudin and Mauridhi Hery
Purnomo, “Appearance Global and Local Structure Fusion for Face
Image Recognition,” Indonesian Journal of Electrical Engineering,
vol. 9, no. 1, pp. 125-132, 2011.
[117] Kelsey Ramirez-Gutierrez, Daniel Cruz-Perez, Jesús Olivares-
Mercado, Mariko Nakano-Miyatake and Hector Perez-Meana, “A
Face Recognition Algorithm using Eigenphases and Histogram
Equalization,” International Journal of Computers, vol. 5, pp. 34-
41, 2011.
[118] K Jaya Priya and R S Rajesh, “Dual Tree Complex Wavelet
Transform based Face Recognition with Single View,” The
International Conference on Computing, Communications and
Information Technology, vol. 5, 2010.
[119] Reza Ebrahimpour, Ahmad Jahani, Ali Amiri and Masoom Nazari,
“Expression-Independent Face Recognition Using Biologically
Inspired Features,” Indian Journal of Computer Science and
Engineering, vol. 2, no. 3, pp. 492-499, 2011.
[120] John Adedapo and Adeniran S A, “One-Sample Face Recognition
Using HMM Model of Fiducial Areas,” International Journal of
Image Processing, vol. 5, pp. 58-68, 2011.
[121] Timo Ojala and Matti Pietikäinen, “Multiresolution Gray-Scale and
Rotation Invariant Texture Classification with Local Binary
Patterns,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
[122] Ramesh K, K B Raja, Venugopal K R and L M Patnaik, “Template
Based Mole Detection for Face Recognition,” International Journal
of Computer Theory and Engineering, vol. 2, no. 5, pp. 797-804,
2010.
[123] D Murugan, S Arumugam, K Rajalaskhmi and Manish T,
“Performance Evaluation of Face Recognition using Gabor Filter,
Log Gabor Filter and Discrete Wavelet Transform,” International
Journal of Computer Science and Information Technology, vol. 2,
no. 1, pp. 125-133, 2010.
[124] Ramesha K and K B Raja, “Dual Transform based Feature
Extraction for Face Recognition,” International Journal of
Computer Science, vol. 8, no. 2, pp. 115-121, 2011.
[125] N G Kingsbury, “The Dual-Tree Complex Wavelet Transform,”
IEEE Signal Processing Magazine, pp. 123-151, 2005.
[126] Baochang Zhang, Lei Zhang, David Zhang and Linlin Shen,
“Directional Binary Code with Application to PolyU Near Infrared
Face Database,” Pattern Recognition Letters, pp. 123-130, 2010.
[127] Haihong Zhang and Yan Guo, “Facial Expression Recognition
Using Continuous Dynamic Programming,” IEEE International
Conference on Automatic Face and Gesture Recognition, pp. 163-
167, 2001.
[128] Praseeda Lekshmi V and M Sasikumar, “RBF Based Face
Recognition and Expression Analysis,” Proceedings of World
Academy of Science, Engineering and Technology, vol. 32, pp.
589-592, 2008.
[129] S T Gandhe, K T Talele and A G Keskar, “Face Recognition Using
Contour Matching,” IAENG International Journal of Computer
Science, vol. 35, 2008.
[130] Dakshina Ranjan Kisku, Hunny Mehrotra, Jamuna Kanta Sing and
Phalguni Gupta, “SVM-Based Multiview Face Recognition by
Generalization of Discriminant Analysis,” International Journal of
Intelligent Systems and Technologies, pp. 174-179, 2008.
[131] D Maio, D Maltoni, R Cappelli, J L Wayman and A K Jain,
“FVC2004: Third Fingerprint Verification Competition,”
Documentation on FVC 2004, pp. 1-8, 2004.
[132] N G Kingsbury, “The Dual-Tree Complex Wavelet Transform: A
New Technique for Shift Invariance and Directional Filters,”
Proceedings of the 8th IEEE DSP Workshop, pp. 124-130, 1998.
[133] N G Kingsbury, “Image Processing with Complex Wavelets,”
Philosophical Transactions of the Royal Society A: Mathematical,
Physical, Engineering and Sciences, vol. 357, no. 1760, pp. 2543–
2560, 1999.
[134] N G Kingsbury, “Complex Wavelets for Shift Invariant Analysis
and Filtering of Signals,” Applied and Computational Harmonic
Analysis, vol. 10, pp. 234–253, 2001.
[135] J K Romberg, M Wakin, H Choi, N G Kingsbury, and R G
Baraniuk, “A Hidden Markov Tree Model for the Complex
Wavelet Transform,” IEEE Transactions on Signal Processing, pp.
1-28, 2002.
[136] C W Shaffrey, N G Kingsbury, and I H Jermyn, “Unsupervised
Image Segmentation via Markov Trees and Complex Wavelets,”
Proceedings of IEEE International Conference on Image
Processing, vol. 3, pp. 801–804, 2002.
[137] J Romberg, H Choi, R G Baraniuk and N G Kingsbury, “Multiscale
Classification using Complex Wavelets and Hidden Markov Tree
Models,” Proceedings of IEEE International Conference on Image
Processing, vol. 2, pp. 371–374, 2000.
[138] P F C de Rivaz and N G Kingsbury, “Bayesian Image
Deconvolution and Denoising using Complex Wavelets,”
Proceedings of IEEE International Conference on Image
Processing, vol. 2, pp. 273–276, 2001.
[139] A Jalobeanu, N G Kingsbury and J Zerubia, “Image Deconvolution
using Hidden Markov Tree Modeling of Complex Wavelet
Packets,” Proceedings of IEEE International Conference on Image
Processing, vol. 1, pp. 201–204, 2001.
[140] F Shi, I W Selesnick, and S Cai, “Image Sharpening via Image
Denoising in the Complex Wavelet Domain,” Proceedings of
Wavelet Applications Signal Image Processing, pp. 467–474, 2003.
[141] J F A Magarey and N G Kingsbury, “Motion Estimation using a
Complex-Valued Wavelet Transform,” IEEE Transactions on
Signal Processing, vol. 46, pp. 1069–1084, 1998.
[142] T H Reeves and N G Kingsbury, “Over Complete Image Coding
using Iterative Projection-Based Noise Shaping,” Proceedings of
IEEE International Conference on Image Processing, vol. 3, pp.
597–600, 2003.
[143] K Sivaramakrishnan and T Nguyen, “A Uniform Transform
Domain Video Codec based on Dual Tree Complex Wavelet
Transform,” Proceedings of IEEE International Conference on
Acoustic Speech and Signal Processing, vol. 3, pp. 1821–1824,
2001.
[144] B Wang, Y Wang, I Selesnick and A Vetro, “Video Coding using
3-D Dual Tree Discrete Wavelet Transforms,” Proceedings of
IEEE International Conference on Acoustic Speech and Signal
Processing, vol. 2, pp. 61–64, 2005.
[145] J W Earl and N G Kingsbury, “Spread Transform Watermarking
for Video Sources,” Proceedings of IEEE International Conference
on Image Processing, vol. 2, pp. 491–494, 2003.
[146] P Loo and N G Kingsbury, “Digital Watermarking using Complex
Wavelets,” Proceedings of IEEE International Conference on
Image Processing, vol. 3, pp. 29–32, 2000.
[147] P F C de Rivaz and N G Kingsbury, “Complex Wavelet Features
for Fast Texture Image Retrieval,” Proceedings of IEEE
International Conference on Image Processing, vol. 1, pp. 109–
113, 1999.
[148] S Hatipoglu, S K Mitra, and N G Kingsbury, “Image Texture
Description using Complex Wavelet Transform,” Proceedings of
IEEE International Conference on Image Processing, vol. 2, pp.
530–533, 2000.
[149] M Kokare, P K Biswas, and B N Chatterji, “Rotation Invariant
Texture Features using Rotated Complex Wavelet for Content
Based Image Retrieval,” Proceedings of IEEE International
Conference on Image Processing, vol. 1, pp. 393–396, 2004.
[150] E Lo, M Pickering, M Frater, and J Arnold, “Scale and Rotation
Invariant Texture Features from the Dual-Tree Complex Wavelet
Transform,” Proceedings of IEEE International Conference on
Image Processing, vol. 1, pp. 227–230, 2004.
[151] M Miller, N Kingsbury, and R Hobbs, “Seismic Imaging using
Complex Wavelets,” Proceedings of IEEE International Conference
on Acoustic Speech and Signal Processing, vol. 2, pp. 557–560,
2005.
[152] E Causevic, R John, J Kovacevic and A Jacquin, “Adaptive
Complex Wavelet based Filtering of EEG for Extraction of Evoked
Potential Responses,” Proceedings of IEEE International
Conference on Acoustic Speech and Signal Processing, vol. 5, pp.
393–396, March 2005.
[153] Saidi A, “Decimation-in-Time-Frequency FFT Algorithm,” IEEE
International Conference on Acoustics, Speech, and Signal
Processing, vol. 3, pp. 453 – 456, 1994.
[154] Papoulis E V and Stathaki T, “A DFT Algorithm Based on Filter
Banks: the Extended Subband DFT,” Proceedings of International
Conference on Image Processing, vol. 1, pp. 1-6, 2003.
LIST OF PUBLICATIONS BASED ON THE THESIS
International Journals
[1] Jossy P George, S K Abhilash and K B Raja, “Transform Domain
Fingerprint Identification Based on DTCWT,” International Journal
of Advanced Computer Science and Applications, vol. 3, no.1, pp. 190-
198, 2012.
[2] Jossy P George, Saleem S Tevarmani and K B Raja, “Performance
Comparison of Face Recognition using Transforms Domain
Techniques,” World of Computer Science and Information Technology
Journal, vol. 2, no.3, pp. 82-86, 2012.
International Conferences
[3] Jossy P George, S K Abhilash, M D Chethana and K B Raja,
“Performance Analysis of Fingerprint Identification Using Different
Levels of DTCWT,” Proceedings of International Conference on
Information and Computer Applications, Hong Kong, vol. 24, pp. 185-
192, February 2012.
[4] Jossy P George and K B Raja, “Performance Analysis of Face
Recognition using Wavelet Families and FFT,” International
Conference on Computer Technology and Science, New Delhi, 2012
(Communicated).
Appendix A Publications
APPENDIX B
MATLAB TUTORIAL
Introduction
MATLAB is an abbreviation for "Matrix Laboratory." It is an
interactive, high-performance language for numerical computation and
graphics in which problems can often be solved faster than with
traditional programming languages such as C, C++, and Fortran.
MATLAB is designed mainly for matrix computations: every MATLAB
variable is a multidimensional array, whatever its data type, and
problems and solutions are expressed in familiar mathematical notation.
Typical uses of MATLAB include math and computation, algorithm
development, data acquisition, modelling, simulation and prototyping,
data analysis, exploration and visualization, scientific and engineering
graphics, and application development. Since MATLAB solves problems
numerically, rather than with infinite-precision arithmetic, it produces
approximate rather than exact solutions.
1. Starting/quitting
To start MATLAB, click the ‘Start’ button at the bottom left of the
screen, then click ‘All Programs’, then ‘Math and Stats’, then ‘Matlab’.
A window pops up consisting of three smaller windows: on the right, a
large window entitled ‘Command Window’; on the left, two windows,
one entitled ‘Workspace’ and the other ‘Command History’. To quit,
type quit in the Command Window (the letters should appear after the
prompt) and hit Enter; MATLAB will close.
2. A = imread(filename, fmt)
This command reads a grayscale or color image from the
file specified by the string filename. If the file is not in the current
directory, or in a directory on the MATLAB path, specify the full path
name.
The text string fmt specifies the format of the file by its standard file
extension. For example, specify ‘gif’ for Graphics Interchange Format
files. To see a list of supported formats, with their file extensions, use the
imformats function. If imread cannot find a file named filename, it looks
for a file named filename.fmt.
The return value A is an array containing the image data. If the file
contains a grayscale image, A is an M-by-N array. If the file contains a
true color image, A is an M-by-N-by-3 array. For TIFF files containing
color images that use the CMYK color space, A is an M-by-N-by-4 array.
See TIFF in the Format-Specific Information section for more
information.
The class of A depends on the bits-per-sample of the image data,
rounded to the next byte boundary. For example, imread returns 24-bit
color data as an array of uint8 values, because the sample size for each
color component is 8 bits. See Remarks for a discussion of bit depths,
and see Format-Specific Information for more detail about supported
bit depths and sample sizes for a particular format.
[X, map] = imread(…) reads the indexed image in filename into X and its
associated colormap into map. Colormap values in the image file are
automatically rescaled into the range [0, 1].
[…] = imread (filename) attempts to infer the format of the file from its
content.
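A minimal sketch of these forms (the filenames here are illustrative, not files shipped with MATLAB; any image on the MATLAB path works):

```matlab
% Read an image; the format is inferred from the extension or content.
% 'fingerprint.png' and 'indexed.gif' are illustrative filenames.
A = imread('fingerprint.png');   % grayscale: M-by-N; true color: M-by-N-by-3
whos A                           % reports the size and class, e.g. uint8

% Read an indexed image and its colormap (map values rescaled to [0, 1]).
[X, map] = imread('indexed.gif', 'gif');
```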
3. imshow(I)
imshow(I) displays the grayscale image I. imshow is an Image
Processing Toolbox command that treats the matrix as an image: it
assumes that the elements are pixel intensities.
imshow(I,[low high]) displays the grayscale image I, specifying the
display range for I in [low high]. The value low (and any value less than
low) displays as black; the value high (and any value greater than high)
displays as white. Values in between are displayed as intermediate shades
of gray, using the default number of gray levels. If you use an empty
matrix ([]) for [low high], imshow uses [min(I(:)) max(I(:))]; that is, the
minimum value in I is displayed as black, and the maximum value is
displayed as white.
imshow(RGB) displays the true color image RGB.
imshow(BW) displays the binary image BW. imshow displays pixels with
the value 0 (zero) as black and pixels with the value 1 as white.
imshow(X,map) displays the indexed image X with the colormap map. A
color map matrix may have any number of rows, but it must have exactly
3 columns. Each row is interpreted as a color, with the first element
specifying the intensity of red light, the second green, and the third blue.
Color intensity can be specified on the interval 0.0 to 1.0.
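The display-range variants above can be sketched as follows (the filename is illustrative):

```matlab
I = imread('fingerprint.png');   % illustrative grayscale image
imshow(I);                       % default display range for the class of I
imshow(I, [50 200]);             % values <= 50 show black, >= 200 show white
imshow(I, []);                   % stretch [min(I(:)) max(I(:))] to black..white
```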
4. edge
BW = edge(I) takes a grayscale or a binary image I as its input, and
returns a binary image BW of the same size as I, with 1's where the
function finds edges in I and 0's elsewhere.
By default, edge uses the Sobel method to detect edges, but the following
provides a complete list of all the edge-finding methods supported by this
function:
The Sobel method finds edges using the Sobel approximation to the
derivative. It returns edges at those points where the gradient of I is
maximum.
The Prewitt method finds edges using the Prewitt approximation
to the derivative. It returns edges at those points where the gradient of I is
maximum.
The Roberts method finds edges using the Roberts approximation
to the derivative. It returns edges at those points where the gradient of I is
maximum.
The Laplacian of Gaussian method finds edges by looking for zero
crossings after filtering I with a Laplacian of Gaussian filter.
The zero-cross method finds edges by looking for zero crossings
after filtering I with a filter you specify.
The Canny method finds edges by looking for local maxima of the
gradient of I. The gradient is calculated using the derivative of a Gaussian
filter. The method uses two thresholds, to detect strong and weak edges,
and includes the weak edges in the output only if they are connected to
strong edges. This method is therefore less likely than the others to be
fooled by noise, and more likely to detect true weak edges.
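A short sketch comparing some of these methods on an image (the filename is illustrative):

```matlab
I = imread('fingerprint.png');   % illustrative grayscale image
BW1 = edge(I);                   % Sobel (the default method)
BW2 = edge(I, 'log');            % Laplacian of Gaussian: zero crossings
BW3 = edge(I, 'canny');          % Canny: two thresholds, keeps weak edges
                                 % only when connected to strong ones
figure, imshow(BW3);             % BW3 is a binary image the same size as I
```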
5. fft2 / ifft2
The MATLAB functions fft, fft2, and fftn (and their inverses ifft,
ifft2, and ifftn, respectively) all use fast Fourier transform algorithms to
compute the DFT.
Y = fft2(X) returns the two-dimensional discrete Fourier transform (DFT)
of X, computed with a fast Fourier transform (FFT) algorithm. The result
Y is the same size as X.
Y = fft2(X,m,n) truncates X, or pads X with zeros, to create an m-by-n
array before computing the transform. The result is m-by-n.
Y = ifft2(X) returns the two-dimensional inverse discrete Fourier
transform (DFT) of X, computed with a fast Fourier transform (FFT)
algorithm. The result Y is the same size as X.
ifft2 tests X to see whether it is conjugate symmetric. If so, the
computation is faster and the output is real. An M-by-N matrix X is
conjugate symmetric if X(i,j) =conj(X(mod(M-i+1, M) + 1, mod(N-j+1,
N) + 1)) for each element of X. Y = ifft2(X,m,n) returns the m-by-n
inverse fast Fourier transform of matrix X.
y = ifft2(..., 'symmetric') causes ifft2 to treat X as conjugate symmetric.
This option is useful when X is not exactly conjugate symmetric, merely
because of round-off error. y = ifft2(..., 'nonsymmetric') is the same as
calling ifft2(...) without the argument 'nonsymmetric'.
For any X, ifft2(fft2(X)) equals X to within roundoff error.
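The round-trip property and the padded form can be sketched as (the filename is illustrative):

```matlab
I = double(imread('fingerprint.png'));  % illustrative image, cast to double
Y = fft2(I);                            % 2-D DFT, same size as I
Z = ifft2(Y, 'symmetric');              % Y is conjugate symmetric up to
                                        % round-off, so the output is real
err = max(abs(I(:) - Z(:)));            % near zero: ifft2(fft2(X)) ~ X
Yp = fft2(I, 256, 256);                 % zero-pad/truncate to 256-by-256 first
```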
6. Concatenation
Concatenation is the process of joining arrays to make larger ones.
In fact, you made your first array by concatenating its individual
elements. The pair of square brackets [] is the concatenation operator.
Here a is assumed to be the 3-by-3 matrix a = [1 2 3; 4 5 6; 7 8 10]
created earlier in the tutorial.
A = [a, a]
A =
     1     2     3     1     2     3
     4     5     6     4     5     6
     7     8    10     7     8    10
Concatenating arrays next to one another using commas is called
horizontal concatenation. Each array must have the same number of rows.
Similarly, when the arrays have the same number of columns, you can
concatenate vertically using semicolons.
A = [a; a]
A =
     1     2     3
     4     5     6
     7     8    10
     1     2     3
     4     5     6
     7     8    10