FINGERPRINT IMAGE CLASSIFICATION AND
RETRIEVAL USING STATISTICAL METHODS
A Thesis submitted to Gujarat Technological University
for the Award of
Doctor of Philosophy
in
Computer IT Engineering
by
Sudhir P Vegad
119997107012
under the supervision of
Dr. Tanmay Pawar
GUJARAT TECHNOLOGICAL UNIVERSITY AHMEDABAD
August – 2018
© Sudhir P Vegad
DECLARATION
I declare that the thesis entitled Fingerprint Image Classification and Retrieval using
Statistical Methods submitted by me for the degree of Doctor of Philosophy is the record
of research work carried out by me during the period from October 2011 to August 2018
under the supervision of Dr. Tanmay Pawar and this has not formed the basis for the award
of any degree, diploma, associateship, fellowship, titles in this or any other University or
other institution of higher learning.
I further declare that the material obtained from other sources has been duly acknowledged
in the thesis. I shall be solely responsible for any plagiarism or other irregularities, if
noticed in the thesis.
Signature of the Research Scholar: ......................... Date: ….………
Name of Research Scholar: Sudhir P Vegad
Place: Vallabh Vidyanagar, Anand
CERTIFICATE
I certify that the work incorporated in the thesis Fingerprint Image Classification and
Retrieval using Statistical Methods submitted by Sudhir P Vegad was carried out by the
candidate under my supervision / guidance. To the best of my knowledge: (i) the
candidate has not submitted the same research work to any other institution for any
degree/diploma, Associateship, Fellowship or other similar titles (ii) the thesis submitted
is a record of original research work done by the Research Scholar during the period of
study under my supervision, and (iii) the thesis represents independent research work on
the part of the Research Scholar.
Signature of Supervisor: .................................................. Date: ………………
Name of Supervisor: Dr. Tanmay Pawar
Place: Vallabh Vidyanagar, Anand
Course-work Completion Certificate
This is to certify that Mr. Sudhir P Vegad, enrolment no. 119997107012, is a PhD scholar
enrolled for PhD program in the branch Computer IT Engineering of Gujarat
Technological University, Ahmedabad.
(Please tick the relevant option(s))
□ He / She has been exempted from the course-work (successfully completed during
M.Phil Course)
□ He / She has been exempted from Research Methodology Course only (successfully
completed during M.Phil Course)
□ He / She has successfully completed the PhD course work for the partial requirement
for the award of PhD Degree. His/ Her performance in the course work is as follows-
Grade Obtained in Research Methodology (PH001): BB
Grade Obtained in Self Study Course (Core Subject) (PH002): BB
Supervisor's Signature
(Dr. Tanmay Pawar)
Originality Report Certificate
It is certified that PhD Thesis Fingerprint Image Classification and Retrieval using
Statistical Methods by Sudhir P Vegad has been examined by us. We undertake the
following:
a. The thesis contains significant new work / knowledge as compared to that already
published or under consideration for publication elsewhere. No sentence, equation,
diagram, table, paragraph or section has been copied verbatim from previous work
unless it is placed under quotation marks and duly referenced.
b. The work presented is original and is the author's own work (i.e. there is no
plagiarism). No ideas, processes, results or words of others have been presented as
the author's own work.
c. There is no fabrication of data or results which have been compiled / analyzed.
d. There is no falsification by manipulating research materials, equipment or
processes, or changing or omitting data or results such that the research is not
accurately represented in the research record.
e. The thesis has been checked using Turnitin (copy of originality report attached) and
found within limits as per the GTU Plagiarism Policy and instructions issued from time
to time (i.e. permitted similarity index <= 25%).
Signature of the Research Scholar: .................................................. Date: ….………
Name of Research Scholar: Sudhir P Vegad
Place: Vallabh Vidyanagar, Anand
Signature of Supervisor: .................................................. Date: ……………
Name of Supervisor: Dr. Tanmay Pawar
Place: Vallabh Vidyanagar, Anand
PhD THESIS Non-Exclusive License to
GUJARAT TECHNOLOGICAL UNIVERSITY
In consideration of being a PhD Research Scholar at GTU and in the interests of the
facilitation of research at GTU and elsewhere, I, Sudhir P Vegad (Full Name of the
Research Scholar) having (Enrollment No.) 119997107012 hereby grant a non-exclusive,
royalty free and perpetual license to GTU on the following terms:
a) GTU is permitted to archive, reproduce and distribute my thesis, in whole or in
part, and/or my abstract, in whole or in part (referred to collectively as the
"Work") anywhere in the world, for non-commercial purposes, in all forms of
media;
b) GTU is permitted to authorize, sub-lease, sub-contract or procure any of the acts
mentioned in paragraph (a);
c) GTU is authorized to submit the Work at any National / International Library,
under the authority of their "Thesis Non-Exclusive License";
d) The Universal Copyright Notice (©) shall appear on all copies made under the
authority of this license;
e) I undertake to submit my thesis, through my University, to any Library and
Archives. Any abstract submitted with the thesis will be considered to form part of
the thesis.
f) I represent that my thesis is my original work, does not infringe any rights of
others, including privacy rights, and that I have the right to make the grant
conferred by this non-exclusive license.
g) If third party copyrighted material was included in my thesis for which, under the
terms of the Copyright Act, written permission from the copyright owners is
required, I have obtained such permission from the copyright owners to do the acts
mentioned in paragraph (a) above for the full term of copyright protection.
h) I retain copyright ownership and moral rights in my thesis, and may deal with the
copyright in my thesis, in any way consistent with rights granted by me to my
University in this non-exclusive license.
i) I further promise to inform any person to whom I may hereafter assign or license
my copyright in my thesis of the rights granted by me to my University in this non-
exclusive license.
j) I am aware of and agree to accept the conditions and regulations of PhD including
all policy matters related to authorship and plagiarism.
Signature of the Research Scholar: ..................................................
Name of Research Scholar: Sudhir P Vegad
Date:.................................................. Place: Vallabh Vidyanagar, Anand
Signature of Supervisor: ..................................................
Name of Supervisor: Dr. Tanmay Pawar
Date:.................................................. Place: Vallabh Vidyanagar, Anand
Seal:
Thesis Approval Form
The viva-voce of the PhD Thesis submitted by Shri/Smt./Kum. Sudhir P Vegad (Enrollment
No. 119997107012) entitled Fingerprint Image Classification and Retrieval using
Statistical Methods was conducted on …………………….………… (day and date) at
Gujarat Technological University.
(Please tick any one of the following option)
□ The performance of the candidate was satisfactory. We recommend that he/she be
awarded the PhD degree.
□ The panel recommends further modifications in the research work; the viva-voce
may be re-conducted by the same panel after 3 months from the date of the first
viva-voce, upon the request of the Supervisor or of the Independent Research
Scholar.
□ The performance of the candidate was unsatisfactory. We recommend that he/she
should not be awarded the PhD degree.
----------------------------------------------------------------
--------------------------------------------------------------
Name and Signature of Supervisor with Seal 1) (External Examiner 1) Name and Signature
---------------------------------------------------------------
---------------------------------------------------------------
2) (External Examiner 2) Name and Signature 3) (External Examiner 3) Name and Signature
ABSTRACT
Human fingerprints are popular biometrics due to their characteristics of universality,
uniqueness, permanence, collectability and acceptability. Fingerprints are rich in details
known as minutiae, which serve as identification marks to uniquely identify a fingerprint
for various purposes. There are four basic patterns of fingerprint ridges: arch, left loop,
right loop and whorl.

Fingerprint classification is an important indexing scheme that narrows down the search of
a fingerprint database for efficient large-scale retrieval. It remains a challenging problem
due to intrinsic class ambiguity and the difficulty posed by poor-quality fingerprints.
Fingerprint retrieval is a prerequisite step for many applications such as recognition and
registration, and feature extraction plays a very important role in the retrieval process. The
work presented in this research has two phases: classification and retrieval. The first phase
classifies the fingerprint image, and the second phase performs retrieval from the class
identified in phase one. In the classification phase, two classification methods are used. In
the first method, scale- and rotation-invariant GLCM and LBP features are concatenated to
enhance the local information from the fingerprint images for a powerful representation. In
the second method, SURF and SIFT features are concatenated to strongly represent the
class-uniqueness of fingerprint images. Classification is carried out by applying these
features to the classification models kNN, SVM and BoF. The retrieval phase matches the
fingerprint image features using distance-matching methods to retrieve similar images from
the class declared in the classification phase. Experimental results and comparisons on the
FVC2000, FVC2002, FVC2004 and NIST-4 databases have shown the effectiveness and
superiority of the proposed method for fingerprint classification and retrieval.
Acknowledgement
It is a pleasure to have the opportunity to express my gratitude to the many people who
accompanied and supported me during the journey of my work. Firstly, I offer my
adoration to the Almighty, who created me and gave me the strength and courage to
complete my research work. I would like to express my deep gratitude to my supervisor Dr. Tanmay
Pawar, Professor and Head, Department of Electronics Engineering, BVM Engineering
College, Vallabh Vidyanagar, for his guidance and whole hearted cooperation. His helping
nature, valuable moral support and appreciation have inspired me and will continue to
inspire for the rest of my career.
I would also like to extend my sincere gratitude to my Foreign Co-Supervisor, Dr. Arun
Somani, Associate Dean for Research, IOWA State University, IA, for his cooperation,
help and motivation throughout the research work. I would like to thank my doctoral
progress committee: Dr. Narendra M Patel, Associate Professor, BVM Engineering
College and Dr. Shanmuganathan Raman, Assistant Professor, Electrical Engineering
Department, IIT Gandhinagar, for their insightful comments and encouragement during
the entire period of the research work.
It is an honour for me to express my deep sense of gratitude to Dr. C. L. Patel Sir, former
Chairman, Charutar Vidya Mandal (CVM) for giving permission to pursue in-service
Ph.D. I am extremely thankful to Dr. R. K. Jain Principal, ADIT for his kind support,
continuous inspiration and providing necessary resources during the research work. I take
this opportunity to thank Dr. Narendra Chauhan, Head, Information Technology
Department, ADIT for providing me mental support and inspiration. It is my pleasure to
acknowledge the help and suggestions from Mr. Prashant Italiya, an alumnus of ADIT.
I would also like to thank my students for helping me in carrying out this work. I also
thank the teaching and non-teaching staff of the Department of Information Technology
and the Computer Engineering Department, ADIT, who participated and helped me in
creating the fingerprint image dataset.
Words fall short to express my gratitude to my family members for all the sacrifices
they have made for me. I owe my deepest gratitude to my parents, my sister and my
in-laws for their moral support. My heartiest thanks to my wife and true inspiration,
Zankhana Shah,
for her care and all the sacrifices. This work would not have been completed without her
unconditional love and support during the entire period of my work. I am short of words to
express my loving gratitude to my adorable son Dirgh, whose innocent smile and love have
given me mental support during the entire work.
Sudhir P Vegad
Table of Contents
1 Introduction
1.1 Overview 1
1.1.1 Performance Metrics 4
1.2 Fingerprints 6
1.2.1 Principles of Fingerprint 8
1.2.2 Applications 8
1.2.3 Types of Fingerprints 9
1.2.4 Classification of Finger Prints 10
1.2.5 Fingerprint Detection Methods 14
1.2.6 Fingerprint Retrieval 17
1.3 Motivation 21
1.4 Research objectives 22
1.5 Research Contribution 22
1.6 Organization of the thesis 23
2 Theoretical Background and Literature Survey
2.1 Introduction 24
2.2 Biometric Security 24
2.3 Fingerprint Recognition System 26
2.4 Fingerprint Acquisition Methods 28
2.5 Fingerprint Pre-processing 29
2.6 Fingerprint Retrieval 33
3 Classification using LBP and GLCM Features
3.1 Introduction 42
3.1.1 Fingerprint Classification 42
3.1.2 Feature Extraction 43
3.1.3 Feature extraction methods 44
3.2 Methodology 48
3.2.1 Gray level co-occurrence matrix (GLCM) 48
3.2.2 Local Binary Patterns (LBP) 51
3.2.3 Fingerprint Classification using Support Vector Machine (SVM) 52
3.2.4 Fingerprint Classification using kNN 53
3.2.5 Combining GLCM and LBP 55
3.3 Experimental results 55
3.3.1 Classification Results using SVM Classifier 56
3.3.2 Classification Results with kNN Classifier 60
3.4 Concluding remarks 64
4 Classification using Bag of Features (BoF)
4.1 Introduction 65
4.2 Methodology 66
4.2.1 Feature Extraction using Speeded Up Robust Features (SURF) 66
4.2.2 Fingerprint Classification using Bag of Features (BoF) 68
4.2.3 Classification model of SURF Features with BoF 71
4.3 Experimental results 72
4.4 Concluding remarks 76
5 Retrieval using Scale and Rotation Invariant Robust Features
5.1 Introduction 78
5.2 Methodology 79
5.2.1 Feature Extraction using Scale Invariant Feature Transform (SIFT) 80
5.2.2 Feature Extraction using fusion method of SURF and SIFT 84
5.2.3 Retrieval using Bag of Features 84
5.3 Experimental results 91
5.4 Concluding remarks 94
6 Conclusion and Further Directions
6.1 Conclusion 95
6.2 Further Directions 96
List of References 97
List of Publications 111
List of Abbreviations
AFIT Advanced Fingerprint Identification Technology
BoF Bag of Features
EER Equal Error Rate
FAR False Accept Rate
FMR False Match Rate
FRR False Rejection Rate
GLCM Gray Level Co-occurrence Matrix
JPEG Joint Photographic Experts Group
kNN K-Nearest Neighbour
LBP Local Binary Pattern
LDA Linear Discriminant Analysis
PCA Principal Component Analysis
PCASYS Pattern-Level Classification Automation System
ROC Receiver Operating Characteristics
SIFT Scale Invariant Feature Transform
SURF Speeded-Up Robust Features
SVM Support Vector Machine
UIDAI Unique Identification Authority of India
List of Figures
1.1 Parameters of Biometrics 3
1.2 Confusion Matrix for calculating hit rate for a classification model 4
1.3 A basic ROC graph showing five discrete classifiers 5
1.4 Fingerprint Classes and Subclasses, a) Arch, b) Loop and c) Whorl 11
1.5 Block Diagram of Fingerprint Retrieval system 19
2.1 Typical Fingerprint-based Biometric System 26
2.2 Typical flow of fingerprint image pre-processing 30
3.1 Taxonomy of features used for fingerprint classification 44
3.2 Co-occurrence matrix directions for extracting texture features 49
3.3 GLCM construction based on a test image along four possible directions (0°, 45°, 90° and 135°) with distance d = 1 50
3.4 Example of LBP operator 51
3.5 Process of generating LBP features 52
3.6 Hyper-plane for classifying samples in SVM 53
3.7 Example for kNN 54
3.8 Classification flow using Fusion features with SVM / KNN 55
3.9 Sample images from the dataset used; a) NIST – 4, b) Captured 56
3.10 Confusion matrix of Testcase1 using SVM Classifier 57
3.11 Confusion matrix of Testcase2 using SVM Classifier 57
3.12 Confusion matrix of Testcase3 using SVM Classifier 58
3.13 Confusion matrix of Testcase4 using SVM Classifier 58
3.14 Confusion matrix of Testcase5 using SVM Classifier 59
3.15 Analysis of Results of various Testcases (with SVM Classifier) 59
3.16 Analysis of Results of various classes (with SVM Classifier) 60
3.17 Confusion matrix of Testcase1 using kNN Classifier 61
3.18 Confusion matrix of Testcase2 using kNN Classifier 61
3.19 Confusion matrix of Testcase3 using kNN Classifier 62
3.20 Confusion matrix of Testcase4 using kNN Classifier 62
3.21 Confusion matrix of Testcase5 using kNN Classifier 63
3.22 Analysis of Results of various Testcases (with kNN Classifier) 63
3.23 Analysis of Results of various classes (with kNN Classifier) 64
4.1 Haar wavelet for gradient calculation in (a) x-direction and (b) y-direction 67
4.2 Example for Bag of Feature Classification 69
4.3 Histogram representation for BoF 70
4.4 Classification flow using SURF features with BoF 71
4.5 Sample images from the dataset used; a) NIST – 4, b) Captured 72
4.6 Confusion matrix of Testcase1 using BoF Classifier 73
4.7 Confusion matrix of Testcase2 using BoF Classifier 73
4.8 Confusion matrix of Testcase3 using BoF Classifier 74
4.9 Confusion matrix of Testcase4 using BoF Classifier 74
4.10 Confusion matrix of Testcase5 using BoF Classifier 75
4.11 Analysis of Results of various Testcases (with BoF Classifier) 75
4.12 Analysis of Results of various classes (with BoF Classifier) 76
5.1 Keypoint localization at different scales 82
5.2 Orientation Histogram 83
5.3 Computation of Keypoint Descriptors 83
5.4 Image Retrieval Based on Bag of Words 86
5.5 Example for k-Means Clustering with 4 clusters 89
5.6 Retrieval Flow 90
5.7 Sample images from the dataset used; a) FVC2002, b) FVC2004 and c) Captured 92
List of Tables
3.1 Comparative Summary of feature extraction and classification methods 47
4.1 Comparison of average Testcase accuracy and class accuracy of the approaches used for classification 76
5.1 Retrieval Result analysis of FVC dataset images without performing classification phase 93
5.2 Retrieval Result analysis of FVC dataset images after performing classification phase 93
5.3 Retrieval Result analysis of Captured Image dataset images without performing classification phase 93
5.4 Retrieval Result analysis of Captured Image dataset images after performing classification phase 94
CHAPTER – 1
Introduction
1.1 Overview
Security concerns are a major worry in the present day and continue to grow in intensity
and complexity. Security is an aspect that is given top priority by organizations,
educational institutions, political bodies and governments with respect to identity and
fraud issues. In light of new threats, organizations need to implement or update a personnel
security program to prevent unauthorized access to control systems and critical data [1]. As
per the Personnel Security Guidelines, effective techniques need to be developed for
personnel security and control-system integrity. Organizations ought to operate securely so
that they do not constitute an unacceptable security risk that could affect the well-being
of other personnel or the general public.
Reliable authentication of individuals has long been an attractive goal, and it is becoming
more important as the need for security grows. Technologies called biometrics can
automate the identification of people by one or more of their distinctive physical or
behavioural attributes. They are pattern recognition systems, which identify an individual
by determining the authenticity of the specific characteristic possessed by the user [2].
Biometrics has come to occupy an increasingly critical role in human identification, due
fundamentally to its universality and uniqueness. Because of this advancement, a new type
of systems and strategies for user identity recognition and verification has appeared, based
on the biometric features that are unique to every individual [3].
Biometrics is the science of verifying and establishing the identity of a person through
physiological features or behavioural characteristics. Biometric technologies vary in
complexity, capabilities and performance; however, all share several common elements.
Biometric identification systems are essentially pattern recognition systems. They use
acquisition devices, such as cameras and scanning devices, to capture images, recordings
or measurements of a person's characteristics, and computer hardware and software to
extract, encode, store and compare these characteristics [4]. As the procedure is
computerized, biometric decision-making is generally fast and, in most cases, takes only a
few seconds in real time.
Biometrics is used as a form of identity access management and access control. It is also
used to identify individuals in groups that are under surveillance. The use of biometrics
goes back over a thousand years. In East Asia, potters set their fingerprints on their
products as an early form of brand identity. In Egypt's Nile Valley, traders were formally
identified on the basis of physical attributes such as height, eye colour and complexion [5].
This information identified trusted traders with whom merchants had successfully
transacted business before. The Old Testament likewise gives early examples of voice
recognition and biometric spoofing. Since 1970, biometrics has been used to identify and
authenticate a person using a set of recognizable and verifiable data unique and specific to
that person.

One of the first commercial applications came in 1972, when a Wall Street firm, Shearson
Hamill, installed Identimat, a finger-measurement device that served as a time-keeping and
monitoring application. Since then, biometrics has improved immensely in convenience
and diversity of applications. The advancement of biometrics has been driven by the
increased computing power available at lower cost, better algorithms, and the cheaper
storage systems available today [6].
There are two modes in which biometric systems can be used: verification and
identification. 'Verification', also called 'authentication', is used to check a person's
identity, that is, to confirm that people are who they say they are. 'Identification' is used to
establish a person's identity, that is, to determine who a person is. Even though biometric
technologies measure different characteristics in substantially different ways, all biometric
systems involve similar processes that can be separated into two distinct stages [7]:
Enrolment & Verification or Identification.
Biometric characteristics can be divided into two main classes:
Physiological characteristics
Behavioural characteristics
Physiological characteristics are related to the shape of the body and include fingerprint,
face, DNA, ear, palm print, hand geometry, iris, retina and smell/scent. Behavioural
characteristics are related to the behaviour of a person, such as signature, keystroke
dynamics, gait and voice. Some researchers have coined the term behaviometrics to
describe the latter class of biometrics.
Voice is also a physiological characteristic, since each individual has a different vocal
tract; however, voice recognition is mainly based on the analysis of the way a person
speaks, and is therefore usually termed behavioural. Whether a human characteristic can be
used for biometrics can be understood in terms of the following parameters:
FIGURE 1.1: Parameters of Biometrics
Acceptability: Degree of approval of a technology.
Circumvention: Ease of use of a substitute.
Collectability: Ease of acquisition for measurement.
Performance: Accuracy, speed, and robustness of technology used.
Permanence: Measures how well a biometric resists aging and other variance over time.
Uniqueness: Indicates how well the biometric separates one individual from another.
Universality: Each person should have the characteristic.
By using biometrics it is possible to establish an identity based on who you are, rather than
by what you have, such as an ID card, or what you remember, such as a password. In some
applications, biometrics may be used to supplement ID cards and passwords, thereby
providing an additional level of security. Such a scheme is often called a two-factor
authentication scheme [8].

Most of the works found in the literature on biometric-based verification use parameters
like iris, voice, fingerprint and face characteristics, among others. Out of these, for quite
some time, fingerprints have been among the most widely used and accepted biometrics.
This is evidenced by the Chinese, who have used fingerprints to sign documents for more
than 1000 years.
1.1.1 Performance Metrics
A classification model is a mapping from instances to predicted classes [187]. As shown in
Fig. 1.2, each instance I is mapped to one element of the set {p, n} of positive or negative
class labels. To distinguish between the actual class and the predicted class, the labels
{Y, N} are used for the predictions produced by the classification model.
FIGURE 1.2: Confusion Matrix for calculating hit rate for a classification model

                               True Class
                             p                  n
Hypothesized Class   Y   True Positives    False Positives
                     N   False Negatives   True Negatives

Column Totals:               P                  N
Given a classifier and an instance, there are four possible outcomes. If the instance is
positive and it is classified as positive, it is counted as a true positive; if it is classified as
negative, it is counted as a false negative. If the instance is negative and it is classified as
negative, it is counted as a true negative; if it is classified as positive, it is counted as a
false positive. Given a classifier and a set of instances (the test set), a two-by-two
confusion matrix (also called a contingency table) can be constructed representing the
dispositions of the set of instances. This matrix forms the basis for many common metrics.
The true positive rate and false positive rate can be calculated by the following equations:
True Positive (TP) Rate = Positives Correctly Classified / Total Positives (1.1)
False Positive (FP) Rate = Negatives Incorrectly Classified / Total Negatives (1.2)
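The four outcomes and the two rates above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the thesis; the function names and the toy label lists are invented for the example:

```python
# Minimal sketch of the two-by-two confusion matrix of Fig. 1.2 and of
# Eqs. (1.1)-(1.2). Labels: True = positive class (p), False = negative (n).

def confusion_counts(actual, predicted):
    """Count TP, FP, FN, TN from parallel lists of boolean labels."""
    tp = sum(a and p for a, p in zip(actual, predicted))
    fp = sum((not a) and p for a, p in zip(actual, predicted))
    fn = sum(a and (not p) for a, p in zip(actual, predicted))
    tn = sum((not a) and (not p) for a, p in zip(actual, predicted))
    return tp, fp, fn, tn

def tp_fp_rates(actual, predicted):
    tp, fp, fn, tn = confusion_counts(actual, predicted)
    tp_rate = tp / (tp + fn)   # Eq. (1.1): correct positives / total positives
    fp_rate = fp / (fp + tn)   # Eq. (1.2): wrong positives / total negatives
    return tp_rate, fp_rate

# Toy example: 4 actual positives and 4 actual negatives
actual    = [True, True, True, True, False, False, False, False]
predicted = [True, True, True, False, False, False, True, False]
print(tp_fp_rates(actual, predicted))  # (0.75, 0.25)
```

Each (FP rate, TP rate) pair computed this way is exactly one point in the ROC space discussed next.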
The TP rate and FP rate are plotted on a receiver operating characteristics (ROC) graph on
the Y axis and X axis respectively. Each discrete classifier produces an (FP rate, TP rate)
pair corresponding to a single point in ROC space. Fig. 1.3 shows five such pairs, labelled
A through E. One point in ROC space is better than another if it is to the northwest (TP
rate is higher, FP rate is lower, or both). D's performance is perfect in the case shown in
Fig. 1.3.
FIGURE 1.3: A basic ROC graph showing five discrete classifiers
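The "northwest" rule can be expressed as a small dominance predicate over (FP rate, TP rate) points. The coordinates below are hypothetical stand-ins for the five classifiers of Fig. 1.3, not the actual plotted values:

```python
# Hypothetical ROC points (FP rate, TP rate), one per discrete classifier.
points = {
    "A": (0.1, 0.6),
    "B": (0.6, 0.7),
    "C": (0.4, 0.5),
    "D": (0.0, 1.0),   # perfect: no false positives, all positives found
    "E": (0.8, 0.3),
}

def dominates(p, q):
    """p lies to the 'northwest' of q: FP rate no higher, TP rate no lower."""
    return p[0] <= q[0] and p[1] >= q[1] and p != q

# With these illustrative coordinates, D dominates every other point
print(all(dominates(points["D"], points[k]) for k in "ABCE"))  # True
```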
In the case of fingerprint classification, if a fingerprint actually belongs to a class but is not
identified as such by the classifier, it falls into the false negative category. As fingerprint
classification is used as a prior step for many fingerprint applications like retrieval,
recognition, etc., false
negatives are more problematic, as they will never give the correct result in the retrieval or
recognition stage.
There are a few other parameters or measurements for biometric system verification.

False Accept Rate or False Match Rate (FAR or FMR): The likelihood that the
system incorrectly matches the input sample to a non-matching template in the database. It
measures the percentage of invalid inputs that are incorrectly accepted.

False Reject Rate or False Non-Match Rate (FRR or FNMR): The likelihood that the
system fails to detect a match between the input pattern and a matching template in the
database. It measures the percentage of valid inputs that are incorrectly rejected.

Equal Error Rate or Crossover Error Rate (EER or CER): The rate at which both
FAR and FRR are equal. The value of the EER can easily be obtained from the ROC
curve, and it offers a quick way to compare the accuracy of devices with different ROC
curves. In general, the device with the lowest EER is the most accurate. The EER is
obtained from the ROC plot by taking the point where FAR and FRR have the same value.
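A minimal way to estimate the EER from matcher scores is to sweep a decision threshold, compute FAR and FRR at each step, and take the point where the two rates are closest. This is a sketch under assumed toy score lists, not the thesis's own evaluation code:

```python
# Estimate the Equal Error Rate by a simple threshold sweep.

def far_frr(genuine_scores, impostor_scores, threshold):
    """Scores at or above the threshold are accepted as matches."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def equal_error_rate(genuine_scores, impostor_scores, steps=100):
    lo = min(impostor_scores + genuine_scores)
    hi = max(impostor_scores + genuine_scores)
    best = None
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        far, frr = far_frr(genuine_scores, impostor_scores, t)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]   # EER ~ the common value of FAR and FRR at the crossover

# Toy matcher scores: genuine pairs score high, impostor pairs score low
genuine  = [0.9, 0.8, 0.85, 0.7, 0.6]
impostor = [0.2, 0.3, 0.1, 0.65, 0.4]
print(equal_error_rate(genuine, impostor))  # 0.2
```

The device (or matcher) whose score distributions yield the lowest EER under this sweep is, in general, the most accurate.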
1.2 Fingerprints
Fingerprint, palm print, hand vein, hand/finger geometry, iris, retina, face, ear, scent or
the DNA information of an individual come under the physiological category. A fingerprint
system looks at the patterns found on a fingertip. There is a variety of approaches to
fingerprint verification. Some imitate the traditional police method of matching minutiae;
others use straight pattern-matching devices; and still others are more novel, including
things like moiré fringe patterns and ultrasonics. Some verification approaches can detect
when a live finger is presented; some cannot [9].

Fingerprint recognition is among the best known and most widely used biometric
technologies and is commercially available. A fingerprint-based biometric is a system that
recognizes a person by determining the authenticity of his/her fingerprint.
The fingerprint properties of a person are extremely precise and are unique to an
individual. Verification systems based on fingerprints have been demonstrated to produce
low false acceptance rates and false rejection rates, along with other advantages like an
easy and low-cost implementation procedure [10]. Additionally, the fingerprint impression
usually remains unaltered from birth until death. Apart from being unique and constant,
fingerprints can be collected in a non-intrusive manner with no side effects. The present
fingerprint matching technology is quite mature for matching full prints, while matching
partial fingerprints still needs a lot of improvement. The need for partial fingerprint
recognition systems is increasing in both civilian and forensic applications [11].

On the civilian side, with the production of small hand-held devices such as cell phones,
miniaturized fingerprint sensors deliver only small parts of fingerprints. Miniaturization of
fingerprint sensors has led to small sensing areas of 0.42" x 0.42". However, as per FBI
specifications, an average fingerprint impression area of 1.0" x 1.0" is needed to collect
minutiae points precisely [12]. The reduced fingerprint size diminishes the accuracy of
recognition when using partial fingerprints, and research is still ongoing to improve this.
The most desirable properties of high accuracy, low error rate and maximum speed are yet
to be achieved with an Automatic Partial Fingerprint Recognition System (APFS).
Fingerprint recognition is one of the oldest forms of biometrics and has proved to be
highly reliable. The fingerprint impression of each individual is distinctive, can belong to
only a single person, and does not change during their lifetime. Moreover, the hardware
and software for fingerprint recognition are progressively becoming cheaper [13]. Using
fingerprints for authentication is a well-accepted solution, and a majority of the population
has legible fingerprints, more than the number of people who have passports, licences and
identity cards. The fingerprint has very distinctive features and is one of the most accurate
forms of biometrics available. In addition, fingerprints, as biometrics, are widely accepted
for identity authentication due to the following conveniences [14]:
i) The acquisition process is non-intrusive, requiring no training.
ii) It is the most economical biometric authentication method.
iii) Only a small storage space is required for the biometric template, reducing the size of the database memory required.
1.2.1 Principles of Fingerprint
The essential principles of fingerprint technology were scientifically established by Sir Francis Galton, a British scientist regarded as the Father of Fingerprint Science, and are given below [15]:
Permanency or Persistency: Fingerprints never change from birth to death, until decomposition sets in. The ridges on the friction skin of the fingers are formed with fixed and permanent characteristics within one and a half to two months of gestation and are fully developed in 180 days. These formations enlarge with the growth of the body and thus remain permanent until the skin decomposes or is damaged.
Individuality or Variety: No two fingerprints are identical unless they are taken from the same finger of the same person. Fingerprints taken from different digits of the same person will also not be identical. In this sense, fingerprints are God's own seal to identify a person.
Immutability of Fingerprints: As mentioned earlier, there are two layers in human skin, namely the epidermis and the dermis. Simple injuries, age, growth, peeling of skin, warts, wrinkles, burns, and skin diseases affect the ridges in the epidermis only temporarily. Over time, the affected ridges regain their original pattern and ridge characteristics. However, skin injuries that scar the dermis are permanent and consequently alter the formation of patterns and ridges.
Consequently, it can be understood that fingerprints are dependable throughout human life and are therefore the most reliable, truthful, unchangeable and steadfast means of establishing the identity of a person.
1.2.2 Applications
Fingerprint biometrics has been used in numerous applications including entrance control and door-lock applications, smart cards, vehicle ignition control systems and fingerprint-controlled access. The fingerprint is one of the most frequent and most reliable sources used in criminology to identify criminals and fraudulent persons. It is also used in the identification of unidentified bodies, where fingerprint records are compared with existing databases [16]. Fingerprints are increasingly used in many important service-oriented sectors such as banking, licensing and passports, where their use greatly reduces the incidence of forgery, impersonation and fraud. They are used in property and civil disputes, where the use of authentic records of the registration office and on documents settles critical cases [17].
Recently, fingerprints captured from newborns have been used in hospitals to avoid the mistaken exchange of babies, by using a single card to store the prints of the mother and child together. Fingerprinting is also used for old-age pensioners and in many public services such as the railways and electricity boards, where temporary workers are fingerprinted to prevent impersonation frauds [18]. Nowadays, fingerprints are used to unlock computers, and software-based locking systems are provided for the strong rooms of banks and even the doors of houses, thereby preventing unauthorized entry. Therefore, with such wide usage, fingerprint recognition systems are in high demand.
1.2.3 Types of Fingerprints
Traditionally, three types of fingerprints can be acquired, namely exemplar prints, patent prints and plastic prints [19].
Exemplar prints, or known prints, is the name given to fingerprints deliberately collected from a subject, whether for the purpose of enrollment in a system or when under arrest for a suspected criminal offense. These prints can be collected using Live Scan or by using ink on paper cards.
Patent prints are chance friction ridge impressions which are clear to the human eye and which have been caused by the transfer of foreign material from a finger onto a surface. Some obvious examples would be impressions made in flour and wet clay. Since they are already visible and need no enhancement, they are generally photographed instead of being lifted in the way latent prints are. Patent prints can be left on a surface by materials such as ink, dirt, or blood.
A plastic print is a friction ridge impression left in a material that retains the shape of the ridge detail. Although very few criminals would be careless enough to leave their prints in a piece of wet clay, this would make a perfect plastic print. Commonly encountered examples are melted candle wax, putty removed from the perimeter of window panes and thick grease deposits on car parts. Such prints are already visible and require no enhancement. Irrespective of these three kinds, the fingerprints thus collected can fall into three categories, namely Rolled/full, Plain/flat and Latent/partial.
Rolled fingerprint images are acquired by rolling a finger from one side to the other ("nail-to-nail") in order to capture all the ridge details of a finger. Plain impressions are those in which the finger is pressed down on a flat surface but not rolled. Rolled and plain impressions are acquired either by scanning the inked impression on paper or by using live-scan devices. Since rolled and plain fingerprints are acquired in an attended mode, they are usually of good quality and rich in information content.
The fingers have tiny outlets for the expulsion of sweat, and these outlets pick up salts, oils and tiny particles of dirt along the way. These particles can then be passed to hard surfaces and generate fingerprints that are not visible to the naked eye. Such fingerprints are known as latent prints. Latent prints are visualized using magnesium powder, which is gently brushed over hard and shiny surfaces in order to reveal them. Once this is done, the prints can be photographed or 'lifted' using a variety of different tapes without delay, as some prints can fade quickly [168].
1.2.4 Classification of Finger Prints
Dr. Jan E. Purkinje, Professor of Physiology at the University of Breslau, characterized all fingerprints into eight types: the transverse curves, the central longitudinal stripe, the oblique stripe, the almond whorl, the spiral whorl, the ellipse, the circle and the double whorl [20]. Edward Richard Henry [181] adapted Dr. Jan E. Purkinje's eight types and grouped them into four primary classes according to the level of their distribution in the whole population of the world: Loop (65-67%), Whorl (25%), Arch (6-7%) and Composite or Accidental or Chance (3-4%). This scheme is known as the "Henry-Galton method" or "Henry method", its name being derived from its originators Sir Francis Galton and Sir Edward Richard Henry. The Henry system of classification is the most effective and is in almost universal use.
Fig. 1.4 shows the subclasses of each primary class, Arch, Loop and Whorl.
FIGURE 1.4: Fingerprint Classes and Subclasses, a) Arch, b) Loop and c) Whorl
Arch: These are characterized by a slight rise in the ridges, which enter on one side of the finger and exit on the opposite side. Arches are of two subtypes: the plain arch and the tented arch. Neither the plain arch nor the tented arch has a delta formation [23].
Plain Arch: The plain arch is the simplest of all fingerprint patterns, and is easily distinguished. In plain arches the ridges enter on one side of the impression and flow, or tend to flow, out on the other side with a rise or wave in the centre. There may be various ridge formations, such as ending ridges, bifurcations, dots and islands, involved in this type of pattern, but they all tend to follow the general ridge contour, i.e. they enter on one side, make a rise or a wave in the centre, and flow or tend to flow out on the opposite side.
Tented Arch: The tented arch is one in which most of the ridges enter on one side of the impression and flow, or tend to flow, out on the other side, as in the plain arch type; however, the ridge or ridges at the centre do not. Tented arches and some forms of loops are often confused. It should be remembered that the mere converging of two ridges does not form a recurve, without which there can be no loop formation.
Exceptional Arch: The structure of these is more varied than that of ordinary tented arches, and may incorporate some of the characteristics of a loop, although not enough to warrant a loop classification.
Loop: With regard to fingerprints, as in the general use of the word loop, there can be no loop unless there is a recurve or turning back of at least one of the ridges, along with the other prerequisites. A pattern must satisfy several requirements before it may properly be termed a loop [21]. Nonetheless, this type of pattern is the most common and constitutes about sixty to sixty-five percent of all prints. A loop is that type of fingerprint pattern in which at least one of the ridges enters on either side of the impression, recurves, touches or passes an imaginary line drawn from the delta to the core, and ends or tends to end on or towards the same side of the impression from which the ridge or ridges entered. A loop has one and only one delta.
According to their orientation and the flow of the ridges, loops are divided into two fundamental types: the radial loop and the ulnar loop.
Radial Loop: The radial loop is so called because the ridges flow or end towards the radius bone of the forearm. In the case of the left-hand fingers, the ridges slant towards the right side, and in the right-hand fingers the slant is towards the left side.
Ulnar Loop: The ulnar loop is so called because the ridges flow or end towards the ulnar bone of the forearm. In the case of the left-hand fingers, the ridges slant towards the left side, and for the right-hand fingers the slant of the ridges is towards the right side.
Whorl: A whorl is described by a circular pattern having at least one ridge turning around the centre and making a complete circuit. The whorl is that type of pattern in which at least two deltas are present, with a recurve in front of each. Whorl-type patterns occur in about 30% of all fingerprints [22]. Note that this is a particularly broad definition. This pattern, however, may be subdivided for indexing purposes in large collections where whorls are predominant. Although this subdivision may be used, all varieties of whorls are grouped under the general name of whorl and are designated by the letter 'W'. The subdivision of the whorl pattern is as follows.
Double Loop: Double loops are composite patterns consisting of two separate and distinct sets of shoulders and two deltas. The two loops may be connected with each other by an appending ridge, provided that it does not abut at right angles between the shoulders of the formation. It is not essential that the two sides of a loop be of equal length, nor that the two loops be of the same size. Double loops are of two kinds: the lateral pocket loop and the twinned loop. The distinction between the twinned loop and the lateral pocket loop was made by Henry.
Central Pocket Loop: In this pattern, one loop serves as a side pocket to the other loop, as in the lateral pocket loop. This pocket is formed by the downward bending of one of the sides of the ridges of the other loop before they recurve. The ridges about the core, the ones containing the point of the core of the loops, have their exits on the same side as the delta.
Twinned Loop: In this pattern there are two distinct loops, one resting upon or encircling the other, and the ridges containing the point of the core have their exits towards different deltas.
Accidental/Composite/Chance: Accidentals are certain composite patterns within the whorl group; they occur rarely and are formed purely by chance. These are complex patterns, usually made up of several formations in a single pattern, such as a tented arch and a loop, or a loop and a whorl, or any other such combination which does not fit properly into the basic pattern types.
However, most researchers prefer to further group these classes into a total of five classes: Arch, Tented Arch, Whorl, Left Loop and Right Loop.
These groupings are based on the ridge flow pattern of the fingerprints. Visual inspection reveals that the above classes have the following characteristics:
Arch: The flow of ridges forms an arch-like shape at the central area of the fingerprint.
Tented Arch: In this class, the arch forms a sharp peak at the centre.
Whorl: In this class, a whorl-like ridge formation appears at the centre.
Left Loop: In some fingerprints, the ridges flow like a loop and bend towards the left. These fingerprints are termed Left Loop.
Right Loop: As in the previous class, when the loop bends towards the right, it is termed Right Loop.
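As an illustration of how such a class decision might be automated, the following sketch maps a singular-point configuration (core and delta counts, plus a loop direction) to a class label. The detector that would supply these counts, e.g. a Poincaré-index method, is assumed to exist upstream, and the rules are a hypothetical simplification of the descriptions above, not a standard algorithm:

```python
# Illustrative rule-based mapping from singular-point configuration to a
# fingerprint class. Inputs are assumed to come from an upstream
# singular-point detector; the rules are simplified and hypothetical.
def classify(num_cores, num_deltas, loop_direction=None):
    if num_cores == 0 and num_deltas == 0:
        return "Arch"              # smooth ridge flow, no singular points
    if num_cores == 1 and num_deltas == 1:
        if loop_direction == "left":
            return "Left Loop"     # loop ridges bend towards the left
        if loop_direction == "right":
            return "Right Loop"    # loop ridges bend towards the right
        return "Tented Arch"       # sharp central peak, no sideways flow
    if num_deltas >= 2:
        return "Whorl"             # two deltas with a recurve before each
    return "Unknown"

print(classify(0, 0))              # Arch
print(classify(1, 1, "left"))      # Left Loop
print(classify(2, 2))              # Whorl
```

In practice the delta-free arch versus tented-arch distinction and the loop direction require orientation-field analysis rather than simple counts; the sketch only shows how the five-class taxonomy maps onto singular-point evidence.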
1.2.5 Fingerprint Detection Methods
Fingerprints are the most conclusive kind of physical evidence [24]. All objects at a crime scene should be considered as possible sources of fingerprint evidence. Fingerprints left by the criminal at the crime scene are known as 'chance prints'. These prints are left by the criminal unknowingly, and are aptly called the 'thief's visiting cards'. There are three primary classes of chance prints, as described below:
Visible prints: These are prints formed when the fingers are smeared with some coloured material such as paint, ink, dirt, blood, or other visible matter. Prints of this type are less commonly found, because the person who leaves them can easily see them and take care to remove them. If by any chance they are still found, they could be there because of haste or carelessness. Prints of this nature need no enhancement and can be easily recorded by taking photographs with or without the use of filters.
Plastic prints: These are generally found on pliable surfaces. Plastic prints are mostly found on objects such as soap, clay, pitch, candles, thick dried blood, melted wax or paraffin, adhesives and so on. Prints of this nature can be photographed under oblique illumination. Fingerprints on candles can also be enhanced by rolling a layer of printer's ink over them by means of a rubber roller and then taking a photograph.
Latent prints: In law enforcement, the most important and widely used evidence by forensic agencies worldwide is the latent fingerprint. Latent fingerprints are created when unintentionally deposited on objects at the crime location, and are of poor texture quality with large background noise. As a recent advancement in latent fingerprints, Kai Cao and Anil K. Jain [168] have proposed an automated latent fingerprint recognition algorithm using Convolutional Neural Networks (CNN). The latent is represented through ridge flow estimation, minutiae descriptor extraction, and the extraction of complementary templates.
Different Methods of Fingerprint Development
Laser Method: Laser light has revolutionized fingerprint technology. The laser technique was developed by the Ontario Provincial Police and the Xerox Research Centre in Canada. The laser takes advantage of the fact that sweat contains a variety of components that fluoresce when illuminated by laser light. It has been found that latent fingerprint residue contains a variety of organic substances, such as oils, paint and inks, which have an inherent luminescence property. Using a continuous argon-ion laser and viewing through suitable filters, latent fingerprints exhibit luminescence. These prints can be photographed using special filters.
If inherent luminescence fails to reveal a latent fingerprint under the laser, the print can be treated with a fluorescent material such as Coumarin-6 to give induced luminescence to the latent fingerprint. Very old prints, even as old as ten years, can be developed using the laser technique on surfaces such as plastic, rubber, painted walls, paper, wood, leather and so on. The main advantage of the laser beam over conventional fingerprint development techniques is its ability to develop latent prints that cannot be developed by any other method. It is a highly sensitive technique, and does not suffer from any time limitation. The developed prints stand out particularly well under the laser source. This technique is highly advanced and can become the method of choice, to be followed by other conventional methods.
Photographic Method: The prints obtained by this method are clear; however, because only areas in direct contact can be photographed, the method has only limited value in dermatoglyphic analysis and is expensive.
New Fingerprint Detection Technology: In an era of advanced DNA technology, fingerprint evidence might be viewed as old-fashioned forensics; however, it is not as obsolete as some offenders may think. Advanced fingerprinting technology now makes developing, collecting, and identifying fingerprint evidence easier and quicker. Sometimes, even attempting to wipe fingerprints clean from a crime scene may not work. Not only has the technology for collecting fingerprint evidence improved, but the technology used to match fingerprints against those in the existing database has also been significantly improved.
Advanced Fingerprint Identification Technology: The FBI launched its Advanced Fingerprint Identification Technology (AFIT) system, which enhanced fingerprint and latent print processing services. The system increased the accuracy and daily processing capacity of the agency and also improved the system's availability. The AFIT system implemented a new fingerprint matching algorithm which increased the accuracy of fingerprint matching from 92% to over 99.6%, according to the FBI. During the first five days of operation, AFIT matched more than 900 fingerprints that were not matched using the old system. With AFIT on board, the agency has been able to reduce the number of required manual fingerprint reviews by 90%.
Prints from Metal Objects: In 2008, researchers at the University of Leicester in Great Britain developed a technique that can enhance fingerprints on metal objects, from small shell casings to large machine guns.
They found that the chemical deposits that form fingerprints have electrically insulating characteristics, which can block an electric current even when the fingerprint material is thin, just nanometres thick. By using electric currents to deposit a coloured electro-active film, which appears in the bare areas between the fingerprint deposits, analysts can create a negative image of the print, known as an electro-chromic image. According to the Leicester forensic scientists, this technique is so sensitive that it can recover fingerprints from metal objects even if they have been wiped off or washed off with soapy water.
Color-Changing Fluorescent Film: Professor Robert Hillman and his Leicester colleagues have further improved their technique by adding fluorophore molecules to the film, which are sensitive to light and ultraviolet rays. Essentially, the fluorescent film gives scientists an additional tool for developing contrasting shades of latent fingerprints: electro-chromic and fluorescent. The fluorescent film provides a third colour that can be adjusted to develop a high-contrast fingerprint image.
Micro-X-Ray Fluorescence: The development of the Leicester technique followed a 2005 discovery by University of California researchers working at Los Alamos National Laboratory, who used micro-X-ray fluorescence, or MXRF, for fingerprint imaging. MXRF detects the sodium, potassium and chlorine elements present in salts, as well as many other elements, if they are present in the fingerprints. The elements are detected as a function of their location on a surface, making it possible to "see" a fingerprint where the salts have been deposited along the patterns of the fingerprint, the lines called friction ridges by forensic scientists.
Non-invasive Procedure: The technique has several advantages over traditional fingerprint detection methods, which involve treating the suspect area with powders, liquids, or vapours in order to add colour to the fingerprint so that it can be easily seen and photographed. Using traditional fingerprint contrast enhancement, it is sometimes difficult to detect fingerprints present on certain substrates, for example multicoloured backgrounds, fibrous papers and textiles, wood, leather, plastic, adhesives and human skin.
The MXRF technique eliminates that problem and is non-invasive, meaning that a fingerprint analysed by the method is left intact for examination by other techniques such as DNA extraction. Los Alamos scientist Christopher Worley said MXRF is not a panacea for detecting all fingerprints, since some fingerprints will not contain enough detectable elements to be "seen". However, it is envisioned as a viable companion to traditional contrast enhancement techniques at crime scenes, since it does not require any chemical treatment steps, which are not only time-consuming but can also permanently alter the evidence.
Forensic Science Advances: While many advances have been made in the field of forensic DNA evidence, science continues to make progress in the field of fingerprint development and collection, making it increasingly likely that if a criminal leaves behind any evidence at all at the crime scene, he will be identified. New fingerprint technology has improved the probability of investigators producing evidence that will withstand challenges in court.
1.2.6 Fingerprint Retrieval
In November 2010, the Ministry of Communications and Information Technology, Government of India announced the fingerprint image and minutiae data standard for e-governance applications in India [169]. The ISO 19794-4:2005(E) Fingerprint Image Data Standard has been adopted as the Indian e-Governance Standard. As per the adopted ISO standard, the fingerprint image can be stored in either PNG or JPEG 2000 format with a compression ratio of up to 1:15.
The FBI has formulated national standards for digitization and compression of gray-scale
fingerprint images. The compression algorithm for the digitized images is based on
adaptive uniform scalar quantization of a discrete-wavelet transform sub-band
decomposition, a technique referred to as the wavelet/scalar quantization method. The
algorithm produces archival-quality images at compression ratios of around 15 to 1 and
will allow the current database of paper fingerprint cards to be replaced by digital imagery
[182].
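A quick storage calculation illustrates what a roughly 15:1 ratio means for a single print. The 500 ppi, 8-bit grayscale parameters are assumed here for illustration (they are the usual digitization settings, not values taken from [182]):

```python
# Rough storage arithmetic for a single fingerprint image: an uncompressed
# 1" x 1" grayscale print scanned at an assumed 500 ppi and 8 bits/pixel,
# compressed at the approximately 15:1 ratio discussed in the text.
SIDE_PX = 500            # 1 inch at the assumed 500 ppi
BITS_PER_PIXEL = 8       # 8-bit grayscale

raw_bytes = SIDE_PX * SIDE_PX * BITS_PER_PIXEL // 8   # 250,000 bytes raw
compressed_bytes = raw_bytes // 15                     # ~16,666 bytes at 15:1
print(raw_bytes, compressed_bytes)                     # 250000 16666
```

At this ratio a card of ten rolled prints shrinks from roughly 2.5 MB to well under 200 KB, which is what makes replacing paper fingerprint cards with digital imagery practical at national scale.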
The UIDAI (Unique Identification Authority of India) started a project to assign a unique identification number to each resident of India. In the Indian context, biometrics were determined to be the most suitable factors for carrying out de-duplication. Hence it is necessary to enrol all residents along with their biometrics and build a clean database for the purposes of a National Identity system. The UIDAI has adopted the ISO/IEC 19794-4 Fingerprint Image Data Standard as an Indian Standard and specified certain implementation values
[183]. Uncompressed images are strongly recommended; for legacy reasons, lossless JPEG 2000 and WSQ compression are also accepted. Storage format is permitted as per ISO
Section 8.3. The enrollment status of the UIDAI project as of 31st December 2011 [184] shows a false positive identification rate (FPIR) of 0.057%. In practical terms, this means that at a run rate of 10 lakh enrolments a day, only about 570 cases need to be manually reviewed daily to ensure that no resident is erroneously denied an Aadhaar number. The false negative identification rate (FNIR) is 0.035%, which implies that 99.965% of all duplicates submitted to the biometric de-duplication system are correctly caught as duplicates.
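The quoted UIDAI error rates can be verified with simple arithmetic:

```python
# Checking the UIDAI figures quoted above: 10 lakh (1,000,000) enrolments
# per day with FPIR = 0.057% gives the ~570 daily manual reviews mentioned,
# and FNIR = 0.035% implies 99.965% of duplicates are caught.
daily_enrolments = 1_000_000          # 10 lakh per day
fpir = 0.057 / 100                    # false positive identification rate
fnir = 0.035 / 100                    # false negative identification rate

manual_reviews_per_day = daily_enrolments * fpir
duplicates_caught = 1.0 - fnir
print(round(manual_reviews_per_day))  # 570
print(f"{duplicates_caught:.3%}")     # 99.965%
```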
Identifying an unknown test fingerprint requires a search against all enrolled images in such a vast database, and this poses challenging problems in terms of both response time and accuracy. To reduce the required search space, fingerprint retrieval, also called indexing, is often adopted. In such approaches, each fingerprint is represented in a specific form so that, during the search stage, it is possible to pre-filter out large numbers of images with low similarity to the test image on the basis of that representation; the test image can then be identified from the remaining candidates by a one-to-one matching procedure [25].
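The pre-filter-then-verify idea can be sketched as follows. The random feature vectors and the stand-in fine matcher are purely illustrative; a real system would use the fingerprint representations discussed in this section:

```python
# Sketch of two-stage fingerprint retrieval: a cheap feature-vector
# distance prunes the database to a short candidate list, then only the
# survivors go to (expensive) one-to-one matching. Vectors and the fine
# matcher below are placeholders, not real fingerprint features.
import numpy as np

rng = np.random.default_rng(0)
database = rng.random((1000, 32))             # 1000 enrolled feature vectors
query = database[42] + 0.01 * rng.random(32)  # noisy probe of template 42

# Stage 1: pre-filter by Euclidean distance in feature space.
dists = np.linalg.norm(database - query, axis=1)
candidates = np.argsort(dists)[:10]           # keep the 10 nearest templates

# Stage 2: one-to-one matching on the candidates only.
def fine_match_score(a, b):
    return float(-np.abs(a - b).sum())        # stand-in for a real matcher

best = max(candidates, key=lambda i: fine_match_score(database[i], query))
print(int(best))                              # 42
```

The point of the design is that stage 1 touches every template but is a single vectorized distance computation, while the costly matcher in stage 2 runs on only a handful of candidates.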
Convolutional Neural Networks (CNN) have achieved great success in large-scale image and video recognition. Karen et al. [15] investigated the effect of CNN depth on accuracy in large-scale image recognition systems. Matthew et al. [186] introduced a visualisation technique that reveals the input stimuli that excite individual feature maps at any layer in the model, and provided observations on the evolution of features during training and the diagnosis of potential problems with the model.
FIGURE 1.5: Block Diagram of Fingerprint Retrieval system
Fig. 1.5 shows a typical fingerprint retrieval system. Fingerprint retrieval systems can essentially be divided into three classes: global-feature based, local-feature based and matching-score based. The retrieval strategies based on global features usually use the global pattern of ridges with a uniform model; for instance, Feng and Jain proposed a multistage filtering system which uses the pattern type,
singular points and orientation field, and Lee proposed a retrieval system based on FingerCode [26]. The retrieval strategies based on local features usually use local characteristics of the fingerprint, such as local ridge features, minutiae features and SIFT features. Matching-score based retrieval methods use the matching scores over a small set of reference images as the index. In general, local features are preferred, because global features may not be reliably obtained from low-quality or partial images.
Considering that minutiae are widely used as storage features in the majority of large fingerprint image databases, and have a local stability which can guarantee robustness when the test image is partial, a novel fingerprint retrieval approach based on minutiae triplets has been proposed [27]. In this approach, two techniques compensate for the relatively weak discriminative power of minutiae triplets. The first is a strategy to derive the number of matched minutiae polygons from the matching information of minutiae triplets, which offers additional information for evaluating the similarity of two minutiae sets. The second is to attach a one-bit flag to each of the feature indices to improve the matching accuracy. These two techniques significantly increase the accuracy of fingerprint retrieval in large databases [27].
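A minimal sketch of why minutiae triplets suit indexing: the sorted side lengths of the triangle formed by three minutiae are invariant to translation and rotation of the print, so quantizing them yields a hash key for the index. The bin size and key layout below are hypothetical illustrations, not the scheme of [27]:

```python
# Sketch: a rotation/translation-invariant index key for a minutiae triplet.
# Each minutia is reduced here to its (x, y) position; the 10-unit bin size
# is a hypothetical quantization choice for illustration.
import math

def triplet_key(p1, p2, p3, bin_size=10.0):
    """Quantized key from the sorted side lengths of the minutiae triangle."""
    pts = [p1, p2, p3]
    sides = sorted(
        math.dist(pts[i], pts[j]) for i, j in [(0, 1), (1, 2), (0, 2)]
    )
    return tuple(int(s // bin_size) for s in sides)

# The same triangle, translated and rotated 90 degrees, keys identically:
a = triplet_key((0, 0), (30, 0), (0, 40))
b = triplet_key((5, 5), (5, 35), (-35, 5))
print(a, b, a == b)   # (3, 4, 5) (3, 4, 5) True
```

Because many different triplets can share a key, the triplet alone is weakly discriminative, which is exactly why [27] adds the matched-polygon counts and per-index flags described above.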
In fingerprint indexing/retrieval, the problem of one-to-one matching is extended to one-to-N matching without requiring the subject's claim of identity. Given a fingerprint instance of unknown identity, the system searches the entire database of enrolled templates and returns a list of likely fingers to which the fingerprint may belong. The fingerprint features used by the system should be robust enough to distinguish one finger from all the others in the database [28]. To this end, some reported works made use of orientation field features, while others investigated features of complex-filtered images. Minutiae features and other transformed features have also been adopted.
Fingerprint indexing algorithms, closely related to continuous classification, perform better than exclusive classification by selecting the most probable candidates and ordering them by their similarity to the input. Continuous classification was proposed to deal with the problems of exclusive classification by representing fingerprints with feature vectors. The search is performed by computing the distances between the query and every template to yield the closest candidates. These algorithms rely on two kinds of features: global features and local features. Global features represent the global pattern of ridges and valleys and the orientation field [29]. They typically identify a core point and then estimate features in its neighbourhood.
However, such approaches are poor at handling partiality and non-linear distortion, two major issues in the representation of fingerprint features, owing to their lack of discrimination of local textures. In contrast, local features are believed to be robust to partial and distorted prints, since they can represent stable local structures. Triplets of minutiae and ancillary information are used in the indexing technique of [30].
Without a doubt, nearby highlights are normally spoken to by neighbourhood surface
portrayal around intrigue focuses. In this manner, two conditions are to be met. To start
with, intrigue focuses ought to be steadily identified under various survey circumstances,
particularly scaling and revolution. Second, a particular neighbourhood descriptor ought to
be computed free of the intrigue indicates and be fitting portray the locales. In any case, the
retrieval execution may experience the ill effects of the little amount of particulars focuses
in fractional fingerprints [31].
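The ranking step of continuous classification can be sketched in a few lines: represent each enrolled template as a feature vector and sort templates by distance to the query vector. The following is an illustrative numpy sketch; the toy vectors and the Euclidean metric are assumptions for demonstration, not the specific features used later in this thesis.

```python
import numpy as np

def rank_templates(query_vec, template_vecs):
    """Rank enrolled templates by Euclidean distance to the query
    feature vector (smallest distance = best candidate)."""
    d = np.linalg.norm(template_vecs - query_vec, axis=1)
    order = np.argsort(d)
    return order, d[order]

# Toy example: 4 enrolled templates, 3-dimensional feature vectors.
templates = np.array([[0.9, 0.1, 0.0],
                      [0.2, 0.8, 0.1],
                      [0.1, 0.1, 0.9],
                      [0.5, 0.5, 0.5]])
query = np.array([0.85, 0.15, 0.05])
order, dists = rank_templates(query, templates)  # template 0 ranks first
```

In a real indexing system only the top-ranked candidates would be passed on to the (more expensive) matcher.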
Fingerprint matching: Fingerprint image matching methods can be broadly classified as minutiae-based, correlation-based or ridge-feature-based [32]. The performance of minutiae-based techniques depends on the accurate detection of minutiae points and on the use of sophisticated matching schemes. Depending on the moisture content of the skin, the acquired images may have either thin or thick ridges. Further, the quality of the images acquired using the sensor may fluctuate over time, thereby confounding correlation schemes. Minutiae are a set of points in a plane, and these points are matched against those in a template. In correlation-based matching, the corresponding pixels of two fingerprint images are compared. Ridge-feature-based matching is an advanced technique that captures and compares the ridges themselves.
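As a toy illustration of the minutiae-based idea, the sketch below pairs each probe minutia with its nearest unused template minutia within a pixel tolerance. Real matchers also align the two prints and compare ridge orientation at each minutia; both steps are omitted here, so this is only a sketch of the core pairing step.

```python
import numpy as np

def match_minutiae(probe, template, tol=10.0):
    """Greedy one-to-one pairing: each probe minutia is matched to the
    nearest unused template minutia within `tol` pixels. Returns the
    fraction of probe minutiae that found a mate."""
    probe = np.asarray(probe, float)
    template = np.asarray(template, float)
    used = np.zeros(len(template), bool)
    matched = 0
    for p in probe:
        d = np.linalg.norm(template - p, axis=1)
        d[used] = np.inf                # each template point used once
        j = int(np.argmin(d))
        if d[j] <= tol:
            used[j] = True
            matched += 1
    return matched / max(len(probe), 1)

probe = [(10, 10), (50, 40), (80, 90)]
template = [(12, 11), (49, 42), (200, 200)]
score = match_minutiae(probe, template)  # 2 of the 3 points pair up
```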
1.3 Motivation
The motivation for this research work arises from the need to identify a person for security purposes. The fingerprint is one of the most prevalent biometric modalities used to verify a person's identity. The accuracy of the classification model plays a key role in fingerprint retrieval. Retrieval speed is compromised by the large scale of fingerprint image databases, and retrieval accuracy is diminished by noise and distortion in the fingerprint images. The proposed method should improve retrieval accuracy by means of a new classification model.
1.4 Objective of Research
A fingerprint image classification system should be able to analyse and recognize the universal classes: Arch, Left Loop, Right Loop and Whorl. The recognition of the fingerprint image class should be done accurately and without any human intervention. A fingerprint image retrieval system should be able to retrieve the similar images efficiently from the image dataset. The main objective of this research work is to develop a system which first recognizes the class of the query fingerprint image and then retrieves similar images from the dataset of the recognized class.
There are two major objectives of this research work, as follows:
• To develop an image classification model that extracts hybrid and robust features from the fingerprint images. The extracted features are to be used for the classification of fingerprint images.
• To develop an image retrieval model that extracts scale- and rotation-invariant features to represent the fingerprint images uniquely. The extracted features are to be used for the retrieval of similar images from the same fingerprint image class.
1.5 Research Contribution
In this thesis, a fingerprint image classification and retrieval system is proposed using combinations of feature extraction, classification and retrieval techniques. Fingerprint images are taken from NIST special database 4 (NIST-4), the Fingerprint Verification Competition databases (FVC2000, FVC2002 and FVC2004) and fingerprint images captured using an optical sensor with the help of volunteers. The main contributions of this thesis are summarized as follows:
• The feature extraction methods LBP and GLCM are used collectively to extract features from fingerprint images. The concatenated features are given to a Support Vector Machine classifier for classification. A k-nearest neighbour (k-NN) classifier is also used for comparison of the classification results.
• The Speeded-Up Robust Features (SURF) algorithm is used to extract features to test a BoF classifier alongside the hybrid feature extraction method. Both classification methods are evaluated on NIST-4 and on the captured fingerprint image database.
• In the retrieval phase, Scale Invariant Feature Transform (SIFT) and SURF features are used collectively to extract features from the fingerprint images of the same class. The features extracted by these methods are concatenated and given to BoF and other distance-matching algorithms for indexing. The retrieval methods are evaluated on the NIST-4, FVC and captured fingerprint image databases.
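A minimal sketch of the hybrid feature idea, an LBP texture histogram concatenated with GLCM statistics, is shown below. The neighbourhood layout, grey-level quantization and the two GLCM statistics are illustrative choices only, not necessarily the parameters used in Chapter 3.

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour Local Binary Pattern codes, summarised as a
    256-bin normalised histogram (rotation is not handled here)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def glcm_features(img, levels=8):
    """Grey-level co-occurrence matrix for the (0 deg, distance 1)
    pixel pair, reduced to contrast and energy statistics."""
    q = (img.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return np.array([contrast, energy])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
features = np.concatenate([lbp_histogram(img), glcm_features(img)])
# `features` would then be fed to an SVM or k-NN classifier.
```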
1.6 Organization of the Thesis
Chapter 1 deals with the general introduction to the research work. It elaborates the basics of biometrics, fingerprints and their applications, fingerprint classification and fingerprint retrieval. Chapter 2 presents the details of research work done in the same area and its conclusions.
Three types of methods are proposed in the thesis, and each one is explained in Chapters 3, 4 and 5. Chapter 3 discusses the two feature extraction methods, LBP and GLCM, in detail, along with their application to developing a hybrid fingerprint classification system. The effectiveness of the method is discussed in the form of results at the end of the chapter. Chapter 4 presents a detailed discussion of the SURF algorithm for extracting fingerprint image features. Chapter 5 proposes a hybrid approach to developing a fingerprint retrieval system. The hybrid system is discussed using the SURF and SIFT algorithms to extract scale- and rotation-invariant features; the similarity ranking is performed using a BoF classifier. Chapter 6 contains the main conclusions of all the work proposed in this thesis. It also outlines possible work to be carried out in the future.
CHAPTER – 2
Theoretical Background and Literature Survey
2.1 Introduction
Research on biometric techniques has received renewed attention of late, brought on by an increase in security concerns. The current world attitude towards terrorism has prompted individuals and their governments to take action and be more proactive on security issues. This need for security also extends to the need for people to protect, among other things, their workplaces, homes, personal belongings and assets. Many biometric techniques have been developed and are being improved, with the best of them applied in everyday law-enforcement and security applications. Biometric methods include several state-of-the-art techniques; among them, fingerprint recognition is considered the most capable technique for maximum-security authentication [33].
Advances in sensor technology and an increasing demand for biometrics are driving a burgeoning biometric industry to develop new technologies. As commercial incentives increase, many new technologies for person identification are being developed, each with its own strengths and weaknesses and a potential niche market. This chapter reviews some well-known biometrics, with special emphasis on fingerprints.
2.2 Biometric Security
The term "biometrics" is derived from the Greek words "bio" (life) and "metrics" (to measure). Computerized biometric systems have become available only over the last few decades, owing to significant advances in the fields of computing and image processing. Although biometric technology seems to belong to the twenty-first century, the history of biometrics goes back thousands of years [34]. The ancient Egyptians and the Chinese played a large part in biometrics history. Today, the emphasis is on using biometric face recognition, iris recognition, retina recognition and other identifying traits to stop terrorism and improve safety measures. This section gives a short history of biometric security and fingerprint recognition.
The first systematic approach for capturing hand and finger images for identification purposes was used by Sir William Herschel, Civil Service of India, who recorded a handprint on the back of a contract for each worker in order to identify employees.
S. Falohun et al. [35] developed a system which estimates a person's age range and gender from fingerprint analysis, trained with a back-propagation neural network and DWT+PCA. A total of 280 fingerprint samples from people of various ages and genders were gathered; 140 of these samples, 70 male and 70 female, were used for training the system's database. This was done for the age groups 1-10, 11-20, 21-30, 31-40, 41-50, 51-60 and 61-70. To determine the gender of an individual, the Ridge Thickness to Valley Thickness Ratio (RTVTR) was taken into consideration.
M. ManiRoja [36] proposes two different approaches, based on RSA and elliptic-curve cryptography, for database protection in biometric authentication systems. Use of the k-means algorithm is also proposed for generating encryption and decryption keys from the biometric templates. In this paper, the public-key algorithms were implemented for experimental purposes and the results obtained were critically verified.
Francisco Fons et al. [37] present, as a proof of concept, a run-time partially reconfigurable field-programmable gate array (FPGA) used to carry out a specific application with high computational demands: an automatic fingerprint authentication system (AFAS). Biometric personal recognition is a good example of a compute-intensive algorithm composed of a series of image processing tasks executed in sequential order.
Chinta Someswara Rao et al. [38] note that no two irises are alike, a discovery that was granted a patent in 1986. The next step in fingerprint automation occurred towards the end of 1994 with the Integrated Automated Fingerprint Identification System (IAFIS) competition. The competition identified and investigated three major challenges: digital fingerprint acquisition, local ridge characteristic extraction, and ridge characteristic pattern matching. The first Automated Fingerprint Identification System (AFIS) was produced by Palm System.
2.3 Fingerprint Recognition System
FIGURE 2.1: Typical fingerprint-based biometric system (Image Acquisition → Feature Extraction → Pattern Matching against a template Database → Accept/Reject decision)
Fingerprint imaging technology has existed for a considerable length of time. Archaeologists have uncovered evidence suggesting that interest in fingerprints dates back to prehistoric times: in Nova Scotia, petroglyphs showing a hand with exaggerated ridge patterns have been found [39]. In ancient Babylon and China, fingerprints were impressed on clay tablets and seals. The use of fingerprints as a unique human identifier dates back to second-century B.C. China, where the identity of the sender of an important document could be verified by his fingerprint impression in the wax seal.
In fourteenth-century Persia, fingerprints were impressed on various official papers. Around that time, a government official observed that no two fingerprints were exactly alike. Sachin Dwivedi et al. [40] describe how the ridges of fingerprints were characterized as loops and spirals, yet their value as a means of personal identification was not noted. This was the first step towards the modern study of fingerprints. The first modern use of fingerprints occurred in 1856, when Sir William Herschel, the Chief Magistrate of the Hooghly district in Jungipoor, India, had a local businessman, Rajyadhar Konai, impress his handprint on the back of a contract. Later, the right index and middle fingers were printed alongside the signature on all contracts made with locals.
The object was to frighten the signer out of repudiating the agreement, on the grounds that the locals believed that personal contact with the document made it more binding. As his fingerprint collection grew, Waits et al. [41] relate, Herschel began to realize that fingerprints could prove or disprove identity. Despite his lack of scientific knowledge of fingerprinting, he was convinced that fingerprints are unique and permanent throughout life.
A further branch of this new field, developed by Mark S. Nixon et al. [42], concerns approaches to estimating soft biometrics, either using conventional biometric approaches or from images alone. These three strands combine to form what is now known as soft biometrics. They survey the achievements that have been made in recognition by, and in estimation of, these parameters, describing how the approaches can be used and where they might lead. The approaches amount to a new type of recognition, one similar to Bertillonage, which is among the earliest approaches to human identification.
For the following thirty years, Bertillonage was the primary method of biometric identification. Dr. Henry Faulds, British Surgeon-Superintendent of the Tsukiji Hospital in Tokyo, took up the study of fingerprints in the 1870s after noticing finger imprints on ancient pottery. In 1880, in the October 28 issue of the British scientific periodical Nature, Dr. Faulds was the first to publish a scientific account of the use of fingerprints as a means of identification. Besides recognizing the importance of fingerprints for identification, he devised a method of classification as well.
Dr. Faulds is credited with the first fingerprint identification, based on a fingerprint left on an alcohol bottle. Classification by patterns such as loops and whorls, later formalized in the Henry Classification System, is still used today to organize fingerprint card records.
Continuing the work of Dr. Faulds, the study in [43] established the individuality and permanence of fingerprints. The book "Fingerprints", from 1892, contains the first fingerprint classification system, comprising three basic pattern types: loop, arch and whorl. The system was based on the distribution of the pattern types over the ten fingers, e.g. LLAWLLWWLL. The system worked, but was yet to be improved with a classification that was easier to handle.
Sir Galton identified the attributes used for personal identification, the unique ridge characteristics known as minutiae, which are often referred to as "Galton's details". In 1892, Juan Vucetich, an Argentine police official, made the first criminal fingerprint identification. He was able to identify a woman who had murdered her two sons and cut her own throat in an attempt to avoid blame; her bloody print, left on a doorpost, proved her identity as the murderer.
2.4 Fingerprint Acquisition Methods
This section presents the different acquisition methods used to obtain the fingerprint of an individual [44].
Optical: Optical fingerprint imaging involves capturing a digital image of the print using visible light. This kind of sensor is, fundamentally, a specialized digital camera. The top layer of the sensor, where the finger is placed, is known as the touch surface. Beneath this layer is a light-emitting phosphor layer, which illuminates the surface of the finger. The light reflected from the finger passes through the phosphor layer to an array of solid-state pixels (a charge-coupled device), which captures a visual image of the fingerprint. A scratched or dirty touch surface can produce a bad image of the fingerprint.
A disadvantage of this type of sensor is that its imaging capability is affected by the quality of the skin on the finger. For instance, a dirty or marked finger is difficult to image properly. It is also possible for a person to erode the outer layer of skin on the fingertips to the point where the fingerprint is no longer visible. The sensor can likewise easily be fooled by an image of a fingerprint if it is not combined with a "live finger" detector. On the other hand, unlike capacitive sensors, this sensor technology is not susceptible to electrostatic-discharge damage.
Ultrasonic: Ultrasonic sensors make use of the principles of medical ultrasonography in order to create visual images of the fingerprint. Unlike optical imaging, ultrasonic sensors use high-frequency sound waves to penetrate the epidermal layer of the skin. The sound waves are generated using piezoelectric transducers, and the reflected energy is likewise measured using piezoelectric materials. Since the dermal skin layer exhibits the same characteristic pattern as the fingerprint, the reflected-wave measurements can be used to form an image of the fingerprint. This eliminates the need for clean, undamaged epidermal skin and a clean sensing surface.
Capacitance: Capacitance sensors use the principles associated with capacitance to form fingerprint images. In this imaging technique, each pixel of the sensor array acts as one plate of a parallel-plate capacitor, the electrically conductive dermal layer acts as the other plate, and the non-conductive epidermal layer acts as a dielectric.
Passive capacitance: A passive capacitance sensor uses the principle outlined above to form an image of the fingerprint patterns on the dermal layer of skin. Each sensor pixel is used to measure the capacitance at that point of the array. The capacitance varies between the ridges and valleys of the fingerprint because the volume between the dermal layer and the sensing element under a valley contains an air gap. The dielectric constant of the epidermis and the area of the sensing element are known values, so the measured capacitance values can be used to distinguish fingerprint ridges from valleys.
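The ridge/valley contrast that a passive sensor measures follows directly from the parallel-plate formula C = ε0·εr·A/d. The sketch below uses purely illustrative geometry and permittivity values, not taken from any sensor datasheet, to show that the extra air gap under a valley lowers the pixel capacitance.

```python
# Parallel-plate model: C = eps0 * eps_r * A / d. Under a valley the
# air gap increases the effective plate separation and lowers the
# effective permittivity, so the measured capacitance drops;
# thresholding C over the pixel array separates ridges from valleys.
EPS0 = 8.854e-12                     # F/m, vacuum permittivity

def pixel_capacitance(area_m2, gap_m, eps_r):
    """Capacitance of one sensing pixel under the simple plate model."""
    return EPS0 * eps_r * area_m2 / gap_m

area = (50e-6) ** 2                  # 50 um x 50 um pixel (assumed)
c_ridge = pixel_capacitance(area, 20e-6, eps_r=4.0)    # skin in contact
c_valley = pixel_capacitance(area, 80e-6, eps_r=1.5)   # air gap added
```

A real sensor compares each pixel's reading against its neighbours rather than against absolute values, but the ridge-above-valley ordering is what makes the image formation work.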
Active capacitance: Active capacitance sensors use a charging cycle to apply a voltage to the skin before the measurement takes place. The applied voltage charges the effective capacitor, and the electric field between the finger and the sensor follows the pattern of the ridges in the dermal skin layer. On the discharge cycle, the voltage across the dermal layer and sensing element is compared against a reference voltage in order to calculate the capacitance. The distance values are then computed mathematically and used to form an image of the fingerprint. Active capacitance sensors thus measure the ridge patterns of the dermal layer, like the ultrasonic method; again, this removes the need for clean, undamaged epidermal skin and a clean sensing surface.
Automatic Fingerprint Recognition Systems require a reasonably noise-free fingerprint in order to process it for minutiae detection or correlation. This is achieved by pre-processing the fingerprint.
2.5 Fingerprint Pre-processing
The fingerprint must be pre-processed to remove the effects of noise, of dryness or wetness of the finger, and of differences in the pressure applied while scanning the fingerprint. Pre-processing is a multi-step process [45]. The distinct methods used in pre-processing are: smoothing filter, intensity normalization, orientation field estimation, fingerprint segmentation, ridge extraction and thinning. Fig. 2.2 shows the fingerprint pre-processing flow [188].
FIGURE 2.2: Typical flow of fingerprint image pre-processing (input fingerprint image → image enhancement → segmentation → enhanced foreground fingerprint image)
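The intensity normalization step listed above is commonly implemented as a pixel-wise mean/variance normalization, mapping the image to a prescribed mean and variance so that later steps see consistent contrast regardless of finger pressure. A minimal numpy sketch (the target mean and variance values here are arbitrary):

```python
import numpy as np

def normalize(img, m0=100.0, v0=100.0):
    """Pixel-wise mean/variance normalisation: pixels above the image
    mean are pushed above m0, pixels below it are pushed below m0, by
    an amount scaled so the output variance equals v0."""
    img = img.astype(float)
    m, v = img.mean(), img.var()
    dev = np.sqrt(v0 * (img - m) ** 2 / v)
    return np.where(img > m, m0 + dev, m0 - dev)

rng = np.random.default_rng(1)
raw = rng.integers(40, 90, size=(64, 64)).astype(float)  # dim, low-contrast scan
out = normalize(raw)   # now has mean ~100 and variance ~100
```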
Depending on the application and the feature extraction strategy, these steps may vary. Galbally et al. [46] have proposed an adaptive image filtering strategy for singularity (minutiae) preservation. First, image quality is evaluated from the Fourier spectrum of the image, and fingerprint image pre-processing is performed based on the discriminant frequency and statistical texture features. Gaussian filtering is then used to enhance the ridge structure, and gradient-field coherence is used for segmentation of the region of interest (ROI).
A Short-Time Fourier Transform (STFT) based image filtering scheme is discussed by Chikkerur et al. [47] to enhance fingerprint images. Another widely followed approach is based on Gabor filters, which are capable of directional filtering of ridges. The directional band-pass Gabor filter bank approach is one of the most effective and mathematically elegant methods to date for fingerprint image enhancement, a fact that has been exploited by various researchers for fingerprint image enhancement and segmentation. In the same spirit as directional Gabor filters, Sahasrabudhe et al. [48] have proposed directional Fourier filters for filtering fingerprint ridges.
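The even-symmetric Gabor filter used for such directional enhancement can be built directly from its formula: a cosine wave at the ridge frequency, windowed by a Gaussian, and rotated to the ridge direction. A minimal numpy sketch with illustrative frequency and bandwidth parameters:

```python
import numpy as np

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=17):
    """Even-symmetric Gabor filter tuned to ridge direction `theta`
    (radians) and ridge frequency `freq` (cycles per pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates so the cosine wave runs across the ridges
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = (np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
         * np.cos(2 * np.pi * freq * xr))
    return g - g.mean()      # zero-DC so flat regions give no response

# An 8-orientation filter bank, as used by the segmentation methods below.
bank = [gabor_kernel(t) for t in np.arange(0, np.pi, np.pi / 8)]
```

Convolving the image with the kernel whose orientation matches the local ridge direction reinforces the ridges and suppresses noise.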
The performance of a Gabor filter, or of any other directional filter, depends on the direction of the filter, which should be properly tuned to the ridge direction; for this, orientation estimation is essential. The most widely used approach is to work from the gradients of grey intensity. Other methods are available in the literature, such as filter-bank-based approaches, spectral estimation and waveform projection, but the gradient-based methods give better results. Gradient-based techniques also have variations, and researchers have proposed different ways to estimate the orientation field from gradients.
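The standard gradient-based estimate doubles the gradient angles so that opposite gradients reinforce rather than cancel; per block it reduces to a single arctangent of summed gradient products. A minimal numpy sketch:

```python
import numpy as np

def block_orientation(block):
    """Least-squares ridge orientation of one image block from its
    grey-level gradients:
    theta = 0.5 * atan2(2*sum(Gx*Gy), sum(Gx^2 - Gy^2)) + pi/2,
    where the +pi/2 turns the dominant gradient direction into the
    (perpendicular) ridge direction. Result is defined modulo pi."""
    gy, gx = np.gradient(block.astype(float))
    gxy = 2.0 * np.sum(gx * gy)
    gxx_yy = np.sum(gx ** 2 - gy ** 2)
    return 0.5 * np.arctan2(gxy, gxx_yy) + np.pi / 2

# Synthetic block with vertical ridges (intensity varies only along x),
# so the estimated ridge orientation should be pi/2.
x = np.arange(16)
block = np.cos(2 * np.pi * x / 8)[None, :].repeat(16, axis=0)
theta = block_orientation(block)
```

A full orientation field is obtained by tiling the image into blocks (e.g. 16×16) and applying this per block, usually followed by the smoothing discussed next.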
Püspöki et al. [49] have proposed a straightforward procedure based on direct calculation of the orientation from the gradient. Other authors have examined orientation estimation based on the eigenvalues of the local structure tensor; Bazen has discussed PCA- and structure-tensor-based orientation estimation algorithms built on gradients.
Galar et al. [50] have proposed a modified gradient-based strategy which exploits the fact that the orientation field tends to be continuous across neighbouring regions: their algorithm assigns the orientation of a central point based on the orientations of the neighbouring blocks at its four corners and their field strength, also called coherence. Hong et al. discussed a mechanism to achieve a smoother orientation field through a continuous-vector-field approach, applying an averaging filter to the continuous vector field calculated from the local gradient angle. Both approaches give reasonably good approximations of the orientation field. They also proposed an algorithm for orientation estimation based on enhanced local averaging of gradient fields, which achieved a smoother orientation field by gathering information from the neighbourhood.
Fingerprint segmentation consists in the separation of the fingerprint area (foreground) from the background. Segmentation techniques exploit the presence of an oriented periodic pattern in the foreground and a non-oriented isotropic pattern in the background. The technique described by Jain et al. is based on the local certainty level of the orientation field, which is computed using the intensity gradient of the image. Those 16×16-pixel blocks in which the certainty level is higher than a given threshold are considered foreground blocks.
Patel et al. [51] suggested that the average gradient be computed on each block, this being expected to be high in the foreground (ridge-valley variations) and low in the background.
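A minimal version of such block-wise segmentation can be written with the grey-level standard deviation as the "certainty" measure; the block size and threshold below are illustrative, not the values used by the cited methods.

```python
import numpy as np

def segment_foreground(img, block=16, thresh=15.0):
    """Mark a block as foreground when its grey-level standard deviation
    exceeds `thresh`: ridge/valley variation is high, the smooth
    background is not. Returns a boolean mask, one entry per block."""
    h, w = (d // block for d in img.shape)
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            b = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mask[i, j] = b.std() > thresh
    return mask

# Synthetic image: left half striped (fingerprint-like), right half flat.
img = np.zeros((64, 64))
img[:, :32] = 127 * (1 + np.sin(np.arange(32) * 0.8))   # ridge pattern
mask = segment_foreground(img)   # left blocks True, right blocks False
```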
Bian et al. [52] have proposed a strategy in which additional parameters, such as gradient coherence, grey-intensity mean and variance, are also used in the segmentation decision. A morphological post-processing step is then performed in order to fill the remaining holes in the foreground as well as in the background. This strategy is very accurate but carries a high computational burden. The method introduced by Mehtre relies on the gradient and results in a lower computational burden: it computes the grey-level variance along the normal direction of the orientation field, which is expected to be high in the presence of ridge-valley variation. This technique has been implemented in other fingerprint verification systems as well.
The segmentation method introduced by Thai et al. [53] is based on Gabor filters. It computes the responses of eight oriented Gabor filters to decide whether a block belongs to the foreground or to the background. It has been shown that, when good-quality images are considered, both gradient- and Gabor-based strategies produce comparable results, but Gabor-filter-based methods are faster than gradient-based approaches. In this work, an improved Gabor-filter-based approach is presented.
Rajasekaran et al. [54] have proposed an enhanced approach for fingerprint segmentation based on the responses of eight oriented Gabor filters. This technique obtains a larger foreground area and a considerably smaller background region, thereby recovering blocks with minutiae and legitimate but not well-defined zones. A shortcoming of this method is that the thresholding is not automatic: a manual threshold has to be chosen empirically. We have proposed automatic thresholding based on Gabor filters; the procedure is automated by generating a threshold with Otsu's method applied to the Gabor magnitude histogram.
In correlation-based fingerprint recognition systems we need to determine a registration point as a reference; this is called the core point. Core point detection is a non-trivial task. Having discussed correlation-based fingerprint recognition, we now discuss several methods for core point detection.
Cao and Anil Jain [55] have performed core point detection using integration of the sine component of the fingerprint orientation. In this technique, the sine component of the orientation field is integrated over a semicircular region with three segments, and the segments are linearly combined in a specific manner as discussed in their work. This method gives a good estimate of the core point, but its accuracy is still low, and a larger number of iterations is required for a better approximation.
They have discussed another approach based on calculation of the Poincaré index at all points of the orientation map. The Poincaré index is determined by computing the field-angle differences between consecutive points around a closed digital curve and summing them; the point enclosed by the curve (the core point) will have the highest Poincaré index. The Poincaré index map is then thresholded, and the point with the highest value is taken as the core point. This technique is also used by Iwasokun et al. [56]. The method is likewise recursive: if no core point is found, the orientation field is smoothed and the same procedure is followed again. If the core point is still not estimated, the authors recommend a covariance-based method, but this is computationally expensive.
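The Poincaré index computation described here is compact enough to sketch: walk the eight neighbours of a block, sum the orientation differences wrapped into (−π/2, π/2] (orientations are only defined modulo π), and divide by 2π. On a synthetic loop-like field the central block scores +1/2, while smooth regions score 0; the field below is a textbook core model, not data from any of the cited papers.

```python
import numpy as np

def poincare_index(orient, i, j):
    """Poincare index at block (i, j) of an orientation field (radians,
    modulo pi). Core ~ +1/2, delta ~ -1/2, plain ridge flow ~ 0."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [orient[i + di, j + dj] for di, dj in ring]
    total = 0.0
    for a, b in zip(angles, angles[1:] + angles[:1]):
        d = b - a
        # wrap into (-pi/2, pi/2]: orientations are defined mod pi
        while d <= -np.pi / 2:
            d += np.pi
        while d > np.pi / 2:
            d -= np.pi
        total += d
    return total / (2 * np.pi)

# Synthetic core: the orientation rotates by pi around the centre block.
n = 9
y, x = np.mgrid[0:n, 0:n] - n // 2
orient = (np.arctan2(y, x) / 2.0) % np.pi
idx_core = poincare_index(orient, n // 2, n // 2)   # +1/2 at the core
idx_off = poincare_index(orient, 1, 1)              # ~0 away from it
```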
A core point estimation method using direction codes and curve classification was developed by Khodadoust et al. [57]. In this method the orientation field is computed first, and directional codes are generated from it. The direction codes are used for a rough estimate, and a sampled grid is produced; this grid, together with a curve-classification technique similar to chain codes, is used for precise core point detection. The method requires more steps, and the procedure given in the paper is not suitable for arch-type prints, since it is not possible to define a core point in that case.
Cao et al. [58] have used complex convolution masks for core and delta points over the squared orientation field to obtain the core and delta points of the fingerprint. This method covers detection of both core and delta points, and it is a multi-step procedure.
The accuracy obtained is good, around 95-98%. At the same time, the mathematical complexity is high and the method needs post-processing steps as well. They have also developed a core point detection algorithm based on multiple features derived from the fingerprint that are generally used for stable core point detection; here the orientation field, coherence and Poincaré index are used for core point detection. Although not all fingerprints have a core point, this algorithm is useful for detecting high-curvature regions and gives high accuracy, as it combines the advantages of the individual features. The next stage in the development of fingerprint recognition systems is feature extraction and matching, which take the pre-processed image as input.
2.6 Fingerprint Retrieval
Recently, new optical encryption schemes have been proposed which greatly enrich the research on optical image encryption. Ran et al. [59] proposed an optical image encryption system based on cascaded FRFT, which expands the dimensions of the secret-key space and is easier to implement optically. In recent years, image encryption methods based on DRPE have been shown to be unsafe owing to their vulnerability to attacks. An encryption method based on DRPE using a fingerprint as a secret key was proposed which could effectively resist the known-plaintext attack (KPA). However, encryption systems based on DRPE are still deficient. In 2010, Qin and Peng proposed an asymmetric cryptosystem (ACS) based on phase-truncated Fourier transforms (PTFT), which reserves the two truncated phases as private keys. Owing to the nonlinear operation of phase truncation, the robustness against existing attack schemes was improved.
Later on, researchers proposed various encryption schemes based on PTFT. It was demonstrated from the perspective of cryptanalysis, however, that PTFT was vulnerable to a specific attack and to KPA. More recently, Liu et al. proposed ACSs based on double random phase encoding using modular arithmetic operations and the Yang-Gu algorithm respectively, and Rajput et al. [60] proposed a Fresnel-domain nonlinear optical image encryption scheme based on the Gerchberg-Saxton algorithm. These works provided new designs for optical asymmetric cryptosystems (OACS).
However, recent investigations showed that the existing OACS are untenable owing to a misunderstanding of the basic principle of ACS. Furthermore, Sui et al. [61] proposed an optical hierarchical authentication scheme based on interference and a hash function, which realized the combination of optics and cryptography: in this scheme a two-beam interferometer accomplishes the verification procedure, and the hash function greatly enhances the security of the system.
The former uses information related to the pattern of ridges and valleys found in fingerprints to partition the fingerprint database into mutually exclusive bins. In this approach, once the query fingerprint image is classified, the candidate images are searched for in the corresponding bin. This type of approach can be further subdivided into four subcategories depending on the kind of information used for exclusive classification, namely ridge-based, orientation-field-based, singularity-based and structure-based information.
In ceaseless characterization approaches, fingerprint images are spoken by include vectors.
Similitudes among unique fingerprints are built up by the separation in the component
space of their relating highlight vectors. This approach is firmly identified with a
Theoretical Background and Literature Survey
35
fingerprint database indexing problem. Ridge-based approaches traditionally use information about the pattern frequency of the fingerprint ridges for classification purposes. The work proposed by Bhattacharya et al. [62] considers the frequency spectrum of fingerprints, obtained by applying a hexagonal Fourier transform, to classify fingerprints into three classes: whorls, loops, and arches. A wedge-ring detector is used to partition the frequency-domain images into non-overlapping zones in which the pixel values are summed to form a feature vector. Once the feature vector is found, it is compared with the reference feature vectors of each of the classes and a further classification is performed using a nearest-neighbour classification technique. To capture the structure of fingerprint ridges, some works develop mathematical models to describe the corresponding images.
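The nearest-neighbour step can be sketched as below; the class names and two-dimensional reference vectors are illustrative only, standing in for the frequency-domain feature vectors and class references described above:

```python
import numpy as np

def nearest_class(feature, reference_vectors):
    """Assign a query feature vector to the class whose reference
    feature vector is closest in Euclidean distance."""
    labels = list(reference_vectors)
    distances = [np.linalg.norm(np.asarray(feature, dtype=float) - reference_vectors[c])
                 for c in labels]
    return labels[int(np.argmin(distances))]

# Illustrative reference vectors for the three classes:
refs = {"whorl": np.array([1.0, 0.0]),
        "loop":  np.array([0.0, 1.0]),
        "arch":  np.array([0.5, 0.5])}
```

A query vector is simply assigned the label of its closest reference, e.g. `nearest_class([0.9, 0.1], refs)` yields `"whorl"`.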
Janan et al. [63] use, for instance, B-spline curves to approximate the shape of the fingerprint ridges. Then, ridges with similar orientation are grouped together to obtain a global shape representation of the fingerprint which is used for classification. Approaches based on orientation fields use the local average orientations of the fingerprint ridges to classify fingerprints.
Jayaraman et al. [64] use the block orientation fields of fingerprints and certainty measures to generate the fingerprint feature vectors. For feature dimensionality reduction, they consider a SOM neural network to improve the overall classification accuracy.
Fingerprint singularities have been widely used for classification. They can be defined as the local regions where the fingerprint ridges exhibit particular physical properties.
Ala et al. [65] extract the singularities that can be found in the fingerprints and classify them by considering the location and the number of the detected singularities. Structural approaches use the topology information of fingerprints for classification purposes.
Peralta et al. [66] segment the orientation field of fingerprint images to represent the fingerprints as relational graphs. For each class of fingerprints, a relational graph model is built. An approximate graph-matching algorithm is used to classify fingerprint images.
Although the search spaces can be reduced in exclusive classification approaches, there are some shortcomings that should be considered:
Some fingerprints exhibit properties of more than one class and therefore cannot be assigned to only one bin.
The natural distribution of fingerprints is not uniform and hence, even after binning the original database, the number of one-to-many comparisons can still be high.
Some of the characteristics used for binning are difficult to detect owing to the presence of noise, ambient conditions, and so on.
Thanh-Nghi Do et al. [67] showed that the distribution of fingerprint classes is not uniform (93.4% of fingerprints belong to a set of just three classes). On the other hand, Murillo-Escobar et al. [68] proposed a continuous scheme to index fingerprint databases using hashing. For that purpose, the location and orientation of the minutiae, as well as the number of ridges between them, are used to generate feature vectors. Information derived from the feature vectors is used to create the image records, which are added to an index structure and compared later during fingerprint retrieval.
Khan et al. [69] proposed a fingerprint identification method. Robert Hastings developed a technique for enhancing the ridge pattern using a process of oriented diffusion, adapting anisotropic diffusion to smooth the image in the direction parallel to the ridge flow. The image intensity then varies smoothly as one traverses along the ridges or valleys, with most of the small irregularities and breaks removed but with the identity of the individual ridges and valleys preserved.
Paulino et al. [70] proposed a method for fingerprint verification which incorporates both minutiae and a model-based orientation field. The orientation field provides robust discriminatory information in addition to the minutiae points. Fingerprint matching is performed by combining the decisions of the matchers based on the orientation field and on the minutiae.
Lomte et al. [71] proposed a method for measuring the performance of local operators in fingerprints by detecting the edges of fingerprints using five local operators, namely Sobel, Roberts, Prewitt, Canny and LoG. The edge-detected image is further segmented to extract individual segments from the image.
Sutthiwichaiporn et al. [72] presented a fingerprint enhancement method which decomposes the fingerprint image into a set of filtered images, from which the orientation field is estimated. A quality mask identifying the recoverable and unrecoverable corrupted regions in the input image is generated. Using the estimated orientation field, the input fingerprint image is adaptively enhanced in the recoverable regions.
Marasco et al. [73] proposed a study investigating the impact of five different force levels on fingerprint matching performance, image quality scores, and minutiae count for optical and capacitive fingerprint sensors. Three images were collected from the right forefingers of 75 participants for each sensing technology. Descriptive statistics, analysis of variance, and Kruskal-Wallis nonparametric tests were conducted to assess significant differences in minutiae counts and image quality scores with respect to the force level.
The results reveal a significant difference in image quality score based on the force level and each sensor technology, but no significant difference in minutiae count based on the force levels of the capacitive sensor. The image quality score, shown to be affected by force and sensor type, is one of many factors that influence the matching performance of the system, yet the removal of low-quality images does not improve the system performance at every force level.
Jung et al. [74] proposed a fingerprint matching technique based on line extraction and graph matching principles, adopting a hybrid scheme which consists of a genetic algorithm phase and a local search phase. Experimental results demonstrated the robustness of the algorithm.
Venkatesh et al. [75] proposed a technique for estimating a four-direction orientation field in four steps: i) pre-processing the fingerprint image, ii) determining the primary ridge of each fingerprint block using a pulse-coupled neural network, iii) estimating the block direction from the projective distance variance of a ridge, rather than of a full block, and iv) correcting the estimated orientation field.
Zhang et al. [76] presented an automatic identification system for fingerprints. Principal curves are generated using the principal graph algorithm of Kégl. A feature extraction algorithm based on these principal curves is used to extract the uniqueness of fingerprint images. The experimental
results indicate that the curves obtained from the principal graph algorithm are smoother than those from the thinning algorithm.
Galbally et al. [77] developed a pattern recognition approach with two classes. A method for feature extraction based on the fingerprint and its application to classification is discussed. An SVM is applied to the feature vector and used to categorize the fingerprints into real or fake groups.
Wong et al. [78] presented a method to overcome non-linear distortion using a Local Relative Location Error Descriptor (LRLED). The algorithm consists of three steps: i) a pairwise alignment method to achieve fingerprint alignment, ii) a matched minutiae pair set obtained with a threshold to reduce non-matches, and finally iii) the LRLED-based similarity measure. LRLED is good at distinguishing corresponding and non-corresponding minutiae pairs and works well for fingerprint minutiae matching.
Chatbri et al. [79] presented a thinning method; thinning is the process of reducing the thickness of each line of a pattern to a single pixel width. The requirements of a good thinning algorithm with respect to a fingerprint are: i) the thinned fingerprint obtained should be of single pixel width without any discontinuities, ii) each ridge should be thinned to its central pixel, iii) noise and singular pixels should be discarded, and iv) no further removal of pixels should be possible after completion of the thinning process.
Yang et al. [80] presented a fingerprint classification system using a fuzzy neural network. The fingerprint features, such as singular points and the positions and directions of core and delta points, are obtained from a binarised fingerprint. The method produces good classification results.
Chhillar et al. [81] developed a humanoid technique for fingerprint recognition. Ridge bifurcations are used as minutiae, and a ridge bifurcation algorithm that excludes noise-like points is proposed. Experimental results showed the humanoid fingerprint recognition to be robust, reliable and fast.
Wang [82] proposed a fast singularity search algorithm which uses the delta field Poincaré index, together with a fast classification algorithm to classify fingerprints into five classes. The detection algorithm searches the parts of the direction field which have the larger direction changes to obtain the singularities. Singularity detection is used to
increase the accuracy.
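The Poincaré index at the heart of such singularity search can be sketched on a closed loop of sampled ridge orientations; this is a simplified illustration (real detectors walk a closed curve in a discrete orientation field around each candidate point):

```python
import math

def poincare_index(orientations):
    """Poincare index of a closed loop of ridge orientations
    (in radians, defined modulo pi). The wrapped differences of
    successive samples are summed and divided by 2*pi: the result
    is about +0.5 at a core, -0.5 at a delta, and 0 elsewhere."""
    total = 0.0
    n = len(orientations)
    for k in range(n):
        d = orientations[(k + 1) % n] - orientations[k]
        # wrap each difference into (-pi/2, pi/2]
        while d <= -math.pi / 2:
            d += math.pi
        while d > math.pi / 2:
            d -= math.pi
        total += d
    return total / (2 * math.pi)
```

For example, eight samples rotating through π around a point give an index of 0.5 (a core); traversing them in the reverse order gives −0.5 (a delta).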
Ahmed et al. [83] proposed a fingerprint enhancement method to improve the matching performance and computational efficiency by using an image scale pyramid and directional filtering in the spatial domain.
Guo et al. [84] introduced a structural approach to fingerprint classification which uses the directional image of the fingerprint, rather than a gray-level watershed method, to find the ridges present in directly scanned or inked fingerprints. The directional image encodes the dominant direction of the ridge lines.
Gaikwad et al. [85] developed a method for extracting minutiae from fingerprint images using a midpoint ridge contour representation. The first step is segmentation, to separate the foreground from the background of the fingerprint. A 64 x 64 region is extracted from the fingerprint image. The grayscale intensities in the 64 x 64 region are normalized to a constant mean and variance to remove the effects of sensor noise and the grayscale variations caused by differences in finger pressure.
After the normalization, the contrast of the ridges is enhanced by filtering the 64 x 64 normalized window with an appropriately tuned Gabor filter. The processed fingerprint is then scanned top to bottom and left to right, and transitions from white (background) to black (foreground) are detected. The length vector is computed in all eight directions of the contour. Each contour element represents a pixel on the contour and contains fields for the x, y coordinates of the pixel. The proposed technique takes less time and does not detect any false minutiae.
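The mean-and-variance normalization step described above can be sketched as follows; the target mean and variance values are illustrative placeholders, not the values used by the authors:

```python
import numpy as np

def normalize_block(block, target_mean=100.0, target_var=100.0):
    """Map a grayscale block to a prescribed mean and variance,
    the classic normalization step used in fingerprint
    enhancement pipelines."""
    block = np.asarray(block, dtype=float)
    mean, var = block.mean(), block.var()
    if var == 0:
        # a perfectly flat block carries no ridge information
        return np.full_like(block, target_mean)
    dev = np.sqrt(target_var * (block - mean) ** 2 / var)
    return np.where(block > mean, target_mean + dev, target_mean - dev)
```

The output block has exactly the requested mean and variance while preserving which pixels lie above or below the local mean.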
Zhou et al. [86] proposed the Scale Invariant Feature Transform (SIFT) to represent and match fingerprints, by extracting characteristic SIFT feature points in scale space and performing matching based on the texture information around the feature points. The combination of SIFT and a conventional minutiae-based system achieves significantly better performance than either of the individual schemes.
Chaudhari et al. [87] introduced combined techniques to build a minutia extractor and a minutia matcher. Segmentation with morphological operations is used to improve thinning, false minutiae removal, and minutia marking.
Peralta et al. [88] proposed an effective and efficient algorithm for minutiae extraction to
improve the overall performance of an automatic fingerprint identification system, since it is important to preserve genuine minutiae while removing spurious minutiae in post-processing. The proposed novel fingerprint image post-processing algorithm attempts to reliably separate spurious minutiae from genuine ones by making use of ridge-count information, referring back to the original gray-level image, designing and arranging the various processing techniques appropriately, and selecting the various processing parameters carefully. The proposed post-processing algorithm is effective and efficient.
Marasco et al. [89] developed a representation method for fingerprint identification. It effectively captures the characteristics of a fingerprint to generate a unique identification proof. The fingerprint images are filtered in various directions and a 640-dimensional feature vector is generated from the central area of each fingerprint image. The feature vector is then quantized and requires merely 640 bytes. The Euclidean distance is used for matching between the template finger code and the input finger code. The technique gives good matching with high accuracy.
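Matching by Euclidean distance between finger codes reduces to a norm computation; the short vectors in the test below stand in for the 640-dimensional codes:

```python
import numpy as np

def match_distance(template_code, input_code):
    """Euclidean distance between two fixed-length finger codes;
    a smaller distance indicates a better match."""
    t = np.asarray(template_code, dtype=float)
    q = np.asarray(input_code, dtype=float)
    return float(np.linalg.norm(t - q))
```

Identification then amounts to picking the enrolled template with the smallest distance to the input code.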
Arjona M [90] presented directional fingerprint processing comprising fingerprint smoothing, classification and identification based on the singular points (delta and core points) obtained from the directional histograms of a fingerprint. Fingerprints are classified into two main classes, called Lasso and Wirbel. The procedure includes directional image construction, directional image block representation, and singular point detection and decision. The technique gives matching decision vectors with minimal errors, and the method is simple and fast.
Tiwari et al. [34] proposed an indexing algorithm similar to that of Jayaraman et al. [64]. They used a coaxial Gaussian track code (CGTC) rather than the MBP. These algorithms depend heavily on the core point and do not consider the absence of a core point in arch-type fingerprints. These algorithms also used 72 sectors such that every minutia lies in one sector. When a minutia falls in an overlapping zone, on the boundary between two sectors, their algorithms do not handle this special case.
Recently, Khodadoust & Khodadoust [92] proposed a fingerprint indexing algorithm based on minutiae pairs and a special type of core point, the convex or upper core point. This algorithm exploits level-1 and level-2 features and uses ring properties.
Since the number of minutiae pairs is n(n − 1)/2, such algorithms consider O(n²) minutiae pairs during matching.
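The O(n²) cost follows directly from enumerating the unordered minutiae pairs:

```python
from itertools import combinations

def minutiae_pairs(minutiae):
    """All unordered pairs of minutiae; there are n(n - 1) / 2 of
    them, which is why pair-based indexing schemes must consider
    O(n^2) comparisons during matching."""
    return list(combinations(minutiae, 2))
```

For n = 10 minutiae this already yields 45 pairs, and the count grows quadratically with n.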
It is observed from the literature that automated systems based on biometrics, especially human fingerprints, have been exploited for a long time. Various approaches have been used for the classification and retrieval of images, targeting specific applications, but there is still scope for improvement.
Classification using LBP and GLCM Features with SVM / kNN Classifiers
42
CHAPTER – 3
Classification using LBP and GLCM Features
3.1 Introduction
3.1.1 Fingerprint Classification
Fingerprint classification provides an important indexing mechanism in a fingerprint
database. An accurate and consistent classification can greatly reduce fingerprint matching
time for a large database. Hybrid approaches combine two or more approaches for
classification. These approaches show some promise but have not been tested on a large
database.
For example, Fitz and Green [190] used a frequency-based approach on 40 fingerprints.
They used the frequency spectrum of the fingerprints for classification, whereas Kawagoe
and Tojo [191] used another structure-based approach on 94 fingerprints. Chong et al.
[192] reported results on 89 fingerprints in a structure-based approach that used B-spline
curves to represent and classify fingerprints.
Birdi et al. [100] proposed a fingerprint classification algorithm based on the multispace
Karhunen-Loève (KL) transform applied to the orientation field. The classification
accuracies reported by the various authors are obtained on databases with different numbers of fingerprints, and therefore, they cannot be directly compared. Most of the work in
fingerprint classification is based on supervised learning and discrete class assignment
using knowledge-based features.
Bernard et al. [189] proposed the use of a Kohonen topological map for fingerprint pattern classification. The learning procedure takes into account the large intra-class diversity and the continuum of fingerprint pattern types. Depending on the fingerprint domain-specific learning and the expert approach, they presented an original and intuitive interpretation of the algorithm. They concluded that self-organizing
maps are efficient in fingerprint classification and give a good nonlinear bi-dimensional approximation of the direction-map distribution of the feature space.
Mikel et al. [172] reviewed the classification approaches used by various researchers. Classifiers like SVM, neural networks, graph matching, fixed classification and nearest neighbour have been widely used. In fixed classification approaches, training is not required
as the classification is done along with the feature extraction process. The classification
experiments are performed on NIST-4 in most of the cases, while in few cases FVC dataset
is also used.
There are a few challenges in the classification of fingerprints. Fingerprints from the
same finger will be slightly different whenever they are captured. Every time when a finger
is pressed on a surface of an acquisition tool, it is applied with a certain amount of pressure
and angle. These parameters vary from time to time. Hence, fingerprint classifiers are
expected not to be sensitive to translations and rotations. Another challenge is related to
fingerprint variation. There are cases in which large intra class variation and small inter-
class variation is observed. In some cases, fingerprint contains the properties of more than
one class. These challenges make the fingerprint classification problem difficult and also provide motivation to develop continuous classification schemes.
3.1.2 Feature Extraction
In pattern recognition and image processing, feature selection and extraction is a special
technique used for selecting a subset of most relevant image characteristics that can best
represent the whole image. Transforming the input digital image data into a set of features
is called feature extraction. If the method of extraction is designed efficiently, then the
subsequent recognition task can be performed with this reduced representation instead of
the full sized input image. The minutiae and non-minutiae features are the two frequently
used features extracted from a fingerprint image for various fingerprint applications.
Two non-minutiae features, the Local Binary Pattern (LBP) and the Gray Level Co-occurrence Matrix (GLCM), offer multiple advantages: they are rotation invariant, robust to gray-scale variations, and robust to noise and small skin distortions. The advantages of
using the local features in fingerprint representation are that they are computed at multiple
points in the image, and hence, they are invariant to image scale and rotation. In addition,
they do not need further image pre-processing or segmentation operations.
Classification using LBP and GLCM Features with SVM / kNN Classifiers
44
Bazen and Otterlo [98] show that reinforcement learning can be used for minutiae detection in fingerprint matching. Minutiae are the characteristic features of fingerprints which determine their uniqueness. Classical approaches use a series of image processing steps for this task, but lack robustness because they are highly sensitive to noise and image quality. They propose a more robust approach in which an autonomous agent walks around in the fingerprint and learns how to follow ridges and how to recognize minutiae. The agent is situated in the environment and uses reinforcement learning to obtain an optimal policy.
Rahimi et al. [99] presented a novel algorithm for the detection of singular points in
fingerprint images. The number and location of singular points are used to classify
fingerprint images into five general groups and therefore to narrow down the search space
in a large fingerprint database. Using the proposed directional masks in the first step, they
detect the neighborhood of the singular points. In the second stage, using the proposed
algorithm, an adaptive singular point detection method is implemented to extract the exact
location of core and delta points. In their findings, they stated that a good minutiae
extraction algorithm should efficiently incorporate local ridge orientation into the ridge extraction operation. However, it should also be kept in mind that directional smoothing is
usually a computationally expensive operation. Post-processing is a very important step in
minutiae extraction, which can eliminate a significant number of errors.
3.1.3 Methods of Feature Extraction
FIGURE 3.1: Taxonomy of features used for fingerprint classification [172]
Mikel et al. [172] presented a review of the various feature extraction approaches used
for fingerprint classification in the literature. As shown in Fig. 3.1, the feature extraction
methods, based on orientations include syntactic coding, direct / unaltered coding,
registration and segmentation. The methods based on number / positions and relative
measures are categorized under the singular point approach. The ridge structure approach
[Figure 3.1 tree: Features for Classification → Orientations, Singular Points, Ridge Structure, Filter Responses]
includes ridge tracing, skeleton, geometric / global representation model and minutiae map.
The filter response approaches are the use of Gabor and other filters like FFT, DCT, GHM
discrete multi-wavelet transform etc.
In the area of feature extraction, various methods are discussed and compared along with their strengths and weaknesses. The weaknesses become the motivation for further research.
According to the survey on fingerprint classification by Mikel et al. [194], PCASYS (Pattern-level Classification Automation SYStem) is well known since it is the only method whose source code was made publicly available by the authors. PCASYS uses as features the orientations obtained from the registered Orientation Maps (OM). Primarily, the two components of the squared orientation map are used to generate the feature vector. By using the Karhunen-Loève (KL) transform (also known as Principal Component Analysis (PCA)), a reduced feature vector is generated from the feature vector obtained primarily.
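The KL-transform (PCA) reduction can be sketched with a plain eigendecomposition of the covariance matrix; this is a generic sketch of the technique, not the PCASYS implementation:

```python
import numpy as np

def kl_reduce(X, k):
    """Project the rows of X (one feature vector per row) onto the
    k eigenvectors of the covariance matrix with the largest
    eigenvalues (Karhunen-Loeve transform / PCA)."""
    Xc = X - X.mean(axis=0)                  # centre the data
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    components = eigvecs[:, ::-1][:, :k]     # keep the top-k eigenvectors
    return Xc @ components
```

The projection keeps the k directions of maximum variance, which is exactly what makes the reduced feature vector a compact substitute for the original one.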
The most relevant contribution in the field of fingerprint image classification was made by Anil Jain and Karu [193]. Despite its simplicity, the proposed work described a fingerprint classification model based on fixed rules and obtained good results. After detecting the singular points (SPs), rules are applied, beginning with whether any SPs are present or not. If no SPs are present, the Arch class is declared.
Based on the detection of two pairs of cores and deltas, whorl or no whorl is declared. If only one core–delta pair is found, a straight line is traced from the core to the delta, and the difference with the orientations crossing this line is taken into account to decide between loop and tented arch. Based on the positions of the SPs in the image, the decision between left loop and right loop is made. As the SPs are required in order to decide the fingerprint image class, the OM is obtained by the slit-sum method and then processed. By giving high weights to the orientations near the centre of the image, the OM is smoothed, and hence the noise in the outer locations is also removed.
The use of the OM is again proposed by Cappelli et al. [195]. A segmentation of the OM, consisting of grouping areas with similar orientations, is used. A cost function is defined as the orientation variance in the different areas. The OM is matched with templates predefined per fingerprint class. The lowest segmentation cost obtained for each class generates the feature vector. The fingerprint is enhanced and improved using PCASYS. As explained earlier, OM extraction is performed using the slit-sum method. Regularization,
attenuation and strengthening steps are further used to enhance the OM. After obtaining the OM, it is fitted to the masks of each class.
To define the masks, vertices, regions defined by the vertices, relationships between pairs of regions, and functions which define the mobility of the vertices are used. To carry out the matching in a sequential manner, first, using greedy search, the best global positioning is obtained, and then, for each mobile vertex, the optimal position is obtained. Finally, the minimum cost obtained for each mask, normalized by the sum of costs, is codified in the feature vector.
Nyongesa et al. [196] proposed a method which uses features such as the angles between cores, between deltas, and between each of the cores and the deltas. The angle differences between the detected SPs are used to compose the feature vector. The method of identifying angles between cores and deltas yields six features. Along with these features, another feature is used, computed as the mean orientation in a window of 16x16 blocks. This block is centred at the centre of the image.
Before extracting the features, the image is pre-processed: the fingerprint image is binarized using the mean intensity in each block of 8x8 pixels. Using the gray-level variance method, the OM is extracted and then smoothed using a window of 15x15. To obtain the SPs, the Poincaré method is used. The feature vector is then computed based on these extracted points.
As proposed by Shah [104], the features of the feature vector are based on orientations in two different ways: direct/unaltered and registered. The methods of obtaining the orientations and the registration differ considerably from the works presented above. A line detection mechanism is used to extract the orientations. The final feature vector consists of two parts: the first part consists of the features of the direct orientations and the second part consists of the registered features. As a part of pre-processing, the fingerprint is binarized in local windows of 16x16.
An iterative line detection method is carried out to compute the OM. To extract the orientations, eight oriented masks are considered. A skeletonization process is used as the line detection mechanism. Once the lines are detected, the two parts of the feature vector can be obtained. For the first part of the feature vector, the lines are converted into an OM. As the
other part of the feature vector, the most predominant orientations and their percentage of occurrence in the regions into which the OM is divided are used.
A comparative summary of the features and classification techniques used by various authors with fingerprint images is shown in Table 3.1 [172] [194].
TABLE 3.1: Comparative Summary of feature extraction and classification methods
Author Name | Year | Features | Classification Technique | Test Dataset
R. Cappelli, D. Maio and D. Maltoni | 2002 | Orientation Map | Nearest Neighbour | NIST-4
J.-H. Chang and K.-C. Fan | 2002 | Ridge Structure | Graph Matching | NIST-4
R. Cappelli, D. Maio, D. Maltoni and L. Nanni | 2004 | Orientation Map | Nearest Neighbour | NIST-4/14
X. Wang and M. Xie | 2004 | Singular Points, Ridge Structure | Fixed Classification | NIST-4
J.-K. Min, J.-H. Hong and S.-B. Cho | 2006 | Filter Responses | SVM | NIST-4
T. Kristensen, J. Borthen and K. Fyllingsnes | 2007 | Filter Responses | Neural Networks | Other
J. Li, W.-Y. Yau and H. Wang | 2008 | Singular Points, Ridge Structure | SVM | NIST-4
H.-W. Jung and J.-H. Lee | 2009 | Ridge Structure | Structural Models | NIST-4
U. Rajanna, A. Erol and G. Bebis | 2010 | Orientation Map, Ridge Structure | Nearest Neighbour | NIST-4
K. Cao, L. Pang, J. Liang and J. Tian | 2013 | Ridge Structure | SVM and Nearest Neighbour | NIST-4
J. Luo, D. Song, C. Xiu, S. Gen and T. Dong | 2014 | Filter Responses | Nearest Neighbour | NIST-4
H.-W. Jung and J.-H. Lee | 2015 | Orientation Map | Nearest Neighbour | FVC2000, FVC2002 & FVC2004
As can be seen from Table 3.1, Support Vector Machine (SVM) and Nearest Neighbour classifiers are frequently used by researchers for fingerprint classification; in this work, the concatenated GLCM and LBP features are classified using SVM and kNN classifiers and the results are compared. SVM is inherently a binary classifier, so the technique needs to be extended to handle real-world classification problems involving more than two classes. There are two main approaches to multi-class classification. The first is the One-vs-One classifier, also known as the pairwise classifier. It constructs k(k−1)/2 pairwise binary classifiers. The classification decision is made by aggregating the outputs of
the pairwise classifiers. The other is the One-vs-All classifier, known as "One against Rest". This classifier trains k binary classifiers, each of which separates one class from the remaining k−1 classes. Classification of a test image is done by computing the decision function value for each binary SVM model [179].
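The One-vs-One aggregation can be sketched as majority voting over the k(k−1)/2 pairwise decisions; `pairwise_predict` below stands in for the trained binary SVMs and is purely illustrative:

```python
from collections import Counter
from itertools import combinations

def one_vs_one_predict(x, classes, pairwise_predict):
    """Majority vote over all k(k-1)/2 pairwise binary decisions.
    pairwise_predict(x, a, b) must return either a or b, playing
    the role of the binary classifier trained on classes a and b."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[pairwise_predict(x, a, b)] += 1
    return votes.most_common(1)[0][0]
```

With k = 5 fingerprint classes, this casts 10 pairwise votes and returns the class receiving the most of them.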
3.2 Proposed Methodology
Most fingerprint image datasets contain gray-scale images. The co-occurrence matrix provides a statistical approach that can describe a texture image well [177]. Introduced by Haralick [176] in 1973, the Gray Level Co-occurrence Matrix (GLCM) has been a popular feature extraction technique for image classification and retrieval. A limitation of GLCM features lies in describing information regarding the relative spatial position of pixels with respect to one another. The Local Binary Pattern (LBP) makes it possible to describe the texture and shape of a digital image. This is done by dividing an image into several small regions from which the features are extracted. These features consist of binary patterns that describe the surroundings of pixels in the regions. The features obtained from the regions are concatenated into a single feature histogram, which forms a representation of the image. Images can then be compared by measuring the similarity (distance) between their histograms [178]. To combine the strong characteristics of GLCM and LBP, the concatenation of these two features is used as input to the classifiers for improved classification.
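A minimal sketch of the basic 8-neighbour LBP histogram described above (fixed radius 1, no uniform-pattern mapping; the bit ordering is one common convention):

```python
import numpy as np

def lbp_image(img):
    """Encode each interior pixel by thresholding its 8 neighbours
    against the centre value (basic 3x3 LBP)."""
    img = np.asarray(img, dtype=float)
    codes = np.zeros(img.shape, dtype=np.uint8)
    # neighbours clockwise from top-left; bit weights 1, 2, ..., 128
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr, c + dc] >= img[r, c]:
                    code |= 1 << bit
            codes[r, c] = code
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes over the interior pixels.
    Histograms from several regions can be concatenated, and then
    concatenated again with GLCM features, to form the final
    feature vector, as described in the text."""
    codes = lbp_image(img)[1:-1, 1:-1]
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Concatenation of the two descriptors is then simply `np.concatenate([glcm_feature_vector, lbp_histogram(img)])`.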
3.2.1 Gray level co-occurrence matrix (GLCM)
A statistical approach that can well describe the second-order statistics of a texture image is the co-occurrence matrix [109]. The GLCM is essentially a two-dimensional histogram in which the (i, j)th element is the frequency with which event i co-occurs with event j. A co-occurrence matrix is specified by the relative frequencies P(i, j, d, θ) with which two pixels, separated by distance d, occur in a direction specified by the angle θ, one with gray level i and the other with gray level j. A co-occurrence matrix is therefore a function of the distance d, the angle θ and the gray levels i and j.
FIGURE 3.2: Co-occurrence matrix directions for extracting texture features
Our observation is that a fingerprint image can be decomposed into regions with regular textures, so we should be able to represent these regular texture regions using co-occurrence matrices. To do so, we utilize the co-occurrence matrices at angles of 0°, 45°, 90°, and 135°, as follows:
$$P(i,j,d,0^{\circ}) = \#\{((k,l),(m,n)) \in (L_r \times L_c) \times (L_r \times L_c)\ |\ k-m = 0,\ |l-n| = d,\ I(k,l) = i,\ I(m,n) = j\} \quad (3.7)$$

$$P(i,j,d,45^{\circ}) = \#\{((k,l),(m,n)) \in (L_r \times L_c) \times (L_r \times L_c)\ |\ (k-m = d,\ l-n = -d) \text{ or } (k-m = -d,\ l-n = d),\ I(k,l) = i,\ I(m,n) = j\} \quad (3.8)$$

$$P(i,j,d,90^{\circ}) = \#\{((k,l),(m,n)) \in (L_r \times L_c) \times (L_r \times L_c)\ |\ |k-m| = d,\ l-n = 0,\ I(k,l) = i,\ I(m,n) = j\} \quad (3.9)$$

$$P(i,j,d,135^{\circ}) = \#\{((k,l),(m,n)) \in (L_r \times L_c) \times (L_r \times L_c)\ |\ (k-m = d,\ l-n = d) \text{ or } (k-m = -d,\ l-n = -d),\ I(k,l) = i,\ I(m,n) = j\} \quad (3.10)$$

where # denotes the number of elements in the set, L_r and L_c are the sets of row and column indices, and I(·,·) is the image gray level.
Feature Extraction Using GLCM
To understand the formation of co-occurrence matrices, take the example of a 4×4 image as shown in Fig. 3.3 (a). The co-occurrence matrix is computed for a fixed d, which is 1 in our case, for θ = 0°, 45°, 90° and 135°, as described by Haralick et al. [176]. Since the image contains four gray levels, each co-occurrence matrix is of size 4×4. From each computed co-occurrence matrix, four features that successfully characterize its statistical behaviour are extracted: energy, mean, homogeneity and standard deviation.
(Displacement offsets of Fig. 3.2, relative to the reference pixel: 0° → [0, 1]; 45° → [−1, 1]; 90° → [−1, 0]; 135° → [−1, −1].)
FIGURE 3.3: GLCM construction based on (a) a test image, along four possible directions (b) 0º, (c) 45º, (d) 90º and (e) 135º, with a distance d = 1.
Energy (E) can be defined as the measure of the extent of pixel pair repetitions. It measures the uniformity of an image. When pixels are very similar, the energy value is large.

$$Energy = \sum_{i,j} p(i,j)^2 \quad (3.11)$$
Mean gives an estimate of the average intensity of the contributing pixel relationships.

$$Mean = \frac{\sum_{i,j} p(i,j)}{mn} \quad (3.12)$$

where m and n are the numbers of rows and columns respectively.
Homogeneity refers to the closeness of the distribution of elements to the GLCM diagonal.

$$Homogeneity = \sum_{i,j} \frac{p(i,j)}{1 + |i-j|} \quad (3.13)$$
Standard deviation is strongly correlated with heterogeneity; it increases when the gray level values differ from their mean.

$$Standard\ Deviation = \sqrt{\frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left(p(i,j) - mean\right)^2} \quad (3.14)$$
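As an illustrative sketch of Eqs. (3.7)–(3.14): the thesis implementation is in MATLAB, so this Python version with its `glcm` and `glcm_features` helpers is hypothetical; the feature formulas are applied literally to the (optionally normalized) matrix.

```python
import numpy as np

def glcm(img, offset, levels=4):
    """Count co-occurrences of gray levels (i, j) for one displacement offset."""
    dr, dc = offset
    rows, cols = img.shape
    P = np.zeros((levels, levels), dtype=float)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    return P + P.T  # symmetric: both orderings counted, as in Eqs. (3.7)-(3.10)

def glcm_features(P):
    """Energy, mean, homogeneity and standard deviation of Eqs. (3.11)-(3.14)."""
    m, n = P.shape
    energy = (P ** 2).sum()                            # Eq. (3.11)
    mean = P.sum() / (m * n)                           # Eq. (3.12)
    i, j = np.indices(P.shape)
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()    # Eq. (3.13)
    std = np.sqrt(((P - mean) ** 2).sum() / (m * n))   # Eq. (3.14)
    return energy, mean, homogeneity, std

# Offsets for theta = 0, 45, 90, 135 degrees with d = 1
offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
```

With the four offsets above, each image yields four co-occurrence matrices and hence a 16-element GLCM feature vector.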
3.2.2 Local Binary Patterns (LBP)
The first LBP operator was introduced by Ojala et al. [175]. The operator works on the 3×3 neighbourhood of every pixel: it is a gray-scale invariant texture measure computed from a 3×3 local neighbourhood around a central pixel. At this fixed scale it may fail to capture dominant texture features, so the original LBP was later extended to neighbourhoods of different sizes [111]. The extension uses circular neighbourhoods and bilinear interpolation of the pixel values, so that LBP can be used with any radius and any number of sampling points. The notation (P, R) denotes P sampling points on a circle of radius R. The LBP value of a pixel gc is computed from the gray values of its circularly symmetric neighbourhood gp (p = 0, …, P−1) as
$$LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p \quad (3.15)$$

$$s(x) = \begin{cases} 1 & \text{if } x \geq 0 \\ 0 & \text{if } x < 0 \end{cases} \quad (3.16)$$
where gc is the centre pixel value, gp are the P surrounding pixel values on the circle, R is the radius, P is the number of neighbourhood points, and s(x) is the sign function.
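A minimal sketch of the basic operator of Eqs. (3.15)–(3.16) for P = 8, R = 1, assuming integer-valued images (illustrative Python, not the thesis code; border pixels are skipped for simplicity):

```python
import numpy as np

def lbp_3x3(img):
    """Basic LBP with P = 8, R = 1 (the 3x3 neighbourhood), per Eqs. (3.15)-(3.16)."""
    rows, cols = img.shape
    # Neighbour offsets in a fixed circular order; position p carries weight 2^p
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros((rows - 2, cols - 2), dtype=np.uint8)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gc = img[r, c]
            code = 0
            for p, (dr, dc) in enumerate(offs):
                if img[r + dr, c + dc] >= gc:   # s(gp - gc) = 1 when gp >= gc
                    code += 1 << p              # add the weight 2^p
            out[r - 1, c - 1] = code
    return out
```

A constant image yields the code 255 everywhere (all neighbours satisfy gp ≥ gc), while a local maximum yields 0.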
In the example of Fig. 3.4, a 3×3 neighbourhood is thresholded against its centre pixel, the resulting binary pattern is weighted by the powers of two (1, 2, 4, …, 128), and the weighted values are summed, giving LBP = 16 + 32 + 64 + 128 = 240.
FIGURE 3.4: Example of LBP operator
Texture Description using LBP
The histogram of the LBP labels can be used as a texture descriptor. The original LBP with P neighbourhood points produces 2^P different binary patterns. However, Ojala et al. [175] observed that natural images are generally dominated by a small subset of LBP codes, called uniform LBPs. When the bit pattern is considered circular, a uniform LBP contains at most two bitwise transitions from 0 to 1 or vice versa. These uniform patterns represent the majority of texture microstructures [112].
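The reduction from 256 labels to 10 used later in this section corresponds to the rotation-invariant uniform mapping for P = 8: uniform codes are grouped by their number of set bits (labels 0 to 8) and all non-uniform codes share one extra label. A small sketch (hypothetical helper names) verifying the count:

```python
def transitions(code, P=8):
    """Number of circular 0->1 / 1->0 transitions in a P-bit LBP code."""
    bits = [(code >> p) & 1 for p in range(P)]
    return sum(bits[p] != bits[(p + 1) % P] for p in range(P))

def riu2_label(code, P=8):
    """Rotation-invariant uniform mapping: a uniform code (<= 2 transitions)
    maps to its bit count (0..P); every non-uniform code maps to P + 1."""
    if transitions(code, P) <= 2:
        return bin(code).count("1")
    return P + 1

labels = {riu2_label(c) for c in range(256)}   # 10 distinct labels for P = 8
```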
FIGURE 3.5: Process of generating LBP features
Feature Extraction using LBP
LBP is a local texture descriptor that has performed well in many computer vision applications, including texture classification and segmentation, image retrieval, surface inspection and face detection. State-of-the-art methods for fingerprint liveness detection also rely on this technique. A further extension of the original operator is the definition of so-called uniform patterns, which can be used to shorten the feature vector and to implement a simple rotation-invariant descriptor.

With the uniform mapping, the number of distinct LBP labels is reduced from 256 to only 10. The normalized histogram of the LBP codes, with 256 bins for the non-uniform operator and 10 bins for the uniform operator, is used as the feature vector. The assumption underlying the use of a histogram is that the distribution of patterns matters while their exact spatial location does not; the advantage of extracting a histogram is therefore its spatial-invariance property. The LBP-filtered image is partitioned into equal rectangles and their histograms are concatenated to form the final feature vector [113].
3.2.3 Fingerprint Classification using Support Vector Machine (SVM)
SVM is a supervised learning model that recognizes spatial relationships between groups of data. It seeks the optimal separating hyper-plane between classes by focusing on the training cases that lie at the edge of the class distributions. It maps the original input space into a higher-dimensional feature space using a kernel function, which avoids explicitly computing inner products in that high-dimensional space. Once the instances are mapped to the feature space, the optimal separating hyper-plane with the maximal margin is computed. In this way, an upper bound on the expected risk is minimized instead of
the empirical risk. In order to carry out the training of the SVMs, Platt [197] suggested the
Sequential Minimal Optimization (SMO) procedure.
FIGURE 3.6: Hyper-plane for classifying samples in SVM
SVM is used to train the classifier with the feature vectors. Treating the data as two sets of points in an n-dimensional space, SVM builds a separating hyper-plane via Lagrange multipliers to differentiate the negative data points from the positive ones. Intuitively, a good separation is achieved when the separating hyper-plane has the largest distance to the boundary points of both classes. Hsu et al. [200] presented a practical guide to support vector classification using the LIBSVM software [199], in which the radial basis function (RBF) kernel is suggested as a reasonable first choice. LIBSVM also provides a cross-validation and grid-search tool to find appropriate parameters for the RBF kernel. LIBSVM with the RBF kernel is used to train the classifiers in the experiments, and cross-validation with grid-search is used to select the penalty parameter and the kernel parameter.
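As a hedged sketch of the kernel and the parameter grid (the thesis uses LIBSVM in MATLAB; the exponential grid bounds below follow the commonly cited LIBSVM practical guide and are an assumption, not taken from the thesis):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """K[i, j] = exp(-gamma * ||x_i - y_j||^2), the RBF kernel used by LIBSVM."""
    # Squared Euclidean distances via ||x||^2 + ||y||^2 - 2 x.y
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clip tiny negative round-off

# Exponentially spaced grid over the penalty C and kernel parameter gamma,
# as recommended in the LIBSVM guide (assumed values)
C_grid = [2.0 ** k for k in range(-5, 16, 2)]
gamma_grid = [2.0 ** k for k in range(-15, 4, 2)]
```

Each (C, γ) pair is scored by k-fold cross-validation on the training set and the best pair is kept for the final model.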
3.2.4 Fingerprint Classification using kNN
The kNN classifier is well suited to classifying biometric features from images, owing to its shorter execution time and better accuracy than other commonly used methods such as hidden Markov models and kernel methods [173]. Although methods such as SVM and AdaBoost have proved more accurate than kNN, the kNN classifier has a faster execution time than SVM [174].
The training data is mapped into a multidimensional feature space, which is partitioned into regions according to the class labels of the training set. A point in the feature space is assigned to a particular class if that class is the most frequent among the K nearest training samples.

In the classification stage, the method finds the K nearest labelled training samples for an unlabelled input sample and assigns the input to the class that appears most frequently within this K-subset. The approach is notable for its simplicity, and it performs well even on classification tasks with many classes. It finds the group of the k instances in the training set nearest to the test sample, and from these k neighbours a decision is made based on the predominance of a particular class. Consequently, both the distance metric used to compute the closeness of instances and the number of neighbours considered are the key components of this method. To find the best values for these parameters, a cross-validation strategy can be followed using the available training data [115].
FIGURE 3.7: Example for kNN
Under this scheme an image in the test set is recognized by assigning to it the label of the closest point in the training set, where distances are measured in image space. The Euclidean distance metric is most often chosen to determine the closeness between data points in kNN. The distance between two feature vectors p and q can be calculated using the following formula.
$$d(p,q) = \sqrt{\sum_{i} (p_i - q_i)^2} \quad (3.17)$$

The Hamming distance between the neighbours can be calculated by the following equation.

$$d(p,q) = \sum_{i} |p_i - q_i| \quad (3.18)$$
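A minimal sketch of the decision rule, using the Euclidean distance of Eq. (3.17) and majority voting (illustrative Python; `knn_predict` is a hypothetical helper, not the thesis code):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples
    under the Euclidean distance of Eq. (3.17)."""
    dists = np.sqrt(((train_X - x) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    votes = Counter(train_y[i] for i in nearest)     # class frequencies in the k-subset
    return votes.most_common(1)[0][0]
```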
3.2.5 Combining GLCM and LBP
FIGURE 3.8: Classification flow using fusion features with SVM / kNN
The features of the input image are calculated using both GLCM and LBP. The two extracted feature vectors are then concatenated to generate the final feature vector. The model library is populated with the features of the training fingerprint images. Classification is carried out with the SVM and kNN classifiers.
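The fusion step itself is a plain concatenation of the two feature vectors; a sketch (hypothetical helper name, illustrative Python):

```python
import numpy as np

def fused_feature_vector(glcm_feats, lbp_hist):
    """Concatenate the GLCM statistics and the LBP histogram into one
    feature vector, as in the fusion scheme of Fig. 3.8."""
    return np.concatenate([np.asarray(glcm_feats, dtype=float),
                           np.asarray(lbp_hist, dtype=float)])
```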
3.3 Experimental Results
The proposed methods are simulated on Intel Core i5, 2.60 GHz processor, 64-bit
Operating System in MATLAB 2016a environment. The effectiveness of the method is
measured by applying it on a standard dataset, NIST-4, which contains 800 fingerprint images (200 images of each class).
(a) (b)
FIGURE 3.9: Sample images from the dataset used; a) NIST – 4, and b) Captured
The NIST-4 dataset is a valuable tool for evaluating fingerprint systems on a statistical sample of fingerprints that is evenly distributed over the five major classifications. As the comparative summary in Table 3.1 shows, this database is widely used for verifying fingerprint classification techniques proposed by researchers. It contains 8-bit gray-scale fingerprint image pairs covering the Arch, Left Loop, Right Loop, Tented Arch and Whorl classes. Suitable for automated fingerprint classification research, the database is widely used for algorithm development, system training and testing.
3.3.1 Classification using SVM
A confusion matrix is a table that is often used to describe the performance of a
classification model or a classifier on a set of test data for which the true values are known.
True positives and true negatives can be computed from the confusion matrix as explained
in chapter 1, section 1.1.1. Overall accuracy of the classifier can be computed from true
positives and true negatives as:
Classification Accuracy = (True Positives + True Negatives) / Total Size (3.19)
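For the multiclass case, Eq. (3.19) reduces to the sum of the confusion-matrix diagonal divided by the total number of samples. A small check against the Testcase 1 matrix reported in Fig. 3.10:

```python
import numpy as np

# Confusion matrix of Testcase 1 (Fig. 3.10): rows = actual class,
# columns = predicted class, in the order Arch, Left Loop, Right Loop, Whorl.
cm = np.array([[18,  2,  0,  0],
               [ 2, 14,  0,  4],
               [ 4,  2, 13,  1],
               [ 0,  5,  1, 14]])

overall_accuracy = np.trace(cm) / cm.sum()        # correct / total, Eq. (3.19)
per_class_rate = cm.diagonal() / cm.sum(axis=1)   # row-wise rates of the figure
```

This reproduces the 73.75% overall accuracy and the per-class rates (90, 70, 65, 70) shown in the figure.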
Cross-validation is used to assess the predictive performance of the models and to judge how they perform out of sample on a new data set, also known as test data. K-fold cross-validation (k = 10) is used to analyse the classification accuracy. The experiments have been carried out using various images chosen arbitrarily for testing. The testing images are selected by the system arbitrarily and five different testcases are considered. In all testcases, the testing images are excluded from the training phase. The results of the various test cases are shown below.
Fig. 3.10 to Fig. 3.14 show the confusion matrices (rows: actual class; columns: predicted class; last column: per-class rate) of the fingerprints with fusion features and the SVM classifier. Fig. 3.15 shows the classification rate of the various testcases and Fig. 3.16 shows the classification rate of the various classes.
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           18        2           0          0      90.00
Left Loop       2       14           0          4      70.00
Right Loop      4        2          13          1      65.00
Whorl           0        5           1         14      70.00
Overall accuracy: 73.75%
FIGURE 3.10: Confusion matrix of Testcase1 using SVM Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           13        5           2          0      65.00
Left Loop       4       12           1          3      60.00
Right Loop      2        1          15          2      75.00
Whorl           1        0           2         17      85.00
Overall accuracy: 71.25%
FIGURE 3.11: Confusion matrix of Testcase2 using SVM Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           18        1           1          0      90.00
Left Loop       3       17           0          0      85.00
Right Loop      0        2          16          2      80.00
Whorl           2        1           2         15      75.00
Overall accuracy: 82.50%
FIGURE 3.12: Confusion matrix of Testcase3 using SVM Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           16        2           2          0      80.00
Left Loop       5       10           1          4      50.00
Right Loop      2        2          14          2      70.00
Whorl           2        3           0         15      75.00
Overall accuracy: 68.75%
FIGURE 3.13: Confusion matrix of Testcase4 using SVM Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           15        2           2          1      75.00
Left Loop       2       18           0          0      90.00
Right Loop      2        0          13          5      65.00
Whorl           1        2           2         15      75.00
Overall accuracy: 76.25%
FIGURE 3.14: Confusion matrix of Testcase5 using SVM Classifier
FIGURE 3.15: Analysis of Results of various Testcases (with SVM Classifier)
[Bar chart of Fig. 3.15: overall classification accuracy per testcase with the SVM classifier — Testcase1: 73.75%, Testcase2: 71.25%, Testcase3: 82.50%, Testcase4: 68.75%, Testcase5: 76.25%.]
FIGURE 3.16: Analysis of Results of various classes (with SVM Classifier)
3.3.2 Classification with kNN Classifier
The efficiency of the proposed method has also been investigated using the kNN classifier. The experiments have been carried out on various test cases consisting of varied input classes of fingerprints. The input fingerprint images are chosen randomly and are excluded from the training phase. The results of five such cases are shown below.

Fig. 3.17 to Fig. 3.21 show the confusion matrices of the fingerprints with fusion features and the kNN classifier. Fig. 3.22 shows the classification rate of the various testcases and Fig. 3.23 shows the classification rate of the various classes.
[Bar chart of Fig. 3.16: classification accuracy per class with the SVM classifier — Arch: 85%, Left Loop: 60%, Right Loop: 67.5%, Whorl: 72.5%.]
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           16        0           2          2      80.00
Left Loop       3       10           3          4      50.00
Right Loop      4        3           8          5      40.00
Whorl           2        3           5         10      50.00
Overall accuracy: 55.00%
FIGURE 3.17: Confusion matrix of Testcase1 using kNN Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           13        1           4          2      65.00
Left Loop       5        8           0          7      40.00
Right Loop      6        3          10          1      50.00
Whorl           3        1           4         12      60.00
Overall accuracy: 53.75%
FIGURE 3.18: Confusion matrix of Testcase2 using kNN Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           13        5           1          1      65.00
Left Loop       8       11           0          1      55.00
Right Loop      5        2          10          3      50.00
Whorl           5        2           2         11      55.00
Overall accuracy: 56.25%
FIGURE 3.19: Confusion matrix of Testcase3 using kNN Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           13        2           4          1      65.00
Left Loop       4       14           0          2      70.00
Right Loop      2        1          11          6      55.00
Whorl           5        3           2         10      50.00
Overall accuracy: 60.00%
FIGURE 3.20: Confusion matrix of Testcase4 using kNN Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           11        2           4          3      55.00
Left Loop       3       13           1          3      65.00
Right Loop      9        3           7          1      35.00
Whorl           2        4           6          8      40.00
Overall accuracy: 48.75%
FIGURE 3.21: Confusion matrix of Testcase5 using kNN Classifier
FIGURE 3.22: Analysis of Results of various Testcases (with kNN Classifier)
The overall classification accuracies obtained using the SVM and kNN classifiers for the different testcases are shown in Fig. 3.15 and Fig. 3.22 respectively. The overall classification accuracy is found to be better with the SVM classifier. In both classifications, using SVM and kNN, the Arch class shows the best classification accuracy, as can be observed from Fig. 3.16 and Fig. 3.23.
[Bar chart of Fig. 3.22: overall classification accuracy per testcase with the kNN classifier — Testcase1: 55%, Testcase2: 53.75%, Testcase3: 56.25%, Testcase4: 60%, Testcase5: 48.75%.]
FIGURE 3.23: Analysis of Results of various classes (with kNN Classifier)
3.4 Concluding Remarks
The classification system proposed in this chapter demonstrates the use of a fusion of two texture features of fingerprint images. As discussed earlier, the efficiency of any fingerprint classification system depends highly on the feature extraction method. To make the feature extraction step robust, the feature vector is built from LBP and GLCM features and given as input to the classifier.

Various experiments have been performed to analyse the effectiveness of the proposed method. The experiments have been conducted for various test cases including different fingerprint images of various classes using SVM as well as kNN, and the results are analysed. It has been observed that, compared to kNN, the method works better with the SVM classifier, achieving classification rates ranging from 68.75% to 82.50% across the test cases. The experiments have also been conducted on the various fingerprint classes individually. The classes of the different fingerprints have been chosen arbitrarily and the results examined. Looking at the classification rate, it is concluded that Arch is the most prominent class, classified most correctly with an 85% classification rate using the SVM classifier. The remaining fingerprint classes are recognized less correctly owing to the limitation of the method in properly differentiating feature points between the classes.
[Bar chart of Fig. 3.23: classification accuracy per class with the kNN classifier — Arch: 66%, Left Loop: 56%, Right Loop: 46%, Whorl: 51%.]
Classification using Robust Features with Bag of Features Classifier
65
CHAPTER – 4
Classification using Bag of Features (BoF)
4.1 Introduction
Classification of fingerprint images has been adopted as a very important step in image searching applications such as fingerprint recognition and retrieval, owing to its ability to minimize the number of similarity matches. Fingerprint classification, which sorts fingerprints into a number of pre-specified categories, provides an indexing scheme to facilitate efficient matching in large fingerprint databases: if two fingerprint images are impressions of the same finger, they must belong to the same category. Therefore, a query fingerprint needs to be compared only with the database fingerprints of the same category during matching. The central problem in designing a fingerprint classification algorithm is to determine what features should be used and how categories are defined based on these features [119].
One of the vital issues in computing image similarity is feature extraction. The feature extraction method discussed in Chapter 3, based on the concatenation of GLCM and LBP features, is time consuming. This chapter discusses a faster feature extraction method using Speeded Up Robust Features (SURF). SURF detects key points that describe the texture uniquely. It generates a unique invariant description of each detected point, which identifies it and can be used to match points even if the target image suffers distortion due to image noise, illumination, scale change, rotation, or a change of the viewpoint from which the image was taken [202].
4.2 Methodology
4.2.1 Feature Extraction using Speeded Up Robust Features (SURF)
For the classification of fingerprint images, SURF is used to extract local robust features. According to Bay et al. [201], SURF uses a scale- and rotation-invariant interest point detector and descriptor that outperforms previous schemes with respect to repeatability, distinctiveness and robustness, yet can be computed and compared much faster. The following steps are used for extracting features with SURF:

Interest point detection: An interest point detector based on the Hessian matrix is used. The descriptor vector is generated on the basis of the points identified in this step [201][203]. The detector relies on integral images to reduce the computation time and is therefore also called the 'Fast-Hessian' detector [125].
First of all, an integral image is computed from the input image, as it speeds up the calculation of the sum over any rectangular area, which is needed especially for the Haar wavelets. For a given point (x, y) in an image I, the integral image can be described as follows:

$$I_{\Sigma}(x,y) = \sum_{i=0}^{x} \sum_{j=0}^{y} I(i,j) \quad (4.1)$$
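A sketch of the integral image of Eq. (4.1) and the constant-time box sum it enables (illustrative Python; `box_sum` is a hypothetical helper, not part of the thesis code):

```python
import numpy as np

def integral_image(img):
    """Integral image of Eq. (4.1): each entry holds the sum of all pixels
    above and to the left of it, inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] from four integral-image lookups, O(1)."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

Every box-filter response in the Fast-Hessian detector reduces to a handful of such lookups, independent of the filter size.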
For a given point X = (x, y) in an image I, the Hessian matrix H(X, σ) at X and scale σ is defined as follows [204]:

$$H(X,\sigma) = \begin{pmatrix} L_{xx}(X,\sigma) & L_{xy}(X,\sigma) \\ L_{xy}(X,\sigma) & L_{yy}(X,\sigma) \end{pmatrix} \quad (4.2)$$

where L_{xx}(X, σ) is the convolution of the Gaussian second-order derivative with the image I at point X, and L_{xy}(X, σ) and L_{yy}(X, σ) are defined analogously. Gaussians are optimal for scale-space analysis. In practice, however, the Gaussian needs to be discretised and cropped, and even with Gaussian filters aliasing still occurs as soon as the resulting images are sub-sampled [128].
Then, the second-order Gaussian derivatives are approximated with box filters and are denoted D_{xx}, D_{xy} and D_{yy} instead. The Hessian determinant is computed from these terms as follows [201]:

$$\det(H_{approx}) = D_{xx} D_{yy} - (w D_{xy})^2 \quad (4.3)$$
The Hessian matrix is computed for different filter sizes, where the filter size represents
the region around which the matrix determinant is computed, with various scale factors.
The process is repeated for various octaves. After calculating the Hessian matrix at
different scale factors (different octaves and filter sizes), the interest points are chosen by
computing the local maxima (in a 3x3x3 neighborhood) in scale and image space [16].
This step is O (mn log2 (max (m, n))) for a m × n size image (m is the height of the image
and n is the width in pixels).
Descriptor generation: For feature description, SURF uses Haar wavelet responses in the horizontal and vertical directions, which give the coarse-grained pixel contrast of a rectangular region around a given interest point. A neighbourhood of size 20s × 20s is taken around the keypoint, where s is the scale. The descriptor describes how the pixel intensities are distributed within a scale-dependent neighbourhood of each interest point [204]. Typically, descriptor vectors of length 64 or 128 are used.
FIGURE 4.1: Haar wavelet for gradient calculation in (a) x-direction and (b) y-direction
Computation of the descriptor vector first determines a reproducible orientation, identified from a circular region surrounding the interest point [203]. To assign the orientation, Haar wavelet responses (in the x and y directions) are computed in a circular neighbourhood of radius 6s (where s is the scale at which the interest point was identified).

The dominant orientation is estimated by calculating the sum of all responses within a sliding orientation window covering an angle of π/3. The longest vector (the sum of the horizontal and vertical responses) is chosen as the dominant orientation. After computing the dominant orientation, the descriptor is computed over a square region of size 20s, centred on the interest point and oriented along the dominant orientation. This region is split up into
smaller 4×4 square sub-regions, and Haar wavelet responses (dx in the x and dy in the y direction) are calculated within each sub-region at 5×5 regularly spaced sample points.

The wavelet responses are summed within each sub-region, and the following four-dimensional vector per sub-region forms the descriptor [201]:

$$v = \left(\sum d_x,\ \sum d_y,\ \sum |d_x|,\ \sum |d_y|\right) \quad (4.4)$$

With 4 × 4 sub-regions this gives a SURF feature descriptor with 64 dimensions in total. The lower the dimension, the faster the computation and matching, at some cost in the distinctiveness of the features [123].
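A sketch of how the 64-dimensional descriptor is assembled from the sub-region sums of Eq. (4.4). The final unit-length normalization is a standard SURF step assumed here, and the wavelet-response arrays (already oriented and weighted) are taken as given:

```python
import numpy as np

def surf_descriptor(dx, dy):
    """Assemble the 64-D SURF descriptor of Eq. (4.4) from Haar wavelet
    responses sampled on a 20x20 grid (4x4 sub-regions of 5x5 samples)."""
    assert dx.shape == dy.shape == (20, 20)
    v = []
    for r in range(0, 20, 5):           # 4 sub-region rows
        for c in range(0, 20, 5):       # 4 sub-region columns
            bx = dx[r:r + 5, c:c + 5]
            by = dy[r:r + 5, c:c + 5]
            # One 4-tuple per sub-region: (sum dx, sum dy, sum |dx|, sum |dy|)
            v.extend([bx.sum(), by.sum(), np.abs(bx).sum(), np.abs(by).sum()])
    v = np.asarray(v)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v        # unit length for contrast invariance
```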
4.2.2 Fingerprint Classification using Bag of Features (BoF)
The basic idea of the Bag-of-Features (BoF) approach is that a set of local image patches is sampled using a keypoint detector and a vector of visual descriptors is evaluated on each patch independently (e.g. a SURF descriptor or normalized pixel values). The resulting distribution of descriptors in descriptor space is then quantized against a pre-specified codebook, converting it into a histogram of votes for (i.e. patches assigned to) the codebook centres, and the resulting global feature vector is used as a characterization of the image. This feature vector is used for classification [205].

BoF methods have been applied to image classification, object detection, image retrieval, and even visual localization for robots. BoF approaches are characterized by the use of an order-less collection of image features [139]. Lacking any structure or spatial information, it is perhaps surprising that this representation is powerful enough to match or exceed state-of-the-art performance in many of the applications to which it has been applied. Due to its simplicity and performance, the Bag-of-Features approach has become well established in the field.
FIGURE 4.2: Example for Bag of Feature Classification
Liu [206] has discussed image feature representation based on BoF. As a first step, key point detection is performed. Popular key point detectors include the Laplacian of Gaussian (LoG), Difference of Gaussian (DoG), Harris Laplace, Hessian Laplace, Harris Affine, Hessian Affine and Maximally Stable Extremal Regions (MSER) [207][208]. In the next step, local descriptors describe the neighbourhood of each key point identified by the detectors; popular descriptors are SIFT, LBP and SURF [206].

Quantizing the descriptors into visual words requires generating a vocabulary in the next step: the descriptors are clustered, and each cluster is treated as a unique visual word of the vocabulary. K-means has been proposed as an effective clustering technique [209]. In local descriptor quantization, each local descriptor is assigned to one or more visual words; searching for the nearest neighbour among the vocabulary obtained in the vocabulary-generation step is the common way to assign a descriptor to visual word(s).
BoF is a flexible and adaptable framework, since each step may be realised by different techniques according to the needs of the application domain. There are two main stages in image classification using BoF: building the vocabulary and determining the image class.
FIGURE 4.3: Histogram representation for BoF
For building the vocabulary, the extracted feature descriptors are clustered using a clustering algorithm, such as the K-means technique, to generate a visual vocabulary [208].

The BoF model can be formulated as follows. The training dataset S is represented by

S = {s1, s2, …, sn} (4.5)

where the si are the extracted visual features. Using a clustering algorithm such as K-means, the visual words W are represented by

W = {w1, w2, …, wv} (4.6)

where v is the number of clusters. Then, the data is summarized in a v × n occurrence table of counts

Nij = n(wi, sj) (4.7)
where n(wi, sj) denotes how often the word wi occurs in image sj.

To determine the class of a query image, the descriptor of the query image is matched against the stored descriptors using the nearest-neighbour approach.
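A sketch of the vector-quantization step: each local descriptor votes for its nearest visual word, and the votes form the normalized histogram of word occurrences n(wi, sj) (illustrative Python, with a toy vocabulary standing in for the K-means output):

```python
import numpy as np

def bof_histogram(descriptors, vocabulary):
    """Quantize local descriptors against a visual vocabulary (one row per
    visual word) and return the normalized word-occurrence histogram."""
    # Squared distance from every descriptor to every visual word
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)            # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()             # normalized occurrence counts
```

The resulting fixed-length histogram is the global image representation handed to the classifier, regardless of how many key points the image produced.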
FIGURE 4.4: Classification flow using SURF features with BoF
4.2.3 Classification model of SURF Features with BoF
Here, the features of the image are first extracted using Speeded-Up Robust Features. The extracted features are then classified using the Bag of Features (BoF) approach, and the outputs of the classifier are stored in the model data library. During the verification process, the features of the input image are extracted and classified using the SVM classifier. The determined class is displayed as the output.
4.3 Experimental Results
The proposed methods are simulated on Intel Core i5, 2.60 GHz processor, 64-bit
Operating System in MATLAB 2016a environment. The effectiveness of the method is
measured by applying it on a standard dataset, NIST-4, which contains 800 fingerprint images (200 images of each class). This dataset is selected to allow comparison with the classification approaches discussed in Chapter 3. The experiments have been carried out using various images chosen arbitrarily for testing. The testing images are selected by the system arbitrarily and five different testcases are considered. In all testcases, the testing images are excluded from the training phase. The results of the various test cases are shown below.
(a) (b)
FIGURE 4.5: Sample images from the dataset used; a) NIST – 4, b) Captured
Fig. 4.6 to Fig. 4.10 show the confusion matrices of the fingerprints with SURF features and the BoF classifier. Fig. 4.11 shows the classification rate of the various testcases and Fig. 4.12 shows the classification rate of the various classes.
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           16        3           1          0      80.00
Left Loop       2       13           2          3      65.00
Right Loop      1        3          14          2      70.00
Whorl           0        1           1         18      90.00
Overall accuracy: 76.25%
FIGURE 4.6: Confusion matrix of Testcase1 using BoF Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           17        1           2          0      85.00
Left Loop       1       15           3          1      75.00
Right Loop      0        1          18          1      90.00
Whorl           0        1           2         17      85.00
Overall accuracy: 83.75%
FIGURE 4.7: Confusion matrix of Testcase2 using BoF Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           17        2           1          0      85.00
Left Loop       0       17           2          1      85.00
Right Loop      0        1          18          1      90.00
Whorl           0        1           1         18      90.00
Overall accuracy: 87.50%
FIGURE 4.8: Confusion matrix of Testcase3 using BoF Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           18        1           0          1      90.00
Left Loop       1       17           1          1      85.00
Right Loop      0        1          18          1      90.00
Whorl           0        2           1         17      85.00
Overall accuracy: 87.50%
FIGURE 4.9: Confusion matrix of Testcase4 using BoF Classifier
             Arch   Left Loop   Right Loop   Whorl   Rate (%)
Arch           19        1           0          0      95.00
Left Loop       0       18           1          1      90.00
Right Loop      1        0          18          1      90.00
Whorl           0        2           2         16      80.00
Overall accuracy: 88.75%
FIGURE 4.10: Confusion matrix of Testcase5 using BoF Classifier
FIGURE 4.11: Analysis of Results of various Testcases (with BoF Classifier)
[Bar chart: classification rate (%) per testcase. Testcase1: 76.25, Testcase2: 83.75,
Testcase3: 87.5, Testcase4: 87.5, Testcase5: 88.75]
FIGURE 4.12: Analysis of Results of various classes (with BoF Classifier)
4.4 Concluding Remarks
The classification system proposed in this chapter demonstrates the use of robust features
of fingerprint images. As discussed earlier, the efficiency of any fingerprint classification
system depends highly on the feature extraction method. To make the feature extraction
stage robust, the feature vector is built using SURF features. This feature vector is given
as input to the classifier.
TABLE 4.1: Comparison of average testcase accuracy and class accuracy of the approaches used for
classification

Sr. No.  Description                        LBP & GLCM   LBP & GLCM   SURF with
                                            with SVM     with kNN     BoF
1        Average Testcase Accuracy          74.50%       54.75%       84.75%
2        Average Arch Class Accuracy        85.00        66.00        87.00
3        Average Left Loop Class Accuracy   60.00        56.00        80.00
4        Average Right Loop Class Accuracy  67.50        46.00        86.00
5        Average Whorl Class Accuracy       72.50        51.00        86.00
Various experiments have been performed to analyse the effectiveness of the proposed
method. The experiments have been conducted for various testcases comprising different
fingerprint images of the various classes using BoF classification, and the results are
analysed. It has been observed that the BoF classifier achieves a classification accuracy
ranging from 76.25% to 88.75% across the testcases. The experiments have also been
conducted on the various
fingerprint classes individually. The classes of the different fingerprints have been chosen
arbitrarily and the results examined. Looking at the classification accuracy, it is concluded
that arch, right loop and whorl are the most prominent classes, classified most correctly
with classification accuracies of 86-87%. The remaining class, left loop, is recognized less
correctly due to the limitation of the method in properly differentiating feature points
between the classes.
Table 4.1 shows the comparison of the average testcase accuracies and average class
accuracies of the fingerprint classification methods discussed in Chapter 3 and this
chapter. It is important to note that the testcases are generated by the system randomly,
based on a partition of the fingerprint images with 10% for testing and the remainder for
training. Hence, the comparison is shown in terms of the average testcase accuracies and
average class accuracies. It is observed that the BoF classifier, used with the SURF
descriptor, is the strongest. Moreover, there is a larger spread among the average class
accuracies in the case of the LBP & GLCM features compared to the BoF classifier.
CHAPTER – 5
Retrieval using Scale and Rotation Invariant
Robust Features
5.1 Introduction
Extraction of discriminative and reliable features is crucial for fingerprint retrieval. The
texture found in images is a strongly discriminating feature for image classification and
also for image retrieval. Although there is no formal definition of texture, it can be
understood as the basic primitives in images whose spatial distribution creates visual
patterns [149]. Accordingly, the objective of a texture feature extraction method is to
create a feature vector that captures the image texture information while preserving its
content. Considering that the ridges and valleys of fingerprints form a textural pattern, it
is possible to capture discriminatory information through their textural representations.
Further, the captured representations in the feature vectors can be indexed and stored for
image retrieval purposes.
Feature extraction is a component of dimensionality reduction. Principal component
analysis (PCA) is the most popular feature extraction method; it seeks a linear mapping
that preserves the total variance. Linear discriminant analysis (LDA) is another frequently
used feature extraction method; it seeks an optimal projection space such that the
projected patterns have maximal inter-class distance and minimal intra-class distance [150].
For nonlinear cases, kernel and manifold learning frameworks have attracted much
attention and some promising results have been obtained. Kernel PCA (KPCA) and
kernel LDA (KLDA) are the nonlinear extensions of PCA and LDA respectively;
they first map the input data into a high-dimensional space through a kernel function,
and then perform the feature extraction method. Locality preserving projections
(LPP), locally linear embedding (LLE) and isometric feature mapping are three popular
manifold learning algorithms [151]. These algorithms and their extensions have been
effectively utilized in many image processing works.
Retrieval is defined as follows: given an input fingerprint, filter out a subset of candidate
fingerprints for finer matching by coarsely searching the database. If one of the retrieved
candidates originates from the same finger as the input, the retrieval is successful for this
input fingerprint; otherwise, it is a failure. Therefore, the retrieval accuracy/error is
calculated as the percentage of input fingerprints with retrieval success/failure [152].
As retrieval is meant to reduce the number of finer matchings for identification, the
average percentage of retrieved fingerprints from the database over all input fingerprints
represents the retrieval efficiency. The performance of a retrieval technique is thus
measured by its accuracy and efficiency. Continuous classification retrieves the
fingerprints in the database whose features fall in a neighbourhood centred at the input.
It usually provides better performance and can balance efficiency and accuracy more
easily than exclusive classification [153]. Here, a Bag of Features (BoF) is used for
generating the vocabulary, and clusters of vocabulary words are generated using k-means
clustering. Using kNN, the rank of each training dataset image with respect to the query
image is found, and the images are arranged in order of rank. Finally, the top n most
similar images in this order are displayed as the retrieved images.
The dominant local feature extractors are the Scale Invariant Feature Transform (SIFT)
and the Speeded-Up Robust Features (SURF) [96]. Due to their robustness and time
efficiency, SIFT and SURF have been deployed in a broad range of applications such as
object recognition, texture recognition and image retrieval. The proposed work uses the
SURF and SIFT features of fingerprint images for retrieval.
5.2 Methodology
Image retrieval provides a way for search engines to retrieve query-similar images from
the abundantly available/accessible digital images based on visual image contents rather
than metadata (manual image labelling/annotation) [154].
Image retrieval involves two steps: feature extraction and fingerprint retrieval. For
feature extraction, the following techniques are used:
Speeded Up Robust Features (SURF)
Scale Invariant Feature Transform (SIFT) and
Concatenation of SURF and SIFT
Panchal et al. [157] performed experiments to compare the effect of SIFT and SURF on
images, concluding that the number of key points detected by SIFT is significantly higher
than that detected by SURF. In this work, the experiments are carried out for key point
description using SIFT and SURF separately, as well as the concatenation of both features.
Feature extraction using the SURF method is discussed in Chapter 4, Section 4.2.1. In this
chapter, feature extraction using SIFT and retrieval using BoF are discussed.
5.2.1 Feature Extraction using Scale Invariant Feature Transform (SIFT)
Scale Invariant Feature Transform (SIFT) is an image descriptor for image-based matching
and recognition developed by David Lowe. These descriptors as well as the related image
descriptors are used for a large number of purposes in computer vision related to point
matching between different views of a 3-D scene and view-based object recognition [156].
The SIFT descriptor is invariant to translations, rotations and scaling transformations in the
image domain and robust to moderate perspective transformations and illumination
variations. Experimentally, the SIFT descriptor has been proven to be very useful in
practice for image matching and object recognition under real-world conditions.
In its original formulation, the SIFT descriptor comprised a method for detecting interest
points from a grey-level image at which statistics of local gradient directions of image
intensities were accumulated to give a summarizing description of the local image
structures in a local neighborhood around each interest point, with the intention that this
descriptor should be used for matching corresponding interest points between different
images.
Later, the SIFT descriptor has also been applied over dense grids, which has been shown
to lead to better performance for tasks such as object categorization, texture
classification, image alignment and biometrics. The SIFT descriptor has also been
extended from grey-level to colour images and from 2-D spatial images to 2+1-D
spatio-temporal video.
Generating the features of the image
The SIFT algorithm extracts distinctive features of local image patches which are
invariant to image scale and are robust to changes in noise, illumination, distortion and
viewpoint. It consists of four basic steps: scale-space extrema detection, key point
localization, key point orientation assignment and descriptor generation [156].
1. Extrema detection based on Scale-space
The first step of the computation searches for extrema over all scales and image locations
[157]. Let I(x, y) be an input image; then the scale space of the image I is defined as:

L(x, y, σ) = G(x, y, σ) * I(x, y)    (5.1)

where * is the convolution operation in the x and y directions, and the Gaussian function is

G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))    (5.2)
where σ denotes the scale factor. In order to efficiently detect potential interest points
that are invariant to scale and orientation, also called key points in the SIFT framework,
the method uses the scale-space extrema of the difference-of-Gaussian (DoG) function
convolved with the image, D(x, y, σ), which can be computed from the difference of two
nearby scales separated by a constant multiplicative factor k:

D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ)    (5.3)
The convolved images are grouped by octave, and an octave corresponds to doubling the
value of σ. The value of k is then selected so that a fixed number of blurred images per
octave is obtained, which also ensures that the same number of DoG images is generated per
octave. Once DoG images have been obtained, keypoints are identified as local minima or
maxima of the DoG images across scales. This is done by comparing each pixel in the
DoG images to its eight neighbors at the same scale and nine corresponding neighboring
pixels in each of the neighboring scales. If the pixel value is the maximum or minimum
among all compared pixels, it is selected as a candidate key point.
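The 26-neighbour extrema test described above can be sketched as follows. This is a toy
illustration over hypothetical, precomputed DoG values, not the full SIFT detector (the
Gaussian pyramid construction and sub-pixel refinement are omitted):

```python
# Sketch of SIFT step 1: candidate keypoints as strict extrema over the 26
# neighbours (8 at the same scale, 9 in each adjacent scale) of a DoG stack.

def is_extremum(dog, s, y, x):
    """True if dog[s][y][x] is a strict minimum or maximum over its 26 neighbours."""
    centre = dog[s][y][x]
    neighbours = [
        dog[s + ds][y + dy][x + dx]
        for ds in (-1, 0, 1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if not (ds == dy == dx == 0)
    ]
    return centre > max(neighbours) or centre < min(neighbours)

def candidate_keypoints(dog):
    """Scan the interior pixels of the interior scales for extrema."""
    points = []
    for s in range(1, len(dog) - 1):
        for y in range(1, len(dog[0]) - 1):
            for x in range(1, len(dog[0][0]) - 1):
                if is_extremum(dog, s, y, x):
                    points.append((s, y, x))
    return points

# Toy 3-scale, 3x3 DoG stack with a single strong response in the middle.
dog = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
dog[1][1][1] = 5.0
print(candidate_keypoints(dog))  # the centre pixel is the only extremum
```

In the real algorithm these candidates are then filtered and refined in the key point
localization step described next.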
2. Key point localization
Scale-space extrema detection produces too many key point candidates, including some
unstable ones. To retain only stable key points, the candidates are filtered in this step.
Once a key point candidate has been found by comparing a pixel to its neighbours, the next
step is to perform a detailed fit to the nearby data for accurate location, scale, and ratio
of principal curvatures. This information allows the rejection of points that have low
contrast (and are therefore sensitive to noise) or are poorly localized along an edge [158].
FIGURE 5.1: Key point localization at different scales
3. Orientation assignment
This is the key step to achieve invariance to image rotation, in which each key point is
assigned one or more orientations based on local image gradient directions [159]. The key
point orientation is calculated from an orientation histogram of local gradients of the
closest smoothed image. For each image sample at the key point's scale σ, the gradient
magnitude and orientation are computed using pixel differences. Let

L1 = L(x + 1, y) − L(x − 1, y)    (5.5)
L2 = L(x, y + 1) − L(x, y − 1)    (5.6)

then

m(x, y) = √(L1² + L2²)    (5.7)
θ(x, y) = arctan(L2 / L1)    (5.8)
An orientation histogram with 36 bins, each bin covering 10 degrees, is formed from the
gradient orientations of sample points within a region around the key point. The peak
orientation is then assigned to the key point; additional key points are created for any
orientation whose histogram value is within 80% of the maximum.
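The orientation-assignment rule above (36 bins of 10 degrees, extra orientations for
peaks within 80% of the maximum) can be sketched as follows; the (magnitude,
orientation) gradient samples here are hypothetical, and the Gaussian weighting of
samples used by real SIFT is omitted:

```python
# Sketch of SIFT step 3: dominant orientation(s) from a 36-bin histogram of
# gradient orientations, each sample weighted by its gradient magnitude.

def dominant_orientations(samples, bins=36, peak_ratio=0.8):
    """samples: list of (magnitude, orientation_degrees) pairs. Returns the
    bin centres (degrees) whose histogram value is within peak_ratio of the
    maximum bin."""
    hist = [0.0] * bins
    width = 360.0 / bins
    for mag, theta in samples:
        hist[int(theta % 360.0 // width) % bins] += mag
    peak = max(hist)
    return [i * width for i, h in enumerate(hist) if h >= peak_ratio * peak]

# Two strong gradient directions near 10-20 and near 180 degrees.
samples = [(1.0, 12.0), (2.0, 14.0), (2.5, 181.0), (0.2, 300.0)]
print(dominant_orientations(samples))  # both strong bins survive the 80% rule
```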
FIGURE 5.2: Orientation Histogram
4. Key point descriptor
The previous operations have assigned an image location, scale, and orientation to each
key point, ensuring invariance to image rotation, location and scale. Descriptor vectors
are then computed for each key point such that the descriptors are distinctive and robust
to other variations, such as changes in illumination. The feature descriptor is computed
as a set of orientation histograms over 4 x 4 pixel neighbourhoods.
The orientation histograms are relative to the key point orientation, and the orientation
data comes from the Gaussian image closest in scale to the key point's scale. Histograms
contain 8 bins each, and each descriptor contains a 4 x 4 array of 16 histograms around
the key point. This leads to a SIFT feature vector with 4 x 4 x 8 = 128 elements [160].
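The 4 x 4 x 8 = 128 layout can be sketched as below. The uniform gradient field is
synthetic, and real SIFT additionally applies Gaussian weighting, trilinear interpolation
and vector normalization, all omitted here:

```python
# Sketch of the SIFT descriptor layout: a 16x16 patch split into a 4x4 grid of
# cells, each summarised by an 8-bin orientation histogram, concatenated.

def sift_descriptor(mags, thetas):
    """mags, thetas: 16x16 grids of gradient magnitude and orientation in
    degrees (already rotated relative to the key point orientation).
    Returns the 128-element descriptor."""
    desc = []
    for by in range(4):
        for bx in range(4):
            hist = [0.0] * 8  # 8 bins of 45 degrees each
            for y in range(by * 4, by * 4 + 4):
                for x in range(bx * 4, bx * 4 + 4):
                    hist[int(thetas[y][x] % 360 // 45)] += mags[y][x]
            desc.extend(hist)
    return desc

# Synthetic patch: unit gradients all pointing at 90 degrees.
mags = [[1.0] * 16 for _ in range(16)]
thetas = [[90.0] * 16 for _ in range(16)]
d = sift_descriptor(mags, thetas)
print(len(d))  # 128
```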
FIGURE 5.3: Computation of Key point Descriptors
5.2.2 Feature Extraction using SURF and SIFT
The features of the input image are extracted by applying SURF, SIFT and the
concatenation of both features, and the outputs are stored in the model data library.
During image retrieval, the features of the input image are extracted by applying SURF,
SIFT and their concatenation, and verified against the stored data. In the verification
process, the features are checked for matches; a ranking process is then carried out and
the result is displayed.
5.2.3 Retrieval using Bag of Features
Content Based Image Retrieval (CBIR) systems are used to find images that are visually
similar to a query image. The application of CBIR systems can be found in many areas
such as a web-based product search, surveillance, and visual place identification. A
common technique used to implement a CBIR system is bag of visual words, also known
as bag of features. Bag of features is a technique adapted to image retrieval from the world
of document retrieval [161].
Instead of using actual words as in document retrieval, bag of features uses image features
as the visual words that describe an image. Image features are an important part of CBIR
systems. These image features are used to gauge similarity between images and can
include global image features such as color, texture, and shape. Image features can also be
local image features such as speeded up robust features (SURF), histogram of gradients
(HOG), or local binary patterns (LBP) [162].
The benefit of the bag-of-features approach is that the type of features used to create the
visual word vocabulary can be customized to fit the application. The BoW model was first
used in the areas of natural language processing and information retrieval, in which a
document is represented as a bag of orderless words. In the multimedia search area, the BoW
model is adopted to represent an image as a collection of orderless visual words, where one
key issue is how to construct a visual vocabulary.
In essence, the construction of visual vocabulary is to vector-quantize the local image
descriptors in a training set. The speed and efficiency of image search is also important in
CBIR systems. For example, it may be acceptable to perform a brute-force search in a
small collection of less than 100 images, where features from the query image
are compared to features from each image in the collection. For larger collections, a brute
force search is not feasible and more efficient search techniques must be used [163].
The bag of features provides a concise encoding scheme to represent a large collection of
images using a sparse set of visual word histograms. This enables compact storage and
efficient search through an inverted index data structure. The Computer Vision System
Toolbox in MATLAB provides a customizable bag-of-features framework to implement an
image retrieval system. The following steps outline the procedure:
Select the Image Features for Retrieval
Create a Bag Of Features
Index the Images
Search for Similar Images
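The last two steps, indexing and searching, can be sketched with a small inverted index.
The image names and visual-word IDs below are hypothetical, and the scoring is a bare
word-hit count rather than the tf-idf weighting typically used in practice:

```python
from collections import defaultdict

# Sketch of an inverted index over visual words: each word maps to the set of
# database images that contain it; a query is answered by counting word hits.

def build_inverted_index(image_words):
    """image_words: {image_name: list of visual-word IDs}."""
    index = defaultdict(set)
    for name, words in image_words.items():
        for w in words:
            index[w].add(name)
    return index

def search(index, query_words):
    """Rank database images by how many query words point to them."""
    votes = defaultdict(int)
    for w in query_words:
        for name in index.get(w, ()):
            votes[name] += 1
    return sorted(votes, key=votes.get, reverse=True)

db = {"fp_a": [3, 7, 7, 9], "fp_b": [1, 2, 9], "fp_c": [3, 9, 11]}
index = build_inverted_index(db)
result = search(index, [3, 9])
print(result)  # fp_a and fp_c match both query words; fp_b matches only one
```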
In the recent past, the Bag-of-Features (BoF) model has been gaining popularity in the
area of image retrieval (also known as object retrieval) based on local features, owing to
the scalability of the retrieval system. In this model, local features are trained in
advance to build a visual vocabulary. This vocabulary is then used to quantize the local
features of an image: similar local features are approximated by their cluster centre, the
visual word. The quantization of visual words strongly affects the retrieval result in
image retrieval based on local features. Moreover, the pre-trained bag of features has a
crucial effect on the quantization of visual words.
In a BoF-based image retrieval system, a k-means clustering algorithm is used to improve
the visual word training. The distribution of the feature data on each dimension is
analysed and combined with a distance method that adaptively partitions the
high-dimensional data space according to the data distribution, in order to obtain the
initial cluster centres. Finally, the visual vocabulary is obtained by using the k-means
algorithm to cluster the feature data and train the visual words.
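The vocabulary-training step can be sketched with a plain k-means loop. The 2-D points
below are synthetic stand-ins for SURF/SIFT descriptors (which are 64- or 128-dimensional),
and a deterministic initialization from the first k points replaces the adaptive
initialization described above:

```python
# Toy sketch of visual-vocabulary training via k-means: the final cluster
# centres play the role of visual words.

def kmeans(points, k, iters=20):
    """Plain k-means: initialize centres from the first k points, then
    alternate nearest-centre assignment and mean updates."""
    centres = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centre
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])))
            clusters[i].append(p)
        for i, c in enumerate(clusters):  # move each centre to its cluster mean
            if c:
                centres[i] = tuple(sum(v) / len(c) for v in zip(*c))
    return centres

# Synthetic "descriptors": two tight groups near (0, 0) and (5, 5).
points = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
vocab = sorted(kmeans(points, k=2))
print(vocab)  # one visual word near (0.05, 0.05), one near (5.05, 5.0)
```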
Image Retrieval System
The main procedure of image retrieval based on the bag-of-words model includes two
steps: building the index table, and retrieval. While building the index table, images in
the database are transformed into sets of visual words and are represented by those
visual words in the corresponding database. In the retrieval process, the query image is
first transformed into a set of visual words, and each visual word is used to find the
related indexed images in the inverted index
table [165].
A generalized diagram of image retrieval using bag of features is shown in Fig. 5.4. After
obtaining the visual vocabulary, the corpus can be acquired by quantizing each image in
the database. The vector space model from information retrieval can then be used for
retrieval. In this model, the query image and each image in the database are expressed as
sparse vectors of visual words.
FIGURE 5.4: Image Retrieval on the basis of Bag of Words
The major process of transforming an image into visual words includes the following steps.
Feature detection and description
SIFT and SURF features are extracted from each image in the image database to describe
local features. SIFT is robustly invariant to image scaling, rotation and luminance
changes. The extraction processes are described in the previous sections.
Training the Visual Vocabulary
In the BoF model, the training of the visual vocabulary is an important step, which aims
to obtain a space division of the feature description vectors. In the feature quantization
phase, feature description vectors which fall into the same space division are treated as
the same visual word, and are then matched when image matching is performed in image
retrieval. The quantization error of visual words is strongly determined by the
correctness of the space division; hence, the performance of the visual vocabulary
ultimately affects the image retrieval accuracy.
Quantization of features
Quantization of features is the process of assigning one feature to one or multiple visual
words. To assign an image feature to visual word(s), the common way is to search for its
nearest neighbours among the vocabulary obtained in the vocabulary generation step.
Finally, the visual word IDs are used to generate the index file.
For indexing in the database, each local feature of an image is quantized by a
nearest-neighbour search to the cluster centre in the visual vocabulary closest to the
feature vector; the image is then listed in the index under the corresponding visual words.
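The quantization step can be sketched as a nearest-centre lookup that turns a set of
descriptors into a visual-word histogram; the 2-word vocabulary and 2-D descriptors below
are hypothetical toy data:

```python
# Sketch of feature quantization: each descriptor is assigned the ID of its
# nearest visual word, and an image becomes a histogram over word IDs.

def quantise(descriptor, vocabulary):
    """Return the index of the nearest visual word (squared Euclidean distance)."""
    return min(range(len(vocabulary)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(descriptor, vocabulary[i])))

def word_histogram(descriptors, vocabulary):
    """Count how many of the image's descriptors fall on each visual word."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        hist[quantise(d, vocabulary)] += 1
    return hist

vocabulary = [(0.0, 0.0), (5.0, 5.0)]           # hypothetical 2-word vocabulary
descriptors = [(0.2, 0.1), (4.8, 5.3), (5.1, 4.9)]
print(word_histogram(descriptors, vocabulary))  # [1, 2]
```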
K-Means Clustering
Fingerprints in the database are clustered to facilitate a fast retrieval process that
avoids exhaustive comparisons of an input fingerprint with all fingerprints in the
database [166].
k-means is one of the simplest unsupervised learning algorithms that solve the well-known
clustering problem. The procedure follows a simple and easy way to classify a given data
set through a certain number of clusters (assume k clusters) fixed a priori. The main idea
is to define k centres, one for each cluster. These centres should be placed in a
careful way, because different locations cause different results. So, the better choice is
to place them as far away from each other as possible. The next step is to take each point
belonging to the given data set and associate it with the nearest centre. When no point is
pending, the first step is completed and an early grouping is done.
At this point we need to re-calculate k new centroids as center of the clusters resulting
from the previous step. After we have these k new centroids, a new binding has to be done
between the same data set points and the nearest new center. A loop has been generated.
As a result of this loop we may notice that the k centers change their location step by step
until no more changes are done or in other words centers do not move any more. Finally,
this algorithm aims at minimizing an objective function known as the squared error
function, given by:

J(V) = Σ_{i=1}^{c} Σ_{j=1}^{c_i} ||x_i − v_j||²    (5.9)

where ||x_i − v_j|| represents the Euclidean distance between x_i and v_j, c_i denotes the
number of vocabularies in the i-th cluster, and c is the number of cluster centres.
A widely used construction scheme is to perform exact k-means clustering and treat each
cluster centroid as a visual word. In the word assignment stage, a new local descriptor is
mapped to a visual word by searching its nearest cluster centroid.
Although this method is quite effective, its time and space complexity is extremely high,
making it impractical for handling a large set of training samples. To improve the
computational efficiency, a number of methods have been designed to efficiently construct
visual vocabularies. An approximate k-means clustering scheme speeds up the vocabulary
construction by replacing the exact computation with an approximate nearest-neighbor
method.
FIGURE 5.5: Example for k-Means Clustering with 4 clusters
Fig. 5.5 shows an example of k-means clustering with 4 clusters. The vocabularies are
split and clustered into 4 clusters, represented using different colours: red, blue, green
and black.
Image Ranking using k-NN
The algorithm shows the steps of the decision-making strategy for test instances. Given a
test instance x, its k nearest neighbours are found first. Let Nk(x) = {Nk(x, j) | j = 1,
..., k} be the k nearest neighbours of the instance x, where Nk(x, j) is the j-th closest
neighbour of x. A general ranking model is then exploited to re-rank the k nearest
neighbours. Let N'k(x) = {N'k(x, j) | j = 1, ..., k} be the re-ranked neighbours of the
instance x, where N'k(x, j) is the j-th most trustable neighbour of x, as decided by the
ranking model. Finally, a weighted voting strategy is applied to obtain the final
prediction. The ranking model used to re-rank the k nearest neighbours based on their
trustiness determines the weights for the weighted voting strategy [167].
The training examples are vectors in a multidimensional feature space, each with a class
label. The training phase of the algorithm consists only of storing the feature vectors and
class labels of the training samples.
In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or
test point) is classified by assigning the label which is most frequent among the k training
samples nearest to that query point. A commonly used distance metric for continuous
variables is Euclidean distance. For discrete variables, such as for text classification,
another metric can be used, such as the overlap metric (or Hamming distance). In the
context of gene expression microarray data, for example, k-NN has also been employed
with correlation coefficients such as Pearson and Spearman. Often, the classification
accuracy of k-NN can be improved significantly if the distance metric is learned with
specialized algorithms such as Large Margin Nearest Neighbor or Neighbourhood
components analysis.
FIGURE 5.6: Retrieval flow (training phase: input images, SURF and SIFT local features,
k-means clustering, visual vocabulary; testing phase: query image, SURF and SIFT features,
bag-of-words histogram, kNN ranking, retrieval result)
A drawback of the basic "majority voting" classification occurs when the class distribution
is skewed. That is, examples of a more frequent class tend to dominate the prediction of the
new example, because they tend to be common among the k nearest neighbors due to their
large number. One way to overcome this problem is to weight the classification, taking into
account the distance from the test point to each of its k nearest neighbors. The class (or
value, in regression problems) of each of the k nearest points is multiplied by a weight
proportional to the inverse of the distance from that point to the test point. Another way to
overcome skew is by abstraction in data representation. For example, in a self-organizing
map (SOM), each node is a representative (a center) of a cluster of similar points,
regardless of their density in the original training data. K-NN can then be applied to the
SOM. Fig. 5.6 shows the retrieval flow used for the experiments.
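The distance-weighted kNN decision described above can be sketched as follows; the image
names, feature vectors and class labels are hypothetical toy data, with Euclidean distance
as the metric:

```python
import math

# Sketch of kNN ranking and distance-weighted voting over stored feature vectors.

def knn_rank(query, database, k=3):
    """Return the names of the k database entries nearest to the query."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(database, key=lambda name: dist(query, database[name]))[:k]

def weighted_vote(query, database, labels, k=3):
    """Class decision: each neighbour votes with weight 1 / distance."""
    votes = {}
    for name in knn_rank(query, database, k):
        d = math.sqrt(sum((x - y) ** 2 for x, y in zip(query, database[name])))
        votes[labels[name]] = votes.get(labels[name], 0.0) + 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)

database = {"img1": (1.0, 0.0, 2.0), "img2": (0.9, 0.1, 2.1), "img3": (5.0, 4.0, 0.0)}
labels = {"img1": "whorl", "img2": "whorl", "img3": "arch"}
query = (1.0, 0.1, 2.0)
print(knn_rank(query, database))               # nearest-first ranking
print(weighted_vote(query, database, labels))  # close whorl neighbours dominate
```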
5.3 Experimental Results
The proposed method is simulated on an Intel Core i5 2.60 GHz processor with a 64-bit
operating system, in the MATLAB 2016a environment. The effectiveness of the method is
measured by applying it to a standard dataset, i.e. FVC, which contains 200 images of
each fingerprint class.
The experiments have been carried out using various images chosen arbitrarily for testing
from the FVC and captured-image datasets. The testing images are selected manually and
are excluded from the training phase. The results for the various test images are shown
below. Fig. 5.7 shows sample images from the datasets used, and Fig. 5.8 shows a sample
query image with its retrieved images. The following performance measures are used to
analyse the results:
Precision (P) = True Positives / (True Positives + False Positives) (5.10)
Recall (R) = True Positives / (True Positives + Missed) (5.11)
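Eqs. 5.10 and 5.11 can be applied as sketched below. The image names are hypothetical,
and the assumption of seven relevant same-finger impressions per query is an inference
from the recall denominators seen in Tables 5.1 and 5.2:

```python
# Sketch of the evaluation in Eqs. 5.10-5.11 for one retrieval result.

def precision_recall(retrieved, relevant):
    """Precision over the retrieved list; recall over the relevant
    (same-finger) images in the database."""
    tp = len(set(retrieved) & set(relevant))
    return tp / len(retrieved), tp / len(relevant)

# Hypothetical query finger "101": 10 images retrieved, and 7 other
# impressions of the same finger present in the database.
retrieved = ["101_%d" % i for i in range(1, 11)]
relevant = ["101_%d" % i for i in range(2, 9)]
p, r = precision_recall(retrieved, relevant)
print(p, r)  # 0.7 1.0
```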
The result analysis of the FVC dataset images and the captured image dataset is shown in
Tables 5.1 to 5.4. Table 5.1 shows the precision and recall values on the FVC dataset
when SIFT, SURF and fusion features are used without applying the classification phase;
Euclidean distance is used to rank the similar images. Table 5.2 shows the precision and
recall values on the FVC dataset when SIFT, SURF and fusion features are used after
applying the classification phase (as described in Chapter 3 and Chapter 4); retrieval
results are obtained using the BoF classifier to rank the similar images. Table 5.3 shows
the precision and recall values on the captured image dataset when SIFT, SURF and fusion
features are used without applying the classification phase; Euclidean distance is used to
rank the similar images. Table 5.4 shows the precision and recall values on the captured
image dataset when SIFT, SURF
and fusion features are used after applying the classification phase (as described in
Chapter 3 and Chapter 4). Retrieval results are obtained using the BoF classifier to rank
the similar images.
FIGURE 5.7: Sample images from the dataset used; a) FVC2002, b) FVC2004 and c) Captured
FIGURE 5.8: a) Query fingerprint image and b) retrieved similar images
Tables 5.1 and 5.2 give the experimental results on the FVC dataset images. The first
column shows the fingerprint image used as the query image for retrieval. The next four
columns represent the results when the SURF and SIFT descriptors are used independently.
The following two columns represent the retrieval results when the SURF and SIFT
features are concatenated, and the last two columns represent the results of the BoF
classifier used with the concatenated features. Table 5.1 shows the results without using
classification before the retrieval phase, while Table 5.2 shows the results of retrieval
after applying the classification results as discussed in Chapter 4. Tables 5.3 and 5.4
give the corresponding results on the captured image dataset, analogous to Tables 5.1
and 5.2.
TABLE 5.1: Retrieval Result analysis of FVC dataset images without performing classification phase
Dataset  SURF with BoF       SIFT with BoF       Concatenation       Bag of Features
         Precision  Recall   Precision  Recall   Precision  Recall   Precision  Recall
101_1    0.40       0.57     0.40       0.57     0.60       0.86     0.50       0.71
102_2    0.50       0.71     0.50       0.71     0.50       0.71     0.50       0.71
103_3    0.40       0.57     0.40       0.57     0.50       0.71     0.40       0.57
104_4    0.30       0.43     0.40       0.57     0.20       0.29     0.40       0.57
105_5    0.40       0.57     0.30       0.43     0.50       0.71     0.60       0.86
106_6    0.50       0.71     0.60       0.86     0.40       0.57     0.30       0.43
107_7    0.30       0.43     0.30       0.43     0.30       0.43     0.40       0.57
108_8    0.40       0.57     0.40       0.57     0.50       0.71     0.60       0.86
TABLE 5.2: Retrieval Result analysis of FVC dataset images after performing classification phase
Dataset  SURF with BoF       SIFT with BoF       Concatenation       Bag of Features
         Precision  Recall   Precision  Recall   Precision  Recall   Precision  Recall
101_1    0.50       0.71     0.60       0.86     0.60       0.86     0.70       1.00
102_2    0.50       0.71     0.60       0.86     0.60       0.86     0.50       0.71
103_3    0.50       0.71     0.70       1.00     0.50       0.71     0.60       0.86
104_4    0.60       0.86     0.30       0.43     0.40       0.57     0.40       0.57
105_5    0.30       0.43     0.40       0.57     0.50       0.71     0.70       1.00
106_6    0.50       0.71     0.50       0.71     0.60       0.86     0.60       0.86
107_7    0.60       0.86     0.60       0.86     0.50       0.71     0.70       1.00
108_8    0.60       0.86     0.50       0.71     0.60       0.86     0.60       0.86
TABLE 5.3: Retrieval Result analysis of Captured Image dataset images without performing classification
phase
Dataset      SURF with BoF       SIFT with BoF       Concatenation       Bag of Features
             Precision  Recall   Precision  Recall   Precision  Recall   Precision  Recall
finger163_2  0.20       0.67     0.20       0.67     0.20       0.67     0.20       0.67
finger18_3   0.10       0.33     0.20       0.67     0.20       0.67     0.20       0.67
finger1_1    0.20       0.67     0.10       0.33     0.20       0.67     0.20       0.67
finger27_1   0.10       0.33     0.10       0.33     0.10       0.33     0.20       0.67
TABLE 5.4: Retrieval result analysis of Captured Image dataset images after performing the classification phase

Dataset      SURF with BoF      SIFT with BoF      Concatenation      Bag of Features
             Precision  Recall  Precision  Recall  Precision  Recall  Precision  Recall
finger163_2  0.20       0.67    0.20       0.67    0.20       0.67    0.20       0.67
finger18_3   0.20       0.67    0.20       0.67    0.20       0.67    0.20       0.67
finger1_1    0.20       0.67    0.10       0.33    0.30       1.00    0.30       1.00
finger27_1   0.10       0.33    0.10       0.33    0.10       0.33    0.20       0.67
5.4 Concluding Remarks
The retrieval system proposed in this chapter demonstrates the use of fused features
combining robust SURF features with scale and rotation invariant SIFT features. The
feature vector generated by processing the training images is given as input to the BoF
classifier, which ranks the dataset images by similarity to the query, and the n most
similar images are displayed as the result.
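The fusion-and-ranking pipeline is not listed as code in the thesis; the following numpy-only sketch illustrates the idea under stated assumptions. The vocabularies here are random stand-ins for k-means visual words, and in practice the local descriptors would come from SURF and SIFT detectors; all function names are illustrative.

```python
import numpy as np

def bof_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and
    build an L1-normalised occurrence histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def rank_by_similarity(query_hist, db_hists, n=10):
    """Return indices of the n database images most similar to the query
    (cosine similarity between concatenated BoF histograms)."""
    db = np.asarray(db_hists)
    sims = db @ query_hist / (np.linalg.norm(db, axis=1)
                              * np.linalg.norm(query_hist) + 1e-12)
    return np.argsort(-sims)[:n]

rng = np.random.default_rng(0)
vocab_surf = rng.normal(size=(50, 64))    # stand-in SURF words (64-D descriptors)
vocab_sift = rng.normal(size=(50, 128))   # stand-in SIFT words (128-D descriptors)

def concat_features(surf_desc, sift_desc):
    # Fusion: concatenate the two BoF histograms into one feature vector
    return np.concatenate([bof_histogram(surf_desc, vocab_surf),
                           bof_histogram(sift_desc, vocab_sift)])
```

Ranking by cosine similarity between concatenated histograms is one common choice; the similarity measure actually used in the thesis (Eq. 5.11) may differ.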
Experiments are carried out on the FVC fingerprint dataset and the captured image
dataset to analyse the effectiveness of the proposed method. The FVC dataset is mainly
used for fingerprint verification; as it contains eight images of each human finger, it
is also suitable for evaluating fingerprint retrieval. Fingerprint images are selected
arbitrarily for testing, and the test set is excluded from the training set. The FVC
dataset used for the experiments contains a total of 352 fingerprint images. The other
dataset, of captured images, contains four images of each human finger and a total of
366 fingerprint images. The result analysis is done by displaying the n (n = 10) most
similar images as the retrieval result. Since the number of images per fingerprint is
smaller than the number of similar images displayed, the recall measurement reflects the
actual efficiency of the system. The experiments show that the retrieval results
(obtained from Eq. 5.11) achieve recall values above 50% in almost all cases when the
concatenated features are used with the BoF classifier to calculate the similarity
between the query and dataset images, for both kinds of datasets. This recall value is
better than those of several fingerprint retrieval systems presented in the literature
[213].
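The precision and recall figures in Tables 5.1-5.4 follow directly from the standard definitions: with n = 10 retrieved images, the FVC dataset offers 7 relevant images per query (8 impressions per finger, minus the query itself) and the captured dataset offers 3. A small check of this relationship, with illustrative function names:

```python
def precision_recall(relevant_retrieved, n_retrieved, n_relevant_total):
    """Standard retrieval measures for a fixed-length result list."""
    precision = relevant_retrieved / n_retrieved
    recall = relevant_retrieved / n_relevant_total
    return round(precision, 2), round(recall, 2)

# FVC dataset: 8 impressions per finger, query excluded -> 7 relevant, n = 10
print(precision_recall(5, 10, 7))   # 5 hits -> (0.5, 0.71), as in Table 5.2
# Captured dataset: 4 impressions per finger -> 3 relevant, n = 10
print(precision_recall(2, 10, 3))   # 2 hits -> (0.2, 0.67), as in Table 5.3
```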
Chapter 6
Conclusion and Further Directions
6.1 Conclusion
A fingerprint image retrieval system primarily involves two steps, namely fingerprint
image classification and retrieval. As a first step, the fingerprint image is classified
into one of the four standard fingerprint classes: arch, left loop, right loop and
whorl. The classification itself has two basic steps: feature extraction and applying a
classifier. The first proposed method extracts texture features using LBP and GLCM and
uses the concatenation of these two feature sets for classification. Two standard
classifiers, SVM and kNN, are used to judge the improvement in classification. When the
features are applied to the SVM classifier, this approach achieves classification
accuracy ranging from 68.75% to 82.50% across the various test cases tried, which is
higher than the classification accuracy achieved using the kNN classifier. From the
analysis, arch is the most prominent class, being classified most correctly with 85%
classification accuracy using the SVM classifier.
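The LBP-plus-GLCM feature extraction of the first method can be sketched as follows. This is an illustrative minimal implementation (basic 8-neighbour LBP and a single-offset GLCM summarised by three statistics), not the exact configuration used in the thesis:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP: each neighbour is thresholded against the
    centre pixel and the 8 bits form a code in [0, 255]."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= centre).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def glcm_features(img, levels=8):
    """Grey-level co-occurrence matrix for the horizontal offset (dx = 1),
    summarised by the contrast, energy and homogeneity statistics."""
    q = np.clip(img.astype(int) * levels // 256, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

def texture_feature(img):
    # Concatenated LBP histogram (256-D) and GLCM statistics (3-D)
    return np.concatenate([lbp_histogram(img), glcm_features(img)])
```

The resulting 259-dimensional vector would then be fed to the SVM or kNN classifier; production systems typically use library implementations (e.g. scikit-image) with several GLCM offsets rather than a single one.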
Robust features for improving the classification accuracy have been explored in the
second proposed classification technique. A feature vector consisting of SURF features
extracted from the training images is used as input to the BoF classifier. It has been
observed that the BoF classifier achieves classification accuracy ranging from 76.25% to
88.75% across the various test cases. From the analysis, arch, right loop and whorl are
the most prominent classes, being classified most correctly with classification accuracy
of 86-87%.
The classified query image is given to the retrieval system, which searches for and
displays the n images most similar to the query from the same class as the query image.
The retrieval approach discussed in this work uses concatenated SURF and SIFT features
to make the representation scale and rotation invariant. Precision and recall are the
two performance measures used to analyze the efficiency of the retrieval system. Because
the number of images of each fingerprint is finite, recall is considered the major
factor for evaluating the performance of the proposed system. From the results it is
observed that performing retrieval after classification gives good recall values.
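The retrieval-after-classification step amounts to restricting the candidate set to the query's predicted class before ranking, which is why it improves recall for a fixed n. A minimal sketch (function and variable names are illustrative, not from the thesis):

```python
def retrieve_after_classification(query_class, db_classes, similarities, n=10):
    """Keep only database images whose predicted class matches the query,
    then return the n most similar of those (indices into the database)."""
    candidates = [i for i, c in enumerate(db_classes) if c == query_class]
    candidates.sort(key=lambda i: similarities[i], reverse=True)
    return candidates[:n]

db_classes = ["arch", "whorl", "arch", "left_loop", "arch"]
sims = [0.9, 0.95, 0.4, 0.8, 0.7]
print(retrieve_after_classification("arch", db_classes, sims, n=2))  # [0, 4]
```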
6.2 Further Directions
The fingerprint retrieval research presented here can be advanced further as follows:
The proposed fingerprint image classification and retrieval methods can be applied to
fingerprint recognition and registration.
More realistic fingerprint image databases can be used, on which the retrieval
effectiveness of the proposed image descriptors is expected to improve considerably.
References
97
List of References
[1] Kshetri, Nir. "Cybercrime and cyber-security issues associated with China: some economic and
institutional considerations." Electronic Commerce Research 13, no. 1 (2013): 41-69.
[2] Gubbi, Jayavardhana, Rajkumar Buyya, Slaven Marusic, and Marimuthu Palaniswami. "Internet of
Things (IoT): A vision, architectural elements, and future directions." Future generation computer
systems 29, no. 7 (2013): 1645-1660.
[3] Unar, J. A., Woo Chaw Seng, and Almas Abbasi. "A review of biometric technology along with
trends and prospects." Pattern recognition 47, no. 8 (2014): 2673-2688.
[4] Kühl, Hjalmar S., and Tilo Burghardt. "Animal biometrics: quantifying and detecting phenotypic
appearance." Trends in ecology & evolution 28, no. 7 (2013): 432-441.
[5] Islam, Mohammad, Abd-elhamid Taha, and Selim Akl. "A survey of access management techniques
in machine type communications." IEEE communications Magazine 52, no. 4 (2014): 74-81.
[6] Kalra, Sheetal, and Sandeep K. Sood. "Secure authentication scheme for IoT and cloud servers."
Pervasive and Mobile Computing 24 (2015): 210-223.
[7] Frank, Mario, Ralf Biedert, Eugene Ma, Ivan Martinovic, and Dawn Song. "Touchalytics: On the
applicability of touchscreen input as a behavioral biometric for continuous authentication." IEEE
transactions on information forensics and security 8, no. 1 (2013): 136-148.
[8] Divya, R., and V. Vijayalakshmi. "Analysis of multimodal biometric fusion based authentication
techniques for network security." International Journal of Security and Its Applications 94 (2015):
239-246.
[9] Neumann, Cedric, and Hal Stern. "forensic Examination of fingerprints: past, present, and future."
Chance 29, no. 1 (2016): 9-16.
[10] Saini, Rupinder, and Narinder Rana. "Comparison of various biometric methods." International
Journal of Advances in Science and Technology 2, no. 1 (2014): 24-30.
[11] Aravindan, Ashok, and S. M. Anzar. "Robust partial fingerprint recognition using wavelet SIFT
descriptors." Pattern Analysis and Applications 20, no. 4 (2017): 963-979.
[12] Galbally, Javier. "Anti-spoofing, fingerprint databases." Encyclopedia of Biometrics (2015): 79-86.
[13] Jain, Anil K., Karthik Nandakumar, and Arun Ross. "50 years of biometric research:
Accomplishments, challenges, and opportunities." Pattern Recognition Letters 79 (2016): 80-105.
[14] Sarkar, Swagato. "The unique identity (UID) project, biometrics and re-imagining governance in
India." Oxford Development Studies 42, no. 4 (2014): 516-533.
[15] Fieldhouse, Sarah, and Karen Stow. "The Identification of Missing Persons Using Fingerprints." In
Handbook of Missing Persons, pp. 389-413. Springer, Cham, 2016.
[16] Saxby, Steve. "The 2013 CLSR-LSPI seminar on electronic identity: The global challenge–Presented
at the 8th International Conference on Legal, Security and Privacy issues in IT Law (LSPI) November
11–15, 2013, Tilleke & Gibbins International Ltd., Bangkok, Thailand." Computer Law & Security
Review 30, no. 2 (2014): 112-125.
[17] Kindt, Els J. "An Introduction into the Use of Biometric Technology." In Privacy and Data Protection
Issues of Biometric Applications, pp. 15-85. Springer, Dordrecht, 2013.
[18] Kindt, Els J. "The Risks Involved upon the Use of Biometric Data and Biometric Systems." In Privacy
and Data Protection Issues of Biometric Applications, pp. 275-401. Springer Netherlands, 2013.
[19] Xu, Chaoying, Ronghui Zhou, Wenwei He, Lan Wu, Peng Wu, and Xiandeng Hou. "Fast imaging of
eccrine latent fingerprints with nontoxic Mn-doped ZnS QDs." Analytical chemistry 86, no. 7 (2014):
3279-3283.
[20] Harold Cummins and Rebecca Wright Kennedy, "Purkinje's Observations (1823) on Finger Prints and
Other Skin Features." Journal of Criminal Law and Criminology. Vol. 31, no. 3, (1940).
[21] Schwenker, Friedhelm, and Edmondo Trentin. "Pattern classification and clustering: A review of
partially supervised learning approaches." Pattern Recognition Letters 37 (2014): 4-14.
[22] Naamha, Esraa Qasim. "Fingerprint Identification and Verification System Based on Extraction of
Unique ID." PhD diss., University Of Technology, 2017.
[23] Ogiela, Lidia, and Marek R. Ogiela. "Cognitive systems and bio-inspired computing in homeland
security." Journal of Network and Computer Applications 38 (2014): 34-42.
[24] De Alcaraz-Fossoul, Josep, Cristina Mestres Patris, Antoni Balaciart Muntaner, Carme Barrot Feixat,
and Manel Gené Badia. "Determination of latent fingerprint degradation patterns—a real fieldwork
study." International journal of legal medicine 127, no. 4 (2013): 857-870.
[25] Yuan, Baoxi, Fei Su, and Anni Cai. "Fingerprint retrieval approach based on novel minutiae triplet
features." In Biometrics: Theory, Applications and Systems (BTAS), 2012 IEEE Fifth International
Conference on, pp. 170-175. IEEE, 2012.
[26] Peralta, Daniel, Isaac Triguero, Salvador García, Yvan Saeys, Jose M. Benitez, and Francisco Herrera.
"On the use of convolutional neural networks for robust classification of multiple fingerprint
captures." International Journal of Intelligent Systems 33, no. 1 (2018): 213-230.
[27] Zhou, Wei, Jiankun Hu, Song Wang, Ian Petersen, and Mohammed Bennamoun. "Partial fingerprint
indexing: a combination of local and reconstructed global features." Concurrency and Computation:
Practice and Experience 28, no. 10 (2016): 2940-2957.
[28] Leelambika, K. V., and A. Rohini. "Bayes Classification for the Fingerprint Retrieval." International
Journal of Advanced Research in Computer Science 4, no. 2 (2013).
[29] Gago-Alonso, Andrés, José Hernández-Palancar, Ernesto Rodríguez-Reina, and Alfredo
Muñoz-Briseño. "Indexing and retrieving in fingerprint databases under structural
distortions." Expert Systems with Applications 40, no. 8 (2013): 2858-2871.
[30] Talele, Miss Harsha V., Pratvina V. Talele, and Saranga N. Bhutada. "Study of local binary pattern for
partial fingerprint identification." International Journal of Modern Engineering Research 4, no. 9
(2014): 55-61.
[31] Wang, Song, and Jiankun Hu. "Design of alignment-free cancelable fingerprint templates via curtailed
circular convolution." Pattern Recognition 47, no. 3 (2014): 1321-1329.
[32] Hasan, Haitham, and S. Abdul-Kareem. "Fingerprint image enhancement and recognition algorithms:
a survey." Neural Computing and Applications 23, no. 6 (2013): 1605-1610.
[33] Porikli, Fatih, François Brémond, Shiloh L. Dockstader, James Ferryman, Anthony Hoogs, Brian C.
Lovell, Sharath Pankanti, Bernhard Rinner, Peter Tu, and Péter L. Venetianer. "Video surveillance:
past, present, and now the future [DSP Forum]." IEEE Signal Processing Magazine 30, no. 3 (2013):
190-198.
[34] Tiwari, Shradha, N. I. Chourasia, and Vijay S. Chourasia. "A review of advancements in biometric
systems." International Journal of Innovative Research in Advanced Engineering 2, no. 1 (2015): 187-
204.
[35] Hagins, Zachary R. "Fashioning the 'Born Criminal' on the Beat: Juridical Photography and the Police
municipale in Fin-de-Siècle Paris." Modern & Contemporary France 21, no. 3 (2013): 281-296.
[36] Maniroja, M., and Sudhir Sawarkar. "Biometric Database Protection using Public Key Cryptography."
IJCSNS 13, no. 5 (2013): 20.
[37] Fons, Francisco, Mariano Fons, Enrique Cantó, and Mariano López. "Real-time embedded systems
powered by FPGA dynamic partial self-reconfiguration: a case study oriented to biometric recognition
applications." Journal of real-time image processing 8, no. 3 (2013): 229-251.
[38] Rao, Chinta Someswara, K. V. S. Murthy, R. Shiva Shankar, and V. Mnssvkr Gupta. "A Novel Secure
Personal Authentication System with Finger in Face Watermarking Mechanism." In Advances in Soft
Computing and Machine Learning in Image Processing, pp. 319-353. Springer, Cham, 2018.
[39] Kirch, Patrick Vinton. On the road of the winds: an archaeological history of the Pacific Islands before
European contact. Univ of California Press, 2017.
[40] Dwivedi, Sachin, Shrey Sharma, Vishwa Mohan, and Khatana Arfan Aziz. "Fingerprint, retina and
facial recognition based multimodal systems." International Journal of Engineering and Computer
Science 2, no. 05 (2013).
[41] Waits, Mira Rai. "The indexical trace: a visual interpretation of the history of fingerprinting in
colonial India." Visual Culture in Britain 17, no. 1 (2016): 18-46.
[42] Kandgaonkar, Tejas V., Rajivkumar S. Mente, Ashok R. Shinde, and Shriram D. Raut. "Ear
Biometrics: A Survey on Ear Image Databases and Techniques for Ear Detection and Recognition."
IBMRD's Journal of Management & Research 4, no. 1 (2015): 88-103.
[43] Faulds, Henry, and William J. Herschel. Dactylography and The Origin of Finger-Printing. Cambridge
University Press, 2015.
[44] Liu, Feng, David Zhang, Changjiang Song, and Guangming Lu. "Touchless multiview fingerprint
acquisition and mosaicking." IEEE Transactions on Instrumentation and Measurement 62, no. 9
(2013): 2492-2502.
[45] Bartunek, Josef Ström, Mikael Nilsson, Benny Sallberg, and Ingvar Claesson. "Adaptive fingerprint
image enhancement with emphasis on preprocessing of data." IEEE transactions on image processing
22, no. 2 (2013): 644-656.
[46] Galbally, Javier, Sébastien Marcel, and Julian Fierrez. "Image quality assessment for fake biometric
detection: Application to iris, fingerprint, and face recognition." IEEE transactions on image
processing 23, no. 2 (2014): 710-724.
[47] Bakhtiari, Somayeh. "A Novel Method for Orientation Estimation in Highly Corrupted Fingerprint
Images." International Journal of Intelligent Computing in Medical Sciences & Image Processing 6,
no. 1 (2014): 45-63.
[48] He, Shuyou, and Xiaopeng Hu. "Chinese character recognition in natural scenes." In Computational
Intelligence and Design (ISCID), 2016 9th International Symposium on, vol. 2, pp. 124-127. IEEE,
2016.
[49] Püspöki, Zsuzsanna, Martin Storath, Daniel Sage, and Michael Unser. "Transforms and operators for
directional bioimage analysis: a survey." In Focus on Bio-Image Informatics, pp. 69-93. Springer,
Cham, 2016.
[50] Galar, Mikel, Joaquín Derrac, Daniel Peralta, Isaac Triguero, Daniel Paternain, Carlos Lopez-Molina,
Salvador García et al. "A survey of fingerprint classification Part I: Taxonomies on feature extraction
methods and learning models." Knowledge-based systems 81 (2015): 76-97.
[51] Patel, Hardikkumar V., and Kalpesh Jadav. "Fingerprint Matching Using Minutiae Feature."
International Journal of Engineering and Innovative Technology (IJEIT) 2, no. 11 (2013): 2.
[52] Saparudin, Saparudin, and Ghazali Sulong. "Segmentation of Fingerprint Image Based on Gradient
Magnitude and Coherence." International Journal of Electrical and Computer Engineering 5, no. 5
(2015).
[53] Thai, Duy Hoang, Stephan Huckemann, and Carsten Gottschlich. "Filter design and performance
evaluation for fingerprint image segmentation." PloS one 11, no. 5 (2016): e0154160.
[54] Casti, Paola, Arianna Mencattini, Marcello Salmeri, Antonietta Ancona, F. Mangeri, Maria Luisa
Pepe, and Rangaraj M. Rangayyan. "Estimation of the breast skin-line in mammograms using
multidirectional Gabor filters." Computers in biology and medicine 43, no. 11 (2013): 1870-1881.
[55] Iwasokun, Gabriel Babatunde, and Sunday Olusegun Ojo. "Review and Evaluation of Fingerprint
Singular Point Detection Algorithms." British Journal of Applied Science & Technology 4, no. 35
(2014): 4918.
[56] Iwasokun, Gabriel Babatunde, and Oluwole Charles Akinyokun. "Fingerprint singular point detection
based on modified poincare index method." Int. J. Signal Process. Image Process. Pattern Recogn 7
(2014): 259-272.
[57] Su, Yijing, Jianjiang Feng, and Jie Zhou. "Fingerprint indexing with pose constraint." Pattern
Recognition 54 (2016): 1-13.
[58] Cao, Kai, Liaojun Pang, Jimin Liang, and Jie Tian. "Fingerprint classification by a hierarchical
classifier." Pattern Recognition 46, no. 12 (2013): 3186-3197.
[59] Zhao, Tieyu, Qiwen Ran, Lin Yuan, Yingying Chi, and Jing Ma. "Image encryption using fingerprint
as key based on phase retrieval algorithm and public key cryptography." Optics and Lasers in
Engineering 72 (2015): 12-17.
[60] Rajput, Sudheesh K., and Naveen K. Nishchal. "Fresnel domain nonlinear optical image encryption
scheme based on Gerchberg–Saxton phase-retrieval algorithm." Applied optics 53, no. 3 (2014): 418-
425.
[61] Sui, Liansheng, Xiao Zhang, and Ailing Tian. "Optical multiple-image authentication scheme based
on the phase retrieval algorithm in gyrator domain." Journal of Optics 19, no. 5 (2017): 055702.
[62] Bhattacharya, Samayita, and Kalyani Mali. "Fingerprint Recognition by Classification Using Neural
Network and Matching Using Minutia (Fingerprint Recognition by NNMM Method)." International
Journal of Emerging Research in Management and Technology 3, no. 8 (2014): 1-10.
[63] Janan, Faraz, and Michael Brady. "Shape description and matching using integral invariants on
eccentricity transformed images." International Journal of Computer Vision 113, no. 2 (2015): 92-112.
[64] Jayaraman, Umarani, Aman Kishore Gupta, and Phalguni Gupta. "An efficient minutiae based
geometric hashing for fingerprint database." Neurocomputing 137 (2014): 115-126.
[65] Ala, Balti, Mounir Sayadi, and Farhat Fnaiech. "Fingerprint verification based on back propagation
neural network." Journal of Control Engineering and Applied Informatics 15, no. 3 (2013): 53-60.
[66] Peralta, Daniel, Mikel Galar, Isaac Triguero, Daniel Paternain, Salvador García, Edurne Barrenechea,
José M. Benítez, Humberto Bustince, and Francisco Herrera. "A survey on fingerprint minutiae-based
local matching for verification and identification: Taxonomy and experimental evaluation."
Information Sciences 315 (2015): 67-87.
[67] Do, Thanh-Nghi, Philippe Lenca, and Stéphane Lallich. "Classifying many-class high-dimensional
fingerprint datasets using random forest of oblique decision trees." Vietnam journal of computer
science 2, no. 1 (2015): 3-12.
[68] Murillo-Escobar, M.A., Cruz-Hernández, C., Abundiz-Pérez, F. and López-Gutiérrez, R.M., A robust
embedded biometric authentication system based on fingerprint and chaotic encryption. Expert
Systems with Applications, 42(21), pp.8198-8211, (2015).
[69] Khan, Asif Iqbal, and Mohd Arif Wani. "Strategy to extract reliable minutia points for fingerprint
recognition." In Advance Computing Conference (IACC), 2014 IEEE International, pp. 1071-1075.
IEEE, 2014.
[70] Paulino, Alessandra A., Jianjiang Feng, and Anil K. Jain. "Latent fingerprint matching using
descriptor-based hough transform." IEEE Transactions on Information Forensics and Security 8, no. 1
(2013): 31-45.
[71] Lomte, Archana C., and S. B. Nikam. "Biometric fingerprint authentication by minutiae extraction
using USB token system." International Journal of Computer Technology and Applications 4, no. 2
(2013): 187.
[72] Sutthiwichaiporn, Prawit, and Vutipong Areekul. "Adaptive boosted spectral filtering for progressive
fingerprint enhancement." Pattern Recognition 46, no. 9 (2013): 2465-2486.
[73] Marasco, Emanuela, Luca Lugini, Bojan Cukic, and Thirimachos Bourlai. "Minimizing the impact of
low interoperability between optical fingerprints sensors." In Biometrics: Theory, Applications and
Systems (BTAS), 2013 IEEE Sixth International Conference on, pp. 1-8. IEEE, 2013.
[74] Jung, Suk-hoon, Byung-chul Moon, and Dongsoo Han. "Unsupervised learning for crowdsourced
indoor localization in wireless networks." IEEE Transactions on Mobile Computing 15, no. 11 (2016):
2892-2906.
[75] Venkatesh, J., and VD Ambeth Kumar. "Advanced Filter and Minutiae Matching for Fingerprint
Recognition." International Journal of Science and Research 2, no. 1 (2013).
[76] Zhang, Hongyun, Witold Pedrycz, Duoqian Miao, and Zhihua Wei. "From principal curves to
granular principal curves." IEEE transactions on cybernetics 44, no. 6 (2014): 748-760.
[77] Galbally, Javier, Arun Ross, Marta Gomez-Barrero, Julian Fierrez, and Javier Ortega-Garcia. "Iris
image reconstruction from binary templates: An efficient probabilistic approach based on genetic
algorithms." Computer Vision and Image Understanding 117, no. 10 (2013): 1512-1525.
[78] Wong, Wei Jing, Andrew BJ Teoh, ML Dennis Wong, and Yau Hee Kho. "Enhanced multi-line code
for minutiae-based fingerprint template protection." Pattern Recognition Letters 34, no. 11 (2013):
1221-1229.
[79] Chatbri, Houssem, and Keisuke Kameyama. "Using scale space filtering to make thinning algorithms
robust against noise in sketch images." Pattern Recognition Letters 42 (2014): 1-10.
[80] Yang, Jucheng, Naixue Xiong, and Athanasios V. Vasilakos. "Two-stage enhancement scheme for
low-quality fingerprint images by learning from the images." IEEE transactions on human-machine
systems 43, no. 2 (2013): 235-248.
[81] Chhillar, R., Minutiae Based Fingerprint Recognition Using Fuzzy Logic-A Review. International
Journal of Global Research in Computer Science, 4(4), pp.139-142, 2013.
[82] Wang, Jing-Wein, Ngoc Tuyen Le, Chou-Chen Wang, and Jiann-Shu Lee. "Enhanced ridge structure
for improving fingerprint image quality based on a wavelet domain." IEEE Signal Processing Letters
22, no. 4 (2015): 390-394.
[83] Ahmed, Hany Hashem, Hamdy M. Kelash, Maha Tolba, and Mohamed Badwy. "Fingerprint image
enhancement based on threshold fast discrete curvelet transform (FDCT) and Gabor filters."
International Journal of Computer Applications 110, no. 3 (2015).
[84] Guo, Jing-Ming, Yun-Fu Liu, Jla-Yu Chang, and Jiann-Der Lee. "Fingerprint classification based on
decision tree from singular points and orientation field." Expert Systems with Applications 41, no. 2
(2014): 752-764.
[85] Patil, D. D., N. A. Nemade, and K. M. Attarde. "Iris recognition using fuzzy system." International
Journal of Computer Science and Mobile Computing 2, no. 2 (2013): 14-17.
[86] Zhou, Zhili, Yunlong Wang, QM Jonathan Wu, Ching-Nung Yang, and Xingming Sun. "Effective and
efficient global context verification for image copy detection." IEEE Transactions on Information
Forensics and Security 12, no. 1 (2017): 48-63.
[87] Chaudhari, Atul S., and Sandip S. Patil. "A study and review on fingerprint image enhancement and
minutiae extraction." the IOSR Journal of Computer Engineering 9, no. 6 (2013): 53.
[88] Peralta, Daniel, Mikel Galar, Isaac Triguero, Oscar Miguel-Hurtado, Jose M. Benitez, and Francisco
Herrera. "Minutiae filtering to improve both efficacy and efficiency of fingerprint matching
algorithms." Engineering Applications of Artificial Intelligence 32 (2014): 37-53.
[89] Marasco, Emanuela, and Arun Ross. "A survey on antispoofing schemes for fingerprint recognition
systems." ACM Computing Surveys (CSUR) 47, no. 2 (2015): 28.
[90] Arjona, Rosario, and Iluminada Baturone. "A hardware solution for real-time intelligent fingerprint
acquisition." Journal of real-time image processing 9, no. 1 (2014): 95-109.
[91] Khodadoust, Javad, and Ali Mohammad Khodadoust. "Fingerprint indexing based on expanded
Delaunay triangulation." Expert Systems with Applications 81 (2017): 251-267.
[92] Khodadoust, Javad, and Ali Mohammad Khodadoust. "Partial fingerprint identification for large
databases." Pattern Analysis and Applications (2017): 1-16.
[93] Liu, Feng, David Zhang, and Linlin Shen. "Study on novel curvature features for 3D fingerprint
recognition." Neurocomputing 168 (2015): 599-608.
[94] Karahan, Şamil, Adil Karaöz, Ömer Faruk Özdemir, Ahmet Gökhan Gü, and Umut Uludag. "On
identification from periocular region utilizing sift and surf." In Signal Processing Conference
(EUSIPCO), 2014 Proceedings of the 22nd European, pp. 1392-1396. IEEE, 2014.
[95] Abdulkader, Sarah N., Ayman Atia, and Mostafa-Sami M. Mostafa. "Authentication systems:
Principles and threats." Computer and Information Science 8, no. 3 (2015): 155.
[96] Awad, Ali Ismail. "Fingerprint local invariant feature extraction on GPU with CUDA." Informatica
37, no. 3 (2013): 279.
[97] Singh, Arshdeep, and Gaurav Mittal. "An Improved Scheme for Bad-Quality Fingerprint
Reconstruction and Matching." Imperial Journal of Interdisciplinary Research 3, no. 2 (2017).
[98] Bazen, A., S. Gerez, M. Poel, and M. Otterlo. "Reinforcement learning agent for minutiae extraction from
fingerprints." Dept. of Computer Science, TKI, University of Twente, Enschede, 2002.
[99] Rahimi, Ehsan, Shohreh, "An Adaptive Approach to Singular Point Detection in Fingerprint Images."
International Journal of Electronics and Communication, Elsevier, (2004): 367-370.
[100] Chen, Siheng, Fernando Cerda, Piervincenzo Rizzo, Jacobo Bielak, James H. Garrett, and Jelena
Kovačević. "Semi-supervised multiresolution classification using adaptive graph filtering with
application to indirect bridge structural health monitoring." IEEE Transactions on Signal Processing
62, no. 11 (2014): 2879-2893.
[101] Duvenaud, David K., Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán
Aspuru-Guzik, and Ryan P. Adams. "Convolutional networks on graphs for learning molecular
fingerprints." In Advances in neural information processing systems, (2015): 2224-2232.
[102] Zhong, Yunfei, and Xiaoqi Peng. "Multi-level fingerprint continuous classification for large-scale
fingerprint database using fractal analysis." International Journal of Biometrics 8, no. 2 (2016): 115-
133.
[103] Galar, Mikel, Joaquín Derrac, Daniel Peralta, Isaac Triguero, Daniel Paternain, Carlos Lopez-Molina,
Salvador García et al. "A survey of fingerprint classification part II: experimental analysis and
ensemble proposal." Knowledge-Based Systems 81 (2015): 98-116.
[104] Shah, S., and P. S. Sastry. "Fingerprint Classification Using a Feedback Based Line Detector." IEEE
Trans. on Systems, Man and Cybernetics, Part B, vol. 34, no. 1, 2004.
[105] Junior, Jarbas Joaci de Mesquita Sá, and André Ricardo Backes. "ELM based signature for texture
classification." Pattern Recognition 51 (2016): 395-401.
[106] Bolón-Canedo, Verónica, Noelia Sánchez-Marono, Amparo Alonso-Betanzos, José Manuel Benítez,
and Francisco Herrera. "A review of microarray datasets and applied feature selection methods."
Information Sciences 282 (2014): 111-135.
[107] Jasiewicz, Jarosław, and Tomasz F. Stepinski. "Geomorphons—a pattern recognition approach to
classification and mapping of landforms." Geomorphology 182 (2013): 147-156.
[108] Guo, Jing-Ming, Yun-Fu Liu, Jla-Yu Chang, and Jiann-Der Lee. "Fingerprint classification based on
decision tree from singular points and orientation field." Expert Systems with Applications 41, no. 2
(2014): 752-764.
[109] De Siqueira, Fernando Roberti, William Robson Schwartz, and Helio Pedrini. "Multi-scale gray level
co-occurrence matrices for texture description." Neurocomputing 120 (2013): 336-345.
[110] Chandy, D. Abraham, J. Stanly Johnson, and S. Easter Selvan. "Texture feature extraction using gray
level statistical matrix for content-based mammogram retrieval." Multimedia tools and applications
72, no. 2 (2014): 2011-2024.
[111] Li, Leida, Shushang Li, Hancheng Zhu, Shu-Chuan Chu, John F. Roddick, and Jeng-Shyang Pan. "An
efficient scheme for detecting copy-move forged images by local binary patterns." Journal of
Information Hiding and Multimedia Signal Processing 4, no. 1 (2013): 46-56.
[112] Zhu, Chao, Charles-Edmond Bichot, and Liming Chen. "Image region description using orthogonal
combination of local binary patterns enhanced with color information." Pattern Recognition 46, no. 7
(2013): 1949-1963.
[113] Tapia, Juan E., and Claudio A. Perez. "Gender classification based on fusion of different spatial scale
features selected by mutual information from histogram of LBP, intensity, and shape." IEEE
transactions on information forensics and security 8, no. 3 (2013): 488-499.
[114] Yuan, Chengsheng, Xingming Sun, and Rui Lv. "Fingerprint liveness detection based on multi-scale
LPQ and PCA." China Communications 13, no. 7 (2016): 60-65.
[115] Stöber, Tim, Mario Frank, Jens Schmitt, and Ivan Martinovic. "Who do you sync you are?:
smartphone fingerprinting via application behaviour." In Proceedings of the sixth ACM conference on
Security and privacy in wireless and mobile networks, pp. 7-12. ACM, 2013.
[116] Singh, Jagtar. "A Survey on Fingerprint Recognition Methods." Journal of Network Communications
and Emerging Technologies (JNCET) 7, no. 5 (2017).
[117] Feng, Jianjiang, Jie Zhou, and Anil K. Jain. "Orientation field estimation for latent fingerprint
enhancement." IEEE transactions on pattern analysis and machine intelligence 35, no. 4 (2013): 925-
940.
[118] Gottschlich, Carsten, Emanuela Marasco, Allen Y. Yang, and Bojan Cukic. "Fingerprint liveness
detection based on histograms of invariant gradients." In Biometrics (IJCB), 2014 IEEE International
Joint Conference on, pp. 1-7. IEEE, 2014.
[119] Das, Madirakshi, Alexander C. Loui, and Mark D. Wood. "Method for event-based semantic
classification." U.S. Patent 8,611,677, issued December 17, 2013.
[120] Gutierrez, Pablo David, Miguel Lastra, Francisco Herrera, and Jose Manuel Benitez. "A high
performance fingerprint matching system for large databases based on GPU." IEEE Transactions on
Information Forensics and Security 9, no. 1 (2014): 62-71.
[121] Viswanathan, Anand, and S. Chitra. "Optimized Radial Basis Function Classifier with Hybrid Bat
Algorithm for Multi Modal Biometrics." Journal of Theoretical & Applied Information Technology
67, no. 1 (2014).
[122] Do, Thanh-Nghi, Philippe Lenca, and Stéphane Lallich. "Classifying many-class high-dimensional
fingerprint datasets using random forest of oblique decision trees." Vietnam journal of computer
science 2, no. 1 (2015): 3-12.
[123] Patel, Mital S., N. M. Patel, and Mehfuza S. Holia. "Feature based multi-view image registration using
SURF." In Advanced Computing and Communication (ISACC), 2015 International Symposium on,
pp. 213-218. IEEE, 2015.
[124] Kazerouni, Masoud Fathi, Jens Schlemperz, and Klaus-Dieter Kuhnert. "Efficient Modern Description
Methods by Using SURF Algorithm for Recognition of Plant Species." Advances in Image and Video
Processing 3, no. 2 (2015): 10.
[125] Kulkarni, A. V., J. S. Jagtap, and V. K. Harpale. "Object recognition with ORB and its
Implementation on FPGA." International Journal of Advanced Computer Research 3, no. 3 (2013):
164.
[126] Sangeetha, S., M. Thanga Kavitha, and N. Manikandaprabu. "Enhanced Approximated SURF Model
For Object Recognition." International Journal of Engineering Research and Applications (IJERA) 4
(2014): 13-18.
[127] Hassaballah, M., Aly Amin Abdelmgeid, and Hammam A. Alshazly. "Image features detection,
description and matching." In Image Feature Detectors and Descriptors, pp. 11-45. Springer, Cham,
2016.
[128] Aswin, N. B. "Electrical system fault diagnosis using infrared thermography." International Journal of
Management, IT and Engineering 3, no. 8 (2013): 467.
[129] Canclini, Antonio, Matteo Cesana, Alessandro Redondi, Marco Tagliasacchi, João Ascenso, and R.
Cilla. "Evaluation of low-complexity visual feature detectors and descriptors." In Digital Signal
Processing (DSP), 2013 18th International Conference on, pp. 1-7. IEEE, 2013.
[130] Kotha, Dileep Kumar, and Santanu Rath. "Forensic Sketch Matching Using SURF." In Proceedings of
International Conference on Advances in Computing, pp. 527-537. Springer, New Delhi, 2013.
[131] Oliver, Neus, Miguel Cornelles Soriano, David W. Sukow, and Ingo Fischer. "Fast random bit
generation using a chaotic laser: approaching the information theoretic limit." IEEE Journal of
Quantum Electronics 49, no. 11 (2013): 910-918.
[132] Oyallon, Edouard, and Julien Rabin. "An analysis of the SURF method." Image Processing On Line 5
(2015): 176-218.
[133] Sharma, Sadgi, and Abdul Moiz Siddiqui. "Image retrieval using speeded up robust feature: An effort
to improvement." International Journal of Computer Science and Network Security (IJCSNS) 14, no.
11 (2014): 102.
[134] Mainali, Pradip, Gauthier Lafruit, Qiong Yang, Bert Geelen, Luc Van Gool, and Rudy Lauwereins.
"SIFER: scale-invariant feature detector with error resilience." International journal of computer
vision 104, no. 2 (2013): 172-197.
[135] Fan, Youchen, Huayan Sun, and Haoxiang Wang. "Application in casting defect lossless examination
based on SURF." In International Symposium on Photoelectronic Detection and Imaging 2013: Imaging
Sensors and Applications, vol. 8908, p. 89080J. International Society for Optics and Photonics, 2013.
[136] Tao, Lei, Xiaojun Jing, Songlin Sun, Hai Huang, Na Chen, and Yueming Lu. "Combining SURF with
MSER for image matching." In Granular Computing (GrC), 2013 IEEE International Conference on,
pp. 286-290. IEEE, 2013.
[137] Sahu, Megha, and Neeraj Shukla. "Rotation invariant approach of SURF for unimodal biometric
system." International Journal of Emerging Technology and Advanced Engineering (IJETAE) 3
(2013).
[138] Rajesh, G. Karpaga, Ravikant Kaushik, and Ritu Jangra. "A Comparative Analysis of Object
Recognition System Using SIFT, SURF and FAST Algorithms." IJITR 4, no. 3 (2016): 3025-3032.
[139] Yu, Jing, Zengchang Qin, Tao Wan, and Xi Zhang. "Feature integration analysis of bag-of-features
model for image retrieval." Neurocomputing 120 (2013): 355-364.
[140] Hu, Rui, and John Collomosse. "A performance evaluation of gradient field hog descriptor for sketch
based image retrieval." Computer Vision and Image Understanding 117, no. 7 (2013): 790-806.
[141] Gao, Yue, Rongrong Ji, Wei Liu, Qionghai Dai, and Gang Hua. "Weakly supervised visual dictionary
learning by harnessing image attributes." IEEE Transactions on Image Processing 23, no. 12 (2014):
5400-5411.
[142] Chen, Shizhi, and YingLi Tian. "Pyramid of spatial relatons for scene-level land use classification."
IEEE Transactions on Geoscience and Remote Sensing 53, no. 4 (2015): 1947-1957.
[143] Bolovinou, Anastasia, Ioannis Pratikakis, and S. Perantonis. "Bag of spatio-visual words for context
inference in scene classification." Pattern Recognition 46, no. 3 (2013): 1039-1053.
[144] Yang, Yi, and Shawn Newsam. "Geographic image retrieval using local invariant features." IEEE
Transactions on Geoscience and Remote Sensing 51, no. 2 (2013): 818-832.
[145] Penatti, Otávio AB, Fernanda B. Silva, Eduardo Valle, Valerie Gouet-Brunet, and Ricardo Da S.
Torres. "Visual word spatial arrangement for image retrieval and classification." Pattern Recognition
47, no. 2 (2014): 705-720.
[146] Xu, Yongchao, Pascal Monasse, Thierry Géraud, and Laurent Najman. "Tree-based morse regions: A
topological approach to local feature detection." IEEE Transactions on Image Processing 23, no. 12
(2014): 5612-5625.
[147] Ravindran, U., and T. Shakila. "Content based image retrieval for histology image collection using
visual pattern mining." International Journal of Scientific & Engineering Research 4, no. 4 (2013).
[148] Ling, Hefei, Lingyu Yan, Fuhao Zou, Cong Liu, and Hui Feng. "Fast image copy detection approach
based on local fingerprint defined visual words." Signal Processing 93, no. 8 (2013): 2328-2338.
[149] Han, Junwei, Xiang Ji, Xintao Hu, Dajiang Zhu, Kaiming Li, Xi Jiang, Guangbin Cui, Lei Guo, and
Tianming Liu. "Representing and retrieving video shots in human-centric brain imaging space." IEEE
Transactions on Image Processing 22, no. 7 (2013): 2723-2736.
[150] Gu, Xingjian, Chuancai Liu, Sheng Wang, Cairong Zhao, and Songsong Wu. "Uncorrelated slow
feature discriminant analysis using globality preserving projections for feature extraction."
Neurocomputing 168 (2015): 488-499.
[151] Zheng, Feng, Zhan Song, Ling Shao, Ronald Chung, Kui Jia, and Xinyu Wu. "A semi-supervised
approach for dimensionality reduction with distributional similarity." Neurocomputing 103 (2013):
210-221.
[152] Iancu, Ion, and Nicolae Constantinescu. "Intuitionistic fuzzy system for fingerprints authentication."
Applied Soft Computing 13, no. 4 (2013)
[153] Abdullahi, Sani M., and Hongxia Wang. "Robust enhancement and centroid-based concealment of
fingerprint biometric data into audio signals." Multimedia Tools and Applications (2017): 1-30.
[154] Khan, Shaziya, and Shamaila Khan. "An Efficient Content based Image Retrieval: CBIR."
International Journal of Computer Applications 152, no. 6 (2016).
[155] Guo, Yulan, Mohammed Bennamoun, Ferdous Sohel, Min Lu, Jianwei Wan, and Ngai Ming Kwok.
"A comprehensive performance evaluation of 3D local feature descriptors." International Journal of
Computer Vision 116, no. 1 (2016): 66-89.
[156] Zhou, Wengang, Houqiang Li, Yijuan Lu, and Qi Tian. "SIFT match verification by geometric coding
for large-scale partial-duplicate web image search." ACM Transactions on Multimedia Computing,
Communications, and Applications (TOMM) 9, no. 1 (2013): 4.
[157] Panchal, P. M., S. R. Panchal, and S. K. Shah. "A comparison of SIFT and SURF." International
Journal of Innovative Research in Computer and Communication Engineering 1, no. 2 (2013): 323-
327.
[158] Chiu, Liang-Chi, Tian-Sheuan Chang, Jiun-Yen Chen, and Nelson Yen-Chung Chang. "Fast SIFT
design for real-time visual feature extraction." IEEE Transactions on Image Processing 22, no. 8
(2013): 3158-3167.
[159] Goh, K. M., Syed A. R. Abu-Bakar, M. M. Mokji, and U. U. Sheikh. "Implementation and application
of orientation correction on SIFT and SURF matching." International Journal of Advances in Soft
Computing and Its Applications 5, no. 3 (2013).
[160] Montazer, Gholam Ali, and Davar Giveki. "Content based image retrieval system using clustered
scale invariant feature transforms." Optik-International Journal for Light and Electron Optics 126, no.
18 (2015): 1695-1699.
[161] Dharani, T., and I. Laurence Aroquiaraj. "A survey on content based image retrieval." In Pattern
Recognition, Informatics and Mobile Engineering (PRIME), 2013 International Conference on, pp.
485-490. IEEE, 2013.
[162] Yu, Jing, Zengchang Qin, Tao Wan, and Xi Zhang. "Feature integration analysis of bag-of-features
model for image retrieval." Neurocomputing 120 (2013): 355-364.
[163] Mukherjee, Satabdi, Samuel Cohen, and Izidor Gertner. "Content-based vessel image retrieval." In
Automatic Target Recognition XXVI, vol. 9844, p. 984412. International Society for Optics and
Photonics, 2016.
[164] Alzu'bi, Ahmad, Abbes Amira, and Naeem Ramzan. "Semantic content-based image retrieval: A
comprehensive study." Journal of Visual Communication and Image Representation 32 (2015): 20-54.
[165] Xie, Hongtao, Yongdong Zhang, Jianlong Tan, Li Guo, and Jintao Li. "Contextual query expansion
for image retrieval." IEEE Transactions on Multimedia 16, no. 4 (2014): 1104-1114.
[166] Lin, Xufeng, and Chang-Tsun Li. "Large-scale image clustering based on camera fingerprints." IEEE
Transactions on Information Forensics and Security 12, no. 4 (2017): 793-808.
[167] You, Quanzeng, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. "Image captioning with semantic
attention." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.
4651-4659. 2016.
[168] Cao, Kai, and Anil K. Jain. "Automated Latent Fingerprint Recognition." arXiv preprint, Computer Vision
and Pattern Recognition, Cornell University Library, 2017.
[169] "Fingerprint Image and Minutiae Data Standard for e-Governance Applications in India", Government
of India, November 2010.
[170] Abbood, Alaa Ahmed, Ghazali Sulong, and Sabine U. Peters. "A Review of Fingerprint Image Pre-
processing." Jurnal Teknologi 69, no. 2 (2014): 79-84.
[171] Abbood, Alaa Ahmed, and Ghazali Sulong. "Fingerprint Classification Techniques: A Review."
International Journal of Computer Science Issues 11, no. 1 (2014): 111-122.
[172] Galar, Mikel, Joaquín Derrac, Daniel Peralta, Isaac Triguero, Daniel Paternain, Carlos Lopez-Molina,
Salvador García, José M. Benítez, Miguel Pagola, Edurne Barrenechea, Humberto Bustince, and
Francisco Herrera. "A survey of fingerprint classification Part I: Taxonomies on feature extraction
methods and learning models." Knowledge-Based Systems 81 (2015): 76-97.
[173] Dhriti, and Manvjeet Kaur. "K-Nearest Neighbor Classification Approach for Face and Fingerprint at
Feature Level Fusion." International Journal of Computer Applications 60, no. 14 (2012), ISSN
0975-8887.
[174] Vaidehi, V., S. Vasuhi, R. Kayalvizhi, K. Mariammal, M. B. Raghuraman, V. Sundara Raman,
L. Meenakshi, V. Anupriyadharshini, and T. Thangamani. "Person Authentication Using Face Detection."
In Proceedings of the World Congress on Engineering and Computer Science, October 22-24, 2008,
San Francisco, USA.
[175] Ojala, Timo, Matti Pietikainen, and David Harwood. "A Comparative Study of Texture Measures with
Classification Based on Feature Distributions." Pattern Recognition 29, no. 1 (1996): 51-59.
[176] Haralick, R. M., K. Shanmugam, and I. Dinstein. "Textural Features for Image Classification." IEEE
Transactions on Systems, Man, and Cybernetics 3, no. 6 (1973): 610-621.
[177] Yazdi, Mehran, and Kazem Gheysari. "A New Approach for the Fingerprint Classification Based on Gray-
Level Co-Occurrence Matrix." International Journal of Computer and Information Engineering 2, no.
11 (2008): 3689-3692.
[178] Rahim, Md. Abdur, Md. Najmul Hossain, Tanzillah Wahid, and Md. Shafiul Azam. "Face Recognition
using Local Binary Patterns (LBP)." Global Journal of Computer Science and Technology 13, no. 4
(2013).
[179] Wankhede, Pranita, and Dharmpal Doye. "Support Vector Machines for Fingerprint Classification." In
Proceedings of National Conference on Communications (2005): 356-360.
[180] Kadhim, Wasan. "A Novel Modification of SURF for Fingerprint Matching." Journal of Theoretical
and Applied Information Technology 96, no. 6 (2018): 1570-1581.
[181] Henry, E. R. Classification and Uses of Fingerprints. London: Routledge, 1900.
[182] "FBI Compression Standard for Digitized Fingerprint Images", https://www.spiedigitallibrary.org/conference-proceedings-of-spie/2847/1/FBI-compression-standard-for-digitized-fingerprint-images/10.1117/12.258243.short
[183] Biometrics Design Standards for UID Applications, Unique Identification Authority of India, 2009.
[184] Role of Biometric Technology in Aadhaar Enrollment, Unique Identification Authority of India, 2012.
[185] Simonyan, Karen, and Andrew Zisserman. "Very Deep Convolutional Networks for Large-Scale Image
Recognition." ICLR, 2015.
[186] Zeiler, Matthew D., and Rob Fergus. "Visualizing and Understanding Convolutional Networks." In
Computer Vision ECCV 2014, Springer International Publishing (2014): 818-833.
Fawcett, Tom. "An introduction to ROC analysis." Pattern Recognition Letters 27, no. 8 (2006): 861-874.
[187] Abbood, Alaa Ahmed, Ghazali Sulong, and Sabine U. Peters. "A Review of Fingerprint Image Pre-
processing." Jurnal Teknologi 69, no. 2 (2014): 79-84.
[188] Bernard, Sylvain, Nozha Boujemaa, David Vitale, and Claude Rives. "Fingerprint Classification using
Kohonen Topologic Map." In Proceedings International Conference on Image Processing, Thessaloniki,
Greece, 2001.
[189] Park, Cheong Hee, and Haesun Park. "Fingerprint Classification using Fast Fourier Transform and
Nonlinear Discriminant Analysis." Pattern Recognition 38, no. 4 (2005): 495-503.
[190] Kawagoe, M., and A. Tojo. "Fingerprint Pattern Classification." Pattern Recognition 17, no. 3 (1984):
295-303.
[191] Chong, M. M. S., T. H. Ngee, L. Jun, and R. K. L. Gay. "Geometric Framework for Fingerprint
Classification." Pattern Recognition 30, no. 9 (1997): 1475-1488.
[192] Karu, Kalle, and Anil K. Jain. "Fingerprint Classification." Pattern Recognition 29, no. 3 (1996): 389-404.
[193] Galar, Mikel, Joaquín Derrac, Daniel Peralta, Isaac Triguero, Daniel Paternain, Carlos Lopez-Molina,
Salvador García, José M. Benítez, Miguel Pagola, Edurne Barrenechea, Humberto Bustince, and
Francisco Herrera. "A survey of fingerprint classification Part II: Experimental Analysis and Ensemble
Proposal." Knowledge-Based Systems 81 (2015): 98-122.
[194] Cappelli, Raffaele, Alessandra Lumini, Dario Maio, and Davide Maltoni. "Fingerprint Classification
by Directional Image Partitioning." IEEE Transactions on Pattern Analysis and Machine Intelligence
21, no. 5 (1999): 402-421.
[195] Nyongesa, H., S. Al-khayatt, S. Mohamed, and M. Mahmoud. "Fast robust fingerprint feature extraction
and classification." Journal of Intelligent and Robotic Systems 40, no. 1 (2004): 103-112.
[196] Platt, John C. "Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector
Machines." Technical Report MSR-TR-98-14, Microsoft Research, 1998.
[197] Guo, Guodong, and Stan Z. Li. "Content-Based Audio Classification and Retrieval by Support Vector
Machines." IEEE Transactions on Neural Networks 14, no. 1 (2003): 209-215.
[198] Chang, C.-C., and C.-J. Lin. LIBSVM: A Library for Support Vector Machines, 2001.
[199] Hsu, C.-W., C.-C. Chang, and C.-J. Lin. A Practical Guide to Support Vector Classification. Technical
Report, Taipei, 2003.
[200] Bay, Herbert, Andreas Ess, Tinne Tuytelaars, and Luc J. Van Gool. "SURF: Speeded Up Robust
Features." Computer Vision and Image Understanding (CVIU) 110, no. 3 (2008): 346-359.
[201] Kumar, A. "Image retrieval using SURF features." Master's Thesis, Thapar University, 2011.
[202] Srinivasan, Sadagopan, Zhen Fang, Ravi Iyer, Steven Zhang, Mike Espig, Don Newell, Daniel Cermak,
Yi Wu, Igor Kozintsev, and Horst Haussecker. "Performance Characterization and Optimization of
Mobile Augmented Reality on Handheld Platforms." IEEE International Symposium on Workload
Characterization, 2009.
[203] Evans, Christopher. "Notes on the OpenSURF Library." Technical Report CSTR-09-001, University of
Bristol, January 2009. URL http://www.cs.bris.ac.uk/Publications/Papers/2000970.pdf.
Nowak, E., F. Jurie, and B. Triggs. "Sampling strategies for bag-of-features image classification." In
Proceedings, ECCV 2006, Springer (2006): 490-503.
[204] Liu, Jialu. "Image retrieval based on bag-of-words model." Computing Research Repository (CoRR),
2013.
[205] Jiang, Y. G., C. W. Ngo, and J. Yang. "Towards optimal bag-of-features for object categorization and
semantic video retrieval." In Proceedings of ACM International Conference on Image and Video
Retrieval, Amsterdam, The Netherlands (2007): 494-501.
[206] Manju, and Kavitha. "Survey on Fingerprint Enhancement Techniques." International Journal of
Computer Applications 62, no. 4 (2013): 41-47.
[207] Roth, P. M., and M. Winter. "Survey of Appearance-based Methods for Object Recognition." Technical
Report, Institute for Computer Graphics and Vision, Graz University of Technology, Austria, 2008.
[208] Sivic, J., and A. Zisserman. "Video Google: Efficient Visual Search of Videos." In Toward Category-
Level Object Recognition, vol. 4170 of LNCS, Springer (2006): 127-144.
[209] Ling, H. F., L. Y. Yan, F. H. Zou, C. Liu, and H. Feng. "Fast image copy detection approach based on
local fingerprint defined visual words." Signal Processing 93 (2013): 2328-2338.
[210] Olgun, Murat, Ahmet Okan Onarcan, Kemal Özkan, Sahin Isik, Okan Sezer, Kurtulus Özgis,
Nazife Gözde Ayter, Zekiye Budak Basçiftçi, Murat Ardiç, and Onur Koyuncu. "Wheat grain
classification by using dense SIFT features with SVM classifier." Computers and Electronics in
Agriculture 122 (2016): 185-190.
[212] Kabbai, L., A. Azaza, M. Abdellaoui, and A. Douik. "Image matching based on LBP and SIFT
descriptor." In 12th International Multi-Conference on Systems, Signals and Devices, 2015.
[213] Jiang, X., M. Liu, and A. C. Kot. "Fingerprint retrieval for identification." IEEE Transactions on
Information Forensics and Security 1, no. 4 (2006): 532-542.
[214] NIST-4 Dataset: https://www.nist.gov/itl/iad/image-group/resources/biometric-special-databases-and-software
[215] FVC Dataset: http://bias.csr.unibo.it/fvc2004/databases.asp
List of Publications
[1] Sudhir Vegad, Dr. Tanmay Pawar. "Fingerprint Image Retrieval: A Novel Approach."
Springer International Conference on Intelligent Systems and Signal Processing,
ISSP-2017, March 2017.
[2] Sudhir Vegad, Dr. Tanmay Pawar. "Fingerprint Image Retrieval using Statistical
Methods." International Journal of Computer Science and Information Security, ISSN
1947-5500, Vol. 15 (11), pp. 246-249, November 2017.