

Feature-level Data Fusion for biometric mobile applications. A case study

S. Soviany1, C. Soviany2, and S. Puşcoci1

1 Communications Terminals and Telematics Department, I.N.S.C.C., Bucharest, Romania2 Feature Analytics, Nivelles, Belgium

Abstract - The paper presents a case study of biometric authentication with several traits, designed under the constraints of mobile applications (reduced complexity, storage and processing) while aiming for good performance. Feature-level data fusion is applied in order to exploit more of the information in the raw data during the combination process, with attention to dimensionality issues. For this reason, a functional fusion is applied instead of the more common concatenation-based fusion. The model design aims to efficiently manage over-fitting, the curse of dimensionality and performance peaking. The research is in line with current efforts to prove that multimodal biometrics can be a reliable design option for security solutions in mobile applications.

Keywords: multimodal, feature-level fusion, identification

1 Introduction
Mobile applications face performance and data-security issues despite the technological advances in mobile communications, hardware, software and algorithm design [1]. Advances in hardware and mobile communications improve the performance of mobile devices with respect to various storage and processing constraints. Software advances support innovative capabilities addressing many applications. Algorithm design benefits from advanced technologies in Artificial Intelligence, Data Mining and Machine Learning [1]. Smartphone usage for applications with sensitive data, such as banking or medical cases, still raises security problems that must be addressed through a suitable design of the security functions, including authentication. The performance must meet end-user expectations about privacy. The complexity should be adapted to real-time execution of the application processes. Biometric authentication has become a reliable option, especially for multi-factor solutions with different credentials. Using several biometrics enhances the accuracy and raises the confidence level of the end users [2]. Multimodal biometric system design uses combination rules in which the biometric data are fused at various processing levels, pre- or post-classification [3]. Pre-classification fusion combines the samples in an early processing stage, before matching (sensor-level and feature-level fusion). Post-classification fusion includes score-level, rank-level and decision-level fusion; it is commonly applied given its low complexity, but with the drawback of not exploiting the most informative properties of the extracted features [2]. In this paper, pre-classification fusion is approached through a case study in which several traits are combined using a feature-level fusion scheme with attention to dimensionality issues. The remainder of this paper is structured as follows. Section 2 presents recent developments in this area. Section 3 presents a use-case of feature-level fusion for mobile applications. Section 4 presents the experimental results. Section 5 concludes the research.

2 Related works
Feature-level data fusion is a challenge for multimodal biometrics for the following reasons [2],[3]: the curse of dimensionality; the relationships among the feature spaces that can be generated by various biometrics; the unavailability of the feature vector structure; the complexity of the feature extraction algorithms for different biometrics; and the incompatibility among the extracted features. Feature-level fusion can be applied in 2 ways [4]:

- the combination of different feature sets, which are then fused through concatenation or functional rules;
- the combination of the raw data samples into a single sample, from which a single feature extraction algorithm provides the required features.

The various issues of pre- and post-classification fusion are described in [3], which explains the advantages of data integration in an early processing stage. The major categories of biometric fusion techniques are presented in [4], with use-case examples. Feature fusion can be applied in unimodal and multimodal cases. A unimodal case is given in [5] for multispectral palmprint. The features are extracted using information sets with membership functions and a Gabor filter. Several feature sets are generated by dividing the image into non-overlapping windows and are then concatenated. In the multimodal biometric cryptosystem of [6], fingerprint, retina and vein features are fused using concatenation. Another cryptographic use-case is presented in [7]. The individual iris and fingerprint features are mixed using a random vector generation process, and the result is applied in the encryption process.

Int'l Conf. Security and Management | SAM'19 | 175

ISBN: 1-60132-509-6, CSREA Press ©


Feature-level fusion is approached with Discriminant Correlation Analysis in [8], either to fuse features extracted from different traits or to combine several feature sets from a single modality, in order to maximize the pairwise correlations across the 2 feature sets while maintaining a high class separation. The fusion is done by concatenation or summation of the transformed feature vectors. A comparative analysis of feature- and decision-level fusion in multimodal biometrics (face, ear and iris) is performed in [9]. The feature fusion uses a functional approach with AND, OR and a combined AND-OR rule between the binary representations of the feature sets. An example of feature-level fusion for a biometric authentication scheme is presented in [10]. An ensemble algorithm is applied to normalized feature point sets that are independently extracted from 2 modalities; the feature sets are concatenated after Min-Max normalization, and a feature selection is applied to reduce the dimensionality. Feature- and score-level data fusion for 4 biometrics (fingerprint, palmprint, iris and retina) is applied in [11]; the feature fusion is based on the Discrete Wavelet Transform. A medical use-case of feature-level data fusion for biometric identification on an Internet-of-Medical-Things platform is described in [12]. 3 biometric modalities are integrated for the recognition process (face, fingerprint, finger vein); the feature fusion is based on Fisher vector calculation and a Gaussian Mixture Model to unify the dimensions of the different features. An example of concatenation-based feature-level fusion for palmprint and face is presented in [13]. The fused feature vector is obtained by augmenting the normalized feature vectors, with an additional feature selection on the concatenated vector. Feature-level fusion is also applied in other use-cases belonging to the bio-medical and bio-engineering domains. An example is the self-attentive feature-level fusion for multimodal emotion detection in [14], with several feature sets (textual, audio). The fusion is performed with element-wise addition (the sum rule), element-wise multiplication (the product rule), the outer product method (the pairwise product of the features) and concatenation. A multimodal fusion is proposed based on a functional combination of the textual and audio feature sets, with a weighted addition process. A review of the most used data fusion rules for affective computing, with a focus on multimodal emotion recognition, is given in [15]. The features from various modalities (visual, text, audio) are fused to exploit their correlation during an early processing stage. Another example of feature-level fusion in bio-engineering and bio-medical cases is presented in [16]. The fusion is applied to activity monitoring data with real-time signals (surface electromyography). A new fused feature space is created using a projection-based method with a weighting genetic algorithm for GCCA (Global Canonical Correlation Analysis). We have approached feature-level fusion for several datasets and biometrics, moving from concatenation to functional combination fusion [1],[2],[17],[18]. The use-cases included intra- and inter-modal feature fusion rules integrated into hierarchical data fusion models [2],[17],[18], to find the best fusion rule [1], to improve the design by finding the optimal Training Set Size (TSS) vs. Feature Set Size (FSS) ratio [2], to compare the performance of intra-modal vs. inter-modal feature fusion [17] and to prove the benefits of feature-level fusion [18]. In a single case, leave-one-out cross-validation was applied [2]. The following case study contains new results from an ongoing research effort to improve the accuracy using only the functional feature-level fusion, avoiding concatenation-based fusion.

3 A case study: Feature-level fusion for mobile user authentication

The case study presents a biometric security model using functional feature-level data fusion, with design and optimization for mobile applications. The model can be applied in m-Health use-cases for remote access to medical data; depending on its behavior, the application can be extended to other use-cases with sensitive data.

3.1 The system architecture and modeling
The following security architecture for a mobile application (figure 1) is used for design and modeling.

The subject can use the mobile device to get remote access to medical data or to a bank account. The security model is very close to the previous works ([1], [2]); this is an ongoing research in which the models are tested on several datasets with various options for the data fusion. In this design, the credential generation is performed on the mobile device. The functional components follow a client-server model:

- the client, with the feature extraction and selection for fingerprint F and iris I. The Fingerprint and Iris Application Modules integrate the Pre-Processing function (FPP, IPP), which is responsible for the feature extraction without any optimization of the data space, and the Processing function (FP, IP),

Figure 1: The security model for mobile use-cases



which is responsible for the feature space transformations and the feature selection that improve the discriminant power. These application modules (F, I) generate the biometric credentials for the authentication, the feature vectors FVF and FVI;
- the server, with the mobile user authentication (M-AUTH.) and the following operations: the Data Fusion (combining the 2 biometric credentials), the Data Matching (the data classification) and the User Recognition (the decision for the application level).

The modeling steps are as follows (figure 2):

- the feature generation for both biometrics, providing the authentication credentials;
- the feature-level data fusion, providing a single representation of the generated features in a fused data space;
- the biometric data classification/matching.

3.2 The feature generation
The feature generation includes the following operations:

- the feature extraction from the images, using a regional textural approach with statistical features;
- the feature space transformation and feature selection, to ensure the best discriminant properties under a reduced dimensionality of the data space.
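The first of these operations works on manually defined rectangular regions of interest that are then fused left/right with a weighted average (detailed in section 3.2.1). A minimal NumPy sketch of that ROI handling follows; the function names (`extract_roi`, `fuse_rois`), the coordinates and the weights are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def extract_roi(img, x0, y0, dx, dy):
    """Crop the rectangular ROI given by its initial point (x0, y0) and offsets (dx, dy)."""
    return img[y0:y0 + dy, x0:x0 + dx]

def fuse_rois(roi_left, roi_right, w_left=0.5, w_right=0.5):
    """Weighted-average ROI-level fusion of the left and right instances of a trait."""
    return (w_left * roi_left + w_right * roi_right) / (w_left + w_right)

# toy gray-scale "images" standing in for the converted left/right captures
left = np.full((4, 4), 100.0)
right = np.full((4, 4), 200.0)
fused = fuse_rois(extract_roi(left, 0, 0, 2, 2), extract_roi(right, 0, 0, 2, 2))
print(fused)  # every entry is (0.5*100 + 0.5*200) / 1.0 = 150.0
```

The weighted average keeps the fused ROI at the size of a single ROI, which is what allows a single feature computation pass afterwards.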

3.2.1 Feature extraction: the FPP and IPP functions
A textural approach is applied to generate 1st and 2nd order statistical features for the texture characterization [19]. A color to gray-scale conversion of the input images is performed using the procedure given in [20] for the texture analysis. The process is depicted in figure 3. Several instances of the same trait are used, and only 2 regions of interest (ROIs) are selected within the images. The ROI definition is performed manually, despite the existing automatic selection modules, for experimental reasons: the goal is to work with a constrained fused data space according to the mobile application requirements. The procedure performs the following operations, in which the raw data from the left (L) and right (R) iris and from the left (L) and right (R) thumb fingerprint are used:

- the color-to-gray-scale conversion of the fingerprint and iris images;
- the manual definition of the region of interest (ROI), a rectangular area given by the initial point coordinates $(x_{B_{X,i}}, y_{B_{X,i}})$ and the pair of offsets $(\Delta x_{B_{X,i}}, \Delta y_{B_{X,i}})$, according to [1]:

$ROI_{B_{X,i}}: \; x \in [x_{B_{X,i}}, x_{B_{X,i}} + \Delta x_{B_{X,i}}], \; y \in [y_{B_{X,i}}, y_{B_{X,i}} + \Delta y_{B_{X,i}}]$  (1)

with $B \in \{F, I\}$, $X \in \{L, R\}$, $i \in \{1, 2\}$;
- the ROI selection with mask matrices that are applied on the original images [1]:

$MaskB_{X,i}[m,n] = \begin{cases} 1, & x_{B_{X,i}} \le m \le x_{B_{X,i}} + \Delta x_{B_{X,i}}, \; y_{B_{X,i}} \le n \le y_{B_{X,i}} + \Delta y_{B_{X,i}} \\ 0, & \text{otherwise} \end{cases}$  (2)

where $MaskB_{X,i}$ is the mask matrix that is applied to extract the desired ROI;
- the vertical concatenation of the mask matrices (ensuring the same number of columns). The resulting images are:

$Im\,B_X^{(1,2)} = (MaskB_{X,1} \oplus_v MaskB_{X,2}) \circ B, \quad B \in \{F, I\}, \; X \in \{L, R\}$  (3)

where $Im\,F_L^{(1,2)}$ and $Im\,F_R^{(1,2)}$ are the extracted ROIs from the images of the left and right thumb fingerprints, and $Im\,I_L^{(1,2)}$ and $Im\,I_R^{(1,2)}$ are the extracted ROIs from the images of the left and right iris;
- the ROI-level fusion, in which the left and right extracted ROIs are combined with a weighted-average rule, to simplify the feature computation:

$Im\,B = \dfrac{w_L \, Im\,B_L^{(1,2)} + w_R \, Im\,B_R^{(1,2)}}{w_L + w_R}, \quad B \in \{F, I\}$  (4)

where the weights $w_L$ and $w_R$ are assigned based on the available data for the left and right thumb fingerprint and iris;
- the computing of the co-occurrence matrices (CoM) and of the statistical features, in which the 1st and 2nd order textural statistical features are computed, using the

Figure 2: The modeling process steps

Figure 3: The feature extraction process



Co-occurrence Matrices (CoM) for the 2nd order case [19],[21],[22]. These features evaluate the gray-level distribution over the pixels belonging to the extracted ROI, based on their spatial relationships [19]. Given a gray level GL (with P(GL) the fraction of pixels having that gray level in the ROI), the following 1st order textural statistical features [19] are considered: the moment $m_1$ (the mean value of GL) and the 3 central moments $\mu_2 = \sigma^2$ (the variance), $\mu_3$ (the skewness) and $\mu_4$ (the kurtosis). The 1st order statistical feature vectors for the two biometrics (fingerprint FSFV1 and iris ISFV1, respectively) are:

$FSFV1 = [m_{1,GL}^{Im_F}, \mu_{2,GL}^{Im_F}, \mu_{3,GL}^{Im_F}, \mu_{4,GL}^{Im_F}]^T$  (5)

$ISFV1 = [m_{1,GL}^{Im_I}, \mu_{2,GL}^{Im_I}, \mu_{3,GL}^{Im_I}, \mu_{4,GL}^{Im_I}]^T$  (6)

Their components are the 1st order statistical properties of the gray-level distribution (the 1st order histogram P(GL) [19]) within the ROI ($Im_F$, $Im_I$) for both biometrics, $B \in \{F, I\}$: $m_{1,GL}^{Im_B}$ is the mean value of GL; $\mu_{2,GL}^{Im_B}$ is the variance of GL; $\mu_{3,GL}^{Im_B}$ is the skewness; $\mu_{4,GL}^{Im_B}$ is the kurtosis of the 1st order histogram of the gray-level distribution. The 2nd order statistical features are based on the co-occurrence matrices (CoM) and provide spatial information about the positions of the various gray levels within the image [19]. The normalized entries estimate the probability that a pixel has a certain gray level GL1 while a pixel with a certain displacement (distance, orientation) has another gray level GL2 [21],[22]:

$CoM_N^B(GL1, GL2) = \dfrac{\#\{Im_B(x, y) = GL1, \; Im_B(x + \Delta x_B, y + \Delta y_B) = GL2\}}{N\_pixels\_pairs}$  (7)

where the horizontal and vertical displacements of the pixels, $\Delta x_B$ and $\Delta y_B$, are given [1]. The number of pixel pairs, $N\_pixels\_pairs$, depends on the distance and orientation of the pixels to be considered; $\#\{cond\}$ counts how many times the specified condition is met for the given pair of pixels. The CoM shows the way in which the pixel intensities are distributed according to the spatial information about the neighboring pixels [22]. The following parameters allow adjusting the feature space size (FSS), generating many non-null informative elements and textural features [1],[2]:

- GLB (the number of Gray-Level Bins), which can be adjusted to ensure many significant matrix elements with non-null values. The following assignment is fixed: $GLB_F = 7$ and $GLB_I = 9$;
- OFFS, the displacement distance [1], the number of pixels between the pixel pairs used to compute the CoM elements. This amount should not exceed a certain value, otherwise the spacing between the pixel pairs increases too much, reducing the total number of pairs and providing only very poor discriminant information [1],[2]. The assignment is $OFFS_B = 2$, $B \in \{F, I\}$.

The resulting dimensionality is:

$FSS_{CoM_B} = GLB_B^2, \quad B \in \{F, I\}$  (8)

with $FSS_{FSFV1} = FSS_{ISFV1} = 4$. The following 2nd order statistical features can be derived from the CoM elements, which represent the occurrence probabilities of gray levels with respect to the relative spatial positions of pixels: the angular second moment (ASM), the contrast (CON), the inverse difference moment (IDM) and the entropy (H) [19]. These amounts are derived for each modality (F, I) based on the selected ROI images ($Im_F$, $Im_I$). The corresponding 2nd order statistical feature vectors are:

$FSFV2 = [ASM_{GL1,GL2}^{Im_F}, CON_{GL1,GL2}^{Im_F}, IDM_{GL1,GL2}^{Im_F}, H_{GL1,GL2}^{Im_F}]^T$  (9)

$ISFV2 = [ASM_{GL1,GL2}^{Im_I}, CON_{GL1,GL2}^{Im_I}, IDM_{GL1,GL2}^{Im_I}, H_{GL1,GL2}^{Im_I}]^T$  (10)

with the dimensionalities $FSS_{FSFV2} = FSS_{ISFV2} = 4$. The feature vectors that combine the 1st and 2nd order statistical features and the CoM are:

$FV_{F,0} = FSFV1 \oplus FSFV2 \oplus V_{CoMF}$  (11)

$FV_{I,0} = ISFV1 \oplus ISFV2 \oplus V_{CoMI}$  (12)

in which $V_{CoMF}$ and $V_{CoMI}$ are the vector representations of the co-occurrence matrix elements, $V_{CoMB} = P(GL1, GL2)$, and $\oplus$ is the concatenation operator. The CoM has a $GLB \times GLB$ representation. The data space dimensionality is given by:

$FSS_{FV_{F,0}} = FSS_{FSFV1} + FSS_{FSFV2} + FSS_{CoMF} = 8 + GLB_F^2$  (13)

$FSS_{FV_{I,0}} = FSS_{ISFV1} + FSS_{ISFV2} + FSS_{CoMI} = 8 + GLB_I^2$  (14)
3.2.2 Feature space transformation and feature selection for the dimensionality optimization: the FP and IP functions

The data space dimensionality should be considered in order to efficiently manage the curse of dimensionality, over-fitting and performance peaking [19]. An additional procedure for feature space transformation and feature selection is applied to reduce the dimensionality while maintaining the most discriminant features. The following operations and transforms are applied:

- PCA (Principal Component Analysis), which in this case retains a fraction of 99% of the total variance in order to keep the most informative dimensions. A supervised version of PCA is used, because the default unsupervised PCA does not always preserve the class separation in the transformed data sub-space. The supervised PCA is applied on the weighted covariance matrix CoVM [1],[2]:

$CoVM_B = p_{c1} \, CoVM_{B,c1} + p_{c2} \, CoVM_{B,c2}, \quad B \in \{F, I\}$  (15)

in which the class covariance matrices $CoVM_{B,c}$ are summed; the weights $p_{c1}$ and $p_{c2}$ are given by the class priors within the training dataset; c1 is the class that includes the biometric samples belonging



to the mobile device owner; c2 is the class that contains the samples belonging to any other person. The input vectors are $FV_{F,0}$ and $FV_{I,0}$; the transformed vectors are $FV_{F,1}$ and $FV_{I,1}$;
- LDA (Linear Discriminant Analysis), a linear transformation w that maximizes the class separation based on the ratio between the inter- and intra-class variances, $\sigma_{inter\text{-}class}^2$ and $\sigma_{intra\text{-}class}^2$, respectively (the Fisher Discriminant Ratio, FDR) [19]:

$FDR(w) = \dfrac{w^T S_{inter\text{-}class} \, w}{w^T S_{intra\text{-}class} \, w}$  (16)

This measure is evaluated using the inter- and intra-class scatter matrices ($S_{inter\text{-}class}$, $S_{intra\text{-}class}$). The operation is applied after PCA and generates the new vectors $FV_{F,2}$ and $FV_{I,2}$;
- Feature selection, which enhances the discriminant value of the features. The goal is to obtain the same number of features (feature space size $FSS_B$) for both biometrics, in order to make the functional data fusion feasible while avoiding concatenation. The relevant features are selected using a forward-searching non-exhaustive method in which the evaluation criterion is based on the 1-NN (Nearest Neighbor) rule, due to its property of bounding the classification error rate [23]. The resulting feature vectors are FVF and FVI, with the common dimensionality for both biometrics $FSS_B = size(FV_B) = 20$, $B \in \{F, I\}$, which is slightly higher than in the previous cases [1],[2] but still allows managing the curse of dimensionality with an optimal Training Set Size vs. Feature Set Size ratio setting.
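The supervised PCA step of eq. (15), projecting onto the eigenvectors of the prior-weighted sum of per-class covariance matrices while keeping 99% of the total variance, can be sketched as below. The function name and the synthetic data are illustrative assumptions:

```python
import numpy as np

def supervised_pca(X, y, var_kept=0.99):
    """PCA on the prior-weighted sum of per-class covariance matrices,
    keeping enough eigenvectors to retain `var_kept` of the total variance."""
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / len(y)                                   # class priors p_c
    covm = sum(p * np.cov(X[y == c].T) for c, p in zip(classes, priors))
    eigval, eigvec = np.linalg.eigh(covm)
    order = np.argsort(eigval)[::-1]                           # decreasing variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    k = np.searchsorted(np.cumsum(eigval) / eigval.sum(), var_kept) + 1
    return X @ eigvec[:, :k]                                   # projected vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))       # 60 samples, 8 raw features (toy data)
y = np.repeat([0, 1], 30)          # owner vs. non-owner classes
X1 = supervised_pca(X, y)
print(X1.shape[0])                 # 60 samples, reduced dimensionality
```

The LDA projection and the 1-NN forward feature selection would then be applied on the output, as the text describes.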

3.3 The feature-level Data Fusion
The Data Fusion combines the feature sets using a functional rule, avoiding concatenation, which is expensive in terms of dimensionality. The functional fusion is only feasible for homogeneous feature vectors. The homogeneity can be ensured by a common dimensionality when working with the same feature extraction algorithm, or when the feature sets are generated from the same trait (intra-modal fusion) [2]. An intra-modal fusion is already applied during the feature extraction: the mask matrix concatenation, the functional fusion of the extracted ROIs and the concatenation of the 1st and 2nd order statistical features. The inter-modal functional feature fusion can be applied because the homogeneity is provided for both biometrics. In the previous works, several fusion rules were applied [1],[2]. Another fusion rule is defined here based on the previous rules. The fusion rule ensures a low-complexity combination process and should not depend on the feature extraction algorithm:

$FV[k] = W_F \, FV_F[k] \, FV_I[FSS_B - k] + W_I \, FV_I[k] \, FV_F[FSS_B - k], \quad k = 0, \ldots, FSS_B - 1$  (17)

where FV is the fused feature vector that combines the components of the fingerprint ($FV_F$) and iris ($FV_I$) feature vectors. The weights $W_F$ and $W_I$ are assigned based on the experimental data.
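A sketch of this functional rule for two already-homogenized 20-component vectors follows. The weight values and the 0-based mirrored index $FSS_B - 1 - k$ are assumptions (the paper's indexing convention is not fully explicit), so this shows the shape of the rule rather than the exact published one:

```python
import numpy as np

def functional_fusion(fv_f, fv_i, w_f=0.6, w_i=0.4):
    """Functional feature-level fusion: each fused component pairs a component
    of one modality with the mirrored component of the other modality."""
    fss = len(fv_f)
    k = np.arange(fss)
    return w_f * fv_f[k] * fv_i[fss - 1 - k] + w_i * fv_i[k] * fv_f[fss - 1 - k]

fv_f = np.ones(20)          # fingerprint feature vector, FSS_B = 20
fv_i = 2 * np.ones(20)      # iris feature vector
fused = functional_fusion(fv_f, fv_i)
print(fused.shape)          # (20,) -- same dimensionality as each input
```

Unlike concatenation, the fused vector keeps the common dimensionality $FSS_B$, which is the point of the functional approach.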

3.4 The biometric data classification/matching

The application is a 2-class problem. The 1st class represents the target identity (the mobile device owner). The 2nd class includes the biometric samples belonging to all non-target identities. The classification is done using a kernel SVM (Support Vector Machine), due to its good performance and stability on medium- and high-complexity datasets [18]. The underlying discriminant function g is [1],[19]:

$g(x_{test}) = \mathrm{sgn}\left(\sum_{tr=1}^{TSS_B} \alpha_{tr} \, y_{tr} \, K(x_{tr}, x_{test}) + w_0\right)$  (18)

$TSS_B$ is the training set size for both biometrics. It can be fixed to prevent the curse of dimensionality, according to [19]: $2 \le TSS_B / FSS_B \le 10$, $B \in \{F, I\}$. This provides an optimal number of training samples between 40 and 200 examples per class. $x_{test}$ is the testing sample; $x_{tr}$ is the training sample; $\alpha_{tr}$ is the Lagrange multiplier from the optimization problem of finding the maximum-margin hyper-plane; $w_0$ is the offset parameter; $y_{tr}$ is the class label; K(.,.) is the kernel mapping for the conversion from a non-linear to a linear space. A polynomial kernel is used:

$K(x_{tr}, x_{test}) = (\langle x_{tr}, x_{test} \rangle + a)^p, \quad p = 1, 2, 3$  (19)

The parameter a is fixed to adjust the model behavior. The model design is done with and without cross-validation. For the case with cross-validation, a K-fold method is used (K=10). For the case without cross-validation, the design dataset with 100 samples per class is randomly divided into 2 independent sub-sets, one for training (70 samples per class) and one for validation (30 samples per class). This ratio meets the condition for the training set size vs. dimensionality ratio.
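The discriminant of eqs. (18)-(19) can be written out directly. This is a toy illustration: the support vectors, multipliers, offset and kernel parameter a below are made-up values standing in for the result of the margin-maximization training, not a trained model:

```python
import numpy as np

def poly_kernel(X1, X2, a=1.0, p=2):
    """Polynomial kernel K(x, z) = (<x, z> + a)^p."""
    return (X1 @ X2.T + a) ** p

def svm_decision(x_test, X_tr, y_tr, alphas, w0, a=1.0, p=2):
    """Kernel SVM discriminant: sign of the alpha- and label-weighted kernel
    sum over the training samples, plus the offset w0."""
    k = poly_kernel(x_test[None, :], X_tr, a, p).ravel()
    return np.sign(np.sum(alphas * y_tr * k) + w0)

# toy 2-class setup: +1 = device owner, -1 = any other person
X_tr = np.array([[1.0, 0.0], [-1.0, 0.0]])   # "support vectors"
y_tr = np.array([1.0, -1.0])                 # class labels
alphas = np.array([0.5, 0.5])                # made-up Lagrange multipliers
print(svm_decision(np.array([2.0, 0.0]), X_tr, y_tr, alphas, w0=0.0))  # 1.0
```

In practice the multipliers and offset come from the quadratic-programming training step, and K-fold cross-validation (K=10) would be run over the design dataset as the text describes.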

4 Experimental results
The dataset contains samples belonging to 50 persons, with 3 images (left and right iris and thumb fingerprint) per subject. The images are captured using smartphone cameras. The target performances for mobile use-cases are: a True Positive Rate (TPR) on the target identity close to 90% or more, for a False Positive Rate (FPR) reduced to 10% or less. The high-level security applications industry expects significant improvements, close to 95% TPR vs. 5% FPR, to guarantee the acceptability of biometric authentication with mobile devices. The data acquisition set-up uses several Samsung Galaxy S8 devices to provide the input images for processing according to the model specification. Given the 3 original images of the fingerprint and iris per subject, the best image is selected for the feature extraction. The ROIs are obtained after several steps in which the most informative regions are manually selected; the regions of interest are defined manually for experimental purposes, without automatic ROI selection. The dataset contains several statistical features for



the gray-level bins distribution within the extracted ROI. The features are defined and computed according to the rules given in Table 1, together with the fusion rule (the composition model for the subject). GL is a certain gray-level and GLB is the total number of gray levels.

Table 1: The feature computation and composition rules

1st-order statistic features, given the gray level GL: P(GL), FSFV1[1…4], ISFV1[1…4]

1st-order histogram: P(GL) = #pixels_with_gray_level_GL / #total_pixels

Mean: \mu_{1,Im_B} = \sum_{GL=0}^{GL_B-1} GL \cdot P(GL), B \in \{F, I\}

Variance: \sigma^2_{Im_B} = \mu_{2,Im_B} = \sum_{GL=0}^{GL_B-1} (GL - \mu_{1,Im_B})^2 \cdot P(GL), B \in \{F, I\}

Skewness: \mu_{3,Im_B} = \sum_{GL=0}^{GL_B-1} (GL - \mu_{1,Im_B})^3 \cdot P(GL), B \in \{F, I\}

Kurtosis: \mu_{4,Im_B} = \sum_{GL=0}^{GL_B-1} (GL - \mu_{1,Im_B})^4 \cdot P(GL), B \in \{F, I\}

2nd-order statistic features, for pairs of pixels with the gray levels GL1, GL2: VCoM_B, FSFV2[1…4], ISFV2[1…4]

Co-occurrence matrix: CoM_B(GL1, GL2) = \#\{Im_B(x, y) = GL1, Im_B(x + \Delta x, y + \Delta y) = GL2\} / \#total\_pixel\_pairs

Angular second moment: ASM_{Im_B} = \sum_{GL1=0}^{GL_B-1} \sum_{GL2=0}^{GL_B-1} P(GL1, GL2)^2

Contrast: CON_{Im_B} = \sum_{n=0}^{GL_B-1} n^2 \sum_{GL1=0}^{GL_B-1} \sum_{GL2=0, |GL1-GL2|=n}^{GL_B-1} P(GL1, GL2)

Inverse difference moment: IDM_{Im_B} = \sum_{GL1=0}^{GL_B-1} \sum_{GL2=0}^{GL_B-1} \frac{P(GL1, GL2)}{1 + (GL1 - GL2)^2}

Entropy: H_{Im_B} = -\sum_{GL1=0}^{GL_B-1} \sum_{GL2=0}^{GL_B-1} P(GL1, GL2) \log_2 P(GL1, GL2)

The feature combination model for a subject

Merging: FV_{B,0} = [BSFV1, BSFV2, VCoM_B], B \in \{F, I\}

Transform: FV_{B,1} = PCA(FV_{B,0}); FV_{B,2} = LDA(FV_{B,1}), B \in \{F, I\}

Selection: FV_{B,FSS} = Feat\_Sel_{crit}(FV_{B,2}), B \in \{F, I\}

Fusion: FV[k] = W_F \cdot FV_{F,FSS}[k] + W_I \cdot FV_{I,FSS}[k]
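The first- and second-order statistics and the weighted fusion rule of Table 1 can be sketched as follows (a minimal NumPy sketch; the PCA/LDA transform and feature-selection steps are omitted, the equal fusion weights and the toy stand-in vectors for the selected fingerprint/iris features are illustrative assumptions):

```python
import numpy as np

def first_order_features(roi, gl_bins=4):
    """Mean, variance, skewness, kurtosis from the 1st-order histogram (Table 1).
    gl_bins=4 suits the toy ROI below; real gray-level images use 256."""
    P = np.bincount(roi.ravel(), minlength=gl_bins).astype(float) / roi.size
    gl = np.arange(gl_bins)
    mu1 = np.sum(gl * P)
    central = lambda k: np.sum((gl - mu1) ** k * P)
    return np.array([mu1, central(2), central(3), central(4)])

def glcm(roi, gl_bins=4, dx=1, dy=0):
    """Normalized co-occurrence matrix CoM(GL1, GL2) for displacement (dx, dy)."""
    h, w = roi.shape
    M = np.zeros((gl_bins, gl_bins))
    for y in range(h - dy):
        for x in range(w - dx):
            M[roi[y, x], roi[y + dy, x + dx]] += 1
    return M / M.sum()

def second_order_features(P2):
    """ASM, contrast, inverse difference moment and entropy of the GLCM."""
    gl = np.arange(P2.shape[0])
    d = gl[:, None] - gl[None, :]            # GL1 - GL2 differences
    asm = np.sum(P2 ** 2)
    con = np.sum(d ** 2 * P2)                # equals the n^2-weighted sum in Table 1
    idm = np.sum(P2 / (1.0 + d ** 2))
    ent = -np.sum(P2[P2 > 0] * np.log2(P2[P2 > 0]))
    return np.array([asm, con, idm, ent])

def fuse(fv_f, fv_i, w_f=0.5, w_i=0.5):
    """Weighted feature-level fusion of the selected fingerprint/iris vectors."""
    return w_f * np.asarray(fv_f) + w_i * np.asarray(fv_i)

roi = np.array([[0, 1, 1, 2],
                [2, 2, 3, 3],
                [0, 0, 1, 2],
                [3, 3, 3, 1]])
fsfv1 = first_order_features(roi)      # [mean, variance, skewness, kurtosis]
P2 = glcm(roi)
fsfv2 = second_order_features(P2)      # [ASM, CON, IDM, H]
fv = fuse(fsfv1, fsfv2)                # toy stand-ins for FV_{F,FSS}, FV_{I,FSS}
```

In practice the fused vector would combine the selected fingerprint and iris feature vectors after the PCA/LDA and selection stages, not the raw statistics directly.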

The training is done with balanced classes: the 2 classes are equally represented within the training dataset. Further experiments with the same model should also take unbalanced classes into account. The performances are evaluated for the cases with and without cross-validation. The modeling is performed using a polynomial kernel SVM with 3 degree values (p = 1, p = 2, p = 3). The ROC (Receiver Operating Characteristic) curves are generated with a set of 100 operating points. The curves are shown in Figure 4 (with 10-fold cross-validation) and Figure 5 (without cross-validation, with a 70%/30% training/validation split). Cross-validation allows finding the best operating point close to the desired TPR target (the identification rate for the target identity), with an FPR slightly exceeding 15%, which may not be very convenient for acceptability. The design with cross-validation outperforms the same model without cross-validation, ensuring a better generalization capacity by preventing over-fitting.
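The generation of an ROC curve over 100 operating points can be sketched as follows (a minimal NumPy sketch based on score thresholding; the toy scores and labels are illustrative, not the experimental data):

```python
import numpy as np

def roc_points(scores, labels, n_points=100):
    """(FPR, TPR) operating points obtained by sweeping n_points thresholds
    over the classifier scores, mirroring the 100-point ROC curves used here."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thresholds = np.linspace(scores.min(), scores.max(), n_points)
    n_pos, n_neg = (labels == 1).sum(), (labels == 0).sum()
    tpr = np.array([(scores[labels == 1] >= t).sum() / n_pos for t in thresholds])
    fpr = np.array([(scores[labels == 0] >= t).sum() / n_neg for t in thresholds])
    return fpr, tpr

# toy, perfectly separable scores: target identity = 1, impostor = 0
scores = np.array([0.1, 0.2, 0.8, 0.9])
labels = np.array([0, 0, 1, 1])
fpr, tpr = roc_points(scores, labels)
```

Each threshold is one operating point; the chosen operating point is the one whose TPR is closest to the target rate at an acceptable FPR.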

5 Conclusions

The present research explores the potential performance of feature-level fusion for multimodal biometric authentication in mobile use-cases. The proposed security model for mobile users' authentication is based on functional pre-classification data fusion, avoiding the curse-of-dimensionality issues that can occur when the different feature sets are concatenated. In this case, the best performances are

Figure 4: The ROC curves with cross-validation

Figure 5: The ROC curves without cross-validation


achieved when cross-validation is applied to enhance the generalization power while working with a fused data space. The design requires optimizations concerning the complexity and the feature-space dimensionality, in order to ensure the best trade-off for the application case. An advance towards a software implementation of the designed model should be considered if new results on several datasets, with some additional optimizations of the fused data space, confirm their consistency and stability over several design options.

6 References

[1] S. Soviany, S. Puşcoci, V. Săndulescu, C. Soviany: "A Biometric Security Model for Mobile Applications", The 16th International Conference on Data Networks, Communications, Computers (DNCOCO'18), Rome, Italy, November 23-25, 2018.
[2] S. Soviany, C. Soviany, S. Puşcoci: "A Multimodal Biometric System with Several Degrees of Feature Fusion for Target Identities Recognition", The 2016 International Conference on Security and Management (SAM'16), Las Vegas, USA, July 25-28, 2016.
[3] A. Jain, K. Nandakumar, A. Ross: "Score Normalization in Multimodal Biometric Systems", Pattern Recognition, The Journal of the Pattern Recognition Society, 38 (2005).
[4] D. Zhang, F. Song, Y. Xu, Z. Liang: "Advanced Pattern Recognition Technologies with Applications to Biometrics", Medical Information Science Reference, IGI Global, 2009.
[5] A. Kumari, B. Alankar, J. Grover: "Feature Level Fusion of Multispectral Palmprint", International Journal of Computer Applications (0975-8887), Volume 144, No. 3, June 2016.
[6] D. Jagadiswarya, D. Saraswadya: "Biometric Authentication using Fused Multimodal Biometric", International Conference on Computational Modeling and Security (CMS 2016), Procedia Computer Science 85 (2016) 109-116.
[7] R. Wagh, S. Darokar, S. Khobragade: "Multimodal Biometrics Features with Fusion Level Encryption", International Journal of Engineering Science and Computing, Volume 7, Issue No. 3, March 2017.
[8] M. Haghighat, M. Abdel-Mottaleb, W. Alhalabi: "Discriminant Correlation Analysis for Feature Level Fusion with Application to Multimodal Biometrics", IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, March 2016.
[9] A. Mondal, A. Kaur: "Comparative Study of Feature Level and Decision Level Fusion in Multimodal Biometric Recognition of Face, Ear and Iris", International Journal of Computer Science and Mobile Computing (IJCSMC), Vol. 5, Issue 5, May 2016, pp. 822-842.
[10] S. K. Bhardwaj: "An Algorithm for Feature Level Fusion in Multimodal Biometric System", International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 3, Issue 10, October 2014.
[11] S. Mohana Prakash, P. Betty, K. Sivanarulselvan: "Fusion of Multimodal Biometrics using Feature and Score Level Fusion", International Journal on Applications in Information and Communication Engineering, Volume 2, Issue 4, April 2016, pp. 52-56.
[12] Y. Xin, L. Kong, Z. Liu, C. Wang, H. Zhu, M. Gao, C. Zhao, X. Xu: "Multimodal Feature-Level Fusion for Biometrics Identification System on IoMT Platform", IEEE Access, Volume 6, 2018, Special Section on Trends, Perspectives and Prospects of Machine Learning Applied to Biomedical Systems in Internet of Medical Things.
[13] G. Umakant Bokade, A. M. Sapkal: "Feature Level Fusion of Palm and Face for Secure Recognition", International Journal of Computer and Electrical Engineering, Vol. 4, No. 2, April 2012.
[14] D. Hazarika, S. Gorantla, S. Poria, R. Zimmermann: "Self-Attentive Feature-Level Fusion for Multimodal Emotion Detection", 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR).
[15] S. Poria, E. Cambria, R. Bajpai, A. Hussain: "A review of affective computing: From unimodal analysis to multimodal fusion", Information Fusion 37 (2017) 98-125, Elsevier.
[16] X. Xi, M. Tang, Z. Luo: "Feature-Level Fusion of Surface Electromyography for Activity Monitoring", Sensors 2018, 18, 614.
[17] S. Soviany, V. Săndulescu, S. Puşcoci, C. Soviany, M. Jurian: "An Optimized Biometric System with Intra- and Inter-Modal Feature-level Fusion", ECAI 2017 - 9th Edition International Conference on Electronics, Computers and Artificial Intelligence, Târgovişte, România, 29 June - 1 July 2017.
[18] S. Soviany, V. Săndulescu, S. Puşcoci, C. Soviany: "A Biometric System with Hierarchical Feature-level Fusion", ECAI 2018 - 10th Edition International Conference on Electronics, Computers and Artificial Intelligence, Iaşi, România, 28-30 June 2018.
[19] S. Theodoridis, K. Koutroumbas: "Pattern Recognition", 4th edition, Academic Press, Elsevier, 2009.
[20] D. Bhattacharyya, P. Das, S. K. Bandyopadhyay, T. Kim: "IRIS Texture Analysis and Feature Extraction for Biometric Pattern Recognition", International Journal of Database Theory and Application, Vol. 1, No. 1, pp. 53-60, 2008.
[21] S. V. Bino, A. Unnikrishnan, B. Kannan: "Gray level Co-Occurrence Matrices: Generalization and some new features", International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 2, No. 2, April 2012.
[22] A. Eleyan, H. Demirel: "Co-occurrence matrix and its statistical features as a new approach for face recognition", Turk J Elec Eng & Comp Sci, Vol. 19, No. 1, 2011.
[23] L. Devroye, L. Györfi, G. Lugosi: "A Probabilistic Theory of Pattern Recognition", Springer, 1997.
