Dynamic Score Combination: a supervised and unsupervised score combination method
Dynamic Score Combination: a supervised and unsupervised score combination method
R. Tronci, G. Giacinto, F. Roli
DIEE - University of Cagliari, Italy
Pattern Recognition and Applications Group
http://prag.diee.unica.it
MLDM 2009 - Leipzig, July 23-25, 2009
Giorgio Giacinto MLDM 2009 - July 23-25, 2009 2
Outline
- Goal of score combination mechanisms
- Dynamic Score Combination
- Experimental evaluation
- Conclusions
Behavior of biometric experts
FNMR_j(th) = \int_{-\infty}^{th} p(s_j \mid s_j \in \text{positive}) \, ds_j = P(s_j \le th \mid s_j \in \text{positive})

FMR_j(th) = \int_{th}^{+\infty} p(s_j \mid s_j \in \text{negative}) \, ds_j = P(s_j > th \mid s_j \in \text{negative})
Genuine scores should produce a positive outcome; impostor scores should produce a negative outcome.
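The two error rates above can be estimated empirically from sets of genuine and impostor scores. A minimal sketch in Python (function names and toy scores are illustrative, not from the paper):

```python
def fnmr(genuine_scores, th):
    # false non-match rate: fraction of genuine (positive) scores
    # that fall at or below the acceptance threshold
    return sum(1 for s in genuine_scores if s <= th) / len(genuine_scores)

def fmr(impostor_scores, th):
    # false match rate: fraction of impostor (negative) scores
    # that exceed the acceptance threshold
    return sum(1 for s in impostor_scores if s > th) / len(impostor_scores)

genuine = [0.9, 0.8, 0.7, 0.4]   # toy scores, assumed normalised to [0, 1]
impostor = [0.1, 0.2, 0.6, 0.3]
print(fnmr(genuine, 0.5))  # 0.25: one genuine score (0.4) is rejected
print(fmr(impostor, 0.5))  # 0.25: one impostor score (0.6) is accepted
```

Raising the threshold trades FMR for FNMR, which is why the slides later report both rates at several operating points.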
Performance assessment
- True Positive Rate = 1 - FNMR
Goal of score combination
- To improve system reliability, different experts are combined
  - different sensors, different features, different matching algorithms
- Combination is typically performed at the matching score level
Goal of score combination
[Figure: scores produced by the individual experts are fused into a combined score]
Goal of score combination
- The aim is to maximize the separation between classes, e.g. as measured by the Fisher distance

FD = \frac{(\mu_{gen} - \mu_{imp})^2}{\sigma_{gen}^2 + \sigma_{imp}^2}

- Thus the distributions have to be shifted far apart, and the spread of the scores reduced
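As a sketch, the Fisher distance above can be computed from two sets of scores (toy data, illustrative only):

```python
from statistics import mean, pvariance

def fisher_distance(genuine_scores, impostor_scores):
    # FD = (mu_gen - mu_imp)^2 / (sigma_gen^2 + sigma_imp^2)
    num = (mean(genuine_scores) - mean(impostor_scores)) ** 2
    den = pvariance(genuine_scores) + pvariance(impostor_scores)
    return num / den

# well-separated, low-spread score distributions give a large Fisher distance
fisher_distance([0.8, 0.9, 1.0], [0.1, 0.2, 0.3])  # 36.75
```

Shifting the means apart increases the numerator; reducing the spread shrinks the denominator. Both changes increase FD, matching the two goals stated above.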
Static combination
- Let E = {E1, E2, …, Ej, …, EN} be a set of N experts
- Let X = {xi} be the set of patterns
- Let fj(.) be the function associated to expert Ej that produces a score sij = fj(xi) for each pattern xi

Static linear combination:

s_i^* = \sum_{j=1}^{N} \beta_j \, s_{ij}

- The weights are computed so as to maximize some measure of class separability on a training set
- The combination is static with respect to the test pattern to be classified
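A static linear combination can be sketched in a few lines (the function name and the weight values are illustrative assumptions):

```python
def static_linear_combination(weights, expert_scores):
    # weights beta_j are fixed once on a training set and then
    # applied unchanged to every test pattern
    return sum(b * s for b, s in zip(weights, expert_scores))

beta = [0.5, 0.3, 0.2]       # trained weights (illustrative)
scores = [0.9, 0.8, 0.7]     # scores s_ij of one pattern from 3 experts
static_linear_combination(beta, scores)  # ≈ 0.83
```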
Dynamic combination
The weights of the combination also depend on the test pattern to be classified:

s_i^* = \sum_{j=1}^{N} \alpha_{ij} \, s_{ij}

The local estimation of the combination parameters may yield better results than the global estimation, in terms of separation between the distributions of the scores s_i^*.
Estimation of the parameters for the dynamic combination
- Let us suppose, without loss of generality, that

s_{i1} \le s_{i2} \le \ldots \le s_{iN}

- The linear combination of three experts

\alpha_{i1} s_{i1} + \alpha_{i2} s_{i2} + \alpha_{i3} s_{i3}, \quad \alpha_{ij} \in [0, 1]

can also be written as

\alpha'_{i1} s_{i1} + s_{i2} + \alpha'_{i3} s_{i3}

which is equivalent to

\alpha''_{i1} s_{i1} + \alpha''_{i3} s_{i3}
Estimation of the parameters for the dynamic combination
- This reasoning can be extended to N experts, so we can get

s_i^* = \alpha_{i1} \min_j(s_{ij}) + \alpha_{i2} \max_j(s_{ij})

- Thus, for each pattern we have to estimate two parameters
- If we set the constraint

\alpha_{i1} + \alpha_{i2} = 1

only one parameter has to be estimated, and s_i^* \in [\min_j(s_{ij}), \max_j(s_{ij})]
Properties of the Dynamic Score Combination
- This formulation embeds the typical static combination rules:

s_i^* = \alpha_i \max_j(s_{ij}) + (1 - \alpha_i) \min_j(s_{ij})

- Linear combination:

\alpha_i = \frac{\sum_{j=1}^{N} \beta_j s_{ij} - \min_j(s_{ij})}{\max_j(s_{ij}) - \min_j(s_{ij})}

- Mean rule:

\alpha_i = \frac{\frac{1}{N} \sum_{j=1}^{N} s_{ij} - \min_j(s_{ij})}{\max_j(s_{ij}) - \min_j(s_{ij})}

- Max rule for \alpha_i = 1 and Min rule for \alpha_i = 0
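As a quick check of the embedding property, plugging the mean-rule value of α_i into the DSC formula reproduces the plain mean of the scores (function names and toy data are illustrative):

```python
def dsc(alpha, scores):
    # s* = alpha * max_j(s_ij) + (1 - alpha) * min_j(s_ij)
    return alpha * max(scores) + (1 - alpha) * min(scores)

def alpha_for_mean_rule(scores):
    # alpha_i = (mean of the scores - min) / (max - min)
    lo, hi = min(scores), max(scores)
    return (sum(scores) / len(scores) - lo) / (hi - lo)

row = [0.2, 0.5, 0.8]            # one pattern's scores from 3 experts
a = alpha_for_mean_rule(row)
dsc(a, row)                      # ≈ 0.5, the plain mean of the scores
```

The same check works for the max rule (α_i = 1 returns the maximum score) and the min rule (α_i = 0 returns the minimum).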
Properties of the Dynamic Score Combination
- This formulation also embeds the Dynamic Score Selection (DSS):

s_i^* = \alpha_i \max_j(s_{ij}) + (1 - \alpha_i) \min_j(s_{ij})

\alpha_i = 1 if x_i belongs to the positive class, \alpha_i = 0 if x_i belongs to the negative class

- DSS clearly maximizes class separability if the estimation of the class of x_i is reliable
  - e.g., by a classifier trained on the outputs of the experts E
Supervised estimation of α_i
- α_i = P(pos | x_i, E)
  - P(pos | x_i, E) can be estimated by a classifier trained on the outputs of the experts E
- α_i is estimated by a supervised procedure
- This formulation can also be seen as a soft version of DSS
  - P(pos | x_i, E) accounts for the uncertainty in class estimation

s_i^* = \alpha_i \max_j(s_{ij}) + (1 - \alpha_i) \min_j(s_{ij})
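A minimal sketch of the supervised estimate, using a plain k-NN vote over expert-score vectors as a stand-in for P(pos | x_i, E). All names, the toy data, and the choice of k here are illustrative assumptions, not the authors' experimental setup:

```python
def knn_posterior(score_vec, train_vecs, train_labels, k=3):
    # crude estimate of P(pos | x, E): fraction of positive labels among
    # the k training patterns with the closest expert-score vectors
    by_dist = sorted(
        (sum((a - b) ** 2 for a, b in zip(score_vec, vec)), label)
        for vec, label in zip(train_vecs, train_labels)
    )
    return sum(label for _, label in by_dist[:k]) / k

def supervised_dsc(score_vec, train_vecs, train_labels, k=3):
    # soft DSS: alpha_i in [0, 1] interpolates between min and max
    alpha = knn_posterior(score_vec, train_vecs, train_labels, k)
    return alpha * max(score_vec) + (1 - alpha) * min(score_vec)

train = [(0.90, 0.80), (0.85, 0.90), (0.10, 0.20), (0.20, 0.15)]
labels = [1, 1, 0, 0]    # 1 = genuine (positive), 0 = impostor (negative)
supervised_dsc((0.88, 0.85), train, labels)  # alpha = 2/3, result ≈ 0.87
```

When the posterior is confident (near 0 or 1) this reduces to DSS; intermediate values hedge between the two extreme scores.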
Unsupervised estimation of α_i
- α_i is estimated by an unsupervised procedure
  - the estimation does not depend on a training set

s_i^* = \alpha_i \max_j(s_{ij}) + (1 - \alpha_i) \min_j(s_{ij})

Mean rule: \alpha_i = \frac{1}{N} \sum_{j=1}^{N} s_{ij}

Max rule: \alpha_i = \max_j(s_{ij})

Min rule: \alpha_i = \min_j(s_{ij})
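The three unsupervised rules can be sketched directly. The function name and toy scores are illustrative; scores are assumed already normalised to [0, 1] so that α_i stays in [0, 1]:

```python
def unsupervised_dsc(scores, rule="mean"):
    # scores: one pattern's scores s_ij from the N experts, in [0, 1]
    lo, hi = min(scores), max(scores)
    if rule == "mean":
        alpha = sum(scores) / len(scores)
    elif rule == "max":
        alpha = hi
    elif rule == "min":
        alpha = lo
    else:
        raise ValueError(f"unknown rule: {rule}")
    # s* = alpha * max_j(s_ij) + (1 - alpha) * min_j(s_ij)
    return alpha * hi + (1 - alpha) * lo

row = [0.2, 0.6, 1.0]
unsupervised_dsc(row, "mean")  # alpha = 0.6 -> 0.6*1.0 + 0.4*0.2 = 0.68
```

Note these are estimators of α_i, not the fixed fusion rules themselves: e.g. the min rule sets α_i to the lowest score, which still mixes the two extreme scores rather than simply selecting the minimum.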
Dataset
- The dataset used is the Biometric Scores Set Release 1 of the NIST
  http://www.itl.nist.gov/iad/894.03/biometricscores/
- This dataset contains scores from 4 experts related to face and fingerprint recognition systems
- The experiments were performed using all the possible combinations of 3 and 4 experts
- The dataset has been divided into four parts, each one used in turn for training, with the remaining three used for testing
Experimental Setup
- Experiments aimed at assessing the performance of
  - The unsupervised Dynamic Score Combination (DSC)
    - α_i estimated by the Mean, Max, and Min rules
  - The supervised Dynamic Score Combination
    - α_i estimated by k-NN, LDC, QDC, and SVM classifiers
- Comparisons with
  - The Ideal Score Selector (ISS)
  - The Optimal static Linear Combination (Opt LC)
  - The Mean, Max, and Min rules
  - The linear combination where coefficients are estimated by LDA
Performance assessment
- Area Under the ROC Curve (AUC)
- Equal Error Rate (EER)
- d' (sensitivity index):

d' = \frac{\mu_{gen} - \mu_{imp}}{\sqrt{\sigma_{gen}^2 / 2 + \sigma_{imp}^2 / 2}}

- FNMR at 1% and 0% FMR
- FMR at 1% and 0% FNMR
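The d' index above can be sketched as follows (toy scores, illustrative only):

```python
from math import sqrt
from statistics import mean, pvariance

def d_prime(genuine_scores, impostor_scores):
    # d' = (mu_gen - mu_imp) / sqrt(sigma_gen^2 / 2 + sigma_imp^2 / 2)
    num = mean(genuine_scores) - mean(impostor_scores)
    den = sqrt(pvariance(genuine_scores) / 2 + pvariance(impostor_scores) / 2)
    return num / den

d_prime([0.8, 0.9, 1.0], [0.1, 0.2, 0.3])  # ≈ 8.57
```

Like the Fisher distance, d' grows as the class means move apart and the score spreads shrink, but it keeps the sign and is expressed in units of pooled standard deviation.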
Combination of three experts
Method     AUC               EER               d'
ISS        1.0000 (±0.0000)  0.0000 (±0.0000)  25.4451 (±8.7120)
Opt LC     0.9997 (±0.0004)  0.0050 (±0.0031)   3.1231 (±0.2321)
Mean       0.9982 (±0.0013)  0.0096 (±0.0059)   3.6272 (±0.4850)
Max        0.9892 (±0.0022)  0.0450 (±0.0048)   3.0608 (±0.3803)
Min        0.9708 (±0.0085)  0.0694 (±0.0148)   2.0068 (±0.1636)
DSC Mean   0.9986 (±0.0011)  0.0064 (±0.0030)   3.8300 (±0.5049)
DSC Max    0.9960 (±0.0015)  0.0214 (±0.0065)   3.8799 (±0.2613)
DSC Min    0.9769 (±0.0085)  0.0634 (±0.0158)   2.3664 (±0.2371)
LDA        0.9945 (±0.0040)  0.0296 (±0.0123)   2.3802 (±0.2036)
DSC k-NN   0.9987 (±0.0016)  0.0104 (±0.0053)   6.9911 (±0.9653)
DSC ldc    0.9741 (±0.0087)  0.0642 (±0.0149)   2.7654 (±0.2782)
DSC qdc    0.9964 (±0.0039)  0.0147 (±0.0092)   9.1452 (±3.1002)
DSC svm    0.9996 (±0.0004)  0.0048 (±0.0026)   4.8972 (±0.4911)
DSC Mean vs. Mean rule
Combination of three experts

DSC Mean:  AUC 0.9991, EER 0.0052, d' 4.4199
Mean rule: AUC 0.9986, EER 0.0129, d' 4.0732
Unsupervised DSC vs. fixed rules
[Figures: the unsupervised DSC compared with the fixed rules on AUC, EER, and FMR at 0% FNMR]

DSC Mean vs. supervised DSC
[Figures: DSC Mean compared with the supervised DSC on AUC, EER, and FMR at 0% FNMR]
Conclusions
- The Dynamic Score Combination mechanism embeds different combination modalities
- Experiments show that the unsupervised DSC usually outperforms the related "fixed" combination rules
- The use of a classifier in the supervised DSC allows attaining better performance, at the expense of increased computational complexity
- Depending on the classifier, performance is very close to that of the optimal linear combiner