Face Recognition - Ham Rara



    Face Recognition


    Face Recognition Scenarios

    Face verification: Am I who I say I am?

    A one-to-one match that compares a query face image against a template face image whose identity is being claimed.

    Face identification: Who am I?

    A one-to-many matching process that compares a query image against all template images in a face database to determine the identity of the query face; a similarity score is found for each comparison.

    Watch list: Are you looking for me?

    An open-universe test: the test individual may or may not be in the system database. Perform face identification first and rank the similarity scores; if the highest similarity score is higher than a preset threshold, an alarm is raised.
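    The watch-list decision described above is simple to express in code. The sketch below is a minimal illustration in Python, assuming the similarity scores against the gallery have already been produced by the identification step; the identities, scores, and threshold are placeholders, not values from the slides.

        # Watch-list check: rank similarity scores and raise an alarm above a preset threshold.
        def watch_list_check(similarity_scores, threshold):
            """similarity_scores: dict mapping gallery identity -> similarity to the probe."""
            best_id, best_score = max(similarity_scores.items(), key=lambda kv: kv[1])
            if best_score > threshold:
                return True, best_id      # alarm raised: probable watch-list match
            return False, None            # probe treated as not in the database

        alarm, identity = watch_list_check({"subject_01": 0.42, "subject_02": 0.91}, threshold=0.8)
        print(alarm, identity)            # -> True subject_02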


    Face Recognition System Block Diagram

    Face detection:

    Given a single image or video, an ideal face detector must be able to identify and locate all the present faces regardless of their position, scale, orientation, age, and expression.


    Face Detection: Intel OpenCV Implementation
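    The slide refers to Intel's OpenCV face detector. As a rough sketch of how a Haar-cascade detector is typically invoked through OpenCV's Python bindings (the image path is a placeholder and the parameter values are common defaults, not taken from the slides):

        import cv2

        # Load the frontal-face Haar cascade shipped with OpenCV.
        cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(cascade_path)

        image = cv2.imread("group_photo.jpg")           # placeholder input image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection runs on grayscale

        # scaleFactor and minNeighbors trade off recall against false positives.
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        for (x, y, w, h) in faces:                      # one bounding box per detected face
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("detected.jpg", image)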


    Face Alignment

    In face alignment, facial components, such as eyes, nose, and mouth, and the facial outline are located, and thereby the input face image is normalized in geometry and photometry.


    Feature Extraction/Matching

    Feature Extraction/Matching: Linear Subspace Approaches

    - Principal Component Analysis (PCA)
    - Linear Discriminant Analysis (LDA)
    - Independent Component Analysis (ICA)


    Face Recognition: Challenges

    Figure: Inter-subject versus intra-subject variations. (a) and (b) are images from different subjects, but their appearance variations represented in the input space can be smaller than images from the same subject (b), (c), and (d).


    Face Recognition in Subspaces (PCA and LDA)

    Vector representation of images: 2D image data can be represented as vectors.

    $$I = \begin{bmatrix} 2 & 3 & 4 \\ 4 & 5 & 6 \\ 8 & 2 & 1 \end{bmatrix}$$

    $$V_I = \begin{bmatrix} 2 & 3 & 4 & 4 & 5 & 6 & 8 & 2 & 1 \end{bmatrix}^T \text{ (row-wise stacking)} \qquad V_I = \begin{bmatrix} 2 & 4 & 8 & 3 & 5 & 2 & 4 & 6 & 1 \end{bmatrix}^T \text{ (column-wise stacking)}$$

    Consider a (60x50) image: we have a (3000x1) vector.
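    A minimal NumPy sketch of this vectorization (using column-wise stacking, the convention adopted in the numerical example later):

        import numpy as np

        I = np.array([[2, 3, 4],
                      [4, 5, 6],
                      [8, 2, 1]])

        # Stack the columns on top of each other (column-major / Fortran order).
        V_I = I.flatten(order="F").reshape(-1, 1)
        print(V_I.ravel())   # -> [2 4 8 3 5 2 4 6 1]

        # A (60x50) image flattens the same way into a (3000x1) vector.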


    Face Recognition in Subspaces: Image Space vs. Face Space

    The appearance of faces is highly constrained; any frontal view of a face is highly symmetrical, has eyes on the sides, nose in the middle, etc.

    A vast proportion of points in the image space do not represent faces, like the second image above.

    Therefore, the natural constraints dictate that face images are confined to a subspace, which is referred to as the face space (a subspace of the image space).


    Face Recognition in Subspaces: Subspace Illustration

    Consider a straight line in $\mathbb{R}^2$ passing through the origin (0,0) (see the figure with axes $x_1$, $x_2$ and vectors $u$, $v$ on the line).

    A subset $H$ of a vector space $V$ is a subspace if it is closed under the two operations:

    $$u, v \in H \Rightarrow u + v \in H$$
    $$u \in H,\ c \in \mathbb{R} \Rightarrow cu \in H$$

    Analogy: line $\leftrightarrow$ face space, $\mathbb{R}^2$ $\leftrightarrow$ image space.


    Face Recognition: Linear Subspace Analysis

    Three classical linear appearance-based models: PCA, LDA and ICA

    Each model has its own representation (basis vectors) of a high-dimensional face vector space based on a different statistical viewpoint.

    All three representations can be considered as a linear transformation from the original image vector to a projection feature vector:

    $$Y = W^T X$$

    where $Y$ is $(d \times 1)$, $X$ is $(n \times 1)$, and $W$ is $(n \times d)$, with $d \ll n$.


    Face Recognition: Principal Component Analysis (PCA)

    The objective is to perform dimensionality reduction while preserving as much of the randomness (variance) in the high-dimensional space as possible.

    $$Y = W^T X$$

    The PCA basis vectors are defined as the eigenvectors of the scatter matrix $S_T$:

    $$S_T = \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T$$

    where $x_i$ is the $i$-th sample and $\mu$ is the sample mean.

    The transformation matrix $W_{PCA}$ is composed of the eigenvectors corresponding to the $d$ largest eigenvalues:

    $$S_T W = W \Lambda$$

    where $\Lambda$ is the diagonal matrix of eigenvalues.
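    A brief NumPy sketch of these definitions; the random matrix below only stands in for a set of vectorized face images, and the shapes are scaled down for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.random((300, 40))            # columns stand in for vectorized face images (n x N)
        mu = X.mean(axis=1, keepdims=True)   # sample mean
        Xc = X - mu                          # centered samples x_i - mu

        S_T = Xc @ Xc.T                      # scatter matrix (n x n)
        eigvals, eigvecs = np.linalg.eigh(S_T)

        d = 10
        W_pca = eigvecs[:, np.argsort(eigvals)[::-1][:d]]   # eigenvectors of the d largest eigenvalues
        Y = W_pca.T @ Xc                     # projected feature vectors (d x N)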


    Principal Component Analysis (PCA): Numerical Example

    Figure: Train/test images (x1, x2, x3, x4, x5, and xTest) for the PCA numerical example.

    Figure: (a) The data matrix, consisting of pixel values in columns, for the 5 training images; (b) the pixel values of the mean image.


    Principal Component Analysis (PCA): Numerical Example

    Create Data Matrix: The (5x3) images are mapped into a (15x1) column vector by concatenating each column of the image. The column vectors are combined to form a data matrix of size (15x5), since there are 15 pixels in the image and 5 training images. For the example above, the original data matrix (X) and the sample mean ($\mu$) are shown below.


    Principal Component Analysis (PCA): Numerical Example

    Center Data: The training images must be centered by subtracting the mean image from each of the training images. This is an essential preprocessing step for the principal component algorithm. Figure i shows the mean-centered data in columns.

    $$S_T = \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T$$
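    A small sketch of the data-matrix and centering steps in NumPy; the pixel values below are random placeholders, since the actual values appear only in the original figure.

        import numpy as np

        rng = np.random.default_rng(1)
        # Five (5x3) training images with placeholder pixel values.
        train_images = [rng.integers(0, 256, size=(5, 3)) for _ in range(5)]

        # Concatenate each image's columns into a (15x1) vector, then stack them into a (15x5) data matrix X.
        X = np.column_stack([img.flatten(order="F") for img in train_images]).astype(float)

        mu = X.mean(axis=1, keepdims=True)   # (15x1) mean image
        Xc = X - mu                          # mean-centered data, one image per column
        print(X.shape, Xc.shape)             # -> (15, 5) (15, 5)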


    Principal Component Analysis (PCA): Numerical Example

    Create Covariance Matrix:

    The centered data matrix (X) is multiplied by its transpose to create a covariance matrix ($C = XX^T$) on which the eigenvalue analysis is applied. Since the size of the data matrix is (15x5), the covariance matrix size is (15x15). When using the singular value decomposition (SVD) to compute the eigenvalues and eigenvectors, it is not necessary to compute the covariance matrix, as will be discussed later. Due to the size of the covariance matrix, it will not be shown here.


    Principal Component Analysis (PCA): Numerical Example

    Solve Eigenvalue Problem: The eigenvalues and eigenvectors of the covariance matrix can be solved from the following equation,

    $$C v = \lambda v$$

    where $C$ is the covariance matrix, $v$ is an eigenvector, and $\lambda$ is the corresponding eigenvalue. The Matlab function eig.m can be used to solve for the eigenvalues and eigenvectors.


    Principal Component Analysis (PCA): Numerical Example

    Order Eigenvectors: The eigenvectors are then ordered according to their corresponding eigenvalues, from highest to lowest. For the example above, only the first four eigenvectors corresponding to the highest eigenvalues (the nonzero eigenvalues) are kept. The eigenvectors corresponding to the eigenvalues $\lambda_1 = 397360$, $\lambda_2 = 217350$, $\lambda_3 = 112950$, and $\lambda_4 = 54730$ are shown in Figure i.
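    The eigen-decomposition and ordering steps have a direct NumPy counterpart to Matlab's eig; a sketch, with a random matrix standing in for the centered data of the example:

        import numpy as np

        rng = np.random.default_rng(1)
        Xc = rng.standard_normal((15, 5))        # placeholder (15x5) mean-centered data matrix

        C = Xc @ Xc.T                            # (15x15) covariance matrix
        eigvals, eigvecs = np.linalg.eigh(C)     # solves C v = lambda v for symmetric C

        order = np.argsort(eigvals)[::-1]        # sort from highest to lowest eigenvalue
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        V = eigvecs[:, :4]                       # keep the eigenvectors of the 4 nonzero eigenvalues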


    Principal Component Analysis (PCA): Numerical Example

    Project Training/Test Images:

    The chosen eigenvectors form the basis vectors of the subspace defined by the reduced sample images. Each of the training and test images is projected into this new space to reduce the dimensions of the image feature vectors. To project the images to the subspace, the dot product between the image and each of the eigenvectors is computed. The resulting dimension of the reduced feature vector is equal to the chosen number of eigenvectors. In matrix notation, the eigenvectors are stacked into a matrix V = [v1 v2 ... vn], where vn represents a column-wise eigenvector. To get the reduced feature vector, each image in column-vector form is multiplied by the transpose of V. The five centered training images projected into the new space are shown in Figure i.

    $$Y = W^T X$$
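    A sketch of this projection step, assuming V holds the chosen eigenvectors in its columns and Xc the centered training images (both placeholders here):

        import numpy as np

        rng = np.random.default_rng(1)
        Xc = rng.standard_normal((15, 5))              # placeholder centered training images (columns)
        eigvals, eigvecs = np.linalg.eigh(Xc @ Xc.T)
        V = eigvecs[:, np.argsort(eigvals)[::-1][:4]]  # basis: 4 leading eigenvectors

        Y_train = V.T @ Xc                             # (4x5) reduced feature vectors, Y = V^T X
        print(Y_train.shape)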


    Principal Component Analysis (PCA): Numerical Example

    Identify Test Image:

    The test image is projected into the new space by first centering the data using the sample mean of the training images and then multiplying it by the transpose of the eigenvector matrix V mentioned above. The class of the training image that has the closest distance to the test image is the desired result. The training and test images can be compared by several distance measures, such as the L2 norm (the Euclidean distance).

    The method used here to classify the test images is commonly known as the nearest-neighbor classifier, which has the advantage of requiring no preliminary training. The original test image xTest, the mean image $\mu$, the centered test data, and the reduced feature vector are shown in Figure 10. Using the L2 norm as the distance measure for the nearest-neighbor classifier, the test image is closest to the fourth image, which is the desired result.
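    The nearest-neighbor identification step might look like the sketch below, with random feature vectors standing in for the projected training and test images.

        import numpy as np

        rng = np.random.default_rng(2)
        Y_train = rng.standard_normal((4, 5))        # projected training images, one per column
        y_test = rng.standard_normal((4, 1))         # projected (centered) test image

        # L2 distances between the test image and every training image.
        dists = np.linalg.norm(Y_train - y_test, axis=0)
        best = int(np.argmin(dists))                 # index of the closest training image
        print("closest training image:", best + 1)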


    Principal Component Analysis (PCA): Numerical Example


    PCA for High-Dimensional Data

    When the number of training image samples (m, e.g. m = 400) is much less than the number of pixels (n, e.g. n = 3000) in an image, the eigenvectors of the large (n x n) covariance matrix can be obtained more efficiently from a much smaller (m x m) matrix, as described on the next slide.


    PCA for High-Dimensional Data

    In practical problems such as face recognition, training images of size (60x50) form a covariance matrix of size (3000x3000) that is computationally demanding in solving its eigenvalues and eigenvectors. The n-dimensional eigenvectors can be solved by first computing the eigenvectors of the (m x m, e.g. 400x400) covariance matrix, a less computationally intensive task [7]. Consider the eigenvectors $v_i$ of the matrix $X^T X$:

    $$X^T X v_i = \lambda_i v_i$$

    Premultiplying both sides by $X$ results in

    $$X X^T (X v_i) = \lambda_i (X v_i)$$

    where $X v_i$ are the eigenvectors of the original covariance matrix $C = XX^T$.

    Therefore, it is possible to solve for the eigenvectors of the original covariance matrix ($C = XX^T$) by first getting the eigenvectors of the much smaller covariance matrix ($X^T X$) and then multiplying them by the data matrix $X$, resulting in the actual eigenvectors $v_i^* = X v_i$.
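    A brief NumPy sketch of this trick; the array sizes are scaled-down stand-ins for the (3000x400) case, and the recovered eigenvectors are normalized before checking them against the large covariance matrix.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 300, 40                         # stand-ins for n = 3000 pixels, m = 400 images
        X = rng.standard_normal((n, m))        # centered data matrix

        # Eigenvectors of the small (m x m) matrix X^T X ...
        small_vals, small_vecs = np.linalg.eigh(X.T @ X)

        # ... premultiplied by X give eigenvectors of the large (n x n) matrix X X^T.
        V = X @ small_vecs                     # v_i* = X v_i
        V /= np.linalg.norm(V, axis=0)         # normalize each eigenvector to unit length

        # Check one eigenvector of C = X X^T: C v should equal lambda v.
        C = X @ X.T
        print(np.allclose(C @ V[:, -1], small_vals[-1] * V[:, -1]))   # -> True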


    PCA for High-Dimensional Data: Numerical Example


    Face Recognition: Linear Discriminant Analysis (LDA)

    The objective of LDA is to perform dimensionality reduction while preserving as much of the class discriminatory information as possible.

    $$Y = W^T X$$

    The method selects $W_{LDA}$ in such a way that the ratio of the between-class scatter ($S_B$) and the within-class scatter ($S_W$) is maximized:

    $$S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T \qquad S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T$$

    where $N_i$ is the number of samples in class $X_i$, $\mu$ is the overall mean, and $\mu_i$ is the mean image of class $X_i$.

    $$W_{LDA} = \arg\max_W \frac{\left| W^T S_B W \right|}{\left| W^T S_W W \right|}$$
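    A sketch of the scatter matrices and the LDA projection in NumPy, assuming the feature dimension is small enough for S_W to be invertible; the three 2-D point clouds are placeholder data.

        import numpy as np

        rng = np.random.default_rng(0)
        # Three classes of 2-D feature vectors (samples in columns), shifted placeholder clouds.
        offsets = [np.array([[0.0], [0.0]]), np.array([[4.0], [1.0]]), np.array([[2.0], [5.0]])]
        classes = [rng.standard_normal((2, 20)) + off for off in offsets]

        mu = np.hstack(classes).mean(axis=1, keepdims=True)        # overall mean

        S_B = np.zeros((2, 2))
        S_W = np.zeros((2, 2))
        for Xi in classes:
            mu_i = Xi.mean(axis=1, keepdims=True)                  # class mean
            S_B += Xi.shape[1] * (mu_i - mu) @ (mu_i - mu).T       # between-class scatter
            S_W += (Xi - mu_i) @ (Xi - mu_i).T                     # within-class scatter

        # W_LDA: directions maximizing |W^T S_B W| / |W^T S_W W|, i.e. eigenvectors of S_W^{-1} S_B.
        eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
        W_lda = eigvecs.real[:, np.argsort(eigvals.real)[::-1][:2]]  # at most c-1 = 2 useful directions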


    Face Recognition: Linear Discriminant Analysis (LDA)

    The method selects $W_{LDA}$ in such a way that the ratio of the between-class scatter ($S_B$) and the within-class scatter ($S_W$) is maximized.

    Figure: Three classes (1, 2, 3) in the $x_1$-$x_2$ plane, with their within-class scatters $S_{W1}$, $S_{W2}$, $S_{W3}$ and between-class scatters $S_{B1}$, $S_{B2}$, $S_{B3}$.


    Face Recognition: Linear Discriminant Analysis (LDA)

    Numerical Example (2D case)


    Linear Discriminant Analysis (LDA): Problems

    $$W_{LDA} = \arg\max_W \frac{\left| W^T S_B W \right|}{\left| W^T S_W W \right|}$$

    Problem: The within-class scatter matrix ($S_W$) is always singular. Rank($S_W$) is at most $N - c$, and the number of images is much smaller than the number of pixels in each image ($N$ - total number of samples, $c$ - number of classes).


    Linear Discriminant Analysis (LDA): Problems

    Solution to the singularity problem (Belhumeur, 1997): the Fisherface method.

    Fisherface method: Project the image set to a lower-dimensional space to avoid singularity of the within-class scatter ($S_W$).

    Use PCA to achieve dimension reduction to $N - c$, and apply the Fisher Linear Discriminant (FLD):

    $$W_{opt}^T = W_{fld}^T W_{pca}^T$$

    $$W_{pca} = \arg\max_W \left| W^T S_T W \right| \qquad W_{fld} = \arg\max_W \frac{\left| W^T W_{pca}^T S_B W_{pca} W \right|}{\left| W^T W_{pca}^T S_W W_{pca} W \right|}$$
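    A sketch of the Fisherface pipeline (PCA down to N - c dimensions, then FLD), assuming vectorized face images in the columns of X with integer class labels; the data below is random and only illustrates the shapes involved.

        import numpy as np

        def fisherfaces(X, labels):
            """X: (pixels x N) data matrix; labels: length-N class labels. Returns W_opt = W_pca W_fld."""
            classes = np.unique(labels)
            N, c = X.shape[1], len(classes)
            mu = X.mean(axis=1, keepdims=True)
            Xc = X - mu

            # PCA step: keep N - c eigenvectors so the projected S_W becomes nonsingular.
            vals, vecs = np.linalg.eigh(Xc @ Xc.T)
            W_pca = vecs[:, np.argsort(vals)[::-1][:N - c]]
            P = W_pca.T @ Xc                                   # PCA-projected data

            # FLD step in the reduced space.
            S_B = np.zeros((N - c, N - c))
            S_W = np.zeros((N - c, N - c))
            pmu = P.mean(axis=1, keepdims=True)
            for k in classes:
                Pk = P[:, labels == k]
                mk = Pk.mean(axis=1, keepdims=True)
                S_B += Pk.shape[1] * (mk - pmu) @ (mk - pmu).T
                S_W += (Pk - mk) @ (Pk - mk).T
            vals, vecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
            W_fld = vecs.real[:, np.argsort(vals.real)[::-1][:c - 1]]

            return W_pca @ W_fld                               # overall projection W_opt

        # Placeholder data: 100-pixel images, 4 classes, 10 images each.
        rng = np.random.default_rng(0)
        labels = np.repeat(np.arange(4), 10)
        X = rng.standard_normal((100, 40)) + labels            # crude class-dependent shift
        print(fisherfaces(X, labels).shape)                    # -> (100, 3)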


    Face Recognition: Databases

    ORL Database (400 images: 40 people, 10 images each)


    Face Recognition: Databases

    The FERET Database contains more extensive training and test images for evaluating face recognition algorithms.

    Gallery - set of known individuals

    Probe - image of an unknown face presented to the algorithm

    Duplicate - a type of probe, but the corresponding gallery image was taken on a different day

    Original (Grayscale) FERET: fa - gallery, fb - probe, dup1 - probe (within 1 minute to 1031 days), dup2 - probe

    Figure: Example fa, fb, dup1, and dup2 images (normalized version).


    Face Recognition: Distance measures

    Use a 1-nearest neighbor (NN) classifier with the following similarity measures to evaluate the efficiency of different representation and recognition methods:

    L1 Norm: $d_{L1}(\mathbf{x}, \mathbf{y}) = \sum_i |x_i - y_i|$

    L2 Norm: $d_{L2}(\mathbf{x}, \mathbf{y}) = (\mathbf{x} - \mathbf{y})^T (\mathbf{x} - \mathbf{y})$

    Cosine Distance: $d_{\cos}(\mathbf{x}, \mathbf{y}) = \dfrac{\mathbf{x} \cdot \mathbf{y}}{\|\mathbf{x}\|\,\|\mathbf{y}\|}$

    Mahalanobis Distance: $d_{Md}(\mathbf{x}, \mathbf{y}) = (\mathbf{x} - \mathbf{y})^T \Sigma^{-1} (\mathbf{x} - \mathbf{y})$
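    The four measures as small NumPy functions (a sketch; x and y are 1-D feature vectors, and for the Mahalanobis distance the covariance matrix sigma is assumed to be estimated from the training features and invertible):

        import numpy as np

        def d_l1(x, y):
            return np.sum(np.abs(x - y))

        def d_l2(x, y):
            d = x - y
            return d @ d

        def d_cos(x, y):
            return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

        def d_mahalanobis(x, y, sigma):
            d = x - y
            return d @ np.linalg.inv(sigma) @ d

    For identification, the gallery image minimizing the chosen distance to the probe is selected; for the cosine measure as written, the maximum would be taken instead, since it behaves as a similarity rather than a distance.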


    Face Recognition: Cumulative Match Score Curves (CMC)

    Cumulative Match Score (CMS): rank n vs. percentage of correct identification.


    Face Recognition: Cumulative Match Score Curves (CMC)

    Cumulative Match Score (CMS): rank n vs. percentage of correct identification.

    Example: a probe image is compared against the gallery images, and the nearest-neighbor classifier produces the scores 1.75, 2.55, 1.25, and 1.35. Sorting the scores gives 1.25, 1.35, 1.75, 2.55, so the correct gallery match (the image with score 1.35) appears at rank 2. The counts are therefore Rank-1 (+0), Rank-2 (+1), Rank-3 (+1), Rank-4 (+1). You continue counting for all probe images.
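    A sketch of how the cumulative match scores could be tallied over all probe images, assuming a probe-by-gallery distance matrix and, for each probe, the index of its correct gallery match:

        import numpy as np

        def cumulative_match_scores(distances, correct_idx):
            """distances: (probes x gallery) matrix; correct_idx[i]: gallery index matching probe i.
            Returns the fraction of probes whose correct match appears within rank 1, 2, ..., gallery size."""
            n_probes, n_gallery = distances.shape
            counts = np.zeros(n_gallery)
            for i in range(n_probes):
                order = np.argsort(distances[i])                      # best (smallest) score first
                rank = int(np.where(order == correct_idx[i])[0][0])   # 0-based rank of the correct match
                counts[rank:] += 1                                    # counts toward this rank and all higher ranks
            return counts / n_probes

        # The single-probe example above: the correct match (score 1.35) sits at rank 2.
        scores = np.array([[1.75, 2.55, 1.25, 1.35]])
        print(cumulative_match_scores(scores, correct_idx=[3]))       # -> [0. 1. 1. 1.]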


    Face Recognition: Results and Discussions (ORL)

    Eigenfaces/Fisherfaces


    Face Recognition: Results and Discussions (ORL and FERET)

    ORL

    Figure: Classification results for subspace dimension 100: recognition rate (0.1 to 0.8) vs. rank n (0 to 50), comparing the L1 Norm, L2 Norm, Cosine, and Mahalanobis distance measures.

    FERET


    References

    X. Lu, "Image Analysis for Face Recognition," personal notes, May 2003, http://www.face-rec.org/interesting-papers/General/ImAna4FacRcg_lu.pdf

    M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Jan. 1991, pp. 71-86.

    P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7): 711-720, 1997.