6.report face recognition

Upload: suresh-mg

Post on 03-Apr-2018


  • 7/28/2019 6.Report Face Recognition

    1/45

    A Matlab based Face Recognition using PCA 2013

    Dept. of ECE, BGSIT Page 1

    CHAPTER 1

    INTRODUCTION

    1.1 OBJECTIVES

    Our project is concerned with developing a face recognition system using the

    eigenface recognition algorithm. Eigenface recognition is one of the most successful

    techniques that have been used in image recognition and compression.

    1.2 FACE RECOGNITION

    Computerized human face recognition has been an active research area for the last

    20 years. It has many practical applications, such as bank card identification, access control, mug-shot searching, security monitoring, and surveillance systems. Face recognition is

    used to identify one or more persons from still images or a video image sequence of a

    scene by comparing input images with faces stored in a database. It is a biometric system

    that employs automated methods to verify or recognize the identity of a person based on

    his/her physiological characteristic. In general, a biometric identification system makes

    use of either physiological characteristics or behavior patterns to identify a person.

    Because people instinctively protect their eyes, some are reluctant to

    use eye identification systems. Face recognition has the benefit of being a passive,

    nonintrusive system to verify personal identity in a natural and friendly way.

    The face is our primary focus of attention in social interaction, playing a major

    role in conveying identity and emotion. Hence face recognition has become an important

    issue in many applications such as security systems, credit card verification and criminal

    identification. Face recognition is an emerging field of research with many challenges,

    such as large image sets and poor illumination conditions. Much of the work in face

    recognition by computers has focused on detecting individual features such as the eyes,

    nose, mouth and head outline, and defining a face model by the position, size, and

    relationships among these features. Such approaches have proven to depend heavily on

    precise feature location.

    Computational models of face recognition are interesting because they can

    contribute not only to theoretical knowledge but also to practical applications.

    Unfortunately, developing a computational model of face detection and recognition is


    quite difficult because faces are complex, multidimensional and meaningful visual

    stimuli. The designer should focus his attention toward developing a sort of early,

    pre-attentive pattern recognition capability that does not depend on having three-dimensional

    information or detailed geometry. He should develop a computational model of face

    recognition that is fast, reasonably simple, and accurate.

    The eigenface approach is one of the simplest and most efficient methods that can

    locate and track a subject's head, and then recognize the person by comparing

    characteristics of the face to those of known individuals. This approach treats the face

    recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather

    than requiring recovery of three-dimensional geometry, taking advantage of the fact that

    faces are normally upright and thus may be described by a small set of 2-D characteristic

    views. In the eigenface approach, after dimensionality reduction of the face space, the

    distance between two images is measured for recognition. If the distance is less than some

    threshold value, the face is considered known; otherwise it is unknown. Face

    recognition is a very high-level computer vision task in which many early vision

    techniques can be involved. Face recognition can be divided into two parts.
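    The threshold decision described above can be sketched in a few lines. The following is an illustrative sketch in Python with NumPy rather than the MATLAB implementation this report describes; the function name, toy weight vectors and threshold value are hypothetical.

```python
import numpy as np

def classify_face(weights, known_weights, threshold):
    """Compare a projected face against the known weight vectors.

    Returns the index of the best match, or None when the smallest
    Euclidean distance exceeds the threshold (unknown face).
    """
    dists = np.linalg.norm(known_weights - weights, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None

# Toy weight vectors standing in for projections onto the face space.
known = np.array([[1.0, 0.0], [0.0, 1.0]])
print(classify_face(np.array([0.9, 0.1]), known, threshold=0.5))  # matches face 0
print(classify_face(np.array([5.0, 5.0]), known, threshold=0.5))  # unknown: None
```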

    1.3 FACE DETECTION

    Face detection is largely motivated by the need for surveillance, security, and

    intelligent human-computer interaction. Detecting faces is challenging due to the wide

    variety of face appearances and the complexity of backgrounds. The methods

    proposed for face detection so far generally fall into two major categories: feature-based

    methods and classification-based approaches.

    1.3.1 FEATURE BASED METHODS

    These methods detect faces by searching for facial features and grouping them into

    faces according to their geometrical relationships. Since the performance of feature-based

    methods primarily depends on the reliable location of facial features, it is susceptible to

    partial occlusion, excessive deformation, and low image quality.

    1.3.2 CLASSIFICATION BASED METHODS

    These methods have used the intensity values of images as the input features of the

    underlying classifier. The Gabor filter banks, whose kernels are similar to the 2D


    receptive field profiles of the mammalian cortical simple cells, exhibit desirable

    characteristics of spatial locality and orientation selectivity. As a result, the Gabor filter

    features extracted from face images should be robust to variations due to illumination and

    facial expression changes. We propose a classification-based approach using Gabor filter

    features for detecting faces.
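    A Gabor kernel of the kind described above, a Gaussian envelope modulating an oriented sinusoid, can be generated as follows. This is an illustrative Python/NumPy sketch rather than the detector proposed here, and the kernel size, wavelength and sigma values are arbitrary choices for illustration.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope modulating a
    sinusoid at orientation theta, resembling a cortical simple-cell
    receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

# A small filter bank of four orientations, as used for feature extraction.
bank = [gabor_kernel(15, wavelength=6.0, theta=t, sigma=3.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(bank[0].shape)  # (15, 15)
```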

    1.4 FACE IDENTIFICATION

    The first step of human face identification is to extract the relevant features from

    facial images. Research in the field primarily intends to generate sufficiently good

    descriptions of human faces so that another human can correctly identify the face. The

    question naturally arises as to how well facial features can be quantized. If such a

    quantization is possible, then a computer should be capable of recognizing a face given a

    set of features. Investigations by numerous researchers over the past several years have

    indicated that certain facial characteristics are used by human beings to identify faces.

    There are three major research groups which propose three different approaches to

    the face recognition problem. The largest group has dealt with facial characteristics which

    are used by human beings in recognizing individual faces. The second group performs

    human face identification based on feature vectors extracted from profile silhouettes. The

    third group uses feature vectors extracted from a frontal view of the face. Although there

    are three different approaches to the face recognition problem, there are two basic

    methods from which these three different approaches arise.

    The first method is based on extracting feature vectors from the basic parts of a

    face such as eyes, nose, mouth, and chin. In this method, with the help of deformable

    templates and extensive mathematics, key information from the basic parts of a face is

    gathered and then converted into a feature vector. Yuille and Cohen played a great role in adapting deformable templates to contour extraction of face images.

    The second method is based on the information theory concepts, in other words,

    on the principal component analysis methods. In this approach, the most relevant

    information that best describes a face is derived from the entire face image. Based on the

    Karhunen-Loeve expansions in pattern recognition, M. Kirby and L. Sirovich have

    shown that any particular face could be economically represented in terms of a best

    coordinate system that they termed "eigenfaces". These are the eigenfunctions of the


    averaged covariance of the ensemble of faces. Later, M. Turk and A. Pentland have

    proposed a face recognition method based on the eigenfaces approach.

    1.4.1 PRINCIPAL COMPONENT ANALYSIS METHOD

    We have focused our research toward developing a sort of unsupervised pattern

    recognition scheme that does not depend on excessive geometry and computation, such as

    elastic bunch graph templates. The eigenfaces approach seemed to be an adequate method to be

    used in face recognition due to its simplicity, speed and learning capability.

    A previous work based on the eigenfaces approach was done by M. Turk and A.

    Pentland, in which faces were first detected and then identified. In this thesis, a face recognition system based on the eigenfaces approach, similar to the one presented by M.

    Turk and A. Pentland, is proposed. The scheme is based on an information theory

    approach that decomposes face images into a small set of characteristic feature images

    called eigenfaces, which may be thought of as the principal components of the initial

    training set of face images. Recognition is performed by projecting a new image onto the

    subspace spanned by the eigenfaces and then classifying the face by comparing its

    position in the face space with the positions of known individuals.

    The actual system is capable of both recognizing known individuals and learning to

    recognize new face images. The eigenface approach used in this scheme has advantages

    over other face recognition methods in its speed, simplicity, learning capability and

    robustness to small changes in the face image.


    CHAPTER 2

    THEORETICAL OVERVIEW

    2.1 INTRODUCTION

    Face recognition is a pattern recognition task performed specifically on faces. It

    can be described as classifying a face as either "known" or "unknown", after comparing it

    with stored known individuals. It is also desirable to have a system that has the ability of

    learning to recognize unknown faces.

    Computational models of face recognition must address several difficult problems.

    This difficulty arises from the fact that faces must be represented in a way that best

    utilizes the available face information to distinguish a particular face from all other faces.

    Faces pose a particularly difficult problem in this respect because all faces are similar to

    one another in that they contain the same set of features such as eyes, nose, mouth

    arranged in roughly the same manner.

    2.2 BACKGROUND AND RELATED WORK

    Much of the work in computer recognition of faces has focused on detecting

    individual features such as the eyes, nose, mouth, and head outline, and defining a face

    model by the position, size, and relationships among these features. Such approaches

    have proven difficult to extend to multiple views and have often been quite fragile,

    requiring a good initial guess to guide them. Research in human strategies of face

    recognition, moreover, has shown that individual features and their immediate

    relationships comprise an insufficient representation to account for the performance of

    adult human face identification. Nonetheless, this approach to face recognition remains

    the most popular one in the computer vision literature.

    Bledsoe was the first to attempt semi-automated face recognition with a hybrid

    human-computer system that classified faces on the basis of fiducial marks entered on

    photographs by hand. Parameters for the classification were normalized distances and

    ratios among points such as eye corners, mouth corners, nose tip, and chin point. Later

    work at Bell Labs developed a vector of up to 21 features, and recognized faces using

    standard pattern classification techniques.


    Fischler and Elschlager attempted to measure similar features automatically. They

    described a linear embedding algorithm that used local feature template matching and a

    global measure of fit to find and measure facial features. This template matching

    approach has been continued and improved by the recent work of Yuille and Cohen.

    Their strategy is based on deformable templates, which are parameterized models of the

    face and its features in which the parameter values are determined by interactions with the

    face image.

    Connectionist approaches to face identification seek to capture the configurational

    nature of the task. Kohonen, and Kohonen and Lehtio, describe an associative network

    with a simple learning algorithm that can recognize face images and recall a face image

    from an incomplete or noisy version input to the network. Fleming and Cottrell extend

    these ideas using nonlinear units, training the system by back propagation.

    Others have approached automated face recognition by characterizing a face by a

    set of geometric parameters and performing pattern recognition based on the parameters.

    Kanade's face identification system was the first system in which all steps of the

    recognition process were automated, using a top-down control strategy directed by a

    generic model of expected feature characteristics. His system calculated a set of facial

    parameters from a single face image and used a pattern classification technique to match

    the face from a known set, a purely statistical approach depending primarily on local

    histogram analysis and absolute gray-scale values.

    Recent work by Burt uses a smart sensing approach based on multiresolution

    template matching. This coarse to fine strategy uses a special purpose computer built to

    calculate multiresolution pyramid images quickly, and has been demonstrated identifying

    people in near real time.

    2.3 FACE RECOGNITION WITH EIGENFACES

    The eigenface method is the most thoroughly researched of these approaches. According to Sirovich and

    Kirby [1], principal components (eigenvectors) can be used to represent face images.

    Their research showed that a face could be approximately reconstructed by a collection of

    weights describing each face, and a standard eigenpicture. The weights of each face are

    obtained by projecting the test image on the eigenpicture. Turk and Pentland [2] used this


    idea of eigenface for face detection and identification. Mathematically eigenfaces are the

    principal components of the distribution of faces or the eigenvectors of the covariance

    matrix of the set of face images (this is explained in chapter 2). The eigenvectors account

    for the variation between the images in a set. A face can be represented by a linear

    combination of the eigenvectors. Later it was shown that illumination normalization [6] is required for the eigenface approach.

    Pentland et al [3] used the eigenface recognition technique and extended it to

    facial features such as eyes, nose, and mouth which can be called eigenfeatures. This

    modular eigenspace approach, comprising the eigenfeatures eigeneyes,

    eigennose and eigenmouth was found to be less sensitive to changes in appearance than

    the standard eigenface method. To summarize, the eigenface method is not invariant to

    illumination and changes in scale.

    2.4 NEURAL NETWORK

    The neural network is another approach to face recognition. The difference

    from the eigenface method lies in the fact that it is nonlinear. Due to the nonlinear nature of

    the network, the feature extraction is more efficient than with eigenfaces. WISARD (Wilkie,

    Stonham and Aleksander's Recognition Device) was one of the first artificial

    neural network (ANN) schemes. It was a single-layer adaptive network, and each individual [4] stored in it had a separate network. Neural networks are application oriented. For face

    detection, the multilayer perceptron and the convolutional neural network [12] have been applied.

    For face verification, Cresceptron [5], a multiresolution pyramidal structure,

    is used. The convolutional network provides partial invariance to translation, rotation,

    scale, and deformation. Lin's [6] probabilistic decision-based neural network (PDBNN)

    has a modular structure and uses decision-based learning. PDBNN can be applied to

    face detection, eye localization and face recognition. PDBNN has the merits of both neural networks and statistical approaches, and its distributed computing principle is

    relatively easy to implement on a parallel computer. However when the number of people

    increases, the need for more computation also increases. This is because the training time

    increases. Also, it is not suitable for single model image recognition as multiple model

    images per person are required for the optimal training of the system.


    2.5 DYNAMIC LINK ARCHITECTURE

    It is another method for face recognition. It is an extension of the classical

    artificial neural network. In this method, memorized objects are represented by sparse

    graphs, whose vertices are labeled with a multi-resolution description in terms of a local

    power spectrum and whose edges are labeled with geometrical distance vectors. Wiskott and Von der

    Malsburg extended the technique and matched faces against a

    gallery of 112 neutral frontal view faces. They reported 86.5 percent and 66.4 percent for

    matching 111 faces of 15 degree and 110 faces of 30 degree rotation to a gallery of 112

    neutral frontal faces. In general, dynamic link architecture is superior to other face

    recognition techniques in terms of rotation invariance but the matching process iscomputationally expensive.

    2.6 HIDDEN MARKOV MODEL

    Another approach to face recognition is the Hidden Markov Model (HMM). Samaria and

    Fallside [8] applied this method to human face recognition. Faces were divided into

    regions like the eyes, nose, mouth which could be associated with the states of the hidden

    Markov model. Since an HMM requires a one-dimensional observation sequence and the

    images are in 2D, the images must be converted to either 1D temporal sequences or 1D spatial

    sequences. An unknown test image is first sampled to an observation sequence. Then it is

    matched against every HMM in the model face database. The match with the highest

    likelihood is considered the best match and the relevant model reveals the identity of the

    test face. The recognition rate is 87 percent using the ORL database. The training time is

    usually high for this approach.

    2.7 GEOMETRICAL FEATURE MATCHING TECHNIQUE

    It is based on the computation of the geometrical features of the face. The overall

    configuration is described by a vector representing the position and size of the facial

    features like the eyes, nose, mouth and the shape of the outline of the face. Kanade's [9] face

    recognition based on this approach reached a 75 percent recognition rate. Brunelli and

    Poggio's [10] method extracted 35 features from the face to form a 35-dimensional

    vector. The recognition was performed with a Bayes classifier. They reported a


    recognition rate of 90 percent on a database of 47 people. In these methods, typically 35

    to 45 feature points per face were generated. Geometrical feature matching depends on

    precisely measured distances between features, which in turn depend on the accuracy of

    the feature location algorithms.

    2.8 TEMPLATE MATCHING

    In this method a test image is represented as a two-dimensional array of intensity

    values and is compared using a metric such as Euclidean distance with a single template

    representing the whole face. In this method, each viewpoint of an individual can be used

    to generate a template. Thus, multiple templates can represent each person in the

    database. A single template corresponding to a viewpoint can be made up of smaller distinctive templates [24] [10]. Brunelli and Poggio [10] extracted four features, namely

    the eyes, nose, mouth and the entire face, and compared the performance of their

    geometrical method with the template method. The template matching method was found

    to be superior to the geometrical matching technique. Drawbacks of template matching

    include the complexity of the computation and the description of the templates. As the

    recognition system is tolerant to certain discrepancies between the test image and the

    template, the tolerance might average out the differences that make a face unique.
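    The whole-face comparison described above can be sketched in a few lines. This is an illustrative Python/NumPy sketch, not an implementation from the literature cited; the function name and the toy templates are hypothetical.

```python
import numpy as np

def best_template_match(test, templates):
    """Compare a test image with stored whole-face templates using
    Euclidean distance on the raw intensity arrays; return the index
    of the closest template."""
    dists = [np.linalg.norm(test - t) for t in templates]
    return int(np.argmin(dists))

# Two toy "templates": a dark face and a bright face.
templates = [np.zeros((4, 4)), np.ones((4, 4)) * 200.0]
probe = np.ones((4, 4)) * 190.0
print(best_template_match(probe, templates))  # 1 (closest to the bright template)
```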

    2.9 THE ACQUISITION MODULE

    This is the entry point of the face recognition process. It is the module where the

    face image under consideration is presented to the system. An acquisition module can

    request a face image from several different environments: the face image can be an image

    file that is located on a magnetic disk, it can be captured by a frame grabber, or it can be

    scanned from paper with the help of a scanner.


    2.10 OUTLINE OF A TYPICAL FACE RECOGNITION SYSTEM

    [Block diagram: Face Image -> Normalized Face Image -> Feature Vector -> Classified as known or unknown]

    Figure 2.1: Outline of a Face Recognition System

    There are six main functional blocks, whose responsibilities are given below:

    2.11 THE PRE-PROCESSING MODULE

    In this module, by means of early vision techniques, face images are normalized

    and if desired, they are enhanced to improve the recognition performance of the system.

    Some or all of the following pre-processing steps may be implemented in a face

    recognition system:

    2.11.1 IMAGE SIZE NORMALIZATION

    It is usually done to change the acquired image size to a default image size such

    as 128 x 128, on which the face recognition system operates. This is mostly encountered

    in systems where face images are treated as a whole like the one proposed in this thesis.

    2.11.2 HISTOGRAM EQUALIZATION

    It is usually done on too dark or too bright images in order to enhance image

    quality and to improve face recognition performance. It modifies the dynamic range

    (contrast range) of the image and as a result, some important facial features become more

    apparent.


    Histogram equalization is applied in order to improve the contrast of the images.

    The peaks in the image histogram, indicating the commonly used grey levels, are

    widened, while the valleys are compressed.

    Figure 2.2: Histogram Equalization
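    The grey-level remapping described above can be sketched as follows. This is an illustrative Python/NumPy sketch of the standard cumulative-histogram mapping for 8-bit images, not the pre-processing code of the proposed system; the toy image is hypothetical.

```python
import numpy as np

def histogram_equalize(img):
    """Map grey levels through the normalized cumulative histogram so the
    output spans the full 0-255 range; histogram peaks are widened and
    valleys compressed."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first non-zero cdf value
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A dark image whose values occupy only a narrow band of grey levels.
dark = np.full((4, 4), 40, dtype=np.uint8)
dark[0, 0] = 60
eq = histogram_equalize(dark)                      # values stretched to 0 and 255
```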

    2.11.3 MEDIAN FILTERING

    For noisy images, especially those obtained from a camera or a frame grabber,

    median filtering can clean the image without losing information.
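    The impulse-noise removal described above can be sketched as a simple 3x3 median filter. This is an illustrative Python/NumPy sketch (a real implementation would also handle the image border); the toy image and function name are hypothetical.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: replace each interior pixel with the median of
    its neighbourhood, removing impulse (salt-and-pepper) noise while
    preserving edges."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255              # a single salt-noise pixel
clean = median_filter3(noisy)  # the outlier is replaced by its neighbours
```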

    2.11.4 HIGH-PASS FILTERING

    Feature extractors that are based on facial outlines may benefit from the results of

    an edge detection scheme. High-pass filtering emphasizes the details of an

    image, such as contours, which can dramatically improve edge detection performance.

    2.11.5 BACKGROUND REMOVAL

    In order to deal primarily with facial information itself, face background can be

    removed. This is especially important for face recognition systems where the entire

    information contained in the image is used.


    2.11.6 ROTATION CORRECTION

    When the locations of eyes, nose and mouth are given for a face image, this data

    can be used to rotate the image so that all the face images are exactly positioned the same.

    When the positions of the right and left eyes are known, the angle between the lines, l1 and

    l2, connecting the mid-points of the two eyes can be calculated using the inverse tangent.

    The image can be rotated using the calculated angle. This process is shown in Figure 2.3.

    Figure 2.3: The rotation correction procedure when eye, nose and mouth locations are given.
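    The angle computation described above can be sketched as follows. This is an illustrative Python sketch; the eye coordinates are hypothetical, and image coordinates are assumed to be (x, y) pixel positions.

```python
import math

def eye_rotation_angle(left_eye, right_eye):
    """Angle (radians) by which the image must be rotated so that the
    line through the eye mid-points becomes horizontal, computed via
    the inverse tangent of the eye-to-eye offset."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.atan2(dy, dx)

# Hypothetical eye positions in a tilted face image.
angle = eye_rotation_angle((60, 100), (140, 120))
print(math.degrees(angle))  # roughly 14 degrees of tilt to correct
```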

    2.11.7 MASKING

    By using a mask, which simply has a face-shaped region, the effect of background

    change is minimized. The effect of masking is studied only on FERET database images.

    The mask used in this study is shown in Figure 2.4.

    Figure 2.4: The face shaped mask


    Figure 2.5: Masking

    2.11.8 TRANSLATIONAL AND ROTATIONAL NORMALIZATIONS

    In some cases, it is possible to work on a face image in which the head is

    somehow shifted or rotated. Especially for face recognition systems that are based on the

    frontal views of faces, it may be desirable that the pre-processing module determines and

    if possible, normalizes the shifts and rotations in the head position.

    2.11.9 ILLUMINATION NORMALIZATION

    Face images taken under different illuminations can degrade recognition

    performance especially for face recognition systems based on the principal component

    analysis, in which the entire face information is used for recognition. A picture can be

    equivalently viewed as an array of reflectivities r(x).

    Thus, under a uniform illumination I, the corresponding picture is given by

    P(x) = I r(x)

    The normalization consists of imposing a fixed level of illumination I0 at a

    reference point x0 on the picture. The normalized picture is given by

    P'(x) = (I0 / P(x0)) P(x)

    In actual practice, the average of two reference points, such as one under each eye,

    each consisting of a 2 x 2 array of pixels, can be used.
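    The scaling described above — fixing the intensity at the reference patches to a level I0 — can be sketched as follows. This is an illustrative Python/NumPy sketch; the function name, the reference-point coordinates and the choice of I0 = 128 are hypothetical.

```python
import numpy as np

def normalize_illumination(img, ref_points, i0=128.0):
    """Scale the picture so that the average intensity of the 2x2 patches
    at the reference points (e.g. one under each eye) equals a fixed
    illumination level I0."""
    patches = [img[y:y + 2, x:x + 2].mean() for (y, x) in ref_points]
    ref = float(np.mean(patches))          # averaged reference intensity
    return img * (i0 / ref)

# A uniformly under-lit toy image: every pixel is doubled to reach I0.
img = np.full((8, 8), 64.0)
out = normalize_illumination(img, ref_points=[(2, 2), (2, 5)], i0=128.0)
```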


    2.12 THE FEATURE EXTRACTION MODULE

    After performing some pre-processing (if necessary), the normalized face image is

    presented to the feature extraction module in order to find the key features that are going

    to be used for classification. In other words, this module is responsible for composing a

    feature vector that represents the face image well enough for classification.

    2.13 THE CLASSIFICATION MODULE

    In this module, with the help of a pattern classifier, the extracted features of the face

    image are compared with the ones stored in a face library (or face database). After this

    comparison, the face image is classified as either known or unknown.

    2.14 TRAINING SET

    Training sets are used during the "learning phase" of the face recognition process.

    The feature extraction and the classification modules adjust their parameters in order to

    achieve optimum recognition performance by making use of training sets.

    2.15 FACE LIBRARY OR FACE DATABASE

    After being classified as "unknown", face images can be added to a library (or to a

    database) with their feature vectors for later comparisons. The classification module

    makes direct use of the face library.


    CHAPTER 3

    FACE RECOGNITION USING PRINCIPAL

    COMPONENT ANALYSIS

    3.1 INTRODUCTION

    Principal component analysis (PCA) was invented in 1901 by Karl Pearson.

    PCA is one of the most successful techniques that have

    been used in image recognition and compression. PCA is a statistical method under the

    broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of

    the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is needed to describe the data economically. This is the

    case when there is a strong correlation between the observed variables. The tasks which PCA

    can perform include prediction, redundancy removal, feature extraction, data compression, etc.

    Because PCA is a classical technique that operates in the linear domain,

    applications having linear models are suitable, such as signal processing, image

    processing, system and control theory, communications, etc. Depending on the field of

    application, it is also named the discrete Karhunen-Loeve transform (KLT), the Hotelling

    transform, or proper orthogonal decomposition (POD).

    Face recognition has many application areas. Moreover, it can be categorized into

    face identification, face classification, or sex determination. The most useful applications

    include crowd surveillance, video content indexing, personal identification (e.g. a

    driver's licence), entrance security, etc. The main idea of using PCA for face recognition

    is to express the large 1-D vector of pixels constructed from a 2-D facial image in the

    compact principal components of the feature space. This can be called eigenspace

    projection. The eigenspace is calculated by identifying the eigenvectors of the covariance

    matrix derived from a set of facial images (vectors).
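    The eigenspace projection described above can be sketched on toy data. This is an illustrative Python/NumPy sketch rather than the MATLAB implementation this report describes; the data values are hypothetical, with each row standing in for one flattened image.

```python
import numpy as np

# Each row is one "image" flattened to a 1-D vector of pixels.
data = np.array([[2.0, 0.0, 1.0],
                 [4.0, 0.1, 2.9],
                 [6.0, -0.1, 5.1],
                 [8.0, 0.0, 7.0]])

mean = data.mean(axis=0)
centered = data - mean
cov = centered.T @ centered / (len(data) - 1)   # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)          # eigh: ascending eigenvalues
order = np.argsort(eigvals)[::-1]               # reorder, largest first
components = eigvecs[:, order]

# Project each centered image onto the single strongest principal component.
weights = centered @ components[:, :1]
print(weights.shape)  # (4, 1): one scalar weight per image
```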

    3.2 OUTLINE OF THE PROPOSED FACE RECOGNITION SYSTEM

    The proposed face recognition system passes through three main phases during a

    face recognition process. Three major functional units are involved in these phases and


    they are depicted in Figure 3.1. The characteristics of these phases in conjunction with

    the three functional units are given below:

    Figure 3.1: Functional block diagram of the proposed face recognition system

    3.3 FACE LIBRARY FORMATION PHASE

    In this phase, the acquisition and the pre-processing of the face images that are

    going to be added to the face library are performed. Face images are stored in a face

    library in the system. We call this face database a "face library" because at the moment, it

    does not have the properties of a relational database. Every action, such as training set or

    eigenface formation, is performed on this face library. The face library is

    initially empty; in order to start the face recognition process, it has to be filled with face images. The proposed face recognition system

    operates on image files of any resolution. In order to perform image size conversions and

    enhancements on face images, there exists the "pre-processing" module. This module

    automatically converts every face image to 200 x 180 (if necessary) and based on user

    request, it can modify the dynamic range of face images (histogram equalization) in order

    to improve face recognition performance. After acquisition and pre-processing, the face

    image under consideration is added to the face library. Each face is represented by two

    entries in the face library: One entry corresponds to the face image itself (for the sake of


speed, no data compression is performed on the face image that is stored in the face library) and the other corresponds to the weight vector associated with that face image.

    Weight vectors of the face library members are empty until a training set is chosen and

    eigenfaces are formed.
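The histogram-equalization step performed by the pre-processing module can be sketched as follows. This is a NumPy sketch for illustration on a synthetic low-contrast image; the report's own implementation is in MATLAB, where `histeq` performs the equivalent operation.

```python
import numpy as np

def histogram_equalize(img):
    """Spread an 8-bit grayscale image's intensities over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A synthetic low-contrast image occupying only levels 100-150:
img = (np.arange(200 * 180).reshape(200, 180) % 51 + 100).astype(np.uint8)
out = histogram_equalize(img)
print(out.min(), out.max())  # 0 255: dynamic range expanded to the full scale
```

After equalization the occupied gray levels are stretched so that the darkest pixel maps to 0 and the brightest to 255, which is the "modify the dynamic range" behaviour described above.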

    3.4 TRAINING PHASE

After adding face images to the initially empty face library, the system is ready to perform training set and eigenface formation. The face images that are going to be in the training set are chosen from the entire face library. Since the face library entries are already normalized, no further pre-processing is necessary at this step. After choosing the training set, eigenfaces are formed and stored for later use. Eigenfaces are calculated from the training set, keeping only the M images that correspond to the highest eigenvalues. These M eigenfaces define the M-dimensional "face space". As new faces are experienced, the eigenfaces can be updated or recalculated. The corresponding distribution in the M-dimensional weight space is calculated for each face library member by projecting its face image onto the "face space" spanned by the eigenfaces. The weight vectors of the face library members, which were initially empty, have now been filled in, and the system is ready for the recognition process.

Once a training set has been chosen, it is not possible to add new members to the face library with the conventional method presented in "phase 1", because the system does not know whether the item already exists in the face library or not; a library search must be performed.

    3.5 RECOGNITION AND LEARNING PHASE

After choosing a training set and constructing the weight vectors of the face library members, the system is ready to perform the recognition process. The user initiates recognition by choosing a face image. Based on the user request and the acquired image size, pre-processing steps are applied to normalize the acquired image to the face library specifications (if necessary). Once the image is normalized, its weight vector is constructed with the help of the eigenfaces that were stored during the training phase. After obtaining the weight vector, it is compared with the weight vector of every face library member within a user-defined "threshold". If at least one face library member is similar to the acquired image within that threshold, then the face


    image is classified as "known". Otherwise, a miss has occurred and the face image is

    classified as "unknown".

    3.6 MATHEMATICS OF PCA

A 2-D facial image can be represented as a 1-D vector by concatenating each row (or column) into one long, thin vector. Suppose we have M vectors of size N (= rows of image x columns of image) representing a set of sampled images, where the p_j's represent the pixel values:

x_i = [p_1 ... p_N]^T,   i = 1, ..., M.

The images are mean-centered by subtracting the mean image from each image vector. Let m represent the mean image,

m = (1/M) * sum_{i=1}^{M} x_i,

and let w_i be defined as the mean-centered image,

w_i = x_i - m.

Our goal is to find a set of e_i's which have the largest possible projection onto each of the w_i's. We wish to find a set of M orthonormal vectors e_i for which the quantity

lambda_i = (1/M) * sum_{n=1}^{M} (e_i^T w_n)^2

is maximized, subject to the orthonormality constraint

e_l^T e_k = delta_lk.

It has been shown that the e_i's and lambda_i's are given by the eigenvectors and eigenvalues of the covariance matrix

C = W W^T,


where W is a matrix composed of the column vectors w_i placed side by side. The size of C is N x N, which could be enormous. For example, images of size 64 x 64 create a covariance matrix of size 4096 x 4096. It is not practical to solve for the eigenvectors of C directly. A common theorem in linear algebra states that the vectors e_i and scalars lambda_i can be obtained by solving for the eigenvectors and eigenvalues of the M x M matrix W^T W. Let d_i and mu_i be the eigenvectors and eigenvalues of W^T W, respectively:

W^T W d_i = mu_i d_i.

Multiplying both sides on the left by W gives

W W^T (W d_i) = mu_i (W d_i),

which means that the first M-1 eigenvectors e_i and eigenvalues lambda_i of W W^T are given by W d_i and mu_i, respectively. W d_i needs to be normalized in order to equal e_i. Since we only sum up a finite number of image vectors, M, the rank of the covariance matrix cannot exceed M-1 (the -1 comes from the subtraction of the mean vector m).

    The eigenvectors corresponding to non-zero eigenvalues of the covariance matrix

    produce an orthonormal basis for the subspace within which most image data can be

    represented with a small amount of error. The eigenvectors are sorted from high to low

    according to their corresponding eigenvalues. The eigenvector associated with the largest

    eigenvalue is one that reflects the greatest variance in the image. That is, the smallest

eigenvalue is associated with the eigenvector that finds the least variance. The eigenvalues decrease in an exponential fashion, meaning that roughly 90% of the total variance is contained in the first 5% to 10% of the dimensions.
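The derivation above, mean-centering, taking eigenvectors of the small M x M matrix W^T W, and mapping them back through W, can be sketched as follows. This is a NumPy sketch on random toy data, not the report's MATLAB code; the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 360, 10                        # N pixels per image, M training images (toy sizes)
X = rng.standard_normal((N, M))       # each column is one vectorized face image

m = X.mean(axis=1, keepdims=True)     # mean image
W = X - m                             # mean-centered images w_i as columns

# Eigenvectors of the small M x M matrix W^T W instead of the huge N x N matrix W W^T.
mu, D = np.linalg.eigh(W.T @ W)       # eigh returns eigenvalues in ascending order
order = np.argsort(mu)[::-1]          # sort high -> low
mu, D = mu[order], D[:, order]

E = W @ D                             # map back: e_i = W d_i ...
E /= np.linalg.norm(E, axis=0)        # ... then normalize each column

# The e_i are eigenvectors of the covariance W W^T with the same eigenvalues:
assert np.allclose((W @ W.T) @ E[:, 0], mu[0] * E[:, 0])
```

The key saving is that `eigh` runs on a 10 x 10 matrix rather than a 360 x 360 one (or 36,000 x 36,000 for real 200 x 180 images), exactly the trick described in the text.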

A facial image can be projected onto M' (<< M) dimensions by computing the weights v_i = e_i^T w, where v_i is the i-th coordinate of the facial image in the new space; these coordinates are the principal components. The vectors e_i are themselves images and indeed look like faces, so the projection describes the contribution of each eigenface in representing the facial image, treating


the eigenfaces as a basis set for facial images. The simplest method for determining which face class provides the best description of an input facial image is to find the face class k that minimizes the Euclidean distance

epsilon_k = || Omega - Omega_k ||,

where Omega_k is a vector describing the k-th face class. If epsilon_k is less than some predefined threshold, the face is classified as belonging to class k.

    3.7 USING EIGENFACES TO CLASSIFY A FACE IMAGE

The eigenface images calculated from the eigenvectors of L span a basis set with which to describe face images. Sirovich and Kirby evaluated a limited version of this framework on an ensemble of M = 115 images of Caucasian males digitized in a controlled manner, and found that 40 eigenfaces were sufficient for a very good description of face images.

In practice, a smaller M' can be sufficient for identification, since accurate reconstruction of the image is not a requirement. Based on this idea, the proposed face recognition system lets the user specify the number of eigenfaces (M') that will be used in the recognition. For maximum accuracy, the number of eigenfaces should be equal to the number of images in the training set; however, it was observed that, for a training set of fourteen face images, seven eigenfaces were enough for a sufficient description of the training set members.

In this framework, identification becomes a pattern recognition task. The eigenfaces span an M'-dimensional subspace of the original N^2-dimensional image space. The M' significant eigenvectors of the L matrix are chosen as those with the largest associated eigenvalues.

A new face image Gamma is transformed into its eigenface components (projected onto "face space") by a simple operation,

w_k = u_k^T (Gamma - Psi)   for k = 1, ..., M',

where Psi is the mean face and u_k is the k-th eigenface.


The weights form a feature vector

Omega^T = [w_1 w_2 ... w_M'],

that describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The feature vector is then used in a standard pattern recognition algorithm to find which of a number of predefined face classes, if any, best describes the face. The face classes Omega_i can be calculated by averaging the results of the eigenface representation over a small number of face images (as few as one) of each individual. In the proposed face recognition system, face classes contain only one representation of each individual.

Classification is performed by comparing the feature vectors of the face library members with the feature vector of the input face image, requiring the Euclidean distance between the two to be smaller than a user-defined threshold. If the comparison falls within the threshold, the face image is classified as known; otherwise it is classified as unknown.
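The projection and threshold comparison can be sketched as follows. This is a NumPy sketch with synthetic orthonormal "eigenfaces" and an assumed toy library; the variable names are illustrative, not the report's.

```python
import numpy as np

rng = np.random.default_rng(1)
N, Mp = 360, 7                                       # pixels per image, M' eigenfaces
U, _ = np.linalg.qr(rng.standard_normal((N, Mp)))    # orthonormal eigenfaces u_k as columns
psi = rng.standard_normal(N)                         # mean face

def feature_vector(gamma):
    """w_k = u_k^T (gamma - psi): project a face onto the M'-dimensional face space."""
    return U.T @ (gamma - psi)

# Library of known faces and their stored feature vectors.
library = [psi + U @ rng.standard_normal(Mp) for _ in range(5)]
stored = np.array([feature_vector(g) for g in library])

def classify(gamma, threshold):
    """Return index of nearest library member, or None ("unknown")."""
    omega = feature_vector(gamma)
    dists = np.linalg.norm(stored - omega, axis=1)
    k = int(np.argmin(dists))
    return k if dists[k] < threshold else None

print(classify(library[2], threshold=1e-6))                      # 2: a library face matches itself
print(classify(psi + U @ (stored[0] + 100.0), threshold=1.0))    # None: too far from every class
```

The threshold plays exactly the role described above: below it, the nearest class wins ("known"); above it, the image is rejected ("unknown").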

    3.8 REBUILDING A FACE IMAGE WITH EIGENFACES

A face image can be approximately reconstructed (rebuilt) by using its feature vector and the eigenfaces as

Gamma' = Psi + Phi_f,

where

Phi_f = sum_{i=1}^{M'} w_i u_i

is the projected image.


The face image under consideration is rebuilt by adding each eigenface, with a contribution of w_i, to the average of the training set images. The degree of fit, or "rebuild error ratio", can be expressed by means of the Euclidean distance between the original and the reconstructed face image.
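The rebuild step and the rebuild error can be sketched as follows (NumPy, synthetic data; not the report's MATLAB code):

```python
import numpy as np

rng = np.random.default_rng(2)
N, Mp = 360, 7                                       # pixels per image, M' eigenfaces
U, _ = np.linalg.qr(rng.standard_normal((N, Mp)))    # orthonormal eigenfaces
psi = rng.standard_normal(N)                         # mean of the training set images

gamma = rng.standard_normal(N)                       # face image to rebuild
w = U.T @ (gamma - psi)                              # its feature vector

# Rebuild: the mean face plus each eigenface weighted by its w_i.
gamma_rebuilt = psi + U @ w

# Rebuild error as the Euclidean distance between original and reconstruction.
err = np.linalg.norm(gamma - gamma_rebuilt)
print(err)  # nonzero: M' eigenfaces capture only part of the image
```

The error is exactly the energy of `gamma` outside the span of the M' eigenfaces, which is why it grows when the training set (and hence the subspace) represents the input poorly.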

It has been observed that the rebuild error ratio increases as the training set members differ heavily from each other. This is due to the addition of the average face image: when the members differ from each other (especially in image background), the average face image becomes messier, and this increases the rebuild error ratio.

There are four possibilities for an input image and its pattern vector:

1. near face space and near a face class;
2. near face space but not near a known face class;
3. distant from face space and near a face class;
4. distant from face space and not near a known face class.

In the first case, an individual is recognized and identified. In the second case, an unknown individual is presented. The last two cases indicate that the image is not a face image. Case three typically shows up as a false classification; this false classification can be avoided in this system by also thresholding the distance from face space with theta_k, where theta_k is a user-defined threshold for the "faceness" of the input face images belonging to the k-th face class.


3.9 SUMMARY OF THE EIGENFACE RECOGNITION PROCEDURE

    The eigenfaces approach to face recognition can be summarized in the following steps:

1. Form a face library that consists of the face images of known individuals.

2. Choose a training set that includes a number of images (M) for each person, with some variation in expression and in the lighting.

3. Calculate the M x M matrix L, find its eigenvectors and eigenvalues, and choose the M' eigenvectors with the highest associated eigenvalues.

4. Combine the normalized training set of images to produce the M' eigenfaces, and store these eigenfaces for later use.

5. For each member in the face library, compute and store a feature vector.

6. Choose a threshold that defines the maximum allowable distance from any face class. Optionally, choose a threshold that defines the maximum allowable distance from face space.

7. For each new face image to be identified, calculate its feature vector and compare it with the stored feature vectors of the face library members.

8. If the comparison falls within the threshold for at least one member, classify the face image as "known"; otherwise a miss has occurred and classify it as "unknown".


    CHAPTER 4

    PROJECT IMPLEMENTATION

    4.1 FLOWCHART

Training branch:

Start -> Acquire a training set of face images -> Image size reduction and grayscale conversion -> Face detection -> Principal Component Analysis, finding eigenfaces -> Projected training images (image feature vectors)

Test branch:

Acquire test image -> Image size reduction and grayscale conversion -> Face detection -> Projected test image (test image feature vector)

The two branches meet at a comparison step: the test image feature vector is compared against the projected training images, yielding Match or No Match; the result is shown and the process stops.


4.2 SOFTWARE REQUIREMENTS

Matlab R2011b

    4.3DATA ACQUISITION

Here a directory dialog box is opened using Matlab commands, to select the folder where the training set of images is stored. Because any folder can be chosen from the dialog box, the folder path does not have to be hard-coded by the user. This dialog box is shown below:

    Figure 4.1 Selecting training folder

    4.4 TRAINING SET

The training set defines the subspace in which recognition will be performed. After training, the recognition system develops an efficient separability among the training images using the PCA technique. An assumption is made on the training set: it can accurately represent each image in the gallery, which means that the separability of the system also applies to the gallery. The training set size determines the dimensionality of the subspace. The more sparsely the gallery images are distributed, the higher the separability the recognition system can achieve, and generally, increasing the dimensionality makes the images more sparsely distributed. Hence, there are two schemes for selecting training sets to enhance recognition performance: one is to make the training set more representative of the gallery; the other is to enlarge the training set size in order to increase the dimensionality of the subspace.

Let the training set of images be Gamma_1, Gamma_2, Gamma_3, ..., Gamma_n.


    Figure 4.2 Training Set of Image

    4.5 Pre Processing Techniques

    4.5.1 Image Size Reduction

The images are read one by one and reduced to a size of 200 x 180, both to increase the speed of processing and to avoid software hangs and errors.

4.5.2 Grayscale Conversion

All input images are then converted into grayscale for the purpose of normalization and error reduction.
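Assuming RGB input, the conversion can be sketched with the standard luminance weights, which are the same coefficients MATLAB's `rgb2gray` applies. This is a NumPy sketch for illustration, not the report's code.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB image (H x W x 3, uint8) to grayscale using the
    standard luminance weights (as used by MATLAB's rgb2gray)."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return (rgb.astype(float) @ weights).round().astype(np.uint8)

rgb = np.zeros((200, 180, 3), dtype=np.uint8)
rgb[..., 0] = 255                       # a pure red 200 x 180 image
gray = to_grayscale(rgb)
print(gray[0, 0])                       # 76  (255 * 0.2989, rounded)
```

Weighting the channels rather than averaging them matches human brightness perception, so faces keep their relative contrast after conversion.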

  • 7/28/2019 6.Report Face Recognition

    27/45

    A Matlab based Face Recognition using PCA 2013

    Dept. of ECE, BGSIT Page 27

Figure 4.3: Grayscale conversion of all images

    4.5.3 Normalized Training Set

    In image processing, normalization is a process that changes the range of pixel intensity

    values. Applications include photographs with poor contrast due to glare, for example.

    Normalization is sometimes called contrast stretching. In more general fields of data

    processing, such as digital signal processing, it is referred to as dynamic range expansion.

    The purpose of dynamic range expansion in the various applications is usually to bring

    the image, or other type of signal, into a range that is more familiar or normal to the

    senses, hence the term normalization. Often, the motivation is to achieve consistency in

    dynamic range for a set of data, signals, or images to avoid mental distraction or fatigue.

    For example, a newspaper will strive to make all of the images in an issue share a similar

    range of grayscale.

For example, if the intensity range of the image is 50 to 180 and the desired range is 0 to 255, the process entails subtracting 50 from each pixel intensity, making the range 0 to 130. Each pixel intensity is then multiplied by 255/130, making the range 0 to 255.
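That numeric example maps directly to code (a NumPy sketch):

```python
import numpy as np

def contrast_stretch(img, new_min=0, new_max=255):
    """Linearly rescale pixel intensities to the desired range."""
    old_min, old_max = img.min(), img.max()
    scaled = (img.astype(float) - old_min) * (new_max - new_min) / (old_max - old_min)
    return (scaled + new_min).round().astype(np.uint8)

img = np.array([[50, 115, 180]], dtype=np.uint8)   # intensity range 50-180
out = contrast_stretch(img)
print(out)   # [[  0 128 255]]: subtract 50, then multiply by 255/130
```

With `old_min = 50` and `old_max = 180`, the function reproduces the steps in the text exactly: shift the range to start at 0, then scale it to span 0-255.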

    http://en.wikipedia.org/wiki/Image_processinghttp://en.wikipedia.org/wiki/Pixelhttp://en.wikipedia.org/wiki/Contrast_(vision)http://en.wikipedia.org/wiki/Digital_signal_processinghttp://en.wikipedia.org/wiki/Dynamic_rangehttp://en.wikipedia.org/wiki/Grayscalehttp://en.wikipedia.org/wiki/Grayscalehttp://en.wikipedia.org/wiki/Dynamic_rangehttp://en.wikipedia.org/wiki/Digital_signal_processinghttp://en.wikipedia.org/wiki/Contrast_(vision)http://en.wikipedia.org/wiki/Pixelhttp://en.wikipedia.org/wiki/Image_processing
  • 7/28/2019 6.Report Face Recognition

    28/45

    A Matlab based Face Recognition using PCA 2013

    Dept. of ECE, BGSIT Page 28

    Figure 4.4: Normalized Training Set

    4.6 MEAN

It is defined as the arithmetic average of the training image vectors at each pixel point, and its size is (N x 1). The mean face m is defined by

m = (1/M) * sum_{i=1}^{M} Gamma_i.

Subtracting the mean face from each training image,

w_i = Gamma_i - m,

gives the difference of the training image from the mean image (size N x 1).


    Figure 4.5: Mean Image

    4.7 COVARIANCE MATRIX

    Definition: it is the measure of how much two random variables vary together (as

    distinct from variance, which measures how much a single variable varies). If two

    variables tend to vary together (that is, when one of them is above its expected value, then

    the other variable tends to be above its expected value too), then the covariance between

    the two variables will be positive. On the other hand, if when one of them is above its

    expected value, the other variable tends to be below its expected value, then the

    covariance between the two variables will be negative.

An important property of the eigenface method is how it obtains the eigenvectors of the covariance matrix. For a face image of size (N_x x N_y) pixels, the covariance matrix is of size (P x P), P being N_x x N_y. This covariance matrix is very hard to work with directly, because its huge dimension causes computational complexity. The eigenface method instead calculates the eigenvectors of the (M_t x M_t) matrix, M_t being the number of face images, and from them obtains the eigenvectors of the (P x P) matrix. Thus, it


is possible to obtain the eigenvectors of the large matrix X = W W^T by using the eigenvectors of the small matrix Y = W^T W (see Section 3.6). A matrix of size (M_t x M_t) is utilized instead of a matrix of size (P x P) (i.e. [{N_x x N_y} x {N_x x N_y}]). This formulation brings substantial computational efficiency.

    4.8 EIGENVECTOR AND EIGENVALUE

Definition: Let A be an n x n matrix. A real number lambda is called an eigenvalue of the matrix A if and only if there is an n-dimensional nonzero vector v for which

A v = lambda v.

Eigenvectors can be considered as the vectors pointing in the direction of maximum variance, and the variance an eigenvector represents is directly proportional to its eigenvalue (i.e. the larger the eigenvalue, the larger the variance the eigenvector represents). Hence, the eigenvectors are sorted with respect to their corresponding eigenvalues: the eigenvector having the largest eigenvalue is marked as the first eigenvector, and so on. In this manner, the most generalizing eigenvector comes first in the eigenvector matrix.
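The definition and the sorting step can be illustrated on a small symmetric matrix (a NumPy sketch):

```python
import numpy as np

# A v = lambda v for a small symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eigh(A)            # eigh returns eigenvalues in ascending order

# Sort from high to low so the eigenvector with the largest eigenvalue
# (the direction of maximum variance) comes first.
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

print(lam)                            # [3. 1.]
assert np.allclose(A @ V[:, 0], lam[0] * V[:, 0])   # A v = lambda v holds
```

After sorting, column 0 of `V` is the "first eigenvector" in the sense used above: the one carrying the most variance.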



    Figure 4.6 Eigenvectors


    Figure 4.7: Eigenvalues

4.9 EIGENFACES

Eigenfaces are a set of eigenvectors used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Matthew Turk and Alex Pentland beginning in 1987, and is considered the first facial recognition technology that worked. These eigenvectors are derived from the covariance matrix of the probability distribution of the high-dimensional vector space of possible human faces. To generate a set of eigenfaces, a large set of digitized images of human faces, taken under the same lighting conditions, is normalized to line up the eyes and mouths. The images are then all resampled at the same pixel resolution. Eigenfaces can be extracted from the image data by means of a mathematical tool called Principal Component Analysis (PCA).


    Figure 4.8: Eigenfaces

    4.10 SELECTING THE TEST IMAGE

The test image is likewise selected through a dialog box. From here, any image file on the hard disk can be selected instantly, without hard-coding the image name and format.

Figure 4.9: Selecting Test Image

    4.11 CLASSIFICATION

The process of classifying a new (unknown) face Gamma_new into one of the classes (known faces) can be carried out in two ways:

    4.11.1 Bayesian classifier


The Bayesian classifier is one of the most widely applied statistical approaches in standard pattern classification. It assumes that the classification problem is posed in probabilistic terms, and that all the relevant probability values are known. The decision process in statistical pattern recognition can be explained as follows: first, the new image is transformed into its eigenface components; the resulting weights form the weight vector Omega_new^T.

    Figure 4.10: Face Space

    4.11.2. Nearest Mean Classifier

The Nearest Mean Classifier is an approach analogous to the Nearest Neighbor Rule (NN-Rule). In the NN-Rule, after the classification system is trained on the samples, a test datum fed into the system is classified into the class of the nearest training sample in the data space, with respect to Euclidean distance. In the Nearest Mean Classifier, the Euclidean distance from each class mean is instead computed to decide the class of the test datum. In mathematical terms, the Euclidean distance between the test sample x and each face class mean m_i is

d_i = || x - m_i ||,


where x is a d-dimensional input vector.

After computing the distance to each class mean, the test datum is classified into the class with minimum Euclidean distance.
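The nearest mean rule can be sketched in a few lines (NumPy, with toy 2-D feature vectors; the class data is illustrative):

```python
import numpy as np

def nearest_mean_classify(x, class_means):
    """Assign x to the class whose mean has minimum Euclidean distance."""
    dists = np.linalg.norm(class_means - x, axis=1)
    return int(np.argmin(dists))

# Two face classes, each mean computed from a few training feature vectors.
class_a = np.array([[1.0, 1.0], [1.2, 0.8]])
class_b = np.array([[5.0, 5.0], [4.8, 5.2]])
means = np.array([class_a.mean(axis=0), class_b.mean(axis=0)])

print(nearest_mean_classify(np.array([1.1, 1.0]), means))  # 0
print(nearest_mean_classify(np.array([4.9, 5.0]), means))  # 1
```

Compared to the NN-Rule, only one distance per class is computed (to the mean), rather than one per training sample.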

    4.12 WEIGHT AND EUCLIDEAN DISTANCE

    Figure 4.11: Weight & Euclidean Distance

4.13 Matlab Output

    Based on the least value of the Euclidean distance, face recognition is carried out.


    Figure 4.12: Output (reconstructed image)

The above output is obtained when we select a test image of the person that is itself not present in the database of images; the algorithm still finds the correct match.

    CHAPTER 5

FACE DETECTION

    5.1 INTRODUCTION

    Detecting human faces automatically is becoming a very important and

    challenging task in computer vision research. The significance of the problem can be

    easily illustrated by its vast applications, as face detection is the first step towards

    intelligent vision-based human computer interaction. Face recognition, face tracking,

pose estimation and expression recognition all require robust face detection algorithms for successful implementation. Segmenting facial regions in images or video sequences

    can also lead to more efficient coding schemes [1], content-based representation

    (MPEG4) [2], three-dimensional human face model fitting, image enhancement and

    audio-visual speech integration [3]. Although a major area of interest, many problems

    still need to be solved, as segmenting a human face successfully depends on many

    parameters such as skin-tones under varying lighting conditions, complexity level of the

    background in the image to be segmented and application for which the segmentation is


    required. Inherent differences due to the existence of different ethnic backgrounds,

    gender and age groups also complicate the face detection paradigm.

    With so many new applications, the development of faster and more robust face

    detection algorithms has become a major area of research over the last few years.

    Techniques based on knowledge of rules that capture the relationship between facial

    features, feature invariant approaches that tend to define structural features that exist

    even when the pose, viewpoint and lighting condition vary, and template matching

    methods that use several standard templates to describe a face, are all being thoroughly

    investigated. A comprehensive survey on the methods used to detect faces in images can

be found in [4]. However, many of the successful claims reported in the literature either use data sets that are too small or test on images that are not standard, and so can show biased results favouring one method over another. The comparison difficulties

    are due to the fact that much less attention has been paid to the development of a

    standard face detection database that can be used as a benchmark to test the performance

    of these new algorithms.

    5.2 Definition and relation to other tasks

    Face detection is a computer technology that determines the locations and sizes

    of human faces in arbitrary (digital) images. It detects facial features and ignores anything

    else, such as buildings, trees and bodies.

    Face detection can be regarded as a specific case of object-class detection. In

    object-class detection, the task is to find the locations and sizes of all objects in an image

    that belong to a given class. Examples include upper torsos, pedestrians, and cars.

Face detection can be regarded as a more general case of face localization. In face

    localization, the task is to find the locations and sizes of a known number of faces

    (usually one). In face detection, one does not have this additional information.

    Early face-detection algorithms focused on the detection of frontal human faces,

    whereas newer algorithms attempt to solve the more general and difficult problem of

    multi-view face detection. That is, the detection of faces that are either rotated along the

    axis from the face to the observer (in-plane rotation), or rotated along the vertical or left-


    right axis (out-of-plane rotation), or both. The newer algorithms take into account

    variations in the image or video by factors such as face appearance, lighting, and pose.

    5.3 FACE DETECTION USING PCA

In our experiments we use a 10-dimensional principal subspace (M = 10). The projection matrix P_M is calculated by an eigenspace decomposition using a set of luminance face test images. To localize a facial region in a new image, the error criterion must be computed for each pixel position, resulting in a distance map. For the moment, we use a simple detector based only on the "distance from face space" (DFFS). The global minimum of the distance map is then selected as the best match. When using M eigenvectors, the computation of the DFFS requires M + 1 correlations (with the M "eigenfaces" and the mean image) and an additional energy calculation. In the following, detection results on grey-level images are presented, pointing out the problems caused by the image background.

Fig. 5.1 shows the DFFS map for the luminance image salesman #1. The global minimum is marked by the white circle. Though there is a local minimum at the true face position, the best match lies in the background region, leading to a false detection. The DFFS is high for non-facial image regions with strong changes in intensity (e.g. the area that includes parts of the light-colored shirt and the dark background). On the other hand, the DFFS becomes relatively small in non-facial regions with little variance in intensity, like parts of the background at the right side of the test image. This is due to the fact that an image pattern that can be modeled by noise is better represented by the eigenfaces than a non-facial image pattern containing a strong edge. Therefore, detection based on the DFFS becomes difficult even in images with a simple background if the face region does not cover the main part of the test image. We now apply a principal component analysis to the skin probability image defined by equation (2), using the same projection matrix. The DFFS map for the skin probability image shown in fig. 2 is displayed in fig. 4a. The true face region is characterized by a local minimum. Similar to the luminance case, the error criterion is also low in background regions with little variance. These regions represent non-skin-colored background, so the probability image is near zero in these areas.
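The DFFS computation itself can be sketched as follows. This is a NumPy sketch with a random orthonormal basis standing in for the eigenfaces, not the authors' implementation; in the real detector it is evaluated at every pixel position to form the distance map.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 400, 10                                       # patch size (vectorized), subspace dimension
P, _ = np.linalg.qr(rng.standard_normal((N, M)))     # orthonormal projection basis (stand-in eigenfaces)
mean_face = rng.standard_normal(N)

def dffs(patch):
    """Distance From Face Space: energy of the residual after projecting
    the mean-subtracted patch onto the M-dimensional principal subspace."""
    phi = patch - mean_face
    proj = P @ (P.T @ phi)                           # component inside face space
    return float(np.sum((phi - proj) ** 2))          # energy of what remains outside

face_like = mean_face + P @ rng.standard_normal(M)   # lies entirely in the subspace
print(dffs(face_like))                               # ~0: well explained by the eigenfaces
print(dffs(rng.standard_normal(N)) > dffs(face_like))  # True: random patch is far from face space
```

A patch that the eigenfaces explain well has near-zero DFFS, which is exactly why low-variance (noise-like) background regions can also score low, the failure mode discussed above.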


Figure 5.1: Detected face region

In 5.1 the eigenspace decomposition is used to estimate the complete probability density for a certain object class (e.g. the face class). Assuming a normal distribution, the estimate for the Mahalanobis distance is also a weighted sum of the DFFS and the DIFS, similar to the equation above. A main difference is that our scaling factor c is much smaller than the one used there, because the DIFS provides more information when the skin probability image is used instead of the luminance image. The distance map using the combined error criterion is shown in fig. 5.1. The global minimum (marked by the white circle) lies in the true face region, and the face is detected correctly. Fig. 5.1 shows the detected face region superimposed on the luminance component. To detect faces at multiple scales, the detection algorithm is performed on several scaled versions of the skin probability image, and the global minimum of the resulting multi-scale distance maps is selected.


Figure 5.2: Detection of more than one face

    Detection results for several images of MPEG test sequences are shown in fig.
    5.2. The global minimum detection is based on the assumption that the image contains
    exactly one face. To detect several faces and reject non-facial images, a threshold can be
    introduced which allows a trade-off between false detection and false rejection. If the error
    criterion at a certain spatial position is less than this predetermined threshold, the
    subimage located at this position is classified as a face. To prevent overlapping regions,
    only the global minimum is selected in the first step. For the search of the second (local)
    minimum, all spatial positions which lead to overlapping regions are discarded. This
    procedure is repeated until no position with error less than the threshold can be found
    which is not already covered by another detected region. The result of a multiple face
    detection is given in fig. 5.2.
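    The greedy threshold-and-overlap procedure above can be sketched in one dimension. This is a Python sketch with a hypothetical error profile and an assumed region width; the report's implementation is in MATLAB and operates on 2-D distance maps.

    ```python
    # Sketch of multiple face detection: accept positions whose error is below a
    # threshold, always taking the current best (lowest-error) position first and
    # discarding candidates that would overlap an already detected region.

    def detect_faces(errors, threshold, width):
        """Greedy selection of non-overlapping minima below the threshold."""
        detected = []
        candidates = sorted(range(len(errors)), key=lambda i: errors[i])
        for i in candidates:
            if errors[i] >= threshold:
                break  # candidates are sorted, so the rest are all worse
            if all(abs(i - j) >= width for j in detected):
                detected.append(i)
        return sorted(detected)

    # Hypothetical 1-D error profile with two clear minima (positions 2 and 7)
    errors = [9.0, 5.0, 1.0, 4.0, 8.0, 6.0, 3.0, 0.5, 2.0, 9.0]
    faces = detect_faces(errors, threshold=3.5, width=3)
    ```

    Raising the threshold trades false rejections for false detections, exactly the trade-off described in the text.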

    5.4 APPLICATIONS

    1. Biometrics

    2. A Facial Recognition System

    3. Video Surveillance

    4. Human Computer Interface

    5. Image Database Management

    6. Webcams, etc.

    Face detection is used in biometrics, often as a part of (or together with) a facial
    recognition system. It is also used in video surveillance, human-computer interfaces and
    image database management. Some recent digital cameras use face detection for
    autofocus [1]. Face detection is also useful for selecting regions of interest in photo
    slideshows that use a pan-and-scale Ken Burns effect.

    Face detection is gaining the interest of marketers. A webcam can be integrated into a
    television and detect any face that walks by. The system then calculates the race, gender,
    and age range of the face. Once the information is collected, a series of advertisements
    can be played that are specific to the detected race/gender/age.

    With increasing research in the area of face segmentation, new methods for detecting human faces automatically are being developed. However, less attention is


    being paid to the development of a standard face image database to evaluate these new

    algorithms. This paper recognizes the need for a colour face image database and creates
    such a database for direct benchmarking of automatic face detection algorithms. The
    database has two parts. Part one contains colour pictures of faces having a high degree of
    variability in scale, location, orientation, pose, facial expression and lighting conditions,
    while part two has manually segmented results for each of the images in part one of the
    database. This allows direct comparison of algorithms. These images are acquired from a
    wide variety of sources such as digital cameras, pictures scanned in with a photo-scanner
    and the World Wide Web. The database is intended for distribution to researchers. Details
    of the face database, such as the development process and file information, along with a
    common criterion for performance evaluation measures, are also discussed in this paper.


    CHAPTER 6

    RESULTS AND ANALYSIS

    6.1 Assessment of Success Rate

    Here we have assessed the dependency of the output on the number of training images
    (per individual).

    Total Images | Training Images (per individual) | Test Images (per individual) | Success Rate (%)
    50           | 1                                | 5                            | 72.3
    50           | 5                                | 5                            | 87
    50           | 10                               | 1                            | 88

    6.2 Advantages

    1. It is simple and fast, and it needs only a small amount of memory; PCA basically
    performs dimensionality reduction.

    2. The database representation is smaller, because we store the training images only in
    the form of their projections on the reduced basis.

    3. Noise is reduced, because we choose the maximum-variation basis, so features such
    as backgrounds with small variations are automatically ignored.

    4. It is quite efficient and accurate in terms of success rate.
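    The reduced-basis storage idea mentioned above can be sketched as follows. This is a minimal Python sketch in which the 4-pixel "images", the mean, and the orthonormal basis are all assumed toy data; the report's own code is MATLAB.

    ```python
    # Sketch of the storage idea: instead of keeping full images, keep only their
    # coefficients (projections) on a reduced orthonormal basis, and reconstruct
    # an approximation when needed. All data below is hypothetical.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def project(image, mean, basis):
        """Coefficients of (image - mean) on each basis vector."""
        centered = [p - m for p, m in zip(image, mean)]
        return [dot(centered, e) for e in basis]

    def reconstruct(coeffs, mean, basis):
        """Rebuild an approximation of the image from its coefficients."""
        out = list(mean)
        for w, e in zip(coeffs, basis):
            out = [o + w * ei for o, ei in zip(out, e)]
        return out

    # Toy 4-pixel "images" and a 2-vector orthonormal basis (assumed data)
    mean = [10.0, 10.0, 10.0, 10.0]
    basis = [[0.5, 0.5, 0.5, 0.5], [0.5, -0.5, 0.5, -0.5]]
    img = [12.0, 8.0, 12.0, 8.0]

    w = project(img, mean, basis)        # only 2 numbers stored instead of 4
    approx = reconstruct(w, mean, basis)
    ```

    Here the image happens to lie in the span of the basis, so the reconstruction is exact; in general only an approximation is recovered, which is the dimensionality reduction described above.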

    6.3 Limitations

    1. A mainly frontal face is required.

    2. Multiscaling is a problem.

    3. Background variations and lighting conditions still pose a difficulty.

    6.4 Application Areas

    Access Control

    1. ATM

    2. Airport

    3. Lockers

    Entertainment

    1. Video Game

    2. Virtual Reality

    3. Training Programs


    4. Human-Computer-Interaction

    5. Human-Robotics

    6. Family Photo Album

    Smart Cards

    1. Driver's Licenses

    2. Passports

    3. Voter Registration

    4. Welfare Fraud

    Information Security

    1. TV Parental control

    2. Desktop Logon

    3. Personal Device Logon

    4. Database Security

    5. Medical Access

    6. Internet Access

    Law Enforcement and Surveillance

    1. Advanced Video surveillance

    2. Drug Trafficking

    3. Portal Control

    Multimedia Management

    1. Face-based database search

    Some Commercial Applications

    1. Motion Capture for movie special effects.

    2. Face Recognition biometric systems.


    3. Home Robotics.

    CONCLUSION

    We are currently extending the system to deal with a range of aspects (other than
    full frontal views) by defining a small number of classes for each known person,
    corresponding to characteristic views. Because recognition is fast, the system has many
    chances within a few seconds to attempt to recognize many slightly different views.

    In this project we store a set of images in the database. Whenever we input an image to
    be tested, and it is present in the database, it is recognized using the eigenface algorithm,
    and the reconstructed face is the output image. Here we are not using any filters; we
    simply recognize by reconstructing the input image. In parallel, the Euclidean distances
    are also measured.
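    The Euclidean-distance matching step can be sketched as follows. This is a Python sketch with hypothetical eigenface coefficients and labels; the report's implementation is in MATLAB.

    ```python
    # Sketch of the matching step described above: the test image's projection is
    # compared against each stored projection by Euclidean distance, and the
    # closest database entry is reported as the match. Data below is toy data.
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def nearest_face(test_coeffs, database):
        """Return the label whose stored projection is closest to test_coeffs."""
        return min(database, key=lambda label: euclidean(test_coeffs, database[label]))

    # Hypothetical stored eigenface coefficients per person
    database = {"alice": [1.0, 0.0, 2.0], "bob": [4.0, 4.0, 0.0]}
    match = nearest_face([1.1, 0.2, 1.9], database)
    ```

    In practice a distance threshold is also applied, so that inputs far from every stored face are rejected rather than matched to the nearest entry.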


    The eigenface approach to face recognition was motivated by information

    theory, leading to the idea of basing face recognition on a small set of image features that

    best approximate the set of known face images, without requiring that they correspond to

    our intuitive notions of facial parts and features. Although it is not an elegant solution to

    the general object recognition problem, the eigenface approach does provide a practical solution that is well fitted to the problem of face recognition. It is fast, relatively simple,

    and has been shown to work well in a somewhat constrained environment.

    FUTURE SCOPE

    This project is based on the eigenface approach, which gives a maximum
    accuracy of 92.5%. There is scope in the future for Neural Network and Active
    Appearance Model techniques, which can give better results than the eigenface approach;
    with the help of the neural network technique, accuracy can be improved.

    The whole software depends on the database, and the database depends on the
    resolution of the camera, so in the future, if a good-resolution digital or analog camera is
    used, the results will be better. Real-time video processing is another direction: as of
    now, the threshold value is difficult to set in real time. Using OpenCV, an external
    platform provided by Intel consisting of inbuilt libraries and functions, face recognition
    can be made easier.

    REFERENCES

    [1] Zhujie, Y. L. Yu, "Face Recognition with Eigenfaces," Proceedings of the IEEE
    International Conference on Industrial Technology, pp. 434-438, 1994.

    [2] E. Lizama, D. Waldoestl, B. Nickolay, "An Eigenfaces-Based Automatic Face
    Recognition System," IEEE International Conference on Systems, Man, and
    Cybernetics, Vol. 1, 12-15 Oct 1997, pp. 174-177.

    [3] M. Turk and A. Pentland, "Face Recognition Using Eigenfaces," Proceedings of the
    IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, 1991.

    [4] Pablo Navarrete, Javier Ruiz-del-Solar, "Analysis and Comparison of Eigenspace-
    Based Face Recognition Approaches," IJPRAI 16(7): 817-830, 2002.


    [5] H. K. Ekenel, J. Stallkamp, H. Gao, M. Fischer, R. Stiefelhagen, "Face Recognition
    for Smart Interactions," IEEE International Conference on Multimedia & Expo,
    Beijing, China, July 2007, pp. 1007-1010.

    [6] Alex Pentland, Baback Moghaddam, Thad Starner, "View-Based and Modular
    Eigenspaces for Face Recognition," IEEE Conf. on Computer Vision and Pattern
    Recognition, MIT Media Laboratory Tech. Report No. 245, 1994.

    [7] http://www.face-rec.org/

    [8] http://stackoverflow.com/

    [9] http://www.mathworks.in/

    [10] http://cnx.org/content/m12531/latest/

    [11] http://www.pages.drexel.edu/~sis26/Eigenface%20Tutorial.htm#Programming

    [12] http://www.facerecognition.it/

    [13] http://www.shervinemami.info/faceRecognition.html