

Discriminative Deep Face Shape Model for Facial Point Detection

In this paper, we address the problem of facial point detection under varying facial expressions and poses by proposing a discriminative deep face shape model constructed from the Restricted Boltzmann Machine and its variants.

1. Problem

Yue Wu and Qiang Ji, Rensselaer Polytechnic Institute

Figure 1. Facial point detection. a. The facial points that define the face shapes. b. Facial images with detected facial points.

2. Motivation

Observations:

(1) There exist patterns of face shapes.

(2) The face shape depends on the facial expressions and head poses.

Motivation:

To increase the accuracy and robustness of facial feature detection algorithms, a face shape model that captures the face shape patterns under varying facial expressions and poses should be utilized.

3. Discriminative Deep Face Shape Model

Goal:

Build a model that captures the conditional joint probability 𝑝(𝑥|𝑚) of the ground-truth facial point locations 𝑥, given their measurements 𝑚 from local point detectors.

4. Facial Point Detection Using the Face Shape Model

Model:

• A discriminative model based on Restricted Boltzmann Machines (Fig. 2(a) and Fig. 3).

• Bottom layer: 𝑚, measurements of point locations from local point detectors.

• Middle layer: 𝑥, the face shape under varying expressions and poses; 𝑦, the frontal face shape under the corresponding expression for the same subject (see Fig. 2(b) for examples).

• Top layer: sets of hidden nodes ℎ1 and ℎ2.
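The factored three-way coupling between 𝑥, 𝑦, and ℎ1 described above can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation: all sizes, the weight names Wx, Wy, Wh, and the omission of bias terms are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: nx shape dims, ny frontal-shape dims, nh hidden units, nf factors.
nx, ny, nh, nf = 136, 136, 50, 30

# Factor loading matrices of a factored three-way RBM (names illustrative).
Wx = rng.normal(0, 0.01, (nx, nf))
Wy = rng.normal(0, 0.01, (ny, nf))
Wh = rng.normal(0, 0.01, (nh, nf))

def threeway_energy(x, y, h):
    """Energy of a factored three-way RBM, biases omitted for brevity:
    E(x, y, h) = -sum_f (x . Wx_f)(y . Wy_f)(h . Wh_f)."""
    return -np.sum((x @ Wx) * (y @ Wy) * (h @ Wh))

def hidden_activation(x, y):
    """P(h_k = 1 | x, y) for binary hidden units: a sigmoid of the factored
    three-way input, the way h1 couples x and y in Fig. 2(a)."""
    pre = ((x @ Wx) * (y @ Wy)) @ Wh.T
    return 1.0 / (1.0 + np.exp(-pre))

x = rng.normal(size=nx)
y = rng.normal(size=ny)
h = rng.integers(0, 2, nh).astype(float)
```

Factorizing the three-way weight tensor into three matrices keeps the parameter count linear in the number of factors instead of cubic in the layer sizes.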

Figure 2. a. The proposed discriminative deep face shape model. It consists of a factorized three-way RBM connecting nodes 𝑥, 𝑦, and ℎ1. It also includes two RBMs that model the connections among 𝑥, ℎ2 and 𝑚, 𝑥. b. Corresponding frontal and non-frontal images for the same subjects and expressions.

Figure 3. Graphical depiction of different parts of the model. a, b. Factored three-way RBM. c, d. RBM models.

Model training:

• Input: Complete data {𝑥𝑐, 𝑦𝑐, 𝑚𝑐}, 𝑐 = 1, …, 𝑁𝐶, including the face shape in arbitrary pose and expression, its measurement, and its corresponding frontal shape with the same expression; incomplete data {𝑥𝑖, 𝑚𝑖}, 𝑖 = 1, …, 𝑁𝐼, without the frontal face shape.

• Output: Model parameters 𝜃.

• Method:

  • Maximum likelihood learning.

  • Gradient ascent algorithm; the gradient is given below.

  • Use mean-field fixed-point equations to estimate the data-dependent terms, and persistent Markov chains to estimate the model-dependent term.

𝜃∗ = argmax𝜃 [𝐿(𝜃; Data𝐶) + 𝐿(𝜃; Data𝐼)]

∂𝐿(𝜃)/∂𝜃 = −⟨∂𝐸/∂𝜃⟩𝑃(data𝐶) − ⟨∂𝐸/∂𝜃⟩𝑃(data𝐼) + ⟨∂𝐸/∂𝜃⟩𝑃(model)
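The training recipe above, gradient ascent with a data-dependent positive term and a model-dependent negative term estimated by persistent Markov chains, can be sketched for a plain binary RBM. The paper's model is richer, so the sizes, learning rate, and toy data here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

nv, nh, n_chains = 20, 10, 8           # illustrative sizes
W = rng.normal(0, 0.01, (nv, nh))      # weights of a plain binary RBM
bv, bh = np.zeros(nv), np.zeros(nh)

data = (rng.random((100, nv)) > 0.5).astype(float)         # toy training data
chains = (rng.random((n_chains, nv)) > 0.5).astype(float)  # persistent chains

lr = 0.05
for step in range(200):
    batch = data[rng.integers(0, len(data), 16)]
    # Data-dependent term: hidden probabilities given the data (exact for an RBM;
    # the paper's deeper model would use mean-field fixed-point updates here).
    ph_data = sigmoid(batch @ W + bh)
    pos = batch.T @ ph_data / len(batch)
    # Model-dependent term: advance the persistent chains one Gibbs sweep.
    ph = sigmoid(chains @ W + bh)
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + bv)
    chains = (rng.random(pv.shape) < pv).astype(float)
    neg = chains.T @ sigmoid(chains @ W + bh) / n_chains
    # Gradient ascent on the log-likelihood: positive minus negative statistics.
    W += lr * (pos - neg)
    bv += lr * (batch.mean(0) - chains.mean(0))
    bh += lr * (ph_data.mean(0) - sigmoid(chains @ W + bh).mean(0))
```

Because the chains persist across updates instead of restarting at the data, they can mix toward the model distribution even with a single Gibbs sweep per step.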

Model inference:

• Input: the measurements 𝑚𝑡 from local point detectors, and the model parameters 𝜃 that define 𝑝(𝑥|𝑚; 𝜃).

• Output: the inferred facial point locations 𝑥∗.

• Method: Gibbs sampling.

𝑥∗ = argmax𝑥 𝑝(𝑥|𝑚𝑡)
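The Gibbs-sampling inference step can be sketched on a toy binary model with 𝑥–ℎ and 𝑚–𝑥 couplings. All parameters below are made up for illustration, and the lowest-energy sample visited stands in for argmax𝑥 𝑝(𝑥|𝑚𝑡).

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

nx, nh = 12, 6                             # illustrative sizes
Wxh = rng.normal(0, 0.5, (nx, nh))         # x-h couplings (toy parameters)
Wmx = np.eye(nx) * 2.0                     # m-x couplings: m pulls x toward it
m = (rng.random(nx) > 0.5).astype(float)   # measurements from local detectors

def energy(x, h):
    """Toy energy; lower energy means higher probability."""
    return -(x @ Wxh @ h + m @ Wmx @ x)

x = m.copy()                               # initialize the chain at the measurements
best_x, best_e = x.copy(), np.inf
for sweep in range(500):
    ph = sigmoid(x @ Wxh)                  # sample hidden nodes given x
    h = (rng.random(nh) < ph).astype(float)
    px = sigmoid(Wxh @ h + Wmx @ m)        # sample x given h and the measurements
    x = (rng.random(nx) < px).astype(float)
    e = energy(x, h)
    if e < best_e:                         # track the most probable sample seen
        best_x, best_e = x.copy(), e
```

Here `best_x` approximates 𝑥∗; in the actual model the point locations are continuous, so the conditional over 𝑥 would be Gaussian rather than Bernoulli.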

Figure 4. Diagram illustration of the facial point detection algorithm using the face shape model. Local point detection and shape refinement using the proposed model are performed iteratively.
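The iterative scheme of alternating local point detection and model-based shape refinement can be sketched as a simple loop. Both functions below are hypothetical stubs, not the paper's detectors or inference.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_point_detection(image, prior_shape):
    """Hypothetical local detector: returns measured point locations m
    near the current shape estimate (stub for illustration)."""
    return prior_shape + rng.normal(0, 1.0, prior_shape.shape)

def shape_refinement(m):
    """Hypothetical refinement: would run Gibbs inference in the learned
    shape model p(x|m); here a stub that shrinks points toward their mean."""
    return 0.5 * m + 0.5 * m.mean(axis=0)

shape = np.zeros((68, 2))          # e.g. 68 facial points, mean-shape init
image = None                       # placeholder image
for it in range(5):                # alternate detection and refinement
    m = local_point_detection(image, shape)
    shape = shape_refinement(m)
```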

5. Experiments

• Evaluate the proposed facial point detection algorithm.

• Compare with other state-of-the-art works.

Figure 5. Comparison of local detectors with different feature descriptors and classifiers.

Figure 6. Comparison of different variants of the proposed face shape model.

Figure 7. Detection error (mean and std) for each point across four testing databases.

Figure 8. Detection results on sample images from four databases: (a) MultiPie, (b) Helen, (c) LFPW, (d) AFW.
