
  • Slide 1
  • Face Hallucination via Similarity Constraints. Hongliang Li, Senior Member, IEEE, Linfeng Xu, Member, IEEE, and Guanghui Liu
  • Slide 2
  • Outline: Introduction; Proposed Method (Framework of the Proposed Method; Similarity Constraints Computation: LR-LR Similarity Constraint, LR-HR Similarity Constraint, HR Smoothness Constraint, Spatial Similarity); Experiments; Conclusion
  • Slide 3
  • Introduction. Face images captured by live cameras are often of low resolution due to environment or equipment limitations. To generate a high-resolution face image effectively, many methods have been proposed in the last decade.
  • Slide 4
  • Introduction. In this letter, a new face hallucination approach based on similarity constraints is proposed to hallucinate a high-resolution face image from an input low-resolution face image. The proposed method formulates face hallucination as a local linear filtering process based on training LR-HR face image pairs.
  • Slide 5
  • Proposed Method. A. Framework of the Proposed Method. Let Z_L and Z_H denote the low-resolution and high-resolution training face images, respectively, where Z_L is downsampled from Z_H by an integer factor. Let I_L be an input low-resolution face image, while I_H represents its high-resolution face image to be hallucinated.
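As a rough illustration of how the LR-HR training pairs could be prepared, the NumPy sketch below downsamples an HR face image by an integer factor. The letter only states that Z_L is downsampled from Z_H; the block-averaging filter and the factor of 4 are assumptions.

```python
import numpy as np

def downsample(hr_img, factor=4):
    """Downsample an HR face image by an integer factor via block averaging.

    The exact downsampling filter is an assumption; the slides only state
    that Z_L is obtained from Z_H by an integer-factor downsampling.
    """
    h, w = hr_img.shape[:2]
    h, w = h - h % factor, w - w % factor           # crop to a multiple of factor
    blocks = hr_img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))                 # average each factor x factor block
```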
  • Slide 6
  • Framework of the Proposed Method. Fig. 1. Framework of our face hallucination approach.
  • Slide 7
  • Framework of the Proposed Method. Three stages are involved in this work. First, we search an LR-HR face database in which all training patches are stored beforehand. Second, the similarities between the input patch and each pair of LR-HR face patches are measured under different constraint conditions. Finally, we hallucinate a high-resolution image by inferring the lost details of the input low-resolution image.
  • Slide 8
  • Framework of the Proposed Method. Assume each image has been divided into N overlapping patches with identical spacing. Let {(Z_L(j), Z_H(j))} denote the set of training LR-HR patch pairs, where i and j are patch indices. For an input LR face patch I_L(i), our goal is to utilize the training patch pairs to recover the missing high-frequency details in the hallucinated patch I_H(i).
  • Slide 9
  • Framework of the Proposed Method. Two mean values are used: the mean of the input LR patch I_L(i) and the mean of the HR training patch Z_H(j). The second term, Z_H(j) minus its mean, performs the normalization by subtracting the mean from the HR patch. W_ij is defined as a filter kernel that depends on I_L, Z_L, and Z_H.
  • Slide 10
  • Framework of the Proposed Method. C_ij ensures that the sum of W_ij is equal to one, where the sum runs over the neighborhood of patch i. Note that there are four terms defined in the kernel W, which impose the similarity constraints, i.e., LR-LR similarity, LR-HR similarity, smoothness constraint, and spatial similarity.
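A minimal sketch of this synthesis step, assuming the hallucinated patch is the LR patch mean plus a weighted sum of mean-subtracted HR training patches, and that the kernel is the normalized product of the four constraint values. The function takes those values as precomputed inputs; all names are illustrative, not the letter's notation.

```python
import numpy as np

def hallucinate_patch(lr_patch, hr_candidates, s_lrlr, s_lrhr, s_smooth, s_spatial):
    """Combine the four similarity constraints into a normalized kernel W and
    synthesize one HR patch (a sketch, not the letter's exact equation)."""
    # Unnormalized kernel: product of the four constraint terms per candidate j.
    w = np.asarray(s_lrlr) * np.asarray(s_lrhr) * np.asarray(s_smooth) * np.asarray(s_spatial)
    w = w / w.sum()                                  # C_ij: make the weights sum to one
    # Start from the mean of the input LR patch, then add weighted HR details.
    out = np.full(hr_candidates[0].shape, lr_patch.mean(), dtype=float)
    for wj, zh in zip(w, hr_candidates):
        out += wj * (zh - zh.mean())                 # mean-subtracted HR training patch
    return out
```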
  • Slide 11
  • Proposed Method. B. Similarity Constraint Computation. 1) LR-LR Similarity Constraint. Given an LR training face image, its corresponding HR training image has been stored beforehand. This means that all the missing high-frequency details in the LR image can be accurately estimated from its HR counterpart.
  • Slide 12
  • Similarity Constraints Computation. The control parameter σ1 adjusts the range of intensity similarity, i.e., how large a difference between the two LR patches is still tolerated. A straightforward choice for this comparison is the Euclidean distance, which may perform poorly under significant lighting variation or noise corruption.
  • Slide 13
  • Similarity Constraints Computation. The distance between the two LR patches can be expressed as the l-norm ||I_L(i) - Z_L(j)||_l, where ||.||_l denotes the l-norm distance.
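A sketch of the LR-LR constraint under stated assumptions: the l-norm distance between the input LR patch and a training LR patch is mapped to a similarity with an exponential kernel. The exact placement of σ1 in the exponent is an assumed, commonly used form, not necessarily the letter's.

```python
import numpy as np

def lr_lr_similarity(i_l_patch, z_l_patch, sigma1=1.0, l=1):
    """LR-LR constraint sketch: exponential kernel on the l-norm distance.

    sigma1 is the control parameter from the slide; the 2*sigma1**2 scaling
    is an assumption about the kernel form.
    """
    dist = np.sum(np.abs(i_l_patch - z_l_patch) ** l)
    return np.exp(-dist / (2.0 * sigma1 ** 2))
```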
  • Slide 14
  • Similarity Constraints Computation. 2) LR-HR Similarity Constraint. The LR-HR constraint is designed to measure the similarity between an input photo patch I_L(i) and an HR patch Z_H(j). Since HR patches usually contain a great deal of high-frequency content that is missing from the LR patches, it is difficult to compare their similarity directly based on their difference.
  • Slide 15
  • Similarity Constraints Computation. We design a new descriptor, called the local appearance similarity (LAS) descriptor, to measure the similarity between LR and HR patches. This descriptor is generated from patch-pair similarities within a local region, as illustrated in Fig. 2.
  • Slide 16
  • Similarity Constraints Computation. Fig. 2. Illustration of the LAS descriptor computation.
  • Slide 17
  • Similarity Constraints Computation. Given an LR patch I_L(i) and an HR training patch Z_H(j), i.e., the patches marked with solid yellow lines in Fig. 2, the LR-HR constraint is defined to measure the similarity between them. The final LAS descriptor for a patch is the concatenation of the matrix elements in raster-scan order.
  • Slide 18
  • Similarity Constraints Computation. Two 1 × d dimensional LAS descriptors are then computed for the patches I_L(i) and Z_H(j), respectively.
  • Slide 19
  • Similarity Constraints Computation. The parameters σ2 and σs adjust the descriptor similarity, which is computed over the neighborhoods of patches I_L(i) and Z_H(j), respectively. In our work, this setting is kept fixed unless otherwise specified, and the final LAS descriptor is a 25-dimensional vector.
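A sketch of the LAS idea under stated assumptions: a patch is described by its similarities to the patches in a 5 × 5 grid of neighbouring locations (25 values in raster-scan order), and two LAS descriptors are then compared with an exponential kernel controlled by σ2. All function and parameter names are illustrative.

```python
import numpy as np

def las_descriptor(patch, neighbour_patches, sigma_s=1.0):
    """Describe a patch by its similarities to its local neighbours.

    neighbour_patches is assumed to be the 25 patches of a 5x5 local region,
    listed in raster-scan order, so the descriptor is 25-dimensional.
    """
    return np.array([np.exp(-np.sum(np.abs(patch - q)) / (2.0 * sigma_s ** 2))
                     for q in neighbour_patches])

def lr_hr_similarity(desc_lr, desc_hr, sigma2=1.0):
    """Compare the LAS descriptors of an LR patch and an HR training patch."""
    return np.exp(-np.sum(np.abs(desc_lr - desc_hr)) / (2.0 * sigma2 ** 2))
```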
  • Slide 20
  • Similarity Constraints Computation. 3) HR Smoothness Constraint. We design a constraint that tests whether the selected similar patches are compatible with their neighboring patches. We call it the smoothness term; it imposes a smoothness constraint between neighboring hallucinated patches.
  • Slide 21
  • Similarity Constraints Computation. The HR smoothness constraint is computed over the overlapping regions, where t and l denote the top and left overlaps for the patch pairs (Z_H(j), I_H(i_t)) and (Z_H(j), I_H(i_l)), respectively. Here, σ3 is used to control the range of smoothness variation.
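A sketch of the smoothness term under stated assumptions: the candidate HR patch is compared against the already-hallucinated top and left neighbours over their overlapping rows and columns. The use of σ3 in the exponent and the l-norm error are assumptions about the exact form.

```python
import numpy as np

def hr_smoothness(z_h_patch, hr_top, hr_left, overlap, sigma3=1.0, l=1):
    """Smoothness constraint sketch over the top/left overlapping regions."""
    # Rows of the candidate that overlap the patch hallucinated above it.
    err_t = np.sum(np.abs(z_h_patch[:overlap, :] - hr_top[-overlap:, :]) ** l)
    # Columns of the candidate that overlap the patch hallucinated to its left.
    err_l = np.sum(np.abs(z_h_patch[:, :overlap] - hr_left[:, -overlap:]) ** l)
    return np.exp(-(err_t + err_l) / (2.0 * sigma3 ** 2))
```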
  • Slide 22
  • Similarity Constraints Computation. 4) Spatial Similarity. It is reasonable to assign small weights to those patches that are far from the patch being hallucinated, I_H(i). We define a new constraint that computes the similarity between Z_H(j) and I_L(i) based on their spatial distance.
  • Slide 23
  • Similarity Constraints Computation. The parameter σ4 adjusts the spatial similarity. D(i, j) is a spatial window function defined over the neighborhood of patch i.
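A sketch of the spatial term under stated assumptions: the weight decays with the distance between the grid positions of patches i and j and is zero outside a local window standing in for D(i, j). The Gaussian decay and the window radius are illustrative choices.

```python
import numpy as np

def spatial_similarity(pos_i, pos_j, sigma4=1.0, window_radius=2.0):
    """Spatial constraint sketch based on the distance between patch positions."""
    d = np.linalg.norm(np.asarray(pos_i, dtype=float) - np.asarray(pos_j, dtype=float))
    if d > window_radius:            # D(i, j): restrict to a local spatial window
        return 0.0
    return np.exp(-(d ** 2) / (2.0 * sigma4 ** 2))
```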
  • Slide 24
  • Experiments. Given an input LR face image, we divide it into a number of overlapping patches of size 4 × 4. The overlap is set to 3 pixels, which corresponds to 12 pixels in the HR face image. We employ the Laplacian cost function, i.e., l = 1, to compute the similarity constraints.
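A minimal sketch of the overlapping patch grid described above: 4 × 4 LR patches with a 3-pixel overlap, i.e., a stride of 1. The corresponding HR patches would be extracted on the same grid scaled by the upscaling factor.

```python
import numpy as np

def extract_patches(img, patch_size=4, overlap=3):
    """Divide an LR image into overlapping patches on a regular grid."""
    step = patch_size - overlap                      # 4x4 patches with 3-pixel overlap -> stride 1
    patches, positions = [], []
    h, w = img.shape[:2]
    for y in range(0, h - patch_size + 1, step):
        for x in range(0, w - patch_size + 1, step):
            patches.append(img[y:y + patch_size, x:x + patch_size])
            positions.append((y, x))
    return patches, positions
```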
  • Slide 25
  • Experiments. We first perform the evaluation on a large number of face images taken from the FERET face database. About 1200 images of 873 persons were selected as training images and 300 images of 227 persons for testing. We compare our method with state-of-the-art methods, including standard bicubic interpolation, Liu et al. [3], Wang et al. [4], Ma et al. [7], and Zhang et al. [11].
  • Slide 26
  • Experiments Fig. 3. (a) Some examples of face hallucination results. (b) Locally enlarged results for the last two face images.
  • Slide 27
  • Experiments In addition, we also evaluate our proposed method on some face images taken from the CMU+MIT face database. Fig. 4. Experimental results on some LR face images.
  • Slide 28
  • Experiments. We also perform an objective evaluation of our method. Two quantitative measures are used to assess the similarity between the original HR face image and the hallucinated one, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The default SSIM parameters are set to K_ssim = [0.05, 0.05] (constant terms), window = 8 (local window size), and L_ssim = 100 (dynamic range of the pixel values), as recommended by the authors.
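A hedged example of computing the two objective scores with scikit-image. The slide's SSIM constants are passed explicitly; note that scikit-image requires an odd window size, so 7 is used instead of 8, and the data range should be set to match the actual pixel scale (8-bit images assumed here).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(hr_true, hr_hallucinated):
    """Objective evaluation sketch: PSNR and SSIM between the original and
    hallucinated HR face images (8-bit grayscale assumed)."""
    psnr = peak_signal_noise_ratio(hr_true, hr_hallucinated, data_range=255)
    ssim = structural_similarity(hr_true, hr_hallucinated,
                                 win_size=7, K1=0.05, K2=0.05, data_range=255)
    return psnr, ssim
```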
  • Slide 29
  • Experiments. However, as discussed in [11] and [12], we also observed that PSNR and SSIM are not always consistent with human perceptual quality.
  • Slide 30
  • Conclusion. Built on a guided synthesis framework, the proposed method provides an effective way to infer the missing high-frequency details of the input LR face image from the similarity constraints. Given the training set, four constraint functions are designed to learn the lost information from the most similar training examples. Experimental evaluation demonstrates the good performance of the proposed method on the face hallucination task.
  • Slide 31
  • Thank you for listening.