Hallucinating Unaligned Face Images by Multiscale Transformative Discriminative Networks

Springer Science and Business Media LLC
Publication Type: Journal Article
International Journal of Computer Vision, 2020, 128, (2), pp. 500-526
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.

Conventional face hallucination methods rely heavily on accurate alignment of low-resolution (LR) faces before upsampling them. Misalignment often leads to deficient results and unnatural artifacts at large upscaling factors. However, due to the diverse range of poses and facial expressions, aligning an LR input image, particularly a tiny one, is extremely difficult. In addition, when the resolutions of LR input images vary, previous deep neural network based face hallucination methods require the interocular distances of input face images to be similar to those in the training datasets. Downsampling LR input faces to a required resolution discards high-frequency information from the original inputs, which may lead to suboptimal super-resolution performance for state-of-the-art face hallucination networks. To overcome these challenges, we present an end-to-end multiscale transformative discriminative neural network devised to super-resolve unaligned and very small face images of different resolutions, ranging from 16 × 16 to 32 × 32 pixels, in a unified framework. Our proposed network embeds spatial transformation layers that allow local receptive fields to line up with similar spatial supports, thus obtaining a better mapping between LR and HR facial patterns. Furthermore, we incorporate into our objective a class-specific loss designed to classify upright realistic faces, applied through a successive discriminative network, to improve alignment and upsampling performance with semantic information. Extensive experiments on a large face dataset show that the proposed method significantly outperforms the state-of-the-art.