Robust Face Recognition via Multimodal Deep Face Representation
- Publication Type: Journal Article
- Published in: IEEE Transactions on Multimedia, 2015, 17 (11), pp. 2049-2058
- Issue Date: 2015
Files in This Item:
- Robust Face Recognition via Multimodal Deep Face Representation.pdf (Accepted Manuscript Version, 1.1 MB)
This item is open access.
© 2015 IEEE. Face images appearing in multimedia applications, e.g., social networks and digital entertainment, usually exhibit dramatic pose, illumination, and expression variations, resulting in considerable performance degradation for traditional face recognition algorithms. This paper proposes a comprehensive deep learning framework to jointly learn face representation using multimodal information. The proposed deep learning structure is composed of a set of elaborately designed convolutional neural networks (CNNs) and a three-layer stacked auto-encoder (SAE). The set of CNNs extracts complementary facial features from multimodal data. The extracted features are then concatenated to form a high-dimensional feature vector, whose dimension is compressed by the SAE. All of the CNNs are trained using a subset of 9,000 subjects from the publicly available CASIA-WebFace database, which ensures the reproducibility of this work. Using the proposed single CNN architecture and limited training data, a 98.43% verification rate is achieved on the LFW database. Benefiting from the complementary information contained in multimodal data, our small ensemble system achieves a recognition rate higher than 99.0% on LFW using a publicly available training set.
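The pipeline described in the abstract — an ensemble of CNNs producing complementary features, concatenation into one high-dimensional vector, then dimension reduction by a stacked auto-encoder — can be sketched as follows. This is a minimal NumPy illustration of the data flow only: the feature dimensions, layer sizes, and random weights are assumptions for demonstration, not the paper's learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0.0)

# Hypothetical per-CNN feature dimensions; the paper's exact sizes are not given here.
cnn_feature_dims = [512, 512, 512]

# Simulated outputs of the CNN ensemble for a single face image.
cnn_features = [rng.standard_normal(d) for d in cnn_feature_dims]

# Step 1: concatenate complementary features into one high-dimensional vector.
concat = np.concatenate(cnn_features)  # shape: (1536,)

# Step 2: compress with the encoder path of a three-layer stacked auto-encoder.
# Layer widths are illustrative; in the paper the weights would be learned.
layer_dims = [concat.shape[0], 1024, 512, 256]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_dims[:-1], layer_dims[1:])]

code = concat
for W in weights:
    code = relu(code @ W)  # each layer shrinks the representation

print(concat.shape, code.shape)  # (1536,) (256,)
```

The compressed code would then serve as the compact face representation used for verification, e.g., by comparing codes of two images with a distance metric.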