Facial expression recognition with emotion-based feature fusion

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
Proceedings of APSIPA Annual Summit and Conference 2015, 2015, pp. 1-6
Issue Date:
2015-12-16
Files in This Item:
APSIPA_ASC_2015_submission_313.pdf — Accepted Manuscript version, 679.18 kB, Adobe PDF
In this paper, we propose an emotion-based feature fusion method using Discriminant Analysis of Canonical Correlations (DCC) for facial expression recognition. Many image features and descriptors have been proposed for facial expression recognition, and different features may be more accurate for recognizing different expressions. In our proposed method, four effective descriptors for facial expression representation are considered: Local Binary Pattern (LBP), Local Phase Quantization (LPQ), Weber Local Descriptor (WLD), and Pyramid of Histogram of Oriented Gradients (PHOG). Supervised Locality Preserving Projection (SLPP) is applied to each feature for dimensionality reduction and manifold learning. Experiments show that the descriptors are also sensitive to image conditions such as race, lighting, and pose. Thus, an adaptive descriptor selection algorithm is proposed, which determines the best two features for each expression class on a given training set. These two features are then fused, so as to achieve a higher recognition rate for each expression. In our experiments, the JAFFE and BAUM-2 databases are used, and the results show that the descriptor selection step increases the recognition rate by up to 2%.
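The adaptive selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes per-class validation accuracies are already available for each of the four descriptors, and it only performs the "pick the best two per class" step (the paper fuses the chosen pair with DCC, which is not shown here). All accuracy numbers below are hypothetical.

```python
import numpy as np

def select_best_two(acc):
    """For each expression class, pick the two descriptors with the highest
    validation accuracy. `acc` maps descriptor name -> 1-D array of per-class
    accuracies; returns one (best, second-best) name pair per class."""
    names = list(acc)
    mat = np.stack([acc[n] for n in names])   # shape: (n_descriptors, n_classes)
    order = np.argsort(-mat, axis=0)          # row 0 = best descriptor per class
    return [(names[order[0, c]], names[order[1, c]])
            for c in range(mat.shape[1])]

# Hypothetical per-class validation accuracies (3 expression classes shown;
# the paper's databases have more). Values are illustrative only.
acc = {
    "LBP":  np.array([0.85, 0.70, 0.90]),
    "LPQ":  np.array([0.80, 0.75, 0.88]),
    "WLD":  np.array([0.82, 0.72, 0.91]),
    "PHOG": np.array([0.78, 0.80, 0.85]),
}
pairs = select_best_two(acc)
# pairs[0] -> ("LBP", "WLD"): the two strongest descriptors for class 0
```

In the full method, each selected pair would then be projected by SLPP and fused with DCC before classification; the selection itself is done once on the training set.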