Fusing face and body display for bi-modal emotion recognition: single frame analysis and multi frame post integration

Publication Type:
Conference Proceeding
Affective Computing and Intelligent Interaction - First International Conference, ACII 2005, Proceedings, 2005, pp. 102-111
This paper presents an approach to automatic visual emotion recognition from two modalities: expressive face and body gesture. Face and body movements are captured simultaneously with two separate cameras. For each face and body image sequence, single expressive frames are selected manually for emotion analysis and recognition. First, individual classifiers are trained on each modality for mono-modal emotion recognition. Second, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In our experiments, classification using both modalities achieved higher recognition accuracy than classification using the facial modality alone. We further extend the affect analysis to whole image sequences with a multi-frame post-integration approach applied over the single-frame recognition results. In our experiments, post integration based on the fusion of face and body proved more accurate than post integration based on the facial modality alone.
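The two fusion strategies named in the abstract can be illustrated with a minimal sketch. Feature-level fusion concatenates the per-modality feature vectors before a single classification step, while decision-level fusion combines the class scores produced by the separate mono-modal classifiers. The function names, the weighted-sum combination rule, and the emotion labels below are illustrative assumptions; the paper does not specify these details.

```python
# Hedged sketch of feature-level vs decision-level fusion.
# Classifier details, weights, and labels are assumptions, not from the paper.

def feature_level_fusion(face_features, body_features, classifier):
    """Concatenate per-modality feature vectors, then classify once."""
    fused = face_features + body_features  # single joint feature vector
    return classifier(fused)

def decision_level_fusion(face_scores, body_scores, weights=(0.5, 0.5)):
    """Combine per-modality class scores, here by a weighted sum."""
    wf, wb = weights
    combined = {label: wf * face_scores[label] + wb * body_scores[label]
                for label in face_scores}
    # Predict the emotion with the highest combined score.
    return max(combined, key=combined.get)

# Example: decision-level fusion of two mono-modal classifiers' outputs.
face = {"anger": 0.6, "fear": 0.3, "joy": 0.1}
body = {"anger": 0.2, "fear": 0.7, "joy": 0.1}
print(decision_level_fusion(face, body))  # prints "fear"
```

With equal weights, the body classifier's strong "fear" score outweighs the face classifier's "anger" preference; in practice the weights would be tuned, and the paper reports that such bi-modal combinations outperformed the face-only classifier.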