Fusing face and body gesture for machine recognition of emotions

Publication Type:
Conference Proceeding
Citation:
Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 2005, pp. 306–311
Issue Date:
2005-12-01
Research shows that humans are more likely to consider computers human-like when those computers understand and display appropriate nonverbal communicative behavior. Most existing systems that attempt to analyze human nonverbal behavior focus only on the face; research aiming to integrate gesture as a means of expression has only recently emerged. This paper presents an approach to automatic visual recognition of expressive face and upper-body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework. After describing the feature extraction techniques, classification results from three subjects are presented. First, individual classifiers are trained separately with face and body features for classification into FAU and BAU categories. Second, the same procedure is applied for classification into labeled emotion categories. Finally, face and body information is fused for classification into combined emotion categories. In our experiments, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the face modality alone. © 2005 IEEE.
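The fusion step described in the abstract can be illustrated with a minimal feature-level sketch: face and body descriptors are concatenated into one vector before classification. This is only an assumed toy illustration, not the authors' implementation; the feature values, the `fuse`/`fit_centroids`/`predict` helpers, and the nearest-centroid classifier are all hypothetical stand-ins for whatever features and classifiers the paper actually uses.

```python
# Illustrative sketch (not the paper's method): feature-level fusion of
# face and body descriptors by concatenation, then a simple
# nearest-centroid classifier over emotion labels.

def fuse(face_vec, body_vec):
    """Feature-level fusion: concatenate the two modality vectors."""
    return face_vec + body_vec

def fit_centroids(samples, labels):
    """Compute one mean vector (centroid) per emotion label."""
    sums, counts = {}, {}
    for vec, lab in zip(samples, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, vec):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], vec))

# Hypothetical per-sample descriptors: tiny face and body feature vectors.
face_train = [[1.0, 0.1], [0.2, 0.9]]
body_train = [[0.8, 0.0], [0.1, 1.0]]
labels = ["happy", "sad"]

fused_train = [fuse(f, b) for f, b in zip(face_train, body_train)]
model = fit_centroids(fused_train, labels)
print(predict(model, fuse([0.9, 0.2], [0.7, 0.1])))  # → happy
```

Because both modalities contribute coordinates to the fused vector, a query that is ambiguous in one modality can still be resolved by the other, which mirrors the paper's observation that the bimodal classifier outperforms the face-only one.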