Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-body

Publisher: Australian Computer Society
Publication Type: Conference Proceeding
Published in: Use of Vision in Human-Computer Interaction: Proceedings of the HCSNet Workshop on the Use of Vision in Human-Computer Interaction, 2006, vol. 56, pp. 35–42
Abstract: A first step in developing and testing a robust affective multimodal system is to obtain or access data representing human multimodal expressive behaviour. Collected affect data must then be annotated before it can be used by automated systems. Most existing studies of emotion or affect annotation are monomodal. In this paper, by contrast, we explore how independent human observers annotate affect displays from monomodal face data compared with bimodal face-and-body data. To this end, we collected visual affect data by recording the face and the face-and-body simultaneously. We then conducted a survey in which human observers viewed and labelled the face and face-and-body recordings separately. The results show that, in general, viewing the face and body together helps resolve ambiguity when annotating emotional behaviours.