Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-body

Publisher:
Australian Computer Society
Publication Type:
Conference Proceeding
Citation:
Use of Vision in Human-Computer Interaction: Proceedings of the HCSNet Workshop on the Use of Vision in Human-Computer Interaction, 2006, vol. 56, pp. 35-42
Issue Date:
2006-01
A first step in developing and testing a robust affective multimodal system is to obtain or access data representing human multimodal expressive behaviour. Collected affect data must then be annotated before it can be used by automated systems. Most existing studies of emotion or affect annotation are monomodal. In this paper, we instead explore how independent human observers annotate affect display from monomodal face data compared with bimodal face-and-body data. To this end, we collected visual affect data by recording the face and the face-and-body simultaneously. We then conducted a survey, asking human observers to view and label the face and face-and-body recordings separately. The results show that, in general, viewing face and body simultaneously helps resolve ambiguity in annotating emotional behaviours.