How to Distinguish Posed from Spontaneous Smiles using Geometric Features

Publication Type:
Conference Proceeding
Proceedings of the ACM Ninth International Conference on Multimodal Interfaces, 2007, pp. 38 - 45
Abstract:
Automatic distinction between posed and spontaneous expressions is an unsolved problem. Previous cognitive-science studies indicated that the automatic separation of posed from spontaneous expressions is possible using the face modality alone. However, little is known about the information contained in head and shoulder motion. In this work, we propose to (i) distinguish between posed and spontaneous smiles by fusing the head, face, and shoulder modalities, (ii) investigate which modalities carry important information and how the information from the modalities relates, and (iii) investigate to what extent the temporal dynamics of these signals contribute to solving the problem. We use a cylindrical head tracker to track the head movements and two particle filtering techniques to track the facial and shoulder movements. Classification is performed by kernel methods combined with ensemble learning techniques. We investigated two aspects of multimodal fusion: the level of abstraction (i.e., early, mid-level, and late fusion) and the fusion rule used (i.e., sum, product, and weight criteria). Experimental results from 100 videos displaying posed smiles and 102 videos displaying spontaneous smiles are presented. The best results were obtained with late fusion of all modalities, with which 94.0% of the videos were classified correctly.
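To illustrate the late-fusion step described in the abstract, the sketch below combines per-modality classifier posteriors with sum, product, and weighted rules. This is a minimal illustration, not the paper's implementation: the posterior values, the weights, and the 0.5 decision threshold are all invented for the example.

```python
# Hedged sketch of late (decision-level) fusion: each modality's classifier
# emits a posterior probability for the "spontaneous" class, and the scores
# are combined with a sum, product, or weighted rule. All numbers here are
# illustrative, not taken from the paper.

def fuse(scores, rule="sum", weights=None):
    """Combine per-modality posteriors for one class into a single score."""
    if rule == "sum":
        return sum(scores) / len(scores)  # mean of posteriors
    if rule == "product":
        prod = 1.0
        for s in scores:
            prod *= s  # product of posteriors
        return prod
    if rule == "weight":
        if not weights or len(weights) != len(scores):
            raise ValueError("need one weight per modality")
        return sum(w * s for w, s in zip(weights, scores))  # weighted sum
    raise ValueError(f"unknown rule: {rule}")

# Hypothetical posteriors P(spontaneous | modality) for head, face, shoulders.
posteriors = [0.8, 0.6, 0.7]

fused_sum = fuse(posteriors, "sum")       # (0.8 + 0.6 + 0.7) / 3 = 0.7
fused_prod = fuse(posteriors, "product")  # 0.8 * 0.6 * 0.7 = 0.336
fused_w = fuse(posteriors, "weight", weights=[0.5, 0.3, 0.2])

# A simple (assumed) decision: spontaneous if the fused score exceeds 0.5.
label = "spontaneous" if fused_sum >= 0.5 else "posed"
```

Note that the product rule penalizes disagreement harshly (one near-zero posterior vetoes the rest), while the sum rule is more forgiving; the weight rule lets stronger modalities dominate.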