Automatic Segmentation of Spontaneous Data using Dimensional Labels from Multiple Coders

Publisher:
Multimodal Corpora
Publication Type:
Conference Proceeding
Citation:
Proceedings - Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality, 2010, pp. 43 - 48
Issue Date:
2010-01
This paper focuses on automatic segmentation of spontaneous data using continuous dimensional labels from multiple coders. It introduces efficient algorithms with the aim of (i) producing ground truth by maximizing inter-coder agreement, (ii) eliciting the frames or samples that capture the transition to and from an emotional state, and (iii) automatically segmenting spontaneous audio-visual data for use by machine learning techniques that cannot handle unsegmented sequences. As a proof of concept, the algorithms introduced are tested on data annotated in arousal and valence space. However, they can be straightforwardly applied to data annotated in other continuous emotional spaces, such as power and expectation.
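As an illustration of the kind of processing the abstract describes, the following is a minimal sketch in NumPy. It assumes frame-level valence traces from several coders; the agreement-based weighting and the threshold-crossing detector here are illustrative stand-ins, not the paper's actual algorithms.

```python
import numpy as np

def ground_truth(traces):
    """Average coder traces, weighting each coder by its mean correlation
    with the others (a simple agreement-maximising scheme; hypothetical,
    not the paper's exact method)."""
    traces = np.asarray(traces, dtype=float)   # shape: (n_coders, n_frames)
    n = len(traces)
    corr = np.corrcoef(traces)                 # pairwise Pearson correlations
    # Mean agreement of each coder with the others (drop the self term).
    weights = (corr.sum(axis=1) - 1.0) / (n - 1)
    weights = np.clip(weights, 0.0, None)      # ignore negatively agreeing coders
    weights /= weights.sum()
    return weights @ traces                    # weighted ground-truth trace

def segment_crossings(trace, threshold=0.0):
    """Indices of frames where the trace crosses the threshold, marking
    transitions to/from an emotional state (e.g. neutral-to-positive valence)."""
    above = trace > threshold
    return np.flatnonzero(above[1:] != above[:-1]) + 1
```

A segment boundary list produced this way could then be used to cut the audio-visual stream into the labelled episodes that segment-based learners require.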