Automatic, Dimensional and Continuous Emotion Recognition
- Publisher: IGI Global
- Publication Type: Journal Article
- Citation: International Journal of Synthetic Emotions, 2010, 1 (1), pp. 68-99
- Issue Date: 2010-01
Closed Access
Filename | Description | Size
---|---|---
2010001259OK.pdf | | 3.92 MB
This item is closed access and not available for download.
Recognition and analysis of human emotions have attracted considerable interest over the past two decades and have been researched extensively in neuroscience, psychology, cognitive science, and computer science. Most past research in machine analysis of human emotion has focused on recognizing prototypic expressions of the six basic emotions from data posed on demand and acquired in laboratory settings. More recently, driven by real-world applications, there has been a shift toward recognizing affective displays recorded in naturalistic settings. This shift in affective computing research aims at subtle, continuous, and context-specific interpretations of affective displays recorded in real-world settings, and at combining multiple modalities for the analysis and recognition of human emotion. Accordingly, this article explores recent advances in dimensional and continuous affect modelling, sensing, and automatic recognition from visual, audio, tactile, and brain-wave modalities.