Modeling temporal information using discrete Fourier transform for recognizing emotions in user-generated videos

Publication Type:
Conference Proceeding
Citation:
Proceedings - International Conference on Image Processing (ICIP), 2016-August, pp. 629-633
Issue Date:
2016-08-03
File:
Haimin.pdf (Published version, 271.68 kB, Adobe PDF)
© 2016 IEEE. With the widespread growth of user-generated Internet videos, emotion recognition in such videos is attracting increasing research effort. However, most existing works are based on frame-level visual features and/or audio features, which may fail to model temporal information, e.g., characteristics accumulated over time. In order to capture video temporal information, this paper proposes to analyse features in the frequency domain obtained by the discrete Fourier transform (DFT features). Frame-level features are first extracted by a pre-trained deep convolutional neural network (CNN). Then, the time-domain features are transformed and interpolated into DFT features. The CNN and DFT features are further encoded and fused for emotion classification. In this way, static image features extracted from a pre-trained deep CNN and temporal information represented by DFT features are jointly considered for video emotion recognition. Experimental results demonstrate that combining DFT features effectively captures temporal information and therefore improves emotion recognition performance. Our approach achieves state-of-the-art performance on the largest video emotion dataset (the VideoEmotion-8 dataset), improving accuracy from 51.1% to 55.6%.
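The sketch below illustrates the general idea described in the abstract: taking per-frame CNN descriptors, applying a DFT along the time axis, and interpolating the resulting spectra to a fixed size before fusing them with static features. It is a minimal illustration, not the authors' implementation; the array shapes, the number of retained frequency bins, and the mean-pooling fusion are assumptions made for demonstration.

```python
# Minimal sketch (assumed details, not the paper's code): DFT-based temporal
# features computed from frame-level CNN descriptors using numpy.
import numpy as np

def dft_temporal_features(frame_feats: np.ndarray, n_freq: int = 32) -> np.ndarray:
    """frame_feats: (T, D) array of per-frame CNN features for one video.

    Each feature dimension's trajectory over time is transformed to the
    frequency domain, and the magnitude spectrum is interpolated to a fixed
    number of bins so videos of different lengths give fixed-size descriptors.
    Returns an (n_freq, D) array.
    """
    # DFT along the time axis; keep magnitudes of the non-negative frequencies.
    spectrum = np.abs(np.fft.rfft(frame_feats, axis=0))      # (T//2 + 1, D)

    # Interpolate each dimension's spectrum onto n_freq evenly spaced bins.
    src = np.linspace(0.0, 1.0, num=spectrum.shape[0])
    dst = np.linspace(0.0, 1.0, num=n_freq)
    dft_feats = np.stack(
        [np.interp(dst, src, spectrum[:, d]) for d in range(spectrum.shape[1])],
        axis=1,
    )                                                         # (n_freq, D)
    return dft_feats

# Example fusion: static CNN features (mean pooled over frames) concatenated
# with the flattened DFT temporal features; a classifier would follow.
T, D = 120, 4096                                  # illustrative sizes
frames = np.random.rand(T, D).astype(np.float32)  # stand-in for CNN features
static = frames.mean(axis=0)                      # (D,)
temporal = dft_temporal_features(frames).ravel()  # (n_freq * D,)
video_descriptor = np.concatenate([static, temporal])
```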