Improving Visual Saliency Computing with Emotion Intensity

Publication Type:
Journal Article
Citation:
IEEE Transactions on Neural Networks and Learning Systems, 2016, 27(6), pp. 1201-1213
Issue Date:
2016-06-01
File:
07470324.pdf (Published Version, 2.63 MB, Adobe PDF)
© 2016 IEEE. Saliency maps that integrate individual feature maps into a global measure of visual attention are widely used to estimate human gaze density. Most existing methods consider low-level visual features and object locations, and/or emphasize spatial position with a center prior. Recent psychology research suggests that emotions strongly influence human visual attention. In this paper, we explore the influence of emotional content on visual attention. On top of traditional bottom-up saliency map generation, our saliency map is produced in cooperation with three emotion factors: general emotional content, facial expression intensity, and emotional object locations. Experiments carried out on the National University of Singapore Eye Fixation (NUSEF) data set, a public eye-tracking benchmark, demonstrate that incorporating emotion improves the quality of visual saliency maps computed by bottom-up approaches for gaze density estimation. Our method improves the average area under the receiver operating characteristic curve (AUC) by about 0.1 compared with four baseline bottom-up approaches: Itti's model, attention based on information maximization (AIM), saliency using natural statistics (SUN), and graph-based visual saliency (GBVS).
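The abstract describes two steps: fusing a bottom-up saliency map with three emotion-factor maps, and scoring the result by AUC against human fixations. The following Python sketch illustrates both under stated assumptions; the linear fusion, the weights, and the function names (emotion_augmented_saliency, fixation_auc) are hypothetical, since the paper's exact formulation is not given in this record.

```python
# Minimal sketch of emotion-augmented saliency fusion and AUC scoring.
# NOT the authors' code: the linear combination and the weights below
# are assumptions; the abstract only names the three emotion factors.
import numpy as np
from sklearn.metrics import roc_auc_score

def normalize(m):
    """Scale a map to [0, 1]; returns zeros for a flat map."""
    lo, hi = float(m.min()), float(m.max())
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m, dtype=float)

def emotion_augmented_saliency(bottom_up, general_emotion, face_intensity,
                               emotional_objects,
                               weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine a bottom-up saliency map with the three emotion factors
    from the abstract (general emotional content, facial expression
    intensity, emotional object locations). Hypothetical weights."""
    maps = (bottom_up, general_emotion, face_intensity, emotional_objects)
    combined = sum(w * normalize(m) for w, m in zip(weights, maps))
    return normalize(combined)

def fixation_auc(saliency, fixation_mask):
    """AUC of the ROC curve of a saliency map against a binary human
    fixation mask: the evaluation metric reported in the abstract."""
    return roc_auc_score(fixation_mask.ravel().astype(int),
                         saliency.ravel())

if __name__ == "__main__":
    # Toy demo on random maps and a sparse synthetic fixation mask.
    rng = np.random.default_rng(0)
    s = emotion_augmented_saliency(*(rng.random((48, 64)) for _ in range(4)))
    fixations = rng.random((48, 64)) > 0.95
    print(f"AUC on toy data: {fixation_auc(s, fixations):.3f}")
```

On NUSEF, the mask would come from recorded fixation points rather than random data; the abstract's reported gain of about 0.1 AUC is relative to the four bottom-up baselines named above.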