A Computational Model for Stereoscopic Visual Saliency Prediction

Publication Type: Journal Article
Citation: IEEE Transactions on Multimedia, 2018
Issue Date: 2018-08-08
Abstract: Depth information plays an important role in human vision, as it provides additional cues that distinguish objects from their backgrounds. This paper explores depth information for analyzing stereoscopic saliency and presents a computational model that predicts stereoscopic visual saliency based on three aspects of human vision: the pop-out effect, comfort zones, and background effects. Through an analysis of these three phenomena, we find that most stereoscopic saliency regions can be explained. Our model comprises three modules, each describing one aspect of the saliency distribution, and a control function that can adjust the three modules independently. The three phenomena are not mutually exclusive: one, two, or all three may appear in a single image. Therefore, to accurately determine which phenomena an image conforms to, we devise a selection strategy that chooses the appropriate combination of modules based on the content of the image. Our approach is implemented within a framework based on multi-feature analysis. The framework considers surrounding regions, color/depth contrast, and points of interest, and the selection strategy improves its performance. A series of experiments on two recent eye-tracking datasets shows that our proposed method outperforms several state-of-the-art saliency models.
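The combination scheme described in the abstract, three per-phenomenon saliency modules fused by a content-based selection strategy, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the module formulas, the weighting statistics, and all function names (`pop_out_saliency`, `comfort_zone_saliency`, `background_saliency`, `select_weights`) are hypothetical stand-ins assumed for illustration.

```python
import numpy as np

def _normalize(x):
    """Scale an array to [0, 1]; constant inputs map to all zeros."""
    return (x - x.min()) / (np.ptp(x) + 1e-8)

def pop_out_saliency(depth):
    # Hypothetical pop-out module: nearer regions (smaller depth)
    # are assumed to "pop out" toward the viewer.
    return 1.0 - _normalize(depth)

def comfort_zone_saliency(depth, lo=0.3, hi=0.7):
    # Hypothetical comfort-zone module: favor regions whose normalized
    # depth falls inside an assumed comfortable disparity band [lo, hi].
    d = _normalize(depth)
    return ((d >= lo) & (d <= hi)).astype(float)

def background_saliency(depth):
    # Hypothetical background module: suppress the far background plane.
    return np.exp(-4.0 * _normalize(depth))

def select_weights(depth):
    # Hypothetical selection strategy: weight each module by a simple
    # statistic of the depth distribution. The paper's real strategy is
    # content-based; these statistics are placeholders.
    d = _normalize(depth)
    w = np.array([1.0 - d.mean(),   # shallow scenes -> pop-out
                  1.0 - d.std(),    # narrow depth range -> comfort zone
                  d.mean()])        # deep scenes -> background effect
    return w / w.sum()

def combined_saliency(depth):
    """Fuse the three module maps with content-selected weights."""
    maps = np.stack([pop_out_saliency(depth),
                     comfort_zone_saliency(depth),
                     background_saliency(depth)])
    weights = select_weights(depth)
    fused = np.tensordot(weights, maps, axes=1)
    return _normalize(fused)
```

Under this sketch, an image whose depth map is dominated by near objects would lean on the pop-out module, while a scene with a large far plane would lean on the background module; the weights sum to one so the fused map stays comparable across images.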