Semantically-based human scanpath estimation with HMMs
- Publication Type: Conference Proceeding
- Citation: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 3232-3239
- Issue Date: 2013-01-01
Closed Access
Filename | Description | Size
---|---|---
2013003739OK.pdf | | 480.57 kB
This item is closed access and not available.
We present a method for estimating human scan paths, which are sequences of gaze shifts that follow visual attention over an image. In this work, scan paths are modeled based on three principal factors that influence human attention: low-level feature saliency, spatial position, and semantic content. Low-level feature saliency is formulated as transition probabilities between different image regions based on feature differences. The effect of spatial position on gaze shifts is modeled as a Lévy flight, with the shifts following a 2D Cauchy distribution. To account for semantic content, we propose to use a Hidden Markov Model (HMM) with a Bag-of-Visual-Words descriptor of image regions. An HMM is well suited for this purpose in that 1) the hidden states, obtained by unsupervised learning, can represent latent semantic concepts, 2) the prior distribution of the hidden states describes visual attraction to the semantic concepts, and 3) the transition probabilities represent human gaze shift patterns. The proposed method is applied to task-driven viewing processes. Experiments and analysis performed on human eye gaze data verify the effectiveness of this method. © 2013 IEEE.
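The abstract's spatial-position component, a Lévy flight whose gaze shifts follow a 2D Cauchy distribution, can be sketched as follows. This is not the authors' implementation; the function name, scale parameter `gamma`, and the normalized unit-square image coordinates are assumptions for illustration. A 2D Cauchy variate is drawn as a multivariate t with one degree of freedom (Gaussian direction divided by the square root of a chi-square draw), which produces the heavy-tailed mix of short fixational shifts and occasional long saccade-like jumps characteristic of a Lévy flight.

```python
import numpy as np

def sample_gaze_shifts(n_steps, gamma=0.05, start=(0.5, 0.5), seed=0):
    """Simulate a scanpath as a Levy flight: each gaze shift is drawn
    from a 2D Cauchy distribution (multivariate t, 1 degree of freedom).
    `gamma` is a hypothetical scale parameter; positions are clipped to
    the unit square, standing in for normalized image coordinates."""
    rng = np.random.default_rng(seed)
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for _ in range(n_steps):
        z = rng.standard_normal(2)       # isotropic Gaussian direction
        u = rng.chisquare(1)             # chi-square draw, 1 dof
        shift = gamma * z / np.sqrt(u)   # heavy-tailed Cauchy step
        pos = np.clip(pos + shift, 0.0, 1.0)
        path.append(pos.copy())
    return np.array(path)               # shape: (n_steps + 1, 2)

path = sample_gaze_shifts(20)
```

Because the Cauchy distribution has no finite variance, occasional steps are far larger than the typical one, so the clipping to the image frame matters; in the paper this spatial term is only one of three factors, combined with feature saliency and the HMM's semantic transitions.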