Context-aware video retargeting via graph model

Publication Type:
Journal Article
Citation:
IEEE Transactions on Multimedia, 2013, 15 (7), pp. 1677-1687
Issue Date:
2013-10-31
Filename:
2013000997OK.pdf (Published Version, Adobe PDF, 1.83 MB)
Abstract:
Video retargeting is an active yet challenging research area. To maximize viewing comfort, the central challenge is to retain the spatial shape of important objects while ensuring temporal smoothness and coherence. Existing retargeting techniques handle these spatiotemporal requirements individually, preserving the spatial geometry and temporal coherence of each region in isolation. However, the spatiotemporal properties of video content should be context-relevant, i.e., regions belonging to the same object should undergo a uniform spatiotemporal transformation. By disregarding this contextual information, the divide-and-rule strategy of existing techniques often incurs various spatiotemporal artifacts. To achieve spatiotemporally coherent video retargeting, this paper proposes a novel context-aware solution via a graph model. First, we employ a grid-based warping framework that preserves the spatial structure and temporal motion trend at the granularity of grid cells. Second, we propose a graph-based motion layer partition algorithm that estimates the motions of different regions and simultaneously evaluates the contextual relationships between grid cells. Third, complementing saliency-based spatiotemporal information preservation, two novel context constraints are encoded to encourage the grid cells of the same object to undergo uniform spatial and temporal transformations, respectively. Finally, we formulate the objective function as a quadratic programming problem. Our method achieves satisfactory spatiotemporal coherence while largely avoiding artifacts. In addition, the grid-cell-wise motion estimation can be computed every few frames, which markedly improves speed. Experimental results and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our approach. © 2013 IEEE.
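To make the quadratic programming idea concrete, below is a minimal 1-D sketch of saliency-weighted grid warping, not the paper's full spatiotemporal formulation: it solves only for column widths of a single frame and omits the motion layer partition and the context constraints. The function name, the closed-form KKT solve, and the saliency values are illustrative assumptions.

```python
import numpy as np

def retarget_column_widths(saliency, src_width, dst_width):
    """Toy 1-D analogue of saliency-weighted grid warping (illustrative only).

    Solves the equality-constrained QP
        min_w  sum_i s_i * (w_i - c_i)^2   s.t.  sum_i w_i = dst_width,
    where c_i are the original column widths and s_i is per-column saliency.
    Salient columns resist deviating from their original width, so the
    shrinkage is pushed into low-saliency columns.
    """
    s = np.asarray(saliency, dtype=float)
    n = s.size
    c = np.full(n, src_width / n)  # original (uniform) column widths
    # Closed-form KKT solution: w_i = c_i + lambda / (2 s_i), with lambda
    # chosen so the new widths sum to the target width.
    lam = 2.0 * (dst_width - c.sum()) / np.sum(1.0 / s)
    return c + lam / (2.0 * s)

# Example: shrink a 640-px frame to 480 px. The salient middle columns keep
# nearly their original width; the low-saliency borders absorb the change.
widths = retarget_column_widths([0.2, 0.2, 5.0, 5.0, 0.2, 0.2], 640, 480)
print(widths, widths.sum())  # widths sum to 480
```

In the paper's full setting, the unknowns are the 2-D grid vertex positions over time, and the quadratic objective additionally carries temporal smoothness terms plus the two context constraints that couple cells belonging to the same motion layer; the problem remains a QP and can be solved with a standard sparse solver.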