Implicit motion-shape model: A generic approach for action matching

Publication Type:
Conference Proceeding
Citation:
Proceedings - International Conference on Image Processing, ICIP, 2010, pp. 1477-1480
Issue Date:
2010-12-01
We develop a robust technique for matching human actions in video. Given a query video, Motion History Images (MHIs) are constructed for consecutive keyframes. Each MHI is then divided into local Motion-Shape regions, which allows the action to be analyzed as a set of sparse space-time patches in 3D. Inspired by the Generalized Hough Transform, we develop the Implicit Motion-Shape Model, which integrates these local patches to describe the dynamic characteristics of the query action. Motion segments are retrieved from candidate videos in the same way and projected onto the Hough space built from the query model; the matching score is then obtained by Parzen-window density estimation at multiple scales. Experiments on popular datasets demonstrate the efficiency of this approach: highly accurate matches are returned within acceptable processing time. © 2010 IEEE.
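The abstract's first step, building a Motion History Image, can be sketched in a few lines of NumPy. This is a generic MHI formulation (the paper gives no code, and the function name, frame-difference threshold `delta`, and temporal window `tau` here are illustrative assumptions, not the authors' parameters): pixels where motion is detected are stamped with the maximal value, while all other pixels decay, so recent motion appears brighter than older motion.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=15, delta=30):
    """Update a Motion History Image with one new frame.

    Pixels whose intensity changed by more than `delta` are set to
    `tau` (the temporal window, in frames); all other pixels decay
    by 1 toward 0. Parameter values here are illustrative.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion = diff > delta
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# Toy example: a bright blob moving one pixel right per frame.
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(4)]
for t, f in enumerate(frames):
    f[3, 2 + t] = 255

mhi = np.zeros((8, 8), dtype=np.int16)
for prev, cur in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, prev, cur)
# The most recent motion locations hold the largest MHI values;
# earlier locations have decayed.
```

Dividing such an MHI into local regions then yields the Motion-Shape patches that the model votes with.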
Please use this identifier to cite or link to this item:
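The final scoring step, Parzen-window density estimation over Hough votes at multiple scales, can be sketched as follows. This is a minimal illustration of the general technique, not the paper's implementation; the Gaussian kernel, the bandwidth set, and taking the best scale's density as the score are my assumptions.

```python
import numpy as np

def parzen_score(votes, center, bandwidths=(1.0, 2.0, 4.0)):
    """Score a candidate action location by Gaussian Parzen-window
    density estimation over Hough votes, evaluated at several
    scales (bandwidths); the best scale's density is returned.
    """
    votes = np.asarray(votes, dtype=float)
    center = np.asarray(center, dtype=float)
    sq_dist = np.sum((votes - center) ** 2, axis=1)
    dim = votes.shape[1]
    best = 0.0
    for h in bandwidths:
        # Mean of Gaussian kernels centered on each vote.
        density = np.mean(np.exp(-sq_dist / (2 * h * h)))
        density /= (np.sqrt(2 * np.pi) * h) ** dim
        best = max(best, density)
    return best

# Votes tightly clustered around the candidate center score higher
# than scattered votes, mirroring a good vs. poor action match.
tight = parzen_score([[0, 0], [0.1, 0], [0, 0.1]], [0, 0])
scattered = parzen_score([[5, 5], [-5, 3], [2, -4]], [0, 0])
```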