Harnessing lab knowledge for real-world action recognition

Publication Type: Journal Article
Citation: International Journal of Computer Vision, 2014, 109 (1-2), pp. 60-73
Issue Date: 2014-01-01
File: Harnessing Lab Knowledge for Real-World Action Recognition.pdf (Published Version, Adobe PDF, 1.26 MB)
Abstract:
Much research on human action recognition has been oriented toward performance gains on lab-collected datasets. Yet real-world videos are more diverse, contain more complicated actions, and often only a few of them are precisely labeled, so recognizing actions in these videos is a challenging task. The paucity of labeled real-world videos motivates us to "borrow" strength from other resources. Specifically, considering that many lab datasets are available, we propose to harness lab datasets to facilitate action recognition in real-world videos, given that the lab and real-world datasets are related. As their action categories are usually inconsistent, we design a multi-task learning framework to jointly optimize the classifiers for both sides. The general Schatten $p$-norm is exerted on the two classifiers to explore the shared knowledge between them. In this way, our framework is able to mine the shared knowledge between two datasets even if they have different action categories, which is a major virtue of our method. The shared knowledge is further used to improve action recognition in the real-world videos. Extensive experiments are performed on real-world datasets with promising results. © 2014 Springer Science+Business Media New York.
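For readers unfamiliar with the regularizer, the Schatten $p$-norm of a matrix is the $\ell_p$ norm of its singular values, and a joint objective of the kind the abstract describes can be sketched as below. The stacked classifier matrix $W = [\,W_{\text{lab}},\, W_{\text{real}}\,]$, the generic losses $\mathcal{L}_{\text{lab}}$ and $\mathcal{L}_{\text{real}}$, and the trade-off parameter $\lambda$ are illustrative assumptions, not the paper's exact formulation:

$$
\|W\|_{S_p} = \Big(\sum_i \sigma_i(W)^p\Big)^{1/p},
\qquad
\min_{W_{\text{lab}},\, W_{\text{real}}}\;
\mathcal{L}_{\text{lab}}(W_{\text{lab}}) + \mathcal{L}_{\text{real}}(W_{\text{real}})
+ \lambda\,\big\|[\,W_{\text{lab}},\, W_{\text{real}}\,]\big\|_{S_p}^{p},
$$

where $\sigma_i(W)$ are the singular values of $W$. For $p = 1$ the Schatten norm reduces to the trace (nuclear) norm, which encourages the stacked matrix to be low rank and thus pushes the two classifiers toward a shared subspace; $p = 2$ recovers the Frobenius norm.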