Visual Recognition in RGB Images and Videos by Learning from RGB-D Data

Publication Type:
Journal Article
Citation:
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40 (8), pp. 2030 - 2036
Issue Date:
2018-08-01
Abstract:
© 1979-2012 IEEE. In this work, we propose a framework for recognizing RGB images or videos by learning from RGB-D training data that contains additional depth information. We formulate this task as a new unsupervised domain adaptation (UDA) problem, in which we aim to take advantage of the additional depth features in the source domain and also cope with the data distribution mismatch between the source and target domains. To handle the distribution mismatch, we propose to learn an optimal projection matrix that maps the samples from both domains into a common subspace in which the mismatch is reduced. Such a projection matrix can be effectively optimized by exploiting different strategies. Moreover, we also explore different ways of exploiting the additional depth features. To cope with these two issues simultaneously, we formulate a unified learning framework called domain adaptation from multi-view to single-view (DAM2S). By defining various forms of regularizers in our DAM2S framework, different strategies can be readily incorporated to learn robust SVM classifiers for classifying the target samples, and three methods are developed under our DAM2S framework. We conduct comprehensive experiments on object recognition as well as cross-dataset and cross-view action recognition, which demonstrate the effectiveness of our proposed methods for recognizing RGB images and videos by learning from RGB-D data.
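
The subspace idea described in the abstract (learn a projection that maps source and target samples into a common subspace where the distribution mismatch is reduced, then train an SVM on the projected source samples) can be illustrated with a minimal sketch. The sketch below is not the DAM2S method from the paper: it uses a linear Transfer Component Analysis style projection as a stand-in for the learned projection matrix, ignores the additional depth view, and assumes numpy, scipy, and scikit-learn are available; all data, function names, and parameters are illustrative.

# Illustrative sketch only -- NOT the DAM2S method from the paper.
# It shows the generic idea: learn a projection matrix W that reduces the
# source/target distribution mismatch (a linear MMD term, as in Transfer
# Component Analysis), then train an SVM on the projected source samples.
import numpy as np
from scipy.linalg import eig
from sklearn.svm import SVC

def linear_tca_projection(Xs, Xt, dim=10, mu=1.0):
    """Return a d x dim projection matrix W (TCA-style, linear kernel)."""
    X = np.vstack([Xs, Xt]).T          # d x n, columns are samples
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    # MMD coefficient matrix L: penalizes the gap between domain means
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    # Centering matrix H: preserves variance of the projected data
    H = np.eye(n) - np.ones((n, n)) / n
    d = X.shape[0]
    A = X @ L @ X.T + mu * np.eye(d)   # mismatch term + regularization
    B = X @ H @ X.T                    # variance term
    vals, vecs = eig(B, A)             # generalized eigenproblem B w = lambda A w
    order = np.argsort(-vals.real)     # keep directions with largest eigenvalues
    return vecs[:, order[:dim]].real   # d x dim projection matrix

# Toy usage with random data standing in for visual features.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (100, 20))           # labeled source features
ys = (Xs[:, 0] > 0).astype(int)
Xt = rng.normal(0.5, 1.2, (80, 20))            # unlabeled target features, shifted
yt = (Xt[:, 0] > 0.5).astype(int)              # used only for evaluation

W = linear_tca_projection(Xs, Xt, dim=5)
clf = SVC(kernel="linear").fit(Xs @ W, ys)     # SVM on projected source samples
print("target accuracy:", clf.score(Xt @ W, yt))

In the paper's setting, the source samples would carry multi-view RGB-D training features and the target samples RGB-only features; the regularizers that DAM2S defines to exploit the extra depth view have no counterpart in this generic baseline.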