Exploiting depth from single monocular images for object detection and semantic segmentation

Publication Type:
Journal Article
Citation:
IEEE Transactions on Image Processing, 2017, 26 (2), pp. 836 - 846
Issue Date:
2017-02-01
File:
07707416.pdf (Published Version, Adobe PDF, 1.87 MB)
Abstract:
Augmenting RGB data with measured depth has been shown to improve the performance of a range of computer vision tasks, including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect make it easy to acquire such depth information, the vast majority of images used in vision tasks contain no depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. We then learn deep depth features from the estimated depth and combine them with RGB features for object detection and semantic segmentation. In addition, we propose an RGB-D semantic segmentation method that applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several datasets and demonstrate that incorporating information from estimated depth markedly improves the performance of both object detection and semantic segmentation.
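To illustrate the multi-task training scheme mentioned in the abstract, the following is a minimal sketch (not the authors' released code) of a network with a shared encoder and two heads, one for per-pixel semantic label prediction and one for per-pixel depth value regression, trained with a weighted sum of the two losses. The backbone size, the L1 depth loss, and the `depth_weight` parameter are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a joint segmentation + depth-regression model (PyTorch-style).
# Architecture sizes, loss choices, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSegDepth(nn.Module):
    def __init__(self, num_classes=40, depth_weight=0.5):
        super().__init__()
        # Small shared encoder standing in for a deep RGB backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific heads on the shared features.
        self.seg_head = nn.Conv2d(128, num_classes, 1)   # semantic label prediction
        self.depth_head = nn.Conv2d(128, 1, 1)           # depth value regression
        self.depth_weight = depth_weight

    def forward(self, rgb):
        feats = self.encoder(rgb)
        size = rgb.shape[-2:]
        # Upsample both predictions back to the input resolution.
        seg_logits = F.interpolate(self.seg_head(feats), size=size,
                                   mode="bilinear", align_corners=False)
        depth_pred = F.interpolate(self.depth_head(feats), size=size,
                                   mode="bilinear", align_corners=False)
        return seg_logits, depth_pred

    def loss(self, seg_logits, depth_pred, seg_labels, depth_target):
        # Joint objective: cross-entropy on labels plus weighted L1 depth regression.
        seg_loss = F.cross_entropy(seg_logits, seg_labels)
        depth_loss = F.l1_loss(depth_pred.squeeze(1), depth_target)
        return seg_loss + self.depth_weight * depth_loss

# Usage: one training step on dummy data.
model = MultiTaskSegDepth(num_classes=40)
rgb = torch.randn(2, 3, 64, 64)
labels = torch.randint(0, 40, (2, 64, 64))
depth = torch.rand(2, 64, 64)
seg_logits, depth_pred = model(rgb)
loss = model.loss(seg_logits, depth_pred, labels, depth)
loss.backward()
```

The point of the shared encoder is that the depth-regression branch acts as an auxiliary supervision signal, so the RGB features are encouraged to encode geometric cues even when no measured depth is available at test time.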