Depth Super-Resolution on RGB-D Video Sequences with Large Displacement 3D Motion

Publication Type:
Journal Article
IEEE Transactions on Image Processing, 2018, 27 (7), pp. 3571-3585
© 1992-2012 IEEE. To enhance the resolution and accuracy of depth data, several video-based depth super-resolution methods have been proposed that exploit temporally neighboring depth images. These methods typically consist of two stages: motion compensation of the temporally neighboring depth images, followed by fusion of the compensated images. However, large displacement 3D motion often causes compensation errors, which then propagate into the fusion stage. This paper proposes a video-based depth super-resolution method with novel motion compensation and fusion approaches. We argue that the 3D nearest neighbor field (NNF) is a better choice than the positions given by the true motion displacement for depth enhancement. To handle large displacement 3D motion, the compensation stage therefore uses the 3D NNF instead of the true motion employed by previous methods. The fusion stage is then modeled as a regression problem that efficiently predicts the super-resolution result for each depth image from its compensated neighbors. A new deep convolutional neural network architecture is designed for this fusion, enabling a large amount of video data to be used to learn the complicated regression function. We comprehensively evaluate our method on various RGB-D video sequences and demonstrate its superior performance.
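The paper's compensation stage builds a 3D nearest neighbor field over RGB-D data; as a rough, much-simplified illustration of the nearest-neighbor-field idea, the sketch below performs brute-force 2D patch matching in NumPy. Each patch in the target depth frame is matched to its most similar patch anywhere in a neighboring frame, and the matched patch centers form the compensated frame. The function name, patch size, and SSD distance are illustrative choices, not the paper's method.

```python
import numpy as np

def nnf_compensate(target, neighbor, patch=3):
    """Toy nearest-neighbor-field compensation (illustrative, not the
    paper's 3D NNF): every patch of `target` is matched by brute-force
    SSD search against all patches of `neighbor`, and the best match's
    center pixel is written into the compensated output."""
    h, w = target.shape
    r = patch // 2
    # Edge-pad so border pixels also have full patches.
    tp = np.pad(target, r, mode="edge")
    pn = np.pad(neighbor, r, mode="edge")
    # Enumerate every candidate patch of the neighbor frame once.
    cand = np.lib.stride_tricks.sliding_window_view(pn, (patch, patch))
    cand = cand.reshape(-1, patch * patch)  # (h*w, patch^2)
    out = np.zeros_like(target)
    for y in range(h):
        for x in range(w):
            q = tp[y:y + patch, x:x + patch].reshape(-1)
            d = ((cand - q) ** 2).sum(axis=1)  # SSD to every candidate
            out[y, x] = cand[d.argmin()][patch * patch // 2]  # center
    return out
```

Because the match is purely appearance-based, a good candidate patch can come from anywhere in the neighbor frame, which is what lets an NNF cope with large displacements where a small-motion flow estimate would fail; the real method extends this search to 3D and feeds the compensated frames into the learned fusion network.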