Self-Supervised Depth Completion From Direct Visual-LiDAR Odometry in Autonomous Driving
- Publisher: Institute of Electrical and Electronics Engineers
- Publication Type: Journal Article
- Citation: IEEE Transactions on Intelligent Transportation Systems, 2021, 23(8)
- Issue Date: 2021-08-29
Closed Access
| Filename | Description | Size |
|---|---|---|
| Self-Supervised_Depth_Completion_From_Direct_Visual-LiDAR_Odometry_in_Autonomous_Driving (1).pdf | Published version | 5.46 MB |
This item is closed access and not available.
In this work, a simple yet effective deep neural network is proposed to generate a dense depth map of the scene by exploiting both the sparse LiDAR point cloud and the monocular camera image. Specifically, a feature pyramid network is first employed to extract feature maps from images across time. The relative pose is then calculated by minimizing the feature distance between aligned pixels in inter-frame feature maps. Finally, the feature maps and the relative pose are used to compute a feature-metric loss for training the depth completion network. The key novelty of this work is a self-supervised mechanism that trains the depth completion network directly from visual-LiDAR odometry between consecutive frames. Comprehensive experiments and ablation studies on the KITTI benchmark dataset demonstrate superior performance over other state-of-the-art methods in both pose estimation and depth completion. Detailed results for the proposed approach (referred to as SelfCompDVLO) can be found on the KITTI depth completion benchmark. The source code, models, and data have been made available on GitHub.
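The core of the self-supervised signal described above is a feature-metric loss: pixels of frame *t* are lifted to 3-D with the predicted dense depth, reprojected into frame *t+1* with the estimated relative pose, and the distance between the aligned deep features is penalized. Below is a minimal PyTorch sketch of that idea; it is not the authors' released code, and the function names, tensor shapes, and the L1 feature distance are illustrative assumptions.

```python
# Minimal sketch of a warping-based feature-metric loss (illustrative,
# not the released SelfCompDVLO implementation).
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift every pixel to a 3-D point using the predicted dense depth.

    depth: (B, 1, H, W); K_inv: (B, 3, 3) inverse camera intrinsics.
    Returns points of shape (B, 3, H*W).
    """
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0)   # (3, H, W)
    pix = pix.reshape(1, 3, -1).expand(B, -1, -1)             # (B, 3, H*W)
    rays = K_inv @ pix                                        # viewing rays
    return rays * depth.reshape(B, 1, -1)                     # scale by depth

def feature_metric_loss(feat_t, feat_t1, depth_t, T_t_to_t1, K):
    """Warp frame-t pixels into frame t+1 with the relative pose and
    penalize the L1 distance between the aligned feature maps.

    feat_t, feat_t1: (B, C, H, W) feature maps from the pyramid network.
    depth_t: (B, 1, H, W) predicted dense depth; T_t_to_t1: (B, 4, 4) pose.
    """
    B, C, H, W = feat_t.shape
    pts = backproject(depth_t, torch.inverse(K))              # (B, 3, H*W)
    R, t = T_t_to_t1[:, :3, :3], T_t_to_t1[:, :3, 3:]
    proj = K @ (R @ pts + t)                                  # into frame t+1
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)            # pixel coords
    # Normalize to [-1, 1] as required by grid_sample.
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
    # Out-of-view pixels are zero-padded; a real implementation would mask them.
    feat_warp = F.grid_sample(feat_t1, grid, align_corners=True)
    return (feat_t - feat_warp).abs().mean()
```

In this sketch, the same warped residual would serve both stages the abstract describes: minimizing it over the pose yields the direct visual-LiDAR odometry, while minimizing it over the predicted depth trains the completion network, so no dense ground-truth depth is required.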
