Multimodal localization: Stereo over LiDAR map
- Publisher:
- Wiley
- Publication Type:
- Journal Article
- Citation:
- Journal of Field Robotics, 2020, 37, (6), pp. 1003-1026
- Issue Date:
- 2020-01-21
Closed Access
Filename | Description | Size
---|---|---
rob.21936.pdf | Published version | 10.95 MB
This item is closed access and not available.
In this paper, we present a real-time, high-precision visual localization system for an autonomous vehicle that employs only low-cost stereo cameras to localize the vehicle against an a priori map built using a more expensive 3D LiDAR sensor. To this end, we construct two different visual maps: a sparse feature visual map for visual odometry (VO) based motion tracking, and a semidense visual map for registration with the prior LiDAR map. To register two point clouds sourced from different modalities (i.e., cameras and LiDAR), we leverage probabilistic weighted normal distributions transformation (ProW-NDT), particularly taking into account the uncertainty of the source point clouds. The registration results are then fused via pose graph optimization to correct the VO drift. Moreover, surfels extracted from the prior LiDAR map are used to refine the sparse 3D visual features, which further improves VO-based motion estimation. The proposed system has been tested extensively in both simulated and real-world experiments, showing that robust, high-precision, real-time localization can be achieved.
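To give a flavor of the NDT-style registration the abstract refers to, the sketch below is a minimal vanilla 2D normal distributions transform: the target (LiDAR-like) cloud is binned into cells, each summarized by a Gaussian (mean and covariance), and a candidate translation of the source (camera-derived) cloud is scored by the Gaussian likelihood of its points. This is *not* the paper's ProW-NDT (no per-point uncertainty weighting, no Newton optimization, translation-only via grid search); all names here are illustrative.

```python
import numpy as np

def build_ndt(points, cell=2.0):
    """Bin target points into cells; summarize each cell by (mean, inverse covariance)."""
    grid = {}
    for p in points:
        grid.setdefault(tuple(np.floor(p / cell).astype(int)), []).append(p)
    cells = {}
    for key, pts in grid.items():
        pts = np.asarray(pts)
        if len(pts) >= 3:  # need enough points for a usable covariance
            mu = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-3 * np.eye(pts.shape[1])  # regularize
            cells[key] = (mu, np.linalg.inv(cov))
    return cells

def ndt_score(points, cells, cell=2.0):
    """Sum of Gaussian responses of each source point under its cell's distribution."""
    score = 0.0
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))
        if key in cells:
            mu, icov = cells[key]
            d = p - mu
            score += np.exp(-0.5 * d @ icov @ d)
    return score

# Synthetic example: the source cloud is the target shifted by a known offset,
# so the best correcting translation should be approximately the negative offset.
rng = np.random.default_rng(0)
target = rng.normal(size=(400, 2)) * [5.0, 1.0]   # elongated 2D cloud
true_shift = np.array([0.6, -0.4])
source = target + true_shift

cells = build_ndt(target)
candidates = [np.array([tx, ty])
              for tx in np.linspace(-1, 1, 21)
              for ty in np.linspace(-1, 1, 21)]
best = max(candidates, key=lambda t: ndt_score(source + t, cells))
```

In the actual system, this scoring would be embedded in an iterative optimizer over full SE(3) poses, and the resulting pose would enter the pose graph as a registration constraint to correct VO drift.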