Persistent Stereo Visual Localization on Cross-Modal Invariant Map

Publisher: Institute of Electrical and Electronics Engineers
Publication Type: Journal Article
Citation: IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 11, pp. 4646-4658, 2020
Issue Date: 2020
Abstract:
Autonomous mobile vehicles are expected to perform persistent and accurate localization with low-cost equipment. To achieve this goal, we propose a stereo-camera-based visual localization method that uses a modified laser map, taking advantage of both the low cost of cameras and the high geometric precision of laser data to achieve long-term performance. Since LiDAR and camera measure the same environment in different modalities, we investigate this cross-modal invariance to adapt the laser map for visual localization. Specifically, a map learning algorithm is introduced that uses multi-session visual and laser data to sample the subsets of laser map points that remain useful for visual localization. Furthermore, a generative map model is derived to describe this cross-modal invariance, based on which two types of measurements are defined that model laser map points as visual observations. By tightly coupling these measurements within the local bundle adjustment of an online sliding-window visual odometry, the vehicle achieves robust localization even one year after the map was built. The effectiveness of the proposed method is evaluated on both the public KITTI datasets and datasets collected on our campus, which include seasonal, illumination, and object variations. In all experimental localization sessions, our method provides satisfactory results, even when the vehicle travels in the direction opposite to that of the mapping session, verifying the superior performance of laser-map-based visual localization.
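The central technical idea in the abstract is treating laser map points as additional visual observations inside the bundle adjustment of a sliding-window visual odometry. The sketch below illustrates that coupling in a deliberately simplified form, for a single camera pose rather than a full sliding window: the function names, the scalar cross-modal weight w_map, and the synthetic data are illustrative assumptions, not the authors' implementation (which also uses the learned generative map model to select and weight map points).

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, R, t, X):
    """Pinhole projection of 3-D world points X (N x 3) into pixels (N x 2)."""
    Xc = X @ R.T + t                          # world frame -> camera frame
    uv = Xc[:, :2] / Xc[:, 2:3]               # normalized image coordinates
    return uv * np.array([K[0, 0], K[1, 1]]) + K[:2, 2]

def residuals(pose, K, vis_X, vis_uv, map_X, map_uv, w_map):
    """Stack feature reprojection errors and weighted laser-map-point errors.

    pose is a 6-vector: axis-angle rotation (3) followed by translation (3).
    Laser map points matched to image measurements contribute the same kind
    of reprojection residual, down-weighted by an assumed cross-modal
    confidence w_map (a stand-in for the paper's learned measurement model).
    """
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    r_vis = (project(K, R, t, vis_X) - vis_uv).ravel()
    r_map = w_map * (project(K, R, t, map_X) - map_uv).ravel()
    return np.concatenate([r_vis, r_map])

# Toy usage with synthetic data standing in for one sliding-window frame.
rng = np.random.default_rng(0)
K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1]])
true_R = Rotation.from_rotvec([0.02, -0.01, 0.03]).as_matrix()
true_t = np.array([0.1, -0.05, 0.2])

vis_X = rng.uniform([-2, -2, 4], [2, 2, 10], (30, 3))   # triangulated features
map_X = rng.uniform([-2, -2, 4], [2, 2, 10], (20, 3))   # laser map points
vis_uv = project(K, true_R, true_t, vis_X) + rng.normal(0, 0.5, (30, 2))
map_uv = project(K, true_R, true_t, map_X) + rng.normal(0, 0.5, (20, 2))

sol = least_squares(residuals, np.zeros(6),
                    args=(K, vis_X, vis_uv, map_X, map_uv, 0.7),
                    loss="huber")   # robust loss against cross-modal outliers
print("estimated pose (rotvec, t):", sol.x)
```

A robust loss and a down-weighting factor on the map residuals are one plausible way to guard against mismatches between camera features and laser geometry; in the paper this role is played by the generative map model, which defines how laser map points behave as visual observations.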