IN2LAMA: INertial Lidar Localisation And Mapping

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2019 International Conference on Robotics and Automation (ICRA), May 2019, pp. 6388-6394
Issue Date:
2019-05-20
Filename: 08794429.pdf
Description: Published version
Format: Adobe PDF
Size: 294.2 kB
Abstract:
In this paper, we introduce a probabilistic framework for INertial Lidar Localisation And MApping (IN2LAMA). Most of today's lidars rely on spinning mechanisms and therefore do not capture instantaneous snapshots of the environment: the sensor can move while a scan is being acquired, and without a good estimate of this motion the resulting point clouds are distorted. The lidar mapping literature commonly assumes a constant-velocity motion model, an approximation that does not always hold. The key idea of the proposed framework is to exploit preintegrated measurements over upsampled inertial data to handle motion distortion without any explicit motion model. It tightly integrates inertial and lidar data in a batch on-manifold optimisation formulation. Temporally precise, upsampled preintegrated measurements allow frame-to-frame association of planar and edge features. Moreover, features are recomputed whenever the state estimate changes, strengthening the interaction between the front-end and the back-end. We validate the effectiveness of the approach on simulated and real data.
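
The core mechanism described in the abstract, correcting per-point motion distortion with poses derived from high-rate inertial data rather than a constant-velocity assumption, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it uses plain dead-reckoning in place of the paper's preintegration terms and batch on-manifold optimisation, and all names (so3_exp, integrate_imu, deskew_scan, the gravity argument) are illustrative assumptions.

# Minimal sketch (not the IN2LAMA implementation): undistorting a lidar sweep
# with poses integrated from high-rate IMU samples, instead of assuming a
# constant velocity over the sweep. Simple Euler integration stands in for the
# paper's preintegrated measurements.
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation matrix for a rotation vector w."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def integrate_imu(imu_stamps, gyro, accel, gravity):
    """Dead-reckon orientation and position at each IMU stamp, expressed in the
    frame of the first sample. Returns a list of (R, p) pairs."""
    R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
    poses = [(R.copy(), p.copy())]
    for i in range(1, len(imu_stamps)):
        dt = imu_stamps[i] - imu_stamps[i - 1]
        a_world = R @ accel[i - 1] + gravity          # remove gravity in the world frame
        p = p + v * dt + 0.5 * a_world * dt**2
        v = v + a_world * dt
        R = R @ so3_exp(gyro[i - 1] * dt)
        poses.append((R.copy(), p.copy()))
    return poses

def deskew_scan(points, point_stamps, imu_stamps, poses):
    """Express every lidar point in the sensor frame at the start of the sweep,
    using the pose of the nearest earlier IMU sample for that point."""
    out = np.empty_like(points)
    for i, (pt, t) in enumerate(zip(points, point_stamps)):
        j = np.searchsorted(imu_stamps, t, side='right') - 1
        R, p = poses[max(j, 0)]
        out[i] = R @ pt + p                           # point in the start-of-sweep frame
    return out

In the actual framework, such per-point corrections would be refreshed as the batch optimisation updates the state estimate, which is what allows planar and edge features to be re-extracted from the undistorted clouds as described in the abstract.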