Towards Observable Urban Visual SLAM

Publication Type: Thesis
Issue Date: 2020
Abstract:
Visual Simultaneous Localisation and Mapping (V-SLAM) estimates the robot state and a map of the environment by drawing inference from camera-captured data. It has been a major branch of research and is popular in applications owing to the rich information and low cost of vision measurement acquisition. However, in urban environments, where the camera-mounted vehicle moves in a straight line towards the road scene, a large number of features are difficult to triangulate in depth because of their small parallax angles; as a result, the classical V-SLAM algorithm becomes unstable and the system state is often unobservable. This thesis addresses the observability of urban SLAM with monocular cameras. It proposes a novel Bundle Adjustment (BA) formulation that tackles the problem at a fundamental level: by parameterising map points in an on-manifold ray parallax form, the SLAM formulation attains a stable configuration that guarantees local state observability despite the presence of low-parallax features. V-SLAM is known to be highly non-convex owing to the projective image formation principle, and slightly off-optimal initial values easily lead to sub-optimal final state estimates. In urban SLAM this is further exacerbated by collinear camera motion, which causes ambiguity in initial state estimation. A robust initialisation method is therefore proposed to provide unique, near-optimal initial estimates, effectively addressing the collinearity issue. For practical use, the urban-scene-friendly V-SLAM algorithms are integrated into a real-time Visual Inertial Navigation System (VINS). A series of quantitative analyses on benchmark datasets demonstrates the effectiveness of the proposed algorithms in urban environments.
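For readers unfamiliar with parallax-based feature parameterisations, the following LaTeX sketch illustrates one common form of the idea (a parallax-angle representation with main and associated anchor frames). It is included only as an illustration under that assumption; the thesis's on-manifold ray parallax formulation may differ, and the symbols and frame conventions below are introduced here, not taken from the thesis.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative parallax-angle parameterisation of a map point (an assumed
% standard form, not necessarily the thesis's exact formulation): a feature
% is described by the azimuth and elevation of its observation ray in a main
% anchor frame, plus the parallax angle subtended with an associated anchor.
\begin{align}
  \mathbf{f} &= (\psi,\ \theta,\ \omega), &
  \mathbf{r}(\psi,\theta) &=
  \begin{bmatrix}
    \cos\theta \sin\psi \\
    \sin\theta \\
    \cos\theta \cos\psi
  \end{bmatrix},
\end{align}
where $\psi$ and $\theta$ are the azimuth and elevation of the unit
observation ray $\mathbf{r}$ in the main anchor frame, and $\omega$ is the
parallax angle between the rays from the main and associated anchor cameras.
A low-parallax (distant) feature simply has $\omega \to 0$ while $\psi$ and
$\theta$ remain well constrained, so the representation stays numerically
stable where an inverse-depth or Euclidean point parameterisation would be
ill-conditioned.
\end{document}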