Stereo vision based three dimensional simultaneous localisation and mapping
- Publication Type: Thesis
- Issue Date: 2007
This item is open access.
This thesis deals with the problem of stereo vision-based three-dimensional Simultaneous Localisation and Mapping in the context of autonomous robotic navigation. Simultaneous Localisation and Mapping (SLAM) refers to the problem of a robot mapping landmarks in its environment while concurrently using those mapped features to localise itself. This thesis concentrates on the issues that arise from using a short-baseline stereo vision system as the primary sensor for observing the environment.
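As a rough illustration of why a short baseline is problematic, the sketch below projects a landmark into an idealised rectified stereo pair and propagates disparity noise into depth. The focal length, baseline, principal point, and noise level are illustrative assumptions, not the calibration used in the thesis.

```python
import numpy as np

def stereo_observation(landmark_cam, f=500.0, b=0.12, cx=320.0, cy=240.0):
    """Project a 3D point (camera frame) into a rectified stereo pair.

    Returns (u_left, u_right, v) in pixels. The division by depth Z is
    what makes the measurement model nonlinear.
    """
    X, Y, Z = landmark_cam
    u_l = f * X / Z + cx          # left image column
    u_r = f * (X - b) / Z + cx    # right image column, shifted by the baseline
    v = f * Y / Z + cy            # common row after rectification
    return np.array([u_l, u_r, v])

def depth_std(Z, sigma_d=0.5, f=500.0, b=0.12):
    """First-order propagation of disparity noise into depth:
    Z = f*b/d  =>  sigma_Z ~= Z**2 * sigma_d / (f * b).
    With a short baseline b, range uncertainty grows quadratically with Z.
    """
    return Z**2 * sigma_d / (f * b)

if __name__ == "__main__":
    print(stereo_observation(np.array([0.3, -0.1, 4.0])))
    for Z in (1.0, 2.0, 5.0, 10.0):
        print(f"Z = {Z:4.1f} m -> sigma_Z ~ {depth_std(Z):.3f} m")
```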
Initially, a stereo vision sensor is empirically studied in the context of SLAM.
Several error sources that could potentially affect the performance of SLAM
algorithms are identified. It is then shown that the observation model corresponding to the particular vision system is highly nonlinear and that, as a consequence, traditional filtering techniques such as the Extended Kalman Filter, when used to solve the SLAM problem, generate inconsistent state estimates. This observation leads to the
development of a novel nonlinear batch optimisation technique that is shown to
produce consistent state estimates.
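The abstract does not detail the batch optimisation technique itself, so the following is only a generic Gauss-Newton sketch of nonlinear batch estimation, included to illustrate the idea of iteratively relinearising a nonlinear measurement model rather than linearising once as the EKF does. The residual function, poses, and noise values are hypothetical.

```python
import numpy as np

def gauss_newton(residual_fn, x0, n_iters=10, eps=1e-6):
    """Generic Gauss-Newton loop: linearise the stacked residual around the
    current estimate and solve the normal equations for an update.
    residual_fn(x) must return the residual vector r(x)."""
    x = x0.astype(float)
    for _ in range(n_iters):
        r = residual_fn(x)
        # Numerical Jacobian of the residual (finite differences).
        J = np.zeros((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual_fn(x + dx) - r) / eps
        # Solve J^T J step = -J^T r for the Gauss-Newton step.
        step = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + step
        if np.linalg.norm(step) < 1e-9:
            break
    return x

# Toy example: recover a landmark position from noisy range measurements
# taken from known poses (purely illustrative).
poses = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5]])
landmark_true = np.array([3.0, 2.0])
ranges = np.linalg.norm(landmark_true - poses, axis=1) + 0.01

def residuals(landmark):
    return np.linalg.norm(landmark - poses, axis=1) - ranges

print(gauss_newton(residuals, x0=np.array([1.0, 1.0])))
```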
The next major contribution of this thesis arises from the development of a novel
Multi Map (MM) framework for SLAM. The framework was inspired by
observations of human navigation habits. The novel representation relies on two
individual maps in the localisation and mapping process. The Global Map (GM) is a
compact global representation of the robot's environment, and the Local Map (LM) is used exclusively for low-level navigation between local points within the robot's navigation horizon. The LM is in many respects similar to prevailing submap methods and hence retains the efficiency of such representations. However, the combination of the two map representations in the MM framework extends the capabilities of existing algorithms, not only by improving consistency but also by increasing efficiency through the compact representation and the unique feature marginalisation strategy. In addition, it facilitates the implementation of novel loop-closure techniques. The framework is well suited to sensors such as vision, where map sizes tend to grow rapidly owing to the nature of the sensing technique.
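The outline below is a hypothetical data-structure sketch of a two-map arrangement in the spirit of the MM framework: dense Local Map features serve short-horizon navigation, while a selected subset is promoted into a compact Global Map when the horizon moves on. The class and method names, and the trivial "marginalisation" step, are illustrative assumptions rather than the thesis's actual strategy.

```python
from dataclasses import dataclass, field

@dataclass
class Landmark:
    lid: int
    position: tuple          # (x, y, z) estimate
    is_global: bool = False

@dataclass
class LocalMap:
    """Dense features used only for navigation within the robot's
    current horizon; summarised or discarded once the horizon moves."""
    landmarks: dict = field(default_factory=dict)

    def add(self, lm: Landmark):
        self.landmarks[lm.lid] = lm

@dataclass
class GlobalMap:
    """Compact global representation kept for long-term localisation
    and loop closure."""
    landmarks: dict = field(default_factory=dict)

class MultiMap:
    """Illustrative two-map container (hypothetical API): selected local
    features are promoted to the global map and the remainder are
    dropped, standing in for a proper marginalisation step."""
    def __init__(self):
        self.local = LocalMap()
        self.global_map = GlobalMap()

    def close_horizon(self, keep_ids):
        for lid, lm in self.local.landmarks.items():
            if lid in keep_ids:
                lm.is_global = True
                self.global_map.landmarks[lid] = lm
        self.local = LocalMap()   # start a fresh local map
```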
Finally, the algorithms are validated with real experimental data collected using a
mobile robot platform traversing an indoor environment.