Scale estimation for monocular SLAM using depth from defocus

Publication Type:
Thesis
Issue Date:
2018
Abstract:
An autonomous robot must map its environment and estimate its egomotion to perform effectively. Monocular simultaneous localization and mapping (SLAM) can generate maps of the robot’s environment, but only up to an unknown absolute scale. SLAM systems based on stereo or RGB-D cameras can recover metric scale but have disadvantages in cost, size, and power requirements. This thesis focuses on the development of an absolute metric scale monocular SLAM system for autonomous robots. A depth from defocus (DfD) technique that relies on image blur is used to estimate the metric scale. However, existing DfD methods suffer from ambiguities caused by scene texture, motion blur, and the location of the focal plane. The novelty of this research is combining DfD with camera motion to resolve estimation errors caused by these ambiguities and compute a reliable measure of metric scale. Monocular SLAM algorithms are also prone to scale drift, where the estimated scale gradually changes during mapping. It is demonstrated that integrating DfD into monocular SLAM eliminates scale drift and results in accurate metric scale maps.
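As context for the core idea, the sketch below illustrates one plausible way DfD can supply metric scale to a monocular SLAM map: a thin-lens defocus model converts measured blur diameters into metric depths, and the ratio of those depths to the SLAM system's up-to-scale depths gives a global scale factor. This is not the thesis's actual algorithm; the function names, camera parameters, and the choice of a median over matched feature points are all illustrative assumptions.

```python
import numpy as np

def depth_from_blur(blur_diam_px, focal_len_m, aperture_m,
                    focus_dist_m, px_size_m):
    """Invert the thin-lens defocus model to recover metric depth.

    For a point beyond the focal plane, the blur-circle diameter is
        c = A * f * (d - d_f) / (d * (d_f - f)),
    where A is the aperture diameter, f the focal length, d_f the
    focus distance, and d the object depth. Solving for d gives the
    expression below.
    """
    c = blur_diam_px * px_size_m                # blur diameter in metres
    k = aperture_m * focal_len_m / (focus_dist_m - focal_len_m)
    # c = k * (d - d_f) / d  =>  d = k * d_f / (k - c)
    return k * focus_dist_m / (k - c)

def estimate_scale(dfd_depths_m, slam_depths_arb):
    """Global scale factor mapping SLAM's arbitrary units to metres.

    A median over matched feature points suppresses outliers arising
    from texture and motion-blur ambiguities in individual DfD estimates.
    """
    ratios = np.asarray(dfd_depths_m) / np.asarray(slam_depths_arb)
    return float(np.median(ratios))

# Illustrative usage with made-up camera parameters and measurements:
blur_px = np.array([3.1, 4.0, 2.5])             # measured blur diameters (px)
depths_m = depth_from_blur(blur_px, focal_len_m=0.025, aperture_m=0.0125,
                           focus_dist_m=1.0, px_size_m=4e-6)
slam_depths = np.array([1.8, 2.6, 1.4])         # up-to-scale SLAM depths
scale = estimate_scale(depths_m, slam_depths)   # metres per SLAM unit
```

In the spirit of the abstract's claim, a correction of this kind would be applied continuously as new keyframes arrive rather than once at initialization, which is what pins the map to a fixed metric scale and counters gradual scale drift.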