Vision-based SLAM using natural features in indoor environments

Publication Type:
Conference Proceeding
Proceedings of the 2005 Intelligent Sensors, Sensor Networks and Information Processing Conference, 2005, pp. 151-156
This paper presents a practical approach to solving the simultaneous localization and mapping (SLAM) problem for autonomous mobile platforms using natural visual landmarks obtained from a stereoscopic camera. It is an attempt to depart from traditional sensors such as laser rangefinders in order to gain the many benefits of nature-inspired, information-rich 3D vision sensors. While this makes the system fully observable, in that the sensor provides enough information (range and bearing) to compute the full 2D state of the observed landmarks from a single position, depth information is difficult to rely on, particularly for measurements beyond a few metres (in fact the full 3D state is observable, but here robot motion is constrained to 2D and only the 2D problem is considered). The work presented here attempts to overcome this drawback by tackling the problem from a partially measurable SLAM perspective, in that only the landmark bearing from one of the cameras is employed in the fusion estimation. Range estimates from the stereo pair are used only during map building, in the landmark initialization phase, to provide a reasonably accurate initial estimate. An additional benefit of the approach lies in the data association aspect of SLAM: the availability of powerful feature extraction algorithms from the vision community, such as SIFT, permits a more flexible SLAM implementation separated from feature representation, extraction and matching, essentially carrying out matching with minimal recourse to geometry. Simulation results on real data illustrate the validity of the approach. © 2005 IEEE.
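The idea described in the abstract — use the stereo range measurement once, to initialize a landmark, then refine the estimate with bearing-only updates — can be sketched as follows. This is an illustrative example, not the authors' implementation: the robot pose is assumed known, the landmark is 2D, and the noise value `R` is a made-up placeholder.

```python
# Illustrative sketch (not the paper's code): initialize a 2D landmark from
# one stereo range+bearing measurement, then refine it with EKF updates that
# fuse only the bearing, as the abstract describes. Robot pose is assumed
# known here to keep the example minimal.
import numpy as np

def init_landmark(robot_xy, robot_theta, rng, bearing):
    """One-shot landmark initialization from stereo range + bearing."""
    a = robot_theta + bearing
    return robot_xy + rng * np.array([np.cos(a), np.sin(a)])

def bearing_update(mu, Sigma, robot_xy, robot_theta, z,
                   R=np.deg2rad(1.0) ** 2):
    """One EKF update of the landmark estimate mu (2,) using bearing z only.

    R is an assumed bearing-noise variance (1 degree std), purely for
    illustration.
    """
    dx, dy = mu - robot_xy
    q = dx ** 2 + dy ** 2
    z_hat = np.arctan2(dy, dx) - robot_theta      # predicted bearing
    H = np.array([[-dy / q, dx / q]])             # Jacobian w.r.t. mu
    S = H @ Sigma @ H.T + R                       # innovation covariance (1x1)
    K = Sigma @ H.T / S                           # Kalman gain (2x1)
    innov = np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))  # wrap angle
    mu = mu + (K * innov).ravel()
    Sigma = (np.eye(2) - K @ H) @ Sigma
    return mu, Sigma
```

A bearing measurement alone cannot correct range error from a single pose, which is why the stereo-derived range matters at initialization: it places the landmark near the right depth, and subsequent bearing-only updates from different robot positions refine it.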