Information-driven 6D SLAM based on ranging vision

Publication Type:
Conference Proceeding
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2008, pp. 2072 - 2077
This paper presents a novel solution for building three-dimensional dense maps of unknown and unstructured environments at reduced computational cost. This is achieved by giving the robot the 'intelligence' to select, from the steadily collected data, the maximally informative observations to use when estimating the robot's location and its surroundings. We show that, although evaluating the information gain of each frame introduces an additional computational cost, the overall efficiency is significantly increased by keeping the information matrix compact. A notable advantage of this strategy is that the continuously gathered data is not heuristically segmented prior to being input to the filter. Quite the opposite: the scheme lends itself to statistical optimality and can handle large data sets collected at realistic sampling rates. The strategy is generic to any 3D feature-based simultaneous localization and mapping (SLAM) algorithm in the information form, but in the work presented here it is closely coupled to a proposed novel appearance-based sensory package. This package consists of a conventional camera and a range imager, which together provide the range, bearing and elevation inputs for visually salient features, as commonly used by three-dimensional point-based SLAM. It is also particularly well suited to lightweight mobile platforms such as those commonly employed for Urban Search and Rescue (USAR), chosen here to demonstrate the merits of the proposed strategy. ©2008 IEEE.
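To illustrate the kind of selection criterion the abstract describes, the sketch below scores candidate observation frames by the gain in the log-determinant of the filter's information matrix and keeps only frames that exceed a threshold. This is a generic, hedged illustration of information-gain gating for an information-form SLAM filter, not the authors' exact formulation; the measurement Jacobians, noise precision, and threshold are all made-up toy values.

```python
import numpy as np

def information_gain(Y, H, R_inv):
    """Gain in log-determinant of the information matrix Y when fusing
    a measurement with Jacobian H and inverse noise covariance R_inv.
    In the information form, fusing adds H^T R^{-1} H to Y."""
    Y_new = Y + H.T @ R_inv @ H
    _, logdet_new = np.linalg.slogdet(Y_new)
    _, logdet_old = np.linalg.slogdet(Y)
    return 0.5 * (logdet_new - logdet_old)

# Toy example: a 2-state filter and scalar measurements (hypothetical values).
Y = np.eye(2) * 4.0            # current information matrix
R_inv = np.array([[10.0]])     # measurement precision (1/noise variance)

# Two candidate frames: one constrains the state well, one barely does.
informative = np.array([[1.0, 0.5]])
redundant = np.array([[0.01, 0.0]])

gain_hi = information_gain(Y, informative, R_inv)
gain_lo = information_gain(Y, redundant, R_inv)

threshold = 0.1                # gating threshold, chosen arbitrarily here
selected = [g > threshold for g in (gain_hi, gain_lo)]
# Only the informative frame is passed to the filter; the redundant one
# is discarded, keeping the information matrix compact.
```

Discarding low-gain frames is what keeps the cost of the filter update bounded even though data arrives continuously at sensor rate.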