Towards robust vision-based self-localization of vehicles in dense urban environments

Publication Type:
Conference Proceeding
Citation:
IEEE International Conference on Intelligent Robots and Systems, 2012, pp. 3152-3157
Issue Date:
2012-12-01
Filename:
2011004737OK.pdf (Published version, Adobe PDF, 1.39 MB)
Abstract:
Self-localization of ground vehicles in densely populated urban environments poses a significant challenge. The presence of tall buildings in close proximity to traversable areas limits the use of GPS-based positioning techniques in such environments. This paper presents an approach to global localization on a hybrid metric-topological map using a monocular camera and wheel odometry. The global topology is built upon spatially separated reference places represented by local image features. In contrast to other approaches, we employ a feature selection scheme that yields a more discriminative representation of the reference places while rejecting the multitude of features caused by dynamic objects. Through fusion with additional local cues, the reference places are assigned discrete map positions, enabling metric localization within the map. Self-localization is then carried out by associating observed visual features with those stored for each reference place. Comprehensive experiments in a dense urban environment, spanning a period of about nine months, demonstrate the robustness of our approach in environments subject to strong dynamic and environmental changes. © 2012 IEEE.
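The abstract describes the pipeline only at a high level. The sketch below illustrates, under assumed data structures, how associating observed visual features with those stored for each reference place could select a map position; it is not the authors' implementation, and the names (ReferencePlace, match_place) and the ratio-test threshold are illustrative assumptions.

# Minimal sketch (assumed data structures, not the paper's code) of place
# recognition by feature association: each reference place stores selected
# local feature descriptors and a discrete map position; a query image is
# localized by finding the reference place its descriptors match best.
import numpy as np

class ReferencePlace:
    def __init__(self, place_id, position_xy, descriptors):
        self.place_id = place_id        # node in the global topology
        self.position_xy = position_xy  # discrete metric position on the map
        self.descriptors = descriptors  # (N, D) array of stored local features

def match_place(query_descriptors, places, ratio=0.8):
    """Return the reference place whose stored features best match the query."""
    best_place, best_count = None, 0
    for place in places:
        # Pairwise Euclidean distances between query and stored descriptors.
        d = np.linalg.norm(
            query_descriptors[:, None, :] - place.descriptors[None, :, :], axis=2)
        # Ratio test: accept a match only if the nearest stored descriptor
        # is clearly closer than the second nearest.
        order = np.argsort(d, axis=1)
        rows = np.arange(d.shape[0])
        good = d[rows, order[:, 0]] < ratio * d[rows, order[:, 1]]
        if good.sum() > best_count:
            best_place, best_count = place, int(good.sum())
    return best_place, best_count

In such a scheme, the matched reference place's stored discrete map position would serve as the global position estimate, with wheel odometry bridging the motion between successive reference places.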