DocumentCode :
3625421
Title :
Large scale vision-based navigation without an accurate global reconstruction
Author :
Sinisa Segvic;Anthony Remazeilles;Albert Diosi;Francois Chaumette
Author_Institution :
IRISA/INRIA, Campus de Beaulieu, F-35042 Rennes Cedex, France. sinisa.segvic@tugraz.at
fYear :
2007
fDate :
6/1/2007 12:00:00 AM
Firstpage :
1
Lastpage :
8
Abstract :
Autonomous cars will likely play an important role in the future. A vision system designed to support outdoor navigation for such vehicles has to deal with large dynamic environments, changing imaging conditions, and temporary occlusions by other moving objects. This paper presents a novel appearance-based navigation framework relying on a single perspective vision sensor, aimed at resolving the above issues. The solution is based on a hierarchical environment representation created during a teaching stage, in which the robot is controlled by a human operator. At the top level, the representation contains a graph of key-images with extracted 2D features, enabling robust navigation by visual servoing. The information stored at the bottom level makes it possible to efficiently predict the locations of features which are currently not visible, and eventually to (re-)start their tracking. The outstanding property of the proposed framework is that it enables robust and scalable navigation without requiring a globally consistent map, even in interconnected environments. This result has been confirmed by realistic off-line experiments and by successful real-time navigation trials in public urban areas.
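The abstract describes a two-level representation: a graph of key-images with 2D features at the top level, and lower-level information used to predict where currently invisible features should reappear so their tracking can be (re-)started. The Python sketch below illustrates one plausible way such a key-image graph could be organised; all class names, fields, and the shared-feature query are illustrative assumptions and do not reproduce the authors' actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical sketch of a key-image graph (top level) whose edges carry
# feature correspondences that let navigation predict which not-yet-visible
# features to look for when moving between key-image zones (bottom level).

@dataclass
class Feature:
    feature_id: int
    xy: Tuple[float, float]          # 2D location in the key-image

@dataclass
class KeyImage:
    image_id: int
    features: List[Feature] = field(default_factory=list)

@dataclass
class TopologicalMap:
    nodes: Dict[int, KeyImage] = field(default_factory=dict)
    edges: Dict[int, List[int]] = field(default_factory=dict)  # adjacency lists

    def add_key_image(self, node: KeyImage) -> None:
        self.nodes[node.image_id] = node
        self.edges.setdefault(node.image_id, [])

    def connect(self, a: int, b: int) -> None:
        # Undirected edge: a taught path segment traversable in both directions.
        self.edges[a].append(b)
        self.edges[b].append(a)

    def shared_features(self, a: int, b: int) -> List[int]:
        # Features present in both key-images: candidates whose tracking
        # could be (re-)started when moving from the zone of a towards b.
        ids_a = {f.feature_id for f in self.nodes[a].features}
        return [f.feature_id for f in self.nodes[b].features
                if f.feature_id in ids_a]

if __name__ == "__main__":
    m = TopologicalMap()
    m.add_key_image(KeyImage(0, [Feature(1, (120.0, 80.0)), Feature(2, (300.0, 95.0))]))
    m.add_key_image(KeyImage(1, [Feature(2, (250.0, 100.0)), Feature(3, (40.0, 60.0))]))
    m.connect(0, 1)
    print(m.shared_features(0, 1))   # -> [2]
```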
Keywords :
"Large-scale systems","Navigation","Robustness","Image reconstruction","Machine vision","Remotely operated vehicles","Vehicle dynamics","Education","Robot sensing systems","Educational robots"
Publisher :
ieee
Conference_Title :
Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on
ISSN :
1063-6919
Print_ISBN :
1-4244-1179-3
Type :
conf
DOI :
10.1109/CVPR.2007.383025
Filename :
4270050