Title :
Learning generative models of invariant features
Author :
Sim, Robert ; Dudek, Gregory
Author_Institution :
Dept. of Comput. Sci., Univ. of British Columbia, Vancouver, BC, Canada
Date :
28 Sept.-2 Oct. 2004
Abstract :
We present a method for learning a set of models of visual features that are invariant to scale and translation in the image domain. The models are constructed by first applying the scale-invariant feature transform (SIFT) to a set of training images, then matching the extracted features across the images, and finally learning the pose-dependent behavior of the features. The modeling process makes no assumptions about scene or imaging geometry; instead, it learns the direct mapping from camera pose to feature observation. Such models are useful for robotic tasks, such as localization, as well as for visualization. We present the model-learning framework, together with experimental results illustrating that the method learns models that are useful for robot localization.
Keywords :
feature extraction; intelligent robots; learning (artificial intelligence); generative models; imaging geometry; model learning framework; pose-dependent behavior learning; robot localization; robotic tasks; scale-invariant feature transform; training images; visual features; Cameras; Computational geometry; Computer science; Layout; Lighting; Noise robustness; Robot localization; Robot vision systems; Solid modeling; Visualization;
Conference_Title :
Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004)
Print_ISBN :
0-7803-8463-6
DOI :
10.1109/IROS.2004.1389955
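
For readers who want a concrete picture of the pipeline described in the abstract, the sketch below uses OpenCV's SIFT together with a simple per-feature least-squares regressor from camera pose to image observation. It is a minimal illustration under those assumptions, not the authors' implementation: the reference-image matching scheme, the linear regressor, and all names (extract_sift, match_to_reference, fit_pose_models, predict_observation) are hypothetical choices made for this sketch.

```python
# Illustrative sketch of the abstract's pipeline: extract SIFT features from
# training images taken at known camera poses, match them across images to
# form per-feature observation tracks, and fit a simple model mapping camera
# pose to each feature's image observation. Not the authors' implementation;
# the matching scheme and regressor here are assumptions for illustration.

import numpy as np
import cv2


def extract_sift(images):
    """Detect SIFT keypoints and descriptors in each training image."""
    sift = cv2.SIFT_create()
    features = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        keypoints, descriptors = sift.detectAndCompute(gray, None)
        features.append((keypoints, descriptors))
    return features


def match_to_reference(features, ratio=0.75):
    """Match each image against the first (reference) image using Lowe's
    ratio test, building tracks: reference feature id -> observations."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    ref_keypoints, ref_descriptors = features[0]
    tracks = {i: [] for i in range(len(ref_keypoints))}
    for img_idx, (keypoints, descriptors) in enumerate(features):
        if descriptors is None or len(descriptors) < 2:
            continue
        for pair in matcher.knnMatch(ref_descriptors, descriptors, k=2):
            if len(pair) < 2:
                continue
            m, n = pair
            if m.distance < ratio * n.distance:
                u, v = keypoints[m.trainIdx].pt
                tracks[m.queryIdx].append((img_idx, u, v))
    return tracks


def fit_pose_models(tracks, poses, min_obs=6):
    """Fit, per feature track, a least-squares model from camera pose
    (x, y, theta) to the observed image location (u, v)."""
    models = {}
    for fid, observations in tracks.items():
        if len(observations) < min_obs:
            continue
        X = np.array([[1.0, *poses[i]] for i, _, _ in observations])
        Y = np.array([[u, v] for _, u, v in observations])
        coefficients, *_ = np.linalg.lstsq(X, Y, rcond=None)
        models[fid] = coefficients
    return models


def predict_observation(model, pose):
    """Predict where a modeled feature should appear for a given camera pose."""
    return np.array([1.0, *pose]) @ model
```

For localization, one could then search over candidate poses for the one whose predicted feature observations best agree with the features detected in the current camera image.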