Title :
Appearance-based object reacquisition for mobile manipulation
Author :
Walter, Matthew R. ; Friedman, Yuli ; Antone, Matthew ; Teller, Seth
Author_Institution :
CS & AI Lab., MIT, Cambridge, MA, USA
Abstract :
This paper describes an algorithm enabling a human supervisor to convey task-level information to a robot by using stylus gestures to circle one or more objects within the field of view of a robot-mounted camera. These gestures serve to segment the unknown objects from the environment. Our method's main novelty lies in its use of appearance-based object “reacquisition” to reconstitute the supervisory gestures (and corresponding segmentation hints), even for robot viewpoints spatially and/or temporally distant from the viewpoint underlying the original gesture. Reacquisition is particularly challenging within relatively dynamic and unstructured environments. The technical challenge is to realize a reacquisition capability sufficiently robust to appearance variation to be useful in practice. Whenever the supervisor indicates an object, our system builds a feature-based appearance model of the object. When the object is detected from subsequent viewpoints, the system automatically and opportunistically incorporates additional observations, revising the appearance model and reconstituting the rough contours of the original circling gesture around that object. Our aim is to exploit reacquisition both to decrease the user burden of task specification and to increase the effective autonomy of the robot. We demonstrate and analyze the approach on a robotic forklift designed to approach, manipulate, transport, and place palletized cargo within an outdoor warehouse. We show that the method enables gesture reuse over long timescales and robot excursions (tens of minutes and hundreds of meters).
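The feature-based appearance model and its opportunistic update described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, the raw-vector descriptor representation, and the Lowe-style ratio-test threshold are all assumptions introduced here for clarity.

```python
import numpy as np


class AppearanceModel:
    """Feature-based appearance model of a gesture-segmented object.

    Illustrative sketch: descriptors are plain feature vectors (one row
    per feature), and matching uses a nearest-neighbor ratio test. The
    paper's actual feature type and matching criteria may differ.
    """

    def __init__(self, descriptors):
        # Seed the model from features extracted inside the circled region.
        self.descriptors = np.asarray(descriptors, dtype=float)

    def match(self, scene_descriptors, ratio=0.8):
        """Return indices of scene features that match the model.

        A scene feature matches when its nearest model descriptor is
        markedly closer than the second-nearest (ratio test), which
        suppresses ambiguous correspondences.
        """
        matches = []
        for i, d in enumerate(np.asarray(scene_descriptors, dtype=float)):
            dists = np.linalg.norm(self.descriptors - d, axis=1)
            if dists.size < 2:
                continue
            nearest, second = np.partition(dists, 1)[:2]
            if nearest < ratio * second:
                matches.append(i)
        return matches

    def update(self, new_descriptors):
        """Opportunistically fold in descriptors from a new detection,
        revising the appearance model as viewpoint and lighting change."""
        self.descriptors = np.vstack(
            [self.descriptors, np.asarray(new_descriptors, dtype=float)]
        )
```

In use, the model would be seeded when the supervisor circles an object, queried against features from each subsequent camera frame, and updated whenever a confident detection is made, so that the reconstituted gesture contour tracks the object across viewpoints.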
Keywords :
computer vision; feature extraction; gesture recognition; human-robot interaction; mobile robots; object detection; object recognition; task analysis; appearance based object reacquisition; feature based appearance model; mobile manipulation; robot excursion; robot mounted camera; stylus gesture; supervisory gestures; task level information; task specification; Artificial intelligence; Cameras; Human robot interaction; Image recognition; Layout; Lighting; Mobile robots; Object recognition; Robot vision systems; Robotics and automation;
Conference_Titel :
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Conference_Location :
San Francisco, CA
Print_ISBN :
978-1-4244-7029-7
DOI :
10.1109/CVPRW.2010.5543614