Title :
Learning grasping affordances from local visual descriptors
Author :
Montesano, Luis ; Lopes, Manuel
Author_Institution :
Inst. de Sist. e Robot., Inst. Super. Tecnico, Lisbon, Portugal
Abstract :
In this paper we study the learning of affordances through self-experimentation. In particular, we learn local visual descriptors that anticipate the success of a given action executed upon an object. Consider, for instance, the case of grasping. Although graspability is a property of the whole object, a grasp action will only succeed if applied to the right part of the object. We propose an algorithm to learn local visual descriptors of good grasping points from a set of trials performed by the robot. The method estimates the probability of a successful action (grasp) based on simple local features. Experimental results on a humanoid robot illustrate how our method learns descriptors of good grasping points and generalizes to novel objects based on prior experience.
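The abstract describes learning a mapping from local visual descriptors to the probability that a grasp succeeds. The sketch below illustrates one plausible way to set up such a predictor; it is not the authors' actual estimator, and all names and data shapes (`fit_grasp_model`, the descriptor dimensionality, the synthetic trials) are assumptions for illustration.

```python
# Illustrative sketch only: a logistic-regression stand-in for learning
# P(grasp success | local visual descriptor) from a robot's own trials.
# The paper's actual estimator is not reproduced here.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_grasp_model(X, y, lr=0.1, epochs=500):
    """Fit P(success | descriptor) by batch gradient descent.

    X: (n_trials, d) local visual descriptors at attempted grasp points.
    y: (n_trials,) 1.0 if the grasp trial succeeded, else 0.0.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # predicted success probabilities
        w -= lr * (X.T @ (p - y)) / n   # gradient of the log-loss w.r.t. w
        b -= lr * np.mean(p - y)        # gradient w.r.t. the bias
    return w, b

def grasp_success_prob(w, b, x):
    """Probability that a grasp applied at a point with descriptor x succeeds."""
    return sigmoid(x @ w + b)

# Hypothetical self-experimentation data: success depends on the first
# descriptor component (e.g., a local shape feature).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
w, b = fit_grasp_model(X, y)
```

Generalization to a novel object would then amount to evaluating `grasp_success_prob` at candidate points on the new object's image and grasping where the predicted probability is highest.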
Keywords :
humanoid robots; mechanoception; neurophysiology; grasping affordances; humanoid robot; local visual descriptors; self-experimentation; Cognitive robotics; Computational modeling; Fingers; Grasping; Humanoid robots; Humans; Iron; Morphology; Neuroscience; Robot kinematics
Conference_Titel :
2009 IEEE 8th International Conference on Development and Learning (ICDL 2009)
Conference_Location :
Shanghai, China
Print_ISBN :
978-1-4244-4117-4
Electronic_ISBN :
978-1-4244-4118-1
DOI :
10.1109/DEVLRN.2009.5175529