DocumentCode :
495946
Title :
Grasping familiar objects using shape context
Author :
Bohg, Jeannette ; Kragic, Danica
Author_Institution :
Comput. Vision & Active Perception Lab., KTH, Stockholm, Sweden
fYear :
2009
fDate :
22-26 June 2009
Firstpage :
1
Lastpage :
6
Abstract :
We present work on vision-based robotic grasping. The proposed method relies on extracting and representing the global contour of an object in a monocular image. A suitable grasp is then generated using a learning framework in which prototypical grasping points are learned from several examples and then applied to novel objects. For representation we apply the concept of shape context, and for learning we use a supervised approach in which the classifier is trained with labeled synthetic images. Our results show that combining a shape-context-based descriptor with a non-linear classification algorithm leads to stable detection of grasping points for a variety of objects. Furthermore, we show how our representation supports the inference of a full grasp configuration.
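For orientation, the following is a minimal sketch (not the authors' code) of the core shape context idea referenced in the abstract: for one sampled contour point, build a log-polar histogram of the relative positions of all other contour points. The bin counts, the scale normalization by mean pairwise distance, and the placeholder contour are illustrative assumptions.

import numpy as np

def shape_context(points, index, n_radial_bins=5, n_angle_bins=12):
    """Log-polar histogram of contour points relative to points[index]."""
    p = points[index]
    others = np.delete(points, index, axis=0)
    diffs = others - p                              # vectors to all other points
    dists = np.linalg.norm(diffs, axis=1)
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])   # in (-pi, pi]

    # Normalize distances by the mean pairwise distance for scale invariance,
    # then bin them on a log scale (an assumed, simplified binning scheme).
    log_r = np.log(dists / dists.mean() + 1e-9)
    r_edges = np.linspace(log_r.min(), log_r.max() + 1e-9, n_radial_bins + 1)
    a_edges = np.linspace(-np.pi, np.pi, n_angle_bins + 1)

    hist, _, _ = np.histogram2d(log_r, angles, bins=[r_edges, a_edges])
    return hist.flatten() / hist.sum()              # normalized descriptor

# Usage: sample N points from an object's contour and describe one of them.
contour = np.random.rand(100, 2)                    # placeholder contour samples
descriptor = shape_context(contour, index=0)
print(descriptor.shape)                             # (60,) for 5 x 12 bins

In the paper's setting, such descriptors computed along the object contour would serve as input features to a supervised, non-linear classifier that labels candidate grasping points.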
Keywords :
feature extraction; image classification; image representation; learning (artificial intelligence); object detection; robot vision; image representation; monocular image; object detection; prototypical grasping point; robot vision; robotic grasping; Cameras; Computer vision; Humans; Laboratories; Object detection; Prototypes; Robot kinematics; Robot vision systems; Shape control; Supervised learning;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2009 International Conference on Advanced Robotics (ICAR 2009)
Conference_Location :
Munich
Print_ISBN :
978-1-4244-4855-5
Electronic_ISBN :
978-3-8396-0035-1
Type :
conf
Filename :
5174710