Title :
Grasping novel objects with a dexterous robotic hand through neuroevolution
Author :
Huang, Pei-Chi; Lehman, Joel; Mok, Aloysius K.; Miikkulainen, Risto; Sentis, Luis
Author_Institution :
Dept. of Comput. Sci., Univ. of Texas at Austin, Austin, TX, USA
Abstract :
Robotic grasping of a target object without advance knowledge of its three-dimensional model is a challenging problem. Many studies indicate that robot learning from demonstration (LfD) is a promising way to improve grasping performance, but fully automating the grasping task in unforeseen circumstances remains difficult. As an alternative to LfD, this paper leverages limited human supervision to achieve robotic grasping of unknown objects in unforeseen circumstances. The technical question is what form of human supervision best minimizes the effort of the human supervisor. The approach applies a human-supplied bounding box to focus the robot's visual processing on the target object, thereby reducing the dimensionality of the robot's computer-vision processing. After the human supervisor defines the bounding box through the man-machine interface, the rest of the grasping task is automated through a vision-based feature-extraction approach in which the dexterous hand learns to grasp objects, without relying on pre-computed object models, through the NEAT neuroevolution algorithm. Given only low-level sensing data from a commercial depth sensor (Kinect), the approach evolves neural networks that identify appropriate hand positions and orientations for grasping novel objects. Furthermore, the machine-learning results from simulation were validated by transferring the trained networks to a physical robot, Dreamer, built by Meka Robotics. The results demonstrate that neuroevolution trained in simulation can transfer to reality for grasping novel objects.
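To illustrate the neuroevolution idea in the abstract, the following is a minimal, self-contained sketch. Note the simplifications: the paper uses NEAT, which also evolves network topology, whereas this sketch evolves only the weights of a fixed single-layer network; the feature count, population size, synthetic training cases, and the MSE-based fitness (a stand-in for the simulated grasp-success score) are all assumptions for illustration, not the authors' actual setup.

```python
# Simplified fixed-topology neuroevolution sketch (hypothetical illustration;
# NEAT additionally evolves network structure, which is omitted here).
import math
import random

random.seed(0)

N_FEATURES = 8   # depth-image features from the bounding box (assumed size)
N_OUTPUTS = 6    # hand position (x, y, z) and orientation (roll, pitch, yaw)
POP_SIZE = 30
GENERATIONS = 40

def forward(weights, features):
    """Single-layer network: each output is tanh of a weighted feature sum."""
    return [math.tanh(sum(weights[j * N_FEATURES + i] * features[i]
                          for i in range(N_FEATURES)))
            for j in range(N_OUTPUTS)]

def fitness(weights, trials):
    """Negative mean squared pose error (stand-in for grasp-success score)."""
    err = 0.0
    for features, target in trials:
        pose = forward(weights, features)
        err += sum((p - t) ** 2 for p, t in zip(pose, target))
    return -err / len(trials)

# Synthetic training cases: random feature vectors with a fixed "true" mapping.
true_w = [random.uniform(-1, 1) for _ in range(N_FEATURES * N_OUTPUTS)]
trials = [(f, forward(true_w, f))
          for f in ([random.uniform(-1, 1) for _ in range(N_FEATURES)]
                    for _ in range(10))]

# Evolve: truncation selection (elites kept) plus Gaussian weight mutation.
pop = [[random.uniform(-1, 1) for _ in range(N_FEATURES * N_OUTPUTS)]
       for _ in range(POP_SIZE)]
init_best = max(fitness(w, trials) for w in pop)
for gen in range(GENERATIONS):
    pop.sort(key=lambda w: fitness(w, trials), reverse=True)
    parents = pop[:POP_SIZE // 4]
    pop = parents + [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                     for _ in range(POP_SIZE - len(parents))]

best = max(pop, key=lambda w: fitness(w, trials))
print(fitness(best, trials) > init_best)
```

Because the top quarter of each generation is carried over unchanged, the best fitness is monotonically non-decreasing; in the full system this fitness would instead come from grasp trials in simulation before transfer to the Dreamer robot.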
Keywords :
dexterous manipulators; feature extraction; human-robot interaction; image sensors; intelligent robots; learning (artificial intelligence); learning systems; neural nets; robot vision; Dreamer; Kinect; LfD; Meka Robotics company; NEAT neuroevolution algorithm; computer vision processing; depth sensor; dexterous robotic hand; grasping task automation; human-supplied bounding box; machine learning; man-machine interface; neural networks; physical robot; robot learning from demonstration; robot visual processing; robotic grasping; three-dimensional model; vision-based feature-extraction approach; Artificial neural networks; Grasping; Robot sensing systems; Three-dimensional displays; Training;
Conference_Titel :
2014 IEEE Symposium on Computational Intelligence in Control and Automation (CICA)
Conference_Location :
Orlando, FL
DOI :
10.1109/CICA.2014.7013242