DocumentCode :
138496
Title :
Unifying scene registration and trajectory optimization for learning from demonstrations with application to manipulation of deformable objects
Author :
Lee, Alex X. ; Huang, Steven He ; Hadfield-Menell, Dylan ; Tzeng, Eric ; Abbeel, Pieter
Author_Institution :
Dept. of Electr. Eng. & Comput. Sci., Univ. of California at Berkeley, Berkeley, CA, USA
fYear :
2014
fDate :
14-18 Sept. 2014
Firstpage :
4402
Lastpage :
4407
Abstract :
Recent work [1], [2] has shown promising results in enabling robotic manipulation of deformable objects through learning from demonstrations. Their method computes a registration from the training scene to the test scene, and then applies an extrapolation of this registration to the training scene's gripper motion to obtain the gripper motion for the test scene. The warping cost of scene-to-scene registrations is used to determine the nearest neighbor from a set of training demonstrations. Once the gripper motion has been generalized to the test situation, they apply trajectory optimization [3] to plan the robot motions that will track the predicted gripper motions. In many situations, however, the predicted gripper motions cannot be followed perfectly due to, for example, joint limits or obstacles. In this case, the past work finds a path that minimizes deviation from the predicted gripper trajectory, as measured by Euclidean distance for position and angular distance for orientation. Measuring the error this way during the motion planning phase, however, ignores the underlying structure of the problem, namely the idea that rigid registrations are preferred when generalizing from the training scene to the test scene. Deviating from the gripper trajectory predicted by the extrapolated registration effectively changes the warp induced by the registration in the part of the space where the gripper trajectories lie. The main contribution of this paper is an algorithm that takes this effective final warp as the criterion to optimize in a unified optimization that simultaneously considers the scene-to-scene warping and the robot trajectory (which were separated into two sequential steps in the past work). This results in an approach that adjusts to infeasibility in a way that adapts directly to the geometry of the scene and minimizes the introduction of additional warping cost. In addition, this paper proposes to learn the motion of the gripper pads, whereas past work considered the motion of a coordinate frame attached to the gripper as a whole. This enables learning more precise grasping motions. Our experiments, which consider the task of knot tying, show that both the unified optimization and the explicit consideration of gripper pad motion result in improved performance.
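A schematic form of the unified objective described in the abstract, written in illustrative notation rather than the paper's own (the warp f, robot trajectory \tau, scene point sets S_train and S_test, demonstrated gripper-pad poses g_t^demo, pose distance d, and weight \lambda are all assumptions here, not symbols taken from the paper):
\[
\min_{f,\,\tau} \; C_{\mathrm{reg}}\!\left(f;\, S_{\mathrm{train}},\, S_{\mathrm{test}}\right) \;+\; \lambda \sum_{t} d\!\left(f\!\left(g_t^{\mathrm{demo}}\right),\, g_t(\tau)\right) \quad \text{s.t. joint limits and collision constraints on } \tau,
\]
where C_reg is the scene-to-scene registration (warping) cost, g_t(\tau) is the gripper-pad pose induced by the robot trajectory via forward kinematics, and d is a distance on poses. The earlier two-step pipeline first fixes f by registration and then minimizes only the second term over \tau; optimizing both terms jointly, as sketched here, lets unavoidable deviations be absorbed wherever they add the least warping cost.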
Keywords :
control engineering computing; extrapolation; grippers; image registration; learning by example; optimisation; robot vision; trajectory control; Euclidean distance; angular distance; deformable objects manipulation; extrapolation; grasping motions; learning from demonstrations; motion planning; position; predicted gripper motions; predicted gripper trajectory; robot motions; robot trajectory; robotic manipulation; scene-to-scene registrations; scene-to-scene warping; test scene; training scene registration; trajectory optimization; unified optimization; warping cost; Grippers; Joints; Optimization; Robots; Splines (mathematics); Three-dimensional displays; Trajectory;
fLanguage :
English
Publisher :
ieee
Conference_Title :
2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014)
Conference_Location :
Chicago, IL
Type :
conf
DOI :
10.1109/IROS.2014.6943185
Filename :
6943185