DocumentCode
250727
Title
Policy search for learning robot control using sparse data
Author
Bischoff, B. ; Nguyen-Tuong, D. ; van Hoof, Herke ; McHutchon, A. ; Rasmussen, Carl Edward ; Knoll, Alois ; Peters, Jan ; Deisenroth, Marc Peter
Author_Institution
Cognitive Syst., Bosch Corp. Res., Germany
fYear
2014
fDate
May 31 - June 7, 2014
Firstpage
3882
Lastpage
3887
Abstract
In many complex robot applications, such as grasping and manipulation, it is difficult to program desired task solutions beforehand, as robots operate in uncertain and dynamic environments. In such cases, learning tasks from experience can be a useful alternative. To achieve good learning and generalization performance, machine learning, especially reinforcement learning, usually requires sufficient data. However, when only little data is available for learning, due to system constraints and practical issues, reinforcement learning can perform suboptimally. In this paper, we investigate how model-based reinforcement learning, in particular the probabilistic inference for learning control method (PILCO), can be tailored to cope with sparse data and thereby speed up learning. The basic idea is to incorporate additional prior knowledge into the learning process. As PILCO is built on the Gaussian process framework, additional system knowledge can be incorporated by defining appropriate prior distributions, e.g., a Gaussian prior with a linear mean function. The resulting PILCO formulation remains in closed form and analytically tractable. The proposed approach is evaluated in simulation as well as on a physical robot, the Festo Robotino XT, where we employ it to learn an object pick-up task. The results show that including prior knowledge can speed up policy learning in the presence of sparse data.
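The linear-mean Gaussian prior mentioned in the abstract can be illustrated with a minimal Gaussian process regression sketch. This is not the paper's implementation: the 1-D setting, the squared-exponential kernel, and all variable names and hyperparameter values are assumptions for illustration only. The point it demonstrates is that a GP with prior mean m(x) = a·x + b reverts to that linear model away from the (sparse) training data, instead of reverting to zero as a zero-mean GP would.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance k(a, b) = s^2 * exp(-(a-b)^2 / (2 l^2))."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict_linear_mean(X, y, Xs, a=1.0, b=0.0, noise_var=1e-2):
    """GP posterior mean/variance with a linear prior mean m(x) = a*x + b.

    The GP models only the residual y - m(X); far from the data, the
    posterior mean reverts to the linear prior, which is how prior
    system knowledge can help when training data are sparse.
    """
    m_X, m_Xs = a * X + b, a * Xs + b
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    alpha = np.linalg.solve(K, y - m_X)            # K^{-1} (y - m(X))
    mean = m_Xs + Ks @ alpha
    var = (rbf_kernel(Xs, Xs).diagonal()
           - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T)))
    return mean, var

# Sparse data from a mildly nonlinear system near the line y = 2x
# (hypothetical toy system, not the Robotino XT dynamics):
X = np.array([-1.0, 0.0, 1.0])
y = 2.0 * X + 0.1 * np.sin(3.0 * X)
mean, var = gp_predict_linear_mean(X, y, np.array([5.0]), a=2.0)
# Far from the data, the prediction reverts to the prior mean 2*x = 10
# with near-prior variance, rather than collapsing to zero.
```

The same construction carries through PILCO's long-term predictions because the linear mean keeps the required moment computations in closed form, as the abstract notes.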
Keywords
Gaussian distribution; Gaussian processes; learning (artificial intelligence); learning systems; probability; robots; Festo Robotino XT; PILCO; dynamic environment; generalization performance; grasping; learning robot control; linear mean Gaussian prior distribution; machine learning; manipulation; model-based reinforcement learning; object pick-up task; physical robot; policy learning; policy search; probabilistic Gaussian processes framework; probabilistic inference for learning control method; sound learning; sparse data; system constraints; uncertain environment; Computational modeling; Data models; Grasping; Heuristic algorithms; Pneumatic systems; Robots; Valves
fLanguage
English
Publisher
ieee
Conference_Titel
2014 IEEE International Conference on Robotics and Automation (ICRA)
Conference_Location
Hong Kong
Type
conf
DOI
10.1109/ICRA.2014.6907422
Filename
6907422
Link To Document