DocumentCode
663820
Title
Transfer in inverse reinforcement learning for multiple strategies
Author
Tanwani, Ajay Kumar; Billard, Aude
Author_Institution
Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
fYear
2013
fDate
3-7 Nov. 2013
Firstpage
3244
Lastpage
3250
Abstract
We consider the problem of incrementally learning different strategies for performing a complex sequential task from multiple demonstrations by an expert or a set of experts. While the task is the same, each expert differs in his/her way of performing it. We assume that this variety across experts' demonstrations arises because each expert/strategy is driven by a different reward function, where the reward function is expressed as a linear combination of a set of known features. Consequently, we can learn all the expert strategies by forming a convex set of optimal deterministic policies, from which one can match any unseen expert strategy drawn from this set. Instead of learning every optimal policy in this set from scratch, the learner transfers knowledge from the set of already learned policies to bootstrap its search for a new optimal policy. We demonstrate our approach on a simulated mini-golf task in which the 7-degrees-of-freedom Barrett WAM robot arm learns to sequentially putt on different holes in accordance with the playing strategies of the expert.
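The abstract's core assumptions — a reward that is linear in known features, and unseen strategies matched by convex combinations of learned policies' feature expectations — can be sketched as below. This is a minimal illustration, not the paper's implementation; all function names and numbers are ours.

```python
# Sketch of the linear-reward assumption: each expert's reward is
# R(s) = w . phi(s) for a known feature map phi, and an unseen strategy
# whose feature expectations lie in the convex hull of the learned
# experts' feature expectations can be matched by a convex mixture
# of those experts' policies. All names here are illustrative.

def reward(w, phi_s):
    """Reward as a linear combination of known feature values phi(s)."""
    return sum(wi * fi for wi, fi in zip(w, phi_s))

def mix_feature_expectations(mus, alphas):
    """Feature expectations of a convex mixture of learned policies."""
    assert abs(sum(alphas) - 1.0) < 1e-9 and all(a >= 0 for a in alphas)
    dim = len(mus[0])
    return [sum(a * mu[j] for a, mu in zip(alphas, mus)) for j in range(dim)]

# Two learned expert strategies, each summarized by its feature expectations.
mu_a = [1.0, 0.0]
mu_b = [0.0, 1.0]

# An unseen strategy halfway between them is matched with weights (0.5, 0.5),
# rather than being learned from scratch.
mu_new = mix_feature_expectations([mu_a, mu_b], [0.5, 0.5])
print(mu_new)  # [0.5, 0.5]
```

The sketch only shows why the convex-set view permits transfer: any target feature-expectation vector inside the hull is reachable by mixing already-learned policies.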
Keywords
learning (artificial intelligence); Barrett WAM robot arm; complex sequential task; incremental learning; inverse reinforcement learning; mini-golf task; multiple strategies; optimal deterministic policies; reward function; Bayes methods; Conferences; Decision making; Learning (artificial intelligence); Probability distribution; Projection algorithms; Vectors;
fLanguage
English
Publisher
IEEE
Conference_Titel
2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Conference_Location
Tokyo
ISSN
2153-0858
Type
conf
DOI
10.1109/IROS.2013.6696817
Filename
6696817
Link To Document