Title :
Gaussian processes in inverse reinforcement learning
Author :
Jin, Zhuo-jun ; Qian, Hui ; Zhu, Miao-liang
Author_Institution :
Coll. of Comput. Sci., Zhejiang Univ., Hangzhou, China
Abstract :
Inverse reinforcement learning (IRL) is the general problem of recovering a reward function from demonstrations provided by an expert. By incorporating Gaussian processes (GPs) into IRL, we present an approach that recovers both rewards and uncertainty information in continuous state and action spaces. To predict values at every point in these spaces, we use separate GP models for the value function and the reward function. Our contribution is threefold. First, we extend an existing IRL algorithm to the case of continuous spaces. Second, the reward GP provides not only a reward function with a flexible form, but also uncertainty estimates about rewards, which help the learner trade off exploitation against exploration. Third, by introducing a kernel function, our approach uses sample points from the demonstration as learning features, avoiding manual feature design. Experimental results show that the proposed method works well and achieves good learning performance in a traditional learning setting.
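The core mechanism the abstract describes can be illustrated with a minimal GP regression sketch: given reward samples at demonstrated states, a GP with a kernel over those states predicts both a reward estimate and its uncertainty at any new state. This is a generic illustration, not the paper's algorithm; the RBF kernel, lengthscale, noise level, and the 1-D example data are all assumptions chosen for clarity.

```python
# Minimal GP regression sketch (illustrative; not the paper's method).
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-stacked state sets A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """Posterior mean and pointwise variance of a zero-mean GP at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    var = np.diag(K_ss - K_s.T @ np.linalg.solve(K, K_s))
    return mean, var

# Hypothetical 1-D continuous state space with reward samples taken at
# demonstrated states -- the sample points themselves act as the features.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.8, 0.9, 0.1])
mean, var = gp_predict(X, y, np.array([[1.5], [10.0]]))
# Near the demonstration the predictive variance is small; far from it
# (state 10.0) the variance is large -- this is the uncertainty signal
# that supports the exploitation/exploration tradeoff.
```

The posterior variance is what distinguishes the GP approach from a point estimate of the reward: regions unsupported by the demonstration are flagged as uncertain rather than assigned a confident value.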
Keywords :
Gaussian processes; learning (artificial intelligence); inverse reinforcement learning; kernel function; reward function; uncertainty information; value function; Equations; Machine learning; Markov processes; Mathematical model; Uncertainty; Markov decision process; Reinforcement learning; Reward learning
Conference_Titel :
Machine Learning and Cybernetics (ICMLC), 2010 International Conference on
Conference_Location :
Qingdao
Print_ISBN :
978-1-4244-6526-2
DOI :
10.1109/ICMLC.2010.5581063