Title :
Hierarchical Bayesian Inverse Reinforcement Learning
Author :
Jaedeug Choi ; Kee-Eung Kim
Author_Institution :
Dept. of Comput. Sci., Korea Adv. Inst. of Sci. & Technol., Daejeon, South Korea
Abstract :
Inverse reinforcement learning (IRL) is the problem of inferring the underlying reward function from the expert's behavior data. The difficulty in IRL mainly arises in choosing the best reward function, since there are typically an infinite number of reward functions that yield the given behavior data as optimal. Another difficulty comes from noisy behavior data due to sub-optimal experts. We propose a hierarchical Bayesian framework that subsumes most of the previous IRL algorithms and also models the sub-optimality of the expert's behavior. Through a number of experiments on a synthetic problem, we demonstrate the effectiveness of our approach, including the robustness of our hierarchical Bayesian framework to sub-optimal expert behavior data. Using a real dataset of taxi GPS traces, we additionally show that our approach predicts driving behavior with high accuracy.
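The core ingredients the abstract describes — a reward function that explains demonstrations, and a likelihood that tolerates sub-optimal (noisy) expert behavior — can be illustrated with a minimal Bayesian IRL sketch. This is not the authors' hierarchical model; it is a generic Boltzmann-rational likelihood on a tiny hypothetical chain MDP, where the temperature `beta` plays the role of the expert's degree of optimality. All names (`value_iteration`, `log_likelihood`, the 5-state chain, `beta=5.0`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Optimal Q-values for a tabular MDP.
    P: (A, S, S) transition tensor, R: (S,) state rewards."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R[None, :] + gamma * (P @ V)   # Q[a, s], (A, S)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return Q
        V = V_new

def log_likelihood(demos, Q, beta=5.0):
    """Boltzmann-rational expert: P(a|s) ∝ exp(beta * Q(s, a)).
    beta models sub-optimality (beta -> inf: perfectly optimal)."""
    logZ = np.log(np.exp(beta * Q).sum(axis=0))   # per-state normalizer
    return sum(beta * Q[a, s] - logZ[s] for s, a in demos)

# Hypothetical 5-state chain: action 0 = left, action 1 = right.
S, A = 5, 2
P = np.zeros((A, S, S))
for s in range(S):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, S - 1)] = 1.0

R_true = np.zeros(S); R_true[-1] = 1.0        # reward at the right end
demos = [(s, 1) for s in range(S - 1)]        # expert always moves right

# A reward hypothesis is scored by the likelihood of the demonstrations;
# Bayesian IRL combines this with a prior and infers a posterior over R.
ll_true = log_likelihood(demos, value_iteration(P, R_true))
R_wrong = np.zeros(S); R_wrong[0] = 1.0       # competing hypothesis
ll_wrong = log_likelihood(demos, value_iteration(P, R_wrong))
```

Here `ll_true > ll_wrong`: the right-moving demonstrations are far more probable under the reward placed at the right end, which is the signal an IRL posterior exploits to discriminate between the infinitely many candidate rewards.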
Keywords :
Bayes methods; learning (artificial intelligence); GPS traces; IRL; expert behavior data; hierarchical Bayesian framework; hierarchical Bayesian inverse reinforcement learning; noisy behavior data; reward functions; suboptimal expert behavior data; suboptimal experts; synthetic problem; Cybernetics; Linear programming; Markov processes; Trajectory; Vectors; Decision theory; inverse problems; maximum a posteriori estimation
Journal_Title :
IEEE Transactions on Cybernetics
DOI :
10.1109/TCYB.2014.2336867