DocumentCode :
2476249
Title :
Regularized Fitted Q-Iteration for planning in continuous-space Markovian decision problems
Author :
Farahmand, Amir Massoud ; Ghavamzadeh, Mohammad ; Szepesvári, Csaba ; Mannor, Shie
Author_Institution :
Dept. of Comput. Sci., Univ. of Alberta, Edmonton, AB, Canada
fYear :
2009
fDate :
10-12 June 2009
Firstpage :
725
Lastpage :
730
Abstract :
Reinforcement learning with linear and non-linear function approximation has been studied extensively in the last decade. However, as opposed to other fields of machine learning such as supervised learning, the effect of finite samples has not been thoroughly addressed within the reinforcement learning framework. In this paper, we propose to use L2 regularization to control the complexity of the value function in reinforcement learning and planning problems. We consider the Regularized Fitted Q-Iteration algorithm and provide generalization bounds that account for small sample sizes. Finally, a realistic visual-servoing problem is used to illustrate the benefits of the regularization procedure.
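The core idea described in the abstract (fitted Q-iteration where each regression step is L2-regularized) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy dynamics, reward, random Fourier features, and all parameter values (`gamma`, `lam`, iteration count) are assumptions chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumptions, not from the paper): 1-D continuous state,
# two discrete actions, linear Q-function over random Fourier features.
n_samples, n_features, n_actions = 500, 50, 2
gamma, lam = 0.95, 1e-2  # discount factor, L2 (ridge) coefficient

W = rng.normal(size=(n_features, 1))           # random feature frequencies
b = rng.uniform(0.0, 2.0 * np.pi, n_features)  # random feature phases

def phi(s):
    """Random Fourier features of the state(s); returns (n, n_features)."""
    s = np.atleast_2d(s).reshape(-1, 1)
    return np.cos(s @ W.T + b)

# Synthetic batch of transitions (s, a, r, s'); reward favors s near 0.
s = rng.uniform(-1.0, 1.0, n_samples)
a = rng.integers(0, n_actions, n_samples)
s_next = s + 0.2 * (a - 0.5) + 0.05 * rng.normal(size=n_samples)
r = -s_next ** 2

theta = np.zeros((n_actions, n_features))  # one weight vector per action
Phi, Phi_next = phi(s), phi(s_next)

for _ in range(30):  # fitted Q-iteration loop
    # Bellman targets: y = r + gamma * max_a' Q_k(s', a')
    y = r + gamma * (Phi_next @ theta.T).max(axis=1)
    # Regularized regression step: ridge fit of y on phi(s), per action.
    for act in range(n_actions):
        X = Phi[a == act]
        A = X.T @ X + lam * np.eye(n_features)
        theta[act] = np.linalg.solve(A, X.T @ y[a == act])

q0 = phi(0.0) @ theta.T  # estimated Q-values at state s = 0
```

The ridge term `lam * np.eye(n_features)` is what distinguishes this from plain fitted Q-iteration: it controls the complexity of the fitted value function at each iteration, which is the role the paper's generalization bounds analyze for small sample sizes.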
Keywords :
Markov processes; function approximation; iterative methods; learning (artificial intelligence); nonlinear functions; planning (artificial intelligence); continuous-space Markovian decision problem; nonlinear function approximation; regularized fitted Q-iteration; reinforcement learning; supervised learning; visual-servoing problem; Automatic control; Complex networks; Computational modeling; Computer networks; Discrete event simulation; Error correction; Function approximation; Machine learning; Machine learning algorithms; Supervised learning;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
American Control Conference, 2009. ACC '09.
Conference_Location :
St. Louis, MO
ISSN :
0743-1619
Print_ISBN :
978-1-4244-4523-3
Electronic_ISBN :
0743-1619
Type :
conf
DOI :
10.1109/ACC.2009.5160611
Filename :
5160611