Title :
Basis Function Construction in Reinforcement Learning Using Cascade-Correlation Learning Architecture
Author :
Girgin, Sertan ; Preux, Philippe
Author_Institution :
Team-Project SequeL, INRIA, Lille
Abstract :
In reinforcement learning, it is common practice to map the state(-action) space to a different one using basis functions. This transformation aims to represent the input data in a more informative form that facilitates and improves subsequent steps. Since a "good" set of basis functions results in better solutions, and defining such functions becomes more challenging as problem complexity increases, it is beneficial to be able to generate them automatically. In this paper, we propose a new approach based on the Bellman residual for constructing basis functions using the cascade-correlation learning architecture. We show how this approach can be applied to the Least-Squares Policy Iteration algorithm in order to obtain a better approximation of the value function and, consequently, improve the performance of the resulting policies. We also demonstrate the effectiveness of the method empirically on several benchmark problems.
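For context, the following standard definitions sketch the quantities the abstract refers to; the notation (basis functions \phi_i, weight vector w, residual \mathrm{BR}) is introduced here for illustration and is not necessarily the paper's own. LSPI approximates the action-value function of a policy \pi linearly over the basis functions, and the Bellman residual measures how far this approximation is from satisfying the Bellman equation for \pi:
\[
\hat{Q}^{\pi}(s,a) \;=\; \sum_{i=1}^{k} w_i\, \phi_i(s,a) \;=\; \phi(s,a)^{\top} w,
\]
\[
\mathrm{BR}(s,a) \;=\; r(s,a) \;+\; \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\!\left[ \hat{Q}^{\pi}\bigl(s', \pi(s')\bigr) \right] \;-\; \hat{Q}^{\pi}(s,a).
\]
In a cascade-correlation construction along the lines described in the abstract, each newly trained hidden unit would presumably be added as an additional basis function \phi_{k+1} chosen to reduce this residual; the precise training criterion and its integration with LSPI are specified in the paper itself.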
Keywords :
computational complexity; iterative methods; learning (artificial intelligence); least squares approximations; Bellman residual; basis function construction; cascade-correlation learning architecture; least squares policy iteration algorithm; problem complexity; reinforcement learning; state-action space; Approximation algorithms; Artificial neural networks; Europe; History; Least squares approximation; Machine learning; Minimization methods; State feedback; State-space methods; Supervised learning; basis function expansion; cascade-correlation network; reinforcement learning; state representation;
Conference_Title :
Seventh International Conference on Machine Learning and Applications (ICMLA '08), 2008
Conference_Location :
San Diego, CA
Print_ISBN :
978-0-7695-3495-4
DOI :
10.1109/ICMLA.2008.24