Title of article :
Restricted gradient-descent algorithm for value-function approximation in reinforcement learning
Author/Authors :
André da Motta Salles Barreto, Charles W. Anderson
Issue Information :
Journal issue, 2008
Abstract :
This work presents the restricted gradient-descent (RGD) algorithm, a training method for local radial-basis-function networks developed specifically for use in reinforcement learning. The RGD algorithm can be seen as a way to extract relevant features from the state space to feed a linear model that computes an approximation of the value function. Its basic idea is to restrict how the standard gradient-descent algorithm changes the hidden units of the approximator, resulting in conservative modifications that make the learning process less prone to divergence. The algorithm can also configure the topology of the network, an important capability in reinforcement learning, where the changing policy may impose different requirements on the approximator's structure. Computational experiments show that the RGD algorithm consistently generates better value-function approximations than the standard gradient-descent method, and that the latter is more susceptible to divergence. On the pole-balancing and Acrobot tasks, RGD combined with SARSA yields results competitive with other methods from the literature, including evolutionary and recent reinforcement-learning algorithms.
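The abstract's core idea, conservative updates to the RBF hidden units alongside standard gradient-descent updates to the linear output weights, can be illustrated with a minimal sketch. This is not the paper's actual restriction rule; the class name RBFValueFunction and the parameters lr_hidden and max_step are hypothetical, and the particular restriction chosen here (a small learning rate plus a hard cap on center movement) is an assumption made for illustration only.

```python
import numpy as np

# Minimal sketch (not the paper's exact rule): an RBF value-function
# approximator whose linear output weights follow standard gradient
# descent, while updates to the hidden units (the RBF centers) are
# deliberately restricted -- scaled down and clipped -- to illustrate
# the "conservative modification" idea behind RGD.

class RBFValueFunction:
    def __init__(self, centers, widths, lr_w=0.1, lr_hidden=0.01, max_step=0.05):
        self.centers = np.asarray(centers, dtype=float)   # (n_units, state_dim)
        self.widths = np.asarray(widths, dtype=float)     # (n_units,)
        self.w = np.zeros(len(self.centers))              # linear output weights
        self.lr_w = lr_w
        self.lr_hidden = lr_hidden   # much smaller than lr_w (restriction)
        self.max_step = max_step     # hard cap on hidden-unit movement

    def features(self, s):
        # Gaussian RBF activations for state s.
        d2 = np.sum((self.centers - np.asarray(s)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.widths ** 2))

    def value(self, s):
        return self.w @ self.features(s)

    def update(self, s, td_error):
        phi = self.features(s)
        # Standard gradient step on the linear weights.
        self.w += self.lr_w * td_error * phi
        # Restricted step on the centers: the gradient of V(s) w.r.t.
        # each center is w_i * phi_i * (s - c_i) / sigma_i^2; a small
        # learning rate and a clipped step keep the change conservative.
        grad_c = (td_error * self.w * phi / self.widths ** 2)[:, None] \
                 * (np.asarray(s) - self.centers)
        step = np.clip(self.lr_hidden * grad_c, -self.max_step, self.max_step)
        self.centers += step

# Tiny usage example on a 1-D state space with a hypothetical TD error.
vf = RBFValueFunction(centers=[[0.0], [0.5], [1.0]], widths=[0.3, 0.3, 0.3])
td = 1.0 - vf.value([0.4])
vf.update([0.4], td)
print(vf.value([0.4]))
```

In a SARSA setting, the td_error argument would be the usual temporal-difference error r + gamma * Q(s', a') - Q(s, a); the sketch keeps it as a plain input to stay agnostic about the surrounding learning loop.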
Keywords :
Reinforcement learning , Neuro-dynamic programming , Value-function approximation , Radial-basis-function networks
Journal title :
Artificial Intelligence