Title :
Incremental State Aggregation for Value Function Estimation in Reinforcement Learning
Author :
Mori, Takayoshi; Ishii, Shin
Author_Institution :
Inst. of Perception, Action & Behaviour, Univ. of Edinburgh, Edinburgh, UK
Abstract :
In reinforcement learning, large state and action spaces make the estimation of value functions impractical, so a value function is often represented as a linear combination of basis functions whose linear coefficients constitute the parameters to be estimated. However, preparing basis functions requires a certain amount of prior knowledge and is, in general, a difficult task. To overcome this difficulty, an adaptive basis function construction technique was recently proposed by Keller, but it incurs excessive computational cost. We propose an efficient alternative in which the problem of approximating the value function is decomposed into a number of subproblems, each of which can be solved at small computational cost. Computer experiments show that the CPU time required by our method is much smaller than that required by the existing method.
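The setting the abstract describes can be made concrete with a minimal sketch. Below, the value function is represented as a linear combination of basis functions, V(s) = θ·φ(s), where the basis functions are indicator (one-hot) features over aggregated groups of states, and θ is learned by standard TD(0). This is a generic illustration of linear value-function approximation with state aggregation on a toy chain MDP, not the authors' incremental-aggregation algorithm; the partition, learning rate, and environment are all assumptions for the example.

```python
import numpy as np

def features(state, partition):
    """One-hot basis: phi_i(s) = 1 if state s falls in aggregate i."""
    phi = np.zeros(len(partition))
    for i, cluster in enumerate(partition):
        if state in cluster:
            phi[i] = 1.0
            break
    return phi

def td0_update(theta, s, r, s_next, partition, alpha=0.1, gamma=0.95):
    """One TD(0) step on the linear estimate V(s) = theta . phi(s)."""
    phi, phi_next = features(s, partition), features(s_next, partition)
    td_error = r + gamma * theta @ phi_next - theta @ phi
    return theta + alpha * td_error * phi

# Toy chain MDP: states 0..5, moving right, reward on reaching state 5.
# The six states are aggregated into three clusters, so only three
# coefficients are estimated instead of six state values.
partition = [{0, 1}, {2, 3}, {4, 5}]
theta = np.zeros(len(partition))
rng = np.random.default_rng(0)
for _ in range(2000):
    s = int(rng.integers(0, 5))      # random non-terminal start state
    s_next = s + 1                   # deterministic rightward transition
    r = 1.0 if s_next == 5 else 0.0  # reward only at the goal
    theta = td0_update(theta, s, r, s_next, partition)

print(theta)  # learned values increase toward the goal cluster
```

The point of the sketch is the dimensionality reduction: with a coarser or finer partition, the number of parameters (and the per-update cost) changes accordingly, which is what makes the choice and construction of basis functions the central difficulty the paper addresses.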
Keywords :
function approximation; learning (artificial intelligence); parameter estimation; adaptive basis function construction technique; incremental state aggregation; linear coefficients; reinforcement learning; value function estimation; Computational efficiency; Function approximation; Learning; Mathematical model; Adaptive construction of basis functions; reinforcement learning (RL); value function; Artificial Intelligence; Computer Simulation; Game Theory; Models, Psychological; Models, Statistical; Reinforcement (Psychology)
Journal_Title :
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
DOI :
10.1109/TSMCB.2011.2148710