Title :
Kernelizing LSPE(λ)
Author :
Jung, Tobias ; Polani, Daniel
Author_Institution :
Mainz Univ.
Abstract :
We propose the use of kernel-based methods as the underlying function approximator in the least-squares-based policy evaluation frameworks of LSPE(λ) and LSTD(λ). In particular, we present the "kernelization" of model-free LSPE(λ). The "kernelization" is made computationally feasible by the subset-of-regressors approximation, which approximates the kernel using a vastly reduced number of basis functions. The core of our proposed solution is an efficient recursive implementation with automatic supervised selection of the relevant basis functions. The LSPE method is well suited for optimistic policy iteration and can thus be used in the context of online reinforcement learning. We use the high-dimensional Octopus benchmark to demonstrate this.
Keywords :
learning (artificial intelligence); least squares approximations; Octopus benchmark; function approximator; kernel-based methods; least-squares-based policy evaluation; online reinforcement learning; regressors approximation; relevant basis functions; Control systems; Dynamic programming; Electronic mail; Function approximation; Kernel; Learning; Least squares approximation; Least squares methods; Optimal control; Optimization methods
Conference_Titel :
2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2007)
Conference_Location :
Honolulu, HI
Print_ISBN :
1-4244-0706-0
DOI :
10.1109/ADPRL.2007.368208