Title :
A Kalman filter-based actor-critic learning approach
Author :
Bin Wang ; Dongbin Zhao
Author_Institution :
State Key Lab. of Manage. & Control for Complex Syst., Inst. of Autom., Beijing, China
Abstract :
The Kalman filter is an efficient way to estimate the parameters of the value function in reinforcement learning. To solve Markov Decision Process (MDP) problems with both continuous state and continuous action spaces, this paper proposes a new online reinforcement learning algorithm based on the Kalman filter, called Kalman filter-based actor-critic (KAC) learning. To implement the KAC algorithm, Cerebellar Model Articulation Controller (CMAC) neural networks are used to approximate the value function and the policy function, and a Kalman filter estimates the weights of the critic network. Two benchmark problems, the cart-pole balancing problem and the acrobot swing-up problem, are used to verify the effectiveness of the KAC approach. Experimental results demonstrate that the proposed KAC algorithm is more efficient than other similar algorithms.
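Since a CMAC network is linear in its weights, the critic update described in the abstract can be sketched as a standard Kalman-filter step that treats the TD target as a noisy observation of the feature-weighted value. This is an illustrative sketch, not the authors' exact KAC equations; the feature vector, noise variance, and toy problem below are all assumptions.

```python
# Hedged sketch: Kalman-filter estimation of critic weights for a
# linear-in-weights value approximator (as a CMAC is). The measurement
# model V(s) ~ phi(s) @ w and the noise settings are illustrative only.
import numpy as np

def kalman_critic_update(w, P, phi, target, meas_noise=1.0):
    """One Kalman-filter step: treat `target` as a noisy scalar
    observation of phi @ w, and update weights w and covariance P."""
    innovation = target - phi @ w          # prediction error
    S = phi @ P @ phi + meas_noise         # innovation variance (scalar)
    K = P @ phi / S                        # Kalman gain vector
    w = w + K * innovation                 # corrected weight estimate
    P = P - np.outer(K, phi @ P)           # reduced covariance
    return w, P

# Toy usage: recover the weights of a linear value function from
# noisy targets (stand-ins for TD targets in the full algorithm).
rng = np.random.default_rng(0)
w_true = np.array([2.0, 1.0])
w = np.zeros(2)
P = np.eye(2) * 10.0                       # large initial uncertainty
for _ in range(200):
    phi = rng.random(2)                    # hypothetical feature vector
    target = phi @ w_true + 0.01 * rng.standard_normal()
    w, P = kalman_critic_update(w, P, phi, target, meas_noise=0.01**2)
```

Compared with plain gradient TD, the covariance matrix `P` acts as a per-weight adaptive step size, which is one reason Kalman-filter critics tend to be sample-efficient.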
Keywords :
Kalman filters; Markov processes; cerebellar model arithmetic computers; decision theory; learning (artificial intelligence); parameter estimation; CMAC neural networks; KAC algorithm; Kalman filter-based actor-critic learning approach; MDP problems; Markov decision process; acrobot swing-up problem; action space; cart-pole balancing problem; cerebellar model articulation controller; continuous state space; critic network; online reinforcement learning algorithm; policy function; value function; Algorithm design and analysis; Approximation algorithms; Function approximation; Least squares approximations; Neural networks;
Conference_Titel :
Neural Networks (IJCNN), 2014 International Joint Conference on
Conference_Location :
Beijing, China
Print_ISBN :
978-1-4799-6627-1
DOI :
10.1109/IJCNN.2014.6889527