DocumentCode :
3661128
Title :
Online reinforcement learning by Bayesian inference
Author :
Zhongpu Xia;Dongbin Zhao
Author_Institution :
The State Key Laboratory of Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
fYear :
2015
fDate :
7/1/2015 12:00:00 AM
Firstpage :
1
Lastpage :
6
Abstract :
Policy evaluation has long been one of the core issues of online reinforcement learning, especially in the continuous state domain. In this paper, the issue is addressed by employing Gaussian processes to represent the action value function from a probabilistic perspective. By modeling the return as a stochastic variable, the action value function can be sequentially updated by Bayesian inference during policy evaluation, according to observed variables such as state and reward. The update rule shows that this is a temporal difference learning method whose learning rate is determined by the uncertainty of the collected sample. Combining this policy evaluation method with the ε-greedy action selection method, we propose an online reinforcement learning algorithm referred to as Bayesian-SARSA. It is tested on several benchmark problems, and the empirical results verify its effectiveness.
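The abstract's key idea, a TD update whose learning rate comes from the posterior uncertainty of the value estimate, can be sketched in a simplified tabular form. Note this is an illustrative approximation, not the paper's method: the paper uses Gaussian processes over continuous states, whereas the sketch below keeps a per-(state, action) Gaussian and applies a Kalman-style update, so the effective learning rate shrinks as uncertainty shrinks. All names, the `obs_noise` parameter, and the prior variance are assumptions for illustration.

```python
import random
from collections import defaultdict


class BayesianSARSA:
    """Illustrative tabular sketch: each Q(s, a) is modeled as a Gaussian
    (mean, variance). A Kalman-style Bayesian update yields a TD rule whose
    learning rate is driven by the estimate's current uncertainty."""

    def __init__(self, actions, gamma=0.95, obs_noise=1.0,
                 prior_var=10.0, epsilon=0.1):
        self.actions = actions
        self.gamma = gamma
        self.obs_noise = obs_noise                  # assumed return-noise variance
        self.epsilon = epsilon
        self.mean = defaultdict(float)              # posterior mean of Q(s, a)
        self.var = defaultdict(lambda: prior_var)   # posterior variance of Q(s, a)

    def select_action(self, state):
        # epsilon-greedy over the posterior means
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.mean[(state, a)])

    def update(self, s, a, r, s_next, a_next, done):
        # SARSA target built from the posterior mean of the next pair
        target = r if done else r + self.gamma * self.mean[(s_next, a_next)]
        v = self.var[(s, a)]
        k = v / (v + self.obs_noise)                # uncertainty-driven learning rate
        self.mean[(s, a)] += k * (target - self.mean[(s, a)])
        self.var[(s, a)] = (1.0 - k) * v            # posterior variance shrinks
```

Because `k` starts near 1 under a vague prior and decays as the variance contracts, early samples move the estimate strongly while later samples fine-tune it, matching the abstract's description of a sample-uncertainty-dependent learning rate.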
Keywords :
"Bayes methods","Noise","Trajectory"
Publisher :
ieee
Conference_Titel :
2015 International Joint Conference on Neural Networks (IJCNN)
Electronic_ISBN :
2161-4407
Type :
conf
DOI :
10.1109/IJCNN.2015.7280437
Filename :
7280437