DocumentCode :
2746332
Title :
Research on Actor-Critic Reinforcement Learning in RoboCup
Author :
Guo, He ; Liu, Tianyang ; Wang, Yuxin ; Chen, Feng ; Fan, Jianming
Author_Institution :
Dept. of Comput. Sci. & Eng., Dalian Univ. of Technol.
Volume :
2
fYear :
2006
fDate :
0-0 0
Firstpage :
9212
Lastpage :
9216
Abstract :
The actor-critic method combines the fast convergence of value-based methods (the critic) with the directed policy search of policy-gradient methods (the actor), making it suitable for problems with large state spaces. In this paper, the actor-critic method with tile-coding linear function approximation is analysed and applied to a RoboCup simulation subtask named "Soccer Keepaway". Experiments on Soccer Keepaway show that the policy learned by the actor-critic method outperforms both the policies learned by value-based Sarsa(lambda) and the benchmark policies.
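The abstract's combination of a linear TD critic and a policy-gradient actor over tile-coded features can be sketched as follows. This is a minimal illustration on a hypothetical one-dimensional toy task, not the paper's Keepaway setup; the tiling sizes, action set, reward, and step sizes are all assumptions for the sketch.

```python
import numpy as np

# Hypothetical toy task: state s in [0, 1), two actions (step left/right),
# reward for reaching the right end. Illustrates the general scheme only.
N_TILINGS, TILES_PER_TILING = 4, 8
N_FEATURES = N_TILINGS * TILES_PER_TILING
ACTIONS = (-0.05, +0.05)  # assumed discrete action set

def tile_features(s):
    """Tile coding: one active binary feature per offset tiling."""
    phi = np.zeros(N_FEATURES)
    for t in range(N_TILINGS):
        offset = t / (N_TILINGS * TILES_PER_TILING)
        idx = int((s + offset) * TILES_PER_TILING) % TILES_PER_TILING
        phi[t * TILES_PER_TILING + idx] = 1.0
    return phi

def policy_probs(theta, phi):
    """Softmax actor with linear action preferences theta @ phi."""
    prefs = theta @ phi
    prefs -= prefs.max()                    # numerical stability
    e = np.exp(prefs)
    return e / e.sum()

def actor_critic_step(w, theta, s, rng,
                      alpha_w=0.1, alpha_t=0.05, gamma=0.99):
    """One interaction step: act, observe, update critic then actor."""
    phi = tile_features(s)
    probs = policy_probs(theta, phi)
    a = rng.choice(len(ACTIONS), p=probs)
    s2 = min(max(s + ACTIONS[a], 0.0), 0.999)
    r = 1.0 if s2 > 0.9 else 0.0            # assumed toy reward
    # Critic: linear TD(0) update driven by the TD error delta.
    delta = r + gamma * (w @ tile_features(s2)) - (w @ phi)
    w = w + alpha_w * delta * phi
    # Actor: policy-gradient update; grad of log pi for softmax-linear.
    grad = -probs[:, None] * phi
    grad[a] += phi
    theta = theta + alpha_t * delta * grad
    return w, theta, s2
```

The critic's TD error serves as the advantage signal for the actor, which is the coupling the abstract refers to; the paper itself uses Sarsa(lambda)-style eligibility traces, which this one-step sketch omits for brevity.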
Keywords :
function approximation; gradient methods; learning (artificial intelligence); mobile robots; multi-robot systems; RoboCup; Soccer Keepaway; actor-critic reinforcement learning; policy gradient searching; tile-coding linear function approximation; value-based Sarsa(lambda); Analytical models; Automation; Computer science; Electronic mail; Function approximation; Intelligent control; Learning; Space technology; State-space methods; Actor-Critic; Function Approximation; MAS; Reinforcement Learning; RoboCup;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
The Sixth World Congress on Intelligent Control and Automation (WCICA 2006)
Conference_Location :
Dalian
Print_ISBN :
1-4244-0332-4
Type :
conf
DOI :
10.1109/WCICA.2006.1713783
Filename :
1713783