DocumentCode :
2784363
Title :
Hybrid Q-learning algorithm about cooperation in MAS
Author :
Chen, Wei ; Guo, Jing ; Li, Xiong ; Wang, Jie
Author_Institution :
Autom. Fac., Guangdong Univ. of Technol., Guangzhou, China
fYear :
2009
fDate :
17-19 June 2009
Firstpage :
3943
Lastpage :
3947
Abstract :
In most cases, agent learning is a good method for solving challenging problems in multi-agent systems (MAS). Since learning efficiency differs significantly according to the actions taken by each agent, suitable algorithms play an important role in solving such problems. Although much related work addresses different agent-learning algorithms, few of them balance efficiency and accuracy. In this paper, a hybrid Q-learning algorithm named CE-NNR, derived from CE-Q learning and NNR Q-learning, is presented. The algorithm is then applied to the RoboCup soccer simulation system and shown to be reasonable by the experimental results presented at the end of this paper.
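Both CE-Q learning and NNR Q-learning mentioned in the abstract extend the standard tabular Q-learning update. The sketch below shows only that generic update for background, not the paper's CE-NNR algorithm; the environment interface (`env.reset`, `env.step`, `env.actions`) and the hyperparameter values are illustrative assumptions.

```python
# Minimal sketch of the generic tabular Q-learning update that CE-Q and
# NNR Q-learning build on. The env API and hyperparameters are assumed
# for illustration; this is not the paper's CE-NNR algorithm.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Learn Q(s, a) with the standard one-step Q-learning update."""
    Q = defaultdict(float)  # (state, action) -> estimated return

    for _ in range(episodes):
        state = env.reset()          # assumed environment API
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(env.actions(state))
            else:
                action = max(env.actions(state), key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)  # assumed API

            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = 0.0 if done else max(
                Q[(next_state, a)] for a in env.actions(next_state))
            Q[(state, action)] += alpha * (
                reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```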
Keywords :
learning (artificial intelligence); multi-agent systems; CE-NNR learning; CE-Q learning; NNR Q-learning; RoboCup soccer simulation system; agent learning; hybrid Q-learning algorithm; learning efficiency; multiagent system; Artificial intelligence; Automation; Educational robots; Humanoid robots; Intelligent robots; Legged locomotion; Multiagent systems; Optimal control; Parallel robots; Robot kinematics; CE-NNR Q-Learning; MAS; RoboCup 2D Soccer Simulation;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Chinese Control and Decision Conference (CCDC '09), 2009
Conference_Location :
Guilin
Print_ISBN :
978-1-4244-2722-2
Electronic_ISBN :
978-1-4244-2723-9
Type :
conf
DOI :
10.1109/CCDC.2009.5191990
Filename :
5191990