DocumentCode :
624717
Title :
Cooperative multiagent reinforcement learning using factor graphs
Author :
Zhen Zhang ; Dongbin Zhao
Author_Institution :
State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, China
fYear :
2013
fDate :
9-11 June 2013
Firstpage :
797
Lastpage :
802
Abstract :
In this paper, we propose a sparse reinforcement learning (RL) algorithm using factor graphs. The contribution is to make the original sparse RL algorithm applicable to tasks decomposed in a more general manner. For some problems, it is more reasonable to divide the agents into cliques, each of which is responsible for a specific subtask. In this way, the global Q-value function is decomposed into a sum of simpler local Q-value functions, each of which may contain more than two action variables. Such a decomposition can be expressed by a factor graph and exploited by the general max-plus algorithm to obtain the globally greedy joint action. The experimental results show that our methodology is feasible and effective.
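To illustrate the decomposition described in the abstract, the following is a minimal sketch (not the paper's implementation) of max-plus coordination on a tree-structured factor graph. It assumes a hypothetical setup of three agents with binary actions split into two cliques, {0,1} and {1,2}, so the global Q-value is Q(a0,a1,a2) = Q1(a0,a1) + Q2(a1,a2); the clique tables here are random placeholders for learned local Q-values.

```python
import itertools
import random

random.seed(0)

# Hypothetical example: three agents with binary actions, divided into two
# cliques {0,1} and {1,2}.  The global Q-value decomposes over the factor
# graph as Q(a0, a1, a2) = Q1(a0, a1) + Q2(a1, a2).
ACTIONS = (0, 1)
Q1 = {(a0, a1): random.random() for a0 in ACTIONS for a1 in ACTIONS}
Q2 = {(a1, a2): random.random() for a1 in ACTIONS for a2 in ACTIONS}

# Max-plus on this tree-structured factor graph: each clique (factor) sends
# the shared agent a1 a message giving the best value it can contribute for
# every choice of a1, maximizing out its other action variable.
msg_Q1 = {a1: max(Q1[(a0, a1)] for a0 in ACTIONS) for a1 in ACTIONS}
msg_Q2 = {a1: max(Q2[(a1, a2)] for a2 in ACTIONS) for a1 in ACTIONS}

# The shared agent picks the action maximizing the summed messages; each
# clique then fills in its remaining agent conditioned on that choice.
a1 = max(ACTIONS, key=lambda a: msg_Q1[a] + msg_Q2[a])
a0 = max(ACTIONS, key=lambda a: Q1[(a, a1)])
a2 = max(ACTIONS, key=lambda a: Q2[(a1, a)])
greedy = (a0, a1, a2)

# Sanity check: exhaustive enumeration of all joint actions.
brute = max(itertools.product(ACTIONS, repeat=3),
            key=lambda j: Q1[(j[0], j[1])] + Q2[(j[1], j[2])])
print(greedy, brute)
```

On a tree-structured factor graph such as this chain, max-plus is exact, so the message-passing result matches the brute-force maximizer while inspecting far fewer joint actions; on graphs with cycles it becomes an approximation.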
Keywords :
graph theory; learning (artificial intelligence); multi-agent systems; cliques; cooperative multiagent reinforcement learning; factor graphs; global Q-value function; global greedy joint action; sparse reinforcement learning; Approximation algorithms; Belief propagation; Games; Joints; Learning (artificial intelligence); Visualization;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP)
Conference_Location :
Beijing
Print_ISBN :
978-1-4673-6248-1
Type :
conf
DOI :
10.1109/ICICIP.2013.6568181
Filename :
6568181