DocumentCode :
8439
Title :
Clique-based cooperative multiagent reinforcement learning using factor graphs
Author :
Zhen Zhang ; Dongbin Zhao
Author_Institution :
State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Beijing, China
Volume :
1
Issue :
3
fYear :
2014
fDate :
July 2014
Firstpage :
248
Lastpage :
256
Abstract :
In this paper, we propose a clique-based sparse reinforcement learning (RL) algorithm for solving cooperative tasks. The aim is to accelerate the learning speed of the original sparse RL algorithm and to make it applicable to tasks decomposed in a more general manner. First, a transition function is estimated and used to update the Q-value function, which greatly reduces the learning time. Second, agents are divided into cliques, each of which is responsible only for a specific subtask. In this way, the global Q-value function is decomposed into the sum of several simpler local Q-value functions. This decomposition is expressed by a factor graph and exploited by the general max-plus algorithm to obtain the greedy joint action. Experimental results show that the proposed approach outperforms the compared algorithms.
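The decomposition described in the abstract can be illustrated with a minimal sketch. Assuming three agents and two overlapping cliques (all names, clique assignments, and Q-values below are hypothetical, not taken from the paper), the global Q-value is the sum of local clique Q-values, Q(a) = Σ_c Q_c(a_c). Here the greedy joint action is found by exhaustive maximization; the paper instead runs the general max-plus algorithm on the factor graph, which scales to larger problems:

```python
from itertools import product

# Three agents, two actions each; two cliques sharing agent 1.
actions = [0, 1]
cliques = [(0, 1), (1, 2)]  # agent indices belonging to each clique

# Local Q-value tables Q_c(a_c), one per clique (illustrative numbers).
local_q = [
    {(a, b): float(a + 2 * b) for a in actions for b in actions},  # clique (0, 1)
    {(b, c): float(3 * b - c) for b in actions for c in actions},  # clique (1, 2)
]

def global_q(joint):
    """Global value as the sum of local clique values: Q(a) = sum_c Q_c(a_c)."""
    return sum(q[tuple(joint[i] for i in c)] for c, q in zip(cliques, local_q))

# Greedy joint action by brute force over all 2^3 joint actions.
# Max-plus message passing on the factor graph replaces this search
# when the number of agents makes enumeration infeasible.
best = max(product(actions, repeat=3), key=global_q)
print(best, global_q(best))  # → (1, 1, 0) 6.0
```

The shared agent 1 appears in both cliques, which is exactly what the factor-graph representation captures: each local Q-function is a factor node connected to the variable nodes of its clique's agents.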
Keywords :
graph theory; learning (artificial intelligence); multi-agent systems; clique-based cooperative multiagent reinforcement learning; clique-based sparse reinforcement learning algorithm; cooperative tasks; factor graph; general maxplus algorithm; global Q-value function; greedy joint action; learning time reduction; local Q-value functions; original sparse RL algorithm; transition function; Algorithm design and analysis; Approximation algorithms; Games; Heuristic algorithms; Learning (artificial intelligence); Sensors; Sparse matrices; Multiagent reinforcement learning; clique-based decomposition; factor graph; maxplus algorithm;
fLanguage :
English
Journal_Title :
IEEE/CAA Journal of Automatica Sinica
Publisher :
IEEE
ISSN :
2329-9266
Type :
jour
DOI :
10.1109/JAS.2014.7004682
Filename :
7004682