Title :
Finding shortcuts from episode in multi-agent reinforcement learning
Author :
Jin, Zhao ; Liu, WeiYi ; Jin, Jian
Author_Institution :
Sch. of Inf. Sci. & Eng., Yunnan Univ., Kunming, China
Abstract :
In multi-agent reinforcement learning, the state space grows exponentially with the number of agents, which makes training episodes longer and slows convergence. To improve convergence efficiency, we propose an algorithm that finds shortcuts in episodes of multi-agent reinforcement learning. The loops that mark ineffective paths in an episode are removed, while all the shortest state paths from every other state to the goal state within the original episode are kept; thus no state-space knowledge is lost when these loops are removed. Shortening the episode in this way speeds up convergence. Since the learning process contains a large number of episodes, the overall improvement accumulated from every episode's improvement is considerable. The episode of the multi-agent pursuit problem is used to illustrate the effectiveness of our algorithm. We believe this algorithm can be introduced into most other reinforcement learning approaches to speed up convergence, because its improvement is made on the episode, the most fundamental learning unit of reinforcement learning.
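The loop-removal idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `remove_loops` and the representation of an episode as a plain list of hashable states are assumptions made for the example.

```python
def remove_loops(episode):
    """Remove state loops from an episode (a list of hashable states).

    Whenever a state reappears, the segment between its first visit and
    the revisit is an ineffective loop and is cut out, so the shortened
    episode keeps the shortest path through each revisited state.
    """
    seen = {}   # state -> index of its first occurrence in `path`
    path = []
    for state in episode:
        if state in seen:
            # A loop closed: discard everything after the first visit.
            cut = seen[state] + 1
            for s in path[cut:]:
                del seen[s]
            path = path[:cut]
        else:
            seen[state] = len(path)
            path.append(state)
    return path


# Example: the loops b->c->b and a->b->d->a are removed.
print(remove_loops(['a', 'b', 'c', 'b', 'd', 'a', 'e']))  # ['a', 'e']
```

A dictionary of first-occurrence indices makes each revisit check O(1), so the whole pass is linear in the episode length apart from the truncation work, which is bounded by the total number of states removed.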
Keywords :
graph theory; learning (artificial intelligence); multi-agent systems; episode improvement; learning process; multiagent reinforcement learning; shortest state path; speed up convergence; state space knowledge; Cybernetics; Machine learning; episode; multi-agent reinforcement learning; shortcut; speed up convergence; state loops;
Conference_Titel :
Machine Learning and Cybernetics, 2009 International Conference on
Conference_Location :
Baoding
Print_ISBN :
978-1-4244-3702-3
Electronic_ISBN :
978-1-4244-3703-0
DOI :
10.1109/ICMLC.2009.5212219