Title of article
Aggregate Reinforcement Learning for multi-agent territory division: The Hide-and-Seek game
Author/Authors
Gunady, Mohamed K.; Gomaa, Walid; Takeuchi, Ikuo
Issue Information
Journal, serial year 2014
Pages
15
From page
122
To page
136
Abstract
In many robotics applications, such as disaster rescue, mine detection, robotic surveillance, and warehouse systems, it is crucial to build multi-agent systems (MAS) in which agents cooperate to complete a sequence of tasks. For better performance in such systems, e.g. to minimize duplicated work, agents need to agree on how to divide and plan that sequence of tasks among themselves. This paper targets the problem of territory division in the children’s game of Hide-and-Seek as a test-bed for our proposed approach. The problem is solved in a hierarchical learning scheme using Reinforcement Learning (RL). Our learning model, based on Q-learning, is presented in detail: the definition of composite states, actions, and the reward function for multi-agent learning. In addition, a revised version of the standard Q-learning update rule is proposed to cope with multiple seekers. The model is examined on a set of different maps, on which it converges to the optimal solutions. After analyzing the algorithm’s complexity, we enhance it with state aggregation (SA) to alleviate the state-space explosion. Two levels of aggregation are devised: topological aggregation and hiding aggregation. After elaborating on how the learning model is modified to handle the aggregation technique, the enhanced model is evaluated experimentally. Results indicate promising performance, with a higher convergence rate and up to 10× space reduction.
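As context for the abstract, the standard tabular Q-learning update it builds on can be sketched as follows. This is only the textbook single-agent rule; the paper's revised multi-seeker variant and its composite states are not reproduced here, and the state/action names and hyperparameters below are illustrative assumptions.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    Q is a dict mapping (state, action) pairs to values; unseen pairs
    default to 0.0. Returns the updated Q(s,a).
    """
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]


# Hypothetical seeker states/actions, purely for illustration.
Q = {}
q_update(Q, "room_A", "seek_left", 1.0, "room_B",
         actions=["seek_left", "seek_right"])
```

A state-aggregation scheme like the paper's would then key `Q` by aggregate (e.g. topological) states rather than raw positions, shrinking the table.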
Keywords
Multi-agent systems , Hierarchical learning , State aggregation , Hide-and-Seek , Q-learning , Reinforcement learning
Journal title
Engineering Applications of Artificial Intelligence
Serial Year
2014
Record number
2126246
Link To Document