Title of article
Decentralized MDPs with sparse interactions (Original Research Article)
Author/Authors
Francisco S. Melo, Manuela Veloso
Issue Information
Journal issue, serial year 2011
Pages
33
From page
1757
To page
1789
Abstract
Creating coordinated multiagent policies in environments with uncertainty is a challenging problem, which can be greatly simplified if the coordination needs are known to be limited to specific parts of the state space. In this work, we explore how such local interactions can simplify coordination in multiagent systems. We focus on problems in which the interaction between the agents is sparse and contribute a new decision-theoretic model for decentralized sparse-interaction multiagent systems, Dec-SIMDPs, that explicitly distinguishes the situations in which the agents in the team must coordinate from those in which they can act independently. We relate our new model to other existing models such as MMDPs and Dec-MDPs. We then propose a solution method that takes advantage of the particular structure of Dec-SIMDPs and provide theoretical error bounds on the quality of the obtained solution. Finally, we present a reinforcement learning algorithm in which independent agents learn both individual policies and when and how to coordinate. We illustrate the application of the algorithms throughout the paper in several multiagent navigation scenarios.
Keywords
Sparse interaction, Multiagent coordination, Decentralized Markov decision processes
Journal title
Artificial Intelligence
Serial Year
2011
Record number
1207870