Title :
Multi-agent Markov decision processes with limited agent communication
Author :
Mukhopadhyay, Snehasis; Jain, Bindu
Author_Institution :
Dept. of Comput. & Inf. Sci., Indiana Univ., Indianapolis, IN, USA
Abstract :
A number of well-known methods exist for solving Markov decision problems (MDPs) involving a single decision-maker, with or without model uncertainty. Recently, there has been great interest in the multi-agent version of the problem, in which there are multiple interacting decision-makers. However, most of the methods suggested for multi-agent MDPs require complete knowledge of the states and actions of all agents, which results in a large communication overhead when the agents are physically distributed. In this paper, we address the problem of coping with uncertainty regarding agent states and actions under different amounts of communication. In particular, assuming a known model and a common reward structure, hidden Markov models and techniques for partially observed MDPs are combined to estimate the states, the actions, or both of other agents. Simulation results are presented to compare the performance that can be realized under different assumptions on agent communication.
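The abstract does not spell out the estimation procedure, but the described combination of hidden Markov models with partially observed MDP techniques suggests each agent maintains a belief over the other agents' hidden states. The sketch below is only illustrative (not the authors' algorithm): a standard HMM forward-filter update, assuming the other agent's transition matrix T and observation matrix O are known, with the function name forward_filter and the toy numbers chosen for this example.

```python
# Illustrative sketch, not the paper's exact method: one agent maintains a
# belief over another agent's hidden state via the HMM forward algorithm,
# assuming that agent's transition model T and observation model O are known.

import numpy as np

def forward_filter(T, O, belief, observation):
    """One forward-filter step.

    T[i, j]     : P(next state j | current state i) for the other agent
    O[j, z]     : P(observation z | other agent in state j)
    belief[i]   : current P(other agent in state i)
    observation : index of the observation just received
    Returns the updated belief over the other agent's state.
    """
    predicted = belief @ T                   # predict: propagate belief through transitions
    updated = predicted * O[:, observation]  # correct: weight by observation likelihood
    return updated / updated.sum()           # renormalise to a probability distribution

if __name__ == "__main__":
    # Toy two-state example with a mildly informative observation model.
    T = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    O = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
    belief = np.array([0.5, 0.5])            # uninformative prior over the other agent's state
    for z in [0, 0, 1]:                      # a short sequence of observations
        belief = forward_filter(T, O, belief, z)
    print(belief)                            # estimated state distribution for the other agent
```

Under limited communication, the observation fed to such a filter would be whatever local signal or occasional message the agent actually receives, which is the trade-off the paper's simulations compare.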
Keywords :
Markov processes; Markov decision problems; hidden Markov models; multi-agent systems; state estimation; reward structure; decision making; dynamic programming; game theory; information science; learning; Nash equilibrium; stochastic processes; uncertainty
Conference_Titel :
Proceedings of the 2001 IEEE International Symposium on Intelligent Control (ISIC '01)
Conference_Location :
Mexico City
Print_ISBN :
0-7803-6722-7
DOI :
10.1109/ISIC.2001.971476