Conference record number:
3297
Paper title:
A new Approach on Multi-Agent Multi-Objective Reinforcement Learning based on agents’ preferences
Authors:
Daavarani Asl Zeinab, Computer Engineering Department, Faculty of Engineering, Yazd University, Yazd, Iran; Derhami Vali, Computer Engineering Department, Faculty of Engineering, Yazd University, Yazd, Iran; Yazdian-Dehkordi Mehdi, Computer Engineering Department, Faculty of Engineering, Yazd University, Yazd, Iran
Keywords:
Pareto Front, multiobjective, multi-agent systems, reinforcement learning
Conference title:
19th International Symposium on Artificial Intelligence and Signal Processing
Abstract (English):
Reinforcement Learning (RL) is a powerful machine learning paradigm for solving Markov Decision Processes (MDPs). Traditional RL algorithms aim to solve single-objective problems, but many real-world problems involve multiple objectives that conflict with each other. In recent years, Multi-Objective Reinforcement Learning (MORL) algorithms, which employ a reward vector instead of a scalar reward signal, have been proposed to solve multi-objective problems. In MORL, because the objectives conflict, there is no single optimal solution; instead, a set of solutions known as the Pareto Front is learned. In this paper, we propose a new multi-agent method that uses a shared Q-table for all agents to solve bi-objective problems, while each agent selects actions based on its own preference. These preferences differ from one another, and the agents reach Pareto Front solutions according to them. The proposed method is easy to understand and its computational cost is very low. Moreover, after finding the Pareto Front set, we can easily track the policy. Simulation results show that our proposed method outperforms the available methods in terms of learning speed.
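To make the idea in the abstract concrete, the following Python sketch shows one possible reading of it: several agents share a single vector-valued Q-table for a bi-objective problem, and each agent chooses actions by scalarizing the shared Q-values with its own preference weight. This is not the authors' reference implementation; the linear scalarization, the epsilon-greedy exploration, the hyperparameter values, and the environment interface (reset/step returning a two-element reward vector) are all assumptions made for illustration.

```python
# Minimal sketch of a shared-Q-table, preference-based multi-agent MORL setup.
# Assumptions (not from the paper): linear scalarization of Q-vectors,
# epsilon-greedy exploration, and a generic bi-objective MDP interface.
import random

import numpy as np

N_STATES, N_ACTIONS, N_OBJECTIVES = 25, 4, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Shared Q-table: one vector-valued estimate per (state, action) pair,
# updated by every agent.
shared_q = np.zeros((N_STATES, N_ACTIONS, N_OBJECTIVES))

# Each agent gets a different preference over the two objectives.
preferences = [np.array([w, 1.0 - w]) for w in np.linspace(0.0, 1.0, 5)]


def select_action(state: int, preference: np.ndarray) -> int:
    """Epsilon-greedy over the preference-scalarized shared Q-values."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    scalarized = shared_q[state] @ preference  # shape: (N_ACTIONS,)
    return int(np.argmax(scalarized))


def update(state: int, action: int, reward_vec: np.ndarray,
           next_state: int, preference: np.ndarray) -> None:
    """Q-learning update applied component-wise to the reward vector; the
    greedy next action is chosen under the acting agent's preference."""
    next_action = int(np.argmax(shared_q[next_state] @ preference))
    target = reward_vec + GAMMA * shared_q[next_state, next_action]
    shared_q[state, action] += ALPHA * (target - shared_q[state, action])


def run_episode(env, preference: np.ndarray) -> None:
    """One episode for one agent; `env` is any bi-objective MDP exposing
    reset() -> state and step(action) -> (next_state, reward_vec, done)."""
    state = env.reset()
    done = False
    while not done:
        action = select_action(state, preference)
        next_state, reward_vec, done = env.step(action)
        update(state, action, np.asarray(reward_vec), next_state, preference)
        state = next_state
```

Under this reading, running episodes for all agents in `preferences` lets each one converge toward a different trade-off between the two objectives, so the scalarized greedy policies of the agents together approximate the Pareto Front.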