Title :
Hierarchical Nash-Q learning in continuous games
Author :
Sahraei-Ardakani, Mostafa ; Rahimi-Kian, Ashkan ; Nili-Ahmadabadi, Majid
Author_Institution :
ECE Dept., University of Tehran, Tehran
Abstract :
Multi-agent reinforcement learning (RL) algorithms usually operate on repeated, extensive-form, or stochastic games, and RL methods are generally developed for systems that are discrete in both states and actions. In this paper, a hierarchical method for learning equilibrium strategies in continuous games is developed. The hierarchy is used to break the continuous domain of strategies into discrete sets of hierarchical strategies. The algorithm is proved to converge to the Nash equilibrium for a specific class of games with dominant strategies. It is then applied to several other games, for which convergence is shown empirically; such empirical evaluation is common practice for RL algorithms applied to problems where no proof of convergence exists.
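The abstract outlines the approach without implementation details; the following is a minimal, hypothetical sketch of the core idea only (discretize the continuous strategy space coarsely, learn joint-action values Nash-Q style, then zoom the hierarchy into the equilibrium cell). The Cournot-style payoffs u1/u2, the pure_nash helper, and all parameters (branching, samples, noise) are illustrative assumptions, not the authors' algorithm; this example game has a unique pure equilibrium rather than dominant strategies, so it corresponds to the paper's empirical cases rather than its convergence proof.

```python
import numpy as np

# Hypothetical continuous game (Cournot-style duopoly): each player picks
# a quantity in [0, 1]; the true Nash equilibrium is a1 = a2 = 1/3.
def u1(a1, a2):
    return a1 * (1.0 - a1 - a2)

def u2(a1, a2):
    return a2 * (1.0 - a1 - a2)

def pure_nash(Q1, Q2, tol=1e-12):
    """First pure-strategy Nash equilibrium (i, j) of the bimatrix (Q1, Q2), or None."""
    n, m = Q1.shape
    for i in range(n):
        for j in range(m):
            if Q1[i, j] >= Q1[:, j].max() - tol and Q2[i, j] >= Q2[i, :].max() - tol:
                return i, j
    return None

def hierarchical_nash_q(levels=6, branching=3, samples=300, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    lo1, hi1 = 0.0, 1.0          # player 1's current strategy interval
    lo2, hi2 = 0.0, 1.0          # player 2's current strategy interval
    for _ in range(levels):
        # Discretize each interval into `branching` midpoint actions.
        acts1 = lo1 + (np.arange(branching) + 0.5) * (hi1 - lo1) / branching
        acts2 = lo2 + (np.arange(branching) + 0.5) * (hi2 - lo2) / branching
        # Stateless Nash-Q: estimate joint-action values from noisy payoff samples.
        Q1 = np.zeros((branching, branching))
        Q2 = np.zeros((branching, branching))
        counts = np.zeros((branching, branching))
        for _ in range(samples):
            i, j = rng.integers(branching), rng.integers(branching)
            counts[i, j] += 1
            alpha = 1.0 / counts[i, j]   # decreasing learning rate
            Q1[i, j] += alpha * (u1(acts1[i], acts2[j]) + rng.normal(0, noise) - Q1[i, j])
            Q2[i, j] += alpha * (u2(acts1[i], acts2[j]) + rng.normal(0, noise) - Q2[i, j])
        eq = pure_nash(Q1, Q2)
        if eq is None:           # noise can destroy the pure equilibrium; stop refining
            break
        i, j = eq
        # Zoom into the equilibrium cell: it becomes the next level's strategy interval.
        w1, w2 = (hi1 - lo1) / branching, (hi2 - lo2) / branching
        lo1, hi1 = lo1 + i * w1, lo1 + (i + 1) * w1
        lo2, hi2 = lo2 + j * w2, lo2 + (j + 1) * w2
    return (lo1 + hi1) / 2, (lo2 + hi2) / 2

print(hierarchical_nash_q())     # converges near (1/3, 1/3)
```

Under these assumptions, each hierarchical level shrinks the discrete strategy sets by the branching factor, so after k levels the resolution is branching^-k of the original interval, which is how the hierarchy trades a single fine discretization for a sequence of coarse equilibrium computations.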
Keywords :
convergence; game theory; learning (artificial intelligence); multi-agent systems; Nash-equilibrium; continuous games; discrete systems; hierarchical Nash-Q learning; hierarchical method; multi-agent reinforcement learning algorithms; Algorithm design and analysis; Educational institutions; Equations; Minimax techniques; Optimization methods; Prototypes; Stochastic processes
Conference_Title :
Computational Intelligence and Games, 2008. CIG '08. IEEE Symposium on
Conference_Location :
Perth, WA, Australia
Print_ISBN :
978-1-4244-2973-8
Electronic_ISBN :
978-1-4244-2974-5
DOI :
10.1109/CIG.2008.5035652