DocumentCode :
3348216
Title :
Disturbance rejection of multi-agent systems: A reinforcement learning differential game approach
Author :
Jiao, Qiang ; Modares, Hamidreza ; Xu, Shengyuan ; Lewis, Frank L. ; Vamvoudakis, Kyriakos G.
Author_Institution :
Sch. of Autom., Nanjing Univ. of Sci. & Technol., Nanjing, China
fYear :
2015
fDate :
1-3 July 2015
Firstpage :
737
Lastpage :
742
Abstract :
This paper considers distributed tracking control of linear multi-agent systems in the presence of disturbances. The problem is first formulated as a multi-player zero-sum differential graphical game, and it is shown that its solution requires solving a set of coupled Hamilton-Jacobi-Isaacs (HJI) equations. A multi-agent reinforcement learning algorithm is developed to find the solution to these coupled HJI equations, and its convergence to the optimal solution is proven. It is also shown that the proposed method guarantees L2-bounded synchronization errors in the presence of dynamic disturbances.
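As context for the abstract, zero-sum differential graphical games of this kind are typically built from a local neighborhood tracking error and a per-agent cost that penalizes the control effort while crediting the worst-case disturbance. The sketch below uses generic, assumed notation (x_i: agent state, x_0: leader state, u_i: control, w_i: disturbance, gamma: attenuation level, a_ij and g_i: graph and pinning weights) and is a standard structural illustration, not a reproduction of the paper's exact equations.

\[
\delta_i \;=\; \sum_{j \in N_i} a_{ij}\,(x_i - x_j) \;+\; g_i\,(x_i - x_0),
\qquad
J_i \;=\; \int_0^{\infty} \big( \delta_i^{\top} Q_i \delta_i + u_i^{\top} R_{ii} u_i - \gamma^2 w_i^{\top} w_i \big)\, dt .
\]

Each agent's value function V_i then satisfies a coupled Hamilton-Jacobi-Isaacs equation of the schematic form

\[
0 \;=\; \delta_i^{\top} Q_i \delta_i + u_i^{*\top} R_{ii} u_i^{*} - \gamma^2 w_i^{*\top} w_i^{*} + \nabla V_i^{\top}\,\dot{\delta}_i \,\big|_{u^{*},\, w^{*}} ,
\]

and the reinforcement learning algorithm referred to in the abstract approximates the solutions of these coupled equations, with the saddle-point policies u_i^* (control) and w_i^* (worst-case disturbance) recovered from the learned value gradients.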
Keywords :
convergence of numerical methods; differential games; directed graphs; learning (artificial intelligence); linear matrix inequalities; linear systems; multi-agent systems; synchronisation; L2-bounded synchronization errors; algorithm convergence; coupled HJI equations; coupled Hamilton-Jacobi-Isaacs equations; distributed tracking control; disturbance rejection; dynamical disturbances; multiagent linear systems; multiagent reinforcement learning algorithm; multiplayer zero-sum differential graphical game; optimal solution; reinforcement learning differential game approach; Convergence; Games; Heuristic algorithms; Learning (artificial intelligence); Nash equilibrium; Synchronization;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
American Control Conference (ACC), 2015
Conference_Location :
Chicago, IL
Print_ISBN :
978-1-4799-8685-9
Type :
conf
DOI :
10.1109/ACC.2015.7170822
Filename :
7170822