Title :
Linearly solvable Markov games
Author :
Dvijotham, Krishnamurthy; Todorov, Emo
Author_Institution :
Department of Computer Science and Engineering, University of Washington, Seattle, WA, USA
Abstract :
Recent work has led to a novel theory of linearly solvable optimal control, in which the Bellman equation characterizing the optimal value function reduces to a linear equation. This work has already shown promising results in planning and control of nonlinear systems in high-dimensional state spaces. In this paper, we extend the class of linearly solvable problems to include certain kinds of two-player Markov games. In terms of modeling power, the new framework is more general than previous work and can be applied to any noisy dynamical system. We also obtain analytical solutions for the optimal value function of continuous-state control problems with linear dynamics and a very flexible class of cost functions. The linearity leads to many other useful properties: the ability to compose solutions to simple control problems in order to obtain solutions to new problems, and a convex optimization formulation of inverse optimal control. We demonstrate the usefulness of the framework through examples of forward and inverse optimal control problems in both continuous and discrete state spaces.
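To make the linearity concrete, the following is a minimal sketch of the earlier linearly solvable MDP setting that this paper builds on (Todorov's formulation): exponentiating the value function, z = exp(-v), turns the finite-horizon Bellman recursion into a plain matrix-vector product. The 3-state passive dynamics, cost vectors, and function name below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lmdp_finite_horizon(P, q, q_final, T):
    """Backward recursion for the desirability function z = exp(-v).

    In a linearly solvable MDP the exponentiated Bellman equation is
    linear in z:
        z_t = exp(-q) * (P @ z_{t+1}),   z_T = exp(-q_final)
    so each backward step is a matrix-vector product rather than a
    minimization over actions.
    """
    z = np.exp(-q_final)
    for _ in range(T):
        z = np.exp(-q) * (P @ z)
    return -np.log(z)  # optimal value function v = -log z

# Illustrative 3-state passive dynamics (row-stochastic) and state costs.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
q = np.array([1.0, 0.5, 0.0])        # running state cost
q_final = np.array([2.0, 1.0, 0.0])  # terminal cost
v = lmdp_finite_horizon(P, q, q_final, T=10)
print(v)  # the cheap state ends up with the lowest value
```

Because each backward step is linear in z, solutions for different terminal costs can be superposed, which is the composability property the abstract refers to.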
Keywords :
Markov processes; continuous systems; convex programming; discrete systems; game theory; nonlinear control systems; optimal control; 2-player Markov games; Bellman equation; continuous-state control problems; convex optimization formulation; cost functions; forward optimal control; inverse optimal control; linear dynamics; linear equation; linearly solvable problems; noisy dynamical system; nonlinear system control; optimal value function; Aerospace electronics; Cost function; Equations; Games; Markov processes; Mathematical model; Optimal control
Conference_Titel :
American Control Conference (ACC), 2012
Conference_Location :
Montreal, QC
Print_ISBN :
978-1-4577-1095-7
Electronic_ISSN :
0743-1619
DOI :
10.1109/ACC.2012.6315632