Title:
Convergence of numerical optimal feedback policies for deterministic optimal control problems
Author:
Dupuis, Paul; Szpiro, Adam
Author_Institution:
Div. of Appl. Math., Brown Univ., Providence, RI, USA
Abstract:
We consider a Markov chain-based numerical approximation method for a class of deterministic nonlinear optimal control problems. It is known that methods of this type yield convergent approximations to the value function on the entire domain. These results do not extend easily to the optimal control, which need not be uniquely defined on the entire domain. There are, however, regions of strong regularity on which the optimal control is well defined and smooth. Typically, the union of these regions is open and dense in the domain. Using probabilistic methods, we prove that on the regions of strong regularity, the Markov chain method yields a convergent sequence of approximations to the optimal feedback control. The result is illustrated with several examples.
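The general approach named in the abstract (a controlled Markov chain on a grid whose value function approximates the continuous one, with the feedback control read off as the minimizing control at each grid point) can be sketched as follows. This is a minimal illustration of the Kushner-Dupuis-style construction on a hypothetical 1-D problem, minimize ∫ e^(-βt)(x² + u²) dt subject to dx/dt = u with |u| ≤ 1; the problem data, grids, and tolerances are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative 1-D deterministic problem (not the paper's examples):
#   minimize  integral of exp(-beta*t) * (x^2 + u^2) dt,  dx/dt = u,  |u| <= 1.
beta, h = 1.0, 0.05                      # discount rate, grid spacing
xs = np.arange(-2.0, 2.0 + h / 2, h)     # state grid on [-2, 2]
us = np.linspace(-1.0, 1.0, 40)          # control grid; even count excludes u = 0
V = np.zeros_like(xs)

for _ in range(3000):                    # value iteration on the approximating chain
    Q = np.empty((len(xs), len(us)))
    for j, u in enumerate(us):
        dt = h / abs(u)                  # interpolation interval for this control
        disc = np.exp(-beta * dt)        # discount over one chain step
        # The chain moves one grid cell in the drift direction (clamped at the edges).
        if u > 0:
            Vnext = np.append(V[1:], V[-1])
        else:
            Vnext = np.append(V[0], V[:-1])
        Q[:, j] = (xs**2 + u**2) * dt + disc * Vnext
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Approximate feedback control: the minimizing control at each grid point.
policy = us[Q.argmin(axis=1)]
```

The value iterates converge because each chain step carries a discount factor strictly below one. The extracted `policy` drives the state toward the origin (negative for x > 0, positive for x < 0), which is the kind of feedback approximation whose convergence, on regions of strong regularity, is the subject of the paper.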
Keywords:
Markov processes; approximation theory; approximation methods; convergence of numerical methods; convergence; convergent approximations; feedback; feedback control; nonlinear control systems; nonlinear control problems; optimal control; numerical optimal feedback policies; deterministic optimal control problems; Markov chain based numerical approximation method; probability; probabilistic methods; strong regularity; cost function; finite difference methods; stochastic processes; symmetric matrices; mathematics
Conference_Title:
Proceedings of the 41st IEEE Conference on Decision and Control, 2002
Print_ISBN:
0-7803-7516-5
DOI:
10.1109/CDC.2002.1184352