• DocumentCode
    391016
  • Title
    Convergence of numerical optimal feedback policies for deterministic optimal control problems
  • Author
    Dupuis, Paul; Szpiro, Adam
  • Author_Institution
    Div. of Appl. Math., Brown Univ., Providence, RI, USA
  • Volume
    3
  • fYear
    2002
  • fDate
    10-13 Dec. 2002
  • Firstpage
    3138
  • Abstract
    We consider a Markov chain based numerical approximation method for a class of deterministic nonlinear optimal control problems. It is known that methods of this type yield convergent approximations to the value function on the entire domain. These results do not easily extend to the optimal control, which need not be uniquely defined on the entire domain. There are, however, regions of strong regularity on which the optimal control is well defined and smooth. Typically, the union of these regions is open and dense in the domain. Using probabilistic methods, we prove that on the regions of strong regularity, the Markov chain method yields a convergent sequence of approximations to the optimal feedback control. The result is illustrated with several examples.
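    The record does not spell out the numerical scheme, but the general flavor of a Markov chain (grid) approximation to a value function, with the feedback control recovered as the pointwise minimizer, can be sketched. Everything below is an illustrative assumption, not the paper's method: a 1-D deterministic problem dx/dt = u with running cost x² + u² and discount rate r, solved by value iteration on a state grid, with linear interpolation playing the role of the approximating chain's transition probabilities.

    ```python
    import numpy as np

    # Illustrative sketch only: grid-based value iteration for
    #   dx/dt = u,  running cost x^2 + u^2,  discount rate r.
    # Linear interpolation between grid points stands in for the
    # approximating Markov chain's transitions.
    dt, r = 0.05, 0.1
    xs = np.linspace(-1.0, 1.0, 201)   # state grid
    us = np.linspace(-1.0, 1.0, 21)    # control grid
    V = np.zeros_like(xs)

    for _ in range(2000):
        # Bellman update: V(x) = min_u [(x^2 + u^2) dt + e^{-r dt} V(x + u dt)]
        x_next = np.clip(xs[None, :] + us[:, None] * dt, -1.0, 1.0)
        Q = (xs[None, :] ** 2 + us[:, None] ** 2) * dt \
            + np.exp(-r * dt) * np.interp(x_next, xs, V)
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < 1e-10:
            V = V_new
            break
        V = V_new

    # Approximate feedback policy: the minimizing control at each grid point.
    policy = us[Q.argmin(axis=0)]
    ```

    On a region where the true optimal control is unique and smooth (a region of strong regularity in the paper's terminology), one expects the grid policy to stabilize under refinement; away from such regions the argmin may oscillate between competing minimizers.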
  • Keywords
    Markov processes; approximation theory; convergence of numerical methods; feedback; nonlinear control systems; optimal control; probability; Markov chain based numerical approximation method; convergence; convergent approximations; deterministic optimal control problems; nonlinear control problems; numerical optimal feedback policies; probabilistic methods; strong regularity; Approximation methods; Convergence of numerical methods; Cost function; Feedback control; Finite difference methods; Mathematics; Optimal control; Stochastic processes; Symmetric matrices; US Government
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Proceedings of the 41st IEEE Conference on Decision and Control, 2002
  • ISSN
    0191-2216
  • Print_ISBN
    0-7803-7516-5
  • Type
    conf
  • DOI
    10.1109/CDC.2002.1184352
  • Filename
    1184352