• DocumentCode
    2469971
  • Title
    Approximate dynamic programming using Bellman residual elimination and Gaussian process regression

  • Author
    Bethke, Brett, Jr.; How, Jonathan P.

  • Author_Institution
    Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, USA
  • fYear
    2009
  • fDate
    10-12 June 2009
  • Firstpage
    745
  • Lastpage
    750
  • Abstract
    This paper presents an approximate policy iteration algorithm for solving infinite-horizon, discounted Markov decision processes (MDPs) for which a model of the system is available. The algorithm is similar in spirit to Bellman residual minimization methods. However, by using Gaussian process regression with nondegenerate kernel functions as the underlying cost-to-go function approximation architecture, the algorithm is able to explicitly construct cost-to-go solutions for which the Bellman residuals are identically zero at a set of chosen sample states. For this reason, we have named our approach Bellman residual elimination (BRE). Since the Bellman residuals are zero at the sample states, our BRE algorithm can be proven to reduce to exact policy iteration in the limit of sampling the entire state space. Furthermore, the algorithm can automatically optimize the choice of any free kernel parameters and provide error bounds on the resulting cost-to-go solution. Computational results on a classic reinforcement learning problem indicate that the algorithm yields a high-quality policy and cost approximation.
  • Keywords
    Gaussian processes; Markov processes; decision theory; dynamic programming; function approximation; infinite horizon; iterative methods; learning (artificial intelligence); minimisation; regression analysis; sampling methods; Bellman residual elimination algorithm; Bellman residual minimization method; Gaussian process regression; MDP; Markov decision process; approximate dynamic programming; approximate policy iteration algorithm; cost-to-go function approximation architecture; error bound; infinite horizon; nondegenerate kernel function; reinforcement learning algorithm; state space sampling; Approximation algorithms; Costs; Dynamic programming; Function approximation; Gaussian processes; Kernel; Learning; Minimization methods; Sampling methods; State-space methods
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    American Control Conference, 2009 (ACC '09)
  • Conference_Location
    St. Louis, MO
  • ISSN
    0743-1619
  • Print_ISBN
    978-1-4244-4523-3
  • Electronic_ISSN
    0743-1619
  • Type
    conf
  • DOI
    10.1109/ACC.2009.5160344
  • Filename
    5160344