• DocumentCode
    313152
  • Title
    Direct-reinforcement-adaptive-learning neural network control for nonlinear systems
  • Author
    Kim, Young H.; Lewis, Frank L.
  • Author_Institution
    Autom. & Robotics Res. Inst., Texas Univ., Arlington, TX, USA
  • Volume
    3
  • fYear
    1997
  • fDate
    4-6 Jun 1997
  • Firstpage
    1804
  • Abstract
    The paper is concerned with the application of reinforcement learning techniques to the feedback control of nonlinear systems using neural networks (NNs). Even when a good model of the nonlinear system is known, it is often difficult to formulate a control law. This work addresses the problem by showing how an NN can cope with nonlinearities through reinforcement learning, with no preliminary off-line learning phase required. The learning is performed online, based on a binary reinforcement signal from a critic, without knowledge of the nonlinearity appearing in the system. The algorithm is derived from a Lyapunov stability analysis, so that both system tracking stability and error convergence are guaranteed in the closed-loop system.
  • Keywords
    Lyapunov methods; adaptive control; closed loop systems; convergence; feedback; learning (artificial intelligence); neurocontrollers; nonlinear control systems; tracking; Lyapunov stability analysis; binary reinforcement signal; closed-loop system; direct-reinforcement-adaptive-learning neural network control; error convergence; feedback control; nonlinear systems; system tracking stability; Control nonlinearities; Control systems; Electronic mail; Feedback control; Learning; Neural networks; Nonlinear control systems; Nonlinear systems; Robotics and automation; Stability analysis;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 1997 American Control Conference
  • Conference_Location
    Albuquerque, NM
  • ISSN
    0743-1619
  • Print_ISBN
    0-7803-3832-4
  • Type
    conf
  • DOI
    10.1109/ACC.1997.610896
  • Filename
    610896
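
  • Note
    The abstract describes an NN controller whose weights are tuned online from a binary reinforcement signal supplied by a critic, with the tuning law derived from a Lyapunov analysis. Below is a minimal sketch of that idea for a scalar second-order plant, assuming a filtered-tracking-error formulation, a Gaussian RBF network, and a sign-of-error reinforcement signal. The gains (lam, Kv, F, kappa), the basis construction, and the damping term are illustrative assumptions for this sketch, not the paper's exact tuning law.

    import numpy as np

    # Minimal sketch of a direct-reinforcement adaptive NN controller, under assumptions:
    #  - scalar second-order plant  x_ddot = f(x, x_dot) + u  with f unknown to the controller,
    #  - filtered tracking error    r = e_dot + lam * e,  with  e = x_des - x,
    #  - binary reinforcement signal  R = sign(r)  standing in for the critic's output,
    #  - illustrative online weight tuning  W_dot = F * phi * R - kappa * F * |r| * W  (no off-line phase).

    rng = np.random.default_rng(0)

    def rbf_basis(z, centers, width=1.0):
        """Gaussian radial-basis features of the state vector z."""
        return np.exp(-np.sum((z - centers) ** 2, axis=1) / (2.0 * width ** 2))

    def f_true(x, xd):
        """Unknown plant nonlinearity (used only to simulate the plant, never by the controller)."""
        return -0.5 * x ** 3 - 0.2 * np.sin(xd)

    # Controller parameters (illustrative choices)
    lam, Kv, F, kappa = 2.0, 5.0, 10.0, 0.01
    centers = rng.uniform(-2.0, 2.0, size=(25, 2))
    W_hat = np.zeros(25)            # NN weights, tuned online starting from zero

    dt, T = 1e-3, 10.0
    x, xd = 0.5, 0.0                # initial plant state
    for k in range(int(T / dt)):
        t = k * dt
        x_des, xd_des, xdd_des = np.sin(t), np.cos(t), -np.sin(t)

        # Tracking errors and filtered tracking error
        e, ed = x_des - x, xd_des - xd
        r = ed + lam * e

        # Binary reinforcement signal from the "critic"
        R = np.sign(r)

        # NN compensation term and control law (PD-like term plus NN term)
        phi = rbf_basis(np.array([x, xd]), centers)
        u = xdd_des + lam * ed + Kv * r + W_hat @ phi

        # Reinforcement-driven online weight tuning with a small damping (robustifying) term
        W_hat += dt * (F * phi * R - kappa * F * abs(r) * W_hat)

        # Plant simulation (Euler step)
        xdd = f_true(x, xd) + u
        xd += dt * xdd
        x += dt * xd

    print(f"tracking error after {T:.0f} s: e = {np.sin(T) - x:.4f}")

    With this sign convention the NN term is added to the control and its weights drift toward cancelling the unknown plant nonlinearity; using sign(r) rather than r in the learning term makes the adaptation depend only on the direction of the filtered error, which is the binary-reinforcement flavor the abstract refers to. The robustness and convergence guarantees claimed in the paper come from its Lyapunov construction, not from this sketch.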