• DocumentCode
    324533
  • Title
    Q-learning based on regularization theory to treat the continuous states and actions
  • Author
    Fukao, Takanori ; Sumitomo, Takaaki ; Ineyama, Norikatsu ; Adachi, Norihiko

  • Author_Institution
    Dept. of Appl. Syst. Sci., Kyoto Univ., Japan
  • Volume
    2
  • fYear
    1998
  • fDate
    4-9 May 1998
  • Firstpage
    1057
  • Abstract
    Reinforcement learning is a technique for learning, through trial and error, how to act optimally in an unknown environment. Q-learning is one of the best-known reinforcement learning algorithms, but the ordinary Q-learning algorithm, which stores values in a Q-table, has difficulty handling continuous-valued states and actions. In this paper, a new Q-learning algorithm that can handle continuous values of the agent's state and action is presented. The algorithm is based on an approximation method drawn from regularization theory: the Q-function is smoothly approximated by radial basis functions. The algorithm is applied to path planning and to the control of an inverted pendulum.
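    As a minimal illustrative sketch (not the authors' implementation), the Python snippet below approximates the Q-function with Gaussian radial basis functions over the joint state-action space and adds an L2 (ridge) penalty as a stand-in for the regularization-theory smoothing the abstract describes; the toy 1-D task, the grid of RBF centers, and all hyperparameters are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # RBF centers on a 9x9 grid over state in [-1, 1] and action in [-1, 1]
        # (an assumed layout; the paper does not specify center placement here).
        s_grid, a_grid = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
        centers = np.column_stack([s_grid.ravel(), a_grid.ravel()])
        width = 0.25                      # shared Gaussian width (assumed)
        weights = np.zeros(len(centers))  # linear weights of the RBF expansion

        def features(s, a):
            # Gaussian RBF feature vector for a (state, action) pair.
            d = centers - np.array([s, a])
            return np.exp(-np.sum(d * d, axis=1) / (2.0 * width ** 2))

        def q_value(s, a):
            return features(s, a) @ weights

        # Continuous actions are maximized over a fine grid for simplicity.
        action_grid = np.linspace(-1, 1, 21)

        def greedy_action(s):
            return action_grid[np.argmax([q_value(s, a) for a in action_grid])]

        # Toy 1-D task (assumed): drive the state toward 0; reward is -s^2.
        gamma, alpha, lam, eps = 0.95, 0.1, 1e-3, 0.2
        for episode in range(200):
            s = rng.uniform(-1, 1)
            for _ in range(50):
                a = rng.uniform(-1, 1) if rng.random() < eps else greedy_action(s)
                s_next = float(np.clip(s + 0.2 * a, -1, 1))
                r = -s_next ** 2
                # TD(0) target, bootstrapping with a greedy max over the action grid.
                target = r + gamma * max(q_value(s_next, b) for b in action_grid)
                phi = features(s, a)
                td_error = target - phi @ weights
                # Gradient step on squared TD error plus a ridge penalty; the lam
                # term plays the role of the smoothness-enforcing regularizer.
                weights += alpha * (td_error * phi - lam * weights)
                s = s_next

        print("greedy action at s = 0.5:", greedy_action(0.5))  # expect a < 0

    Maximizing over a fixed action grid is one simple way to handle the continuous action at decision time; the paper's regularization-theory formulation would instead determine the smoothness of the approximated Q-function directly, for which the ridge term above is only a crude proxy.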
  • Keywords
    function approximation; learning (artificial intelligence); path planning; position control; Q-learning; approximation method; continuous actions; continuous states; inverted pendulum; radial basis functions; regularization theory; Approximation algorithms; Approximation methods; Counting circuits; Feedback; Learning; Path planning; State estimation; Temperature distribution
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    The 1998 IEEE International Joint Conference on Neural Networks Proceedings (IEEE World Congress on Computational Intelligence)
  • Conference_Location
    Anchorage, AK
  • ISSN
    1098-7576
  • Print_ISBN
    0-7803-4859-1
  • Type
    conf
  • DOI
    10.1109/IJCNN.1998.685918
  • Filename
    685918