• DocumentCode
    2717652
  • Title
    Reinforcement Learning in Continuous Action Spaces

  • Author
    Van Hasselt, Hado; Wiering, Marco A.

  • Author_Institution
    Dept. of Inf. & Comput. Sci., Utrecht Univ.
  • fYear
    2007
  • fDate
    1-5 April 2007
  • Firstpage
    272
  • Lastpage
    279
  • Abstract
    Considerable research has addressed reinforcement learning in continuous state spaces, but work on problems where actions must also be chosen from a continuous space is far more limited. We present a new class of algorithms, the continuous actor critic learning automaton (CACLA), that can handle continuous states and actions. The resulting algorithm is straightforward to implement. We compare it experimentally against other algorithms that can handle continuous action spaces. These experiments show that CACLA performs much better than the other algorithms, especially when it is combined with a Gaussian exploration method.
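    The core CACLA idea is an actor-critic in which exploration is Gaussian noise around the actor's output, and the actor is moved toward the executed action only when the temporal-difference error is positive. A minimal sketch of that update rule on a hypothetical toy task (the tracking task, linear function approximators, and all learning rates below are illustrative assumptions, not the paper's experimental setup):

    ```python
    import random

    def cacla_demo(episodes=20000, alpha=0.1, beta=0.1, sigma=0.3, seed=0):
        """CACLA-style updates on a toy one-step task: reward -(a - s)^2,
        so the optimal continuous action equals the continuous state."""
        rng = random.Random(seed)
        # Critic V(s) = w0 + w1*s and actor A(s) = u0 + u1*s (assumed linear models).
        w0 = w1 = 0.0
        u0 = u1 = 0.0
        for _ in range(episodes):
            s = rng.uniform(-1.0, 1.0)
            mean = u0 + u1 * s
            a = mean + rng.gauss(0.0, sigma)      # Gaussian exploration around the actor
            r = -(a - s) ** 2                     # toy reward: track the state
            delta = r - (w0 + w1 * s)             # TD error (one-step episode, no successor state)
            # Critic: gradient step toward the observed return.
            w0 += alpha * delta
            w1 += alpha * delta * s
            # Actor (CACLA rule): move toward the executed action
            # only when the TD error is positive.
            if delta > 0:
                u0 += beta * (a - mean)
                u1 += beta * (a - mean) * s
        return u0, u1
    ```

    On this toy task the actor's slope should approach 1 and its offset 0, i.e. the greedy action tracks the state; the sign test on the TD error is what distinguishes CACLA from gradient-based actor updates.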
  • Keywords
    continuous systems; learning (artificial intelligence); learning automata; continuous action space; continuous actor critic learning automaton; continuous state; reinforcement learning; Books; Computational modeling; Dynamic programming; Intelligent systems; Learning automata; Physics computing; Telephony;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2007)
  • Conference_Location
    Honolulu, HI
  • Print_ISBN
    1-4244-0706-0
  • Type
    conf
  • DOI
    10.1109/ADPRL.2007.368199
  • Filename
    4220844