  • DocumentCode
    2397145
  • Title
    Improving Learning Stability for Reinforcement Learning Agent
  • Author
    Du Xiaoqin; Li Qinghua

  • Author_Institution
    Coll. of Comput. Sci. & Technol., Huazhong Univ. of Sci. & Technol., Hubei
  • fYear
    2006
  • fDate
    0-0 0
  • Firstpage
    1041
  • Lastpage
    1046
  • Abstract
    We present a method that combines the actor/critic architecture with the self-organizing feature map (SOFM) to improve learning stability for reinforcement learning agents. The proposed model uses a SOFM that takes the continuous state space as input and produces output neurons, which are then mapped to BOXES. The model extends the actor/critic architecture so that inactive BOXES can learn appropriate eligibility traces from active BOXES, thereby improving learning stability. Experimental results from a simulation show that the model learns a useful partitioning of the continuous state space and improves learning stability for reinforcement learning agents.
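    The SOFM-based partitioning described in the abstract can be illustrated with a minimal sketch (function names, parameters, and training schedule below are our own illustrative choices, not taken from the paper): a 1-D self-organizing map quantizes a continuous 2-D state space into a fixed number of discrete units, each unit acting as one BOX.

    ```python
    import numpy as np

    def train_sofm(states, n_units=8, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
        """Train a 1-D self-organizing feature map that partitions a
        continuous state space into n_units discrete cells ("BOXES")."""
        rng = np.random.default_rng(seed)
        # Initialize unit weight vectors from randomly chosen training states.
        w = states[rng.choice(len(states), n_units, replace=False)].astype(float)
        idx = np.arange(n_units)
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)                   # decaying learning rate
            sigma = max(sigma0 * (1 - t / epochs), 0.5)   # shrinking neighborhood
            for s in states:
                # Best-matching unit: the unit closest to the input state.
                bmu = np.argmin(np.linalg.norm(w - s, axis=1))
                # Gaussian neighborhood on the 1-D unit lattice.
                h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
                # Pull the winner and its neighbors toward the input state.
                w += lr * h[:, None] * (s - w)
        return w

    def to_box(w, s):
        """Map a continuous state to the index of its nearest SOFM unit (BOX)."""
        return int(np.argmin(np.linalg.norm(w - s, axis=1)))
    ```

    In the paper's architecture, the resulting BOX index would be the discrete state handed to the actor/critic learner; nearby continuous states map to the same or adjacent BOXES, which is what makes sharing eligibility traces between neighboring BOXES plausible.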
  • Keywords
    learning (artificial intelligence); self-organising feature maps; state-space methods; actor-critic architecture; continuous state space; eligibility trace; learning stability; neurons; reinforcement learning agent; self-organizing feature map; Computer science; Educational institutions; High performance computing; Neurons; Partitioning algorithms; Stability; State-space methods; Supervised learning; Training data; Unsupervised learning
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 2006 IEEE International Conference on Networking, Sensing and Control (ICNSC '06)
  • Conference_Location
    Ft. Lauderdale, FL
  • Print_ISBN
    1-4244-0065-1
  • Type
    conf
  • DOI
    10.1109/ICNSC.2006.1673295
  • Filename
    1673295