• DocumentCode
    3243166
  • Title
    Q learning behavior on autonomous navigation of physical robot
  • Author
    Wicaksono, Handy
  • Author_Institution
    Dept. of Electr. Eng., Petra Christian Univ., Surabaya, Indonesia
  • fYear
    2011
  • fDate
    23-26 Nov. 2011
  • Firstpage
    50
  • Lastpage
    54
  • Abstract
    Behavior-based architecture gives a robot fast and reliable action. When a robot has many behaviors, behavior coordination is needed. Subsumption architecture is a behavior coordination method that gives quick and robust responses. A learning mechanism improves the robot's performance in handling uncertainty. Q learning is a popular reinforcement learning method that has been widely used in robot learning because it is simple, convergent, and off-policy. In this paper, Q learning is used as the learning mechanism for the obstacle avoidance behavior in autonomous robot navigation. The learning rate of Q learning affects the robot's performance in the learning phase. As a result, the Q learning algorithm is successfully implemented on a physical robot in its imperfect environment.
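    The abstract refers to the standard off-policy Q learning update and the effect of its learning rate; the sketch below is a minimal, hypothetical illustration of that update for a discretized obstacle avoidance task. The state names, actions, reward handling, and parameter values are assumptions for illustration only and are not taken from the paper.

    import random

    # Minimal Q learning sketch for an obstacle avoidance behavior.
    # State/action sets and hyperparameters are illustrative assumptions.
    STATES = ["clear", "obstacle_left", "obstacle_right", "obstacle_front"]
    ACTIONS = ["forward", "turn_left", "turn_right"]

    ALPHA = 0.5    # learning rate (the paper studies its effect on learning)
    GAMMA = 0.9    # discount factor
    EPSILON = 0.1  # exploration rate for epsilon-greedy action selection

    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

    def choose_action(state):
        """Epsilon-greedy action selection over the Q table."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """Off-policy Q learning update:
        Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])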
  • Keywords
    collision avoidance; learning (artificial intelligence); mobile robots; autonomous navigation; autonomous robot navigation; behavior based architecture; behavior coordination; learning mechanism; obstacle avoidance behavior; physical robot; reinforcement learning method; robot learning; subsumption architecture; Collision avoidance; Computer architecture; Learning; Learning systems; Navigation; Robot kinematics; Q learning; autonomous navigation; behavior coordination; physical robot;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI)
  • Conference_Location
    Incheon
  • Print_ISBN
    978-1-4577-0722-3
  • Type
    conf
  • DOI
    10.1109/URAI.2011.6145931
  • Filename
    6145931