• Title of article

    Reinforcement learning algorithms for robotic navigation in dynamic environments

  • Author/Authors

    Yen, Gary G. and Hickey, Travis W.

  • Issue Information
    Journal with serial number, year 2004
  • Pages
    14
  • From page
    217
  • To page
    230
  • Abstract
    The purpose of this study was to examine improvements to reinforcement learning (RL) algorithms that allow an agent to interact successfully with dynamic environments. The scope of the research was RL algorithms as applied to robotic navigation. Proposed improvements include the addition of a forgetting mechanism, the use of feature-based state inputs, and the hierarchical structuring of an RL agent. Simulations were performed to evaluate the individual merits and flaws of each proposal, to compare the proposed methods to previously established methods, and to compare the proposed methods to theoretically optimal solutions. Incorporating a forgetting mechanism considerably improved the learning times of RL agents in a dynamic environment. However, a direct implementation of a feature-based RL agent did not yield any performance enhancement, as purely feature-based navigation leaves the agent without positional awareness and unable to determine the location of the goal state. Including a hierarchical structure in an RL agent significantly improved performance, specifically when one layer of the hierarchy used a feature-based agent for obstacle avoidance and another layer used a standard RL agent for global navigation. In summary, the inclusion of a forgetting mechanism and the use of a hierarchically structured RL agent offer substantially better performance than traditional RL agents navigating in a dynamic environment.
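
    The abstract names a forgetting mechanism but does not specify its form. The sketch below is a minimal, hypothetical illustration of tabular Q-learning in which Q-values for state-action pairs other than the one just visited decay toward their initial values, one plausible way to let stale estimates fade in a dynamic environment. The class name and the parameters forget_rate and q_init are illustrative assumptions, not the paper's method.

      import numpy as np

      class ForgettingQAgent:
          """Tabular Q-learning with a simple, hypothetical forgetting rule:
          unvisited entries decay toward their initial value so the agent can
          re-learn when the environment changes. Not the paper's exact method."""

          def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                       epsilon=0.1, forget_rate=0.001, q_init=0.0):
              self.q = np.full((n_states, n_actions), q_init, dtype=float)
              self.q_init = q_init          # value that forgotten entries decay toward
              self.alpha = alpha            # learning rate
              self.gamma = gamma            # discount factor
              self.epsilon = epsilon        # exploration probability
              self.forget_rate = forget_rate

          def act(self, state, rng=np.random):
              # Epsilon-greedy action selection.
              if rng.random() < self.epsilon:
                  return rng.randint(self.q.shape[1])
              return int(np.argmax(self.q[state]))

          def update(self, state, action, reward, next_state):
              # Standard Q-learning update for the visited state-action pair.
              td_target = reward + self.gamma * np.max(self.q[next_state])
              self.q[state, action] += self.alpha * (td_target - self.q[state, action])

              # Forgetting: pull all other entries slightly back toward q_init,
              # so outdated estimates fade in a dynamic environment.
              mask = np.ones_like(self.q, dtype=bool)
              mask[state, action] = False
              self.q[mask] += self.forget_rate * (self.q_init - self.q[mask])

    In such a scheme, forget_rate trades stability for adaptability: a larger value lets the agent discard obsolete knowledge faster after the environment changes, at the cost of slower convergence in a static one.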
  • Keywords
    reinforcement learning, navigation, dynamic environment, obstacle avoidance
  • Journal title
    ISA TRANSACTIONS
  • Serial Year
    2004
  • Record number

    2382608