DocumentCode
548901
Title
Hierarchical Reinforcement Learning: Learning sub-goals and state-abstraction
Author
Jardim, David; Nunes, Luís; Oliveira, Sancho
Author_Institution
ADETTI & ISCTE-IUL, Inst. Univ. de Lisboa, Lisbon, Portugal
fYear
2011
fDate
15-18 June 2011
Firstpage
1
Lastpage
4
Abstract
In this paper we present a method that allows an agent to discover and create temporal abstractions autonomously. Our method is based on the idea that, to reach the goal, the agent must pass through certain relevant states, which we interpret as subgoals. To detect useful subgoals, our method intersects several successful paths leading to the goal. Our experiments focus on domains widely used in the study of temporal abstractions, namely several versions of the room-to-room navigation problem. In the problems tested, an agent learns more rapidly by automatically discovering subgoals and creating the corresponding abstractions.
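The abstract describes subgoal discovery as intersecting several successful paths to the goal. Below is a minimal sketch of that idea, assuming a grid-world setting with hashable states; the function name `discover_subgoals`, the occurrence threshold, and the exclusion of start/goal states are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: subgoal candidates are states shared by (nearly) all successful paths.
# This is an illustrative reconstruction, not the paper's exact algorithm.
from collections import Counter
from typing import Hashable, List, Set

State = Hashable


def discover_subgoals(trajectories: List[List[State]],
                      min_fraction: float = 0.9) -> Set[State]:
    """Return states that appear in at least `min_fraction` of the paths.

    trajectories: successful state sequences, each ending at the goal.
    """
    if not trajectories:
        return set()

    counts: Counter = Counter()
    for path in trajectories:
        counts.update(set(path))          # count each state once per path

    threshold = min_fraction * len(trajectories)
    starts = {path[0] for path in trajectories}
    goals = {path[-1] for path in trajectories}

    # States crossed by almost every path, excluding trivial start/goal states,
    # are treated as subgoal candidates (e.g. doorways between rooms).
    return {s for s, c in counts.items()
            if c >= threshold and s not in starts and s not in goals}
```

In a room-to-room navigation task, such candidates typically correspond to doorway states, which can then serve as termination conditions for temporally extended actions (options).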
Keywords
learning (artificial intelligence); mobile agents; autonomous agent; hierarchical reinforcement learning; learning subgoal; room-to-room navigation problem; state-abstraction; temporal abstraction; Navigation; Abstractions; Autonomous Agents; Machine Learning; Reinforcement Learning; Sub-goals
fLanguage
English
Publisher
ieee
Conference_Titel
2011 6th Iberian Conference on Information Systems and Technologies (CISTI)
Conference_Location
Chaves
Print_ISBN
978-1-4577-1487-0
Type
conf
Filename
5974351