DocumentCode :
2498680
Title :
Grounding subgoals in information transitions
Author :
Van Dijk, Sander G.; Polani, Daniel
Author_Institution :
Adaptive Syst. Res. Group, Univ. of Hertfordshire, Hatfield, UK
fYear :
2011
fDate :
11-15 April 2011
Firstpage :
105
Lastpage :
111
Abstract :
In reinforcement learning problems, the construction of subgoals has been identified as an important step to speed up learning and to enable skill transfer. For this purpose, subgoal states are typically extracted based on saliency properties of the MDP transition graph, most notably bottleneck states. Here we introduce an alternative approach: assuming a family of MDPs with multiple goals but a fixed transition graph, we define the relevant goal information as the amount of Shannon information that the agent needs to maintain about the current goal at a given state in order to select the appropriate action. We show that there are distinct transition states in the MDP at which new relevant goal information has to be taken into account for selecting the next action. We argue that these transition states can be interpreted as subgoals for the current task class, and we use them to automatically construct a hierarchical policy following the well-established Options model for hierarchical reinforcement learning.
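To make the notion of relevant goal information concrete, here is a minimal sketch (not taken from the paper; the T-maze layout, the uniform goal prior, and the hand-coded goal-conditioned policies are illustrative assumptions). It computes I(G; A | s), the Shannon information about the goal that is needed to choose the optimal action at state s, on a toy T-maze; states where this quantity jumps from zero mark the information transitions the paper proposes as subgoals.

import math
from collections import Counter

GOALS = ["left_arm", "right_arm"]

def optimal_action(state, goal):
    """Hand-coded optimal policy for a toy T-maze: two corridor
    cells lead to a junction, from which each arm holds one goal."""
    if state in ("corridor_0", "corridor_1"):
        return "forward"                  # same action for every goal
    if state == "junction":
        return "left" if goal == "left_arm" else "right"
    return "stay"                         # already at a goal cell

def goal_information(state):
    """I(G; A | S=state) in bits, with G uniform over GOALS.
    The policies are deterministic given the goal, so
    H(A | G, s) = 0 and the mutual information equals H(A | s)."""
    counts = Counter(optimal_action(state, g) for g in GOALS)
    return -sum((c / len(GOALS)) * math.log2(c / len(GOALS))
                for c in counts.values())

for s in ["corridor_0", "corridor_1", "junction"]:
    print(f"{s}: {goal_information(s):.2f} bits")
# corridor_0: 0.00 bits, corridor_1: 0.00 bits, junction: 1.00 bits

In this sketch the corridor requires no goal information, while the junction requires one full bit; the junction is thus an information-transition state and a natural subgoal, illustrating how the information-theoretic criterion can recover the kind of states that bottleneck-based methods target.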
Keywords :
information theory; learning (artificial intelligence); MDP transition graph; Shannon information; goal information; grounding subgoals; hierarchical reinforcement learning; information transitions; reinforcement learning problems; skill transfer; Entropy; History; Learning; Mutual information; Probability distribution; Uncertainty
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL)
Conference_Location :
Paris, France
Print_ISBN :
978-1-4244-9887-1
Type :
conf
DOI :
10.1109/ADPRL.2011.5967384
Filename :
5967384