DocumentCode :
1576784
Title :
Scalable reinforcement learning through hierarchical decompositions for weakly-coupled problems
Author :
Toutounji, Hazem ; Rothkopf, Constantin A. ; Triesch, Jochen
Author_Institution :
Frankfurt Inst. for Adv. Studies, Frankfurt am Main, Germany
Volume :
2
fYear :
2011
Firstpage :
1
Lastpage :
7
Abstract :
Reinforcement Learning, or Reward-Dependent Learning, has been very successful at describing how animals and humans adjust their actions to increase their gains and reduce their losses across a wide variety of tasks. Empirical studies have furthermore identified numerous neuronal correlates of the quantities required for such computations. In general, however, it is too expensive for the brain to encode actions and their outcomes with respect to all available dimensions describing the state of the world. This suggests the existence of learning algorithms capable of exploiting the independencies present in the world, thereby reducing the computational costs of representation and learning. A possible solution is to use separate learners for task dimensions with independent dynamics and rewards, but the condition of independence is usually too restrictive. Here, we propose a hierarchical reinforcement learning solution for the more general case in which the dynamics are not independent but weakly coupled, and we show how to assign credit to the different modules that jointly solve the task.
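The modular scheme described in the abstract, one learner per (nearly) independent task dimension with the global reward apportioned among the modules, can be illustrated with a minimal sketch. This is not the authors' algorithm: the names (`ModuleLearner`, `split_credit`) and the naive equal-split credit rule are illustrative assumptions; the paper's contribution is precisely a principled credit-assignment rule for weakly coupled modules.

```python
import random
from collections import defaultdict

class ModuleLearner:
    """Tabular Q-learner over a single task dimension (illustrative sketch)."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection within this module's dimension
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, credit, next_state):
        # Standard TD update, driven by this module's share of the reward
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = credit + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def split_credit(reward, n_modules):
    # Naive equal split (assumption): weakly coupled modules generally
    # require a more careful apportioning of the global reward.
    return [reward / n_modules] * n_modules
```

Each module thus only represents its own state dimension, so the joint state space never has to be enumerated; the open problem the paper addresses is how `split_credit` should behave when the dynamics of the dimensions are coupled.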
Keywords :
brain; cognition; neurophysiology; hierarchical decompositions; neurons; reward-dependent learning; scalable reinforcement learning; weakly-coupled problems;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2011 IEEE International Conference on Development and Learning (ICDL)
Conference_Location :
Frankfurt am Main
ISSN :
2161-9476
Print_ISBN :
978-1-61284-989-8
Type :
conf
DOI :
10.1109/DEVLRN.2011.6037351
Filename :
6037351