DocumentCode :
1797952
Title :
WWN: Integration with coarse-to-fine, supervised and reinforcement learning
Author :
Zejia Zheng ; Juyang Weng ; Zhengyou Zhang
Author_Institution :
Michigan State Univ., East Lansing, MI, USA
fYear :
2014
fDate :
6-11 July 2014
Firstpage :
1517
Lastpage :
1524
Abstract :
The cost of autonomous development is substantial. Although supervised learning is effective, the demand it places on teachers is often too high for it to be applied constantly. Reinforcement learning can take advantage of physical reality through environmental feedback and inspection, and the information it requires is not as specific as that required in supervised learning. Theories, methods, and analyses for integrating these two learning strategies are still rare in the literature, although such integration is well known in the animal kingdom. Based on our prior work on a general-purpose framework called the Developmental Network (DN) and its embodiment, the Where-What Network (WWN), we present in this paper our theory, method, and analysis for the integration of supervised learning and reinforcement learning. Different from other known work on reinforcement learning, the DN framework uses fully emergent representations to avoid brittle, task-specific representations. Central to the integration is not only the freedom for the teacher to choose the mode of learning, which is necessary especially when the physical non-living world acts as an implicit teacher, but also the mechanism of scaffolding. In our experiment, scaffolding is reflected by allowing the location motor (LM) neurons to gradually refine their representation through splitting (mitosis) in a coarse-to-fine scheme. We report experimental work in a very challenging learning setting: both objects and backgrounds are unknown (cluttered scenes), and concepts (e.g., location and type) emerge from agent-environment interactions instead of being rigidly handcrafted.
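Illustration (not from the paper): the coarse-to-fine refinement of the LM area can be pictured with a minimal sketch. The code below is a hypothetical toy version, not the authors' implementation; it assumes LM neurons that each cover a square image region and undergo "mitosis" into four finer children after firing often enough. In the paper, refinement is driven by the agent's supervised and reinforcement signals rather than a fixed firing-count threshold used here for brevity.

    # Minimal sketch (assumption, not the authors' code) of coarse-to-fine
    # splitting of location motor (LM) neurons.  Each neuron covers a square
    # region of the image; after firing `split_threshold` times it splits
    # ("mitosis") into four children, one per quadrant of its region.
    from dataclasses import dataclass

    @dataclass
    class LMNeuron:
        x: float          # receptive-field center, image coordinates
        y: float
        half_size: float  # half the side length of the covered region
        fire_count: int = 0

    class LMLayer:
        def __init__(self, image_size=64, split_threshold=20, min_half_size=2):
            # Start with a single coarse neuron covering the whole image.
            self.neurons = [LMNeuron(image_size / 2, image_size / 2, image_size / 2)]
            self.split_threshold = split_threshold
            self.min_half_size = min_half_size

        def respond(self, target_x, target_y):
            """Fire the neuron whose center is closest to the attended
            location, and split it once it has matured."""
            winner = min(self.neurons,
                         key=lambda n: (n.x - target_x) ** 2 + (n.y - target_y) ** 2)
            winner.fire_count += 1
            if (winner.fire_count >= self.split_threshold
                    and winner.half_size > self.min_half_size):
                self._split(winner)
            return winner

        def _split(self, parent):
            """Mitosis: replace the parent with four finer children."""
            self.neurons.remove(parent)
            h = parent.half_size / 2
            for dx in (-h, h):
                for dy in (-h, h):
                    self.neurons.append(LMNeuron(parent.x + dx, parent.y + dy, h))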
Keywords :
learning (artificial intelligence); psychology; LM neurons; coarse-to-fine learning; developmental network; integration theories; learning mode; learning strategies; location motor neurons; reinforcement learning; supervised learning; where-what-network; Brain modeling; Computational modeling; Learning (artificial intelligence); Neurons; Supervised learning; Training;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Beijing
Print_ISBN :
978-1-4799-6627-1
Type :
conf
DOI :
10.1109/IJCNN.2014.6889701
Filename :
6889701