DocumentCode :
2707723
Title :
A motor learning neural model based on Bayesian network and reinforcement learning
Author :
Hosoya, Haruo
Author_Institution :
Comput. Sci. Dept., Univ. of Tokyo, Tokyo, Japan
fYear :
2009
fDate :
14-19 June 2009
Firstpage :
1251
Lastpage :
1258
Abstract :
A number of models based on Bayesian networks have recently been proposed and shown to be biologically plausible enough to explain various phenomena in the visual cortex. The present work studies how far the same approach extends to motor learning, in particular in combination with reinforcement learning, with the aim of suggesting a possible cooperation mechanism between the cerebral cortex and the basal ganglia. The basis of our model is BESOM, a biologically grounded model of the cerebral cortex proposed by Ichisugi, which we extend with a reinforcement learning capability. We show how reinforcement learning can benefit from Bayesian network computations with unsupervised learning, in particular for the approximate representation of a large state-action space and the detection of a goal state. Through a simulation of a reaching task with a concrete BESOM network inspired by the anatomically known cortical hierarchy, we demonstrate our model's stable and robust motor learning.
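Note: the abstract describes coupling an unsupervised Bayesian-network state representation (cerebral cortex) with reinforcement learning (basal ganglia). The short Python sketch below is only a hypothetical illustration of that general idea, not the paper's BESOM implementation: an unsupervised vector-quantization codebook stands in for a Bayesian-network hidden node that compresses observations into a small discrete state space, and tabular Q-learning operates on that compressed state-action space. All names, sizes, and hyperparameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    N_LATENT = 16          # number of discrete latent "cortical" states (assumed)
    N_ACTIONS = 4          # size of the action set (assumed)
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration (assumed)

    # Unsupervised codebook: maps 8-dimensional observations to latent states.
    # In the paper this role is played by a BESOM Bayesian network; here it is
    # a simple nearest-prototype quantizer used purely for illustration.
    codebook = rng.normal(size=(N_LATENT, 8))

    def latent_state(obs):
        """Assign an observation to its nearest codebook vector (state abstraction)."""
        return int(np.argmin(np.linalg.norm(codebook - obs, axis=1)))

    # Q-table over the compressed state-action space (the "basal ganglia" part).
    Q = np.zeros((N_LATENT, N_ACTIONS))

    def q_update(obs, action, reward, next_obs):
        """One tabular Q-learning update on the abstracted states."""
        s, s_next = latent_state(obs), latent_state(next_obs)
        td_target = reward + GAMMA * Q[s_next].max()
        Q[s, action] += ALPHA * (td_target - Q[s, action])

    def select_action(obs):
        """Epsilon-greedy policy computed on the latent state."""
        if rng.random() < EPS:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(Q[latent_state(obs)]))

The point of the sketch is the division of labour the abstract suggests: the unsupervised model keeps the state space small enough for value learning to remain tractable, while the reinforcement-learning component only ever sees the compressed states.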
Keywords :
belief networks; unsupervised learning; Bayesian network; basal ganglia; cerebral cortex; motor learning neural model; reinforcement learning; unsupervised learning; Basal ganglia; Bayesian methods; Biological system modeling; Biology computing; Brain modeling; Cerebral cortex; Computational modeling; Computer networks; Solid modeling; Unsupervised learning;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2009 International Joint Conference on Neural Networks (IJCNN 2009)
Conference_Location :
Atlanta, GA
ISSN :
1098-7576
Print_ISBN :
978-1-4244-3548-7
Electronic_ISBN :
1098-7576
Type :
conf
DOI :
10.1109/IJCNN.2009.5178689
Filename :
5178689