DocumentCode :
1798074
Title :
Causality traces for retrospective learning in neural networks — Introduction of parallel and subjective time scales
Author :
Shibata, Kenji
Author_Institution :
Dept. of Electr. & Electron. Eng., Oita Univ., Oita, Japan
fYear :
2014
fDate :
6-11 July 2014
Firstpage :
2268
Lastpage :
2275
Abstract :
We live in the flow of time, and the sensor signals we receive are not only vast in space but also arrive continuously in time. As a general method for effective retrospective learning in neural networks (NNs) in such a world, based on the concept of "subjective time", this paper introduces the "causality trace". A trace is assigned to each connection of each neuron. It takes in the corresponding input signal according to the temporal change in the neuron's output, and is held when the output does not change. This enables the network to memorize only important past events, hold them in local memory, and learn past processes effectively from present reinforcement or training signals without tracing back into the past. The past events that the traces represent differ among neurons, so learning promotes an autonomous division of roles along the time axis among neurons. From the viewpoint of time passage, the NN learns on parallel, non-uniform, and subjective time scales. Causality traces can be applied to value learning with an NN, and also to supervised learning of recurrent neural networks, although the way of application differs slightly. A new simulation result in a value-learning task shows the outstanding learning ability of causality traces and the autonomous division of roles along the time axis among neurons through learning. Finally, several useful properties and concerns are discussed.
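The trace mechanism described above (each connection's trace absorbs its input in proportion to the change in the neuron's output, and is held when the output does not change) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact update rule; the function name, the use of the absolute output change as a mixing rate, and the clipping to [0, 1] are all assumptions for illustration.

```python
import numpy as np

def update_traces(traces, inputs, prev_out, new_out):
    """Hypothetical causality-trace update for one neuron's connections.

    Each trace moves toward its corresponding input signal by an amount
    proportional to the magnitude of the neuron's output change; when the
    output does not change, the traces are held unchanged.
    """
    delta = min(abs(new_out - prev_out), 1.0)  # mixing rate in [0, 1] (assumption)
    # Blend each trace toward its current input by the output-change magnitude.
    return (1.0 - delta) * traces + delta * inputs

# Example: output unchanged -> traces held; output changes fully -> traces
# take in the current inputs.
traces = np.array([0.5, -0.2])
inputs = np.array([1.0, 0.0])
held = update_traces(traces, inputs, prev_out=0.3, new_out=0.3)   # unchanged
taken = update_traces(traces, inputs, prev_out=0.0, new_out=1.0)  # equals inputs
```

Because each neuron's output changes at different moments, the traces of different neurons come to represent different past events, which is the basis for the autonomous division of roles along the time axis mentioned in the abstract.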
Keywords :
causality; learning (artificial intelligence); neural nets; NN value learning; causality traces; neural networks; parallel time scales; recurrent neural networks; retrospective learning; subjective time scales; supervised learning; Artificial neural networks; Biological neural networks; Neurons; Recurrent neural networks; Supervised learning; Training;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Neural Networks (IJCNN), 2014 International Joint Conference on
Conference_Location :
Beijing
Print_ISBN :
978-1-4799-6627-1
Type :
conf
DOI :
10.1109/IJCNN.2014.6889764
Filename :
6889764