DocumentCode :
1738103
Title :
Inhibitory unlearning: a mechanism for increasing the storage capacity in an attractor network
Author :
Veredas, F. ; Vico, F.J. ; Roman, J.
Volume :
1
fYear :
2000
fDate :
2000
Firstpage :
177
Abstract :
Attractor networks with and without learning dynamics have been proposed as models for the formation of neural assemblies. In this work, we use an attractor recurrent network that builds internal representations of input stimuli as assemblies of neurons. The network learns in an ongoing, human-like fashion, integrating new information into what it already knows. This sequential learning process faces two fundamental problems: limited network storage capacity and catastrophic forgetting. As learning proceeds, network performance degrades: the network takes longer to learn new stimuli, new assemblies are smaller, and the capacity for retrieval decreases. To address this, we propose a mechanism based on the unlearning of inhibitory connections.
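The abstract does not specify the authors' inhibitory-unlearning rule, so the following is only a rough illustrative sketch of the general idea it builds on: a Hopfield-style attractor network whose capacity is eroded by spurious attractors, combined with the classic Hopfield-Feinstein-Palmer unlearning step (relax from random states and weaken whatever attractor is reached). All sizes, rates, and function names below are assumptions for illustration, not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                         # binary (+1/-1) neurons
patterns = rng.choice([-1, 1], size=(5, N))    # five random stimuli to store

# Hebbian storage: W_ij = (1/N) * sum_mu xi_i^mu * xi_j^mu, no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(W, state, steps=50):
    """Synchronous updates until a fixed point (an attractor) is reached."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

def unlearn(W, eps=0.01, trials=50):
    """Hopfield-Feinstein-Palmer style unlearning (an assumed stand-in for
    the paper's inhibitory rule): relax from random initial states and
    subtract a small Hebbian term for the attractor found, which
    preferentially erodes spurious minima and frees storage capacity."""
    for _ in range(trials):
        s = recall(W, rng.choice([-1, 1], size=N))
        W = W - eps * np.outer(s, s) / N
        np.fill_diagonal(W, 0)
    return W

W = unlearn(W)

# Stored patterns should still be recoverable from a corrupted cue:
cue = patterns[0].copy()
cue[:5] *= -1                                  # flip 5 of 64 bits
overlap = recall(W, cue) @ patterns[0] / N     # 1.0 means perfect recall
```

At this low load (5 patterns over 64 neurons) a small unlearning rate leaves the stored attractors intact while trimming spurious ones; the paper's point is that an analogous operation on inhibitory connections can raise effective capacity during sequential learning.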
Keywords :
content-addressable storage; learning (artificial intelligence); performance evaluation; recurrent neural nets; attractor network; catastrophic forgetting; inhibitory connections; inhibitory unlearning; input stimuli; internal representations; learning dynamics; network performance; network storage capacity; neural assembly formation; new information integration; ongoing learning; recuperation capacity; recurrent neural network; sequential learning process; Artificial neural networks; Assembly; Biological system modeling; Brain modeling; Electronic mail; Equations; Humans; Image storage; Intelligent networks; Neurons;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, 2000. Proceedings.
Conference_Location :
Brighton
Print_ISBN :
0-7803-6400-7
Type :
conf
DOI :
10.1109/KES.2000.885786
Filename :
885786