DocumentCode :
2750056
Title :
Convergence mechanism of learning on backpropagation and new high speed method
Author :
Sakai, Akira ; Izumi, Hiroyuki ; Iijima, Nobukazu ; Yoshida, Masao ; Mitsui, Hideo ; Sone, Mototaka
Author_Institution :
Musashi Inst. of Technol., Tokyo, Japan
fYear :
1991
fDate :
8-14 Jul 1991
Abstract :
Summary form only given, as follows. A neural network consists of three layers (input, hidden, and output), and the sigmoid function is used as the I/O function connecting the layers. Because the number of training cycles depends on this function, the function must be designed carefully. To apply a new function, the convergence mechanism must be studied up to the point where learning finishes. When the input data of each layer are processed by the sigmoid function, the sensitivity differs according to the operating point. The high-sensitivity region is called the active region, and signals in the active region are corrected effectively toward the supervised signals. By managing and controlling the active region, a new I/O function for high-speed learning in a neural network with back-propagation can be obtained.
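The "active region" described in the abstract can be illustrated with a minimal sketch (not taken from the paper): the sigmoid's derivative, which governs the size of back-propagated corrections, peaks at the operating point x = 0 and decays in the saturated tails. The function names below are illustrative, not from the original work.

```python
import math

def sigmoid(x):
    """Standard logistic I/O function."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    """Sensitivity of the sigmoid at operating point x."""
    s = sigmoid(x)
    return s * (1.0 - s)

# Sensitivity is highest near x = 0 (the active region)
# and nearly vanishes in the saturated tails.
for x in (-5.0, 0.0, 5.0):
    print(f"x = {x:+.1f}  sensitivity = {sigmoid_deriv(x):.4f}")
```

Signals whose pre-activations fall in the active region receive large weight updates, which is why controlling this region can speed up convergence.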
Keywords :
learning systems; neural nets; I/O function; active region; back-propagation; hidden layer; input layer; learning convergence mechanism; output layer; sigmoid function; Backpropagation; Biological system modeling; Biology; Computer network management; Computer science; Convergence; Genetic algorithms; Learning systems; Neural networks; Neurons;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
IJCNN-91-Seattle International Joint Conference on Neural Networks, 1991
Conference_Location :
Seattle, WA
Print_ISBN :
0-7803-0164-1
Type :
conf
DOI :
10.1109/IJCNN.1991.155620
Filename :
155620