DocumentCode :
2697554
Title :
Learning by parallel forward propagation
Author :
Abe, Shigeo
fYear :
1990
fDate :
17-21 June 1990
Firstpage :
99
Abstract :
The back-propagation algorithm is widely used for learning the weights of multilayered neural networks. Its major drawbacks, however, are slow convergence and the lack of a proper way to set the number of hidden neurons. The author proposes a learning algorithm that solves these problems. The weights between two layers are calculated successively, with the other weights fixed, so that the error function, defined as the sum of squared differences between the training data and the network outputs, is minimised. Since the calculation reduces to solving a set of linearized equations, redundancy of the hidden neurons is judged by the singularity of the corresponding coefficient matrix. For the exclusive-OR and parity check circuits, excellent convergence characteristics are obtained, and the redundancy of the hidden neurons is checked by the singularity of the matrix.
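A minimal sketch of the idea described in the abstract, assuming a standard least-squares formulation: with the hidden-layer weights held fixed, the output-layer weights follow from a linear system, and the rank (or smallest singular value) of the coefficient matrix flags redundant hidden neurons. The function name, the sigmoid hidden layer, the random hidden weights, and the toy XOR data are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_output_weights(H, T):
    """One layer-wise step (hypothetical sketch): solve for the
    output-layer weights by linear least squares while the
    hidden-layer weights stay fixed."""
    W, _, rank, svals = np.linalg.lstsq(H, T, rcond=None)
    return W, rank, svals

# Toy XOR training data (illustrative, not the paper's exact setup).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Fixed hidden layer: random weights + sigmoid, plus a bias column.
rng = np.random.default_rng(0)
Wh = rng.normal(size=(2, 3))
H = 1.0 / (1.0 + np.exp(-X @ Wh))
H = np.hstack([H, np.ones((X.shape[0], 1))])

W, rank, svals = fit_output_weights(H, T)

# A (near-)singular coefficient matrix signals redundant hidden
# neurons: rank below the number of columns means some hidden
# units contribute nothing independent.
print("rank:", rank, "of", H.shape[1], "columns")
print("smallest singular value:", svals.min())
```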
Keywords :
learning systems; neural nets; parallel algorithms; parallel architectures; coefficient matrix; convergence characteristics; error function; exclusive-OR; hidden neurons; learning algorithm; linearized equations; multilayered neural networks; parallel forward propagation; parity check circuits; redundancy; singularity; training data; weights;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
1990 IJCNN International Joint Conference on Neural Networks
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/IJCNN.1990.137830
Filename :
5726788