DocumentCode :
2030446
Title :
Accelerated learning in multi-layer neural networks
Author :
Negnevitsky, Michael ; Ringrose, Martin
Author_Institution :
Sch. of Electr. Eng. & Comput. Sci., Tasmania Univ., Hobart, Tas., Australia
Volume :
3
fYear :
1999
fDate :
1999
Firstpage :
1167
Abstract :
The error backpropagation algorithm has traditionally been the most popular training method for multi-layer feedforward networks. However, it converges slowly to the error minimum, so several methods of accelerating backpropagation learning have been developed. These include hyperbolic tangent activation functions, momentum, adaptive learning rates, and fuzzy control of the learning parameters. This paper examines these methods.
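One of the acceleration methods the abstract lists, momentum, can be illustrated with a minimal sketch. The toy loss, learning rate, and momentum coefficient below are illustrative assumptions, not values from the paper; the sketch shows how accumulating a velocity term speeds descent along a low-curvature direction of the error surface, where plain gradient descent crawls.

```python
# Minimal sketch of gradient descent with momentum (the heavy-ball update),
# assuming a toy one-dimensional quadratic loss L(w) = 0.5 * 0.01 * w**2.
# Its small curvature mimics a shallow direction of a network's error surface.

def grad(w):
    # Gradient of the toy loss: dL/dw = 0.01 * w (low curvature).
    return 0.01 * w

def train(steps, lr=0.1, momentum=0.0, w0=5.0):
    """Run `steps` updates; momentum=0.0 reduces to plain gradient descent."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # velocity accumulates past gradients
        w = w + v                        # weight update with momentum term
    return w

plain = abs(train(500, momentum=0.0))  # plain backprop-style step: still far from 0
fast = abs(train(500, momentum=0.9))   # momentum: much closer to the minimum at 0
```

With these illustrative settings, the momentum run ends far closer to the minimum than the plain run, which is the acceleration effect the paper discusses.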
Keywords :
backpropagation; convergence; feedforward neural nets; fuzzy control; momentum; multilayer perceptrons; transfer functions; accelerated learning; adaptive learning rates; error backpropagation algorithm; error minimum; hyperbolic tangent activation functions; learning parameters; multilayer feedforward neural networks; Acceleration; Australia; Computer errors; Computer science; Feedforward systems; Intelligent networks; Multi-layer neural network; Neural networks; Neurons
fLanguage :
English
Publisher :
ieee
Conference_Titel :
6th International Conference on Neural Information Processing (ICONIP '99), 1999. Proceedings
Conference_Location :
Perth, WA
Print_ISBN :
0-7803-5871-6
Type :
conf
DOI :
10.1109/ICONIP.1999.844701
Filename :
844701