DocumentCode :
2694038
Title :
Back-propagation heuristics: a study of the extended delta-bar-delta algorithm
Author :
Minai, Ali A. ; Williams, Ronald D.
fYear :
1990
fDate :
17-21 June 1990
Firstpage :
595
Abstract :
This paper investigates an extension, proposed by A.A. Minai and R.D. Williams (Proc. Int. Joint Conf. on Neural Networks, vol.1, p.676-79, Washington, DC, 1990), to an algorithm for training neural networks in real-valued, continuous approximation domains. Specifically, the most effective aspects of the proposed extension are isolated. It is found that, while momentum is particularly useful for the delta-bar-delta algorithm, it cannot be used conveniently because of sensitivity considerations. It is also demonstrated that, by using more subtle versions of the algorithm, the advantages of momentum can be retained without any significant drawbacks.
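The abstract refers to the delta-bar-delta scheme, which adapts a separate learning rate for each weight based on the sign agreement between the current gradient and an exponential average of past gradients. The following is a minimal sketch of one such update, in the spirit of Jacobs' original delta-bar-delta rule; the function name, vectorized form, and the constants kappa, phi, and theta are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, delta_bar,
                         kappa=0.01, phi=0.1, theta=0.7):
    """One delta-bar-delta style update (sketch, not the paper's exact rule).

    w         -- weight vector
    grad      -- current gradient dE/dw
    lr        -- per-weight learning rates
    delta_bar -- exponential average of past gradients
    kappa, phi, theta -- illustrative additive-increase, multiplicative-
                         decrease, and averaging constants (assumptions)
    """
    agree = delta_bar * grad > 0          # gradient agrees with its history
    disagree = delta_bar * grad < 0
    lr = lr + kappa * agree               # additive increase where signs agree
    lr = lr * np.where(disagree, 1.0 - phi, 1.0)   # multiplicative decrease
    w = w - lr * grad                     # gradient step, per-weight rates
    delta_bar = (1.0 - theta) * grad + theta * delta_bar
    return w, lr, delta_bar
```

The extension studied here adds momentum to this rule; the finding reported in the abstract is that momentum helps but makes the method sensitive to its settings unless more subtle variants are used.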
Keywords :
learning systems; neural nets; backpropagation heuristics; continuous approximation domains; delta-bar-delta algorithm; momentum; neural networks; sensitivity; supervised learning; training;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
1990 IJCNN International Joint Conference on Neural Networks
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/IJCNN.1990.137634
Filename :
5726594