DocumentCode
395151
Title
Computational experiences of a novel global algorithm for optimal learning in MLP-networks
Author
Di Fiore, Carmine; Fanelli, Stefano; Zellini, Paolo
Author_Institution
Dept. of Mathematics, Univ. of Rome, Italy
Volume
1
fYear
2002
fDate
18-22 Nov. 2002
Firstpage
317
Abstract
This paper presents numerical experiments with a new global "pseudo-backpropagation" algorithm for the optimal learning of feedforward neural networks. The proposed method is founded on a new concept, called "non-suspiciousness", which can be seen as a generalisation of convexity. The algorithm described in this work adopts several adaptive strategies in order to avoid entrapment in local minima; in many cases the global minimum of the error function can be successfully computed. The paper also provides a useful comparison between the proposed method and a well-known deterministic global optimisation algorithm from the literature.
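To illustrate the general problem the abstract refers to, below is a minimal sketch of training a small MLP with plain backpropagation and random restarts, keeping the run with the lowest final sum-of-squares error. This is not the authors' pseudo-backpropagation or "non-suspiciousness" method, whose details are not given in this record; the network size, learning rate, and all function names are illustrative assumptions.

# Minimal sketch (illustrative only, not the paper's algorithm): multi-start
# backpropagation on XOR, keeping the restart with the lowest final error.
# Shows the generic idea of escaping poor local minima of the error function
# by restarting training from different random initialisations.
import numpy as np

def train_once(X, y, hidden=3, lr=0.5, epochs=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    W1 = rng.normal(size=(X.shape[1], hidden))   # input-to-hidden weights
    b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1))            # hidden-to-output weights
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass with logistic activations
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        err = out - y
        # backward pass (standard backpropagation, sum-of-squares error)
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)
    return 0.5 * np.sum(err ** 2)  # final sum-of-squares error of this run

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    rng = np.random.default_rng(0)
    errors = [train_once(X, y, rng=rng) for _ in range(10)]
    print("best error over 10 restarts:", min(errors))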
Keywords
generalisation (artificial intelligence); learning (artificial intelligence); multilayer perceptrons; optimisation; convexity generalisation; error function; feedforward neural networks; multilayer perceptron; optimal learning; Computer networks; Electronic mail; Equations; Feedforward neural networks; Intelligent networks; Lyapunov method; Minimization methods; Neural networks; Optimization methods; Vectors;
fLanguage
English
Publisher
IEEE
Conference_Titel
Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), 2002
Print_ISBN
981-04-7524-1
Type
conf
DOI
10.1109/ICONIP.2002.1202185
Filename
1202185