DocumentCode
3000306
Title
How to not get frustrated with neural networks
Author
Wilamowski, Bogdan M.
Author_Institution
AMNSTC, Auburn Univ., Auburn, AL, USA
fYear
2011
fDate
14-16 March 2011
Firstpage
5
Lastpage
11
Abstract
The presentation shows the major difficulties of designing neural networks. It turns out that the popular MLP (Multi-Layer Perceptron) networks in most cases produce far from satisfactory results. Also, the popular EBP (Error Back Propagation) algorithm is very slow and often incapable of training the best neural network architectures. The very powerful and fast LM (Levenberg-Marquardt) algorithm was unfortunately implemented only for MLP networks. Moreover, because it requires the inversion of a matrix whose size is proportional to the number of patterns, the LM algorithm can be used only for small problems. However, the major frustration with neural networks occurs when too large a network is trained with too few training patterns. Indeed, such networks, with an excessive number of neurons, can be trained to very small errors, but they respond very poorly to new patterns that were not used for training. Most frustrations with neural networks can be eliminated when smaller, more effective architectures are used and trained with the newly developed NBN (Neuron-by-Neuron) algorithm.
Keywords
learning (artificial intelligence); matrix inversion; multilayer perceptrons; EBP algorithm; LM algorithm; Levenberg-Marquardt algorithm; MLP network; NBN algorithm; error back propagation algorithm; multilayer perceptron; neural network architectures; neuron-by-neuron algorithm; Artificial neural networks; Computer architecture; FCC; Neurons; Software; Software algorithms; Training;
fLanguage
English
Publisher
ieee
Conference_Titel
Industrial Technology (ICIT), 2011 IEEE International Conference on
Conference_Location
Auburn, AL
ISSN
Pending
Print_ISBN
978-1-4244-9064-6
Type
conf
DOI
10.1109/ICIT.2011.5754336
Filename
5754336
Link To Document