DocumentCode :
314404
Title :
Self-regulation of model order in feedforward neural networks
Author :
Kothari, Ravi ; Agyepong, Kwabena
Author_Institution :
Dept. of Electr. & Comput. Eng., Cincinnati Univ., OH, USA
Volume :
3
fYear :
1997
fDate :
9-12 Jun 1997
Firstpage :
1966
Abstract :
Despite the presence of theoretical results, the application of feedforward neural networks is hampered by the lack of systematic procedures for determining the number of hidden neurons to use. The number of hidden layer neurons determines the order of the neural network model and consequently the generalization performance of the network. This paper puts into perspective the approaches used to address this problem and presents a new paradigm which uses dependent evolution of hidden layer neurons to self-regulate the model order. We show through simulations that despite an abundance of free parameters (i.e. starting with a larger-than-necessary network), the proposed paradigm allows for localization of specializing hidden layer neurons, with the unspecialized hidden layer neurons behaving similarly. These similarly behaving neurons reduce the model order and allow for the benefits of a smaller network. Hints on analytically understanding the behavior are also noted.
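Illustrative sketch (not the authors' algorithm): the Python/NumPy code below trains a deliberately oversized one-hidden-layer network by gradient descent and adds a hypothetical coupling penalty (strength `lam`, an assumption introduced here) that pulls neighbouring hidden units' input weights together, so that unspecialized units evolve dependently. After training, the pairwise correlation of hidden activations is inspected; blocks of highly correlated units indicate similarly behaving neurons and hence a reduced effective model order, in the sense described in the abstract.

# Minimal sketch under stated assumptions; the coupling term is hypothetical,
# not the paper's exact mechanism.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(3x) on [-1, 1]
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X)

n_hidden = 12          # deliberately larger than necessary
lr = 0.05              # learning rate
lam = 0.01             # hypothetical coupling strength between adjacent units

W1 = rng.normal(0.0, 0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

for epoch in range(5000):
    # Forward pass
    H = np.tanh(X @ W1 + b1)           # hidden activations, shape (N, n_hidden)
    out = H @ W2 + b2                  # network output, shape (N, 1)
    err = out - y

    # Backward pass (gradient of 0.5 * mean squared error)
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1.0 - H**2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    # Hypothetical coupling penalty: 0.5*lam*sum_j ||w_j - w_{j-1}||^2 (circular),
    # encouraging neighbouring hidden units to evolve dependently.
    diff = W1 - np.roll(W1, 1, axis=1)
    dW1 += lam * (diff - np.roll(diff, -1, axis=1))

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Correlation of hidden activations: near-1 blocks indicate similarly behaving
# (unspecialized) units and thus a smaller effective model order.
H = np.tanh(X @ W1 + b1)
corr = np.corrcoef(H.T)
print(np.round(corr, 2))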
Keywords :
backpropagation; covariance matrices; feedforward neural nets; generalisation (artificial intelligence); backpropagation; covariance matrix; evolution; feedforward neural networks; generalization; gradient descent algorithm; hidden layer neurons; learning; model order; self-regulation; Artificial neural networks; Biological neural networks; Computer networks; Feedforward neural networks; Feedforward systems; Intelligent networks; Laboratories; Neural networks; Neurons; Training data;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Neural Networks, 1997, International Conference on
Conference_Location :
Houston, TX
Print_ISBN :
0-7803-4122-8
Type :
conf
DOI :
10.1109/ICNN.1997.614200
Filename :
614200