DocumentCode :
1817480
Title :
Feed forward networks and the Cramer-Rao bound
Author :
Schmidt, W.F. ; Duin, R.P.W.
Author_Institution :
Fac. of Appl. Phys., Delft Univ. of Technol., Netherlands
Volume :
1
fYear :
1992
fDate :
7-11 Jun 1992
Firstpage :
646
Abstract :
The weight space of a feedforward network is described by a probability density function whose maximum lies at the optimal set of weights. This density follows from a property of maximum likelihood estimators, and the covariance matrix of the distribution is the Cramer-Rao lower bound. For certain classes of problems, minimizing the mean squared error is equivalent to maximum likelihood estimation. For these problems the probability density function is closely related to the mean squared error criterion, so results derived from the density also hold for the mean squared error surface. An analysis of the probability density function therefore provides some theoretical understanding of the error surface and of the learning dynamics.
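The equivalence sketched in the abstract can be illustrated on the simplest "network", a linear model with Gaussian noise (a minimal sketch, not from the paper; the model, dimensions, and noise level below are assumptions for illustration). Here minimizing the mean squared error is the maximum likelihood estimate, and the covariance of the weight estimates attains the Cramer-Rao lower bound, the inverse of the Fisher information matrix:

```python
import numpy as np

# Illustrative sketch: for y = X @ w + Gaussian noise with variance sigma^2,
# the MSE minimizer (least squares) is the maximum likelihood estimator.
# The Fisher information is I(w) = X^T X / sigma^2, and its inverse is the
# Cramer-Rao lower bound on the covariance of any unbiased estimator of w.

rng = np.random.default_rng(0)
n, sigma = 200, 0.5
X = rng.normal(size=(n, 2))          # inputs (assumed for illustration)
w_true = np.array([1.0, -2.0])       # "optimal set of weights"

fisher = X.T @ X / sigma**2          # Fisher information matrix
crlb = np.linalg.inv(fisher)         # Cramer-Rao lower bound

# Monte Carlo check: the empirical covariance of the least-squares
# (= maximum likelihood) weight estimates should approach the bound.
trials = 2000
est = np.empty((trials, 2))
for t in range(trials):
    y = X @ w_true + sigma * rng.normal(size=n)
    est[t] = np.linalg.lstsq(X, y, rcond=None)[0]   # MSE minimizer
emp_cov = np.cov(est.T)
# emp_cov is close to crlb: the estimator attains the bound here
```

Because the least-squares estimator is linear in the Gaussian noise, its covariance equals the bound exactly in this setting; for the general feedforward networks discussed in the paper the bound is only attained asymptotically.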
Keywords :
feedforward neural nets; learning (artificial intelligence); probability; Cramer-Rao bound; covariance matrix; error surface; feedforward neural networks; learning dynamics; maximum likelihood estimators; mean squared error; mean squared error criterion; probability density function; weight space; Calibration; Covariance matrix; Curve fitting; Feeds; Maximum likelihood estimation; Parameter estimation; Pattern recognition; Physics; Probability density function; Space technology;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
International Joint Conference on Neural Networks (IJCNN), 1992
Conference_Location :
Baltimore, MD
Print_ISBN :
0-7803-0559-0
Type :
conf
DOI :
10.1109/IJCNN.1992.287114
Filename :
287114