DocumentCode :
3322958
Title :
Fast learning in artificial neural systems: multilayer perceptron training using optimal estimation
Author :
Shepanski, J.F.
Author_Institution :
TRW Inc., Redondo Beach, CA, USA
fYear :
1988
fDate :
24-27 July 1988
Firstpage :
465
Abstract :
The author reports on how information can be loaded into a multilayer perceptron using methods of optimal estimation theory. Initial results indicate that optimal estimate training (OET) is a supervised learning technique that is faster and more accurate than backward error propagation. Further, because optimal estimation is well-characterized mathematically, the information content loaded into a set of network interconnection weights is also well characterized. Starting with a multilayer network and a set of (input/desired output) correlation vectors, the data is expressed in matrix form. Training occurs as a simultaneous calculation in which an optimal set of interconnection weights is determined, under a least-squares criterion, by standard pseudoinverse matrix techniques. This technique has been applied previously to a single-layer network. The author has extended the pseudoinverse method to multiple-layer networks, and the results are significant. Initial results show that optimal estimation methods are promising techniques for loading information into perceptrons.
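The single-layer case the abstract builds on can be sketched with NumPy. This is an illustrative reconstruction, not the paper's implementation: given an input matrix X and a desired-output matrix Y, the least-squares-optimal weights of a linear layer follow in one shot from the Moore-Penrose pseudoinverse, W = pinv(X) @ Y, with no iterative error propagation. All array shapes and names here are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of pseudoinverse weight estimation for one linear
# layer (the previously known single-layer case the abstract mentions).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))    # 100 training inputs, 8 features each
W_true = rng.standard_normal((8, 3)) # hypothetical "ground truth" weights
Y = X @ W_true                       # desired outputs for those inputs

# One simultaneous calculation: W minimizes ||X W - Y|| in the
# least-squares sense via the Moore-Penrose pseudoinverse.
W = np.linalg.pinv(X) @ Y

residual = np.linalg.norm(X @ W - Y)
print(residual)  # near zero: the weights are recovered in one step
```

The paper's contribution is extending this idea past the single-layer case; handling the hidden layers' nonlinearities is precisely what the multilayer OET scheme addresses, and that extension is not shown here.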
Keywords :
artificial intelligence; estimation theory; learning systems; neural nets; optimisation; artificial neural systems; correlation vectors; interconnection weights; learning; least-squares; matrix methods; multilayer perceptron training; optimal estimate training; optimization methods;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
IEEE International Conference on Neural Networks, 1988
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/ICNN.1988.23880
Filename :
23880