Title :
A stochastic training model for perceptron algorithms
Author :
Shynk, John J. ; Bershad, Neil J.
Author_Institution :
Dept. of Electr. & Comput. Eng., California Univ., Santa Barbara, CA, USA
Abstract :
A stochastic training model that can be used to study the transient and steady-state convergence properties of perceptron learning algorithms is presented. It is based on a system identification formulation whereby the training signals are modeled as the output of a nonlinear system. The perceptron input signals are modeled as a Gaussian random vector so that closed-form expressions can be derived for the expectations of Gaussian variates. These expressions, in turn, can be solved to predict the trajectories and convergence points of the network connection weights. Although the model is quite general and can be applied to a variety of multilayer perceptron configurations, the authors focus on the single-layer perceptron and two of its learning algorithms.
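A minimal sketch, in Python, of the system-identification training setup the abstract describes: Gaussian input vectors are passed through a nonlinear reference system to generate the training signal, and a single-layer perceptron adapts its connection weights stochastically. The signum nonlinearity, the Rosenblatt-style update rule, and the names w_star, mu, and n_steps are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_steps, mu = 8, 5000, 0.01   # mu: step size (assumed value)

w_star = rng.standard_normal(n_inputs)  # unknown reference-system weights
w = np.zeros(n_inputs)                  # adaptive perceptron weights

for _ in range(n_steps):
    x = rng.standard_normal(n_inputs)   # Gaussian random input vector
    d = np.sign(w_star @ x)             # training signal: output of a
                                        # nonlinear (signum) system
    y = np.sign(w @ x)                  # perceptron output
    w += mu * (d - y) * x               # stochastic weight update
                                        # (assumed Rosenblatt-style rule)

# Steady-state check: the learned weight vector should align in
# direction with the reference weights w_star.
cos_sim = np.dot(w, w_star) / (np.linalg.norm(w) * np.linalg.norm(w_star))
print(f"cosine similarity with reference weights: {cos_sim:.4f}")
```

Averaging the weight trajectory over many such runs approximates the expectations that the paper's Gaussian model evaluates in closed form.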
Keywords :
convergence; identification; learning systems; neural nets; Gaussian random vector; closed-form expressions; multilayer perceptron configurations; network connection weights; nonlinear system; perceptron algorithms; single-layer perceptron; steady-state convergence properties; stochastic training model; system identification; transient convergence properties; Backpropagation algorithms; Convergence; Information processing; Multilayer perceptrons; Nonlinear systems; Signal processing; Steady-state; Stochastic processes; System identification; Vectors;
Conference_Title :
IJCNN-91-Seattle International Joint Conference on Neural Networks, 1991
Conference_Location :
Seattle, WA
Print_ISBN :
0-7803-0164-1
DOI :
10.1109/IJCNN.1991.155277