DocumentCode
285227
Title
On the convergence of a block-gradient algorithm for back-propagation learning
Author
Paugam-Moisy, Hélène
Author_Institution
Lab. de l'Inf. du Parallélisme, Ecole Normale Supérieure de Lyon, France
Volume
3
fYear
1992
fDate
7-11 Jun 1992
Firstpage
919
Abstract
A block-gradient algorithm is defined in which the weight matrix is updated after every presentation of a block of b examples. The total and stochastic gradients are included in the block-gradient algorithm as particular values of b. Experimental laws on the speed of convergence, as a function of the block size, are stated. The first law indicates that an adaptive learning rate has to follow an exponentially decreasing function of the number of examples presented between two successive weight updates. The second law states that, with an adaptive learning-rate value, the number of epochs grows linearly with the size of the example blocks. The last law shows how the number of epochs required to reach a given level of performance depends on the learning rate.
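To illustrate the update scheme described in the abstract, the following is a minimal sketch of a block-gradient training epoch in Python/NumPy. The function name, the single tanh layer, and the mean-squared-error loss are illustrative assumptions, not details taken from the paper; only the block-update structure (one weight update per block of b examples, with b = 1 giving the stochastic gradient and b = n giving the total gradient) reflects the abstract.

    # Hypothetical sketch of one block-gradient epoch (not the paper's code).
    import numpy as np

    def block_gradient_epoch(W, X, T, b, learning_rate):
        """Update the weight matrix W after every block of b examples.

        b = 1 recovers the stochastic (per-example) gradient;
        b = len(X) recovers the total (full-batch) gradient.
        """
        n = len(X)
        for start in range(0, n, b):
            Xb, Tb = X[start:start + b], T[start:start + b]
            Y = np.tanh(Xb @ W)                  # forward pass on the block
            delta = (Y - Tb) * (1.0 - Y ** 2)    # error term through tanh
            grad = Xb.T @ delta / len(Xb)        # gradient averaged over the block
            W -= learning_rate * grad            # one weight update per block
        return W

    # Toy usage: 8 examples, 3 inputs, 2 outputs, block size b = 4
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))
    T = np.tanh(X @ rng.normal(size=(3, 2)))
    W = rng.normal(scale=0.1, size=(3, 2))
    W = block_gradient_epoch(W, X, T, b=4, learning_rate=0.1)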
Keywords
backpropagation; stochastic processes; adaptive learning; back-propagation learning; block gradient algorithm convergence; exponential decreasing function; performance; stochastic gradients; weight matrix; Computational modeling; Computer simulation; Concurrent computing; Convergence of numerical methods; Cost function; Error correction; Multi-layer neural network; Neural networks; Stochastic processes; Thumb;
fLanguage
English
Publisher
IEEE
Conference_Titel
International Joint Conference on Neural Networks (IJCNN), 1992
Conference_Location
Baltimore, MD
Print_ISBN
0-7803-0559-0
Type
conf
DOI
10.1109/IJCNN.1992.227082
Filename
227082