DocumentCode :
3417148
Title :
Learning rate schedules for faster stochastic gradient search
Author :
Darken, Christian ; Chang, Joseph ; Moody, John
Author_Institution :
Yale Univ., New Haven, CT, USA
fYear :
1992
fDate :
31 Aug-2 Sep 1992
Firstpage :
3
Lastpage :
12
Abstract :
The authors propose a new methodology for creating the first automatically adapting learning rates that achieve the optimal rate of convergence for stochastic gradient descent. Empirical tests agree with theoretical expectations that drift can be used to determine whether the crucial parameter c is large enough. Using this statistic, it will be possible to produce the first adaptive learning rates that converge at the optimal rate.
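The abstract does not give the schedule's functional form, so the following is only an illustrative sketch under stated assumptions: it runs SGD with an asymptotically decaying step size eta_t = c / t on a hypothetical one-dimensional least-squares problem and computes a simple drift-style diagnostic for whether c appears large enough. The objective, the constant c = 5.0, and the drift statistic used here are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

# Hypothetical quadratic objective: squared error of a 1-D linear model.
# All names and the schedule form below are illustrative assumptions, not
# formulas taken from the paper's abstract.
rng = np.random.default_rng(0)
w_true = 2.0

def noisy_gradient(w):
    # Stochastic gradient of 0.5 * ((w - w_true) * x)^2 for one random
    # sample x, plus a small noise term.
    x = rng.normal()
    return (w - w_true) * x * x + 0.1 * rng.normal()

def sgd_with_c_over_t(c=5.0, steps=10_000, w0=0.0):
    """Run SGD with the decaying schedule eta_t = c / t (t >= 1)."""
    w = w0
    grads = []
    for t in range(1, steps + 1):
        g = noisy_gradient(w)
        grads.append(g)
        w -= (c / t) * g
    # A simple drift-style diagnostic (an assumption, not the paper's exact
    # statistic): if c is too small, the parameter creeps toward the optimum,
    # so recent gradients share a sign and their running mean stays far from
    # zero relative to its standard error.
    g = np.array(grads[-1000:])
    drift = abs(g.mean()) / (g.std() / np.sqrt(len(g)) + 1e-12)
    return w, drift

w_hat, drift = sgd_with_c_over_t(c=5.0)
print(f"final w = {w_hat:.4f}, drift statistic = {drift:.2f}")
```

In this sketch, a too-small c leaves the diagnostic large because the iterate is still drifting toward the optimum; increasing c until the diagnostic settles near zero imitates the kind of test the abstract describes, without claiming to reproduce the authors' statistic.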
Keywords :
convergence; learning (artificial intelligence); search problems; statistics; automatically adapting learning rates; drift; learning rate schedules; optimal rate of convergence; statistic; stochastic gradient descent; stochastic gradient search; Backpropagation algorithms; Computer science; Convergence; Displays; Fluctuations; Least squares approximation; Processor scheduling; Random variables; Statistics; Stochastic processes;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE-SP Workshop
Conference_Location :
Helsingoer
Print_ISBN :
0-7803-0557-4
Type :
conf
DOI :
10.1109/NNSP.1992.253713
Filename :
253713