DocumentCode
3420317
Title
Fast DNN training based on auxiliary function technique
Author
Tran, Dung T.; Ono, Nobutaka; Vincent, Emmanuel
Author_Institution
Inria, Villers-lès-Nancy, France
fYear
2015
fDate
19-24 April 2015
Firstpage
2160
Lastpage
2164
Abstract
Deep neural networks (DNNs) are typically optimized with stochastic gradient descent (SGD) using a fixed learning rate or an adaptive learning rate scheme such as ADAGRAD. In this paper, we introduce a new learning rule for neural networks that is based on an auxiliary function technique and requires no parameter tuning. Instead of minimizing the objective function directly, a quadratic auxiliary function with a closed-form optimum is recursively introduced layer by layer. We prove that the objective decreases monotonically under the new learning rule. Our experiments show that the proposed algorithm converges faster and reaches a better local minimum than SGD. In addition, we propose a combination of the proposed learning rule with ADAGRAD that further accelerates convergence. Experimental evaluation on the MNIST database shows the benefit of the proposed approach in terms of digit recognition accuracy.
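Note: the monotonic-decrease claim in the abstract rests on the standard auxiliary-function (majorize-minimize) argument. The sketch below states that generic principle only; F denotes the objective, Q an auxiliary (majorizing) function, and theta the parameters being updated. The notation is illustrative and does not reproduce the paper's layer-wise quadratic construction.

% Generic majorize-minimize sketch (illustrative notation, not the paper's exact derivation)
Q(\theta \mid \theta^{(t)}) \ge F(\theta) \quad \forall \theta,
\qquad
Q(\theta^{(t)} \mid \theta^{(t)}) = F(\theta^{(t)}),

\theta^{(t+1)} = \arg\min_{\theta} Q(\theta \mid \theta^{(t)})
\;\Longrightarrow\;
F(\theta^{(t+1)}) \le Q(\theta^{(t+1)} \mid \theta^{(t)}) \le Q(\theta^{(t)} \mid \theta^{(t)}) = F(\theta^{(t)}).

When Q is a convex quadratic in theta, as in the abstract's construction, the arg min has a closed form, so each update needs no step size; this is what removes the learning-rate tuning required by SGD.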
Keywords
gradient methods; image sampling; learning (artificial intelligence); neural nets; stochastic processes; ADAGRAD; MNIST database; SGD; adaptive learning rate approach; convergence method; deep neural network; digit recognition accuracy; fast DNN training; fixed learning rate; image sampling; learning rule monotonic decrease; quadratic auxiliary function technique; stochastic gradient descent; Approximation algorithms; Approximation methods; Artificial neural networks; Optimization; Robustness; Switches; Training; DNN; adaptive learning rate; auxiliary function technique; back-propagation; gradient descent
fLanguage
English
Publisher
ieee
Conference_Titel
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location
South Brisbane, QLD, Australia
Type
conf
DOI
10.1109/ICASSP.2015.7178353
Filename
7178353
Link To Document