DocumentCode :
672370
Title :
Accelerating Hessian-free optimization for Deep Neural Networks by implicit preconditioning and sampling
Author :
Sainath, Tara N. ; Horesh, Lior ; Kingsbury, Brian ; Aravkin, Aleksandr Y. ; Ramabhadran, Bhuvana
Author_Institution :
IBM T. J. Watson Res. Center, Yorktown Heights, NY, USA
fYear :
2013
fDate :
8-12 Dec. 2013
Firstpage :
303
Lastpage :
308
Abstract :
Hessian-free training has become a popular parallel second-order optimization technique for deep neural network training. This study aims to speed up Hessian-free training, both by decreasing the amount of data used for training and by reducing the number of Krylov subspace solver iterations used for implicit estimation of the Hessian. First, we develop an L-BFGS-based preconditioning scheme that avoids the need to access the Hessian explicitly. Since L-BFGS cannot be regarded as a fixed-point iteration, we further propose employing flexible Krylov subspace solvers, which retain the desired theoretical convergence guarantees of their conventional counterparts. Second, we propose a new sampling algorithm that geometrically increases the amount of data utilized for gradient and Krylov subspace iteration calculations. On a 50-hr English Broadcast News task, these methodologies provide roughly a 1.5× speedup, while on a 300-hr Switchboard task they provide over a 2.3× speedup, with no loss in WER. These results suggest that even greater speedups can be expected as problem scale and complexity grow.
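Code_Sketch :
The following is a minimal, self-contained Python/NumPy sketch of the two ideas the abstract describes: an L-BFGS preconditioner applied through the two-loop recursion without ever forming the curvature matrix, used inside a flexible (Polak-Ribière) preconditioned conjugate-gradient solver, together with a geometrically growing sampling schedule. The synthetic quadratic problem, all names, and all constants are illustrative assumptions for exposition, not the paper's implementation.

import numpy as np

# Synthetic SPD system standing in for the Gauss-Newton system G d = -grad
# that Hessian-free training solves implicitly; only matrix-vector products
# with the curvature matrix are ever formed.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
G = A @ A.T + n * np.eye(n)
b = rng.standard_normal(n)

def curvature_vec(v):
    # In real HF training this would be a Gauss-Newton-vector product
    # computed by forward/backward passes, never an explicit matrix.
    return G @ v

class LBFGSPreconditioner:
    """L-BFGS inverse-curvature approximation applied via the two-loop
    recursion. Here the (s, y) pairs are harvested from the solver's own
    iterates (an assumption of this sketch), so the Hessian is never
    accessed explicitly. Because the stored pairs change between
    applications, the preconditioner is not a fixed operator, which is
    why a *flexible* Krylov solver is required."""
    def __init__(self, memory=10):
        self.memory = memory
        self.pairs = []                      # (s, y, 1/y.s), oldest first

    def update(self, s, y):
        ys = float(y @ s)
        if ys > 1e-12:                       # keep curvature-positive pairs
            self.pairs.append((s, y, 1.0 / ys))
            if len(self.pairs) > self.memory:
                self.pairs.pop(0)

    def apply(self, g):
        q = g.copy()
        alphas = []
        for s, y, rho in reversed(self.pairs):
            a = rho * (s @ q)
            alphas.append(a)
            q -= a * y
        if self.pairs:
            s, y, _ = self.pairs[-1]
            q *= (s @ y) / (y @ y)           # standard initial scaling
        for (s, y, rho), a in zip(self.pairs, reversed(alphas)):
            q += (a - rho * (y @ q)) * s
        return q

def flexible_pcg(matvec, b, precond, max_iter=100, tol=1e-8):
    """Flexible preconditioned CG: the Polak-Ribiere form of beta keeps
    the search directions usable when the preconditioner varies."""
    x = np.zeros_like(b)
    r = b.copy()
    z = precond.apply(r)
    p = z.copy()
    for _ in range(max_iter):
        Ap = matvec(p)
        rz = r @ z
        alpha = rz / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        precond.update(alpha * p, alpha * Ap)   # harvest (s, y) on the fly
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x
        z_new = precond.apply(r_new)
        beta = (r_new @ (z_new - z)) / rz       # flexible (PR) beta
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

def geometric_sample_sizes(total, start=1000, growth=2.0):
    """Geometrically growing sample counts for gradient/curvature
    estimates, in the spirit of the paper's sampling scheme (constants
    here are illustrative, not taken from the paper)."""
    size = start
    while size < total:
        yield int(size)
        size *= growth
    yield total

d = flexible_pcg(curvature_vec, b, LBFGSPreconditioner(memory=10))
print("relative residual:", np.linalg.norm(G @ d - b) / np.linalg.norm(b))
print("sample schedule:  ", list(geometric_sample_sizes(50_000)))

Note that the Polak-Ribière form of beta reduces to the standard conjugate-gradient update when the preconditioner is held fixed, which is the sense in which the flexible solver retains its conventional counterpart's convergence guarantees.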
Keywords :
iterative methods; learning (artificial intelligence); neural nets; optimisation; speech recognition; 300-hr Switchboard task; 50-hr English Broadcast News task; Hessian-free optimization; Hessian-free training; Krylov subspace solver iterations; L-BFGS based preconditioning scheme; deep neural network training; fixed-point iteration; implicit Hessian estimation; implicit preconditioning; implicit sampling; parallel second order optimization technique; Approximation algorithms; Approximation methods; Equations; Mathematical model; Optimization; Training
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on
Conference_Location :
Olomouc, Czech Republic
Type :
conf
DOI :
10.1109/ASRU.2013.6707747
Filename :
6707747