  • DocumentCode
    179585
  • Title
    A comparison of two optimization techniques for sequence discriminative training of deep neural networks
  • Author
    Saon, George; Soltau, Hagen

  • Author_Institution
    IBM T. J. Watson Res. Center, Yorktown Heights, NY, USA
  • fYear
    2014
  • fDate
    4-9 May 2014
  • Firstpage
    5567
  • Lastpage
    5571
  • Abstract
    We compare two optimization methods for lattice-based sequence discriminative training of neural network acoustic models: distributed Hessian-free (DHF) and stochastic gradient descent (SGD). Our findings on two different LVCSR tasks suggest that SGD running on a single GPU machine reaches its best accuracy 2.5 times faster than DHF running on multiple non-GPU machines; however, DHF training achieves a higher accuracy at the end of the optimization. In addition, we present an improved variant of the modified forward-backward algorithm for computing lattice-based expected loss functions and gradients, which results in a 34% speedup for SGD.
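    The abstract's closing claim concerns the forward-backward computation of lattice-based expected losses and gradients. For orientation only, below is a minimal Python sketch of the standard lattice forward-backward arc-posterior computation (not the authors' improved variant from the paper); the names logadd and lattice_arc_posteriors are hypothetical.

        import math

        def logadd(a, b):
            """Numerically stable log(exp(a) + exp(b))."""
            if a == -math.inf:
                return b
            if b == -math.inf:
                return a
            return max(a, b) + math.log1p(math.exp(-abs(a - b)))

        def lattice_arc_posteriors(num_nodes, arcs):
            """Posterior occupancy of each lattice arc.

            arcs: (src, dst, log_score) tuples, sorted by source node,
            over topologically ordered nodes; node 0 is the unique start
            and num_nodes - 1 the unique end.
            """
            alpha = [-math.inf] * num_nodes   # forward log-scores
            beta = [-math.inf] * num_nodes    # backward log-scores
            alpha[0] = 0.0
            for src, dst, w in arcs:                # forward pass
                alpha[dst] = logadd(alpha[dst], alpha[src] + w)
            beta[num_nodes - 1] = 0.0
            for src, dst, w in reversed(arcs):      # backward pass
                beta[src] = logadd(beta[src], w + beta[dst])
            log_z = alpha[num_nodes - 1]            # total lattice score
            return [math.exp(alpha[src] + w + beta[dst] - log_z)
                    for src, dst, w in arcs]

        # Toy lattice: two competing arcs 0->1 followed by a shared arc 1->2.
        arcs = [(0, 1, math.log(0.6)), (0, 1, math.log(0.4)), (1, 2, 0.0)]
        print(lattice_arc_posteriors(3, arcs))      # -> [0.6, 0.4, 1.0]

    In MMI- or sMBR-style sequence training, these arc posteriors weight the per-frame gradients of the network outputs; the 34% SGD speedup reported in the abstract comes from an improved recipe for this computation.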
  • Keywords
    gradient methods; graphics processing units; neural nets; stochastic processes; DHF; GPU machine; LVCSR; SGD; deep neural networks; distributed Hessian-free optimization; forward-backward algorithm; lattice-based sequence discriminative training; neural network acoustic models; optimization methods; sequence discriminative training; stochastic gradient descent; Acoustics; Hidden Markov models; Lattices; Neural networks; Optimization; Training
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • Conference_Location
    Florence, Italy
  • Type
    conf
  • DOI
    10.1109/ICASSP.2014.6854668
  • Filename
    6854668