DocumentCode :
1693498
Title :
Speaker adaptation of context dependent deep neural networks
Author :
Hank Liao
Author_Institution :
Google Inc., New York, NY, USA
fYear :
2013
Firstpage :
7947
Lastpage :
7951
Abstract :
There has been little work on examining how deep neural networks may be adapted to speakers for improved speech recognition accuracy. Past work has examined using a discriminatively trained affine transformation of the input features applied at the frame level, or re-training an entire shallow network for a specific speaker. This work explores how deep neural networks may be adapted to speakers by re-training the input layer, the output layer, or the entire network. We look at how L2 regularization with weight decay toward the speaker-independent model improves generalization. Other training factors are examined, including the role of momentum and stochastic mini-batch versus batch training. While improvements are significant for smaller networks, the largest networks show little gain from adaptation on a large vocabulary mobile speech recognition task.
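The abstract's central regularizer pulls the adapted weights back toward the speaker-independent (SI) model via an L2 penalty. A minimal sketch of that idea, assuming a single linear layer with squared-error loss (names and hyperparameters here are illustrative, not the paper's implementation):

```python
import numpy as np

def adapt_step(w, w_si, x, y, lr=0.1, weight_decay=0.01):
    """One gradient step on squared error plus an L2 penalty
    that decays the weights toward the SI model w_si."""
    pred = x @ w
    grad = x.T @ (pred - y) / len(y)      # data-fit gradient on speaker frames
    grad += weight_decay * (w - w_si)     # L2 pull toward speaker-independent weights
    return w - lr * grad

rng = np.random.default_rng(0)
w_si = rng.normal(size=3)                 # speaker-independent weights
x = rng.normal(size=(32, 3))              # a small batch of speaker adaptation frames
y = x @ (w_si + 0.5)                      # targets from a speaker-shifted model
w = w_si.copy()
for _ in range(200):
    w = adapt_step(w, w_si, x, y)
```

With `weight_decay > 0` the adapted weights land between the SI model and the unconstrained speaker fit, which is the generalization benefit the paper examines when adaptation data is limited.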
Keywords :
affine transforms; neural nets; speaker recognition; L2 regularization; batch training; context dependent deep neural networks; discriminatively trained affine transformation; frame level; generalization improvement; large vocabulary mobile speech recognition task; role momentum; speaker adaptation; speaker independent model; speech recognition accuracy improvement; stochastic mini-batch; weight decay; Acoustics; Adaptation models; Hidden Markov models; Neural networks; Speech; Speech recognition; Training; Deep neural networks; Large vocabulary continuous speech recognition; Multilayer perceptrons; Speaker adaptation;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Vancouver, BC
ISSN :
1520-6149
Type :
conf
DOI :
10.1109/ICASSP.2013.6639212
Filename :
6639212