DocumentCode :
2180277
Title :
Distributed training of large scale exponential language models
Author :
Sethy, Abhinav ; Chen, Stanley F. ; Ramabhadran, Bhuvana
Author_Institution :
IBM T.J. Watson Research Center, Yorktown Heights, NY, USA
fYear :
2011
fDate :
22-27 May 2011
Firstpage :
5520
Lastpage :
5523
Abstract :
Shrinkage-based exponential language models, such as the recently introduced Model M, have provided significant gains over a range of tasks [1]. Training such models requires a large amount of computational resources in terms of both time and memory. In this paper, we present a distributed training algorithm for such models based on the idea of cluster expansion [2]. Cluster expansion allows us to efficiently calculate the normalization and expectation terms required for Model M training by minimizing the computation needed between consecutive n-grams. We also show how the algorithm can be implemented in a distributed environment, greatly reducing both the memory required per process and the overall training time.
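To illustrate the cluster-expansion idea mentioned in the abstract, the following Python sketch shows how the normalizer of a simple exponential bigram model can be split into a history-independent base sum plus corrections for only the words that carry history-specific features, which is what lets consecutive n-gram histories share most of the computation. This is a toy illustration under assumed conventions, not the paper's algorithm or code; the vocabulary, feature weights, and function names are invented for the example.

import math

# Toy exponential bigram model: p(w | h) ∝ exp(lambda_w + lambda_{h,w}).
# Only observed bigrams carry a bigram feature, so most words in the
# vocabulary contribute only their unigram term to the normalizer Z(h).

vocab = ["the", "cat", "sat", "mat", "</s>"]        # assumed toy vocabulary
lam_uni = {w: 0.1 for w in vocab}                   # unigram feature weights
lam_bi = {"the": {"cat": 1.2}, "cat": {"sat": 0.8}} # bigram feature weights, indexed by history

# History-independent part: computed once and reused for every history.
base_terms = {w: math.exp(lam_uni[w]) for w in vocab}
base_sum = sum(base_terms.values())

def normalizer(h):
    """Z(h) via a cluster-expansion-style decomposition: the shared base
    sum plus per-word corrections for the few words w with an active
    bigram feature (h, w)."""
    z = base_sum
    for w, weight in lam_bi.get(h, {}).items():
        # Replace w's base contribution with its history-specific one.
        z += math.exp(lam_uni[w] + weight) - base_terms[w]
    return z

def prob(w, h):
    """p(w | h) for the toy exponential bigram model."""
    score = lam_uni[w] + lam_bi.get(h, {}).get(w, 0.0)
    return math.exp(score) / normalizer(h)

if __name__ == "__main__":
    print(prob("cat", "the"))   # boosted by the ("the", "cat") feature
    print(prob("mat", "the"))   # falls back to the unigram-only contribution

In a distributed setting of the kind the abstract describes, the base sum and the per-history corrections can be accumulated independently across workers, so each process only needs the feature weights relevant to its shard of n-grams.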
Keywords :
speech recognition; automatic speech recognition; cluster expansion; distributed training; large scale exponential language models; shrinkage-based exponential language models; Computational modeling; Entropy; History; Memory management; Predictive models; Training; Vocabulary; Language modeling; distributed training; exponential n-gram models
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on
Conference_Location :
Prague
ISSN :
1520-6149
Print_ISBN :
978-1-4577-0538-0
Electronic_ISBN :
1520-6149
Type :
conf
DOI :
10.1109/ICASSP.2011.5947609
Filename :
5947609