DocumentCode
2175147
Title
Exploiting sparseness of backing-off language models for efficient look-ahead in LVCSR
Author
Nolden, David; Ney, Hermann; Schlüter, Ralf
Author_Institution
Human Language Technology & Pattern Recognition Group, RWTH Aachen University, Aachen, Germany
fYear
2011
fDate
22-27 May 2011
Firstpage
4684
Lastpage
4687
Abstract
In this paper, we propose a new method for computing and applying language model look-ahead in a dynamic network decoder, exploiting the sparseness of backing-off n-gram language models. Only partial (sparse) look-ahead tables are computed, whose size depends on the number of words that have an explicit n-gram score in the language model for a specific context, rather than being constant and vocabulary-dependent. Since high-order backing-off language models are inherently sparse, this mechanism reduces the runtime and memory cost of computing the look-ahead tables by orders of magnitude. A modified decoding algorithm is required to apply these sparse LM look-ahead tables efficiently. We show that sparse LM look-ahead is much more efficient than the classical method, and that full n-gram look-ahead becomes favorable over lower-order look-ahead even when many distinct LM contexts appear during decoding.
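To illustrate the idea in the abstract, here is a minimal Python sketch of a sparse look-ahead table combined with the backing-off recursion. All data structures and names (ngram_scores, backoff, tree_nodes, SparseLMLookAhead) are hypothetical assumptions for illustration, not the paper's actual implementation; the backed-off branch uses the lower-order table as an upper bound rather than restricting it to backed-off words, which is a simplification.

```python
import math

class SparseLMLookAhead:
    """Sketch of sparse LM look-ahead over a pronunciation prefix tree.

    Assumed (hypothetical) inputs:
      ngram_scores[context][word] -> log P(word | context), explicit n-grams only
      backoff[context]            -> log backoff weight of `context`
      tree_nodes[word]            -> prefix-tree nodes on the path to `word`
    Contexts are word tuples; context[1:] is the shortened back-off context.
    """

    def __init__(self, ngram_scores, backoff, tree_nodes):
        self.ngram_scores = ngram_scores
        self.backoff = backoff
        self.tree_nodes = tree_nodes
        self.tables = {}  # context -> sparse {node: best explicit log score}

    def _sparse_table(self, context):
        # Built lazily; its size is bounded by the number of explicit
        # n-grams for `context`, not by the vocabulary size.
        if context not in self.tables:
            table = {}
            for word, score in self.ngram_scores.get(context, {}).items():
                for node in self.tree_nodes[word]:
                    if score > table.get(node, -math.inf):
                        table[node] = score
            self.tables[context] = table
        return self.tables[context]

    def score(self, node, context):
        if not context:
            # Unigram table: effectively dense, computed once.
            return self._sparse_table(()).get(node, -math.inf)
        # Best explicit score in the node's subtree, or back off to the
        # lower-order table plus the backoff weight (an upper bound).
        explicit = self._sparse_table(context).get(node, -math.inf)
        backed_off = self.backoff.get(context, 0.0) + self.score(node, context[1:])
        return max(explicit, backed_off)
```

All scores are in log space, so the backoff weight is added rather than multiplied. A lookup touches at most n tables (one per back-off level), while table construction for a new context iterates only over that context's explicit n-grams, matching the vocabulary-independent cost described in the abstract.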
Keywords
speech coding; speech recognition; LM look-ahead tables; LVCSR; high order backing-off language models; modified decoding algorithm; n-gram look-ahead; partial look-ahead tables; Acoustics; Computational modeling; Context; Decoding; Hidden Markov models; History; Vocabulary; decoding; language model; look-ahead; recognition; search; speech
fLanguage
English
Publisher
ieee
Conference_Titel
2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location
Prague, Czech Republic
ISSN
1520-6149
Print_ISBN
978-1-4577-0538-0
Type
conf
DOI
10.1109/ICASSP.2011.5947400
Filename
5947400