DocumentCode :
1749715
Title :
Classes for fast maximum entropy training
Author :
Goodman, Joshua
Author_Institution :
Microsoft Research, Redmond, WA, USA
Volume :
1
fYear :
2001
fDate :
2001
Firstpage :
561
Abstract :
Maximum entropy models are considered by many to be one of the most promising avenues of language modeling research. Unfortunately, long training times make maximum entropy research difficult. We present a speedup technique: we change the form of the model to use classes. Our speedup works by creating two maximum entropy models, the first of which predicts the class of each word, and the second of which predicts the word itself given its class. This factoring of the model leads to fewer nonzero indicator functions and faster normalization, achieving speedups of up to a factor of 35 over one of the best previous techniques. It also typically yields slightly lower perplexities. The same trick can be used to speed the training of other machine learning techniques, e.g., neural networks, applied to any problem with a large number of outputs, such as language modeling.
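The sketch below illustrates the factoring the abstract describes, not the paper's implementation: P(w | h) = P(class(w) | h) * P(w | class(w), h), so each normalization sums over roughly sqrt(|V|) outcomes instead of the full vocabulary. All names (VOCAB, word_class, history_features, the weight matrices) are hypothetical stand-ins for trained maximum entropy parameters and indicator features.

import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "a", "cat", "dog", "runs", "sleeps"]
# Hypothetical word -> class assignment (e.g. from a clustering step).
word_class = {"the": 0, "a": 0, "cat": 1, "dog": 1, "runs": 2, "sleeps": 2}
classes = sorted(set(word_class.values()))
class_words = {c: [w for w in VOCAB if word_class[w] == c] for c in classes}

N_FEATS = 8  # hypothetical number of history features

# Stand-ins for the trained lambda weights of the two maxent models:
# one over classes, and one per class over that class's words.
W_class = rng.normal(size=(len(classes), N_FEATS))
W_word = {c: rng.normal(size=(len(class_words[c]), N_FEATS)) for c in classes}

def history_features(history):
    """Hypothetical feature vector f(h); a real model would use binary
    indicator functions over the n-gram context."""
    g = np.random.default_rng(abs(hash(history)) % 2**32)
    return g.normal(size=N_FEATS)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def p_word(history, w):
    """P(w | h) = P(class(w) | h) * P(w | class(w), h).
    Each softmax normalizes over a small set (classes, or the words of
    one class) rather than all |V| words -- the source of the speedup."""
    f = history_features(history)
    c = word_class[w]
    p_c = softmax(W_class @ f)[classes.index(c)]
    p_w = softmax(W_word[c] @ f)[class_words[c].index(w)]
    return p_c * p_w

if __name__ == "__main__":
    h = ("the",)
    print(p_word(h, "cat"))
    # The factored probabilities still sum to 1 over the whole vocabulary.
    print(sum(p_word(h, w) for w in VOCAB))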
Keywords :
iterative methods; learning (artificial intelligence); maximum entropy methods; natural languages; probability; factoring; fast maximum entropy training; language modeling; normalization; perplexities; speedup technique; Context modeling; Decision trees; Entropy; Geographic Information Systems; Information resources; Iterative algorithms; Machine learning; Neural networks; Predictive models; Training data;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (ICASSP '01)
Conference_Location :
Salt Lake City, UT
ISSN :
1520-6149
Print_ISBN :
0-7803-7041-4
Type :
conf
DOI :
10.1109/ICASSP.2001.940893
Filename :
940893