DocumentCode :
2769291
Title :
Hierarchical large-margin Gaussian mixture models for phonetic classification
Author :
Chang, Hung-An; Glass, James R.
Author_Institution :
MIT, Cambridge
fYear :
2007
fDate :
9-13 Dec. 2007
Firstpage :
272
Lastpage :
277
Abstract :
In this paper we present a hierarchical large-margin Gaussian mixture modeling framework and evaluate it on the task of phonetic classification. A two-stage hierarchical classifier is trained by alternately updating parameters at different levels in the tree to maximize the joint margin of the overall classification. Since the loss function required in training is convex over the parameter space, the problem of spurious local minima is avoided. The model achieves good performance with fewer parameters than single-level classifiers. On the TIMIT benchmark task of context-independent phonetic classification, the proposed modeling scheme achieves a state-of-the-art classification error of 16.7% on the core test set. This is an absolute reduction of 1.6% from the best previously reported result on this task, and 4-5% lower than a variety of classifiers recently examined on this task.
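Margin_Criterion_Sketch :
To make the margin objective described in the abstract concrete, the following is a minimal sketch of a generic large-margin GMM hinge loss of the kind commonly used for this family of models; it is an assumed, simplified form, not the exact two-level hierarchical objective of the paper. Here x_n denotes a feature vector, y_n its phone label, and d_c(x_n) a Mahalanobis-style discriminant score for class c (all symbols are illustrative):

\[
\mathcal{L} \;=\; \sum_{n} \sum_{c \neq y_n} \bigl[\, 1 + d_{y_n}(x_n) - d_{c}(x_n) \,\bigr]_{+},
\qquad [z]_{+} = \max(z, 0).
\]

Because each score d_c(x_n) can be written as an affine function of a positive semidefinite parameter matrix, a hinge loss of this form is convex in those parameters, which is the property the abstract invokes when stating that spurious local minima are avoided.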
Keywords :
Gaussian processes; error statistics; signal classification; speech processing; hierarchical large-margin Gaussian mixture model; loss function; parameter space; two-stage hierarchical context-independent phonetic classification error; Artificial intelligence; Automatic speech recognition; Benchmark testing; Classification tree analysis; Computer science; Context modeling; Glass; Laboratories; Mutual information; Robustness; committee classifier; hierarchical classifier; large margin GMM; phonetic classification;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU)
Conference_Location :
Kyoto
Print_ISBN :
978-1-4244-1746-9
Electronic_ISBN :
978-1-4244-1746-9
Type :
conf
DOI :
10.1109/ASRU.2007.4430123
Filename :
4430123