Title :
Soft GPD for minimum classification error rate training
Author :
Shi, Bertram E. ; Yao, Kaisheng ; Cao, Zhigang
Author_Institution :
Dept. of Electr. & Electron. Eng., Hong Kong Univ. of Sci. & Technol., Kowloon, China
Abstract :
Minimum classification error (MCE) rate training is a discriminative training method that seeks to minimize an empirical estimate of the error probability over a training set. The segmental generalized probabilistic descent (GPD) algorithm for MCE uses the log likelihood of the best path as a discriminant function to estimate the error probability. This paper shows that by using a discriminant function similar to the auxiliary function used in EM, we can obtain a “soft” version of GPD in the sense that information about all possible paths is retained. Its complexity is similar to that of segmental GPD, and for certain parameter values the algorithm is equivalent to segmental GPD. By modifying the commonly used misclassification measure, we obtain an algorithm for embedded MCE training for continuous speech that does not require a separate N-best search to determine competing classes. Experimental results show an error rate reduction of 20% compared with maximum likelihood training.
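For context, the following is a minimal LaTeX sketch of the standard string-level MCE/GPD quantities the abstract builds on (following the usual Juang–Chou–Lee formulation), not the paper's soft-GPD discriminant itself; the paper replaces the best-path log likelihood g_j below with an EM-auxiliary-function-like quantity, and the symbols (g_j, d_c, \ell_c, \eta, \gamma, \epsilon_t, M) are illustrative notation assumed here rather than taken from the paper.
% Standard MCE/GPD quantities (illustrative notation)
\begin{align*}
  % Discriminant for class j: segmental GPD uses the best-path log likelihood
  g_j(X;\Lambda) &= \log \max_{q} p(X, q \mid \lambda_j),\\
  % Misclassification measure: correct class c against a softmax over the M-1 competitors
  d_c(X;\Lambda) &= -\,g_c(X;\Lambda)
      + \frac{1}{\eta}\log\!\Bigl[\frac{1}{M-1}\sum_{j \neq c} e^{\eta\, g_j(X;\Lambda)}\Bigr],\\
  % Smooth sigmoid loss approximating the 0/1 error count
  \ell_c(X;\Lambda) &= \frac{1}{1 + e^{-\gamma\, d_c(X;\Lambda)}},\\
  % GPD update: probabilistic descent on the smoothed empirical error, sample by sample
  \Lambda^{(t+1)} &= \Lambda^{(t)} - \epsilon_t\, \nabla_{\Lambda}\, \ell_c\bigl(X_t;\Lambda^{(t)}\bigr).
\end{align*}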
Keywords :
error statistics; minimisation; pattern classification; speech recognition; MCE rate training; best path; complexity; continuous speech; discriminant function; discriminative training; error probability; error rate reduction; log likelihood; minimum classification error rate training; misclassification measure; segmental generalized probabilistic descent algorithm; soft GPD; Error analysis; Error probability; Hidden Markov models; Loss measurement; Maximum likelihood estimation; Speech; Weight control;
Conference_Title :
Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '00)
Conference_Location :
Istanbul
Print_ISBN :
0-7803-6293-4
DOI :
10.1109/ICASSP.2000.861803