DocumentCode :
2896481
Title :
Neural networks, maximum mutual information training, and maximum likelihood training [speech recognition]
Author :
Niles, Les T. ; Silverman, Harvey F. ; Bush, Marcia A.
Author_Institution :
Div. of Eng., Brown Univ., Providence, RI, USA
fYear :
1990
fDate :
3-6 Apr 1990
Firstpage :
493
Abstract :
A Gaussian-model classifier trained by maximum mutual information estimation (MMIE) is compared to one trained by maximum-likelihood estimation (MLE) and to an artificial neural network (ANN) on several classification tasks. The similarity of MMIE and ANN results for uniformly distributed data confirms that the ANN outperforms the MLE classifier in some cases because of the ANN's use of an error-correcting training algorithm. When the probability model fits the data well, MLE is better than MMIE if the training data are limited, but the two are equal if there are enough data. When the model is a poor fit, MMIE is better than MLE. The training dynamics of MMIE and the ANN are shown to be similar under certain assumptions. MMIE appears more susceptible to overtraining and computational difficulties than the ANN. Overall, the ANN is the most robust of the classifiers.
Keywords :
estimation theory; learning systems; neural nets; probability; speech recognition; Gaussian-model classifier; error-correcting training algorithm; maximum likelihood training; maximum mutual information estimation; neural network; probability model; Artificial neural networks; Error analysis; Gaussian processes; Maximum likelihood estimation; Mutual information; Neural networks; Performance analysis; Robustness; Speech; Testing; Training data
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
1990 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-90)
Conference_Location :
Albuquerque, NM, USA
ISSN :
1520-6149
Type :
conf
DOI :
10.1109/ICASSP.1990.115757
Filename :
115757