DocumentCode :
2980712
Title :
Unsupervised, smooth training of feed-forward neural networks for mismatch compensation
Author :
Surendran, A.C. ; Lee, Chin-Hui ; Rahim, Mazin
Author_Institution :
AT&T Bell Labs., Murray Hill, NJ, USA
fYear :
1997
fDate :
14-17 Dec 1997
Firstpage :
482
Lastpage :
489
Abstract :
We present a maximum likelihood technique for training feedforward neural networks. The proposed technique is completely unsupervised; hence it eliminates the need for target values for each input, so stereo databases are no longer required for learning nonlinear distortions under adverse conditions in speech recognition applications. We show that this technique is guaranteed to converge smoothly to a local maximum and provides a more meaningful metric for speech recognition applications than the traditional mean square error. We apply the technique to model compensation to reduce the mismatch between training and testing in speech recognition applications, and show that this data-driven technique can be used under a wide variety of conditions without prior knowledge of the mismatch.
Keywords :
feedforward neural nets; speech recognition; unsupervised learning; data driven technique; feedforward neural networks; local maxima; mean square error; mismatch compensation; model compensation; nonlinear distortions; speech recognition applications; unsupervised smooth training; Artificial neural networks; Convergence; Databases; Equations; Feedforward neural networks; Feedforward systems; Neural networks; Speech recognition; Testing; Training data;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings
Conference_Location :
Santa Barbara, CA
Print_ISBN :
0-7803-3698-4
Type :
conf
DOI :
10.1109/ASRU.1997.659127
Filename :
659127