Title :
Maximum Entropy-Based Reinforcement Learning Using a Confidence Measure in Speech Recognition for Telephone Speech
Author :
Molina, Carlos ; Yoma, Nestor Becerra ; Huenupan, Fernando ; Garretón, Claudio ; Wuth, Jorge
Author_Institution :
Dept. of Electr. Eng., Univ. de Chile, Santiago, Chile
fDate :
7/1/2010
Abstract :
This paper proposes a novel confidence-based reinforcement learning (RL) scheme that corrects observation log-likelihoods and addresses the problem of unsupervised compensation with limited estimation data. A two-step Viterbi decoding is presented that uses a confidence score to estimate a correction factor for the observation log-likelihoods, making the recognized and neighboring HMMs more or less likely. If regions in the output delivered by the recognizer exhibit low confidence scores, the second Viterbi decoding tends to focus the search on neighboring models. In contrast, if recognized regions exhibit high confidence scores, the second Viterbi decoding tends to retain the recognition output obtained in the first step. The proposed RL mechanism is modeled as the linear combination of two metrics or information sources: the acoustic-model log-likelihood and the logarithm of a confidence metric. A criterion based on incremental conditional entropy maximization is also presented to optimize such a linear combination of metrics or information sources online. The method requires only one utterance, as short as 0.7 s, and can lead to significant reductions in word error rate (WER), between 3% and 18%, depending on the task, the training-testing conditions, and the method used to optimize the proposed RL scheme. In contrast to ordinary feature compensation and model-parameter adaptation methods, the confidence-based RL method operates in the frame log-likelihood domain. Consequently, as shown in the results presented here, it is complementary to feature compensation and to model adaptation techniques.
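Note :
The corrected frame score described in the abstract is a linear combination of the acoustic-model log-likelihood and the logarithm of a confidence metric, with the combination factor optimized online by incremental conditional entropy maximization. The short Python sketch below illustrates only that combination, under stated assumptions: the function name, the fixed weight of 0.8, and the toy values are hypothetical and are not taken from the paper, and the online weight optimization is not implemented here.

import math

def corrected_loglik(acoustic_loglik, confidence, weight=0.8):
    # Weighted linear combination of the two information sources named in
    # the abstract: the acoustic-model log-likelihood of a frame and the
    # logarithm of a confidence metric.  The fixed `weight` is a stand-in
    # for the combination factor that the paper optimizes online with
    # incremental conditional entropy maximization.
    eps = 1e-10  # guard against log(0) when the confidence is close to zero
    return weight * acoustic_loglik + (1.0 - weight) * math.log(confidence + eps)

# Toy comparison for the same frame acoustic log-likelihood:
low_conf  = corrected_loglik(-45.0, confidence=0.20)   # ~ -36.32
high_conf = corrected_loglik(-45.0, confidence=0.95)   # ~ -36.01
# A low first-pass confidence lowers the combined score, which is what lets
# the second Viterbi pass explore neighboring HMMs in low-confidence regions
# while leaving high-confidence regions essentially unchanged.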
Keywords :
Viterbi decoding; hidden Markov models; optimisation; speech recognition; unsupervised learning; WER; acoustic model log-likelihood; conditional entropy maximization; confidence metric logarithm; confidence-based RL method; confidence-based reinforcement learning; correction factor; feature compensation; information sources; limited estimation data; maximum entropy-based reinforcement learning; metrics linear combination; model adaptation techniques; model parameter adaptation methods; neighboring HMM; speech recognition; telephone speech; training-testing conditions; two-step Viterbi decoding; unsupervised compensation; word error rate; Confidence measure; incremental conditional entropy; reinforcement learning; robust automatic speech recognition; telephone speech;
Journal_Title :
Audio, Speech, and Language Processing, IEEE Transactions on
DOI :
10.1109/TASL.2009.2032618