Title :
Predictor–Corrector Adaptation by Using Time Evolution System With Macroscopic Time Scale
Author :
Watanabe, Shinji ; Nakamura, Atsushi
Author_Institution :
NTT Commun. Sci. Labs., NTT Corp., Seika, Japan
Abstract :
Incremental adaptation techniques for speech recognition aim to adjust acoustic models to time-variant acoustic characteristics arising from factors such as changes of speaker, speaking style, and noise source over time. In this paper, we propose a novel incremental adaptation framework that models such time-variant characteristics by successively updating posterior distributions of acoustic model parameters on a macroscopic time scale (e.g., after every set of more than a dozen utterances). The proposed incremental update realizes a predictor-corrector algorithm based on a macroscopic time evolution system, in accordance with Kalman filter theory. We also provide a unified interpretation of the proposed framework and the two major conventional approaches: indirect adaptation via transformation parameters [e.g., maximum-likelihood linear regression (MLLR)] and direct adaptation of classifier parameters [e.g., maximum a posteriori (MAP) adaptation]. We show analytically and experimentally that the proposed incremental adaptation realizes the predictor-corrector algorithm and subsumes both conventional approaches as well as their combination. Consequently, the proposed framework achieves robust recognition performance through incremental adaptation that balances quickness and stability.
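The abstract does not reproduce the update equations, but the predict/correct structure it describes follows the standard Kalman filter recursion. The sketch below (Python/NumPy) illustrates that generic recursion, with each update corresponding to one macroscopic time step (a batch of utterances). The linear-Gaussian state-space model, the function names, and the toy dimensions are illustrative assumptions, not the authors' actual formulation for acoustic-model posteriors.

```python
import numpy as np

# Minimal sketch of a Kalman-style predictor-corrector recursion.
# Illustrative only: the linear-Gaussian model below is an assumption,
# not the paper's formulation for acoustic-model parameter posteriors.

def predict(mean, cov, F, Q):
    """Prediction step: propagate the posterior through the time
    evolution system (state transition F, process noise Q)."""
    mean_pred = F @ mean
    cov_pred = F @ cov @ F.T + Q
    return mean_pred, cov_pred

def correct(mean_pred, cov_pred, obs, H, R):
    """Correction step: update the predicted posterior with the statistics
    observed in the current macroscopic time step (observation model H, R)."""
    S = H @ cov_pred @ H.T + R                    # innovation covariance
    K = cov_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    mean_new = mean_pred + K @ (obs - H @ mean_pred)
    cov_new = (np.eye(len(mean_pred)) - K @ H) @ cov_pred
    return mean_new, cov_new

# Hypothetical usage: one predict/correct cycle per batch of utterances
# (one "macroscopic time step").
dim = 4
mean, cov = np.zeros(dim), np.eye(dim)
F, Q = np.eye(dim), 0.01 * np.eye(dim)            # time evolution system
H, R = np.eye(dim), 0.1 * np.eye(dim)             # observation model

for batch_stats in [np.random.randn(dim) for _ in range(3)]:
    mean, cov = predict(mean, cov, F, Q)
    mean, cov = correct(mean, cov, batch_stats, H, R)
```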
Keywords :
Kalman filters; maximum likelihood estimation; predictor-corrector methods; regression analysis; speech recognition; Kalman filter theory; acoustic model parameters; macroscopic time scale; maximum a posteriori; maximum-likelihood linear regression; posterior distributions; predictor-corrector adaptation; speaking style; time evolution system; time-variant acoustic characteristics; acoustic model; incremental adaptation; macroscopic time evolution; predictor–corrector algorithm
Journal_Title :
IEEE Transactions on Audio, Speech, and Language Processing
DOI :
10.1109/TASL.2009.2029717