Title :
State synchronous modeling of audio-visual information for bi-modal speech recognition
Author :
Nakamura, Satoshi ; Kumatani, Ken'ichi ; Tamura, Satoshi
Author_Institution :
ATR Spoken Language Translation Res. Labs., Japan
Abstract :
Demand has grown recently for automatic speech recognition (ASR) systems that operate robustly in acoustically noisy environments. This paper proposes a method for effectively integrating audio and visual information in audio-visual (bi-modal) ASR systems. Such integration inevitably requires modeling the synchronization of the audio and visual streams. To address the time lag and the correlation between the individual speech and lip-movement features, we introduce an integrated HMM model of audio-visual information based on HMM composition. The proposed model can represent state synchrony not only within a phoneme but also between phonemes. Evaluation experiments show that the proposed method improves recognition accuracy for noisy speech.
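To make the HMM-composition idea in the abstract concrete, the following is a minimal Python sketch (not the authors' implementation): each composed state pairs one audio-HMM state with one visual-HMM state, and a frame is scored by a stream-weighted sum of the two per-stream Gaussian log-likelihoods, so the audio and visual streams may occupy different states and thus lag each other. The state counts, feature dimensions, Gaussian parameters, and stream weights (w_a, w_v) are illustrative assumptions only.

# Minimal sketch of a composed (product) audio-visual HMM state space.
import numpy as np
from itertools import product
from scipy.stats import multivariate_normal

N_AUDIO, N_VISUAL = 3, 3      # states per unit in each single-stream HMM (assumed)
DIM_A, DIM_V = 4, 2           # toy feature dimensions (e.g. acoustic / lip features)

rng = np.random.default_rng(0)
# Illustrative Gaussian output distributions (mean, covariance) per stream state.
audio_states  = [(rng.normal(size=DIM_A), np.eye(DIM_A)) for _ in range(N_AUDIO)]
visual_states = [(rng.normal(size=DIM_V), np.eye(DIM_V)) for _ in range(N_VISUAL)]

def composed_loglik(o_a, o_v, i, j, w_a=0.7, w_v=0.3):
    """Stream-weighted score of composed state (i, j): audio state i, visual state j."""
    mu_a, cov_a = audio_states[i]
    mu_v, cov_v = visual_states[j]
    return (w_a * multivariate_normal.logpdf(o_a, mu_a, cov_a)
            + w_v * multivariate_normal.logpdf(o_v, mu_v, cov_v))

# Score one audio-visual frame against every composed state; allowing all (i, j)
# pairs is what lets the composed model absorb audio-visual time lag.
o_a, o_v = rng.normal(size=DIM_A), rng.normal(size=DIM_V)
scores = {(i, j): composed_loglik(o_a, o_v, i, j)
          for i, j in product(range(N_AUDIO), range(N_VISUAL))}
print(max(scores, key=scores.get), max(scores.values()))

In a full decoder these composed-state scores would feed a Viterbi search whose transition structure controls how far the two streams may drift apart, within and across phonemes.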
Keywords :
acoustic noise; hidden Markov models; image motion analysis; image recognition; speech recognition; synchronisation; video signal processing; ASR; automatic speech recognition; bi-modal speech recognition; integrated HMM modeling; integrated audio-visual information; state synchronous modeling; Audio databases; Automatic speech recognition; Degradation; Feature extraction; Hidden Markov models; Spatial databases; Speech recognition; Streaming media; Visual databases; Working environment noise;
Conference_Title :
2001 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU '01)
Print_ISBN :
0-7803-7343-X
DOI :
10.1109/ASRU.2001.1034671