DocumentCode :
1155859
Title :
Adaptive Multimodal Fusion by Uncertainty Compensation With Application to Audiovisual Speech Recognition
Author :
Papandreou, George ; Katsamanis, Athanassios ; Pitsikalis, Vassilis ; Maragos, Petros
Author_Institution :
School of Electrical & Computer Engineering, National Technical University of Athens, Athens
Volume :
17
Issue :
3
fYear :
2009
fDate :
1 March 2009
Firstpage :
423
Lastpage :
435
Abstract :
While the accuracy of feature measurements heavily depends on changing environmental conditions, studying the consequences of this fact in pattern recognition tasks has received relatively little attention to date. In this paper, we explicitly take feature measurement uncertainty into account and show how multimodal classification and learning rules should be adjusted to compensate for its effects. Our approach is particularly fruitful in multimodal fusion scenarios, such as audiovisual speech recognition, where multiple streams of complementary time-evolving features are integrated. For such applications, provided that the measurement noise uncertainty for each feature stream can be estimated, the proposed framework leads to highly adaptive multimodal fusion rules which are easy and efficient to implement. Our technique is widely applicable and can be transparently integrated with either synchronous or asynchronous multimodal sequence integration architectures. We further show that multimodal fusion methods relying on stream weights can naturally emerge from our scheme under certain assumptions; this connection provides valuable insights into the adaptivity properties of our multimodal uncertainty compensation approach. We show how these ideas can be practically applied for audiovisual speech recognition. In this context, we propose improved techniques for person-independent visual feature extraction and uncertainty estimation with active appearance models, and also discuss how enhanced audio features along with their uncertainty estimates can be effectively computed. We demonstrate the efficacy of our approach in audiovisual speech recognition experiments on the CUAVE database using either synchronous or asynchronous multimodal integration models.
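Illustrative sketch (not the authors' code): the abstract's central idea, compensating class likelihoods for the estimated measurement-noise variance of each feature stream so that unreliable streams are automatically down-weighted in fusion, can be pictured for diagonal-covariance Gaussian models as follows. All function and variable names are hypothetical, and the toy data are for demonstration only.

# Minimal sketch of uncertainty-compensated Gaussian scoring for multimodal
# fusion: the estimated measurement-noise variance of each observed stream is
# added to the model variance before evaluating the likelihood, so noisier
# streams yield flatter, less decisive scores.
import numpy as np

def log_gauss_diag(x, mean, var):
    """Log-likelihood of x under a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def compensated_log_likelihood(obs, obs_noise_var, mean, model_var):
    """Uncertainty compensation: inflate the model variance by the
    estimated measurement-noise variance of the observation."""
    return log_gauss_diag(obs, mean, model_var + obs_noise_var)

def fuse_streams(streams, class_models):
    """Combine per-stream compensated scores for each class.

    streams: list of (obs, obs_noise_var) pairs, one per modality
             (e.g., audio and visual features with their uncertainties).
    class_models: dict mapping class label -> list of (mean, var) per stream.
    Returns the label with the highest summed compensated log-likelihood.
    """
    scores = {}
    for label, models in class_models.items():
        scores[label] = sum(
            compensated_log_likelihood(obs, noise_var, mean, var)
            for (obs, noise_var), (mean, var) in zip(streams, models)
        )
    return max(scores, key=scores.get)

# Toy usage: two streams, two classes; the highly uncertain video stream is
# effectively down-weighted, so the confident audio stream decides the outcome.
audio = (np.array([0.9, 1.1]), np.array([0.05, 0.05]))   # low uncertainty
video = (np.array([0.2, -0.3]), np.array([2.0, 2.0]))    # high uncertainty
models = {
    "A": [(np.array([1.0, 1.0]), np.array([0.1, 0.1])),
          (np.array([1.0, 1.0]), np.array([0.1, 0.1]))],
    "B": [(np.array([-1.0, -1.0]), np.array([0.1, 0.1])),
          (np.array([0.2, -0.3]), np.array([0.1, 0.1]))],
}
print(fuse_streams([audio, video], models))   # prints "A"

In this setting no explicit stream weights are tuned; the adaptivity comes entirely from the per-stream noise variance estimates, which is the connection to stream-weight fusion that the abstract alludes to.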
Keywords :
feature extraction; sensor fusion; speech recognition; CUAVE database; adaptive multimodal fusion; audiovisual speech recognition; environmental conditions; feature measurements; learning rules; measurement noise uncertainty; multimodal sequence integration architectures; multimodal uncertainty compensation approach; pattern recognition; person-independent visual feature extraction; uncertainty compensation; Automatic speech recognition; Feature extraction; Measurement uncertainty; Noise measurement; Noise robustness; Pattern recognition; Spatial databases; Speech recognition; Streaming media; Working environment noise; Active appearance models (AAMs); audiovisual automatic speech recognition (AV-ASR); multimodal fusion; uncertainty compensation;
fLanguage :
English
Journal_Title :
IEEE Transactions on Audio, Speech, and Language Processing
Publisher :
IEEE
ISSN :
1558-7916
Type :
jour
DOI :
10.1109/TASL.2008.2011515
Filename :
4782036