DocumentCode :
3764203
Title :
Feature Level Fusion for Bimodal Facial Action Unit Recognition
Author :
Zibo Meng;Shizhong Han;Min Chen;Yan Tong
Author_Institution :
Comput. Sci. &
fYear :
2015
Firstpage :
471
Lastpage :
476
Abstract :
Recognizing facial actions from spontaneous facial displays is difficult due to subtle and complex facial deformations, frequent head movements, and partial occlusions. It is especially challenging when the facial activities are accompanied by speech. Instead of employing information solely from the visual channel, this paper presents a novel fusion framework that exploits information from both the visual and audio channels to recognize speech-related facial action units (AUs). In particular, features are first extracted independently from the visual and audio channels. Then, the audio features are aligned with the visual features to handle the difference in time scales and the time shift between the two signals. Finally, the aligned audio and visual features are integrated via a feature-level fusion framework and utilized in recognizing AUs. Experimental results on a new audiovisual AU-coded dataset demonstrate that the proposed feature-level fusion framework outperforms a state-of-the-art visual-based method in recognizing speech-related AUs, especially for those AUs that are "invisible" in the visual channel during speech. The improvement is even more pronounced when the facial images are occluded, since occlusions do not affect the audio channel.
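The pipeline described in the abstract (independent feature extraction, temporal alignment of audio to visual features, then feature-level fusion) can be sketched minimally as below. This is an illustrative stand-in, not the paper's implementation: the paper does not specify its alignment method here, so linear interpolation of audio frames (e.g., MFCCs) onto the visual frame timeline is an assumption, as are all function names.

```python
import numpy as np

def align_audio_to_visual(audio_feats, n_visual_frames):
    """Resample audio feature frames (e.g., per-frame MFCC vectors) onto
    the visual frame count via linear interpolation. This is a simple
    stand-in for the paper's alignment step, which handles the difference
    in time scales between the two channels."""
    n_audio, dim = audio_feats.shape
    src = np.linspace(0.0, 1.0, n_audio)          # audio frame positions
    dst = np.linspace(0.0, 1.0, n_visual_frames)  # visual frame positions
    # Interpolate each feature dimension independently.
    cols = [np.interp(dst, src, audio_feats[:, d]) for d in range(dim)]
    return np.stack(cols, axis=1)

def feature_level_fusion(visual_feats, audio_feats):
    """Concatenate the visual features with the temporally aligned audio
    features, yielding one joint feature vector per visual frame that a
    downstream AU classifier would consume."""
    aligned = align_audio_to_visual(audio_feats, visual_feats.shape[0])
    return np.concatenate([visual_feats, aligned], axis=1)

# Example: 10 visual frames of 5-D features, 25 audio frames of 13-D MFCCs
visual = np.random.rand(10, 5)
audio = np.random.rand(25, 13)
fused = feature_level_fusion(visual, audio)   # shape (10, 18)
```

In feature-level (early) fusion, the classifier sees the concatenated vector directly, so cross-channel correlations can be exploited during training, in contrast to decision-level fusion, which combines per-channel classifier outputs.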
Keywords :
"Feature extraction","Visualization","Face recognition","Gold","Speech","Face","Mel frequency cepstral coefficient"
Publisher :
ieee
Conference_Titel :
2015 IEEE International Symposium on Multimedia (ISM)
Type :
conf
DOI :
10.1109/ISM.2015.116
Filename :
7442381