Title :
Kernel cross-modal factor analysis for multimodal information fusion
Author :
Wang, Yongjin ; Guan, Ling ; Venetsanopoulos, A.N.
Author_Institution :
Dept. of Electr. & Comput. Eng., Ryerson Univ., Toronto, ON, Canada
Abstract :
This paper presents a novel approach for multimodal information fusion. The proposed method is based on kernel cross-modal factor analysis (KCFA), in which the optimal transformations that represent the coupled patterns between two different subsets of features are identified by minimizing the Frobenius norm in the transformed domain. It generalizes the linear cross-modal factor analysis (CFA) method via the kernel trick to model the nonlinear relationship between two multidimensional variables. The effectiveness of the introduced solution is demonstrated through experiments on an audiovisual emotion recognition problem. Experimental results show that the proposed approach outperforms concatenation-based feature-level fusion, linear CFA, as well as the canonical correlation analysis (CCA) and kernel CCA methods.
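To make the abstract's objective concrete: linear CFA seeks orthonormal transformations Wx, Wy that minimize the Frobenius norm ||X·Wx − Y·Wy||_F between the two transformed feature sets, and the classical solution is obtained from the SVD of the cross-product matrix Xᵀ·Y. The sketch below illustrates that linear step with numpy; it is a minimal illustration of the general CFA technique, not the authors' KCFA implementation (the kernel variant would additionally replace X and Y with centered Gram matrices, which is omitted here).

```python
import numpy as np

def cfa(X, Y, k):
    """Linear cross-modal factor analysis (illustrative sketch).

    X: (n_samples, dx) features from modality 1 (e.g. audio)
    Y: (n_samples, dy) features from modality 2 (e.g. visual)
    k: number of coupled factors to retain

    Returns orthonormal projections Wx (dx, k), Wy (dy, k)
    minimizing ||Xc @ Wx - Yc @ Wy||_F over orthonormal matrices.
    """
    # Center each modality so the coupling is not dominated by the means.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # The minimizer comes from the SVD of the cross-product matrix.
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    Wx = U[:, :k]          # left singular vectors -> modality-1 transform
    Wy = Vt[:k].T          # right singular vectors -> modality-2 transform
    return Wx, Wy

# Toy usage: two coupled modalities with a shared latent source.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))                      # shared latent factors
X = Z @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(200, 10))
Y = Z @ rng.normal(size=(3, 8)) + 0.1 * rng.normal(size=(200, 8))
Wx, Wy = cfa(X, Y, k=3)
# The transformed features Xc @ Wx and Yc @ Wy can then be
# concatenated and passed to a classifier for fusion.
```

For feature-level fusion, the projected representations of the two modalities are typically concatenated before classification, which is the setting the experimental comparison above refers to.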
Keywords :
audio-visual systems; correlation methods; emotion recognition; sensor fusion; CFA method; Frobenius norm; audiovisual based emotion recognition problem; canonical correlation analysis; feature level fusion; kernel CCA methods; kernel cross-modal factor analysis; linear cross-modal factor analysis method; multidimensional variables; multimodal information fusion; Correlation; Emotion recognition; Kernel; Multimedia communication; Principal component analysis; Vectors; Cross-modal analysis; Frobenius norm; information fusion; kernel method;
Conference_Titel :
Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on
Conference_Location :
Prague
Print_ISBN :
978-1-4577-0538-0
Electronic_ISSN :
1520-6149
DOI :
10.1109/ICASSP.2011.5946963