DocumentCode
2956650
Title
Affect recognition from face and body: early fusion vs. late fusion
Author
Gunes, Hatice ; Piccardi, Massimo
Author_Institution
Fac. of Inf. Technol., Univ. of Technol., Sydney, NSW, Australia
Volume
4
fYear
2005
fDate
10-12 Oct. 2005
Firstpage
3437
Abstract
This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. First, individual classifiers are trained on each modality separately. We then fuse facial expression and affective body gesture information in two ways: at the feature level, where the data from both modalities are combined before classification, and at the decision level, where the outputs of the monomodal systems are integrated using suitable combination criteria. We evaluate both fusion approaches by comparing their performance against monomodal emotion recognition based on the facial expression modality alone. In our experiments, classification using the two modalities achieved higher recognition accuracy than classification using the facial modality alone. Moreover, fusion at the feature level yielded better recognition than fusion at the decision level.
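The feature-level vs decision-level distinction described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual method: it uses toy 2-D "face" and "body" feature vectors, a nearest-centroid classifier, and a sum rule for decision-level combination, all of which are assumptions made for the sketch.

```python
# Sketch of early (feature-level) vs late (decision-level) fusion.
# Toy data and class labels are illustrative, not taken from the paper.

def centroid(vectors):
    """Mean feature vector of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def class_scores(train, x):
    """Per-class similarity scores: negative squared distance to each class centroid."""
    return {
        label: -sum((a - b) ** 2 for a, b in zip(centroid(vectors), x))
        for label, vectors in train.items()
    }

def classify(train, x):
    """Nearest-centroid decision: the class with the highest similarity score."""
    scores = class_scores(train, x)
    return max(scores, key=scores.get)

# Toy training data: per-emotion feature vectors for each modality.
face_train = {"happy": [[1.0, 0.1]], "angry": [[0.0, 1.0]]}
body_train = {"happy": [[0.9, 0.0]], "angry": [[0.1, 0.9]]}

# One test sample, observed in both modalities.
face_x, body_x = [0.8, 0.2], [0.85, 0.1]

# Early fusion: concatenate modality features, then classify once.
early_train = {
    label: [f + b for f, b in zip(face_train[label], body_train[label])]
    for label in face_train
}
early_pred = classify(early_train, face_x + body_x)

# Late fusion: classify each modality separately, then combine the
# monomodal scores (sum rule) and take the best class.
combined = {
    label: class_scores(face_train, face_x)[label]
    + class_scores(body_train, body_x)[label]
    for label in face_train
}
late_pred = max(combined, key=combined.get)

print(early_pred, late_pred)  # both predict "happy" for this sample
```

The design difference is where the combination happens: early fusion lets the classifier see cross-modal feature interactions, while late fusion only combines each modality's independent decisions or scores.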
Keywords
emotion recognition; face recognition; sensor fusion; affective body gesture information; bimodal affect recognition; early fusion; emotion classification; facial expression modality; late fusion; monomodal emotion recognition; monomodal systems; recognition accuracy; visual emotion recognition; Computer displays; Computer vision; Data mining; Emotion recognition; Face detection; Face recognition; Facial animation; Fuses; Human computer interaction; Information technology; Facial expression; bimodal affect recognition; body gesture; early fusion; late fusion; monomodal affect recognition;
fLanguage
English
Publisher
ieee
Conference_Titel
Systems, Man and Cybernetics, 2005 IEEE International Conference on
Print_ISBN
0-7803-9298-1
Type
conf
DOI
10.1109/ICSMC.2005.1571679
Filename
1571679
Link To Document