DocumentCode :
3703358
Title :
Multimodal emotion recognition in response to videos (Extended abstract)
Author :
Mohammad Soleymani;Maja Pantic;Thierry Pun
Author_Institution :
Swiss Center for Affective Sciences, University of Geneva, Switzerland
Year :
2015
Firstpage :
491
Lastpage :
497
Abstract :
We present a user-independent emotion recognition method aimed at detecting expected emotions, or affective tags, for videos using electroencephalogram (EEG) signals, pupillary response, and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. EEG responses and eye-gaze data were then recorded from 24 participants while they watched the emotional video clips. Ground truth was defined from the median arousal and valence scores given to the clips in a preliminary study. The arousal classes were calm, medium aroused, and activated; the valence classes were unpleasant, neutral, and pleasant. Leave-one-participant-out cross-validation was employed to evaluate classification performance in a user-independent setting. The best classification accuracies, 68.5% for the three valence labels and 76.4% for the three arousal labels, were obtained using a modality fusion strategy and a support vector machine. The results over a population of 24 participants demonstrate that user-independent emotion recognition can outperform individual self-reports for arousal assessments and does not underperform them for valence assessments.
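The evaluation protocol described in the abstract (feature-level fusion of EEG and eye-gaze features, a support vector machine classifier, and leave-one-participant-out cross-validation) can be illustrated with a minimal sketch. The feature matrices, labels, and participant IDs below (eeg_feats, gaze_feats, labels, participants) are hypothetical placeholders, not the authors' actual features or parameters:

    # Minimal sketch of the protocol: modality fusion by feature
    # concatenation, an SVM classifier, and leave-one-participant-out
    # cross-validation for a user-independent performance estimate.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_participants, n_clips = 24, 20            # as in the study: 24 participants, 20 clips
    n_trials = n_participants * n_clips
    eeg_feats = rng.normal(size=(n_trials, 32))  # placeholder EEG features
    gaze_feats = rng.normal(size=(n_trials, 8))  # placeholder pupil/gaze features
    labels = rng.integers(0, 3, size=n_trials)   # 3 classes (e.g., valence)
    participants = np.repeat(np.arange(n_participants), n_clips)

    # Modality fusion: concatenate per-trial feature vectors of both modalities.
    X = np.hstack([eeg_feats, gaze_feats])

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

    # Each fold holds out every trial of one participant, so the classifier
    # is always evaluated on a person it has never seen during training.
    scores = cross_val_score(clf, X, labels,
                             groups=participants, cv=LeaveOneGroupOut())
    print(f"mean accuracy over {n_participants} folds: {scores.mean():.3f}")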
Keywords :
"Videos","Electroencephalography","Emotion recognition","Feature extraction","Motion pictures","Multimedia communication","Streaming media"
Publisher :
ieee
Conference_Title :
Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on
Electronic_ISSN :
2156-8111
Type :
conf
DOI :
10.1109/ACII.2015.7344615
Filename :
7344615