• DocumentCode
    1389677
  • Title
    Multimodal Emotion Recognition in Response to Videos
  • Author
    Soleymani, Mohammad; Pantic, Maja; Pun, Thierry
  • Author_Institution
    Comput. Sci. Dept., Univ. of Geneva, Carouge, Switzerland
  • Volume
    3
  • Issue
    2
  • fYear
    2012
  • Firstpage
    211
  • Lastpage
    223
  • Abstract
    This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response, and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. Then, EEG responses and eye gaze data were recorded from 24 participants while they watched the emotional video clips. Ground truth was defined based on the median arousal and valence scores given to the clips in a preliminary study using an online questionnaire. Based on the participants' responses, three classes were defined for each dimension. The arousal classes were calm, medium aroused, and activated, and the valence classes were unpleasant, neutral, and pleasant. One of the three affective labels of either valence or arousal was determined by classification of bodily responses. A one-participant-out cross-validation was employed to investigate the classification performance in a user-independent approach. The best classification accuracies of 68.5 percent for three labels of valence and 76.4 percent for three labels of arousal were obtained using a modality fusion strategy and a support vector machine. The results over a population of 24 participants demonstrate that user-independent emotion recognition can outperform individual self-reports for arousal assessments and does not underperform for valence assessments.
  • Keywords
    behavioural sciences computing; electroencephalography; emotion recognition; sensor fusion; support vector machines; affective tags; bodily response classification; electroencephalogram; emotional video clips; extrinsic emotional content; gaze distance; median arousal; modality fusion strategy; multimodal emotion recognition; one-participant-out cross validation; pupillary response; support vector machine; user-independent emotion recognition; valence scores; video response; Electroencephalography; Emotion recognition; Motion pictures; Multimedia communication; Physiology; Tagging; Videos; EEG; affective computing; pattern classification; pupillary reflex
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Affective Computing
  • Publisher
    IEEE
  • ISSN
    1949-3045
  • Type
    jour
  • DOI
    10.1109/T-AFFC.2011.37
  • Filename
    6095505
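
    The abstract describes a user-independent evaluation: features from EEG and eye gaze are fused, a support vector machine predicts one of three arousal or valence classes, and performance is measured with one-participant-out cross-validation. The following is a minimal sketch of that evaluation scheme, assuming scikit-learn; the feature dimensions, random placeholder data, and RBF kernel choice are illustrative assumptions, not the paper's actual features or settings.

      # Sketch: one-participant-out cross-validation with feature-level
      # (modality) fusion of EEG and gaze features and an SVM classifier.
      # All data below are synthetic placeholders.
      import numpy as np
      from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)

      n_participants, n_clips = 24, 20          # 24 participants, 20 video clips
      n_trials = n_participants * n_clips

      # Hypothetical per-trial feature matrices for the two modalities
      # (dimensions are made up for illustration).
      eeg_features = rng.normal(size=(n_trials, 32))
      gaze_features = rng.normal(size=(n_trials, 8))

      # Modality fusion: concatenate EEG and gaze features per trial.
      X = np.hstack([eeg_features, gaze_features])

      # Three arousal classes: 0 = calm, 1 = medium aroused, 2 = activated.
      y = rng.integers(0, 3, size=n_trials)

      # Group label = participant ID, so each fold holds out one participant,
      # giving a user-independent estimate of accuracy.
      groups = np.repeat(np.arange(n_participants), n_clips)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
      print(f"mean accuracy over {n_participants} folds: {scores.mean():.3f}")

    With real EEG and gaze features in place of the random arrays, the mean of the per-fold scores corresponds to the user-independent accuracies reported in the abstract (68.5 percent for valence, 76.4 percent for arousal).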