DocumentCode :
1799525
Title :
Multimodal emotion analysis in response to multimedia
Author :
Wei-Long Zheng ; Jia-Yi Zhu ; Bao-Liang Lu
Author_Institution :
Dept. of Comput. Sci. & Eng., Shanghai Jiao Tong Univ., Shanghai, China
fYear :
2014
fDate :
14-18 July 2014
Firstpage :
1
Lastpage :
2
Abstract :
In this demo paper, we present a novel framework that combines EEG and eye-tracking signals to analyze users' emotional responses to multimedia. To realize the proposed framework, we extracted efficient features from the EEG and eye-tracking signals and used a support vector machine as the classifier. We combined the multimodal features using feature-level fusion and decision-level fusion to classify three emotional categories (positive, neutral, and negative), achieving average accuracies of 75.62% and 74.92%, respectively. We also investigated the brain activities associated with emotions. Our experimental results indicate that stable common patterns and activated brain areas exist for positive and negative emotions. In the demo, we also show the trajectory of emotion changes in response to multimedia.
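The two fusion schemes mentioned in the abstract can be sketched as follows: feature-level fusion concatenates the EEG and eye-tracking feature vectors and trains a single SVM, while decision-level fusion trains one SVM per modality and combines their class probabilities. This is a minimal illustration using scikit-learn with synthetic data; the paper's actual features, kernel choice, and decision-combination rule are not specified in this record, so all names and dimensions here are assumptions.

```python
# Minimal sketch of feature-level vs. decision-level fusion for
# three-class emotion classification (negative / neutral / positive).
# Synthetic stand-in data; not the paper's actual features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 90
eeg = rng.normal(size=(n, 10))   # hypothetical EEG features
eye = rng.normal(size=(n, 4))    # hypothetical eye-tracking features
y = rng.integers(0, 3, size=n)   # labels: 0=negative, 1=neutral, 2=positive

# Feature-level fusion: concatenate modalities, train a single classifier.
clf_fused = SVC(probability=True).fit(np.hstack([eeg, eye]), y)

# Decision-level fusion: one classifier per modality, average the
# predicted class probabilities, then take the most likely class.
clf_eeg = SVC(probability=True).fit(eeg, y)
clf_eye = SVC(probability=True).fit(eye, y)
proba = (clf_eeg.predict_proba(eeg) + clf_eye.predict_proba(eye)) / 2
pred = proba.argmax(axis=1)
```

Averaging probabilities is just one simple decision-level rule; weighted sums or majority voting over per-modality predictions are common alternatives.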
Keywords :
brain; emotion recognition; feature extraction; gaze tracking; image classification; image fusion; multimedia systems; support vector machines; EEG; brain activities; classifier; decision-level fusion; emotional category classification; eye tracking signals; feature extraction; feature-level fusion; multimedia; multimodal emotion analysis; multimodal features; support vector machine; Accuracy; Brain modeling; Data models; Electroencephalography; Emotion recognition; Multimedia communication; Videos; EEG; Emotion recognition; affective computing; eye tracking;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on
Conference_Location :
Chengdu
ISSN :
1945-7871
Type :
conf
DOI :
10.1109/ICMEW.2014.6890622
Filename :
6890622