DocumentCode :
3121499
Title :
Emotion recognition based on human gesture and speech information using RT middleware
Author :
Vu, H.A. ; Yamazaki, Y. ; Dong, F. ; Hirota, K.
Author_Institution :
Dept. of Comput. Intell. & Syst. Sci., Tokyo Inst. of Technol., Yokohama, Japan
fYear :
2011
fDate :
27-30 June 2011
Firstpage :
787
Lastpage :
791
Abstract :
A bi-modal emotion recognition approach is proposed that recognizes four emotions by integrating information from gestures and speech. The outputs of two unimodal emotion recognition systems, one based on affective speech and one on expressive gesture, are fused at the decision level using weight criterion fusion and best probability plus majority vote fusion; the resulting classifier outperforms each unimodal system and helps to recognize emotions appropriate to communication situations. To validate the proposal, fifty Japanese words (or phrases) and 8 types of gestures recorded from five participants are used, and the emotion recognition rate increases up to 85.39%. The proposal can be extended to additional modalities and is useful for automatic emotion recognition in human-robot communication.
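The decision-level fusion described in the abstract can be sketched roughly as follows (Python). The function names, weights, and emotion labels are illustrative assumptions, not the authors' implementation; only the general idea of weight criterion fusion and best probability plus majority vote fusion comes from the abstract.

    # Illustrative sketch only: names, weights, and the four-emotion label set
    # are assumptions for demonstration, not taken from the paper.
    import numpy as np

    EMOTIONS = ["joy", "sadness", "anger", "neutral"]  # assumed four-emotion set

    def weight_criterion_fusion(p_speech, p_gesture, w_speech=0.5, w_gesture=0.5):
        # Decision-level fusion: weighted sum of the unimodal class probabilities.
        fused = w_speech * np.asarray(p_speech) + w_gesture * np.asarray(p_gesture)
        return EMOTIONS[int(np.argmax(fused))]

    def best_probability_majority_vote(p_speech, p_gesture):
        # If the two unimodal decisions agree, keep that label (majority vote for
        # two voters); otherwise fall back to the decision with the best probability.
        i_s, i_g = int(np.argmax(p_speech)), int(np.argmax(p_gesture))
        if i_s == i_g:
            return EMOTIONS[i_s]
        return EMOTIONS[i_s] if p_speech[i_s] >= p_gesture[i_g] else EMOTIONS[i_g]

    # Example: unimodal posteriors over the four emotions
    p_speech = [0.6, 0.1, 0.2, 0.1]
    p_gesture = [0.3, 0.1, 0.5, 0.1]
    print(weight_criterion_fusion(p_speech, p_gesture))
    print(best_probability_majority_vote(p_speech, p_gesture))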
Keywords :
emotion recognition; human-robot interaction; middleware; natural language processing; probability; speech recognition; Japanese words; RT middleware; human gesture information; human-robot communication; majority vote fusion methods; speech information; weight criterion fusion; Emotion recognition; Humans; Image color analysis; Robot kinematics; Speech; Speech recognition; Affective Speech; Decision-level Fusion; Emotion Recognition; Expressive Gesture;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Fuzzy Systems (FUZZ), 2011 IEEE International Conference on
Conference_Location :
Taipei
ISSN :
1098-7584
Print_ISBN :
978-1-4244-7315-1
Electronic_ISBN :
1098-7584
Type :
conf
DOI :
10.1109/FUZZY.2011.6007557
Filename :
6007557