Title :
Multimodal emotion estimation and emotional synthesize for interaction virtual agent
Author :
Minghao Yang ; Jianhua Tao ; Hao Li ; Kaihui Mu
Author_Institution :
Nat. Lab. of Pattern Recognition, Inst. of Autom., Beijing, China
Date :
Oct. 30 - Nov. 1, 2012
Abstract :
In this study, we create a 3D interactive virtual character based on multi-modal emotion recognition and rule-based emotion synthesis techniques. The agent estimates users' emotional state by combining information from audio and facial expression using CART and boosting. For the agent's output module, the voice is generated by a TTS (Text-to-Speech) system from freely given text. The synchronous visual information of the agent, including facial expression, head motion, gesture, and body animation, is generated by multi-modal mapping from a motion-capture database. A high-level behavior markup language (hBML) containing five keywords is used to drive the animation of the virtual agent for emotional expression. Experiments show that this virtual character is considered natural and realistic in multimodal interaction environments.
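The abstract names "CART and boosting" as the fusion method for audio and facial-expression cues. The paper's actual features and training data are not given here, so the following is only a minimal illustrative sketch of that general idea: AdaBoost-style boosting over single-feature decision stumps (a degenerate CART), applied to toy two-modality scores. All feature names and data are invented for illustration.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Boosted decision stumps (AdaBoost-style) fusing an audio score and a
# facial-expression score into a binary emotion decision (+1 / -1).
import math

def train_adaboost(X, y, rounds=10):
    """AdaBoost with one-feature threshold stumps; labels y in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n          # sample weights, start uniform
    stumps = []                # each stump: (feature, threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        # Exhaustively search feature / threshold / polarity for the
        # stump with lowest weighted error on the current weights.
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    preds = [pol if x[f] >= t else -pol for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, preds)
        err, f, t, pol, preds = best
        err = max(err, 1e-10)                      # avoid log(0) / div-by-zero
        alpha = 0.5 * math.log((1 - err) / err)    # stump weight
        # Re-weight samples: boost the ones this stump got wrong.
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
        stumps.append((f, t, pol, alpha))
    return stumps

def predict(stumps, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * (pol if x[f] >= t else -pol) for f, t, pol, a in stumps)
    return 1 if score >= 0 else -1

# Toy fused feature vectors: [audio_arousal_score, facial_smile_score]
# with labels happy (+1) / not happy (-1). Purely made-up data.
X = [[0.9, 0.8], [0.7, 0.9], [0.2, 0.1], [0.3, 0.2], [0.8, 0.3], [0.1, 0.7]]
y = [1, 1, -1, -1, 1, -1]
model = train_adaboost(X, y, rounds=5)
```

In a real system along the lines described in the abstract, the per-modality scores would come from separate audio and facial-expression front ends, and the ensemble would be trained per emotion class rather than on a single binary label.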
Keywords :
computer animation; emotion recognition; multi-agent systems; speech synthesis; 3D interactive virtual character; CART; TTS system; facial expression; hBML; high-level behavior markup language; multi-modal mapping; multimodal emotion estimation; multimodal emotion recognition; text-to-speech system; virtual agent; virtual character; Animation; Emotion recognition; Face; Human computer interaction; Speech; Training; Visualization; Body movements; Boosting; CART; Face animation; Interactive virtual character; Multi-modal;
Conference_Titel :
Cloud Computing and Intelligent Systems (CCIS), 2012 IEEE 2nd International Conference on
Conference_Location :
Hangzhou
Print_ISBN :
978-1-4673-1855-6
DOI :
10.1109/CCIS.2012.6664394