DocumentCode :
122969
Title :
Conveying emotion in robotic speech: Lessons learned
Author :
Crumpton, Joe ; Bethel, Cindy
Author_Institution :
Dept. of Comput. Sci. & Eng., Mississippi State Univ., Starkville, MS, USA
fYear :
2014
fDate :
25-29 Aug. 2014
Firstpage :
274
Lastpage :
279
Abstract :
This research explored whether robots can use modern speech synthesizers to convey emotion with their speech. We investigated the use of MARY, an open source speech synthesizer, to convey a robot's emotional intent to novice robot users. The first experiment indicated that participants were able to distinguish the intended emotions of anger, calm, fear, and sadness with success rates of 65.9%, 68.9%, 33.3%, and 49.2%, respectively. An issue was the recognition rate of the intended happiness statements, 18.2%, which was below the 20% chance level. The vocal prosody modifications for the expression of happiness were adjusted, and the recognition rate for happiness improved to 30.3% in a second experiment. This is an important benchmarking step in a line of research that investigates the use of emotional speech by robots to improve human-robot interaction. Recommendations and lessons learned from this research are presented.
Keywords :
control engineering computing; emotion recognition; human-robot interaction; speech recognition; speech synthesis; MARY; benchmarking step; conveying emotion; emotional speech; happiness statements; human-robot interaction; open source speech synthesizer; recognition rate; robot emotional intent; robot users; robotic speech; vocal prosody modification; Educational institutions; Emotion recognition; Human-robot interaction; Monitoring; Robots; Speech; Synthesizers;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
The 23rd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2014)
Conference_Location :
Edinburgh
Print_ISBN :
978-1-4799-6763-6
Type :
conf
DOI :
10.1109/ROMAN.2014.6926265
Filename :
6926265