DocumentCode :
2990780
Title :
Towards rich multimodal behavior in spoken dialogues with embodied agents
Author :
Al Moubayed, Samer
Author_Institution :
Dept. for Speech, KTH R. Inst. of Technol., Stockholm, Sweden
fYear :
2013
fDate :
2-5 Dec. 2013
Firstpage :
817
Lastpage :
822
Abstract :
Spoken dialogue frameworks have traditionally been designed to handle a single stream of data: the speech signal. Research on human-human communication has provided substantial evidence for, and quantified the effects and importance of, a multitude of other multimodal nonverbal signals that people use in their communication and that shape and regulate their interaction. Driven by findings from multimodal human spoken interaction, and by advances in capture devices, robotics, and animation technologies, new possibilities are emerging for the development of multimodal human-machine interaction that is more affective, social, and engaging. In such face-to-face interaction scenarios, dialogue systems have a large set of signals at their disposal to infer context and to enhance and regulate the interaction through the generation of verbal and nonverbal facial signals. This paper summarizes several design decisions and experiments we have followed in attempting to build rich and fluent multimodal interactive systems using a newly developed hybrid robotic head called Furhat, and discusses the issues and challenges that this effort faces.
Keywords :
control engineering computing; human computer interaction; humanoid robots; interactive systems; speech processing; speech-based user interfaces; Furhat robot; animation technologies; capture devices; data stream; design decision; dialogue systems; embodied agents; face-to-face interaction scenarios; human-human communication; hybrid robotic head; multimodal behavior; multimodal human spoken interaction; multimodal human-machine interaction; multimodal interactive systems; multimodal nonverbal signals; nonverbal facial signals generation; robotics; speech signal; spoken dialogue frameworks; Face; Facial animation; Microphones; Robots; Speech; Unified modeling language; XML; Dialogue Systems; Facial Synthesis; Furhat Robot; Multimodal communication; Social Robots;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Cognitive Infocommunications (CogInfoCom), 2013 IEEE 4th International Conference on
Conference_Location :
Budapest
Print_ISBN :
978-1-4799-1543-9
Type :
conf
DOI :
10.1109/CogInfoCom.2013.6719212
Filename :
6719212