DocumentCode
2527780
Title
Towards an integrated model of speech and gesture production for multi-modal robot behavior
Author
Salem, Maha; Kopp, Stefan; Wachsmuth, Ipke; Joublin, Frank
Author_Institution
Research Institute for Cognition and Robotics, Bielefeld, Germany
fYear
2010
fDate
13-15 Sept. 2010
Firstpage
614
Lastpage
619
Abstract
The generation of communicative, speech-accompanying robot gesture is still largely unexplored. We present an approach that enables the humanoid robot ASIMO to flexibly produce speech and co-verbal gestures at run-time, without being limited to a pre-defined repertoire of motor actions. Since much research has already been dedicated to this challenge within the domain of virtual conversational agents, we build upon the experience gained from the development of a speech and gesture production model used for the virtual human Max. We propose a robot control architecture built on the Articulated Communicator Engine (ACE), which was developed to allow virtual agents to flexibly realize planned multi-modal behavior representations on the spot. Our approach tightly couples ACE with ASIMO's perceptuo-motor system, combining conceptual representation and planning with motor control primitives for speech and arm movements of a physical robot body. First results of both gesture production and speech synthesis using ACE and the MARY text-to-speech system are presented and discussed.
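The abstract describes an architecture in which a planned multi-modal behavior representation is split at run-time into a speech channel and a gesture channel that must stay synchronized. The Python sketch below illustrates only that coupling idea; it is not the paper's ACE/ASIMO interface. All names (BehaviorPlan, SpeechChannel, GestureChannel, realize) are hypothetical, and the crude duration estimate stands in for the timing feedback a real TTS back end such as MARY would provide.

# Illustrative sketch only -- not the authors' implementation. Every class
# and method name here is hypothetical, invented to show the coupling idea
# from the abstract: a planned multi-modal behavior is split into a speech
# channel and a gesture channel, and the two are synchronized at run-time.
from dataclasses import dataclass


@dataclass
class GestureStroke:
    """A single arm-movement primitive with its intended timing (seconds)."""
    joint_targets: dict[str, float]  # joint name -> target angle (rad)
    start: float
    duration: float


@dataclass
class BehaviorPlan:
    """A planned multi-modal utterance: text plus co-verbal gesture strokes."""
    text: str
    strokes: list[GestureStroke]


class SpeechChannel:
    """Stand-in for a TTS back end; returns an estimated utterance duration
    so that gesture timing can be aligned to the speech."""
    def speak(self, text: str) -> float:
        words = len(text.split())
        return words * 0.4  # placeholder estimate, roughly 150 words/min


class GestureChannel:
    """Stand-in for the robot's motor layer; rescales stroke timing so the
    gesture peaks coincide with the affiliated speech."""
    def execute(self, strokes: list[GestureStroke], speech_duration: float) -> None:
        plan_end = max(s.start + s.duration for s in strokes)
        scale = speech_duration / plan_end  # stretch/compress to fit speech
        for s in strokes:
            t0, dur = s.start * scale, s.duration * scale
            print(f"stroke at t={t0:.2f}s for {dur:.2f}s -> {s.joint_targets}")


def realize(plan: BehaviorPlan) -> None:
    """Couple the two channels: speech timing drives gesture scheduling."""
    speech_duration = SpeechChannel().speak(plan.text)
    GestureChannel().execute(plan.strokes, speech_duration)


if __name__ == "__main__":
    plan = BehaviorPlan(
        text="The box is about this big.",
        strokes=[GestureStroke({"r_shoulder_pitch": 0.8, "r_elbow": 1.2},
                               start=0.5, duration=0.6)],
    )
    realize(plan)

The one design choice mirrored here is that speech timing drives gesture scheduling: arm-movement strokes are stretched or compressed so they land on their affiliated words, rather than the speech being delayed to wait for the arm.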
Keywords
humanoid robots; multi-agent systems; path planning; speech synthesis; ASIMO; Max; Articulated Communicator Engine; gesture production; humanoid robot; multimodal robot behavior; robot control architecture; speech production; text-to-speech system; virtual conversational agents; virtual human; Animation; Humans; Joints; Production; Robots; Shape; Speech
fLanguage
English
Publisher
ieee
Conference_Titel
RO-MAN 2010 (IEEE International Symposium on Robot and Human Interactive Communication)
Conference_Location
Viareggio, Italy
ISSN
1944-9445
Print_ISBN
978-1-4244-7991-7
Type
conf
DOI
10.1109/ROMAN.2010.5598665
Filename
5598665