DocumentCode :
1839236
Title :
Multimodal human-robot interaction with Chatterbot system: Extending AIML towards supporting embodied interactions
Author :
Tan, Jeffrey Too Chuan ; Feng Duan ; Inamura, Tetsunari
Author_Institution :
Principles of Inf. Res. Div., Nat. Inst. of Inf., Tokyo, Japan
fYear :
2012
fDate :
11-14 Dec. 2012
Firstpage :
1727
Lastpage :
1732
Abstract :
The research objective of this work is to realize multimodal human-robot interaction based on a lightweight Chatterbot system. The dialogue system is integrated into the SIGVerse system with immersive multimodal interfaces to achieve interaction in an embodied virtual environment. To validate the feasibility of the proposed design, actual AIML implementations are described that illustrate (a) gesture inputs, (b) emotional expressions, (c) robot interactive learning, and (d) interactive learning towards symbol grounding in multimodal human-robot interaction.
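As a rough illustration of the kind of extension the abstract describes, an AIML category could map a gesture token (injected as text by the multimodal interface) to a spoken reply plus a robot action. This is a hypothetical sketch; the pattern, the `GESTURE` token convention, and the `<robot>` tag are illustrative assumptions, not taken from the paper.

```xml
<!-- Hypothetical sketch of a multimodal AIML category.
     Assumption: the interface converts a detected wave gesture
     into the text input "GESTURE WAVE" before passing it to the bot. -->
<aiml version="1.0">
  <category>
    <pattern>GESTURE WAVE</pattern>
    <template>
      Hello!
      <!-- Assumed custom tag the embodied-agent side could parse
           as a motion command; not a standard AIML element. -->
      <robot action="wave_back"/>
    </template>
  </category>
</aiml>
```

In such a design, the Chatterbot engine stays unmodified: the multimodal extension lives in the input preprocessing (gesture-to-text) and in how the agent side interprets custom tags emitted by templates.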
Keywords :
human-robot interaction; user interfaces; virtual reality; AIML; SIGVerse system; dialogue system; embodied interaction; embodied virtual environment; emotional expression; gesture input; immersive multimodal interface; light-weight chatterbot system; multimodal human-robot interaction; robot interactive learning; symbol grounding;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2012 IEEE International Conference on Robotics and Biomimetics (ROBIO)
Conference_Location :
Guangzhou
Print_ISBN :
978-1-4673-2125-9
Type :
conf
DOI :
10.1109/ROBIO.2012.6491217
Filename :
6491217