Title :
Directing humanoids in a multi-modal command language
Author :
Oka, Tetsushi ; Abe, Toyokazu ; Shimoji, Masato ; Nakamura, Takuya ; Sugita, Kaoru ; Yokota, Masao
Author_Institution :
Fukuoka Institute of Technology, Fukuoka, Japan
Abstract :
This paper reports recent results from a study on directing humanoids in a multi-modal command language. A system that interprets users' messages in the language in real time, through microphones, visual and tactile sensors, and control buttons, has been developed and applied to small humanoids. The command language combines a simple, well-defined spoken language with non-verbal events detected by the sensors and buttons. In usability tests, subjects unfamiliar with the language were able to operate the small humanoids and complete their tasks by talking to them, gesturing, touching them, and pressing keypad keys, without a long learning stage. The system, running on PCs, responded to multi-modal commands without significant delay. Multi-modal commands succeeded more often than spoken commands issued without non-verbal messages, although some users needed several trials to adapt to multi-modal communication in the language.
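Implementation_Sketch :
The paper's abstract describes the architecture only at a high level: a spoken command is interpreted together with non-verbal events (gestures, touches, button presses) arriving around the same time. The sketch below is a minimal, hypothetical illustration of one way such time-windowed fusion could be organized; the Event and MultiModalInterpreter names, the 1.5-second window, and the slot-filling rules are assumptions for illustration, not the authors' actual system.

import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class Event:
    modality: str    # e.g. "speech", "gesture", "touch", "button"
    content: str     # recognized word string or detected event label
    timestamp: float


class MultiModalInterpreter:
    """Fuses a spoken command with non-verbal events that arrive
    within a short time window, then emits one robot directive."""

    def __init__(self, window: float = 1.5):
        self.window = window          # fusion window in seconds (assumed)
        self.pending: list[Event] = []

    def feed(self, event: Event) -> Optional[dict]:
        # Discard events that have fallen out of the fusion window.
        self.pending = [e for e in self.pending
                        if event.timestamp - e.timestamp <= self.window]
        self.pending.append(event)

        # A spoken command anchors the interpretation; non-verbal
        # events inside the window fill its open slots.
        speech = next((e for e in self.pending if e.modality == "speech"), None)
        if speech is None:
            return None
        nonverbal = [e for e in self.pending if e.modality != "speech"]

        if "this way" in speech.content and nonverbal:
            # e.g. "walk this way" + pointing gesture -> directed walk
            self.pending.clear()
            return {"action": "walk", "direction": nonverbal[-1].content}
        if "stop" in speech.content:
            self.pending.clear()
            return {"action": "stop"}
        return None


if __name__ == "__main__":
    interp = MultiModalInterpreter()
    t = time.time()
    interp.feed(Event("gesture", "left", t))           # pointing gesture first
    cmd = interp.feed(Event("speech", "walk this way", t + 0.4))
    print(cmd)   # {'action': 'walk', 'direction': 'left'}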
Keywords :
high level languages; humanoid robots; robot vision; tactile sensors; command language; humanoids; microphones; multimodal command language; nonverbal events; spoken language; visual sensors; Command languages; Control systems; Event detection; Natural languages; Real time systems; Testing; Usability;
Conference_Title :
RO-MAN 2008: The 17th IEEE International Symposium on Robot and Human Interactive Communication
Conference_Location :
Munich, Germany
Print_ISBN :
978-1-4244-2212-8
Electronic_ISBN :
978-1-4244-2213-5
DOI :
10.1109/ROMAN.2008.4600729