DocumentCode :
2336251
Title :
Obtaining an object position using multimodal interaction for a service robot
Author :
Iwasawa, Masaya ; Fukusato, Yusuke ; Sato-Shimokawara, Eri ; Yamaguchi, Toru
Author_Institution :
Grad. Sch. of Syst. Design, Tokyo Metropolitan Univ., Hino, Japan
fYear :
2009
fDate :
Sept. 27 - Oct. 2, 2009
Firstpage :
1155
Lastpage :
1160
Abstract :
Gestures are a useful factor in communication. A gesture has different meanings depending on the related objects, the situation, and so on. The authors aim for a robot to recognize the meaning of a gesture and to provide a service that takes the related objects into account. Object position is one of the important factors for a robot in recognizing a gesture. This paper focuses on multimodal interaction between a human and a robot to obtain object positions. The proposed system combines gesture recognition and speech recognition as multimodal interaction, and manages a database of object position information through this interaction. The system is expected to improve the infrastructure supporting the cognitive abilities of robots.
Keywords :
gesture recognition; human-robot interaction; service robots; speech recognition; object position information; Cameras; Cognitive robotics; Databases; Dictionaries; Fasteners; Human robot interaction; Humanoid robots; Robot kinematics; Service robots; Speech recognition
fLanguage :
English
Publisher :
ieee
Conference_Titel :
The 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009)
Conference_Location :
Toyama
ISSN :
1944-9445
Print_ISBN :
978-1-4244-5081-7
Electronic_ISBN :
1944-9445
Type :
conf
DOI :
10.1109/ROMAN.2009.5326322
Filename :
5326322