DocumentCode :
2700937
Title :
Integrating visual exploration and visual search in robotic visual attention: The role of human-robot interaction
Author :
Begum, Momotaz ; Karray, Fakhri
Author_Institution :
Dept. of ECE, Univ. of Waterloo, Waterloo, ON, Canada
fYear :
2011
fDate :
9-13 May 2011
Firstpage :
3822
Lastpage :
3827
Abstract :
A common characteristic of computational models of visual attention is that they execute the two modes of visual attention (visual exploration and visual search) separately. This makes such models unsuitable for real-world robotic applications. This paper focuses on integrating visual exploration and visual search within a common visual attention framework and on the challenges resulting from such integration. It proposes a visual attention-oriented, speech-based human-robot interaction framework that helps a robot switch back and forth between the two modes of visual attention. A set of experiments is presented to demonstrate the performance of the proposed framework.
Keywords :
human-robot interaction; robot vision; speech processing; robotic visual attention; visual attention-oriented speech-based human robot interaction; visual exploration; visual search; Cameras; Color; Humans; Robot sensing systems; Training; Visualization;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2011 IEEE International Conference on Robotics and Automation (ICRA)
Conference_Location :
Shanghai, China
ISSN :
1050-4729
Print_ISBN :
978-1-61284-386-5
Type :
conf
DOI :
10.1109/ICRA.2011.5980376
Filename :
5980376