Title :
Visual attention model for manipulating human attention by a robot
Author :
Tamura, Yoshinobu; Yano, Sumio; Osumi, Hisashi
Author_Institution :
Dept. of Precision Mech., Chuo Univ., Tokyo, Japan
Date :
May 31 - June 7, 2014
Abstract :
For smooth interaction between humans and robots, a robot should be able to manipulate human attention and behavior. In this study, we developed a visual attention model that allows a robot to manipulate human attention. The model consists of two modules: a saliency map generation module and a manipulation map generation module. The saliency map describes the bottom-up effect of visual stimuli on human attention, while the manipulation map describes the top-down effect of the face, hands, and gaze. To evaluate the proposed model, we measured human gaze points while participants watched a magic video and applied the attention model to the same video. The results show that the proposed attention model explains human visual attention better than the original saliency map.
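As a rough illustration of the two-module idea in the abstract, the sketch below blends a bottom-up saliency map with a top-down "manipulation map" built from Gaussian weights around cue locations (face, hands, gaze target). The combination rule, the Gaussian form, and all parameter values (sigma, alpha) are assumptions for illustration only; the paper's actual formulation is not given here.

```python
# Hypothetical sketch: combine a bottom-up saliency map with a top-down
# manipulation map. All functions, parameters, and the blending rule are
# illustrative assumptions, not the authors' published method.
import numpy as np

def gaussian_blob(shape, center, sigma):
    """2-D Gaussian centered at `center` (row, col) with peak value 1."""
    rows, cols = np.indices(shape)
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def manipulation_map(shape, cues, sigma=20.0):
    """Top-down map: sum of Gaussians at cue locations (face, hands, gaze target)."""
    m = np.zeros(shape)
    for center in cues:
        m += gaussian_blob(shape, center, sigma)
    return m / (m.max() + 1e-8)            # normalize to [0, 1]

def combined_attention(saliency, cues, alpha=0.5):
    """Weighted blend of bottom-up saliency and the top-down manipulation map."""
    s = saliency / (saliency.max() + 1e-8)
    m = manipulation_map(saliency.shape, cues)
    return (1.0 - alpha) * s + alpha * m

if __name__ == "__main__":
    h, w = 120, 160
    saliency = np.random.rand(h, w)          # stand-in for a real saliency map
    cues = [(40, 80), (90, 30), (90, 130)]   # e.g., face and two hand positions
    attention = combined_attention(saliency, cues, alpha=0.6)
    print(attention.shape, attention.min(), attention.max())
```

In this sketch, alpha controls how strongly the robot's social cues (the top-down term) override purely stimulus-driven saliency; the paper evaluates such a combined map against gaze data recorded while viewers watched a magic video.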
Keywords :
human-robot interaction; social aspects of automation; human attention manipulation; manipulation map; manipulation map generation module; saliency map generation module; visual attention model; visual stimuli bottom-up effect; Atmospheric measurements; Computational modeling; Face; Image color analysis; Particle measurements; Robots; Visualization;
Conference_Title :
2014 IEEE International Conference on Robotics and Automation (ICRA)
Conference_Location :
Hong Kong
DOI :
10.1109/ICRA.2014.6907639