DocumentCode
586583
Title
A saliency model for goal directed actions
Author
Vikram, T.N. ; Tscherepanow, M. ; Wrede, Britta
Author_Institution
CoR-Lab., Bielefeld Univ., Bielefeld, Germany
fYear
2012
fDate
7-9 Nov. 2012
Firstpage
1
Lastpage
6
Abstract
In this paper, we propose a saliency model that can be used to guide eye movements when viewing a goal-directed action video. The model employs top-down and bottom-up saliency components which operate purely on contrasts of random pixels in the image. We construct task-specific spatio-temporal priors and integrate them into the top-down and bottom-up modules. These priors reduce the search space for the target object, thereby automatically suppressing image regions that are irrelevant to the task. For the purpose of evaluation, we introduce a new goal-directed video database containing 60 sequences of five goal-directed actions performed by the same human demonstrator. The presented saliency-detection results support the proposed model and demonstrate its advantage over task-independent saliency models.
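The abstract describes a bottom-up saliency component driven purely by contrasts of randomly sampled pixels, modulated by a spatial prior that suppresses task-irrelevant regions. The paper itself does not specify the implementation; the following is a minimal illustrative sketch of that general idea, in which all function names, the pair-sampling scheme, and the Gaussian form of the prior are assumptions for illustration, not the authors' method.

```python
import numpy as np

def random_contrast_saliency(image, n_pairs=20000, seed=0):
    """Bottom-up saliency from intensity contrasts of randomly sampled
    pixel pairs (illustrative sketch, not the authors' implementation).

    image : 2-D float array (grayscale), values in [0, 1].
    Returns a saliency map of the same shape, normalized to [0, 1].
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    saliency = np.zeros((h, w), dtype=np.float64)

    # Draw random pixel pairs; a pixel that contrasts strongly with many
    # randomly chosen reference pixels accumulates high saliency.
    ys1 = rng.integers(0, h, n_pairs); xs1 = rng.integers(0, w, n_pairs)
    ys2 = rng.integers(0, h, n_pairs); xs2 = rng.integers(0, w, n_pairs)
    contrast = np.abs(image[ys1, xs1] - image[ys2, xs2])
    np.add.at(saliency, (ys1, xs1), contrast)  # credit both endpoints
    np.add.at(saliency, (ys2, xs2), contrast)

    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency

def apply_spatial_prior(saliency, centre, sigma=10.0):
    """Down-weight regions far from an expected target location with a
    hypothetical Gaussian prior (stand-in for a task-specific
    spatio-temporal prior)."""
    h, w = saliency.shape
    yy, xx = np.mgrid[0:h, 0:w]
    prior = np.exp(-((yy - centre[0]) ** 2 + (xx - centre[1]) ** 2)
                   / (2.0 * sigma ** 2))
    return saliency * prior
```

In this toy version, a bright patch on a dark background receives high saliency because most of its random pairings cross the patch boundary, and multiplying by the prior suppresses salient responses outside the expected target region.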
Keywords
eye; image sequences; mobile robots; random processes; robot vision; video databases; automatic image region suppression; bottom-up saliency components; eye movements; goal-directed action video sequences; goal-directed video database; human demonstrator; random image pixels; saliency detection; search space reduction; target object; task specific spatiotemporal priors; top-down saliency components; Computational modeling; Humans; Predictive models; Robots; Search problems; Streaming media; Visualization;
fLanguage
English
Publisher
ieee
Conference_Title
Development and Learning and Epigenetic Robotics (ICDL), 2012 IEEE International Conference on
Conference_Location
San Diego, CA
Print_ISBN
978-1-4673-4964-2
Electronic_ISBN
978-1-4673-4963-5
Type
conf
DOI
10.1109/DevLrn.2012.6400881
Filename
6400881
Link To Document