DocumentCode :
2293436
Title :
Learning to predict where humans look
Author :
Judd, Tilke ; Ehinger, Krista ; Durand, Frédo ; Torralba, Antonio
fYear :
2009
fDate :
Sept. 29 - Oct. 2, 2009
Firstpage :
2106
Lastpage :
2113
Abstract :
For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and used this database as training and testing examples to learn a model of saliency based on low-, middle- and high-level image features. This large database of eye tracking data is publicly available with this paper.
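The learned-saliency setup summarized in the abstract (per-pixel feature vectors labeled by recorded fixations, used to train a classifier whose score serves as a saliency map) can be sketched minimally as follows. This is not the authors' code; the feature matrix here is a synthetic stand-in, and a plain logistic-regression classifier is used for illustration in place of the paper's actual learner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for real data: each row is one pixel's
# low-, middle- and high-level feature responses.
n_pixels, n_features = 1000, 8
X = rng.normal(size=(n_pixels, n_features))

# Labels from eye tracking: 1 = fixated pixel, 0 = not fixated.
# Here we fabricate a linearly separable labeling for the sketch.
w_true = rng.normal(size=n_features)
y = (X @ w_true > 0).astype(float)

# Train a logistic-regression classifier by gradient descent.
w = np.zeros(n_features)
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))          # predicted fixation probability
    w -= lr * (X.T @ (p - y)) / n_pixels        # average gradient step

# The classifier's score per pixel is the predicted saliency map.
saliency = X @ w
accuracy = ((saliency > 0) == (y == 1)).mean()
```

In the real pipeline, `X` would be computed from image features at every pixel of a test image, and `saliency` reshaped back to image dimensions gives the predicted fixation map.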
Keywords :
feature extraction; human computer interaction; tracking; eye tracking data; high-level image features; saliency approaches; top-down image semantics; Application software; Biological system modeling; Biology computing; Computer graphics; Context modeling; Human computer interaction; Image databases; Layout; Predictive models; Spatial databases;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2009 IEEE 12th International Conference on Computer Vision
Conference_Location :
Kyoto
ISSN :
1550-5499
Print_ISBN :
978-1-4244-4420-5
Electronic_ISBN :
1550-5499
Type :
conf
DOI :
10.1109/ICCV.2009.5459462
Filename :
5459462