Title :
Boosting bottom-up and top-down visual features for saliency estimation
Author_Institution :
Dept. of Comput. Sci., Univ. of Southern California, Los Angeles, CA, USA
Abstract :
Despite significant recent progress, the best available visual saliency models still lag behind human performance in predicting eye fixations during free viewing of natural scenes. The majority of models are based on low-level visual features, and the importance of top-down factors has not yet been fully explored or modeled. Here, we combine low-level features such as orientation, color, and intensity, along with saliency maps of the previous best bottom-up models, with top-down cognitive visual features (e.g., faces, humans, and cars), and learn a direct mapping from those features to eye fixations using regression, SVM, and AdaBoost classifiers. Through extensive experiments over three benchmark eye-tracking datasets using three popular evaluation scores, we show that our boosting model outperforms 27 state-of-the-art models and is so far the closest model to human accuracy in fixation prediction. Furthermore, our model successfully detects the most salient object in a scene without sophisticated image processing such as region segmentation.
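The abstract describes stacking per-pixel feature channels and learning a mapping from features to fixations. A minimal illustrative sketch of that pipeline is shown below; it is not the paper's implementation. The feature names, the toy random data, and the hand-rolled logistic-regression learner (standing in for the regression/SVM/AdaBoost classifiers the paper actually uses) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-pixel feature maps (H x W): low-level channels,
# a bottom-up saliency map, and a top-down detector map. All names and
# values here are illustrative, not the paper's actual features.
H, W = 32, 32
feature_maps = {
    "orientation": rng.random((H, W)),
    "color": rng.random((H, W)),
    "intensity": rng.random((H, W)),
    "bottom_up_saliency": rng.random((H, W)),
    "face_detector": rng.random((H, W)),
}

# Stack the channels into an (H*W, D) design matrix, one row per pixel.
X = np.stack([m.ravel() for m in feature_maps.values()], axis=1)

# Toy binary labels: 1 for "fixated" pixels, 0 otherwise (random here;
# in the paper these come from recorded human eye fixations).
y = (rng.random(H * W) < 0.1).astype(float)

def train_logistic(X, y, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression -- an illustrative
    stand-in for the regression/SVM/AdaBoost learners in the paper."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted fixation prob.
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * np.mean(p - y)                # gradient step on bias
    return w, b

w, b = train_logistic(X, y)

# The learned per-pixel fixation probability, reshaped back into a map,
# plays the role of the predicted saliency map.
saliency = (1.0 / (1.0 + np.exp(-(X @ w + b)))).reshape(H, W)
```

The key design point the sketch mirrors is that fixation prediction is cast as per-pixel supervised classification over a stacked feature vector, so any standard learner can be plugged in.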
Keywords :
feature extraction; image segmentation; learning (artificial intelligence); object tracking; regression analysis; support vector machines; AdaBoost classifier; SVM; bottom-up visual features; direct mapping; eye fixations; fixation prediction; human performance; image processing; low-level visual features; natural scenes; region segmentation; regression; saliency estimation; saliency maps; top-down cognitive visual features; top-down visual features; visual saliency model; Computational modeling; Feature extraction; Humans; Image color analysis; Predictive models; Support vector machines; Visualization;
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on
Conference_Location :
Providence, RI
Print_ISBN :
978-1-4673-1226-4
Electronic_ISBN :
1063-6919
DOI :
10.1109/CVPR.2012.6247706