DocumentCode :
2540818
Title :
Sparse Embedding Visual Attention Model
Author :
Zhao, Cairong ; Liu, ChuanCai ; Lai, Zhihui ; Sui, Yue ; Li, Zuoyong
Author_Institution :
Sch. of Comput. Sci., Nanjing Univ. of Sci. & Technol., Nanjing, China
fYear :
2009
fDate :
4-6 Nov. 2009
Firstpage :
1
Lastpage :
4
Abstract :
Visual attention (VA), defined as the ability of a biological or artificial visual system to rapidly detect potentially relevant parts of a visual scene, provides a general-purpose solution for low-level feature detection in a visual architecture. Numerous computational models of visual attention have been proposed over the last two decades. In the saliency map of a VA model, it is important to select, for each feature map, a weight that correctly reflects the relative salience among the feature maps. A sparse embedding visual attention (SEVA) model, inspired by sparse representation, is presented. This paper describes a feature saliency index, measured via sparse representation, that adjusts the weight of each feature map in proportion to its average contribution to the saliency map. The proposed visual attention system is evaluated on a variety of scene images. Results show that the SEVA model consistently outperforms the traditional VA model, which is attributed to the adaptive weighting of the feature maps.
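The weighting idea the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the paper derives its feature saliency index from a sparse representation of each feature map, whereas the `sparsity_index` below is a hypothetical stand-in (an L2/L1-norm sparsity ratio) chosen only to show how per-map weights, proportional to a sparsity-based index, combine feature maps into a saliency map.

```python
import numpy as np

def sparsity_index(feature_map):
    """Proxy for a feature saliency index: a peaky (sparse) map
    scores high, a diffuse map scores low. Hypothetical choice --
    the SEVA paper computes its index from sparse representation.
    Returns a value in [1, sqrt(n)] for an n-element map."""
    v = feature_map.ravel().astype(float)
    l1 = np.abs(v).sum()
    if l1 == 0.0:
        return 0.0
    return np.linalg.norm(v) / l1 * np.sqrt(v.size)

def saliency_map(feature_maps):
    """Combine feature maps with weights proportional to each
    map's saliency index (the SEVA weighting idea, sketched)."""
    weights = np.array([sparsity_index(m) for m in feature_maps])
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, feature_maps))

# A map with one strong response vs. a uniform (uninformative) map:
peaky = np.zeros((8, 8)); peaky[3, 3] = 1.0
flat = np.full((8, 8), 1.0 / 64)
S = saliency_map([peaky, flat])
# The peaky map dominates S, so its peak survives in the saliency map.
```

Under this weighting, a feature map with a few strong, localized responses contributes more to the final saliency map than one with a flat, uninformative response profile, which is the behavior the abstract credits for SEVA's improvement over uniform weighting.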
Keywords :
image representation; artificial visual system; biological visual system; feature maps; feature saliency index; low level feature detection; saliency map; sparse embedding visual attention model; sparse representation; visual architecture; Biological system modeling; Brain modeling; Computational modeling; Computer science; Computer vision; Humans; Image segmentation; Layout; Physics; Visual system;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Chinese Conference on Pattern Recognition (CCPR 2009)
Conference_Location :
Nanjing
Print_ISBN :
978-1-4244-4199-0
Type :
conf
DOI :
10.1109/CCPR.2009.5343989
Filename :
5343989