DocumentCode
2963364
Title
Visual context representation using a combination of feature-driven and object-driven mechanisms
Author
Miao, Jun ; Duan, Lijuan ; Qing, Laiyun ; Chen, Xilin ; Gao, Wen
Author_Institution
Key Laboratory of Intelligent Information Processing, Chinese Academy of Sciences, Beijing
fYear
2008
fDate
1-8 June 2008
Firstpage
3800
Lastpage
3805
Abstract
Visual context between objects is an important cue for perceiving object positions, and how to represent this context effectively is a key research issue. Some earlier work introduced task-driven methods for object perception, which led to a large coding quantity. This paper proposes an approach that incorporates a feature-driven mechanism into object-driven context representation for object localization. As an example, the paper discusses how a neuronal network encodes the visual context between feature-salient regions and human eye centers with as little coding quantity as possible. A group of experiments on the efficiency of visual context coding and on object searching is analyzed and discussed, showing that the proposed method decreases the coding quantity and effectively improves object-searching accuracy.
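The following is a minimal illustrative sketch of the general idea only, not the paper's neuronal-network coding scheme: a feature-driven step picks salient regions, the object-driven context is stored as offsets from those regions to a target position (e.g., an eye center), and at search time the stored offsets vote for the target location. The gradient-based saliency, the voting scheme, and all function names are assumptions introduced here for illustration (Python/NumPy).

import numpy as np

def salient_points(image, k=20):
    # Feature-driven step (illustrative stand-in): rank pixels by local
    # gradient magnitude and keep the k most salient positions.
    gy, gx = np.gradient(image.astype(float))
    saliency = np.hypot(gx, gy)
    flat = np.argsort(saliency, axis=None)[-k:]
    rows, cols = np.unravel_index(flat, saliency.shape)
    return np.stack([rows, cols], axis=1)          # (k, 2) integer positions

def encode_context(image, target_yx, k=20):
    # Object-driven context (illustrative): store only the offsets from each
    # salient region to the target position, a compact relational code.
    pts = salient_points(image, k)
    return target_yx - pts                          # (k, 2) offset vectors

def locate_target(image, offsets, k=20):
    # Search: each salient point of the new image votes for candidate target
    # positions via the stored offsets; the densest cell wins.
    pts = salient_points(image, k)
    votes = (pts[:, None, :] + offsets[None, :, :]).reshape(-1, 2)
    h, w = image.shape
    keep = (votes[:, 0] >= 0) & (votes[:, 0] < h) & \
           (votes[:, 1] >= 0) & (votes[:, 1] < w)
    votes = votes[keep]
    acc = np.zeros((h, w))
    np.add.at(acc, (votes[:, 0], votes[:, 1]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# Hypothetical usage: learn offsets from one annotated image, then search.
# offsets = encode_context(training_image, np.array([eye_y, eye_x]))
# pred_y, pred_x = locate_target(test_image, offsets)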
Keywords
image coding; neural nets; object detection; feature-driven mechanism; neuronal network; object location; object perception; object-driven mechanism; visual context representation; Neural networks;
fLanguage
English
Publisher
IEEE
Conference_Titel
2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008) (IEEE World Congress on Computational Intelligence)
Conference_Location
Hong Kong
ISSN
1098-7576
Print_ISBN
978-1-4244-1820-6
Electronic_ISBN
1098-7576
Type
conf
DOI
10.1109/IJCNN.2008.4634344
Filename
4634344