DocumentCode
2448619
Title
Generating semantic visual templates for video databases
Author
Chen, William ; Chang, Shih-Fu
Author_Institution
Dept. of Electr. Eng., Columbia Univ., New York, NY, USA
Volume
3
fYear
2000
fDate
2000
Firstpage
1337
Abstract
We describe a system that generates semantic visual templates (SVTs) for video databases. From a single query sketch, new queries are automatically generated, each representing a different view of the initial sketch. The combination of the original and new queries forms a large set of potential queries for a content-based video retrieval system. Through Bayesian relevance feedback, the user narrows these choices to an exemplar set. This exemplar set, or SVT, represents personalized views of a concept and an effective set of queries for retrieving a general category of images and videos. We have generated SVTs for several classes of videos, including sunsets, high jumpers, and slalom skiers. Our experiments show that the user can quickly converge upon SVTs with optimal performance, achieving over 85% of the precision obtained with icons chosen by exhaustive search.
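The sketch below is a minimal, hypothetical illustration of the kind of Bayesian relevance-feedback update the abstract describes: a prior over candidate query templates is reweighted by user feedback, and the highest-probability candidates are kept as the exemplar set. The function name, the per-template likelihood values, and the top-2 cutoff are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def update_posterior(prior, likelihood_relevant, feedback):
    """Update template probabilities after one round of user feedback.

    prior               -- current probability of each candidate template
    likelihood_relevant -- assumed P(user marks results relevant | template i)
    feedback            -- 1 if the user marked the round relevant, else 0
    """
    likelihood = likelihood_relevant if feedback else 1.0 - likelihood_relevant
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

# Example: four candidate views generated from one sketch, uniform prior.
prior = np.full(4, 0.25)
likelihood_relevant = np.array([0.8, 0.6, 0.3, 0.1])  # assumed values

# One feedback round in which the user marks the retrieved results relevant.
posterior = update_posterior(prior, likelihood_relevant, feedback=1)

# Keep the top-2 candidates as the exemplar set (the SVT) for this concept.
exemplars = np.argsort(posterior)[::-1][:2]
print(posterior, exemplars)
```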
Keywords
content-based retrieval; relevance feedback; video databases; video signal processing; Bayesian relevance feedback; content-based video retrieval system; exemplar set; icons; query sketch; semantic visual template generation; Bayesian methods; Bridges; Content based retrieval; Feedback; Image converters; Image databases; Image retrieval; Information retrieval; Search engines; Visual databases
fLanguage
English
Publisher
IEEE
Conference_Titel
2000 IEEE International Conference on Multimedia and Expo (ICME 2000)
Conference_Location
New York, NY
Print_ISBN
0-7803-6536-4
Type
conf
DOI
10.1109/ICME.2000.871013
Filename
871013