DocumentCode :
3625648
Title :
Dynamic Multimodal Fusion in Video Search
Author :
Lexing Xie;Apostol Natsev;Jelena Tesic
Author_Institution :
IBM T. J. Watson Research Center, Hawthorne, NY
fYear :
2007
fDate :
7/1/2007 12:00:00 AM
Firstpage :
1499
Lastpage :
1502
Abstract :
We propose effective multimodal fusion strategies for video search. Multimodal search is a widely applicable information-retrieval problem, and fusion strategies are essential for a system to utilize all available retrieval experts and boost performance. Prior work has focused on hard and soft modeling of query classes and on learning weights for each class, while the class partition is either manually defined or learned from data, yet remains insensitive to the test query. We propose a query-dependent fusion strategy that dynamically generates a class from the training queries closest to the test query, based on light-weight query features derived from semantic analysis of the query text. A set of optimal weights is then learned on the dynamic class, aiming to model both co-occurring query features and unusual test queries. Used in conjunction with the rest of our multimodal retrieval system, dynamic query classes compare favorably with hard and soft query classes, and the system improves upon the best automatic search runs of TRECVID05 and TRECVID06 by 34% and 8%, respectively.
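The dynamic-class idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature representation, the choice of cosine similarity, the neighborhood size `k`, and averaging the neighbors' learned per-expert weights are all assumptions made for the sake of a runnable example.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length query-feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def dynamic_fusion_weights(test_feat, train_feats, train_weights, k=3):
    # Build a "dynamic class" of the k training queries whose light-weight
    # query features are closest to the test query, then (as an assumed
    # simplification) average their learned per-expert fusion weights.
    ranked = sorted(range(len(train_feats)),
                    key=lambda i: cosine(test_feat, train_feats[i]),
                    reverse=True)
    members = ranked[:k]
    n_experts = len(train_weights[0])
    return [sum(train_weights[i][e] for i in members) / len(members)
            for e in range(n_experts)]

def fuse(expert_scores, weights):
    # Weighted linear fusion: expert_scores is a list of per-expert score
    # lists (one score per video shot); returns one fused score per shot.
    return [sum(w * scores[j] for w, scores in zip(weights, expert_scores))
            for j in range(len(expert_scores[0]))]
```

A test query whose features match one training query inherits weights dominated by that query's learned expert mix, so the fusion adapts per query instead of using a single global weighting.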
Keywords :
"Information retrieval","Testing","System performance","Government","Engines","Tellurium","Fusion power generation","Web search","Metasearch","Multimedia communication"
Publisher :
ieee
Conference_Titel :
Multimedia and Expo, 2007 IEEE International Conference on
ISSN :
1945-7871
Print_ISBN :
1-4244-1016-9
Electronic_ISBN :
1945-788X
Type :
conf
DOI :
10.1109/ICME.2007.4284946
Filename :
4284946