DocumentCode :
2293660
Title :
Multimodal partial estimates fusion
Author :
Xu, Jiang ; Yuan, Junsong ; Wu, Ying
Author_Institution :
Department of Electrical Engineering and Computer Science, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
fYear :
2009
fDate :
Sept. 29 - Oct. 2, 2009
Firstpage :
2177
Lastpage :
2184
Abstract :
Fusing partial estimates is a critical and common problem in many computer vision tasks, such as part-based detection and tracking. It generally becomes complicated and intractable when there are a large number of multimodal partial estimates, so an effective and scalable fusion method is needed to integrate them. This paper presents a novel and effective approach to fusing multimodal partial estimates in a principled way. In this approach, fusion is cast as a computational geometry problem of finding the minimum-volume orthotope, and an effective and scalable branch-and-bound search algorithm is designed to obtain the globally optimal solution. Experiments on tracking articulated objects and occluded objects demonstrate the effectiveness of the proposed approach.
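Illustration (not part of the original record): the abstract frames fusion as finding a minimum-volume orthotope (axis-aligned box) over multimodal partial estimates and solving it by branch and bound. The following Python sketch is a minimal, hypothetical illustration of that idea, assuming each partial estimator proposes a small set of candidate modes and we select one mode per estimator so that the bounding orthotope of the selection has minimum volume; it is not the authors' implementation, and all names and data are invented for illustration.

    # Hypothetical sketch: select one candidate mode from each partial estimate so
    # that the axis-aligned bounding box (orthotope) of the selected modes has
    # minimum volume, using depth-first branch and bound. Not the paper's code.
    import math

    def orthotope_volume(points):
        # Volume of the axis-aligned bounding box of a non-empty set of d-D points.
        dims = len(points[0])
        vol = 1.0
        for d in range(dims):
            lo = min(p[d] for p in points)
            hi = max(p[d] for p in points)
            vol *= (hi - lo)
        return vol

    def fuse_partial_estimates(partial_estimates):
        # partial_estimates: list of lists; each inner list holds candidate modes
        # (tuples of equal dimension) produced by one partial estimator.
        # Returns (best_volume, best_selection).
        best = {"vol": math.inf, "sel": None}

        def recurse(idx, chosen):
            if chosen:
                # Lower bound: the box spanned by the modes chosen so far can only
                # grow as more modes are added, so prune if it is already no better.
                if orthotope_volume(chosen) >= best["vol"]:
                    return
            if idx == len(partial_estimates):
                best["vol"] = orthotope_volume(chosen)
                best["sel"] = list(chosen)
                return
            for mode in partial_estimates[idx]:
                recurse(idx + 1, chosen + [mode])

        recurse(0, [])
        return best["vol"], best["sel"]

    if __name__ == "__main__":
        # Three hypothetical partial estimators, each with two candidate (x, y) modes.
        estimates = [
            [(1.0, 1.0), (5.0, 5.0)],
            [(1.2, 0.8), (4.8, 5.3)],
            [(0.9, 1.1), (5.1, 4.9)],
        ]
        vol, selection = fuse_partial_estimates(estimates)
        print("minimum orthotope volume:", vol)
        print("selected modes:", selection)

The pruning step exploits the fact that the orthotope volume is monotonically nondecreasing as modes are added, which is the kind of bound that makes a branch-and-bound search over many multimodal candidates tractable.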
Keywords :
Algorithm design and analysis; Computational geometry; Computer vision; Concrete; Detectors; Fuses; Motion detection; Multimodal sensors; Object detection; Target tracking;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Computer Vision, 2009 IEEE 12th International Conference on
Conference_Location :
Kyoto
ISSN :
1550-5499
Print_ISBN :
978-1-4244-4420-5
Electronic_ISBN :
1550-5499
Type :
conf
DOI :
10.1109/ICCV.2009.5459475
Filename :
5459475