Title :
Margin based likelihood map fusion for target tracking
Author :
Peng, Jing ; Seetharaman, Guna
Author_Institution :
Dept. of Comput. Sci., Montclair State Univ., Montclair, NJ, USA
Abstract :
Visual object recognition and tracking can be formulated as an object-background classification problem. Since combining multi-modal information is known to improve classification, different features are often used to create a set of representations (views) of a pixel or target object. Each representation yields a likelihood that the pixel belongs to the target object or to the scene background. How to combine these views so as to exploit the multi-modal information effectively for classification thus becomes a key issue. We propose a margin-based fusion technique that exploits these heterogeneous features for classification, and hence for tracking. Each representation contributes to the classification according to its learned confidence score (weight). By optimally combining the multi-modal information or evidence, discriminative object and background information is preserved, while ambiguous information is discarded. We provide experimental results that demonstrate its performance against competing techniques.
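To make the fusion idea concrete, the following is a minimal sketch of weighted likelihood-map fusion, assuming simple per-map margin scores as the confidence weights; the feature channels, weighting heuristic, and array shapes are illustrative assumptions, not the authors' actual learning procedure.

```python
# Illustrative sketch only (not the paper's implementation): fuse per-feature
# target-likelihood maps with confidence weights derived from each map's margin.
import numpy as np

def fuse_likelihood_maps(maps, weights):
    """Combine per-feature likelihood maps into one fused map.

    maps    : list of HxW arrays, each giving P(target | pixel) under one feature.
    weights : per-feature confidence scores; normalized here to sum to 1.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    fused = np.zeros_like(maps[0], dtype=float)
    for w, m in zip(weights, maps):
        fused += w * m
    return fused

def margin_weights(maps):
    """Heuristic confidence per map: mean margin between the target likelihood
    and the background likelihood (1 - target). Ambiguous maps get low weight."""
    return [np.abs(2.0 * m - 1.0).mean() for m in maps]

# Usage with three hypothetical feature channels (e.g., color, edges, texture).
rng = np.random.default_rng(0)
maps = [rng.random((64, 64)) for _ in range(3)]
fused = fuse_likelihood_maps(maps, margin_weights(maps))
target_mask = fused > 0.5  # pixel-wise object/background decision
```

In this sketch, maps with confident (high-margin) responses dominate the fused result, while ambiguous maps are down-weighted, mirroring the abstract's goal of preserving discriminative evidence and discarding ambiguous evidence.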
Keywords :
image classification; image fusion; image representation; object recognition; object tracking; probability; target tracking; background information; discriminant object; margin based likelihood map fusion; multimodal information; object-background classification problem; pixel object representation; target object representation; visual object recognition; visual object tracking; Accuracy; Face; Feature extraction; Glass; Image edge detection; Linear discriminant analysis; Target tracking; Classification; Fusion; Large margin;
Conference_Titel :
Geoscience and Remote Sensing Symposium (IGARSS), 2012 IEEE International
Conference_Location :
Munich
Print_ISBN :
978-1-4673-1160-1
Electronic_ISBN :
2153-6996
DOI :
10.1109/IGARSS.2012.6351037