DocumentCode
2446192
Title
Multi-feature Fusion for Video Object Tracking
Author
Song, Yuqing; Yue, Dongpeng
Author_Institution
Sch. of Automotive & Transp., Tianjin Univ. of Technol. & Educ., Tianjin, China
fYear
2012
fDate
1-3 Nov. 2012
Firstpage
33
Lastpage
36
Abstract
Reliance on a single feature, such as color or motion, is a main reason why most tracking algorithms are less robust than expected; fusing multiple features describes the object more fully. In this paper we introduce a graph-grammar-based method to fuse low-level features and apply it to object tracking. The tracking algorithm consists of two phases: key point tracking and tracking by graph grammar rules. Key points are computed from salient level set components, and the key points, together with their colors and tangent directions, are fed to a Kalman filter for tracking. Graph grammar rules are then used to dynamically examine and adjust the tracking procedure to keep it robust.
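To make the first phase concrete, the sketch below shows a per-key-point constant-velocity Kalman filter in which color and tangent direction gate the data association before the position update. This is only an illustration of the kind of filtering the abstract describes, not the authors' implementation; the state layout, noise covariances, and matching thresholds are all assumptions.

```python
# Minimal sketch (not the paper's code): one Kalman track per key point.
# Position/velocity form the state; colour and tangent direction are kept
# as appearance features and used only to match detections to tracks.
import numpy as np

class KeypointTrack:
    def __init__(self, x, y, color, tangent, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])        # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                       # state covariance (assumed)
        self.F = np.array([[1, 0, dt, 0],               # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                # only position is observed
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                       # process noise (assumed)
        self.R = np.eye(2) * 1.0                        # measurement noise (assumed)
        self.color = np.asarray(color, dtype=float)     # appearance features for
        self.tangent = float(tangent)                   # association, not filtering

    def predict(self):
        # Propagate the state one frame ahead.
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]                           # predicted key point position

    def update(self, z):
        # Standard Kalman correction with a matched detection z = [x, y].
        y = np.asarray(z, dtype=float) - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def matches(self, color, tangent, color_tol=30.0, angle_tol=0.5):
        # Appearance gate: accept a candidate detection only if its colour and
        # tangent direction are close to the track's (thresholds are assumed).
        return (np.linalg.norm(self.color - np.asarray(color, dtype=float)) < color_tol
                and abs(self.tangent - float(tangent)) < angle_tol)
```

In this sketch the second phase of the method (graph grammar rules that examine and correct the tracking result) would operate on top of these per-key-point tracks; it is not modeled here.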
Keywords
Kalman filters; feature extraction; graph grammars; image colour analysis; object tracking; sensor fusion; video signal processing; Kalman filter; feature tracking; graph grammar; key point tracking; multifeature fusion; salient level set components; video object tracking; Face; Feature extraction; Grammar; Shape; Target tracking; graph grammar; multi-feature fusion; object tracking; semantics based tracking
fLanguage
English
Publisher
IEEE
Conference_Titel
2012 Fifth International Conference on Intelligent Networks and Intelligent Systems (ICINIS)
Conference_Location
Tianjin
Print_ISBN
978-1-4673-3083-1
Type
conf
DOI
10.1109/ICINIS.2012.56
Filename
6376478