DocumentCode :
1659232
Title :
Robust visual tracking via part-based sparsity model
Author :
Pingyang Dai ; Yanlong Luo ; Weisheng Liu ; Cuihua Li ; Yi Xie
Author_Institution :
Comput. Sci. Dept., Xiamen Univ., Xiamen, China
fYear :
2013
Firstpage :
1803
Lastpage :
1806
Abstract :
Sparse representation has been widely used in many areas, including visual tracking. Part-based representations perform well against occlusion by using non-holistic templates. This paper combines the two and proposes a robust method that tracks an object in a video sequence with a part-based sparsity model. In the proposed model, an object is represented by image patches, and the candidates for these patches are sparsely represented in the space spanned by the patch templates and trivial templates. The part-based method takes the spatial information of each patch into account by using the vote maps of multiple patches. Furthermore, the update scheme dynamically maintains representative templates for each part, so the tracker can effectively handle appearance changes and heavy occlusion. Extensive experimental results on public benchmark videos demonstrate that the proposed tracking method outperforms many state-of-the-art algorithms.
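The abstract describes representing each patch candidate sparsely over a dictionary of patch templates augmented with trivial (identity) templates, so that occluded pixels load onto the trivial part. A minimal sketch of that idea, solving the resulting lasso problem with plain ISTA (the paper's exact solver, template sizes, and parameters are not given here; `lam`, `n_iter`, and the toy data are assumptions for illustration):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(y, T, lam=0.01, n_iter=2000):
    """Sparsely code patch y over D = [T, I]: patch templates plus
    trivial templates. Returns (template coeffs a, trivial coeffs e)."""
    d = y.shape[0]
    D = np.hstack([T, np.eye(d)])
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):                # ISTA: gradient step + shrinkage
        grad = D.T @ (D @ c - y)
        c = soft_threshold(c - grad / L, lam / L)
    return c[:T.shape[1]], c[T.shape[1]:]

# Toy usage: a patch equal to one template, with one "occluded" pixel.
rng = np.random.default_rng(0)
T = rng.random((8, 3))
T /= np.linalg.norm(T, axis=0)             # normalized patch templates
y = T[:, 0].copy()
y[5] += 2.0                                # occlusion spike at pixel 5
a, e = sparse_code(y, T)
# The spike is absorbed by the trivial template for pixel 5,
# so e peaks there while a reconstructs the unoccluded patch.
```

In this formulation the magnitude of the trivial coefficients `e` indicates which pixels of a patch are occluded, which is what lets the part-based tracker down-weight occluded parts when voting.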
Keywords :
compressed sensing; image sequences; object tracking; image patches; part based sparsity model; patch templates; robust object tracking method; robust visual tracking; sparse representation; trivial templates; video sequence; Boosting; Computer vision; Image reconstruction; Noise; Robustness; Target tracking; Visualization; Visual tracking; part-based; sparsity model;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on
Conference_Location :
Vancouver, BC
ISSN :
1520-6149
Type :
conf
DOI :
10.1109/ICASSP.2013.6637963
Filename :
6637963