DocumentCode :
247816
Title :
Collaborating frames: Temporally weighted sparse representation for visual tracking
Author :
Soltani-Farani, A. ; Rabiee, H.R. ; Zarezade, A.
Author_Institution :
Dept. of Comput. Eng., Sharif Univ. of Technol., Tehran, Iran
fYear :
2014
fDate :
27-30 Oct. 2014
Firstpage :
456
Lastpage :
460
Abstract :
Sparse representation techniques for visual tracking have rarely taken advantage of the similarity between target objects in consecutive frames. In this paper, the target is divided into disjoint patches, and the sparse representations of corresponding patches in consecutive frames are assumed to be distributed according to a common Laplacian Scale Mixture (LSM) with a shared scale parameter. The target patches collaborate to determine this shared parameter, which in turn encourages smooth temporal variation in their representations. The target's appearance is modeled using a dictionary composed of patch templates. This patchwise treatment allows occluded patches to be detected and excluded when updating the dictionary. Experimental results on six challenging video sequences show superior performance, especially in scenarios with considerable appearance change.
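For illustration only: the shared LSM scale described in the abstract leads, in practice, to a reweighted ℓ1 problem per patch, where coefficients that were active in the same patch of recent frames are penalized less. The Python sketch below is not the authors' implementation; the solver (plain ISTA), the function names (weighted_l1_ista, shared_scale_weights), and the heuristic used to derive weights from previous frames' codes are assumptions made for this example.

import numpy as np

def soft_threshold(v, t):
    # Element-wise soft-thresholding: proximal operator of the weighted l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_l1_ista(D, y, weights, n_iter=200):
    # Solve min_x 0.5*||D x - y||^2 + sum_i weights[i]*|x[i]| with ISTA.
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)     # 1 / Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - step * grad, step * weights)
    return x

def shared_scale_weights(prev_codes, eps=1e-6):
    # Hypothetical shared-scale estimate: l1 weights shrink for atoms that were
    # active in the same patch of earlier frames, encouraging smooth temporal
    # variation of the representation (reweighted l1).
    prev_codes = np.asarray(prev_codes)          # shape (n_frames, n_atoms)
    return 1.0 / (np.abs(prev_codes).mean(axis=0) + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)               # unit-norm patch templates as atoms
    y = D[:, :3] @ np.array([1.0, -0.5, 0.8])    # current patch observation
    prev = np.zeros((2, 128))
    prev[:, :3] = 0.5                            # codes of the same patch in two earlier frames
    x = weighted_l1_ista(D, y, shared_scale_weights(prev))
    print(np.nonzero(np.abs(x) > 1e-3)[0])       # recovered support, expected near [0 1 2]

The weighting here is only one plausible reading of the shared-parameter idea; the paper's actual estimator for the LSM scale and its occlusion handling are not reproduced.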
Keywords :
image representation; image sequences; object tracking; LSM; Laplacian scale mixture; dictionary; frame collaboration; patch template; patchwise treatment; smooth temporal variation; target patch; temporally weighted sparse representation technique; video sequence; visual tracking; Adaptation models; Dictionaries; Laplace equations; Markov processes; Robustness; Target tracking; Visualization; Markov chain; Visual tracking; dictionary; patchwise; reweighted ℓ1
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2014 IEEE International Conference on Image Processing (ICIP)
Conference_Location :
Paris
Type :
conf
DOI :
10.1109/ICIP.2014.7025091
Filename :
7025091