Title :
Learning a temporally invariant representation for visual tracking
Author :
Chao Ma;Xiaokang Yang;Chongyang Zhang;Ming-Hsuan Yang
Author_Institution :
Shanghai Jiao Tong University, China
Abstract :
In this paper, we propose to learn temporally invariant features from a large number of image sequences to represent objects for visual tracking. These features, trained on a convolutional neural network with temporal invariance constraints, are robust to diverse motion transformations. We employ linear correlation filters to encode the appearance templates of targets and perform tracking by searching for the maximum filter response at each frame. The learned filters are updated online and adapt to significant appearance changes during tracking. Extensive experimental results on challenging sequences show that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness.
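The correlation-filter tracking step described above can be sketched as follows. This is a minimal illustrative example (not the authors' implementation): a MOSSE-style filter is learned in closed form in the Fourier domain against a desired Gaussian response, and detection locates the peak of the filtered response map. The function names, the regularization weight `lam`, and the Gaussian width `sigma` are illustrative assumptions.

```python
import numpy as np

def gaussian_peak(shape, sigma=2.0):
    # Desired response map: a Gaussian centered on the target.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, desired_response, lam=1e-2):
    # Closed-form correlation filter in the Fourier domain (MOSSE-style):
    # conj(H) = (G * conj(F)) / (F * conj(F) + lam), lam regularizes.
    F = np.fft.fft2(patch)
    G = np.fft.fft2(desired_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H_conj, patch):
    # Correlate the learned filter with a new patch; the peak of the
    # real-valued response map gives the estimated target location.
    Z = np.fft.fft2(patch)
    response = np.real(np.fft.ifft2(H_conj * Z))
    return np.unravel_index(np.argmax(response), response.shape)

# Train on a patch and verify the response peaks at the patch center.
np.random.seed(0)
patch = np.random.rand(32, 32)
H_conj = train_filter(patch, gaussian_peak(patch.shape))
peak = detect(H_conj, patch)  # expected near (16, 16)
```

In practice, as the abstract notes, such filters are updated online (e.g., by running averages of the numerator and denominator) so the template adapts to appearance changes.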
Keywords :
Correlation; Target tracking; Visualization; Feature extraction; Computational modeling; Adaptation models
Conference_Title :
2015 IEEE International Conference on Image Processing (ICIP)
DOI :
10.1109/ICIP.2015.7350921