• DocumentCode
    249533
  • Title
    Learning multi-scale sparse representation for visual tracking
  • Author
    Kang, Zhengjian; Wong, Edward K.
  • Author_Institution
    New York Univ., New York, NY, USA
  • fYear
    2014
  • fDate
    27-30 Oct. 2014
  • Firstpage
    4897
  • Lastpage
    4901
  • Abstract
    We present a novel algorithm for learning a multi-scale sparse representation for visual tracking. In our method, sparse codes with max pooling form a multi-scale representation that integrates spatial configuration over patches of different sizes. Unlike other sparse representation methods, our method uses both holistic and local descriptors. In this hybrid framework, we formulate a new confidence measure that combines generative and discriminative confidence scores. We also devise an efficient method to update templates for adaptation to appearance changes. We demonstrate the effectiveness of our method through experiments and show that it outperforms other state-of-the-art tracking algorithms.
  • Keywords
    image coding; image representation; learning (artificial intelligence); object tracking; confidence measure; discriminative confidence scores; generative confidence scores; holistic descriptor; hybrid framework; local descriptor; max pooling; multiscale sparse representation learning; sparse codes; spatial configuration; visual tracking; Adaptation models; Histograms; Lighting; Robustness; Target tracking; Visualization; Multi-scale sparse representation; max pooling; visual tracking;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2014 IEEE International Conference on Image Processing (ICIP)
  • Conference_Location
    Paris
  • Type
    conf
  • DOI
    10.1109/ICIP.2014.7025992
  • Filename
    7025992
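
For a concrete picture of the coding-and-pooling step summarized in the abstract, here is a minimal NumPy sketch of multi-scale sparse coding with max pooling over patch codes, plus a simple score fusion. It is an illustration under stated assumptions, not the paper's implementation: the ISTA solver, the per-scale dictionaries, the absolute-value pooling, and the linear confidence fusion (hybrid_confidence, weight eta) are generic placeholders introduced here for exposition.

    import numpy as np

    def sparse_code(x, D, lam=0.1, n_iter=100):
        # Toy ISTA solver for min_a ||x - D a||_2^2 + lam * ||a||_1.
        # Stand-in for whatever sparse coder the paper actually uses.
        a = np.zeros(D.shape[1])
        step = 1.0 / (np.linalg.norm(D, 2) ** 2)  # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            a = a - step * grad
            a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)  # soft-thresholding
        return a

    def multiscale_max_pooled_code(patches_by_scale, dictionaries, lam=0.1):
        # Encode the patches at each scale, max-pool the (absolute) coefficients
        # within a scale, and concatenate the pooled vectors across scales.
        pooled = []
        for scale, patches in patches_by_scale.items():
            D = dictionaries[scale]
            codes = np.stack([sparse_code(p, D, lam) for p in patches])  # (n_patches, n_atoms)
            pooled.append(np.abs(codes).max(axis=0))                     # max pooling over patches
        return np.concatenate(pooled)

    def hybrid_confidence(gen_score, disc_score, eta=0.5):
        # Generic linear fusion of a generative (reconstruction-based) score and a
        # discriminative (classifier-based) score; the paper's actual measure is not reproduced here.
        return eta * gen_score + (1.0 - eta) * disc_score

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        dictionaries = {8: rng.standard_normal((64, 50)),    # 8x8 patches -> 64-dim features
                        16: rng.standard_normal((256, 50))}  # 16x16 patches -> 256-dim features
        patches = {8: [rng.standard_normal(64) for _ in range(9)],
                   16: [rng.standard_normal(256) for _ in range(4)]}
        feature = multiscale_max_pooled_code(patches, dictionaries)
        print(feature.shape)  # (100,): two 50-dim pooled code vectors concatenated

The usage block builds one pooled code vector per scale and concatenates them, which mirrors the "spatial configuration over patches of different sizes" described in the abstract; dictionary sizes, patch counts, and the fusion weight are arbitrary example values.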