Title :
Correspondence-Free Dictionary Learning for Cross-View Action Recognition
Author :
Fan Zhu ; Ling Shao
Author_Institution :
Dept. of Electron. & Electr. Eng., Univ. of Sheffield, Sheffield, UK
Abstract :
In this paper, we propose a novel unsupervised approach to learning sparse action representations for cross-view action recognition. Unlike previous cross-view action recognition methods, this approach requires neither target-view label information nor correspondence annotations. Low-level dense trajectory action features are first coded according to their feature-space localities within the same view by projecting each descriptor onto its local coordinate system under locality constraints. Actions across each pair of views are then decomposed into sparse linear combinations of basis atoms, i.e., dictionary elements, which are learned to reconstruct the original data while simultaneously forcing similar actions toward identical representations in an unsupervised manner. Consequently, cross-view knowledge is retained in the learned basis atoms, so that high-level representations of actions from both views can be treated as sharing the same data distribution and fed directly into the classifier. The proposed approach outperforms state-of-the-art methods on the multi-view IXMAS data set and establishes a new experimental setting that is closer to real-world applications.
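The within-view coding step described above can be sketched as follows. This is a minimal illustration in the style of locality-constrained linear coding (restricting each code to the descriptor's k nearest atoms and solving a small sum-to-one least-squares problem), not the authors' exact implementation; the function name and the parameters `k` and `beta` are assumptions for the sketch.

```python
import numpy as np

def llc_code(x, D, k=5, beta=1e-4):
    """Locality-constrained code for one descriptor (illustrative sketch).

    x: (d,) low-level descriptor; D: (m, d) codebook of m basis atoms.
    Returns an (m,) code that is nonzero only on the k nearest atoms,
    i.e., the descriptor is projected onto its local coordinate system.
    """
    dists = np.linalg.norm(D - x, axis=1)
    idx = np.argsort(dists)[:k]            # k nearest atoms = local coordinate system
    z = D[idx] - x                         # shift atoms to the descriptor's origin
    C = z @ z.T                            # local covariance of the shifted atoms
    C += beta * np.trace(C) * np.eye(k)    # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(k))     # constrained least-squares solution
    w /= w.sum()                           # enforce the sum-to-one constraint
    code = np.zeros(D.shape[0])
    code[idx] = w
    return code
```

Stacking such codes over all trajectory descriptors of an action yields the locality-coded representation that the cross-view dictionary learning stage then decomposes over the shared basis atoms.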
Keywords :
gesture recognition; image representation; unsupervised learning; correspondence-free dictionary learning; cross-view action recognition; low-level dense trajectory action features; multiview IXMAS data set; sparse action representations; unsupervised approach; Cameras; Dictionaries; Encoding; Equations; Feature extraction; Optimization; Trajectory;
Conference_Titel :
2014 22nd International Conference on Pattern Recognition (ICPR)
Conference_Location :
Stockholm
DOI :
10.1109/ICPR.2014.774