DocumentCode :
3707764
Title :
Multi-level action detection via learning latent structure
Author :
Behzad Bozorgtabar;Roland Goecke
Author_Institution :
Vision &
fYear :
2015
Firstpage :
3004
Lastpage :
3008
Abstract :
Detecting actions in videos remains a demanding task due to large intra-class variation caused by varying pose, motion and scale. Conventional approaches use a Bag-of-Words model, pooling space-time motion features and then learning a classifier. However, since informative body-part motions appear only in specific regions of the body, these methods have limited capability. In this paper, we seek to learn a model of the interaction among regions of interest via a graph structure. We first discover several space-time video segments representing persistent moving body parts observed sparsely in the video. Then, by learning the hidden graph structure (a subset of the graph), we identify both spatial and temporal relations between subsets of these segments. To capture more discriminative motion patterns and to handle the different interactions between body parts, from simple to composite actions, we present a multi-level action model representation. For action classification, the classifier learned for each action model labels the test video according to the action model that yields the highest probability score. Experiments on challenging datasets containing complex motions and dynamic backgrounds, such as MSR II and UCF Sports, demonstrate the effectiveness of the proposed approach, which outperforms state-of-the-art methods in this context.
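The classification rule described in the abstract (each action model has its own learned classifier, and the test video receives the label of the model with the highest probability score) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function and variable names are assumptions.

```python
def classify(video_features, model_scorers):
    """Label a test video by the action model with the highest score.

    model_scorers: dict mapping an action label to a scoring function
    that returns the probability score of that action model for the
    given video features (each scorer stands in for a learned
    per-model classifier).
    """
    best_label, best_score = None, float("-inf")
    for label, scorer in model_scorers.items():
        score = scorer(video_features)  # probability score under this model
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```

For example, with three hypothetical action models scoring a video at 0.2, 0.9 and 0.5, the rule returns the label of the 0.9-scoring model.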
Keywords :
"Videos","Motion segmentation","Feature extraction","Deformable models","Correlation","Tracking","Trajectory"
Publisher :
ieee
Conference_Titel :
2015 IEEE International Conference on Image Processing (ICIP)
Type :
conf
DOI :
10.1109/ICIP.2015.7351354
Filename :
7351354