DocumentCode :
3770323
Title :
An effective view and time-invariant action recognition method based on depth videos
Author :
Zhi Liu;Xin Feng;Yingli Tian
Author_Institution :
College of Computer Science and Engineering, Chongqing University of Technology, Chongqing, 400050, China
fYear :
2015
Firstpage :
1
Lastpage :
4
Abstract :
Little progress has been achieved in recent years in hand-crafted-feature-based human action recognition (HAR) for RGB videos. The emergence of low-cost depth cameras provides additional information for action recognition. Compared to RGB videos, depth video sequences are less sensitive to lighting changes and more discriminative in many vision tasks such as segmentation and activity recognition. In this paper, we propose an effective and straightforward HAR method that uses the skeleton joint information of the depth sequence. First, we calculate three feature vectors that capture angle and position information between joints. Then, the obtained vectors are used as the inputs of three separate support vector machine (SVM) classifiers. Finally, action recognition is conducted by fusing the SVM classification results. Our features are view-invariant because the extracted vectors contain only angle and normalized position information based on joint coordinates. By normalizing action videos of different temporal lengths to a fixed size using interpolation, the extracted features have the same dimension for different videos while still preserving the principal movement patterns, which makes the proposed method time-invariant. Experimental results demonstrate that our method achieves results comparable to state-of-the-art methods on the UTKinect-Action3D dataset while being more efficient and simpler.
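The feature pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes per-frame skeleton joints as (J, 3) coordinate arrays, computes inter-joint angles and root-normalized positions (making the features view-invariant up to rotation about the chosen joint triples), and linearly interpolates per-frame feature sequences to a fixed length (the time-invariance step). The joint triples, root index, and target length are hypothetical parameters; the paper's exact feature definitions and the three-SVM fusion stage are omitted.

```python
import numpy as np

def joint_angles(frame, triples):
    """Angles (radians) at joint b for each (a, b, c) triple; frame is (J, 3)."""
    angles = []
    for a, b, c in triples:
        v1 = frame[a] - frame[b]
        v2 = frame[c] - frame[b]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(angles)

def normalized_positions(frame, root=0):
    """Joint coordinates translated so `root` (e.g. hip center) is the origin,
    scaled by the maximum joint distance; flattened to one vector."""
    centered = frame - frame[root]
    scale = np.max(np.linalg.norm(centered, axis=1)) + 1e-8
    return (centered / scale).ravel()

def resample(seq, target_len):
    """Linearly interpolate a (frames, dim) feature sequence to target_len
    frames so every video yields a fixed-dimension descriptor."""
    frames, dim = seq.shape
    src = np.linspace(0.0, 1.0, frames)
    dst = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(dst, src, seq[:, d]) for d in range(dim)], axis=1)
```

In a full pipeline, each resampled feature sequence would be flattened into one fixed-length vector per video and fed to its own SVM, with the three classifiers' outputs fused (e.g. by score averaging or majority vote) for the final label.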
Keywords :
"Feature extraction","Videos","Skeleton","Hip","Three-dimensional displays","Cameras","Video sequences"
Publisher :
ieee
Conference_Titel :
Visual Communications and Image Processing (VCIP), 2015
Type :
conf
DOI :
10.1109/VCIP.2015.7457931
Filename :
7457931