DocumentCode :
3707398
Title :
A new unsupervised model of action recognition
Author :
Dan Wang;Qing Shao;Xiaoqiang Li
Author_Institution :
School of Computer Engineering and Science, Shanghai University, Shanghai, China
fYear :
2015
Firstpage :
1160
Lastpage :
1164
Abstract :
Hand-crafted feature descriptors such as blob detectors (SIFT) and edge-gradient descriptors (HOG) are widely used in action recognition, but they are not suitable for all kinds of videos and they increase manual intervention. Deep unsupervised learning methods, in turn, require extensive parameter tuning and many training iterations. This paper proposes a new action recognition model that combines hand-crafted spatio-temporal interest points with unsupervised descriptors. The model uses STIP to extract spatio-temporal interest points and an improved K-means as the unsupervised learning method to build the descriptor. This K-means-based unsupervised descriptor achieves higher accuracy than hand-crafted descriptors and requires less training time than multi-layer unsupervised learning methods. Moreover, we update the BoF model in the recognition framework by constructing a local vocabulary for each category. Experimental results indicate that the proposed framework performs well.
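The pipeline described in the abstract (interest-point descriptors, a K-means vocabulary, and a BoF histogram per video) can be sketched roughly as below. This is an illustrative sketch only: the paper's "improved K-means" variant and its STIP parameters are not specified here, so plain K-means and random stand-in patches are used in their place.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    # Plain Lloyd's k-means: an illustrative stand-in for the paper's
    # improved K-means used to learn the unsupervised descriptor/vocabulary.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each descriptor to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bof_histogram(X, centers):
    # Quantize descriptors against the vocabulary and build a
    # normalized bag-of-features histogram for the video.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Toy stand-in for patches around STIP detections in one video
# (real input would be spatio-temporal cuboids from the detector).
rng = np.random.default_rng(1)
patches = rng.normal(size=(200, 32))
vocab = kmeans(patches, k=8)        # a per-category "local vocabulary"
h = bof_histogram(patches, vocab)   # BoF representation of the video
```

With per-category local vocabularies, one such histogram would be computed against each category's vocabulary and the results fed to a classifier.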
Keywords :
"Feature extraction","Videos","Dictionaries","Computational modeling","Learning systems","Training","Vocabulary"
Publisher :
ieee
Conference_Titel :
2015 IEEE International Conference on Image Processing (ICIP)
Type :
conf
DOI :
10.1109/ICIP.2015.7350982
Filename :
7350982