Title of article :
Multimodal Spatiotemporal Feature Map for Dynamic Gesture Recognition from Real Time Video Sequences
Author/Authors :
Reddy, P. S., Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation; Santhosh, C., Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation
From page :
1440
To page :
1448
Abstract :
The utilization of artificial intelligence and computer vision has been extensively explored in the context of human activity and behavior recognition. Numerous researchers have investigated and proposed various techniques for human action recognition (HAR) to accurately identify actions from real-time videos. Among these techniques, convolutional neural networks (CNNs) have emerged as the most effective and widely used for activity recognition. This work primarily focuses on the significance of spatial information in activity/action classification. To identify human actions and behaviors from large video datasets, this paper proposes a two-stream spatial CNN approach. One stream is fed with spatial information from unprocessed RGB frames. The second stream is driven by saliency maps generated with the Graph-Based Visual Saliency (GBVS) method. The outputs of the two spatial streams are combined using sum, max, average, and product feature fusion techniques. The proposed method is evaluated on well-known benchmark human action datasets, such as KTH, UCF101, HMDB51, NTU RGB-D, and G3D, to assess its performance. Promising recognition rates were observed on all datasets.
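The following is a minimal, illustrative sketch of the two-stream spatial fusion idea described in the abstract; it is not the authors' implementation. It assumes two generic 2D CNN backbones (ResNet-18 stand-ins), precomputed GBVS saliency maps supplied as 3-channel images, and hypothetical names such as TwoStreamSpatialFusion.

# Illustrative sketch only (assumptions: equal-length feature vectors from
# both streams; GBVS saliency maps are precomputed per frame).
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamSpatialFusion(nn.Module):
    def __init__(self, num_classes, fusion="sum"):
        super().__init__()
        # Stream 1: raw RGB frames; Stream 2: GBVS saliency maps.
        self.rgb_stream = models.resnet18(weights=None)
        self.sal_stream = models.resnet18(weights=None)
        feat_dim = self.rgb_stream.fc.in_features
        self.rgb_stream.fc = nn.Identity()   # keep backbone features only
        self.sal_stream.fc = nn.Identity()
        self.fusion = fusion
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, rgb_frame, saliency_map):
        f_rgb = self.rgb_stream(rgb_frame)      # spatial features from RGB
        f_sal = self.sal_stream(saliency_map)   # spatial features from saliency
        if self.fusion == "sum":
            fused = f_rgb + f_sal
        elif self.fusion == "max":
            fused = torch.maximum(f_rgb, f_sal)
        elif self.fusion == "average":
            fused = (f_rgb + f_sal) / 2
        else:  # "product"
            fused = f_rgb * f_sal
        return self.classifier(fused)

# Usage example: a batch of 8 frames at 224x224, 6 action classes (e.g., KTH).
model = TwoStreamSpatialFusion(num_classes=6, fusion="average")
rgb = torch.randn(8, 3, 224, 224)
sal = torch.randn(8, 3, 224, 224)  # stand-in for GBVS saliency maps
logits = model(rgb, sal)           # shape: (8, 6)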
Keywords :
2D Video Data , 3D Video Data , Human Action Recognition , Visual Saliency , Deep Learning
Journal title :
International Journal of Engineering
Record number :
2752210