DocumentCode :
729751
Title :
Multi-modal learning for gesture recognition
Author :
Congqi Cao ; Yifan Zhang ; Hanqing Lu
Author_Institution :
Nat. Lab. of Pattern Recognition, Inst. of Autom., Beijing, China
fYear :
2015
fDate :
June 29 - July 3, 2015
Firstpage :
1
Lastpage :
6
Abstract :
With the development of sensing equipment, data from different modalities have become available for gesture recognition. In this paper, we propose a novel multi-modal learning framework. A coupled hidden Markov model (CHMM) is employed to discover the correlation and complementary information across different modalities. The framework supports two configurations: multi-modal learning with multi-modal testing, where all the modalities used during learning remain available during testing; and multi-modal learning with single-modal testing, where only one modality is available during testing. Experiments on two real-world gesture recognition data sets demonstrate the effectiveness of our multi-modal learning framework, with improvements observed in both multi-modal and single-modal testing.
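To illustrate the modeling idea named in the abstract, the sketch below evaluates the likelihood of two synchronized observation streams under a two-chain coupled HMM by folding the chains into a joint state space. This is a minimal sketch, not the authors' implementation: the discrete emissions, independent initial distributions, and all variable names and shapes are assumptions made for illustration.

```python
import numpy as np

def chmm_log_likelihood(pi1, pi2, A1, A2, B1, B2, obs1, obs2):
    """Log-likelihood of two synchronized observation streams under a CHMM.

    pi1, pi2   : (N1,), (N2,) initial distributions of the two hidden chains
    A1         : (N1, N2, N1) coupled transitions P(s1_t | s1_{t-1}, s2_{t-1})
    A2         : (N1, N2, N2) coupled transitions P(s2_t | s1_{t-1}, s2_{t-1})
    B1, B2     : (N1, V1), (N2, V2) discrete emission matrices per modality
    obs1, obs2 : length-T integer observation sequences (one per modality)
    """
    N1, N2 = len(pi1), len(pi2)
    # Joint transition over the product state space: (N1*N2) x (N1*N2).
    trans = np.einsum('ijk,ijl->ijkl', A1, A2).reshape(N1 * N2, N1 * N2)
    # alpha[i, j] tracks P(observations so far, s1_t = i, s2_t = j); it is
    # renormalized at every step and the log-normalizers are accumulated
    # to avoid numerical underflow on long sequences.
    alpha = np.outer(pi1 * B1[:, obs1[0]], pi2 * B2[:, obs2[0]])
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o1, o2 in zip(obs1[1:], obs2[1:]):
        pred = alpha.reshape(-1) @ trans                    # predict joint state
        alpha = pred.reshape(N1, N2) * np.outer(B1[:, o1], B2[:, o2])
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik
```

In a recognition setting of this kind, one such model would typically be trained per gesture class and a test sequence labeled with the class whose model yields the highest log-likelihood; how the paper handles the single-modal testing configuration is not specified here, but one common option is to marginalize out the emissions of the missing modality.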
Keywords :
correlation methods; data analysis; gesture recognition; hidden Markov models; learning (artificial intelligence); CHMM; complementary information; correlation information; coupled hidden Markov model; multimodal learning framework; multimodal testing; real-world gesture recognition data sets; sensing equipments; single-modal testing; Accuracy; Brain modeling; Gesture recognition; Hidden Markov models; Skeleton; Testing; Training; coupled hidden Markov model; gesture recognition; multi-modality;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Multimedia and Expo (ICME), 2015 IEEE International Conference on
Conference_Location :
Turin
Type :
conf
DOI :
10.1109/ICME.2015.7177460
Filename :
7177460