Author_Institution :
Sch. of Humanities, Tsinghua Univ., Beijing, China
Abstract :
Sign language recognition has evolved from traditional video-based recognition to 3D-based image recognition. Most existing approaches rely on Kinect-based somatosensory terminals, which cannot precisely capture the motions performed by individual palm joints; linguistic details of sign language (SL), such as position, direction, and movement, therefore have to be entered manually. Moreover, most studies use the positions or rotations of virtual-agent articulations as experimental data for classification or matching, employing inefficient algorithms. By fully exploiting the capabilities of the Leap Motion sensor, we compute motion trajectories automatically on a computer: features such as location, movement, and direction are derived from the motion parameters of 22 palm joints. On this basis, we propose a decision-tree-based algorithm to recognize 3D gestures. In our experiments, 1,203 Chinese SL gestures were signed, and 1,152 were successfully recognized using the Leap Motion sensor, giving a recognition rate of 95.8% with a recognition response time of only 5.4 s.
Keywords :
decision trees; image classification; image motion analysis; sign language recognition; 3D gesture recognition; 3D sign classification; 3D-based image recognition; decision-tree-based algorithm; leap motion sensor; recognition rate; recognition response time; sign recognition; time 5.4 s; Algorithm design and analysis; Assistive technology; Classification algorithms; Gesture recognition; Shape; Three-dimensional displays; Trajectory; 3D-based recognition; Chinese sign language; decision tree
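The abstract describes classifying 3D gestures with a decision tree over location, direction, and movement features derived from Leap Motion palm-joint data. As a hedged illustration only — the feature names, thresholds, and gesture labels below are invented for this sketch and are not taken from the paper — a minimal hand-written decision tree over such features might look like:

```python
from dataclasses import dataclass

@dataclass
class GestureFeatures:
    """Hypothetical features derived from Leap Motion palm-joint data."""
    palm_height: float    # location: palm height above the sensor (mm)
    direction_up: bool    # direction: are the fingers pointing upward?
    movement_span: float  # movement: total trajectory length (mm)

def classify(f: GestureFeatures) -> str:
    """Toy decision tree; labels and thresholds are illustrative only."""
    if f.movement_span < 20.0:  # nearly static hand
        return "static-up" if f.direction_up else "static-down"
    if f.palm_height > 150.0:   # moving hand held high above the sensor
        return "wave"
    return "sweep"

# Example: a nearly static hand with fingers pointing up
print(classify(GestureFeatures(100.0, True, 5.0)))  # static-up
```

A real system would, as the abstract notes, derive these features from the motion parameters of all 22 palm joints rather than three scalar summaries.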