DocumentCode
2116015
Title
Modeling layered meaning with gesture parameters
Author
Ong, Sylvie C W ; Ranganath, Surendra ; Venkatesh, Y.V.
Author_Institution
Dept. of Electr. & Comput. Eng., Nat. Univ. of Singapore, Singapore
Volume
3
fYear
2002
fDate
2-5 Dec. 2002
Firstpage
1591
Abstract
Signs produced by gestures (such as in American Sign Language) can have a basic meaning coupled with additional meanings that are like layers added to the basic meaning of the sign. These layered meanings are conveyed by systematic temporal and spatial modifications of the basic form of the gesture. The work reported in this paper seeks to recognize temporal and spatial modifiers of hand movement and to integrate them with the recognition of the basic meaning of the sign. To this end, a Bayesian network framework is explored with a simulated vocabulary of four basic signs that give rise to 14 different combinations of basic and layered meanings. We approach the problem of deciphering layered meanings by drawing analogies to the gesture parameters in the Parametric HMM, which represent systematic spatial modifications to gesture movement. Various Bayesian network structures were compared for recognizing the signs with layered meanings. The best-performing network yielded 85.5% accuracy.
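The gesture-parameter analogy in the abstract can be illustrated with a minimal sketch. This is not the authors' code; the variable names are hypothetical, and the linear dependence follows the standard Parametric HMM formulation, in which each HMM state's output mean varies linearly with a movement parameter theta (here standing in for the spatial modification of the sign):

```python
import numpy as np

def parametric_state_mean(W_j, mu_bar_j, theta):
    """Output mean of HMM state j as a linear function of the
    gesture parameter theta: mu_j(theta) = W_j @ theta + mu_bar_j."""
    return W_j @ theta + mu_bar_j

# Toy example: 2-D hand position, 1-D spatial-modification parameter.
W_j = np.array([[1.0], [0.5]])   # how the state's mean shifts with theta
mu_bar_j = np.array([0.0, 1.0])  # baseline mean (unmodified sign form)
theta = np.array([2.0])          # amount of spatial modification

mean = parametric_state_mean(W_j, mu_bar_j, theta)  # -> [2.0, 2.0]
```

Recognizing a layered meaning then amounts to estimating theta jointly with the sign class, which is what the paper's Bayesian network structures model.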
Keywords
belief networks; gesture recognition; hidden Markov models; image processing; spatiotemporal phenomena; vocabulary; Bayesian network; Parametric HMM; gesture movement; gesture parameters; layered meanings; sign basic meaning; sign vocabulary; spatial modification; systematic spatial modifications; systematic temporal modifications; Drives; Handicapped aids; Hidden Markov models; Humans; Motion analysis; Plasma welding; Production; Scalability; Shape; Vocabulary
fLanguage
English
Publisher
ieee
Conference_Titel
Control, Automation, Robotics and Vision, 2002. ICARCV 2002. 7th International Conference on
Print_ISBN
981-04-8364-3
Type
conf
DOI
10.1109/ICARCV.2002.1235012
Filename
1235012
Link To Document