Abstract:
An intelligent robot requires natural interaction with humans. Visual interpretation of gestures can be useful in accomplishing natural human-robot interaction (HRI). Previous HRI research has focused on issues such as hand gestures, sign language, and command gesture recognition. However, natural HRI requires automatic recognition of whole-body gestures. This is a challenging problem because describing and modeling meaningful gesture patterns from whole-body motion are complex tasks. This paper presents a new method for simultaneously spotting and recognizing whole-body key gestures on a mobile robot. Our method is designed to run concurrently with other HRI components such as speech recognition and face recognition, so both execution speed and recognition performance must be considered. For efficient and natural operation, we apply a dedicated technique at each step of gesture recognition: learning and extracting articulated joint information, representing a gesture as a sequence of clusters, and spotting and recognizing gestures with hidden Markov models (HMMs). In addition, we constructed a large gesture database, with which we verified our method. As a result, our method has been successfully integrated and operated on a mobile robot.
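To make the final step more concrete, the sketch below scores a gesture, represented as a sequence of pose-cluster indices, against a discrete-observation HMM using the standard forward algorithm. This is a minimal illustration only: the model sizes, probabilities, and the example cluster sequence are hypothetical, and the paper's actual models, spotting thresholds, and garbage-model handling are not specified in the abstract.

```python
# Minimal sketch: forward-algorithm log-likelihood of a cluster sequence
# under a discrete HMM. All numbers below are hypothetical placeholders.
import numpy as np

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log P(obs | model) for a discrete-emission HMM.

    obs    : sequence of pose-cluster indices (ints in [0, M))
    log_pi : (N,)   log initial-state probabilities
    log_A  : (N, N) log transition probabilities
    log_B  : (N, M) log emission probabilities over M pose clusters
    """
    alpha = log_pi + log_B[:, obs[0]]  # initialize with the first symbol
    for o in obs[1:]:
        # log-sum-exp over predecessor states, then emit the current symbol
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

# Hypothetical 3-state left-to-right model over 4 pose clusters.
rng = np.random.default_rng(0)
pi = np.array([0.8, 0.15, 0.05])
A = np.array([[0.7, 0.25, 0.05],
              [0.0, 0.70, 0.30],
              [0.0, 0.00, 1.00]])
B = rng.dirichlet(np.ones(4), size=3)

gesture = [0, 0, 1, 2, 2, 3]  # cluster indices produced by pose tracking
score = forward_log_likelihood(
    gesture,
    np.log(pi),
    np.log(np.where(A > 0, A, 1e-300)),  # avoid log(0) on forbidden transitions
    np.log(B),
)
print(f"log P(sequence | key-gesture model) = {score:.3f}")
```

In a spotting setup of this kind, such a score would typically be compared against the score of a filler/garbage model or a threshold to decide whether a key gesture occurred in the observation stream.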
Keywords:
face recognition; gesture recognition; image sequences; intelligent robots; man-machine systems; mobile robots; speech recognition; visual databases; articulated joint information; automatic gesture recognition; command gesture recognition; gesture database; hand gesture; intelligent human-robot interaction; sign language; whole-body gestures; biological system modeling; data mining; databases; handicapped aids; hidden Markov models; humans