DocumentCode :
1747327
Title :
Acquiring hand-action models by attention point analysis
Author :
Ogawara, Koichi ; Iba, Soshi ; Tanuki, Tomikazu ; Kimura, Hiroshi ; Ikeuchi, Katsushi
Author_Institution :
Inst. of Ind. Sci., Tokyo Univ., Japan
Volume :
1
fYear :
2001
fDate :
2001
Firstpage :
465
Abstract :
This paper describes our current research on learning task-level representations by a robot through observation of human demonstrations. We focus on human hand actions and represent them as symbolic task models. We propose a framework for such models that efficiently integrates multiple observations based on attention points, and we evaluate the models using a human-form robot. We propose a two-step observation mechanism. In the first step, the system roughly observes the entire sequence of the human demonstration, builds a rough task model, and extracts attention points (APs). The APs indicate the times and positions in the observation sequence that require further detailed analysis. In the second step, the system closely examines the sequence around each AP and obtains the attribute values of the task model, such as what to grasp, which hand to use, and the precise trajectory of the manipulated object. We implemented this system on a human-form robot and demonstrated its effectiveness.
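The two-step observation mechanism in the abstract can be sketched in code. This is a minimal, hypothetical illustration: the frame representation, the change-score threshold, and the attribute-extraction heuristic are all assumptions for exposition; the paper's actual system operates on stereo-vision sequences observed from a human demonstrator.

```python
# Hedged sketch of the two-step attention-point (AP) scheme: a coarse
# pass over the whole demonstration marks APs, then a fine pass examines
# a small window around each AP to fill in task-model attributes.
# All names and data here are illustrative, not the paper's API.

def coarse_pass(sequence, threshold=0.5):
    """Step 1: rough scan of the entire sequence; frames whose change
    score exceeds a threshold become attention points (APs)."""
    aps = [i for i, frame in enumerate(sequence) if frame["change"] > threshold]
    return {"n_frames": len(sequence), "aps": aps}

def fine_pass(sequence, rough_model, window=1):
    """Step 2: detailed analysis around each AP, extracting attribute
    values (e.g. which object is grasped) for the task model."""
    attributes = {}
    for ap in rough_model["aps"]:
        lo, hi = max(0, ap - window), min(len(sequence), ap + window + 1)
        segment = sequence[lo:hi]
        # Illustrative heuristic: the object seen near the hand most
        # often within the window is taken as the manipulated object.
        objs = [f["near_hand"] for f in segment if f.get("near_hand")]
        attributes[ap] = max(set(objs), key=objs.count) if objs else None
    return attributes

demo = [
    {"change": 0.1},
    {"change": 0.9, "near_hand": "cup"},
    {"change": 0.2, "near_hand": "cup"},
    {"change": 0.8, "near_hand": "lid"},
]
model = coarse_pass(demo)       # model["aps"] == [1, 3]
attrs = fine_pass(demo, model)  # attrs[1] == "cup"
```

The key design point the abstract emphasizes is that the expensive, detailed analysis runs only near the APs, not over the whole demonstration, which keeps observation efficient.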
Keywords :
learning by example; manipulator dynamics; robot vision; stereo image processing; attention point analysis; hand-action models; human demonstration; human-form robot; observation mechanism; stereo vision; task model; Assembly systems; Automatic control; Automatic programming; Cameras; Humans; Robot vision systems; Robotic assembly; Robotics and automation
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA 2001)
ISSN :
1050-4729
Print_ISBN :
0-7803-6576-3
Type :
conf
DOI :
10.1109/ROBOT.2001.932594
Filename :
932594