DocumentCode :
382892
Title :
Learning cooperative assembly with the graph representation of a state-action space
Author :
Ferch, Markus ; Höchsmann, Matthias ; Zhang, Jianwei
Author_Institution :
Tech. Comput. Sci., Bielefeld Univ., Germany
Volume :
1
fYear :
2002
fDate :
2002
Firstpage :
990
Abstract :
In this paper, we present a method for two robot manipulators to learn cooperative assembly tasks. A learning algorithm based on trial and error is used to find a sequence for each robot to assemble the goal aggregate. It is shown that a distributed learning method based on a Markov decision process is able to learn the sequences for the involved robots. A novel state-action graph is used to store the reinforcement values of the learning process. The approach is designed so that not only exact matches but also similar aggregates are accepted by the system.
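The abstract's core idea, storing reinforcement values on a state-action graph and updating them by trial and error, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the class and method names (`StateActionGraph`, `add_edge`, `update`, `best_action`) and the Q-learning-style update rule are assumptions for the sake of the example.

```python
class StateActionGraph:
    """Hypothetical sketch: nodes are assembly states, edges are actions,
    and each edge stores a learned reinforcement (Q) value."""

    def __init__(self):
        # edges[state][action] = [next_state, q_value]
        self.edges = {}

    def add_edge(self, state, action, next_state):
        self.edges.setdefault(state, {})[action] = [next_state, 0.0]

    def update(self, state, action, reward, alpha=0.5, gamma=0.9):
        # Trial-and-error update: move the stored value toward
        # reward + discounted best value of the successor state.
        next_state, q = self.edges[state][action]
        best_next = max(
            (e[1] for e in self.edges.get(next_state, {}).values()),
            default=0.0,
        )
        self.edges[state][action][1] = q + alpha * (reward + gamma * best_next - q)

    def best_action(self, state):
        actions = self.edges.get(state, {})
        return max(actions, key=lambda a: actions[a][1]) if actions else None


# Toy assembly: from "empty", adding part A then part B reaches the goal.
g = StateActionGraph()
g.add_edge("empty", "addA", "A")
g.add_edge("empty", "addB", "B")      # dead end, never rewarded
g.add_edge("A", "addB", "goal")

for _ in range(5):                    # trial-and-error episodes
    g.update("A", "addB", reward=1.0)
    g.update("empty", "addA", reward=0.0)
    g.update("empty", "addB", reward=0.0)

print(g.best_action("empty"))         # the rewarded sequence wins: addA
```

In the paper's setting, one such graph per robot would give the distributed Markov-decision-process formulation described above; the approximate-matching step (accepting similar aggregates, not just exact ones) would relax the state lookup.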
Keywords :
Markov processes; assembling; cooperative systems; graph theory; industrial manipulators; learning (artificial intelligence); multi-robot systems; state-space methods; Markov decision process; aggregate model; approximate graph matching; cooperative assembly task learning; distributed learning method; learning process reinforcement values; robot manipulators; sequence learning; state-action space graph representation; trial and error learning algorithm; Aggregates; Cameras; Cognitive robotics; Computer science; Manipulator dynamics; Orbital robotics; Robot vision systems; Robotic assembly; Space technology; State-space methods;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Intelligent Robots and Systems, 2002. IEEE/RSJ International Conference on
Print_ISBN :
0-7803-7398-7
Type :
conf
DOI :
10.1109/IRDS.2002.1041519
Filename :
1041519