DocumentCode :
1803501
Title :
Continuous action for multi-agent q-learning
Author :
Hwang, Kao-Shing ; Chen, Yu-Jen ; Lin, Tzung-Feng ; Jiang, Wei-Cheng
Author_Institution :
Dept. of Electr. Eng., Nat. Chung Cheng Univ., Ming-Hsiung, Taiwan
fYear :
2011
fDate :
15-18 May 2011
Firstpage :
418
Lastpage :
423
Abstract :
Q-learning, one of the most widely used reinforcement learning methods, normally needs well-defined quantized state and action spaces to obtain an optimal policy for a given task. This makes it difficult to apply to real robot tasks, where quantizing continuous state and action spaces degrades the performance of the learned behavior. In this paper, we propose a fuzzy-based Cerebellar Model Articulation Controller (CMAC) method that calculates contribution values to estimate a continuous action value, making motion smooth and effective. We also implement the method in a multi-agent system for real robot applications.
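The abstract describes blending discrete Q-learning outputs into a continuous action via fuzzy "contribution values". A minimal sketch of that idea is shown below; the triangular membership function, the per-tile greedy blending, and all names are assumptions for illustration, not the authors' actual fuzzy-CMAC implementation.

```python
import numpy as np

def triangular_membership(x, center, width):
    """Triangular fuzzy membership of scalar state x around a tile center.
    (Assumed membership shape; the paper's fuzzy-CMAC may differ.)"""
    return max(0.0, 1.0 - abs(x - center) / width)

def continuous_action(state, centers, q_table, width=1.0):
    """Estimate a continuous action by weighting each tile's greedy
    discrete action with that tile's membership (contribution) value."""
    weights = np.array([triangular_membership(state, c, width) for c in centers])
    if weights.sum() == 0.0:
        weights = np.ones_like(weights)  # fall back to uniform weighting
    greedy_actions = q_table.argmax(axis=1).astype(float)  # best action per tile
    return float(weights @ greedy_actions / weights.sum())
```

Because several overlapping tiles contribute simultaneously, the resulting action varies smoothly with the state instead of jumping between quantized levels.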
Keywords :
cerebellar model arithmetic computers; control engineering computing; fuzzy control; learning (artificial intelligence); motion control; multi-agent systems; robots; action spaces; continuous state quantization; fuzzy based cerebellar model articulation controller; multiagent Q-learning; real robot tasks; reinforcement learning; smooth motion; Learning; Logic gates; Multiagent systems; Quantization; Robot kinematics; Robot sensing systems; Cerebellar Model Articulation Controller; Multi-agent; Reinforcement learning;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Control Conference (ASCC), 2011 8th Asian
Conference_Location :
Kaohsiung
Print_ISBN :
978-1-61284-487-9
Electronic_ISBN :
978-89-956056-4-6
Type :
conf
Filename :
5899108