Title :
Adaptive reinforcement learning in box-pushing robots
Author :
Hwang, K.S.; Ling, J.L.; Wang, W.-H.
Author_Institution :
Dept. of Electr. Eng., Nat. Sun Yat-sen Univ., Kaohsiung, Taiwan
Abstract :
In this paper, an adaptive state aggregation Q-Learning method with multi-agent cooperation capability was proposed to improve the efficiency of reinforcement learning (RL) and was applied to box-pushing tasks for humanoid robots. First, a decision tree was used to partition the state space according to temporal differences in reinforcement learning, so that a real-valued action domain could be represented by a discrete space. Furthermore, adaptive-state Q-Learning, a modification of tabular and function-approximation Q-value estimation, was proposed, and its efficiency was demonstrated in simulations of a humanoid robot pushing a box. During the pushing process, because the box moves according to the direction of the force the robot exerts and the pushing point on the box, the robot must learn to adjust its pushing angles, avoid obstacles, maintain its balance, and push the box to the target point. The simulation results show that the proposed method learns more efficiently than Q-Learning without adaptive states.
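To make the idea of state aggregation driven by temporal differences concrete, the following is a minimal, self-contained sketch of tabular Q-learning over an adaptively refined one-dimensional state partition. Cells that accumulate large TD error are bisected, loosely mirroring the decision-tree splitting described in the abstract. All names, thresholds, and the 1-D bisection rule are illustrative simplifications, not the paper's actual algorithm.

```python
import bisect
import random
from collections import defaultdict


class AdaptiveQLearner:
    """Illustrative sketch (not the paper's method): tabular Q-learning
    whose state partition is refined where TD errors accumulate."""

    def __init__(self, n_actions, lo=0.0, hi=1.0,
                 alpha=0.1, gamma=0.9, split_threshold=0.5):
        self.n_actions = n_actions
        self.bounds = [lo, hi]                 # sorted edges of the partition
        self.q = defaultdict(lambda: [0.0] * n_actions)   # keyed by cell (lo, hi)
        self.td_accum = defaultdict(float)     # accumulated |TD error| per cell
        self.alpha, self.gamma = alpha, gamma
        self.split_threshold = split_threshold

    def cell(self, s):
        """Return the partition cell (lo, hi) containing continuous state s."""
        i = bisect.bisect_right(self.bounds, s) - 1
        i = max(0, min(i, len(self.bounds) - 2))
        return (self.bounds[i], self.bounds[i + 1])

    def act(self, s, eps=0.1):
        """Epsilon-greedy action selection over the aggregated state."""
        if random.random() < eps:
            return random.randrange(self.n_actions)
        qs = self.q[self.cell(s)]
        return qs.index(max(qs))

    def update(self, s, a, r, s2):
        """Standard Q-learning update; split the cell if TD error piles up."""
        b, b2 = self.cell(s), self.cell(s2)
        td = r + self.gamma * max(self.q[b2]) - self.q[b][a]
        self.q[b][a] += self.alpha * td
        self.td_accum[b] += abs(td)
        if self.td_accum[b] > self.split_threshold:
            self._split(b)

    def _split(self, b):
        """Bisect cell b; both children inherit the parent's Q-values."""
        lo, hi = b
        mid = 0.5 * (lo + hi)
        bisect.insort(self.bounds, mid)
        parent_q = self.q.pop(b)
        self.td_accum.pop(b, None)
        self.q[(lo, mid)] = parent_q[:]
        self.q[(mid, hi)] = parent_q[:]
```

A sketch like this refines resolution only where the value function is hard to fit, which is the intuition behind aggregating states with a TD-driven decision tree; the paper additionally handles real-valued actions and multi-agent cooperation, which this toy omits.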
Keywords :
collision avoidance; decision trees; function approximation; humanoid robots; learning (artificial intelligence); multi-agent systems; multi-robot systems; Q-value estimation; RL; adaptive reinforcement learning; adaptive state aggregation Q-Learning method; box-pushing robots; decision tree; function approximation; humanoid robots; multiagent cooperation capability; obstacle avoidance; real valued action domain; state space; tabular approximation; Decision trees; Humanoid robots; Learning (artificial intelligence); Robot kinematics; Robot sensing systems; Training;
Conference_Titel :
2014 IEEE International Conference on Automation Science and Engineering (CASE)
Conference_Location :
Taipei
DOI :
10.1109/CoASE.2014.6899476