DocumentCode :
2693150
Title :
Reinforcement learning for high-dimensional problems with symmetrical actions
Author :
Kamal, M.A.S. ; Murata, Junichi
Author_Institution :
Graduate Sch. of ISEE, Kyushu Univ., Fukuoka, Japan
Volume :
7
fYear :
2004
fDate :
10-13 Oct. 2004
Firstpage :
6192
Abstract :
A reinforcement learning algorithm is proposed that copes with high dimensionality for a class of problems with symmetrical actions. Action selection does not need to consider the full state; it only needs to look at a part of the state. Moreover, every symmetrical action is related to the same kind of partial state, so the value function can be shared among the actions, which greatly reduces the size of the reinforcement learning problem. The overall learning algorithm is equivalent to the standard reinforcement learning algorithm. Simulation results and other aspects are compared with standard and other reinforcement learning algorithms. The reduction in dimensionality and much faster convergence, achieved without worsening other objectives, show the effectiveness of the proposed mechanism on a high-dimensional optimization problem with symmetrical actions.
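A minimal sketch (not taken from the paper; the state encoding, table shape, and update rule are assumptions) of the idea described in the abstract: every symmetrical action is scored from only its own part of the state, and all actions share one value table, so the table grows with the local-state size rather than with the full state dimension.

import numpy as np

# Assumed problem encoding: the full state is a tuple of N_ACTIONS local
# states, one per symmetrical action, each an integer in [0, N_LOCAL_STATES).
N_ACTIONS = 4
N_LOCAL_STATES = 10
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)

# One value table shared by all symmetrical actions: indexed only by the
# local state, not by the full (high-dimensional) state.
Q = np.zeros(N_LOCAL_STATES)

def local_state(full_state, i):
    # Hypothetical mapping: the part of the full state relevant to action i.
    return full_state[i]

def select_action(full_state):
    # Epsilon-greedy selection; each action is scored with the same shared table.
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    scores = [Q[local_state(full_state, i)] for i in range(N_ACTIONS)]
    return int(np.argmax(scores))

def update(full_state, action, reward, next_full_state):
    # Standard Q-learning update applied to the shared table.
    s = local_state(full_state, action)
    next_best = max(Q[local_state(next_full_state, i)] for i in range(N_ACTIONS))
    Q[s] += ALPHA * (reward + GAMMA * next_best - Q[s])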
Keywords :
convergence; learning (artificial intelligence); high-dimensional problem; reinforcement learning; symmetrical action; value function; Artificial neural networks; Control systems; Convergence; Costs; Elevators; Large-scale systems; Learning; State estimation; State-space methods; Table lookup;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2004 IEEE International Conference on Systems, Man and Cybernetics
ISSN :
1062-922X
Print_ISBN :
0-7803-8566-7
Type :
conf
DOI :
10.1109/ICSMC.2004.1401371
Filename :
1401371