DocumentCode :
3117209
Title :
Exploiting Domain Symmetries in Reinforcement Learning with Continuous State and Action Spaces
Author :
Agostini, Alejandro; Celaya, Enric
Author_Institution :
Inst. de Robot. i Inf. Ind. (UPC-CSIC), Barcelona, Spain
fYear :
2009
fDate :
13-15 Dec. 2009
Firstpage :
331
Lastpage :
336
Abstract :
A central problem in reinforcement learning is how to deal with large state and action spaces. When the problem domain presents intrinsic symmetries, exploiting them can be key to achieving good performance. We analyze the gains that can effectively be achieved by exploiting different kinds of symmetries, and the effect of combining them, in a test case: the stand-up and stabilization of an inverted pendulum.
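Note: the following is a minimal sketch of the general idea of exploiting a domain symmetry in this kind of task, not the authors' implementation. For an inverted pendulum with state (theta, theta_dot) and torque action a, a transition remains valid under the reflection (theta, theta_dot, a) -> (-theta, -theta_dot, -a), so each observed transition can be mirrored to double the experience available to the learner. All names below (Transition, mirror, augment) and the numeric values are illustrative assumptions.

from typing import List, Tuple

# A transition: (state, action, reward, next_state), with state = (theta, theta_dot).
Transition = Tuple[Tuple[float, float], float, float, Tuple[float, float]]

def mirror(t: Transition) -> Transition:
    """Apply the pendulum's reflection symmetry to one transition."""
    (theta, theta_dot), action, reward, (theta2, theta_dot2) = t
    return ((-theta, -theta_dot), -action, reward, (-theta2, -theta_dot2))

def augment(batch: List[Transition]) -> List[Transition]:
    """Return the batch together with its mirrored counterpart."""
    return batch + [mirror(t) for t in batch]

if __name__ == "__main__":
    # One observed transition: pendulum tilted right, pushed left.
    t = ((0.3, -1.2), -2.0, -0.09, (0.25, -1.4))
    for s, a, r, s2 in augment([t]):
        print(s, a, r, s2)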
Keywords :
learning (artificial intelligence); pendulums; action spaces; continuous state; domain symmetry; intrinsic symmetry; inverted pendulum stabilization; reinforcement learning; Acceleration; Aerospace industry; Function approximation; Machine learning; Multiagent systems; State estimation; State-space methods; Testing; Reinforcement learning; domain symmetries; function approximation;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
International Conference on Machine Learning and Applications (ICMLA '09), 2009
Conference_Location :
Miami Beach, FL
Print_ISBN :
978-0-7695-3926-3
Type :
conf
DOI :
10.1109/ICMLA.2009.41
Filename :
5381530