DocumentCode :
2470115
Title :
Partitioning the state space by critical states
Author :
Jin, Zhao ; Liu, Weiyi ; Jin, Jian
Author_Institution :
Sch. of Inf. Sci. & Eng., Yunnan Univ., Kunming, China
fYear :
2009
fDate :
16-19 Oct. 2009
Firstpage :
1
Lastpage :
7
Abstract :
To scale reinforcement learning up to large and complex problems, we propose an approach that partitions a large state space into multiple smaller state spaces based on critical states, thereby decomposing the learning task. During learning, we record every training episode and eliminate the state loops it contains. We find that some states appear with high probability (even equal to 1) in all of these acyclic episodes; we call them critical states. That is, according to the learned experience, an agent that wants to reach the goal state will pass through these critical states with high probability. The critical states can therefore be used to partition the state space so that the learning task is accomplished in stages. We also prove that the optimal policy found in the partitioned smaller state spaces is equivalent to the optimal policy found in the original state space. Experimental comparisons between Q-learning and Q-learning with critical states demonstrate that our approach is more effective. More importantly, our approach sheds light on how an agent can use its own experience to plan its learning for better performance.
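The core procedure described in the abstract (record episodes, remove state loops, then flag states that recur across the acyclic episodes) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names, the loop-elimination rule (cut each cycle back to the state's first occurrence), and the frequency threshold are assumptions for the sketch.

```python
from collections import Counter

def remove_loops(episode):
    """Eliminate state loops: when a state is revisited, cut the
    episode back to that state's first occurrence (assumed rule)."""
    acyclic = []
    index = {}  # state -> position in acyclic path
    for s in episode:
        if s in index:
            cut = index[s]
            for removed in acyclic[cut + 1:]:
                del index[removed]
            acyclic = acyclic[:cut + 1]
        else:
            index[s] = len(acyclic)
            acyclic.append(s)
    return acyclic

def critical_states(episodes, threshold=0.9):
    """States appearing in at least `threshold` of all acyclic
    episodes; threshold = 1.0 means 'in every episode'."""
    counts = Counter()
    for ep in episodes:
        counts.update(set(remove_loops(ep)))
    n = len(episodes)
    return {s for s, c in counts.items() if c / n >= threshold}
```

For example, with episodes `[0, 1, 2, 1, 3, 4]`, `[0, 2, 3, 4]`, and `[0, 1, 3, 4]` (goal state 4), the first episode's loop `1 → 2 → 1` is removed, and states 0, 3, and 4 are flagged as critical at a 0.9 threshold. In the paper's scheme, such states would then serve as subgoals that split the state space into smaller stage-wise learning problems.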
Keywords :
learning (artificial intelligence); Q-learning; critical states; reinforcement learning; state space partitioning; Convergence; Humans; Information science; Machine learning; Psychology; Scheduling; State-space methods;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Bio-Inspired Computing, 2009. BIC-TA '09. Fourth International Conference on
Conference_Location :
Beijing
Print_ISBN :
978-1-4244-3866-2
Electronic_ISBN :
978-1-4244-3867-9
Type :
conf
DOI :
10.1109/BICTA.2009.5338123
Filename :
5338123