DocumentCode :
1126685
Title :
Positive Impact of State Similarity on Reinforcement Learning Performance
Author :
Girgin, Sertan; Polat, Faruk; Alhajj, Reda
Author_Institution :
Middle East Technical University, Ankara
Volume :
37
Issue :
5
fYear :
2007
Firstpage :
1256
Lastpage :
1270
Abstract :
In this paper, we propose a novel approach to identifying states with similar subpolicies and show how they can be integrated into the reinforcement learning framework to improve learning performance. The method uses a specialized tree structure to identify common action sequences of states, derived from possible optimal policies, and defines a similarity function between two states based on the number of such shared sequences. Using this similarity function, updates to the action-value function of one state are reflected onto all similar states, allowing experience acquired during learning to be applied in a broader context. The effectiveness of the method is demonstrated empirically.
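Sketch :
The abstract is concrete enough to sketch the core mechanism. The Python below is a minimal illustration, not the authors' implementation: it assumes a tabular Q-learner, stands in a Jaccard overlap of recorded action sequences for the paper's tree-based count of common sequences, and all names (SimilarityQLearner, record_sequence, threshold) are hypothetical.

from collections import defaultdict

def similarity(seqs_a, seqs_b):
    # Similarity of two states as the Jaccard overlap of the action
    # sequences recorded from them (a stand-in for the paper's
    # count-based measure over the sequence tree).
    union = seqs_a | seqs_b
    return len(seqs_a & seqs_b) / len(union) if union else 0.0

class SimilarityQLearner:
    # Tabular Q-learning in which each update is also reflected,
    # with a similarity weight, onto states deemed similar.

    def __init__(self, actions, alpha=0.1, gamma=0.9, threshold=0.6):
        self.actions = list(actions)
        self.alpha, self.gamma, self.threshold = alpha, gamma, threshold
        self.Q = defaultdict(lambda: defaultdict(float))  # Q[state][action]
        self.seqs = defaultdict(set)  # state -> observed action sequences

    def record_sequence(self, state, seq):
        # Store an action sequence observed from `state`, e.g. the suffix
        # of a rewarded episode; the paper organizes these in a tree.
        self.seqs[state].add(tuple(seq))

    def update(self, s, a, r, s_next):
        # Ordinary one-step Q-learning target ...
        best_next = max((self.Q[s_next][b] for b in self.actions), default=0.0)
        target = r + self.gamma * best_next
        # ... applied to s itself and, weighted, to every similar state.
        for s2 in set(self.Q) | set(self.seqs) | {s}:
            w = 1.0 if s2 == s else similarity(self.seqs[s], self.seqs[s2])
            if w >= self.threshold or s2 == s:
                self.Q[s2][a] += self.alpha * w * (target - self.Q[s2][a])

A short usage example under the same assumptions: two states that share a recorded action sequence are treated as similar, so an update on one also moves the other's action value.

agent = SimilarityQLearner(actions=["left", "right"])
agent.record_sequence("s1", ["right", "right"])  # same suffix observed
agent.record_sequence("s2", ["right", "right"])  # from both states
agent.update("s1", "right", 1.0, "g")            # "s2" gets a weighted share
print(agent.Q["s1"]["right"], agent.Q["s2"]["right"])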
Keywords :
learning (artificial intelligence); state estimation; trees (mathematics); action sequence; action-value function; reinforcement learning (RL); similarity function; state identification; state similarity; state-space methods; tree data structures; optimal policies; learning performance; Algorithms; Artificial Intelligence; Computer Simulation; Models, Theoretical; Pattern Recognition, Automated
fLanguage :
English
Journal_Title :
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Publisher :
IEEE
ISSN :
1083-4419
Type :
jour
DOI :
10.1109/TSMCB.2007.899419
Filename :
4305275