DocumentCode :
1929339
Title :
Tabu search exploration for on-policy reinforcement learning
Author :
Abramson, Myriam ; Wechsler, Harry
Author_Institution :
George Mason Univ., Fairfax, VA, USA
Volume :
4
fYear :
2003
fDate :
20-24 July 2003
Firstpage :
2910
Abstract :
On-policy reinforcement learning provides online adaptation, a characteristic of intelligent systems and lifelong learning. Unlike dynamic programming, reinforcement learning with an efficient exploration strategy does not require an exhaustive sweep of the search space to converge. For efficient and "believable" online performance, an exploration strategy must also avoid cycling through previous solutions and know when to stop without getting stuck in a local optimum. This paper addresses these problems with tabu search (TS) exploration. Several tabu search exploration strategies for reinforcement learning are introduced. Experimental results are presented for the game of Go, a deterministic, perfect-information two-player game, using Sarsa learning vector quantization (SLVQ), an on-policy reinforcement learning algorithm.
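As a rough illustration of the idea summarized above, the sketch below combines a standard Sarsa update with a tabu list over recently taken state-action pairs and an aspiration criterion that readmits a tabu move if it is the greedy choice. The class name, tabu tenure, and parameter values are assumptions for illustration only; this is not the authors' SLVQ implementation.

```python
# Minimal sketch (assumed, not from the paper): Sarsa with tabu-list exploration.
# Recently taken (state, action) pairs are excluded from selection unless they
# satisfy an aspiration criterion (being the current greedy action).
import random
from collections import defaultdict, deque

class TabuSarsaAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, tenure=7):
        self.actions = list(actions)
        self.alpha, self.gamma = alpha, gamma
        self.q = defaultdict(float)        # Q[(state, action)] table
        self.tabu = deque(maxlen=tenure)   # short-term memory of recent moves

    def select_action(self, state):
        # Aspiration: a tabu move is still allowed if it is the greedy action.
        best = max(self.actions, key=lambda a: self.q[(state, a)])
        candidates = [a for a in self.actions
                      if (state, a) not in self.tabu or a == best]
        # Greedy among non-tabu candidates; break ties at random.
        top = max(self.q[(state, a)] for a in candidates)
        action = random.choice([a for a in candidates if self.q[(state, a)] == top])
        self.tabu.append((state, action))
        return action

    def update(self, s, a, r, s_next, a_next):
        # On-policy Sarsa update toward r + gamma * Q(s', a').
        target = r + self.gamma * self.q[(s_next, a_next)]
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])
```

The tabu list discourages cycling through recently visited moves, while the aspiration criterion prevents the agent from being forced away from a clearly superior action.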
Keywords :
games of skill; learning (artificial intelligence); search problems; Sarsa learning vector quantization; Tabu search exploration; deterministic perfect-information two-player game; game of Go; on-policy reinforcement learning; Convergence; Dynamic programming; Equations; Intelligent systems; Learning; Polynomials; Sampling methods; Space exploration; State-space methods; Vector quantization;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Proceedings of the International Joint Conference on Neural Networks, 2003
ISSN :
1098-7576
Print_ISBN :
0-7803-7898-9
Type :
conf
DOI :
10.1109/IJCNN.2003.1224033
Filename :
1224033