Title :
Tabu search exploration for on-policy reinforcement learning
Author :
Abramson, Myriam ; Wechsler, Harry
Author_Institution :
George Mason Univ., Fairfax, VA, USA
Abstract :
On-policy reinforcement learning provides online adaptation, a characteristic of intelligent systems and lifelong learning. Unlike dynamic programming, reinforcement learning does not require an exhaustive sweep of the state space to converge, provided an efficient exploration strategy is used. For efficient and "believable" online performance, an exploration strategy must also avoid cycling through previous solutions and know when to stop without getting stuck in a local optimum. This paper addresses these problems with tabu search (TS) exploration. Several tabu search exploration strategies for reinforcement learning are introduced. Experimental results are presented for the game of Go, a deterministic, perfect-information two-player game, using Sarsa learning vector quantization (SLVQ), an on-policy reinforcement learning algorithm.
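To make the exploration idea in the abstract concrete, the following is a minimal sketch of Sarsa with tabu-guided action selection on a hypothetical toy chain environment. It illustrates the anti-cycling idea only and is not the paper's SLVQ algorithm or its Go experiments; the ChainEnv class, the tabu tenure, and all hyperparameters are assumptions introduced here.

# Minimal sketch of tabu-guided exploration inside Sarsa on a toy problem.
# The environment, tabu tenure, and hyperparameters are illustrative
# assumptions; this is not the paper's SLVQ formulation or its Go domain.
from collections import deque

class ChainEnv:
    """Hypothetical deterministic chain: walk left/right on 0..n-1, goal at the right end."""
    def __init__(self, n=8):
        self.n = n
    def reset(self):
        self.pos = 0
        return self.pos
    def actions(self, state):
        return (-1, +1)
    def step(self, action):
        self.pos = min(max(self.pos + action, 0), self.n - 1)
        done = self.pos == self.n - 1
        return self.pos, (1.0 if done else -0.01), done

def sarsa_with_tabu(env, episodes=200, alpha=0.1, gamma=0.95, tenure=7):
    Q = {}                                    # Q[(state, action)] -> estimated value
    q = lambda s, a: Q.get((s, a), 0.0)

    def select_action(state, tabu):
        # Greedy choice restricted to non-tabu moves; if every move is tabu,
        # fall back to the unrestricted greedy action (aspiration criterion).
        actions = env.actions(state)
        allowed = [a for a in actions if (state, a) not in tabu] or list(actions)
        return max(allowed, key=lambda a: q(state, a))

    for _ in range(episodes):
        tabu = deque(maxlen=tenure)           # recently taken (state, action) pairs
        s = env.reset()
        a = select_action(s, tabu)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = None if done else select_action(s2, tabu)
            target = r if done else r + gamma * q(s2, a2)
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            tabu.append((s, a))               # forbid repeating this move for `tenure` steps
            s, a = s2, a2
        # the tabu list is cleared between episodes by re-creating the deque
    return Q

if __name__ == "__main__":
    Q = sarsa_with_tabu(ChainEnv())
    print(max(Q.items(), key=lambda kv: kv[1]))  # best learned (state, action) pair

In this sketch the tenure parameter acts as the anti-cycling memory (recently taken moves are forbidden for a fixed number of steps), while the fallback to the unrestricted greedy action when every move is tabu plays the role of an aspiration criterion.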
Keywords :
games of skill; learning (artificial intelligence); search problems; Sarsa learning vector quantization; Tabu search exploration; deterministic perfect-information two-player game; game of Go; on-policy reinforcement learning; Convergence; Dynamic programming; Equations; Intelligent systems; Learning; Polynomials; Sampling methods; Space exploration; State-space methods; Vector quantization;
Conference_Title :
Proceedings of the International Joint Conference on Neural Networks, 2003
Print_ISBN :
0-7803-7898-9
DOI :
10.1109/IJCNN.2003.1224033