DocumentCode
395551
Title
Reinforcement learning based on a statistical value function and its application to a board game
Author
Nishikawa, Ikuko; Nakanishi, Tomoyuki
Author_Institution
Dept. of Comput. Sci., Ritsumeikan Univ., Shiga, Japan
Volume
3
fYear
2002
fDate
18-22 Nov. 2002
Firstpage
1449
Abstract
A statistical method for reinforcement learning is proposed to cope with a large number of discrete states. As a coarse-graining of the state space, a smaller number of state sets is defined, each being a group of neighbouring states. The state sets partly overlap, so that one state is included in multiple sets. Learning is based on an action-value function for each state set, and the action-value function of an individual state is derived as a statistical average of the value functions of the sets that contain it. The proposed method is applied to the board game Dots-and-Boxes. Simulations show successful learning through training games against a mini-max opponent with a search depth of 2 to 5, and the winning rate against a depth-3 mini-max opponent reaches about 80%. An action-value function derived by a weighted average, with weights given by the variance of rewards, shows an advantage over one derived by a simple average.
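Illustration
The following is a minimal, hypothetical Python sketch of the averaging scheme described in the abstract; it is not the authors' implementation. It assumes a callable state_to_sets that maps a state to the ids of the overlapping sets containing it, and the class names (SetStats, CoarseGrainedQ) are illustrative. Per-set action values and reward variances are maintained incrementally, and a state's action value is the variance-weighted (or simple) average over its sets.

    # Illustrative sketch only: per-set action-value estimates are combined
    # into a per-state estimate by a variance-weighted average.
    from collections import defaultdict
    import math

    class SetStats:
        """Running reward statistics for one (state set, action) pair."""
        def __init__(self):
            self.n = 0
            self.mean = 0.0      # action-value estimate for the state set
            self.m2 = 0.0        # sum of squared deviations (for variance)

        def update(self, reward):
            # Welford's online update of mean and variance.
            self.n += 1
            delta = reward - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (reward - self.mean)

        def variance(self):
            return self.m2 / (self.n - 1) if self.n > 1 else float("inf")

    class CoarseGrainedQ:
        """Action values kept per overlapping state set; a state's value is a
        statistical average over the sets that contain it."""
        def __init__(self, state_to_sets):
            self.state_to_sets = state_to_sets      # hypothetical: state -> iterable of set ids
            self.stats = defaultdict(SetStats)      # (set_id, action) -> SetStats

        def update(self, state, action, reward):
            # Every set containing the state receives the observed reward.
            for set_id in self.state_to_sets(state):
                self.stats[(set_id, action)].update(reward)

        def q_value(self, state, action, weighted=True):
            means, weights = [], []
            for set_id in self.state_to_sets(state):
                s = self.stats[(set_id, action)]
                if s.n == 0:
                    continue
                means.append(s.mean)
                v = s.variance()
                # Weighted variant: lower reward variance gives a larger weight;
                # fall back to a simple average otherwise.
                weights.append(1.0 / v if weighted and math.isfinite(v) and v > 0 else 1.0)
            if not means:
                return 0.0
            total = sum(weights)
            return sum(w * m for w, m in zip(weights, means)) / total

Setting weighted=False reproduces the simple-average variant that the abstract reports as inferior to the variance-weighted one.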
Keywords
games of skill; learning (artificial intelligence); minimax techniques; search problems; statistical analysis; action-value function; board game; minimax method; reinforcement learning; search depth; statistical method; statistical value function; Application software; Computational efficiency; Computer science; Learning; State-space methods; Statistical analysis
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), 2002
Print_ISBN
981-04-7524-1
Type
conf
DOI
10.1109/ICONIP.2002.1202860
Filename
1202860