DocumentCode
130258
Title
Learning to play fighting game using massive play data
Author
Hyunsoo Park; Kyung-Joong Kim
Author_Institution
Dept. of Comput. Sci. & Eng., Sejong Univ., Seoul, South Korea
fYear
2014
fDate
26-29 Aug. 2014
Firstpage
1
Lastpage
2
Abstract
Designing fighting game AI is a challenging problem because the program must react in real time and requires expert knowledge of action combinations. In fact, most entries in the 2013 fighting game AI competition were based on expert rules. In this paper, we propose an automatic policy learning method for a fighting game AI bot. In the training stage, the AI continuously plays fighting games against 12 bots (10 entries from the 2013 competition and 2 example bots) and stores massive play data (about 10 GB). UCB1 is used to collect the data actively. In the testing stage, the agent searches the logs for similar situations and selects the skills with the highest rewards. In this way, it is possible to construct a fighting game AI with minimal expert knowledge. Experimental results show that the learned agent can defeat the two example bots and achieves performance comparable to the winner of the 2013 competition.
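The paper itself does not include code; as an illustration only, the sketch below shows the standard UCB1 rule the abstract cites for active data collection, with each bandit arm standing for a candidate skill whose observed reward is tracked during training play. The class and method names (UCB1, select, update) and the reward signal are hypothetical, not taken from the paper.

```python
import math

class UCB1:
    """Minimal UCB1 bandit: each arm is a candidate skill (action)."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms    # times each skill was tried
        self.values = [0.0] * n_arms  # running mean reward per skill

    def select(self):
        # Try every skill once before applying the UCB1 rule.
        for arm, count in enumerate(self.counts):
            if count == 0:
                return arm
        total = sum(self.counts)
        # UCB1 score: mean reward plus an exploration bonus that shrinks
        # as a skill is tried more often.
        scores = [
            self.values[a] + math.sqrt(2.0 * math.log(total) / self.counts[a])
            for a in range(len(self.counts))
        ]
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, arm, reward):
        # Incremental update of the mean reward for the chosen skill.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Usage sketch: pick a skill, observe a (hypothetical) reward such as
# damage dealt minus damage taken, then feed it back to the bandit.
bandit = UCB1(n_arms=5)
skill = bandit.select()
bandit.update(skill, reward=0.3)
```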
Keywords
computer games; expert systems; learning (artificial intelligence); automatic policy learning method; expert knowledge; fighting game AI bot; massive play data; Artificial intelligence; Games; Quantization (signal); Fighting game AI; Game AI competition; Multi-armed bandits problem; UCB1
fLanguage
English
Publisher
IEEE
Conference_Title
Computational Intelligence and Games (CIG), 2014 IEEE Conference on
Conference_Location
Dortmund
Type
conf
DOI
10.1109/CIG.2014.6932921
Filename
6932921