DocumentCode :
3683524
Title :
Combining Monte Carlo tree search and apprenticeship learning for capture the flag
Author :
Jayden Ivanovo;William L. Raffe;Fabio Zambetta;Xiaodong Li
Author_Institution :
School of Computer Science and IT, RMIT University, Melbourne, Australia
fYear :
2015
Firstpage :
154
Lastpage :
161
Abstract :
In this paper we introduce a novel approach to agent control in competitive video games that combines Monte Carlo Tree Search (MCTS) and Apprenticeship Learning (AL). More specifically, an opponent model created through AL is used during the expansion phase of the Upper Confidence Bounds for Trees (UCT) variant of MCTS. We show how this approach can be applied to a game of Capture the Flag (CTF), an environment which is both non-deterministic and partially observable. The performance gain of a controller utilizing an opponent model learned via AL, when compared to a controller using plain UCT, is shown with both win/loss ratios and TrueSkill rankings. Additionally, we build on previous findings by providing evidence of a bias towards a particular style of play in the AI Sandbox CTF environment. We believe that the approach highlighted here can be extended to a wider range of games beyond CTF.
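This record does not include the paper's implementation, so the following is only a rough, non-authoritative sketch of the idea the abstract describes: a standard UCT loop whose expansion phase consults a learned opponent model on nodes where the opponent is to move. Every name here (the Node class, the state interface with legal_actions/apply/is_terminal/reward/opponent_to_move, and opponent_model.predict) is a hypothetical placeholder, not the authors' API.

    import math
    import random

    class Node:
        def __init__(self, state, parent=None, action=None):
            self.state = state
            self.parent = parent
            self.action = action
            self.children = []
            self.untried = list(state.legal_actions())
            self.visits = 0
            self.value = 0.0

        def uct_child(self, c=1.4):
            # Selection: UCB1 over fully expanded children.
            return max(
                self.children,
                key=lambda n: n.value / n.visits
                + c * math.sqrt(math.log(self.visits) / n.visits),
            )

        def expand(self, opponent_model):
            # Expansion guided by the opponent model: on opponent-to-move nodes,
            # prefer the action the learned model predicts; otherwise (or if the
            # prediction is already expanded) fall back to a random untried action.
            action = None
            if self.state.opponent_to_move():
                predicted = opponent_model.predict(self.state)
                if predicted in self.untried:
                    action = predicted
            if action is None:
                action = random.choice(self.untried)
            self.untried.remove(action)
            child = Node(self.state.apply(action), parent=self, action=action)
            self.children.append(child)
            return child

    def uct_search(root_state, opponent_model, iterations=1000):
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            # Selection: descend while the node is fully expanded and has children.
            while not node.untried and node.children:
                node = node.uct_child()
            # Expansion (opponent-model-biased, as sketched above).
            if node.untried:
                node = node.expand(opponent_model)
            # Simulation: uniformly random playout to a terminal state.
            state = node.state
            while not state.is_terminal():
                state = state.apply(random.choice(state.legal_actions()))
            reward = state.reward()
            # Backpropagation (simplified: a single scalar reward from the
            # searching player's perspective, no sign flipping per ply).
            while node is not None:
                node.visits += 1
                node.value += reward
                node = node.parent
        # Return the most-visited root action.
        return max(root.children, key=lambda n: n.visits).action

This sketch only changes the expansion step relative to vanilla UCT; how the opponent model is trained via apprenticeship learning, and how partial observability is handled, are left to the paper itself.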
Keywords :
"Games","Trajectory","Monte Carlo methods","Computers","Adaptation models","Learning (artificial intelligence)"
Publisher :
ieee
Conference_Titel :
2015 IEEE Conference on Computational Intelligence and Games (CIG)
ISSN :
2325-4270
Electronic_ISSN :
2325-4289
Type :
conf
DOI :
10.1109/CIG.2015.7317914
Filename :
7317914
Link To Document :
https://doi.org/10.1109/CIG.2015.7317914