DocumentCode
1874041
Title
Improving Temporal Difference game agent control using a dynamic exploration rate during control learning
Author
Galway, Leo; Charles, Darryl; Black, Michaela
Author_Institution
Sch. of Comput. & Inf. Eng., Univ. of Ulster, Coleraine, UK
fYear
2009
fDate
7-10 Sept. 2009
Firstpage
38
Lastpage
45
Abstract
This paper investigates the use of a dynamically generated exploration rate for a reinforcement learning-based game agent controller within a dynamic digital game environment. Temporal difference learning has been employed for the real-time generation of reactive game agent behaviors within a variation of the classic arcade game Pac-Man. Due to the dynamic nature of the game environment, initial experiments used a static, low value for the exploration rate employed during action selection while learning. Further experiments were then conducted in which a genetic algorithm dynamically generated a value for the exploration rate prior to learning. The results show that an improvement in the overall performance of the game agent controller may be achieved when a dynamic exploration rate is used. In particular, if invocation of the genetic algorithm is gated by a measure of the game agent's current performance, further gains in overall performance may be achieved.
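To illustrate the combination the abstract describes, the following is a minimal Python sketch, not the authors' implementation: all function names, parameters, default values, and the GA operators (truncation selection, blend crossover, Gaussian mutation) are illustrative assumptions. It pairs epsilon-greedy action selection and a tabular temporal difference update with a toy genetic algorithm that derives the exploration rate, plus a performance-gated retune in the spirit of the abstract's final point.

import random

def select_action(q, state, actions, epsilon):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit
    # the action with the highest current Q-value for this state.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def td_update(q, state, action, reward, next_state, actions,
              alpha=0.1, gamma=0.9):
    # One Q-learning-style temporal difference update; alpha and gamma
    # are illustrative values, not those reported in the paper.
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def evolve_epsilon(fitness, pop_size=10, generations=20, sigma=0.05):
    # Toy GA over candidate exploration rates in [0, 1]. `fitness` is a
    # hypothetical callback scoring a candidate epsilon, e.g. the average
    # game score the agent attains when learning with that rate.
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                # blend crossover
            child += random.gauss(0.0, sigma)    # Gaussian mutation
            children.append(min(max(child, 0.0), 1.0))
        pop = parents + children
    return max(pop, key=fitness)

def maybe_retune(epsilon, recent_score, threshold, fitness):
    # Performance-gated use of the GA: only re-derive the exploration
    # rate when the agent's recent performance falls below a threshold.
    return evolve_epsilon(fitness) if recent_score < threshold else epsilon

In this sketch the GA runs before (or between) learning episodes rather than inside the TD update loop, matching the abstract's description of generating the exploration rate prior to learning.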
Keywords
computer games; genetic algorithms; learning (artificial intelligence); software agents; Pac-Man; classic arcade game; control learning; dynamic digital game environment; dynamic exploration rate; game agent control; genetic algorithm; reactive game agent behavior; reinforcement learning; temporal difference learning; computational intelligence
fLanguage
English
Publisher
IEEE
Conference_Titel
2009 IEEE Symposium on Computational Intelligence and Games (CIG 2009)
Conference_Location
Milan, Italy
Print_ISBN
978-1-4244-4814-2
Electronic_ISBN
978-1-4244-4815-9
Type
conf
DOI
10.1109/CIG.2009.5286497
Filename
5286497