DocumentCode :
2498231
Title :
Protecting against evaluation overfitting in empirical reinforcement learning
Author :
Whiteson, Shimon ; Tanner, Brian ; Taylor, Matthew E. ; Stone, Peter
Author_Institution :
Inf. Inst., Univ. of Amsterdam, Amsterdam, Netherlands
fYear :
2011
fDate :
11-15 April 2011
Firstpage :
120
Lastpage :
127
Abstract :
Empirical evaluations play an important role in machine learning. However, the usefulness of any evaluation depends on the empirical methodology employed. Designing good empirical methodologies is difficult in part because agents can overfit test evaluations and thereby obtain misleadingly high scores. We argue that reinforcement learning is particularly vulnerable to environment overfitting and propose as a remedy generalized methodologies, in which evaluations are based on multiple environments sampled from a distribution. In addition, we consider how to summarize performance when scores from different environments may not have commensurate values. Finally, we present proof-of-concept results demonstrating how these methodologies can validate an intuitively useful range-adaptive tile coding method.
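The generalized methodology described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `make_env`, the agent score functions, and the rank-based aggregation are assumptions chosen here to show one way of summarizing performance when scores from different environments are not commensurate.

```python
import random

def generalized_evaluation(make_env, agents, n_envs=20, seed=0):
    """Evaluate each agent on multiple environments sampled from a
    distribution, then summarize performance with per-environment ranks
    so that environments with large score ranges do not dominate.

    make_env: hypothetical factory mapping a sampled parameter to an
              environment (assumption, not from the paper).
    agents:   callables mapping an environment to a raw score
              (higher is better).
    Returns a dict of average rank per agent (1.0 = best possible).
    """
    rng = random.Random(seed)
    # Sample environments from a distribution over environment parameters.
    envs = [make_env(rng.random()) for _ in range(n_envs)]
    # Raw, possibly non-commensurate scores per agent per environment.
    raw = {a: [a(env) for env in envs] for a in agents}
    # Rank agents within each environment (1 = best), then average ranks.
    summary = {a: 0.0 for a in agents}
    for i in range(n_envs):
        ordered = sorted(agents, key=lambda a: raw[a][i], reverse=True)
        for rank, a in enumerate(ordered, start=1):
            summary[a] += rank / n_envs
    return summary
```

Because scores are converted to within-environment ranks before aggregation, an agent cannot obtain a misleadingly high summary by excelling only on environments whose raw scores happen to have large magnitudes.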
Keywords :
generalisation (artificial intelligence); learning (artificial intelligence); evaluation overfitting; machine learning; range-adaptive tile coding method; reinforcement learning; remedy generalized methodology; Algorithm design and analysis; Learning; Machine learning; Supervised learning; Tiles; Tuning; Uncertainty;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL)
Conference_Location :
Paris
Print_ISBN :
978-1-4244-9887-1
Type :
conf
DOI :
10.1109/ADPRL.2011.5967363
Filename :
5967363