DocumentCode
2220022
Title
An empirical evaluation of interval estimation for Markov decision processes
Author
Strehl, Alexander L.; Littman, Michael L.
Author_Institution
Dept. of Comput. Sci., Rutgers Univ., Piscataway, NJ, USA
fYear
2004
fDate
15-17 Nov. 2004
Firstpage
128
Lastpage
135
Abstract
This work takes an empirical approach to evaluating three model-based reinforcement-learning methods. All methods aim to speed up the learning process by mixing exploitation of learned knowledge with exploration of possibly promising alternatives. We consider ε-greedy exploration, which is computationally cheap and popular but unfocused in its exploration effort; R-Max exploration, a simplification of an exploration scheme that comes with a theoretical guarantee of efficiency; and a well-grounded approach, model-based interval estimation, that better integrates exploration and exploitation. Our experiments indicate that effective exploration can result in dramatic improvements in the observed rate of learning.
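As a rough illustration of the contrast drawn in the abstract, the sketch below compares ε-greedy action selection with an optimistic, interval-based selection rule that acts on the upper end of a confidence interval around each action's estimated value. This is a minimal, hypothetical sketch in a bandit-style setting (the function names, the confidence-bonus term, and the constants are assumptions for illustration), not the paper's MBIE algorithm, which constructs such intervals over a learned MDP model.

```python
import math
import random

def epsilon_greedy_action(q_estimates, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy one.

    Exploration is unfocused: every non-greedy action is equally likely,
    regardless of how uncertain its value estimate is.
    """
    if random.random() < epsilon:
        return random.randrange(len(q_estimates))
    return max(range(len(q_estimates)), key=lambda a: q_estimates[a])

def interval_based_action(q_estimates, counts, confidence=1.0):
    """Pick the action with the highest upper confidence bound.

    Each estimate gets a bonus that shrinks as the action is tried more
    often, so exploration is directed toward poorly understood actions.
    (Illustrative only; MBIE builds its intervals over an MDP model.)
    """
    total = sum(counts)

    def upper_bound(a):
        n = counts[a]
        if n == 0:
            return float("inf")  # untried actions look maximally promising
        return q_estimates[a] + confidence * math.sqrt(math.log(total + 1) / n)

    return max(range(len(q_estimates)), key=upper_bound)

if __name__ == "__main__":
    q = [0.2, 0.5, 0.4]   # current value estimates (assumed)
    n = [10, 2, 0]        # visit counts for each action (assumed)
    print(epsilon_greedy_action(q, epsilon=0.1))
    print(interval_based_action(q, n))
```

The design point the example makes explicit is the one the abstract argues empirically: ε-greedy spreads its exploration uniformly, while an interval-based rule concentrates exploration on actions whose values are still uncertain.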
Keywords
Markov processes; computational complexity; decision theory; decision trees; greedy algorithms; learning (artificial intelligence); optimisation; ε-greedy exploration; Markov decision processes; R-Max exploration; model-based interval estimation; model-based reinforcement-learning methods; Arm; Artificial intelligence; Computer science; Convergence; Learning; Mathematical model; Pursuit algorithms; Sampling methods; State-space methods
fLanguage
English
Publisher
ieee
Conference_Title
16th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2004)
ISSN
1082-3409
Print_ISBN
0-7695-2236-X
Type
conf
DOI
10.1109/ICTAI.2004.28
Filename
1374179