DocumentCode :
3003764
Title :
Adaptive stepsize selection for online Q-learning in a non-stationary environment
Author :
Levy, Kim ; Vázquez-Abad, Felisa J. ; Costa, Andre
Author_Institution :
Dept. of Math. & Stat., Melbourne Univ., Vic.
fYear :
2006
fDate :
10-12 July 2006
Firstpage :
372
Lastpage :
377
Abstract :
We consider the problem of real-time control of a discrete-time Markov decision process (MDP) in a non-stationary environment, which is characterized by large, sudden changes in the parameters of the MDP. We consider here an online version of the well-known Q-learning algorithm, which operates directly in its target environment. In order to track changes, the stepsizes (or learning rates) must be bounded away from zero. In this paper, we show how the theory of constant-stepsize stochastic approximation algorithms can be used to motivate and develop an adaptive stepsize algorithm that is appropriate for the online learning scenario described above. Our algorithm automatically achieves a desirable balance between accuracy and rate of reaction, and seeks to track the optimal policy with some pre-determined level of confidence.
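The constant-stepsize Q-learning update that the abstract builds on can be sketched as follows. This is a minimal illustration only: the paper's adaptive stepsize selection rule is not reproduced here, and the toy MDP, function name, and parameter values are assumptions for demonstration.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One constant-stepsize Q-learning step:
        Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s_next, a') - Q(s,a))
    Keeping alpha bounded away from zero (constant, rather than decaying)
    lets the estimates track a non-stationary environment, at the cost of
    residual variance in the Q-values."""
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])
    return Q

# Toy usage on a 2-state, 2-action Q-table (hypothetical example).
Q = [[0.0, 0.0], [0.0, 0.0]]
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1, alpha=0.1)
```

In the adaptive scheme the abstract describes, the constant stepsize `alpha` would itself be adjusted online to balance tracking speed against estimation accuracy.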
Keywords :
Markov processes; approximation theory; learning (artificial intelligence); adaptive stepsize algorithm; adaptive stepsize selection; constant stepsize stochastic approximation; discrete-time Markov decision process; online Q-learning; real-time control; Adaptive control; Approximation algorithms; Convergence; Mathematics; Performance evaluation; Programmable control; State estimation; Statistics; Stochastic processes; Target tracking;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Discrete Event Systems, 2006 8th International Workshop on
Conference_Location :
Ann Arbor, MI
Print_ISBN :
1-4244-0053-8
Type :
conf
DOI :
10.1109/WODES.2006.382396
Filename :
4267647