DocumentCode :
336649
Title :
New value iteration and Q-learning methods for the average cost dynamic programming problem
Author :
Bertsekas, Dimitri P.
Author_Institution :
Lab. for Inf. & Decision Syst., MIT, Cambridge, MA, USA
Volume :
3
fYear :
1998
fDate :
1998
Firstpage :
2692
Abstract :
We propose a new value iteration method for the classical average cost Markovian decision problem, under the assumption that all stationary policies are unichain and that there exists a state which is recurrent under all stationary policies. The method is motivated by a relation between the average cost problem and an associated stochastic shortest path problem. In contrast to the standard relative value iteration, our method involves a weighted sup norm contraction, and for this reason it admits Gauss-Seidel and asynchronous implementations. Computational tests indicate that the Gauss-Seidel version of the new method substantially outperforms the standard method on difficult problems. The contraction property also makes the method a suitable basis for the development of asynchronous Q-learning methods.
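Illustrative_Code :
The abstract contrasts the proposed contracting iteration with the standard relative value iteration that it improves upon. The sketch below shows that standard baseline method for a finite unichain MDP with known transition probabilities; the function name, array shapes, and stopping rule are illustrative assumptions, not taken from the paper. The paper's own method additionally exploits the recurrent reference state and a stochastic shortest path reformulation to obtain a weighted sup norm contraction, which is what permits the Gauss-Seidel and asynchronous variants mentioned above.

```python
import numpy as np

def relative_value_iteration(P, g, ref_state=0, tol=1e-8, max_iter=100_000):
    """Standard relative value iteration for an average-cost MDP (baseline,
    not the paper's contracting method).

    P : array of shape (n_actions, n_states, n_states); P[u, i, j] is the
        probability of moving from state i to state j under action u.
    g : array of shape (n_states, n_actions); g[i, u] is the one-stage cost.

    Returns an estimate of the optimal average cost and a differential cost
    vector normalized so that its value at `ref_state` is zero.
    """
    n_states, n_actions = g.shape
    h = np.zeros(n_states)
    lam = 0.0
    for _ in range(max_iter):
        # Bellman operator: (Th)(i) = min_u [ g(i, u) + sum_j P[u, i, j] h(j) ]
        Th = (g + np.einsum('uij,j->iu', P, h)).min(axis=1)
        lam = Th[ref_state]      # running estimate of the optimal average cost
        h_new = Th - lam         # subtract at ref_state to keep iterates bounded
        if np.max(np.abs(h_new - h)) < tol:
            return lam, h_new
        h = h_new
    return lam, h

# Example usage on a small 2-state, 2-action MDP (made-up numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.7, 0.3]]])   # P[u, i, j]
g = np.array([[1.0, 0.5], [2.0, 1.5]])     # g[i, u]
lam, h = relative_value_iteration(P, g)
```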
Keywords :
Markov processes; decision theory; dynamic programming; iterative methods; management science; probability; Gauss-Seidel method; Markovian decision; Q-learning methods; average cost problem; stochastic shortest path problem; value iteration method; Cost function; Equations; Gaussian processes; Laboratories; Shortest path problem; Stochastic processes; Testing
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Proceedings of the 37th IEEE Conference on Decision and Control, 1998
Conference_Location :
Tampa, FL, USA
ISSN :
0191-2216
Print_ISBN :
0-7803-4394-8
Type :
conf
DOI :
10.1109/CDC.1998.757860
Filename :
757860