Title of article :
Finding optimal memoryless policies of POMDPs under the expected average reward criterion
Author/Authors :
Yanjie Li, Baoqun Yin, Hongsheng Xi
Issue Information :
Periodical with serial number, year 2011
Pages :
12
From page :
556
To page :
567
Abstract :
In this paper, partially observable Markov decision processes (POMDPs) with discrete state and action spaces under the average reward criterion are considered from a recently developed sensitivity point of view. By analyzing the average-reward performance difference formula, we propose a policy iteration algorithm with step sizes to obtain an optimal or locally optimal memoryless policy. The algorithm improves the policy along the same direction as standard policy iteration does, and suitable step sizes guarantee its convergence. Moreover, the algorithm can be applied to Markov decision processes (MDPs) with correlated actions. Two numerical examples are provided to illustrate the applicability of the algorithm.
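As a rough illustration of the kind of update the abstract describes, the sketch below performs policy iteration with step sizes on a memoryless POMDP policy under the average reward criterion: it evaluates the induced state chain, computes performance potentials, forms the greedy (policy-iteration) direction per observation, and moves only a step-size fraction toward it. This is not the authors' implementation; the array shapes, the observation-weighted improvement direction, the diminishing step-size schedule 1/(k+2), and the toy model numbers are all assumptions made for illustration.

```python
import numpy as np

def stationary_dist(Pth):
    """Stationary distribution of an ergodic transition matrix Pth (n x n)."""
    n = Pth.shape[0]
    A = np.vstack([Pth.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0  # enforce sum(mu) = 1
    return np.linalg.lstsq(A, b, rcond=None)[0]

def policy_iteration_with_steps(P, r, O, theta, iters=200):
    """Improve a memoryless policy theta[o, a] = Pr(a | observation o)
    along the policy-iteration direction, using step sizes.
    P[a, s, t]: transition probability; r[s, a]: reward; O[s, o]: Pr(o | s).
    All shapes are illustrative assumptions, not the paper's notation."""
    n_s = P.shape[1]
    n_o = O.shape[1]
    eta = 0.0
    for k in range(iters):
        pi_sa = O @ theta                          # induced Pr(a | s)
        Pth = np.einsum('sa,ast->st', pi_sa, P)    # induced state chain
        r_th = np.einsum('sa,sa->s', pi_sa, r)     # induced reward vector
        mu = stationary_dist(Pth)
        eta = mu @ r_th                            # average reward
        # Performance potential g: (I - Pth) g = r_th - eta, with mu @ g = 0.
        g = np.linalg.solve(np.eye(n_s) - Pth + np.outer(np.ones(n_s), mu),
                            r_th - eta)
        Q = r + np.einsum('ast,t->sa', P, g)       # state-action potentials
        W = O.T @ (mu[:, None] * Q)                # Q weighted per observation
        greedy = np.zeros_like(theta)              # greedy improvement direction
        greedy[np.arange(n_o), W.argmax(axis=1)] = 1.0
        gamma = 1.0 / (k + 2)                      # assumed diminishing step size
        theta = (1.0 - gamma) * theta + gamma * greedy
    return theta, eta

# Toy 2-state, 2-action, 2-observation model (illustrative numbers only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])
O = np.array([[0.8, 0.2], [0.3, 0.7]])
theta0 = np.full((2, 2), 0.5)
theta, eta = policy_iteration_with_steps(P, r, O, theta0)
print("average reward:", eta)
```

The convex-combination update is what distinguishes this scheme from standard policy iteration, which would jump directly to the greedy policy; the abstract notes that suitably chosen step sizes guarantee convergence.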
Keywords :
POMDPs , Policy iteration with step sizes , Correlated actions , Performance difference , Memoryless policy
Journal title :
European Journal of Operational Research
Serial Year :
2011
Record number :
1313213