Title :
Drift and monotonicity conditions for continuous-time controlled Markov chains with an average criterion
Author :
Guo, Xianping ; Hernández-Lerma, Onésimo
Author_Institution :
Dept. de Matemáticas, CINVESTAV-IPN, Mexico City, Mexico
Abstract :
We give conditions for the existence of average optimal policies for continuous-time controlled Markov chains with a denumerable state space and Borel action sets. The transition rates are allowed to be unbounded, and the reward/cost rates may have neither upper nor lower bounds. In the spirit of the "drift and monotonicity" conditions for continuous-time Markov processes, we propose a new set of conditions on the primitive data of the controlled process. Under these conditions, the existence of optimal (deterministic) stationary policies within the class of randomized Markov policies is proved using the extended generator approach, rather than Kolmogorov's forward equation used in the previous literature, and the convergence of a policy iteration method is also established. Moreover, we use a controlled queueing system to show that all of our conditions are satisfied, whereas those in the previous literature fail to hold.
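To make the policy iteration method mentioned in the abstract concrete, the following is a minimal numerical sketch (not the paper's proof technique) of average-reward policy iteration for a controlled M/M/1-type queue, truncated to a finite state space {0, ..., N}. The arrival rate, the finite set of admissible service rates, the cost coefficients, and the truncation level are illustrative assumptions and do not come from the paper; the paper itself treats a denumerable state space with possibly unbounded rates.

    import numpy as np

    N = 50                      # truncation level for the denumerable state space (assumed)
    LAM = 3.0                   # arrival rate (assumed)
    ACTIONS = [4.0, 6.0, 8.0]   # admissible service rates (assumed finite action set)
    HOLD, SERVE = 1.0, 0.5      # holding-cost and service-cost coefficients (assumed)

    def reward_rate(i, a):
        # Reward rate r(i, a): negative of holding plus service cost.
        return -(HOLD * i + SERVE * a)

    def rate_row(i, a):
        # Row i of the transition-rate matrix Q_a on {0, ..., N} under service rate a.
        q = np.zeros(N + 1)
        if i < N:
            q[i + 1] = LAM      # arrival
        if i > 0:
            q[i - 1] = a        # service completion
        q[i] = -q.sum()         # conservative generator: each row sums to zero
        return q

    def evaluate(policy):
        # Policy evaluation: solve the Poisson equation Q_f h + r_f = g * 1 with h(0) = 0.
        # Unknowns: x = [g, h(1), ..., h(N)].
        A = np.zeros((N + 1, N + 1))
        b = np.zeros(N + 1)
        for i in range(N + 1):
            q = rate_row(i, policy[i])
            A[i, 0] = -1.0      # coefficient of the gain g
            A[i, 1:] = q[1:]    # h(0) = 0, so its column is dropped
            b[i] = -reward_rate(i, policy[i])
        x = np.linalg.solve(A, b)
        g, h = x[0], np.concatenate(([0.0], x[1:]))
        return g, h

    def improve(h):
        # Policy improvement: maximize r(i, a) + sum_j q_a(i, j) h(j) at each state.
        return [max(ACTIONS, key=lambda a: reward_rate(i, a) + rate_row(i, a) @ h)
                for i in range(N + 1)]

    policy = [ACTIONS[0]] * (N + 1)
    for _ in range(100):
        gain, h = evaluate(policy)
        new_policy = improve(h)
        if new_policy == policy:
            break
        policy = new_policy
    print("optimal average reward (truncated model):", gain)

In this sketch the iteration alternates policy evaluation (solving a linear system for the gain g and bias h) with policy improvement, and stops when the policy is unchanged; the truncation to a finite chain is only for numerical illustration of the average criterion.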
Keywords :
Markov processes; continuous time systems; convergence of numerical methods; decision theory; iterative methods; optimal control; probability; queueing theory; Borel action sets; average criterion; average optimal policies; continuous-time controlled Markov chains; controlled queueing system; denumerable state-space; deterministic policies; drift; extended generator approach; monotonicity conditions; optimal stationary policies; policy iteration method; primitive data; randomized Markov policies; reward/cost rates; Cities and towns; Control systems; Convergence; Cost function; Integral equations; Markov processes; Mathematics; Optimal control; Process control; Stochastic systems;
Journal_Title :
Automatic Control, IEEE Transactions on
DOI :
10.1109/TAC.2002.808469