DocumentCode :
3085229
Title :
Optimality of pure strategies in stochastic decision processes
Author :
Feinberg, Eugene A.
Author_Institution :
Dept. of Appl. Math. & Stat., State Univ. of New York, Stony Brook, NY, USA
fYear :
1990
fDate :
5-7 Dec 1990
Firstpage :
2149
Abstract :
Discrete-time, infinite-horizon stochastic decision processes with various reward criteria are addressed. Sufficient conditions are obtained for the value of a class of strategies to be equal to the value of the subclass of nonrandomized strategies from this class. Two different methods for proving that nonrandomized strategies are as good as arbitrary ones are considered. The first method is based on the fact that the strategic measure of any strategy may be represented as a linear combination (or a linear operator) of strategic measures generated by nonrandomized strategies and the same initial distribution. This method is applicable to various criteria and classes of strategies. The second method is applicable to Markov decision processes with the expected total reward criterion. It is based on linearity properties of optimality equations, on the approximation of dynamic programming models by negative dynamic programming models, and on the replacement of the initial model by another one whose states represent information about the past in the initial model.
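To illustrate the first method described in the abstract, the following is a brief sketch of the mixture-representation argument; the notation (P^pi_mu for the strategic measure, Phi for the set of nonrandomized strategies, nu for a mixing measure) is illustrative and not taken from the paper. Assuming that for a strategy \pi and initial distribution \mu the strategic measure admits the representation
\[
P^{\pi}_{\mu}(\cdot) \;=\; \int_{\Phi} P^{\varphi}_{\mu}(\cdot)\,\nu(d\varphi),
\]
where \nu is a probability measure on the set \Phi of nonrandomized strategies, then for any reward functional R evaluated by expectation, linearity of the integral gives
\[
\mathbb{E}^{\pi}_{\mu} R \;=\; \int_{\Phi} \mathbb{E}^{\varphi}_{\mu} R \,\nu(d\varphi) \;\le\; \sup_{\varphi \in \Phi} \mathbb{E}^{\varphi}_{\mu} R,
\]
so the value over nonrandomized strategies is no smaller than the value over arbitrary strategies with the same initial distribution.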
Keywords :
decision theory; dynamic programming; stochastic processes; Markov decision processes; optimality; stochastic decision processes; strategic measure; sufficient conditions; History; Infinite horizon; Integral equations; Linearity; Mathematics; Statistics; Strategic planning;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 29th IEEE Conference on Decision and Control, 1990
Conference_Location :
Honolulu, HI
Type :
conf
DOI :
10.1109/CDC.1990.204006
Filename :
204006