DocumentCode :
1215846
Title :
Evolutionary policy iteration for solving Markov decision processes
Author :
Chang, Hyeong Soo ; Lee, Hong-Gi ; Fu, Michael C. ; Marcus, Steven I.
Author_Institution :
Dept. of Comput. Sci. & Eng., Sogang Univ., Seoul, South Korea
Volume :
50
Issue :
11
fYear :
2005
Firstpage :
1804
Lastpage :
1808
Abstract :
We propose a novel algorithm called evolutionary policy iteration (EPI) for solving infinite horizon discounted reward Markov decision processes. EPI inherits the spirit of policy iteration but eliminates the need to maximize over the entire action space in the policy improvement step, so it should be most effective for problems with very large action spaces. EPI iteratively generates a "population" (a set of policies) such that the performance of the "elite policy" of each population monotonically improves with respect to a defined fitness function. EPI converges with probability one to a population whose elite policy is an optimal policy. EPI is naturally parallelizable; accordingly, a distributed variant of EPI is also studied.
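The abstract's population step can be illustrated with a small sketch. The code below is a hypothetical toy implementation, not the authors' reference code: it assumes a small random tabular MDP, evaluates policies exactly by solving the linear Bellman system, and builds each generation's elite via "policy switching" (at each state, act like the population member whose value is highest there), which guarantees the elite dominates every member and hence improves monotonically. The mutation scheme and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP: 4 states, 3 actions, discount factor 0.9.
nS, nA, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state distribution
R = rng.random((nS, nA))                       # immediate rewards

def policy_value(pi):
    """Exact policy evaluation: solve (I - gamma * P_pi) v = r_pi."""
    P_pi = P[np.arange(nS), pi]                # (nS, nS) transition matrix under pi
    r_pi = R[np.arange(nS), pi]
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

def policy_switching(policies):
    """At each state, follow the member policy with the highest value there.
    The switched policy's value dominates every member's value."""
    vals = np.array([policy_value(pi) for pi in policies])
    best = vals.argmax(axis=0)                 # best member index per state
    return np.array([policies[best[s]][s] for s in range(nS)])

def epi(pop_size=5, generations=20, mutation_rate=0.3):
    """Sketch of evolutionary policy iteration on the toy MDP above."""
    pop = [rng.integers(nA, size=nS) for _ in range(pop_size)]
    elite = policy_switching(pop)
    for _ in range(generations):
        # Next generation: keep the elite, plus random mutations of it.
        pop = [elite] + [
            np.where(rng.random(nS) < mutation_rate,
                     rng.integers(nA, size=nS), elite)
            for _ in range(pop_size - 1)
        ]
        elite = policy_switching(pop)          # elite value never decreases
    return elite

pi_star = epi()
```

Because policy switching only compares values at each state (never maximizes over the full action space), the per-iteration cost scales with the population size rather than the number of actions, which is the point of EPI for very large action spaces.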
Keywords :
Markov processes; evolutionary computation; infinite horizon; iterative methods; Markov decision process; elite policy; evolutionary policy iteration; infinite horizon discounted reward; optimal policy; Biotechnology; Business; Computer science; Contracts; Defense industry; Evolutionary computation; Genetic algorithms; Infinite horizon; Power engineering and energy; State-space methods; (Distributed) policy iteration; Markov decision process; evolutionary algorithm; genetic algorithm; parallelization;
fLanguage :
English
Journal_Title :
Automatic Control, IEEE Transactions on
Publisher :
ieee
ISSN :
0018-9286
Type :
jour
DOI :
10.1109/TAC.2005.858644
Filename :
1532410