DocumentCode :
3631248
Title :
Policy search with cross-entropy optimization of basis functions
Author :
Lucian Busoniu;Damien Ernst;Bart De Schutter;Robert Babuska
Author_Institution :
Delft Center for Systems and Control, Delft University of Technology, The Netherlands
fYear :
2009
Firstpage :
153
Lastpage :
160
Abstract :
This paper introduces a novel algorithm for approximate policy search in continuous-state, discrete-action Markov decision processes (MDPs). Previous policy search approaches have typically used ad hoc parameterizations developed for specific MDPs. In contrast, the proposed algorithm employs a flexible policy parameterization, suitable for solving general discrete-action MDPs. The algorithm looks for the best closed-loop policy that can be represented using a given number of basis functions, where a discrete action is assigned to each basis function. The locations and shapes of the basis functions are optimized, together with the action assignments. This allows a large class of policies to be represented. The optimization is carried out with the cross-entropy method and evaluates the policies by their empirical return from a representative set of initial states. We report simulation experiments in which the algorithm reliably obtains good policies with only a small number of basis functions, albeit at a sizable computational cost.
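Illustration :
The abstract outlines the method at a high level; the following is a minimal, illustrative sketch (not the authors' code) of cross-entropy policy search over basis-function locations, shapes, and action assignments. The toy one-dimensional "move to the origin" task, the horizon and discount settings, and all identifiers (step, empirical_return, etc.) are assumptions made for this example.

"""Sketch of cross-entropy policy search with RBF policies.
All task details are illustrative assumptions, not from the paper."""
import numpy as np

rng = np.random.default_rng(0)
N_BF, DIM = 4, 1                      # number of basis functions, state dimension
ACTIONS = np.array([-1.0, 0.0, 1.0])  # discrete action set
GAMMA, HORIZON = 0.95, 50

def policy(x, centers, widths, action_idx):
    """Action assigned to the basis function with the largest
    activation at state x (the paper's policy representation)."""
    act = np.exp(-np.sum(((x - centers) / widths) ** 2, axis=1))
    return ACTIONS[action_idx[np.argmax(act)]]

def step(x, u):
    """Toy dynamics (assumed): noisy integrator, reward for nearing the origin."""
    x_next = np.clip(x + 0.1 * u + 0.01 * rng.standard_normal(DIM), -1, 1)
    return x_next, -np.abs(x_next).sum()

def empirical_return(centers, widths, action_idx, x0_set):
    """Average discounted return from a representative set of initial states."""
    total = 0.0
    for x0 in x0_set:
        x, disc = x0.copy(), 1.0
        for _ in range(HORIZON):
            x, r = step(x, policy(x, centers, widths, action_idx))
            total += disc * r
            disc *= GAMMA
    return total / len(x0_set)

# Cross-entropy optimization: Gaussian sampling distributions for the
# continuous parameters (locations, widths), categorical distributions
# for the discrete action assignments.
mu_c, sig_c = np.zeros((N_BF, DIM)), np.ones((N_BF, DIM))
mu_w, sig_w = 0.5 * np.ones((N_BF, DIM)), 0.5 * np.ones((N_BF, DIM))
p_a = np.full((N_BF, len(ACTIONS)), 1.0 / len(ACTIONS))
x0_set = [np.array([s]) for s in np.linspace(-1, 1, 5)]

for it in range(30):
    samples, scores = [], []
    for _ in range(50):
        c = mu_c + sig_c * rng.standard_normal((N_BF, DIM))
        w = np.abs(mu_w + sig_w * rng.standard_normal((N_BF, DIM))) + 1e-3
        a = np.array([rng.choice(len(ACTIONS), p=p_a[i]) for i in range(N_BF)])
        samples.append((c, w, a))
        scores.append(empirical_return(c, w, a, x0_set))
    elite = np.argsort(scores)[-10:]            # keep the best 20% of samples
    ec = np.stack([samples[i][0] for i in elite])
    ew = np.stack([samples[i][1] for i in elite])
    ea = np.stack([samples[i][2] for i in elite])
    mu_c, sig_c = ec.mean(0), ec.std(0) + 1e-3  # refit Gaussian parameters
    mu_w, sig_w = ew.mean(0), ew.std(0) + 1e-3
    for i in range(N_BF):                       # refit categorical action probabilities
        counts = np.bincount(ea[:, i], minlength=len(ACTIONS))
        p_a[i] = counts / counts.sum()

print("return of final mean policy:",
      empirical_return(mu_c, mu_w, p_a.argmax(1), x0_set))

The smoothing constants (1e-3) keep the sampling distributions from collapsing prematurely; the elite fraction and sample counts are placeholders, not the paper's settings.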
Keywords :
"Function approximation","Shape","Optimization methods","Automatic control","Stochastic processes","Automatic generation control","Marine technology","Computational modeling","Computational efficiency","Operations research"
Publisher :
IEEE
Conference_Titel :
2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL '09)
ISSN :
2325-1824
Print_ISBN :
978-1-4244-2761-1
Electronic_ISBN :
2325-1867
Type :
conf
DOI :
10.1109/ADPRL.2009.4927539
Filename :
4927539