DocumentCode :
3312658
Title :
An actor-critic method using Least Squares Temporal Difference learning
Author :
Paschalidis, Ioannis Ch. ; Li, Keyong ; Estanjini, Reza Moazzez
Author_Institution :
Dept. of Electr. & Comput. Eng., Boston Univ., Brookline, MA, USA
fYear :
2009
fDate :
15-18 Dec. 2009
Firstpage :
2564
Lastpage :
2569
Abstract :
In this paper, we use a Least Squares Temporal Difference (LSTD) algorithm in an actor-critic framework where the actor and the critic operate concurrently. That is, instead of learning the value function or policy gradient of a fixed policy, the critic carries out its learning on one sample path while the policy is slowly varying. Convergence of such a process has previously been proven for the first-order TD algorithms TD(λ) and TD(1). However, the conversion to the more powerful LSTD turns out not to be straightforward, because certain conditions on the stepsize sequences must be modified for the LSTD case. We propose a solution and prove convergence of the resulting process. Furthermore, we apply the LSTD actor-critic method to the problem of intelligently dispatching forklifts in a warehouse.
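The abstract describes a two-timescale scheme: the critic accumulates LSTD statistics along a single sample path while the actor updates the policy parameters with a faster-decaying stepsize, so the policy varies slowly relative to the critic. The following Python sketch illustrates this general idea only; the environment interface (env.reset/env.step), the feature maps phi and psi, the Gibbs policy, the discounted-reward setting, and the specific stepsize schedule are illustrative assumptions, not the paper's exact construction or conditions.

```python
import numpy as np

def gibbs_action(theta, psi, s, actions, rng):
    """Sample from a Gibbs (softmax) policy over a finite action set."""
    prefs = np.array([theta @ psi(s, a) for a in actions])
    prefs -= prefs.max()                      # numerical stability
    p = np.exp(prefs) / np.exp(prefs).sum()
    return rng.choice(len(actions), p=p)

def lstd_actor_critic(env, phi, psi, actions, n_features, n_params,
                      n_steps=10_000, gamma=0.99, seed=0):
    # Hypothetical env interface: env.reset() -> state,
    # env.step(action) -> (next_state, reward).
    rng = np.random.default_rng(seed)
    A = 1e-3 * np.eye(n_features)             # regularized LSTD matrix
    b = np.zeros(n_features)
    theta = np.zeros(n_params)                # actor (policy) parameters

    s = env.reset()
    for t in range(1, n_steps + 1):
        i = gibbs_action(theta, psi, s, actions, rng)
        s_next, r = env.step(actions[i])

        # Critic: accumulate LSTD statistics along the single sample path
        # while the policy is (slowly) changing underneath it.
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += r * f
        w = np.linalg.solve(A, b)             # least-squares critic weights

        # Actor: a stepsize decaying faster than 1/t, one way to keep the
        # policy on the slower timescale (the paper's precise stepsize
        # conditions for the LSTD case differ and are its contribution).
        beta = 1.0 / (t * np.log(t + 1.0) + 1.0)
        td_error = r + gamma * (f_next @ w) - f @ w
        theta += beta * td_error * psi(s, actions[i])

        s = s_next
    return theta, w
```

Here the TD error is used as a surrogate advantage signal in the actor update, a common actor-critic variant; the paper should be consulted for the exact critic-to-actor coupling it analyzes.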
Keywords :
Markov processes; learning (artificial intelligence); least squares approximations; warehousing; actor-critic method; forklift dispatch application; learning; least squares temporal difference algorithm; warehouse; Computer science; Convergence; Cost function; Decision making; Dispatching; Dynamic programming; Learning; Least squares methods; Numerical simulation;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 28th Chinese Control Conference (CDC/CCC 2009)
Conference_Location :
Shanghai
ISSN :
0191-2216
Print_ISBN :
978-1-4244-3871-6
Electronic_ISBN :
0191-2216
Type :
conf
DOI :
10.1109/CDC.2009.5400592
Filename :
5400592