  • DocumentCode
    2277060
  • Title
    Importance sampling actor-critic algorithms
  • Author
    Williams, Jason L.; Fisher, John W., III; Willsky, Alan S.
  • Author_Institution
    Lab. for Inf. & Decision Syst., Massachusetts Inst. of Technol., Cambridge, MA
  • fYear
    2006
  • fDate
    14-16 June 2006
  • Abstract
    Importance sampling (IS) and actor-critic are two methods that have been used to reduce the variance of gradient estimates in policy gradient optimization methods. We show how IS can be used with temporal difference methods to estimate a cost function parameter for one policy using the entire history of system interactions, incorporating many different policies. The resulting algorithm is then applied to improving gradient estimates in a policy gradient optimization. The empirical results demonstrate a 20-40 times reduction in variance over the IS estimator for an example queueing problem, resulting in a similar factor of improvement in convergence for a gradient search.
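    The abstract's key mechanism, reusing transitions collected under many different policies to estimate a quantity for one target policy, can be sketched as an importance-sampling-weighted temporal difference update. The sketch below is a minimal illustration of that general idea, not the paper's algorithm; all names (value, target_policy, behavior_policy, alpha, gamma) are assumed for the example.

```python
# Minimal sketch (assumed names, not the paper's algorithm): one
# per-decision importance-sampling-weighted TD(0) update for estimating
# state values of a target policy from a transition generated by a
# different behavior policy.
def is_td0_update(value, s, a, r, s_next, target_policy, behavior_policy,
                  alpha=0.1, gamma=0.99):
    """Apply one TD(0) step on a transition (s, a, r, s_next) that was
    collected under behavior_policy, reweighted toward target_policy."""
    # Likelihood ratio corrects for the mismatch between the policy that
    # generated the action and the policy being evaluated.
    rho = target_policy(a, s) / behavior_policy(a, s)
    td_error = r + gamma * value[s_next] - value[s]
    value[s] += alpha * rho * td_error
    return value
```

    Because each sample is reweighted by a likelihood ratio of this kind, transitions from the entire interaction history can contribute to the estimate for the current policy, which is the reuse the abstract describes; the estimated cost function then serves as the critic that reduces the variance of the policy gradient.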
  • Keywords
    estimation theory; gradient methods; importance sampling; optimisation; parameter estimation; cost function parameter estimation; gradient estimates; gradient search; importance sampling actor-critic algorithms; policy gradient optimization; temporal difference; Approximation algorithms; Approximation methods; Computational modeling; Cost function; Function approximation; Gradient methods; History; Laboratories; Monte Carlo methods; Stochastic processes
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    American Control Conference, 2006
  • Conference_Location
    Minneapolis, MN
  • Print_ISBN
    1-4244-0209-3
  • Electronic_ISBN
    1-4244-0209-3
  • Type
    conf
  • DOI
    10.1109/ACC.2006.1656451
  • Filename
    1656451