Title :
Approximate dynamic programming for stochastic systems with additive and multiplicative noise
Author :
Jiang, Yu ; Jiang, Zhong-Ping
Author_Institution :
Dept. of Electr. & Comput. Eng., Polytech. Inst. of New York Univ., Brooklyn, NY, USA
Abstract :
This paper studies the stochastic optimal control problem with additive and multiplicative noise via reinforcement learning (RL) and approximate/adaptive dynamic programming (ADP). Using Itô calculus, a policy iteration algorithm is derived in the presence of both additive and multiplicative noise. It is shown that the expectation of the approximated cost matrix is guaranteed to converge to the solution of a certain algebraic Riccati equation, which gives rise to the optimal cost value. Furthermore, the covariance of the approximated cost matrix can be reduced by increasing the length of the time interval between two consecutive iterations. Finally, the efficiency of the proposed ADP methodology is illustrated by a numerical example.
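The model-based counterpart of the policy iteration the abstract describes can be sketched as follows. The snippet below is an illustrative, simplified version: it assumes a known linear system dx = (Ax + Bu)dt + Cx dw with one multiplicative-noise channel and quadratic cost, whereas the paper's ADP scheme approximates these iterates from data. All matrices and the initial gain are hypothetical, not taken from the paper.

```python
import numpy as np

# Illustrative system dx = (A x + B u) dt + C x dw, cost E ∫ (x'Qx + u'Ru) dt.
# These matrices are assumed for the sketch, not from the paper.
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.1, 0.0], [0.0, 0.1]])   # multiplicative-noise gain
Q = np.eye(2)
R = np.eye(1)

def stochastic_lyap(M, C, Qbar):
    """Solve M'P + P M + C'P C + Qbar = 0 for symmetric P via vectorization."""
    n = M.shape[0]
    I = np.eye(n)
    # vec(M'P) = (I ⊗ M') vec(P), vec(P M) = (M' ⊗ I) vec(P),
    # vec(C'P C) = (C' ⊗ C') vec(P)
    L = np.kron(I, M.T) + np.kron(M.T, I) + np.kron(C.T, C.T)
    P = np.linalg.solve(L, -Qbar.reshape(-1)).reshape(n, n)
    return 0.5 * (P + P.T)               # symmetrize against round-off

K = np.array([[0.0, 5.0]])               # initial mean-square stabilizing gain (assumed)
for _ in range(30):
    M = A - B @ K                        # closed-loop drift
    P = stochastic_lyap(M, C, Q + K.T @ R @ K)      # policy evaluation
    K_new = np.linalg.solve(R, B.T @ P)             # policy improvement
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

# P now (approximately) solves the generalized algebraic Riccati equation
# A'P + PA + C'PC + Q - P B R^{-1} B' P = 0.
residual = A.T @ P + P @ A + C.T @ P @ C + Q - P @ B @ np.linalg.solve(R, B.T @ P)
```

Each pass evaluates the current gain by solving a Lyapunov-type equation (the C'PC term is the multiplicative-noise contribution that distinguishes it from the deterministic case) and then improves the gain, converging to the Riccati solution mentioned in the abstract.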
Keywords :
Riccati equations; calculus; dynamic programming; optimal control; stochastic systems; Ito calculus; adaptive dynamic programming; additive noise; algebraic Riccati equation; approximate dynamic programming; approximated cost matrix; multiplicative noise; policy iteration algorithm; reinforcement learning; stochastic optimal control problem; stochastic systems; Additives; Approximation algorithms; Convergence; Covariance matrix; Noise; Steady-state; Symmetric matrices;
Conference_Titel :
2011 IEEE International Symposium on Intelligent Control (ISIC)
Conference_Location :
Denver, CO
Print_ISBN :
978-1-4577-1104-6
Electronic_ISBN :
2158-9860
DOI :
10.1109/ISIC.2011.6045404