Title :
Convergence of discretization procedures in dynamic programming
Author :
Bertsekas, Dimitri P.
Author_Institution :
University of Illinois, Urbana, IL, USA
Date :
6/1/1975
Abstract :
The computational solution of discrete-time stochastic optimal control problems by dynamic programming requires, in most cases, discretization of the state and control spaces whenever these spaces are infinite. In this short paper we consider a discretization procedure often employed in practice. Under certain compactness and Lipschitz continuity assumptions we show that the solution of the discretized algorithm converges to the solution of the continuous algorithm, as the discretization grids become finer and finer. Furthermore, any control law obtained from the discretized algorithm results in a value of the cost functional which converges to the optimal value of the problem.
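The discretization procedure the abstract refers to can be illustrated with a small sketch: backward dynamic programming on finite grids for a toy one-dimensional stochastic control problem. Everything here (the dynamics x_{k+1} = x_k + u_k + w_k, the quadratic costs, the grid bounds, and all parameter names) is an illustrative assumption, not taken from the paper; the paper's contribution is the convergence analysis of such schemes, not this particular instance.

```python
import numpy as np

def discretized_dp(T=5, n_x=51, n_u=11, sigma=0.1):
    """Backward DP over discretized state and control grids for a toy
    1-D stochastic problem x_{k+1} = x_k + u_k + w_k (illustrative only;
    all dynamics, costs, and parameters are assumptions, not the paper's)."""
    xs = np.linspace(-2.0, 2.0, n_x)      # discretized state space
    us = np.linspace(-1.0, 1.0, n_u)      # discretized control space
    ws = np.array([-sigma, 0.0, sigma])   # finite-support disturbance
    pw = np.array([0.25, 0.5, 0.25])      # disturbance probabilities
    J = xs ** 2                           # terminal cost g_T(x) = x^2
    policy = np.zeros((T, n_x), dtype=int)
    for k in range(T - 1, -1, -1):
        # next state for every (x, u, w) triple, mapped back to the grid
        xn = xs[:, None, None] + us[None, :, None] + ws[None, None, :]
        idx = np.clip(np.searchsorted(xs, xn), 0, n_x - 1)
        # stage cost x^2 + u^2 plus expected discretized cost-to-go
        Q = xs[:, None] ** 2 + us[None, :] ** 2 + (J[idx] * pw).sum(axis=2)
        policy[k] = Q.argmin(axis=1)      # minimizing control index per state
        J = Q.min(axis=1)                 # discretized value function at stage k
    return xs, us, J, policy
```

Refining the grids (increasing `n_x` and `n_u`) is the limit studied in the paper: under the stated compactness and Lipschitz assumptions, the discretized value function and the cost of the resulting control law converge to their continuous counterparts.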
Keywords :
Dynamic programming; Stochastic optimal control; Nonlinear discrete-time stochastic systems; Optimal control; Convergence; Cost function; Probability distribution; Stochastic processes;
Journal_Title :
IEEE Transactions on Automatic Control
DOI :
10.1109/TAC.1975.1100984