DocumentCode :
558838
Title :
Model Predictive Control and Dynamic Programming
Author :
Lee, Jay H.
Author_Institution :
Dept. of Chem. & Biomol. Eng., KAIST, Daejeon, South Korea
fYear :
2011
fDate :
26-29 Oct. 2011
Firstpage :
1807
Lastpage :
1809
Abstract :
Model Predictive Control (MPC) and Dynamic Programming (DP) are two different methods for obtaining an optimal feedback control law. The former uses on-line optimization to solve an open-loop optimal control problem cast over a finite-size time window at each sample time. A feedback control law is defined implicitly by repeating the optimization calculation after a feedback update of the state at each sample time. In contrast, the latter attempts to derive an explicit feedback law off-line by deriving and solving the so-called Bellman optimality equation. Both have been used successfully to solve optimal control problems, the former for constrained control problems and the latter for the unconstrained linear quadratic optimal control problem. In this paper, we examine their differences and similarities as well as their relative merits and demerits. We also propose ways to integrate the two methods to alleviate each other's shortcomings.
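(A minimal sketch, not taken from the paper, illustrating the two viewpoints on an unconstrained linear-quadratic problem with assumed matrices A, B, Q, R and a hypothetical horizon N. The DP route computes an explicit time-varying gain off-line via a backward Riccati recursion; the MPC route re-solves the finite-horizon problem from the current state at every sample and applies only the first input. For this unconstrained case the two coincide.)

import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed double-integrator dynamics
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # assumed state weight
R = np.array([[0.1]])                     # assumed input weight
N = 20                                    # assumed finite prediction horizon

def dp_riccati_gains(A, B, Q, R, N):
    # Dynamic programming: backward Riccati recursion gives an explicit
    # time-varying feedback law u_k = -K_k x_k, computed entirely off-line.
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

def mpc_step(x, A, B, Q, R, N):
    # Model predictive control: at each sample, solve the open-loop
    # finite-horizon problem from the current state and apply only the
    # first input. Here the problem is unconstrained, so the optimal
    # first move equals the first DP gain applied to the current state.
    gains = dp_riccati_gains(A, B, Q, R, N)
    return -gains[0] @ x

# Closed loop: repeated re-optimization defines the feedback law implicitly.
x = np.array([[5.0], [0.0]])
for k in range(5):
    u = mpc_step(x, A, B, Q, R, N)
    x = A @ x + B @ u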
Keywords :
dynamic programming; feedback; linear quadratic control; open loop systems; predictive control; Bellman optimality equation; dynamic programming; finite size time window; model predictive control; online optimization; open-loop optimal control; optimal feedback control law; unconstrained linear quadratic optimal control; Dynamic programming; Feedback control; Mathematical model; Optimal control; Optimization; Predictive control; Predictive models; Dynamic Programming; Model Predictive Control; Optimal Feedback Control; Stochastic System Control;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Control, Automation and Systems (ICCAS), 2011 11th International Conference on
Conference_Location :
Gyeonggi-do
ISSN :
2093-7121
Print_ISBN :
978-1-4577-0835-0
Type :
conf
Filename :
6106171