Title :
Continuous-Time ADP for Linear Systems with Partially Unknown Dynamics
Author :
Vrabie, Draguna ; Abu-Khalaf, Murad ; Lewis, Frank L. ; Wang, Youyi
Author_Institution :
Automation & Robotics Research Institute, The University of Texas at Arlington, Arlington, TX, USA
Abstract :
Approximate dynamic programming (ADP) has been formulated and applied mainly to discrete-time systems. Extending the ADP concept to continuous-time systems raises difficult issues related to sampling time and to the required knowledge of the system model. This paper presents a novel online adaptive critic (AC) scheme, based on ADP, that solves the infinite-horizon optimal control problem for continuous-time dynamical systems, thus bringing together concepts from computational intelligence and control theory. Only partial knowledge of the system model is used: knowledge of the plant internal dynamics is not needed. The method is therefore suitable for determining the optimal controller for plants with partially unknown dynamics. It is shown that the proposed iterative ADP algorithm is in fact a quasi-Newton method for solving the underlying algebraic Riccati equation (ARE) of the optimal control problem. An initial gain that determines a stabilizing control policy is not required. In control-theoretic terms, the paper develops a direct adaptive control algorithm that obtains the optimal control solution without knowledge of the system A matrix.
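The idea described in the abstract can be illustrated, for the scalar case, by the following minimal sketch of an integral policy-iteration scheme: the critic identifies the cost matrix P from trajectory data measured over a sampling interval, and the actor update uses only the input matrix B, never the internal-dynamics matrix A. This is an illustrative reconstruction, not the paper's exact algorithm; for simplicity it starts from a stabilizing gain and simulates the closed loop in closed form (`ct_adp_lqr_scalar` and its parameters are hypothetical names).

```python
import math

def ct_adp_lqr_scalar(a, b, q, r, k0, T=0.1, iters=10):
    """Integral policy iteration for the scalar plant dx/dt = a*x + b*u
    with cost integral of (q*x^2 + r*u^2) and feedback u = -k*x.

    The parameter `a` is used ONLY to simulate the plant (standing in for
    measured trajectory data); the critic and actor updates never use it.
    """
    k = k0
    for _ in range(iters):
        lam = a - b * k                      # closed-loop rate (simulation only)
        x0 = 1.0
        xT = x0 * math.exp(lam * T)          # "measured" state one interval later
        # Running cost accumulated along the trajectory over [0, T]
        c = q + r * k * k
        integral = c * x0 * x0 * (math.exp(2.0 * lam * T) - 1.0) / (2.0 * lam)
        # Critic (policy evaluation): p*x0^2 = integral + p*xT^2, solve for p
        p = integral / (x0 * x0 - xT * xT)
        # Actor (policy improvement): uses b and r only, not a
        k = b * p / r
    return k, p

# Example: a=1, b=1, q=r=1; the ARE 2*a*p - b^2*p^2/r + q = 0
# has the positive root p* = 1 + sqrt(2)
k, p = ct_adp_lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
```

In the scalar case a single data point per iteration suffices to identify p; in the matrix case the analogous step becomes a least-squares fit over several measured intervals.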
Keywords :
Riccati equations; adaptive control; continuous-time systems; dynamic programming; infinite horizon; linear systems; optimal control; stability; V-learning; adaptive critics; algebraic Riccati equation; approximate dynamic programming; computational intelligence; continuous-time ADP; continuous-time dynamical systems; control theory; discrete-time systems; infinite-horizon optimal control; iterative algorithms; online adaptive critic scheme; partially unknown dynamics; policy iterations; quasi-Newton method; sampling methods; sampling time; system model knowledge requirements;
Conference_Titel :
2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2007)
Conference_Location :
Honolulu, HI
Print_ISBN :
1-4244-0706-0
DOI :
10.1109/ADPRL.2007.368195