DocumentCode
75138
Title
Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning
Author
Modares, Hamidreza ; Lewis, Frank L.
Author_Institution
University of Texas at Arlington Research Institute, Fort Worth, TX, USA
Volume
59
Issue
11
fYear
2014
fDate
Nov. 2014
Firstpage
3051
Lastpage
3056
Abstract
In this technical note, an online learning algorithm is developed to solve the linear quadratic tracking (LQT) problem for partially-unknown continuous-time systems. It is shown that the value function is quadratic in the states of the system and the command generator. Based on this quadratic form, an LQT Bellman equation and an LQT algebraic Riccati equation (ARE) are derived. The integral reinforcement learning technique is then used to solve the LQT ARE online, without requiring knowledge of the system drift dynamics or the command generator dynamics. Convergence of the proposed online algorithm to the optimal control solution is verified, and a simulation example is provided to show the effectiveness of the proposed approach.
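For reference, a minimal sketch of the discounted LQT formulation the abstract describes is given below. The symbols A, B, F, Q, R, gamma, P, and the reinforcement interval T are not defined in the abstract and are introduced here as assumptions, so the equations should be read as the standard form such an IRL Bellman equation and control law typically take, not as the note's exact statement.

% Sketch under the assumptions stated above (state tracking, discounted quadratic cost).
\begin{align*}
  &\dot{x} = A x + B u, \qquad \dot{r} = F r, \qquad
   X \triangleq \begin{bmatrix} x \\ r \end{bmatrix}, \qquad
   B_1 \triangleq \begin{bmatrix} B \\ 0 \end{bmatrix}, \qquad
   Q_1 \triangleq \begin{bmatrix} Q & -Q \\ -Q & Q \end{bmatrix},\\
  &V\bigl(X(t)\bigr) = \int_t^{\infty} e^{-\gamma(\tau-t)}
     \bigl[(x-r)^{\top} Q (x-r) + u^{\top} R u\bigr]\,d\tau
     = X(t)^{\top} P X(t),\\
  &\text{IRL Bellman equation over } [t,\,t+T]:\quad
   X(t)^{\top} P X(t)
   = \int_t^{t+T} e^{-\gamma(\tau-t)}
     \bigl[X^{\top} Q_1 X + u^{\top} R u\bigr]\,d\tau
     + e^{-\gamma T}\, X(t+T)^{\top} P X(t+T),\\
  &u^{*} = -R^{-1} B_1^{\top} P X .
\end{align*}

Iterating on this Bellman equation with measured trajectory data (policy iteration) is what lets P be found without the drift matrices A and F, which is the sense in which the note's method handles partially-unknown dynamics; in this sketch only B (through B_1) enters the policy update.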
Keywords
Riccati equations; continuous time systems; learning (artificial intelligence); linear quadratic control; ARE; LQT Bellman equation; LQT algebraic Riccati equation; command generator; integral reinforcement learning technique; linear quadratic tracking control; online learning algorithm; optimal control solution; partially-unknown continuous-time systems; system state; value function; Equations; Generators; Heuristic algorithms; Learning (artificial intelligence); Mathematical model; Optimal control; Trajectory; Causal solution; integral reinforcement learning; linear quadratic tracking; policy iteration; reinforcement learning
fLanguage
English
Journal_Title
IEEE Transactions on Automatic Control
Publisher
IEEE
ISSN
0018-9286
Type
jour
DOI
10.1109/TAC.2014.2317301
Filename
6787009