Title of article :
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
Author/Authors :
Kiumarsi, Bahare; Lewis, Frank L.; Modares, Hamidreza; Karimpour, Ali; Naghibi-Sistani, Mohammad-Bagher
Issue Information :
Journal issue, serial year 2014
Pages :
9
From page :
1167
To page :
1175
Abstract :
In this paper, a novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner. It is assumed that the reference trajectory is generated by a linear command generator system. An augmented system composed of the original system and the command generator is constructed, and it is shown that the value function for the LQT is quadratic in terms of the state of the augmented system. Using the quadratic structure of the value function, a Bellman equation and an augmented algebraic Riccati equation (ARE) for solving the LQT are derived. In contrast to the standard solution of the LQT, which requires solving an ARE and a noncausal difference equation simultaneously, the proposed method obtains the optimal control input by solving only an augmented ARE. A Q-learning algorithm is developed to solve the augmented ARE online without any knowledge of the system dynamics or the command generator. Convergence to the optimal solution is shown. A simulation example is used to verify the effectiveness of the proposed control scheme.
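To make the procedure outlined in the abstract concrete, the following is a minimal NumPy sketch of discounted Q-learning by policy iteration for the LQT: it forms the augmented state from the plant state and the reference, fits a quadratic Q-function to measured data by least squares on the Bellman equation, and improves the policy from the fitted kernel. The plant (A, B, C), the command generator F, the weights Qy and R, the discount factor, and all variable and function names are illustrative assumptions for this sketch, not the paper's simulation example.

```python
import numpy as np

np.random.seed(0)

# --- Hypothetical plant and command generator (treated as unknown by the learner) ---
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])                # plant dynamics x_{k+1} = A x_k + B u_k
B = np.array([[0.0],
              [1.0]])
F = np.array([[1.0]])                     # command generator r_{k+1} = F r_k (constant reference)
C = np.array([[1.0, 0.0]])                # tracked output y_k = C x_k

n, m, p = 2, 1, 1                         # state, input, reference dimensions
Qy, R, gamma = 10.0, 1.0, 0.9             # illustrative weights and discount factor
C1 = np.hstack([C, -np.eye(p)])           # tracking error e_k = C x_k - r_k
Q1 = Qy * C1.T @ C1                       # weight on the augmented state X = [x; r]

def plant_step(x, r, u):
    """One step of the (unknown) plant and command generator."""
    return A @ x + B @ u, F @ r

# --- Q-learning by policy iteration on a quadratic Q-function ---
# Q(X, u) = z' H z with z = [X; u]; the greedy policy is u = -K X with
# K = H_uu^{-1} H_uX.  H is identified from input/state data by least squares
# on the Bellman equation, so A, B and F are never used by the learner.
nz = n + p + m

def quad_features(z):
    """Monomials z_i z_j (i <= j) parameterizing the symmetric form z' H z."""
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(nz) for j in range(i, nz)])

K = np.zeros((m, n + p))                  # initial admissible policy
for it in range(15):
    Phi, targets = [], []
    for _ in range(10):                   # short rollouts from fresh initial conditions
        x, r = np.random.randn(n), np.random.randn(p)
        for _ in range(40):
            X = np.concatenate([x, r])
            u = -K @ X + 0.1 * np.random.randn(m)       # probing noise for excitation
            x_next, r_next = plant_step(x, r, u)
            X_next = np.concatenate([x_next, r_next])
            u_next = -K @ X_next                        # action of the policy being evaluated
            stage_cost = X @ Q1 @ X + u @ (R * u)
            # Bellman equation: Q(X_k, u_k) - gamma * Q(X_{k+1}, u_{k+1}) = stage cost
            Phi.append(quad_features(np.concatenate([X, u]))
                       - gamma * quad_features(np.concatenate([X_next, u_next])))
            targets.append(stage_cost)
            x, r = x_next, r_next
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)
    H = np.zeros((nz, nz))                # rebuild symmetric H from its upper triangle
    idx = 0
    for i in range(nz):
        for j in range(i, nz):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    K = np.linalg.solve(H[n + p:, n + p:], H[n + p:, :n + p])   # policy improvement

print("learned tracking gain K =", K)
```

In this sketch the learner interacts with the plant only through measured data, which mirrors the model-free character described in the abstract; excitation is supplied by the probing noise and the repeated rollouts.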
Keywords :
Reinforcement learning, Policy iteration, Algebraic Riccati equation, Linear quadratic tracker
Journal title :
Automatica
Serial Year :
2014
Record number :
1449744