DocumentCode :
114566
Title :
Event-triggered optimal regulation of uncertain linear discrete-time systems by using Q-learning scheme
Author :
Sahoo, Avimanyu ; Jagannathan, S.
Author_Institution :
Dept. of Electr. & Comput. Eng., Missouri Univ. of Sci. & Technol., Rolla, MO, USA
fYear :
2014
fDate :
15-17 Dec. 2014
Firstpage :
1233
Lastpage :
1238
Abstract :
In this paper, an event-triggered optimal adaptive regulation scheme for uncertain linear discrete-time systems is proposed. The scheme solves the optimal control problem online and forward-in-time by using both dynamic programming and Q-learning. First, the time-varying action-dependent value function, or Q-function, is estimated online by an adaptive value function estimator (VFE) using an event-based state vector and a time-dependent basis function. The estimated value function parameters are then used to generate the optimal control gain matrix. Further, an aperiodic tuning law for the VFE parameters is proposed, not only to estimate the parameters but also to handle the terminal constraint. The parameters are tuned only at the event-trigger instants, thus reducing computation when compared to traditional optimal adaptive control. In addition, an adaptive event-trigger condition, which decides the event-trigger instants and guarantees stability of the closed-loop system, is derived analytically from the optimal performance criterion via the Lyapunov direct method. The existence of a non-trivial minimum inter-event time is also analyzed. Further, it is shown that the parameters converge asymptotically provided the persistency of excitation condition on the regression vector is satisfied. Finally, the analytical design is validated with simulation results.
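Illustrative sketch (not the authors' algorithm): the abstract describes Q-function estimation for a linear quadratic regulator combined with an event-trigger rule on the state error. The snippet below shows the general idea only, using a simplified infinite-horizon, batch least-squares Q-learning loop rather than the paper's time-varying basis, aperiodic tuning law, and terminal-constraint handling. The plant matrices, the basis `phi`, and the threshold `sigma_trig` are illustrative assumptions.

```python
# Hedged sketch of Q-learning-based LQR with a simple event-trigger rule.
# This is NOT the paper's event-triggered VFE scheme; it only illustrates
# (i) estimating a quadratic Q-function from data and (ii) holding the
# control between trigger instants. All numerical values are assumptions.
import numpy as np

# Simulated plant (treated as unknown by the learner; used only for data).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)          # state cost weight
R  = np.array([[1.0]])  # control cost weight
n, m = B.shape

def phi(z):
    """Quadratic basis: upper-triangular entries of z z^T, off-diagonals doubled."""
    zz = np.outer(z, z)
    iu = np.triu_indices(len(z))
    w = np.where(iu[0] == iu[1], 1.0, 2.0)
    return w * zz[iu]

def unpack_H(theta, d):
    """Rebuild the symmetric Q-function kernel H from the packed parameters."""
    H = np.zeros((d, d))
    H[np.triu_indices(d)] = theta
    return H + H.T - np.diag(np.diag(H))

def gain_from_H(H):
    """Greedy gain: u = K x minimizes [x; u]^T H [x; u] over u."""
    Huu, Hux = H[n:, n:], H[n:, :n]
    return -np.linalg.solve(Huu, Hux)

# Q-learning via least-squares policy iteration (exploration noise gives PE).
K = np.zeros((m, n))          # admissible initial gain (A is stable here)
rng = np.random.default_rng(0)
for _ in range(10):
    Phi, cvec = [], []
    x = np.array([1.0, -1.0])
    for _ in range(200):
        u = K @ x + 0.1 * rng.standard_normal(m)
        xn = A @ x + B @ u
        un = K @ xn           # next action under the current policy
        Phi.append(phi(np.concatenate([x, u])) - phi(np.concatenate([xn, un])))
        cvec.append(x @ Qc @ x + u @ R @ u)
        x = xn
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(cvec), rcond=None)
    K = gain_from_H(unpack_H(theta, n + m))

# Event-triggered execution: the control uses the last transmitted state and
# is refreshed only when the relative state error exceeds a threshold.
sigma_trig = 0.2              # illustrative trigger threshold
x = np.array([2.0, -1.0])
x_held = x.copy()
events = 0
for _ in range(100):
    if np.linalg.norm(x - x_held) > sigma_trig * np.linalg.norm(x):
        x_held = x.copy()     # trigger instant: transmit state, update control
        events += 1
    u = K @ x_held
    x = A @ x + B @ u
print("learned gain K:\n", K, "\nevents used:", events)
```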
Keywords :
Lyapunov methods; adaptive control; asymptotic stability; closed loop systems; constraint handling; discrete time systems; dynamic programming; learning (artificial intelligence); linear systems; optimal control; parameter estimation; regression analysis; uncertain systems; Lyapunov direct method; Q-function; Q-learning scheme; VFE parameters; adaptive value function estimator; aperiodic tuning law; closed-loop system stability; dynamic programming; event-based state vector; event-trigger instants; event-triggered optimal adaptive regulation; excitation condition; forward-in-time; nontrivial minimum inter-event time; optimal control gain matrix; optimal performance criterion; regression vector; terminal constraint handling; time dependent basis function; time varying action dependent value; uncertain linear discrete-time systems; value function parameter estimation; Adaptive systems; Asymptotic stability; Discrete-time systems; Dynamic programming; Optimal control; Parameter estimation; Vectors;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2014 IEEE 53rd Annual Conference on Decision and Control (CDC)
Conference_Location :
Los Angeles, CA
Print_ISBN :
978-1-4799-7746-8
Type :
conf
DOI :
10.1109/CDC.2014.7039550
Filename :
7039550