Title :
Model reference output feedback control using episodic natural actor-critic
Author :
Fang, Zhou ; Hao, Chuanchuan ; Li, Ping
Author_Institution :
Sch. of Aeronaut. & Astronaut., Zhejiang Univ., Hangzhou, China
Abstract :
In this paper, we develop a novel reinforcement learning algorithm that requires only the system output and converges to an optimal output feedback control policy with the expected dynamic performance. An informative reward function based on a reference model is adopted to represent the desired closed-loop performance intuitively, which significantly reduces the difficulty of reward construction. A stochastic output feedback control policy based on the PID law is used to relax the complete-observability requirement. The episodic Natural Actor-Critic (eNAC) algorithm is used for policy search. Simulations on a second-order unstable system and on a nonlinear LPV model of a UAV's longitudinal dynamics demonstrate the effectiveness of the proposed algorithm.
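The abstract's policy-search loop can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the plant, reference model, noise level, episode length, and learning rate are all assumed values, and the reward is a simple negative squared model-tracking error. The policy mean is a PID law on the tracking error (output feedback only), and each eNAC iteration regresses episode returns on the summed log-policy gradients plus a constant baseline feature, taking the regression weights as the natural-gradient direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, sigma=0.05, dt=0.05, T=50):
    """One episode: Gaussian policy whose mean is a PID law on the tracking error."""
    x = np.zeros(2)             # plant state [position, velocity] (assumed plant)
    ym, r = 0.0, 1.0            # reference-model output, step command
    e_int, e_prev = 0.0, 0.0
    grad_sum = np.zeros(3)      # sum of log-policy gradients over the episode
    ret = 0.0
    for _ in range(T):
        e = r - x[0]
        e_int += e * dt
        de = (e - e_prev) / dt
        e_prev = e
        phi = np.array([e, e_int, de])             # PID features (P, I, D terms)
        mean = theta @ phi
        u = mean + sigma * rng.standard_normal()   # stochastic output-feedback policy
        grad_sum += (u - mean) / sigma**2 * phi    # d/dtheta of log N(u; mean, sigma^2)
        x = x + dt * np.array([x[1], 0.5 * x[0] + u])  # unstable 2nd-order plant, Euler step
        ym = ym + dt * (-2.0 * (ym - r))               # stable 1st-order reference model
        ret -= (x[0] - ym) ** 2                        # reward: track the model output
    return grad_sum, ret

def enac_update(theta, n_episodes=20, alpha=0.05):
    """One eNAC iteration: least-squares regression of returns on episode
    log-gradients (plus a constant baseline feature) yields the natural gradient."""
    Psi = np.empty((n_episodes, theta.size + 1))
    R = np.empty(n_episodes)
    for i in range(n_episodes):
        g, ret = rollout(theta)
        Psi[i] = np.append(g, 1.0)   # append baseline feature
        R[i] = ret
    w, *_ = np.linalg.lstsq(Psi, R, rcond=None)
    step = w[:-1]                    # drop the baseline coefficient
    # Normalized step size: a common stabilization, exploiting the natural
    # gradient's invariance to scaling (an assumption here, not from the paper).
    return theta + alpha * step / (np.linalg.norm(step) + 1e-8)

theta = np.zeros(3)                  # PID gains, learned from scratch
for _ in range(30):
    theta = enac_update(theta)
```

Because the natural gradient comes from a per-iteration least-squares fit, no separate critic parameterization is needed; the constant baseline feature absorbs the average return, leaving the gradient direction in the remaining weights.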
Keywords :
autonomous aerial vehicles; closed loop systems; convergence; feedback; learning (artificial intelligence); linear systems; nonlinear control systems; observability; optimal control; search problems; stochastic systems; three-term control; PID feedback law; UAV longitudinal dynamics; closed loop performance; convergence; eNAC algorithm; episodic natural actor critic algorithm; informative reward function; model reference output feedback; nonlinear LPV model; observability; optimal control; policy search; reinforcement learning algorithm; reward construction; second-order unstable system; stochastic policy; Aerodynamics; Approximation algorithms; Educational institutions; Heuristic algorithms; Learning; Output feedback; Stochastic processes;
Conference_Title :
2012 IEEE International Symposium on Industrial Electronics (ISIE)
Conference_Location :
Hangzhou
Print_ISBN :
978-1-4673-0159-6
Electronic_ISBN :
2163-5137
DOI :
10.1109/ISIE.2012.6237275