Title :
Q-value based genetic reinforcement learning for fuzzy controller design
Author :
Juang, Chia-Feng
Author_Institution :
Dept. of Electr. Eng., Nat. Chung Hsing Univ., Taichung, Taiwan
Abstract :
This paper proposes a Q-value based genetic reinforcement (QGR) learning scheme for fuzzy controller design (QGRF). QGRF accomplishes GA-based fuzzy controller design in a reinforcement learning environment where only weak reinforcement signals are available. For a fuzzy controller, the precondition part is assigned a priori, and the consequent part is designed by QGRF. In QGRF, each individual in the GA population encodes the consequent-part parameters of a fuzzy controller and is associated with a Q-value, which serves as its fitness value for GA evolution. At each time step, an individual is selected according to the Q-values, the corresponding fuzzy controller is built and applied to the environment, and a critic signal is received. With this critic, Q-learning with eligibility traces is executed. After each trial, the GA is performed to search for better consequent parameters based on the learned Q-values. Thus, in QGRF, evolution takes place immediately after each trial, in contrast to general GA approaches, where many trials are run before each evolution step. The feasibility of QGRF is demonstrated through simulations of the cart-pole balancing problem with only binary reinforcement signals.
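The following is a minimal Python sketch of the trial-then-evolve loop described in the abstract. The population size, softmax selection over Q-values, Gaussian antecedents, mutation-only GA step, learning rates, and the toy environment_step plant are all illustrative assumptions, not the paper's actual settings; it is meant only to show how per-individual Q-values, eligibility traces, and GA evolution after each trial fit together.

```python
import numpy as np

POP_SIZE = 20          # number of individuals (consequent-parameter vectors) -- assumed
N_RULES = 9            # number of fuzzy rules; precondition part is fixed a priori
GAMMA, LAMBDA, ALPHA = 0.95, 0.8, 0.1   # discount, trace decay, learning rate (assumed)

rng = np.random.default_rng(0)
population = rng.uniform(-1.0, 1.0, size=(POP_SIZE, N_RULES))  # consequent parameters
q_values = np.zeros(POP_SIZE)            # one Q-value per individual, used as its fitness

def fuzzy_controller(consequents, state):
    """Controller output: fixed antecedent firing strengths (assumed Gaussian grid)
    weighted by the individual's consequent parameters."""
    centers = np.linspace(-1.0, 1.0, N_RULES)
    firing = np.exp(-((state - centers) ** 2) / 0.1)
    return float(firing @ consequents / (firing.sum() + 1e-9))

def environment_step(state, action):
    """Toy stand-in for the cart-pole plant: returns next state and a binary
    failure flag (the weak reinforcement signal)."""
    next_state = 0.9 * state + 0.1 * action + rng.normal(0.0, 0.02)
    return next_state, abs(next_state) > 1.0

def select_individual():
    """Select an individual according to the Q-values (softmax is an assumption)."""
    probs = np.exp(q_values - q_values.max())
    probs /= probs.sum()
    return rng.choice(POP_SIZE, p=probs)

def run_trial(max_steps=200):
    """One trial: act with Q-value-based selection, update Q with eligibility traces."""
    traces = np.zeros(POP_SIZE)
    state = rng.uniform(-0.1, 0.1)
    for _ in range(max_steps):
        idx = select_individual()
        action = fuzzy_controller(population[idx], state)
        next_state, failed = environment_step(state, action)
        reward = -1.0 if failed else 0.0          # binary critic signal
        traces *= GAMMA * LAMBDA                  # decay all traces
        traces[idx] += 1.0                        # accumulate trace of selected individual
        td_error = reward + GAMMA * q_values.max() - q_values[idx]
        q_values[:] += ALPHA * td_error * traces  # Q(lambda)-style update
        if failed:
            break
        state = next_state

def evolve():
    """After each trial: GA search over consequents using Q-values as fitness."""
    global population, q_values
    order = np.argsort(q_values)[::-1]
    parents = population[order[:POP_SIZE // 2]]
    children = parents + rng.normal(0.0, 0.05, size=parents.shape)  # mutation-only GA step
    population = np.vstack([parents, children])
    q_values = np.concatenate([q_values[order[:POP_SIZE // 2]]] * 2)

for trial in range(50):
    run_trial()
    evolve()        # evolution is performed immediately after each trial
```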
Keywords :
control system synthesis; fuzzy control; genetic algorithms; learning (artificial intelligence); GA-based design; Q-value based genetic reinforcement learning; binary signals; cart-pole balancing problem; eligibility trace; fitness value; fuzzy controller design; weak reinforcement signals; Automatic control; Design methodology; Education; Fuzzy control; Fuzzy systems; Genetics; Inference algorithms; Signal design; Supervised learning; Training data;
Conference_Titel :
The 12th IEEE International Conference on Fuzzy Systems, 2003 (FUZZ '03)
Print_ISBN :
0-7803-7810-5
DOI :
10.1109/FUZZ.2003.1209359