DocumentCode :
1676518
Title :
Linguistic reward-oriented Takagi-Sugeno fuzzy reinforcement learning
Author :
Yan, X.W. ; Deng, Z.D. ; Sun, Z.-Q.
Author_Institution :
State Key Lab. of Intelligent Technol. & Syst., Tsinghua Univ., Beijing, China
Volume :
1
fYear :
2001
fDate :
2001
Firstpage :
533
Lastpage :
536
Abstract :
This paper presents a new learning method that simultaneously addresses two significant sub-problems in reinforcement learning: continuous state spaces and linguistic rewards. Linguistic reward-oriented Takagi-Sugeno fuzzy reinforcement learning (LRTSFRL) is constructed by combining Q-learning with Takagi-Sugeno-type fuzzy inference systems. The proposed paradigm can solve complicated learning tasks over continuous domains and can also be used to design Takagi-Sugeno fuzzy logic controllers. Experiments on the double inverted pendulum system demonstrate the performance and applicability of the presented scheme. Finally, concluding remarks are drawn.
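The combination described in the abstract can be illustrated with a minimal sketch of fuzzy Q-learning: a zero-order Takagi-Sugeno system whose rule consequents are per-action Q-values, blended by normalized firing strengths and updated with the standard Q-learning rule. All names, the membership functions, and the toy task below are illustrative assumptions, not the authors' LRTSFRL algorithm or its linguistic-reward mechanism.

```python
import numpy as np

N_RULES, N_ACTIONS = 3, 2
centers = np.array([-1.0, 0.0, 1.0])   # Gaussian rule centers over state x (assumed)
q = np.zeros((N_RULES, N_ACTIONS))     # per-rule, per-action Q-values (rule consequents)

def firing(x):
    """Normalized rule firing strengths from Gaussian memberships."""
    w = np.exp(-((x - centers) ** 2) / 0.5)
    return w / w.sum()

def Q(x):
    """TS-style inference: firing-strength-weighted sum of rule Q-values."""
    return firing(x) @ q

def update(x, a, reward, x_next, alpha=0.1, gamma=0.9):
    """Q-learning update, distributing the TD error over rules by firing strength."""
    td = reward + gamma * Q(x_next).max() - Q(x)[a]
    q[:, a] += alpha * td * firing(x)

# Toy episode: reward +1 for taking action 0 when x < 0, else 0 (hypothetical task).
rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.uniform(-1.0, 1.0)
    a = int(rng.integers(N_ACTIONS))
    r = 1.0 if (a == 0 and x < 0) else 0.0
    update(x, a, r, rng.uniform(-1.0, 1.0))
```

After training on this toy task, the inferred Q-value of action 0 near x = -1 exceeds that of action 1, showing how the fuzzy rule base generalizes the tabular Q-learning update over a continuous state.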
Keywords :
fuzzy set theory; inference mechanisms; learning (artificial intelligence); LRTSFRL; Linguistic reward-oriented Takagi-Sugeno fuzzy reinforcement learning; Q-learning; Takagi-Sugeno type fuzzy inference systems; continuous space; double inverted pendulum system; fuzzy logic controller design; Control systems; Equations; Fuzzy systems; Gain; Input variables; Intelligent systems; Laboratories; Learning; Sun; Takagi-Sugeno model;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
The 10th IEEE International Conference on Fuzzy Systems, 2001
Conference_Location :
Melbourne, Vic.
Print_ISBN :
0-7803-7293-X
Type :
conf
DOI :
10.1109/FUZZ.2001.1007366
Filename :
1007366