DocumentCode :
2248278
Title :
Reduction of the dynamic state-space in fuzzy Q-learning
Author :
Kovács, Szilveszter ; Baranyi, Péter
Author_Institution :
Dept. of Inf. Technol., Miskolc Univ., Hungary
Volume :
2
fYear :
2004
fDate :
25-29 July 2004
Firstpage :
1075
Abstract :
Reinforcement learning (RL) methods, which cope with the control difficulties of an unknown environment, have recently been gaining popularity in the autonomous robotics community. One possible difficulty of applying reinforcement learning in complex situations is the huge size of the state-value or action-value function representation. The continuous-environment (continuous-valued) reinforcement learning case can be even more complicated, as the state-value or action-value functions become continuous functions. In this paper, we suggest a way of tackling these difficulties by applying SVD (singular value decomposition) methods.
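The SVD-based reduction the abstract refers to can be illustrated with a minimal sketch (not the paper's exact algorithm): a discretized state-value table over a 2-D continuous state space is compressed by keeping only its dominant singular values, which works well when the value function is smooth and therefore approximately low-rank. The grid sizes, the value function, and the rank `k` below are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: a smooth state-value function V(x, y),
# discretized on a 50x50 grid. Smooth value functions are typically
# close to low-rank, which is what makes SVD reduction effective.
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 50)
V = np.outer(np.sin(np.pi * x), np.cos(np.pi * y)) + 0.1 * np.outer(x, y)

# Full SVD, then truncate to rank k (assumed k = 2 for this example).
U, s, Vt = np.linalg.svd(V, full_matrices=False)
k = 2
V_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Storage drops from 50*50 table entries to k*(50 + 50 + 1) numbers.
full_size = V.size
reduced_size = k * (U.shape[0] + Vt.shape[1] + 1)
err = np.linalg.norm(V - V_approx) / np.linalg.norm(V)
print(full_size, reduced_size, err)
```

Here `V` is constructed as a sum of two outer products, so the rank-2 truncation reproduces it almost exactly while storing far fewer numbers; for a real learned value table one would choose `k` from the decay of the singular values.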
Keywords :
fuzzy set theory; learning (artificial intelligence); singular value decomposition; state-space methods; action value function representation; autonomous robotics community; dynamic state space reduction; fuzzy Q-learning; reinforcement learning methods; singular value decomposition; state value function representation; Environmental economics; Function approximation; Fuzzy control; Informatics; Information technology; Learning; Robots; State estimation; Telecommunication control; Turning;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Fuzzy Systems, 2004. Proceedings. 2004 IEEE International Conference on
ISSN :
1098-7584
Print_ISBN :
0-7803-8353-2
Type :
conf
DOI :
10.1109/FUZZY.2004.1375559
Filename :
1375559