DocumentCode :
3317802
Title :
Fuzzy Approximation for Convergent Model-Based Reinforcement Learning
Author :
Busoniu, L. ; Ernst, Damien ; De Schutter, Bart ; Babuska, Robert
Year :
2007
Date :
23-26 July 2007
Firstpage :
1
Lastpage :
6
Abstract :
Reinforcement learning (RL) is a learning control paradigm that provides well-understood algorithms with good convergence and consistency properties. Unfortunately, these algorithms require that process states and control actions take only discrete values. Approximate solutions using fuzzy representations have been proposed in the literature for the case when the states and possibly the actions are continuous. However, the link between these mainly heuristic solutions and the larger body of work on approximate RL, including convergence results, has not been made explicit. In this paper, we propose a fuzzy approximation structure for the Q-value iteration algorithm, and show that the resulting algorithm is convergent. The proof is based on an extension of previous results in approximate RL. We then propose a modified, serial version of the algorithm that is guaranteed to converge at least as fast as the original algorithm. An illustrative simulation example is also provided.
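To make the abstract's idea concrete, below is a minimal sketch of fuzzy Q-value iteration on a toy 1-D problem: Q-values are represented as a weighted sum of triangular fuzzy membership degrees over the state space, with one parameter per (fuzzy set, discrete action) pair, and all parameters are updated in parallel from the previous parameter vector until convergence. The dynamics, reward, membership functions, and all names here are illustrative assumptions, not the paper's benchmark.

```python
import numpy as np

# Toy deterministic system (illustrative, not from the paper):
# 1-D state x in [-1, 1], control u shifts x; reward penalizes distance to 0.
def f(x, u):
    return np.clip(x + 0.1 * u, -1.0, 1.0)

def rho(x, u):
    return -x ** 2

centers = np.linspace(-1.0, 1.0, 11)   # cores of triangular fuzzy sets
actions = np.array([-1.0, 0.0, 1.0])   # discrete action set
gamma = 0.9                            # discount factor

def phi(x):
    """Normalized triangular membership degrees (a partition of unity)."""
    w = np.maximum(0.0, 1.0 - np.abs(x - centers) / 0.2)
    return w / w.sum()

# One Q-parameter per (fuzzy set, action); the approximate Q-value is
# Q(x, u_j) = sum_i phi_i(x) * theta[i, j].
theta = np.zeros((len(centers), len(actions)))

def Q(x, j, th):
    return phi(x) @ th[:, j]

# Parallel fuzzy Q-iteration: every parameter is updated from the previous
# parameter vector; the update is a contraction, so the iteration converges.
for _ in range(200):
    new_theta = np.empty_like(theta)
    for i, xi in enumerate(centers):
        for j, uj in enumerate(actions):
            xn = f(xi, uj)
            new_theta[i, j] = rho(xi, uj) + gamma * max(
                Q(xn, jp, theta) for jp in range(len(actions)))
    theta = new_theta

# Greedy policy from the converged parameters: at x = 0.5 it should steer
# back toward the origin, i.e. pick the action -1.
best = actions[np.argmax([Q(0.5, j, theta) for j in range(len(actions))])]
```

The serial variant mentioned in the abstract would instead write each updated parameter into `theta` immediately (Gauss-Seidel style), reusing fresh values within the same sweep, which is why it can only converge faster.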
Keywords :
fuzzy set theory; iterative methods; learning (artificial intelligence); Q-value iteration algorithm; convergent model-based reinforcement learning; fuzzy approximation; Approximation algorithms; Control systems; Convergence; Fuzzy control; Fuzzy neural networks; Learning; Marine technology; Process control; Signal processing; State feedback;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
IEEE International Fuzzy Systems Conference, 2007 (FUZZ-IEEE 2007)
Conference_Location :
London
ISSN :
1098-7584
Print_ISBN :
1-4244-1209-9
Type :
conf
DOI :
10.1109/FUZZY.2007.4295497
Filename :
4295497