DocumentCode :
325163
Title :
Reinforcement function design and bias for efficient learning in mobile robots
Author :
Touzet, Claude ; Santos, Juan Miguel
Author_Institution :
CESAR, Oak Ridge Nat. Lab., TN, USA
Volume :
1
fYear :
1998
fDate :
4-9 May 1998
Firstpage :
153
Abstract :
The main paradigm in the sub-symbolic learning robot domain is reinforcement learning. Various techniques have been developed to deal with the memorization/generalization problem, demonstrating the superior ability of artificial neural network implementations. In this paper, we address the issue of designing the reinforcement function so as to optimize the exploration part of the learning. We also present and summarize work on the use of bias intended to achieve effective synthesis of the desired behavior. Demonstrative experiments involving a self-organizing map implementation of Q-learning and real mobile robots (Nomad 200 and Khepera) in an obstacle avoidance behavior synthesis task are described.
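The combination of a self-organizing map with Q-learning mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sensor and action dimensions, learning rates, and the proximity-based reinforcement function are all assumptions. Each SOM unit quantizes the sensor space and carries its own vector of Q-values; the winning unit plays the role of the discrete state in the Q-learning update.

```python
# Minimal sketch of Q-learning over a self-organizing map for obstacle
# avoidance. All sizes, constants, and the reward shaping are illustrative.
import numpy as np

N_UNITS = 16        # SOM units quantizing the sensor space (assumed size)
N_SENSORS = 8       # e.g. 8 proximity sensors, Khepera-like (assumption)
ACTIONS = [(-1.0, 1.0), (1.0, 1.0), (1.0, -1.0)]  # turn left / forward / turn right

rng = np.random.default_rng(0)
prototypes = rng.random((N_UNITS, N_SENSORS))   # SOM codebook vectors
q_values = np.zeros((N_UNITS, len(ACTIONS)))    # one Q-vector per SOM unit

ALPHA, GAMMA, SOM_LR, EPSILON = 0.3, 0.9, 0.05, 0.1

def winner(sensors):
    """Index of the SOM unit closest to the current sensor reading."""
    return int(np.argmin(np.linalg.norm(prototypes - sensors, axis=1)))

def reward(sensors):
    """Illustrative reinforcement function: punish proximity, reward free motion."""
    return -1.0 if sensors.max() > 0.8 else 0.1

def select_action(sensors):
    """Epsilon-greedy exploration over the winning unit's Q-values."""
    if rng.random() < EPSILON:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(q_values[winner(sensors)]))

def step(sensors, action_idx, next_sensors):
    """One Q-learning update; the winning SOM unit acts as the discrete state."""
    s, s_next = winner(sensors), winner(next_sensors)
    r = reward(next_sensors)
    td_target = r + GAMMA * q_values[s_next].max()
    q_values[s, action_idx] += ALPHA * (td_target - q_values[s, action_idx])
    # Move the winning prototype toward the observed reading (SOM adaptation).
    prototypes[s] += SOM_LR * (sensors - prototypes[s])
```

In this sketch the design of `reward` is where the paper's topic of reinforcement function design would enter: shaping when and how strongly negative reinforcement is delivered directly affects how much of the sensor space the robot explores before the behavior is synthesized.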
Keywords :
generalisation (artificial intelligence); learning (artificial intelligence); mobile robots; path planning; self-organising feature maps; Q-learning; bias; generalization; memorization; mobile robots; neural network; obstacle avoidance; reinforcement learning; self-organizing map; subsymbolic learning; Artificial neural networks; Computer networks; Design optimization; Human robot interaction; Laboratories; Learning; Mobile robots; Orbital robotics; Robot sensing systems; Signal synthesis;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1998 IEEE International Conference on Fuzzy Systems Proceedings, IEEE World Congress on Computational Intelligence
Conference_Location :
Anchorage, AK
ISSN :
1098-7584
Print_ISBN :
0-7803-4863-X
Type :
conf
DOI :
10.1109/FUZZY.1998.687475
Filename :
687475