DocumentCode
3484795
Title
Application of direct-vision-based reinforcement learning to a real mobile robot
Author
Iida, Masaru ; Sugisaka, Masanori ; Shibata, Katsunari
Author_Institution
Dept. of Electr. Eng., Oita Univ., Japan
Volume
5
fYear
2002
fDate
18-22 Nov. 2002
Firstpage
2556
Abstract
In this paper, it was confirmed that a real mobile robot with a simple visual sensor could learn appropriate actions to reach a target by Direct-Vision-Based reinforcement learning (RL). In Direct-Vision-Based RL, raw visual sensory signals are fed directly into a layered neural network, and the network is trained by Back Propagation using training signals generated by reinforcement learning. To account for the time delay in obtaining the visual sensory signals, it was proposed that the actor outputs be trained using the critic output from two time steps later. It was shown that the robot, equipped with a monochrome visual sensor, could acquire reaching actions toward a target object through learning from scratch, without any prior knowledge or human assistance.
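The abstract describes an actor-critic update in which raw pixels drive a layered network trained by backpropagation, with the actor trained against the critic value observed two steps later to compensate for sensor delay. The sketch below is a minimal illustration of that scheme, not the authors' code; the image size, layer sizes, learning rate, discount factor, exploration noise, and reward handling are all illustrative assumptions.

```python
# Minimal sketch of Direct-Vision-Based RL as summarized in the abstract.
# Raw monochrome pixels feed one layered network whose outputs are a critic
# value (index 0) and actor motor commands (indices 1..), trained by
# backpropagation. All sizes and constants below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 64          # flattened monochrome image (assumed size)
N_HIDDEN = 20
N_ACTORS = 2           # e.g. left/right wheel commands (assumption)
GAMMA = 0.9            # discount factor (assumption)
LR = 0.05              # learning rate (assumption)

W1 = rng.normal(0, 0.1, (N_HIDDEN, N_PIXELS + 1))
W2 = rng.normal(0, 0.1, (1 + N_ACTORS, N_HIDDEN + 1))  # row 0: critic, rest: actors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(pixels):
    x = np.append(pixels, 1.0)                 # input plus bias
    h = sigmoid(W1 @ x)
    hb = np.append(h, 1.0)                     # hidden plus bias
    y = sigmoid(W2 @ hb)                       # y[0] = critic value, y[1:] = actor outputs
    return x, h, hb, y

def backprop(x, h, hb, y, target):
    """One gradient step pulling the network outputs toward `target`."""
    global W1, W2
    err_out = (target - y) * y * (1.0 - y)     # sigmoid derivative at the output
    err_hid = (W2[:, :-1].T @ err_out) * h * (1.0 - h)
    W2 += LR * np.outer(err_out, hb)
    W1 += LR * np.outer(err_hid, x)

# Because the camera image is delayed, the actor at time t is trained with the
# critic output observed two steps later (t+2), as stated in the abstract.
history = []   # (x, h, hb, y, exploration_noise) for recent steps

def step(pixels, reward):
    """One control step: forward pass, delayed actor-critic update, noisy action."""
    x, h, hb, y = forward(pixels)
    if len(history) >= 2:
        xo, ho, hbo, yo, noise = history[-2]   # forward pass from two steps earlier
        td_target = reward + GAMMA * y[0]      # critic target uses the current value
        td_error = td_target - yo[0]
        target = yo.copy()
        target[0] = td_target                  # critic trained toward the TD target
        target[1:] = yo[1:] + td_error * noise # actor reinforced along explored direction
        backprop(xo, ho, hbo, yo, np.clip(target, 0.0, 1.0))
    noise = rng.normal(0.0, 0.1, N_ACTORS)     # exploration noise (assumption)
    history.append((x, h, hb, y, noise))
    return y[1:] + noise                       # motor command with exploration
```

In this reading, the same backpropagation pass trains both heads: the critic toward a TD target formed with the delayed value estimate, and the actor toward its previous output shifted by the TD error along the exploration noise actually applied.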
Keywords
backpropagation; intelligent robots; learning (artificial intelligence); mobile robots; neural nets; robot vision; actor outputs; autonomous robots; backpropagation; critic output; direct-vision-based reinforcement learning; layered neural network; monochrome sensor; raw visual sensory signals; real mobile robot; robot intelligence; simple visual sensor; time delay; Humans; Intelligent robots; Learning; Mobile robots; Neural networks; Orbital robotics; Robot kinematics; Robot sensing systems; Signal generators; State-space methods;
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), 2002
Print_ISBN
981-04-7524-1
Type
conf
DOI
10.1109/ICONIP.2002.1201956
Filename
1201956