DocumentCode :
3613900
Title :
A reinforcement learning approach to obstacle avoidance of mobile robots
Author :
K. Macek; I. Petrovic; N. Peric
Author_Institution :
Fac. of Electr. Eng. & Comput., Zagreb Univ., Croatia
fYear :
2002
Firstpage :
462
Lastpage :
466
Abstract :
One of the basic issues in the navigation of autonomous mobile robots is the obstacle avoidance task, which is commonly achieved using a reactive control paradigm in which a local mapping from perceived states to actions is acquired. A control strategy with learning capabilities in an unknown environment can be obtained using reinforcement learning, where the learning agent is given only sparse reward information. This leads to a credit assignment problem with both temporal and structural aspects. While the temporal credit assignment problem is solved by the core elements of the reinforcement learning agent, solving the structural credit assignment problem requires an appropriate internal state space representation of the environment. In this paper, a discrete coding of the input space using a neural network structure is presented, as opposed to the commonly used continuous internal representation. This enables faster and more efficient convergence of the reinforcement learning process.
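The abstract's central idea is that a discrete coding of the perceived state space lets a reinforcement learning agent with sparse rewards converge faster than with a continuous internal representation. The following is a minimal sketch of that idea, not the authors' implementation: range readings are binned into a discrete state, and a tabular Q-learning update handles the temporal credit assignment. The names (discretize_ranges, N_BINS) and the reward values are illustrative assumptions.

```python
import random
from collections import defaultdict

N_BINS = 3          # each range reading coded as near / medium / far (assumed binning)
ACTIONS = ["left", "straight", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def discretize_ranges(ranges, near=0.3, far=1.0):
    """Map continuous sensor readings to a discrete state (tuple of bin indices)."""
    state = []
    for r in ranges:
        if r < near:
            state.append(0)
        elif r < far:
            state.append(1)
        else:
            state.append(2)
    return tuple(state)

# Q-table over discrete states; the discrete coding keeps it small enough to learn quickly.
Q = defaultdict(lambda: [0.0] * len(ACTIONS))

def select_action(state):
    """Epsilon-greedy action selection over the discrete state."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = Q[state]
    return values.index(max(values))

def update(state, action, reward, next_state):
    """One-step Q-learning update (temporal credit assignment)."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def reward_fn(collided):
    """Sparse reward: penalty on collision, small bonus otherwise (assumed values)."""
    return -1.0 if collided else 0.01
```

In this sketch the structural credit assignment is handled entirely by the choice of discretization, whereas the paper proposes a neural network structure to perform that coding; the tabular update above only illustrates the temporal part.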
Keywords :
"Learning","Mobile robots","Control engineering computing","Automatic control","Neural networks","Fuzzy logic","Path planning","Robotics and automation","Navigation","State-space methods"
Publisher :
ieee
Conference_Titel :
7th International Workshop on Advanced Motion Control, 2002
Print_ISBN :
0-7803-7479-7
Type :
conf
DOI :
10.1109/AMC.2002.1026964
Filename :
1026964