DocumentCode :
303214
Title :
Application of sequential reinforcement learning to control dynamic systems
Author :
Riedmiller, Martin
Author_Institution :
Karlsruhe Univ., Germany
Volume :
1
fYear :
1996
fDate :
3-6 Jun 1996
Firstpage :
167
Abstract :
The article describes the structure of a neural reinforcement learning controller based on the approach of asynchronous dynamic programming. The learning controller is applied to a well-known benchmark problem, the cart-pole system. A crucial difference from previous approaches is that the goal of learning is not only to avoid failure, but also to stabilize the cart in the middle of the track with the pole in an upright position. The aim is to learn the high-quality control trajectories known from conventional controller design while providing only a minimal amount of a priori knowledge and teaching information.
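To illustrate the setting described in the abstract, the sketch below sets up the standard cart-pole benchmark and trains a value-based controller with a simple asynchronous update. It is not the paper's neural controller: it substitutes a coarse tabular Q-learning rule (one elementary instance of asynchronous dynamic programming) for the neural value function, and the physical parameters, state discretization, cost function, and learning constants are all illustrative assumptions. The cost is shaped so that, as in the paper's goal, merely avoiding failure is not enough; only states near the track centre with the pole upright are (nearly) cost-free.

# Minimal sketch: value-based reinforcement learning on the cart-pole benchmark.
# Not the paper's controller; a generic tabular stand-in with assumed parameters.
import math
import random
from collections import defaultdict

# Standard cart-pole physics (classic benchmark parameters).
GRAVITY, M_CART, M_POLE = 9.8, 1.0, 0.1
TOTAL_M = M_CART + M_POLE
LENGTH = 0.5                      # half of the pole length
POLE_ML = M_POLE * LENGTH
FORCE, TAU = 10.0, 0.02           # bang-bang force magnitude, integration step

def step(state, action):
    """Euler-integrate one time step; action is 0 (push left) or 1 (push right)."""
    x, x_dot, th, th_dot = state
    f = FORCE if action == 1 else -FORCE
    cos_t, sin_t = math.cos(th), math.sin(th)
    tmp = (f + POLE_ML * th_dot ** 2 * sin_t) / TOTAL_M
    th_acc = (GRAVITY * sin_t - cos_t * tmp) / (
        LENGTH * (4.0 / 3.0 - M_POLE * cos_t ** 2 / TOTAL_M))
    x_acc = tmp - POLE_ML * th_acc * cos_t / TOTAL_M
    return (x + TAU * x_dot, x_dot + TAU * x_acc,
            th + TAU * th_dot, th_dot + TAU * th_acc)

def cost(state):
    """Immediate cost (illustrative): failure is expensive, surviving is not free,
    and only states near the track centre with the pole upright cost nothing."""
    x, _, th, _ = state
    if abs(x) > 2.4 or abs(th) > 0.21:      # failure region
        return 1.0
    if abs(x) < 0.05 and abs(th) < 0.02:    # target region: centred and upright
        return 0.0
    return 0.01                             # small running cost elsewhere

def discretize(state):
    """Coarse state aggregation standing in for a learned value-function approximator."""
    x, x_dot, th, th_dot = state
    return (int(x // 0.5), int(x_dot // 0.5), int(th // 0.05), int(th_dot // 0.5))

Q = defaultdict(float)            # estimated cost-to-go per (state, action)
GAMMA, ALPHA, EPS = 0.95, 0.2, 0.1

for episode in range(2000):
    s = (random.uniform(-0.1, 0.1), 0.0, random.uniform(-0.05, 0.05), 0.0)
    for t in range(300):
        ds = discretize(s)
        # Epsilon-greedy choice of the action with the lower estimated cost-to-go.
        a = random.randrange(2) if random.random() < EPS else \
            min((0, 1), key=lambda act: Q[(ds, act)])
        s2 = step(s, a)
        c = cost(s2)
        terminal = c >= 1.0
        ds2 = discretize(s2)
        target = c if terminal else c + GAMMA * min(Q[(ds2, 0)], Q[(ds2, 1)])
        Q[(ds, a)] += ALPHA * (target - Q[(ds, a)])   # asynchronous value update
        if terminal:                                  # pole fell or cart left the track
            break
        s = s2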
Keywords :
dynamic programming; learning (artificial intelligence); neurocontrollers; nonlinear control systems; asynchronous dynamic programming; cart-pole system; dynamic system control; neural reinforcement learning controller; sequential reinforcement learning; Centralized control; Control systems; Costs; Current control; Dynamic programming; Education; Learning; Predictive models; Quality control; State estimation;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
IEEE International Conference on Neural Networks, 1996
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-3210-5
Type :
conf
DOI :
10.1109/ICNN.1996.548885
Filename :
548885