DocumentCode :
2234186
Title :
Learning programs for decision and control
Author :
Si, Jennie ; Enns, Russell ; Wang, Yu-tsung
Author_Institution :
Dept. of Electr. Eng., Arizona State Univ., Tempe, AZ, USA
Volume :
3
fYear :
2001
fDate :
2001
Firstpage :
462
Abstract :
Introduces learning programs, an approximate dynamic programming (ADP) algorithm, also known as neural dynamic programming (NDP), developed and tested by the authors. We first introduce the basic framework of our learning programs and the associated learning algorithms, and then present extensive case studies to demonstrate the effectiveness of our learning programs. This is probably the first time that neural dynamic programming type learning algorithms have been applied to complex, real-life continuous-state problems. Until now, reinforcement learning (another learning approach to approximate dynamic programming) has been successful mostly in discrete state space problems. On the other hand, prior NDP-based approaches to controlling continuous state space systems have all been limited to smaller, linearized, or decoupled problems. Therefore, the work presented here complements and advances the existing literature in the general area of learning approaches to approximate dynamic programming.
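Note: the sketch below is a minimal, hypothetical illustration of an actor-critic style NDP/ADP update of the kind the abstract refers to, using linear approximators in place of the paper's action and critic neural networks. All names, learning rates, and the cost convention are assumptions for illustration only, not details taken from the paper.

import numpy as np

STATE_DIM, ACTION_DIM = 4, 1
GAMMA, LR_CRITIC, LR_ACTOR = 0.95, 0.01, 0.005

rng = np.random.default_rng(0)
# Critic: estimates cost-to-go J(x, u) from state and action features (assumed linear here).
w_critic = rng.normal(scale=0.1, size=STATE_DIM + ACTION_DIM)
# Actor: maps state to a bounded continuous action u(x).
w_actor = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))

def actor(x):
    return np.tanh(w_actor @ x)

def critic(x, u):
    return w_critic @ np.concatenate([x, u])

def ndp_update(x, u, cost, x_next):
    """One temporal-difference update of the critic, followed by an actor
    update that pushes the action toward lower predicted cost-to-go."""
    global w_critic, w_actor
    u_next = actor(x_next)
    td_error = cost + GAMMA * critic(x_next, u_next) - critic(x, u)

    # Critic: gradient step that reduces the squared TD (Bellman) error.
    w_critic += LR_CRITIC * td_error * np.concatenate([x, u])

    # Actor: descend dJ/du, back-propagated through the tanh nonlinearity.
    dJ_du = w_critic[STATE_DIM:]                 # ∂J/∂u for the linear critic
    dtanh = 1.0 - np.tanh(w_actor @ x) ** 2      # derivative of tanh at the preactivation
    w_actor -= LR_ACTOR * np.outer(dJ_du * dtanh, x)
    return td_error

In a training loop, ndp_update would be called once per transition (x, u, cost, x_next) collected from the controlled system; the paper's networks and tuning are, of course, more elaborate than this linear stand-in.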
Keywords :
dynamic programming; learning (artificial intelligence); neural nets; approximate dynamic programming; continuous state spaces; control; decision; discrete state spaces; learning programs; neural dynamic programming algorithm; Control systems; Control theory; Dynamic programming; Heuristic algorithms; Learning systems; Neural networks; Sampling methods; Signal generators; State-space methods; Testing;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2001 International Conferences on Info-tech and Info-net (ICII 2001 - Beijing). Proceedings.
Conference_Location :
Beijing
Print_ISBN :
0-7803-7010-4
Type :
conf
DOI :
10.1109/ICII.2001.983100
Filename :
983100