DocumentCode :
1547774
Title :
Initial state training procedure improves dynamic recurrent networks with time-dependent weights
Author :
Leistritz, Lutz ; Galicki, Miroslaw ; Witte, Herbert ; Kochs, Eberhard
Author_Institution :
Inst. of Med. Stat., Friedrich-Schiller-Univ., Jena, Germany
Volume :
12
Issue :
6
fYear :
2001
fDate :
11/1/2001
Firstpage :
1513
Lastpage :
1518
Abstract :
The problem of learning multiple continuous trajectories by means of recurrent neural networks with (in general) time-varying weights is addressed. The learning process is transformed into an optimal control framework in which both the weights and the initial network state to be found are treated as controls. For this task, a learning algorithm is proposed which is based on a variational formulation of Pontryagin's maximum principle. The convergence of this algorithm, under reasonable assumptions, is also investigated. Numerical examples of learning nontrivial two-class problems are presented which demonstrate the efficiency of the proposed approach.
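Illustrative sketch (not from the paper): the article's algorithm is based on a variational formulation of Pontryagin's maximum principle, but the core idea of treating both the time-dependent weights and the initial network state as trainable controls can be illustrated with plain gradient descent. The snippet below is a minimal, assumption-laden example in JAX; the network size, target trajectory, step size, and iteration count are hypothetical choices, not values from the paper.

```python
# Minimal sketch: fit a discrete-time recurrent net x_{t+1} = tanh(W_t x_t)
# to a target trajectory by gradient descent over BOTH the time-dependent
# weights W[t] and the initial state x0 (both treated as controls).
# This stands in for, and is NOT, the paper's Pontryagin-based procedure.
import jax
import jax.numpy as jnp

T, n = 50, 3                      # time steps and number of neurons (illustrative)
key = jax.random.PRNGKey(0)

# Hypothetical target trajectory for the first neuron's output.
target = jnp.sin(jnp.linspace(0.0, 2.0 * jnp.pi, T))

def rollout(params):
    """Simulate the recurrent net and return the first neuron's trajectory."""
    W, x0 = params["W"], params["x0"]
    def step(x, W_t):
        x_next = jnp.tanh(W_t @ x)
        return x_next, x_next[0]
    _, outputs = jax.lax.scan(step, x0, W)
    return outputs

def loss(params):
    """Mean squared tracking error along the whole trajectory."""
    return jnp.mean((rollout(params) - target) ** 2)

params = {
    "W": 0.1 * jax.random.normal(key, (T, n, n)),  # time-dependent weights
    "x0": jnp.zeros(n),                            # trainable initial state
}

grad_fn = jax.jit(jax.grad(loss))
for _ in range(500):
    grads = grad_fn(params)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

print("final tracking error:", float(loss(params)))
```

Optimizing the initial state alongside the weights, rather than fixing it at zero, is what the article's title refers to as the initial state training procedure; the paper develops this within an optimal control setting rather than the simple descent loop sketched here.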
Keywords :
learning (artificial intelligence); maximum principle; recurrent neural nets; time-varying systems; Pontryagin maximum principle; convergence; dynamic recurrent networks; initial state training procedure; learning; multiple continuous trajectories; nontrivial two-class problems; optimal control; time-dependent weights; time-varying weights; trajectory learning; Associative memory; Convergence; Documentation; Multilayer perceptrons; Neural networks; Neurons; Optimal control; Recurrent neural networks; Statistics; Stochastic processes;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.963788
Filename :
963788