Title :
ANASA: a stochastic reinforcement algorithm for real-valued neural computation
Author :
Vasilakos, Athanasios V. ; Loukas, Nikolaos H.
Author_Institution :
Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Crete, Greece
fDate :
1 July 1996
Abstract :
This paper introduces ANASA (adaptive neural algorithm of stochastic activation), a new, efficient reinforcement learning algorithm for training neural units and networks with continuous output. The proposed method employs concepts from self-organizing neural network theory and from reinforcement estimator learning algorithms to extract and exploit information about previous input pattern presentations. In addition, it uses an adaptive learning rate function and a self-adjusting stochastic activation to accelerate the learning process. A form of optimal performance of the ANASA algorithm is proved (under a set of assumptions) via strong convergence theorems and concepts. Experimentally, the new algorithm yields results that are superior to existing associative reinforcement learning methods in terms of accuracy and convergence rate. The rapid convergence of ANASA is demonstrated on a simple learning task, when it is used as a single neural unit, and on mathematical function modeling problems, when it is used to train various multilayered neural networks.
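Illustrative sketch (not from the paper): the abstract does not give ANASA's update equations, so the snippet below only sketches a generic stochastic real-valued reinforcement unit with a reward baseline and a self-adjusting exploration width, i.e., the kind of associative reinforcement scheme the abstract refers to. Class and parameter names (StochasticRealValuedUnit, lr, sigma, the baseline step 0.1) are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

class StochasticRealValuedUnit:
    """Generic stochastic real-valued reinforcement unit (illustrative only;
    the precise ANASA update rules are given in the paper, not the abstract)."""

    def __init__(self, n_inputs, lr=0.1, sigma=0.5, rng=None):
        self.w = np.zeros(n_inputs)        # weights defining the activation mean
        self.lr = lr                       # base learning rate
        self.sigma = sigma                 # std of the stochastic activation
        self.rng = rng or np.random.default_rng(0)
        self.r_baseline = 0.0              # running estimate of past reinforcement

    def act(self, x):
        self.mean = self.w @ x                            # deterministic part
        self.y = self.rng.normal(self.mean, self.sigma)   # stochastic real-valued output
        return self.y

    def learn(self, x, r):
        # Reinforcement comparison: how much better/worse than expected.
        delta_r = r - self.r_baseline
        # Push the mean toward (away from) the sampled output when the
        # reinforcement was better (worse) than the baseline.
        self.w += self.lr * delta_r * (self.y - self.mean) / self.sigma * x
        # Self-adjusting exploration: shrink sigma as reinforcement improves.
        self.sigma = max(0.01, self.sigma * (1.0 - 0.1 * max(delta_r, 0.0)))
        # Track the reinforcement baseline.
        self.r_baseline += 0.1 * delta_r


# Usage sketch: learn to emit a target value for a fixed input pattern.
unit = StochasticRealValuedUnit(n_inputs=2)
x, target = np.array([1.0, 0.5]), 0.8
for _ in range(2000):
    y = unit.act(x)
    r = 1.0 - abs(y - target)   # scalar reinforcement, larger near the target
    unit.learn(x, r)
```

The shrinking sigma plays the role of the "self-adjusting stochastic activation" mentioned in the abstract: exploration narrows as the unit's outputs earn higher reinforcement.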
Keywords :
convergence; learning (artificial intelligence); multilayer perceptrons; self-organising feature maps; ANASA; adaptive learning rate function; adaptive neural algorithm; input pattern presentations; multilayered neural networks; optimal performance; real-valued neural computation; reinforcement estimator learning algorithms; self-adjusting stochastic activation; self-organizing neural networks; stochastic activation; stochastic reinforcement algorithm; strong convergence theorems; Artificial neural networks; Computer science; Data mining; Distribution functions; Informatics; Neurofeedback; Signal processing; Stochastic processes; Stochastic resonance; Unsupervised learning;
Journal_Title :
IEEE Transactions on Neural Networks