DocumentCode :
1817735
Title :
Mathematical justification of recurrent neural networks with long and short-term memories
Author :
Lo, James T. ; Bassu, Devasis
Author_Institution :
Dept. of Math. & Stat., Maryland Univ., Baltimore, MD, USA
Volume :
1
fYear :
1999
fDate :
1999
Firstpage :
364
Abstract :
Two theorems are given that justify the use of multilayer perceptrons with interconnected neurons (MLPWINs) with long- and short-term memories (LASTMs) for both Lp and risk-sensitive adaptive processing. The benefits of using an MLPWIN with LASTMs include less online computation, no poor local extrema to fall into, and much more timely and better adaptation.
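A minimal sketch of the kind of architecture the abstract names: a recurrent multilayer perceptron whose hidden neurons are fully interconnected, here written under the assumption that the "long-term memories" are fixed recurrent weights set offline and the "short-term memories" are linear output weights adapted online by recursive least squares (one reading consistent with the abstract's claims of little online computation and no poor local extrema). The class, parameter names, and RLS update are illustrative assumptions, not taken from the paper.

import numpy as np


class MLPWINSketch:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Long-term memories (assumed reading): nonlinear weights fixed after offline training.
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)
        # Short-term memories (assumed reading): linear output weights adapted online;
        # adapting only these is a linear least-squares problem with no poor local minima.
        self.W_out = np.zeros((n_out, n_hidden))
        self.h = np.zeros(n_hidden)
        # Recursive-least-squares covariance for the online (short-term) adaptation.
        self.P = np.eye(n_hidden) * 1e3

    def step(self, x):
        # Advance the interconnected hidden layer one time step and emit an output.
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h + self.b)
        return self.W_out @ self.h

    def adapt(self, target, forgetting=0.99):
        # Online RLS update of the short-term (linear output) weights only.
        h = self.h
        k = self.P @ h / (forgetting + h @ self.P @ h)
        err = target - self.W_out @ h
        self.W_out += np.outer(err, k)
        self.P = (self.P - np.outer(k, h @ self.P)) / forgetting


# Toy usage: track a noisy sinusoid online by adapting the short-term weights alone.
if __name__ == "__main__":
    net = MLPWINSketch(n_in=1, n_hidden=8, n_out=1)
    for t in range(200):
        x = np.array([np.sin(0.1 * t)])
        y = net.step(x)
        net.adapt(target=np.array([np.sin(0.1 * (t + 1))]))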
Keywords :
learning (artificial intelligence); multilayer perceptrons; recurrent neural nets; adaptation; interconnected neurons; long-term memories; risk-sensitive adaptive processing; short-term memories; Adaptive algorithm; Adaptive filters; Information filtering; Information filters; Mathematics; Multilayer perceptrons; Neurons; Nonlinear filters; Recurrent neural networks; Statistics;
fLanguage :
English
Publisher :
ieee
Conference_Title :
International Joint Conference on Neural Networks (IJCNN '99), 1999
Conference_Location :
Washington, DC
ISSN :
1098-7576
Print_ISBN :
0-7803-5529-6
Type :
conf
DOI :
10.1109/IJCNN.1999.831520
Filename :
831520