Title :
A class of low complexity and fast converging algorithms for complex-valued neural networks
Author :
Su Lee Goh; Mandic, D.P.
Author_Institution :
Dept. of Electr. Eng., Imperial Coll. London
fDate :
Sept. 29–Oct. 1, 2004
Abstract :
Applications of complex-valued neural networks employed as neural adaptive filters are emerging; however, the associated learning algorithms are typically computationally expensive, slowly converging and sensitive. To help circumvent some of these problems, we introduce the a posteriori data-reusing (DR) approach into the class of first-order (sign) algorithms for complex-valued feedforward neural adaptive filters. This is achieved starting from the data-reusing complex-valued nonlinear gradient descent (DRCNGD) algorithm through to low-complexity, fast-converging data-reusing sign algorithms. The analysis proves faster convergence, lower sensitivity and lower computational complexity when the DR approach is applied in this framework. Simulation results and statistical analysis support these findings.
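To illustrate the a posteriori data-reusing idea described in the abstract, the sketch below applies several gradient updates to a single complex-valued adaptive filter neuron using the same input/target pair, re-evaluating the error with the refreshed weights after each reuse. The `tanh` nonlinearity, the inner-product form `w @ x`, the step size, and the number of reuses are illustrative assumptions; this is a minimal sketch of the data-reusing principle, not the authors' exact DRCNGD algorithm.

```python
import numpy as np

def drcngd_step(w, x, d, mu=0.1, reuses=4):
    """One a posteriori data-reusing update of a complex neuron's weights.

    w : complex weight vector
    x : complex input vector (same shape as w)
    d : desired complex output (scalar)

    The same (x, d) pair is reused `reuses` times: each pass recomputes
    the error with the most recent weights before updating again.
    Illustrative sketch only; the nonlinearity and update form are assumed.
    """
    for _ in range(reuses):
        y = np.tanh(w @ x)              # neuron output through the nonlinearity
        e = d - y                       # error for this reuse pass
        # Complex gradient descent on |e|^2 with z = w^T x holomorphic in w:
        # dJ/dw* = -e * conj(1 - y^2) * conj(x), so step in the opposite direction.
        w = w + mu * e * np.conj(1.0 - y**2) * np.conj(x)
    e_post = d - np.tanh(w @ x)         # a posteriori error after all reuses
    return w, e_post

# Usage: repeated reuse of one data pair drives the a posteriori error
# below the initial (a priori) error for a small enough step size.
w0 = np.zeros(2, dtype=complex)
x = np.array([0.5 + 0.2j, -0.1 + 0.3j])
d = 0.2 + 0.1j
e0 = abs(d - np.tanh(w0 @ x))
w1, e_post = drcngd_step(w0, x, d)
```

The design point being illustrated: because each reuse pass sees weights already improved by the previous pass, the a posteriori error shrinks faster per data sample than a single a priori update would achieve, at the cost of extra per-sample computation.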
Keywords :
adaptive filters; computational complexity; feedforward; gradient methods; learning (artificial intelligence); neural nets; statistical analysis; a posteriori data-reusing approach; associated learning algorithms; complex-valued feedforward neural adaptive filters; complex-valued neural networks; computational complexity; fast converging algorithms; first order algorithms; low complexity; low complexity fast converging data-reusing sign algorithms; neural adaptive filters; nonlinear gradient descent algorithm; Adaptive filters; Algorithm design and analysis; Analytical models; Computational complexity; Convergence; Neural networks; Robust stability; Robustness; Signal processing; Signal processing algorithms;
Conference_Title :
Machine Learning for Signal Processing, 2004. Proceedings of the 2004 14th IEEE Signal Processing Society Workshop
Conference_Location :
São Luís
Print_ISBN :
0-7803-8608-4
DOI :
10.1109/MLSP.2004.1422955