Title :
An Efficient Neural Network Training Algorithm with Maximized Gradient Function and Modulated Chaos
Author :
Islam, Mobarakol ; Rahaman, Arifur ; Hasan, Md Mehedi ; Shahjahan, Md
Author_Institution :
Dept. of Electron. & Commun. Eng., Khulna Univ. of Eng. & Technol., Khulna, Bangladesh
Abstract :
The biological brain exhibits chaos, and the structure of artificial neural networks (ANNs) resembles that of the human brain. To imitate the structure and function of the human brain more closely, it is logical to combine chaos with neural networks. In this paper we propose a chaotic learning algorithm called Maximized Gradient function and Modulated Chaos (MGMC). MGMC maximizes the gradient function and also adds a modulated version of chaos to the learning rate (LR) as well as to the activation function, which is made adaptive by using chaos as a gain factor. MGMC generates a chaotic time series as a modulated form of the Mackey-Glass series, the Logistic Map, and the Lorenz Attractor. A rescaled version of this series, called the Modulated Learning Rate (MLR), is used as the learning rate during NN training. As a result, the neural network becomes more biologically plausible, may escape local-minima zones, and achieves a faster convergence rate, since the derivative of the activation function is maximized while the error function is minimized. MGMC is extensively tested on three real-world benchmark classification problems: Australian credit card, wine, and soybean identification. The proposed MGMC outperforms the existing BP and BPfast algorithms in terms of both generalization ability and convergence rate.
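The abstract does not give pseudocode for the MLR construction, so the following is only a minimal sketch of the general idea: generate a chaotic time series and rescale it into a learning-rate interval that is then consumed one value per training epoch. For simplicity it uses only the logistic map (one of the three sources the paper modulates together); the function names and the LR bounds are illustrative assumptions, not the authors' actual parameters.

```python
import numpy as np

def logistic_map_series(n, r=4.0, x0=0.3):
    """Chaotic series from the logistic map x_{t+1} = r * x_t * (1 - x_t).

    With r = 4.0 the map is in its fully chaotic regime on (0, 1).
    """
    xs = np.empty(n)
    x = x0
    for t in range(n):
        x = r * x * (1.0 - x)
        xs[t] = x
    return xs

def modulated_learning_rate(n_epochs, lr_min=0.01, lr_max=0.5, x0=0.3):
    """Rescale the chaotic series linearly into [lr_min, lr_max].

    lr_min/lr_max are assumed bounds for illustration; the paper rescales a
    modulated Mackey-Glass / Logistic Map / Lorenz series instead.
    """
    xs = logistic_map_series(n_epochs, x0=x0)
    lo, hi = xs.min(), xs.max()
    return lr_min + (xs - lo) / (hi - lo) * (lr_max - lr_min)

# One chaotic learning-rate value per epoch, e.g. for 100 training epochs.
mlr = modulated_learning_rate(100)
```

Each epoch t of backpropagation would then use `mlr[t]` in place of a fixed learning rate, so the step size itself wanders chaotically between the two bounds instead of decaying on a fixed schedule.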
Keywords :
biology computing; brain; gradient methods; learning (artificial intelligence); neural nets; ANN; Lorenz attractor; MGMC; MLR; Mackey-Glass; activation function; artificial neural networks; biological brain; chaotic learning algorithm; efficient neural network training algorithm; human brain; logistic map; maximized gradient function and modulated chaos; modulated chaos; modulated learning rate; Artificial neural networks; Biological neural networks; Chaos; Convergence; Nonlinear dynamical systems; Time series analysis; Training; BPfast; MLR; adaptive activation function; backpropagation; chaos; convergence rate; generalization ability; gradient information; maximization; neural network
Conference_Titel :
2011 Fourth International Symposium on Computational Intelligence and Design (ISCID)
Conference_Location :
Hangzhou
Print_ISBN :
978-1-4577-1085-8
DOI :
10.1109/ISCID.2011.18