Abstract:
In this paper, a new approach to the function approximation problem is proposed to obtain better generalization performance and a faster convergence rate. It is well known that gradient-based learning algorithms for feedforward neural networks (FNNs), such as the backpropagation (BP) algorithm, are apt to be trapped in local minima, which leads to poorer generalization performance and a slower convergence rate. Therefore, in the new approach, adaptive particle swarm optimization (APSO) is first applied to train the network and search for a global minimum. Second, starting from the weights produced by APSO, the network is trained with a constrained learning algorithm (CLA). Moreover, the CLA improves convergence by decreasing the input-to-output mapping sensitivity of the network and by penalizing the high-frequency components in the training data. By combining APSO with the CLA, the new approach achieves both better generalization performance and a faster convergence rate. Finally, simulation results are given to verify the efficiency and effectiveness of the proposed learning approach.
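The abstract does not give implementation details, so the following is only a minimal Python/NumPy sketch of the two-stage idea, under several assumptions: a one-hidden-layer tanh network, a linearly decaying inertia weight as the "adaptive" element of APSO, and plain finite-difference gradient descent as a stand-in for the CLA refinement stage (the sensitivity and high-frequency penalties of the actual CLA are not specified in the abstract). The target function and all hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D target function to approximate (not from the paper).
X = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(np.pi * X)

# Single-hidden-layer FNN: 1 input, H hidden tanh units, 1 linear output.
H = 8
n_params = H + H + H + 1  # W1 (1xH), b1 (H), W2 (Hx1), b2 (1)

def unpack(theta):
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1)
    b2 = theta[3 * H:]
    return W1, b1, W2, b2

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((out - y) ** 2)

# --- Stage 1: PSO as a global search over the flattened weight vector. ---
# "Adaptive" here means a linearly decaying inertia weight, a common
# APSO variant; the paper's exact adaptation rule may differ.
n_particles, n_iters = 30, 300
pos = rng.uniform(-1, 1, (n_particles, n_params))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([mse(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for t in range(n_iters):
    w = 0.9 - 0.5 * t / n_iters  # decaying inertia weight
    r1, r2 = rng.random((2, n_particles, n_params))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([mse(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[pbest_cost.argmin()].copy()

# --- Stage 2: local refinement starting from the PSO solution. ---
# Stand-in for the paper's CLA: plain finite-difference gradient descent
# on the MSE, without the CLA's sensitivity/frequency penalty terms.
theta, lr, eps = gbest.copy(), 0.05, 1e-5
for _ in range(500):
    grad = np.array([(mse(theta + eps * e) - mse(theta - eps * e)) / (2 * eps)
                     for e in np.eye(n_params)])
    theta -= lr * grad

print(f"PSO MSE: {mse(gbest):.5f}  ->  refined MSE: {mse(theta):.5f}")
```

The design point the sketch illustrates is the division of labor described in the abstract: the population-based stage explores the weight space to escape poor local minima, and the gradient-based stage then refines the best candidate for fast local convergence.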
Keywords:
adaptive particle swarm optimization; approximation theory; backpropagation; constrained learning algorithm; convergence; feedforward neural networks; function approximation; gradient-based learning; neural networks; particle swarm optimization; training data