DocumentCode
1704282
Title
A global convergence PSO training algorithm of neural networks
Author
Li, Wei ; Yang, Cheng-wu
Author_Institution
Coll. of Commun., Machinery & Civil Eng., Southwest Forestry Univ., Kunming, China
fYear
2010
Firstpage
3261
Lastpage
3265
Abstract
Traditional gradient-based training algorithms are known to suffer from local minima and to incur a heavy computational load in obtaining derivative information. The particle swarm optimization (PSO) method has been used as a training algorithm for neural networks to improve the convergence rate. However, as the network architecture grows, the swarm size increases exponentially, which raises the computational complexity considerably. Moreover, such algorithms suffer from premature convergence. An improved PSO training algorithm is proposed in this paper, in which the swarm consists of only two particles. The algorithm is guaranteed to converge to the globally optimal solution with probability one. Simulation results show that the new algorithm has a fast convergence rate and high accuracy. Moreover, its convergence does not depend on the initial values of the neural network weights.
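The abstract describes training a neural network by PSO with a swarm of only two particles. The paper's specific update rule (the one proved to converge with probability one) is not reproduced in this record, so the following is only a minimal sketch of the general idea: two particles search the flattened weight vector of a small MLP using the standard PSO velocity/position update. The network size, loss, and PSO coefficients below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (XOR) and a tiny 2-4-1 MLP whose weights are flattened
# into a single parameter vector of dimension 17.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
DIM = 2 * 4 + 4 + 4 * 1 + 1  # W1 (2x4), b1 (4), W2 (4x1), b2 (1)

def loss(theta):
    """Mean squared error of the MLP with flattened weights theta."""
    W1 = theta[:8].reshape(2, 4)
    b1 = theta[8:12]
    W2 = theta[12:16].reshape(4, 1)
    b2 = theta[16]
    h = np.tanh(X @ W1 + b1)
    out = (h @ W2).ravel() + b2
    return np.mean((out - y) ** 2)

# Two-particle swarm: positions, velocities, personal bests, global best.
pos = rng.normal(size=(2, DIM))
vel = np.zeros((2, DIM))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
g = pbest[pbest_val.argmin()].copy()
g_val = pbest_val.min()
init_val = g_val  # remember the starting best for comparison

# Standard PSO coefficients (illustrative values, not from the paper).
w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(2000):
    r1 = rng.random((2, DIM))
    r2 = rng.random((2, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    if pbest_val.min() < g_val:
        g_val = pbest_val.min()
        g = pbest[pbest_val.argmin()].copy()

print("best MSE:", g_val)
```

By construction the global-best loss is non-increasing, so even this plain two-particle variant refines the weights; the paper's contribution is the modified update that additionally guarantees convergence to the global optimum with probability one.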
Keywords
learning (artificial intelligence); neural nets; particle swarm optimisation; PSO training algorithm; computational complexity; global optimization solution; network architecture; neural networks; particle swarm optimization; Artificial neural networks; Biological neural networks; Convergence; Educational institutions; Particle swarm optimization; Training; USA Councils; PSO algorithm; global convergence; neural network; training algorithm;
fLanguage
English
Publisher
ieee
Conference_Titel
2010 8th World Congress on Intelligent Control and Automation (WCICA)
Conference_Location
Jinan
Print_ISBN
978-1-4244-6712-9
Type
conf
DOI
10.1109/WCICA.2010.5555076
Filename
5555076
Link To Document