DocumentCode :
2619664
Title :
Parallel execution of square approximation learning algorithm for MLP neural networks
Author :
Antunovic, Mladen ; Filko, Damir ; Hocenski, Zeljko
fYear :
2008
fDate :
25-27 June 2008
Firstpage :
1816
Lastpage :
1821
Abstract :
This work presents an improvement of gradient learning algorithms for adjusting neural network weights. The suggested improvement results in an alternative method that converges in fewer iterations and is inherently parallel, making it convenient for implementation on a computer grid. Experimental results show time savings from multi-threaded execution across a wide range of MLP neural network parameters, such as the size of the input/output data matrix and the number of neurons and layers.
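The abstract does not reproduce the square approximation algorithm itself; as a rough illustration of the kind of multi-threaded weight-update computation it refers to, the sketch below partitions the training samples across threads and reduces per-thread partial gradients, assuming a plain linear least-squares model rather than the paper's MLP and square approximation method. All names, sizes, and the learning rate are hypothetical.

```cpp
// Minimal sketch (not the paper's algorithm): data-parallel gradient
// accumulation across threads for a single linear layer, followed by a
// reduction and one weight-update step. Sizes and data are illustrative.
#include <thread>
#include <vector>
#include <cstddef>

int main() {
    const std::size_t n_samples = 1000, n_inputs = 8, n_threads = 4;
    const double lr = 0.01;  // assumed learning rate

    // Illustrative data: inputs x, targets y, weights w (all assumed).
    std::vector<std::vector<double>> x(n_samples, std::vector<double>(n_inputs, 1.0));
    std::vector<double> y(n_samples, 0.5), w(n_inputs, 0.0);

    // Each thread accumulates a partial gradient over its slice of the samples.
    std::vector<std::vector<double>> partial(n_threads, std::vector<double>(n_inputs, 0.0));
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < n_threads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * n_samples / n_threads;
            const std::size_t end = (t + 1) * n_samples / n_threads;
            for (std::size_t i = begin; i < end; ++i) {
                double pred = 0.0;
                for (std::size_t j = 0; j < n_inputs; ++j) pred += w[j] * x[i][j];
                const double err = pred - y[i];
                for (std::size_t j = 0; j < n_inputs; ++j) partial[t][j] += err * x[i][j];
            }
        });
    }
    for (auto& th : workers) th.join();

    // Reduce the partial gradients and apply one gradient-descent step.
    for (std::size_t j = 0; j < n_inputs; ++j) {
        double g = 0.0;
        for (std::size_t t = 0; t < n_threads; ++t) g += partial[t][j];
        w[j] -= lr * g / n_samples;
    }
    return 0;
}
```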
Keywords :
approximation theory; gradient methods; learning (artificial intelligence); multilayer perceptrons; MLP neural network; computer grid; gradient learning algorithm; input-output data matrix; multiple thread execution; neural network weight adjustment; parallel execution; square approximation learning algorithm; Approximation algorithms; Automatic control; Automation; Concurrent computing; Equations; Gradient methods; Iterative algorithms; Neural networks; Neurons; Search methods;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Control and Automation, 2008 16th Mediterranean Conference on
Conference_Location :
Ajaccio
Print_ISBN :
978-1-4244-2504-4
Electronic_ISBN :
978-1-4244-2505-1
Type :
conf
DOI :
10.1109/MED.2008.4602192
Filename :
4602192