DocumentCode :
1623401
Title :
Exploiting multiple degrees of BP parallelism on the highly parallel computer AP1000
Author :
Torresen, J. ; Mori, S. ; Nakashima, Hideharu ; Tomita, S. ; Landsverk, O.
Author_Institution :
Kyoto Univ., Japan
fYear :
1995
Firstpage :
483
Lastpage :
488
Abstract :
During the last few years, several neurocomputers have been developed, but general-purpose computers remain an alternative to these special-purpose machines. This paper describes a mapping of the backpropagation (BP) learning algorithm onto a large 2D torus architecture. The parallel algorithm was implemented on a 512-processor AP1000 and evaluated using NETtalk and other applications. To obtain high speedup, we have suggested an approach that combines the multiple degrees of parallelism in the algorithm (training set parallelism, node parallelism and pipelining of the training patterns). On 512 processors, we obtained a performance of 81 million weight updates per second when running the NETtalk network. Our results show that, to obtain the best performance on a large number of processors, a combination of multiple degrees of parallelism in the backpropagation algorithm ought to be considered.
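Note: The following is a minimal illustrative sketch of one of the degrees of parallelism mentioned in the abstract, training set parallelism, in which each processor computes gradients for its own slice of the training patterns and the partial gradients are summed before a single weight update. It is not the paper's AP1000 implementation; the toy network, all names, and the simulated processor loop are assumptions, and node parallelism and pipelining are not shown.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_gradients(W1, W2, X, T):
    """Forward and backward pass for one processor's slice of patterns."""
    H = sigmoid(X @ W1)              # hidden activations
    Y = sigmoid(H @ W2)              # output activations
    dY = (Y - T) * Y * (1 - Y)       # output-layer deltas (squared error)
    dH = (dY @ W2.T) * H * (1 - H)   # hidden-layer deltas
    return X.T @ dH, H.T @ dY        # partial gradients for W1, W2

# Toy problem: 8 inputs, 4 hidden units, 2 outputs, 64 training patterns.
W1 = rng.standard_normal((8, 4)) * 0.1
W2 = rng.standard_normal((4, 2)) * 0.1
X = rng.standard_normal((64, 8))
T = rng.integers(0, 2, (64, 2)).astype(float)

P = 4      # number of simulated processors
eta = 0.1  # learning rate

for epoch in range(100):
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    # Training set parallelism: each processor works on its own patterns.
    for Xp, Tp in zip(np.array_split(X, P), np.array_split(T, P)):
        g1, g2 = local_gradients(W1, W2, Xp, Tp)
        gW1 += g1   # on a real machine this sum would be a reduction
        gW2 += g2   # across the processor network
    W1 -= eta * gW1
    W2 -= eta * gW2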
Keywords :
backpropagation; general purpose computers; neural nets; parallel algorithms; parallel machines; performance evaluation; pipeline processing; virtual machines; 2D torus architecture; AP1000; NETtalk; backpropagation learning algorithm; general-purpose computers; highly parallel computer; multiple parallel degrees; neurocomputers; node parallelism; parallel algorithm; performance; speedup; training pattern pipelining; training set parallelism; weight updates;
fLanguage :
English
Publisher :
IET
Conference_Titel :
Fourth International Conference on Artificial Neural Networks, 1995
Conference_Location :
Cambridge
Print_ISBN :
0-85296-641-5
Type :
conf
DOI :
10.1049/cp:19950604
Filename :
497867