• DocumentCode
    288628
  • Title
    Pipelining and parallel training of neural networks on distributed-memory multiprocessors

  • Author
    Zickenheiner, S.; Wendt, M.; Klauer, B.; Waldschmidt, K.
  • Author_Institution
    Frankfurt Univ., Germany
  • Volume
    4
  • fYear
    1994
  • fDate
    27 Jun-2 Jul 1994
  • Firstpage
    2052
  • Abstract
    This paper presents a parallel neural network simulator implemented on a Parsytec Multicluster2 transputer system. In practical use, neural networks often employ the backpropagation learning rule, as this supervised learning method can be applied to a wide range of recognition problems. The authors focus on accelerating backpropagation learning by combining pipelining and parallel training methods. The pipelining model, proposed by Klauer (1992), is independent of the parallel hardware used. This contribution continues the idea of concurrency and pipelining with a concrete implementation (see the illustrative sketch after this record).
  • Keywords
    backpropagation; distributed memory systems; neural nets; parallel architectures; pipeline processing; transputer systems; transputers; Parsytec Multicluster2 transputer system; backpropagation learning rule; concurrency; distributed-memory multiprocessors; neural networks; parallel neural network simulator; parallel training; pipelining; supervised learning; Acceleration; Backpropagation; Computer architecture; Concrete; Network topology; Neural network hardware; Neural networks; Neurons; Pipeline processing; Supervised learning;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
  • Conference_Location
    Orlando, FL
  • Print_ISBN
    0-7803-1901-X
  • Type
    conf
  • DOI
    10.1109/ICNN.1994.374529
  • Filename
    374529
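  • Note
    A minimal, hypothetical sketch (not from the paper), assuming that "parallel training" here refers to training-set parallelism: each processor computes backpropagation gradients on its own share of the training patterns, and the contributions are combined into a single weight update. The toy NumPy code below mimics this on one machine with made-up layer sizes and variable names; the transputer-specific message passing and the pipelining model attributed to Klauer (1992) are not reproduced.

    # Illustrative sketch only; all sizes and names are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny two-layer network.
    W1 = rng.normal(scale=0.5, size=(4, 8))
    W2 = rng.normal(scale=0.5, size=(8, 3))

    def backprop(x, t, W1, W2):
        """Squared-error gradients w.r.t. W1 and W2 for one data shard."""
        h = np.tanh(x @ W1)            # hidden activations
        y = np.tanh(h @ W2)            # output activations
        d2 = (y - t) * (1 - y**2)      # delta at the output layer
        d1 = (d2 @ W2.T) * (1 - h**2)  # delta at the hidden layer
        return x.T @ d1, h.T @ d2

    # Fake patterns, split into shards as if distributed over processors.
    X = rng.normal(size=(32, 4))
    T = rng.normal(size=(32, 3))
    n_workers = 4
    shards = zip(np.array_split(X, n_workers), np.array_split(T, n_workers))

    # Each "worker" computes local gradients; on a distributed-memory machine
    # these would run concurrently and be combined by message passing.
    g1 = np.zeros_like(W1)
    g2 = np.zeros_like(W2)
    for x_s, t_s in shards:
        dg1, dg2 = backprop(x_s, t_s, W1, W2)
        g1 += dg1
        g2 += dg2

    lr = 0.05
    W1 -= lr * g1 / n_workers   # averaged update over all shards
    W2 -= lr * g2 / n_workers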