Title :
Training artificial neural networks using variable precision incremental communication
Author :
Ghorbani, Ali A. ; Bhavsar, Virendra C.
Author_Institution :
Fac. of Comput. Sci., Univ. of New Brunswick, Fredericton, NB, Canada
Date :
27 Jun-2 Jul 1994
Abstract :
We have previously proposed incremental inter-node communication to reduce both the communication cost and the time of the learning process in artificial neural networks. In incremental communication, instead of communicating the full magnitude of an input (output) variable of a neuron, only the increment or decrement relative to the previous value of the variable, in reduced precision, is sent over a communication link. In this paper, a variable precision incremental communication scheme is proposed. Variable precision, which can be implemented in either hardware or software, can further reduce the complexity of inter-node communication and speed up computation on massively parallel computers. The scheme is applied to multilayer feedforward networks, and simulation studies are carried out. The simulation results reveal that, regardless of the complexity of the problems used, the variable precision scheme exhibits stable convergence behavior and yields considerable savings in the number of bits used for communication.
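To make the idea concrete, the following is a minimal sketch, not taken from the paper, of how reduced-precision incremental communication can be simulated in software. The names (IncrementalLink, frac_bits) and the fixed-point rounding scheme are illustrative assumptions; the paper's variable precision scheme would additionally adjust the precision (here, frac_bits) as training progresses.

    # Illustrative sketch of reduced-precision incremental communication
    # between neurons. The quantizer and class below are hypothetical,
    # not the authors' implementation.
    import numpy as np

    def quantize(x, frac_bits):
        """Round x to a fixed-point grid with 2**-frac_bits resolution."""
        scale = 2.0 ** frac_bits
        return np.round(x * scale) / scale

    class IncrementalLink:
        """Transmits only the quantized change in a value over a link.

        Sender and receiver each track the last transmitted value, so the
        receiver can reconstruct the current value from increments alone.
        """
        def __init__(self, size, frac_bits):
            self.frac_bits = frac_bits
            self.sender_state = np.zeros(size)    # last value sent
            self.receiver_state = np.zeros(size)  # reconstructed value

        def send(self, value):
            # Quantize the increment, not the full magnitude.
            delta = quantize(value - self.sender_state, self.frac_bits)
            self.sender_state += delta       # mirror what the receiver sees
            self.receiver_state += delta     # "transmit" the short increment
            return self.receiver_state

    # Example: activations drift slowly between training steps, so the
    # increments need far fewer bits than the full-magnitude values.
    link = IncrementalLink(size=4, frac_bits=8)
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=4)
    for step in range(3):
        x += 0.01 * rng.standard_normal(4)   # small update, as in training
        received = link.send(x)
        print(np.max(np.abs(received - x)))  # bounded by 2**-(frac_bits+1)

Because the sender accumulates the same quantized increments that the receiver applies, the reconstruction error stays bounded by half the quantization step rather than growing over time.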
Keywords :
feedforward neural nets; learning (artificial intelligence); multilayer perceptrons; artificial neural network training; incremental inter-node communication; variable precision incremental communication; Artificial neural networks; Computational modeling; Computer science; Concurrent computing; Convergence; Costs; Fixed-point arithmetic; Hardware; Neurons; Nonhomogeneous media
Conference_Title :
1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
Conference_Location :
Orlando, FL
Print_ISBN :
0-7803-1901-X
DOI :
10.1109/ICNN.1994.374492