DocumentCode :
3418510
Title :
A digital neural network LSI using sparse memory access architecture
Author :
Aihara, Kimihisa ; Fujita, Osamu ; Uchimura, Kuniharu
Author_Institution :
NTT LSI Labs., Kanagawa, Japan
fYear :
1996
fDate :
12-14 Feb 1996
Firstpage :
139
Lastpage :
148
Abstract :
A sparse memory access architecture is proposed to achieve a high-computational-speed neural network LSI. The architecture uses two key techniques, compressible synapse weight neuron calculation and differential neuron operation, to reduce the number of accesses to synapse weight memories and the number of neuron calculations without an accuracy penalty. In a pattern recognition example, the number of memory accesses and neuron calculations is reduced to 0.87% of that in the conventional method, and the practical performance is 18 GCPS.
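The general idea behind sparse memory access can be illustrated with a minimal sketch: skip the fetch of zero-valued synapse weights, so the weighted sum is unchanged (no accuracy penalty) while memory accesses drop. This is a hypothetical illustration of the principle only, not the paper's actual compressible-weight encoding or differential neuron operation.

```python
# Hypothetical sketch: compare synapse-weight memory accesses for a
# dense pass versus a sparsity-aware pass that skips zero weights.
# Illustrative only; not the architecture described in the paper.

def dense_accesses(weights):
    """Dense scheme: every weight is fetched, zero or not."""
    return len(weights)

def sparse_accesses(weights):
    """Sparse scheme: only nonzero weights are fetched."""
    return sum(1 for w in weights if w != 0)

def neuron_output(weights, inputs):
    """Weighted sum over nonzero weights only; the result equals the
    dense sum, so skipping zeros costs no accuracy."""
    return sum(w * x for w, x in zip(weights, inputs) if w != 0)

weights = [0, 0, 3, 0, -2, 0, 0, 1]  # mostly-zero synapse weights
inputs = [1, 1, 1, 1, 1, 1, 1, 1]

print(dense_accesses(weights))         # 8
print(sparse_accesses(weights))        # 3
print(neuron_output(weights, inputs))  # 2
```

With these example values, only 3 of 8 weights are fetched, yet the neuron output matches the dense computation exactly.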
Keywords :
CMOS digital integrated circuits; large scale integration; neural chips; neural net architecture; compressible synapse weight neuron calculation; differential neuron operation; digital neural network LSI; high-computational-speed neural network; pattern recognition; sparse memory access architecture; Accuracy; Circuits; Computer architecture; Image coding; Laboratories; Large scale integration; Memory architecture; Neural networks; Neurons; Pattern recognition;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the Fifth International Conference on Microelectronics for Neural Networks, 1996
Conference_Location :
Lausanne
ISSN :
1086-1947
Print_ISBN :
0-8186-7373-7
Type :
conf
DOI :
10.1109/MNNFS.1996.493784
Filename :
493784