DocumentCode :
2765607
Title :
Training convolutional networks of threshold neurons suited for low-power hardware implementation
Author :
Fieres, Johannes ; Schemmel, Johannes ; Meier, Karlheinz
Author_Institution :
Ruprecht-Karls Univ., Heidelberg
fYear :
2006
fDate :
0-0 0
Firstpage :
21
Lastpage :
28
Abstract :
Convolutional neural networks are known to be powerful image classifiers. In this work, a method is proposed for training convolutional networks for implementation on an existing mixed digital-analog VLSI hardware architecture. The binary threshold neurons provided by this architecture cannot be trained using gradient-based methods. The convolutional layers are instead trained with a clustering method, locally in each layer, and the output layer is trained using the perceptron learning rule. Competitive results are obtained on handwritten digits (MNIST) and traffic signs. The analog hardware enables high integration and low power consumption, but inherent error sources affect the computation accuracy. Networks trained as suggested are highly robust against random changes of synaptic weights occurring on the hardware substrate, and work well even with only three distinct weight values (-1, 0, +1), reducing computational complexity to mere counting.
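The abstract's closing claim can be illustrated with a minimal sketch (not the authors' implementation; function names and the weight-clipping step are assumptions for illustration): with binary inputs and ternary weights in {-1, 0, +1}, a threshold neuron's weighted sum reduces to counting active inputs on +1 synapses minus those on -1 synapses, and the perceptron rule named in the abstract can train such an output neuron.

```python
def threshold_neuron(inputs, weights, bias=0):
    """Binary threshold neuron with ternary weights.

    With inputs in {0, 1} and weights in {-1, 0, +1}, the dot product
    is just (# active inputs on +1 weights) - (# on -1 weights).
    """
    plus = sum(1 for x, w in zip(inputs, weights) if x and w == 1)
    minus = sum(1 for x, w in zip(inputs, weights) if x and w == -1)
    return 1 if plus - minus + bias > 0 else 0


def perceptron_update(inputs, weights, bias, target, lr=1):
    """One step of the perceptron learning rule for the output layer.

    Clipping updated weights back to {-1, 0, +1} is a hypothetical way
    to keep the three-value constraint; the paper's exact scheme may differ.
    """
    err = target - threshold_neuron(inputs, weights, bias)
    if err:
        weights = [max(-1, min(1, w + lr * err * x))
                   for w, x in zip(weights, inputs)]
        bias += lr * err
    return weights, bias
```

The counting formulation is what makes the neuron cheap in hardware: no multipliers are needed, only two counters and a comparison against the (negated) bias.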
Keywords :
VLSI; gradient methods; neural nets; convolutional neural networks; digital-analog VLSI hardware architecture; gradient-based methods; hand-written digits; hardware substrate; image classifiers; perceptron learning rule; synaptic weights; threshold neurons; Analog computers; Clustering methods; Digital-analog conversion; Energy consumption; Neural network hardware; Neural networks; Neurons; Robustness; Telecommunication traffic; Very large scale integration;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Neural Networks, 2006. IJCNN '06. International Joint Conference on
Conference_Location :
Vancouver, BC
Print_ISBN :
0-7803-9490-9
Type :
conf
DOI :
10.1109/IJCNN.2006.246654
Filename :
1716065