DocumentCode
259108
Title
LVQ neural network SoC adaptable to different on-chip learning and recognition applications
Author
Fengwei An; Toshinobu Akazawa; Shogo Yamazaki; Lei Chen; Hans Jurgen Mattausch
fYear
2014
fDate
17-20 Nov. 2014
Firstpage
623
Lastpage
626
Abstract
The developed SoC in 180 nm technology for the implementation of a Learning Vector Quantization (LVQ) neural network is based on a hardware/software co-design concept for on-chip learning and recognition. The minimal-Euclidean-distance search, which is the most time-consuming operation in the competition layer of the LVQ algorithm, is solved by a pipeline with a parallel p-word input architecture. Very high flexibility is achieved because the input number, the neuron number in the competition layer, the weight values, and the output number are scalable to satisfy the requirements of different applications without changing the designed hardware. For example, for a d-dimensional input vector, the classification is completed in ⌈d/p⌉ + R clock cycles, where R is the pipeline depth. An embedded 32-bit RISC CPU is mainly used for adjusting the values of the feature vectors, which is not a time-critical operation in the LVQ algorithm.
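The following is a minimal software sketch of the recognition step and the cycle-count expression described in the abstract, assuming a nearest-prototype (winner-take-all) search by squared Euclidean distance; the function names, parameter names, and the numeric values in the example are illustrative only and are not taken from the paper's hardware design.

import math

def classify_lvq(input_vec, prototypes, labels):
    # Return the label of the prototype nearest to input_vec.
    # Squared Euclidean distance is used; omitting the square root
    # does not change which prototype wins the competition.
    best_idx = min(
        range(len(prototypes)),
        key=lambda i: sum((x - w) ** 2 for x, w in zip(input_vec, prototypes[i])),
    )
    return labels[best_idx]

def classification_cycles(d, p, pipeline_depth_r):
    # Clock cycles for one classification with p-word parallel input,
    # following the ceil(d/p) + R expression given in the abstract.
    return math.ceil(d / p) + pipeline_depth_r

# Example usage (assumed values): a 64-dimensional feature vector,
# 8 input words per cycle, pipeline depth R = 5.
print(classification_cycles(64, 8, 5))  # -> 13

labels = ["A", "B"]
prototypes = [[0.0, 0.0], [1.0, 1.0]]
print(classify_lvq([0.2, 0.1], prototypes, labels))  # -> "A"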
Keywords
hardware-software codesign; learning (artificial intelligence); neural nets; system-on-chip; vector quantisation; LVQ neural network SoC; RISC CPU; central processing unit; competition layer; dimensional input vector; learning vector quantization algorithm; minimal Euclidean distance search; neuron number; on-chip learning; on-chip recognition application; pipeline depth; reduced instruction set computing; size 180 nm; weight value; Computer architecture; Hardware; Neurons; Pipelines; Registers; Vectors
fLanguage
English
Publisher
ieee
Conference_Title
2014 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)
Conference_Location
Ishigaki
Type
conf
DOI
10.1109/APCCAS.2014.7032858
Filename
7032858
Link To Document