Title :
A Massively Parallel Coprocessor for Convolutional Neural Networks
Author :
Sankaradas, Murugan ; Jakkula, Venkata ; Cadambi, Srihari ; Chakradhar, Srimat ; Durdanovic, Igor ; Cosatto, Eric ; Graf, Hans Peter
Author_Institution :
NEC Labs. America, Inc., Princeton, NJ, USA
Abstract :
We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a "meta-operator" to which a CNN may be compiled. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low-precision data and further increase the effective memory bandwidth by packing multiple words into every memory operation, and leverage the algorithm's simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1 GB. The coprocessor prototype can process at the rate of 3.4 billion multiply-accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.
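The abstract describes the coprocessor's "meta-operator" as a pipeline of 2D convolution, sub-sampling, and a non-linear function. As a minimal illustrative sketch (not the paper's hardware implementation, and assuming a typical CNN forward layer: valid-mode convolution, 2x2 average sub-sampling, and tanh), the per-layer computation looks like:

```python
import math

def conv2d_valid(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNN forward passes)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0  # each term is one multiply-accumulate (MAC)
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

def subsample2x2(fmap):
    """2x2 average-pooling sub-sampling stage."""
    return [[(fmap[r][c] + fmap[r][c + 1] + fmap[r + 1][c] + fmap[r + 1][c + 1]) / 4.0
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]

def cnn_layer(image, kernel):
    """One 'meta-operator' application: convolve -> sub-sample -> non-linearity."""
    pooled = subsample2x2(conv2d_valid(image, kernel))
    return [[math.tanh(v) for v in row] for row in pooled]
```

In the coprocessor, the inner MAC loops run on the parallel convolution primitives while the sub-sampling and non-linearity run on the programmable units; this sketch only shows the dataflow a CNN layer compiles to.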
Keywords :
convolution; coprocessors; face recognition; image sampling; learning (artificial intelligence); neural nets; nonlinear functions; parallel algorithms; storage management chips; DDR2 memory bank; Xilinx Virtex5 LX330T FPGA; convolutional neural network; data access pattern; data bandwidth; distributed off-chip memory bank; face recognition; image processing; machine learning algorithm; massively parallel coprocessor functional unit; nonlinear function; off-the-shelf PCI FPGA card; parallel 2D convolution primitive; programmable unit; scratchpad; Acceleration; Bandwidth; Cellular neural networks; Convolution; Coprocessors; Field programmable gate arrays; Hardware; Machine learning algorithms; Neural networks; Software prototyping; CNNs; FPGA; Machine Learning; Multicore; Neural Networks; Parallel Processor;
Conference_Titel :
Application-specific Systems, Architectures and Processors, 2009. ASAP 2009. 20th IEEE International Conference on
Conference_Location :
Boston, MA
Print_ISBN :
978-0-7695-3732-0
Electronic_ISBN :
2160-0511
DOI :
10.1109/ASAP.2009.25