• DocumentCode
    73248
  • Title
    Efficient Hardware Architecture for Sparse Coding
  • Author
    Jung Kuk Kim; Phil Knag; T. Chen; Zhengya Zhang
  • Author_Institution
    Dept. of Electr. Eng. & Comput. Sci., Univ. of Michigan, Ann Arbor, MI, USA
  • Volume
    62
  • Issue
    16
  • fYear
    2014
  • fDate
    Aug. 15, 2014
  • Firstpage
    4173
  • Lastpage
    4186
  • Abstract
    Sparse coding encodes natural stimuli using a small number of basis functions known as receptive fields. In this work, we design custom hardware architectures for efficient and high-performance implementations of a sparse coding algorithm called the sparse and independent local network (SAILnet). A study of the neuron spiking dynamics uncovers important design considerations involving the neural network size, target firing rate, and neuron update step size. Optimal tuning of these parameters keeps the neuron spikes sparse and random to achieve the best image fidelity. We investigate two practical hardware architectures for SAILnet: a bus architecture that provides efficient neuron communication but suffers from spike collisions, and a ring architecture that is more scalable but causes neuron misfires. We show that a sparse spiking neural network reduces the spike collision rate, so an arbitration-free bus architecture can be designed to tolerate collisions. To reduce neuron misfires, we design a latent ring architecture that damps the neuron responses for improved image fidelity. The bus and ring architectures can be combined in a hybrid architecture to achieve both high throughput and scalability. The three architectures are synthesized and placed and routed in a 65 nm CMOS technology. The proof-of-concept designs demonstrate a sparse coding throughput of up to 952 M pixels per second at an energy consumption of 0.486 nJ per pixel.
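    The abstract describes SAILnet inference as spiking-neuron dynamics driven by feedforward receptive fields and lateral inhibition, with spike counts forming the sparse code. The sketch below is only an illustration of that style of inference, not the authors' hardware algorithm or the paper's exact update rule; the weight shapes, step size, thresholds, and the sailnet_infer helper are all assumed for demonstration.

```python
# Hedged sketch of SAILnet-style spiking inference (assumed formulation, not the
# paper's RTL): leaky integrate-and-fire neurons driven by feedforward weights Q,
# inhibited through lateral weights W, thresholded by theta. Accumulated spike
# counts over the inference window form the sparse code for one image patch.
import numpy as np

def sailnet_infer(x, Q, W, theta, eta=0.1, n_steps=50):
    """Return per-neuron spike counts encoding the image patch x."""
    n_neurons = Q.shape[0]
    u = np.zeros(n_neurons)          # membrane potentials
    spikes = np.zeros(n_neurons)     # accumulated spike counts (the sparse code)
    drive = Q @ x                    # feedforward input, computed once per patch
    for _ in range(n_steps):
        s = (u > theta).astype(float)    # neurons above threshold fire
        u[s > 0] = 0.0                   # firing resets the membrane potential
        spikes += s
        # leaky integration plus lateral inhibition from neurons that just fired
        u += eta * (drive - W @ s - u)
    return spikes

# Toy usage: 64 neurons encoding an 8x8 (= 64-pixel) patch with random weights.
rng = np.random.default_rng(0)
Q = rng.normal(size=(64, 64)) * 0.1          # assumed feedforward receptive fields
W = np.abs(rng.normal(size=(64, 64))) * 0.05  # assumed non-negative lateral inhibition
np.fill_diagonal(W, 0.0)                     # no self-inhibition
code = sailnet_infer(rng.normal(size=64), Q, W, theta=np.full(64, 0.5))
print("active neurons:", np.count_nonzero(code))
```

    In this framing, the paper's observation that sparse, random spiking keeps bus collisions rare corresponds to most entries of the spike vector s being zero on any given time step.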
  • Keywords
    computer vision; neural nets; CMOS technology; bus architecture; hardware architecture; improved image fidelity; latent ring architecture; neural network size; neuron communications; neuron spiking dynamics; sparse and independent local network; sparse coding; Algorithm design and analysis; Biological neural networks; Computer architecture; Encoding; Hardware; Neurons; Signal processing algorithms; Algorithm and architecture co-optimization; hardware acceleration; neural network architecture; sparse and independent local network; sparse coding
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Signal Processing
  • Publisher
    IEEE
  • ISSN
    1053-587X
  • Type
    jour
  • DOI
    10.1109/TSP.2014.2333556
  • Filename
    6845367