DocumentCode :
1763408
Title :
Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse Clustered Networks
Author :
Jarollahi, Hooman ; Gripon, Vincent ; Onizawa, Naoya ; Gross, Warren J.
Author_Institution :
Dept. of Electr. & Comput. Eng., McGill Univ., Montreal, QC, Canada
Volume :
23
Issue :
4
fYear :
2015
fDate :
April 2015
Firstpage :
642
Lastpage :
653
Abstract :
We propose a low-power content-addressable memory (CAM) employing a new algorithm for associativity between the input tag and the corresponding address of the output data. The proposed architecture is based on a recently developed sparse clustered network using binary connections that, on average, eliminates most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower than that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs comparisons only on those candidates to locate a single valid match. TSMC 65-nm CMOS technology was used for simulation purposes. For a selected set of design parameters, such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8% and 26%, respectively, of those of a conventional NAND architecture, with a 10% area overhead. A design methodology based on silicon-area and power budgets and on performance requirements is also discussed.
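Illustration (not part of the original record): the sketch below models, in software, the search flow the abstract describes, where an associative pre-filter built from binary connections between tag sub-fields and clusters narrows the lookup to a few candidate rows, and exact tag comparisons are performed only on those rows. The sub-field partitioning, class names, and parameters are illustrative assumptions, not the authors' exact sparse-clustered-network construction or circuit.

```python
# Minimal software sketch of the candidate-narrowing search idea from the
# abstract. Assumption: tags are split into fixed-width sub-fields, and each
# sub-field value maps (via a binary "connection" table) to the rows that
# contain it. Only the intersection of these row sets is compared exactly,
# standing in for the avoided full-parallel compare of a conventional CAM.

class SparseClusteredCAM:
    def __init__(self, tag_bits=16, num_subfields=4):
        assert tag_bits % num_subfields == 0
        self.field_bits = tag_bits // num_subfields
        self.num_subfields = num_subfields
        self.entries = []                                # row index -> stored tag
        self.clusters = [dict() for _ in range(num_subfields)]  # value -> set of rows

    def _subfields(self, tag):
        mask = (1 << self.field_bits) - 1
        return [(tag >> (i * self.field_bits)) & mask
                for i in range(self.num_subfields)]

    def write(self, tag):
        row = len(self.entries)
        self.entries.append(tag)
        for cluster, value in zip(self.clusters, self._subfields(tag)):
            cluster.setdefault(value, set()).add(row)    # record binary connection
        return row

    def search(self, tag):
        # Intersect the rows activated by each sub-field to get a few candidates.
        candidates = None
        for cluster, value in zip(self.clusters, self._subfields(tag)):
            rows = cluster.get(value, set())
            candidates = rows if candidates is None else candidates & rows
            if not candidates:
                return None                              # early miss
        # Exact comparison restricted to the surviving candidates.
        for row in candidates:
            if self.entries[row] == tag:
                return row
        return None


if __name__ == "__main__":
    cam = SparseClusteredCAM()
    cam.write(0xBEEF)
    cam.write(0x1234)
    print(cam.search(0x1234))   # -> 1 (matched row)
    print(cam.search(0xFFFF))   # -> None (no valid match)
```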
Keywords :
CMOS memory circuits; content-addressable storage; energy consumption; low-power electronics; CAM; CMOS technology; NAND architecture; TSMC; binary connection; complementary metal oxide semiconductor; dynamic energy consumption; low-power content-addressable memory; matched tag; parallel comparison; power budget; search delay; silicon area; size 65 nm; sparse clustered network; Arrays; Computer aided manufacturing; Decoding; Delays; Energy consumption; Neurons; Associative memory; content-addressable memory (CAM); low-power computing; recurrent neural networks; sparse clustered networks (SCNs)
fLanguage :
English
Journal_Title :
Very Large Scale Integration (VLSI) Systems, IEEE Transactions on
Publisher :
IEEE
ISSN :
1063-8210
Type :
jour
DOI :
10.1109/TVLSI.2014.2316733
Filename :
6808477