DocumentCode :
1284183
Title :
Scalable Tree-Based Architectures for IPv4/v6 Lookup Using Prefix Partitioning
Author :
Le, Hoang ; Prasanna, Viktor K.
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Southern California, Los Angeles, CA, USA
Volume :
61
Issue :
7
fYear :
2012
fDate :
7/1/2012 12:00:00 AM
Firstpage :
1026
Lastpage :
1039
Abstract :
Memory efficiency and dynamically updateable data structures for Internet Protocol (IP) lookup have regained much interest in the research community. In this paper, we revisit the classic tree-based approach for solving the longest prefix matching (LPM) problem used in IP lookup. In particular, we target our solutions for a class of large and sparsely distributed routing tables, such as those potentially arising in the next-generation IPv6 routing protocol. Due to longer prefix lengths and a much larger address space, preprocessing such routing tables for tree-based LPM can significantly increase the number of prefixes and/or memory stages required for IP lookup. We propose a prefix partitioning algorithm (DPP) to divide a given routing table into k groups of disjoint prefixes (k is given). The algorithm employs dynamic programming to determine the optimal split lengths between the groups so as to minimize the total memory requirement. Our algorithm demonstrates a substantial reduction in memory footprint compared with state-of-the-art solutions in both the IPv4 and IPv6 cases. Two proposed linear pipelined architectures, which achieve high throughput and support incremental updates, are also presented. The proposed algorithm and architectures achieve a memory efficiency of 1 byte of memory for each byte of prefix for both IPv4 and IPv6. As a result, our design scales well to support larger routing tables, longer prefix lengths, or both; the total memory requirement depends solely on the number of prefixes. Implementations on 45 nm ASIC and a state-of-the-art FPGA device (for a routing table consisting of 330K prefixes) show that our algorithm achieves 980 and 410 million lookups per second, respectively. These results are well suited for 100 Gbps lookup. The implementations also scale to support larger routing tables and longer prefix lengths when moving from IPv4 to IPv6. Additionally, the proposed architectures can easily interface with external SRAMs to ease the limitation of on-chip memory of the target devices.
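The abstract's central algorithmic idea, choosing k split lengths by dynamic programming so that the resulting disjoint prefix groups minimize total memory, can be sketched compactly. The sketch below is illustrative only: the cost model (controlled prefix expansion to each group's upper split length) and the function partition_splits are assumptions made for exposition, not the paper's actual DPP formulation or data structures.

    # A minimal sketch, under an assumed cost model, of the dynamic-programming
    # idea behind length-based prefix partitioning described in the abstract.
    # Assumptions (not from the paper): every prefix whose length falls in a
    # group (lo, hi] is expanded to the group's upper split length hi, and a
    # group's memory cost is the number of expanded prefixes times hi bits.
    from collections import Counter
    from functools import lru_cache

    def partition_splits(prefix_lengths, k, max_len=32):
        """Pick k split lengths 0 < s1 < ... < sk = max_len minimizing the
        assumed total memory cost. Returns (cost, split_lengths)."""
        count = Counter(prefix_lengths)  # number of prefixes at each length

        def group_cost(lo, hi):
            # Expand each prefix of length l in (lo, hi] into 2^(hi-l) prefixes
            # of length hi; charge hi bits per expanded prefix (assumed model).
            return sum(count[l] * (1 << (hi - l)) for l in range(lo + 1, hi + 1)) * hi

        @lru_cache(maxsize=None)
        def dp(groups_left, lo):
            if groups_left == 1:                # last group must end at max_len
                return group_cost(lo, max_len), (max_len,)
            best_cost, best_splits = float("inf"), ()
            # leave room for the remaining groups_left - 1 split points
            for hi in range(lo + 1, max_len - groups_left + 2):
                rest_cost, rest_splits = dp(groups_left - 1, hi)
                cost = group_cost(lo, hi) + rest_cost
                if cost < best_cost:
                    best_cost, best_splits = cost, (hi,) + rest_splits
            return best_cost, best_splits

        return dp(k, 0)

    # Toy IPv4 example: prefix lengths 8, 16, 24, 24, 32 split into k = 3 groups.
    print(partition_splits([8, 16, 24, 24, 32], k=3))

For IPv6 the same recurrence applies with max_len=128; only the cost function, which the paper defines precisely, changes the resulting split points.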
Keywords :
IP networks; SRAM chips; application specific integrated circuits; data structures; dynamic programming; field programmable gate arrays; memory architecture; next generation networks; pattern matching; pipeline processing; routing protocols; table lookup; trees (mathematics); ASIC; IPv4-v6 lookup; Internet protocol lookup; LPM problem; SRAM; address space; classic tree-based approach; dynamic programming; dynamically updateable data structures; incremental updates; linear pipelined architectures; longer prefix lengths; longest prefix matching problem; memory efficiency; memory footprint; memory requirement; memory stages; next-generation IPv6 routing protocol; on-chip memory; optimal split lengths; prefix partitioning; prefix partitioning algorithm; routing tables; scalable tree-based architectures; sparsely distributed routing tables; state-of-the-art FPGA device; substantial reduction; target devices; tree-based LPM; Data structures; IP networks; Memory management; Partitioning algorithms; Routing; Throughput; IP lookup; field-programmable gate array (FPGA); longest prefix matching; partitioning; pipeline; reconfigurable
fLanguage :
English
Journal_Title :
Computers, IEEE Transactions on
Publisher :
ieee
ISSN :
0018-9340
Type :
jour
DOI :
10.1109/TC.2011.130
Filename :
5963640