Title :
Distributed prefetch-buffer/cache design for high performance memory systems
Author :
Alexander, Thomas; Kedem, Gershon
Author_Institution :
Dept. of Comput. Sci. & Electr. Eng., Duke Univ., Durham, NC, USA
Abstract :
Microprocessor execution speeds are improving at a rate of 50%-80% per year, while DRAM access times are improving at a much lower rate of 5%-10% per year. Computer systems are rapidly approaching the point at which overall system performance is determined not by the speed of the CPU but by the speed of the memory system. We present a high-performance memory system architecture that overcomes the growing speed disparity between high-performance microprocessors and current-generation DRAMs. A novel prediction and prefetching technique is combined with a distributed cache architecture to build a high-performance memory system. We use a table-based prediction scheme with a prediction cache to prefetch data from the on-chip DRAM array to an on-chip SRAM prefetch buffer. By prefetching data, we are able to hide the large latency associated with DRAM access and cycle times. Our experiments show that with a small (32 KB) prediction cache we can achieve an effective main memory access time that is close to the access time of larger secondary caches.
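Illustrative_Sketch :
The table-based prediction scheme mentioned in the abstract can be sketched in a few lines of C. The code below is not the authors' implementation: the table size, the fully associative prefetch buffer, the round-robin replacement, and the simple last-successor predictor are all assumptions made for illustration. The idea it demonstrates is the same as the one the abstract describes at a high level: record, for each referenced line, which line followed it last time, and on the next reference to that line prefetch the predicted successor from the DRAM array into the SRAM prefetch buffer so the access latency is hidden.

/*
 * Minimal sketch of a table-based address predictor driving a small
 * prefetch buffer.  Sizes and policies are assumptions for the example.
 */
#include <stdio.h>
#include <stdint.h>

#define PRED_ENTRIES  1024u   /* prediction-table entries (assumed)   */
#define BUF_ENTRIES   64u     /* prefetch-buffer lines (assumed)      */
#define LINE_BYTES    32u     /* line granularity (assumed)           */

typedef struct {
    uint32_t tag;        /* line address that was observed            */
    uint32_t next_line;  /* line address that followed it last time   */
    int      valid;
} PredEntry;

typedef struct {
    uint32_t line;       /* line address currently held in SRAM       */
    int      valid;
} BufEntry;

static PredEntry pred[PRED_ENTRIES];
static BufEntry  buf[BUF_ENTRIES];
static uint32_t  prev_line;
static int       have_prev;

static uint32_t line_of(uint32_t addr) { return addr / LINE_BYTES; }

/* Look up a line in the prefetch buffer (fully associative for brevity). */
static int buffer_hit(uint32_t line)
{
    for (uint32_t i = 0; i < BUF_ENTRIES; i++)
        if (buf[i].valid && buf[i].line == line)
            return 1;
    return 0;
}

/* Place a predicted line into the buffer (round-robin replacement). */
static void buffer_fill(uint32_t line)
{
    static uint32_t victim = 0;
    buf[victim].line  = line;
    buf[victim].valid = 1;
    victim = (victim + 1) % BUF_ENTRIES;
}

/* One memory reference: returns 1 on prefetch-buffer hit, 0 on DRAM access. */
static int memory_access(uint32_t addr)
{
    uint32_t line = line_of(addr);
    int hit = buffer_hit(line);

    /* Learn the observed transition prev_line -> line. */
    if (have_prev) {
        PredEntry *e = &pred[prev_line % PRED_ENTRIES];
        e->tag = prev_line;
        e->next_line = line;
        e->valid = 1;
    }
    prev_line = line;
    have_prev = 1;

    /* Use the table to prefetch the line predicted to follow this one
       from the DRAM array into the SRAM prefetch buffer.              */
    PredEntry *p = &pred[line % PRED_ENTRIES];
    if (p->valid && p->tag == line)
        buffer_fill(p->next_line);

    return hit;
}

int main(void)
{
    /* Synthetic stream: two passes over a strided sequence; the second
       pass hits in the buffer once the table has learned the pattern. */
    int hits = 0, refs = 0;
    for (int pass = 0; pass < 2; pass++)
        for (uint32_t a = 0; a < 64 * LINE_BYTES; a += LINE_BYTES) {
            hits += memory_access(a);
            refs++;
        }
    printf("prefetch-buffer hits: %d / %d references\n", hits, refs);
    return 0;
}

On the synthetic stream above, the first pass misses everywhere while the table is trained; the second pass hits in the prefetch buffer on every reference except the first, which is the effect the paper's scheme exploits to hide DRAM latency.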
Keywords :
cache storage; memory architecture; DRAM access times; SRAM; cache design; distributed cache architecture; distributed prefetch buffer design; high performance memory systems; memory system architecture; memory system speed; prefetching technique; table based prediction scheme; Bandwidth; Clocks; Computer science; Delay; Gears; Hardware; Microprocessors; Prefetching; Random access memory; System performance;
Conference_Title :
Proceedings of the Second International Symposium on High-Performance Computer Architecture, 1996
Conference_Location :
San Jose, CA
Print_ISBN :
0-8186-7237-4
DOI :
10.1109/HPCA.1996.501191