DocumentCode :
2213750
Title :
Enabling partial cache line prefetching through data compression
Author :
Zhang, Youtao ; Gupta, Rajiv
Author_Institution :
Dept. of Comput. Sci., Texas Univ., Dallas, TX
fYear :
2003
fDate :
9 Oct. 2003
Firstpage :
277
Lastpage :
285
Abstract :
Hardware prefetching is a simple and effective technique for hiding cache miss latency and thus improving overall performance. However, it typically requires additional prefetch buffers and significantly increases memory traffic. We propose a new prefetching scheme that improves performance without increasing memory traffic or requiring prefetch buffers. We observe that a significant percentage of dynamically appearing values exhibit characteristics that enable their compression using a very simple compression scheme. The bandwidth freed by transferring values from lower levels of the memory hierarchy to upper levels in compressed form is used to prefetch additional compressible values. These prefetched values are held in the vacant space created in the data cache by storing values in compressed form. Thus, in comparison to other prefetching schemes, our scheme does not introduce prefetch buffers or increase memory traffic. In comparison to a baseline cache that does not support prefetching, on average, our cache design reduces memory traffic by 10%, reduces the data cache miss rate by 14%, and speeds up program execution by 7%.
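The abstract does not spell out the "very simple compression scheme," but schemes of this kind commonly treat a word as compressible when it is the sign extension of a narrower value, so two compressed words can share one word-sized slot. A minimal sketch under that assumption (the 16-bit threshold and helper names are hypothetical, not taken from the paper):

```python
def compressible(word: int) -> bool:
    """Hypothetical test: a 32-bit word is compressible if its upper
    17 bits are all zeros or all ones, i.e. it is the sign extension
    of a 16-bit value. The paper's actual criterion may differ."""
    upper = (word >> 15) & 0x1FFFF  # bits 15..31 of the word
    return upper == 0 or upper == 0x1FFFF

def compress(word: int) -> int:
    # Keep only the low 16 bits; bit 15 carries the sign.
    return word & 0xFFFF

def decompress(half: int) -> int:
    # Sign-extend the 16-bit half back to a 32-bit word.
    return half if half < 0x8000 else half | 0xFFFF0000
```

When both halves of a transfer are compressible, the freed half of the bus width can carry a compressed word from an adjacent (prefetched) cache line, which is the effect the scheme exploits.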
Keywords :
cache storage; data compression; storage allocation; cache miss latency; data compression; hardware prefetching; memory traffic; prefetch buffer; Bandwidth; Computer science; Data compression; Delay; Hardware; High performance computing; Parallel processing; Pipelines; Pollution; Prefetching;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2003 International Conference on Parallel Processing, Proceedings
Conference_Location :
Kaohsiung
ISSN :
0190-3918
Print_ISBN :
0-7695-2017-0
Type :
conf
DOI :
10.1109/ICPP.2003.1240590
Filename :
1240590