DocumentCode :
1486424
Title :
Prefetching using Markov predictors
Author :
Joseph, Doug ; Grunwald, Dirk
Author_Institution :
IBM Thomas J. Watson Res. Center, Yorktown Heights, NY, USA
Volume :
48
Issue :
2
fYear :
1999
fDate :
2/1/1999
Firstpage :
121
Lastpage :
133
Abstract :
Prefetching is one approach to reducing the latency of memory operations in modern computer systems. In this paper, we describe the Markov prefetcher. This prefetcher acts as an interface between the on-chip and off-chip cache and can be added to existing computer designs. The Markov prefetcher is distinguished by prefetching multiple reference predictions from the memory subsystem, and then prioritizing the delivery of those references to the processor. This design results in a prefetching system that provides good coverage, is accurate, and produces timely results that can be effectively used by the processor. We also explore a range of techniques that can be used to reduce the bandwidth demands of prefetching, leading to improved memory system performance. In our cycle-level simulations, the Markov prefetcher reduces the overall execution stalls due to instruction and data memory operations by an average of 54 percent for various commercial benchmarks, while using only two-thirds the memory of a demand-fetch cache organization.
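The core idea described in the abstract is a predictor that maps a miss address to the addresses most likely to miss next, and issues those candidates in priority order. The following is a minimal, hypothetical Python sketch of such a Markov miss-address predictor; it is not the authors' hardware design, and the table sizes, priority policy (most-recent-first), and class/parameter names (MarkovPrefetcher, predictions_per_entry, table_size) are illustrative assumptions only.

from collections import OrderedDict

class MarkovPrefetcher:
    """Sketch of a Markov miss-address predictor (illustrative, not the paper's hardware)."""

    def __init__(self, predictions_per_entry=4, table_size=1024):
        self.predictions_per_entry = predictions_per_entry
        self.table_size = table_size
        # miss address -> list of predicted next miss addresses, highest priority first
        self.table = OrderedDict()
        self.prev_miss = None

    def on_miss(self, addr):
        """Record the transition prev_miss -> addr, then return prioritized
        prefetch candidates for addr (most likely successors first)."""
        if self.prev_miss is not None:
            preds = self.table.setdefault(self.prev_miss, [])
            if addr in preds:
                preds.remove(addr)
            preds.insert(0, addr)                    # most recent successor gets top priority
            del preds[self.predictions_per_entry:]   # keep only a few predictions per entry
            self.table.move_to_end(self.prev_miss)
            if len(self.table) > self.table_size:
                self.table.popitem(last=False)       # evict the least recently used entry
        self.prev_miss = addr
        return list(self.table.get(addr, []))

# Example: after seeing the miss sequence 0x100, 0x200 twice, a miss on 0x100
# yields 0x200 as the top prefetch candidate.
pf = MarkovPrefetcher()
for a in [0x100, 0x200, 0x100, 0x200, 0x100]:
    print(hex(a), [hex(p) for p in pf.on_miss(a)])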
Keywords :
Markov processes; cache storage; memory architecture; performance evaluation; Markov prefetcher; cache; latency; memory operations; memory system performance; prefetching; Bandwidth; Computer Society; Computer interfaces; Data structures; Delay; Design for experiments; Hardware; Prefetching; Process design; System performance;
fLanguage :
English
Journal_Title :
IEEE Transactions on Computers
Publisher :
IEEE
ISSN :
0018-9340
Type :
jour
DOI :
10.1109/12.752653
Filename :
752653