• DocumentCode
    3290243
  • Title
    Prediction caches for superscalar processors
  • Author
    Bennett, James E.; Flynn, Michael J.
  • Author_Institution
    Comput. Syst. Lab., Stanford Univ., CA, USA
  • fYear
    1997
  • fDate
    1-3 Dec 1997
  • Firstpage
    81
  • Lastpage
    90
  • Abstract
    Processor cycle times are currently much faster than memory cycle times, and this gap continues to increase. Adding a high-speed cache memory allows the processor to run at full speed as long as the data it needs is present in the cache. However, memory latency still affects performance in the case of a cache miss. Prediction caches use a history of recent cache misses to predict future misses and to reduce the overall cache miss rate. This paper describes several prediction caches and introduces a new kind of prediction cache that combines the features of prefetching and victim caching. This new cache is shown to be more effective at reducing the miss rate and improving performance than existing prediction caches. (See the illustrative sketch after this record.)
  • Keywords
    cache storage; parallel architectures; performance evaluation; cache miss rate; high speed cache memory; memory cycle times; memory latency; performance; prediction caches; prefetching; processor cycle times; superscalar processors; victim caching; Cache memory; Delay; Dynamic scheduling; Graphics; History; Laboratories; Prefetching; Processor scheduling; Silicon
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    Proceedings of the Thirtieth Annual IEEE/ACM International Symposium on Microarchitecture, 1997
  • Conference_Location
    Research Triangle Park, NC
  • ISSN
    1072-4451
  • Print_ISBN
    0-8186-7977-8
  • Type
    conf
  • DOI
    10.1109/MICRO.1997.645800
  • Filename
    645800
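
The abstract describes a cache that combines prefetching with victim caching. Below is a minimal illustrative Python sketch of that general idea: a small fully associative buffer that holds both victim lines evicted from a direct-mapped L1 and next-line prefetches issued on a miss. The cache geometry, buffer size, and next-line prefetch policy are assumptions chosen for illustration; they are not the design evaluated in the paper.

from collections import Counter, OrderedDict

# Illustrative parameters only; not taken from the paper.
LINE = 32          # bytes per cache line (assumed)
SETS = 256         # direct-mapped L1 sets (assumed)
BUF_ENTRIES = 8    # entries in the combined victim/prefetch buffer (assumed)

l1 = {}              # set index -> tag
buf = OrderedDict()  # line address -> True, kept in LRU order

def access(addr):
    """Classify a byte-address access as 'hit', 'buf_hit', or 'miss'."""
    line = addr // LINE
    idx, tag = line % SETS, line // SETS

    if l1.get(idx) == tag:        # L1 hit
        return 'hit'

    if line in buf:               # served by the victim/prefetch buffer
        buf.pop(line)
        swap_in(idx, tag)
        return 'buf_hit'

    # True miss: install the line and prefetch the next line into the buffer.
    swap_in(idx, tag)
    insert_buf(line + 1)
    return 'miss'

def swap_in(idx, tag):
    """Install a line in L1; move any evicted line into the buffer (victim caching)."""
    old = l1.get(idx)
    if old is not None:
        insert_buf(old * SETS + idx)   # reconstruct the victim's line address
    l1[idx] = tag

def insert_buf(line):
    """LRU insertion into the combined buffer."""
    buf[line] = True
    buf.move_to_end(line)
    if len(buf) > BUF_ENTRIES:
        buf.popitem(last=False)        # evict the least recently used entry

if __name__ == "__main__":
    # Stride-1 walk: after a true miss, the next line is often found in the
    # buffer, so part of the miss stream is absorbed by prefetched entries.
    counts = Counter(access(a) for a in range(0, 64 * LINE, 4))
    print(counts)

In this sketch the single buffer plays both roles: victim entries keep recently evicted lines close by, while next-line prefetches anticipate sequential misses. The paper's prediction caches instead use a history of recent cache misses to predict future ones; the fixed next-line policy here is only a stand-in for illustration.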