DocumentCode :
87977
Title :
Size-Aware Cache Management for Compressed Cache Architectures
Author :
Seungcheol Baek ; Hyung Gyu Lee ; Chrysostomos Nicopoulos ; Junghee Lee ; Jongman Kim
Author_Institution :
Department of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Volume :
64
Issue :
8
fYear :
2015
fDate :
Aug. 1, 2015
Firstpage :
2337
Lastpage :
2352
Abstract :
A practical way to increase the effective capacity of a microprocessor's cache, without physically increasing the cache size, is to employ data compression. Last-Level Caches (LLC) are particularly amenable to such compression schemes, since the primary purpose of the LLC is to minimize the miss rate, i.e., it directly benefits from a larger logical capacity. In compressed LLCs, the cacheline size varies depending on the achieved compression ratio. Our observations indicate that this size information gives useful hints when managing the cache (e.g., when selecting a victim), which can lead to increased cache performance. However, there are currently no replacement policies tailored to compressed LLCs; existing techniques focus primarily on locality information. This article introduces the concept of size-aware cache management as a way to maximize the performance of compressed caches. Upon analyzing the benefits of considering size information in the management of compressed caches, we propose a novel mechanism, called Effective Capacity Maximizer (ECM), to further improve the performance and energy consumption of compressed LLCs. The proposed technique revolves around four fundamental principles: ECM Insertion (ECM-I), ECM Promotion (ECM-P), ECM Eviction Scheduling (ECM-ES), and ECM Replacement (ECM-R). Extensive simulations with memory traces from real applications running on a full-system simulator demonstrate significant improvements compared to compressed cache schemes employing conventional locality-aware cache replacement policies. Specifically, our ECM shows an average effective capacity increase of 18.4 percent over the Least-Recently Used (LRU) policy, and 23.9 percent over the Dynamic Re-Reference Interval Prediction (DRRIP) [1] scheme. This translates into average system performance improvements of 7.2 percent over LRU and 4.2 percent over DRRIP. Moreover, the average energy consumption is also reduced by 5.9 percent over LRU and 3.8 percent over DRRIP.
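Illustrative sketch (not from the paper): the abstract's central idea, folding compressed-line size into victim selection alongside locality information, can be pictured with a minimal C routine. The cache_line_t fields, the RRIP-style recency counter, and the size-based tie-breaking heuristic below are assumptions for illustration only; the actual ECM-I/ECM-P/ECM-ES/ECM-R mechanisms are defined in the paper itself.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical compressed-cache line descriptor (not from the paper). */
typedef struct {
    uint8_t  valid;      /* line currently holds data                  */
    uint8_t  rrpv;       /* RRIP-style re-reference prediction value   */
    uint16_t comp_size;  /* compressed size of the line, in bytes      */
} cache_line_t;

/*
 * Size-aware victim selection sketch: among the lines predicted to be
 * re-referenced furthest in the future (highest RRPV), prefer the one
 * whose eviction frees the most compressed space. This is only one
 * simple way to combine locality and size information.
 */
int pick_victim(const cache_line_t *set, size_t ways)
{
    int victim = -1;
    uint8_t worst_rrpv = 0;
    uint16_t best_size = 0;

    for (size_t w = 0; w < ways; w++) {
        if (!set[w].valid)
            return (int)w;            /* free slot: no eviction needed */
        if (victim < 0 ||
            set[w].rrpv > worst_rrpv ||
            (set[w].rrpv == worst_rrpv && set[w].comp_size > best_size)) {
            worst_rrpv = set[w].rrpv;
            best_size  = set[w].comp_size;
            victim     = (int)w;
        }
    }
    return victim;                    /* index of the line to evict */
}

Evicting the largest line among equally cold candidates frees the most logical capacity per eviction, which is the intuition behind size-aware management; the paper's ECM additionally applies size awareness at insertion, promotion, and eviction-scheduling time.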
Keywords :
cache storage; data compression; microprocessor chips; DRRIP; ECM eviction scheduling; ECM insertion; ECM promotion; ECM replacement; ECM-ES; ECM-I; ECM-P; ECM-R; LLC; LRU; cache size; cacheline size; compressed cache architectures; compression schemes; dynamic re-reference interval prediction scheme; effective capacity maximizer; last-level caches; least-recently used policy; locality information; microprocessor cache; size information; size-aware cache management; Compaction; Compression algorithms; Computer architecture; Energy consumption; System performance; Cache; Cache Compression; Cache Replacement Policy; Compression
fLanguage :
English
Journal_Title :
IEEE Transactions on Computers
Publisher :
IEEE
ISSN :
0018-9340
Type :
Journal Article
DOI :
10.1109/TC.2014.2360518
Filename :
6911946