DocumentCode
3601211
Title
GREEN Cache: Exploiting the Disciplined Memory Model of OpenCL on GPUs
Author
Jaekyu Lee; Dong Hyuk Woo; Hyesoon Kim; Mani Azimi
Author_Institution
Intel Corp., Hillsboro, OR, USA
Volume
64
Issue
11
fYear
2015
Firstpage
3167
Lastpage
3180
Abstract
As various graphics processing unit (GPU) architectures are deployed across a broad computing spectrum, from hand-held and embedded devices to high-performance computing servers, OpenCL has become the de facto standard programming environment for general-purpose computing on GPUs. Unlike its CPU counterparts, OpenCL has several distinct features, such as its disciplined memory model, which is partially inherited from conventional 3D graphics programming models. At the same time, due to ever-increasing memory bandwidth pressure and low-power requirements, the capacity of on-chip caches in GPUs keeps increasing over time. Given these trends, we believe there are interesting programming model/architecture co-optimization opportunities, in particular in how to energy-efficiently utilize large on-chip caches in GPUs. In this paper, as a showcase, we study the characteristics of the OpenCL memory model and propose a technique called the GPU Region-aware Energy-Efficient Non-inclusive cache hierarchy, or GREEN cache hierarchy. With the GREEN cache, our simulation results show that we can save 56 percent of dynamic energy in the L1 cache, 39 percent of dynamic energy in the L2 cache, and 50 percent of leakage energy in the L2 cache, with practically no performance degradation and no increase in off-chip accesses.
Keywords
cache storage; energy conservation; graphics processing units; low-power electronics; power-aware computing; 3D graphics programming models; GPU; GREEN cache hierarchy; L1 cache; L2 cache; OpenCL memory model; de facto standard programming environment; disciplined memory model; dynamic energy; embedded device; energy efficiency; general-purpose computing; graphics processing unit architectures; hand-held device; high-performance computing server; large on-chip caches; leakage energy; low-power requirement; memory bandwidth; off-chip access; programming model/architecture co-optimization; region-aware energy-efficient non-inclusive cache hierarchy; Computational modeling; Graphics processing units; Hardware; Kernel; Memory management; Programming; Training; Cache; GPU; OpenCL
fLanguage
English
Journal_Title
IEEE Transactions on Computers
Publisher
IEEE
ISSN
0018-9340
Type
jour
DOI
10.1109/TC.2015.2395435
Filename
7018047
Link To Document