DocumentCode :
3571353
Title :
Automatic Code Tuning for Improving GPU Resource Utilization
Author :
Takeshima, Ryo ; Tsumura, Tomoaki
Author_Institution :
Nagoya Inst. of Technol., Nagoya, Japan
fYear :
2014
Firstpage :
419
Lastpage :
425
Abstract :
Utilizing a GPU to perform general-purpose computation is called GPGPU, and the high theoretical performance of GPUs has drawn attention to it. CUDA provides a platform for developers of GPU applications. In the CUDA programming model, massive numbers of threads are allocated to the GPU's calculation units. In addition, CUDA exposes various kinds of memory on the GPU, which differ in access latency, capacity, and other characteristics. To produce high-performance GPU programs, developers must therefore consider how to allocate the massive numbers of threads to cores and which memory to use for storing data, and this requires a deep understanding of the GPU architecture and the CUDA APIs. To address this problem, this paper proposes an auto-tuning framework for GPU programs and describes an implementation of a preprocessor for the framework.
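The following is a minimal CUDA sketch, not taken from the paper, illustrating the two tuning decisions the abstract mentions: how many threads to allocate per block and whether data is served from high-latency global memory or staged in fast on-chip shared memory. All names (BLOCK_SIZE, scale_global, scale_shared) are illustrative assumptions, not part of the proposed framework.

// Minimal sketch of the tuning choices described in the abstract (assumed example).
#include <cstdio>
#include <cuda_runtime.h>

#define BLOCK_SIZE 256  // threads per block: a typical tuning parameter

// Naive variant: every access goes to high-latency global memory.
__global__ void scale_global(const float *in, float *out, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * factor;
}

// Variant that stages data in low-latency shared memory first;
// worthwhile when values are reused by several threads in a block.
__global__ void scale_shared(const float *in, float *out, float factor, int n) {
    __shared__ float tile[BLOCK_SIZE];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tile[threadIdx.x] = in[i];
    __syncthreads();
    if (i < n) out[i] = tile[threadIdx.x] * factor;
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    int blocks = (n + BLOCK_SIZE - 1) / BLOCK_SIZE;  // thread-to-core allocation choice
    scale_shared<<<blocks, BLOCK_SIZE>>>(in, out, 2.0f, n);
    cudaDeviceSynchronize();
    printf("out[0] = %f\n", out[0]);

    cudaFree(in);
    cudaFree(out);
    return 0;
}

Choosing BLOCK_SIZE and deciding between the two kernel variants is exactly the kind of architecture-dependent decision that the proposed auto-tuning framework is intended to relieve developers of.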
Keywords :
application program interfaces; graphics processing units; parallel architectures; resource allocation; CUDA API; Compute Unified Device Architecture; GPGPU; GPU resource utilization; application program interface; code tuning; general purpose graphics processing unit; high-performance GPU program; Graphics processing units; Instruction sets; Kernel; Message systems; Registers; Tuning;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2014 Second International Symposium on Computing and Networking (CANDAR)
Type :
conf
DOI :
10.1109/CANDAR.2014.48
Filename :
7052220