Title :
Exploiting Hyper-Loop Parallelism in Vectorization to Improve Memory Performance on CUDA GPGPU
Author :
Shixiong Xu; David Gregg
Author_Institution :
Dept. of Comput. Sci., Univ. of Dublin, Dublin, Ireland
Abstract :
Memory performance is of great importance for achieving high performance on Nvidia CUDA GPUs. Previous work has proposed specific optimizations such as thread coarsening, caching data in shared memory, and global data layout transformation. We argue that vectorization based on hyper-loop parallelism can serve as a unified technique for optimizing memory performance. In this paper, we present a compiler framework, built on the Cetus source-to-source compiler, that improves memory performance on CUDA GPUs by efficiently exploiting hyper-loop parallelism in vectorization. We introduce abstractions of SIMD vectors and SIMD operations that match the execution model and memory model of the CUDA GPU, along with three execution mapping strategies for efficiently offloading vectorized code to CUDA GPUs. In addition, because we employ vectorization in C-to-CUDA with automatic parallelization, our technique further refines the mapping granularity between coarse-grain loop parallelism and GPU threads. We evaluated the proposed technique on two platforms, an embedded GPU system (Jetson TK1) and a desktop GPU (GeForce GTX 645). The experimental results demonstrate that our vectorization technique based on hyper-loop parallelism yields speedups of up to 2.5× over the direct coarse-grain loop parallelism mapping.
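To make the memory behavior concrete, below is a minimal CUDA sketch (not the authors' compiler output; the kernel names, scaling computation, and launch parameters are illustrative assumptions) contrasting a direct coarse-grain mapping, in which each thread walks its own contiguous chunk so a warp's accesses are strided, with a float4-vectorized mapping in the spirit of hyper-loop parallelism, in which adjacent threads access adjacent vector elements so each warp reads contiguous memory.

```cuda
// Hypothetical sketch: coarse-grain vs. vectorized mapping of a simple
// element-wise loop. Names and parameters are illustrative, not the paper's.
#include <cuda_runtime.h>

// Direct coarse-grain mapping: thread t handles iterations
// [t*chunk, (t+1)*chunk). At each loop step, neighboring threads in a
// warp are `chunk` floats apart, so accesses are strided and poorly coalesced.
__global__ void scale_coarse(const float* in, float* out, int chunk, int n) {
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * chunk;
    for (int i = 0; i < chunk && base + i < n; ++i)
        out[base + i] = 2.0f * in[base + i];
}

// Vectorized mapping: consecutive threads load consecutive float4 values,
// so each warp touches one contiguous span of memory per transaction.
__global__ void scale_vec4(const float4* in, float4* out, int n4) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4) {
        float4 v = in[i];
        v.x *= 2.0f; v.y *= 2.0f; v.z *= 2.0f; v.w *= 2.0f;
        out[i] = v;
    }
}

int main() {
    const int n = 1 << 20;   // element count, multiple of 4
    const int chunk = 4;     // iterations per thread in the coarse mapping
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));

    const int threads = 256;
    scale_coarse<<<(n / chunk + threads - 1) / threads, threads>>>(
        d_in, d_out, chunk, n);
    // cudaMalloc pointers are sufficiently aligned for float4 reinterpretation.
    scale_vec4<<<(n / 4 + threads - 1) / threads, threads>>>(
        reinterpret_cast<const float4*>(d_in),
        reinterpret_cast<float4*>(d_out), n / 4);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Both kernels perform the same work per element; the vectorized form coalesces accesses and widens each thread's transaction to 16 bytes, which is the kind of memory-level improvement the paper targets.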
Keywords :
"Graphics processing units","Parallel processing","Instruction sets","Optimization","Message systems","Layout","Arrays"
Conference_Title :
2015 IEEE Trustcom/BigDataSE/ISPA
DOI :
10.1109/Trustcom.2015.612