Title :
Automatic Parallelization of Tiled Loop Nests with Enhanced Fine-Grained Parallelism on GPUs
Author :
Di, Peng; Ye, Ding; Su, Yu; Sui, Yulei; Xue, Jingling
Author_Institution :
School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
Abstract :
Automatically parallelizing loop nests into CUDA kernels must exploit the full potential of GPUs to obtain high performance. One state-of-the-art approach uses the polyhedral model to extract parallelism from a loop nest by applying a sequence of affine transformations to it. However, automating this process so that both intra- and inter-SM parallelism are exploited on GPUs remains a challenging problem: existing compilers may generate code that is significantly slower than hand-optimized code for certain applications. This paper describes a compiler framework for tiling and parallelizing loop nests with uniform dependences into CUDA code. We aim to improve two levels of wavefront parallelism. First, we find tiling hyperplanes by embedding parallelism-enhancing constraints in the polyhedral model to maximize intra-tile, i.e., intra-SM, parallelism; this improves the load balance among the SPs in an SM executing a wavefront of loop iterations within a tile. Second, we eliminate parallelism-hindering false dependences to maximize inter-tile, i.e., inter-SM, parallelism; this improves the load balance among the SMs executing a wavefront of tiles. Our approach has been implemented in PLUTO and validated using eight benchmarks on two NVIDIA GPUs (C1060 and C2050). Compared to PLUTO, it achieves 2-5.5X speedups across the benchmarks. Compared to highly hand-optimized 1-D Jacobi (3 points), 2-D Jacobi (5 points), 3-D Jacobi (7 points) and 3-D Jacobi (27 points), our speedups, 1.17X, 1.41X, 0.97X and 0.87X with an average of 1.10X on C1060, and 1.24X, 1.20X, 0.86X and 0.95X with an average of 1.06X on C2050, are competitive.
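Illustrative_Example :
To make the setting concrete, the sketch below (not taken from the paper) shows a 1-D Jacobi (3-point) stencil, the simplest of the uniform-dependence loop nests among the benchmarks, together with a naive CUDA port that parallelizes only the space loop. The names and parameters (jacobi1d_seq, jacobi1d_step, N, T) are hypothetical; the paper's generated code instead tiles the (t, i) iteration space with polyhedral transformations so that a wavefront of tiles is spread across SMs while the iterations within a tile are spread across the SPs of one SM, which this sketch does not reproduce.

// Hypothetical sketch only; not the paper's generated code.
#include <cstdio>
#include <cuda_runtime.h>

#define N 1024   // space points
#define T 100    // time steps

// Sequential input loop nest: all flow dependences are uniform,
// e.g. distance vectors (1,-1), (1,0) and (1,1) in the (t,i) space.
void jacobi1d_seq(float *a, float *b) {
    for (int t = 0; t < T; ++t) {
        for (int i = 1; i < N - 1; ++i)
            b[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0f;
        for (int i = 1; i < N - 1; ++i)   // copy back for the next time step
            a[i] = b[i];
    }
}

// Naive CUDA kernel: one time step, one thread per space point.
__global__ void jacobi1d_step(const float *a, float *b) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= 1 && i < N - 1)
        b[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0f;
}

int main() {
    float h_a[N];
    for (int i = 0; i < N; ++i) h_a[i] = (float)i;

    float *d_a, *d_b;
    cudaMalloc(&d_a, N * sizeof(float));
    cudaMalloc(&d_b, N * sizeof(float));
    cudaMemcpy(d_a, h_a, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, d_a, N * sizeof(float), cudaMemcpyDeviceToDevice);

    // The time loop stays sequential here; the paper's approach instead tiles
    // the (t,i) iteration space so that tiles on one wavefront run on
    // different SMs and iterations inside a tile run on the SPs of one SM.
    for (int t = 0; t < T; ++t) {
        jacobi1d_step<<<(N + 255) / 256, 256>>>(d_a, d_b);
        float *tmp = d_a; d_a = d_b; d_b = tmp;   // swap buffers between steps
    }

    cudaMemcpy(h_a, d_a, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("a[N/2] = %f\n", h_a[N / 2]);
    cudaFree(d_a); cudaFree(d_b);
    return 0;
}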
Keywords :
affine transforms; computational geometry; graphics processing units; parallel architectures; program compilers; program control structures; resource allocation; 1-D Jacobi; 2-D Jacobi; 3-D Jacobi; C1060; C2050; CUDA code; CUDA kernel; NVIDIA GPU; PLUTO; affine transformation; automatic loop nest parallelization; code generation; compiler framework; fine-grained parallelism; hand-optimized code; inter-SM parallelism; intra-SM parallelism; load balance; loop iteration; loop nest tiling; polyhedral model; wavefront parallelism; Arrays; Graphics processing unit; Jacobian matrices; Kernel; Parallel processing; Tiles; Vectors; GPU; Loop Parallelization; Loop Tiling
Conference_Title :
2012 41st International Conference on Parallel Processing (ICPP)
Conference_Location :
Pittsburgh, PA
Print_ISBN :
978-1-4673-2508-0
DOI :
10.1109/ICPP.2012.19