Title :
A Static Task Scheduling Framework for Independent Tasks Accelerated Using a Shared Graphics Processing Unit
Author :
Li, Teng ; Narayana, Vikram K. ; El-Ghazawi, Tarek
Author_Institution :
Dept. of Electr. & Comput. Eng., George Washington Univ., Washington, DC, USA
Abstract :
The High Performance Computing (HPC) field is witnessing the increasing use of Graphics Processing Units (GPUs) as application accelerators, due to their massively data-parallel computing architectures and exceptional floating-point computational capabilities. The performance advantage of GPU-based acceleration is primarily derived from GPU computational kernels that operate on large amounts of data, consuming all of the available GPU resources. For applications that consist of several independent computational tasks, none of which occupies the entire GPU, running tasks on the GPU sequentially, one at a time, leads to performance inefficiencies. It is therefore important for the programmer to cluster small tasks together to share the GPU; however, the best performance cannot be achieved through ad-hoc grouping and execution of these tasks. In this paper, we explore the problem of GPU task scheduling, allowing multiple tasks to efficiently share the GPU and execute in parallel on it. We analyze the factors affecting multi-tasking parallelism and performance, and then develop a multi-tasking execution model that serves as a performance prediction approach. The model is validated by comparison against actual GPU-sharing execution scenarios. We then present a scheduling technique and algorithm based on the proposed model, followed by experimental verification of the proposed approach on an NVIDIA Fermi GPU computing node. Our results demonstrate significant performance improvements using the proposed scheduling approach, compared with sequential execution of the tasks under the conventional multi-tasking execution scenario.
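The clustering idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' scheduling algorithm; it is a generic greedy first-fit pass that packs independent tasks into concurrently launched groups subject to a shared GPU resource budget. The task names and the single normalized "occupancy fraction" per task are hypothetical simplifications.

```python
def group_tasks(tasks, capacity=1.0):
    """Greedy first-fit-decreasing grouping (illustrative only).

    tasks: list of (name, demand) pairs, where demand is a hypothetical
    normalized fraction of total GPU resources the task needs.
    Returns groups of task names; tasks within a group would be launched
    concurrently on the shared GPU, and groups would run back to back.
    """
    groups = []  # each entry: [remaining_capacity, [task names]]
    for name, demand in sorted(tasks, key=lambda t: -t[1]):
        for group in groups:
            if group[0] >= demand:       # task fits alongside this group
                group[0] -= demand
                group[1].append(name)
                break
        else:                            # no group has room: open a new one
            groups.append([capacity - demand, [name]])
    return [names for _, names in groups]

# Five hypothetical small kernels that individually underutilize the GPU:
tasks = [("k1", 0.5), ("k2", 0.3), ("k3", 0.4), ("k4", 0.2), ("k5", 0.6)]
print(group_tasks(tasks))  # two concurrent groups instead of five sequential runs
```

Under this toy resource model, the five tasks collapse into two concurrent batches rather than five sequential GPU occupancies, which is the kind of win the paper's model-driven scheduler targets with a far more detailed account of GPU resources and task runtimes.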
Keywords :
graphics processing units; parallel processing; resource allocation; scheduling; GPU computational kernel; GPU resource; GPU sharing; GPU task scheduling; GPU-based acceleration; NVIDIA Fermi GPU computing node; application accelerator; data-parallel computing architecture; floating-point computational capability; high performance computing; independent task scheduling; multitasking execution model; multitasking execution scenario; parallelism; performance prediction approach; shared graphics processing unit; static task scheduling framework; Computational modeling; Computer architecture; Graphics processing unit; Kernel; Measurement; Processor scheduling; Scheduling; GPU; multi-tasking; resource sharing; scheduling;
Conference_Title :
2011 IEEE 17th International Conference on Parallel and Distributed Systems (ICPADS)
Conference_Location :
Tainan
Print_ISBN :
978-1-4577-1875-5
DOI :
10.1109/ICPADS.2011.13