• DocumentCode
    258938
  • Title
    A Performance Prediction Model for Memory-Intensive GPU Kernels

  • Author
    Zhidan Hu; Guangming Liu

  • Author_Institution
    College of Computer, National University of Defense Technology, Changsha, China
  • fYear
    2014
  • fDate
    26-27 July 2014
  • Firstpage
    14
  • Lastpage
    18
  • Abstract
    Commodity graphics processing units (GPUs) have rapidly evolved into high-performance accelerators for data-parallel computing, thanks to their large arrays of processing cores and the CUDA programming model with its C-like interface. However, optimizing an application for maximum performance on the GPU architecture is not a trivial task, owing to the substantial shift from conventional multi-core to many-core architectures. Moreover, GPU vendors disclose few details about the characteristics of the GPU's architecture. To provide insight into the performance of memory-intensive kernels, we propose a pipelined global memory model that incorporates the most critical factor affecting global memory performance, the uncoalesced memory access pattern, and provides a basis for predicting the performance of memory-intensive kernels. As we demonstrate, the pipeline throughput is dynamic and sensitive to the memory access patterns. We validated our model on NVIDIA GPUs using CUDA (Compute Unified Device Architecture). The experimental results show that the pipeline model captures global-memory-related performance factors and can estimate the performance of memory-intensive GPU kernels.
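    The abstract's central factor, the uncoalesced memory access pattern, can be illustrated with a small sketch (not taken from the paper). It assumes the commonly documented NVIDIA behavior of 32-thread warps and 128-byte global memory transaction segments, and counts how many transactions one warp issues for a strided 4-byte load; large strides multiply the transaction count, which is the effect a pipelined global memory model must capture.

    ```python
    # Illustrative sketch, not the paper's model. Assumes 32 threads per
    # warp, 128-byte transaction segments, and 4-byte loads per thread.
    WARP_SIZE = 32
    SEGMENT_BYTES = 128
    WORD_BYTES = 4

    def transactions_per_warp(stride):
        """Count the distinct 128-byte segments touched by one warp when
        thread t reads the word at address t * stride * WORD_BYTES."""
        segments = {(t * stride * WORD_BYTES) // SEGMENT_BYTES
                    for t in range(WARP_SIZE)}
        return len(segments)

    # stride 1  -> fully coalesced:   1 transaction per warp
    # stride 32 -> fully uncoalesced: 32 transactions per warp
    ```

    Under these assumptions, a stride of 1 yields one transaction per warp while a stride of 32 yields thirty-two, a 32x difference in demanded memory bandwidth for the same amount of useful data.
    
    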
  • Keywords
    DRAM chips; graphics processing units; parallel architectures; performance evaluation; pipeline processing; CUDA; Compute Unified Device Architecture; GPU architecture; NVIDIA GPU; critical global memory performance related factor; data-parallel computing; dynamic sensitive pipeline throughput; global memory; high-performance accelerators; memory-intensive GPU kernels; memory-intensive kernel performance; performance prediction model; pipelined global memory model; uncoalesced memory access pattern; Instruction sets; Kernel; Memory management; Pipelines; Random access memory; Throughput; GPU; memory-intensive; performance prediction
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2014 IEEE Symposium on Computer Applications and Communications (SCAC)
  • Conference_Location
    Weihai
  • Type
    conf
  • DOI
    10.1109/SCAC.2014.10
  • Filename
    6913158