  • DocumentCode
    3237209
  • Title
    MPI versus MPI+OpenMP on the IBM SP for the NAS Benchmarks
  • Author
    Cappello, Franck; Etiemble, Daniel
  • Author_Institution
    Université Paris-Sud
  • fYear
    2000
  • fDate
    04-10 Nov. 2000
  • Firstpage
    12
  • Lastpage
    12
  • Abstract
    The hybrid memory model of clusters of multiprocessors raises two issues: the programming model and performance. Many parallel programs have been written using the MPI standard. To evaluate the relevance of hybrid models for existing MPI codes, we compare a unified model (MPI) and a hybrid one (fine-grain OpenMP parallelization after profiling) for the NAS 2.3 benchmarks on two IBM SP systems. Which model is superior depends on (1) the level of shared-memory parallelization, (2) the communication patterns, and (3) the memory access patterns. The relative speeds of the main architecture components (CPU, memory, and network) are decisive when selecting one model. With the hybrid model used here, our results show that the unified MPI approach is better for most of the benchmarks. The hybrid approach becomes better only when fast processors make communication performance significant and the level of parallelization is sufficient.
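    Editor's note (not from the paper): a minimal C sketch of the hybrid MPI+OpenMP style the abstract refers to, assuming an MPI implementation and an OpenMP-capable compiler (compile with, e.g., mpicc -fopenmp). It shows coarse-grain MPI decomposition across processes with fine-grain OpenMP loop parallelization inside each process; names and parameters are illustrative.

        /* hybrid_sketch.c -- illustrative only, not code from the paper */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int provided, rank, size;
            /* Request FUNNELED: only the master thread makes MPI calls,
               matching loop-level OpenMP inside an MPI code. */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const long n = 1000000;          /* local slice per MPI process */
            double local_sum = 0.0, global_sum = 0.0;

            /* Fine-grain shared-memory parallelization of the local loop. */
            #pragma omp parallel for reduction(+:local_sum)
            for (long i = 0; i < n; i++) {
                double x = (rank * n + i) * 1e-6;
                local_sum += x * x;
            }

            /* Inter-process communication stays in MPI. */
            MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);

            if (rank == 0)
                printf("%d ranks x %d threads, sum = %f\n",
                       size, omp_get_max_threads(), global_sum);

            MPI_Finalize();
            return 0;
        }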
  • Keywords
    Central Processing Unit; Computer aided manufacturing; Concurrent computing; Manufacturing processes; Memory architecture; Message passing; Network interfaces; Parallel programming; Programming profession; Supercomputers;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    Supercomputing, ACM/IEEE 2000 Conference
  • ISSN
    1063-9535
  • Print_ISBN
    0-7803-9802-5
  • Type
    conf
  • DOI
    10.1109/SC.2000.10001
  • Filename
    1592725