DocumentCode :
1679427
Title :
Programming models for petascale to exascale
Author :
Yelick, Katherine
Author_Institution :
Electr. Eng. & Comput. Sci. Dept., Lawrence Berkeley Nat. Lab., Berkeley, CA
fYear :
2008
Firstpage :
1
Lastpage :
1
Abstract :
Multiple petascale systems will soon be available to the computational science community and will represent a variety of architectural models. These high-end systems, like all computing platforms, will have an increasing reliance on software-managed on-chip parallelism. These architectural trends bring into question the message-passing programming model that has dominated high-end programming for the past decade. In this talk I will describe some of the technology challenges that will drive the design of future systems and their implications for software tools, algorithm design, and application programming. In particular, I will show a need to consider models other than message passing as we move towards massive on-chip parallelism. I will talk about a class of partitioned global address space (PGAS) languages, which are an alternative to both message passing models like MPI and shared memory models like OpenMP. PGAS languages offer the possibility of a programming model that will work well across a wide range of shared memory, distributed memory, and hybrid platforms. Some of these languages, including UPC, CAF and Titanium, are based on a static model of parallelism, which gives programmers direct control over the underlying processor resources. The restricted nature of the static parallelism model in these languages has advantages in terms of implementation simplicity, analyzability, and performance transparency, but some applications demand a more dynamic execution model, similar to that of Charm++ or the recently developed HPCS languages (X10, Chapel, and Fortress). I will describe some of our experience working with both static and dynamically managed applications and some of the research challenges that I believe will be critical in developing viable programming techniques for future systems.
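To make the PGAS model mentioned above concrete, the following is a minimal sketch in UPC (Unified Parallel C), one of the languages named in the abstract; the array names, sizes, and the simple vector update are illustrative assumptions, not material from the talk itself:

#include <upc.h>
#include <stdio.h>

#define N 1024

/* One logical array per name, partitioned across all threads: any thread may
   read or write any element, but each element has an owning thread (affinity). */
shared double a[N*THREADS];
shared double b[N*THREADS];

int main(void) {
    int i;
    /* The affinity expression &a[i] runs each iteration on the thread that
       owns a[i], so every write in this loop is local to its thread. */
    upc_forall (i = 0; i < N*THREADS; i++; &a[i]) {
        a[i] = 2.0 * b[i];
    }
    upc_barrier;  /* synchronize before any thread reads another thread's elements */

    if (MYTHREAD == 0)
        printf("updated %d elements on %d threads\n", N*THREADS, THREADS);
    return 0;
}

In an MPI version of the same update, a and b would be explicit per-process arrays, and any access to data owned by another process would require matching send and receive calls; in the PGAS sketch that communication is expressed simply as a read or write of a shared array element.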
Keywords :
message passing; software architecture; software tools; exascale; message-passing programming model; multiple petascale systems; partitioned global address space languages; programming models; shared memory models; software-managed on-chip parallelism; Algorithm design and analysis; Concurrent computing; Electronics packaging; Message passing; Parallel processing; Partitioning algorithms; Petascale computing; Software algorithms; Software tools; System-on-a-chip;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Parallel and Distributed Processing, 2008. IPDPS 2008. IEEE International Symposium on
Conference_Location :
Miami, FL
ISSN :
1530-2075
Print_ISBN :
978-1-4244-1693-6
Electronic_ISBN :
1530-2075
Type :
conf
DOI :
10.1109/IPDPS.2008.4536090
Filename :
4536090