• DocumentCode
    3280689
  • Title
    Challenges to evaluating Petaflops systems
  • Author
    Sterling, Thomas
  • Author_Institution
    California Inst. of Technol., USA
  • fYear
    2005
  • fDate
    19-22 Sept. 2005
  • Firstpage
    166
  • Abstract
    Summary form only given. Even as the high performance computing community approaches 100 Teraflops Linpack performance, challenges to supercomputer hardware and software design may impede further progress and limit scalability and performance-to-cost. The assumed canonical methods of harnessing distributed resources are being severely stressed by the continued advances of Moore's law and system scaling, as well as by the complexities of emerging interdisciplinary applications. As we struggle into the Petaflops era, new models and metrics will be essential to guide all aspects of the evolution and application of future systems. A new generation of computer architectures, such as the Cray Cascade system, will employ their resources in potentially innovative ways quite different from today's prosaic commodity clusters (or MPPs). What those semantic and physical structures should look like, and how they should be employed, must be determined by aggressive application of a mix of modeling and evaluation techniques. While such methods have in almost all cases been explored, their use in the design and implementation of real-world systems is currently limited. This presentation discusses the challenges of evaluating future-generation Petaflops-scale systems and the kinds of questions that need to be answered but are usually not addressed in the early design cycle. Included for consideration are the baseline of optimality that should be used (today it is peak performance), measures of the impact of memory systems, including the concepts of temporal and spatial locality, cost functions for the normalization of observed capabilities, and the role of statistical parametric tradeoff studies. In addition, the presentation briefly examines issues related to user productivity and the impact of system characteristics on it. It concludes that among the most important trends in advanced high-end computing is the dramatic potential of quantitative evaluation of systems.
  • Keywords
    computer architecture; performance evaluation; Cray Cascade system; Moore's law; Petaflops system evaluation; cost function; distributed resource; memory system; prosaic commodity cluster; quantitative system evaluation; spatial locality; statistical parametric tradeoff study; system scaling; temporal locality; Application software; Computer architecture; Costs; Hardware; High performance computing; Moore's Law; Scalability; Software design; Supercomputers
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Second International Conference on the Quantitative Evaluation of Systems, 2005 (QEST 2005)
  • Print_ISBN
    0-7695-2427-3
  • Type
    conf
  • DOI
    10.1109/QEST.2005.7
  • Filename
    1595792