• DocumentCode
    892564
  • Title
    Speculative parallelization
  • Author
    Gonzalez-Escribano, A.; Llanos, D.R.
  • Author_Institution
    Departamento de Informatica, Valladolid Univ.
  • Volume
    39
  • Issue
    12
  • fYear
    2006
  • Firstpage
    126
  • Lastpage
    128
  • Abstract
    The most promising technique for automatically parallelizing loops when the system cannot determine dependences at compile time is speculative parallelization. Also called thread-level speculation, this technique assumes optimistically that the system can execute all iterations of a given loop in parallel. A hardware or software monitor divides the iterations into blocks and assigns them to different threads, one per processor, with no prior dependence analysis. If the system discovers a dependence violation at runtime, it stops the incorrectly computed work and restarts it with correct values. Of course, the more parallel the loop, the more benefits this technique delivers. To better understand how speculative parallelization works, it is necessary to distinguish between private and shared variables. Informally speaking, private variables are those that the program always modifies in each iteration before using them. On the other hand, values stored in shared variables are used in different iterations.
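    The private/shared distinction drawn in the abstract can be made concrete with a minimal sketch. The loop below is hypothetical (not taken from the article): t is private because every iteration writes it before reading it, sum is shared because its value carries across iterations, and the write through idx[i] is the kind of dependence that cannot be resolved at compile time, making the loop a candidate for thread-level speculation.

    /* Hypothetical loop, not from the article: illustrates private vs.
       shared variables in a candidate for speculative parallelization. */
    #include <stddef.h>

    void candidate_loop(size_t n, const double *a, double *b,
                        const size_t *idx, double *sum_out) {
        double sum = 0.0;                 /* shared: its value is used across iterations */
        for (size_t i = 0; i < n; i++) {
            double t = 2.0 * a[i];        /* private: written before any use in each iteration */
            sum += t;                     /* cross-iteration dependence on sum */
            b[idx[i]] = t;                /* idx[i] unknown at compile time: a possible
                                             dependence violation the TLS monitor must detect
                                             at runtime and repair by restarting the work */
        }
        *sum_out = sum;
    }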
  • Keywords
    multi-threading; parallelising compilers; program control structures; system monitoring; automatic loop parallelization; hardware monitor; program compiler; program dependence analysis; runtime dependence violation discovery; software monitor; speculative parallelization; thread-level speculation; Concurrent computing; Hardware; Monitoring; Performance loss; Runtime; How Things Work; Speculative parallelization
  • fLanguage
    English
  • Journal_Title
    Computer
  • Publisher
    IEEE
  • ISSN
    0018-9162
  • Type
    jour
  • DOI
    10.1109/MC.2006.441
  • Filename
    4039262