• DocumentCode
    146247
  • Title
    Collision array based workload assignment for Network-on-Chip concurrency
  • Author
    Zhou, He; Powers, Linda S.; Roveda, Janet M.
  • Author_Institution
    Dept. of Electr. & Comput. Eng., Univ. of Arizona, Tucson, AZ, USA
  • fYear
    2014
  • fDate
    2-5 Sept. 2014
  • Firstpage
    188
  • Lastpage
    191
  • Abstract
    To improve Network-on-Chip (NoC) parallelism, this paper proposes a new collision-array-based workload assignment that increases data request cancellation. Through a task flow partitioning algorithm, we minimize sequential data access and then dynamically schedule tasks while minimizing router execution time. Experimental results show that this method provides an average system throughput improvement of 87.7% and a router execution time reduction of 41.4%. This throughput improvement is a direct consequence of the collision array. A 7x improvement was reported in Fig. 7 of [10] when 32 threads are employed on a single core. The system achieves a speedup of 2.7 times. By investigating the performance-overhead tradeoff across different collision array sizes, we show a maximum saving of 42.9% in energy and area overhead at a cost of only 23.6% performance degradation in terms of router execution time.
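  • Illustrative_Sketch
    A minimal, hypothetical sketch of the collision-array idea summarized in the abstract: each router keeps a fixed-size array of slots recording in-flight data requests, a request that hits an occupied slot is cancelled, and its task is reassigned to the least-loaded router. The class name, slot-indexing rule, and rebalancing policy below are illustrative assumptions, not the paper's implementation.

      from collections import defaultdict

      class CollisionArrayScheduler:
          """Hypothetical collision-array scheduler (sketch, not the paper's design)."""

          def __init__(self, num_routers, array_size):
              self.num_routers = num_routers
              self.array_size = array_size          # slots per router (collision array size)
              self.occupied = defaultdict(set)      # router id -> occupied slot indices
              self.load = [0] * num_routers         # tasks currently assigned to each router

          def assign(self, task_id, router, data_addr):
              slot = data_addr % self.array_size    # slot targeted by this data request
              if slot in self.occupied[router]:     # collision: cancel the request on this router
                  # Reassign the task to the least-loaded router; for simplicity the
                  # sketch assumes the corresponding slot is free there.
                  router = min(range(self.num_routers), key=self.load.__getitem__)
              self.occupied[router].add(slot)
              self.load[router] += 1
              return task_id, router, slot

          def complete(self, router, slot):
              self.occupied[router].discard(slot)   # free the slot when the request finishes
              self.load[router] = max(0, self.load[router] - 1)

      # Two tasks touching the same address collide on router 0; the second request
      # is cancelled there and its task is redirected to the least-loaded router.
      sched = CollisionArrayScheduler(num_routers=4, array_size=8)
      print(sched.assign("t1", 0, 0x1000))   # ('t1', 0, 0)
      print(sched.assign("t2", 0, 0x1000))   # ('t2', 1, 0)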
  • Keywords
    logic partitioning; network-on-chip; parallel architectures; NoC parallelism; collision array; data request cancellation; network-on-chip concurrency; router execution time reduction; sequential data access; task flow partitioning algorithm; workload assignment; Arrays; Dynamic scheduling; Multicore processing; System-on-chip; TV; Throughput; Network-on-Chip system; collision array; parallelism; workload assignment
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2014 27th IEEE International System-on-Chip Conference (SOCC)
  • Conference_Location
    Las Vegas, NV
  • Type
    conf
  • DOI
    10.1109/SOCC.2014.6948924
  • Filename
    6948924