Authors:
Bauer, G. ; Boyer, V. ; Branson, J. ; Brett, A. ; Cano, E. ; Carboni, A. ; Ciganek, M. ; Cittolin, S. ; Erhan, S. ; Gigi, D. ; Glege, F. ; Gomez Reino, R. ; Gulmini, M. ; Gutierrez Mlot, E. ; Gutleber, J. ; Jacobs, C. ; Kim, J.C. ; Klute, M. ; Lipeles, E.
Abstract:
The data acquisition system of the CMS experiment at the Large Hadron Collider features a two-stage event builder, which combines data from about 500 sources into full events at an aggregate throughput of 100 GByte/s. To meet these requirements, several architectures and interconnect technologies have been evaluated quantitatively. Both Gigabit Ethernet and Myrinet networks will be employed during the first run. Nearly full bisection throughput can be obtained over Myrinet using a custom software driver that implements barrel-shifter traffic shaping. This paper discusses the use of Myrinet dual-port network interface cards with channel bonding to form virtual 5 Gbit/s links, combined with adaptive routing to alleviate the throughput limitations associated with wormhole routing. Adaptive routing is not expected to be suitable for high-throughput event-builder applications in high-energy physics. To corroborate this claim, results from the CMS event builder pre-series installation at CERN are presented, and the problems inherent in wormhole-routed networks are discussed.
Keywords:
adaptive systems; data acquisition; high energy physics instrumentation computing; local area networks; CERN; Ethernet; LHC; Large Hadron Collider; Myrinet dual-port network interface cards; adaptive wormhole routing; barrel shifter traffic shaping; byte rate 100 GByte/s; custom software driver; data acquisition system; event builder networks; virtual links; Aggregates; Collision mitigation; Computer architecture; Ethernet networks; Network interfaces; Routing; Telecommunication traffic; Throughput