DocumentCode :
3588480
Title :
The new CMS DAQ system for run-2 of the LHC
Author :
Bawej, Tomasz ; Behrens, Ulf ; Branson, James ; Chaze, Olivier ; Cittolin, Sergio ; Darlea, Georgiana-Lavinia ; Deldicque, Christian ; Dobson, Marc ; Dupont, Aymeric ; Erhan, Samim ; Forrest, Andrew ; Gigi, Dominique ; Glege, Frank ; Gomez-Ceballos, Guill
Author_Institution :
CERN, Geneva, Switzerland
fYear :
2014
Firstpage :
1
Lastpage :
1
Abstract :
Summary form only given. The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP stack in FPGA for reliable transport between custom electronics and commercial computing hardware. A 56 Gb/s InfiniBand FDR Clos network has been chosen for the event builder, with a throughput of ~4 Tb/s. The HLT processing is entirely file-based. This allows the DAQ and HLT systems to be independent, and the HLT software to be used in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collections of HLT-accepted events and monitoring metadata are stored in a global file system. This paper presents the requirements, technical choices, and performance of the new system.
Keywords :
data acquisition; field programmable gate arrays; local area networks; particle accelerators; physics computing; telecommunication network reliability; transport protocols; μTCA implementation; CERN Large Hadron Collider; CMS DAQ system; Ethernet technologies; FPGA; InfiniBand FDR Clos network; LHC; LHC luminosities; commercial computing hardware; compute nodes; custom electronics; data acquisition system; data concentration; event data transport; event pileup; global file system; high-level trigger farm; increased readout channel number; network file systems; reduced TCP/IP; storage infrastructure; throughput aggregation; Aggregates; Computer architecture; Data acquisition; File systems; Large Hadron Collider; Readout electronics; Throughput
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Real Time Conference (RT), 2014 19th IEEE-NPSS
Print_ISBN :
978-1-4799-3658-8
Type :
conf
DOI :
10.1109/RTC.2014.7097437
Filename :
7097437