Author/Authors :
Jaroslav Antos, M. Breitung, Tony Chan, Paoti Chang, Delia Yen-Chu Chen, Troy Dawson, Jim Fromm, Lisa Giacchetti, Tanya Levshina, Igor Mandrichenko, Ray Pasetes, Marilyn Schweitzer, Karen Shepelak, Miroslav Siket, Dane Skow, Stephen Wolbers, G.P. Yeh, Ping Yeh
Abstract :
The high-energy physics experiment CDF, located at the antiproton–proton collider at Fermilab, will write data in Run 2 at a rate of 20 MByte/s, twenty times the rate of Run 1. The offline production system must be able to handle this rate. Components of that system include a large PC farm, I/O systems to read and write data to and from mass storage, and a system to split the reconstructed data into the physics streams required for analysis. All of the components must work together seamlessly to ensure the necessary throughput. A description is given of the overall hardware and software design of the system. A small prototype farm has been used for about one year to study performance, to test software designs, and for the first Mock Data Challenge. Results from these tests and experience from the first Mock Data Challenge are discussed. The hardware for the first production farm is in place and will be used for the second Mock Data Challenge. Finally, the possible scaling of the system to handle the larger rates foreseen later in Run 2 is described.