21st International Conference on Computing in High Energy and Nuclear Physics (CHEP), Okinawa, Japan, 13 - 17 April 2015, vol. 664
During the first LHC run, the CMS experiment collected tens of petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner because of a small fraction of stuck files that require operator intervention.