They use all of it
because if they didn't, someone would notice and they'd never get the funding for the next round of upgrades.
The optimal state for the modern physicist is always having just a bit more data than there is time to analyse.
CERN today unveiled the upgraded grid that will support the Large Hadron Collider (LHC) when the titanic particle-punisher finally kicks back into life. Sverre Jarp, CTO at CERN OpenLab, which supports the LHC buried beneath the Franco-Swiss border outside Geneva, described the network, powered by Intel Xeons, as …
For a while, the Alpha project was running its own scheduler on the machines. The CERN central grid people thought Alpha was using a lot of CPU time, but really they were just block-booking it for when they needed it. Why? The central scheduler was too unresponsive, and since they weren't being billed for CPU time, why not block-book the cores?
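In effect that's the pilot-job trick. Here is a minimal, runnable sketch of the idea in Python, with multiprocessing standing in for grid nodes; the pilot/analyse names and the four-core block are illustrative assumptions on my part, not ALPHA's actual code:

    # Sketch of block booking: hold the cores with idle "pilot" workers
    # booked up front, then feed them work through your own local scheduler.
    import multiprocessing as mp
    import time

    def analyse(run_id):
        print(f"analysing run {run_id}")
        time.sleep(0.1)               # stand-in for real event processing

    def pilot(task_queue):
        """Hold one core: idle until the local scheduler hands over work."""
        while True:
            task = task_queue.get()   # blocks; the core stays booked meanwhile
            if task is None:          # sentinel: release the core
                return
            analyse(task)

    if __name__ == "__main__":
        queue = mp.Queue()
        # "Block book" four cores now, before there is any work for them.
        pilots = [mp.Process(target=pilot, args=(queue,)) for _ in range(4)]
        for p in pilots:
            p.start()
        # When data shows up, dispatch instantly -- no round trip through a
        # slow central scheduler, because the cores are already ours.
        for run_id in range(8):
            queue.put(run_id)
        for _ in pilots:
            queue.put(None)           # tell each pilot to let go of its core
        for p in pilots:
            p.join()

The point is the blocking get(): from the central scheduler's view the cores look busy the whole time, but they actually sit idle until the group's own scheduler has something for them.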
"We are the only ones [here at CERN] who have profited from the delays [to the LHC]," he jokes.
Not even remotely true. I know (admittedly second-hand) that more than one group has spent the last year honing their machines.
[nitpicking]
While CERN is fairly close to the Alps, it's even closer to the Jura, so it would be more accurately described as 'Jurassic' than 'Alpine'.
[/nitpicking]
Possibly their reasoning for not doing so ran along these lines (toy arithmetic after the list):
probability that something goes wrong enough to destroy the computer center 1 km away: 1e-3*
probability that something goes just wrong enough to destroy the computer center 1 km away but not a second one 6,000 km away (through the Earth): 1e-30*
cost of an additional computer center: a factor of 2
(cost of a transmission line able to transmit all the experiment data ahead of the wave of destruction: huge?)
summary: nope
*numbers are obviously made up
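For what it's worth, the same argument as a toy expected-value calculation in Python, plugging in the made-up numbers from the list above (none of them are real CERN figures):

    # Toy expected-value version of the back-of-the-envelope above.
    p_only_local_destroyed = 1e-30  # local center destroyed, remote survives
    value_saved = 1.0               # normalised value of what the backup rescues
    extra_cost = 1.0                # a second center doubles the bill: +1 unit

    expected_saving = p_only_local_destroyed * value_saved
    print(f"expected saving from backup: {expected_saving:.1e}")  # 1.0e-30
    print(f"extra cost of backup:        {extra_cost:.1e}")       # 1.0e+00
    print("worth building it?", expected_saving > extra_cost)     # False: nope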