The future holds no comfort for data centers aching over energy consumption and floor space, predicts Gartner. By 2011, more than 70 per cent of US enterprise data centers will face "significant disruptions" as these mounting woes come to a head, the technology research firm says. "CIOs of large US organizations must …
Has anybody asked why?
Has anybody taken the time to ask why data centers are outgrowing their limitations? The direct answer is "more servers, and an increase in power demand". But has anybody stopped to ask why that additional power is needed (or, indeed, whether it really is needed)? I'm not arguing either way, but it might be a good idea to take a step back and look at why data centers need to be filled to such capacity.
I have no doubt that if the software vendors (especially Microsoft) have their way, data centers will need much more power in the future. Software as a service (or "software on demand", or whatever it's called this week) takes the users' apps and runs them on servers in the data center. Effectively, that takes (part of) the processing power required on the user's computer and transfers it to the data center, so the data center ends up consuming more power. Remote computing is, of course, the logical end-game of SaaS: thin clients for everyone, all connected to huge data centers. Of course, when something happens to the data center, imagine how many people will be affected. And imagine the power and bandwidth required by the data center. These issues may very well be a good reason to keep your own computer and ignore the SaaS idea entirely. It's not altogether a bad idea in theory, but in practice, on a large scale, it seems like a disaster waiting to happen. And you thought California had power crunches now...
It's the move to pizza boxes (aka blade servers). They're small, powerful and cheap to buy. Unfortunately they're little more than thin PCs, and lots of them go into a standard rack. But like PCs they use a lot of energy and give off much of this in heat....
Where an old-style mainframe will run continuously without a hiccup under a 100% workload, the use of *nix on these machines, and their low cost, means that there's huge redundancy in each box and thus much wasted power. Many run at workloads that are a fraction of their capacity, but burn electricity all the same.
Moves by AMD and Intel are starting to reduce power consumption, but that doesn't get away from the ever-increasing energy consumption per box/rack as clock speeds and memory are increased. Single cores give way to duals/quads...
And the companies need more and more computing power.
Meter the racks
Perhaps colo providers should stick a meter on each rack? This would then drive the market to deliver improved power efficiency in the hardware. At the moment, our colo providers only provide an upper limit on power draw per rack (and probably don't measure it).
As far as I know, you can get meters that are not in-line and therefore not a point of failure.
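As a rough illustration of what per-rack metering might buy, here's a minimal billing sketch. The wattages, server counts and the $0.10/kWh rate are made-up assumptions for illustration, not real colo figures:

```python
# Hypothetical per-rack metering: estimate a monthly power bill from a
# measured average draw. All rates and draw figures are illustrative
# assumptions, not real colo pricing.

HOURS_PER_MONTH = 24 * 30  # roughly 720 hours

def monthly_kwh(avg_draw_watts: float) -> float:
    """Energy used in a month at a given average draw, in kWh."""
    return avg_draw_watts / 1000 * HOURS_PER_MONTH

def monthly_cost(avg_draw_watts: float, rate_per_kwh: float) -> float:
    """Metered cost for the month at a flat per-kWh rate."""
    return monthly_kwh(avg_draw_watts) * rate_per_kwh

# A rack of forty mostly-idle 1U servers at 200 W each, versus the same
# work consolidated onto ten busy servers at 350 W each.
sprawl = monthly_cost(40 * 200, 0.10)        # 8 kW average draw
consolidated = monthly_cost(10 * 350, 0.10)  # 3.5 kW average draw

print(f"Idle sprawl:  ${sprawl:,.2f}/month")
print(f"Consolidated: ${consolidated:,.2f}/month")
```

With the meter in place the difference shows up directly on the invoice, which is exactly the market pressure the comment is suggesting.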
Conservation of energy
"But like pcs they use a lot of energy and give off much of this in heat...."
No. ALL of the energy they use turns into heat. Where would it go otherwise?
Do servers get slightly heavier as they store data? :-)
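The "all of it turns into heat" point has a direct consequence for the A/C plant: every watt a rack draws must also be pumped back out by the cooling system (1 W ≈ 3.412 BTU/hr). A quick sketch, using an assumed 12 kW rack as the example:

```python
# Every watt drawn by IT equipment ends up as heat the cooling plant
# must remove. Conversion factor: 1 W is about 3.412 BTU/hr.
WATTS_TO_BTU_HR = 3.412

def cooling_load_btu_hr(rack_draw_watts: float) -> float:
    """Heat the A/C must remove for a rack at the given draw."""
    return rack_draw_watts * WATTS_TO_BTU_HR

# An assumed fully loaded 12 kW rack for illustration.
print(f"{cooling_load_btu_hr(12000):,.0f} BTU/hr")
```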
Virtualisation of storage and processor resources will start to ensure that these multiple small servers are used at closer to 100% capacity. Certainly in our racks, anyway.
But why more?
re: "Pizza boxes":
I understand that the slimming down of systems to 1U and 2U means more servers fit into a rack, thus a higher power draw per rack. But the question I'm asking is why do companies need more and more computing power? As you mentioned, most of the capacity in these systems is just wasted. And while component manufacturers are making their products less power-hungry, it will always consume more power to run two systems at idle (or even at 50% utilization) than one system at 100% utilization.
As an example, we've all seen the effects of upgrading Windows. Windows Vista appears to run just as slowly (if not more slowly) on my dual-core 2GHz system with 2GB of memory as Windows 3.11 did on my 486SX/25 with 4MB of memory. I don't want to start another optimization debate here, but the complexity and/or bloat of modern software is what is causing the computing (and hence power) requirements to increase. So while many people say never to optimize, you (or at least I) have to wonder -- how much would computing power demand be reduced by optimizing the code of the most popular software packages? It may not be a big saving on a per-system basis (though I suspect it would be), but I would bet good money that the aggregate would be substantial. And in a data center, that could make a huge difference -- which would, in turn, make a huge difference in the power draw for the A/C.
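The claim that two half-loaded systems draw more than one fully loaded one follows from idle power being a large fixed cost. A toy linear power model makes it concrete; the idle and peak wattages here are assumptions for illustration, not measurements:

```python
# Toy power model: a server draws a fixed idle wattage plus a linear
# term that scales with utilisation. Figures are illustrative only.
IDLE_W = 150.0  # assumed draw while doing nothing
PEAK_W = 300.0  # assumed draw at 100% utilisation

def draw_watts(utilisation: float) -> float:
    """Power draw at a given utilisation (0.0 to 1.0)."""
    return IDLE_W + (PEAK_W - IDLE_W) * utilisation

two_at_half = 2 * draw_watts(0.5)  # 2 x 225 W = 450 W
one_at_full = draw_watts(1.0)      # 300 W

print(f"Two servers at 50%: {two_at_half:.0f} W")
print(f"One server at 100%: {one_at_full:.0f} W")
```

Under any model where idle draw is a substantial fraction of peak draw, consolidation wins, which is the commenter's point about wasted capacity.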
Re: Conservation of Energy
Apart from those little lights on the front, which give out...well, light.
And do 1s weigh more than 0s? If so, and if there are more 1s stored than 0s on a typical drive, we could reverse binary notation and reduce the foundation weight spec for data centre floors. There are so many ways to cut energy usage ;-)