Penny wise and pound foolish: Server hoarders are energy wasters

This summer was particularly bad for western Canada, where I live. Electricity costs were soaring and the datacenter air conditioners were going 24 hours a day. There has to be a way to be more efficient. My power bill at home is $250 most months, and that's with over half the equipment in my lab out on customer sites for testing …

  1. drexciya
    Thumb Up

    Good example to show to management

    This article should be required reading for a lot of managers. The costs of power and cooling are high relative to the cost of the hardware itself, something a lot of less technical people don't understand, but it's clearly laid out in your example. Also, I already knew that the cloud isn't that cheap when running legacy loads (I even know of companies moving workloads back), but a load of managers are blinded by all the cloud hype.

    1. dan1980

      Re: Good example to show to management

      I agree, it's a good article, but the missing caveat is, of course, that this equation can vary hugely depending on your location. Here in Australia, our dollar is toast right now, and even when it isn't, servers are VASTLY more expensive than they are in North America.

      Now, I realise Trevor is dealing with Supermicros here, but for ease of comparison I'll look at Dell.

      An R730 spec'd with an E5-2630L (a 55W, 8-core part) and 128 GB of RAM is ~$4,400 USD. In Australia? $8,600 AUD. That's with the cheapest HDD option (1 x 300GB SATA) to avoid price blow-outs, as the drive options on the Dell AU site are ridiculous.

    2. Voland's right hand Silver badge

      Re: Good example to show to management

      In more than one way.

      The article misses one fairly important root cause of the atrocious power bill: power costs are significantly higher in a classic VM-based virtualised environment than on bare metal or in containers. Older VMware versions were not particularly good at using the CPU frequency governor. While the newer ones are better, the power cost of virtualised IO compared to native IO is very significant. You trade DMA, which costs virtually nothing, for moving data across a VM boundary using the CPU, which costs a hell of a lot power-wise. This is the case even in zero-copy scenarios, because you still have to update page tables, TLBs, etc. in the CPU - a very costly operation.

      This is one part of the VM TCO that everybody keeps forgetting about when they do the "consolidation" math, and it is quite significant.

      So if you are running a lab, fine - you have no choice, and you have to suffer the power inefficiency of classic VMs. The rollback and "kill and start from scratch" facilities you get with VMs justify the cost.

      But running a predominantly non-Windows production environment on VMs and VMware in this day and age is a "crime against the environment" offence. Pack all of it into containers (or, if you feel fashionable, Docker instances). If you are IO-heavy, your power and cooling bill will drop by half (at least).

  2. Bob H

    Do the article mash!

    The consolidation article seems to have had an Azure rant stuffed into the middle of it; it's rather an ungainly mating of two articles. I like the idea of a critique of cloud hosting in relation to consolidation, and I like the idea of a piece on consolidating legacy systems, but the way the two are mashed up here is frankly weird.

    1. Bronek Kozicki

      Re: Do the article mash!

      Repeat after me: cost of ownership. Now do you see how they are all connected?

  3. TRT Silver badge

    Sooooo... someone, somewhere

    at Amazon and at Microsoft etc. has done the maths and come up with a price that includes all the cooling and racks and datacenter vajazzle, then added a bit (because they're a business, and profits etc.), and they're coming up with a figure higher than the DIY route. So scaling up isn't reducing costs anywhere? I mean, I'm in the market for an archiving / online storage solution for scientific data garnered from labs full of microscopes and sequencers, and frankly the University's datacenter costs are a good deal higher than the DIY route, and I'm finding the same with the cloud providers too. Is there not a sensible solution?

    1. Destroy All Monsters Silver badge
      Holmes

      Re: Sooooo... someone, somewhere

      Maybe something in the DIY route has fallen under the table? What could it be....

    2. CanadianMacFan

      Re: Sooooo... someone, somewhere

      Well, the author is accounting for a lot of the other costs that you would encounter. You would need space for the equipment. Something to install everything in (a rack?). Switches, storage, UPS, cables, power bars, cooling, and other stuff that I've forgotten. Then there are the networking costs (which might not apply in this case). And you have to account for the time spent setting it all up and administering everything. You will have to learn all of the technologies or pay someone to do it for you.

      And then you have to think about what kind of availability you want. If something major happens in the middle of the night or at the weekend, is it okay for it to be down, or does someone need to go in and fix it? If so, you need to add on-call costs.

      1. Anonymous Coward
        Anonymous Coward

        Re: Sooooo... someone, somewhere

        Plus the cost of a hardware maintenance contract (or, if you don't have one, factor in a stash of "unused hardware" in the storeroom). Maybe also some guy from G4S who walks his dog in front of the server room. Maybe software licences (VMware? Anything else?). There may also be infrastructure and administrative costs that you are leeching off other departments, who "forget" to bill you for them. Any insurance needed? Need off-site backup? Your own work time? It's difficult...

        1. dan1980

          Re: Sooooo... someone, somewhere

          Exactly - there's more to hosting your own servers than just buying them and paying the running costs. Trevor is talking about taking an existing solution and migrating it to the cloud to save money, and how that equation works out. In that scenario, it can be assumed that the servers are already assembled, racked, configured and working, and that there is sufficient knowledge available to maintain them.

  4. Chez

    Hey, if the government wants to subsidize an upgrade of my home lab for the sake of being green, I'll take it. But for now, my free (salvaged) PowerEdge 1950 eating lots of power is a sight better than spending thousands on a modern replacement for something I technically don't even NEED to run.

    1. Bronek Kozicki

      Hey, if you need a heater in the room then it makes sense. Otherwise, probably not so much - but it's your money!

  5. Steve Davies 3 Silver badge

    Ever thought about different cooling?

    A friend of mine has a ground-source heat pump. It cools in summer and heats in winter. Combined with a large PV array on his roof, he makes money from leccy generation - and yes, he runs a small business from his outbuildings.

  6. ckm5

    Azure hosting costs off the charts....

    Is it just me, or do the Azure hosting costs seem off the charts? We are currently looking at Google's cloud hosting, and an n1-standard-1 (1 vCPU, 3.75 GB RAM, no storage) is $25/mo. This is roughly in line with AWS pricing (we've compared recently).

    I don't understand how MSFT can justify $70/mo - that's almost the same price as a dedicated box in a traditional facility....

  7. Fazal Majid

    The reason why some businesses are still running P4s

    Is the cost and risk of testing (integrating, really) whatever half-understood legacy app is running on them against whatever newer version of the OS is compatible with a newer server. P2V only goes so far. That's also why VMware gets to charge extortionate licence fees - they are really in the business of managing DLL hell for legacy environments. But a truly ancient OS, like whatever is running on those P4s, might not run in a VM.

    1. Paul Crawford Silver badge

      Re: The reason why some businesses are still running P4s

      Our experience with P2V converters is mixed: my old home machine (W2k) worked perfectly after I did a bit of file system rearranging and had enough external HDD space to direct the output to. Another machine failed, but that might have been due to odd/legacy drivers that were not uninstalled first.

      I suspect old systems are fine with VM emulation, so long as you go for a low enough starting point. Also, you can try/wipe/try again with greater ease. Overall I have been really impressed by VMware Player as a tool for preserving old, flaky Windows software and set-ups.

  8. Anonymous Coward
    Anonymous Coward

    I recently did the same sort of calculation for my home setup. It made even less sense for me to move into the cloud. What really struck me, though, was the price difference between Windows and Linux servers in the cloud. Presumably the two are running on exactly the same hardware, but the Windows server was about $60/month more expensive for the lower-spec server. On the desktop I can kind of understand the dominance of Windows; on the server side, though, I really can't believe it. Presumably your servers are run by trained people who aren't scared of a command line, etc. $60/month/server (that's $720 a year, per server) soon adds up to a lot of money for looking at a pretty desktop.

    1. Adam 52 Silver badge

      Look at those costs again for something like RHEL...

      1. Steve Davies 3 Silver badge

        Or...

        CentOS at $0/month

        1. John Sanders
          Linux

          Re: Or...

          Or Debian... or Ubuntu...etc.

    2. Paul Crawford Silver badge

      Yes, Windows as such is more expensive. But the overall cost comes down to what you need to run and how much effort you are willing, or able, to put into it.

      Most El Reg readers who don't have much legacy Windows stuff will be happy to run various servers of all sorts to do the job, and much more cheaply than cloud solutions. Others do need Windows, and maybe that is the cheapest solution for them, whether on local hardware or cloud provisioning.

      On the other hand, a lot of SMBs have no real tech support, and the cloud suits them in both style and cost. Think web email, or document collaboration with Google Docs or Office 365.

      And then we get on to data sovereignty, and what happens if you decide not to pay supplier #1 any more...

  9. gibbleth

    Similar math led to a new G34 Opteron for me

    I am a long-time AMD user and am too old to figure out the Intel stuff if I can avoid it. I did, however, do some back-of-envelope maths and determined that replacing my VM system (a 16-core, four-socket Socket F Shanghai) with a dual G34 setup (two sockets, 8 cores, 3.5GHz) would save me enough to pay for the used parts off eBay in around two years. Electricity is a bit more expensive here in Texas, around ten cents a kWh. The old system pulled an average (yes, average) of about 400 watts. The new one is around 200 watts. That works out (using the "one watt for the server, one watt for AC" rule) to $350 per year in savings. I paid $200 for the motherboard, $100 for the pair of CPUs, $50 or so for the coolers, and around $350 worth of RAM. The gravy, of course, is that the new system is actually over twice as fast per thread as the old one, more than making up for losing half the cores.
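
    For anyone who wants to check the arithmetic, here's a minimal Python sketch of that payback calculation, using only the figures quoted above and the "one watt for the server, one watt for AC" rule of thumb:

    ```python
    # Back-of-envelope payback on the G34 upgrade, using the figures above
    HOURS_PER_YEAR = 24 * 365  # 8760

    watts_saved = 400 - 200            # old average draw minus new average draw
    effective_watts = watts_saved * 2  # "one watt for the server, one watt for AC"
    kwh_per_year = effective_watts * HOURS_PER_YEAR / 1000
    annual_saving = kwh_per_year * 0.10  # ~$0.10/kWh in Texas

    parts_cost = 200 + 100 + 50 + 350  # board + CPUs + coolers + RAM
    print(f"${annual_saving:.0f}/yr saved; payback in {parts_cost / annual_saving:.1f} years")
    # -> $350/yr saved; payback in 2.0 years
    ```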

  10. DCLXV
    Devil

    It's truly a pleasure to rent in a building where utilities are included in the flat cost of rent and the dumpster out back overfloweth with old, power-hungry PCs begging to be reused. Let my armies be the landlord, the property management company, and the good neighbours who junk Socket 478 rigs.

    Now if you'll excuse me, I need to call the retentions department of my ISP and discuss how much I'm paying for this service...

  11. Anonymous Coward
    Anonymous Coward

    5.832 cents/kWh

    Puh. On the east coast of the cold colony it's about 15 cents/kWh. And there's tax on top of that. Our home has more tech than most small offices, by far. Hundreds of gadgets plugged in, usually on standby. Luckily, eight months of the year is heating season. So waste heat displaces electric heat, making it 'free'.

    1. storner

      How about 30 cents/kWh

      which is what we crazy Danes pay, because converting to "green energy" requires massive subsidies for putting up windmills.

      1. Jesper Frimann

        Re: How about 30 cents/kWh

        Actually, the cheapest way of generating electricity in DK is windmills. The problem is that the electricity is only there when the wind blows, so you have to pay for traditional power plants as well, because the energy grid is not up to the task of routing the electricity to where it is needed.

        So when the wind really, really picks up in the western part of Denmark... the windmills put on the brakes, because the grid can't get rid of the power.

        But 75% of your electricity bill is tax anyway.

        // Jesper

  12. Tom 64

    Pentium 4 using 500W?

    ... don't think so. A dual-socket box might consume that much at the wall, but that would have been Xeon-branded even then. The CPUs themselves never ate that much power. Still crap, mind you.

    http://ark.intel.com/products/family/78132/Legacy-Intel-Pentium-Processor#@All

  13. Aslan

    I'd be happy to see the article 3X the length

    I'd be happy to see an article 3x as long as this. I think there was only one article I started and didn't finish in the last 12 months on The Register. I'd love to see someone grab benchmark and power data from a tech website and give us some average numbers per server generation - there are tech sites now with over 10 years' worth of data - and perhaps even an adjustable calculator.

    One grows fond of old personal equipment. It was about 4 years ago that I turned off my dual-socket Pentium 4 Xeon 3.4GHz system: 12GB of memory, 2x 75GB hard drives in RAID 1 and 4x 250GB hard drives in hardware RAID 5. A wonderful system, but I ended up replacing it because of the crazy power bills. The heat coming off it was enough to make a chilly but well-insulated 850 sq ft basement a little too warm, such that the door had to be kept open to let the heat into the rest of the house. When I turned the server off, the change in temperature surprised the other family members. I really should get rid of it, but I put 2x 4TB hard drives in it, and every few months I turn it on to store backup copies of other systems, then disconnect it.

  14. Henry Wertz 1 Gold badge

    I had the same problem

    I had the same problem, but was finally saved by power supply and motherboard failures over the last year or so (I was using desktops, not servers). A P4, a second P4, and an Athlon XP 3400+? Ugh, did they ever suck down power. Luckily, the University upgrades a bunch of machines on 3-5 year cycles, so I can buy nice systems for like $50.

  15. -tim

    At $0.10/kWh, a continuous load in watts is about the same as its yearly cost in dollars: 100W ≈ $100/yr. Most places have power costing from about $0.05 to about $0.20 per kWh, so halve or double as appropriate.
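
    The rule of thumb checks out within rounding (it's the cooling overhead that closes the gap). A quick sketch, assuming a flat $0.10/kWh rate:

    ```python
    # Sanity-check the "watts = dollars per year" rule of thumb
    HOURS_PER_YEAR = 24 * 365  # 8760
    RATE = 0.10                # $/kWh

    for watts in (50, 100, 200):
        dollars = watts / 1000 * HOURS_PER_YEAR * RATE
        print(f"{watts}W continuous -> ${dollars:.0f}/yr")
    # 50W -> $44/yr, 100W -> $88/yr, 200W -> $175/yr
    # close enough to "watts = dollars" once cooling overhead is added on top
    ```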

    Modern A/C systems are more efficient. A cheap 2.5 kW split system can now move 400 watts of heat out of a room using 100 watts of electricity, though larger systems are less efficient.

    For idle systems, the RAM may be the largest power cost. Spinning drives and graphics cards can also eat up loads of power.

  16. GlenP Silver badge

    I think I need to keep a copy of this for two years' time, when I'll need to convince the bean counters that the (by then) five-year-old servers do need replacing! We did the consolidation last time, from several varied physical servers down to a couple of virty boxes, partly justified on the basis of a premises move and a much smaller computer room.

    Many SMBs like ours are largely stuck with Windows as that's what the ERP software requires. If we're going to be running SQL Server, IIS, etc. we might as well run everything on MS.

    1. Tom Womack

      I'm not sure it's true that 2012 boxes will need replacing as soon as 2017; 2012 is late enough that Intel had decent power-saving-on-idle implemented, and it's after the death-of-Moore's-Law point, which means new processors are not significantly better than old ones except on vectorised HPC apps.

      If you can consolidate down to one Xeon-D in a breadbox, it's probably worth doing that; but replacing one old computer with one new computer costs at least £600 and saves at most £100 of electricity a year - a best-case payback of six years.

  17. Anonymous Coward
    Anonymous Coward

    Pentium 4s really did suck, didn't they?

    I remember how bad the P3 was; that really did suck.

    1. Tom 38
      FAIL

      Re: Pentium 4s really did suck, didn't they?

      You've got a bad memory, because the P3 was fucking awesome - so awesome that after the P4 was revealed for the POS it was, they went back to the Tualatin core used in the Pentium III-M, re-engineered it, and came up with the Core microarchitecture we are still using today: low power, high speed and super clockable.

      Even when the P4 came out, enthusiasts with an eye for bang/buck would buy P3-M processors, whack them into a desktop motherboard and overclock them by 50-80%.

      1. Anonymous Coward
        Anonymous Coward

        Re: Pentium 4s really did suck, didn't they?

        The late P3s, maybe, but remember the Slot 1 bodge?

        https://en.wikipedia.org/wiki/Pentium_III#/media/File:Intel_Pentium_III_Katmai.jpg

    2. MyffyW Silver badge

      Re: Pentium 4s really did suck, didn't they?

      When I was a neophyte sysadmin, we were still wary of those new-fangled Pentiums with their flawed floating-point division.

      I have fond memories of the 486DX-66 which ran our file server and cc:Mail.

      Skips off into the distance humming a Sleeper ditty.

  18. Anonymous Coward
    Anonymous Coward

    Newer ATX power supplies make a difference: my old PC went from 80 to about 40 watts idle.

    If you pay 22-23 cents per kWh, it helps to walk around with a watt meter and write everything down.

    One watt running 24/7 is about €2 a year, so a little investment can pay for itself within a year.
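
    Those numbers hold up; a quick sketch, assuming a rate of €0.225/kWh:

    ```python
    # Check "1 watt 24/7 is about 2 euro a year" at an assumed 22.5 cents/kWh
    HOURS_PER_YEAR = 24 * 365   # 8760
    RATE = 0.225                # EUR per kWh

    per_watt = 1 / 1000 * HOURS_PER_YEAR * RATE
    print(f"1W continuous: EUR {per_watt:.2f}/yr")    # -> EUR 1.97/yr

    # The PSU swap above, 80W -> 40W at idle:
    psu_saving = 40 / 1000 * HOURS_PER_YEAR * RATE
    print(f"40W saved: EUR {psu_saving:.0f}/yr")      # -> EUR 79/yr
    ```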

  19. Anonymous Coward
    Anonymous Coward

    Old hardware

    In terms of efficiency I can indeed vouch for P4s being power vampires.

    It's interesting to note that although SSDs are very fast, they do use a lot of power - especially older ones, where all the chips are powered at once to make all the memory available without latency.

    The chipset is also a known power hog; on small SSDs (<128GB) it can use almost as much energy as the entire flash array, as the regulators live in this part.

    I did a few experiments comparing heat production versus efficiency, and even a laptop from 5 years ago runs a lot hotter and has a lower IOPS/W rating.

    The point at which replacing hardware becomes viable depends on cost. If a business can afford to do so, selling its now-obsolete servers to smaller companies for a pittance makes good sense, as it helps them eventually grow into larger companies.

    A few years back, someone I know got rid of their server because it was stuck in socket hell (a.k.a. the upgrade limit): they'd maxed it out with Xeons and 32GB of RAM and it still wasn't fast enough for running VMs.

  20. Zmodem

    Drill a 140mm hole in the side of all your cases and put a 140mm fan on the motherboard's system fan 2 header, remove all the other fans and leave their mounts as blowholes, and enable AMD Cool'n'Quiet.

  21. Anonymous Coward
    Anonymous Coward

    Azure for charities.

    My previous employer is a charity, and the Azure tenancy pricing from Microsoft comes in pretty low. However, I've known Microsoft to 'remodel' their charity pricing and/or qualifying requirements a few times. I fear for my old bunch if Microsoft ever decides to bump things up.

    Couple that with the clunkiness of Azure, the two different portals/WebUIs that behave quite differently when configuring your cloudy estate, and the off-prem inconvenience, and I'm liking Azure about as much as a dose of the clap.

    In places it doesn't even feel like a beta, yet they're pricing it as a premium service.

    Anon, for obvious reasons.

  22. Understep

    Waste heat is your friend (for half the year)

    Speaking as someone who runs a home lab on the Canadian prairies, I feel the cooling requirements as listed in this article are overstated. Sure, I want to pipe the waste heat outside during the summer, but in the winter that extra warmth lowers my heating bill.

    Overall I see heating and cooling as a value-neutral proposition, though it also helps that my setup is located in the basement of a drafty 110-year-old house.

  23. Anonymous Coward
    Anonymous Coward

    Fuzzy fans

    I had a meeting with an organization a few months back. They showed me their pride and joy: a data centre so old and dusty that I counted three entire generations of old HP kit and at least four of Dell, all hooked into a hodgepodge of extinct EMC arrays. Walking around behind a rack, I saw a dust bunny at least 4 inches long wafting in the breeze behind a server fan. "Junkyard IT" was the phrase almost out of my mouth before I thought better of it. "Great place you have here... have you considered upgrading any of these servers or storage?" I said instead. "Oh no, we expect to keep them for 7 years and retire them as they break" was the reply. I about gagged and took my leave as soon as I gracefully could.

    CIOs and IT directors have no idea how much this attitude is costing them, hyper-focusing on the cost of acquisition alone. They proudly party like it's 1999, on kit that's nearly that old.
