Get ready for the coming data centre crunch

"If there's going to be a theme of the Press Summit this year," mused one delegate on the flight to Portugal, "then it's going to be power, and heat." He should have been right. We covered femtocells, 100-gig Ethernet, managed wireless, specialised security-oriented operating software for network switches, media gateways and ( …

COMMENTS

This topic is closed for new posts.
  1. amanfromMars Silver badge
    Alien

    Pink Floyd Red Mist ........Minimalist TeleVisual SurRealism

    "Can you go short on a power hungry server?"

    No.

    And that must be the shortest post I have ever yet submitted anywhere making Perfect Sense to almost everybody into such Heavy Metas and Darker Side Matters.

  2. John
    Thumb Up

    Stick a meter on each rack

    That will focus a few minds on unused servers and persuade the vendors that there is a market for lower power consumption systems. Telehouse charge a fixed rental rate for a rack irrespective of power consumption.

  3. Pete Silver badge

    Just let the market sort it out

    Limited supply, increasing demand. Traditionally this leads to rising prices as those who value the service (or can extract the greatest value from it) are prepared to pay - while the less efficient or poorer operations find alternatives elsewhere.

    Hopefully the Telehouse people are learning this lesson (albeit rather slowly) and not making Heathrow's mistake of trying to shoehorn more services into an area that's obviously not capable of supporting them. Here's a hint, guys: look at where the aluminium smelters are situated. Like Telehouse, they also have 10-20MW power requirements for their operations.

    With any luck and only the smallest amount of forethought and planning, they could take the profits from exploiting their supply/demand position and re-invest that money in a new facility near a power station, rather than in the middle of a city that simply can't meet their needs.

  4. andy
    Stop

    Why do you need SO much power??

    Perhaps this issue arises because so many people run servers with capacity far beyond what they need. Add in the "it must run in minus temperatures or it will overheat omg!" mentality and power consumption is already going through the roof.

    I once read about a company that was converting small, low-power equipment - which included an AppleTV - into web servers, because of the reduced power consumption and low heat output. But if you try to persuade people to run their website on a set-top box, I bet most of them would tell you where to go.

  5. Paul Murphy
    Black Helicopters

    Go virtual.

    Another answer (besides lower-powered servers) would be to turn as many servers as possible into virtual ones, running on a much smaller number of real servers.

    It does assume that the likes of VMware aren't going to decide that they don't want your VMs to play any more, but on the whole it's a more flexible answer.

    Also, getting fewer switches/routers with more ports, buying bigger (as in 3U or more) servers with bigger fans that can move more air, and then cranking back on the aircon would work.

    After all that has been tried, move to Sellafield.

    Maybe the helicopters wouldn't mind doing some cooling work with those big fans of theirs while they are here.

    ttfn

  6. Anton Ivanov
    Coat

    Now is the time to buy land around hydroelectrics

    Now is the time to buy land near the big hydroelectric plants in Europe.

    They are usually high in the mountains, so the cooling system can suck cold air from outside most of the time. Electricity is cheap and abundant. All of this by far outweighs the financial constraints on laying fiber to them. In fact the conduits and trenches are already laid out to support the hydro plant's telemetry. All it takes is to blow some new fiber into them.

    On top of that, most of these facilities have spare buildings left over from when they were constructed, once used to house workers and construction machinery. So this may end up being the "cheapest datacenter money can buy" for many companies. It also perfectly fits the disaster recovery strategies that are now mandatory for many companies.

    Compared to that, places like Telehouse and Co will simply no longer be able to compete. And as the Chinese say: "Once the avalanche has started it is too late for pebbles to vote".

  7. Anonymous Coward
    Flame

    Use already-available tools

    Some Ethernet company with an axe to grind says "40W per port" and you print it, barely qualified, as though it's gospel. What is this, the Inquirer? Do you know how hilarious that number is in general?

    Meanwhile some other tosspot (I can say that, it's allowed on the BBC) says "we have no idea which servers are actively in use". Well duh, guys - get yourself some managed LAN switches (you're supposed to be running a datacentre, not my workroom LAN at home) and monitor the traffic counters over an interval as long as you fancy. A fortnight might be a start. If there's no meaningful traffic in a fortnight, power it down and wait for the phone to ring.
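
    Something like this rough sketch would do for a first pass - a sketch only, assuming pysnmp is installed and the switches speak SNMP v2c; the hostname, community string and ifIndex are all stand-ins:

        # Read the IF-MIB ifInOctets counter for one switch port, take a
        # baseline now, read it again in a fortnight, and flag ports whose
        # counters have barely moved. Needs pysnmp (v4) installed; the host,
        # community string and ifIndex below are placeholders.
        from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                  ContextData, ObjectType, ObjectIdentity, getCmd)

        def if_in_octets(host, ifindex, community='public'):
            """Return the ifInOctets counter for one interface via SNMP v2c."""
            err_ind, err_stat, _, var_binds = next(getCmd(
                SnmpEngine(), CommunityData(community),
                UdpTransportTarget((host, 161)), ContextData(),
                ObjectType(ObjectIdentity('IF-MIB', 'ifInOctets', ifindex))))
            if err_ind or err_stat:
                raise RuntimeError(str(err_ind or err_stat))
            return int(var_binds[0][1])

        # baseline = if_in_octets('switch1.example.net', 12)
        # ...wait a fortnight (mind 32-bit counter wrap on busy ports)...
        # if if_in_octets('switch1.example.net', 12) - baseline < 10_000_000:
        #     print('port 12 looks idle - power it down, wait for the phone')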

    Or maybe they should get a bookkeeper. If these datacentre guys don't actually know which servers are active, do they know which are actively being paid for? No money => no power?

    And a final thought: ground source heat pumps are starting to be trendy in the UK for domestic and commercial heating. How long before someone notices they can work the other way round too when necessary, and they start being trendy for commercial *cooling* in UK datacentres? The temperature a few feet underground tends not to change much, and tends not to change very quickly, even if you pump in or take out a fair amount of heat.

    Other than that, nice article. Could've done with mentioning the Olympic effect too, but there's still plenty of time to panic over that. Maybe not enough time for Olympics GLC 2012 plc (and their suppliers and subcontractors) to fix it, but...

  8. Britt Johnston
    Unhappy

    Underclocking, anyone?

    Unnecessary cooling might be a left-over practice from the bad old mainframe days. A colleague who recently retired described one of his first jobs, keeping a department store's computer up and running: "At [...], the air conditioning was under-powered. If a hot day was forecast in summer, we knew where we would be spending our time. The beasts only functioned between 19-21°C."

    Since heat increases more than linearly with speed, what about underclocking?
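
    Back-of-envelope, using the usual CMOS dynamic power rule of thumb (and assuming voltage scales down with frequency, as DVFS permits):

        P_dyn ≈ C · V² · f,  and with V ∝ f that gives P_dyn ∝ f³
        so a 20% underclock: 0.8³ ≈ 0.51, i.e. roughly half the dynamic power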

  9. Anonymous Coward
    Anonymous Coward

    I like cake

    but the cake is a lie

  10. Anonymous Coward
    Thumb Down

    Wrong performance metrics...

    Society measures success on the wrong metrics. That's why the sales person who sells an IT solution gets paid 5 times as much as the techie who has the skills to implement it. The people taking these short cuts are getting nice fat bonuses now for cutting costs and will be long gone by the time the whole thing catches up with us. In reality the 'borrow now, screw later' mentality that the country has got itself into is responsible for a number of our woes, including the credit crunch, high consumer debt, Gordon Brown as PM and the current sell-off of the nuclear industry. So many of the important things are long term, and someone who is only answerable in the short term cannot make decisions on what is best in the long term.

    We have a shortage of electricity because money was saved a decade ago by not building replacements.

    We have a shortage of senior techie people because all the junior techie jobs were outsourced.

    We have a shortage of engineers because we prefer to teach people media studies and pointless toss like that.

    We have a bankrupt treasury because we have let the government spend more than it is raising in taxes.

    We have high fuel prices because we didn't invest in our own fuel industries and are now totally dependent on others.

    We have no manufacturing output to speak of so rely on others for that.

    We have a need for an immigrant workforce because we have spent the last 30 years destroying our country to the point where much of Britain is unqualified to work and dependent on dole handouts.

    Basically our baby boomer parents have f***ed up right royally and there are no quick fixes. We have to accept that a generation of people need to work bloody hard to make things better for our children.

  11. Paul

    Who the hell

    Thought it was a good idea to stick these things in Docklands in the first place?

    Ohhh... I see. Tech companies run by oh-so-cool geeks who want a Docklands workplace. No other reason. Move your data centres out of London and life will be much better for a few reasons:

    1) Staff will cost less.

    2) Power will be safer.

    3) Ambient temperature will be lower. City centres, especially London, average 1-2°C above the surrounding area. How about Scotland? Much colder, and lots of hydro power.

  12. Pinwizard
    Unhappy

    Oversold on hardware.

    Too many customers are sold dedicated quad-core, multi-gigabyte servers for a single website with under a dozen simultaneous users.

    I've been using a Cobalt RAQ550 for years, 4 websites, mix of asp/php/jsp with mysql and ...

    perfectly fine

    In fact the only reason I upgraded from the RAQ 4 was that PHP resizing large images on the fly was slowing it down. In retrospect I could have adapted the code and kept my R4, which required so little cooling and was so quiet I kept it under my monitor. The RAQ550 needs to be in the garage (and that's only a 1GHz Pentium 4).

    So here's the thing: an R4 has a 500MHz processor.

    Webmasters, check your logs: did you really need the same rig the MMORPG guys with their 1000s of simultaneous users need, or could you save your company a small fortune and share hardware with other virtualised users?

    Bosses, go check those logs too; if your system is always chuntering along at 10% usage then you're wasting money hand over fist.
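
    As a first pass on those logs, a rough sketch like this would do - the log path and combined log format are assumptions, and 'busy' is yours to define:

        # Count requests per hour in an Apache/nginx combined-format access
        # log to see whether the box ever actually gets busy. The log path
        # below is a placeholder.
        import re
        from collections import Counter

        hour = re.compile(r'\[(\d{2}/\w{3}/\d{4}:\d{2})')  # e.g. [01/Oct/2008:14
        hits = Counter()
        with open('/var/log/apache2/access.log') as log:
            for line in log:
                m = hour.search(line)
                if m:
                    hits[m.group(1)] += 1

        if hits:
            busiest, peak = max(hits.items(), key=lambda kv: kv[1])
            print('busiest hour:', busiest, 'with', peak, 'requests')
            print('roughly', round(peak / 3600.0, 2), 'requests/second at peak')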

    Oh, and I would gladly swap the 550 for a RAQ 4 if anyone is interested.

  13. Ciaran Flanagan
    Thumb Up

    A new (datacentre!!) world order?

    Very concise article... a must-read for those Chief Execs who still have organisations where one cost centre consumes (IT) and another pays (Facilities), and never the twain shall meet... the market dynamic will drive energy efficiency because the punishment for inefficient consumption is getting more and more severe.

    The Intel experiment is also something the industry will look at carefully, although I have a minor gripe with what I've read... the cost savings are expressed in terms of a traditional versus a radical design, but if the traditional design was already a dog then the numbers may look artificially attractive... regardless, very interesting (and I'm sure someone can clarify for me).

    SO... the power cost and availability debate is fast replacing the 'carbon credit' and 'green agenda' in the datacentre industry, and that's got to be a good thing... now we can focus on something we can influence directly. If we look after the kilowatts, the carbon will look after itself...

  14. Pete Silver badge

    @Anton

    > as the Chinese say: "Once the avalanche has started it is too late for pebbles to vote"

    Kosh wasn't Chinese - he/she/it/they was a Vorlon.

  15. Blitheringeejit

    Waste heat? Ask a pensioner....

    So let me get this straight - we have a massive problem with fuel poverty, but datacentres which are "illogically" located in population centres are paying a fortune to vent sh*tloads of heat to the atmosphere via aircon units? This rings a bell...

    Years ago we built huge out-of-town power stations with massive cooling towers to cool the condensate after it went through the generator turbines, resulting in stupidly low power generation efficiency - in other words, a lot of wasted heat. At the same time (if I remember the green press coverage correctly) the Swedes were building Combined Heat and Power stations which fed that same hot water directly to domestic users.

    The heat pump idea is half-way there, but instead of cooling datacentres AND heating houses with ground-source heat-pumps, how about just pumping the damn heat straight from the datacentres into the houses?

  16. Anonymous Coward
    Thumb Up

    "how about just pumping the damn heat ... into the houses"

    Just in case no one else points this out: the biggest cooling problem in datacentres is in summer, when the pensioners (etc) don't really need the heat. For the rest of the year maybe there is an opportunity for "district heating" on the lines suggested (and thanks for remembering CHP, which was a bit more widespread than just Sweden, I believe they even have CHP in Woking).

    Sadly, the corporate beancounters that ru(i)n most of the UK won't pick up the costs of this kind of thing, as shared costs and investment to benefit the larger community are a Commie kind of thing, and we don't do that here except when it benefits the spivs and wideboys in our incompetent financial services community. So in the case of a datacentre, it'll probably just open the windows and let the waste heat go to waste...

  17. Anonymous Coward
    Flame

    telecity house

    [quote]That will focus a few minds on unused servers and persuade the vendors that there is a market for lower power consumption systems. Telehouse charge a fixed rental rate for a rack irrespective of power consumption.[/quote]

    Not strictly true: they do have house rules, and one of those rules is that each rack gets 8 amps; go over 8 amps and you'll be fined.

    Just because there are 42U in a rack, it doesn't mean that you can put 42 servers in that rack (and have them all on).
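
    To put rough numbers on that 8 amp limit (assuming 230V mains and something like 300W draw per 1U server - both round figures):

        8 A × 230 V ≈ 1.8 kW per rack
        1.8 kW ÷ 300 W ≈ 6 servers

    So call it half a dozen servers actually powered on, in a rack with 42U of space.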

    Of course they don't have limits on how much heat you can generate.

    A quick walk around Redbus looking at the air conditioning will show that the temperature hovers around 20 degrees, but frequently goes a little above that anyway.

    Further to that, you'd have thought that the biggest source of power concern would not be what they can get pumped in from the grid supply but rather what their backup generators can supply. I would reasonably assume that this would be a lot less.

    Virtualising machines can save a lot, yes, and is surely the way to go, but it isn't for a data centre to push onto its customers.

    Telecity rent space, power and cold air to whoever wants to buy it.

    Assume you rent a cage containing some 20 or so racks there, plus a few racks scattered around the floor where you've had to move servers out of your main cage due to not having enough power in the cage (i.e. your machines could fit into 20 racks, but there isn't enough current supplied, so you have to rent further racks just to get power, even though you don't physically need the space).

    Now here's the rub: it's not Redbus's job to tell you to virtualise your servers. In fact it's none of their concern what's on your servers, how old they are, how well utilised they are or anything. All they do is rent space, power and cold air, and whilst you pay, I'm sure they are more than happy for you to use whatever servers you like. They get paid; they are happy.

    Now assume you are hosting equipment on behalf of customers, so you're not going to absorb the cost of needing extra space/power either; you pass that cost directly on to your customers for hosting and supporting the hardware that they buy.

    And the customers (usually small customers) don't even know that their machines are under-utilised and could be virtualised. Even if they are told, they are going to wait till their next upgrade cycle anyway - completely rebuilding their entire environment would be a long and costly project, for a return that would be small year on year.

    If the problem is five years away then you are screwed anyway, as any systems going in today aren't going to take account of needing power that might not be available in five years, and anything already in has a few more years' service before it's going to be replaced anyway.

    Thus, as the article suggests, when Redbus hits the wall and inevitably has to put up its prices because the demand for power is greater than it can supply, you will just say OK and relocate anyway. That's assuming that you can relocate, and are not so heavily invested there that relocating your co-location is impossible.

    But really, I think the real issue is being missed: it's building design that needs to be addressed.

    Redbus is a big glass box. You don't need to spend too much time out in your greenhouse (a small glass box) to know that temperatures will rise.

    Compare that to the Queens Building at De Montfort University in Leicester (the engineering building).

    The building is designed with tall chimney-like structures that work on the same principle as termite mounds; they work quite naturally, helping to keep the building cool in summer and warm in winter.

    Also, even a primary school student could tell you that heat rises, and building a heat-generating building five storeys high might cause a lot of heat dissipation problems. Why aren't these data centres built in more rural areas, where they would be able to expand outwards rather than upwards, adding to the amount of heat that could be naturally dissipated simply by the building having a larger surface area?

  18. Alan Parsons
    Thumb Up

    Yay

    "steps will start to be taken to bail out the co-lo centres. It will be rushed, it will cost far more than it should, and it will be impossible to do quickly anyway"

    Speaking as a contractor - this is wonderful news :)

  19. Anonymous Coward
    Alert

    Datacentres in rural areas?

    Usual excuse is that there's not enough electricity *and* not enough connectivity (ie not enough *cheap* connectivity to t'Internerd). London mostly has cheap connectivity for historic reasons (the lovely hydraulic main being one of them) but is becoming marginal for electricity and has long been ridiculously overpriced for space.

    I thought I'd seen reports, on here even, not too long ago that at least one new *big* datacentre was being built in former Welsh Development Authority funded factory premises not far from the M4. Can't find it right now though. Does that count as rural? Six months ago it would have been cheaper than Docklands but I reckon there'll be some bargains in Docklands in the next few months.

  20. Steven
    Happy

    @Alan

    "Speaking as a contractor - this is wonderful news :)"

    Amen brother :)

  21. Sean Healey
    Boffin

    @John + Anon

    "Telehouse charge a fixed rental rate for a rack irrespective of power consumption."

    NOT ANY MORE!!

    I just received my first £200 'excess power usage' bill from Telehouse. I'm in the Metro site, which I believe is slightly more relaxed than Docklands. That's blown all my budgets right out the window, and has actually put the entire future of my small colo operation into question.

  22. Alex Kinch

    Talking about leccy costs and Maidenhead..

    Notice that Rapidswitch will be adding a 'power surcharge' of £6 per month per server from 1st November for existing clients, and I think in the next few days for new customers.

  23. Blitheringeejit

    Colo my eeePC?

    Any colo operation out there prepared to give me a power discount because my webserver is an eeePC..?

  24. Chris Comley

    Out-of-town

    At least one smaller ISP (Merula) has worked out that it's better to build your OWN out-of-town data centre than to continue renting off <the usual suspects>. Though in part this may have had its roots in the same problem: you couldn't rent new cabinets with the full 16 amps of power you were used to getting. And with modern computers you can only run about five or six of them in 8 amps; the rest of your 42U rack becomes coat storage space.

    I can see other ISPs will go the same way.

    It's also good news for their *customers* - I hate having to go all the way to Docklands when I need to visit my hosted servers in person. Huntingdon is the same distance, says my GPS. Well, I know which is *quicker*. And which is easier to park outside.

  25. Chris Comley

    Oh and...

    Our new servers are all 8-core monsters with 16GB of RAM.

    The upside is, we now only need two, where we used to have eight to ten individual boxes.

    The downside is HUGE power demands from the new boxes.

    The good news is, we've found a supplier of quality rackmount servers which use HALF the power of the most common Dell and HP models!!!

  26. Anonymous Coward
    Coat

    Problem is the cost of comms

    The main reason why people are centralising in co-los is the cost of comms.

    I agree - if we could have a datacentre out in the sticks along the M4 or even up in rural locations like the Midlands or the North, near decent motorway infrastructure for access, it would be much more useful for Disaster Recovery purposes, and better for the national grid. But not having limitless pockets means we're a bit scuppered.

    Having recently been through the exercise, we priced up a cost comparison of two London-based datacentres with multiple high-bandwidth (Gigabit) links vs having them spread across the country.

    The price of providing the comms links, versus the cheap existing infrastructure provided by companies such as Colt, meant that we will have saved over a million pounds by having both sites in London rather than spread across the country.

    So it's a bit of a no-brainer for us financially.

    So what do we do about it in the UK? How can we cure this problem?

    If we can get the comms link pricing down by getting BT Openreach to lose its stranglehold on the local loop and allowing more competition in the last mile (when I enquire about costs and lead times outside London, the BT leg is the single most costly part of any link), we may have a better chance of getting things away from being so London-centric. The problem is that Ofcom just seem to be making things much worse, and it's becoming so expensive to put in fibre that even the cable companies don't seem to be extending their reach and have no plans to expand. So it's a little bit of a stalemate.

    Many of the providers now charge per kW of pull instead of charging per rack, and they're even getting to the point of saying 'sorry - you cannot put that in as we don't have enough power capacity yet', so the hypothesis of the original story is coming true.

    @AC Telecity - It's interesting to hear we're not the only ones who have had bad experiences of Telecity's building and its greenhouse-like properties. We were also lucky enough to experience their capacitor explosion, which managed to hose an entire data centre's UPS facility, taking down power for a couple of hours - not once, but twice in a year.

  27. Jay
    Thumb Up

    Waste heat and cooling on the Isle of Dogs.

    Most of the Dogs is surrounded by housing, so waste heat could be used to heat the homes.

    The Dogs is also surrounded by water. Drop a heat exchanger into the water and that would lower the cost of cooling; the Thames and surrounding basins are pretty chilly most of the time. You would only need to run a big water pump rather than air conditioning units - maybe not a huge saving, but a saving nonetheless.

    Maybe data centres should start using the environment around them to help lower costs.
