Give them a cold trouser blast and data centre bosses WILL dial up the juice

If you've ever looked at putting your servers and other infrastructure in a data centre, you'll have come across the power limitation they place upon you: they'll only allow your kit to suck up a certain amount before they cut you off. Generally, they'll tell you that you can have three or four kilowatts per cabinet, and even …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    Couldn't all this waste heat

    heat water, and with a bit of help from gas to get it up to temp, boil the water to drive local turbines to generate power for the cooling.

    1. frank ly

      Re: Couldn't all this waste heat

      That wouldn't be cost-efficient or practical for anything but a very large computer centre located next to a power station. It would also require long-term 'joined-up thinking' from large and separate groups of people. So, sadly, no.

      1. Jason Ozolins
        Meh

        Re: Couldn't all this waste heat

        Regenerating electricity from low-grade heat isn't useful. Using low-grade heat as a head start for building heat can be useful. The difference between ambient temperature and the server exhaust air temperature is too small to do much else with it.

        You *can* use waste heat from power generation for cooling, though, as the temperature drop to ambient is much higher:

        https://en.wikipedia.org/wiki/Cogeneration

        (the trigeneration bit of the article)
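
        A rough back-of-the-envelope sketch of why the thermodynamics are so unforgiving (the 40°C exhaust and 20°C ambient figures below are assumptions for illustration, not measurements from any particular site):

```python
# Rough Carnot limit for turning low-grade server exhaust heat back into
# electricity. Temperatures are illustrative assumptions, not measurements.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum possible fraction of heat convertible to work."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - (t_cold_k / t_hot_k)

exhaust = 40.0   # typical hot-aisle exhaust, degrees C (assumed)
ambient = 20.0   # ambient heat sink, degrees C (assumed)

eta = carnot_efficiency(exhaust, ambient)
print(f"Carnot limit: {eta:.1%}")   # ~6.4%, before any real-world losses
```

        Even the theoretical best case recovers only a few per cent of the heat as work, which is why the "head start for building heat" use is the realistic one.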

    2. This post has been deleted by its author

    3. chris 143
      Meh

      Re: Couldn't all this waste heat

      Even if you run your servers quite hot, the exhaust air is unlikely to be above 40°C. That's simply not hot enough to be particularly helpful for a steam-based power station.

      You could start using heat pumps, or possibly boil a different liquid, but I doubt it'd be worth the effort.

      The easiest way to improve the efficiency is to reduce the amount of energy you spend cooling.
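
      For the heat-pump route, a minimal sketch of the ideal (Carnot) coefficient of performance, assuming you want to lift 40°C exhaust air up to 70°C water for reuse. The temperatures are illustrative; real machines manage well under half of this bound, and either way you are spending electricity to move heat rather than recovering any:

```python
# Ideal (Carnot) heating COP for a heat pump lifting server exhaust heat
# up to a more useful temperature. All temperatures are assumptions.

def ideal_heating_cop(t_source_c: float, t_sink_c: float) -> float:
    """Upper bound on heat delivered per unit of electricity consumed."""
    t_sink_k = t_sink_c + 273.15
    t_source_k = t_source_c + 273.15
    return t_sink_k / (t_sink_k - t_source_k)

exhaust_air = 40.0    # degrees C, heat source (assumed)
hot_water   = 70.0    # degrees C, useful sink, e.g. heating water (assumed)

cop = ideal_heating_cop(exhaust_air, hot_water)
print(f"Ideal COP: {cop:.1f}")  # ~11.4; real kit achieves a fraction of this
```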

    4. Anonymous Coward
      Anonymous Coward

      Re: Couldn't all this waste heat

      This is already done in places, Telehouse West for example.

      http://www.telehouse.net/newsroom/news/news/2009/green-datacentre-from-telehouse-europe-to-power-london-homes-and-businesses/

  2. This post has been deleted by its author

  3. Anonymous Coward
    Anonymous Coward

    Yahoo!

    OK, their content may be a little crap, but their DCs certainly are not.

    http://gigaom.com/2010/09/19/now-online-yahoos-chicken-coop-inspired-green-data-center/

  4. Anonymous Coward
    Anonymous Coward

    Switch heat vents

    Most decent switches have their vents on the sides and suck in through their "backs". This works nicely with a hot/cold aisle layout.

    You should only have to visit the cold side to quickly put the servers/SANs in the rack, and then move round to the warm back to do the fancy, time-consuming stuff.

    Cheers

    Jon

    1. Keith_C

      Re: Switch heat vents

      *Really* decent switches will come with directional airflow options to allow the same device to work in either front-to-back or back-to-front airflow modes.

      The biggest advantage of cold aisle containment (CAC) is that the volume of air to be chilled is reduced. Without CAC, cool air leaving the floor vent starts mixing with the warmer ambient air of the datacentre (which has been actively heated by the exhaust of all the kit in the DC), so it has warmed and dissipated by the time it reaches the top of the rack. That means that to deliver a target inlet temperature at the top of the rack, the air leaving the floor vent has to be that much colder and moving that much faster. With CAC you can supply air at a higher temperature and move it more slowly, both of which save power.
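
      A minimal sketch of the airflow arithmetic behind that, assuming a 4kW cabinet and textbook properties of air (the figures are illustrative, not taken from any particular site): the wider the supply/return delta-T that containment lets you run, the less air the cooling units have to shift.

```python
# How much air you have to move to carry away a given heat load, for a few
# supply/return temperature differences. Numbers are illustrative only.

RHO_AIR = 1.2      # kg/m^3, approximate density of air
CP_AIR  = 1005.0   # J/(kg*K), specific heat of air

def airflow_m3_per_s(heat_kw: float, delta_t: float) -> float:
    """Volumetric airflow needed to remove heat_kw at a given delta-T."""
    return (heat_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t)

rack_load_kw = 4.0                      # per-cabinet load (assumed)
for dt in (5.0, 10.0, 15.0):            # achievable delta-T, K (assumed)
    flow = airflow_m3_per_s(rack_load_kw, dt)
    print(f"dT = {dt:>4.0f} K -> {flow:.2f} m^3/s per cabinet")
```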

      It is also worth mentioning that low-voltage and high-efficiency components make sense at datacentre scale - say from about 10 cabinets upwards. The component itself may be more expensive, and may never directly save more in power than the price difference over the 'normal' version, but if it's the difference between having to start building a new datacentre 6 months from now or 12 months from now, that is a considerable saving.

      Even little differences like tidy cabling can make quite a difference - if the hot air can exhaust without obstruction the fans have less work to do, again reducing power requirement. If you can go diskless and boot from SD you save the electrical cost of not just the drives, but also the disk controller you no longer need, as well as again improving airflow internally. All little differences that really add up over a datacentre.

      1. Jason Ozolins
        Headmaster

        Re: Switch heat vents

        In HPC, where high power density per rack is the norm (the peak I've encountered is 35kW per ~1000mm wide rack, and many racks of it), the tendency is towards hot aisle containment with some form of water-fed in-row heat exchanger cooling: either in the back doors of the racks themselves, above the aisle, or in-line with the server racks (APC provided that last sort for the Raijin supercomputer at the NCI National Facility in Canberra - http://nf.nci.org.au/facilities/fujitsu.php). In this model the hot air is cooled as close to the source as possible, there is no need for air handlers to pressurize the subfloor with cold air, and there are no long return paths for warm air back to the air handlers, where it would mix with ambient air on the way and make the heat harder to extract. Key points (a rough water-side sizing sketch follows the list):

        Heat is concentrated in as small a volume of air as possible in a hot aisle, making it possible to use higher temperature water to feed the in-row coolers.

        In-row cooler fans should ideally produce a slight negative pressure in the hot aisles, to minimize air leakage through gaps in the racks. This also helps prevent hot air stagnating behind front blanking plates in partially full racks; stagnant hot air is bad because it loses heat by conduction to the rack or adjacent equipment, and that heat can stress the equipment or escape out into the room.

        Issues of equalizing cold air distribution are reduced when the inlet air is the mixed air from the room with no particular cold vents. The in-row coolers ideally return air to the room only slightly colder than the bulk air in the room. If they do more than that, then the return water is cooler than it needs to be.

        The sheer volumes of air needed to cool dense servers become unworkable with a single pressurized floor. With hot aisle containment, the airflow is local to each hot aisle and distributed among the cooler units.

        The water returning from the in-row heat exchangers is warm enough that for a lot of the year in Canberra "free" cooling can be used, instead of needing to use heat pumps to get the heat into a lower volume of hotter water going to the cooling towers.

        The Raijin data centre itself is not classically cool; more like 25 degrees in the room. The hot aisles are well above 40 degrees C, very noisy, and not nice places to linger in. AFAICR the only room air cooling is to achieve the requisite air changes so humans are not breathing endlessly recycled plastic volatiles.
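
        And the promised sizing sketch for the water side of that arrangement: the 35kW per rack figure is from above, while the 10K water temperature rise across the coil is an assumption for illustration.

```python
# Rough water flow needed for an in-row/rear-door heat exchanger to carry
# away one dense rack's heat load. Figures are illustrative assumptions.

CP_WATER = 4186.0   # J/(kg*K), specific heat of water

def water_flow_l_per_min(heat_kw: float, delta_t: float) -> float:
    """Litres per minute of water to remove heat_kw at a given delta-T."""
    kg_per_s = (heat_kw * 1000.0) / (CP_WATER * delta_t)
    return kg_per_s * 60.0          # 1 kg of water ~= 1 litre

rack_kw = 35.0    # per-rack heat load quoted above
delta_t = 10.0    # water temperature rise across the coil, K (assumed)

print(f"~{water_flow_l_per_min(rack_kw, delta_t):.0f} L/min per rack")  # ~50
```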

        1. Jason Ozolins
          Coat

          Re: Switch heat vents

          "~1000mm wide rack"

          Bother. Thinking of the width, typed the depth. The actual rack unit width was a bit more than a 600mm tile. Water-cooled doors removed >30kW of heat from each rack unit, so there was no hot aisle in that system. Until you opened a rack door, that is.

          I'll get me coat...

  5. DougMac

    Working as part of Data Center design...

    Getting more power in usually isn't too bad, cost-wise.

    But as to the second half of the equation, redoing the cooling system beyond the load it was designed for can have astronomical costs. You can't just stick more cooling units in willy-nilly: they take up lots of space that probably already has racks and servers in it. Assuming a raised floor, the space underneath it is probably already zoned, and that would all have to be redone - and glycol piping is messy to install right above all the existing servers.

    Essentially redoing cooling means rebuilding the data center from scratch.

    Most of what people are bringing up are ways to use the cooling that is already there more effectively, but ultimately you are stuck with the xxx tons of cooling in the design, which can cool only yyy MW of load.
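
    To put numbers on that "xxx tons cools only yyy MW" constraint, a minimal sketch using the standard conversion of 1 ton of refrigeration to 3.517kW; the plant size and derating factor below are hypothetical examples, not figures from the comment above.

```python
# Quick conversion from a plant's design cooling tonnage to the IT load it
# can actually reject. The tonnage and margin here are made-up examples.

KW_PER_TON = 3.517          # 1 ton of refrigeration = 12,000 BTU/h

def coolable_it_load_mw(design_tons: float, usable_fraction: float) -> float:
    """IT load (MW) a plant of design_tons can support after margins."""
    return design_tons * KW_PER_TON * usable_fraction / 1000.0

design_tons     = 800.0     # hypothetical plant size
usable_fraction = 0.8       # derating for redundancy, losses, hot spots (assumed)

print(f"~{coolable_it_load_mw(design_tons, usable_fraction):.1f} MW of IT load")
```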

  6. ecofeco Silver badge
    WTF?

    Heat? What heat?

    I have YET to be in a data center, master control or any other IT office where the temp wasn't set on "stun".

    As in "so damn cold I could hardly type and my skin is getting cold rash".

    No joke.

    1. Christopher W

      Re: Heat? What heat?

      Ah, but what happens if the cooling fails or runs suboptimally? The warmer it is to begin with, the less margin for error you have. This happened where I work recently in a smaller apps room, and it's amazing how rapidly those temps shoot up. Temporary cooling was promptly installed and crisis averted, but it makes you realise just how many BTUs those servers chuck out.

      Monitoring onboard, chassis and CPU temps, even in a room at an ambient 12-13 Celsius, with your boxen throwing out air warmed to (sometimes) 40 Celsius, you see the cool molecules get very excited quite quickly.

      That, and a lot of gear has inefficiently designed or clogged inlets, sits in a very hot rack behind a door, or is obscured by messy cabling...
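
      As for how rapidly those temps shoot up: a minimal sketch, assuming a made-up 20kW load in a 60 cubic metre room and ignoring the thermal mass of the kit, walls and floor (so it overstates the real rate), but it shows why minutes matter once the cooling stops.

```python
# How fast the air in a small apps room heats up once cooling stops,
# ignoring the thermal mass of the kit, walls and floor (so this is a
# worst-case rate). All figures below are made-up examples.

RHO_AIR = 1.2      # kg/m^3
CP_AIR  = 1005.0   # J/(kg*K)

def rise_rate_k_per_min(heat_kw: float, room_m3: float) -> float:
    """Air temperature rise rate with no cooling and no other heat sinks."""
    air_mass = RHO_AIR * room_m3
    return (heat_kw * 1000.0) / (air_mass * CP_AIR) * 60.0

heat_kw = 20.0     # IT load in the room (assumed)
room_m3 = 60.0     # 5m x 4m x 3m room (assumed)

print(f"~{rise_rate_k_per_min(heat_kw, room_m3):.0f} K per minute, worst case")
```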

  7. Jock in a Frock

    This reminds me...

    This reminds me of the town of Redditch, where the local council have installed a heat transfer system from the local crematorium to the swimming pool next door.

    I always thought that a little cruel to the families of anyone being cremated who met their demise through drowning.
