The data centre design that lets you cool down – and save electrons

I started my commercial data centre experience in London in the late 1990s. Even back then, most of the service providers were parroting the same mantra: “Your power provision is limited, and we'll charge you through the nose for anything over the basic consumption figure you've signed up to.” The logic most of them gave was …

  1. Anonymous Coward

    I'm pretty certain I read a story from a 90s Silicon Valley start-up whose coloc provider rented everything by the square meter. Or square foot, as it were. Power was unlimited. They ended up having piles upon piles of computers surrounded by state-of-the-art desk fans for cooling.

    1. Anonymous Coward

      I saw that :-)

      I saw Google's install in a Global Crossing DC in Sunnyvale back in 2000. Only three small rows, but two mini servers per U and desk fans sitting on the floor between the rows. Oh, and empty cages all around since Google had all the power :-)

  2. streaky

    you can't identify just the hot bits

    Get about a million 1-wire temp probes (these cost next to nothing), some wire, put one at the top of every single rack, or maybe even a bunch per rack, write some software to output CSV, and make a map.

    Easy identification of the hot bits, and maybe even write some code to control the output of your coolers. DS18B20s are about 5 quid for 5 on eBay right now, so it's essentially a zero-cost exercise against the money you could save in energy use, and in not shortening server life if they're your servers.

    1. Roger Greenwood

      "you can't identify just the hot bits"

      An IR camera can help you with that. Maybe too late of course as it's already installed.

      1. Brandon 2

        hot bits...

        ... tell a Reg reader that it "can't" be done... and in the comments, it will be, and probably already has been, and then revised, and operating well since the 90s.

      2. Grahame 2

        From experience, IR cameras are great for finding hot spots (they're also great for finding overloaded power lines), but they're generally not necessary: just about every enterprise-class server/network appliance is packed with temperature, power-draw and fan-speed sensors.

        Of course, if you can't be bothered to poll them they're not a lot of use.
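
        For what it's worth, a rough sketch of that polling, assuming ipmitool is installed and the BMCs are reachable over the LAN; the hostnames and credentials are placeholders, and the parsing assumes the usual "sdr type Temperature" output layout, which can vary a bit between vendors:

          #!/usr/bin/env python3
          """Poll BMC temperature sensors with ipmitool and print CSV rows."""
          import subprocess, time

          HOSTS = ["bmc-rack04-srv01.example", "bmc-rack04-srv02.example"]   # placeholders
          USER, PASSWORD = "monitor", "changeme"                             # placeholders

          for host in HOSTS:
              cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", USER, "-P", PASSWORD,
                     "sdr", "type", "Temperature"]
              try:
                  out = subprocess.run(cmd, capture_output=True, text=True, timeout=20).stdout
              except (subprocess.TimeoutExpired, FileNotFoundError):
                  continue
              for line in out.splitlines():
                  # Typical line: "Ambient Temp | 0Eh | ok | 7.1 | 23 degrees C"
                  fields = [f.strip() for f in line.split("|")]
                  if len(fields) == 5 and fields[-1].endswith("degrees C"):
                      print(f"{int(time.time())},{host},{fields[0]},{fields[-1].split()[0]}")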

  3. frank ly

    Just wondering

    Would it be worthwhile to have the cold air input delivered from floor-mounted access points connected to an underfloor cold-air duct? Similarly, the hot air outlet could be coupled to an overhead manifold that takes the hot air away. If the cabinets all have standard footprints and spacing, then a line of screw-cap-covered access points wouldn't be difficult to plan, or particularly inconvenient.

    1. Alan Brown Silver badge

      Re: Just wondering

      The "inlets" and "outlets" are generally mesh steel doors. The problem with attaching pipework to them is that it makes them hard to open/close and get past (standard spacing in a cold aisle setup is 1.2 metres between rows)

      Once you go past about 7kw/cabinet then traditional solutions end up being useless anyway and you start getting into water cooling systems where the entire rear door contains the "radiator" (actually absorber) or recirculating cabinet systems with cooling gubbins in the bottom (water cooling from the top is a bad idea because there are always leaks, even with dry-break systems.)

      It's surprisingly easy to spend half a million pounds on a cooling system, as I've been finding out whilst evaluating various solutions to get rid of 120kW

      1. Mark Hahn

        Re: Just wondering

        No, 12-15kW/rack is no problem with air.

        1. EssEll

          Re: Just wondering

          "no, 12-15KW/rack is no problem with air."

          Before I cough and splutter and shout BOLLOCKS, would you care to explain how? I would agree with the 7kW, although my org uses about 5/rack. I've got a customer who only wants 3kW/rack, and they are a PITA.

          So your ability to shift up to 15kW of heat out of a rack using just airflow is very, very interesting.

        2. Anonymous Coward

          Re: Just wondering

          20kW a rack is no problem with air. Just use chimney racks.
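
          Back-of-the-envelope (my own illustrative numbers, not from either poster): the airflow needed to move Q watts of heat at an air temperature rise ΔT is roughly

            $$\dot{V} = \frac{Q}{\rho\, c_p\, \Delta T} \approx \frac{15\,000\ \mathrm{W}}{1.2\ \mathrm{kg/m^3} \times 1005\ \mathrm{J/(kg\,K)} \times 12\ \mathrm{K}} \approx 1\ \mathrm{m^3/s} \approx 2200\ \mathrm{CFM}$$

          About 1 m³/s per rack at 15kW and a 12 K rise - deliverable, but only if containment (chimneys, ducted returns) stops the hot air mixing straight back into the cold aisle.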

  4. Jon Massey
    Meh

    Three pages

    It took three pages to describe hot aisle/cold aisle separation?

    1. Mark Hahn

      Re: Three pages

      and say silly things like "every DC has a PUE of >= 2" (penultimate paragraph).

  5. John H Woods Silver badge

    Immersion ...

    ... is it still a thing? ISTR people were, a few years ago, pretty excited about liquid cooling - either immersing the whole lot in non-conductive liquid or pumping it round the racks ... but googling just turns up futurology and hobbyist stuff.

    1. Fred Flintstone Gold badge

      Re: Immersion ...

      Yes, it seems to have sunk without a trace (sorry :).

      I suspect it's a heck of a lot harder to do this in a data centre used by all sorts of different people, because they all have different needs, but liquid cooling in itself seemed to be far more efficient at transferring heat to where you could vent it all. It doesn't reduce the *amount* of heat you need to get rid of, it just makes the transport more efficient.

      Interesting question - would love to hear of anyone who has an insight into that one.

      1. SolidSquid

        Re: Immersion ...

        Part of the issue in a data centre might have been that full immersion requires a vertical access route to keep all the liquid contained. This means that traditional server racks (which save space by stacking vertically with a horizontal access route) wouldn't be compatible, and the floor space needed probably wouldn't be viable.

      2. Anonymous Coward

        Re: Immersion ...

        Check out this bitcoin mining operation: http://hongwrong.com/hong-kong-bitcoin/

    2. Ian 62

      Re: Immersion ...

      If you watch a couple of YouTube vids of lads building immersion gaming rigs you'll see the issues with it demonstrated on a small scale pretty quickly.

      1) It's HEAVY. A cabinet full of kit already needs a good strong floor; now imagine filling all the spaces around that kit with oil. Double? Triple the weight?

      2) It's messy. Want to change a network card? You've got to turn it off, lift everything out of the oil, try not to make a mess, then put it back again without contaminating the oil too much in the process, or spilling it across the floor.

  6. Tokoloshe

    PuE?

    "The reason the operator cares about your power consumption is because they have to provide it at least twice over: once to physically power the kit and once to run all the equipment that deals with the heat you're generating."

    While PuE is a pretty basic benchmark for DC energy use, you are asserting that it has to be >=2. It doesn't.
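
    For the record (my own illustrative figures, not the article's): PUE is just total facility power divided by the IT load, so "providing the power twice over" is the special case PUE = 2, not a floor:

      $$\mathrm{PUE} = \frac{P_{\mathrm{facility}}}{P_{\mathrm{IT}}}; \qquad \frac{1\ \mathrm{MW} + 1\ \mathrm{MW}}{1\ \mathrm{MW}} = 2.0, \qquad \frac{1\ \mathrm{MW} + 0.25\ \mathrm{MW}}{1\ \mathrm{MW}} = 1.25$$

    Free-air-cooled sites routinely report figures in that 1.1-1.3 range.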

    1. Soundman

      Re: PuE?

      Sorry, but make that three times:

      1. To power the equipment.

      2. To power the heavy backup batteries.

      3. To aircon the whole lot.

  7. Peter Gathercole Silver badge

    Sooo out of date!

    Put some water provision in the data centre. Water is a much better medium than air for extracting heat, and it is much more efficient to scavenge heat from water for things like the hot water in the handbasins in the restrooms than it is from air (although it does depend on the exit temperature of the water).

    Use water-cooled back doors. They take a significant amount of the heat away before it even enters the airspace. Even better, put them on both the front and back, so the air enters the rack cooler than the ambient temperature, and has any heat that is added taken out as it leaves the rack.

    I know I've said this before, but look at the IBM PERCS implementation. Water cooling to the CPUs, DIMMs, 'fabric' chips, and also in the power supplies. There is still some air cooling of the other components, but from experience I can say that these systems actually return air to the general space cooler than it went in!

    There are some really innovative things happening, much more than just the decades-old hot-cold aisles, hanging curtains and under-floor air ducts.
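
    To put a rough number on why water is the better medium (standard property values, nothing from the article): per unit volume and per degree of temperature rise,

      $$(\rho c_p)_{\mathrm{water}} \approx 1000\ \mathrm{kg/m^3} \times 4.18\ \mathrm{kJ/(kg\,K)} \approx 4200\ \mathrm{kJ/(m^3\,K)} \quad \text{vs.} \quad (\rho c_p)_{\mathrm{air}} \approx 1.2 \times 1.005 \approx 1.2\ \mathrm{kJ/(m^3\,K)}$$

    so a litre of water carries roughly 3,500 times more heat than a litre of air for the same temperature rise, which is why a slim rear-door coil can do what whole aisles of moving air otherwise have to.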

    1. Anonymous Coward

      Re: Sooo out of date!

      Doesn't water introduce all sorts of other problems, like humidity? It's all OK when it works, but when it leaks you have a problem, which is where I suspect the main challenges reside.

      1. Alan Brown Silver badge

        Re: Sooo out of date!

        "Doesn't water introduce all sorts of other problems, like humidity?"

        High humidity is easily dealt with: condensation is essentially distilled water, which isn't particularly conductive, and moist air absorbs heat better anyway (latent heat of evaporation and all that stuff).

        The bigger danger in an actively cooled data centre is things being too dry, which results in static discharge problems.

        Most data centres are sited in piss-poor locations, with fuck-all thought given to their design and even less given to feeding them (power is a major problem in London with most feeder mains overloaded - hence the many stories about exploding pavements and the Kingsway incident a couple of weeks ago).

        Even when you start looking at the newer datacentres - such as those popping up around Slough - you find that they're vulnerable to flooding and aircraft falling out of the sky (but they won't tell you that when they're showing you all their glossy literature and walking you around the sites).

        1. Mark Hahn

          Re: Sooo out of date!

          It's funny that people often go on about humidity control for datacenters, but the fact is that they're easy to keep at modest numbers (say 15-35%), which also happens to let you avoid both humidification and dehumidification. In most countries you'd have to put some effort into driving the humidity down so low that static became an issue.

      2. Peter Gathercole Silver badge

        Re: Sooo out of date!

        I don't understand the issues with water cooling and humidity.

        The water is totally contained in sealed pipes, so there is no chance of it entering the data centre atmosphere.

        In the case of the PERCS systems, there are actually two water systems: one internal to the frames, which is a sealed system with the requisite corrosion inhibitors and gas-quenching agents, and the other a customer water supply, with heat exchangers between them.

        The only time water can get into the air is if there is a leak. Where I work did have a leak at one time, caused by cavitation erosion to the case of one of the pumps, but that is one minor leak in the six years I've worked here.

  8. Eddie12345

    AC/DC

    How about DC datacenters? When power is provided to a datacenter, it comes off the grid as AC. Then it is converted to DC to charge the UPS. Then it is converted back to AC to send through the power cables to the servers/switches. Then it is converted to DC again by each server's power supply (more fans) to feed the circuit boards. All this is a huge waste of power, heat and money. Most servers/switches accept DC power. Everything in the datacenter should be DC. Phone companies have been doing that for years with their equipment. It's much more efficient.

    1. Alan Brown Silver badge

      Re: AC/DC

      "Most servers/switches accept DC power"

      Which they then feed into a switchmode power supply to convert to whatever they actually use internally.

      Meantime you pay 10 times more for DC PSUs AND, given most DC feeds are based around 48V (telephone exchange standards), you have to spend 10 times more on feeder cables to the racks, as they have to be substantially heavier to carry the higher currents that lower supply voltages entail.

      On top of that, should you draw an arc at any stage, you'd better hope that no one decided to cut corners on power switches (which have to be current-derated by 80-90% for DC vs AC in order to ensure arcs are quenched) or fuses (same deal: high-current DC fuses are larger than their AC equivalents because AC arcs are self-extinguishing every half cycle, whilst DC arcs have to rely on the endpoints being pulled far enough apart to quench an arc that's been maintained through ionised air).

      That means that your electrical standards and precautions for "Low Voltage" high-current DC have to be substantially more robust than for AC mains. I've seen more than a few spanners end up as a shower of molten sparks because someone got careless around DC busbars - molten sparks which have the potential to cause fires or secondary damage should they land on anything flammable or be drawn into the air intakes of a blade server (think of it as an injection of conductive iron dust and you won't be far wrong). That was OK in the days of concrete-floored Strowger switch rooms, but not so wonderful around high-density electronics.
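
      To put illustrative numbers on the cabling point (a made-up 10kW rack, not anyone's actual kit): for the same power delivered, the feeder current scales as I = P/V,

        $$I = \frac{P}{V}: \qquad \frac{10\ \mathrm{kW}}{230\ \mathrm{V}} \approx 43\ \mathrm{A} \quad \text{vs.} \quad \frac{10\ \mathrm{kW}}{48\ \mathrm{V}} \approx 208\ \mathrm{A}$$

      Roughly five times the current at 48V, and since resistive loss goes as I²R that's over twenty times the loss in the same copper - hence the much heavier (and pricier) feeders and busbars.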

  9. This post has been deleted by its author

  10. Toltec
    Facepalm

    Really?

    "All kit that's designed to be installed in a cabinet is designed to have its airflow from front to back"

    Ever come across kit by a small company called Cisco? They do not appear to have received your memo...

    1. Stoneshop

      Re: Really?

      "Ever come across kit by a small company called Cisco? They do not appear to have received your memo..."

      I think they actually did.

      Almost all systems have their connections at the back. It therefore makes sense to mount patch panels as well as top-of-cabinet switches with their business end towards the back of those racks.

      Larger kit such as a Catalyst 6500 is usually mounted in a cabinet of its own, with several tens of U of patch panel filling the rest of the rack. You mount the patch panels to match the Cat, and the Cat to match the airflow through the cabinet.

  11. Stevie

    Bah!

    Vent the hot air instead of cooling it back down. For those unobviously challenged, vent it upwards, through the roof.

    Filter fresh air at ambient temperature into the cooling infrastructure. For the same crowd, don't build your inlets next to your roof vents.

    And build your data centers somewhere cold to start with.

    100% passive is too much to ask, but using passive techniques to lower the avoidable energy usage shouldn't be.

    1. Anonymous Coward

      Re: Bah!

      "And build your data centers somewhere cold to start with"

      Svalbard Undersea Cable System has potential capacity upgrades available? And on-site security should be relatively simple, depending on whether the bears have had their glacier mints ...

    2. blondie101

      Re: Bah!

      A few weeks ago I visited a brand new datacenter in the north of the Netherlands. They use ambient air all day except for about 6 days a year!! It was a neat facility where they treated the ambient air (filtering, dehumidifying etc.) and blew it into the datacenter with a little over-pressure. They just let the air fall from above - no raised floors, all cabling from above. The hot aisles had a huge chimney so the hot air could get away the natural way. Because of the huge amount of ambient air it wasn't that cold inside (~20 degrees C). And the over-pressure kept the fans quiet: you could actually talk to each other!! They managed a PUE of 1.25!! Nice DC :-)

      1. Soundman

        Re: Bah!

        Velly good - but is it only 22 feet below sea level?

  12. Anonymous Coward

    Heat pipes

    Sounds like the combination of cooled doors, CPU water cooling and whatever 'fabric' DIMMs are (??) is the current state of play. The DC idea sounds pretty plausible too, but surely heat pipes have a role here. Using the energy-transfer efficiency of the specific latent heat (SLH) of evaporation to collect the energy, which is then given back when the vapour returns to the liquid state, has (I believe) peerless efficiency. I even heard that at everyday temperatures they can handle greater energy transfer than copper. With more exotic versions, heat transfer was greater than that found at the surface of the sun. (If I could find the quote, I'd post the link or quote more accurately ;o) - sorry.) Mainframes running Windows might need those. Heat pipes sound like a good way to get heat from the CPU, with its small surface area, out to a larger-area liquid-based heat exchanger, well away from the liquid-sensitive innards?
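
    For scale (standard steam-table values, not from the article): evaporating a kilogram of water moves vastly more energy than merely warming it,

      $$L_{\mathrm{vap}} \approx 2257\ \mathrm{kJ/kg} \quad \text{vs.} \quad c_p\,\Delta T \approx 4.18\ \mathrm{kJ/(kg\,K)} \times 10\ \mathrm{K} \approx 42\ \mathrm{kJ/kg}$$

    a factor of roughly 50, which is why a heat pipe's evaporate-condense loop can out-carry a solid copper bar of the same cross-section.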

    1. Peter Gathercole Silver badge

      Re: Heat pipes @AC

      If you were referring to 'fabric' chips in my earlier comment, they are a little bit like what you might describe as "northbridge" or "southbridge" chips in older Intel servers (although only in concept, not in the detail). They provide the copper and optical interconnect to glue the components together into a cluster (both external network, and internal processor-to-processor traffic), and also the PCIe and other peripheral connections.

      I could have called them Host Fabric Interconnect (HFI) or maybe Torrent chips, but that would probably have been even less meaningful.

      Heat pipes are not ideal. Because of the way they are constructed, they are very sensitive to leaks, which, because of the critical partial pressure within the pipe, render them useless almost immediately once a leak happens. I think the distance over which they can move heat is limited, too.

      I've seen far too many laptops that rely on heat pipes overheat whenever they've been on for any length of time because the heat pipes no longer function properly.

      Oh. By the way. Proper mainframes don't run Windows!

  13. John 104

    Cold Air And Things

    "you wouldn't want the opposite, after all, because it'd mean using the hot efflux to gently cook whoever is standing in front of the cabinet typing on the keyboard."

    Try working in a server room for more than an hour and you'll be wishing for trips to the hot aisle just to get the circulation back into your fingers...

    Raised floors are actually not quite the fashion these days either. While they provide a nice place for cables and such, they also provide a huge amount of volume for all your expensive cold air to hang out and do nothing. Cold goes down, hot goes up... Newer data centers will have your described hot/cold aisles and curtains (captain obvious), but dump cold air from above, letting physics do some of the work. Hot aisles have returns located up top as well to draw that heat away.

    Either way, it's damned cold for anything after 30 minutes, which is why I keep a nice coat at my desk for those longer work sessions (and headphones, because damn it's loud in there!)

  14. Joe User

    Dealing with the waste heat

    It would be nice if the waste heat from the servers could be piped into the office space during the winter. You've already paid for the electricity to run the computers, and all that free heat would put a significant dent in the office's heating bill.

    1. Knoydart
      Flame

      Re: Dealing with the waste heat

      Already been done. There are sites in London and Helsinki doing it, but extracting usable energy from the waste heat is difficult, to say the least. That, or you do what the Dutch are doing and ship servers direct to people's houses to provide heat.

      Fire, well you want to keep warm...

      1. billse10

        Re: Dealing with the waste heat

        effort required to extract energy is definitely an issue - but mindset sometimes a bigger one!

        I had a conversation about three or four years ago with a local council's energy efficiency guy about doing that, with heat from a small DC being offered "free" (think of tax implications) to heat a school. The guy just couldn't understand the idea at all.

    2. Peter Gathercole Silver badge

      Re: Dealing with the waste heat

      When I was at University in the late 1970s, the heat generated by the S/360 and S/370 was fed into the heating system for Claremont Tower in Newcastle.

      Nothing is really new any more.

  15. the spectacularly refined chap

    Fitted racks

    "Whack a roof on the top and a door at each end and you have a rudimentary room into which cold air is introduced. The hot output from the backs of the cabinets then goes into a separate area and can be pulled out by the extractor."

    This always sounds like a retro-fit to me, and it is unconsciously premised on commercial, off-the-shelf racks. It's the obvious solution until you see somewhere that does it differently. My last-but-one employer used wooden racks, custom built and fitted for the specific room by a local joiner. The racks ran from floor to ceiling, with ducts for power, air and data built in overhead (both along and between racks), and doors on each end of the cold aisles were an integral part of the racks rather than tacked on as an afterthought.

    When you first went in there your initial thought was "This has been done on the cheap", which I suppose is inevitable when the primary construction material was 3x2 and unused bays were screened off with hardboard quarter-rack blanking panels. After a while, though, you learned to love them - they did the job, cables were easy to route and the hot aisles were very open, permitting easy access to all the connections on the rear. Oh, and you had rear rack strip at both the 600mm and 1000mm positions - why can't more racks be like that?

    Depending on your definitions that was either a large server room or a small data centre: 40 racks and two cold aisles. The cost of all that joinery was significantly less than 40 off-the-shelf racks, despite all the additional infrastructure as part of the package. From "cheap" your attitude shifted to "Why doesn't everyone do this?", and I became convinced that in that area at least, following the herd is not the best idea.

    1. Mayhem

      Re: Fitted racks

      How did they attach the servers to the wooden racks?

      Are we talking wooden frames with metal inserts for traditional cage nuts, or pure timber?

      I would have thought that the weight of a piece of equipment or laden shelf which only attaches at the front would cause undue stress on the frame.

      1. the spectacularly refined chap

        Re: Fitted racks

        "Are we talking wooden frames with metal inserts for traditional cage nuts, or pure timber?"

        Commercial rack strip, e.g. like this.

        "I would have thought that the weight of a piece of equipment or laden shelf which only attaches at the front would cause undue stress on the frame."

        It's easy to underestimate the strength of timber based on the simple understanding that metal is stronger. It's less clear-cut when you compare a piece of 3x2 with a 1" steel box section, simply because of the much greater cross-section. Those racks would probably struggle if you filled an entire rack with UPSes and their batteries, but that goes for the commercial steel units too. We certainly didn't have any problems in practice.

  16. Crazy Operations Guy
    Joke

    "tall buildings all sucking up electrons"

    It's OK, it's AC power, so they aren't holding on to it for too long.

  17. Morten Bjoernsvik

    3M where your Novec

    This looks genius: an insulating liquid with heat conductivity close to that of water.

    Just set up a tank and drop your electric gear into it:

    http://www.eweek.com/servers/intel-sgi-test-3m-liquid-cooling-technology.html

  18. Anonymous Coward

    The biggest restriction on expanding the power in a DC

    ...is the availability of diesel gensets. Usually they have to be mounted on the roof, and once the space for them is full, you either run n-x and accept the risk, or call a halt to expansion.
