During the SC09 HPC show last November, I walked around with a small video cam and did some interviews from the floor, posting them to The Reg in lieu of blog entries. It was a lot easier than actually writing blog entries and quite a bit of fun too. One of the booths I visited was displaying their solution to removing system …
Welcome to the '60s
When liquid-cooled systems were standard.
3M? Liquid cooling in a densely-populated space? Sounds like Fluorinert to me...
Way to go
Instead of heat pollution, we'll get coolant pollution.
But it does sound like a good idea: packing more blades into a smaller area in order to guzzle more electricity.
Next step will be Skynet.
Can I fill up my desktop's case with any engine oil...
...or should I use that Magnatec stuff that sticks to my electric engine before I turn (press) a key?
Will need some gaffer tape to cover the holes in my case first.
Does this mean datacenter engineers will have to pass plumbing courses to swap out a duff card?
What is the problem with standard transformer oil?
Why not use the same oil as used in distribution transformers, in direct contact with the electronics (but not hard drives)? Then pump or convect the oil into radiators/heat exchangers to disperse the heat.
The big problem is the "ick" factor when you have to service something. Transformer oil would be particularly icky. Mineral oil (i.e., baby oil without the perfume) would be somewhat less annoying, but I still don't think I'd want to service a machine that was pulled from a vat of mineral oil dressed in office work attire.
I looked at a few discussions on liquid immersion cooling; one thing people keep missing is that almost ANY fluid is many, many times better at removing heat than air. You don't need the optimal heat transfer characteristics, you want the optimal OTHER characteristics -- cheap, non-toxic, non-conductive, non-corrosive, likely to stay in its container, not harmful if it escapes, long-term stable in this environment, inert with regard to any of the materials used to fabricate the submerged parts, and not likely to absorb anything that would change any of the above (e.g., distilled water is non-conductive, but that would likely change as dust and salts from human skin were dissolved into it over use... not to mention the residues on the products submerged in the water). The fact that it is a fluid almost guarantees superior heat transfer over free or forced air.
In this case, though, the phase change from liquid to gas absorbs a lot of extra heat from the product, and helps put an upper limit on the temp that any part will reach, which would not be true for fluid-only solutions.
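The heat-transfer gap described above can be put into rough numbers. Below is a minimal sketch comparing the chip-to-fluid temperature rise, dT = Q / (h * A), across cooling modes. The heat transfer coefficients are midpoints of the ranges commonly quoted in heat-transfer textbooks, not measurements of any particular product, and the load and area are assumed values for illustration only.

```python
# Illustrative temperature rise dT = Q / (h * A) for a hypothetical
# 100 W chip with 10 cm^2 of wetted area. The h values below are
# rough textbook-range midpoints, W/(m^2*K), not product data.

Q = 100.0    # heat load, W (assumed)
A = 10e-4    # wetted area, m^2 (10 cm^2, assumed)

h_values = {
    "natural convection, air":     10.0,
    "forced convection, air":      100.0,
    "forced convection, liquid":   1000.0,
    "pool boiling (phase change)": 10000.0,
}

for mode, h in h_values.items():
    dT = Q / (h * A)  # Newton's law of cooling, rearranged for dT
    print(f"{mode:30s} h={h:8.0f}  dT={dT:8.1f} K")
```

The spread of three to four orders of magnitude in h is the point: even an unexceptional liquid beats forced air handily, and phase change beats single-phase liquid again.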
Cray hit this problem many many years ago - when chip densities and hence temps were lower than today - they solved it by using evaporation.
At the time, rather than try to design a custom injector assembly, the engineers simply went to their local motor parts suppliers (a lot of dealers in the MSP area) and asked for a range of injectors. Tests showed a certain Porsche injector created the perfect-sized coolant droplet for the big Cray boards.
To promote this breakthrough, Cray bought a load of Porsche sunglasses and had "Cray" etched on them! Very cool!
Oh, and the other advantage of evap tech: no need to drain, etc. You simply pull the boards and replace them.
Limitations of Oil
Others ARE promoting mineral oil immersion for datacenter applications. Mess aside, the efficiency of those fluids is limiting, believe it or not. All dielectric coolants have miserable liquid phase heat transfer properties when compared with water. If you pump them as a liquid from a heat source (CPU) to a heat sink (water cooled heat exchanger) those shortcomings will manifest as a high source-to-sink temperature difference.
If, however, you have a dielectric fluid with the right volatility, using it in a 2-phase (evaporative) mode as we propose greatly increases heat transfer efficiency and decreases the resultant temperature difference. Also, as you noted, all devices are at the same temperature.
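The single-phase shortcoming can be sketched with a back-of-envelope flow calculation: how much coolant must circulate to carry a given load. The property values below are nominal figures (water's specific heat, and specific heat and latent heat in the range published for fluoroketone coolants such as Novec 649); treat them as illustrative assumptions, not vendor data.

```python
# Coolant mass flow needed to carry a hypothetical 1 kW heat load.
# Property values are nominal/illustrative, not authoritative.

Q = 1000.0          # heat load, W (assumed)

cp_water = 4186.0   # J/(kg*K), specific heat of water
cp_fk = 1100.0      # J/(kg*K), approx. for a fluoroketone coolant
h_fg_fk = 88000.0   # J/kg, approx. latent heat of vaporization

dT = 10.0           # allowed coolant temperature rise, K (single-phase)

flow_water_sensible = Q / (cp_water * dT)  # pumped water, single phase
flow_fk_sensible = Q / (cp_fk * dT)        # pumped dielectric, single phase
flow_fk_boiling = Q / h_fg_fk              # dielectric, two-phase (boiling)

print(f"water, single phase:      {flow_water_sensible*1000:6.1f} g/s")
print(f"dielectric, single phase: {flow_fk_sensible*1000:6.1f} g/s")
print(f"dielectric, two-phase:    {flow_fk_boiling*1000:6.1f} g/s")
```

With these numbers the pumped dielectric needs nearly four times the flow of water for the same temperature rise, while the two-phase mode moves the same heat with less flow than water and, because boiling occurs at the saturation temperature, without the temperature gradient at all -- which is why all submerged devices sit at roughly the same temperature.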
You are correct to stress the OTHER fluid characteristics as well. The fluids we propose are being used as Halon replacements to extinguish fires in manned environments. So far, they seem quite viable for this type of application. Google “IEEE Tuma fluoroketone” for more info.
Spray Cooling vs Passive Boiling
The argument for spray evaporative cooling which was, as you noted, used in some Cray machines, is the elimination of the primary thermal interface material (TIM) between the silicon and the Integrated Heat Spreader (IHS). When you spray on the bare silicon, it is argued, this TIM is absent. Performance must be better right? It just isn’t true. The heat transfer coefficients achievable with spray evaporative cooling with volatile dielectric coolants are just too low.
Believe it or not, you’ll get better chip-to-fluid performance passively boiling off of a modern CPU package if the IHS is properly modified. This is true despite the addition of a TIM between the silicon and the IHS. Furthermore, the chip is protected by the IHS from any fluid-borne contaminants that might be left behind by fractional distillation of the fluid on the boiling surface.
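The "TIM penalty versus spray" argument above reduces to a series thermal-resistance comparison. The sketch below uses entirely hypothetical numbers (die and spreader areas, TIM resistance, and both heat transfer coefficients are assumptions chosen only to illustrate the shape of the argument, not measured data for any product).

```python
# Thermal resistance sketch: passive boiling from an enhanced IHS
# (with a TIM in the stack) vs. spray cooling on bare silicon.
# ALL numbers are hypothetical/illustrative.

A_die = 4e-4    # die area, m^2 (4 cm^2, assumed)
A_ihs = 12e-4   # boiling-enhanced IHS area, m^2 (spreader > die)

R_tim = 0.1e-4 / A_die   # TIM resistance, K/W, from an assumed 0.1 K*cm^2/W
h_boil = 20000.0         # W/(m^2*K), enhanced pool boiling (assumed)
h_spray = 5000.0         # W/(m^2*K), spray evaporative (assumed)

R_boil_stack = R_tim + 1.0 / (h_boil * A_ihs)  # silicon -> TIM -> IHS -> fluid
R_spray = 1.0 / (h_spray * A_die)              # silicon -> fluid directly

Q = 100.0  # W (assumed)
print(f"boiling via IHS: {Q * R_boil_stack:5.1f} K rise")
print(f"spray on die:    {Q * R_spray:5.1f} K rise")
```

The extra series resistance of the TIM is small; what dominates is the product h * A, where the enhanced IHS wins on both factors. That is the arithmetic behind the claim that boiling off a modified package beats spraying the bare die.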
questions/comments for round two...
Overall, I'm quite excited by the idea of liquid immersion cooling, though I see a few issues:
Something about the two-phase cooling bothers me. Today, sure, breathing in the vapors of this fluid is harmless. What will we do next year if we find out the stuff is slightly toxic ("I gotta rebuild my entire infrastructure?"), or in a decade or two when I'm lugging around an oxygen bottle? I'd feel a lot more comfortable with a low-tech, long-history substance... (my home's yard is contaminated by one of those miracle products of the past: PCBs... which, curiously, were also a cooling fluid, so I'm a bit touchy about this!). I'm not sure I can be convinced that exposure to some new chemical, no matter how fascinating and useful its properties are, is harmless, but give it a shot, I'm listening.
One nice thing about old fashioned air cooling is I can stack machines floor to ceiling. Seems any form of immersion cooling will require vertical removal and insertion of parts "stacked" horizontally, which will probably limit the stacking to one or at the most two devices high, at least for applications which require that broken systems be replaced moderately soon and without impacting other systems. I suspect SOME of the potential density improvements would be negated by this. (on the other hand...the space required around a conventional rack (front and back) is pretty non-trivial, too.)
What about disks? I am sure many will say "SAN!" and leave it at that, but you still have an air cooling issue for the SAN, and much of the magic of liquid immersion is the almost complete elimination of active cooling equipment. I know you don't want to have the spinning platters in liquid, and I know traditional disks have vents to (at least) equalize their internal pressure... but could a conventional disk drive practically be made in such a way that it could be immersed along with its computer? I'm thinking either a case rigid enough, with seals, to withstand the pressure of immersion and the reduced pressure of flying in an airplane cargo hold, or a vent that could be sealed before immersion (don't screw up and forget to seal it!).
Disks are not a problem.
Just use solid state disks. Problem solved.
Hot Aisle/Cold Aisle insanity
One of the things that has always fascinated me about this issue is how the data center manager wants to drive the cost of the server to zero and refuses to pay for any embedded advanced cooling system, yet then pays Liebert vast sums of money to cool the air. Air cooling is simply the dumbest, stupidest way of cooling electronics; any first-year engineering student could devise a better system, and if they proposed the way it's done now, they would get an F. Hot aisle, cold aisle, floor plenums, massive fans: it's insane, but it made sense when transistor density was lower and, like Nick says, it works, albeit very inefficiently. One of the key things I see from changing this paradigm is the vast ecosystem surrounding the current hot aisle/cold aisle mentality. Everyone participates and gets a little taste of the action: consultants, architects, builders, Liebert. They all get their taste, and the data center manager knows that if he does it this way, his beeper will most likely not go off due to a thermal issue.
Does the MMM proposal make sense? Sort of. Will it ever fly in the data center? I doubt it, but there are better ways. Sooner rather than later, transistor density will simply go beyond what air can do; we're actually there now, but we run the systems way below what they are capable of and justify it by saying it works.
Well it has started.
IBM builds liquid cooled systems.
Google uses liquid plumbing into their containers as far as I know.
I believe others have also experimented with containers with liquid cooling rather than air.
So this will almost certainly start to become the normal way to cool high-end server equipment. At the low end, you have the overclockers playing with liquid cooling too.
"...will it ever fly in the datacenter, I doubt it but there are better ways."
I appreciate your sentiments. Believe me, I appreciate any and all skepticism concerning immersion cooling. After 15 years working as a liquid cooling Thermal Engineer, I can TRULY appreciate it.
I also appreciate the cost and complexity of all the liquid cooling technologies that HAVE succeeded (immersion among them). They involve pumps, quick couplings, heat exchangers, manifolds, tubing, clamshells, and hermetic I/O connectors, and carry risk that will ultimately prove unacceptable in the context of commodity computational electronics, at least so long as there is a simpler, safer, cheaper and more efficient technology. I think this may be it. I’d like to know of a better way.