IBM smacks rivals with 5.0GHz Power6 beast

The rest of the server world can play with their piddling 2-3GHz chips. IBM, meanwhile, is prepared to deal in the 5GHz realm. The hardware maker has unveiled a Power6-based version of its highest-end Unix server - the Power 595. The box runs on 32 dual-core 5GHz Power6 processors, making it a true performance beast. This big …

COMMENTS

This topic is closed for new posts.


  1. Laxman

    This vs Sun's gear

    I checked out the Sun UltraSPARC T2+ today - that seems like a (really) fast processor as well (or so the benchmarks suggest).

    Is there anywhere I can see the benchmarking results mentioned in the article? I just want to check the number of CPUs in the competing products (I have an inkling it's going to be somewhat less than 32).

    Plus, on a price/performance basis, POWER6 is horrible (IMHO).

  2. Daniel B.
    Boffin

    Water cooling

    Funny, lots of comments here and no one has stated the obvious: water *isn't* conductive. It's the salts mixed in with the water that do the conducting, so pure distilled water might do the trick.

    There are other liquids out there that can manage even better than water (liquid N2?), but they're more expensive.

    I'm all for water cooling; it's greener and more efficient than air cooling. Anyone who's been near an HVAC unit, or heard a ProLiant server sound like an F1 engine when firing up, would agree!

  3. Pierre

    Hi Valdis

    The use of air for cooling is an admission of a lack of technical skill. It is the most power-hungry and inefficient solution around. It is also much more complex than water cooling, and not safe at all (you need to filter and cool humongous volumes, and the fans are very prone to failure).

    "I was born in a country and live in a country where free speech is respected." I take it you've never been to the glorious US of A, then.

  4. Anonymous Coward
    Paris Hilton

    Water Heating?

    I have a solar water heater on the roof that gives me 200 litres of scalding hot water on all but a few days a year.

    I wonder if IBM would like to try to sell me one of these to replace it?

    So far, I'm not convinced that the performance would be up to it.

  5. Matt Bryant Silver badge
    Happy

    RE: Valdis Filks

    Ooh, I spot a small problem on Valdiboy's horizon! Isn't Sun doing all that fancy research into directly interfaced chips, where CPUs communicate by being pressed against each other with overlapping faces? If I remember rightly, the main problem with that was heat generated at the interface, and it was likely to need a water-cooled solution. Oh dear, poor old Valdiboy's own prod dev team obviously like it complex. Mind you, it is only a small problem - vapourware doesn't really need cooling!

  6. Rupert Brauch
    Stop

    Performance claims

    Hard to verify their performance claims of 2-3x the competition, since it doesn't look like they've submitted any benchmark results to www.spec.org.

    For all we know, they could be comparing their gear to ancient UltraSPARC III systems again.

  7. trackSuit
    Alien

    Water-cooled and Rather Stealthy

    Here is a link to ANOther place, where a computer is cooled by water, yet there is no pump driving the system, which would make it Stealthy.

    http://www.plees.f2s.com/ec/pas-cool/pas-cool.htm

    If you look at the core though, you can see it is well behind the curve, in terms of Modern Metadata Processing Capabilities. It even has an old Quantum Maverick storage device Connected!

  8. Chris iverson
    Boffin

    a car

    is cooled by a mixture of water and coolant that is driven by a mechanical pump connected to the engine's crankshaft, forced through the engine and then through a heat exchanger at the front of the car. Then it repeats. Not by electricity, which is used to turn the fan inside the cabin of the car.

    </pedant>

  9. Ike Thunder
    Thumb Up

    Benchmarks

    The best place to check real performance results and compare different vendors is probably here: http://www.sap.com/solutions/benchmark/sd2tier.epx

    With the SAP benchmarks, all vendors are present (which is not true of the TPC tests, for example).

    And the good thing is that they have different configurations which can be compared. HW configs start from 1 core and the largest is 128 cores. You can also compare UNIX vs. Windows... that gives you nice results with SAP, by the way...

    So check the SAPS value and you can see that the latest IBM result is roughly 17% better than the next one... and they did that with half the cores compared to the second result!

    Impressive....
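
    (To put that claim in numbers: a minimal back-of-the-envelope sketch in Python, using only the ratios stated above rather than the published SAPS figures.)

        # Per-core advantage implied by the claim above: 17% more total SAPS
        # on half as many cores. Ratios come from the post, not from sap.com.
        total_saps_ratio = 1.17   # IBM result / next-best result
        core_ratio = 0.5          # IBM cores / next-best cores
        per_core_ratio = total_saps_ratio / core_ratio
        print(f"Implied per-core SAPS advantage: {per_core_ratio:.2f}x")  # ~2.34x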

  10. Steven Jones

    Benchmarks

    Hear hear to the man who praises the SAP benchmark. If you want an application engine to run Java or an HPC environment then by all means look at SPECjbb, SPECfp and all those compute-intensive things, but if you want to see how a database works (and there are few other reasons why you'd want a big shared-memory multi-processor machine) then there aren't that many good benchmarks around. TPC-C tends to be more of an exerciser of I/O systems than anything else (some configurations have over 7,000 disk drives), and TPC-H is OK for data warehouses but is not readily applicable to transactional systems.

    No, if you are stuck with a complex ERP package because some architecture team has decided that SAP, PeopleSoft, Siebel, Amdocs CRM or whatever has to be used because it is "off the shelf", and you get faced with some huge, single-instance database which doesn't readily lend itself to parallelism, then SAP is about as realistic as it gets. It ain't perfect, but it's the best of a bad job. In a world where benchmarks are marketing tools, and where software and hardware vendors contractually prevent the publication of benchmarks (including those by customers who have paid for the stuff), there's not much else.

    I've no doubt that the new IBM box will be the fastest single-instance DB server out there (and we don't use AIX, so I'll count myself as unbiased). However, if you do make comparisons then make sure it's the same database, same version (near enough), and don't try to compare UDB/DB2 numbers with SQL Server or Oracle ones.

    Oh yes - and back to water cooling. The biggest problem with that for IBM is going to be that very few datacentres are set up for it. Unless IBM have a very slick short cut, that's going to be a problem (unless all they are doing is piping the water to a heat exchanger in the cabinet - which is not the best thing to do, but makes installation vastly easier).

  11. Anonymous Coward
    Thumb Up

    Amazing all the comments about water cooled

    For clarification:

    The p595 5GHz system with 64 cores is air cooled.

    The special p575 HPC system has 16 4.7GHz dual-core Power6 chips in a 2U system. The kind of customer who will buy a rack of 2U systems with 32 "POWERful" cores will not mind running water to the rack. The same customers already run water to door heat exchangers with blades today.

    Interesting to see Sun focus on the HPC system when they are basically a non-player in the HPC space (except for a few entries in the top500.org list with AMD).

  12. David Vasta

    IBM's POWER Platform

    IBM has in fact released the POWER platform. It's like the Intel platform, only faster and stronger, with much more history and reliability.

    The statement "It will ship this May with AIX and Linux" is mostly true.

    The Power platform comes with nothing on it, and you as the user can load a mix of AIX, Linux or i (formerly i5/OS or OS/400). The system is, and has been for the longest time, a virtualization haven. Before VMware was VMware, IBM was doing partitioning.

    You can run a mix of OSes on it, and before too long you will be able to run Intel-based Linux and, I would guess, Intel-based Windows as well.

    POWER as a platform is going to change a good many things. I don't think Sun is ready for what IBM has in store.

  13. Mark Pipes

    apple

    I want a P-6 Mac!!

    Apple may have had a good reason to go with (ugh) Intel, but they should have kept PowerPC available for special order!

    Imagine a Mac with a dozen or so of those 5GHz P-6 processors in.....

  14. Valdis Filks

    Evian for the datacenter - not good

    I am still open to being convinced that water cooling is better than air. It may be better, but not one post has explained the advantages yet.

    Why do we make such hot chips? A hot chip implies lots of power/electricity. Then we use electricity to drive a water chiller, then we drive an electric pump. If the chip design were not this hot in the first place, we might not need all this extra cooling. This is the prevention-rather-than-cure argument, and hence my argument that the design is flawed: it was made too hot in the first place, and then we have to get the plumbers in to cool it down.

    In these times of global warming we should be looking for ways to make cooler chips. Can a company use a water-cooled Power6 server, install new water cooling, do all the extra work, and then say that all these extra resources are good for the planet? Alternatively, install a low-power, multi-threaded CPU/server which uses air cooling.

    If people think that Power6 is good for heating buildings, then we can use the existing air-cooling systems, and the heat they take out of the datacenter, to heat the buildings as well. Just redirect the cooling mechanism.

    The new Power6 has a clock rate roughly 2x that of the Power5, but Power6 applications may not run 2x faster. The increase in GHz does not match the increase in performance. So why do we have such hot, high-clock-rate CPUs? Most of the industry is moving away from high-GHz, hot CPUs.
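
    (A toy model of why doubling the clock rarely doubles application speed: only the compute-bound share of runtime scales with frequency. The memory-stall fraction below is an illustrative assumption, not a measured Power5/Power6 figure.)

        # Amdahl-style sketch: the memory-bound fraction of runtime does not
        # speed up with core clock, so a 2x clock gives well under 2x overall.
        clock_speedup = 2.0   # Power6 vs Power5 clock ratio (approx, per the post)
        mem_fraction = 0.4    # assumed fraction of runtime stalled on memory
        app_speedup = 1.0 / ((1.0 - mem_fraction) / clock_speedup + mem_fraction)
        print(f"Overall application speedup: {app_speedup:.2f}x")  # ~1.43x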

    Most CPUs from Intel, Sun, AMD, and IBM's own Cell line do not need water cooling. I am a fan of IBM's Cell chip: excellent design. I am not a fan of water cooling. Strangely, Sun, Intel, AMD and IBM's Cell are all moving to more parallel/multi-threaded designs that run cool. What is the P6 doing?

    IBM for some reason made a variation of the P6 which requires water cooling; no answer here.

    Because we cool cars with water, does that make it good for CPUs? We actually use oil to do some of the cooling as well. The next thing, which has already been hinted at in this discussion, is that we will have several liquids in a computer to cool it. More components to me means more complexity, and with complexity come extra costs.

    Many comparisons here pit a new IBM server against old Sun servers; please compare new IBM servers to new Sun servers. My post was about the environment and the implications of all the extra resources required to install water cooling in a datacenter. Obviously we need to look at CPU design and the hot-chip GHz issues, but my interest was mainly in the water-cooling issues.

    Computer history repeats itself, and we may well see more water-cooled systems. But I do not think that is a good idea, for all the reasons already outlined.

    I have been to the US more times than I can remember; half my family are US citizens. I am an alien (see previous comments in The Reg about me) born in a very democratic country, living in another very democratic country where free speech is very much respected. I have seen and worked in hundreds of datacenters/rooms. I thought that we had seen the end of water cooling, just like ECL and bipolar chip design.

  15. Matt Bryant Silver badge
    Pirate

    RE: Evian for the datacenter - not good

    OK, let's look at current Sun servers - they all have fans! And all those fans involve additional design: not just of the fan itself, but of the added electric circuits to power them; and modern motherboards are laid out for optimal airflow to aid cooling, which means after the electronic designers there is another team doing the layout based on fluid-dynamics engineering. And then there are monitoring devices included to check what the fans are doing. Well, there are on non-Sun servers; I'm not so sure Sun servers are so good at the hardware-monitoring bit. Anyway, that all sounds like added complexity to me, so Valdiboy is talking through his rectal passage on that.

    Water cooling does not add massive complexity, and seeing as many datacenters are purpose-designed, adding piping up front is not a great task. I have seen it added to existing rooms with relative ease, as plumbers have been doing central heating for quite a while now and the tech is not that different (in fact, the first set of water-cooled racks I ever saw had been made by a company that had been making commercial fridges for fifty years!). It is simply the application of a known technology, using modern materials, to an existing problem; it is not some wide-eyed jump into the scientific unknown.

    And water cooling REDUCES electricity bills. From my own experience, water-cooled racks actually mean less aircon for the datacenter and lower electricity bills. You can jump around and hail it as "being greener" if you're a bandwagon humper, but businesses like it because it saves them on the electricity bills, and lower costs = higher profits. All the "greenness" is just window-dressing for the gullible. No wonder Sun are making so much green noise.
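
    (The electricity-bill claim is easy to sanity-check. A minimal sketch, with the IT load and the PUE overheads as assumed illustrative numbers, not measurements from any vendor or datacenter:)

        # Power Usage Effectiveness (PUE) = total facility power / IT power.
        # Lower cooling overhead shows up directly as a lower PUE.
        it_load_kw = 500.0       # assumed IT equipment draw
        pue_air_cooled = 2.0     # assumed: heavy room-level aircon
        pue_water_cooled = 1.5   # assumed: in-rack water cooling, less aircon
        saving_kw = it_load_kw * (pue_air_cooled - pue_water_cooled)
        print(f"Facility power saved: {saving_kw:.0f} kW")  # 250 kW at these numbers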

  16. Onionman
    Stop

    @Valdis Filks

    "...but not one post has explained the advantages yet..."

    This is the last refuge of the unconvertable. To suggest that there are no posts above giving advantages to water cooling is ridiculous. One advantage, stated clearly, is that water will carry away more heat per litre (and per kg) than air.
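
    (The physics behind that one-line advantage, as a minimal sketch; the constants are standard textbook values, not figures from the article:)

        # Heat absorbed per degree of temperature rise: Q = m * c * dT.
        # Compare water and air per kilogram and per unit volume.
        c_water, rho_water = 4186.0, 1000.0  # J/(kg*K), kg/m^3
        c_air, rho_air = 1005.0, 1.2         # J/(kg*K), kg/m^3 near sea level
        per_kg = c_water / c_air                               # ~4.2x per kg
        per_litre = (c_water * rho_water) / (c_air * rho_air)  # ~3500x per unit volume
        print(f"Water vs air: {per_kg:.1f}x per kg, {per_litre:.0f}x per litre")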

    This style is not uncommon in Internet debates.

    poster 1: "I think x is rubbish"

    posters 2,3,4,...100: "Ah, but there are these reasons your view might be faulty"

    poster 1: "I've not seen a single reason to change my views"

    repeat, ad nauseam.

    If you're interested in searching for the truth, Valdis, try READING the responses and see if there just might be some truth in them.

    BTW, I speak as someone with no interest whatsoever in the facts of this case. I merely note a style of response that irritates every time.

    O

  17. Valdis Filks

    A good plumber is hard to find.

    My issue with water cooling is with the whole system: the server, the maintenance of the server, the redesign of the computer room, the outages required to do all of this, etc. From a heat-transfer perspective, water may remove more heat than air. But you need to get the water all the way to the server and plumb it all in. I like plumbing, but this is expensive and difficult.

    How about not causing the problem in the first place, with cooler CPUs?

    I think I mentioned and gave examples of liquids that are good for cooling, and of expensive implementations thereof, e.g. Magnox-cooled nuclear reactors. Air is not good for cooling nuclear reactors; I think I made that clear.

    The cost of ripping up floors and installing pipes is more than that of not ripping up floors and installing pipes.

    If I want to move an air-cooled server, I unplug the electrics, network and SAN, move it, then reconnect.

    If I want to move a water-cooled server: call the plumber, book a weekend, possibly shut down the whole datacenter, move the server, lay new pipes, pressure-test the new pipes, reconnect the electrics. Do I need this complexity and these extra constraints?

    Water is already used in the chillers on the periphery of many computer rooms, and you can already use this hot water to heat your office as a green by-product. Do we want a water-piping grid in addition to all the other infrastructure/cabling in a computer room? Every computer room is different and needs unique/bespoke plumbing to install water cooling to an individual server. For the sake of one hot Power6 server, am I going to redesign the whole datacenter?

    If, in a couple of years' time, the whole industry has moved to cooler multi-threaded CPUs/servers, will your investment in water pipes all over the computer room be a good thing? Or will the computer industry employ legions of plumbers to rip out the water pipes they installed 24 months ago?

    I may be totally wrong, and water cooling within servers may become more popular and economical; at the moment I find it difficult to believe.

  18. Anonymous Coward
    Anonymous Coward

    more power Igor

    Water cooling for servers is never going to be popular or economical, for all the reasons listed. Water and volts don't mix; even slight accidents are catastrophes. Anything other than HPC and specialist sites will run a mile from the hassle.

    IBM don't say how much power these beasts consume or how much heat they put out. Kilowatts and BTU/hour please (even my Electrisave can calculate kg/hour of CO2), and then we'll decide whether it's "green" or not. In the current climate (ouch), these figures are as important as how many tedious SAP users it can support. Power is power and heat is heat; quite how you shift it from the chips to outside the datacenter is moot. Sun get this; I think IBM are too busy outsourcing and consulting to care any more.
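
    (For reference, converting between those figures is one line of arithmetic. A minimal sketch; the rack power draw and the grid CO2 intensity are illustrative assumptions, not IBM numbers:)

        # Standard conversions: 1 kW = 3412.14 BTU/hr; CO2 scales with kWh drawn.
        power_kw = 100.0                  # assumed rack draw in kilowatts
        heat_btu_hr = power_kw * 3412.14  # heat output in BTU/hr (same energy)
        co2_kg_hr = power_kw * 0.5        # assuming ~0.5 kg CO2 per kWh of grid power
        print(f"{power_kw:.0f} kW = {heat_btu_hr:,.0f} BTU/hr, ~{co2_kg_hr:.0f} kg CO2/hr")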

  19. Maurice Cloutier

    Need to solve both sides of the equation

    Valdis is right when he says we have to reduce the power requirements of CPUs, etc., but his dismissal of water cooling is like preventing roads from being built because someone may actually want to drive on them.

    Sun tries to solve both sides of the equation, with lower-energy servers and cooling technologies. Sun has the CoolThreads CPUs (Niagara & Victoria Falls), but also the Sun Modular Datacenter (widely known as Project Blackbox), which uses water cooling and efficient packaging to reduce cooling costs by over 40%. This saving is independent of payload type or vendor; however, if you load the Sun MD with energy-efficient systems, like the Niagara servers, the savings are magnified.

  20. kain preacher

    hmm nice to see

    that this didn't turn into a bun fight due to my typo.

    Yes, that's my straitjacket, the one that smells like sugar.

  21. David W Johnson
    IT Angle

    Please reread the article!!

    As already stated by Anonymous Coward, the p595 5GHz system with 64 cores is air cooled. The only pSeries system (or POWER, or whatever IBM is calling it this month) that is water cooled is the 575 unit. And just for the record, it was already water cooled when it had POWER5+ chips. It is only designed for certain customers.

    Every other IBM POWER system is air cooled!!

    Regarding the p595, besides the cost... ouch... I for one would like to see how it performs against an HP Superdome. It would also be interesting to see the numbers against Sun's M9000, even though I suspect it would smack the M9000.

  22. Pierre
    Boffin

    Water cooling and freedom of speech

    Water cooling is much more efficient than air cooling; this is a fact, and has been demonstrated here and in many other places. It is also much simpler from a general point of view, as air cooling implies the need to clean (filter), move and chill enormous volumes of air. It adds overall complexity, as the very local "simplification" in rack design implies open systems, which need to be placed in "white rooms" to avoid contamination by airborne particles. The overall structural cost is necessarily much higher than for a closed water circuit.

    Now I'm not saying that we shouldn't develop and favor non-heating (and power-saving) chips. But even those could benefit from well-thought-out water cooling. I'm especially thinking about desktops and laptops operating in non-controlled atmospheres (servers too, but who is stupid enough to keep their servers outside a white room? Oh... sorry); all this dust accumulating everywhere is a real problem. Watts and water DO mix much better than dust and air cooling. There is no reason why a well-designed water-cooling system would be a problem. The issue is "macro-technical", and quite easy to fix (besides, Polish plumbers come cheap these days). "Fire and powder don't mix", yet the "mixture" is widely used, from fireworks to space-rocket propulsion.

    In the lab, we're happily mixing pressurized gases, water, heavy watts, very delicate electronics, very toxic compounds and radioactive isotopes, all in a place that would make a bachelor's kitchen look tidy. All clear, sir: no safety incident or leak reported in the past few years. We did have problems, though: the computer monitoring the whole shebang froze in the middle of an important experiment because the air-cooled processor overheated (dust accumulation, in spite of the filter). And we had to change the air-cooled power supply a couple of times (dust accumulation, in spite of the filter). Gimme water cooling, please.

    Besides, for applications that DO need heavy single-threaded processing power (yes, there are such things; my heavier computational needs are not easily split into parallel processes, though basic science might well be an exception), faster single-thread chips are a great thing. And air-cooling them would be an astronomical waste of energy.

    To sum up my thoughts: low-power chips are the best, when they do the trick. But water-cooling them would still be even better. Reducing the issue to "water and electricity don't mix" is a silly attempt at conflating basic home-safety advice with highly technical issues.

    As for freedom of speech, and my mentioning the US: freedom of speech is respected there indeed - till you start talking or writing about Al Qaeda, or about file sharing, or about kicking your prof's butt, even if you don't disclose the results of your lucubrations. Which is the kind of restriction that defines the LACK of free speech (see the last few court decisions about overall harmless dudes emailing bad poetry about Bin Laden, or the kid grounded for an undisclosed phantasmatic "hit list"). I could have mentioned the UK too: goth teenage girls are really threatening these days! As for France, mother of the "Declaration des Droits de l'Homme", I guess that holding a sign reading "Niko, salaud, le peuple aura ta peau" ("Niko, you bastard, the people will have your hide") would land you directly behind bars, with the associated beating. Poor, poor western world.

    Geek icon just because.

  23. Valdis Filks

    Will we have an iCooler next ?

    The SunMD is a standard design with water cooling built in at the factory; every SunMD is the same. No need for any local plumbers to change anything inside when a customer receives it: just plug it into the power and external cooling pipes. When you buy a Sun Modular Datacenter (aka Blackbox), you do not have to change anything. Put a water-cooled server in a computer room and, as explained many times, you take on lots of extra work and ongoing complexity.

    As far as I know, all servers in a SunMD are air cooled; no servers are water cooled. But I am sure that if someone paid Sun enough money we would be able to connect a water-cooled P6 in the SunMD to the pipes. I would advise against this; guess why: it adds complexity.

    The SunMD is water cooling outside the computer chassis/enclosure. The Power6 water-cooled server is water cooling inside the chassis/enclosure.

    My point is about the added complexity which water causes if you have to put it into an existing datacenter.

    Now, if people want to make hot chips or more elegant designs, then we technologists have a challenge: produce a coolant that is safer to mix with electrical devices, and those nice little towers that we put on CPUs can become a selling point. Let's call it the iCooler. I remember that some of the IBM, Hitachi or Amdahl mainframes had those elegant circular tower heatsinks.

    Well, it was pretty, but complex. Now, maybe many overclockers like to cool their PCs at home with these kinds of things. The modders always like new gadgets, and to spend time tweaking their systems. Commercial datacenters do not.

    NB: I built my latest PC with the criterion of least power usage. It is based on an AMD BE-2540 dual-core CPU; on dollars per performance per watt it was the most efficient. It may not be the fastest, but I was being Mr Sensible. No water anywhere near it.

    In commercial datacenters I do not think we can overclock, mod and customize our servers with lights, water coolant towers etc. But maybe the first person to do so could make the datacenter into a work of art and a light show.

    Me no iModder.

  24. Anonymous Coward
    Paris Hilton

    Re: IBM's POWER Platform

    David Vasta said "You can run a mix of OSes on it and before too long you will be able to run Intel based Linux "

    Maybe it can't run Linux/x86 yet - but you can certainly run Linux *programs* - see http://www-03.ibm.com/systems/power/software/virtualization/editions/lx86/

    And before the old fogies like me chip up - yes I know this is strangely similar to the trick DEC did with WinNT on the Alpha.

    Getting back to the P6 kit - what's the big deal over the water cooling? AFAIK it's only the '575 that's water-cooled, although you can add a radiator door to IBM racks. OK, given my limited experience with overclocking a PC, you still can't be cavalier about water + volts, but then again it's deionized water, so it's also not water-leak = instant-death either.

    Not sure about the greenness of this - OK, you get "better" cooling than air (otherwise no overclockers would bother with H2O) and you get a side product of warm/hot water (swimming pool, anyone?!). On the other hand, you can definitely have a lot fewer fans = better reliability (fewer components), and each fan uses/wastes power itself. I'm also guessing that water cooling makes it possible to pack these hot-running systems more densely, saving a little on floor space.

    Got to say - I'd love to see how many virtualized environments a "full house" p595/p6 could support. (sheesh, I sound like a total nerd!)

    Apologies if I sound like an IBMer - not my intention, just so nice to see someone continuing to push the boundaries...

    (Paris because we're talking about hot bods here)
