
HP targets supercomputers with Project Apollo

HP is imprisoning powerful Intel Xeons inside water-cooled cages ... for science! The company announced on Monday that it has developed two new classes of server for high-performance computing workloads as it prepares to go against traditional supercomputer makers like Cray, Fujitsu, IBM, SGI, and others for the lucrative high- …

COMMENTS

This topic is closed for new posts.



What OS for Apollo

It would have been considerate if the article had indicated what operating system (OS) HP was running on these Apollo supercomputers.


Re: What OS for Apollo

Well the last one out was Domain/OS SR10.4 or 10.5, IIRC?

BTW there were rumours that HP had DomainOS running on the 9000/700 series but chose not to release it....


Re: What OS for Apollo

At last - a use for my old sysadmin skills!

Strange how numbering goes - I'm guessing that this new Apollo 8000 might have a little more grunt than the old Apollo 10000...


Re: What OS for Apollo

Linux of course


Re: What OS for Apollo

The logic today, as Linux runs 96% of the Top500 supercomputers, is that the OS is mentioned only if it is not Linux. The top 50 are all Linux. The next "big" OS is Unix.

http://www.top500.org/

Under Statistics/List Statistics you find the graphs.
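For what it's worth, the quoted share is easy to sanity-check against the downloadable list data; a minimal sketch with illustrative numbers standing in for the real spreadsheet rows:

```python
# Rough sketch of tallying OS-family share from Top500-style list data.
# The sample entries are illustrative, not real rows from top500.org.
from collections import Counter

sample_systems = (
    ["Linux"] * 96 + ["Unix"] * 3 + ["Mixed"] * 1  # roughly the quoted split
)

share = Counter(sample_systems)
total = len(sample_systems)
for os_family, count in share.most_common():
    print(f"{os_family}: {100 * count / total:.0f}%")
# → Linux: 96%, Unix: 3%, Mixed: 1%
```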

@Mark Honman you forgot to use the "Joke Alert" icon.


Re: What OS for Apollo

That doesn't make any sense! There's an anonymous coward here that assures us all on a regular basis that Windows dominates the server market and is installed on way more servers than Linux could dream of! Surely this would be reflected in the supercomputer racket with Microsoft's overwhelming presence?

Otherwise Anonymous Cowards on the internet might not be telling the truth about everything! Egads!


Ah finally - here is why performance/watt matters

Now we see why performance/watt matters - the trade-off between compute density and the system's ability not to fail. Pretty straightforward. It continues to highlight the general weakness of the x86 chipset and the lack of technology innovation among x86 vendors in general - HP in this case. IBM had its Power7 775 supercomputer in 2011, which packed 12 x 256-core 775 servers into a rack - that is 3,072 cores - and the solution itself could scale up to 512K cores. IBM seemingly left the market for these very expensive demonstrations of chest-thumping - thank goodness. I'd rather see them invest in training and marketing efforts to educate more IT shops and businesses on better options than x86. But, it makes me wonder why HP with all of their own financial troubles would want to jump into these shark infested waters.
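The density figures quoted above check out arithmetically (taking "512K" to mean 512 × 1024 cores, which is an assumption about the commenter's notation):

```python
# Back-of-the-envelope check of the Power7 775 density figures quoted above.
servers_per_rack = 12
cores_per_server = 256
cores_per_rack = servers_per_rack * cores_per_server
print(cores_per_rack)            # → 3072, matching the quoted figure

max_cores = 512 * 1024           # "512K cores", assuming binary K
racks_at_full_scale = max_cores / cores_per_rack
print(round(racks_at_full_scale))  # → 171 racks, roughly
```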


Re: Ah finally - here is why performance/watt matters

"it makes me wonder why HP with all of their own financial troubles would want to jump into these shark infested waters."

That'll be Meg!

Anonymous Coward

Re: Ah finally - here is why performance/watt matters

The 9125-F2C Power7 775 servers are actually very nice machines, now that most of the bugs have been worked out. The density cannot really be appreciated until you see one. You look inside a frame and a drawer and wonder whether there is more volume occupied by components and water than there is air. Each frame weighs in at over three tonnes without water, and the overall footprint is tiny compared with previous HPCs installed here.

The systems are built with very large amounts of resilience, so the clusters do not fail as a result of hardware problems. About the largest outage that you get is when you have to replace a drawer power distribution assembly (there are two per drawer, and they are redundant), but you cannot replace one without powering the drawer down. This appears to be a design defect, as it was intended that you should be able to do it, but the current procedure is to power down the drawer. Something to do with initialising the global high precision clock in the interconnect. But you can drain the work from the drawer and power it down to carry out the replacement, and the rest of the cluster continues quite happily.
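The replacement procedure described above - drain the work, power the drawer down, swap the part, power back up while the rest of the cluster runs on - can be sketched as a toy model. Every name here is invented for illustration; none corresponds to real IBM or xCAT tooling:

```python
# Toy model of the drawer PDA replacement procedure described above.
# Function and field names are hypothetical, not real IBM/xCAT commands.

def replace_pda(cluster, drawer_id):
    drawer = cluster[drawer_id]
    drawer["jobs"] = []            # 1. drain running work off the drawer
    drawer["powered"] = False      # 2. power the whole drawer down
    drawer["pdas"] = ["ok", "ok"]  # 3. swap the failed PDA
    drawer["powered"] = True       # 4. power up; interconnect clock re-inits
    return cluster                 # the rest of the cluster never stopped

cluster = {
    "drawer-0": {"jobs": ["job-a"], "powered": True, "pdas": ["ok", "failed"]},
    "drawer-1": {"jobs": ["job-b"], "powered": True, "pdas": ["ok", "ok"]},
}
replace_pda(cluster, "drawer-0")
print(cluster["drawer-0"]["pdas"])  # → ['ok', 'ok']
print(cluster["drawer-1"]["jobs"])  # → ['job-b'], untouched throughout
```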

The main problems I've seen with these systems (at least those running AIX) have been with the software stack, which has taken some time to mature, particularly GPFS Native RAID. I've seen several instances where GPFS problems took the cluster out. Not recently, however.

There are still some rough edges, particularly with the implementation of xCAT (eXtreme Cluster Administration Tool! - it's nothing much more than a hierarchical image deployment system), but as you say, IBM is not selling many more of these systems, so software development has largely stopped, fixes mainly being driven from common software with other Power servers. They're not dead yet though, and I'm sure IBM would be able and happy to build one for you if you have the money. The Blue Waters/University of Illinois deployment was not the only instance of these systems. There are others around.

The 775 systems were a response to the DARPA High Productivity Computing Systems initiative, which received funding from the US government, and was dubbed PERCS (Productive, Easy-to-use, Reliable Computing System). IBM succeeded in the first and third capabilities, but maybe didn't quite achieve "Easy-to-use". Close, but no biscuit.

It's ironic that the systems likely to replace the 775s I am working with will be far less dense, and will occupy hugely more floor space.

OK, must stop proselytizing for a not-quite-dead system, and get back to work.

Anonymous Coward

Their next press release

Will explain how they have a large pile of manure, a catapult and a fan, and what will happen next.

Anonymous Coward

"Hello Telecity, I'd like some colo space please.

Just one rack, yes.

Internet connectivity? Yes.

Power draw? Oh, about 80kW.

Hello? Hello? Is anyone there?"
