
'Double Stuf' Power7+ sockets: Yummy, but so is overclocking

IBM's first Power7+ processor systems are expected to launch on October 3, but El Reg has a modest proposal for Big Blue as it prepares its rolling rollout: take a page from the Oreo Cookies cookbook, and "Double Stuf" 'em up and down the line. We've already given you the low-down on the Power7+ processors for IBM's Power …

COMMENTS

This topic is closed for new posts.
Headmaster

Hmm... it used to be a good idea, TPM.

Your idea of building a cloud of POWER 775s is actually one that we examined where I work, back in 2006; then we had the POWER 575 as our intended platform for our shared environment.

Today we use POWER 770s. The problem with using the HPC nodes for commercial workloads is the RAM-per-core numbers: the POWER 775 has 256 cores to 1 TB of memory, which is 4 GB of memory per core. That is way too little for today's commercial applications, which is what UNIX systems run today.

Now for something like a SAP system and databases (especially if you use DB compression), you will normally have somewhere between 8-16 GB of RAM per core, plus RAM for the OS and virtualization. Furthermore, because you want to drive up utilization on your platform, you will normally overcommit your processor resources, usually by a factor of 2-4.

If you look at individual virtual machines that have been sized for peak workloads, their average utilization is usually somewhere in the 10-25% range.

So the actual physical amount of RAM you need per core is usually the overcommit factor times the non-overcommitted amount of RAM per core: somewhere between 2-4 times 8-16+ GB, i.e. 16-64 GB of RAM per core. So on a machine like the POWER 770-MMC with 64 cores you would need 64 times 16-64 GB of RAM, plus overhead. Or, realistically, something like 2-4 TB of RAM. For a machine like the 775 this would be in the 8-16 TB range.
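That back-of-envelope sizing can be sketched in a few lines. The overcommit factors and per-virtual-core RAM figures below are just the ranges quoted above, not measured values:

```python
def ram_per_physical_core_gb(overcommit, ram_per_vcore_gb):
    """Physical RAM needed per core = overcommit factor x RAM per virtual core."""
    return overcommit * ram_per_vcore_gb

def machine_ram_tb(cores, overcommit, ram_per_vcore_gb):
    """Total RAM for a machine in TB, ignoring OS/hypervisor overhead."""
    return cores * ram_per_physical_core_gb(overcommit, ram_per_vcore_gb) / 1024

# Ranges quoted above: 2-4x overcommit, 8-16 GB per virtual core.
for name, cores in [("POWER 770-MMC", 64), ("POWER 775", 256)]:
    low = machine_ram_tb(cores, 2, 8)    # conservative end of both ranges
    high = machine_ram_tb(cores, 4, 16)  # aggressive end of both ranges
    print(f"{name} ({cores} cores): {low:.0f}-{high:.0f} TB of RAM")
```

For the 64-core box the raw range comes out at 1-4 TB (hence the "realistic" 2-4 TB once overhead is added), versus the 1 TB the 775's HPC configuration pairs with 256 cores.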

So the HPC nodes aren't really suitable for commercial workloads these days, IMHO. Sure, there is memory compression, but that won't give you enough.

Now as for the DCM modules, I would guess that these would be 2x6-core modules, not full 2x8-core modules, but let's see what is announced. Next week, is it?

Thanx for a good article btw.

// Jesper

Anonymous Coward

Power 7 775 systems seen in the wild!

The European Centre for Medium-Range Weather Forecasts is a long way into the install of two large Power 775 (aka PERCS or Blue Waters-style) clusters at the moment. The UK Met Office has two of these clusters installed and in operation, generating the primary weather forecast data that they provide, plus another, smaller one providing HPC resource to NERC. I understand that the Met Office have just turned off their previous Power 6 servers for the last time.

It's no big secret, all of these systems appeared on the last Top 500 list.

There are also, I understand, several more scattered around the world.

So the PERCS HPC model is not dead, just not deployed as widely as IBM originally intended.

BTW, the way that these systems are LPARed is very different from other models in the Power range, and there is a definite limit on the size of single OS images deployed on this infrastructure: because of cache-coherency issues between processors in different QCMs, the largest single system is 1/8th of a compute drawer, giving 8 virtual systems per drawer. This means that they may not really be as suitable for some workloads as might be thought.

Also, the only management infrastructure you can use for them is xCAT. IBM Director cannot manage these systems, so commercial customers may be uncomfortable with managing them, and I do not think that there is any way that IBM i could be installed on them.


This post has been deleted by its author

WTF?

Power7+ DCM looks to have poor scalability

TPM, you missed one big issue regarding IBM's Power7+ DCM performance.

Looking at IBM's Power7+ performance (estimation) gains slide, if you compare Power7+ SCM gains versus Power7+ DCM gains, there's less than 40% improvement in performance (Power7+ DCM vs Power7+ SCM) for what is essentially a doubling of cores/cache/threads per socket. Clearly, the DCM is bandwidth-starved, and so CPU scalability is impacted.

Estimating from the bar chart, I see ~15% faster OLTP, ~27% faster ERP, ~26% faster Integer and ~39% faster Java. 15% to 39% scaling for a doubling-up of cores/CPUs doesn't look like a wise investment unless it's priced aggressively. And certainly not for running software based on per-core licensing.
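To make the per-core licensing point explicit: if a DCM doubles the cores in a socket but socket-level throughput only rises by these amounts, per-core performance falls to (1 + gain) / 2 of the SCM figure. A quick sketch, using the chart estimates quoted above:

```python
# Estimated socket-level gains of Power7+ DCM over SCM, read off the slide.
gains = {"OLTP": 0.15, "ERP": 0.27, "Integer": 0.26, "Java": 0.39}

for workload, gain in gains.items():
    # Twice the cores share the (1 + gain) total, so each core does half.
    per_core = (1 + gain) / 2
    print(f"{workload}: DCM per-core at roughly {per_core:.1%} of SCM per-core")
```

So under these estimates each DCM core delivers only around 57-70% of an SCM core's performance, which is exactly what hurts when software is licensed per core.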

Headmaster

Re: Power7+ DCM looks to have poor scalability

@Phil 4.

You are presuming that a DCM module will have double the number of cores compared to an SCM. It seems much more likely that a DCM module will house two six-core chips, clocked a little lower than the SCM modules.

And from a thermal-envelope perspective it also sounds a bit more realistic. I doubt that a process shrink and some more dynamic power management will be enough to double the number of cores per socket without additional cooling measures.

// Jesper

Anonymous Coward

Comparing POWER 7+ SCM & DCM to SPARC Critical-Thread

Will Sun/Oracle's approach of moving from massive throughput to software-driven, run-time-enhanced single-thread performance (released a year ago) be less preferred by the market than IBM's purchase-driven choice of single- or double-stuffed sockets?

They both seem to get to a similar place via different paths... while Oracle's year-old solution may appear to be more mature than IBM's proposed solution. http://netmgt.blogspot.com/2012/09/power-double-stuff-vs-sparc-critical.html

It seems giving customers the "choice" (T- or M-class SPARC) basically put Sun in a position where acquisition by a larger company such as Oracle was inevitable. Why is IBM taking POWER down the same road that Sun took years ago?

Anonymous Coward

Re: Comparing POWER 7+ SCM & DCM to SPARC Critical-Thread

IBM has had massive single-thread performance for years, with their ILP-rich threads and mountains of cache. Oracle/Sun basically threw in critical thread as a quick fix for their remaining four SPARC customers, who were telling them that SPARC is horrible for OLTP... so they wrote an API to remove the hard-coded threads and add an auto-negotiate function. If anyone is running a single-thread workload on SPARC, they basically have a one-thread core, as the critical thread is always going to be sucking up all of the cache and clock speed... all the way to 3 GHz.

Thumb Down

Re: Comparing POWER 7+ SCM & DCM to SPARC Critical-Thread

Eh? Get to a similar place? Oracle's year-old solution may appear to be more mature?

Troll ?

// Jesper
