IBM recorded 1.2 million TPC-C transactions on a Power 780 server using a massive 35TB of flash memory, in the vendor's latest storage burn-up. Big Blue's 9179 Power 780 server was given 3.5TB of PCIe-connected solid state drive (SSD) NAND, identified by a FC4367 part number. We understand that this was multi-level-cell (MLC) …
"But we will surely see systems with much more flash breaking a 10 million TPC-C barrier by the end of 2011"
Well, most likely we'll see an upgrade of the POWER6 Power 595 this year, I would presume. And if it just keeps its socket count, then getting 4x the number of faster POWER7 cores would take it well beyond the 10 million mark, as the Power 595 already does 6 million tpmC.
So IMHO the 2011 number is a bit pessimistic.
What's a "4 socket P7 processor"?
"The server had two 4-socket Power7 processors"
Not sure what that means. Did you mean two 4-core Power7 processors? As I understand it, each P7 is a single-socket processor. There are also packages that put multiple processors in the same socket, but not two processors across four sockets.
Power 7 sockets and cores
I agree with the coward that the terminology is a bit vague. "Two four-socket Power 7 processors" is mixing metaphors.
Most Power 7 'sockets' have 6-8 cores. IBM 770 and 780 servers have four sockets in each 4U 'block'. A given 770 or 780 can have up to four of these blocks, plus expansion I/O drawers if needed.
On top of all that, sockets on 780s can be configured to turn off half their cores and raise the clock rate on the remaining ones (not all sockets have to do it at once, either).
Probably the writer meant that the 780 had two blocks, each with four sockets, perhaps with raised clock rates too. That is about half of the maximum configuration IBM permits for the 780, so it is a curious benchmark to publish.
Well, perhaps it's easier just to look at the disclosure report:
It's an 8-socket box, with 2 sockets populated, each holding a 4-core 4.14 GHz POWER7.
Re: Power7 sockets and cores
> IBM 770 and 780 servers have four sockets in each 4u 'block'.
They have 2 sockets in each 4U (CEC). This makes 16 cores max per CEC, 64 cores per 4 CECs (max system config).
> On top of all that, sockets on 780s can be configured to turn off half their cores and raise the clock rate on the remaining ones (not all sockets have to do it at once, either).
All CPUs work in either MaxCore (all cores enabled) or TurboCore (half cores disabled) at once.
> Probably, the writer's language meant that the 780 had two-blocks. Each block with four sockets.
It was still only one CEC, 2 sockets (16 cores), but working in TurboCore mode, so only 8 cores were available. That is 1/4 of the maximum configuration for the 780.
For a max system in TurboCore mode (32 cores at 4.1 GHz) the result could be about 4,800,000 (assuming theoretical linear scalability), or in MaxCore (64 cores at 3.8 GHz) over 6 million tpmC, maybe even 7 million tpmC, which would be more than the current TPC-C record for a single system.
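The extrapolation above can be sketched from the figures quoted in this thread (1.2M tpmC on the 8-core quarter config). This is a back-of-the-envelope estimate assuming linear scaling, which is a ceiling, not a prediction; real TPC-C scaling is sub-linear, hence the poster's lower 6-7 million guess for MaxCore:

```python
# Rough extrapolation from the published quarter-config result.
# Inputs are the figures quoted in this thread; everything derived
# from them assumes (optimistically) linear scaling with cores/clock.

measured_tpmc = 1_200_000   # published result, TurboCore quarter config
measured_cores = 8
per_core = measured_tpmc / measured_cores          # 150,000 tpmC per core

# Max TurboCore config: 32 cores at roughly the same clock.
turbo_max = per_core * 32                          # ~4.8M tpmC ceiling

# Max MaxCore config: 64 cores, but at a lower 3.8 GHz clock.
maxcore_max = per_core * 64 * (3.8 / 4.14)         # ~8.8M tpmC ceiling

print(f"per core: {per_core:,.0f} tpmC")
print(f"TurboCore max, linear: {turbo_max:,.0f} tpmC")
print(f"MaxCore max, linear + clock-scaled: {maxcore_max:,.0f} tpmC")
```

The linear MaxCore ceiling comes out near 8.8M, so quoting 6-7M after scaling losses is plausible.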
More mixed terminology...
Ok. I went to the TPC website. IBM is doing a one-server, 8 socket, 4 core/socket "turbo" 4.14 GHz server. Total is 1 server, 32 cores, 128 threads, score 1.2 million.
That is a max-configured version of 780, with half of its cores turned off and raising the clock-rate.
The Sun system to which the author compares it is a 12-server, 384-core, 3,072-thread cluster. Its score was 7.6 million.
Comparing clusters to single servers only makes sense when the cluster is cheaper. The 12-server cluster configuration is hideously more expensive: its TPC/$ is 342% of IBM's, and maintaining a 12-node, 384-core cluster is stupefying compared to a single 32-core system.
The Dell machine cited has less than 9% of the throughput, so its TPC/$ was higher.
I like to see clearly written stories on this kind of topic, but this one was not-so-clear.
"Ok. I went to the TPC website. IBM is doing a one-server, 8 socket, 4 core/socket "turbo" 4.14 GHz server. Total is 1 server, 32 cores, 128 threads, score 1.2 million."
No, the system is a drawer-based system, and here there is only one drawer. Read the config.
8 processor activations (that is feature code 5469) doesn't mean 8 processors in the Intel/SPARC sense, but 8 cores. So this is an 8-core, 512GB system.
"That is a max-configured version of 780, with half of its cores turned off and raising the clock-rate."
No, and you can also see it from the number of software licenses.
And honestly, a 16-core POWER6 system made 1.6 million tpmC; do you really think a 64-core POWER7 system would only do 1.2 million tpmC?
I don't get it. Why do you say that Sun is expensive? Maybe you didn't know that IBM's former TPC-C record holder, the p595 Unix server, cost $35 million USD list price? One. Unix. Server.
Or, again, maybe you knew, but chose not to reveal that sum.
Re: More mixed terminology...
> Total is 1 server, 32 cores, 128 threads, score 1.2 million.
1.2 million score is for quarter configuration, 8 cores, not 32.
If you want to compare to latest Sun record, Sun config scores about 20k tpmC per core, IBM's is 150k tpmC per core. The difference is almost unbelievable...
The top Sun box???
Am I blind or is it a cluster made from 12 boxes?
i.e. the top-of-the-line POWER7-based 595 replacement will easily be the new king of the hill when released...
and then some..
If, as rumored, it scales to 256 cores, then a fully loaded system would be 25M+ tpmC at 100K tpmC per core. But let's see how it scales.
Let's be more specific
"The result was not the fastest recorded; far from it, a Sun SPARC box recorded 7.65 million TPC-Cs last year. "
That was 12, yes 12, Sun SPARC boxes in a cluster. If you use perpetual licenses and regular maintenance, the price is $20 MILLION for 3 years.
IBM was able to do 1.2M with two chips in a p780, which can scale to 8 chips, a.k.a. sockets.
SPARC CMT is dead; as the previous poster's comparison showed, the per-core performance gap seems unbelievable, yet it is real.
Compare the SAP 2-tier p780 result to the Oracle/Sun/Fujitsu M9000 and you see there is an 8X performance advantage.
SPARC64 is dead, and Oracle is not even bothering to sell it; all they care about is trying to make Exadata V2 successful, as V1 was a disaster.
Randy Dandy Sandy
Still all just flash, with smoke and mirrors
TPC-C - still just smoke and mirrors and now they have added flash to the mix.
This is one of the things that gets me about the computer industry, and I am not picking on IBM alone. TPC-C is just BS, we all know that: it is a configuration nobody would ever buy, running a set of tests that have no relation to real-world circumstances, and they keep on rolling it out. It is, by the way, hellishly expensive to run one of those: get the kit, software and expertise set up to run it, and then get it certified. All meaningless mumbo jumbo. And remember, the cost of that test is built into the kit you buy later.
But then, why am I surprised - this is a world where we follow dumb and dumber celebrities to see what they did last weekend.