When IBM started rolling out 5.0GHz versions of Power6, you knew it was only a matter of time before the vendor tried to usurp HP as the transaction performance king. And now it has done it. IBM this week cleared a new TPC-C score that certifies its Power 595 server as the big daddy of transactions. Running on 32 of the dual- …
Maybe this is a sign for HP to actually revamp their PA-RISC chipset instead of selling off to the Itanic???
The only HP server I'd ever buy is an HP 9000, and they're about to kill that line. Oh well, at least IBM still keeps on churning newer stuff...
Simple math says:
That the Power architecture is better than the Itanium junk. They don't even compare any x86 stuff.
Makes you wonder why Apple went to the x86 architecture?
Stunning... I want one!
I see it's running DB2 9.5... We don't see a lot of that in the antipodes...
I see it's running AIX as well.
It's unusual but not unheard of... It's a bit like watching V8s race. I guess what wins on Sundays sells on Mondays.
I will leave it for someone else to make the Vista comment...
Interesting but meaningless
TPC-C has become the 0 to 60 of benchmarks, interesting and entertaining for the vendors as a test of tuning skills and extravagant hardware configurations, but absolutely useless as a real world performance metric.
Why does simple math dictate that POWER is better than Itanium? And as for Apple moving to Intel, it was because PowerPC was innovating too slowly and Apple was being eclipsed by PCs sporting fresh AMD64 and Core technologies. So they decided the best way to contest the PC was to use the PC's own hardware...*their* way.
I think they made it quite clear that they want someone who is producing little laptop chips, not database monsters.
@Herby: Why the x86 for Apple
Simply put, Intel's adoption of the Core architecture (i.e., the upgraded Pentium-M from their mobile business!) finally made Intel a serious player in watts per processing power. Apple realized that Motorola just could not compete in that space, and frankly the PowerPC architecture was never really aimed at mobile platforms. Considering that Apple sells a very high percentage of laptops, and even their desktop solutions are known for attempting to be quiet and unobtrusive, power efficiency was just too important for them to pass up, even at the cost of re-writing and porting all of their software and facilitating 3rd parties to do so.
And, frankly, the adoption of x86 has allowed Apple to sell to a MAJOR new audience - people who dual-boot Windows/Vista or run Parallels. Nothing but an Intel processor would make that really feasible (full software emulation of the microcode is just too slow), and it has been remarkable how popular it has become.
I thought Jobs was nuts when he announced it... but I am in awe of how well it has worked for Apple as a business strategy. If I didn't love my water-cooled homebuilt x86 so much, I would replace it with an Apple desktop in a second...
@Interesting but meaningless
Not only is the benchmark interesting but meaningless, comparing one Unix to another in this manner ignores other alternatives - mainframe, X86 architectures, etc.
The price-performance will always be better on X86 architectures, due to the volume and thus pricing of these products. Moving to AIX/Power from HP_UX/Itanium is really like jumping from the fire into the frying pan.
What's Itanium got to do with x86? If I remember correctly you have to install the IA-64 builds of the OS. And who even cares anyhow? Something better will be out next week, probably by Intel, then IBM, then Intel again, then some obscure, highly invested Chinese brand will wipe the floor with them both.
Apples to Apples
The benchmark does not represent real customer workloads, but it does show the comparison between systems. Too bad Sun does not do benchmarks anymore; now that would be a funny comparison. If Power to Itanic is a 3 to 1 comparison, then Power to SPARC must be a 6 to 1 comparison.
TPC-C is an outdated, 16-year-old benchmark - where are the TPC-E results?
TPC-C was developed and introduced back in 1992, before the internet even took form and when SMP systems were in their infancy. It is now EOLed and replaced with TPC-E. Why is it that companies like IBM and HP still run this benchmark when it doesn't represent any real workload today? 4TB of RAM is what got IBM these results, not 5GHz CPUs. When larger DIMM sizes come out, you can bet that new TPC-C scores will come out too.
So why brag about leading a 16-year-old benchmark? This does customers no service. If IBM wants bragging rights, then publish benchmarks that are relevant to the high-end SMP community. Benchmarks like TPC-E (no 595 results) and TPC-H (no 595 results) would show who really is king. Otherwise, it's just benchmarketing at its best. All those basing purchasing decisions on TPC-C are just fools.
Re: @Interesting but meaningless
AC wrote: "Not only is the benchmark interesting but meaningless, comparing one Unix to another in this manner ignores other alternatives - mainframe, X86 architectures, etc."
I don't remember seeing any prohibition against other environments applying. In fact if you take the time to look at the current TPC-C top ten then you'll see a Windows system sitting at #10. Maybe the Unix variants and Linux are in #1-#9 because they deserve to be there - rather than being down to any in-built bias. :-p
"The price-performance will always be better on X86 architectures, due to the volume and thus pricing of these products. Moving to AIX/Power from HP_UX/Itanium is really like jumping from the fire into the frying pan."
Yes, but as has been pointed out by others in these comments, TPC-C is not purely down to the chip architecture, but to the environment as a whole. Yes, a 16-way x86-based box will be cheaper than a 16-way Power5+/Power6 - but can you use all that power? And what about the extra hardware that's in there (middle->big iron Power boxes have hardware hypervisors and a lot of RAS features which don't usually figure in x86 servers - unless you go for the expensive ones, in which case your cost argument is shot to bits). Oh, and while acknowledging that I'm not an Intel expert - I thought Itanic was a different product line (IA-64) and therefore wasn't "x86" anyway. :-)
And if AIX/Power is such a load of manure (as you imply) then why the heck is a 3 1/2 year old system still sitting in the benchmarks at #4. That surprised the heck out of me.
Lastly, TPC-C seems to be about the big iron - think "supercar" of the server world. So to try and compare to commodity x86 (equivalent=SUV at best?) is a bit silly. I work for a competitor to IBM, but I've got to admit (grudgingly) that the folks in the System p and AIX design teams have delivered a pretty solid product, with some neat features nicely implemented.
While TPC-C benchmarks are useful, they rarely reflect the real-world concerns of what is really wanted and what is effectively possible. For databases, this means Oracle or SQL Server. If you want Oracle, this means a UNIX server such as the IBM pSeries. If you want SQL Server, this means a Windows server. Itanium servers can run Windows Datacenter Itanium Edition and SQL Server 2005 Datacenter Itanium Edition, and run them very effectively. The OS and SQL costs for even the largest Itanium server, such as the HP Superdome, are less than €50,000, because of the SQL server-based licensing model. The Oracle licensing costs for a configuration such as 32 processors/64 cores, each running at 1.66GHz, would be extraordinary. Many organisations are very familiar with SQL Server and have the personnel, skills and applications.
It is a very effective real world alternative.
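A rough sketch of why the commenter calls per-core licensing "extraordinary" at Superdome scale: a flat server-based licence ignores core count, while a per-core licence multiplies up fast. The core factor and all prices below are invented placeholders, not actual Oracle or Microsoft list prices.

```python
# Illustrative per-core vs per-server licence arithmetic.
# All figures are made-up placeholders, not real vendor pricing.

def per_core_cost(cores, core_factor, price_per_core):
    # Per-core model: every licensable core is charged, scaled by a core factor.
    return cores * core_factor * price_per_core

def per_server_cost(servers, price_per_server):
    # Per-server model: flat fee regardless of how many cores the box has.
    return servers * price_per_server

# Hypothetical 64-core box, as in the Superdome example above:
flat = per_server_cost(1, 50_000)           # flat fee in the ~€50,000 region
by_core = per_core_cost(64, 0.5, 20_000)    # 64 cores at an assumed 0.5 core factor
print(flat, by_core)                        # the per-core bill dwarfs the flat one
```

With these placeholder numbers the per-core bill comes out more than tenfold higher, which is the shape of the argument even if the exact figures differ.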
Also, having a highly performing database server is irrelevant if it cannot handle the I/O load associated with the processing. A real-world configuration will require a highly performing I/O subsystem with associated disk storage.
I have not been keeping up to date, but the architectural limitations in Windows Server Datacenter Edition cap support at a maximum of 32 processors/64 cores. I don't know how this will change with the new 4-core Itanium processors.
What the results show
Benchmark results will show how good your system is at running benchmarks.
If the real life application of your system will be running benchmarks, they show how it will perform in real life.
Sun == Jamaican Bobsleigh Team
Sun do submit TPC-C benchmarks -- but only when they have competitive hardware which can produce a competitive score.
This is about once every three years, aiming at a particular niche.
If you look at the TPC-C benchmark, it is actually quite well structured to mimic real database/transaction loads, with its blend of Insert/Update and Delete transactions mixed with queries.
It is actually a reasonable indicator of relative performance, in much the same way as Formula 1 is for cars: BMWs tend to be faster than Toyotas in real life as well as on the track, although your BMW is unlikely to be as fast as Kubica's BMW.
Power 6 screams
The Power6, with a tiny bit of hacking, will run OS X, making it the fastest Hackintosh around. I wonder what 32 cores at 5GHz would do... Too bad Apple decided to dump Power from the Xserves. Maybe they should reconsider that move, since they seem to have found some new love for the PPC, and it would keep Intel working harder.
Irrelevant and useless, funmarking
IBM announces a new record on an old, irrelevant benchmark, on a proprietary closed OS, running on a server that you cannot buy, costing $17m. It's like having the best wood-burning stove/cooker: nice and impressive, but I will never use it. Good last century, but not in this millennium. We do not cook with wood (TPC-C), and the cooker is too big ($17m too big) for my kitchen; plus I can only see it in a magazine. But boy, can this thing make good toast. I use a $30 toaster instead.
1) IBM themselves say that TPC-C is irrelevant.
2) AIX is closed, and they take new ideas and developments like DTrace from Solaris to make it competitive.
3) The server used in the benchmark is not available.
This is about as useless as it gets.
IBM themselves helped create TPC-E as TPC-C was unrealistic.
IBM is a great and good company with too much money, spending time on a 1992-vintage benchmark. This is what you can do if you have more money than sense.
HP have some useful printers and ink to subsidise their server business. Can IBM make me something useful with all these resources, something I can actually use? Like a Linux laptop running an OS called LinOS/2.
Tic, toc, tic toc ... BONG!! 8-|
Intel have been busy smugly lording it over AMD with their Core CPUs and their "tick-tock" development cycle, so it's gratifying that they've been put in their place.
Sucks to be Apple for backing the wrong horse. Even Microsoft ditched Intel to use some sort of Power derivative in their Xbox (as do the PS3 and Wii).
The other interesting item is that Oracle seems to be struggling to compete with DB2 in the performance stakes... (yet another irritatingly arrogant company).
Windows on Itanium CPU limitations and switching databases to DB2
The old Windows for IA64 had limitations in that even the top-end Data Center Edition can support a maximum of 64 CPUs in a partition, so that was 64 single-core CPUs, or 32 dual-core CPUs, or 16 dual-cores with Hyperthreading enabled. I haven't got round to playing with Win 2k8 yet but I think the limit is still 64 logical CPUs. Since we mainly used Win 2k3 on Itanium for SQL consolidation it wasn't too much of a problem partitioning a Superdome into Windows instances, but it was rather irking to think we couldn't rope the whole 128 cores into one mega Windows instance.
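The 64-logical-CPU ceiling described above is just sockets times cores times hardware threads. A quick sketch, using only the commenter's own example configurations:

```python
# Sanity-checking the 64-logical-CPU ceiling:
# logical CPUs = sockets x cores per socket x hardware threads per core.

def logical_cpus(sockets, cores_per_socket, threads_per_core=1):
    return sockets * cores_per_socket * threads_per_core

print(logical_cpus(64, 1))      # 64 single-core CPUs -> 64
print(logical_cpus(32, 2))      # 32 dual-core CPUs -> 64
print(logical_cpus(16, 2, 2))   # 16 dual-cores with hyperthreading -> 64
```

All three configurations hit the same 64-logical-processor limit, which is why a fully populated 128-core Superdome could not be roped into one Windows instance.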
Looking forward, I'm waiting for the Tukwila cores on the next gen of the 4-socket BL870c blades, as this will give us a 16-way Itanium blade (32-way logical) which should make for a sweet SQL system, and make use of our existing blade chassis infrastructure (four BL870c blades to a 10U c7000 chassis!). At that point it's highly likely we'll be doing the majority of our Win2k8 (and Linux) on blades and keeping the SDs for our large hp-ux instances.
Regarding databases, I've often found that OTS applications are usually closely tied to the underlying database, so switching the database can be a nightmare. I've tried to leverage Oracle out of a stack before and replace it with MySQL and it wouldn't work. Like OS migrations, it seems simpler to stay in the same stream, so we usually just upgrade to the next instance of the same database for closely tied apps, and re-write inhouse ones to work with new database products. It would be nice if we could get rid of that painful chunk of Oracle licensing costs, but Oracle (and IBM and MS) know they're pretty safe as unless the application vendor switches database then you are usually stuck with one choice of database. The idea of being asked to switch out a RAC instance for DB2 would likely invoke a BOFH-like response involving a modified cattleprod.....
The thing with a benchmark is that it shows how good you are at the benchmark. Which is a good thing.
Take the 0-60 analogy. You can compare 2 cars and see which one is faster 0-60. That has now become the standard, even though it is not that relevant as most people really want their cars to do 0-30 or 50-70 or 30-0 when an idiot pulls out in front of them.
But everyone lists their cars with a 0-60 time. If a new car was developed and, say, published its cost per acceleration/time with respect to gravity and road surface, it may well be a better metric, but people would just ask "how quick is it 0-60?"
Then another car manufacturer would list their car's cost per accident/hour divided by the square root of the mean square of the acceleration from 48.3mph to 125.2kmh, and use the figures to "prove" they have a better car.
And in the end we use meaningless and inaccurate metrics like 0-60, top speed (wow, only limited to 85mph more than I can legally do, 35mph more than I normally do and 15mph more than will get me killed), fuel economy and boot volume.
But we use them because we are comfortable with them. Sure I have no idea how much more shopping I can get in 1.2554 m^3 than 1.1549 m^3 but I know I can get more.
But it is extremely rare that right-minded people buy things purely on the basis of the provided facts and figures, especially if you were going to spend $17 million...
How many eggs can you fry on one of those?
One of the reasons Apple switched away from the PowerPC chips was the amount of heat they produce. Note the massive cooling system in the G5 towers. They couldn't make a G5 PowerBook because of how hot it would have run. I have an 800MHz G4, and I risk my fertility every time I put it on my lap.
<- Flame, obviously.
@ Robert Hill
"""Simply put, Intel's adoption of the Core architecture (i.e., the upgraded Pentium-M from their mobile business!) finally made Intel a serious player in watts per processing power. Apple realized that Motorola just could not compete in that space, and frankly the PowerPC architecture was never really aimed at mobile platforms. """
Actually Apple stopped using Moto chips a hell of a long time ago. They moved to IBM chips at some point that you can look up on Wikipedia. And IBM did have a low-power laptop chip out in plenty of time for Apple to start using it in their laptops; they just claimed it was too complex (or they were already dealing with Intel). The PPC architecture doesn't have any innate limits on power consumption (look at PA Semi's stuff); an architecture isn't a lot more than an instruction set, which can be implemented many different ways.
"""Considering that Apple sells a very high percentage of laptops, and even their desktop solutions are known for attempting to be quiet and unobtrusive, power efficiency was just too important for them to pass up, even at the cost of re-writing and porting all of their software and facilitating 3d parties to do so."""
They were actually designing OS X with x86 in mind for years. The porting wasn't a huge problem at all. Plus they emulate PPC reasonably well, so very little previously functional software broke.
The real reason they went with x86 is because Intel chips are cheap. Really cheap. IBM kept jacking up the prices for Apple, supposedly because Apple was such a terrible company to do business with. Apple can now sell a fastish (obviously not Power6 fast) set of computers for roughly the same price that they sold the PPC kit for, but using cheap CPUs and more or less pre-made motherboards and things. They must be laughing their asses off that they can sell a macbook for a $300 markup over a Thinkpad of similar spec.
If they had stuck with PPC, they could have had extremely low power and fast laptop chips, plus been the only company to put Power6 into a desktop. Sure it'd require some of their custom liquid cooling and all that, but at the very least a 5GHz sticker would sell a lot of machines, plus they would probably be quite fast.
And if they weren't essentially the same stuff I already have I might actually want to own one.
Herby you're WAY off base
Herby, making a statement like, and I quote, "The price-performance will always be better on X86 architectures, due to the volume and thus pricing of these products. Moving to AIX/Power from HP_UX/Itanium is really like jumping from the fire into the frying pan." is like saying a Ferrari Enzo is a great buy because the tires are cheap. You're looking at one piece of the total cost of ownership: hardware. By the way, labor is the most expensive piece of any IT shop, not hardware. So if you ask any third-line manager or above whether they'd rather spend big bucks on a big machine, or save money on the hardware and have 3 more head count, you can guess which one they'll pick.
Pricing of the hardware is not the only consideration in purchasing a solution. X86 architectures typically require a LOT more personnel to admin, as they are usually many small boxes, each with its own firmware, OS, and apps that need patching, not to mention the complex software like RAC required to get them to move any significant amount of data. Did we get to software licensing yet? By the way, it's by processor, and yes, it takes a lot of dumpy x86 procs to equal one RISC proc. Larger servers, when looked at across the total cost of ownership, are MUCH less costly in the long run.
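A back-of-envelope sketch of that total-cost-of-ownership point: hardware is one line item; recurring labour and licensing often dominate. Every figure below is an invented placeholder, purely to show the shape of the comparison, not real costs.

```python
# Toy TCO comparison for the scale-up vs scale-out argument.
# All numbers are made up for illustration only.

def tco(hardware, admins, salary_per_year, licences, years=3):
    # hardware and licences as one-off costs, admin salaries recurring
    return hardware + years * admins * salary_per_year + licences

big_iron = tco(hardware=2_000_000, admins=2, salary_per_year=100_000, licences=500_000)
x86_farm = tco(hardware=600_000, admins=8, salary_per_year=100_000, licences=900_000)
print(big_iron, x86_farm)  # with these figures, labour swamps the hardware saving
```

With these placeholder numbers, the farm of cheap boxes ends up dearer over three years despite the much lower hardware price, which is exactly the head-count argument above.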
Not to mention you make no case for what type of availability the customer requires. I hope you're not expecting your 100 machine x86 cluster to have any type of uptime.
Herby, stick to Macs and playing games on PCs. Your experience in the arena of business is obviously lacking, or limited to a VERY small business.
Yeah, I mistyped that...I meant to say Moto to PPC to Intel or something, and just bloody mistyped. Anyway, was well aware of the whole PPC thing, especially as I used to do a lot of work with IBM SP2s and was amazed that Apple got onboard the same technology.
But I would debate that an architecture isn't more than an instruction set. That instruction set makes a whole lot of assumptions about the registers, hardware security mechanisms, addressing modes, memory tiering, pipelining, arithmetic units, etc. Sure, you can make hardware that does not meet those assumptions - but the performance will be a failure (that's what emulation does, of course).
In the final analysis, IMHO the PPC architecture is geared for faster clock speeds, with all the attendant power use and heat generation, even if they CAN clock it slower and make a laptop part out of it. The Pentium-M and Core are optimized for slower hardware clock cycles. While the blazing numbers on the PPC are highly interesting, I suspect that future computing increases will look a lot more like Nvidia's CUDA and IBM's Cell/PPC pairings, especially SPMD (single program, multiple data) architectures.
The result is surprisingly low actually
TPC-C scores usually scale quite well with memory. IBM have gone from 2TB of RAM to 4TB but only managed a 50% performance improvement. I guess they've hit some architectural limitation with the 595 and need a new box here to keep their shiny new chips fed.
TPC-C is a silly benchmark these days. The workload distribution is static. In a quad/cell-based system the clients can be distributed across the quads and localised, so there is little or no inter-quad traffic. It therefore scales in these boxes like no real-world commercial DP application is ever likely to. It's like when they used to run clustered TPC-C scores: if you had a 4-way cluster, 1/4 of your clients connected to each of the 4 nodes and then only touched data in 1/4 of the database, so perfect scaling. The same thing is happening in modern large MP systems. If you could do with your database what the benchmarketing engineers do, you wouldn't need the big iron at all. You'd just cut the database into hundreds of little bits and run them on mini blades. Whereas usually companies buy big iron because they've got a damn great big database and a whole load of access to it is needed.
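That partitioning trick can be shown with a toy model: clients pinned to "their" node never generate cross-node traffic, so throughput scales linearly, while a genuinely shared database pays a coordination cost. The loss function here is invented purely for contrast, not a real TPC-C figure.

```python
# Toy model of localised vs shared transaction scaling.
# The coordination-loss factor is an arbitrary illustrative assumption.

def partitioned_tpm(nodes, per_node_tpm):
    # Benchmark-style localised clients: perfect linear scaling.
    return nodes * per_node_tpm

def shared_tpm(nodes, per_node_tpm, coordination_loss=0.3):
    # Shared data: each extra node adds inter-node coordination overhead.
    return nodes * per_node_tpm * (1 - coordination_loss) ** (nodes - 1)

print(partitioned_tpm(4, 100))        # 400: four nodes, four times the throughput
print(round(shared_tpm(4, 100), 1))   # well under 400 once nodes must coordinate
```

The gap between the two curves is the part of the TPC-C number that a "damn great big database" with shared access never gets to enjoy.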