The word is on the street that Big Blue is looking to get its next-generation System z11 mainframes out the door a little sooner in the third quarter than many people might be expecting. No surprises there, with IBM's mainframe business slackening off and the prospects of selling customers System z10 boxes at anything close to a …
I don't see how. Even during college we were taught that CISC is a behemoth sloth that is anything but fast.
The Rational Thing
..would be to consolidate everything on POWER. The legacy stuff would be converted to the new ISA by means of emulation/translation. HP, DEC and, IIRC, an IBM Research project proved that this can be done very efficiently.
HP did exactly that when they converted HP3000 to PA-RISC.
Everything that exists in COBOL, C, Fortran, PL/1 and Java could easily be ported to POWER once z/OS/MVS runs on POWER.
And the bozos who insist on self-modifying code are simply stuck with the last zProcessor until they stop that nonsense.
This approach would reduce CPU R&D expenditure dramatically and allow a single architecture to run z/OS, AIX and OS/400 (whatever the current marketing labels are). It would also mean that a single VM architecture could run all IBM operating systems on the same physical system.
Rational but not what customers want
The world still runs on the mainframe and its unique cache architecture, which provides the highest transaction throughput of any architecture.
IBM has done an incredible job of leveraging technology between z and p and extending z technology.
Every time you use a bank, an airline, or a phone, or track a package, you are using a mainframe.
Emulation is doable, just not what mainframe customers want.
It is not the Cache
"The world still runs on mainframes and its unique cache architecture which provides the highest transaction throughput of any architecture."
I guess you meant the I/O subsystem and its I/O processors? POWER would blow the zProcessor out of the water, just as it does x86 and everybody else. IBM does not publish SPECmarks for the zProcessor; instead they use their own "MIPS" benchmark. Guess why...
(I did some integer benchmarking in 1999 on a cheap PC and a then-current S/390 CPU; the results were more or less the same. I don't think this has changed.)
DEC had an emulator/translator that ran x86 code on DEC Alpha at nearly native x86 speed (that of the fastest Pentium at the time). IIRC it was called FX!32. I can't see a reason this shouldn't be possible with the S/390 ISA and POWER.
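The FX!32 approach (interpret first, profile, then translate hot code to native and cache the result) can be sketched in a few lines. Everything here is a toy: the "guest ISA", the hot threshold, and the closure standing in for generated native code are all illustrative assumptions, not FX!32 internals.

```python
# Toy sketch of profile-directed binary translation: interpret guest
# code at first, count executions, and once a block turns hot, swap in
# a cached "translated" version. The guest ISA is invented for the demo.
HOT_THRESHOLD = 3
counts, translated = {}, {}

def interpret(block):
    """Slow path: dispatch each guest instruction one at a time."""
    acc = 0
    for op, arg in block:
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
    return acc

def translate(block):
    """'Compile' the block once into a host function (here: a closure)."""
    def native(acc=0):
        for op, arg in block:   # a real translator emits straight-line code
            acc = acc + arg if op == "ADD" else acc * arg
        return acc
    return native

def run(name, block):
    counts[name] = counts.get(name, 0) + 1
    if name in translated:
        return translated[name]()          # fast path: cached translation
    if counts[name] >= HOT_THRESHOLD:
        translated[name] = translate(block)
        return translated[name]()
    return interpret(block)

block = [("ADD", 2), ("MUL", 10), ("ADD", 1)]
results = [run("b1", block) for _ in range(5)]
print(results)  # same answer on every run; later runs hit the cache
```

The point of the pattern is that translation cost is paid once and amortized over repeated execution, which is why FX!32 could approach native speed on hot code.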
Not to be mean, but college was a looong time ago. x86 is CISC, and most RISC processors have added CISC features, making them RISC/CISC hybrids.
EPIC, on the other hand, is a failed alternative that was supposed to take over the world and did a pretty good job of running off the little guys: MIPS, Alpha, PA-RISC.
No, it is the other way around
x86 is a CISC instruction set which is, in all new models of Intel and AMD processors, bolted on top of a RISC core. The hardware is RISC, but decoders convert every instruction and make software see it as classic x86.
That is one reason why x86 is so crappy and power-inefficient, and why Intel's (Xeon, not Itanium) and AMD's R&D costs are higher than IBM's (at least for the POWER part) and Fujitsu/Sun's SPARC efforts. That, in fact, is what has let the RISC vendors hang on this long against ubiquitous x86, which offsets its scrappy architecture by leveraging the high volume and advanced manufacturing processes of Intel's fabs.
There's actually an acronym for this kind of "complex instruction decoding on RISC microcode cores": CRISP.
-- [C]omplex [R]educed [I]nstruction [S]et [P]rocessor
Mine's the one with my Microprocessor Design and Engineering cheat sheet in the pocket.
"...as well as decimal math units (for doing money math without having to round single-precision calculations)..."
I thought 'money math' was usually done with scaled integers for speed and accuracy?
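Both techniques mentioned so far (scaled integers and decimal arithmetic) exist to dodge the same problem: binary floats cannot represent most decimal fractions exactly. A minimal sketch, using Python's standard `decimal` module as a software stand-in for hardware decimal units:

```python
# Why "money math" avoids binary floating point: most decimal
# fractions have no exact binary representation.
price_float = 0.10 + 0.20
print(price_float == 0.30)          # False: binary rounding error

# Scaled integers: keep everything in cents (an int), format on output.
price_cents = 10 + 20
print(price_cents == 30)            # True: integer math is exact
print(f"${price_cents / 100:.2f}")  # decimal point only at display time

# Python's decimal module behaves like a decimal arithmetic unit:
# exact decimal digits and explicit, controllable rounding.
from decimal import Decimal
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
```

Scaled integers are the fast path; a decimal type (or decimal hardware) buys the same exactness plus explicit rounding rules.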
MIPS measurement methodology?
It would be nice if we knew, or had access to, their MIPS measurement methodology & data sets. Then we could run the data sets with comparable methodology across the multitude of systems out there & see just exactly how well these expensive machines run.
I understand and like the fact that pretty much EVERYTHING in an IBM mainframe is redundant and has HUGE I/O bandwidth. But, I'm curious as to any other benefits of using a mainframe compared to some of today's clustering technologies.
Decimal instructions aren't used to avoid rounding. Even x86 computers calculate with binary integers; floating point was added later as an optional extra, and only became standard by the time of the 486 (and even then you could get an FPU-less 486SX).
Instead, what is avoided is most of the work of converting from decimal to binary and back again, if you keep your numbers recorded in decimal (printable) form, as many databases do, and only do a tiny amount of calculating with them before writing them back out again, as is often the case in commercial applications.
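The point about skipping the decimal-to-binary round trip can be illustrated by adding two numbers digit by digit, roughly the way packed-decimal (BCD) hardware does, without ever building a binary integer for the whole value. This is a sketch of the idea, not of any actual mainframe instruction.

```python
# BCD-style arithmetic sketch: add two decimal digit strings with a
# carry, one digit at a time. The full number is never converted to
# binary -- the kind of work a mainframe decimal unit does in hardware.
def add_decimal(a: str, b: str) -> str:
    a, b = a.zfill(len(b)), b.zfill(len(a))    # pad to equal width
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry           # single-digit add
        carry, d = divmod(s, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_decimal("99999999999999999999", "1"))
# "100000000000000000000": works at any width, no binary conversion
```

Only single digits are ever converted, so a value read from a decimal record can be adjusted and written back with almost no conversion cost.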
"It would be nice if we knew, or had access to, [IBM] MIPS measurement methodology & data sets. Then we could run the data sets with comparable methodology across the multitude of systems out there & see just exactly how well these expensive machines run."
Oh, no need for this. In terms of CPU power, IBM mainframes are dog slow. You need eight Nehalem-EX CPUs to match the largest IBM mainframe, which has 64 CPUs.
The largest IBM mainframe gives 28,000 MIPS, which is about 437 MIPS per CPU. An 8-socket Nehalem-EX gives 3,200 MIPS, which is 400 MIPS per Nehalem-EX (when you use the IBM mainframe software emulator TurboHercules).
But remember that software emulation is a factor of 5-10x slower than running native code. So if you could port mainframe code to x86, you would not get 400 MIPS per Nehalem-EX, but 2,000-4,000 MIPS. So an 8-socket Nehalem-EX machine would give 16,000-32,000 MIPS, which is on par with the largest IBM mainframe.
Also, another source, an independent Linux expert who ported Linux to the IBM mainframe, gives the rule of thumb: 1 MIPS == 4 MHz x86. But that number is from 2003, when the Pentium 4 ruled the earth. The new Nehalem and Sandy Bridge are at least four times faster, clock for clock, than the Pentium 4, so the new rule of thumb would be 1 MIPS == 1 MHz x86. So if you have a 2 GHz eight-core Nehalem-EX, you have 16 GHz in aggregate, which corresponds to 16,000 MIPS when running native code. In this case, again, you only need a few Nehalem-EX to match the largest IBM mainframe. "Debunking the IBM Mainframe myth":
So we have two independent sources that claim that only a few Nehalem-EX are needed to match the largest IBM Mainframe which has 64 cpus.
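The back-of-envelope arithmetic above can be checked in a few lines. All of the input figures are the commenter's claims, not measurements, and the x86 estimates (the 5-10x emulation penalty, the 1 MIPS per MHz rule of thumb) are rough heuristics rather than benchmarks:

```python
# Reproducing the comment's own arithmetic (claimed figures, not
# benchmark results): per-CPU MIPS for the mainframe, and the two
# independent estimates for an 8-socket x86 box.
mainframe_mips, mainframe_cpus = 28_000, 64
print(mainframe_mips / mainframe_cpus)      # 437.5 MIPS per mainframe CPU

# Estimate 1: scale the claimed emulated throughput by the claimed
# 5-10x emulation slowdown.
emulated_mips_per_socket = 400              # claimed, under TurboHercules
native_low = emulated_mips_per_socket * 5
native_high = emulated_mips_per_socket * 10
print(8 * native_low, 8 * native_high)      # 16000 32000 for 8 sockets

# Estimate 2: the claimed rule of thumb, 1 MIPS ~= 1 MHz of modern x86.
ghz, cores = 2, 8
print(ghz * 1000 * cores)                   # 16000 aggregate "MIPS"
```

Both estimates land in the same 16,000-32,000 MIPS range, which is the whole basis of the "a few Nehalem-EX" claim.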
In fact, you can emulate a Mainframe on your laptop:
Anyway, I don't see the new z11 with 50,000 MIPS being much faster. Then you simply need 16 Nehalem-EX to match that machine. That is hilarious. But of course, mainframes have good I/O, not good CPUs.
As for reliability: there is a large market for the software emulator TurboHercules, which IBM is now suing (despite IBM having promised not to sue open source projects that use the 511 patents it released!). If mainframes are that reliable, why is IBM afraid of a backup solution such as TurboHercules? Read about the TurboHercules vs IBM case here; start from the bottom. It turns out that Groklaw is IBM-biased and censors all IBM criticism. Do not trust Groklaw.