IBM today is taking the wraps off a new line of entry-level mainframes, the System z10 Business Class server. The z10 BC is a cut-down version of the existing z10 Enterprise Class machine, which launched in March 2008 using Big Blue's quad-core z6 CISC mainframe processor. It has been a long time since most IT people have …
"I'm a PC." "I'm a Mac." "I'm a Mainframe."
I'd just love to see a commercial like that. "Gee, Mainframe, you look fat." "Yeah, but I can bench press over 200 of you guys at one time." "No, you can't." "I don't believe that at all." "Oh, yeah?" Mainframe picks up PC and Mac in each hand, throws them off camera.
PC is busy with spreadsheets and word processing. Mac is busy with art and music. Mainframe walks by carrying 20 filing cabinets. PC and Mac, together: "Show off!"
PC: "I balanced my checkbook!" Mac: "I drew a pretty picture!" Mainframe: "I found the cure to cancer, the solution to global warming, and solved the world financial crisis." Mac: "But you didn't bring world peace!" PC smirks, nodding. Mainframe: "Neither did you."
Mainframes have not used CISC processors in years. They execute CISC code, but on RISC processors with a hardware decoder that converts the CISC operations into RISC operations. This is exactly what AMD and Intel have done on x86 for years, ever since AMD's K5 and Intel's P6.
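As a rough illustration of that decode step, here is a toy sketch of "cracking" a memory-operand CISC-style instruction into simple RISC-like micro-ops. The instruction tuples and micro-op format are invented for illustration; real decoders work on binary encodings in hardware.

```python
# Toy sketch only: how a decoder might crack a memory-operand
# CISC instruction into simple load/op micro-ops. The instruction
# and micro-op formats here are invented for illustration.

def crack(insn):
    """Split one CISC-style instruction into RISC-like micro-ops."""
    op, dst, src = insn
    if src.startswith("["):              # memory operand: load it first
        addr = src.strip("[]")
        return [("load", "tmp", addr),   # fetch the operand from memory
                (op, dst, "tmp")]        # then do a register-only op
    return [(op, dst, src)]              # register form maps one-to-one

# A CISC-style "add r1, [r2]" becomes two micro-ops;
# a register-register "add r1, r3" passes through unchanged.
print(crack(("add", "r1", "[r2]")))
print(crack(("add", "r1", "r3")))
```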
I recall reading somewhere there was significant similarity between one of the IBM mainframe RISC processor cores from the early 2000s and IBM's in-order RISC RS64 processor core from the late 1990s.
Do not forget that mainframes still have several layers of microcode outside the processors, which help make them fault tolerant. Mainframes only run monitoring in their outer layer of microcode and have layers and layers beyond it, which handle all types of errors in hardware, virtualization and software. It's like having a perfect VMware in hardware, or a very expensive BIOS.
I have no idea about performance, but since IBM has spent the last 20 years obfuscating all talk about mainframe performance by comparing it to previous generations and all kinds of OPS, I believe the x86 armada passed it long ago.
We will start moving virtualized x86 Linux servers onto mainframes, where we duplicate them and add some simple monitoring scripts for failover. I'll surely do some benchmarking.
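A "simple monitoring script for failover" of the kind mentioned above might be no more than a liveness probe plus a routing decision. This is a minimal sketch under that assumption; the guest names and the TCP health check are hypothetical, and a real setup would add heartbeats, retries and fencing.

```python
# Minimal failover-check sketch for a primary/standby pair of
# virtualized Linux guests. Hostnames and the TCP probe are
# hypothetical; real deployments need heartbeats and fencing.
import socket

def is_up(host, port=22, timeout=2.0):
    """Crude liveness probe: can we open a TCP connection at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_active(primary, standby, probe=is_up):
    """Route to the primary if it answers, else fail over to standby."""
    if probe(primary):
        return primary
    if probe(standby):
        return standby
    raise RuntimeError("both guests are down")

# Example with a stubbed probe (no network needed): primary is dead,
# so traffic goes to the standby guest.
up = {"guest-a": False, "guest-b": True}
print(pick_active("guest-a", "guest-b", probe=lambda h: up[h]))
```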
IBM also has a large outsourcing division and sales droids heavily promoting putting everything important onto mainframes; it may help keep the revenues up.
It is actually quite interesting to see the various factions and their points of view. There is such a theological war that only a handful of individuals have any real information on which to base their heartfelt convictions.
As an economic fact you can certainly replace 1,000 blade servers with 1 mainframe. What you end up with will be MUCH cheaper to buy, more reliable, and infinitely simpler to maintain; OK, not literally infinite, but when you're paying for 90% fewer administrators it might feel pretty infinite.
Mainframe performance is extremely hard to quantify in PC terms, largely due to the co-processors that do so much of the ancillary work in a mainframe complex. PCs have started to enter this world too, the longest-standing and best example of course being the GPU. But for the most part the CPU still does all the work itself.
Just as Intel has been trying to escape the clock-speed trap for 5 years or more, mainframes just don't have a "speedometer" the general public can grasp. MIPS are at best arbitrary. There are MANY standard performance measures of "typical" processing, whereby such things as database access or the "typical business transaction" [yeah, right] are modeled. But none of these are as cozy as "2GHz is twice as better then 1GHz" [sic]. Regardless, it would be more accurate and fair to say the Intel hawkers have avoided such comparisons, knowing that the results would not show them in a positive light.
Of less importance to Reg readers, but of huge importance to the industry as a whole, is the OS itself. z/OS, for all practical purposes, simply doesn't fail. It is perfectly suited for applications involving human life, financial transactions, or anything else that really matters. The Unixes (Solaris etc.) are of course equally, or nearly so, high quality. And Linux has made remarkable progress and now stands quite respectable as well. The real concern from an industry perspective is Microsoft. IT people who want to go home at a reasonable hour and then sleep through the night are best advised not to put "things that matter" on a Microsoft platform. But many people who think "computer" means PC and "OS" means Windows make a terrible mistake and cause real, often fatal, harm to their business simply because they "don't even know that they don't know the right question, let alone the answer".
So, we will never have a single hardware/software platform any more than nature will settle on the one perfect organism. But that doesn't mean IBM / z/OS should not dominate in many areas. Nor does it mean that natural selection should not weed out Microsoft from the food chain.
I was reading about mainframes, so I read CISC as CICS; I was confused for quite a while...
Incidentally, for a long time I've wondered: who would buy a mainframe to run Linux? Why? Surely you just get a bunch of ProLiants (or similar) and run them in a cluster; the hardware will be significantly more reliable than the software, after all.
Re Terry and Performance
I agree with a lot of what he says, but a few points/questions...
1. Does 1 mainframe = 1000 blade servers apply to the purchase price? You can indeed replace 1000 blade servers with 1 mainframe as long as the blade servers are mainly idle (or peak at different times). If they are all running flat out (or peak at the same time), then the mainframe would have a problem.
2. The engine speed versus throughput comparison for mainframe versus PC is really important – however neither side (AFAIK) has ever done a real comparison. This applies to both mainframe and PC manufacturers!
3. The offload of functions from the mainframe engine to other elements is really a result of the original architecture (going back about 50 years now), when processing speeds were measured in KIPS and installed memory was measured in Kbytes. The offload functions were there to reduce the CPU load when the cost of CPUs was in the $1M-per-MIPS range (i.e. with I/O and channel processors). Now that MIPS are so (relatively) cheap, this is not so much of an issue for PC-like architectures (although GPUs still have their place). For mainframes, where customers still pay frightening amounts for software, any offload of function (such as zAAP or zIIP) to which normal software charges do not apply is very important.
4. IFLs, again, mean that their capacity is not included in the "z/OS" bucket – however, these engines are technically exactly the same as normal engines; they are only there for pricing reasons. If there were no mainframe software pricing issues, then you would not need them.
5. z/OS really is a wonderful high-availability operating system. When linked with long-distance clustering (i.e. Parallel Sysplex and GeoPlex) it can provide a non-stop environment second to none. The problem is whether customers really require this level of availability and what other options exist. What is "good enough"?
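The capacity caveat in point 1 can be sketched as back-of-envelope arithmetic: consolidation works only if the workloads' combined peak fits the target machine (taking simultaneous peaks as the worst case). All the numbers below are made up for illustration.

```python
# Back-of-envelope sketch of the consolidation caveat in point 1:
# summing peaks assumes the worst case, where all blades peak at once.
# Capacities are in arbitrary "blade-units"; all numbers are invented.

def fits(blade_peaks, target_capacity):
    """True if the combined peak load fits on the target machine."""
    return sum(blade_peaks) <= target_capacity

# 1000 mostly-idle blades, each peaking at 5% of a blade, fit easily
# on a hypothetical machine worth 100 blade-units of capacity:
idle = [0.05] * 1000
print(fits(idle, 100))   # combined peak is ~50 units

# The same 1000 blades running flat out do not:
busy = [1.0] * 1000
print(fits(busy, 100))   # combined peak is 1000 units
```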