That I worked with mainframes :( Reading the Redbooks really makes you wonder why they aren't more popular, as they fit so many workloads well.
IBM is a funny technology company in that its top brass doesn't like to talk about feeds and speeds and seems to be allergic to hardware in particular. Which is particularly idiotic for a hardware company that sells servers, storage, and chips. Thursday, in launching the new System zEnterprise 196 mainframe, IBM didn't say …
The hardware is amazing (and thanks to TPM for another great article), but it's also amazingly expensive - to buy, maintain and support. In simplistic terms of bang for your buck, a Windows or Linux system is going to be an order of magnitude cheaper for all but the most gargantuan workloads.
These beasts still exist mainly because of all the large organisations that need to run millions of lines of legacy code that's either too expensive to convert or (more likely) impossible - since all the documentation has long since disappeared. I have this vision that in 2050 IBM will announce the zEnterprise 1960 to weak cheers from the dwindling band of 90-year-old Cobol programmers, the only people alive who can still support IMS and CICS.
"God sent us this 360 and lo, our 1400 payroll programs run no slower than before." The Devil's DP Dictionary (1981) - Stan Kelly-Bootle
IBM will charge out the ass for nearly every aspect of the things, but maintenance and power are relatively cheap compared to an equivalently powerful quantity of, say, Dell servers - ESPECIALLY power (and HVAC). (IBM might charge enough for service contracts that maybe you're right about maintenance.)
The >$1E6 acquisition cost and the horrifying expense of software licenses and the initial service contract, handy as it may be, just scares off most people. If, however, you can afford that, it is probably cheaper in the long run. Even if it isn't cheaper (and I cannot imagine an equivalent quantity of x86 servers saving money on power and ventilation no matter what kind of toad I lick), it is certainly more reliable and better at running a hair under full capacity for ten years straight. Most workloads do not require that much reliability, but for those that do... well, I'm glad mainframes exist. They can use ALL of their muscle ALL of the time, which is something a generic x86 server running Linux is not going to do. No, it's going to start to choke before it even reaches 90% continuous utilization, and it'll probably break and have to be taken down for maintenance. It probably doesn't have things like hot-swappable memory and processors.
I like it that things like credit cards and air traffic control still work right.
Quote "and it'll probably break and have to be taken down for maintenance. It probably doesn't have things like hot-swappable memory and processors."
I believe higher-end x86 servers, and I suspect some lower-end ones, do have hot-swappable memory and HDDs, and I remember playing with a £40,000 Compaq one well over 10 years ago that I believe did have a hot-swap CPU. It was way ahead of our SPARCs at the time for the price (it used Pentium Pros, if I remember right).
Anyway, who wants hot swap Mem & CPU when you can just turn it off and pass the traffic to an identical system? (don't bother answering)
First reason, the hardware is very expensive
Second reason, the software is even more expensive (Linux apart)
Third, limited pool of (expensive and now rather old) experts
Many years back IBM had a choice between maximising the profit they could make on mainframes by keeping software costs high, or increasing volume by lowering them. They did the former, hence what they now have is a niche, albeit lucrative, market. IBM has also had its fair share of quirky hardware features. The system software is very stable, but it's been a long time since it was at the cutting edge of software technology. In fact some of the system software is distinctly clunky - CICS is architecturally dreadful, and the file space structure was, for a long time, very unpleasant indeed. TSO is also very clunky, and IBM operating systems were not exactly designed round graphical interfaces. The core of communications was built round some very old-fashioned, centralised ideas of network management.
It's also wrong to assume that these boxes are inherently much more powerful than the fastest machines from HP, Fujitsu, or IBM's very own AIX systems. Indeed IBM are extremely unwilling to publish comparative benchmark figures for these machines, as they know full well that on price-performance they will not be competitive; in fact their very own AIX systems will look a great deal better value. Indeed in some areas IBM mainframes were a bit late to the party. It took a long time to move off 24-bit addressing to 31-bit, and there were some less than tidy ways of extending physical memory beyond that limit. 64-bit addressing, essential for large systems, did not arrive until the z900 in 2000, whilst Sun, DEC and HP all had 64-bit machines from the early-to-mid 1990s.
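Those addressing jumps are easy to put in numbers. A quick back-of-the-envelope sketch (the era labels are my own shorthand, not from the post):

```python
# Usable address space per addressing width. 31-bit (not 32) is correct
# for MVS/XA: the top bit of the 32-bit address word was reserved as an
# AMODE flag rather than used for addressing.
widths = [
    (24, "S/370"),             # 16 MB
    (31, "MVS/XA and later"),  # 2 GB
    (64, "z/Architecture"),    # 16 EB
]

for bits, era in widths:
    size = 2 ** bits
    print(f"{bits}-bit ({era}): {size:,} bytes = {size / 2**20:,.0f} MiB")
```

The 24-to-31-bit jump bought two orders of magnitude; the jump to 64-bit made the address space effectively unlimited for the workloads of the day.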
For a while IBM made a big play on the use of virtualised environments for running hundreds, or even thousands, of Linux VMs (and IBM truly were a pioneer of virtualisation). However, the cost of mainframe memory just doesn't make any sense when there are any number of x86/x64 offerings around that will achieve the same thing at a fraction of the cost and provide you with a far wider range of pre-compiled applications.
And I do speak as somebody who has worked on mainframes, and who has seen Linux on mainframes. They are great on backward compatibility and stability, but they are not the way the main market is going, or ever will go. They will remain a lucrative niche, of interest to an ever smaller number of institutions, many of which are locked in by legacy applications.
The current z10s have 'ear protection must be worn' signs on the side cos the fans are so loud... Doubt it's any different for these; it's a lot of heat to dissipate.
Water cooling is an option - all the Z engineers have been sent on courses in case they come across one and pull out a pipe by accident :-)
Or even on purpose. I recall an IBM engineer telling me of one of their water-cooled products that lacked an important feature as standard. A drain cock.
Since it was necessary to drain the thing down for most HW maintenance tasks, the only way to achieve this was to pull off a hose and give it a drenching. Every mainframe engineer around that time carried a hairdryer in his tool kit to reduce the maintenance downtime required....
AFAIK it still runs fine, although I doubt you will find anyone still using it. The bigger issue is finding anyone who could read/debug the damn thing. RPG was (AFAIK) really put out to pasture in the 70s and maybe 80s. I would bet that if any of it were running in 1999, any Y2K projects would have done away with it. The semi follow-on (and this is a stretch) was the COBOL report writer. That died a death in the 1980s; I remember that some release of COBOL did away with it. There was a small gnashing of teeth, but everyone (that I knew of) was happy that the monster was put to bed. I don't know this for 100 percent, but in most shops it was against standards to use the COBOL report writer - not because of any great inefficiency, but because maintainability was almost nil.
My *VAGUE* memories tell me that RPG came along with DOS (and then migrated to s/7 & s/34) and had a brief life in MFT/MVT. It may have gone virtual but it was a short life.
... for a company that doesn't believe itself a hardware vendor. Or a software vendor. They'd still rather everyone used their selectrics and leave the computing to the professionals at the dozen or so global computing centres world-wide, each one sporting one (pcs. 1) big fat ibm mainframe. Or in today's parlance, the cloud. They were right all along. See?
Asking why mainframes still exist is like asking why specialised "rack-mounted servers" exist compared to the Windows box under your desk - weren't they obsolete when micros~1 came out with a "server edition"? - or why specialised and very expensive networking gear exists when the "router" you have at home costs like forty north american pesos, yanno. They do something that's pretty hard to do with wintendo kit, and if you fitted wintendo kit to do all that you'd have a highly polished turd - multiply redundant and failsafe and all that, but still a turd. The most obvious brake on new uptake is the beyond-painful CPU and software (even just OS) licensing costs, the kind that makes your wallet physically bleed when you merely think about it. A modern OS that could make proper use of the hardware without costing an orphanage in limbs (and wasn't Linux, TYVM) would perhaps be interesting, though I don't believe it'll happen anytime soon. This is a milking cow, and milking the cow is what they do.
Apple left the PowerPC architecture for this reason. The PowerPC chips were not being produced for general desktop or laptop purposes. They were either high end server CPUs or for embedded devices.
The G5 was never going to be suitable for a laptop and a modern Mac Mini is faster than a G5 using 10% of the electricity.
Given the PPC970 was a cut-down POWER4, and an Intel C2D spanks a POWER4, I'm quite glad they moved. Of course the top-end G5s were multicore, which gave a big boost.
I'd back any move away from the monopoly based on FUD that IBM promotes. Look at this design: it's backing up a complicated CISC architecture with a huge cache (for LPARs, no doubt) to crank maximum performance from a system that's constrained by I/O throughput.
As Seymour Cray said, 'Anyone can build a fast CPU. The trick is to build a fast system', maybe IBM finally understand this? After all, all their competitors are a long way down that road having abandoned the Ghz race a long time ago.
Cache also buys a lot of throughput. Much is done with instruction prefetch, catching and resolving issues with instructions before they execute, which speeds things up a LOT.
There are many things cache buys (too many to list here), but it substantially finds and resolves issues with instructions before they reach execution, thereby letting the main system concentrate on work rather than reacting to problems during the execution of the instruction.
One of the great things about z/OS (and its forebears) is that it is designed to run at 100 percent busy 24x7x365, which really helps throughput. In the 80s the only time the needle (CPU busy) went below 100 was at IPL (boot up). I used to regularly look at reports showing this, and they also showed whether anything during the 100-percent-busy periods was causing bottlenecks.
Back in the 1980s our IBM SE made a study of our system and showed what kinds of bottlenecks we had and where; most of them were input/output. As a result, IBM came up with a reasonably cheap ($35? a month) software feature that increased our throughput 60 percent, just by raising the default buffers on most files from 2 to 5 and chaining the I/O so that the channel only had to be asked once for 5 read/write requests. (The channels did have to transfer the extra data, but they took the additional load with flying colours.)
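The trick generalizes: batching several records per request amortizes the per-request overhead, which is exactly what raising BUFNO and chaining channel programs did. A minimal sketch in Python - the record size, chain depth, and in-memory "file" are illustrative assumptions, not the actual channel-program mechanics:

```python
import io

RECORD = 80   # bytes per "record" (classic card-image size)
CHAIN = 5     # records fetched per request, analogous to BUFNO=5

def read_records(f, chain=1):
    """Return (records, read_calls): one read() call per `chain` records."""
    records, calls = [], 0
    while True:
        buf = f.read(RECORD * chain)   # one "channel request"
        calls += 1
        if not buf:
            break
        for i in range(0, len(buf), RECORD):
            records.append(buf[i:i + RECORD])
    return records, calls

data = b"x" * (RECORD * 100)                       # a 100-record "file"
r1, c1 = read_records(io.BytesIO(data), chain=1)   # 101 read calls
r5, c5 = read_records(io.BytesIO(data), chain=CHAIN)  # 21 read calls
print(c1, c5)  # → 101 21
```

Same records either way, but the chained version issues roughly a fifth of the requests - on real channel hardware that was the difference the $35/month feature was selling.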
They know the beancounters (who *pay* for this stuff) don't care about the Copper interconnect SOI fab process (that still sounds pretty advanced).
They *care* it gives X % more throughput at Y% more (and it will be more) cost.
If IBM talked tech they would have bragged about their production ready OO OS, 96 bit address space and ground up HTML compatibility with automatic file system rebuild in the event of a crash as standard.
What? You never heard of it?
BTW, 1800 W over (9.2 cm)^2 is roughly 21 W/cm^2. IIRC the Intel/AMD processors are running closer to the 100 W/cm^2 level, which was the Apollo heat-shield design limit.
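For what it's worth, the arithmetic on the figures as given works out to just over 21 W/cm^2:

```python
# Power flux implied by the numbers in the post.
power_w = 1800.0
side_cm = 9.2
area_cm2 = side_cm ** 2          # 84.64 cm^2
flux = power_w / area_cm2
print(f"{flux:.1f} W/cm^2")      # → 21.3 W/cm^2
```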