The workload challenge

The mainframe was there first, but is it the dinosaur that many people assume? For some workloads, it has never been bettered: many of today's business web sites store a production database on a mainframe host, for example. For applications that rely on large-scale transaction processing, that support thousands of users, …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Misunderstood

    The mainframe is a very much misunderstood beast. The technology has been confused with the boring data-processing departments it tends to live in, and many IT people in this day and age are simply ignorant of it. Whatever midrange technology you can think of, it was there on mainframes decades before anywhere else: distributed transactions, message queueing, stateless sessions, virtualisation, etc etc etc. Unfortunately, owing to those roots in data processing, mainframes have tended to remain COBOL- or assembler-based, so it's just not sexy enough for most.

    1. Steven Jones

      I'll take up the challenge

      IBM mainframes certainly did pioneer quite a lot of technologies, but very far from all of them. TCP/IP and the whole opening up of networks were most certainly not seen on mainframes first - indeed their very philosophy of decentralisation runs completely contrary to the way IBM saw networks in the 1970s and '80s.

      Also, mainframes did not pioneer decent hierarchical file systems. The very structure of mainframe operating-system I/O, allowing direct access to I/O commands from user programs (albeit with add-ons to impose security), did not allow for a properly layered I/O system. It also left mainframes with a bewildering number of different and incompatible ways of holding file data (all those "SAMs" - VSAM, ISAM etc.) with no common command set. Even apparently simple operations like deleting or copying a file (dataset) couldn't be achieved with a common, straightforward command. You had to be aware of the organisation, use the right utility and remember all the quirks.

      Mainframes also did not pioneer multi-threaded development environments. Being stuck in TSO land, you were generally limited to what you were doing in foreground and the ability to submit batch jobs. Even CMS under VM was essentially single threaded as, for that matter, were TP monitors like CICS and IDMS-DC.

      Mainframe programming models also did not pioneer good, flexible inter-process communication. That very much came through work on UNIX.

      ASCII was also not pioneered on mainframes - instead the rather inconsistent and nasty EBCDIC held sway with its odd gaps in character codes due to the legacy of punched card compatibility.

      Also, fixed-block disk architectures were not pioneered on mainframes - instead there is a legacy of nasty CKD formats. Any remotely modern software treats devices as logical-block storage. Even if the disks aren't truly CKD, the backward compatibility makes it very difficult to produce advanced file systems that can still be used by legacy programs.

      Also, mainframes were stuck with a nasty 24/32-bit hybrid architecture long after true 32-bit alternatives were available in the mid-range world, and they were also late to true 64-bit.

      Mainframes did not pioneer desktop or WYSIWYG environments. Yes, there were graphics - of a sort - using specialised terminals, but the whole windowing/mouse user environment with which we are now familiar emerged from the mid-range arena.

      I'm sure there are more - yes there were good things, but a lot of mainframe software looks very old and anachronistic these days.

      1. Peter Gathercole Silver badge

        I'd just like to point out

        that AT&T, up until the late '80s, ran a large part of their environment on mainframes, many of them running UNIX! And you probably ought to look up other, non-IBM OSes for 370-architecture systems as well. One of my personal favourites was MTS. I saw a demonstration of access to ARPANET (you know, a forerunner of the Internet) from this OS in the very early '80s. Also, for all its problems, the influential OS Multics was a mainframe OS, and it established features that would appear in UNIX, VMS and a host of other OSes long forgotten.

        I was involved with installing and running a channel-attached Ethernet device running TCP/IP on a mainframe, linking it to Sun and VAX systems in the late '80s (again, under UNIX).

        I think that one needs to separate the hardware from the software, as there is a significant difference.

        Mind you, if you look at some of the innovations - such as virtual addressing, virtualised systems, key-based page-level memory protection, I/O offload, multi-processor systems, distributed processing, hierarchical storage controllers, DMA, memory cache, multi-user and multi-tasking operation, use of ASCII (one of your benchmarks - ASCII was mandated by US government contracts in 1968, and before that was a COMMUNICATION standard, not a COMPUTING one), microcode, solid-state electronics and a host of more minor things - mainframes were often among the first systems to implement them (often because the features were so expensive to implement, only mainframe-class machines could benefit).

        Whilst many of these were not invented on the 360/370....zSeries systems (now the only real mainframe architecture remaining), they were almost all pioneered on mainframe-class systems like Atlas, KDF/9, Cyber/CDC, UNIVAC and others.

        1. Anonymous Coward

          actually, well into the '90s

          I should know, I was the admin of one of them.

          Ah, the good old days.

      2. dlc.usa
        FAIL

        Not Quite

        Actually, UCLA's MVS-running mainframe was on the ARPANET before TCP/IP was deployed. I was chief developer of the first commercialized TCP/IP stack for MVS (not an IBM offering) which was available before DNS was deployed and host tables were, ah, expanding rapidly.

      3. dlc.usa
        FAIL

        Making Some Time To Educate You

        DOS, OS/MVT, and CP-67 with CMS all ran multitasking environments--that's almost the same concept as multiprogramming: multiple programs loaded in main storage, dispatchable serially on a single processor. OS used a task-management architecture centered on Task Control Blocks (TCBs), and programs could spawn subtasks that would also compete for the processor. UNIX calls these "processes." This was all S/360--no virtual storage. The first multiprocessing (aka SMP) was developed on modified Model 67s (said to be tightly coupled) and became generally available and supported on the non-virtual-storage operating systems.

        Underneath all the OS access methods you found confusing was the EXCP access method (EXecute Channel Program), which gave the programmer the ability to code the channel programs processed by the I/O hardware. Serious database products developed by ISVs all used it to achieve maximum efficiency. Take the BBN IMP 1822 ARPANET connection hardware that most customers of the TCP/IP stack I mentioned in my previous comment provisioned: the stack's driver queued five separate read channel programs via multiple EXCPs, so when the active channel program ended and generated an interrupt, the I/O Supervisor's interrupt handler immediately issued the SIOF instruction for the next queued channel program before notifying the program that had issued the EXCPs. None of the TCP/IP stack's code required supervisor state (kernel mode).

        What else? Oh, the original S/360 PSW had an ASCII/EBCDIC mode bit that was eventually repurposed because customers did not use it. And the "nasty" CKD hardware offloaded a lot of CPU cycles into the hardware devices architected for it, reducing elapsed time and freeing the CPUs for more non-I/O-related processing.

        You are missing the main point in your final paragraph. Yes, not all the technologies you mentioned were pioneered by IBM Corporation or even on mainframes by IBM customers and third-party vendors. But today's Z boxes still do all that and more (although it is still more cost-effective to offload most of the rendering processing to the smarter terminals we enjoy today).

        1. Kebabbert

          dlc.usa

          "...But today's Z boxes still do all that and more (although it is still more cost-effective to offload most of the rendering processing to the smarter terminals we enjoy today)...."

          It is a fact that today's mainframe CPUs are several times slower than any high-end x86 CPU, and as such it is a good idea to offload heavy CPU work to x86 servers. Even the z196, the mainframe CPU IBM bills as the "world's fastest CPU" and released just last September, is several times slower than an Intel Nehalem-EX. So, yes, you should have some x86 servers.

          Mainframe CPUs have always lagged in performance when you compare between CPU generations. Here is a source from Microsoft:

          http://www.microsoft.com/presspass/features/2003/sep03/09-15LinuxStudies.mspx?

          "we found that each [z9] mainframe CPU performed 14 percent less work than one [single core] 900 MHz Intel Xeon processor running Windows Server 2003."

          But Mainframes have superior RAS.

        2. Steven Jones

          @dlc.usa

          Nothing new to me there. The point wasn't what you can run under Z-Linux; it's what concepts were pioneered on mainframes. TCP/IP simply was not pioneered on mainframes - an early port hardly counts, and in any case many of the concepts didn't fit MVS, or that OS's whole centralisation philosophy, very well.

          I'm also fully aware that the I/O channel architecture was developed to offload processing to specialist I/O controllers, so you could do things like searching for index blocks without the CPU being involved, but it was an architectural dead end. Yes, there was a need for efficiency, and I know why it was done, but the point is that proper file systems were pioneered away from the mainframe.

          Also, I knew the 360/370 architecture inside out, as I used to write operating-system code right down to issuing I/O commands in kernel code and dealing with interrupts, condition codes (I think that was the term) and so on. My company had a high-performance OLTP system with proper pre-emptive multi-tasking, software disk mirroring, dual logging and atomic transactions on this architecture back in 1971 (and there is one system still running in production, albeit it should have been rewritten years ago). I can still read (most of) 360/370 programs from the binaries, and I can certainly align all the instructions by eye, as it's a nice predictable architecture.

          Yes, and I know about TCBs and all that stuff - it's just that TSO and the like were hardly the last word in sophistication.

          So go back to the original challenge: somebody claimed that no major computing concepts were pioneered on mid-range servers. I just gave some examples of ones that were, and of mainframe concepts that became outmoded and were replaced by others (like the IBM mainframe channel architecture, or the arcane mainframe access methods).

    2. Anonymous Coward

      But ...

      COBOL, Assembler and Mainframes ARE sexy.

  2. dlc.usa

    Fail Re: "I'll Take Up The Challenge"

    I do not understand why my comments were not associated with the comment made by Mr. Jones. The article itself is not a "Fail" and I regret causing any possible confusion.

