Easy to use, virus free, secure: Aaah, how I miss my MAINFRAME

Mention mainframe computers today and most people will conjure an image of something like an early analogue synthesiser crossed with a brontosaurus. Think a hulking, room-sized heap of metal and cables with thousands of moving parts that requires an army of people just to keep it plodding along. A no-name PC today would blow a …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    Get source control and isolate your production machines from dev already. A complete refusal to use any of the tools that replaced mainframes doesn't make a legitimate complaint, let alone a two-page article.

    1. John Riddoch
      Thumb Down

      I was thinking similar thoughts. Most of the advantages of the mainframe listed are enforceable on Unix, Linux or even Windows servers with sufficient political will. The difficulty is that *nix & Windows admins/users aren't used to such controls and would balk at the kind of barriers that mainframe methodologies put in your way.

      1. Admiral Grace Hopper

        Quite so, although it was a lot easier to enforce it in the mainframe environment.

        I sometimes wonder whether, when we've fully transferred user interactions to the browser, we'll have completed the circle and gone back to the punter being given what's effectively a dumb terminal with all the functionality in a big shed a long way away. We'll have recreated the old model, but with an array of boxes from different manufacturers in the computer hall rather than the monoculture IBM/Amdahl/DEC/Unisys/ICL sheds of old.

        1. Greenchutes

          Yes, I wonder when we'll see a Chromebook on every desk...

          1. Ilgaz

            As you said it

            I am sure a mainframe farm would handle Google's Chrome OS far better: more reliable, with less space and energy required. With the massive bandwidth, way more interesting stuff could be done.

            The price, the politics and the de facto IBM monopoly make it impossible, of course.

            1. Tim99 Silver badge
              Windows

              Re: As you said it

              Many large organizations have effectively already done this.

              In an effort to keep the nasties out, most Windows desktops are locked down so much that the user might just as well be using Chrome - no possibility of loading anything unauthorised, but still they are not really secure.

              I don't remember much of a problem with VAX/UNIX and VT220/VT241s...

        2. nematoad
          Unhappy

          "gone back to the punter being given what's effectively a dumb terminal..."

          Yes, that's exactly the way things are being driven.

          Personally I don't want to go that way, but it would seem that the money is in having a "controlled" environment à la Apple, or Google with the Chromebook and now, belatedly, MS, so that is the way that most people will be herded.

          As I only use GNU/Linux it won't be quite so bad for me, and I'll be able to keep control of my data, but it does look like all we will have is an over-powered dumb terminal serving up what the big corporations deem to be what we need.

          Orwell's 1984 for the IT industry.

          1. Anonymous Coward
            Anonymous Coward

            A stateless desktop like a Chromebook is miles cheaper to support and maintain than a full-blown Win/Mac/Lin machine. Software licensing alone on the average corporate desktop (A/V and security endpoint, disk encryption, asset/licence monitoring and compliance, software install/update services, etc.) adds up.

            It's not (just?) a control thing... simple is cheap :)

            1. Anonymous Coward
              Anonymous Coward

              @AC 13:17GMT

              I believe you shouldn't add Linux to your argument. GPL software licence monitoring and compliance, end-point protection, disk encryption and the lack of need for anti-virus add no cost to your Linux infrastructure. All you have to pay for is competent sysadmins and developers. But here lies the problem IT is facing these days, no matter whether you're using Windows, MacOS or Linux: companies do not want to pay for competent people. This is why today I have to call a helpdesk in India, who will in turn call a tech somewhere in the city, and someone will be dispatched to check the server in the room next door to my office - a server I can look at through the glass window while waiting for the problem to be solved.

              1. Anonymous Coward
                Anonymous Coward

                Re: @AC 13:17GMT

                Fair point regarding licensing for Linux... I started the thought on TCO but then pivoted to licensing and wound up with a bit of a mess :/

                Management expense applies to all (i.e. a traditional desktop is more expensive) but licensing for base 'enterprise' functionality is definitely more of an issue in the Win/Mac space.

                Good point :)

      2. Mad Mike

        Missing the point

        Your point is entirely valid in that tools and operational procedures CAN be used these days to give the same result as the mainframe operating model. However, a large part of the article was about the enforcement. In the mainframe world, because of its implementation, you had no choice but to follow these mechanisms and methods. However, it's almost impossible to force someone to do so now. It relies upon their active co-operation; something that is normally missing.

        It's very difficult to lock servers down enough these days to prevent people 'doing their own thing', and a little thing like general procedures doesn't normally stop them. Indeed, where I work, management are more than happy to allow bypassing of the procedures whenever expediency is required due to end dates etc. However, it's never put right afterwards, so...

        So, the advantage was not in doing things a certain way, but making it impossible for you to do otherwise.

  2. Mondo the Magnificent
    Thumb Up

    Memories...

    I cut my teeth in this industry with mainframes... we had a DG Eclipse MV15000 and two Eclipse MV7800s.

    These ran DG's bespoke 32-bit AOS/VS operating system and supported over 1,000 users via green-screen FIFO dumb terminals.

    Every night our backup operators would arm the dual-reel tape drive with a spool of 2400 ft tape and back all our precious data up without any fuss at all. Our programmers were confined to working within restricted memory areas combined with restricted disk space, but all our home-brewed iCobol apps worked just fine.

    All connectivity was via RS232 or RS422 if we needed longer cable runs. All our remote sites were connected via MUXes and X25 PADs, the pre-Internet era of networking.

    Every month a man from DG would come and "service" the systems, clean out the PSU filters, remove all the dust from the 19" chassis and occasionally re-align the backup tape heads. Maybe twice a year they would undertake microcode upgrades or apply "fixes" to the OS. It was all so simple, albeit very expensive.

    One day we added a LAN card and connected Novell to the server; that hailed the end of dumb terminals, as (386) PCs were cheaper than those dumb terminals. Using an IP stack over IPX/SPX and a terminal emulator, we could still view our beloved ASCII-based apps.

    Then came Windows 3.1 and then Windows for Workgroups 3.11; every user demanded it, solely to play Solitaire at lunchtime and to churn out kids' party invites on MS Write & Paint.

    We later replaced the beloved AOS/VS boxes with DG/UX and HP/UX boxes, with our applications migrated from iCobol to MFCobol, but with the addition of real reporting and data mining tools that we couldn't really get for iCobol.

    Slowly but surely, our "idle time" was being consumed by the hundreds of newfound PC users who suffered a multitude of "issues", from the bouncing ball virus to bad sectors on 1.44MB media, to "why can't I get 256 colours on Windows" type requests.

    UTP was also the order of the day, so out with the RG58 and in with the "cheapernet" cables; it seemed as if we old mainframe / WAN techies were cursed by the PC.

    In the end we became PC techs by default; the new DG and HP mini mainframes ran themselves with the help of monthly maintenance SLAs. Backup was now done on Exabyte cartridges and took half the time that it used to. Also, our new DG and HP boxes supported X Windows, so our CLI skills were eroding as we became accustomed to pointing and clicking.

    Yes, I do miss the mainframe era. Admittedly, the PC and Wintel server combo is king now, but the mainframe has its place, usually in some secluded corner of the data centre.

    The fundamental difference is that when a PC breaks, we get one user bitching, as opposed to hundreds on those rare occasions the mainframe went down, usually because of a UPS outage or the [not so rare] incident of TIFU, or Techie Induced Fuck Up...

    I miss my mainframe!

    1. Chris Miller

      Luxury! We 'ad it really tuff

      My first commercial programming in 1975 was done in pencil on paper pads lined out into 24 rows of 80 characters each. When the program was complete, you stapled the sheets together and put them into your out tray. Every 30 minutes or so the company mailman would collect them, take them to the central sorting office, and they would eventually be delivered to a specialised key centre, where a bevy of beauties (they were always female) would be waiting to transfer them to punched cards using machines the size and shape of a Welsh dresser. The batch of punched cards was then returned to you for careful proofreading, before being delivered into the hands of the operators who would (if the gods were propitious) load them into a card reader where they could be compiled on the mainframe.

      If you were really lucky, you might get the output back later that day (otherwise it would be next morning) in the form of fanfold printing, probably telling you that you'd omitted a full stop at the end of an IF statement, or that one of your Ms had been misread as an N. Experienced and skilful programmers were those who knew how to plug a tiny bit of the chad that was punched out of the cards back into a hole to allow a 1-bit correction of an erroneous character - the trick was to rub pencil over the back of the card with the chad in place, which might allow it to be read successfully. And yet somehow we managed to maintain a fair degree of productivity.

      Try telling that to t'youth of today ...

      1. This Side Up
        Thumb Up

        Re: Luxury! We 'ad it really tuff

        You had it easy! When I were a lad we 'ad to punch our programs up on 8-track paper tape and correct errors by copying from one tape to another, splicing bits together and punching holes with a hand punch. The data came in from the telex network on 5-track tape. If there was a validation error the reader would stop and you had to put it right there and then. I could read 5- and 8-track tape quite well. Then I got put on an IBM 360 project and had to learn the punch card codes. Happy days!

    2. Anonymous Coward
      Anonymous Coward

      Re: Memories...

      Yep, I cut my teeth on Big Blue's boxes. I started off as a tape monkey loading "2400" reels onto drives, checking and running overnight batches, then moved over to system support. There was a time when you almost had an apprenticeship in IT.

      I moved out to run ICL System 25s, then the PCs started to arrive and Windows was required. Then some hope! We got mandated to use IBM RS/6000 boxes, so I had to learn AIX; then basic Linux distros were appearing, which I loved playing with. I got into working with Oracle on DEC Alphas, which later got moved to Solaris boxes.

      It's not a great career plan, but I made sure I always tried to keep Windows at arm's length and stay on the command line. No one likes the command line - you don't hear kids out of college wanting to work the command line on servers these days - but as any Oracle/DB2/Sybase/Ingres/Informix/etc DBA will tell you, real DBAs and SAs like to be close to the system. Let's face it, when you've a system down that's causing your company to piss money up the wall counted in millions per second, silly toy GUIs are not going to cut it; you need the shortest, fastest way to talk to a Unix server, and the command line is king when the chips are down!

      1. Fatman
        WTF?

        Re: tried to keep Windows at arms length

        Is a very good strategy.

        While WROK PALCE uses Linux in house, the shiny boxes we get often have WindblowZE inflicted on them. (Don't rag on me about it, it is a cost issue, and I am not on good terms with the bean counters.)

        What we do to get around two issues (WindblowZE and data security) is to purchase a second hard drive, install Linux on it, and leave the WindblowZE infected hard drive completely disconnected inside the case.

        When the machine is EOLed, we pull the second hard drive and reconnect the OEM's WindblowZE hard drive. Now that system is ready to be disposed of, and we still have our data.

        The in-house joke is to put bio-hazard stickers on the WindblowZE infected drives. I wonder what reaction some (l)user would have if they bought one of our retired systems and opened it up.

        1. Anonymous Coward
          Anonymous Coward

          Re: tried to keep Windows at arms length

          Fatman, judging by your post I assume that you have just left kindergarten?

  3. Anonymous Coward
    Flame

    Two sides to every story

    I've been on both sides of this debate, and I have a lot of sympathy with the force-people-onto-managed-systems thing. But unfortunately centrally-managed IT often fails dismally to provide what people need.

    At a recent contract, working in, I guess, a development support role, we wanted to build a number of test environments. These would consist of 2-6 VMs for each environment, and perhaps we needed up to 10 environments. We needed to be able to snapshot the VMs so we could test stuff and back it out, but we didn't need backups for instance. So we went to the IT people and asked. The answer came back that a VM cost £4,000, and we couldn't have snapshots because of performance impact. So that would be £8k-£20k per environment, for something which did not meet our requirements. After a lot of fighting we managed to persuade IT that yes, we could have snapshots, but we would have to make a request every time we wanted one, or wanted to revert to one, meaning something that normally took a few seconds would instead take a few hours. This would merely cripple development rather than prevent it altogether.

    We could have bought suitable hardware and licenses to support all our environments for the cost of having IT provide one environment which just marginally met our needs.

  4. ukgnome

    Never worked on a mainframe

    but did a fair bit on 1980s minis as an operator.

    I do miss mounting tapes, but don't miss the sound of a drive failing.

    1. Anonymous Coward
      Anonymous Coward

      Re: Never worked on a mainframe

      A gnome? Mounting tapes? Drive (f)ailing? Is this some sort of advert for Hobbit Viagra? :)

  5. Eugene Crosser
    Trollface

    Sorry Dave,

    I suspect that you lived under a rock for the last decade and a half.

    Nowadays, more often than not, you will open a web browser and have your application run in a datacentre far, far away, maintained by a (hopefully) knowledgeable team. And you don't need to worry about disk or memory errors at all.

    Plus, we are always fond of the past, but "easy to use"? Really? Virus free, yes, but just as virus free as your laptop will be if you never connect it to any network.

    Troll because they live under a rock, too.

  6. catphish
    WTF?

    The cloud

    It's not a new invention? We ran our computing tasks remotely before? Well I'll be damned!

  7. Pete 2 Silver badge

    The lost art of simplicity

    The best art is created during times of stress: wars, shortages, social upheaval, revolution.

    Under those conditions people tend to focus on what's important - survival, love, getting enough to eat. Come the "good times" those same people are more concerned with obtaining more, conspicuous consumption, building their dream castles in the air.

    Mainframes tended to focus the mind. They had limitations that today would be considered impossible to live with (yet we did, and did very well) - partly because modern O/S, anti-virus, GUI, IDE and monitoring bloatware sucks up almost all of the available processing power and system resources. Luckily modern machines are sufficiently powerful that they can push through these huge overheads.

    We also had much simpler systems on mainframes. I recently saw the design for a multinational's new customer / call agent system - it filled a wall of A0 sheets, taped to the inside of a "fishbowl" meeting room. It only had to support a few thousand users and (maybe) a couple of billion records - things that a moderately sized zSeries used to do on its own.

    The difference is that this "modern" system needs to be web-accessible (with all the security overheads that entails), distributed, load-balanced and resilient, and will run, I suspect, a rather crappily designed database (which will have its original clean design mutated into an unrecognisable mess by changes, bug-fixes, new features and expediency). The design also requires a mishmash of proprietary, bespoke, third-party and OTS products bodged together into something that should nearly work properly.

    However, as someone who makes a living from helping companies sort out the fubars, cockups and dead-ends that their designers wander, aimlessly, into, I was glad to see the end of centralised, controlled and efficient mainframe architectures, and I am thankful, on a daily basis, for all the complex systems that people design today - even though these dream castles are so far outside their (and my) comprehension that there's a lifetime of assured work just waiting to be plucked.

    1. Field Marshal Von Krakenfart

      Re: The lost art of simplicity

      "We also had much simpler systems on mainframes"

      I don't know if I'd agree with that; granted, some of the applications were simpler, but the various systems interacted with each other in rather nasty ways. This was in a time when we did not use APIs to access other systems. As a result, older mainframe systems developed a deep symbiotic relationship with each other.

      "I recently saw the design for a multinational's new customer / call agent system - it filled a wall of A0 sheets"

      But the whole design process was simpler then: waterfall instead of agile. When I started programming, mainframe applications were designed using a mixture of Gane & Sarson DFDs and Jackson Structured Programming. The documentation for the last mainframe project I worked on (nearly 15 years ago) consisted of a 9-page document: a cover page, a high-level DFD, and 7 pages describing the functionality of each new program in the system, all built under the waterfall model.

      Now I'm up to my ass in PRINCE2 documentation and UML.

      There is also one other thing that the mainframe excelled at: throughput. If there is one thing the mainframe does well, it is its I/O rate and ability to crunch data.

  8. ratfox
    Happy

    Reads like an advertisement for the cloud to me

    Don't need to care where the computer is, it could be on the other side of the country…

    Don't need to worry about drivers…

    Have other people taking care of maintenance…

    Don't lose your data when your PC dies…

    For the user, what's the difference between the cloud and a mainframe?

    1. Dan 55 Silver badge
      Pirate

      Re: Reads like an advertisement for the cloud to me

      On the cloud your account can get hacked and your data copied and distributed or deleted.

  9. John Smith 19 Gold badge
    Unhappy

    You say "dumb terminal" I say "browser"

    You say "mainframe" I say "cloud"

    As for "web accessible" the answer is "that depends"

    If *all* users are on 1 site (or within a *private* network) then they only need the browser on their desktop, no *general* web access tools. Funny how few companies have actually gone the "doing the job from home" path.

    *but*

    In addition to security, Ye Olde Mainframe was definitely *somewhere*, i.e. the *legal* limits on it regarding who *outside* the company was *legally* entitled to ask for the data (i.e. needing a court order, or just a phone call from someone *claiming* to be an officer of the government) were *clearly* defined.

    Something so basic no one had to wonder about it. Now they do.

    It's got a great deal to do with *management* and I *guarantee* they will demand top-range PCs to run their *hugely* complex Excel models (yeah right) on, so why not have everyone run a PC etc.

    And so the game goes full circle.

  10. GreyWolf
    Thumb Down

    Efficient and virus-free

    1. Efficient

    Last mainframe I worked on had 64 MB of RAM and was supporting 3000 users. In the PC world, 64 MB is the minimum to boot up Win98 for one user.

    2. Virus-free

    Buffer over-runs were impossible to exploit because memory was divided up into 4K "pages" which you either owned or didn't own. The hardware detected attempts to write to any page you didn't own and promptly stabbed your program in the heart. i386 and i686 architecture is a child's mindless babble in comparison to mainframe architecture.
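
    That hardware write-trap is still there on every modern CPU, of course. A Unix-only Python sketch (the details are mine, not from the post) that gives up write ownership of one page and then writes to it - the MMU, not the language, kills the process:

      import ctypes
      import mmap

      # Map one writable page, then revoke write permission with mprotect(),
      # so the very next write triggers a hardware page fault (SIGSEGV).
      PAGE = mmap.PAGESIZE
      buf = mmap.mmap(-1, PAGE)  # anonymous mapping: read/write, page-aligned
      addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

      libc = ctypes.CDLL(None, use_errno=True)
      libc.mprotect.argtypes = (ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int)
      assert libc.mprotect(addr, PAGE, mmap.PROT_READ) == 0  # page now read-only

      ctypes.memset(addr, 0, 1)  # write to a page we no longer "own": killed here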

    Unfortunately it is nowadays common for the big software packages to insist (invalidly) on running in privileged status, because their programmers can't be arsed to learn how to write safe supervisor calls. Result? You could drive a coach-and-horses through mainframe security today.

    1. Ilgaz

      Sounds like Symbian

      It was the same deal on Symbian at the portable level. Your application either behaves or gets the kick from the kernel; you have to write efficient code.

      Having to switch to Android, I am very selective in what I use, and yet I am watching how happily applications leak memory, hog the CPU and ignore how the system demands you separate core/GUI and on-demand parts.

      A lot of stuff wouldn't be coded for Symbian for a simple reason: they couldn't get certified.

  11. Steven Jones

    "On mainframes there was generally one central scheduler where a system operator could see the details for all batch jobs across users and applications. "

    Of course that worked so well at RBS, didn't it...

    1. Fatman
      FAIL

      RE: system operator could see the details...

      But, you fail in one regard - the employment of an experienced system operator.

      We all know that RBS, in its misguided attempt to increase shareholder value and executive bonuses, hired a bunch of monkeys (sorry, inexperienced operators).

      You get what you pay for.

  12. Anonymous Coward
    Anonymous Coward

    In them days...

    ...Real programmers could punch their weight in cards...

  13. Anonymous Coward
    Linux

    Open does not equate to RUN!

    The difference being that open does not equate to run, like in VAX/VMS...

  14. This Side Up

    HCF

    I didn't try it myself, but I was told you could make an IBM chain printer catch fire by printing alternating upper-case and lower-case letters. The lower-case character set was underneath the upper-case set, so the whole chain had to be shifted up and down (while it was whizzing round) by powerful solenoids.

    Then there was the apocryphal 360 assembler instruction HCF - Halt and Catch Fire. There were more like that, but I can't remember them.

  15. James 100
    FAIL

    RBS problem

    The RBS mainframe did exactly what it was told to - perfectly reliably and efficiently. Unfortunately, what it had been told to do was something incredibly stupid which took days to fix.

    Now, a more useful rollback/undo mechanism would certainly have helped in that particular piece of software, but that's a fault of the programmer and the operator, not the technology itself: 'delete all pending transactions' would have been just as problematic if it had been ordered on a big Oracle/Solaris cluster or any other platform.

    It does make me nostalgic, seeing the efficiency achieved in those days; the first 'big' system I was sysadmin for had 384 MB of RAM and a pair of 167 MHz processors, servicing many dozens of active users plus some heavy number crunching - substantially less power than almost anything you can play Angry Birds on these days!

    I share the concern someone mentioned about the ultra-locked-down 'standard' central deployments, though. All too easy for lazy jobsworths in the middle to obstruct and impede the userbase, rather than delivering a decent service like they should! (Particularly bad in a university: when one department's users need big screens, heavy duty graphics cards and a dozen CAD packages, while another just browses Westlaw and other websites, can one size ever really fit all? Should it try?)

  16. oldcoder

    Ah the memories

    Especially about the developer who died... but continued submitting jobs to be run for six months.

    We only found out by sending him an updated user manual that got returned with a "no forwarding address, this person died last June" message.

  17. Christian Berger

    Sorry, that has little to do with mainframes

    Everything you mention can also be achieved with a simple Unix system or even a Windows terminal server. Since people rarely use more than 1% of their resources, it's quite feasible to have 20 people share one computer. Lock that one down, have software installations managed by someone who doesn't look for software by googling for "free download", and you are set. If something goes wrong, just wipe the home directory of the user and replace it with the last known good backup, or selectively wipe certain files which are likely to be infected.
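
    That wipe-and-restore step really is about this much code. A minimal sketch, assuming one known-good backup copy per user under a hypothetical /backup/homes tree:

      import shutil
      from pathlib import Path

      HOMES = Path("/home")            # hypothetical layout
      BACKUPS = Path("/backup/homes")  # assumed: one known-good copy per user

      def restore_home(user: str) -> None:
          """Wipe a user's home and replace it with the last known good backup."""
          home, backup = HOMES / user, BACKUPS / user
          if not backup.is_dir():
              raise FileNotFoundError(f"no known-good backup for {user!r}")
          if home.exists():
              shutil.rmtree(home)        # wipe the possibly-infected home
          shutil.copytree(backup, home)  # restore the clean copy

      restore_home("some_user")  # hypothetical user name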

    You don't need a mainframe with its expensive support contracts, you don't even need a midrange system. A simple "Unix-workstation" will do the job for you. In fact many years back I was running a temporary installation with about 30 heavy users on a Blueberry iBook running some sort of SuSE Linux.

    From what I have gathered, the main advantage of mainframes is that they do much of what an operator would usually do - like built-in batch processing facilities, where you simply submit your job and it gets executed whenever there's time, etc.
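
    Reduced to a toy, that submit-it-and-it-runs-when-there's-time model is just a worker draining a queue. None of the sketch below is real JES - just the shape of the idea:

      import queue
      import subprocess
      import threading

      jobs = queue.Queue()  # submitted jobs, kept in arrival order

      def operator():
          # Drain the queue forever, running one job at a time,
          # the way a (very small) central scheduler would.
          while True:
              argv = jobs.get()
              subprocess.run(argv)
              jobs.task_done()

      threading.Thread(target=operator, daemon=True).start()

      # "Submitting" a job is just enqueueing it; it runs when its turn comes.
      jobs.put(["echo", "nightly batch"])
      jobs.put(["echo", "payroll run"])
      jobs.join()  # block until everything submitted has run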

  18. Field Marshal Von Krakenfart
    Trollface

    You don't need a mainframe with its expensive support contract

    Replace them with expensive Oracle licences

    1. Christian Berger

      Re: You don't need a mainframe with its expensive support contract

      Not sure how much that costs, but I've read somewhere that the cheapest IBM setup costs about $80k a year. That's a tiny little machine (you pay for CPU power).

  19. Stephen Channell
    Meh

    the mainframe was never that secure

    The first recorded email-virus was a Christmas Tree REXX script that rendered a twinkling tree on 3270 terminals, with the addition that it sent itself to everybody on your PROFS contact list... it seized up IBM for weeks and was only purged by an emergency fix that filtered out Christmas.

    On MVS, JES enabled you to run a job on another mainframe (e.g. cataloguing a tape for another job to use), and running a job that spawned another was as simple as writing to a DD mapped to the intrdr (the JES internal reader).

    Who needed a USB drive when you could use ftp & tcp/ip to move files around and optionally submit jobs through ftp with a "quote address intrdr"... and yes, those TN3270 telnet sessions had clear-text passwords... and there was always RJE for those without a TCP/IP FEP.
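
    For anyone who never saw it, the FTP job-submission trick looks roughly like this with Python's ftplib. A sketch only, assuming the mainframe FTP server's SITE FILETYPE=JES mode (the exact quote command varies with the server); the host, credentials and JCL below are placeholders:

      from ftplib import FTP
      from io import BytesIO

      # A do-nothing job: IEFBR14 just returns.
      JCL = b"//HELLO   JOB (ACCT),'FTP SUBMIT',CLASS=A,MSGCLASS=X\n" \
            b"//STEP1   EXEC PGM=IEFBR14\n"

      ftp = FTP("mvs.example.com")      # placeholder host
      ftp.login("USERID", "PASSWORD")   # clear text on the wire, as noted above
      ftp.sendcmd("SITE FILETYPE=JES")  # subsequent STORs go to the internal reader
      print(ftp.storlines("STOR HELLO.JCL", BytesIO(JCL)))  # reply names the job
      ftp.quit()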

    No, the mainframe was no more secure, but it was: secure by obscurity; what security there was, was on by default; everything was audited; and the price for hacking was terminal.

    1. Michael Wojcik Silver badge

      Re: the mainframe was never that secure

      "The first recorded email-virus was a Christmas Tree REXX script"

      CHRISTMA.EXEC was not a virus. It was a trojan - albeit an inadvertent one, and one that did what it was supposed to do. (Like the Morris finger worm, it simply wasn't properly throttled.)

      "Who needed a USB drive when you could use ftp & tcp/ip to move files around and optionally submit jobs through ftp with a 'quote address intrdr'... and yes, those TN3270 telnet sessions had clear-text passwords"

      TCP/IP was a relative latecomer to IBM mainframe computing. More importantly, until sometime in the mid-to-late 1990s the vast majority of mainframe comms took place over private networks, leased lines, dialup, etc - circuit-switched networks that were considerably more difficult to snoop than packet-switched ones.

      Mainframes were, by design and in practice, considerably more secure than microcomputers. Most microcomputer OSes didn't even offer basic Orange Book C2 security mechanisms (e.g. ACLs) until OS/2 LAN Manager and Windows NT came along. (Minis were generally closer to mainframe security capabilities.) They certainly weren't close to B1, which is what MVS/ESA was certified at.

      With mainframes you didn't have fly-by-night lowest-bidder third-party hardware manufacturers cobbling together drivers that would run with supervisor privilege. You didn't have hundreds of obscure kernel-level mechanisms so complex that the vendor couldn't get them right after decades of refinement (see for example the Ormandy and eEye Windows VDM exploits). You didn't have brain-dead authentication mechanisms like NTLMv1.

  20. Michael Wojcik Silver badge

    What fantasy world does Mandl live in?

    "Granted, this is all far too restrictive for 21st century computing needs, and certainly not enough to make anyone wish for a return to the days of the IBM System/360."

    Tell that to the thousands of businesses who still use IBM mainframes, many of them still running code very similar to what they were running on S/360s.

    Really, didn't you "the mainframe is dead" types learn your lesson when you started proclaiming this in the late '80s, and it failed to come true then? IT moves much slower than the pundits claim. Mainframes are still very much around, and will be for a long time to come.

  21. Jacobus

    The point of this article is the disadvantages of distributed systems - OK, fine.

    However, to illustrate these points a comparison is made to mainframes and centralised systems as if these were a thing of the past - this is very unreal, as most of the computing in the world still happens on mainframes and midrange systems.

    In which reality does the writer live?

  22. rich0d

    xyzzy.

