You say "cloud" I say mainframe. You say "browser" I say "dumb terminal."
Which of course means that who runs the mainframe runs you.
Something to think about.
IBM's System 360 mainframe, celebrating its 50th anniversary on Monday, was more than just another computer. The S/360 changed IBM just as it changed computing and the technology industry. The digital computers that were to become known as mainframes were already being sold by companies during the 1950s and 1960s - so the S …
"Which of course means that who runs the mainframe runs you."
Perhaps, but not quite. We hacked our 360!
Fellow students and I hacked the student batch process on our machine to overcome the restrictive limitations the uni placed on our use of it (this was long before the term 'hacker' came into use).
(I've fond memories of being regularly thrown out of the punch card room at 23:00 when all went quiet. That 360 set me on one of my careers.)
True, IBM had the "cloud" in place in the 60s.
Which of course means that who runs the mainframe runs you.
Whereas with PCs, you're untouchable?
Honestly, why do people still trot out these sophomoric oversimplifications?
The S/390 name is a hint to its lineage: S/360 -> S/370 -> S/390 (I'm not sure what happened to the S/380). Having made a huge jump with the S/360, IBM tried to do the same thing in the 1970s with the Future Systems project. That turned out to be a huge flop: lots of money spent on creating new ideas that would leapfrog the competition, but it ultimately failed. Some of the ideas emerged in the System/38 and went on into the original AS/400s, like having a queryable database for the file system rather than what we are used to now.
The link to NASA with the S/360 is explicit in JES2 (Job Entry Subsystem 2), the element of the OS that controls batch jobs and the like. Messages from JES2 start with the prefix HASP, which stands for Houston Automatic Spooling Priority.
As a side note, CICS was developed at Hursley Park in Hampshire, though it wasn't started there. CICS system messages start with DFH, which allegedly stands for Denver Foot Hills - a hint to its physical origins; IBM swapped the development sites for CICS and PL/I long ago.
I've not touched an IBM mainframe for nearly twenty years, and it worries me that I have this information still in my head. I need to lie down!
I have great memories of being a Computer Operator on a 360/40. They were amazingly capable and interesting machines (and peripherals).
ESA is the bit that you are missing - the whole extended addressing thing: data spaces, hyperspaces and cross-memory extensions.
Fantastic machines though - I learned everything I know about computing from Principles of Operation and the source code for VM/SP - they used to ship you all that, and send you the listings for everything else on microfiche. I almost feel sorry for the younger generations who will never see a proper machine room with the ECL water-cooled monsters and attendant farms of DASD and tape drives. After the 9750s came along they started to look like very groovy American fridge-freezers.
Mind you, I can get better mippage on my Thinkpad with Hercules than the 3090 I worked with back in the 80's, but I couldn't run a UK-wide distribution system, with thousands of concurrent users, on it.
Nice article, BTW, and an upvote for the post mentioning The Mythical Man Month; utterly and reliably true.
Happy birthday IBM Mainframe, and thanks for keeping me in gainful employment and beer for 30 years!
"I've not touched an IBM mainframe for nearly twenty years, and it worries me that I have this information still in my head. I need to lie down!"
"Business Application Programming"..."Basic Assembly Language"..."COBOL"...hand written Flowcharts...seems as though I have those same demons in my head.
I started programming on an IBM 360/67 and have programmed several IBM mainframe computers since. One of the reasons these machines could handle large amounts of data is that they communicated with terminals in EBCDIC characters (IBM's 8-bit character code, filling the same role as ASCII). It took very few of these characters to drive the 3270 display terminals, while modern x86 computers use a graphical display and need a lot of data transmitted to paint a screen.

I worked for a company that had an IBM 370-168 with VM running both OS and DOS guests. We had over 1,500 terminals connected to this mainframe across 4 states. IBM had envisioned VM/CMS as the interactive solution; CICS was only supposed to be a temporary way of handling display terminals, but it became the mainstay in many shops. Our shop had over 50 3330 300-meg disk drives online with at least 15 tape units. On those old 370 CICS systems, the screens were kept separate from the program.

These machines are in use today, in part, because the cost of converting to x86 is prohibitive. JCL (Job Control Language) was used to initiate jobs, but unlike modern batch files it would attach resources such as a disk drive or a tape to the program. This is totally foreign to any modern OS. Linux or Unix can come close, but MS products are totally different.
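For readers who never met it, a minimal sketch of the kind of JCL the comment describes (job, program, and dataset names are invented for illustration): the DD statements attach a disk dataset and a tape file to the program step before it runs, which is exactly the resource-binding that has no direct equivalent in modern batch files.

```jcl
//PAYROLL  JOB  (ACCT),'NIGHTLY RUN',CLASS=A
//STEP1    EXEC PGM=PAYCALC
//* Bind an existing disk dataset to the DDNAME the program opens
//MASTER   DD   DSN=PROD.PAYROLL.MASTER,DISP=SHR
//* Allocate a tape volume for the step's output
//BACKUP   DD   DSN=PROD.PAYROLL.BKUP,UNIT=TAPE,
//             DISP=(NEW,KEEP)
//SYSPRINT DD   SYSOUT=A
```

The program itself only ever refers to the DDNAMEs (MASTER, BACKUP); which physical device or dataset sits behind each one is decided by the JCL, not the code.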
The S/380 was the Future Systems project that was cut down to become the System/38 mini.
HASP was the original "grid scheduler" in Houston, running on a dedicated mainframe and scheduling work to the other 23 mainframes under the bridge. I nearly wet myself with laughter reading DataSynapse documentation and their "invention" of a job-control language. 40 years ago HASP was doing Map/Reduce to process data faster than a tape drive could handle.
If we don't learn the lessons of history, we are destined to IEFBR14!
As a senior IT bod said to me one time, when I was doing some work for a mobile phone outfit:
"it's an IBM engineer getting his hands dirty".
And so it was: a hardware guy, with his sleeves rolled up and blood and grime on his hands, replacing a failed board in an IBM mainframe.
The reason it was so noteworthy, even in the early 90s, was that it was such a rare occurrence. It was probably one of the major selling points of IBM computers that they didn't blow a gasket if you looked at them wrong (the other one, with just as much traction, being the ability to do a fork-lift upgrade over a weekend and know it would work).
The reliability and compatibility across ranges is why people choose this kit. It may be arcane, old-fashioned, expensive and untrendy - but it keeps on running.
The other major legacy of OS/360 was, of course, The Mythical Man-Month, whose readership is still the most reliable way of telling the professional IT managers from the wannabes who only have buzzwords as a knowledge base.
They were bloody good guys from IBM!
I started off working on mainframes around 1989, as a graveyard-shift "tape monkey" loading tapes for batch jobs. My first solo job was as a Unix admin on a set of RS/6000 boxes. I once blew out the firmware and a test box wouldn't boot. I called out an IBM engineer after I had completely "futzed" the box; he came out and spent about 2 hours with me teaching me how to select and load the correct firmware. He then spent another 30 mins checking my production system with me, and even left me his phone number so I could call him directly if I needed help when I did the production box. I did the prod box with no issues because of the confidence I got from the time he spent with me. Cheers!
"It was probably one of the major selling points of IBM computers (the other one, with just as much traction, is the ability to do a fork-lift upgrade in a weekend and know it will work.) that they didn't blow a gasket if you looked at them wrong."
When I was in school for Business Applications Programming way back when... I was entering code on an S/370 for a large application I had written... and managed to lock it up so badly... the school had to send everyone home for the day so the techs could figure out just what the heck I had done wrong.
Took the operators a good part of the evening to sort it.
First time anyone had ever managed to hose one they told me. Felt quite proud, actually. :-)
An ex-IBM guy told me this was the reasoning behind the coat-and-tie rule: if you see a man in a suit and tie, you figure he assumes the machine will work, and that he will not have to crawl on the floor fixing it.
I'm sure that's what you think, but the 360 was very simple to debug and troubleshoot, and it also had a simple button to reboot and clear all of memory. So it's not clear what you could have done that would have persisted across a reboot.
Sure, you could have written on the 29MB hard drive, or over the stand-alone loader at the front of the tape if you were booting from tape... or got the cards out of sequence if booting from the card reader... but that was hardly the mainframe's fault...
IBM Mainframe guy and Knight of VM, 1974-2004.
A typo for 6-bit? (e.g. the ICT 1900)?
"The initial 1900 range did not suffer from the many years of careful planning behind the IBM 360."
-- Virgilio Pasquali
The typo must be fixed, the article says 6-bit now. The following is for those who have no idea what we are talking about.
Generally, machines prior to the S/360 were 6-bit if character-oriented or 36-bit if word-oriented. The S/360 was the first IBM architecture (thank you, Drs Brooks, Blaauw and Amdahl) to provide both data types with appropriate instructions, to include a "full" character set (256 characters instead of 64), and to provide a concise decimal format (2 digits in one character position instead of 1); 8 bits was chosen as the "character" length. It did mean a lot of Fortran code had to be reworked to deal with 32-bit single precision or 32-bit integers instead of the previous 36-bit.
If you think the old ways are gone, have a look at the data formats for the Unisys 2200.
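The "2 digits in one character position" above is packed decimal. A minimal Python sketch of how that packing works, assuming the S/360 convention of two BCD digits per byte with a trailing sign nibble (0xC positive, 0xD negative):

```python
def pack_decimal(n: int) -> bytes:
    """Pack an integer into S/360-style packed decimal:
    two decimal digits per byte, sign nibble (C/D) last."""
    digits = str(abs(n))
    nibbles = digits + ("d" if n < 0 else "c")
    if len(nibbles) % 2:          # pad to a whole number of bytes
        nibbles = "0" + nibbles
    return bytes.fromhex(nibbles)

def unpack_decimal(b: bytes) -> int:
    nibbles = b.hex()
    sign = -1 if nibbles[-1] == "d" else 1
    return sign * int(nibbles[:-1])

# 1234 fits in 3 bytes instead of 4 one-digit-per-character bytes
assert pack_decimal(1234) == bytes([0x01, 0x23, 0x4C])
assert unpack_decimal(pack_decimal(-987)) == -987
```

This is why commercial code liked the format: amounts stayed exact decimal (no binary rounding) while taking roughly half the space of one digit per character.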
One of the major design issues through the '60s and '70s was word size. Seymour Cray's CDCs were 60-bit, which has seductively many factors. But in the end, 2^n-bit words won out.
Came with the S/370, not the S/360, which didn't even have virtual memory.
The 360/168 had it, but it was a rare beast.
Nope. CP/67 was the forerunner of IBM's VM. Ran on S/360
S/360 Model 67 running CP67 (with CMS, which became VM) or the Michigan Terminal System. The Model 67 was a Model 65 with a DAT box to support paging/segmentation, but CP67 only ever supported paging (I think - it's been a few years).
The 360/168 had a proper MMU and thus supported virtual memory. I interviewed at Bradford University, where they had a 360/168 with which they were doing all sorts of things that IBM hadn't contemplated (like using conventional glass teletypes hooked to minicomputers so they could emulate the page-based - and more expensive - IBM terminals).
I didn't get to use an IBM mainframe in anger until the 3090/600 was available (where DEC told the company that they'd need a 96 VAX cluster and IBM said that one 3090/600J would do the same task). At the time we were using VM/TSO and SQL/DS, and were hitting 16MB memory size limits.
I'm not sure that the 360/168 was a real model. The Wikipedia article does not think so either.
As far as I recall, the only /168 model was the 370/168, one of which was at Newcastle University in the UK, serving other Universities in the north-east of the UK, including Durham (where I was) and Edinburgh.
They also still had a 360/65, and one of the exercises we had to do was write some JCL in OS/360. The 370 ran MTS rather than an IBM OS.
As usual Wikipedia isn't a comprehensive source. See for example http://books.google.co.uk/books?id=q2w3JSFD7l4C&pg=PA139&lpg=PA139&dq=ibm+360/168+virtual+memory&source=bl&ots=i3OQQExn_i&sig=MTqFlizLAFWINMmVkqgr_OhdbsY&hl=en&sa=X&ei=ZvtCU-mbHMmd0QX6vIHgAw&ved=0CFEQ6AEwBQ#v=onepage&q=ibm%20360%2F168%20virtual%20memory&f=false
You're right. The 360/67 was the first VM machine - I had the privilege of trying it out a few times. It was a bit slow, though. The first version of CP/67 only supported 2 terminals, as I recall... The VM capability was impressive: you could treat files as though they were in real memory - no explicit I/O necessary.
The 370-168 was real. I worked for GTEDS, which used it. We had to run VM because we had old DOS programs which ran under DUO. We evaluated IMS/DC, but it was DOA as far as being implemented. The 360-67 was a close imitator: it was a 65 plus a DAT (Dynamic Address Translation) box. The IBM System/38 was a dog that no one wanted. The funny thing is that it had a revolutionary approach, but I was part of a team that evaluated one, and it was a slow dog. We chose a Prime 550 instead. Do any of you remember much about Amdahl computers and Gene Amdahl? They produced a competitor to the 360.
Not everything was invented for the S/360 or came from IBM. Disk storage dated from IBM's 305 RAMAC - Random Access Method of Accounting and Control - machine in 1956. COBOL - a staple of mainframe applications - dated from 1959 and was based on the work of Harvard Mark I and UNIVAC programmer Grace Hopper.
That'd be Admiral Grace Hopper… it was in the US Navy that she became acquainted with the ways of computer programming.
As for COBOL --- Grace Hopper retired three times, and was brought back by the Navy twice, before they finally let her go as a Rear Admiral.
In fact Grace Hopper was one of the few people for whom what was normally a euphemism, "let her go", was strictly factual.
You may want to remember that she wasn't Active Duty - I don't believe she ever was; women couldn't hold Active Duty billets until the '70s. She was in the US Navy Reserve, which is a different component from the active-duty Navy, with different personnel policies. They use Reservists as full-timers an awful lot in that Service Branch: a bunch of the frigates and almost all of the few remaining non-Military Sealift Command auxiliaries are crewed by full-time USNR crews with Active-component officers. The entire US Department of the Navy (including the Marine Corps) also has a real penchant for letting officers retire and then bringing them back, just so you know. They're better about preserving skills and institutional memory than most of the Army and just about the entire Air Force, though.
Knowing the Defense Department, she was probably brought back to active-duty status and then dropped back to Selected Reserve status more than twice - probably at least once every year that she wasn't in the Retired Reserve. With flag rank it is most probably different, and there is probably a difference between the way the Navy manages officers and the way the Army (which I'm used to) does. And to make things complex, we also have full-timers in the Army Reserve - we call them AGR; they're mostly recruiters and career counselors, and some other niche functions are often reservists too, usually because the Combat Arms that run the Regular Army don't especially see the need for them. If they could do the same to Signal and MI and get away with it, believe me, they would.
I'm a reservist, and every year when I go back on Active Duty for Annual Training, or get deployed, I technically get "let go" when my AD period or deployment ends: I go back into the Selected Reserve at part-time status and get another DD-214 for my trouble, with my extra two weeks (or year to 18 months) calculated in days and added to my days of service (effectively my retirement tracker at this point). That continues until the contract the US Government has with me ends and I choose not to renew it, take my pension and transfer into the Retired Reserve - or until Human Resources Command chooses not to let me renew my contract, if I really bomb an evaluation or piss someone off, don't get promoted, and am dismissed or retired early.
This was a big factor in the profitability of mainframes. There was no such thing as an 'industry-standard' interface - either physical or logical. If you needed to replace a memory module or disk drive, you had no option* but to buy a new one from IBM and pay one of their engineers to install it (and your system would probably be 'down' for as long as this operation took). So nearly everyone took out a maintenance contract, which could easily run to an annual 10-20% of the list price. Purchase prices could be heavily discounted (depending on how desperate your salesperson was) - maintenance charges almost never were.
* There actually were a few IBM 'plug-compatible' manufacturers - Amdahl and Fujitsu. But even then you couldn't mix and match components - you could only buy a complete system from Amdahl, and then pay their maintenance charges. And since IBM had total control over the interface specs and could change them at will in new models, PCMs were generally playing catch-up.
So true re the service costs - "Field Engineering" was a profit centre, and a big one at that. Not true regarding having to buy "complete" systems for compatibility, though. In the '70s I had a room full of CDC disks on a Model 40, bought because they were cheaper and had a faster linear-motor positioner (the thing that moved the heads), while the real 2311s used hydraulic positioners. It was a bad day when there was a puddle of oil under the 2311.
That was brave! I genuinely never heard of anyone doing that, but then I never moved in IBM circles.
"This was a big factor in the profitability of mainframes. There was no such thing as an 'industry-standard' interface - either physical or logical. If you needed to replace a memory module or disk drive, you had no option* but to buy a new one from IBM and pay one of their engineers to install it (and your system would probably be 'down' for as long as this operation took). So nearly everyone took out a maintenance contract, which could easily run to an annual 10-20% of the list price. Purchase prices could be heavily discounted (depending on how desperate your salesperson was) - maintenance charges almost never were."
Back in the day one of the Scheduler software suppliers made a shed load of money (the SW was $250k a pop) by making new jobs start a lot faster and letting shops put back their memory upgrades by a year or two.
Mainframe memory was expensive.
Now owned by CA (along with many things mainframe) and so probably gone to s**t.
Done with some frequency. In the DoD agency where I worked we had mostly Memorex disks as I remember it, along with various non-IBM as well as IBM tape drives, and later got an STK tape library. Occasionally there were reports of problems where the different manufacturers' CEs would try to shift blame before getting down to the fix.
I particularly remember rooting around in a Syncsort core dump that ran to a couple of cubic feet from a problem eventually tracked down to firmware in a Memorex controller. This highlighted the enormous I/O capacity of these systems, something that seems to have been overlooked in the article. The dump showed mainly long sequences of chained channel programs that allowed the mainframe to transfer huge amounts of data by executing a single instruction to the channel processors, and perform other possibly useful work while awaiting completion of the asynchronous I/O.
@ChrisMiller - The IBM I/O channel was so well specified that it was pretty much a standard. Look at what the Systems Concepts guys did: a DEC-10 I/O-and-memory-bus to IBM-channel converter. We had one of those in the Imperial HENP group so we could use IBM 6250bpi drives, as DEC were late to market with them. And the DEC 1600bpi drives were horribly unreliable. The IBM drives were awesome. It was always amusing explaining to IBM techs why they couldn't run online diags, on the rare occasions when they needed fixing.
Any of you guys ever play around with EXCP in BAL? I had to maintain a few systems that used it because a non-standard tape unit was involved.
It all comes flooding back.
A long CCW chain, some of which are the equivalent of NOPs in channel talk (where did I put that green card?), with a TIC (Transfer In Channel - think branch) at the bottom of the chain back to the top. The idea was to take an interrupt (PCI) on some CCW in the chain and get back in time to convert the NOPs to real CCWs, continuing the chain without ending it. That was certainly the way the page pool was handled in CP67.
And I too remember the dumps coming on trolleys. There was software to analyse a dump tape, but that name is now long gone (as is the origin of most of the problems in the dumps). Those were the days when I could not just add and subtract in hex but multiply as well.
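The looping-chain trick above can be modelled in a few lines of Python. This is only an illustrative toy - the command set, the interrupt-on-TIC timing, and the refill policy are all simplifications invented here, not real S/360 channel semantics - but it shows the shape of the idea: a TIC at the bottom branches back to the top, and the CPU's interrupt handler patches NOP slots into real CCWs before the channel reaches them, so the chain never ends.

```python
from dataclasses import dataclass

@dataclass
class CCW:
    cmd: str            # 'READ', 'NOP' or 'TIC' (toy command set)
    arg: object = None  # data for READ, target index for TIC

def run_channel(chain, on_pci, max_steps=20):
    """Walk the chain like a channel would: TIC branches, NOP does
    nothing, READ 'transfers' data. on_pci() is called each lap so
    the 'CPU' can patch NOPs into real CCWs, keeping the chain alive."""
    transferred, pc = [], 0
    for _ in range(max_steps):
        ccw = chain[pc]
        if ccw.cmd == 'TIC':
            pc = ccw.arg
            on_pci(chain)            # interrupt: CPU refills the chain
        else:
            if ccw.cmd == 'READ':
                transferred.append(ccw.arg)
                chain[pc] = CCW('NOP')   # slot consumed, back to NOP
            pc += 1
    return transferred

# CPU side: patch one NOP slot per interrupt from a work queue
work = ['page1', 'page2', 'page3']
def refill(chain):
    for i, ccw in enumerate(chain):
        if ccw.cmd == 'NOP' and work:
            chain[i] = CCW('READ', work.pop(0))
            return

chain = [CCW('READ', 'page0'), CCW('NOP'), CCW('TIC', 0)]
out = run_channel(chain, refill)   # collects page0..page3 lap by lap
```

The payoff, as the earlier comment about Syncsort dumps notes, is that the CPU issues one start-I/O and then does other work while the channel streams data asynchronously.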
Fred Brooks' seminal work on the management of large software projects was written after he managed the design of OS/360. If you can get around the mentions of secretaries, typed meeting notes and keypunches, it's required reading for anyone who manages a software project. Come to think of it... *any* engineering project. I've recommended it to several people and been thanked for it.
// Real Computers have switches and lights...
The key concepts of this book are as relevant today as they were back in the 60s and 70s - it is still oft quoted ("there are no silver bullets" being one I've heard recently). Unfortunately fewer and fewer people have heard of this book these days and even fewer have read it, even in project management circles.
Surely "there are no silver bullets" doesn't apply anymore now that we have the cloud and web and hadoop and node.js ?
Silver bullets don't kill managers, only werewolves, vampires and the like. You'd still have managers even with the cloud, etc.
Indeed -- I usually re-read the anniversary edition once a year or so. (Amusingly, there is a PM book out called "The Silver Bullet" that is not bad but won't slay the werewolf.)
The first to use transistors instead of valves, and it had a binary front panel. 24 x 24-bit multiply too, and if you paid extra you got FORTRAN (I only used assembler on it though).
I've been in IT since the 1970s.
My understanding from the guys who were old-timers when I started was that the big thing with the 360 was the standardized op codes that would remain the same from model to model - enhanced, yes, but never would an op code be withdrawn.
The beauty of IBM s/360 and s/370 was you had model independence. The promise was made, and the promise was kept, that after re-writing your programs in BAL (360's Basic Assembler Language) you'd never have to re-code your assembler programs ever again.
Also, the relocating loader and the method of link-editing meant you didn't have to re-assemble programs to run them on a different computer: either they would simply run as-is, or they would run after being re-linked. (When I started, linking might take 5 minutes where re-assembling might take 4 hours, for one program. I seem to recall talk of assemblies taking all day in the 1960s.)
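The relocation idea above can be sketched in a few lines of Python (the object-module layout and names here are invented for illustration, not the real OS/360 format): the assembler's output carries a relocation dictionary listing which words hold addresses, and the loader simply adds the new base address at those spots instead of re-assembling anything.

```python
def relocate(code, reloc_dict, base):
    """Patch address constants in an object module for a new load
    address. 'code' is a list of words; 'reloc_dict' lists the
    indices of words that hold addresses assembled relative to 0."""
    patched = list(code)
    for i in reloc_dict:
        patched[i] = code[i] + base
    return patched

# Toy module assembled as if loaded at 0: word 2 is an address constant
module = [0x5840, 0x0008, 0x0010, 0x07FE]
relocs = [2]

# Load the same module at two different addresses - no re-assembly
at_1000 = relocate(module, relocs, 0x1000)
at_8000 = relocate(module, relocs, 0x8000)
assert at_1000[2] == 0x1010
assert at_8000[2] == 0x8010
```

That asymmetry - patching a handful of words versus re-translating the whole source - is exactly why linking took minutes where assembly took hours.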
I wasn't there in the 1950s and '60s, but I don't recall anyone ever boasting about how 360s or 370s were cheaper than competitors.
IBM products were always the most expensive, easily the most expensive, at least in Canada.
But maybe in the UK it was like that. After all, the UK had its own native computer manufacturers that IBM had to squeeze out, despite patriotism still being a thing in business at the time.
Good question particularly as we now have several decades of experience of 'cheap' computing and the current issue of vendor forced migration from Windows XP.
We were developing CAD/CAM programs in this environment starting in the early eighties, because it's what was available then - the system was already in use for stock control in a large electronics manufacturing operation. We fairly soon moved this Fortran code onto smaller machines: DEC VAX minicomputers and early Apollo workstations. We even had an early IBM PC in the development lab, but initially this was more a curiosity than something we could do much real work on. The Unix-based Apollo and early Sun workstations were much closer to what later PCs became, once those acquired similar amounts of memory, X-Windows-like GUIs, more respectable graphics and storage, and multi-user operating systems.