Virtualisation is not a novelty. It's actually one of the last pieces of the design of 1960s computers to trickle down to the PC – and only by understanding where it came from and how it was and is used can you begin to see the shape of its future in its PC incarnation. As described in our first article in this series, current …
IPL your own S/360 guest on VM... ahh happy days!
Thanks for this trip down memory lane!
My first job as a graduate programmer in 1990 was writing code for IBM's NetView network management product, and to test our code we needed our "own" NetView system.
So: fire up some JCL to IPL an OS/360 guest on top of VM. Job done.
Twenty years later, when I send the guys on my teams on VMware training courses and they come back all fired up on virtualisation nirvana, I have to chuckle and remember that there is nothing new under the sun :-)
Ahh Happy Days??
Ahh, first of all there is no "JCL to IPL an OS/360".
IPLing is first a microcode function, then reading a bootstrap off a drive, then searching the drive for a specific dataset, then reading that dataset and loading the OS (at least for OS/360, S/370, OS/390 and z/OS). JCL doesn't really exist in the OS until you start an initiator (OS/360 days, not now); for z/OS, Master is started with dummy JCL (it doesn't exist).
Ahh, and second, either your timing is off or your narrative is wrong: NetView did not exist for OS/360. The first OS to support it wasn't until the '70s, and then it was called NCCF; NetView came as a follow-on to NCCF, perhaps in the late '70s or '80s. So you are off by at least 10 years on the NetView.
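Purely as an illustrative sketch (the stage names below are mine, not IBM's microcode labels), the IPL sequence described above looks something like:

```python
# Illustrative model of the IPL (Initial Program Load) sequence described
# above -- hardware/microcode first, JCL nowhere in sight until much later.

IPL_STAGES = [
    ("microcode", "hardware IPL function reads the first record from the IPL device"),
    ("bootstrap", "bootstrap program is read off the drive"),
    ("locate",    "drive is searched for the nucleus dataset"),
    ("load",      "nucleus dataset is read and the OS is loaded"),
]

def describe_ipl():
    """Return the IPL stages, in order, as human-readable lines."""
    return [f"{name}: {detail}" for name, detail in IPL_STAGES]

for line in describe_ipl():
    print(line)
```

Note that nothing in the sequence involves JCL: job control only enters the picture once the loaded OS starts an initiator.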
As I recall, the LPARs are managed by PR/SM and are fairly fixed divisions of the hardware. The VM operating system would then run in an LPAR and allocate those resources in a more dynamic, fluid way. Of course, I last did this in the late 80's/early nineties, all may have changed, as it does.
The HARDWARE is the problem
As I see it the PC hardware is simply not suited for virtualization; we need Intel-based heavyweight servers which are NOT necessarily DOS/Windows compatible. We need hardware supporting much more throughput than DOS/Windows compatible servers can provide. I may be wrong... but again... I may be right.
Re: The HARDWARE is the problem
Er, I think you've just specified a mainframe, but one crippled with the x86 baggage.....
@hardware is the problem
Yes and no:
Yes because the whole x86 is a horrible kludge that only became successful due to DOS/Windows (and the resulting applications) running only on it, and so Intel was able to spend sh*t loads of money to make a basically crap design good value for money.
Shame it was not spent on the ARM...
No because of why you are likely to want a VM, and this is where I disagree with the author saying "the VM should emulate the same hardware as the host":
Typically a strong reason for a VM is you want or need to run some horrible old OS+application that you have no practical or economic method of replacing. As such the VM has to look like *supported hardware* for possibly a decade or older OS.
It is so nice being able to move a Windows VM from one type of x86 host to another without needing to change drivers, re-install, re-licence, etc. Host could be Windows or Linux, CPU could be AMD or Intel, running 32 or 64 bit modes, and my old w2k and XP VMs and the various useful but difficult (or expensive) to replace applications work just fine!
down memory lane
Aah! CP/67. I remember it well from my days at Big Blue. Another of those not-quite-official products that kicked the arse of the mainstream offerings. And so became the grudgingly (to the suits) acceptable VM/370.
System 370 hardware
Along with the System 370 hardware that added the hardware support for virtualization that the System 360 lacked.
what took the x86 so long?
Something that hasn't been mentioned is that both the 68000 family (from the 68010 onward) and the PowerPC family supported full hardware-assisted virtualization without the kind of partial-emulation hacks needed in VMware and without the active cooperation of the guest operating system as required by the likes of Xen.
Re: what took the x86 so long?
See the first article in the series, in particular the bit about execution privilege rings.
If you are like me, the words "pig's ear" will spring irresistibly to mind......
See also: VME
The virtual machine environment done properly.
No Mention of the System38/AS400/iSeries...
That, if IBM had thought about it a bit, could have been the personal computer of choice rather than the Wintel PC itself.... It was and still is far more advanced than the PC/Mac, with seamless movement of user data from one piece of hardware to another and a filesystem-level database as standard; the parts of the design that aren't 64-bit are 128-bit, and have been for nearly 20 years...
Instead what has actually happened is that these systems are reduced to having a poor stab at emulating the inferior unix concepts.
The '38 required a room to put it in; air conditioning was de rigueur and so was three-phase power. There was a certain amount of fresh air in the box, but that's only 'cos the internal "piccolo" drives went out of fashion in favour of external arrays over its long lifespan.
By the time the small '400s came out, commodity PCs already had their feet well under the table, Novell had the fileserver business sewn up and Windows was busily pwning the desktop. Also here you have to remember that the price premium on small '400s was waaay over that of a PC server and they still sold like hot cakes as thousands of '38 shops, who'd been hammering on the limits for ages, all sought to upgrade and expand at once. IBM have never cut prices when they don't *have* to. Also here you have to remember that the original "small" AS/400s were only small by comparison to the full-fat ones. It wasn't 'til the "F" (I think??) series shipped that the PC fileserver sized ones came out.
Bloody brilliant the '38 was though. 200 users hammering the shit out of it in realtime, half a dozen heavy batch tasks running, and that "all the grunt of a 286" CPU kept it all spinning away nicely. The one that always makes me smile though is the '38s 4TB memory model (as seen in that nice "memory pyramid" piccy in the "welcome to your shiny new System/38" manual). Effectively saying: "Nobody will ever need more than 4TB". How Bill reckoned 640k would be adequate in the light of that..........
The only Achilles' heel of the '38 was its communications. Firstly in that they were the last of the Great Black Arts in this business and secondly 'cos it never got any LAN capabilities and 64kbits was as fast as it went.
 Ok, you could squeeze one into a shipping container. Just.
 Ok, as that's a single-level model and includes disk, tape etc. it's looking shonky now, but still.....
 Yes, I *know* the winnie connectors on the comms controllers only went to 56. There was another way......
concepts from the much maligned TSS/360
The "single-level store" concept around which the S/38, AS/400 and i-series were built came from TSS/360 -- which was unfortunately way ahead of its time (and hardware capable of running it with decent performance).
Saw my first AS400 in a cupboard.
Thought it was an oversize air conditioning unit till I looked closer.
I think that was a B model, so pretty much a baby even by the standards of the day.
The comparison in the article was about virtualisation being invented a long time back as part of the S360 environment...
Well I started my DP career on the "new" at the time S370, in reality it was a bit of 360 and a bit of 370... And the three machines in my computer room, and their peripherals occupied somewhere around 1/3 acre of floor space...
I was taking it as read that miniaturisation was a fact.
Oh, and along with the air conditioning, we had our own emergency power supply which ran on diesel, and a halon gas fire protection system.
And I realise that SNA was a proprietary comms system, but that too would have sped up over the years.... (as a proprietary system, though, it was almost faultless and failsafe).
Basically what I am saying is that, in their heyday, the AS/400 had far more potential than the crude PC that we actually got, and that, allowing for the obvious room for improvement in every system mentioned by commenters and the original author, it would have been a better starting point, had IBM had a bit more vision.
The massive (failed) Future System effort in the early '70s was going to completely replace the 370, and drew heavily on the single-level-store design from TSS/360. The folklore is that when FS failed, several people retreated to Rochester and did a simplified FS subset as the S/38.
I had learned a lot at the university watching tss/360 testing and its comparison with cp67/cms. Later, at the science center in the '70s (during the Future System period), I continued to do 360/370 stuff ... including a page-mapped filesystem for CMS (which never shipped in the standard product) ... avoiding a lot of the tss/360 pitfalls. I would also periodically ridicule the FS effort, with comments that what I already had running was better than their bluesky stuff.
One of the shortcomings of the simplified s/38 single-level store was that it treated all disks as a common pool of storage, with scatter allocation across the pool. As a result, all disks had to be backed up as a single integral filesystem, and any single disk failure required a whole-filesystem restore (folklore tells of the extended length of time it took to do a complete restore after a single disk failure). Single disk failures were a fairly common failure mode, and the s/38 approach scales up poorly to an environment with 300 disks (or more) ... aka on any disk failure, take down the whole system while the complete configuration is restored (on top of the length of time the system would be down for a complete backup).
This shortcoming was the motivation for the s/38 being an early adopter of RAID technology ... as a means of masking single disk failures.
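The scatter-allocation failure mode described above can be illustrated with a toy model (entirely my own, not the actual s/38 allocator): scatter each object's pages across a common pool of disks, and losing any one disk damages almost every object, hence the whole-filesystem restore.

```python
import random

# Toy model: objects scattered page-by-page across a pool of disks.
# With enough pages per object, nearly every object touches every disk,
# so one disk failure damages almost everything.

def scatter_allocate(num_objects, pages_per_object, num_disks, seed=1):
    rng = random.Random(seed)
    # object id -> set of disks holding at least one of its pages
    return {
        obj: {rng.randrange(num_disks) for _ in range(pages_per_object)}
        for obj in range(num_objects)
    }

placement = scatter_allocate(num_objects=200, pages_per_object=50, num_disks=10)
failed_disk = 3
damaged = sum(1 for disks in placement.values() if failed_disk in disks)
print(f"{damaged} of {len(placement)} objects damaged by one disk failure")
```

With 50 pages spread over 10 disks, the chance an object avoids any given disk is (9/10)^50, about half a percent, so a single failure touches essentially the whole store.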
Another Trip Down Memory Lane
Still have my "green card" somewhere in a box...
When considering multiprogramming on S/370
You just cannot ignore the Michigan Terminal System (MTS).
When IBM was adamant that it would not produce a time-sharing OS for the 360, the University of Michigan decided to write its own OS, maintaining the OS/360 API so that stock IBM programs worked unchanged, while allowing them to be multi-tasked.
IBM actually co-operated: the S/360-65M was a (supposedly) one-off special that IBM made just for Michigan, providing dynamic address translation which allowed virtual address spaces for programs. It led to the S/360-67, which was one of the most popular 360 models and influenced the S/370 design.
I used MTS between 1978 and 1986 at university at Durham, and when I worked at Newcastle Polytechnic on a S/370-168 and an Amdahl 5870 (I think), and I found it a much more enjoyable environment than VM/CMS, which was the then-current IBM multitasking offering.
Look it up, you might be surprised what it could offer. There are many people with fond memories of the OS.
On the subject of Amdahl, they produced the first hardware VM system with their Multiple Domain Facility (MDF), which I later used when running UTS and R&D UNIX on an Amdahl 5890E. During an oh-so-secret-under-non-disclosure-agreement, we were told by IBM in about 1989 about a project called Prism, which was supposed to be a hardware VM solution that would allow multiple processor types (370, System 36 and 38, and a then unannounced RISC architecture, probably the RS/6000) in the same system, sharing peripherals and (IIRC) memory. Sounds a lot like PR/SM on the zSeries! Took them long enough to get it working.
Wow! I'd forgotten that there were UK Universities running MTS. Somewhere I still have some MTS manuals from my days at Rensselaer Polytechnic Institute, along with my yellow card. Thanks for the memories!
I too used MTS in Durham from 1976. It still staggers me that we could do interactive image processing on it while there were a zillion other users also hammering away. Happy days.
Marketing baby, marketing.....
Nothing new under the sun, just better marketing, wasn't VMware started by ex-ibmers too?
OK, so I'll admit this is incredibly pedantic - but Windows 3.1 did not bluescreen (the blue screen only appeared in XP); instead it would throw 'General Protection Fault's at the drop of a hat (and sometimes while you were still wearing it) ... IIRC these screens were black with white text, but it's been a while since I saw one :s
Not that wikipedia is always the fountain of all truth, but: http://en.wikipedia.org/wiki/Blue_Screen_of_Death
Personally I have more fond memories of the Guru Meditation errors...
First in XP...?
NT4 *certainly* BSoD'ed - there was even a screensaver doing the rounds which looked just like one, with which to scare your fellow admins :-)
Blue or black screens
Doesn't really matter, but it was NT that introduced the blue ones. Windows 3.x was still DOS and would crash if the wind changed. The error messages were pretty useless, and based on the more informative OS/2 ones. You got no logging, but hey, we had pretty colour icons.
oh Amiga :)
those orange boxes with the meditations used to be the bane of my Amiga days!
Lies, the BSOD existed well before XP, in both the DOS and NT lineages.
So, Linux is kinda short for
"LINus' Unitary Computing Service"? (multiCS -> Unix (UniCS) -> Linux (LinUCS)
CTSS, Multics, CP40, CP67, etc
note that some of the CTSS people (MIT IBM 7094) went to the 5th floor of 545 Tech Sq and did Multics; others went to the science center on the 4th floor of 545 Tech Sq and did (virtual machine) cp40, cp67, vm370, etc
Time sharing vs Virtualization...
There were several early-mainframe attempts at 'what timesharing meant'. Only the CP67-folk actually went down the operating-system virtualization approach. There was a perfectly-viable, often used time-sharing component of OS/MVT. 'TSO' was rather inefficient for another 7 years, though. There were other timesharing systems on OS/370 like Wylbur, Roscoe and others.
BUT the focus here is on virtualization, so I wanted to add that AIX also has WPARs, which are Solaris Containers-like virtualization mechanisms.
360/67, tss/360, cp67, mts, orvyl/wylbur
There were quite a few customers sold the 360/67 with the promise of running tss/360. When tss/360 looked like it was going to be difficult to birth ... many switched to os/360 or cp67. Michigan did its own (virtual memory) MTS system and Stanford did its own (virtual memory) Orvyl/Wylbur system. Later the Wylbur part was ported to os/360.
Great article. Very interesting.
You can read a load more, and beautifully written if I recall correctly, about the history of CP/CMS and VM if you google for "melinda varian"
Sort of 2 half OS's working together.
While not a popular approach, it is one serious software architects *should* keep in mind in case they hit a tricky situation where the hardware is not quite up to the job.
Thanks for the article, and for once again reminding the yoof that in the computer business it is *very* unlikely that the new game-changing, totally unique tech you just invented is actually *anything* like as unique as you think it is.
Credit where credit's due, please
When IBM announced VM, with the slogan 'Today IBM announces tomorrow' (or words to that effect), a patched copy of their ad appeared the same day on the notice board of the Computer Science Dept at the University of Manchester, reading 'Today IBM announces yesterday.' Because, of course, that Department had invented virtualisation some years before, and it was implemented, in a simple form, in the Ferranti Atlas designed mainly in the Department.
Finally! Something recognized from YOUR side of the pond...
The Ferranti was later swallowed up by ICT into what became ICL (IIRC).
The GEORGE series of OSs on 1900 range hardware had a lot of stuff that PCs took a long time to catch up on.
Virtual store, flat memory, device independence, workfiles, OS-level file versioning, user management and accounting. Ah, nostalgia: http://www.icl1900.co.uk/g3/index.html.
In the summer of 1970 I had a summer job programming the Atlas at the University of London. An amazing machine -- the tape drives would have looked right at home in an Avengers episode.
Four years later I joined IBM, and my first job was to write the core of what eventually evolved into VM/Passthru.
I should say that I never was directly involved in anything to do with virtualization on the Atlas, as I just programmed it in FORTRAN (FORTRAN V, actually, with recursive ALGOL-inspired block structure!)
I do recall learning later a couple of interesting facts about the Atlas: 1) I believe it was the first machine to use inverted page tables, decades before IBM trumpeted that "innovation" on its RISC machines and 2) as I recall, it took an interrupt to fire each hammer on its high speed line printer, thereby simplifying the control logic in the printer, but putting some significant timing constraints on responsiveness of the OS interrupt handlers.
sorry, but the statement on VMware doing full software virtualization is not correct.
in their current versions, KVM, Xen and VMware do the same thing:
they use assistance in hardware/firmware from AMD-V or Intel-VT processors,
if that's not available / possible, they do not work / they do a software emulation.
Re. ABEND - the author responds
I will cop to some errors in this piece, including missing IBM's latest name-change from zSeries to the System z, and that VMware does indeed now use hardware VT if it's available - and indeed, according to several comments, /requires/ it for 64-bit guests.
This article series was a long time in gestation and when I started researching it, VMware was still adamantly maintaining that its software virtualisation was better than Intel's hardware implementation.
However, this comment titled ABEND is notably incorrect in almost every detail.
KVM does not fall back to software VT if no hardware VT is available; it *requires* hardware VT support. Without it, you can't use KVM at all.
Xen falls back to paravirtualisation, meaning that it needs modified guest OSs.
VMware and VirtualBox both use software VT if no hardware VT is available; in VirtualBox, enabling hardware support is an option - you can run without it. I have not tried this yet in VMware but it might be possible.
This is not "do[ing] the same thing"; it is doing 3 different things: failing, offering different, incompatible functionality, or switching to an emulation-based alternative.
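For what it's worth, the usual way tools on Linux check for hardware VT is by looking for the Intel VT-x ("vmx") or AMD-V ("svm") flags in /proc/cpuinfo. Here's a minimal sketch; it parses a sample string rather than the real file so it runs anywhere:

```python
# Sketch of hardware-VT detection via CPU feature flags. On a real Linux
# box you would feed it the contents of /proc/cpuinfo; the sample string
# below is illustrative.

def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if any CPU line advertises vmx (Intel) or svm (AMD)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags or "svm" in flags:
                return True
    return False

sample = "processor\t: 0\nflags\t\t: fpu vme de pse vmx lahf_lm\n"
print(has_hw_virt(sample))  # True

# On a real system:
# with open("/proc/cpuinfo") as f:
#     print(has_hw_virt(f.read()))
```

If this returns False, KVM simply won't work, Xen needs paravirtualised guests, and VMware/VirtualBox drop back to their software techniques, exactly the three different behaviours described above.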
S360 first to use microcode for compatability?
"What’s more, the S/360 was the first successful platform to achieve compatibility across different processors using microcode,"
Well, maybe if you ignore the ICT 1900 series.
The 1901, 1902/3, 1904/5 and 1906/7 were all different processors, some microcoded, that had the same programming interface.
(System 360 announced 7/4/1964, 1900 announced 29/9/1964, first 1904 delivered January 1965, first deliveries of the System/360 in "mid 1965")
(Start work on a family of compatible machines in April, announce in September, demo in October, deliver in January. Doesn't sound like the UK of today, does it?)
ICL v IBM dress codes
I think anyone who worked on ICL machines might dispute whether IBM were first with virtual computing. We used, and some of Fujitsu's ICL-derived kit still uses, VME (Virtual Machine Environment), which originated in the late '60s/early '70s. Strange though it might seem, at the time ICL machines were technically superior to IBM's, and if they could have got their reliability problems sorted, and their engineers to wear suits a la IBM engineers rather than cardigans, they might still be around today.
Modern virtualisation still has a way to go to catch up with the past...
ICL VME was later than System/370
System/370 (the major introduction of virtual memory and hence virtual machines(*)) was released in 1970.
The ICL 2900 series (with the VME operating system) was released in 1974.
And the ICL concept of a virtual machine was not really comparable with what IBM was doing:
"The 2900 Series architecture uses the concept of a Virtual Machine as the set of resources available to a program. The concept of a "Virtual Machine" in the 2900 Series architecture should not be confused with the way the term is used in other environments. Because each program runs in its own Virtual Machine, the concept may be likened to a process in other operating systems, while the 2900 Series process is more like a thread."
(Some claim that IBM was forced to come up with the whole "virtual machine" thing because they just never managed to make a simple multi user os - it was easier for them to build a system that allowed one OS per user).
((*) Yes, there was _one_ member of the System/360 range that had virtual memory, but it was a special case).
Reflections on MDF
Two points I will make from the perspective of a developer of Amdahl’s Multiple Domain Facility (MDF), perhaps one of the first ;-) “hardware-assisted” virtualization platforms:
- MDF began as an offshoot of a strategy for minimizing the lead time required for Amdahl to respond to changes IBM was making in their mainframe architecture of that era, i.e. S370 and beyond. These changes included hardware that provided the microcode underlying the S370 architecture. It was observed that these underlying hardware structures could be used to offer an efficient hardware-based virtualization platform, which became MDF. IBM promptly followed suit with offerings providing logical partitions, or LPARs. An interesting side story is that we didn’t have many engineering models to develop and test our MDF code on. Consequently, another team inside Amdahl developed a simulator for the modified S370 architecture, actually based on IBM’s VM/370. This simulator was vital for developing our MDF code.
- Inside IBM 370 mainframes were a set of high-level instructions that assisted the execution of operating systems, including MVS and VSE, when executed under VM/370. These instructions reduced the overhead associated with paging and I/O of the “guest” operating system when executed under VM. The nature of these instructions is comparable to techniques Intel and AMD provided years later in their products. Performance improvements associated with these instructions (VMA, PMA, …) were dramatic. There is a certain feeling of déjà vu watching virtualization unfold in the Intel world.
Other VM trivia...
A primitive form of e-mail was possible by directing your virtual card punch to another user's virtual card reader - punch a virtual deck from a local file and it would turn up magically in the other VM, from where it could be read back. By the further magic of RSCS and some judicious source routing you could get your file around the world.
There was, as I vaguely recall, also a prototype "desktop" machine that supported the S/370 instruction set and ran VM.
Although VM/370 was used a lot both for OS development work and for migrating customers from DOS to OS/370 it was also (with CMS) a refreshing alternative to IBM's Time Sharing Option, famously subject to the criticism from Stephen "yacc" Johnson that "Using TSO is like kicking a dead whale down the beach".
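The punch-to-reader trick described above can be modelled with a toy sketch (entirely illustrative; real CP spooling, RSCS routing and the CP commands involved are far richer than this):

```python
# Toy model of CP spool-file "e-mail": one user's virtual card punch is
# directed at another user's virtual card reader. Class and method names
# are my own invention, purely for illustration.

from collections import defaultdict, deque

class ToyCP:
    def __init__(self):
        # userid -> queue of (sender, card deck) awaiting the virtual reader
        self.readers = defaultdict(deque)

    def spool_punch_to(self, sender, recipient, lines):
        # "Punch" the file: each line becomes an 80-column card image.
        deck = [line[:80].ljust(80) for line in lines]
        self.readers[recipient].append((sender, deck))

    def read_deck(self, userid):
        # The recipient reads the deck back from their virtual reader.
        sender, deck = self.readers[userid].popleft()
        return sender, [card.rstrip() for card in deck]

cp = ToyCP()
cp.spool_punch_to("ALICE", "BOB", ["HELLO BOB", "LUNCH AT NOON?"])
sender, msg = cp.read_deck("BOB")
print(sender, msg)  # ALICE ['HELLO BOB', 'LUNCH AT NOON?']
```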
Don't forget MUSIC/SP
MUSIC/SP - Multi User System for Interactive Computing, the brainchild of McGill University, was an exceptional late-70s/early-80s multi-user operating system which also ran under VM (or autonomously), allowing hundreds of simultaneous users in the resource space of just a few CMS users.
While not really a virtualization system itself, it gave the end-user the impression that they were working on their own interactive "machine".
TSS (and later the premier transaction processing system in the world, CICS - Customer Information Control System) also provided (provide) multi-user environments which give the user impression of one's own machine, but not really virtualization, per se.
The combination of VM/370 and MUSIC/SP ended up being one of the most dynamic, efficient, cost-effective, amazing and user-friendly multi-user environments that IBM has ever produced.
Thanks for the walk down memory lane.
BTW, was not PR/SM really just a specialty version of VM?
VMA, virtual machine microcode assist
cp40 & cp67 provided virtual machine support by running the virtual machine in problem state, taking the privilege/supervisor-state interrupts for supervisor-state instructions, and simulating them. Later, for vm/370 and 370, a virtual machine microcode assist was provided on the 370/158 and 370/168 which would execute frequently used supervisor-state instructions according to virtual machine rules.
A superset of this was extended for the 370/138 & 370/148, called ECPS ... which included dropping parts of the vm370 supervisor into microcode. There was an attempt to ship all 138/148 machines with VM370 pre-installed ... sort of an early software flavor of LPARs ... which was overruled by corporate hdqtrs (at the time there were various parts of the corporation working on killing vm370).
A much larger and more complete facility was done for 370/xa on 3081 called SIE.
Amdahl came out with a "hardware" only "hypervisor" function ... sort of superset of SIE ... but subset of virtual machine configuration.
IBM responded with similar facility PR/SM on the 3090 ... which was further expanded to multiple logical partitions as LPARS. PR/SM heavily relied on the SIE microcode implementation ... and for a long time a vm/370 operating system running in an LPAR couldn't use SIE ... because it was already in use for LPAR. It took additional development where vm370 running in an LPAR (using SIE) could also use SIE for its own virtual machines (aka effectively SIE running under SIE).
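The problem-state trap-and-simulate scheme described for cp40/cp67 above can be sketched roughly like this (illustrative opcode names and semantics only, not real 360 behaviour):

```python
# Trap-and-emulate sketch: the guest runs in problem state, so privileged
# instructions trap to the hypervisor, which simulates their effect against
# the guest's *virtual* machine state. Opcode names are illustrative.

PRIVILEGED = {"LPSW", "SSM", "SIO"}

class Guest:
    def __init__(self):
        self.virtual_state = {"io_started": False}
        self.trap_count = 0

def simulate(guest, op):
    # Hypervisor's emulation of a privileged instruction.
    if op == "SIO":
        guest.virtual_state["io_started"] = True
    # LPSW, SSM etc. would update the guest's virtual PSW here.

def run(guest, program):
    for op in program:
        if op in PRIVILEGED:
            guest.trap_count += 1   # hardware traps in problem state
            simulate(guest, op)     # hypervisor emulates, guest none the wiser
        # unprivileged ops (loads, adds, ...) run directly at full speed

g = Guest()
run(g, ["AR", "LR", "SIO", "AR", "LPSW"])
print(g.trap_count)  # 2 privileged instructions trapped and simulated
```

The microcode assists (VMA, ECPS) and later SIE amount to pushing parts of the `simulate` step down into hardware/microcode so fewer of those traps reach the hypervisor software.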
The VM fanbois are sunning themselves..
OK. I loved VM in the 70s. Mainly because CMS was more usable than TSO on tty. Then the terminal technology changed and full-screen mechanisms improved. Since 1981, TSO has been fine and presents a richer programming environment.