@Ken Hagan
quote:
"A hypervisor is a piece of software that lets multiple copies of another piece of software each believe they have the entire system to themselves. Back in the 1950s, this was called an operating system. What is proposed here is a return to a micro-kernel operating system, with the desktop personality (aka, OS) running on top."
Good heavens - how many misconceptions. Firstly, in the 1950s operating systems pretty well didn't exist. Early applications were written to the bare metal - that is, they talked directly to the hardware. Gradually manufacturers introduced tools and software to make this easier, although the low-level hardware interfacing code was still included in each application. By the 1960s operating systems had emerged: controlled environments in which a single application could run, offering some higher-level services. Things such as I/O could be abstracted to operating system services, although even then some applications got very close to the hardware by directly calling libraries that issued I/O instructions (dig into z/OS and you will find that many of the older access methods still have this structure, although the OS intercepts the actual I/O call). These early operating systems would only run a single application at a time; only later did versions appear which allowed multiple applications to (apparently) run simultaneously. Early versions of these multi-tasking operating systems (and the hardware on which they ran) tended to lack mechanisms to protect applications from each other - a badly written application could write all over the storage belonging to another. Modern multi-tasking operating systems with proper security and isolation didn't really become common until the 1970s. (PCs followed the same path, maybe 15 years later.)
There is a fundamental difference in intent between an operating system and a hypervisor. Essentially the former is designed to provide services and an environment within which applications are run. In consequence, the services offered are largely abstracted ones of use to the application.
In contrast, the intent of a hypervisor (at least in its purest sense) is to emulate a physical server. If implemented properly it will virtualise CPUs, I/O devices, memory and so on. The vast majority of business applications are not written to work in such an environment. In the case of mainframes, hypervisors started out as pure software (VM being the obvious example) and efficiency was gradually improved by moving resource-intensive virtualisation tasks into microcode. Eventually pure micro-coded hypervisors appeared (a route down which microprocessors are heading, albeit using direct silicon logic rather than microcode).
There are similarities between some of the technologies used in operating systems and hypervisors, but the latter is a much more constrained thing. Its job is to pretend to the code it is hosting that it is a physical machine, complete with I/O ports, clocks, interrupts and so on. VMware ESX and the like are not throwbacks to the operating systems of the 1950s (which didn't exist in a recognisable form at that time anyway), but to VM, which emerged during the 1970s. Of course there are lots of grey areas - hypervisor-aware operating systems, hypervisors running within a general-purpose operating system, not to mention completely abstracted virtual machines, such as the Java VM. But the principle remains.
There is a lot of common technology in hypervisors and microkernels, but the latter is really the set of low-level primitive services required to implement an operating system, whilst a hypervisor (in its purest sense) is there to provide virtual machine environments.