Buzzwords often have very short lifetimes in IT. Today it's cloud computing, but there would be no infinitely scalable cloud without the previous "big new thing": virtualisation. We take it for granted now, but it's worth remembering that it is still quite a new and relatively immature technology, with a long way to go. In this …
Is it just me...
...or does that post look like someone's testing a context-sensitive spambot?
"bit like reading a book in a language you don't speak by looking up every single word in a dictionary.
It works, and with a good enough dictionary, you can understand it – this is how online translators like Google Translate work"
I beg to differ. You can look up every single word in a dictionary and still be none the wiser.
Which is perhaps why Google Translate does *not* work in that way, but by statistical inference. Whether it is actually superior to dictionary-based tools (like Babelfish) is another matter.
One thing that can't be virtualised
There is one thing that can't be virtualised, and that is time - or at least it can't be virtualised where an OS has to interact with the real world. This can have some unfortunate side effects in terms of performance, clock slip and so on. For instance, any OS using "wall clock" time for things like timeouts, task switching and the like can produce some undesirable behaviour on a heavily stressed machine. This is especially true when the hypervisor is able to page out part of the guest environment, which makes erratic and very lengthy (by CPU standards) lumps of time appear to be consumed during execution if "wall clock" time is used.
In my experience, all OSs that are expected to run under hypervisors eventually have to be modified in some way to be "hypervisor aware" in order to iron out these wrinkles. Many years ago I worked on an OLTP operating system that ran under VM - in order to fix some timing issues it was necessary to modify some core timing functions in the guest OS to avoid using wall-clock time and to get execution-time information from the hypervisor instead.
You can get away with this stuff on lightly loaded environments, but not on heavily committed ones.
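The wall-clock hazard described above is easy to sketch: a timeout computed from wall-clock time fires spuriously when the hypervisor deschedules the guest and the clock jumps forward on resync. A minimal Python illustration, using a hypothetical steppable fake clock (not any real hypervisor interface):

```python
class SteppableClock:
    """Hypothetical fake wall clock whose value can jump forward,
    standing in for a guest's clock being resynced after a
    hypervisor pause."""
    def __init__(self, start=1000.0):
        self.now = start

    def time(self):
        return self.now

    def step(self, seconds):
        # Simulate a lump of "vanished" time appearing all at once
        self.now += seconds


def timed_out(start, timeout, now):
    """Naive timeout test based on elapsed wall-clock time."""
    return now() - start >= timeout


clock = SteppableClock()
start = clock.time()
clock.step(5.0)   # guest was descheduled: 5 "seconds" pass in one jump
print(timed_out(start, 2.0, clock.time))   # 2 s timeout fires instantly: True
```

On a real system the fix is the one the comment describes: measure intervals with a monotonic or hypervisor-supplied execution-time source (e.g. Python's `time.monotonic()`) rather than wall-clock time.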
And That's Fundamental To The Service Bureau- I Mean "Cloud"- Problem, Innit?
No matter how much you tap-dance, you have a limited quantity of resources. If two or more users try for peak usage at the same time, you choke. Further, the more you try for separation, the more resources you use to manage each user. It's easy to set everyone to thrashing unless you find ways to throttle usage - which means slim pickings for somebody, or you configure for peak use and much of the theoretical cost advantage over dedicated systems goes away. There is a place for the Service Bureau model, but it's far more limited than what the marketing wankers are selling.
Even more ancient history
My 1988 Archimedes came with a software emulator that would run DOS in a replica 8086 at a blistering 4.77MHz, not bad for a CPU that's probably now relegated to controlling how fast your car's inside light dims when you close the door.
In some quarters it used to be called multi-programming. And take a look at what LEO 3 was capable of in 1962. IBM didn't get this going commercially until OS/360.
Multi-programming was the term for running multiple **programs** at the "same" time, in the same way that normal OSes do these days, not multiple **operating systems**.
LEO 3's Control Program could not act as a hypervisor, but was a very capable multiprogramming system.
Source: many conversations with my mother, who was a Chief Programmer at LEO in the early 60s.
Virtual memory vs multi-programming
IBM had the S/360 running in early 1964, but it had only very hackish multi-programming: all programs shared the same address space and took turns at using the hardware registers. These were DOS/MFT days: there was no memory protection and you had to link a program for the fixed memory slot it would run in. AFAICR IBM didn't progress past this method of multi-programming until DOS/MVT, rolled out with the S/370 around 1970, finally provided virtual address support and memory protection.
Meanwhile ICL's 1900 series, which was available in late 1964, had virtual addressing from the outset. Each program ran in its own address space and kept its own set of registers and state information in the first 12 words of its address space. As a result, moving and swapping running programs couldn't have been simpler. The only hardware registers the machine had or needed were the datum and limit registers for the current active program, which were managed by the OS.
Oh yeah: while DOS/MFT machines required peripherals to be manually assigned to program unit numbers, all 1900s were capable of doing that automatically: tapes could be found by name regardless of which deck they were on, and hardware names translated automatically to program channels at run time.
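The datum/limit scheme described above amounts to base-and-bound address translation, which can be sketched in a few lines of Python (the names here are illustrative, not taken from any ICL manual):

```python
def translate(prog_addr, datum, limit):
    """Relocate a program-relative address using the datum (base)
    and limit (bound) registers; out-of-range access is trapped."""
    if prog_addr < 0 or prog_addr >= limit:
        raise MemoryError(f"address {prog_addr} outside allocation")
    return datum + prog_addr


# A program loaded at physical word 3000 with a 1024-word allocation:
print(translate(12, datum=3000, limit=1024))   # -> 3012
```

Because every address a program generates is datum-relative, moving or swapping a running program just means loading it somewhere else and changing the datum register - which is why, as the comment says, it couldn't have been simpler.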
Ah! The cloud.
Thank god someone's thinking of the children.
I can think of no greater act of strategic brilliance than putting your confidential sales and customer data on a third-party company's servers.
That's funny. For years I've been working under the assumption that the x86 architecture doesn't actually have any rings; that it's all controlled by the OS, as opposed to having hardware controls.
Linux virtualisation is from before KVM
Linux has not one, but _THREE_ native virtualisation technologies:
1. User Mode Linux, which dates from circa Y2K, long before KVM. Even if we count from the day it acquired SKAS0 (or 3) support, so that it could have reasonable address space isolation, it is still pre-KVM
2. OpenVZ - also pre-KVM
3. KVM is the third one chronologically, and unless I am mistaken it actually derives from qemu and shares some code with it. So if we count its days as an emulator into its history, it also goes further back.
By the way, depending on what you want (and how good you are at C/Linux kernel drivers) KVM is quite often not the best fit for purpose either. Neither is Xen, nor is VMware. There are a lot of cases where OpenVZ (and even UML, if your kernel programming is good enough to fix its shortcomings) can do a better job.
Re: Linux virtualisation is from before KVM
"Linux ... if your kernel programming is good enough to fix its shortcomings"
I fear this eloquently explains why Windows is still so pervasive.
Re: Linux virtualisation is from before KVM
"I fear this eloquently explains why Windows is still so pervasive."
Nonsense. Nobody is having to hack the kernel to get Linux to do the majority of stuff people want to do. And on the virtualisation front, even Linux from a few years back is arguably a lot better positioned than even the latest Windows releases.
Re: Linux virtualisation is from before KVM
"Nobody is having to hack the kernel to get Linux to do the majority of stuff people want to do"
Indeed, that's an extreme case. But the fact remains that Linux only provides "some" of what people expect on the desktop, and to make it do the rest, the average user is forced to delve deep into arcane configuration files.
And when they dare ask for help, or decent documentation, the Linux gurus roll their eyes and get shouty.
Which is one reason why Linux is only on 1.4% of enterprise desktops (Forrester, June 2011).
Meanwhile Mozilla's recent bizarre pronouncement that Firefox is not for enterprise usage has probably single-handedly put the Linux cause back by a decade.
It has rings - BUT swapping between them is so expensive you end up putting everything in the ring-0 kernel. So Windows can securely ensure that no user program can crash the OS - except that the graphics driver is in the kernel.
And the graphics driver is typically the most buggy piece of code: the thing written under the most time pressure, by a hardware maker that only intends to support it for the three months the card will sell, the piece of code with no economic driver to make it any good, and the code with the most half-tested performance hacks - and it's the thing running at the most trusted processor level!
It also happens to be the thing that needs maximum performance more than almost any other task in the computer - perhaps even more than the CPU itself - making ring-0 residence a necessity. It's like having to hire a security guard for your bank, only to learn that all your possible candidates have spotty histories. There's been a string of robberies in town, so you MUST have a guard (people are getting scared), but at the same time you can't help wondering if the robber is one of the people on your list.
Graphics drivers *are* notoriously buggy, but your reasoning is flawed in several respects:
1. NVidia at least has a unified driver that works for "all" of its chips (or at least the reasonably recent ones), thus invalidating your argument about only supporting the driver for a few months.
2. Drivers are released for a variety of reasons, including bug fixes, on a timescale that is occasionally shorter than three months.
3. I don't know about ATI's release policy or their drivers' unifiedness, and the reason why is itself proof of the very real economic driver for good-quality code in graphics drivers: I don't buy Radeons because of their immense driver flakiness in earlier revs.
"Windows can securely ensure that no user program can crash the OS"
What? No BSODs, then?
I always thought that this was the biggest thing that Windows massively /failed/ to do.
Aegis SR10, SVR4 and BSD.
The pre-release system I worked on could run an SVR4 X11 desktop inside an Aegis window frame,
and then have a BSD X11 desktop run inside the SVR4 X11 desktop within an X11 app frame!
Drag and drop between any windows - including windows within windows!
Not really VM - more multiple OS layers on a common core but insanely impressive for the time.
Not virtualisation, I'm afraid...
The SysV and BSD syscalls were actually mapped to underlying Aegis calls - so it was really one OS with multiple personalities on the surface. And X was dreadfully inefficient when running in a DM window.
That said, it really was an amazing OS with particularly impressive features like the ability to host diskless clients having different CPU architectures (and hence executable files containing code for 2 different architectures). And the distributed networking stuff... and having 2 or more DM windows open on the same source file, all nicely synchronised...
IBM is still kicking butt and taking names in the virtualization game. Power7 boxes kick ass, and you can run Linux on them. IBM's hypervisor technology makes VMware, Microsoft and the Linux KVM look like they were written as a class project to almost emulate some of the functionality of the IBM virtualization technologies. But that is to be expected - IBM has been doing it for a lot longer.
Sure . . .
. . . but then you have to deal with IBM.
Forgot DesqView...
Running multiple DOS boxes on a 286, each with its own BBS image and serial port / Courier modem (without even mentioning Fossil drivers for multi-port serial cards), could not be done 20 years ago without DESQview, from the guys who brought us the QEMM memory manager :)
Back when running a BBS with 4 modems on the same box was state of the art.....
Re: Forgot DesqView...
Ah, Desqview - I ran it on my old PS/2 50z (with a whole 12M of RAM in an add-on card..)
You could run the IBM 3270 emulator (a very fussy bit of code that required lots of keep-alives or it would disconnect you) in the background and various 'non-productivity' (games!) programs in the foreground. And toggle between them pretty much instantly when your team leader came into sight..
Kept me sane (ish) in the days I was a TPF programmer.. Before I grew up and went into support instead.
VMware's hypervisor absolutely *does* take advantage of Intel-VT and AMD-V hardware virtualization if it is present on the host CPU. Otherwise it falls back to software virtualization (and, incidentally, loses the ability to run 64-bit guests in that mode).
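Whether those hardware assists are present is visible from Linux via CPU feature flags. A rough sketch of the check in Python, parsing a `/proc/cpuinfo`-style dump rather than issuing CPUID directly (function name and approach are illustrative, not any VMware API):

```python
def has_hw_virt(cpuinfo_text):
    """Return True if an Intel VT-x ('vmx') or AMD-V ('svm') feature
    flag appears in a /proc/cpuinfo-style text dump."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu vme ... vmx" -> collect the flag tokens
            flags.update(line.split(":", 1)[1].split())
    return bool(flags & {"vmx", "svm"})


sample = "processor : 0\nflags : fpu vme de pse tsc msr vmx\n"
print(has_hw_virt(sample))   # -> True
```

On a CPU where this reports False, a hypervisor that insists on hardware assist simply won't run, which is why the software-virtualization fallback mattered.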
DEC & IBM 370 had full virtualization in the 1970s
The IBM VM/370 operating system implemented full virtualization down to the hardware level in the 1970s. You could run whatever S/370 operating system you wanted under VM, including another copy of VM/370 itself. All of the protected hardware instructions were completely implemented.
There was also a VM version of the OS for the Digital Equipment PDP-8. It allowed you to run multiple OSs underneath the TSS/8 OS. Each user gets a virtual 4K PDP-8; many of the utilities users ran on these virtual machines were only slightly modified versions of utilities from the Disk Monitor System or paper-tape environments. Internally, TSS/8 consists of RMON, the resident monitor; DMON, the disk monitor (file system); and KMON, the keyboard monitor (command shell). BASIC was well supported, while restricted (4K) versions of FORTRAN D and Algol were available.
.... One ring to bring them all...
.... and in the darkness, crash them...
Sinclair 8 bit virtualisation
With the Sinclair Spectrum 128 you could run multiple (well, two) instances of a ZX81 emulator. Quite why, I could never figure out, but it was still cool to do it!