It is becoming increasingly obvious that a physical server running virtual machines (VMs) wastes great chunks of its memory on the operating system wrapped around each application. If each VM occupies 50MB, and 20MB of that is the Windows O/S, then around 40 per cent of the server's DRAM …
Do you think MS is going to help cut its revenue?
Dear MS please allow me to cut the revenue you make on my machine by 95%.
I'm sure they'll jump at the chance to change the licensing to ensure this happens post haste - just as soon as they've got standards compliance worked out.
Don't Parallels offer something like this with their Virtuozzo Containers product?
If you have a Hypervisor with the OS bits built in too so that it can run apps each in their own workspace without contention and without the standalone OS instance overhead, what is it about the end product that differs from a full-fat, multitasking OS?
Surely the simpler approach here would be to add the "move this task I'm running to that server over there" functionality to the OS of your choice. I'm thinking of some big virtual rack or "frame" of independent servers that handles all your main computing requirements, running one OS and swapping tasks from server to server as required.
I know, we could call it a "Main-Frame"............
where did I read this before?
OK, not the same thing, but a similar idea. I think it was a few years back, from a VMware person who got fried for it (if I remember correctly).
The idea was something like this. Instead of building your email/database/whatever server so that it needs the full resources of an OS (which it will never use, as has been pointed out in this article), those servers should be designed to run on the VM *without* needing a full OS. The guy got fried because everyone told him "so we don't design our server to run on Windows or Unix; instead it should be designed to run on *your* OS".
I don't have the article, but it was a few years ago. What he wanted was for Microsoft SQL Server to be designed in such a way that it comes with a custom-built version of Windows that can't do anything except run Microsoft SQL Server.
I get my coat since I don't want to be fried for this comment
Can't see that happening with windows.
Just think of all that licensing money that won't need to be spent - won't someone please think of the MS shareholders?
So maybe Linux will be able to use this as the killer app - one copy of Linux sitting on a server, with multiple thin client apps accessing just the bits they need to work, rather than having a complete O/S each.
Since most of the O/S needs to be read-only I can't see how viruses etc will get a hold, and of course the O/S and apps will be virtual as well, so can be easily deleted and restarted.
The only issue with that would be any hanging apps that now don't have an O/S to use, but maybe some of the web apps technology that caches info could be used.
Maybe I should start a business to produce a linux-based O/S on a chip, maybe with a client version and apps available to use embedded in there as well.
Isn't this the application (rather than machine) virtualisation that Parallels Virtuozzo does?
It's simple: do not use virtualization. So, you have one physical machine running a number of virtual machines, each one running one application... how about one physical machine running a number of applications, with no need for virtual machines, instead? You know... like it was done before 'virtualization' became the next buzzword bingo winner...
And besides that... "Security researchers ...have been looking at how to protect desktop and other users from the results of their own carelessness." like... installing antivirus and not allowing morons to install stuff on their own desktops at will?
"Any VM that is infected with malware can simply be deleted ", except that you'll have to recreate it, and probably restore from backup, and then the moron that installed the specific pos will have to do it again and again...
Security researchers would do better to secure their own brains instead.
Fedora 12 has made a start on this with KSM: http://fedoraproject.org/wiki/Features/KSM
"Allow KVM guest virtual machines to share identical memory pages. This is especially useful when running multiple guests from the same or similar base operating system image. Because memory is shared, the combined memory usage of the guests is reduced."
Not quite the same as just enough OS for the required app - but a start to share all the common bits of the OS...
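The page-merging that KSM does can be sketched in a few lines of Python — hash each guest page and keep one physical copy per distinct content. Everything here is a toy illustration, not the kernel's actual data structures:

```python
import hashlib

def deduplicate(pages):
    """Merge byte-identical pages, KSM-style: every guest whose page has
    the same content ends up pointing at one shared copy."""
    store = {}    # content hash -> the single shared copy
    mapping = []  # per-page reference into the shared store
    for page in pages:
        key = hashlib.sha256(page).hexdigest()
        store.setdefault(key, page)
        mapping.append(key)
    return store, mapping

# Three 'guest' pages booted from the same base image: two are identical.
guest_pages = [b"kernel" * 512, b"kernel" * 512, b"appdata" * 512]
store, mapping = deduplicate(guest_pages)
print(len(guest_pages), "pages backed by", len(store), "physical copies")
# -> 3 pages backed by 2 physical copies
```

The real thing also has to catch a guest writing to a merged page (copy-on-write), which the toy ignores.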
CoW is your friend
This could be handled in the host O/S (or bare-metal hypervisor) virtual memory mechanism. First, a quick overview.
In an O/S the memory/paging handler knows which pages of which applications, libraries, data blocks etc. are in RAM. Therefore when another process asks to _read_ the same page, the handler doesn't go to disk and read it again - it creates a reference to the page that's already loaded and passes that to satisfy the read request. That way, multiple apps can access the same piece of code or data yet only have one copy resident in physical memory. The only time that page needs to be physically copied is if one of the apps attempts to write to it. At that point the memory handler performs a Copy on Write and "gives" the modifying application its own copy of the page to dirty up as it sees fit.
What we need are virtualisation systems that can recognise the same situation - but across virtual machine instances. So it would "know" which VMs have which physical disk blocks resident (in their memory space), and whenever a process from another VM requests the same page from its own VM, the hypervisor performs its bit of P-V memory magic and lets that other process "see" the page that's already in RAM. Saves memory and removes the need for some disk activity too.
There, that's the ideas bit sorted. All it needs now is for someone to pop off and write the code: should be developed, released and bug-free in a few years if they start now.
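As a toy of that cross-VM copy-on-write idea, assume a single shared page cache keyed by disk block; the class and names below are made up purely for illustration:

```python
class PageCache:
    """Toy copy-on-write page cache along the lines described above:
    reads share one resident copy; a write forces a private duplicate."""

    def __init__(self):
        self.resident = {}  # disk block id -> shared page bytes

    def read(self, vm_pages, block, loader):
        # Only hit 'disk' (the loader) if the block isn't already resident.
        if block not in self.resident:
            self.resident[block] = loader(block)
        vm_pages[block] = self.resident[block]  # share the reference

    def write(self, vm_pages, block, data):
        # Copy on write: give this VM its own private, dirtied page.
        vm_pages[block] = bytearray(vm_pages[block])
        vm_pages[block][:len(data)] = data

cache = PageCache()
loads = []
fake_disk = lambda b: (loads.append(b) or bytes(16))  # records each disk read

vm1, vm2 = {}, {}
cache.read(vm1, 7, fake_disk)
cache.read(vm2, 7, fake_disk)       # second read: no disk access, shared page
print("disk reads:", len(loads))    # -> disk reads: 1
cache.write(vm1, 7, b"dirty")       # vm1 gets a private copy
print(vm2[7] == cache.resident[7])  # -> True: vm2 still shares the clean page
```

The hard part the toy skips is noticing sharable pages after the fact (when two VMs read the same block through different paths), which is what content-hashing schemes like KSM solve.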
OpenVZ is the free version of Virtuozzo which also does this - certainly with Linux. "Virtual Private Servers" which share the underlying OS. We use it quite a lot - it's really very good. http://wiki.openvz.org/Main_Page
This "thingie" you are describing is called a "mainframe"... Been there, done that...
The truth is out there...
BEA had a Java version that ran directly on a VMware virtual machine 'bare metal', and it outperformed physical systems running a fat O/S. Not surprisingly, it has died now that Oracle own them.
Several other companies have their own just-enough-operating-systems (JeOS), many based on CentOS, some custom developed. Be it physical or virtual hardware, these are often very fast and secure.
Technical point - running the VMware hypervisor, you can deduplicate these pages in memory to achieve a huge memory saving when running multiple of the same operating systems on the same ESX server. This is turned on by default in ESX and has been there since the earliest versions. It's amusing to see that bringing up another instance of a Win2k3 virtual machine only consumes ~64MB of new memory - the rest of memory is the same as the other virtual machines and not duplicated.
AC because they can't fire me now but might sue me
Solaris Zones / Containers can do this. I believe (but I'm no mainframe expert) that z/OS and AIX LPARs do the same thing.
The first rule of running a server is...
...don't run windows. Run something that's built to serve up applications, processing power, whatever, anything but eyecandy. Yes, that's right, that's the core business of windows, and therefore that is what it devotes the bulk of the resources it hogs to: clickibunti.
Further, as useful as virtualisation is, there are numerous other, lighter solutions that help separate applications from each other, improve provisioning, whatnot else. Much like Linux is not the only viable and production-ready free operating system Out There. And like Linux, they won't be promiscuously dropping their pants to each passing botnet trojan.
The cure for bloat is not to squeeze until it fits. That's a workaround, maybe, however good at squeezing you get. The cure is to cut it; remove it like the cancer it is. Lack of bloat results from lean implementations of mean architectures. Everything else is turd polishing.
These researchers are apparently proposing to use ``trusted computing'' (wonderful euphemism, that, for it's not you that'll be trusting your computer any longer) to, er, try to contain the pants-dropping by throwing gobs and gobs of VMs at the problem. Now, throwing more (hardware, programmers, money, VMs) at a problem is a time-honoured tradition in mediocre computing land, but it also is very much turd polishing.
To further illustrate this, I need but say: See figure one.
Wouldn't need hacks like this with better design in the first place
Of course, if you have a system with properly designed virtual memory and a properly written OS that separates info into read-only code and read-write data, with suitable hardware protection, you could have ONE copy of the code in protected physical memory, mapped into multiple VM or application address spaces by the virtual memory manager.
It's only, oh, 30 years since VMS was doing that. In 1MB of RAM...
The solution is simple - instead of running a VM with 20 copies of the OS on top of it, with each OS running a single application, why not (and I appreciate this may seem rather radical) run ONE copy of the OS and run all 20 applications on that? And throw the VM away.
The last time I looked, most modern OS's (and many old ones) are actually quite good at handling multiple applications running at the same time. In fact, it makes one think that they may even have been designed to do it! Fancy that!
Single point of failure
"But Microsoft doesn't see it like this, and one product manager said it wasn't a good idea to consolidate functionality from Windows-controlled VMs in a Hyper-V environment because you would then run the risk of a single point of failure."
And that just about sums up Windows - the single point of most computer failures. Heh-heh (waits for flames).
Slimming down the ages
So we take a large system with lots of smaller system sessions running within it.
You then take each system and shrink it into a single system so that each session only contains the applications, code and data needed for that session.
In 1969, the Digital Equipment Corporation wrote a system called IOX for its new PDP11 machine, later renamed to RTSS, later still mistyped and released as RSTS, a name that stuck for the next few decades. It was a system that fell out of fashion because the people in the know thought that centralised operating systems and large machines weren't flexible enough and cost too much, so when they weren't replaced with larger machines such as the VAX or similar machines from other vendors, they were often replaced by... desktop computers!
I wonder. Does anybody see a pattern here?
For the record, yes, I do work with virtual servers and no, I don't believe that they are a reasonable use of resources. They are just a fudge for a problem that has yet to be realistically resolved and a panacea for those that can't be bothered to attack the problem at source.
Why has nobody thought of this before ?
Running isolated apps under a common operating system. We could call it something obscure like a "chroot" ....
So the wheel turns...
...and we are back to somewhat shinier and more isolated versions of chrooted processes (possibly sandboxed virtual machines or something...).
It's like showbiz.
agree with AC: The Cure @ 11:23
Strip out all the bloat, let the HyperV share common O/s components & memory pages etc
How then is the end result different from a "normal" multi tasking O/s?
Bugger all to do with Microsoft dollars, more to do with common sense or rather practical sense. Both of which are in short supply these days.
Yes, that is right. Solaris Zones does that. Solaris fires up lots of virtual machines: Solaris 8, Solaris 9, Solaris 10, Linux - and only one kernel is running: the highly scalable Solaris kernel. Each VM requires 40MB RAM or so. One guy started 1000 VMs in 1GB RAM; it was dog slow, but it worked. So the Zones are extremely lightweight.
Nice idea. Now go look up RISC OS... :-)
Transparent Page Sharing
VMware ESX has had Transparent Page Sharing for years.
Perhaps they should rename it ?
Transparent Memory De-Duplication.
I see a wonderful follow-on/add-on to VMware DRS N+1: "if we moved these VMs onto the same server, Page Sharing would save X%"
Cloud Page Sharing ???
huh, who's with me ?
Come on Marketing team, get off your arses
Ever heard of FreeBSD?
The solution is already out there with FreeBSD Jails : http://en.wikipedia.org/wiki/FreeBSD_jail
And 1000 jails is easy: http://ivoras.sharanet.org/blog/tree/2009-10-20.the-night-of-1000-jails.html
Of course, you are limited to the OS being FreeBSD or Debian GNU/kFreeBSD - but one machine can host jails of any version of FreeBSD, along with Debian, and of course Linux binary compatibility means most Linux programs can be run anyway.
Why don't they just take a look at what IBM did with VM (and its successors)? A single copy of executable code available to all users on a system. I just don't understand why MS have so much of a problem with memory management.
Perhaps it is a result of using that bastardised language (C, or the ++ variant), whereas IBM's language family (starting with PL/1) put out two CSECTs, one for code and one for data, which meant the code was read-only and was protected by the hardware. (A CSECT is a Control Section, the basic building block for the link editor (or loader) to work with.)
The result, in a virtualised environment, meant that each user (or guest OS) had a shared copy of executable code and its own private storage for its data.
Executable code sharing did not just stop with other operating systems, it could also be used for sharing large quantities of data, obviously low volatility data, but it was possible to have just one copy of data shared between many processes.
I guess it's new to somebody.
Solaris zones, FreeBSD jails, AIX LPARs, and at least a few mainframe operating systems have already solved this problem, and I'm pretty sure HP-UX has something for this too (VSE, was it?).
It's unfortunate that, to most people, "virtualization" means "running VMWare on Windows".
I'd just like to point out that there are many Windows server licenses that allow you to run as many VMs as you'd like without paying any additional fees.
@Captain Thyratron et al
Solaris Zones/Containers are pretty good (speaking as a self confessed Solaris fan). Though last time I looked they didn't support migration of a live VM from one machine to another. So pluses/minuses both ways.
Of course, the best thing is to have a self-clustering app that doesn't need something like VMware to give it resilience, etc. You could run such an app natively on a series of hosts in their nice multitasking operating systems. Now where would I get one of those...
I'm pretty sure that virtualisation is just an excuse for lazy developers to not think about their app design properly. I see developers at my employer doing all sorts of crazy things - e.g. a whole Windows VM just to serve up a 10 page website! Whatever happened to efficiency?
Dear Reg, please can you start a Campaign for Real Computing. Once upon a time programmers were skilled at developing efficient code that ran quickly in small amounts of RAM. Now that resources are 'plentiful' the programming community has generally got lazy. VMs where a native app would do. Languages with bloated runtime environments that take forever to load just so you don't have to worry about errant pointers. Apps running as crappy interpreted code in junk scripting languages in browsers just to save the effort of compiling the bloody thing. New thin client technologies that consume vast amounts of bandwidth and give poor results instead of updating perfectly good things like X11. An ever-expanding array of app hosting environments (Silverlight, AIR, etc.) that are all 'indispensable' and that make machines slow to boot, yet don't do anything that a carefully written native app couldn't do. Data stored in human-readable text when computers learnt to store things as binary a long time ago (come on, who EVER reads XML as the prime means of accessing the data within?). </rant>
I wonder how long it will be before data centre managers realise that lazy developers and vendors are costing them huge amounts in electricity, hardware and bandwidth costs?
Just for info, AIX LPARs have the same problem as VMWare - each LPAR has a full blown operating system, including all relevant overheads therein.
HP-UX Virtual Machines share a kernel, I believe, although VPARs run separate instances of the OS.
As has already been said, Solaris zones have this beat. If you share the binaries between zones, you can even share the memory space for the application binaries saving more memory, albeit at the cost of flexibility.
Firstly, Windows and Virtualisation don't mix. If you have to run Windows servers then run them on their own hardware.
Which brings us to *nix. If you don't like the portability offered by fully virtualising your *nix servers then what you are looking for is chroot or bsd jails.
It's hardly rocket science.
Personally, I like having single-purpose servers, and considering that most server hardware these days is specced up to cope with Winbloat, it is somewhat overspecced when it comes to running low-to-medium demand *nix apps. This of course makes such server hardware a perfect match for virtualisation.
As for the usual "Why don't we just run a hundred services on the one OS like in the good old days" crowd, I have to say that I was there in those "old days" and they weren't entirely good. It gets worse these days with things like perl, php, java and python being thrown in the mix.
The minute you find yourself in a situation where App #1 requires perl version Y while App #2 will only work with perl version X is the moment you will turn to Google and enter in "how to virtualise my servers 101"
Campaign for Real Computing
Bazza, I completely agree with you! Actually, a year or two ago I dropped a note to El Reg suggesting they promote some kind of "Green Programming" initiative to help reduce the "... huge amounts in electricity, hardware and bandwidth ..." to which you refer.
I don't know. Programmers these days: you just can't get good help anymore.
Campaign for Real Computing!
We've been dealing with bloatware for entirely too long ... Have you looked at the latest *buntu release? Just as big and buggy as VistaSP3^WWin7, and for exactly the same reasons ...
Ubuntu Server or JeOS?
You obviously haven't tried the server edition, or even the "JeOS" edition of Ubuntu, have you?
I've got that up and running with a single-core 2GHz processor, 128MB RAM and less than 2GB disk space - it runs 4 server applications a treat, and pretty nifty too.
It is rather simple
Basically UNIX has had the answer for well over 40 years, called chroot jail.
Here you run the application inside the operating system: the program is loaded, then isolated from the rest of the system. Access to physical devices can be blocked or allowed by creating device nodes inside the jail. The application does not have access to system libraries once started, so you move the sub-components the application requires into the chroot jail; they can even be mounted in read-only, thereby limiting the damage that a compromised application can do.
Virtualisation is good when you have an operating system without security features and application sandboxing (aka Windows), or when you allow multiple customers to run their servers on the same hardware and need to isolate the customers from each other. However, chroot can do the very same thing, and you can choose how much of the operating system will be available.
As the system is sharing physical memory, and you do not need to load multiple operating systems and kernels, the amount of memory used is reduced enormously.
However, virtualisation allows you to impose restrictions that chroot cannot, such as locking a virtual machine to a specified number of CPUs, which means that if some application goes crazy, it cannot consume all CPU resources, nor memory dedicated to other virtual servers.
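The "move the sub-components in" step can be sketched in Python: build a bare root and copy in only what the application needs. Actually entering the jail with os.chroot requires root, so that part is guarded; all paths and names here are illustrative:

```python
import os
import shutil
import tempfile

def build_jail(files):
    """Create a bare jail root and copy in only the listed files,
    recreating their directory layout underneath it."""
    root = tempfile.mkdtemp(prefix="jail-")
    for src in files:
        dest = root + src  # e.g. <root>/usr/bin/app
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy(src, dest)
    return root

def enter_jail(root):
    """Confine the current process to the jail (root privileges needed)."""
    if os.geteuid() != 0:
        raise PermissionError("os.chroot requires root")
    os.chroot(root)
    os.chdir("/")

# Demo with a stand-in 'application' so the sketch runs unprivileged:
workdir = tempfile.mkdtemp()
app = os.path.join(workdir, "bin", "app")
os.makedirs(os.path.dirname(app))
with open(app, "w") as f:
    f.write("#!/bin/sh\necho jailed\n")

root = build_jail([app])
print(os.path.exists(root + app))  # -> True: the file now lives under the jail
```

A real jail setup would also copy in the shared libraries the binary links against and create the handful of device nodes it needs, which is exactly the "choose how much of the OS is available" trade-off described above.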
Well the Cure is obviously not VMs
is it :) Good article, I suppose; it puts all the razzamatazz of the previous ones solidly into perspective.
VMs are not a cure-all; properly set up shared hosting is better. Blimey, I hope no one lost a life over it.
Who needs a window manager on a server OS ?
My virtual machine hosted server runs fine in 256M of RAM, because it doesn't have to support a window manager. It can be administered perfectly well using web applications for specific applications, command lines for general OS management, and graphical file managers with SFTP client support for updating web sites, all of which works fine over SSH and SSL.
The one thing that bloated this system was having to run the ClamAV antivirus program for the purpose of detecting the half million or so digital diseases spread between Microsoft users which would otherwise get relayed through the mail system of this Linux server. By the time I eventually switched it off, ClamAV was using as much memory as the rest of the system combined; I did so to give more responsiveness to other server applications.
As far as my Windows users are concerned having to rebuild their machines a bit more often encourages them either to upgrade to operating systems used by grown ups or to take responsibility for avoiding spreading the digital diseases specific to their chosen OS.
AIX WPARs not LPARs
AIX LPARs are very much like VMWare ESX, but with a hypervisor (which is actually a specialist Linux based OS) separating the virtual systems. Each virtual system has its own OS image, with no page sharing between instances.
WPARs are like Sun Zones/Containers, where you have a single OS image running applications in what are effectively chrooted environments, with some CPU and memory enforcement (provided by WLM) and some network virtualisation provided by loopback virtual Ethernet devices.
BTW. Whoever said that C does not use sharable CSECTS obviously has not looked at the way that shared-text UNIX processes have worked for nearly 40 years!
Actually, come to think of it, RISC OS would be an ideal candidate for virtualising, especially as it has already gone a long way towards it with the emulators available. Considering that they only really need access to the ROM image and a copy of whatever modules loaded in from !System, theoretically you could do this if you could design the right platform for the job.
Mind you, who would fund a project for that? The Merkans wouldn't want it because it isn't Merkan, and the Brits won't touch it because... well... it isn't Merkan.
Ah... we can but dream.
It seems to me that a proper operating system, as it was originally, I mean - i.e. one that provided an interface to disks, memory, keyboard, screen, mouse, network and any other hardware, and that controlled the execution of programs with pre-emptive multitasking - would do most of these things.
The trouble is that the "operating system" has been extended to include application programs like browsers, AV, even parental control ffs, mainly by MS but followed by others.
They have moved more and more application code into the kernel in the mistaken belief it will improve performance, whereas in reality, it is a marketing move that improves only the thing they want to sell at the expense of other code, usually from competitors, and makes the OS unwieldy and bloated due to having code in that should be part of an application package.
So today, we have a generation of people who have been brought up to believe that the OS is a thing like Windows, including all those applications, the GUI, et al.
Now we seem to be hankering for something without the bloat of the additional code that will run multiple applications and share memory between them. Sounds a bit like an OS to me.