Virtualization payback, now and in the future

Most people arguably get the point of virtualisation in terms of server consolidation, and the potential reduction in costs and overheads associated with that. Even though there are some important practicalities to be considered, as highlighted by readers in the first discussion, the game is reasonably well understood, and many …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Single application virtualisation would be great

    The problem with some organisations, particularly public-service bodies that have to serve a wide range of departments and maintain a significant array of specialist applications, is that those applications often don't move at the same upgrade pace as the organisation itself. Trying to maintain a stable desktop environment can be a nightmare, whether on a workstation or a Citrix desktop.

    If each application could exist within its own virtual shell, with the necessary DLLs to support it, then this would completely free it up: it could be issued on a per-user basis, run on a server or a client, and the choice of client would be much more flexible as well. It would also offer an automatic mechanism for staying on top of licence enforcement.

    Virtualisation at the application level is, I believe, already around.

    It would also have a place in the home environment; I've got games from years ago that won't run properly even in emulators. A virtual engine with better communication with 3D audio and video hardware would really boost things and unlock the two-decade collection of games again.

  2. Anonymous Coward

    Errr?

    People don't really get server consolidation if they need virtualisation to achieve it.

    You can create a chroot or a jail and keep processes separate. But you don't even need to do that: you can run a web server and a mail server on the same machine, and you can run multiple IPs on the same NIC via an alias.
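
    Something like this toy sketch shows the point, with ports 8080 and 2525 standing in for the privileged HTTP and SMTP ports; no hypervisor in sight:

        # Two "daemons" co-hosted on one machine, no virtualisation involved.
        import selectors
        import socket

        sel = selectors.DefaultSelector()

        def listen(port, banner):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", port))  # could equally bind each service to a NIC alias
            srv.listen()
            sel.register(srv, selectors.EVENT_READ, banner)

        listen(8080, b"HTTP/1.0 200 OK\r\n\r\nweb server here\n")
        listen(2525, b"220 mail server here\r\n")

        while True:  # one box, two services
            for key, _ in sel.select():
                conn, _addr = key.fileobj.accept()
                conn.sendall(key.data)  # greet and hang up; real daemons carry on
                conn.close()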

    I do wonder sometimes if I am on the same planet as other people; you have been able to run multiple daemons on Unix systems for donkey's years.

    The real reason to use virtualisation is flexibility. To be honest, it is mainly being used for testing, be it penetration, software or browsers, and to be able to run software for different systems, which in this day and age really amounts to containing Windows.

    Distributed computing via virtualisation, well, now that is a bit of a security no-no. You are much better off using a distributed system with nodes contained by the physical device than trying to consolidate many physical devices into one node.

    As one white label track would say, 'unfucking believable'.

  3. James O'Brien

    Do you really want to join this pool party?

    Is there free beer? If so, I'm down with it :)

  4. Nate Amsden

    dynamic SMP

    Something I have wanted for a while, and which I suspect we may see at some point, is more efficient SMP scheduling. Right now, in ESX at least, when you run SMP all of the vCPUs you have allocated must be scheduled together (this is to ensure behaviour compatible with real hardware SMP). I have several systems that benefit highly from SMP, but only for a few hours a day, and splitting the jobs up to run in smaller chunks isn't easy either. What I'd like to see is a way for the guest OS to use SMP only when it really needs it, and otherwise disable the extra processor(s) and write a special instruction to them, so the hypervisor can pick that up and de-schedule them on the fly. Then, when a job comes in that can benefit from SMP, the guest OS spins the extra CPUs back up.

    I was told recently that VMware did something similar for one of the Win2k3 service packs: the service pack introduced a new behaviour that flooded the CPUs with idle loops when the OS was idle, causing ESX host CPU to spike, so VMware adjusted their hypervisor to detect this instruction and de-schedule the vCPU, allowing CPU usage to return to idle levels.
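
    To see why strict co-scheduling hurts, here is a toy simulation (the VM mix and busy probabilities are invented, and this is not how ESX is actually implemented; it just illustrates the arithmetic). "Gang" mode only runs a VM when all of its vCPUs fit on free cores at once; "dynamic" mode schedules just the busy ones:

        # Toy model: how much useful work gets onto the cores per tick?
        import random

        CORES = 4

        def simulate(gang, ticks=10000, seed=1):
            rng = random.Random(seed)
            # (vCPUs allocated, chance each vCPU is busy in a given tick)
            vms = [(4, 0.2), (2, 0.5), (2, 0.5)]
            useful = 0
            for _ in range(ticks):
                free = CORES
                for vcpus, p_busy in vms:
                    busy = sum(rng.random() < p_busy for _ in range(vcpus))
                    claim = vcpus if gang else busy  # gang mode claims every vCPU, busy or not
                    if busy and claim <= free:
                        free -= claim
                        useful += busy  # only busy vCPUs do real work
            return useful / (ticks * CORES)

        print(f"gang co-scheduling: {simulate(gang=True):.0%} useful core utilisation")
        print(f"dynamic scheduling: {simulate(gang=False):.0%} useful core utilisation")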

    With vSphere's "hot add" CPUs this is headed in the right direction, but as far as I can see it's only "hot add", with no "hot remove". I suspect hot remove may be riskier than it's worth to implement, and that just "zeroing out" the additional cores would work well.

  5. stizzleswick

    The future in practical application - one business case

    Consulting as I am for a relatively small business consultancy firm that has embraced virtualization as a means of offering everybody an identical computing experience no matter which room they are in, here's where we stand now.

    Having tried several hypervisors and several guest OSs, the firm is currently running a bunch of XP VMs under XenServer. Vista has been tried as a guest, including several service requests to the manufacturer (none of which were answered to any more effect than the claim that this company was the only one to experience the problems encountered, despite the fact that their own self-help discussion forums are full of said problems...). Windows as a guest OS is a customer requirement; I would much prefer to move to something a little more responsive and less complicated (less complicated for me, that is), such as Linux.

    Web applications? What the $/%& could they offer this company? We do OpenOffice, and some specialized script-gloms specific to this company. They work (except under Vista; they do work under Linux and Solaris using WINE, I've tested).

    The major point in this particular business case is that those VMs have to run, and if one fails, the user needs access to another one on the fly. That is not a future scenario; it is already offered by XenServer (one of the main points in our choosing it, and no, I am in no way affiliated with Citrix). A VM fails, and the user gets a replacement VM basically on the fly, with the possibility of some data loss if a file in use was not recently saved.

    There's some room for improvement there. I am not a software developer, but seeing what IBM did with OS/2 back in 1996 (after a crash and reboot, all previously loaded apps restored their files to the state about 2 seconds before the crash), it should be a definite possibility to offer no-data-loss live switching between VMs these days. Come on, guys, it has only been 13 years!
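
    For flavour, a toy watchdog along those lines (purely hypothetical: the host names are made up, the remote-desktop port is used as a crude liveness check, and this is not how XenServer implements it):

        # Repoint the user at a standby VM when the active one stops answering.
        import socket
        import time

        PRIMARY, STANDBY = "xp-vm-01.example", "xp-vm-02.example"  # made-up hosts
        PORT = 3389  # the remote-desktop port the user's session points at

        def healthy(host, port=PORT, timeout=2.0):
            # A VM counts as alive if its desktop port still accepts connections.
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        active = PRIMARY
        while True:
            if not healthy(active):
                active = STANDBY if active == PRIMARY else PRIMARY
                print("VM failed; repointing user session at", active)
                # Unsaved work in the dead VM is lost, which is exactly the
                # remaining gap complained about above.
            time.sleep(5)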

    As to server consolidation, that is not (in this business case) a valid concern. The servers running there need the maximum computing power they can get, and they are mission critical. The latter speaks for virtualization; the former speaks strongly against it at this point in time, since we need direct access to the hardware to get maximum performance. If there were money for high-end servers, things might look rather different: the company could probably VM most of its infrastructure then. But that would require the power of a fully stacked blade center at least, which is financially out of the question.

    In larger companies that can afford to throw away a few kilobucks, things are probably different. From experience, I suggest at least a month of practical tests before going virtual with the production workload.

    All that said, I do believe that virtualization is where we will end up. The management of virtual machines can be co-ordinated and executed much more easily than that of physical machines, meaning a reduction in administration costs. If you're an admin and not yet acquainted with the handling of a hypervisor, ye better learn or you'll be in trouble soon. Virtualization is coming, and it will take over quickly, even in relatively small companies, simply because a single server (hardware) is cheaper than ten clients and for most purposes offers the same reaction time and better uptime.

    Me, I'm off for a beer.

  6. Anonymous Coward

    css + ie = pain

    Ask any decent web developer: one of the big pains is getting a site to work in IE. Since IE's CSS implementation is still very broken, it is important to test your application in it. To make matters worse, different versions of IE can behave very differently as well. Don't take my word for it; check the number of compatibility modes offered in IE8. It isn't particularly easy to keep an old version of IE on a machine, but with virtualization I can easily have different builds of the system in self-contained environments for testing.

    You may think IE6 is dead... Think again: it is very much alive and has been the cause of headaches! In fact I spent almost an hour trying to figure out why JavaScript didn't work in my application. It turned out the JavaScript engine had been deregistered... but how and why? Virtualization is a valuable tool for website testing.
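
    A sketch of what that testing loop can look like (hypothetical host names and test URL; it assumes each VM exposes a Selenium server, which is one common way to drive a browser remotely, and truly ancient IE builds would need era-appropriate drivers):

        # Run the same smoke test against IE instances living in separate VMs.
        from selenium import webdriver

        VMS = {  # one self-contained VM per IE build; endpoints are made up
            "ie7-vm": "http://ie7-vm.example:4444/wd/hub",
            "ie8-vm": "http://ie8-vm.example:4444/wd/hub",
        }

        for name, hub in VMS.items():
            driver = webdriver.Remote(command_executor=hub, options=webdriver.IeOptions())
            try:
                driver.get("http://app-under-test.example/")
                # Trivial check that the JS engine is alive at all; a real suite
                # would assert on layout and behaviour per IE version.
                ok = driver.execute_script("return 1 + 1;") == 2
                print(name, "JavaScript engine", "alive" if ok else "BROKEN")
            finally:
                driver.quit()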

  7. Anonymous Coward

    Destructible machine

    It is an invaluable teaching tool, as it is so easy to reinstate and allows many configurations to co-exist on a single machine. Without virtualization it would be very hard to get ICT support to put anything other than Windows on machines, which is why you get hundreds of Windows drones out of university every year!

  8. Anonymous Coward

    Sandbox for emails

    It is useful for opening dodgy emails. Sometimes I get emails from friends with out-of-character subject lines. I usually open them in a copy of Linux running in a VM, so that if the email was sent by an infected friend's computer it cannot ruin my Windows system.

  9. Anonymous Coward

    Just the same thing all over again...

    The main issue, which is not included, is the cost of managing the virtualised instances. As a concept, VMware, Xen and the other virtualisation engines are easy to implement. The catch is that where we previously had server sprawl, we now have virtualised OS instance sprawl, and it happens at 100 times the rate. Virtualisation is sold on a false premise: your costs will not go down. Most of the cost was in management and people anyway, and that has not changed. You may now have less hardware and heat, but the complexity of your management and the cost of resources have increased significantly.

    Mainframes and Unix could do this in the box: multi-hosting, and bolting layers of virtualisation together until it got as complex as you could manage. Then everybody went Windows, and now, because everything has to be virtualised per OS instance (due to Windows' inability to multitask and share resources between disparate applications), you have got the OS and server proliferation problem. And to the rescue comes new funky stuff called virtualisation, which we had on mainframes and UNIX anyway, arriving to save the day. Ha! People forget it still needs to talk to the network and storage, which also need to be virtualised and managed by slightly clever people.

    And in the end, the virtualised OS, storage and network complexity will kill you: for a start, the tools currently cannot manage this virtualised integration, and by the time they can, you will pay through the nose and be right back where you were when you had to pay sysadmins a living wage for the knowledge to keep these sticky-taped-together solutions running. And with the current Windows-type admins out there, who have no clue about the bigger picture: good luck!!
