Virtualisation extremist? Put down that cable and step away slowly

Virtualisation is everywhere, particularly the data centre, and that's a good thing - if used wisely. Virtualisation can help you milk the greatest possible performance (and hence maximum value) from the physical gear. Running multiple virtual servers on top of a physical server platform allows you to minimise wasted CPU and …

COMMENTS

This topic is closed for new posts.
  1. This post has been deleted by its author

  2. ISYS
    Holmes

    Ummm.....

    "If you're so inclined you can even take it to extremes, so that in the event of a physical host failure your virtual servers keep on humming by virtue of real-time replication onto other physical hosts"

    Not quite - Virtual Hosts store the VM files on shared storage.

    Most of this article seems to be stating what is industry best practice anyway?

    1. msage

      Re: Ummm.....

      Not in a shared nothing cluster they don't!

      1. Mr Anonymous

        Re: Ummm.....

        "Not in a shared nothing cluster they don't!"

        Oh yes, the type of "cluster" where, when you have a failure, you _lose some data_, the amount depending on the state of the replication. It's just marketing speak; it has a place, but it's not a high-availability cluster.

        1. Danny 14

          Re: Ummm.....

          Won't Server 2012 replicate VMs between hosts using non-shared storage (i.e. local drives)? I thought that was one of the selling points for the little man, i.e. take your existing five servers, virtualise them, scrap a couple and use their local storage to create failover.

  3. RonWheeler

    Precis

    Precis - pay network guys a lot of money to hold your infrastructure hostage to their console-cable-wielding whims.

    1. conan

      Re: Precis

      Exactly. Why not just run on a hosted platform? What's the point of owning iron at all? In most cases it just seems to be resistance to change and people protecting their jobs.

      I worked at a company once where we lost our devops engineer. I remember a conversation where I was saying I'd never consider starting a project without devops support, and he responded saying he'd never start a project that needed devops - he'd buy into the economy of scale and use a hosted platform until the project had paid for itself many times over. I now agree.

      1. RonWheeler
        Windows

        Re: Precis

        'Exactly. Why not just run on a hosted platform? What's the point of owning iron at all? In most cases it just seems to be resistance to change and people protecting their jobs.'

        Owning your own kit can be cost-effective and better, as long as you don't let a bunch of must-have-the-best nutters make the decisions. Common practice seems to be not to measure how many IOPS your data really chews through, how little bandwidth your servers actually use, etc. Most people overspec by a factor of 10 or more. I remember using PlateSpin three years back in the P2V era - it meant I could make a case that saved our company many, many times my salary by not shoving in stupidly overspecced 10Gb Cisco switching, HBAs and so on. That, plus backing off from having THE flashiest SAN on the market in favour of something more mainstream.

        Much cheaper than hosted once you cost in WAN links, and less latency. But horses for courses.

      2. Mr Anonymous

        Re: Precis

        "Exactly. Why not just run on a hosted platform? What's the point of owning iron at all? In most cases it just seems to be resistance to change and people protecting their jobs."

        Short memory Conan.

        Because when your multi-million pound trusted partner goes bust, you either get held to ransom by the administrator working on behalf of the creditors or, in the worst case, lose your data and your business.

        1. Yet Another Commentard

          Re: Precis

          Re: hosted

          Also make sure that any data they hold isn't suddenly subject to, well, say, the PATRIOT Act or equivalent, and handed over to anyone in a foreign jurisdiction who asks for it without so much as a by-your-leave. (I exaggerate, but the point is there.)

          1. Bert 1
            Thumb Up

            Re: Precis

            I fully expect the next development to be multi-provider virtual data centers.

          2. Danny 14

            Re: Precis

            With regard to the PATRIOT Act etc., the problem might not be handing the data over (which you might or might not care about) but rather the host being shut down whilst their servers are handed over.

      3. Anonymous Coward
        Anonymous Coward

        Re: buy into the economy of scale

        Yeah, like the way those factories that make CPUs and HDDs scale. I mean, they make so much of the stuff, it's so cheap. I mean you can even host your own servers for a fraction of the going price - if you need proper servers, that is. If not, yeah, just pay 5 bucks for some shared hosting / email package.

        I think all this commoditisation (is that a word?) of servers is sometimes a load of marketing shigar that the artsy types lap up because they can deploy their RoR app with a click. Nifty, yes... =p

  4. Anonymous Coward
    Anonymous Coward

    There was technology to maximise hardware usage before virtualisation

    It was called an operating system - you know, where you had multiple processes all being timeshared for the CPU and given their own memory slice etc.

    But then along came Windows with its DLL and registry hell, its piss-poor separation of user/account data and its insistence that anything that wants to do anything useful must have admin privs at some point. Which makes it virtually impossible to run more than one major application on the OS at the same time (or Windows admins who don't know how to manage them - take your pick). And what did the geniuses come up with as a solution? Improve the OS? Err, no. Have multiple instances of the same OS running at the same time!

    Genius!

    Not.

    So instead of having one process management layer you have two - the OS kernel AND the hypervisor - so it actually ends up less efficient. But of course that's not what the snake oil salesmen want you to believe.

    And yes - I am aware of VMs on IBM mainframes donkey's years ago, but that was also down to their piss-poor OS not being able to do any sophisticated process management and quota separation.

    1. Peter Gathercole Silver badge

      Re: There was technology to maximise hardware usage before virtualisation

      Generally completely agree with you.

      But there are situations where it is useful, and also where it is essential.

      It's useful to allow two different operating systems to run on the same hardware. Back in the late 1970s, the University I was at turned off their IBM 360/65 running OS/360, and migrated the workload onto a proto-VM on their 370/168. Normally the 370 was running MTS (look it up), but by using a VM, it could also do the legacy OS/360 work at the same time.

      Currently, you might do the same to run Windows next to Linux on the same system.

      In addition, many enterprise OSs running today were initially designed more than a couple of decades ago. Back then, two CPUs in a system was novel outside of the mainframe world, so the same OS facing a machine with 1024 CPUs may struggle. OK, the OS should have been updated, but when these OSs were written, people probably did not foresee such large systems (640KB, anybody?), and built in serious limitations that require a lot of work to overcome. Unfortunately, these OSs are often becoming legacy for the vendors, so it seems unlikely that the necessary work to overcome the limitations will be done. So often it makes sense to divide up your workload into separate OS instances, and stick each into its own VM.

    2. Tom 38
      Stop

      Re: There was technology to maximise hardware usage before virtualisation

      Using VMs allows you to allocate resources (and enforce those allocations) to particular VMs. Running multiple apps on a single OS instance does not.

      1. dedmonst

        Re: There was technology to maximise hardware usage before virtualisation

        >Using VMs allows you to allocate resources (and enforce those allocations) to particular VMs.

        >Running multiple apps on a single OS instance does not.

        Again, using a "proper" operating system (read: any of the commercial UNIXes still out there), you _can_ run all your apps on a single OS instance and enforce resource allocations.

        See Solaris Zones, HP-UX Containers and AIX WPARs

        I've never used it, but there's even a product to do this on Windows and Linux - Parallels Virtuozzo - no idea if it's any good or not...

        Of course the challenge comes when you need to operate at different patch levels and with different kernel parameters, but again these OSs will handle some of this to a greater or lesser extent, and if that doesn't work, THEN you can virtualise the OS.
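
        For illustration, Linux offers an analogous OS-level mechanism in cgroups. A minimal sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup, the script runs as root, and a made-up application path; the group name and the limits are arbitrary examples, not anything from the thread:

```python
# Illustrative sketch only: cap CPU and memory for a process group using
# Linux cgroup v2, no hypervisor involved. Assumes cgroup v2 is mounted at
# /sys/fs/cgroup and the script runs with root privileges.
import os
import subprocess

CGROUP = "/sys/fs/cgroup/appgroup"   # arbitrary example group name

os.makedirs(CGROUP, exist_ok=True)

# Allow at most 2 CPUs' worth of time: 200000us quota per 100000us period.
with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
    f.write("200000 100000")

# Hard memory ceiling of 4 GiB for everything in the group.
with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write(str(4 * 1024**3))

# Launch an application and move it into the group; it now shares the
# machine with other workloads but cannot exceed the limits above.
proc = subprocess.Popen(["/usr/local/bin/my_app"])   # hypothetical app path
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(proc.pid))
```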

        1. Tom 38

          Re: There was technology to maximise hardware usage before virtualisation

          Zones and Containers are a form of virtualisation - para-virtualisation, or OS level virtualisation, I suppose - but I take your point.

          Most of us don't get a choice of using such a tasty OS, though. We're in the process of ditching FreeBSD (sob) for Linux VMs running on Xen because FreeBSD jails (which are like Solaris Zones) don't support disk 'fairness' quotas (I'm pretty sure Zones do).

          1. This post has been deleted by its author

        2. P. Lee

          Re: There was technology to maximise hardware usage before virtualisation

          "nice" is about as good as it gets. I see no particular reason why an OS scheduler shouldn't allow resource scheduling in the same way a network bandwidth management device works.

          Except... proper Unix involves expensive hardware and the OS cost is usually nominal.

          MS, on the other hand, has an enormous financial incentive not to provide these sorts of features. Why would it want you to stop buying multiple copies of its OS to run on the same machine?

          It also appears that hardware has outstripped the OS's ability to manage it, or an application's ability to be tuned. Ever seen multiple VMs on the same host doing the same thing?

          I'd like to see a security manifest for server applications which is updated by the OS. The application requires read access to /lib and $APP/bin, r/w access to $APP/data, HTTPS (TCP-443) access to www.callhome.com via a configured proxy (proxyIP) and TCP-1521 to $DBServer. That could also be fed into a firewall rules processor to make that easier, and could allow the OS to stomp on "buffer overflow -> let's access other stuff" problems. It wouldn't work for all apps, but it would be a start.
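
          A rough sketch of what such a manifest and a firewall-rules processor might look like; the manifest fields, the application name, the database host and the iptables-style rule format are all assumptions invented for the example:

```python
# Sketch of the "security manifest" idea: declared network permissions for
# an app, fed into a firewall-rules processor. The manifest fields, app name,
# database host and iptables-style output are assumptions for illustration.
manifest = {
    "app": "example_app",                       # hypothetical application
    "network": [
        {"proto": "tcp", "port": 443,  "dest": "www.callhome.com", "via_proxy": True},
        {"proto": "tcp", "port": 1521, "dest": "db.internal.example"},  # stands in for $DBServer
    ],
}

def to_firewall_rules(manifest, proxy_ip="192.0.2.10"):
    """Turn the manifest into outbound allow rules, ending with a default deny.

    Real proxying would also rewrite the destination port; kept simple here.
    """
    rules = []
    for entry in manifest["network"]:
        dest = proxy_ip if entry.get("via_proxy") else entry["dest"]
        rules.append(
            f"-A OUTPUT -p {entry['proto']} -d {dest} --dport {entry['port']} "
            f"-m comment --comment {manifest['app']} -j ACCEPT"
        )
    rules.append(f"-A OUTPUT -m comment --comment deny-{manifest['app']} -j DROP")
    return rules

for rule in to_firewall_rules(manifest):
    print(rule)
```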

    3. Anonymous Coward
      Anonymous Coward

      Re: There was technology to maximise hardware usage before virtualisation

      The only reason I use virtualisation is for failover, migration and general management. At least Linux can be quite minimal, and KVM does well there too. Jails, anyone? I'd never virtualise a Windows server unless it was just to go, "Wow, this actually booted! OK, that was fun. Now delete it."

  5. Anonymous Coward
    Anonymous Coward

    ISYS, if using VMware Fault Tolerance then the VM really will be replicated in real time (CPU, memory state, etc.) to another host, which runs a "shadow" VM (which can become the master if the first host goes wrong). That uses shared storage.

    If using VMware SRM, the VM's disk will be replicated to another datastore/site on a schedule, so it can keep running, although there will be a reboot in the case of a failure.

    So the article is correct; it just depends which virtualisation features you are using.

  6. Anonymous Coward
    Anonymous Coward

    Just because you can do something ....

    ..doesn't mean you should.

    At one client the IT dept were determined to roll out a capacity-on-demand virtualisation solution for their core retail systems (traditional and online). I explained to them that since the business could easily predict sales increases, because Xmas, BH weekends, etc. are fairly regular and new product launches are planned months in advance, there wasn't really a need for expensive capacity-on-demand technology.

    They still persisted, so I asked them if they had a priority structure to decide which systems the capacity-on-demand system would take resources from when they were needed. The answer was that they weren't going to do that; they were going to build a resource pool big enough that the capacity on demand didn't have to take resources from any other system.

    So they were going to spend a substantial six-figure sum on a system that would automatically assign resources to a system that already had the resources assigned to it. Doh!!

    Somebody tried to argue about the power saving, but we calculated that even in the most favourable case it would save at most £5,000 in power and AC costs per year, which would mean that the system might start paying for itself after 40+ years.
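
    The back-of-an-envelope sum, taking an assumed £250,000 as a stand-in for the unspecified six-figure spend (the £5,000/year saving is the figure quoted above):

```python
# Payback arithmetic for the capacity-on-demand kit described above.
# The £250,000 capital cost is an assumed stand-in for the unspecified
# "substantial six-figure sum"; the £5,000/year saving is as quoted.
capital_cost = 250_000      # £, assumed example
annual_saving = 5_000       # £ per year in power and AC costs
payback_years = capital_cost / annual_saving
print(f"Pays for itself after roughly {payback_years:.0f} years")  # ~50 years
```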

    Surprisingly, the business saw sense and refused to fund it, although the designer had a huge hissy fit, because he had already started talking to the vendor about doing case study presentations at various conferences.

    1. Danny 14

      Re: Just because you can do something ....

      That probably had a lovely kickback for the designer too.

      1. ecofeco Silver badge

        Re: Just because you can do something ....

        >That probably had a lovely kickback for the designer too.

        Beat me to it.

  7. Brian Miller

    Horses for courses, eh?

    Once upon a time I worked on a project where entire VMs were supposed to be shuffled between data centers according to available wind and solar power. The flaw in their plan was that the connection between the data centers was essentially 10Mbit/sec, and I couldn't get their idiotships to realize that you can't just shuffle multiple multi-gigabyte VMs across such a slow connection before the power goes out. Sure, you can transfer user session information, but not loads of VMs. Some people just can't do or comprehend basic math.
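
    The basic math, assuming 20GB per VM and ten VMs in flight (the comment only says "multiple multi-gigabyte VMs"); the 10Mbit/sec link speed is as stated:

```python
# Transfer-time arithmetic for migrating VMs over the link described above.
# The 10 Mbit/sec link speed is from the comment; the 20 GB VM size and the
# count of ten VMs are assumptions for illustration.
link_bits_per_sec = 10 * 1_000_000
vm_size_bytes = 20 * 1024**3
vm_count = 10

seconds = vm_count * vm_size_bytes * 8 / link_bits_per_sec
print(f"About {seconds / 3600:.0f} hours to shift {vm_count} VMs")  # ~48 hours, i.e. roughly two days
```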

    1. RonWheeler
      Holmes

      Re: Horses for courses, eh?

      Your point? Other than that stupid people are stupid?

      1. Tomato42
        Thumb Up

        Re: Horses for courses, eh?

        The point is that too much sugar can spoil the cake.

        There are environments where abstracting the problem to a level where you're dealing just with VMs, treating them as cartridges you can move and swap around, makes the IT easier to manage. There are also situations where this abstraction will hurt you and make your systems crumble.

        You need to know which tool is good for the job; if all you have is a hammer, every problem... and so on.

  8. Alistair
    Holmes

    " implement it robustly and document it to death so that when you're not there, the poor sod who's tasked with fixing a system-down problem has a fighting chance of doing so "

    Best sentence in article.

    1. Tomato42

      Just don't make the descriptions too detailed, or someone clueless might actually think he can do it and take the system from "down" to "unrecoverable", just because he missed one warning or clicked through an error.

  9. Anonymous Coward
    Anonymous Coward

    Cool. And slightly worrying.

    "...in the event of a physical host failure your virtual servers keep on humming by virtue of real-time replication onto other physical hosts."

    Cool. Another step towards the viability of independent, adaptively self-maintaining software entities. (Sorry that doesn't form a cute acronym).

    Like John Brunner's worm in The Shockwave Rider (that prophetic 1975 SF novel). As I recall, the worm shifts around the world with lightning speed, and when the authorities try to shut it down, it retaliates by destroying banking databases until they quit and leave it alone.
