Server virtualisation: How to pick the right model

Virtualisation has become an over-used buzzword. On mainframes, it has been around for ages. Its introduction to x86 took a concept formerly reserved for Big Tech and let it loose among the masses. Once a straightforward technology with a limited number of implementation models, virtualisation has been bootstrapped and …

COMMENTS

This topic is closed for new posts.
  1. Nate Amsden

    direct attached

    I don't think direct-attached virtualization requires high-end hardware for HA.

    What you need is fault-tolerant software. Something that scales out rather than up is ideal for direct-attached virtualization, so that if a physical server with 20 VMs dies, you don't care.

    A couple companies ago a decent portion of our web servers were deployed this way. Some of the core web apps could not scale beyond 1 physical CPU. Rather than try to rewrite the code, it was simpler to put a hypervisor on a dual-proc quad-core box and run 8 copies of the app (with 4 servers/site, or roughly 32 copies of the app per active-active site [multiple sites for reduced customer latency]). Another app was higher performance and could leverage the underlying hardware to its full extent, so that web application ran on bare metal. The cost of a good shared storage array was going to outweigh the cost of all the rest of the equipment at each site, so it didn't make a lot of sense (or cents) from a cost standpoint, as much as I would have jumped at the opportunity to have such a system from an ease-of-use standpoint.

    You forgot to mention the hybrid approach, a la vSphere 5 VSA or things like the HP Lefthand VSA, which turn direct-attached storage into fault-tolerant network storage - good for SMBs with low I/O needs, who can get the best of both worlds.

    I don't think your storage numbers are realistic either. Measuring storage by throughput when virtualization workloads are in large part random I/O renders throughput numbers meaningless - it's all about IOPS, and the network is not the bottleneck. Sure, there are some throughput-bound workloads, but in the vast majority of cases you're going to run out of IOPS long, long before you get close to running out of bandwidth (consider that a 15k RPM disk peaks at maybe 4-8MB/second if you're lucky). Latency is just as important as IOPS, too. If you're truly throughput-bound then you may be better off running on bare metal; hypervisors don't make sense for everything.
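
    To put rough numbers on that, a quick back-of-the-envelope sketch (my own illustrative figures for spindle count, per-disk IOPS and average I/O size - not numbers from the article):

        # Back-of-the-envelope: random-I/O ceiling vs. network bandwidth.
        # All figures are illustrative assumptions, not measurements.
        DISKS = 48                   # spindles behind the array
        IOPS_PER_15K_DISK = 175      # rough random-I/O ceiling of one 15k drive
        AVG_IO_SIZE_KB = 32          # mostly-random virtualization workload

        total_iops = DISKS * IOPS_PER_15K_DISK
        throughput_mb_s = total_iops * AVG_IO_SIZE_KB / 1024.0

        print("%d IOPS -> ~%.0f MB/s" % (total_iops, throughput_mb_s))
        # 8400 IOPS works out to roughly 260 MB/s, about a quarter of a 10GbE
        # link (~1,200 MB/s usable): the spindles hit their limit long before
        # the network does.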

    Depending on the organization, you can design your network up front so that once your hypervisors are deployed and your virtual switches are configured, you rarely have to touch them again. Across my last VM deployments (~60 VM hosts, several different clusters) I can count on one hand the number of times I had to configure a vSwitch after installing the hypervisors. My new VM infrastructure, which is going in early next month, is planned so that I don't expect to touch the vSwitches for the lifetime of the product - at least 2-3 years. Not that it's a big deal if I have to, but if I don't need to, so much the better. Anything can happen, but my experience tells me vSwitch configuration changes are few and far between.
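
    For what it's worth, that one-off vSwitch build can also be scripted so nobody has to touch it by hand afterwards. A minimal sketch using VMware's pyVmomi Python bindings (my example, not my actual deployment - the host name, credentials, uplinks, VLAN ID and port-group name are all placeholders):

        # One-time standard vSwitch + port group setup on a single ESXi host.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()          # lab use only
        si = SmartConnect(host='esxi01.example.com', user='root',
                          pwd='secret', sslContext=ctx)

        # Grab the first HostSystem (connecting straight to one host here).
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.HostSystem], True)
        net = view.view[0].configManager.networkSystem

        # Create the vSwitch once, bonded to two uplinks.
        vss = vim.host.VirtualSwitch.Specification()
        vss.numPorts = 128
        vss.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic2', 'vmnic3'])
        net.AddVirtualSwitch(vswitchName='vSwitch1', spec=vss)

        # Add a tagged port group for the VMs; after this it should sit untouched.
        pg = vim.host.PortGroup.Specification()
        pg.name = 'VM-VLAN100'
        pg.vswitchName = 'vSwitch1'
        pg.vlanId = 100
        pg.policy = vim.host.NetworkPolicy()
        net.AddPortGroup(portgrp=pg)

        Disconnect(si)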

    Memory (capacity, not performance) is the driving force behind virtualization, which is why VMware added the vTAX in v5. Memory availability is just as important in these big boxes, making technologies such as HP Advanced ECC and IBM Chipkill absolutely critical to any VM deployment. ECC by itself is not enough, and has not been for years.
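
    To illustrate why memory is usually the ceiling (made-up per-host and per-VM figures, nothing from the article):

        # Why RAM capacity, not CPU, usually caps consolidation.
        # All figures below are illustrative assumptions.
        HOST_CORES = 16          # 2 sockets x 8 cores
        HOST_RAM_GB = 192
        VM_VCPUS = 2
        VM_RAM_GB = 8
        CPU_OVERCOMMIT = 4.0     # vCPU:pCore ratios like this are routinely fine
        RAM_OVERCOMMIT = 1.0     # memory overcommit is far riskier, keep it 1:1

        cpu_limit = int(HOST_CORES * CPU_OVERCOMMIT / VM_VCPUS)    # 32 VMs
        ram_limit = int(HOST_RAM_GB * RAM_OVERCOMMIT / VM_RAM_GB)  # 24 VMs

        print("CPU allows %d VMs, RAM allows %d -> host tops out at %d"
              % (cpu_limit, ram_limit, min(cpu_limit, ram_limit)))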

    If mainframes were so good, then why is IBM using KVM + Red Hat for their developer cloud instead of running Linux on mainframes? (IBM used to advertise a lot about leveraging the multi-tenant abilities of mainframes; they don't seem to advertise that nearly as much anymore - I haven't seen such an ad in years.) IBM, after all, unlike anyone else, has got to have the lowest cost of operating their very own gear, and I'm sure licensing their own software comes at no cost as well. I wrote about this a while back; the IBM developer cloud was focused around Java apps, so from a technical perspective it wouldn't matter what platform they ran on.

    What do people use when milliseconds can cost millions of dollars? More often than not these days it seems to be overclocked x86-64 systems (El Reg has many articles on this). Mainframes are what you use when you can't tolerate downtime.

  2. Mark Honman

    On the cheap

    We're in the SMB bracket and use a KVM cluster (running the Proxmox VE distribution).

    Direct-attached storage is used with 2-way disk replication (DRBD). So with a small number of VM hosts it is possible to get the speed advantage of direct-attached disk and still have live migration.

    What we aren't able to do is have a pool of VM hosts that are effectively interchangeable.

  3. Diginerd

    XCP

    One of the better-kept open secrets of open-source virtualization is XCP, and its new sibling Project Kronos (a full port available for Debian/Ubuntu via apt-get). Both are essentially FOSS versions of the $pendy Citrix XenServer (talking Enterprise/Platinum editions, not the freebie base edition).

    One of the cooler new features is a hybrid storage model, which lets a pool of servers access shared storage but has each host automatically replicate the virtual disks to local storage as they are accessed. The net result is local-disk performance after the initial read from the remote SR.
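
    Conceptually it behaves like a read-through cache. A toy sketch of the access pattern only (not XCP's actual implementation - the class and names are made up):

        # Toy model: the first read of a block comes from the shared SR, every
        # later read of the same block is served from local disk/SSD.
        class ReadThroughCache(object):
            def __init__(self, remote_read, local_store):
                self.remote_read = remote_read   # callable: block_id -> data (shared SR)
                self.local = local_store         # dict-like: local disk or SSD

            def read(self, block_id):
                if block_id not in self.local:   # first touch: fetch from the SR
                    self.local[block_id] = self.remote_read(block_id)
                return self.local[block_id]      # thereafter: local-speed reads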

    Doubly cool if the local storage is SSD. :)

