I/O holds up the traffic in virtual systems

We can now host 512 virtual machines on a single physical server. That's a lot of virtual machines trying to squeeze a lot of I/O out of a single server's networking interfaces. Meanwhile, vSphere 5 is out. It arrived with an indeterminately large quantity of truly outstanding features that were completely overshadowed by a …

COMMENTS

This topic is closed for new posts.
  1. Voland's right hand Silver badge
    Devil

    VMware, Xen (and the article author) are all missing the point

    We have now reached the point where it is necessary to provide proper networking features at the v-Net layer, including correctly merging n x XG interfaces into m x virtual interfaces, with trunking and other network protocols working as needed on top of this.

    Similarly, we have reached the point where it is necessary to have the more advanced OS features like QoS, policing, reservations, etc. all working too.

    Neither of these is on the v-world horizon. In fact, if you look at where they are going, it is in completely the opposite direction: transparent, dumb VLAN passing to VMs using PCI virtualisation, and killing all advanced OS networking features to achieve the required performance.

    That already flattens out the network prior to any accel (as observed in the article). It cannot be the way forward. It is the way backward.

  2. Nate Amsden

    what are you running?

    What are you running that is consuming so much bandwidth? My last VM deployment was probably in the area of 350-400 VMs spread over 40 or so servers (several different use cases, some servers at different sites, etc.).

    If you added up all the servers and all the VMs, you probably wouldn't exceed 2Gbps in bandwidth across them all, maybe bursting to 3Gbps. There was no vMotion or DRS; none of that was licensed. Just vSphere Standard edition for some systems, free ESXi for others.

    As far as I know the CPU hasn't been the bottleneck since the introduction of quad-core CPUs. I ran across a screenshot of my first ESX system (I had used GSX and the like a lot prior): ESX 3.0.2 with dual quad-core processors and 16GB of RAM (HP DL380 G5). I'm not exactly sure when it was from, though ESX 3.0.2 was apparently released in July 2007.

    The bottleneck has been memory (capacity, not performance) for a looooooooong time, and still is, and will continue to be in most circumstances.

    My current VM project will be about 200 VMs on about 8 servers, each with massive 4x10GbE connectivity (not because I need 10GbE speeds, more because 10GbE is cheap and simplifies configuration), specifically to support the life cycle of an e-commerce app (including production use). Maybe aggregate bandwidth usage will be in the 2Gbps range (excluding things like vMotion, which will burst higher because the bandwidth will be available to it); that's just an estimate at this point, since the equipment is not even ordered yet.

    1. Trevor_Pott Gold badge

      What am I running?

      VDI. Folk watch video from inside their VMs. Servers that render images, video and audio. Web sites. File storage (deduplication front-ends, backup systems, storage replication VMs, etc.).

      In a word: everything. Server CPU time is cheap and plentiful. Why not abuse the crap out of it?

      1. Nate Amsden

        interesting

        Interesting. Myself, I would not run all of that stuff in VMs (outside of VDI), but it's nice that you have the flexibility to do it. CPU is plentiful, though "cheap" is relative when talking hypervisors. I'd be very happy if VMware eliminated the per-socket CPU core limit in vSphere 4.1 across all editions.

        1. Trevor_Pott Gold badge
          Megaphone

          Nate Amsden

          It's a question of financial resources. We’ve rounded the bend. Virtualisation is simply cheaper.

          Virtualised environment:

          6x servers @ $4,000 = $24,000

          100 thin clients @ ~$400 = $40,000

          Software = $125,000

          Total = $189,000

          Advantages: everything runs on RAID; getting people used to RDP means more of them working from home; *way* less hardware cost.

          Disadvantages: Shared resources can be a little slow at peak usage. (This problem is rapidly going away as new tech becomes more mainstream.) Software costs are about 1.3 - 1.5x the other route.

          Same setup, non-virtualised:

          24x servers @ $2,000 = $48,000

          100 fat clients @ $1,000 = $100,000

          Software = $90,000

          Total = $238,000

          Advantages: Everyone gets their own dedicated hardware. Everything is always the same speed. Cheaper software.

          Disadvantages: Way, way, WAY more expensive hardware costs. End user stuff isn't typically running on RAID. Backups are a pain in the ASCII. Swapping/maintaining hardware becomes a bit of a pain near end-of-life for end units. End users can't use their own equipment without having IT's security fingers all over it, creating wailing and gnashing of teeth.

          At the end of the day, dedicated hardware for everyone is faster. It is, however, a person's salary more expensive and adds something like two bodies' worth of management and support overhead (the rough sums are sketched below).

          I’ve been running fully virtualised shops since ~2005. I simply wouldn’t go back to physical unless I was working at a place willing to put in about 2x the money /[unit of measurement].

          Welcome to 2011, $deity help us all…
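
          For anyone who wants to poke at those numbers, here is a quick Python sketch of the sums. It only restates the rough, round figures quoted above; none of it is a vendor quote.

          # The sums above in one place (all prices are rough, round estimates).

          virtualised = {
              "6x servers @ $4,000": 6 * 4_000,
              "100 thin clients @ ~$400": 100 * 400,
              "software": 125_000,
          }

          physical = {
              "24x servers @ $2,000": 24 * 2_000,
              "100 fat clients @ $1,000": 100 * 1_000,
              "software": 90_000,
          }

          for name, costs in (("Virtualised", virtualised), ("Non-virtualised", physical)):
              print(f"{name}: ${sum(costs.values()):,}")
              for item, cost in costs.items():
                  print(f"  {item} = ${cost:,}")

          saving = sum(physical.values()) - sum(virtualised.values())
          print(f"Up-front difference: ${saving:,} in favour of the virtualised route")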

          1. Anonymous Coward
            Anonymous Coward

            "Way, way, WAY more expensive hardware costs."

            Weird. We looked into thin clients and VDI in early 2011, and couldn't make it add up from a financial, energy-efficiency, device-management or user-experience viewpoint. Then again, we found that a usable thin client was two-thirds the price of a desktop PC (ignoring the monitor, which you need anyway). We also couldn't identify any obvious savings in management or support overheads.

            You seem to have more servers in the non-VDI approach. That's the wrong way around, surely? We found that the extra server infrastructure outweighed the (small) hardware savings at the client.

            You also missed the main advantages of the non-VDI approach - much less of a single point of failure, much more mature and capable management tools, and much simpler.

            1. Trevor_Pott Gold badge

              Wyse clients.

              Wyse clients made for cheap endpoints. Also: you can drag extant desktops along with a very basic OS and no management if you are using them just as thin clients.

              As to "more servers under VDI," I virtualise more than desktops! Buying dedicated physical servers to perform all fhe rendering, backups, hosting, etc would result in many more boxen than nof using virtualisation.

              As to "single point of failure," I did address that in that when talking about RAID, maintenance, etc.

              1. Chris Miller

                + Security

                A further advantage of virtualising desktops is that it eliminates many common security issues. You needn't give standard users access to USB drives, CD/DVD burners etc. and there's less business need to move data off site (so users can work on it from home) because they can easily access their desktop remotely.

      2. Anonymous Coward
        Anonymous Coward

        The title is optional, but apparently can't be left blank.

        "VDI"

        There's your problem.

        1. Ammaross Danan
          WTF?

          Yep

          VDI was designed to remove the cost of your end-points (including management and hardware) and put it in your datacenter. Considering each end-point doing "rendering" (read: likely Maya or AutoCAD-type renders) saturates Gigabit Ethernet to the desktop, why would you NOT expect a measly bonded-gigabit or quad-gigabit link to get saturated when hosting tens of servers?

  3. Anonymous Coward
    Anonymous Coward

    DMA

    Surely most network transfers will be DMA inside the machine, so it's the speed at which the bus can move the data to the NIC that is the limiting factor.

    This will always be much faster than the network.

  4. launcap Silver badge
    Boffin

    Mainframes..

    .. still rule at I/O. It's about the *only* thing they rule at though.

    So - can you run VMware on IBM Big Iron?

    1. Tomato42
      Trollface

      You don't want to run hypervisors as simple as VMware on IBM iron...

    2. Chris Miller

      No, but VM/370 has been around for 40 years, and there was earlier software on the 360 series (and Sperry claimed IBM stole their idea, and so on...). You can run thousands of instances of (a kind of) Linux on z/VM, although it makes for an expensive platform.

  5. Lusty

    I'm also not convinced that your NICs are fully saturated unless you've configured them incorrectly. For instance, are you using LACP bundles, etc.?

    Storage is the big bottleneck at the moment and has been for some time. This is not a bandwidth issue, though, but more of an IOPS issue, especially where customers (and most consultants) don't even understand the effect of RAID or IOPS on performance.

    If you really, truly have bandwidth issues and have it all configured correctly, then wait for the next-gen servers (HP G8 and the Dell equivalent) and install a Xsigo system with 80Gbps InfiniBand cards. Then you'll get 160Gbps throughput from two cards and two wires, which will be sufficient, I suspect. It also spells the end of VLAN trunking to the host and LACP to the host, which has to be good news!

    1. Trevor_Pott Gold badge

      @Lusty

      The bandwidth issue was solved with the addition of more 10Gbit lanes.

      For now...

  6. Trevor 3
    Facepalm

    Where is the bottleneck?

    For my deployments, I have always found that the bottleneck hasn't been the front-end "to-the-VM" path. Rather, it has been the back-end path to the storage.

    Front-end pipes are extremely easy to increase; back-end pipes are less trivial to increase and can be a lot more expensive. You just end up with more expensive storage controllers and more expensive disk for only a fraction of the improvement.

    It's all well and good having a 17TB storage array with 2 controllers, serving 2 fibre connections, with a hundred or so VMs on it, until you try to get that data off it. Hundreds of machines all making small random-access reads (or even just a couple of machines making long reads/writes) is just too much for some storage systems.

    I also agree somewhat with the first poster. Where is my 2Gb (or bonded 2x1Gb) VMNic to go with the 2Gb pipe off my vSwitch?

  7. Sosheel
    Alert

    our experience is the opposite

    “Except for some very rare corner cases, the processor is no longer the bottleneck in the data centre. I/O is.” NOT our experience at all… bandwidth has not been a bottleneck for us with the utilization of 10Gig NICs; rather, memory is the resource we run out of more often than not. I am curious what workloads are being run. We have a very customized, high-I/O solution that can run down a NIC, but that is the very rare exception and took quite a bit of development to do...

  8. Anonymous Coward
    Anonymous Coward

    Why a SAN?

    I'm curious: since you've found the bottleneck is in your network speed, why not take that out of the equation by direct-attaching your storage to your VM hosts? A single external mini-SAS connection can do 24Gb/s, so you won't be reaching that limit right off the bat.

    I'm guessing that is not an option because you have a centralised filer type of machine and multiple VM hosts that share the underlying storage. In that case you could replace the filer with smaller storage systems direct attached to each VM host, if that's feasible with the number of hosts you have.

    If you are using a centralised filer, do you think the I/O speed of the filer itself would handle all your VMs _if_ network speed were not a limiting factor? If not, then once network speed is no longer the limit, the filer itself becomes the bottleneck.

    1. Trevor_Pott Gold badge

      zef

      I inherited the SAN on this particular network. *sigh* Unfortunately, due to a squillion little reasons that add up to "changing this design would require completely tearing up and replacing the entire datacenter," I cannot go to DAS on this network for the foreseeable future. So additional SAN filers are simply added, and added, and added....

      Most other networks I run have DAS storage on VM servers for exactly this reason.

    2. Trevor 3
      Meh

      If you have centralised storage you can do things like DRS and vMotion. If you direct-attach storage to a host then only that host can see it. This makes failing over live VMs to another host (either for failure or load balancing) incredibly hard.

      You would have to shut the machine down, storage- and host-vMotion the machine (cold-copy it) to another host, then fire it up again. For my business that downtime is far, far too long.

      It's not normally the filer (I'm assuming you mean disk access here) that's the problem; rather, it's the filer's controllers themselves that are the issue. They are either not fast enough, have too few ports, or have too little speed per port. It's OK having 5 hosts with 2 fibre connections each, but if there are only 2 fibre ports on your controller then that's your max speed right there.

      For my test environment, which tends to be absolutely hammered disk-wise, I use Nexenta on a hardware server stuffed full of 4x1Gb NICs, aggregated to present the DAS over iSCSI. Yes, I get the TCP/IP overhead, but it does mean that the pipe coming from the Nexenta box is larger than the pipes coming from the hosts (networking set to round robin too, not the default); the rough pipe-sizing sums are sketched below. This seems to work pretty well and is hosting 80+ VMs over 3 hosts (lack of RAM), as well as doing machine clones, copies, server builds, P2V, VDI and everything else you'd expect from a test system.

      Hope this helps...
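
      As promised, a minimal Python sketch of the pipe-sizing logic behind that setup. The ~10% protocol overhead and the two storage NICs per host are illustrative assumptions, not measured figures.

      GBIT = 1_000_000_000  # bits per second

      storage_nics = 4          # 4 x 1GbE aggregated on the Nexenta box
      host_nics = 2             # assumed: 2 x 1GbE per host for storage traffic
      hosts = 3
      protocol_overhead = 0.10  # assumed TCP/IP + iSCSI framing overhead

      # Usable bandwidth once the assumed protocol overhead is knocked off
      storage_pipe = storage_nics * GBIT * (1 - protocol_overhead)
      host_pipe = host_nics * GBIT * (1 - protocol_overhead)

      print(f"Storage-side pipe:  ~{storage_pipe / GBIT:.1f} Gbit/s usable")
      print(f"Per-host pipe:      ~{host_pipe / GBIT:.1f} Gbit/s usable")
      print(f"All {hosts} hosts flat out: ~{hosts * host_pipe / GBIT:.1f} Gbit/s offered load")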

      1. Anonymous Coward
        Anonymous Coward

        Agreed regarding vMotion, but still, 4Gb coming out of the storage server is nothing. Not to mention that with a single storage server, the point of failure is your storage server. You could do HA on both the storage and the VM hosts for your failover.

        The point I'm making is that you can't complain about saturating an aggregated 4Gb or 10Gb Ethernet SAN when there are other options available if you want to remove that bottleneck. Sure, there may be things that are more important to you; that's not the point, though.

        I believe vMotion can migrate a live VM to another host with different underlying storage; correct me if I'm wrong.

  9. Ilsa Loving
    Mushroom

    Am I the only one?

    Whenever I see the acronym VDI, the first thing I think of is Venereal Disease Infection, rather than what it means now. Darn conflicting acronyms!

  10. Lusty

    bandwidth, really?

    What are you people babbling about now?! Bandwidth is almost never the bottleneck on storage and if you knew how to monitor properly you might know that.

    A Fibre Channel disk is capable of around 200 IOPS, less for SAS, even less still for SATA. Admittedly much, much more for SSD, but are you running an SSD SAN yet? Most people aren't!

    Server 2008, SQL and Exchange use 8K blocks, so each IO is around 8KB.

    Using the 4Gb fibre mentioned in another post as being "saturated" as an example, we have approximately 400MB/s of bandwidth. Divide this by 8KB blocks and, ignoring the FC overhead, we get 51,200 IO operations possible every second over your fibre.

    Divide by 200 (IOPS per disk) and you get around 256 disks required to saturate your link (ignoring the cache, which must have filled for you to saturate your bandwidth). That's assuming you just stripe and don't use any RAID in your enterprise configuration. RAID 10 roughly halves the IO, so 512 disks are required there. With the penalty of RAID 5 you would need 1024 drives.

    Let's assume that you have shelves with 15 drives in them, a common configuration on SANs. You would need 34 shelves of disk just to saturate a single fibre connection on a SAN with RAID 10. And that's assuming you aligned your disks properly; if not, that's a further 2x penalty. (The sums are sketched in code below.)

    Do you all have that many disks in your SAN environment? I'm ignoring, of course, that you don't have a single fibre connection; you have at least two :)
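
    The same sums as a quick Python sketch, so you can plug in your own block sizes, per-disk IOPS or RAID levels. The figures are the same round assumptions as above, nothing more precise.

    # Back-of-the-envelope: spindles needed to saturate a 4Gb FC link
    # with small-block IO, using the round numbers from the post above.

    link_bytes_per_s = 400 * 1024**2  # ~400MB/s on 4Gb FC, ignoring FC overhead
    io_size = 8 * 1024                # 8KB blocks (Server 2008 / SQL / Exchange)
    disk_iops = 200                   # rough figure for one FC spindle

    # Rough back-end IO multiplier per front-end IO for each layout
    raid_penalty = {"stripe, no RAID": 1, "RAID 10": 2, "RAID 5": 4}

    link_iops = link_bytes_per_s // io_size        # ~51,200 IOPS
    print(f"4Gb link at 8KB blocks: ~{link_iops:,} IOPS")

    for level, penalty in raid_penalty.items():
        disks = link_iops * penalty // disk_iops
        print(f"  {level}: ~{disks:,} disks (~{disks / 15:.0f} shelves of 15)")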

    1. Trevor_Pott Gold badge

      Spindles

      256 spindles is not all that abnormal.

      Also: Flash.
