Docker kicks KVM's butt in IBM tests

IBM Research has done a side-by-side comparison of the KVM hypervisor and containerisation enfant terrible Docker and found the latter “equals or exceeds KVM performance in every case we tested.” Big Blue tested the two using the linear-equation solving package Linpack, the STREAM benchmark of memory bandwidth, network …

  1. MacroRodent

    Not surprising

    An app inside a container is essentially native: same CPU, same kernel for all containers. There is just more isolation between the processes than in the case without containers. So if most benchmarks did not run at native speed, there would be something seriously wrong with the container implementation.

    I find the container approach way more sane than virtualization, unless you really need to run different operating systems on the same machine.

    1. Tom 38

      Re: Not surprising

      People are flocking to "containers" as though they are some magical new feature that has only recently become available, but they are no different from BSD jails or Solaris Zones - which themselves are not much more than a chroot.

      With Docker, although you get native performance, you still miss things like memory overcommit and IO management that you get with a VM, so you can get less performance out of a single box.

      Docker allows you to split up and isolate applications, but if you couldn't run all those applications on a single host without Docker, then you still cannot with Docker. With a VM you have more control over how IO resources are allocated so that all applications can be run with their desired performance profile.

      1. Anonymous Coward
        Anonymous Coward

        Re: Not surprising

        Memory overcommit?!?!

        Oh please, because we really need our kernel to go around on a pseudo-random killing spree just because some application running in some container decided it needed that much memory and the OS simply couldn't say no.

      2. pyite

        Re: Not surprising

        Try OpenVZ, it has a ton of tweaking features like this. I still can't figure out why Docker and LXC re-invented the wheel (poorly)

        1. Tom Samplonius

          Re: Not surprising

          "Try OpenVZ, it has a ton of tweaking features like this. I still can't figure out why Docker and LXC re-invented the wheel (poorly)"

          Because the OpenVZ patches have never been accepted into the mainstream kernels. Now that cgroups and namespaces are in the mainstream kernel, OpenVZ is dead. The OpenVZ devs have never been able to keep up with moving their collection of patches to the newest kernel releases. OpenVZ is still stuck on kernel 2.6.

          And the OpenVZ devs know this, and are now adding cgroups/lxc support into vzctl, so you can provision LXC-like containers via their tools. Eventually OpenVZ will just be a wrapper around cgroups and namespaces.
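
          Under the hood it is all the same plumbing whichever of vzctl, LXC or Docker does the provisioning. A minimal sketch of that plumbing in Python (my own illustration, not from any of these tools; it assumes a cgroup-v1 memory hierarchy mounted at /sys/fs/cgroup/memory, root privileges, and a made-up group name "demo"):

          import os

          CG = "/sys/fs/cgroup/memory/demo"        # hypothetical cgroup name

          os.makedirs(CG, exist_ok=True)           # creating the directory creates the cgroup
          with open(os.path.join(CG, "memory.limit_in_bytes"), "w") as f:
              f.write(str(256 * 1024 * 1024))      # cap the group at 256 MB
          with open(os.path.join(CG, "tasks"), "w") as f:
              f.write(str(os.getpid()))            # move this process into the group
          print("this process is now limited to 256 MB of RAM")

          Namespaces, the isolation half, are driven through the clone()/unshare() system calls rather than a filesystem interface, which is why every tool ends up being a wrapper of roughly this shape.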

  2. Nate Amsden

    Isn't this less about docker

    And more about LXC? Or does Docker not use LXC anymore? My impression was that Docker was simply a way to package things and had little to do with the container itself.

    I deployed a few LXC containers earlier this year. They serve their purpose fine. I looked into Docker at the time and found no reason to use it. Just used LXC as built into Ubuntu 12.04. We built the containers months ago and haven't had to touch them since (from an OS/container standpoint, at least). If you are frequently destroying and recreating containers, perhaps Docker is good. Life cycles for my systems typically measure in years.

    So I adapted our existing provisioning system, which I have been using for 7 years and which works on physical as well as virtual hardware, and added simple LXC hooks to it. Installation and system configuration is therefore very similar to that of the other systems we have.

    Performance is good, but containers are quite limited in functionality, which will limit my usage of them to specific use cases.

    1. David Dawson

      Re: Isn't this less about docker

      It doesn't use LXC anymore; as of version 1 it uses its own library, called libcontainer, instead.

      Both build on the kernel's primitive containerisation machinery, such as cgroups, which Google originally contributed.

  3. Anonymous Coward
    Anonymous Coward

    Ah, IBM..

    "Conventional wisdom (to the extent such a thing exists in the young cloud ecosystem)"

    If there's one thing I've always enjoyed about IBM, it's that the people I have dealt with are invariably rather seriously clued up.

    1. roselan

      Re: Ah, IBM..

      "Lawyers made us include this"

      lovely :)

  4. K
    Alert

    "you might think that's goodnight for virtualisation"

    In your wet dream!! ...

    Containers and Virtualisation might be the same species, but they are different breeds.. You're comparing a race horse to a shire horse. They exist for different purposes; the reason I run VMware is because it turns the tin into a Mongrel, i.e. so it can have a bit of everything.. The reason I run containers, well actually I'm still trying to figure that out!

    The chances of Containers replacing Virtualisation are ZERO, ZILCH, NIL, NULL, SWEET FA.... It's elementary, my dear el-reg hack.

  5. Warm Braw

    Hardly a surprise...

    Having the overhead of one kernel for n instances is pretty much bound to be better than n+1 kernels (and that's before you consider the fight for memory, etc. - and licensing). LPARs, zones, etc. have been around long enough to prove it.

    Linux and Windows VMs are popular because there hasn't been a ubiquitous shrinkwrapped option for containerisation that you can move between providers (or, in the case of Windows, much option at all). VMs aren't going away - there will always be cases in which it's convenient to have virtualisation (migration, testing) - but they were only ever the default for resource sharing because there was no practical alternative.

  6. Anonymous Coward
    Anonymous Coward

    Old is new

    For years many OSs have shown that OS level virtualisation is tonnes faster than pseudo type 1 / type 2 virtualisation. Why they needed to prove that to themselves I'm not sure.

    Brendan Gregg (DTrace) did a great comparison - http://dtrace.org/blogs/brendan/2013/01/11/virtualization-performance-zones-kvm-xen/

  7. pyite

    How does this compare to Virtuozzo?

    Containers have been the fastest VM technology for a decade thanks to Virtuozzo and OpenVZ. How do these two newer technologies compare?

    1. Jamie Jones Silver badge
      Devil

      Re: How does this compare to Virtuozzo?

      " Containers have been the fastest VM technology for a decade thanks to Virtuozzo and OpenVZ."

      I think you mean '.... for over 15 years thanks to FreeBSD jails'

  8. Anonymous Coward
    Anonymous Coward

    The real question is, how does KVM do so poorly?

    Linpack is totally CPU bound, so there should be essentially zero virtualization overhead. If it were testing something I/O intensive or otherwise making a lot of system calls, then this would be an expected result.

    Is it possible the Linpack test created a lot of FP exceptions like denorms? That's the only reason I can think of why it wouldn't perform the same as native.

  9. DrBandwidth

    Easy explanation

    The 2x performance difference in LINPACK is very easy to explain -- KVM does not report that it supports AVX by default, so the LINPACK code runs using 128-bit SSE instructions instead of the 256-bit AVX instructions that are used in the native and containerized versions. We saw the same thing when we tested KVM at TACC and (if I recall correctly) it was very easy to fix.
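
    An easy way to check for this (a rough sketch of my own, assuming a Linux guest; not part of IBM's test setup) is to compare the CPU feature flags the guest advertises with the host's:

    # Print whether key SIMD features are visible to the OS; run once on the
    # host and once inside the guest or container, then compare the output.
    def cpu_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for feature in ("sse4_2", "avx", "avx2", "fma"):
        print(feature, "yes" if feature in flags else "no")

    If "avx" is missing inside the VM, exposing the host CPU model to the guest (for example libvirt's host-passthrough mode) is the usual fix.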

    STREAM is actually more difficult, but the two tests that IBM reported were constrained in ways that prevented the trouble from being visible. The single socket STREAM data (IBM's Figure 2) is reasonable for compilation with gcc. With streaming/nontemporal stores the results would be higher -- in the 36 GB/s to 38 GB/s range for native, container, or KVM. The two socket STREAM data (IBM's Figure 3) is only consistent across the three execution environments because they forced the memory allocation to be interleaved across the two sockets. Normally a native run of STREAM would use local memory allocation and get 75 GB/s to 78 GB/s (with streaming stores) or ~52 GB/s to 57 GB/s (without streaming stores). In this case the KVM version typically has serious trouble, since the virtual machine does not inherit the needed NUMA information from the host OS, and often loses close to a factor of two in performance. I don't know whether the containerized solution does better in providing visibility to the NUMA hardware characteristics.
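
    For anyone who wants to poke at the memory-bandwidth side themselves, a rough stand-in for the STREAM Copy kernel (a sketch using numpy, not the official benchmark; the byte count follows STREAM's convention of one read plus one write per 8-byte element) is enough to show the differences between environments, especially if you compare runs under numactl --localalloc with runs under numactl --interleave=all:

    import time
    import numpy as np

    N = 50_000_000                   # ~400 MB per array, large enough to defeat the caches
    src = np.random.rand(N)
    dst = np.empty_like(src)

    best = float("inf")
    for _ in range(5):
        t0 = time.perf_counter()
        np.copyto(dst, src)          # STREAM "Copy": dst[i] = src[i]
        best = min(best, time.perf_counter() - t0)

    bytes_moved = 2 * N * 8          # STREAM convention: one read + one write per element
    print("copy bandwidth ~ %.1f GB/s" % (bytes_moved / best / 1e9))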

    1. Anonymous Coward
      Anonymous Coward

      Re: Easy explanation

      Thank you for that great explanation; I hadn't considered the angle of what type of CPU features the VM would report support for.

      This is the kind of stuff that keeps me coming back to the Reg, and reading the comments!

  10. Victor 2
    Facepalm

    Well..

    Sun had been saying this since 2004 with Solaris Containers/Zones... Only now do the Linux zealots want to hear it, because they've discovered cold water...

    1. Anonymous Coward
      Anonymous Coward

      Re: Well..

      True. Since 2004, and they charged heavily for it. This is interesting because it's open and free (almost) and just as easy to deploy. You would understand if you were running a business that requires tons of these and wants to keep its licensing/support costs under control.

  11. Raul 1

    LXC will be faster than Docker

    Docker will have a performance overhead compared to normal LXC containers due to Docker's use of layers of read-only filesystems. Using aufs or layers of filesystems via device mapper, as Docker does, will inevitably incur a performance hit and increase complexity.
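
    A quick way to see that copy-on-write cost for yourself (a rough sketch, not a proper benchmark) is to time the same streaming write on the host filesystem and then inside a container writing to its layered root filesystem rather than to a bind-mounted volume:

    import os
    import time

    def write_throughput(path, size_mb=256, block=1 << 20):
        """Write size_mb megabytes in 1 MB blocks, fsync, and return MB/s."""
        buf = os.urandom(block)
        t0 = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(size_mb):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - t0
        os.remove(path)
        return size_mb / elapsed

    print("write throughput ~ %.0f MB/s" % write_throughput("testfile.bin"))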

    IBM should have tested normal LXC containers to get an accurate result of container vs VM performance.

    There is a lot of confusion and misinformation in the media about LXC and Docker, and a real danger of conflating Linux containers (LXC) with Docker, which is a single use case of containers (building stateless applications as services) and which, contrary to popular perception, makes it more complex to use than LXC.

    Docker containers can only run a single process or application. Docker containers, unlike normal LXC containers, have no init service to manage a normal modern multiprocess OS environment, so you have to run PHP, nginx and MySQL, for instance, in three containers. You can't run things like SSH, cron or a management agent in Docker containers, as that would be a second process.

    While stateless containers are a legitimate use case, they are still just one use case (and one perhaps more suited to PaaS-type vendors pursuing statelessness than to average users), and the media and blogosphere do a great disservice to the LXC project on which Docker was/is based, to readers, and to informed discussion by not articulating these critical differences and simply pushing Docker as Linux containers.
