Google, Red Hat, Ubuntu, Parallels bet on virtualization's surprising successor

Docker has spun off a key open source component of its Linux containerization tech, making it possible for Google, Red Hat, Ubuntu, and Parallels to collaborate on its development and position Linux containerization as the successor to traditional hypervisor-based virtualization. The company made the announcement on Tuesday at Dockercon in …

COMMENTS

This topic is closed for new posts.
  1. Vector

    Back to the Future

    This isn't my area of expertise, but this looks more like a modernization of Big Iron implementations from before the rise of the Wintel server than a "burgeoning technology."

    I've always blamed Microsoft for this whole virtualization thing. As I remember, it went something like this:

    Here, use our new Windows Servers. They can do everything those big machines do for a fraction of the cost!

    Whoops! One bad application can bluescreen a server, taking all applications on it down. We better only put one application on a server to protect ourselves.

    Wow, our servers are severely underutilized!

    Hey, here's an idea: How about we create "virtual machines," so each application thinks it's alone, but it's actually sharing resources. We'll need a new OS to handle all that.

    So, we end up with a much more complex datacenter environment than we would have had if Microsoft had paid attention to the big boys (at the time) in the first place.

    Oh, and Microsoft gets to sell more server licenses.

    1. Anonymous Coward

      Re: Back to the Future

      "This isn't my area of expertise" ... blah blah

    2. Robert Grant

      Re: Back to the Future

      I'm sure Microsoft's PR department is happy that you're giving them credit for virtual appliances (note: virtual machines run multiple applications that can bluescreen, or they would if they were really in evidence in 2001), but I'm pretty sure this is a Linux-only thing for now.

    3. BlueGreen

      Re: Back to the Future

      "One bad application can bluescreen a server, taking all applications on it down"

      No, this isn't your area of expertise.

      1. Vector

        @BlueGreen re:"One bad application..."

        Oh, I am quite sure of that part. That was very much the experience of most IT departments in the days of NT 3.51 and 4.0. It is why single-application servers are the norm these days, which was not Microsoft's original vision for their server OS.

    4. Jim 59

      Re: Back to the Future

      Good argument - that Windows instability propelled virtualisation. Hypervisors are quite an old technology; they go back to 70s mainframes, IIRC.

      Virtualisation has indeed escalated complexity in the datacentre. You still need all of the old skills, because they have been virtualised instead of being superseded. But you need all the virtualisation skills on top of that, across every area - networking, storage, compute.

      1. Steve the Cynic

        Re: Back to the Future

        "go back to 70s mainframes IIRC"

        I'd have to say you don't RC... Unless 1967 is part of the 70s... ;)

        1. Vector

          Re: Back to the Future

          One clarification on my OP. I never meant to credit Microsoft with the rise of hypervisors in the modern datacenter. All of the statements in my little timeline after the opening MS fanfare were meant to come from the mouths of IT managers. I think it's Microsoft's fault we needed them but, as mentioned elsewhere, they were pretty late to the party with an offering.

  2. Goat Jam

    Don't Really Understand The Concept of Docker.

    Is it like the venerable Solaris Containers or BSD jails?

    An article that explains how it is different to virtualisation and/or jails would be nice.

    Oh bugger it, I know how to use Google.

    It appears that it is like an old-fashioned container with a bunch of automation tools included. Or something. Doesn't sound too revolutionary, but it is interesting nonetheless. Might have to fire up a server and try it out.
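
    For anyone else about to fire up a server and try it, a minimal first-run sketch. This assumes Docker is installed, uses the stock `ubuntu` image purely as an example, and is guarded so it does nothing on a machine without Docker:

    ```shell
    # First-run Docker sketch (illustrative; needs the docker daemon running).
    if command -v docker >/dev/null 2>&1; then
        docker pull ubuntu                     # fetch a base image from the registry
        docker run --rm ubuntu echo hello      # run one command in a throwaway container
        docker images                          # list locally cached images
    fi
    ```

    The `--rm` flag discards the container when the command exits, which is handy for exactly this kind of kicking-the-tyres session.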

  3. jnffarrell1

    Making containers work 99.999999% of the time is the trick

    More servers, more places they can be, always moving, surrounded by security - these could be a logistical nightmare if Google had not thought them through while operating its one-trick pony.

  4. Neoc

    So, if I understand this correctly:

    If you want multiple copies of the same hardware/OS tuple, use Containers.

    Otherwise, you need to use Hypervisors.

  5. pierce

    The big advantage of containers is much less overhead per VM. With a hypervisor, each VM runs on its own kernel, all of which talk to virtualized I/O devices. With a container (Solaris zone, etc.), each container is a virtual userspace under the same OS kernel.
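
    pierce's point is easy to check for yourself: a process inside a container reports the host's kernel version, whereas a hypervisor guest reports whatever kernel it booted. A sketch, assuming Docker and the `ubuntu` image (guarded so it degrades to just the host half where Docker is absent):

    ```shell
    # Containers share the host kernel; hypervisor guests each boot their own.
    uname -r                                   # host kernel version
    if command -v docker >/dev/null 2>&1; then
        docker run --rm ubuntu uname -r        # same version, printed from inside a container
    fi
    ```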

  6. zootle

    Ten years behind Sun...

    It doesn't usually take the Linux crowd that long to assimilate ideas from BSD or Solaris!

  7. Anonymous Coward

    Sounds very much like FreeBSD jails, to be honest. Something which OpenVZ could do on Linux 10 years ago, too. Then came the paravirtualisation and HVM Xen hype (okay, hype is a strong word, as Xen is still very successful). And now we're back to containers. I hope this time around containers will be easier to manage at a bigger scale. Maybe Docker can do that. I shall give it a spin.

    To those who credited virtualisation or containers to MS above: wrong. They were very late to the game, and their offering didn't even work properly for a very long time. They were under pressure to come up with an answer to Linux-based virtualisation which was able to run Windows Server in instances.

    EDIT: Solaris was first with containers, IMHO by a very large margin.

  8. Harry Kiri
    WTF?

    "This drastically reduces the number of moving parts, and insulates Docker from the side-effects introduced across versions and distributions of LXC. In fact, libcontainer delivered such a boost to stability that we decided to make it the default."

    So Docker is the amazing thing, but libcontainer is the amazing thing; Docker does good stuff because of libcontainer. And libcontainer is included to reduce something else and make it better. Hang on. Docker wasn't stable and had side-effects, so we wrote some code to sort this out, which was Docker, but is now spun out of Docker, but is still in Docker. But is called libcontainer.

    And sticking it out there will make everything work together. It was there because of the changes the lxc team kept making, so we'll let them change libcontainer. And that will work.

    I wish I could write press releases.

  9. Jim 59

    Oh, it is just Solaris containers for Linux

    Stack Overflow here:

    http://stackoverflow.com/questions/16047306/how-is-docker-io-different-from-a-normal-virtual-machine

    For Sun people:

    LXC is the zone/chroot bit. AUFS is the layered file system for inheriting read-only parts of an image. Docker is the above plus a bit of cloning/snapshotting tech (often performed by ZFS in Solaris).

    Cool. Guess the big surprise is that nobody implemented this till now.
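
    The layering Jim describes can be poked at directly: each image is a stack of read-only layers, with one writable layer per running container on top. A sketch, assuming Docker is installed and using the `ubuntu` image as an illustrative name (guarded so it is a no-op without Docker):

    ```shell
    # Inspect the read-only layer stack behind an image.
    if command -v docker >/dev/null 2>&1; then
        docker history ubuntu        # the stacked read-only layers of the image
        docker ps -q | head -n 1     # an example running container ID, if any exist
    fi
    ```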

  10. Alistair
    Coat

    google's new excuse for sharing data?

    ... "We do believe in open containers," said Eric Brewer in a speech at the conference ...

    Put a lid on it you silly man.......

    <The one with the tupperware in the pocket>
