The Linux cloud swap that spells trouble for Microsoft and VMware

Just occasionally, you get it right. Six years ago, I called containers "every sysadmin's dream," and look at them now. Even the Linux Foundation's annual bash has been renamed from "LinuxCon + CloudOpen + Embedded Linux Conference" to "LinuxCon + ContainerCon". Why? Because since virtualization has been enterprise IT's …

  1. thondwe

    Optimized Hypervisors

    So basically it's a hypervisor optimized for a limited set of workloads? There's a scale of these, from containers (one shared OS) through shared-kernel approaches to general-purpose VMs. And these solutions will work well for specific scenarios.

    Microsoft and VMware are both managing containers and VMs already, so if there's a gain for an intermediate solution, they'll manage those too - so can't see this idea being the death of either company! Azure specifically will just have a tick box for them...

    1. Anonymous Coward
      Anonymous Coward

      Re: Optimized Hypervisors

      "But there's only one kernel, so you can only run Linux containers on Linux."

      Just like Windows containers run on Windows. And both will run on Hyper-V with a lot greater ease of use and better performance than a Linux based hypervisor. Docker functionality for instance is fully integrated. I would have said a single platform that does both better spells trouble for Linux in the space rather than for Microsoft.

      1. Anonymous Coward
        Anonymous Coward

        Re: Optimized Hypervisors

        > spells trouble for Linux in the space rather than for Microsoft.

        Then the beancounter sees the price tag.

        Small installations? Sure, possibly, ok. People want to run Exchange, MS-SQL, some software that "runs only on Windows".

        Large installations with bespoke software? There won't be Windows.

        1. JasonT
          Alert

          Re: Optimized Hypervisors

          "Large installations with bespoke software? There won't be Windows." - but they could be Azure, if a company feels compelled to play in the public cloud. It may be a sign of the apocalypse, and Ackbar would likely think it a trap, but Microsoft seems to be deepening its resolve to be a Linux player. Signs include Intel Clear Linux in Azure, Microsoft's investment in containers, including their Deis tool... If nothing else, Microsoft is hedging its bets

          1. returnofthemus

            Microsoft seems to be deepening its resolve to be a Linux player.

            PMSL!

            The REALITY is they had no choice but to embrace it, because Linux is the Cloud and the Cloud is Linux.

        2. Anonymous Coward
          Anonymous Coward

          Re: Optimized Hypervisors

          "Then the beancounter sees the price tag."

          Hyper-V Server is completely free. Fully functional, with no install or feature limits. But with a lower TCO and far greater ease of use than a Linux stack...

          1. returnofthemus

            Hyper-V Server is completely free.

            ....And KVM is part of the Linux kernel

            So I guess, there goes your TCO argument

            1. Anonymous Coward
              Anonymous Coward

              Re: Hyper-V Server is completely free.

              "....And KVM is part of the Linux kernel"

              So not a proper monolithic hypervisor then, like Hyper-V / VMware

              "So I guess, there goes your TCO argument"

              Nope - Hyper-V is way easier to use and management tools (free or paid) tend to be far more user friendly. And according to cloud stack benchmarks, it scales better than KVM too...

  2. Destroy All Monsters Silver badge
    Paris Hilton

    Not bad, but why is there a Spanish lady apparently dancing not far from a highway leading an article on containers?

    1. WylieCoyoteUK
      Childcatcher

      Cyndi Lauper

      Showing my age

      1. Destroy All Monsters Silver badge

        Re: Cyndi Lauper

        > Cyndi Lauper

        Seriously who?

    2. wayne 8

      Has to do with the sub headline "Containers just wanna be hypervisors" a play on "Girls just wanna have fun" sung by Cyndi Lauper back when MTV played music videos.

      1. vaporland
        Facepalm

        wow!

        you're explaining an el-reg 80s tagline reference? cool!

  3. Sil

    Why is it bad news for Microsoft?

    Azure runs lots of Linux, and this doesn't impact its Linux-servers-in-the-cloud business.

    For corporates who prefer Windows Server 2016, Microsoft offers two types of containerization with two types of isolation tradeoffs.

    And everything is Docker friendly.

    Last but not least, cloud providers such as Microsoft Azure (& AWS: Lambda) have been hyping serverless apps for quite some time. If successful, this would limit the need for both VMs and containers.

    Indeed, at least from a conceptual point of view, serverless apps do seem the logical next step in cloud computing:

    Why bother managing physical servers -> Why bother managing virtual servers -> Why bother managing containers

    1. returnofthemus

      Why is it bad news for Microsoft?

      Because fewer organisations will be compelled to upgrade to Windows Server 2016, especially given the increased per-core licensing costs versus the native functionality you get with Linux, much of it built into the Linux kernel, which is virtually FREE!

      'Serverless' isn't actually serverless; it's just a reference to a model where the existence of servers is hidden from developers, itself emanating from microservices architecture, the foundation of which is "Containers".

      Amazon and Microsoft are not the only ones hyping so-called Serverless, there are also a number of companies replicating this model as on-prem solutions, again based on Container technology.

      The reason you have to 'manage physical servers' is because 'software runs on hardware', so whether that's a public Cloud Service Provider doing it on your behalf behind the scenes or you doing it yourself on-premise (currently where the vast majority sits), that's the way it will always be.

      1. Anonymous Coward
        Anonymous Coward

        Re: Why is it bad news for Microsoft?

        "native functionality you get with Linux much of it built into the Linux kernel, which is virtually FREE!"

        If your time has no value... Hyper-V is also totally free by the way if you are talking about hypervisors.

        Linux also costs a fair bit more than Windows if you want a supported enterprise version like Red Hat or SUSE.

  4. Anonymous Coward
    Anonymous Coward

    This is getting insane

    It's already known that Intel runs an entire JVM on-chip that sits in ring -1, so why don't they deprecate Linux and write everything in C#?

    1. Anonymous Coward
      Anonymous Coward

      Re: This is getting insane

      HaLVM: The Haskell Lightweight Virtual Machine (HaLVM) is a port of the Glasgow Haskell Compiler toolsuite that enables developers to write high-level, lightweight virtual machines that can run directly on the Xen hypervisor.

      InstaFun!

  5. boltar Silver badge

    The real reason for VMs on x86?

    Windows' inability to play nice with multiple large applications running on the same OS installation. DLL issues, memory issues, you name it. These aren't problems *nix generally suffers from, but for some reason some people thought VMs would be a good idea for that as well. Containers OTOH are simply improved chroot jails (also something Windows never had), but TBH, unless you need some weird system library install for an application, or you simply don't trust the application enough to allow it full system access, a normal system setup should be fine. We have multiple Oracle and Mongo instances all running happily on one machine (+ failover backup machines, before anyone asks) with no VMs or containers in sight. Which is the way the designers of Unix intended it to be.
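The "improved chroot jails" point can be sketched in a few lines of shell. This is only an illustration, not production hardening: the /tmp/minijail path is invented, and the actual chroot call needs root, so it is left commented out.

```shell
# The ancestor of every container: a private root directory.
root=/tmp/minijail
mkdir -p "$root/bin"

# Copy a shell into the jail. A real jail also needs the shared
# libraries the binary links against (list them with: ldd /bin/sh).
cp /bin/sh "$root/bin/"

# Confine a process so it sees $root as / (requires root):
# chroot "$root" /bin/sh
```

Linux containers add namespaces and cgroups on top of this same idea, but the filesystem isolation is recognisably the old chroot trick.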

    1. John Brown (no body) Silver badge

      Re: The real reason for VMs on x86?

      "We have multiple Oracle and Mongo instances all running happily on one machine (+ failover backup machines, before anyone asks) with no VMs or containers in sight. Which is the way the designers of Unix intended it to be."

      ...and then there's FreeBSD jails if you really must have separation.

    2. Anonymous Coward
      Anonymous Coward

      Re: The real reason for VMs on x86?

      "Windows' inability to play nice with multiple large applications running on the same OS installation. DLL issues, memory issues, you name it."

      Are less of a problem than, say, library dependency version conflicts are on Linux. And are fixed by using containers...

      1. boltar Silver badge

        Re: The real reason for VMs on x86?

        "Are less of a problem than say library dependency version conflict issues are on Linux. And are fixed by using containers..."

        I suggest you look up LD_LIBRARY_PATH and learn what it does.
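For anyone who hasn't met it: LD_LIBRARY_PATH prepends per-process directories to the dynamic linker's search path, which is how two apps can load different versions of the same .so on one system without a container. A minimal sketch, with invented /tmp/app1 and /tmp/app2 paths standing in for two applications that each ship their own library copies:

```shell
# Each hypothetical app ships its own copy of a shared library
mkdir -p /tmp/app1/lib /tmp/app2/lib

# The dynamic linker searches LD_LIBRARY_PATH before the system
# directories, so each process resolves its own library versions
env LD_LIBRARY_PATH=/tmp/app1/lib sh -c 'echo "app1 search path: $LD_LIBRARY_PATH"'
env LD_LIBRARY_PATH=/tmp/app2/lib sh -c 'echo "app2 search path: $LD_LIBRARY_PATH"'
```

It doesn't give you the packaging or security isolation of a container, but for plain version conflicts it has worked since long before Docker.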

    3. Gerhard Mack

      Re: The real reason for VMs on x86?

      "We have multiple Oracle and Mongo instances all running happily on one machine"

      We do the opposite of that, not for technical reasons, but because it saved us money by being the most effective way to deal with per-core licensing.

    4. Anonymous Coward
      Anonymous Coward

      Re: The real reason for VMs on x86?

      So if Linux runs multiple applications so seamlessly, then why are half the features that devs love about containers (e.g. no shared dependencies/libraries between disparate processes) specifically there to make it easier to run multiple applications? What about security? Even with namespaces (which are newer Linux features implemented primarily to support containerization), security is still challenged and needs ongoing development... are you saying Linux was perfect at AAA before namespaces? Are you saying that managing namespaces in plain ol' Linux is easier if you try to manage them with raw system calls? What's the point of rootless containers?

      So maybe you could run a LAMP stack great; that's a couple of different apps, but ones designed to run together. What if that stack doesn't match your hardware efficiently? Can you, with plain old Linux, easily install other apps to maximize hardware efficiency? If you did install another app to max efficiency, could it handle application spikes or auto-scale vertically easily without containers? Wouldn't these be requirements before saying Linux alone is just super-duper at running multiple disparate applications on a common OS? Your argument does make sense from a limited perspective, but in reality plain old Linux wasn't really better at solving this problem alone than Windows. And because it wasn't, it was never optimized for scheduling disparate processes on a machine heavily loaded with a bunch of disparate apps contending for resources, which is why there are plenty of studies showing hypervisors beating bare metal on performance in many practical scenarios... not because of what's possible, but because of what is real.

      I also think the idea that an atomic host is simply just better than a hypervisor is ludicrous. Linux has been around for a long time, and now someone thinks of slimming it down more and all of a sudden it's a holy-grail idea. Meanwhile, logic tells me that if a hypervisor was more efficient running a fat OS, even with whatever bloat it needed, a hypervisor may be better still at running a thin OS, and accordingly would likely need less bloat to do so. The atomic-host idea is just saying we can slim down the OS, and so many people seem to think that makes the atomic host great, but the same logic can apply equally to hypervisors, which if nothing else shows this type of argument to be specious at best. If Linux deserves the opportunity to be slimmed down and optimized for containers, so do hypervisors.

  6. John Sanders
    Holmes

    Also...

    Dunno about LXD as I haven't tried it, but not every workflow works well with normal Docker-style containers.

    PS: No one doing anything remotely interesting cares about Windows these days. I'm sorry, but Windows is both a liability and boring as hell.

  7. John Sanders
    Trollface

    Oh yeah, how could I forget!

    Systemd makes container integration easier.

    https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/using_systemd_with_containers
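The pattern that page describes boils down to a small unit file: systemd supervises the container like any other service. The sketch below is a hedged illustration (the service name myapp and image name myimage are invented), written to /tmp here so it can be inspected without touching /etc/systemd/system:

```shell
# Sketch of a systemd unit that supervises a Docker container.
# On a real host this would go in /etc/systemd/system/myapp.service,
# followed by: systemctl daemon-reload && systemctl start myapp
cat > /tmp/myapp.service <<'EOF'
[Unit]
Description=myapp in a container
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp myimage
ExecStop=/usr/bin/docker stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

grep -c '^Exec' /tmp/myapp.service   # prints: 3
```

The leading `-` on ExecStartPre tells systemd to ignore a failure there (no stale container to remove), which is the usual idiom for this setup.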

    Come at me with your Debian, little buggers. LOL

  8. 1Rafayal

    Kind of ignores the fact that VMware is actually planning on releasing (if they haven't already) on AWS.

    So that means you can instantiate compute instances that run across the cloud, the DC and the server room in one go.

  9. IGnatius T Foobar

    Linux on Windows

    I am convinced that Microsoft's medium-term objective with running Linux on Windows (the whole "ubuntu bash on windows 10" thing) isn't to make life easy for web developers, but to run Linux-based Docker containers natively on Windows Server, without the need for a "helper VM" like they do now.

  10. This post has been deleted by its author

  11. Randy Hudson

    Only run Linux containers on Linux?

    I'm confused. Straight from their FAQ:

    The Docker Engine client runs natively on Linux, macOS, and Windows. By default, these clients connect to a local Docker daemon running in a virtual environment managed by Docker, which provides the required features to run Linux-based containers within OS X or Windows

    1. Liam Proven

      Re: Only run Linux containers on Linux?

      Docker on Mac OS X and Windows runs a VM containing a minimal copy of Alpine Linux -- then your containers are started under that.

      There are also Windows Server Containers, which contain Windows sessions, but can be managed with Docker commands and scripts -- but they can only contain Windows binaries. They're not compatible with Docker Linux containers.

  12. W. Anderson

    unacceptable omission

    While Linux is more "popular" in business and government use than the BSD UNIX-like and Solaris operating systems (OSes), that is no good reason for the article author to omit even mentioning FreeBSD "jails" and Solaris "containers": two mature and capable container technologies in the same category as Linux Docker, and nevertheless with container features and functionality every bit comparable to Docker's.

    Popularity or quantity should never become the overriding factor in any area of technology reporting, particularly if credible product technologies in that segment are sidelined.

    Reporting, as done in the USA on many topics, should never depend on the esoteric values of "more" or "larger" being better.

    1. Anonymous Coward
      Anonymous Coward

      Re: unacceptable omission

      Agree

      Ubuntu's Linux LXD containers with "complete Linux distro on Btrfs or ZFS" are nothing new. As with most things "new" on Linux, this is another clone of Unix (Solaris and, going further back, BSD) tech.

      And if you care about a bit of history: LXD is also a rework of LXC for better security, which again is a Linux project that started in 2006, after Solaris Zones was production-ready in 2005 with Solaris 10.

      - Solaris Zones (aka containers) allow a full copy of the OS to be containerized. Of course, ZFS is the default filesystem on Solaris, which offers snapshotting, rapid live migrations, etc.

      - Unlike LXD/LXC, you can even run different guest kernels: 'Solaris kernel zones'

      - Runs on bare metal, with instant boot times, granular controls over host resources, etc.

      - Solaris and derived OSes also run Linux containers. Although Sun/Oracle dropped lxc support years ago, open-source communities (e.g. illumos, SmartOS) have further developed this and run Linux containers on bare metal ( https://www.slideshare.net/bcantrill/illumos-lx )

      In short, there's a lot of 'standing on the shoulders of giants' behind the noise of popular new mainstream Linux projects that tech journalists often seem to have little knowledge of, because of which the myth of Linux's superiority over what came before it is perpetuated. In reality it's a game of catch-up in many ways.

    2. Liam Proven

      Re: unacceptable omission

      If you read my earlier articles on containers, they discuss FreeBSD Jails and Solaris Zones in detail.

      But neither FreeBSD nor Solaris has anything like Clear Containers, and LXD is trying to do stuff that even Solaris never attempted.

      Those technologies were the specific focus of this article: LXD and Clear Containers, their resemblances and differences. Not containers in general, which I've been covering for the Reg for six to seven years now, including AIX WPARs and other implementations, yes, right back to CP/CMS.

  13. cdegroot

    Too much credit to VMware

    AFAICT, virtualization on the 386 has existed pretty much as long as the 386 - which is quite a bit longer than VMware. QEMM is one example (I used it to run Windows for whatever and DOS for my Fidonet BBS side-by-side), and a friend of mine was so dead set on getting the WordPerfect thesaurus sources that he wrote basically the essence of VMware just to make WP run in a DOS box he could control. VMware just packaged stuff that people were already doing in a nicely marketed product (which was awesome in itself, but technologically speaking it was hardly a big step, even in the x86 world).

    1. Liam Proven

      Re: Too much credit to VMware

      Comparable, yes, but not the same thing.

      The 386 could only support multiple virtual _8086_ sessions: that's why it was called "virtual 86 mode". You couldn't virtualise and run 386 (or 286) code in multiple sessions on the 386 -- only plain old real-mode DOS, as on the original 8088/8086 PC.

      So, something different. VMware etc. let you run multiple x86-32 or x86-64 environments on an x86-32 or x86-64 machine: the *same OS* can be host or guest or both.

      By the way: QEMM didn't do multitasking or virtualisation. You needed DESQview for that.

  14. Hstubbe

    So, I've been using containers for about two decades. Except we called them jails, and they ran on FreeBSD. All the penguin fanboys laughed and said virtualisation was so much better. Look at them now, pretending they invented something new. Maybe they'll get ZFS right sometime soon too; I mean, it's been about a decade that it has been rock solid on FreeBSD.

    (Of course, the real innovators were Sun with Solaris back in the day; we're still digesting all that and integrating it into inferior OSes)

    1. Anonymous Coward
      Anonymous Coward

      Downvoted for dank attempt at starting a pointless flamewar.

    2. Liam Proven

      Not 20 years yet.

      Jails first appeared in FreeBSD 4, released in 2000.

  15. cjcox

    While containers are useful

    While containers are useful, the article is marred by a whole lot of "crap" statements that just really are not true. Virtualization through hypervisors has its purposes, and if done well is lightning fast to start up (putting to rest one of the "crap" comments), and containers have their purposes.

    Oh, and by the way, lest we forget: physical servers are always there, and are necessary beyond just being the plumbing.

    My point is all of these are useful tools. The author comes off sounding like an idiot.

  16. IanMoore33

    blah blah blah

    LXD / LXC / LSD

    All flashbacks to UNIX-based chroot tools we used 25+ years ago. What makes it all work so much better today is the higher-performance servers it runs on.

    1. Destroy All Monsters Silver badge

      Re: blah blah blah

      LOLNO

      I was in business at that time, and this ain't that at all.

  17. vaporland
    Thumb Up

    excellent article

    summarized a number of concepts that had eluded me until I read this.

  18. patrickstar

    For the use-cases where containers are suitable, there's been a very good alternative for ages: Solaris Zones. And unless you need resource partitioning and/or have crappy software that scribbles all over the system and doesn't play nice with its neighbors (which admittedly is more common on Windows, but *ix isn't without sinners either), you can even skip that and just run stuff as different users on a single system.

    However, there are certainly reasons why people virtualize that can't be satisfied with simple containers.

    A major one being live migration and HA/failover.

    I'm not exactly VMware's biggest fan, but it certainly provides a very neat way of taking your existing setup (possibly consisting of about a gazillion different OS variants) and making it redundant and disaster-recoveryable.

    And I really can't see a way of doing this on current systems without emulating all the hardware, at least if you don't have explicit OS support...

    I've even come across huge insanely expensive and very critical turn-key software that advertises "redundancy"... which simply consists of telling you to use vSphere HA.

  19. Keith Langmead

    Marketing guff???

    "All the "type 1" and "type 2 hypervisor" stuff is marketing guff"

    What? In what way is it supposed to be just marketing guff? There are clear, major and well-defined differences between the two. One runs the hypervisor directly on the hardware, with all VMs running on top of that, including the management OS, so there's minimal overhead between the OS and hardware. The other runs the hypervisor on top of the existing OS, with all VMs running on top of that, so all VMs have to communicate with the hardware via not only the hypervisor but also the management OS, and are also reliant on the management OS not having any issues.

    Perhaps a petty point, but when such a massive error is made with something I do know about, I wonder how many errors are in the rest of the article on topics I don't know about.

    1. Liam Proven

      Re: Marketing guff???

      I stand by it.

      Look at VMware ESXi: a circa 350MB download, and when running, it takes 1.8GB or so of RAM before you start the first VM.

      That is *not* "bare metal." That's an OS in its own right. Furthermore, it's running a Bash shell, optionally SSH, a TCP/IP stack, USB device support -- even today, I'm willing to bet there's most of a Linux distro in there somewhere.

      A dedicated hypervisor OS is not the same thing as -- for example -- IBM p Series virtualisation, where the hypervisor is right in the firmware.

      ESXi and Hyper-V are basically cut-down full-stack general-purpose OSes, pared back so that there's just enough to start and juggle VMs and not much else. Management is offloaded to client machines.

      That's not the same thing as the in-firmware, OS-less hypervisors of the non-x86 vendors -- IBM's LPARs and PR/SM, Sun/Oracle SPARC LDOMs, etc.

      1. patrickstar

        Re: Marketing guff???

        In the case of VMware, it's 100% marketing lies.

        The actual hypervisor is literally the same code across their entire product line - ESXi, Workstation, etc.

        ESXi is just the VMware hypervisor running on top of a custom kernel (VMkernel) with Linux drivers and a Linux userland (really, the binaries from the ESXi install run just fine when copied to a Linux system, except those that depend on VMkernel-specific syscalls).

        I can definitely see the point in having a small OS build dedicated to running your hypervisor and the stuff needed to support it - though I suspect the whole VMkernel thing is a bit too NIH and that they would have been better off customizing an existing kernel - but claiming there is some fundamental difference against competing products is just insane.

        A lot of their marketing material is acutely nauseating for someone with actual technical knowledge of the matter.

        As you've noted, it's not even particularly small. You can easily build a Linux system that's 1/10 the size and hosts whatever hypervisor you want. And it'll be literally exactly as "bare metal".

        The SPARC actually-bare-metal hypervisor is open source as part of OpenSPARC, by the way. As you also noted, it's fundamentally different - and much smaller than VMkernel, never mind the actual VMware hypervisor!

    2. patrickstar

      Re: Marketing guff???

      There is a distinction, but it's arguably not relevant for virtualization on x86.

      And it's certainly not relevant when comparing say VMware ESXi with any other x86 hypervisor on the market, or even against VMware Workstation.

      You are exactly as "bare metal" if you run a VM in VMware Workstation (or VirtualBox, or KVM, etc) on standard Linux/Windows/whatever as you are if you run it in ESXi. The only difference is WHICH OS you have "under" the hypervisor, not whether you have one at all.

  20. ecofeco Silver badge

    'Cause girls...

    ...just wanna have fun?

  21. Jove Bronze badge

    Microsoft Bias?

    Why are there only articles about Microsoft linked to this article?
