Linux is so grown up, it's ready for marriage with containers

Linux is all grown up. It has nothing left to prove. There's never been a year of the Linux desktop and there probably never will be, but it runs on the majority of the world's servers. It never took over the desktop; it did an end-run around it: there are more Linux-based client devices accessing those servers than there are …

  1. HmmmYes

    Windows server really?

    Corps do seem to have clustered around VMware for virtualisation. VMware were first, were good, and have delivered excellent products.

    But Windows Server? Our corp server is all Windows. None host VMs.

    Hyper-V does look promising but the need to run WS20xx is a hurdle.

    1. TheVogon

      "Hyper-V does look promising but the need to run WS20xx is a hurdle"

      You don't need to run Windows Server. Running Hyper-V under Windows Server is optional. Hyper-V Server is a standalone hypervisor that runs with all features enabled without Windows Server.

      However, even if you do choose to run it under Windows Server - it's not a hurdle at all. Especially if, like most companies - and as you mention - most of your servers already run Windows Server.

    2. Hans 1

      > Our corp server is all Windows. None host VMs.

      Seriously ? I am sure you must have at least one AS/400 and a mainframe lying about, come on ... All Windows is expensive, apparently you guys have money to burn ... a very 1990s strategy.

      1. TheVogon

        "All Windows is expensive"

        Well no - Windows is commonly the lowest-TCO option for many corporate uses - and additionally you only need one set of support skills if you have a uniform environment.

        What would likely be more accurate is to say that virtualising much of his server estate would lower his on-going support and licensing costs...

        1. Hans 1
          Windows

          The thing that you seem not to get is that "all Windows without hardware consolidation (aka VMs)" is the most expensive of options .... Windows is NOT the most expensive option on the desktop - I think Solaris/HP-UX/AIX would be - but it certainly is NOT the cheapest; Linux is ... and Linux comes with the equivalent of Windows Server Datacenter edition on the server, then the same for the desktop ... and is cheaper than Windows Pro + Office, since you only pay support and it has all the software you need, even more than you need ....

          The thing is, it is by far the most expensive on the server ... think about it ... the competition offers no Basic/Advanced/Datacenter variants ... with the competition, you get Datacenter for everything ... this, coupled with VMs, means the Windows Server offering is not even in the same league .... and then they even dare to pile on CALs and server software licences (Exchange, SQL Server, SharePoint etc.) ... when in Linux/FreeBSD land, you only pay support, whatever you run on your server ...

          I think you should ask Santa for a calculator ... you would be surprised.

          Was it you playing sax in the street last Sunday ? You should be happy I threw 2 euro into your hat ...

          1. Anonymous Coward
            Anonymous Coward

            "it certainly is NOT the cheapest, Linux is "

            If that were true, people would be using it on the desktop. In reality it makes Windows Mobile look popular...

            "when in Linux/FreeBSD land, you only pay support, whatever you run on your server ..."

            Support for, say, Red Hat is usually far more expensive than licensing Windows Server.

  2. David Roberts

    Out of passing interest

    Virtualisation provides a virtual machine interface.

    I have some W7 systems which won't install Windows 10 (shame! I hear you cry) because of the lack of processor features such as NX support.

    Is it possible to emulate these features in a VM or is the VM also directly constrained by the processor architecture?

    Oh, and is there any point in running containers (even just for fun) on a home system?

    1. HmmmYes

      Re: Out of passing interest

      Yes. Anything can be emulated. Wether someone can be arsed to do it is another thing.

      Yes. Have a go at running containers.

      Have a go at Docker.

      Also have a go at installing OpenIndiana and giving Zones and the network spoofing a spin.

      Docker will get more like (Solaris) Zones and Crossbow as it evolves.

      Daft really. They should have just ported ZFS, Zones + Crossbow.

      1. HmmmYes

        Re: Out of passing interest

        Or poured all the money going into Docker and the like into OpenIndiana and getting a good, solid, cmdline-only version of Solaris 11 (x86) going.

        The Solaris zone tools do need a better cmdline. PITA to do some stuff.

        1. Anonymous Coward
          Anonymous Coward

          Re: Out of passing interest

          Er, Solaris already can be cmdline only. Pretty sure you don't have to install the GUI every time. Of course, that doesn't mean the existing cmdline tools are any good...

          But yes, it does look like yet again a lot of penguinistas have failed to read their history and are busily repeating it, and getting frothy with excitement over the new thing they're 'inventing'...

      2. David Roberts

        Re: Out of passing interest

        Just passing through again.

        Mr. Picky says that a wether is a castrated ram.

        So the phrase "wether someone" made my eyes water a little.

    2. Bibbit

      Re: Out of passing interest

      I've had W10 running in Virtualbox for a while without problems (although on a Mac OSX host) and there is a PAE/NX processor feature you can enable on there for that purpose. I would have thought that would work equally well on a W7 host.
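      For what it's worth, before fiddling with VirtualBox settings you can check what the host CPU actually advertises. A minimal sketch for a Linux host; the VM name "Win10" in the comment is made up for illustration:

      ```shell
      # Does the host CPU report the NX flag that Windows 10 wants?
      # On Linux the CPU flags are listed in /proc/cpuinfo:
      grep -o -w -m1 'nx' /proc/cpuinfo 2>/dev/null || echo "nx not reported"

      # The VirtualBox-side toggle mentioned above (run while the VM is
      # powered off; "Win10" is a hypothetical VM name):
      #   VBoxManage modifyvm "Win10" --pae on
      ```

      If the flag is there, the guest-side toggle is all you should need; if it isn't, no amount of VirtualBox configuration will conjure it up without full emulation.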

  3. Warm Braw

    VMs are expensive

    That's partly because both operating systems and CPUs converged on the all-or-nothing security model of system space and user space, which might have been fine and dandy for OS/360 but doesn't really cut it even for client/desktop systems any more - systems running loads of untrusted code from untrusted internet sources - and certainly isn't ideal for multi-tenant bit-barns.

    VMs are an attempt to circumvent this limitation by essentially adding further execution modes that are (mostly) invisible to the hosted operating systems, but they are essentially kludges. Part of the solution is CPU architectures that are more appropriate to modern needs: VMs are an essential part of getting there from here, but are not a long-term solution. Containers have a place too, but for true flexibility of deployment you need a kind of "Russian Doll" model of nesting, and doing that efficiently requires a rather different approach to memory management/protection and execution modes than is currently the norm in either OS or CPU design.

    1. Mark Morgan Lloyd
      Meh

      Re: VMs are expensive

      > That's partly because both operating systems and CPUs converged on the all-or-nothing

      > security model of system space and user space which might have been fine and dandy

      > for OS/360 but doesn't really cut it even for client/desktop systems

      Seems to me that history is repeating itself: if somebody wanted interactive time on an IBM mainframe he'd generally have ended up with VM, which put a single task into a single virtual machine dedicated to a single user.

      Containerisation, which mandates that software be modified to a greater or lesser extent, is definitely one way round the problem. But it seems to me that this leaves a lot of general-purpose software - in particular the sort of scripted hacks that the Internet grew up on - needing a dedicated virtual machine if it's to avail itself of process migration etc. Allowing that a VM will always have an overhead in either software or silicon, it's unfortunate that process migration, checkpointing and so on aren't actually part of the kernel.

      1. Vic

        Re: VMs are expensive

        Containerisation, which mandates that software be modified to a greater or lesser extent

        Does it?

        Looks like I've been doing it wrong again...

        Vic.

  4. Bibbit

    Succinct

    Quite a nice article on virtualisation for us less familiar with the subject whose only experience of it is running a few guest OSs on Virtualbox and who have heard of Type 1 and Type 2. I am still trying to get my head round containers though; hopefully once the haze of "it is the new hotness" clears we may have light and heat.

    1. lurker

      Re: Succinct

      Main difference is that VMs typically run on a hypervisor and are 'complete' operating systems in their own right, whereas a container is usually not a fully-functioning operating system in its own right - for example it would not normally have its own kernel; instead it shares the kernel of the host.

      This means less overhead to run a container, faster startup and 'hibernation', and smaller footprints and simpler configurations. An ideal container would contain only the code which is unique to the application it is intended to run, though in practice I think they are rarely that efficient. VMs still have their place of course - in fact most containers are run ON VMs. Things are heading towards a three tier model, with containers running on VMs which run on bare metal (OK 4 tiers if you count the hypervisors).

      Think of it as 'VM Lite' and you won't be too far out really.
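      You can see the shared-kernel point directly. A minimal sketch, assuming a Linux host; the docker line is illustrative and assumes Docker is installed:

      ```shell
      # The host's kernel release:
      uname -r

      # A container reports the *same* release, because it has no kernel
      # of its own (illustrative; assumes Docker and the alpine image):
      #   docker run --rm alpine uname -r
      #
      # A VM boots its own kernel, so a guest's `uname -r` can differ freely.
      ```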

      1. beavershoes

        Re: Succinct

        The fastest most efficient model would be to run Linux on bare metal and run containers on that.

        1. Anonymous Coward
          Anonymous Coward

          Re: Succinct

          init and a few basic services don't add much overhead either. That's the way I prefer it. Container with its own virtual network interface and an ssh daemon. Easy to ssh into and manage.

        2. skies2006

          Re: Succinct

          Ya mate, that is where the trend is going: towards small, stripped-down Linux OSes that only have the necessary things to run the containers, networking fabric, storage fabric and the orchestration agents, so there is more RAM and more clock cycles left over to run actual workloads. See CoreOS and Project Atomic.

    2. Vic

      Re: Succinct

      it is the new hotness

      This guy?

      Vic.

  5. Someone_Somewhere

    Some Extra Info

    for those unsure about the current landscape.

    Good Graphic: https://www.quora.com/How-do-these-technologies-fit-together-Docker-CoreOS-Kubernetes-Mesos-OpenShift-OpenStack

    Good outline: https://www.quora.com/In-the-future-of-data-center-architecture-who-will-be-the-main-orchestrator-controller-of-containers-Mesos-or-Kubernetes-What-will-be-the-division-of-responsibilities

    For VMs specifically, take a look at Qubes OS as well.

  6. AMBxx Silver badge
    Childcatcher

    So?

    How big and super-functional does a container need to get before it's considered a virtual system in its own right?

    1. Someone_Somewhere

      Re: So?

      https://www.youtube.com/watch?v=gxPF9AeHznE&index=1&list=RDgxPF9AeHznE&nohtml5=False

      1. Someone_Somewhere

        Re: So?

        A thumb down?

        I guess you haven't seen that episode then.

        I was actually agreeing with the observation that there comes a point beyond which it's "You say tomato: I say tomato", but that evangelists for the cause won't want to hear that.

  7. Sil

    No free lunch

    You paint a very incomplete picture of virtualization.

    You describe VMs' failings but totally forget to mention those of containers, and their impact on security and reliability.

    Containers are not the new VM. Sometimes they are more appropriate, sometimes they aren't.

  8. David Fetrow
    Linux

    Nitpick: Containers not all that new in Linux

    Yes LXC is relatively new, quite nice, built in, popular and that's what Docker uses.

    There have, however, been usable Linux containers (usable for my purposes at the time) before LXC, although they usually [always?] required a patched kernel. Linux-VServer was what we used, but there were more.

    http://pivotal.io/platform/infographic/moments-in-container-history

    1. Hans 1
      Boffin

      Re: Nitpick: Containers not all that new in Linux

      >Yes LXC is relatively new, quite nice, built in, popular and that's what Docker uses.

      Yes LXC is relatively new, quite nice, built in, popular and that's what Docker used. They now use libcontainer.

      TFTFY

  9. Hans 1
    WTF?

    >Debian for the hardcore

    First time I've read anything as silly as this!

    Slackware and Gentoo are for the hardcore; Debian has always been for the beginners. Ubuntu and Mint might have improved on that, but Debian is really not for the hardcore Linux guyz.

    https://techgeekforever.com/2014/06/17/the-linux-distro-for-your-needs/

    1. Someone_Somewhere

      Re: the hardcore

      Slackware: in terms of having low-level control of a binary-based system, yes, okay, but installation is anything but 'hardcore' - it takes the same 'kitchen sink' approach as Ubuntu. Arch is, I would suggest, more hardcore than Slackware in that regard, and although the package management is pretty much hand-held to the same point as typing 'apt-get install', you do at least get a nice log of what went where afterwards.

      Gentoo: setting a couple of compiler flags and make-ing isn't really /that/ hardcore - more masochistic really.

      No, the /hard/core uses (hardened) (B)LFS and keeps on top of all the CVEs and upstream updates by hand - I don't know what the correct term for this is, but it leaves 'masochistic' in the dust.*

      * yes, I've done it and, yes, there's probably something wrong with me. ;)

    2. lurker

      @hans1

      The comment was in the context of business usage of VMs really. You won't find much Slackware or Gentoo in use in large companies; Debian is very popular because it has a very stable release and update cycle and is widely supported.

      I've been using Linux for a long time - my first install was Slackware from a crapload of floppy disks onto a 386 - and I have tried many distributions since, and I opt for Debian normally for my VMs.

      So it depends what sort of 'hardcore' you are looking at. Slackware, gentoo, arch etc are good for hardcore enthusiasts but less popular when you are looking at running very large numbers of servers/VMs.

  10. Jeff Lewis

    And Linux fans will rejoice after years of crying "This is the year of Linux in Containers!"

    Oh.. wait...

  11. fnj

    LXD FTW

    VMs are too heavy. Docker containers are too limited and require apps to be specially written. LXD containers are just right for a lot of stuff, if not most stuff. You can put as many services as you want in a single LXD container. Just one if you want - it's not appreciably heavier than Docker. A whole lot of services if you prefer. Just like a VM. And they are ordinary services, just like the ones in the host OS.
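    A hedged sketch of that workflow - the container name "svcbox" is invented, and the launch/exec lines assume LXD is installed, so they're shown but not run:

    ```shell
    # Is the lxc client available at all?
    command -v lxc >/dev/null 2>&1 && lxc --version || echo "lxc not installed"

    # The LXD workflow described above (illustrative only):
    #   lxc launch images:debian/12 svcbox            # boots a full init system
    #   lxc exec svcbox -- apt-get install -y nginx openssh-server
    #   lxc exec svcbox -- systemctl status           # ordinary services under init
    ```

    The point being: an LXD (system) container boots a full init, so you install and manage services in it exactly as you would on the host, while still sharing the host kernel like any other container.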

    1. Someone_Somewhere

      Re: Lxd FTW

      Not everyone wants to use Ubuntu - in fact there's no way /I'd/ use it on a server.

      Remains to be seen how much uptake it gets on other platforms.*

      CoreOS's rkt is the one to watch - open standard and receiving interest (and backing) from serious players.

      * i.e. other distros.
