Google: 'EVERYTHING at Google runs in a container'

Google is now running "everything" in its mammoth cloud on top of a potential open source successor to virtualization, paving the way for other companies to do the same. Should VMware be worried? Probably not, but the tech pioneered by Google is making inroads into a certain class of technically sophisticated companies. That …

COMMENTS

This topic is closed for new posts.
  1. catphish

    "paving the way for other companies to do the same."

    Oh, I didn't realise we lesser companies had to wait for Google to do things before we're allowed to try them.

    1. Anonymous Coward
      Anonymous Coward

      Oh, feel free to invent, refine and publish your own revolutionary techniques.

    2. Daggerchild Silver badge

      Yeah, I'm sure the people who wrote it and then took it all the way to Google scale didn't provide any input you couldn't have come up with yourself.

    3. Anonymous Coward
      Anonymous Coward

      Now that Google approves..

      "Paving the way for other companies to do the same "

      Like most things in Linux, this tech pre-existed in the Unix world.

      This has been widely used tech in AIX since 2007, Solaris since 2005 and even earlier in BSD. All the issues you see with LXC were addressed a long time ago on Unix. And yes, firing up a new container is akin to starting a new process, so you can stand up large numbers of virtual OSes in no time.

      http://www.oracle.com/technetwork/server-storage/solaris/containers-169727.html

      But as with most things in IT, people just follow the noise. Solaris Zones, BSD jails, AIX WPARs and Linux LXC don't bring in revenue from licences and don't sell enough hardware (containers are a far more efficient form of virtualisation, with little to no overhead). So VMware, Hyper-V etc. have a lot of big names in the industry making the noise for them. Truth is, you don't really need them in a lot of cases, as LXC will soon show the wider Linux community as more people adopt it, and as the Unix folk have known for a lot longer.

      1. Peter Gathercole Silver badge

        Re: Now that Google approves..

        AIX WPARs do some other very useful things. Even though they run on one version of AIX, they can present to the application the API of an earlier AIX version.

        So an AIX 6.1 system can containerise an application designed for AIX 5.3, which is still supported for a little while longer, but also for AIX 5.2, which is not. This provides a lifeline for companies that have software that won't run on the latest releases (although the excellent backward compatibility of AIX makes that fairly rare), and either cannot, or cannot afford to, update the applications.

        AIX 7.1 extends this further, allowing AIX 6.1 WPARs. A side effect of this is that customers can buy current hardware that will not run earlier versions of AIX (although, amazingly, AIX 5.3 can still run on most Power 7 and 7+ kit - we will have to wait to see about Power 8), move their applications into these WPARs, and decommission their older systems.

        And I believe that AIX Partition Mobility has now been extended to WPARs, allowing them to be moved to a different system on the fly, provided the storage has been appropriately configured.

        IBM have used their Workload Manager (WLM) functionality to constrain WPARs to a fixed amount of resource, including CPU, memory and I/O, so that a WPAR cannot swamp the host system.

        This is all mature functionality that has been around for a number of years. Nothing new here.

        1. MadMike

          Re: Now that Google approves..

          Yes, AIX WPAR is a direct copy of Solaris Containers, as Linux LXC is. And AIX has also copied Solaris DTrace, under the name AIX ProbeVue, just as VMware has copied it under the name vProbes.

          1. Anonymous Coward
            Anonymous Coward

            Re: Now that Google approves..

            I've worked with Solaris containers, Xen, KVM and others. I can't say that one starts much quicker than another.

          2. Roland6 Silver badge

            Re: Now that Google approves..

            >AIX WPAR is a direct copy of Solaris Containers

            and I suspect that Sun got the idea from mainframe partitioning, particularly as they had a few IBM mainframes in San Jose back in the 80's which they found difficult to migrate away from...

            But then the Unix community did do a lot of sharing behind the scenes, so I wouldn't be surprised if code was contributed by other members of the System V and BSD communities.

  2. This post has been deleted by its author

  3. Buzzword

    Back to the Future?

    Virtualisation, containerisation - isn't it just multitasking? How is this different from what my 386 DX could do nearly 30 years ago?

    1. petur

      Re: Back to the Future?

      No.

    2. lambda_beta
      Linux

      Re: Back to the Future?

      30 years ago, most PCs were task switching ... not truly multitasking.

    3. A Non e-mouse Silver badge

      Re: Back to the Future?

      Virtualisation abstracts the hardware from the operating system (kernel). The O/S thinks it's running all on its own, but the virtualisation layer is simultaneously running multiple (different) kernels. (Paravirtualisation actually requires the guest O/Ss to be modified to run under virtualisation; in this case, the O/S has a vague understanding that it doesn't actually have exclusive use of the hardware.)

      *nix containers are a step up from normal process isolation, further isolating processes from each other. Normally, all processes see the same file system, can see all processes running on the kernel, share the same user accounts (e.g. root), etc. Containers isolate processes so that they have their own directory tree, list of visible processes, user IDs (e.g. a different root account), etc. BUT all these processes are running within the same kernel instance. The kernel just sees them as a group of processes. The kernel can place limits on their use of the hardware (e.g. memory, CPU, disk & network I/O).
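
      A minimal sketch of that same-kernel isolation from C, assuming Linux and root privileges (illustrative only, not a real container runtime): clone(2) with CLONE_NEWPID and CLONE_NEWNS drops a child process into its own PID and mount namespaces, yet it remains an ordinary process on the one shared kernel.

      /* Sketch: spawn a process in its own PID and mount namespaces.
       * Inside, it sees itself as PID 1; on the host it is just another process.
       * Linux only; needs root (or CAP_SYS_ADMIN). */
      #define _GNU_SOURCE
      #include <sched.h>
      #include <signal.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      #define STACK_SIZE (1024 * 1024)
      static char child_stack[STACK_SIZE];

      static int child_fn(void *arg)
      {
          (void)arg;
          printf("inside: my PID is %ld\n", (long)getpid()); /* prints 1 */
          return 0;
      }

      int main(void)
      {
          /* CLONE_NEWPID = own PID namespace, CLONE_NEWNS = own mount namespace */
          pid_t pid = clone(child_fn, child_stack + STACK_SIZE,
                            CLONE_NEWPID | CLONE_NEWNS | SIGCHLD, NULL);
          if (pid == -1) {
              perror("clone");
              return EXIT_FAILURE;
          }
          printf("on the host: that child is PID %ld\n", (long)pid);
          waitpid(pid, NULL, 0);
          return 0;
      }

      Add cgroup resource limits and a private root filesystem on top of that and you have the bones of what LXC packages up.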

      With virtualisation, you can patch & reboot each system independently. In containers, because they share the same kernel, you have to shut down all the containers to patch & restart the kernel. (Although projects like Ksplice are trying to do away with that need.)

      1. John H Woods Silver badge

        Re: Back to the Future?

        ^^ this is the paragraph that should have appeared at the front of the article :-) Thanks.

      2. Anonymous Coward
        Anonymous Coward

        Re: Back to the Future?

        Pretty much like running something as a Service on Windows then.

        1. Vic

          Re: Back to the Future?

          > Pretty much like running something as a Service on Windows then.

          No, not even close.

          Containerisation means each container can operate in its own filesystem, meaning you can have entirely different userspaces on top of the same kernel.

          Running something as a service on Windows is best approximated in *nix by running it as a service...

          Vic.

          1. Ken Hagan Gold badge

            Re: Back to the Future?

            "Containerisation means each container can operate in its own filesystem, meaning you can have entirely different userspaces on top of the same kernel."

            Interestingly, 64-bit Windows does virtualise the filesystem (and registry) for 32-bit processes, but doesn't make the facility available to end-users to create their own. It also virtualises various other parts of the object namespace through its session objects. Obviously "faking it" and "hiding it" are the essential functions of any OS (and always have been) so we shouldn't be surprised to find that the mechanisms are already there and affordable. I wonder if we'll see this amazing new feature in Server 2015, or whatever the next new release is.

            1. Lusty

              Re: Back to the Future?

              " I wonder if we'll see this amazing new feature in Server 2015, or whatever the next new release is"

              App-V provides most of this functionality and has been available for a while. The manageability aspect makes full virtualisation far more attractive for most normal workloads. Google can make use of this because their workloads are massively parallel, automated and redundant, so the management aspect means far less to them.

              This is also why Virtuozzo didn't catch on as well as the marketing guys hoped. Although it does give better density and performance, the drawbacks for the average IT department far outweigh these benefits.

          2. Anonymous Coward
            Anonymous Coward

            Re: Back to the Future?

            "Containerisation means each container can operate in its own filesystem, "

            You get that automatically with a Windows Service too if you use a different Service Account for each Service.

            1. Vic

              Re: Back to the Future?

              "Containerisation means each container can operate in its own filesystem, "

              You get that automatically with a Windows Service too if you use a different Service Account for each Service.

              Really? You can have a totally different C: drive for each service?

              I've been unable to find the procedure to do that, but I'm open to being taught...

              Vic.

              1. MadMike

                Re: Back to the Future?

                Yes, that is correct. If you had Windows Containers, you would have several Windows instances running, each with its own C: disk (they would be different filesystems, not necessarily different hard disks). Each Windows would have its own admin.

                Think of Containers as VirtualBox virtualization. The big difference is, say you start 1,000 Solaris containers, then there will not be 1,000 kernels running. Instead, there is only one Solaris kernel running, and all 1,000 containers are mapped to that single Solaris kernel instance. The big benefit is that each Solaris container only uses an additional 40MB of RAM (for some data structures), and each filesystem allocates around 100MB (it only reads from the global system install but writes to its own separate filesystem). One guy started 1,000 Solaris containers on a PC with 1GB of RAM - it was dead slow, but it worked. If you had started 1,000 virtual machines on VMware, they would have used 1,000 x 2GB = 2,000GB of RAM.

                Solaris Containers is a generic framework, so you can even install Linux in a container (or a different Solaris version). Each Linux API call will be translated into Solaris API calls and mapped to that single Solaris kernel instance. Start 100 Linux servers, and there is still only one Solaris kernel running. As the Solaris kernel is cleverly designed and heavily multithreaded, there is no penalty for running Containers. Studies by Oracle show that Solaris Containers have less than a 0.1% performance penalty, whereas VMware has a heavier penalty of maybe 5% or so. VMware uses far more RAM too.

                However, the BIG advantage of Solaris Containers comes when you combine them with ZFS. You create a Container, install it and test it in a separate filesystem - maybe a database machine (with LAMP), or a developer machine - and these become the master templates. ZFS allows you to clone a master template and start a separate filesystem in less than a second, so you can deploy a tested and configured database server in under a second, with no performance penalty. And each container has its own root user. That root user cannot compromise the global Solaris install; he only has full rights inside his container, and in fact he cannot tell whether he is working in a container or on the full global Solaris install. Server utilization skyrockets, because the RAM cost of each additional server is very low.

                And of course, you can send a Container to another server if you need to do maintenance on the Solaris host.

                Or you could create a Container, set up and configure a new patch of the latest Oracle database in it, and log in and test the Container. When everything works, you just point all new TCP/IP connections at the new container.

                I do understand why Linux really wanted Solaris Containers, under the name LXC, just as AIX has them under the name AIX WPAR. And Linux has SystemTap (a copy of Solaris DTrace) and AIX has ProbeVue (a copy of DTrace). And Linux has systemd (a copy of Solaris SMF), Open vSwitch (a copy of Solaris Crossbow) and Btrfs (a copy of Solaris ZFS).

                It is funny: when everybody talked about ZFS and DTrace many years ago, I said, "but wait, there is more in Solaris! You have Containers, Crossbow, SMF, etc. that are even hotter than ZFS or DTrace". Only now, several years later, has Linux started cloning Solaris Containers.

                1. Vic

                  Re: Back to the Future?

                  Yes, that is correct. If you had Windows Containers, you would have several Windows instances running, each with its own C: disk

                  We weren't talking about Windows Containers, we were talking about Windows Services, and the separation obtained by using different Service Accounts for each. Now I'm not entirely sure, but I don't believe that's the same thing...

                  Think of Containers as VirtualBox virtualization. The big difference is, say you start 1,000 Solaris containers

                  Yeah, I know about Solaris Containers. They're great. This part of the discussion was about Windows Services, which aren't the same thing at all...

                  Vic.

          3. Paul Hovnanian Silver badge

            Re: Back to the Future?

            "Containerisation means each container can operate in its own filesystem,"

            So, sort of like chroot?

            I'm not picking on containerization. I realized that it's probably a much better integrated way of doing things than the user/group, process group, *NIX file system permission model that we've grown up with. But if someone had published the container requirements 20 years ago, I'll bet quite a few *NIX wizards could have implemented something given the existing tool suite.

            1. Vic

              Re: Back to the Future?

              > So, sort of like chroot?

              Much like it, yes.

              Containers go somewhat beyond chroots, with the ability to do things like bounding the amount of processor time permitted to each container, but the guts of it are very similar.
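
              To make the comparison concrete, here is a minimal chroot(2) sketch in C, assuming a prepared directory tree and root privileges (illustrative only): it changes the process's view of the filesystem, but unlike a container it does nothing about PIDs, users or resource limits.

              /* Sketch: confine a shell to a prepared directory tree with chroot(2).
               * The new root must already contain /bin/sh and its libraries.
               * Run as root; this restricts the filesystem view only. */
              #include <stdio.h>
              #include <unistd.h>

              int main(int argc, char *argv[])
              {
                  if (argc < 2) {
                      fprintf(stderr, "usage: %s <new-root>\n", argv[0]);
                      return 1;
                  }
                  if (chroot(argv[1]) != 0 || chdir("/") != 0) {
                      perror("chroot");
                      return 1;
                  }
                  /* From here on, this process and its children see argv[1] as '/'. */
                  execl("/bin/sh", "sh", (char *)NULL);
                  perror("execl");
                  return 1;
              }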

              Vic.

      3. Ken Hagan Gold badge

        Re: Back to the Future?

        Quite a big step up from a Windows Job object, then.

  4. Andrew Jones 2

    LMCTFY - now I have to wonder who actually runs lmgtfy.com

  5. Anonymous Coward
    Anonymous Coward

    Oo Exciting..

    NOT!

    The key benefits of Virtualisation are

    Mixed OS's on a single piece of tin

    Resiliency for when hardware fails

    Easily shifting work, without the need for bespoke implementations (such as scripts, databases etc)

    ..

    I'll admit I've not done much research into this yet, but I just don't see how "containerising" an OS can deliver the above, hence whilst it might be a form of virtualisation, I consider it a completely different category compared to Hyper-V and VMWare et al.

    1. frank ly

      Re: Oo Exciting..

      Paragraphs 7-10 explain the differences and the relative advantages between virtualisation and containerisation. Nate Amsden's excellent comment is very interesting.

    2. A Non e-mouse Silver badge

      Re: Oo Exciting..

      The advantage of containers over full virtualisation is overhead. There's less overhead with a container, and they can often be created quite quickly. With virtualisation, to create a new instance you have to do a full O/S installation. (OK, things like templates can improve this)

    3. Ken Hagan Gold badge

      Re: Oo Exciting..

      It is a different category, in terms of what gets virtualised, but it might be the same category in terms of the problems that it solves. A server farm running zillions of VM with the same OS in each VM is probably just providing zillions of isolated places to run applications that require that OS. Containers are a more efficient way to provide the same isolation.

    4. Anonymous Coward
      Anonymous Coward

      Re: Oo Exciting..

      >The key benefits of Virtualisation are

      >

      >Mixed OS's on a single piece of tin

      ITYM different instances of Windows, which can't generally cope with more than one major app (e.g. Outlook, SQL Server) running on any OS instance at a time. This is the only real reason virtualisation has taken off in the PC world - Unix has always had chroot for separating system spaces, which although not perfect did what was needed in 90% of cases.

      >Resiliency for when hardware fails

      If the hardware goes down your VM goes down with it, unless you have a failover backup system running - and how many places have set that up? Most companies I've worked in use VMs simply to stop Windows apps from fucking up each other's DLLs. Anyway, you can achieve something similar by mounting a chroot jail on a network drive: if one machine goes down, an app on another machine that has mounted the same drive kicks off.

  6. Nate Amsden

    about to deploy a few containers

    I started to mess around with LXC on top of Ubuntu 12.04 (LXC is built in) recently and am about to put a few containers into production.

    In my limited experience thus far, I have not seen the need to use anything like Docker, because our use case for containers is very specific: we will spin up two containers per physical host (just 3 hosts to start) and once spun up they will stay there - no constant spin-ups and spin-downs, these are static workloads. Configuration is handled in the same way as our VMs, with Chef + scripts etc. If one of the underlying hosts fails it's not a big deal, since the others perform the exact same role and are behind a load balancer.

    We are taking the container route (or at least trying it) to avoid the cost of VMware licensing on top of the hosts, for physical machines that will have static workloads but only two containers each (our current deployment model has us deploying to one collection of servers at a time and flipping between them when we upgrade code). The servers already run e-commerce software that comes at a hefty cost per host. Part of the project was to get as much horsepower as possible to maximize utilization of the $$ e-commerce licensing. These are running on DL380p Gen8 servers with 2x Xeon E5-2695 v2 12-core CPUs.

    Also didn't feel like investing time in some "free" hypervisor technology, since that just makes things even more complicated. Sticking with VMware for now at least; no plans for KVM, etc. These systems run off local storage (all other hosts run everything from a 3PAR SAN, including boot disks).

    The upsides are pretty well known I think, so I won't cover them; the downsides were a bit unexpected (perhaps they shouldn't have been had I spent more time researching in advance, but that was really part of this project: to determine whether they were worthwhile or not).

    Downsides that I have encountered (all of these were noted on my specific platform, and I have read confirming reports from others; I don't make any claims that these pitfalls aren't fixed in another version or something):

    - autofs does not work in a Linux container. We use autofs extensively for auto-mounting and unmounting NFS shares. Fortunately for our SPECIFIC use case there are only 3 NFS shares required, so as a workaround they are just mounted in the container during boot. From what I have read this is a kernel limitation. Quite annoying.

    - If you choose to limit the memory resources of a container (which we will do), it is not possible from within the container to determine what that limit is. When the container hits that limit it will have a bad day, and probably be pretty confused if it sees there are gigs and gigs of memory left but the kernel is blocking access. Going to need custom memory monitoring (a sketch of one approach follows after this list)... annoying again.

    - Process lists on the "host" system are very confusing since they include all of the processes of the containers; gotta be more careful with commands like slay and killall, and in general it is difficult to determine which PID goes with what. Also had to adjust a bunch of monitors because they alert based on the number of processes (too many postfix daemons etc.). Pretty annoying, and more so the more containers you have on a given host.

    - Only one routing table for all containers on the host (there may be complex ways around this, but my research says for the most part you're stuck). This would be a deal-breaker for me for anything that is meant to be really flexible. My VMware hosts have a dozen different VLANs on them and each VLAN has a specific purpose. With containers, while I can have multiple interfaces on different networks (or VLANs), when it comes to the default gateway it seems there is only one. VMs of course have multiple OSes, each with its own routing table etc., so they are far more flexible.

    - No obvious way to "vmotion" a container to another host(I believe it's not possible)

    - Management tools are seriously lacking (at least the basic LXC stuff; again, haven't tried anything else). For my specific use case with static workloads, not a big deal.

    - Software upgrades may break in containers depending on what they are trying to upgrade (I forget the specifics but read about it in the docs).

    - Obvious that you are tightly coupled with the underlying OS.
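
    On the memory-limit point above, a sketch in C of one possible workaround: read the cgroup files directly instead of trusting /proc/meminfo, which shows the host's memory. This assumes the cgroup v1 memory controller is visible at its conventional /sys/fs/cgroup/memory path, which may or may not be the case inside a given LXC container - treat the paths as something to verify on your own setup.

    /* Sketch: report cgroup memory usage against the cgroup limit.
     * Assumes the cgroup v1 memory controller at its usual mount point. */
    #include <stdio.h>

    static long long read_value(const char *path)
    {
        long long v = -1;
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%lld", &v) != 1)
                v = -1;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        long long limit = read_value("/sys/fs/cgroup/memory/memory.limit_in_bytes");
        long long usage = read_value("/sys/fs/cgroup/memory/memory.usage_in_bytes");

        if (limit <= 0 || usage < 0) {
            fprintf(stderr, "cgroup memory controller not visible here\n");
            return 1;
        }
        printf("memory: %lld MB used of %lld MB limit\n",
               usage / (1024 * 1024), limit / (1024 * 1024));
        return 0;
    }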

    In the case of Google, if you have a fleet of a thousand servers, for example, and they all run pretty similar workloads and are all connected to basically the same "flat" network, then containers can make a lot of sense, I'm sure, due to the low overhead.

    I certainly would not use them in environments where flexibility is key - dynamic configurations and easy management (migrating workloads off a system to do maintenance on it) - which in my mind represent the bulk of the organizations on this planet.

    For our specific use case I think they will work fine, though for the several hundred VMs we have in our VMware clusters I don't see containers replacing those anytime soon, if ever.

    1. Matt K.

      Re: about to deploy a few containers

      Thanks Nate, very informative from your real-world perspective. I would think that licensing drives a lot of what Google does. Running a million-plus servers, anything they can use that is open source or company built/developed is a big saving on their bottom line.

    2. Anonymous Coward
      Linux

      Re: about to deploy a few containers

      "No obvious way to "vmotion" a container to another host(I believe it's not possible)"

      I think that's what this tech from Google now allows, which is the interesting thing, and seems to be what some commentards have missed. For the small projects with which I have been involved, portability for operational as well as DR purposes was one of the main technical reasons to virtualise, using KVM virtualisation. I've tried LXC as it seems to be an attractive technology for the reasons given here - speed of spin-up, reduced overhead etc. - but running different guest OS's, even different versions of the same OS, seems yet to be cracked. For that reason it seems to me that LXC and other such container technology has a more specialised use case than general virtualisation such as KVM.

      Another apparent advantage of LXC is simplicity, as virtualised systems can become so complicated you lose track of the point of virtualisation in the first place. However, as pointed out by Nate Amsden, it would seem that LXC's simplicity means complicated workarounds to meet requirements in practice. That LXC style of systems suits Google is fully understandable, but I wonder to what extent it scales down to mere mortal level.

      1. Bernard

        re: but I wonder to what extent it scales down to mere mortal level.

        From Nate's analysis and the other shortcomings I've seen so far, all look to be current barriers rather than absolute ones.

        Google, because of their scale, have built this and tweaked it to fit their requirements. Other hyperscale outfits will do likewise, because custom work pays huge dividends for organisations of that size.

        Smaller organisations will wait for the community and/or commercial vendors to build toolsets that solve problems like the inability to monitor and manage memory, and quite probably the lack of hardware independence, and then this should scale down nicely. I don't think it will replace virtualisation for requirements that interact heavily with the hardware stack or need high security or customisation, but there are lots of current needs for which whole virtual machines are unnecessary.

        Plus it's been a while since the last 'next big thing', and this one at least looks to be interesting.

    3. A Non e-mouse Silver badge

      Re: about to deploy a few containers

      No obvious way to "vmotion" a container to another host(I believe it's not possible)

      I believe there are several projects aiming to either migrate a process to another kernel (i.e. host) or write a process and its state to disc, and then restore it later on.

      However, I have no idea if they're actually usable....

      1. Lusty

        Re: about to deploy a few containers

        "I believe there are several projects aiming to either migrate a process to another kernel (i.e. host) or write a process and its state to disc, and then restore it later on.

        However, I have no idea if they're actually usable...."

        I doubt it. For a start, the system you're migrating to would have to have the exact same patch level in order to properly execute the running code. It's likely it would also need the same drivers in many instances too, and this is what virtualisation is there to solve - move the OS at the same time and you have none of these issues.

        For the above poster who said Google have solved this - when they said portability I believe they meant porting the code to give a single API, not porting the containers. Google have no use for moving a running container since everything they do is highly available; they simply move the workload to a different container somewhere else. That's why Google don't need virtualisation: the benefits don't suit their workloads. For everyone else on earth without a factory full of quality code monkeys, though, virtualisation is often the only way to manage workloads sensibly.

    4. Anonymous Coward
      Anonymous Coward

      Re: about to deploy a few containers

      "We are taking the container route(or at least trying it) to avoid the cost of VMware licensing "

      Hyper-V Server is completely free for the fully featured version.

      "Also didn't feel like investing time in some "free" hypervisor technology since that just makes things even more complicated. Sticking to VMware for now at least no plans for KVM, etc. "

      Hyper-V Server simplifies things over VMWare in my experience. KVM does make things more complex though - and doesn't scale as well as VMWare or Hyper-V.

      1. Trevor_Pott Gold badge

        Re: about to deploy a few containers

        "Hyper-V Server is completely free for the fully featured version."

        But the management tools are not.

        "Hyper-V Server simplifies things over VMWare in my experience."

        Bullshit, bullshit and thrice bullshit. SCVMM is a horrific monster sent from hell to make sysadmins miserable. vSphere is a comparative delight to use. It's the little things, you see. Like the ability to use ISOs that aren't in a library that is on a file server that is part of the Active Directory and "controlled" by SCVMM. Or hey, the ability to mount an ISO through the fucking console so I can install a template image in SCVMM might be a breakthrough that would catch it up to VMware from the before-time, when we vMotioned shit by scratching the RAM changes onto stone tablets and passing them around using Token Ring!

        SCVMM is ass. It's an ass' dirty ass hair's ass. It's a shitpocalypse of awfulness when compared to vSphere, and when you start trying to get to cloudy scale and orchestrate things, System Center's agglomeration of soul-destroying mind-snuff porn actually manages to get worse!

        "KVM does make things more complex though - and doesn't scale as well as VMWare or Hyper-V."

        You just won the bullshit award. Ding ding ding! Openstack works like a hot damn at scale, and today is about as complex as trying to do anything with Hyper-V/SCVMM at scale. That is to say both are complete ass, covered in more asses and strewn with the desiccated souls of murdered children... but KVM/Openstack at least is actually free.

        Both platforms basically require PhDs to run at scale, but with KVM/Openstack you only need your cloud of PhDs. You don't also need a legal team with a population larger than Monaco and a SWAT team of kick-ass licensing specialists combined with the GDP of Brunei to stand up a single datacenter.

        VMware requires the GDP of Germany to get to a decent-sized datacenter, but at least you don't need all the wetware to understand how to license it or how to make the thing run.

        Horses for courses, but I question strongly any horse that chooses to endorse the use of force used by Microsoft in setting its course.

        Also: fuck SCVMM extra hard. Because goddamn it is pissing me off today.

        1. Chavdar Ivanov

          Re: about to deploy a few containers

          1. You don't *have* to use SCVMM to manage Hyper-V - but then you knew this already... All you need is a W8.1 Professional workstation with the management role enabled. You need AD infrastructure only if you are after HA via failover clustering.

          2. My guess about your SCVMM obsession is that you haven't bothered, or for some reason are incapable of, upgrading to SCVMM 2012 R2 - try it, it is actually usable. And yes, all earlier versions were a horrible mess.

          1. Trevor_Pott Gold badge

            Re: about to deploy a few containers

            "You don't *have* to use SCVMM to manage Hyper-V"

            Yes, you do, if you're running more than a handful of hosts in a testlab of an SMB so poor they count the fucking pencils to make sure you don't steal one.

            "y guess about your SCVMM obsession is that you haven't bothered or for some reason is incapable of upgrading to SCVMM2012R2"

            I've spent the past three weeks of my life fighting with it as part of a very in-depth review of the offering, as part of a POC regarding the management of a 15,000-node datacenter's infrastructure. Additionally, I've been fighting the damned thing on 5 SMB sites and my own testlab trying to get stuff done for commercial content clients.

            I've had to use every version since Server 2008 R2 and, oh ye, even the "marvelous" PowerShell, in all its glorious I-hope-you-have-a-truly-amazing-memory shining fail.

            1. Fatman

              Re: about to deploy a few containers

              Yes, you do, if you're running more than a handful of hosts in a testlab of an SMB so poor they count the fucking pencils to make sure you don't steal one.

              Shit man, that sounds just like my last employer!!!

              We constantly begged for more IT money, and it ended up in the C level suite. Now you know why I no longer work there. Let the C levels get themselves out of the clusterfuck they created.

            2. Chavdar Ivanov

              Re: about to deploy a few containers

              Fair enough - 15000 nodes is well outside of my experience range... For about a hundred VMs in several clusters SCVMM could be avoided, though - we have it installed but rarely use it (I just mentioned that it looked more sensible than earlier versions).

        2. Sil

          More details

          Can you please be more specific about where System Center VMM is worse than vSphere technically?

          Or is the core of your hostility based on licensing considerations? What issues are you encountering with System Center licensing?

          No flaming, just curious.

          1. Trevor_Pott Gold badge

            Re: More details

            Oddly enough, for once the core of my issues with Microsoft isn't licensing. It's usually cheaper than VMware to deploy SCVMM, and the licensing is straightforward by Microsoft standards (which means you still want to kill yourself, but less with fire and more with poison).

            No, my issues with SCVMM are around "usage of the software in a fully heterogeneous environment", i.e. an environment wherein some or all of the systems (including the ISO library!!!!!!) are stored on systems that are not part of Active Directory. It's usage items like "there is no button to simply mount an ISO stored on your local computer in the VM console you have open so that you can just install a goddamned operating system without adding things to the bloody library."

            SCVMM is administrator hostile. If your goal is to "just get it done" then you'll end up very frustrated. SCVMM is designed solely for heavily change-managed environments. Situations where no VM is created, patched, migrated, etc without someone filling out forms in triplicate and planning the entire thing out ahead of time.

            With VMware you need two things to make all the critical stuff go: one or more hosts, and a vSphere server. That's it. And the vSphere server comes as a bloody virtual appliance! (VUM is going away in vSphere 6, so the bit where you need to install the update manager on a Windows VM is about to be gone.)

            SCVMM requires one or more hosts, an AD cluster (because a single AD server is asking for trouble at the worst possible time), an SCVMM server, a Library server, a separate server to run your autodeployment software, and a client station. SCVMM is heavily reliant on DNS working properly (VMware can live and breathe nothing but IP addresses and be perfectly happy), and a full SCVMM setup (including the directory and all the associated bits) takes hours to set up properly (VMware + vSphere + VUM takes less than 30 mins).

            With VMware, I can go from "nothing at all" to "fully managed cluster with everything needed for a five nines private cloud setup" in well under an hour. With SCVMM it will take me over a week to get all the bugs knocked out, because even after you get the basics set up, there are an infinite number of stupid little nerd knobs and settings that need to be twiddled to make the goddamned thing actually usable.

            With VMware, as long as the system/user context I am using to access the client software can get access to an ISO/OVF/OVA/what-have-you then I can use that to spin up VMs. With Microsoft, templates/ISOs/etc have to be part of the "managed infrastructure" under control of the servers.

            With VMware, I can add monitoring by simply deploying a vCOps virtual appliance, logging into the vCOps appliance's admin website and telling it where the vSphere server is. With System Center, setting up monitoring is a laborious process that takes days.

            With VMware, the VMs come with their own mail servers so that if I don't happen to have a mail server on site - or don't want to rely on the on-site server - the things can still send me mail. With System Center, it's all designed for integration into Exchange.

            Microsoft is all about Microsoft. It's all about having the full Microsoft stack. Everything controlled, managed, integrated with and joined up to more and more and more Microsoft. You don't just "stand up a small cluster" and get the kind of easy-to-use, full-capacity experience that you get with VMware. With Microsoft you need to keep buying more and more Microsoft software to accomplish the simplest goals and then tying it all back together with the other Microsoft software. One big inter-related, interdependent mess where, if you breathe on it hard (or, heaven forbid, DNS goes down), you're fucked.

            VMware doesn't really care what the rest of your network is. Is your file storage an isolated NetApp filer or Synology NAS that isn't joined to any domain? Okey dokey. That's groovy. Is your block storage Bob's Rink Shack Super Special iSCSI Target? Cheers! We'll work with that just fine! NFS isn't integrated into an NIS or AD environment? That's cool too, we'll work with it out of the box.

            VMware allows for total isolation of the hypervisor infrastructure from the rest of your infrastructure. If everything else breaks, your hypervisor and its management tools don't. Which is important, because all the rest of that crap is running on top of the hypervisor!!!!!

            VMware also allows for deploying into environments that have no intention of buying the other eleventy billion Microsoft servers to run everything. If you just need to stand up VMs, and you don't want those VMs to be integrated into the same command and control infrastructure as the hypervisor control plane (like say, every cloud deployment, ever,) then you don't need to fight the design.

            If you are building an enterprise private cloud in which every last thing is part of the same managed infrastructure, all under corporate control, all change-managed and so forth, then Hyper-V is the best private cloud you could possibly run. Until you undergo a merger with another company...

            If you are building a virtualisation setup where you want the infrastructure to be run by different people than the VMs that run on that infrastructure, or you want an infrastructure that can take care of itself even if the rest of the network's management systems have failed (for whatever reason) then stay the hell away from Microsoft.

            SCVMM can be amazing, but only within the narrow range of circumstances it was designed for. VMware is amazing all the time. And that's why Microsoft makes me salty. I cut my teeth on the better product, so every time that I try to make SCVMM do something and it refuses, I get very, very salty.

    5. Anonymous Coward
      Anonymous Coward

      Re: about to deploy a few containers

      Almost all of these downsides are resolved with Solaris zones (the only one currently missing is what Nate calls "vmotion")... And please, do not start whining about Oracle being in the picture; there are alternatives to Oracle Solaris, such as SmartOS, which allow you to keep your karma immaculate.

      Disclaimer : Anonymously because i do work for Oracle.

      1. Trevor_Pott Gold badge

        Re: "Anonymously because i do work for Oracle."

        ...I'm...I'm so sorry...

        1. Robert Grant

          Re: "Anonymously because i do work for Oracle."

          Don't apologise. Tomorrow you won't be hysterical, but he'll still be rich :)

    6. ozmark

      Re: about to deploy a few containers

      > "No obvious way to "vmotion" a container to another host(I believe it's not possible)"

      Pets vs cattle, "vmotion" is not a priority for clustered apps. Horses for courses and all that.

    7. bevand10

      Re: about to deploy a few containers

      Hi Nate.

      You should at least try XenServer from Citrix. It's free these days (has been since June last year), supports the equiv of VMotion, as well as StorageMotion - moving running VMs from one storage to another (e.g. local to NFS).

      We use it as it provides great VFM for our situation.

    8. Anonymous Coward
      Anonymous Coward

      Re: about to deploy a few containers

      "I started to mess around with LXC ... Downsides that I have encountered"

      Try OpenVZ, it's a much nicer Linux alternative to LXC for containers.

  7. This post has been deleted by its author

    1. localzuk Silver badge

      Re: Ever talked to a Google employee?

      I like how fact filled your comment is. Very informative!

    2. Anonymous Coward
      Anonymous Coward

      Re: Ever talked to a Google employee?

      "That actually think they're good at something.

      How this makes "news" is absolutely staggering. LXC is nothing new to anybody."

      You of course, have precisely... what to offer to anyone? Other than your sizeable mouth, by way of your fingers.

  8. Anonymous Coward
    Anonymous Coward

    How Revolutionary!... or maybe not

    Customers have been "container-izing" for a LONG LONG time (e.g. Solaris Zones for 5+ years). Oh wait, Google is doing it with Linux, so it must be new, cool, and revolutionary! I got it.

  9. Anonymous Coward
    Anonymous Coward

    "Google: 'EVERYTHING at Google runs in a container'"

    Tupperware?

  10. Mark 110

    Great Info

    Thanks to all intelligent commentators for lovely info.

    What on earth are people doing on a pure tech thread slagging off Google's ethics? Go to a big IT ethics thread to do that, you f-wits.

    1. Anonymous Coward
      Stop

      Re: Great Info

      "What on earth are people doing on a pure tech thread slagging off Googles ethics. Go to a big IT ethics thread to do that you f-wits."

      I hear what you're saying - but I've never seen a Reg thread that was pure tech. I'll happily bitch with the rest on the monotony of the fanboi flame wars, ideology, and entrenched opinions, but one of the true pleasures of the Reg forums is the leeway for people to spout a lot of opinionated crap (I for one do so all the time) and to be challenged/called on it. It wouldn't be lively if they didn't, would it?

    2. Psmo

      Re: Great Info

      At Google's scale everything is big IT, and with their depth of penetration in lobbying and government everything they do has ethical complications.

  11. MatsSvensson

    Even the refrigerators?

    That seems inefficient.
