Vendor lock-in is truly a TERRIBLE idea ... says, er, Microsoft

When it comes to building applications for the cloud, John Gossman thinks agility and portability are essential. "You don't want to get locked in too much to a particular vendor, strategy, technology, whatever," he says. Gossman's advice should shock nobody. What might surprise you, though, is that he works for Microsoft. Ten …

  1. Anonymous Coward

    Server Licenses

    Quote

    But if Windows Server licenses aren't selling as briskly as they once were, Microsoft's cloud strategy is its hedge against that decline.

    If my customers are anything to go by, they see little to be gained by upgrading to Server 2012 from their existing Server 2008R2 setup.

    Until they embark on a hardware replacement programme it is very much 'Steady as you go'.

    Mention Cloud to them and they just smile sweetly at you, say 'Why?' and show you the door.

    Not all types of business are suitable for moving to AWS/Azure. My customers would never ever consider moving to external cloud systems. They can't risk network outages. An outage at the wrong time could result in up to 100M GBP of damage to the plant. That is far too much of a risk for their insurers.

    1. majorursa

      Re: Server Licenses

      Maybe the risk of a network outage is as costly as you claim to that company, but the risks of on-premise data storage and computing are also considerable. Not long from now that same insurer will demand draconic and auditable standardized measures to be taken on your 'own' systems to even consider giving you coverage. That will make the comparison more balanced I expect, even without the already high costs of local solutions.

      1. Khaptain Silver badge

        Re: Server Licenses

        @majorursa

        Hence the reason that large companies have private offsite backups. The cloud belongs to someone else, you do not have control over that environment.

        Private offsite backups are just that, private...

        When the cloud goes down, you truly are in the shit... and between you and the cloud there is a multitude of people not willing to take the blame.

      2. Anonymous Coward

        Re: Server Licenses

        "That will make the comparison more balanced I expect, even without the already high costs of local solutions."

        How about factoring in the cost of a 50Gbps connection to the internet, or direct to Azure? That's what our server estate had to deal with at peak load.

      3. Trevor_Pott Gold badge

        Re: Server Licenses

        "Maybe the risk of a network outage is as costly as you claim to that company,"

        They're damned high for almost any company.

        "but the risks of on-premise data storage and computing are also considerable."

        No, they're not. Are you sure you understand how storage works? Because we're really quite good at it by now.

        "Not long from now that same insurer will demand draconic and auditable standardized measures to be taken on your 'own' systems to even consider giving you coverage."

        Already do. No problem. Well, actually, they're not draconic at all. They're fairly well-thought-out standardized tests that can easily be seen off by getting a member of CIPS to sign off on it. Just as having a professional accountant sign off on your books is required, so too can getting a legally recognized professional IT practitioner to sign off on your IT designs be required.

        What's wrong with that? I'd need the same thing if I were using the almighty American Public Cloud...except that it would be 10x as expensive and far less likely to pass muster, due to the nature of single points of failure in the American Public Cloud Computing model that are completely beyond my control.

        "That will make the comparison more balanced I expect, even without the already high costs of local solutions."

        Actually, it usually means the unreliable and ridiculously expensive public cloud solutions go down in flames. And speaking of flames, I think you'd be surprised at what local tech can take.

        On the other hand, too few people realise that American Public Cloud computing still requires proper architecture, including backups.

  2. W. Anderson

    The statement that John Gossman thinks "agility and portability are essential. You don't want to get locked in 'TOO MUCH' to a particular vendor, strategy, technology, whatever" does not indicate a sea change in attitudes and practices by the (new) Microsoft. The two words "too much" are very telling: what Microsoft thinks is "too much" can be, and is, significantly different from what the rest of the technology industry thinks, particularly the Free/Open Source Software (FOSS) community and organizations, towards whom Microsoft remains extremely antagonistic and predatory - willing to destroy.

    A good command and understanding of the English language and subtle nuances of (double) speak is still required by readers of Microsoft propaganda.

    1. Fatman
      Joke

      RE: "Nuances"

      A good command and understanding of the English language and the ability to detect ~~the subtle nuances of (double) speak~~ the aroma of bullshit is still required by readers of Microsoft propaganda.

      FTFY!!!

  3. thames

    Pull the other one!

    Microsoft has changed? Horseshit! The market has changed and Microsoft is desperately waddling along behind it. What's happened now is that a modicum of competition in platforms has been introduced, and Microsoft, like every other proprietary legacy vendor, is forced to react to something they never expected to see happen.

    Microsoft loves Linux? It's more like they know that outside of Microsoft's own narrow circle, "cloud" is pretty much synonymous with "Linux". If you look at what is happening in the software development world, everything "cloud" that matters runs on Linux. Everybody except Microsoft is targeting Linux. If Microsoft's Azure cloud doesn't offer Linux in a major way, then they can pretty much write off being considered a serious player in the cloud business.

    Microsoft working with Docker is nothing new. Microsoft has been lobbying (and sometimes even paying) open source web-oriented platform developers to port their software to Windows. In many cases the software theoretically ran on Windows, but always as an afterthought. There would be Windows-specific bugs that languished in bug trackers, installing on Windows was an absolute pain, and the Windows-oriented support community was non-existent. Meanwhile on Linux, it was always just "apt-get whatever" to install and get running in seconds, and there was loads of information and help available to get and keep things going. Microsoft saw all the "web 2.0" stuff going almost exclusively to Linux, leaving only legacy workloads for Microsoft. The same sort of rationale is going into their work with Docker.

    "You don't want to get locked in too much to a particular vendor, strategy, technology, whatever," (Microsoft man John Gossman) says.

    That's the standard line from every also-ran vendor. "I'm not the vendor with a lock on the market? Well then, vendor lock-in must be a bad thing, or at least it is unless I get control of the market."

    Oh, and if you're not in the cloud, or if you are in the cloud but only use one vendor then you should be fired? Let's deconstruct that a bit. "If you don't buy my product, then you should be fired". Or "if you buy my competitor's product and not mine, then you should be fired". Yes indeed, that's a very convincing argument about the technical and financial merits of Microsoft's Azure cloud product. Yes, I can see how Microsoft has indeed changed.

    If you look at the cloud offerings from the market leaders (Amazon, Google, Microsoft), they're all about vendor lock-in. Moving a non-trivial site from one cloud vendor to another is as big an effort as porting between operating systems. "Cloud" is the new operating system: it's a platform on which you run things. Moving from one vendor's cloud to another's is no more a push-button operation than porting a C program from Windows to Linux is a simple re-compile. These cloud vendors all know that once they've got enough of your non-commodity IT in their cloud, then for most normal businesses (i.e. companies who just use computers to do other things), the pain and cost of moving to a different vendor will keep you locked in, just like those old VB6 and ASP programs keep you locked into MS Windows.

    We had "public cloud" before. They were called time shares on mainframes. I've been paid in the past to move people off "public cloud" timeshare mainframes and on to PCs to get massive cost savings. It was a case of re-write from scratch rather than pushing a button.

    I'm all in favour of the public cloud in the right applications. Saying though that it's the answer to everything is like saying that NOSQL databases are the answer to all data problems (because they're web scale).

    I do think though that IT professionals are going to need to learn about Docker and other similar things in order to run applications on their own infrastructure. Once it is better developed, "private cloud" is going to be how people run their own on site systems, even at the small business level. It won't be about scaling to thousands of instances and global distribution. It will be about ease of installation and management. Done properly, all those nasty dependencies will get containerized, redundancy and migration will be built-in, and people will wonder how they ever did without it.

    What everyone ought to be doing at this stage is to download at least one Linux distro that has good support for Docker and a large and active community around it, and start playing around with it so you can educate yourself. If you don't, you will end up like the mainframe guys who laughed at anything related to PCs or x86.
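
    Getting to the playing-around stage is quick once Docker itself is installed. Here's a dry-run sketch of a first session - the commands are stock Docker CLI, and the `RUN=echo` guard just prints them so you can read before you leap:

    ```shell
    #!/bin/sh
    # Dry-run sketch of a first Docker session, per the advice above.
    # RUN=echo just prints the commands; clear it on a box with Docker.
    RUN=echo
    $RUN docker pull ubuntu               # fetch a base image
    $RUN docker run -it ubuntu /bin/bash  # throwaway interactive container
    $RUN docker ps -a                     # list containers, running or not
    $RUN docker images                    # list downloaded images
    ```

    Ten minutes of that teaches you more than any amount of cloud marketing.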

    1. Destroy All Monsters Silver badge
      Windows

      Re: Pull the other one!

      A healthy dose of realist cynicism, I see. Carry on!

    2. nematoad
      Windows

      Re: Pull the other one!

      I agree with you.

      As the Bible says, "There is joy in the presence of the angels of God over one sinner that repenteth." (Luke 15:10)

      So maybe MS have had a change of heart and we should all be congratulating them for joining us in the real world. Or it could be that the thing most dear to their hearts, the sacred bottom line, is being threatened and this is all smoke and mirrors.

      We shall see, but judging from past experience the old saw "follow the money" is a better explanation of this charm offensive.

    3. Fatman

      Re: Pull the other one!...Let's deconstruct that a bit.

      Let's deconstruct that a bit. "If you don't buy my product, then you should be fired".

      I believe that there is an extraneous word in that quote, and I will leave it to the reader to determine which word I am referring to.

  4. Cynicalmark
    Headmaster

    What a load of

    total utter ballcocks. Locked in my ass - you're only locked in if you can't be arsed to learn how to cross platform your data.

    1. P. Lee

      Re: What a load of

      > you're only locked in if you can't be arsed to learn how to cross platform your data.

      Isn't that the point? The cloud has very little Windows in it, so "lock-in" refers to being "stuck" on a non-Windows cloud platform.

      I suspect MS will want things like docker to bring apps to Windows and then leverage their enterprise strengths (AD) as "value-add."

      Embrace, Extend, ...

      In this case, I think they are just protecting their enterprise base against being outsourced to the cloud. Cloud providers are very cost-focussed, and Windows licensing will usually blow that out of the water, especially at the homogenised hyper-scale they operate at. These companies live by their skills and want to cut their licence costs to zero. That means MS is just looking at corporate apps, probably on its own cloud - wheee, software rental!

      MS would rather compete against a linux cloud and gain a little, than fight its own installed base in the data-centre and get nothing extra.

  5. Anonymous Coward

    Still don't see what all this docker stuff buys you.

    As someone who basically doesn't do Windows - it's all Linux or BSD for me.

    So, assuming you have a private repository of packages (debs/rpms/tarballs),

    and a base image with sane defaults, configured to pull packages from your private repository:

    Why am I using docker over just packaging an application as a meta-package depending on the bits I need?

    So if I only want a sane webserver setup, I package corp-httpd and job done.

    The only hard bit is to decide what a host will do, and that can be a shell script that sets a hostname and installs on first boot.

    I don't really care what flavour of Linux or BSD, it's roughly the same process. Why would you bother with docker when you can do all of this on loopback mounts with a few hundred lines of shell?

    I get that not everybody has access to the tooling to turn a debootstrap into a production webserver, but it seems that with docker you still have to do all the packaging right, and that's all the work.

    The rest of it is basically unpack tarball and chroot install, which almost everybody has scripted away.
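
    That scripted-away version is only a handful of lines. A dry-run sketch, with the image name, mount point and corp-httpd meta-package all made up for illustration:

    ```shell
    #!/bin/sh
    # Dry-run sketch of "unpack tarball and chroot install" on a loopback
    # mount. DRYRUN=echo just prints each command; clear it (and run as
    # root, with a real base-rootfs.tar) to execute.
    DRYRUN=echo
    IMG=web.img
    MNT=/mnt/build

    $DRYRUN dd if=/dev/zero of="$IMG" bs=1M count=1024   # blank 1 GiB image
    $DRYRUN mkfs.ext4 -F "$IMG"
    $DRYRUN mkdir -p "$MNT"
    $DRYRUN mount -o loop "$IMG" "$MNT"                  # the loopback mount
    $DRYRUN tar -xpf base-rootfs.tar -C "$MNT"           # unpack the base image
    $DRYRUN chroot "$MNT" apt-get install -y corp-httpd  # meta-package pulls the rest
    $DRYRUN umount "$MNT"
    ```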

    Gentoo does this with a custom stage 4, Debian with preseeds, Red Hat with kickstart.

    I'm hazy on the proper bsd way but it surely exists.

    Most of the deployment stuff can really be boiled down to a kernel and rootfs image, at least on linux.

    So Docker helps if I've got all that stuff, but if I've not got all that stuff, then what does it buy me?

    What am I missing here?

  6. Henry Wertz 1 Gold badge

    "What a load of total utter ballcocks. Locked in my ass - you're only locked in if you can't be arsed to learn how to cross platform your data."

    Yes. And part of this is... some vendors' products include support for mixing and matching with products from other vendors, and help you move your data out if you have to. Other vendors range from pretending competitors don't exist to actively hindering mixing and matching, and they hinder moving your data out if you have to.

    IBM in the mainframe era was infamous for this, from the hardware to the use of EBCDIC instead of ASCII, all the way up through the software stack. Microsoft is well known for this: being pretty much the only vendor not to support ODF (until they eventually caved in and did); Exchange using proprietary data formats, with no provided way of getting e-mail etc. in or out of it; Outlook, same. Various products (including .NET frameworks) are tied to SQL Server and only SQL Server (for example, I tried to use Entity Framework with MySQL since it claims to use SQL... it doesn't: I got it to *connect* to MySQL, but it actually generates *T-SQL*, i.e. non-standard SQL Server SQL, and refuses to generate standard SQL for even basic queries). The list goes on and on. If you're used to Microsoft products you might consider it the norm to have to purchase third-party software to perform operations that competing software supports out of the box in the interest of interoperability and industry standards.

    But, I think this time it's possible they are being genuine (rather than "embrace, extend, extinguish" of the past.)

    I think they simply had to own up that many *many* pieces of "cloud" software, frameworks, and development environments are for Linux and not Windows. Furthermore, I'm just not sure how much Windows-based cloud software will start coming out; Visual Studio is currently, frankly, a bit of a dog's dinner (not that the software is necessarily bad, but the current state of the software and documentation makes it extremely hard for someone to either port software to a "cloud" or start from scratch).

    Lock-in in this case fails, and just locks people *out* of using Microsoft products; if they want to sell much more than some hosted SQL Server and Exchange instances, they must support Linux and all the software people now use for cloudy-type services.

    Similarly, the container formats, management utilities, and so on are probably not Linux- or Windows-specific. The Microsoft of the past would not have supported this stuff; it would have preferred vendor lock-in, viewing support for standard container formats and so on as helping people move from Azure to other clouds. I think they have recently realized that potential customers will view "well, these containers and utilities support most clouds and hypervisors except Azure and Hyper-V" as an excellent reason to go elsewhere for their cloud services and hypervisors, so they had best support them when reasonably possible.

  7. Henry Wertz 1 Gold badge

    "Why am i using docker over just packaging an application as a meta-package depending on the bits I need.

    So if i only want a sane webserver setup, I package corp-httpd and job done.

    The only hard bit is to decide what a host will do, and that can be a shell script that sets a hostname and installs on first boot."

    Packaging your application as a meta-package works fine for making your package easy to install on Linux distros that use the same package format as yours.

    If you have (for political or business reasons, it really doesn't matter) multiple groups who cannot even agree on what distro they want to use but they don't want two largely-idle servers... well, the one group can use Redhat Enterprise Linux, the other can use Ubuntu, and more or less pretend they each have their own server. This isn't really possible without containerization or virtualization.
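
    In docker terms, the two-groups case sketches out to something like this. It's a dry-run sketch: the container names are made up, and centos stands in for RHEL since that's the freely pullable image:

    ```shell
    #!/bin/sh
    # Dry-run sketch: two groups, two userlands, one host kernel.
    # RUN=echo just prints the commands; clear it on a box with Docker.
    RUN=echo
    $RUN docker run -d --name group-a-web centos:7 /usr/sbin/httpd -DFOREGROUND
    $RUN docker run -d --name group-b-web ubuntu:14.04 apache2ctl -D FOREGROUND
    # each group gets a shell on "their" server without seeing the other's:
    $RUN docker exec -it group-a-web /bin/bash
    ```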

    There's also the case of sloppy commercial software that the vendor won't support unless it's on "its own" computer. If that's because they judge resource usage to be so intense it needs its own machine, I don't expect them to support it under Docker either; you're still effectively violating the system requirements whether a second daemon is running in Docker or on bare metal. If the software requires "its own" computer because the installer's an unholy mess that spams the filesystem, or it requires particular versions of some libraries but doesn't include them in its own private /.../lib directory, well, Docker would be perfect for that (just as chroot jails were effective for this in the past).

    1. Anonymous Coward

      "If you have (for political or business reasons, it really doesn't matter) multiple groups who cannot even agree on what distro they want to use but they don't want two largely-idle servers... well, the one group can use Redhat Enterprise Linux, the other can use Ubuntu, and more or less pretend they each have their own server. This isn't really possible without containerization or virtualization."

      I can see that being a valid use case, but packaging for all supported distros is not that big a deal these days: https://github.com/jordansissel/fpm will take a folder and spit out rpm/deb/solaris/osx packages (no affiliation, just a user).
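
      For what it's worth, the meta-package trick from the parent comment fits fpm too. Something like this, if memory serves on the flags - the package and dependency names are made up, so check fpm's own docs before copying:

      ```shell
      #!/bin/sh
      # Sketch: build a hypothetical corp-httpd meta-package with fpm.
      # The command is built into a string and echoed rather than run,
      # so it works as a dry run even without fpm installed.
      CMD='fpm -s empty -t deb -n corp-httpd -v 1.0 -d apache2 -d corp-ca-certs'
      echo "$CMD"   # pipe to sh (with fpm installed) to actually build it
      # swap -t deb for -t rpm and the same inputs give you the Red Hat flavour
      ```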

      I'm fairly agnostic about which distro; it does the same job, the scripts are just a little different - and again, you only need to build the base image once.

      Virtualization is assumed these days - everything is KVM on physical metal and VMs above that - so sprawling applications only really affect their VM, most of which are run until they die or need upgrading, then replaced with the current base image + meta-package. (We never upgrade in place; we replace the VM with a new one.)

      I can see the benefit in chroots / BSD jails, but these days the entire VM is the jail, as far as I can see.

      Okay, that's perhaps risky (https://blog.nelhage.com/2011/08/breaking-out-of-kvm/), but it's what we currently do.

  8. John Smith 19 Gold badge
    Unhappy

    Leopards and spots.

    This is the "new" Microsoft?

    Sounds a lot like the old MS to me, but now your data could be dumped $deity knows where, with $deity knows what privacy laws.

    You say "cloud", I say "mainframe".

    You say "browser", I say "universal dumb terminal".

    Let's look at the mainframe history.

    Anyone here know what it was like to port between

    IBM mainframe <--> Sperry <--> Burroughs (stack-architecture mainframe with no MMU) <--> ICL (single accumulator) <--> Amdahl (IBM-compatible, designed by ex-IBMers)?

    Most descriptions I've read come down to "It was Hell on Earth" or "It was quite easy as we'd planned to rehost the software from day one."

    I'm sure portability is possible if you plan for it.

    Just like it always has been.
