It’s clear that Docker is on a tear as it ushers in a brave new world of DevOps. What’s less clear is whether this is a good idea. At least, today. After all, for most enterprises, most of the time, Docker and its container peers are simply not ready for primetime, assuming “primetime” means “standard enterprise apps.” While …
While it seems certain Docker will come to displace stodgier alternatives like VMware in the future, we’re nowhere near that future today.
This seems to miss the point about what "Docker" is good for:
"Any takers out there please"
VMs are generally OS-heavy. Docker is, in effect, a VM that is OS-light: a minimal wrapper generally used to run a single service. It can wrap services on more than just Linux (Microsoft is building Docker-compatible support into Windows), but in practice it is generally Linux. Docker takes advantage of features of the Linux kernel (namespaces and cgroups) to run in an extremely efficient virtualisation mode where the overhead is minimal. Up until now there has been a problematic disjuncture between what you want to do with VMs and what you want to do with services. A good example is when you configure services like Apache, Node.js, MySQL or MongoDB to run on a local system for development purposes (in some ways this is a contrived example, which I will get to later, but run with it for now).
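As an aside, the "feature of the Linux kernel" point is easy to see for yourself. These are illustrative commands only, assuming a Linux host with Docker installed; `alpine` is just an example of a tiny image:

```shell
# Illustrative only: requires a Linux host with Docker installed.
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same version reported inside the container:
                                  # the container shares the host kernel
                                  # (isolated via namespaces and cgroups)
                                  # rather than booting a guest OS like a VM
```

That shared kernel is exactly why the virtualisation overhead is so small compared with a full VM.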
Let's say you install a combination of services X, Y and Z to complete development project A. Once project A is finished, it's a pain to have to clean up your system and prepare it for project B, which uses services V, X and Y. As we all know, systems slow down over time. You no longer have a clean base configuration from which to start and track configuration management as you implement the services.
This is one reason VMs have become popular. Once you have finished project A, you can throw away (or rather archive) the VM you used to develop with services X, Y and Z. But even with VMs, configuration management still involves unnecessary work. For project B you are installing services V, X and Y. Except you have just thrown away services X and Y, and now you are reconfiguring them for the project B development VM.
This is the regard in which my example is contrived, because partly to overcome the problem I described, and partly for reasons of best practice, many developers configure their services one per VM anyway. So you have a Node.js VM, a Cassandra DB VM, and so on. Except then the problems are that a) configuring each VM is a little process-heavy, b) the VMs themselves are over-specified once your problem has become "how do I run a single service?", and c) running multiple VMs on a single machine is memory-heavy. You want to get away from configuring detailed OS plumbing over and over for each service (though this is partly mitigated with pre-built images).
What this shows is that in this modern cloud-centric world, very often most of the services an OS provides are redundant (at least when we are only interested in it hosting a single service). We want the OS only insofar as it is a host for that single service.
Docker runs its "VM" purely to be the thinnest possible wrapper host for a service (actually, if the host OS had been built from scratch to match the Docker use case, it would be considerably lighter; to this extent Docker is pragmatic in its use of the host OS). So instead of configuring a VM with services, you have services that each live in their own little VM-light wrapper, and the wrapper doesn't care what OS it is slung onto.
So Docker makes the virtualised service a Lego-brick-like unit, with the lightest possible wrapper around the service. You can configure a set of these bricks and throw them onto any OS without worrying about whether they will run uniformly (and yes, you can run multiple containers on a single machine, which may itself be a VM).
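As a sketch of what one of these "lego bricks" looks like in practice, here is a minimal hypothetical Dockerfile wrapping a single Node.js service. The base image, port and file names are illustrative assumptions, not anything from the thread:

```dockerfile
# Hypothetical single-service brick: one container, one service.
FROM node:lts-alpine        # small base layer instead of a full OS install
WORKDIR /app
COPY package*.json ./
RUN npm install             # bake the service's dependencies into the image
COPY . .
EXPOSE 3000                 # the one port this brick serves
CMD ["node", "server.js"]   # the single service the wrapper exists to run
```

Everything the service needs is baked into the image, which is what makes the brick portable across hosts.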
The Docker environment allows the bricks to be grabbed in a similar fashion to pulling projects from GitHub. It also works, in some respects, like a package manager: it will pull multiple "lego bricks" in the way a package manager identifies and pulls dependent packages. Yes, it is pretty cool.
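The package-manager comparison can be sketched with a hypothetical compose file (service names and image tags here are illustrative assumptions): a single `docker compose up` pulls every missing brick from the registry, much as a package manager resolves and fetches dependent packages.

```yaml
# Hypothetical two-brick stack: `docker compose up` pulls any images
# not already present locally before starting them, the way a package
# manager pulls dependent packages.
services:
  app:
    build: .            # the application brick, built locally
    ports:
      - "3000:3000"
    depends_on:
      - db              # declare the dependency; compose fetches and starts it
  db:
    image: mongo:7      # the database brick, pulled straight from the registry
```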
I agree with the first post that this article has a very narrow, Enterprise-dependent view of how successful Docker is. Docker wasn't conceived as a tool for the Enterprise (at least as far as I am aware), though I am sure they are happy to see Enterprise adoption and are now finding many ways to encourage it. So it seems churlish to me to write about it as though it is failing in relation to the Enterprise, just as it would be churlish to say Bradley Wiggins has "failed" to win the 3:20 at Sundown Park.
Well I think that remark sums up the Cloud situation quite nicely.
There are, of course, a few companies for whom the Cloud is entirely justified, useful, and contributing to the ever-important bottom line.
The majority, however, are just fumbling around, toying with the tools, trying to wrap their heads around how to get that stuff into production in a useful way.
Right now, I'm against using Cloud services for security and confidentiality reasons, but I do believe that situation will improve with the current drive for encryption (despite the US government's best efforts). When data security is properly guaranteed by Cloud operators (and technically verifiable), I see no reason why companies couldn't have their production environment entirely cloud-based. That way, we could finally get rid of Windows in the workplace, go for a secure Linux setup using business-oriented browsers, and finally rid the world of Windows-based botnets and malware.
I'm hopeful, but now is not the time.
The majority, however, are just fumbling around, toying with the tools, trying to wrap their heads around how to get that stuff into production in a useful way.
What about " desperate to get stuff done without having to undergo IT procurement water torture"?
Over the years it has become apparent to me that in any large corporate, the entire business is in a lifecycle that comes to a slow close when the support functions (procurement, finance, HR etc - even IT) become progressively more powerful and less accountable, less responsive to their internal customers' needs. Then, in the name of efficiency and low cost, the support functions suffocate the business with byzantine delegated authority requirements, bureaucratic and unresponsive hiring and reward policies, procurement processes that take forever and then award contracts to charlatans that the business/IT managers wouldn't have allowed even to be considered, given the chance.
So as far as I can see, Docker and Cloud are IT-specific means of bureaucracy evasion, trying to avoid the "process sclerosis" of increasingly authoritarian support functions. Things are no better, and maybe worse, if you've outsourced your IT infrastructure, because bastards like HPE take forever to deliver anything and it costs the earth; so you either get the water torture, or get pillaged by your outsource "partner", or both.
So I'm in favour of cloud and the like, even noting the security concerns. Sadly, evading the bureaucracy doesn't make it go away, and it continues to throttle the business around which it has grown. And eventually the company ends up like Motorola, General Motors, Nokia phones, Microsoft, HP and many other dinosaurs that have disappeared, or are disappearing, up their own arse.
If Nokia Phones is 100% dead through process sclerosis, my own employer is about 85%. The screams of pain from the business have reached the main board, but they've still not woken up and understood that every man-jack in HR needs dismissing now, that the Procurement teams need to report to the MD of the business unit they support, and that IT and Finance need generous employee incentive schemes that are at least 70% reliant on performance assessed by senior managers in the supported business, not within their own silo.
The bizarre thing is that there's so much real value in good, responsive support services. But rather than recognise that value, the business focuses on cost, and then these support services hinder the business and even each other.
Sounds like you don't understand the question - sometimes a VM is best, other times a Container.
So often, some new technology comes along that's supposed to replace everything else. That's rarely the case: containers will just become part of the mix. It's our job to understand the best technology for the particular requirements.
Same as cloud vs own-infrastructure - it just depends. Neither is right all the time, but neither is wrong all the time either.
The real problem I see with the Cloud, VMs, and now Docker is not that they aren't very or even extremely useful in some situations, maybe even critical, but that they tend to be overhyped as the solution to all IT and end-user problems. Every IT solution, including bare-metal installs, has its strengths and weaknesses; as AMBxx noted, "It's our job to understand the best technology for the particular requirements."
... why did "developers" replace "programmers" in the GreatUnwashed's[tm] mindset?
Gut feeling? Management & marketing, neither of which has a clue about how computers & networking work, felt a need to re-arrange the perceived reality ...
I think Adam from "Mythbusters" and his tongue-in-cheek comment "I reject your reality and substitute my own" applies to most of marketing & management.
We're looking at using Docker as a form of replica, where the container cluster represents 50-something units of monitoring equipment at remote sites with intermittent comms. No safety or real-time stuff; five-to-ten-minute delays in connectivity are normal, but it's easier for on-premises tools to access a local "shadow" of the remote site for processing log data, queuing up tasks (which are delay-tolerant), etc. It's not mission-critical, it's a convenience.
if that works out for a year or so, maybe we'll look at it for other stuff that's more important .....
The copy-on-write fs and "OS-level version control" features look potentially useful. I could efficiently sync changes up to remote servers, detect intrusions and so on.... if I could get everyone organized and drinking the kool-aid. Otherwise I would just be creating a more fragmented patchwork infrastructure to maintain.
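For what it's worth, the copy-on-write and "version control" angle maps roughly onto commands like these. An illustrative transcript only, assuming a running container named `mycontainer` and push access to a registry at `registry.example.com` (both hypothetical names), and not a workflow recommendation:

```shell
# Illustrative only: requires a Docker host, a running container named
# "mycontainer", and push access to a registry.
docker diff mycontainer        # list files added/changed/deleted relative to
                               # the image, i.e. the copy-on-write layer
docker commit mycontainer registry.example.com/myapp:v2
                               # snapshot that layer as a new image
docker push registry.example.com/myapp:v2
                               # ship only the changed layers to the registry
```

Because images are layered, a push like this transfers only what changed, which is what makes syncing to remote servers cheap.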
Also, I've been migrating to FreeBSD. Does it support Docker? "Experimental". No thanks. And at this point it's not a compelling reason to endure Linux's decline.
At this rate Linux's decline will take the next 50 years so have patience.
FreeBSD certainly won't be taking over anytime soon. BSD had its chance in the early 90s; the "good enough" crowd won out a long time ago for wide-scale adoption (similar to Linux on the desktop).
(Writing this from my Linux laptop, desktop Linux user for the past 20 years, though I do use OpenBSD for my own personal network firewalls. And while I've never used systemd (that seems to be a reason popping up for people ditching Linux for FreeBSD recently), I'm sure I would not like it on my servers; on my mobile or my desktop/laptop I don't care. But maybe by the time I have systems that use it, it will have become usable; I don't know, right now I'm just not thinking about it.)
I normally have a good laugh at the Microsoft shills, but this isn't one, and I think there's a point here. The dumbing-down efforts of some of the Linux resellers are not making it better. "It just works" for Ubuntu peaked a couple of years ago and it's been getting worse since.
Of course, being Linux, one vendor can't bring it down: it's not single-sourced like Windows, so it can recover from even a Windows 8/10 clusterfuck. But it needs more care to select the right distribution. It used to be Red Hat for vendor support, Ubuntu for reliable leading edge, Debian for outdated-but-just-works, and the others for specialist or enthusiast uses. It's no longer that clear, and I'm interested to hear others' perspectives. It seems like Red Hat for corporate non-thinkers and Mint for Windows converts. But I'm not sure what the thinking user is supposed to do now that Debian has followed the systemd path before it's mature.
Wow. How to miss the point entirely.
Docker's sweet spot isn't containerising that complex monolithic app you've been maintaining for years on VMware; it's deploying small apps, such as those favoured by the microservices crowd, quickly and cheaply.
You get isolated runtime stacks cheaply, and lots of drop-and-restart fault tolerance and scaling. You don't get full isolation, and you have to suffer network, security and patching headaches. Often those trade-offs are worth it; sometimes not.
Totally agree. We are scaling in the AWS cloud and have one service whose load varies massively. The tier serving it is autoscaling, but it's Windows Server, so even with the scaling firing early on load across the load-balanced tier, it takes many minutes for a new image to be copied, fire up and load Windows, and that's without very much in the way of post-run configuration. Seven minutes when you've just been hit by 10k requests is dead in the water. Containers would allow us to spin up these small services much faster, in seconds, and could even allow one to be allocated per queued event as it is pulled: one for one! Anything that handles bursts of workload is hard to manage, or over-scaled at significant cost, which defeats the object of variable cloud. Windows containers are still in beta or we would definitely be using them. Yes, Linux may be faster, but the best solution seems to be micro-containers. Elastic Beanstalk, the AWS autoscaling tool, essentially does the same as we do in allocating servers, so that's little help in this use case.
I tend to regard anyone from The Home of Flash writing about DevOps with some caution ...
I think I've disagreed with just about everything Matt Asay has written on this site, but seeing his current byline, I started wondering whether Adobe is trying to launch something containerish...
Vic.
Cloud is not a good fit for any workloads but those that are specifically designed with it in mind.
From the beginning, what people have been calling cloud has been nothing more than an automated provisioning system for managed hosting.
And the TW*TS, I mean developers, should fuck*ng learn a minimum about how the OS they are targeting works.
And above all, don't be scared to explain to the sysadmins what it is you are freak*ng trying to achieve; we want to help!
So Docker will, I hope, try to avoid being a bridging technology. We have porky VMs for that.
A modular approach, with lightweight automated ops on a commoditised, fine-grained utility platform, is obviously the way forward. Give it twenty years and a business-disruption cycle and all this will be seen as academic. Some of us are already building this componentry.
Bye bye Oracle and friends. FTW.
What is El Reg 2.0 going to do now??? I was wondering if the tagline had been changed to "Biting the hand that feeds Docker". Less Docker, less VM and less Flash (NAND, meh), and more real sysadmin, please. Leave the containers to Maersk.
Anonymous cos, well, downvotes :(