back to the future
So BeOS had it right then in its application packaging: none of this shared library nonsense, everything in one package in its own directory. BTW, what happened to "death of COBOL" on mugs?
The container is doomed, killed by serverless. Containers are killing Virtual Machines (VM). Nobody uses bare metal servers. Oh, and tape is dead. These, and other clichés, are available for a limited time, printed on a coffee mug of your choice alongside a complimentary moon-on-a-stick for $24.99. Snark aside, what does the …
None of this shared library nonsense, everything in one package in its own directory
Which was (sort of) already done on the Acorn Archimedes - I think that was the first time I used something where what looked like an application was, in fact, a special sort of directory that contained everything that the application needed to run.
As is (again, sort of) done on macOS today.
A container might run as a process on the host, but a process is not necessarily a container.
Although their popularity has grown recently, containers really aren't that new. Docker was first released in 2013, but the underlying kernel primitives hit Linux in 2006.
It's really not just a marketing thing.
That said, although Docker (and other containers) have their place, it gets abused IMO. Creating a Docker image can massively simplify deployment (which is good for eng) but can create an absolute maintenance nightmare for operations.
It also requires a bit of additional care in terms of managing your build pipeline. Yes, you can send me an image to deploy and I can spin it up. But, where did you build it? If you're hit by a bus, can I build a new image (to integrate patch x), or is it in fact going to turn out you built it on your laptop without documenting what needs to be available for the build?
That can be an issue without Docker, but IME it happens less, because you tend to package the software either with the dependencies it needs, or with a well-defined list inside an RPM or Deb.
Personally, I start to twitch whenever someone says "we could use Docker" unless that's followed by a justification of why a container is actually needed. We can trivially spin up cattle with Ansible/Puppet without the need for Docker, so for Docker to make it into the implementation you need to be able to justify it. There are sometimes valid reasons; "It makes it easier for me to include this obscure dependency" isn't one of those IMO.
And that's before I start on the issues I have with Docker itself as a project.
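For what it's worth, the "cattle with Ansible" approach being defended here can be as small as a playbook along these lines (a sketch only — the group name and package are invented for illustration):

```yaml
# Hypothetical playbook: configure a herd of identical "cattle" hosts
# with no container runtime involved. Group and package names are examples.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx from the distro repo (a versioned RPM/Deb)
      package:
        name: nginx
        state: present
    - name: Ensure the service is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

The point being made: the dependency list lives in the distro package and the playbook, both of which are documented and reproducible, rather than baked into an image built on someone's laptop.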
We can trivially spin up cattle with Ansible/Puppet without the need for Docker, so for Docker to make it into the implementation you need to be able to justify it.
It's far better than our older architecture of KVM VMs and CFEngine. Everything that went into CFEngine, even if it was about the structure of the application and code, had to be approved by a sysadmin before the software teams could apply it.
It's easier to manage and distribute workloads with a well-structured k8s/docker/terraform/vagrant/CI setup. Developers get more control over how the software is structured — and if it breaks, it's on them, not the sysadmin, and it can be rolled back trivially. k8s manages the haproxy routing of requests to containers automagically, so there are no manual configuration-file changes when we add an extra host; it JFDI. It makes it much simpler to do red-green deployments, or gradual rollout of new features — things that were harder or impossible with the old system.
If you are just using Docker for the hell of it, no, it's not a good solution. There's lots to learn and implement, and if you cba to put the effort in, you're not going to get good results. It's not enough to say "Use Docker"; there are at least ten other parts of the infrastructure to set up and use.
My senior management have been really confused when I've mentioned our data centre costs because they assumed we had no servers and everything was virtualised. They've heard we are no longer using "bare metal" somewhere and took it literally, whilst obviously the VMs have to run on something..
No, surely you are now serverless. The whole serverless room is just full of empty racks, heck even the switches are pure software now, no need for pesky cables anymore.
Just row after row of empty racks delivering everything the consumer (we used to call them users) need.
We hope to roll out ethereal (TM) computers soon, no need for clumsy things like keyboards, mice, and screens, just run by the power of notional thought (another TM)
We hope to roll out ethereal (TM) computers soon, no need for clumsy things like keyboards, mice, and screens, just run by the power of notional thought (another TM)
Steve Bong, you're back! I've been wondering where you'd gotten to. Sounds like an absolutely wonderful idea for a new catapult. We'll start organizing the funding right away.
But rather than "notional thought," wouldn't "Thinkfluence" be a better term?
-Theresa
This post has been deleted by its author
Does that mean that monitoring tools are obsolete for containers, too?
Yes.
And no.
Maybe.
And, just maybe, by monitoring it, you bring it into being. Therefore, you don't need all that expensive squishy meatbag programmer stuff - just set up the monitoring and, by the power of positive thought, your solution comes into being.
Whee! These drugs are good!
Thank you. This:
"Serverless, meanwhile, is also no big threat to containers. It is a stupid name for the technology in question, which is really something between a batch script and a TSR that you run on a cloud."
is the best description of 'serverless' I've come across thus far, and I fully intend to use it the next time I'm interviewed by buzzword-spewing imbeciles from middle damagement.
Containers? chroot on steroids with a network layer slapped on top.
What's old is new again - both of these things existed, albeit in a slightly different form, when I was at university in the 90s.
Bill Hicks was right - if you're in marketing, just do the world a favour and kill yourself.
MS are in with a good shout for those that don't want to go the containers route and aren't bothered about platform-agnostic solutions: "standard" apps coded in Windows and used on premises work fine ported to Azure (with a few caveats about what is not allowed on SQL Server on Azure vs "normal" SQL Server).
"automated workload baselining, instrumentation, isolation and incident response"
Sure, that's most of what you want. But not _all_. Logging perhaps?
Thing about a single-process container is you have to have all that stuff outside the container. With a VM you have a ton of tools to fill those roles.
Sometimes an awk script on a log makes a good quick alarm system.
Nagios monitors generally require a scripting framework.
Boot options often need sh.
Apps are not just processes; GNU tools are quick and easy.
I don't see anyone hitting the sweet spot between light container and single process. It will be too restrictive. Snaps try; I don't like them.
Anyone that isolates processes for security but lets all your quick and easy hacks 'n' fixes still work might find a big niche — e.g. the main proc in a container with a full OS on the outside instead of just ESX.
The big players probably don't hack at their processes much, cos they have so many. Anyone with one, two... four instances of their core app process probably has a lot of tooling around them doing logging, bespoke monitors, a couple of batch jobs that fork a script, etc.
Containerised processes with a full OS outside could be cool. We've had chroots for ages. And process isolation turns out to be harder than we thought — ref Meltdown etc.
Containers without tools are hard work now; perhaps they always will be?
I might have chewed through a birthday recently, and might have spent far too many years in IT of various ilks, but I believe in the 'growing up and growing old have nothing to do with one another' axiom.
I've QB'ed Linux onto the DC floor for my employer, from 2 to over 4K installations: from 'data appliance' to 'blades' to servers, to VMs, clusters, storage farms, server farms and DB workhorses, webservers, application servers, integration and file servers.
"Containers" change a few things. Mostly, however I think the single largest issue with containers is that the expectation of what containers *are* and *can do for us* has a huge amount of variance, from the Dev's who seem to think that it will allow them to shorten development cycles, to operations who think it will remove the need to make sure things work to platform folks who think it will make it possible to keep hardware at 80% to management who are convinced that 90% of the systems could be collapsed on to two DL580s running containers to beancounters who see the Open Source tag and decide that it will be free.
Sadly, the entire lot of them are utterly wrong on all fronts. I will point out that *if* the devs sat down together and decided to work on some common ground, the development cycles would get shorter. The ops folks should sit down and decide what amounts to an acceptable set of performance limits on the critical apps (Tx/s perhaps? *something* fcs — or my continual question: what is too slow?). The platform folks need to decide what level of risk they are willing to assume and how much app performance they are willing to sacrifice to a failure of hardware, and management and the beancounters need to sit down and decide what they are willing to pay for application performance.
I see a difference in development style between 'dedicated application environment' and 'containers' - i.e. scale up vs scale out - and I see a substantial change in application design between the two.
The major advantage, in my book, is that once you've made the mental, logical and process shifts required to go containers, you *should* have a single pane of glass somewhere with all your relevant metrics, displayed clearly so that one can see where your business flow is broken. The "change" to containers is not something that will happen across an entire business overnight, but it *may* have a substantial impact on overall business functionality very quickly once started.
A couple of Docker containers are pretty easy to understand and manage. Lashing them together with k8s, persistent storage, getting devs to learn it, etc. is HARD. Red Hat will gladly sell you $12K/year per-server subscriptions for OpenShift and fly in their consulting army to help you, and companies are buying it. But I'm glad to see others, like Docker Inc, making it easier to use and less expensive. There is a lot of investment going on in this area, and it will be fun to see what comes of it.