Containers everywhere! Getting started with Docker

Docker is the name on the tip of many tongues at the moment. It is a containerisation engine which allows you to package up an application along with all the settings and software required to run it and deploy it to a server with a minimum of fuss. So where did this idea come from? Shipping containers! Shipping containers …

  1. Tim Brown

    Containers, otherwise known as installing apps for dummies.

    I had a look at Docker when all the hype started about a year or so ago. It certainly makes installing things very easy, so you don't need to know anything about dependencies within a system. But is this ignorance really a good thing?

    The major issue is that it completely cuts you off from the normal security updates of your chosen Linux distribution: you're reliant on your container maintainer (or mass of chained-in container maintainers) to provide an update in a timely fashion.

    A smaller issue is that the layered file system structure used by containers can grow to be very inefficient.
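
    (For illustration: each Dockerfile instruction bakes another immutable layer into the image, so files deleted in a later step still ship inside an earlier layer. A minimal sketch - the base image is real, the build step is hypothetical:)

        # Each RUN below bakes its own immutable layer into the image.
        FROM debian:jessie
        RUN apt-get update && apt-get install -y gcc   # big layer: toolchain
        RUN gcc -o /usr/local/bin/hello /src/hello.c   # hypothetical build step
        # The purge below only adds "whiteout" markers in a new layer;
        # the gcc layer above still ships with the image.
        RUN apt-get purge -y gcc
        # Folding install, build and purge into one RUN would keep the
        # toolchain out of the final image entirely.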

    1. Frumious Bandersnatch

      Re: Containers, otherwise known as installing apps for dummies.

      re: completely cuts you off from the normal security updates

      Apparently the correct way to do this would be to create a new container that has the most recent version of the software and to (somehow) migrate the old data to it. That link makes the point that updating the software in a container is bad practice.

      I think that there's a bigger issue around trusting someone else's docker recipe for encapsulating something. The page here recommends "only trusting docker images you build yourself". I think that's very good advice.
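
      (A minimal sketch of the build-it-yourself approach - the tag "myapp" and the build context are hypothetical:)

        # Build from your own Dockerfile rather than pulling an unknown image:
        docker build -t myapp .

        # Rebuild later to pick up the base image's security updates:
        # --pull refreshes the base image, --no-cache re-runs every step
        # instead of reusing stale cached layers.
        docker build --pull --no-cache -t myapp .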

      1. Marco Fontani

        Re: Containers, otherwise known as installing apps for dummies.

        Apparently the correct way to do this would be to create a new container that has the most recent version of the software and to (somehow) migrate the old data to it.

        Well, you wouldn't have your data in the same docker instance that has the apps. That's a recipe for data loss and disaster, as docker containers' data is ephemeral - unless you use a "data" container :)

        You'd then want to have two containers: one "app" container, and one "data" container (below the turtles, that's just a directory...). You then launch the "app" container, telling Docker to mount the "data" container inside it.

        When you need to upgrade or whatever, it's only a matter of updating the "app" container, and assuming the updated version can work with the old data, you're set.

        Think about a MySQL Docker instance. The "app" Dockerfile would contain mysqld and the static my.cnf, whereas the "data" container would contain the equivalent of /var/lib/mysql - the actual data. Need to migrate from v5.0.0 to v5.0.1? No problem: shut down the app instance for 5.0.0, create a new container with 5.0.1, and start it with the data container you used for 5.0.0. If the upgrade process works, you're set ;)
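
        (As a sketch, using the "--volumes-from" data-container pattern; the mysql:5.0.0 / mysql:5.0.1 tags just mirror the hypothetical versions above:)

          # A throwaway "data" container whose only job is to own the volume:
          docker create -v /var/lib/mysql --name mysql-data mysql:5.0.0 /bin/true

          # The "app" container mounts its data from the container above:
          docker run -d --name mysql-app --volumes-from mysql-data mysql:5.0.0

          # Upgrade: replace only the app container - the data never moves.
          docker stop mysql-app && docker rm mysql-app
          docker run -d --name mysql-app --volumes-from mysql-data mysql:5.0.1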

        1. Anonymous Coward

          Re: Containers, otherwise known as installing apps for dummies.

          I think you all missed this from the OP:

          " you're reliant on your container maintainer (or mass of chained in container maintainers) to provide an update in a timely fashion."

  2. Frumious Bandersnatch

    much more seriously ...

    From the official Docker docs:

    Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details.

    First of all, only trusted users should be allowed to control your Docker daemon. [...]
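
    (To see why that matters, note that anyone who can reach the daemon is effectively root on the host - a sketch, using a stock Debian image:)

        # Any user who can talk to the Docker daemon can bind-mount the
        # host's root filesystem into a container and modify it as root:
        docker run -it --rm -v /:/host debian chroot /host /bin/sh
        # So treat membership of the "docker" group as handing out root.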

  3. Ken Moorhouse

    Resources

    Each container needs to claim RAM and CPU-cycle resources from the OS, which I would have thought are not capable of being readily containerised. CPU cycles in particular are an issue on a machine where several interdependent processes are running. I can imagine a Dockerised environment behaving differently from a regular one, possibly breaking certain apps, if containers are scheduled to run for a certain period, or until a milestone is reached, and are then suspended. Applications and services on a regular OS may work this way too, but an application can at least tell the OS to hold off from interfering with a critical calculation while it is in progress. Docker needs to be aware of these application-specific requirements.

    There must be some kind of configuration for Docker containers which sets priorities and rules around these kinds of service interrupts. Such rules must surely be more complex to define and enforce than if the processes were not containerised, and may need to dip into the innards of the container to evaluate the differences.
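
    (For reference, a sketch of the knobs Docker does expose - they are enforced by the kernel's cgroups rather than by Docker dipping into the container; the image name "myapp" is hypothetical:)

        # Limits are declared per container at run time. Processes inside
        # remain ordinary processes under the normal kernel scheduler -
        # there is no stop-the-container time slicing.
        #   --memory       hard RAM cap
        #   --cpu-shares   relative CPU weight when cores are contended
        #   --cpuset-cpus  pin the container to specific cores
        docker run -d --memory 512m --cpu-shares 512 --cpuset-cpus 0,1 myapp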
