How to handle security updates within Docker containers?

Solution 1

A Docker image bundles the application and the "platform"; that's correct. But usually the image is composed of a base image and the actual application layered on top of it.

So the canonical way to handle security updates is to update the base image, then rebuild your application image.
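
A minimal sketch of that workflow, with placeholder names (debian:bookworm-slim as the base image, app/ and myapp as the application path and tag): the application image is layered on top of a standard base image, and passing --pull to docker build re-fetches the latest version of that base image before rebuilding.

    # Dockerfile (hypothetical application image)
    FROM debian:bookworm-slim

    # Apply pending distribution updates at build time, so every rebuild
    # picks up the latest patched packages from the repositories.
    RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

    COPY app/ /opt/app/
    CMD ["/opt/app/start.sh"]

    # On the build host: rebuild against the freshest base image.
    docker build --pull -t myapp:latest .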

Solution 2

Containers are supposed to be lightweight and interchangeable. If your container has a security problem, you rebuild a patched version of the image and deploy the new container. (Many containers use a standard base image and standard package management tools such as apt-get to install their dependencies; rebuilding will pull the updates in from the repositories.)

While you could patch inside containers, that's not going to scale well.
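
A rough rebuild-and-redeploy cycle might look like the sketch below (the names myapp and myapp-prod are placeholders; in practice this is usually driven by a script or CI job rather than typed by hand):

    # Pull the latest base image and rebuild the application image on top of it.
    docker pull debian:bookworm-slim
    docker build -t myapp:latest .

    # Replace the running container with one based on the freshly built image.
    docker stop myapp-prod
    docker rm myapp-prod
    docker run -d --name myapp-prod myapp:latest

    # Optionally clean up the now-dangling old images.
    docker image prune -f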

Solution 3

This is handled automatically in SUSE Linux Enterprise using zypper-docker(1):

SUSE/zypper-docker

Docker Quick Start
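
Very roughly, and with placeholder image names (the subcommands below are the ones described in the zypper-docker README), the tool lists and applies pending patches to an image without a full manual rebuild:

    # List the patches that apply to an image:
    zypper-docker list-patches opensuse/leap:latest

    # Apply them, producing a new, patched image:
    zypper-docker patch opensuse/leap:latest opensuse-leap:patched

    # List running containers that are based on outdated images:
    zypper-docker ps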

Comments

  • Markus Miller, almost 2 years

    When deploying applications onto servers, there is typically a separation between what the application bundles with itself and what it expects the platform (operating system and installed packages) to provide. One point of this is that the platform can be updated independently of the application. This is useful, for example, when security updates need to be applied urgently to packages provided by the platform, without rebuilding the entire application.

    Traditionally, security updates have been applied simply by executing a package manager command to install updated versions of packages on the operating system (for example "yum update" on RHEL). But with the advent of container technology such as Docker, where container images essentially bundle both the application and the platform, what is the canonical way of keeping a system with containers up to date? Both the host and the containers have their own independent sets of packages that need updating, and updating on the host will not update any packages inside the containers. With the release of RHEL 7, where Docker containers are especially featured, it would be interesting to hear what Red Hat's recommended way of handling security updates for containers is.
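
    A quick way to see that independence, assuming Debian-based systems and using libssl3 purely as an example package (the container name "web" is hypothetical): the two version numbers below can differ even after the host has been fully updated.

        # Package version on the host:
        dpkg -s libssl3 | grep '^Version'

        # Package version inside a running container; it does not change
        # when the host is updated:
        docker exec web dpkg -s libssl3 | grep '^Version'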

    Thoughts on a few of the options:

    • Letting the package manager update packages on the host will not update packages inside the containers.
    • Having to regenerate all container images to apply updates seems to break the separation between the application and the platform (updating the platform requires access to the application build process which generates the Docker images).
    • Running manual commands inside each of the running containers seems cumbersome and changes are at risk of being overwritten the next time containers are updated from the application release artifacts.

    So none of these approaches seems satisfactory.

    • Michael Hampton, almost 10 years
      The best idea for this I've seen so far is Project Atomic. I don't think it's quite ready for prime time though.
    • Steven Confessore, over 9 years
      Valko, what workflow did you end up with? I'm running long-term containers (hosting php-cgi, for instance), and what I've found so far is: docker pull debian:jessie to update the image, then rebuild my existing image(s), then stop the containers and run them again (with the new image). The images I build have the same names as the previous ones, so starting them is done via the script. I then remove the "unnamed" images. I would surely appreciate a better workflow.
    • Markus Miller, over 9 years
      miha: That sounds similar to what I have ended up doing: basically continuously updating and rebuilding all images as part of making new releases, and restarting the containers using the new images.
    • Hudson Santos, over 8 years
      The best answer here helps a lot, because there is a script which contains the main command lines to do exactly what Johannes Ziemke said:
    • Dalibor Filus, almost 5 years
      Interesting question; I wonder about it myself. If you have 20 applications running on one Docker host, you have to upgrade the base images, then rebuild and restart all 20 applications, and you don't even know whether the security update affected all of them or just one. You end up rebuilding the image for, say, Apache when the security update only affected libpng. So you get unnecessary rebuilds and restarts...
    • ʇsәɹoɈ, over 3 years
      I don't have the answer, but in case anyone wants a simple script that can help automate checking for base image updates: dockcheck
  • Markus Miller, over 9 years
    Thanks, this sounds reasonable. I just wish that updating the platform, so to speak, didn't have to trigger repackaging the entire application (consider, for example, having to rebuild 100 different application images because a single base image got updated). But maybe this is inevitable with the Docker philosophy of bundling everything together inside a single image.
  • dlyk1988, over 9 years
    @ValkoSipuli You could always write a script to automate the process.
  • Arthur Kay, over 8 years
    Why not run apt-get upgrade, dnf upgrade, pacman -Syu, or the equivalent inside the container? You could even create a shell script that does that and then runs the application, and use it as the container's entrypoint, so that when the container is started/restarted it upgrades all of its packages.
  • Johannes 'fish' Ziemke, over 8 years
    @ArthurKay Two reasons: 1) you blow up the container size, since all packages that get upgraded are added to the container layer while the outdated packages are kept in the image; 2) it defeats the biggest advantage of (container) images: the image you run is no longer the same one you built and tested, because you change packages at runtime.
  • Sentenza, over 6 years
    There's one thing I don't understand: if you are a company buying a piece of software that is shipped as a Docker container, do you have to wait for the manufacturer of the software to rebuild the application package every time a security issue comes out? Which company would give up control over their open vulnerabilities that way?
  • tim, over 5 years
    Is it OK to leave the outdated base image there? I mean, Docker is a container anyway; it is separated from the host OS.