Systemd fails to run in a docker container when using cgroupv2 (--cgroupns=private)

Solution 1

tl;dr

It seems to me that this use case is not explicitly supported yet. You can almost get it working but not quite.

The root cause

When systemd sees a unified cgroupfs at /sys/fs/cgroup, it assumes it can write to it. Normally that is possible, but it is not the case here.
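
You can reproduce what systemd sees with a throwaway container (a quick sketch; the image and options are only examples):

# Inspect the filesystem type and mount flags of /sys/fs/cgroup inside a container
docker run --rm --cgroupns=private debian:bullseye \
    sh -c 'stat -fc %T /sys/fs/cgroup; grep " /sys/fs/cgroup " /proc/self/mounts'
# "cgroup2fs" as the type plus "ro" in the mount options is exactly the situation described here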

The basics

First of all, you need to create a systemd slice for Docker containers and tell Docker to use it. My current /etc/docker/daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "features": { "buildkit": true },
  "experimental": true,
  "cgroup-parent": "docker.slice"
}

Note: Not all of these options are necessary. The most important one is cgroup-parent. The cgroup driver should already be set to "systemd" by default.
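
To confirm the daemon actually picked the settings up (a sketch; these template fields exist in Docker 20.10+ as far as I know):

# Show the cgroup driver and cgroup version the daemon is using
docker info --format 'cgroup driver: {{.CgroupDriver}}, cgroup version: {{.CgroupVersion}}'
# On a cgroupv2 host with the systemd driver this should report "systemd" and "2"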

Each slice gets its own nested cgroup. There is one caveat though: each group can only be a "leaf" or an "intermediary". Once a process takes ownership of a cgroup, no other process can manage it. This means that the actual container process needs, and will get, its own private group attached below the configured one in the form of a systemd scope.
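
You can see that nesting on the host while a container is running (a sketch; docker.slice matches the daemon.json above, and the exact scope naming may vary with your setup):

# List the scopes docker has created below the configured parent slice
ls -d /sys/fs/cgroup/docker.slice/docker-*.scope
# Each running container should appear as its own docker-<id>.scope leaf under the slice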

Reference: see the systemd documentation on resource control, cgroup namespace handling, and delegation.

Note: At this point the docker daemon should use --cgroupns private by default, but you can force it anyway.
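
If you prefer to be explicit rather than rely on the default, you can force it per container or daemon-wide (a sketch; default-cgroupns-mode is the daemon option that I believe controls this):

# Per container (image name is just an example); a private namespace prints a single "0::/" line
docker run --rm --cgroupns=private debian:bullseye cat /proc/self/cgroup

# Or daemon-wide, by adding this key to /etc/docker/daemon.json:
#   "default-cgroupns-mode": "private"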

Now a newly started container will get its own group which should be available in a path that (depending on your setup) resembles:

/sys/fs/cgroup/your_docker_parent.slice/your_container.scope

And here is the important part: you must not mount a volume into the container's /sys/fs/cgroup. The private group mentioned above should get mounted there automatically.
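
To double-check which group a running container actually ended up in, you can look it up from the host (a sketch; your_container is a placeholder name):

# Resolve the container's init PID and read its cgroup membership on the host
pid=$(docker inspect -f '{{.State.Pid}}' your_container)
cat /proc/$pid/cgroup
# The single "0::/..." line should point below the docker-<id>.scope path shown above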

The goal

Now, in theory, the container should be able to manage this delegated, private group almost fully by itself. This would allow its own init process to create child groups.

The problem

The problem is that the /sys/fs/cgroup path in the container gets mounted read-only. I've checked the AppArmor rules and switched seccomp to unconfined, to no avail.
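
The symptom is easy to reproduce from a shell inside such a container (a sketch of what I'd expect to see, given the read-only mount):

# With proper delegation, PID 1 could create child groups like this:
mkdir /sys/fs/cgroup/init.scope
# ...but with the read-only mount this fails with "Read-only file system",
# which is the same error systemd prints when it aborts at boot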

The hypothesis

I am not completely certain yet - my current hypothesis is that this is a security feature of docker/moby/containerd. Without private groups it makes perfect sense to mount this path read-only.

Potential solutions

What I've also discovered is that enabling user namespace remapping causes the private /sys/fs/cgroup to be mounted with rw as expected!
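
Remapping is enabled in the daemon configuration (a sketch; "default" makes dockerd create and use the dockremap user, and the daemon needs a restart afterwards):

{
  "userns-remap": "default"
}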

This is far from perfect though - the cgroup mount (among others) has the wrong ownership: it is owned by the real system root (UID 0) while the container has been remapped to a completely different user. Once I manually adjusted the owner, the container was able to start a systemd init successfully.
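
For reference, the manual fix-up I mean looks roughly like this, run on the host (a sketch; the scope path and the 100000 offset are only examples, the real offset comes from /etc/subuid):

# Hand the container's delegated cgroup over to the remapped root user
chown -R 100000:100000 /sys/fs/cgroup/docker.slice/docker-<container_id>.scope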

I suspect this is a deficiency of docker's userns remapping feature and might be fixed sooner or later. Keep in mind that I might be wrong about this - I did not confirm.

Discussion

Userns remapping has a lot of drawbacks, and the best possible scenario for me would be to get the cgroupfs mounted rw without it. I still don't know if this is done on purpose or if it's some kind of limitation of the cgroup/userns implementation.

Notes

It's not enough that your kernel has cgroupv2 enabled. Depending on the Linux distribution, the bundled systemd might prefer to use v1 by default.
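
A quick way to check what the host is actually running (a sketch):

stat -fc %T /sys/fs/cgroup
# "cgroup2fs" means the unified v2 hierarchy; "tmpfs" means v1 or the hybrid layout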

You can tell systemd to use cgroupv2 via a kernel command line parameter:
systemd.unified_cgroup_hierarchy=1

It might also be necessary to explicitly disable hybrid cgroupv1 support to avoid problems, using: systemd.legacy_systemd_cgroup_controller=0

Or completely disable cgroupv1 in the kernel with: cgroup_no_v1=all

Solution 2

For those wondering how to solve this with the kernel command line:

# echo 'GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX systemd.unified_cgroup_hierarchy=false"' > /etc/default/grub.d/cgroup.cfg
# update-grub

This creates a "hybrid" cgroup setup, which makes the host cgroup v1 available again for the container's systemd.
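
After a reboot you can verify that the hybrid layout took effect (a sketch):

cat /proc/cmdline   # should now contain systemd.unified_cgroup_hierarchy=false
mount | grep cgroup
# In hybrid mode /sys/fs/cgroup is a tmpfs holding the v1 controllers,
# with an additional cgroup2 mount at /sys/fs/cgroup/unified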

https://github.com/systemd/systemd/issues/13477#issuecomment-528113009

Solution 3

Thanks to @pinkeen's answer, here are my Dockerfile and command line, which work fine for me. I hope this helps:

FROM debian:bullseye
# Using systemd in docker: https://systemd.io/CONTAINER_INTERFACE/
# Make sure cgroupv2 is enabled. To check this: cat /sys/fs/cgroup/cgroup.controllers
ENV container docker
STOPSIGNAL SIGRTMIN+3
VOLUME [ "/tmp", "/run", "/run/lock" ]
WORKDIR /
# Remove unnecessary units
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
  /etc/systemd/system/*.wants/* \
  /lib/systemd/system/local-fs.target.wants/* \
  /lib/systemd/system/sockets.target.wants/*udev* \
  /lib/systemd/system/sockets.target.wants/*initctl* \
  /lib/systemd/system/sysinit.target.wants/systemd-tmpfiles-setup* \
  /lib/systemd/system/systemd-update-utmp*
CMD [ "/lib/systemd/systemd", "log-level=info", "unit=sysinit.target" ]

Then build and run it:

docker build -t systemd_test .
docker run -t --rm --name systemd_test \
  --privileged --cap-add SYS_ADMIN --security-opt seccomp=unconfined \
  --cgroup-parent=docker.slice --cgroupns private \
  --tmpfs /tmp --tmpfs /run --tmpfs /run/lock \
  systemd_test

Note: you MUST use Docker 20.10 or above, and your system must have cgroupv2 enabled (check that /sys/fs/cgroup/cgroup.controllers exists).
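
Both preconditions are easy to verify (a sketch):

docker version --format '{{.Server.Version}}'                   # must be 20.10 or newer
test -f /sys/fs/cgroup/cgroup.controllers && echo "cgroupv2 OK"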

Comments

  • Stephen
    Stephen almost 2 years

    I will attach the minimized test case below. In short, it is a simple Dockerfile that has these lines:

    VOLUME ["/sys/fs/cgroup"]
    CMD ["/lib/systemd/systemd"]
    

    It is a debian:buster-slim based image, and it runs systemd inside the container. Effectively, I used to run the container like this:

    $ docker run  --name any --tmpfs /run \
        --tmpfs /run/lock --tmpfs /tmp \
        -v /sys/fs/cgroup:/sys/fs/cgroup:ro -it image_name
    

    It used to work fine before I upgraded a bunch of host Linux packages. The host kernel/systemd now seems to default to cgroup v2; before, it was cgroup v1. It stopped working. However, if I give the kernel option so that the host uses cgroup v1, then it works again.

    Without giving the kernel option, the fix was to add --cgroupns=host to docker run, in addition to mounting /sys/fs/cgroup as read-write (:rw in place of :ro).
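
    Spelled out, that workaround is roughly the following (same image as in my example above):

    $ docker run --name any --tmpfs /run --tmpfs /run/lock --tmpfs /tmp \
        --cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup:rw -it image_name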

    I'd like to avoid forcing the users to give the kernel option. Although I am far from an expert, forcing the host namespace for a docker container does not sound right to me.

    I am trying to understand why this is happening, and figure out what should be done. My goal is to run systemd inside a docker, where the host follows cgroup v2.

    Here's the error I am seeing:

    $ docker run --name any --tmpfs /run --tmpfs /run/lock --tmpfs /tmp \
        -v /sys/fs/cgroup:/sys/fs/cgroup:rw -it image_name
    systemd 241 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
    Detected virtualization docker.
    Detected architecture x86-64.
    
    Welcome to Debian GNU/Linux 10 (buster)!
    
    Set hostname to <5e089ab33b12>.
    Failed to create /init.scope control group: Read-only file system
    Failed to allocate manager object: Read-only file system
    [!!!!!!] Failed to allocate manager object.
    Exiting PID 1...
    

    It does not look right, and especially this line seems suspicious:

    Failed to create /init.scope control group: Read-only file system
    

    It seems like there should have been something before /init.scope. That was why I reviewed the docker run options and tried the --cgroupns option. If I add --cgroupns=host, it works. If I mount /sys/fs/cgroup as read-only, then it fails with a different error, and the corresponding line looks like this:

    Failed to create /system.slice/docker-0be34b8ec5806b0760093e39dea35f4305262d276ecc5047a5f0ff43871ed6d0.scope/init.scope control group: Read-only file system
    

    To me, it looks like the docker daemon/engine fails to configure XXX.slice or something like that for the container. I assume that docker may be to some extent responsible for setting up the namespace, but something is not going well. However, I can't be sure at all. What would be the issue/fix?

    The Dockerfile I used for this experiment is as follows:

    FROM debian:buster-slim
    
    ENV container docker
    ENV LC_ALL C
    ENV DEBIAN_FRONTEND noninteractive
    
    USER root
    WORKDIR /root
    
    RUN set -x
    
    RUN apt-get update -y \
        && apt-get install --no-install-recommends -y systemd \
        && apt-get clean \
        && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
        && rm -f /var/run/nologin
    
    RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
        /etc/systemd/system/*.wants/* \
        /lib/systemd/system/local-fs.target.wants/* \
        /lib/systemd/system/sockets.target.wants/*udev* \
        /lib/systemd/system/sockets.target.wants/*initctl* \
        /lib/systemd/system/sysinit.target.wants/systemd-tmpfiles-setup* \
        /lib/systemd/system/systemd-update-utmp*
    
    VOLUME [ "/sys/fs/cgroup" ]
    
    CMD ["/lib/systemd/systemd"]
    

    I am using Debian. The docker version is 20.10.3 or so. A Google search told me that docker supports cgroup v2 as of 20.10, but I don't actually understand what that "support" means.

    • pinkeen
      pinkeen over 3 years
      I have actually encountered the exact same problem. I had been convinced I was running cgroupv2 the whole time and wondered why systemd inside the container could not create its own user namespaces. The goal is to actually use v2 in order to get the most functionality out of the system inside the container. I figured out that the host's systemd was not using v2 - and by extension neither was docker - enabled it, and everything stopped working. I will test in a moment, but it seems systemd inside the container needs to be told to switch to v2.
    • pinkeen
      pinkeen over 3 years
      I need to get back to reading the cg v1/v2 technical documentation top-to-bottom because I feel lost. According to my understanding the cgroupns private mode should create a private group for the container's init. It seems not to do that. I think you should not mount /sys/fs/cgroup at all, it should be populated automatically. For reference see Rootlesskit docs here: github.com/rootless-containers/rootlesskit/blob/master/…. Also podman option docs seem to be quite insightful: docs.podman.io/en/latest/markdown/…
    • pinkeen
      pinkeen over 3 years
      Systemd containers work with podman OOTB, I already tried it a long time ago. I would gladly use podman for my own purposes, but I want to create a solution that will feel familiar to everybody, so compatibility-wise docker seems to be preferred... There's also LXC/LXD but it has a very different approach and its selling point is support for both (or almost all) types of workloads.
    • pinkeen
      pinkeen over 3 years
      I will try setting up docker in a clean VM from scratch, because I've got a feeling this system might be misconfigured (it's Debian Buster but the Proxmox flavour). Even Docker for Mac does not support cgroupv2 (at least not in the stable version).
    • Stephen
      Stephen over 3 years
      Thank you for the comments as well as the answer! Somehow, I could find time to get back to this thread today. I'll read thoroughly your comment as well as the answer. Before proceeding, I wanted to say "Thank you!"
    • mviereck
      mviereck over 2 years
      You can use nsenter and mount to change the :ro permission of /sys/fs/cgroup in the container to :rw. See this very comprehensive post on github: github.com/mviereck/x11docker/issues/…
  • pinkeen
    pinkeen over 3 years
    Upon consideration I wonder if it's even possible to do this without userns-remapping. It might be that the kernel does not support clone/unshare inside a child namespace of UID 0?
  • vanthome
    vanthome over 2 years
    This is the only solution that I found that worked for me, as I'm running cgroupv2 on openSUSE.
  • mviereck
    mviereck over 2 years
    You can use nsenter to remount /sys/fs/cgroup as :rw. See my comment under the question; it would have fit better here.
  • Admin
    Admin about 2 years
    I encountered this issue with LXC on Ubuntu for some of my Ubuntu/systemd-based LXC containers after upgrading to Ubuntu 22.04 (Jammy). I can confirm the workaround suggested works and my containers are now running again.