Kubernetes mount volume on existing directory with files inside the container
Solution 1
Unfortunately, Kubernetes' volume system is very different from Docker's, so this is not directly possible. If you only need a single file (or a small number of them), you can use subPath projection like this:
volumeMounts:
- name: cephfs-0
mountPath: /opt/myapplication/conf/foo.conf
subPath: foo.conf
Repeat that for each file. But if you have a lot of files, or if they can vary, then you have to handle this at runtime or use templating tools. Usually that means mounting it somewhere else and setting up symlinks before your main process starts.
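A minimal sketch of that symlink approach, assuming the volume is mounted at a side path such as /mnt/conf while the application reads /opt/myapplication/conf (both paths, and the link_conf helper name, are illustrative):

```shell
#!/bin/sh
# Hypothetical startup sketch: the volume is assumed to be mounted at
# /mnt/conf instead of shadowing the application's conf directory,
# and symlinks are created before the main process starts.

# Link every file from the mounted dir into the application's conf dir.
link_conf() {
    src="$1"; dst="$2"
    for f in "$src"/*; do
        [ -e "$f" ] || continue               # mounted dir may be empty
        ln -sfn "$f" "$dst/$(basename "$f")"  # overwrite stale links
    done
}

# In a real entrypoint you would then do something like:
#   link_conf /mnt/conf /opt/myapplication/conf
#   exec /opt/myapplication/bin/myapplication "$@"
```

Files baked into the image stay visible, and anything on the volume is layered on top via the links.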
Solution 2
I was able to fix this by making my ENTRYPOINT a bash script that moves (mv) the config files I wanted mounted into their correct location. The "device or resource is busy" errors seemed to happen because the files were not mounted yet.
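A minimal sketch of such an entrypoint, assuming the image bakes its default configs into a staging directory (conf.default here) so the real conf directory can safely be shadowed by the mount; all paths and the seed_conf helper name are illustrative:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch. The image is assumed to ship its
# default configs in a staging directory (e.g. /opt/myapplication/conf.default),
# which is copied into the mounted volume at startup.

# Copy every file from the staging dir into the mounted dir,
# skipping files the volume already contains.
seed_conf() {
    src="$1"; dst="$2"
    for f in "$src"/*; do
        [ -e "$f" ] || continue        # staging dir may be empty
        dest="$dst/$(basename "$f")"
        [ -e "$dest" ] || cp -a "$f" "$dest"
    done
}

# In a real entrypoint you would then do something like:
#   seed_conf /opt/myapplication/conf.default /opt/myapplication/conf
#   exec /opt/myapplication/bin/myapplication "$@"
```

Because the copy runs after the volume is mounted, it also sidesteps the "device or resource is busy" errors, and other pods mounting the same volume see the seeded files.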
Yudi
Updated on December 19, 2021

Comments
- Yudi over 2 years:
I am using k8s version 1.11 with CephFS as storage.
I am trying to mount a directory created on CephFS into the pod. To achieve this, I have written the following volume and volume mount config in the deployment configuration:
Volume
{
  "name": "cephfs-0",
  "cephfs": {
    "monitors": [
      "10.0.1.165:6789",
      "10.0.1.103:6789",
      "10.0.1.222:6789"
    ],
    "user": "cfs",
    "secretRef": {
      "name": "ceph-secret"
    },
    "readOnly": false,
    "path": "/cfs/data/conf"
  }
}
volumeMounts
{
  "mountPath": "/opt/myapplication/conf",
  "name": "cephfs-0",
  "readOnly": false
}
The mount works properly: I can see the Ceph directory /cfs/data/conf mounted at /opt/myapplication/conf. But here is my issue.
Configuration files are already present as part of the Docker image at /opt/myapplication/conf. When the deployment mounts the Ceph volume, all files at /opt/myapplication/conf disappear. I know this is the behavior of the mount operation, but is there any way to persist the files already in the container onto the volume I am mounting, so that other pods mounting the same volume can access the configuration files? That is, the files already inside the pod at /opt/myapplication/conf should become accessible on CephFS at /cfs/data/conf.
Is it possible?
I went through the Docker documentation, which mentions:
Populate a volume using a container: If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory's contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.
This matches my requirement, but how do I achieve it with k8s volumes?
- Zhu Li about 3 years: Would kubernetes.io/docs/concepts/configuration/configmap work better for such config?
- sngjuk about 4 years: There's a way to share host files with pods; see "kubernetes share a directory from your local system to kubernetes container".
- opricnik about 4 years: @sngjuk Yes, but that's not the issue. The hard part here is that Docker has a feature to initialize a persistent volume with content from a container image; Kubernetes does not have this feature.
- ZedTuX almost 3 years: Note that with this option, Ruby (at least) fails to update the file because it is seen as a directory:
/application/config/initializers/clamby.rb:11:in `initialize': Is a directory @ rb_sysopen - /application/config/clamby.conf (Errno::EISDIR)
- Thamaraiselvam about 2 years: This mounts as a directory, not as a file.