Share persistent volume claims amongst containers in Kubernetes/OpenShift
TL;DR You can share a PV and PVC among containers within the same project/namespace for shared volumes (NFS, Gluster, etc.). You can also access the same shared volume from multiple projects/namespaces, but each project then needs its own dedicated PV and PVC, because a PV binds to a single claim and a PVC is project/namespace scoped.
Below I've tried to illustrate the current behavior and how PV and PVCs are scoped within OpenShift. These are simple examples using NFS as the persistent storage layer.
At this point the accessModes are effectively just labels; they have no real functionality in terms of controlling access to the PV. The examples below demonstrate this.
The PV is global in the sense that it can be seen/accessed by any project/namespace. HOWEVER, once it is bound to a claim, it can only be accessed by containers from that same project/namespace.
The PVC is project/namespace specific, so if you have multiple projects you need a new PV and PVC for each project to connect to the same shared NFS volume; you cannot reuse the PV from the first project.
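As a sketch of what that scoping looks like, a shared NFS PV and a namespaced PVC might be defined as follows (the names pv-nfs and nfs-claim and the export path/server come from the examples below; sizes are illustrative):

```yaml
# PersistentVolume: cluster-scoped object, but it binds to exactly one claim
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/data5        # NFS export used in the examples
    server: nfs1.rhs
---
# PersistentVolumeClaim: lives inside a single project/namespace
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```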
Example 1:
I have 2 distinct pods running in "default" project/namespace, both accessing the same PV and NFS exported share. Both mount and run fine.
[root@k8dev nfs_error]# oc get pv
NAME     LABELS   CAPACITY   ACCESSMODES   STATUS   CLAIM               REASON   AGE
pv-nfs   <none>   1Gi        RWO           Bound    default/nfs-claim            3m
[root@k8dev nfs_error]# oc get pods    <--- running from the DEFAULT project, no issues connecting to the PV
NAME              READY   STATUS    RESTARTS   AGE
nfs-bb-pod2-pvc   1/1     Running   0          11m
nfs-bb-pod3-pvc   1/1     Running   0          10m
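A pod mounting the bound claim might look like this sketch (the image, command, and mount path are illustrative; the claim name matches the PVC in this namespace):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-bb-pod2-pvc
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox            # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfsvol
      mountPath: /mnt/nfs     # illustrative mount path
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-claim    # references the PVC in this project/namespace
```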
Example 2:
I have 2 distinct pods running in the "default" project/namespace and attempt to create another pod using the same PV, but from a new project called testproject, to access the same NFS export. The third pod from the new testproject will not be able to bind to the PV, as it is already bound by the default project.
[root@k8dev nfs_error]# oc get pv
NAME     LABELS   CAPACITY   ACCESSMODES   STATUS   CLAIM               REASON   AGE
pv-nfs   <none>   1Gi        RWO           Bound    default/nfs-claim            3m
[root@k8dev nfs_error]# oc get pods    <--- running from the DEFAULT project, no issues connecting to the PV
NAME              READY   STATUS    RESTARTS   AGE
nfs-bb-pod2-pvc   1/1     Running   0          11m
nfs-bb-pod3-pvc   1/1     Running   0          10m
** Create a new claim against the existing PV from another project (testproject), and the PVC will fail:
[root@k8dev nfs_error]# oc get pvc
NAME        LABELS   STATUS    VOLUME   CAPACITY   ACCESSMODES   AGE
nfs-claim   <none>   Pending                                     2s
** nfs-claim will never bind to the pv-nfs PV, because it cannot see it from its current project scope
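For reference, the claim from testproject is an ordinary PVC; it stays Pending only because the matching PV is already bound to a claim in another project (the size here is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
  namespace: testproject    # different project than the one holding the bound PV
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi          # matches the PV size, but binding still fails
```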
Example 3:
I have 2 distinct pods running in the "default" project and then create another PV, PVC, and pod from testproject. Both projects are able to access the same NFS exported share, but I need a PV and PVC in each of the projects.
[root@k8dev nfs_error]# oc get pv
NAME      LABELS   CAPACITY   ACCESSMODES   STATUS   CLAIM                    REASON   AGE
pv-nfs    <none>   1Gi        RWX           Bound    default/nfs-claim                 14m
pv-nfs2   <none>   1Gi        RWX           Bound    testproject/nfs-claim2            9m
[root@k8dev nfs_error]# oc get pods --all-namespaces
NAMESPACE     NAME              READY   STATUS    RESTARTS   AGE
default       nfs-bb-pod2-pvc   1/1     Running   0          11m
default       nfs-bb-pod3-pvc   1/1     Running   0          11m
testproject   nfs-bb-pod4-pvc   1/1     Running   0          15s
** Notice: I now have three pods running against the same NFS shared volume across two projects, but I needed two PVs, since each is bound to a single project, and two PVCs, one for each project that needs to reach the shared NFS export.
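The second PV is just another PV object pointing at the same NFS export; a sketch (names and size taken from the output above):

```yaml
# A second PV for testproject, backed by the same NFS export as pv-nfs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs2
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/data5        # same export as pv-nfs
    server: nfs1.rhs
```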
Example 4:
If I bypass the PV and PVC, I can connect to the shared NFS volume directly from any project by using the nfs plugin in the pod spec:
volumes:
- name: nfsvol
  nfs:
    path: /opt/data5
    server: nfs1.rhs
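In context, a complete pod spec using the nfs plugin directly might look like this sketch (the pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-pod       # illustrative name
  namespace: testproject
spec:
  containers:
  - name: busybox
    image: busybox           # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfsvol
      mountPath: /mnt/nfs    # illustrative mount path
  volumes:
  - name: nfsvol
    nfs:                     # nfs plugin used directly, no PV/PVC involved
      path: /opt/data5
      server: nfs1.rhs
```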
Now, volume security is another layer on top of this: using supplementalGroups (for shared storage, i.e. NFS, Gluster, etc.), admins and devs can further manage and control access to the shared NFS system.
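A minimal sketch of supplementalGroups in a pod's securityContext (the GID is illustrative and must match the group ownership set up on the NFS server):

```yaml
spec:
  securityContext:
    supplementalGroups: [5555]   # illustrative GID matching the NFS export's group
```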
Hope that helps
Donovan Muller
Updated on July 09, 2022

Comments
-
Donovan Muller almost 2 years
This may be a dumb question but I haven't found much online and want to clarify this.
Given two deployments A and B, both with different container images:
- They're deployed in two different pods (different rc, svc, etc.) in a K8/OpenShift cluster.
- They both need to access the same volume to read files (let's leave locking out of this for now) or at least the same directory structure in that volume.
- Mounting this volume using a PVC (Persistent Volume Claim) backed by a PV (Persistent Volume) configured against a NFS share.
Can I confirm that the above would actually be possible? I.e. two different pods connected to the same volume with the same PVC. So they both are reading from the same volume.
Hope that makes sense...
-
Donovan Muller over 8 years
Based on this ( kubernetes.io/v1.1/examples/nfs ) it actually seems possible? In the example there are two rc's using the same PVC.
-
Donovan Muller over 8 years
Thanks, this helps a lot.
-
Clayton over 8 years
You can bind PVs anywhere you want, but the volume provider itself can reject an attach request for simultaneous access (for Ceph, EBS, or GCE). NFS has no guarantees; if you want to prevent NFS from being used from two pods simultaneously you'll need your own fencing/locking.
-
priyank about 8 years
@DonovanMuller: I am also trying to use the same PV for multiple pods; it works fine, but I think data is also shared between pods in this case. My main concern is: if the PV contains, let's say, 2 GB of data, will all of that data be available to the pods using this PV? That is what we don't want, right? A pod should have only its own data, not others'. I asked this question here too: stackoverflow.com/questions/36624034/… , but no response. Would be very helpful if you could clear this up. Thanks in advance!
-
priyank about 8 years
@screenlay: would appreciate your thoughts on my above query too. Thanks a ton!
-
priyank about 8 years
@Clayton: I am also trying to use the same PV for multiple pods; it works fine, but I think data is also shared between pods in this case. My main concern is: if the PV contains, let's say, 2 GB of data, will all of that data be available to the pods using this PV? That is what we don't want, right? A pod should have only its own data, not others'. I asked this question here too: stackoverflow.com/questions/36624034/… , but no response. Would be very helpful if you could clear this up. Thanks in advance!
-
screeley about 8 years
@priyank - I think if you want to restrict data/directories on your shared storage, you could pass in supplementalGroups from the securityContext and then set up the ownership and groups on the NFS server, i.e. dir1 open to groups A and B, then dir1/dirA only open to podA and dir1/dirB only open to podB; all pods have access to dir1, but only podA has access to dirA and only podB has access to dirB.
-
hamster on wheels almost 7 years
Is it possible to share a single PVC between two different apps within the same project?
-
screeley almost 7 years
@hamsteronwheels - yes, a PVC can be shared within the same project/namespace, as long as the backing PV is a shared filesystem and allows for multiple users (RWX, i.e. ReadWriteMany).
-
Daniel Watrous about 6 years
Example 4 was exactly what I needed to get around having to create multiple PersistentVolumes and claims across many namespaces.