Kubernetes NFS Persistent Volumes - multiple claims on same volume? Claim stuck in pending?


Solution 1

Basically you can't do what you want, as the PVC <--> PV relationship is one-to-one.

If NFS is the only storage you have available and you would like multiple PVs/PVCs on one NFS export, use dynamic provisioning with a default storage class.

It's not in official K8s yet, but this one is in the incubator and I've tried it and it works well: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client

This will enormously simplify your volume provisioning, as you only need to take care of the PVC; the PV will be created as a directory on the NFS export/server that you have defined.
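As a sketch of what that looks like (the class name and PVC name here are illustrative; the `provisioner` value must match what your deployed nfs-client provisioner registers — `fuseim.pri/ifs` is the value used in the incubator repo's example manifests):

```yaml
# StorageClass backed by the nfs-client provisioner, marked as the
# cluster default so PVCs without an explicit class use it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage            # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs            # must match the deployed provisioner
---
# With a default class in place, a plain PVC is all you need; the
# provisioner creates a per-claim subdirectory on the NFS export
# and a matching PV for you.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                       # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
```

Each claim then gets its own directory on the export, so many independent PVCs can share one NFS server.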

Solution 2

From: https://docs.openshift.org/latest/install_config/storage_examples/shared_storage.html

As Baroudi Safwen mentioned, you cannot bind two PVCs to the same PV, but you can use the same PVC in two different pods.

volumes:
- name: nfsvol-2
  persistentVolumeClaim:
    claimName: nfs-pvc-1  # <-- use this same claim in both pods
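A minimal sketch of the idea with two complete pods (pod names, image, and mount path are placeholders) — both reference the same claim and therefore mount the same NFS-backed volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: writer                  # placeholder pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: nfs-pvc-1      # same claim...
---
apiVersion: v1
kind: Pod
metadata:
  name: reader                  # placeholder pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: nfs-pvc-1      # ...referenced by both pods
```

Note that a PVC is a namespaced object, so both pods must live in the same namespace as the claim.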

Solution 3

A persistent volume claim is exclusively bound to a persistent volume.
You cannot bind two PVCs to the same PV.

I guess you are interested in dynamic provisioning. I faced this issue when I was deploying StatefulSets, which require dynamic provisioning for their pods. You need to deploy an NFS provisioner in your cluster: the NFS provisioner pod has access to the NFS folder (a hostPath), and each time a pod requests a volume, the NFS provisioner creates it in the NFS directory on behalf of the pod.
Here is the github repository to deploy it:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs/deploy/kubernetes
You have to be careful, though: you must ensure the NFS provisioner always runs on the same machine that has the NFS folder, by making use of a node selector, since the volume is of type hostPath.
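As a sketch, pinning the provisioner to the node that physically holds the exported directory might look like this (the node label value, image tag, and host path are hypothetical — adapt them to the manifests from the repository above):

```yaml
# Excerpt of the provisioner Deployment's pod template: the nodeSelector
# keeps the pod on the one node whose hostPath actually contains the export.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: nfs-node-1   # hypothetical node name
      containers:
      - name: nfs-provisioner
        image: quay.io/kubernetes_incubator/nfs-provisioner:latest  # hypothetical tag
        volumeMounts:
        - name: export-volume
          mountPath: /export
      volumes:
      - name: export-volume
        hostPath:
          path: /srv/nfs                     # hypothetical host directory
```

Without the nodeSelector, a rescheduled provisioner pod could land on a node where the hostPath is empty, silently serving an empty directory.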

Solution 4

For my future-self and everyone else looking for the official documentation:

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding

Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.
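The ClaimRef mentioned there can also be set by hand to reserve a PV for one specific claim before anything binds to it; a sketch (server hostname is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  claimRef:                        # pre-binds this PV to exactly one claim
    namespace: default
    name: nfs-pvc-1
  nfs:
    server: mynfs.example.com      # placeholder NFS server hostname
    path: /server/mount/point
```

Any other PVC will then skip this PV during binding, which makes the one-to-one rule explicit rather than a surprise.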

Author: John

Updated on July 09, 2022

Comments

  • John
    John almost 2 years

    Use case:

    I have a NFS directory available and I want to use it to persist data for multiple deployments & pods.

    I have created a PersistentVolume:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: mynfs.com
        path: /server/mount/point
    

    I want multiple deployments to be able to use this PersistentVolume, so my understanding of what is needed is that I need to create multiple PersistentVolumeClaims which will all point at this PersistentVolume.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nfs-pvc-1
      namespace: default
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 50Mi
    

    I believe this creates a 50Mi claim on the PersistentVolume. When I run kubectl get pvc, I see:

    NAME        STATUS     VOLUME    CAPACITY    ACCESSMODES   AGE
    nfs-pvc-1   Bound      nfs-pv    10Gi        RWX           35s
    

    I don't understand why I see 10Gi capacity, not 50Mi.

    When I then change the PersistentVolumeClaim deployment yaml to create a PVC named nfs-pvc-2 I get this:

    NAME        STATUS     VOLUME    CAPACITY    ACCESSMODES   AGE
    nfs-pvc-1   Bound      nfs-pv    10Gi        RWX           35s
    nfs-pvc-2   Pending                                        10s
    

    PVC2 never binds to the PV. Is this expected behaviour? Can I have multiple PVCs pointing at the same PV?

    When I delete nfs-pvc-1, I see the same thing:

    NAME        STATUS     VOLUME    CAPACITY    ACCESSMODES   AGE
    nfs-pvc-2   Pending                                        10s
    

    Again, is this normal?

    What is the appropriate way to use/re-use a shared NFS resource between multiple deployments / pods?

  • Vesper
    Vesper over 4 years
    I say use different storage classes in case you need some apps with small NFS RW values and some with large ones. Then, create several NFS dynamic provisioners, at least one for each class, then recreate/alter PVCs to refer created classes. Should do instead of creating tons of PVC/PVs
  • Vesper
    Vesper over 4 years
    So, this thing looks like a provisioning layer between a single NFS volume and several consumers? Looks pretty, gonna try as I have a NFS over external SDFS that's barely customizable but enormous, and I need a ton of PVCs to work with that storage with small throughput each.
  • vishal
    vishal about 4 years
    I'm facing an issue with your suggestion. I'm using the same PVC with multiple pods. The problem is that only files are displayed commonly in both pods; directories are not being shared. Do you know if that is a limitation? If not, I'll post my entire scenario as a separate question.
  • PussInBoots
    PussInBoots over 3 years
    The link to openshift.org doesn't appear to be working anymore; I get a ERR_TOO_MANY_REDIRECTS in Chrome.
  • Gerrit-K
    Gerrit-K over 2 years
    but you can use the same pvc in two different pods -> only if both pods reside in the same namespace as the pvc (see here)