Kubernetes NFS persistent volumes permission denied

Solution 1

If you set the proper securityContext in the pod configuration, you can make sure the volume is mounted with the proper permissions.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:
    fsGroup: 2000 
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
  - name: demo
    image: example-image
    volumeMounts:
    - name: task-pv-test-storage
      mountPath: /data/demo

In the above example the storage will be mounted at /data/demo with group ID 2000, which is set by fsGroup. By setting fsGroup, all processes of the container also become part of the supplementary group ID 2000, so you should have access to the mounted files.

You can read more about pod security context here: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
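
To confirm that fsGroup took effect, you can check the mount from inside the running container. A quick check, assuming the demo pod above is running and its image ships a shell with ls and id:

kubectl exec demo -- ls -ldn /data/demo
# example output (yours will differ) - the group should be 2000, set via fsGroup:
# drwxrwsr-x 2 0 2000 4096 Jan  1 00:00 /data/demo

kubectl exec demo -- id
# 2000 should appear among the supplementary groups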

Solution 2

Thanks to 白栋天 for the tip. For instance, if the pod securityContext is set to:

securityContext:
  runAsUser: 1000
  fsGroup: 1000

you would ssh to the NFS host and run

chown 1000:1000 -R /some/nfs/path

If you do not know the user and group, or if many different pods will mount it, you can run

chmod 777 -R /some/nfs/path
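
If you would rather not open the share up with 777, you can first find out which IDs the container actually runs as, and chown to exactly those. A sketch, reusing the volume-test pod name from the question; the output shown is only an example:

kubectl exec volume-test -- id
# example output - your IDs will differ:
# uid=1000 gid=1000 groups=1000

Then, on the NFS host:

chown 1000:1000 -R /some/nfs/path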

Solution 3

A simple way is to get onto the NFS storage and chmod 777 the exported path, or chown it to the user ID used in your volume-test container.
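
If you cannot change ownership on the NFS server directly, a common workaround (an alternative pattern, not taken from the answers above) is to let a root initContainer fix the ownership before the application container starts. A minimal sketch, reusing the claim from the question and assuming the application runs as UID/GID 1000:

apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  initContainers:
    - name: fix-permissions
      image: busybox
      # runs as root by default, so it can chown the NFS mount
      command: ["sh", "-c", "chown -R 1000:1000 /data"]
      volumeMounts:
        - name: task-pv-test-storage
          mountPath: /data
  containers:
    - name: volume-test
      image: <ImageName>
      volumeMounts:
        - name: task-pv-test-storage
          mountPath: /home

Note that this only works if the NFS export does not use root_squash; with root_squash, root inside the container is mapped to an unprivileged user on the server, and the chown fails with the same permission error.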

Comments

  • fragae, almost 2 years

    I have an application running in a pod in Kubernetes. I would like to store some output log files on a persistent storage volume.

    In order to do that, I created a volume over NFS and bound it to the pod through the related volume claim. When I try to write to or access the shared folder, I get a "permission denied" message, since the NFS share is apparently mounted read-only.

    The following is the JSON file I used to create the volume:

    {
          "kind": "PersistentVolume",
          "apiVersion": "v1",
          "metadata": {
            "name": "task-pv-test"
          },
          "spec": {
            "capacity": {
              "storage": "10Gi"
            },
            "nfs": {
              "server": <IPAddress>,
              "path": "/export"
            },
            "accessModes": [
              "ReadWriteMany"
            ],
            "persistentVolumeReclaimPolicy": "Delete",
            "storageClassName": "standard"
          }
        }
    

    The following is the pod configuration file:

    kind: Pod
    apiVersion: v1
    metadata:
        name: volume-test
    spec:
        volumes:
            -   name: task-pv-test-storage
                persistentVolumeClaim:
                    claimName: task-pv-test-claim
        containers:
            -   name: volume-test
                image: <ImageName>
                volumeMounts:
                -   mountPath: /home
                    name: task-pv-test-storage
                    readOnly: false
    

    Is there a way to change permissions?


    UPDATE

    Here are the PVC and NFS config:

    PVC:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: task-pv-test-claim
    spec:
      storageClassName: standard
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 3Gi
    

    NFS CONFIG

    {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "nfs-client-provisioner-557b575fbc-hkzfp",
        "generateName": "nfs-client-provisioner-557b575fbc-",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/pods/nfs-client-provisioner-557b575fbc-hkzfp",
        "uid": "918b1220-423a-11e8-8c62-8aaf7effe4a0",
        "resourceVersion": "27228",
        "creationTimestamp": "2018-04-17T12:26:35Z",
        "labels": {
          "app": "nfs-client-provisioner",
          "pod-template-hash": "1136131967"
        },
        "ownerReferences": [
          {
            "apiVersion": "extensions/v1beta1",
            "kind": "ReplicaSet",
            "name": "nfs-client-provisioner-557b575fbc",
            "uid": "3239b14a-4222-11e8-8c62-8aaf7effe4a0",
            "controller": true,
            "blockOwnerDeletion": true
          }
        ]
      },
      "spec": {
        "volumes": [
          {
            "name": "nfs-client-root",
            "nfs": {
              "server": <IPAddress>,
              "path": "/Kubernetes"
            }
          },
          {
            "name": "nfs-client-provisioner-token-fdd2c",
            "secret": {
              "secretName": "nfs-client-provisioner-token-fdd2c",
              "defaultMode": 420
            }
          }
        ],
        "containers": [
          {
            "name": "nfs-client-provisioner",
            "image": "quay.io/external_storage/nfs-client-provisioner:latest",
            "env": [
              {
                "name": "PROVISIONER_NAME",
                "value": "<IPAddress>/Kubernetes"
              },
              {
                "name": "NFS_SERVER",
                "value": <IPAddress>
              },
              {
                "name": "NFS_PATH",
                "value": "/Kubernetes"
              }
            ],
            "resources": {},
            "volumeMounts": [
              {
                "name": "nfs-client-root",
                "mountPath": "/persistentvolumes"
              },
              {
                "name": "nfs-client-provisioner-token-fdd2c",
                "readOnly": true,
                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Always"
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "serviceAccountName": "nfs-client-provisioner",
        "serviceAccount": "nfs-client-provisioner",
        "nodeName": "det-vkube-s02",
        "securityContext": {},
        "schedulerName": "default-scheduler",
        "tolerations": [
          {
            "key": "node.kubernetes.io/not-ready",
            "operator": "Exists",
            "effect": "NoExecute",
            "tolerationSeconds": 300
          },
          {
            "key": "node.kubernetes.io/unreachable",
            "operator": "Exists",
            "effect": "NoExecute",
            "tolerationSeconds": 300
          }
        ]
      },
      "status": {
        "phase": "Running",
        "hostIP": <IPAddress>,
        "podIP": "<IPAddress>,
        "startTime": "2018-04-17T12:26:35Z",
        "qosClass": "BestEffort"
      }
    }
    

    I have removed some status information from the NFS config to make it shorter.

  • fragae, about 6 years
    I tried to change the owner using the user id from the volume-test container config file, but I got an invalid user message. The id looks like: "uid": "923ca461-4ec9-11e8-8ab3-8aaf7effe4a0". Is that the right one?
  • 白栋天, about 6 years
    The user ID is determined by the USER directive at the end of the Dockerfile; the default is 0 (root). If you don't know the user ID (you can get it by executing "id" in the container), then just use chmod -R 777.
  • lokanadham100, almost 6 years
    That example doesn't use NFS, so there /data/demo gets GID 2000. But if we change the PV to NFS, we still get a permission error.
  • AlaskaJoslin, over 5 years
    I'm not sure why anyone downvoted this. This question is specific to NFS, and as pointed out above, the NFS host needs to have the permissions set, since Kubernetes cannot manage the NFS host's permissions.
  • Kutzi, over 5 years
    Tried it with NFS too, and it didn't work with fsGroup. Probably because of this issue: github.com/kubernetes/examples/issues/260
  • gimlichael, about 4 years
    From a security perspective, I am not sure chmod 777 is a good approach - BUT it was the solution for me at least (after many frustrating hours). The funny thing, though, is that with dynamic/managed provisioning (github.com/kubernetes-incubator/external-storage/tree/master/…) this is not an issue at all. Anyway, thank you for the proposal, it will suffice for my homelab :-)
  • Philipp Nowak, about 4 years
    @gimlichael It seems that the dynamic provisioner does exactly this, chmod 777: github.com/kubernetes-incubator/external-storage/blob/master/…
  • yuranos, over 3 years
    Why do you need to find out the user? The docs clearly state: ...Since the fsGroup field is specified, all processes of the container are also part of the supplementary group ID 2000. The owner for volume /data/demo and any files created in that volume will be group ID 2000.
  • MrBlaise, about 3 years
    You are right, I have updated the answer.
  • Elouan Keryell-Even, over 2 years
    idk man, there seems to be evidence that fsGroup doesn't work for NFS; see this GitHub issue: github.com/kubernetes/examples/issues/260
  • v1d3rm3, about 2 years
    Worked perfectly here, thanks!