Kubernetes mount.nfs: access denied by server while mounting
Solution 1
It's probably because the UID used in your pod/container does not have enough rights on the NFS server.
You can set runAsUser as mentioned by @Giorgio, or edit the uid-range annotations of your namespace and fix a value (e.g. 666). That way every pod in your namespace will run with UID 666.
Don't forget to chown your NFS directory to UID 666 as well.
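As a minimal sketch of the runAsUser approach (the pod name and image are illustrative, and UID 666 assumes that is the value you chose; the NFS server and path are taken from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client           # illustrative name
spec:
  securityContext:
    runAsUser: 666           # run the container processes as UID 666
    fsGroup: 666             # group ownership applied to mounted volumes
  containers:
    - name: app
      image: nginx           # illustrative image
      volumeMounts:
        - name: nfs-vol
          mountPath: /data
  volumes:
    - name: nfs-vol
      nfs:
        server: 10.17.10.190 # NFS server from the question
        path: /exports/test
```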
Solution 2
You have to set a securityContext with privileged: true on the container. Take a look at this link
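A minimal sketch of what that could look like in a pod spec (the pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-privileged       # illustrative name
spec:
  containers:
    - name: app
      image: nginx           # illustrative image
      securityContext:
        privileged: true     # grants the container elevated privileges needed for some NFS mounts
```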
Solution 3
The complete solution for preparing NFS folders for provisioning on a Kubernetes cluster is to apply the following:
# set folder permissions (directories need the execute bit for traversal,
# so 777 rather than 666 is usually what you want here)
sudo chmod 777 /your/folder/
# append a new line to the exports file to allow network access to the folder
sudo bash -c "echo '/your/folder/ <network ip/range>(rw,sync,no_root_squash,subtree_check)' >> /etc/exports"
# re-export the folders
sudo exportfs -ra
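As a concrete (hypothetical) example, exporting /your/folder/ to the 10.17.10.0/24 subnet would look like this in /etc/exports; the options mean read-write access (rw), synchronous writes (sync), no remapping of root to an anonymous user (no_root_squash), and subtree checking (subtree_check):

```
/your/folder/ 10.17.10.0/24(rw,sync,no_root_squash,subtree_check)
```

After editing /etc/exports, sudo exportfs -ra re-reads it, and sudo exportfs -v (or showmount -e <server> from a client) lets you verify that the export is active.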
Colin Maxfield
Updated on June 15, 2022

Comments
Colin Maxfield almost 2 years
I have a kubernetes cluster that is running in our network, and I have set up an NFS server on another machine in the same network. I am able to ssh to any of the nodes in the cluster and mount from the server by running
sudo mount -t nfs 10.17.10.190:/export/test /mnt
but whenever my test pod tries to use an NFS persistent volume that points at that server, it fails with this message:

Events:
  FirstSeen  LastSeen  Count  From                    SubObjectPath  Type     Reason       Message
  ---------  --------  -----  ----                    -------------  ----     ------       -------
  19s        19s       1      default-scheduler                      Normal   Scheduled    Successfully assigned nfs-web-58z83 to wal-vm-newt02
  19s        3s        6      kubelet, wal-vm-newt02                 Warning  FailedMount  MountVolume.SetUp failed for volume "kubernetes.io/nfs/bad55e9c-7303-11e7-9c2f-005056b40350-test-nfs" (spec.Name: "test-nfs") pod "bad55e9c-7303-11e7-9c2f-005056b40350" (UID: "bad55e9c-7303-11e7-9c2f-005056b40350") with: mount failed: exit status 32
    Mounting command: mount
    Mounting arguments: 10.17.10.190:/exports/test /var/lib/kubelet/pods/bad55e9c-7303-11e7-9c2f-005056b40350/volumes/kubernetes.io~nfs/test-nfs nfs []
    Output: mount.nfs: access denied by server while mounting 10.17.10.190:/exports/test
Does anyone know how I can fix this and make it so that I can mount from the external NFS server?
The nodes of the cluster are running on 10.17.10.185 - 10.17.10.189, and all of the pods run with IPs that start with 10.0.x.x. All of the nodes in the cluster and the NFS server are running Ubuntu. The NFS server is running on 10.17.10.190 with this /etc/exports:

/export 10.17.10.185/255.0.0.0(rw,sync,no_subtree_check)
I set up a persistent volume and persistent volume claim, and they both create successfully, showing this output from running kubectl get pv,pvc:

NAME          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pv/test-nfs   1Mi        RWX           Retain          Bound    staging/test-nfs                           15m

NAME           STATUS   VOLUME     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc/test-nfs   Bound    test-nfs   1Mi        RWX                          15m
They were created like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.17.10.190
    path: "/exports/test"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
My test pod is using this configuration:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 1
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: test-nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: test-nfs
          persistentVolumeClaim:
            claimName: test-nfs