Kubernetes pod has unbound immediate PersistentVolumeClaims (EKS)

Yes, I know this has already been discussed a million times, and I'm answering two years after your question. You have most probably forgotten about it by now, but the community remembers everything.

A community answer for future generations...

Everything has already been discussed in the similar Stack Overflow question Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict

The answer from @Sownak Roy, quoted in full and without my modifications; they simply aren't needed here:

The error "volume node affinity conflict" happens when the persistent volume claims that the pod is using are scheduled on different zones, rather than on one zone, and so the actual pod was not able to be scheduled because it cannot connect to the volume from another zone. To check this, you can see the details of all the Persistent Volumes. To check that, first get your PVCs:

$ kubectl get pvc -n <namespace>
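
The VOLUME column in the output maps each claim to its bound PV. A trimmed illustration of the shape of that output (the names and values here are made up):

NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-0a1b2c3d-0000-1111-2222-333344445555   20Gi       RWO            aws-gp2        5m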

Then get the details of the Persistent Volumes (not the Volume Claims):

$ kubectl get pv

Find the PVs that correspond to your PVCs and describe them:

$ kubectl describe pv <pv1> <pv2>
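
A trimmed illustration of the relevant part of the describe output (the zone and volume ID below are made up):

Name:            pvc-0a1b2c3d
Node Affinity:
  Required Terms:
    Term 0:      failure-domain.beta.kubernetes.io/zone in [us-west-2a]
Source:
    Type:        AWSElasticBlockStore
    VolumeID:    aws://us-west-2a/vol-0123456789abcdef0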

Check the Source.VolumeID for each of the PVs; most likely they will be in different availability zones, which is what produces the affinity error. To fix this, create a StorageClass restricted to a single zone and use that StorageClass in your PVC.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: region1storageclass
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "true" # if encryption required
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - eu-west-2b # this is the availability zone, will depend on your cloud provider
    # multi-az can be added, but that defeats the purpose in our scenario
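
One caveat for newer clusters, not part of the quoted answer: failure-domain.beta.kubernetes.io/zone is the legacy zone label, and the in-tree kubernetes.io/aws-ebs provisioner has since been superseded by the EBS CSI driver. A rough sketch of the same single-zone class under the CSI driver (the class name here is made up; the topology key is the one used by the aws-ebs-csi-driver):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: region1storageclass-csi
provisioner: ebs.csi.aws.com # EBS CSI driver instead of the in-tree plugin
parameters:
  type: gp2
  encrypted: "true" # if encryption required
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.ebs.csi.aws.com/zone # CSI topology key replacing the legacy label
    values:
    - eu-west-2b # availability zone, as in the original example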

Comments

  • roy (over 1 year ago)

    I have the following StorageClass defined for an AWS EKS cluster (3 nodes):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: aws-gp2
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
      zones: us-west-2a, us-west-2b, us-west-2c, us-west-2d
      fsType: ext4
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    

    and have EKS nodes running in the us-west-2a, us-west-2b, and us-west-2c zones.

    When I try to deploy MySQL with a dynamically provisioned persistent volume:

    ---
    
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: mysql-pv-claim
      namespace: default
      labels:
        app: mysql
        env: prod
    spec:
      storageClassName: aws-gp2
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
    metadata:
      name: mysql
      namespace: default
      labels:
        app: mysql
        env: prod
    spec:
      selector:
        matchLabels:
          app: mysql
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - image: mysql:5.6
            name: mysql
            env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root-password
            ports:
            - containerPort: 3306
              name: mysql
            volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
          volumes:
          - name: mysql-persistent-storage
            persistentVolumeClaim:
              claimName: mysql-pv-claim
    

    But the pod doesn't move beyond Pending status.

    The pod's event log says:

    Events:
      Type     Reason            Age               From               Message
      ----     ------            ----              ----               -------
      Warning  FailedScheduling  8s (x7 over 20s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
      Warning  FailedScheduling  8s (x2 over 8s)   default-scheduler  0/3 nodes are available: 3 node(s) had volume node affinity conflict.
    

    I don't understand why the pod is not able to mount the PVC.

    Update: I added one more node to the EKS cluster, so that the 4 nodes span all 4 AZs, then redeployed MySQL and it worked. I still don't know what the real issue was.
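
    In hindsight, the likely culprit is the zones parameter in the StorageClass above: it includes us-west-2d, where no node was running at the time, and with the default Immediate binding mode the volume can be provisioned in that zone before any pod is scheduled, after which no node satisfies the volume's affinity. A minimal sketch of the same class with delayed binding (same fields as above; the zones parameter is dropped because the in-tree plugin rejects zone/zones together with WaitForFirstConsumer, and allowedTopologies is the supported way to constrain zones, as in the answer):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: aws-gp2
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
      fsType: ext4
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    # delay provisioning until a pod is scheduled, so the zone is chosen
    # to match a node that can actually run the pod
    volumeBindingMode: WaitForFirstConsumer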