Kubernetes "the server doesn't have a resource type deployments"
Solution 1
The first step is to increase the verbosity level to help find the root cause:
kubectl get deployments --v=99
Overall, there are a few things that might cause it:
- You might have run the commands below as the root user, not a regular one. Run them as a regular user (sudo is needed because /etc/kubernetes/admin.conf is normally root-readable only):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
As suggested here: https://github.com/kubernetes/kubernetes/issues/52636
- Certificates in the kubectl config file have expired, or, if the cluster is on AWS EKS, the IAM access keys might be inactive.
In my case, running "kubectl get deployments --v=99" showed, in addition to "the server doesn't have a resource type deployments", the following:
Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
If that is the case, check the certificates in your kubectl config file (they might be missing or expired, or new ones may have to be created), or, on EKS, check that the IAM keys are issued and active.
- Lack of RBAC permissions, so that the user/group for whom the certificates/keys were issued/signed is not allowed to view specific resources.
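The certificate cause above can be checked locally. A minimal sketch, assuming the client certificate has first been extracted from the kubeconfig into a PEM file; check_cert_expiry is a hypothetical helper name, not part of kubectl:

```shell
# Sketch: check whether the client certificate used by kubectl has expired.
# The commented extraction line assumes a single-user kubeconfig; adjust the
# jsonpath index if yours has several users.
#
#   kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
#     | base64 -d > client.crt

check_cert_expiry() {
  # Prints the notAfter date; exit status is non-zero if the cert is already expired.
  openssl x509 -noout -enddate -checkend 0 -in "$1"
}
```

If the certificate turns out to be expired, regenerate it (e.g. via kubeadm) or obtain fresh credentials from whoever administers the cluster.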
Solution 2
Please change the user to root and try the same commands. It worked for me.
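This likely works because root can read the admin kubeconfig directly. A sketch of an alternative that avoids working in a root shell, assuming the kubeadm default path /etc/kubernetes/admin.conf (use_admin_kubeconfig is a hypothetical helper name):

```shell
# Sketch: point the current shell at the admin kubeconfig instead of switching to root.
# /etc/kubernetes/admin.conf is the kubeadm default and is usually root-readable only,
# so you may need sudo to copy it somewhere readable first (see Solution 1).
use_admin_kubeconfig() {
  cfg="${1:-/etc/kubernetes/admin.conf}"
  [ -r "$cfg" ] || { echo "cannot read $cfg (try sudo)" >&2; return 1; }
  export KUBECONFIG="$cfg"
  echo "KUBECONFIG=$KUBECONFIG"
}
```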
Solution 3
I was experiencing exactly the same behavior as in "Edit 1" above with Kubernetes 1.13.5 (client and server). Removing the ~/.kube/http-cache
directory on the client worked for me.
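For reference, a sketch of that cache cleanup (clear_kubectl_cache is a hypothetical name; both http-cache and the newer cache directory are safe to delete, since kubectl rebuilds them on the next call):

```shell
# Sketch: remove kubectl's on-disk discovery caches; they are rebuilt automatically.
clear_kubectl_cache() {
  home="${1:-$HOME}"
  rm -rf "$home/.kube/http-cache" "$home/.kube/cache"
}
```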
Solution 4
I deleted the ~/.kube directory, then remade the directory and moved the KUBECONFIG file into it. That worked for me.
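A slightly safer variant of the same idea, keeping a backup of the old directory rather than deleting it outright (recreate_kube_dir is a hypothetical helper; the config filename is assumed to be the default):

```shell
# Sketch: move ~/.kube aside as a backup, then recreate it with just the config file.
recreate_kube_dir() {
  home="${1:-$HOME}"
  backup="$home/.kube.bak.$$"
  mv "$home/.kube" "$backup" || return 1
  mkdir -p "$home/.kube"
  cp "$backup/config" "$home/.kube/config"
}
```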
AliCan Sahin
Updated on November 19, 2021

Comments
-
AliCan Sahin over 2 years
I'm new to Kubernetes.
I can't get deployments using kubectl, but I can see all deployments on the Kubernetes dashboard. How can I fix this problem?
user@master:~$ kubectl get deployments
error: the server doesn't have a resource type "deployments"
kubernetes version: 1.12
kubectl version: 1.13
kubectl api-versions:
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
v1
api-resources:
user@master:~$ kubectl api-resources
NAME                     SHORTNAMES   APIGROUP                 NAMESPACED   KIND
bindings                                                       true         Binding
componentstatuses        cs                                    false        ComponentStatus
configmaps               cm                                    true         ConfigMap
endpoints                ep                                    true         Endpoints
events                   ev                                    true         Event
limitranges              limits                                true         LimitRange
namespaces               ns                                    false        Namespace
nodes                    no                                    false        Node
persistentvolumeclaims   pvc                                   true         PersistentVolumeClaim
persistentvolumes        pv                                    false        PersistentVolume
pods                     po                                    true         Pod
podtemplates                                                   true         PodTemplate
replicationcontrollers   rc                                    true         ReplicationController
resourcequotas           quota                                 true         ResourceQuota
secrets                                                        true         Secret
serviceaccounts          sa                                    true         ServiceAccount
services                 svc                                   true         Service
apiservices                           apiregistration.k8s.io   false        APIService
Thanks for your help.
-----------Edit 1-----------
Hello @EduardoBaitello, thank you for the quick reply. The problem is not related to permissions.
user@master:~$ kubectl auth can-i get deployments
Warning: the server doesn't have a resource type 'deployments'
yes
user@master:~$ kubectl auth can-i get deployment
Warning: the server doesn't have a resource type 'deployment'
yes
user@master:~$ kubectl auth can-i get namespaces
yes
user@master:~$ kubectl auth can-i get pods
yes
So I think this is not a duplicate question.
user@master:~$ kubectl get po --namespace=kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7c6b876df8-nk7nm   1/1     Running   2          118d
calico-node-8lt9f                          1/1     Running   3          118d
calico-node-d9r9l                          1/1     Running   2          118d
calico-node-ffqlj                          1/1     Running   2          118d
dns-autoscaler-57ff59dd4c-c9tjv            1/1     Running   2          118d
kube-apiserver-node1                       1/1     Running   3          118d
kube-controller-manager-node1              1/1     Running   6          118d
kube-dns-84467597f5-hf2fn                  3/3     Running   6          118d
kube-dns-84467597f5-sttgx                  3/3     Running   9          118d
kube-proxy-node1                           1/1     Running   3          118d
kube-proxy-node2                           1/1     Running   2          118d
kube-proxy-node3                           1/1     Running   2          118d
kube-scheduler-node1                       1/1     Running   6          118d
kubernetes-dashboard-5db4d9f45f-gkl6w      1/1     Running   3          118d
nginx-proxy-node2                          1/1     Running   2          118d
nginx-proxy-node3                          1/1     Running   2          118d
tiller-deploy-6f6fd74b68-27fqc             1/1     Running   0          16d
user@master:~$ kubectl get componentstatus
NAME                 STATUS    MESSAGE
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
-
Alexz over 4 years
Wasn't helpful in my case.
-
imharindersingh over 4 years
Using the verbose option (--v=99) helped in getting an idea about the errors.