How do you get kubectl to log in to an AWS EKS cluster?
Solution 1
- As mentioned in the docs, the AWS IAM user that created the EKS cluster automatically receives system:masters permissions, which is enough to get kubectl working. You need to use that user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. If you didn't create a dedicated IAM user to create the cluster, then you probably created it using the root AWS account; in that case, you can use the root user's credentials (see Creating Access Keys for the Root User).
- The main magic is inside the aws-auth ConfigMap in your cluster – it contains the IAM entity -> Kubernetes ServiceAccount mapping.
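For illustration, an aws-auth ConfigMap typically looks something like this (the account ID and role name below are placeholders; your cluster's copy may contain more entries):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # Maps IAM roles (here, the worker node instance role) to Kubernetes users/groups
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Once you have working credentials, you can inspect yours with kubectl describe configmap -n kube-system aws-auth.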
I'm not sure how you pass credentials to aws-iam-authenticator:
- If you have ~/.aws/credentials with aws_profile_of_eks_iam_creator, then you can try:
$ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
- Also, you can use environment variables:
$ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces
Both of them should work, because kubectl will use the generated ~/.kube/config, which contains an aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses environment variables or ~/.aws/credentials to give you a token.
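To make this concrete, here is a sketch of the user entry that update-kubeconfig generates in ~/.kube/config (the user name, cluster name, and profile below are placeholders, and the exec apiVersion varies between client versions):

```yaml
users:
- name: my-eks-user
  user:
    exec:
      # kubectl runs this command to obtain a token for each request
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - cluster_name
      env:
        # Which AWS credentials the authenticator resolves
        - name: AWS_PROFILE
          value: aws_profile_of_eks_iam_creator
```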
Also, this answer may be useful for understanding the first EKS user creation.
Solution 2
After going over the comments, it seems that you:
- Have created the cluster with the root user.
- Then created an IAM user and created AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
- Used this access key and secret key in your kubeconfig settings (it doesn't matter how – there are multiple ways to do that).
And here is the problem as described in the docs:
If you receive one of the following errors while running kubectl commands, then your kubectl is not configured properly for Amazon EKS or the IAM user or role credentials that you are using do not map to a Kubernetes RBAC user with sufficient permissions in your Amazon EKS cluster.
- could not get token: AccessDenied: Access denied
- error: You must be logged in to the server (Unauthorized)
- error: the server doesn't have a resource type "svc" <--- Your case
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions).
Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
For more information, see Managing users or IAM roles for your cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
This is the cause for the errors.
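If you rely on the credential-file part of that chain, ~/.aws/credentials looks something like this (the profile name and key values below are placeholders) – the point is that whichever profile kubectl ends up using must hold the cluster creator's keys:

```ini
[aws_profile_of_eks_iam_creator]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```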
As the accepted answer described, you'll need to edit the aws-auth ConfigMap in order to manage users or IAM roles for your cluster.
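As a sketch (the account ID and user name below are placeholders), granting an additional IAM user admin access means adding a mapUsers entry to that ConfigMap, for example via kubectl edit -n kube-system configmap/aws-auth run with the cluster creator's credentials:

```yaml
data:
  mapUsers: |
    # Map the new IAM user to the cluster-admin group
    - userarn: arn:aws:iam::111122223333:user/my-other-admin
      username: my-other-admin
      groups:
        - system:masters
```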
Solution 3
Here are my steps using the aws-cli:
$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"
$ aws eks update-kubeconfig \
--region us-west-2 \
--name my-cluster
>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config
Bonus: use kubectx to switch kubectl contexts
$ kubectx
>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two
>> arn:aws:eks:us-east-1:#####:cluster/my-cluster
$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster
>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
Solution 4
Once you have set up the AWS config on your system, check the current identity to verify that you're using the correct credentials, i.e. ones that have permissions for the Amazon EKS cluster:
aws sts get-caller-identity
Afterwards use:
aws eks --region region update-kubeconfig --name cluster_name
This will create a kubeconfig with the required Kubernetes API server URL at $HOME/.kube/config.
Afterwards you can follow the kubectl installation instructions, and this should work.
sbs
Updated on July 25, 2022

Comments
- sbs, almost 2 years ago
Starting from a ~empty AWS account, I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli, and kubectl, then created an IAM user with Programmatic access and AmazonEKSAdminPolicy directly attached.
Then I used the website to create my EKS cluster and used
aws configure
to set the access key and secret of my IAM user.
aws eks update-kubeconfig --name wr-eks-cluster
worked fine, but:
kubectl get svc
error: the server doesn't have a resource type "svc"
I continued anyway, creating my worker nodes stack, and now I'm at a dead end with:
kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
aws-iam-authenticator token -i <my cluster name>
seems to work fine. The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?
Or, ultimately, how do I proceed and gain access to my cluster using kubectl?