Amazon EKS: how to configure S3 access for worker nodes?

I was finally able to get it working.

In the Getting Started guide, after the "To launch your worker nodes" step but before running kubectl apply -f aws-auth-cm.yaml, attach the necessary permissions (AmazonS3FullAccess) to the NodeInstanceRole referenced in aws-auth-cm.yaml.
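If you prefer the CLI to the IAM console, the attachment step above can be sketched like this (the role name is a placeholder; the real name comes from your worker-node CloudFormation stack's outputs):

```shell
# Attach the AWS-managed S3 policy to the worker nodes' instance role.
# Replace <NodeInstanceRole-name> with the role name from your
# worker-node stack's outputs.
aws iam attach-role-policy \
  --role-name <NodeInstanceRole-name> \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```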

Author: jackkamm

Updated on September 18, 2022

Comments

  • jackkamm
    jackkamm over 1 year

    How can I configure an EKS cluster to automatically allow S3 access from worker nodes?

    I've set up an EKS cluster following the Getting Started guide and have run the example Guest Book app. Now I want to use Snakemake to run bioinformatics pipelines on the cluster, which requires S3 access for the worker nodes.

    I've tried a few things in the IAM console that haven't worked:

    1. Add AmazonS3FullAccess permission to the EKS service role used to create the cluster.
    2. Create a CloudFormation role with AmazonS3FullAccess permission (among others), and assign this role to the worker nodes stack.
    3. Assign AmazonS3FullAccess permission to my user account (and leave the worker nodes stack IAM role blank -- it should use my user account permissions in this case).

    In all these cases, the worker nodes did not have S3 access (I ssh'd in to check). Any advice?
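That SSH check can be sketched like this (the metadata path is the standard EC2 instance-metadata endpoint; my-bucket is a placeholder for a real bucket name):

```shell
# On the worker node: show which instance-profile role the node is using
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Then try listing a bucket with the node's credentials;
# replace my-bucket with an actual bucket name
aws s3 ls s3://my-bucket
```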

  • Alexandra Johnson
    Alexandra Johnson over 5 years
    I just ran into this issue! After creating my EKS cluster, I attached the AmazonS3ReadOnlyAccess policy directly to the NodeInstanceRole (the role created by the worker-node CloudFormation template) via the AWS console, and was immediately able to run a container on the EKS cluster that downloaded S3 files. Thanks @jackkamm for including your solution! My only change is that I attach the permissions after running kubectl apply, instead of before.
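A quick way to reproduce that in-cluster check is a throwaway pod (a sketch; the pod name is arbitrary, and it assumes the official amazon/aws-cli image is pullable from your nodes):

```shell
# Run a one-off pod that lists S3 buckets and is deleted afterwards.
# The pod inherits the node's instance role, so a successful listing
# confirms the NodeInstanceRole now has S3 access.
kubectl run s3-check --rm -it --restart=Never --image=amazon/aws-cli -- s3 ls
```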
  • bartgras
    bartgras over 3 years
    I followed the same steps and it still wasn't working. In the end I realized that the MLflow server on my EKS cluster was trying to access a bucket in a different region. Adding the AWS_DEFAULT_REGION environment variable, set to the bucket's region, to the MLflow Deployment fixed it.
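That environment-variable fix can be sketched with kubectl (the Deployment name and region here are assumptions; substitute your MLflow Deployment's name and the bucket's actual region):

```shell
# Inject AWS_DEFAULT_REGION into the running Deployment's pod spec;
# Kubernetes rolls the pods so the new value takes effect.
kubectl set env deployment/mlflow AWS_DEFAULT_REGION=us-west-2
```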