Pod limit on Node - AWS EKS


Solution 1

The real maximum number of pods per EKS instance is actually listed in this document.

For t3.small instances, it is 11 pods per instance. That is, you can have a maximum of 22 pods in your two-node cluster. 6 of these pods are system pods, so at most 16 workload pods remain.

You're trying to run 17 workload pods, so that's one too many. I guess 16 of these pods have been scheduled and 1 is left pending.


The formula for the maximum number of pods per instance is as follows:

N * (M-1) + 2

Where:

  • N is the number of Elastic Network Interfaces (ENI) of the instance type
  • M is the number of IP addresses of a single ENI

So, for t3.small, this calculation is 3 * (4-1) + 2 = 11. (The -1 is there because each ENI's primary IP address is not handed out to pods, and the +2 accounts for the aws-node and kube-proxy pods, which use host networking and don't consume a secondary IP.)

Values of N and M for each instance type are listed in this document.
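
If it helps to see the arithmetic spelled out, here is a small shell sketch of the same calculation (the ENI counts for t3.small and t3.medium are taken from that AWS table; double-check them for your own instance type):

    # Max pods = N * (M - 1) + 2, where N = ENIs and M = IPv4 addresses per ENI
    N=3; M=4                                    # t3.small: 3 ENIs, 4 IPv4 addresses per ENI
    echo "t3.small:  $(( N * (M - 1) + 2 ))"    # prints 11

    N=3; M=6                                    # t3.medium: 3 ENIs, 6 IPv4 addresses per ENI
    echo "t3.medium: $(( N * (M - 1) + 2 ))"    # prints 17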

Solution 2

For anyone who runs across this when searching Google: be advised that, as of August 2021, it's now possible to increase the max pods on a node using the latest AWS VPC CNI plugin, as described here.

Using the basic configuration explained there, a t3.medium node went from a max of 17 pods to a max of 110, which is more than adequate for what I was trying to do.
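
For reference, a minimal sketch of that basic configuration, assuming a recent enough VPC CNI version (the aws-node daemonset name and the ENABLE_PREFIX_DELEGATION variable are what the linked guide uses; verify the exact version requirements and steps there):

    # Check which VPC CNI version is installed (prefix delegation needs a recent
    # release and a Nitro-based instance type such as t3)
    kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d ':' -f 3

    # Turn on prefix delegation so each ENI slot hands out a /28 prefix of IPs
    # instead of a single secondary IP
    kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true

Keep in mind this only raises the IP budget; the node's max-pods value (what kubelet reports under Capacity/pods) usually has to be raised as well, which the linked article also covers.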

Solution 3

This is why we stopped using EKS in favor of a kops-deployed, self-managed cluster. IMO, EKS, which employs the aws-cni, imposes too many constraints; it actually works against one of the major benefits of using Kubernetes, efficient use of available resources. EKS moves the system constraint away from CPU / memory usage and into the realm of network IP limitations.

Kubernetes was designed to provide high density and manage resources efficiently. Not quite so with EKS’s version: a node could be sitting almost idle, with nearly all of its memory available, and yet the cluster will be unable to schedule pods on that otherwise low-utilized node once pods > N * (M-1) + 2.

One could be tempted to employ another CNI such as Calico, but it would be limited to the worker nodes, since access to the master nodes is forbidden.
This leaves the cluster with two networks, and problems will arise when trying to reach the K8s API or when working with admission controllers.

It really does depend on workflow requirements; for us, high pod density, efficient use of resources, and having complete control of the cluster were paramount.

Author: Andrija (Software developer)

Updated on October 26, 2021

Comments

  • Andrija
    Andrija over 2 years

    On AWS EKS I'm adding a deployment with 17 replicas (requesting and limiting 64Mi memory) to a small cluster with 2 nodes of type t3.small.

    Counting the kube-system pods, the total number of running pods per node is 11, and 1 is left pending, i.e.:

    Node #1:
    aws-node-1
    coredns-5-1as3
    coredns-5-2das
    kube-proxy-1
    +7 app pod replicas

    Node #2:
    aws-node-1
    kube-proxy-1
    +9 app pod replicas

    I understand that t3.small is a very small instance. I'm only trying to understand what is limiting me here. The memory request is not it; I'm way below the available resources.

    I found that there is IP addresses limit per node depending on instance type. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html?shortFooter=true#AvailableIpPerENI .
    I didn't find any other documentation explicitly saying that this limits pod creation, but I'm assuming it does. Based on the table, t3.small can have 12 IPv4 addresses. If this is the case and this is the limiting factor, since I have 11 pods, where did the 1 missing IPv4 address go?

    • Mark
      Mark over 4 years
      After investigation: you have the standard amazon-vpc-cni-k8s plugin applied, so please visit the AWS GitHub resources and the pros-and-cons discussions here and here to find out how to disable the AWS CNI on AWS and install an overlay network like Calico.
    • Kostanos
      Kostanos over 3 years
      Did you find a solution? How can the limit be increased or disabled completely?
  • Andrija
    Andrija over 4 years
    Yeah, I was looking for the pod limit, that's why I made 17 :). That document looks like it; why is it not in the official documentation... Thanks for the quick reply.
  • weibeld
    weibeld over 4 years
    Yeah, it should be in the documentation. The document is worth bookmarking, because if you work with EKS, you have to refer to it a lot :)
  • Oren
    Oren over 4 years
    Running "kubectl describe node <node-internal-dns-name>" for each of your nodes will reveal the max number of pods for that node under the "Capacity/pods" section.
  • SlimIT
    SlimIT almost 4 years
    Could it be increased?
  • weibeld
    weibeld over 3 years
    It could probably be increased by adding ENIs to the EC2 instances.
  • haotang
    haotang almost 3 years
    Hi @weibeld. I found your answer while searching for my question regarding a pod-on-node issue with EKS here: stackoverflow.com/q/68532574/2576485. Could you also help me with that question if possible? Many thanks!
  • Chris C
    Chris C over 2 years
    This is the answer. You can also follow the instructions here: docs.aws.amazon.com/eks/latest/userguide/…
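
Following up on Oren's comment above, here is a one-liner sketch that prints the pod capacity of every node at once (custom-columns is standard kubectl; the column names are just labels):

    # Show each node's name together with its maximum pod capacity
    kubectl get nodes -o custom-columns=NAME:.metadata.name,MAXPODS:.status.capacity.pods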