Recently I tried to deploy a Node.js application to Amazon EKS and found that the deployment never became ready: the pod was stuck in the Pending state indefinitely, and I had no idea what was happening.
To troubleshoot, I first increased the kubectl output verbosity to --v=9. For more details, see Kubectl output verbosity and debugging.
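For example (a sketch, the exact command doesn't matter), appending the flag to any kubectl call prints the full API requests and responses:
kubectl get pods --v=9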
To get more details, I ran kubectl describe pod against the stuck pod and got the message below:
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  17s (x14 over 18m)  default-scheduler  0/1 nodes are available: 1 Insufficient pods.
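If you only care about scheduling failures, the same events can usually be pulled directly with a field selector (a sketch, not the exact command I ran at the time):
kubectl get events --all-namespaces --field-selector reason=FailedScheduling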
I was wondering what the maximum number of pods per node was, so I ran
kubectl get nodes -o yaml | grep pods
It returned
4
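A more targeted way to read the same value, without grepping through the whole YAML, is to ask for the pod capacity field directly:
kubectl get nodes -o jsonpath='{.items[*].status.capacity.pods}'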
Then, I wanted to know how many pods were currently running.
kubectl get pods --all-namespaces | grep Running | wc -l
It also returned
4
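The same count can be obtained without grep by using a field selector (a sketch; --no-headers avoids counting the header line):
kubectl get pods --all-namespaces --field-selector=status.phase=Running --no-headers | wc -l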
From Architecting Kubernetes clusters — choosing a worker node size, we know the following:
On Amazon Elastic Kubernetes Service (EKS), the maximum number of pods per node depends on the node type and ranges from 4 to 737.
On Google Kubernetes Engine (GKE), the limit is 100 pods per node, regardless of the type of node.
On Azure Kubernetes Service (AKS), the default limit is 30 pods per node, but it can be increased up to 250.
Since more pods needed to be provisioned, a maximum of 4 pods per node was not enough in this case. As I was only working on a demonstration, I had chosen a small node type, t2.micro, to minimise the cost.
How can this value be increased? The maximum number of pods is calculated from the AWS ENI documentation. The formula is
N * (M-1) + 2
where
N is the number of Elastic Network Interfaces (ENIs) supported by the instance type
M is the number of IP addresses per ENI
Since I was using t2.micro, which supports 2 ENIs with 2 IP addresses each, the calculation is 2 * (2-1) + 2 = 4.
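To avoid doing the arithmetic by hand, the formula can be wrapped in a tiny shell helper (the function name is my own, and the t3.small figures of 3 ENIs and 4 IPs per ENI should be checked against the AWS documentation for your instance type):
max_pods() {
  # N = ENIs for the instance type, M = IP addresses per ENI
  local n=$1 m=$2
  echo $(( n * (m - 1) + 2 ))
}
max_pods 2 2   # t2.micro -> 4
max_pods 3 4   # t3.small -> 11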
AWS also provides a mapping file here so that you don't need to calculate it yourself.
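At the time of writing, the mapping file could be fetched straight from the amazon-eks-ami repository (the exact path is an assumption and may have moved):
curl -s https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/eni-max-pods.txt | grep -E 't2.micro|t3.small'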
The final solution in my case was to change the node type from t2.micro to t3.small.
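After the new node joins the cluster, the higher limit can be confirmed with something like the following (a sketch using custom-columns):
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods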