Saturday, 14 March 2020

Migrating Your Existing Applications to a New Node Worker Group in Amazon EKS

Suppose you have an existing node group in your cluster and you want to migrate your applications to a new one.

eksctl get nodegroups --cluster=demo
CLUSTER NODEGROUP       CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID
demo    ng-a1234567     2020-03-11T13:46:19Z    1               1               1                       t3.small

Create a new node group using eksctl

eksctl create nodegroup \
--cluster demo \
--version auto \
--name ng-b1234567 \
--node-type t3.medium \
--nodes 1 \
--region=ap-southeast-1 \
--alb-ingress-access \
--full-ecr-access \
--node-ami auto

If you see the following message

[ℹ]  nodegroup "ng-b1234567" has 0 node(s)
[ℹ]  waiting for at least 1 node(s) to become ready in "ng-b1234567"

then label the node

kubectl label nodes -l alpha.eksctl.io/cluster-name=demo alpha.eksctl.io/nodegroup-name=ng-b1234567 --overwrite

Once you execute the above command, you should see

[ℹ]  nodegroup "ng-b1234567" has 1 node(s)
[ℹ]  node "ip-192-168-1-11.ap-southeast-1.compute.internal" is ready
[✔]  created 1 nodegroup(s) in cluster "demo"
[✔]  created 0 managed nodegroup(s) in cluster "demo"
[ℹ]  checking security group configuration for all nodegroups
[ℹ]  all nodegroups have up-to-date configuration

Get the node groups again

eksctl get nodegroups --cluster=demo

The new node group is now listed

CLUSTER NODEGROUP       CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID
demo    ng-b1234567     2020-03-13T13:42:26Z    1               1               1                       t3.medium       ami-08805da128ddc2ee1
demo    ng-a1234567     2020-03-11T13:46:19Z    1               1               1                       t3.small

Check whether your worker nodes are in the Ready state by running

kubectl get nodes
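The readiness check can also be scripted. Below is a minimal sketch that counts Ready nodes; the sample rows are illustrative (one name is from this walkthrough, the second node and the versions are made up), while on a live cluster you would capture the real output with `kubectl get nodes --no-headers`.

```shell
# Sample rows in the shape `kubectl get nodes --no-headers` prints; on a live
# cluster you would capture them with: nodes=$(kubectl get nodes --no-headers)
nodes='ip-192-168-1-11.ap-southeast-1.compute.internal   Ready      <none>   5m    v1.14.9
ip-192-168-0-10.ap-southeast-1.compute.internal   NotReady   <none>   46h   v1.14.9'

# Count rows whose STATUS column (field 2) is exactly "Ready"
ready=$(printf '%s\n' "$nodes" | awk '$2 == "Ready" { c++ } END { print c + 0 }')
echo "Ready nodes: $ready"
```

Once the count matches the desired capacity of the new group, it is safe to move on.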

Delete the original node group.

This will drain all pods from that nodegroup before the instances are deleted.

eksctl delete nodegroup --cluster demo --name ng-a1234567
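The drain that eksctl performs here can also be done by hand with `kubectl drain`, selecting the old group's nodes by the label eksctl puts on them. A sketch, shown as a dry run that only prints the command (it needs a live cluster to actually execute):

```shell
# Build the drain command for every node in the old group, selected by the
# eksctl nodegroup label. Dry run: echo the command instead of executing it.
OLD_GROUP=ng-a1234567
cmd="kubectl drain -l alpha.eksctl.io/nodegroup-name=$OLD_GROUP --ignore-daemonsets"
echo "$cmd"   # remove the echo to run it against a real cluster
```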

If you run

kubectl get pod

you should see the old pods terminating and the new ones being created

NAME                    READY   STATUS        RESTARTS   AGE
app1-789d756b58-k8qvm   0/1     Terminating   0          46h
app1-789d756b58-pnbjz   0/1     Pending       0          35s
app2-f9b4b849c-2j2gd    0/1     Pending       0          35s
app2-f9b4b849c-znwqs    0/1     Terminating   0          26h

After a while, you should see both pods back in the Running state.
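Waiting for that can be scripted the same way. A sketch that counts pods not yet Running; the sample rows are illustrative, and on a live cluster you would feed it the real output of `kubectl get pod --no-headers`.

```shell
# Sample rows in the shape `kubectl get pod --no-headers` prints; live version:
# pods=$(kubectl get pod --no-headers)
pods='app1-789d756b58-pnbjz   1/1   Running   0   2m
app2-f9b4b849c-2j2gd    0/1   Pending   0   35s'

# STATUS is field 3; the migration is done when nothing is left non-Running
not_running=$(printf '%s\n' "$pods" | awk '$3 != "Running" { c++ } END { print c + 0 }')
echo "Pods not yet Running: $not_running"
```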

Reference: EKS Managed Nodegroups
