Wednesday, 18 March 2020

How to debug CrashLoopBackOff when starting a pod in Kubernetes

Recently I tried to deploy a Node.js application to Amazon EKS and found that the deployment never became ready: the pod was stuck in CrashLoopBackOff. I had no idea what was happening.

$ kubectl get pod

NAME                     READY   STATUS             RESTARTS   AGE
app1-789d756b58-k8qvm     1/1     Running            0          13h
app2-84cf49f896-4945d     0/1     CrashLoopBackOff   7          13m
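Before increasing verbosity, checking the crashing container's logs is usually the fastest first step. A sketch, using the pod name from the listing above:

```shell
# Show logs from the previous (crashed) container instance
kubectl logs app2-84cf49f896-4945d --previous

# Or fetch recent logs via the deployment, without copying the pod name
kubectl logs deployment/app2 --tail=50
```

If the process fails on startup (as in this case), the crash message typically appears in the `--previous` logs.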

To troubleshoot, I increased the kubectl output verbosity to -v=9. For more, see Kubectl output verbosity and debugging.

$ kubectl describe pod app2-84cf49f896-4945d -v=9

This prints a lot of information to the console. Search for the keyword reason for more insight.

{"reason":"ContainersNotReady","message":"containers with unready status: [app]"}

Then I checked the Dockerfile and found that the entry point was incorrect:

CMD [ "node", "index.js" ]

It should be app.js in my case:

CMD [ "node", "app.js" ]
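For context, a minimal Dockerfile sketch for a Node.js app like this one (file names and base image are assumptions, with the entry file app.js as noted above):

```dockerfile
FROM node:12-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
# The entry point must match the app's actual main file
CMD [ "node", "app.js" ]
```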

Since I'd already set up GitHub Actions to automatically build and push the Docker image to Amazon ECR, I could simply retrieve the new image URI from the console. If you're interested, please check out my previous post.

Edit the existing deployment by running

$ kubectl edit deployment app2

and replace the existing image URI with the new one under spec -> containers -> image.
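Alternatively, the same change can be made in one command with kubectl set image, which avoids opening an editor. A sketch, where the container name app is taken from the unready-status message above and the image URI is a placeholder:

```shell
# <new-image-uri> stands in for the ECR image URI; do not copy literally
kubectl set image deployment/app2 app=<new-image-uri>
```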

Once the change is saved, Kubernetes automatically rolls out the updated deployment. Verify the result.
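Instead of repeatedly polling kubectl get pod, you can also wait for the rollout to finish:

```shell
# Blocks until the deployment's new pods are ready (or the rollout fails)
kubectl rollout status deployment/app2
```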

$ kubectl get pod

NAME                     READY   STATUS             RESTARTS   AGE
app1-789d756b58-k8qvm     1/1     Running            0          13h
app2-84cf49f896-4945d     1/1     Running            0          14m

It's back to Running now.
