How to Restart Kubernetes Pod?

By admin

Kubernetes has changed the world of containerization. With container orchestration, the day-to-day work of software developers, engineers, and other practitioners has become considerably easier.

If you are working with Kubernetes, you must know about Kubernetes pods. A Kubernetes pod is the smallest deployable unit and an integral part of Kubernetes. But no technology can be trusted blindly: one small error in code can crash the whole app. Pods often fail in exactly this way, leaving you with the question, “how to restart Kubernetes pods?”

It is a problem that every Kubernetes user faces once in a while. Pods that have been working just fine can crash all of a sudden, interrupting work and delaying important timelines. If you are working with clients, fixing the problem immediately becomes extremely important.

If you are wondering, “how to restart Kubernetes pods,” this article is for you. For a newbie using Kubernetes pods for the very first time, an unexpected shutdown may be the first of many surprises. In this article, you will learn about pods, containers, and clusters, the differences between them, how to identify the problem in your pod, and how to restart a Kubernetes pod.

So, without further ado, let’s get started.

What is a Pod? What is the Difference Between Containers, Cluster, and Pod?

Kubernetes has its own lingo, and it is completely alright if you are confused by words like pods, clusters, namespaces, and containers. So, before you dive into troubleshooting, learn a little about these basic terms of Kubernetes.

Containers

A container is a self-contained package holding an application and all of its dependencies. It does not depend on any particular OS, environment, or hardware; the package contains everything that is required to run the application.

Pod

It is the smallest unit that can be deployed on Kubernetes. You can picture it as a tight plastic wrapping around one or more containers. Most pods run a single application container, but a pod can also include helper containers, such as init containers that run setup tasks before the application container starts.

Cluster

All pods need a platform to run on, which is a cluster. A cluster runs many pods, whether they are related or not. It is the central unit of Kubernetes: every Kubernetes installation has at least one cluster.

So, you can understand it like-

Container ——–> Pod ——–> Cluster

How to Restart Kubernetes Pods?

A pod contains the container that runs the application; if the pod malfunctions, the application becomes unusable. Besides the application container, a pod may also include init containers, which assist the application container. An init container performs its tasks and then terminates. All of these need to run smoothly to make the work of the development team easy.

If you are here, it means you are also facing issues like pods suddenly stopping, crashing, or restarting. When a container crashes repeatedly, the pod ends up in the CrashLoopBackOff state. There are several possible reasons: a bad configuration, a bug in the application inside the container, or an incorrect Kubernetes deployment. You can try the describe command or tail the logs, but if you cannot find the error, it is better to restart the Kubernetes pods than waste any more time.
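
The two checks just mentioned, describing the pod and tailing its logs, can be sketched as small shell helpers. This is a minimal sketch; the pod name and namespace in the usage example are illustrative:

```shell
#!/bin/sh
# Inspect a failing pod before restarting it.

# Show events, state transitions, and container statuses for a pod.
inspect_pod() {
    kubectl describe pod "$1" -n "$2"
}

# Tail the logs of the previous (crashed) container instance,
# which usually holds the actual error message.
previous_logs() {
    kubectl logs "$1" -n "$2" --previous --tail=50
}

# Example usage (against a real cluster):
#   inspect_pod chat-5796d5bc7c-2jdr5 service
#   previous_logs chat-5796d5bc7c-2jdr5 service
```

The --previous flag matters here: after a crash the current container is a fresh restart, so its logs are often empty, while the previous instance's logs show the failure.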

To restart a Kubernetes pod, the first step is to check the health of your pods. You can do so with this command (here the namespace is called service):

kubectl get pods -n service

You may get an output like this:

NAME                    READY   STATUS    RESTARTS   AGE
api-7996469c47-d7zl2    1/1     Running   0          77d
api-7996469c47-tdr2n    1/1     Running   0          77d
chat-5796d5bc7c-2jdr5   0/1     Error     0          5d
chat-5796d5bc7c-xsl6p   0/1     Error     0          5d

The errors are visible, and you can try to debug one of the failing pods by executing the following command:

kubectl describe pod chat-5796d5bc7c-2jdr5 -n service

If the container keeps crashing no matter what you try, the pod ends up in CrashLoopBackOff. Check for it with this command:

kubectl get pods --namespace nginx-crashloop

NAME                     READY   STATUS             RESTARTS   AGE
flask-7996469c47-d7zl2   1/1     Running            1          77d
flask-7996469c47-tdr2n   1/1     Running            0          77d
nginx-5796d5bc7c-2jdr5   0/1     CrashLoopBackOff   2          1m
nginx-5796d5bc7c-xsl6p   0/1     CrashLoopBackOff   2          1m

Now, you are at a point where the only sane way to move ahead is to restart the pods.

You need the following for restarting pods:

  • A terminal window or command line
  • The kubectl command-line tool, configured for your cluster
  • A running Kubernetes cluster

The different ways of restarting Kubernetes pods are discussed as follows:

1. Rolling Restart

As the name suggests, a rolling restart does not shut everything down in one go; it shuts down and restarts pods one by one. This saves time, and it does not interrupt service, since the majority of pods keep running while the rest are replaced.

You can go for a rolling restart with the following command:

kubectl rollout restart deployment [deployment_name]

2. Change the Environment Variable

This method changes a small part of the pod template, an environment variable, which causes Kubernetes to restart the pods with the new change incorporated.

You can try changing the deployment date using the following command:

kubectl set env deployment [deployment_name] DEPLOY_DATE="$(date)"
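
You can verify the change afterwards by listing the environment variables on the Deployment. A sketch, where the Deployment name web is illustrative:

```shell
#!/bin/sh
# Force a rollout by changing an environment variable in the
# pod template, then list the variables to confirm the change.
restart_via_env() {
    kubectl set env deployment "$1" DEPLOY_DATE="$(date)"
}

show_env() {
    kubectl set env deployment "$1" --list
}

# Example usage (against a real cluster):
#   restart_via_env web
#   show_env web
```

Any change to the pod template triggers a rollout, so the variable itself does not matter; DEPLOY_DATE is just a conventional marker.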

3. Reset the Replicas

Use the “scale” command to reset the replicas for your pods.

First, reset the replicas to zero, which shuts down all the pods. The following command will help:

kubectl scale deployment [deployment_name] --replicas=0

Now, using the same command, set the replicas back to a number greater than zero, such as 1:

kubectl scale deployment [deployment_name] --replicas=1
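
The two scale commands can be combined into one helper. A sketch, where the Deployment name web in the usage example is illustrative:

```shell
#!/bin/sh
# Restart a Deployment by scaling it to zero and back up.
# All pods are terminated at once, so expect brief downtime.
scale_restart() {
    kubectl scale deployment "$1" --replicas=0
    kubectl scale deployment "$1" --replicas="$2"
}

# Example usage (against a real cluster):
#   scale_restart web 1
```

Unlike a rolling restart, this method takes the application fully offline between the two commands, which is why a rolling restart is usually preferred in production.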

That is it. You will not find a simpler or easier way to restart Kubernetes pods; these commands will get the job done.

Conclusion

Even a small glitch can interrupt the smooth working of a Kubernetes pod, and when pods show errors, every user of the application is affected. You therefore need to fix errors as quickly as possible, and restarting your Kubernetes pod can be a viable first step.

As a newbie, you may face problems restarting your Kubernetes pods. This article covers three methods that will help you restart them easily, with the commands provided so you can follow each method step by step. Of the three, the replica method is the simplest and most direct.

Pod problems can have several causes, which you should investigate once the pods have been restarted and the crashing has stopped. Restarting Kubernetes pods is a temporary measure; it is important that you find the root cause of the problem and fix it.

FAQs

If you need help with related problems, such as starting or stopping a cluster or stopping a pod, you can find solutions to these common questions here.

1. How to Stop a Kubernetes Cluster?

Follow the steps to stop a Kubernetes cluster:

  • Stop the worker nodes with this command. You can do so either individually or simultaneously.

shutdown -h now

If you are a VMware user, use the Shut Down Guest OS option instead.

  • Shut down the master node after shutting down all the worker nodes.
  • Stop the NFS Server. If your NFS server is located on the Kubernetes master node, it will shut down automatically as the master node shuts down.
  • If your Docker registry is running on a node other than the master node, stop that server or VM. If it is located on the master node, it will shut down automatically with the master.
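
Before powering a worker off, it is good practice to drain it so its pods terminate gracefully. A sketch; the node name in the usage example is illustrative, and the flags shown are for recent kubectl versions:

```shell
#!/bin/sh
# Gracefully evict pods from a node before shutting it down.
drain_node() {
    kubectl drain "$1" --ignore-daemonsets --delete-emptydir-data
}

# Example usage (run before "shutdown -h now" on each worker):
#   drain_node worker-1
```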

2. How to Start a Kubernetes Cluster?

To start a Kubernetes cluster, you need to reverse the steps used to stop Kubernetes Cluster. It is easy if you simply follow the steps mentioned below:

  • Start the VM or server which runs the Docker registry. If it is situated on the master node, it will automatically get started when the master node is started.
  • Start the NFS Server which is generally found on the Kubernetes master node.
  • Start all the worker nodes and master nodes at the same time.
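
Once the nodes are powered on, it is worth confirming that they have all rejoined the cluster and report Ready before starting workloads. A minimal sketch:

```shell
#!/bin/sh
# Wait until every node reports the Ready condition.
wait_for_nodes() {
    kubectl wait --for=condition=Ready node --all --timeout=300s
}

# Example usage (against a real cluster):
#   wait_for_nodes
#   kubectl get nodes
```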

3. How to Stop Kubernetes Pods?

Sometimes you may need to stop Kubernetes pods due to maintenance or other reasons. You can stop FCI pods easily by using the commands given below:

As the root user, execute each of these commands, waiting 30 seconds between each:

kubectl scale deploy fci-solution --replicas=0
kubectl scale deploy fci-analytics --replicas=0
kubectl scale deploy fci-messaging --replicas=0
kubectl scale deploy fci-primaryds --replicas=0
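
The four commands and the 30-second pauses can be scripted in one loop. A sketch; the fci-* Deployment names come from the steps above:

```shell
#!/bin/sh
# Scale the FCI deployments down to zero, pausing 30 seconds
# between each, as the manual procedure requires.
stop_fci() {
    for dep in fci-solution fci-analytics fci-messaging fci-primaryds; do
        kubectl scale deploy "$dep" --replicas=0
        sleep 30
    done
}

# Example usage (against a real cluster):
#   stop_fci
#   kubectl get pods   # should eventually report "No resources found"
```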

The result of these commands will look like this:

deployment "fci-solution" scaled
deployment "fci-analytics" scaled
deployment "fci-messaging" scaled
deployment "fci-primaryds" scaled

To check if any pods are still running, execute this command:

kubectl get pods

The result should be like this:

No resources found

When it appears on your screen, you have successfully stopped the running Kubernetes pods.

4. How to Start Kubernetes Pods?

This is a reversal of the “stop Kubernetes Pods” process. Follow the steps to start Kubernetes Pods:

As the root user, execute each of these commands one by one. Execute a command only after the previous one has completely finished and you can see 1/1 in the READY column.

kubectl scale deploy fci-primaryds --replicas=1
kubectl scale deploy fci-messaging --replicas=1
kubectl scale deploy fci-analytics --replicas=1
kubectl scale deploy fci-solution --replicas=1
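
Instead of eyeballing the READY column between commands, the wait can be automated with a rollout status check. A sketch using the same fci-* names:

```shell
#!/bin/sh
# Scale each FCI deployment up in order, waiting for it to
# become ready (1/1) before starting the next one.
start_fci() {
    for dep in fci-primaryds fci-messaging fci-analytics fci-solution; do
        kubectl scale deploy "$dep" --replicas=1
        kubectl rollout status deploy "$dep" --timeout=600s
    done
}

# Example usage (against a real cluster):
#   start_fci
```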

The output will be as follows:

deployment "fci-primaryds" scaled
deployment "fci-messaging" scaled
deployment "fci-analytics" scaled
deployment "fci-solution" scaled

Execute the following command to check the running pods.

kubectl get pods

The result should appear like this:

NAME                             READY   STATUS    RESTARTS   AGE
fci-analytics-2613535553-c9pkg   1/1     Running   0          2m
fci-messaging-4092546328-85jlc   1/1     Running   0          5m
fci-primaryds-1962558436-t20l4   1/1     Running   0          6m
fci-solution-2263013476-9dqfv    1/1     Running   0          1m

Log into FCI.

https://<fully-qualified-hostname-of-k8s-master>:6883

Accept the certificates and dismiss the SSL warning, then open the console:

https://<fully-qualified-hostname-of-k8s-master>:9443/console
