What is Kubernetes Autoscaling?


Kubernetes autoscaling is one of the most useful features of the Kubernetes platform. With autoscaling, a Kubernetes cluster can add nodes when demand on a service increases and remove them again when demand drops. Autoscaling is currently supported on Google Container Engine and Google Compute Engine (GCE), and it is designed to save you both money and time.

If you want to know what Kubernetes autoscaling is and how to set it up, read on.

Advantages of Kubernetes Autoscaling

To show how Kubernetes autoscaling can benefit you, consider an example.

Assume you run an around-the-clock production service whose workload varies over time. In the US, for example, it is busy during the day and much quieter at night.

Ideally, you would want the number of pods in the deployment and the number of nodes in the cluster to adjust to the load automatically, so that you meet consumer demand without paying for idle capacity. The autoscaling features in Kubernetes, the Cluster Autoscaler and the Horizontal Pod Autoscaler, are meant to do exactly that.

How to Set Up Autoscaling in GCE

Before you set up any scalable infrastructure in GCE, you need an active GCE project with Google Cloud Logging, Google Cloud Monitoring, and Stackdriver enabled. You also need a recent checkout of the Kubernetes project.
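For example, a minimal setup might look like the commands below. The project ID and zone are placeholders rather than values from this article, and kube-up.sh defaults to the GCE provider, so the last line only makes that choice explicit:

$ gcloud config set project <your-project-id>
$ gcloud config set compute/zone us-central1-b
$ export KUBERNETES_PROVIDER=gce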

First, set up a cluster and turn on the Cluster Autoscaler. The cluster will start with two nodes and autoscale up to a maximum of five nodes. To implement this, set the following environment variables:

export NUM_NODES=2
export KUBE_AUTOSCALER_MIN_NODES=2
export KUBE_AUTOSCALER_MAX_NODES=5
export KUBE_ENABLE_CLUSTER_AUTOSCALER=true

Once these are set, start the cluster by running kube-up.sh; the cluster will be created together with the Cluster Autoscaler add-on.

./cluster/kube-up.sh

After the cluster is created, you can check its nodes by running this command:

$ kubectl get nodes
NAME                           STATUS                     AGE
kubernetes-master              Ready,SchedulingDisabled   10m
kubernetes-minion-group-de5q   Ready                      10m
kubernetes-minion-group-yhdx   Ready                      8m

You can now deploy an application on the cluster and then enable the Horizontal Pod Autoscaler for it.
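For example, the php-apache sample application that the output later in this article refers to can be deployed roughly as in the upstream Horizontal Pod Autoscaler walkthrough. The image name and flags below come from that walkthrough, not from this article, and they assume an older kubectl in which run creates a Deployment and --expose creates a matching Service:

$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80

With an application running, enable the autoscaler with the following command: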

$ kubectl autoscale deployment <Application Name> --cpu-percent=50 --min=1 --max=10

The command above specifies that between one and ten replicas of the pod will be maintained as the load on the application changes.

The status of the autoscaler can be checked by running the command:

$ kubectl get hpa
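If you want more detail than this one-line summary, kubectl describe prints the HPA's current metrics, conditions, and scaling events. The php-apache name here matches the example output shown further below; substitute your own deployment's name:

$ kubectl describe hpa php-apache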


The load on the pods can be increased by using the commands below:

$ kubectl run -i --tty load-generator --image=busybox /bin/sh

Then, inside the container's shell, run:

$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
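When you are finished generating load, stop the loop with Ctrl+C, exit the busybox shell, and remove the load generator. The delete command below assumes a kubectl version in which kubectl run created a Deployment named load-generator; newer versions create a bare pod instead, in which case delete the pod of the same name:

$ kubectl delete deployment load-generator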


You can then check the HPA again to see the effect of the load:

$ kubectl get hpa
NAME         REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
php-apache   Deployment/php-apache/scale   50%      310%      1         20        2m
$ kubectl get deployment php-apache
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
php-apache   7         7         7            3           4m
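As a rough sanity check, the Horizontal Pod Autoscaler's documented scaling rule is desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization). Plugging in the numbers above, one replica running at 310% CPU against a 50% target, yields the seven replicas reported for the deployment. The shell arithmetic below just illustrates that calculation and is not a command you need to run:

$ echo $(( (1 * 310 + 49) / 50 ))   # integer ceiling of 1 * 310 / 50
7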


The number of pods running can be checked by entering the following command:

$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-2046965998-3ewo6   0/1     Pending   0          1m
php-apache-2046965998-8m03k   1/1     Running   0          1m
php-apache-2046965998-ddpgp   1/1     Running   0          5m
php-apache-2046965998-lrik6   1/1     Running   0          1m
php-apache-2046965998-nj465   0/1     Pending   0          1m
php-apache-2046965998-tmwg1   1/1     Running   0          1m
php-apache-2046965998-xkbw1   0/1     Pending   0          1m
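The pods stuck in Pending cannot be scheduled because the existing nodes are out of capacity, and that unschedulable backlog is exactly what prompts the Cluster Autoscaler to add a node, as the next listing shows. To confirm the reason yourself, describe one of the Pending pods (the name is taken from the listing above) and read the scheduling events at the bottom of the output:

$ kubectl describe pod php-apache-2046965998-3ewo6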

Lastly, check the node status by running the following command:

$ kubectl get nodes
NAME                             STATUS                        AGE
kubernetes-master                Ready,SchedulingDisabled      9m
kubernetes-minion-group-6z5i     Ready                         43s
kubernetes-minion-group-de5q     Ready                         9m
kubernetes-minion-group-yhdx     Ready                         9m

Conclusion

It is quite easy to deploy the Cluster Autoscaler and the Horizontal Pod Autoscaler using the steps and commands above.

Cluster Autoscaler also comes in handy when the cluster load is irregular. Development clusters or clusters used for continuous integration tests are often needed far less on weekends or at night, and machines that sit idle during those periods waste money.

Cluster Autoscaler can come to the rescue here as well: it reduces the number of nodes so that you pay only for the nodes that are actually needed to run your pods.
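And if you created a throwaway cluster just to try this out, the teardown script that ships alongside kube-up.sh removes it entirely, so you stop paying for it:

$ ./cluster/kube-down.sh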
