Installing Kubernetes on CentOS 7 – What to Do?

May 18, 2022 admin CentOS, Kubernetes

Why and How to Install Kubernetes on CentOS 7

Today is the age of Kubernetes. Its popularity has grown steadily since its inception, and the open-source tool has made container management far easier and more efficient. K8s, as Kubernetes is also known, has a constantly growing ecosystem, and its services and tools are widely accessible. K8s and its tooling are now essential for anyone, or any company, seeking to manage containers and containerized apps effortlessly. Many first-timers, however, struggle to install a Kubernetes cluster locally. That’s only natural. If you are one of those people, this guide is for you: below, we show you how to install Kubernetes on CentOS 7. Read on.

Why install Kubernetes Cluster on CentOS 7 anyway?

Now, the question is why you should opt for Kubernetes. Well, Kubernetes is an amazing innovation. It provides its users with many cool features that are hard to ignore. Here are those features:

  • Kubernetes is known for its convenient storage orchestration, and it lets users auto-mount the storage systems of their choice.
  • Kubernetes can expose containers using a DNS name or their IP address. If traffic to a container is heavy, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
  • Self-healing is another great feature of K8s. If a container fails, Kubernetes restarts it and, if needed, replaces it entirely. Kubernetes also kills containers that fail your user-defined health checks, and it never advertises them to clients until they are ready.
  • Kubernetes also offers automated rollouts and rollbacks. You describe the desired state of your deployed containers, and Kubernetes gradually changes the actual state to match it.
  • The software also comes equipped with automatic bin packing. You give Kubernetes a cluster of nodes to run containerized tasks on and specify how much CPU and RAM each container requires; Kubernetes then fits the containers onto your nodes to make the best use of your existing resources.
  • Kubernetes also allows users to store and manage sensitive information, including SSH keys and OAuth tokens. You can deploy and update secrets and configure apps without rebuilding container images or exposing secrets in your stack setup.

Steps to follow for installing Kubernetes on CentOS 7

Now coming to the article’s core, here are the steps you need to follow for installing Kubernetes on CentOS 7.

Prerequisites

  • Three servers with CentOS 7 installed
  • Root privileges on each server

Step 1. Install Kubernetes

The first task is to prepare all three servers for the Kubernetes (K8s) installation. You do that by adjusting their current configuration and installing the required packages: Docker CE and then Kubernetes itself.

First, utilize the vim editor to edit your hosts file:

vim /etc/hosts

Add these host entries:

10.0.15.10 k8s-master

10.0.15.21 node01

10.0.15.22 node02

Once you’re done, save and quit the vim editor.

Next, you have to disable SELinux. This is a key step, so take heed. Execute the following command to disable SELinux:

setenforce 0

sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

After that, enable the br_netfilter kernel module, which must be active for the Kubernetes installation. With br_netfilter enabled, packets crossing the bridge are processed by iptables, which handles their filtering and port forwarding; this also lets the K8s pods in the cluster communicate with one another.

Execute the following command for enabling the br_netfilter module:

modprobe br_netfilter

echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
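These runtime changes do not survive a reboot. If you want them to persist, one approach (a small sketch beyond the original steps, using the stock sysctl.d mechanism on CentOS 7) is:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system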

Once it’s enabled, disable swap so that the K8s installation goes smoothly. Use the command below to do so:

swapoff -a

After that, edit the /etc/fstab file:

vim /etc/fstab

Comment out the swap line like this:

# /dev/mapper/centos-swap swap swap defaults 0 0
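If you want to double-check that swap is really off, the swap totals reported by free should read 0 and /proc/swaps should list no devices:

free -h

cat /proc/swaps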

Once finished, proceed to install the latest version of Docker Community Edition (CE) from the official Docker repository.

Install the package dependencies for Docker Community Edition using this command:

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repository to your system and install Docker CE using the following commands:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum install -y docker-ce

Wait for the Docker CE installation to finish.

Next, add the Kubernetes repository to your CentOS 7 system by running the command below:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

After that, install the kubeadm, kubelet, and kubectl packages using the following yum command:

yum install -y kubelet kubeadm kubectl

After these packages are installed, reboot the servers:

sudo reboot

Sign in to your servers again and start the docker and kubelet services:

systemctl start docker && systemctl enable docker

systemctl start kubelet && systemctl enable kubelet

It is also necessary to align the cgroup drivers: while installing Kubernetes on CentOS 7, you must ensure that Kubernetes and Docker CE use the same cgroup driver.

Check which cgroup driver Docker is using with the command below:

docker info | grep -i cgroup

Here, you should see that Docker uses cgroupfs as its cgroup driver.

Next, run the following command to change the kubelet’s cgroup driver to cgroupfs so that the two match:

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload systemd and restart the kubelet service:

systemctl daemon-reload

systemctl restart kubelet

Now, we are all set for our K8s cluster.

Step 2. Initialize Kubernetes cluster

Here, we initialize the K8s master. First, move your shell to the master server and run this command to configure the Kubernetes master:

kubeadm init --apiserver-advertise-address=10.0.15.10 --pod-network-cidr=10.244.0.0/16

Once your K8s initialization is finished, you will get output confirming it.

  • Copy the kubeadm join … command from the output into a text editor. You will need it later to register the worker nodes with your K8s cluster; if you misplace it, you can regenerate it as shown right after this list.
  • Next, you have to execute a few commands to start using your Kubernetes cluster.
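Should the join command get lost, kubeadm can reprint a fresh one at any time; run this on the master node:

kubeadm token create --print-join-command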

First, create a new .kube configuration directory and copy the admin.conf setup file into it:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

After doing so, deploy the flannel network to the K8s cluster using this command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Now, the flannel network deployment to your K8s cluster is done.

Wait for some time and then examine your K8s nodes and pods by putting the below command to use:

kubectl get nodes

kubectl get pods --all-namespaces

If successful, the output will show the status of the k8s-master node as “Ready” and all the required pods, including the kube-flannel-ds pods, as “Running.” These pods are crucial for configuring the pod network.

And with that, the initialization and setup of your K8s master cluster is complete.

Step 3. Add both the worker nodes to your cluster

Step 3 is all about adding both the worker nodes to your K8s cluster.

Establish a connection with your first node. Remember the kubeadm join command we copied before? We need to run that command here:

kubeadm join 10.0.15.10:6443 --token vzau5v.vjiqyxq26lzsf28e --discovery-token-ca-cert-hash sha256:e6d046ba34ee03e7d55e1f5ac6d2de09fd6d7e6959d16782ef0778794b94c61e

Now, connect with the second server as well and run the same kubeadm join command again:

kubeadm join 10.0.15.10:6443 --token vzau5v.vjiqyxq26lzsf28e --discovery-token-ca-cert-hash sha256:e6d046ba34ee03e7d55e1f5ac6d2de09fd6d7e6959d16782ef0778794b94c61e

After waiting for a minute or two, verify the status of pods and nodes using this command:

kubectl get nodes

kubectl get pods --all-namespaces

If everything is done right, you’d receive an output stating that both the nodes have become a part of your cluster.

Step 4. Create your first pod as a test

It’s best to deploy a demo Nginx pod to your K8s cluster to verify that everything works. Pods are groups of one or more containers (for instance, Docker containers) with shared storage and network that run on K8s.

Sign in to your Kubernetes master server and deploy a new pod, namely “nginx,” using the below command:

kubectl create deployment nginx --image=nginx

Once deployed, you can view the Nginx deployment’s specification by executing this command:

kubectl describe deployment nginx

Doing so will display the Nginx deployment specs right then.

Now you have to make the Nginx pod reachable from outside the cluster. For this purpose, create a new NodePort service by executing the command below:

kubectl create service nodeport nginx --tcp=80:80

Make sure the command runs without errors, then view your Nginx service’s NodePort and IP address:

kubectl get pods

kubectl get svc

Note both port numbers in the output, though we only need the NodePort. Kubernetes assigns it from the 30000–32767 range; in this example it is 30691.
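If you prefer not to read the port off the screen, a jsonpath query can pull out the assigned NodePort directly (assuming the service is named nginx, as created above):

kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'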

Run this command from your K8s master server:

curl node01:30691

curl node02:30691

The Nginx pod deployment on your Kubernetes cluster is successful and accessible online. Congratulations on a triumphant installation of a K8s cluster on your CentOS 7 machine.

Conclusion

To conclude, Kubernetes (K8s) is an essential tool for managing clusters that span different servers. Besides making the deployment process easier, it also makes it more fruitful. Why struggle with container management when Kubernetes can be your lifesaver? That’s why we put together this guide: so that you, too, can find installing Kubernetes on CentOS 7 easy. We hope this write-up serves as an effective starting point for all the first-time users out there.

As the article shows, installing a Kubernetes cluster is nothing fancy in itself, but it can end up puzzling you if you aren’t attentive enough. From installing Kubernetes to creating a demo pod to confirm that everything works, follow every step scrupulously. Skipping even a tiny part or forgetting to execute a single command could mess everything up.

Just follow the steps above, install Kubernetes on your local CentOS 7 machine, and bid your container management problems goodbye. All the best…and oh, happy clustering!


How to Deploy RabbitMQ on Kubernetes?

May 18, 2022 admin Kubernetes

RabbitMQ is a stable message exchange program for microservices; in short, it is a message broker. It allows Kubernetes applications and services to interact with each other so that transferring data becomes easier. Apache Kafka, Amazon MQ, Apache ActiveMQ, Oracle Message Broker, and RabbitMQ all serve as message broker software for Kubernetes, but RabbitMQ has been used by developers for years because it is a lightweight broker that can be easily deployed on cloud-based platforms. In this blog post, we will guide you through deploying RabbitMQ on Kubernetes. But first, let’s learn about the salient features of this message broker.

The most highlighted features of RabbitMQ are:

  • RabbitMQ supports different messaging protocols including AMQP, MQTT, STOMP, and more.
  • RabbitMQ supports distributed deployment, with high availability and throughput.
  • The platform supports different plugins and tools.
  • Its user interface is easy to understand.
  • It offers an HTTP API that helps you manage and monitor RabbitMQ.
  • It supports popular languages like Java, Python, Ruby, Go, etc.

So, now that you know all the benefits of deploying RabbitMQ, we should easily get going with the procedure of its deployment on Kubernetes.

Prerequisites for Deploying RabbitMQ on Kubernetes

These are the main requirements that you should consider if you wish to deploy RabbitMQ on your Kubernetes:

  • You need a Kubernetes cluster that could be based on anything including AKS, EKS, GKE, Kind, On-Prem.
  • The Kubernetes cluster should have Helm 3 installed.
  • The kubectl CLI tool in Kubernetes.
  • And the terminal window/ command line for typing commands.

Once these prerequisites are in place, you can go ahead and learn how to deploy the RabbitMQ operator on Kubernetes. Please note that deploying RabbitMQ involves connecting several pieces:

  • A Kubernetes namespace
  • A StatefulSet for the RabbitMQ cluster nodes
  • Storage capacity for the node data directories
  • A Kubernetes Secret for the initial RabbitMQ user credentials and for inter-node and CLI tool authentication
  • A headless service for private communication between the nodes
  • Node configuration files and their access permissions
  • A pre-enabled plugin file
  • Peer discovery settings
  • And more

So before you jump into the installation process of RabbitMQ, make sure you have enough time in your hand to perform each of the steps properly.

Step 1. Install the Helm Package Manager

RabbitMQ is complex software for Kubernetes; in fact, most Kubernetes solutions are somewhat complex, and advanced Kubernetes application management requires developers to edit configuration files. This is where Helm, the Kubernetes application package manager, comes in: it organizes the installation of RabbitMQ and deploys its resources into the Kubernetes cluster properly. Hence, if you want to deploy RabbitMQ on Kubernetes, you have to install the Helm package manager first.

You can use these commands on your Kubernetes cluster to install the latest version of Helm:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

chmod 700 get_helm.sh

./get_helm.sh

Run the commands one after another in the terminal and the installation of Helm will complete. (Older guides mention running helm init to start the Helm package manager afterward, but Helm 3, which we use here, no longer needs that step.) Helm will now help you deploy RabbitMQ on Kubernetes with the help of a few more commands. Once you have installed the Helm package manager, the next step is to create a Kubernetes namespace and permissions.
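One caveat worth noting: the stable/rabbitmq chart used later in this guide comes from the old Helm stable repository, which has since been archived. If your Helm installation does not already know that repository, you would first add it, roughly like this:

helm repo add stable https://charts.helm.sh/stable

helm repo update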

Step 2. Create a Kubernetes Namespace

You need to create a Kubernetes namespace because everything in the Kubernetes environment operates within a namespace, and since RabbitMQ is now part of the environment, it needs one too. We recommend creating a unique namespace, which will let you set the RabbitMQ cluster apart from other Kubernetes services and help you manage permissions for the cluster nodes. If you don’t specify a namespace for RabbitMQ, your system will use the default namespace to manage RabbitMQ’s data. To create a namespace quickly and easily, use this command:

kubectl create namespace rabbit

After you run this command, kubectl will confirm that the namespace has been created. Next, create a StatefulSet to run the RabbitMQ cluster on Kubernetes; a StatefulSet ensures that the cluster nodes start in order, one at a time. In the gke directory you will find the StatefulSet definition file, which contains configuration data for mounts, credentials, open ports, and so on. The file establishes stable network identifiers and stable storage for the RabbitMQ resources, and it also helps with managing updates.

Once done, move on to the next step.

Step 3. Install the RabbitMQ Operator

After clearing the previous steps properly, you can now install the RabbitMQ operator on your Kubernetes. You will have to find a default stable/rabbitmq chart from Github and apply that to your Kubernetes cluster. Run this command to do so:

helm install mu-rabbit stable/rabbitmq --namespace rabbit

Verify that the chart components from the GitHub repository are healthy enough to run on your system. The previous command deploys RabbitMQ into the Kubernetes namespace we created beforehand. Below you will find the detailed information for the RabbitMQ operator; to check that the rabbitmq-system namespace contains healthy components, compare your rabbitmq-cluster-operator manifest with the following:

apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: production-rabbitmqcluster
spec:
  replicas: 3
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1
      memory: 2Gi
  rabbitmq:
    additionalConfig: |
      log.console.level = info
      channel_max = 1700
      default_user = guest
      default_pass = guest
      default_user_tags.administrator = true
  service:
    type: LoadBalancer
This is a rabbitmqcluster.yaml manifest file that is available in the Git repo and helps you create the RabbitmqCluster on Kubernetes. Let’s take a glance at the descriptions of these components in the YAML file.

kind: kind refers to the RabbitmqCluster CRD that the RabbitMQ cluster Operator has installed on Kubernetes.

Metadata.name: It refers to the name of the RabbitmqCluster.

Spec.replicas: This one represents the number of RabbitMQ replicas that we need to create a RabbitMQ Cluster.

resources.requests / resources.limits: These specify the resource requests and limits: the amount of CPU and memory each RabbitMQ pod asks for, and the maximum it is allowed to consume.

rabbitmq.additionalConfig: This one contains the configuration data of the rabbitmq clusters.

Service.type: This is a Kubernetes service type that exposes the RabbitMQ cluster. The RabbitMQ cluster’s service type in our case is LoadBalancer.

Once you deploy RabbitMQ with the helm install mu-rabbit stable/rabbitmq --namespace rabbit command, it’s time to check the status of the operator. Use the following command to do so:

$ kubectl describe RabbitmqCluster production-rabbitmqcluster

In the output, you will find the status of the RabbitMQ cluster, along with the user credentials, port number, and URL for visiting the RabbitMQ management dashboard in your Internet browser.

In this case, you will find the username and password for the RabbitMQ cluster interface in a Kubernetes Secret that was created during the process. Here is how to retrieve the username and password for the RabbitMQ interface:

Username:

$ kubectl get secret production-rabbitmqcluster-default-user -o jsonpath='{.data.username}' | base64 --decode

Password:

$ kubectl get secret production-rabbitmqcluster-default-user -o jsonpath='{.data.password}' | base64 --decode

After initiating the deployment process of RabbitMQ, check out the following section.

Step 4. Check the Provisioning Status of RabbitMQ

You will find the complete status of the RabbitMQ provisioning sequence, which confirms the deployment. To check the status, run this command in the terminal:

watch kubectl get deployments,pods,services --namespace rabbit

The terminal window will show the details of your RabbitMQ namespace. To check out the resources that your RabbitMQ has created, run the following command:

$ kubectl get all -l app.kubernetes.io/part-of=rabbitmq

In the terminal, you will find a StatefulSet along with the headless service and the Kubernetes LoadBalancer that were specified. The headless service discovers the Kubernetes cluster nodes, and the LoadBalancer lets you access the RabbitMQ user interface from your browser.

Go back to Step 3 and reread the section where we mentioned the RabbitMQ cluster interface; you can configure the RabbitMQ server from there. The RabbitMQ configuration files contain the server settings and plugins, and by changing those files you can configure the RabbitMQ server easily. Simply open the rabbitmq.conf file in your text editor and edit it.

The default configuration file of RabbitMQ is written in sysctl format but before you edit and configure the file, there are some things you should keep in mind:

  • Lines starting with the # (hash) symbol are comments, and your system will not execute them.
  • You can define only one setting per line in the config file.
  • Lines follow the Key = Value format.

You can define ports, memory limits, permissions, and more in the rabbitmq.conf file; this information is important for connecting Kubernetes applications to the message broker. A manageable example of a rabbitmq.conf file lives in the RabbitMQ server source repository, with a detailed definition of what the configuration file should look like. Treat that file as a reference rather than copying it verbatim; adapt it to mirror your specific system requirements.
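As a rough illustration of that sysctl-style format, here is a tiny sketch of a rabbitmq.conf; the keys are real RabbitMQ settings, but the values are arbitrary examples, not recommendations:

# Lines starting with a hash are comments
listeners.tcp.default = 5672
log.console.level = info
vm_memory_high_watermark.relative = 0.4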

Step 5. Set up the RabbitMQ Management Plugin

The RabbitMQ management plugin ships with the default RabbitMQ distribution, but it has to be enabled, which takes a single command. Use the rabbitmq-plugins command:

rabbitmq-plugins enable rabbitmq_management

Check Step 3 again to find the IP address and port number of the RabbitMQ server, along with the user credentials needed to log in. Enter an address of this form in your browser to access the web interface:

http://rabbitmq-ip-or-server-name:15672/

The IP and port number of RabbitMQ were given to you during the installation process and when you access the web interface from a browser, the server will ask for those credentials. Generally, the RabbitMQ installation pre-defines both the username and password as “guest.”

Now, with the RabbitMQ management plugin, you can access your interface and manage hosts, permissions, message queues, exchanges, etc. Even though you can do all this effortlessly, it helps to know how RabbitMQ on Kubernetes works.

The main purpose of Kubernetes is to automate tasks and operate services for a Kubernetes cluster. With RabbitMQ though, you can improve the way Kubernetes works. RabbitMQ enhances the task management process and stabilizes the background resources. With the Advanced Message Queuing Protocol (AMQP), RabbitMQ can easily transfer messages between brokers and consumers as soon as they are produced by the producer.

A producer is the party that publishes messages to the RabbitMQ platform, where they are lined up in a queue; consumers then receive those messages. This message exchange improves the communication between Kubernetes applications and services.

Conclusion

So, let’s wrap up the whole article in a single paragraph so that you remember everything we have discussed. You need the RabbitMQ message broker to simplify message transfer between Kubernetes applications. Once a producer publishes a message, the broker delivers it to the consumers, i.e., the Kubernetes applications. You can easily deploy RabbitMQ on Kubernetes using a few simple commands, but the first step is to set up the Helm package manager. Once you install RabbitMQ with Helm, the services in your Kubernetes cluster can interact with each other effortlessly and efficiently.

We have also shown how to access the RabbitMQ management plugin later in this post. From there, you can use the RabbitMQ web interface and monitor how messages move through the queue. This not only reduces the message load between the web application servers but also streamlines the delivery of those messages. If you have any problems deploying RabbitMQ on Kubernetes, ask us in the comment box below. Additionally, feel free to browse our other articles to gain more insight into the subject.


How does Katacoda Work?

May 17, 2022 admin Kubernetes

Software technology is progressing day by day, and there has been a major shift in the technology-learning landscape. A major chunk of work is now done online: online classes, eBooks, video calls, meetings, and much more. There are still cases where you have to download or install software before you can work, and that can be time-consuming. To lighten that load, Katacoda is here!

Let us begin by understanding the concept of Katacoda!

What is Katacoda?

Quoting the Katacoda website,

“Katacoda’s aim is to remove the barriers to new technologies and skills.”

Katacoda is a platform for building live, interactive demo and training environments. In the current worldwide COVID-19 pandemic, it has become highly popular among students, tech professionals, and many others. The catch is that you do not need to install any of the components associated with it.

For students, Katacoda provides privacy: every student gets access to a fresh environment, isolated from the others. That encourages them to explore their skills and ask questions without any hesitation.

Therefore, you get a customized e-learning platform to perform best practices through the latest technologies and suggested workflows.

Moving forward, here is a brief about the working of Katacoda.

How Does Katacoda Work?

Take a look at the three instances as shown below. Each one of them shows how to create different scenarios in each case.

Docker Static Website Image

It is a three-step scenario that helps you to create a simple Docker image for a static application and implement the image with ‘nginx’ web server.

Kubernetes Single Node Cluster

This is a four-step scenario that helps you start a single-node Kubernetes cluster. It uses minikube, so install the deployment and launch Kubernetes.

Katacoda Create Scenario

To create this scenario, there are five steps that you need to follow. This will help you to create a scenario itself!

Let us now dive into the several benefits that Katacoda offers!

Advantages of Katacoda

Self-paced Learning

This integrated platform helps to make the users feel more confident with its instant access to every kind of learning material.

Sharing Knowledge Amongst the Team Members

Katacoda includes scenario-creation tooling that helps users create new content whenever additional learning material is needed. Now, you may wonder what a scenario is.

Well, a Katacoda scenario is a way to learn a new concept interactively, in a step-by-step fashion. That makes the platform an incredible place to learn new technologies as well as teach them. You also get interactive terminals to try out commands: there is no need to type anything, as you can just click to copy and then run the command in the respective terminal.

Training and Experimentation

It provides you with several interactive scenarios that can help you to understand real-world problems and understand the guiding steps to resolve them.

Learning Progress of Individuals and Teams

The ultimate aim of the platform is to construct a dedicated interactive technical platform for their audience. Therefore, they try to identify where the material is being used and how the users are learning to improve and embrace the learning opportunities.

Katacoda permits its users to understand and try out products and various scenarios without downloading them. Additionally, for complex setups, the interactive environments are customizable so that they fulfill the requirements of your applications. These environments can be either simple or complex, yet they give you total flexibility to teach as you see fit.

The users can run standard Linux commands as well as additional processes, for instance Docker containers. They can also use the Internet to download and install various packages, for example:

curl httpbin.org/user-agent

Integrated Editor

This tool helps the users to create, update, configure, as well as explore some sample applications. All these factors help to make the user understand the technology that can be applied to their applications or situations.

Highly Embeddable

You can use or embed all the interactive environments into any site or document to maintain consistency in the look.

Now you may wonder how this incredible tool works for you! Take a look.

First of all, you need to create a Katacoda account. To do so, follow these steps:

  • Go to the official website of Katacoda. Tap on the ‘Create’ button on the menu bar.
  • It redirects to a page that asks for your username and name.
  • Fill in the information and click on the ‘Create’ button that takes you to the next section.
  • There you need to establish a connection of your Katacoda account with the Github account.
  • For an automatic configuration, click on the ‘Configure Github automatically’ button.
  • Now, you can resume installing Katacoda CLI.
  • To install Katacoda CLI, use the below-given command:
npm install katacoda-cli --global
  • After you execute this command it installs the below-given package: https://www.npmjs.com/package/katacoda-cli
  • To check the successful installation of Katacoda, run this command: katacoda -v

Now let us learn how to create your first project with Katacoda!

  • The first step is to make a clone of your Katacoda project.
  • Now, use the cd command like this: cd katacoda
  • To create the first Katacoda scenario, run the given command: katacoda scenarios:create
  • After you execute this command, it redirects you to a menu where you have to add some details regarding your project.
  • This creates a directory with whatever name that you specify.
  • Now, the content of the directory consists of the following files: index.json, intro.md, step1.md, step2.md, finish.md

Here is a brief explanation of the meaning of each file.

  • index.json: This file captures the details you entered during the setup phase (see the sketch after this list).
  • intro.md: Whenever the users start the tutorial, this is the welcome screen.
  • step1.md & step2.md: During the setup, you decide the number of steps for your tutorial. These files are created as a result of that.
  • finish.md: Once the tutorial is over, this is the final screen.
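For orientation, the index.json typically ties these files together roughly as follows; the title, description, and step count here are placeholders, so check the file Katacoda generated for you rather than copying this verbatim:

{
  "title": "My first scenario",
  "description": "A short demo scenario",
  "details": {
    "intro": { "text": "intro.md" },
    "steps": [
      { "title": "Step 1", "text": "step1.md" },
      { "title": "Step 2", "text": "step2.md" }
    ],
    "finish": { "text": "finish.md" }
  }
}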

When you are satisfied with all the changes that you have made, commit and push them to GitHub in the following manner:

git add .
git commit -m "My first Katacoda scenario"
git push origin main

And you’re done! Furthermore, to add the content to your scenarios, perform this simple step.

As you have created your first project, now is the time to add some content to it.

Edit your .md files and add the content in Markdown format.

There is another advantage: the Markdown is rendered on the left, while users can execute commands on the right side of the same screen. That makes working in Katacoda very convenient.

Now suppose, for instance, that you have written commands using the following syntax:

```
docker info
```{{execute}}

Users can simply click the command, and the execution takes place on the right.

Conclusion

As you have witnessed, Katacoda is an incredible learning platform. You can create some great interactive courses or sessions for your audience. This platform further offers tons of impressive benefits that present you with ease and effectiveness in your work.

We hope you have gained a good understanding of the basics of Katacoda.

Keep pace with the technology and try this impressive innovation.


What is Amazon Elastic Kubernetes Service?

May 17, 2022 admin AWS, Kubernetes

Kubernetes has changed the way many companies distribute their workloads. With Kubernetes, you can scale seamlessly as traffic changes. It helps automate your container workflow and provides orchestration for your containers.

You can improve Kubernetes performance by running it on Amazon Elastic Kubernetes Service (EKS). Running Kubernetes on EKS will give you more control over managing, deploying, and even scaling your applications within containers. It offers a rich ecosystem and great flexibility alongside seamless container deployment on AWS. Furthermore, it gives you complete control over container customization.

While running a Kubernetes cluster, you may face some challenges. One of them is deciding which cloud will host your applications; as you weigh the options, you need to filter your choices by network, bandwidth, storage, and other features.

What is Amazon Elastic Kubernetes Service?

Amazon EKS is an AWS offering, a managed containers-as-a-service (CaaS) platform that lets you run Kubernetes on AWS. With EKS, you don’t need to install or operate Kubernetes yourself; you can run Kubernetes without managing the control plane or the worker nodes.

To get a better understanding of Amazon EKS, let’s get an overview of Kubernetes.

Kubernetes is the most popular container orchestration engine. Launched by Google in 2014 as an open-source project, it works well for cloud-native computing services and helps automate, manage, and scale thousands of containers at the same time without impacting their performance. It helps with load balancing, monitoring, controlling resource consumption, and provisioning additional resources from various sources.

History of Amazon EKS

Today, most companies run Kubernetes on AWS, making Kubernetes core to AWS customers; it lets them run thousands of containers on AWS efficiently. As a result, in 2018, AWS announced that Amazon EKS was available to customers who use Kubernetes, simplifying the whole process since there is no longer any need to set up a Kubernetes cluster from scratch.

Before EKS was introduced and became available to all AWS customers, customers had to acquire real expertise to run and manage Kubernetes clusters. Apart from this, companies had to provision Kubernetes management infrastructure across several AZs. Since the arrival of EKS, the problem has been resolved to a great extent, as EKS provides a production-ready architecture that runs and manages Kubernetes clusters across several AZs while providing a wide range of features and functionality.

How Amazon EKS Works

The major work of EKS is to simplify managing and maintaining highly available Kubernetes clusters in AWS. The two key components of Amazon EKS are the control plane and the worker nodes.

Control Plane

The control plane of Amazon EKS comprises three Kubernetes master nodes running across three different Availability Zones. The Kubernetes API receives all incoming traffic through a Network Load Balancer (NLB) running in a virtual private cloud controlled by Amazon. Thus, organizations cannot manage the control plane directly; AWS manages it for them.

Worker Nodes

The organization controls the worker nodes, which run on Amazon EC2 instances within its virtual private cloud. You can use any EC2 instance type as a worker node, and you can access the worker nodes via SSH without any special automation. You can easily run a cluster of worker nodes for the organization’s containers; these nodes are managed and monitored by the control plane.

As an organization, you can easily deploy a Kubernetes cluster for every application due to the EKS layout flexibility. You can run more than one application on the EKS cluster using the Kubernetes namespace and configuration of AWS IAM. Companies can use the EKS instead of running and maintaining Kubernetes infrastructure.

Benefits of Elastic Kubernetes Service

Listed below are some of the benefits of using the AWS EKS.

  • Improves availability and observability

With the help of EKS, you can run the Kubernetes control plane across several AWS Availability Zones, which lets it detect and replace unhealthy or malfunctioning control plane nodes automatically. Apart from this, it offers on-demand, zero-downtime upgrades and patching, and it guarantees 99.95 percent uptime. Not only this, it improves the observability of your Kubernetes cluster and helps you identify and resolve issues as they occur.

  • Scales your resources efficiently

With EKS managed node groups, you do not need to provision compute capacity yourself as your Kubernetes cluster scales. For running applications on Kubernetes, you can use the AWS Fargate service to provision serverless compute on demand. For EKS nodes on Amazon EC2, you can identify instance types that reduce costs and improve system efficiency.

  • Ensures secure Kubernetes environment

With EKS, the latest security updates are applied automatically to your cluster’s control plane. The community is very active and works with AWS to address crucial security issues, helping ensure that every Kubernetes cluster stays safe and secure.

Amazon EKS Features

Amazon EKS allows organizations to take advantage of some of the most important features of the Amazon platform, including reliability, high resource availability, enhanced performance, and scale, as well as important integrations with the AWS network and security services. We have listed some of the key features of Amazon EKS below.

  • Managed Control Plane

Along with Amazon EKS, you get a highly available and scalable control plane that runs efficiently across several AWS AZs. EKS manages the availability and scalability of the Kubernetes cluster and its API services. To ensure high availability, the Kubernetes control plane runs across three different Availability Zones, with unhealthy nodes detected and replaced.

  • Managed Worker Nodes

You can run a simple but efficient command for creating, updating, and terminating EKS’s worker node. These worker nodes run on the system with the help of the latest Amazon Machine Images (AMIs).

  • Launch using eksctl

If you want to get EKS running within minutes, you can use the open-source eksctl command-line tool. It creates a Kubernetes cluster that is ready to run your application, as the sketch below shows.
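For instance, a single eksctl invocation along these lines can stand up a small cluster; the name, region, and node count here are placeholders to adapt:

eksctl create cluster --name demo-cluster --region us-east-1 --nodes 3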

  • Service Discovery

AWS has a cloud resource discovery service known as Cloud Map. It helps companies define namespaces for their application resources and keeps the locations of dynamically changing resources up to date. This increases application availability, as the company’s web services always discover resources at their most up-to-date locations.

EKS also provides a connector that automatically propagates internal service registry locations as Kubernetes launches resources, and removes them once they are terminated.

Use cases

Below are the use case applications for Amazon EKS.

Hybrid Deployment

EKS allows you to manage Kubernetes clusters and their applications across hybrid environments: you can run Kubernetes in your own data center as well as on AWS. You can run EKS-managed applications close to AWS Local Zones and AWS Wavelength for better application performance, and you can use AWS Outposts with EKS to extend AWS infrastructure, services, APIs, and tools to your premises.

Machine Learning

To model your machine learning workflows, you can use EKS with Kubeflow. It also helps run distributed training jobs on the latest GPU-powered EC2 instances, and you can leverage AWS Deep Learning Containers for training and inference with Kubeflow.

Batch Processing

You can use the EKS cluster along with Kubernetes jobs API for running sequential and parallel batch workloads. EKS will help you in planning, scheduling, and executing the batch-related computing workloads across several AWS compute services like Amazon EC2, Fargate, etc.

Web Applications

It helps in creating web-based applications that scale efficiently and run in a highly available configuration. You get improved performance, scalability, and reliability for your web applications, along with integrations with AWS networking and security services, such as load balancers for efficiently distributing loads across the network.

Conclusion

Amazon EKS offers you complete and advanced integration with AWS services that will help in improving the performance of your applications running on clusters. It offers various features, tools, and technologies for managing and maintaining the Kubernetes cluster in a high availability zone. In this article, we have highlighted key points on Amazon EKS, its features, and various use cases. By reading it, you will get a complete picture of Amazon EKS and how it is important to organizations.


Docker-compose vs Kubernetes

May 10, 2022 admin Difference, Docker, Kubernetes

Docker Compose: Docker Compose is a tool that was created to help define and share multi-container applications. With Compose, you create a YAML file to define the services, and with a single command you can spin everything up or tear it all down. The big benefit of using Compose is that you can define your application stack in a file, keep it at the root of your project repo, and easily enable someone else to contribute to your project: they only need to clone your repo and start the Compose application. You may see many projects on GitHub or GitLab doing exactly this.

Kubernetes: Also known as k8s, Kubernetes is an open-source framework for managing containerized applications across multiple hosts. It provides the essential mechanisms for deploying, maintaining, and scaling applications. Kubernetes grew out of Google’s years of experience running production workloads at scale with its Borg system, combined with best-of-breed ideas and practices from the community. Kubernetes is maintained by the Cloud Native Computing Foundation (CNCF).

Before jumping into a comparison of the two popular tools, let’s explore the containers they manage.

Docker Compose in relation to Kubernetes

Docker Compose has its benefits compared with Kubernetes, but that doesn’t mean it is the best arrangement in the long run. Kubernetes is the more powerful of the two for deployments that need to scale up while staying lean. Fortunately, migrating from Docker Compose to Kubernetes is easier than ever.

Docker Compose has an advantage over Kubernetes, perhaps especially for those who are new to containerization: the learning curve is not as steep. Docker Compose is designed to get you started from scratch and to simplify the arrangement of microservices. You can use YAML to configure the environment and then deploy all instances and microservices with a single command.

As an alternative, Kubernetes is appealing for different reasons. First of all, Docker Compose is intended to run on a single host or cluster, while Kubernetes is more elegant at consolidating multiple cloud environments and clusters.

That advantage alone means Kubernetes is easier to scale past a certain point. Support for components like MySQL and Go services is likewise better in Kubernetes, and you can use managed services from the likes of AWS and GCP to support your deployment.

It is clear, however, that the primary motivation for moving to Kubernetes is scalability. Moving to Kubernetes as your container platform is a reasonable step, and it will let you take your application to the next level.

Docker Compose is intended to run on a single host, which means container communication is trivial and requires no special configuration.

Exploration of Containers

Containers solve a vital issue in the lifecycle of application development. While developers are writing code, they are often unaware of the problems that will arise when that code moves to production; that is where they start facing issues, because the code that worked on their machine may not work in production.

There are many explanations for this. Sometimes the software itself is at fault; sometimes the environments simply differ.

Containers help solve the underlying issue by separating code from the infrastructure. Developers can pack their application and all its binaries and libraries into a container. In production, that container can run on any machine that has a containerization platform.

A container holds the application and the definition of all the binaries and libraries it needs to run, quite unlike traditional virtual machines.

Container isolation is done at the kernel level, and no guest operating system is required. Because applications can be encapsulated in self-contained environments, you get faster deployments, closer parity between development environments, and near-infinite scalability.

Difference between the containers

Typically, Docker Compose starts one or more containers, creates one or more networks and connects the containers to them, and can also create one or more volumes and configure containers to mount them. All of this is used on a single host. It uses a YAML file to configure the application services. It is a Docker utility for running multiple containers and sharing volumes and networking via the Docker engine features, and it runs locally to emulate service composition as well as remotely on clusters.
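A minimal sketch of such a docker-compose.yml, assuming a hypothetical Nginx-based web service and a named volume:

version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - web-data:/usr/share/nginx/html
volumes:
  web-data:

With this file at the project root, docker-compose up -d brings everything up and docker-compose down tears it down.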

Kubernetes, by contrast, is a distributed container orchestration tool. You may want to know what exactly a container orchestrator is: container orchestrators are tools that group hosts together to form a cluster. They are fault-tolerant and can handle a large volume of containers and users. An orchestrator takes care of running containers and enhancing the engine’s features, and those containers can be composed and scaled to serve complex applications.

Kubernetes is the container orchestrator that was created by Google and donated to the CNCF; it is now open source. It benefits from Google’s years of expertise and is an extensive framework for automating the deployment, scheduling, and scaling of containerized applications. It also supports numerous containerization tools like Docker.

Furthermore, Kubernetes can run on either a public cloud or on-premises infrastructure. It is open source and has a vibrant community, organizations of all sizes are investing in it, and many cloud providers offer Kubernetes as a service.

The Orchestration Battle

Kubernetes’ main characteristic is decoupling the infrastructure from the applications by using containers, and it is open to other container engines as well. The orchestration framework serves as a dynamic, complete foundation for a container-based application, letting it operate in a protected, highly coordinated environment while managing its interactions with the outside world.

Docker Compose isn’t a production-ready tool. It performs admirably in development environments but lacks many of the capabilities that are more or less required for real production use.

Kubernetes is suited to this task, which is one reason it has become so popular. Kubernetes has won the orchestration battle: its inclusion in Docker Desktop and its presence on all major cloud providers confirm that. Kubernetes is significantly more capable and has undeniably stronger community and corporate support. It handles scheduling onto the nodes in a compute cluster, actively manages workloads, and guarantees that their state matches the user’s declared intentions.

The difficulty of choosing between the two

If you change the application or image definition and want to see it running in Docker Compose, you run the command docker-compose up --build. With Kubernetes, you can rebuild the image with docker build --tag my-image:local, but you may notice that your changes are not running in Kubernetes immediately.

The issue is that Kubernetes has no way of knowing the image was rebuilt. The answer is to delete the pod the image was running in and recreate it. If you’re running a single pod, delete it and recreate it yourself from the YAML pod definition. If you are running a Deployment or a StatefulSet, you can either delete the pods so they are automatically recreated for you, or scale the replicas down to zero and back up again.
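In concrete terms, that recreation looks something like the following; my-pod and my-app are placeholder names:

kubectl delete pod my-pod

kubectl scale deployment my-app --replicas=0

kubectl scale deployment my-app --replicas=1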

Advantages & disadvantages of Docker Compose and Kubernetes

Docker Compose is a tool for defining and running multi-container applications. From a web-development standpoint, Docker is unquestionably a superb piece of engineering, and Compose is an excellent way to manage the lifecycle of your application in development: getting it up and running and shutting it down again.

With docker-compose, we can run a build process, and that build cycle produces the images we then use to create containers.

In Docker Compose, volumes can be fairly precise: we can mount any file or subdirectory relative to the directory from which we run docker-compose. That makes it simple to discover, inspect, and clean up those files. Kubernetes is not like this. It isn’t running from a project folder the way Docker Compose is; it’s running inside the Docker Desktop virtual machine somewhere. So, if we define a volume to be mounted in a container, where does the data for that volume live? It lives somewhere in the Docker Desktop virtual machine, unless we’re running WSL 2. Fortunately, Docker Desktop has file sharing set up with the host OS, so we can take advantage of this to inspect or clean up persistent data.

Using Docker Compose for local development is, without a doubt, more convenient than using Kubernetes. Generally, you only need two commands to build, run, rebuild, re-run, and shut down your applications in Docker: docker-compose up --build and docker-compose down. For volumes, Docker Compose lets you mount a directory relative to where you execute docker-compose from, and it works across platforms.

Additionally, Docker Compose is safer: there’s no chance you will accidentally docker-compose a half-built image into production.

Docker Compose has the drawback that duplicating your Kubernetes manifests into docker-compose files is repeated work. Considering the extra configuration, volume definitions, and preparation that Kubernetes development requires, though, this is probably a small difference.

Kubernetes, more precisely, mirrors what you will deploy into shared Kubernetes clusters or production. Using a tool like Helm gives you package-manager-like facilities for installing remotely built artifacts without redefining them in your local repository.

Using Kubernetes requires good knowledge of Kubernetes and its surrounding tools, or extra scripting to hide these details. Kubernetes tools like kubectl and Helm depend on a context that could be pointed at the wrong Kubernetes cluster, which would cause unwanted havoc! Safeguards help, such as setting up RBAC in staging or production clusters where sensible, or working locally inside a namespace that doesn’t exist in other clusters.

Pros – Docker Compose vs Kubernetes

Kubernetes and Docker Compose can be sorted as “container” tools.

“Multi-container descriptor,” “quick development environment setup,” and “easy linking of containers” are the key factors why engineers consider Docker Compose, while “leading Docker container management solution,” “simple and powerful,” and “open source” are the primary reasons Kubernetes is favored.

Docker Compose and Kubernetes are both open-source tools. Kubernetes, with 55K GitHub stars and 19.1K forks, appears to have broader adoption than Docker Compose, with 16.6K GitHub stars and 2.56K forks.

Companies using Kubernetes and Docker Compose

Google, Slack, and Shopify are some of the well-known organizations that use Kubernetes, while StackShare, CircleCI, and Docker use Docker Compose. Kubernetes has wider adoption, being referenced in 1046 company stacks and 1096 developer stacks, compared with Docker Compose, which is listed in 795 company stacks and 625 developer stacks.

Key Drawbacks

While Docker Compose is a robust tool with a rich feature set, there are many things it cannot do. Objects like CRDs, Jobs, and StatefulSets cannot be created with Compose. Networking is possible, but describing it in a docker-compose.yml file can quickly become cumbersome.

Beyond the technical disadvantages of staying with Compose, consider the human impact. Far fewer people use Compose in production, so you will probably struggle to find a new hire ready to jump straight in, and the less common Compose features you come to rely on won’t help you learn to configure Kubernetes.

One option is that an engineer on the team works through the tutorial and gets everything defined in a .yml file, so you continue to use Compose; however, you then carry the cost of the engineering time spent converting the Kubernetes manifests. It also implies that your engineers understand the manifests well enough to convert them to another format, weakening the argument for using Compose at all.

The other possibility is that the sample manifest is meant as a proof of concept but ends up being used in production due to a deadline or other reasons. Now you have a mix of Compose files and Kubernetes manifests, which can quickly lead to confusion.

You will have a hard time integrating with other tools, since many tools build on existing Kubernetes manifests. Some of them, like Helm, ease deployment; others, like Skaffold, work alongside your tooling to run your application in Kubernetes as you develop it. There may be workarounds that allow you to use these tools, but you will not find authoritative documentation on how to set them up. Keeping up with these workarounds is challenging, and it leaves room for errors.

Conclusion

You may be able to replace Docker Compose with Kubernetes in the near future, but given the additional complexity and the compromises involved, it may be worthwhile to use both. Docker Compose is probably sufficient, and much easier, for everyday development. Using a local Kubernetes cluster is a step up in complexity and effort, so the choice is yours; it is unquestionably useful when developing Helm charts or manifests, or in circumstances where you need to faithfully recreate a piece of your deployment design.

In a realistic environment, a cloud-native application can be deployed in many ways. No matter how many microservices you have, you can configure your cloud group for maximum performance. The two most popular ways are Kubernetes and Docker Compose, with Docker Compose being more popular today.

Read more
09May

Kubernetes Secrets

May 9, 2022 admin Kubernetes

Confidential information is crucial for running production systems, but exposing it to a wide audience or making it easily accessible puts the whole system at risk. Storing crucial data such as passwords, authentication tokens, and SSH keys in plaintext inside containers is a bad idea, yet containers frequently need this information to perform operations such as integrating one system with another.

To make confidential information accessible yet secure, Kubernetes provides a mechanism for storing, managing, and sharing such data, known as Secrets, which is available in every Kubernetes cluster. By default, information stored in a Secret is only encoded with Base64, not encrypted. Still, keeping confidential data in Secrets gives users better control over passwords, keys, and similar material. Let's roll down the blog to learn everything about Kubernetes Secrets in detail.

What are Secrets in Kubernetes?

Secrets allow Kubernetes users to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys, more effectively. They are a better and more flexible option than embedding such data in Pod specifications or container images.

Users have the flexibility of either creating Secrets manually or configuring the infrastructure to create them automatically. Keep in mind, though, that any user with API access, and anyone with access to etcd (the Kubernetes data store), can retrieve a Secret as plain text.

Protections Offered by Kubernetes

Kubernetes resources add layers of protection to Secrets to prevent them from being exposed. The following is a brief overview of the security contributed by each resource in the environment:

1. Secret resources

Pods and Secrets are separate objects, which makes the data stored in a Secret less prone to exposure during the pod lifecycle. The first step in sharing critical information with pods is therefore to create it separately as Secret objects.

2. Kubelet

Kubelets are node agents that run on every node and interact with containers at runtime. Containers consume data stored in Secrets, so that data must be available on their nodes, but Secrets are not shared with all nodes; they are sent only to nodes running a pod that uses the Secret.

3. Pods

Multiple pods run on every node, but only those defined to use a Secret can access it. Moreover, each pod may run several containers, and a Secret is only exposed to containers that request it in their volume-mount specification. This behavior reduces the chance of Secrets being shared with pods or containers unnecessarily.

4. Kubernetes API

Secrets are created and accessed through the Kubernetes API. Kubernetes therefore secures all conversations between users, the API server, and the kubelet using SSL/TLS.

5. etcd

Just like the data of any other Kubernetes resource, Secrets are stored in etcd, so anyone who breaks into etcd via the control plane could read their contents. To prevent this, Kubernetes lets users encrypt Secret data at rest. Encryption further isolates Secrets from other Kubernetes resources and reduces their exposure.
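
As a sketch of how encryption at rest can be enabled, an administrator passes an EncryptionConfiguration file to the API server via its --encryption-provider-config flag. The key name and key material below are placeholders, not values from this article:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}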

Types of Kubernetes Secrets

Every Secret created is assigned a “type” via the type field of the Secret resource. The main purpose of defining a Secret's type is to facilitate programmatic handling of the information. Kubernetes offers several pre-configured types for specific scenarios.

An individual can also define their own Secret type by assigning a non-empty string as the type value of a Secret object; an empty string is treated as the ‘Opaque’ type. Kubernetes places no further constraints on custom type names.

1. Opaque Secrets

This is the default Secret type. When creating a Secret with kubectl, the generic subcommand indicates the Opaque type. For instance, the commands below create and then inspect an empty Opaque Secret:

kubectl create secret generic empty-secret

kubectl get secret empty-secret
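
The second command should report the new Secret with type Opaque and zero data items, along these lines (the AGE value will vary):

NAME TYPE DATA AGE
empty-secret Opaque 0 2m6s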

2. Service account token Secrets

A kubernetes.io/service-account-token, or service account token Secret, is used to store a token that identifies a service account. When using this type, make sure the kubernetes.io/service-account.name annotation is set to the name of an existing service account.

Kubernetes automatically creates service account token Secrets and attaches them to pods by default, providing credentials for accessing the Application Program Interface. This automatic creation and use of API credentials can be switched off if required.
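
As a sketch, a manifest for this Secret type might look as follows, assuming a service account named sa-name already exists in the namespace:

apiVersion: v1
kind: Secret
metadata:
  name: secret-sa-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token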

3. Docker config Secrets

To create a Secret that stores credentials for accessing Docker registries, choose one of the following Secret types:

kubernetes.io/dockercfg

This Secret type stores a serialized ~/.dockercfg, the legacy format for configuring the Docker command line. When using it, make sure the Secret's data field contains a .dockercfg key whose value is the content of a ~/.dockercfg file encoded in base64.

kubernetes.io/dockerconfigjson

This Secret type stores serialized JSON that follows the same rules as the ~/.docker/config.json file, the newer replacement for ~/.dockercfg. When using it, add a .dockerconfigjson key to the Secret's data field, with the content of the config file as a base64-encoded string.
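
Rather than hand-encoding these files, you can let kubectl build a kubernetes.io/dockerconfigjson Secret for you. As a sketch, with placeholder values:

kubectl create secret docker-registry regcred \
--docker-server=<your-registry-server> \
--docker-username=<your-username> \
--docker-password=<your-password> \
--docker-email=<your-email>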

4. Basic authentication Secret

The Kubernetes Secret type kubernetes.io/basic-auth stores credentials used for basic authentication. Its data field should contain the following keys:

  • username: This field must contain the username for basic authentication
  • password: It is the password for authentication

Both of the aforementioned values must be base64-encoded strings (or supplied unencoded via stringData). The main intent of this Secret type is convenience: using the built-in type standardizes the format of the credentials, so API servers and tooling can reliably find the required keys in a Secret configuration.
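
A minimal sketch of such a manifest, with illustrative values supplied unencoded through stringData:

apiVersion: v1
kind: Secret
metadata:
  name: secret-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: t0p-Secret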

5. SSH authentication Secrets

The built-in type kubernetes.io/ssh-auth stores information used in SSH authentication. When using it, specify an ssh-privatekey key-value pair in the data (or stringData) field holding the SSH credential to use.

6. TLS secrets

Kubernetes comes with the built-in Secret type kubernetes.io/tls for storing a certificate and its associated key, as used in TLS. Kubernetes uses this data during TLS termination of an Ingress, the API object that provides routing rules for managing external traffic, but it can also be used with other Kubernetes resources or directly by a workload. Administrators must provide the tls.key and tls.crt keys in the data (or stringData) field of the Secret configuration.
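
kubectl also offers a shortcut for this type. As a sketch, assuming the certificate and key files already exist at the given paths:

kubectl create secret tls my-tls-secret \
--cert=path/to/tls.crt \
--key=path/to/tls.key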

7. Bootstrap token Secrets

To create this Secret type, set the type to bootstrap.kubernetes.io/token. It is designed for tokens used during the node bootstrap process; the stored token is also used for signing the cluster-info ConfigMap.

Bootstrap token Secrets are created in the kube-system namespace and named in the format bootstrap-token-<token-id>, where <token-id> is a six-character token ID. The data stored in this Secret type comprises the following keys:

  • token-id: An id with six characters used for identifying tokens.
  • token-secret: The original token secret key, which is sixteen characters long.
  • description: Brief information that is used by individuals to know the functionalities of a particular token.
  • expiration: An absolute timestamp in Coordinated Universal Time (UTC) specifying when the token expires.
  • usage-bootstrap-<usage>: Boolean flags that indicate the extra purposes of bootstrap tokens.
  • auth-extra-groups: It is a list of groups separated by commas that get authenticated along with the system:bootstrappers.

Creating Kubernetes Secrets

There are two ways for organizations to create Kubernetes Secrets for their infrastructure:

1. Built-in secrets

With built-in secrets, Kubernetes builds certain Secrets automatically, such as API credentials, and attaches them to containers. If this leads to security concerns, users are allowed to disable the behavior.

2. Custom secrets

Users can create secrets themselves by defining the desired sensitive data. One way is the kubectl create secret command shown below, passing in the files that hold the sensitive information to store:

kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt

The following sections walk through creating Secrets manually using kubectl, configuration files, or Kustomize.

3. Using Kubectl

To create a Secret with kubectl, you need a Kubernetes cluster with kubectl configured to communicate with it. The Kubernetes documentation suggests running the examples on a cluster with at least two nodes that are not acting as control-plane hosts. If you don't have a cluster yet, you can create one quickly with minikube.

Secrets are useful for storing confidential information used by pods. For example, save a username in a file named ./username.txt and a password in a file named ./password.txt on your local machine with these commands:

echo -n 'admin' > ./username.txt

echo -n '1f2d1e2e67df' > ./password.txt

In the commands above, the -n flag ensures the created files contain no trailing newline character. This matters because kubectl reads the files and encodes their contents as base64 strings; any extra newline would be encoded too.

Run the kubectl create secret command to package these files into a Secret and create the object on the API server:

kubectl create secret generic db-user-pass \
--from-file=./username.txt \
--from-file=./password.txt

To verify that the Secret was created and contains the expected items, run the following kubectl command:

kubectl get secrets

For deleting the created secrets, run the following command:

kubectl delete secret db-user-pass

4. Using Configuration files

Before proceeding, create a configuration file; Kubernetes Secrets can be defined in such files in either JSON or YAML format. A Secret resource carries its payload in two fields: data and stringData. The data field stores arbitrary data, encoded in base64.

The stringData field exists for the convenience of users and lets them provide Secret data as unencoded strings. Keep in mind that keys in both data and stringData may consist only of alphanumeric characters, '-', '_', and '.'. To store a string in a Kubernetes Secret via the data field, first convert it to base64:

echo -n 'admin' | base64
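
This prints YWRtaW4=, the base64 encoding of admin. As a sketch, a secret.yaml using that value might look as follows; the name mysecret matches the delete command further below:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=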

Now execute the following kubectl command to create the Secret:

kubectl apply -f ./secret.yaml

To delete the Secret you created using the config file, run:

kubectl delete secret mysecret

5. Using Kustomize

Since Kubernetes v1.14, kubectl supports managing objects using Kustomize. Kustomize helps users create Secrets and ConfigMaps through resource generators; the only prerequisite is that the generators are defined in a kustomization.yaml file.

Creating the Kustomization file: to generate a Secret, define a secretGenerator in your kustomization.yaml file, referencing the files that hold the data:

secretGenerator:
- name: db-user-pass
  files:
  - username.txt
  - password.txt

Another way of doing the same is to provide “literals”. For instance, the following kustomization.yaml defines the username and password as two literals:

secretGenerator:
- name: db-user-pass
  literals:
  - username=admin
  - password=1f2d1e2e67df

Now, to create the Secret, apply the directory containing the kustomization.yaml file:

kubectl apply -k .

Conclusion

Kubernetes provides a mechanism for storing, managing, and sharing confidential data, known as Secrets, available in every Kubernetes cluster, which keeps that data accessible while staying secure. Users can either create Secrets manually or configure the infrastructure to create them automatically.

Each new Secret is assigned a “type” via the Secret resource's type field. Kubernetes pre-defines seven Secret types for different purposes, and an individual can also create their own type by assigning a non-empty string as the type value. This blog covered three ways of creating Kubernetes Secrets: using kubectl, a configuration file, and Kustomize.

Read more
09May

How to Install Software on Kubernetes Clusters?

May 9, 2022 admin Kubernetes

Kubernetes has its own package manager: Helm. It lets developers configure and install applications on Kubernetes clusters easily, and it provides functions similar to a package manager in any other operating system:

  • Helm uses a packaging format called a chart, which defines a standard file and directory layout for packaging Kubernetes resources.
  • For most popular software, Helm offers a public repository of charts. Charts can also be fetched from third-party repositories.
  • The Helm client includes commands for listing and searching charts by keyword, deploying applications to clusters, removing applications, and managing releases.

Thus, Helm has a crucial role in installing software in Kubernetes clusters.

If you want to learn how Helm assists in deploying apps on Kubernetes, this article is for you.

Step 1: Installing Helm

First, install the helm command-line utility on your machine. Helm provides a script that manages the installation process on macOS, Windows, and Linux.

Download the script to a writable folder:

cd /tmp

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh

Then make the script executable with chmod:

chmod u+x install-helm.sh

Now open the script with your preferred text editor and review it thoroughly. Once you're satisfied, run it:

./install-helm.sh

You may be prompted for your password; enter it and press Enter.

Output

helm installed into /usr/local/bin/helm

Run ‘helm init’ to configure helm.

To finish the installation, you'll install some Helm components on the cluster.

Step 2: Installing Tiller

The helm command works together with Tiller, a companion service that runs on the cluster, receives commands from helm, and interacts with the Kubernetes API to handle creating and removing resources.

To permit Tiller to run on the cluster, you first need a Kubernetes serviceaccount resource.

To create the serviceaccount for Tiller, enter the command;

kubectl -n kube-system create serviceaccount tiller

Next, bind the cluster-admin role to the tiller serviceaccount:

kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Then run helm init, which installs Tiller on your cluster and performs some local housekeeping, such as downloading the stable repo details:

helm init --service-account tiller

Output

. . .

Tiller has been installed into your Kubernetes Cluster.

Note: Tiller is installed with an insecure ‘allow unauthenticated users’ policy, by default.

To check whether Tiller is running, list the pods in the kube-system namespace:

kubectl get pods --namespace kube-system

Output

NAME READY STATUS RESTARTS AGE

. . .

kube-dns-64f766c69c-rm9tz 3/3 Running 0 22m

kube-proxy-worker-5884 1/1 Running 1 21m

kube-proxy-worker-5885 1/1 Running 1 21m

kubernetes-dashboard-7dd4fc69c8-c4gwk 1/1 Running 0 22m

tiller-deploy-5c688d5f9b-lccsk 1/1 Running 0 40s

 

You can see Tiller running in the pod whose name starts with tiller-deploy-.

You have now successfully installed both Helm and Tiller, and Helm is ready to use for installing applications.

Step 3: Installing Helm chart

Helm charts are Helm's software packages. Helm comes with a built-in chart repository called stable.

To install the kubernetes-dashboard package from the stable repo, use helm:

helm install stable/kubernetes-dashboard --name dashboard-demo

Output

NAME: dashboard-demo

LAST DEPLOYED: Wed Aug 8 20:11:07 2018

NAMESPACE: default

STATUS: DEPLOYED

Check the NAME line in the output: dashboard-demo is the name of your release. A release is a single deployment of a chart with a particular configuration.

You can deploy multiple releases of the same chart, each with its own configuration.

If you don't specify a release name with --name, Helm generates a random one for you.

To list the releases on the cluster, ask Helm:

helm list

Output

NAME REVISION UPDATED STATUS CHART NAMESPACE

dashboard-demo 1 Wed Aug 8 20:11:11 2018 DEPLOYED kubernetes-dashboard-0.7.1 default

To check the new service deployed on the cluster, use kubectl:

kubectl get services

Output

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

dashboard-demo-kubernetes-dashboard ClusterIP 10.32.104.73 <none> 443/TCP 51s

kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 34m

The service name, dashboard-demo-kubernetes-dashboard, is a combination of your release name and the chart name.

You have now deployed the application successfully. Next, you'll change its configuration with Helm and update the deployment.

Step 4: Updating the Release

If you want to upgrade a release to a newer version of its chart, or change its configuration, use the helm upgrade command.

To demonstrate the upgrade and rollback process on the dashboard demo, let's update the name of the dashboard service to simply dashboard, rather than dashboard-demo-kubernetes-dashboard.

The kubernetes-dashboard chart provides a fullnameOverride configuration option for controlling the service name.

helm upgrade dashboard-demo stable/kubernetes-dashboard --set fullnameOverride="dashboard"

You'll see output similar to the initial helm install step.

To check whether the Kubernetes services reflect the updated values, run:

kubectl get services

Output

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 36m

dashboard ClusterIP 10.32.198.148 <none> 443/TCP 40s

The service has been updated correctly.

Step 5: Rolling back a Release

If you want to roll back a release after updating it, this step will guide you.

When you updated dashboard-demo, you created a second revision of the release. Helm retains all previous revisions, so you can roll back to an old configuration at any time.

Run helm list to inspect the release:

helm list

Output

NAME REVISION UPDATED STATUS CHART NAMESPACE

dashboard-demo 2 Wed Aug 8 20:13:15 2018 DEPLOYED kubernetes-dashboard-0.7.1 default

The REVISION column in the output shows that this is now the second revision.

To roll back to the first revision, use the command;

helm rollback dashboard-demo 1

The output indicates that the rollback succeeded:

Output

Rollback was a success! Happy Helming!

Now run kubectl get services again; you'll notice the service name has changed back to its previous value, which means Helm has re-deployed the application with the revision 1 configuration.

Next, let's look at removing releases.

Step 6: Deleting a Release

To delete the Helm release, use the command helm delete;

helm delete dashboard-demo

The release is removed, which automatically stops the dashboard application.

However, Helm saves the release history even after deletion, in case you want to re-deploy the release later.

This means that if you try to helm install a new release named dashboard-demo, you will get an error.

To list your deleted releases, use the --deleted flag:

helm list --deleted

If you want to remove the release permanently and free its name for reuse, pass the --purge flag to helm delete:

helm delete dashboard-demo --purge

After this, the release is permanently removed.

Conclusion

In a nutshell, you now have all the information you need to install software on a Kubernetes cluster using Helm. The steps above covered installing the helm command-line tool and its associated Tiller service, then installing a chart, and upgrading, rolling back, and deleting a release.

Read more
09May

What is Kubernetes DaemonSet and How to Use It?

May 9, 2022 admin Kubernetes

Kubernetes is an open-source platform for operating and managing containerized applications. Kubernetes makes it easy to automate software deployments, maintain containerized apps, and scale clusters.

It comes with lots of deployment options for running containers and one of them is the DaemonSet. In this blog, we’ll discuss what DaemonSets are, what they do, and how to create them.

What is a Kubernetes DaemonSet?

Kubernetes ensures that running applications have sufficient resources, operate consistently, and have high availability during their lifecycle. A DaemonSet overcomes Kubernetes’ scheduling limits by ensuring that a given app is distributed throughout the cluster’s nodes.

A DaemonSet assures that a replica Pod runs on all (or some) of the nodes. Pods are deployed to nodes as the nodes join the cluster, and when a node is removed from the cluster, its DaemonSet pods are garbage-collected. Deleting a DaemonSet also deletes the Pods it produced.

A DaemonSet is usually described in a YAML file, whose fields give you fine control over how the Pods are deployed. Using labels to start certain Pods on only a subset of nodes is a good example.

Why Use a DaemonSet?

DaemonSets can increase cluster efficiency by deploying Pods that perform maintenance jobs and provide support services on each node. Specific background processes, Kubernetes monitoring apps, and other agents must be present throughout the cluster to deliver adequate, up-to-date service. DaemonSets are well suited to long-running services, such as:

  • Collection of logs
  • Monitoring of node resources
  • Storage in a cluster
  • Pods that deal with infrastructure (system operations)

A DaemonSet commonly deploys a single daemon type across all nodes, but multiple DaemonSets can manage the same daemon type by using distinct labels. Labels in Kubernetes define deployment rules based on specific node attributes.

How to Create a DaemonSet?

1. Configuring DaemonSets

DaemonSets, like everything else in Kubernetes, can be configured using a YAML file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
  namespace: my-namespace
  labels:
    key: value
spec:
  template:
    metadata:
      labels:
        name: my-daemonset-container
    ...
  selector:
    matchLabels:
      name: my-daemonset-container

A YAML file is composed of the following sections:

  • apiVersion (required)
  • kind (required)
  • metadata (required)
  • spec.template (required): a pod template describing the pods you want to deploy on all the nodes.
  • spec.selector (required): a selector for the pods managed by the DaemonSet. It must match one of the labels defined in the pod template; above, the label name: my-daemonset-container from the template is reused in the selector. The value cannot be changed after the DaemonSet is created without orphaning the pods it has already produced.
  • spec.template.spec.nodeSelector: optional; restricts the pods to run only on nodes that match the selector.
  • spec.template.spec.affinity: optional; restricts the pods to run only on nodes that satisfy the affinity rules.

2. Creating DaemonSets

After you've finished configuring the DaemonSet, create it by running the following command:

kubectl apply -f daemonset-node-exporter.yaml
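
The examples in the next steps assume daemonset-node-exporter.yaml describes a node-exporter DaemonSet in a monitoring namespace. As a minimal sketch, where the image and port are illustrative rather than taken from this article:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.3.1
        ports:
        - containerPort: 9100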

The command confirms the creation of the DaemonSet by displaying a message on the screen.

3. Confirming the state of the DaemonSet

After you've submitted the node-exporter DaemonSet, use the describe command to verify its present state:

kubectl describe daemonset node-exporter -n monitoring

The output contains basic DaemonSet info and shows that the pods have been distributed on all of the nodes that are accessible.

4. Listing all the running pods

You can also validate this by executing the following command to display all operating pods:

kubectl get pod -o wide -n monitoring

From now on, the DaemonSet will also deploy a node-exporter Pod to any node that is newly added to the cluster.

Communicating with pods created by DaemonSet

There are several ways to communicate with the pods created by a DaemonSet. Here are a few options:

1. Push

Pods can be configured to push data to other services, such as a monitoring service or a stats database, so no clients need to reach them.

2. NodeIP & Known Port

Pods in the DaemonSet can use a hostPort, making them reachable via the node IPs. Clients know the list of node IPs and the agreed-upon port by convention.

3. DNS

Create a headless service with the same pod selector, then discover DaemonSet pods through the endpoints resource or retrieve multiple A records from DNS.

4. Service

Create a service with the same pod selector and use it to reach a daemon on a random node. (There is no way to reach a specific node this way.)

How to Update a DaemonSet?

In Kubernetes releases prior to version 1.6, the OnDelete update technique was the only way to update Pods maintained by a DaemonSet. With OnDelete, you must manually delete each Pod so the DaemonSet can create a new Pod with the updated configuration. Newer Kubernetes versions use rolling updates by default; the update mechanism is specified by the spec.updateStrategy.type field, which defaults to RollingUpdate.

The rolling update technique removes outdated Pods and replaces them with new ones, in a fully automated and managed procedure. However, deleting and recreating Pods carries a risk of unavailability and extended downtime. Two parameters let users control the update process (a sketch of how they appear in a manifest follows the list):

  • minReadySeconds: The value is expressed in seconds, and specifying a relevant time range ensures that the Pod remains healthy before the system moves on to another Pod.
  • updateStrategy.rollingUpdate.maxUnavailable: It lets you choose the maximum number of Pods that can be updated at once. This parameter’s value is significantly dependent upon the type of application that is being delivered. To achieve high availability, it is vital to strike a balance between speed and safety.
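
As a sketch, these fields sit in the DaemonSet spec alongside the pod template; the values here are illustrative:

spec:
  minReadySeconds: 5
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1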

Monitor the progress of a DaemonSet rolling update with the kubectl rollout command:

kubectl rollout status ds/node-exporter -n monitoring

The command watches the DaemonSet for modifications and reports the actual rollout status of the node-exporter DaemonSet.

How Daemon Pods are Scheduled

Typically, the Kubernetes scheduler chooses the machine on which a pod runs. Pods created by the DaemonSet controller, however, already have their node selected (.spec.nodeName is set when the pod is created). Therefore:

  • The DaemonSet controller does not recognize a node’s unschedulable field.
  • The DaemonSet controller can create pods even if the scheduler isn’t running, which helps aid cluster bootstrapping.

Daemon pods do respect taints and tolerations; however, they are created with NoExecute tolerations for the node.alpha.kubernetes.io/notReady and node.alpha.kubernetes.io/unreachable taints with no tolerationSeconds, so they are never evicted when a node becomes unready or unreachable.

Updating a DaemonSet

If node labels change, the DaemonSet promptly adds pods to newly matching nodes and removes pods from newly non-matching nodes. You can modify the pods a DaemonSet creates, although not all pod fields can be updated, and the DaemonSet controller will use the original template the next time a node is created (even one with the same name).

A DaemonSet can be deleted. If you pass --cascade=false to kubectl delete, the pods will be left behind on the nodes. After that, you can create a new DaemonSet with a different template. The new DaemonSet will recognize all the existing pods as having matching labels, and despite any discrepancy in the pod template, it will not modify or remove them. To force new pods to be created, you must delete the pods or the nodes. You can perform a rolling update on a DaemonSet on Kubernetes version 1.6 or later.

Deleting DaemonSets

It's straightforward to delete a DaemonSet: simply use the kubectl delete command with the DaemonSet's name. This removes the DaemonSet as well as all of its underlying pods.

If you only want to delete the DaemonSet and keep its pods running, add the --cascade=false flag to the kubectl delete command.
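
As a sketch, using the node-exporter DaemonSet from earlier:

kubectl delete daemonset node-exporter -n monitoring

kubectl delete daemonset node-exporter -n monitoring --cascade=false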

Conclusion

By ensuring that applications are dispersed across the cluster’s nodes, a DaemonSet overcomes Kubernetes’ scheduling limitations. DaemonSet is used to collect logs, monitor node resources, and store data in a cluster, etc.

By using these settings, monitoring, logging, and storage services can be easily implemented to improve the performance and reliability of the Kubernetes cluster and containers.

Read more
05May

How to Run Cassandra and Kubernetes Together?

May 5, 2022 admin Kubernetes

Kubernetes lets you run distributed applications across many systems, while Cassandra provides a distributed database environment that can serve as the data layer for those applications. Developers therefore run Cassandra and Kubernetes together to manage data so that applications behave the same way wherever they run. In this post, we focus on how to run Kubernetes and Apache Cassandra together so you can manage your containerized applications effortlessly.

What is Apache Cassandra?

When running Kubernetes applications on a cloud server, you need something that eases the deployment process. Cassandra is such a tool: it helps you scale your applications, provides a fault-tolerant database, and offers a straightforward data-management model. Being a database, Cassandra continually needs more storage for its data, which the containerized applications in Kubernetes consume.

Cassandra is a distributed NoSQL DBMS designed to handle any amount of data, and it is built to keep operating without a single point of failure, which suits containerized applications on Kubernetes well. There are a few important things about Cassandra that you should know:

  • Cassandra is written in Java.
  • It consists of nodes, racks, data centers, and clusters.
  • It uses an IP address to identify the nodes.

To run Cassandra on Kubernetes, the community has come up with K8ssandra, a distribution of Apache Cassandra made ready for Kubernetes. It avoids latency in application operation and improves scaling. Before running Cassandra on Kubernetes, however, you need to decide what will operate the applications.

One way to do this is through a Kubernetes operator, which automates the application deployment process by encoding domain-specific knowledge and interacting with external systems. Cass-operator is a Kubernetes operator that supports open-source Kubernetes as well as managed platforms such as Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Pivotal Container Service (PKS).

All you need to do is install cass-operator on your Kubernetes cluster. We will look into that in later sections and walk through installing cass-operator and configuring it with YAML configuration files on your Kubernetes cluster.

What Would You Need to Transform Cassandra to Kubernetes?

If you run a lot of K8s clusters, you need a convenient way to maintain all of them, and migrating Cassandra to Kubernetes helps with that. In this section, we focus on what it takes to move Cassandra onto Kubernetes.

1. Data Storage

Cassandra stores part of its data in RAM, in a structure known as the MemTable, while the rest is saved on disk as SSTables. The disk also keeps a transaction log of the data, called the Commit Log. In Kubernetes, this on-disk data is handled through the PersistentVolume mechanism. Cassandra itself ships with default mechanisms for storing and distributing data, including mechanisms designed specifically for data replication.

Since Cassandra handles replication across its own nodes and clusters, you won't necessarily need distributed storage such as Ceph or GlusterFS. You can store the data on the node's own disk using a hostPath volume, or use local persistent volumes. You can still choose distributed storage if, for instance, you want to create multiple environments for developers; when developers work separately on different application features, a single Cassandra node is often enough.

With the data in distributed storage, it stays safe: if a Kubernetes node stops working properly, the distributed storage still holds the data.

2. Monitoring

How do you monitor performance and events in Kubernetes clusters? There is a tool for that: Prometheus. But does Cassandra support Prometheus-based monitoring?

If you want to export Cassandra metrics to your dashboards, you can use jmx_exporter or cassandra_exporter; most people use the first option because it is easier to set up. Either exporter runs as a Java agent alongside Cassandra, for example:

-javaagent:<plugin-dir-name>/cassandra-exporter.jar=--listen=:9180

3. How to Choose Kubernetes Primitives?

You can map a Cassandra cluster onto Kubernetes resources: Cassandra node → Pod, Cassandra rack → StatefulSet, Cassandra datacenter → pool of StatefulSets. But what maps to the Cassandra cluster itself? Kubernetes has a mechanism for defining custom resources (CRDs), so the cluster can be modeled as a custom resource managed by a Kubernetes operator and its controller.

4. How to Identify Pods?

Cassandra cluster nodes correspond to Kubernetes pods, and Cassandra identifies its nodes by IP address. Each Kubernetes pod has its own IP address, which changes whenever the pod is recreated, so Cassandra would treat a recreated pod as a brand-new node joining the cluster. There are a few ways to help Cassandra identify pods consistently:

  1. You can track hosts by UUIDs or host identifiers rather than IP addresses, or keep a table mapping IP addresses to host information.
  2. You can create a ClusterIP service for each individual cluster node.
  3. You can run Cassandra nodes on the node network instead of a dedicated pod network by setting hostNetwork: true.

These are the 3 different ways to help Cassandra identify pods in Kubernetes, and you can choose any one of them depending on your requirements.

5. Backups

A CronJob can drive backups of the data on Cassandra nodes. However, since Cassandra keeps recent writes in memory, you first need to flush the MemTables to SSTables, which means draining the node. During the drain, the Cassandra node stops receiving data and becomes unreachable; the node then backs up the data by creating a snapshot and saving the schema.

The snapshot alone is not a complete backup, but a Google-provided example script can collect the snapshot files into an archive. Note that this script does not flush data on the Cassandra node before taking the snapshot. Here is an example of such a script for backing up data on Cassandra nodes:

#!/bin/bash
set -eu

# Require a keyspace argument ("${1:-}" avoids an unbound-variable error under set -u)
if [[ -z "${1:-}" ]]; then
  echo "Please provide a keyspace"
  exit 1
fi

KEYSPACE="$1"

# Take a snapshot of the keyspace
result=$(nodetool snapshot "${KEYSPACE}")

if [[ $? -ne 0 ]]; then
  echo "Error while making snapshot"
  exit 1
fi

# Extract the snapshot directory name from nodetool's output
timestamp=$(echo "$result" | awk '/Snapshot directory: / { print $3 }')

mkdir -p /tmp/backup

# Collect the snapshot files of each table into /tmp/backup
for path in $(find "/var/lib/cassandra/data/${KEYSPACE}" -name "$timestamp"); do
  table=$(echo "${path}" | awk -F "[/-]" '{print $7}')
  mkdir -p "/tmp/backup/$table"
  mv "$path" "/tmp/backup/$table"
done

# Archive the backup and clean up the snapshot
tar -zcf /tmp/backup.tar.gz -C /tmp/backup .
nodetool clearsnapshot "${KEYSPACE}"

How to Set Up the Cass-Operator Definitions?

In case we have not broken down the different parts of Cassandra already, now is the time. Cassandra's parts, such as the nodes, racks, and data centers, each play a different role in the system, and cass-operator refers to them as “definitions”. Let's learn about them first, and then set up the cass-operator definitions on Kubernetes.

Nodes: A node is a computer system that runs an instance of Cassandra. A node can be a physical host, a machine instance in the cloud, or even a Docker container.

Racks: A rack is a set of Cassandra nodes located near one another and connected to a single network switch. In cloud deployments, a rack corresponds to machine instances running in the same availability zone.

Data center: A combination of racks forms a complete data center. The racks should be in the same location and connected to the same network. In cloud deployments, a data center maps to a cloud region.

Clusters: A collection of data centers that support the same application forms a cluster. Data centers can be physical or cloud-based, and a cluster can span both. Clusters can also be distributed across geographic locations to reduce latency for users.

Now that we know Cassandra's definitions, we can use GKE or another Kubernetes engine to connect Cassandra to Kubernetes so the two can run together. Let's follow the steps below:

Step 1: Apply the Cass-operator YAML Files to Your Cluster

Here, use the kubectl command-line tool to apply the YAML files, which install the cass-operator definitions into the Kubernetes cluster. These files are also called manifests: API object descriptions that define the desired state of your resources. Version-specific manifests are available on the cass-operator GitHub page.

Here is an example of the kubectl command on a GKE cloud running Kubernetes:

(Screenshot: kubectl command for a GKE cloud running Kubernetes 1.16.)
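
As a sketch, the command has the following shape; the manifest URL is an assumption based on the cass-operator repository layout, so substitute the operator release for <version> and pick the manifest matching your Kubernetes version from the GitHub page:

kubectl apply -f https://raw.githubusercontent.com/datastax/cass-operator/<version>/docs/user/cass-operator-manifests-v1.16.yaml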

Step 2: Apply YAML to the Storage Configuration

The next step is to apply a YAML file defining the storage settings for the Cassandra nodes. A StorageClass resource acts as a layer between physical storage and persistent volumes in a Kubernetes cluster. See the example below, where SSD is used as the storage type:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: server-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: none
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Step 3: Applying the YAML Files that Define the Data Center

Use kubectl again to apply the YAML file that defines the Cassandra data center. Check out the example below:

# Sized to work on 3 k8s worker nodes with 1 core / 4 GB RAM
# See neighboring example-cassdc-full.yaml for docs for each parameter
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: cluster1
  serverType: cassandra
  serverVersion: "3.11.6"
  managementApiAuth:
    insecure: {}
  size: 3
  storageConfig:
    cassandraDataVolumeClaimSpec:
      storageClassName: server-storage
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
  config:
    cassandra-yaml:
      authenticator: org.apache.cassandra.auth.PasswordAuthenticator
      authorizer: org.apache.cassandra.auth.CassandraAuthorizer
      role_manager: org.apache.cassandra.auth.CassandraRoleManager
    jvm-options:
      initial_heap_size: "800M"
      max_heap_size: "800M"
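
Assuming you save this manifest as cassandra-datacenter.yaml (the filename is illustrative), apply it the same way as the earlier files:

kubectl apply -f cassandra-datacenter.yaml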

After this step, you can inspect the resources you have created in the Google Cloud Console: click the Clusters tab and look at the services that are running. These computing units are deployable objects, so you can manage them like anything else in your Kubernetes clusters.

Other Cassandra Solutions for Kubernetes

You can also use a StatefulSet or a Helm chart to deploy Cassandra on Kubernetes. You do, however, need to figure out which solution fits your use of Cassandra on Kubernetes best. Here are the main options:

StatefulSet or Helm-Chart-Based Solutions

You can use StatefulSet- or Helm-chart-based solutions to deploy a Cassandra cluster on Kubernetes. This approach is effective and commonly used, but the failure of a node can ruin your whole deployment plan, and standard Kubernetes tools cannot resolve that situation alone. With Helm-chart-based solutions, you also cannot replace nodes, restore data, or monitor the cluster.

Hence, that leads us to the other choice for Cassandra on Kubernetes: operators.

Kubernetes Operator

Have a look at some of the available operators apart from the one we used above:

1. Cassandra-operator

Cassandra-operator is written in Java, licensed under Apache 2.0, and offers management of Cassandra deployments on Kubernetes. It supports monitoring, cluster management, and data backup; however, it does not support multiple racks within a single data center.

2. Navigator

Navigator implements DB-as-a-Service on Kubernetes and is compatible with both Elasticsearch and Cassandra. With Navigator, you can control access to the database via Kubernetes RBAC.

3. CassKop

The CassKop operator comes with its own CassKop plugin for kubectl, a Python-based tool that lets you interact with the Cassandra nodes.

Bottom Line

Cassandra helps you scale Kubernetes applications easily, and since there are many operators that support running Cassandra and Kubernetes together, you shouldn't face many problems. This guide will help you run them together; if you face any issue while performing the steps, connect with us through the comment box below.

Read more
22Apr

Best Kubernetes Configurations Practices

April 22, 2022 admin Kubernetes

Kubernetes is one of the most powerful tools in the DevOps field for bringing containerization into an IT architecture. At present, about 80% of IT organizations have adopted Kubernetes in their production environment, where it helps with container orchestration, load balancing, scaling, and more. Kubernetes makes containerizing applications easier, but it is not easy to use, especially for beginners, and working with several Kubernetes clusters at once brings its own difficulties. That's why you should follow the best Kubernetes configuration practices below to manage your clusters better and more effectively.

1. Keep Kubernetes Updated to Its Latest Version

Keeping your Kubernetes version updated is not optional. Regular updates add new and interesting features to your application containerization, and every release ships a batch of security fixes and resolves glitches from the previous version. Older versions of Kubernetes also receive less support from the community, so make sure your Kubernetes version is always up to date.

2. Version Control for the Configuration Files

Before you release configuration files to a cluster, store them in a version control system. These files may describe deployments, services, ingresses, and so on. Tracking them gives you a record of every change made, and lets you implement change-approval processes that keep your Kubernetes cluster secure and stable.

3. Use Pod Security Policies

Kubernetes provides a cluster-level resource known as PodSecurityPolicy, manageable through kubectl. To use it, you must enable the PodSecurityPolicy admission controller, and at least one policy must authorize a pod before it can be created in the cluster. PodSecurityPolicy has numerous use cases, including the following (a sample policy follows the list):

  • The security policy prevents containers from running with the privileged flag.
  • Does not allow you to share networking, ports, host PID/IPC namespace, etc. so that there is genuine isolation between the containers.
  • It limits the use of volume types, for example, host path directory volumes.
  • Enforces read-only in the root file system.
  • Prevents privilege escalation on the root.
  • Rejects containers that have root privileges.
  • Restricts Linux capabilities to provide the least privileged principles.

These security controls are possible through pod security policies that ensure the security of your containerized applications.
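
A minimal sketch of a restrictive policy covering several of these controls; note that PodSecurityPolicy was deprecated in Kubernetes v1.21 and removed in v1.25:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                  # reject containers with the privileged flag
  allowPrivilegeEscalation: false    # prevent privilege escalation
  requiredDropCapabilities:
    - ALL                            # drop all Linux capabilities
  hostNetwork: false                 # no sharing of host networking
  hostPID: false                     # no sharing of the host PID namespace
  hostIPC: false                     # no sharing of the host IPC namespace
  runAsUser:
    rule: MustRunAsNonRoot           # reject containers running as root
  readOnlyRootFilesystem: true       # enforce a read-only root filesystem
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                           # limit the allowed volume types
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim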

4. Use Kubernetes Namespaces

You can create logical partitions and separate your resources using namespaces, which also limit user permissions. Kubernetes starts with three initial namespaces: default, kube-public, and kube-system. If you have multiple teams working on the same Kubernetes cluster, namespaces keep them organized and separated. In a cluster with thousands of nodes and several teams, give each team its own namespaces; you can create development, deployment, testing, and similar namespaces per team. Organized this way, a development team cannot accidentally change anything in the testing namespace. Without that separation, mistakes in the cluster become much more likely. Namespaces are created with a single command, as shown below.
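
As a sketch, with illustrative names:

kubectl create namespace development

kubectl create namespace testing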

5. Use Labels

A Kubernetes cluster contains services, containers, networks, pods, and more. Maintaining these elements and keeping track of how they communicate with each other is troublesome, and that's where labels help: they are key-value pairs that make it easy to organize the cluster. For instance, if you run two similar components of the same application at the same time, with different teams looking after them, labeling the elements differently lets you express their ownership effortlessly, as sketched below.
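
A sketch of labels in a pod's metadata; the keys and values here are illustrative:

metadata:
  labels:
    app: payment-service
    team: checkout
    environment: production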

6. Establish Resource Requests & Limits

Deploying software to a Kubernetes cluster can fail when resources are scarce, and this situation is not uncommon when the cluster's resource requests and limits are not properly established. Without proper limits, pods can consume more CPU and memory than they should, leaving the scheduler unable to create new pods. Set resource requests to the minimum amount a container needs, and limits to the maximum amount it may use, as in the sketch below.
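
A sketch of requests and limits in a container spec; the values are illustrative:

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi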

7. Readiness and Liveness Probes

Readiness and liveness probes are essentially health checks. A readiness probe ensures a particular pod is running properly before load is directed to it; if the pod is not ready, requests are held back from your service until it is. A liveness probe verifies that the application is still working by periodically checking in with the pod; if the probe gets no response, the software is considered dead on that pod, and the container is restarted so the software comes back up. Both probes are declared on the container, as in the sketch below.
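
A sketch of both probes on a container; the paths, port, and timings are illustrative:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5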

8. Use Container Images but on Smaller Sizes

Some developers build their containers on base images that include many packages and libraries they don't need. We suggest picking smaller container images instead: they take less space in the registry, are quicker to pull and build, and carry fewer security risks.

9. Monitor Your Control Plane

Don't make the mistake of neglecting the control plane and its components: the Kubernetes API server, controller-manager, kubelet, kube-proxy, kube-dns, and so on. These are the core components of your cluster, and you need to monitor them continuously to keep up with its performance. The Kubernetes control plane exposes metrics in the Prometheus format, which can drive alerts when components run into any sort of issue. Monitoring the control-plane components regularly helps keep resource consumption and the overall workload within limits.

10. Audit Logs Regularly

The logs in the Kubernetes cluster help you identify threats and vulnerabilities, so store and audit them regularly and properly to keep your cluster safe. The logs and request data of the Kubernetes API are saved in the audit.log file, located at /var/log/audit.log, and the audit policy is saved at /etc/kubernetes/audit-policy.yaml. A minimal audit policy is sketched below.
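
As a sketch, this minimal audit policy records every request at the Metadata level, logging who did what and when without capturing request bodies:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata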

Conclusion

Even though getting started with Kubernetes containerization is easy for most beginners, you need to make sure you are using it right. Kubernetes is constantly being upgraded, and if you can't use it well, you won't get the best out of it. Applying these best Kubernetes configuration practices will help you get the most from containerization with Kubernetes. If you want to learn more about Kubernetes, check out our other articles.

Read more