Kubernetes Replication Controller: A Complete Guide

July 4, 2022 admin Kubernetes

Kubernetes is one of the most reliable orchestration platforms for container automation available today. The platform has a wide variety of uses and several properties worth knowing about. If you want to develop and configure an application on Kubernetes, you should learn the functionality and utility of its different properties. Kubernetes uses pods, groups of one or more containers, to create and deploy application configurations, and this is where the concept of a replication controller comes in. Without replication controllers, the basic functions of Kubernetes would be incomplete. In this article, we will discuss everything about replication controllers to help you know them well and recognize their functions.

What are Replication Controllers?

Replication controllers can be considered the building blocks that keep an application running on Kubernetes. As you know, pods are central to keeping Kubernetes functions going; replication controllers manage pods. To be specific, replication controllers monitor the life cycles of pods and their replicas.

The basic task of replication controllers is to keep a specified number of pods and their replicas available for use on the Kubernetes network. A replication controller controls the number of replicas of a pod present in a particular Kubernetes cluster. Whenever a pod is terminated, the replication controller immediately creates a replica of that pod to avoid a functional failure in the cluster.

When the number of pods in a Kubernetes cluster exceeds the requirement, replication controllers actively eliminate the extra pods. Conversely, replication controllers create new pods when the number of pods available in a cluster falls below the requirement.

Also, a replication controller carries the pod template from which multiple replicas are created, so it determines the features and configuration every new replica receives. The significance of replication controllers in the basic functions of Kubernetes therefore can't be overlooked.

Uses of Replication Controllers

Firstly, rescheduling the creation of replicas of pods is one of the most important tasks of replication controllers. Replication controllers keep rescheduling replica creation according to the requirements so that the cluster never runs out of an adequate number of pods. Therefore, the functional tasks of Kubernetes keep going on without any interruption. Replication controllers keep clusters running even during node failures or unwanted pod termination.

Pods and their replicas are highly scalable, and replication controllers are responsible for scaling replicas up and down in a Kubernetes cluster. Generally, replication controllers do so with the help of auto-scaling agents, though the user can also scale replicas up or down manually.

Controlling rolling updates is another major task of replication controllers. Rolling updates replace the pods in a cluster one by one, so the application keeps working as it is supposed to throughout the update. Replication controllers are thus notable contributors to monitoring and scaling rolling updates in a specific cluster.

On top of that, replication controllers can run multiple release tracks and monitor them over a prolonged period. Most importantly, replication controllers can serve all the release tracks of a cluster simultaneously.

When to Use Replication Controllers?

Before using replication controllers for pod orchestration, keep in mind that they monitor and replicate live pods only; replication controllers don't count terminated pods. They also can't determine the liveness or readiness of pods for you. Keep this fact in mind to manage your Kubernetes pods promptly: if you want to manage existing pods and also track the readiness of pods, you need to use other, more advanced tools alongside replication controllers to get the job done.

Commands Required for Handling Replication Controllers

You’ve already gathered adequate information about replication controllers and their functions. Now, it’s time to check out the specified kubectl commands that are required for handling replication controllers.

First of all, you need to create a replication controller to make use of it. Use the following kubectl command to make that possible:

kubectl create -f nginx-rc.yml
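For reference, a minimal nginx-rc.yml could look like the following sketch; the labels, replica count, and image are illustrative assumptions, not taken from the article:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3               # desired number of pod replicas
  selector:
    app: nginx              # equality-based selector matching the template labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80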

The following commands let you inspect the replication controller you deployed in the system, in several output formats:

kubectl get rc/nginx-rc

kubectl get rc/nginx-rc -o wide

kubectl get rc/nginx-rc -o yaml

kubectl get rc/nginx-rc -o json

Once you deploy a replication controller in a cluster, you can describe it for identification purposes. Input the following command to describe the replication controller you created:

kubectl describe rc/nginx-rc

Now, you may be curious about how you can scale up the replicas of your pods with the help of a replication controller. As you have already created and deployed one in your cluster, input the following command to scale up and manage existing pods:

kubectl scale rc nginx-rc --replicas=5

Sometimes you may need to delete a replication controller when it is of no use anymore. This is mainly applicable for application-specific replication controllers. To delete a replication controller, the commands are:

  1. kubectl delete rc nginx-rc
  2. kubectl delete -f nginx-rc.yml

These are the kubectl commands to handle replication controllers and make them work. However, as you gain expertise in handling replication controllers, you will learn to use them for different purposes. There are more commands for advanced users to perform different tasks with replication controllers. But as a beginner, the commands mentioned above are enough for you to manage replication controllers.

Alternatives to Replication Controllers

Though the utility of replication controllers is beyond question, there are other options for managing pods efficiently. The following segment introduces the main alternatives to replication controllers.

1. ReplicaSet

ReplicaSets can be considered the upgraded version of the conventional replication controller. ReplicaSets support set-based label selectors, which let users select and orchestrate their pods in customized ways. In most cases, ReplicaSets are not utilized directly for pod management, as they are a bit hard to handle; ideally, Deployments use ReplicaSets to orchestrate pod creation, selection, and updates. ReplicaSets are handy tools for customized update orchestration that the other replication tools don't offer, so if you are in dire need of orchestrating pod updates in a customized manner, you can use ReplicaSets directly.
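For illustration, here is a sketch of a ReplicaSet using a set-based matchExpressions selector; the names, labels, and image are assumptions:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
spec:
  replicas: 3
  selector:
    matchExpressions:          # set-based selector, not available on replication controllers
      - key: tier
        operator: In
        values:
          - frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
        - name: nginx
          image: nginx:1.21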

2. Deployment

Deployments are high-level API objects that update the pods under them, along with the existing ReplicaSets they own. Because Deployments do this work server-side, they come with special features that add functionality to your pods. Also, the declarative nature of Deployments makes them the preferred choice of users.
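As a rough illustration of that declarative style (this manifest is a sketch; the names and image are assumptions, not from the article), a minimal Deployment looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21

Applying it with kubectl apply -f lets the Deployment create and manage the underlying ReplicaSet for you, including rollouts when the pod template changes.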

3. Job

A Job is another handy alternative to replication controllers. Some Kubernetes pods are expected to run to completion and terminate on their own, and such pods are quite hard to manage with replication controllers and similar tools, which assume long-running pods. A Job is the orchestration tool specified for managing such run-to-completion pods and making the most out of them.
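As a sketch, a minimal Job manifest looks like the following; it mirrors the well-known pi-computation example from the Kubernetes documentation, and the image and command are illustrative:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl:5.34
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never     # Jobs manage pods that are expected to terminate
  backoffLimit: 4              # retries before the Job is marked failed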

These are the decent alternatives to replication controllers that are very helpful in managing the pods in your cluster. Select an orchestration tool based on your requirements; we have described the functionality of each of the tools above, so make the right choice and use the tool that fits your needs.

Conclusion

The effectiveness of Kubernetes replication controllers is evident. This article has briefly described the key features and functions of Kubernetes replication controllers. As a beginner, go through the sections above to get a clear idea of what replication controllers are and how to use them. We have also described how to handle Kubernetes replication controllers with specific commands, and we have mentioned the top alternatives to replication controllers to help readers choose the right orchestration tool for their requirements.

Now that you have a basic idea of the functions and features of Kubernetes replication controllers, it's time to apply your knowledge and use and customize replication controllers to manage your pods in the best way. All the best!


How to Install Jenkins on Kubernetes?

June 10, 2022 admin Kubernetes

If we look from a developer’s point of view, it is hard to ignore the significance of Continuous Integration/Continuous Deployment (CI/CD) pipelines, which are considered one of the core components to automate your software delivery process. This component plays a significant role in streamlining the entire workflow for different teams and thus further helps increase their productivity.

The pipeline helps build code, runs tests (CI), and enables deployment of new versions of the application (CD). Regular use of CI/CD is crucial for integrating code into a shared repository numerous times a day.

Jenkins is a widely used open-source automation server that assists in setting up CI/CD pipelines, and a Kubernetes cluster provides a whole new automation layer for Jenkins. With the help of Kubernetes, servers and resources are used effectively, and the underlying infrastructure can be managed without any extra load.

Building the CD (continuous delivery) pipeline and setting up Jenkins via Kubernetes Engine offers the following benefits compared to a standard VM-based deployment:

  • You can use one virtual host to run jobs on different operating systems.
  • The Engine provides ephemeral build executors, so every build runs in a clean environment.
  • Kubernetes Engine gives you Google's global load balancer, which makes SSL termination easy to handle. You get a global IP address that users reach over Google's backbone network.

In this write-up, you will get familiar with the entire procedure of installing Jenkins on Kubernetes.

Steps to Install Jenkins on Kubernetes

Before discussing the steps, you need to take note of the prerequisites for installing Jenkins on Kubernetes:

Prerequisites

Prior to beginning the installation of Jenkins on Kubernetes, some essentials must be taken care of: set up a Kubernetes cluster and kubectl on your machine. If you don't have a running Kubernetes cluster, follow the steps described in a Kubernetes quickstart, such as the one for setting up a Kubernetes cluster on DigitalOcean.

Step 1: Installing Jenkins on Kubernetes

To set up Jenkins, you would require to create:

  1. A namespace that permits you to set apart Jenkins objects within the Kubernetes cluster.
  2. A PersistentVolume that allows storage of your Jenkins data as it helps in preserving data across restarts.

Kubernetes exposes an API for setting up the Jenkins environment, and you can declare the desired state using either a YAML or JSON file. In this case, it is wise to use a YAML file to launch Jenkins. Be sure about the kubectl commands before you use them to manage the cluster.

First, use the following kubectl command to create the Jenkins namespace:

kubectl create namespace jenkins

Further, to deploy Jenkins, you must build a new YAML file.

Create a new file named jenkins.yaml and open it with the help of nano or your preferred editor.

Now, the next thing that you need to do is to add the following code and define the Jenkins image, in addition to its port, and several more configurations:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-vol
              mountPath: /var/jenkins_vol
      volumes:
        - name: jenkins-vol
          emptyDir: {}

NOTE: As you see in this step, we have created a deployment by making use of the Jenkins LTS image and have opened ports 8080 and 50000. These ports give access to Jenkins.

Step 2: Creating the Deployment in the Jenkins Namespace

Create the Deployment and give the cluster adequate time to pull the Jenkins image and get the Jenkins pod running. You can then verify the pod's state with kubectl.

kubectl create -f jenkins.yaml --namespace jenkins
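Note that kubectl create only confirms that the Deployment object was created. To see the pod table below, check the pod's state with kubectl get (the same command is used again in Step 3):

kubectl get pods --namespace jenkins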

The output appears similar to this:

NAME READY STATUS RESTARTS AGE

jenkins-6fb994cfc5-twnvn 1/1 Running 0 95s

Your pod name will differ from the one shown. Once the pod is running, you need to expose it using a Service; this guide uses the NodePort Service type. We also need to create a ClusterIP-type service for connections to Jenkins on the JNLP port.

Create the new file as jenkins-service.yaml and open it:

nano jenkins-service.yaml

Now add the following code to define the NodePort Service:

NOTE: In the following YAML file, we define a NodePort Service that exposes port 8080 of the Jenkins pod on node port 30000.

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp
spec:
  type: ClusterIP
  ports:
    - port: 50000
      targetPort: 50000
  selector:
    app: jenkins

Now, finally, we are going to create the Services in the same namespace:

kubectl create -f jenkins-service.yaml --namespace jenkins

Here is the command to verify if the Service is running or not:

kubectl get services --namespace jenkins

The output will appear like this:

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

jenkins NodePort your_cluster_ip <none> 8080:30000/TCP 15d

With Jenkins and the NodePort Service operational, you can now access the Jenkins UI.

Step 3: Accessing the Jenkins UI

In this step, we will access the Jenkins UI. The NodePort Service exposes TCP port 30000 on all of the cluster nodes, so you need a node IP address to reach the Jenkins UI. Using kubectl to retrieve the node IPs is the fastest way:

kubectl get nodes -o wide

kubectl will produce an output with your external IPs:

NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME

your_node Ready <none> 16d v1.18.8 your_internal_ip your_external_ip Debian GNU/Linux 10 (buster) 4.19.0-10-cloud-amd64 docker://18.9.9

your_node Ready <none> 16d v1.18.8 your_internal_ip your_external_ip Debian GNU/Linux 10 (buster) 4.19.0-10-cloud-amd64 docker://18.9.9

your_node Ready <none> 16d v1.18.8 your_internal_ip your_external_ip Debian GNU/Linux 10 (buster) 4.19.0-10-cloud-amd64 docker://18.9.9

NOTE: Copy one of the external_ip values, open a web browser, and navigate to http://your_external_ip:30000. A page will appear asking for an administrator password.

The password is written to the Jenkins pod's logs, so you will use kubectl to pull it from there.

First, return to your terminal and retrieve your Pod name with the following command:

kubectl get pods -n jenkins

The output will appear as shown below:

NAME READY STATUS RESTARTS AGE

jenkins-6fb994cfc5-twnvn 1/1 Running 0 9m54s

The admin password is stored in the pod's logs. Substitute your own pod name in the following command:

kubectl logs jenkins-6fb994cfc5-twnvn -n jenkins

As you scroll up or down, you will find the password:

Running from: /usr/share/jenkins/jenkins.war

webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")

. . .

Jenkins initial setup is required. An admin user has been created and a password generated.

Please use the following password to proceed to installation:

your_jenkins_password

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

. . .

Finally, copy your_jenkins_password, return to your browser, and paste it into the Jenkins UI. Once you enter the password, Jenkins will direct you to install plugins.

Once the installation process completes, Jenkins loads a new page where you are required to create an admin user. Enter all the essential details in the given fields, or skip this step by clicking the skip and continue as admin link; in that case your username will default to admin and your password to your_jenkins_password.

Moving ahead, the following screen asks about instance configuration; click the Not now link to continue.

Jenkins will then print the message Jenkins is ready!

Click Start using Jenkins, and in a matter of a few seconds the Jenkins home page will appear.

Step 4: Running a Sample Pipeline

Jenkins is now ready to create pipelines. In this step, we will create one of Jenkins' sample pipelines.

As you see the Jenkins home page, click on the New item link displayed on the left-hand menu.

As a new page opens up, select pipeline and then press OK.

Jenkins will take you to the pipeline's configuration page. Look for the Pipeline section and select Hello World from the try sample Pipeline dropdown menu on the right-hand side.

After selecting the Hello World, click the Save button.

Jenkins will lead you to the pipeline's main page, where you click the Build Now option in the left-hand menu. The moment you click Build Now, the pipeline begins to run.

Also, make sure you examine the console output to see what happened while the pipeline was running.

Conclusion

We've reached the final section of our guide. You have learned to install and configure Jenkins on a Kubernetes cluster. Remember that Jenkins has a large repository of plugins that make it possible to carry out complex operations, and the kubectl command-line tool will help you perform these tasks easily. Always double-check your commands before you deploy them. If you have any more queries, drop a comment in the box below. Stay connected for more guides.


Install and Set Up Kubernetes Kubectl on Linux

June 9, 2022 admin Linux


Kubectl is a command-line tool, available on Linux and Mac systems, for managing Kubernetes clusters. You can easily manage configurations, set environment variables, and do much more with kubectl. It simplifies deploying applications on Kubernetes clusters and inspecting their resources by putting powerful commands at your fingertips. Additionally, it allows you to view Kubernetes cluster logs. The following tutorial will teach you how to install and configure kubectl for Linux on your computer.

What is Kubectl?

Using kubectl, you can easily manage which cluster configuration is used by setting the --kubeconfig flag or the KUBECONFIG environment variable. The kubectl syntax explains how commands should be executed from the command line, with relevant examples. Below is a brief description of the kubectl syntax:

In your terminal window, you can utilise the Kubectl syntax to run the kubectl commands:

kubectl [command] [TYPE] [NAME] [flags]

The command, type, name, and flags represent the following:

command: The operation you want to perform on Kubernetes cluster resources, such as create, get, describe, or delete.

TYPE: The resource type. Resource types are case-insensitive, and you can specify the singular, plural, or abbreviated form, so the following commands are all equivalent:

kubectl get pod pod1

kubectl get pods pod1

kubectl get po pod1

NAME: The name of the resource; names are case-sensitive. If you omit the name, as in kubectl get pods, details for all resources of that type appear on screen. When dealing with multiple resources, you can specify each resource type and name in several ways:

  • Use TYPE1 name1 name2 name<#> to group resources when they are all of the same type, for example kubectl get pod example-pod1 example-pod2
  • Use TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#> to specify the type for each of multiple resources, such as kubectl get pod/example-pod1 replicationcontroller/example-rc1
  • Use -f file1 -f file2 -f file<#> to specify resources by one or more files. YAML files are generally preferable to JSON files because they are more user-friendly. For example: kubectl get -f ./pod.yaml

flags: Optional flags, such as -s or --server, which specify the address and port of the Kubernetes API server.
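Putting these parts together, a complete invocation might look like the following; the server address is purely an illustrative assumption, since you would normally omit -s and rely on your kubeconfig:

kubectl get pod pod1 -o wide -s https://127.0.0.1:6443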

This is the overview of Kubernetes Kubectl and syntax. Check out the other sections of this post if you want to install it on your Linux computers.

To Install and Set up Kubernetes Kubectl on Linux

Make sure you have the prerequisites before installing kubectl on Linux. As a starting point, you must use a kubectl version that is within one minor version of your cluster. For example, a v1.22 client can communicate with v1.21, v1.22, and v1.23 control planes. Using the most recent compatible version of kubectl helps you avoid performance issues as well as security issues. Let's start installing kubectl on Linux after you've comprehended this section.

There are several methods for installing kubectl on Linux; here we share the easiest ones to begin with. You can download the binary with curl or use native package management tools.

How to Install the Kubectl Binary with Curl on Linux

Download the latest version of Kubectl with the following command:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

If you want to download a specific version of kubectl, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) section with the exact version. For example, to download v1.22.0, use the following command:

curl -LO https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl

Now you can validate the binary by following the steps below. Download the kubectl checksum file first with the following command:

curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

Now validate the kubectl binary with the checksum file:

echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

If the output is validated, it will show this: kubectl: OK

If the check fails, sha256sum will exit with a nonzero status and show the following output:

kubectl: FAILED

sha256sum: WARNING: 1 computed checksum did NOT match

Now, you can install kubectl: sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

If you do not have root access, install the tool into the ~/.local/bin directory:

chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
# and then add ~/.local/bin to $PATH

Now test the installed version to see if it is up to date:

kubectl version --client

Install Kubectl on Linux Using Native Package Management

If you are on a Debian-based Linux distribution, follow the steps below.

  1. Update the apt package index and install the packages needed to use the Kubernetes apt repository: sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl
  2. Download the Google Cloud public signing key: sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  3. Add the Kubernetes apt repository: echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  4. Update the apt package index with the new repository and install kubectl: sudo apt-get update && sudo apt-get install -y kubectl

That should be enough to install Kubectl on Linux, but you should also verify that Kubectl is installed.

How to Verify the Kubernetes Kubectl Configuration

Kubectl needs a kubeconfig file to find and access a Kubernetes cluster. This file is created by default when you build a cluster with kube-up.sh or when you deploy a Minikube cluster, and it is located at ~/.kube/config. To verify that kubectl is configured correctly, use this command: kubectl cluster-info.

A URL response will appear, telling you that kubectl is set up to use on your computer.

Conclusion

There are various plugins available for kubectl, such as shell autocompletion, which you can set up by adding source /usr/share/bash-completion/bash_completion to your ~/.bashrc file. To enable the autocompletion, source the completion script with this command: echo 'source <(kubectl completion bash)' >>~/.bashrc, or enable it system-wide using kubectl completion bash >/etc/bash_completion.d/kubectl.

This autocompletion will help you with Bash and Zsh commands so that you won't have to do as much typing while configuring Kubernetes clusters.

There is another plugin named kubectl-convert, which helps you convert manifests between different Kubernetes API versions. You can download its latest release with curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert".

That’s all you need to know about installing and setting up Kubernetes kubectl on Linux but you can check out our other articles where we have talked about Kubernetes kubectl for Mac and Windows.


How to Set Up and Run Kafka on Kubernetes?

June 9, 2022 admin Uncategorized

Kafka is a popular companion to Kubernetes clusters because it processes huge volumes of data in real time. Apache Kafka is an event streaming platform created by LinkedIn and initially developed as an open-source project; LinkedIn later donated it to the Apache Software Foundation, and the project was renamed Apache Kafka. The platform is written in two languages, Java and Scala. Its most important mission is to provide low-latency handling of real-time data.

Kafka is made up of different APIs, including the Producer, the Consumer, the Connector, and the Streams APIs. Together, these APIs keep latency low on platforms that maintain a high volume of real-time data. Kafka runs as cluster nodes that we call Kafka brokers and is trusted by many well-known companies such as Uber and Airbnb. Initially, Kafka was just a messaging queue for distributed systems that also worked as a pub-sub model. Today, however, Kafka's main job is to stream data over the internet on behalf of companies, store huge amounts of information, and keep the records of Kubernetes applications.

Generally, Kafka stores messages in sequence and divides them by topic. Kafka brokers data between systems and enables applications to respond to streamed data in real time. We will deploy a fully-fledged Kafka cluster in Kubernetes; this approach addresses the need for a message broker at the core of a large number of microservices. To ensure that the streaming data system does not fail, the number of Kafka instances across the nodes needs to be increased.

So, in this guide, we are going to show you exactly how to set up and run Kafka on Kubernetes so that there is no problem with streaming data and keeping a persistent volume of the data safely in the cloud storage.

How Does Kafka Work?

Apache Kafka is a vital component for Kubernetes applications running on a cluster. The messaging system collects data and processes it in real time, no matter how voluminous it is. Kafka is a publish-subscribe platform, which works as follows:

  • Producers create messages, divide them into topics, and publish them.
  • Kafka categorizes those messages by topic and stores them immutably.
  • Consumers subscribe to specific topics and consume the messages the producers are publishing.

Both producers and consumers serve applications: producers inform consumers with messages about application updates. These messages are stored and sorted by Kafka brokers based on user-defined topics.

Kafka deployments work alongside component management tools such as Zookeeper and managed platforms like Platform9 Free Tier. Kafka cannot work properly without Zookeeper, because Zookeeper manages all Kafka components, including producers, brokers, cluster membership, and consumers. Hence, we will first learn how to deploy Zookeeper.

How to Deploy Zookeeper?

As discussed above, Kafka won't work without Zookeeper, so the first thing to do is deploy Zookeeper on your Kubernetes cluster. Deploying Zookeeper first keeps the Kafka service from restarting endlessly, and you deploy it by creating a zookeeper.yml file. This YAML file schedules the Zookeeper pods on the Kubernetes cluster for you, so you don't have to do anything manually. Start the deployment by copying and pasting the following definition into zookeeper.yml using your preferred text editor.

apiVersion: v1
kind: Service
metadata:
  name: zk-s
  labels:
    app: zk-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zk-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zk-deployment-1
spec:
  selector:
    matchLabels:
      app: zk-1
  template:
    metadata:
      labels:
        app: zk-1
    spec:
      containers:
        - name: zk1
          image: bitnami/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zk1

After saving this definition file, run kubectl create -f zookeeper.yml against your Kubernetes cluster. The next step is to create the Kafka service itself.

How to Create Kafka Service?

Here, we will create the Kafka service definition file, which manages the Kafka broker deployments by balancing the data volume across the Kafka pods. Put the following components in a primary kafka-service.yml file.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafkaApp
  name: kafka
spec:
  ports:
    - port: 9092
      targetPort: 9092
      protocol: TCP
    - port: 2181
      targetPort: 2181
  selector:
    app: kafkaApp
  type: LoadBalancer

Save the file and create a service with the following code:

kubectl create -f kafka-service.yml

Let’s move on to the next step to continue with setting up Kafka on Kubernetes.

Time to Define the Kafka Replication Controller

Generate another .yml file to act as the replication controller for Kafka. The kafka-repcon.yml file has the following elements:

---
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    app: kafkaApp
  name: kafka-repcon
spec:
  replicas: 1
  selector:
    app: kafkaApp
  template:
    metadata:
      labels:
        app: kafkaApp
    spec:
      containers:
        - command:
            - zookeeper-server-start.sh
            - /config/zookeeper.properties
          image: "wurstmeister/kafka"
          name: zk1
          ports:
            - containerPort: 2181

Now, save the file and create the controller with kubectl create -f kafka-repcon.yml before moving on to start the Kafka server.

How to Start the Kafka Server?

You will find the configuration settings of the Kafka server in the config/server.properties file. Since the Zookeeper server was configured earlier in this article, you can start the Kafka server right away with the following command:

kafka-server-start.sh config/server.properties

Once you have started the server, it is time to create a Kafka Topic. Like Kubernetes, Kafka has a command-line utility tool as well and it is known as kafka-topics.sh. You can create new topics on the server with this utility tool.

Next, open a new terminal window and paste the command below:

kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Topic-Name

This creates a topic named Topic-Name with one partition and one replica. You can verify the creation as shown below, and then proceed to set up a Kafka producer.
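To confirm the topic exists, you can list the topics registered in Zookeeper, using the same Zookeeper-based flag style as above:

kafka-topics.sh --list --zookeeper localhost:2181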

How to Start a Kafka Producer?

You will find the broker port ID in the config/server.properties file. The broker used here listens on port 9092, and you can specify the listening port directly with the following command:

kafka-console-producer.sh --broker-list localhost:9092 --topic Topic-Name

You can type a few messages in this terminal window; this is essentially how records get produced to Kafka. Now it is time to create a Kafka consumer.

How to Begin a Kafka Consumer?

You can find the default consumer configuration in the config/consumer.properties file, alongside the producer properties. To receive messages as a Kafka consumer, open a new terminal window and paste the following command:

kafka-console-consumer.sh --topic Topic-Name --from-beginning --zookeeper localhost:2181

Here, the --from-beginning flag tells the consumer to read messages from the start of the topic, in order. As you send messages from the producer's terminal, they will appear in the consumer's terminal.

Go ahead and learn how to scale the Kafka cluster next.

Scaling the Kafka Cluster

Scaling your Kafka cluster is easy: as a Kubernetes administrator, run kubectl scale rc kafka-repcon --replicas=6 with kubectl. This extends the number of pods from 1 to 6.

Things to Consider While Running Kafka on Kubernetes

Running Kafka on Kubernetes simplifies operations such as scaling, restarts, upgrades, and monitoring of Kubernetes applications. There are, however, some points you should consider while running Kafka on Kubernetes.

Low Latency Network and Storage

Kafka requires low-latency networking and high-throughput storage with low contention for data. Fast, local media lets brokers access data on the node where the pod is running, which improves overall system performance.
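As a rough sketch, fast storage is typically requested through a PersistentVolumeClaim backed by an SSD storage class; the storageClassName below is an assumption that depends on what your cluster offers:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd   # assumption: an SSD-backed class defined in your cluster
  resources:
    requests:
      storage: 10Gi

A broker pod would then mount this claim instead of an ephemeral volume, so the data survives pod restarts.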

Availability of Kafka Brokers Should be High

Kafka brokers can be deployed across the Kubernetes cluster, spanning fault domains. Because Kubernetes automatically replaces pods when nodes or containers crash, it can also bring brokers back when they go down, and this high availability of brokers cannot be ignored. One thing to consider, though, is what happens to the data a broker stores. To make sure the data follows the pod, you need to apply a data replication strategy. Spreading data across multiple brokers gives higher throughput and also helps you quickly recover from damaged brokers.

Data Protection and Security

Before you set up and run Kafka on Kubernetes, you should understand Kafka's data protection and security features. Kafka provides replication of topics and monitoring of data across Kubernetes clusters. Replication acts like a backup that protects the data: when something in the cluster fails, such as a node, the replicas keep the data available. Likewise, mirroring keeps a copy of the data ready in other data centers.
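For example, using the kafka-topics.sh syntax from earlier, a topic whose partitions are each replicated to three brokers could be created like this (the topic name and counts are illustrative):

kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic replicated-topic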

Kafka also has a built-in security system that implements various authentication mechanisms, such as SSL between brokers, so the data and filesystems are protected from manipulation and attackers on the internet.

Conclusion

We have now explained how to set up and use Kafka with Kubernetes; all you need to do is follow every step, starting with deploying Zookeeper. If you are looking for a tutorial on setting up the Platform9 Free Tier cluster specifically for running Kafka on Kubernetes, check out our other posts. The rest of the information in this tutorial will help you create the Kafka service and run the Kafka server on Kubernetes without any hassle.

You should remember that Kafka is an extremely powerful tool that’s used and supported by numerous companies, including Spotify, Coursera, Netflix, and many more. To improve the value of your organization and run complex microservices on Kubernetes clusters, you will definitely need Kafka. If any of these codes and command lines appear difficult to follow, please feel free to ask us for assistance. Furthermore, each of your comments will be responded to shortly.


Kubernetes Cheat Sheet

June 3, 2022 admin Kubernetes

Containerization is the best-in-class strategy for deploying and working with software at scale. It is a modern way of packaging software so that applications run easily in new computing environments. However, these containers need to be managed and orchestrated, and that is the main purpose of container management systems. Kubernetes is quite possibly the most notable and widely used container management system out there.

In this Kubernetes cheat sheet, we will learn in detail about Kubernetes, its benefits, architecture, along with all the basic yet important Kubectl commands for achieving various tasks in Kubernetes.

What is Kubernetes?

Having containers in your working environment helps a lot with managing applications and prevents downtime: backup containers keep applications functioning continuously. Managing these containers calls for a system that assigns tasks to them automatically and distributes resources equally.

This is exactly where Kubernetes comes into play. It is an open-source container orchestration platform that helps companies run their distributed systems smoothly. Time-consuming processes such as application scaling, managing downtime, allocating resources, and providing deployment patterns are automatically managed by Kubernetes. It helps you in the following ways:

  • Kubernetes allows you to do automatic rollouts and rollbacks. You can define the exact state in which you want future containers to be deployed, and Kubernetes will create new containers, deploy them consistently, and delete old ones.
  • Fitting containers onto specific nodes with the right resource configuration is simple. A user can define the RAM and CPU each container requires and the node clusters onto which they'll be deployed.
  • It makes identifying containers quick, using their IP address or the DNS system. In addition, it distributes load equally across containers and stabilizes it.
  • It keeps confidential information such as OAuth tokens, passwords, and SSH keys away from unauthorized individuals, and it allows organizations to update configurations without rebuilding containers.
  • It enables administrators to choose storage from various options such as local storage, public or private cloud providers, and databases.
  • Its real-time monitoring and self-healing abilities regularly scan for failed containers and replace them, remove containers that don't respond, and fix containers that underperform, bringing everything back into action.

Kubernetes Architecture

The architecture of Kubernetes is made up of a control plane (also known as the master), etcd (the distributed storage system responsible for consistent state), and numerous cluster nodes running Kubelets.

In one environment, the default setup offers only one master node that is the point of contact for all the worker nodes. However, you can have multiple master nodes in case of high demand. Following are some of the frequently-used terms associated with Kubernetes architecture:

  • Pod: A group of one or more containers
  • Labels: Key-value pairs used for identifying pods
  • Kubelet: The agent that maintains the sets of pods on each node
  • Proxy: Helps balance the load across containers
  • Etcd: A consistent and highly available key-value store for cluster data
  • CAdvisor: Used for real-time monitoring of resource usage
  • Replication controller: Manages pod replication
  • Scheduler: Responsible for scheduling pods onto worker nodes

Kubernetes Cheat Sheet

A Kubernetes cheat sheet is basically a set of kubectl commands; kubectl is the command-line configuration tool for Kubernetes through which communication with the Kubernetes API server becomes possible. These commands execute actions on Kubernetes such as creating, inspecting, updating, and deleting Kubernetes objects in no time. This cheat sheet gathers the frequently used kubectl commands that you'll need from time to time while working with Kubernetes.

You can use either the complete resource name mentioned in a command or the shortcode variation mentioned alongside the heading of each section; the outcome will be the same.
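For instance, both of the following commands list pods; the second uses the po shortcode:

kubectl get pods
kubectl get po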

1. Cluster management

Cluster management is the process of managing multiple Kubernetes clusters in one organization. Clusters used for development, testing, and production are aligned on a single infrastructure. The whole management work is done using the commands below.

Display end-point details regarding master and worker nodes within a cluster. kubectl cluster-info
Used for checking the Kubernetes version by showing it on screen. kubectl version
Display all the details associated with configurations. kubectl config view
Used for listing down all the API (Application Program Interface) resources collectively. kubectl api-resources
List down all the API versions present for the users. kubectl api-versions
List all resources across all namespaces. kubectl get all --all-namespaces

2. Events (ev)

Events are typically used for discovery purposes and are automatically created every time resources get changed, face some issues, etc. that are important for administrators and employers to know.

Display all the recently occurred events. kubectl get events
Display only warnings. kubectl get events --field-selector type=Warning
Display all events except those associated with pods. kubectl get events --field-selector involvedObject.kind!=Pod
Pull the events for a specific node by its name. kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=<node_name>
Filter out normal events from the list of events. kubectl get events --field-selector type!=Normal

3. Namespaces (ns)

A lot of virtual clusters get supported by one physical cluster. All of these virtual clusters are termed namespaces that are used by various environments and users across multiple teams and projects.

Create a namespace. kubectl create namespace <namespace_name>
Display a namespace by name, or omit the name to list them all. kubectl get namespace <namespace_name>
Get information regarding the state of a namespace. kubectl describe namespace <namespace_name>
Delete a namespace by name. kubectl delete namespace <namespace_name>
Edit and modify the details associated with a namespace. kubectl edit namespace <namespace_name>
Display the resources used by a namespace, such as CPU and memory. kubectl top namespace <namespace_name>

4. Nodes (no)

Every pod in a Kubernetes environment runs on a node, which is a worker machine that can be virtual or physical depending on the cluster. Nodes are managed by the control plane.

Update the taints on one or more nodes. kubectl taint nodes <node_name> <taint>
List all the nodes. kubectl get node
Delete one or more nodes. kubectl delete node <node_name>
Show resource usage by node, such as CPU and memory consumption. kubectl top node
List the resources allocated to the pods. kubectl describe nodes | grep Allocated -A 5
Check the pods that are running on a specific node. kubectl get pods -o wide | grep <node_name>
Annotate a node. kubectl annotate node <node_name> <annotation>
Mark a node as unschedulable. kubectl cordon <node_name>
Mark a node as schedulable again. kubectl uncordon <node_name>
Drain a node completely to prepare it for maintenance. kubectl drain <node_name>
Add or update the labels of one or more nodes. kubectl label nodes <node_name> <key>=<value>

5. Pods (po)

The most basic deployment unit in Kubernetes is the Pod. Pods run the application instances in a cluster, and a single Pod can be composed of one or more containers.

List all the pods. kubectl get pod
Delete a specific pod by name. kubectl delete pod <pod_name>
Display the state of a particular pod. kubectl describe pod <pod_name>
Create a pod from a manifest file. kubectl create -f <pod_manifest>.yaml
Run a command in a specific container of a pod. kubectl exec <pod_name> -c <container_name> <command>
Get an interactive shell in a single-container pod. kubectl exec -it <pod_name> /bin/sh
Show the resource usage of pods. kubectl top pod
Add or update the annotations of a pod. kubectl annotate pod <pod_name> <annotation>
Add or update the labels of a pod. kubectl label pod <pod_name>

6. Services (svc)

In Kubernetes, a Service defines a logical set of Pods and the policy by which to access them. The pattern it implements is often referred to as a micro-service.

List one or more services. kubectl get services
Display the real state of services. kubectl describe services
Expose a replication controller, service, deployment, or pod as a new Kubernetes service. kubectl expose deployment [deployment_name]
Modify or update the details of one or more services at once. kubectl edit services

7. Logs

Consider logs as updates in the form of notifications that help individuals to stay updated regarding all the things happening in an application. They are useful for identifying issues, updates, etc in the first place.

Print the logs for a pod. kubectl logs <pod_name>
Print the logs for a pod from the past hour. kubectl logs --since=1h <pod_name>
Print the latest 20 lines of logs. kubectl logs --tail=20 <pod_name>
Get the logs from a specific service, optionally selecting a container. kubectl logs -f <service_name> [-c <$container>]
Print the logs for a pod and keep streaming new ones. kubectl logs -f <pod_name>
Print the logs for a container inside a particular pod. kubectl logs -c <container_name> <pod_name>
View the logs for the previous, failed instance of a pod. kubectl logs --previous <pod_name>
List the logs for pods whose name starts with "pod_prefix". kubetail <pod_prefix>
Get the logs for the previous five minutes. kubetail <pod_prefix> -s 5m

8. Daemonsets (ds)

A DaemonSet ensures that some or all nodes run a copy of a Pod. Right after a node is added to the cluster, the Pod is added to it; similarly, when the node is removed, the Pod is removed as well.

List daemonsets. kubectl get daemonset
Edit and update the definition of one or more daemonsets. kubectl edit daemonset <daemonset_name>
Delete a daemonset by name. kubectl delete daemonset <daemonset_name>
Create a new daemonset with a given name. kubectl create daemonset <daemonset_name>
Manage the rollout of a daemonset. kubectl rollout status daemonset <daemonset_name>
Display the state of a daemonset inside a particular namespace. kubectl describe ds <daemonset_name> -n <namespace_name>

9. Displaying the State of Resources

For detailed information about any resource, use the kubectl describe command. Uninitialized resources are also listed when running this command.

Check the details of nodes. kubectl describe nodes [node-name]
Display details about a particular pod. kubectl describe pods [pod-name]
Display information about pods described in pod.json, by type and name. kubectl describe -f pod.json
Get information about pods managed by a particular replication controller. kubectl describe pods [replication-controller-name]
Get information about all pods. kubectl describe pods

10. Apply or Update a Resource

For applying and updating resources, the kubectl apply command is used by taking files or stdin as inputs.

Create a new service defined in [service-name].yaml. kubectl apply -f [service-name].yaml
Create a new replication controller defined in [controller-name].yaml. kubectl apply -f [controller-name].yaml
Create the objects defined in any directory of .yaml, .yml, or .json files. kubectl apply -f [directory-name]
Update a resource by editing it in a text editor; this command combines kubectl get and kubectl apply. kubectl edit svc/[service-name]
Open the resource in a text editor of your choice. KUBE_EDITOR="[editor-name]" kubectl edit svc/[service-name]

Kubernetes Components

To apply the aforementioned Kubernetes cheat sheet more effectively, it helps to understand the Kubernetes components. Kubernetes functions through clusters, which are composed of worker machines known as nodes that run applications in containers. These nodes host pods, the most basic and integral component of application workloads.

All of these nodes and pods functioning under one Kubernetes environment are managed by the control plane. This control plane functions on various computer devices under one infrastructure that eventually ensures the availability of the containers and fault management.

Components of the Control Plane

The main decision-maker under the Kubernetes environment is the control plane, which is responsible for modifying the functioning of the cluster, keeping up with the events associated with clusters, etc. Following are the integral components of the same:

  • Kube-apiserver: The front-end component that validates and configures data for API objects.
  • Etcd: A highly available key-value store used for backing cluster data.
  • Kube-scheduler: This crucial component watches for new pods and selects the nodes they should run on.
  • Kube-controller-manager: Runs the controller processes, such as the endpoints controller, the replication controller, and the token controller.
  • Cloud-controller-manager: Integrates the cluster with the cloud provider's API and separates the components that interact with the cloud platform from the ones that run only on the cluster.

Components of the Node

Components of a node are available at every node running under a Kubernetes environment. They are typically responsible for running all the pods and therefore, providing Kubernetes runtime infrastructure.

  • Kubelet: Runs on every node, and its primary job is ensuring that the containers described in PodSpecs are running without any fault in their pods. It only keeps track of containers defined in PodSpecs.
  • Kube-proxy: A network proxy responsible for maintaining network rules on nodes. These rules make communication between the clusters and the pods possible.
  • Container runtime: The software responsible for running containers. Kubernetes supports various container runtimes, such as CRI-O, Docker, or any implementation of the Kubernetes Container Runtime Interface (CRI).

Conclusion

Kubernetes is an open-source container orchestration platform that facilitates the management of distributed systems. The architecture of Kubernetes is made up of a control plane that is also known as master, an etcd that is the distributed storage system responsible for consistent functioning, and tons of cluster nodes or Kubelets.

The blog contains a Kubernetes cheat sheet that is a set of all the important and frequently-used Kubectl commands. They help in making the communication with the Kubernetes API server possible.


How to Set Up Jenkins on the Kubernetes Cluster?

May 23, 2022 admin Jenkins, Kubernetes

Jenkins is a continuous integration tool that automates a portion of the software development procedure. Development teams often work on different projects in complicated microservices environments with limited resource availability, and Jenkins helps them deliver a flawless outcome on a specified schedule. Installing Jenkins on the Kubernetes cluster brings clear benefits toward that goal.

A Kubernetes cluster adds a new automation layer to Jenkins. As a prerequisite, you need access to a command line or terminal. Resources must be used effectively so that no service gets overloaded, and the cluster can deploy the Jenkins container only if it has enough resources.

Here are the practical steps for setting up Jenkins on a cluster:

Make a namespace for the Jenkins Installation

It is essential to make a specific namespace, as it gives you more control over the continuous integration environment. You can make the namespace for Jenkins by typing the following command in the terminal.

Initially, kubectl is used to create the namespace:

$ kubectl create namespace jenkins

The namespace name must be a DNS-compatible label, which is why it is written in lowercase. The output will confirm the namespace's successful creation. This is the first requirement for installing Jenkins on the cluster. Refer to this existing namespace in later commands to avoid any confusion about where Jenkins is deployed.

Creating Jenkins Deployment File

Once you have designated a namespace, use your preferred Linux text editor to create the deployment file; this is the second requirement for setting up Jenkins on the cluster. The deployment file is built according to your requirements from the resources and examples available. Its volume mounts section creates a persistent volume for the installation; the role of this volume is to store the basic Jenkins data and preserve it over a long period. After adding the content, save your changes and exit the file.

Further, YAML needs to be created to deploy Jenkins;

Open the new file named jenkins.yaml with the help of nano editor or any preferred editor;

nano jenkins.yaml

Thereafter, add the following code to specify the Jenkins image:

jenkins.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-vol
              mountPath: /var/jenkins_vol
      volumes:
        - name: jenkins-vol
          emptyDir: {}

Note that emptyDir storage is ephemeral and vanishes with the pod; it is fine for a test, but in a production cluster you should not rely on it or on a host path. Instead, a cluster administrator would provision networked storage to back the volume. Keep this in mind while deploying Jenkins on a Kubernetes cluster.
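As a hedged illustration of that point, the emptyDir volume above could be replaced in production by a PersistentVolumeClaim; the claim name jenkins-pvc and the 10Gi request below are assumptions, and your cluster needs a StorageClass able to satisfy the claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The deployment's volumes section would then reference persistentVolumeClaim: claimName: jenkins-pvc instead of emptyDir: {}.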

Deployment of Jenkins

To deploy Jenkins on the cluster, use the freshly generated file. The kubectl create command shown below directs the system to install Jenkins under the jenkins namespace. This is an essential step for installing Jenkins on the Kubernetes cluster.

Create and Deploy Jenkins Service

The Kubernetes cluster manages the life cycle of pods: they are regularly removed and redeployed to reach the desired state and to balance the workload across the cluster, so their addresses change. A service is an abstraction that exposes Jenkins to the wider network and lets you maintain a persistent connection to the pods no matter what changes take place within the cluster. You create the service in a separate file with any text editor and then apply it with kubectl against the jenkins namespace, making the Jenkins dashboard available through the Kubernetes cluster. First, though, create the deployment:

kubectl create -f jenkins.yaml --namespace jenkins

To verify the pod's state, use kubectl:

kubectl get pods -n jenkins

The pod name may differ on your system. Once the pod is running, you need to expose it with a service.

Create a new file named jenkins-service.yaml:

nano jenkins-service.yaml

Enter the following code to specify the NodePort service:

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp
spec:
  type: ClusterIP
  ports:
    - port: 50000
      targetPort: 50000
  selector:
    app: jenkins
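With the file saved, create both services under the jenkins namespace; the command below mirrors the deployment step and assumes the file name jenkins-service.yaml used above:

kubectl create -f jenkins-service.yaml --namespace jenkins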

Access to Jenkins Dashboard

Open your browser and start a session on a node using the IP address defined in the service file. To access Jenkins, you first need to enter credentials. The default username on most installations is admin; the password can be found in several ways. Using the Jenkins deployment name, you can find the correct pod name to use in the command. Once you have the pod's name, you can access the pod logs, where the password appears near the end of a formatted string. With this, you can successfully install Jenkins on the Kubernetes cluster and build modern, accurate development pipelines. Understanding how Jenkins works is also essential to getting the desired results with complete access to the dashboard.

Use kubectl to get the node IPs;

kubectl get nodes -o wide

It will produce output with the external IPs. Look for your IP and copy it.

Further, open the website with the address http://your_external_ip:30000.

It will redirect you to a page that requires the administrator password, along with guidelines for getting the password from the Jenkins pod logs.

Use kubectl to get the password. First, retrieve your pod name with the following command:

kubectl get pods -n jenkins

Look in the pod logs for the password, replacing the highlighted section with your pod name:

kubectl logs jenkins-6fb994cfc5-twnvn -n jenkins

Scroll up or down a bit in the output to find the password:

Running from: /usr/share/jenkins/jenkins.war

webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")

. . .

Jenkins initial setup is required. An admin user has been created and a password generated.

Please use the following password to proceed to installation:

your_jenkins_password

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

After getting your password, copy and paste it into the Jenkins UI.
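Alternatively, as the log output notes, the password can be read directly from the file inside the pod; replace the pod name below with your own (the name shown is the example from earlier):

kubectl exec -n jenkins jenkins-6fb994cfc5-twnvn -- cat /var/jenkins_home/secrets/initialAdminPassword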

How Jenkins Works

Jenkins has become a standard for software development pipelines across industries. Before the notion of continuous integration, the development process was slowed down by scattered integrations and long testing procedures. Jenkins is a self-contained solution that is compatible with many integration plugins, offering a comprehensive and easy-to-use environment. Developers check their changes into the source code daily, and Jenkins builds the source code and generates deployable files with quality assurance and security checks.

Jenkins uses plugins to produce metrics: the detailed, qualified information needed to keep the continuous process running. If a build fails, the information from testing guides additional changes to the submitted code. This is an essential thing for developers to know while installing Jenkins on the Kubernetes cluster.

Conclusion

Now that you know how to install Jenkins on a Kubernetes cluster, you can let it automate different tasks and help developers submit code effectively. It also increases productivity and reduces wasted time. Following each step when setting up Jenkins on the Kubernetes cluster resolves many of the complexities developers otherwise face, and building and testing software locally becomes possible without additional skills or expertise.

It also provides a service for the Jenkins dashboard. Installing Jenkins on the cluster this way meets most needs; follow each step to get the desired results without wasting time on manual work. A correct installation lets code be submitted reliably, providing a continuous loop that results in a well-polished product. Each Jenkins build creates a ready-to-deploy package that leads to the next phase of development and production.

Read more
18May

Installing Kubernetes on CentOS 7 – What to Do?

May 18, 2022 admin centOS, Kubernetes

Why and How to Install Kubernetes on CentOS 7

Today is the age of Kubernetes. Its popularity has increased steadily since its inception. The open-source tool Kubernetes has made container management a lot easier and more efficient. K8s, as Kubernetes is also known, has a constantly growing ecosystem, and its services and tools are easily accessible. K8s and its tooling are now essential for anyone or any company seeking to manage containers and containerized apps in an effortless manner. Many first-timers, however, struggle to install Kubernetes clusters locally. That's only natural. This guide is for you if you are one of those people. In this section, we will show you how to install Kubernetes on CentOS 7. Read on.

Why install Kubernetes Cluster on CentOS 7 anyway?

Now, the question is why you should opt for Kubernetes. Well, Kubernetes is an amazing innovation. It provides its users with many cool features that are hard to ignore. Here are those features:

  • Kubernetes is known for its convenient storage orchestration. In addition, it permits its users to auto-mount storage systems of their preference.
  • Kubernetes is capable of exposing containers using the DNS name or their IP address. In case the container traffic is gigantic, Kubernetes can load balance and dispense that network traffic in a way that results in a stable deployment.
  • Self-healing is another great feature of K8s. If a container fails, Kubernetes restarts it and, if needed, replaces it. Kubernetes also removes containers that fail the health checks you define, and it never shows those containers to clients until they're ready.
  • Kubernetes also offers automated rollouts and rollbacks. You define a desired state for your K8s-deployed containers, and Kubernetes gradually changes the actual state toward the desired one.
  • The software also comes equipped with the auto-bin packing feature. Kubernetes users can specify the amount of CPU or RAM each container requires or give it a cluster to run containerized tasks on. Then, Kubernetes can place those containers into your nodes, capitalizing on your existing resources.
  • Kubernetes also allows the users to stow and manage their sensitive info, including their SSH keys and OAuth tokens. One can perform deployments and update secrets, and configure apps. That too, without reconstructing their container images or revealing secrets in their stack setups.

Steps to follow for installing Kubernetes on CentOS 7

Now coming to the article’s core, here are the steps you need to follow for installing Kubernetes on CentOS 7.

Prerequisites

  • 3 CentOS-installed servers
  • Root permission

Step 1. Install Kubernetes

The foremost task to perform here is to install Kubernetes (K8s) on all three servers, so we have to prepare them first. You can make them ready for the K8s installation by modifying the current setup on each server and installing packages like Docker CE and then Kubernetes itself.

First, utilize the vim editor to edit your hosts file:

vim /etc/hosts

Copy-paste these host entries:

10.0.15.10 k8s-master

10.0.15.21 node01

10.0.15.22 node02

Once you’re done, save and quit the vim editor.

Next, you have to disable SELinux. This is a key step, so take heed. Execute the following command to disable SELinux:

setenforce 0

sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

After that, enable the br_netfilter module. This module has to be active for installing Kubernetes: enabling the br_netfilter kernel module allows packets crossing the bridge to be processed by iptables, which handles port forwarding and filtering, and it permits the K8s pods in the cluster to communicate with one another.

Execute the following command for enabling the br_netfilter module:

modprobe br_netfilter

echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
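The echo above takes effect immediately but does not survive a reboot. As a hedged sketch, you could persist the setting with a sysctl drop-in file (the file name k8s.conf is an assumption):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system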

Once it’s enabled, focus on disabling SWAP to allow a smooth K8s installation. Utilize the below command to do so:

swapoff -a

After that, open the /etc/fstab file and make a small edit:

vim /etc/fstab

Comment out the swap line (the one referencing the swap device or its UUID) like this:

# /dev/mapper/centos-swap swap swap defaults 0 0
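Before moving on, you can verify that swap is fully off; the Swap row of the output should show all zeros:

free -h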

Once finished, let’s proceed to install the latest version of Docker Community Edition (CE) from your Docker repository.

Install the package dependencies for Docker Community Edition using this command:

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repository to your system and install Docker CE using the following commands:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum install -y docker-ce

Wait for the Docker CE installation to finish.

Next, add the Kubernetes repository to your CentOS 7 system by running the command below:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

After that, install the kubeadm, kubectl, and kubelet K8s packages using the following yum command:

yum install -y kubelet kubeadm kubectl

After these packages are installed, reboot the servers:

sudo reboot

Sign in to your server again and start both the docker and kubelet services:

systemctl start docker && systemctl enable docker

systemctl start kubelet && systemctl enable kubelet

It is also necessary to align the cgroup driver: while installing Kubernetes on your CentOS 7 machine, you must ensure that Kubernetes and Docker CE use the same cgroup driver.

Examine the cgroup of the Docker utilizing the below command:

docker info | grep -i cgroup

Here, you should see your Docker using cgroupfs as its cgroup-driver.

Next, execute the following command for modifying the cgroup-driver of K8s to cgroupfs and thus, making both alike:

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload the systemd configuration and restart the kubelet service:

systemctl daemon-reload

systemctl restart kubelet

Now, we are all set for our K8s cluster.

Step 2. Initialize Kubernetes cluster

Here, we initialize the K8s master. First, open a shell on the master server and execute this command to configure the K8s master cluster:

kubeadm init --apiserver-advertise-address=10.0.15.10 --pod-network-cidr=10.244.0.0/16

Once your K8s initialization is finished, you will get a relevant output stating the same.

  • You have to copy the kubeadm join … … … command to a text editor. This command will be essential when registering new nodes to your K8s cluster.
  • Next, you have to execute a few commands to start using your Kubernetes software.

First, generate a new .kube setup directory and copy the setup admin.conf:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

After doing so, perform the flannel network deployment to the K8s cluster using this command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Now, the flannel network deployment to your K8s cluster is done.

Wait for some time and then examine your K8s nodes and pods by putting the below command to use:

kubectl get nodes

kubectl get pods --all-namespaces

If successful, the output will display the status of "k8s-master" as "Ready." It will also show the status of all the required pods, including the "kube-flannel-ds" pods, as "Running." These pods are crucial for configuring the pod network.

And with that, the initialization and setup of your K8s master cluster is complete.

Step 3. Add both the worker nodes to your cluster

Step 3 is all about adding both the worker nodes to your K8s cluster.

Establish a connection with your first node. Remember the kubeadm join command we copied before? We need to run that command here:

kubeadm join 10.0.15.10:6443 --token vzau5v.vjiqyxq26lzsf28e --discovery-token-ca-cert-hash sha256:e6d046ba34ee03e7d55e1f5ac6d2de09fd6d7e6959d16782ef0778794b94c61e

Now, connect with the second server as well and run the same kubeadm join command again:

kubeadm join 10.0.15.10:6443 --token vzau5v.vjiqyxq26lzsf28e --discovery-token-ca-cert-hash sha256:e6d046ba34ee03e7d55e1f5ac6d2de09fd6d7e6959d16782ef0778794b94c61e

After waiting for a minute or two, verify the status of pods and nodes using this command:

kubectl get nodes

kubectl get pods --all-namespaces

If everything is done right, you’d receive an output stating that both the nodes have become a part of your cluster.

Step 4. Construct your first pod as a test

It's best to deploy a demo Nginx pod to your K8s cluster as a test to verify that everything is fine. Pods are assemblages of one or more containers (for instance, Docker containers) with shared storage and network that run on K8s.

Sign in to your Kubernetes master server and deploy a new pod, namely “nginx,” using the below command:

kubectl create deployment nginx –image=nginx

Once deployed, you can see info of your Nginx deployment specs by executing this command:

kubectl describe deployment nginx

Doing so will display the Nginx deployment specs right then.

Now, you have to make your Nginx pod reachable from outside the cluster. For this purpose, generate a new NodePort service by executing the command below:

kubectl create service nodeport nginx --tcp=80:80

Make sure the command completes without errors, then view your Nginx service's NodePort and IP address:

kubectl get pods

kubectl get svc

Note both port numbers in the output, though we only need the NodePort; in this example it is 30691 (Kubernetes assigns NodePorts from the 30000-32767 range).
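If you prefer to read the NodePort programmatically instead of scanning the table, a jsonpath query works (assuming the service is named nginx, as created above):

kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'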

Run this command from your K8s master server:

curl node01:30691

curl node02:30691

The Nginx pod deployment on your Kubernetes cluster is successful and accessible online. Congratulations on a triumphant installation of a K8s cluster on your PC running CentOS 7.

Conclusion

To conclude, Kubernetes (K8s) is an essential tool for managing clusters residing on different servers. Besides making the deployment process easier, it also makes it more fruitful. Why struggle with container management when Kubernetes can be your lifesaver? That's why we decided to write this guide, so that you too can find the task of installing Kubernetes on CentOS 7 easy. We hope this write-up serves as an effective starting point for all the first-time users out there.

As you may have gathered from the article, the process of installing a Kubernetes cluster itself is nothing fancy, but it can end up puzzling you if you aren't attentive enough. From installing the Kubernetes tools to creating a demo pod to verify that everything works, follow every step scrupulously. Skipping even a tiny part or forgetting to execute even one command could mess everything up.

Just comply with the stuff mentioned above, install Kubernetes on your local CentOS 7 machine, and bid your container management problems goodbye. All the best…and oh, happy clustering!

Read more
17May

What is Amazon Elastic Kubernetes Service?

May 17, 2022 admin AWS, Kubernetes

Kubernetes has changed the way many companies distribute their workloads. By using Kubernetes, you can seamlessly scale with traffic changes. It helps automate your container workflow, and it provides container orchestration for your containers.

You can improve Kubernetes performance by running it on Amazon Elastic Kubernetes Service (EKS). Running Kubernetes on EKS will give you more control over managing, deploying, and even scaling your applications within containers. It offers a rich ecosystem and great flexibility alongside seamless container deployment on AWS. Furthermore, it gives you complete control over container customization.

While running a Kubernetes cluster, you may face some challenges. One of them is deciding which cloud should host your applications. Once you look at the options, you need to filter your choices depending on network, bandwidth, storage, and other features.

What is Amazon Elastic Kubernetes Service?

Amazon EKS is an AWS offering that comes as a managed container as a service (CaaS), allowing you to run Kubernetes on AWS. With EKS, you don't need to install or operate Kubernetes yourself; Kubernetes can be run without managing the control plane or worker nodes.

To get a better understanding of Amazon EKS, let’s get an overview of Kubernetes.

Kubernetes is the most popular container orchestration engine, launched by Google in 2014, and it works well for cloud-native computing services. It was released as an open-source engine that helps automate, manage, and scale thousands of containers at the same time without impacting their performance. It helps with load balancing, monitoring, controlling resource consumption, and provisioning additional resources from various sources.

History of Amazon EKS

Today, most companies run Kubernetes on AWS, making Kubernetes core to many AWS customers, as it allows them to run thousands of containers on AWS efficiently. As a result, in 2018, AWS announced that Amazon EKS was available for customers who use Kubernetes, simplifying the complete process: there is no need to set up a Kubernetes cluster from scratch.

Before EKS was introduced and became available to all AWS customers, customers had to acquire real expertise to run and manage Kubernetes clusters. Apart from this, companies had to provision Kubernetes management infrastructure across several AZs. Since the arrival of EKS, this problem has been resolved to a great extent, as it provides a production-ready architecture and runs and manages Kubernetes clusters across several AZs while offering a wide range of features and functionality.

How Amazon EKS Works

The major work of EKS is to simplify managing and maintaining highly available Kubernetes clusters within AWS. The two key components of Amazon EKS are the control plane and the worker nodes.

Control Plane

The Amazon EKS control plane consists of three Kubernetes master nodes that run across three different Availability Zones. The Kubernetes API receives all incoming traffic through a network load balancer (NLB) and runs on a virtual private cloud controlled by Amazon. Thus, organizations cannot manage the control plane directly; AWS manages it for them.

Worker Nodes

The organization controls the worker nodes, which run on Amazon EC2 instances within its virtual private cloud. You can use any AWS instance type as a worker node, and you can access the worker nodes via SSH without any extra automation. You can easily run a cluster of worker nodes for the organization's containers; these nodes are managed and monitored by the control plane.

As an organization, you can easily deploy a Kubernetes cluster for every application due to the EKS layout flexibility. You can run more than one application on the EKS cluster using the Kubernetes namespace and configuration of AWS IAM. Companies can use the EKS instead of running and maintaining Kubernetes infrastructure.

Benefits of Elastic Kubernetes Service

Listed below are some of the benefits of using the AWS EKS.

  • Improves availability and observability

With the help of EKS, the Kubernetes control plane runs across several AWS Availability Zones, which allows unhealthy or malfunctioning control plane nodes to be detected and replaced automatically. Apart from this, it offers on-demand, zero-downtime upgrades of the system, including patching, and it guarantees 99.95 percent uptime. It also enhances the observability of your Kubernetes cluster and helps you identify and resolve issues.

  • Scales your resources efficiently

With EKS node groups, you do not need to provision compute capacity manually for your Kubernetes cluster to scale. For running applications on Kubernetes, you can use the AWS Fargate service to provision serverless compute on demand. On Amazon EC2, EKS node groups can be configured with instance types that reduce costs and improve system efficiency.

  • Ensures secure Kubernetes environment

With EKS, the latest security updates are applied automatically and pushed to your cluster's control plane. The community is very active and works with AWS to address crucial security issues, ensuring that every Kubernetes cluster stays safe and secure.

Amazon EKS Features

Amazon EKS allows organizations to take advantage of some of the most important features of the Amazon platform, including reliability, high resource availability, enhanced performance, and scale, as well as important integrations with the AWS network and security services. We have listed some of the key features of Amazon EKS below.

  • Managed Control Plane

With Amazon EKS, you get a highly available and scalable control plane that runs efficiently across several AWS AZs. EKS manages and maintains the availability and scalability of the Kubernetes cluster and API services. To ensure high availability, the Kubernetes control plane runs across three different Availability Zones, with automatic detection of any unhealthy nodes.

  • Managed Worker Nodes

You can run a simple but efficient command for creating, updating, and terminating EKS worker nodes. These worker nodes run with the help of the latest Amazon Machine Images (AMIs).

  • Launch using eksctl

If you want to get EKS running within minutes, you can run a single open-source eksctl command, which creates a Kubernetes cluster ready to run your application (see the sketch after this list).

  • Service Discovery

AWS has a cloud resource discovery service known as Cloud Map. It helps companies define namespaces for their application resources and keeps the locations of dynamic resources up to date. This increases application availability, as the company's web services always discover resources at their most up-to-date location.

EKS also provides a connector that automatically propagates internal service registry locations as Kubernetes launches services and removes them once they are terminated.
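As a hedged sketch of the eksctl launch mentioned in the list above, a small cluster could be created with one command; the cluster name, region, and node count here are assumptions:

eksctl create cluster --name demo-cluster --region us-east-1 --nodes 2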

Use cases

Below are the use case applications for Amazon EKS.

Hybrid Deployment

EKS allows you to manage Kubernetes clusters and their applications across hybrid environments: with EKS, you can run Kubernetes in your own data center as well as on AWS. You can run your EKS applications close to AWS Local Zones and AWS Wavelength for better application performance, and you can use AWS Outposts with EKS to extend AWS infrastructure, services, APIs, and many other tools.

Machine Learning

To model your machine learning workflows, you can run Kubeflow on EKS, which also helps with distributed training jobs on the latest GPU-powered EC2 instances. To run training and inference with Kubeflow, you can leverage AWS Deep Learning Containers.

Batch Processing

You can use the EKS cluster along with Kubernetes jobs API for running sequential and parallel batch workloads. EKS will help you in planning, scheduling, and executing the batch-related computing workloads across several AWS compute services like Amazon EC2, Fargate, etc.

Web Applications

It helps in creating web applications that scale efficiently and run in a highly available configuration. It improves the performance, scalability, and reliability of your web applications, and you get strong integrations with AWS networking and security services, such as load balancers, for efficiently distributing load across the network.

Conclusion

Amazon EKS offers you complete and advanced integration with AWS services that will help in improving the performance of your applications running on clusters. It offers various features, tools, and technologies for managing and maintaining the Kubernetes cluster in a high availability zone. In this article, we have highlighted key points on Amazon EKS, its features, and various use cases. By reading it, you will get a complete picture of Amazon EKS and how it is important to organizations.

Read more
10May

Docker-compose vs Kubernetes

May 10, 2022 admin Difference, Docker, Kubernetes

Docker Compose: Docker Compose is a tool created to help define and share multi-container applications. With Compose, you write a YAML file to define the services, and with a single command you can spin everything up or tear it all down. The big benefit of using Compose is that you can define your application stack in a file, keep it at the root of your project repo, and easily let someone else contribute to your project: they only need to clone your repo and start the app with Compose. You may see many projects on GitHub or GitLab doing exactly this.
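For illustration, a minimal docker-compose.yml might look like this; the service names and images are assumptions, not part of the original article:

version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example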

Kubernetes: Also known as K8s, Kubernetes is an open-source framework for managing containerized applications across multiple hosts. It provides the basic mechanisms for deploying, maintaining, and scaling applications. Kubernetes emerged from Google's experience running production workloads at scale on its Borg system, combined with best-of-breed ideas and practices from the community. It is maintained by the Cloud Native Computing Foundation (CNCF).

Before jumping towards the comparison of the two popular tools, let’s get into the exploration of their containers.

Docker Compose in relation to Kubernetes

Docker Compose has its benefits compared with Kubernetes, but that doesn't mean it is the best arrangement over the long haul. Kubernetes is the more powerful of the two when requirements call for scaling up while staying lean. Fortunately, migrating from Docker Compose to Kubernetes is more straightforward than ever.

Docker Compose has an advantage over Kubernetes, perhaps especially for those who are new to containerization: the learning curve isn't as steep. Docker Compose is designed to get you started from the earliest stage and to simplify the arrangement of microservices. You can use YAML to configure the environment and then start all containers and microservices with a single command.

In other respects, Kubernetes is more appealing. For one, Docker Compose is intended to run on a single host or cluster, while Kubernetes is more elegant at consolidating multiple cloud environments and clusters.

That advantage alone means Kubernetes is simpler to scale past a certain point. Support for components like MySQL and GoLang is likewise better in Kubernetes, and you can use managed services from the likes of AWS and GCP to support your deployment.

It is clear, however, that the primary motivation for moving to Kubernetes is scalability. Moving to Kubernetes as your container runtime is a reasonable progression, and it will allow you to take your application to a higher level.

Docker Compose is intended to run on a single host, which means container communications are minimal and require no particular configuration.

Exploration of Containers

Containers solve a vital issue in the lifetime of application development. While developers are writing code, they're often unaware of the problems that will arise later; it's when they move that code to production that they start facing issues, because code that worked on their machine may not work in production.

There are a vast number of explanations for this. Sometimes there may be issues with the software, or the cause may differ accordingly.

Containers help solve this underlying issue by separating code from the infrastructure. Developers can pack up their application and all its binaries and libraries into a container. In production, that container can run on any device that includes a containerization platform.

A container needs only the application and the definitions of all the binaries and libraries it requires to run. It is not at all like a traditional virtual machine.

Container isolation happens at the kernel level, and no guest operating system is required. Because applications can be encapsulated in self-contained environments, you get faster deployments, closer parity between development environments, and near-infinite scalability.

Difference between the containers

Usually, Docker Compose begins by starting one or more containers. It creates one or more networks and connects containers to them, and it can also create one or more volumes and configure containers to mount them. All of this runs on one host. Compose uses a YAML file to configure the application's services; it is a Docker utility to run multiple containers and share volumes and networking via the Docker engine's features. It runs locally to emulate service composition and remotely on clusters.

If we look at Kubernetes, it is a distributed container orchestration tool. You may wonder what exactly a container orchestrator is: container orchestrators are tools that group hosts together to form a cluster. They are fault-tolerant and can handle a large volume of containers and users. Kubernetes takes care of running containers and enhancing the engine's features, and these containers can be composed and scaled to serve complex applications.

Kubernetes is the container orchestrator created by Google and donated to the CNCF; it is now open source. It benefits from Google's years of expertise and is an extensive framework for automating the deployment, scheduling, and scaling of containerized applications. It also supports numerous containerization tools like Docker.

Furthermore, Kubernetes can run on either a public cloud or on-premises infrastructure. It is open source and has a vibrant community. Organizations of all sizes are investing in it, and many cloud providers offer Kubernetes as a service.

The Orchestration Battle

The main Kubernetes characteristics include decoupling the infrastructure from applications using containers, and it is open to other container engines as well. The orchestration framework serves as a dynamic, complete foundation for a container-based application, allowing it to operate in a protected, highly coordinated environment while managing its connections to the outside world.

Docker Compose isn't a production-ready tool. It performs admirably in development environments but lacks many of the capabilities that are more or less required for genuine production use.

Kubernetes is well suited for this task, which is one reason it has become so popular. Kubernetes has won the orchestration battle; its inclusion in Docker Desktop and its presence on all major cloud providers confirm that. Kubernetes is significantly more capable and has far stronger community and corporate support. It handles scheduling onto nodes in a compute cluster and actively manages workloads, guaranteeing that their state matches the user's declared intentions.

The difficulty of choosing between the two

If changes are being made to the application or image definition and you wish to see them running in Docker Compose, you run the command docker-compose up --build. In Kubernetes, the image can be rebuilt with a command such as docker build --tag my-image:local. In either case, you might notice that your changes are not running in Kubernetes instantly.

The issue is that Kubernetes has no signal that anything changed once the image was rebuilt. The appropriate response is to delete the pod the image was running on and recreate it. If you're running a single pod, delete it and recreate it yourself from the YAML pod definition. If you are running a Deployment or a StatefulSet, you can either delete the pods so that they are automatically recreated for you, or scale the replicas down to zero and back up again.
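As a hedged sketch of those two options (the deployment name my-app and its app label are assumptions):

# Option 1: delete the pods; the Deployment recreates them with the new image
kubectl delete pod -l app=my-app

# Option 2: scale to zero and back up
kubectl scale deployment my-app --replicas=0
kubectl scale deployment my-app --replicas=1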

Advantages & disadvantages of Docker Compose and Kubernetes

Docker Compose is a tool for defining and running multi-container applications. From a web-development standpoint, Docker is unquestionably one of the most useful tools around: it is an excellent way to manage the lifecycle of your application in development, getting it fully operational and stopping it.

With docker-compose, we can run a build step as part of bringing the stack up, and this build cycle produces the images that we then use to create containers.

In Docker Compose, volumes can be genuinely precise: we can mount any file or subdirectory relative to the directory from which we run docker-compose, which makes it simple to discover, inspect, and clean up those files. Kubernetes is not like this. It isn't running from a project folder like Docker Compose; with Docker Desktop it is already running inside the Docker Desktop virtual machine. So, if we define a volume to be mounted in a container, where does the data for that volume live? It lives somewhere in the Docker Desktop virtual machine, unless we're running WSL 2. Fortunately, Docker Desktop has file sharing set up with the host OS, so we can exploit this to inspect or clean up persistent data.

Using Docker Compose for local development is, without a doubt, more convenient than using Kubernetes. Generally, you only need to know two commands to build, run, rebuild, re-run, and shut down your applications in Docker: docker-compose up --build and docker-compose down. For volumes, Docker Compose lets you mount a directory relative to where you execute docker-compose from, and it works across platforms.

Additionally, Docker Compose is safer: there's no chance you'll accidentally docker-compose a half-built image into production.

Docker Compose has the drawback that it duplicates work: you repeat the content of your Kubernetes manifests in docker-compose files. Considering the additional setup, volume definitions, and preparations that have to be added for Kubernetes development anyway, this is probably a small difference.

Kubernetes, more precisely, reflects what you will actually deploy into shared Kubernetes clusters or production. Using a tool like Helm gives you package-manager-like features for installing externally created charts without redefining them in your local repository.

Using Kubernetes requires a decent knowledge of Kubernetes and its surrounding tools, or extra scripting to hide those details. Kubernetes tools like kubectl and Helm depend on a context that could be set to the wrong Kubernetes cluster, which would cause an undesirable ruckus! Setting up safeguards, such as configuring RBAC, is possible in staging or production Kubernetes clusters where appropriate. Alternatively, work locally inside a namespace that doesn't exist in other clusters.

Pros – Docker Compose vs Kubernetes

Kubernetes and Docker Compose can be sorted as “container” tools.

"Multi-container descriptor," "quick development environment setup," and "simple linking of containers" are the key factors why engineers consider Docker Compose, whereas "leading Docker container management solution," "simple and powerful," and "open source" are the essential reasons why Kubernetes is favored.

Docker Compose and Kubernetes are both open-source instruments. Kubernetes, with 55K GitHub stars and 19.1K forks, appears to have more adoption than Docker Compose, with 16.6K GitHub stars and 2.56K forks.

Companies using Kubernetes and Docker Compose

Google, Slack, and Shopify are some of the well-known organizations that use Kubernetes, while StackShare, CircleCI, and Docker use Docker Compose. Kubernetes has broader endorsement, being referenced in 1046 company stacks and 1096 developer stacks, compared with Docker Compose, which is listed in 795 company stacks and 625 engineer stacks.

Key Drawbacks

While Docker Compose is a robust tool with a rich feature set, there are many things it can't do. Objects like CRDs, Jobs, and StatefulSets can't be created with Compose. Networking is possible; however, describing it in a docker-compose.yml file can quickly become cumbersome.

There are some technical disadvantages to proceeding with Compose, and you must also consider their impact. Significantly fewer people use Compose in production, so you'll probably struggle to find a new hire who is ready to jump straight in. The more advanced Compose features are not commonly used, and you'll still need to learn how to configure Kubernetes.

One choice is that a specialist on the team works through the tutorial and gets everything defined in a .yml file. In that case, you'll continue to use Compose, but you'll bear the cost of the engineering time spent converting the Kubernetes manifests. This also implies that your engineers understand the manifests well enough to convert them to another format, weakening the argument for using Compose at all.

The other choice is that the prototype manifest is used as a proof of concept but winds up being used in production due to a deadline or other reasons. Now you have a mix of Compose files and Kubernetes manifests, which can quickly lead to confusion.

You will also have a hard time integrating with other tooling, since many tools are built on top of existing Kubernetes manifests. Some of these tools help with deployment, such as Helm; as you work on your application, other tools like Skaffold run it in Kubernetes for you. There may be workarounds that allow you to use these tools with Compose, but you won't find authoritative documentation on how to set them up. Keeping such workarounds up to date is challenging, and it leaves room for errors.

Conclusion

It is possible that Kubernetes will replace Docker Compose in the near future, but given the additional complexity and compromises, it may be worthwhile to use both. Docker Compose is presumably sufficient, and much easier, for day-to-day development. Using a local Kubernetes cluster adds complexity and effort, so it is up to you; it is unquestionably useful for developing Helm charts or manifests, or in situations where you need to faithfully recreate a piece of your deployment design.

In a realistic environment, a cloud-native application can be deployed in many ways. No matter how many microservices you have, you can configure your cloud cluster for maximum performance. The two most popular ways are Kubernetes and Docker Compose.

Read more
09May

How to Install Software on Kubernetes Clusters?

May 9, 2022 admin Kubernetes


Kubernetes has its own package manager, Helm. It lets programmers configure and install applications on Kubernetes clusters easily, and it provides functions similar to a package manager in other operating systems:

  • Helm defines a chart format: a standard file and directory structure for packaging Kubernetes resources.
  • For much popular software, Helm offers a public repository of charts; third-party repositories can also be used to obtain charts.
  • The Helm client software includes commands for listing and searching charts by keyword and for deploying applications to clusters. You can also remove applications and manage releases with it.

Thus, Helm has a crucial role in installing software on Kubernetes clusters; a quick chart search is sketched below.
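For example, once the client is installed (Step 1 below), you can search the stable repository for charts by keyword; this uses the Helm v2 syntax that the rest of this guide follows:

helm search dashboard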

If you want to learn how Helm assists in deploying apps on Kubernetes, then this article is for you.

Step 1: Installing Helm

First, you have to install the helm command-line utility on your machine. Helm offers a script that manages the installation process on macOS, Windows, and Linux.

Download the script to a writable folder:

cd /tmp

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh

Then make the script executable with chmod:

chmod u+x install-helm.sh

Now, open the script with your preferred text editor and check it thoroughly. After checking carefully, run the script:

./install-helm.sh

Then enter your password and press Enter.

Output

helm installed into /usr/local/bin/helm

Run 'helm init' to configure helm.

The installation is completed by installing some Helm components on the cluster, as described next.

Step 2: Installing Tiller

The helm command works together with Tiller, a companion service that runs on the cluster, receives commands from helm, and interacts with the Kubernetes API, which handles creating and removing resources.

To permit Tiller to run on the cluster, you have to create a Kubernetes ServiceAccount resource.

To create the ServiceAccount for Tiller, enter the command:

kubectl -n kube-system create serviceaccount tiller

Further, the cluster-admin role must be bound to the tiller ServiceAccount:

kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Then run helm init, which installs Tiller on your cluster and performs local housekeeping such as downloading the stable repo details:

helm init --service-account tiller

Output

. . .

Tiller has been installed into your Kubernetes Cluster.

Note: Tiller is installed with an insecure 'allow unauthenticated users' policy, by default.

To check whether Tiller is running, list the pods in the kube-system namespace:

kubectl get pods --namespace kube-system

Output

NAME READY STATUS RESTARTS AGE

. . .

kube-dns-64f766c69c-rm9tz 3/3 Running 0 22m

kube-proxy-worker-5884 1/1 Running 1 21m

kube-proxy-worker-5885 1/1 Running 1 21m

kubernetes-dashboard-7dd4fc69c8-c4gwk 1/1 Running 0 22m

tiller-deploy-5c688d5f9b-lccsk 1/1 Running 0 40s

 

You can identify the Tiller pod by its name, which starts with tiller-deploy-.

Up to now, you have successfully installed both Helm and Tiller. Helm is now ready to use for installing applications.

Step 3: Installing Helm chart

Helm charts are Helm's software packages. A chart repository known as stable comes built in with Helm.

To install the kubernetes-dashboard package from the stable repo, use helm:

helm install stable/kubernetes-dashboard --name dashboard-demo

Output

NAME: dashboard-demo

LAST DEPLOYED: Wed Aug 8 20:11:07 2018

NAMESPACE: default

STATUS: DEPLOYED

Check the NAME line in the output section: dashboard-demo is the name of your release. A Helm release is a single deployment of a chart with a particular configuration.

You can likewise deploy multiple releases of the same chart, each with its own configuration.

If you do not specify a release name, Helm names the release randomly for you.

To list the releases on the cluster with Helm, enter the command below:

helm list

Output

NAME REVISION UPDATED STATUS CHART NAMESPACE

dashboard-demo 1 Wed Aug 8 20:11:11 2018 DEPLOYED kubernetes-dashboard-0.7.1 default

If you want to check the new service deployed on the cluster, you can use kubectl:

kubectl get services

Output

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

dashboard-demo-kubernetes-dashboard ClusterIP 10.32.104.73 <none> 443/TCP 51s

kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 34m

The service name is the combination of your release name and the chart name.

You have now deployed the application successfully. Next, Helm will change the configuration, and the deployment will be updated.

Step 4: Updating the Release

If you want to upgrade a release to a new chart version or new configuration, use the command helm upgrade.

As an example of the upgrade and rollback process, we will change the name of the dashboard service to dashboard, rather than dashboard-demo-kubernetes-dashboard.

The kubernetes-dashboard chart provides a fullnameOverride configuration option that controls the service name:

helm upgrade dashboard-demo stable/kubernetes-dashboard --set fullnameOverride="dashboard"

You will see output similar to that of the initial helm install step.

To check that the Kubernetes services reflect the updated values, enter the command:

kubectl get services

Output

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 36m

dashboard ClusterIP 10.32.198.148 <none> 443/TCP 40s

The service has been updated correctly.

Step 5: Rolling back a Release

After the update of the release, if you want to roll back the release, this step will guide you.

Once you updated dashboard-demo, you created a second revision of the release. Helm keeps all the details of previous releases in case you want to roll back to an old configuration.

Enter helm list to check the release:

helm list

Output

NAME REVISION UPDATED STATUS CHART NAMESPACE

dashboard-demo 2 Wed Aug 8 20:13:15 2018 DEPLOYED kubernetes-dashboard-0.7.1 default

In the output, the REVISION column now shows the second revision.

To roll back to the first revision, use the command;

helm rollback dashboard-demo 1

The output shows that the rollback succeeded:

Output

Rollback was a success! Happy Helming!

Now, if you run kubectl get services again, you will notice that the service name has changed back to its previous value, which means Helm has re-deployed the application with the revision 1 configuration.

Thereafter, to remove releases, check the next step.

Step 6: Deleting a Release

To delete the Helm release, use the command helm delete;

helm delete dashboard-demo

You will notice that the release is removed successfully, which will stop the dashboard application automatically.

Even after deletion, Helm saves the release's revision history in case you want to re-deploy it. Because of this, you will get an error if you try to helm install a new release named dashboard-demo.

To list your deleted releases, use the --deleted flag:

helm list --deleted

If you want to remove the release permanently and make its name available again, use the --purge flag with the helm delete command:

helm delete dashboard-demo --purge

After this, the release is permanently removed.

Conclusion

In a nutshell, you now have all the information you need to install software on a Kubernetes cluster using Helm. The steps above cover installation with the helm command-line tool and its associated Tiller service, along with installing applications, upgrading, rolling back to a previous release, and deleting Helm charts.

Read more