How to Install Kubernetes on CentOS 7


Kubernetes (K8s) is a container orchestration system that manages containers at scale. Originally created by Google, Kubernetes is now an open-source project maintained by a community of dedicated developers. Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Kube DNS, and so on. It does not, however, create users or handle operating-system-level dependencies and their configuration.

That’s where configuration management tools like SaltStack and Ansible come in. Tools like these make creating additional clusters or recreating existing ones consistent and less error-prone. This guide teaches you how to install Kubernetes on CentOS 7 using Ansible and Kubeadm. Scroll down and read on.

Prerequisites to install Kubernetes on CentOS 7 

Before starting the step-by-step process, there are some prerequisites to take care of. Your Kubernetes cluster will comprise the following two kinds of resources:

  • One master node

When talking about Kubernetes, we refer to each server as a node. The master node manages the state of the cluster. It also runs Etcd, which stores cluster data, among other components responsible for scheduling workloads to the worker nodes.

  • Two worker nodes

The other resource is the two worker nodes, the servers where your scheduled workloads (containerized apps and services) run. Once a workload is assigned, a worker keeps running it even if the master goes down after the scheduling is complete. Thus, you can increase a cluster's capacity by adding more workers.

Here are the prerequisites to install Kubernetes on CentOS 7:

  • An SSH key pair stored on your local computer.
  • A trio of CentOS 7 servers, each with 2 GB of RAM and two virtual CPUs. You should be able to SSH into each of these servers as the root user using your SSH key pair (see the quick check after this list).
  • Ansible installed on your local Linux/macOS/BSD machine.
  • A basic understanding of Ansible playbooks.
  • Comprehension of the process of launching a container from a Docker image. 
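Before moving on, it helps to confirm that key-based SSH access actually works for each server. A minimal sanity check, assuming the placeholder IPs master_ip, worker_1_ip, and worker_2_ip stand in for your real addresses:

# Confirm passwordless root SSH access to every server (the IPs are placeholders)
for ip in master_ip worker_1_ip worker_2_ip; do
  ssh -o BatchMode=yes root@"$ip" 'hostname && cat /etc/centos-release'
done

If any of these commands prompts for a password or fails, fix the SSH key setup before continuing.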

Note: After going through this entire article, you will have a cluster ready for running containerized apps, provided the cluster's servers have enough CPU and RAM for your apps to consume. You can containerize nearly any conventional Unix app, including daemons, command-line tools, etc., and run it on the cluster. The cluster itself will take about 300-500 megabytes of memory and 10% CPU on every node in order to function properly.

Step-by-Step Guide to Install Kubernetes on CentOS 7

Here are all the steps that you can easily follow to install Kubernetes and start using it on your CentOS 7 system:

Step 1 – Set up the workspace directory and Ansible inventory file

The first step to install Kubernetes on CentOS 7 is to set up your workspace directory and Ansible inventory file. To do so, you will create a hosts file containing inventory information. It will list three servers: the master, with an IP shown as master_ip, and two workers, with the IPs worker_1_ip and worker_2_ip, respectively.

First, create a new directory named ~/kube-cluster in the home directory of your local machine and navigate into it by running the following commands in the terminal:

mkdir ~/kube-cluster
cd ~/kube-cluster

This directory will be your workspace for the rest of the process. It will contain all of your Ansible playbooks, and you will run all local commands from it.

Next, create a new file named ~/kube-cluster/hosts using vi or any other text editor you like:

vi ~/kube-cluster/hosts

Now, press i and insert the following text into the file to specify information about your cluster's logical structure:

~/kube-cluster/hosts

[masters]
master ansible_host=master_ip ansible_user=root

[workers]
worker1 ansible_host=worker_1_ip ansible_user=root
worker2 ansible_host=worker_2_ip ansible_user=root

Once you're done, press ESC followed by :wq to write the changes to the file and exit.

As mentioned before, Ansible inventory files specify server information such as IP addresses and remote users. They also group servers together so they can be targeted as a single unit when executing commands. Your inventory file is ~/kube-cluster/hosts, and you have just added two Ansible groups to it, masters and workers, specifying your cluster's logical structure.

The masters group contains a server entry named "master", which lists the master node's IP (master_ip) and specifies that Ansible should run remote commands as the root user.

Likewise, the workers group contains two entries for the worker servers (worker_1_ip and worker_2_ip) that also specify root as the ansible_user.
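With the inventory in place, you can optionally confirm that Ansible can reach all three servers before writing any playbooks. A quick check using Ansible's built-in ping module:

# From inside ~/kube-cluster: contact every host in the inventory over SSH
ansible -i hosts all -m ping

Each host should reply with "pong". If one doesn't, recheck its IP and your SSH key setup before continuing.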

Step 2 – Install Kubernetes' OS dependencies

After setting up your workspace directory and Ansible inventory file, the next step is installing the operating-system-level dependencies that Kubernetes needs, using CentOS's yum package manager. These dependencies are as follows:

  • Docker – a container runtime, the component that runs your containers. Support for other runtimes, such as rkt, is under active development in Kubernetes.
  • kubeadm – a Command Line Interface (CLI) tool that installs and configures the various components of a cluster.
  • kubelet – a system service that runs on all nodes and handles node-level operations.
  • kubectl – another CLI tool, used to issue commands to the cluster through its API server.

Now, create a file named ~/kube-cluster/kube-dependencies.yml in your workspace using this command:

vi ~/kube-cluster/kube-dependencies.yml

Next, add the following plays to the file to install these packages on your servers:

~/kube-cluster/kube-dependencies.yml

- hosts: all
  become: yes
  tasks:
   - name: install Docker
     yum:
       name: docker
       state: present
       update_cache: true

   - name: start Docker
     service:
       name: docker
       state: started

   - name: disable SELinux
     command: setenforce 0

   - name: disable SELinux on reboot
     selinux:
       state: disabled

   - name: ensure net.bridge.bridge-nf-call-ip6tables is set to 1
     sysctl:
       name: net.bridge.bridge-nf-call-ip6tables
       value: 1
       state: present

   - name: ensure net.bridge.bridge-nf-call-iptables is set to 1
     sysctl:
       name: net.bridge.bridge-nf-call-iptables
       value: 1
       state: present

   - name: add Kubernetes' YUM repository
     yum_repository:
       name: Kubernetes
       description: Kubernetes YUM repository
       baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
       gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
       gpgcheck: yes

   - name: install kubelet
     yum:
       name: kubelet-1.14.0
       state: present
       update_cache: true

   - name: install kubeadm
     yum:
       name: kubeadm-1.14.0
       state: present

   - name: start kubelet
     service:
       name: kubelet
       enabled: yes
       state: started

- hosts: master
  become: yes
  tasks:
   - name: install kubectl
     yum:
       name: kubectl-1.14.0
       state: present
       allow_downgrade: yes

The first play does several things: it installs Docker and starts the Docker service, disables SELinux (which Kubernetes does not yet fully support), sets the netfilter-related sysctl values needed for networking, adds the Kubernetes yum repository to each remote server, and installs kubelet and kubeadm.

The second play has a single task: installing kubectl on your master node.
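Before running the playbook against real servers, you can have Ansible validate it first. A small sketch using standard ansible-playbook flags:

# Check the playbook for YAML/syntax errors without touching any server
ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml --syntax-check

# Optional dry run; note that shell/command tasks are skipped in check mode
ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml --check

This catches indentation mistakes, which are the most common problem when hand-typing YAML.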

Save and close the file once you're done. Then, execute the playbook with the following command:

ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml

When the playbook finishes, you will see output similar to the following:

Output:

PLAY [all] ****

TASK [Gathering Facts] ****
ok: [worker1]
ok: [worker2]
ok: [master]

TASK [install Docker] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [disable SELinux] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [disable SELinux on reboot] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [ensure net.bridge.bridge-nf-call-ip6tables is set to 1] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [ensure net.bridge.bridge-nf-call-iptables is set to 1] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [start Docker] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [add Kubernetes' YUM repository] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install kubelet] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install kubeadm] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [start kubelet] ****
changed: [master]
changed: [worker1]
changed: [worker2]

PLAY [master] *****

TASK [Gathering Facts] *****
ok: [master]

TASK [install kubectl] ******
ok: [master]

PLAY RECAP ****
master                     : ok=9 changed=5 unreachable=0 failed=0
worker1                    : ok=7 changed=5 unreachable=0 failed=0
worker2                    : ok=7 changed=5 unreachable=0 failed=0

After the playbook execution, your remote servers will have Docker, kubelet, and kubeadm installed. kubectl, however, is not an essential component; it is only needed where you execute cluster commands. Installing it only on the master node is therefore enough, since that is where you will run kubectl commands from. You could install it on the worker nodes as well, but doing so would simply be unnecessary.
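If you want to spot-check that the packages really landed, you can query a node through Ansible's ad-hoc mode. A hedged example (the version numbers come from the playbook above):

# Verify the installed tool versions on the master node
ansible -i hosts master -b -m shell -a "docker --version && kubeadm version -o short && kubelet --version"

Run the same command against the workers group to check the worker nodes.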

All the OS dependencies are now in place, so let's configure the master node and initialize the cluster.

Step 3 – Set up the master node

This step covers setting up the master node. Before creating any playbooks, though, it's worth understanding pods and pod network plugins, since both will be part of your cluster.

Pods are atomic units that run one or more containers. The containers in a pod share resources such as network interfaces and file volumes. Pods are the basic unit of scheduling in Kubernetes.

Every pod gets its own IP address, and a pod on one node can reach a pod on another node using that IP address. Containers on the same node can communicate easily through a local interface. Communication between pods on different nodes is more complicated, however, and requires a separate networking component that can transparently route traffic between them.

Pod network plugins provide this functionality. For our cluster, we will use Flannel, a stable and performant option.

First, create an Ansible playbook named master.yml on your local machine:

vi ~/kube-cluster/master.yml

Add the below play to the newly created file for initializing the cluster and installing Flannel:

~/kube-cluster/master.yml 

- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: centos
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/centos/.kube/config
        remote_src: yes
        owner: centos

    - name: install Pod network
      become: yes
      become_user: centos
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

Let us break the entire play down:  

  • The first task initializes the cluster by running kubeadm init. It passes the argument --pod-network-cidr=10.244.0.0/16, which specifies the private subnet that pod IPs will be assigned from. Flannel uses this subnet by default, so we tell kubeadm to use the same subnet.
  • The second task creates a .kube directory at /home/centos. This directory will hold configuration information, such as the admin key files needed to connect to the cluster and the cluster's API address.
  • The third task copies the /etc/kubernetes/admin.conf file generated by kubeadm init into the home directory of your non-root centos user. This allows you to use kubectl to access the cluster as that user.
  • The fourth and final task runs kubectl apply to install Flannel. The syntax kubectl apply -f descriptor.[yml|json] tells kubectl to create the objects described in the descriptor.[yml|json] file. The kube-flannel.yml file contains the descriptions of the objects required to set up Flannel in the cluster.
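For context, here is roughly what the second and third tasks do, expressed as the equivalent manual shell commands you would run as root on the master (shown for illustration only; the playbook handles this for you):

# Manual equivalent of the .kube directory and admin.conf tasks
mkdir -p /home/centos/.kube
cp /etc/kubernetes/admin.conf /home/centos/.kube/config
chown centos /home/centos/.kube/config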

Save the file and quit once your work is done.

Next, execute the playbook using the following command:

ansible-playbook -i hosts ~/kube-cluster/master.yml

Upon successful execution, you will receive the following output:

Output:

PLAY [master] ****

TASK [Gathering Facts] ****
ok: [master]

TASK [initialize the cluster] ****
changed: [master]

TASK [create .kube directory] ****
changed: [master]

TASK [copy admin.conf to user's kube config] *****
changed: [master]

TASK [install Pod network] *****
changed: [master]

PLAY RECAP ****
master                     : ok=5 changed=4 unreachable=0 failed=0

You can check the status of the master node by SSHing into it:

ssh centos@master_ip

Once you’ve entered the master node, execute the following:

kubectl get nodes

You will receive an output that will look something like this:

Output

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   1d    v1.14.0

The above output indicates that the master node has completed all the initialization tasks and is ready to start accepting worker nodes and executing workloads sent to the API server.
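As an extra check, you can confirm that the control-plane components and the Flannel pods are all up. A quick look, still from the master node:

# All system pods (API server, etcd, DNS, Flannel, etc.) should be Running
kubectl get pods --all-namespaces

If the Flannel or DNS pods stay in a non-Running state, give them a minute or two to pull their images before investigating further.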

Step 4 – Configure the worker nodes

Now that Step 3 is complete, it's time to set up the worker nodes. Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the master's IP address and a secure token. Only nodes that pass in the secure token will be allowed to join the cluster.

Return to your workspace and create a playbook named workers.yml:

vi ~/kube-cluster/workers.yml

Now, add the following text to the file to add the worker nodes to your cluster:

~/kube-cluster/workers.yml

- hosts: master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: workers
  become: yes
  tasks:
    - name: join cluster
      shell: "{{ hostvars['master'].join_command }} --ignore-preflight-errors all >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt

Here’s a breakdown of the playbook’s functions:

  • The first play gets the join command that needs to be run on the worker nodes. The command will be in the following format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>. Once it gets the actual command with the proper token and hash values, the play sets it as a fact so that the next play can access that information.
  • The second play has a single task: running the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.
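If you are curious what the generated command looks like, you can print a fresh one on the master node without executing it anywhere:

# On the master: print (but do not run) a valid join command
kubeadm token create --print-join-command

This is also handy later if you want to add more workers to the cluster by hand.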

Now, save the file, close it, and proceed to execute the playbook:

ansible-playbook -i hosts ~/kube-cluster/workers.yml

Once you’re finished doing so, you will get an output like this one below:

Output:

PLAY [master] ****

TASK [get join command] ****
changed: [master]

TASK [set join command] *****
ok: [master]

PLAY [workers] *****

TASK [Gathering Facts] *****
ok: [worker1]
ok: [worker2]

TASK [join cluster] *****
changed: [worker1]
changed: [worker2]

PLAY RECAP *****
master                     : ok=2 changed=1 unreachable=0 failed=0
worker1                    : ok=2 changed=1 unreachable=0 failed=0
worker2                    : ok=2 changed=1 unreachable=0 failed=0

With the worker nodes added, your cluster is now fully set up and functional, and the workers are ready to run workloads. Before scheduling applications, though, we need to verify that the cluster is working as intended.

Step 5 – Verify the cluster   

Verifying the cluster is an important part of installing Kubernetes on CentOS 7, because clusters can sometimes fail during setup. The reasons include a node being down, an unstable network connection between the master and the workers, and so on. Whatever the cause, checking the cluster ensures that all the nodes are working correctly.

To do this, check the current state of the cluster from the master node to ensure that the nodes are ready. If you disconnected from the master node, you can SSH back into it with this command:

ssh centos@master_ip

Then run the following command to get the status of the cluster:

kubectl get nodes

Once executed successfully, you will get the following output:

Output:

NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1d    v1.14.0
worker1   Ready    <none>   1d    v1.14.0
worker2   Ready    <none>   1d    v1.14.0

Notice the STATUS column of the output. If all of your nodes have the value Ready, it means they are part of the cluster and ready to run workloads.

If, on the other hand, some nodes show NotReady as their STATUS, it means the worker nodes haven't finished their setup yet. Wait for a few minutes (roughly five to ten) and then rerun kubectl get nodes to see the new output. If a few nodes still show NotReady, you may have to verify and rerun the commands from Steps 1 to 4 (see the troubleshooting sketch below).
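When a node stays NotReady, two commands usually reveal the cause. A hedged troubleshooting sketch, with worker1 and worker_1_ip as placeholders:

# From the master: inspect the node's conditions and recent events
kubectl describe node worker1

# On the affected worker: check the kubelet logs for errors
ssh root@worker_1_ip 'journalctl -u kubelet --no-pager | tail -n 20'

Typical culprits are the kubelet service not running or the node failing to reach the master's API server.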

If your cluster verified successfully, it's time for some real action: scheduling a demo Nginx application on the cluster.

Step 6 – Run an application on your cluster

Having followed all of the steps above, you can now deploy any containerized application to your cluster. To keep things familiar, we will deploy Nginx using Deployments and Services. You can use the commands below for other containerized applications too, provided you change the Docker image name and any relevant flags, such as ports and volumes.

Still within the master node, execute the following command to create a deployment named nginx:

kubectl create deployment nginx --image=nginx

Deployments are a type of Kubernetes object that ensures a specified number of pods based on a defined template is always running, even if a pod crashes during the cluster's lifetime.
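You can watch the deployment bring its pod up. A quick check (kubectl create deployment adds the app=nginx label to the pod automatically):

# Confirm the deployment exists and its pod is Running
kubectl get deployment nginx
kubectl get pods -l app=nginx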

Next, run the following command to create a Service named nginx that will expose the app publicly. It does so through a NodePort, a scheme that makes the pod accessible through an arbitrary port opened on each node of the cluster:

kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

Services are another type of Kubernetes object. They expose cluster-internal services to internal and external clients, and they can load balance requests across multiple pods. Services are an essential component of Kubernetes and often interact with other components.

Now, run this command:

kubectl get services

The output you will get after doing so will look something like this:

Output:

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP             1d
nginx        NodePort    10.109.228.209   <none>        80:nginx_port/TCP   40m

Look at the above output carefully. From the third line you can retrieve the port that Nginx is running on (shown as nginx_port). Kubernetes automatically assigns a random port greater than 30000 and ensures it is not already in use.
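Instead of reading the port out of the table, you can also extract it directly and test the app from the command line. A small sketch using kubectl's jsonpath output (worker_1_ip is a placeholder):

# Grab the randomly assigned NodePort and request the Nginx welcome page
NGINX_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I "http://worker_1_ip:${NGINX_PORT}"

A response of HTTP/1.1 200 OK means the service is routing traffic to the pod correctly.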

To test that everything is working, visit http://worker_1_ip:nginx_port or http://worker_2_ip:nginx_port in a browser. You should see Nginx's familiar welcome page.

To remove the Nginx application, first delete the nginx service from the master node:

kubectl delete service nginx

Run the following command to make sure that the service was deleted:

kubectl get services

You will obtain the below output:

Output:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d

After that, delete the deployment using this command:

kubectl delete deployment nginx    

Finally, run the following command to check if the previous command worked right:

kubectl get deployments

The output will be:

Output:

No resources found.

With this step, you have successfully installed Kubernetes on CentOS 7.

Conclusion

In this guide, you learned how to install Kubernetes on CentOS 7 using Ansible and Kubeadm. As you may have noticed, the steps themselves are not difficult, but they demand attention: skipping a step, or even part of one, can break the whole setup. From configuring the workspace directory and Ansible inventory file to running an application on your cluster, follow every step carefully. And, as mentioned in Step 5, if the worker nodes fail to become ready even after you have done everything, there is no easy shortcut: the only remedy is to verify and rerun the commands from the previous steps, which is time-consuming and best avoided by being careful the first time.

Learning to set up a cluster yourself has its benefits too, including the freedom to deploy your own applications onto it. From here, it's worth studying concepts such as Volumes, Secrets, Ingresses, and Dockerizing applications; all of them will smooth your way toward deploying production apps. Remember that Kubernetes is a treasure trove of functionality and features: the more you learn, the more of its potential you unlock.
