Prometheus is one of the most popular monitoring tools for Docker and Kubernetes, but you will need to learn how to use it first. Being open source, Prometheus can ingest a huge amount of information from a Kubernetes cluster very quickly, which makes it suitable for complex monitoring tasks as well. Using Prometheus, you can monitor your servers, databases, and virtual machines, and check their performance across a cluster.
This guide explains how to monitor your Kubernetes applications and their environment by running a few simple commands on the cluster. Along the way, we will show you how to deploy the Prometheus server, export metrics to it, and configure it with ease. First, let's review some key Prometheus concepts.
Why Prometheus?
If you are running a large number of containers and microservices on Kubernetes, managing each one individually is a difficult task. Overlooking even one service can cause problems for your Kubernetes workloads, depending on its nature. There are numerous components to manage on Kubernetes, and the containers produce a great deal of data, so you need a solution that lets you manage these components more effectively.
Prometheus is a strong solution to this problem: it collects data from the containers and lets you move it wherever you want. Prometheus was designed to monitor Kubernetes containers and microservices, along with the applications running on a cluster. It scales well as the volume of monitored data grows, and it has become a common tool for Kubernetes environments. Let's look at the benefits of Prometheus at a glance.
- Prometheus helps you monitor and operate applications and their data in dynamic cloud environments.
- It gives you a clear view of the metrics your Kubernetes applications expose. Using the PromQL query language, you can analyze this data. PromQL makes it possible for developers of containerized applications to diagnose cluster-wide problems and solve them.
- Prometheus integrates with Alertmanager, which follows a set of rules and routing methods when sending out notifications to inform developers about the state of their clusters. Hence, you won't have to wire up an external system or API to get updates on your Kubernetes clusters.
- Prometheus supports both whitebox and blackbox monitoring. Whitebox monitoring helps you analyze internal metrics such as statistics exposed by your own code, while blackbox monitoring helps you identify, from the outside, the services that are hurting your user experience.
- Prometheus is a pull-based monitoring system: your services expose their metrics as HTTP endpoints, and Prometheus extracts the metrics by scraping those endpoints on a schedule.
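To make the pull-based model concrete, here is a minimal sketch of a /metrics endpoint using only Python's standard library. Real services normally use the official prometheus_client library instead; the metric name `demo_requests_total` is purely illustrative.

```python
# Minimal sketch of a pull-based /metrics endpoint (stdlib only).
# Prometheus would scrape this endpoint on a schedule.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

REQUEST_COUNT = 0  # a hypothetical counter your app would increment


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_response(404)
            self.end_headers()
            return
        # Prometheus text exposition format: "# HELP", "# TYPE", then samples.
        body = (
            "# HELP demo_requests_total Total requests served.\n"
            "# TYPE demo_requests_total counter\n"
            f"demo_requests_total {REQUEST_COUNT}\n"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep output quiet
        pass


def serve_once(port=0):
    """Start the exporter on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In a real deployment, Prometheus would be configured with this endpoint (or discover it via Kubernetes service discovery) and pull the counter value at each scrape interval.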
These are the biggest advantages of using Prometheus. But why have developers started to feel the need for Prometheus all of a sudden?
Two important technology changes in modern infrastructure have pushed developers toward Prometheus.
DevOps Culture
Before the rise of DevOps, monitoring was mostly about hosts, services, and networks. Over the years, Kubernetes has become more flexible and has added new features, so developers now need easier scaling techniques than ever. Metrics and app integrations are largely part of CI/CD pipelines, which can handle many application management processes on their own. With Prometheus, monitoring Kubernetes applications and tracking their metrics has become more democratized.
Containers and Kubernetes
Kubernetes handles the logging, monitoring, debugging, and high availability of containerized applications. But when you have a large number of software entities, addresses, and microservices, tracking all of their metrics becomes harder, and older monitoring tools are not of much help in this situation.
Beyond these trends, several traits of Prometheus have made it the standard solution for monitoring Kubernetes applications.
Multidimensional Data Model
The Prometheus data model is multidimensional, based on key-value label pairs. This mirrors how Kubernetes organizes its own metadata with labels, which makes it a natural fit for identifying metrics precisely and querying them with the Prometheus query language.
The Easy Format and Protocols
Defining and reading Prometheus metrics is not hard at all. In fact, the format is simple and self-explanatory: the metrics are human-readable plain text. Prometheus metrics are published over HTTP, and you can inspect them from your browser; for example, the node exporter serves its metrics on port 9100 at the /metrics path (e.g. http://localhost:9100/metrics).
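To illustrate how self-explanatory the format is, here is a hedged sketch: a few sample lines mimicking what a node exporter serves, plus a tiny parser that extracts metric names and values. The exact metrics on your host will differ, and this parser is deliberately simplistic (it assumes no spaces inside label values).

```python
# A sample of the Prometheus text exposition format, and a tiny
# parser that extracts (metric, value) pairs from it.
SAMPLE = """\
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.6
node_cpu_seconds_total{cpu="0",mode="user"} 234.5
# HELP node_memory_MemFree_bytes Memory free in bytes.
# TYPE node_memory_MemFree_bytes gauge
node_memory_MemFree_bytes 1.073741824e+09
"""


def parse_samples(text):
    """Return a list of (metric_with_labels, float_value) pairs,
    skipping the human-readable # HELP / # TYPE comment lines."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The value is the token after the last space on the line.
        name, _, value = line.rpartition(" ")
        samples.append((name, float(value)))
    return samples
```

The `# HELP` and `# TYPE` comment lines are what make the output readable without any tooling; the sample lines themselves carry the metric name, optional labels in braces, and the value.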
Service Discovery
Prometheus servers can automatically discover the targets from which application data should be scraped, and discovery can be configured based on the metadata of Kubernetes objects. As a result, applications and their microservices keep being scraped correctly even as the cluster changes around them.
Modular and Highly-Available Components
Different components of Prometheus perform different tasks in Kubernetes clusters, and each can be deployed and operated on its own. This modularity helps your monitoring scale along with the cluster without unnecessary redundancy.
Having learned about Prometheus, we now know that it scrapes metrics from Kubernetes' containerized applications. However, what challenges should you expect when adopting it?
What Are the Challenges in Prometheus?
Prometheus is a solid cloud native technology that helps you manage the microservices in a Kubernetes cluster in a more businesslike manner. However, to work with this tool, you need to learn its features and operating techniques in some depth. Prometheus does much more than monitor Kubernetes infrastructure; for example, it can diagnose performance problems in the cluster and track long-term trends. But Prometheus also comes with challenges, and if you want to work with it, you need to be aware of them. For instance, you may not achieve true observability of your cluster applications with Prometheus alone. To fully benefit from Prometheus, you have to consider the constraints it brings.
Figure Out the Important Metrics of Your Containerized Applications
You don't want to take an "instrument everything" approach the first time you use the Prometheus server on Kubernetes. Before you install Prometheus on your cluster, decide which metrics matter most. (Weave Cloud, on the other hand, allows you to instrument any metric on Prometheus and host it there.) Prometheus instrumentation is divided into two stages:
Data exporters: The Prometheus open source community provides many ready-made exporters for popular services, including MySQL and Redis. These exporters let you publish metrics automatically once you install Prometheus or register for Weave Cloud.
Client libraries: With the client libraries, you can define custom metrics inside your own code for Prometheus to scrape on Kubernetes.
The RED methodology helps you determine which metrics are important: for each service, you measure the Rate of requests, the number of Errors, and the Duration of requests. Instrumenting your code along these lines tells you what your users actually experience from your containerized applications in Kubernetes far more directly than old host-centric monitoring does. To learn more about the RED methodology in Prometheus, check out our other relevant guides.
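As a hedged sketch, the three RED signals might look like the following PromQL queries. The metric names (`http_requests_total`, `http_request_duration_seconds`) are illustrative conventions, not guaranteed names; your instrumentation determines what is actually available.

```promql
# Rate: requests per second, per job (metric names are illustrative)
sum(rate(http_requests_total[5m])) by (job)

# Errors: fraction of requests returning a 5xx status
  sum(rate(http_requests_total{status=~"5.."}[5m])) by (job)
/ sum(rate(http_requests_total[5m])) by (job)

# Duration: 95th-percentile latency from a histogram metric
histogram_quantile(0.95,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le, job))
```

Queries of this shape are typically used as dashboard panels or alerting rules, one triple per user-facing service.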
We will learn more about how Prometheus works in the next section, and after that, how to install the tool on Kubernetes so that you can monitor your applications easily.
Monitor A Kubernetes Cluster with Prometheus
Prometheus uses the configuration in its deployment file to scrape targets, that is, to send HTTP requests to them. The response to each scrape, with the metrics it carries, is stored in the Prometheus server's custom time-series database. This database can handle a huge load of data without struggle, and it allows developers to monitor multiple sources connected to a single server at the same time.
However, the data needs to be clean and well formatted for Prometheus to read it. Prometheus can collect data directly from an app's client libraries, or automatically by using a data exporter. Most of the time Prometheus uses an exporter to collect data that it has no direct access to; kernel metrics are an example of this type of data. An exporter is a piece of software that runs next to your containerized application: it accepts HTTP requests from Prometheus, makes sure the data is in the supported format, and then serves that data to the Prometheus server.
Since there is an exporter next to the applications in a Kubernetes cluster, they can easily provide the required data to the Prometheus server. However, Prometheus doesn't know in advance where to find this data, so we need to tell it where to look. Prometheus identifies the targets it will scrape with the help of service discovery.
When you use Kubernetes, you already know that clusters come with many great features: labels, annotations, and other convenient mechanisms that let you keep track of changes to applications and their status. That's why Prometheus usually relies on the Kubernetes API to find the targets it should scrape.
The Kubernetes service-discovery roles that help Prometheus find targets are node, endpoints, service, pod, and ingress. Prometheus uses the node exporter to measure the memory, disk space, CPU usage, and network bandwidth of each machine. Moreover, the cAdvisor exporter built into the kubelet helps Prometheus expose cgroup metrics with ease. Once the data is collected, the PromQL query language lets you analyze it and feed it to a graphical interface such as Grafana.
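As a hedged illustration of what node exporter data looks like in PromQL, the queries below use standard node exporter metric names; the exact label sets depend on your exporter version and configuration.

```promql
# Memory available on each node, from the node exporter
node_memory_MemAvailable_bytes

# Fraction of CPU busy per node, derived from idle-time counters
1 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance)
```

Queries like these are the typical building blocks of node-level Grafana dashboards.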
Now that you know how Prometheus works, it's time to see what you need in order to monitor Kubernetes with Prometheus.
Prerequisites
You will need the following items to be able to monitor Kubernetes with Prometheus:
- A Kubernetes cluster
- The kubectl command-line tool, installed and properly configured on your client machine.
How to Install Prometheus on Kubernetes?
YAML (YAML Ain't Markup Language) files can be used to install Prometheus on a Kubernetes cluster. A YAML file contains important cluster information, such as configurations, permissions, and services, which allows Prometheus to access the cluster and scrape its elements. The biggest advantage of YAML files is that they can be easily edited and reused, and ready-made files of this kind are available in online GitHub repositories.
So, the first step of installing Prometheus on Kubernetes involves creating a monitoring namespace.
How to Create a Monitoring Namespace?
Every Kubernetes resource belongs to a namespace, and your system uses the default namespace if you don't specify one. In this article we will create a dedicated monitoring namespace to keep the monitoring components organized. The namespace's name must be a DNS-compatible label; we will simply call it monitoring.
Creating a namespace helps you retrieve metrics from the Kubernetes API with ease and lets Prometheus start its monitoring process. There are two ways to create the namespace:
- You can run the "kubectl create namespace monitoring" command on your kubectl command line.
- You can create and apply a YAML file such as this:
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
The latter approach is more convenient, since you can reuse the same YAML file whenever needed. A simple command applies the file to your cluster:
kubectl apply -f monitoring.yml
Whichever option you use to create the namespace, you can list all the namespaces in the cluster with the following command:
kubectl get namespaces
Once you create a namespace, the next steps are to configure the Prometheus deployment file.
How to Configure the Prometheus Deployment File?
Follow this section carefully to learn how to set up Prometheus scraping on a Kubernetes cluster and its data. You need to apply each YAML file in sequence for this to work. To apply a YAML file, use the following command:
kubectl apply -f [name_of_file].yml
However, we are going to show you how to put all the Kubernetes elements in a single YAML file before applying it. The Prometheus YAML file tells kubectl what to submit to the Kubernetes API server. You will have to make sure that the YAML file contains the following information:
- Permissions that enable Prometheus to obtain all the nodes and pods in the Kubernetes cluster.
- The Prometheus ConfigMap, which holds all the information about the elements Prometheus should scrape.
- Deployment instructions for Prometheus.
- Access to the Prometheus interface through a service.
Cluster Role Binding
Namespaces limit the permissions of default roles, so if you want to retrieve cluster-wide data, you should give Prometheus access to all the data in the cluster. A basic YAML file that grants the needed cluster-wide permissions should have the following elements:
The ClusterRole definition: each rule lists the verbs that define what the role may do within the given API groups.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
The ServiceAccount: if you don't create a service account, there is nothing to bind the role to:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
The ClusterRoleBinding: now bind the ServiceAccount to the ClusterRole you created in the previous step.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
By adding this information to your YAML file, you have given Prometheus the access it needs from the monitoring namespace. Now, let's have a look at the Prometheus configuration file.
Prometheus ConfigMap
Every element in the Kubernetes cluster comes with its own instructions for initializing the scraping process in Prometheus, and you need to tune those instructions to your monitoring strategy and cluster configuration. The sections below show how to customize the ConfigMap in Prometheus.
Global Scrape Rules
apiVersion: v1
data:
  prometheus.yml: |
    global:
      scrape_interval: 10s
Scrape nodes: The node role is a service-discovery mechanism that finds the nodes that make up your Kubernetes cluster. The kubelet runs on each of these nodes and exposes the information it holds.
Scrape kubelet:
scrape_configs:
- job_name: 'kubelet'
  kubernetes_sd_configs:
  - role: node
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true  # Required with Minikube.
Scrape cAdvisor (container-level information)
The kubelet job above only yields information about the kubelet itself, not about the containers it runs. For container information you need an exporter, and cAdvisor is already built into the kubelet, so Prometheus can collect container information simply by scraping the kubelet with metrics_path: /metrics/cadvisor.
- job_name: 'cadvisor'
  kubernetes_sd_configs:
  - role: node
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true  # Required with Minikube.
  metrics_path: /metrics/cadvisor
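Once the cadvisor job is scraping, container-level data becomes queryable. The queries below use standard cAdvisor metric names as an illustration; the exact label names (such as `pod`) vary with your Kubernetes and cAdvisor versions.

```promql
# Working-set memory per pod, from cAdvisor metrics
sum(container_memory_working_set_bytes) by (pod)

# Per-pod CPU usage in cores over the last 5 minutes
sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)
```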
Now, move on to the next step to scrape the API servers in your Kubernetes clusters.
Scrape APIServer
You can target the API server through the endpoints role, keeping only its default/kubernetes/https endpoint:
- job_name: 'k8apiserver'
  kubernetes_sd_configs:
  - role: endpoints
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true  # Required if using Minikube.
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https
Next, follow the steps below.
Scrape Pods for Kubernetes Services (Excluding the API Server)
Now, scrape the pods backing all the Kubernetes services, while excluding the API server metrics:
- job_name: 'k8services'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_namespace
    - __meta_kubernetes_service_name
    action: drop
    regex: default;kubernetes
  - source_labels:
    - __meta_kubernetes_namespace
    regex: default
    action: keep
  - source_labels: [__meta_kubernetes_service_name]
    target_label: job
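It can help to see how the keep/drop rules above actually decide a target's fate: Prometheus joins the values of the source_labels with ";" and matches the result against the regex, fully anchored. The sketch below emulates just that decision in Python; it is a conceptual model, not Prometheus code.

```python
# Sketch of Prometheus "keep"/"drop" relabel semantics: source label
# values are joined with ";" and matched (fully anchored) against
# the rule's regex.
import re


def relabel_survives(target_labels, source_labels, regex, action):
    """Return True if the target survives this relabel rule."""
    joined = ";".join(target_labels.get(l, "") for l in source_labels)
    matched = re.fullmatch(regex, joined) is not None
    if action == "keep":
        return matched
    if action == "drop":
        return not matched
    raise ValueError(f"unsupported action: {action}")


# The API server endpoint is dropped by the first k8services rule:
apiserver = {
    "__meta_kubernetes_namespace": "default",
    "__meta_kubernetes_service_name": "kubernetes",
}
survives = relabel_survives(
    apiserver,
    ["__meta_kubernetes_namespace", "__meta_kubernetes_service_name"],
    "default;kubernetes",
    "drop",
)
print(survives)  # False
```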
Next, move on to the next step.
Pod Roles
Use the container names as job labels, and discover only the pod ports named metrics:
- job_name: 'k8pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    regex: metrics
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_name]
    target_label: job
kind: ConfigMap
metadata:
  name: prometheus-config
Keep following the steps below.
Configure the ReplicaSet
Define the number of replicas you require for the Prometheus monitoring and apply a template that will define the replica set of the pods:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: prom/prometheus:v2.1.0
        ports:
        - containerPort: 9090
          name: default
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-config
Next, define the nodePort.
Define the nodePort
To reach the data Prometheus has collected while it is running on the Kubernetes cluster, add the following Service definition to your prometheus.yml file:
kind: Service
apiVersion: v1
metadata:
  name: prometheus
spec:
  selector:
    app: prometheus
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 30909
Finally, apply the Prometheus YAML file.
Apply the prometheus.yml File
The configuration map defined in the previous section provides configuration data that is distributed to every pod in the deployment.
kubectl apply -f prometheus.yml
Use the node URL and the node port from the prometheus.yml file to access Prometheus from your browser. With the Service above, for example, the address would be http://<node-ip>:30909.
Now, you can easily begin monitoring your Kubernetes applications from Prometheus.
Conclusion
We have now successfully installed Prometheus monitoring on our Kubernetes cluster, and the most important benefit is that you can easily track the overall health, behavior, and performance of your system. Regardless of the complexity of your Kubernetes operations, you can maintain the microservices in your clusters using Prometheus's metrics-based monitoring.
You can visit the Prometheus community page for any help you might need with the open source version of Prometheus and with monitoring Kubernetes. If you need assistance monitoring your containerized applications with Prometheus, you can also ask us for help. Additionally, check out our other articles on Kubernetes and Kubernetes clusters to learn more about the technology.