Kubernetes is an open-source platform for operating and managing containerized applications. Kubernetes makes it easy to automate software deployments, maintain containerized apps, and scale clusters.
It offers several deployment options for running containers, one of which is the DaemonSet. In this blog, we’ll discuss what DaemonSets are, what they do, and how to create them.
What is a Kubernetes DaemonSet?
Kubernetes ensures that running applications have sufficient resources, operate consistently, and have high availability during their lifecycle. A DaemonSet overcomes Kubernetes’ scheduling limits by ensuring that a given app is distributed throughout the cluster’s nodes.
A DaemonSet ensures that a replica of a Pod runs on all (or some) of the nodes. As nodes are added to the cluster, Pods are deployed to them; when a node is removed from the cluster, its Pods are garbage collected. Deleting a DaemonSet also deletes the Pods it produced.
A DaemonSet is usually described in a YAML file, whose fields give you fine-grained control over the Pod deployment process. Using labels to start certain Pods only on a subset of nodes is a good example.
Why Use a DaemonSet?
By deploying Pods that perform maintenance tasks and provide support services on each node, DaemonSets can increase cluster efficiency. Specific background processes, Kubernetes monitoring apps, and other agents must be present throughout the cluster to deliver adequate, up-to-date services. DaemonSets are well suited for long-running services, such as:
- Log collection
- Node resource monitoring
- Cluster storage
- Infrastructure-related Pods (system operations)
A DaemonSet commonly deploys a single daemon type across all nodes. Multiple DaemonSets can manage the same daemon type by using distinct labels. In Kubernetes, labels define deployment rules based on specific node attributes.
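For instance, two DaemonSets running the same daemon could target different node pools through a nodeSelector in the Pod template. A sketch of such a fragment, where the disktype label and its values are hypothetical and not taken from this article:

```yaml
# Fragment of a DaemonSet pod template: this DaemonSet runs only on
# nodes labelled disktype=ssd; a second DaemonSet running the same
# daemon could target disktype=hdd with different settings.
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```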
How to Create a DaemonSet?
1. Configuring DaemonSets
DaemonSets, like everything else in Kubernetes, can be configured using a YAML file:
apiVersion: apps/v1
kind: DaemonSet
A YAML file is composed of the following sections:
- apiVersion (required)
- kind (required)
- metadata (required)
- spec.template (required): Pod description for the pods you want to deploy on all the nodes.
- spec.selector (required): A selector for the Pods managed by the DaemonSet. It must match one of the labels defined in the Pod template. In this example, a selector for name: my-daemonset-container was generated inside the template and reused in the selector. The selector cannot be changed after the DaemonSet has been created without orphaning the Pods the DaemonSet has already produced.
- spec.template.spec.nodeSelector: Runs the Pods only on the subset of nodes that match the selector.
- spec.template.spec.affinity: Runs the Pods only on the subset of nodes that match the affinity rules.
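Putting these fields together, a sketch of the daemonset-node-exporter.yaml file used in the following steps might look like this. The container image and port are assumptions for illustration, not values prescribed by this article:

```yaml
# daemonset-node-exporter.yaml -- a minimal sketch of a DaemonSet manifest.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      name: my-daemonset-container   # must match the template label below
  template:
    metadata:
      labels:
        name: my-daemonset-container
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter   # assumed image
          ports:
            - containerPort: 9100     # assumed port
```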
2. Creating DaemonSets
After you’ve finished configuring your cluster, create the DaemonSet by running the following command:
kubectl apply -f daemonset-node-exporter.yaml
kubectl confirms the creation of the DaemonSet by displaying a message on the screen.
3. Confirming the state of the DaemonSet
After you’ve submitted the daemonset-node-exporter DaemonSet, use the describe command to verify its present state:
kubectl describe daemonset node-exporter -n monitoring
The output contains basic DaemonSet info and shows that the Pods have been deployed to all of the available nodes.
4. Listing all the running pods
You can also validate this by executing the following command to display all operating pods:
kubectl get pod -o wide -n monitoring
The DaemonSet will now automatically deploy the node-exporter Pod to every newly created node.
Communicating with pods created by DaemonSet
Pods created by a DaemonSet can be communicated with in a variety of ways. Here are a few options:
1. Push
Pods in the DaemonSet are configured to push data to other services, such as a monitoring service or a stats database. They have no clients.
2. NodeIP & Known Port
Pods in the DaemonSet use a hostPort, so the Pods are reachable via the node IPs. Clients know the list of node IPs by some means, and know the port by convention.
3. DNS
Build a headless service with the same pod selector, then discover the DaemonSet’s Pods using the endpoints resource or retrieve multiple A records from DNS.
4. Service
Build a service with the same pod selector, and use the service to reach a daemon on a random node. (There is no way to reach a specific node.)
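As a sketch of the headless-service approach, assuming the name: my-daemonset-container selector label used earlier and a hypothetical port 9100:

```yaml
# Headless Service (clusterIP: None) matching the DaemonSet's pod selector.
# DNS lookups for this service return the individual pod IPs rather than
# a single virtual IP, so clients can reach each daemon directly.
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  clusterIP: None
  selector:
    name: my-daemonset-container
  ports:
    - port: 9100   # assumed port
```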
How to Update a DaemonSet?
In Kubernetes releases prior to version 1.6, the OnDelete strategy was the only way to update Pods managed by a DaemonSet. With OnDelete, each Pod must be deleted manually before the DaemonSet can create a new Pod with the updated configuration. Newer Kubernetes versions use rolling updates by default. The update mechanism is specified in the spec.updateStrategy.type field, which defaults to RollingUpdate.
The rolling update strategy deletes outdated Pods and replaces them with new ones in a fully automated, managed process. However, deleting and recreating Pods carries a risk of unavailability and extended downtime. Two parameters let you control the update process:
- minReadySeconds: The value is expressed in seconds, and specifying a relevant time range ensures that the Pod remains healthy before the system moves on to another Pod.
- updateStrategy.rollingUpdate.maxUnavailable: Lets you choose the maximum number of Pods that can be updated at once. The right value depends heavily on the type of application being delivered; to achieve high availability, it is vital to strike a balance between speed and safety.
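Both parameters live in the DaemonSet spec. A minimal sketch, with illustrative values:

```yaml
# Fragment of a DaemonSet spec tuning the rolling update behavior.
spec:
  minReadySeconds: 30        # a new Pod must stay healthy for 30s before the next Pod is replaced
  updateStrategy:
    type: RollingUpdate      # the default strategy
    rollingUpdate:
      maxUnavailable: 1      # update at most one Pod at a time
```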
Monitor the availability of the DaemonSet rolling update with the kubectl rollout command:
kubectl rollout status ds/daemonset-node-exporter -n monitoring
The system monitors DaemonSet changes and reports the current rollout status of the node-exporter DaemonSet.
How Daemon Pods are Scheduled
Typically, the Kubernetes scheduler chooses the node a Pod runs on. Pods created by the DaemonSet controller, however, already have their node selected (.spec.nodeName is specified when the Pod is created). Therefore:
- The DaemonSet controller ignores a node’s unschedulable field.
- The DaemonSet controller can create Pods even when the scheduler is not running, which helps with cluster bootstrapping.
Daemon Pods respect taints and tolerations; however, they are created with NoExecute tolerations for the node.alpha.kubernetes.io/notReady and node.alpha.kubernetes.io/unreachable taints with no tolerationSeconds, so they are never evicted for these conditions.
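Expressed in a Pod spec, those automatically added tolerations look roughly like this, sketched from the taint keys named above:

```yaml
# NoExecute tolerations added to daemon Pods; with no tolerationSeconds,
# the Pods stay on the node even when these taints appear.
tolerations:
  - key: node.alpha.kubernetes.io/notReady
    operator: Exists
    effect: NoExecute
  - key: node.alpha.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
```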
Updating a DaemonSet
If node labels change, the DaemonSet promptly adds Pods to newly matching nodes and removes Pods from nodes that no longer match. You can modify the Pods a DaemonSet creates, although Pods do not allow all fields to be updated. The DaemonSet controller also uses the original template the next time a node is created (even if the node has the same name).
A DaemonSet can be deleted. If you pass --cascade=false to kubectl, the Pods are left behind on the nodes. You can then create a new DaemonSet with a different template. The new DaemonSet recognizes all existing Pods as having matching labels, and it will not modify or delete them despite the mismatch in the Pod template. To force new Pods to be created, you must delete the Pods or the nodes. On Kubernetes version 1.6 or later, you can perform a rolling update on a DaemonSet.
Deleting a DaemonSet is straightforward: simply use the kubectl delete command with the DaemonSet. This removes the DaemonSet as well as all of its underlying Pods.
To delete only the DaemonSet and leave the Pods in place, pass the --cascade=false flag to kubectl delete.
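For example, assuming the node-exporter DaemonSet from the earlier steps (these commands require a running cluster):

```shell
# Delete the DaemonSet together with its Pods
kubectl delete ds node-exporter -n monitoring

# Delete only the DaemonSet; the Pods stay running on the nodes
kubectl delete ds node-exporter -n monitoring --cascade=false
```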
By ensuring that applications are distributed across the cluster’s nodes, a DaemonSet overcomes Kubernetes’ scheduling limitations. DaemonSets are used to collect logs, monitor node resources, provide cluster storage, and more.
With these settings, monitoring, logging, and storage services can be implemented easily, improving the performance and reliability of the Kubernetes cluster and its containers.