Kubernetes Selector


Kubernetes is a container orchestration tool. Its primary functions are deploying containers, scaling them up and down, and balancing load across them. Kubernetes is not a containerization platform itself; it manages containers built by one, and labels and selectors are how all of its resources are organized and controlled.

A Kubernetes selector lets us select Kubernetes resources based on the fields and labels assigned to a group of nodes or pods. Selectors help retrieve details about a group of Kubernetes resources, or deploy a pod or group of pods to a particular group of nodes. In short, they filter Kubernetes objects that share the same labels.

Labels

Labels are key/value pairs attached to objects such as services, replication controllers, and pods. We can use these labels to identify those objects. Labels can be added at object creation and modified at runtime.
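For example, a Pod can carry labels in its metadata. A minimal sketch (the name and the label keys and values here are illustrative, not from any particular cluster):

apiVersion: v1
kind: Pod
metadata:
  name: label-demo         # illustrative name
  labels:
    environment: production
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx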

Selectors

Labels do not provide uniqueness; many objects can carry the same label. The core grouping primitive in Kubernetes is the label selector, which lets users select a set of objects.

Difference between Selectors and Labels

Labels are key/value pairs attached to objects such as pods. Labels identify attributes that are meaningful and relevant to users, but they do not directly imply semantics to the core system. You can use labels to organize and select subsets of objects.

A label selector allows the client/user to identify a set of objects, especially for group selection. Label selectors depend on labels to select a set of resources such as pods. A Deployment, for example, chooses the set of pods it manages using the label selector in its specification.
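A minimal sketch of this relationship (the name web and the image are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web              # the Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: web            # the Pod template must carry the selected label
    spec:
      containers:
        - name: web
          image: nginx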

Types of Kubernetes Selectors

In Kubernetes, there are three types of selectors.

  1. Label Selectors
  2. Field Selectors
  3. nodeSelectors

1. Label selectors

Labels are not unique, unlike UIDs and names. We expect that many objects will carry the same label.

A label selector allows the client/user to identify a set of objects. The label selector is Kubernetes' core grouping primitive.

The API currently supports set-based and equality-based selectors. Multiple requirements can be combined into a single comma-separated label selector. In that case, all requirements must be met, so the comma separator acts as a logical AND (&&) operator.

The context will determine the semantics of empty and non-specified selectors. API types that use these selectors must document their validity and meaning.

For some API types, such as ReplicaSets, the label selectors of two instances must not overlap within the same namespace. Otherwise, the controller may interpret that as conflicting instructions and fail to determine how many replicas should be present.

Equality-based requirement

Equality- and inequality-based requirements allow filtering by label keys and values. All matching objects must satisfy the specified label constraints, though they may have additional labels as well. Three kinds of operators are admitted: =, ==, and !=. The first two are synonyms for equality, while the last indicates inequality. For example:

environment = production

tier != frontend

The first selects resources whose environment key equals production. The second selects resources whose tier key has a value distinct from frontend, as well as all resources without a tier key at all. One can filter for resources in production, excluding the frontend, using the comma operator: environment=production,tier!=frontend.
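These expressions can also be passed to kubectl via the -l (or --selector) flag. A quick sketch, assuming pods carrying these labels exist:

# list pods in production that are not part of the frontend tier
kubectl get pods -l environment=production,tier!=frontend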

One use scenario for equality-based label requirements is Pods specifying node selection criteria. For example, the Pod below selects nodes with the label "accelerator=nvidia-tesla-p100".

apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
    - name: cuda-test
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
  nodeSelector:
    accelerator: nvidia-tesla-p100
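For this Pod to be scheduled, at least one node must carry the matching label. A sketch of how such a label might be attached (substitute your own node name for the placeholder):

kubectl label nodes <node-name> accelerator=nvidia-tesla-p100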

Set-based requirement

Set-based label requirements permit filtering keys according to a set of values. Three kinds of operators are supported: in, notin, and exists (only the key identifier is checked). For example:

environment in (production, qa)

tier notin (frontend, backend)

partition

!partition

  • The first example selects all resources with a key equal to environment and a value equal to production or qa.
  • The second example selects all resources with a key equal to tier and a value other than frontend or backend, as well as all resources without a tier key.
  • The third example selects all resources that include a label with key partition; no values are checked.
  • The fourth example selects all resources without a label with key partition; no values are checked.

We can use the comma separator as an AND operator. So filtering resources with a partition key (no matter the value) and an environment different from qa can be achieved with partition,environment notin (qa). The set-based label selector is a general form of equality, since environment=production is equivalent to environment in (production); similarly for != and notin.

We can mix set-based requirements with equality-based requirements. For example: partition in (customerA, customerB),environment!=qa.
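With kubectl, set-based expressions need shell quoting because of the parentheses. A quick sketch, assuming pods with these labels exist:

# select pods in either the production or qa environment
kubectl get pods -l 'environment in (production, qa)'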

2. Field Selectors

Field selectors help you select Kubernetes resources based on the value of one or more resource fields. Here are some example field selector queries:

metadata.name=my-service

metadata.namespace!=default

status.phase=Pending

This kubectl command selects all Pods for which the value of the status.phase field is Running:

kubectl get pods --field-selector status.phase=Running

Note: Field selectors can be thought of as resource filters. Without any selectors or filters, all resources of the specified type are selected, which means the kubectl queries kubectl get pods and kubectl get pods --field-selector "" are equivalent.
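Field selectors can also be chained with commas, which again act as a logical AND. A sketch using fields that are valid for Pods:

# pods that are not Running and whose restart policy is Always
kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always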

3. nodeSelector

nodeSelector is the simplest form of node selection constraint. It is a field of the Pod spec that specifies a map of key-value pairs. For the Pod to be eligible to run on a given node, the node must have each of those key-value pairs as labels; it can have additional labels as well. A single key-value pair is the most common usage.

Let’s take a look at an example of nodeSelector.

Prerequisites

This example assumes you have a basic understanding of Kubernetes pods and have set up a Kubernetes cluster.

Step 1: Attach the label to your node

To get the names of your cluster's nodes, run kubectl get nodes. Next, pick the node you wish to label and run kubectl label nodes <node-name> <label-key>=<label-value> to add a label to it. For example, if the node name is 'kubernetes-foo-node-1.c.a-dilwar.internal' and the desired label is 'disktype=ssd', run kubectl label nodes kubernetes-foo-node-1.c.a-dilwar.internal disktype=ssd.

Re-running kubectl get nodes --show-labels will confirm that the label was applied successfully. To see the complete list of a node's labels, you can run kubectl describe node <node-name>.
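Putting the two verification commands together, using the example node name from above:

kubectl get nodes --show-labels
kubectl describe node kubernetes-foo-node-1.c.a-dilwar.internal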

Step 2: Add a nodeSelector field to your pod configuration

Add a nodeSelector section to the config file of the pod you wish to run. Here is an example pod configuration:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
    - name: nginx
      image: nginx

Then add a nodeSelector like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd

When you run kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml, the Pod will be scheduled on the node you attached the label to. You can verify that it worked by running kubectl get pods -o wide and looking at the "NODE" column assigned to the Pod.
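The full sequence, assuming the manifest above is saved locally as pod-nginx.yaml or fetched from that URL:

kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml
kubectl get pods -o wide   # the NODE column should show the labeled node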

Conclusion

Kubernetes selectors are used to retrieve the details of objects, but we can also perform tasks such as deleting a group of pods with a particular label attached. nodeSelector can be used to restrict pods to particular nodes, and node affinity and anti-affinity features are available if you need more control over node selection. We can also use field selectors across multiple resource types; for example, a single kubectl command can select all StatefulSets and Services that are not in the default namespace, as shown below. In short, selectors let us choose exactly the resources we need in a short time, making our work fast and efficient.
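A sketch of that multi-resource query (this form follows the Kubernetes documentation):

# StatefulSets and Services in every namespace except default
kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default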
