Even a small Kubernetes cluster may contain hundreds of Containers, Pods, Services, and other Kubernetes API objects. It quickly becomes tedious to page through screens of kubectl output to find YOUR object — labels address this problem perfectly.
The primary reasons you should use labels are to organize your objects, to filter kubectl output, and to select subsets of objects with selectors. In the rest of this article, we’ll elaborate on these benefits.
Labels and annotations are sometimes confused, and a quick look at their syntax shows why: they look identical.
"metadata": {
"labels": {
"key1" : "value1",
"key2" : "value2"
}
}
Copy
"metadata": {
"annotations": {
"key1" : "value1",
"key2" : "value2"
}
}
Copy
Fortunately, it is easy to consult the documentation to see the difference:
**"Labels **are key/value pairs that are attached to objects, such as pods. **Labels **are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. **Labels **can be used to organize and to select subsets of objects."
You can use Kubernetes **annotations **to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata.
You can use either labels or annotations to attach metadata to Kubernetes objects. **Labels **can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, **annotations **are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured.
Example labels:
"release" : "stable"
"release" : "canary"
"environment" : "dev"
"environment" : "qa"
"environment" : "production"
Example annotations:
standbyphone: 000-000 0000
developer: Neil Armstrong
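Both kinds of metadata can also be attached imperatively with kubectl. A quick sketch, assuming a Pod named `my-pod` already exists (the Pod name and values are illustrative):

```shell
# Attach an identifying label (can later be selected with -l)
kubectl label pod my-pod environment=dev

# Attach non-identifying metadata as an annotation
kubectl annotate pod my-pod developer="Neil Armstrong"

# Changing an existing label requires --overwrite
kubectl label pod my-pod environment=qa --overwrite
```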
Let’s now focus more deeply on labels and how to use them.
Before we create our own labels, let’s look at some labels that Kubernetes creates automatically.
Kubernetes automatically creates these labels on nodes:
kubernetes.io/arch
Example:
kubernetes.io/arch=amd64
kubernetes.io/os
Example:
kubernetes.io/os=linux
node.kubernetes.io/instance-type
Example:
node.kubernetes.io/instance-type=m3.medium
topology.kubernetes.io/region
Example:
topology.kubernetes.io/region=us-east-1
topology.kubernetes.io/zone
Example:
topology.kubernetes.io/zone=us-east-1c
_See the full list_ in the Kubernetes documentation on well-known labels, annotations, and taints.
These labels allow us to filter our nodes in some interesting ways:

kubectl get nodes -l 'kubernetes.io/os=linux'

kubectl get nodes -l 'node.kubernetes.io/instance-type=m3.medium'

kubectl get nodes -l 'topology.kubernetes.io/region=us-east-1'

kubectl get nodes -l 'topology.kubernetes.io/region in (us-east-1, us-west-1)'
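Node labels are useful for more than filtering: the scheduler can use them to place Pods. A minimal sketch using nodeSelector with the automatic kubernetes.io/os label (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod
spec:
  # Only schedule this Pod on nodes carrying the automatic label
  nodeSelector:
    kubernetes.io/os: linux
  containers:
  - name: app
    image: nginx
```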
Suppose we apply these labels to all our Pods:

"release" : "stable"
"release" : "canary"
"environment" : "dev"
"environment" : "qa"
"environment" : "production"

We can then filter the kubectl output as follows:

kubectl get pods -l 'environment in (production), release in (canary)'
kubectl get pods -l 'environment in (production, qa)'
kubectl get pods -l 'environment notin (qa)'
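Besides the set-based syntax above, kubectl also accepts equality-based selectors, where a comma acts as a logical AND:

```shell
# Equality-based: Pods must match BOTH labels
kubectl get pods -l environment=production,release=stable

# Inequality is also supported
kubectl get pods -l 'environment!=qa'

# Display each Pod's labels next to the regular output
kubectl get pods --show-labels
```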
In a complex environment of multiple Kubernetes clusters, many nodes, and many namespaces, it’s easy to see that the ability to filter kubectl output is a major timesaver.
In addition, Job, Deployment, ReplicaSet, and DaemonSet support set-based selectors as well:
selector:
  matchLabels:
    component: redis
  matchExpressions:
  - {key: tier, operator: In, values: [cache]}
  - {key: environment, operator: NotIn, values: [dev]}
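To see such a selector in context, here is a minimal Deployment sketch (the names and image are illustrative, not from the selector above). Note that the Pod template's labels must satisfy both matchLabels and matchExpressions, or the API server rejects the Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  replicas: 2
  selector:
    matchLabels:
      component: redis
    matchExpressions:
    - {key: tier, operator: In, values: [cache]}
  template:
    metadata:
      # These labels must satisfy the selector above
      labels:
        component: redis
        tier: cache
    spec:
      containers:
      - name: redis
        image: redis:7
```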
#devops #kubernetes #write-for-cloud-native #opa #labels