What's New in Kubernetes 1.17?

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes 1.17 promotes volume snapshots to beta, moves more 'in-tree' storage plug-ins to the CSI infrastructure, and makes cloud provider labels GA.

Kubernetes 1.17 is about to be released! This short-cycle release is focused on small improvements and house cleaning. There are implementation optimizations all over the place, new features like the promising topology aware routing, and improvements to the dual-stack support. Here is the list of what’s new in Kubernetes 1.17.

Kubernetes 1.17 – Editor’s pick:

These are the features that we find most exciting in this release (ymmv):

  • #536 Topology aware routing of services
  • #1053 Kubeadm machine/structured output
  • #1152 Avoid serializing the same object independently for every watcher
  • #563 Add IPv4/IPv6 dual-stack support
Kubernetes 1.17 core

#1053 Kubeadm machine/structured output

Stage: Alpha
Feature group: cluster-lifecycle

The most common way to deploy a Kubernetes cluster is via automated tools, like the kubeadm command, or tools that rely on it, like Terraform. The current output of kubeadm is not structured, so a small change in kubeadm can break the integration with those other tools.

This alpha feature allows you to get the output from kubeadm in machine-readable structured formats like JSON, YAML, or Go templates.

If the default output prints something like this:

$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
7vg8cr.pks5g06s84aisb27   <invalid>   2019-06-05T17:13:55+03:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Using the -o or --experimental-output flag, you can get a structured version:

$ kubeadm token list -o json
{
    "kind": "BootstrapToken",
    "apiVersion": "output.kubeadm.k8s.io/v1alpha1",
    "creationTimestamp": null,
    "token": "7vg8cr.pks5g06s84aisb27",
    "description": "The default bootstrap token generated by 'kubeadm init'.",
    "expires": "2019-06-05T14:13:55Z",
    "usages": [
        "authentication",
        "signing"
    ],
    "groups": [
        "system:bootstrappers:kubeadm:default-node-token"
    ]
}

Until the Kubernetes documentation gets updated, you can check some examples in the PR and KEP pages for this feature.

#1143 Clarify use of node-role labels within Kubernetes and migrate old components

Stage: Alpha
Feature group: architecture

The initial goal for the node-role.kubernetes.io namespace for labels was to provide a grouping convention for cluster users. These labels are optional, only meant for displaying cluster information in management tools, and similar non-critical use cases.

Against the usage guidelines, some core and related projects started depending on them to vary their behaviour, which could lead to problems in some clusters.

This feature summarizes the work done to clarify the proper use of the node-role labels, so they won’t be misused again, and it removes the dependency on them where needed.

This feature implies a change of behaviour in some cases, which can be reversed with the LegacyNodeRoleBehavior and NodeDisruptionExclusion feature gates. You can learn more in the Kubernetes documentation.
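
If the new behaviour causes problems in your cluster, here is a sketch of reverting it via the feature gates named above (set on the affected control plane components; the exact defaults may vary by version):

```shell
# Sketch: keep the legacy node-role behaviour and disable the new
# node-disruption-exclusion logic on the controller manager.
kube-controller-manager \
  --feature-gates=LegacyNodeRoleBehavior=true,NodeDisruptionExclusion=false
```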

#382 Taint node by Condition

Stage: Graduating to Stable
Feature group: scheduling

In beta since the 1.12 Kubernetes release, this feature finally graduates to stable.

The Taint node by condition feature causes the node controller to dynamically create taints corresponding to observed node conditions. The user can choose to ignore some of the node’s problems (represented as Node conditions) by adding appropriate pod tolerations.
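
For example, a pod that should remain schedulable on a node reporting memory pressure can tolerate the corresponding condition taint. A minimal sketch (pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pressure-tolerant
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  # Tolerate the taint the node controller adds for the MemoryPressure condition
  - key: node.kubernetes.io/memory-pressure
    operator: Exists
    effect: NoSchedule
```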

#548 Schedule DaemonSet Pods by kube-scheduler

Stage: Graduating to Stable

Feature group: scheduling

Enabled by default since the 1.12 Kubernetes release, this feature finally graduates to stable.

Instead of being scheduled by the DaemonSet controller, DaemonSet pods are scheduled by the default scheduler. This means that their pods are created in the Pending state, and the scheduler considers pod priority and preemption for them.

#495 Configurable Pod Process Namespace Sharing

Stage: Graduating to Stable

Feature group: node

In beta since the 1.12 Kubernetes release, this feature finally graduates to stable.

Users can configure containers within a pod to share a common PID namespace by setting an option in the PodSpec. More on this in the Kubernetes documentation: share process namespace.
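
A minimal sketch (names and images are hypothetical): with shareProcessNamespace enabled, processes in one container are visible to, and can be signalled from, the other containers in the pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid
spec:
  shareProcessNamespace: true  # all containers share a single PID namespace
  containers:
  - name: app
    image: nginx
  - name: debugger
    image: busybox  # can see and signal the processes of "app"
    command: ["sleep", "3600"]
```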

#589 Move frequent Kubelet heartbeats to Lease API

Stage: Graduating to Stable

Feature group: node

Node leases complement the existing NodeStatus heartbeat, introducing a lighter, more scalable heartbeat indicator.
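
Each node now owns a Lease object in the kube-node-lease namespace and renews it frequently, instead of sending a full NodeStatus update on every heartbeat. You can inspect the leases with kubectl:

```shell
kubectl get leases -n kube-node-lease
```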

Network

#563 Add IPv4/IPv6 dual-stack support

Stage: Major Change to Alpha

Feature group: network

This feature summarizes the work done to natively support dual-stack mode in your cluster, so you can assign both IPv4 and IPv6 addresses to a given pod.

In 1.17 there are three main improvements related to this feature:

  • kube-proxy now supports dual stack with EndpointSlices and IPVS.
  • Now you can set podIPs using the downward API, through the status.podIPs field (https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20180612-ipv4-ipv6-dual-stack.md#container-environment-variables).
  • --node-cidr-mask-size-ipv6 now defaults to /64, instead of mirroring the /24 value from IPv4.
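
A sketch of the new downward API usage (the environment variable name is hypothetical; the field name comes from the dual-stack KEP):

```yaml
# Fragment of a container spec
env:
- name: MY_POD_IPS
  valueFrom:
    fieldRef:
      fieldPath: status.podIPs  # the pod's IPs, e.g. one IPv4 and one IPv6 address
```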

Dual stack is a big project, so expect new improvements in the following Kubernetes releases before this feature leaves the alpha stage.

#536 Topology aware routing of services

Stage: Graduating to Alpha

Feature group: network

Optimizing network traffic is essential to improve performance (and reduce costs) in complex Kubernetes deployments. Service Topology optimizes traffic by keeping it between pods that are close to each other.

This feature is enabled by the ServiceTopology feature gate:

--feature-gates="ServiceTopology=true"

Configuration is done at the Service level via the topologyKeys setting, which contains an ordered list of node label keys. Traffic will only be routed to endpoints on nodes whose label values match those of the originating node:

["kubernetes.io/hostname", "topology.kubernetes.io/zone", "*"]

In this example, traffic will be sent to endpoints on the same node if possible; if not, it will fall back to endpoints within the same zone. As a last resort, it will use any available endpoint.
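
Putting it together, a sketch of a Service using topologyKeys (names and labels are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
  topologyKeys:
  - "kubernetes.io/hostname"       # prefer endpoints on the same node
  - "topology.kubernetes.io/zone"  # then endpoints in the same zone
  - "*"                            # finally, any available endpoint
```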

#752 EndpointSlice API

Stage: Graduating to Beta

Feature group: network

The new EndpointSlice API splits a Service's endpoints across several EndpointSlice resources. This solves the scalability problems the current API has with large Endpoints objects. The new API is also designed to support other future features, like multiple IPs per pod.
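
Slices are linked back to their Service by a label, so you can list the slices backing a given Service (the service name is hypothetical):

```shell
kubectl get endpointslices -l kubernetes.io/service-name=my-service
```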

#980 Finalizer Protection for Service LoadBalancers

Stage: Graduating to Stable

Feature group: network

There are various corner cases where cloud resources are orphaned after the associated Service is deleted. Finalizer Protection for Service LoadBalancers was introduced to prevent this from happening.

Kubernetes 1.17 API

#1152 Avoid serializing the same object independently for every watcher

Stage: Graduating to Stable

Feature group: api-machinery

This optimization of kube-apiserver improves performance when many watchers are observing the same set of objects. The problem manifests in clusters with several thousand nodes, where simple operations, like creating an Endpoint object, can take several seconds to complete.

The problem was traced to object serialization: the old implementation serialized each object once per watcher. The new implementation uses a cache to serialize objects only once for all watchers.

#575 Defaulting of Custom Resources

Stage: Graduating to Stable

Feature group: api-machinery

Defaulting and pruning: two features aiming to facilitate the JSON handling and processing associated with CustomResourceDefinitions. Defaulting fills omitted fields with values declared in the CRD's validation schema.
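
A sketch of defaulting in a CRD's validation schema (a fragment of a hypothetical apiextensions.k8s.io/v1 CustomResourceDefinition):

```yaml
schema:
  openAPIV3Schema:
    type: object
    properties:
      spec:
        type: object
        properties:
          replicas:
            type: integer
            default: 1  # filled in by the API server when the field is omitted
```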

#956 Add Watch Bookmarks support

Stage: Graduating to Stable

Feature group: api-machinery

The “bookmark” watch event is used as a checkpoint, indicating that all objects up to a given resourceVersion requested by the client have already been sent. The API server can then skip re-sending those events, avoiding unnecessary processing on both sides.

Storage

#177 Snapshot / Restore Volume Support for Kubernetes (CRD + External Controller)

Stage: Graduating to Beta

Feature group: storage

In alpha since the 1.12 Kubernetes release, this feature finally graduates to beta.

Just as the PersistentVolume and PersistentVolumeClaim API resources are used to provision volumes for users and administrators, the VolumeSnapshotContent and VolumeSnapshot API resources are used to create volume snapshots. Read more about volume snapshots in the Kubernetes documentation.
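
A minimal sketch of requesting a snapshot of an existing PersistentVolumeClaim (the names and snapshot class are hypothetical):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: my-pvc  # the PVC to snapshot
```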

#554 Dynamic Maximum volume count

Stage: Graduating to Stable

Feature group: storage

In beta since the 1.12 Kubernetes release, this feature finally graduates to stable.

When the dynamic volume limits feature is enabled, Kubernetes automatically determines the node type and supports the appropriate number of attachable volumes for each node and vendor.

You can read more about dynamic volume limits in the Kubernetes documentation.

#557 Kubernetes CSI topology support

Stage: Graduating to Stable

Feature group: storage

Topology allows Kubernetes to make intelligent decisions when dynamically provisioning volumes, by getting scheduler input on the best place to provision a volume for a pod. To achieve feature parity with in-tree storage plugins, the topology capabilities are now implemented for out-of-tree CSI storage plugins.

#559 Provide environment variables expansion in sub path mount

Stage: Graduating to Stable

Feature group: storage

Systems often need to define their mount paths depending on environment variables. The previous workaround was to create a sidecar container with symbolic links. To avoid this boilerplate, the subPathExpr field allows environment variables to be expanded in the subPath of a volume mount.
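
A sketch of subPathExpr, mounting a per-pod subdirectory (names and paths are hypothetical):

```yaml
# Fragment of a pod spec
containers:
- name: app
  image: busybox
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - name: workdir
    mountPath: /logs
    subPathExpr: $(POD_NAME)  # expands the POD_NAME environment variable
volumes:
- name: workdir
  hostPath:
    path: /var/log/pods
```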

#625 In-tree storage plugin to CSI Driver Migration

Stage: Graduating to Beta

Feature group: storage

Storage plugins were originally in-tree, inside the Kubernetes codebase, increasing the complexity of the base code and hindering extensibility. Moving all this code to loadable plugins will reduce development costs and make the codebase more modular and extensible.

Other Kubernetes 1.17 features

#837 Promote Cloud Provider Labels to GA

Stage: Graduating to Stable

Feature group: cloud-provider

When node and volume resources are created, three labels are applied to them to provide information related to the cloud provider. After being in the beta stage for some time, these labels are being promoted to stable. This requires a naming change; the existing labels:

  • beta.kubernetes.io/instance-type
  • failure-domain.beta.kubernetes.io/zone
  • failure-domain.beta.kubernetes.io/region

will be renamed to remove the ‘beta’ prefix:

  • node.kubernetes.io/instance-type
  • topology.kubernetes.io/zone
  • topology.kubernetes.io/region

The old labels are marked as deprecated and will be completely removed in Kubernetes 1.21.
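
Manifests that select nodes by these labels should move to the new names before 1.21, e.g. (the zone value is hypothetical):

```yaml
# Fragment of a pod spec
nodeSelector:
  topology.kubernetes.io/zone: us-east-1a  # was failure-domain.beta.kubernetes.io/zone
```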

#960 Behavior-driven conformance testing

Stage: Graduating to Stable
Feature group: architecture

This feature summarizes the efforts to improve the testing suite for the Kubernetes API. The goal is to not only check what API endpoints are tested, but also up to what extent the behavior of each endpoint is covered by the tests.

If you are interested in testing tools, check out how the behaviours have been defined and the plan to migrate the current tests to the new format.

#714 Break apart the kubernetes test tarball

Stage: Graduating to Stable

Feature group: testing

The kubernetes-test.tar.gz file included in the Kubernetes release artifacts includes test resources, both portable and platform specific. This file has been slowly growing, reaching up to 1.5GB, which complicates and slows down the testing process.

From now on, this file will be split into seven smaller, platform-specific versions.

#1043 RunAsUserName for Windows

Stage: Graduating to Beta

Feature group: windows

Now that Kubernetes supports Group Managed Service Accounts, the runAsUserName Windows-specific property can be used to define which user will run a container’s entrypoint.
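
A minimal sketch (pod name and image are hypothetical; ContainerUser is a built-in Windows container account):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-pod
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"  # the Windows user that runs the entrypoint
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
  nodeSelector:
    kubernetes.io/os: windows
```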

Originally published by Víctor Jiménez at https://sysdig.com

Kubernetes Vs Docker

This video on "Kubernetes vs Docker" will help you understand the major differences between these tools and how companies use these tools.

We will compare Kubernetes and Docker on the following factors:

  1. Definition
  2. Working
  3. Deployment
  4. Autoscaling
  5. Health check
  6. Setup
  7. Tolerance ratio
  8. Public cloud service providers
  9. Companies using them

Scaling Node.js Applications with Kubernetes and Docker

We will explore the benefits of a DevOps process using Kubernetes, Docker, and Node.js, showing how Docker and Node.js can work together and using the power of Kubernetes to release and scale stateless services automatically. In this talk we will explore the key concepts and components needed to start working with Kubernetes, real scenarios, and the differences between the traditional approach and container-based applications. Attendees will learn the basics of Kubernetes and tips to scale Node.js applications; furthermore, they will learn the common problems that we face when we decide to move from monoliths to microservices using Docker and JavaScript.

What are the key takeaways from this talk?

  • Service communication
  • Kubernetes and Docker
  • High availability & release process

What is the difference between Docker, Kubernetes and Docker Swarm ?

What is the difference between Docker and Kubernetes? And Kubernetes or Docker Swarm? In my video "Docker vs Kubernetes vs Docker Swarm" I compare both Docker and Kubernetes and Kubernetes vs Docker Swarm.

Kubernetes and Docker are not competing technologies. In fact, they actually complement one another to get the best out of both. In contrast, Docker Swarm is the comparable technology to Kubernetes.

  • 0:38 - Comparison Docker and Kubernetes
  • 1:40 - Docker and Kubernetes in the software development process
  • 2:42 - Kubernetes in Detail
  • 3:21 - Differences between Kubernetes and Docker Swarm