What’s new in Kubernetes 1.22

The latest release of the popular open source system for running containerized applications brings Server-side Apply out of beta and improves support for Windows hosts.

By making containerized applications dramatically easier to manage at scale, Kubernetes has become a key part of the container revolution. Here’s the latest.

Kubernetes 1.22, released August 4, 2021, contains the following new and updated features:

  • Server-side Apply is now generally available. This previously beta-only feature allows objects on Kubernetes servers to be created and modified declaratively, by having the developer describe their intent. Changes to an object are tracked on a field-by-field basis, so that any attempts to change a field modified and “owned” by someone else will be rejected. Server-side Apply is intended eventually to replace the original kubectl apply function because it provides a simpler mechanism for controllers to make changes to their configurations.
  • External credential providers, available by way of plug-ins, are now out of beta.
  • Etcd, the default back-end storage for Kubernetes, has been updated to a new release (3.5.0) with bug fixes and new features around log management.
  • QoS for memory resources is available as a beta feature. The cgroups v2 API can now be used to designate how memory is allocated and isolated for pods, making it easier to deploy multiple applications that might fight each other for memory usage.
  • Better support for developing and running on Microsoft Windows. Some Kubernetes features for Windows are still alpha—e.g., privileged containers—but it’s now possible to run more of the early-support Kubernetes features on Windows by manually building the Windows kubelet and kube-proxy binaries.
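As a sketch of the Server-side Apply workflow described above, the object and field names below are hypothetical; the `--server-side` and `--force-conflicts` flags are the real kubectl switches involved:

```yaml
# Illustrative object for Server-side Apply (hypothetical names).
# Applied with: kubectl apply --server-side -f settings.yaml
# The API server records this applier as the manager of data.timeout.
# If a different manager later tries to change that field, the request
# is rejected as a conflict unless it passes --force-conflicts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
data:
  timeout: "30"
```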

Other changes in Kubernetes 1.22:

  • Nodes can now run on systems with swap memory enabled. Previously, Kubernetes admins had to disable swap space before setting up Kubernetes. (Alpha feature.)
  • Support for default, cluster-wide seccomp profiles is now available. (Alpha.)
  • kubeadm can now be run as non-root if needed, by running the control plane with lower privileges. (Alpha.) All other Kubernetes node components can be run experimentally as a non-root user as well.
  • Some APIs have been deprecated and changed, in particular the API for Ephemeral Containers (which was an alpha feature to begin with and did not have a stable API).
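The new cluster-wide seccomp default applies a profile automatically; today a pod opts in explicitly through its securityContext, as in this minimal sketch (pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    # Use the container runtime's default seccomp profile
    # for every container in this pod.
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginx:1.21
```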

Kubernetes 1.20, released in December 2020, introduced these major changes:

  • The Docker runtime is being deprecated. However, this doesn’t mean Docker images or Dockerfiles don’t work in Kubernetes anymore. It just means Kubernetes will use container runtimes that implement its Container Runtime Interface (CRI), such as containerd, instead of the Docker runtime. For most users this will have no significant impact—e.g., any existing Docker images will work fine. But some issues might result when dealing with runtime resource limits, logging configurations, or how GPUs and other special hardware interact with the runtime (something to note for those using Kubernetes for machine learning). The Kubernetes project’s deprecation documentation provides details on how to migrate workloads, if needed, and what issues to be aware of.
  • Volume snapshot operations are now stable. This allows volume snapshots—images of the state of a storage volume—to be used in production. Kubernetes applications that depend on highly specific state, such as images of database files, will be easier to build and maintain with this feature active.
  • Kubectl Debug is now in beta, allowing common debug workflows to be conducted from within the kubectl command-line environment. 
  • API Priority and Fairness (APF) is now enabled by default, although still in beta. Incoming requests to kube-apiserver can be sorted by priority levels, so that the administrator can specify which requests should be satisfied most immediately.
  • Process PID Limiting is now in general availability. This feature ensures that pods cannot exhaust the number of process IDs available on a Linux host, or interfere with other pods by using up too many processes.
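Process PID limiting is configured on the kubelet. A minimal sketch of the relevant KubeletConfiguration field follows; the limit value is an arbitrary example:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Cap each pod at 4096 process IDs so no single pod can
# exhaust the host's PID space or starve neighboring pods.
podPidsLimit: 4096
```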

Kubernetes 1.17, released in December 2019, introduced the following key new features and revisions: 

  • Volume snapshots, introduced in alpha in Kubernetes 1.12, are now promoted to beta. This feature allows a volume in a cluster to be snapshotted at a given moment in time. Snapshots can be used to provision a new volume with data from the snapshot, or to roll back an existing volume to an earlier snapshotted version. Volume snapshots make it possible to perform elaborate data-versioning or code-versioning operations inside a cluster that weren’t previously possible.
  • More of the “in-tree” (included by default) storage plug-ins are now being moved to the Container Storage Interface (CSI) infrastructure. This means fewer direct dependencies on those drivers for the core version of Kubernetes. A cluster has to be explicitly updated to support migrating the in-tree storage plug-ins, but a successful migration shouldn’t have any ill effects for a cluster.
  • The cloud provider labels feature, originally introduced in beta back in Kubernetes 1.2, is now generally available. Nodes and volumes are labeled based on the cloud provider where the Kubernetes cluster runs, as a way to describe to the rest of Kubernetes how those nodes and volumes should be handled (e.g., by the scheduler). If you are using the earlier beta versions of the labels yourself, you should upgrade them to their new counterparts to avoid problems.
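The snapshot-and-restore flow described above is driven by two objects: a VolumeSnapshot taken from an existing claim, and a new claim provisioned from that snapshot. A sketch using the beta API from this release (all names are hypothetical, and a CSI driver with snapshot support is assumed):

```yaml
# Snapshot an existing PersistentVolumeClaim.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: db-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: db-data
---
# Provision a new volume pre-populated from the snapshot.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data-restored
spec:
  dataSource:
    name: db-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```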

Where to download Kubernetes

You can download the Kubernetes source code from the releases page of its official GitHub repository. Kubernetes is also available by way of the upgrade process provided by the numerous vendors that supply Kubernetes distributions.

What’s new in Kubernetes 1.16

Kubernetes 1.16, released in September 2019, contains the following new and revised features:
  • Custom resource definitions (CRDs), the long-recommended mechanism for extending Kubernetes functionality introduced in Kubernetes 1.7, are now officially a generally available feature. CRDs have already been widely used by third parties. With the move to GA, many optional-but-recommended behaviors are now required by default to keep the APIs stable.
  • Many changes have been made to how volumes are handled. Chief among them is moving the volume resizing API, found in the Container Storage Interface (CSI), to beta.
  • Kubeadm now has alpha support for joining Windows worker nodes to an existing cluster. The long-term goal here is to make Windows and Linux nodes both first-class citizens in a cluster, instead of having only a partial set of behaviors for Windows.
  • CSI plug-in support is now available in alpha for Windows nodes, so those systems can start using the same range of storage plug-ins as Linux nodes.
  • A new feature, Endpoint Slices, allows for greater scaling of clusters and more flexibility in handling network addresses. Endpoint Slices are now available as an alpha test feature.
  • The way metrics are handled continues a major overhaul with Kubernetes 1.16. Some metrics are being renamed or deprecated to bring them more in line with Prometheus. The plan is to remove all deprecated metrics by Kubernetes 1.17.
  • Finally, Kubernetes 1.16 removes a number of deprecated API versions.
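The CRD graduation shows up in manifests as the apiextensions.k8s.io/v1 API, which, unlike v1beta1, requires a structural OpenAPI schema for each served version. A minimal sketch (the group, kind, and field names are hypothetical):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>.
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      # v1 requires a structural OpenAPI v3 schema; v1beta1 did not.
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```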

What’s new in Kubernetes 1.15

Kubernetes 1.15, released in late June 2019, provides the following new features and improvements:
  • More features (currently in alpha and beta) for Custom Resource Definitions, or CRDs. CRDs in Kubernetes are the foundation of its extensibility technology, allowing Kubernetes instances to be customized without falling out of conformance with upstream Kubernetes standards. The new features include the ability to convert CRDs between versions (something long available for native resources), OpenAPI publishing for CRDs, default values for fields in OpenAPI-validated schemas for CRDs, and more.
  • Native high availability (HA) in Kubernetes is now in beta. Setting up a cluster for HA still requires planning and forethought, but the long-term goal is to make HA possible without any third-party software.
  • More plug-ins that manage volumes have been migrated to use the Container Storage Interface (CSI), a consistent way to manage storage for hosted containers. Among the new features introduced in alpha for CSI are volume cloning, so that new persistent volumes can be based on an existing one.
Other changes in Kubernetes 1.15 include:
  • Certificate management now automatically rotates certificates before expiration.
  • A new framework for plug-ins that perform scheduling operations has entered alpha.
  • Microsoft Windows Server 2019 is now officially supported as a platform for running Kubernetes worker nodes and scheduling containers. Note that the control plane components still run on Linux, so a cluster mixing Windows worker nodes with Linux control-plane machines is the supported configuration.
  • The plugin mechanism for Kubectl, the default Kubernetes command-line tool, is now a stable feature, letting developers implement their own Kubectl subcommands as standalone binaries.
  • Persistent local volumes are now a stable feature. This lets locally attached storage be used by Kubernetes for persistent volumes. Aside from offering better performance than using network-attached storage, it also makes it easier (and potentially cheaper) to stand up a cluster.
  • Process ID limiting for Linux hosts is now a beta feature. This prevents any one pod from using up too many process IDs and thus causing resource exhaustion on the host.
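The kubectl plugin mechanism mentioned above is deliberately simple: a plugin is just an executable named `kubectl-<subcommand>` somewhere on the PATH. A minimal sketch (the plugin name and install directory are arbitrary examples):

```shell
# A kubectl plugin is any executable named kubectl-<subcommand> on PATH.
mkdir -p "$HOME/.local/bin"
cat > "$HOME/.local/bin/kubectl-hello" <<'EOF'
#!/bin/sh
echo "hello from a kubectl plugin"
EOF
chmod +x "$HOME/.local/bin/kubectl-hello"
export PATH="$HOME/.local/bin:$PATH"

# With kubectl installed, `kubectl hello` now dispatches to this binary.
# The plugin can also be run directly:
kubectl-hello
```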

What’s new in Kubernetes 1.13

Version 1.13 of Kubernetes was released in December 2018, with the following new and upgraded features:
  • Kubeadm, a tool designed to make it easier to set up a Kubernetes cluster, is finally available as a fully supported feature. It walks an admin through the basics of setting up nodes for production, joining them to the cluster, and applying best practices along the way. It also provides a way for infrastructure-orchestration tools (Puppet, Chef, Salt, etc.) to automate cluster setup.
  • The Container Storage Interface, or CSI, is now also available as a supported feature. CSI allows extensions for Kubernetes’s volume layer, so that storage plugins can work with Kubernetes without having to be made part of Kubernetes’s core code.
  • Kubernetes now uses CoreDNS as its default DNS server. CoreDNS works as a drop-in replacement for other DNS servers, but was built to integrate with Kubernetes by way of plug-ins and integration with Kubernetes features such as Prometheus monitoring metrics.
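kubeadm's setup flow can be driven from a configuration file rather than flags. A sketch of a cluster configuration, passed as `kubeadm init --config kubeadm.yaml` (the pod CIDR is an arbitrary example and must match the chosen CNI plug-in; workers then join with `kubeadm join` using the token printed by `kubeadm init`):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
networking:
  # Example pod network range; coordinate this with the CNI plug-in in use.
  podSubnet: 10.244.0.0/16
```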

What’s new in Kubernetes 1.12

Released in late September 2018, Kubernetes 1.12 brings to general availability the Kubelet TLS Bootstrap. The Kubelet TLS Bootstrap allows a Kubelet, or the primary agent that runs on every Kubernetes node, to join a TLS-secured cluster automatically, by requesting a TLS client certificate through an API. By automating this process, Kubernetes allows clusters to be configured with higher security by default.

Also new in Kubernetes 1.12 is support for Microsoft Azure’s virtual machine scale sets (VMSS), a way to set up a group of VMs that automatically ramp up or down on schedule or to meet demand. Kubernetes’s cluster-autoscaling feature now works with VMSS.

Other new features in Kubernetes 1.12:

  • Snapshot and restore functionality for volumes (alpha).
  • Custom metrics for pod autoscaling (beta). This allows custom status conditions or other metrics to be used when scaling a pod—for instance, if resources that are specific to a given deployment of Kubernetes need to be tracked as part of the application’s management strategy.
  • Vertical pod scaling (beta), which allows a pod’s resource limits to be varied across its lifetime, as a way to better manage pods that have a high cost associated with disposing of them. This is a long-standing item on many wish lists for Kubernetes, because it allows for strategies to deal with pods whose behaviors aren’t easy to manage under the current scheduling strategy.
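Custom-metrics autoscaling is expressed through the autoscaling/v2beta1 API. A sketch scaling a hypothetical Deployment on a hypothetical per-pod `queue_depth` metric, assuming a metrics adapter is serving that metric to the custom metrics API:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      # Add replicas whenever the average queue depth per pod exceeds 30.
      metricName: queue_depth
      targetAverageValue: "30"
```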

What’s new in Kubernetes 1.11

Released in early July 2018, Kubernetes 1.11 adds IPVS, or IP Virtual Server, to provide high-performance cluster load balancing using an in-kernel technology that’s less complex than the iptables system normally used for such tasks. Eventually, Kubernetes will use IPVS as the default load balancer, but for now it’s opt-in.

Custom resource definitions, billed as a way to make custom configuration changes to Kubernetes without breaking its standardizations, may now be versioned to allow for graceful transitions from one set of custom resources to another over time. Also new are ways to define “status” and “scale” subresources, which can integrate with monitoring and high-availability frameworks in a cluster.

Other major changes include:

  • CoreDNS, introduced in 1.10, is now available as a cluster DNS add-on, and is used by default in the kubeadm administration tool.
  • Kubelet configuration changes can now be rolled out across a live cluster without first taking the cluster down.
  • The Container Storage Interface (CSI) now supports raw block volumes, interoperates with the kubelet plugin registration system, and can pass secrets more readily to CSI plugins.
  • There are many changes to storage, including online resizing of persistent volumes, the ability to specify a maximum volume count for a node, and better support for protecting storage objects from being removed when in use.
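Opting in to IPVS is a kube-proxy setting; the sketch below shows the relevant configuration fragment (equivalent to starting kube-proxy with `--proxy-mode=ipvs`):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Use the in-kernel IPVS load balancer instead of iptables rules.
mode: ipvs
```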

What’s new in Kubernetes 1.10

The March 2018 Kubernetes 1.10 production release includes the beta release of the Container Storage Interface (alpha as of Kubernetes 1.9) that promotes an easier way to add volume plug-ins to Kubernetes, something that previously required recompiling the Kubernetes binary. The Kubectl CLI, used to perform common maintenance and administrative tasks in Kubernetes, can now accept binary plug-ins that perform authentication against third-party services such as cloud providers and Active Directory.

“Non-shared storage,” or the ability to mount local storage volumes as persistent Kubernetes volumes, is now also beta. The APIs for persistent volumes now have additional checks to make sure persistent volumes that are in use aren’t deleted. The native DNS provider in Kubernetes can now be swapped with CoreDNS, a CNCF-managed DNS project with a modular architecture, although the swap can only be accomplished when a Kubernetes cluster is first set up.

The Kubernetes project is now also moving to an automated issue life-cycle management project, to ensure stale issues don’t stay open for too long.

What’s new in Kubernetes 1.9

Kubernetes 1.9 was released in December 2017.

Production version of the Workloads API

Promoted to beta in Kubernetes 1.8 and now in production release in Kubernetes 1.9, the Apps Workloads API provides ways to define workloads based on their behaviors, such as long-running apps that need persistent state.

Version 1 of the Apps Workloads API brings four APIs to general availability:

  • Deployment. A basic way to describe the desired state for a running application, including a ReplicaSet.
  • ReplicaSet. This ensures, via a Deployment’s configuration, that an app has enough running container instances (“replicas”) to satisfy its definition.
  • DaemonSet. This runs a copy of an app on all (or a selected set of) nodes in a cluster, regardless of what other apps might be running, such as a logging or monitoring agent.
  • StatefulSet. This is used for workloads that need persistent state even if containers are killed and restarted. StatefulSets also provide persistency for things like network identification for containers or the order in which containers start and stop.

Another set of Workloads APIs, Job and CronJob (collectively called the Batch Workloads APIs), are used for workloads that run on a scheduled basis and then terminate. The Batch Workloads APIs are still in beta.
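With the v1 Workloads API, those kinds live under the apps/v1 group. A minimal Deployment, which in turn creates and manages a ReplicaSet (the name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # the managed ReplicaSet keeps three pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13
```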

Beta support for Windows Server

After Microsoft added native support for Docker containers to Windows, the next logical step was for other apps that used Docker, like Kubernetes, to follow suit. Kubernetes 1.9 now provisionally supports the use of Kubernetes on Windows Server.

To test Kubernetes on Windows Server, you need Windows Server 2016 and Docker 1.12. Right now, the Kubernetes control plane can run only on Linux. In other words, you can schedule containers to run on Windows Server from a Linux controller, but you can’t use Windows Server as a controller instead of Linux.

The first alpha of the Container Storage Interface (CSI)

One of Kubernetes’s key features since the beginning has been abstraction of resources, including storage, from applications. Unfortunately, container storage hasn’t really had a standard; most every container solution has implemented its own way of handling storage, Kubernetes included.

The good news is that a subgroup of the Cloud Native Computing Foundation, the CNCF Storage Working Group, has devised its own standard for storage in container clusters, the Container Storage Interface (CSI) standard. Kubernetes 1.9 has an alpha version of a CSI plugin, to allow storage volume plugins to be developed entirely independently from Kubernetes itself. The project is still in the early stages, but Kubernetes’s developers are confident it’s a step in the right direction.

Other new features in Kubernetes 1.9

Some of the other additions and modifications include:

  • An alpha version of a hardware acceleration addition to Kubernetes, allowing the use of GPUs as a resource. This will better enable Kubernetes to be an underpinning for machine learning workloads.
  • Alpha support for IPv6 addressing.
  • Faster validation for Custom Resource Definition (CRD) data. CRDs let admins customize and extend a given Kubernetes installation, but without jeopardizing compatibility when new versions of Kubernetes come along.
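GPU scheduling surfaces as an extended resource on containers. With a device plug-in installed, a pod requests GPUs as in this sketch (pod and image names are hypothetical; the resource name follows the NVIDIA device plug-in convention):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-job
spec:
  containers:
  - name: trainer
    image: nvidia/cuda:9.0-base
    resources:
      limits:
        # Request one GPU via the device plug-in API.
        nvidia.com/gpu: 1
```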

What’s new in Kubernetes 1.8

Kubernetes 1.8 was released in October 2017.

New security features in Kubernetes 1.8

Earlier versions of Kubernetes introduced role-based access control (RBAC) as a beta feature. RBAC lets an admin define access permissions to Kubernetes resources, such as pods or secrets, and then grant (“bind”) them to one or more users. Permissions can be for changing things (“create,” “update,” “patch”) or just obtaining information about them (“get,” “list,” “watch”). Roles can be applied on a single namespace or across an entire cluster, via two distinct APIs.
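The RBAC model described above pairs a Role (the permissions) with a RoleBinding (the grant). A namespaced sketch, where the user name is hypothetical:

```yaml
# Grant read-only access to pods in the default namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind that role to a (hypothetical) user named jane.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```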

Kubernetes already had a policy system for networking, including filtering incoming traffic to pods. Kubernetes 1.8 adds beta support for filtering outbound traffic as well. Right now, filtering in both directions is limited to a list of destination ports and peers, so things like rate limiting aren’t yet available through Kubernetes’s interfaces. (You can accomplish such things directly in containers using a custom network bridge.)
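Outbound filtering uses the same NetworkPolicy API as the existing ingress rules. A sketch restricting a hypothetical app's egress to a single CIDR and port (the label, CIDR, and port are arbitrary examples):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: limit-egress
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  # Allow outbound traffic only to one subnet, on one TCP port.
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5432
```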

Another networking feature promoted to beta is automatic TLS certificate rotation for Kubelet, the agent software that runs on each Kubernetes node. TLS certificates used by Kubernetes internally have a one-year lifespan, and it’s easy to forget to regenerate those certificates. The new feature, when enabled, automatically generates new certs for Kubelet when the current ones are almost expired.

Auditing features in Kubernetes 1.8

Introduced in Kubernetes 1.7 as an alpha feature, auditing is kicked up a notch to beta status in Kubernetes 1.8. This includes formatting for audit logs, policies for controlling which elements of a cluster can be logged and by whom, and webhooks to relay events to external services. 

Promoting auditing to beta means that the audit event format will make only backward-compatible changes. In other words, it’s a signal that Kubernetes developers can start building production functionality with the feature. An example of that backward compatibility for the auditing framework is the audit2rbac tool, which can generate RBAC profiles from audit events.
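The logging policies mentioned above are written as an audit policy file handed to kube-apiserver via its `--audit-policy-file` flag. A minimal sketch using the beta API:

```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# Record access to secrets at the Metadata level only,
# so secret payloads never land in the audit log.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log everything else with full request details.
- level: Request
```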

Workload features in Kubernetes 1.8

Another alpha-to-beta promotion in Kubernetes 1.8 is the set of workload APIs. These provide a way to orchestrate applications based on their overall behaviors—for example, a batch job or cron-style job that runs and terminates, versus a daemon that runs continuously.

Some of the workload APIs are set to be promoted out of beta sooner than others. Four of the most common—Deployment, DaemonSet, ReplicaSet, and StatefulSet—are in full production status as of Kubernetes 1.9. The batch APIs (Job and CronJob) will follow later, but Kubernetes 1.8 gives developers an idea of how they’re meant to work.

Some applications can already use the workload APIs, but only in a provisional way. Apache Spark, for example, has a fork that runs directly on Kubernetes, although those features won’t be officially available in either Spark or Kubernetes for some time yet.

Other new alpha and beta features in Kubernetes 1.8

Other, as-yet-unfinished features are included in Kubernetes 1.8 as either alpha or beta additions:

  • cri-containerd (in beta) lets you use Containerd instead of the Docker daemon, as a possible way to reduce direct dependencies on Docker.
  • Volume snapshots (in alpha) lets you take snapshots of Kubernetes volumes using a Kubernetes API call. It’s absolutely not ready for production right now, because it doesn’t even ensure that snapshots are consistent when taken.
  • Volume resizing (in alpha) lets you manipulate the size of volumes, again using a Kubernetes API call. Note that volume resizing changes only the underlying volume size and not the file system on the volume, because that file system could be anything.

Copyright © 2021 IDG Communications, Inc.