Kubernetes 1.19 has arrived, loaded with improvements and upgrades. According to the Kubernetes community, version 1.19 had the longest release cycle yet, lasting 20 weeks in total. It comprises 34 enhancements: 10 moving to stable, 15 in beta, and 9 in alpha.
Version 1.19 brings a range of changes that highlight the maturity of the Kubernetes platform, including notable advancements in security, networking, storage, and support.
In this blog post, we will walk through the major advancements in this release.
Starting with Kubernetes 1.19, the support window expands from nine months to one year. Until now, patch releases (for example, for 1.18.x) were delivered for up to nine months after a minor version's initial launch, forcing cluster management teams to complete their upgrades within that nine-month window to stay updated and supported.
A survey conducted in mid-2019 by the LTS working group indicated that most Kubernetes users fail to upgrade within the nine-month support window. Responses varied, but about 30 percent of users said they would be able to keep their deployments on supported versions of Kubernetes if the support window were stretched to 12 to 14 months.
The survey also noted that, under the nine-month window, only Kubernetes 1.11 and 1.13 were upgraded in time, implying that roughly a third of Kubernetes users were running an unsupported version anyway. With the window extended to 12 to 14 months, around 80 percent of Kubernetes users would be able to stay on a supported version.
Numbers like these matter to the Kubernetes project, since a longer window encourages users to stay on supported versions of Kubernetes. A yearly support period also fits better with yearly subscription cycles. Having three additional months to apply patches and plan upgrades takes a lot of pressure off cluster management teams while keeping the infrastructure protected from threats.
Speaking of version upgrades, one of the most painful parts of planning a major Kubernetes upgrade is hunting down APIs that are about to become obsolete. That has become simpler in Kubernetes 1.19, which introduces a warning mechanism to help developers stay aware of upcoming API deprecations.
Kubernetes now returns warnings when a user calls a deprecated API: any request to an obsolete API endpoint includes a warning header describing the deprecation, such as the release in which the API was introduced, the release in which it was deprecated, and the release in which it will be removed.
The warning can also include additional details and upgrade steps when a newer version of that API is available. Each such request is also recorded as an audit event and reflected in the API server's metrics.
Although the API deprecation warnings feature is in beta, it allows cluster management teams to plan for upcoming removals, eliminating a portion of that upgrade pressure.
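The exact wording varies by resource and client version, but the warning surfaced by a client such as kubectl looks roughly like this (illustrative example):

```
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
```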
Some applications need extra storage but don't care whether that data survives restarts. Kubernetes models this kind of storage as ephemeral volumes, whose lifecycle is tied to a pod and which can be used to expose data in files, such as configuration data or secret keys, to an application while it runs.
Kubernetes already supports several kinds of such ephemeral volumes, but their functionality is limited to what is implemented inside Kubernetes itself. The new generic ephemeral volumes alpha feature in Kubernetes 1.19 allows any existing storage driver that supports dynamic provisioning to be used as an ephemeral volume, with the volume's lifecycle bound to the pod.
Previously, all ephemeral volume drivers were implemented directly in Kubernetes, and a CSI driver had to be modified specifically to support ephemeral volumes. With Kubernetes 1.19, that is no longer necessary.
Kubernetes 1.19 thus provides a more straightforward way of defining volumes, reducing the complexity of configuration files. This not only makes things simpler for current Kubernetes users but also lowers the overhead of configuring storage, making things easier for newcomers.
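A minimal sketch of a pod using a generic ephemeral volume might look like the following; the image and storage class name are placeholders, and the feature requires the GenericEphemeralVolume feature gate to be enabled in 1.19:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-scratch
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: csi-fast          # assumed CSI-backed StorageClass
            resources:
              requests:
                storage: 1Gi
```

The PVC created from the template is owned by the pod, so it is cleaned up automatically when the pod goes away.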
On the security front, Kubernetes 1.19 adds support for new TLS 1.3 ciphers, one of the recommendations that emerged from the Kubernetes security audit conducted last year. In addition, kubelet TLS certificate rotation, introduced in Kubernetes 1.7 and in beta since Kubernetes 1.8, has reached stable in version 1.19.
The stable release covers the certificate rotation process the kubelet uses to authenticate against the kube-apiserver: obtaining the initial certificate/key pair and rotating it as expiration approaches, a mechanism that had been available as a beta feature since Kubernetes v1.8.
In 1.19, a kubelet authenticates to the kube-apiserver using a private key and certificate. The certificate is supplied during the kubelet's first boot, through an out-of-cluster mechanism.
During bootstrapping, the kubelet first examines the filesystem for an existing cert/key pair, which is managed by the certificate manager. If a cert/key pair is available, it is loaded. If not, the kubelet checks its kubeconfig file for an encoded certificate reference.
If that certificate is a bootstrap certificate, it is used to generate a key, create a certificate signing request, and have the request signed by the API server.
As a certificate approaches expiration, the certificate manager handles rotation by generating new private keys and requesting new certificates. Because the kubelet requests certificates continuously, the requests can be handled through auto-approval, keeping cluster management simple.
The stable TLS work in 1.19 is accompanied by the CertificateSigningRequest API also going stable, which allows PKI issuance to be consumed both by Kubernetes components and by user workloads running in the cluster.
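As a rough sketch of the now-stable API (the name and request contents below are placeholders), a CertificateSigningRequest in certificates.k8s.io/v1 looks like this; note that signerName is required in v1:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-client-cert          # placeholder name
spec:
  request: LS0tLS1CRUdJTi...         # base64-encoded PKCS#10 CSR (truncated placeholder)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
    - client auth
```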
Version 1.19 also brings various improvements to the Kubernetes scheduler. The ability to customize kube-scheduler behavior by writing a configuration file and passing it as a command-line argument has reached beta.
This beta release is accompanied by the multiple scheduling profiles feature, which lets a single scheduler run with several configurations, or profiles, rather than running one scheduler per configuration, in order to reduce race conditions.
Race conditions can occur when certain events must happen before others but there is no control over execution order, as is the case when several independent schedulers place pods onto the same nodes. With this feature graduating to beta in 1.19, the Kubernetes scheduler should be able to handle a more comprehensive set of workloads and use cases.
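A sketch of such a configuration file, passed to kube-scheduler via its --config flag, might look like this (the second profile name and the disabled plugin are illustrative); a pod then opts into a profile by setting spec.schedulerName:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: low-latency                      # hypothetical second profile
    plugins:
      score:
        disabled:
          - name: NodeResourcesBalancedAllocation   # example: drop a scoring plugin for this profile
```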
Lastly, the Pod Topology Spread feature, which was in beta until 1.18, has graduated to stable in 1.19. Pod topology spread constraints let the Kubernetes scheduler spread a group of pods evenly across failure domains such as zones or nodes.
Previously, you had to use inter-pod anti-affinity, which does not allow more than one pod per failure domain. The new feature can place more than one pod in a failure domain while keeping the overall distribution balanced.
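A brief sketch (labels and image are placeholders): the constraint below asks the scheduler to keep the number of matching pods per zone within a skew of one:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone   # spread across zones
      whenUnsatisfiable: DoNotSchedule           # or ScheduleAnyway for a soft constraint
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: registry.example.com/web:latest     # placeholder image
```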
The other notable option is a pod spec setting that prevents the preemption (eviction) of existing workloads, which can be useful for certain kinds of long-running workloads.
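Assuming this refers to the non-preempting priority option, a PriorityClass like the following (name and value are illustrative) lets its pods wait in the scheduling queue instead of evicting running workloads:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-no-preempt       # hypothetical class name
value: 100000
preemptionPolicy: Never        # pods with this class queue instead of preempting others
globalDefault: false
description: "High-priority batch jobs that must not evict running workloads."
```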
Until 1.18, there was no uniform structure for log messages or for references to Kubernetes objects within them, which made parsing, processing, and analyzing logs a cumbersome task and made it hard to build analytics solutions on top of them. In 1.19, Kubernetes introduces structured logging, which establishes a standard structure for Kubernetes log messages.
The feature is still in alpha, but it provides the ability to configure Kubernetes components to produce JSON-formatted logs. Methods such as InfoS and ErrorS have been added to the klog library to enforce a uniform structure: each call takes a log message together with a list of key-value pairs. This approach permits gradual adoption without converting all Kubernetes components to a new API in one go.
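With a component started with the JSON logging format enabled, a log entry produced by a call like InfoS comes out roughly as follows (illustrative sample; the exact fields can vary by version):

```json
{"ts":1580306777.04728,"v":4,"msg":"Pod status updated","pod":{"name":"nginx-1","namespace":"default"},"status":"ready"}
```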
Find out more about structured logging in the Kubernetes 1.19 documentation.
Seccomp, or Secure Computing Mode, is a Linux kernel security feature for restricting the system calls an application can make. Seccomp was introduced as an alpha Kubernetes feature back in 1.3. Until now, applying seccomp profiles to pods required using annotations, on the pod itself or on a PodSecurityPolicy.
In 1.19, seccomp graduates to general availability, and a seccompProfile field is added to pods' securityContext objects. The annotation-based approach used up through 1.18 is deprecated and will be removed in 1.22. To ensure backward compatibility, seccomp profiles are enforced with container-specific fields taking precedence over the pod-wide field and the annotations.
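A short sketch of the new field (image and profile path are placeholders): the pod defaults to the runtime's profile, while the container overrides it with a profile loaded from the node, illustrating the container-over-pod precedence:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault                       # pod-wide default: the container runtime's profile
  containers:
    - name: app
      image: registry.example.com/app:latest     # placeholder image
      securityContext:
        seccompProfile:
          type: Localhost
          localhostProfile: profiles/audit.json  # assumed path under the kubelet's seccomp profile directory
```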
The Ingress API has been available in beta since Kubernetes 1.1. Ingress manages external access to services in a cluster by exposing HTTP and HTTPS routes. It also handles load balancing, name-based virtual hosting, and SSL/TLS termination.
In 1.19, Ingress graduates to general availability and is added to the networking v1 APIs. According to the community, Ingress has achieved GA status on the strength of its wide usage and its adoption by various ingress controller providers. It also seemed more reasonable to declare the current API v1 while working on a future v2 of Ingress for a new set of features.
As part of this GA graduation, however, there are some key differences in v1 Ingress objects, including changes to validation and the schema. For instance, the value for pathType no longer has a default; you have to specify it explicitly.
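A minimal networking.k8s.io/v1 Ingress might look like the following (host, service name, and port are placeholders); note the explicit pathType and the restructured backend stanza:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix         # required in v1: Prefix, Exact, or ImplementationSpecific
            backend:
              service:
                name: web            # assumed Service name
                port:
                  number: 80
```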
One feature improvement that the Kubernetes community has been following for quite a while is support for sidecar containers. It was scheduled to land in 1.19 but has been postponed because of additional considerations raised by Kubernetes SIG Node.
The KEP for sidecar containers was first submitted in May 2018 and has been the subject of discussion and refinement ever since. The KEP notes that Kubernetes users are already using the sidecar approach informally; however, that has surfaced issues which the KEP aims to address.
The sidecar KEP explains that, to solve the problem of container lifecycle dependencies, a new class of container can be introduced: a "sidecar container" that behaves like a typical container but is handled differently during pod startup and shutdown.
There are plenty of workflows in Kubernetes today that people do not orchestrate because they cannot work with the current, unofficial sidecar approach. Once the sidecar KEP becomes a stable part of Kubernetes, that will change, making sidecar capability possible anywhere in the cluster.