Networking Challenges in Kubernetes
Harshit Mehndiratta
September 13, 2020
7 minute read


Modern applications leveraging Kubernetes and container-based workloads are becoming the standard in enterprise production deployments. Applications built on a microservices architecture promote efficient, faster lifecycles that can respond to critical business needs as organizations scale.

However, enterprises implementing Kubernetes in their workflows have expressed concerns about networking. Traditional ways of operating networks simply don’t work in Kubernetes, which is one of the hurdles enterprises face when moving legacy applications with a lift-and-shift strategy. Networking tends to be one of the most challenging aspects of Kubernetes deployments; in fact, it ranks among the top challenges reported in polls of Kubernetes users. This complexity will only multiply as more organizations run containerized applications and Kubernetes management platforms.

Kubernetes imposes networking requirements on clusters so that pods and applications can communicate with each other. These requirements are based on a flat network structure that eliminates the need for port mapping, which provides an easy way to run distributed systems and share machines between applications. This implementation is suitable for small-scale deployments, but as you move to large-scale deployments, chances are you will run into challenges.

To help you address these challenges before they arise, we have gathered the networking challenges you may encounter in large-scale Kubernetes deployments.

Network Addressing Challenges

Networking in Kubernetes differs significantly from traditional networking. Static IP addresses and ports are unsuitable for communication because Kubernetes identifies workloads by network identity rather than by IP address. Identities are derived from Kubernetes labels and other metadata, which allows a single workload type to be backed by any number of pod IP addresses.

Creating a static IP connection between Pods is difficult in a highly dynamic environment like Kubernetes, as Pods are removed and respun, each time with a different IP address. For example, if you have a Pod named transactions and you delete it and spin up a new Pod, Kubernetes may reassign the old transactions Pod’s IP to an entirely different Pod.
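The standard answer to this churn is a Kubernetes Service, which gives clients a stable virtual IP and DNS name that tracks Pods by label no matter how often they are respun. A minimal sketch, where the transactions name, labels, and ports are illustrative:

```yaml
# Service that routes to whichever Pods currently carry the
# app: transactions label, regardless of their current Pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: transactions
spec:
  selector:
    app: transactions     # label-based identity, not a fixed IP
  ports:
    - port: 80            # stable port exposed by the Service
      targetPort: 8080    # port the Pod containers listen on
```

Clients then address transactions.<namespace>.svc.cluster.local, and cluster DNS absorbs the Pod IP churn.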

Updating IP-based policies is challenging in Kubernetes as well. In a traditional networking model, when a node shuts down, you update the policy rules to remove its IP address from allow lists. In Kubernetes, by contrast, a new Pod can be initialized automatically with that same IP and send traffic to other Pods, and an IP-based policy may still allow it.
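This is why Kubernetes NetworkPolicy resources select peers by labels rather than IPs. A hedged sketch, with illustrative names: traffic is admitted from any Pod labeled role: frontend, whichever IP it happens to come up with:

```yaml
# Allow ingress to the transactions Pods only from Pods that
# carry the role: frontend label; IPs never appear in the rule.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: transactions        # policy applies to these Pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # allowed by identity, not by IP
```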

Network Complexity Challenges

Kubernetes is sometimes deployed across more than one infrastructure platform - public cloud, private cloud, or a hybrid of on-premises plus cloud. Each public cloud offering has its own networking policies, which complicates operational processes and makes managing Kubernetes clusters across multiple clouds costly and time-consuming.

Even if you overcome the challenges of running Kubernetes on different infrastructure platforms, organizations often need to use both VM-based and Kubernetes-based deployments, and managing the networking of both becomes complex. Although the industry offers solutions that help development teams run Kubernetes-managed containers alongside a mix of virtual machines, there are enough differences between VM and container architectures to make networking them together challenging.

Another complexity challenge holding enterprises back from adopting Kubernetes is its direct impact on network design and implementation. For example, one adoption strategy involves restructuring an entire existing network design (or changing the deployment plans for new ones) specifically to suit Kubernetes and container-based application architectures.

Network Communication Challenges

Understanding the communication between microservices is essential for delivering fast and reliable services in cloud-native environments. Networking defines how communication happens in the cloud and has become a critical component underpinning every aspect of a microservices architecture, from deployment through development, testing, and management. However, service-to-service communication becomes more challenging in this type of architecture as the number of containers and Kubernetes pods increases.

There are several layers of network communication in Kubernetes, each with different challenges to solve for different communication types. For example, by default, nodes and containers can communicate with all other containers without using network address translation: the IP address a container is assigned is the same IP other containers use to reach it.

These default network requirements come in handy when organizations need to lower friction while migrating apps from a monolithic to a container architecture, but as a container-based organization evolves, this communication model creates new challenges for networking and operations teams.
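Teams that outgrow the allow-everything default typically begin by applying a default-deny policy per namespace and then allowing traffic explicitly. A minimal sketch of that starting point:

```yaml
# Deny all ingress to every Pod in the namespace; specific
# allow rules are then layered on top with further policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}       # empty selector matches all Pods
  policyTypes:
    - Ingress           # no ingress rules listed => deny all
```

Note that a CNI plugin that enforces NetworkPolicy must be installed for this to take effect.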

Multi-Tenancy Challenges

Multi-tenancy is challenging in Kubernetes networking. Many organizations deploy Kubernetes-as-a-Service, where one cluster hosts the applications and workloads of many tenants/customers. Multi-tenancy supports Kubernetes across teams and environments, and cloud platforms like AWS, Azure, and GCP use it to let multiple customers run their mission-critical apps on a single cluster.

Multi-tenancy also provides infrastructure efficiency benefits, as CPU cores and memory can be bought as commodities and the operations team’s load decreases because clusters can be managed uniformly. However, with these benefits come challenges.

A multi-tenancy model promotes shared infrastructure in an environment with many untrusted workloads, all using the same network infrastructure. Unauthorized communication between these workloads/customers on a single cluster can pose a serious networking challenge.

In Kubernetes, many types of workloads can be scheduled onto a worker node based on available resources. In multi-tenancy, different tenants can run on the same worker node, exposing the network to threats that originate inside the network or from sources where protection is limited.

The traditional networking model was founded on the assumption that attacks come mostly from outside the network, leading organizations to focus on hardening the firewalls that shield internal resources from external networks. With multi-tenancy, this model fails: the firewall must protect against attacks originating inside the network as well as outside it.
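In practice, tenant isolation usually starts with one namespace per tenant plus a policy that only admits traffic from within that tenant's namespaces. A sketch with illustrative names and labels:

```yaml
# Pods in the tenant-a namespace accept ingress only from
# namespaces that carry the tenant: a label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-tenant-only
  namespace: tenant-a
spec:
  podSelector: {}            # all Pods in tenant-a
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              tenant: a      # same-tenant namespaces only
```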

Network Interface Challenges

For a long time, the networking community has been working on virtualizing network functions into more general systems, known as virtual network functions (VNFs). These functions are often implemented by VM-based organizations. Organizations built on container architectures also want to take advantage of VNFs in their Kubernetes workflows, but because of the simple networking model Kubernetes adopts, supporting high-performance VNF applications in Kubernetes is challenging.

Kubernetes lacks native support for multiple network interfaces in a pod, yet VNF applications require at least two different interfaces to work. This makes deploying VNF applications in a microservices architecture infeasible, and it is difficult for companies that rely on multi-network capability (VNFs) to migrate legacy applications and workloads requiring multi-tenancy and hardware acceleration for computing and networking.

There are also complexities in enabling multiple network interfaces in Kubernetes. Multi-interface networking requires service endpoints, DNS, and other elements that are harder to configure than in the default Kubernetes networking model.
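Projects such as Multus CNI work around the single-interface limit by attaching secondary interfaces through a NetworkAttachmentDefinition. A hedged sketch, assuming Multus and a macvlan CNI plugin are installed; the names, parent interface, and image are illustrative:

```yaml
# Secondary network defined via Multus; Pods reference it by
# annotation to get a second interface alongside the default one.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-data
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "dhcp" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: vnf-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-data  # attach second interface
spec:
  containers:
    - name: vnf
      image: example/vnf:latest   # illustrative VNF image
```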

Network Policies Challenges

Network policies have long been a critical component of Kubernetes deployments. In many ways, the ease Kubernetes brings to deploying container-based applications makes it very easy to overlook container network policies.

It is essential to ensure strong network policy controls for container-based workloads, as these applications are not always visible to traditional tools and container IPs change frequently.

Kubernetes network policies also define how the pods in a cluster communicate with one another. Defining or changing the network policy for a pod or container requires creating NetworkPolicy resources, which adds up to large configuration files when scaling to large numbers of pods.

Final Words

The networking of modern applications deployed on Kubernetes can be overwhelming at first because of its agile and dynamic nature, but with the right practices and an understanding of the challenges, it is manageable.

The Container Network Interface (CNI), a plug-in model for seamlessly integrating Kubernetes with the underlying network infrastructure, has many solutions and implementations from different vendors. These enable Kubernetes to access applications across different cloud platforms.

Service meshes in Kubernetes have also been a great tool for implementing the Kubernetes networking model. With a service mesh, developers can focus on their primary tasks while the ops team takes responsibility for networking service management, maintaining consistent and secure communication between applications.

Containers will spin up and die, and you can’t stop that, but a sound Kubernetes networking model will work wonders if implemented with the right planning and tools. The Kubernetes market is booming, and the day is not far off when implementing a network model on Kubernetes will be as easy as spinning up a pod.