Running a containerized application often requires exposing network services that route external traffic into the Kubernetes cluster. Like Deployments in Kubernetes, network services usually sit at the front of the application, balancing uneven traffic while providing a dynamic, abstract way to access a group of services in the Kubernetes cluster.
Exposing a network service in Kubernetes can be done through three different approaches: NodePort, LoadBalancer, and Ingress. Each provides its own way of handling traffic and is useful in different scenarios.
We'll discuss Ingress in this blog and how it provides mechanisms and solutions to get external traffic into the cluster. But before diving into Ingress, let's take a quick look at how the other two service types work.
Starting with the NodePort service type: NodePort exposes the application on a static port on each node to route calls to services. It assigns a static port number, taken from the configured NodePort range, and opens that port on every node to route incoming traffic.
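A minimal sketch of a NodePort Service is shown below; the service name, labels, and port numbers are illustrative, not taken from a real deployment. Ports 30000–32767 are the default NodePort range.

```yaml
# Hypothetical NodePort Service: exposes Pods labeled app=web
# on a static port (30080) opened on every node.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web          # Pods this Service routes to
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 8080  # container port on the Pods
    nodePort: 30080   # static port opened on each node
```

With this in place, traffic hitting `<any-node-ip>:30080` is forwarded to the matching Pods.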
On the other hand, the LoadBalancer service type provisions an external load balancer that routes external traffic to a Kubernetes service. The implementation of a LoadBalancer varies with the cloud provider. Moreover, if you are not deploying your applications on a public cloud, an external load balancing solution is a must to safely route the traffic.
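The manifest differs from NodePort only in the `type` field; the cloud provider does the rest. Again, names and ports here are hypothetical:

```yaml
# Hypothetical LoadBalancer Service: the cloud provider provisions
# an external load balancer and points it at the Pods behind the Service.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80          # port exposed by the external load balancer
    targetPort: 8080  # container port on the Pods
```

Once provisioned, the assigned external address appears in the Service's `status.loadBalancer` field.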
Kubernetes Ingress is an API object that can be easily set up on top of Kubernetes services for managing external users' access to services deployed on a Kubernetes cluster.
Ingress provides mechanisms such as application load balancing, HTTP and HTTPS request mapping, and SSL/TLS termination, exposing Kubernetes services through secure, reachable domains and URLs and making it simple to expose a service in a Kubernetes production environment.
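As a sketch of the TLS termination mechanism, the Ingress below (using the `networking.k8s.io/v1` API; the host, service, and Secret names are assumptions) terminates HTTPS for a single domain using a certificate stored in a Kubernetes Secret:

```yaml
# Hypothetical Ingress terminating TLS for example.com with a
# certificate kept in the Secret "example-tls".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-tls
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls   # Secret holding tls.crt and tls.key
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web         # Service receiving decrypted traffic
            port:
              number: 80
```

The controller decrypts incoming HTTPS requests and forwards plain HTTP to the backend Service.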
Ingress, compared to Kubernetes Services, operates at the application layer of the OSI model instead of the transport layer, which gives it the ability to inspect individual HTTP/HTTPS requests for secure communication.
Ingress also provides the capability to consolidate the traffic-routing rules into a single resource while running as a pod in Kubernetes to manage resources from inside the cluster without implementing an external load balancer.
Ingress implementations are typically made up of two components, the Ingress resource and the Ingress controller, which together maintain the desired state of exposed services in the Kubernetes cluster.
An Ingress controller is a component that watches Kubernetes Ingress resources and provisions one or more routing mechanisms depending upon the desired behavior.
It is essential to deploy an ingress controller, as it reads and processes the Ingress resource, which contains the routing rules that manage service access.
Ingress rules are typically defined in the resource specification and contain the information needed to process incoming traffic to services in the Kubernetes cluster.
To define an ingress rule, network administrators must configure the path and the backend service and port to route to. Once these are defined, the incoming request's host and path must match a rule for the ingress controller to route the traffic. By default, if no ingress rule matches, the ingress controller routes traffic to the default backend specified in the Ingress resource.
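A minimal sketch of such a rule, with a default backend as a catch-all; the hostnames, paths, and service names are illustrative:

```yaml
# Hypothetical Ingress: /api on example.com goes to the "api" Service;
# anything that matches no rule falls through to "default-http".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:        # used when no rule matches the request
    service:
      name: default-http
      port:
        number: 80
  rules:
  - host: example.com    # Host header the request must carry
    http:
      paths:
      - path: /api       # URI prefix the request must match
        pathType: Prefix
        backend:         # Service and port to route matching traffic to
          service:
            name: api
            port:
              number: 8080
```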
The Ingress resource mainly supports two routing methods. Path-based routing directs traffic to different services based on the request URI, while name-based virtual hosting uses the Host header to route traffic for multiple hostnames served at the same IP address.
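Name-based virtual hosting can be sketched as two host rules in a single Ingress (hostnames and service names are assumptions):

```yaml
# Hypothetical name-based virtual hosting: two hostnames share one
# ingress IP; the Host header decides which Service receives traffic.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-hosts
spec:
  rules:
  - host: app.example.com      # requests for this host go to "app"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
  - host: blog.example.com     # requests for this host go to "blog"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog
            port:
              number: 80
```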
Typically, an Ingress solution is an application that implements the mechanisms specified in the Ingress resource. Resource specifications can vary, which may lead different ingress solutions to require different ingress controller implementations.
So, choosing the right Ingress solution is important, as it will tightly integrate into the Kubernetes workflow and extend its traffic management capabilities to support custom use cases.
Typically, Ingress solutions are categorized into two types depending upon their nature of implementation. Solutions that perform ingress load balancing through pods by staying within the cluster are called in-cluster ingress solutions. In contrast, solutions that implement load balancing outside of the cluster through a cloud provider are called external ingress solutions, or cloud-based ingress controllers.
The advantage of using in-cluster ingress solutions is that they can be easily scaled with the Kubernetes environment, as they are defined as pods in the Kubernetes cluster. Also, in-cluster ingress solutions are not limited to a particular cloud provider. They are usually open source, making it easy to choose the ingress controller that suits the organization's specific load balancing and security needs.
Kubernetes as a project currently supports and maintains the GLBC (GCE L7 load balancer) and ingress-nginx controllers. Other ingress controllers must be installed separately before implementation.
The Kubernetes website maintains a list of most of these third-party in-cluster ingress solutions. Here are some popular ones.
Traefik was originally implemented as an HTTP reverse proxy and load balancer by Containous to mitigate dynamic routing issues in microservices architectures. Traefik is written in Go, which is why it supports various container infrastructures besides Kubernetes. Traefik supports HTTP/2, WebSocket, and Let's Encrypt certificates out of the box, which eases implementation for Ingress beginners.
Traefik UI is also seamlessly integrated with controller metrics, which helps with visualizing Kubernetes metrics.
Ingress-nginx is an open-source Ingress Controller officially supported by the Kubernetes community. Built on top of the NGINX reverse proxy solution, it is best suited for businesses that only want simple HTTP/S routing and basic SSL features.
Users looking for advanced features and performance should consider the NGINX Plus controller, which provides various authentication and tracing features.
NGINX & NGINX Plus Ingress Controller is the official Ingress controller from NGINX Inc., providing load balancing for both small businesses and enterprise clients.
NGINX is the free, open-source version; it does not include active health checks or JWT authentication (OpenID SSO), which are included in NGINX Plus. Both products, however, provide enough features at different scales to implement secure routing for services deployed on a Kubernetes cluster.
Kong was initially implemented as an API Gateway to process and route API requests. But the addition of several features such as native gRPC support, request/response authentication, and active health checks on load balancers has made it a stable Kubernetes ingress solution provider.
Unlike the other ingress solutions discussed above, Kong focuses on reducing privilege escalation by limiting the controller's functionality to a single namespace.
HAProxy, or High Availability Proxy, is a load balancing solution well known for its high performance. Although HAProxy supports various load balancing algorithms, configuring them requires a trained team to handle the customization of config files, updates, and functions.
Istio Ingress is a service mesh solution that can be implemented as an ingress controller to mediate all the outside traffic coming towards the cluster. Istio makes use of Envoy proxies, which are implemented as a sidecar for each exposed service to provide advanced traffic routing and observability features.
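When Istio fronts the cluster, inbound traffic is typically admitted through a Gateway resource. A minimal sketch, assuming a standard Istio install (the gateway name and hostname are illustrative):

```yaml
# Hypothetical Istio Gateway: admits plain-HTTP traffic for
# example.com through Istio's ingress gateway Pods.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway   # binds to Istio's ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"
```

A companion VirtualService would then map requests arriving through this Gateway onto in-mesh services.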
Istio, as an ingress solution, stays completely separate from services and inspects all traffic by intercepting it, implementing metrics, tracing of request/response headers, and JWT authentication.
Ambassador is one of the few Kubernetes-native API gateways that provide Kubernetes Ingress support through an L7 load balancer. Ambassador supports integration with various communication protocols, whether WebSockets, TCP/IP, gRPC, or HTTP/2. Standard API gateway features such as rate limiting and custom filters integrate easily with Ingress components and various service mesh solutions.
Overall, organizations looking for an Ingress-capable API gateway for routing traffic will be satisfied with the features and enterprise support offered by Datawire for the commercial version.
For an in-depth comparison of top ingress controllers, check out our blog on top ingress controllers.
Cloud-based Ingress solutions implement routing and load balancing algorithms from outside of the cluster. That helps provide native integration with cloud services while reducing the hassle of provisioning and managing a load balancer for your containerized applications.
Depending upon the ingress controller's feature set, the cloud provider handles all the operational ingress workflows for the organization. Many also offer advanced features to protect the Kubernetes application through a second layer of application load balancing.
For example, AWS Ingress Controller creates an Application Load Balancer by default, which seamlessly integrates with AWS cloud to provide load balancing to pods without providing access to nodes and proxy configs.
GCE and Azure also have their Identity-Aware Proxy and AKS Application Gateway Ingress Controller, respectively, which provide advanced modes of forwarding and protection for internal applications.
Almost all cloud-based ingress solutions support more than a basic traffic-forwarding mode, which helps eliminate potential load balancing bottlenecks in the public cloud through native integration.
But when you implement these solutions in a hybrid cloud, they become tougher to maintain, as each provider supplies a different solution. Also, every cloud provider caps the number of IPs an application load balancer can have, and exceeding those limits incurs IP charges, which can quickly rack up when operating at a large scale with lots of namespaces.
In these high-tenancy, multi-cloud scenarios, switching to an in-cluster ingress solution makes more sense. In-cluster solutions are not constrained by cloud provider-specific limits and are widely available with advanced feature sets.
Aside from avoiding vendor lock-in, in-cluster ingress solutions also take less time to create and update clusters with strict health checks and cross-namespace routing, which can be an issue with the AWS and GCE ingress controllers, as they require a new Ingress instance for every namespace.
We have just covered how the Kubernetes Ingress API exposes applications deployed in a Kubernetes cluster. Implementing Ingress requires configuring an Ingress controller, which is responsible for processing Ingress resource information and routing traffic. Also, the right ingress solution with an appropriate configuration is required to overcome routing problems along the way.
With the Ingress API graduating to GA, it now comes with a number of changes. The new pathType field specifies how an HTTP request path is matched for mapping, and IngressClass provides a way to specify which ingress controller handles external users' access to Kubernetes applications.
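The two additions can be sketched together; the class name, controller string, and path below are assumptions, with the controller value shown being the one conventionally used by ingress-nginx:

```yaml
# Hypothetical IngressClass naming the controller that should act
# on Ingresses referencing it.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Ingress selecting that class and using pathType for matching.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx   # picks which controller handles this Ingress
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix    # matches /app and any sub-path of it
        backend:
          service:
            name: web
            port:
              number: 80
```

`pathType` also supports `Exact` (the path must match verbatim) and `ImplementationSpecific` (matching is left to the controller).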
So, the choice of an ingress solution has become more involved. To begin routing, start with ingress-nginx, as it is one of the most reliable solutions to get started with. NGINX Inc. also offers different controllers with plenty of functionality depending upon the scale of the business. Traefik and Kong can be deployed as ingress controllers and provide advanced functionality with ease of use. Both also support cross-namespace routing.
Users looking for API gateway functionality such as rate limiting, beyond pure Kubernetes Ingress, should choose Ambassador or Istio, as they have the richest feature sets. API gateway-based ingress controllers are complex to set up, so keep in mind that an experienced team is required to configure and operate them. Many of the best features are offered only in paid versions, which matters if your business-critical applications need those advanced functionalities.
Another general consideration that should be taken into account before choosing an ingress solution is support for network communication protocols. Traditional ingress solutions offer support for TCP/UDP, but if your environment requires integration with multiple protocols such as gRPC or HTTP/2, you must dig deeper.
Enterprise support also plays a significant role if you are deploying an ingress solution at a commercial level. Having enterprise support ensures that solutions to implementation issues are available 24/7.