Comparing Kubernetes Container Network Interface (CNI) providers
Harshit Mehndiratta
April 02, 2021
9 minute read

Kubernetes, being a highly modular open-source project, provides a lot of flexibility in network implementation. Many projects have sprung up in the Kubernetes ecosystem to make communication between containers easy, consistent, and secure.

CNI, which stands for Container Network Interface, is one such project; it supports plugin-based functionality to simplify networking in Kubernetes. The main purpose of CNI is to give administrators enough control to monitor communication while reducing the overhead of generating network configurations manually.

With CNI, communication happens through an integrated plugin that aims to provide a consistent and reliable network across all your pods, while allowing Kubernetes vendors to implement custom networking models.

CNI plugins provide functionality such as namespace isolation, traffic shaping, and IP filtering, which the default Kubernetes kubenet plugin does not. Developers who want these advanced network features have to use a CNI plugin, which also makes the creation and administration of networks easier.

There are various CNI plugins available in the market, but in this blog we will discuss the most popular open-source ones: Flannel, Calico, Weave Net, Cilium, and Canal. Before we start with the list of different CNI plugins, let’s have a quick overview of CNI.

What is a Container Network Interface (CNI)?

CNI is a network framework that allows the dynamic configuration of networking resources through a specification and a set of libraries written in Go. The specification outlines an interface for plugins to configure the network, provision IP addresses, and maintain multi-host connectivity.

In the Kubernetes context, the CNI seamlessly integrates with the kubelet to allow automatic network configuration between pods using an underlay or overlay network. An underlay network is defined at the physical level of the networking layer composed of routers and switches. In contrast, the overlay network uses a virtual interface like VxLAN to encapsulate the network traffic.

Once the network configuration type is specified, the runtime defines a network for containers to join and calls the CNI plugin to add an interface into the container namespace and allocate the linked subnetwork and routes by making calls to the IPAM (IP Address Management) plugin.
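As a concrete sketch, a CNI network configuration is just a JSON file that the runtime hands to the plugin. The example below is illustrative (the network name and subnet are made up); it wires the standard `bridge` plugin to the `host-local` IPAM plugin:

```json
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The runtime invokes the plugin binary named in `type` with ADD/DEL commands for each container, and the `ipam` section delegates address and route allocation exactly as described above.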

In addition to Kubernetes networking, CNI also supports Kubernetes-based platforms like OpenShift to provide unified container communication across the cluster through a software-defined networking (SDN) approach.


Flannel

Developed by CoreOS, Flannel is one of the most mature open-source CNI projects available for Kubernetes. Flannel provides an easy-to-use network model that can be deployed to cover the essential Kubernetes network configuration and management use cases.

Flannel runs by configuring an overlay network that assigns a subnet to each Kubernetes cluster node for internal IP address allocation. Subnet leasing and management are done through a daemon agent called flanneld, packaged as a single binary for easy installation and configuration on Kubernetes clusters and distributions.

After assigning IP addresses, Flannel leverages either the Kubernetes API or an etcd cluster to store host mappings and other network-related configuration, and maintains communication between hosts/nodes through encapsulated packets.

By default, Flannel uses VXLAN for encapsulation and communication, but several other backends are available, such as host-gw and UDP. With Flannel, it is also possible to enable VXLAN-GBP for routing, which is required when several hosts are on the same network.
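The backend is selected in Flannel's `net-conf.json`, which the stock deployment manifests store in a ConfigMap. A minimal sketch, assuming the common 10.244.0.0/16 pod network (the CIDR is illustrative):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

Changing `Type` to `host-gw` drops encapsulation entirely and programs routes directly on each host, which trades portability (all hosts must share a layer-2 segment) for lower overhead.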

Flannel does not encrypt encapsulated traffic by default. Still, it supports IPsec, which can establish encrypted tunnels between the worker nodes of a Kubernetes cluster.

Flannel is a great CNI plugin for beginners who want to start their Kubernetes CNI journey from a cluster-admin perspective. Its simple networking model has few downsides until you need fine-grained control over traffic between hosts.


Pros

  • IPsec encryption support
  • Single binary installation and configuration


Cons

  • No support for network policies
  • Cannot run multiple networks through a single daemon (though running multiple daemons per host is possible)


Calico

Calico is another popular open-source CNI plugin available for the Kubernetes ecosystem. Maintained by Tigera, Calico is positioned for environments where factors like network performance, flexibility, and power are essential. Unlike Flannel, Calico offers advanced network administration and security capabilities while providing a holistic overview of connectivity between hosts and pods.

On a standard Kubernetes cluster, Calico can be easily deployed as a DaemonSet on each node. Each node in the cluster has three Calico components installed: Felix, BIRD, and confd, which manage several networking tasks. Felix, which works as the Calico agent, handles node routing, while BIRD and confd manage routing configuration changes.

For routing packets between nodes, Calico leverages the BGP routing protocol instead of an overlay network. An overlay networking mode is also available through IP-in-IP or VXLAN, which can encapsulate packets sent across subnets.
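Encapsulation in Calico is configured per IP pool. A sketch of an `IPPool` resource (the CIDR is illustrative) that uses IP-in-IP only when traffic crosses a subnet boundary and stays unencapsulated otherwise:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.48.0.0/16
  ipipMode: CrossSubnet   # encapsulate only across subnet boundaries
  vxlanMode: Never
  natOutgoing: true       # SNAT pod traffic leaving the cluster
```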

Calico's BGP mode uses an unencapsulated IP network fabric, which eliminates the need to wrap packets in an encapsulation layer, resulting in increased network performance for Kubernetes workloads. In-cluster pod traffic can be encrypted using WireGuard, which creates and manages tunnels between nodes to provide secure communication.

With Calico, tracing and debugging are much easier than with other tools, as there are no wrappers manipulating packets. Developers and administrators can easily understand packet behavior and use advanced network features like policy management and access control lists.

Network policies in Calico implement allow/deny rules, which can be applied through manifests to assign ingress policies to pods. Users can define globally scoped policies and integrate with the Istio service mesh to control pod traffic, improve security, and govern Kubernetes workloads.
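For illustration, a globally scoped policy might look like the following sketch (the `app` labels are hypothetical); it allows ingress to `backend` pods only from `frontend` pods:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: backend-ingress-from-frontend
spec:
  selector: app == 'backend'   # pods this policy applies to, cluster-wide
  types:
    - Ingress
  ingress:
    - action: Allow
      source:
        selector: app == 'frontend'
```

Because it is a `GlobalNetworkPolicy` rather than a namespaced policy, the selector matches pods in every namespace.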

Overall, Calico is an excellent choice for users who want control over their network components. Calico can be easily used with different Kubernetes platforms like kops and Kubespray, and offers commercial support through Calico Enterprise.


Pros

  • Support for Network Policies
  • High network performance
  • SCTP Support


Cons

  • No multicast support


Cilium

Cilium is an open-source, highly scalable Kubernetes CNI solution developed by Linux kernel developers. Cilium secures network connectivity between Kubernetes services by adding high-level application rules utilizing eBPF filtering technology. Cilium is deployed as a daemon, `cilium-agent`, on each node of the Kubernetes cluster to manage operations and translate network definitions into eBPF programs.

Communication between pods happens over an overlay network or by utilizing a routing protocol, with both IPv4 and IPv6 addresses supported in either case. The overlay implementation utilizes VXLAN tunneling for packet encapsulation, while native routing happens through the unencapsulated BGP protocol.

Cilium can be used with multiple Kubernetes clusters and can provide multi-CNI features, a high level of inspection, and pod-to-pod connectivity across all clusters.

Its awareness of both the network and application layers lets it inspect packets along with the application protocols they carry.

Cilium also supports Kubernetes network policies, extended with HTTP request filters. Policy configuration can be written as YAML or JSON and offers both ingress and egress enforcement. Admins can accept or reject requests based on the request method or path header, and can integrate policies with a service mesh like Istio.
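A sketch of such an L7 policy (the labels, port, and path are illustrative): it permits only `GET /public` requests from `frontend` pods to `backend` pods, rejecting all other HTTP traffic on that port:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public
spec:
  endpointSelector:
    matchLabels:
      app: backend        # pods the policy applies to
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend # allowed sources
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:         # L7 filter enforced by the eBPF datapath + proxy
              - method: GET
                path: "/public"
```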


Pros

  • Multiple Cluster Support
  • Can be used with other CNIs


Cons

  • May need to be paired with another CNI for BGP
  • Complicated to set up for multiple clusters


Weave Net

Developed by Weaveworks, Weave Net is a CNI-capable networking solution that allows flexible networking in Kubernetes clusters. Weave Net was initially developed for containers and later evolved into a Kubernetes network plugin. It can be easily installed and configured on a Kubernetes cluster as a DaemonSet that installs the necessary networking components on each node.

Weave Net works by creating a mesh overlay network that connects all the nodes in the cluster. Inside the network, Weave Net utilizes a kernel mechanism known as the fast datapath, which transmits packets straight to the destination pod without moving in and out of userspace multiple times.

If the fast datapath does not work or the packet has to travel to another host, Weave Net falls back to a slower protocol called sleeve for transmission. Functions like hostname resolution, load balancing, and fault tolerance are provided through Weave Net's DNS server, weaveDNS.

For packet encapsulation, Weave Net uses VXLAN, and for encryption it uses IPsec for fast datapath traffic and NaCl for sleeve traffic.

Weave Net does not use etcd for storing network configuration. The settings are persisted in a database file shared across the pods created by the DaemonSet.

As for network policy support, Weave Net uses the weave-npc container to manage Kubernetes network policies. The container is installed and configured by default and only requires network rules to secure communication between hosts.
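Since weave-npc enforces the standard Kubernetes `NetworkPolicy` API, rules are written as ordinary manifests. A sketch with illustrative labels and port, allowing `backend` pods to receive traffic only from `frontend` pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend       # pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Once any policy selects a pod, all other ingress to that pod is denied by default, so this single rule effectively isolates the backend.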


Pros

  • Kernel-Level Communication
  • Network Policy and Encryption Support
  • Offers paid support for troubleshooting issues


Cons

  • Only supports Linux due to kernel-based routing
  • Decreased network speeds due to the default encryption standard


Canal

Canal is a CNI provider that combines Flannel and Calico networking capabilities into a unified networking solution for Kubernetes clusters. Canal integrates Flannel's overlay networking layer and VXLAN encapsulation with Calico's networking components, such as the Felix host agent and network policies. Overall, Canal is a great choice for an organization that wants to leverage an overlay networking model with network policy rules for tighter security.


Pros

  • Network Policy Support with Flannel Overlay Network
  • Provides a Unified way to deploy Flannel and Calico


Cons

  • Limited deep integration between the two projects

Summary Matrix

                               Flannel     Calico             Cilium        Weave Net   Canal
Mode of deployment             DaemonSet   DaemonSet          DaemonSet     DaemonSet   DaemonSet
Encapsulation and routing      VxLAN       IPinIP, BGP, eBPF  VxLAN, eBPF   VxLAN       VxLAN
Support for network policies   No          Yes                Yes           Yes         Yes
Datastore used                 etcd        etcd               etcd          No          etcd
Encryption                     Yes         Yes                Yes           Yes         No
Ingress support                No          Yes                Yes           Yes         Yes
Enterprise support             No          Yes                No            Yes         No

Choosing a CNI Provider

No single CNI provider meets every project's needs. For easy setup and configuration, Flannel and Weave Net provide great capabilities. Calico is better for performance, since it uses an underlay network through BGP. Cilium utilizes a completely different application-layer filtering model through eBPF and is more geared toward enterprise security.

Also, it is not necessary to commit to a single provider, since operating needs can vary greatly from project to project. Using and testing multiple solutions will help satisfy complex networking requirements while providing a more reliable networking experience.