Differences running Kubernetes in Production vs Kubernetes in Development
Harshit Mehndiratta
June 30, 2020
4 minute read


Kubernetes (K8s) is an open-source container orchestration platform that has become a standard for many organizations. The functionality Kubernetes provides for managing and deploying containers has pushed modern enterprises toward microservices architectures that scale efficiently.

However, deploying Kubernetes in various environments is a different story altogether. It can be a simple process in testing environments, but an enterprise-grade production deployment requires a lot more effort and resources.

This is not an issue with Kubernetes alone but with containers and microservices architecture in general. Because development and production environments follow different procedures, differences arise between them that affect an organization’s reliability and performance.

So, in this post, we will share four of these differences that organizations need to account for in their container-based architecture to effectively utilize Kubernetes for their business needs.

Operational needs differ in a Kubernetes development environment compared to a Production environment

Kubernetes is great and is the future of application delivery, but it is quite complex to operate for enterprise workloads.

At the production level, Kubernetes controls how groups of containers running an application are deployed and scaled, and how they use network and storage. But once these clusters are deployed, it falls on operations teams to work out how pods are exposed to applications via routing, how healthy those pods are, how the environment is upgraded, and more. This requires additional experienced staff compared to Kubernetes running in dev environments, where teams have the flexibility to simply run clusters without that operational load.
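
For example, pod health in production is usually surfaced through liveness and readiness probes. The sketch below, written with the official Kubernetes Python client, shows roughly what such a probe definition could look like; the image name, port, and /healthz path are illustrative assumptions rather than anything from a real deployment.

```python
# A minimal sketch of attaching health probes to a container, using the
# official `kubernetes` Python client. Image name, port, and the /healthz
# path are illustrative assumptions.
from kubernetes import client

probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    initial_delay_seconds=5,   # give the application time to start
    period_seconds=10,         # check every 10 seconds
)

container = client.V1Container(
    name="web",
    image="example.com/web:1.0",   # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    liveness_probe=probe,          # restart the container if this fails
    readiness_probe=probe,         # remove the pod from Service endpoints if this fails
)
```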

In a dev/test environment, you might not need automation for deploying clusters. But in production, clusters must be installed using an automation workflow, as automation ensures consistency in repeatable tasks and assists in cluster recovery when required.
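
As one small illustration, an automation workflow typically ends with a verification step before handing the cluster over. The sketch below, assuming the official Python client and an arbitrary 300-second timeout, polls the API server until every node reports Ready.

```python
# A minimal sketch of a post-install verification step in an automated
# workflow: poll the API server until every node reports Ready.
# The 300-second timeout is an arbitrary assumption for illustration.
import time
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

deadline = time.time() + 300
while time.time() < deadline:
    nodes = v1.list_node().items
    ready = [
        n for n in nodes
        if any(c.type == "Ready" and c.status == "True" for c in n.status.conditions)
    ]
    if nodes and len(ready) == len(nodes):
        print(f"All {len(nodes)} nodes are Ready")
        break
    time.sleep(10)
else:
    raise RuntimeError("Cluster did not become Ready within the timeout")
```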

Versioning is also an important operational need for production environments compared to development environments. Version-controlling your production deployment configurations and policies increases the efficiency of the CI/CD pipeline, so that teams can collaborate easily on their application configuration and version their code changes.
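
As a rough sketch of what this looks like in a pipeline, the snippet below reads a Deployment manifest that lives in version control and applies it with the official Python client; the file path and the "production" namespace are hypothetical.

```python
# A minimal sketch of a CI step that applies a version-controlled manifest.
# The file path and namespace are hypothetical; assumes the manifest is a
# single Deployment checked into the same repository as the application.
import yaml
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
apps = client.AppsV1Api()

with open("deploy/web-deployment.yaml") as f:   # tracked in Git
    manifest = yaml.safe_load(f)

name = manifest["metadata"]["name"]
try:
    # Update the existing Deployment to match the committed configuration.
    apps.replace_namespaced_deployment(name, "production", manifest)
except ApiException as e:
    if e.status == 404:
        # First rollout: the Deployment does not exist yet.
        apps.create_namespaced_deployment("production", manifest)
    else:
        raise
```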

Kubernetes clusters running in production require more security and reliability than Kubernetes clusters in Development

Security is a critical part of Kubernetes in production and needs to be considered and designed from the very start of cluster deployment.

In the dev/test environment, it is okay to leave some network connections open so that service-to-service communication is easy. But that is not a legitimate strategy when you move into production, where a larger attack surface presents far more significant business risks.

It is vital to protect clusters in production using mechanisms such as mutual TLS or a service mesh. These provide an additional layer of security that makes service-to-service communication in production safer, yet still fast and reliable, by limiting interactions to trusted services only.
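
To make the idea concrete, the sketch below shows what a mutually authenticated call between two services might look like at the application level, using Python's requests library; the hostname and certificate paths are illustrative assumptions, and in practice a service mesh such as Istio or Linkerd would typically inject and rotate these certificates transparently.

```python
# A minimal sketch of mutual TLS at the application level: the client
# presents its own certificate and verifies the server against a private CA.
# Hostname and certificate paths are illustrative assumptions; a service
# mesh would normally handle these certificates automatically.
import requests

response = requests.get(
    "https://orders.internal.svc:8443/api/v1/orders",
    cert=("/etc/tls/client.crt", "/etc/tls/client.key"),  # client identity
    verify="/etc/tls/ca.crt",                             # trusted internal CA
    timeout=5,
)
response.raise_for_status()
print(response.json())
```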

Container image vulnerabilities also surface much more quickly in production environments than in development environments. Using a certified base image is the recommended practice here. Certified base images provide the additional level of security you need in production and go a step further by limiting the attack surface of the production environment.

A private registry server is another crucial piece of the workflow in production environments for security and reliability. It enables image management, signing, security scanning, and LDAP integration, so that the right images are deployed into the right containers and their safety is built into the process as well.
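
For example, pulling from a private registry usually involves registry credentials stored as a Kubernetes Secret that the pod spec references. The sketch below shows one hedged way to wire this up with the official Python client; the registry URL, credentials, secret name, and namespace are all hypothetical.

```python
# A minimal sketch of wiring a private registry into a pod spec:
# a dockerconfigjson Secret holds the registry credentials, and the pod
# references it via imagePullSecrets. Registry URL, credentials, secret
# name, and namespace are hypothetical.
import base64, json
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

docker_config = {
    "auths": {
        "registry.example.com": {
            "auth": base64.b64encode(b"ci-user:ci-token").decode()
        }
    }
}

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="private-registry"),
    type="kubernetes.io/dockerconfigjson",
    string_data={".dockerconfigjson": json.dumps(docker_config)},
)
core.create_namespaced_secret("production", secret)

# In the pod spec, reference the secret so the kubelet can pull the image.
pod_spec = client.V1PodSpec(
    containers=[client.V1Container(name="web", image="registry.example.com/web:1.0")],
    image_pull_secrets=[client.V1LocalObjectReference(name="private-registry")],
)
```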

Scaling requires a lot more work in Production as compared to Development

Kubernetes in production usually scales up to hundreds of pods and typically requires highly available Kubernetes clusters for faster data access. To make these clusters highly available, load balancers are required to route traffic properly. A load-balancer implementation is not available by default in the open-source Kubernetes project, so you need to integrate additional products and plugins, which requires extra work.
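
As a sketch of that extra wiring, the snippet below defines a Service of type LoadBalancer with the official Python client; on plain open-source Kubernetes this stays pending until something like a cloud controller or MetalLB is installed to actually provision the balancer. The service name, labels, and ports are illustrative assumptions.

```python
# A minimal sketch of exposing pods through a LoadBalancer Service.
# On plain open-source Kubernetes the external IP stays <pending> until a
# cloud controller or an add-on such as MetalLB provisions the balancer.
# Service name, labels, and ports are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-lb"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},   # pods that receive the routed traffic
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service("production", service)
```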

Scaling in production is entirely different. The amount of data produced is far larger and requires continuous monitoring, compared to development environments, where the resources required for Kubernetes components can be determined easily.

In dev/test, it can be easy to skip some basics, such as specifying the right resource requests and limits, but failing to do so in production can lead to traffic overload and server failures.
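
A hedged example of what those requests and limits look like on a container, again with the Python client; the specific CPU and memory values are arbitrary placeholders that would need to be tuned per workload.

```python
# A minimal sketch of setting resource requests and limits on a container.
# The CPU/memory values are arbitrary placeholders, not recommendations.
from kubernetes import client

container = client.V1Container(
    name="web",
    image="example.com/web:1.0",   # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # what the scheduler reserves
        limits={"cpu": "500m", "memory": "512Mi"},    # hard ceiling enforced at runtime
    ),
)
```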

It is also important to make sure that primary services and alerting systems in production are spread across all of the cluster’s nodes, so that no data is lost when scaling up or down.
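
One way to express that spread, shown as a sketch below, is a topology spread constraint that keeps replicas balanced across nodes; the labels, image, and skew value are illustrative assumptions.

```python
# A minimal sketch of spreading replicas evenly across nodes using a
# topology spread constraint. Labels, image, and max_skew are illustrative.
from kubernetes import client

spread = client.V1TopologySpreadConstraint(
    max_skew=1,                             # allow at most one replica of imbalance
    topology_key="kubernetes.io/hostname",  # spread across individual nodes
    when_unsatisfiable="ScheduleAnyway",    # prefer spreading, do not block scheduling
    label_selector=client.V1LabelSelector(match_labels={"app": "alerting"}),
)

pod_spec = client.V1PodSpec(
    containers=[client.V1Container(name="alerting", image="example.com/alerting:1.0")],
    topology_spread_constraints=[spread],
)
```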

Deployment in Production is different from deployment in Test environments

Kubernetes aims to give teams a consistent workflow and feature set, but the same cannot be said for the environment itself. Kubernetes can be implemented differently across environments, which can easily produce differences between deployments.

Deploying in production requires access to services such as certificate management and credential handling, and these needs differ significantly from those of a development setup, so no local deployment will fully match the needs of production.

All K8s deployments done locally in test environments differ significantly from cloud deployments because they cannot account for essential services such as firewalls and load balancers, which are configured differently according to customer traffic.

Final Thoughts

Kubernetes is designed to make cluster deployment effective and easy, but using Kubernetes in production for mission-critical applications requires a lot more than deploying a cluster in a testing environment, as we have seen above. For maximum reliability and performance across environments, proper planning and clearly defined controls are needed to reduce the differences between development and production environments in Kubernetes.
