While Docker adoption simplifies migrating workloads between environments, the resulting containers still need to be scaled and managed. This is where Kubernetes comes to the rescue. Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications so that they run reliably regardless of their computing environment. This way, Kubernetes promotes consistency of development stacks from a developer’s laptop to a test environment to production to the cloud. Organizations across industries are also discovering that Kubernetes is a robust environment for enabling workload portability across various underlying cloud infrastructures.
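That portability comes from Kubernetes’ declarative model: the same manifest can be applied unchanged on a laptop cluster, a test cluster, or a managed cloud cluster. A minimal sketch of a Deployment manifest, with a hypothetical application name and image (not drawn from any case study below):

```yaml
# Hypothetical Deployment: name, image, and replica count are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                        # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example/web-app:1.0   # same container image in every environment
        ports:
        - containerPort: 8080
```

Applying this manifest with `kubectl apply -f` tells the cluster the desired state, and Kubernetes continuously reconciles reality toward it, restarting or rescheduling containers as needed.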
Pokémon GO is an augmented reality app that combines real-world locations with gaming. It was developed and published by Niantic, Inc. To date, Pokémon GO has seen more than 500 million downloads and 20 million daily active users. What makes Pokémon GO special is its tracking and mapping technology, which creates an augmented reality in which players catch Pokémon characters at real locations.
Initially, Pokémon GO engineers never expected their user base to grow to 20+ million daily active users. They anticipated traffic exceeding their target volume, but never a scenario in which it would exceed that volume fivefold. Unprepared for that load, they faced challenges such as scaling problems, server crashes, and subpar app performance.
The answer was Kubernetes. Pokémon GO was deployed on Google Container Engine (GKE), which is powered by the open-source Kubernetes project. Niantic chose GKE for its ability to orchestrate the game’s container cluster at scale, freeing the team to focus on deploying live changes for players. That helped turn Pokémon GO into a game for millions of players that continuously evolves in real time on Google Cloud. It also gave Niantic more time to build the game’s application logic and new features rather than worry about scaling issues.
Read more about the case study on Google Cloud.
The Feltus lab in the Department of Genetics and Biochemistry at Clemson University is a team of scientists, engineers, and bioengineers who apply machine learning algorithms and computational techniques to make meaningful scientific discoveries in human and plant systems.
Using bioinformatics and statistical analysis techniques, the lab discovers patterns in biological datasets that run to terabytes and petabytes in size. Analyzing data at that scale requires massive computational power, which is mostly available in commercial clouds. So how does the Feltus lab manage to save on cloud budgets while running large-scale computing? This is where Kubernetes comes to the rescue.
Before moving to the cloud, researchers at Clemson University deployed an open on-premises Kubernetes cluster where workflows could be built and tested for scaling. Once a workflow has been validated on the Kubernetes cluster, the lab runs it across multiple commercial clouds. This approach has not only saved resources but also allowed the researchers to focus on their scientific experiments instead of worrying about resource management.
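Batch analysis workflows like these map naturally onto Kubernetes Jobs, which run a container to completion on whichever cluster the manifest is applied to, on-premises or in a commercial cloud. A hedged sketch, with a hypothetical image, command, and resource figures (not the lab’s actual workflow):

```yaml
# Hypothetical batch Job: image, command, and resource figures are
# illustrative, not the Feltus lab's actual configuration.
apiVersion: batch/v1
kind: Job
metadata:
  name: sequence-analysis
spec:
  completions: 1
  backoffLimit: 2                # retry a failed pod up to two times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: analysis
        image: example/bioinformatics-tool:1.0
        command: ["run-analysis", "--input", "/data/reads.fastq"]
        resources:
          requests:
            cpu: "4"
            memory: 16Gi         # large genomic datasets need generous requests
```

Because the Job spec is cluster-agnostic, the same manifest tested on the local cluster can be submitted to a cloud-hosted cluster without modification.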
Read more about the case study here.
Goldman Sachs employed more than 10,000 engineers when it announced it was migrating its computing resources to containers. The migration project officially started in 2015. Engineers at Goldman undertook their most challenging technology migration project yet: more than 5,000 applications to migrate, accounting for 90 percent of the company’s computational power.
The project started slowly and, with 5,000 applications to migrate, made little headway at first. Goldman revised its plans and focused on increasing efficiency by relying on container orchestration platforms such as Docker Swarm and Kubernetes. The engineers determined that their funding database needed on-demand access to large numbers of virtual servers, which Kubernetes could provision quickly when required. Without Kubernetes, the migration would have taken far longer to execute and weighed on the bank’s costs.
Finally, Goldman engineers revamped the infrastructure, deploying Swarm for smaller deployments and Kubernetes for large-scale ones. Looking at what Goldman Sachs has done with container orchestration technology such as Kubernetes and Docker Swarm, it is hard not to call the bank a technology company.
Read more about their case study.
With 250 million active users every month and over 10 billion recommendations every day, Pinterest’s numbers are growing day by day. With such growth comes the responsibility of managing a large underlying infrastructure and various services that require continuous scaling.
Pinterest started on the AWS cloud in 2010. But after six years of massive visitor growth, the company realized that the fastest way to serve over a billion recommendations per day was to adopt container orchestration such as Kubernetes, which lets engineers focus on developing the platform without worrying about scaling the infrastructure.
So in 2016, the company launched a migration to a new container orchestration platform, rolled out in stages. The first stage involved migrating to Docker. Once services had been migrated successfully to Docker containers, the company adopted container management platforms such as Kubernetes to manage the containers in a decentralized way.
Using Kubernetes, Pinterest implemented auto-scaling and new network policies, which simplified the overall deployment and management of a complicated infrastructure. Kubernetes also reduced build times and improved efficiency, reclaiming over 80 percent of capacity; the Jenkins Kubernetes cluster now consumes 30 percent fewer resources than its predecessor.
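Auto-scaling of the kind described here is typically expressed in Kubernetes as a HorizontalPodAutoscaler, which adds or removes pod replicas based on an observed metric. A minimal sketch with illustrative targets and limits (not Pinterest’s actual configuration):

```yaml
# Hypothetical HorizontalPodAutoscaler: replica bounds and CPU target
# are illustrative, not Pinterest's actual settings.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:                  # the Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add pods when average CPU exceeds 70%
```

Scaling down during quiet periods is what reclaims capacity: idle replicas are removed and their nodes are freed for other workloads.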
Read the whole case study on the Kubernetes website: Pinterest Case Study.
eBay resolves billions of queries and processes petabytes of information every day. It must handle that volume of data and traffic while ensuring a smooth customer experience on a secure, stable platform that is adaptable enough to enable innovation.
In the fall of 2018, the company reported it was in the middle of a three-year re-platforming plan to move off OpenStack, which accounted for 90 percent of eBay’s cloud infrastructure. eBay is re-platforming on Kubernetes, whose cluster architecture improves request handling and latency.
Inside eBay, Kubernetes runs at scale, with cluster configurations spanning hundreds of microservices on different nodes. The original intent behind adopting Kubernetes was to enhance the user experience and let engineers self-manage their code deployments as the project scaled up. There were also plans to redesign eBay’s servers around a decentralized data-center strategy.
Read more about their case study.