Using Kubernetes in production typically requires integration with various cloud services and resources to deliver the desired solution. Integrating with these cloud services and vendors requires creating custom resource definitions and controllers, which increases operational overhead and management burden.
Companies like Google, which offer native integration of cloud services with Google Kubernetes Engine (GKE), have drastically reduced the time to launch applications or roll out updates, allowing many organizations to deploy and operate applications quickly without learning the technical details of Kubernetes components.
GKE was a great initiative by Google at a time when no competitor offered native cloud integration with Kubernetes. Now, however, AWS has made a strategic move with the launch of AWS Controllers for Kubernetes (ACK). ACK and GKE are both focused on services running within their own clusters and clouds. One thing that still differs between the platforms is how workloads running in these clusters interoperate with each vendor's cloud-native services.
Compared to GKE, which offers native integration with Google Cloud, ACK makes the most sense for users who want to manage AWS services directly from Kubernetes. ACK helps build scalable and highly available Kubernetes applications without defining resources outside the cluster to provide database or message queue capabilities.
ACK is currently available as a developer preview. Before testing it yourself, read through this blog post for a brief overview of ACK's features and how you can start using ACK in your containerized workflows.
ACK got its start in 2018, when Amazon introduced the AWS Service Operator (ASO) as an experimental project. Feedback from the community prompted AWS to relaunch it as a first-tier open-source project, renaming ASO to ACK (AWS Controllers for Kubernetes) and adding a few updates.
As a project, ACK follows a governance model that prioritizes production usage, with full test coverage including performance and scalability. ACK is a single code base that exposes AWS services via Kubernetes operators.
In ACK, AWS cloud resources are managed directly through AWS APIs instead of CloudFormation, allowing Kubernetes to be the single control plane for managing all resources and their desired state. Custom controller code and resource definitions are also generated automatically from the AWS SDK, reducing manual work and keeping the project up to date with the latest features.
AWS Controllers for Kubernetes makes it easier to enhance Kubernetes workloads with AWS cloud services by providing vendor-managed integration points for companies that rely on Kubernetes. Companies can describe their applications and the AWS-managed services on which those applications rely in a single standard format.
ACK also simplifies the deployment and configuration of AWS services, helping developers who want to speed up releases and manage all resources from a single deployment.
The artifacts for managing AWS services from Kubernetes (binaries, container images, and Helm charts) are produced using a multi-phased approach that results in hybrid, custom Kubernetes-style controllers.
First, each service's API is exposed as code objects and interfaces through the AWS SDK. AWS SDKs are updated regularly with API changes and closely track service API availability.
After the Kubernetes API types for the resources are generated, an interface is implemented that allows the resource type definitions to be used by the Kubernetes runtime packages.
Next, custom resource definition (CRD) configuration files are produced for each resource, which leads to the generation of the ACK controller for a particular service. Along with the controller implementation, these steps also output the Kubernetes manifests needed to deploy it.
Lastly, Kubernetes manifests for the Kubernetes Role under which each ACK service controller runs are generated following the principle of least privilege; each Role is granted only the permissions needed to manage the custom resources its service controller owns.
Installing ACK in a cluster only requires installing the desired AWS service controller(s) and setting the corresponding Kubernetes Role-Based Access Control (RBAC) permissions for the ACK custom resources.
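As an illustration, an RBAC Role granting a namespace's users access to ACK's S3 bucket resources might look like the following sketch. The API group and resource names reflect ACK's developer preview (`s3.services.k8s.aws`) and may change as the project evolves:

```yaml
# Hypothetical Role allowing users in the "prod" namespace to manage
# ACK-managed S3 buckets. Group/resource names are assumptions based on
# the ACK developer preview; check the installed CRDs in your cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ack-s3-bucket-editor
  namespace: prod
rules:
  - apiGroups: ["s3.services.k8s.aws"]
    resources: ["buckets"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```

Binding this Role to a team's group or service account scopes their access to ACK resources without granting them broader cluster permissions.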
Once each ACK custom resource is defined, an ACK service controller, which runs in its own Pod, can be used to enforce existing IAM controls, including permissions boundaries and service control policies, while Kubernetes RBAC defines which resources can be accessed.
Also, an AWS account ID is associated with a Kubernetes namespace, which means every ACK custom resource is namespaced; there are no cluster-wide custom resources. Creating a namespaced custom resource in one of your ACK clusters requires defining a YAML file that contains the specification for the particular service API.
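A minimal sketch of such a namespaced manifest, assuming the developer-preview S3 controller and its `v1alpha1` API group (names here are illustrative):

```yaml
# Sketch of an ACK custom resource describing an S3 bucket.
# The bucket is created in the AWS account associated with this namespace.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-ack-bucket
  namespace: prod
spec:
  name: my-ack-bucket   # the actual S3 bucket name in AWS
```

Applying this manifest with `kubectl apply -f bucket.yaml` asks the S3 service controller to reconcile the bucket into existence, just like any other Kubernetes resource.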
ACK is a collection of custom service controllers, managed by cluster admins, that create or delete AWS resources. Each controller manages and represents a single AWS service's API resources and can be used simply by installing that controller.
An ACK-enabled cluster also allows users to fully utilize the Kubernetes API to deploy both their containerized applications and Kubernetes resources, such as Deployments and Services, as well as any AWS-managed services on which their applications and packages depend.
At the time of writing, ACK has custom controllers for six AWS services: S3, API Gateway v2, DynamoDB, ECR (Elastic Container Registry), SNS, and SQS. All of these service controllers provide creation and management of the corresponding AWS resources from Kubernetes.
Starting with Amazon S3: the AWS Simple Storage Service (S3) controller manages custom resources that represent S3 buckets. Designed for performance, S3 provides object storage well suited to Kubernetes environments with stringent security requirements and mission-critical applications across a diverse range of workloads.
S3 brings web-scale storage techniques to Kubernetes: storage for a single Kubernetes cluster can be combined with other clusters under a global namespace, spanning multiple data centers if needed. S3 is also native to cloud technologies and architectures, which means multi-tenancy, storage for containerized applications, and microservices all fit Kubernetes-friendly patterns.
The Amazon S3 API is a de facto standard in the object storage market, and many organizations and startups implement compatible storage solutions. Kubernetes users who want to avoid custom deployment and modification work can easily make use of Kubernetes-native storage services that support the S3 API.
Turning to AWS API Gateway: API Gateway is a fully managed service that helps developers manage, secure, and publish APIs at scale. From Kubernetes, the API Gateway controller works as an access point through which containerized applications reach data from backend services.
Using the ACK API Gateway controller from Kubernetes, developers can create RESTful APIs that provide synchronous communication between containerized applications. The API Gateway controller in ACK natively supports container architectures and also provides compatibility with serverless web applications.
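A sketch of what an API Gateway v2 custom resource might look like under the preview controller; the `API` kind and field names below are assumptions based on the generated `apigatewayv2` CRDs:

```yaml
# Hypothetical ACK resource describing an HTTP API in API Gateway v2.
apiVersion: apigatewayv2.services.k8s.aws/v1alpha1
kind: API
metadata:
  name: orders-api
  namespace: prod
spec:
  name: orders-api
  protocolType: HTTP   # API Gateway v2 supports HTTP and WEBSOCket protocol types
```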
The Amazon DynamoDB controller in ACK manages DynamoDB, a key-value and document database that delivers fast response times at any scale. DynamoDB is a managed, multi-AZ database that provides backup and restore and in-memory caching for Kubernetes applications. The DynamoDB controller in ACK can be used to process records in an Amazon DynamoDB stream and to store and retrieve logging records for Kubernetes clusters.
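A table for storing such logging records could be declared roughly as follows; the `Table` spec fields mirror DynamoDB's `CreateTable` API, but the exact names are assumptions against the preview CRDs:

```yaml
# Hypothetical ACK resource describing a DynamoDB table for cluster logs.
apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: cluster-logs
  namespace: prod
spec:
  tableName: cluster-logs
  attributeDefinitions:
    - attributeName: logID
      attributeType: S        # string attribute
  keySchema:
    - attributeName: logID
      keyType: HASH           # partition key
  billingMode: PAY_PER_REQUEST
```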
DynamoDB can also serve as the storage backend for a high-availability Vault service running in Kubernetes, which applications in the cluster can use to concurrently access and store secrets using different secret engines and authentication methods. Applications backing Vault with DynamoDB in HA mode can also leverage Vault's reliable encryption, reducing the need to encrypt data before storing it.
AWS Elastic Container Registry (ECR) is a fully managed container registry that makes it easy for developers to store, manage, and deploy container images. Kubernetes users who adopt the ACK ECR controller can reliably deploy and maintain containers. ECR's integration with AWS Identity and Access Management (IAM) ensures resource-level control over each repository.
The ACK Elastic Container Registry controller also integrates well with Amazon ECS (Elastic Container Service), allowing you to simplify your development and production workflows. You can easily push containerized applications to AWS ECR and pull them directly for ACK-based cluster deployments.
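Declaring a repository through the ECR controller might look like the following sketch; the `Repository` kind and its spec fields are assumptions here, so consult the CRDs generated by the installed controller for the exact schema:

```yaml
# Hypothetical ACK resource describing an ECR repository.
apiVersion: ecr.services.k8s.aws/v1alpha1
kind: Repository
metadata:
  name: my-app
  namespace: prod
spec:
  name: my-app                 # repository name in ECR (field name may differ)
  imageScanningConfiguration:
    scanOnPush: true           # scan images for vulnerabilities on push
```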
The AWS SNS controller in ACK enables the transmission of messages and notifications to users. SNS uses a topic-based architecture, in which supported AWS services managed from Kubernetes subscribe to a topic to receive cluster-wide notifications and messages. SNS provides redundancy across multiple SMS providers and allows you to push notifications to AWS services through a single endpoint.
Amazon SNS uses cross-availability-zone message storage to provide high message durability. Running on Amazon's massive infrastructure and data centers, SNS topics can operate at large scale, and messages are stored across geographically separated servers for redundancy. If an endpoint is unavailable, SNS automatically retries delivery and can move messages to a dead-letter queue, which helps it handle large amounts of traffic.
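A topic for cluster-wide alerts could be declared with a minimal manifest like this; the `Topic` kind and `spec.name` field are assumptions based on the preview `sns` controller:

```yaml
# Hypothetical ACK resource describing an SNS topic for cluster alerts.
apiVersion: sns.services.k8s.aws/v1alpha1
kind: Topic
metadata:
  name: cluster-alerts
  namespace: prod
spec:
  name: cluster-alerts
```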
The AWS Simple Queue Service (SQS) custom controller in ACK provides a managed message queuing and notification service that enables easy scaling and communication for microservices and containerized applications. SQS eliminates the hassle associated with managing message middleware, helping developers focus more on the application's logic.
In an ACK-enabled cluster, SQS can send and receive messages between Kubernetes components at any volume without losing messages or requiring other services to be available. SQS offers two types of message queues: Standard and FIFO (First-In, First-Out). Standard queues offer maximum throughput and at-least-once delivery, whereas FIFO queues are designed to guarantee that messages are processed in the order they were sent.
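The queue type could be selected directly in the custom resource, along the lines of the sketch below; the `Queue` kind and its spec field names are assumptions against the preview `sqs` controller's generated CRD:

```yaml
# Hypothetical ACK resource describing a FIFO queue for order processing.
apiVersion: sqs.services.k8s.aws/v1alpha1
kind: Queue
metadata:
  name: orders-queue
  namespace: prod
spec:
  queueName: orders-queue.fifo  # FIFO queue names end in ".fifo"
  fifoQueue: true               # preserve strict message ordering
```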
| AWS Service | Release Phase | Controller GitHub Repository |
| --- | --- | --- |
| Amazon API Gateway | Available to developers for testing and feedback | apigatewayv2 |
| Amazon DynamoDB | Available to developers for testing and feedback | dynamodb |
| Amazon ECR | Available to developers for testing and feedback | ecr |
| Amazon S3 | Available to developers for testing and feedback | s3 |
| Amazon SQS | In software testing phase by AWS | sqs |
| Amazon SNS | Available to developers for testing and feedback | sns |
We can expect ACK to support as many AWS services as possible. AWS is planning to add ACK support for Amazon Relational Database Service (RDS) and Amazon ElastiCache, and possibly for Amazon Elastic Kubernetes Service (EKS), which would extend the Kubernetes functionality even further. Essential features such as cross-account resource management and native application secrets integration are expected to be available within a few weeks.