What is a container?
The word ‘container’ might make you immediately think ‘shipping containers’, and that’s actually not a bad analogy. A container brings together everything you need to run your application in a single lightweight package.
Unlike old-school virtual machines, containers don’t bundle a full guest operating system. Instead, they share the host’s kernel and run on top of a container runtime: software (such as containerd) that pulls container images and manages the lifecycle of running containers. Because everything your application needs ships inside the image, your apps become highly portable. A benefit of this portability is that it greatly speeds up development and release cycles.
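To make the "single lightweight package" idea concrete, here is a sketch of a Dockerfile for a hypothetical Python web app (the file names `app.py` and `requirements.txt` are placeholders, not from the article):

```dockerfile
# Start from a small base image that provides the language runtime.
FROM python:3.12-slim
WORKDIR /app

# Copy and install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image.
COPY . .

# The command the container runs when it starts.
CMD ["python", "app.py"]
```

Building this file produces a container image that carries the app and all of its dependencies, which is what makes it portable across machines and clouds.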
What are container registries?
Azure, AWS, and GCP all support container clusters. First, you build a container image: an immutable package containing your application code and its dependencies, from which containers are created. Then you push this image to a container registry, which allows you to secure access to your container images, manage different versions of images, and more.
Within Azure, this registry is simply called the Azure Container Registry.
On AWS, it’s called ECR (Elastic Container Registry).
In GCP, given that teams need to manage more than just containers, you’ll find a next-gen container registry called the GCP Artifact Registry. The Artifact Registry is used not only for container images, but also for language packages like Maven and NPM, and operating system packages like Debian.
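The build-tag-push workflow looks similar on all three platforms; only the registry hostname changes. A hedged sketch, where the image name `myapp`, the registry name, account ID, region, and project are all placeholders:

```shell
# Build the image locally from a Dockerfile.
docker build -t myapp:1.0 .

# Azure Container Registry ("myregistry" is a placeholder)
docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
docker push myregistry.azurecr.io/myapp:1.0

# Amazon ECR (account ID and region are placeholders)
docker tag myapp:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0

# GCP Artifact Registry (project "my-project" and repo "my-repo" are placeholders)
docker tag myapp:1.0 us-central1-docker.pkg.dev/my-project/my-repo/myapp:1.0
docker push us-central1-docker.pkg.dev/my-project/my-repo/myapp:1.0
```

Each push also requires authenticating your Docker client to the registry first (for example with `az acr login`, `aws ecr get-login-password`, or `gcloud auth configure-docker`).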
What are the limitations of standalone containers?
On all three cloud platforms, you can deploy your container images directly on virtual machine instances. This is known as Infrastructure-as-a-Service (IaaS) deployment. The downside is that this carries very high administrative overhead and forces you to deal with individual standalone containers, which is not ideal.
Standalone containers are not able to provide replication, auto-healing, auto-scaling, or load balancing, which are all must-haves in modern applications.
These drawbacks highlight exactly why you need a container cluster orchestrator like Kubernetes, which automates deployment, replication, scaling, and load balancing on container clusters.
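As an illustration of what the orchestrator adds, a Kubernetes Deployment plus Service covers exactly the gaps listed above. This is a minimal sketch; the image reference and names are placeholders:

```yaml
# Deployment: replication and auto-healing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                # keep three copies running; restart any that fail
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry.azurecr.io/myapp:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
# Service: load balancing across the Deployment's pods.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```

Auto-scaling can then be layered on with a HorizontalPodAutoscaler, without changing the containers themselves.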
All about Kubernetes
All three cloud platforms offer their own managed Kubernetes offerings, so if most of your applications run on one of the cloud platforms, stick to running Kubernetes on that same platform. This will offer you great integrations with the other services you’re already using on that particular cloud platform.
The pros and cons of Kubernetes on each cloud platform
In terms of naming conventions, Azure’s version of managed Kubernetes is called Azure Kubernetes Service (AKS). AWS calls theirs Elastic Kubernetes Service (EKS), while GCP – the birthplace of Kubernetes – has Google Kubernetes Engine (GKE).
Each cloud provider’s managed Kubernetes service offers its own distinct advantages.
Amazon’s EKS is the most widely used.
Azure’s AKS is arguably the most cost-effective option.
And then there’s Google’s GKE, which has the most features and automated capabilities of the three providers.
AKS and GKE are more automated than EKS: they will automatically handle security patches on the control plane and upgrade the nodes that make up your Kubernetes cluster. On EKS, component upgrades and node health repair require some manual steps.
AKS, EKS, and GKE all support virtual machine nodes with GPUs enabled, but only EKS also supports bare-metal machines as cluster nodes.
Command line support
AKS and GKE have complete command line support, but EKS’ command line support is much more limited.
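For a sense of what the command line experience looks like, here is how a basic cluster is created on each platform. All names, regions, and node counts are placeholders; note that on AWS the common route is `eksctl`, a separately developed CLI tool, rather than the core `aws` CLI:

```shell
# Azure: AKS cluster via the az CLI
az aks create --resource-group my-rg --name my-aks-cluster --node-count 3

# AWS: EKS cluster via eksctl
eksctl create cluster --name my-eks-cluster --region us-east-1 --nodes 3

# GCP: GKE cluster via the gcloud CLI
gcloud container clusters create my-gke-cluster --num-nodes 3
```

After creation, all three platforms let you fetch credentials so that the standard `kubectl` tool works against the cluster.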
EKS and GKE both offer an integrated service mesh for Kubernetes (App Mesh on EKS, Istio on GKE), but AKS does not yet offer an integrated service mesh for working with microservices.
AKS can support up to 500 nodes in a Kubernetes cluster, EKS up to 100, and GKE up to 5,000.
What if you want to be able to deploy and run containerized applications without managing infrastructure and creating clusters?
Well, you can do this using serverless containers.
* Microsoft was the first in the industry to offer serverless containers in the public cloud via Azure Container Instances, which run containers without using a Kubernetes cluster.
* On GCP, you can run containerized workloads in a serverless manner using Cloud Run, without needing an underlying Kubernetes cluster.
* AWS Fargate is a serverless offering on AWS that removes the overhead of scaling, patching, and managing servers on your container clusters. An important way it differs from Azure Container Instances and GCP Cloud Run is that Fargate abstracts away the overhead of using an orchestrated cluster, either EKS or ECS. It is not used without an underlying cluster.
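The deployment commands for these serverless options follow the same pattern. A hedged sketch with placeholder names, images, and regions (and with some required networking flags omitted for brevity on AWS):

```shell
# Azure Container Instances: run a single container, no cluster
az container create --resource-group my-rg --name myapp \
  --image myregistry.azurecr.io/myapp:1.0 --ports 8080

# GCP Cloud Run: deploy a container as a managed, auto-scaling service
gcloud run deploy myapp \
  --image us-central1-docker.pkg.dev/my-project/my-repo/myapp:1.0 \
  --region us-central1

# AWS Fargate: chosen as a launch type on an existing ECS cluster,
# illustrating that Fargate still sits on top of an orchestrated cluster
aws ecs create-service --cluster my-cluster --service-name myapp \
  --task-definition myapp:1 --desired-count 2 --launch-type FARGATE
```

Notice that only the Fargate command references a cluster, which is exactly the difference the bullet above describes.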
Hybrid multi-cloud offerings
Since Kubernetes can be deployed in on-premises data centers as well as on all cloud platforms, it offers a middle ground between IaaS and PaaS options. This allows you to work effectively in today’s hybrid, multi-cloud world.
Each cloud platform has its own offering to support hybrid, multi-cloud environments.
* On Azure, there’s Azure Arc, which goes beyond managing hybrid Kubernetes deployments. Azure Arc allows you to manage servers, Kubernetes clusters, Azure data services, and SQL servers hosted on resources outside the Azure platform.
* Amazon EKS Anywhere is a deployment option that allows customers to create and operate Kubernetes clusters on their own infrastructure, supported by AWS.
* GCP offers Anthos, which is built around a Kubernetes core: you run GKE on both your cloud machines and machines in your on-premises data center. Google then uses a single control plane to manage your applications consistently in this hybrid environment.
Author: Janani Ravi