This document contains an extensive, though not exhaustive, comparison of four of the most popular managed Kubernetes offerings: Oracle Container Engine for Kubernetes (OKE), Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). The comparison was developed with input from the community and will continue to be revised as the technology changes.
Last modified August 1, 2023. This document is updated regularly to keep the information consistent and relevant. Should you have any questions or wish to contribute feedback, you can reach us any time on Slack!
Feature | OKE | EKS | AKS | GKE |
---|---|---|---|---|
Supported Kubernetes version(s) (note) |
Container Engine for Kubernetes supports three versions of Kubernetes for new clusters. For a minimum of 30 days after the announcement of support for a new Kubernetes version, Container Engine for Kubernetes continues to support the fourth, oldest available Kubernetes version. After that time, the older Kubernetes version ceases to be supported. (source) |
EKS is committed to supporting at least four production-ready versions of Kubernetes at any given time. We will announce the end of support date of a given Kubernetes minor version at least 60 days before the end of support date. Because of the Amazon EKS qualification and release process for new Kubernetes versions, the end of support date of a Kubernetes version on Amazon EKS will be on or after the date that the Kubernetes project stops supporting the version upstream. (source) |
AKS defines a generally available (GA) version as a version that is available in all regions and covered by all SLO and SLA measurements. AKS supports three GA minor versions of Kubernetes.
(source) |
Based on the current Kubernetes OSS community version support policy, GKE plans to maintain supported minor versions for 14 months, including the 12 months after the release in the Regular channel, followed by a 2-month maintenance period. During the maintenance period, no new node pool creations will be allowed for a maintenance version, but existing node pools that run a maintenance version will continue to remain in operation. (source) |
>=3 + 1 deprecated |
3 |
4 |
||
May 2018 |
June 2018 |
June 2018 |
August 2015 |
|
$0.10/hour (USD) per cluster + standard costs of Compute instances and other resources. A Basic Cluster option is available for free |
$0.10/hour (USD) per cluster + standard costs of EC2 instances and other resources |
$0.10/hour (USD) per cluster + standard costs of node VMs and other resources. A free tier is available for exploration. |
$0.10/hour (USD) per cluster + standard costs of GCE machines and other resources |
|
Yes |
Yes |
Yes |
||
Full support of Kubernetes clusters; kubectl support |
Full support of Kubernetes clusters; kubectl support |
Full support of Kubernetes clusters; kubectl support |
Full support of Kubernetes clusters; kubectl support |
|
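All four CLIs can generate the kubeconfig that kubectl needs; a minimal sketch, where the cluster names, regions, resource group, and OCID are placeholders:

```shell
# OKE: write a kubeconfig for an existing cluster (cluster OCID is a placeholder)
oci ce cluster create-kubeconfig --cluster-id <cluster-ocid> --file $HOME/.kube/config

# EKS
aws eks update-kubeconfig --name my-cluster --region us-east-1

# AKS
az aks get-credentials --resource-group my-rg --name my-cluster

# GKE
gcloud container clusters get-credentials my-cluster --region us-east1

# verify access against whichever cluster is now the current context
kubectl get nodes
```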
User-initiated; all system components update with the cluster upgrade |
User-initiated; the user must also manually update the system services that run on nodes (e.g., kube-proxy, CoreDNS, AWS VPC CNI) |
All system components update with cluster upgrade |
Automatically upgraded by default; can be user-initiated |
|
User-initiated |
|
Automatically upgraded or user-initiated; AKS will drain and replace nodes |
Automatically upgraded during cluster maintenance window (default; can be turned off); can be user-initiated; drains and replaces nodes |
|
Supported Images for worker nodes (source)
|
Linux:
Windows: |
Linux:
Windows:
|
Linux:
Windows: |
|
containerd (default for Kubernetes >= 1.24) |
||||
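Because the container runtime is a node property, it can be confirmed the same way on any of the four services:

```shell
# the CONTAINER-RUNTIME column reports e.g. containerd://1.6.x
kubectl get nodes -o wide

# or read it directly from each node's status
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```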
Control plane is deployed to multiple, Oracle-managed control plane nodes which are distributed across different availability domains (where supported) or different fault domains. |
Control plane is deployed across multiple Availability Zones (default) |
Control plane components will be spread between the number of zones defined by the Admin |
Zonal Clusters: Regional Clusters: |
|
|
Guarantees 99.95% uptime |
Offers 99.95% when availability zones are enabled, and 99.9% when disabled |
GKE offers 99.5% uptime for zonal clusters and 99.95% for regional clusters. |
|
Yes (NVIDIA); By selecting a compatible Oracle Linux GPU image, CUDA libraries are pre-installed. CUDA libraries for different GPUs do not have to be included in the application container. |
Yes (NVIDIA); user must install device plugin in cluster |
Yes (NVIDIA); user must install device plugin in cluster |
Yes (NVIDIA); user must install device plugin in cluster. Compute Engine A2 VMs are also available |
|
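On the services that require it, the NVIDIA device plugin is typically installed as a DaemonSet; the manifest URL and version tag below are illustrative, so check the NVIDIA k8s-device-plugin releases for the current one:

```shell
# install the NVIDIA device plugin (version tag is an example)
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.1/nvidia-device-plugin.yml

# confirm GPUs are advertised as allocatable node resources
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.nvidia\.com/gpu}{"\n"}{end}'
```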
|
|
|
|
|
|
CloudWatch Container Insights. Also supported:
|
Azure Monitor. Also supported:
|
Google Cloud’s operations suite (formerly Stackdriver), which includes Kubernetes Engine monitoring. Also supported:
|
|
Self-healing: automatically provisions new worker nodes on failure to maintain cluster availability. Detect-and-repair capabilities also exist within the autoscaling functionality. |
Container Insights metrics detect failed nodes; a replacement can be triggered manually, or autoscaling can replace the node. |
Auto-repair and node status monitoring are now available. Autoscaling rules can be used to shift workloads. |
Worker node auto-repair enabled by default |
|
“CA should handle up to 1000 nodes running 30 pods each. Our testing procedure is described here.” (source) |
Cluster Autoscaler through:
|
Cluster autoscaler native capabilities |
||
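Enabling the cluster autoscaler is a managed-service flag rather than a manual deployment; sketches for GKE and AKS, with placeholder cluster, pool, and resource-group names:

```shell
# GKE: autoscale an existing node pool between 1 and 10 nodes
gcloud container clusters update my-cluster --region us-east1 \
  --node-pool default-pool \
  --enable-autoscaling --min-nodes 1 --max-nodes 10

# AKS: same idea, configured at the node-pool level
az aks nodepool update --resource-group my-rg --cluster-name my-cluster \
  --name nodepool1 --enable-cluster-autoscaler --min-count 1 --max-count 10
```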
Virtual Nodes and Virtual Node pools provide a serverless Kubernetes experience with per-Pod billing. |
AWS Fargate removes the need to provision and manage servers. |
AKS Virtual Nodes are heavily dependent on Azure Container Instances’ feature set. |
GKE Autopilot provides a serverless Kubernetes experience. |
|
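As an example of how little provisioning these serverless modes require, each of the commands below stands up serverless capacity in one step (all names are placeholders):

```shell
# GKE Autopilot: a cluster with no node pools to manage
gcloud container clusters create-auto my-autopilot-cluster --region us-east1

# EKS Fargate: pods in the matched namespace are scheduled onto Fargate
eksctl create fargateprofile --cluster my-cluster --name fp-default --namespace default
```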
OCI has multiple realms: a commercial realm (with 44 regions) and several Government Cloud realms, including the FedRAMP-authorized and IL5-authorized US Government Cloud and the United Kingdom Government Cloud |
30 regions containing 96 Availability Zones. Service availability may vary by region. EKS is available on AWS GovCloud regions. AWS Fargate is NOT available in GovCloud regions. |
Available in 57 of Azure’s 60 regions. Not all regions include availability zones. Available in GovCloud |
Available in 35 regions; No GovCloud support |
|
Supports |
Does not support |
Does not support |
||
Flex shapes, x86, ARM, HPC, GPU, clusters with mixed node types. (source) |
x86, ARM (Graviton), GPU. Specific node images are required for various CPU/GPU combinations. (source) |
x86, ARM, GPU. Minimum 2 vCPU per worker node. Provisioned node size cannot be changed without replacement. |
x86, ARM*, GPU. *ARM is supported only on v1.24 or later, and only in 3 regions:
|
|
EKS supports use of Spot Instances within managed node groups.
|
Azure Spot node pools can be used. |
|||
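Spot capacity is requested per node group or node pool; a sketch with placeholder names, using the documented eksctl and Azure CLI flags:

```shell
# EKS: managed node group backed by Spot Instances
eksctl create nodegroup --cluster my-cluster --name spot-ng \
  --spot --instance-types m5.large,m5a.large

# AKS: Spot node pool; a max price of -1 caps it at the on-demand price
az aks nodepool add --resource-group my-rg --cluster-name my-cluster \
  --name spotpool --priority Spot --eviction-policy Delete --spot-max-price -1
```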
Numerous add-ons available, though documentation is limited. |
Quick reference
Service/Provider | OKE | EKS | AKS | GKE |
---|---|---|---|---|
15 clusters/region (Monthly Universal Credits) or 1 cluster/region (Pay-as-You-Go or Promo) by default. Limits can be increased through a service request |
5000 per subscription |
100 zonal clusters per zone, plus 100 regional clusters per region |
||
|
450 nodes/node group * 30 node groups/cluster = 13,500 nodes/cluster |
|
(v1.18 and newer) | |
Managed node groups: 100 |
||||
No limit on number of node pools as long as total nodes per cluster does not exceed 1,000 |
Managed node groups: 30 |
Not documented |
||
|
Linux:
Windows:
|
|
|
Quick reference
Service/Provider | OKE | EKS | AKS | GKE |
---|---|---|---|---|
|
|
|||
Mutable after cluster creation |
Required; immutable after cluster creation |
Enabled by default; immutable after cluster creation |
Enabled by default; mutable after cluster creation |
|
|
|
|
|
|
Pod Security Admission Controller (PSA) (note) |
PodSecurity Admission Controller is supported by all OKE versions. |
Enabled in EKS 1.23 by default. |
Can be enabled in AKS clusters running K8s 1.23 or higher |
Available and enabled by default in GKE 1.25 (stable), 1.23 & 1.24 (beta) |
|
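Because PSA is an upstream admission controller, once it is enabled the policy is applied the same way on all four services, by labeling namespaces:

```shell
# enforce the "restricted" Pod Security Standard in a namespace
kubectl label namespace my-app pod-security.kubernetes.io/enforce=restricted

# during migration, audit and warn on the stricter profile instead of blocking
kubectl label namespace my-app \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted
```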
|
|
|
|
|
|
|
||
|
Yes, with AWS App Mesh |
Open Service Mesh as an add-on |
Yes, with Anthos Service Mesh |
|
|
CIDR allow list option |
CIDR allow list option |
CIDR allow list option |
|
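The allow list applies to the managed API server endpoint on each service; the CIDR below is a placeholder:

```shell
# EKS: restrict public access to the API endpoint
aws eks update-cluster-config --name my-cluster \
  --resources-vpc-config publicAccessCidrs="203.0.113.0/24"

# AKS: authorized IP ranges for the API server
az aks update --resource-group my-rg --name my-cluster \
  --api-server-authorized-ip-ranges 203.0.113.0/24

# GKE: master authorized networks
gcloud container clusters update my-cluster --region us-east1 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24
```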
AWS Secrets and Configuration Provider (ASCP) based on Secrets Store CSI Driver |
External Secrets Operator integrates with GCP Secret Manager |
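The External Secrets Operator is provider-agnostic and is commonly installed with Helm from the project's own chart repository:

```shell
# install External Secrets Operator (works against any of the four services)
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets --create-namespace
```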
Quick Reference
Feature | OKE | EKS | AKS | GKE |
---|---|---|---|---|
OCIR (Oracle Cloud Infrastructure Registry) |
ECR (Elastic Container Registry) |
ACR (Azure Container Registry) |
AR (Artifact Registry) |
||
|
||||
|
|
|
|
|
No |
Yes, with Binary Authorization and Voucher |
|||
Yes, and supports:
|
No |
|||
Yes, Oracle Vulnerability Scanning Service (note) |
Yes, paid service: uses the Qualys scanner in a sandbox to check for vulnerabilities |
|||
No |
Yes, configurable |
Yes, by default |
(note 1) Kubernetes releases happen approximately three times per year, and patches typically land monthly (though it is not uncommon for them to arrive every one to two weeks). The best way to determine exactly which versions are currently supported on a particular platform is to use that cloud provider's command-line interface (CLI). Example commands are as follows:
Oracle
oci ce cluster-options get --cluster-option-id all --query 'data."kubernetes-versions"'
Azure
az aks get-versions --location eastus --query 'orchestrators[*].[orchestratorVersion,upgrades[*].orchestratorVersion]' --output table
AWS
aws eks describe-addon-versions --query 'addons[0].addonVersions[0].compatibilities[*].clusterVersion'
GCP (Channel can be REGULAR, STABLE, or RAPID)
gcloud container get-server-config --region us-east1 --flatten="channels" --filter="channels.channel=REGULAR" --format="yaml(channels.channel,channels.validVersions)"
(note 1) some, but not all, of the latest Oracle Linux images provided by Oracle Cloud Infrastructure
e.g.: “Docker is not included in Oracle Linux 8 images. Instead, in node pools running Kubernetes 1.20.x and later, Container Engine for Kubernetes installs and uses the CRI-O container runtime and the crictl CLI (for more information, see Notes about Container Engine for Kubernetes Support for Kubernetes Version 1.20).”
(note 2) OKE images are provided by Oracle and built on top of platform images. OKE images are optimized for use as worker node base images, with all the necessary configurations and required software
(note 3) Custom images are provided by you and can be based on both supported platform images and OKE images. Custom images contain Oracle Linux operating systems, along with other customizations, configuration, and software that were present when you created the image.
(note 1) “By default, users are not assigned any Kubernetes RBAC roles (or clusterroles). Before attempting to create a new role (or clusterrole), you must be assigned an appropriately privileged role (or clusterrole). A number of such roles and clusterroles are always created by default, including the cluster-admin clusterrole (for a full list, see Default Roles and Role Bindings in the Kubernetes documentation). The cluster-admin clusterrole essentially confers super-user privileges. A user granted the cluster-admin clusterrole can perform any operation across all namespaces in a given cluster.” (source)
(note 2) “For most operations on Kubernetes clusters created and managed by Container Engine for Kubernetes, Oracle Cloud Infrastructure Identity and Access Management (IAM) provides access control.” (source)
(note 3) “In addition to IAM, the Kubernetes RBAC Authorizer can enforce additional fine-grained access control for users on specific clusters via Kubernetes RBAC roles and clusterroles.” (source)
(Oracle Developers Community) Your one-stop shop for all Oracle Cloud Developer knowledge.
(StackRox) EKS vs GKE vs AKS - Evaluating Kubernetes in the Cloud
(itoutposts) Kubernetes Engines Compared: Full Guide
(veritis) EKS Vs. AKS Vs. GKE: Which is the right Kubernetes platform for you?
(kloia) Comparison of Kubernetes Engines