This document contains an extensive, though not exhaustive, comparison of the four most prominent managed Kubernetes offerings: Oracle Container Engine for Kubernetes (OKE), Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). The comparison was developed with input from the community and will continue to be revised as the technology changes.
First published January 7th, 2023, this document will be updated regularly to ensure consistent and relevant information is made available. Should you have any questions or wish to contribute feedback, you may reach us any time on Slack!
| | OKE | EKS | AKS | GKE |
|---|---|---|---|---|
| Currently supported Kubernetes version(s) (note) | 1.25.4, 1.24.1, 1.23.4, 1.22.5 | 1.24.7, 1.23.13, 1.22.15, 1.21.14 | 1.25.5, 1.24.9, 1.23.15, 1.23.12 | (REGULAR) 1.25.5, 1.24.9, 1.23.14, 1.22.16, 1.21.14 |
| Number of supported minor versions | >=3 + 1 deprecated | 3 | 4 | |
| Original GA date | May 2018 | June 2018 | June 2018 | August 2015 |
| Pricing | All management costs are free | $0.10/hour (USD) per cluster + standard costs of EC2 instances and other resources | Pay-as-you-go: standard costs of node VMs and other resources | $0.10/hour (USD) per cluster + standard costs of GCE machines and other resources |
| | Yes | Yes | Yes | |
| CLI support | Full support of Kubernetes clusters; Oracle Cloud Shell; kubectl support | Full support of Kubernetes clusters; kubectl support | Full support of Kubernetes clusters; kubectl support | Full support of Kubernetes clusters; kubectl support |
| Control plane upgrades | User initiated | User initiated | User initiated | Automatically upgraded by default; can be user initiated |
| Worker node upgrades | User initiated | | Automatically upgraded or user initiated; AKS drains and replaces nodes | Automatically upgraded during the cluster maintenance window (default; can be turned off); can be user initiated; drains and replaces nodes |
| Supported images for worker nodes (source) | Linux; Windows | Linux; Windows | Linux; Windows | |
| Control plane high availability | Control plane is deployed to multiple Oracle-managed control plane nodes, distributed across different availability domains (where supported) or different fault domains | Control plane is deployed across multiple Availability Zones (default) | Control plane components are spread across the number of zones defined by the admin | Zonal clusters; regional clusters |
| SLA | 99.95% SLO | Guarantees 99.95% uptime | Offers 99.95% when availability zones are enabled, 99.9% when disabled | 99.5% uptime for zonal deployments; 99.95% for regional deployments |
| SLA pricing | Zero cost – not applicable | | | |
| GPU support | Yes (NVIDIA); selecting a compatible Oracle Linux GPU image pre-installs the CUDA libraries, so CUDA libraries for different GPUs do not have to be included in the application container | Yes (NVIDIA); user must install the device plugin in the cluster | Yes (NVIDIA); user must install the device plugin in the cluster | Yes (NVIDIA); user must install the device plugin in the cluster; Compute Engine A2 VMs are also available |
| Monitoring | | CloudWatch Container Insights; other tools also supported | Azure Monitor; other tools also supported | Kubernetes Engine Monitoring; other tools also supported |
| Node health monitoring and repair | Self-healing: automatically provisions new worker nodes on failure to maintain cluster availability; detect-and-repair capabilities also exist within the autoscaling functionality | Container Insights metrics detect failed nodes; can trigger replacement or allow autoscaling to replace the node | Auto-repair is now available; node status monitoring is available; use autoscaling rules to shift workloads | Worker node auto-repair enabled by default |
| Autoscaling | "CA should handle up to 1000 nodes running 30 pods each. Our testing procedure is described here." (source) | Cluster Autoscaler | | Cluster autoscaler native capabilities |
| Serverless | Virtual Nodes coming soon: will deliver a complete, serverless Kubernetes experience | Integrated with Fargate; customers can deploy pods as container instances rather than full VMs; requires the use of an Amazon Application Load Balancer | Virtual nodes make serverless computing possible in AKS; they do not run separately from the available Kubernetes workloads; a customer can use virtual nodes by assigning particular workloads to them | |
| Availability | OCI has multiple realms: one commercial realm (with 34 regions) plus realms for Government Cloud: US Government Cloud (FedRAMP authorized and IL5 authorized) and United Kingdom Government Cloud | 30 regions containing 96 Availability Zones; service availability may vary by region | Available in 57 of Azure's 60 regions; not all regions include availability zones | Available in 35 regions; no GovCloud support |
| | Supports | Does not support | Does not support | |
| Machine types | Flex shapes, x86, ARM, HPC, GPU; clusters with mixed node types (source) | x86, ARM (Graviton), GPU; specific node images required for various CPU/GPU combinations (source) | x86, ARM, GPU | x86, ARM (v1.24 or later only), GPU |
| Developer tooling | Oracle-provided tools; Oracle customers can take full advantage of the K8s ecosystem: Loft, Okteto, Shipa.io, Telepresence | AWS Toolkit for VS Code supports ECR and ECS, but not EKS; AWS CloudShell; full support for the entire K8s ecosystem | | Google offers Cloud Code, a VS Code extension, to deploy, monitor, and control clusters directly in the IDE; integrates with Cloud Run and Cloud Run for Anthos |
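As the GPU support row above notes, on EKS, AKS, and GKE the NVIDIA device plugin must be installed before pods can consume GPUs, while compatible OKE GPU images ship with CUDA preconfigured. Once the plugin is in place, a pod requests a GPU through the `nvidia.com/gpu` extended resource. A minimal sketch (the pod name and CUDA image tag are illustrative, not from this document):

```shell
# Write a pod manifest that requests a single NVIDIA GPU.
# The nvidia.com/gpu resource only exists on nodes where the
# device plugin (or an OKE GPU image with it preconfigured) is running.
cat <<'EOF' > gpu-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:12.0.0-base-ubuntu22.04   # illustrative tag
    command: ["nvidia-smi"]        # print GPU info, then exit
    resources:
      limits:
        nvidia.com/gpu: 1          # schedule onto a node exposing one GPU
EOF
# Apply with: kubectl apply -f gpu-pod.yaml
```

If the plugin is missing, the pod stays `Pending` because no node advertises the `nvidia.com/gpu` resource.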
Quick reference

| Service/Provider | OKE | EKS | AKS | GKE |
|---|---|---|---|---|
| Maximum clusters | 15 clusters/region (Monthly Universal Credits) or 1 cluster/region (Pay-as-You-Go or Promo) by default | | 5000 per subscription | |
| Maximum nodes per cluster | 1000 | Managed node groups: 100 | | |
| Maximum node pools | No limit on the number of node pools as long as total nodes per cluster does not exceed 1,000 | Managed node groups: 30 | Not documented | |
| | Linux; Windows | | | |
Quick reference

| Service/Provider | OKE | EKS | AKS | GKE |
|---|---|---|---|---|
| RBAC (note) | Not assigned by default | Required | Enabled by default | Enabled by default |
| Pod Security Admission Controller (PSA) (note) | The PodSecurity admission controller is supported by all OKE versions | Enabled by default in EKS 1.23 | Can be enabled in AKS clusters running Kubernetes 1.23 or higher | Available and enabled by default in GKE 1.25 (stable) and 1.23 & 1.24 (beta) |
| Service mesh | | Yes, with AWS App Mesh | Open Service Mesh as an add-on | Yes, with Anthos Service Mesh |
| API server access control | | CIDR allow list option | CIDR allow list option | CIDR allow list option |
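Pod Security Admission works the same way on all four services once it is available: it is configured per namespace with standard labels, so no provider-specific tooling is needed. A minimal sketch, using a hypothetical `demo` namespace, that enforces the `baseline` profile and warns on violations of the stricter `restricted` profile:

```shell
# Namespace manifest with Pod Security Admission labels:
# "enforce" rejects pods that violate the baseline profile, while
# "warn" surfaces (but still allows) violations of the restricted profile.
cat <<'EOF' > psa-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                                     # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
EOF
# Apply with: kubectl apply -f psa-namespace.yaml
```

The same labels also accept an optional `-version` suffix (e.g. `pod-security.kubernetes.io/enforce-version`) to pin the policy to a specific Kubernetes minor version.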
Quick Reference

| | OKE | EKS | AKS | GKE |
|---|---|---|---|---|
| Container registry | OCIR (Oracle Cloud Infrastructure Registry) | ECR (Elastic Container Registry) | ACR (Azure Container Registry) | AR (Artifact Registry) |
| | | | No | Yes, with Binary Authorization and Voucher |
| | Yes, and supports: | No | | |
| Image vulnerability scanning | | | Yes, paid service: uses the Qualys scanner in a sandbox to check for vulnerabilities | |
| | | No | Yes, configurable | Yes, by default |
(note 1) Kubernetes releases happen approximately three times per year, and the typical patch cadence is monthly (though patches every one to two weeks are not uncommon). The best way to determine exactly which versions are currently supported on a particular platform is to use that cloud provider's command-line interface (CLI). Example commands are as follows:

Oracle

```shell
oci ce cluster-options get --cluster-option-id all --query 'data."kubernetes-versions"'
```

Azure

```shell
az aks get-versions --location eastus --query 'orchestrators[*].[orchestratorVersion,upgrades[*].orchestratorVersion]' --output table
```

AWS

```shell
aws eks describe-addon-versions --query 'addons[0].addonVersions[0].compatibilities[*].clusterVersion'
```

GCP (channel can be REGULAR, STABLE, or RAPID)

```shell
gcloud container get-server-config --region us-east1 --flatten="channels" --filter="channels.channel=REGULAR" --format="yaml(channels.channel,channels.validVersions)"
```
(note 1) Platform images: some, but not all, of the latest Oracle Linux images provided by Oracle Cloud Infrastructure.
(note 2) OKE images are provided by Oracle and built on top of platform images. OKE images are optimized for use as worker node base images, with all the necessary configurations and required software.
(note 3) Custom images are provided by you and can be based on both supported platform images and OKE images. Custom images contain the Oracle Linux operating system, along with the other customizations, configuration, and software that were present when you created the image.
(note 1) “By default, users are not assigned any Kubernetes RBAC roles (or clusterroles). Before attempting to create a new role (or clusterrole), you must be assigned an appropriately privileged role (or clusterrole). A number of such roles and clusterroles are always created by default, including the cluster-admin clusterrole (for a full list, see Default Roles and Role Bindings in the Kubernetes documentation). The cluster-admin clusterrole essentially confers super-user privileges. A user granted the cluster-admin clusterrole can perform any operation across all namespaces in a given cluster.” (source)
(note 2) “For most operations on Kubernetes clusters created and managed by Container Engine for Kubernetes, Oracle Cloud Infrastructure Identity and Access Management (IAM) provides access control.” (source)
(note 3) “In addition to IAM, the Kubernetes RBAC Authorizer can enforce additional fine-grained access control for users on specific clusters via Kubernetes RBAC roles and clusterroles.” (source)
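As note 1 describes, a user must already hold a sufficiently privileged role (or clusterrole) before creating new ones, and the built-in cluster-admin clusterrole confers super-user privileges across all namespaces. A minimal sketch of granting cluster-admin to a single user (the binding and user names are illustrative; on OKE the subject would typically be an IAM user OCID):

```shell
# ClusterRoleBinding granting the built-in cluster-admin clusterrole
# (super-user privileges across all namespaces) to one user.
cat <<'EOF' > admin-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alice-cluster-admin        # illustrative name
subjects:
- kind: User
  name: alice@example.com          # illustrative user; on OKE, an IAM user OCID
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin              # built-in super-user clusterrole
  apiGroup: rbac.authorization.k8s.io
EOF
# Apply with: kubectl apply -f admin-binding.yaml
```

Prefer binding narrower default clusterroles (such as view or edit) where possible; cluster-admin should be reserved for actual cluster administrators.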
- Oracle Developers Community: your one-stop shop for all Oracle Cloud developer knowledge.
- StackRox: EKS vs GKE vs AKS – Evaluating Kubernetes in the Cloud
- itoutposts: Kubernetes Engines Compared: Full Guide
- veritis: EKS vs. AKS vs. GKE: Which is the right Kubernetes platform for you?
- kloia: Comparison of Kubernetes Engines