Amazon EKS: What It Is and When to Use It

Definition

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that runs upstream, certified Kubernetes on AWS — and optionally on-premises via EKS Anywhere and Amazon EKS Hybrid Nodes. AWS runs and scales the Kubernetes control plane for you; you focus on deploying workloads with the standard Kubernetes API, kubectl, Helm, and the rest of the Kubernetes ecosystem.

How It Works

An EKS cluster consists of:

  • Managed control plane — the Kubernetes API server, etcd, scheduler, and controller-manager, all run by AWS across multiple Availability Zones. You never SSH into these.
  • Data plane (worker nodes) — where your pods run. You pick one or more of:
    • Managed Node Groups — AWS provisions and lifecycle-manages EC2 nodes for you (patching, draining, rotation).
    • Self-managed nodes — you own the EC2 AMIs and Auto Scaling Groups end-to-end.
    • Fargate profiles — serverless pods; no nodes to manage.
    • EKS Auto Mode — AWS manages the entire data plane (node provisioning, scaling, OS updates, cost optimization) automatically.
    • EKS Hybrid Nodes — attach on-premises servers as worker nodes to a cloud control plane.
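
One way to mix these options in a single cluster is an eksctl config file. The sketch below, with illustrative names and sizes, declares a managed node group alongside a Fargate profile:

```yaml
# eksctl ClusterConfig combining a managed node group and a Fargate profile
# (cluster name, region, and sizes are illustrative)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: us-east-1

managedNodeGroups:
  - name: general-purpose
    instanceType: m6i.large
    minSize: 2
    maxSize: 5
    desiredCapacity: 3      # AWS handles AMI patching, draining, and rotation

fargateProfiles:
  - name: serverless-batch
    selectors:
      - namespace: batch    # pods in this namespace are scheduled on Fargate
```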

Pods inside the cluster talk to AWS services using IAM Roles for Service Accounts (IRSA) or the newer EKS Pod Identity, so each workload gets least-privilege AWS credentials without embedded access keys.
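
With IRSA, the link between a workload and its IAM role is a single annotation on the service account; the role ARN below is a placeholder:

```yaml
# ServiceAccount bound to an IAM role via IRSA (role ARN is illustrative)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: app
  annotations:
    # Pods using this service account receive short-lived credentials
    # for the annotated role through OIDC federation
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s3-reader-role
```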

EKS clusters use the Amazon VPC CNI plugin by default, giving every pod a native VPC IP address that integrates cleanly with Security Groups, VPC Flow Logs, and ALB/NLB target groups.
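
Because pods have native VPC IPs, they can also be bound directly to security groups. A minimal sketch of a SecurityGroupPolicy, with a hypothetical label and group ID:

```yaml
# Attach an existing security group to pods matching a label selector
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: payments-sg
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: payments          # illustrative label
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0 # placeholder security group ID
```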

Key Features and Limits

  • Kubernetes versions: EKS typically supports the latest four minor versions in Standard Support, followed by up to 12 additional months of Extended Support (at a higher per-cluster rate) before AWS automatically upgrades the cluster to the oldest supported version.
  • Control plane HA: multi-AZ by default, ~99.95% SLA.
  • Managed add-ons: one-click lifecycle management for VPC CNI, kube-proxy, CoreDNS, the EBS and EFS CSI drivers, Mountpoint for Amazon S3 CSI driver, Pod Identity Agent, Amazon CloudWatch Observability, and more.
  • Identity: IRSA (via OIDC federation) or EKS Pod Identity for per-pod IAM; RBAC and aws-auth/Access Entries for user-to-cluster auth.
  • Networking: VPC CNI with configurable prefix delegation (for pod density), custom networking, security groups for pods, and third-party network policy engines such as Cilium for advanced policies.
  • Scaling: Cluster Autoscaler or Karpenter (AWS-native autoscaler that picks instance types dynamically).
  • Observability: CloudWatch Container Insights, Amazon Managed Grafana, Amazon Managed Service for Prometheus.
  • Security: GuardDuty EKS Runtime Monitoring, Amazon Inspector for container image vulnerabilities, KMS envelope encryption for Kubernetes secrets.
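
For the scaling option above, a Karpenter NodePool describes constraints and lets the autoscaler choose instance types within them. A sketch, assuming a separately defined EC2NodeClass named `default`:

```yaml
# Karpenter NodePool allowing Spot or On-Demand amd64 capacity
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws   # EC2NodeClass defined elsewhere
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"                     # cap total provisioned vCPUs
```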

Common Use Cases

  1. Kubernetes-first organizations — teams already using Kubernetes want AWS infrastructure without running the control plane themselves.
  2. Portable workloads — applications built against standard Kubernetes APIs can move between EKS, on-prem, and other clouds with little change.
  3. ML training and inference — GPU node groups plus Karpenter give fine-grained access to P/G instance families and Trainium/Inferentia.
  4. Regulated hybrid deployments — EKS Hybrid Nodes and EKS Anywhere let companies run a single Kubernetes fabric across the cloud and data center.
  5. Multi-tenant platforms — internal developer platforms that expose a Kubernetes API to application teams.
  6. Complex event-driven workloads — Argo Workflows, Kubeflow, Airflow, Temporal — software ecosystems that assume Kubernetes.

Pricing Model

  • Control plane: $0.10 per hour per cluster (~$73/month), with a higher rate for clusters on Extended Support versions.
  • EKS Auto Mode: an additional management fee on each Auto Mode-managed EC2 instance, on top of normal compute.
  • EKS Hybrid Nodes: per-vCPU management fee for each on-premises worker.
  • Compute: standard EC2 pricing for Managed/Self-managed nodes, or Fargate pricing for pods scheduled on Fargate profiles. Spot, Reserved Instances, and Savings Plans all apply.
  • Other: standard charges for EBS, ELB, data transfer, CloudWatch, and so on.

Compare the total cost to ECS (no control-plane fee) when Kubernetes features aren't required.
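
The control-plane fee is easy to put in perspective with a back-of-the-envelope calculation. The node price below is an assumed figure for a small instance, not current AWS pricing:

```python
# Rough monthly comparison: EKS control-plane fee vs. three small
# self-managed control-plane nodes (node price is illustrative).
HOURS_PER_MONTH = 730

eks_fee_per_hour = 0.10  # standard-support per-cluster fee
eks_monthly = eks_fee_per_hour * HOURS_PER_MONTH

# Assumed on-demand price for a t3.medium-class node; verify against
# current EC2 pricing before relying on this number.
node_price_per_hour = 0.0416
self_managed_monthly = 3 * node_price_per_hour * HOURS_PER_MONTH

print(f"EKS control plane:        ${eks_monthly:.2f}/month")
print(f"3-node self-managed plane: ${self_managed_monthly:.2f}/month")
```

Even before counting the engineering time to patch and upgrade etcd and the API server, the raw instance cost of a three-node control plane is in the same range as the managed fee.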

Pros and Cons

Pros

  • Upstream Kubernetes — use Helm, Operators, kubectl, and any third-party tool.
  • Tight AWS integration: IRSA, AWS Load Balancer Controller, EBS/EFS/FSx/S3 CSI, Managed Grafana.
  • Multi-option data plane: choose Managed Node Groups, Fargate, Auto Mode, or Hybrid Nodes per workload.
  • Portable — workloads run on any certified Kubernetes distribution with minimal change.

Cons

  • Kubernetes has a steep learning curve; operational overhead is higher than ECS.
  • Clusters must be upgraded regularly (every 14 months or less in Standard Support, or pay for Extended Support).
  • Several moving parts (CNI, DNS, ingress controller, autoscaler) must be kept in sync during upgrades.
  • $0.10/hour per cluster adds up at scale; dozens of small clusters can be a meaningful bill line.

Comparison with Alternatives

| | EKS | ECS | Self-managed Kubernetes on EC2 | GKE / AKS |
| --- | --- | --- | --- | --- |
| Kubernetes API | Yes (upstream) | No | Yes | Yes |
| Control plane managed by | AWS | AWS (no control plane fee) | You | Google / Microsoft |
| Control-plane cost | $0.10/hour | Free | Your EC2 instances | Similar managed fees |
| Tooling ecosystem | Full k8s | AWS-only | Full k8s | Full k8s |
| Best for | Portable k8s workloads on AWS | AWS-native containers | Full control, niche requirements | Multi-cloud shops |

Exam Relevance

  • Solutions Architect Associate (SAA-C03) — ECS vs EKS decision, Fargate vs EC2 nodes, IRSA at a high level.
  • Developer Associate (DVA-C02) — lighter EKS coverage; knowing when to pick ECS vs EKS is enough.
  • DevOps Professional (DOP-C02) — IRSA, Managed Node Groups vs Fargate profiles, cluster upgrade strategies, AWS Load Balancer Controller, Helm chart deployment via pipelines.
  • Security Specialty (SCS-C02) — IRSA / Pod Identity, EKS control plane logging to CloudWatch, private-cluster API endpoints, Secrets Manager / Parameter Store integration.

Frequently Asked Questions

Q: When should I choose EKS over ECS?

A: Choose EKS when your organization has existing Kubernetes expertise, your application relies on Kubernetes-native tools (Helm, Argo CD, Kustomize, Prometheus Operator, etc.), or you need the portability to run the same workloads on other clouds or on-premises. Choose ECS when you want the simplest possible AWS-native container platform and don't need the Kubernetes API.

Q: What is IRSA and why does it matter?

A: IAM Roles for Service Accounts (IRSA) lets a Kubernetes service account assume an AWS IAM role via OIDC federation. Pods that use that service account receive short-lived AWS credentials, so each workload gets least-privilege access to AWS APIs without embedding long-lived keys. The newer EKS Pod Identity offers a similar capability with a simpler setup and no OIDC configuration needed.
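
On the IAM side, IRSA relies on a trust policy that scopes the role to one service account through the cluster's OIDC provider. The account ID, provider ID, namespace, and service account name below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890:sub": "system:serviceaccount:app:s3-reader"
        }
      }
    }
  ]
}
```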

Q: How much does EKS cost versus running Kubernetes on my own EC2 instances?

A: EKS charges $0.10 per hour per cluster for the managed control plane (about $73/month), plus standard costs for worker nodes, EBS, load balancers, and data transfer. Self-hosted Kubernetes avoids the control plane fee but requires you to run, patch, and upgrade etcd, the API server, the scheduler, and the controller-manager yourself — usually on three or more EC2 instances whose cost, plus the operational effort, typically exceeds the EKS fee.


This article reflects AWS features and pricing as of 2026. AWS services evolve rapidly — always verify against the official Amazon EKS documentation before making production decisions.

Published: 4/16/2026 / Updated: 4/16/2026
