Amazon EFS: What It Is and When to Use It
Definition
Amazon Elastic File System (Amazon EFS) is a fully managed, serverless, shared file storage service that provides a POSIX-compliant NFSv4 filesystem to Linux-based AWS compute. Unlike EBS, a single EFS file system can be mounted concurrently by thousands of EC2 instances, Lambda functions, ECS tasks, and EKS pods across multiple Availability Zones in a Region. You never provision capacity: EFS grows and shrinks automatically as you add and remove files, and you pay only for the gigabytes you actually store.
How It Works
EFS is a Regional service. When you create a file system, AWS provisions redundant storage across multiple Availability Zones (Standard class) or a single AZ (One Zone class). You then create mount targets, one per AZ, which expose ENIs inside your VPC. Linux clients mount the filesystem over standard NFSv4; with the amazon-efs-utils mount helper installed, an encrypted mount looks like:
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/efs
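To make the mount survive reboots, the same mount can be declared in /etc/fstab using the efs-utils filesystem type. This is a typical entry (the file system ID is the placeholder from the example above; `_netdev` defers mounting until networking is up, and `tls` enables encryption in transit):

```
fs-0123456789abcdef0:/ /mnt/efs efs _netdev,tls 0 0
```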
The amazon-efs-utils package adds encryption-in-transit, IAM authorization, and automatic retry. Lambda functions mount EFS through a VPC configuration. Containers on ECS and EKS mount EFS through the EFS CSI driver using standard Kubernetes PersistentVolumes.
Access Points sit in front of the filesystem and enforce a POSIX user/group identity and a root directory for each application. This lets you give different workloads isolated views of the same file system without rewriting application code.
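As a sketch of what an Access Point definition carries, the function below builds a request payload in the shape of the EFS CreateAccessPoint API (a POSIX identity plus an enforced root directory). The file system ID, UID/GID, path, and permissions are illustrative values, not a prescription:

```python
# Illustrative sketch of an EFS Access Point definition: pin one application
# to UID/GID 1001 under /app1. Field names follow the EFS CreateAccessPoint
# API shape; the returned dict could be passed as kwargs to a boto3 EFS
# client's create_access_point().

def access_point_request(fs_id, uid, gid, path):
    """Return a CreateAccessPoint-shaped request payload."""
    return {
        "FileSystemId": fs_id,
        "PosixUser": {"Uid": uid, "Gid": gid},
        "RootDirectory": {
            "Path": path,
            # CreationInfo tells EFS to create the directory on first use
            # with this owner and mode, so the app never needs root access.
            "CreationInfo": {
                "OwnerUid": uid,
                "OwnerGid": gid,
                "Permissions": "750",
            },
        },
    }

req = access_point_request("fs-0123456789abcdef0", 1001, 1001, "/app1")
```

Because every NFS operation through the Access Point is squashed to the configured UID/GID and confined to the root directory, two tenants with different Access Points never see each other's files even though they share one file system.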
Key Features and Limits
- Storage classes:
- EFS Standard — Multi-AZ, highest durability (11 nines).
- EFS One Zone — single AZ, about 47% cheaper than Standard, ideal for dev/test or easily recreatable data.
- EFS Standard-IA / One Zone-IA — infrequent-access tiers (up to ~92% cheaper) with per-GB retrieval fees. Lifecycle Management moves files between tiers automatically after a configurable period without access, ranging from 1 day to 1 year.
- EFS Archive — cheapest tier for data accessed a few times per year.
- Performance modes:
- General Purpose (default) — lowest per-operation latency; recommended for the vast majority of workloads.
- Max I/O — higher aggregate throughput and IOPS at the cost of higher per-operation latency; intended for highly parallel big-data workloads. AWS now recommends General Purpose for new file systems, and Elastic throughput removes the need for Max I/O in most cases.
- Throughput modes:
- Bursting — baseline throughput scales with filesystem size, with burst credits for spikes.
- Provisioned — fixed MiB/s independent of size; pay extra.
- Elastic — throughput scales up and down automatically; recommended for spiky or unpredictable workloads.
- Scale: petabytes of data, millions of IOPS, and 10+ GiB/s of throughput with Elastic throughput.
- Security: encryption at rest (KMS) and in transit (TLS), IAM authorization for NFS clients, VPC security groups on mount targets.
- Backup: native integration with AWS Backup; also supports EFS-to-EFS replication (same-Region or cross-Region).
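The Bursting-mode sizing rule above can be sketched as a quick calculation. The figures used here are the commonly documented ones (a baseline of 50 MiB/s per TiB stored, with the ability to burst to at least 100 MiB/s) and should be verified against the current EFS documentation:

```python
# Back-of-envelope estimate of Bursting-mode throughput. Assumed figures:
# baseline of 50 MiB/s per TiB stored, burst capability of at least
# 100 MiB/s (scaling with size above 1 TiB). Verify against current docs.

def bursting_throughput_mibs(stored_gib):
    """Return (baseline, burst) throughput in MiB/s for a given size in GiB."""
    tib = stored_gib / 1024
    baseline = 50.0 * tib             # 50 MiB/s per TiB stored
    burst = max(100.0, 100.0 * tib)   # can always burst to >= 100 MiB/s
    return baseline, burst

# A 100 GiB filesystem: ~4.9 MiB/s baseline but 100 MiB/s burst, which is
# why small, hot filesystems drain burst credits quickly (see Cons below).
baseline, burst = bursting_throughput_mibs(100)
```

The gap between baseline and burst is exactly the burst-credit trap called out under Cons: a small filesystem under sustained load falls back to a few MiB/s once credits run out, which is why Elastic throughput is the safer default for spiky workloads.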
Common Use Cases
- Container shared storage — ECS and EKS workloads needing a common volume across tasks/pods (media processing, CMS, CI caches).
- Lambda shared state — packaging large ML models or Python dependencies that do not fit in the 250 MB deployment package or the 10 GB container image quota.
- Web-tier content — WordPress, Drupal, and other CMSes behind an Auto Scaling group where every instance must see the same wp-content/ or sites/default/files/ directory.
- Home directories and developer environments — shared filesystems for Cloud9 / workspaces, build farms, and shared code.
- Big data and analytics — genomics pipelines, media rendering, SaaS tenant storage.
- Lift-and-shift of NFS workloads — enterprise apps (SAP, custom ERPs) that require a traditional shared filesystem.
Pricing Model
EFS charges across four axes:
- Storage — per GB-month, depending on class (Standard is the most expensive; Archive is the cheapest).
- Throughput — Provisioned throughput is billed per MiB/s-month above the included baseline; Elastic throughput bills per GB of data transferred read/written; Bursting is included in the storage price.
- Requests (IA/Archive) — per-GB retrieval fee when you read data from IA or Archive classes.
- Backups and replication — AWS Backup storage and cross-Region replication data transfer.
Because EFS grows and shrinks automatically, a dev/test workload holding 50 GB is billed for only those 50 GB, a major advantage over EBS, which bills on provisioned capacity. Lifecycle Management combined with the IA/Archive tiers often reduces total cost by 70–90% for typical datasets.
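The savings from tiering can be sanity-checked with simple arithmetic. The per-GB prices below are placeholder figures for illustration, not current AWS rates; plug in the numbers from the EFS pricing page for a real estimate:

```python
# Illustrative monthly storage-cost estimate with Lifecycle Management.
# Prices are assumed placeholder figures, NOT current AWS rates.
PRICE_PER_GB = {"standard": 0.30, "ia": 0.016, "archive": 0.008}

def monthly_cost(total_gb, frac_ia=0.0, frac_archive=0.0):
    """Storage cost for total_gb split across Standard, IA, and Archive."""
    frac_standard = 1.0 - frac_ia - frac_archive
    return (total_gb * frac_standard * PRICE_PER_GB["standard"]
            + total_gb * frac_ia * PRICE_PER_GB["ia"]
            + total_gb * frac_archive * PRICE_PER_GB["archive"])

all_standard = monthly_cost(1000)                          # no lifecycle policy
tiered = monthly_cost(1000, frac_ia=0.6, frac_archive=0.2) # 80% tiered down
savings = 1 - tiered / all_standard                        # roughly 76% here
```

With these assumed prices, moving 80% of a 1 TB dataset out of Standard cuts the bill by about three quarters, which is consistent with the 70–90% range quoted above; retrieval fees on IA/Archive reads would claw some of that back for hot data.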
Pros and Cons
Pros
- Serverless — no capacity planning, no filesystems to resize.
- Shared across thousands of clients in many AZs simultaneously.
- POSIX / NFSv4 semantics work with existing Linux apps unchanged.
- Deep integration with Lambda, ECS, EKS, EC2, and DataSync.
- Lifecycle Management + IA/Archive tiers make cost optimization automatic.
Cons
- Linux only — no native SMB support (use FSx for Windows or FSx for NetApp ONTAP instead).
- Per-operation latency is higher than local EBS or instance store, so it is not ideal for latency-sensitive databases.
- Complex throughput/performance-mode choices can confuse new users (Elastic throughput simplifies this).
- Bursting throughput can surprise you with credit exhaustion on small, hot filesystems.
Comparison with Alternatives
| Feature | EFS | EBS | FSx for Windows | FSx for NetApp ONTAP | S3 |
| --- | --- | --- | --- | --- | --- |
| Protocol | NFSv4 | Block (SCSI/NVMe) | SMB | NFS + SMB + iSCSI | HTTPS API |
| Clients | Many, Multi-AZ | One EC2 in one AZ | Many (Windows) | Many (multi-protocol) | Unlimited, any client |
| OS | Linux | Any | Windows primarily | Linux + Windows | Any |
| Scaling | Elastic (serverless) | Provisioned | Provisioned | Provisioned | Unlimited |
| Use case | Shared Linux files | Boot / DB volumes | Windows shares, AD | Hybrid NAS, SnapMirror | Objects, archives |
Exam Relevance
- Solutions Architect Associate (SAA-C03) — choosing EFS when "shared POSIX filesystem across many EC2 instances or AZs" is in the question; distinguishing EFS (Linux NFS) from FSx for Windows (SMB) and FSx Lustre (HPC).
- Developer Associate (DVA-C02) — mounting EFS from Lambda, using Access Points for multi-tenant apps, EFS CSI driver on EKS.
- SysOps Administrator (SOA-C02) — Bursting vs Provisioned vs Elastic throughput tradeoffs, Lifecycle Management to IA/Archive, AWS Backup integration.
Classic exam trap: if the question mentions Windows file shares or Active Directory integration, the answer is almost always FSx for Windows File Server, not EFS.
Frequently Asked Questions
Q: What is the difference between EFS and EBS?
A: EFS is a shared NFS filesystem that many clients mount simultaneously across AZs, scales automatically, and bills per GB stored. EBS is a block volume attached to a single EC2 instance in one AZ, requires you to provision capacity in advance, and is typically used for boot disks and database storage where low latency matters more than shareability.
Q: Can I mount EFS from Lambda?
A: Yes. Configure the Lambda function with a VPC, attach it to the same subnets as the EFS mount targets, and specify an EFS Access Point plus a local mount path (for example, /mnt/data). This is commonly used to share large ML models, Python packages, or configuration data across many Lambda invocations without hitting the 250 MB deployment package limit.
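As a sketch of the configuration involved, the helper below builds the file-system entry in the shape of Lambda's FileSystemConfigs parameter (an Access Point ARN plus a local mount path, which Lambda requires to start with /mnt/). The ARN shown is a placeholder:

```python
# Sketch of the FileSystemConfigs entry a Lambda function expects when
# mounting EFS: the Access Point ARN plus a local mount path. Lambda
# requires the path to begin with /mnt/. The ARN below is a placeholder.

def lambda_efs_config(access_point_arn, mount_path="/mnt/data"):
    """Return a FileSystemConfigs-shaped list for a Lambda function."""
    if not mount_path.startswith("/mnt/"):
        raise ValueError("Lambda requires the mount path to start with /mnt/")
    return [{"Arn": access_point_arn, "LocalMountPath": mount_path}]

cfg = lambda_efs_config(
    "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0abc"
)
```

This list could be passed as the FileSystemConfigs argument of a function's configuration (alongside the VPC settings for the mount targets' subnets); the function's code then reads and writes /mnt/data like any local directory.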
Q: How much cheaper is EFS One Zone compared to Standard?
A: EFS One Zone storage is approximately 47% less expensive than EFS Standard because the data is stored in a single Availability Zone rather than replicated across multiple AZs. Combine it with Lifecycle Management to IA or Archive to reduce cost further. Use One Zone only for data you can easily recreate — if that AZ is destroyed, the data is lost.
This article reflects AWS features and pricing as of 2026. AWS services evolve rapidly — always verify against the official Amazon EFS documentation before making production decisions.