S3 Storage Classes: What They Are and When to Use Each

Definition

Amazon S3 storage classes are tiers of the same underlying S3 object storage, each tuned for a different balance of price, access latency, retrieval cost, and minimum storage duration. Every object you upload to S3 is assigned a storage class — you can keep it in S3 Standard, move it to a cheaper tier after a period of inactivity via lifecycle policies, or let S3 Intelligent-Tiering decide for you automatically.

How It Works

S3 storage classes share three guarantees — 99.999999999% (11 nines) of object durability, strong read-after-write consistency, and a single API surface — while differing on:

  • Number of Availability Zones: multi-AZ (default) vs single-AZ (cheaper, but lost if the AZ fails).
  • First-byte latency: milliseconds (hot classes) vs minutes or hours (archive classes).
  • Per-GB storage price: from S3 Standard (most expensive) down to Glacier Deep Archive (~1/25th the price).
  • Retrieval charges: infrequent-access and archive classes charge per-GB to retrieve.
  • Minimum storage duration: the smallest time window you must keep an object in the class, or pay the pro-rated balance.
  • Minimum billable object size: some classes round up small objects to 128 KB, which matters if you store millions of tiny files.

S3 Lifecycle policies transition or expire objects automatically between classes (e.g., move to Infrequent-Access after 30 days, to Glacier after 90 days, delete after 365). S3 Intelligent-Tiering monitors access patterns and moves objects between up to five access tiers (Frequent Access, Infrequent Access, Archive Instant Access, plus the opt-in Archive Access and Deep Archive Access tiers) for a small per-object monitoring fee — often the simplest way to optimize.
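The lifecycle rules just described can be sketched as a configuration dict. The bucket name and `logs/` prefix below are hypothetical; the dict matches the shape boto3's `put_bucket_lifecycle_configuration` expects, but this is an illustrative sketch, not a ready-to-apply policy.

```python
# Lifecycle rules matching the example above: Standard-IA after 30 days,
# Glacier Flexible Retrieval after 90, expire after 365.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # apply only to this (hypothetical) prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},  # Flexible Retrieval
            ],
            "Expiration": {"Days": 365},  # delete a year after creation
        }
    ]
}

# With boto3 this would be applied roughly as:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-logs-bucket", LifecycleConfiguration=lifecycle_config)
```

Note that transition days must increase monotonically across a rule's `Transitions`, and each transition is billed as a request (see the Pricing Model section).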

The Eight Storage Classes

| Class | AZs | First-byte latency | Min storage | Min billable size | Use case |
| --- | --- | --- | --- | --- | --- |
| S3 Standard | ≥3 | ms | — | — | Default, frequently accessed data |
| S3 Intelligent-Tiering | ≥3 | ms to hours (tier-dependent) | — | 128 KB | Unknown or changing access patterns |
| S3 Standard-IA | ≥3 | ms | 30 days | 128 KB | Infrequently accessed, backup secondary copies |
| S3 One Zone-IA | 1 | ms | 30 days | 128 KB | Recreatable data where AZ-level loss is acceptable |
| S3 Express One Zone | 1 | single-digit ms (10× faster than Standard) | 1 hour | — | High-performance, request-heavy workloads |
| S3 Glacier Instant Retrieval | ≥3 | ms | 90 days | 128 KB | Archive data that must be fetched in milliseconds, accessed ~quarterly |
| S3 Glacier Flexible Retrieval | ≥3 | minutes to hours (Expedited / Standard / Bulk) | 90 days | 40 KB | Backup archives, DR copies |
| S3 Glacier Deep Archive | ≥3 | 12 to 48 hours | 180 days | 40 KB | Long-term retention for compliance, cheapest |

Choosing the Right Class

Ask four questions:

  1. How often is the data accessed? Daily → Standard. Monthly → Intelligent-Tiering or Standard-IA. Quarterly → Glacier Instant Retrieval. Annually → Glacier Flexible Retrieval or Deep Archive.
  2. How fast must retrievals be? Milliseconds → Standard, Intelligent-Tiering, Standard-IA, Glacier Instant Retrieval, or Express One Zone. Minutes/hours → Glacier Flexible Retrieval. Hours → Deep Archive.
  3. Can the data be recreated? If yes and the data is mostly a backup, One Zone-IA saves ~20% over Standard-IA.
  4. Is the workload request-heavy? If you make millions of GETs per second on the same bucket, S3 Express One Zone delivers 10× lower latency with 50% cheaper request pricing (but higher storage cost).

Rule of thumb: if you can't predict access patterns, start with Intelligent-Tiering. It's effectively "Standard with automatic cost optimization," costs slightly more per object for monitoring, and avoids the per-object lifecycle transition fees.
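The four questions above can be sketched as a toy decision helper. This is a deliberate simplification for illustration — real selection should also weigh object size, minimum storage durations, and regional pricing — and the thresholds below are assumptions, not AWS guidance.

```python
def suggest_storage_class(accesses_per_year: float,
                          max_retrieval_ms: float,
                          recreatable: bool = False) -> str:
    """Toy mapping of the four questions to an S3 storage class name."""
    if accesses_per_year >= 100:            # roughly daily/weekly access
        return "STANDARD"
    if max_retrieval_ms <= 1000:            # needs millisecond first-byte latency
        if accesses_per_year >= 12:         # roughly monthly access
            return "ONEZONE_IA" if recreatable else "STANDARD_IA"
        return "GLACIER_IR"                 # Glacier Instant Retrieval
    if accesses_per_year >= 1:              # rare access, hours acceptable
        return "GLACIER"                    # Glacier Flexible Retrieval
    return "DEEP_ARCHIVE"

# Unpredictable access patterns? Skip the helper and use INTELLIGENT_TIERING.
```

The class-name strings match the `StorageClass` values the S3 API accepts, so a helper like this could feed directly into a `put_object` call.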

Pricing Model

Three cost axes per class, all vary by Region:

  1. Storage — per GB-month. Ranges from ~$0.023/GB (Standard) to ~$0.00099/GB (Deep Archive).
  2. Requests — PUT / POST / COPY / LIST and GET / SELECT. Archive classes have higher request costs, especially for retrievals.
  3. Retrievals — per GB for IA and Glacier classes. Glacier Expedited is ~10× the cost of Bulk.
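The interplay of axes 1 and 3 produces a break-even point: Standard-IA's cheaper storage is offset by its per-GB retrieval fee once data is read often enough. A minimal sketch, using illustrative us-east-1-style figures that will drift — substitute current numbers from the AWS pricing page:

```python
# Break-even between S3 Standard and Standard-IA for 1 GB over one month.
# Prices are illustrative examples, not current quotes.
STANDARD_GB_MONTH = 0.023     # $/GB-month, no retrieval fee
IA_GB_MONTH = 0.0125          # $/GB-month
IA_RETRIEVAL_PER_GB = 0.01    # $/GB retrieved

def monthly_cost_per_gb(retrievals_per_month: float, use_ia: bool) -> float:
    """Storage plus retrieval cost for 1 GB over one month."""
    if use_ia:
        return IA_GB_MONTH + IA_RETRIEVAL_PER_GB * retrievals_per_month
    return STANDARD_GB_MONTH  # Standard charges no per-GB retrieval fee

# Read the full gigabyte once a month: IA wins (0.0225 vs 0.023).
# Read it twice a month: Standard wins (0.0325 vs 0.023).
```

With these numbers the crossover sits just above one full read per month, which is why "infrequent access" is the operative phrase in the class name.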

Also note:

  • Data transfer out to the internet is charged for every class; transfer from S3 to CloudFront is free.
  • Lifecycle transitions — each transition is a PUT-equivalent request (and for Glacier classes, each object carries per-object metadata overhead: roughly 32 KB billed at the archive rate plus 8 KB at the Standard rate).
  • Early deletion fees — delete an object from Standard-IA / One Zone-IA before 30 days, Glacier Instant / Flexible before 90 days, Deep Archive before 180 days, and you pay the remaining storage cost.
  • Minimum billable object size — Standard-IA / One Zone-IA / Glacier Instant Retrieval round up small objects to 128 KB. Storing millions of 1 KB objects in Standard-IA is almost always more expensive than keeping them in Standard.
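The minimum-billable-size trap is easy to quantify. A sketch with the same illustrative prices as above, showing why a million 1 KB objects cost far more in Standard-IA than in Standard:

```python
# Effect of the 128 KB minimum billable size on tiny objects.
# Prices are illustrative, not current quotes.
STANDARD_GB_MONTH = 0.023
IA_GB_MONTH = 0.0125
MIN_BILLABLE_KB = 128

def ia_monthly_cost(n_objects: int, object_kb: float) -> float:
    """Standard-IA bills each object as at least 128 KB."""
    billed_kb = max(object_kb, MIN_BILLABLE_KB)
    return n_objects * billed_kb / (1024 ** 2) * IA_GB_MONTH

def standard_monthly_cost(n_objects: int, object_kb: float) -> float:
    """Standard bills actual size with no minimum."""
    return n_objects * object_kb / (1024 ** 2) * STANDARD_GB_MONTH

# 1,000,000 x 1 KB objects: Standard-IA bills ~122 GB (~$1.53/month),
# while Standard bills ~0.95 GB (~$0.02/month) -- about 70x cheaper.
```

Aggregating tiny files into larger archives (tar/parquet) before tiering them is the usual workaround.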

Pros and Cons

Pros

  • All classes are designed for 11 nines of durability; the trade-off in One Zone-IA and Express One Zone is single-AZ resilience — data is lost if that AZ is destroyed.
  • Same APIs, same bucket — only the StorageClass attribute changes.
  • Lifecycle policies and Intelligent-Tiering make cost optimization nearly automatic.
  • Deep Archive is one of the cheapest cloud storage tiers available, period.

Cons

  • Early-deletion fees make it expensive to remove archive data prematurely.
  • Minimum billable object size can make IA classes more expensive than Standard for tiny files.
  • Retrieving archive classes at scale requires pre-planning (Expedited vs Bulk trade-off).
  • Intelligent-Tiering has a small per-object monitoring charge; pointless for very few large objects.

Common Use Cases per Class

  • S3 Standard — live websites, application assets, hot data lakes.
  • S3 Intelligent-Tiering — mixed workloads where access patterns are unknown or changing; a safe default for most modern pipelines.
  • S3 Standard-IA — backups, DR copies, compliance archives that must be available in milliseconds but aren't touched often.
  • S3 One Zone-IA — easily recreatable data (re-encoded video, derived datasets) where you can tolerate losing an AZ.
  • S3 Express One Zone — ultra-low-latency, very-high-throughput workloads such as ML training loops reading millions of small files.
  • S3 Glacier Instant Retrieval — medical images, legal records, historical logs that must be retrieved in ms but are rarely touched.
  • S3 Glacier Flexible Retrieval — 90+ day-old backups, media archives, raw data kept "just in case."
  • S3 Glacier Deep Archive — regulatory retention (7–10 years), magnetic-tape replacement, the cheapest long-term graveyard.

Exam Relevance

Storage-class selection is one of the most frequently tested S3 topics:

  • Cloud Practitioner (CLF-C02) — know the names of the classes and that they share 11 nines durability.
  • Solutions Architect Associate (SAA-C03) — pick the cheapest class that meets the scenario's access-frequency and retrieval-time constraints, recognize minimum storage penalties, design lifecycle policies.
  • Developer Associate (DVA-C02) — the x-amz-storage-class header, Intelligent-Tiering, S3 Inventory.
  • SysOps Administrator (SOA-C02) — lifecycle rules, Storage Class Analysis, Storage Lens.

Classic exam trap: minimum storage durations. Standard-IA and One Zone-IA require 30 days, Glacier Instant / Flexible Retrieval require 90 days, Deep Archive requires 180 days. Moving data to an archive class and immediately deleting it incurs the full minimum-duration charge.
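The early-deletion charge described above can be modeled directly: the unmet portion of the minimum duration is billed at the class's storage rate, pro-rated by day. A minimal sketch using the durations from the text and illustrative pricing:

```python
# Early-deletion charge: remaining days of the minimum duration are billed
# at the class's storage rate. Pricing figures are illustrative.
MIN_DAYS = {
    "STANDARD_IA": 30, "ONEZONE_IA": 30,
    "GLACIER_IR": 90, "GLACIER": 90,
    "DEEP_ARCHIVE": 180,
}

def early_deletion_fee(storage_class: str, days_stored: int,
                       gb: float, gb_month_price: float) -> float:
    """Pro-rated charge for the unmet part of the minimum storage duration."""
    remaining_days = max(0, MIN_DAYS[storage_class] - days_stored)
    return gb * gb_month_price * remaining_days / 30  # 30-day billing month

# 100 GB deleted from Deep Archive after 30 days at ~$0.00099/GB-month:
# 150 remaining days are still billed (~$0.50).
```

Delete on or after the minimum duration and the fee is zero, which is exactly the trap the exams probe.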

Frequently Asked Questions

Q: What's the difference between S3 Standard-IA and S3 One Zone-IA?

A: Both have identical retrieval fees, 30-day minimum storage, and 128 KB minimum billable size. The difference is durability zone coverage: Standard-IA replicates across three or more Availability Zones; One Zone-IA stores one copy in a single AZ. One Zone-IA costs about 20% less but is unsuitable for data you can't easily recreate — if that AZ has a prolonged outage, data is unavailable, and if it's destroyed, the data is lost.

Q: Is S3 Intelligent-Tiering a good default?

A: Usually yes, if objects are above the 128 KB minimum billable size. Intelligent-Tiering costs slightly more than Standard for Frequent-access data (monitoring fee per object per month), but it automatically moves objects to Infrequent, Archive Instant, Archive, or Deep Archive tiers as access patterns change — avoiding the lifecycle-rule and early-deletion gotchas. For predictable workloads (always-hot or always-cold), a manual lifecycle policy is cheaper.

Q: How do I avoid early-deletion charges?

A: Two strategies. First, be conservative with lifecycle transitions — don't move data to Glacier until you're confident you won't need it for at least 90 days (180 for Deep Archive). Second, use Intelligent-Tiering, which has no minimum storage durations on its Frequent / Infrequent / Archive Instant tiers (Archive and Deep Archive tiers still have minimum durations if you opt into them). For compliance archives that must survive exactly N years, Deep Archive with Object Lock is the cheapest path.


This article reflects AWS features and pricing as of 2026. AWS services evolve rapidly — always verify against the official S3 storage classes documentation before making production decisions.

Published: 4/16/2026

