Amazon DynamoDB Streams: What It Is and When to Use It
Definition
Amazon DynamoDB Streams is a change data capture (CDC) feature of DynamoDB that emits a time-ordered sequence of item-level modifications — inserts, updates, and deletes — made to a DynamoDB table. Each stream record describes a single change and is retained for 24 hours, during which consumers (most commonly AWS Lambda) can read the records and trigger downstream processing. Streams are the foundation of DynamoDB's own Global Tables replication and the preferred primitive for building event-driven, serverless microservices around a DynamoDB table.
How It Works
When you enable Streams on a DynamoDB table, every successful write produces a stream record containing:
- A sequence number that orders the record within its partition key.
- The event type: INSERT, MODIFY, or REMOVE.
- A view of the changed item, controlled by the stream view type (see below).
- Metadata: the event ID, source Region, table name, and the approximate creation time of the record.
Internally the stream is partitioned to match the underlying DynamoDB table's partitioning — records are ordered per partition key but not globally. Consumers iterate through shards using the Kinesis-like iterator API, or they subscribe a Lambda event source mapping that handles polling, checkpointing, and error handling automatically.
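The ordering guarantee above can be sketched with sample records (the data and field names here are illustrative, not real API output): records for the same partition key carry increasing sequence numbers, while records for different keys may interleave arbitrarily.

```python
# Sketch: per-partition-key ordering in a DynamoDB stream (made-up sample data).
def is_ordered_per_key(records):
    """Check that sequence numbers increase within each partition key."""
    last_seq = {}
    for rec in records:
        key = rec["partition_key"]
        seq = int(rec["sequence_number"])
        if key in last_seq and seq <= last_seq[key]:
            return False
        last_seq[key] = seq
    return True

# Interleaved records from two partition keys: still valid, because each
# key's own subsequence is ordered. No cross-key order is implied.
sample = [
    {"partition_key": "user#1", "sequence_number": "100"},
    {"partition_key": "user#2", "sequence_number": "50"},
    {"partition_key": "user#1", "sequence_number": "101"},
    {"partition_key": "user#2", "sequence_number": "51"},
]
print(is_ordered_per_key(sample))  # True
```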
Alternatively, you can enable Kinesis Data Streams for DynamoDB to send the same change records to a Kinesis Data Stream with 365-day retention and access to the broader Kinesis ecosystem (Firehose, Data Analytics, Flink).
Key Features and Limits
Stream view types
- KEYS_ONLY — only the primary key of the modified item.
- NEW_IMAGE — the entire item after modification.
- OLD_IMAGE — the entire item before modification.
- NEW_AND_OLD_IMAGES — both, useful for diffs or auditing.
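A quick sketch of what each view type puts in a stream record's `dynamodb` section (field names mirror the real record shape; the helper function and the order data are made up for illustration):

```python
# Sketch: which images each stream view type carries. The payload shape
# mirrors a real stream record's "dynamodb" section; data is hypothetical.
def stream_payload(view_type, keys, old_image=None, new_image=None):
    payload = {"Keys": keys, "StreamViewType": view_type}
    if view_type in ("NEW_IMAGE", "NEW_AND_OLD_IMAGES") and new_image is not None:
        payload["NewImage"] = new_image
    if view_type in ("OLD_IMAGE", "NEW_AND_OLD_IMAGES") and old_image is not None:
        payload["OldImage"] = old_image
    return payload

keys = {"pk": {"S": "order#42"}}
old = {"pk": {"S": "order#42"}, "status": {"S": "PENDING"}}
new = {"pk": {"S": "order#42"}, "status": {"S": "SHIPPED"}}

print(stream_payload("KEYS_ONLY", keys))                     # key only
print(stream_payload("NEW_AND_OLD_IMAGES", keys, old, new))  # both images
```

Note the attribute values use DynamoDB's typed format (`{"S": ...}`, `{"N": ...}`), which is how images appear in real stream records.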
Retention and ordering
- 24-hour retention on a native DynamoDB Stream — not configurable. Consumers must keep up or records are lost.
- Stream records are ordered per partition key; cross-partition-key ordering is not guaranteed.
- Each record appears exactly once in the stream.
- Kinesis Data Streams for DynamoDB gives you 365-day retention and tunable read patterns.
Consumption patterns
- Lambda event source mapping — the most common pattern. Lambda polls the stream, invokes your function in batches (up to 10,000 records or 6 MB), checkpoints on success, and supports batching windows, parallelization factor, bisect-on-error, and OnFailure destinations for poison-pill handling.
- KCL adapter for DynamoDB Streams — use the Kinesis Client Library in a self-managed consumer.
- Kinesis Data Streams for DynamoDB — if enabled, consumers use standard Kinesis APIs and can fan out to Firehose, Managed Service for Apache Flink, or custom consumers.
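The Lambda pattern can be sketched as a handler that returns partial batch failures (this assumes ReportBatchItemFailures is enabled on the event source mapping; `process_record` is a placeholder for your own logic):

```python
# Sketch of a Lambda handler for a DynamoDB Streams event source mapping.
# Assumes the mapping has ReportBatchItemFailures enabled, so the returned
# batchItemFailures list tells Lambda which records to retry.
def process_record(record):
    name = record["eventName"]            # INSERT, MODIFY, or REMOVE
    keys = record["dynamodb"]["Keys"]
    if name == "REMOVE":
        pass  # placeholder: e.g. delete the item from a search index
    else:
        pass  # placeholder: e.g. upsert the new image into a derived store

def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            process_record(record)
        except Exception:
            # Report the failing record's sequence number; Lambda retries
            # the batch from this record onward.
            failures.append(
                {"itemIdentifier": record["dynamodb"]["SequenceNumber"]}
            )
    return {"batchItemFailures": failures}
```

Returning an empty `batchItemFailures` list tells Lambda the whole batch succeeded and the checkpoint can advance.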
Limits
- AWS recommends no more than 2 simultaneous consumers per shard on a native DynamoDB Stream; more risks read throttling.
- Maximum record size approaches item size (400 KB) in NEW_AND_OLD_IMAGES mode.
- Changing view type requires disabling and re-enabling the stream.
Common Use Cases
- Event-driven microservices — a write to DynamoDB triggers a Lambda that publishes domain events to EventBridge or SNS for downstream services.
- Secondary index / derived table maintenance — keep a search-optimized copy of data in OpenSearch, a denormalized aggregate in another DynamoDB table, or a reporting dataset in S3.
- Cross-Region replication — before Global Tables, Streams + Lambda was the DIY replication pattern; Global Tables now uses Streams internally.
- Audit and compliance — capture NEW_AND_OLD_IMAGES to a durable log (S3 via Firehose) for immutable change history.
- Cache invalidation — on write, push invalidations to ElastiCache or CloudFront.
- Real-time analytics — stream item changes to Kinesis/Flink for aggregations or anomaly detection.
- Notifications — when an order status changes, fan out an SNS message to email/SMS.
Pricing Model
- Native DynamoDB Streams reads — billed per streams read request unit (sRRU); each GetRecords call counts as one sRRU and returns up to 1 MB of data. Reads performed by AWS Lambda triggers (and by Global Tables replication) are free, which is why Lambda-based consumption usually adds no Streams read cost.
- Writes to the stream are not charged separately — the cost is absorbed by the table's write activity.
- Kinesis Data Streams for DynamoDB — adds Kinesis shard-hour or On-Demand charges on top of standard DynamoDB rates; gives 365-day retention and many consumers.
- Lambda — billed for invocations and GB-second duration as usual.
- Storage — the 24-hour stream retention is free; storing change data beyond that in S3/Kinesis has its own costs.
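For a self-managed (non-Lambda) consumer, the sRRU cost is easy to estimate from polling frequency. A back-of-envelope sketch, with an illustrative placeholder price rather than a quoted AWS rate:

```python
# Back-of-envelope sketch: sRRU cost for a self-managed consumer that pays
# per GetRecords call (Lambda-trigger reads would be free). The price below
# is an illustrative placeholder, not a quoted AWS rate — check the pricing page.
def monthly_srru_cost(getrecords_calls_per_sec, price_per_srru=0.0000002):
    seconds_per_month = 30 * 24 * 3600          # ~2.59M seconds
    calls = getrecords_calls_per_sec * seconds_per_month
    return calls * price_per_srru

# e.g. a KCL consumer polling 4 shards once per second:
print(round(monthly_srru_cost(4), 2))
```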
Pros and Cons
Pros
- Built-in CDC with no extra infrastructure.
- Each change appears exactly once in the stream, ordered per partition key.
- Native Lambda integration removes polling/checkpointing boilerplate.
- Enables loose coupling and clean event-driven architectures.
- Same primitive powers Global Tables replication.
Cons
- 24-hour retention is a ceiling for consumer downtime — catch-up after an outage longer than a day means data loss (without Kinesis Data Streams for DynamoDB).
- Only 2 simultaneous consumers before throttling.
- Ordering is per partition key, not global — aggregates that span partitions need care.
- Filtering happens at the consumer — the stream itself emits every change, though Lambda event source mappings support filter criteria that discard unwanted records before your function is invoked.
- Schema changes require coordination between producers and consumers.
Comparison with Alternatives
| | DynamoDB Streams | Kinesis Data Streams for DynamoDB | Global Tables |
| --- | --- | --- | --- |
| Retention | 24 hours | Up to 365 days | Replicated continuously |
| Consumers | 2 recommended | Many | Internal |
| Use case | Lambda triggers, derived state | High-retention CDC, multi-consumer | Multi-Region active-active |
| Cost | Lambda-friendly | Kinesis rates + DDB | Replicated write units |
Pick Streams + Lambda for small-scale event-driven patterns. Pick Kinesis Data Streams for DynamoDB when you need many consumers, 365-day retention, or integration with Flink/Firehose. Pick Global Tables when the goal is multi-Region replication of the table itself.
Exam Relevance
- Solutions Architect Associate (SAA-C03) — DynamoDB Streams + Lambda is a staple event-driven pattern; NEW_IMAGE vs OLD_IMAGE view types; 24-hour retention.
- Developer Associate (DVA-C02) — heavy coverage: event source mappings, batch size, failure destinations, poison-pill handling with bisect-on-error, idempotency.
- Database Specialty (DBS-C01) — Streams as the foundation of Global Tables; Kinesis Data Streams for DynamoDB for extended retention; CDC patterns for analytics.
Exam trap: "capture every change to the table for a Lambda that updates OpenSearch" → enable Streams with NEW_IMAGE and a Lambda event source mapping. "retain changes for more than 24 hours" → Kinesis Data Streams for DynamoDB.
Frequently Asked Questions
Q: What's the difference between DynamoDB Streams and Kinesis Data Streams for DynamoDB?
A: DynamoDB Streams is the native, built-in CDC feature with 24-hour retention, two simultaneous consumers, and tight Lambda integration. Kinesis Data Streams for DynamoDB is an optional feature that sends the same change records to a Kinesis Data Stream, giving you up to 365 days of retention, many concurrent consumers, and integration with the Kinesis ecosystem (Firehose, Managed Service for Apache Flink, Kinesis Data Analytics). Pick Streams for simple Lambda fan-out; pick Kinesis Data Streams for DynamoDB when you need long retention or sophisticated stream processing.
Q: Which stream view type should I choose?
A: KEYS_ONLY is cheapest and smallest — use it when the consumer only needs to know an item changed and will re-fetch the current state. NEW_IMAGE is the most common pick for downstream replication (e.g., maintaining a search index) because consumers need the latest state. OLD_IMAGE is useful for deletion workflows (archive the prior value before it's gone). NEW_AND_OLD_IMAGES is the richest — use it for audit logs or when computing diffs matters, accepting the higher storage and transfer cost.
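The diff/audit case for NEW_AND_OLD_IMAGES can be sketched as a pure function over the two images (the helper and sample data are hypothetical; images use DynamoDB's typed attribute format, as in real stream records):

```python
# Sketch: computing a per-attribute diff from a NEW_AND_OLD_IMAGES record,
# e.g. for an audit trail. Attribute values are in DynamoDB's typed format.
def diff_images(old_image, new_image):
    changed = {}
    for attr in set(old_image) | set(new_image):
        before, after = old_image.get(attr), new_image.get(attr)
        if before != after:
            changed[attr] = {"old": before, "new": after}
    return changed

old = {"status": {"S": "PENDING"}, "total": {"N": "25"}}
new = {"status": {"S": "SHIPPED"}, "total": {"N": "25"}}
print(diff_images(old, new))  # only "status" changed
```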
Q: How do I handle processing failures in a Lambda consumer of DynamoDB Streams?
A: Lambda event source mappings for Streams support several resilience features. Configure a batch size (default 100) and maximum batching window to tune throughput. Enable bisect batch on function error so Lambda splits a failing batch in half repeatedly to isolate the poison record. Set an OnFailure destination (SQS or SNS) to receive records that can't be processed after all retry attempts. Keep your function idempotent — exactly-once stream delivery plus Lambda retries still means your function will occasionally see the same record twice, especially around shard rebalances.
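The idempotency point can be sketched by deduplicating on `eventID`, which is unique per stream record (the class below is a hypothetical illustration; a production dedupe store would be durable, e.g. a DynamoDB table with a conditional put, not an in-memory set):

```python
# Sketch: idempotent record processing by deduplicating on eventID.
# The in-memory set stands in for a durable dedupe store.
class IdempotentProcessor:
    def __init__(self, apply_fn):
        self.apply_fn = apply_fn   # your side-effecting logic
        self.seen = set()          # stand-in for a durable dedupe store

    def process(self, record):
        event_id = record["eventID"]
        if event_id in self.seen:
            return False  # duplicate delivery (e.g. a Lambda retry): skip
        self.apply_fn(record)
        self.seen.add(event_id)
        return True

applied = []
p = IdempotentProcessor(lambda r: applied.append(r["eventID"]))
rec = {"eventID": "abc123", "eventName": "MODIFY"}
print(p.process(rec), p.process(rec))  # True False
print(applied)                         # ['abc123']
```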
This article reflects AWS features and pricing as of 2026. AWS services evolve rapidly — always verify against the official DynamoDB Streams documentation before making production decisions.