DynamoDB TTL: What It Is and When to Use It

Definition

Amazon DynamoDB Time to Live (TTL) is a feature that automatically deletes expired items from a table without consuming write throughput. It simplifies data lifecycle management by allowing you to define a per-item timestamp for expiration, making it ideal for managing transient data like session states, logs, and temporary cache entries.

How It Works

DynamoDB TTL operates through a managed, asynchronous background process. To use it, you first enable TTL on a specific table and designate an attribute to store the expiration time. This designated attribute must contain a timestamp in Unix epoch format, representing the number of seconds elapsed since 00:00:00 UTC on January 1, 1970.

Here's the typical flow:

  1. Enable TTL: You activate TTL on a DynamoDB table via the AWS Management Console, AWS Command Line Interface (CLI), or an AWS SDK, specifying the name of the attribute that will hold the expiration timestamp (e.g., expirationTime).
  2. Write Items with TTL Attribute: When your application writes an item to the table, it calculates an expiration timestamp and includes it as a Number attribute. For example, to expire a session token in 24 hours, you would set the attribute's value to the current epoch time plus 86,400 seconds.
  3. Background Scanning: DynamoDB's managed background process continuously scans the table for items where the TTL attribute's timestamp is in the past.
  4. Asynchronous Deletion: Once an item is identified as expired, it is marked for deletion. The actual deletion is not instantaneous and is performed by the background process. This deletion operation does not consume any of your table's provisioned or on-demand Write Capacity Units (WCUs).
  5. Stream Integration: If you have Amazon DynamoDB Streams enabled on the table, each TTL deletion generates a delete event. This allows you to capture expired items for downstream processing, such as archiving them to Amazon S3 using an AWS Lambda function.
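The first two steps of this flow can be sketched with boto3, the AWS SDK for Python. The table name `sessions`, the key attribute `sessionId`, and the TTL attribute `expirationTime` are illustrative assumptions; the two AWS calls require credentials and a real table, so the pure timestamp helper is kept separate:

```python
import time


def expiration_epoch(ttl_seconds, now=None):
    """Return a Unix epoch timestamp, in SECONDS, ttl_seconds from now."""
    base = time.time() if now is None else now
    return int(base) + ttl_seconds


def enable_ttl(table_name="sessions", attr="expirationTime"):
    """Enable TTL on a table (requires AWS credentials; names are assumed)."""
    import boto3
    client = boto3.client("dynamodb")
    client.update_time_to_live(
        TableName=table_name,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": attr},
    )


def put_session(session_id, token):
    """Write a session item that expires in 24 hours (86,400 seconds)."""
    import boto3
    table = boto3.resource("dynamodb").Table("sessions")
    table.put_item(Item={
        "sessionId": session_id,  # partition key (assumed schema)
        "token": token,
        # Must be a Number attribute holding epoch SECONDS, not milliseconds
        "expirationTime": expiration_epoch(86_400),
    })
```

Note that `expirationTime` is written as an ordinary Number attribute; nothing about the item itself is special until TTL is enabled on the table and pointed at that attribute name.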

A critical aspect to understand is that TTL deletion is a best-effort process. DynamoDB typically deletes expired items within 48 hours of their expiration time. Consequently, your application's queries and scans might retrieve items that have passed their expiration time but have not yet been deleted by the background process. It is a best practice for your application logic to include a filter expression to explicitly exclude items whose TTL timestamp is in the past.
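That read-side guard might look like the following sketch (the `sessions` table and `expirationTime` attribute are the same assumed names as above; the pure `is_live` helper applies the identical check to items already in memory):

```python
import time


def is_live(item, attr="expirationTime", now=None):
    """True if the item has no TTL attribute or its TTL is in the future."""
    ts = item.get(attr)
    if ts is None:
        return True  # items without the attribute never expire
    current = time.time() if now is None else now
    return ts > current


def scan_live_items(table_name="sessions", attr="expirationTime"):
    """Scan a table, excluding items that expired but are not yet deleted."""
    import boto3
    from boto3.dynamodb.conditions import Attr
    table = boto3.resource("dynamodb").Table(table_name)
    resp = table.scan(FilterExpression=Attr(attr).gt(int(time.time())))
    return resp["Items"]
```

One caveat: a filter expression is applied after items are read, so the filtered-out expired items still consume read capacity; the filter only keeps them out of your results.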

Key Features and Limits

  • Cost: Enabling TTL and the subsequent automated deletions are free of charge. You only pay for the standard storage, read, and write costs for the items before they are deleted.
  • Performance: The TTL deletion process runs in the background and does not consume your table's read or write capacity, preserving your table's performance for regular application traffic.
  • Attribute Requirements: The designated TTL attribute must be of the Number data type and its value must be a Unix epoch timestamp in seconds. Using milliseconds will result in items expiring thousands of years in the future.
  • Deletion Timing: Deletions are not immediate. While items are often removed much sooner, DynamoDB deletes expired items on a best-effort basis, typically within 48 hours of expiration. This feature should not be used for workloads requiring guaranteed, instantaneous deletion.
  • DynamoDB Streams Integration: TTL deletions can be captured by DynamoDB Streams, appearing as system-driven deletes. This enables powerful patterns like archiving expired data to Amazon S3 for long-term storage and analysis.
  • Global Tables: When used with Global Tables, a TTL delete in one region is replicated to all other replica tables. The initial deletion does not consume write capacity, but the replicated deletes in other regions do consume replicated write capacity units (rWCUs).
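The Streams integration relies on a documented detail: TTL deletions appear in the stream as `REMOVE` records whose `userIdentity` has `type` `"Service"` and `principalId` `"dynamodb.amazonaws.com"`, which distinguishes them from application-issued deletes. A hedged Lambda sketch for the archiving pattern (the S3 bucket name is hypothetical, and capturing `OldImage` assumes the stream view type is `OLD_IMAGE` or `NEW_AND_OLD_IMAGES`):

```python
import json


def is_ttl_delete(record):
    """True if a DynamoDB Streams record is a TTL-driven delete.

    TTL deletions are REMOVE events whose userIdentity identifies the
    DynamoDB service itself, not an application caller.
    """
    identity = record.get("userIdentity") or {}
    return (
        record.get("eventName") == "REMOVE"
        and identity.get("type") == "Service"
        and identity.get("principalId") == "dynamodb.amazonaws.com"
    )


def handler(event, context):
    """Lambda sketch: archive TTL-expired items to S3 (bucket is assumed)."""
    import boto3
    s3 = boto3.client("s3")
    for record in event.get("Records", []):
        if not is_ttl_delete(record):
            continue  # skip inserts, updates, and application deletes
        old_image = record["dynamodb"].get("OldImage", {})
        key = record["dynamodb"]["Keys"]
        s3.put_object(
            Bucket="my-ttl-archive",  # hypothetical bucket name
            Key=f"expired/{json.dumps(key, sort_keys=True)}.json",
            Body=json.dumps(old_image).encode(),
        )
```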

Common Use Cases

  • Session Management: Automatically purging expired user session data from web and mobile applications to keep the session table clean and efficient.
  • Caching: Storing temporary cached data from APIs or expensive computations that can be safely discarded after a certain period.
  • Log and Event Data Management: Retaining application logs, IoT sensor readings, or event data for a specific period (e.g., 30 or 90 days) before automatically deleting them to control storage costs.
  • Temporary Data: Managing the lifecycle of short-lived data, such as one-time passwords (OTPs), incomplete form submissions, or items in a shopping cart.
  • Compliance and Data Retention: Enforcing data retention policies by automatically removing sensitive data after a mandated period to comply with regulations like GDPR.

Pricing Model

The Amazon DynamoDB TTL feature itself is offered at no additional cost. You are not billed for the delete operations performed by the TTL background service.

Your costs are based on standard DynamoDB pricing dimensions:

  • Data Storage: You pay for storing items, including the TTL attribute, until they are deleted. Automatically removing data with TTL helps reduce long-term storage costs.
  • Read and Write Throughput: You are billed for the read and write requests made by your application, but not for the TTL deletes.
  • Associated Services: If you integrate TTL with other AWS services, their respective costs will apply. For example, if you enable DynamoDB Streams to capture TTL deletes and trigger an AWS Lambda function to archive the data to Amazon S3, you will incur costs for stream reads, Lambda invocations, and S3 storage.

For a detailed estimate, always refer to the official AWS Pricing Calculator.

Pros and Cons

Pros:

  • Cost-Effective: Eliminates the cost of write capacity that would be consumed by manual delete operations.
  • Fully Managed: Offloads the operational burden of running custom scripts or applications to purge old data.
  • Performance-Neutral: Does not impact the provisioned throughput available to your application.
  • Simplified Architecture: Reduces application complexity by removing the need for custom data lifecycle management logic.

Cons:

  • Delayed Deletion: The deletion is not instantaneous and can take up to 48 hours, making it unsuitable for use cases that require immediate data removal.
  • Application-Side Filtering Required: Because deletion is not immediate, applications must filter out expired items from read results to ensure data consistency.
  • Single TTL Attribute: A table can only have one attribute designated for TTL.
  • Strict Timestamp Format: Requires a Unix epoch timestamp in seconds, which may necessitate data conversion at the application layer.
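The last two cons above invite a small conversion guard at the application layer. A minimal sketch, assuming timezone-aware datetimes, that also catches the classic seconds-versus-milliseconds bug:

```python
from datetime import datetime, timedelta, timezone


def to_ttl_epoch(dt):
    """Convert an aware datetime to a TTL-compatible epoch in SECONDS.

    Rejects naive datetimes (ambiguous offset from UTC) and values that
    look like milliseconds, which would push expiry thousands of years
    into the future so TTL would effectively never fire.
    """
    if dt.tzinfo is None:
        raise ValueError("use a timezone-aware datetime to avoid UTC drift")
    epoch = int(dt.timestamp())
    if epoch > 10**11:  # roughly year 5138; anything larger is surely ms
        raise ValueError(f"{epoch} looks like milliseconds, not seconds")
    return epoch


# e.g. expire an item 30 days from now
expires = to_ttl_epoch(datetime.now(timezone.utc) + timedelta(days=30))
```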

Comparison with Alternatives

  • Manual Deletion (Scheduled Lambda/Cron Job): The primary alternative is to write a custom solution, such as a scheduled AWS Lambda function, that scans the table and deletes items based on a timestamp. While this approach provides precise control over deletion timing, it has significant drawbacks: it consumes table read and write capacity (which can be expensive), requires careful error handling and monitoring, and adds operational overhead.
  • Amazon S3 Lifecycle Policies: For unstructured object data, S3 Lifecycle Policies provide a similar automated deletion capability. While both manage data lifecycles, DynamoDB TTL is designed for structured NoSQL items within a database, whereas S3 Lifecycle Policies are for objects in cloud storage. S3 policies also support transitioning objects to cheaper storage tiers (like S3 Glacier), a feature not applicable to DynamoDB.
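For contrast, here is roughly what the manual scan-and-delete alternative looks like. This sketch assumes the same illustrative `sessions` table with a `sessionId` partition key; unlike TTL, every scan page and every delete here consumes read and write capacity, and real code would also need pagination and error handling:

```python
import time


def select_expired(items, attr="expirationTime", now=None):
    """Pure selection step: items whose TTL timestamp is in the past."""
    current = time.time() if now is None else now
    return [it for it in items if it.get(attr) is not None and it[attr] <= current]


def purge_expired(table_name="sessions", key_attrs=("sessionId",)):
    """Scan-and-delete sketch: the approach TTL replaces."""
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    resp = table.scan()  # real code must paginate via LastEvaluatedKey
    with table.batch_writer() as batch:  # groups deletes, 25 per request
        for item in select_expired(resp["Items"]):
            batch.delete_item(Key={k: item[k] for k in key_attrs})
```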

Exam Relevance

DynamoDB TTL is a frequent topic on several AWS certification exams, particularly those aimed at developers, architects, and database specialists.

  • Relevant Certifications: AWS Certified Developer - Associate (DVA-C02), AWS Certified Solutions Architect - Associate (SAA-C03), AWS Certified Data Engineer - Associate (DEA-C01), and the now-retired AWS Certified Database - Specialty (DBS-C01).
  • Key Knowledge Areas:
    • Cost-Effectiveness: Candidates must know that TTL is the most cost-effective method for deleting expired items because the deletes are free.
    • Performance: Understand that TTL operations do not consume the table's WCUs.
    • Deletion Behavior: The concept of non-immediate, eventual deletion (up to 48 hours) is a critical detail often tested.
    • Use Cases: Be prepared to identify scenarios (like session state or log management) where TTL is the optimal solution.
    • Integration: Know that TTL integrates with DynamoDB Streams to enable archiving patterns.

Exam questions often present a scenario and ask for the most operationally efficient or cost-effective solution for managing transient data, for which TTL is almost always the correct answer.

Frequently Asked Questions

Q: How long does it take for DynamoDB TTL to delete an expired item?

A: The deletion process is not instantaneous. While items are often deleted much sooner, Amazon DynamoDB typically deletes expired items within 48 hours of their specified expiration time. Your application should not rely on immediate deletion and should be designed to filter out items that have expired but have not yet been removed by the service.

Q: Does using DynamoDB TTL consume my table's write capacity?

A: No. The delete operations performed by the TTL background process are completely free and do not consume any of your table's provisioned or on-demand Write Capacity Units (WCUs). This makes it a highly cost-effective and performance-friendly solution for data lifecycle management.

Q: What happens if I add a TTL attribute to an existing item after enabling TTL on the table?

A: If you update an existing item to include the designated TTL attribute with a valid timestamp, that item becomes subject to the TTL process. If the timestamp you add is in the past, the item will become eligible for deletion by the next run of the TTL background scanner.
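Backfilling the attribute onto existing items is an ordinary `UpdateItem` call. A sketch, reusing the same assumed `sessions`/`sessionId`/`expirationTime` names (the pure `ttl_timestamp` helper does the arithmetic; the boto3 call requires credentials):

```python
import time


def ttl_timestamp(ttl_seconds, now):
    """Epoch seconds ttl_seconds after the given reference time."""
    return int(now) + ttl_seconds


def backfill_expiration(session_id, ttl_seconds=86_400, table_name="sessions"):
    """Add (or overwrite) the TTL attribute on an existing item.

    If the computed timestamp were already in the past, the item would
    become eligible for the next TTL sweep. Names are illustrative.
    """
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    expires = ttl_timestamp(ttl_seconds, time.time())
    table.update_item(
        Key={"sessionId": session_id},
        UpdateExpression="SET expirationTime = :exp",
        ExpressionAttributeValues={":exp": expires},
    )
    return expires
```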


This article reflects AWS features and pricing as of 2026. AWS services evolve rapidly — always verify against the official AWS documentation before making production decisions.

Published: 5/7/2026 / Updated: 5/7/2026
