RDS Proxy: What It Is and When to Use It
Definition
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) and Amazon Aurora. It sits between your application and your database to pool and share connections, improving application scalability, resilience to database failures, and security.
How It Works
Instead of connecting directly to the database endpoint, your application connects to an RDS Proxy endpoint. The proxy maintains a pool of established database connections and shares them across many application connections, a process known as connection multiplexing. This avoids the memory and CPU overhead of repeatedly opening and closing database connections, which is especially beneficial for applications with many short-lived connections, such as those built on serverless technologies like AWS Lambda.
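On the application side, the usual companion pattern is to open the connection once per Lambda execution environment and reuse it across warm invocations. Here is a minimal sketch: `connect_fn` stands in for whichever DB-API driver your application uses (e.g. pymysql or psycopg2), and the `PROXY_ENDPOINT`/`DB_USER`/`DB_NAME` environment variable names are illustrative, not AWS-defined.

```python
import os

# Module scope survives across warm Lambda invocations, so the socket to
# the RDS Proxy endpoint is opened once per container, not once per event.
_connection = None


def get_connection(connect_fn):
    """Open one connection per container and reuse it thereafter.

    connect_fn stands in for any DB-API driver's connect() (pymysql,
    psycopg2, ...). The proxy endpoint is read from the environment
    rather than hard-coded.
    """
    global _connection
    if _connection is None:
        _connection = connect_fn(
            host=os.environ["PROXY_ENDPOINT"],  # the proxy, not the DB itself
            user=os.environ["DB_USER"],
            database=os.environ["DB_NAME"],
        )
    return _connection
```

The client side reuses one connection; the proxy handles multiplexing on the server side, so many concurrent Lambda containers still share a small pool of actual database connections.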
When a database in a Multi-AZ configuration fails over, RDS Proxy automatically routes traffic to the new primary instance, significantly reducing downtime. It can reduce failover times by up to 66% for Amazon Aurora and Amazon RDS by preserving client connections through the failover process. This makes applications more resilient without requiring complex failover logic in the application code.
For security, RDS Proxy integrates with AWS Identity and Access Management (IAM) and AWS Secrets Manager. This allows you to enforce IAM authentication for database access and centrally manage database credentials, avoiding the need to hard-code them in your application. As of late 2025, RDS Proxy also supports end-to-end IAM authentication for MySQL and PostgreSQL, which removes the need to store credentials in Secrets Manager at all.
Key Features and Limits
- Supported Engines: Amazon Aurora (MySQL and PostgreSQL compatible), Amazon RDS for MySQL, PostgreSQL, MariaDB, and SQL Server.
- Connection Pooling: Efficiently reuses database connections to reduce overhead and allow applications to handle a high number of concurrent connections.
- Improved Availability: Reduces database failover times by up to 66% for Aurora and RDS Multi-AZ instances by preserving application connections during failovers.
- Enhanced Security: Enforces IAM-based authentication and centralizes credential management via AWS Secrets Manager. It also uses TLS certificates from AWS Certificate Manager (ACM), so you don't need to manage certificate rotation for proxy connections.
- Fully Managed: RDS Proxy is serverless and scales its capacity automatically based on your workload, eliminating the need to provision or manage proxy infrastructure.
- Connection Pinning: In some cases, a client connection is "pinned" to a specific database connection for the entire session, which prevents multiplexing. This can happen when session-level settings are changed (e.g., via SET commands) or temporary tables are used. You can monitor pinning using Amazon CloudWatch metrics.
- Service Quotas (as of 2026):
- Each AWS account is limited to 20 proxies per region.
- Each proxy can have up to 200 associated Secrets Manager secrets.
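To watch for pinning, you can query the proxy's `DatabaseConnectionsCurrentlySessionPinned` metric (namespace `AWS/RDS`, dimension `ProxyName`). The helper below only builds the GetMetricData query structure; the commented usage showing how it would be sent to CloudWatch assumes boto3 and valid credentials:

```python
def pinning_metric_query(proxy_name: str) -> dict:
    """Build a CloudWatch GetMetricData query for RDS Proxy session pinning.

    A persistently high value for this metric means multiplexing is being
    defeated and the proxy is behaving like a pass-through.
    """
    return {
        "Id": "pinned",
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/RDS",
                "MetricName": "DatabaseConnectionsCurrentlySessionPinned",
                "Dimensions": [{"Name": "ProxyName", "Value": proxy_name}],
            },
            "Period": 300,  # five-minute resolution
            "Stat": "Average",
        },
    }


# Usage sketch (requires boto3 and AWS credentials):
# from datetime import datetime, timedelta, timezone
# import boto3
# cw = boto3.client("cloudwatch")
# out = cw.get_metric_data(
#     MetricDataQueries=[pinning_metric_query("my-proxy")],
#     StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
#     EndTime=datetime.now(timezone.utc),
# )
```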
Common Use Cases
- Serverless Applications: AWS Lambda functions and other serverless compute services often open and close database connections at a high rate. RDS Proxy pools these connections, preventing database connection exhaustion and reducing latency from new connection setup.
- Applications with Many Connections: Applications like those for SaaS or eCommerce may open a large number of connections, many of which can be idle. RDS Proxy efficiently manages these connections, allowing the database to support more clients without needing to be oversized.
- Improving Failover Resilience: For any application requiring high availability, RDS Proxy abstracts the database failover process. It automatically and quickly routes connections to the standby instance, minimizing application disruption without needing custom handling code.
- Centralizing Database Security: By enforcing IAM authentication and integrating with AWS Secrets Manager, RDS Proxy provides a centralized and secure way to manage database access, which is ideal for organizations with strict compliance and security requirements.
Pricing Model
Amazon RDS Proxy pricing is based on the capacity of the underlying database instance it is associated with.
- For provisioned Amazon RDS and Aurora instances, you pay per vCPU per hour for the database instance that the proxy is enabled for.
- For Amazon Aurora Serverless v2, you pay per Aurora Capacity Unit (ACU) per hour consumed by your database.
Billing is done in one-second increments with a 10-minute minimum charge. There are no additional charges for the default proxy endpoint, but creating additional read-only or read/write endpoints will incur charges for the underlying AWS PrivateLink interface endpoints. Data transfer between the proxy and the database in the same Availability Zone is free.
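The per-vCPU-hour model means proxy cost tracks the database's size, not its connection count. A toy estimator makes that concrete; the rate is a parameter and the `$0.015` figure in the comment is a hypothetical placeholder, not a real AWS price:

```python
def monthly_proxy_cost(vcpus: int, rate_per_vcpu_hour: float, hours: float = 730.0) -> float:
    """Estimate monthly RDS Proxy cost for a provisioned instance.

    rate_per_vcpu_hour is a placeholder argument; look up the real
    per-region rate on the RDS Proxy pricing page. 730 is roughly the
    average number of hours in a month.
    """
    return vcpus * rate_per_vcpu_hour * hours


# e.g. a 4-vCPU instance at a hypothetical $0.015/vCPU-hour:
# monthly_proxy_cost(4, 0.015)  ->  4 * 0.015 * 730 = 43.80
```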
For detailed and current pricing, always consult the official Amazon RDS Proxy pricing page.
Pros and Cons
Pros:
- Improved Scalability: Connection pooling allows databases to handle thousands of connections from serverless and containerized applications efficiently.
- Higher Availability: Drastically reduces failover times, making applications more resilient to database outages.
- Enhanced Security: Centralizes access control with IAM and Secrets Manager, removing the need for hard-coded credentials.
- Managed Service: AWS handles all the patching, maintenance, and scaling of the proxy infrastructure.
Cons:
- Added Cost: RDS Proxy is an additional cost on top of the database instance itself, billed per vCPU-hour.
- Connection Pinning: The benefits of connection pooling can be negated if sessions are frequently pinned, which can happen with certain SQL commands or session settings.
- Increased Complexity: It introduces another component into the architecture that can make troubleshooting connection issues more complex.
- Limitations: Does not support all database features or versions, such as Aurora Serverless v1. It also cannot be accessed from outside its Virtual Private Cloud (VPC).
Comparison with Alternatives
RDS Proxy vs. Self-Managed Connection Poolers (e.g., PgBouncer, HikariCP):
Self-managed connection poolers are software you run on your own Amazon EC2 instances or containers. While they can provide similar connection pooling benefits, they come with significant operational overhead. You are responsible for provisioning the infrastructure, ensuring high availability, applying patches, and scaling it to meet demand.
Amazon RDS Proxy, as a fully managed service, eliminates this overhead. It is inherently highly available across multiple Availability Zones and integrates seamlessly with other AWS services like IAM, Secrets Manager, and CloudWatch. While a self-managed solution might offer more configuration flexibility or support for specific database features not covered by the proxy, RDS Proxy provides a simpler, more resilient, and more secure solution for most use cases on AWS.
Exam Relevance
Amazon RDS Proxy is a key topic on several AWS certification exams, particularly those focused on architecture, development, and databases.
- AWS Certified Solutions Architect – Associate (SAA-C03): Questions may focus on when to use RDS Proxy to improve the scalability and resilience of a serverless application or a multi-tier web application.
- AWS Certified Developer – Associate (DVA-C02): Expect questions related to using RDS Proxy to manage database connections from AWS Lambda and securing credentials with IAM and Secrets Manager.
- AWS Certified Solutions Architect – Professional (SAP-C02): Scenarios may involve complex failover strategies and designing highly available and scalable database architectures where RDS Proxy plays a critical role.
- AWS Certified Database – Specialty (DBS-C01): This exam was retired in April 2024, but its study material remains a useful deep dive into RDS Proxy's features, including connection pooling, failover benefits, security configurations, and troubleshooting concepts like connection pinning.
For all exams, it's crucial to understand the problem RDS Proxy solves (connection exhaustion, slow failovers) and its primary use cases (especially with serverless applications).
Frequently Asked Questions
Q: When should I use RDS Proxy instead of just increasing my database's max_connections?
A: Increasing max_connections on a database can lead to higher memory and CPU consumption, potentially degrading performance. RDS Proxy is more efficient because it allows many application connections to share a smaller pool of active database connections. This approach is more scalable and protects the database from being overwhelmed by a sudden surge in connection requests, especially from short-lived clients like Lambda functions.
Q: Does RDS Proxy work with Amazon Aurora Serverless?
A: Amazon RDS Proxy supports Aurora Serverless v2 but does not support Aurora Serverless v1. For Aurora Serverless v2, the proxy helps manage connections efficiently as the database scales, providing a stable entry point for applications.
Q: What is connection pinning in RDS Proxy and how can I avoid it?
A: Connection pinning is when RDS Proxy is forced to lock a client's session to a single database connection, preventing that connection from being reused by other clients. This negates the benefit of connection pooling. Pinning is typically caused by operations that change the connection state in a way the proxy cannot track, such as creating temporary tables, setting session variables, or using certain transaction isolation levels. To avoid pinning, you should avoid changing session-level configurations from the client side and ensure your application logic doesn't rely on state that persists between transactions.
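The guidance above can be turned into a small lint-style check in application code. The trigger list below is an illustrative subset drawn from the causes just named (SET statements, temporary tables, table locks), not the proxy's actual, engine-specific detection logic:

```python
import re

# Illustrative subset of statements known to pin a session; the full,
# engine-specific list is in the RDS Proxy documentation.
_PINNING_PATTERNS = [
    re.compile(r"^\s*SET\b", re.IGNORECASE),                      # session settings
    re.compile(r"\bCREATE\s+TEMPORARY\s+TABLE\b", re.IGNORECASE),  # temp tables
    re.compile(r"\bLOCK\s+TABLES?\b", re.IGNORECASE),              # explicit locks
]


def may_cause_pinning(sql: str) -> bool:
    """Heuristically flag SQL likely to pin the session to one connection."""
    return any(p.search(sql) for p in _PINNING_PATTERNS)
```

A check like this in a test suite or query wrapper can catch pinning-prone statements before they erode pooling efficiency in production; the authoritative signal remains the CloudWatch pinning metric.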
This article reflects AWS features and pricing as of 2026. AWS services evolve rapidly — always verify against the official AWS documentation before making production decisions.