Serverless and the RDS Data API: A journey to deployment

Chris Buonocore
7 min read · Feb 12, 2020

If you’ve used AWS Lambda or another serverless solution, you may have had to interact with a database while doing so. The process is pretty routine: open a new database connection, run one or more queries, and then close the connection. As long as you consistently close these connections, there’s little issue here beyond scaling the other components of your service as you gain more users.
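For illustration, here’s a minimal sketch of that routine inside a Lambda handler. This is an assumption-laden example rather than our actual code: it uses psycopg2 and placeholder connection details.

    import psycopg2

    def handler(event, context):
        # Open a fresh connection for this invocation (placeholder credentials).
        conn = psycopg2.connect(
            host="my-cluster.example.us-east-1.rds.amazonaws.com",
            dbname="app",
            user="app_user",
            password="...",
        )
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT id, email FROM users WHERE id = %s",
                            (event["user_id"],))
                row = cur.fetchone()
            return {"id": row[0], "email": row[1]}
        finally:
            # Always close, or connections pile up on the database.
            conn.close()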

If you’re new to the idea of serverless, I recommend reading this article from Martin Fowler; if you’re already familiar, read on.

At Stavvy, we use Lambda functions for asynchronous (non-time-sensitive) and synchronous (time-sensitive) processing of data. We have nearly 100 different functions responsible for a mixture of front-end API endpoints and asynchronous data processing, such as Slack notifications, audit logs, and emails.

Serverless lets you focus on business logic rather than on where your code is deployed or what hardware runs it. So what are the components mentioned above that might need scaling? In a traditional deployment model, when you encounter a bottleneck or slowdown in your service, it’s typically one of the following breaking down:

  • CPU: long-running tasks and high throughput demands.
  • Memory: expensive queries or business logic causing your service to run out of memory.
  • Network: internet outages or large data transfers causing slowdowns.
  • Software: suboptimal or slow code.
  • Database (disk): more data than was provisioned for, or too much throughput or too many connections (the focus of this article).

Deploying your application as serverless doesn’t exclude you from the above bottlenecks, but it does simplify a few other things.

Serverless functions require less administrative overhead when it comes to deployment and scaling, but this comes at a cost. Traditional server-side deployments of applications can manage connection pools to a database far more effectively. Establishing a new connection on every invocation (or even every query!) is going to slow down your functions.

One recommendation from the AWS forums was to create DB connections outside the handler to enable connection reuse and improve performance (sketched after the list below). In our experience, however, this just delays the issue rather than solving it:

  1. You have little control over how many services or distinct containers will be spun up to serve requests, so it’s difficult to put a cap on connections.
  2. There’s no good time to close connections opened outside of the handler. The inability to close connections consistently, even if there are fewer of them, requires you to be more careful to avoid connection leaks.
  3. Lambda offers no consistent shutdown hook where you could enforce closing connections.

And because there is no long-running server process, it’s difficult to find an ideal place to save connections for reuse.
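For contrast, here’s a minimal sketch of that forum recommendation, with the connection created at module scope so that warm invocations reuse it. Again, psycopg2 and the connection details are assumptions for illustration.

    import psycopg2

    # Created once per container, outside the handler, and reused across
    # warm invocations. Note there is no reliable point at which to close it.
    conn = psycopg2.connect(
        host="my-cluster.example.us-east-1.rds.amazonaws.com",
        dbname="app",
        user="app_user",
        password="...",
    )

    def handler(event, context):
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM audit_log")
            return {"count": cur.fetchone()[0]}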

To give additional context, here is a graph from our testing database, which used connections created outside of a handler or individual function:

Connections climbing on one of our testing clusters

We were leaking connections, and a dump of the connections on the database cluster revealed that they had all been left in either an “idle” or “idle in transaction” state.

Don’t worry about reading this too closely — essentially a bunch of idle transactions or connections leaked and left open on the database.
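If you want to check this on your own cluster, Postgres exposes connection states in pg_stat_activity; a quick sketch of how you might count them (placeholder credentials again):

    import psycopg2

    # Inspect connection states on the cluster.
    conn = psycopg2.connect(host="...", dbname="app", user="app_user", password="...")
    with conn.cursor() as cur:
        # Leaked connections show up as "idle" or "idle in transaction".
        cur.execute("""
            SELECT state, count(*)
            FROM pg_stat_activity
            GROUP BY state
            ORDER BY count(*) DESC
        """)
        print(cur.fetchall())
    conn.close()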

Interesting — so what are the options here?

Debugging the exact source of the leaks was difficult, but a couple of immediate solutions came to mind:

  1. Go through our logic to determine where the idle connections were coming from, and triage them one at a time.
  2. Install a service like PgBouncer, or set up an external scheduled process to clear or bounce idle connections.
  3. Move off of serverless to a traditionally deployed application with a normal connection pool, or set up an external “always on” server specifically for mediating database transactions.

These still felt a bit like band-aids. PgBouncer might take you to some scale, but requiring a service to clean up open connections is not a resilient solution, and you can still max out connections in short bursts. PgBouncer also typically isn’t an immediate option on RDS, since you don’t have a root shell on the machines running the SQL server to install the software in the usual way.

And simply finding where the leaks were occurring right now wouldn’t protect us if this became an issue again in the future.

Enter the RDS Data API.

The Data API doesn’t require a persistent connection to the DB cluster. Instead, it provides a secure HTTP endpoint and integration with AWS SDKs. You can use the endpoint to run SQL statements without managing connections.

This was huge for enabling serverless Lambda functions that require database access at scale. Upon discovering it, we took an afternoon to port over our services. Here are some of the things we had to respond to:

1. Nontraditional returned data formats

This is in comparison to many of the well-used database drivers out there today. A typical request and response from the AWS Data API looks like the following in Python:

Example response from the Data API. Taken from the AWS documentation.
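Roughly, that screenshot corresponds to a call like the sketch below using boto3’s rds-data client. The ARNs, database, and table here are placeholders rather than values from our setup.

    import boto3

    client = boto3.client("rds-data")

    response = client.execute_statement(
        resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",  # placeholder
        secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret",  # placeholder
        database="app",
        sql="SELECT id, first_name, created_at FROM users WHERE id = :id",
        parameters=[{"name": "id", "value": {"longValue": 1}}],
        includeResultMetadata=True,
    )

    # Rows come back as lists of typed field dicts rather than tuples:
    # response["records"] == [[{"longValue": 1},
    #                          {"stringValue": "Alejandro"},
    #                          {"stringValue": "2016/02/15"}]]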

We had to write custom handlers to parse the API’s output and convert it to a JSON format for our API to use. We mapped the column ordering in the response back to the original columns to get a mapping that looked more like:

{"id": 1, "firstName": "Rosalez", "Alejandro", "createdAt": '2016/02/15'}

The columns are returned in each API response from AWS, so we were able to make this mapping directly each time by position in the list.
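Conceptually, our handler for this looked something like the sketch below, relying on the columnMetadata that comes back when includeResultMetadata is enabled. The function name is made up for illustration.

    def records_to_dicts(response):
        # Zip each record's fields with the column names from the metadata.
        columns = [col["name"] for col in response["columnMetadata"]]
        rows = []
        for record in response["records"]:
            # Each field is a single-key dict like {"stringValue": "..."};
            # pull out the value regardless of which type key it uses.
            values = [None if field.get("isNull") else next(iter(field.values()))
                      for field in record]
            rows.append(dict(zip(columns, values)))
        return rows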

2. Limitations in fields supported

To avoid potential SQL injection vulnerabilities, you should escape or parameterize your data queries, rather than just inserting a raw value directly.

(Taken from https://realpython.com/prevent-python-sql-injection/)
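With the Data API, parameterizing means using the parameters argument rather than string formatting. A quick sketch of the difference, reusing the client and placeholder ARNs from the example above (user_email stands in for untrusted input):

    # Vulnerable: user input is interpolated straight into the SQL string.
    sql = f"SELECT * FROM users WHERE email = '{user_email}'"

    # Safe: the Data API binds :email as a typed parameter.
    client.execute_statement(
        resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",
        secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret",
        database="app",
        sql="SELECT * FROM users WHERE email = :email",
        parameters=[{"name": "email", "value": {"stringValue": user_email}}],
    )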

Many database drivers support type casting and type inference. For example, if you pass a value like ‘$1,000.00’ to Postgres, the driver will attempt to cast it. The RDS Data API doesn’t infer types, nor does it support every type Postgres does. You’ll ultimately need to convert your query variables to one of the supported types, or cast them manually in SQL, to avoid unsupported type errors.

Currency data types, amongst other custom types, aren’t supported quite yet

In contrast, there are only a few supported types in the API currently:

AWS supported data parameters as of 2/5/2020

We wrote a custom wrapper library, similar to Chan Zuckerberg’s aurora-data-api project, to map parameters onto the list of supported types (a sketch follows below). We may be able to open source this utility in the future, but I imagine AWS will add support for more Postgres types going forward.
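As a rough idea of what such a wrapper does, here is a minimal sketch mapping common Python values onto the handful of field types the API accepted at the time (as I recall: stringValue, longValue, doubleValue, booleanValue, blobValue, and the isNull flag). Anything outside those, such as Decimal, gets stringified so it can be cast back inside the SQL statement. The helper names are made up.

    from datetime import date, datetime
    from decimal import Decimal

    def to_field(value):
        # Map a Python value onto one of the Data API's supported field types.
        if value is None:
            return {"isNull": True}
        if isinstance(value, bool):  # check bool before int: bool subclasses int
            return {"booleanValue": value}
        if isinstance(value, int):
            return {"longValue": value}
        if isinstance(value, float):
            return {"doubleValue": value}
        if isinstance(value, (bytes, bytearray)):
            return {"blobValue": bytes(value)}
        if isinstance(value, (Decimal, date, datetime)):
            # Unsupported types are stringified; cast them back in SQL,
            # e.g. CAST(:amount AS numeric) or CAST(:created_at AS timestamp).
            return {"stringValue": str(value)}
        return {"stringValue": str(value)}

    def to_parameters(**kwargs):
        return [{"name": name, "value": to_field(v)} for name, v in kwargs.items()]

A statement over a currency column then looks something like sql="UPDATE invoices SET amount = CAST(:amount AS numeric) WHERE id = :id" with parameters=to_parameters(amount=Decimal("1000.00"), id=42).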

3. The Data API requires converting to a serverless database

Not necessarily a drawback, but as part of this transition we needed to migrate our existing database cluster to a serverless (Aurora Serverless) database. Going forward, this does seem like a simpler way to manage database capacity, albeit slightly more expensive at scale than a traditional RDS setup if you need the database on at all times. At night, however, the cost of running our test environment drops to zero, as the database can pause down to zero capacity units. I won’t go into too much detail here, but I recommend reading the serverless database article linked previously if you’re interested in learning more.
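For reference, that pause-at-night behavior comes from the cluster’s scaling configuration, which you can set with a boto3 call along these lines (the cluster identifier and capacity numbers are placeholders):

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_cluster(
        DBClusterIdentifier="my-test-cluster",  # placeholder
        ScalingConfiguration={
            "MinCapacity": 2,
            "MaxCapacity": 8,
            "AutoPause": True,              # pause when idle...
            "SecondsUntilAutoPause": 1800,  # ...after 30 minutes of inactivity
        },
    )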

What’s good about it

With so many powerful frameworks already available for interacting with SQL databases natively, the main use case for the RDS Data API is working efficiently with databases from serverless functions. Once the migration to the Data API was complete, we no longer had issues with idle connections, given the caveats above.

Revisiting solution (3): RDS Proxy was recently released into preview to serve as a hosted solution for this, providing an always-on server outside of Lambda to control and mediate your connections. It’s still in preview for Postgres, however, and its pricing can eliminate or offset some of the value offered by the “on when needed” pattern of Lambda functions. If you are cost or time sensitive, I wouldn’t hold out for it.

The Verdict

After using the RDS Data API for the last few months, the conversion has definitely been worth it, both to keep excessive connections off our database and to eliminate connection logic from our serverless deployments.

While leveraging new services from AWS can be powerful, there may be new constraints you have to operate around. AWS has made a ton of great improvements to Lambda and RDS over the last year. In particular, the RDS Data API holds up well in our load testing and is straightforward to use once implemented. Hoping to see continued investment in these services ahead :)

Would love to hear your experience!

Chris
