item 33424098

andrewbarba|3 years ago

How could you possibly say this is doing it wrong? The only way you could batch requests in the way you describe is if you have one (or a very small number of) compute nodes: all those requests would need to hit the same node so you could try to batch them. With serverless compute infrastructure (which is what this blog post demonstrates by using Lambda) you can have one isolated process per request, and therefore need a database that can actually handle this kind of load.

twawaaay|3 years ago

> With serverless compute infrastructure

Here is your problem. You are trying to build a huge application using inadequate technical building blocks.

Lambdas are super inefficient in many different ways. It is a good tool, but as with every tool you do need to know how to use it. If you build a heavy compute app in Python and then complain about your electricity bill, that really is on you.

If your database is overloaded with hundreds of thousands of connections from your lambdas, it is the end of the road for your lambdas. Do not put effort into scaling your database up; put effort into reducing the number of connections and improving the efficiency of your application.
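The standard way to cut the connection count without rewriting the application is to put a transaction-mode pooler such as PgBouncer between the lambdas and Postgres. A minimal sketch (host names and pool sizes here are placeholder values, not from the article):

```ini
[databases]
; "appdb" and the host are hypothetical examples
appdb = host=db.internal port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
; share server connections between transactions
pool_mode = transaction
; thousands of lambdas can connect to the pooler...
max_client_conn = 10000
; ...while only a handful of connections reach Postgres
default_pool_size = 20
```

Lambdas then point their connection string at port 6432 instead of Postgres directly; AWS's own RDS Proxy plays the same role as a managed service.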

aarondf|3 years ago

I think you can start to hit connection limit walls with RDS at several hundred connections, depending on your instance size. Running an even moderately busy app you could hit those pretty quickly. I would hate to have to change my entire infrastructure at such an early stage because the DB was hitting connection limits!

Would you ever need a million open connections? Probably not! But you'll likely want more than 500 at some point. And if your entire stack is serverless already, it'd be nice if the DB could handle that relatively low number of connections too.

qaq|3 years ago

If you are at a load level where you have a million lambdas executing concurrently, your monthly bill will make even Uncle Sam cry.

ignoramous|3 years ago

In the case of the blog post at least: it is 1,000 lambdas making 1,000 queries each, i.e. a million queries in total.

vbezhenar|3 years ago

Introduce an intermediate server which accepts multiple requests and groups them into a single batch request.

With enough trickery you could even implement it using the Postgres wire protocol, I guess, so it would be transparent to clients.
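The grouping side of such a server can be sketched as a micro-batcher: callers hand over individual keys, and a background thread flushes them in one grouped call. This is a minimal in-process sketch; `flush_fn` is a hypothetical callback that would issue a single grouped query (e.g. `WHERE id = ANY(...)`) against the database:

```python
import queue
import threading


class MicroBatcher:
    """Collects individual lookups and flushes them as one batched call."""

    def __init__(self, flush_fn, max_batch=100, max_wait=0.01):
        self.flush_fn = flush_fn    # hypothetical: takes a list of keys, returns {key: result}
        self.max_batch = max_batch  # flush when this many requests are queued
        self.max_wait = max_wait    # ...or after this many seconds
        self.q = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def lookup(self, key):
        # Each caller gets an Event to wait on for its own result.
        done = threading.Event()
        slot = {"key": key, "done": done, "result": None}
        self.q.put(slot)
        done.wait()
        return slot["result"]

    def _loop(self):
        while True:
            batch = [self.q.get()]  # block until at least one request arrives
            try:
                # Keep collecting until the batch is full or the window expires.
                while len(batch) < self.max_batch:
                    batch.append(self.q.get(timeout=self.max_wait))
            except queue.Empty:
                pass
            # One round trip serves the whole batch.
            results = self.flush_fn([s["key"] for s in batch])
            for s in batch:
                s["result"] = results.get(s["key"])
                s["done"].set()
```

With a real `flush_fn`, many client requests collapse into one database query per window; the trade-off is the added `max_wait` latency on each lookup.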

cbrewster|3 years ago

Now you need to batch your requests to your intermediate server.

mjb|3 years ago

But why? Why have an additional component if your database can do it?