How could you possibly say this is doing it wrong? The only way you could batch requests the way you describe is if you have one (or a very small number of) compute nodes: all those requests would need to hit the same node so you could try to batch them. With serverless compute infrastructure (which is what this blog post demonstrates by using Lambda) you can have one isolated process per request, and therefore need a database that can actually handle that kind of load.
twawaaay|3 years ago
Here is your problem: you are trying to build a huge application out of inadequate technical building blocks.
Lambdas are very inefficient in a number of ways. Lambda is a good tool, but as with every tool you need to know how to use it. If you build a compute-heavy app in Python and then complain about your electricity bill -- that really is on you.
If your database is overloaded with hundreds of thousands of connections from your lambdas, that is the end of the road for your lambdas. Don't put effort into scaling your database up; put effort into reducing the number of connections and improving the efficiency of your application.
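One common way to cut the connection count on the Lambda side is to open the connection once per execution environment (at module scope) rather than once per invocation, so warm containers reuse it. A minimal sketch of that pattern; the DSN environment variable name and the injectable `connect` factory are illustrative, and the factory exists only so the caching logic can be exercised without a real database:

```python
import os

_conn = None  # cached for the lifetime of this warm Lambda container


def get_connection(connect=None):
    """Create the DB connection once per container, reuse it afterwards."""
    global _conn
    if _conn is None:
        if connect is not None:
            _conn = connect()  # injectable factory, used here for testing the sketch
        else:
            # In a real function you would do something like:
            #   import psycopg2
            #   _conn = psycopg2.connect(os.environ["DATABASE_DSN"])
            raise RuntimeError("no connection factory configured")
    return _conn


def handler(event, context, connect=None):
    conn = get_connection(connect)
    # ... run queries on conn; every warm invocation of this container shares it ...
    return {"statusCode": 200}
```

This only bounds connections per *container*; with thousands of concurrent containers you still need a pooler in front of the database.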
aarondf|3 years ago
Would you ever need a million open connections? Probably not! But you'll likely want more than 500 at some point. And if your entire stack is serverless already, it'd be nice if the DB could handle that relatively low number of connections too.
vbezhenar|3 years ago
With enough trickery you could even implement it over the Postgres wire protocol, I guess, so it would be transparent to clients.
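This is essentially what PgBouncer already does: it speaks the Postgres wire protocol, so clients connect to it as if it were the database while it multiplexes them onto a small server-side pool. A minimal illustrative config (hostnames, paths, and sizes are placeholders, not recommendations):

```ini
; pgbouncer.ini -- sketch only, not a complete production config
[databases]
; clients connect to "appdb" on the pooler; it fans into far fewer
; real connections to the actual Postgres host
appdb = host=db.internal port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets thousands of client connections share
; a small pool of server connections
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 20
```

Transaction pooling does break session-level features (prepared statements, advisory locks, `SET` per session), which is part of why it is not entirely "transparent" in practice.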