top | item 46896808


paulkre | 26 days ago

Can’t believe they needed this investigation to realize they need a connection pooler. It’s a fundamental component of every large-scale Postgres deployment, especially for serverless environments.


Twirrim | 26 days ago

Pooling connections somewhere has been fundamental for several decades now.

Fun quick anecdote: a friend of mine worked at an EA subsidiary when Sim City (2013) was released, to great disaster as the online stuff failed under load. He got shifted over to the game a day after release to firefight their server stuff. He was responsible for the most dramatic initial improvement when he discovered the servers weren't using connection pooling, and instead were opening a new connection on almost every single query, using up all the connections on the back-end DB. EA's approach had been "you're programmers, you can build the back end," not accepting game devs accurately telling them it was a distinct skill set.
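For anyone unfamiliar with the pattern being described: a pool keeps a fixed set of open connections and hands them out per query, instead of paying the connect/teardown cost every time and exhausting the server's connection limit. A minimal sketch (a hypothetical stand-in, not EA's actual code; `fake_connect` simulates a DB connection):

```python
import queue

class ConnectionPool:
    """Minimal connection pool: reuse a fixed set of connections
    instead of opening a new one per query."""

    def __init__(self, connect, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self, timeout=None):
        # Blocks until a connection is free, bounding total
        # connections held against the back-end DB.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Stand-in for a real DB connection; counts how many were ever opened.
opened = 0
def fake_connect():
    global opened
    opened += 1
    return object()

pool = ConnectionPool(fake_connect, size=5)
for _ in range(100):            # 100 "queries"
    conn = pool.acquire()
    # ... run the query on conn ...
    pool.release(conn)

print(opened)  # 5 connections served 100 queries
```

The connect-per-query anti-pattern would have opened 100 connections here; the pool caps it at 5.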

foota | 26 days ago

No? It sounds like they rejected the need for a connection pooler and took an alternative approach. I imagine they were aware of connection poolers and just didn't add one until they had to.

jstrong | 26 days ago

can't believe postgres still uses a process-per-connection model that leads to endless problems like this one.

IsTom | 26 days ago

You can't process significantly more queries at the same time than you've got CPU cores anyway.

modin | 26 days ago

I was surprised too to need it in front of RDS (but not on vanilla, as you pointed out).

citrin_ru | 26 days ago

In the serverless world, for sure, but in old-school architectures it's common to use persistent connections to a database, which makes a connection pooler less essential. Also, the last time I checked (many years ago, admittedly), connection poolers didn't play well with server-side prepared statements and transactions.

ants_a | 26 days ago

pgbouncer added support for prepared statements a couple of years back.