
nrdvana | 1 year ago

I might not be understanding what you're pointing out here. It sounds to me like sqlalchemy is talking about a pool of connections within one process, in which case releasing back to that pool does not close the connection by that process to the database. Parent comment is talking about one connection per process with 50k processes. My comment was that you don't need that many processes if each process can handle hundreds of web requests asynchronously.

If you are saying that a connection pool can be shared between processes without pgbouncer, that is news to me.
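The single-process async model described above can be sketched in Python with asyncio: a small pool of connections shared by many concurrent request handlers inside one process. This is a minimal illustration, not any particular library's API; `FakeConnection`, `Pool`, and `handle_request` are made-up names standing in for what a real driver like asyncpg provides.

```python
import asyncio

class FakeConnection:
    """Stand-in for a real database connection (a driver would go here)."""
    async def query(self, sql):
        await asyncio.sleep(0)  # pretend to wait on the database
        return f"result of {sql!r}"

class Pool:
    """Minimal in-process pool: a few connections shared by many coroutines."""
    def __init__(self, size):
        self._free = asyncio.Queue()
        for _ in range(size):
            self._free.put_nowait(FakeConnection())

    async def acquire(self):
        return await self._free.get()

    def release(self, conn):
        # "Releasing" just returns the connection to the pool; it stays
        # open to the database, which is the whole point of pooling.
        self._free.put_nowait(conn)

async def handle_request(pool, i):
    conn = await pool.acquire()
    try:
        return await conn.query(f"SELECT {i}")
    finally:
        pool.release(conn)

async def main():
    pool = Pool(size=5)                  # 5 connections...
    return await asyncio.gather(         # ...serving 200 concurrent requests
        *(handle_request(pool, i) for i in range(200)))

results = asyncio.run(main())
print(len(results))  # 200
```

One process, five database connections, two hundred in-flight requests: this is why an async design needs far fewer processes (and far fewer total connections) than one-connection-per-process.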


throwaway313373 | 1 year ago

Of course, you're right, it is not possible to share a connection pool between processes without pgbouncer.

> Parent comment is talking about one connection per process with 50k processes.

It is actually not clear what the parent comment was talking about. I don't know what exactly they meant by "front ends".

nrdvana | 1 year ago

The most common design for a web app on Linux in the last 20 years is to have a pool of worker processes, each single-threaded and ready to serve one request. The processes might be Apache ready to invoke PHP, or mod_perl, or a pool of Ruby on Rails or Perl or Python processes receiving the requests directly. Java tends to use threads instead of processes. I've personally never needed to go past about 100 workers, but I've talked to people who scale up to thousands, and they happen to be using MySQL. I've never used pgbouncer, but I understand that's the tool to reach for rather than configuring Pg to allow thousands of connections.