So I thought it would be kind of cool to take Pgbouncer to the next level. I've been using it with large database clusters for a while and it's been spectacular, but it does have some limitations that always seemed arbitrary.
This is my take on what a modern Postgres pooler can be. Besides supporting the same session and transaction pooling modes as PgBouncer, it also adds load balancing between replicas (round robin at the moment), failover when a replica fails a health check, and the coolest thing yet, I think: sharding at the pooler level.
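Not from the pgcat codebase, but as a rough illustration of the round-robin-with-failover idea: walk the replica list with an atomic counter and skip anything that failed its last health check. All names here are made up.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical replica entry: an address plus the last health-check result.
struct Replica {
    addr: &'static str,
    healthy: bool,
}

// Round-robin over the replica list, skipping replicas that failed their
// last health check. Returns None if every replica is down.
fn next_replica<'a>(replicas: &'a [Replica], counter: &AtomicUsize) -> Option<&'a Replica> {
    for _ in 0..replicas.len() {
        let i = counter.fetch_add(1, Ordering::Relaxed) % replicas.len();
        if replicas[i].healthy {
            return Some(&replicas[i]);
        }
    }
    None
}
```

A real pooler would update `healthy` from a background health-check task; this sketch only shows the selection logic.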
It's written in Rust, which I think makes it much easier to iterate on and improve.
Note in case it wasn't obvious: this is super experimental, please don't use this in your production environment unless you're adventurous. In which case, please do and let me know how it goes!
You might want to consider a different name, as there's already a pgcat project which appears to be a logical replication extension to augment native logical replication.
I am really excited to see this project. I can't tell you how many times my team at work has hit a limitation with pgbouncer and sworn up and down that we should just rewrite it in Rust.
Sharding and failover support is really cool indeed!
But I've always struggled to understand the use cases of pgbouncer. In most cases, you want to front your database with a service and don't want many clients connecting to it directly. And I've seen fairly large workloads easily managed over a small number of connections, so a good client-side pool implementation should easily suffice. In what sort of situations does pgbouncer come into play?
As a fan of Rust, this seems like the perfect application domain for "fearless concurrency". Having gone through the bowels of a few different connection pool implementations in various languages (because they had bugs in production), I can tell you the concurrency management becomes extremely difficult to reason about as the feature complexity increases. I think you're making a good bet that offloading a good chunk of that to the compiler will improve your ability to iterate in the future.
GoLang seems like the language du jour, but I think this post [1] really illustrates how powerful it can be when you have a compiler that handles so much more for you. It's not just about performance. Deadlock detection and a robust compile time set of error checks are huge wins over looser languages.
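As a toy illustration of that point (not code from pgcat): mutable pool state shared across threads simply doesn't compile in Rust unless it's wrapped in something like an `Arc<Mutex<..>>`, so a whole class of data races is ruled out before the program ever runs.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Count checkouts from several threads. Dropping the Mutex here wouldn't
// cause a flaky production bug -- it would be a compile error.
fn count_checkouts(threads: usize, per_thread: usize) -> usize {
    let checkouts = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let checkouts = Arc::clone(&checkouts);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *checkouts.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *checkouts.lock().unwrap();
    n
}
```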
This is actually pretty amazing. I wouldn't normally trust a project implemented by a single author to sit between my application and my database, since that is a rather critical single point of failure. However, a grep that fails to find `unsafe` in a Rust code base tells me that this code is in fact pretty safe[1]. I don't need to trust the author so much as I trust the Rust core team and LLVM.
Since I can see basic functionality exercised in the test suites, and trust that Rust has validated its more general correctness, this gives me more confidence in your project than I would historically expect to be able to gather in such a short amount of time.
This is really interesting and exciting to see, though it probably has a ways to go before being production ready. To date we've seen a few other efforts at connection poolers and balancers for Postgres, and every time we've come back to pgbouncer as the one that works best. While it has its issues, pgbouncer does indeed work. Others we've explored tend to break in unclear ways.
That said, some of the things this one looks to solve are indeed very interesting.
I've seen issues at times with pgbouncer being single threaded. Since it only runs against a single core, it is possible to peg that core and hit a throughput ceiling. While not common, we have seen this happen for customers before, both at Crunchy Data[1] and back at Citus.
The other interesting piece is sharding. We considered this approach for years at Citus and never managed to pull it off. If you can specify an identifier and the pooler can route accordingly, it gives you a lightweight option for sharding really quite easily.
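As a sketch of that routing idea (pgcat's actual sharding scheme may differ), routing by a sharding key is essentially hashing the identifier into a shard index:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Route a sharding key (e.g. a customer id) to one of `shards` backends.
// Any stable hash works; this just uses the std library's default hasher.
fn shard_for<K: Hash>(key: &K, shards: u64) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % shards
}
```

The hard parts a real pooler has to add on top are extracting the key from the query (or a comment/`SET` hint) and keeping the key-to-shard mapping stable when shards are added.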
Again, it probably has a ways to go; everything we've tried over the last 10 years has failed to live up to being a replacement for pgbouncer. But what pgcat is attempting to do, I'm fully supportive of, if it can accomplish it.
[1] On Crunchy Bridge (www.crunchybridge.com) we have pgbouncer built in, with multi-pgbouncer support to alleviate the single-threaded issue.
I spent some time playing around with odyssey, and it seemed like every situation not explicitly experienced or anticipated by the author would result in a segfault, just like that.
1. Rename the project; there's already a pgcat.
2. Bring it to production level, I would love it :D
3. Are there things like GIS or JSONB that would break with it, or does it just work?
I'm really impressed by how lean this code base is. Is PgBouncer also this lean? Looks like auth is the only thing left here then, aside from maybe some more testing? Very cool to see!
For those wondering why a pooler is needed, TL;DR: containers.
These days it's quite normal to have hundreds of containers running an application, each configured with tens of connections. Now you have thousands.
Each connection costs Postgres something on the order of ~2.5 MB of RAM, so once you get into the thousands you're talking real numbers: ~2.5 GB.
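The arithmetic above in one line, with the ~2.5 MB per connection being a rough figure rather than an exact cost (the function name is just for illustration):

```rust
// Back-of-the-envelope: per-connection overhead times total open connections.
fn conn_overhead_mb(containers: u64, conns_per_container: u64, mb_per_conn: f64) -> f64 {
    (containers * conns_per_container) as f64 * mb_per_conn
}
```

E.g. 100 containers with 10 connections each at ~2.5 MB comes to ~2500 MB, which is the "~2.5 GB" figure above.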
Problems I've had with pgbouncer in the past:
- stats. Any modern tool should natively emit its own statistics in something like statsd format.
- pgbouncer is a single process - I can't throw more CPU cores at it to make it faster.
Problems that I'm still struggling to solve...
We use RDS/Aurora, which uses DNS to fail over. Yes, applications should handle this. Yes, they should retry. Yes, devs should have tested this. But in any large company this is an uphill battle involving many teams and apps. It's much easier to introduce a proxy layer to handle fast failover. Even better if it can retry failed queries transparently to the application.
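A proxy-side retry of the kind described could be as simple as a bounded retry loop around the operation. This is only a sketch, and blindly retrying is only safe for idempotent or read-only statements; a real proxy would also need to re-resolve DNS and reconnect between attempts.

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry a fallible operation up to `attempts` times with a short pause,
// returning the last error if all attempts fail.
fn with_retry<T, E>(mut attempts: u32, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempts <= 1 => return Err(e),
            Err(_) => {
                attempts -= 1;
                sleep(Duration::from_millis(100));
            }
        }
    }
}
```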
Pgbouncer does have stats; you can get them using the `SHOW STATS` family of commands. You can then either scrape them manually or use something like pgbouncer_exporter[1] for Prometheus.
Agree. I've been hoping for a PostgreSQL extension with built-in sharding (think Netezza or Teradata). I know this is ambiguous, so a low-effort definition is in order: a tightly bound cluster (nodes are aware of each other and share data to fulfill queries) where you specify distribution for a table but there is no explicit rebalancing command. Admins can add nodes and the user is none the wiser (except for improved performance, of course). Cross-node joins work (reasonably) well. I've been watching Citus for a while but, unless I'm misunderstanding, the sharding there is a bit more explicit and sometimes manual.
Not only is it not the greatest, it's literally the worst database technology in use for the things PgCat is trying to solve. And that's not a knock on PgCat at all (or on Postgres!); it's a knock on the "golden hammer" worship of Postgres that's been the zeitgeist for the past few years.
The state of the art for sharding, connection scaling, and failover for Postgres is far behind everything else. MySQL has Orchestrator and Vitess, the NewSQL systems are doing lots of interesting stuff with replication and sharding, etc.
edit: Look at the work Notion had to do just 3 months ago to shard Postgres: https://www.notion.so/blog/sharding-postgres-at-notion. Maybe they would make the same choice over again, and that's fine, but doing stone-age-level work to shard a database in 2021 doesn't jibe with the whole "just use Postgres" idea to me.
levkk | 4 years ago
- Lev
bonesmoses | 4 years ago
https://github.com/kingluo/pgcat
dkhenry | 4 years ago
Looking forward to where the project goes!
potamic | 4 years ago
dmingod666 | 4 years ago
Where should new people contribute, or where can we help most? (I do Golang and am good with k8s stuff.) All the best!
montanalow | 4 years ago
1: https://fasterthanli.me/articles/some-mistakes-rust-doesnt-c...
montanalow | 4 years ago
1: https://github.com/levkk/pgcat/search?q=unsafe
craigkerstiens | 4 years ago
jimmyed | 4 years ago
https://github.com/yandex/odyssey
3np | 4 years ago
riku_iki | 4 years ago
mmontagna9 | 4 years ago
deknos | 4 years ago
jjice | 4 years ago
boris | 4 years ago
johncs | 4 years ago
rory_isAdonk | 4 years ago
So I can better understand sharding, load balancing, failover, indexing, etc.
everfrustrated | 4 years ago
nsaje | 4 years ago
[1] https://github.com/prometheus-community/pgbouncer_exporter
levkk | 4 years ago
vpprofit | 4 years ago
mrslave | 4 years ago
gjs278 | 4 years ago
[deleted]
azurelake | 4 years ago