jordanthoms | 2 years ago
On scaling, yeah a single postgres server can handle a lot. For us we were well past the million user mark before running into serious issues. However, a lot of how we were able to keep postgres working for us as we grew was by shifting work from postgres to our stateless services like I alluded to before - e.g. making our SQL queries as simple to execute as possible even if it means more work for the client to piece the parts back together.
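The "keep the SQL simple, reassemble in the service" pattern might look something like the sketch below. The schema and names here are hypothetical, and it uses an in-memory SQLite database purely so the example is self-contained; the idea is the same against Postgres: two cheap indexed scans instead of one join-plus-aggregate the database has to plan and execute.

```python
import sqlite3

# Hypothetical schema for illustration only: users and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO orders VALUES (10, 1, 5.0), (11, 1, 7.5), (12, 2, 3.0);
""")

# Instead of a JOIN + GROUP BY that the database must do the work for,
# issue two trivially simple queries...
users = conn.execute("SELECT id, name FROM users").fetchall()
orders = conn.execute("SELECT user_id, total FROM orders").fetchall()

# ...and do the aggregation and join in the stateless service layer,
# which is cheap to scale horizontally.
totals = {}
for user_id, total in orders:
    totals[user_id] = totals.get(user_id, 0.0) + total

report = {name: totals.get(uid, 0.0) for uid, name in users}
print(report)  # {'ada': 12.5, 'grace': 3.0}
```

The trade-off is more round trips and more application code, but each query stays easy for the planner, and the expensive CPU work lands on servers you can add freely rather than on the one stateful Postgres box.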
If everything had been running inside the database we wouldn't have had that option, and we'd probably have hit scaling limits much earlier. I guess we could have split off the highest-traffic endpoints and had those handled by a separate service calling the PG db, but then you get into issues with keeping authentication etc. consistent.
Re security - yep, PG is already using C to parse untrusted inputs from the network, which is also scary, but it's (hopefully) well reviewed and mature code - and even so, I wouldn't want to expose PG's usual wire protocol port to the internet, so it's hard to imagine exposing HTTP from postgres to the wild west.
Ultimately it probably is just a question of the sort of project it's being used for - if it's for something that's not going to need to reach larger scales, handle a lot of complexity over time, or pass security reviews, and your main goal is simplicity, then maybe an approach like this is a good option. I've just found that things tend to start off looking small and simple and then turn out to be anything but, so I'd rather run `rails new` and point it at a standard PG server - which would be just as simple and productive when you are starting out, and can keep scaling as your customer base and team size grow up to the size of Shopify, Github, or Kami (shameless plug).