(no title)
pgaddict | 9 months ago
In several such cases, the company was repeatedly warned about how they implemented some functionality, and that it would cause severe issues with bloat/vacuuming, etc. Along with suggestions on how to modify the application to not hit those issues. Their 10x engineers chose to completely ignore that advice, because in their minds they constructed an "ideal database" and concluded that anything that behaves differently is "wrong" and it's not their application that should change. Add a dose of politics where a new CTO wants to rebuild everything from scratch, engineers with NIH syndrome, etc. It's about incentives - if you migrate to a new system, you can write flashy blog posts about how the new system is great and saved everything.
You can always argue the original system would have been worse, because everyone saw it had issues - you just leave out the detail that you chose not to address those issues. The engineering team is unlikely to argue against that, because that'd be against their interests too.
I'm absolutely not claiming the problems do not exist. They certainly do. Nor am I claiming Postgres is the ideal database for every possible workload. It certainly is not. But the worst examples that I've seen were due to deliberate choices, driven by politics. But that's not described anywhere. In public everyone pretends it's just about the tech.
rbanffy | 9 months ago
When you design a system around a database, it pays off to plan in advance how you'll mitigate the performance issues you might face in the future. Often a simple document is enough, explaining which directions to evolve the system in depending on the perceived cause: add extra read replicas, introduce degraded modes for when writes aren't available, move some functions to their own databases, shard big tables, and so on. With a somewhat clear roadmap, your successors don't need to panic when the next crisis appears.
For extra points, leave recordings dressed as Hari Seldon.
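The "degraded mode for when writes aren't available" idea can be sketched in a few lines. This is a minimal illustration, not any particular library's API: the `Router`, `Primary`/replica names, and the health flag are all hypothetical placeholders for whatever health-check and connection-pool machinery a real system would use.

```python
# Hypothetical sketch of a degraded-mode router: reads fan out to read
# replicas round-robin, and writes fail fast with an explicit error when
# the primary is down, so callers can show a read-only banner instead of
# hanging. All names here are illustrative, not from a real library.

class WritesUnavailable(Exception):
    """Raised when the primary is down and the caller should degrade."""

class Router:
    def __init__(self, primary_up, replicas):
        self.primary_up = primary_up  # health flag, e.g. set by a probe
        self.replicas = replicas      # list of read-only endpoints
        self._rr = 0                  # round-robin cursor

    def route_read(self):
        # Reads survive a primary outage: pick the next replica.
        endpoint = self.replicas[self._rr % len(self.replicas)]
        self._rr += 1
        return endpoint

    def route_write(self):
        # Writes need the primary; degrade explicitly instead of blocking.
        if not self.primary_up:
            raise WritesUnavailable("primary down; serving reads only")
        return "primary"

if __name__ == "__main__":
    router = Router(primary_up=False, replicas=["replica-1", "replica-2"])
    print(router.route_read())  # reads keep working during the outage
    try:
        router.route_write()
    except WritesUnavailable as exc:
        print(f"degraded: {exc}")  # caller falls back to read-only mode
```

The point isn't the code itself but that the fallback behavior is decided and written down before the outage, so the on-call engineer executes a plan instead of improvising one.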