top | item 43993422

thehappyfellow | 9 months ago

It’s incredible how much Postgres can handle.

At $WORK, we write ~100M rows per day and keep years of history, all in a single database. Sure, the box is big, but I have beautiful transactional workloads and no distributed systems to worry about!
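100M rows/day sounds enormous, but averaged over a day it's a modest sustained rate. A rough back-of-envelope sketch; the 200-byte row size is a made-up assumption for illustration, not a number from this thread:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

rows_per_day = 100_000_000
avg_rows_per_sec = rows_per_day / SECONDS_PER_DAY
print(f"average write rate: {avg_rows_per_sec:,.0f} rows/s")  # ~1,157 rows/s

# Hypothetical on-disk row size (tuple data plus per-row overhead);
# substitute your own measured average.
bytes_per_row = 200
tb_per_year = rows_per_day * 365 * bytes_per_row / 1024**4
print(f"storage per year of history: ~{tb_per_year:.1f} TB")  # ~6.6 TB/year
```

At roughly a thousand rows per second on average (peaks aside), a single well-provisioned Postgres box with fast SSDs is comfortably in range, which matches the parent's experience.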

rastignack | 9 months ago

At $WORK, we are within the range of 2 billion rows per day on one of our apps. We do have beefy hardware and ultra fast SSD storage though.

maz1b | 9 months ago

A single PG database on one server? What are the specs?

geoka9 | 9 months ago

Those rows are never pruned and rarely read?

wvh | 9 months ago

Two days ago, I'd have said the same. Yesterday, the big box went down, and because it had been so stable, failover was a joint that hadn't been oiled: the spare chickened out at the wrong time and apparently even managed to mess up the database timeline. Today was the post-mortem, and it was rough.

I'm just saying, simple is nice and fast when it works, until it doesn't. I'm not saying to make everything complex, just to remember life is a survivor's game.

thehappyfellow | 9 months ago

You’re right, there are downsides like the one you mention! We mitigate it by running a hot backup we can switch to in seconds, plus a box on which we test restoring backups every 24h, which is essential! But it requires 3x the number of big, expensive boxes.

I still think it’s the right tradeoff for us: operating a distributed system is also very expensive in terms of dev and ops time, and the costs are more unpredictable.

It’s all tradeoffs, isn’t it?