top | item 43545422

hankchinaski | 11 months ago

The only thing that holds me back from self-hosting is Postgres. Has anyone managed to get a rock-solid, self-managed Postgres setup? Backups + tuning?

homebrewer | 11 months ago

Put it on a ZFS dataset and back up the data at the filesystem level (using sanoid/syncoid to manage snapshots, or any of their alternatives). It will be much more efficient than other backup strategies of similar maintenance complexity.
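The snapshot schedule described above could look something like this in sanoid.conf — a minimal sketch; the dataset name, retention counts, and backup host are all assumptions:

```
# /etc/sanoid/sanoid.conf (illustrative; adjust dataset and retention)
[tank/postgres]
        use_template = production
        recursive = yes

[template_production]
        frequently = 0
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Replication of those snapshots off-box is then a single syncoid invocation, e.g. `syncoid tank/postgres backup@backuphost:tank/postgres`, typically run from cron.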

candiddevmike | 11 months ago

Filesystem backups may not be consistent and can lose transactions whose WAL records haven't been flushed to disk. You should always try to use database backup tools like pg_dump.

lytedev | 11 months ago

I self-host Postgres at home and am probably screwing it up! I do at least have daily backups, but tuning is something I have given very little thought to. At home, traffic doesn't cause much load.

I'm curious as to what issues you might be alluding to!

Nix (plus deploy-rs, which I recently adopted to ensure I keep SSH access across upgrades for rollbacks and other troubleshooting) makes experimenting a breeze. Rolling back to a working environment becomes trivial, which frees you up to just try stuff. Plus, things are reproducible, so you can try something on a different set of machines before going to "prod" if you want.

swizzler | 11 months ago

I was using straight filesystem backups for a while, but I knew they could be inconsistent. Since then, I've set up https://github.com/prodrigestivill/docker-postgres-backup-lo..., which regularly dumps a snapshot to the filesystem, which regular filesystem backups can then consume. The README has restore examples, too.

I haven't needed to tune selfhosted databases. They do fine for low load on cheap hardware from 10 years ago.

nijave | 11 months ago

Inconsistent how? Postgres can recover from a crash or loss of power, which is more or less the same as restoring a filesystem snapshot.

Aachen | 11 months ago

Why would tuning be necessary for a regular setup, does it come with such bad defaults? Why not upstream those tunes so it can work out of the box?

I remember spending time on this as a teenager, but I haven't touched my MariaDB config in probably a decade now. Ah no, one time a few years ago I turned off fsyncing temporarily to do a huge batch of insertions (it helped a lot with qps, especially on the HDD I used at the time), but that's not something to leave enabled permanently, so it's not really production tuning.
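In Postgres there's a safer equivalent of this trick for bulk loads: relaxing synchronous_commit per session, which can only lose the last few transactions on a crash, unlike `fsync = off`, which risks corrupting the whole cluster. A sketch, where the table and file path are assumptions:

```
-- Relax durability for this session only; a crash may drop the most
-- recent commits but cannot corrupt the database.
SET synchronous_commit = off;
COPY big_table FROM '/tmp/batch.csv' WITH (FORMAT csv);
```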

zrail | 11 months ago

PostgreSQL's defaults (last I looked; it's been a few years) are set up for spinning storage and very little memory. They absolutely work for tiny workloads like what self-hosting usually implies, but for production workloads, tuning the parameters to match your hardware is essential.
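For a sense of scale, here's what that tuning might look like for a machine with ~8 GB of RAM and SSD storage — the specific values are illustrative assumptions, not recommendations:

```
# postgresql.conf (illustrative values for ~8 GB RAM, SSD)
shared_buffers = 2GB              # default is only 128MB
effective_cache_size = 6GB        # planner hint: memory available for caching
random_page_cost = 1.1            # default of 4.0 assumes spinning disks
work_mem = 16MB                   # default 4MB; per sort/hash operation
maintenance_work_mem = 512MB      # speeds up VACUUM and index builds
```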

nijave | 11 months ago

https://pgtune.leopard.in.ua/ is a pretty good start. There are a couple of other web apps I've seen that do something similar.
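The heuristics behind tools like pgtune are fairly simple. This is a rough sketch of the commonly cited rules of thumb (25% of RAM for shared_buffers, 75% for effective_cache_size, and so on) — pgtune's actual formulas vary by workload type, so treat the numbers as assumptions:

```python
# Rough sketch of pgtune-style sizing heuristics. The ratios are common
# rules of thumb, not pgtune's exact per-workload formulas.

def tune(ram_gb: int, connections: int = 100) -> dict:
    ram_kb = ram_gb * 1024 * 1024
    return {
        "shared_buffers": ram_kb // 4,             # ~25% of RAM
        "effective_cache_size": ram_kb * 3 // 4,   # ~75% of RAM
        "maintenance_work_mem": min(ram_kb // 16, 2 * 1024 * 1024),  # capped at 2GB
        "work_mem": ram_kb // (4 * connections),   # per-sort/hash budget
    }

for key, kb in tune(8).items():
    print(f"{key} = {kb // 1024}MB")
```

For an 8 GB machine this prints shared_buffers = 2048MB, effective_cache_size = 6144MB, maintenance_work_mem = 512MB, and work_mem = 20MB.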

Not sure on "easy" backups besides just running pg_dump on a cron, but it's not very space efficient (each backup is a full backup; there's no incremental).
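That cron setup is a one-liner. A minimal sketch, where the database name, user, and paths are assumptions (note that `%` must be escaped in crontab entries):

```
# crontab fragment: nightly compressed dump at 03:00
0 3 * * * pg_dump -Fc -U postgres mydb -f /var/backups/pg/mydb-$(date +\%F).dump
```

The custom format (`-Fc`) is compressed and restorable with `pg_restore`, which also allows restoring individual tables. Pruning old dumps and shipping them off-box is left to whatever filesystem backup already runs.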

mfashby | 11 months ago

I've got an OpenBSD server, Postgres installed from the package manager, and a couple of apps running with it as the database. My backup process just stops all the services, backs up the filesystem, then starts them again. Downtime is acceptable when you don't have many users!
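That stop-copy-start process fits in a few lines — a sketch only, since the service names, data directory, and backup path are all assumptions:

```
# OpenBSD stop-copy-start backup sketch (cold backup, guaranteed consistent)
rcctl stop myapp postgresql
tar czf /backup/pg-$(date +%F).tgz /var/postgresql/data
rcctl start postgresql myapp
```

Because nothing is writing during the copy, the archive needs no WAL replay on restore.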

candiddevmike | 11 months ago

What is your RTO/RPO?

nijave | 11 months ago

RTO: best effort. RPO: wait, you guys have backups and test them??