gbletr42 | 2 years ago
tar c dir | zstd | gpg -e | bef -c -o backup.tar.zst.gpg.bef
and then to get back that file with the terribly long filename
bef -d -i backup.tar.zst.gpg.bef | unzstd | gpg -d | tar x
alchemist1e9 | 2 years ago
Since your head is in the thick of this problem, I'd recommend you look at SeqBox and consider implementing sbx headers and blocks as an optional container, which would give you resilience to filesystem corruption. That way your tool would be an all-in-one bitrot safeguard and streaming/pipe based!
Regarding zbackup: it's perhaps a bit obscure, but it's an extremely useful tool for managing data. The way I use it, I get both dedup and lazy incremental backups, at a computational cost, though not a significant one. The encryption is a nice side effect of its implementation that is also handy.
frutiger | 2 years ago
> bef -d -i backup.tar.zst.gpg.bef | unzstd | gpg -d | tar x
Should probably be
> bef -d -i backup.tar.zst.gpg.bef | gpg -d | unzstd | tar x
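The correction above comes down to one rule: a restore pipeline must undo each stage in the reverse of the order it was applied (archive → compress → encrypt on the way out, so decrypt → decompress → unarchive on the way back). A minimal round-trip sketch of that principle, using gzip and base64 as stand-ins for zstd and gpg purely so it runs without keys (an assumption; the ordering logic is identical):

```shell
set -e
workdir=$(mktemp -d)
mkdir "$workdir/dir"
echo "some data" > "$workdir/dir/file.txt"

# "backup": archive, then compress, then "encrypt" (base64 stand-in for gpg -e)
tar -C "$workdir" -c dir | gzip | base64 > "$workdir/backup.tar.gz.b64"

# "restore": undo the stages in reverse order of how they were applied
mkdir "$workdir/out"
base64 -d < "$workdir/backup.tar.gz.b64" | gunzip | tar -C "$workdir/out" -x

# the restored file should be byte-identical to the original
cmp "$workdir/dir/file.txt" "$workdir/out/dir/file.txt" && echo "round-trip ok"
```

Swapping the middle two restore stages (as in the original comment) would feed ciphertext to the decompressor, which fails immediately; with the corrected order each stage receives exactly what its counterpart produced.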