
Restic 0.13.0

111 points | soheilpro | 4 years ago | restic.net

66 comments

[+] timmg|4 years ago|reply
I have a hand-coded backup system for my photo library that writes to S3. It runs every night at 2AM.

The one feature I have that's important to me is this: it will figure out what files need to be uploaded and then upload as many as possible for an hour then stop.

That means that it runs for at-most an hour a night.

The reason I need/wanted this feature is that I might come home from a trip with (eg) 30G worth of photos. My (cable) internet will upload at around 1G an hour. I don't want this thing to saturate my internet for 30 hours straight. Instead, it backs up a small amount every night for 30 days.

Am I the only one who wants a feature like this? I've never seen it in any other backup system. (An alternative might be configurable bandwidth for uploads.)

[+] lucgommans|4 years ago|reply
Makes perfect sense. Restic kind of supports this already: you can just kill the client after an hour, and tomorrow it will see which objects are already there.

I'm not deep enough into the project to know whether this is an officially supported use case, but restic was of course designed with the idea that interruptions can happen (your computer can crash) and should be handled safely. For deduplication it cuts files into chunks in a deterministic way and thus (as I understand it) stores those chunks in deterministic places.
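The "kill it after an hour" approach can be sketched with coreutils `timeout`. This is a hypothetical cron job, not an official recipe: the repository URL, paths, and password file are placeholders. `timeout --signal=INT` sends SIGINT, which restic traps to exit cleanly; the next run resumes from the chunks already stored.

```shell
#!/bin/sh
# Nightly, time-boxed backup (hypothetical paths and repo).
# After 1 hour, SIGINT is sent and restic shuts down cleanly;
# already-uploaded chunks are reused on the next run.
timeout --signal=INT 1h \
    restic -r s3:s3.amazonaws.com/my-bucket backup ~/Photos \
    --password-file ~/.restic-pass

# timeout exits with status 124 when the time limit was reached,
# so you can distinguish "ran out of time" from a real failure.
if [ $? -eq 124 ]; then
    echo "backup window elapsed; will continue tomorrow"
fi
```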

[+] seized|4 years ago|reply
Rclone will do exactly what you want: upload to S3, and --max-duration will stop new transfers from starting after a given duration.

There are also throttle options for bandwidth. I use that combined with Node-RED and a smart plug on my monitors: if monitor power draw exceeds a threshold, the upload throttle is changed via the rclone API.
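Those two rclone flags can be combined in one invocation. A hedged sketch; the remote name and paths are placeholders you would replace with your own:

```shell
# Sync photos to an S3-compatible remote, but:
#  --max-duration 1h  : stop starting new transfers after one hour
#  --bwlimit 2M       : cap bandwidth at roughly 2 MiB/s
rclone sync ~/Photos remote:photos-backup \
    --max-duration 1h \
    --bwlimit 2M
```

Run nightly from cron, this gives roughly the "upload as much as possible for an hour, then stop" behavior the parent describes.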

[+] teaudel|4 years ago|reply
My internet upload speed is bad, so I do want something like that.

I would also like to be able to "stage" a backup: figure out what needs to be transmitted and then create the data files that need to be transmitted without actually immediately transmitting it.

That would let me do things like back up my laptop to another computer in my house, which could then upload the files over my slow connection overnight when my laptop isn't on. It would also let me bring the backup files somewhere with a fast connection (work, university, a library) so large backups, especially the initial one, don't take days or weeks.

[+] crecker|4 years ago|reply
I wonder if you can get that behavior with a bash script, but writing scripts is tedious and I don't know whether restic exits gracefully on SIGTERM.
[+] sydney6|4 years ago|reply
Restic has support for rate limiting built-in.
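Restic's built-in rate limits are the `--limit-upload` and `--limit-download` global flags, with values in KiB/s. A minimal sketch with placeholder repo and paths:

```shell
# Cap upload bandwidth during backup to ~1 MiB/s (1024 KiB/s).
restic -r /mnt/backup backup ~/Photos --limit-upload 1024

# Restores can be capped too (~4 MiB/s here).
restic -r /mnt/backup restore latest --target /tmp/restore --limit-download 4096
```

This throttles continuously rather than time-boxing the run, so it addresses the "don't saturate my internet" concern from the top comment, just not the "run for at most an hour" part.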
[+] mekster|4 years ago|reply
Unless it's for experimenting, I've stopped caring about backup solutions other than Borg and ZFS. The only way for a tool to prove its stability is to exist for a while without big complaints, and the new ones all seem to have complaints.

No data loss is the absolute baseline, but it isn't enough on its own: huge memory consumption and other operational issues are also showstoppers.

[+] aborsy|4 years ago|reply
Restic in my experience has been rock solid. I actually switched from Borg. Borg’s crypto has known limitations; its Python error messages are long and messy; it complained more frequently.

Restic’s repository format is simple and well documented, which is important for long term data recovery (and fixes in case changes occur in the repo). The crypto is from a good source, and well regarded. Multithreaded, fast, nice and clean output.

ZFS is a file system, and has serious limitations when used as a backup tool. It needs a ZFS backend, ruling out almost any provider (basically you have to self-host your ZFS system, which is costly and error-prone). It needs more RAM than Borg and restic. And I've personally felt uncomfortable with ZFS native encryption for some time; lower-level system encryption is probably not what you want for backups.

One feature I miss from these tools (other than ZFS): error correction. They could use a Reed-Solomon code or similar and add parity data to recover from accidental corruption in the repository.
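Lacking built-in parity, one workaround is to layer it on externally with the par2 tool (par2cmdline), which implements Reed-Solomon recovery files. A hedged sketch; the repository path and redundancy level are placeholders, and this only protects the files as they were when the parity was created:

```shell
# Create recovery data with 10% redundancy over the repository's pack files.
# (Hypothetical repo layout; adjust the glob to your repository.)
par2 create -r10 repo-parity.par2 /mnt/backup/data/*/*

# Later: check the files against the parity, and repair if needed.
par2 verify repo-parity.par2
par2 repair repo-parity.par2
```

The catch is that parity files must be regenerated after every backup run that changes the repository, so this fits append-mostly repositories better than frequently pruned ones.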

[+] cma|4 years ago|reply
Borg being single-threaded is painful in the era of consumer 12 and 16 core CPUs, and even prosumer 64-core.
[+] crecker|4 years ago|reply
Where can I read about the best backup solutions, for nerdy people?
[+] lucgommans|4 years ago|reply
There is also https://github.com/restic/others which has some keywords (e.g. is it encrypted, does it do compression) for most FOSS backup solutions. It can be outdated or incomplete for some entries, though.
[+] Mo3|4 years ago|reply
I have to say, this is an excellent solution and I am seriously contemplating deploying it for all of our servers.

I'm a little unclear on one thing: are alternative S3 providers supported?

[+] tome|4 years ago|reply
These seem to be the supported backends:

    Local directory
    sftp server (via SSH)
    HTTP REST server (protocol, rest-server)
    Amazon S3 (either from Amazon or using the Minio server)
    OpenStack Swift
    BackBlaze B2
    Microsoft Azure Blob Storage
    Google Cloud Storage
    And many other services via the rclone Backend

https://github.com/restic/restic#backends=
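On the alternative-providers question: restic's s3 backend accepts a custom endpoint URL, so S3-compatible services (MinIO, and similar providers) work too. A minimal sketch with placeholder endpoint, bucket, and credentials:

```shell
# Credentials for the S3-compatible provider (placeholders).
export AWS_ACCESS_KEY_ID=my-key
export AWS_SECRET_ACCESS_KEY=my-secret

# Point the s3 backend at a non-Amazon endpoint and initialize a repository.
restic -r s3:https://s3.example.com/my-bucket init
```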
[+] bccdee|4 years ago|reply
Yes! I back up directly into Backblaze B2 using restic.
[+] soheilpro|4 years ago|reply
I'm so glad to finally see the --dry-run option in Restic.
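For anyone who hasn't tried it: `--dry-run` on the backup command previews what would be uploaded without writing to the repository. Repo and paths here are placeholders:

```shell
# Show which files would be added or changed, without modifying the repo.
restic -r /mnt/backup backup ~/Photos --dry-run --verbose
```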
[+] paraph1n|4 years ago|reply
Any thoughts on how this compares to duplicacy?
[+] aborsy|4 years ago|reply
Duplicacy has no mount!
[+] rvieira|4 years ago|reply
For someone using Kopia, is there any major advantage in switching to restic?
[+] omnimus|4 years ago|reply
I am using restic and thinking about switching to Kopia, mainly because Kopia has compression and seems to have more development activity. It also has a GUI, and from what I've seen it's faster.
[+] lucgommans|4 years ago|reply
This point hides a lot of goodness in something I didn't even understand on first read:

> - We have added checksums for various backends so data uploaded to a backend can be checked there.

All data is already stored in files whose names are the sha256sum of their contents, so clearly it's all already checksummed and can be verified, right?

Looking into the changelog entry[1], this is about verifying the integrity upon uploading:

> The verification works by informing the backend about the expected hash of the uploaded file. The backend then verifies the upload and thereby rules out any data corruption during upload. [...] besides integrity checking for uploads [this] also means that restic can now be used to store backups in S3 buckets which have Object Lock enabled.

Object lock is mentioned in passing somewhere down the changelog, but it's a big feature. S3 docs:

> Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.

i.e. ransomware protection. Good luck wiping backups when your file host refuses to overwrite or delete the files. And you know Amazon didn't mess with the files, because they're authenticated.

Extortion is still a thing, but if people used this, it would more or less wipe out the ransomware attack vector. The only risk is an attacker who stays in your systems long enough to outlast your retention period, creating useless backups in the meantime so you're not tipped off. Did anyone say "test your backups"?

For self-hosting, restic has a custom backend called rest-server[2], which supports a so-called "append-only mode" (no overwriting or deleting). I worked on the docs for this[3] together with rawtaz and MichaelEischer to make it more secure: eventually your disks fill up, or you want to stop paying for outdated snapshots on S3, and an attacker could have added dummy backups to fool your automatic removal script into keeping only the dummies. With the right retention options, this attack cannot happen.
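The setup described above can be sketched in two pieces. This is a hedged outline, not the linked docs verbatim; the server path, hostname, and retention window are placeholders:

```shell
# On the backup server: serve the repository in append-only mode,
# so clients can add snapshots but never delete or overwrite data.
rest-server --path /srv/restic-repo --append-only

# On a separate, trusted machine with full repository access:
# prune with a purely time-based policy. Because --keep-within keeps
# *everything* newer than the window, attacker-planted snapshots
# cannot push real ones out of it.
restic -r rest:https://backup.example.com/ forget --keep-within 30d --prune
```

The key design point is the split of privileges: the machine being backed up can only append, and deletion happens only from a machine the attacker presumably doesn't control.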

Others are doing some pretty cool stuff in the backup sphere as well; e.g. bupstash[4] has public-key encryption, so the backup client doesn't need to hold the decryption keys.

[1] https://github.com/restic/restic/releases/v0.13.0

[2] https://github.com/restic/rest-server/

[3] https://restic.readthedocs.io/en/latest/060_forget.html#secu...

[4] https://github.com/andrewchambers/bupstash/