I have spent a lot of time trying out backup solutions, and I feel strongly enough about this one to warn others away from it. As other commenters mentioned, Duplicati is pretty unstable. Over many years I was never able to finish even the initial backup (less than 2 TB) on my PC, and if you pause an ongoing backup, it never actually works again.
I'd use restic or duplicacy if you need something that works well both on Linux and Windows.
Duplicati's advantage is that it has a nice web UI, but if the core features don't work, that's not very useful.
I have had similar experiences. I could not get a non-corrupt backup from one machine; it would repeatedly ask me to regenerate the local database from the remote, which never succeeded. Oddly, another machine never seemed to have an issue, but that's not an argument in favor of using the software. It is possible there are "safe" versions, but I had no way to identify them (all the releases I used were linked from the homepage).
Just another data point: I've been using it for about a year and a half to back up roughly 1 TB, encrypted, to Backblaze B2. I've tested restoring, and so far it's been very stable.
Just to balance this: I use Duplicati both for my web server, where I host client websites, and for my personal home NAS.
I've had to use it to restore multiple times and have never had an issue with it. It's saved my ass multiple times. It's always been set it and forget it, until I remember I need it.
Never tried Duplicati, but restic + B2 has been great as "a different choice", and for my use case of backing up a variety of OSes (Windows, Mac, and several Linux distros), it's worked well.
I had a very similar experience with Duplicati on a backup set that was small in disk space but had a very large number of files, which bloated the SQLite data store.
I use Urbackup to back up Windows and Linux hosts to a server on my home network and then use Borg to back that up for DR. I'm currently in the process of testing Restic now that it has compression and may switch Borg out for that.
How strange. I have been backing up my own computers (4) and those of my family (another 3) using Duplicati for over three years now, and aside from the very rare full-body derp that required a complete dump of the backup profile (once) and a rebuild of the remote backup (twice), it’s been working flawlessly. I do test restores of randomly chosen files at least once a year, and have never had an issue.
Granted, the backup itself errors out on in-use files (and just proceeds to the next file), but show me a backup program that doesn’t. Open file handles make backing up rather hard for anything that needs to obey the underlying operating system.
I've been using Duplicati 2 for about a month now to try it out, and it has worked flawlessly for me, except for occasional time-outs of the web UI. I only back up local directories, and the destinations I've tried include an external drive over USB, Google Drive, and an SSH connection.
I'm using it to back up a Firefox profile while I'm using Firefox, and it backed up active files even as they were being written to. I'm also using it to back up a VeraCrypt container file (a single 24 GB file), and incremental backups worked quite well there too.
Thanks for the words of advice, I will keep testing longer before I make the switch.
I've looked around quite a bit too, but did you actually use restic and duplicacy?
Both ate my RAM heavily, to the point of freezing the machine by exhausting memory on data sets that weren't all that large, so I stopped using them a year or so ago.
I've settled on Borg and ZFS as my backup solutions (it's better to run multiple reliable, independent implementations). The latter is quite fast because, being a file system, it knows exactly what changed between incremental backups, unlike other utilities that have to scan the entire data set to figure out what changed since the last run.
For a cheap remote ZFS target, you can run a 1 GB memory instance and attach HDD-based block storage, which is far cheaper (e.g. on Vultr or AWS). Ubuntu gets ZFS running easily by simply installing the zfsutils-linux package.
If you need a lot of space, rsync.net gives you a ZFS target at $0.015/GB, but with a 4 TB minimum commitment. It's also a good target for Borg at the same price, with a 100 GB minimum yearly commitment. Hetzner Storage Box and BorgBase seem good for that too.
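For reference, the setup described above can be sketched like this (the device path, hostnames, and pool/dataset names are made up; adjust for your provider):

```shell
# One-time setup on the remote 1 GB Ubuntu instance:
sudo apt install -y zfsutils-linux
sudo zpool create backup /dev/vdb        # the attached HDD block volume

# On the machine being backed up: snapshot, then send only the delta.
zfs snapshot tank/data@today
zfs send -i tank/data@yesterday tank/data@today | \
  ssh backup-host zfs receive backup/data
```

The incremental send is why this stays fast: ZFS already knows which blocks changed between the two snapshots.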
If you use restic/kopia, how are you managing scheduling and failure/success reporting together?
That's one thing I can't seem to figure out with those solutions. I know there are scripts out there (or I could write my own), but that seems error-prone and could result in silently failed backups.
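One pattern that addresses both scheduling and reporting is wrapping restic in a small script that pings a dead-man's-switch service, so a run that never starts gets noticed just as much as one that fails. A hedged sketch (the repo URL and check URL are made up; hc-ping.com is the healthchecks.io pattern):

```shell
#!/bin/sh
set -eu
PING_URL="https://hc-ping.com/your-check-uuid"   # hypothetical check URL
REPO="sftp:backup-host:/srv/restic"

curl -fsS "$PING_URL/start" >/dev/null || true   # mark the run as started

if restic -r "$REPO" backup /home /etc &&
   restic -r "$REPO" check --read-data-subset=1%
then
    curl -fsS "$PING_URL" >/dev/null || true       # success ping
else
    curl -fsS "$PING_URL/fail" >/dev/null || true  # explicit failure ping
    exit 1
fi
```

Run it from cron or a systemd timer; if the success ping stops arriving, the monitoring service notifies you, which covers the "cron job silently dead" case a plain script misses.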
I strongly advise people to not rely on Duplicati. Throughout its history, it's had a lot of weird, fatal problems that the dev team has shown little interest in tracking down while there is endless interest in chasing yet another storage provider or other shiny things.
Duplicati has been in desperate need of an extended feature freeze and someone to comb through the forums and github issues looking for critical archive-destroying or corrupting bugs.
"If you interrupt the initial backup, your archive is corrupted, but silently, so you'll do months of backups, maybe even rely upon having those backups" was what made me throw up my hands in disgust. I don't know if it's still a thing; I don't care. Any backup software that allows such a glaring bug to persist for months if not years has completely lost my trust.
In general there seemed to be a lot of local database issues: it could become corrupted without you having any idea, and worse, a lot of situations seemed to be unrecoverable; even doing a rebuild based off the 'remote' archive would error out or otherwise not work.
The duplicati team has exactly zero appreciation for the fact that backup software should be like filesystems: the most stable, reliable, predictable piece of software your computer runs.
Also, users should be aware that Duplicati assembles each archive object on the local filesystem: that means extra write wear on SSDs, and on spinning rust it significantly impacts performance.
Oh, and the default archive object size is comically small for modern-day usage and will cause significant issues if you're not using object storage (say, a remote directory). After just a few backups of a system with several hundred GB, you could end up with a "cripples standard Linux filesystem tools" number of files in a single directory.
And of course, there's no way to switch or migrate object sizes...
Backblaze is much cheaper, and can have free egress when using Cloudflare with it.
There is also Storj, a decentralized storage network; it gives you 150 GB for free, then $4/TB, with free egress up to the amount you have stored.
Another one is IDrive E2 at $4/TB, with the first year costing the same as a single month, and free egress up to about three times the amount stored.
Hetzner's storage boxes are pretty cheap, but that is for a reason.
The upload speed is pretty slow outside Hetzner's network (in my experience), and more importantly, the data is only protected by a single RAID cluster.
They do offer free unlimited egress, though.
But I would personally go with Backblaze or maybe IDrive.
Or a small computer with a disk at a friend's home, and back up to that. It's cheaper than cloud after one or two years, though always less reliable; network speed is probably OK, and you can have physical access. If the friend is a techie, it could be one among many other little computers in that home. You can reciprocate by hosting his/her backup at your home.
Being an IT person, I was asked by my landlord for recommendations on doing backups. Some googling revealed Duplicati and we gave it a go. Installation and configuration were easy and the features were sane. That was 6-7 years ago and it is still running without issue (AFAIK ^^)
Have you tested restores? The problem I had with duplicati was that eventually restoring from a backup would take exponentially longer, to the point of never finishing. Maybe it would have eventually, but I can't wait multiple days to restore one file. There's a possibility it was an error or problem on my end, and this was a couple of years ago, so ymmv.
If you plan to use Duplicati, please pay attention to the docs around block size. We used it to back up a couple hundred GB of data to S3; recovery was going to take over 3 days to reassemble the blocks at the default 100 KB block size. For most applications you will want at least 1 MB, if not more.
Otherwise a good product and has been reliable enough for us.
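Concretely, that means setting a larger --blocksize when the backup is first created, since (as noted elsewhere in this thread) it cannot be changed afterwards. A hedged sketch with a made-up bucket and credentials; verify the option names against the docs for your Duplicati version:

```shell
duplicati-cli backup \
  "s3://my-bucket/backups?auth-username=KEY&auth-password=SECRET" \
  /home/user/data \
  --blocksize=1MB \
  --dblock-size=200MB \
  --passphrase="use-a-real-passphrase-here"
```

--dblock-size (the size of each uploaded volume) is a separate knob from --blocksize (the deduplication unit); raising both helps with large backup sets.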
Many years ago I was a happy user of CrashPlan as the data was also easily accessible, but when they stopped their private user plans I looked into several solutions (Duplicati, duplicacy, and some others too). restic was the only one light enough for me to use consistently, which is a critical thing about backups.
I use restic to back up to a local drive, then use cloud storage to back up the repos. I know restic supports some cloud backends directly, but this setup seems more decoupled and less prone to errors/hangs.
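Under the assumptions above (the local repo path and the rclone remote name "b2crypt" are made up), the two stages might look like:

```shell
# Stage 1: restic backs up to a local repository.
restic -r /mnt/backup/restic-repo backup /home/user

# Stage 2: mirror the finished repo to the cloud; running this only after
# restic exits means the remote copy never sees a half-written state.
rclone sync /mnt/backup/restic-repo b2crypt:my-bucket/restic-repo
```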
the thing that made me want to post is this bullshit:
Download Duplicati 2.0 (beta)
or
Look at 1.3.4 (not supported)
i work at a big tech company and see this all the time: if your "new shiny promotion-ready" version is not ready, why the hell would you drop support for the old version that works? i'd stay away from any product operated by teams this irresponsible.
yeah, i know, lots of biased guesses/views and sentiment on my part, but you get how much this angers me.
Restic is CLI focused whereas Duplicati is GUI focused. Restic is based around repositories, which can contain multiple backups from multiple sources, whereas Duplicati's backups are not (although the actual backup format is similarly broken up into lots of small blocks).
I use ZFS snapshots and send/replication. This has been the easiest and most reliable backup solution for everything. I especially enjoy taking backup of SQL Server with ZFS with the new snapshot feature in SQL Server 2022 "ALTER DATABASE MyDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON";
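A hedged sketch of that flow (dataset and database names are made up; see Microsoft's SUSPEND_FOR_SNAPSHOT_BACKUP documentation for the full procedure):

```shell
SNAP="tank/sqldata@$(date +%Y%m%d%H%M%S)"
sqlcmd -Q "ALTER DATABASE MyDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON"
zfs snapshot "$SNAP"   # atomic and near-instant while writes are suspended
# The metadata-only backup records the snapshot in msdb and clears the
# suspend state, so the freeze window is only a moment.
sqlcmd -Q "BACKUP DATABASE MyDatabase TO DISK = '/var/backups/mydb.bkm' WITH METADATA_ONLY"
```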
I'm a happy user, I use it as a solution to back up specific user folders on a Windows system to an smb network share. It's been chugging along for years now, and I even have done a few recoveries, and never had a problem. I'm surprised to read the other reviews here.
Have been using this for years, it has its quirks but it works and it costs me next to nothing - I keep looking at possible alternatives but so far haven’t shifted.
It's a shame Duplicati runs quite poorly (I have the same experience). I moved to restic with the autorestic wrapper and configured notifications through another method for both failures and successful backups.
That second option works amazingly well and is much quicker, more reliable, and offers more control than Duplicati. But it's much harder and more time-consuming to set up, requiring timers, scripts, and notifications. For people new to self-hosting, reliable incremental off-site backups can be a real pain. How many poorly tested cron jobs that fail to create backups, and that nobody will ever act on, are running right now? At least Duplicati will give you a glanceable GUI showing its backup failures.
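For anyone going the timer-and-script route, the systemd half is small; a minimal sketch (the unit names and script path are made up):

```ini
# /etc/systemd/system/backup.service
[Unit]
Description=Nightly backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Run nightly backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now backup.timer`; Persistent=true makes runs missed while the machine was off fire at the next boot.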
I have been using Kopia to back up all of my laptops' home dirs to a Raspberry Pi for at least a couple of years now. There is a CLI and a UI. The UI is somewhat funky and could benefit from an "easy mode" a la Time Machine, but it does work. I recently restored my home dir from it when migrating from one OS to another. My favorite thing about Kopia is that it performs incremental backups on tens of GB _much_ quicker than plain rsync can, and it is much more space-efficient to boot.
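The Kopia flow from the CLI looks roughly like this (host, user, and paths are made up; check `kopia --help` for the exact flags in your version):

```shell
# One-time: create a repository on the Raspberry Pi over SFTP.
kopia repository create sftp --host pi.local --username pi \
  --keyfile ~/.ssh/id_ed25519 --path /srv/kopia-repo

# Each backup run; after the first, only changed files are uploaded.
kopia snapshot create ~
kopia snapshot list
```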
mosselman | 3 years ago:
Instead, I'd recommend Arq Backup.
phcreery | 3 years ago:
I switched to restic and recommend it over Duplicati.
Tepix | 3 years ago:
However, the next question is always which cloud provider to use. Is OVH Cloud Archive the cheapest cloud storage for backups in Europe? It lets me use scp or rsync, among others.
They charge(§) $0.011/GB for traffic and $0.0024/GB/month for storage.
So if my total backup is 100 GB and I upload 5 GB per day of incremental backups, I pay around $2 per month.
--
§ https://www.ovhcloud.com/en/public-cloud/prices/#473
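For what it's worth, the arithmetic checks out under those rates (100 GB stored plus 150 GB/month of incremental uploads):

```shell
# 100 GB * $0.0024/GB/month storage + 5 GB/day * 30 days * $0.011/GB traffic
awk 'BEGIN { printf "%.2f\n", 100*0.0024 + 5*30*0.011 }'   # prints 1.89
```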
rsync | 3 years ago:
OVH may, indeed, be the cheapest.
If you email[1] and ask for the long-standing "HN Reader Discount" you can get $0.01/GB storage and free usage/bandwidth/transfer.
Zurich Equinix ZH4 on init7 pipes.
Depending on your preference either [2] or [3] may be the most compelling aspect of our service.
[1] [email protected]
[2] https://news.ycombinator.com/item?id=26960204
[3] https://www.rsync.net/products/universal.html
asmor | 3 years ago:
https://www.hetzner.com/storage/storage-box
michaelcampbell | 3 years ago:
If you don't know, then it's not working. At least that should be your stance on backups.
bobek | 3 years ago:
https://bobek.cz/blog/2020/restic-rclone/
antx | 3 years ago:
I now use restic and I'm very happy. I find it to be very resilient: no more database, only indexes and data packs, which can be repaired.
willriches | 3 years ago:
* https://duplicati.readthedocs.io/en/latest/appendix-c-choosi...
funOtter | 3 years ago:
* https://rclone.org/
MikusR | 3 years ago:
- Cross platform
- GUI
- Encryption/Compression/Deduplication
marwis | 3 years ago:
I wonder if there is any program that does?
AtroxDev | 3 years ago:
Works great across all my devices (Windows, Mac, Linux).
kavalg | 3 years ago:
- Cross platform (.NET / Mono)
- Incremental backups with compression
- Encryption (AES-256)
- Backup verification
- Block level deduplication
- WebUI
- Lots of backends supported
Wondering how that compares to https://restic.net/
rroot | 3 years ago:
https://github.com/borgbackup/borg
yewenjie | 3 years ago:
https://kopia.io/
BeetleB | 3 years ago:
https://duply.net/Duply_(simple_duplicity)