A lot of people sing BB's praises, but I never had a good experience with them. The client was always slow, buggy, and resource-hungry, and its UI is terrible. They got shirty with me for reporting bugs while I was using a macOS beta. And finally, at some point, even though nothing about my computer changed (it was a Mac Mini; what was going to change?), I got a message saying some security/copy-protection system had detected that my computer was "different", and I had to uninstall and reinstall the entire app to fix it (there apparently being no easier way to unset a flag). I uninstalled and skipped the second part.
Instead of using BB, get a Synology/QNAP/FreeNAS box to back up all your stuff locally, and back that up to another service (e.g. Glacier or Synology's own C2).
willis936|4 years ago
B2/S3 is what most people want.
linsomniac|4 years ago
Then I realized that S3 Glacier and Deep Archive were even less expensive than B2. But looking a bit further, I found that Glacier/DA files have some fairly chonky metadata that must be stored in normal S3, and for a lot of our images the metadata was larger than the image itself. So Glacier/DA would actually increase our storage costs; overall it probably wasn't a money-saving situation.
The ideal approach is to bundle those images up into a tar file or something, store those large files, and manage the metadata and indexing/access ourselves.
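A minimal sketch of that bundling idea (filenames and contents are illustrative): keep our own name → (offset, size) index in cheap storage, and retrieve individual files with ranged reads against the one big archive — simulated here with an in-memory buffer.

```python
import io
import tarfile

# Hypothetical tiny "images"; in practice these are the many small files
# whose per-object Glacier metadata would dwarf the data itself.
files = {"img_001.dat": b"a" * 100, "img_002.dat": b"b" * 250}

# Bundle everything into one uncompressed tar, so Glacier/Deep Archive
# stores a single large object instead of thousands of tiny ones.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Our own index: name -> (byte offset of the data, size). This is the
# "manage the metadata/indexing ourselves" part; it lives in normal,
# cheap storage rather than as per-object S3 metadata.
buf.seek(0)
index = {}
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        index[member.name] = (member.offset_data, member.size)

# Retrieval is then a ranged read into the archive at the stored offset
# (a seek here; an HTTP Range GET against the real object in practice).
off, size = index["img_002.dat"]
buf.seek(off)
assert buf.read(size) == b"b" * 250
```

The trade-off is that restores of single files now depend on the index staying in sync with the archive, which is why it belongs alongside the data in normal S3.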
So, using rclone to copy 11TB of data to B2.
iJohnDoe|4 years ago
Can you elaborate on this part?
spacedcowboy|4 years ago
It takes about 2 days to make a full backup, and I can fit incrementals for the next 5 days on the batch-of-7. Then I switch to the second magazine, and do the same thing. I actually have 3 magazines, one of which I swap in and out every week, and during the before-times, I'd take that off-site to work.
I have ~30 years of data, from back when I was in college and writing CD-ROMs for backup, all on the one system. Admittedly, the major space-taking thing is the Plex library, but I wouldn't want to lose that either. It takes about 5 minutes to walk into the garage (where the server-rack is), swap magazines and I'm done - the rest is automatic.
I have vague ideas for writing a more-efficient tar designed for this specific type of setup (big disk with attached tape). The best way to do it, I think, is to have multiple threads reading and bzip2-compressing data, piping blobs through to a singleton tape-writer thread. Every now and then (50GB, 500GB, 1TB?), close the device and reopen the non-rewindable device to get a record-marker on the tape, and then store the tape/record-marker/byte-offset etc. in a SQLite database on the disk. That way I'd get:
- High levels of compression without making the tape head wait for data (which ruins the head): multiple threads pool highly-compressed data into a "record"
- Fast lookup of what is where: I'm thinking a SQL LIKE query for search against the disk-based DB. No more waiting for the record to page in from the end of the tape.
- Fast find on-tape, since you'd know to just do the equivalent of 'mt fsf N' before you actually have to start reading data
Right now, tar is good enough. One of these days when I get time, I'll write 'bar' (Backup And Restore :)
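A minimal, single-machine sketch of that design (all names illustrative; a plain file stands in for the tape device): worker threads bzip2-compress blobs so the single tape writer never stalls, and a SQLite index on disk records where each blob landed so restore can seek straight to it.

```python
import bz2
import os
import queue
import sqlite3
import tempfile
import threading

tmp = tempfile.mkdtemp()
TAPE = os.path.join(tmp, "tape.bin")  # stand-in for the tape device
work, done = queue.Queue(), queue.Queue()

def compressor():
    # Worker: compress blobs off the work queue so the writer never waits.
    while True:
        item = work.get()
        if item is None:
            break
        seq, name, data = item
        done.put((seq, name, bz2.compress(data)))

threads = [threading.Thread(target=compressor) for _ in range(2)]
for t in threads:
    t.start()

blobs = [("fileA", b"A" * 10_000), ("fileB", b"B" * 10_000)]
for i, (name, data) in enumerate(blobs):
    work.put((i, name, data))
for _ in threads:
    work.put(None)
for t in threads:
    t.join()

# Singleton writer: drain results in sequence order, append to the "tape",
# and record name/offset/length in the on-disk index.
db = sqlite3.connect(os.path.join(tmp, "bar_index.db"))
db.execute("CREATE TABLE idx (name TEXT, offset INTEGER, length INTEGER)")
with open(TAPE, "wb") as tape:
    for seq, name, blob in sorted(done.get() for _ in blobs):
        offset = tape.tell()
        tape.write(blob)
        db.execute("INSERT INTO idx VALUES (?, ?, ?)", (name, offset, len(blob)))
db.commit()

# Fast find: a SQL LIKE query against the index instead of scanning the tape,
# then seek directly to the blob (the 'mt fsf N' equivalent).
row = db.execute("SELECT offset, length FROM idx WHERE name LIKE 'fileB%'").fetchone()
with open(TAPE, "rb") as tape:
    tape.seek(row[0])
    assert bz2.decompress(tape.read(row[1])) == b"B" * 10_000
```

The real version would additionally close/reopen the non-rewinding device periodically to cut record markers, storing the marker number alongside the byte offset.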
sponaugle|4 years ago
I need a solution for backing up the stored states for my 100 trillion digit PI calculation efforts.
I use a RAID 1 to handle drive failure and also keep local backups on a NAS. BB is my third layer of backup. I've never run into issues with BB backups so I'm happy for what I get for the price.
markdown|4 years ago
The question this thread seems to be raising is... do you? Do you really? Are you sure?
janitha|4 years ago
I have a ZFS-based NAS, and periodically do an incremental backup (zfs send) of the entire dataset, encrypt it with gpg, and pipe it straight up to S3 Deep Archive. Works like a charm.
The catch with S3 deep archive is if you want to get the data back... It's reliable, but you will pay quite a bit more. So as a last resort backup, it's perfect.
mlangenberg|4 years ago
I'm not a big fan of backing up a backup, so I opted for a Time Machine backup to a local NAS and, in parallel, an off-site backup to B2 with Arq on my Macs.