The different media types are probably the hardest requirement; I'd wager that for most people, backing up more than 2 TB on anything but spinning rust will be impractical or prohibitively expensive.
A slight twist on this: I now have data old enough that accessing it with modern computers is starting to become a challenge. Thankfully I migrated my once-enormous collection, 50,000 MB on tape, to plain files on my file server, but I'm worried about the longevity of optical media, and now I nervously glance at my collection of even older media...
And the most important part of the rule, which they forgot: Test your backups.
I once had to restore a DB from backups, only to discover that the backups from two days ago were corrupted. Fortunately the t-3-days backup worked, and I had to do some binlog mongering to recover the rest of the data.

Test your backups, people. They WILL fail.
People often misunderstand the "Different Media" part of the rule, and this blog post makes it murky as well:

> The thing is, while keeping data on the same storage media, you may lose them due to the same hardware issues. In other words, you may lose two copies in the same accident. That’s why you should always combine media.
For "same media" the key part of this is storing the data on two DIFFERENT DEVICES of different types, they both could be hard drives, but they need to be in isolated systems of different types (say a Windows System and a Synology NAS, or a FreeNAS Storage Appliance and a Linux Server, etc)
The rule came about because people would use the same tape library, or have multiple copies on the same SAN (often times they would have a SAN Cluster of 2 or more devices that act as 1, and because they had 2 "devices" they felt they were protected.
Disparate media type (HDD, Tape, DVD, etc) may have some advantaged but as long as you are putting 2 copies on say your Home Desktop, and a NAS you have satisfied 2 media's even if both are using Hard drives
That said today the most common way for a person fill offsite and separate media is to use a Cloud Backup of some kind.
Oh, I forgot to add that I would _never_ trust flash memory (e.g. SSDs) to keep data for more than about a year without being powered on. It's an absolutely terrible archival format. (Powered on, with scrubbing, you'd at least know when it's starting to degrade.)
> The different media types is probably the hardest issue; I'd wager that for most people, backing up > 2 TB on anything but spinning rust will be impractical/prohibitively expensive.
There's LTO (Linear Tape Open): LTO-8 has 12 TB of raw capacity and costs about 65-70 €.

The newest variant (not yet _that_ widely available), LTO-9, has 18 TB.

You need a few to handle rotation, but for most people that means at most ten, plus one new tape per year for the long-term archive.

We provide tape support in our open source Proxmox Backup Server product; it can handle single drives and tape robots (with auto-changers that lessen the work of tape rotation), and the data is deduplicated, compressed, and optionally encrypted.

https://pbs.proxmox.com/docs/tape-backup.html

PBS can also efficiently mirror to remotes:

https://pbs.proxmox.com/docs/managing-remotes.html

Check the introduction/main feature section for more info if you're interested: https://pbs.proxmox.com/docs/introduction.html
My personal interpretation/variation of this is to use two different backup formats, even if everything ultimately ends up on hard drives (whether my own or a cloud service provider's). I usually do one file backup via Duplicacy (Restic/Borg/Duplicity/Time Machine work just as well) and one system-image backup via Macrium Reflect/Carbon Copy Cloner. The file backup gets copied to Backblaze B2; the image backup stays local.
There are Blu-ray discs specifically engineered for long-term archiving, which come with BER guarantees (anomalous bit reads per GB of data stored per year archived, or something like that).
They count external USB drives and NAS as different types; specifically, about the NAS:
> can be considered statistically independent too since it is connected over the network and may survive if something bad happens to a part of your infrastructure.
So my interpretation is: e.g. a HDD connected over SATA, an external HDD over USB and a HDD on NAS are all considered different media types even though ultimately they are all based on spinning hard drives.
> The different media types is probably the hardest issue;
For my home stuff, I'm comfortable enough with having a backup on RAID 1 mirrored spinning rust on my media server, plus a copy of that backup (also on spinning rust) on an external USB hard drive. The chances of simultaneous corruption of those - even though they're both spinning rust - seem "low enough" for me.
(I also have another raid 1 pair of spinning rust that powers up once a week very early Monday morning and rsyncs the media server backup directory, then powers back down when it's done, to mitigate against fat-finger-fuckups and/or home network p0wnage. Somebody who p0wns something inside my network might find the cronjob or shell scripts that run the wifi powerpoint those drives use on and off, but at least I'm down to an hour or so a week where that copy might get cryptolockered...)
People often forget the 3-2-1 rule should also apply to the secrets you use to encrypt/store your backups. If you can't decrypt it, it isn't a backup.
I use a crazy long passphrase to encrypt my backups, but should I forget it, it is also printed on archival paper inside a sealed envelope in a friend's safe deposit box (I also have a copy of his backup passphrase for mutually assured destruction :)).
Also, every once in a while run a fire drill and actually restore something from each of your backups. This is when you find out the rsync job has been stuck for the last 80 days.
You could use Shamir's Secret Sharing to split the secret into N unrelated strings and then recover it from any k of them (e.g. generate 5 shares and require any 2 to recover). That way you can share some of those shares, but nobody will have the target password.
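For intuition, here's a minimal sketch of the k-of-N scheme (a toy illustration only; real tools like `ssss` handle string encoding and use vetted parameters). The secret is the constant term of a random polynomial over a prime field, shares are points on that polynomial, and any k points recover it by Lagrange interpolation at zero:

```python
import random

# A large prime defining the finite field; real implementations pick a
# prime larger than any possible secret.
PRIME = 2**127 - 1

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, eval_poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 yields the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, n=5, k=2)
assert recover(shares[:2]) == 123456789        # any two shares suffice
assert recover([shares[0], shares[4]]) == 123456789
```

Fewer than k shares reveal nothing about the secret, which is what makes it safe to hand individual shares to friends.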
> I also have a copy of his backup passphrase for mutually assured destruction
I have two friends who've given me private key fragments (from a Shamir's Secret Sharing setup) with the explanation "If anything happens to me, you'll work out who you need to talk to and what you'll need to do."
I haven't done that myself, because I don't have any need/desire for anybody to decrypt my backups once I'm not around. That might change if I end up with dependants one day.
> This is when you find out the rsync job has been stuck for the last 80 days.
Fire drills are important, but I get a Telegram notification every day when my rsync jobs complete. The notification is muted, but I see it in the list every morning when I use Telegram to text my girlfriend. If it wasn't there, I'd immediately know something wasn't working.
A full test is absolutely needed, including any system or DB restores, and the system needs to be fully functionally tested.

A quick backup verification is not enough. I learnt this the hard way: I lost 9 months of data and only discovered it when I did the full annual DR test. The backups had been set up and configured by the (very well known) manufacturer of the backup software. They had screwed up the DNS name.
> I use a crazy long passphrase to encrypt my backups, but should I forget - it is also printed on archival paper inside a sealed envelope in a friends safe deposit box (I also have a copy of his backup passphrase for mutually assured destruction :)).
Why not just remember the password and perform a regular fire drill decryption to ensure you won't ever forget it?
For backups that I can safeguard physically (as opposed to cloud), I just copy files uncompressed and unencrypted using FreeFileSync and SyncBack. Restoring is then as easy as a media swap.
Most people I know make "full backups". I've argued with them repeatedly about what would be critical to lose, and what would merely be inconvenient.

I.e. I back up my photo collection religiously. I keep all my photos in the cloud, and have a machine synchronizing photos locally. That machine then makes 2 backups, one local, and one to another cloud. The same applies to documents, mostly because they're highly compressible and don't take up much space.

When it comes to media backups, I honestly don't care if my iTunes library gets wiped out. Most of it has been purchased, so it can (hopefully) be downloaded again, and the rest has been ripped from CDs that still reside somewhere in my attic. So inconvenient to lose, but not exactly critical.

I also don't make "full computer backups". If something breaks I can just as easily reinstall the computer/applications and restore my documents/photos.

As for photos, I also burn identical M-DISC BDXL media every so often, containing the photos taken since the last archive date, and store the copies in different locations. They're low cost, low maintenance, and while not "spinning rust" cheap, they're still within $12/100 GB, and unlike spinning rust and cloud, it's a one-time cost.
Absolutely. Disk image backups feel completely pointless to me — why would I back up my entire system when I can reinstall 90% of it from a live USB? It's only my documents that I care about (including photos, etc). My restic job runs in seconds.
As far as I can see, this scheme gives you one backup copied to three places, on two different media.
What if the source for the backup was already corrupt or broken in some way? If you only have one backup, then your backup is corrupt too.
I was taught grandfather-father-son back in the 80s; still three levels of backup, but they're different generations. That fitted the kinds of media available then, but it doesn't really map to modern equipment. I've struggled to work out a backup scheme that is equally adaptable to the needs of a small business, a home network or an individual.
Ironically, it's hardest for the individual; a modern business is finished if it loses all its data. For an individual (or even a hobby network), total data-loss is painful, but not usually an existential risk. So it's harder to justify keeping everything in triplicate.
Reminds me of a story here on HN, where they had plenty of backups, all encrypted with an in-memory key that was accidentally never saved to disk, even on their primary server.
Doesn't the use of snapshots (and their backup) help with this?
I am assuming here that you mean the source got corrupted some time after it was created (and if that is not the case, it is an issue unrelated to backup, in my opinion).

A proper backup tool should help you keep several versions of your data without using a proportional amount of space, by using some form of deduplication. I use Borg for my backups, and I can go back to any day in the past three years and get an old copy of any document (as long as it sat on my disk for more than one day, since I do daily backups).

You can also set up "append only" backups, if you are worried that somebody may deliberately try to destroy your old copies.
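The space efficiency of such tools comes from content-addressed chunk storage: each snapshot is just a list of chunk hashes, and identical chunks are stored only once. A toy sketch of the idea (fixed-size chunks for simplicity; Borg actually uses content-defined chunking, compression, and encryption):

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real tools use chunks of KBs to MBs

store = {}      # chunk hash -> chunk bytes (the deduplicated storage)
snapshots = {}  # snapshot name -> ordered list of chunk hashes

def backup(name: str, data: bytes):
    hashes = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # stored once, however many snapshots reference it
        hashes.append(h)
    snapshots[name] = hashes

def restore(name: str) -> bytes:
    return b"".join(store[h] for h in snapshots[name])

backup("monday", b"aaaabbbbcccc")
backup("tuesday", b"aaaabbbbdddd")  # only the changed chunk is stored anew
assert restore("monday") == b"aaaabbbbcccc"
assert restore("tuesday") == b"aaaabbbbdddd"
assert len(store) == 4  # aaaa, bbbb, cccc, dddd; shared chunks stored once
```

This is why daily snapshots going back years cost only a little more than one full copy, as long as the data changes slowly.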
I use Apple's TimeMachine backups as my sources, and BackInTime on the Linux boxes I care about, so I'm backing up an archive which preserves previous versions of files. I push 12 rolling monthly archival copies of them into AWS Glacier, and keep one of those as an annual archive. That's costing me around $200/year in Glacier costs. I've only ever done a restore once from there as a test, from memory it cost me ~$30 in retrieval charges.
I never really liked the 3-2-1 rule because it feels too specific: while it works, simpler solutions also provide the same level of reliability.
I think about backups in terms of blast radius. 1) The local machine has the working copy of data and a local backup as permitted by free space. The smallest blast radius where I lose data is "my laptop hard drive fails". 2) My external drive has another backup. The new blast radius is "my house burns down". 3) I maintain a cloud backup. The new blast radius is "a catastrophe on a global scale".
Any two of these backups can fail and the data is still salvageable.
I think actually, physically going through your disaster recovery plan (whatever that is, 3-2-1 or not) reveals how good or bad your backup plan really is for you.
Personally, I downsized. I made peace with myself that I can live without the gobsmacking amount of data I have, were it to be lost tomorrow. I pared down to a small set of data (less than 1 GB) that is critical to me. That data is synced to various devices with Syncthing (even my cellphone!), and then I use restic to back it up to two different cloud storage providers. When I'm bored I do an independent, standalone export from cloud storage.
Having several drives is not enough. I used to keep my important data replicated on 3 drives, from different brands, different capacities, 1 internal, 2 external.
One day the internal drive failed, and the next day one of the external drives also failed. So... I panicked, shut everything down and bought a brand new hard drive (1 TB). While copying from the third drive, it also stopped working. So I had a triple drive failure.
I managed to recover most of my data by freezing the external drives and copying from them (until they heated up and had to freeze them again).
Are there guidelines for how long a given type of media is considered stable?
From experience, hard drives seem safe on the order of years; I've spun drives back up from the early 2000s and they were fully intact. The lifespan of burned optical media is/was counted in minutes. Various flash memory is somewhere in between; I've had all kinds of cheap flash drives die.
Even with the multiple media forms they need to be refreshed at some frequency, and I don't know how often that should be.
This is from 2019, not that the advice ever goes out of date.
It's crazy how the seemingly easiest most basic security/backup advice is so easy to give, and so hard to actually do. 3-2-1, so easy to teach and remember! In reality, at any kind of scale, not so easy to do.
I am constantly reminded just how hard every aspect of security really is to do. Even for the little/basic stuff.
My first service order, way back when, was to an engineering office who were in a panic because their primary CAD file server had failed right in the middle of a deadline submittal.

I got there when they had just pulled out an identical file server from some other department. They had 'everything' backed up nightly onto a Zip drive. Nice, I thought. So I plug in the Zip drive and can see all these image files created by their backup software. I ask for the backup app install files, and they say it's kept in a directory -- on the failed server!!!
I should mention this was not in the USA and resulted in a 6+hr long international 42kbps zmodem download session from some random BBS server of the best guess software product and version.
I still have one of the failed HDDs from that server as a paper weight on my desk.
PS. We got it working in time and the obvious moral of the story is to always test your backup systems (and that goes for both failover and fallback)
For those out there using S3 to store their personal backups, I'd like to recommend Wasabi [0] as either a replacement or addition to AWS S3.
Wasabi offers object storage in multiple regions and is compatible with the AWS S3 API, so it's a drop-in replacement. The storage cost is $6 per TB per month, with no bandwidth costs.

I'm not affiliated with them, just a happy customer enjoying massive cost savings.

[0] https://wasabi.com/
This 3-2-1 backup strategy is missing a dimension: time. Data is rarely unchanging. The 3-2-1 strategy is inadequate for changing data, because if you delete the wrong database table and didn't realize it, the 3-2-1 backups are going to be missing the table too.
You need to add the time dimension and keep monthly snapshots for data older than a year, weekly snapshots for the last year, and daily snapshots for the last 90 days.
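Sketched in code, such a retention policy might look like this (the `keep` helper is hypothetical, and picking Sundays and the first of the month as the weekly/monthly representatives is an illustrative assumption):

```python
from datetime import date

def keep(snapshot_day: date, today: date) -> bool:
    """Daily for 90 days, weekly for a year, monthly beyond that."""
    age = (today - snapshot_day).days
    if age <= 90:
        return True                         # keep every daily snapshot
    if age <= 365:
        return snapshot_day.weekday() == 6  # keep one per week (Sundays)
    return snapshot_day.day == 1            # keep one per month (the 1st)

today = date(2024, 1, 1)
assert keep(date(2023, 12, 15), today)      # inside the 90-day daily window
assert keep(date(2023, 1, 1), today)        # a Sunday within the last year
assert keep(date(2022, 6, 1), today)        # monthly snapshot beyond a year
assert not keep(date(2022, 6, 15), today)   # old and not a month boundary
```

A pruning job would simply delete every snapshot for which `keep` returns False; tools like borg and restic ship equivalent `prune`/`forget` policies built in.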
You may want to check out our open source project, Proxmox Backup Server; it supports 3-2-1 in a fairly integrated way:

- integrated tape backup (LTO-4 or newer); this fulfills the two-different-media rule and may also help with the offsite one (e.g., put a tape in a safe at the CEO's home once a week or so)

- efficient mirroring of backup data to a remote PBS (offsite copy)

- client-side encryption, because, well, backups are good, but maybe not so much if they're leaked

- file-level and block-level backup (disclaimer: the former is currently only for Linux clients; the latter can handle anything but also runs on Linux, since VMs are used)

The introduction/main feature section of the docs contains more info, if you're interested: https://pbs.proxmox.com/docs/introduction.html

If you have your non-Linux workloads contained in VMs, and maybe even already use Proxmox VE for that, it really covers safe and painless self-hosted backup needs.
I say “you are only as good as your last restore”, a phrase I stole from the Theatre industry where your performance “is only as good as your last rehearsal”. Of course this is most useful in Disaster Recovery scenarios. Rehearsal? Restore? Oh we’ve never tested it.
This reminded me to buy a new hard drive for secondary backup. Is it just the nature of so many reviews these days, or is it impossible to find a drive now that won't likely fail in a few months? I just spent an hour on Amazon and other stores and can't find a single drive that isn't littered with relatively believable 1-star horror stories... many of which include things like "I've had 3 drives from X brand before but this is nowhere near as good as the old ones."
(Side note, I also see the same comments about the shoes I want from the company I've bought shoes from forever).
Is this comment overload from naive users in the hard drive sector, or has quality control dropped across the board?
It seems like a solid plan, but what's missing is that no matter whether you're doing 3-2-1 or using simpler solutions, you should test periodically that the backup works. Otherwise, over the years, the backups die one by one and nobody notices, which defeats the point of so much redundancy. There's also a chance the backup data is corrupted or incomplete (especially due to the design), so it's not possible to restore the system. Just test it periodically.

If there's a backup process, you should test it too. Scenarios where backup job workers are stuck and the system is not actually backing up are not uncommon.
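One cheap way to make that periodic test concrete: restore into a scratch directory and compare checksums against the live data. A sketch (the helper names are made up; the actual restore command depends on your backup tool):

```python
import hashlib
import os
import tempfile

def tree_digest(root: str) -> dict:
    """Map each file's path (relative to root) to the SHA-256 of its contents."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

def verify_restore(source_dir: str, restored_dir: str) -> list:
    """Return relative paths that are missing or differ in the restored copy."""
    src, dst = tree_digest(source_dir), tree_digest(restored_dir)
    return sorted(p for p in src if dst.get(p) != src[p])

# Tiny self-test with scratch directories standing in for live data and a restore.
live, restored = tempfile.mkdtemp(), tempfile.mkdtemp()
for d in (live, restored):
    with open(os.path.join(d, "notes.txt"), "wb") as f:
        f.write(b"important data")
assert verify_restore(live, restored) == []  # a clean restore reports no differences
```

Run it after a scripted restore from each backup target; a non-empty result is exactly the "stuck rsync job" failure you want to catch early.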
What's the cloud solution (at major providers) that automatically geographically replicates data? For example, S3 buckets are tied to a region, which cannot be considered reliable: a single DC can always burn down, or even just be intermittently unreachable. I'm looking for something that accepts an "upload" command and will (eventually) replicate the data to several regions. Ideally the regions can be changed at any point later, too.
This would take care of 3 and 1, sadly not 2 but 2 is pretty hard.
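Lacking a built-in cross-region primitive, one workaround is to fan the upload out yourself. A sketch (bucket names and regions are hypothetical; with boto3, `put` would wrap `boto3.client("s3", region_name=region).put_object(...)`):

```python
# Fan one object out to several S3-compatible regions. The `put` callable is
# injected so the same logic works with boto3 clients, Wasabi, or a test stub.
def replicate(key: str, body: bytes, buckets: dict, put) -> dict:
    """Upload `body` under `key` to every region->bucket pair; report per-region status."""
    results = {}
    for region, bucket in buckets.items():
        try:
            put(region, bucket, key, body)
            results[region] = "ok"
        except Exception as e:
            results[region] = f"failed: {e}"  # caller should retry/alert on partial failure
    return results

# Demo with a stub "cloud" so the logic is runnable without credentials.
fake_cloud = {}
def fake_put(region, bucket, key, body):
    fake_cloud[(region, bucket, key)] = body

status = replicate("backup.tar", b"data",
                   {"eu-west-1": "my-backups-eu", "us-east-1": "my-backups-us"},
                   fake_put)
assert status == {"eu-west-1": "ok", "us-east-1": "ok"}
```

Changing the region set later is just editing the `buckets` dict, though a real version would also need a reconciliation pass to backfill existing objects into newly added regions.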
I've added offline/local-first support to my web app so users can keep a copy and backups of all of their data on site and use the app offline, but none of them want to do anything at all to implement it, and I've made it very easy.
They really don't seem to give it any thought at all, even when I explain to them why they need to do this.
Conversely, when they're having network connection issues they don't hesitate to call me, and sometimes in a panic.
I'm going to keep pushing them to get on it on their end though.
How does everyone back up their iDevices? Every article I read talks about iCloud, which is great and I use, but doesn’t follow this principle and you’re out of luck if you lose access to your Apple ID.
I’ve found some content-specific solutions that will back up photos or contacts, for example, but I’m looking for a fairly streamlined, comprehensive solution that would back up everything: messages, contacts, photos, emails, bookmarks; basically iCloud 2, but without the Apple lock-in.
The different-media-types requirement is probably overrated, and you shouldn't feel bad about not implementing it. The justification for it is:

> you may lose them due to the same hardware issues

but buying different hard drive models/manufacturers/batches achieves the same thing, without the risk of having data stored on dinosaur media.
> I now have data old enough that accessing it with modern computers is starting to become a challenge.

I have that problem! I have files in my Documents folder going back to the 90s and Classic Mac OS!
> I'd wager that for most people, backing up > 2 TB on anything but spinning rust will be impractical/prohibitively expensive.

You'd lose that bet. 2 TB on LTO Ultrium tape costs under $10 (sometimes under $5, depending on the volume of tapes ordered).
> it is also printed on archival paper inside a sealed envelope in a friends safe deposit box

I just wish that storing things like keys on paper was easier.

At CoreOS we put some keys on printed QR codes and scanned them with an air-gapped laptop every 90 days to confirm the keys were safe.
“Do not back up everything!”

- Repeatedly clean your data,

- and discard redundancy and garbage before adding to the backup set;

- otherwise you'll just be creating a backup set that'll soon turn into a pile of "mostly" garbage,

- and it'll just be GBs and TBs of "too much": essentially useless, and costing more as well.

Keep your storage footprint in check.
Here's the readout from my last Borg backup, which says that I'm storing 2.5 TB worth of data in 59 GB.
Archive links:
https://web.archive.org/web/20211001064106/https://www.vmwar...
https://archive.md/1jHmP