Write a little program in your favorite shell or scripting language that
* rsyncs the directories containing the files you want to back up
* mysqldumps/pg_dumps your databases
* zips/gzips everything up into a dated archive file
* deletes the oldest backup (the one with X days ago's date)
Put this program on a VPS at a different provider, on a spare computer in your house, or both. Create a cron job that runs it every night. Run it manually once or twice, then actually restore your backups somewhere to ensure you've made them correctly.
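A minimal sketch of such a script (paths, database names, credentials handling, and the retention window are placeholders to adjust):

    #!/bin/sh
    # Nightly backup: rsync files, dump databases, archive, rotate.
    set -eu

    DATE=$(date +%F)                 # e.g. 2016-10-09
    BACKUP_ROOT=/var/backups/site    # placeholder path
    KEEP_DAYS=14                     # how many dated archives to keep

    mkdir -p "$BACKUP_ROOT/staging"

    # 1. Copy the directories you care about into a staging area.
    rsync -a /var/www/ "$BACKUP_ROOT/staging/www/"

    # 2. Dump databases (use pg_dump for Postgres); credentials via ~/.my.cnf.
    mysqldump --all-databases > "$BACKUP_ROOT/staging/mysql-$DATE.sql"

    # 3. Bundle everything into one dated archive.
    tar -czf "$BACKUP_ROOT/site-$DATE.tar.gz" -C "$BACKUP_ROOT" staging

    # 4. Drop archives older than the retention window.
    find "$BACKUP_ROOT" -name 'site-*.tar.gz' -mtime +"$KEEP_DAYS" -delete

A crontab entry like `0 3 * * * /usr/local/bin/nightly-backup.sh` covers the "every night" part.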
Or put this program on the same VPS, but instead of doing the gzipping and versioning yourself, incorporate Tarsnap[1] into the script. With Tarsnap you can also create a restricted key (one that can read and write archives but not delete them), so that if someone hacks your server, a real threat mentioned downthread, they won't be able to delete your backups.
And whatever you do, check that you can actually recover from these backups every once in a while.
[1] https://www.tarsnap.com/
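A rough sketch of that Tarsnap key setup (key file paths, user, and machine name are placeholders; check the tarsnap-keymgmt man page for the exact permission flags your version supports):

    # Generate the master key once; store it somewhere that is NOT this server.
    tarsnap-keygen --keyfile /root/tarsnap-master.key \
        --user you@example.com --machine myserver

    # Derive a key that can create and read archives but not delete them;
    # this restricted key is the only one that lives on the server.
    tarsnap-keymgmt --outkeyfile /root/tarsnap-write.key -r -w \
        /root/tarsnap-master.key

    # Nightly archive using the restricted key.
    tarsnap --keyfile /root/tarsnap-write.key -c \
        -f "backup-$(date +%F)" /var/www /var/backups/mysql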
I've used BackupPC (http://backuppc.sourceforge.net/) for many, many years. Setup is a bit of a pain, especially if it's your first time, but it's a totally reliable backup system and gives you something much better than just a pile of zip archives.
All of our servers get BackupPC'd (rsync-over-ssh, pulled) twice a day to an in-house server that's totally unreachable from the internet. I get emails from BackupPC when something goes wrong, which is pretty much never. Backups aren't a thing I have to worry about much anymore.
The simplest way for us is to use rsync; there's a service that's been around for over a decade and is just perfect for offsite backup: http://rsync.net/index.html
We basically create a backup folder (our assets and a MySQL dump), then rsync it to rsync.net. Our source code is already in git, so it's effectively backed up on GitHub and on every developer's computer.
On top of that, rsync has very clear and simple documentation, so it's quick to set up on any Linux distro.
I hope that you know that your account, like all accounts at rsync.net, is on a ZFS filesystem.
This is important because it means that inside your account, in the .zfs directory, are 7 daily "snapshots" of your entire rsync.net account, free of charge.
Just browse right in and see your entire account as it existed on those days in the past. No configuration or setup necessary. Also, they are immutable/readonly so even if an attacker gains access to your rsync.net account and uses your credentials to delete your data, the snapshots will still be there.
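For example (the hostname, username, and snapshot directory name below are just illustrative), pulling a file back out of a snapshot is a plain SSH/scp operation:

    # List the snapshots available in your account.
    ssh user123@usw-s001.rsync.net ls .zfs/snapshot

    # Copy a file back exactly as it existed in one of those snapshots.
    scp user123@usw-s001.rsync.net:.zfs/snapshot/<snapshot-name>/backups/db.sql.gz .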
Not sure I'd agree there, but it's not inscrutable. I use rsync for almost all file transfers, backups included, so I'm used to it. But there are oddities here and yon.
DigitalOcean has a droplet backup solution priced at 20% of the monthly cost of your droplet. Doesn't get much easier than that, if you can afford it. For a small droplet ($10/month) that's a full backup of everything for a buck a month. https://www.digitalocean.com/community/tutorials/understandi...
Is 20% of $10 $1.00? (It's $2.) That said, if you have a small application, I would pay DO to back it up. They have global servers and imaging/snapshots. Your app would probably be fine if you can get away with that kind of hosting.
I do believe you can take images and snapshots and download them, so using the API, a user could probably rig up a script to make it redundant if it was mission critical.
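A hedged sketch of triggering a dated snapshot through the DigitalOcean v2 API (token and droplet ID are placeholders; whether a snapshot can then be exported off DO is something to verify against their current feature set):

    #!/bin/sh
    # Kick off a dated snapshot of a droplet via the API.
    DO_TOKEN="your-api-token"     # placeholder
    DROPLET_ID="123456"           # placeholder

    curl -s -X POST "https://api.digitalocean.com/v2/droplets/$DROPLET_ID/actions" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $DO_TOKEN" \
        -d "{\"type\":\"snapshot\",\"name\":\"backup-$(date +%F)\"}"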
Whatever strategy you use, make sure you test the process of recreating the server from a backup to make sure you will actually be able to recover. You'll also have an idea how long it will take, and you can create scripts to automate the entire flow so you don't have to figure it all out while you're frantic.
I use tarsnap, as many others in this thread have shared. I also have the Digital Ocean backups option enabled, but I don't necessarily trust it. For the handful of servers I run, the small cost is worth it. Tarsnap is incredibly cheap if most of your data doesn't change from day to day.
Some of our customers have already recommended rsync.net to you - let me remind folks that there is a "HN Readers Discount" - just email us[1] and ask for it.
[1] info@rsync.net
I use AWS S3 for this as the storage prices are so cheap, at $0.03 per GB.
I recommend using a utility called s3cmd, which is similar to rsync in that you can back up directories.
I just have this setup with a batch of cron jobs which dump my databases and then sync the directories to s3 weekly.
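A minimal sketch of that kind of setup (bucket name and paths are placeholders):

    #!/bin/sh
    # e.g. saved as /etc/cron.weekly/backup-to-s3
    set -eu

    # Dump the databases first so the sync below picks them up.
    mysqldump --all-databases | gzip > "/var/backups/mysql-$(date +%F).sql.gz"

    # Mirror the directories to S3. --delete-removed keeps the bucket tidy;
    # omit it if you'd rather keep every file ever uploaded.
    s3cmd sync --delete-removed /var/backups/ s3://my-backup-bucket/backups/
    s3cmd sync /var/www/ s3://my-backup-bucket/www/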
I use duply (a simpler CLI front-end to duplicity) for doing encrypted incremental backups to S3.
The only annoying thing is that duplicity uses an old version of the boto s3 library that errors out if your signatures tar file is greater than 5gb unless you add `DUPL_PARAMS="$DUPL_PARAMS --s3-use-multiprocessing "` to your duply `conf` file. Took me days to figure that out.
I host many sites for clients, and use the same approach. Our VPS host offers Plesk (which we use) and it creates a backup every day (basically ZIPs up non-system directories and runs mysqldump / pg_dumps on the databases)... then I wrote a simple bash script which sends the zipped backup to an S3 bucket using s3cmd.
It took a little time to set up, but it is conceptually simple, very inexpensive (especially if you set up S3 to automatically send older files to Glacier, and/or remove old backups every now and then)... and I like that the backups are off-site and stored by a different company than the web hosts.
I see a lot of people mentioning different tools, but one thing you'll discover if you need to restore in the future is that it is crucial to distinguish between your "site" and your "data".
My main site runs a complex series of workers, CGI scripts, and daemons. I can deploy them from scratch onto a remote node via Fabric & Ansible.
That means that I don't need to back up the whole server "/" (although I do!). If I can set up a new instance immediately, the only data that needs to be backed up is the contents of some databases, and to do that I run an offsite backup once an hour.
I use config management to build the system (Puppet in my case, purely due to experience rather than strong preference) so it's fully reproducible.
I push my data with borg (https://github.com/borgbackup/borg) to rsync.net (http://rsync.net/products/attic.html) for offsite backup.
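A rough sketch of that kind of borg setup (repo path, host, and retention are placeholders, and it assumes borg is available on the remote end, as it is with rsync.net's borg/attic offering):

    # One-time: create an encrypted, deduplicating repository on the remote host.
    borg init --encryption=repokey myuser@myhost.rsync.net:backups/borg

    # Nightly: create a dated archive and prune old ones.
    export BORG_PASSPHRASE='...'   # placeholder; or use BORG_PASSCOMMAND
    borg create myuser@myhost.rsync.net:backups/borg::{hostname}-{now} \
        /etc /var/www /var/backups/db
    borg prune myuser@myhost.rsync.net:backups/borg \
        --keep-daily 7 --keep-weekly 4 --keep-monthly 6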
All automated, with one copy to AWS, one copy to Azure, and a local copy that goes via scp to my home server. Rolling 10, with every 10th backup put into cold storage. And I use a different tool for each, just in case.
For a static site, put it in version control and keep a copy of your full site and deployment code.
For a database-driven dynamic site, or a site with content uploads, you can also use your version control via a cron job to upload that content. Have the database journal out the tables you need to back up before syncing to your DVCS host of choice.
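A rough sketch of that approach (repo location, database, and table list are placeholders):

    #!/bin/sh
    # Dump selected tables as SQL text and push them to the backup repo.
    set -eu
    cd /srv/site-backup            # a working clone of your backup repository

    for TABLE in users posts uploads; do
        mysqldump --skip-dump-date mydb "$TABLE" > "db/$TABLE.sql"
    done

    git add db/
    git commit -m "DB snapshot $(date +%F)" || true   # no-op if nothing changed
    git push origin master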
If you're looking for a backup service to manage multiple servers with reporting, encryption, deduplication, etc., I'd love your feedback on our server product: https://www.jungledisk.com/products/server (starts at $5 per month).
Remember to have automated restore testing that validates restores are successful and the data "freshness" is within a reasonable period of time, such as last updated record in a database.
Lots of people only do a full test of their backup solution when first installing it. Without constant validation of the backup->restore pipeline, it is easy to get into a bad situation and not realize it until it is too late.
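A minimal sketch of that kind of automated restore check, assuming a nightly compressed pg_dump and some table with an updated_at column to judge freshness by (the dump path, table, and column names are placeholders):

    #!/bin/sh
    # Restore the newest dump into a scratch database and verify data freshness.
    set -eu

    LATEST=$(ls -t /var/backups/db-*.sql.gz | head -n 1)

    dropdb --if-exists restore_test
    createdb restore_test
    gunzip -c "$LATEST" | psql -q restore_test

    # Fail (non-zero exit, so cron/monitoring notices) if the newest
    # record is older than ~25 hours.
    psql -At restore_test -c \
      "SELECT CASE WHEN max(updated_at) > now() - interval '25 hours'
              THEN 0 ELSE 1 END FROM events;" | grep -qx 0 \
      || { echo "Restore check failed: data is stale" >&2; exit 1; }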
On OVH I rsync to another VPS in a different data center. I pick the lowest priced VPS with enough space. I also rsync to a local disk at my home. I would do the same with DO.
OVH has a premium backup-by-FTP service, but the FTP server is accessible only from the VPS it backs up. Pretty useless, because in my experience, when an OVH VPS fails, technical support has never been able to bring it back online.
Backupninja. It handles backing up to remote servers via rdiff, so I have snapshots going back as far as I need them. The remote server is with another provider. As long as I have SSH key login to the remote server enabled, backupninja will install the dependencies on the remote server for me.
I use a daily systemd timer on my home machine to remotely back up the data on my VPS. From there, my home machine backs up a handful of data from different places to a remote server.
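A bare-bones sketch of such a timer/service pair (unit names and the script path are placeholders):

    # /etc/systemd/system/vps-backup.service
    [Unit]
    Description=Pull backup data from the VPS

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/pull-vps-backup.sh

    # /etc/systemd/system/vps-backup.timer
    [Unit]
    Description=Run vps-backup daily

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with `systemctl enable --now vps-backup.timer`.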
Make sure you check the status of backups; I send journald and syslog output to Papertrail[0] and have email alerts on failures.
I manually verify the backups at least once a year, typically on World Backup Day[1].
[0] https://papertrailapp.com/ [1] http://www.worldbackupday.com/en/
I just use a simple scheduled AWS lambda to PUT to the redeploy webhook URL.
I use an IAM role with put-only permissions to a certain bucket. Then, if your box is compromised, the backups cannot be deleted or read. S3 can also be setup to automatically remove files older than X days... Also very useful.
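A sketch of what that could look like (bucket name, user, and retention are placeholders; the exact policy you need may differ):

    # Policy for the backup user: uploads only, no GetObject/DeleteObject,
    # so a compromised box can neither read nor purge existing backups.
    aws iam put-user-policy --user-name backup-bot --policy-name put-only-backups \
        --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow",
          "Action":["s3:PutObject"],"Resource":"arn:aws:s3:::my-backup-bucket/*"}]}'

    # Expire objects older than 30 days automatically.
    aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket \
        --lifecycle-configuration '{"Rules":[{"ID":"expire-old-backups",
          "Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":30}}]}'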
I run a couple of Virtualmin web servers which do Virtualmin-based backups (backing up each website with all its files/email/DBs/zones etc. into a single file, very much like how cPanel does its account backups), and those are rsynced (cron job) to my home server that runs two mirrored 1 TB disks. A simple bash script keeps a few days of backups, plus a weekly backup that I keep two copies of. Overall pretty simple, and it's free since I'm not paying for cloud storage.
The sites I host on DigitalOcean are all very simple Rails sites deployed with Dokku. The source code is on GitHub, and the databases I back up hourly to S3 with a very simple cron job.
Bash script to dump all DBs locally and tar up any config files.
Then the script sends it to s3 using aws s3 sync. If versioning is enabled you get versioning applied for free and can ship your actual data and webdocs type stuff up extremely fast and it's browsable via the console or tools. Set a retention policy how you desire. Industry's best durability, nearly the cheapest too.
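A rough sketch (bucket and paths are placeholders):

    # One-time: enable versioning so overwritten or deleted objects keep history.
    aws s3api put-bucket-versioning --bucket my-backup-bucket \
        --versioning-configuration Status=Enabled

    # Nightly: dump DBs and config locally, then ship only what changed.
    mysqldump --all-databases | gzip > /var/backups/all-dbs.sql.gz
    tar -czf /var/backups/etc.tar.gz /etc
    aws s3 sync /var/backups/ s3://my-backup-bucket/backups/
    aws s3 sync /var/www/ s3://my-backup-bucket/www/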
This is the same question I had [1], just asked as "how can I outsource this cheaply" instead of "how can I do this cheaply". I also use Docker, so I would only need a hosted database.
[1] https://news.ycombinator.com/item?id=12659437
I see lots of great suggestions for backup hosts and methods, but I don't see anybody addressing encrypting said backups. I'm uncomfortable with rsync.net / Backblaze / etc having access to my data. What are some good ways to encrypt these multiple-GB backups before uploading them to a third-party backup server?
Important question. If it's a Wordpress site, then all you need to back up is the theme and your MySQL db. If it's a static site then just use rsync or sync to a git service.
If you use Docker to deploy, see cloudron.io. You can install custom apps, and it takes care of encrypted backups to S3. It automates Let's Encrypt as well.
I run a Python/shell program to rsync and collect what I want backed up into one folder. I then compress it, GPG-encrypt it, and send it to my backup server.
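A minimal sketch of that flow (recipient key, remote host, and paths are placeholders); because encryption happens with a public key before upload, this also addresses the question above about not letting the backup provider read your data:

    #!/bin/sh
    set -eu
    STAGE=/var/backups/staging
    DATE=$(date +%F)

    # Collect everything into one folder.
    mkdir -p "$STAGE"
    rsync -a /var/www/ "$STAGE/www/"
    mysqldump --all-databases > "$STAGE/db.sql"

    # Compress and encrypt to a public key whose private half never touches
    # this server, then ship the result off-site.
    tar -czf - -C "$STAGE" . | \
        gpg --encrypt --recipient backups@example.com \
            --output "/tmp/backup-$DATE.tar.gz.gpg"
    scp "/tmp/backup-$DATE.tar.gz.gpg" backup@backuphost.example.com:backups/
    rm "/tmp/backup-$DATE.tar.gz.gpg"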
All the databases and other data are backed up to s3. For mysql, we use the python mysql-to-s3 backup scripts.
But the machines themselves are "backed up" by virtue of being able to be rebuilt with saltstack. We verify through nightly builds that we can bring a fresh instance up, with the latest dataset restored from s3, from scratch.
This makes it simple for us to switch providers, and we can run our "production" instances locally on virtual machines running the exact same version of CentOS or FreeBSD we use in production.
I don't know what the OP is running OS-wise, but if it's any modern Unix variant, it uses ZFS. And a simple ZFS send/receive would be perfect. There are tons of scripts for that and for replication.
If you're not using a modern Unix variant with ZFS... well there isn't a good reason why you would be.
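A bare-bones sketch of snapshot-and-send replication (pool/dataset names and the remote host are placeholders):

    #!/bin/sh
    # Snapshot the dataset and replicate it incrementally to another box.
    set -eu
    DATASET=tank/www
    TODAY=$(date +%F)
    YESTERDAY=$(date -d yesterday +%F)   # GNU date; use `date -v-1d +%F` on FreeBSD

    zfs snapshot "$DATASET@$TODAY"

    # Incremental send based on yesterday's snapshot; the very first run
    # needs a full send (no -i) instead.
    zfs send -i "$DATASET@$YESTERDAY" "$DATASET@$TODAY" | \
        ssh backup@backuphost.example.com zfs receive -F backup/www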
We have cheap, reliable storage servers at https://mnx.io/pricing -- $15/TB. Couple our storage server with R1Soft CDP (r1soft.com), Attic, rsync, or innobackupex, etc.
You can also use https://r1softstorage.com/ and receive storage + R1soft license (block based incremental backups) -- or just purchase the $5/month license from them and use storage where you want.
I don't delete and/or gzip my oldest uploads though.
Github takes care of code and config.
AWS S3 takes care of uploaded static files.
But Tarsnap takes care of my database backups.
The only thing to be aware of is that restore times can be very slow.
For the database, I use a second VPS running as a read-only slave. A script runs daily to create database backups on that VPS.
Stupid simple and stupid cheap. Install, select directories you want backed up, set it and forget it.
All for $7.00 a month.
Collect your files, rsync/scp/sftp them over.
Read only snapshots on the rsync.net side means even an attacker can't just delete all your previous backups.
I can't praise restic[1] enough. It's fast, secure, easy to use and set up (it's written in Go), and the developer(s) are awesome!
[1] https://restic.github.io/
Use pg_dump and tar then just s3cp