top | item 20911356

pmikesell|6 years ago

This is common. Most places use their revision control system as backup for their source code and take backups of their database. This makes sense because the source code holds no state. Why would you back up the server? What would you do when you had 100 servers? You’d take backups of the running disk state of all of those? To what end?

ajross|6 years ago

Right, it's not a problem with technique. There are lots of storage management paradigms whose backup and restore strategies don't involve filesystem copies of the running servers.

The problem here was that there seemed to be no backup and restore strategy at all.

That said, though: if you're a tiny group like this and don't have the time to invest in a gold-plated storage design, "Just Back Up All The Servers" is a pretty reasonable choice to make.

matwood|6 years ago

If the servers are still treated as pets, then they need to be backed up. Outside of code there would be configuration.

Keep in mind that once the servers are treated as cattle, individual server backups are typically no longer needed. The developer in this case would likely not have ended up in this situation then.

onion2k|6 years ago

> Why would you back up the server?

In case of things like this. Also it's usually quicker to recover a server using an image than it would be to provision from source in the case of, say, a hardware failure.

> What would you do when you had 100 servers?

Something more scalable. There's no reason why you should pick an approach to solving a problem and stick with it forever. If the situation changes you do something more appropriate. That doesn't mean the small scale approach is wrong when the scale is small.

ianmobbs|6 years ago

I mean, the better way of doing this is just to keep your application code stateless and make frequent backups of your data stores. If the developers had known they had server code that wasn't checked into SVN, your solution would make more sense, but they just had no idea. Even in a corporate environment, if I need to set up a new box for a service, I don't back up an image from another box and deploy it to the new one; I just pull my Docker image and run that.

AmericanChopper|6 years ago

If you have a service you haven’t built in 5 years, then I’d say you have a lot more important problems than server backups.

cwyers|6 years ago

Well, I think we have an answer to "what end" now.

Yes, if you have a deployment strategy where you can reproducibly deploy from your source code repository, then you don't need to back up the servers. But until you've tested that that works, and I mean really tested it, you need server backups.
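"Really testing it" can be as simple as restoring into a scratch location and asserting that everything you depend on actually materialized. A minimal sketch of such a smoke check (the file names and scratch-deploy setup below are illustrative, not from the thread):

```python
import tempfile
from pathlib import Path

def verify_restore(root, required_paths):
    """Return the subset of required_paths missing under root.

    required_paths is whatever your deploy is supposed to produce:
    binaries, config files, migration markers, and so on.
    """
    root = Path(root)
    return [p for p in required_paths if not (root / p).exists()]

# Fake a scratch deploy in a temp dir to show the check in action.
scratch = Path(tempfile.mkdtemp())
(scratch / "app").mkdir()
(scratch / "app" / "server.py").write_text("# deployed code\n")

missing = verify_restore(scratch, ["app/server.py", "etc/app.conf"])
# etc/app.conf was never "deployed", so it shows up as missing.
```

In a real pipeline you'd run the actual deploy script into the scratch directory first and fail the build whenever the missing list is non-empty.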

undoware|6 years ago

Precisely to this end :)

In that role I would typically back up the deploy directory (/opt/foo), as well as /etc/ and parts of /var/, every day to Glacier.

Of course, this was all pre-Docker and pre-k8s, but it's 2013 we're talking about.
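A daily routine like that can be sketched in a few lines: tar up the named directories into a dated archive, then ship it off-box. The archiving part below is runnable; the Glacier upload is shown commented out, and the vault name and paths are assumptions for illustration.

```python
import datetime
import tarfile
import tempfile
from pathlib import Path

def make_backup_archive(paths, dest_dir):
    """Tar the given directories into a dated .tar.gz archive.

    Mirrors a daily /opt/foo + /etc + parts-of-/var backup; shipping
    the result to Glacier could use boto3, e.g.:
    # boto3.client("glacier").upload_archive(
    #     vaultName="daily-backups", body=archive.read_bytes())
    """
    stamp = datetime.date.today().isoformat()
    archive = Path(dest_dir) / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for p in paths:
            tar.add(p, arcname=Path(p).name)
    return archive

# Demo against a temp dir rather than the real /etc.
scratch = Path(tempfile.mkdtemp())
(scratch / "conf").mkdir()
(scratch / "conf" / "app.conf").write_text("key=value\n")
archive = make_backup_archive([scratch / "conf"], scratch)
```

Run from cron once a day and you have roughly the scheme described above, minus the restore testing discussed elsewhere in the thread.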

gustavorg|6 years ago

> Why would you back up the server?

Because rule number one of the programmers' club is: back up everything, everywhere, all the time; if you don't remember, do it again; and if you're sure you have enough backups, do it again anyway. I even back up my backups every day (on pendrives, on CDs, on ancient scrolls, etc.)

wetpaste|6 years ago

In the age of treating servers (or containers) like cattle instead of pets, the "back up everything" mantra has fallen by the wayside. To get away with selective backups you have to know exactly where long-term state is stored, and you need the infrastructure in place to re-provision everything and restore snapshots. It's not something you can tack on later. Iterate, test, integrate, document, audit, review. It ends up being much more complicated than periodic wholesale snapshots of a server.

There's a certain elegance and assurance you get from this that has been lost with the times, akin to how monolithic server software with all functionality natively available in the code has given way to microservices. Now you have message queues, k/v stores, caches, and search engines as microservices that are tacked on to the core services, rarely fully understood by the engineering team, and containing more functionality than the codebase ever really utilizes. That ends up being more complicated to manage in a lot of ways.

I think the emergence of microservices is one of the driving forces behind selective state backups, because you can never back up the entire state at once; everything is too spread out. You're not going to back up the running state of the k8s node, or whatever.
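The prerequisite described here, knowing exactly where long-term state is stored, is sometimes kept as an explicit inventory. A hypothetical sketch (component names, paths, and capture strategies are all illustrative, not from the thread):

```python
# Map each component to where its durable state lives and how it is
# captured. Anything marked "none" is rebuildable and skipped.
STATE_INVENTORY = {
    "postgres": {"state": "/var/lib/postgresql", "capture": "pg_dump nightly"},
    "redis":    {"state": "/var/lib/redis",      "capture": "none - cache, rebuildable"},
    "uploads":  {"state": "/srv/app/uploads",    "capture": "rsync to object storage"},
    "search":   {"state": "/var/lib/search",     "capture": "snapshot weekly"},
}

def must_back_up(inventory):
    """Names of components whose state cannot simply be rebuilt."""
    return sorted(name for name, info in inventory.items()
                  if not info["capture"].startswith("none"))
```

Auditing and reviewing a table like this is exactly the ongoing work the comment argues makes selective backups more complicated than wholesale snapshots.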