No server NEEDS to go down for maintenance. You can avoid downtime for anything, at any scale: DB changes, server updates, etc.
The problem is that a 0-downtime system, at a certain scale, is very costly to create and maintain. You need redundancy everywhere, load balancing everywhere, data replication, synchronization. Those are hard problems.
Basically you need to arrive at the level of being able to release the Netflix Chaos Monkey in prod to be sure it works even if part of your system is busy with the update, or just out of sync. This is certainly doable. It's also very expensive, and requires a lot of time and many experts to work on the problem.
Putting a site in maintenance mode can be a middle ground you choose, because you don't want to invest that much just to avoid taking down your site for a short time once in a while.
Economics.
Of course, if you do choose the road of zero downtime, your site will gain more than just availability, it will gain reliability as well, since those best practices serve both purposes.
This comment makes a lot of assumptions, the primary one being that you are able to build your infrastructure completely from scratch, which just isn't the case for the majority of companies or websites out there.
Reasons I have been "down for maintenance" in the past:
- Moving from AWS to our own datacenter.
- Payment processor issues. We weren't making money with the payment processor down... “down for maintenance” meant lower customer service costs.
- Because the CEO told me to. I shit you not. Be wary of working for someone who has a name that sounds like it belongs to a Bond villain.
- Because sometimes you NEED all the resources to get something done quickly.
- In the days before AWS and "cloud computing" you only had hardware on hand. It is hard to get your boss to budget for a traffic spike of one hour that is greater than the sum of the previous 6 months of traffic.
- Because non-technical people have access to technology: It was just some javascript -or- I didn't think I needed to tell you before I emailed 5 million people with an offer for free stuff -or- why is everything on sale for 25% off ....
- Because load and time and complex systems sometimes do funny things together; "maintenance" means we're finally getting enough data to reproduce it.
- The very beginning of a DDoS attack (only for some industries & sites)
Always avoidable if that's a priority - schema changes can be done online in MySQL. Patches can be done on subsets of servers. Erlang even supports hot code reloading, so that even if you had a single point of failure you could upgrade without losing file descriptors or in-memory state. It is a lot simpler if you have the choice though, since you don't have to have multiple versions online at the same time. "Divisions of Ericsson that do [hot code reloading] spend as much time testing them as they do testing their applications themselves." [1]

[1]: http://learnyousomeerlang.com/relups
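The "patches on subsets of servers" approach can be sketched as a rolling-update loop. This is illustrative Python only; `patch` and `health_check` are stand-ins for whatever your tooling actually does:

```python
def rolling_update(servers, patch, health_check, batch_size=1):
    """Patch servers a few at a time so the rest keep serving traffic."""
    for i in range(0, len(servers), batch_size):
        batch = servers[i:i + batch_size]
        for s in batch:
            patch(s)                 # take one subset out and apply the patch
        for s in batch:
            if not health_check(s):  # verify before touching the next subset
                raise RuntimeError(f"{s} unhealthy after patch; aborting rollout")
    return len(servers)

# Toy run: "patching" just records the server name.
patched = []
count = rolling_update(["web1", "web2", "web3"],
                       patch=patched.append,
                       health_check=lambda s: True)
```

The health check between batches is the whole point: a bad patch takes out one subset, not the fleet.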
1. Companies don't know how to do the engineering for maximum uptime, like you describe. It's way more complicated than the usual CRUD operations.
2. Companies know how to but they decide not to invest this time (we often traded one hour of downtime against 2-3 man-days for preparing online schema changes with nasty and inconsistent backfilling in the early days).
And 3. Don't forget disaster recovery. I've seen some of the smartest companies go down for hours due to a DB misconfiguration, or a rack PSU faulting with only one side of the servers connected, even with a reasonably highly available setup. Stuff like this happens - and then you better have a proper 503 Maintenance page up and running to prevent Google from delisting your site. In this case though, "maintenance" is rather a euphemism :)
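A proper maintenance response is easy to get right: status 503 plus a Retry-After header tells crawlers the outage is temporary rather than the page being gone. A minimal sketch with Python's standard library (the port and the retry interval are arbitrary):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    """Answers every GET with 503 + Retry-After so crawlers treat the
    outage as transient, unlike a 200 "we're down" page or a 404."""

    def do_GET(self):
        self.send_response(503)                  # Service Unavailable
        self.send_header("Retry-After", "3600")  # suggest retrying in an hour
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Down for maintenance, back soon.</h1>")

    def log_message(self, *args):
        pass  # keep the sketch quiet

def make_maintenance_server(port=8080):
    return HTTPServer(("", port), MaintenanceHandler)

# To actually serve: make_maintenance_server().serve_forever()
```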
From a purely abstract point of view it's probably avoidable, but I'd conjecture many teams don't have the collective knowledge to effectively pull it off. Even if you plan things out carefully, something usually goes wrong :(. It only takes a small oversight to have it come crashing down. I think it's better to let your customers know ahead of time that you'll be performing maintenance, assume that something will go down, but still try to avoid it anyway.
MySQL only added support for online schema migrations with 5.6; prior to that you had to use a tool like pt-online-schema-change. I've heard claims (which I haven't verified, so it's entirely possible they are incorrect) of performance issues when performing migrations, which effectively bring the database down. Doesn't RDS sometimes require downtime for maintenance and upgrades? Is there any safe way around that?
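For what it's worth, 5.6+ lets you request the online behaviour explicitly and fail fast if it can't be honoured. A sketch of the statement (table and column names are made up), held as a string here rather than run against a live server:

```python
# Hypothetical table/column names, shown only to illustrate the 5.6+ syntax.
# ALGORITHM=INPLACE asks the engine to avoid a full table copy;
# LOCK=NONE makes MySQL refuse the ALTER outright if it cannot keep the
# table writable, rather than silently blocking writes mid-migration.
ONLINE_ALTER = (
    "ALTER TABLE users "
    "ADD COLUMN last_login DATETIME NULL, "
    "ALGORITHM=INPLACE, LOCK=NONE"
)
```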
I don't recall sites like Google or Facebook ever being down for maintenance. Are there any articles that discuss how they manage application layer and database layer migrations?
A good start would be all of http://highscalability.com
- but it more or less boils down to being able to roll back, and that rules out hard schema changes. So the proper and hard way is always a variant of: 1) Create another column, 2) Write to both columns at the same time, 3) Create code to run on the new column, 4) Enable a feature switch to run everything on the new column, 5) Remove the code dealing with the old column, 6) Remove the old column.
If that looks complicated, it is - and you better only start with these things if your site earns more money per minute than you need to pay engineers and project managers to pull that off.
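The migration steps above can be sketched in application code. Everything here (the `User` record, the feature switch) is a made-up illustration; the point is only that both columns stay consistent until the switch flips and the old path is retired:

```python
class User:
    """Toy record mid-migration: 'fullname' is being replaced by 'display_name'."""
    def __init__(self):
        self.fullname = None       # old column, still present through steps 1-5
        self.display_name = None   # new column added in step 1

READ_FROM_NEW_COLUMN = False       # feature switch, flipped in step 4

def set_name(user, name):
    # Step 2: write both columns, so either code path sees consistent data
    # and the deploy can be rolled back at any point.
    user.fullname = name
    user.display_name = name

def get_name(user):
    # Steps 3-4: new read path behind the switch; old path kept for rollback.
    return user.display_name if READ_FROM_NEW_COLUMN else user.fullname
```

Once everything runs on the new column, steps 5 and 6 delete the old path and the old column, and the dual write disappears.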
I actually have seen Google down once, I might even have a screenshot. An immediate F5 (possibly after the screenshot) showed them back up. I'm not sure if it was my local Google office down (Israel), but the message was in English.
You don't see 'Maintenance' on systems of companies which have been doing this for a long time. You might see it at 'normal' companies, smaller ones that used the 'wrong' database and had to migrate it.
If you start with one database and 'forget', or just never think, to set up a master/slave/slave combination, you have to fix that once.
When you make a mistake, you have to fix it once.
Also, today you can maintain quite a big site with a very small number of people. The chance that one of them didn't think about all the necessary elements of an always-online system is not far-fetched.
Common causes are things like software upgrades and database changes. There's probably always a way to avoid it but going down for maintenance might be less effort and cheaper overall depending on the site. For example, if you can do it during a known time of low traffic or when you know users will just come back later. I've noticed several UK bank websites go down for maintenance during the night.
For some types of websites, zero-downtime upgrades and maintenance are costly.
Online banking is a good example. I have accounts with several banks, and all of them periodically "go down for maintenance". I assume that's because the talent and infrastructure needed to do those tasks with zero downtime are more expensive than whatever customer service hit they take for planned outages.
Because it is much easier than performing complicated modifications while the site is running.
For example, at Google "down for maintenance" is not on the table. That can in some cases lead to lots of extra work or time, e.g. dual writes for a period of time followed by mapreduces to fix the remaining part.
My internet bank is often down for maintenance on Sunday nights. I assume it is because they have a very old system.
- The site occasionally benefits from clean-up tasks, which can be long-running and would result in an irritating experience. While slow read operations may be possible in theory, it is better to tell the users to come back later than to erode their confidence.
- Sometimes the database of a board can get corrupted. The repair operations (sort of a disk fsck for the board) need exclusive access to the database.
- software upgrades
It's like changing the engine of a driving car.

Not every aircraft has all the expertise, tools, and spares on board at all times to be able to service or replace their engines in flight.
If the system has not been designed from the ground up for that type of service, then the on-board expertise would also have to be gifted at developing workarounds on-the-spot that reliably work the first time.
I really don't think there is any excuse for it in this day and age, especially when building sites from scratch. There are so many different techniques and technologies for doing zero-downtime deploys, not to mention the numerous PaaS offerings that will do it out of the box if you don't know how.
There's still cost to it. It basically boils down to: do you lose more money during a manual maintenance period, or by hiring extra people to do all changes in zero-downtime style. (Or doing slower development with the existing team) The technology for transparent changes has been available for decades, although it's true - it's much easier to use today. But it still needs extra work. And someone has to pay for that work in the end.
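That trade-off can be made embarrassingly concrete; the figures below are invented purely to illustrate the break-even:

```python
def downtime_beats_zero_downtime(revenue_per_hour, maintenance_hours,
                                 engineer_day_rate, extra_engineer_days):
    """True if a maintenance window costs less than engineering it away."""
    downtime_cost = revenue_per_hour * maintenance_hours
    engineering_cost = engineer_day_rate * extra_engineer_days
    return downtime_cost < engineering_cost

# One hour offline at $500/hour of revenue vs. three extra
# engineer-days at $800/day to do the change with zero downtime:
cheaper_to_go_down = downtime_beats_zero_downtime(500, 1, 800, 3)
```

For a small site the maintenance window wins easily; plug in a large site's revenue per hour and the answer flips, which is exactly why the big players invest in transparent changes.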
Mistakes were made during the deploy of the new website to production. A failed website deploy is a bit more noticeable to the public than the failed deployment of an internal only system.
Which tangentially is why the USAF's E-4B airborne command posts have to land after about three days. Fuel isn't a problem, but they don't have a way to replenish engine oil in flight.
Haha. I liked one of the answers further down in the thread - it points out that there is a cost associated with making your service smoothly upgradable, probably in both engineering time and in hardware. It's possible a lot of these companies are balancing this against the cost of just being offline for an hour or so, and making the rational choice.
The most upvoted comment forgot how to spell (or doesn't know) petabyte.

Or it was a typo, or they were thinking of something else at the time, or they were typing their response on their mobile, or any number of other reasons besides "forgot how to spell" or "doesn't know how to spell".
I swear criticising someone's spelling is the last bastion in an argument/debate/discussion. When you haven't got anything else, attack their spelling.
I'm not saying that you're getting into an argument or debate but come on, you know what the guy meant.
I posit that the offender may have rather large thumbs. Because if you look very very carefully at the standard QWERTY keyboard, you will notice that the letter 't' and the letter 'r' are in fact located precisely adjacent to each other.
In fact, from the sentence structure, I know that he (definitely male, aged in mid thirties, right handed, wears a hand knitted cardigan, keeps a cucumber sandwich in his briefcase) was typing this on an iPhone 4S (still running iOS 8) on the 5:30pm train from Campbeltown and reached the middle of the word just as the train entered the Chuddingsworth tunnel. There is a slight offset in the tracks there that causes the train carriages to lurch to the left a little, thus causing his thumb to slide across just that little bit after he hit the 't' in 'petabyte'. If you need any more info, please do not hesitate to contact me: [email protected]