Amazon makes it pretty clear that Availability Zones within the same Region can fail simultaneously. In fact, according to the SLA, a Region being down is defined as multiple AZs within that Region being down. And since that 99.95% promise applies to Regions rather than to individual AZs, multiple AZs within the same Region being down will be fairly common.
Edit: One more point. In the SLA, you'll find the following: “Region Unavailable” and “Region Unavailability” means that more than one Availability Zone in which you are running an instance, within the same Region, is “Unavailable” to you. The implication is that if you do not spread across multiple Availability Zones, you should expect less than 99.95% uptime. So spreading across AZs should still reduce your downtime, just not beyond that 99.95%.
http://aws.amazon.com/ec2-sla/
I have to disagree with you. The SLA is just a legal agreement that serves mainly to limit AWS's liability. Here's what the main EC2 page says:
"Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from failure of a single location."
http://aws.amazon.com/ec2/
That's the spec everyone was building to, but it isn't what happened. Of course you're right that multiple AZs can fail at the same time, but I read the above as saying that they should fail independently (with any simultaneous failures being mere coincidence) until the entire Region fails.
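If the independence claim held, the math would be very favorable. A back-of-the-envelope sketch (the 0.05% per-AZ unavailability figure is assumed for illustration; Amazon does not publish per-AZ numbers):

```python
# If two Availability Zones fail independently, the probability of losing
# both at once is the product of their individual failure probabilities.
az_unavailability = 1 - 0.9995        # assumed 0.05% per-AZ downtime
both_down = az_unavailability ** 2    # independence: probabilities multiply
combined_availability = 1 - both_down

print(f"P(both AZs down)      = {both_down:.2e}")
print(f"combined availability = {combined_availability:.6%}")
```

This morning's incident is exactly why the assumption matters: a correlated failure wipes out those extra nines entirely.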
The SLA uses great legal weasel words: "AWS will use commercially reasonable efforts to make Amazon EC2 available"
So anything that is beyond commercially reasonable is outside the SLA.
In truth, as with all businesses, the reputation for uptime weighs more heavily than the written contract. It will be interesting to see how the AWS people attempt to make amends.
Amazon has probably correctly designed the core infrastructure so that these things shouldn't happen if you're in multiple Availability Zones; I'm guessing that means separate power sources, backup generators, network hookups, etc. for the different Availability Zones. However, there's also the issue of Amazon's management software. In this case, it seems that a network issue triggered a huge re-mirroring of their EBS storage, which would involve transferring all that stored data over the network, a lot more EBS hosts coming online, and a stampede problem.
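The stampede (thundering herd) failure mode described above is what happens when many recovering nodes retry or re-mirror at the same instant. A standard client-side mitigation, sketched here in Python and not meant to describe Amazon's internal mechanism, is exponential backoff with jitter:

```python
import random

def backoff_with_jitter(attempt, base=0.5, cap=60.0):
    """Exponential backoff with full jitter: each retry waits a random
    amount up to an exponentially growing (but capped) ceiling, so a
    crowd of recovering clients spreads its requests out over time."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

# Delays for the first few retry attempts of one client
for attempt in range(5):
    ceiling = min(60.0, 0.5 * 2 ** attempt)
    print(f"attempt {attempt}: sleep up to {ceiling:.1f}s, "
          f"chose {backoff_with_jitter(attempt):.2f}s")
```

Without the jitter, every client computes the same deadline and the herd simply stampedes again on a schedule.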
I've argued vigorously (in previous comments) for using cloud servers like EC2 over dedicated hosting like SoftLayer. I'm less sure about that now. The issue is that EC2 is still beholden to the traditional points of failure (power, cooling, network issues), but it adds a new one: Amazon's management software. I don't want to sound too down on Amazon's ability to make good software, but their status site shows that EBS and EC2 also had issues on March 17th for about 2.5 hours each (at different times), and reddit has also just been having trouble on EC2/EBS. I don't want this to sound like "Amazon is unreliable", but it does seem more hiccup-y.
The question I'm left with is what one gains from the management software Amazon has introduced. Well, one can launch a new box in minutes rather than a couple of hours; one can dynamically expand a storage volume rather than dealing with the size of physical disks; one can template a server so that you don't have to set it up from scratch when you want a new one. But if you're a site with 5 boxes, does that help you much? SoftLayer's pricing is competitive with EC2's 1-year reserved instances, and SoftLayer throws in several TB of bandwidth and persistent storage. Even if you have to over-buy on storage because you can't dynamically expand volumes, it's still competitively priced. And if you're only running 5 boxes, server templates aren't much help, and virtually none if you're running, say, 3 app servers and a database replicated across two boxes.
I'm still a huge fan of S3. Building a replicated storage system is a pain until you need to store huge volumes of assets. Likewise, if you need 50 boxes for 24 hours at a time, EC2 is awesome. I'm less smitten with it for general purpose web app hosting where the fancy footwork done to make it possible to launch 100 boxes for a short time doesn't really help you if you're looking to just have 5 instances keep running all the time.
Maybe it's just bad timing that I suggested we look at Amazon's new live streaming and a day later EC2 is suffering a half-day outage.
I'm responsible for a relatively large site ( http://www.foreignpolicy.com ) that was down for 12+ hours over this failure today.
One fallacy that I think many people commit in the whole cloud debate is the idea that a given cloud provider is somehow more or less failure-prone than a given dedicated server host.
We have assets on Amazon, Slicehost, and Linode. Sometimes these go down; whether it's our fault, the software's fault, the hardware's fault, or a construction crew hitting a fiber drop, things happen. If you're not backed up, in a fully tested way, on not just another server or availability zone but a wholly different hosting infrastructure (preferably in a different time zone), then you're not really backed up. Being on a host like Amazon, or even on a fully managed Cadillac Rackspace plan, doesn't remove the need for good business continuity planning (BCP).
What these cloud services let you do, in theory, is have that backup infrastructure ready to go on relatively short notice _without_ keeping it running all the time. We can't reasonably afford to replicate all of our servers and hot data to the Western Region or the Rackspace cloud 24/7. We can, however, afford to set up the infrastructure, spin it up on the fly within an hour with slightly stale data once a month to test it, and again when things actually break. Requisitioning that kind of hardware and then dumping it, for only a few tens of dollars a month, is difficult if not impossible outside the cloud.
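The economics described above can be made concrete with toy numbers (every rate below is a hypothetical placeholder, not an actual AWS or Rackspace price):

```python
# Toy comparison of a 24/7 hot standby vs. spin-up-on-demand DR.
HOURS_PER_MONTH = 730

standby_hourly = 0.50     # assumed hourly cost of the backup fleet
hot_standby = standby_hourly * HOURS_PER_MONTH   # always-on replica

drill_hours = 4           # one monthly restore/failover test
incident_hours = 12       # pessimistic outage coverage per month
on_demand_dr = standby_hourly * (drill_hours + incident_hours)

print(f"hot standby:  ${hot_standby:.2f}/month")
print(f"on-demand DR: ${on_demand_dr:.2f}/month")
```

Even with pessimistic assumptions, paying only for drills and actual incidents is more than an order of magnitude cheaper than a hot standby, which is the real disaster-recovery advantage on-demand capacity buys you.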
The big question is not 'Is the cloud more reliable?' but 'Do I need what only the cloud can offer?'. If your current infrastructure can handle getting drudged or reddited fine, and you're only on a few servers, you're probably better off just paying to keep a hot spare up at SoftLayer.
On the other hand, if you (1) have occasional traffic bursts that you don't want to pay to handle on most days and (2) can accept a few minutes of downtime, then the solutions offered by cloud hosts blow the competition out of the water. I guess what you're gaining is not the management software; it's the ability to turn capacity off and on quickly when something goes wrong (or, in the case of a redditing, right).
Part of figuring out the right hosting solution involves asking the right questions.
(And for reference, we were all ready to go with a backup... and then we learned that our hosting company was storing our nightlies on S3 and couldn't retrieve them, and that our offsite DB solution was having an unrelated issue.) Had we run proper tests (I'm brand new to the job), we would've been ready for this one. I also worry big time about DNS and load balancing being a big single point of failure, but that's a plan for another day.
What about hardware failure? On AWS you just commission a new instance, and your downtime is minutes rather than hours; plus, you don't have to keep extra hardware on hand just to avoid days of downtime. There are also smaller, more localized issues, like network switch failures, that you probably never even notice on Amazon but that might be more likely to bite you on a dedicated host.
If an AWS data center goes down it gets a lot of press, but does it actually outweigh the sum of all dedicated/shared/vps hosting issues on the equivalent volume?
For entities that have the CapEx money to build out their own hardware to handle expected growth, and do it a little cheaper due to volume, does it still make sense to engage in the cloud game?
Or is it a better option when you are starting up, and want to be able to quickly throw hardware at a problem, should the need arise?
Apologies if this sounds like a pretty ignorant question, but I haven't implemented cloud-based services before. It seems like there is a hardware-cost vs. people-cost tradeoff due to the newer nature of AWS and the like, and that needs to be factored into development and maintenance time.
Saving people time by relying on a known quantity, like arrays of Linux servers with failure tolerance, seems preferable.
I agree with your entire comment with the exception of one sentence. Disagree as strongly as I can here:
I've written vigorously (in previous comments) for using cloud servers like EC2 over dedicated hosting like SoftLayer. I'm less sure about that now.
An issue at Amazon, or Rackspace, or Linode, or Slicehost need not imply failure at other providers, and cloud as an alternative to dedicated is still as viable as ever. Amazon tanking does not mean everybody needs to run back to dedicated, and my pet peeve is that when one provider takes a crap, everyone paints the cloud as toxic.
When ThePlanet's facility exploded a few years ago I did not hear lamenting that dedicated hosting was doomed. When an airliner crashes we do not say air travel is doomed. I do not understand why people rush to paint cloud as a toxic choice in light of a failure of a certain player. Admittedly a big one but there are others too and you can move.
Providers like Linode are almost exactly equivalent to dedicated hosting: they just administer the hardware for you and pay the remote-hands bills. Same for Slicehost and Rackspace. It is simply far easier to wipe your instance and start over, and for all intents and purposes it acts like a dedicated box; you need to administer it like one, too. Most failures of "the cloud" really come down to designing your application in violation of the fallacies linked elsewhere.
What I find really interesting is the implication an outage like this could have for Amazon's business model. Specifically, what I would like to see is transparent, complete application duplication to other regions and availability zones for certain customer configurations of particular sizes, etc.
The application would be transparently mirrored to another region, and if an event such as this occurs, the mirror would be spun up.
The customer would choose the desired snapshot frequency and would pay accordingly.
Certain sites, with less dynamic content, would be mirrored and continue to operate as normal with minimal impact or cost.
Other sites, where content creation by users is fairly real-time, would pose more complex and costly mirroring situations (à la reddit).
But the option should be there.
Also, remember to think of the evolution of Amazon's services, say, 24 months from now, when this type of offering will likely become more of a reality.
As all too many others have noted, it is best not to be 100% reliant on Amazon for your entire service, but at this point in time it's a little hard to spread the load between offerings that compete with AWS/EC2.
A quick tl;dr: Availability Zones within a Region are supposed to fail independently (until the entire Region fails catastrophically). Any sites designed to that 'contract' were broken by this morning's incident, because multiple AZs failed simultaneously.
I've seen a lot of misinformation about this, with people suggesting that the sites (reddit/foursquare/heroku/quora) are to blame. I believe that the sites were designed to AWS's contract/specs, and AWS broke that contract.
The contract to which you refer is entirely inferred, is it not? Amazon claims the AZs should be independent[1]:
Each availability zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of failures like generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone.
Yet what Amazon guarantees, by way of their SLA, is only 99.95% for a region[2,3]:
The Amazon EC2 SLA guarantees 99.95% availability of the service within a Region over a trailing 365 day period.
Every time someone bitched at me for not having a "cloud-based strategy", I kept asking how many 9s of reliability they thought the cloud would deliver.
We're down to 3 nines so far. A few more hours to 2 nines.
"The cloud", as I understand it, is the ubiquitous, cheap and near-instantaneous availability of computing power; as in minutes instead of hours or days for new servers.
"The cloud" is not (and never has been) a cure-all for reliability issues. It's just as easy to have single points of failure as any other hosting strategy, and is just as easy (or difficult) to plan for. Companies that have planned for high availability with multi-region or multi-provider strategies will continue to be available, regardless of whether or not they are using "the cloud".
If your business is amongst the chosen few that can justify the cost to guarantee any number of nines then your availability strategy involves multiple vendors anyways.
The cloud is not for all businesses.
Whether Amazon can be part of an availability strategy has nothing to do with the number of nines.
We have our business website hosted in the Amazon cloud. Our primary servers are actually located in their affected data center, but we also have a great data team behind it, so (to the outside observer) we aren't being affected at all by the outage.
Cloud is vulnerable? Of course it is. So plan accordingly.
The same steps you would take in your own datacenters to ensure high availability would work in the cloud to ensure the same availability so I'm not sure what your point is. Measuring the availability of a few zones from one provider and broadly labeling the cloud as unreliable is a flawed argument. Netflix, for example, is entirely on AWS and is still running well today.
Hmm, "bitched at you" has the ring of feeling persecuted because you didn't jump up all dreamy-eyed at the latest buzzword trend. Occupational hazard, I suspect.
If someone says to you, "We need to improve the efficiency of our IT by adopting a cloud-based strategy," then rather than asking the 'meta' question of what sort of reliability guarantees they expect, have an actual and honest talk about what IT costs and why. Perhaps they will relax their uptime requirement, which will let you reduce costs, or they will come to understand what the costs are for the level of uptime you're providing. Annual reviews of those questions (how much downtime can we tolerate? how much are we paying for our current availability?) should be de rigueur.
"The cloud is not for all businesses."
Of course it isn't. However, it can (and does) run some businesses more efficiently. And while Quora might be down for a day while the folks at Amazon scramble to fix whatever it is they did that brought it down, their "business" won't change all that much. There will be no mass exodus of users because they couldn't get their questions answered for one day. Now, take someone's email away for a day, and that is real money; likewise if you take away their ability to connect to the Internet, period.
For something like icanhascheezburger, even two 9s is probably good enough. That would allow being offline for up to 3.65 days a year.
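For reference, the downtime budget behind each count of nines is simple arithmetic:

```python
# Downtime budget implied by a given availability level over one year.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability):
    """Hours of downtime per year permitted by an availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

for availability in (0.99, 0.999, 0.9995, 0.9999):
    h = downtime_hours(availability)
    print(f"{availability:.2%} -> {h:6.2f} hours/year ({h / 24:.2f} days)")
```

So the SLA's 99.95% allows about 4.4 hours a year, while two nines allows 3.65 days.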
These outages are very rough. Clearly a lot of the Internet is building out on AWS, and not using multiple zones correctly in the first place. But AWS can have multi-zone problems too as we see here. Nobody is perfect.
But what people forget is: AWS has a world class team of engineers first fixing the problem, and second making sure it will never happen again. Same with Heroku, EngineYard, etc.
Host stuff on dedicated boxes racked up somewhere and you will not go down with everyone else. But my dedicated boxes at ServerBeach go down for the same reasons: hard drive failures, power outages, hurricanes, etc. And I don't have anyone to help me bring them back up, nor the interest or capacity to build out redundant services myself.
My Heroku apps are down, but I can rest easy knowing that they will bring them back up without any action on my part.
The cloud might not be perfect but the baseline is already very good and should only get better. All without you changing your business applications. Economy of scale is what the cloud is about.
The cloud might not be perfect but the baseline is already very good and should only get better.
Do we have reason to believe that it will only get better? I think it's possible the complexity of the systems we are building and the traffic they encounter will outpace our ability to manage them. Not saying I think it's the most likely outcome, but I don't feel as confident as you.
I'd say your choice between Quora's engineers being incompetent or AWS being dishonest/incompetent is a completely false dichotomy. Anyone who has been around AWS (or basically any technology) will agree that the things that can really hurt you are not always the things you considered in your design. I just can't believe that many of the people who grok the cloud were running production sites under the assumption that there was no cross-AZ risk. They use the same API endpoints, auth, etc., so it's obvious they're integrated at some level.
Perhaps for Quora and the like, engineering for the amount of availability needed to withstand this kind of event was simply not cost effective, but I seriously doubt the possibility didn't occur to them. It's not even obvious to me that there are many people who did follow the contract you reference who had serious downtime. All of the cases I've read about so far have been architectures that were not robust to a single AZ failure.
As for multi-AZ RDS, it's synchronous MySQL replication on what smells like standard EC2 instances, probably backed by EBS. Our multi-AZ failover actually worked fine this morning, but I am curious how typical that was.
Very interesting. If I'm reading this correctly, though, if all 4 Availability Zones they're replicated across had gone down, they would've been in the same boat.
From the viewpoint of a non-cloud user, this is a pretty normal situation: systems fail. Maybe we should think about the cloud as a service that is managed somewhat differently (to enable easier access to our wallets and budgets) but that eventually fails the same way standard services do. That's how I saw it when the first headline about cloud services appeared in front of me a couple of years ago.
It's pretty wild that this stuff happens. Similar to today's nasty outage, Google has had some massive problems with its app engine datastore...
I'm curious if anyone has any predictions about what the landscape will be like in a few years? Will these be solved problems? Will cloud services lose favor? Will everything just be designed more conservatively? Will engineers finally learn to read the RTFSLA?
The benefits of the cloud are just too great; we won't go back. Except in a few years, when something goes down, instead of it being some random site that's down, it's going to be the 20,000 sites hosted on that hardware.
One data point: I have one of my clients' servers in the east-1d availability zone (East Coast region, zone d). So far things are holding up, with no crash and no slowdown. Fingers crossed.
I had one client project on DreamHost, and never again. My experience with them was that even when it was "up", it could be down: lots of mysterious glitches and weirdness, stomping, restarts. I'm not 100% sure it was their fault, but I didn't see any definite evidence it was ours either. In comparison, WebFaction and Linode have been great, though I settled on Linode for all new projects for several reasons that I felt made them better in the general case.
Oh, but I've had all sorts of other fun failures with dreamhost in the early days. A number of us regularly called it "dreamhose". It seems to have matured, and I keep some material on there, but I'm still wary of putting anything mission critical on it.
[1] http://aws.amazon.com/ec2/faqs/#How_isolated_are_Availabilit...
[2] http://aws.amazon.com/ec2/faqs/#What_does_your_Amazon_EC2_Se...
[3] Of course, they're not even meeting that right now. :-(