> One of the biggest customer-facing effects of this delay was that status.github.com wasn't set to status red until 00:32am UTC, eight minutes after the site became inaccessible. We consider this to be an unacceptably long delay, and will ensure faster communication to our users in the future.
Amazon could learn a thing or two from Github in terms of understanding customer expectations.
I recently stepped into a role with a devops component, and one of my first surprises was just how slow status.aws.amazon.com was to update about ongoing issues. I had to scramble to find confirmation on Twitter and external forums for the client.
> Amazon could learn a thing or two from Github in terms of understanding customer expectations.
Do you mean that "the cloud provider that is bigger than the next 14 combined and whose jargon has spread through the community" doesn't understand what customers are interested in and delivering on that?
There's no mention of why they don't have redundant systems in more than one datacenter. As they say, it is unavoidable to have power or connectivity disruptions in a datacenter. This is why reliable configurations have redundancy in another datacenter elsewhere in the world.
Given the dependency in question is Redis, the problem is probably exacerbated by the fact that Redis hasn't really had a decent HA solution.
This is also hidden by the fact that Redis is really reliable (in my experience, at least). It usually takes an ops event (like adding more RAM to the Redis machine) to reveal where a crutch has developed on Redis in a critical path.
> There's no mention of why they don't have redundant systems in more than one datacenter
Sometimes reading comments on HN makes me laugh out loud.
There's only one reason not to do this, and that's cost. What do you expect them to say about that? I mean really, do you think they're going to put that in a blog post:
"Well, the reason we don't have an entire replica of our entire installation is because it costs way too much. In fact, more than double! And so far our uptime is actually 99.99% so there's no way it's worth it! You can forget about that spend! Sorry bros."
It's shocking that they don't at least have a read replica of their system in another 'AZ'. That's cloud hosting 101, and being self-hosted isn't an excuse to skimp on this.
If an outage caused 2 hours of read-only access to repos it would still be moderately impactful, but at least we could still build our Go code.
For all that work to be done in just two hours is amazing, especially with degraded internal tools, and both hardware and ops teams working simultaneously.
We should collectively be using incidents like this as an opportunity to learn, much like the GitHub team does. Our entire industry is held back by the lack of knowledge sharing when it comes to problem response and the fact that so many companies are terrified of being transparent in the face of failure.
This is a very well-written retrospective that gives us a glimpse into the internal review they conducted. Imagine how much we could collectively learn if everyone were fearless about sharing.
Is there a timeline of how long it took them to figure out Redis was down? Because, having experienced the same thing: you get an alert. Cool. HAProxy says the app servers are down. OK. You SSH in and see that everything looks fine, but the processes are bouncing. You tail the logs to find out why (obviously lots of these steps could be optimized). Within a few seconds you spot the error connecting to Redis. A minute later you've verified the Redis hosts are offline.
That's the first 5 minutes after getting to a computer.
After that it doesn't really matter why they're down. You failover, get the site back up and worry about it later.
Are these systems on a SAN? If so, that's probably the first mistake. Redis isn't HA. You're not going to bounce its block devices over to another server in the event of a failure. That's just a complex, very expensive strategy that introduces a lot of novel ways to shoot yourself in the face. If you're hosting at your own datacenter, you use DAS with Redis. Cheaper, simpler. I've never seen a cabinet power loss cause a JBOD failure (I'm sure it happens, but it's far from a common scenario IME), but then again, locality matters. Don't get overly clever and spread logical systems across cabinets just because you can.
Being involved with this sort of thing more frequently than I'd like to admit, I don't know the exact situation here, but 2h6m isn't necessarily anything to brag about without a lot more context.
What's pretty shameful is that a company with GitHub's resources isn't drilling failover procedures, is ignoring physical segmentation as an availability target (or maybe just got really, really unlucky; stuff happens), and doesn't have a backup datacenter with BGP or DNS failover. This is all stuff that (in theory, if not always in practice) many of their clients wearing a "PCI Compliant" badge are already doing on their own systems.
In addition to recognizing the speed at which a non-obvious downtime was remedied, I would personally like to thank GitHub for the detailed technical report being released. Far too many companies release statements that were clearly written by or edited by PR people. Most companies just piss off their customers by releasing generic press releases that don't give us any idea of what happened. Downtime is inevitable; what matters is being open and honest about such problems, and offering insight into what can be improved for the future.
So thank you GitHub, please keep up the good work!
I don't know enough about server infrastructure to comment on whether or not Github was adequately prepared or reacted appropriately to fix the problem.
But wow it is refreshing to hear a company take full responsibility and own up to a mistake/failure and apologize for it.
Like people, all companies will make mistakes and have momentary problems. It's normal. So own up to it and learn how to avoid the mistake in the future.
As I said in another comment, the fact that they found an 8 minute delay from outage to status page update to be unacceptable speaks volumes to how much they value their relationship with their customers.
As an aside, I feel quite fortunate to work in the EST timezone, as their outage apparently started at about 7pm my time. We have a general rule at my company not to deploy after 6pm unless an emergency fix absolutely needs to go up.
I saw the title of the story and said to myself, what outage? :P
Does GitHub run anything like Netflix's Simian Army against its services? As a company by engineers, for engineers, at the scale GitHub has reached, I'm a bit surprised they're lacking a bit more redundancy. Though they may not need the uptime of Netflix, an outage of more than a few minutes on GitHub could affect businesses that rely on the service.
Google "Netflix downtime" for evidence that Netflix also has outages. Google has outages, sometimes very significant ones affecting Google Apps. Facebook has outages.
Complex systems fail. Period. All the time. Things like the Simian Army are fantastic tools that help you identify a host of problems and remediate them in advance, but they cannot test every combinatorial possibility in a complex distributed system.
At the end of the day, the best defense is to have skilled people who are practiced at responding to problems. GitHub has those in spades, which is why they could respond to a widespread failure of their physical layer in just over 2 hours.
The biggest win with the Simian Army isn't that it improves your redundancy. It's that it gives your people opportunities to _practice_ responses.
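That practice effect can be illustrated with a toy sketch (all names here are hypothetical, not Netflix's or GitHub's actual tooling): a chaos drill kills one node in a simulated fleet, and the routing layer is expected to degrade rather than fail.

```python
import random

class Node:
    """A fake service node that a chaos drill can 'kill'."""
    def __init__(self, name):
        self.name = name
        self.alive = True

    def handle(self, request):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def route(nodes, request):
    """Try healthy nodes in order; fail only if every node is dead."""
    for node in nodes:
        try:
            return node.handle(request)
        except ConnectionError:
            continue
    raise RuntimeError("total outage: no healthy nodes")

def chaos_drill(nodes, rng=random):
    """Kill one node at random, the way a chaos monkey would."""
    victim = rng.choice(nodes)
    victim.alive = False
    return victim

fleet = [Node(f"app-{i}") for i in range(3)]
chaos_drill(fleet)
print(route(fleet, "GET /"))  # still answers with one node down
```

The value isn't the toy redundancy logic; it's that running the drill regularly forces people to watch the degraded path actually work.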
Every time I read about a massive systems failure, I think of Jurassic Park and am mildly grateful that the velociraptor paddock wasn't depending on the system's operation.
This just shows how difficult it is to avoid hidden dependencies without a complete, cleanly isolated testing environment of sufficient scale to replicate production operations and run strange system-fault scenarios somewhere that won't kill production.
It turns out that even then it's hard. Complex systems, by their very nature, fail in unexpected and unpredictable ways. As if that weren't bad enough, hindsight bias makes it way too easy for us to look back with perfect knowledge and opine, "That was so obvious, how could they have missed such a rudimentary issue?"
> ... Updating our tooling to automatically open issues for the team when new firmware updates are available will force us to review the changelogs against our environment.
That's an awesome idea. I wish all companies published their firmware releases as simple RSS feeds, so everyone could easily integrate them with their trackers.
(If someone's bored, that may be a nice service actually ;) )
This was one of the toughest things about administering hardware clusters. Firmware updates (and firmware issues) are so hard to track down. It's so annoying. I remember spending a week tracking down an issue with a RAID controller, and then spending another day or two on the phone with the vendor trying to get a firmware update so we didn't have two racks of hardware sitting on a ticking time bomb.
I played with the idea of an automated software-update reporting site ages ago - it'd read RSS feeds and scrape websites for the required info. It'd probably need adjustments for each hardware manufacturer / product, though, and regular updating. But that could be part of an open source project, giving the firmware maintainers the opportunity to help out too.
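The core of the watcher being described is small. A minimal sketch, assuming the vendor publishes a standard RSS 2.0 feed (the feed contents and the "open tracker issue" step here are made up for illustration):

```python
import xml.etree.ElementTree as ET

def new_firmware_items(rss_xml, seen_titles):
    """Parse an RSS 2.0 feed of firmware releases and return unseen items.

    A real watcher would fetch rss_xml from the vendor's feed URL on a
    schedule and file one tracker issue per new item, as the postmortem
    suggests GitHub now does."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        link = item.findtext("link", "")
        if title and title not in seen_titles:
            items.append((title, link))
    return items

# Hypothetical sample feed for demonstration:
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example RAID controller firmware</title>
  <item><title>Firmware 25.4.0.0017</title><link>http://example.com/25.4</link></item>
  <item><title>Firmware 25.3.0.0016</title><link>http://example.com/25.3</link></item>
</channel></rss>"""

for title, link in new_firmware_items(SAMPLE_FEED, {"Firmware 25.3.0.0016"}):
    print(f"open tracker issue: review changelog for {title} ({link})")
```

The per-vendor scraping adapters are the hard part; the diffing against what you've already reviewed is the easy part shown here.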
> Remote access console screenshots from the failed hardware showed boot failures because the physical drives were no longer recognized.
I'm getting flashbacks. All of the servers in the DC reboot and NONE of them come online. No network or anything. Even remotely rebooting them again, we had nothing. Finally getting a screen (which is a pain in itself), we saw they were all stuck on a grub screen. Grub had detected an error and decided not to boot automatically. Needless to say, we patched grub and removed this "feature" promptly!
You can very clearly see two kinds of people posting on this thread: those who have actually dealt with failures of complex distributed systems, and those who think it's easy.
"We identified the hardware issue resulting in servers being unable to view their own drives after power-cycling as a known firmware issue that we are updating across our fleet."
Tell us which vendor shipped that firmware, so everyone else can stop buying from them.
I feel it was a good incident for the Open Source community, to see how dependent we are on GitHub today. I feel sad whenever I see another large project like Python moving to GitHub, a closed-source company. I know GitLab is there as an alternative, but I would love to see all the big Open Source projects putting pressure on GitHub to open their source code, as right now they are a big player in open source, like it or not.
Git is a distributed version control system. Github is simply a place to host a repository and some issues. There is nothing stopping anyone from pushing to another remote hub for redundancy.
So you want GitHub to open source where they put your git repo and issues? Who cares about that? It's unimportant, because regardless they're still the central endpoint for many open source projects, open or closed source. If you want open source, use GitLab or any other service that sprinkles extra features around git.
I'll never understand this outrage over dependence on GitHub when you have a distributed version control system. It shouldn't be on GitHub to set up third-party repositories for you.
If GitHub open-sourced all of their stuff, it still wouldn't prevent issues like this for the projects that want to use a hosted service instead of hosting it themselves, and many projects don't want to host these services themselves anymore.
When I worry about dependency on GitHub, I'm thinking about not the inconvenient hours of downtime but the larger threat that they might disappear or turn evil.
What I would like to see even more than opensource github would be a standard for spreading over more services. For instance, syncing code, issues, pull requests, wiki, pages, etc between self-hosted gitlab and gitlab.com, or between gitlab.com and github.com. Further, I'd like to see it be easier to use common logins across services.
I don't think we can rely on Github giving us this, but if GitLab would add it between gitlab.com and gitlab ce, that would be a compelling reason to think of switching.
Was it a good incident to see how dependent we are on GitHub? Every time there's a GitHub outage, a vocal group of people will voice their opinions that we are too dependent on GitHub, we should be using open source alternatives, GitHub should be open source, etc. Then, within a few days, everybody goes silent and we return to our normal lives.
I don't think outages at GitHub are very frequent. This one was lengthy, so it's definitely been on a lot of peoples' minds, but this conversation always comes up when it happens.
> I feel sad whenever I see another large project like Python moving to GitHub, a closed-sourced company.
What would you rather have? A dependency on a bunch of projects with variable hosting of whatever means, or all your dependencies hosted with the uptime of GitHub? Having an install fail because some host is down somewhere deep in your nest of dependencies is going to happen a lot more if you have more hosts to worry about.
It must be nice to know that the majority of your customers are familiar enough with the nature of your work that they'll actually understand a relatively complex issue like this. Almost by definition, we've all been there.
If only Bitbucket could give such comprehensive reports. A few months back, outages seemed almost daily. Things are more stable now; I hope that holds for the long term.
Isn't BB's problem basically that there are too many users? GH's outage writeup is cool because it's a one-off and can be analysed. When BB is just overloaded for a long time and needs more capacity, it's not going to be very interesting.
(unless I missed some specific non capacity related outages?)
> Over the past week, we have devoted significant time and effort towards understanding the nature of the cascading failure which led to GitHub being unavailable for over two hours.
I don't mean to be blasphemous, but from a high level, are the performance issues with Ruby (and Rails) that necessitate close binding with Redis (i.e., lots of caching) part of the issue?
It sounds like the fundamental issue is not Ruby, nor Redis, but the close coupling between them. That's sort of interesting.
I don't think that Ruby/Rails has anything to do with this, really. If you want to scale any app, you're going to want to do some caching somewhere. What this boils down to is that their app had an initializer with a hard dependency on Redis. Without a connection to Redis, it will flap.
As someone with a fair bit of Ruby+Rails+Redis experience, I don't think this is blasphemous, but I also don't think the performance issues of Ruby/Rails have anything to do with the failure. Generally you cache/store something in Redis not because your programming language or framework is slow, but because a query to another database is slow (or at least, slower than Redis), or because Redis data structures happen to be a good/quick way to store certain kinds of data.
I believe the fundamental issue was just that redis availability was taken for granted by app servers so that certain code paths/requests would fail if it wasn't available, rather than merely be slower.
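The fix for that failure mode is to treat the cache as optional. A minimal sketch (the class and client here are hypothetical stand-ins, not GitHub's actual code): requests fall back to the slow path when the cache is unreachable, rather than erroring.

```python
class SoftCache:
    """Wrap a cache client so the app degrades to 'slower' rather than
    'down' when the cache is unreachable. `client` stands in for a real
    Redis connection; any object with get/set will do."""
    def __init__(self, client):
        self.client = client

    def fetch(self, key, compute):
        try:
            cached = self.client.get(key)
            if cached is not None:
                return cached
        except ConnectionError:
            pass  # cache outage: fall through to the slow path
        value = compute()
        try:
            self.client.set(key, value)
        except ConnectionError:
            pass  # best effort; losing a cache write is acceptable
        return value

class DownClient:
    """Simulates an unreachable Redis: every call raises."""
    def get(self, key):
        raise ConnectionError("redis unreachable")
    def set(self, key, value):
        raise ConnectionError("redis unreachable")

cache = SoftCache(DownClient())
print(cache.fetch("user:1", lambda: "computed from primary DB"))
```

The trade-off is that a cache outage now shows up as a latency regression instead of an availability incident, which is usually the failure mode you want.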
Not all processes that involve GitHub are development processes. I've seen automated deployments fail inside a corporate network when the resident HTTP proxy had a bad day and could not connect to github.com.
So, while it sounds like they have reasonable HA, they fell down on DR.
Unrelated, but I could not comprehend what this means:
> technicians to bring these servers back online by draining the flea power to bring ...
I assume they mean completely disconnect the equipment from ALL external power sources. Typically even when a piece of equipment is offline in a data center, it continues to draw power, and will often keep running systems like DRAC and other management/status tools (since the whole concept of a data center is NEVER having to get up out of your chair, so even a "shutdown" system needs to be able to be remotely started).
Since the firmware had a bug, bad state could be stored, completely removing power may clear that state and appears to have done so in this case. They may have also needed to pull the backup battery, and reset the firmware settings, but I wouldn't presume that just from the term "flea power."
No, it sounds good, because it's realistic and then you can build mitigation strategies.
I was recently involved in an outage that occurred because the same datacenter was hit by lightning three times in a row. Everything was redundant up the wazoo and handled the first two hits just fine, but by the time the power went out for the third time within N minutes, there wasn't enough juice left in some of the batteries!
Now, would it be possible to build an automated system that can withstand this? Probably. But would your time and money be better spent worrying about other failure modes? Almost certainly.
If your plan to avoid downtime is to prevent power outages, you're going to have downtime. All their sentence says is they can't prevent power outages. That's fine, because the other 1/nth of your servers are on a different power grid in a different state.
Whose datacenter are they in? This is the second time in less than two weeks that they've suffered a power-related issue. My company is in 4 different sites around the world and we've never lost power ever - and, if one circuit did go out, we'd still be up and running because all of our servers have redundant power supplies on separate infeed circuits.
"...but we can take steps to ensure recovery occurs in a fast and reliable manner. We can also take steps to mitigate the negative impact of these events on our users."
The lessons that giants like Netflix have learned about running massive distributed applications show that you cannot avoid failure, and instead must plan for it.
Now, having a single datacenter is not a good plan if you want to give any sort of uptime guarantee, but that's a different point to make.
I'm going to guess that these are Dell R730xd boxes with PERC H730 Mini controllers (LSI MegaRAID SAS-3 3108).
A failed/failing drive present during cold boot could cause the controller to believe there were no drives present. To add insult to injury, on early BIOS versions this made the UEFI interface inaccessible. The only way to recover from this state was to re-seat the RAID controller.
There were also two bizarre cases where the operating system SSD RAID1 would be wiped and replaced with a NTFS partition after upgrading the controller firmware (and more) on an affected system (hanging/flapping drives). Attempts to enter UEFI caused a fatal crash, but reinstall (over PXE) worked fine. BIOS upgrade from within fresh install restored it.
From the changelog:
Fixes:
- Decreased latency impact for passthrough commands on SATA disks
- Improved error handling for iDRAC / CEM storage functions
- Usability improvements for CTRL-R and HII utilities
- Resolved several cases where foreign drives could not be imported
- Resolved several issues where the presence of failed drives could lead to controller hangs
- Resolved issues with managing controllers in HBA mode from iDRAC / CEM
- Resolved issues with displayed Virtual Disk and Non-RAID Drive counts in BIOS boot mode
- Corrected issue with tape media on H330 where tape was not being treated as sequential device
- Resolved an issue where inserted hard drives might not get detected properly
> We had inadvertently added a hard dependency on our Redis cluster being available within the boot path of our application code.
I seem to recall a recent post on here about how you shouldn't have such hard dependencies. It's good advice.
Incidentally, this type of dependency is unlikely to happen if you have a shared-nothing model (like PHP has, for instance), because in such a system each request is isolated and tries to connect on its own.
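The distinction being drawn is between connecting at boot and connecting on first use. A sketch (names hypothetical): deferring the connection means the process can boot, and serve Redis-free endpoints, even while Redis is down.

```python
class LazyRedis:
    """Defer the connection until first use, so the process can boot
    while Redis is down. `connect` stands in for establishing a real
    client; it is only called when the connection is actually needed."""
    def __init__(self, connect):
        self._connect = connect
        self._conn = None

    def conn(self):
        if self._conn is None:
            self._conn = self._connect()  # may raise, but only when used
        return self._conn

boot_log = []

def boot_app(redis_connect):
    """Boot path: note that no connection attempt happens here, unlike
    an eager initializer that would crash the whole boot."""
    cache = LazyRedis(redis_connect)
    boot_log.append("booted")
    return cache

def failing_connect():
    raise ConnectionError("redis down")

app_cache = boot_app(failing_connect)  # boots fine despite the Redis outage
print(boot_log)  # ['booted']
```

A shared-nothing runtime gets this behavior for free because nothing persists across requests; in a long-lived process you have to choose it deliberately.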
> Because we have experience mitigating DDoS attacks, our response procedure is now habit and we are pleased we could act quickly and confidently without distracting other efforts to resolve the incident.
The thing that fixed the last problem doesn't always fix the current problem.
Power outage in DC brought many machines down. Redis clusters failed to start owing to disk issues (not cleanly unmounted?). The reboot of remaining machines uncovered an unknown dependency on the machines needing the redis cluster to be up in order to boot.
There were other learning points, such as immediately going into anti-DDoS mode, and human communication issues that meant the problem wasn't recognized or escalated until some time after it started occurring.
No CI/test process was in place for critical systems to ensure that they had no external dependencies.
Takeaway: If you run any complex system, ensure that each component is tested for its response to various degrees of failure in peer services, including but not limited to totally unavailable, intermittent connectivity, reduced bandwidth, lossy links, power-cycling peers.
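That takeaway can be mechanized as ordinary unit tests: inject each failure mode of a peer service into a handler and assert the response degrades instead of erroring. A sketch, with all names hypothetical:

```python
def render_dashboard(cache_get):
    """A request handler that must survive cache failure. `cache_get`
    is injected so tests can substitute failure modes for the real
    peer-service call."""
    try:
        hits = cache_get("dashboard:hits")
    except (ConnectionError, TimeoutError):
        hits = None  # degrade: omit the counter rather than return a 500
    return {"status": "ok", "hits": hits}

# Failure modes every peer-service client should be exercised against:
def unavailable(key):
    raise ConnectionError("peer totally unavailable")

def timing_out(key):
    raise TimeoutError("lossy link / reduced bandwidth")

def healthy(key):
    return 42

for mode in (unavailable, timing_out, healthy):
    assert render_dashboard(mode)["status"] == "ok"
print("all failure modes handled")
```

The point is that the degraded path runs in CI on every commit, instead of being discovered for the first time during an outage.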
No CI/test process was in place for hardware/firmware combos to ensure they recovered fine from power loss.
Takeaway: If you run a decent-sized cluster, ensure all new hardware ingested is tested through various power state transitions multiple times, and again after firmware updates. With software defined networking now the norm, we have little excuse not to put a machine through its paces on an automated basis before accepting it to run critical infrastructure.
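The burn-in loop itself is simple; the work is in wiring it to real out-of-band management. A sketch where `power_cycle` and `drives_visible` are stand-ins for real calls (IPMI, Redfish, vendor tooling), simulating a firmware bug of the kind in the postmortem:

```python
def burn_in(host, power_cycle, drives_visible, cycles=10):
    """Power-cycle a new machine repeatedly and verify its drives come
    back each time, before accepting it into the fleet. Repeat after
    every firmware update."""
    for i in range(cycles):
        power_cycle(host)
        if not drives_visible(host):
            return False, f"drives missing after cycle {i + 1}"
    return True, f"passed {cycles} power cycles"

# Simulate a GitHub-style firmware bug: drives vanish on the 3rd cold boot.
state = {"boots": 0}

def fake_cycle(host):
    state["boots"] += 1

def buggy_controller(host):
    return state["boots"] != 3

ok, report = burn_in("node-01", fake_cycle, buggy_controller, cycles=5)
print(ok, report)  # False drives missing after cycle 3
```

A machine that only fails on its third cold boot is exactly the kind of fault a single smoke test misses and a multi-cycle burn-in catches.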
No CI/test process was in place for status advisory processes to ensure they were sufficiently rapid, representative, and automated.
Takeaway: Test your status update processes as you would test any other component service. If humans are involved, drill them regularly.
Infrastructure was too dependent on a single data center.
Takeaway: Analyze worst case failure modes, which are usually entire-site and power, networking or security related. Where possible, never depend on a single site. (At a more abstract level of business, this extends to legal jurisdictions). Don't believe the promises of third party service providers (SLAs).
PS. I am available for consulting, and not expensive.
You could google "HA", click the Wikipedia link that lists all the things "HA" may refer to, and deduce that the most logical entry, given the context, is this one: https://en.wikipedia.org/wiki/High_availability.
I seriously doubt this version of the story. While it's possible for hardware/firmware to fail in all your datacenters, for them to fail at the same time is highly unlikely. This may just be PR spin so people don't think they're vulnerable to security attacks.
While this was happening at GitHub, I noticed several other companies facing the same issue at the same time. Atlassian was down for the most part. It could have been an issue with a service GitHub uses, but they won't admit that. Notice they never said what the firmware issue was, instead blaming it on "hardware".
I think they should be transparent with people about such a vulnerability, but I suspect they would never say so, because then they would lose revenue.
They're not hosted in multiple datacenters; there was a power interruption in their single datacenter that exposed this firmware bug. The point of this postmortem isn't the initial power interruption but rather its repercussions, why it took so long to recover from and how they can improve their response and communications in the future.
[deleted]
onetwotree|10 years ago
chris_wot|10 years ago
mattdeboard|10 years ago
mjevans|10 years ago
imbriaco|10 years ago
If only things were that easy.
ones_and_zeros|10 years ago
viraptor|10 years ago
That's an awesome idea. I wish all companies published the firmware releases in simple rss feeds, so everyone could easily integrate them with their trackers.
(If someone's bored, that may be a nice service actually ;) )
vhost-|10 years ago
Cthulhu_|10 years ago
matt_wulfeck|10 years ago
I'm getting flashbacks. All of the servers in the DC reboot and NONE of them come online. No network or anything. Even remotely rebooting them again we had nothing. Finally getting a screen (which is a pain in itself) we saw they were all stuck on a grub screen. Grub detected an error and decided not to boot automatically. Needless to say we patched grubbed and removed this "feature" promptly!
gaius|10 years ago
Animats|10 years ago
Tell us which vendor shipped that firmware, so everyone else can stop buying from them.
gruez|10 years ago
merqurio|10 years ago
BinaryIdiot|10 years ago
So you want Github to open source where they put your git repo and issues? Who cares about that? It's unimportant because regardless they're still the central endpoint to many open source projects, opened or closed source. If you want open source use Gitlab or any other service that sprinkles extra features around git.
I'll never understand this outrage of dependence on Github when you have a distributed version control system. It's not like it should be on github to setup third party repositories for you.
jdboyd|10 years ago
When I worry about dependency on GitHub, I'm thinking about not the inconvenient hours of downtime but the larger threat that they might disappear or turn evil.
What I would like to see even more than opensource github would be a standard for spreading over more services. For instance, syncing code, issues, pull requests, wiki, pages, etc between self-hosted gitlab and gitlab.com, or between gitlab.com and github.com. Further, I'd like to see it be easier to use common logins across services.
I don't think we can rely on Github giving us this, but if GitLab would add it between gitlab.com and gitlab ce, that would be a compelling reason to think of switching.
davidcelis|10 years ago
I don't think outages at GitHub are very frequent. This one was lengthy, so it's definitely been on a lot of peoples' minds, but this conversation always comes up when it happens.
VeilEm|10 years ago
What would you rather have? A dependency on a bunch of projects with variable hosting of whatever means or all your dependencies hosted with the uptime of GitHub? Having an install fail because some host is down somewhere deep in your nest of dependencies is going happen a lot more if you have more hosts to worry about.
rqebmm|10 years ago
dsmithatx|10 years ago
viraptor|10 years ago
(unless I missed some specific non capacity related outages?)
guelo|10 years ago
sh4na|10 years ago
jlgaddis|10 years ago
gsibble|10 years ago
tmsh|10 years ago
I don't mean to be blasphemous, but from a high level, is the performance issues with Ruby (and Rails) that necessitate close binding with Redis (i.e., lots of caching) part of the issue?
It sounds like the fundamental issue is not Ruby, nor Redis, but the close coupling between them. That's sort of interesting.
byroot|10 years ago
It has nothing to do with Ruby, or Rails, or even Redis. It's just a design flaw of the application, one that you often learn about the hard way.
atom_enger|10 years ago
lukeasrodgers|10 years ago
I believe the fundamental issue was just that redis availability was taken for granted by app servers so that certain code paths/requests would fail if it wasn't available, rather than merely be slower.
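To illustrate (a hypothetical Ruby sketch, not GitHub's actual code): the difference is whether a cache read failure propagates up and kills the request, or degrades to the slower recomputation path.

```ruby
# Hypothetical sketch: treat the cache as a soft dependency, so a Redis
# outage degrades to a slower recomputation instead of failing the request.
def fetch_with_fallback(cache, key)
  begin
    cached = cache.get(key)
    return cached unless cached.nil?
  rescue StandardError
    # Cache unreachable: fall through and recompute without it.
  end
  yield # the slow path: recompute the value
end
```

With a real client the `cache` argument would be something like a `Redis` instance; here any object responding to `get` works.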
cognivore|10 years ago
majewsky|10 years ago
timiblossom|10 years ago
rurounijones|10 years ago
That would have given them immediate context instead of wasting time on DDoS protection.
spydum|10 years ago
Flea power?
Someone1234|10 years ago
Since the firmware had a bug, bad state could be stored, completely removing power may clear that state and appears to have done so in this case. They may have also needed to pull the backup battery, and reset the firmware settings, but I wouldn't presume that just from the term "flea power."
tonylxc|10 years ago
This doesn't sound very good.
jpatokal|10 years ago
I was recently involved in an outage that occurred because the same datacenter was hit by lightning three times in a row. Everything was redundant up the wazoo and handled the first two hits just fine, but by the time the power went out for the third time within N minutes, there wasn't enough juice left in some of the batteries!
Now would it be possible to build an automated system that can withstand this? Probably. But would your time & money be better spent worrying about other failure modes? Almost certainly.
jrockway|10 years ago
otterley|10 years ago
theptip|10 years ago
"...but we can take steps to ensure recovery occurs in a fast and reliable manner. We can also take steps to mitigate the negative impact of these events on our users."
The lessons that giants like Netflix have learned about running massive distributed applications show that you cannot avoid failure, and instead must plan for it.
Now, having a single datacenter is not a good plan if you want to give any sort of uptime guarantee, but that's a different point to make.
unknown|10 years ago
[deleted]
mattdeboard|10 years ago
ymse|10 years ago
A failed/failing drive present during cold boot could cause the controller to believe there were no drives present. To add insult to injury, on early BIOS versions this made the UEFI interface inaccessible. The only way to recover from this state was to re-seat the RAID controller.
There were also two bizarre cases where the operating system SSD RAID1 would be wiped and replaced with a NTFS partition after upgrading the controller firmware (and more) on an affected system (hanging/flapping drives). Attempts to enter UEFI caused a fatal crash, but reinstall (over PXE) worked fine. BIOS upgrade from within fresh install restored it.
From the changelog:
TazeTSchnitzel|10 years ago
I seem to recall a recent post on here about how you shouldn't have such hard dependencies. It's good advice.
Incidentally, this type of dependency is unlikely to happen if you have a shared-nothing model (like PHP has, for instance), because in such a system each request is isolated and tries to connect on its own.
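A hypothetical Ruby sketch of the same idea (names are mine, not from any real app): defer the connection attempt to the moment of use, so the process can boot and serve requests even while the cache is down.

```ruby
# Hypothetical sketch: connect lazily per use rather than at boot time,
# so a down cache never prevents the application from starting.
class LazyCache
  def initialize(&connector)
    @connector = connector # e.g. -> { Redis.new } in a real app
  end

  def get(key)
    @connector.call.get(key)
  rescue StandardError
    nil # any connection or read failure is just a cache miss
  end
end
```

The design choice here is that failure to connect is indistinguishable from a cache miss, which is exactly the soft-dependency behavior being discussed.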
totally|10 years ago
The thing that fixed the last problem doesn't always fix the current problem.
dgritsko|10 years ago
unknown|10 years ago
[deleted]
swrobel|10 years ago
alblue|10 years ago
There were other learning points too, such as automatically going into anti-DDoS mode, and human communication issues that meant the problem wasn't recognised or escalated until some time after it started.
aidenn0|10 years ago
A firmware issue meant that a large fraction of their servers could not detect their disks on reboot.
This prevented the Redis cluster from starting.
They inadvertently had a hard dependency on Redis being up for the majority of their infrastructure to start.
daigoba66|10 years ago
contingencies|10 years ago
Takeaway: If you run any complex system, ensure that each component is tested for its response to various degrees of failure in peer services, including but not limited to total unavailability, intermittent connectivity, reduced bandwidth, lossy links, and power-cycling peers.
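A minimal sketch of one such check (hypothetical Ruby, assuming a TCP peer): point the probe at the peer's address and assert the test subject reports it as down rather than raising or hanging.

```ruby
require "socket"

# Hypothetical failure-injection helper: attempt a TCP connection to a
# peer with a bounded timeout, reporting false on refusal or timeout
# instead of letting the exception propagate.
def peer_reachable?(host, port, timeout: 0.5)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue SystemCallError
  false
end
```

In a real test suite you would run the component against a deliberately dead port (or a proxy that drops packets) and assert its degraded behavior, not just probe reachability.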
No CI/test process was in place for hardware/firmware combos to ensure they recovered fine from power loss.
Takeaway: If you run a decent-sized cluster, ensure all new hardware ingested is tested through various power state transitions multiple times, and again after firmware updates. With software defined networking now the norm, we have little excuse not to put a machine through its paces on an automated basis before accepting it to run critical infrastructure.
No CI/test process was in place for status advisory processes to ensure they were sufficiently rapid, representative, and automated.
Takeaway: Test your status update processes as you would test any other component service. If humans are involved, drill them regularly.
Infrastructure was too dependent on a single data center.
Takeaway: Analyze worst case failure modes, which are usually entire-site and power, networking or security related. Where possible, never depend on a single site. (At a more abstract level of business, this extends to legal jurisdictions). Don't believe the promises of third party service providers (SLAs).
PS. I am available for consulting, and not expensive.
maerF0x0|10 years ago
Edit: this is mostly the "DR" part of tl;dr :P
draw_down|10 years ago
You're welcome.
jargonless|10 years ago
I would STFW, but searching for "HA" isn't helpful.
dang|10 years ago
polysaturate|10 years ago
suraj|10 years ago
cycomachead|10 years ago
xzlzx|10 years ago
mattbeckman|10 years ago
unknown|10 years ago
[deleted]
unsatchmo|10 years ago
[deleted]
osoti|10 years ago
[deleted]
julesbond007|10 years ago
While this was happening at GitHub, I noticed several other companies facing the same issue at the same time. Atlassian was down for the most part. It could have been an issue with a service GitHub uses, but they won't admit that. Notice they never said what the firmware issue was, instead blaming it on "hardware".
I think they should be transparent with people about such vulnerabilities, but I suspect they never would, because then they would lose revenue.
Here on my blog I talked about this issue: http://julesjaypaulynice.com/simple-server-malicious-attacks...
I think there was some DDoS campaign going on across the web.
dandandan|10 years ago