Hi all - I'm the head of engineering at GitHub. Please accept my sincere apology for this downtime. The cause was a bad deploy (a db migration that changed an index). We were able to revert in about 30 minutes. This is slower than we'd like, and we'll be doing a full RCA of this outage.
Thanks for taking the time to personally give a status update while things are on fire. I hope you and all the others who are dealing with this emergency will have an especially restful weekend.
I was just griping on Twitter yesterday about how many developers won't immediately revert an update that causes downtime, but will actually spend time trying to solve the problem while Rome burns.
@keithba I have built a (private) GitHub Action around https://github.com/sbdchd/squawk (for Postgres) that lints all our migration files on each PR. The action extracts raw SQL from the codebase and passes it into squawk.
It catches many migrations that take exclusive locks or are missing `index concurrently`, which would otherwise have been released to production and caused downtime or degraded service. Maybe something you should start doing.
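For anyone who wants the flavor of such a check without squawk itself, here's a rough grep-based stand-in — it is not squawk (which does proper lock analysis), and the migrations directory layout is an assumption:

```shell
# Rough stand-in for a migration linter: flag CREATE INDEX statements
# that lack CONCURRENTLY (plain CREATE INDEX takes a blocking lock in
# Postgres). The db/migrations layout is assumed, not from the thread.
check_migration() {
  # Prints offending lines and returns non-zero if any are found.
  ! grep -inE 'create +(unique +)?index +' "$1" | grep -ivE 'concurrently'
}

lint_all() {
  failed=0
  for f in "$1"/*.sql; do
    check_migration "$f" || { echo "BLOCKING INDEX in $f"; failed=1; }
  done
  return "$failed"
}
```

In CI you would run `lint_all db/migrations` on each PR and fail the build on a non-zero exit; squawk catches far more than this one pattern, so treat it purely as an illustration of the shape of the check.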
I wonder if they track Github Status traffic volume as some sort of meta-indicator? Is it even viable?
I was futzing around with the description for a PR and hitting save wouldn't update it, yet clicking edit would show the text I expected to see.
Suspecting something was up I checked Github Status but it was green across the board. Assuming enough other people hit the same chain of events, could it provide a reliable enough indicator of an issue?
> I wonder if they track Github Status traffic volume as some sort of meta-indicator? Is it even viable?
Sure, my previous decent-sized company (~1000+ devs) had that exact metric available.
Visits to the status page generally, that is. Now, whether you could actually correlate that to an increase in errors for a particular component, not so much ;)
I'm sure it's totally feasible, but it requires a certain amount of discipline to keep logging/metric standards consistent across all your applications.
Even worse, some applications would return a shared error page but internally, I believe, it was logged as a 301 redirect until someone spotted it :)
I wonder if reliability has become less of a priority. As somebody with little to no experience of running things at scale I’m finding myself attributing this to some form of “move fast and break things”.
Nobody remembers the unicorn days? Earlier in GitHub's history, it seemed like a weekly outage was the norm. You just kind of expected it and built workflows in ways where you had a backup path to your code.
Given that the change happened in the mid-morning PST (timezone where GitHub HQ and most devs are located), I'm going to bet it's almost certainly something messed up from a regular update or deployment.
I remember something after their acquisition about new offers being lower than what certain people previously had, leading to important staff members leaving. That, and some other issue I can't quite remember ... it was probably posted on HN :)
> I’m finding myself attributing this to some form of “move fast and break things”
That was the case when they were the small and hungry startup.
Meanwhile they've been acquired by a giant corporation with a less than stellar reputation for reliability or quality. So it's most likely a case actually of "move slow and break things".
It's surprisingly easy, depending on your scale/scope of course. But in general, I've managed to build CI/CD pipelines that are tolerant of GitHub (or any service) failures by following these steps:
1. Use as little of the configuration language provided by the CI as possible (prefer shellscripts that you call in CI instead of having each step in a YAML config for example)
2. Make sure static content is in a Git repository (same or different) that is also available on multiple SCM systems (I usually use GitHub + GitLab mirroring + a private VPS that also mirrors GitHub)
3. Have a bastion host for doing updates; make CI push changes via the bastion host, and give at least four devs (if you're at that scale) access to it, requiring multisig of 2 of them to access it
Now when the service goes down, you just need 2 developers to sign the login for the bastion host, then manually run the shellscript locally to push your update. You'll always be able to update now :)
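Point 2 above can be sketched with git's multiple push URLs, so a single `git push origin` fans out to every mirror — all the hosts and paths below are placeholders, and a throwaway repo is created so the commands are runnable as-is:

```shell
# Demo repo so the commands below can run anywhere.
repo=$(mktemp -d) && cd "$repo" && git init -q .

# Fetch from GitHub, but push to GitHub + GitLab + a private VPS at once.
git remote add origin git@github.com:example/repo.git
git remote set-url --add --push origin git@github.com:example/repo.git
git remote set-url --add --push origin git@gitlab.com:example/repo.git
git remote set-url --add --push origin git@vps.example.com:repo.git
```

Note that as soon as the first explicit push URL is added, the fetch URL stops being used for pushes — which is why GitHub is re-added as a push URL explicitly.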
> our CI can't clone the PR to run tests. What do other folks use to avoid this situation?
Multiple remotes can help and is certainly something you should have as a backup. However I don't think it solves the root cause which is how the CI is configured.
I'm a firm proponent of keeping your CI as dumb as possible. That's not to say unsophisticated; I mean it should be decoupled as much as possible from the how of the actions it's taking.
If you have a CI pipeline that consists of Clone, Build, Test, and Deploy stages, then I think your actual CI configuration should look as close as possible to the following pseudocode:
stages:
  - clone: git clone $REPO_URL
  - build: sh ./scripts/build.sh
  - test: sh ./scripts/test.sh
  - deploy: sh ./scripts/deploy.sh
Each of these scripts should be something you can run on anything from your local machine to a hardened bastion, at least given the right credentials/access for the deploy step. They don't have to be shell scripts, they could be npm scripts or makefiles or whatever, as long as all the CI is doing is calling one with very simple or no arguments.
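As a sketch, one of those scripts can be nothing more than the following — `APP_NAME` is an invented knob and the echo stands in for real build commands:

```shell
# scripts/build.sh (hypothetical): takes no CI-specific arguments, so it
# runs identically in CI, on a laptop, or on a bastion host.
APP_NAME="${APP_NAME:-app}"
echo "building $APP_NAME"
# ...real build steps (compiler invocation, docker build, etc.) go here
```

The point is that the CI config never knows what "build" means; anyone with a shell can reproduce the exact step CI runs.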
This doesn't rule out using CI specific features, such as an approval stage. Just don't mix CI level operations with project level operations.
As a side benefit this helps avoid a bunch of commits that look like "Actually really for real this time fix deployment for srs" by letting you run these stages manually during development instead of pushing something you think works.
More importantly though, it makes it substantially easier to migrate between CI providers, recover from a CI/VCS crash, or onboard someone who's responsible for CI but maybe hasn't used your specific tool.
You really just need a TCP pathway between your CI and some machine with the git repo on it.
Or take your local copy and use git-fu commands to create a bare repo of it that you can compress and put somewhere like S3. Then download it in CI and checkout from that.
Or just tarball your app source, who cares about git, and do the same (s3, give it a direct path to the asset)
All of this is potentially useless info though. Hard to say without understanding how your CI works. If all you need is the source code, there are a half dozen ways to get that source into CI without git.
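The bare-repo-to-S3 route mentioned above looks roughly like this in practice — the bucket and `aws` calls are placeholders left as comments, and a throwaway repo stands in for your local checkout so the sketch runs anywhere:

```shell
# Stand-in for your local checkout, so this sketch is self-contained.
work=$(mktemp -d)
src="$work/src" && git init -q "$src"
git -C "$src" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "initial"

# Bare-clone the working copy and tarball it for object storage.
git clone -q --bare "$src" "$work/myrepo.git"
tar -C "$work" -czf "$work/myrepo.tar.gz" myrepo.git
# aws s3 cp "$work/myrepo.tar.gz" s3://backup-bucket/myrepo.tar.gz  # upload

# In CI: fetch the tarball, unpack, and clone without touching GitHub.
# aws s3 cp s3://backup-bucket/myrepo.tar.gz "$work/"               # download
tar -C "$work" -xzf "$work/myrepo.tar.gz"
git clone -q "$work/myrepo.git" "$work/checkout"
```

A bare repo keeps full history, so `git log`, branches, and tags all survive the round-trip; the tarball-only variant trades that history away for simplicity.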
Ideally your CI/CD is just calling Make/Python/Whatever scripts that are one shot actions. You should be able to run the same action locally from a clean git repo (assuming you have the right permissions).
The anti-pattern to watch out for is long, complex scripts that live in your CI system’s config file. These are hard to test and replicate when you need to.
Well, unfortunately it seems everything I said 11 days ago [0] has become a reality, I'm afraid, and I was still downvoted for pointing this truth out.
Too many times I've suggested that everyone begin self-hosting, or at least have that as a backup, but once again some think 'going all in on GitHub' is worth it. (It really is not.)
When I built CI stuff at my previous job there were two remote repos that could be cloned from: Github, and a repo on a system on the LAN that the CI's user had ssh access to. Which one was used was controlled by a toggleable environment variable in the CI system.
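That toggle can be as small as a case statement in the clone step — the variable name and both URLs here are made up for illustration:

```shell
# Pick the clone source via an env var the CI system can flip.
CLONE_SOURCE="${CLONE_SOURCE:-github}"
case "$CLONE_SOURCE" in
  github) REPO_URL="git@github.com:example/repo.git" ;;
  lan)    REPO_URL="ci@lan-mirror.internal:/srv/git/repo.git" ;;
  *)      echo "unknown CLONE_SOURCE: $CLONE_SOURCE" >&2; exit 1 ;;
esac
echo "cloning from $REPO_URL"
# git clone "$REPO_URL" workdir
```

When GitHub is down, an operator flips `CLONE_SOURCE=lan` in the CI settings and pipelines keep running against the LAN mirror.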
Github folks--this is really getting bad. I find it strange that your leadership will spend weeks of time and pen hundreds of words about making right the wrongs they created with censorship (see: https://github.blog/2020-11-16-standing-up-for-developers-yo...), yet almost no attention is given to these major outages that have kept happening for a year now.
Where is the acknowledgment of a problem, root-cause analysis, and followup for new practices and engineering to prevent issues? Who is responsible for these issues and what are they doing to make it right? What positions are you hiring for _right now_ to get to work making your service reliable?
Again? Just 11 days ago [0], GitHub Actions had degraded service, and now it's the whole of GitHub. It's becoming a regular thing for them, and it really is disappointing.
I don't know how many times [0] I have to say this, but just get a self-hosted backup rather than 'going all in on GitHub' or 'centralising everything'.
I started building Multiverse because of problems like this. Ironically it’s hosted on GitHub. Check it out if you are interested in decentralized VCS and code hosting.
We use a cluster of self-hosted GitLab instances. Their update cadence has been on a roll, and their development process is much more transparent compared to GitHub, imo, because it's a lot easier to see how they comment and discuss when they have "all-remote" baked into the core of their workflow.
Believe it or not, we have higher service availability hosting GitLab ourselves than GitHub
I noticed that Radicle claims similar functionality to centralized code collaboration platforms like GitHub, but Radicle itself is being developed on GitHub.
I was in the middle of some last minute pre-weekend PR review, and midway I discover it can't actually submit any of my comments. Is there a way to review and save (intermediate) state offline?
keithba|5 years ago
For those who are interested, on the first Wednesday of each month, I write a blog post on our availability. Most recent one is here: https://github.blog/2021-03-03-github-availability-report-fe...
Uehreka|5 years ago
Thank you for not doing that.
geerlingguy|5 years ago
Often git operations were unaffected though.
cs-szazz|5 years ago
What do other folks use to avoid this situation? Have a Gitlab instance or similar that you can pull from instead for CI?
[0] https://news.ycombinator.com/item?id=26301750
fweespeech|5 years ago
Gitlab, mirrored repo basically.
Florin_Andrei|5 years ago
Don't use Microsoft?
[0] https://news.ycombinator.com/item?id=26301659
mfer|5 years ago
With that out of the way... GH has had a lot of issues in recent months. More than the past. I would hope those things are on a road to being fixed.
suspecthorse|5 years ago
https://github.com/multiverse-vcs/go-multiverse
justaguy88|5 years ago
Is there a pass-through proxy for git? Or a leader-follower arrangement that is nice, with a proxy server?
smallnamespace|5 years ago
You can mirror to one of the big cloud providers' repo hosting [1][2][3] and set up a cronjob to sync them, or some have built-in config to do the mirroring [4].
I used Google's mirroring option before. It was fine, but we never had to use it (local copies were sufficient when GH was slow one day).
[1] https://cloud.google.com/source-repositories
[2] https://aws.amazon.com/codecommit/
[3] https://azure.microsoft.com/en-us/services/devops/repos/
[4] https://cloud.google.com/source-repositories/docs/mirroring-...
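The cronjob approach mentioned above boils down to a `--mirror` clone plus a recurring fetch — here a local stand-in repo replaces the real GitHub URL so the commands are runnable, and the crontab line is only an example schedule:

```shell
# Keep a --mirror clone in sync so CI can fall back to it.
# In practice "$upstream" would be your GitHub URL, and cron would run
# the final command, e.g.:
#   */15 * * * * git -C /srv/mirrors/repo.git remote update --prune
work=$(mktemp -d)
upstream="$work/upstream" && git init -q "$upstream"
git -C "$upstream" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "initial"

git clone -q --mirror "$upstream" "$work/mirror.git"  # one-time setup
git -C "$work/mirror.git" remote update --prune       # the recurring sync
```

`--mirror` maps every upstream ref (branches, tags, notes) straight across and `--prune` drops refs deleted upstream, so the mirror stays a faithful clone-from-able copy.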
Comevius|5 years ago
They could use some dogfooding, and new website.
ProtoAES256|5 years ago
My heart can't handle another rollercoaster of unicorns for long...