Maybe it's because I just personally identify with the founders of github (i.e. entrepreneurial sw engineers), but I'm starting to get mad at whoever keeps doing this. Here's hoping that with all the smart people this is affecting, the people responsible will be tracked down and exposed.
From what I've seen it's usually a rational business or policy decision. And your adversaries are in, or acting through, states with a weak rule of law. "Exposing" these actors is impossible/useless until you address that.
Easier said than done. We've had DDOS issues in the past as well, and getting it resolved - even by throwing money at the problem - is nontrivial.
What amounts to throwing a massive amount of hardware at the problem (i.e., boxes that can handle 10-100+gbps of traffic, filter out the attacks, and pass only legit stuff down to your servers) is expensive[1], and causes all sorts of unexpected behavior: API clients mysteriously break, good traffic gets mistakenly dropped, latency is added to the whole process, etc. It gets even weirder on SSL-protected sites. And it's all dependent on attackers not getting the IP of your actual servers, which they could then just attack directly.
[1] Even for sites without a whole lot of traffic, you're talking a one-year contract easily in the range of an engineer's salary. I wouldn't be surprised if the cost to protect sites with as much traffic as Github exceeded $1m/year. Even if you have plenty of cash in the bank, that's one hell of a pill to swallow.
I'm sure that's what they are doing now. However, it takes time to set up new servers, as well as to write code (that's been thoroughly audited) to help protect their existing and future servers.
>Perhaps I'm misunderstanding: I thought one goal of DVCS was to remove central points of failure? In that sense, isn't a central "hub" regressive?
This meme is getting really, really tiresome. Github being down is NOT a central point of failure. Most people know that setting up your own git server is trivial, literally a 3-4 step process. We know that we don't lose our files, our history, our working tree, etc.
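For anyone counting those 3-4 steps, here's a minimal sketch (the hostname and paths are illustrative, not anyone's actual setup):

```shell
# 1. On any box you can SSH into, create a bare repository:
ssh user@myserver.example 'git init --bare ~/repos/myproject.git'

# 2. Back on your machine, add it as a remote:
git remote add myserver user@myserver.example:repos/myproject.git

# 3. Push to it, and you have a working git server:
git push myserver master
```

That's the whole thing — git over SSH needs no daemon, no config file, nothing but a bare repo on the other end.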
The "git" in Github is easily replaced. The "hub" part has its own value. The communication tools, the well-presented diffs, the inline-editing capability, issues, wiki, etc. That's the value people are gnashing their teeth over.
Git is distributed and there's no reason you should have to stop working, or committing, just because github is temporarily unavailable. At least for dependencies only on git. Losing access to wikis, pull requests, and issues may be a problem for some teams.
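As a sketch of what "keep committing" looks like in practice (file names and the commit message are illustrative): commits never touch the network, and `git bundle` can move them between teammates' machines with no central server at all.

```shell
# Commits are purely local; keep working as usual:
git add -A
git commit -m "keep working during the outage"

# To share without GitHub, pack your refs into a single file:
git bundle create outage.bundle --all

# Send outage.bundle over any channel (email, scp, USB stick);
# the recipient can fetch from it like a remote:
#   git fetch /path/to/outage.bundle master:outage-work
```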
Back in 2009, when this happened to Bitbucket, it was AFAIK due to their hosting one particular project (hurting Bitbucket was a side effect of hurting that project; some communities seem happy to resolve their disputes with DDoS attacks...).
It could just be some rogue deployment script running from EC2 that's a little more active than it should be. Imagine someone deploying their 1GB repo from GitHub to 100 small EC2 instances :)
LOIC is pretty easy to filter, it's about a 1 out of 10 on the difficulty scale. Either GitHub as a whole is technically incompetent, or they are getting hit with something built by big kids.
codinghorror | 13 years ago
Pretty please?
This is a service I pay for and my business relies on. Having it down three times in three days impacts our work.
bkanber | 13 years ago
"UGH GitHub down again, I guess I have to go work on something equally as important for upwards of an hour"
I call shenanigans on you, good sir.
peripetylabs | 13 years ago
I wonder if there's a way to host Git repositories with static files, say, on Amazon S3... That would be neat.
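That's actually possible via git's "dumb" HTTP transport, which needs nothing but static files. A rough sketch, assuming the bucket and repo names are made up (and pushes would still have to go through `aws s3 sync` or similar — this gives you read-only clones):

```shell
# Make a bare copy and generate the static metadata that "dumb"
# HTTP clients read (info/refs, objects/info/packs):
git clone --bare myproject /tmp/myproject.git
cd /tmp/myproject.git
git update-server-info

# Upload the whole directory as plain static files:
aws s3 sync /tmp/myproject.git s3://my-git-mirror/myproject.git

# Anyone can then clone read-only over plain HTTP:
git clone https://my-git-mirror.s3.amazonaws.com/myproject.git
```

The sample `post-update` hook that ships with git runs `git update-server-info` for exactly this reason, so the metadata stays fresh after every push to the bare copy.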
zalambar | 13 years ago
http://ozmm.org/posts/when_github_goes_down.html has a good summary of quick ways to keep using git without github.
zrail | 13 years ago
http://blog.spearce.org/2008/07/using-jgit-to-publish-on-ama...
rorrr | 13 years ago
Fighting DDoS attacks is not trivial, especially if you're against a sophisticated botnet, and your code has multiple slow parts.
arcatek | 13 years ago
No data will be compromised, but it will still be a pain in the .. head.