Very cool that they are able to change BGP advertisements from ChatOps, achieve convergence, and mitigate the attack in all of 4 minutes. That is some insane engineering.
I had a similar reaction. I had to double check the timestamps when I first read them. That this was all handled so fast is extremely impressive to me.
Meh, or just block UDP to your networks that have no reason to run UDP. Every carrier will do upstream ACLs these days. Five years ago that wasn't the case; these days they all do. Some free, some charge.
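The upstream ACL being described can be sketched as a simple match-and-drop rule: refuse UDP destined for prefixes that have no reason to receive UDP at all. This is only a toy illustration — the prefix and the function name are hypothetical, and a real carrier would express this in router ACL syntax, not Python.

```python
from ipaddress import ip_address, ip_network

# Customer prefixes that have no reason to receive any UDP (hypothetical).
NO_UDP_PREFIXES = [ip_network("192.0.2.0/24")]

def permit_ingress(proto: str, dst_ip: str) -> bool:
    """Upstream ACL: drop UDP aimed at networks that never run UDP services."""
    if proto == "udp" and any(ip_address(dst_ip) in p for p in NO_UDP_PREFIXES):
        return False
    return True

# A UDP reflection flood aimed at the protected prefix is dropped upstream,
# while ordinary TCP traffic to the same host still passes.
assert permit_ingress("udp", "192.0.2.10") is False
assert permit_ingress("tcp", "192.0.2.10") is True
```

The point of doing this upstream (at the carrier) rather than at the target is that the flood never reaches the victim's pipe at all.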
re: chat ops vs a web page. It's just a single BGP advertisement -- big whoop. Chatops is just hipster famous right now.
Am I old-fashioned to raise an eyebrow when I discover that Memcached servers are running visible to the public Internet? This strikes me as approximately as bizarre as having a database server that accepts connections from the public Internet.
In my day, such back-end services were either simply not connected to the Internet (connected via a private network to the application services), firewalled, or at the very least, configured to listen for and respond exclusively to connections from known front-end or application services.
Is this sort of deployment architecture falling out of favor? My casual observation is that cloud architectures—at least the ones I've seen employed by small organizations—are more comfortable than I am with services running with public IPs. What is going on? Am I misunderstanding this in some way?
No, it's not out of favor. There are a lot of unqualified people out there pushing buttons on cloud providers dashboards and not caring about security (or not even understanding that it's an issue) though.
When it's easier to just open up a server to the wide world than it is to learn how to connect safely, you'll always get a lot of people doing it.
It's simpler to just click services on AWS and get a public IP to connect to. Drop-policy firewalls like AWS security groups are hard to configure and debug. Managing network interfaces and binding to specific interfaces rather than others is hard and causes hanging connections.
Those are the excuses I dealt with when I took over the current IT department. By now, only haproxy accepts public connections. Everything else is firewalled to the office at most.
This is the entire Internet we're talking about, of course there will be a few misconfigured servers. It's more surprising that there are only a thousand.
> firewalled, or at the very least, configured to listen for and respond exclusively to connections from known front-end or application services.
Combine this with staying on top of vulnerabilities, and that is really all you can hope for from a host standpoint. What is changing is that the days of perimeter defense are over. The Zero Trust model is really the best path forward, and the only way to implement security in relation to the IoT.[1][2]
[1] https://www.youtube.com/watch?v=k80jOH2H10U
[2] https://www.safaribooksonline.com/library/view/zero-trust-ne...
This is a great example of why it's important to pick secure defaults when writing software, especially software that is often deployed on high bandwidth servers or cloud instances. If no listening interfaces are specified then the default should be to exit with an error, not listen on everything!
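A minimal sketch of that secure default, in Python (the daemon and its names are hypothetical, not memcached's actual code): if the operator doesn't explicitly name the addresses to bind, the server refuses to start rather than silently listening on every interface.

```python
import socket
import sys

def start_server(bind_addrs, port=11211):
    """Bind only to explicitly listed addresses; refuse to guess."""
    if not bind_addrs:
        # Secure default: no listen address configured -> fail loudly,
        # instead of falling back to 0.0.0.0 (all interfaces).
        sys.exit("error: no listen address configured; refusing to bind 0.0.0.0")
    socks = []
    for addr in bind_addrs:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind((addr, port))
        socks.append(s)
    return socks

# Wide-open listening now requires a deliberate, reviewable choice:
# start_server(["0.0.0.0"])
socks = start_server(["127.0.0.1"], port=0)  # port 0: let the OS pick a free port
```

With this default, the thousands of accidentally public memcached boxes would instead have been thousands of daemons that failed to start until someone read the config.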
I also wonder if you can store something in a memcached cache that looks like a valid request, then reflect that with the source IP of another memcached server and let them burn each other out...
Is there any legitimate reason to spoof a source IP? I don't think there is, so why don't ISPs block any traffic with a source IP that isn't in their network? And then the rest of us could block any ISPs that don't do that.
1. It may be difficult/expensive to arrange for the correct set of source subnets to be available at the points where filtering needs to be done. Motivation to perform egress filtering fails to overcome this cost threshold.
2. Fear that some customers are actually (probably without realizing) relying on alien source address traffic being routed. Therefore filtering that traffic would result in unhappy customers and support workload.
In our network over the years I've come across several instances where it turned out we were (erroneously) relying on one of our upstream providers routing traffic with source IP from another provider's network. Since policy-based source IP selection on outbound traffic is quite tricky to setup and get right, I can imagine that ISPs would take the easy way out and just pass the traffic.
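The filtering under discussion is source-address validation in the BCP 38 style: an edge router forwards a customer's packet only if its source address falls inside a prefix actually assigned to that customer. A toy sketch of the check (prefixes hypothetical):

```python
from ipaddress import ip_address, ip_network

# Prefixes actually assigned to this customer port (hypothetical).
ASSIGNED_PREFIXES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/25")]

def permit_egress(src_ip: str) -> bool:
    """BCP 38 check: forward only packets whose source address is ours."""
    src = ip_address(src_ip)
    return any(src in prefix for prefix in ASSIGNED_PREFIXES)

assert permit_egress("203.0.113.7") is True   # legitimate customer traffic
assert permit_egress("192.0.2.55") is False   # spoofed source, dropped at the edge
```

The two objections above map directly onto this sketch: point 1 is the cost of keeping `ASSIGNED_PREFIXES` correct at every filtering point, and point 2 is the fear that some customer traffic legitimately fails the check.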
Spoofing is in the eye of the beholder. A router first and foremost routes packets toward the right destination; there is no such thing as a "spoofed source IP" without context. Policy about what traffic is allowed to come from what pipe is always error prone and adds complexity.
Now that we are moving away from net neutrality, can we not get ISPs to do DDoS protection, so that we don't need specialised services like Cloudflare layered on top of simple sites?
Lol, Cloudflare is what's breaking the web, if you need a stupidly complicated JavaScript engine enabled, and accessible to a webpage you don't trust (and can't trust), just to be able to access said webpage.
Because of how it's done, you can't first check whether the page hidden behind Cloudflare is something you'd want to enable JavaScript for: Cloudflare will not let you see the HTML of the page without enabling JavaScript for it first.
That is broken.
According to [0] that is around 1/400th of total internet traffic per second. This begs the question: who has that kind of botnet at their disposal and why are they targeting Github?
Edit: The attacker didn't need nearly that kind of bandwidth to execute this attack. See [1]
[0] http://www.internetlivestats.com/one-second/#traffic-band
[1] https://news.ycombinator.com/item?id=16493497
From what I understand, the attack originates from publicly exposed memcached servers configured to support UDP and that have no authentication requirements:
- put a large object in a key
- construct a memcached "get" request for that key
- forge the source IP address of the UDP request to be that of the target/victim server
- memcached sends the large object to the target/victim
Multiply times thousands of exposed memcached servers.
That about right?
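The amplification in those steps comes from the size asymmetry: the memcached UDP frame plus a `get` request is a few dozen bytes, while the response can be a pre-stored object of ~1 MB. A rough sketch of the arithmetic (the value size is assumed for illustration, not measured from the attack):

```python
# memcached's UDP protocol prepends an 8-byte frame header
# (request ID, sequence number, datagram count, reserved)
# to the ASCII command.
key = "a"
request = b"\x00\x00\x00\x00\x00\x01\x00\x00" + f"get {key}\r\n".encode()

stored_value_bytes = 1_000_000       # attacker pre-stores a ~1 MB object
response_bytes = stored_value_bytes  # ignoring the small protocol framing

amplification = response_bytes / len(request)
print(f"{len(request)}-byte request -> ~{amplification:,.0f}x amplification")
```

So a modest botnet sending tiny spoofed requests gets the reflectors to do almost all of the actual sending, which is why the attacker didn't need anything close to the observed attack bandwidth.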
Yes and there are a lot of attacks of very, very large sizes going on. Over the last few days we've mitigated some huge attacks. Luckily, everyone is working together to rate limit and clean up this problem.
Side observation: kudos to Sam Kottler for level-headed acknowledgement of the business impact of an incident like this to Github’s clientele, and appearing to own it. Well done, sir.
These attacks are often described as denial of service attacks, but I wonder if many of them aren't employed as cover for an intrusion attempt. Is it possible that intrusive traffic could be mixed in with such an attack?
A DoS attack is, by literal definition, an attempt to overwhelm a host until it is forced to _deny service_ to valid user requests. Are there intrusion techniques that both bring down the server and break into it at the same time? I'm not a security expert, but that doesn't seem like it makes a whole lot of sense to me.
What does an incident like this cost GitHub in terms of the extra capacity added? I guess the potential loss of business is far higher, but I'm still very curious about the magnitude.
consumer451|8 years ago
But when I read that he had found a public-facing Jenkins server owned by Google[0], I figured I must be missing something.
I run a two-man shop, but I still keep things like Jenkins behind OpenVPN. Why would anyone leave Jenkins open? There must be a reason, right?
[0] https://emtunc.org/blog/01/2018/research-misconfigured-jenki...
techman9|8 years ago
A quick Shodan search[1] shows something like 90k publicly accessible memcached boxes. Misconfiguration of firewalls is a serious problem.
[1] https://www.shodan.io/search?query=11211
arkadiyt|8 years ago
https://github.com/memcached/memcached/commit/dbb7a8af90054b...
lima|8 years ago
It's still possible to restrict it, but simple RPF checks don't always cut it.
always_good|8 years ago
How many times are we going to see the HN comment that says "lol why do so many people use Cloudflare? I don't need it for my blog!"
Naive decentralization (naive trust) doesn't work.