Somewhat ironically, the large French hosting provider OVH was one of the largest sources of our attack and also a victim of a large scale NTP amplification attack around the same time.
And their own semi-official ntp server supports monlist with a hefty response
I have a server hosted with OVH, and they actually sent me a message a week or so ago advising me that my server was running a vulnerable version of NTP so that I could update it. I think they were even going to update it for me, but I went ahead and updated it myself anyway.
This was at least a week before the news of the big DDoS attack this week, so I'm surprised their own servers still had the vulnerable config/versions.
I have a server with OVH, but frankly I'm considering moving elsewhere after we've now been repeatedly hit by DoS attacks from servers at OVH. It's a fairly low-grade, primitive SYN flood that we easily knock back; within minutes each time, the attacker moves elsewhere. (Clearly he does not have access to many server resources, or he might actually have managed to muster enough simultaneous resources to do some damage; right this minute he's wasting resources on a SYN flood from some no-name Russian hosting provider, dropped by our firewall at a low enough rate that I can keep an eye on it live with tcpdump.)
But while our colo provider was extremely responsive and started calling OVH and the other providers right away, and I also emailed evidence to OVH repeatedly, we were met with total silence. The other providers involved reacted quickly. OVH let the servers continue to hammer us for days.
I'm seriously considering just dropping all their net blocks in our firewalls. We have next to no legitimate traffic originating there anyway.
Great write-up and very helpful for those of us who, despite doing so for years, remain amateurs at running our own servers. I am among those who think the Internet would be better as a whole if more people did in fact run servers—server software would gradually become easier for us amateurs to install and run without leaving it in a state that is open to nefarious exploits. But for the time being, I appreciate it when experts take the time to explain simple counter-measures as you have done. Thank you!
As far as I am aware, I am not responsible for any Internet-facing NTP servers (I certainly never set one up willingly), but it's good to have this in the back of my mind now in the off-chance that I ever do set one up.
I did have one of my Windows machines used for DNS amplification. I wrote about the incident [1] at my blog because I had been a bit surprised that it was not sufficient to simply disable recursion. That much had seemed like common sense, and I thought I had been so clever and thorough in turning it off. But later I found attackers were leveraging my server's willingness to provide a list of root DNS servers in response, even with recursion disabled. I ended up deleting the list of root servers and the problem went away. (Though, to be clear, I never ran the incident by any DNS experts, so I may have misdiagnosed the whole thing.)
I don't know what else I don't know about amplification attacks, so reports such as yours are helpful for people like myself who find it fun to run our own servers, but don't consider it an area of expertise.

[1] http://tiamat.tsotech.com/dns-amplification
We have some racks with public-facing ILOM interfaces which sit outside the firewall, which it turns out have ntpd running. We only noticed when our international bandwidth crawled to a halt due to them being used in an NTP attack.
It's a hassle, as they're old machines and out of support contract (so we can't upgrade the firmware), and so far as I can tell there's no way to turn off public access to ntpd over the admin interfaces. We're stuck with having to go to the hosting company and change the cabling to route them through the firewall.
Just because you didn't set up ntpd doesn't mean you don't have it running (somewhere).
As far as I can tell these attacks always rely on amplification using IP Spoofing. I take it there's no way of mitigating that in a lower layer without adding some leaky abstraction or general overhead to the network? So, for example, (speaking as someone who knows nothing about these things) you could add some sort of handshake along the lines of:
ntp server sees request from 1.1.1.1 (spoofed by attacker)
ntp server goes to 1.1.1.1 to check that they really sent the request (sort of ack type thing)
1.1.1.1 comes back to say that it's an uninitiated request
ntp server discards similar future requests for some time
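A toy sketch of that verify-before-answering flow (hypothetical; this is not anything real ntpd does for monlist, and the in-memory blocklist is illustrative):

```python
# Toy model of the handshake proposed above: before answering, the
# server probes the claimed source address and temporarily ignores
# addresses that disavow the request. Hypothetical, in-memory only.
import time

class VerifyingServer:
    def __init__(self, block_seconds=60):
        self.block_until = {}          # claimed addr -> time to unblock
        self.block_seconds = block_seconds

    def handle_request(self, claimed_addr, probe):
        """probe(addr) -> True iff addr confirms it sent the request."""
        now = time.monotonic()
        if self.block_until.get(claimed_addr, 0) > now:
            return None                        # recently disavowed: drop
        if probe(claimed_addr):
            return "response"                  # genuine request: answer
        # The claimed source says "I never asked": remember and discard.
        self.block_until[claimed_addr] = now + self.block_seconds
        return None

server = VerifyingServer()
# Attacker spoofs 1.1.1.1; the real 1.1.1.1 disavows the request.
assert server.handle_request("1.1.1.1", lambda a: False) is None
# Similar future requests are dropped without even probing.
assert server.handle_request("1.1.1.1", lambda a: True) is None
# A genuine client confirms the probe and gets served.
assert server.handle_request("2.2.2.2", lambda a: True) == "response"
```

In practice this costs an extra round trip per new client, which is essentially what a TCP handshake already provides.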
Obviously that would require more toing and froing, along with more white / black list tracking etc. Then again, can't all machines have sensible defaults in their firewalls to stop them from participating in such attacks?
Is this not an issue for TCP?
EDIT: I'm assuming it's because UDP doesn't do any checking / acknowledge stuff by default?
Yes. And that's exactly what's been implemented in modern versions of ntpd (> 4.2.7p26, 2010/04/24... yes, 2010).
The problem is:
1) No one has upgraded NTPD (and often can't, for embedded devices like IPMI controllers)
2) This can be fixed by basic configuration in older NTPD versions, but up until recently many linux distributions were shipping vulnerable configs.
This particular command (monlist) is a management query, it's in no way related to serving up accurate time.
UDP is a message. It's the same as sending a letter with the sender's address on. You can lie about your own address, but if you want the other person to send you a reply it's better to write the correct information on.
TCP establishes a two way connection (SYN, SYN-ACK, ACK), so you can send the original SYN but the SYN-ACK will go to someone else and be discarded. UDP is fire and forget, in contrast.
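That three-way exchange can be modeled in a few lines (a toy state machine, not a real TCP stack):

```python
# Toy model of why the three-way handshake resists spoofing: the
# SYN-ACK is addressed to the *claimed* source, so a spoofer never
# sees it and cannot produce the matching final ACK.
class TcpServer:
    def __init__(self):
        self.pending = {}          # claimed addr -> expected ack number
        self.established = set()

    def on_syn(self, claimed_addr, seq):
        self.pending[claimed_addr] = seq + 1
        return ("SYN-ACK", claimed_addr, seq + 1)  # routed to claimed_addr

    def on_ack(self, addr, ack):
        if self.pending.pop(addr, None) == ack:
            self.established.add(addr)

server = TcpServer()
# Spoofed SYN claiming to be 1.1.1.1: the SYN-ACK goes to 1.1.1.1,
# which never asked, so no valid ACK ever arrives.
server.on_syn("1.1.1.1", seq=1000)
assert "1.1.1.1" not in server.established
# A real client at 2.2.2.2 sees its SYN-ACK and completes the handshake.
_, _, ack = server.on_syn("2.2.2.2", seq=42)
server.on_ack("2.2.2.2", ack)
assert "2.2.2.2" in server.established
```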
There is a proposal from 2000 that is mentioned in the article (http://tools.ietf.org/html/rfc2827) that recommends that source networks filter out originating traffic that isn't legitimate. It is being implemented slowly.
FWIW, if you install the ntp package and do ntpdc -n -c monlist localhost you'll get a response but I haven't checked if it's configured by default to reject non-LAN requests.
FWIW, here's what I got on an Ubuntu 12.04.3 server running on my LAN. It looks like we should be fine with the defaults on Ubuntu at least. (Obviously always a good idea to use ufw/iptables to block everything you don't need exposed so you don't have to worry about stuff like this).
Before and after installing ntp I ran the same query from another host on my LAN, and from the server itself [outputs elided].

Being able to do this via localhost is not a problem; it's when it's open to the internet.
> It just gives up that data to anyone that asks? Seems like a huge privacy issue. Imagine Apache or Nginx giving up the last 600 IPs it served and maybe the URLs they went to. (Edit: there is always the occasionally open Apache /server-status handler that leaks this type of data.)
I could do an nmap scan of the public internet and probably get a similar number of addresses. An IP is about as "private information" as a phone number nowadays (you know, those things that get sent out en masse in yellow and white books for public consumption with real-life names next to them).

So you synced your time with a server. Why would anyone else care about that? Why does it matter if someone else knows you're syncing time? This is a very different service than a web server.
Some historical context on NTP monlist: it's an old debugging interface from the days of the Friendly Internet, when services were more open and people were much less worried about this kind of security. NTP daemons give up a whole lot of information if you ask them; see also the "peers" and "sysinfo" commands, for instance.
Back in 1999 I used these monitoring commands to spider the NTP network, surveying some 175,000 hosts from a desktop workstation. Lots of fun! This kind of survey is much harder to do now because so many systems are locked down. http://alumni.media.mit.edu/~nelson/research/ntp-survey99/
This is off-topic, but this post was deleted some time ago and now it's back; it was submitted by jgrahamc then as well.
What happened to it? Did the algorithm snip it, and did jgrahamc undelete it somehow, or a mod? Just curious about the way those things work, not complaining.
Is the response to MONLIST also sent as UDP? If so, why does CloudFlare even accept those packets to IP addresses used for web hosting? Shouldn't all legitimate traffic be TCP on ports 80 and 443?
The packets have to actually reach you in order for you to filter them out. If you have a 300Gbps incoming pipe, and you're getting 300Gbps of attack traffic, then there isn't any space left in the pipe for your legitimate traffic. It doesn't matter that your router is throwing away the packets as soon as it receives them.
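The arithmetic is unforgiving (illustrative numbers, using the 300Gbps figure above):

```python
# Illustrative numbers: once the attack alone fills the inbound pipe,
# dropping packets at your own router can't recover the lost goodput.
pipe_gbps = 300.0      # capacity of the incoming link
attack_gbps = 300.0    # reflected NTP traffic arriving at the edge
legit_gbps = 5.0       # traffic you actually want

arriving_attack = min(attack_gbps, pipe_gbps)
room_for_legit = max(0.0, pipe_gbps - arriving_attack)
assert room_for_legit == 0.0   # filtering after arrival is too late
```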
Also, web servers might want to consult NTP servers now and again.
UDP allows source and destination ports to be specified separately [1], so the attacker could just spoof the source port by setting it to 80 or 443. Unless the NTP server was specifically configured to not reply to well-known port numbers [2], this would result in the reply going to the spoofed port.

[1] http://en.wikipedia.org/wiki/User_Datagram_Protocol#Packet_s...

[2] Not sure if this is even possible
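This is easy to see from the packet format: the UDP header is just four 16-bit fields the sender fills in, and nothing in the protocol verifies the source port (or, one layer down, the source IP). A minimal sketch:

```python
# The UDP header is four 16-bit fields; the sender writes them all,
# and nothing in the protocol verifies that the source port is honest.
import struct

def udp_header(src_port, dst_port, payload_len):
    length = 8 + payload_len      # header (8 bytes) + payload
    checksum = 0                  # checksum is optional over IPv4
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

# A query to NTP (123) claiming to come from a web server's port 80:
hdr = udp_header(src_port=80, dst_port=123, payload_len=8)
assert struct.unpack("!HHHH", hdr)[:2] == (80, 123)
```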
The problem is you have UDP packets on port 123 coming from all over the world hammering at your door. They're politely getting dropped by firewalls, but they consume all the bandwidth into the edge of your network, so the legitimate traffic can't get through.
You can check whether there are open NTP servers that support the MONLIST command running on your network by visiting the Open NTP Project[0]. Even if you don't think you're running an NTP server, you should check your network because you may be running one inadvertently.

You can also query a server directly:

ntpdc -c monlist 1.2.3.4

For more info see my blog post (it is related to VMware ESXi but instructions are useful for any ntpd): http://ar0.me/blog/en/posts/2014/01/howto-prevent-malicious-...
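For a programmatic check, a hedged sketch (the assumption here is the classic mode-7 MON_GETLIST_1 request bytes, 0x17 0x00 0x03 0x2a, that common scanners send; only probe machines you administer):

```python
# Probe a host for an open monlist responder. Any reply at all means
# the server answers monlist and could be abused as an amplifier.
import socket

def monlist_query():
    # byte 0: NTP version 2, mode 7 (private/implementation-specific)
    # byte 2: implementation XNTPD; byte 3: request code 42 (monlist)
    return b"\x17\x00\x03\x2a" + b"\x00" * 4

def responds_to_monlist(host, timeout=2.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(monlist_query(), (host, 123))
        data, _ = s.recvfrom(4096)
        return len(data) > 0       # got a reply: monlist is open
    except OSError:                # timeout/unreachable: nothing leaked
        return False
    finally:
        s.close()
```

Silence (a timeout) is what you want to see from your own hosts.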
I'm not much of a network guy, but is it possible for Cloudflare to just redirect that DDOS traffic back to the NTP server that sent it?
This would have two benefits. Firstly, the owner of the insecure NTP server is going to get a nasty message to fix their damn server, and secondly, the insecure NTP server gets taken out of the attack and becomes useless to the attacker.

As a visual reference, it would be a bit like in Star Wars when Mace Windu fights Palpatine on Coruscant [1]: http://www.youtube.com/watch?v=Pk4AiCnMqpg#t=2m35s

Eventually these server providers who have left their servers wide open will get the message when their NTP servers no longer respond?

1. http://starwars.wikia.com/wiki/Showdown_on_Coruscant
Two reasons: it's a bad idea and it wouldn't help very much. Each individual NTP server is only generating a modest amount of traffic, and such a response might well go unnoticed by the NTP server. Also, it would mean we'd have to generate 400Gbps back to the NTP servers, creating an enormous amount of traffic.
This is probably a stupid question, but why can't tier 1 providers (of whom I suppose there are relatively few, and who I would expect to incorporate best practices) just decide to kill any NTP monlist UDP that ever crosses any of their NPUs?
Why would that not solve a large part of the problem?
I don't work for a Tier 1 ISP but I do work for an ISP.
As a customer, I don't want my ISP screwing with my traffic. As a provider, I don't want any customers complaining because we screw with their traffic.
To block monlist and only monlist queries, we'd have to be looking into the layer 7 payload of IP traffic. I'd rather not do that.
The brute-force method would be to block traffic to/from 123/UDP but that's gonna mess up a lot of stuff (including my own).
Cost. At typical backbone speeds there are problems enough dealing with basic routing at sufficient throughput already. "Nobody" at that level wants to also have to pattern match packets.
Cisco, Juniper etc. who manufacture high end routers would certainly love it.
A better alternative is to stop or limit source ip spoofing, because you can filter it on the interfaces connecting smaller providers and customers rather than the most resource-constrained routes to other backbone providers. And that's slowly happening (I'm saying, while looking at tcpdump output from a SYN attack that might very well use spoofed IPs). It's simpler because you "only" need a single lookup against a few bytes per packet per interface instead of potentially having a long list of patterns to check against the whole packet.
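That single per-interface lookup can be sketched as follows (interface names and prefixes are made-up documentation ranges):

```python
# Sketch of BCP38-style ingress filtering at a customer-facing port:
# drop any packet whose source address isn't within the prefixes
# assigned to that interface.
import ipaddress

ASSIGNED = {
    "cust-eth0": [ipaddress.ip_network("203.0.113.0/24")],
    "cust-eth1": [ipaddress.ip_network("198.51.100.0/25")],
}

def permit(iface, src_ip):
    """Allow a packet only if its source fits the interface's prefixes."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ASSIGNED.get(iface, []))

assert permit("cust-eth0", "203.0.113.7")        # legitimate source
assert not permit("cust-eth0", "198.51.100.9")   # spoofed: wrong network
```

A packet dropped here never reaches the backbone, which is why the filtering is cheap at the edge and prohibitive in the core.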
> and who I would expect to incorporate best practices
Don't bet on it. They will when it makes a big difference to them. But for many of these types of attack you'll see purely reactive responses, because being reactive is often far cheaper (for them) than the cost of routing hardware that can do enough processing per packet to be viable.
I'm not sure at what layer it would be visible that the packets are NTP monlists. The basic answer is probably that the providers just aren't filtering at that level: if it needs layer-7 visibility, they don't want to do DPI on every packet that crosses their boundary. Even without L7 visibility, it would still require another element of processing and filtering that they may not be interested in, since it will almost always add cost.
Many people have said it and I'll emphasize: To the extent possible, ISPs should not be doing layer 7/deep packet inspection. They carry traffic. They shouldn't filter. Not only is it unethical, but it's impractical for providers.
Stepping in to help mitigate DDoS attacks such as this one, e.g. by having the Tier 1s drop traffic destined for the victim at their edges, might be OK.
However, Tier 1 providers should not in any way police the internet.
So again we find the problem is protocols that are designed for convenience and not security. Sure, network providers could filter out bogus routes, but that's a band-aid more than a fix; the protocol is still broken from a security perspective. Nobody would stand for using rsh with host-based authentication in today's age, but for other protocols it's fine? And public services for the internet at large are great, until they become tools for the public to abuse other people. These protocols either need to be fixed to prevent abuse, or switch to using tcp (which nobody wants - so fix the protocols!)
All NTP seriousness aside, this new "record-breaking" DDoS attack was only possible because CloudFlare -- after the Spamhaus attack -- upgraded and expanded their network endpoints all over the world. When the next attack hits and they have again upgraded their connections with 100Gb/s combined, they'll be able to say that there was again a new record, this time it was 500Gb/s.
UDP? [✓]
Amplification? [✓]
Spoofable? [?]

QUIC datagrams should be as spoofable as anything else using UDP.
The _QUIC Crypto_ design doc contains a section that covers spoofing [1], and seems to push responsibility for DDoS mitigation to the server implementation:
"[...] servers may decide to relax source address restrictions dynamically. One can imagine a server that tracks the number of requests coming from different IP addresses and only demands source-address tokens when the count of “unrequited” connections exceeds a limit globally, or for a certain IP range. This may well be effective but it’s unclear whether this is globally stable. If a large number of QUIC servers implemented this strategy then a substantial mirror DDoS attack may be split across them such that the attack threshold wasn’t reached by any one server."

[1] https://docs.google.com/document/d/1g5nIXAIkN_Y-7XJW5K45IblH...
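A rough sketch of the thresholding strategy that excerpt describes (all names are illustrative, not QUIC APIs):

```python
# Demand a source-address token only once "unrequited" (never-confirmed)
# connections exceed a limit, per the strategy quoted above.
class TokenPolicy:
    def __init__(self, limit=100):
        self.limit = limit
        self.unrequited = 0     # connections never confirmed by a peer

    def on_new_connection(self):
        self.unrequited += 1

    def on_confirmed(self):
        self.unrequited = max(0, self.unrequited - 1)

    def require_token(self):
        # Below the limit: be lenient. Above it: insist the client
        # echoes an address-validation token before getting replies.
        return self.unrequited > self.limit

policy = TokenPolicy(limit=2)
for _ in range(3):
    policy.on_new_connection()
assert policy.require_token()      # over the limit: demand tokens
policy.on_confirmed()
assert not policy.require_token()  # confirmed peers relax the policy
```

As the excerpt notes, a per-server threshold like this can be sidestepped if an attack is spread thinly across many servers so that no single counter crosses its limit.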