Ask HN: How do you handle DDoS attacks?
For context, while exploring the load testing tool Siege running on a VPS, I was able to bring down multiple sites running on shared hosting, and some running on small VPSes, by setting a high enough number of concurrent users. This is not a DDoS, but it goes to show how easy it is to cause damage. Note: I only brought down sites that I own, or those of friends with their permission.
What tools are useful in fighting DDoS attacks and script kiddies? Mention free and paid options.
What are the options to limit damage in case of an attack? How do you limit bandwidth usage charges?
There was a previous discussion on this topic 6 years ago https://news.ycombinator.com/item?id=1986728
buro9|9 years ago
The simple advice for layer 7 (application) attacks:
1. Design your web app to be incredibly cacheable
2. Use your CDN to cache everything
3. When under attack seek to identify the site (if you host more than one) and page that is being attacked. Force cache it via your CDN of choice.
4. If you cannot cache the page then move it.
5. If you cannot cache or move it, then have your CDN/security layer of choice issue a captcha challenge or similar.
The simple advice for layer 3 (network) attacks:
1. Rely on your security layer of choice; if it's not working, change vendors.
On the L3 stuff, when it comes to DNS I've had some bad experiences (Linode, oh they suffered) some pretty good experiences (DNS Made Easy) and some great experiences (CloudFlare).
On the L7 stuff, there are a few things no one tells you about... like if your application is backed by AWS S3 and serves static files, the attack can be on your purse, as the bandwidth costs can really add up.
It's definitely worth thinking of how to push all costs outside of your little realm. A Varnish cache or Nginx reverse proxy with file system cache can make all the difference by saving your bandwidth costs and app servers.
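The Nginx reverse proxy with filesystem cache mentioned above might look like this (a minimal sketch; the paths, zone name, upstream address, and TTLs are illustrative, not a recommended production config):

```nginx
# Cache responses from the app server on disk so repeated (or flooded)
# requests never reach the backend or incur origin bandwidth costs.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m
                 max_size=10g inactive=60m use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;              # the app server being shielded
        proxy_cache appcache;
        proxy_cache_valid 200 301 10m;                 # cache successful responses
        proxy_cache_use_stale error timeout updating;  # serve stale copies under load
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```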
I personally put CloudFlare in front of my service, but even then I use Varnish as a reverse proxy cache within my little setup to ensure that the application underneath it is really well cached. I only have about 90GB of static files in S3, and about 60GB of that is in my Varnish cache, which means when some of the more interesting attacks are based on resource exhaustion (and the resource is my pocket), they fail because they're probably just filling caches and not actually hurting.
The places you should be ready to add captchas as they really are uncacheable:
* Login pages
* Shopping Cart Checkout pages
* Search result pages
Ah, there's so much one can do, but generally... designing to be highly cacheable and then using a provider who routinely handles big attacks is the way to go.
Kenji|9 years ago
jaypaulynice|9 years ago
partycoder|9 years ago
1. Get from cache
2. Determine if cached value is valid
3. Query data store
4. Put data store value in cache
5. Return data
Instead of just getting it directly. In order to be able to cache you need to think about good cache invalidation. And client side caching won't work against malicious users.
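The read-through steps above could be sketched as follows (a minimal illustration with an in-memory dict standing in for the real cache and data store; the names and TTL are made up):

```python
import time

class ReadThroughCache:
    """Read-through cache: check the cache, validate, fall back to the store."""

    def __init__(self, store, ttl=60):
        self.store = store   # backing data store (a plain dict here)
        self.ttl = ttl       # seconds a cached value stays valid
        self._cache = {}     # key -> (value, expires_at)

    def get(self, key):
        # 1. Get from cache
        entry = self._cache.get(key)
        # 2. Determine if cached value is valid
        if entry and entry[1] > time.time():
            return entry[0]
        # 3. Query data store
        value = self.store[key]
        # 4. Put data store value in cache (expiry gives simple invalidation)
        self._cache[key] = (value, time.time() + self.ttl)
        # 5. Return data
        return value

store = {"page": "<html>hello</html>"}
cache = ReadThroughCache(store)
print(cache.get("page"))  # first call hits the store
print(cache.get("page"))  # second call is served from cache
```

The TTL doubles as the invalidation policy here; real systems usually invalidate explicitly on writes as well.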
wesleytodd|9 years ago
We have many layers of protection:
* We run iptables and an api we wrote on our ingest servers. We run fail2ban on a separate set of servers. When fail2ban sees something, we have it hit the api and add the iptables rules. This offloads fail2ban's cpu cost from our ingest servers.
* We block groups of known hosting company IP blocks, like digital ocean and linode. These were common sources of attacks.
* Our services all have rate limits which we throttle based on IP
* We have monitoring and auto-scaling which responds pretty quickly when needed, with service-level granularity.
* Recently moved behind cloudflare because google cloud did not protect us from attacks like the UDP floods which didn't even reach our servers.
EDIT: formatting
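The per-IP rate limiting mentioned above is often implemented as a token bucket. A minimal sketch (the rates, burst size, and in-process dict are illustrative assumptions, not this commenter's actual setup):

```python
import time
from collections import defaultdict

class IpRateLimiter:
    """Token-bucket rate limiter keyed by client IP."""

    def __init__(self, rate=5.0, burst=10.0):
        self.rate = rate    # tokens refilled per second
        self.burst = burst  # bucket capacity (max burst of requests)
        # ip -> (tokens_remaining, last_refill_timestamp)
        self.buckets = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, ip):
        """Return True if this request is allowed, False if it should be throttled."""
        tokens, last = self.buckets[ip]
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.buckets[ip] = (tokens, now)
            return False
        self.buckets[ip] = (tokens - 1.0, now)
        return True
```

In a multi-server deployment the bucket state would need to live in shared storage (or be approximated per-node) rather than a local dict.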
wesleytodd|9 years ago
If the attackers are persistent, there is really no way to guarantee zero downtime. THEY WILL FIND A WAY. Just make sure your stakeholders know you are doing everything in your power to resolve the issues, and then actually do those things.
An anecdote:
We had been seeing DDoS attacks for a few weeks, so we had most everything locked down and working. But then suddenly one of the most important parts of our site started going down under load: a real-time chat system. We looked at which chat room had the load, and it was one that did not require a user to be registered. We switched the room into registered-users-only mode and thought we had solved it.
About 5 minutes later the attack came back, with all registered users. We were amazed, because there is no way the attackers could have registered that many accounts in 5 minutes, given our rate limiting on registration. Turns out they had spent the past week or so registering users in case they needed them :)
ryanlol|9 years ago
For example:
curl https://104.154.116.193 -H 'Host: www.stream.me' -v -k
anotherdpk|9 years ago
kev009|9 years ago
Firstly, we are built to endure any DDoS the internet has yet seen on our peering, backbone, and edge servers for CDN services. This is quite important when you are tasked with running a large percentage of the interweb but probably not practical for most organizations, mostly due to talent rather than cost (you need people that actually understand networking and systems at the implementation level, not the modern epithet of full stack developer).
But, it is critical to have enough inbound peering/transit to eat the DDoS if you want to mitigate it -- CDNs with a real first party network are well suited for this due to peering ratios.
Secondly, when you participate in internet routing decisions through BGP, you begin to have options for curtailing attacks. The most basic reaction would be manually null routing IPs for DoS, but that obviously doesn't scale to DDoS. So we have scrubbers that passively look for collective attack patterns hanging on the side of our core, and act upon that. Attack profiles and defense are confirmed by a human in our 24/7 operations center, because a false positive would be worse than a false negative.
Using BGP, we can also become responsible for other companies' IP space and tunnel a cleaned feed back to them, so the mitigation can complement or be used in lieu of first party CDN service.
In summary, the options are pretty limited: 1) offload the task to some kind of service provider, 2) use a network provider with scrubbing, or 3) hire a team to build this yourself because you are major internet infrastructure.
rmdoss|9 years ago
-DDoS you can handle (small ones): anything up to 1-2 Gbps or ~1M packets per second.
-DDoS you cannot handle: anything higher than that.
For the smaller DDoS attacks, you can handle it by adding more servers and using a load balancer (e.g. ELB) in front of your site. Both Linode and DigitalOcean will null route your IP address if the attack goes above 100-200 Mbps, which is very annoying. Amazon and Google will let you handle it on your own (and charge you for it), but you will need quite a few instances to keep up with it.
For anything bigger than that, you have to use a DDoS mitigation service. Even bigger companies do not have 30-40Gbps+ capacity extra hanging around just in case.
I have used and engaged with multiple DDoS mitigation companies and the ones that are affordable and good enough for HTTP (or HTTPS) protection are CloudFlare, Sucuri.net and Incapsula.
-CloudFlare: The most popular one; works well for everything but L7 attacks (in my experience). You need to get their paid plan, since the free one does not include DDoS protection - they will ask you to upgrade if that happens.
-Sucuri.net: Not as well known as CloudFlare, but they have a very solid mitigation. Have been using them more lately as they are cheaper overall than CloudFlare and have amazing support.
-Incapsula: I used to love them, but their support has been really bad lately. They are on a roll trying to get everyone to upgrade their plans, so that's been annoying. If you can do stuff on your own, they work well.
That's been longer than what I anticipated, but hope it helps you decide.
thanks,
martin_|9 years ago
http://www.bauer-power.net/2016/03/incapsula-had-major-world...
DivineTraube|9 years ago
- Every one of our servers rate limits critical resources, i.e. the ones that cannot be cached. The servers autoscale when necessary.
- As rate limiting is expensive (you have to remember every IP/resource pair across all servers) we keep that state in a locally approximated representation using a ring buffer of Bloom filters.
- Every cacheable resource is cached in our CDN (Fastly) with TTLs estimated via an exponential decay model over past reads and writes.
- When a user exceeds his rate limit, the IP is temporarily banned at the CDN level. This is achieved through custom Varnish VCLs deployed in Fastly. Essentially the logic relies on the backend returning a 429 Too Many Requests for a particular URL, which is then cached using the requester's ID as a hash key. Using the restart mechanism of Varnish's state machine, this can be done without any performance penalty for normal requests. The duration of the ban is simply the TTL.
TL;DR: Every abusive request is detected at the backend servers using approximations via Bloom filters and then a temporary ban is cached in the CDN for that IP.
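A ring buffer of Bloom filters like the one described might be sketched as follows (a toy illustration: the filter sizes, window counts, and one-hit-per-window policy are assumptions, not this commenter's actual design):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter (sized for illustration, not production)."""

    def __init__(self, size=8192, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

class SlidingRateLimiter:
    """Ring buffer of Bloom filters: each slot covers one time window, so
    IP/resource pairs are remembered approximately in constant memory,
    and old windows are forgotten by overwriting the oldest slot."""

    def __init__(self, slots=4):
        self.ring = [BloomFilter() for _ in range(slots)]
        self.current = 0

    def rotate(self):
        # Advance to the next window, discarding the oldest one.
        self.current = (self.current + 1) % len(self.ring)
        self.ring[self.current] = BloomFilter()

    def hit(self, ip, resource):
        """Record a request; return True if this IP/resource pair was
        already seen in any recent window (i.e. it should be throttled)."""
        key = f"{ip}|{resource}"
        seen = any(key in bf for bf in self.ring)
        self.ring[self.current].add(key)
        return seen
```

Bloom filters can produce false positives (flagging a pair that was never seen) but never false negatives, which is the trade-off that keeps the state small enough to hold locally on every server.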
ryanlol|9 years ago
Looks like you're hosting at least some stuff at Hetzner, they're not going to do any filtering for you.
rootlocus|9 years ago
tombrossman|9 years ago
OVH include DDOS protection by default[0] and they have a very robust backbone network[1] in Europe and North America that they own and operate themselves (this is how & why anti-DDOS is standard with them).
For quick side-projects I still fire up a DigitalOcean instance or two because their UX is so slick and easy. If I needed huge scale and price didn't matter I would probably go with AWS (their 'anti-DDOS' is their vast bandwidth + your ability to pay for it during an attack). For everything else, I put it on OVH.
[0]https://www.ovh.com/us/anti-ddos/
[1]http://weathermap.ovh.net/
Urgo|9 years ago
kalleboo|9 years ago
rmdoss|9 years ago
The main issue is that I lost a bit of faith in their support and reliability: vRacks going down for hours with no updates, connectivity issues, servers disappearing.
Besides that, their DDoS protection works well for L3 attacks, except that they force a TCP reset on every connection. So if you are picky about extra connect times and having your clients re-establish their connections, they are great.
dineshp2|9 years ago
jaypaulynice|9 years ago
First off you need to determine where the attack is coming from. You could redirect based on IP/request headers in a .htaccess file or apache rules.
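An IP-based block in .htaccess might look like this (assuming Apache 2.4 syntax; the source range is an illustrative documentation address, not a real attacker):

```apache
# .htaccess: deny a hypothetical attacking range, allow everyone else
<RequireAll>
    Require all granted
    Require not ip 203.0.113.0/24
</RequireAll>
```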
Your next bet is to distribute/auto-scale your application if possible.
You need to set up a web application firewall that sits in front of your web servers and analyzes the requests/responses that hit them. A lot of DDoS campaigns are easy to identify based on the request headers/IP/geo and requests per second.
It's not hard to write a small web server/proxy to do this, but it would be best left to someone who knows what they're doing because you don't want to block real user requests. You can use ModSecurity's open source WAF for apache/nginx, but again you have to know what you're doing.
When I faced this issue, I wrote a small web server/proxy here that you can start on port 80:
https://github.com/julesbond007/java-nio-web-server
Here I wrote some rules to drop the request if it's malicious:
https://github.com/julesbond007/java-nio-web-server/blob/mas...
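With the ModSecurity WAF mentioned above, a simple drop rule might look like this (a sketch only: the rule id, matched tool names, and status code are made up for illustration):

```apache
# Deny requests whose User-Agent matches a known load-testing tool
SecRule REQUEST_HEADERS:User-Agent "@rx (?i)(siege|wrk|ab)" \
    "id:100001,phase:1,deny,status:403,msg:'Load-test user agent blocked'"
```

As the comment warns, matching too broadly here is exactly how you end up blocking real users, so rules like this need careful testing in detection-only mode first.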
DenisM|9 years ago
For static content there is always CDN. Costly, but it works in a pinch, while you're planning you other moves.
The one thing left to worry about is dynamic content. Depending on the application you could restrict all requests to authorized users only while under attack.
This isn't a complete solution by any means, but reduced the attack surface considerably.
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June20...
rmdoss|9 years ago
1- For small attacks you can optimize your stack, cache your content and use a provider that allows you to quickly scale and add more servers to handle the traffic. Do not use Linode or Digital Ocean as they will null route you.
OVH, AWS and Google are the ones to go with.
2- Use a DDoS mitigation / CDN provider that will filter the attacks and only send clean traffic back to you.
The ones recommended so far:
https://cloudflare.com
https://sucuri.net
https://incapsula.com
carlosfvp|9 years ago
I used to get attacked with huge loads of corrupt UDP packets for a few seconds, and that used to hang the main server, which in 1 or 2 minutes disconnected all my players.
Solution: separate your UDP services from your TCP services in separate applications and servers, also use different type of protection services for each.
The attack still hung the UDP services, so I started thinking about writing a plugin for Snort to analyse the traffic and only allow legit protocol packets. I never built it, because the attackers stopped once they noticed that no one was being disconnected.
BTW, for TCP and HTTP I just used any tiny service that protects me from SYN Flood, like Voxility resellers.
rmdoss|9 years ago
If you have custom protocols, you have to get full /24 mitigation, and so far nobody beats Arbor at it. Very expensive, but works well if you have BGP.
tumdum_|9 years ago
DyslexicAtheist|9 years ago
The only reason why they're not constantly called out by serious infosec folk for their scam is because they hire guys also involved in DefCon/BlackHat planning (try to sneak a hostile talk against Cloudflare past REDACTED[2] who btw is also advising Mr. Robot). It's lobbying at its finest.
[0] https://scotthelme.co.uk/tls-conundrum-and-leaving-cloudflar...
[1] https://blog.torproject.org/blog/trouble-cloudflare
EDIT: [2] redacted name since there is more than one, please duckduckgo by yourself.
dineshp2|9 years ago
r1ch|9 years ago
Non-volumetric attacks like SYN or HTTP floods can be mitigated with appropriate rate limiting or firewalling.
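The firewalling approach for SYN floods might look like this with iptables (thresholds are guesses to be tuned for your real traffic; for serious floods, kernel SYN cookies are usually the first line of defense):

```shell
# Cap inbound SYN packets: allow up to 50/s with a burst of 100, drop the rest
iptables -N SYN_LIMIT
iptables -A INPUT -p tcp --syn -j SYN_LIMIT
iptables -A SYN_LIMIT -m limit --limit 50/s --limit-burst 100 -j RETURN
iptables -A SYN_LIMIT -j DROP
```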
Some providers like OVH have decent network-level mitigation in place, but you're not gonna find that on a $5 VPS where they're more than happy to null route you to protect their network.
rmdoss|9 years ago
Some syn floods can generate millions of packets per second, which is way more than a dedicated linux server can handle.
Good video on the topic:
https://www.youtube.com/watch?v=pCVTEx1ouyk
asimjalis|9 years ago
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June20...
AWS DDoS defense using rate based blacklisting
https://blogs.aws.amazon.com/security/post/Tx1ZTM4DT0HRH0K/H...
northwardstar|9 years ago
DDoS protection providers offer a remote solution to protect any server / network, anywhere: https://sharktech.net/remote-network-ddos-protection.php
toast0|9 years ago
B) make sure your servers don't fall over while receiving a full line rate of garbage (this is not hard for reflection or SYN floods, but is difficult if they're hitting real webpages, and very difficult if it includes a TLS handshake)
C) bored ddos kiddies tend to ddos www only, so put your important things on other sub domains
D) hope you don't attract a dedicated attacker
ebbv|9 years ago
This is one of the reasons I would consider managed hosting as opposed to AWS, Digital Ocean, etc. With any good managed hosting provider, they are going to take steps to help deal with the DDoS. Depending on your level of service and the level of the attack, of course. But they will have an interest in helping you deal with and mitigate the attacks.
The reality is that true DDoS solutions are expensive, and if you have a "small website" then you're probably not going to be able to afford them. But if you're at a good sized hosting provider, they're going to need to have these solutions themselves and can hopefully put them to use to protect your site.
damm|9 years ago
1. Verisign and others offer this service, typically using DNS; however, they often support BGP as well.
2. Add limiting factors: if you have an abusive customer, rate limit them in nginx. If you are expecting a heavy day, rate limit the whole site.
3. Stress testing, and ideally designing your website to withstand DDoS attacks.
You can cache or not cache; that's not really the question. Handling a DDoS is about what you can do to mitigate the extreme amount of traffic and still allow everything else to work.
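The nginx rate limiting mentioned above might be configured like this (a minimal sketch; the rate, burst, zone name, and upstream address are illustrative):

```nginx
http {
    # One shared-memory zone tracking request rates per client IP
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            limit_req zone=perip burst=20 nodelay;  # absorb small bursts, reject floods
            limit_req_status 429;                   # tell clients they are throttled
            proxy_pass http://127.0.0.1:8080;       # hypothetical upstream app server
        }
    }
}
```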
TimMeade|9 years ago
TimMeade|9 years ago
[deleted]
executesorder66|9 years ago
http://www.linuxjournal.com/content/back-dead-simple-bash-co...
kalleboo|9 years ago
_nalply|9 years ago
simbalion|9 years ago
If you do piss anyone off, keep records of everything. Make sure you know who they are, and where they live, before you start doing business with them. This lets you send the police after they hire someone to DDoS you. Bad people need to be removed from the pool to reduce these sorts of attacks. Record 100% of your phone calls. Android has free apps to do this for you automatically. If you're in a state that requires 2-party authorization, move to a state that offers 1-party authorization. Sanity in laws = freedom of citizens.
anotherdpk|9 years ago
bowyakka|9 years ago
http://www.level3.com/~/media/files/brochures/en_secur_br_dd...