Roughly, a somewhat lackluster response to a somewhat lackluster DDoS attempt.
They tried blocking specific IP addresses, which didn't work because the attack was somewhat distributed. They then just turned on some caching, which allowed the site to function, albeit with an unknown excess bandwidth charge pending.
And the DDoS itself can't have been terribly impressive, since all it took to mitigate it was a bit of caching. He mentions 10 requests/sec as the scale of the attack.
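For what it's worth, "turned on some caching" really is only a few lines in Apache terms. A hedged sketch of the sort of config that absorbs this kind of load (the directives are real mod_cache / mod_cache_disk ones, but the path and TTL are my assumptions, not the author's actual setup):

```apache
# Sketch only: serve rendered pages from a disk cache so repeated GETs
# never reach PHP. Requires mod_cache and mod_cache_disk to be loaded.
CacheQuickHandler on
CacheEnable disk /
CacheRoot /var/cache/apache2/mod_cache_disk
CacheDefaultExpire 300
CacheIgnoreNoLastMod On
```

With something like this in place, 10 identical GETs per second collapse into roughly one backend hit every five minutes.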
Thinking on this some more, this story makes even less sense.
He first mentions having to change Apache to recognize X-Forwarded-For, because there is Amazon Elastic Load Balancing between his site and the internet.
This means, of course, that the "attacking IPs" aren't making direct connections to his EC2 instance. They are proxied connections, all from the internal ELB service.
So later, when he mentions trying to use iptables to block traffic... that just doesn't make sense. There are no connections from those IPs to the EC2 instance. He could use .htaccess rules instead, since Apache is aware of X-Forwarded-For.
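A minimal sketch of what blocking on the forwarded header could look like in .htaccess (Apache 2.4 syntax; the IP is a documentation-range placeholder and the module choice is my assumption):

```apache
# Hypothetical rules: match the forwarded client IP and refuse it,
# since iptables only ever sees the ELB's internal addresses.
SetEnvIf X-Forwarded-For "203\.0\.113\.45" deny_client
<RequireAll>
    Require all granted
    Require not env deny_client
</RequireAll>
```

Needs mod_setenvif and AllowOverride permitting auth directives, but it acts on the header the ELB actually forwards, which iptables never sees.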
Lastly...why would you put an elastic load balancer in front of a single web server?
I was shocked that 12 requests/second could take down any site.
I use async logic (previously OpenResty, more recently NodeJS and Go) and largely pregenerated sites, so 2500 requests/second is a minimum baseline -- on a much lower-end instance than an m4.xlarge.
There's a reason I don't use PHP (or any primarily synchronous language like Ruby) any more.
This is an amazingly weak DDoS; put your site behind CloudFlare or a similar free service and go take a nap. They'll tank this without raising an eyebrow.
Yep, true, it's planned. But sometimes their captcha page tends to block some legitimate traffic...
It's not that impressive because we read articles every day about crazy DDoSes that big companies are able to mitigate. But when it's the website you're responsible for, whatever the number of requests/sec, you just need to find a way to manage it, and CloudFlare can have some weird side effects.
That's PHP for you. Although I use PHP myself quite often, it can be a resource hog if you're lazy about optimization. A customer I was working with was using WordPress, and their homepage took about 5 seconds to load due to a hideously inefficient WordPress module that was running the exact same SQL query thousands of times! With a little bit of optimization I managed to get it down to about 1 or 2 seconds.
For my own sites, I mostly use static html or server-parsed html.
Ummmm.... A cache layer is a must-have for any web application; perhaps he could have avoided the attack altogether if it had been present on the system since day one?...
At least for this kind of attack. A more serious DDoS won't be tamed by "just adding cache".
Well, the typical SLA for the server side is 500 ms; then you have a chance to load the whole page in under 3 seconds, which is what Google's usability findings recommend.
villa-bali is not even close to this; my bet is that you (or your ORM) are making too many requests to the database. Try to record ALL requests to the database during page rendering and I bet you'll find about a hundred.
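Once you have such a recording, counting the duplicates is trivial. A sketch with a fabricated file standing in for whatever query log you capture (file name, table names, and values are all invented):

```shell
# Fabricated stand-in for a per-render database query log.
cat > /tmp/render_queries.log <<'EOF'
SELECT price FROM rooms WHERE villa_id=7
SELECT price FROM rooms WHERE villa_id=7
SELECT name FROM villas WHERE id=7
EOF

# Count identical queries, most-repeated first. Anything with a large
# count is a candidate for caching or for collapsing into one query.
sort /tmp/render_queries.log | uniq -c | sort -nr
```

The most-repeated query lands on top, which is usually exactly the one your ORM is firing in a loop.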
Check out the following test results:
I wonder what would happen if GET / only returned a redirect to somewhere else (either an HTTP redirect code or an HTML page with window.location='http://yoursite.com/new_page').
brbsix | 9 years ago
[0]: http://lologhi.github.io/symfony2/2016/04/04/DDoS-attack-for...
[1]: https://webcache.googleusercontent.com/search?q=cache:J7lca_...
[2]: https://github.com/lologhi/lologhi.github.com/blob/master/_p...
adrianpike | 9 years ago
my goodness.
woud420 | 9 years ago
cut -d ' ' -f1 <file> | sort | uniq -c | sort -nr
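For anyone unfamiliar: run against an access log whose first field is the client IP, that pipeline prints a request count per IP, busiest first. A quick demo on a fabricated log (documentation-range IPs, invented timestamps):

```shell
# Two fake requests from one IP, one from another (all data invented).
cat > /tmp/access.log <<'EOF'
203.0.113.9 - - [04/Apr/2016:10:00:01 +0000] "GET / HTTP/1.1" 200 512
203.0.113.9 - - [04/Apr/2016:10:00:02 +0000] "GET / HTTP/1.1" 200 512
198.51.100.7 - - [04/Apr/2016:10:00:03 +0000] "GET / HTTP/1.1" 200 512
EOF

# Request count per client IP, highest first -- the "top talkers".
cut -d ' ' -f1 /tmp/access.log | sort | uniq -c | sort -nr
```

Behind an ELB you'd want the X-Forwarded-For field instead of field 1, since field 1 would only show the balancer's internal addresses.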
jasonlfunk | 9 years ago
"Site not installed. The site ghirardotti.fr is not yet installed."
[Edit: it's up now.]
st78 | 9 years ago
8 test agents: http://loadme.socialtalents.com/Result/ViewById/57341f645b5f... (5% of users have to wait more than 2 seconds)
16 test agents: http://loadme.socialtalents.com/Result/ViewById/57341f1a5b5f... (5% of users have to wait more than 4 seconds)
Definitely, any bot can nuke your website easily.
placeybordeaux | 9 years ago
Stopped reading after that.