I'd be curious to know where this "ampnode" bulletproof hosting company physically had their servers and stuff colocated.
Ironically enough, ampnode.com right now appears to be behind Cloudflare; I got the five-second Cloudflare JavaScript browser-fingerprinting interstitial when checking it out.
"Dedicated Spoofing Servers Now Available. Unfiltered. Unmanaged. No scanning! No child porn! Buy yours today!
Don't forget to inquire within about the bulk order discounts."
well that certainly seems totally legit and not suspicious at all
My guess is a large majority is reflective UDP or TCP zombies, with anycast edges sinkholing traffic from known heuristics or via user-configured rules.
My questions are
- are there any organisations that provide good writeups
- is there any well accepted guidelines on how to ethically self-test (unsure if service providers would be happy if you kill network hops in the process)
- how much real world activity leverages non-bandwidth load, such as cpu/memory consumption?
My information is a little out of date, I left a company that attracted low-effort DDoSing about 3 years ago, but let me try.
> My guess is a large majority is reflective UDP or TCP zombies, with anycast edges sinkholing traffic from known heuristics or via user-configured rules.
I mostly saw UDP reflection (usually chargen (!)), but sometimes WordPress reflection (thanks, pingback), and occasionally old-school SYN floods.
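The reflection mechanic is easy to put rough numbers on. A quick sketch (the amplification ratios below are approximate figures from public advisories, not measurements from this thread): the attacker spoofs the victim's source address, and the reflector's reply is many times larger than the request.

```python
# Approximate response/request byte ratios for common reflection protocols
# (illustrative public estimates, not exact).
AMPLIFICATION = {
    "chargen": 358.8,
    "dns (open resolver)": 28.7,
    "ntp monlist": 556.9,
}

def reflected_bandwidth(attacker_mbps: float, protocol: str) -> float:
    """Traffic arriving at the victim for a given spoofed send rate."""
    return attacker_mbps * AMPLIFICATION[protocol]

# 10 Mbps of spoofed chargen requests lands ~3.6 Gbps on the victim.
print(reflected_bandwidth(10, "chargen"))  # 3588.0
```

This is why chargen, of all things, keeps showing up: an ancient debugging protocol with a huge reply-to-request ratio and no handshake.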
We usually got attacked on the www servers, and they had 10G (or 2x10G), which was usually enough to just ignore the incoming traffic. For the WordPress stuff, I did add a rule to block the traffic, because they were fetching large files, I don't care about pingbacks, and I don't have good feelings for WordPress, so it was cathartic to deny all requests with WordPress in the User-Agent. Having to wait until after the TLS handshake was unfortunate though. :(
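The User-Agent rule described above can be sketched roughly like this (the header handling and match string are my own illustration, not the commenter's actual config; real deployments would do this in the web server or load balancer):

```python
# Minimal sketch: deny any request whose User-Agent mentions WordPress,
# the signature of pingback-reflection traffic. Legitimate pingbacks are
# sacrificed, which the commenter was fine with.

def should_block(headers: dict) -> bool:
    """Return True if the request looks like WordPress pingback traffic."""
    ua = headers.get("User-Agent", "")
    return "WordPress" in ua

# A typical pingback User-Agent carries the WordPress version string:
print(should_block({"User-Agent": "WordPress/5.8; https://example.org"}))  # True
print(should_block({"User-Agent": "Mozilla/5.0"}))                         # False
```

Note the limitation mentioned above: the User-Agent only arrives after the TCP and TLS handshakes, so the reflector has already cost you some work before you can reject it.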
Occasionally, something would cause real trouble with availability. If the incoming traffic was significantly over line rate, our hosting provider would null-route that IP, and our DNS would move users to other www servers. Occasionally someone would come up with something that caused issues without high volumes: IP fragments were painful without tuning[1], and SYN floods could be troublesome at enough volume, although going to syncookies-only would mitigate that, and there were big kernel improvements at some point. I had sampled packet captures always running on the www servers, so it was easier to investigate afterwards than hoping to get in while the servers were having trouble. Letting the www servers fall over was acceptable as well. Somebody has a little fun at our expense, but our real service wasn't on www, so who cares?
> - are there any organisations that provide good writeups
Yeah, search for "DDoS trends". I see writeups from Cloudflare, Microsoft, F5, Netscout, Lumen (aka CenturyLink/Level 3), and SANS. I'm pretty sure I read a report from Akamai once too.
> - is there any well accepted guidelines on how to ethically self-test (unsure if service providers would be happy if you kill network hops in the process)
I would get enough casual flooding (usually exactly 90 seconds' worth) that I only rarely needed to self-test. But I did a couple of self-tests for SYN flooding, because the degree of improvement was hard to guess at without tests, and SYN floods weren't super common. For that test, I attacked from machines on the same (private) subnet, and was careful to make sure the attacked machine wouldn't transmit any responses. The point was to avoid impacting any equipment except the machine under test and the generating machines.
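As a back-of-envelope illustration of why SYN-flood testing is worth doing (my own numbers, not from the comment): without syncookies, each half-open connection occupies a backlog slot until it times out, so even a fresh backlog fills almost instantly at flood rates.

```python
# Sketch: time for a flood to fill an empty SYN backlog, ignoring expiry
# (entries only expire after the retransmit timeout, many seconds later,
# so ignoring it is a fair approximation at flood rates).

def backlog_fill_time(backlog_size: int, flood_pps: float) -> float:
    """Seconds until a fresh SYN backlog is full at the given flood rate."""
    return backlog_size / flood_pps

# A default-ish backlog of 1024 entries against a modest 100k SYNs/sec:
t = backlog_fill_time(1024, 100_000)
print(f"{t * 1000:.1f} ms")  # 10.2 ms -- hence syncookies
```

The interesting measurement in a self-test is what happens after that point: with syncookies the server keeps accepting legitimate connections, without them it stops, and the degree of degradation is hard to guess without actually trying it.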
Alternatively, most of the DDoS our actual service faced was when we messed something up and our real clients came with large numbers of requests. No need for DDoS testing if your real load is huge. :)
> - how much real world activity leverages non-bandwidth load, such as cpu/memory consumption?
IMHO, you're only going to see that if you're being individually targeted. It might not be too hard to find something that will consume your CPU/memory, but it's unlikely to be in a DDoS-as-a-service toolkit, or at least not at the casual level. Often, you can turn the tables on these and only do hard work once they've established a session of some sort; then all of a sudden they've got to keep sessions, and it's a lot more expensive for them.
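The "turn the tables" idea above can be sketched as a stateless challenge: hand out a cheap token first, and only do expensive work for clients that echo it back. The names and the HMAC construction here are my own illustration (the same trick syncookies use at the TCP layer).

```python
import hashlib
import hmac
import os

SECRET = os.urandom(16)  # server-side key; no per-client state is kept

def issue_token(client_ip: str) -> str:
    """Cheap step: a stateless token derived from the client address."""
    return hmac.new(SECRET, client_ip.encode(), hashlib.sha256).hexdigest()

def do_expensive_work(client_ip: str, token: str) -> str:
    """Refuse the costly path unless the client proved it kept our token."""
    if not hmac.compare_digest(token, issue_token(client_ip)):
        return "rejected (cheap path)"
    return "expensive result"

tok = issue_token("203.0.113.7")
print(do_expensive_work("203.0.113.7", tok))     # expensive result
print(do_expensive_work("203.0.113.7", "junk"))  # rejected (cheap path)
```

The server spends one HMAC per request either way; the attacker now has to complete a round trip and hold state per session, which kills spoofed-source floods outright.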
[1] At that point in time, the FreeBSD kernel's IP defragmentation was just a linear search through the array of fragments, with the array size limited based on the total RAM available. I think there's some hashing now, but the array is probably still considerably larger than it needs to be. We did see a small amount of legitimate-looking traffic with IP fragmentation, but it was tens of packets per minute; 16 fragments is more than enough storage for legitimate traffic, although with the hashing you need a bit more. I'm not even sure the attack traffic was intentionally IP-fragmented, but it was.
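To illustrate the footnote's point (my own sketch, not FreeBSD code): with a linear scan, every incoming fragment pays a cost proportional to the number of pending packets, so an attacker who floods fragments makes legitimate reassembly expensive too; hashing the lookup key removes that leverage.

```python
# Pending reassembly entries are keyed by (source address, IP ID).

def find_linear(pending: list, key: tuple):
    """O(n) scan over all pending packets -- the old behavior."""
    for entry_key, frags in pending:
        if entry_key == key:
            return frags
    return None

def find_hashed(pending: dict, key: tuple):
    """O(1) average lookup -- what hashing buys you."""
    return pending.get(key)

# 10,000 attacker-created pending entries; both structures hold the same data.
linear = [((f"198.51.100.{i % 256}", i), []) for i in range(10_000)]
hashed = dict(linear)

key = ("198.51.100.5", 5)
assert find_linear(linear, key) is find_hashed(hashed, key)
```

With the linear version, the attacker's 10,000 junk entries tax every fragment that arrives, including the handful of legitimate ones per minute; with the hash, they only tax memory, which the size limit already caps.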
2 years is less than I’d expect. I remember an 18-year-old kid got 1.5 years for creating a variant of the Blaster worm (not even the original), and I don’t think he even profited from it? This guy is a fully grown adult who created an illegal business that performed 200,000 DDoS attacks, likely made millions, and caused immense amounts of damage, and … just a 33% longer sentence?
Although, maybe “hacking” type convictions are getting lighter over time?
Same here. Other people have probably collectively spent vastly more than 2 years' time mitigating and recovering from attacks launched by this guy's service. As a previous victim, I'm still glad to hear there are at least some consequences involved for booter operators.
His sentence indicates to me that he made millions and paid top attorneys to get that 2-year forced federal recruitment-for-new-staff program. (Federal prison is how minor criminals become majors.)
> Despite admitting to FBI agents that he ran these so-called “booter” services (and turning over plenty of incriminating evidence in the process), Gatrel opted to take his case to trial, defended the entire time by public defenders.
Also, when I was the target of one of these "booters", a lot of them explicitly state they're only to be used for stress testing services you own, which is legal.
At least from the news I’ve seen, some of these guys don’t want those jobs / can’t get them for some reason or another (tech skills aren’t all it takes, skill-wise, to get these jobs), and they seem to really identify with the idea of some sort of online hacker bad-guy persona.
Considering that most skilled tech work generally makes plenty to live on, lots of us don’t optimize for maximum salary (or want a “salary” at all).
And “risk” isn’t something that gets shown to you in an Excel spreadsheet and then rolled on a die when you start a project. It’s something you assess for yourself, from very little data, and with your own particular risk tolerance.
With all that in mind, choosing to work in an unauthorized industry is no different than working in any other industry: startups, fintech, enterprise, whatever.
You do it because you like what it represents, or who you work with, or the way it feels, etc; or because you don’t feel like you have other choices for whatever reason.
How much time did they want for Aaron Swartz? This guy gets two years for running an entire business that led to thousands of CFAA violations. What a joke.
I feel like it should be illegal to run misconfigured DNS servers too, given they can be used to commit crime. Analogous to drugs: it's a crime to produce and sell them.
Under which jurisdiction would the server have to be running in a misconfigured state? What counts as misconfiguration? Would it extend to the advertising pages that consumer ISPs in the US regularly see fit to serve when they should be returning NXDOMAIN?
Public DNS servers have their place. 8.8.8.8 and 4.2.2.1 are pretty popular. DDoS attacks don't depend on DNS servers either, so you can't solve the underlying problem by making them go away.
I always thought it'd be great if the EFF ran a public DNS server. At least you could trust them not to use your requests to build a profile of your online activity or redirect you to ads.
walrus01 | 3 years ago:
https://ampnode.com/servers.html
How is it that the marketing website for this is still up?
https://www.justice.gov/usao-cdca/pr/illinois-man-convicted-...
0xy | 3 years ago:
Honeypot run by the feds.
shakna | 3 years ago:
Nowhere. Their license agreement points to their hosting being with Microsoft Azure. [0]
[0] https://ampnode.com/terms.html
cookiengineer | 3 years ago:
Well, and technically, distributed slowloris.
Most "collectives" still use forks of LOIC though.
frozenport | 3 years ago:
With his skills, and being based in the USA, he could be getting a six-figure salary without the risk?
czbvyRZNsVcpTm2 | 3 years ago:
We are currently working on how to solve DDoS in a better way (I can't share the details now, but it will eventually be published as a paper).
Most previous research has used either a synthetic or an old dataset to verify its proposal. Two famous examples:
1. https://www.unb.ca/cic/datasets/ddos-2019.html
2. https://www.caida.org/catalog/datasets/ddos-20070804_dataset...
Aside from those, plenty of others are available. However, the synthetic datasets do not represent real-world traffic, and the old datasets are ancient; worse, they contain only the attack traffic, since the legitimate traffic has been removed.
On the other side of the world, there are these "DDoS-for-hire" people, who seem to have a sizable army behind them and whose prices look reasonable. Hiring them would give us data that is both new and genuinely real-world, which is what we need to verify our proposal. Say I hire them to attack myself and capture the traffic on my side: as long as I have a powerful enough machine, I could save all the attacks into a "real-world" dataset.
However, this is problematic from an ethical perspective. Everyone between me (as the victim) and the adversaries would also be DDoSed to some extent. Hiring a DDoS-for-hire service is also risky for my job [0]. After thinking about it for some time, I believe the big players (Cloudflare, Fastly, etc.) are the best entities to own this kind of data. Does anyone know whether they share it?
[0] https://portswigger.net/daily-swig/dutch-police-warn-ddos-fo...