Glad to hear something official on this...5 or 6 days is way too long to go without something more than "We're working on it" and some light details. I understand that it's likely an all-hands-on-deck hair-on-fire situation over there, but those of us who rely on Linode for our own businesses have been largely left in the dark.
When our customers are emailing and tweeting us and they just want to know when we are going to be up, and all we can say is "We have no idea, we don't know why this is happening or what's really going on", that's pretty much the definition of a worst case scenario from a customer service standpoint.
As someone whose business currently relies on Linode to function, I am sympathetic to Linode's plight...this is the equivalent of someone setting off a bomb in your factory; not exactly something you can always plan for, even with prevention measures in place. But they would have kept a lot more of my sympathy long-term if they had communicated better with their customers in the first place...
EDIT: And it looks like the attackers decided to start things back up again, as Linode.com is unavailable...
We know that we've dropped the ball here. To be frank, it's just been extremely difficult to take our people off of mitigation long enough to write something more coherent than "they're attacking our webservers", "they're attacking our core routers", etc.
> And it looks like the attackers decided to start things back up again, as Linode.com is unavailable...
They're watching our status page for updates and starting new attacks when we resolve previous ones. There's been an almost 1:1 correlation lately.
Around the holidays, network engineers are the only ones who don't really take time off. The sorts of people who might say "hey, we need to give some clarity to customers" are less available than the people whose time is spent firefighting.
"It has become evident in the past two days that a bad actor is purchasing large amounts of botnet capacity in an attempt to significantly damage Linode’s business."
The timing of the DDoS was pretty interesting too, happening when not everyone is available.
I've seen Linode getting a lot of flak for not updating customers on what is happening, but what clarity does this announcement provide that their DDoS status page wasn't already providing? They were updating the status page regularly.
1 - Attack mitigation was mostly successful. As I thought and they have confirmed, the attack vectors evolved continuously.
2 - They had to deal with this over Xmas. Anyone familiar with such a job knows what this means in terms of human resources, knowledge distribution, organization of technical response and communication with 3rd parties.
3 - Linode is not Nagios. If you don't monitor your own infrastructure, don't expect Linode to SMS you because your site might be down. Linode's resources were focused on fighting the DDoS, as they should have been, and they provided regular updates through their status site, as is expected. Everything else is nice-to-have, but not a must-have.
4 - In line with what others said, I had 7 hours of downtime on my London VPS. That is an uptime of 96% in the last 7 days. Considering a relentless DDoS ongoing over the holidays, I'd say that is pretty good.
I'm sorry, but what happened to Linode sucks; it is also an eventuality that anyone with assets depending on this service should have planned for, because it can happen anywhere. You cannot blame Linode if your HA strategy doesn't exist, or if you never thought of a way to gracefully fail over to a second provider when your business depends on >96% availability.
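For what it's worth, the arithmetic behind that 96% figure is easy to check. A quick sketch (the 7-hour and 6h25m downtime figures come from this thread; nothing else is assumed):

```python
# Uptime as a percentage of a 7-day window, given total downtime.
HOURS_IN_WINDOW = 7 * 24  # 168 hours in 7 days

def availability(downtime_hours: float, window_hours: float = HOURS_IN_WINDOW) -> float:
    """Return uptime percentage for the given downtime within the window."""
    return 100.0 * (1.0 - downtime_hours / window_hours)

print(round(availability(7), 1))            # 7h down in 7 days -> 95.8, i.e. ~96%
print(round(availability(6 + 25 / 60), 1))  # the 6h25m figure reported elsewhere in the thread -> 96.2
```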
1) They were caught with their pants down. Their DDoS playbook was probably either outdated or not fully fleshed out. This is somewhat excusable if you're a content provider. It's not excusable if you're the cloud/colo/datacenter provider. This is literally your raison d'être. Cue the "you had one job" memes.
2) Netops people don't get Christmas off (1). Our teams are working all year, all day, every day. Netops isn't HR or marketing. It's not super difficult to arrange a conference call with your transit providers, DDoS mitigators, etc. This is old hat to them.
3) Completely agree.
4) The downtime was a lot worse if you had instances in multiple datacenters. Or put another way, the bigger the Linode customer you were, the worse it was.
(1) I've had a router die every Christmas or New Year's for the past 6 years. Never had one die outside those windows. They must be angry at me for what I've put them through.
My message to the attackers (in case they happen to read HN):
Fuck you. I will continue to be a Linode customer. Not sure what your goals might be but you will not succeed.
Frankly, and I am going to be politically incorrect here, these are the kinds of cases where I wish there was a "special forces" kind of task force to hunt down these pieces of shit and put them out of their misery.
This amounts to financial terrorism of the worst kind. It affects small and large companies and creates untold losses across the board. It is entirely unproductive. The world would be a better place if the pieces of shit who engage in this sort of financial terrorism simply didn't exist.
Happy New Year.
Linode folks: I'm renting another server next week. Don't need it. Just want to support your effort and, in a tiny way, help mitigate losses. I might just give it to the kids on the robotics team I mentor so they can play around in a real server environment.
> Frankly, and I am going to be politically incorrect here, these are the kinds of cases where I wish there was a "special forces" kind of task force to hunt down these pieces of shit and put them out of their misery.
Oh, it's certainly in the capability of the FBI and the NSA to hunt them down. Even child fuckers inside the Darkweb got busted.
The problem is priority: unless either child porn or a huge US company is involved, the three-letter-agencies don't give a shit about this kind of crime.
This is NOT the fault of the attackers. The continued efficacy of DDoS attacks is squarely the fault of ISPs who absolutely refuse to police compromised customers or filter egress.
This is a problem that has a solution, but the people in the best position to handle it have shown no interest in doing so for a decade.
If you trust people to do the right thing, you're going to get burnt.
We need to evolve the internet past these types of attacks and build systems to protect everyone from this kind of terrorism, rather than just get angry at people who abuse well-known loopholes.
Thanks for the update and the hard work. You all work a job that requires a lot of sweat and tears that go unappreciated by many levels of our society. Making the internet work is hard work.
Know that you're in good company here and that we're rooting for you.
We appreciate this more than you know. Many of us have had the holidays ruined by these utterly relentless attacks, and it's a difficult thing to try and explain to our loved ones. Support from the community really helps.
+1 to this. I've been a happy Linode customer for 4-5 years now. The few times I've had to use support they've been great. I also appreciate the regular upgrades to their offerings, meaning there's little need to shop around and hop hosts. The DDoS attacks are annoying, but I will remain a loyal, happy customer.
My London Linodes' stats from the past 7 days:
40 outages. 6h25m downtime.
What upsets me the most is that customers haven't heard a single word from Linode. If you weren't watching their status page (or running your own server monitoring), you'd be clueless.
The least they could do is email affected customers about what's happening and the time frame they need to fix it.
But one week of continuous issues is just not good enough.
That being said, I love Linode, and will not use any other primary provider for my servers. They had been dead stable for years before this happened, and I never hit any bottlenecks (e.g. overselling servers like other providers do). And I feel sorry for the network engineers trying their best to fix this. The missing information is a Linode customer relations issue, not the engineers'.
Happy new year to everyone. And looking forward to a great 2016 at Linode.
1) Why in the world are you exposing your router control planes to the outside world? That should be ACL'd off (in stateless firewall rules and on the routing engine) to only allow access from a few IPs.
2) Your transit providers should be defending their infrastructure. I've never seen a transit provider allow an attacker to take out their /30 serials or IX addresses. This is their network, after all. If attackers try to hit the serial between customer and provider, you just readdress the serial to RFC 1918 space. You don't really need a routable address there other than to make traceroutes easy to read. If they attack farther upstream in the provider's network, you just add ACLs at the provider edge. Nothing external should ever need to reach a provider's core. This is basic, basic stuff.
Next time, don't only run your network on house bandwidth (HE, TelX, etc). Or in other words, caveat emptor.
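The control-plane lockdown in point 1 is vendor-specific in practice (Juniper firewall filters, Cisco CoPP, etc.), but the underlying check is just a source-address allowlist. A minimal Python sketch of the idea; the management prefixes below are made-up RFC 5737 documentation ranges, purely for illustration:

```python
import ipaddress

# Hypothetical management prefixes permitted to reach the router control plane.
# Anything else should be dropped by a stateless filter before it touches the CPU.
MGMT_ALLOWLIST = [
    ipaddress.ip_network("198.51.100.0/29"),  # e.g. NOC jump hosts (illustrative)
    ipaddress.ip_network("203.0.113.16/28"),  # e.g. out-of-band network (illustrative)
]

def control_plane_permits(src: str) -> bool:
    """Return True if src may reach the routing engine (SSH, BGP, SNMP)."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in MGMT_ALLOWLIST)

print(control_plane_permits("198.51.100.3"))  # inside the allowlist: permitted
print(control_plane_permits("192.0.2.77"))    # everything else: dropped at the edge
```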
I want to cut Linode some slack, and they have been great, but those were my thoughts when the employee above divulged that info. It all seems a little like using cold fusion to run your whole shop..
People don't need to jump ship; they need plans in place to deal with problems like this, even just an rsync.net account.
> a bad actor is purchasing large amounts of botnet capacity in an attempt to significantly damage Linode’s business.
I wonder what size investment this is taking, and what the end-game is for the bad actor. Unless Linode's mitigation tactics are increasing the bad actor's costs, what's to stop the bad actor from continuing the attacks until Linode goes out of business?
I know I might be coming to this late, but being one of your very satisfied customers, and having experienced this type of issue multiple times with other providers that didn't even bother to acknowledge there was an issue, I can say that I will remain with you regardless.
Also, to those saying "I have all my business running at Linode so this is unacceptable", I only say this:
You get what you pay for, and for a VPS service you won't find better than Linode. If you have something critical running for YOUR clients, then it is YOUR responsibility to ensure resiliency against this type of situation. Linode is a VPS provider, after all, and the reason you are making money is that your customers don't know enough to go to the VPS host themselves.
Good luck making a profitable business and milking your customers running on AWS or Azure. You'd be broke and in debt at the first DDoS and over-bandwidth charge from either of them.
I work at a service provider myself, and I understand what you guys had to deal with the last 10 days, and you have my full support.
I'm a bit surprised to see the update from someone who is not senior management/C-level (as far as I can tell). Where is the communication from the CEO/CTO?
It seems a bit unfair to have this fall on Alex's shoulders.
I could be way off base, happy to be put right. I'm sorry to hear about your ruined holidays. Hopefully you'll get some time off soon :)
That's crazy... I have a Linode box with several low-traffic websites on it, old projects I've wanted to keep around for archival purposes. I picked Linode because I wanted root access and they were cheap, but really just because I wasn't sure of better options. I suppose there is always t2.nano.
Linode has always been a great host. Sure they've had their growing pains but I've never been more happy with a virtual hosting provider, even their support. But yes, days without communication is not a good thing. Let's hope they learn from this.
ISPs already spend plenty of money on DPI and HTTP injection gear. It would cost next to nothing to do basic egress filtering and detecting+throttling known compromised customers.
Because these days "botnet" can easily mean "botnet of compromised Linux servers" or "botnet of WiFi routers". The biggest DoS attacks are often based on exploiting UDP based protocols and so you end up being attacked by ISP/university-sized DNS or NTP servers.
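The egress filtering being asked for here is essentially BCP 38 (RFC 2827) source-address validation: an access network only forwards packets whose source address falls inside the prefix actually assigned to that customer, which stops spoofed reflection traffic at its origin. A toy sketch of that check, with an illustrative customer prefix:

```python
import ipaddress

# BCP 38-style egress check: a customer port should only emit packets
# sourced from the prefixes assigned to that customer.
CUSTOMER_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]  # illustrative assignment

def egress_permitted(src_ip: str) -> bool:
    """Forward only if the source address belongs to the customer's own space."""
    src = ipaddress.ip_address(src_ip)
    return any(src in prefix for prefix in CUSTOMER_PREFIXES)

print(egress_permitted("192.0.2.10"))    # legitimate source: forwarded
print(egress_permitted("203.0.113.50"))  # spoofed source: dropped at the ISP edge
```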
I think the lesson here is don't rely on one supplier. I have my tiny infrastructure spread over three different suppliers in different geographical locations. Plan for the worst and hope for the best.
Edit. This is in no way a criticism of linode. The worst outcome is if we all end up with one monopoly supplier. I have deliberately avoided using the big player in this space as I want support diversity. This makes my job harder, but it is better for us all if we don't put all our eggs in the one basket.
Imagine the scenario: person A sends a malformed DNS request to a bunch of DNS resolvers, asking them to send the response to person B. Now imagine that person A is actually part of a large botnet, being controlled by person C, via some smoke-and-mirrors.
If you're person B (under attack) it's pretty difficult to track through all of that to person C. You'd need a lot of cooperation from people (likely in many different countries) who really just want to go back to their normal business. They're likely also charging for the traffic, so they're not really that bothered, and they're each only seeing a small proportion of what person B is seeing so they don't see it as much of a problem (so aren't likely to be inclined to get involved).
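The leverage in the reflection scenario above comes from the amplification factor: a small spoofed query elicits a much larger response aimed at person B. A back-of-the-envelope sketch, where the byte counts are typical ballpark figures for DNS amplification, not measurements from this attack:

```python
# Back-of-the-envelope DNS reflection/amplification arithmetic.
# Sizes are illustrative ballpark figures, not measurements from this attack.
query_bytes = 60       # small spoofed query sent by the botnet
response_bytes = 3000  # large response (e.g. an ANY query on a DNSSEC-signed zone)

amplification = response_bytes / query_bytes
print(f"amplification factor: {amplification:.0f}x")

# At that factor, 1 Gbit/s of spoofed queries becomes ~50 Gbit/s at the victim,
# paid for almost entirely by the open resolvers' bandwidth.
victim_gbps = 1 * amplification
print(f"victim sees roughly {victim_gbps:.0f} Gbit/s")
```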
I'm willing to bet it's someone with money who has an interest in making Linode look bad or unreliable. I can't imagine someone would sustain an attack like this for shits and giggles.
Devices running Windows XP aren't the only ones that can be compromised. Other possibilities are unpatched Linux servers, servers with easily guessed root passwords, consumer routers, etc.
jsonip.com is hosted on Linode. It's been averaging roughly 6 Mb/s inbound for months, but in the last week it's been about 8.5 Mb/s. I'm not sure if the uptick has anything to do with the DDoS attacks or not.
Did you also run jsonip.org? I used that occasionally (not programmatically, just when I wanted to see my IP) and it started returning a 502 a few months ago.
Perhaps it's time to consider some failover at another host. The same goes for anyone solely dependent on any single provider.
Your anger, while justified, is not helpful.
I imagine there are a few thousand Linode fans who'd be happy to help fight back.
And yet, we still get DDoS attacks. Why?