This is nothing less than cyber-terrorism, and I hope the FBI is involved.
A problem I see with people throwing Linode under the bus is that only the mega hosting providers (AWS/Google) have the resources to mitigate such attacks. I would hope the industry can find a solution that allows smaller players (Linode, Digital Ocean, etc.) to run a hosting business without the threat of DDoS.
If everyone moves to the big players, it is a loss for us all: feature-wise, quality-wise, security-wise, cost-wise.
Still sticking with Linode. This is definitely not their fault. If anything I blame ISPs for letting it happen. ISPs could simply filter forged (spoofed-source) packets at the edge; with spoofing gone, dropping attack traffic at the backbone would be trivial, making most botnets obsolete.
AWS and Google have those resources in place for their own benefit, and hosting with them means you get those benefits to some degree. But the issue is that this complicated infrastructure is only within reach of giants (Amazon, Google, Rackspace, etc.). AWS probably spends enough money, collectively, on DDoS mitigation and fallback that the same sum could buy an entire smaller hosting company.
Then consider the shady practices of whoever is trying to sabotage Linode's business: the likely goal is to drive business elsewhere. So you may end up moving to a hosting company that was shady enough to cause your loss of profits for their own profit. Something's up here, and I'm really curious to find out who did this. I'm not leaving a business because they were the victim of sabotage.
The main Linode Twitter account still hasn't announced the Atlanta DoS: https://twitter.com/linode?lang=en
I understand the severity of the issue is different from Slack's, but there should be a bunch of people on the Twitter account replying to people and saving customers. A lot of their customers are angry about the lack of communication as much as they are about the downtime.
Although I'm not sure how important it is to announce outages on social media, I'm a customer and I haven't seen an e-mail either warning of an outage or explaining the situation. Generally I think Linode is great, but taking the tack of "Oh, if they don't notice, we shouldn't point it out to them" is just a bit cowardly.
I agree Linode have mishandled the situation, badly. The lack of information about the ongoing attacks is disconcerting. We have 40+ nodes in London and Frankfurt and are dreading the attacks resuming in those locations. Without information about the specifics of the attacks, the measures Linode are taking, or an explanation of why this has happened in the first place, we can't communicate any sound information back to our customers. The only way forward is to switch providers until this quiets down and more details surface.
Linode has always been this way. They are not extremely open from a business standpoint, but from what I can see they are fairly transparent from a technical standpoint. They have released a statement about this outage, and past outages. Smaller outages usually don't get public announcements, but often you'll see something on the forums, or you can figure out the problem by opening a ticket.
I've had pretty good communication with Linode and I've always had really good customer service. I don't think Linode is nearly as verbose about their internal business affairs and goings-on as the startup world tends to be, but competition has driven many companies toward that kind of openness, and the rapid evolution of this market makes it reasonable to keep the public constantly in the loop.
In my experience, they've never withheld information that I needed to know. I understand why people are suspicious, but I think this suspicion is paranoia. These attacks are clearly sabotage, and if Linode knew who was doing it, they'd have taken legal action by now.
Twitter is not that relevant to be honest. I assume many customers are not active on Twitter either. I do not see why it is so important to have activity on the account when there is a dedicated page for informing customers of what is going on.
Also, if you look at the medium post, the tweets are mostly sorry/thank you/everything should be fine soon. No actual information on what is going on. How could you put relevant technical details in a tweet? How does a short, uninformative tweet help you more?
edit: also, there are up-to-date technical discussions taking place on the IRC channel #linode on OFTC
They have dramatically improved communication in the last couple of days. They got a lot of flak for it in the thread a couple of days ago, and they seem to have responded to it. Updates are on status.linode.com.
I jokingly commented a few days back to a friend of mine that Slack is probably the last company I would bet a billion dollars on to succeed. To be honest, I was very, very wrong - Slack is incredible! Anybody can definitely build a Slack clone - it's not an out-of-this-world engineering challenge - but it will never be Slack; it will never have the love that Slack pours into its products and dev APIs. <3 Slack!
I honestly love Linode and am sure they'll come out better as a result of this. But our customers aren't as understanding. Currently we're playing a bit of cat and mouse: with each data centre that goes down, we're switching our recovery process into gear and restoring to a different VPS (outside Linode). We have linodes in pretty much all locations, but if this continues at this rate, we simply won't have any linodes left.
It would be very hard to justify going back to Linode afterwards, even with the best intentions to do so. "... So you seriously want us to go back to this hosting provider that caused us all this mess over Christmas / New Year's??"
That's exactly the reaction the attackers want.
However, what are the alternatives? Linode has been dead stable for me for many years, and delivers what they promise in a transparent way. No overselling of servers. No sudden extra bills.
Linode will come out stronger after this, so it won't be able to happen on this scale again.
The big question is: Who, with a lot of money, would want to hurt Linode's business in this way? This isn't just a "script kiddie" having fun. It's a very well planned and powerful attack requiring buying large botnet capacity for an extensive amount of time.
Work in IT. Server was stable and online for 189 days before the 25th; knew them for stability.
No notification from them, just a handful of downtime alerts during time with the family. They were completely gone from BGP tables in Newark.
Used backups and moved sites to OVH. Don't know who they pissed off, I suspect another NJ competitor, who is known for taking cheap shots at other VPS companies.
It's a pain in the ass, but at the same time, how is their network so fragile? You would think at least some of the fragile systems being attacked would be firewalled or at least ACL'd off from the public net.
This is what happens when you don't run your own network and rely on other ASNs and uplinks to do the work for you. When other customers start being affected, they will simply null-route you, unlike your own network ops, who would be trying everything they could, including out-of-band access, to fix it.
What you are saying has been somewhat confirmed in Linode's latest update on the Atlanta outages [1]. I can't help but wonder whether Linode was prepared or had a plan in place in case of a DDoS. It appears their upstream provider cuts them off completely once an attack starts or resumes, then gradually puts them back online, and the cycle repeats.
We are also duplicating to OVH; we've read good things on HN about their built-in DDoS protection.
[1] http://status.linode.com/incidents/cbbcjnhhpkgm
Every business endeavor has associated risks, which can be mitigated in a variety of ways for a variety of costs.
Offloading the responsibility for the continuation of your business to Linode (or any other data center provider) is unfair. A history of uptime, verbal promises, or fancy SLA terms should never be interpreted to mean that disasters won't happen. A ten-day-long DDoS is a disaster, in this case a man-made one.
Using Linode (or another provider) instead of building your own data center is more cost effective, but it means you are no longer in direct control of your infrastructure (decreased costs, increased risk).
Designing your application to span multiple availability zones (data centers) can mitigate single points of failure within a single vendor but is more expensive than operating in a single zone.
Designing your application to span multiple vendors can mitigate single vendor failures (or changes in offerings from a single vendor) but is even more expensive.
And still there are ways to mitigate these costs: business interruption insurance can help cover the costs of moving to a new data center or vendor after a disaster (hiring staff, overtime, etc.), and it can cover lost profits as well.
Of course it is expensive to operate any business in a hostile environment. A seaside restaurant had better be prepared to weather a hurricane. I wonder how much money has been spent on security cameras, guards, metal detectors, and so on since 9/11? The increasing occurrence of targeted DDoS (and other) attacks is the network equivalent of an increasingly hostile physical environment, and it is going to come with higher costs.
In the longer term, I think we need to find ways to make law enforcement better suited to dealing with these problems, but ultimately I think we'll need to radically change the way we handle network operations and the technical foundations of the network itself, for example with content-centric networking (https://en.wikipedia.org/wiki/Content_centric_networking).
Let's hope the authorities can identify the bad actor(s) in this case -- if they haven't made extortion demands it's hard not to imagine they're a competitor, and it would be really frustrating if they were able to get away with it.
Linode has been great to us, but we can't risk further outages. We've switched over 20 nodes to Google Cloud for the time being -- thankfully before that 16+ hour outage today in Atlanta. Happy to check them out again once the dust settles.
From an armchair, it seems like a good idea to distribute a virtual server farm across multiple providers (Linode, AWS, et al.). There are even libraries available to abstract away the provider layer, like libcloud. However, IME it's typical to invest in just one provider.
Is anyone currently using libcloud or equiv. and able to share details?
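For readers unfamiliar with the pattern, here is a toy sketch of what the provider-abstraction idea buys you: application code talks to one interface, and interchangeable drivers hide the vendor differences. The class and method names below are hypothetical stand-ins in libcloud's style, not the real libcloud API:

```python
# Toy sketch of the provider-abstraction pattern (NOT the real libcloud API):
# application code depends on one interface; drivers hide vendor differences.

class NodeDriver:
    """Common interface every provider driver implements."""
    def create_node(self, name, size):
        raise NotImplementedError
    def list_nodes(self):
        raise NotImplementedError

class FakeLinodeDriver(NodeDriver):
    """Illustrative in-memory driver standing in for a real provider."""
    provider = "linode"
    def __init__(self):
        self._nodes = []
    def create_node(self, name, size):
        node = {"name": name, "size": size, "provider": self.provider}
        self._nodes.append(node)
        return node
    def list_nodes(self):
        return list(self._nodes)

class FakeEC2Driver(FakeLinodeDriver):
    provider = "ec2"

def provision_everywhere(drivers, name, size="2GB"):
    """Create the same node on every provider, surviving any single vendor."""
    return [d.create_node(name, size) for d in drivers]

nodes = provision_everywhere([FakeLinodeDriver(), FakeEC2Driver()], "web-1")
print([n["provider"] for n in nodes])  # → ['linode', 'ec2']
```

With the real library the shape is similar: you ask for a driver by provider constant and the rest of your code stays provider-agnostic, which is exactly what makes a multi-vendor setup practical.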
The brutal thing about a DDoS on a web hosting company is that it affects their business in very long-lasting ways. If an ecommerce site is down, they may lose sales for that day and a small number of customers. If a hosting provider is down, they can lose many of their customers for life.
If you look at how interconnected everything is these days, we are really not that far away (if we haven't crossed that point already) from lives being lost to crimes like these.
The analogy for me is one of roads. If you block a road on purpose then an ambulance might not be able to reach an accident victim in time. The internet is infrastructure, just like roads and purposefully obstructing it wholesale is doing damage to a large number of parties.
What is sad is that these people get away with this stuff over and over again; it is very rare for DDoS organizers to be caught, and rarer still (if it has ever happened) for them to be sentenced.
I am a (tiny) Linode customer with just one node, but for a good number of years, probably close to 10 now. This is the first outage I've had with them. All in all it's a good thing, as it made me finally learn how to use EC2, and I now have a backup there. I've already shut down the EC2 backup instance and switched back to Linode, as they seem to be up now.
I've used Linode increasingly since 2008, now consuming 10 times what I did at the start. I'm preparing to move to AWS today but genuinely hope that Linode comes through with a reasonable explanation of why I can expect this to not be repeated.
This story fell from #2 on HN to page #102 in seconds.
Is it because it has only 38 upvotes vs. 43 comments? The story is just 3 hours old.
@HN / dang: What's going on with the HN sorting algorithm? The #3 story on HN is 1 hour old and has just 9 upvotes and 2 comments: "Churchill and His Money, or Lack of It":
https://news.ycombinator.com/item?id=10825575
Screenshot: http://s3.postimg.org/6dn7h6w5v/hn_linode_fall.png
Couldn't find it on the 2nd or 3rd page, very odd.
Btw you should email [email protected] instead of asking us questions here. It's pretty random whether or not we see the latter.
I'm a Linode customer. I have been calling on a regular basis because I have a hosted service for my customers (thousands of them) that has been all but dead for 24 hours now. If I hear 'we are working on it' one more time, I may lose my mind. My customers are suffering severely, and I'm losing thousands of dollars as we speak. They keep telling me they are working with their upstream provider and that it's out of their hands. I'm not paying their upstream provider. I don't have a service agreement with their upstream provider. I'm paying them based on the service agreement I have with them. This is either a completely new level of DDoS or they are just completely incompetent in their way of handling it. In any case, I believe that Linode is going to suffer greatly for this in terms of lost customers. They're going to lose me by the end of next week, that's for certain.
>> This is either a completely new level of DDoS or they are just completely incompetent in their way of handling it.
My take is that it's somewhere in the middle. This type of outage has hit providers at all price points in the past, and will potentially do so again in the future, regardless of their SLAs and guarantees.
In my experience (and other comments indicate the same), Linode are generally very reliable. This isn't a mickey-mouse, dirt-cheap VPS operation. The communication hasn't been awesome, but by keeping an eye on status.linode.com I have personally felt reasonably well informed, if you trust that they're working on it and doing the best they can.
Other commenters suggest this is a fairly sophisticated, deliberate and sustained attack. Whoever is behind it is putting significant resources toward it, and it seems they have potentially gained inside knowledge of Linode's network topology.
Based on that understanding, I'm not wasting my time calling them or sitting around hitting F5. I'm working to improve my systems' architecture for resiliency in this type of situation. That involves geo-distributed, multi-site redundancy and fail-over.
My advice: be optimistic and proactive.
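For anyone weighing the same move, the fail-over half of that can be sketched in a few lines. The region names and health probe below are made-up stand-ins; a real setup would probe over the network and flip DNS or load-balancer configuration:

```python
# Minimal multi-site fail-over sketch: serve from the first healthy region
# in priority order. Region names and the probe are illustrative stand-ins;
# a real deployment would run network health checks and update DNS / an LB.

REGIONS = ["linode-newark", "linode-frankfurt", "aws-us-east-1"]  # priority order

def pick_primary(regions, is_healthy):
    """Return the first healthy region, or None if everything is down."""
    for region in regions:
        if is_healthy(region):
            return region
    return None

# Example: pretend Newark is null-routed upstream, so traffic should
# fail over to the next region on the list.
down = {"linode-newark"}
print(pick_primary(REGIONS, lambda r: r not in down))  # → linode-frankfurt
```

The hard part in practice is not this selection logic but keeping data replicated across regions so the fail-over target actually has something to serve.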
Why do you not have a disaster recovery plan that allows you to restore services to your customers in case of outage or other emergency?
I'm asking sincerely. I wouldn't be able to sleep at night if I had a SaaS product with thousands of customers and no way to restore service if my primary provider went down for an extended period of time.
What SLA do you have with them? I'm pretty sure it is the one where they will give a credit for the time they are down. I'm also pretty sure that you're hosting on at most a $90 Linode, so that would be something like $6.
There are SLAs available from hosts that will give the whole month, a quarter, or even a year back if they miss even a single month's 99.99%.
This whole period of downtime is past a joke now, but still, Linode is not a big-dollar hosting outfit; you need a plan B for when a region, or all of Linode, goes down.
I just signed up for a Linode server. Going to use it for part of our offsite backup storage and to monitor some nodes on our primary network.
Huzzah...