It's merely an informative blog post on a topic many people are interested in. I see nothing wrong with Cloudflare just explaining what happened, even though they had nothing to do with it.

cortesoft|4 years ago
It is little stuff like this that makes it come off as a bit self-aggrandizing:
> We keep track of all the BGP updates and announcements we see in our global network. At our scale, the data we collect gives us a view of how the Internet is connected and where the traffic is meant to flow from and to everywhere on the planet.
> Fortunately, 1.1.1.1 was built to be Free, Private, Fast (as the independent DNS monitor DNSPerf can attest), and scalable, and we were able to keep servicing our users with minimal impact.
It isn’t a big deal, and the posts are still interesting. It just makes me roll my eyes a bit.

dgb23|4 years ago
I very much prefer that over the almost patronizing, overly friendly tone some others have, or the personality-stripped style that most default to.

bagfish|4 years ago
[deleted]

Ansil849|4 years ago
> I see nothing wrong with Cloudflare just explaining what happened, even though they had nothing to do with it.
You kinda inadvertently highlighted the issue: because they had nothing to do with it, they do not know what actually happened. They can pontificate about likely causes, just like others in the industry can, but they have no idea what actually caused the issue.

At no point in the blog post did they offer any conjecture about what was happening at Facebook. All of their information was general descriptions of DNS and BGP, or descriptions of how the Facebook outage was experienced on their end from running a DNS resolver. That in and of itself makes for an interesting and informative perspective.

I assume you did not read the blog post? It’s just a technical post describing the outage from Cloudflare’s perspective, and it mostly focuses on the increased traffic to 1.1.1.1 and the latency it caused.

bikeshaving|4 years ago
dfdz|4 years ago
You can pontificate about the likely contents of Cloudflare’s blog post, just like others who did not read it, but clearly you have no idea what it actually contains.

If you read the blog post, you'll see that it's speculation-free facts about what happened: BGP announcements happened at time t, DNS started failing at t+n, DNS requests spiked, BGP updates happened at t', and DNS returned to normal at t'+n.

Godel_unicode|4 years ago
Lazare|4 years ago
> They can pontificate about likely causes

They can, but they didn't.

rattray|4 years ago