polack|3 months ago

They failed on so many levels here.

How can you write the proxy without handling a config that exceeds the maximum feature limit you set yourselves?

How can the database export query not have a limit set when there is a hard limit on the number of features?
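
For illustration, both guards could look roughly like this in Rust. This is a hedged sketch, not Cloudflare's actual code: the table name, the function names and the cap of 200 are assumptions.

    const MAX_FEATURES: usize = 200; // assumed hard cap, mirroring what the proxy preallocates

    /// Bound the export at the source: ask for at most MAX_FEATURES + 1 rows, so an
    /// over-long result is detectable without ever pulling an unbounded result set.
    fn export_query(table: &str) -> String {
        format!(
            "SELECT name, type FROM system.columns \
             WHERE database = 'default' AND table = '{table}' \
             ORDER BY name LIMIT {}",
            MAX_FEATURES + 1
        )
    }

    /// Refuse to publish a feature file that the proxy could not load anyway.
    fn validate(rows: &[String]) -> Result<(), String> {
        if rows.len() > MAX_FEATURES {
            return Err(format!(
                "{} features exceeds the hard limit of {MAX_FEATURES}",
                rows.len()
            ));
        }
        Ok(())
    }

    fn main() {
        println!("{}", export_query("http_requests_features"));
        let oversized: Vec<String> = (0..=MAX_FEATURES).map(|i| format!("f{i}")).collect();
        if let Err(e) = validate(&oversized) {
            eprintln!("not publishing: {e}");
        }
    }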

Why do they do non-critical changes in production before testing in a stage environment?

Why did they think this was a cyberattack and only after two hours realize it was the config file?

Why are they that afraid of a botnet? This does not leave me confident that they will handle the next Aisuru attack.

I'm migrating my customers off Cloudflare. I don't think they can swallow the next botnet attacks, and everyone behind Cloudflare will go down with the ship, so it will be safer not to be behind Cloudflare when it hits.

huijzer|3 months ago

> They failed on so many levels here.

That's often the case with human error, as aviation safety experts in particular know: https://en.wikipedia.org/wiki/Swiss_cheese_model

itzjacki|3 months ago

Exactly. The only way this could happen in the first place was _because_ they failed at so many levels. And as a result, more layers of Swiss cheese will be added, and holes in existing ones will be patched. This process is the reason flying is so safe, and the reason why Cloudflare will be a little bit more resilient tomorrow than it was yesterday.

miki123211|3 months ago

In organizations with this level of care, if you fail at fewer levels, customers just never notice the error.

Any big and noticeable incident is one of the "we failed on so many levels here" kind, by definition.

michaelt|3 months ago

> Why did they think this was a cyberattack

Isn’t getting cyberattacked their core business?

Yokohiii|3 months ago

If so, why wasn't ruling one out quick and unambiguous for them?

cowsandmilk|3 months ago

> Why do they do non-critical changes in production before testing in a stage environment?

I guess the non-critical change here was the change to the database? My experience has been that a lot of teams do a poor job of keeping a faithful replica of their databases in staging environments, which is what it would take to expose this type of issue.

brookst|3 months ago

In part because it is somewhere between really hard and impossible. Is your staging DB going to be as big? Seeing the same RPS as prod? Seeing the same scenarios?

Permissions stuff might be caught without a completely faithful replica, but there are always going to be attributes of the system that only exist in prod.

nip|3 months ago

It’s easy to pick on logic that failed and for which you have a very detailed and great post mortem write-up.

Yet you fail to acknowledge that the remaining 99.99999% of the logic that powers Cloudflare works flawlessly.

Also, hindsight is 20/20

Yokohiii|3 months ago

You are less critical of CF than they are of themselves.

A system that is 99.99999% flawless can still be unusable.

optimism bias: 100/100

aforwardslash|3 months ago

I know it's easy to criticize what happened after the fact, with a clear(er) picture of all the moving parts and the timeline of events. Most people in this thread are pointing out either Rust-related issues or the lack of configuration validation, but what really grinds my gears is something that - in my opinion - is simply bad engineering.

Having an unprivileged application query system.columns to infer the table layout is just bad; not having a proper, well-defined table structure indicates sloppiness in the overall schema design, especially if it changes quickly. Specifically for ClickHouse, and even if this approach were a good idea, the unprivileged way of doing it would be "DESCRIBE TABLE <name>", NOT iterating over system.columns. The gist of it: sloppy design, not even well implemented.
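
To make the contrast concrete, here are the two approaches as plain query strings in a small Rust sketch. The table and database names are invented for illustration, not taken from Cloudflare's code.

    // Fragile: scanning system.columns. Without a database filter, any schema the
    // account can suddenly see (e.g. a replica database) will contribute extra,
    // duplicate rows for the "same" table.
    const INFER_VIA_SYSTEM_COLUMNS: &str =
        "SELECT name, type FROM system.columns WHERE table = 'http_requests_features' ORDER BY name";

    // Scoped: DESCRIBE TABLE answers "what columns does this specific table have"
    // without the application touching the system.* tablespace directly.
    const INFER_VIA_DESCRIBE: &str = "DESCRIBE TABLE default.http_requests_features";

    fn main() {
        println!("{INFER_VIA_SYSTEM_COLUMNS}");
        println!("{INFER_VIA_DESCRIBE}");
    }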

Having a critical application issue ad-hoc commands against the system.* tablespace instead of using a well-tested library is just amateurism, and again bad engineering. IMO it is good practice to treat all system.* access as privileged and to keep that querying completely separate from your application logic; system tables sometimes change, fields are added and/or removed, and not planning for this will make future compatibility a nightmare.
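
The separation I'm arguing for might look roughly like this: one small module owns all schema introspection, so the rest of the application never issues metadata queries itself. Everything here (module, types, function names) is hypothetical.

    mod schema_introspection {
        #[derive(Debug, Clone)]
        pub struct Column {
            pub name: String,
            pub type_name: String,
        }

        /// The only place in the codebase allowed to ask the database about its own
        /// schema. If the system tables ever change shape, only this module changes.
        pub fn table_columns(database: &str, table: &str) -> Result<Vec<Column>, String> {
            // A real implementation would run a scoped, read-only query here.
            let _query = format!("DESCRIBE TABLE {database}.{table}");
            Err("not wired to a database in this sketch".to_string())
        }
    }

    fn main() {
        match schema_introspection::table_columns("default", "http_requests_features") {
            Ok(cols) => println!("{cols:?}"),
            Err(e) => eprintln!("introspection failed, keeping the previous schema: {e}"),
        }
    }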

Not just the problematic query itself but the whole context of this screams "lack of proper application design" and devs not knowing how to use the product and/or read the documentation. Granted, this is a bit "close to home" for me, because I use ClickHouse extensively (at a scale I'm assuming is several orders of magnitude smaller than Cloudflare's) and I have spent a lot of time designing specifically to avoid at least some of these kinds of mistakes. But if I can do it at my scale, why aren't they doing it?

Yokohiii|3 months ago

On all the other issues, I thought they wanted to do the right thing at heart but failed to make it fail-safe. I can write that off as part of a journey to maturity, or simply as the fact that you can't get everything perfect. Maybe even a bit of sloppiness here and there.

The database issue screamed at me: lack of expertise. I don't use CH, but seeing someone mess with a production system and then be surprised ("Oh, it does that?") is really bad. And this is not knowledge that is hard to acquire, buried deep in a manual, or an edge case only discoverable from source code; it's bread-and-butter knowledge you should have.

What is confusing is that they didn't add this to their follow-up steps. Giving them some benefit of the doubt, I'd assume they didn't want to put something so basic out there as a reason, just to protect the people behind it from widespread blame. But if that's not the case, then it's a more general problem. Sadly, it's not uncommon for components like databases to be treated on a low-effort basis: just a thing you plug in and it works. But it obviously isn't.

raxxorraxor|3 months ago

I don't think these are realistic requirements for any engineered system, to be honest. What is realistic is to have contingencies for such cases, which are simply errors.

But the case for Cloudflare here is complicated. Every engineer is very free to make a better system though.

polack|3 months ago

What is not realistic? To do simple input validation on data that has the potential to break 20% of the internet? To have a system in place to roll back to the last known good state when things crash?

Cloudflare builds a global-scale system, not an iPhone app. Please act like it.
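
For what it's worth, the roll-back-to-last-known-good behaviour I'm asking for above fits in a few lines of Rust. The types and the cap here are invented for illustration only.

    const MAX_FEATURES: usize = 200; // assumed hard cap

    #[derive(Clone, Debug)]
    struct FeatureFile(Vec<String>);

    struct ConfigHolder {
        current: FeatureFile, // last config that passed validation
    }

    impl ConfigHolder {
        /// Only swap in the new config if it validates; otherwise keep serving with
        /// the previous good one and surface the failure instead of crashing.
        fn try_update(&mut self, candidate: FeatureFile) -> Result<(), String> {
            if candidate.0.len() > MAX_FEATURES {
                return Err(format!(
                    "rejected config with {} features (max {}); keeping last good config",
                    candidate.0.len(),
                    MAX_FEATURES
                ));
            }
            self.current = candidate;
            Ok(())
        }
    }

    fn main() {
        let mut holder = ConfigHolder { current: FeatureFile(vec!["known_good".into()]) };
        let bad = FeatureFile((0..=MAX_FEATURES).map(|i| format!("f{i}")).collect());
        if let Err(e) = holder.try_update(bad) {
            eprintln!("{e}");
        }
        println!("still serving with {:?}", holder.current);
    }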

vsl|3 months ago

> Why did they think this was a cyberattack and only after two hours realize it was the config file?

They explain that at some length in TFA.

jve|3 months ago

> I'm migrating my customers off Cloudflare.

Is that an overreaction?

Name one global, redundant system that has not (yet) failed.

And if you used Cloudflare to protect against botnets and now move off Cloudflare... you are vulnerable and may experience more downtime if you cannot swallow the traffic.

I mean, no service has 100% uptime; it's just that some have more nines than others.

Carriethebest|3 months ago

There are many self-hosted alternatives for protecting against botnets. We don't have to use Cloudflare. Everything is under their control!

nijave|3 months ago

We had better uptime with AWS WAF in us-east-1 than we've had in the last 1.5 years of Cloudflare.

I do like Cloudflare's flat cost and feature set better, but they have quite a few outages compared to other large vendors--especially with Access (their zero-trust product).

I'd lump them into GitHub levels of reliability

We had a comparable but slightly higher quote from an Akamai VAR.

polack|3 months ago

Yes, it's probably an overreaction.

But at the same time, what value do they add if they:

* Took down the customers' sites due to their own bug.

* Never protected against an attack that our infra could not have handled by itself.

* Don't think that they will be able to handle the "next big ddos" attack.

It's just an extra layer of complexity for us. I'm sure there are attacks that they could help our customers with; that's why we're using them in the first place. But until our customers are hit with multiple DDoS attacks that we cannot handle ourselves, it's just not worth it.

tete|3 months ago

I agree. I think the comments about how "it is fine, because so many things had to fail" do not apply in this case.

It's not that many things had to fail; it's that many obvious things weren't done. It would be a valid excuse if many "exotic" scenarios had to align, not when obvious error cases weren't handled and changes weren't tested.

While having wrong first assumptions is just how things work when you try to analyze the issue[1], not testing changes before production is just stupidity and nothing else.

The story would be different if, e.g., multiple unlikely, hard-to-track things had happened at once without producing any clearly linkable event, something that would also have happened in staging. Most of the things mentioned here could essentially be statically checked. This is the prime example of the kind of failure you want as a tech person, because it's not hard to prevent compared to the many scenarios where you are balancing likelihoods, timings, etc.

You don't think someone is a great plumber because they forgot their tools, missed the big hole in the pipe, and rang the wrong doorbell, and all of these things failed together. You think someone is a good plumber if they tell you they have to go back to fetch a bulky specialized tool because this is the rare case in which they need it, but that they could also do this other thing in this specific case. They are great plumbers if they tell you how this happened in the first place and how to fix it. They are great plumbers if they manage to fix something outside of their usual scope.

Here pretty much all of the things that you pay them for failed. At a large scale.

I am sure there are reasons for this that we don't know about, and I hope that Cloudflare can fix them. Be it management focusing on the wrong things, be it developers not being in the right position or not annoyed enough to care, or something else entirely. However, not doing these things is (likely) a sign that they are currently not in a state to create reliable systems - at least none reliable enough for what they are doing. It would be perfectly fine if they ran a web shop or something, but when, as we have just experienced, many other companies rely on you being up or their stuff fails, then maybe you should not run a company with products like "Always Online".

[1] And it should make you adapt your process for analyzing issues, e.g. making sure config changes are "very loud" in monitoring. A config change is one of the most easily tracked things that can go wrong, and it can be mapped to a point in time relatively easily compared to many other causes.
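
As a rough idea of what "very loud" could mean in practice: every applied config change emits one unmistakable, machine-parsable event, so an incident timeline can immediately be lined up against config deploys. The names in this sketch are made up.

    use std::time::{SystemTime, UNIX_EPOCH};

    // Emit one loud, structured line per applied config change; a real system would
    // also bump a metric (e.g. a config_changes_total counter) and alert on anomalies.
    fn announce_config_change(component: &str, old_version: u64, new_version: u64) {
        let ts = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_secs())
            .unwrap_or(0);
        println!(
            "CONFIG_CHANGE ts={ts} component={component} old_version={old_version} new_version={new_version}"
        );
    }

    fn main() {
        announce_config_change("bot-management-features", 41, 42);
    }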

kosolam|3 months ago

So where are you migrating to?