It's something we debated in our team: if there's an API that returns data based on filters, what's the better behavior if no filters are provided - return everything or return nothing?
The consensus was that returning everything is rarely what's desired, for two reasons: first, if the system grows, allowing API users to return everything at once can be a problem both for our server (lots of data in RAM when fetching from the DB => OOM, and additional stress on the DB) and for the user (the same problem on their side). Second, it's easy to forget to specify filters, especially in cases like "let's delete something based on some filters."
So the standard practice now is to return nothing if no filters are provided, and we pay attention to it during code reviews. If the user does really want all the data, you can add pagination to your API. With pagination, it's very unlikely for the user to accidentally fetch everything because they must explicitly work with pagination tokens, etc.
Another option, if you don't want pagination, is to have a separate method named accordingly, like ListAllObjects, without any filters.
Returning an empty result in that case may cause a more subtle failure. I would think returning an error would be a bit better, as it would clearly communicate that the caller called the API endpoint incorrectly. If it's HTTP, a 400 Bad Request status code would seem appropriate.
> allowing API users to return everything at once can be a problem both for our server (lots of data in RAM when fetching from the DB => OOM, and additional stress on the DB)
You can limit stress on RAM by streaming the data. You should ideally stream rows for any large dataset. Otherwise, like you say you are loading the entire thing into RAM.
How about returning an error? It's the generic "client sent something wrong" bucket. Missing a required filter param is unambiguously a client mistake according to your own docs/contract → client error → 4xx family → 400 is the safest/default member of that family.
I like your thought process around the ‘empty’ case. While the opposite of a filter is no filter, to your point, that is probably not really the desire when it comes to data retrieval. We might have to revisit that ourselves.
Insufficient mock data in the staging environment? Like no BYOIP prefixes at all? Since even one prefix should have shown that it would be deleted by that subtask...
From all the recent outages, it sounds like Cloudflare is barely tested at all. Maybe they have lots of unit tests etc, but they do not seem to test their whole system... I get that their whole setup is vast, but even testing that subtask manually would have surfaced the bug
Testing the "whole system" for a mature enterprise product is quite difficult. The combinatorial explosion of account configurations and feature usage becomes intractable on two levels: engineers can't anticipate every scenario they need their tests to cover (because the product is too big to understand the whole of), and even if comprehensive testing were possible, it would be impractical on some combination of time, flakiness, and cost.
I think Cloudflare does not sufficiently test lesser-used options. I lurk in the R2 Discord and a lot of users seem to have problems with custom domains.
It was also merged 15 days prior to the production release... however, you're spot on with the empty test. That's a basic scenario; if it returned everything... it's like, oh no.
Just crazy. Why does a staging environment matter? They should be running some integration tests against, e.g., an in-memory database for these kinds of tasks, surely?
In the meantime, as you say, we’re now going through and evaluating other vendors for each component that CF provides - which is both unfortunate, and a frustrating use of time, as CF’s services “just worked” very well for a very long time.
I have many things dependent on Cloudflare. That makes me root for Cloudflare and I think I'm not the only one. Instead of finding better options we're getting stuck on an already failing HA solution. I wonder what caused this.
> Because the client is passing pending_delete with no value, the result of Query().Get(“pending_delete”) here will be an empty string (“”), so the API server interprets this as a request for all BYOIP prefixes instead of just those prefixes that were supposed to be removed. The system interpreted this as all returned prefixes being queued for deletion.
if v := req.URL.Query().Get("pending_delete"); v != "" {
    // ignore other behavior and fetch pending objects from the ip_prefixes_deleted table
    prefixes, err := c.RO().IPPrefixes().FetchPrefixesPendingDeletion(ctx)
    if err != nil {
        api.RenderError(ctx, w, ErrInternalError)
        return
    }
    api.Render(ctx, w, http.StatusOK, renderIPPrefixAPIResponse(prefixes, nil))
    return
}
even if the client had passed a value it would have still done exactly the same thing, as the value of "v" (or anything from the request) is not used in that block
> even if the client had passed a value it would have still done exactly the same thing, as the value of "v" (or anything from the request) is not used in that block
If they passed in any value, they would have entered the block and returned early with the results of FetchPrefixesPendingDeletion.
From the post:
> this was implemented as part of a regularly running sub-task that checks for BYOIP prefixes that should be removed, and then removes them.
They expected to drop into the block of code above, but since they didn't, they returned all routes.
Hi! I wrote this paragraph. I promise that I'm not an LLM, but I was in about hour 10 of my work day and I was asleep not long after writing this. Any failures in comprehensibility are from exhaustion.
(Other comments have explained the bug so I won't repeat them)
I do not work in the space at all, but it seems like Cloudflare has been having more network disruptions lately than they used to. To anyone who deals with this sort of thing, is that just recency bias?
It is not. They went about 5 years without one of these, and had a handful over the last 6 months. They're really going to need to figure out what's going wrong and clean up shop.
been at cf for 7 yrs but thinking of gtfo soon. the ceo is a manchild, new cto is an idiot, rest of leadership was replaced by yes-men, and the push for AI-first is a disaster. c levels pretend they care about reliability but pressure teams to constantly ship, the cto vibe-codes terraform changes without warning anyone, and it's overall a bigger and bigger mess
even the blog, that used to be a respected source of technical content, has morphed into a garbage fire of slop and vaporware announcements since jgc left.
The one redeeming feature of this failure is staged rollouts. As someone advertising routes through CF, we were quite happy to be spared from the initial 25%.
Hindsight is 20/20, but why not dry-run this change in production and monitor the logs/metrics before enabling it? Seems prudent for any new "delete something in prod" change.
Old tech could work around these outages. Set up GSLB at a DNS provider that does health checks, or perform your own health checks against both the origin and the CDNs and use APIs to change DNS. If the origin servers are OK and the CDN is not, automatically change DNS to a different CDN. There should be multiple probes that form a consensus. This process assumes one is managing the configurations of their CDNs through code and APIs, so that one can set up and tear down any number of CDNs on a whim.
That does mean having contracts with more than one CDN provider; however, the cost should be negotiated based on monthly volume, i.e. the CDN with the most uptime gets the most money. If an existing CDN under contract refuses to negotiate, then move some non-critical-path services to them and let that contract expire. Institute a company-wide policy of never returning to a vendor whose contract was intentionally not renewed.
This blog post is inaccurate: the prefixes were being revoked over and over. To keep your prefixes advertised you had to have a script that would re-add them, or they would be withdrawn again. The way they seemed to word it is really dishonest.
> Because the client is passing pending_delete with no value, the result of Query().Get(“pending_delete”) here will be an empty string (“”), so the API server interprets this as a request for all BYOIP prefixes instead of just those prefixes that were supposed to be removed.
Lmao, iirc long time ago Google's internal system had the same exact bug (treating empty as "all" in the delete call) that took down all their edges. Surprisingly there was little impact as traffic just routed through the next set of proxies.
While neither I nor the company I work for is directly impacted by this outage, I wonder how long Cloudflare can take these hits and keep apologizing for them. I truly appreciate them being transparent about it, but businesses care more about SLAs and uptime than the incident report.
I’ll take clarity and actual RCAs over Microsoft’s approach of not notifying customers and keeping their status page green until enough people notice.
One thing I do appreciate about Cloudflare is their actual use of their status page. That’s not to say these outages are okay. They aren’t. However, I’m pretty confident in saying that a lot of providers would have a big paper trail of outages if they were as honest as Cloudflare, or more so. At least from what I’ve noticed, especially this year.
Bluntly: they expended that credit a while ago. Those that can will move on. Those that can't have a real problem.
As for your last sentence:
Businesses really do care about the incident reports because they give good insight into whether they can trust the company going forward. Full transparency and a clear path to non-repetition due to process or software changes are called for. You be the judge of whether or not you think that standard has been met.
The code they posted doesn't quite explain the root cause. This is a good case study for resilient API design and testing.
They said their /v1/prefixes endpoint has this snippet:
if v := req.URL.Query().Get("pending_delete"); v != "" {
    // ignore other behavior and fetch pending objects from the ip_prefixes_deleted table
    prefixes, err := c.RO().IPPrefixes().FetchPrefixesPendingDeletion(ctx)
    [..snip..]
}
What's implied but not shown here is that the endpoint normally returns all prefixes. They modified it to return just those pending deletion when a pending_delete query string parameter is passed.
The immediate problem, of course, is that this block will never execute if pending_delete has no value. This is because Go defaults query params to empty strings, and the if statement skips that case.
Which makes you wonder: what is the value supposed to be? This is not explained. If it's supposed to be some "true"-like value, then this would work, but the implementation fails to validate that value. From this you can infer that no unit test was written to exercise it.
The post explains "initial testing and code review focused on the BYOIP self-service API journey." We can reasonably guess their tests were passing some kind of "true" value for the param, either explicitly or using a client that defaulted param values. What they didn't test was how their new service actually called it.
So, while there's plenty to criticize on the testing front, that's first and foremost a basic failure to clearly define an API contract and implement unit tests for it.
But there's a third problem, in my view the biggest one, at the design level. For a critical delete path they chose to overload an existing endpoint that defaults to returning everything. This was a dangerous move. When high-stakes data loss bugs are a potential outcome, it's worth considering a more restrictive API that is harder to use incorrectly. If they had implemented a dedicated endpoint for pending deletes, they would likely have omitted this default behavior meant for non-destructive read paths.
In my experience, these sorts of decisions can stem from team ownership differences. If you owned the prefixes service and were writing an automated agent that could blow away everything, you might write a dedicated endpoint for it. But if you submitted a request to a separate team to enhance their service to return a subset of X, without explaining the context or use case very much, they may be more inclined to modify the existing endpoint for getting X. The lack of context and communication can end up obscuring the risks involved.
Final note: It's a little odd that the implementation uses Go's "if with short statement" syntax when v is only ever used once. This isn't wrong per se but it's strange and makes me wonder to what extent an LLM was involved.
Thaxll | 17 days ago:
If not optional, then return 400; otherwise return all the results (and have pagination).
You should always have pagination in an API.
est | 17 days ago:
For me it's like `filter1=*`
otar | 18 days ago:
It's alarming already. Too many outages in the past months. CF should fix it, or it becomes unacceptable and people will leave the platform.
I really hope they will figure things out.
NinjaTrance | 18 days ago:
They definitely failed big this time.
PunchyHamster | 17 days ago:
In short, they are checking whether the string is empty, and a query string of "pending_delete" (no value) is the same as "pending_delete=": both come back as an empty string.
If they had specified `/v1/prefixes?pending_delete=potato`, it would have returned the "correct" list of objects to delete.
Or in other words: "Go has type safety? Fuck it, let's use strings like in '90s PHP apps instead."
Betelbuddy | 18 days ago:
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
candiddevmike | 18 days ago:
https://github.com/cloudflare/terraform-provider-cloudflare/...
fjoaasdfas | 17 days ago:
maybe Go could return (v string, ok bool) for this, or add proper sum types...