- They don't do enough or the right kind of smoke tests.
- They don't do exponential-canary deployments with an ability to rollback, and instead just YOLO it.
- They don't appear to offer customer-side change control — a security or client-platform team approving updates before they apply — for software updates or for definitions (or whatever they use).
This is fundamentally laziness and/or incompetency.
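For reference, the exponential-canary rollout the list above describes can be sketched in a few lines. This is a hypothetical illustration of the pattern, not CrowdStrike's actual pipeline; the wave sizes, growth factor, and health-check callback are all invented:

```python
def canary_waves(total_hosts, start=100, factor=10):
    """Yield exponentially growing deployment waves until all hosts are covered."""
    deployed, size = 0, start
    while deployed < total_hosts:
        wave = min(size, total_hosts - deployed)
        yield wave
        deployed += wave
        size *= factor

def rollout(total_hosts, healthy):
    """Deploy in waves; stop and roll back the moment a wave's health check fails.

    `healthy` is a hypothetical callback: given the number of hosts updated so
    far, it returns True if crash/telemetry rates still look normal.
    """
    deployed = 0
    for wave in canary_waves(total_hosts):
        deployed += wave
        if not healthy(deployed):
            # Revert here; blast radius is `deployed`, not the whole fleet.
            return ("rolled_back", deployed)
    return ("complete", deployed)
```

With a 100-host first wave and a 10x growth factor, a defect that bluescreens every machine is caught after the first wave instead of after all 8.5 million.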
Yeah, I think I'm getting more detailed analysis on social media from strangers, which I know I should take with a grain of salt. But I guess I'm expecting a lot more than "a file caused this" from the company that caused this havoc.
Can someone who actually understands what CrowdStrike does explain to me why on earth they don't have some kind of gradual rollout for changes? It seems like their updates go out everywhere all at once, and this sounds absolutely insane for a company at this scale.
It sounds like Channel Files are basically just definition updates, as in normal antivirus software; they're not actually code, just data on what the software should "look out for".
And it sounds like they shipped some malformed channel file, and the software that interprets it can't handle malformed inputs and ate shit. That software happened to be kernel mode, and also marked as boot-critical, so if it falls over, it causes a BSOD and an inability to boot.
And it's kind of understandable that channel files might seem safe to update constantly without oversight, but that's just assuming that the driver that interprets the channel file isn't a bunch of dogshit code.
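Whatever the real parser looks like, the general defensive pattern the parent is gesturing at is to validate untrusted input at the boundary and fail closed rather than crash. A toy sketch — the record format here is entirely invented for illustration, not CrowdStrike's actual channel-file layout:

```python
import struct

# Hypothetical record layout: 4-byte rule id, 2-byte payload length, little-endian.
RECORD = struct.Struct("<IH")

def parse_channel_file(data: bytes):
    """Parse length-prefixed records; reject malformed input instead of crashing.

    Returns (rules, error). On any structural problem we report an error so the
    caller can keep its previous rule set, rather than letting one bad file
    take down the host.
    """
    rules, offset = [], 0
    while offset < len(data):
        if offset + RECORD.size > len(data):
            return None, "truncated record header"
        rule_id, length = RECORD.unpack_from(data, offset)
        offset += RECORD.size
        if offset + length > len(data):
            # The classic out-of-bounds-read setup: declared length > actual data.
            return None, "payload length exceeds file size"
        rules.append((rule_id, data[offset:offset + length]))
        offset += length
    return rules, None
```

The point isn't this particular format; it's that every field read from the file is bounds-checked before use, and a failure is a recoverable return value, not a fault in kernel mode.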
This "channel file" is equivalent to an AV signature file. CrowdStrike is the company; the product here is "Falcon", which does behavioral monitoring of processes both on the device and, using logs collected from the device, in the cloud.
I can see your perspective, but you should consider this: they protect this many companies, industries, and even countries at global scale, and you hadn't even heard of them in the last 15 years of their operation until this one outage.
You can't take days testing gradual rollouts for this type of content, because that's how long customers are left unprotected by it. Although the root cause is in the channel files, I feel like the driver that processes them should have been able to handle the "logic bug" in question, so we'll find out more over time, I guess.
For example, with Windows Defender, which runs on virtually all Windows systems, signature updates on billions of devices are pushed immediately (with the exception of enterprise systems, but even then there is usually not much testing of the signature files themselves, if any). As for the devops process CrowdStrike uses to test the channel files, I think it's best to leave commentary on that to actual insiders, but these updates sometimes happen several times a day and get pushed to every CrowdStrike customer.
My understanding is they basically deployed a configuration file. It seems like these files might be akin to virus signatures or other frequently updated run-time configuration.
I actually don't think it's outrageous that these files are rolled out globally, simultaneously. I'm guessing they're updated frequently and _should_ be largely benign.
What stands out to me is the fact that a bad config file can crash the system. No rollback mechanism. No safety checks. No safe failure mode. Just BSOD.
Given that the fix is simply deleting the broken file, it's astounding to me that the system's behavior is a BSOD. To me, that's more damning than a bad "software update". These files seem to change frequently; given they're on the critical path, they shouldn't have the ability to completely crash the system.
I'm more surprised at the fact that they didn't appear to have tested it on themselves first.
FWIW, at least Microsoft still "dogfoods" (and it's what coined that term), and even if the results of that aren't great, I'm sure they would've caught something of this severity... but then again, maybe not[1].
I have a friend who is a security guard at a bank in Hollywood, CA, who told me the computers at his location started going down between 12:00 and 13:00 PDT (19:00-20:00 UTC).
I don't understand CrowdStrike's rollout system, but given that people started seeing trouble earlier in the day, surely by that time they could have shut down the servers that were serving the updates, or something??
He also told me that soon after that the street outside the bank (another bank across the street, a hospital several blocks down) was lined with police who started barring entry to the buildings unless people had bank cards. By the time I woke up this morning technical people already knew basically what was going on, but I really underestimated how freaked out the average person must have been today.
> The update that occurred at 04:09 UTC was designed to target newly observed, malicious named pipes being used by common C2 frameworks in cyberattacks
The obvious joke here is CS runs the malicious C2 framework. So the system worked as designed: it prevented further execution and quarantined the affected machines.
But given they say that’s just a configuration file (then why the hell is it suffixed with .sys?), it’s actually plausible. A smart attacker could disguise themselves and use the same facilities as CS; CS tries to block them and blocks itself in the process?
> Systems that are not currently impacted will continue to operate as expected, continue to provide protection, and have no risk of experiencing this event in the future.
Given that this incident has now happened twice in the space of months (first on Linux, then on Windows), and that as stated in this very post the root cause analysis is not yet complete, I find that statement of “NO RISK” very hard to believe.
This seems very unsatisfying. Not sure if I was expecting too much, but that’s a lot of words for very little information.
I’d like more information on how these Channel Files are created, tested, and deployed. What’s the minimum number of people that can do it? How fast can the process go?
I'm not a big expert, but honestly this reads like a bunch of garbage.
> Although Channel Files end with the SYS extension, they are not kernel drivers.
OK, but I'm pretty sure usermode software can't cause a BSOD. Clearly something running in kernel mode ate shit, and that brought the system down. Just because the channel file isn't kernel mode doesn't mean your kernel-mode software isn't culpable. This just seems like a sleazy dodge.
It doesn't read to me as trying to dodge anything. They aren't saying "they're not kernel drivers, so everything is OK", they're saying "seeing the .sys on the filenames, you might think they're kernel drivers, but as it happens they're something else".
(Maybe there's some subtext that I'm missing, but I don't see how saying "these aren't kernel drivers" makes them look any better, and I do see why they might say it to be informative, so it looks to me like they're doing the latter.)
> we are doing a "root cause analysis to determine how this logic flaw occurred"
That's going to find a cause: a programmer made an error. That's not the root of the problem. The root of the problem is allowing such an error to be released (especially obvious because of its widespread impact).
I'm no kernel expert, but people are saying Microsoft deserves some blame for not exposing necessary functionality to user space, requiring the use of a very-unsafe kernel driver.
Linux provides eBPF and macOS provides system extensions.
I'll also add that Windows itself heavily prioritizes backwards-compatibility over security, which leads companies to seek out third-party solutions for stopping malware instead of design-based mitigations being built into Windows.
Very weak and overly corporate level of ass-covering. And it doesn't even come close to accomplishing that.
They should just let the EM of the team involved publish the detailed response that I'm sure is floating around internally. Just own the problem and address the questions rather than playing at politics, quite poorly.
The lower you go in the system architecture, the greater the impact when defects occur. In this instance, the CrowdStrike agent is embedded within the Windows kernel and registered with the Kernel Filter Engine, illustrated in the diagram below.
If the initial root cause analysis is correct, CrowdStrike pushed out a bug that could easily have been stopped had software engineering best practices been followed: Unit Testing, Code Coverage, Integration Testing, Definition of Done.
To my biased ears, it sounds like these configuration-like files are a borderline DSL that maybe isn't being treated as such. I feel like that's a common issue: people assume that because you call it a config file it's not a language, and so it doesn't get treated as actual code that gets interpreted.
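One concrete version of "treat the config like code" is to give it an explicit instruction set and a compile step that hard-fails on anything outside it, checked at ingest time rather than at detection time in the kernel. A hypothetical sketch — the opcodes and rule shape here are invented, not Falcon's actual format:

```python
# Hypothetical mini-DSL: each rule is an (opcode, argument) pair.
OPCODES = {"match_path", "match_hash", "match_pipe_name"}

def compile_rules(rules):
    """Validate and 'compile' rules up front so a malformed one never reaches runtime."""
    compiled = []
    for i, rule in enumerate(rules):
        if not (isinstance(rule, tuple) and len(rule) == 2):
            raise ValueError(f"rule {i}: not an (opcode, arg) pair")
        opcode, arg = rule
        if opcode not in OPCODES:
            raise ValueError(f"rule {i}: unknown opcode {opcode!r}")
        if not isinstance(arg, str) or not arg:
            raise ValueError(f"rule {i}: argument must be a non-empty string")
        compiled.append((opcode, arg))
    return compiled
```

Once the config has a grammar and a compiler, it can also have the things real languages get: a test suite of known-good and known-bad inputs, and a fuzzer aimed at the interpreter.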
Can someone aim me at some RTFM that describes the sensor release and patching process, please? I'm lost trying to understand the following.

When a new version 'n' of the sensor is released, we upgrade a selected batch of machines and do some tests (mostly waiting around :-)) to see that all is well. Then we upgrade the rest of the fleet by OU. However, 'cause we're scaredy cats, we leave some critical kit on n-1 for longer, and some really critical kit even on n-2. (Yeah, I know there's a risk in not applying patches, but there are other outage-related risks that we balance; forget that for now.)

Our assumption was that n-1, n-2, etc. are old, stable releases, so when fan and shit collided yesterday, we just hopped on the console, did a policy update to revert to n-2, and assumed we'd dodged the bullet. But of course, that failed... you know what they say about assumptions :-)

So in a long-winded way, that leads to my three questions: Why did the 'content update' take out not just n but n-whatever sensors equally as effectively? Are the n-whatever versions not actually stable? And if the n-whatever versions are not actually stable and are being patched, what's the point of the versioning? Cheers!
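My understanding of the answer (hedged: this is how the incident has been widely described, not official documentation) is that there are two delivery paths: sensor *code* honors the customer's n/n-1/n-2 pin, while *content* (channel files) is pushed to every supported sensor version at once, which is why reverting versions didn't help. A toy model of that split; the version strings are hypothetical, though C-00000291 is the channel file named in the incident:

```python
def effective_install(pinned_sensor, latest_sensor, latest_channel_files):
    """Illustrative model of the two delivery channels: the customer's version
    pin gates only the sensor code path; content updates bypass it entirely."""
    sensor = pinned_sensor or latest_sensor   # pin applies to code...
    return {
        "sensor": sensor,
        "channel_files": latest_channel_files,  # ...but not to content
    }
```

If that model is right, the versioning still has a point — it gates driver/agent code changes — but it provides no protection at all against a bad content push.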
You are probably not the target market for this product, then. The real product CrowdStrike Falcon sells is regulatory compliance, and it's a de facto requirement in many regulated industries, including banking.
By the way, Falcon can be and is deployed to Linux and macOS hosts in these organisations too; it's just that this particular incident only affected Windows.
dang|1 year ago
CrowdStrike Update: Windows Bluescreen and Boot Loops - https://news.ycombinator.com/item?id=41002195 - July 2024 (3590 comments)
PedroBatista|1 year ago
Putting the actual blast radius aside, this whole thing seems a bit amateurish for a "security company" that pulls the contracts they do.
hi-v-rocknroll|1 year ago
grecy|1 year ago
The company that lobbied the hardest and paid the most in bribes got the contracts.
dev-jayson|1 year ago
tail_exchange|1 year ago
hatsunearu|1 year ago
notepad0x90|1 year ago
SkyPuncher|1 year ago
userbinator|1 year ago
[1] https://news.ycombinator.com/item?id=18189139
Zamiel_Snawley|1 year ago
Everyone has a buggy release at some point, but impacting global customers at this level is damn near unforgivable.
Heads need to roll for this oversight.
Murky3515|1 year ago
jefurii|1 year ago
unknown|1 year ago
[deleted]
rdtsc|1 year ago
nonfamous|1 year ago
ungreased0675|1 year ago
hatsunearu|1 year ago
> Although Channel Files end with the SYS extension, they are not kernel drivers.
gjm11|1 year ago
SoftTalker|1 year ago
unknown|1 year ago
[deleted]
patrickthebold|1 year ago
> We understand how this issue occurred and we are doing a thorough root cause analysis to determine how this logic flaw occurred.
There are always going to be flaws in the logic of the code; the trick is to keep single errors from being so catastrophic.
chris_nielsen|1 year ago
How a common bug was rolled out globally with no controls, testing, or rollback strategy is the right question.
pneumonic|1 year ago
kyriakos|1 year ago
cyrnel|1 year ago
sgammon|1 year ago
jchiu1106|1 year ago
unknown|1 year ago
[deleted]
augustk|1 year ago
Zamiel_Snawley|1 year ago
isthisreallife2|1 year ago
canistel|1 year ago
Must be corrected to "the issue is not the result of or related to a cyberattack by external agents".
geuis|1 year ago
0nate|1 year ago
https://www.nathanhandy.blog/images/blog/OSI%20Model%20in%20...
automatoney|1 year ago
bryan_w|1 year ago
timbelina|1 year ago
xyst|1 year ago
If I ever get a sales pitch from these shit brains, they will get immediately shut down.
Also, fuck MS and their awful operating system that then spawned this god-awful product/company known as “CrowdStrike Falcon”.
robjan|1 year ago
hello_moto|1 year ago
1. Critical infrastructure around the globe seemed to depend on CrowdStrike.
2. "If I ever get a sales pitch from..." suggests you are in an environment that is far from critical infrastructure.
userbinator|1 year ago
bkjshki|1 year ago
[deleted]