nicknow|2 years ago
Think about it: first you need a race condition, and that race condition has to produce the unexpected result. That alone, assuming this code has been tested and is frequently used, is probably less than a 10% chance (if it happened frequently, someone would have noticed). Then you need an engineer to decide they need this particular crash dump. Then you need your credential-scanning software (which, again, presumably usually catches this stuff) to miss this particular credential. Then you need a compromised account with network access, whose user has access to this crash dump, and the attacker has to find it and grab it.
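The chain-of-unlikely-events reasoning above can be made concrete: if you assume (as the comment implicitly does) that the steps are independent, the chance of the whole chain is just the product of the per-step probabilities. The figures below are purely illustrative, not from the incident report:

```python
# Illustrative per-step probabilities for the chain of events described
# above (hypothetical numbers, NOT from Microsoft's report).
steps = {
    "race condition fires and leaks the key":      0.10,
    "engineer pulls this particular crash dump":   0.05,
    "credential scanner misses the key":           0.05,
    "attacker compromises an account with access": 0.02,
}

joint = 1.0
for step, p in steps.items():
    joint *= p

# Under the independence assumption the full chain is vanishingly rare
# (roughly 5e-06 with these numbers) -- which is also why the reply
# below matters: a targeted attacker does not sample these events at
# random, so simply multiplying them understates the real risk.
print(f"joint probability: {joint:.0e}")
```

The design takeaway is that defenders should not price risk by multiplying step probabilities when an adversary can deliberately force several of the "coin flips".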
But even then you should be safe, because the key is old and is only good for getting into consumer email accounts...except you have a bug that accepts the expired key AND a bug that didn't reject this consumer signing key for a token accessing corporate email accounts.
This is a really good systems engineering lesson. Try all you want; eventually enough small things will add up to cause a catastrophic result. The lesson is, to the extent you can, engineer things so that when they blow up, the blast radius is limited.
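The "limit the blast radius" point maps directly onto the two validation bugs described above: a token check that enforces both key expiry and key scope would have stopped the stolen consumer key. A minimal sketch (all names and values are hypothetical, not Microsoft's actual token pipeline):

```python
import time

# Hypothetical signing-key metadata; a real system would carry this in
# its key store alongside the key material.
SIGNING_KEYS = {
    "key-2016": {"expires": 1461801600, "scope": "consumer"},    # long expired
    "key-2023": {"expires": 1735689600, "scope": "enterprise"},
}

def validate_token(key_id, audience, now=None):
    """Reject a token unless its signing key is current AND scoped to
    the audience it is being presented for. Either check alone would
    have blocked the attack described above; per the post-mortem, the
    pipeline missed both."""
    now = time.time() if now is None else now
    key = SIGNING_KEYS.get(key_id)
    if key is None:
        return False
    if now > key["expires"]:          # check #1: expiry
        return False
    if key["scope"] != audience:      # check #2: consumer key != enterprise audience
        return False
    return True

# The old consumer key is rejected for corporate mail on both grounds:
assert not validate_token("key-2016", "enterprise", now=1690000000)
assert validate_token("key-2023", "enterprise", now=1690000000)
```

The blast-radius framing is exactly this: each check is a bulkhead, and the incident required every bulkhead to fail at once.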
rdtsc|2 years ago
With the caveat that in security, the eventual failure doesn't arrive as a random process; it is actively targeted and exploited. Attackers are not random processes flipping coins; they can flip a coin that reliably lands on "heads", in their favor.
The post-mortem presents the results as if events unfolded as a random set of unfortunate circumstances: the attacker just happened to work for Microsoft, there just happened to be a race condition, a crash just happened to occur, and the attacker just happened to find the crash dump somewhere. Consider even the initial "race condition" bug: it might have been inserted deliberately. The crash could have been triggered deliberately. The attacker may have known where the crash dump would appear and been waiting to grab it. The attacker may have had accomplices.
cathalc|2 years ago
Public RCAs are nothing more than carefully curated PR stunts to calm customers. You can be sure the internal RCA is a lot more damning.
rawling|2 years ago
Does it say that?
> the Storm-0558 actor was able to successfully compromise a Microsoft engineer’s corporate account
hulitu|2 years ago
The Microsoft ecosystem looks like a Lego car built by the neighborhood kids, everybody bringing something from home and smashing it together.
xwolfi|2 years ago
Why isn't it masking before writing to disk? God only knows.
michaelt|2 years ago
Crash handlers don't know what state the system will be in when they're called. Will we be completely out of memory, so even malloc calls have started failing and no library is safe to call? Are we out of disk space, so we maybe can't write our logs out anyway? Is storage impaired, so we can write but only incredibly slowly? Is there something like a garbage collector that's trying to use 100% of every CPU? Are we crashing because of a fault in our logging system, which we're about to log to, giving us a crash in a crash? Does the system have an alarm or automated restart that won't fire until we exit, which our crash handler delays?
It's pretty common to keep it simple in the crash handler.
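One common compromise, consistent with keeping the crash handler simple, is to do the secret-scrubbing in a separate post-processing step after the dump is written, before it is stored or shared. A rough sketch (the patterns and workflow are illustrative, not any vendor's actual pipeline):

```python
import re

# Byte patterns that look like credentials. A real credential scanner
# (the kind the thread says "presumably usually catches stuff") uses
# far richer rules; these two are illustrative only.
SECRET_PATTERNS = [
    re.compile(rb"-----BEGIN [A-Z ]*PRIVATE KEY-----.*?"
               rb"-----END [A-Z ]*PRIVATE KEY-----", re.S),
    re.compile(rb"(?i)(api[_-]?key|secret|password)\s*[=:]\s*\S+"),
]

def scrub_dump(raw: bytes) -> bytes:
    """Replace anything secret-looking with a fixed marker.

    Runs OUTSIDE the crash handler, on the already-written dump, so the
    handler itself can stay minimal and async-signal-safe while secrets
    are still removed before the dump leaves the crash environment."""
    for pat in SECRET_PATTERNS:
        raw = pat.sub(b"[REDACTED]", raw)
    return raw

dump = b"stack=0xdeadbeef password=hunter2 frame=main"
clean = scrub_dump(dump)
assert b"hunter2" not in clean and b"[REDACTED]" in clean
```

The design trade-off: pattern-based scrubbing can miss binary-encoded key material (apparently the failure mode in this incident), which is why it complements rather than replaces moving dumps out of less-secured environments.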
paganel|2 years ago
Which probably means that having half (or more) of the Western business world relying on Outlook.com is a very wrong thing to have in place. But since the current money incentives are focused neither on resilience nor on breaking up super-centralized entities like Outlook.com, I'm pretty sure we'll keep seeing events like this one well into the future.
anagpal|2 years ago
"reducing your blast radius" is never truly finished, so how do you know what is sufficient, or when the ROI on investing time/money is still positive?