(no title)
Quekid5 | 3 months ago
An argument can be made that it was instrumental in bootstrapping the early Internet, but it's not really necessary these days. People should know what they're doing 35+ years on.
It is usually better to state formally, up front, exactly what is acceptable and to reject anything else out of hand. Of course, some things do need dynamic checks, e.g. ACLs and such, but that's fine... rejecting "iffy" input before we get to that stage doesn't interfere with it.
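A minimal sketch of that "declare the grammar, reject everything else" approach, using a hypothetical username field as the input (the field name and allowed grammar here are illustrative assumptions, not from any particular system):

```python
import re

# The acceptable input is stated up front as a grammar (here a regex):
# lowercase letter, then 2-31 more lowercase letters, digits, or underscores.
# Anything that doesn't match is rejected outright, never coerced or "fixed".
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")

def parse_username(raw: str) -> str:
    """Return the input unchanged if it matches the declared grammar, else raise."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError(f"rejected input: {raw!r}")
    return raw
```

Dynamic checks like ACL lookups would then run only on values that have already passed this gate, so the two stages don't interfere.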
0manrho | 3 months ago
Well yes, that's because people have been misapplying and misunderstanding it. The original idea was predicated on the concept of "assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect".
But then the "Fail Fast, Fail Often" stupidity started spreading like wildfire, and companies decided that the consequences of data breaches or other security failures were an acceptable cost of doing business (even when that wasn't true) versus the cost of actually paying dev and security teams to implement things properly. People kind of lost the plot on it: they focused only on the "be liberal in what you accept" part, went "Wow! That makes things easy", and maybe checked for the most common abuse/failure/exploit modes, if they bothered at all, only patching things retroactively as issues and exploits popped up in the wild.
Doing it correctly, like building anything robust and/or secure, is a non-trivial task.
unknown | 3 months ago
[deleted]