_bent | 1 month ago
Because right now you are triggering the cookie banner reflex, where a user just instinctively dismisses any warning because they want to get on with their work and avoid breaking their flow state.
There should also probably be some more context in the warning text on what a malicious repo could do, because clearly people don't understand why you are being asked whether you trust the authors.
And while you're at it, maybe add some "virus scanner" that can read through the repo and flag malicious-looking tasks & scripts to warn the user. This would be "AI" based, so surely someone could even get a job promotion out of leading the initiative :)
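For what it's worth, you don't even need AI for a first pass. A minimal sketch of such a scanner (hypothetical, not an actual VS Code feature; the pattern list is illustrative, not exhaustive): it parses a repo's `.vscode/tasks.json` and flags tasks that are configured with `"runOn": "folderOpen"`, which makes them execute as soon as the folder is trusted, plus commands containing suspicious-looking substrings.

```python
import json
import pathlib

# Illustrative substrings that look suspicious in a shell command (not exhaustive).
SUSPICIOUS_PATTERNS = ["curl", "wget", "Invoke-WebRequest", "base64", "chmod +x"]

def scan_tasks(repo_root: str) -> list[str]:
    """Flag tasks in .vscode/tasks.json that auto-run or shell out suspiciously."""
    findings: list[str] = []
    tasks_file = pathlib.Path(repo_root) / ".vscode" / "tasks.json"
    if not tasks_file.is_file():
        return findings
    try:
        config = json.loads(tasks_file.read_text())
    except (OSError, json.JSONDecodeError):
        # Real tasks.json files may be JSONC (with comments); a fuller scanner
        # would strip comments instead of giving up here.
        return findings
    for task in config.get("tasks", []):
        label = task.get("label", "<unnamed>")
        command = str(task.get("command", ""))
        # Tasks with "runOn": "folderOpen" execute as soon as the folder is trusted.
        if task.get("runOptions", {}).get("runOn") == "folderOpen":
            findings.append(f"task '{label}' runs automatically on folder open")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern in command:
                findings.append(f"task '{label}' command contains '{pattern}'")
    return findings
```

A real version would also look at `.vscode/settings.json`, launch configs, git hooks, etc., but even this much would give the trust dialog something concrete to show the user.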
Tyriar | 1 month ago
It was a long time ago this was added (maybe 5 years?), but I think the reasoning was that since our core competency is editing code, opening a repo should make that work well. The expectation is that most users should trust almost all their windows; it's an edge case for most developers to open and browse unfamiliar codebases that could contain such attacks. Trust also affects not just code editing but things like workspace settings, so the editor could behave radically differently once you trust a workspace.
You make a good point about the cookie banner reflex, but you don't need to hit "accept all" on those either.
dwallin | 1 month ago
Trust in code operates on a spectrum, not a binary. Different codebases have vastly different threat profiles, and this approach does close to nothing to accommodate that.
In addition, codebases change over time, and full auditing is near impossible. Even if you manually audit the code, most code is constantly changing: you can pull an update from git, and the audited repo you trusted may no longer be trustworthy.
An up-front, binary, and persistent trust-or-don't-trust model isn't a particularly good match for either user behavior or the potential threats most users will face.
ablob | 1 month ago