
dane-pgp | 2 years ago

The first step in solving the trust problem is solving the identity problem. At the very least, once you've got cryptographic identities for entities involved in your supply chain, you can use a TOFU policy and check whenever an identity changes.
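A TOFU policy over package-signing keys can be sketched in a few lines. This is an illustrative example only, assuming an in-memory store keyed by package name; the names and return values are invented:

```go
package main

import "fmt"

// tofuStore pins the first key fingerprint seen for each package
// (trust on first use); any later mismatch is flagged for review.
type tofuStore map[string]string

func (s tofuStore) check(pkg, fingerprint string) string {
	pinned, seen := s[pkg]
	switch {
	case !seen:
		s[pkg] = fingerprint // first use: trust and pin
		return "pinned"
	case pinned == fingerprint:
		return "ok"
	default:
		return "CHANGED" // identity changed: investigate before trusting
	}
}

func main() {
	store := tofuStore{}
	fmt.Println(store.check("leftpad", "abc123")) // pinned
	fmt.Println(store.check("leftpad", "abc123")) // ok
	fmt.Println(store.check("leftpad", "evil99")) // CHANGED
}
```

A real implementation would persist the store and verify actual signatures, but the policy itself is just this comparison.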

Simple operations like rotating a key shouldn't trigger any security warnings, as long as the new key is signed by the old one, and even adding new people to a team should happen seamlessly if (a majority of) the existing team members approve the new identity being added.

Of course it doesn't solve key compromise, or someone selling their keys to someone else, but with long-lived (even pseudonymous) identities, it becomes possible to reason about the trust level of packages just based on how long an identity has been used without being compromised.

No system is perfect, and there's still a long way to go, but the existing systems make the remaining problems more tractable, and already increase the cost for attackers, which should reduce attacks.


remram | 2 years ago

> The first step in solving the trust problem is solving the identity problem

I disagree entirely. Knowing that the random "leftpad" library you pulled was in fact authored by "John Brown, 46 years old, from Milwaukee" does absolutely nothing for your software security.

The only way to audit your dependencies is to actually have someone you trust (e.g. works for you) go and audit your dependencies. The entire system is built on a broken premise.

dane-pgp | 2 years ago

I'm glad you agree that knowing someone's name, age, and address doesn't prove their trustworthiness, because I don't want trust decisions to be dependent on threats of state-backed violence or mob vigilantism.

It is possible to build up trust in an identity based on how long that identity has been used, and the "transitivity of trust" principle. So you wouldn't trust someone because "John sounds like a trustworthy name", and instead you'd look at how long the author's key had been associated with the library, and whether their key had previously been endorsed on other people's projects (for example having their PRs reviewed and accepted).
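The heuristic described here (trust grows with how long a key has been associated with a project, and with endorsements from already-trusted keys, such as reviewed and accepted PRs) could be caricatured as a score. Everything below is invented for illustration; the weights have no basis beyond making the comparison concrete:

```go
package main

import "fmt"

// identity summarizes the observable history of a signing key.
// The fields and weights are illustrative assumptions only.
type identity struct {
	yearsActive  int // years the key has signed releases uncompromised
	endorsements int // accepted, reviewed PRs signed by this key
}

// trustScore weights longevity and endorsements from trusted peers.
func trustScore(id identity) int {
	return id.yearsActive*2 + id.endorsements*3
}

func main() {
	veteran := identity{yearsActive: 8, endorsements: 5}
	newcomer := identity{yearsActive: 0, endorsements: 0}
	fmt.Println(trustScore(veteran) > trustScore(newcomer)) // true
}
```

The point is not the formula but that both inputs are observable from public history, without knowing anyone's legal name.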

Admittedly this introduces a new danger: the social graphs could become very dangerous honeypots of metadata, especially if we start letting employers vouch for their employees. But the ultimate goal here should be to use something like Verifiable Credentials with zero-knowledge proofs, which would allow very strong probabilistic arguments about whether an author (and all the code reviewers) have suddenly gone rogue and decided to burn their hard-earned reputations.

pharmakom | 2 years ago

I totally disagree. If John Brown is a US citizen, works for a major tech company, etc., I feel more comfortable than if it's some anime avatar, location unknown, etc. Risk is a gradient, and security at enterprise scale is a huge challenge. This helps move in the right direction. It would be better (of course) to review every line of every package, but what’s the timeline on a typical org achieving that?