> That repo still contained a GitHub Actions secret — an npm token with broad publish rights.
One of the advantages of Trusted Publishing [0] is that we no longer need long-lived tokens with publish rights. Instead, tokens are generated on the CI VM and are valid for only 15 minutes.
This has already been implemented in several ecosystems (PyPI, npm, Cargo, Homebrew), and I encourage everyone to use it; it actually makes publishing a bit _easier_.
More importantly, if the documentation around this still feels unclear, don’t hesitate to ask for help. Ecosystem maintainers are usually eager to see wider adoption of this feature.
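The mechanics behind Trusted Publishing can be sketched roughly as follows: the registry holds no long-lived secret at all; it checks the claims of a CI-issued OIDC token against a per-package publisher configuration and, if they match, mints a publish token valid for 15 minutes. This is a conceptual sketch only — the names (`TRUSTED_PUBLISHERS`, `mint_publish_token`) are illustrative, not npm's or PyPI's actual API.

```python
# Conceptual sketch of a registry's Trusted Publishing check.
# All names are hypothetical; real registries verify the OIDC token's
# signature against the CI provider's keys before trusting any claims.

TRUSTED_PUBLISHERS = {
    "example-pkg": {"repository": "example-org/example-pkg",
                    "workflow": "release.yml"},
}

TOKEN_TTL_SECONDS = 15 * 60  # short-lived: valid for only 15 minutes


def mint_publish_token(package: str, oidc_claims: dict, now: float) -> dict:
    """Exchange verified OIDC claims for a short-lived publish token."""
    cfg = TRUSTED_PUBLISHERS.get(package)
    if cfg is None:
        raise PermissionError("no trusted publisher configured")
    if (oidc_claims.get("repository") != cfg["repository"]
            or oidc_claims.get("workflow") != cfg["workflow"]):
        raise PermissionError("OIDC claims do not match publisher config")
    return {"package": package, "expires_at": now + TOKEN_TTL_SECONDS}


def token_is_valid(token: dict, now: float) -> bool:
    """A stolen token is only useful until it expires."""
    return now < token["expires_at"]
```

The key property: there is nothing to steal from the repository's secrets, and anything exfiltrated from the CI run dies within minutes.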
I think the point around incorporating MFA into the automated publishing flow isn't getting enough attention.
I've got no problem with doing an MFA prompt to confirm a publish by a CI workflow - but last I looked this was a convoluted process of opening an https tunnel out (using a third-party solution) so that you could provide the code.
I'd love to see either npm or GitHub provide an easy, out-of-the-box way for me to provide/confirm a code during CI.
Publishing a package involves 2 phases: uploading the package to npmjs, and making it available to users. Right now these 2 phases are bundled together into 1 operation.
I think the right way to approach this is to unbundle uploading the packages & publishing packages so that they're available to end-users.
CI systems should be able to build & upload packages in a fully automated manner.
Publishing the uploaded packages should require a human to log into npmjs's website & manually publish the package and go through MFA.
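The proposed split can be sketched as a small state machine: CI may *upload* unattended, but nothing becomes visible to installers until a human passes an MFA check. This is a hypothetical sketch of the proposal, not npm's actual API.

```python
# Hypothetical two-phase registry: automated upload, human-gated publish.
class Registry:
    def __init__(self):
        self.staged = {}     # (name, version) -> artifact, uploaded by CI
        self.published = {}  # (name, version) -> artifact, visible to users

    def upload(self, name, version, artifact):
        """Automated phase: callable from CI with a narrowly scoped token."""
        self.staged[(name, version)] = artifact

    def publish(self, name, version, mfa_ok: bool):
        """Manual phase: requires a human who has passed an MFA check."""
        if not mfa_ok:
            raise PermissionError("MFA required to publish")
        self.published[(name, version)] = self.staged.pop((name, version))

    def resolve(self, name, version):
        """Installers only ever see published artifacts."""
        return self.published[(name, version)]
```

A stolen CI token could then stage malicious tarballs, but could never ship them to end users on its own.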
I'm feeling that maybe the entire concept of "publishing packages" is something that's not really needed? Instead, the VCS can be used as a "source of truth", with no extra publishing step required.
This is how Go works: you import by URL, e.g. "example.com/whatever/pkgname", which is presumed to be a VCS repo (git, mercurial, subversion, etc.) Versioning is done by VCS tags and branches. You "publish" by adding a tag.
While VCS repos can and have been compromised, this removes an entire attack surface from the equation. If you read every commit or a diff between two tags, then you've seen it all. No need to also diff the .tar.gz packages. I believe this would have prevented this entire incident, and I believe also the one from a few weeks ago (AFAIK that also only relied on compromised npm accounts, and not VCS?)
The main downside is that moving a repo is a bit harder, since the import path will change from "host1.com/pkgname" to "otherhost.com/pkgname", or "github.com/oneuser/repo" to "github.com/otheruser/repo". Arguably, this is a feature – opinions are divided.
Other than that, I can't really think of any advantages a "publish package"-step adds? Maybe I'm missing something? But to me it seems like a relic from the old "upload tar archive to FTP" days before VCS became ubiquitous (or nigh-ubiquitous anyway).
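The "VCS as source of truth" model above boils down to: list a repo's tags, pick the highest semantic version, done — "publishing" is just `git tag`. A minimal sketch of that resolution step (Go's real toolchain does considerably more, e.g. checksum databases; this only shows the tag-based versioning idea):

```python
# Sketch: resolve the latest release of a package directly from VCS tags,
# e.g. as fetched via `git ls-remote --tags https://example.com/whatever/pkgname`.

def parse_semver(tag: str):
    """Return (major, minor, patch) for tags like 'v1.2.3', else None."""
    if not tag.startswith("v"):
        return None
    parts = tag[1:].split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        return None
    return tuple(int(p) for p in parts)


def latest_version(tags):
    """Pick the highest release tag; non-release refs are ignored."""
    versions = [v for v in (parse_semver(t) for t in tags) if v is not None]
    if not versions:
        raise ValueError("no release tags found")
    return "v" + ".".join(str(n) for n in max(versions))
```

There is no separate artifact to upload, so there is no separate artifact to tamper with.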
> A while ago, I collaborated on angulartics2, a shared repository where multiple people still had admin rights. That repo still contained a GitHub Actions secret — an npm token with broad publish rights. This collaborator had access to projects with other people, which I believe explains some of the other 40 initial packages that were affected.
> A new Shai-Hulud branch was force pushed to angulartics2 with a malicious GitHub Actions workflow by a collaborator. The workflow ran immediately on push (it did not need review since the collaborator is an admin) and stole the npm token. With the stolen token, the attacker published malicious versions of 20 packages. Many of these are not widely used; however, the @ctrl/tinycolor package is downloaded about 2 million times a week.
I still don't get it. An admin on angulartics2 gets hacked, and his GitHub access is used to push a malicious workflow that extracts an npm token. But why would an npm token in angulartics2 have publication rights to tinycolor?
I have admin rights on someone else’s npm repo and I’ve done most of the recent releases. Becoming admin lit a fire under me to fix all of the annoying things and shitty design decisions that have been stuck in the backlog for years so most of the commits are also mine. I don’t want my name on broken code that “works”.
I had just about convinced myself that we should be using a GitHub Action to publish packages, because when publishing directly via 2FA there was always the possibility that one (or specifically I) could fuck up and publish something that wasn’t a snapshot of trunk.
But I worried about stuff like this and procrastinated on forcing the issue with the other admins. And it looks like the universe has again rewarded my procrastination. I don’t know what the answer is but giving your credentials to a third party clearly isn’t it.
> But why would an npm token in angulartics2 have publication rights to tinycolor?
Imo, this is one of the most classical ways organizations get pwned: That one sin from your youth years ago comes to bite you in the butt.
We also had one of these years ago. It wasn't the modern stack everyone was working to scan and optimize and keep us secure that allowed someone to upload stuff to our servers. It was the editor that had been replaced years and years ago (and its replacement had also been replaced); the way it was packaged wasn't seen by the build-time security scans, but eventually someone found it with a URL scan. Whoopsie.
For the last 10 years I've been advocating for manual releases.
I've encountered a lot of backlash, but is it really that alien a concept these days? CI/CD is cool, but between this and the recent CF drama it seems we have pretty solid evidence that it can lead to serious problems.
I worked at a BigBank once where deployments to production required at least five people present at a time and a lot of theatrics, but at least we knew what we were deploying.
I completely agree. You are infinitely more likely to get implicated in some widespread attack due to bugs in GitHub Actions or your automated release scripts than to have your local machine's build and signing infrastructure attacked.
I have yet to see any evidence that fancy CI/CD systems are better than good old-fashioned tarballs and detached signatures. Bonus points for distribution packaging systems, which add an additional layer of review and separate validation of releases. People seem to gloss over the fact that the "stodgy old-fashioned" rigamarole of Debian is part of the reason the entire internet didn't get pwned by the xz attack.
At the very least you should require a human to sign the blobs before the release is actually published. (This isn't always enough if the attacker can add themselves to the maintainers list and sign with their own key, which is why distribution packaging systems that maintain their own trusted copy of upstream keyrings are far preferable.)
If your threat model boils down to "if my GitHub account gets attacked or even a single API key is leaked, all of my users are fucked" then you really need to take a long look in the mirror and ask yourself if that is reasonable.
Two-factor auth for publishing is helpful, but requiring cryptographically signed approval by multiple authors would be more helpful. Then compromising a single author wouldn't be enough.
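The multi-author approval idea amounts to a threshold check: a release is only accepted with valid signatures from at least N distinct trusted maintainers. A minimal sketch — real implementations would use asymmetric signatures (e.g. ed25519); HMAC here just stands in for "a check only a keyholder can pass", and the keys are made up:

```python
import hmac
import hashlib

# Hypothetical maintainer keys; in practice these would be public keys
# in a keyring, not shared secrets.
MAINTAINER_KEYS = {"alice": b"alice-secret", "bob": b"bob-secret"}


def sign(author: str, artifact: bytes) -> bytes:
    """Stand-in for an author signing the release artifact."""
    return hmac.new(MAINTAINER_KEYS[author], artifact, hashlib.sha256).digest()


def quorum_ok(artifact: bytes, signatures: dict, threshold: int = 2) -> bool:
    """Accept the release only if `threshold` distinct maintainers signed it."""
    valid = {
        author
        for author, sig in signatures.items()
        if author in MAINTAINER_KEYS
        and hmac.compare_digest(sig, sign(author, artifact))
    }
    return len(valid) >= threshold
```

With a threshold of 2, compromising alice's account alone can neither publish nor forge bob's approval.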
While multiple authors' signatures would be nice, a lot of these kinds of attacks would be solved if there was any signature verification being done of the commits, tags, or generated artefacts at all.
People like to complain about distribution packaging being obtuse, but most distributions have rich support for verifying that package sources were signed by a key in a keyring that is maintained by the distribution. My (somewhat biased) view is that language package managers still do not provide the same set of features for validation that (for instance) rpmbuild does.
The release process for runc has the following safeguards:
* As the upstream maintainer of runc, our releases are all signed with one of a set of keys that are maintained in our repo[1]. Our tags are also signed by one of the same keys. In my case, my key is stored in a Yubikey and so cannot easily be exfiltrated.
* Our release scripts include a step which validates that all of the keys in that keyring file are valid (sub)keys registered to the GitHub account of a maintainer[2]. They also prompt the person doing the signing to check that the list looks reasonable before signing anything[3].
* Distributions such as openSUSE have a copy of the keyring file[4] and the build system will automatically reject the build if the source code archive is not signed. Our official binary releases are also signed and so can be validated in a similar manner.
Maybe there are still gaps in this setup, and I would love to hear them. But I think this setup would have blocked this kind of attack at several stages. I personally don't like the idea of signing releases in CI -- if you really want to build your binaries in CI, that's fine, but you should always require a maintainer to personally sign the binaries at the end of the process.
For language package managers that do not support such a workflow, trusted publishing is a less awful setup than having long-lived publishing keys that may be incorrectly scoped (as happened in this case), but it still allows someone who gains access to your GitHub account (such as by stealing your cookies) to publish updated versions of your package with very little resistance. GitHub supports setting a mandatory timeout for trusted publishing, but the attacker could easily disable that. If someone got access to my GitHub account, it would be a very bad day, but distributions would not accept the new releases because their copy of our keyring would not include the attacker's keys (even if they added them to my account).
Disclaimer: I work at SUSE, though I will say that I would like for OBS to have nice support for validating checksums of artefacts like Arch and Gentoo do (you can /theoretically/ do it with OBS services or emulate it with forcelocal -- and most packages actually store the archive in OBS rather than pulling it at build time -- but it would be nice to do both).
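The distribution-side gate described above is simple to state: the distro keeps its own copy of the upstream keyring, and a release is rejected unless the signature on the tag or archive was made by a fingerprint in that copy. A sketch with made-up fingerprints:

```python
# Sketch of a distro-side release gate. The fingerprints are invented;
# a real build system would run the cryptographic verification itself
# (e.g. via `gpg --verify`) rather than take a boolean.
DISTRO_KEYRING = {
    "ABCD1234ABCD1234ABCD1234ABCD1234ABCD1234",  # maintainer A
    "5678EF905678EF905678EF905678EF905678EF90",  # maintainer B
}


def accept_release(signature_fingerprint: str, signature_valid: bool) -> bool:
    """Reject anything not signed by a key in the distro's own keyring copy."""
    return signature_valid and signature_fingerprint.upper() in DISTRO_KEYRING
```

An attacker who adds a fresh key to a compromised GitHub account still fails this check, because the distro's keyring copy never learned about that key.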
These machine-to-machine OIDC flows seem secure, and maybe they are when they’re implemented properly, but they’re really difficult to configure. And I can’t shake the feeling that they’re basically just “tokens with more moving parts,” at least for a big chunk of exploitation paths. Without a human in the loop, there’s still some “thing” that gets compromised, whether it’s a token or something that generates time-limited tokens.
In the case of this worm, the OIDC flow wouldn’t even help. The GitHub workflow was compromised. If the workflow was using an OIDC credential like this to publish to npm, the only difference would be the npm publish command wouldn’t use any credential because the GitHub workflow would inject some temporary identity into the environment. But the root problem would remain: an untrusted user shouldn’t be able to execute a workflow with secret parameters. Maybe OIDC would limit the impact to be more fine-grained, but so would changing the token permissions.
Well the idea behind tokens is that they should be time and authZ limited. In most cases they are not so they degrade to a glorified static password.
Solutions exist, such as generating them on the fly with a short lifetime, OAuth with proper scopes, and biscuits that limit in detail what they can do, but they are rarely used.
Anyone know of a published tool/script to check for the existence of any of the vulnerable npm packages? I don't see anything like that in the stepsecurity page.
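A minimal sketch of such a checker: walk a project's package-lock.json (npm v7+ format, where `"packages"` maps `node_modules/<name>` paths to metadata) and flag any name@version pair that appears on a compromised list. The `COMPROMISED` set below is an illustrative placeholder — a real run would load the full IoC list published for this incident.

```python
import json

# Illustrative entry only; replace with the published IoC list.
COMPROMISED = {("@ctrl/tinycolor", "4.1.1")}


def find_compromised(lockfile_path: str):
    """Return 'name@version' strings from the lockfile that are on the list."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    for path, info in lock.get("packages", {}).items():
        # "" is the root project; nested deps look like
        # "node_modules/a/node_modules/b", so take the last segment.
        name = path.split("node_modules/")[-1] if path else lock.get("name", "")
        if (name, info.get("version")) in COMPROMISED:
            hits.append(f"{name}@{info['version']}")
    return hits
```

Run it against each repo's lockfile; an empty result means none of the listed versions are pinned, though it says nothing about caches or globally installed packages.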
Why is local 2FA unsustainable?! The real problem here is automated publishing workflows. The overwhelming majority of NPM packages do not publish often enough or have complicated enough release steps to justify tokens with the power to publish without human intervention.
What is so fucking difficult about running `npm publish` manually with 2FA? If maintainers are unwilling to do this for their packages, they should reconsider the number of packages they maintain.
Something somewhere needs to change because the status quo just isn't working. Yes, we can cheer on the benefit of OIDC tokens and zero-trust solutions in CI pipelines on HN all we want, but the fact is there's a significant number of library developers out there with millions of package downloads per week that will refuse to do anything about security until they're compromised or npm blocks them from publishing until they do.
And then there are other nonsensical proposals, like spelunking deep into projects, some of which could be over a decade old, and ripping out all the dependencies until nothing but a standard library is left. Look, I'm all for a better std lib; I think reducing the number of dependencies we have is good. But just saying "you should reduce dependencies" will do nothing concrete to fix the problem that already exists, because it's much easier said than done.
So either tens of thousands or hundreds of thousands of developers stop using npm, and everyone refactors their projects to add more code and strip dependencies, or npm starts enforcing things like 2FA and OIDC for package developers with over X number of weekly downloads, and blocks publishing for those that don't follow the new security rules. I think it's clear which solution is more practical to implement. The only other option is for npm to completely lose its reputation and then we wind up with XKCD 927 again.
[0] https://docs.pypi.org/trusted-publishers/
realityking|5 months ago
Guess I know what I’ll be doing this weekend.
[1]: https://github.com/opencontainers/runc/blob/v1.4.0-rc.1/runc... [2]: https://github.com/opencontainers/runc/blob/v1.4.0-rc.1/scri... [3]: https://github.com/opencontainers/runc/blob/v1.4.0-rc.1/scri... [4]: https://build.opensuse.org/projects/openSUSE:Factory/package...
cyberax|5 months ago
I freaking HATE tokens. I hate them.
There should be a better way to do authentication than a glorified static password.
An example of how to do it correctly: GitHub as a token provider for AWS: https://aws.amazon.com/blogs/security/use-iam-roles-to-conne... But this is an exception, rather than the rule.
retlehs|5 months ago
https://github.com/danielroe/provenance-action