> The tools we use to build software are not secure by default, and almost all of the time, the companies that provide them are not held to account for the security of their products.
The companies? More like the unpaid open source community volunteers who the Fortune 500 leech off contributing nothing in return except demands for free support, fixes and more features.
> More like the unpaid open source community volunteers who the Fortune 500 leech off contributing nothing in return except demands for free support, fixes and more features.
People who work on permissively licensed software are donating their time to these Fortune 500 companies. It hardly seems fair to call the companies leeches for accepting these freely given donations.
Author of the article here. Holistically this isn't just about npm dependencies, it's the entire stack we work with. Cloud vendors provide security features, but out of the box they don't provide secure platforms - a lot of this is left up to developers without security experts, which is dangerous. I have 25 years of experience and I wouldn't want to touch the depths of RBAC.
SaaS products don't enforce good security - I've seen some internally that don't have MFA or EntraID integration because they simply don't have those as features (mostly legacy systems these days, but they still exist).
I'm also an open-source author (I have the most used bit.ly library on npm - and have had demands and requests too), and I'm the only person you can publicly see on our [company github](https://github.com/ikea) - there are reasons for this - but not every company is leeching; rather, there is simply no other alternative.
I remember joining my company right out of college. In the interview we started talking about open source since I had some open source Android apps. I asked if the company contributed back to the projects it used. The answer was no, but that they were planning to. Over a decade later... they finally created a policy to allow commits to open source projects. It's been used maybe 3 times in its first year or so. Nobody has the time and the management culture doesn't want to waste budget on it.
Technology is insecure all the way down to the hardware. The structural cause of this is that companies aren’t held liable for insecure products, which are cheaper to build.
So companies’ profit motives contribute to this mess not just through the exploitation of open source labor (as you describe) but through externalizing security costs as well.
I find this perspective harmful to OSS as a whole. It is completely fine to release free software that other companies can use without restrictions, if you desire to do so. It is not meant to be a transaction. You share some, you take some.
It’s also ok to release paid free software, or closed software, restrictive licenses, commercial licenses, and sell support contracts. It’s a choice.
npm is owned by GitHub, which is owned by Microsoft. They could have put more tooling into making npm better. For example, pnpm requires you to "approve-builds" so that it only runs scripts from dependencies you decide on, and Deno has a bunch of security capabilities to restrict what scripts can and can't do. There are always going to be supply chain attacks, and the biggest package repositories are going to be hit the most. But that doesn't mean that Microsoft couldn't have spent more on building better tooling with better security settings on by default.
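To make that concrete, both tools lean on explicit opt-in (the commands and flags below reflect pnpm v10+ and current Deno - double-check against the versions you run):

```
# pnpm v10+ skips dependency lifecycle (postinstall) scripts by default;
# it reports what was blocked and lets you approve the few you trust:
pnpm approve-builds

# Deno grants no capabilities unless asked; e.g. network access to one host:
deno run --allow-read=. --allow-net=api.example.com main.ts
```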
In the case of npm though it is run by a very wealthy company: Microsoft.
But also, most OSS Software is provided without warranty. Commercial companies should either be held accountable for ensuring the open source components are secure or paying someone (either the maintainer directly, or a third party distributor) to verify the security of the component.
Per a survey I read, the majority of open source is created by people who are paid for it. The unpaid volunteer working full time on something is effectively a myth.
Well? If you license software the way most FOSS products are licensed, that's a natural result. It is literally putting up a sign saying "free beer."
You can't give permission for them to use the stuff for free and then accuse them of "leeching." If the expectation is contribution in kind, that needs to be in the license agreement.
Question for tanepiper: what would you have Microsoft do to improve things here?
My read of your article is that you don't like postinstall scripts and npx.
I'm not convinced that removing those would have a particularly major impact on supply chain attacks. The nature of npm is that it distributes code that is then executed. Even without npx, an attacker could still release an updated package which, when executed as a dependency, steals environment variables or similar.
And in the meantime, discarding both would break the existing workflows of practically every JavaScript-using developer in the world!
You mention 2FA. npm requires that for maintainers of the top 100 packages (since 2022); would you like to see that policy extended to the top 1,000/10,000/everyone? https://github.blog/security/supply-chain-security/top-100-n...

You also mention code signing. I like that too, but do you think that would have a material impact on supply chain attacks given they start with compromised accounts?
The investment I most want to see around this topic is in sandboxing: I think it should be the default that all code runs in a robust sandbox unless there is a very convincing reason not to. That requires investment that goes beyond a single language and package management platform - it's something that needs to be available and trustworthy for multiple operating systems.
The biggest problem with npm is that it is too popular. Nothing else. Even if you "mitigate" some of the risks by removing features like postinstall, it barely does anything at all -- if you actually use the package in any way, the threat is still there. And most of what we see recently could happen to crates.io, pypi etc as well.
It is almost frustrating to see people who don't understand security talk about security. They think they have the best, smartest ideas. No, they don't, otherwise they would have been done a long time ago. Security is hard, really hard.
For a start, maintainers of dependencies with more than 1000 weekly downloads should be forced to use phishing-resistant 2FA like WebAuthn to authenticate updates (ideally hardware security keys, but not strictly necessary), or sign the code using a registered PGP key (with a significant cooldown and warnings when enrolling new keys, e.g. 72h).
Oh I agree - it's far too late to make major changes. When they took over, they had the opportunity to drive a new roadmap towards a more secure solution.
2FA isn't a solution to security, it's a solution to hinder and dissuade low-effort hackers from compromising accounts - it's still subject to social engineering (like spearphishing).
I tend to agree with your broader point - sandboxing will be the way to go; I've been having that very discussion today. We're also now enforcing CI pipelines with pinned dependencies (which we already do with our Helm charts), but npm by default installs with ^ semver ranges, and putting it on the developer to disable that isn't good enough. The problem of course is that sandboxing requires the OS vendors to agree on what is common.
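For what it's worth, the pinning side can be done with stock npm today (a sketch, not a full policy):

```
# Record exact versions instead of ^ranges for all future installs:
npm config set save-exact true

# In CI, install strictly from the lockfile and fail if it is out of sync
# with package.json, instead of resolving ranges at build time:
npm ci
```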
This is a riff - not sure how possible this is, but it's not coming from nowhere; it's based on work I did 8 years back (https://github.com/takeoff-env/takeoff). Use a headless OS container image with a volume pointing to the source folder, and run the install within the container (so far so good - this is any multi-stage docker build).
The key part would be to then copy the node_modules in the volume _data folder back to the host. This would likely require the OS vendors to provide timely images with each release of their OS to handle binary dependencies, so it is likely a non-starter for macOS.
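A rough sketch of the container idea (image tag and flags are illustrative, and note that natively compiled dependencies must match the host OS/arch, which is exactly the hard part mentioned above):

```
# Run the install inside a throwaway container; the bind mount means the
# resulting node_modules lands back in the host's source folder.
docker run --rm \
  -v "$PWD":/app -w /app \
  node:22-bookworm \
  npm ci --ignore-scripts
```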
What about if your password or 2FA change, your tokens go on a 24-hour cooldown? I think the debug package maintainer even provided his 2FA code to the phishing site. Obviously that doesn't fix the case where they just exfiltrate and use tokens, but there's no fix that solves all of this; there need to be layers. I also think npm should be scanning package updates for malicious code and pumping the brakes on potentially harmful updates for large packages.
Every day I feel more and more like Go mod's decision to use the lowest common version of a dependency rather than the highest was pure wisdom. Not only does it prevent code breaking at rest from poor semantic versioning, it's also served to prevent automatic inclusion of supply chain attacks.
npm as designed really aggressively likes to upgrade things, and the culture is basically to always blindly upgrade all dependencies as high as possible.
It's sold as being safer by patching vulnerabilities, but most "vulnerabilities" are very minor or niche, whereas a lot of risk is inherent in a shifting foundation.
Like it or not it's kind of a cultural problem. Recursively including thousands of dependencies, all largely updating with no review is a problem.
The thing I find particularly frightful, and distinctive from the other package managers I regularly use, is that there is zero guarantee that the code a library presents on GitHub has anything to do with its actual content on npm. You can easily believe you've reviewed an item's code by looking at it on GitHub, but that can have absolutely zero relation to what was actually uploaded to npm. You have to actually review what's been uploaded to npm in its entirety, as it's disconnected.
> You have to actually review what's been uploaded to npm
Crates.io and several other popular package managers have the exact same problem. Submitted packages are essentially a blob of loose files with the source code being mere metadata provided by the uploader (or attacker!)
The logic behind this is that not every package comes from a source repository that is based on Git and there may not be a convenient and trustworthy "web link" back to the matching commit. Some SCM systems don't even have cryptographically hashed commits with the same level of "stability" as a Git commit id!
IMHO all such public package repositories should do their own Git hosting for the package file contents. I.e.: to publish you'd have to push your code to their repo instead of uploading files.
Ideally they should also scan all uploads in various ways, run reproducible builds for platforms where that makes sense, etc...
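In the meantime, the tarball/repo mismatch can at least be checked by hand (package name, tag, and repo URL below are placeholders):

```
# Fetch exactly what npm would install, then compare it to the tagged source:
npm pack example-lib@1.2.3                  # writes example-lib-1.2.3.tgz
tar -xzf example-lib-1.2.3.tgz              # extracts to ./package/
git clone --depth 1 --branch v1.2.3 https://github.com/example/example-lib
diff -r package example-lib                 # differences = code nobody reviewed
```

In practice the diff is rarely empty, because published packages contain build output the repo doesn't - which is exactly why registry-side reproducible builds would help.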
It's a stretch to pin blame on Microsoft. They're probably the reason the service is still up at all (TFA admits as much). In hindsight it's likely that all they wanted from the purchase was AI training material. At worst they're guilty of apathy, but that's no worse than the majority of npm ecosystem participants.
"In hindsight it's likely that all they wanted from the purchase was AI training material."
Microsoft already owned GitHub. I don't see how acquiring npm would make a meaningful difference with respect to training material, especially since npm was already an open package repository which anyone could download without first buying the company.
It’s NOT a stretch to blame Microsoft. How many billions have we spent chasing “AI”? These issues could have been easily solved if we had spent a fraction of that consideration on them. This has been going on well over a decade.
Microsoft isn’t any better steward than the original teams.
This issue has happened plenty under Microsoft’s ownership.
TBF it does happen to other package managers, too. There were similar attacks on PyPI and Rubygems (and maybe others). However, since npm is the largest one and has the most packages released, updated, and downloaded, it became the primary target. Similar to how computer viruses used to target Windows first and foremost due to its popularity.
Also, smaller package managers tend to learn from these attacks on npm, and by the time the malware authors try to use similar types of attacks on them the registries already have mitigations in place.
This is funny but ultimately a mischaracterization of a popularity contest. Node culture is extreme, perhaps pathological, about using many dependencies to work around the limited standard library, but the same kind of attacks happen everywhere people are releasing code. The underlying problem is that once you release something, it takes only seconds before someone else can be running your code with full privileges to access their account.
That’s why the joke doesn’t really work: America is a huge outlier for gun violence because we lack structural protections. Australia doesn’t have fewer attacks in proportion to a smaller population, they have a lower rate of those attacks per-capita because they have put rules in place to be less of a soft target.
Among other things, the attack space for npm is just so much larger. We run a large C# codebase (1M+ LOC) and a somewhat smaller TypeScript codebase (~200K LOC). I did a check the other day, and we have one potentially vulnerable nuget dependency for every 10,000 lines of C# code, but one potentially vulnerable npm dependency for about every 115 lines of TS code.
"The issue is actually lack of guns, the way to prevent this is by having more guns" kind of doubling down.
The issue is the people that use npm and choose to have 48 layers of dependencies without paying for anything. Blaming Microsoft, a company which pays engineers and audits all of its code and dependencies, is a step in the wrong direction on the necessary path of self-reflection about npm vulns.
It's a popularity issue; npm is an easy target. I don't see why it wouldn't happen to Golang, for example. You just need to take over the git repo and it's over for all users upgrading, just like npm.
It seems to me like one obvious improvement is for npm to require 2FA to submit packages. The fact that malware can just automatically publish packages without a human having to go through an MFA step is crazy.
I think the cooldown approach would make this type of attack have practically no impact anymore. If nobody ever updates to a newly published package version until, say, 2-3 days have gone by, surely there will be enough time for the owner of the package to notice they got pwned.
I've never heard of this. It sounds like a solid default to me. If you _really_ need an update you can override it, but it should remain the default and not allow opting out.
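A minimal sketch of the cooldown idea (the helper is hypothetical, but `time` mirrors the per-version publish-date map the npm registry API returns for a package):

```javascript
// Hypothetical cooldown filter: given the registry's `time` map for a
// package, keep only versions published at least `days` ago.
function versionsOlderThan(timeMap, days, now = Date.now()) {
  const cutoff = now - days * 24 * 60 * 60 * 1000;
  return Object.entries(timeMap)
    .filter(([version]) => version !== "created" && version !== "modified")
    .filter(([, published]) => Date.parse(published) <= cutoff)
    .map(([version]) => version);
}

// Example: with a 3-day cooldown, a freshly published 4.1.3 is excluded.
const time = {
  created: "2020-01-01T00:00:00Z",
  "4.1.1": "2024-01-01T00:00:00Z",
  "4.1.2": "2024-06-01T00:00:00Z",
  "4.1.3": new Date().toISOString(), // published moments ago
};
console.log(versionsOlderThan(time, 3)); // → [ '4.1.1', '4.1.2' ]
```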
Here’s a one-liner for Node devs on macOS: pin your versions and manually update your supply chain until your tooling supports supply chain vetting, or at least some level of protection against instantly-updated malicious upstream packages.
Would love to see some default-secure package management / repo options. Even a 24 hour delayed mirror would be better than what we have today.
The expected secure workflow should not require an elaborate bash incantation, it should be the workflow the tools naturally encourage you to use organically. "You're holding it wrong" cannot be possible.
1. "2FA doesn't work". Incorrect. MFA relying on SMS or TOTP is vulnerable to phishing. Token or device based is not. And indeed GitHub sponsored a program to give such tokens away to critical developers.
In 2021.
2. "There's no signing". Sigstore support shipped in like 2023.
The underlying view is that "Microsoft isn't doing anything". They have been. For years. Since at least 2022, based on my literal direct interactions with the individuals directly tasked to do the things that you say aren't or haven't been done.
I have no association with npm, GitHub or Microsoft. My connection was through Shopify and RubyGems. But it really steams me to see npm getting punched up with easily checked "facts".
Anyone have a good solution to scan all code in our Github org for uses of the affected packages? Many of the methods we've tried have dead ended. Inability to reliably search branches is quite annoying here.
Have you tried Dependency-Track from OWASP? Generate an SBOM from each repo/project and post it with the API to DT, and you have a full overview. You have to hook it up so it is done automatically, because of course stuff will always move.
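A hedged sketch of that flow for one npm repo (server URL, API key, and project name are placeholders; check the Dependency-Track API docs for your version):

```
# Generate a CycloneDX SBOM from the lockfile, then upload it to
# Dependency-Track, auto-creating the project on first upload:
npx @cyclonedx/cyclonedx-npm --output-file bom.json
curl -X POST "https://dtrack.example.com/api/v1/bom" \
  -H "X-Api-Key: $DT_API_KEY" \
  -F "autoCreate=true" \
  -F "projectName=my-service" \
  -F "projectVersion=main" \
  -F "bom=@bom.json"
```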
npm audit - will tell you if there are any packages with known vulnerabilities.
https://docs.npmjs.com/cli/v11/commands/npm-audit
I'd imagine it's considerably slower than search, but hopefully more reliable.
Here's a short recap of what you can do right now, because changing the ecosystem will take years, even if "we" bother to try doing it.
1. Switch to pnpm, it's not only faster and more space efficient, but also disables post-install scripts by default. Very few packages actually need those to function, most use it for spam and analytics. When you install packages into the project for the first time, it tells you what post-install scripts were skipped, and tells you how to whitelist only those you need. In most projects I don't enable any, and everything works fine. The "worst" projects required allowing two scripts, out of a couple dozen or so.
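The allow-list ends up in your manifest, roughly like this (package names are examples; `pnpm approve-builds` writes the entry for you, and on recent pnpm versions it may live in pnpm-workspace.yaml instead):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```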
They also added this recently, which lets you introduce delays for new versions when updating packages. Combined with `pnpm audit`, I think it can replace the last suggestion of setting up a helper dependency bot with zero reliance on additional services, commercial or not:
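If I have the right feature, that's pnpm's minimum release age setting (the exact key and config file depend on your pnpm version, so verify against the docs):

```yaml
# pnpm-workspace.yaml - ignore versions published less than 3 days ago
minimumReleaseAge: 4320   # minutes
```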
2. If you're on Linux, wrap your package managers into bubblewrap, which is a lightweight sandbox that will block access to almost all of your system, including sensitive files like ~/.ssh, and prevent anything running under it from escalating privileges. It's used by flatpak and Steam. A fully working & slightly improved version was posted here:
I posted the original here, but it was somewhat broken because some flags were sorted incorrectly (mea culpa). I still prefer using a separate cache directory instead of sharing the "global" ~/.cache because sensitive information might also end up there.
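For flavor, a trimmed sketch of such a wrapper (flag set abridged; adapt the bind paths to your distro, and note the dedicated cache directory as suggested above):

```
#!/bin/sh
# Run npm with no access to $HOME except the project and a dedicated cache.
exec bwrap \
  --unshare-all --share-net --die-with-parent \
  --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib /lib \
  --ro-bind /etc/resolv.conf /etc/resolv.conf \
  --proc /proc --dev /dev --tmpfs /tmp \
  --bind "$PWD" "$PWD" --chdir "$PWD" \
  --bind "$HOME/.cache/npm-sandbox" "$HOME/.cache/npm-sandbox" \
  --setenv npm_config_cache "$HOME/.cache/npm-sandbox" \
  npm "$@"
```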
3. Set up renovate or any similar bot to introduce artificial delays into your supply chain, but also to fast-track fixes for publicly known vulnerabilities. This suggestion caused some unhappiness in the previous discussion for some reason - I really don't care which service you're using, this is not an ad; just set up something to track your dependencies, because you will forget about it. You can fully self-host it; I don't use their commercial offering - never have, don't plan to.
4. For those truly paranoid or working on very juicy targets, you can always stick your work into a virtual machine, keeping secrets out of there, maybe with one virtual machine per project.
Bubblewrap seems excellent for Linux uses - on macOS, it seems like sandbox-exec could do some (all?) of what bubblewrap does on Linux. There's no official documentation for SBPL, but there are examples, and I found sandboxtron[0] which was a helpful base for writing a policy to try to contain npm
"...but also disables post-install scripts by default."
in pnpm docs it says:
"""
enablePrePostScripts
Default: true
Type: Boolean
When true, pnpm will run any pre/post scripts automatically. So running pnpm foo will be like running pnpm prefoo && pnpm foo && pnpm postfoo.
"""
You can also use tools like safe-chain, which connects to malware databases and blocks installation of malicious packages. In this case it would have blocked installs around 20 minutes after the malware was added, as this was how long it took to be added to the malware databases.
https://www.npmjs.com/package/@aikidosec/safe-chain
When we close and reopen VSCode (and some other IDEs), it updates the NPM packages for the installed plugins. Would these mitigations steps (e.g. pnpm) also take care of that?
My non-solution years ago was to use as few dependencies as possible, and to vendor node_modules, then review every line of code changed when I update dependencies.
Not every project and team can do that. But when feasible, it's a strong mitigation layer.
What worked was splitting dependency diff review among the team so it's less of a burden. We pin exact versions and update judiciously.
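One helper for that review flow: npm can print the diff between two published registry versions directly, so the review covers what was actually uploaded rather than a GitHub tag (package and versions chosen arbitrarily):

```
# Show exactly what changed between the registry tarballs of two versions:
npm diff --diff=left-pad@1.2.0 --diff=left-pad@1.3.0
```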
Here is an issue from 2013 where developers are asking to fix the package signing issue. It went fully ignored because doing so was “too hard”: https://github.com/npm/npm/pull/4016
The NPM monoculture is the problem. It would be absurd to suggest that all backend engineers use the same build tooling and dependency library, but here we are with frontend. It's just too big of an attack surface.
It would be absurd to make such a suggestion. However, the comparison is not correct. Not all front-end development uses the same build tooling or dependency libraries, or programming language for that matter. Even if you narrow to the typescript ecosystem, it's still not true.
"Unfortunately, Microsoft seem to be actively hostile - in their lack of attempts to shut down an active security hole that’s almost a decade old, they have left their customers at the highest levels of risk seen in computing."
funny how npm is the exact same model as maven, gopkg, cpan, pip, mix, cargo, and a million others.
but only npm started with a desire to monetize it (well, npm and docker hub), and in its desire for control didn't implement (or allow the community to implement) basic hygiene.
I think if somebody wants to see library distribution channels tightened up they need to be very specific about what they would like to see changed and why it would be better, since it would appear that the status quo is serving what people actually want - being able to create and upload packages and update them when you want.
> But right now there are still no signed dependencies and nothing stopping people using AI agents, or just plain old scripts, from creating thousands of junk or namesquatting repositories.
This is as close as we get in this particular piece. So what's the alternative here exactly - do we want uploaders to sign up with Microsoft accounts? Some sort of developer vetting process? A curated lib store? I'm sure everybody will be thrilled if Microsoft does that to the JS ecosystem. (/s) I'm not seeing a great deal of difference between having someone's NPM creds and having someone's signing key. Let's make things better but let's also be precise, please.
> But right now there are still no signed dependencies
Considering these attacks are stealing API tokens by running code on developers' machines, I don't see how signing helps; attackers will just steal the private keys and sign their malware with those.
We treat code repositories as public infrastructure, but we don't want to pay for them, so corporations run them with their profit interest in mind. This is the fundamental conflict that I see.
And one solution: more non-profits as the organisations behind them.
But, never mind. It's been 2 years since Jia Tan, and the number of such 'occurrences' in the npm ecosystem in the past 10 years is bordering on uncountable at this point.
And yet this hack got through? This amateurish and extremely obvious attempt? The injected function was literally named something like 'raidBitcoinEthWallet' or whatnot.
npm has clearly done absolutely nothing in this regard.
We haven't even gotten to the argument of '... but then hackers will simply use the automated tools themselves and only release stuff that doesn't get flagged'. There's nothing to talk about; npm has done nothing.
Which gets us to:
* Web of trust
This seems to me to be a near perfect way for the big companies that have earned so, so much money using the free FOSS they rely on, to contribute.
They spend the cash to hire a team that reviews FOSS stuff. Entire libraries, sure, but also updates. I think most of them _already do this_, and many will even openly publish issues they found. But they do not publish negative results (they do not publish 'our internal team had a quick look at update XYZ of project ABC and didn't see anything immediately suspicious').
They should start doing that. And repos like npm, Maven, CPAN, etcetera should allow either the official maintainer of a library, or anybody, to attach 'assertions of no malicious intent'.
Imagine that npm hosts the following blob of text for NPM hosted projects in addition to the javascript artefacts:
> "I, google dev/security team, hereby vouch for this update in the senses: not-malicious. We have looked at it and did not see anything that we think is suspicious or worse. We make absolutely no promises whatsoever that this library is any good or that this update's changelog accurately represents the changes in it; we merely vouch for the fact that we do not think it was written with malicious intent. We explicitly disavow any legal or financial liability with this statement; we merely stake our good name. We have done this analysis on 2025-09-17 and did it for the artefact with SHA512 hash 1238498123abac. Signed, [public/private key infrastructure based signature, google.com]."
And a general rule that google.com/.well-known/vouch-public-key or whatever contains the public key so anybody can check.
Aside from Jia Tan/xz (the case that always comes up; Jia Tan/xz was so legendary that exactly how the fuck THIS still happens given that massive wakeup call boggles my mind!), every supply chain attack was pretty dang easy to spot. The problem was: nobody was looking, and everybody configures their build scripts to pick up point updates immediately.
We should update these to 'pick them up after a certain 'vouch' score is reached'. Where everybody can mess with their scoring tables (don't trust google? reduce the value of their vouch to 0).
I think security-crucial 0day fixing updates will not be significantly hampered by this; generally big 0-days are big news and any update that comes out gets analysed to pieces. The vouch signatures would roll in within the hour after posting them.
Ajedi32|5 months ago
At the time, I was focusing more on the approach of reducing the number of people you have to trust when you depend on a particular package.
whatevaa|5 months ago
donatj|5 months ago
npm as designed really aggressively likes to upgrade things, and the culture is basically to always blindly upgrade all dependencies as high as possible.
It's sold as being safer by patching vulnerabilities, but most "vulnerabilities" are very minor or niche, whereas a lot of risk is inherent in a shifting foundation.
Like it or not it's kind of a cultural problem. Recursively including thousands of dependencies, all largely updating with no review is a problem.
The thing I find particularly frightful and distinctive from the other package managers I regularly use is there is zero guarantee that the code a library presents on GitHub has anything to do with it's actual content in NPM. You can easily believe you've reviewed an items code by looking at it on GitHub, but that can have absolutely zero relation to what was actually uploaded to npm. You have to actually review what's been uploaded to npm as its entirety disconnected.
jiggawatts|5 months ago
Crates.io and several other popular package managers have the exact same problem. Submitted packages are essentially a blob of loose files, with the source repository being mere metadata provided by the uploader (or attacker!)
The logic behind this is that not every package comes from a source repository that is based on Git and there may not be a convenient and trustworthy "web link" back to the matching commit. Some SCM systems don't even have cryptographically hashed commits with the same level of "stability" as a Git commit id!
IMHO all such public package repositories should do their own Git hosting for the package file contents. I.e.: to publish you'd have to push your code to their repo instead of uploading files.
Ideally they should also scan all uploads in various ways, run reproducible builds for platforms where that makes sense, etc...
2d8a875f-39a2-4|5 months ago
simonw|5 months ago
Microsoft already owned GitHub. I don't see how acquiring npm would make a meaningful difference with respect to training material, especially since npm was already an open package repository which anyone could download without first buying the company.
righthand|5 months ago
Microsoft isn’t any better steward than the original teams.
This issue has happened plenty under Microsoft's ownership.
mr90210|5 months ago
I reckon the ecosystem would have been much healthier if npm had been given the care it requires, rather than merely being kept running.
amiga386|5 months ago
andrewl-hn|5 months ago
Also, smaller package managers tend to learn from these attacks on npm, and by the time the malware authors try to use similar types of attacks on them the registries already have mitigations in place.
acdha|5 months ago
That’s why the joke doesn’t really work: America is a huge outlier for gun violence because we lack structural protections. Australia doesn’t have fewer attacks in proportion to a smaller population, they have a lower rate of those attacks per-capita because they have put rules in place to be less of a soft target.
smithkl42|5 months ago
a4isms|5 months ago
TZubiri|5 months ago
The issue is the people who use npm and choose to have 48 layers of dependencies without paying for anything. Blaming Microsoft, a company that pays engineers and audits all of its code and dependencies, is a step in the wrong direction on the necessary path of self-reflection about npm vulns.
h1fra|5 months ago
tcoff91|5 months ago
jamesnorden|5 months ago
beart|5 months ago
https://docs.renovatebot.com/configuration-options/#minimumr...
deevus|5 months ago
matusp|5 months ago
apimade|5 months ago
Would love to see some default-secure package management / repo options. Even a 24 hour delayed mirror would be better than what we have today.
    find . -name package.json -not -path "*/node_modules/*" -exec sh -c '
      for pkg; do
        lock="$(dirname "$pkg")/package-lock.json"
        [ -f "$lock" ] || continue
        tmp="$(mktemp)"
        jq --argfile lock "$lock" \
          ".dependencies |= with_entries(.value = \$lock.dependencies[.key].version)
           | .devDependencies |= with_entries(.value = \$lock.dependencies[.key].version // \$lock.devDependencies[.key].version)" \
          "$pkg" > "$tmp" && mv "$tmp" "$pkg"
      done
    ' sh {} +
treyd|5 months ago
madeofpalk|5 months ago
What does this actually achieve?
simonw|5 months ago
unknown|5 months ago
[deleted]
deevus|5 months ago
https://github.com/pnpm/pnpm/issues/9921
jacques_chester|5 months ago
1. "2FA doesn't work". Incorrect. MFA relying on SMS or TOTP is vulnerable to phishing. Token or device based is not. And indeed GitHub sponsored a program to give such tokens away to critical developers.
In 2021.
2. "There's no signing". Sigstore support shipped in like 2023.
The underlying view is that "Microsoft isn't doing anything". They have been. For years. Since at least 2022, based on my literal direct interactions with the individuals directly tasked to do the things that you say aren't or haven't been done.
I have no association with npm, GitHub or Microsoft. My connection was through Shopify and RubyGems. But it really steams me to see npm getting punched up with easily checked "facts".
aj_g|5 months ago
cube00|5 months ago
Proxy NPM with something like Artifactory which stops the bad package getting back in or ending up in any new builds.
Follow it up with endpoint protection to weed the package out of the local checked out copies and .npm on the individual dev boxes.
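Pointing npm at the proxy is a one-line config change (the URL is a placeholder for your own Artifactory/Verdaccio/etc. instance):

```shell
# Route all installs for this project through the internal proxy
# instead of registry.npmjs.org:
npm config set registry https://artifactory.example.com/api/npm/npm-virtual/ --location=project
```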
unknown|5 months ago
[deleted]
ozim|5 months ago
ankit_mishra|5 months ago
TZubiri|5 months ago
ChrisArchitect|5 months ago
Shai-Hulud malware attack: Tinycolor and over 40 NPM packages compromised
https://news.ycombinator.com/item?id=45260741
homebrewer|5 months ago
1. Switch to pnpm, it's not only faster and more space efficient, but also disables post-install scripts by default. Very few packages actually need those to function, most use it for spam and analytics. When you install packages into the project for the first time, it tells you what post-install scripts were skipped, and tells you how to whitelist only those you need. In most projects I don't enable any, and everything works fine. The "worst" projects required allowing two scripts, out of a couple dozen or so.
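The whitelist pnpm prompts you to create ends up in package.json under the `pnpm` key; a minimal example (the package name is illustrative, only whitelist what you've verified actually needs a build step):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  }
}
```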
They also added this recently, which lets you introduce delays for new versions when updating packages. Combined with `pnpm audit`, I think it can replace the last suggestion of setting up a helper dependency bot with zero reliance on additional services, commercial or not:
https://pnpm.io/settings#minimumreleaseage
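Per those docs, the delay is a single setting (value in minutes; one week here is an arbitrary example, pick a window that matches your risk tolerance):

```yaml
# pnpm-workspace.yaml: refuse to resolve versions published
# less than 7 days ago
minimumReleaseAge: 10080
```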
2. If you're on Linux, wrap your package managers in bubblewrap, a lightweight sandbox that blocks access to almost all of your system, including sensitive files like ~/.ssh, and prevents anything running under it from escalating privileges. It's used by flatpak and Steam. A fully working & slightly improved version was posted here:
https://news.ycombinator.com/item?id=45271988
I posted the original here, but it was somewhat broken because some flags were sorted incorrectly (mea culpa). I still prefer using a separate cache directory instead of sharing the "global" ~/.cache because sensitive information might also end up there.
https://news.ycombinator.com/item?id=45041798
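A heavily trimmed sketch of what such a wrapper looks like (see the linked posts for complete, tested versions; this assumes the project lives under $HOME):

```shell
# Give npm a writable view of the project and a private cache,
# and an empty tmpfs in place of the rest of $HOME:
mkdir -p "$HOME/.cache/npm-sandbox"
bwrap \
  --ro-bind /usr /usr \
  --ro-bind /etc /etc \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --proc /proc --dev /dev \
  --tmpfs "$HOME" \
  --bind "$PWD" "$PWD" \
  --bind "$HOME/.cache/npm-sandbox" "$HOME/.cache" \
  --unshare-all --share-net \
  --die-with-parent \
  npm install
```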
3. Set up renovate or any similar bot to introduce artificial delays into your supply chain, but also to fast-track fixes for publicly known vulnerabilities. This suggestion caused some unhappiness in the previous discussion for some reason. I really don't care which service you use; this is not an ad. Just set up something to track your dependencies, because you will forget. You can fully self-host it; I don't use their commercial offering (never have, don't plan to).
https://docs.renovatebot.com/configuration-options/#minimumr...
https://docs.renovatebot.com/presets-default/#enablevulnerab...
4. For those truly paranoid or working on very juicy targets, you can always stick your work into a virtual machine, keeping secrets out of there, maybe with one virtual machine per project.
garblegarble|5 months ago
0: https://github.com/lynaghk/sandboxtron/tree/main
Hexshin0bi|5 months ago
in pnpm docs it says:
""" enablePrePostScripts Default: true Type: Boolean When true, pnpm will run any pre/post scripts automatically. So running pnpm foo will be like running pnpm prefoo && pnpm foo && pnpm postfoo. """
am i missing something here?
advocatemack|5 months ago
pt|5 months ago
unknown|5 months ago
[deleted]
rayhan0x01|5 months ago
[deleted]
hu3|5 months ago
Not every project and team can do that. But when feasible, it's a strong mitigation layer.
What worked was splitting dependency diff review among the team so it's less of a burden. We pin exact versions and update judiciously.
user34283|5 months ago
ESLint would be another culprit, adding 80 packages.
It quickly gets out of hand.
To me it seems like the fewest projects could use this approach you described.
bee_rider|5 months ago
You are doing the work. These automatic library installing services seem to have a massive free-rider problem.
bapak|5 months ago
righthand|5 months ago
jollyllama|5 months ago
beart|5 months ago
amai|5 months ago
Well said.
clarkdale|5 months ago
kronicum2025|5 months ago
1oooqooq|5 months ago
but only npm started with a desire to monetize it (well, npm and Docker Hub), and in its desire for control it didn't implement (or allow the community to implement) basic hygiene.
thombles|5 months ago
> But right now there are still no signed dependencies and nothing stopping people using AI agents, or just plain old scripts, from creating thousands of junk or namesquatting repositories.
This is as close as we get in this particular piece. So what's the alternative here exactly - do we want uploaders to sign up with Microsoft accounts? Some sort of developer vetting process? A curated lib store? I'm sure everybody will be thrilled if Microsoft does that to the JS ecosystem. (/s) I'm not seeing a great deal of difference between having someone's NPM creds and having someone's signing key. Let's make things better but let's also be precise, please.
cube00|5 months ago
Considering these attacks steal API tokens by running code on developers' machines, I don't see how signing helps; attackers will just steal the private keys and sign their malware with those.
izacus|5 months ago
lukan|5 months ago
lupusreal|5 months ago
rzwitserloot|5 months ago
* The endless arms race.
But, nevermind. It's been 2 years since Jia Tan and the amount of such 'occurrences' in the npm ecosystem in the past 10 years are bordering on uncountable at this point.
And yet this hack got through? This amateurish and extremely obvious attempt? The injected function was literally named something like 'raidBitcoinEthWallet' or whatnot.
npm has clearly done absolutely nothing in this regard.
We haven't even gotten to the argument of '... but then hackers will simply use the automated tools themselves and only release stuff that doesn't get flagged'. There's nothing to talk about; npm has done nothing.
Which gets us to:
* Web of trust
This seems to me to be a near perfect way for the big companies that have earned so, so much money using the free FOSS they rely on, to contribute.
They spend the cash to hire a team that reviews FOSS stuff. Entire libraries, sure, but also updates. I think most of them _already do this_, and many will even openly publish issues they found. But they do not publish negative results (they do not publish 'our internal team had a quick look at update XYZ of project ABC and didn't see anything immediately suspicious').
They should start doing that. And repos like npm, Maven, CPAN, etcetera should allow either the official maintainer of a library, or anybody else, to attach 'attestations of no malicious intent'.
Imagine that npm hosts the following blob of text for NPM hosted projects in addition to the javascript artefacts:
> "I, google dev/security team, hereby vouch for this update in the senses: not-malicious. We have looked at it and did not see anything that we think is suspicious or worse. We make absolutely no promises whatsoever that this library is any good or that this update's changelog accurately represents the changes in it; we merely vouch for the fact that we do not think it was written with malicious intent. We explicitly disavow any legal or financial liability with this statement; we merely stake our good name. We have done this analysis on 2025-09-17 and did it for the artefact with SHA512 hash 1238498123abac. Signed, [public/private key infrastructure based signature, google.com].
And a general rule that google.com/.well-known/vouch-public-key or whatever contains the public key so anybody can check.
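Checking such a vouch could then be two commands (everything here is hypothetical: the well-known path comes from the proposal above, and the file names are placeholders; openssl's detached-signature verification is the only real part):

```shell
# Fetch the vouching org's published public key:
curl -fsSL https://google.com/.well-known/vouch-public-key -o google.pub

# Verify the detached signature over the exact artefact tarball:
openssl dgst -sha512 -verify google.pub \
  -signature update.vouch.sig some-package-1.2.3.tgz
```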
Aside from Jia Tan/xz (the counterexample that stops any such attempt cold; xz was so legendary that exactly how the fuck THIS still happens after that massive wakeup call boggles my mind!), every supply chain attack was pretty dang easy to spot. The problem was that nobody was looking, and everybody configures their build scripts to pick up point updates immediately.
We should update those scripts to pick updates up only after a certain 'vouch' score is reached, where everybody can tweak their own scoring tables (don't trust Google? reduce the value of their vouch to 0).
I think security-crucial 0day fixing updates will not be significantly hampered by this; generally big 0-days are big news and any update that comes out gets analysed to pieces. The vouch signatures would roll in within the hour after posting them.
curtisszmania|5 months ago
[deleted]
outsideoftime|5 months ago
[deleted]
b_lax|5 months ago
[deleted]
raisaguys|5 months ago
[deleted]
Ronaldoxx721|5 months ago