If everyone is going to wait 3 days before installing the latest version of a compromised package, it will take more than 3 days to detect an incident.
Think about how the three major recent incidents were caught: not by individual users installing packages, but by security companies running automated scans on new uploads and flagging things for audits. A delay would work quite well in that model, and it's cheap in the many cases where there isn't a burning need to install something that just came out.
A lot of people will still use npm, so they'll be the canaries in the coal mine :)
More seriously, automated scanners seem to do a good job already of finding malicious packages. It's a wonder that npm themselves haven't already deployed an automated countermeasure.
Also, if everyone is going to wait 3 days before installing the latest version of a compromised package, it will take more than 3 days to broadly disseminate the fix for a compromise in the wild. The knife cuts both ways.
Not really, app sec companies scan npm constantly for updated packages to check for malware. Many attacks get caught that way.
e.g. the debug + chalk supply chain attack was caught like this: https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...
The chalk+debug+error-ex maintainer probably would have noticed a few hours later when they got home and saw a bunch of "Successfully published" emails from npm that they didn't trigger.
I think uv should get some credit for being an early supporter of this. They originally added it as a hidden way to create stable fixtures for their own tests, but it has become a pretty popular flag to use.
This for instance will only install packages that are older than 14 days:

    uv sync --exclude-newer $(date -u -v-14d '+%Y-%m-%dT%H:%M:%SZ')

It's great to see this kind of stuff being adopted in more places.
Nice, but I think the config file is a much better implementation for protecting against supply chain attacks, particularly those targeting developers rather than runtime. You don’t want to rely on every developer passing a flag every time they install. This does suffer from the risk of using `npm install` instead of `pnpm install` though.
It would also be nice to have this as a flag so you can use it on projects that haven't configured it though, I wonder if that could be added too.
I am an npm user. My reaction to these software supply chain attacks is to stop taking updates unless absolutely necessary for vulnerability mitigation, or to selectively take performance or feature upgrades on a package-by-package basis. Obviously, that approach still leaves me open to attacks whenever my chosen update moment coincides with a malicious package release, but I feel like an extreme reluctance to upgrade will mostly keep me safe.
To achieve my goal, would this approach work:
- Pin all of my package.json versions (no prefacing versions with ~ or ^)
- Routine installation of packages both on my local and on CI servers will be done using `npm ci`
- `npm install <package_name> --save-exact/--save-dev` would be used only at the time of adding a package to package.json, followed by an `npm ci`
- Rely on tooling like GitHub Dependabot and CodeQL to inform the team when a dependency should be updated for security reasons and then manually update only the dependency with the desired version using `npm install lodash@4.17.21 --save-exact`, for example
EDIT: Thinking about this more, we would have to forbid deleting the package-lock.json and regenerating it with `npm install` and forbid the use of `npm update` so that package-lock.json would stay stable.
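A tiny CI guard can enforce the exact-version convention mechanically. This is a hypothetical helper, not an npm feature, and the regex is a deliberately strict sketch:

```javascript
// Reject dependency specs that are not exact versions (no ^, ~, ranges, tags).
// Intended to run in CI against the parsed package.json before `npm ci`.
function unpinned(deps = {}) {
  // An exact version looks like 1.2.3, optionally with a prerelease suffix.
  const exact = /^\d+\.\d+\.\d+(-[\w.]+)?$/;
  return Object.entries(deps)
    .filter(([, spec]) => !exact.test(spec))
    .map(([name, spec]) => `${name}@${spec}`);
}

// Example: lodash is pinned, chalk floats on a caret range.
const pkg = {
  dependencies: { lodash: "4.17.21", chalk: "^5.3.0" },
};
console.log(unpinned(pkg.dependencies)); // → [ 'chalk@^5.3.0' ]
```

Wired into CI, this fails the build the moment a `^` or `~` sneaks back into package.json.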
It's all good until the day comes that one dependency breaks compatibility and drops support for the version you have, and now you have days of dependency resolution work ahead of you because you haven't bothered for years. Usually, incremental and timely upgrades reduce that kind of friction.
I have a question: when I’ve seen people discussing this setting, people talk about using like ”3 days” or ”7 days” as the timeout, which seems insanely short to me for production use. As a C++ developer, I would be hesitant to use any dependency in the first six months of release in production, unless there’s some critical CVE or something (then again, we make client side applications with essentially no networking, so security isn’t as critical for us, stability is much more important).
Does the JS ecosystem really move so fast that you can’t wait a month or two before updating your packages?
Waiting 6 months to upgrade a dependency seems crazy; that's definitely not a thing in other languages, or maybe it varies by company. (It might happen due to prioritization, but not as a rule of thumb.)
In the JVM ecosystem it's quite common to have Dependabot or Renovate automatically create PRs for dependency upgrades within a few hours of a release. Where it's manual, it's highly irregular and depends on the company.
Suppose you have a package P1 with version 1.0.0 that depends on D1 with version ^1.0.0. The “^” indicates a range query. Without going into semver details, it helps update D1 automatically for minor patches or non-breaking feature additions.
In your project, everything looks fine as P1 is pinned to 1.0.0. Then, you install P2 that also uses D1. A new patch version of D1 (1.0.1) was released. The package manager automatically upgrades to 1.0.1 because it matches the expression ^1.0.0, as specified by P1 and P2 authors.
This can lead to surprises. JS package managers use lock files to prevent changes during installs. However, they still change the lock file for additions or manual version upgrades, resolving to newer minor dependencies if the version range matches. This is often desirable for bug fixes and security updates. But, it opens the door to this type of attack.
To answer your question, yes, the JS ecosystem moves faster, and pkg managers make it easy to create small libraries. This results in many “small” libraries as transitive dependencies. Rewriting these libraries with your own code works for simple cases like left-pad, but you can’t rewrite a webserver or a build tool that also has many small transitive dependencies. For example, the chalk library is used by many CLI tools to show color output.
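To make the range behaviour above concrete, here is a toy version of the caret check. Real resolution is done by the `semver` package; this sketch ignores the special-casing of 0.x versions:

```javascript
// Does `version` satisfy ^base? For base >= 1.0.0, a caret range accepts
// any newer minor/patch release within the same major version.
function caretSatisfies(version, base) {
  const [vMaj, vMin, vPat] = version.split(".").map(Number);
  const [bMaj, bMin, bPat] = base.split(".").map(Number);
  if (vMaj !== bMaj) return false;        // majors must match
  if (vMin !== bMin) return vMin > bMin;  // any newer minor is accepted
  return vPat >= bPat;                    // same minor: newer patch is ok
}

console.log(caretSatisfies("1.0.1", "1.0.0")); // true  — picked up automatically
console.log(caretSatisfies("2.0.0", "1.0.0")); // false — major bump, not matched
```

So the moment a malicious 1.0.1 is published, every fresh resolution of ^1.0.0 pulls it in — which is exactly the window a minimum-age delay closes.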
NPM packages follow semantic versioning, so minor versions should be fine to auto-update. (There is still the issue that what's minor for the package maintainer might not be minor for you, but let's stick to an ideal world here.)
I don't think people update major versions every month; it's more like every 6 months or once a year.

I guess the problem is that people think auto-updating minor versions in a CI/CD pipeline keeps them more secure, since bug fixes should land in minor versions. In reality we see that is not the case, and attackers use exactly that channel to spread malware.
> Does the JS ecosystem really move so fast that you can’t wait a month or two before updating your packages?
Really depends on the context and where the code is being used. As others have pointed out, most JS packages use semantic versioning. For patch releases (the last of the three numbers) in code exposed to the outside world, you generally want to apply those rather quickly, as they contain hotfixes, including fixes for CVEs.
For the major and minor releases it really depends on what sort of dependencies you are using and how stable they are.
The issue isn't really unique to the JavaScript ecosystem either. A bigger Java project (certainly one with a lot of Spring-related dependencies) will also see a lot of movement.
That isn't to say there's no truth to the tropes about the JavaScript ecosystem being extremely volatile. But in this case I do think the context is the bigger difference.
> then again, we make client side applications with essentially no networking, so security isn’t as critical for us, stability is much more important)
By its nature, most JavaScript will be network connected in some fashion in environments with plenty of bad actors.
> Does the JS ecosystem really move so fast that you can’t wait a month or two before updating your packages?
In 2 months, a typical js framework goes through the full Gartner Hype Cycle and moves to being unmaintained with an archived git repo and dozens of virus infected forks with similar names.
It's common to have npm auditing enabled, which means your CI/CD will force you to update to a brand new version of a package because a security vulnerability was reported in an older one.
I've also had cases where I've found a bug in a package, submitted a bug report or PR, and then immediately pulled in the new version as soon as it was fixed. Things move fast in the JavaScript/npm/GitHub ecosystem.
I think the surface area for bugs in a C++ dependency is way bigger than a JS one. Pulling in a new node module is not going to segfault my app, for example.
I might be naive, but why isn't any package manager (npm, pnpm, bun, yarn, ...) pushing for a permission system, where packages have to declare in their package.json which permissions they need? À la Deno, but scoped to dependencies, or like mobile apps do with their manifests.
I know it would take time for packages to adopt this but it could be implemented as parameters when installing a new dependency, like `npm i ping --allow-net`. I wouldn't give a library like chalk access to I/O, processes or network.
I feel like that would require work from the language side, or at least runtimes. Is there a way of stopping code in one package from, say, hitting the network?
You might be able to do this around install scripts, though disk writing is likely needed for all (but perhaps locations could be controlled).
I feel like the correct solution to these problems (across NPM and all similar package managers) is a web-of-trust audit system based on:
- Reviewing the source code in the actual published package
- Tooling that enables one to easily see a trusted diff between a package version and the previous version of that package
- Built-in support in the package manager CLIs to only install packages that have a sufficient number of manual reviews from trusted sources (and no, or not too many, negative reviews), with explicit manual action required to bypass these checks.
There are enough users of each package that such a system should not be too onerous on users once the infrastructure was in place.
There was an NPM RFC for this feature (though not as focused on supply chain attacks) in 2022, but the main response mirrored some of the other comments in here.
"waiting a length of time doesn’t increase security, and if such a practice became common then it would just delay discovery of vulnerabilities until after that time anyways"

https://github.com/npm/rfcs/issues/646#issuecomment-12824971...
'Delayed dependency updates' is a response to supply-side attacks in the JavaScript world, but it aptly describes how I have come to approach technology broadly.
Large tech companies, as with most industry, have realized most people will pay with their privacy and data long before they'll pay with money. We live in a time of the Attention Currency, after all.
But you don't need to be a canary to live a technology-enabled life. Much of the software you pay for with your privacy and data has free or cheap open-source alternatives that approach the same or higher quality. When you orient your consumption toward 'eh, I can wait till the version that respects me is built', life becomes more enjoyable in myriad ways.
I don't take this to absolute levels. I pay for fancy-pants LLMs, currently. But I look forward to the day, not too far away, when I can get today's quality for libre in my homelab.
It seems like the core problems are that (1) node_modules is usually so large that no one actually audits it, (2) the npm churn is so great that no one can keep up with auditing, and (3) npm's design treats automatically updating patch or minor versions as good and desirable.

Go is one of the few packaging systems that got these right.
The downside of this approach is that this is how you create an ecosystem where legitimate security fixes never get applied. There's no free lunch: you need to decide whether you're more concerned about intentional backdoors (and thus never update anything automatically) or about vulnerabilities from ordinary unintentional bugs (and thus have a mechanism for getting security updates automatically).
I don't think this is realistic in the default npm ecosystem where projects can have 1000s of dependencies (with the majority being transitive with fuzzy versions).
Though pnpm does have a setting to help with this too: the time-based resolution mode (https://pnpm.io/settings#resolutionmode), which effectively pins subdependencies based on the published time of the direct dependency.
No, it isn't. Upgrades should be routine, like exercising. With your approach it becomes increasingly difficult and eventually impossible to upgrade anything since it requires moving a mountain. An update a ̶d̶a̶y̶ week makes the tech debt go away.
    pnpm config set -g minimumReleaseAge 1440

Does that work as well? I can't tell if the global settings are the same as workspace settings, and it lets me set nonsense keys this way, so I'm not sure if there is a global equivalent.
How is the age of a package calculated? If the publishing date were obtained from the package's metadata as defined by the package author (just like Git commit dates are defined by the Git committer), that would defeat the purpose of this new feature, whose whole point is to protect against malicious or compromised package authors. Instead, it is necessary to query the package registry, trusting the registry rather than the author for the age of the package. I presume this is how it works.
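For npm specifically, the registry records its own server-side publish timestamps in the `time` field of a package's metadata, independent of anything inside the tarball. A sketch of an age check against that field — the `oldEnough` helper is hypothetical, and only the registry URL shape is npm's public API:

```javascript
// Pure helper: has this publish timestamp aged past `minDays`?
function oldEnough(publishedIso, minDays, nowMs = Date.now()) {
  const ageMs = nowMs - Date.parse(publishedIso);
  return ageMs >= minDays * 86_400_000; // 86,400,000 ms per day
}

// Fetch registry-side publish times (never author-supplied metadata).
// meta.time maps each published version to its ISO 8601 upload timestamp.
async function versionOldEnough(name, version, minDays) {
  const res = await fetch(`https://registry.npmjs.org/${name}`);
  const meta = await res.json();
  return oldEnough(meta.time[version], minDays);
}

console.log(oldEnough("2024-01-01T00:00:00Z", 3,
                      Date.parse("2024-01-10T00:00:00Z"))); // true
```

Since the timestamp is assigned by the registry at upload time, a compromised author can't backdate a release to slip past the delay.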
Basically we severed connection to the public npm registry completely earlier in the week whilst this worm plays out.
Unfortunately there wasn't a way to do this without taking our cached "good" public packages down as well, so we later replicated the good cached packages into a new standalone private registry to be the new upstream.
The bit that was not obvious in the moment, but self-evident once we realised it, is that the registry we're using took the copy time as the publish time, and therefore our new 2-week delay is rejecting the copied packages...
So sample size of one, but the registry we're using is definitely using upload time not any metadata in the packages themselves. Good to know the filtering is working.
In corporate settings you usually have a proxy registry. You can set up filtering there for this kind of thing, based on license, CVE, release date, etc.
No, the ”vulnerability” here is npm unilaterally allowing postinstall scripts, which are then used as an entry point for malware.
Of course, the malware could just embed itself as an IIFE and get launched when the package is loaded, so disallowing postinstall is not really a security solution.
But the real solution to this kind of attack is to stop resolving packages by name and instead resolve them by hash, then binding a name to that hash for local use.
That would of course be a whole different, mostly unexplored, world, but there's just no getting around the fact that blindly accepting updated versions of something based on its name is always going to create juicy attack surface around the resolution of that name to some bits.
The problem here isn't, "someone introduced malware into an existing version of a package". The problem is, "people want to stay up to date, so when a new patch version is released, everyone upgrades to that new patch version".
Resolving by hash is a half solution at best, and not having automated dependency upgrades has severe security downsides of its own. Apart from that, lock files basically already do what you describe: they contain the hashes, resolution happens by name, and the hash ensures the integrity of the resolved package. The real problems are upgrade automation and supply chain scanning, and the biggest issue there is that scanning is not done where the vulnerability is introduced, because there is no money for it.
A better (not perfect) solution: every package should be analysed by AI on each update, before it becomes publicly available, to detect dangerous code and assign a rating.

A rating threshold would be defined in package.json: if the remote package's rating is below that value it can be updated; if it is higher, a warning should appear.

This will cost money, but I hope companies like GitHub will let package repositories use their services for free. Or we could find a way to distribute this service across us (the users and devs), like a BOINC client.
blamestross|5 months ago
2) Real chances for owners to notice they have been compromised
3) Adopt early before that commons is fully tragedy-ed.
homebrewer|5 months ago
https://pnpm.io/settings#modulescachemaxage
progx|5 months ago
Normally old major or minor versions don't get updates, only the latest. E.g. 4.1.47 gets no update, while 4.2.1 does. So if the problem is in 4.1, you must "upgrade" to 4.2.

With "perfect" semver this shouldn't be a problem, since 4.2 only adds new features... but back in reality, the world is not perfect.
h4ch1|5 months ago
There's an open discussion about adding something similar to bun as well. minimumReleaseAge doesn't seem to be a bulletproof solution, so there's still some research/testing to be done in this area.
wallrat|5 months ago
Good to see some OSS alternatives showing up!
sim7c00|5 months ago
Maybe it's better to disallow `latest` than to use age as a metric.
NamlchakKhandro|5 months ago
If I see someone using npm as a cli tool unironically...
mirekrusin|5 months ago
You can only unpublish.

Content hash integrity is verified in lockfiles.

The problem is with dependencies using semver ranges, especially wide ones like "debug": "*".

Initiatives like provenance statements [0] / code signing are also a good complement to delayed dependency updates.

Not running postinstall scripts by default / whitelisting them is also a good default in pnpm.

Modifying (especially adding) keys on npmjs.org should be behind dedicated 2FA (as well as changing 2FA).

[0] https://docs.npmjs.com/generating-provenance-statements