top | item 46032719


darkamaul | 3 months ago

The "use cooldown" [0] blog post looks particularly relevant today.

I'd argue automated dependency updates pose a greater risk than one-day exploits, though I don't have data to back that up. It's harder to undo a compromised package that's already in thousands of lock files than to manually patch an already-exploited vulnerability in your dependencies.

[0] https://blog.yossarian.net/2025/11/21/We-should-all-be-using...


plomme|3 months ago

Why not take it further and not update dependencies at all until you need to because of some missing feature or systems compatibility you need? If it works it works.

skybrian|3 months ago

The arguments for doing frequent releases partially apply to upgrading dependencies. Upgrading gets harder the longer you put it off. It’s better to do it on a regular schedule, so there are fewer changes at once and it preserves knowledge about how to do it.

A cooldown is a good idea, though.

jonfw|3 months ago

There is a Goldilocks effect. Dependency just came out a few minutes ago? There is no time for the community to catch the vulnerability, no real coverage from dependency scans, and it's a risk. Dependency came out a few months ago? It likely has a large number of known vulns.

bigstrat2003|3 months ago

That is indeed what one should do IMO. We've known for a long time now in the ops world that keeping versions stable is a good way to reduce issues, and it seems to me that the same principle applies quite well to software dev. I've never found the "but then upgrading is more of a pain" argument to be persuasive, as it seems to be equally a pain to upgrade whether you do it once every six months or once every six years.

kunley|3 months ago

> Why not take it further and not update dependencies at all until you need to because of some missing feature or systems compatibility you need? If it works it works.

Indeed, there are people doing that, and communities with a consensus that such an approach makes sense, or at least isn't frowned upon. (Hi, Gophers)

SkyPuncher|3 months ago

This works until you consider regular security vulnerability patching (which we have compliance/contractual obligations for).

tim1994|3 months ago

Because updates don't just include new features but also bug and security fixes. As always, how relevant this is to you probably depends on the context. I agree that cooldown is a good idea, though.

hinkley|3 months ago

CI fights this. But that’s peanuts compared to feature branches and nothing compared to lack of a monolith.

We had so many distinct packages on my last project that I had to massively upgrade a tool a coworker started to track the dependency tree so people stopped being afraid of the release process.

I could not think of any way to make lock files not be the absolute worst thing about our entire dev and release process, so the handful of deployables had a lockfile each that was only utilized to do hotfix releases without changing the dep tree out from underneath us. Artifactory helps only a little here.

yupyupyups|3 months ago

Just make sure to update when new CVEs are revealed.

Also, some software is always buggy, and every version is a mixed bag of new features, bugs, and regressions. It could be due to the complexity of the problem the software is trying to solve, or because it's just not written well.

parliament32|3 months ago

Because if you're too far behind, then when you do "need" to upgrade, it takes days instead of hours.

Sparkle-san|3 months ago

Because AppSec requires us to adhere to strict vulnerability SLA guidelines and that's further reinforced by similar demands from our customers.

jacquesm|3 months ago

But even then you are still depending on others to catch the bugs for you and it doesn't scale: if everybody did the cooldown thing you'd be right back where you started.

falcor84|3 months ago

I don't think that this Kantian argument is relevant in tech. We've had LTS versions of software for decades and it's not like every single person in the industry is just waiting for code to hit LTS before trying it. There are a lot of people and (mostly smaller) companies who pride themselves on being close to the "bleeding edge", where they're participating more fully in discovering issues and steering the direction.

woodruffw|3 months ago

The assumption in the post is that scanners are effective at detecting attacks within the cooldown period, not that end-device exploitation is necessary for detection.

(This may end up not being true, in which case a lot of people are paying security vendors a lot of money to essentially regurgitate vulnerability feeds at them.)

nine_k|3 months ago

To find a vulnerability, one does not necessarily deploy a vulnerable version to prod. It would be wise to run a separate CI job that tries to upgrade to the latest versions of everything, runs the tests, watches network traffic, and otherwise looks for suspicious activity. This can be done relatively economically, and the responsibility could be reasonably distributed across the community of users.
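A job along those lines could be sketched as a scheduled CI workflow. This is a hypothetical GitHub Actions config for an npm project; the workflow name and schedule are made up, and a real setup would add the traffic monitoring the comment describes:

  name: dependency-canary
  on:
    schedule:
      - cron: "0 3 * * 1"  # weekly; adjust to taste
  jobs:
    canary:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - uses: actions/setup-node@v4
        # Bump everything to the latest published versions in a throwaway checkout
        - run: npx npm-check-updates -u && npm install
        # Surface breakage (or worse) before anyone upgrades for real
        - run: npm test

Nothing from this job is merged; it exists only to exercise the newest versions early and fail loudly.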

bootsmann|3 months ago

It does scale against this form of attack. This attack propagates by injecting itself into the packages you host. If you pull only 7d after release you are infected 7d later. If your customers then also only pull 7d later they are pulling 14d after the attack has launched, giving defenders a much longer window by slowing down the propagation of the worm.
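The compounding is easy to see with back-of-the-envelope arithmetic (a sketch; the uniform per-hop cooldown is an idealization):

  # With an N-day cooldown at every hop, a consumer at dependency depth D
  # first picks up a freshly published compromise after roughly D * N days.
  cooldown_days=7
  for depth in 1 2 3; do
    echo "depth $depth: exposed after $((depth * cooldown_days)) days"
  done

Each extra level of the dependency chain buys defenders another full cooldown window.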

vintagedave|3 months ago

That worried me too, a sort of inverse tragedy of the commons. I'll use a weeklong cooldown, _someone else_ will find the issue...

Until no-one does, for a week. To stretch the original metaphor, instead of an overgrazed pasture, we grow a communally untended thicket which may or may not have snakes when we finally enter.

Ygg2|3 months ago

I don't buy this line of reasoning. There are zero/one day vulnerabilities that will get extra time to spread. Also, if everyone switches to the same cooldown, wouldn't this just postpone the discovery of future Shai-Huluds?

I guess the latter point depends on how Shai-Huluds are detected. If they are discovered by downstreams of libraries, or worse by users, then it will do nothing.

__s|3 months ago

There are companies like Helix Guard scanning registries. They advertise static analysis / LLM analysis, but honeypot instances can also install packages and detect access to certain files, like cloud configs.

wavemode|3 months ago

Your line of reasoning only makes sense if literally almost all developers in the world adopt cooldowns, and adopt the same cooldown.

That would be a level of mass participation yet unseen by mankind (in anything, much less something as subjective as software development). I think we're fine.

hyperpape|3 months ago

For zero/one days, the trick is that you'd pair dependency cooldowns with automatic scanning for vulnerable dependencies.

And in the cases where you have vulnerable dependencies, you'd force update them before the cooldown period had expired, while leaving everything else you can in place.
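As a policy, that combination might look like the following (a hedged sketch; the function name and arguments are made up, and a real pipeline would get the "fixes a known vuln" signal from a scanner such as `npm audit` rather than a hand-passed flag):

  # Accept a new version early only when it fixes a known vulnerability;
  # otherwise hold it until the cooldown has elapsed.
  should_update() {  # args: age_in_days cooldown_in_days fixes_known_vuln(0|1)
    [ "$3" -eq 1 ] || [ "$1" -ge "$2" ]
  }
  should_update 2 7 1 && echo "patch now"   # vulnerable dep: bypass the cooldown
  should_update 2 7 0 || echo "wait"        # healthy but too new: hold it back

The cooldown is the default; the vulnerability feed is the escape hatch.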

Sammi|3 months ago

Pretty easy to do using npm-check-updates:

https://www.npmjs.com/package/npm-check-updates#cooldown

In one command:

  npx npm-check-updates -c 7

tragiclos|3 months ago

The docs list this caveat:

> Note that previous stable versions will not be suggested. The package will be completely ignored if its latest published version is within the cooldown period.

Seems like a big drawback to this approach.