
progx | 5 months ago

That doesn't really solve the problem.

A better (though not perfect) solution: every package should be analyzed by an AI on each update, before it becomes publicly available, to detect dangerous code and assign a rating.

A rating threshold should be defined in package.json: when the remote package's rating is below that value it can be updated; if it is higher, a warning should appear.
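As a rough sketch of that idea (note: the `maxAuditRating` field and the rating scale are invented for illustration; no such npm mechanism exists):

```javascript
// Hypothetical sketch: a package.json "maxAuditRating" threshold check.
// Higher rating = riskier package in this sketch; both the field name and
// the scale are assumptions, not an existing npm feature.
function shouldWarn(localManifest, remoteRating) {
  // Default to 0: warn about anything with a nonzero risk rating.
  const threshold = localManifest.maxAuditRating ?? 0;
  return remoteRating > threshold;
}

const manifest = { name: "my-app", maxAuditRating: 3 };
console.log(shouldWarn(manifest, 2)); // below threshold: update allowed
console.log(shouldWarn(manifest, 7)); // above threshold: warn instead
```

The check runs locally at install/update time, so the only trust required is in whoever computed the remote rating.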

This will cost money, but I hope companies like GitHub, etc. will allow package repositories to use their services for free. Or we should find a way to distribute this service to us (the users and devs), like a BOINC client.


jonkoops | 5 months ago

Ah, yes! The universal and uncheatable LLM! Surely nothing can go wrong.

NitpickLawyer | 5 months ago

Perfect is the enemy of good. Current LLM systems + "traditional tools" for scanning can get you pretty far into detecting the low-hanging fruit. Hell, I bet even a semantic search with small embedding models could give you good insight into whether what's in the release notes matches what's in the code. Simply flag it and delay it a few hours, until a human can review it. Or run additional checks.
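The gating logic here can be sketched without any model at all. Below, plain token overlap (Jaccard similarity) stands in for the embedding comparison; a real system would substitute an actual embedding model, and the 0.2 threshold is an arbitrary assumption:

```javascript
// Crude stand-in for the embedding idea: compare release notes against a
// summary of the code change using token overlap (Jaccard similarity).
function tokenSet(text) {
  return new Set(text.toLowerCase().match(/[a-z]+/g) ?? []);
}

function similarity(a, b) {
  const sa = tokenSet(a);
  const sb = tokenSet(b);
  const inter = [...sa].filter((t) => sb.has(t)).length;
  const union = new Set([...sa, ...sb]).size;
  return union === 0 ? 0 : inter / union;
}

// Flag a release for human review when the notes barely overlap the change.
// The 0.2 cutoff is an illustrative assumption, not a tuned value.
function needsReview(releaseNotes, diffSummary, minSimilarity = 0.2) {
  return similarity(releaseNotes, diffSummary) < minSimilarity;
}
```

A release whose notes say "minor docs update" but whose diff adds network code in a postinstall script would score near zero overlap and get held for review.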

progx | 5 months ago

I can't wait to read about your solution.

progx | 5 months ago

As I wrote: "not perfect". But better than anything else, or than nothing.

philipwhiuk | 5 months ago

A better solution is restricting package permissions.
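The permission approach can be sketched as a deny-by-default capability check, in the spirit of Deno-style runtime permissions. Everything here (the manifest shape, the capability names) is hypothetical; npm has no such mechanism today:

```javascript
// Hypothetical sketch: per-package capability declarations, deny by default.
// Inspired by Deno-style permissions; not an existing npm mechanism.
const declaredPermissions = {
  "left-pad": [],                  // pure string utility: needs nothing
  "some-http-client": ["net"],     // hypothetical package that declares net access
};

function isAllowed(pkg, capability) {
  // A package may only use capabilities it has explicitly declared.
  return (declaredPermissions[pkg] ?? []).includes(capability);
}
```

Under this model, a compromised utility package that suddenly tries to open a network connection would be blocked regardless of what its new code does.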