phiresky | 4 months ago

That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.
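For reference, the whitelist mentioned above lives in `package.json` under the `pnpm` key. A minimal sketch, assuming pnpm 10+ (the package names are just examples of dependencies that legitimately need build scripts):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```

Everything not listed there has its lifecycle scripts silently skipped at install time.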

tetha|4 months ago

At work, we're currently looking into firejail and bubblewrap a lot, though, and within the ops team we're looking at ways to run as much as possible, if not everything, through these tools tbh.

Because the counter-question could be: Why would anything but ssh or ansible need access to my ssh keys? Why would anything but firefox need access to the local firefox profiles? All of those can be mapped out with mount namespaces from the execution environment of most applications.
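The mapping-out described above can be sketched with bubblewrap, for example (a minimal sketch, not a hardened profile — the target command and paths are just illustrations):

```shell
# Share the filesystem read-only, but replace the sensitive
# directories with empty tmpfs mounts inside the sandbox.
bwrap \
  --ro-bind / / \
  --dev /dev \
  --proc /proc \
  --tmpfs "$HOME/.ssh" \
  --tmpfs "$HOME/.mozilla" \
  ls -la "$HOME/.ssh"
```

Inside the sandbox, `~/.ssh` exists but is empty, so a compromised tool has nothing to exfiltrate from it.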

And sure, this is a blacklist approach, and a whitelist approach would be even stronger, but the blacklist approach to secure at least the keys to the kingdom is quicker to get off the ground.

ashishb|4 months ago

firejail, bubblewrap, direct chroot, sandbox-run ... all have been mentioned in this thread.

There's a gazillion tools here, enough to give someone analysis paralysis. Here's my simple suggestion: your whole backend team already knows (or should learn) Docker for production deployments.

So, why not rely on the same? It might not be the most efficient, but then dev machines are mostly underutilized anyway.
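A sketch of what that looks like in practice (the image tag is just an example): mount only the project directory into a throwaway container, so install scripts never see your home directory or SSH keys.

```shell
docker run --rm -it \
  -v "$PWD":/app \
  -w /app \
  node:22 \
  npm install
```

The container is discarded afterwards (`--rm`), and only `$PWD` is writable from inside it.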

ashishb|4 months ago

> Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.

Imagine you are in a 50-person team that maintains 10 JavaScript projects, which one is easier?

  - Switch all projects to `pnpm`? That means switching CI and deployment processes as well
  - Change the way *you* run `npm` on your machine and let your colleagues know to do the same
I find the second to be a lot easier.

larusso|4 months ago

I don’t get your argument here. 10 isn’t a huge number in my book, but of course I don’t know what else that entails. I would opt for a secure process change over a soft local workflow restriction that may or may not be followed by all individuals. And I would definitely protect my CI system in the same way as local machines. Depending on the nature of your CI, these machines can have broad access rights. This really depends on how you do CI and how lax security is.

afavour|4 months ago

There are a great many extra perks to switching to pnpm though. We switched on our projects a while back and haven’t looked back.

alt227|4 months ago

Yeah, I'd just take the time to convert the 10 projects rather than try to get 50 people to change their working habits, plus new staff coming in etc.

Switch your projects once, done for all.

fragmede|4 months ago

Am I missing something? Don't you also need to change how CI and deployment processes call npm? If my CI server and then also my deployment scripts are calling npm the old insecure way, and running infected install scripts/whatever, haven't I just still fucked myself, just on my CI server and whatever deployment system(s) are involved? That seems bad.

jve|4 months ago

You have the logic backwards here. I would rather have a single person deal with the pnpm migration and CI than instruct everyone else and hope they all do the right thing. And think about what happens when the next person comes in... so I'd go for the first option for sure.

And npm can be configured to prevent install scripts from running anyway:

> Consider adding ignore-scripts to your .npmrc project file, or to your global npm configuration.

But I do like your option to isolate npm for local development purposes.
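A minimal sketch of the quoted advice — a per-project `.npmrc`:

```
ignore-scripts=true
```

One caveat worth knowing: this disables *all* lifecycle scripts, including your own project's `prepare`/`postinstall` hooks, so packages that genuinely need a build step have to be handled explicitly afterwards.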

azangru|4 months ago

> which one is easier?

> Switch all projects to `pnpm`?

Sorry; I am out of touch. Does pnpm not have these security problems? Do they only exist for npm?

ashishb|4 months ago

> That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

I won't execute that code directly on my machine. I will always execute it inside the Docker container. Why do you want to run commands like `vite` or `eslint` directly on your machine? Why do they need access to anything outside the current directory?

bandrami|4 months ago

I get this but then in practice the only actually valuable stuff on my computer is... the code and data in my dev containers. Everything else I can download off the Internet for free at any time.

throwaway290|4 months ago

It's weird that it's downvoted because this is the way

apsurd|4 months ago

it annoys me that people fully automate things like type checkers and linting into post-commit hooks, or worse, outsource them entirely to CI.

Because it means the hygiene is thrown over the fence, after the commit.

AI makes this worse because agents also run them "over the fence".

However you run it, I want a human to hold accountability for the mainline committed code.

simpaticoder|4 months ago

pnpm has lots of other good attributes: it is much faster, and also keeps a central store of your dependencies, reducing disk usage and download time, similar to what java/mvn does.

johannes1234321|4 months ago

> command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

By default it directly runs code as part of the download.

With isolation there is at least a chance to do some form of review/inspection.

Kholin|4 months ago

I've tried using pnpm to replace npm in my project. It really speeds up installing dependencies on the host machine, but it's much slower in the CI containers, even after configuring the cache volume. Which made me come back to npm.

worthless-trash|4 months ago

> That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

I don't want to stereotype, but this logic is exactly why the JavaScript supply chain is in the mess it's in.