item 42857532

Can we get the benefits of transitive dependencies without undermining security?

85 points| ltratt | 1 year ago |tratt.net

87 comments


jerf|1 year ago

If anything is going to put capabilities into the programmer ecosystem, I think it's this problem.

The neat thing about this particular problem is that you can do some really coarse things and get immediate benefit. Capabilities in their original form, and perhaps their truest form, carry down the call stack, so that in the most sophisticated implementations code can do things like "restrict everything I call to only appending to files in this specific subtree". But you could do something coarser with libraries and just do things like "these libraries cannot access the network", and get big wins from some simple assertions. If you're a library for turning jpegs into their pixels, you don't need network access, and with only a bit more work, you shouldn't even need filesystem access (get passed in files if you need them, but no ability to spontaneously create files).

This would not be a complete solution, or perhaps even an adequate solution, but it would be a great bang-for-the-buck solution, and a great way to step into that world and immediately get benefits without requiring the entire world to immediately rewrite everything to use granular capabilities everywhere.
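A minimal Rust sketch of that "pass the file in" idea (the decoder API is hypothetical; real decoding is elided, the point is only the shape of the interface):

```rust
use std::io::Read;

// Hypothetical decoder API: the library accepts any reader, so the
// caller decides exactly what it may touch. The library never opens
// files or sockets itself, so it needs no filesystem or network access.
fn decode_pixels<R: Read>(mut input: R) -> std::io::Result<Vec<u8>> {
    let mut bytes = Vec::new();
    input.read_to_end(&mut bytes)?;
    // Real pixel decoding elided; the bytes pass through unchanged here.
    Ok(bytes)
}

fn main() -> std::io::Result<()> {
    // The caller opens (or fakes) the input and hands over only that handle.
    let pixels = decode_pixels(&[1u8, 2, 3][..])?;
    assert_eq!(pixels, vec![1, 2, 3]);
    Ok(())
}
```

The decoder can read what it was given and nothing else; there is no path parameter to abuse.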

jonahx|1 year ago

Capabilities are the only way.

It is insane to me that in 2025 there is no easy way for me to run a program that, say, "can't touch the filesystem or network". As you say, even a few simple, very coarse-grained categories of capabilities would be sufficient for 95% of cases.

Veserv|1 year ago

That is almost the exact backwards way to talk about capabilities. It is not about "restricting" access, it is about "granting" access.

"These libraries can not access the network." No. "These libraries have not been given access to the network (and by default none are given access)."

From an implementation perspective, this is just passing in access rights as "local" resources instead of using "global" resources. For instance, it is self-evident that other code cannot use your B-Tree local variable if you did not pass a reference to it to any called functions (assuming no arbitrary pointer casts). You just do the same with these "resources": it is passing things to functions instead of relying on globals. The only difficulty is making these actions/resources "passable", which is trivial at the language level, and "fine-grained/divisible" to avoid over-granting.
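In Rust that "grant by passing" pattern can be sketched with an ordinary value acting as a token (everything here is hypothetical; `fetch` is a stand-in, not a real network call):

```rust
// Hypothetical capability token: holding a NetCap value is the only way
// to reach the network helper. Nothing is "restricted"; access simply
// isn't granted unless the token is passed down the call chain.
struct NetCap(());

impl NetCap {
    // Only trusted setup code (e.g. main) mints the token.
    fn new_root() -> NetCap {
        NetCap(())
    }
    // Stand-in for a real request; the string just records the call.
    fn fetch(&self, host: &str) -> String {
        format!("GET {host}")
    }
}

// Can use the network: the capability was explicitly passed in.
fn with_net(net: &NetCap) -> String {
    net.fetch("example.com")
}

// Cannot: no token in scope, hence no way to call fetch().
fn pure_transform(x: u32) -> u32 {
    x * 2
}

fn main() {
    let root = NetCap::new_root();
    assert_eq!(with_net(&root), "GET example.com");
    assert_eq!(pure_transform(21), 42);
}
```

The type system enforces the "no globals" discipline for free, as long as nothing smuggles a token through ambient state.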

marcosdumay|1 year ago

Just to say, but you are describing an effect type system. (Or, to the "types are static" people, possibly a dynamic effect system.)

Capabilities taken literally are more of a network thing (it's how you prove you have access to a computer that doesn't trust you). Within a language, you don't need the capabilities themselves.

zokier|1 year ago

I'm a bit surprised that browsers did not get a mention here. They are one of the big examples of how an application can be split into a gazillion small processes along trust boundaries, while also being a pretty performance-sensitive application.

As for transitive deps, I have some hopes for more "distro"-like models arising. In particular, I see clear parallels between how traditional Linux distros work and how large FAANG-style monorepos work. So maybe we could see more of these sorts of platforms/collections of curated libraries that avoid deep transitive dep trees by hoisting all the deps into one cohesive, globally curated set that has at least a few more eyes on it.

yencabulator|1 year ago

I think Chrome isolated processes fail the idealized criteria, though:

> 1. Performance. [...] Unix-esque shared memory, though much faster, is far too difficult to use reliably for untrusted components.

> 2. Expressivity. [...] without descending into the horrors that RPC (Remote Procedure Call) tends to descend to.

The author is asking how we can do better / get this popular and used all over, not what the current state of the art is (when that SOTA is a notoriously large and complex program that your average programmer has never looked inside of).

mike_hearn|1 year ago

Yes, and Chrome's Mojo IPC system is actually one of the only parts of the codebase that's neatly factored out (somewhat) into a reusable library.

https://chromium.googlesource.com/chromium/src/+/master/mojo...

In the past I used the Java SecurityManager to do a PoC PDF renderer that sandboxed the Apache PDFbox library. I lowered the privilege of the PDF rendering component and tested it with a PDF that exploited an old XXE vulnerability in the library. It worked! Unfortunately, figuring out how to do that wasn't easy and the Java maintainers got tired of maintaining something that wasn't widely used, so they ripped it out.

There are some conceptual and social difficulties with this kind of security, at some point I should write about them.

1. People get distracted by the idea of pure capability systems. See the thread above. Pure caps don't work: they've been tried, and they remain an academic concept that nothing real uses but which sucks up energy and mental bandwidth.

2. People get distracted by speculation attacks. In-process sandboxing can't stop sandboxed code from reading the address space, so this is often taken to mean everything has to be multi-process (like Mojo). But that's not the case. Speculation attacks are read-only attacks. To do anything useful you have to be able to exfiltrate data. A lot of libraries, if properly restricted, have no way to exfiltrate any data they speculatively access. So in-process sandboxing can still be highly useful even in the presence of Spectre. At the same time, some libraries DO need to be put into a separate address space, and that may even change depending on how a library is used. Mojo is neat partly because it abstracts location, allowing components to run either in-proc or out-of-proc, and the calling code doesn't know which.

3. People get distracted by language neutrality. Once you add IPC you need RPC, and once you add RPC it's easy to end up making something fully language neutral, so it turns into a fairly complex IDL-driven system. The SecurityManager was nice because it didn't have this problem.

4. Kernel sandboxing is an absolute shitshow on every OS. Either the APIs are atrocious and impossible to figure out, or they're undocumented, or both, and none of them provide sandboxing at the right level; e.g. you can't allow a process to make HTTP requests to specific hosts, because the kernel doesn't know anything about HTTP.

Sandboxing libraries effectively is still IMHO an open research problem. The usability problems need someone to crack it, and it'll probably be language/runtime specific when they do even if libraries like Mojo are sitting there under the hood.

cudgy|1 year ago

“All this has led me, slowly and reluctantly, to the conclusion that our dependency-heavy approach to building software is fundamentally incompatible with security.”

This vulnerability seemed so apparent, though. I used to object to all the dependencies we were adding when we had no idea how they were implemented, who implemented them, or why. I always got pushback that I was holding the team back from moving quickly and using the latest frameworks. It is important to consider that many of these security issues derive from the methods new developers are taught: strap together a bunch of libraries of which you only use 5 to 10%, resulting in a massive application with tons of unknown threats (and poor performance, by the way). I mostly blame the largest tech companies, like Google and Meta, that turn out large frameworks that are overkill for 90% of the development out there. New developers held these companies in extremely high regard and considered their technology state of the art in every sense. Yes, it was state of the art for solving the problems Google and Meta needed to solve, but the adoption of these technologies by small startups and other companies has now made the dependency explosion endemic.

The worst violators of this principle that I've noticed are web development frameworks like Rails, React, etc. Further, it is ironic that these platforms, the web platform in particular, are promoted as more secure relative to the old ActiveX model of running binaries directly in the browser. However, I would rather run a trusted binary with 5k lines of code in a browser or app that my team has fully vetted than 1,200 libraries and millions of lines of code that accomplish the same task.

Perhaps this is a good use for AI: scanning the source code of library dependencies for security threats or potential security threats. Another solution would be to break libraries out into smaller components that perform specific functional tasks. These would be easier to validate and would also result in smaller applications. Obviously there is no easy solution, and running a binary in your browser is not a great one either. However, we as developers need to consider the trade-off between the danger of, say, running a compact native app versus the "safety" of using jack-of-all-trades frameworks that include millions of lines of code.

nightpool|1 year ago

This is an interesting article to have up alongside the SLAP and FLOP vulnerabilities. I like capabilities as much as the next programmer, but my gut tells me that process boundaries, or other sorts of hardware-enforceable memory boundaries, are only going to get more important, not less, as chips get faster and untrusted code gets more widely understood.

yencabulator|1 year ago

Yeah, it really seems capabilities and WASM sandboxes can't protect against speculation attacks. They'll both still be very useful for limiting what kinds of attacks can happen, since more isolation is either hard to program or slow on current hardware.

Longer term, I think our hopes have to rest on either cheaper MMU transitions between processes or huge numbers of cheap cores.

And for both of those, we'll need fast message passing. Either good frameworks for shared-memory-based message passing using today's tech, ones that guide people away from TOCTOU attacks; or hardware support for message passing, for example some sort of MMU ownership transfer / write-once memory sealing / read-once-into-local-memory / pass-ownership-of-cacheline that makes it easier to implement securely. Or, for the lots-of-cheap-cores scenario, some kind of hardware messaging primitive between cores.
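The TOCTOU discipline those frameworks would need to enforce can be sketched in a few lines of Rust (everything here is illustrative: the tag byte and the plain slice standing in for a shared-memory region are hypothetical):

```rust
// Sketch of the TOCTOU rule for shared-memory message passing: copy the
// message out of the shared region first, then validate the private
// copy. Validating in place would let the other side rewrite fields
// between the check and the use.
fn receive(shared: &[u8]) -> Option<Vec<u8>> {
    let private: Vec<u8> = shared.to_vec(); // 1. copy out of shared memory
    if private.first() != Some(&0x01) {     // 2. validate the private copy
        return None;                        //    (hypothetical tag byte)
    }
    Some(private)                           // 3. act only on the copy
}

fn main() {
    assert_eq!(receive(&[0x01, 0xaa]), Some(vec![0x01, 0xaa]));
    assert_eq!(receive(&[0x00]), None);
}
```

A good framework would make the copy-then-validate order the only order the API allows.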

They say systems research is dead; I think it's just lacking funding, because all of the above sounds very much like a revival of things that were being researched earlier, with Barrelfish etc. The real challenge is changing mainstream hardware. I find it hard to believe AMD isn't devoting even a five-person team to this sort of stuff, in the hope of discovering a "small feature" that could change the world of software. Or one of the RISC-V shops, though they have enough challenges ahead of them already.

yencabulator|1 year ago

> Addressing the first of these points requires at least somewhat rethinking of hardware and operating systems

The (vaporware) Mill CPU design has "portals" that are like fat function calls with byte-granularity MMU capability protection to limit mutual exposure between untrusting bits of code on opposite sides of the portal. Think of them as cheap function-call-like syscalls into your kernel, but also usable for microkernel boundaries and pure userspace components.

https://www.youtube.com/watch?v=5osiYZV8n3U

Of course, we can't have nice things and are stuck with x86/arm/riscv, where it seems nothing better than CHERI is realistic, and such security boundaries will suffer relatively enormous TLB-switching overheads.

Sytten|1 year ago

Side note: I think the serde_yaml debacle was so predictable. As much as I admire dtolnay, his choice to archive the repository and push a deprecated version to cargo, which made everybody scramble for an alternative "maintained" crate, is on him. You can say whatever you want about checkbox security, but most people still have to deal with it, if only to make the tooling shut up so they can do their work.

Maybe the Rust Foundation should take over more of those fundamental crates when maintainers are not willing or able to continue working on them. A similar problem happened with the zip crate, which was transferred rather quickly to a new maintainer; most people still use a very old pre-transfer version.

ozim|1 year ago

There are already SBOM (software bill of materials) standards, CycloneDX and SPDX, in development and in use. There are also VEX and SLSA.

The idea is that if everyone does the legwork to check their dependencies, you can trust your dependencies because they checked theirs.

It is still trust, but it moves from the implicit "hey, you sure you checked your dependencies and didn't just npm install a library some kid from Alaska created, who pulled in his dependency from a kid in Montenegro?" toward something explicit.

Including random libraries just because we can and they had enough stars on GH was already a bad idea - but nowadays it is becoming an offense, and rightly so.

newpavlov|1 year ago

I think there is only one proper solution to the security problem of transitive dependencies: an open database of vetted/rejected libraries and tooling which would help to pull dependency versions according to configured rules.

For example, by trusting several big players such as the Rust Foundation, Servo, Mozilla, Google, Facebook, etc. (developers will decide for themselves whom exactly they trust), who would manually review the dependencies they use, we will be able to cover the most important parts of the ecosystem, and developers will be able to review the more minor dependencies in their projects themselves. cargo-crev + cargo-audit come somewhat close to what is needed; everything else is a matter of adoption.
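The tooling side might consume a policy file along these lines (a purely hypothetical format, loosely inspired by cargo-crev's web-of-trust model; none of these keys are real cargo-crev or cargo-audit syntax):

```toml
# Hypothetical dependency-vetting policy (illustrative, not real syntax)
[trust]
orgs = ["rust-lang", "servo", "mozilla"]   # whose reviews we accept
min-reviews = 1                            # reviews required per crate version

[overrides]
# Crates we have reviewed ourselves, pinned to the vetted versions
some-minor-crate = { version = "=0.1.2", reviewed-by = "our-team" }
```

The tool would then refuse to pull any dependency version that neither a trusted org nor the local team has signed off on.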

Capabilities and other automated tools can help immensely with manual reviews, but they cannot replace them completely, especially in compiled languages like Rust.

>It’s easy for me to forget that the trust I place in those 20 direct dependencies is extended unchanged to the 161 indirect dependencies.

Unfortunately, when people mention such numbers they commonly do not account for the difference between "number of libraries in the dependency tree" and "number of groups responsible for the dependency tree". In practice, the latter number is often significantly smaller than the former. In other words, many projects (at least in the Rust world) lean towards the "micro-crate" approach, meaning that one group may be responsible for 20 crates in the dependency tree, which does not make the security risks 20x bigger.

neipat|1 year ago

I'm building a company in this area doing something similar. The goal is to provide a safer source for open source application dependencies that augments/replaces e.g. NPM.

We take open source dependencies and:

- Patch vulnerabilities, including in transitive dependencies, even for old or EoL versions that teams may be on

- Continuously vet newly published versions for malware, and don't bring them into the registry if any is found

- Inline dependencies that don't need to be separate packages (e.g. fold in trim-newlines, a 1-line NPM package, into a parent package) to simplify dependency trees

This is then available to developers as a one-line change in e.g. package.json. Once you switch, you no longer need to manually review packages or do any of this scanning/upgrading/vulnerability-management work, since you can trust and set policies on the registry.

We're in the very early days and working with a few future-minded developers to get feedback on the design. If you're interested, I'd love to share more! Please email me at neil@syntra.io

Ygg2|1 year ago

What is "number of groups responsible" in the context of Cargo? GitHub orgs?

binary132|1 year ago

I do not foresee the status quo improving on this front. If anything, it will continue to get worse until we are forced to deal with a massive problem that will make the security crises we’ve dealt with up til now look like a walk in the park.

JoachimSchipper|1 year ago

I really like this article. I do think it's useful to consider that the unit of isolation ("process") of the cloud era is a VM or container, and that the major clouds do have some sort of permissions model.

grinkelhoof|1 year ago

I'm going to take a wild guess that WASM is orders of magnitude slower for IPC than raw Unix, which is unfortunate, because it seems like some of the most promising fertile soil for a security-first capability model.

Does that disqualify it as a potential path to a solution? How fine-grained would these components realistically need to be?

Ygg2|1 year ago

My one big problem with the title, and the way this blog talks about the issue, is that it assumes infinite scaling - in performance, correctness, size, security. What works at a small scale is ludicrous at a huge one.

There is an assumption that your blog and your multi-billion SaaS should involve transferrable skills. It's like expecting the person designing a shack and the person designing the next Fort Knox to use the same plans, materials, and people. Either you get extremely overbuilt shacks, with vault doors, separate HVAC, and OSHA regulations, that take decades to build and cost you several billion dollars, or a Fort Knox where anyone can kick down the vault doors and steal the money.

If your blog, or your 72h hackathon game, takes on 3000 dependencies and maybe one of them is malicious (which is low probability), who cares?

If your multi-million SaaS has 3000 dependencies, yeah, it's time to slim it down. Granted, no one wants to do this, because it costs money and takes time away from shipping another feature.