Has anyone here even read the article?! All the comments here assume they're building a package manager for C!
They're writing a tool to discover and index all indirect dependencies across languages, including C libraries that were smuggled inside other packages and weren't properly declared as a dependency anywhere.
"Please don't" what? Please don't discover the duplicate and potentially vulnerable C libraries that are out of sight of the system package manager?
Yeah it's pretty weird how people assume that -l<name> is supposed to work in gcc/clang across distributions, but somehow deriving which OS package gives you that lib<name>.so file is the devil.
Please don't. C packaging in distros is working fine and doesn't need to turn into crap like the other language-specific package managers. If you don't know how to use pkgconf then that's your problem.
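For readers who haven't used it: a typical pkgconf query looks like the sketch below. `zlib` is only an example package name, and the query is skipped when neither `pkgconf` nor the older `pkg-config` is installed.

```shell
#!/bin/sh
# Query the distro's copy of a library via its pkg-config metadata.
PKGCONF="$(command -v pkgconf || command -v pkg-config || true)"

if [ -n "$PKGCONF" ] && "$PKGCONF" --exists zlib 2>/dev/null; then
    # Compiler and linker flags for wherever the distro put zlib.
    echo "cflags: $("$PKGCONF" --cflags zlib)"
    echo "libs:   $("$PKGCONF" --libs zlib)"
else
    echo "pkgconf/pkg-config or zlib.pc not found; skipping"
fi
```

The point of the indirection is that the same Makefile works whether the distro installed the headers under /usr/include, /usr/local/include, or somewhere else entirely.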
When I used to work with C many years ago, it was basically: download the headers and the binary file for your platform from the official website, place them in the header/lib paths, update the linker step in the Makefile, #include where it's needed, then use the library functions. It was a little bit more work than typing "npm install", but not so much as to cause headaches.
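The workflow described above can be sketched as a minimal Makefile. All names here (`vendor/`, `libfoo`, `main.c`) are illustrative, not from any real project:

```make
# Hypothetical layout: downloaded headers in vendor/include, the
# prebuilt binary library (libfoo.a or libfoo.so) in vendor/lib.
CC      = cc
CFLAGS  = -Wall -Ivendor/include   # so that #include <foo.h> resolves
LDFLAGS = -Lvendor/lib             # the updated linker step
LDLIBS  = -lfoo

app: main.o
	$(CC) $(LDFLAGS) -o $@ main.o $(LDLIBS)

main.o: main.c
	$(CC) $(CFLAGS) -c main.c
```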
Well, if you're fine with using 3-year-old versions of those libraries, packaged by severely overworked maintainers who at one point seriously considered blindly converting everything into Flatpaks and shipping those simply because they can't muster enough manpower, sure.
"But you can use 3rd party repositories!" Yeah, and I also can just download the library from its author's site. I mean, if I trust them enough to run their library, why do I need opinionated middle-men?
What "distro" package manager is available on Windows and macOS? vcpkg doesn't provide binary packages and has quite a few autotools-shaped holes. Homebrew is great as long as you're building for your local machine's macOS version and architecture, but if you want to support an actual user community you're SOL.
I mean … it clearly isn’t working well if problems like “what is the libssl distribution called in a given Linux distro’s package manager?” and “installing a MySQL driver in four of the five most popular programming languages in the world requires either bundling binary artifacts with language libraries or invoking a compiler toolchain in unspecified, unpredictable, and failure-prone ways” are both incredibly common and incredibly painful for many/most users and developers.
The idea of a protocol for “what artifacts in what languages does $thing depend on and how will it find them?” as discussed in the article would be incredibly powerful…IFF it were adopted widely enough to become a real standard.
I've contemplated this quite a bit (and I personally maintain a C++ artifact that I deploy to production machines, and I generally prefer not to use containers for it), and I think I disagree.
Distributions have solved a very specific problem quite nicely: they are building what is effectively one application (the distro) with many optional pieces, it has one set of dependencies, and the users update the whole thing when they update. If the distro wants to patch a dependency, it does so. ELF programs that set PT_INTERP to /lib/ld-linux-[arch].so.1 opt in to the distro's set of dependencies. This all works remarkably well, and a lot of tooling has been built around it.
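That opt-in is visible in the ELF program headers. A quick way to check, assuming `readelf` from binutils is installed and using `/bin/sh` as the example binary:

```shell
#!/bin/sh
# Show which dynamic loader a binary opts into via its PT_INTERP header.
BIN=/bin/sh
if command -v readelf >/dev/null 2>&1; then
    readelf -l "$BIN" 2>/dev/null | grep -i interpreter \
        || echo "$BIN has no PT_INTERP (statically linked?)"
else
    echo "readelf not installed; skipping"
fi
```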
But a lot of users don't work in this model. We build C/C++ programs that have their own set of dependencies. We want to try patching some of them. We want to try omitting some. We want to write programs that are hermetic in the sense that we are guaranteed to notice if we accidentally depend on something that's actually an optional distro package. The results ... are really quite bad, unless the software you are building is built within a distro's build system.
And the existing tooling is terrible. Want to write a program that opts out of the distro's library path? Too bad -- PT_INTERP really, really wants an absolute path, and the one and only interpreter reliably found at an absolute path will not play along. glibc doesn't know how to opt out of the distro's library search path. There is no ELF flag to do it, nor is there an environment variable. It doesn't even really support a mode where PT_INTERP is not used but you can still do dlopen! So you can't do the C equivalent of Python venvs without a giant mess.
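One commonly used partial workaround is an `$ORIGIN`-relative rpath, which makes the binary prefer libraries bundled next to it. To be clear, this is a sketch, not a fix for the problem above: it does not opt out of PT_INTERP or the default search path, it only prepends `$ORIGIN/lib`. All paths are illustrative, and the build is skipped if no C compiler is available.

```shell
#!/bin/sh
if command -v cc >/dev/null 2>&1; then
    dir="$(mktemp -d)"
    mkdir -p "$dir/lib"          # bundled libraries would live here
    cat > "$dir/main.c" <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from a relocatable binary"); return 0; }
EOF
    # Single quotes keep $ORIGIN literal so the loader expands it at runtime.
    cc -o "$dir/app" "$dir/main.c" -L"$dir/lib" -Wl,-rpath,'$ORIGIN/lib'
    "$dir/app"
    result=built
else
    result=skipped   # no C compiler available
fi
echo "$result"
```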
pkgconf does absolutely nothing to help. Sure, I can write a makefile that uses pkgconf to find the distro's libwhatever, and if I'm willing to build from source on each machine* (or I'm writing the distro itself) and if libwhatever is an acceptable version* and if the distro doesn't have a problematic patch to it, then it works. This is completely useless for people like me who want to build something remotely portable. So instead people use enormous kludges like Dockerfile to package the entire distro with the application in a distinctly non-hermetic way.
Compare to solutions that actually do work:
- Nix is somewhat all-encompassing, but it can simultaneously run multiple applications with incompatible sets of dependencies.
- Windows has a distinct set of libraries that are on the system side of the system-vs-ISV boundary. They spent decades doing an admirable job of maintaining that boundary. (Okay, they seem to have forgotten how to maintain anything in 2026, but that's a different story.) You can build a Windows program on one machine and run it somewhere else, and it works.
- Apple bullies everyone into only targeting a small number of distros. It works, kind of. But ask people who like software like Aperture whether it still runs...
- Linux (the syscall interface, not GNU/Linux) outdoes Microsoft in maintaining compatibility. This is part of why Docker works. Note that Docker and all its relatives basically completely throw out the distro model of interdependent packages all with the same source. OCI tries to replace it with a sort-of-tree of OCI layers that are, in theory, independent, but approximately no one actually uses it as such and instead uses Docker's build system and layer support as an incredibly poorly functioning and unreliable cache.
- The BSDs are basically the distro model except with one single distro each that includes the kernel.
I would love functioning C virtual environments. Bring it on, please.
Using system/distro packages is great when you're writing server software and need your base system to be stable.
But, for software distributed to users, this model fails hard. You generally need to ship across OSs and OS versions, and for that you need consistent library versions. Your software being broken because a distro maintainer has decided that a 3-year-old version of your dependency is close enough is terrible.
Missing in this discussion is that package management is tightly coupled to module resolution in nearly every language. It is not enough to merely install dependencies of given versions but to do so in a way that the language toolchain and/or runtime can find and resolve them.
And so when it comes to dynamic dependencies (including shared libraries) that are not resolved until runtime, you hit language-level constraints. With C libraries, the problem is not merely that distribution packagers chose to support single versions of dependencies because it is easy, but that the loader (provided by your C toolchain) isn't designed to support multiple versions.
And if you've ever dug into the guts of glibc's loader, it's 40 years of unreadable cruft. If you want to take a shot at the C-shaped hole, take a look at that, and look at decoupling it from the toolchain and adding support for multiple version resolution and other basic features of module resolution in 2026.
I don't trust any language that fundamentally becomes reliant on package managers. Once package managers become normalized and pervasively used, people become less thoughtful and investigative into what libraries they use. Instead of learning about who created it, who manages it, what its philosophy is, people increasingly just let'er rip and install it then use a few snippets to try it. If it works, great. Maybe it's a little bloated and that causes them to give it a side-eye, but they can replace it later....which never comes.
That would be fine if it only affected that first layer, of a basic library and a basic app, but it becomes multiple layers of this kind of habit that then ends up in multiple layers of software used by many people.
Not sure that I would go so far as to suggest these kinds of languages with runaway dependency cultures shouldn't exist, but I will go so far as to say any languages that don't already have that culture need to be preserved with respect like uncontacted tribes in the Amazon. You aren't just managing a language, you are also managing process and mind. Some seemingly inefficient and seemingly less powerful processes and ways of thinking have value that isn't always immediately obvious to people.
I use a lot of obscure libraries for scientific computing and engineering. If I install one from pacman or manage to get an AUR build working, my life is pretty good. If I have to use a Python library the faff becomes unbearable: make a venv, delete the venv, change Python version, use conda, use uv, try to install it globally, change the Python path, source .venv/bin/activate. This is less true for other languages with local package management, but none of them are as frictionless as C (or Zig, which I use mostly). The other issue is that .venvs, node_modules and equivalents take up huge amounts of disk and make it a pain to move folders around, and no, I will not be using a git repo for every throwaway test.
uv has mostly solved the Python issue. IME its dependency resolution is fast and just works. Packages are hard-linked from a global cache, which also greatly reduces storage requirements when you work with multiple projects.
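The workflow being described is roughly the sketch below, skipped when uv is not installed (environment creation is fast, and installed packages are hard-linked from uv's global cache rather than copied per project):

```shell
#!/bin/sh
if command -v uv >/dev/null 2>&1; then
    dir="$(mktemp -d)" && cd "$dir"
    uv venv .venv && result=created   # create a virtual environment
else
    result=skipped   # uv not installed
fi
echo "venv: ${result:-failed}"
```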
It sounds like your understanding of modern package management is at least ten years out of date, and Python has been (until recently) among the worst, yes, so that definitely wouldn't have been a model to follow.
I get that the scope of the article is a bit larger than this, but it's a pet peeve of mine when authors acknowledge the advantages of conda and then dismiss it for...silly? reasons. It kind of sounds like they just don't know many people using it, so they assume something must be wrong with it.
> If you don’t need compiled extensions, Conda is more than you need.
Am I missing something or isn't that exactly the problem we're talking about here?
> And even when you do need it, conda environments are heavier than virtual environments and the resolver used to be infamously slow. Mamba exists largely because conda’s dependency resolution took forever on nontrivial environments.
Like it says here, speed isn't a problem anymore - mamba is fast. And it's true that the environments get large; maybe there's bloat, but it definitely does share package versions across environments when possible, while keeping updates and such isolated to the current environment. Maybe there's a space for a language package manager that tries to be more like a system package manager by updating multiple envs at once while staying within version constraints to minimize duplication, but idk if many developers would think that is worth the risk.
Mamba is fast, and Pixi is also fast + sands a lot of the rough edges off the Conda experience (with project/environment binding and native lock files).
Not perfect, but pretty good when uv isn't enough for a project or deployment scenario.
This comes up every ten years or so, and is a solved problem. Any decent distro has tools to scan the dependencies of each binary via ldd, to check if its deps are correct.
His example, numpy shipping its own libblas.so, has the special property that it's loaded at runtime, so ldd will not find it, but the runtime dep is in the MANIFEST. And seeing that it is not in a standard path, one concludes that it is a private copy, which needs to be updated separately if broken.
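A sketch of the kind of scan described above: `ldd` for link-time dependencies, plus a filesystem walk for bundled `.so` files that ldd cannot see. The site-packages path and the numpy example are illustrative, and each step is skipped if the tool or directory is absent.

```shell
#!/bin/sh
# Link-time dependencies are visible to ldd...
BIN="$(command -v sh)"
if command -v ldd >/dev/null 2>&1 && [ -n "$BIN" ]; then
    ldd "$BIN" | head -n 5
fi
# ...but dlopen()ed libraries are not, so bundled copies (e.g. numpy's
# vendored BLAS) have to be found by scanning the install tree.
for d in /usr/lib/python3*/site-packages/numpy*; do
    [ -d "$d" ] && find "$d" -name '*.so*' | head -n 5
done
echo scan-done
```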
The biggest difficulty is not that; it's the many assumptions you need when writing a makefile, and how to use different versions of the same library. LD_LIBRARY_PATH is something regarded as potentially risky. Not that it is... but assumptions from the past, like big monsters, are a barrier to simpler C tooling.
> Conan and vcpkg exist now and are actively maintained
I am not sure if it is just me, but I seem to constantly run into broken vcpkg packages with bad security patches that keep them from compiling, cmake scripts that can't find the binaries, missing headers and other fun issues.
I think system package managers do just fine at wrangling static library dependencies for compiled languages, and if you're building something that somehow falls through the cracks of them then I think you should probably just be using git or some kinda vcs for whatever you're doing, not a package manager
But on the other hand, I am used to Arch, which both does package management à la carte as a rolling-release distro and has a pretty extensively used secondary open community ecosystem for non-distro-maintained packages, so maybe this isn't as true in the "stop the world" model the author talks about.
One of my favorite blog posts. I enjoy it every time I read it. I've implemented two C package managers and they... were fine. I think it's a pretty genuinely hard thing to get right outside of a niche.
I've written two C package managers in my life. The most recent one is mildly better than the first from a decade ago, but still not quite right. If I ever build one I think is good enough I'll share, only to most likely learn about 50 edge cases I didn't think of :)
They lost me when they advocate for global dependencies instead of bundling. Are you supposed to have one `python` on your machine? One copy of LLVM (shared across languages!)? One `cuda-runtime`?
aa-jv|1 month ago
Plus, we already have great C package management. It's called CMake.
pif|1 month ago
You meant: it's 40 years of debugged and hardened run-everywhere never-fails code, I suppose.
johnny22|1 month ago
I like wrapdb, but I'd rather have a real package manager.