I don't know about anyone else, but I treat my system and language specific package managers completely differently.
My system package manager has root access; it can make or break my machine. As a user, I am trusting the distribution I use and its package maintainers. Package vetting, stability testing and signing are what I expect from my distribution.
For language specific package managers, those things would be nice, but completely unreasonable to expect. There is no trust involved; how can there be? Most package repositories have no vetting process and are publicly writable.
For python, there is virtualenv. Packages are "installed" in their little environments with user privileges. For node, I personally have a dir in home for modules and then ln -s cli tools into ~/bin. Again, all with user privileges.
The crazy thing is, for some people there is no distinction. In fact, I noticed a trend in the node community: everyone gives instructions to install their modules globally. Literally every set of installation instructions I have seen for node cli tools has said the same thing: install globally.
This is pretty baffling. If you were on a Windows machine, would you download some random setup file from a public ftp and run it as administrator? I don't know why an entire community (of power users and developers, no less) seems to think it's somehow acceptable practice.
>This is pretty baffling. If you were on a Windows machine, would you download some random setup file from a public ftp and run it as administrator?
Yes, that's the general practice under Windows. As a clueless end-user you also often get the original software wrapped in "experience enhancing" adware installers - provided you actually find the correct download link, since the download pages of the various sites are littered with ads containing fake "download now" buttons that install various PC "cleaning" utilities (themselves wrapped in adware installers).
If you're a C or C++ dev your system and language package managers are usually the same thing.
And even in your case I would argue you don't need several different package managers, merely several different environments.
It's just a matter of having different databases/install paths depending on what you're trying to do, you don't need a whole new packager.
That would fit within the unix philosophy of having "one program that does one thing and does it well" instead of having a hundred package managers, each with its share of bugs and quirks and unique features.
>Literally every set of installation instructions I have seen for node cli tools has said the same thing: install globally.
You must not use node very much, then. Generally, IF there are instructions - which there often aren't, because it's so obvious - it's because the package installs a bin you would want globally available. Most instructions don't exist, or tell you what to put in your package file.
That's why every node project has a file called 'package.json'; it's so you can run 'npm install' and install into your local directory with local privileges.
...I'm really not sure who you've been talking to.
(To be clear this is specifically the way npm is designed to work, and it's very good at it; using npm as a global package manager is flat out stupid; maybe you're thinking of gem...)
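For reference, a minimal package.json along the lines described above (the package name, bin entry and dependency here are made up for illustration):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "dependencies": {
    "minimist": "^1.2.0"
  },
  "bin": {
    "example-cli": "./cli.js"
  }
}
```

Dropping a file like this next to your code is what lets `npm install` pull everything into ./node_modules with plain user privileges; the "bin" field is also the reason CLI authors reach for `npm install -g`, since a global install is what puts that executable on your PATH.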
It was inevitable that there would be a package manager for Rust. Packaging, versioning and distributing programming libraries is still an unsolved problem.
The requirements for an OS package manager are very different to one that is used for installing libraries to your development environment. Things move relatively slowly and there's less need to get bleeding edge versions included.
All of these programming language specific tools have their specific needs when it comes to version and dependency management. In contrast with OS package management, there is frequently a need to have several versions of a particular library installed. Libraries are usually installed in a per-user or per-project sandbox rather than system wide.
As much as I wish there was one package manager that could serve all these needs, I don't see that happening in the short term. The situation where we have half a dozen popular OS package managers and one (or more) package management systems for each programming language is less than ideal, but trying to unify all of those would be quite an effort. It would require getting the right people around the same table, and the end result would be a compromise of some kind.
I hope this happens but I don't know who would put the time and the effort to do it and what it would take for it to gain traction.
Question to the OP: which package manager would you have picked for Rust? You point out a lot of problems in the post but don't come up with obvious solutions.
I'm not sure what the right solution is.

Each of the major package management solutions has provided ways to talk to independent repositories. I think that, in some ways, it makes sense for a language to maintain repositories for each of the major OSes. This isn't without problems, of course, because then instead of writing software, you're spending time packaging and testing, when you could just make a ruby gem and be done with it. Which is what happens now.
Honestly, even a way to conclusively enumerate the installed packages, versions, and sources of each package would be an improvement. That way, I could at least be reasonably assured of recreating the environment.
As I've argued before, language-specific package managers are evil. Unfortunately, so are system package managers that are too cumbersome or can't/don't keep up with what's going on in the faster-moving language communities. This results in projects being packaged by people who don't actually understand them, while the people who do understand them don't (and shouldn't need to) understand the minutiae of how to package things for a particular platform. This in turn causes far too frequent breakage.
What we really need is system package managers that can cooperate with their language-specific brethren to get info about packages under other managers' control, direct other managers to install something according to its own rules/methods, and so on.
"Hey apt-get, please tell me the status of this Ruby package"
<<apt-get turns around and gets the info from gem>>
"Hey yum, please install this Go package"
<<yum turns around and tells go to do it>>
The rules for how to talk to each language-specific package manager shouldn't even need to be very complicated. The real work would be getting all of them to use a common format for talking about versions, file lists, dependencies, etc. It would be worth it, though, to have those dependencies tracked properly across all languages/formats instead of being lost at each boundary.
I do like that your idea isn't "build something perfect" but rather "teach the imperfect things how to talk to each other". Could be very neat. Not sure what it would look like.
> Please, accept that a tool someone else wrote that you see as imperfect may actually solve the problem, and know that the world may not need another solution that does the same thing.
For Rust specifically, can you suggest any?
(I can't think of any, but that doesn't mean there aren't any.)
rpm, dpkg, Windows MSI files, .app bundles on Mac. These will all work with any language you want; plus, you get the added benefit of standardized placement and of having the USERS of each platform know what to expect when they install whatever you've decided to throw on their machine.
My favourite solution to this would be an APT extension which allows installation of binaries into $HOME by unprivileged users and for all these language-specific things to be turned into simple APT repositories.
That is what I prefer. I won't allow another package manager to run under sudo. Either it must be installed in user home or be installed system wide via my linux box's package manager.
The path to a standard package manager starts with a standardized protocol for package management.
A service protocol that is able to serve a repository of packages over http and ftp. A client protocol that can keep track of installed packages and can index, search and look for updates on installed packages.
Split package management into layers and only try to standardize bit by bit. People will never agree on deb vs rpm. People will never agree on using json vs python vs Makefile vs ruby vs shell vs whatever else - they'll always want their most familiar language for their package manager, which in domain-specific packaging means the domain-specific language.
So don't try to standardize those. Standardize the rest. Give us the protocol that can power all of this and increase interoperability. Separate the repository layer, the package format (deb, rpm), the packagefile format (setup.py, Makefile, PKGBUILD) and the package manager (interface: yum, apt-get, aptitude, pip, npm) from the rest of the protocol.
Make this potentially usable for things such as browser extension repositories, android package management, vim bundles and what not.
Someone please work on this. I'd do it but it just occurred to me I have to clean my oven.
Yes, I think this is the right approach. Most package managers offer the same command functionality under different synonyms. I don't mind all the different applications so much as the lack of a standard that they are built to.
My knee-jerk inclination to this post is to yell, "oh holy hell, yes!"
That said, and as others in this thread have noted, there are actually two use cases that need to be satisfied.
1. Here, you've got a base system, and you want to install some piece of software in order to use it. You want this to be guaranteed, for some reasonable definition of "guaranteed," to work with your existing base system.
2. Here, you want to install packages within a segregated environment, and you want those packages to work with any packages previously installed in said environment. You're probably attempting to do something like recreating your deployment environment locally.
It strikes me that there are only two issues preventing the latter from being subsumed by the former.
1. Not all package management systems provide a means to have multiple versions of a package/runtime/what-have-you installed at the same time. Often, this capability is there, but packages need to be specially crafted (unique names, etc.) for it to work. See Debian's various Ruby and Python runtime packages for example.
2. Not all package managers provide a way to install a set of specific package versions in a contained environment which is segregated and requires intention to enter.
(Note that I'm ignoring the "there are different package formats" issue; I don't think it's a huge barrier in practice, and the package maintainers should be involved anyway.)
If we could get RPM and YUM to provide those services, then we could remove the vast majority of this duplication.
Alternatively, if we all agreed that developers should just use Linux containers as development environments, then all we'd need is upstream to use native OS packages (which is, really folks, not very hard).
Has anyone ever tried FPM (https://github.com/jordansissel/fpm) yet?

E.g.: https://github.com/threedaymonk/packages/blob/master/go.sh
> Suppose I used entirely off-the-shelf puppet code. Nothing custom, just modules I found. And I erased my repo which contains my puppet modules. How would I rebuild it and get the same thing that I had before?
Well, there's Blueprint (http://devstructure.com/blueprint/) which purports to reverse engineer servers and spit out Chef / Puppet modules.
But... I'm not sure I understand the question. It seems akin to asking "I deleted all of my source code, how do I rebuild what I had before?" That's why we have version control. That's why we have backups.
I also don't understand this rant in the context of Rust and its Cargo package manager. There are several distinct domains involved, and it seems pretty reasonable for each to have its own management tool.
Puppet, Chef, Ansible, or Salt for handling machine configuration. Yum or APT for handling system-level packages and services. Pip, Gem, NPM, or Cargo for application-level dependencies. Seems pretty reasonable to me.
If you need it to instantiate brand new machines, you can get into VMs (VirtualBox / VMware) or containers (Docker), each of which can also be trivially scripted (Vagrantfiles / Dockerfiles).
The whole array of tooling seems more complementary than competitive.
> Yum or APT for handling system-level packages and services. Pip, Gem, NPM, or Cargo for application-level dependencies.
That's the thing: what is the distinction between "system-level" and "application-level"? Has it really gotten to the point where the only thing we can use /usr/bin/python for is to run other things in /usr/bin? This may very well be the case, but it strikes me as slightly strange given that we never used to be afraid of linking against /usr/lib or running our scripts under /bin/bash.
What happened in the past 10-15 years that changed the world so much that whereas before we ran our applications on top of the system, now we seem to want to run them in individual sandboxes, often inside of other sandboxes inside of other sandboxes? Was it really so bad to yum install and gcc -lwhatever without having special paths everywhere for everything?
In principle, if everyone used their distro packages for things like, say, WordPress, we wouldn't have as many vulnerable installations on the web (see: NHS). How many people actually use the WordPress package from their distro rather than just uploading a private copy to their webdoc root?
Instead blog admins have to log in to their control panel and perform a (hopefully working) auto update there, and then have to shell in to upgrade other important things like PHP.
Have you ever seen how Debian et al package web software? They follow the LSB/FHS, and that's not only a lot of additional work that requires testing; it also goes against how e.g. WordPress handles its own update mechanism.
Package management is a fractal problem; look at it from a high level and it all looks simple and they all look similar... zoom in and the similarities start falling away.
It's probably theoretically possible to build a meta-package-manager that really could make everybody happy, but it's difficult to imagine what project structure could get us there, and it's also difficult to imagine how to incrementally develop such a thing in a way that it is immediately useful to everybody. Without that you've got a barrier to deal with.
If you view an individual language package manager as essentially creating a container for the code to run in, a combination of Docker plus the Nix package manager is probably getting pretty close to what everybody needs, but you'd still have a long row to hoe getting everybody even remotely on board.
Captain metaphor here ... Isn't what you describe (a phenomenon which looks distinctly different at different levels of zoom) the opposite of a fractal?
Fractals are typically described as being self-similar, i.e. they look the same regardless of the zoom level.
Most things don't, which would seem to mean that package management is like many other things, more than it is like fractals. Many things that are different look alike when viewed from far away, since you don't see the differentiating detail.
With regard to Rust specifically, you will always have the option of working like you do with C++: grab some binaries, and either stuff them into a location on your search path or just pass the "-L" flag to the compiler telling it where to look when linking. Cargo is not an attempt to create another walled garden, it's just an optional tool to automate dependency resolution and versioning external libraries.
That said, I agree that it's a huge pain that so many groups feel that the current tools are inadequate enough that they have to design and implement these sorts of things from scratch. I haven't looked much at 0install (http://0install.net/), but let's hope that something of its ilk saves us from this mess some day.
Hi, it'd be really awesome if you replied to post's actual argument, rather than attacking the author. The HN guidelines have some good suggestions for constructive discourse: http://ycombinator.com/newsguidelines.html
A huge part of the problem is that many of the language-level packages like .gem are incompatible with system packages like .deb. Some of this is due to the package managers and some of it is cultural. Rust is young enough that the culture is not frozen. Establish the culture that breaking API changes without increments to the major number is a showstopper bug, and that will help. Compare that with the Ruby culture, where Rails 2.3 introduced a huge number of breaking changes vs 2.2. Heck, there were breaking API changes in several of the 2.3.X releases. No wonder Bundler was created to lock versions down.
I wonder if it would be possible to build a meta-package-manager that works with all, or at least a lot, of the existing ones. The OP is totally correct that having lots and lots of different package managers is insane. One major thing that is currently lacking is managing cross-package-manager dependencies.
I don't believe this problem scales well enough to be handled at a centralized point like a distro - there are too many different versions of too many libraries involved, so any solution must be decentralized. Nested namespace support would probably also be necessary to scale well.
IMHO the problem is that there is no standard package manager, so everybody keeps building custom solutions, fragmenting the ecosystem a little more.
If there was a standard package manager that wasn't tied to a particular OS/distribution then we could all just happily target it instead.
Of course, the task of making a package manager that would work on all un*x flavours as well as Windows (and probably a couple of others), and managing to get it accepted by the majority of users/distributions, sounds nearly impossible.
Having been in a similar place myself, the solution is to host your own repos for packages and deployment config using Git. Never rely on the remote internet to be as consistent as internally-hosted code. Of course it'd be wonderful if you could do without, but somewhere you'll have to specify and track version numbers in a text file and Git's as good a way as any to track and tag that.