I had been an Ubuntu user for almost 16 years, on servers, laptops, and recently containers. The snap situation with Ubuntu is just plain unpalatable, both in principle and in practice. I became so disappointed in the move by Canonical that I finally left Ubuntu altogether and no longer recommend it to friends and colleagues.
It takes years to cultivate a garden, but only minutes to destroy it.
Same here; we're already in talks with Red Hat to move dozens of servers that are currently running Ubuntu with enterprise subscriptions.
We know Red Hat is just as bad at forcing people to use their software (systemd...) but they have two key points which Ubuntu lacks:
- they usually win their software wars
- their software usually works well enough and they give you great support
- they maintain their projects and live with their decisions, unlike Ubuntu, which flip-flops with major updates (I'm waiting for them to switch from netplan to something else again and fuck up all my Ansible config again, which was already battle tested)
- they have better management tools overall.
The only reason we went with Ubuntu in the first place was that it was familiar to all of us (we had all run it on our desktops). When they take that familiarity away, they lose their only real advantage.
Same. My go-to has been to use Qubes if you at all can, because it's actually secure, and then to use Ubuntu, because it actually works. To me most of the bad reputation of desktop Linux seemed to come from people refusing to use Ubuntu for demented reasons... But I must not have been following distro news at all in recent years, because I only just now learned snap is not fully open. That's quite the cynical walled-garden power grab, and bad enough by itself to drop Ubuntu.
More seriously, this is not an unexpected response if you've been using a system for 10 years or more. If you were new to the system you would just say "oh, interesting, this is how it does self-contained packages", but the perspective of having a system you understand well, one that has worked for all your needs and whose workarounds you know when you can't get exactly what you want, works against your perception of a new feature.
In my experience it is the leading cause of burnout in engineers. You learn things, you use things, you customize them to your needs, and then the 'new participants' who don't have any experience and find those things "arcane" or "opaque" re-implement them for themselves, their friends, their company, whatever. And then it's something new, and the new thing gets the exposure, so still more people see the 'new' thing without even knowing there was an 'old' thing, and it's just "the way this feature is done."
As an experienced person it is tiring and bothersome to have to re-implement tool flows, capabilities, and other parts of your environment because some youngster re-invented the wheel yet again and you were not in a place to educate them on why the existing wheel was just fine.
The longer you live the more cycles you go through and the more ridiculous each new re-imagining of how to do 'X' becomes until all you seem to do is complain about how in the previous versions everything worked fine and this new stuff is crap and you aren't going to put up with it.
At which point ageism kicks in and your employer lays you off with mumblings about "not a team player" or "resistant to learning new skills" as if sharpening a knife with a round stone is any different or any better than sharpening a knife with a square one. It is easy to get bitter. It is easy to just roll over and whine with your fellow "oldsters" about the "good old days". It is also a kind of death.
Counterintuitively, I suspect that if companies invested in keeping the status quo, engineering salaries would go down. That would result from skills learned as a junior engineer always being relevant to the current environment but increasingly more efficiently applied (as is typical: as people get more experienced, they do things more quickly). That minimizes the number of people you need to develop your products and keeps the number of engineers you need to employ down, so your costs go down, and the poor engineers who aren't currently working have to compete more aggressively for available entry jobs by taking a lower salary. Fortunately, because it is counterintuitive, I don't think there is any risk of it coming to pass.
In my opinion, the elves have left Middle Earth. Ubuntu and Ubuntu's current cohort of developer/users are more interested in an open source version of Windows than anything else. As a result more and more "windows like" architecture and features are replacing the old "UNIX like" architecture and features.
I think there are some very good critiques of snap (performance, provenance, reproducibility, namespacing, etc), and the first couple points in this article seem reasonable.
However I can't agree with this:
> apt/deb is a wonderful package management system and everyone is happy with it, at least the majority of Ubuntu/Debian users. Besides, dnf/rpm is also a similar packaging system for Fedora/RH systems and everyone is happy with that too.
Debs and rpms are great at assembling tightly coupled monolithic systems. Great! Let's keep using them for the base system. However, when I want to install a Qt app on a GNOME system, or (gasp) a proprietary app, debs are insufficient. I want all of the Qt libs embedded in the package. I want the proprietary app in a container. I want MAC (mandatory access control) with a polished UX. I don't want debs to worry about those features. I want an "app store" done right: open yet verifiable. Defense in depth.
- a user-space install option
- rollback functionality (!)
- being able to install multiple versions at the same time and switch between them
- if I really need to: being able to install the latest version (and even an unstable release); if that means that apt-get has to download and compile stuff, then I'm OK with that.
> I want all of the Qt libs embedded in the package.
I don't understand why anybody wants this.
Libraries should have major versions and the latest of each major version should be compatible with anything using that major version, because that's what major version means for a library. You might then need to have more than one major version of the library installed, but any two applications using the same one should be able to use the same copy, and then have it maintained by one person in one place.
If every package has a separate copy of every library, people have to maintain them all separately. When that library has a security update, you now have to update five dozen packages instead of one or two, and have a security vulnerability if any of the maintainers don't keep up in a timely fashion. Which not all of them will.
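The single-shared-copy model described above hinges on soname symlinks; here is a minimal sketch (all file names are invented for the demo) of why a security update is a one-file swap rather than five dozen repackagings:

```shell
# Demo of soname-based library sharing (all names invented).
set -e
libdir=$(mktemp -d)

# "Install" version 1.2.3 of a hypothetical libfoo.
echo "libfoo 1.2.3" > "$libdir/libfoo.so.1.2.3"

# Applications link against the major-version soname, libfoo.so.1, so
# anything built against major version 1 resolves through this symlink.
ln -s libfoo.so.1.2.3 "$libdir/libfoo.so.1"

# A security fix ships 1.2.4: drop in the new file, repoint ONE symlink,
# and every application sharing the library is fixed at once.
echo "libfoo 1.2.4" > "$libdir/libfoo.so.1.2.4"
ln -sfn libfoo.so.1.2.4 "$libdir/libfoo.so.1"

cat "$libdir/libfoo.so.1"   # the shared copy now resolves to 1.2.4
```

With per-package bundled copies, that same fix has to be repeated once per package, which is the maintenance burden the comment is pointing at.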
> I want the proprietary app in a container.
People want containers to be magic but they're actually a hard problem. You want the app not to be able to do anything you don't want it to but still be able to do everything you do want it to.
A backup app that can't read my files is useless; it can't back them up. But it shouldn't be able to modify or delete them. But it should be able to modify its own state. It shouldn't have general network access but should be able to communicate with the backup server, which might have to be specified by the user and not the package maintainer. It doesn't need access to the GPU or the ability to use gigabytes of memory, but it does need to be able to transfer a lot of data over the network, but the data it transfers is lower priority than other network packets.
That requires the person configuring the app's container to have both detailed knowledge of the app and detailed knowledge of the container system. It's common for this not to be the case.
And that's why containers are a mess, not anything to do with the package manager, which should have little to do with the container system outside of packaging the app's default container configuration with the app.
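For what it's worth, a shipped "default container configuration" for the backup example above might look roughly like this as an AppArmor profile. Everything here (the app name, the paths, the granted accesses) is hypothetical, and note that "only the backup server the user chose" is exactly the part a static profile cannot express, which illustrates the comment's point:

```
# /etc/apparmor.d/usr.bin.backupapp -- hypothetical profile for the
# backup-app example: read the user's files, write only its own state.
#include <tunables/global>

/usr/bin/backupapp {
  #include <abstractions/base>

  owner @{HOME}/** r,               # read files so they can be backed up
  owner @{HOME}/.backupapp/** rw,   # but write only the app's own state
  network inet stream,              # TCP out (cannot be limited to one host here)
  deny /sys/** rwklx,               # no poking at the rest of the system
}
```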
I can only comment on snap for the server (non-desktop) side of things, but packages in Ubuntu, which are Debian packages, contain arbitrary amounts (none, or a lot) of shell scripts in /var/lib/dpkg/info/* which may fail for any reason and introduce any number of side effects into the system, as they sometimes handle very complex software migrations and can change any number of things. Surely the desktop has some history here too (packaged proprietary graphics drivers doing unspeakable things to Xorg configs come to mind).
Snap is a way to contain and scope this kind of scripted activity. This is a welcome change. Additionally, deb/apt has much worse transaction support than yum and its successors (you can simply roll back yum transactions; good luck rolling back a borked APT system where package maintainer scripts have already done unspeakable things to the system and you're kind of stuck). APT's configuration system is also arcane and badly documented; the debhelpers that control how most packages are built are tens of thousands of lines of Perl, Python, C++, C, makefile and m4 code that somehow work, but are in no way a way to build straightforward, predictable packages. It's ultra flexible, but also ultra complex. The trend in package and release management is towards simplification, not complication. A stopgap solution was the many projects that allow generating Debian packages from venvs or random directory trees (for which you could also use the deprecated old-style DEBIAN package format and pass it to dpkg-deb --build without the arcane dpkg-* toolchain, but again, Debian claims this kind of packaging is not "well formed", and who knows when it gets removed).
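Concretely, the yum/dnf transaction rollback referred to here looks like this (the transaction IDs are invented):

```
$ dnf history list              # every install/upgrade/remove is a numbered transaction
$ sudo dnf history undo 42      # revert exactly what transaction 42 did
$ sudo dnf history rollback 40  # or return the system to its state as of transaction 40
```

APT has no equivalent: /var/log/apt records what happened, but there is no built-in command to undo a transaction, let alone the maintainer-script side effects.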
Snaps are just a different way to do integrated containerized applications, with scoped config management, versioning and a release system on top (which is what makes the difference). Perhaps somebody could make a better solution. Meanwhile, Red Hat introduced Application Streams -- probably Canonical also felt it needed an answer to that.
Ubuntu's exposure to its "universe" repository component, many packages in which are badly maintained and not really covered by release QA, is also a huge risk for an enterprise distribution (where the money is), so it is no surprise that Canonical is looking to decouple its core offering on the server platform from that, maybe at some point removing it from the base install altogether ("at your own risk"), and then dropping it.
The largest issue I have with Snap is that it appears to assume the owner of a system is not in the best position to make choices for their own system.
Sure, this might be true for the average user, but it is toxic to the "super user" community that's in the best position to help support the larger community, and may end up pushing them away.
Snap at the very least should have an opt-out feature, if not be opt-in during an install.
There are parallels with the GNOME team's stance that "we know what's best for you", and that turns off a lot of Linux users. There is a tension between those who wish to turn Linux into macOS or Windows, and those who want fine-grained control over the workings of their computer. The arrogance of the GNOME team and the snap apologists is a huge red flag to me. I don't use Ubuntu or GNOME, and I'm glad Linux provides me that choice.
One of my big things with snap is how it locks the snap into the home directory. I get why they do that, but it would be nice to override[1]. In my case I want to play audio files outside of my home directory but VLC doesn't have access. And VLC now only updates the snaps and not its repos, so you have to use the snaps.[2]
> If you wish to install the traditional deb package, it is available as usual via APT, with all security and critical bug fixes. However, there will be no major VLC version updates until the next Ubuntu release.
This is in line with Ubuntu (and Debian) repo policies. You do not get major software updates in between distribution updates unless you use a third party repository. You do get bug fixes and security fixes, and/or can track unstable if you need the bleeding edge.
This is a rant against snaps. Well, let's get to it.
Sorry, snaps are a LOT slower than just running a binary. Did I say they're slow? Well, they're slow.
I have an SSD and it feels like it's 1992 and I'm trying to run some snap on a Cyrix without cache and 16MB of RAM. I switch to the binary version (oh my, Chromium), and it's freaking flash: 0.x sec and you're there, the full app is available.
Snaps are a NO GO my friends.
Besides having LOTS of problems running outside the standard GUI (GNOME 3), or even in the standard (supposedly heavily tested) GUI, they are slow.
Sorry, I've already said that, huh? SLOW, that's snaps.
If there is somebody from Ubuntu here, please take a serious look at how snapped apps (pun intended) read/write $HOME defaults.
I mean, we have to have defaults somewhere. So things like the colour theme, the theme engine, the default download path, etc. should be followed just as the user has configured them.
I use Ubuntu, but I certainly would not be using it in the future if my applications, which now take merely 0.x seconds to open, start to take 3, 4, 15! seconds to open. In fact I've started to look at Debian and Fedora; they currently appear to have saner defaults than Ubuntu.
No, the second time I open an app in a session doesn't count AT ALL for the speed.
As a packaging format for my app, snaps have really been an anti-pattern, and we're considering removing support for them.
Here are some problems I've had:
- snaps use a different directory than our main app. So if you install our Debian package, then move to the snap package, all your data seems to vanish. It's just in another hidden directory. I tried to figure out how to get the directories to sync up, but couldn't get it to work; it's yet 'another thing to support', and I only have so much time.
- snaps have various bugs that you encounter after you've shipped the app that aren't present at build time, mostly due to being in a container: 'reasonable' things aren't accessible and need to be granted access via a configuration file.
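The directory mismatch in the first point can often be bridged with a one-shot copy. A sketch, using a throwaway directory in place of $HOME so it is safe to run ("myapp" and both paths are invented; real snap user data lives under ~/snap/&lt;name&gt;/current):

```shell
# Hypothetical migration of per-user data from a deb-era location to the
# snap data directory, run against a temp dir so the sketch is safe to try.
set -e
HOME_DEMO=$(mktemp -d)

src="$HOME_DEMO/.config/myapp"
dst="$HOME_DEMO/snap/myapp/current/.config/myapp"

# Simulate existing data from the deb install.
mkdir -p "$src"
echo "user settings" > "$src/settings.ini"

# Copy it to where the snap looks for it; the originals stay put.
mkdir -p "$dst"
cp -a "$src/." "$dst/"

cat "$dst/settings.ini"
```

The catch, as the comment says, is that this is one more migration path to own and test per release.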
The strategy I'm thinking of migrating to is to just distribute as a .deb and have our own APT source line that is installed during the .deb installation. I think this is what Slack and other Electron packages have migrated to, which is easier for them to support.
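That "own APT source line" is typically a sources entry plus a signing key dropped in by the .deb's postinst. A hypothetical example (the repo URL, suite, and key path are invented):

```
# /etc/apt/sources.list.d/myapp.list
deb [signed-by=/usr/share/keyrings/myapp-archive-keyring.gpg] https://apt.example.com/myapp stable main
```

After that, the app updates through the user's normal `apt upgrade` flow instead of a separate mechanism.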
I mean conceptually it sounds great. Put your apps in a container. They will be isolated. Great. But in practice it's a nightmare.
To be fair, though, macOS had similar issues when it started going with isolation and privileges.
I think the main issue is that none of the OS maintainers spend a day in the shoes of a package maintainer. And if they did, they don't care, because they own the OS and many of these apps compete with their core products.
At least you have plausible deniability that your behavior isn't anti-competitive - you're just trying to improve the security for the user!
For example, Zoom got a ton of crap about their installer, but they compete with FaceTime, which DOES NOT have to constantly ask the user for privileges. Apple granted FaceTime these privileges via the OS.
Devil's advocate: it is plain weird for every app I install to have so much file system and system access. It's nice to have a sandboxed solution built in. It would be nice if it were a solution that didn't have the problems this article listed, but snaps could be adapted to be good with a few changes.
Why a proprietary backend though? I suppose Canonical views packaged apps as a platform opportunity and wants to be the first to "capture" the users before somebody bigger comes and takes over?
> Devil's advocate: it is plain weird for every app I install to have so much file system and system access. It's nice to have a sandboxed solution built in. It would be nice if it were a solution that didn't have the problems this article listed, but snaps could be adapted to be good with a few changes.
Exactly. Personally I have been sticking my desktop programs into "firejail"-managed "containers" for a long time. It's a good thing that a similar solution has been implemented that is suitable for bringing this to the masses.
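For reference, the firejail approach mentioned here looks something like this (the jail directory name is made up):

```
$ firejail --private=~/jails/firefox firefox   # browser gets its own persistent private home
$ firejail --net=none vlc ~/video.mkv          # media player with no network access at all
```

It is per-invocation sandboxing rather than a packaging format, which is both its flexibility and its limitation compared to snap's always-on confinement.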
Wasn't AppArmor already doing this though? If I remember correctly (and I never properly read up on it, so please correct me if I'm wrong) it limits which syscalls you can do and with which parameters, like opening only certain files. I think it was becoming more common for AppArmor rules/profiles to be delivered with their respective packages (I'm using Debian), and it sounds like that already solves your exact concern without deviating from apt:
> it is plain weird for every app I install to have so much file system and system access
A quick glance at Wikipedia to make sure I'm not talking out of my ass seems to confirm that:
> is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. [...] AppArmor is enabled by default in Debian 10 (Buster) [from July 2019].
(Also, I'm not a fan of claiming "devil's advocate" when you're saying something that you know everyone will agree with. It's similar to saying "downvote me all you want but [insert popular HN opinion]". Of course the principle of least privilege for software is something lauded by every logically thinking person.)
I wanted to love snaps. I really did. I like the idea of a self-contained program that doesn't get into DLL hell, and can't stomp all over my system if it misbehaves, and can be cleanly uninstalled.
Unfortunately, snap comes with all of these extra issues that happen when the developer isn't empathizing with the user. Also, much to my chagrin, snaps don't actually uninstall cleanly, and can really hose your system. I now install snaps INSIDE an LXC container so that snap can't misbehave and break my system, or, when I can, I just use apt with a custom repo (for Docker, because the snap is awful). Ubuntu 20.04 will probably be my last Ubuntu system, and that's a shame... I really liked it.
I like Snaps. After the debacle that is Catalina, I built my first Linux desktop in 15 years. Ubuntu 20.04 worked perfectly out of the box, with Wayland and everything. Snaps and the Snap store are a huge improvement over the previous GUI interface to APT.
I ended up settling on Silverblue, a Fedora derivative that uses an immutable base system along with Flatpak for applications, and it’s been great. Equally trouble free, and Flatpak has many of the benefits of Snaps without some of the downsides (fully open source).
The whole snaps business was already annoying. Then my external monitor stopped being detected due to some issue with Nvidia graphics (which is probably more Nvidia's fault than Ubuntu's). I'm using a Thinkpad X1 Extreme Gen 2. I tried Fedora 32's live USB and basically everything worked, but the installer didn't detect all partitions and didn't give a choice of exactly what goes into which partition without offering to wipe out stuff (they need to take a cue from the flexibility of Ubuntu's installer). Finally I tried Pop!_OS's live USB and everything basically worked, with no issues in the installer either.
I wasn't keen on using a derivative distro of a derivative distro; I thought of using Debian but wasn't very confident the issues wouldn't persist on it. Considering everything just works perfectly with Pop!_OS, I might just stick with it.
Ubuntu was such a breath of fresh air at the beginning (I still remember Dapper Drake). It is sad to see it going this way.
Say what you will of their hardware, but the system76 team really nailed pop_os. So many little things I was used to suffering through manually configuring on a fresh install were just "right" straight out of the box.
Supposedly they're working on tabbed and stacking layouts for tiled windows; if that's the case I don't know if I'll ever go back to i3 or sway!
Honestly this article had some poor arguments, like APT being fantastic (it isn't; see below). I do not see what the hate around snaps is for.
Yes, some snaps are not updated, and VLC only working in the home directory, as another HN commenter said, is a pain.
Snaps still solve a lot of issues on Linux that Windows does not have.
What happens if the new version of your editor breaks some well known plugins?
On Windows you just install the older version from the archives. On Linux you risk dependency issues. Snaps are a single command to switch to an older version. Very important in both the software and media space.
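The "single command" being referred to is snap's built-in revision handling; snapd keeps the previously installed revision around, so (using a hypothetical snap name):

```
$ snap list --all atom     # shows the current and previously installed revisions
$ sudo snap revert atom    # switch back to the previous revision
$ sudo snap refresh atom   # move forward to the latest revision again later
```

With a plain deb, the old version is only available while it is still in the APT cache or the archive.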
One option is to stay on the LTS version of Blender.
Another example: back when I used Atom, one upgrade completely broke a popular plugin for me because of a change in Chrome. I thought it would be fixed quickly, so I didn't revert the package, and I lost the older version in the cache. It took the Atom devs 2 weeks to handle the change to Google Chrome, and then the Arch maintainer of the Atom package didn't get around to upgrading it for a week or two after that.
Things like apt pinning really don't help when you discover the issue on your main workstation. It also doesn't change the fact that it is so much easier to revert on Windows. Snaps make version control even easier to do on Linux than on Windows.
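For comparison, apt pinning means maintaining a preferences file like this for each package you want to hold back, and it only helps if it was in place before the bad upgrade (the version pattern here is invented):

```
# /etc/apt/preferences.d/atom-pin
Package: atom
Pin: version 1.45.*
Pin-Priority: 1001
```

A priority above 1000 even allows downgrades, but only if the old version is still available in some configured repository, which is exactly what the comment's Arch anecdote lacked.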
The backend being proprietary is a good argument, but for a company that has worked so long with Linux and free software, give Canonical a little trust to release it.
Same here: I use it for Firefox and Chromium, and it just works. Another comment mentioned VLC, which should be a good candidate for frequent releases too.
I can understand the "proprietary store" concerns of this thread, however.
I'll take a contrarian/devil's advocate stance: for some applications and some usage scenarios, snaps make a lot of sense.
Take a multi-user system with users who run applications like Chromium or Firefox.
It's dangerous to upgrade an application while users are running it, as the files the running application depends on can change, thereby making it either break in weird ways or forcing the end users to restart it.
If these apps were just distributed as snaps, it wouldn't matter. Users would keep on using the old image without any problem, while new executions would get the new image. If one really wanted to encourage them to exit and restart (e.g. for some security hole), the same mechanisms that exist today to get people to restart could be used.
With that said, I think it should be a choice, not something forced down our throats. If I install something with apt/dpkg, I expect it to be an apt/dpkg package, not a snap. If I want to use snap, I'll install it with snap.
You people do realize that snaps have been around for 4 years, right? They have widespread first-party support from various companies including Microsoft, Amazon, Mozilla, Google, Spotify, JetBrains, etc.
They have widespread adoption, with almost 10x the install base of Flatpaks.
Do you guys really need to keep throwing blogs at something which isn't going away and is useful to users? How is this useful in any way? Canonical isn't suddenly going to give up on this, and I don't even want them to.
The main point here is that Snaps take away control from the user. The user has no control over how and when apps update. The backend for snaps is proprietary and completely in control of Canonical. If they decide to shove ads down through Snaps, they can at any time. And the whole hijacking of the chromium apt package to backdoor in snaps without user consent is a move straight out of the Microsoft playbook
This feels very similar to what Apple, Google and Microsoft do on their OSes. But Canonical seems to have forgotten that such behavior is what drove a lot of people to Linux. It is never going to be accepted. Nor should it be.
In my opinion, AppImage solved the problem Snap tried to solve, yet without being horribly intrusive.
Kudos to Linux Mint for taking a stand in regards to it, and hopefully more Ubuntu-based distros do the same until Canonical gets the idea.
I don't think they're out there trying to be malicious, but they need to be steered back to the correct path, as has been the case with odd monetization choices for Ubuntu before that didn't really benefit anyone.
The anti-features of snap seem like business decisions. Making it easy for users to use alternative stores would kneecap Canonical's paid IoT offerings https://ubuntu.com/internet-of-things. Also, I wonder if the ability for developers to force software updates could be marketed as a kill switch for proprietary software vendors.
My prediction (95% confidence): Snaps will become installable from Windows (via WSL), and Microsoft will buy Ubuntu and try to take over the installers for other Linux distros by having them use Snap as the main repo. This will be Microsoft's long-term app installer strategy.
I had seen some discussion before about the server not being open source, but I can clearly see the store API there. I'm just looking at the code now and haven't taken any time to test it out for myself yet.
Regarding Debian packages and APT: as we have seen with the number of PPAs, there really has to be some solution for that. I like the snap format; I believe it's heading in the right direction. It still has some problems with desktop software, but that seems to be getting addressed as it has progressed.
There is this strange cold war between Red Hat and Canonical, and Red Hat seems to have most of the NIH problems if you look at the history. I really don't think Red Hat or some of its developers particularly like relying on code from Canonical.
[1]: https://www.zdnet.com/article/linux-mint-dumps-ubuntu-snap/
Darmody | 5 years ago:
Or maybe Shuttleworth doesn't care at all and he just wants the big bucks from MS.
dheera | 5 years ago:
Launching Chrome: I click the Chrome icon.
Launching PrusaSlicer: Start a terminal and type
That doesn't seem like progress to me from a UX perspective.
[+] [-] zrm|5 years ago|reply
I don't understand why anybody wants this.
Libraries should have major versions and the latest of each major version should be compatible with anything using that major version, because that's what major version means for a library. You might then need to have more than one major version of the library installed, but any two applications using the same one should be able to use the same copy, and then have it maintained by one person in one place.
If every package has a separate copy of every library, people have to maintain them all separately. When that library has a security update, you now have to update five dozen packages instead of one or two, and have a security vulnerability if any of the maintainers don't keep up in a timely fashion. Which not all of them will.
> I want the proprietary app in a container.
People want containers to be magic but they're actually a hard problem. You want the app not to be able to do anything you don't want it to but still be able to do everything you do want it to.
A backup app that can't read my files is useless; it can't back them up. But it shouldn't be able to modify or delete them. But it should be able to modify its own state. It shouldn't have general network access but should be able to communicate with the backup server, which might have to be specified by the user and not the package maintainer. It doesn't need access to the GPU or the ability to use gigabytes of memory, but it does need to be able to transfer a lot of data over the network, but the data it transfers is lower priority than other network packets.
That requires the person configuring the app's container to have both detailed knowledge of the app and detailed knowledge of the container system. It's common for this not to be the case.
And that's why containers are a mess, not anything to do with the package manager, which should have little to do with the container system outside of packaging the app's default container configuration with the app.
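For what it's worth, most of that backup-app wishlist maps fairly directly onto an AppArmor-style profile; the one piece that doesn't is "only this backup server", which AppArmor can't express by hostname, and that is exactly the part the user rather than the packager would have to configure. A sketch, with the binary name and paths invented for illustration:

```
# Hypothetical profile for an imaginary /usr/bin/backupd.
/usr/bin/backupd {
  #include <abstractions/base>

  # Read (but never modify or delete) the user's files
  owner @{HOME}/** r,

  # Its own state directory is the only place it may write
  owner @{HOME}/.backupd/** rw,

  # TCP allowed, but only by protocol family; pinning it to one
  # backup server would need a firewall rule, not AppArmor
  network inet stream,

  # No GPU access
  deny /dev/dri/* rw,
}
```

Memory limits and traffic priority are likewise out of scope here (cgroups and qdiscs respectively), which rather supports the point: one "container" knob doesn't exist, you end up composing several mechanisms.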
[+] [-] 2ion|5 years ago|reply
Snap is a way to contain / scope this kind of scripted activity. This is a welcome change. Additionally, deb/apt has much worse transaction support than yum and its successors: you can simply roll back yum transactions, but good luck rolling back a borked APT system where package maintainer scripts have already done unspeakable things to the system and left you stuck. APT's configuration system is also arcane and badly documented; the debhelpers that control how most packages are built are tens of thousands of lines of Perl, Python, C++, C, makefile and m4 code that somehow work, but are in no way a path to straightforward, predictable packages. It's ultra flexible, but also ultra complex, and the trend in package / release management is toward simplification, not complication. A stopgap was the many projects that generate Debian packages from venvs or arbitrary directory trees (for which you could also use the deprecated old-style DEBIAN package format and pass it to dpkg-deb without the arcane dpkg-* toolchain, but again, Debian considers that kind of packaging not "well formed", and who knows when it gets removed).
Snaps are just a different way to do integrated containerized applications with scoped config management, versioning and a release system on top (which is what makes the difference). Perhaps somebody could make a better solution. Meanwhile, Red Hat introduced Application Streams; probably Canonical also felt it needed an answer to that.
Ubuntu's exposure to its "universe" repository component, many of whose packages are badly maintained and not really part of the release QA, is also a huge risk for an enterprise distribution (where the money is), so it is no surprise that Canonical is looking to decouple its core server offering from it, and maybe at some point remove it from the base install altogether ("use at your own risk") and then drop it.
[+] [-] enriquto|5 years ago|reply
[+] [-] znpy|5 years ago|reply
[+] [-] navaati|5 years ago|reply
[+] [-] unknown|5 years ago|reply
[deleted]
[+] [-] billme|5 years ago|reply
Sure, this might be true for the average user, but it is toxic to the “super user” community that’s in the best position to help support the larger community and may end up pushing them away.
Snap at the very least should have an opt-out feature, if not be opt-in during an install.
More criticisms may be found here:
https://en.wikipedia.org/wiki/Snap_(package_manager)#Critici...
[+] [-] dilandau|5 years ago|reply
[+] [-] nerdbaggy|5 years ago|reply
[1]https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1643706
[2]https://www.videolan.org/vlc/download-ubuntu.html
[+] [-] oarsinsync|5 years ago|reply
> If you wish to install the traditional deb package, it is available as usual via APT, with all security and critical bug fixes. However, there will be no major VLC version updates until the next Ubuntu release.
This is in line with Ubuntu (and Debian) repo policies. You do not get major software updates in between distribution updates unless you use a third party repository. You do get bug fixes and security fixes, and/or can track unstable if you need the bleeding edge.
[+] [-] hn_throwaway531|5 years ago|reply
You can try using soft links as a work around.
[+] [-] unknown|5 years ago|reply
[deleted]
[+] [-] uhnllon|5 years ago|reply
Sorry, snaps are a LOT slower than just running a binary. Did I say they're slow? Well, they're slow.
I have an SSD and it feels like it's 1992 and I'm trying to run some snap on a Cyrix without cache and 16MB of RAM. I switch to the binary version (oh my, Chromium), and it's a freaking flash: 0.x seconds and you're there, the full app available.
Snaps are a NO GO my friends.
Besides having LOTS of problems running outside the standard GUI (GNOME 3), or even in the standard (supposedly heavily tested) GUI, they are slow.
Sorry, I've already said that, huh? SLOW, that's snaps.
If there is somebody from Ubuntu here, please take a serious look at how snapped apps (pun intended) read/write $HOME defaults.
I mean, we have to have defaults somewhere, so thingies like the colour theme, the theme engine, the default download path, etc. should be followed exactly as the user has configured them.
I use Ubuntu, but I certainly won't be using it in the future if my applications, which now take merely 0.x seconds to open, start to take 3, 4, 15! seconds to open. In fact I've started to look at Debian and Fedora; they currently appear to have saner defaults than Ubuntu.
No, the second time I open an app in a session doesn't count AT ALL for the speed.
[+] [-] burtonator|5 years ago|reply
Here are some problems I've had:
- snaps use a different directory than our main app. So if you install our Debian package, then move to the snap package, all your data seems to vanish; it's just in another hidden directory. I tried to figure out how to get the directories to sync up but couldn't get it to work, and it's yet another 'thing to support'. I only have so much time.
- snaps have various bugs that you encounter after you've shipped the app that aren't present at build time. Mostly due to being in a container and 'reasonable' things not being accessible and needing to be granted access to via a configuration file.
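The data-directory mismatch in the first point can sometimes be papered over with a symlink from the snap's per-app directory to the old dot-directory. A hedged sketch (the app name and both paths are invented; under strict confinement the snap may still be denied access to the link target, which matches the "couldn't get it to work" experience):

```python
# Sketch: point the snap's data directory (~/snap/<app>/current) at the
# dot-directory the .deb version used (~/.<app>). "myapp" is hypothetical.
import os
import tempfile
from pathlib import Path

def link_snap_data(home: str, app: str = "myapp") -> Path:
    deb_data = Path(home) / f".{app}"                  # where the deb kept data
    snap_data = Path(home) / "snap" / app / "current"  # where the snap looks
    snap_data.parent.mkdir(parents=True, exist_ok=True)
    if not snap_data.exists():
        os.symlink(deb_data, snap_data)                # old data appears to the snap
    return snap_data

# Demo in a throwaway directory standing in for $HOME:
with tempfile.TemporaryDirectory() as home:
    (Path(home) / ".myapp").mkdir()
    (Path(home) / ".myapp" / "settings.ini").write_text("theme=dark")
    link = link_snap_data(home)
    print((link / "settings.ini").read_text())  # theme=dark
```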
The strategy I'm thinking of migrating to is to just distribute as a .deb and have our own apt line that is installed during the .deb installation. I think this is what Slack and other Electron packages have migrated to which is easier for them to support.
I mean conceptually it sounds great. Put your apps in a container. They will be isolated. Great. But in practice it's a nightmare.
To be fair, though, macOS had similar issues when Apple started going all-in on isolation and privileges.
I think the main issue is that none of the OS maintainers spend a day in the shoes of a package maintainer. And if they did, they wouldn't care, because they own the OS and many of these apps compete with their core products.
At least they have plausible deniability that their behavior isn't anti-competitive: they're just trying to improve security for the user!
For example, Zoom got a ton of crap about their installer but they compete with Facetime which DOES NOT have to constantly ask the user for privileges. Apple granted Facetime these privileges via the OS.
From the perspective of a user, it's horrible.
"Can this app access your Downloads folder?"
"Can this app access your Webcam?"
"Can this app access your Microphone?"
"Can this app access your Documents folder?"
... and on and on, ad nauseam.
[+] [-] gentleman11|5 years ago|reply
Why a proprietary backend though? I suppose Canonical views packaged apps as a platform opportunity and wants to be the first to “capture” the users before somebody bigger comes and takes over?
[+] [-] 2ion|5 years ago|reply
Exactly. Personally I have been sticking my desktop programs into "firejail"-managed "containers" for a long time. It's a good thing that a similar solution has been implemented that is suited to bringing this to the masses.
[+] [-] lucb1e|5 years ago|reply
> it is plain weird for every app I install to have so much file system and system access
A quick glance at Wikipedia to make sure I'm not talking out of my ass seems to confirm that:
> is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. [...] AppArmor is enabled by default in Debian 10 (Buster) [from July 2019].
(Also, I'm not a fan of claiming "devil's advocate" when you're saying something that you know everyone will agree with. It's similar to saying "downvote me all you want but [insert popular HN opinion]". Of course the principle of least privilege for software is something lauded by every logically thinking person.)
[+] [-] kstenerud|5 years ago|reply
Unfortunately, snap comes with all of these extra issues that happen when the developer isn't empathizing with the user. Also, much to my chagrin, snaps don't actually uninstall cleanly, and can really hoop your system. I now install snaps INSIDE an LXC container so that snap can't misbehave and break my system, or else, if I can, I just use apt with a custom repo (for Docker, because the snap is awful). Ubuntu 20.04 will probably be my last Ubuntu system, and that's a shame... I really liked it.
[+] [-] cocoa19|5 years ago|reply
This reduces the maintenance burden, while also allowing you to have up to date packages.
The way this is done is by bundling all dependencies in your snap package rather than using the system ones.
I think it's a great idea for applications you want to be updated frequently, like VS Code, Chrome, etc.
It's not perfect, e.g. the back end is closed source, but I'm glad Ubuntu is giving us options and staying at the forefront of package management.
[+] [-] rayiner|5 years ago|reply
I ended up settling on Silverblue, a Fedora derivative that uses an immutable base system along with Flatpak for applications, and it’s been great. Equally trouble free, and Flatpak has many of the benefits of Snaps without some of the downsides (fully open source).
[+] [-] BiteCode_dev|5 years ago|reply
Those arguments are always from very tech-savvy people.
They are a good example of purity over practicality, completely ignoring all the problems apt/yum have for the average user, or for a dev publishing software.
If you are looking for reasons why Linux on the desktop never happened, well, this is one of them.
[+] [-] noisy_boy|5 years ago|reply
I wasn't keen on using a derivative of a derivative distro; I thought of using Debian but wasn't confident the issues wouldn't persist there. Considering everything just works perfectly with Pop!_OS, I might just stick with it.
Ubuntu was such a breath of fresh air at the beginning (I still remember Dapper Drake). It is sad to see it going this way.
[+] [-] zdragnar|5 years ago|reply
Supposedly they're working on tabbed and stacking layouts for tiled windows; if that's the case, I don't know if I'll ever go back to i3 or sway!
[+] [-] csdreamer7|5 years ago|reply
Yes, some snaps are not updated, and VLC only working in the home directory, as another HN commenter said, is a pain.
Snaps still solve a lot of issues on Linux that Windows does not have.
What happens if the new version of your editor breaks some well known plugins?
On Windows you just install the older version from the archives. On Linux you risk dependency issues. Snaps are a single command to switch to an older version. Very important in both the software and media space.
One option is to stay on the LTS version of Blender.
Another example: back when I used Atom, one upgrade completely broke a popular plugin for me because of a change in Chrome. I thought it would be fixed quickly, so I didn't revert the package, and I lost the older version in the cache. It took the Atom devs two weeks to handle the change to Chrome, and then the Arch maintainer of the Atom package didn't get around to upgrading it for another week or two after that.
Things like apt pinning really don't help you when you discover the issue on your main workstation. It also doesn't change the fact that it is so much easier to revert on Windows. Snaps make version control even easier to do on Linux than on Windows.
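For the record, the "single command" claim lines up with the stock snapd CLI as I understand it; these commands are illustrative, with "vlc" as a stand-in package name and the channel name invented:

```
snap list --all vlc                          # show retained revisions, old ones included
sudo snap revert vlc                         # switch back to the previously active revision
sudo snap refresh vlc --channel=3.0/stable   # track an older/stable channel instead
sudo snap refresh --hold vlc                 # newer snapd: pause automatic refreshes
```

The revert works because snapd keeps the previous revision's squashfs image on disk rather than overwriting it, which is exactly what apt's cache doesn't guarantee.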
The back end being proprietary is a good argument, but for a company that has worked so long with Linux and free software, give Canonical a little trust to release it.
[+] [-] WesternStar|5 years ago|reply
[+] [-] icebraining|5 years ago|reply
[+] [-] antpls|5 years ago|reply
I can understand the "proprietary store" concerns of this thread, however.
[+] [-] zepto|5 years ago|reply
[+] [-] compsciphd|5 years ago|reply
Take a multi-user system and users who run applications like Chromium or Firefox.
It's dangerous to upgrade an application while users are running it, because the files the running application depends on can change underneath it, making it either break in weird ways or forcing end users to restart it.
If these apps were just distributed as snaps, it wouldn't matter. They would keep using the old image without any problem, while new executions would get the new image. If one really wanted to encourage users to exit and restart (e.g. for some security hole), the same mechanisms that exist today to get people to restart could be used.
With that said, I think it should be a choice, not something forced down our throats. If I install something with apt/dpkg, I expect it to be an apt/dpkg package, not a snap. If I want to use snap, I'll install it with snap.
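The mechanism behind that breakage is easy to reproduce: a file handle opened before an in-place upgrade keeps the old inode, while anything the app opens lazily afterwards gets the new, possibly incompatible bytes, so a running app ends up straddling two versions. A toy demonstration with stdlib only (no snap or dpkg involved; file names invented):

```python
# A "running app" opens core.bin at startup but loads plugin.bin lazily.
# An in-place upgrade (write + atomic rename, roughly how dpkg replaces
# files) leaves the app reading v1 code alongside v2 code.
import os
import tempfile

d = tempfile.mkdtemp()
core = os.path.join(d, "core.bin")
plugin = os.path.join(d, "plugin.bin")
for p in (core, plugin):
    with open(p, "w") as f:
        f.write("v1")

held = open(core)              # opened before the upgrade

# The "package manager" replaces both files atomically:
for p in (core, plugin):
    tmp = p + ".new"
    with open(tmp, "w") as f:
        f.write("v2")
    os.rename(tmp, p)

print(held.read())             # v1 -- the already-open file is the old version
print(open(plugin).read())     # v2 -- the lazily-opened file is the new version
```

A snap avoids the straddling not by changing this POSIX behavior but by mounting each revision as a separate immutable image, so every file the running app touches comes from the same old tree.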
[+] [-] kd913|5 years ago|reply
They have widespread adoption, with almost 10x the install base of Flatpak.
Do you guys really need to keep throwing blogs at something which isn't going away and is useful to users? How is this useful in any way? Canonical isn't suddenly going to give up on this, and I don't even want them to.
[+] [-] quantummkv|5 years ago|reply
This feels very similar to what Apple, Google and Microsoft do on their OSes. But Canonical seems to have forgotten that such behavior is exactly what drove a lot of people to Linux. It is never going to be accepted. Nor should it be.
[+] [-] afiori|5 years ago|reply
Personally I find snaps a very disappointing solution to a very interesting problem.
[+] [-] RNCTX|5 years ago|reply
Do you really want the answer to that...
There are people who would loudly insist that there should be nothing but a kernel and Stallman's FAQ.
[+] [-] oblio|5 years ago|reply
[+] [-] kemonocode|5 years ago|reply
Kudos to Linux Mint for taking a stand with regard to it, and hopefully more Ubuntu-based distros do the same until Canonical gets the idea.
I don't think they're out there trying to be malicious, but they need to be set back on the right path, as has been the case before with odd monetization choices for Ubuntu that didn't really benefit anyone.
[+] [-] curt15|5 years ago|reply
[+] [-] zubairq|5 years ago|reply
[+] [-] olafura|5 years ago|reply
I had seen some discussion before about the server not being open source, but I can clearly see the store API there. I'm just looking at the code now and haven't taken any time to test it out myself yet.
Regarding Debian packages and apt: as we have seen with the number of PPAs, there really has to be some solution for that. I like the snap format and believe it's heading in the right direction. It still has some problems with desktop software, but that seems to be getting addressed as it has progressed.
There is this strange cold war between Red Hat and Canonical, and Red Hat seems to have most of the NIH problems if you look at the history. I really don't think Red Hat, or some of its developers, like relying on code from Canonical.