top | item 43519482

devit | 11 months ago

That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.

It can also be achieved with static linking, or by shipping all needed libraries and using a shell script loader that sets LD_LIBRARY_PATH.

Also, glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the package repositories and running apt/dnf should work, although unfortunately it might not in practice due to the general incompetence of programmers and distribution maintainers.

Win32 is obviously not appropriate for GNU/Linux applications, and you also have the same dependency problem here, with the same solution (ship a whole Wine prefix, or maybe ship a bunch of DLLs).

Const-me|11 months ago

> shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux

That doesn’t work for GUI programs which use a hardware 3D GPU. Linux doesn’t have a universally available GPU API: some systems have GL, some have GLES, some have Vulkan, all three come in multiple versions with limited compatibility between them, and many of the optional features are vendor specific.

In contrast, it’s impossible to run modern Windows without working Direct3D 11.0, because the dwm.exe desktop compositor requires it. If a piece of software consumes Direct3D 11.0 and doesn’t require any optional features, it will run on any modern Windows. (For example, FP64 math support in shaders is an optional feature, but sticking to the required feature set is not very limiting in practice unless you need to support very old GPUs which don’t implement feature level 11.0.) Surprisingly, it will also run on Linux systems which support Wine: without a Vulkan-capable GPU it will be slow, but it should still work thanks to Lavapipe, the Linux equivalent of Microsoft’s WARP, which Windows uses on computers without a hardware 3D GPU.

arghwhat|11 months ago

Note that this also undercuts the post's premise that Windows has a simple stable ABI: Win32 sure is stable, but that's not what applications are coded against anymore.

Sure, you can run a 20 year old app, but that is not the same as a current app still working in 20 years, or even 5.

the__alchemist|11 months ago

Question, from an application developer's perspective: what are the implications for cross-platform Vulkan applications? I.e., my 3D applications all use Vulkan, and they compile and just work on both Windows and Ubuntu. Does this mean that on other or older distros they might not work?

account42|11 months ago

This is FUD. There isn't a single real desktop Linux distribution without OpenGL support. The basic OpenGL API hasn't ever changed; it's only been extended. It has even more backwards compatibility than Direct3D. Sure, you can deliberately build a distro with only Vulkan or GLES (a mobile API) if you want to be an ass, but the same goes for Windows. Same for X11: Xlib works everywhere, even on any Wayland-only distribution that gives a single crap about running binary-distributed software.

Now GUI toolkits are more of an issue. That's annoying for some programs, many others do their own thing anyway.

sleepycatgirl|11 months ago

Without a Vulkan-capable GPU you still get 3D acceleration via wined3d. Unless you meant without any GPU at all :s

api|11 months ago

> That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.

True, but sad. The way to achieve compatibility on Linux is to distribute applications in the form of what are essentially tarballs of entire Linux systems. This is the "fuck it" solution.

Of course I suppose it's not unusual for Windows stuff to be statically linked or to ship every DLL with the installer "just in case." This is also a "fuck it" solution.

grandiego|11 months ago

> to distribute applications in the form of what are essentially tarballs of entire Linux systems.

Not so bad when Linux ran from a floppy with 2 MB of RAM. Sadly, every library just got bigger and bigger, without any practical way to generate a lighter application-specific version.

icedchai|11 months ago

You could say the same about container images for server apps. (Packaging is hard.)

vlovich123|11 months ago

> Also glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the packaging repositories and running apt/dnf should work in theory, although unfortunately might not in practice due to the general incompetence of programmers and distribution maintainers.

Got it. So everything is properly designed but somehow there's a lot of general incompetence preventing it from working. I'm pretty sure the principle of engineering design is to make things work in the face of incompetence by others.

And while glibc is backward compatible, and that generally does work, glibc is NOT forward compatible, which is a huge problem: it means you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run them on. Whereas on Mac & Windows it's pretty easy to build applications on my up-to-date system targeting older variants.
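The forward-compatibility problem is visible in the binary itself: the dynamic symbol table records the minimum glibc symbol version each import needs, and a system whose glibc is older than any of those versions refuses to load the binary. A quick way to inspect that (the binary path here is just an example):

```shell
# List the versioned glibc symbols a binary was linked against.
# If this prints e.g. GLIBC_2.34, the binary won't load on any system
# whose glibc predates 2.34 - even though a newer glibc happily runs
# binaries built against 2.17.
objdump -T /bin/sh | grep -o 'GLIBC_[0-9.]*' | sort -Vu
```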

einpoklum|11 months ago

> So everything is properly designed but somehow there's a lot of general incompetence preventing it from working.

But it is working, actually:

* If you update your distro with binaries from apt, yum, zypper etc. - they work.

* If you download statically-linked binaries - they work.

* If you download Snaps/Flatpak, they work.

> it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run it on.

Only if you want to distribute a dynamically-linked binary without its dependencies. And even then - you have to build with a toolchain for that distro, not with that distro itself.

GoblinSlayer|11 months ago

The Windows toolchain (even the GNU one) just provides old libraries to link against. This could work the same on Linux, and AFAIK zig does just that.

skissane|11 months ago

> And while glibc is backward compatible & that generally does work, glibc is NOT forward compatible which is a huge problem - it means that you have to build on the oldest distro you can find so that the built binaries actually work on arbitrary machines you try to run it on.

Isn’t this easily solved by building in a container? It's something a lot of people do anyway. I do it all the time because it insulates the build from changes in the underlying build agents: if the CI team decides to upgrade the build agent OS to a new release next month, or to migrate the agents to a different distro, building in a container (mostly) isolates my build job from that change, whereas doing it directly on the agents exposes it to them.

Spivak|11 months ago

glibc really doesn't want to be statically linked, so if you go this route your option is to ship another libc. It does work, but comes with its own problems, mostly revolving around NSS.

okanat|11 months ago

And NSS defines how usernames are mapped to UIDs, how DNS resolution works, how localization works, and so on. If you're changing libc you need to ship an entire distro as well, since it will not use the system libraries of a glibc distro correctly.