top | item 19939761

bjpbakker | 6 years ago

> ELFs still don't even have an accepted embedded icon standard FFS

Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s).

> you could publish a binary containing native versions for all existing architectures

This sort of ignores the hardest part of shipping binaries: linked libraries. Dynamically linking everything is simply not always feasible. Not to mention libc.

Also I don't really understand why anyone on Linux would want this. The fact that I can recompile all of the software I use is a really important feature to me, not a distribution problem. I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work around their specific distribution problems. I don't see any of those problems on Linux.

(edit: wording, see below)

Crinus|6 years ago

> Their app bundles are not binaries, they are a directory structure.

Yes, but on Linux no file manager understands directories as bundles (except perhaps GNUstep's GWorkspace).

> Also I don't really understand why anyone on Linux would want this.

Because they want to distribute binaries themselves or via a third-party distribution site (i.e. not as part of a Linux distribution) without having the user compile the code themselves (either for convenience, or because they do not want to or cannot distribute the source code).

Having said that, this is mainly useful if you want to distribute a single binary that supports multiple architectures. Almost everything is distributed in archives (even self-extracting archives can be shell scripts, although annoyingly, software like GNOME's file manager makes this harder), so you can use a shell script to launch the proper binary without kernel support.
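The dispatch-script trick can be sketched in a few lines of portable shell; the binary names and layout here are hypothetical:

```shell
#!/bin/sh
# Hypothetical launcher shipped next to per-architecture binaries
# (myapp.x86_64, myapp.aarch64, ... -- the names are made up).

pick_binary() {
    # Map `uname -m` output to a bundled binary name.
    case "$1" in
        x86_64)        echo "myapp.x86_64" ;;
        aarch64|arm64) echo "myapp.aarch64" ;;
        i?86)          echo "myapp.i386" ;;
        *)             return 1 ;;
    esac
}

if bin=$(pick_binary "$(uname -m)"); then
    echo "would run: ./$bin"
    # A real launcher would replace itself with the native binary:
    # exec "$(dirname "$0")/$bin" "$@"
else
    echo "unsupported architecture: $(uname -m)" >&2
fi
```

FatELF would move this selection into the kernel's ELF loader; the script does the same job one process spawn earlier.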

AnIdiotOnTheNet|6 years ago

> Yes, but on Linux no file manager understands directories as bundles (except perhaps GNUstep's GWorkspace).

And the ROX Filer, which sadly hasn't been an active project for many years.

rjzzleep|6 years ago

> I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work around their specific distribution problems. I don't see any of those problems on Linux.

This predates the Appstore by a huge margin. They added universal binaries to make the transition between 32-bit and 64-bit seamless. And it actually worked really well.

Soon after, tools popped up to reduce binary sizes by stripping out the 64-bit or 32-bit part.

The other part that's a bit special is that Apple has special variables (@executable_path, @loader_path, @rpath) in its linker options, along with an install_name_tool that allows you to rewrite a library's system path to an application-specific one. That lets you bundle the necessary libraries with a linker path that's relative to the executable or app resource path. I think this has gotten better recently, but pretty much everyone struggled with it at the beginning.
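For concreteness, the relinking dance looks roughly like this. The bundle layout and library name are invented for illustration, and the commands are macOS-only:

```shell
# Inside MyApp.app, after copying libfoo.dylib into Contents/Frameworks
# (all names here are hypothetical).

# Inspect which install names the executable currently references:
otool -L MyApp.app/Contents/MacOS/MyApp

# Rewrite the library's own install name to be executable-relative:
install_name_tool -id @executable_path/../Frameworks/libfoo.dylib \
    MyApp.app/Contents/Frameworks/libfoo.dylib

# Point the executable at the bundled copy instead of the system path:
install_name_tool -change /usr/local/lib/libfoo.dylib \
    @executable_path/../Frameworks/libfoo.dylib \
    MyApp.app/Contents/MacOS/MyApp
```

At load time dyld expands @executable_path to the directory containing the running executable, so the bundle works from wherever the user drags it.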

On Linux this was basically outsourced to system packaging: developers left it to the distro, whereas in the Mac environment, because there was no such packaging, the burden fell on whoever distributed the software. That made people think twice about what they link.

classichasclass|6 years ago

> They added universal binaries to make the transition between 32bit and 64bit seamless.

Not just 32 vs 64-bit, but entire architectures. Mach-O universal binaries originated at NeXT, where at one time a binary could (and many did) run on SPARC, PA-RISC, x86 and 68K. On http://www.nextcomputers.org/NeXTfiles/Software you can see this in their filename convention: the "NIHS" tag tells you which architectures (NeXT 68K, Intel, HP, SPARC). The binary format carried over into OS X, where it was secretly leveraged as part of Marklar for many years.

In fact, Universal even on OS X really meant PowerPC and i386 at the beginning of the Intel age. It eventually morphed into the present meaning. I even maintained a fat binary with ppc750, ppc7400 (that is, non-AltiVec and AltiVec) and i386 versions.

pedrocr|6 years ago

>Also I don't really understand why anyone on Linux would want this. The fact that I can recompile all of the software I use is a really important feature to me, not a distribution problem. I can see why Apple wanted this to simplify distribution via their Appstore, but IMO that's mostly to work around their specific distribution problems. I don't see any of those problems on Linux.

Couldn't agree more, and yet Snap and Flatpak exist. It's probably so that third parties can package closed-source stuff for all distros easily. These days one of the first things I do on a fresh Ubuntu install is get rid of snapd, because they use it for things where it's useless (e.g., gnome-calculator). If someday they stop packaging the apps directly, I'll probably finally go back to Debian.

AnIdiotOnTheNet|6 years ago

It isn't just for closed source stuff. Some developers actually care about the user experience and don't want to have to tell people "sorry, you have to wait until someone comes along and decides to package that for your distro, or compile it from source!".

mrpippy|6 years ago

> Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s)

You actually can put an icon into the resource fork of a Mach-O binary, and it’ll show up in the Finder and Dock (assuming the executable turns itself into a GUI app).

It's an uncommon thing to do, but QEMU uses it, and unfortunately I don't think there's another way to embed an icon in a bare Mach-O binary.

jhbadger|6 years ago

The current Apple application structure (the .app directories that originated in NeXTSTEP) isn't what the previous poster was referring to -- binaries in traditional (pre-OS X) Mac OS weren't like this, but actual files that could run on either Motorola 68000-series chips or (in the 1990s) IBM's PowerPC chips.

0x8BADF00D|6 years ago

Apple does not use ELF for their binaries; macOS's ld produces the Mach-O format.

bjpbakker|6 years ago

Right, I worded that quite strangely. Thanks for the feedback!

admax88q|6 years ago

> The fact that I can recompile all of the software I use, is a really important feature to me and not a distribution problem.

I've always found this to be an interesting observation about free software. So many complicated things like FatELF and dll-hell are just straight up _not_ an issue when you're working in a source-code world where you just compile the software for the machine you're using it on.

Most of the efforts around FatELF, Flatpak, etc. seem to me to be driven by the desires of corporations who want to ship proprietary software on Linux, and as such need better standardization at the binary level rather than at the source level.

It's a win for Free Software in my mind, that we shouldn't typically have to worry about this added complexity. Just ship source code, and distributions can ship binaries compiled for each specific configuration that they choose to support.

Crinus|6 years ago

Note that source code access and FOSS are orthogonal. AFAIK in older Unix systems, software you'd buy would often come in source code form. In fact, in the past several Linux distributions carried a lot of such software.

As an example Slackware distributes a shareware image viewer/manipulator called xv (which was very popular once upon a time): http://www.trilon.com/xv/

It is the license that makes something FOSS, not being able to compile/modify the source code.

CJefferson|6 years ago

Well, except I work on a large open source project, and we have to blacklist random versions of gmp and gcc that our code doesn't work with due to bugs.

And we can't reasonably test with every version of the compiler and libraries, so we just have to wait for bug reports and then try to find out what's wrong.

Whereas I can pick one set of tools, make a Docker image, and then run 60 CPU-days of tests. No Linux distro is going to do that much testing.

AnIdiotOnTheNet|6 years ago

> So many complicated things like FatELF, dll-hell are just straight up _not_ and issue when you're working in a source code world where you just compile the software for the machine you're using it on.

Said like someone who has never actually had to compile someone else's software. Why do you think so many projects these days have started shipping Docker containers of their build environment? Why are there things like autoconf?

AnIdiotOnTheNet|6 years ago

> Also Apple does not embed icons in their binaries. Their app bundles are not binaries, they are a directory structure. The icon is just another file, just like the _actual_ executable(s).

Pedantry. You could mount an ELF as a filesystem if you had any desire to. Structures are just structures.

> This sort of ignores the hardest part of shipping binaries; linked libraries.

Time has shown that dynamic linking all the things is a terrible idea on many fronts anyway. Why do you think there's all this Docker around and compiling statically is on an upward trend?

The solution is simple: DLLs for base platform stuff that provides interfaces to the OS and common stuff, statically compile everything else. Then the OS just ships a "virtual arch" version of the platform DLLs in addition to native on every arch.

The reason the Linux community doesn't want this sort of thing is that, frankly, they just hate stability. I mean, the kernel is stable (driver ABI excepted), but basically nothing outside of that is.

admax88q|6 years ago

> The reason the Linux community doesn't want this sort of thing is that, frankly, they just hate stability.

I'd argue that the reason the Linux community doesn't want this is that it introduces maintenance burdens on the community that only really serve to support corporations shipping proprietary software.

I really don't care about making proprietary software easier on Linux, but I do care about Linux having to carry the kind of backwards-compatibility baggage Windows has had to handle, just so that Google can deliver Chrome as a binary more reliably.

bjpbakker|6 years ago

> reason the Linux community doesn't want this sort of thing is that, frankly, they just hate stability

And yet I fearlessly upgrade my Linux system at any time. With OSX you first have to check whether the software you use is compatible at all, especially if you use proprietary software.