top | item 45875511

qdotme | 3 months ago

You could actually support both at runtime, with both ABIs being available. This is done routinely on x86_64 with the x86 ABI for compatibility (both sets of system libraries are installed); for a while I ran three ABIs (including x32, the 64-bit ABI with 32-bit pointers) for memory savings with interpreted languages.

IRIX, iirc, supported all 4 variants of MIPS; HP-UX did something weird too! I’d say for some computations one endianness or the other is preferred, and it could be switched at runtime.

Back in the day it also saved a lot of network stack overhead - the kernel could switch endianness at will, and did so.
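To make the byte-order point concrete, here is a minimal Python sketch (not from the thread) of what "network stack overhead" means here: network byte order is big-endian by definition, so a big-endian host's `htonl` is the identity and no swap is ever performed, while a little-endian host must swap on every conversion.

```python
import socket
import struct
import sys

value = 0x01020304

# Network byte order is big-endian by definition; struct's '>' prefix
# produces it regardless of the host CPU.
wire = struct.pack(">I", value)
print(wire.hex())  # prints "01020304" on any host

# htonl converts host byte order to network byte order. On a big-endian
# host this is the identity - the "overhead saved" when the machine's
# endianness already matches the wire format.
converted = socket.htonl(value)
if sys.byteorder == "big":
    print(converted == value)          # no byte swap needed
else:
    print(converted == 0x04030201)     # little-endian host must swap
```

Either branch prints `True`; which one runs depends on the CPU the sketch executes on.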

mort96 | 3 months ago

Are you advocating that Linux systems on PowerPC should have two variants of every single shared library, one using the big endian ABI for big endian programs and one using the little endian ABI for little endian programs?

Because that's how 32-bit x86 support is handled. There are two variants of every library. These days, Linux distros don't even provide 32-bit libraries by default, and Ubuntu has even moved to remove most of the 32-bit libraries from their repositories in recent years.

Apple removed 32-bit x86 support entirely a few years back so that they didn't have to ship two copies of all libraries anymore.

What you're proposing as a way to support both little and big endian ABIs is the old status quo that the whole world has been trying (successfully) to move away from for the past decade due to its significant downsides.

And this is to say nothing of all the protocols out there which are intended for communication within one computer and therefore assume native endianness.
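A minimal Python sketch (the message format here is hypothetical, not from any real protocol) of how a native-endianness assumption bites: the same four bytes decode to wildly different values depending on which byte order the reader assumes.

```python
import struct

# A hypothetical local IPC message: a length field written with the
# host's native byte order ('=' format prefix, no swap applied).
length = 640
native_bytes = struct.pack("=I", length)

# A peer that assumes an explicit endianness misreads the bytes if the
# writer's native order was the other one.
as_little = struct.unpack("<I", native_bytes)[0]
as_big = struct.unpack(">I", native_bytes)[0]

# Exactly one of these equals 640; the other is garbage. For example,
# 640 written little-endian is 80 02 00 00, which read as big-endian
# is 0x80020000 = 2147614720.
print(sorted([as_little, as_big]))
```

This is why protocols meant to cross an endianness boundary must pin a byte order explicitly rather than relying on `=`.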

qdotme | 3 months ago

There are downsides; I’m unsure whether they’re significant or negligible. The same goes for “internal” protocols - assuming native endianness essentially goes against modularity (and while in the past there were good reasons to trade modularity for performance, darn, baudline.com of 2010 works amazingly well and is still in my toolbox!)

A big advantage of the “old ways” was the cohesion of software versions within a heterogeneous cluster. In a way I caught the tail end of that with the phasing out of MIT Athena (which at the time was very heterogeneous on both the OS and architecture side) - but the question is, well, why.

Our industry is essentially a giant loop of centralizing and decentralizing, with advantages to both and propagation delays between “new ideas” and implementation. Nothing new - the whole economy is necessarily cyclic, so why not this.

I’d argue that in the era of inexpensive hardware (again) and heterogeneous edge compute, being able to run a single binary across all possible systems will again be advantageous for distribution. Some of that is the good old cosmopolitan libc; some of that is just the broad failure of /all/ end-point OSes (which will breed its own consequences) - Windows 11, OSX, Android, etc.

kalleboo | 3 months ago

I had to look it up: Apple has been shipping code for two or more different ISAs simultaneously, continuously, since 1994.

68k+PPC 1994-2007, PPC32+PPC64 2003-2008, PPC+x86 2006-2008, x86+x64 2007-2019, x64+ARM 2021-2028 (announced).