
The future of 32-bit support in the kernel

263 points| binarycrusader | 6 months ago |lwn.net

301 comments


skywal_l|6 months ago

Removing nommu feels wrong to me. Being able to run Linux on hardware simple enough that anybody sufficiently motivated could write an emulator for it helps us, as individuals, remain in control. The more complex things are, the less freedom we have.

It's not a well-argued thought, just a nagging feeling.

Maybe we need a simple POSIX OS that would run on simple, open, dedicated hardware that can be comprehended by a small group of human beings. A system that would allow communication, simple media processing and productivity.

These days it feels like we are at a tipping point for open computing. It feels like being a frog in hot water.

dragontamer|6 months ago

I don't think software emulation is very important.

Let's look at the lowest-end chip in the discussion: almost certainly the SAM9x60. It is a $5 ARMv5 chip with an MMU, supporting DDR2/LPDDR/DDR3/LPDDR3/PSRAM: a variety of embedded RAM, 'old desktop RAM', and mobile RAM.

Yes, it's 32-bit, but it runs at 600MHz and supports gigabits of RAM. And you can seriously mass-produce a computer for under $10 with this chip (so long as you can handle 4-layer PCBs that break out the 0.75mm-pitch BGA). As in, the reference design with DDR2 RAM is a 4-layer design.

There are a few Rockchips and such in (rather large) TQFP packages that are arguably easier. But since DDR RAM is BGA, I think it's safe to assume BGA-level PCB layout as the baseline for simplicity.

---------

Everything smaller than this category of 32-bit / ARMv5 chips (be it Microchip SAM9x60, or competing Rockchips or AllWinner) is a microcontroller wholly unsuitable for running Linux as we know it.

If you cannot reach 64MB of RAM, Linux is simply unusable, even for embedded purposes. You really should be using something like FreeRTOS at that point.

---------

Linux drawing the line at 64MB hardware built within the last 20 years is... reasonable? Maybe too reasonable. I mean, I love the fact that the SAM9x60 is still usable for modern, new designs, but somewhere you have to draw the line.

ARMv5 is too old to even compile something like Node.js. I'm serious when I say this stuff is old. It's an environment already alien to typical Linux users.

reactordev|6 months ago

We need accessible open hardware. Not shoehorning proprietary hardware to make it work with generic standards they never actually followed.

Open source is one thing, but open hardware - that's what we really need. And not just a Framework laptop or a System76 machine. I mean a standard 64-bit open source motherboard, peripherals, etc. that aren't locked down with binary blobs.

eric__cartman|6 months ago

Those operating systems already exist. You can run NetBSD on pretty much anything (it currently supports machines with a Motorola 68k CPU, for example). Granted, many of those machines still have an MMU IIRC, but everything is still simple enough to be comprehended by a single person with some knowledge of systems programming.

duskwuff|6 months ago

nommu is a neat concept, but basically nobody uses it, and I don't see that as likely to change. There's no real use case for using it in production environments. RTOSes are much better suited for use on nommu hardware, and parts that can run "real" Linux are getting cheaper all the time.

If you want a hardware architecture you can easily comprehend - and even build your own implementation of! - that's something which RISC-V handles much better than ARM ever did, nommu or otherwise.

MisterTea|6 months ago

> Maybe we need a simple posix os that would run on a simple open dedicated hardware that can be comprehended by a small group of human beings.

Simple and POSIX would be a BSD like NetBSD or OpenBSD.

This is why I gravitated to Plan 9. Overall it's a better design for a networked world, and it can be understood by a single developer; people can and have maintained their own forks. It's very simple and small, and cross-platform support was baked in from day one. 9P turns everything into an I/O socket organized as a tree of named objects. Thankfully it's not POSIX, which IMO is not worth dragging along for decades; you can port Unix things with libraries. It also abandons the typewriter terminal and instead uses graphics. A fork, 9front, is not abandoning 32-bit any time soon AFAIK. I netboot an older industrial computer that is a 400MHz Geode (32-bit x86) with 128MB RAM, and it runs 9front just fine.

It's not perfect and lacks features, but that stands to reason for any niche OS without a large community. Figure out what is missing for you and work on fixing it: patches welcome.

pajko|6 months ago

Why do you need a full-blown Linux for that? Many of the provided features are overkill for such embedded systems. Both NuttX and Zephyr provide POSIX(-like) APIs, and NuttX has an API quite similar to the Linux kernel's, so it should be somewhat easier to port missing stuff (I have not tried to do that; the project I was working on got cancelled).

Denvercoder9|6 months ago

If you want a POSIX OS, nommu Linux already isn't it: it doesn't have fork().
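For context: fork() relies on duplicating the parent's virtual address space, which is why nommu Linux only offers vfork()-style process creation. A rough illustration of the spawn-and-exec pattern that works without fork() (sketched in Python for readability; the same idea applies to posix_spawn() in C):

```python
import os
import sys

# fork() duplicates the parent's address space, which needs an MMU
# (or expensive full copying). posix_spawn() creates a fresh process
# and immediately exec()s a new program into it, so the pattern also
# works on nommu systems.
pid = os.posix_spawn(
    sys.executable,                                   # program to run
    [sys.executable, "-c", "print('child ran')"],     # argv for the child
    os.environ,                                       # child's environment
)

# Reap the child and check that it exited cleanly.
_, status = os.waitpid(pid, 0)
assert os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
```

The cost is that you can't use the classic fork-then-mutate idiom (changing the child's state between fork and exec); everything has to be expressed as spawn parameters.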

trebligdivad|6 months ago

There are some other open OSs, like Zephyr, NuttX and Contiki - so maybe they're the right thing to use for the nommu case rather than Linux?

JoshTriplett|6 months ago

I don't think it makes sense to run Linux on most nommu hardware anymore. It'd make more sense to have a tiny unikernel for running a single application, because on nommu, you don't typically have any application isolation.

lproven|5 months ago

> Maybe we need a simple posix os that would run on a simple open dedicated hardware that can be comprehended by a small group of human beings.

That was part of the plan for Minix 3.

Clean separation in a microkernel, simple enough for teaching students, but robust.

But Intel used it and gave nothing back, and AST retired. :-(

pjmlp|6 months ago

There are plenty of FOSS POSIX-likes for such systems.

Most likely I won't be around this realm when that takes shape, but I predict the GNU/Linux explosion replacing UNIX was only a phase in computing history; eventually, when everyone responsible for its success fades away, other agendas will take over.

It is no accident that the alternatives I mention are all based on copyleft licenses.

chasil|6 months ago

This is a foreseeable cataclysm for me, as I retire next year: the core of our queueing system is 64-bit clean (K&R), as it compiled on Alpha, but our client software is very much not.

This is a young man's game, and I am very much not.

noobermin|6 months ago

You're not alone, I feel the same way. I think the future, if Linux really does need to remove nommu, would be a fork. I'm not sure if there's the community for that though.

Blammmoklo|6 months ago

Supporting 32-bit is not 'simple', and the difference between 32-bit hardware and 64-bit hardware is not big.

The industry has a lot of experience doing so.

In parallel, the old hardware is still supported, just not by the newest Linux kernel. Which should be fine anyway, because either you are not changing anything on that system, or you have your whole tool stack available to just patch it yourself.

But the benefit would be an easier, smaller Linux kernel, which would probably benefit a lot more people.

Also, if our society is no longer able to produce chips commercially and we lose all the experience people have, we probably have much bigger issues as a whole society.

But I don't want to deny that having the simplest possible way of making a small microcontroller yourself (it doesn't have to be fast or super easy, just doable) would be very cool, and could already solve a lot of issues if we ever needed to restart society from Wikipedia.

cout|6 months ago

ELKS can still run on systems without an mmu (though not microcontrollers afaik).

762236|6 months ago

Removing nommu makes the kernel simpler and easier to understand.

ohdeargodno|6 months ago

Nothing prevents you from maintaining nommu as a fork. The reality of things is that, despite your feelings, people have to work on the kernel daily, and there comes a point where your tinkering needs don't need to be supported in main. You can keep using old versions of the kernel, too.

Linux remains open source and extensible, and someone would most likely maintain these ripped-out modules. Just not at the expense of the singular maintainer of the subsystem inside the kernel.

jnwatson|6 months ago

It is amazing that big endian is almost dead.

It will be relegated to the computing dustbin like non-8-bit bytes and EBCDIC.

Mainstream computing is vastly more homogeneous than when I was born almost 50 years ago. I guess that's a natural progression for technology.

goku12|6 months ago

> It is amazing that big endian is almost dead.

I wish the same applied to written numbers in LTR scripts. Arithmetic operations would be a lot easier to do that way on paper or even mentally. I also wish that the world would settle on a sane date-time format like the ISO 8601 or RFC 3339 (both of which would reverse if my first wish is also granted).
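As an aside, one practical argument for ISO 8601 / RFC 3339 is that putting the most significant field first means plain string sorting equals chronological sorting. A quick sketch:

```python
from datetime import datetime, timezone

# ISO 8601 runs from most significant field (year) down to least
# significant (second), so lexicographic order matches time order.
stamps = [
    datetime(2025, 1, 2, tzinfo=timezone.utc),
    datetime(2024, 12, 31, tzinfo=timezone.utc),
    datetime(2025, 1, 1, tzinfo=timezone.utc),
]
iso = [d.isoformat() for d in stamps]

# Sorting the strings gives the same order as sorting the datetimes.
assert sorted(iso) == [d.isoformat() for d in sorted(stamps)]
```

This is the same "big-endian" field ordering the parent comment is talking about, applied to dates.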

> It will be relegated to the computing dustbin like non-8-bit bytes and EBCDIC.

I never really understood those non-8-bit bytes, especially the 7-bit byte. If you consider the multiplexer and demux/decoder circuits that are used heavily in CPUs, FPGAs and custom digital circuits, the only number that really makes sense is 8: it's what you get for a 3-bit selector code, with the nearby values being 4 and 16. Why did they go for 7 bits instead of 8? I assume it was a design choice made long before I was even born. Does anybody know the rationale?
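The mux/decoder point can be made concrete: an n-bit select code addresses exactly 2**n inputs, so 8 is the smallest power of two that covers 7 and wastes no select lines. A toy model (the function and names here are purely illustrative):

```python
def mux(select_bits, inputs):
    """Model a hardware multiplexer: an n-bit selector picks one of 2**n inputs."""
    n = len(select_bits)
    # The selector's range must exactly cover the inputs, or lines are wasted.
    assert len(inputs) == 2 ** n
    index = int("".join(str(b) for b in select_bits), 2)
    return inputs[index]

# A 3-bit selector addresses exactly 8 lanes -- nothing wasted.
lanes = list("abcdefgh")
assert mux([1, 0, 1], lanes) == "f"   # binary 101 -> index 5
# A 7-input mux would still need 3 select bits, leaving one code unused.
```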

ndiddy|6 months ago

Big endian will stay around as long as IBM continues to put in the resources to provide first-class Linux support on s390x. Of course if you don’t expect your software to ever be run on s390x you can just assume little-endian, but that’s already been the case for the vast majority of software developers ever since Apple stopped supporting PowerPC.

Aardwolf|6 months ago

Now just UTF-16 and non-'\n' newline types left to go

dgshsg|6 months ago

We'll have to deal with it forever in network protocols. Thankfully that's rather walled off from most software.
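And the wall is thin by design: in most languages, network byte order is a single explicit flag in the serialization API. A sketch using Python's struct module:

```python
import struct

# Network protocols (IP, TCP, ...) define multi-byte fields as big-endian
# ("network byte order"), regardless of the host CPU's endianness.
port = 8080

big     = struct.pack(">H", port)   # ">" = big-endian, "H" = 16-bit unsigned
little  = struct.pack("<H", port)   # "<" = little-endian
network = struct.pack("!H", port)   # "!" = network byte order (always big-endian)

assert network == big == b"\x1f\x90"   # 8080 == 0x1F90
assert little == b"\x90\x1f"           # same bytes, reversed
assert struct.unpack("!H", network)[0] == port
```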

delduca|6 months ago

Good call out, I have just removed some #ifdef about endianness from my engine.

shmerl|6 months ago

On the userland side, there is some good progress in using thunking to run 32-bit Windows programs in Wine on Linux without the need for 32-bit libraries (the only edge case remaining is thunking 32-bit OpenGL, which lacks the extensions needed for acceptable performance). But the same can't be said for a bunch of legacy 32-bit native Linux stuff like games, which commonly have no source to rebuild them.

Maybe someone can develop such thunking for legacy Linux userland.

eric__cartman|6 months ago

How many of those legacy applications where the source is not available actually need to run natively on a modern kernel?

The only thing I can think of is games, and the Windows binary most likely works better under Wine anyways.

There are many embedded systems like CNC controllers, advertisement displays, etc... that run those old applications, but I seriously doubt anyone would be willing to update the software in those things.

cwzwarich|6 months ago

It shouldn’t be difficult to write a binary translator to run 32-bit executables on a 64-bit userspace. You will take a small performance hit (on top of the performance hit of using the 32-bit architecture to begin with), but that should be fine for anything old enough to not be recompiled.

5-|6 months ago

most of those games would have windows builds?

that said, i sometimes think about a clean-room reimplementation of e.g. the unity3d runtime -- there are so many games that don't even use native code logic (which still could be supported with binary translation via e.g. unicorn) and are really just mono bytecode but still can't be run on platforms for which their authors didn't think to build them (or which were not supported by the unity runtime at the time of the game's release).

xeonmc|6 months ago

Perhaps a new compatibility layer, call it LIME -- LIME Is My Emulator

dontlaugh|6 months ago

In practice, the path for legacy software on Linux is Wine.

hinkley|6 months ago

Win32S but the other way around.

Win64S?

greatgib|6 months ago

It's the end of an era. Linux used to be this thing that ran on just about anything, letting you salvage old computers.

I think there is a shitload of old desktop and laptop computers from 10 to 15 years ago that are still usable only with a Linux distribution, and that will not be true anymore.

Now Linux will be in the same lane as macOS and Windows, chasing the latest shiny new thing and being like: if you want it, buy a new machine that will support it.

arp242|6 months ago

You can still run an older kernel. There are the "Super-long-term support" releases that have 10+ year support cycles. Some distros may go even further.

If you install 6.12 today (via e.g. Debian 13) then you'll be good until at least 2035. So removing it now de-facto means it will be removed in >10 years.

And as the article explains, this mostly concerns pretty old systems. Are people running the latest kernel on those? Most of the time probably not. This is really not "running after the last shiny thing". That's just nonsensical extreme black/white thinking.

Dylan16807|6 months ago

Desktops and laptops from 10 to 15 years ago are basically all 64 bit. By the time this removal happens, we'll be at 20 years of almost all that hardware being 64 bit. By the time hardware becomes "retro", you don't need the latest kernel version.

Lots of distros already dropped 32 bit kernel support and it didn't cause much fuss.

creshal|6 months ago

> I think that there is a shitload of old desktop and laptop computers from 10 to 15 yrs that are still usable only with a linux distribution and that will not be true anymore.

For mainstream laptops/desktops, the 32 bit era ended around 2006 (2003, if you were smart and using Athlon 64s instead of rancid Pentium 4).

Netbooks and other really weak devices held out a few years longer, but by 2010, almost everything new on the market, and a good chunk of the second-hand market, was already 64 bits.

markjenkinswpg|6 months ago

In my experience, the 10-15 year old salvaged computer that still works okay with GNU/Linux is increasingly a 64 bit machine.

Case in point, I'm writing on a x86_64 laptop that was a free give away to me about a year ago with a CPU release year that is 2012.

I have personally given away a x86_64 desktop unit years ago that was even older, might have had DDR1 memory.

Circa 2013 my old company was gifted a x86_64 motherboard with DDR2 memory that ended up serving as our in-office server for many years. We maxed the RAM (8GB) and at some point bought a CPU upgrade on ebay that gave us hardware virtualization extensions.

octoberfranklin|6 months ago

The Apple Watch has 32-bit memory addressing (and 64-bit integer arithmetic -- it's ILP32). Granted it doesn't run Linux, but it's a very very modern piece of hardware, in production, and very profitable.

Same for WASM -- 32-bit pointers, 64-bit integers.

Both of these platforms have a 32-bit address space -- both for physical addresses and virtual addresses.

Ripping out support for 32-bit pointers seems like a bad idea.

mrpippy|6 months ago

With watchOS 26, S9/10 watches will be moving to normal LP64 ARM64.

RAM limitations were one reason to use arm64_32, but a bigger reason is that the first watches were ARMv7 (32-bit) only; by sticking with 32-bit pointers, Apple was able to statically recompile all the third-party (ARMv7) apps from LLVM bitcode to arm64_32.

https://www.macrumors.com/2025/06/16/watchos-26-moves-apple-...

int_19h|6 months ago

64-bit memories are already in wasm 3.0 draft (and in any case this isn't a platform where you'd need the Linux kernel running).

SAI_Peregrinus|6 months ago

WASM isn't being used to run the Linux kernel, it's run by an application on top of an OS. That OS can be 64-bit, the WASM VMs don't care.

jacquesm|6 months ago

Funny, I remember 32 bits being 'the future'; now it is the distant past. I think they should keep it all around and keep it buildable. Though I totally understand the pressure to get rid of it, I think having at least one one-size-fits-all OS is a very useful thing. You never know what the future will bring.

justin66|6 months ago

There's always NetBSD. I'm pretty sure that's supporting x86 as far back as the 80486, and 32-bit SPARC as far back as... something I wouldn't want to contemplate.

nektro|6 months ago

important to remember that this fate isn't going to happen again with 64-bit

petcat|6 months ago

Just because support would be removed from current and new versions doesn't mean the old code and tarballs are just going to disappear. You can dust off an old 32-bit kernel whenever you want.

SlowTao|6 months ago

There's always the option to fork it. Linux Legacy? Linux 32? Linux grey beard!

smitty1e|6 months ago

Technologies have lifecycles. Film at 11.

Mathnerd314|6 months ago

Linux has become the dominant operating system for a wide range of devices, even though other options like FreeRTOS or the BSD family seem more specialized. The widespread adoption of Linux suggests that a single, versatile operating system may be more practical than several niche ones. However, the decision to drop support for certain hardware because it complicates maintenance, as seen here, would seem to contradict the benefit of a unified system. I wouldn't be surprised if it really just results in more Linux forks - Android is already at the point of not quite following mainline.

charcircuit|6 months ago

>Android is already at the point of not quite following mainline.

It follows the latest LTS, which I think is reasonable, especially since phone vendors want to support the device for several years.

shasheene|6 months ago

I think this is premature and a big mistake for Linux.

Distros and the kernel steadily dropping older x86 support over the last few years never causes an outcry, but it's an erosion of what made Linux great, especially for non-English-speaking people in less developed countries.

Open-source maintenance is not an obligation, but it's sad there aren't more people pushing to maintain support, especially for the "universal operating system" Debian, which was previously a gold standard in architecture support.

I maintain a relatively popular live Linux distro based on Ubuntu, and due to user demand I will look into a NetBSD variant to continue support (as suggested in this thread), potentially supporting legacy 586 and 686 too.

Though a Debian 13 "Trixie" variant with a custom-compiled 686 kernel would be much easier than switching to NetBSD, NetBSD appears to have more commitment to longer-term arch support.

It would be wonderful to develop systems (eg emulation) to make it practical to support architectures as close to indefinitely as possible.

It does feel like a big end-of-an-era moment for Linux and distros, with the project following the kind of decision-making of big tech companies rather than the ideals of computer enthusiasts.

Right now these deprecation decisions will directly make me spend time working at layers of abstraction I wasn't intending to in order to mitigate the upstream deprecations of the kernels and distros. The reason I have used the kernel and distros like Debian has been to offload that work to the specialist maintainers of the open-source community.

meisel|6 months ago

It seems like it takes just one user of a certain piece of hardware to justify its being supported in the kernel, which is surprising to me. Is the cost to kernel dev velocity not weighed more heavily against that?

bjackman|6 months ago

In general I don't think the marginal benefit of removing support for a certain piece of hardware would be significant in most cases. Most of the kernel is generic across a broad enough spectrum that removing one slice of that spectrum isn't likely to change its breadth.

E.g. there's some stuff like erratum workarounds for old x86 CPUs that would be nice to drop, but they are part of a big framework for handling x86 diversity. Dropping individual workarounds doesn't let you drop the framework.

Exceptions are gonna be cases where dropping the support removes something significant from the lowest common denominator: big stuff like word size, memory ordering (I assume dropping Alpha would be quite handy), or virtual address space limitations.

SAI_Peregrinus|6 months ago

One known user. Linux doesn't have automatic telemetry in every distro (or even most distros), so the kernel devs don't really know how many people use it. If they know of one user, there are probably more who just haven't come to their attention on the mailing lists.

hamandcheese|6 months ago

Sometimes, supporting special use cases like this can be a valuable exercise since it shows you all the places in your project that you made assumptions without even realizing it. It seems plausible to me that supporting niche users improves quality of the project as a whole.

EVa5I7bHFq9mnYK|6 months ago

Aren't 32-bit systems more power-efficient? It costs less energy to switch 32 transistors than 64.

em3rgent0rdr|6 months ago

Not just more power-efficient, but also a little more memory-efficient, because pointers are only half as big and so don't take up as much space in the cache. Lower-bit chips are also smaller (which could translate into a faster clock and/or more functional units per superscalar core and/or more cores per die).

Part of the problem with these discussions is that when people say "64-bit" vs "32-bit" they are often also counting all the useful new instructions that were added in the new instruction-set generation. A true "apples-to-apples" comparison between "32-bit" and "64-bit" should compare almost identical designs whose only difference is the datapath and pointer size.

I feel that the programs and games I run shouldn't really need more than 4GB of memory anyway, and the occasional instance where the extra precision of 64-bit math is useful could be handled by emulating it, with the compiler adding a couple of extra 32-bit instructions.
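That emulation is mechanical. A sketch (in Python, modeling 32-bit registers with masks) of how a compiler lowers a 64-bit add to two 32-bit adds plus a carry:

```python
MASK32 = 0xFFFFFFFF

def add64_via_32(a, b):
    """Add two 64-bit values using only 32-bit-wide additions plus a carry,
    the way a compiler lowers 64-bit arithmetic on a 32-bit target."""
    lo = (a & MASK32) + (b & MASK32)
    carry = lo >> 32                                  # did the low halves overflow?
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32     # high halves, plus carry-in
    return (hi << 32) | (lo & MASK32)

# Low-word overflow case: the carry must propagate into the high word.
x, y = 0xFFFFFFFF, 1
assert add64_via_32(x, y) == x + y == 0x100000000
# The result wraps modulo 2**64, like real 64-bit registers.
assert add64_via_32(2**63, 2**63) == 0
```

Real ISAs expose this directly as add/add-with-carry instruction pairs (e.g. ADDS/ADC on ARM), so the cost is usually just one extra instruction per 64-bit add.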

kimixa|6 months ago

On anything but the smallest implementations, the 32- vs 64-bit ALU cost difference is pretty tiny compared to everything else going on in the core to get performance. And that assumes the core doesn't support 32-bit ops (leaving the rest of the ALU idle) or do something like double pumping.

Really, the ALU width is an internal implementation detail/optimisation; you can tune it to the size you want at the cost of more cycles to complete the full width.

ainiriand|6 months ago

What makes you think that a 32-bit system has 32 transistors? For example, off the top of my head, the Pentium Pro had a 36-bit address bus and a 64-bit data bus.

bobmcnamara|6 months ago

Sometimes you gotta run real fast and go right to bed to save power.

creshal|6 months ago

Memory buses are negligible, compared to everything else going on. Especially in a SoC that has not just a CPU, but 20 other devices as well.

chasil|6 months ago

Solaris uses 32-bit binaries in /bin and /usr/bin for most of POSIX.2, even though it requires the x86-64 architecture.

I saw this last in SmartOS.

smallpipe|6 months ago

Wait until you hear about 8 bit systems

theshrike79|6 months ago

Compared to 64 bit? Maybe.

Compared to ARM-based systems? Nope.

ry6000|6 months ago

I can’t help but wonder if kernel devs realize how much this discussion sounds like something you’d expect from Apple. They are talking about obsoleting hardware not because it’s fundamentally broken, but because it no longer fits neatly into a roadmap. Open source has always been about making hardware outlive commercial interest, letting it run long after the hardware vendor abandons it.

I'm pretty shocked to see comments like "the RAM for a 32-bit system costs more than the CPU itself", but open source isn’t supposed to be about market pricing or what’s convenient for vendors; it’s about giving users the freedom to decide what’s worth running.

I understand that maintainers don’t want to drag around unmaintained code forever, and that testing on rare hardware is difficult. But if the code already exists and is working, is it really that costly to just not break it? The kernel's history is full of examples where obscure architectures and configs were kept alive for decades with minimal intervention. Removing them feels like a philosophical shift, especially when modern hardware is more locked down and has a variety of black box systems running behind it like Intel ME and AMD PSP.

kergonath|6 months ago

> They are talking about obsoleting hardware not because it’s fundamentally broken, but because it no longer fits neatly into a roadmap.

Not really. The discussion is about cost, benefits and available resources. Projects are not immune because they are open source or free software. Actual people still need to do the work.

> Open source has always been about making hardware outlive commercial interest and let it run long after the hardware vendor abandons it.

Again, not really. Open source has always been about freely modifying and distributing software. This leaves some freedom for anyone to keep supporting their pet hardware, but that's a consequence. In this case, I don't think it would be a real problem if anyone stepped up and committed the resources necessary to keep supporting older hardware. No freedom was taken away because a project's developers decided that something was not worth their time anymore.

jcranmer|6 months ago

> But if the code already exists and is working, is it really that costly to just not break it?

It depends on the feature, but in many cases the answer is in fact 'yes.' There's a reason why Alpha support (defunct for decades) still goes on but Itanium support (defunct for years) has thoroughly been ripped out of systems.

kstrauser|6 months ago

What's the Venn diagram of people stuck with 32-bit hardware and people needing features of newer kernels? Existing kernels will keep working. New devices probably wouldn't support that ancient hardware; seen any new AGP graphics cards lately?

There's not a compelling reason to run a bleeding edge kernel on a 2004 computer, and definitely not one worth justifying making the kernel devs support that setup.

margalabargala|6 months ago

> open source isn’t supposed to be about market pricing or what’s convenient for vendors; it’s about giving users the freedom to decide what’s worth running.

Ehhh, it's about users having the ability to run whatever they like. Which they do.

If a group of users of 32 bit hardware care to volunteer to support the latest kernel features, then there's no problem.

If no one does, then why should a volunteer care enough to do it for them? It's not like the old kernel versions will stop working. Forcing volunteers to work on something they don't want to do is just a bad way to manage volunteers.

johnklos|6 months ago

So Arnd Bergmann thinks that all future systems, embedded included, will have 64 bit CPUs? Or will embedded just stop using Linux and move to the BSDs?

SAI_Peregrinus|6 months ago

Embedded has already split: You've got 8-bit, 16-bit, and some in-between MCUs that never ran Linux in the first place. You've got 32-bit MCUs that never ran Linux in the first place. You've got FPGAs that never really even run software. And you've got "application processors" like the ARM Cortex-A series that get used for "embedded Linux" in some cases. ARM Cortex-A series won't release any more 32-bit ISAs, so that mostly just leaves RISC-V as a potentially-relevant 32-bit ISA that might get new CPU designs. That's a small niche within an already small niche in embedded. Most embedded systems aren't using Linux, they're using an RTOS or bare-metal code.

jtolmar|6 months ago

From the article:

> The kernel is still adding support for some 32-bit boards, he said, but at least ten new 64-bit boards gain support for each 32-bit one.

And

> To summarize, he said, the kernel will have to retain support for armv7 systems for at least another ten years. Boards are still being produced with these CPUs, so even ten years may be optimistic for removal. Everything else, he said, will probably fade away sooner than that.

So, no, he does not think that at all.

natas|6 months ago

the netbsd team agrees! more users for us.

wibbily|6 months ago

> One other possibility is to drop high memory, but allow the extra physical memory to be used as a zram swap device. That would not be as efficient as accessing the memory directly, but it is relatively simple and would make it possible to drop the complexity of high memory.

Wild, like some kind of virtual cache. Reminds me a bit of the old Macintosh 68k accelerators; sometimes they included their own (faster) memory and you could use the existing sticks as a RAM disk.

chasil|6 months ago

Unfortunately, I am still using a 32-bit kernel with high memory. It was called "PAE": physical address extensions.

  $ cat /proc/version
  Linux version 2.6.18-419.0.0.0.2.el5PAE ... (gcc version 4.1.2 20080704 (Red Hat 4.1.2-55)) #1 SMP Wed Jun 28 20:25:21 PDT 2017

sylware|6 months ago

I have 32-bit support on my x86_64 gaming rig _ONLY_ for the Steam client.

The Steam client is still a 32-bit ELF executable, which statically loads OpenGL and X11 libs... (not even a Wayland->X11 fallback or an OpenGL->CPU-rendering fallback).

We would all be better off with a nogfx static PIE executable, or even a nogfx dynamic PIE executable if they want to explore the ELF setup of a distro.

datenwolf|6 months ago

> There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost. Please, he asked, do not run 32-bit kernels on 64-bit processors.

Ohhh yes!

So, a couple of weeks ago I came across a discussion where some distro (I don't remember which one) contemplated removing 32-bit userspace support, suggesting that users simply run a VM with a 32-bit Linux instead. It was a stupid suggestion then, and this statement is a nice authoritative answer from the kernel side that such suggestions can be pointed at.
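If you run a mixed 32/64-bit userspace like this, the EI_CLASS byte of each ELF header tells you which binaries are which. A minimal sketch (the headers here are synthetic, and this is not a full ELF parser):

```python
# Byte 4 of an ELF file's identification block (EI_CLASS) records whether
# the binary is 32-bit (1) or 64-bit (2).
def elf_class(header: bytes) -> str:
    assert header[:4] == b"\x7fELF", "not an ELF file"
    return {1: "32-bit", 2: "64-bit"}[header[4]]

# Synthetic 16-byte e_ident blocks; only the first 5 bytes matter here.
elf32 = b"\x7fELF\x01" + b"\x00" * 11
elf64 = b"\x7fELF\x02" + b"\x00" * 11
assert elf_class(elf32) == "32-bit"
assert elf_class(elf64) == "64-bit"

# On a real system you would read the header from disk, e.g.:
#   elf_class(open("/bin/ls", "rb").read(16))
```

This is essentially what `file` does when it reports "ELF 32-bit LSB executable".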

DrillShopper|6 months ago

Probably SUSE. We use SLES 15 at work, and their bizarre decisions in SLES 16 to remove X servers (except XWayland), remove 32-bit libraries, and completely remove their AutoYaST unattended-install tool in favor of a tool that is maybe 25% compatible with existing AutoYaST files still baffle me. We spent months moving to SLES 15 from a RHEL derivative a few years ago, and now, with changes this big, we basically have to do it again for SLES 16. We have some rather strong integrations with the Xorg servers, and Wayland won't cut it for us currently, so we're stuck unless we want to rearchitect 20 years of display logic onto some paper spec that isn't evenly implemented and, where it is, is buggy as shit.

I've been pushing hard for us to move off SLES as a result, and I do not recommend it to anyone who wants a stable distribution that doesn't fuck over its users for stupid reasons.

webdevver|6 months ago

i do miss being able to read and memorize hex addresses. 64 bits is a little too long to easily 'see' at a glance. or see at all for that matter.

ezoe|6 months ago

It's interesting that the only objection to removing big endian is from IBM and their mainframe and PowerPC lines. Also, big endian is restricted to 32-bit in the Linux kernel.

wltr|6 months ago

I have some 32-bit systems (arm and x86), and it looks like I’m going to use them till the hardware breaks. The old x86 system is power hungry and inefficient, but the thing is, I power it on very occasionally. Like for half a day once a month. So its power consumption isn’t an issue. Probably I should consider some BSD for it. But what should I do with an arm system, if that’s applicable, I have no idea.

stephen_g|6 months ago

This seems pretty uninformed on the embedded side. The speaker is, I'm sure, very qualified generally, but it sounds like mostly on the server/desktop side of things.

Like, on Armv7-M it's said that "Nobody is building anything with this kind of hardware now"; this is just wrong to the point of ridiculousness. Thousands of new products will be designed using these microcontrollers, and billions of units will still be produced containing them. True, almost none of those will run Linux on those MCUs, but it's crazy to say "nobody" is building things with them. Many are of course moving to Armv8-M microcontrollers, but those are 32-bit too!

On the Linux side, there are things like the AMD/Xilinx Zynq-7000 series that will be supported for many years to come.

It's not the worst idea in the world to deprecate support for 32-bit x86 but it is not time to remove it for ARM for many years yet.

Dylan16807|6 months ago

1. Until proven otherwise, let's assume that the speaker at the Linux conference was probably talking about Linux and saying something not ridiculous.

2. That sentence wasn't about 32-bit, it was about devices without MMUs.

shevis|6 months ago

I’m surprised no one is talking about the 2038 problem.

unregistereddev|6 months ago

That is not specific to 32-bit system architectures. The 2038 problem is specific to timestamps represented as seconds since the Unix epoch and stored in a signed 32-bit integer. It's quite possible (and common) to use 64-bit integers on a 32-bit system architecture.

I am also surprised how little attention the 2038 problem gets. However, I also wonder how big a problem it is: we've known about it for years, and none of the software I've touched is susceptible to it.
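The arithmetic behind the rollover is easy to demonstrate (Python's datetime is used here only to render the timestamps):

```python
from datetime import datetime, timedelta, timezone

# The 2038 problem: time_t stored as a signed 32-bit count of seconds
# since the Unix epoch overflows at 2**31 - 1.
INT32_MAX = 2**31 - 1
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

rollover = epoch + timedelta(seconds=INT32_MAX)
assert rollover == datetime(2038, 1, 19, 3, 14, 7, tzinfo=timezone.utc)

# One second later, a 32-bit time_t wraps to the most negative value,
# which naive code renders as a date in 1901.
wrapped_seconds = (INT32_MAX + 1) - 2**32          # two's-complement wrap
assert wrapped_seconds == -2**31
assert (epoch + timedelta(seconds=wrapped_seconds)).year == 1901
```

The fix the kernel adopted is simply widening time_t to 64 bits, which is why 64-bit integer support on 32-bit architectures matters here.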