top | item 39733516

txutxu | 1 year ago

Booting a modern laptop, something I do every day, is the antithesis of the low-tech paradigm.

My motherboard BIOS alone is 50,348,032 bits. And it doesn't expose many options; I'd say it's the other way around: it hides many options on purpose.
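For scale, here's a quick sketch converting that bit count into more familiar units (the 50,348,032 figure is the one quoted above; actual flash chip sizes vary by board):

```python
# Convert the quoted firmware size from bits to bytes and MiB.
bios_bits = 50_348_032
bios_bytes = bios_bits // 8       # 6,293,504 bytes
bios_mib = bios_bytes / 2**20     # mebibytes

print(f"{bios_bytes} bytes = {bios_mib:.1f} MiB")
# prints "6293504 bytes = 6.0 MiB"
```

Roughly a 6 MiB firmware image, which is larger than many complete operating systems of the pre-UEFI era.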

Then there is an i7 processor... a whole beast in itself, the enemy of simplicity. With its Intel Management Engine, its microcode updates, etc.

Secure boot? UEFI? VT extensions? TPM? NFC? graphics initialization?

OK, so far we have enough code and material to fill an engineer's entire career, and we still haven't reached the OS bootloader.

Ah, the bootloader. Who remembers LILO... now we get GRUB. Go read its source code and come back to explain everything that's in there... see you in three months, for this part alone.

Here it comes: the kernel. A thing normal users never see or touch. More than 30 million additional lines of code. I won't dwell on complexity here, but this project can truly claim to be "batteries included". You get everything from old, obscure filesystems and protocols nobody uses to process/memory/I-O/network schedulers for supercomputers. Blobs, firmware, more graphics stuff, observability, wifi, storage: a whole world in itself, and everything comes up in microseconds.
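A rough way to check a figure like "30 million lines" yourself is to walk a source tree and count raw lines. This is a minimal sketch, not a proper SLOC tool like cloc; the example path and the extension list are assumptions:

```python
import os

def count_lines(root, exts=(".c", ".h", ".S")):
    """Rough count: total raw lines in files matching exts under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="replace") as f:
                        total += sum(1 for _ in f)
                except OSError:
                    pass  # unreadable file; skip it
    return total

# e.g. count_lines("/usr/src/linux") on a checked-out kernel tree
# should land in the tens of millions, consistent with the claim above.
```

Raw line counts include comments and blank lines, so they overshoot "real" code, but the order of magnitude is the point.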

Then there is the initramfs thingy, which is another "mini" (not mini in lines of code) operating system. BusyBox? xD, look at that.

Now here it comes... the low-tech king: systemd! Our love-hated init system (only an init system? Well, you already know).

Now we can finally start the operating system (which may not be too simple either); we could spend years talking about each of the micro-components that help launch the base OS services before you even get to the login screen.

Depending on the distro... we're skipping a hell of a lot of complexity (the ps aux output on an Ubuntu desktop looks nothing like the ps aux output on a minimal system). Let's skip it; let's skip hundreds of software components and the phone-home stuff.

The auth part alone is good for a few more years of reading code and talking about it, even if we skip the plugins and optional stuff.

Then you can get a window manager or a desktop environment (another week, or a few months, of code), running over Xorg (do you think that's simple?) or Wayland (you can devote the rest of your life to mastering this part).

And now... let's launch a "web browser". I'll stop here; we'd never finish if we went deep into browser complexity.

Modern hardware, software, and engineering are a giant snowball. The more it advances, the bigger and out of control it gets.

roughly|1 year ago

> The more it advances, the bigger and out of control it gets.

This is largely due to path dependency - there’s a required amount of “bigness” and complexity to do the things we want to do, but it’s substantially lower than the amount we have, because we’re not starting from zero, we’re building on what we already have. You can see this everywhere - telecom lines follow old train lines, keyboard layouts mirror old typewriter layouts, desktop file system layouts mirror old mainframe layouts.

It mirrors evolution in that way - the path taken is the cheapest path from the current location, not the ideal path, which is why a giraffe has the same number of bones in its neck as you do.

I’ve actually been interested recently in what it would look like to have a truly modern software & hardware stack built from the ground up for modern computing - I feel like there were attempts at this in the 90s (BeOS comes to mind), but even something like ChromeOS was basically Linux under the hood. It feels like the industry’s decided what we have is Good Enough, and that’s a bit of a shame, because you’re right, it’s really quite a ball of spaghetti.

BobbyTables2|1 year ago

A minimal Alpine Linux install is what — 10MB?

A minimal Debian/AlmaLinux install is closer to 1G.

What did we gain with 100X the space, larger attack surface, and configuration complexity?
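As a sanity check on that "100X" figure (the 10 MB and 1 GB numbers are the rough ones quoted above, not measurements):

```python
# Back-of-the-envelope ratio using the sizes quoted above.
alpine_mb = 10          # minimal Alpine Linux install, ~10 MB
debian_mb = 1 * 1024    # minimal Debian/AlmaLinux install, ~1 GB

ratio = debian_mb / alpine_mb
print(f"roughly {ratio:.0f}X the disk footprint")
# prints "roughly 102X the disk footprint"
```

So the "100X" claim holds up as an order-of-magnitude estimate.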

Even hardware is this way…

Look at the weight and power requirements of a SFF PC.

Then look at a blade server… It still requires a heavy steel case, heavier than an SFF PC because it's larger, and its only job is to fit into a heavy steel enclosure. The enclosure, in turn, requires a beefy steel rack to handle the weight.

Every layer makes logical sense but the end result does not!!

Older NAS servers held multiple 3.5” drives (heavy), requiring a big power supply. Now a plastic clip holds NVMe drives onto a motherboard — no steel!

npteljes|1 year ago

The system is the result of the structure and power distribution of the entities that built it. Another compounding factor is that I don't think any of the creators had "simplicity" at the forefront when building their respective parts of the system. So yes, the end result is not simple.

As for reading the code or anything like that, I don't think I would have managed even a C64, and that's, I think, way simpler than the modern computer you described. So I've been lost for a long time.

Life is just complex.

overtomanu|1 year ago

Yes, life is literally complex!!

Even our DNA has a lot of "unused"/"dormant" code.

lambdaba|1 year ago

> The more it advances, the bigger and out of control it gets.

Yet my present computer can do far more than the one I had, say 30 years ago, when things were "simpler".

blueflow|1 year ago

I keep an old Thinkpad around running Windows XP. It has some 512 MHz CPU, so not really that fast. But: it boots faster, the UI feels less sluggish, you can tell buttons apart from inert text, and launching Age of Empires I and restarting my savegame takes like 5 seconds.

I keep this laptop as living proof that computers did indeed get shittier.

adrian_b|1 year ago

While that is true, most of the current complexity has nothing to do with enabling the present computers to do far more.

Most of the complexity is caused either by the need to provide backward compatibility or by the fact that the many parties who design the components of a computer had very different ideas about which is the right way to design them (so many compatibility layers are required) or by the fact that the manufacturers insist on implementing various additional features that are not really needed, because they may be useful for them even if they are harmful for the final owner of the computer.

Many of the most horrible features of modern computers exist because Microsoft could not be bothered to implement in their operating systems certain features whose rightful place was inside the OS, and because Intel has kept piling workaround upon workaround in their CPUs, each uglier than the last, like System Management Mode and the Management Engine.

AndrewKemendo|1 year ago

Conway’s law does not define a maximum size of a system.

It simply defines that the systems as measured reflect the structure that created them.

If you look at the software ecosystem as a whole, it is increasingly indivisible from the underlying structure, because interface types have been totally monopolized: you need to create a client-server REST/LAMP service with stateless agents consuming services.

That is to say, if you want to build a technical service that does not comply with existing trends in engineering, then you just don't exist.

Technology is social; it's not simply mechanical.

Socially, we don’t have holism as a goal. Cybernetics is socialism according to academia and increasing specialization means that nobody can fully understand the whole thing.

Because nobody can understand the whole thing there are opportunities for fragility, and basically stuff to break catastrophically with nobody knowing how to fix it.

I anticipate that the next couple of decades will look like a lot of broken stuff that people rely on, that increasingly nobody knows how to fix.

galdosdi|1 year ago

Maybe if you run the Arachne web browser for MS-DOS on an older non-UEFI Intel machine, you might be approaching something where one person could still understand the whole sequence.

logtempo|1 year ago

Computers are complex indeed. That said, I don't think low-tech philosophy is about browsing HN with a Macintosh II. It does have this "get things done simply" branch, but also the ecological side, the reflection on our relationship with technology, its social component, and ultimately the sovereignty we have over technology and its sustainability.

p_l|1 year ago

Honestly, I think a lot of the parts that people complain about aren't actually as complex as they seem - but they are the surface visible parts. And lack of knowledge and nostalgia glasses sometimes make people unaware of how complex things used to be.

An underappreciated thing about boot firmware these days is that what is presented to you as an end user is absolutely not connected to what options are actually possible. There are also tons of complex and quirky code just to get the CPU and memory to the point of running other code. IIRC there are bits of code spanning early CPU reset and Intel IME that are necessary to prevent the CPU from destroying itself, at least on some models. Similarly with AMD PSP.

Some of those options will be disabled because the hardware physically doesn't support them, or because they can result in weird behaviour you aren't going to like, or can effectively brick things.

Another part is that it allows real, proper modularity of an open platform, at least on the vendor side. Add a new device that requires a special driver? No longer do you have to spend a long time just integrating a blob; there's a standard API/ABI whether the driver is closed or open source. ACPI provides tons of ways to just specify the details of where something is and how to connect it. I love a lot about OpenPOWER, but Petitboot is effectively less open, if only because there's no way to use an add-in card requiring drivers that weren't already compiled into flash. With UEFI, it works.

UEFI is honestly in many ways less complex than previous systems (no more hooking into the tape drive boot sequence), especially since much less needs to live in the SMM block beyond what the hardware requires (an example: some CPUs required SMM code for changing certain power levels, because the OS-level API doesn't expose low-level details like enforcing a synchronization point on all CPUs or low-level internal registers).

If you use UEFI, you also don't need to implement the whole complex (and IBM PC-incompatible) craziness that is GRUB (or NTLDR with its ARC firmware emulation, or non-UEFI WINLDR, which emulates a chunk of UEFI...) - n.b. the last Linux bootloader that properly handled IBM PC compatibility was LILO, if installed the appropriate way (read: not how distros did it).

TPM and Secure Boot are reasonably easy parts of the whole thing. NFC and smartcards are also things that are reasonable for a single engineer to grasp.

There's less visible complexity (though some people remember it) that is enforced on us by corporate interests. A big part of why AMD PSP is closed source is the same reason AMD was unable to open source some of its HDMI code recently: both AMD PSP and Intel IME are part of implementing the "Secure Media Path", aka DRM bullshit for the MPAA (HDCP and its DisplayPort equivalent). It's also why various DRM systems don't fully support fully open systems (for example, normal Linux distros, as opposed to ones built by the vendor of a device with special blobs).

Now, systemd: I'll agree it takes a good idea but delivers it with a horrible implementation, and that's coming from someone who gave in and tries to use it fully (mainly because I have no time to implement an alternative).

Wayland also in many ways makes life more complex than X11; OTOH, the XFree86 legacy of being a lowest-common-denominator implementation, even after the revert to X.Org, means that people felt stuck.