top | item 41234793

Antip0dean | 1 year ago

> ...we could have made something extremely minimal. Instead UEFI goes hard in the opposite direction...

My initial suspicion was that this was about preparing the ground for closed computing regardless of the surrounding hardware.

That this hasn't happened suggests it's just my imagination gone wild, it's a missed opportunity for (say) Microsoft, or the folks behind it had good intentions. Occam's Razor, I guess?

p_l | 1 year ago

TL;DR UEFI builds an open platform even if the actual code is closed, while "simple" alternatives make closed platforms with open source code.

Raw coreboot/u-boot-like approaches give you open source but a closed platform - the simplicity means you need considerable resources to do anything other than what the original maker intended.

UEFI (and before it, Microsoft's attempts at semi-standardizing PC low-level interfaces, the effort on ACPI, etc.) is an effort to provide an open platform regardless of the openness of the code or the availability of deep-dive docs for individual computer models, while handling the fact that computers are, in fact, complex.

If you want a general-purpose computer that explicitly targets the idea that its owner can just plug in CD/USB/netboot Windows/Linux/BSD installer media, without waiting for a new release just to have a bootable kernel on a new machine, there's a lot of inherent complexity. Especially if you want to be able to boot a version from before the release of the board you're using without significant loss of functionality (something that devicetree cannot do without special explicit work from the physical device vendor, but which ACPI handles through bytecode and _OSI checks for supported capability levels from the OS).

Especially if you also want to make it extensible and reduce the cost of integrating parts from different vendors (aka why UEFI with hardcoded CSM boot started taking over by 2005).

It's much easier to integrate a third-party driver - for example, for a network chip - when the driver uses well-defined interfaces instead of hooking into the "boot BASIC from ROM" interrupt, especially when the driver can then expose its configuration in a standard format that works whether you have a monitor and mouse connected or just a serial port. Petitboot is not the answer - it's way worse when you have to custom-rebuild the system firmware to add drivers (possibly removing other drivers to make space) because you want to netboot from a network card from a different vendor, or just because the hardware is still good but the NIC is newer. Much easier to just grab the driver from an OpROM or, worst case, drop it in a standardised firmware-accessible partition.
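The "well defined interfaces" idea can be sketched as a toy Python model (not real EDK2 code; real firmware uses GUID-keyed protocols via the EFI boot services, and the names here are hypothetical): a driver from any vendor publishes an interface under a known identifier, and consumers look it up instead of hooking fixed interrupt vectors.

```python
# Toy model of UEFI-style protocol registration and lookup (illustrative;
# the identifier and interface shape are invented for this sketch).
protocols = {}

def install_protocol(guid: str, interface) -> None:
    """A third-party driver publishes its interface under a known ID."""
    protocols[guid] = interface

def locate_protocol(guid: str):
    """A consumer (e.g. a boot manager) finds it without knowing the vendor."""
    return protocols.get(guid)

# A hypothetical network driver - from any vendor - exposes the same interface:
install_protocol("simple-network", {"transmit": lambda pkt: len(pkt)})

nic = locate_protocol("simple-network")
print(nic["transmit"](b"hello"))  # → 5
```

The point of the indirection is that swapping the NIC (or its vendor) changes nothing for the consumer, which is exactly what breaks when drivers are baked into a custom firmware build.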

Did I mention how much easier handling booting with UEFI is compared to the unholy mess of most other systems? Yes, even GRUB on x86, which by default doesn't write standards-compliant boot code, so if you dual-boot and use certain software packages you end up with a nonbootable system. Or how many Linux installers and guides make partitions that only boot because of bug-compatibility in many BIOSes. Not to mention messing with boot sectors vs. "if you drop a compatible filesystem with a file named this way, it will be used for booting".
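The "file named this way" rule is the removable-media fallback from the UEFI spec: a FAT-formatted EFI System Partition containing \EFI\BOOT\BOOT{ARCH}.EFI boots even with no boot entries configured. A small sketch of that naming convention (the architecture suffixes are the spec's):

```python
# UEFI removable-media fallback loader path: \EFI\BOOT\BOOT{ARCH}.EFI
# on a FAT-formatted EFI System Partition. Suffixes per the UEFI spec.
ARCH_SUFFIX = {
    "x86":     "IA32",
    "x86-64":  "X64",
    "arm32":   "ARM",
    "aarch64": "AA64",
    "riscv64": "RISCV64",
}

def fallback_path(arch: str) -> str:
    """Path (in EFI backslash notation) the firmware tries when no
    configured boot entry matches."""
    return rf"\EFI\BOOT\BOOT{ARCH_SUFFIX[arch]}.EFI"

print(fallback_path("x86-64"))  # → \EFI\BOOT\BOOTX64.EFI
```

This is why a plain USB stick with one FAT partition and one correctly named file boots on any compliant machine, with no boot-sector surgery at all.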

If I want to play around with booting a late-1960s design where you need to patch binaries whenever you change something in the hardware, I can boot a PDP-10 emulator instead. I push for using UEFI properly because I have work to do and goals to achieve other than tinkering with booting, no matter how much I like tinkering in general.

hulitu | 1 year ago

> Did I mention how much easier handling booting with UEFI is compared to unholy mess of most other systems?

Yeah. Like Linux entries getting ignored, no easy way to debug what went wrong (if an EFI executable fails, you're on your own), and a shell that is undocumented. With BIOS I didn't spend hours trying to boot a Linux kernel.