I knew about the differences among UNIX-like OSs in how they use signals (and the System V vs. BSD battles, among others), but I didn't know Windows lacked a similar system (I haven't done much low-level work on Windows). Thanks for the long comment!
fair_enough|4 months ago
However, Windows NT was also written after machines capable of running Unix cost less than a BMW, so the good folks in Redmond in the early 90s took some liberties and improved on some fundamental design flaws of UNIX.
1. "Everything is a file" is very flexible for writing server applications, where the user is expected to know and trust every program, but exposing devices as files is potentially harmful to non-technical users. Nowadays with UEFI, you can even brick a motherboard from userspace by clobbering firmware variables exposed as files (under /sys/firmware/efi/efivars), no hardware access needed. The kernel was later patched to mark critical variables immutable, but there are old servers running old Linux versions in the wild that can still be permanently bricked this way.
2. Arguably, exposing such a wide range of signals for userland programs to handle is a design flaw in itself: take the memory-fault signals SIGSEGV and SIGBUS. They were not designed for IPC or exception handling, but plenty of developers have used them that way over the years. I won't start a war over it, since I can see both sides, but #3 below is not controversial at all.
3. NTFS ACLs are a big improvement over UNIX-style ugo-rwx permissions. FWIW, they're also easier to work with than POSIX ACLs.
Just something to think about: the Windows way is radically different partly because compatibility with ye-olde DOS running on 16-bit x86 CPUs ruined it in some ways, but in other ways its design was driven by learning from UNIX's mistakes.
Despite the confusing name, Win32 is not inherently 32-bit: on 64-bit Windows the same Win32 API is the native 64-bit API.