
LINUX is obsolete (1992)

153 points | talles | 1 year ago | groups.google.com

179 comments

[+] ryao|1 year ago|reply

  While I could go into a long story here about the relative merits of the
  two designs, suffice it to say that among the people who actually design
  operating systems, the debate is essentially over. Microkernels have won.
The developers of BSD UNIX, SunOS, and many others would disagree. Also, the then upcoming Windows NT was a hybrid kernel design. While it has an executive "micro-kernel", all of the traditional kernel stuff outside the "microkernel" runs in kernel mode too, so it is really a monolithic kernel with module loading.

While the original post was written well before NeXTSTEP, the Mach 3.0 kernel was converted into a monolithic kernel in NeXTSTEP, which later became MacOS. The reality is that Mach 3.0 was simply still slow performance-wise, much like NT would have been had they made it into an actual microkernel.

These days, the only place where microkernels are common is embedded applications, but embedded systems often don't have operating systems at all, and more traditional operating systems are present there too (e.g. NuttX).

[+] lizknope|1 year ago|reply
> While the original post was written well before NeXTSTEP, the Mach 3.0 kernel was converted into a monolithic kernel in NeXTSTEP, which later became MacOS.

The original Tanenbaum post is dated Jan 29, 1992.

NeXTSTEP 0.8 was released in Oct 1988.

https://en.wikipedia.org/wiki/NeXTSTEP#Release_history

3.0 was not the conversion into a monolithic kernel. That was the version when it was finally a microkernel. Until that point the BSD Unix part ran in kernel space.

https://en.wikipedia.org/wiki/Mach_(kernel)

NeXTSTEP was based on this pre-Mach 3.0 architecture so it would have never met Tanenbaum's definition of a true microkernel.

> Mach received a major boost in visibility when the Open Software Foundation (OSF) announced they would be hosting future versions of OSF/1 on Mach 2.5, and were investigating Mach 3 as well. Mach 2.5 was also selected for the NeXTSTEP system and a number of commercial multiprocessor vendors.

OSF/1 was used by DEC and they rebranded it Digital Unix and then Tru64 Unix.

After NeXT was acquired by Apple they updated a lot of the OS.

https://en.wikipedia.org/wiki/XNU#Mach

> The basis of the XNU kernel is a heavily modified (hybrid) Open Software Foundation Mach kernel (OSFMK) 7.3.[3] OSFMK 7.3 is a microkernel[6] that includes applicable code from the University of Utah Mach 4 kernel and from the many Mach 3.0 variants forked from the original Carnegie Mellon University Mach 3.0 microkernel.

> The BSD code present in XNU has been most recently synchronised with that from the FreeBSD kernel. Although much of it has been significantly modified, code sharing still occurs between Apple and the FreeBSD Project as of 2009

Back in the late 2000's Apple hired some FreeBSD people to work on OS X.

Before Apple bought NeXT they were working with OSF on MkLinux which ported Linux to run on top of the Mach 3.0 microkernel.

https://en.wikipedia.org/wiki/MkLinux

> MkLinux is the first official attempt by Apple to support a free and open-source software project.[2] The work done with the Mach 3.0 kernel in MkLinux is said to have been extremely helpful in the initial porting of NeXTSTEP to the Macintosh hardware platform, which would later become macOS.

> OS X is based on the Mach 3.0 microkernel, designed by Carnegie Mellon University, and later adapted to the Power Macintosh by Apple and the Open Software Foundation Research Institute (now part of Silicomp). This was known as osfmk, and was part of MkLinux (http://www.mklinux.org). Later, this and code from OSF’s commercial development efforts were incorporated into Darwin’s kernel. Throughout this evolutionary process, the Mach APIs used in OS X diverged in many ways from the original CMU Mach 3 APIs. You may find older versions of the Mach source code interesting, both to satisfy historical curiosity and to avoid remaking mistakes made in earlier implementations.

So modern OS X is a mix of various code from multiple versions of Mach and BSD running as a hybrid kernel because as you said Mach 3.0 in true microkernel mode is slow.

[+] ww520|1 year ago|reply
Back when I read that statement, it immediately lost credibility with me. The argument was basically an appeal to authority. It put Tanenbaum on the "villain" side in my mind: someone willing to use his position of authority to win an argument rather than arguing on the merits. The subsequent string of microkernel failures proved the point. The moment Microsoft moved the graphics subsystem from user mode into kernel mode to mitigate performance problems was the death of the microkernel in Windows NT.
[+] fmajid|1 year ago|reply
Not just that, but between 3.51 and 4.0 many NT drivers, like graphics, were moved to ring 0, trading robustness for performance.
[+] pjmlp|1 year ago|reply
All Intel CPUs have Minix 3 powering their management engine.

Modern Windows 11 is even more hybrid than Windows NT planned to be, with many key subsystems running on their own sandbox managed by Hyper-V.

[+] InTheArena|1 year ago|reply
This is the thread that I read in high school that made me fall in love with software architecture. This was primarily because Tanenbaum’s position was so obviously correct, yet it was also clear to all that Linux was going to roll everyone, even at that early stage.

I still hand this out to younger software engineers so they can understand the true principles of architecture. I have a printout of it next to my book on how this great new operating system and SDK from Taligent was meant to be coded.

[+] abetusk|1 year ago|reply
I've heard of this debate but haven't heard an argument of adoption from a FOSS perspective. From Wikipedia on Minix [0]:

> MINIX was initially proprietary source-available, but was relicensed under the BSD 3-Clause to become free and open-source in 2000.

That is a full eight years after this post.

Also from Wikipedia on Linux becoming FOSS [1]:

> He [Linus Torvalds] first announced this decision in the release notes of version 0.12. In the middle of December 1992 he published version 0.99 using the GNU GPL.

So this post was essentially right at the crossroads of Linux going from a custom license to FOSS, while MINIX would remain proprietary for another eight years, presumably long after it had lost to Linux.

I do wonder how much of an effect, subtle or otherwise, the licensing helped or hindered adoption of either.

[0] https://en.wikipedia.org/wiki/Minix

[1] https://en.wikipedia.org/wiki/History_of_Linux

[+] otherme123|1 year ago|reply
I installed my first Linux in 1996. It came on a CD with a computer magazine: a free OS. That was huge, for me at least. Said CDs were filled with shareware software like WinZip that you had to buy or crack to use at 100%. Meanwhile there was this thing called Linux, for free, that included a web server, ftp, a firewall, a free C compiler, that thing called LaTeX that produced beautiful documents... The only thing it required from you was to sacrifice a bit of comfort in the UI, and a bit of extra effort to get better results.

I didn't hear about Minix until maybe the mid-2000s, and it was like an old legend of an allegedly better-than-Linux OS that failed because people are dumb.

[+] LeFantome|1 year ago|reply
It is not at all subtle. If Minix had been free, Linus might never have written Linux at all. It cost $50 (as I recall), and Linus hated that.

The first Linux license was that you could not charge for Linux. As it grew in popularity, people wanted to be able to charge for media (to cover their costs). So, Linus switched to the GPL which kept the code free but allowed charging for distribution.

[+] kazinator|1 year ago|reply
Academically, Linux is obsolete. You couldn't publish a paper on most of it; it wouldn't be original. Economically, commercially and socially, it isn't.

Toasters are also obsolete, academically. You couldn't publish a paper about toasters, yet millions of people put bread into toasters every morning. Toasters are not obsolete commercially, economically or socially. The average kid born today will know what a toaster is by the time they are two, even if they don't have one at home.

[+] forinti|1 year ago|reply
My father is a retired physics professor. I tried debating him once about an aqueduct in a town near us that was built in the early 20th century.

His view is that it was moronic because communicating vessels had already been known for centuries.

I tried arguing that maybe they didn't have the materials (pipes), or maybe dealing with obstructions would have been difficult, etc. After all, this was a remote location at that time.

I think that the person who built it probably didn't know about communicating vessels but that it is also true that the aqueduct was the best solution for the time and place.

Anyway, debating academics about practical considerations is hard.

[+] JodieBenitez|1 year ago|reply
> Writing a new OS only for the 386 in 1991 gets you your second 'F' for this term. But if you do real well on the final exam, you can still pass the course.

what a way to argue...

[+] mhandley|1 year ago|reply
There's an element of "Worse is Better" in this debate, as in many real-world systems debates. The original worse-is-better essay even predates the Linux vs Minix debate:

https://dreamsongs.com/RiseOfWorseIsBetter.html

Gabriel was right in 1989, and he's right today, though sometimes the deciding factor is performance (e.g. vs security) rather than implementation simplicity.

[+] wongarsu|1 year ago|reply
Another big factor is conceptual simplicity, rather than implementation simplicity. Linux is conceptually simple, you can get a good mental model of what it's doing with fairly little knowledge. There is complexity in the details, but you can learn about that as you go. And because it is "like the unix kernel, just bigger" there have always been a lot of people able and willing to explain it and carry the knowledge forward.

Windows in comparison has none of that. The design is complex from the start, is poorly understood because most knowledge is from the NT 4.0 era (when MS cared about communicating about their cool new kernel), and the community of people who could explain it to you is a lot smaller.

It's impressive what the NT kernel can do. But most of that is unused because it was either basically abandoned, meant for very specific enterprise use cases, or is poorly understood by developers. And a feature only gives you an advantage if it's actually used.

[+] pjmlp|1 year ago|reply
Ironically, it actually is, from a 2025 perspective.

Not only do microservices and Kubernetes all over the place diminish whatever gains Linux's monolithic kernel could offer; the current trend of cloud-based programming-language runtimes being OS-agnostic in serverless (hate the naming) deployments also makes whatever sits between the type-2 hypervisor and the language runtime irrelevant.

So while Linux based distributions might have taken over the server room as UNIX replacements, it only matters for those still doing full VM deployments in the style of AWS EC2 instances.

Also one of the few times I agree with Rob Pike,

> We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.

> At the risk of contradicting my last answer a little, let me ask you back: Does the kernel matter any more? I don't think it does. They're all the same at some level. I don't care nearly as much as I used to about the what the kernel does; it's so easy to emulate your way back to a familiar state.

-- 2004 interview on Slashdot, https://m.slashdot.org/story/50858

[+] wolrah|1 year ago|reply
> Linus "my first, and hopefully last flamefest" Torvalds

If only he knew...

[+] mrlonglong|1 year ago|reply
Actually, Minix kinda won. Its descendants currently infest billions of Intel processors living inside the ME.
[+] hackerbrother|1 year ago|reply
It’s always heralded as a great CS debate, but Tanenbaum’s position seems so obviously silly to me.

Tanenbaum: Microkernels are superior to monolithic kernels.

Torvalds: I agree — so go ahead and write a production microkernel...

[+] acmj|1 year ago|reply
People often forget that the best way to win a tech debate is to actually do it. Once, multiple developers criticized my small program as slow due to misuse of language features. I said: fine, give me a faster implementation. No one replied.
[+] ViktorRay|1 year ago|reply
The realization that in 2058 some people will be reading comments from 2025 Hacker News threads and will feel amused at all the things we were so confidently wrong about.

;)

[+] scarface_74|1 year ago|reply
https://news.ycombinator.com/item?id=32919

I don't think what the iphone supports will matter much in the long run, it's what devices like these nokias that will have the biggest impact on the future of mobile http://www.nokia.com/A4405104

———

No one is going to stop developing in Flash or Java just because it doesn't work on iPhone. Those who wanna cater to the iPhone market will make a "watered down version" of the app. Just the way an m site is developed for mobile browser.Thats it.

——

If another device maker come up with a cheaper phone with a more powerful browser, with support for Java and Flash, things will change. Always, the fittest will survive. Flash and java are necessary evils(if you think they are evil).

——

So it will take 1 (one) must-have application written in Flash or Java to make iPhone buyers look like fools? Sounds okay to me.

——

The computer based market will remain vastly larger than the phone based market. I don't have real numbers off hand, but lets assume 5% of web views are via cellphones

[+] npsomaratna|1 year ago|reply
Back in the '90s, I read a book called the "Zen of Windows 95 Programming." The author started off with (paraphrased) "If you're reading this 25 years in the future, and are having a laugh, here's the state of things in '95"

I did re-read that section again 25 years later...

[+] jppope|1 year ago|reply
I am terrified to read my own comments from a year ago... I can't even imagine 25 or 30 years from now.
[+] nialse|1 year ago|reply
How about retrospective ranking of comments based on their ability to correctly predict the future? Call it Hacker Old Golds?
[+] lizknope|1 year ago|reply
Back around 2003 our director said "This customer wants to put our camera chip in a phone." I thought it was a dumb idea.

I remember when the first iPhone was released in Jan 2007 that Jobs said all the non-Apple apps would be HTML based.

I thought it was dumb. Release a development environment and there will be thousands of apps that do stuff they couldn't even think of.

The App Store was started in July 2008.

[+] deadbabe|1 year ago|reply
We’re not that optimistic about the future here.
[+] the_cat_kittles|1 year ago|reply
hopefully people have progressed to the point where hn has been completely forgotten
[+] deanCommie|1 year ago|reply
[+] yallpendantools|1 year ago|reply
Is it just me or is that response actually...nice and good spirited? I haven't read these annals of computing history for more than a decade now and I expected a bit more vitriol from Linus "Fuck You Nvidia" Torvalds. I mean, okay both sides fire zingers but with far less density than average HN.

Also there's https://groups.google.com/g/comp.os.minix/c/wlhw16QWltI/m/tH.... It was, unfortunately, not this young lad's last flamefest. See second sentence of last paragraph.

Goodness, the internet really was a nicer place back then. Nowadays, you quote forum etiquette on someone and you get called an idiot for it. I'm touching grass today and I'm gonna be grateful for it.

[+] AyyEye|1 year ago|reply
Linux is obsolete. The main thing it has going for it is that it isn't actively hostile to its users like the alternatives. It's also somewhat hackable and open, for those who are technically inclined. Also, unlike its alternatives, it's (slowly but surely) on a positive trajectory... And that's not something anyone says about Windows or Mac.

> How I hated UNIX back in the seventies - that devilish accumulator of data trash, obscurer of function, enemy of the user! If anyone had told me back then that getting back to embarrassingly primitive UNIX would be the great hope and investment obsession of the year 2000, merely because it's name was changed to LINUX and its source code was opened up again, I never would have had the stomach or the heart to continue in computer science.

> Why can’t anyone younger dump our old ideas for something original? I long to be shocked and made obsolete by new generations of digital culture, but instead I am being tortured by repetition and boredom. For example: the pinnacle of achievement of the open software movement has been the creation of Linux, a derivative of UNIX, an old operating system from the 1970s. It’s still strange that generations of young, energetic, idealistic people would perceive such intense value in creating them. Let’s suppose that back in the 1980s I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new version of UNIX!” It would have sounded utterly pathetic.

- Jaron Lanier

[+] tcoff91|1 year ago|reply
Just goes to show that network effects beat superior technology every time.
[+] chris_wot|1 year ago|reply
I can’t see a single thing in that quote that explains why they didn’t like Unix. I’m sure there are good reasons, but the entire quote is an argument from emotion.
[+] decafbad|1 year ago|reply
This guy got a few million euros from EU for a secure OS, if I remember correctly. What happened to that project?
[+] otabdeveloper4|1 year ago|reply
Research was researched extensively. It's a net win for humanity, don't worry about it.
[+] musicale|1 year ago|reply
> Be thankful you are not my student. You would not get a high grade for such a design :-)

Further proof that computer "science" is a nonsense discipline. ;-)

The World Wide Web was invented at CERN, a particle physics laboratory, by someone with a BA in physics. Who later got the Turing Award, which computer scientists claim is somehow equivalent to a Nobel Prize.

Prof. Tanenbaum (whose degrees are also in physics) wasn't entirely off base though - Linux repeated Unix's mistakes and compromises (many of which were no longer necessary in 1992, let alone 2001 when macOS recycled NeXT's version of Unix) and we are still suffering from them some decades later.

[+] jmull|1 year ago|reply
I don’t think Tanenbaum’s distinction between micro-kernel and monolith is useful or important. He has monolith as a single binary running as a single process, while micro-kernel is multiple binaries/processes.

But either way these both boil down to bytes loaded in memory, being executed by the cpu. The significant thing about a microkernel is that the operating system is organized into functional parts that are separate and only talk to each other via specific, well defined channels/interfaces.

Microkernel uses processes and messages for this, but that’s hardly the only way to do it, and can certainly be done in a bunch of units that happen to be packaged into the same file and process. C header files to define interface, C ABI to structure the channels, .c files for the separate pieces.

Of course you could do that wrong, but you could also do it right (and, of course, the same is true of processes and messages).
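The separation described above can be sketched in a few lines of C. This is not from the thread, just an illustration of the idea: a struct of function pointers plays the role of the header/ABI boundary, and the rest of the system touches the driver only through it. In a real kernel the interface would live in a header and each implementation in its own .c file; everything sits in one file here only to keep the example self-contained, and all names (`block_dev`, `ramdisk_init`, etc.) are invented.

```c
#include <assert.h>   /* used by the usage example below */
#include <string.h>

/* "Header": the only surface other components may touch. */
typedef struct block_dev block_dev;
struct block_dev {
    int (*read)(block_dev *self, long sector, unsigned char *buf);
    int (*write)(block_dev *self, long sector, const unsigned char *buf);
};

enum { SECTORS = 8, SECTOR_SIZE = 16 };

/* "Private .c file": a toy RAM-backed implementation with hidden state. */
typedef struct {
    block_dev ops;                            /* interface first, so casts work */
    unsigned char data[SECTORS][SECTOR_SIZE];
} ramdisk;

static int ram_read(block_dev *self, long sector, unsigned char *buf) {
    ramdisk *d = (ramdisk *)self;
    if (sector < 0 || sector >= SECTORS) return -1;
    memcpy(buf, d->data[sector], SECTOR_SIZE);
    return 0;
}

static int ram_write(block_dev *self, long sector, const unsigned char *buf) {
    ramdisk *d = (ramdisk *)self;
    if (sector < 0 || sector >= SECTORS) return -1;
    memcpy(d->data[sector], buf, SECTOR_SIZE);
    return 0;
}

static block_dev *ramdisk_init(ramdisk *d) {
    d->ops.read = ram_read;
    d->ops.write = ram_write;
    memset(d->data, 0, sizeof d->data);
    return &d->ops;
}
```

A caller sees only `block_dev *`, so a different backend could be swapped in without touching the callers, which is exactly the "well-defined channel" property, minus the process boundary.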

A process, btw, is an abstraction implemented by the os, so microkernel or not, the os is setting the rules it plays by (subject to what the CPU provides/allows).

[+] nurettin|1 year ago|reply
I have no idea how they think IPC is as quick as in-process. I do it pretty quickly with memory mapping (shared memory between data providers and consumers), but it has at least an order of magnitude overhead compared to a concurrent queue even after 30 years.

Tanenbaum must have felt threatened by the growing Linux community to start throwing flamebait like this.

[+] adrian_b|1 year ago|reply
I do not understand what you say.

The best performance for IPC is achieved indeed as you say, using shared memory between the communicating parties.

But once you have shared memory, you can implement in it any kind of concurrent queue you want, without any kind of overhead in comparison with in-process communication between threads.

While other kinds of IPC, which need context switches between kernel and user processes, are slow, IPC through shared memory has exactly the same performance as inter-thread communication inside a process.

Inter-thread communication may need to use event-waiting syscalls, which cause context switches, but these are always needed when long waiting times are possible, regardless if the communication is inter-process or inside a process.

Mach and other early attempts at implementing micro-kernels have made the big mistake of trying to do IPC mediated by the kernel, which unavoidably has a low performance.

The right way to do a micro-kernel is for it to not handle any IPC, but only scheduling, event handling and resource allocation, including the allocation of the shared memory that enables direct communication between processes.
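The claim above, that shared memory makes kernel-free IPC possible on the fast path, can be sketched as a single-producer/single-consumer ring buffer over a plain struct. Placing that struct in memory obtained from `mmap(MAP_SHARED | MAP_ANONYMOUS)` before a `fork()` would make the same code a cross-process channel; it is shown here in one process for brevity, and the names (`spsc_queue`, `spsc_push`, `spsc_pop`) are invented for illustration.

```c
#include <assert.h>     /* used by the usage example below */
#include <stdatomic.h>
#include <stdbool.h>

#define QCAP 8u   /* power of two, so free-running indices wrap correctly */

typedef struct {
    _Atomic unsigned head;   /* next slot the consumer reads  */
    _Atomic unsigned tail;   /* next slot the producer writes */
    int slots[QCAP];
} spsc_queue;

static bool spsc_push(spsc_queue *q, int v) {
    unsigned t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t - h == QCAP) return false;              /* full */
    q->slots[t % QCAP] = v;
    /* release: the slot write becomes visible before the new tail */
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return true;
}

static bool spsc_pop(spsc_queue *q, int *out) {
    unsigned h = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (t == h) return false;                     /* empty */
    *out = q->slots[h % QCAP];
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return true;
}
```

Neither side makes a syscall when the queue is non-empty and non-full; the kernel is only needed for the slow path (sleeping when there is nothing to do), which matches the comment's point that waiting costs apply equally to inter-thread and inter-process communication.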

[+] xorcist|1 year ago|reply
Welcome to the future. We have microservices.
[+] pjmlp|1 year ago|reply
Meanwhile most Linux distros run containers all over the place, serialising into and out of JSON on every single RPC call, with users shipping Electron applications everywhere.

Got to put all that monolithic kernel performance to good use. /s

[+] dang|1 year ago|reply
Related. Others?

The Tanenbaum-Torvalds Debate - https://news.ycombinator.com/item?id=39338103 - Feb 2024 (1 comment)

Linux Is Obsolete (1992) - https://news.ycombinator.com/item?id=38419400 - Nov 2023 (2 comments)

Linux Is Obsolete (1992) - https://news.ycombinator.com/item?id=31369053 - May 2022 (2 comments)

The Tanenbaum – Torvalds Debate - https://news.ycombinator.com/item?id=27652985 - June 2021 (7 comments)

The Tanenbaum-Torvalds Debate (1992) - https://news.ycombinator.com/item?id=25823232 - Jan 2021 (2 comments)

Tanenbaum–Torvalds_debate (Microkernel vs. Monolithic Kernel) - https://news.ycombinator.com/item?id=20292838 - June 2019 (1 comment)

Linux is Obsolete (1992) - https://news.ycombinator.com/item?id=17294907 - June 2018 (168 comments)

The Tanenbaum-Torvalds Debate (1992) - https://news.ycombinator.com/item?id=10047573 - Aug 2015 (1 comment)

Linux is obsolete – A debate between Andrew S. Tanenbaum and Linus Torvalds - https://news.ycombinator.com/item?id=9739016 - June 2015 (5 comments)

LINUX is obsolete (1992) - https://news.ycombinator.com/item?id=8942175 - Jan 2015 (74 comments)

The Tanenbaum-Torvalds Debate (1992) - https://news.ycombinator.com/item?id=8151147 - Aug 2014 (47 comments)

Linux is obsolete (1992) - https://news.ycombinator.com/item?id=7223306 - Feb 2014 (5 comments)

Tanenbaum-Linus Torvalds Debate: Part II - https://news.ycombinator.com/item?id=4853655 - Nov 2012 (2 comments)

"LINUX is obsolete" - Andy Tanenbaum, 1992 - https://news.ycombinator.com/item?id=3785363 - April 2012 (14 comments)

Why was Tanenbaum wrong in the Tanenbaum-Torvalds debates? - https://news.ycombinator.com/item?id=3744138 - March 2012 (54 comments)

Why was Tanenbaum wrong in the Tanenbaum-Torvalds debates? - https://news.ycombinator.com/item?id=3739240 - March 2012 (1 comment)

Linux is Obsolete [1992] - https://news.ycombinator.com/item?id=545213 - April 2009 (46 comments)

[+] bdavbdav|1 year ago|reply
> As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so

I’m not sure one necessarily qualifies you to know the other… there always seems to be a lot of arrogance in these circles.