sergiolp|10 years ago
These days I prefer to do my kernel hacking on monolithic kernels, mainly NetBSD. I've stopped working on Mach, Hurd and other experimental microkernels (there are a bunch out there) because it was becoming increasingly frustrating.
If you asked me to define the problem with microkernels in one word, it would be "complexity". And it's a kind of complexity that impacts everything:
- Debugging is hard: On monolithic kernels, you have a single image, with both code and state. Hunting a bug is just a matter of jumping into the internal debugger (or attaching an external one, or generating a dump, or...) and looking around. On Hurd, the state is spread among Mach and the servers, so you'll have to look at each one trying to follow the trail left by the bug.
- Managing resources is hard: Mach knows everything about the machine, but nothing about the user. The server knows everything about the user, but nothing about the machine. And keeping them in sync is too expensive. Go figure.
- Obtaining reasonable performance is har... impossible: You want to read() a couple of bytes from disk? Good: prepare a message, call into Mach, yield a while until the server is scheduled, copy the message, unmarshal it, process the request, prepare another message to Mach to read from disk, call into Mach, yield waiting for rescheduling, obtain the data, prepare the answer, call into Mach, yield waiting for rescheduling, and finally obtain your 2 bytes. Easy!
In the end, Torvalds was right. The user doesn't want to work with the OS, he wants to work with his application. This means the OS should be as invisible as possible, and fulfill userland requests following the shortest path. Microkernels don't comply with this requirement, so from a user's perspective, they fail natural selection.
That said, if you're into kernels, microkernels are different and fun! Don't miss the opportunity to do some hacking with one of them. Just don't be a fool like me, and avoid becoming obsessed with achieving the impossible.
userbinator|10 years ago
Personally, I think modularity is good up to the extent that it reduces complexity by removing duplication, but beyond that it's an unnecessary abstraction that obfuscates more than simplifies.
nickpsecurity|10 years ago
Now, I'd prefer an architecture where we can use regular programming languages and function calls. A number of past and present hardware architectures are designed to protect things such as pointers or control flow. Those in production are not, but they do have MMUs and at least two rings. So apps on them will both get breached due to the inherently broken architecture, and can be isolated through a microkernel architecture with interface protections, too. So it's really a kludgey solution to a problem caused by stupid hardware.
There still hasn't been a single monolithic system to match their reliability, security, and maintenance without clustering, though.
copsarebastards|10 years ago
[1] https://en.wikipedia.org/wiki/Law_of_conservation_of_complex...
sergiolp|10 years ago
twoodfin|10 years ago
vezzy-fnord|10 years ago
The problem with Mach, you mean. All the examples you listed are specific to it.
sergiolp|10 years ago