alt-text: One of the survivors, poking around in the ruins with the point of a spear, uncovers a singed photo of Richard Stallman. They stare in silence. "This," one of them finally says, "This is a man who BELIEVED in something."
Just yesterday I was explaining to a friend that Stallman is not crazy; rather, there is a continuum from convenience to freedom, and Stallman believes most of us are way too far on the "left", so he goes as far to the "right" as possible to show us we've gone too far. He pays for his freedoms with his conveniences, and we pay for our conveniences with our freedoms.
Things are much more nuanced and multidimensional than this. Not trying to start a flame war here.
You might not want to start a flame war, but you know one is perhaps inevitable on this topic.
I've met RMS several times (he even stayed at my place once, and yes, the rider asking hosts not to buy him a parrot is real and long-standing), and I came at it from a different angle: I was a contributor to a BSD OS some yonks ago.
As a result we had a good-natured conversation about some of the conflicts between GPL and BSD, and he's actually a lot more forgiving than people portray him in that argument. He's grounded in more reality than people give him credit for. Yes, he supports and advocates the GPL, but he'd rather see code released as BSD/MIT than not released at all - at least that was the impression I was left with.
There are two things that tend to get the anti-GPL crowd's back up, which I'm not sure fit easily on your continuum.
Firstly, there is the programmer's right to be paid passively for the results of work done by their software.
RMS broadly (and I'm paraphrasing ridiculously here) thinks that you should be paid for the hours you work, the software should be Free, you don't earn a right to keep it closed and sell access to it through licenses, etc.
Fine, but that would pretty much shut down the programming economy. People regularly write code because they or their employer believes a customer is going to pay for it, and use that as leverage. Obviously. If we were to outlaw such a model - and it would require statutes to make it a reality, laws that run contrary to free-market liberalism - a large number of developers would be laid off.
That's OK, says RMS, because society will be better off. However, turkeys do not vote for Thanksgiving/Christmas, and I want the _freedom_ to choose to work in businesses with economic models like that.
Interestingly, I think that if Amazon, Facebook, Google, Uber and all similar firms (e-commerce, social networks, search engines, etc.) released all their code, the market would not move much. The value was first created by innovating through their code base, but now the value is in the systems, brands, and relationships those firms have been able to build. Curiously, it's possible we are now in a scenario where closed source code can help a startup scale, but once they're at scale there is no value in keeping the code closed, and making it open could benefit the companies (through "volunteer commits") as well as the wider economy. We are seeing the tip of this in the developer tooling space (.NET et al.).
Secondly, there is a weird line beyond which RMS does not care about software freedom quite so much. For example, in one talk of his I attended, he stated that because he does not wish to modify the code in his microwave, he is not bothered whether that code is free or not. I think that's a weak (and arbitrary) line, one that is inconsistent with the core argument and really doesn't sit on the continuum.
Either all code wants to be free, or only some code does. And if only some code does, well, you can't go around telling people which is which.
Perhaps he's revised his position since then (a decade ago now), but I know it was something that troubled me about his argument.
And when somebody's economic and political philosophy doesn't quite stand up to even not-very-rigorous arguments about its economic or political impact, raised by the very people you're trying to convince, it's going to be a hard sell.
TL;DR: people should be free to sell or give away software as they see fit, and let market forces decide (as they have already in say, Internet infrastructure software markets such as server OSes, web/mail servers, virtualisation, and so on).
I wonder if the Hurd guys will ever finish, and when. Can somebody comment on the continued existence of this project, even though the ideas it propagated (microkernels) have been well adopted into the mainstream?
Trying to emulate Mach, which was a dud as a microkernel, didn't help.
QNX is one of the few microkernels to get it right. L4 got stripped down so far that it's just a hypervisor, on which people usually load Linux. L4 took out arbitrary-length message passing in favor of interrupt-like events and shared memory between sender and receiver. This simplifies the kernel, but now it's easier for one side of a sender/receiver pair to mess up the other, since they share a communications area. The QNX primitive set (MsgSend, MsgReceive, and MsgReply) works well enough in practice to allow full POSIX functionality. Applications can talk to file system servers, network servers, etc. through those primitives. All QNX I/O works that way. You take maybe a 20% performance hit for the extra copying, but you get robustness in exchange.
Most important thing for performance in a microkernel: the CPU dispatcher and the message passing have to be tightly coordinated. You must be able to call another process and get a reply back without trips through the scheduler or a switch to a different CPU. QNX gets this right, because MsgSend is blocking. The sender blocks and the receiver starts without having to schedule. The data being sent is right there in the cache of the CPU, ready for use by the receiver. Good test for a microkernel - put some CPU-intensive jobs in a loop, while also running something that makes short request/reply calls to another process. If the request/reply process stalls out, the microkernel is doing it wrong. If the CPU-bound processes stall out, the microkernel is doing it wrong. Message passing should schedule as smoothly as a subroutine call. If it doesn't, performance under load will suck.
If human civilization has ended in fire, then how does it come back within ten years, and with GNU/Hurd specifically? I don't get what the humor is here.
Is it possible that AI has taken control of free OSes from all over the web, killed all the proprietary stuff along with the humans, and now runs the Earth without any human civilization? Am I correct?
"This infamously and perennially late GNU/Hurd OS will finally make it in to Randall's home after human civilization has been wiped out. The joke is that GNU/Hurd began to be developed in 1990, and while it was expected to be released in a relatively short time, even now only unstable builds have been released. So Randall is saying that he will finally run it in his house a decade or two after the end of civilization. GNU/Hurd will presumably have an advantage as humanity rebuilds civilization due to the widespread availability of its code and development tools, and perhaps also because of Stallman's depth of belief, based on the title text. Alternatively, GNU/Hurd might be finished by the same force that finished humankind, for instance Skynet, in case of AI Apocalypse. (Interestingly, although still far from completion, a new version of GNU/Hurd was released less than a week after this comic.)"
[...]
"The GNU/Hurd reference might also be a pun, as in a "herd" of Gnus "running" in his living room, as wild animals reclaim the Earth after the end of human civilization."
Probably a super-hardened (security-wise) OS for an iPod Touch-like "second device" intended for use primarily to arrange illicit assignations.
Very few apps available, all based on zero-knowledge protocols, and no apps that are capable of leaking your real-world identity through ordinary or casual use are allowed.
As a result, just about everyone on Capitol Hill has one of these, so they can't be banned even though they are, as a side-effect, extremely useful to whistleblowers, journalists, activists, dissidents, etc.
I think that source is a little bit confused on the distinction between iOS and OS X (macOS?).
iOS and OS X share the same kernel, named XNU (it's the userlands of the two that are almost completely different). That's been the case since iOS 1. Whenever either OS X or iOS gets ported to a new CPU (or CPU subfamily), the kernel gets another set of macros added to it for the purpose of identifying the new CPU. The presence of a new ARM macro in XNU doesn't mean much at all - it already has a ton of them, as well as truly ancient macros from long-forgotten systems (m68/88k comes to mind). Those macros have nothing to do with OS X or iOS individually - they're relevant primarily to the kernel itself.
Every now and then Apple releases a new product, like tvOS for the Apple TV, and people ask why Apple wrote an entirely new OS: why not base it on iOS or OS X? Of course, that's exactly what they do. There's a common core to OS X, iOS and now tvOS that is almost entirely the same code base. Which specific files are common or different for each flavour probably changes from time to time, and this may well be a case of that.
The fact is, though, that iOS and OS X are already as much the same code base as they can be, and as different as they need to be. That balance may change as the OSes evolve, but I don't think there's any pressing need or benefit to converging them completely.
In and of itself that doesn't prove anything. That file has had symbols for multiple architectures since forever, including CPUs like M68K, Sparc (years ago, I had hoped that Apple would buy Sun instead of Oracle - oh well) and VAX (but not Alpha).
dancek | 9 years ago
I know there's L4, which is pretty mainstream on coprocessors.
yitchelle | 9 years ago
Why don't we keep adding a date reference to all titles retrospectively if we want a chronological appreciation of the article?
kenOfYugen | 9 years ago
The timeline suggests a reference to Gary Bernhardt's talk, "The Birth & Death of JavaScript" [2].
1. http://runtimejs.org/
2. https://www.destroyallsoftware.com/talks/the-birth-and-death...
tremon | 9 years ago
Human civilization ending does not imply humans are extinct. There may be a few basement dwellers that survived, and kept coding.
fuzzfactor | 9 years ago
Obviously from the comic, the author is an early adopter.
That's why he embraces HURD as soon as it's ready, instead of waiting until after 2060.
hanief | 9 years ago
http://www.iclarified.com/57138/apple-adds-arm-support-to-ma...
phs318u | 9 years ago
https://github.com/opensource-apple/xnu/blob/10.11/osfmk/mac...
0x0 | 9 years ago
https://news.ycombinator.com/item?id=12772801