As usual, the HN comments react to the headline without reading the content.
A lot of modern userspace code, including Rust code in the standard library, thinks that invariant failures (AKA "programmer errors") should cause some sort of assertion failure or crash (Rust or Go `panic`, C/C++ `assert`, etc). In the kernel, claims Linus, failing loudly is worse than trying to keep going because failing would also kill the failure reporting mechanisms.
He advocates for a sort of soft-failure, where the code tells you you're entering unknown territory and then goes ahead and does whatever. Maybe it crashes later, maybe it returns the wrong answer, who knows, the only thing it won't do is halt the kernel at the point the error was detected.
Think of the following Rust API for an array, which needs to be able to handle the case of a user reading an index outside its bounds:
struct Array<T> { ... }

impl<T> Array<T> {
    fn len(&self) -> usize;

    // if idx >= len, panic
    fn get_or_panic(&self, idx: usize) -> &T;

    // if idx >= len, return None
    fn get_or_none(&self, idx: usize) -> Option<&T>;

    // if idx >= len, print a stack trace and return
    // who knows what
    unsafe fn get_or_undefined(&self, idx: usize) -> &T;
}
The first two are safe by the Rust definition, because they can't cause memory-unsafe behavior. The last two are safe by the Linus/Linux definition, because they won't cause a kernel panic. If you have to choose between #1 and #3, Linus is putting his foot down and saying that the kernel's answer is #3.
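A hypothetical sketch of what those three styles might look like in runnable Rust, backed by a plain `Vec`. The names and the `Default` fallback are my own illustration, not a real kernel API; `get_or_warn` stands in for the third variant, since safe Rust cannot actually return undefined data, so it warns loudly and limps on with a default value:

```rust
// Toy illustration of the three error-handling styles discussed above.
struct Array<T> {
    items: Vec<T>,
}

impl<T: Copy + Default> Array<T> {
    fn len(&self) -> usize {
        self.items.len()
    }

    // Style #1: invariant failure halts the program (userspace Rust default).
    fn get_or_panic(&self, idx: usize) -> T {
        self.items[idx] // Vec indexing panics on out-of-bounds
    }

    // Style #2: invariant failure surfaces as a value the caller must handle.
    fn get_or_none(&self, idx: usize) -> Option<T> {
        self.items.get(idx).copied()
    }

    // Style #3 (roughly the shape Linus prefers): warn loudly, then keep
    // going with *some* defined value instead of halting. In the kernel this
    // would be a WARN_ON followed by whatever the code can limp along with.
    fn get_or_warn(&self, idx: usize) -> T {
        match self.items.get(idx) {
            Some(v) => *v,
            None => {
                eprintln!("WARN: index {} out of bounds (len {})", idx, self.len());
                T::default()
            }
        }
    }
}
```

The third method trades debuggability for availability: the caller gets a well-defined (if wrong) value, and the system stays up to report the warning.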
> Even "safe" rust code in user space will do things like panic when
things go wrong (overflows, allocation failures, etc). If you don't
realize that that is NOT some kind of true safely[sic], I don't know what
to say.
> Not completing the operation at all, is not really any better than
getting the wrong answer, it's only more debuggable.
What Linus is saying is 100% right, of course. He is trying to set expectations straight: just because you replaced C code refined over thousands of man-months of effort and corrections with Rust code doesn't mean absolute safety is guaranteed. From his kernel perspective, Rust panic-aborting on an overflow or allocation failure is just like the kernel's C code detecting a double free and warning about it. To the kernel that is not safety at all; as he points out, it is only more debuggable.
He is allowing Rust in the kernel, so he understands that Rust lets you shoot yourself in the foot far less than standard C. He is merely pointing out the reality that, in kernel space or even user space, that does not equate to absolute total safety. And as chief kernel maintainer he is well within his rights to set the expectation that tomorrow's kernel Rust programmers write code with this point in mind.
(IOW, as an example, he doesn't want to see Rust patches that ignore kernel realities in the name of Rust's magical safety guarantees: directly or indirectly allocating large chunks of memory can always fail in the kernel, and that needs to be accounted for even in Rust code.)
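The allocation point can be sketched in plain Rust. This is a hypothetical stand-in, not the Rust-for-Linux API: userspace `Vec::push` aborts the process if the allocator fails, while kernel code must surface the failure as an error the caller handles. Here a capacity cap plays the role of "the allocator said no":

```rust
// Toy model of fallible allocation: failure is an error value, not a panic.
struct BoundedVec<T> {
    items: Vec<T>,
    cap: usize, // stand-in for the allocator running out of memory
}

impl<T> BoundedVec<T> {
    fn new(cap: usize) -> Self {
        BoundedVec { items: Vec::new(), cap }
    }

    // Kernel style: allocation failure comes back like -ENOMEM, and the
    // caller is forced by the type system to deal with it.
    fn try_push(&mut self, value: T) -> Result<(), &'static str> {
        if self.items.len() >= self.cap {
            return Err("ENOMEM"); // pretend the allocation failed
        }
        self.items.push(value);
        Ok(())
    }
}
```

The real Rust-for-Linux work takes the same shape: fallible variants of the allocating APIs, so an allocation failure is an ordinary error path rather than a panic.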
Great explanation. I am not an expert on this, so your comment helped me understand. It sounds like Linus is just being a good kernel maintainer here, and clarifying a misunderstood technical term - safety.
It's not a condemnation of rust, but rather a guidepost that, if followed, will actually benefit rust developers.
At least in user space, aborting an operation is much better than incorrect results. But the kernel being incorrect makes user space incorrect as well.
First of all, making a problem both obvious and easier to solve is better. Nothing "only" about it - it's better. Better both for the programmers and for the users. For the programmer the benefit is obvious, for the user problems will simply be more rare, because the benefit the programmer received will make software better faster.
Second, about the behavior. When you attempt to save changes to your document, would you rather a bug that corrupts your document fail with fanfare or succeed silently? How about the web page you visited with embedded malicious JavaScript from a compromised third party: would you rather the page be closed, or have your bank details up for sale on a foreign forum? When correctness is out the window, you must abort.
If that's what Linus is saying, then he needs to work on his communication skills, because that is not what he said. What he actually said is that dynamic errors should not be detected, they should be ignored. That's so antiquated and ignorant that I hope that he meant what you said, but it's definitely not what he wrote.
As I posted up in this thread, the right way to handle this is to make dynamic errors either throw exceptions or kill the whole task, and split the critical work into tasks that can be as-a-whole failed or completed, almost like transactions. The idea that the kernel should just go on limping in a f'd up state is bonkers.
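The "fail the task as a whole" idea above can be sketched with `std::panic::catch_unwind`: each unit of work runs behind a boundary that converts a panic into an error the supervisor can log, instead of taking the whole system down. A minimal userspace sketch (the function name is mine, and a real kernel could not literally use unwinding like this):

```rust
use std::panic;

// Run one fallible task behind a panic boundary. A panic inside the task
// rolls back that unit of work; the caller decides what to do next.
fn run_task<F>(task: F) -> Result<i32, String>
where
    F: FnOnce() -> i32 + panic::UnwindSafe,
{
    panic::catch_unwind(task)
        .map_err(|_| "task panicked; unit of work rolled back".to_string())
}
```

This is essentially the transaction shape: the task either completes and yields a value, or fails as a whole without corrupting the supervisor.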
I’ve been using Rust for a while, and I’m so, so tired of hearing this argument.
Yes, we know. We get it. Rust is not an absolute guarantee of safety and doesn’t protect us from all the bugs. This is obvious and well-known to anyone actually using Rust.
At this point, the argument feels like some sort of ideological debate happening outside the realm of actually getting work done. It feels like any time someone says that Rust defends against certain types of safety errors, someone feels obligated to pop out of the background and remind everyone that it doesn’t protect against every code safety issue.
I mean, it feels like any time anyone mentions any code base not written in Rust, someone pops in and points out that it's not safe and should be rewritten in Rust.
I think it's all part of the language maturing process. Give it time, zealots will either move on to something new (and then harass the rust community for not meeting their new standard of excellence) or simmer down and get to work.
Fwiw, the original article/email is less about "Rust has unsafe" and more about "panicking/crashing to avoid triggering UB isn't a viable strategy in the kernel."
I keep seeing claims that Rust users are insufferable and claim that Rust protects against everything. But, as someone who started using Rust around 0.4, I have never seen these insufferable users. I imagine that they lurk on some communities?
> Rust is not an absolute guarantee of safety and doesn’t protect us from all the bugs.
That's not exactly the vibe I'm getting from the typical Rust fanboys popping up whenever there's another CVE caused by the usage of C or C++ though ;)
Rust does seem to attract the same sort of insufferable personalities that have been so typical for C++ in the past. Why that is, I have no idea.
I know next to nothing about kernel programming, but I'm not sure what Linus' objection to the comment he is responding to is.
The comment seemed to be making reference to rust's safety guarantees about undefined behaviour like use after free.
Linus seems to have a completely different definition of "safety" that conflates allocation failures, indexing out of bounds, and division by zero with memory safety. Rust makes no claims about those problems, and the comment clearly refers to undefined behaviour. Obviously those other problems are real problems, just not ones that Rust claims to solve.
Edit: Reading the chain further along, it increasingly feels like Linus is arguing against a strawman.
I think a much better email from the thread to link to would be the earlier https://lkml.org/lkml/2022/9/19/840, where Linus actually talks about some of the challenges of kernel programming and how they differ from user-space programming.
I actually wondered with all the recent "Rust in the kernel" about culture clashes. I mean, most kernel developers aren't Rust programmers (and vice versa).
Now we get a first glimpse at what happens.
Still, I find it strange that it never seemed to come up in preparation to the first Rust merges. Were there any conflict resolution strategies in place (that I don't know about) or just "we flame it out on LKML"?
From the closing paragraph, I feel like he’s under the impression that Rust-advocating contributors are putting Rust’s interests (e.g. “legitimizing it” by getting it in the kernel) above the kernel itself.
> I feel like he’s under the impression that Rust-advocating contributors are putting Rust’s interests (e.g. “legitimizing it” by getting it in the kernel) above the kernel itself.
I mean, the post Linus initially responded to did contain[1] a patch removing a kernel define, asking if anyone had any objections to removing that define, just to make the resulting Rust code look a little nicer.
They probably are, in many cases. Rust’s community, in aggregate, have developed a reputation (earned, in my opinion). It’s too bad that the community don’t follow the leaders’ example in this regard. There are some quality, level-headed Rust advocates. They appear to be the minority.
You're completely wrong here. There is no push to "legitimize" Rust by getting it into the kernel. A lot of people want to actively write drivers for Linux without having to use C to do it.
Trying to tweak the kernel to make integration easier in a supposed non-harmful way doesn't harm anything.
> Not completing the operation at all, is not really any better than
getting the wrong answer, it's only more debuggable.
I wouldn't be so sure about that.
Getting the wrong answer can be a serious security problem.
Not completing the operation... well, it is not good, but that's it.
The kernel can’t fail to complete its operations, because then the entire system crashes and no logs are created. Instead, you can finish the operation and check the result.
> Not completing the operation... well, it is not good, but that's it.
Depends on what the operation is. If the operation is flying an airplane or controlling a nuclear reaction, you can be sure that not completing the operation and just aborting the program is the worst possible outcome. Besides, the error might crash the plane or melt down the reactor, but it might also have no effect at all, e.g. a buffer overflow that overwrites a memory area not used for anything important.
Of course these are extreme examples (for which Linux is out of the discussion anyway, since it doesn't offer the level of safety guarantees required), but we can make other examples.
One example is your own PC. If you use Linux, take a look at the dmesg output and count the errors: there are probably a lot of them, for multiple reasons. You surely want your system to continue running, and not panic on each of them!
I mean, if it is a cosmetic thing, sure. If it has substantial meaning, I would rather have that 5-ton robotic welding arm not move than have it move through my skull.
It is sometimes acceptable to get wrong output. But it is nearly always better to know it is wrong.
I don't think I buy Linus' high level claim. It is not necessarily better to press on with the wrong answer, in some cases failure actually is an option and might be much better than oops we did it wrong.
This morning I was reading the analysis of an incident in which a London tube train drove away with open doors. Nobody was harmed, or even in immediate danger; the train had relatively few passengers, and in fact they only alerted the driver at the next station, classic British politeness (they made videos and took photographs, but didn't use the emergency call button until the train got to a station).
Anyway, the underlying cause involves systems which were flooded with critical "I'm failing" messages and would just periodically reboot and then press on. The train had been critically faulty for minutes, maybe even days before the incident, but rather than fail, and go out of service, systems kept trying to press on. The safety systems wouldn't have allowed this failed train to drive with its doors open - but the safety critical mistake to disable safety systems and drive the train anyway wouldn't have happened if the initial failure had caused the train to immediately go out of passenger service instead of limping on for who knows how long.
>And the reality is that there are no absolute guarantees. Ever. The "Rust is safe" is not some kind of absolute guarantee of code safety. Never has been. Anybody who believes that should probably re-take their kindergarten year, and stop believing in the Easter bunny and Santa Claus.
I thought that he had apologised and regretted being hostile in comments. Apparently not. Not that I have much of an issue with ranty, colorful language, but you also need to be right and have a legitimate cause to pull it off...
The point he makes is BS. "The reality is that there are no absolute guarantees. Ever." Yeah, DUH! The compiler could have bugs and soundness issues, for example.
The point is you don't need "absolute guarantees"; "way safer, with dozens more classes of issues caught automatically" is already enough. The other guy didn't write about "absolute guarantees". He said "WE'RE TRYING to guarantee the absence of undefined behaviour". That's an aim, not a claim that they have achieved it or can achieve it 100%.
>Even "safe" rust code in user space will do things like panic when things go wrong (overflows, allocation failures, etc). If you don't realize that that is NOT some kind of true safely, I don't know what to say.
Well, if Linus doesn't realize this is irrelevant to the argument the parent made and the intention he talked about, I don't know what to say...
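For concreteness on the "overflows" in the quote: in debug builds Rust panics on integer overflow, while release builds wrap silently; code that wants neither behavior uses the explicit checked operations and decides for itself what to do. A small sketch (the function name is my own):

```rust
// Overflow as a value the caller must handle, rather than a panic
// (debug builds) or a silent wrap (release builds).
fn add_checked(a: u32, b: u32) -> Option<u32> {
    a.checked_add(b) // None on overflow
}
```

This is the same trade-off as the array example: the panic is a debugging aid, and kernel-style code would replace it with an explicit error path.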
> Even "safe" rust code in user space will do things like panic when
things go wrong (overflows, allocation failures, etc). If you don't
realize that that is NOT some kind of true safely, I don't know what
to say.
When people say "safe" there's a pretty precise meaning and it's not this.
Yes, anyone who believes rust is 100% "safe" (by any definition) is wrong. That's not something you learn in Kindergarten though, it's actually about understanding that Rice's Theorem is a generalization of the Halting Problem.
> So this is something that I really need the Rust people to understand. That whole reality of "safe" not being some absolute thing
The irony of Linus lecturing anyone on safety lol anyway "the Rust people" know this already, when they say "safe" they mean "memory safe" - https://en.wikipedia.org/wiki/Memory_safety
Anyway, dumb shit like this is why I've always been quietly dreading Rust in the kernel.
a) The kernel will never be safe software because the mainline developers don't want it to be or even know what safe means
b) It just invites more posts like this and puts Rust closer to one of the most annoying software communities
> Or, you know, if you can't deal with the rules that the kernel requires, then just don't do kernel programming.
Agreed on this point. I was very interested in kernel dev earlier in my career until I actually started to engage with it.
It does make sense that the mainline developers don't know what "safe" means if you arbitrarily decide that "safe" means "memory safe" specifically and no other kind of "safe". A Haskell or Clojure developer could arbitrarily decide that "safe" means "safe from side effects," but unless that is clearly stated every time they engage in discourse with someone I wouldn't blame their discussion partners for not knowing what the developer means when they talk about some code being "safe".
I will agree with you that I dread Rust in the kernel, hopefully it can continue to exist there peacefully without people getting too hot under the collar about their personal hang-ups. For all its flaws Rust has an amazing value prop in the borrow checker and I would love for memory bugs to be eliminated for good.
One of my Marine NCOs would say, "there is no such thing as safe."
You aren't safe on the FOB, in your car, in your barracks, or in your house. There are only degrees of safety. Very wise almost globally applicable words.
I feel like there is an underlying problem here: Rust tries to be a "safe" language while "safety" isn't well defined. Rust said that crashing a process is always safe, so that when something unexpected happens we can always resort to crashing and don't risk doing anything unsafe.
The problem is that this definition of safety is very arbitrary. Sometimes crashing a process is safe (as in not causing serious problems), but sometimes not. Accessing an array out of bounds can be safe sometimes and sometimes not, and so on.
Rust says that here is a list of things that are always safe and here is a list of things that are always unsafe; then people want safety everywhere, so they take that definition of safety to other contexts where it doesn't make sense, like the kernel.
This kind of exchange was inevitable. The Rust crowd has this mentality that their code can be perfect (beyond even 'safe'), when in reality, as long as your foundational system inputs and capacity aren't perfect, no downstream thing can be either. It's harder to see in user space, but in the kernel you can't avoid reality. I hope the Rust crowd in general gets more moderate after this (or maybe not, but then that's only to the detriment of Rust's long-term success).
I find the Rust community to be hostile towards any inquisitive questions about their claims of "guaranteed memory safety". I've argued before that C is probably a safer language in practise for the Linux kernel than Rust because you would have to contort and write non-idiomatic Rust, using FFI, or deal with C data structures that will hamper/remove a lot of Rust's memory safe benefits. Rust is also harder to read than C - especially if you are trying to keep a mental model of the bitmap layout in your head and just dealing with low level code.
Of course I've had many negative comments from "Rustaceans", with their defence of their negativity being "we don't like it when someone comes into our community".
It is a shame, because Rust is a pretty cool language, but at this current rate I don't really see it becoming "the" systems programming language du jour.
I think Zig is probably a much better fit for writing a Kernel in a safer language. Again, rust programmers pile on and tell me that "zig isn't memory safe". We can't make use of other languages that bring safety benefits without the dog pile of "you should use Rust it's safe". Apparently nothing is safe other than Rust.
Hmmm, the linked email is not providing a lot of context, so surely I'm missing something, but there's something I definitely don't understand: is there not a third option between stopping the whole kernel on an error or allowing an incorrect result?
Maybe my misunderstanding comes from my ignorance of the kernel's architecture, but surely there's a way to segregate operations in logical fallible tasks, so that a failure inside of a task aborts the task but doesn't put down the entire thing, and in particular not a sensitive part like kernel error reporting? Or are we talking about panics in this sensitive part?
Bubbling up errors in fallible tasks can be implemented using panic by unwrapping up to the fallible task's boundary.
To my understanding this is exactly what any modern OS does with user space processes?
I always have the hardest time in discussions with people advocating for or against "you should stop computations on an incorrect result". Which computations should you stop? Surely we're not advocating for bursting the entire computer into flames. There has to be a boundary. So my take is to start defining the boundaries and, yes, to stop computations up to those boundaries.
>and in particular not a sensitive part like kernel error reporting
Things like "kernel error reporting" don't exist as a discrete element. Sure, you might decide to stop everything and only dump the log onto earlycon, but running with a serial cable to every system that crashed would be rather annoying. For all the kernel knows, the only way to get something to the outside world might be through a USB Ethernet adapter and a connection tunneled through a userspace TUN device, at which point essentially the whole kernel must continue to run.
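The "bubble errors up to the fallible task's boundary" idea from a few comments up can be sketched with ordinary `Result` propagation: each step returns a `Result`, `?` forwards failures, and only the task driver decides that the whole unit of work failed. A minimal sketch with hypothetical function names:

```rust
// One fallible step: parse a length out of a string.
fn parse_len(s: &str) -> Result<usize, String> {
    s.trim().parse::<usize>().map_err(|e| e.to_string())
}

// The task boundary: any step failing aborts this task only,
// not the whole program.
fn task(input: &str) -> Result<usize, String> {
    let n = parse_len(input)?; // error bubbles up to the boundary
    Ok(n * 2)
}
```

Whether the boundary is drawn with `Result` or with panic-and-unwind, the structure is the same: failure is contained to a unit of work with a defined edge.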
Rust provides certain guarantees of memory safety, which is great, but it's important to understand exactly what that means and not to oversell it.
"If there isn't 100% safety, then why bother?" It has been the usual argument for the last 40 years.
[1]: https://lkml.org/lkml/2022/9/19/640
None of that is going to save us from bad code.
Some of the biggest systems that run the world are written in neither memory-safe nor strongly typed languages.
Yes, I would say strongly typed and memory-safe languages make coding easier and do save time and some bugs.
But once you get past the kinds of errors that cause memory problems or bad types...
You are still left with 95% of the bugs and logic errors anyway.
Still, a 5% savings in productivity is not nothing.
Because calling a programming language "safe" is a provably wrong claim, and thus it triggers adversarial reactions.
Rust is a hardened language compared to C/C++, in the same way that Ada is a hardened language: different techniques, but a similar spirit.
[+] [-] staticassertion|3 years ago|reply
When people say "safe" there's a pretty precise meaning and it's not this.
Yes, anyone who believes Rust is 100% "safe" (by any definition) is wrong. That's not something you learn in kindergarten, though; it follows from understanding that Rice's Theorem is a generalization of the Halting Problem.
> So this is something that I really need the Rust people to understand. That whole reality of "safe" not being some absolute thing
The irony of Linus lecturing anyone on safety, lol. Anyway, "the Rust people" know this already; when they say "safe" they mean "memory safe" - https://en.wikipedia.org/wiki/Memory_safety
Anyway, dumb shit like this is why I've always been quietly dreading Rust in the kernel.
a) The kernel will never be safe software because the mainline developers don't want it to be or even know what safe means
b) It just invites more posts like this and puts Rust closer to one of the most annoying software communities
> Or, you know, if you can't deal with the rules that the kernel requires, then just don't do kernel programming.
Agreed on this point. I was very interested in kernel dev earlier in my career until I actually started to engage with it.
[+] [-] tmtvl|3 years ago|reply
I will agree with you that I dread Rust in the kernel, hopefully it can continue to exist there peacefully without people getting too hot under the collar about their personal hang-ups. For all its flaws Rust has an amazing value prop in the borrow checker and I would love for memory bugs to be eliminated for good.
[+] [-] 2OEH8eoCRo0|3 years ago|reply
You aren't safe on the FOB, in your car, in your barracks, or in your house. There are only degrees of safety. Very wise almost globally applicable words.
[+] [-] bobajeff|3 years ago|reply
[+] [-] robalni|3 years ago|reply
The problem is that this definition of safety is very arbitrary. Sometimes crashing a process can be safe (as in not causing serious problems) but sometimes not. Accessing an array out of bounds can be safe sometimes and sometimes not, and so on.
Rust says: here is a list of things that are always safe, and here is a list of things that are always unsafe. Then people want safety everywhere, so they take that definition of safety to other contexts where it doesn't make sense, like the kernel.
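The kernel's preferred "warn and continue" behavior can be sketched in Rust terms. The accessor below is hypothetical (the name `get_or_warn` and the fallback-to-default policy are assumptions for illustration, in the spirit of the kernel's `WARN_ON`): an out-of-bounds read is reported but not fatal, at the cost of returning a possibly meaningless value.

```rust
// Hypothetical "warn and continue" accessor: report the invariant
// violation, then keep going with a default value instead of panicking.
fn get_or_warn<T: Copy + Default>(slice: &[T], idx: usize) -> T {
    match slice.get(idx) {
        Some(&v) => v,
        None => {
            // In the kernel this would be a WARN_ON plus a stack trace;
            // here we just log to stderr and return a default.
            eprintln!("warning: index {idx} out of bounds (len {})", slice.len());
            T::default()
        }
    }
}

fn main() {
    let data = [10u32, 20, 30];
    assert_eq!(get_or_warn(&data, 1), 20);
    assert_eq!(get_or_warn(&data, 9), 0); // warns, then continues
}
```

The design trade-off is the one debated in the thread: the caller can't distinguish a real `0` from a fallback `0`, which is exactly why userspace Rust prefers `Option` or a panic.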
[+] [-] mslm|3 years ago|reply
[+] [-] flumpcakes|3 years ago|reply
Of course I've had many negative comments from "Rustaceans", with their defence of their negativity being "we don't like it when someone comes into our community".
It is a shame because Rust is a pretty cool language, but at this current rate I don't really see it being "the" systems programming language du jour.
I think Zig is probably a much better fit for writing a kernel in a safer language. Again, Rust programmers pile on and tell me that "Zig isn't memory safe". We can't make use of other languages that bring safety benefits without the dog pile of "you should use Rust, it's safe". Apparently nothing is safe other than Rust.
[+] [-] dureuill|3 years ago|reply
Maybe my misunderstanding comes from my ignorance of the kernel's architecture, but surely there's a way to segregate operations in logical fallible tasks, so that a failure inside of a task aborts the task but doesn't put down the entire thing, and in particular not a sensitive part like kernel error reporting? Or are we talking about panics in this sensitive part?
Bubbling up errors in fallible tasks can be implemented using panic by unwrapping up to the fallible task's boundary.
To my understanding this is exactly what any modern OS does with user space processes?
I always have the hardest time in discussions with people advocating for or against "you should stop computations on an incorrect result". Which computations should you stop? Surely we're not advocating for bursting the entire computer into flames. There has to be a boundary. So my take is to start by defining the boundaries, and yes, to stop computations up to those boundaries.
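The task-boundary idea described above can be sketched in userspace Rust (the `run_task` wrapper is a hypothetical name for illustration): a panic anywhere inside a task is caught at the task's boundary and converted into an error, so the failing task dies but the caller keeps running, much like an OS containing the crash of a user-space process.

```rust
use std::panic::{self, AssertUnwindSafe};

// Hypothetical fallible "task": any panic inside it is caught at the
// task boundary and turned into a Result, so only the task aborts.
fn run_task<T>(task: impl FnOnce() -> T) -> Result<T, String> {
    panic::catch_unwind(AssertUnwindSafe(task))
        .map_err(|_| "task panicked".to_string())
}

fn main() {
    // A failing task aborts only itself, not the whole program.
    let bad = run_task(|| {
        let v: Vec<u8> = vec![];
        v[0] // out-of-bounds: panics inside the task
    });
    assert!(bad.is_err());

    // An unrelated task is unaffected by the earlier failure.
    let good = run_task(|| 2 + 2);
    assert_eq!(good.unwrap(), 4);
}
```

The caveat, and arguably Linus's whole point, is that this relies on `panic = "unwind"`; the kernel has no unwinding runtime, so a panic there cannot be contained at a boundary like this.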
[+] [-] garaetjjte|3 years ago|reply
Things like "kernel error reporting" don't exist as a discrete element. Sure, you might decide to stop everything and only dump the log onto earlycon, but running with a serial cable to every system that crashed would be rather annoying. For all the kernel knows, the only way to get anything to the outside world might be through a USB Ethernet adapter and a connection tunneled by a userspace TUN device, at which point essentially the whole kernel must continue to run.