top | item 32802939


lovingCranberry | 3 years ago

> "unsafe forbidden" (GH tag)

Unrelated to this project, but I dislike the obsession with "unsafe" within the Rust community.

Sometimes I need to dereference a raw pointer (rare!).

Sometimes I actually know what I'm doing (very rare!!).

Sometimes I rigorously tested my code (exceptionally rare!!!).
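That first case, dereferencing a raw pointer, is one of the few things Rust simply won't let you do outside an `unsafe` block. A minimal sketch (the variable names are mine, purely for illustration):

```rust
fn main() {
    let x = 42u32;
    // Creating a raw pointer is safe; only dereferencing it is not.
    let p: *const u32 = &x;

    // SAFETY: `p` was just derived from a live reference to `x`, so it is
    // non-null, aligned, and points to initialized memory. The `unsafe`
    // block is the programmer vouching for exactly that.
    let y = unsafe { *p };

    println!("{y}");
}
```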

When I see people making PRs (to e.g. Actix) to change unsafe code to safe code in an API the user *never* sees, which results in a performance penalty, just for the sake of not using the word "unsafe" in the code, I get mad. I totally understood Nikolay's reaction back then. Random people opened PRs and flamed him without knowing anything about the internals and the consequences.

The unsafe keyword means that I know what I'm doing. Just trust me for once, please.

Edit: if you actually want to know what you're doing too, I recommend writing some linked lists. I hate linked lists with a passion; I think they are a bad data structure and you should use `Vec` 90% of the time and `VecDeque` the other 10%. But they help you understand what you're spending your electricity on.
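For anyone taking up that exercise, even the safe, `Box`-based singly linked list forces you to think hard about ownership and moves (a doubly linked one pushes you toward `Rc`/`RefCell` or raw pointers). A minimal sketch, not production code:

```rust
struct Node<T> {
    elem: T,
    next: Option<Box<Node<T>>>,
}

struct List<T> {
    head: Option<Box<Node<T>>>,
}

impl<T> List<T> {
    fn new() -> Self {
        List { head: None }
    }

    fn push(&mut self, elem: T) {
        // take() moves the old head out of self so we can re-link it
        // without fighting the borrow checker.
        let next = self.head.take();
        self.head = Some(Box::new(Node { elem, next }));
    }

    fn pop(&mut self) -> Option<T> {
        self.head.take().map(|node| {
            self.head = node.next;
            node.elem
        })
    }
}

fn main() {
    let mut list = List::new();
    list.push(1);
    list.push(2);
    println!("{:?}", list.pop()); // last in, first out
}
```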


jeroenhd | 3 years ago

> Just trust me for once, please.

Why should I? Trusting random people is exactly why C(++) libraries are under constant attack through use-after-free and buffer overflow exploits. You can use `unsafe` in your code just fine, but don't expect others to just trust that you know what you're doing. There's no clear way to distinguish an expert in ownership and multithreading semantics from someone who copy-pasted their unsafe code from Stack Overflow.

I trust libraries that don't use `unsafe` more than I trust libraries that say they know what they're doing. It's nothing personal, it's just a preference for the type of bugs and vulnerabilities I'd like to avoid if I can.

As for whether the user sees it or not, that's irrelevant. The library can be buggy and I would never know. I'd rather have the borrow checker verify that the code isn't buggy than take your word for it. I know the borrow checker isn't perfect and I know there are good reasons why one would use `unsafe` in their code, but if possible I'd like the code I (re)use to be as safe as possible.
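The "I would never know" part is structural: an `unsafe` block inside a safe function is invisible at the call site. A sketch with a hypothetical helper (`first_byte` is my name, not from any real library):

```rust
// From the caller's side this function looks completely safe: no `unsafe`
// appears in its signature or at the call site.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we just checked the slice is non-empty, so index 0 is in bounds.
    // If this check were ever wrong, callers would hit undefined behaviour
    // with no warning -- which is the trust problem in a nutshell.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    println!("{:?}", first_byte(b"hello"));
    println!("{:?}", first_byte(&[]));
}
```

Tools like `cargo geiger` exist precisely because this hidden unsafe doesn't show up in any API surface.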

Actix is a library that very loudly proclaims "trust me, I know what I'm doing". Some people believe the authors, I prefer to use safer alternatives at the cost of minor performance penalties. Power to you if you disagree, but that's your choice and opinion as much as the authors' of libraries.

I don't think writing linked lists is enough to learn how to use `unsafe` code. You'd have to write a multithreaded linked list at the very least to get an understanding of why safe Rust code has all of these limitations. Even then you may never encounter race conditions when you run your code, but at least it's a start.

I, for one, know that I'm not a capable enough Rust programmer to write well-tested, provably correct, multithreaded pointer-magic code for performance optimization, and I don't care enough to learn that art at the moment. If I were to publish a Rust crate, I'd much prefer the code to be at a level I can trust myself to maintain, which means no unsafe code. You may be better versed in the necessary semantics than I am, but as a library owner I'd need to be able to maintain your code if you create a PR for my library, which means you'll have to dumb down your unsafe code for me, sorry.

CJefferson | 3 years ago

The problem is, do people know what they are doing?

I didn't follow the whole Actix situation carefully, but here is a discussion where someone found 15 ways to trigger undefined behaviour in safe code, caused by the unsafes in Actix:

https://github.com/actix/actix-web/issues/289#issuecomment-3...

Personally, I'd take halving the speed of my project to reduce the possibility of remote security holes. We live in a dangerous world nowadays, and we should take every chance to minimise the risk of serious security issues.

huimang | 3 years ago

What does it matter whether the user ever interacts with that API directly?

Rust is focused around *safety* and performance. I would rather take a slight performance hit and have safe code than trust some random person to write unsafe code 100% correctly. This is why tools like cargo-audit and cargo-geiger exist. IIRC Nikolay didn't communicate well about *why* unsafe was used, and just closed PRs that converted unsafe code to safe code.

> The unsafe keyword means that I know what I'm doing. Just trust me for once, please.

No, it means you think you know what you're doing.

It's more likely that you don't know what you're doing and/or are unnecessarily invoking unsafe for convenience, than the opposite. Theoretically I can look at your code and see if it's correct... or I could just use projects that don't use unsafe at all and save the time/headache.

When it comes to web server frameworks and security, I would like to see as little unsafe usage as possible, and documentation as to exactly why it's needed. This is why people switched to Warp/Tower and now Axum, which forbids unsafe code entirely.
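"Forbids entirely" is enforceable in the language itself: a crate can turn the `unsafe_code` lint into a hard error at the crate root. A minimal sketch:

```rust
// At the crate root: any `unsafe` block in this crate is now a compile error.
// (Dependencies are unaffected; this only covers the crate's own code.)
#![forbid(unsafe_code)]

fn safe_len(s: &str) -> usize {
    // Uncommenting the next line would fail to build:
    // unsafe { std::hint::unreachable_unchecked() };
    s.len()
}

fn main() {
    println!("{}", safe_len("axum"));
}
```

Unlike `#![deny(unsafe_code)]`, `forbid` also prevents inner code from re-allowing the lint.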

If all I cared about were eking out every last bit of performance at the cost of safety, I wouldn't be using Rust in the first place.

the__alchemist | 3 years ago

I think the different philosophies you see re `unsafe` may come down to two related distinctions that both come up here:

#1: Low-level versus application programming. In the former, `unsafe` is a regular part of (at least certain layers of) the code; i.e., you're working with memory (MMIO etc.) as a core operation, so you will need `unsafe`. The situation gets ambiguous for things like peripheral typestates and owned singletons for register blocks; the line blurs between what you're using the ownership model for and which APIs should be marked `unsafe`. For higher-level uses like desktop programs and web servers, you may not need any `unsafe`.

#2: Libraries versus programs. This is directly related to your main point: if you're using someone else's code as a dependency, `unsafe` can be a liability when you don't know why it's used. This is one aspect of the broader question of whether you can or should trust any given dependency, balancing not reinventing wheels against learning a library's quirks, edge cases, subtle bugs, and complexity. A spin on this is building infrastructure specifically; I think Actix's creators and users may have had different opinions on that.
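The MMIO case from point #1 can be sketched as follows. The register address would normally come from the chip's datasheet; here the example points at an ordinary variable so it can actually run on a host machine (a hypothetical stand-in, not real hardware code):

```rust
use std::ptr;

// The embedded MMIO pattern: a volatile read through a raw pointer.
// `read_volatile` stops the compiler from caching or eliding the read,
// which matters when the "memory" is really a hardware register.
fn read_status(addr: *const u32) -> u32 {
    // SAFETY: the caller must guarantee `addr` is a valid, mapped register
    // (or, as in this host-side demo, a live u32).
    unsafe { ptr::read_volatile(addr) }
}

fn main() {
    // Stand-in for a memory-mapped status register.
    let fake_register: u32 = 0b1010;
    let status = read_status(&fake_register as *const u32);
    println!("status = {status:#b}");
}
```

This is why `unsafe` is unavoidable at that layer: the compiler has no way to know which integer addresses correspond to live hardware.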

brundolf | 3 years ago

It's all context-dependent. You're right that people shouldn't just drop into a project they don't understand and demand that all unsafes be factored out, but just because an unsafe block is internal and carefully vetted that doesn't mean it's totally fine and chill either.

Here's a recent example where an unsafe led to a memory corruption vulnerability in a thoroughly battle-tested codebase: https://www.graplsecurity.com/post/attacking-firecracker