jblow | 5 years ago
There is this habit in both academia and industry where people say "as fast as C" and justify this by comparing to a tremendously slow C program, but don't even know they are doing it. It's the blind leading the blind.
The question you should be asking yourself is, "If all these claims I keep seeing about X being as fast as Y are true, then why does software keep getting slower over time?"
(If you don't get what I am saying here, it might help to know that performance programmers consider malloc to be tremendously slow and don't use it except at startup or in cases when it is amortized by a factor of 1000 or more).
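What "amortized" means here, sketched in Rust for concreteness (the function and names are invented for illustration): pay for one allocation up front, then reuse the same buffer across every iteration instead of calling the allocator each time.

```rust
// Hypothetical sketch of the amortize-the-allocation pattern:
// one allocation at startup, reused across all iterations.
fn total_upper_len(lines: &[&str]) -> usize {
    let mut buf = String::with_capacity(4096); // one allocation, up front
    let mut total = 0;
    for line in lines {
        buf.clear();               // keeps capacity; no free, no new malloc
        buf.push_str(line);        // reuses the same backing storage
        buf.make_ascii_uppercase();
        total += buf.len();
    }
    total
}
```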
burntsushi | 5 years ago
I wouldn't call that a first approximation. Take ripgrep as an example. In a checkout of the Linux kernel with everything in my page cache:
This alone, to me, says that "to a first approximation, the speed of your program in 2021 is determined by the number of cores it uses" would be a better statement than yours. But I wouldn't even say that, because performance is complicated and it's difficult to generalize.

Using Rust made it a lot easier to parallelize ripgrep.
> C allows you to do bulk memory operations, Rust does not (unless you turn off the things about Rust that everyone says are good). Thus C is tremendously faster.
Talk about nonsense. I do bulk memory operations in Rust all the time. Amortizing allocation is exceptionally common in Rust. And it doesn't turn off anything. It's used in ripgrep in several places.
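A minimal sketch of what a bulk memory operation looks like in safe Rust (this is illustrative, not ripgrep's actual code): `copy_from_slice` is a memcpy-style bulk copy, and no `unsafe` or opt-out of Rust's guarantees is needed.

```rust
// Hedged sketch: a bulk copy in safe, idiomatic Rust.
fn bulk_copy(src: &[u8]) -> Vec<u8> {
    let mut dst = vec![0u8; src.len()]; // sized once
    dst.copy_from_slice(src);           // single bulk memory copy, no unsafe
    dst
}
```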
> There is this habit in both academia and industry where people say "as fast as C" and justify this by comparing to a tremendously slow C program, but don't even know they are doing it. It's the blind leading the blind.
I've never heard anyone refer to GNU grep as a "tremendously slow C program."
> The question you should be asking yourself is, "If all these claims I keep seeing about X being as fast as Y are true, then why does software keep getting slower over time?"
There are many possible answers to this. The question itself is so general that I don't know how to glean much, if anything, useful from it.
jblow | 5 years ago
You chose an embarrassingly parallel problem, which most programs are not, so you cannot generalize this example across most software. When you try to parallelize a structurally complicated algorithm, the biggest issue is contention. I was leaving this out because it really is a 2nd order problem -- most software today would get faster if you just cleaned up its memory usage than if you just tried to parallelize it. (Of course it'd get even faster if you did both, but memory is the first-order effect.)
> There are many possible answers to this.
How come so few people are concerned with which answers to that question are actually true, but so many people are concerned with making performance claims?
pornel | 5 years ago
As I've pointed out in the article, Rust does give you precise control over memory layout. Heap allocations are explicit and optional. In safe code. You don't even need to avoid any nice features (e.g. closures and iterators can be entirely on stack, no allocations needed).
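A hedged illustration of that point (the function is invented for this example): the closure and every iterator adapter below are stack-only values, and the whole pipeline compiles down to a plain loop with no heap allocation.

```rust
// Hedged sketch: closures and iterator adapters living entirely on the stack.
fn doubled_sum_above(data: &[i64], threshold: i64) -> i64 {
    data.iter()
        .filter(|&&x| x > threshold) // closure capturing `threshold`, on the stack
        .map(|&x| x * 2)             // lazy adapter, no allocation
        .sum()                       // drives the whole chain as one loop
}
```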
Move semantics enables `memcpy`ing objects anywhere, so they don't have a permanent address, and don't need to be allocated individually.
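To sketch what that looks like (illustrative code, invented names): a move in Rust is a plain bit-copy, so values can be stored inline in a `Vec`'s single buffer with no per-object heap allocation and no stable address.

```rust
// Hedged sketch: moves are bit-copies; Points live inline in the Vec's buffer.
struct Point { x: f64, y: f64 }

fn sum_x() -> f64 {
    let p = Point { x: 1.0, y: 2.0 };
    let mut points: Vec<Point> = Vec::with_capacity(4); // one buffer for all points
    points.push(p); // `p` is moved -- effectively memcpy'd into the buffer
    points.push(Point { x: 3.0, y: 4.0 }); // constructed straight into place
    points.iter().map(|pt| pt.x).sum() // 1.0 + 3.0
}
```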
In this regard Rust is different from e.g. Swift and Go, which claim to have C-like speed, but will autobox objects for you.
zozbot234 | 5 years ago
Rust is now getting support for custom local allocators à la C++, including in default core types like Box<>, Vec<>, and HashMap<>. It's an unstable feature, hence not yet part of stable Rust, but it's absolutely being worked on.