damon_dam|1 year ago
Unfortunately x86 is not one of them. That's when CMPXCHG16B on x86-64 comes in handy, as per the linked article. But if you're going to write library code in 2025, it should at least support x86 and ARM.
Which brings up a more interesting argument IMO: platform fragmentation. Rust has approximately 1.5 advantages over C/C++. To displace them in a reasonable time, it probably needed 4 or 5. Its most significant contribution for the foreseeable future is platform fragmentation.
[1] https://en.wikipedia.org/wiki/Load-link/store-conditional
explodingwaffle|1 year ago
It could even be possible to make some sort of “ABA primitive” and use that for these sorts of data structures. This could well exist: I’ve not looked. These sorts of things really aren’t that common in my experience.
On LR/SC: to any atomics experts listening, isn’t it technically “obstruction-free” (per the Wikipedia definitions, at least) rather than lock-free? (Though in practice this makes basically no difference, and it still counts as lock-free in the C++ (and Rust) sense.) Just something that stuck out the last time I got sucked into this rabbit hole.
adrian_b|1 year ago
They are the primitives with which you can implement shared data structures that are "lock-free" or "obstruction-free".
Anything that can be implemented with compare-and-swap can be implemented with LL/SC, and vice-versa.
The only difference between compare-and-swap and LL/SC is how they detect that the memory word has not been modified since the previous reading.
Compare-and-swap just compares the current value with the old value, while LL/SC uses a monitoring circuit implemented in the cache memory controller, which records if any store has happened to that memory location.
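The value-only check that compare-and-swap performs can be made concrete with a short Rust sketch; `compare_exchange` is the standard-library CAS, while the `cas_demo` helper is my own illustration, not anything from the thread:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// CAS succeeds only when the current value equals the expected one;
// it has no memory of intervening stores, only of the value itself.
fn cas_demo() -> (bool, bool) {
    let cell = AtomicUsize::new(5);
    // Succeeds: the current value (5) matches the expected value.
    let first = cell
        .compare_exchange(5, 6, Ordering::AcqRel, Ordering::Acquire)
        .is_ok();
    // Fails: the current value is now 6, so the expected 5 no longer matches.
    let second = cell
        .compare_exchange(5, 7, Ordering::AcqRel, Ordering::Acquire)
        .is_ok();
    (first, second)
}
```

Because the check is purely on the value, a location that goes A → B → A between the read and the CAS looks unmodified, which is exactly the ABA problem discussed below.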
Therefore LL/SC is free of the ABA problem, while the ABA problem of compare-and-swap has been recognized ever since that instruction was invented.
Compare-and-swap was invented by IBM, which introduced the instruction in System/370 in 1973. At the same time, IBM introduced compare-double-and-swap, which solves the ABA problem by pairing the value with a version counter.
Intel added compare-and-swap, renamed CMPXCHG, to the 80486 in 1989, and compare-double-and-swap, renamed CMPXCHG8B, to the Pentium in 1993. On x86-64, CMPXCHG8B became CMPXCHG16B.
LL/SC was invented in 1987, in the S-1 Advanced Architecture Processor at Lawrence Livermore National Laboratory. It was added to MIPS II in 1989, and from there it spread over the following years to most RISC ISAs.
Using either compare-double-and-swap or LL/SC is equivalent, because both are free of the ABA problem.
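The version-counter trick behind compare-double-and-swap can be emulated with single-width CAS by packing a generation counter next to the value; in this Rust sketch (the `Versioned` type and its methods are my own illustration), a stale snapshot is rejected even after the value has returned to its old state:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Pack a 32-bit value and a 32-bit generation counter into one AtomicU64.
// Every successful update bumps the counter, so an A -> B -> A sequence on
// the value still changes the packed word, and a CAS against a stale
// snapshot fails.
struct Versioned(AtomicU64);

impl Versioned {
    fn new(v: u32) -> Self {
        Versioned(AtomicU64::new(v as u64))
    }

    // Returns (value, generation).
    fn load(&self) -> (u32, u32) {
        let w = self.0.load(Ordering::Acquire);
        (w as u32, (w >> 32) as u32)
    }

    // Succeeds only if both the value and the generation are unchanged.
    fn try_set(&self, seen: (u32, u32), new: u32) -> bool {
        let old = (seen.0 as u64) | ((seen.1 as u64) << 32);
        let next = (new as u64) | ((seen.1.wrapping_add(1) as u64) << 32);
        self.0
            .compare_exchange(old, next, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
    }
}
```

On x86-64, the same idea applied to a pointer plus a counter is what CMPXCHG16B exists for.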
However, there are many cases where optimistic access to shared data structures, as implemented with compare-and-swap or LL/SC, results in lower performance than access based on mutual exclusion or on dynamic partitioning of the shared data structure (both of which are implemented with atomic instructions such as atomic exchange or atomic fetch-and-add).
This is why the 64-bit Arm ISA, AArch64, had to correct its initial mistake of providing only LL/SC by adding a set of atomic instructions, including atomic exchange and atomic fetch-and-add, in the first revision of the ISA, Armv8.1-A.
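The difference shows up even for a plain increment: a CAS-based version needs a retry loop, while fetch-and-add is a single atomic read-modify-write (LOCK XADD on x86, LDADD on Armv8.1-A). A Rust sketch, with hypothetical helper names of my own:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Optimistic increment: read, compute, try to publish; retry on contention.
fn incr_cas(c: &AtomicU64) {
    let mut cur = c.load(Ordering::Relaxed);
    loop {
        match c.compare_exchange_weak(cur, cur + 1, Ordering::Relaxed, Ordering::Relaxed) {
            Ok(_) => return,
            Err(now) => cur = now, // another thread won; retry with the fresh value
        }
    }
}

// fetch_add is one atomic instruction on hardware that has it; no retry loop.
fn incr_faa(c: &AtomicU64) {
    c.fetch_add(1, Ordering::Relaxed);
}
```

Under heavy contention the CAS loop can spin repeatedly, while the fetch-and-add always completes in one shot.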
damon_dam|1 year ago
Of course you can. I just meant that the linked article didn't.
> On LR/SC: to any atomics experts listening, isn’t it technically “obstruction-free” (as per the Wikipedia definitions at least) rather than lock-free?
The better criterion IMO is loop-free, which makes it a little easier to understand. Consider the following spin-locking code (with overabundant memory barriers):
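The code block itself appears to have been lost from the comment; as a stand-in, here is a minimal test-and-set spin lock in Rust, with `SeqCst` everywhere playing the role of the "overabundant memory barriers". The `SpinLock` type is my reconstruction, not the original code:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Minimal test-and-set spin lock. SeqCst on every operation is stronger
// than needed (Acquire/Release would do) -- deliberately overabundant.
struct SpinLock(AtomicBool);

impl SpinLock {
    const fn new() -> Self {
        SpinLock(AtomicBool::new(false))
    }

    fn lock(&self) {
        // Loop until this thread is the one that flips false -> true.
        while self.0.swap(true, Ordering::SeqCst) {
            std::hint::spin_loop();
        }
    }

    fn unlock(&self) {
        self.0.store(false, Ordering::SeqCst);
    }
}
```

The `while` loop is the point: a thread can spin indefinitely while another holds the flag, so this is clearly not loop-free.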
Here's the equivalent LL/SC version:
The pointer-tagging version is also obviously not loop-free. Which is faster, in which cases, and by how much? The oversimplified answer is that LL/SC is probably slightly faster than spin-locking on most platforms and cases, but pointer-tagging might not be.
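The LL/SC code block also appears to have been dropped from the comment. A rough stand-in is Rust's `compare_exchange_weak`, which compiles to a load-linked/store-conditional pair on Arm and RISC-V and is allowed to fail spuriously, hence the mandatory retry loop (the `update_with_llsc` helper is my own sketch):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Apply `f` atomically to the cell, LL/SC-style. compare_exchange_weak is
// the portable face of LL/SC: it may fail even when the value matches
// (a spurious SC failure), so it only makes sense inside a loop.
fn update_with_llsc(cell: &AtomicU64, f: impl Fn(u64) -> u64) -> u64 {
    let mut cur = cell.load(Ordering::Relaxed);
    loop {
        match cell.compare_exchange_weak(cur, f(cur), Ordering::AcqRel, Ordering::Acquire) {
            Ok(prev) => return prev,
            // Either another thread stored, or the SC failed spuriously.
            Err(now) => cur = now,
        }
    }
}
```

This is also not loop-free, which is the point of the comparison: the loop retries only when something actually interfered (or on a spurious failure), rather than while a lock is held.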
gpderetta|1 year ago
saghm|1 year ago
I'm honestly not sure what any of this means. How are you counting "advantages", and how did you come up with the requisite number being 4 or 5? It's not clear to me why all advantages would be equal in magnitude. Maybe this is what you intended to convey by not using whole numbers, although it seems like trying to estimate how advantages compare like this in a way that was precise enough to be informative would be difficult, to say the least.
To be clear, I don't disagree with you that C/C++ are not in any immediate risk of being displaced (and I'd go further and argue that C and C++ are distinct enough in how they're used that displacing one wouldn't necessarily imply displacing the other as well). I just don't think I've ever seen things quantified in this way before, and it's confusing enough to me that I don't want to discount the possibility that I'm misunderstanding something.
damon_dam|1 year ago
The vagueness was intentional. There is of course no homogeneous way of combining advantages into a cardinal measure. It's just a rhetorical device.
The point is that it falls short of the amount needed. Another, more subtle point is that I didn't count disadvantages. The argument applies even if you think Rust doesn't have any.
wyager|1 year ago
I can think of at least 3 that deserve a whole integral bullet point:
* ADTs
* Hindley-Milner/typeclass type system
* Lifetimes and affine types
And a bunch of minor ones that count for something like 0.1-0.5 of an advantage.
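For what it's worth, the first bullet is easy to illustrate: an ADT plus an exhaustive `match` means the compiler rejects any match that misses a case. This toy `Shape` example is mine, not from the thread:

```rust
// A sum type: a Shape is exactly one of these variants, nothing else.
enum Shape {
    Circle { r: f64 },
    Rect { w: f64, h: f64 },
}

// Omitting either arm here is a compile error, not a runtime surprise.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { r } => std::f64::consts::PI * r * r,
        Shape::Rect { w, h } => w * h,
    }
}
```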
I would guess we're on track for a majority-of-new-code switchover point somewhere around 2030-2035.
milesrout|1 year ago
What is an advantage or a disadvantage depends on who you are. Personally, I find the disadvantages (as I see them) outweigh the advantages. Others believe that some of those disadvantages are actually good things. And some of them depend on whether you are comparing to C or C++: compared to C, Rust has many disadvantages, but C++ has many of the same disadvantages, and often is worse.
I don't think Rust will ever have more new code being written in it than C and C++ combined. I doubt it will overtake either individually.
oneshtein|1 year ago
* Memory safety in safe code
* Fearless concurrency
* match operator and destructuring
robocat|1 year ago