(no title)
i_am_a_peasant | 8 days ago
It happens with locking APIs, it happens with socket APIs, anything platform dependent.
Does the C++ standard give you an idiomatic way to set PTHREAD_RWLOCK_PREFER_READER_NP or PTHREAD_RWLOCK_PREFER_WRITER_NP explicitly when initializing a rwlock? Nope. Then you either roll your own or in Rust you reach for a crate where someone did the work of making a smarter primitive for you.
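For reference, the glibc-specific knob looks like this. A minimal sketch, assuming glibc (`_GNU_SOURCE` is required for the `_NP` extensions); the helper name is made up. Note that per the glibc man page, `PTHREAD_RWLOCK_PREFER_WRITER_NP` behaves the same as `PTHREAD_RWLOCK_PREFER_READER_NP`; the `NONRECURSIVE` variant is the one that actually prefers writers:

```c
#define _GNU_SOURCE  /* must precede any header include for _NP symbols */
#include <pthread.h>

/* Hypothetical helper: initialize an rwlock that actually prefers writers.
 * Returns 0 on success, an errno-style code otherwise. */
int init_writer_preferring_rwlock(pthread_rwlock_t *rw) {
    pthread_rwlockattr_t attr;
    int rc = pthread_rwlockattr_init(&attr);
    if (rc) return rc;
    rc = pthread_rwlockattr_setkind_np(
        &attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
    if (rc == 0)
        rc = pthread_rwlock_init(rw, &attr);
    pthread_rwlockattr_destroy(&attr);
    return rc;
}
```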
VorpalWay|8 days ago
And then you still have mutexes inside some dependency (e.g. grpc or what have you). What I would really like is the ability to change the defaults for all mutexes created in the program, and have everyone use the same std mutexes.
By the way: rwlocks are often a bad idea, since you still get cache contention between readers on the counter for number of active readers. Unless the time you hold the lock for is really long (several milliseconds at least) it usually doesn't improve performance compared to mutexes. Consider alternatives like seqlocks, RCU, hazard pointers etc instead, depending on the specifics of your situation (there is no silver bullet when it comes to performance in concurrent primitives).
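To illustrate the seqlock alternative: readers never write shared state, so they don't fight over a reader counter at all. A minimal single-writer sketch in C11 atomics (names hypothetical; the fence placement follows the usual published seqlock recipe, and the payload is atomic so a torn read is still well-defined):

```c
#include <stdatomic.h>

typedef struct {
    atomic_uint seq;    /* odd while a write is in progress */
    atomic_int payload; /* relaxed atomic so concurrent reads are defined */
} seqlock_t;

/* Single writer: bump seq to odd, publish data, bump seq back to even. */
void seqlock_write(seqlock_t *s, int v) {
    unsigned q = atomic_load_explicit(&s->seq, memory_order_relaxed);
    atomic_store_explicit(&s->seq, q + 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&s->payload, v, memory_order_relaxed);
    atomic_store_explicit(&s->seq, q + 2, memory_order_release);
}

/* Readers retry until they observe the same even seq before and after. */
int seqlock_read(const seqlock_t *s) {
    for (;;) {
        unsigned before = atomic_load_explicit(&s->seq, memory_order_acquire);
        if (before & 1) continue;  /* write in flight, retry */
        int v = atomic_load_explicit(&s->payload, memory_order_relaxed);
        atomic_thread_fence(memory_order_acquire);
        unsigned after = atomic_load_explicit(&s->seq, memory_order_relaxed);
        if (before == after) return v;  /* stable snapshot */
    }
}
```

The trade-off versus an rwlock: readers are wait-free on the fast path, but they can starve under a very busy writer, and the payload must be copyable rather than locked in place.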
loeg|8 days ago
Yes. I imagine subbing in a debug mutex implementation that tracks lock ordering and warns about lock-order inversions (similar to things like witness(4)): https://man.freebsd.org/cgi/man.cgi?witness(4)
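The core of such a debug mutex can be sketched in a few lines: assign each lock a fixed rank and flag any acquisition whose rank isn't strictly above everything the thread already holds. This is a toy version of the witness(4) idea, not its actual implementation; all names here are made up:

```c
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t mu;
    int rank;  /* locks must be taken in strictly increasing rank order */
} dbg_mutex_t;

#define MAX_HELD 64
static _Thread_local int held_ranks[MAX_HELD];
static _Thread_local int held_count = 0;
static _Thread_local int order_violations = 0;

void dbg_mutex_init(dbg_mutex_t *m, int rank) {
    pthread_mutex_init(&m->mu, NULL);
    m->rank = rank;
}

void dbg_mutex_lock(dbg_mutex_t *m) {
    /* Warn if any lock already held has an equal or higher rank. */
    for (int i = 0; i < held_count; i++) {
        if (held_ranks[i] >= m->rank) {
            order_violations++;
            fprintf(stderr, "lock-order warning: rank %d after %d\n",
                    m->rank, held_ranks[i]);
            break;
        }
    }
    pthread_mutex_lock(&m->mu);
    held_ranks[held_count++] = m->rank;
}

void dbg_mutex_unlock(dbg_mutex_t *m) {
    held_count--;  /* simplified: assumes LIFO unlock order */
    pthread_mutex_unlock(&m->mu);
}
```

A real checker (witness, lockdep) learns the order graph dynamically instead of requiring pre-assigned ranks, and detects cycles across threads.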
> rwlocks are often a bad idea
Yes.
> since you still get cache contention between readers on the counter for number of active readers
There are rwlock impls that put the reader counts on distinct cache lines per core, or something like that (e.g., folly::SharedMutex), mitigating this particular problem. But it isn't the only problem with rwlocks.
> Consider alternatives like seqlocks, RCU, hazard pointers etc instead, depending on the specifics of your situation (there is no silver bullet when it comes to performance in concurrent primitives).
Yes. :)
jcalvinowens|8 days ago
Assuming you're building the whole userspace at once with something like yocto... you can just patch pthread to change the default to PTHREAD_PRIO_INHERIT and silently ignore attempts to set it to PTHREAD_PRIO_NONE. It's a little evil though.
> By the way: rwlocks are often a bad idea
+1
nly|8 days ago
These are usually called "scalable locks", and the algorithms for them have been out there for decades. They are optimal from a cache-coherence point of view.
The issue with them is that it's impossible to support the same API you're used to with std::shared_mutex, as every thread needs its own cache line.
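The per-thread-line idea can be sketched with C11 atomics: each reader bumps a counter padded to its own cache line, and the writer scans all the slots. This is a toy illustration of the layout (names made up, fixed slot count, spinning instead of parking), not folly::SharedMutex itself:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define NSLOTS 8  /* e.g. one per core; real impls size this dynamically */

typedef struct {
    _Alignas(64) atomic_int readers;  /* padded onto its own cache line */
} reader_slot_t;

typedef struct {
    reader_slot_t slots[NSLOTS];
    atomic_bool writer;
} sharded_rwlock_t;

/* Caller passes a per-thread slot index (thread id % NSLOTS, say). */
void read_lock(sharded_rwlock_t *l, int slot) {
    for (;;) {
        atomic_fetch_add_explicit(&l->slots[slot].readers, 1,
                                  memory_order_acquire);
        if (!atomic_load_explicit(&l->writer, memory_order_acquire))
            return;  /* no writer: read section is open */
        /* writer active: back out and wait */
        atomic_fetch_sub_explicit(&l->slots[slot].readers, 1,
                                  memory_order_release);
        while (atomic_load_explicit(&l->writer, memory_order_acquire))
            ;  /* spin; a real impl would park the thread */
    }
}

void read_unlock(sharded_rwlock_t *l, int slot) {
    atomic_fetch_sub_explicit(&l->slots[slot].readers, 1,
                              memory_order_release);
}

void write_lock(sharded_rwlock_t *l) {
    bool expected = false;
    while (!atomic_compare_exchange_weak(&l->writer, &expected, true))
        expected = false;  /* one writer at a time */
    for (int i = 0; i < NSLOTS; i++)  /* wait for all reader slots to drain */
        while (atomic_load_explicit(&l->slots[i].readers,
                                    memory_order_acquire) > 0)
            ;
}

void write_unlock(sharded_rwlock_t *l) {
    atomic_store_explicit(&l->writer, false, memory_order_release);
}
```

Readers touching different slots never bounce a line between cores; the cost moves to the writer, which now pays O(slots) to acquire, which is exactly the API/footprint trade-off described above.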
surajrmal|8 days ago