item 47100063

i_am_a_peasant | 8 days ago

You know, it often is the case that APIs like this, both in C++ and Rust, don't offer you enough knobs once your use case deviates from the trivial.

It happens with locking APIs, it happens with socket APIs, anything platform dependent.

Does the C++ standard give you an idiomatic way to set PTHREAD_RWLOCK_PREFER_READER_NP or PTHREAD_RWLOCK_PREFER_WRITER_NP explicitly when initializing a rwlock? Nope. Then you either roll your own or in Rust you reach for a crate where someone did the work of making a smarter primitive for you.
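For the curious, rolling your own here means dropping down to the glibc-specific attribute API. A sketch (Linux/glibc only; note that glibc actually treats PTHREAD_RWLOCK_PREFER_WRITER_NP as a no-op, so the variant that really prioritizes writers is the non-recursive one):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <pthread.h>

/* Initialize an rwlock with explicit writer preference, to avoid
   writer starvation under a heavy read load. glibc-specific. */
static int rwlock_init_prefer_writer(pthread_rwlock_t *lock)
{
    pthread_rwlockattr_t attr;
    int rc = pthread_rwlockattr_init(&attr);
    if (rc != 0)
        return rc;
    /* PTHREAD_RWLOCK_PREFER_WRITER_NP is silently ignored by glibc;
       only the non-recursive writer preference takes effect. */
    rc = pthread_rwlockattr_setkind_np(
        &attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
    if (rc == 0)
        rc = pthread_rwlock_init(lock, &attr);
    pthread_rwlockattr_destroy(&attr);
    return rc;
}
```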


VorpalWay|8 days ago

Yeah, you can't enable priority inheritance for mutexes in std of either C++ or Rust. Which is a show stopper for hard realtime (my dayjob).
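For comparison, the underlying pthread knob that neither stdlib exposes is roughly this (a sketch assuming Linux/glibc; a low-priority holder gets boosted instead of priority-inverting a realtime thread):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <pthread.h>

/* Initialize a mutex with priority inheritance enabled --
   the setting std::mutex / Rust's std Mutex give you no way to reach. */
static int mutex_init_prio_inherit(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;
    /* PTHREAD_PRIO_INHERIT: a thread holding the mutex runs at the
       priority of the highest-priority thread blocked on it. */
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```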

And then you have mutexes internally inside some dependency still (e.g. grpc or what have you). What I would really like is the ability to change defaults for all mutexes created in the program, and have everyone use the same std mutexes.

By the way: rwlocks are often a bad idea, since you still get cache contention between readers on the counter for the number of active readers. Unless the time you hold the lock for is really long (several milliseconds at least) it usually doesn't improve performance compared to mutexes. Consider alternatives like seqlocks, RCU, hazard pointers etc instead, depending on the specifics of your situation (there is no silver bullet when it comes to performance in concurrent primitives).
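To illustrate the seqlock idea: readers never write any shared state, so they don't bounce a counter cache line between cores. A minimal single-writer sketch (using default seq_cst atomics for clarity; a tuned implementation would use weaker memory orderings):

```c
#include <assert.h>
#include <stdatomic.h>

typedef struct {
    atomic_uint seq;   /* odd while a write is in progress */
    atomic_int value;  /* the protected data (atomic to avoid a data race) */
} seqlock_t;

/* Single writer: bump seq to odd, update the data, bump back to even. */
static void seqlock_write(seqlock_t *l, int v)
{
    atomic_fetch_add(&l->seq, 1);
    atomic_store(&l->value, v);
    atomic_fetch_add(&l->seq, 1);
}

/* Readers retry if a write overlapped their read; they modify nothing. */
static int seqlock_read(seqlock_t *l)
{
    unsigned s1, s2;
    int v;
    do {
        s1 = atomic_load(&l->seq);
        v  = atomic_load(&l->value);
        s2 = atomic_load(&l->seq);
    } while (s1 != s2 || (s1 & 1u));
    return v;
}
```

The trade-off: readers are wait-free in the uncontended case, but writers can starve readers, and the protected data must be safely readable mid-update (hence the atomic field here).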

loeg|8 days ago

> What I would really like is the ability to change defaults for all mutexes created in the program, and have everyone use the same std mutexes.

Yes. I imagine subbing out a debug mutex implementation that tracks lock ordering and warns about order inversion (similar to things like witness(4)): https://man.freebsd.org/cgi/man.cgi?witness(4)

> rwlocks are often a bad idea

Yes.

> since you still get cache contention between readers on the counter for number of active readers

There are rwlock impls that put the reader counts on distinct cache lines per core, or something like that (e.g., folly::SharedMutex), mitigating this particular problem. But it isn't the only problem with rwlocks.
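The striping trick looks roughly like this (a toy sketch, not folly's actual internals; the slot count and mapping from thread to slot are made-up assumptions):

```c
#include <assert.h>
#include <stdatomic.h>

#define SLOTS 8 /* illustrative; real impls size this to core count */

/* Each reader slot gets its own cache line, so concurrent readers
   incrementing different slots don't invalidate each other's lines. */
struct reader_slot {
    _Alignas(64) atomic_int count;
};

struct striped_counter {
    struct reader_slot slots[SLOTS];
};

static void reader_enter(struct striped_counter *c, unsigned thread_id)
{
    atomic_fetch_add(&c->slots[thread_id % SLOTS].count, 1);
}

static void reader_exit(struct striped_counter *c, unsigned thread_id)
{
    atomic_fetch_sub(&c->slots[thread_id % SLOTS].count, 1);
}

/* The cost moves to the writer, which must scan every slot to learn
   whether any reader is still active. */
static int readers_active(struct striped_counter *c)
{
    int total = 0;
    for (int i = 0; i < SLOTS; i++)
        total += atomic_load(&c->slots[i].count);
    return total;
}
```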

> Consider alternatives like seqlocks, RCU, hazard pointers etc instead, depending on the specifics of your situation (there is no silver bullet when it comes to performance in concurrent primtitves).

Yes. :)

jcalvinowens|8 days ago

> What I would really like is the ability to change defaults for all mutexes created in the program, and have everyone use the same std mutexes.

Assuming you're building the whole userspace at once with something like yocto... you can just patch pthread to change the default to PTHREAD_PRIO_INHERIT and silently ignore attempts to set it to PTHREAD_PRIO_NONE. It's a little evil though.

> By the way: rwlocks are often a bad idea

+1

nly|8 days ago

There are rwlock implementations where waiters (whether readers or writers) don't contend on a shared cache line (they only touch it once to enqueue themselves, not to spin/wait).

These are usually called "scalable locks" and the algorithms for them have been out there for decades. They are optimal from a cache coherence point of view.

The issue with them is that it's impossible to support the same API as you're used to with std::shared_mutex, as every thread needs its own line.

MaulingMonkey|8 days ago

One thing I appreciate about Rust's stdlib is that it exposes enough platform details to allow writing the missing knobs without reimplementing the entire wrapper (e.g. File, TcpStream, etc. allow access to raw file descriptors, OpenOptionsExt lets me use FILE_FLAG_DELETE_ON_CLOSE on Windows, etc.)

pjmlp|8 days ago

Because usually that is OS-specific and not portable, so it can't be part of a standard library that is supposed to work everywhere.

surajrmal|8 days ago

The NP suffix means the API is not portable. There are Linux-specific extensions for many things, but not everything has one. There is also nothing wrong with needing to use an alternative to the standard library if you have more niche requirements.