hsjsbeebue | 1 year ago
I imagine something to do with memory usage or avoiding thread or thread pool starvation issues. Maybe performance too?
toast0 | 1 year ago
If you're running real OS threads, I think task switching is going to mean real context switches, which can trigger Spectre mitigations that flush your CPU caches; userspace task switching can avoid that.
You may also end up with more system calls with OS threads, because an async runtime can aggregate things a bit (many blocking reads become one kqueue/epoll/select call). Then again, maybe that's actually a wash: you still need a read call once the FD is ready, whereas a real blocking read makes only a single call.
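A minimal sketch of the aggregation toast0 describes, using Python's selectors module (which wraps epoll/kqueue/select): one multiplexed wait call covers many FDs at once, but each ready FD still needs its own recv, which is why it may be a wash. The socketpair setup is just a stand-in for real peers.

```python
import selectors
import socket

# One multiplexed wait (epoll/kqueue/select under the hood) replaces
# many blocking reads, but each ready FD still needs its own recv().
sel = selectors.DefaultSelector()

pairs = [socket.socketpair() for _ in range(3)]
for reader, _writer in pairs:
    reader.setblocking(False)
    sel.register(reader, selectors.EVENT_READ)

# Simulate peers sending data.
for i, (_reader, writer) in enumerate(pairs):
    writer.send(f"msg{i}".encode())

received = []
while len(received) < len(pairs):
    # A single syscall waits on all registered FDs at once...
    for key, _events in sel.select():
        # ...but each ready FD still costs one recv() call.
        received.append(key.fileobj.recv(1024).decode())
        sel.unregister(key.fileobj)

print(sorted(received))  # ['msg0', 'msg1', 'msg2']
```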
treflop | 1 year ago
I pray for all the code written by people who think they didn’t need to learn about synchronization because they wrote asynchronous code.
And unfortunately I’ve come across, and had to fix, asynchronous code with race conditions.
You cannot escape learning about synchronization. Writing race-condition-free code is not hard.
What is actually hard is writing fast lock-free routines, but that’s more a parallelism problem that affects both threaded and asynchronous code. And most people will never need to reach that level of code optimization for their work.
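treflop's point that plain synchronization is not hard can be sketched with a hypothetical counter: the compound read-modify-write below would lose updates under threads without the lock, and an ordinary lock is all it takes to fix it.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n: int) -> None:
    global counter
    for _ in range(n):
        # Without the lock, the "read, add, write" steps can interleave
        # between threads and lose updates; holding the lock makes the
        # compound operation atomic.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; often less without it
```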
kaba0 | 1 year ago
Also, Rust’s ownership model only prevents data races, which are just the tip of the iceberg of race conditions, and I don’t think any general model makes it possible to statically determine that arbitrary multithreaded code is safe. Nonetheless, multithreading is the only way to speed up most kinds of code, so the benefits may well outweigh the costs in many cases.
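kaba0's distinction between data races and race conditions can be illustrated outside Rust too. In this hypothetical sketch, every individual access to the shared dict is guarded by a lock, so there is no data race, yet the check-then-act in racy_take is still a race condition because another thread can run between the two locked sections. Widening the critical section to cover the whole compound operation is what actually fixes it.

```python
import threading

inventory = {"widgets": 1}
lock = threading.Lock()

def racy_take() -> bool:
    # Each access is individually synchronized: no data race...
    with lock:
        available = inventory["widgets"] > 0
    # ...but another thread can take the last widget right here, so the
    # check and the decrement together are still a race condition.
    if available:
        with lock:
            inventory["widgets"] -= 1
        return True
    return False

def safe_take() -> bool:
    # Correct: the whole check-then-act is one critical section.
    with lock:
        if inventory["widgets"] > 0:
            inventory["widgets"] -= 1
            return True
        return False

# With one widget left and two concurrent callers, racy_take can drive
# the count to -1; safe_take never can.
```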
lelanthran | 1 year ago
JavaScript has race conditions too, even with no threads involved.
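The effect lelanthran describes shows up in any single-threaded event loop; here is the same shape sketched in Python's asyncio, where the interleaving point is the await rather than a preemptive thread switch. The bank-balance scenario is hypothetical.

```python
import asyncio

balance = 100

async def withdraw(amount: int) -> None:
    global balance
    if balance >= amount:
        # The await yields to the event loop; the other coroutine runs
        # the same balance check before we resume. No threads involved.
        await asyncio.sleep(0)
        balance -= amount

async def main() -> None:
    # Both withdrawals pass the check against the original balance,
    # so the account goes negative.
    await asyncio.gather(withdraw(80), withdraw(80))
    print(balance)

asyncio.run(main())
```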