FpUser | 6 days ago
In my opinion this is mostly a problem for novices, or for people who only know how to program inside a very limited and restrictive environment. I write multithreaded business backends in modern C++ that accept outside HTTP requests for processing and do some heavy math lifting. Requests that are expected to finish quickly are processed immediately; long-running ones go to separate thread pools, which also manage throttling of background tasks, etc.
I did not find it particularly hard. All my "dangerous" stuff is centralized, was debugged to death years ago, and is used and reused across multiple products. It runs for years and years without a single hiccup. To me it is a non-issue.
I do realize that the situation is much tougher for those who write OS kernels, but that is a very specialized skill and they would know better what to do.
dspillett | 6 days ago
Most devs spend most of their time, if not all of it, on tasks that are either naturally sequential or don't benefit from threading enough over the safer option of multiple independent processes. So when they do come across a problem that is inherently parallelizable and needs the highest performance, it is not a familiar situation for them. Familiarity can make some rather complex processes feel simple.
The same can be said of event-loop-driven concurrency: for those who don't work that way often, the collection of potential race conditions there can feel daunting, so they appreciate their chosen platform holding their hand a bit.
FpUser | 6 days ago
Hand-holding is useful until it is not. It often comes with big trade-offs.