spinningslate | 1 year ago
I cut my teeth on OS/2 in the early 90s, where using threads and processes to handle concurrent tasks was the recommended programming model. It was well-supported by the OS, with a comprehensive API for process/thread creation, deletion and inter-task communication. It was a very clear mental model: put each sequential stream of operations in its own process/thread, and let the operating system deal with scheduling - including pausing tasks that were blocked on I/O.
My next encounter was Windows 3, with its event loop and cooperative multi-tasking. Whilst the new model was interesting, I was perplexed by needing to interleave my domain code with manual scheduling decisions. It felt haphazard and unsatisfactory that the OS didn't handle scheduling for me, and it made me appreciate the benefits of OS-provided pre-emptive multi-tasking all the more.
The contrast in models was stark: pre-emptive multi-tasking seemed so obviously better. And so it proved: NT bestowed it on Windows, and NeXT did the same for the Mac.
Which brings us to today. I feel like I'm going through Groundhog Day with the renaissance of cooperative multi-tasking: promises, async/await and such. There's another topic today [0] that illustrates the challenges of attempting to perform actions concurrently in javascript. It brought back all the perplexity and haphazard scheduling decisions of my Windows 3 days.
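To make that scheduling burden concrete, here's a rough sketch (the `processAll`/`processChunk` names and the chunk size are my own hypothetical choices, not from the linked thread): a CPU-bound loop on the event loop starves everything else unless the application code yields by hand - exactly the kind of manual scheduling decision the OS used to make for you.

```javascript
// Sketch of cooperative scheduling done by hand. processChunk is a
// stand-in for domain logic; the chunk size of 1000 is arbitrary.
async function processAll(items, processChunk) {
  for (let i = 0; i < items.length; i++) {
    processChunk(items[i]);
    // Without this explicit yield, timers, I/O callbacks and UI
    // events are blocked until the whole loop finishes.
    if (i % 1000 === 0) {
      await new Promise(resolve => setTimeout(resolve, 0));
    }
  }
}
```

Forget the yield, and every other task sharing the thread stalls for the duration of the loop - the application code, not the runtime, is doing the scheduling.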
As you note:
> Of course, context switching between different tasks is not free, and event loops have frequently been able to provide higher efficiency.
This is true: having an OS or language runtime manage scheduling does incur an overhead. And, indeed, there are benchmarks [1] that can be interpreted as illustrating the performance benefits of cooperative over pre-emptive multitasking.
That may be true in isolation, but it inevitably places the scheduling burden back on the application developer. Concurrent sequences of application domain operations - with the OS/runtime scheduling them - seems like a better division of responsibility.
[0]: https://news.ycombinator.com/item?id=42592224
[1]: https://hez2010.github.io/async-runtimes-benchmarks-2024/tak...
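Benchmarks like [1] typically measure the cost of spawning a large number of concurrent tasks. A minimal JavaScript analogue (my own sketch, not the benchmark's actual code) shows why the cooperative side looks cheap: each task is a small promise object on one thread, not an OS thread with its own stack.

```javascript
// Spawn n concurrent tasks that each await a timer and resolve to
// their index. All of them share a single OS thread and the event
// loop interleaves them, which is where the efficiency claim comes from.
async function spawnTasks(n, delayMs) {
  const tasks = [];
  for (let i = 0; i < n; i++) {
    tasks.push(new Promise(resolve => setTimeout(() => resolve(i), delayMs)));
  }
  return Promise.all(tasks);
}
```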
pjmlp | 1 year ago
To this day it still seems OS/2 had a much better approach to component development and related tooling than even the COM reboot that WinRT offers.
jerf | 1 year ago
Somehow we collectively took all the incredible experience with cooperative multitasking gathered over literally decades prior to Node and just chucked it in the trash can and had to start over at Day Zero re-learning how to use it.
This is particularly pernicious because the major issue with async is that it scales more poorly than threads, due to growing design complexity and the ever-increasing chance that the implicit requirements each async task places on the behavior of other tasks in the system will conflict. You have to build systems of a certain size before it reveals its true colors. By then it's too late to change those systems.
jandrewrogers | 1 year ago
The mistake most people are making these days is mixing paradigms within the same thread of execution, sprinkling async throughout explicitly or implicitly synchronous architectures. There are deep architectural conflicts between synchronous and asynchronous designs, and trying to use both at the same time in the same thread is a recipe for complicated code that never quite works right.
If you are going to use async, you have to commit to it with everything that entails if you want it to work well, but most developers don't want to do that.
mpweiher | 1 year ago
The systems that can potentially benefit from async/await are a tiny subset of what we build. The rest just don't even have the problem that async/await purports to solve, never mind if it actually manages to solve it.