ahepp|21 days ago
I think in the example the OP is making, the work is not useless. They're saying that if you had a system doing the same work with, say, 60 processes, you're better off splitting that into 600 processes and a couple thousand threads, since that would allow granular classification of tasks by their latency sensitivity.
ua709|21 days ago
However, given the trend in modern software engineering to break work into units, and the fact that thread switches on modern hardware happen very quickly, being able to distribute that work across different compute clusters that make different optimization choices is a good thing and lets schedulers get closer to optimal results.

So it really boils down to this: if the gains from doing the work on different compute outweigh the cost of splitting and distributing it, it's a win. And for most modern software on most modern hardware, the win is very significant.
As always, YMMV
locknitpicker|21 days ago
This is far from a hypothesis; it's an accurate description of your average workstation. I recommend you casually check the list of processes running at any given moment on any random desktop or laptop within a 5-meter radius.