(no title)
EtCepeyd | 1 year ago
For the very good reason that the underlying math is insanely complicated and tiresome for mere practitioners (which, although I have a background in math, I openly aim to be).
For example, even if you assume sequential consistency (which is an expensive assumption) in a multi-threaded C or C++ program, reasoning about the program isn't easy. And once you consider barriers, atomics, and load-acquire/store-release explicitly, the "SMP" (shared memory) proposition falls apart, and you can't avoid programming for a message-passing system with independent actors -- be those separate networked servers, or separate CPUs on a board. I claim that struggling with async messaging between independent peers as a baseline is not why most people get interested in programming.
Our systems (= normal motherboards on one end, and networked peer-to-peer systems on the other) have become so concurrent that doing nearly anything efficiently nowadays requires us to think about messaging between peers, and that's very, very foreign to our traditional, sequential, imperative programming languages. (It's also foreign to how most of us think.)
Thus, I certainly don't want a simple (but leaky) software / programming abstraction that hides the underlying hardware complexity; instead, I want the hardware to be simple (as little internally distributed as possible), so that the simplicity of the (sequential, imperative) programming language then reflects and matches the hardware well. I think this can only be found in embedded nowadays (if at all), which is why I think many have been drawn to embedded recently.
hinkley|1 year ago
So because we discovered a lucrative, embarrassingly parallel problem domain, that's basically what the entire industry has been doing for the 15 years since multicore became unavoidable. We have web services and compilers being multi-core, and not a lot in between. How many video games still run just three threads, each for a completely distinct task?
gue5t|1 year ago
linkregister|1 year ago
What is this referring to? It sounds like a fascinating problem.
gmadsen|1 year ago
hinkley|1 year ago
With coroutines it’s not the network but the L1 cache. You’re better off running one function a dozen times and then running another a dozen times than running each in turn.
EtCepeyd|1 year ago
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines...