deviantbit | 11 months ago
He really science’d the heck out of that one. I’m getting tired of seeing opinions dressed up as insight—especially when they’re this detached from how real systems actually work.
I worked on the Cell processor and I can tell you it was a nightmare. It demanded an unrealistic amount of micromanagement and gave developers rope to hang themselves with. There’s a reason it didn’t survive.
What amazes me more is the comment section—full of people waxing nostalgic for architectures they clearly never had to ship stable software on. They forget why we moved on. Modern systems are built with constraints like memory protection, isolation, and stability in mind. You can’t just “flatten address spaces” and ignore the consequences. That’s how you end up with security holes, random crashes, and broken multi-tasking. There's a whole generation of engineers that don't seem to realize why we architected things this way in the first place.
I will take how things are today over how things used to be in a heartbeat. I really believe I should spend two weeks requiring students to write code on an Amiga, with all the programs having to run at the same time. If any one of them crashes, they all fail my course. A newfound appreciation may flourish.
ryukoposting|11 months ago
I was made to witness the horrors of archaic computer architecture in such depth that I could reproduce them on totally unrelated hardware.
deviantbit|11 months ago
Sounds like you had a good mentor. Buy them lunch one day.
znpy|11 months ago
It took three hours and four of us to code an integer division start to finish (we were like 17, though).
The amount of understanding it gave has been unrivalled so far.
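For context, the textbook algorithm an exercise like that implements is shift-and-subtract (restoring) long division. A minimal C sketch, with names and types of my own choosing (the original would presumably have been hand-written assembly):

```c
#include <assert.h>
#include <stdint.h>

/* Restoring shift-and-subtract division: process the dividend one bit
   at a time, MSB first, subtracting the divisor whenever it fits. */
static uint32_t div_u32(uint32_t dividend, uint32_t divisor,
                        uint32_t *remainder) {
    assert(divisor != 0);
    uint32_t quotient = 0;
    uint64_t rem = 0;  /* 64-bit so the left shift cannot overflow */
    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((dividend >> i) & 1); /* bring down next bit */
        if (rem >= divisor) {                     /* does the divisor fit? */
            rem -= divisor;
            quotient |= (uint32_t)1 << i;         /* set this quotient bit */
        }
    }
    *remainder = (uint32_t)rem;
    return quotient;
}
```

One conditional subtraction per bit is exactly why it takes a while to get right by hand in assembly, and why doing it once teaches so much.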
Diggsey|11 months ago
So the designers of the Cell processor made some mistakes and therefore the entire concept is bunk? Because you've seen a concept done badly, you can't imagine it done well?
To be clear, I'm not criticising those designers, they probably did a great job with what they had, but technology has moved on a long way from then... The theoretical foundations for memory models, etc. are much more advanced. We've figured out how to design languages to be memory safe without significantly compromising on performance or usability. We have decades of tooling for running and debugging programs on GPUs and we've figured out how to securely isolate "users" of the same GPU from each other. Programmers are as abstracted from the hardware as they've ever been with emulation of different architectures so fast that it's practical on most consumer hardware.
None of the things you mentioned are inherently at odds with more parallel computation. Whether something is a good idea can change. At one point in time electric cars were a bad idea. Decades of incremental improvements to battery and motor technology means they're now pretty practical. At one point landing and reusing a rocket was a bad idea. Then we had improvements to materials science, control systems, etc. that collectively changed the equation. You can't just apply the same old equation and come to the same conclusion.
hulitu|11 months ago
That's the problem, isn't it?
I don't want my programs to act independently; they need to exchange data with each other (copy-paste, drag and drop). Also, I cannot do many things in parallel. Some things must be done sequentially.
deviantbit|11 months ago
[deleted]
0xbadcafebee|11 months ago
Nobody teaches it, and nobody writes books about it (not that anyone reads anymore)
deviantbit|11 months ago
The biggest reason we don't write books is that people don't buy them. They take the PDF and stick it on GitHub. Publishers don't respond to authors' takedown requests, and GitHub doesn't care about authors, so why spend the time publishing a book? We could chase grant money instead. I'm fortunate enough not to have to chase grant money.
aleph_minus_one|11 months ago
Isn't it much more plausible that the people who love to play with exotic (or retro) complicated architectures (which, in this case, offer high performance opportunities) are different people from those who love to "set up or work in an assembly line for shipping stable software"?
> I really believe I need to spend 2-weeks requiring students write code on an Amiga, and the programs have to run at the same time. If anyone of them crashes, they all will fail my course. A new found appreciation may flourish.
I rather believe that among those who love this kind of programming, a hatred for the incompetent fellow student would develop (including wishes that they be weeded out by brutal exams).
wmf|11 months ago
sitkack|11 months ago
deviantbit|11 months ago
nicoburns|11 months ago
Is there any reason why GPU-style parallelism couldn't have memory protection?
monocasa|11 months ago
mabster|11 months ago
We had a generic job mechanism with the same restrictions on all platforms. This usually meant that if it ran at all on Cell, it would run great on PC, because the data would generally be cache-friendly. But it was tough getting the PowerPC to perform.
I understand why the PS4 was basically a PC after that: it's easier. But I wish there were still SPUs off to the side to take advantage of. I'd be happy to have them off-die, like GPUs are.
api|11 months ago
You could also do things like having the JIT optimize the entire running system dynamically like one program, eliminating syscall and context switch overhead not to mention most MMU overhead.
Would it be faster? Maybe. The JIT would have to generate its own safety and bounds-checking code. I'm sure some workloads would benefit a lot and others not so much.
What it would do is allow CPUs to be simpler, potentially resulting in cheaper lower power chips or more cores on a die with the same transistor budget. It would also make portability trivial. Port the core kernel and JIT and software doesn’t care.
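For illustration, the bounds checking such a JIT would have to emit is a software version of what the MMU does in hardware today. A hypothetical C sketch (all names are made up):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch: in a single flat address space, the JIT guards
   each module's memory accesses in software instead of relying on MMU
   page protection. */
typedef struct {
    uint8_t *base;   /* start of the region this module may touch */
    size_t   len;    /* size of that region */
} region_t;

static inline uint8_t load_checked(const region_t *r, size_t offset) {
    if (offset >= r->len)   /* software equivalent of a page fault */
        abort();            /* a real JIT would jump to a fault handler */
    return r->base[offset];
}
```

A smart JIT would hoist or eliminate most of these checks when it can prove an offset is in range, which is where the "some workloads would benefit a lot and others not so much" caveat comes from.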
zozbot234|11 months ago
GPU drivers take SPIR-V code (either "kernels" for OpenCL/SYCL drivers, or "shaders" for Vulkan Compute) which is not that different at least in principle. There is also a LLVM-based soft-implementation that will just compile your SPIR-V code to run directly on the CPU.
graemep|11 months ago
01HNNWZ0MV43FF|11 months ago
deviantbit|11 months ago
[deleted]
Yoric|11 months ago
Also, yeah, I recall the dreaded days of cooperative multitasking between apps. Moving from Windows 3.x to Linux was a revelation.
hulitu|11 months ago
musicale|11 months ago
Fortran is memory-safe, right? ;-)
unknown|11 months ago
[deleted]
varelse|11 months ago
[deleted]