deviantbit | 11 months ago

"I believe there are two main things holding it back."

He really science’d the heck out of that one. I’m getting tired of seeing opinions dressed up as insight—especially when they’re this detached from how real systems actually work.

I worked on the Cell processor and I can tell you it was a nightmare. It demanded an unrealistic amount of micromanagement and gave developers rope to hang themselves with. There’s a reason it didn’t survive.

What amazes me more is the comment section—full of people waxing nostalgic for architectures they clearly never had to ship stable software on. They forget why we moved on. Modern systems are built with constraints like memory protection, isolation, and stability in mind. You can’t just “flatten address spaces” and ignore the consequences. That’s how you end up with security holes, random crashes, and broken multi-tasking. There's a whole generation of engineers that don't seem to realize why we architected things this way in the first place.

I will take how things are today over how things used to be in a heartbeat. I really believe I need to spend two weeks requiring students to write code on an Amiga, with all their programs running at the same time. If any one of them crashes, they all fail my course. A newfound appreciation may flourish.

ryukoposting|11 months ago

One of the most important steps of my career was being forced to write code for an 8051 microcontroller. Then writing firmware for an ARM microcontroller to make it pretend it was that same 8051 microcontroller.

I was made to witness the horrors of archaic computer architecture in such depth that I could reproduce them on totally unrelated hardware.

deviantbit|11 months ago

I tell students today that the best way to learn is by studying the mistakes others have already made. Dismissing the solutions they found isn’t being independent or smart; it’s arrogance that sets you up to repeat the same failures.

Sounds like you had a good mentor. Buy them lunch one day.

znpy|11 months ago

I had a similar experience. Our professor in high school would have us program a Z80 system entirely by hand: flow chart, assembly code, computing jump offsets by hand, writing the hex code by hand (looking up opcodes from the Z80 data sheet), and then loading the opcodes one byte at a time on a hex keypad.

It took three hours and four of us to code an integer division start to finish (we were like 17, though).

The amount of understanding it gave has been unrivalled so far.

Diggsey|11 months ago

> I worked on the Cell processor and I can tell you it was a nightmare. It demanded an unrealistic amount of micromanagement and gave developers rope to hang themselves with.

So the designers of the Cell processor made some mistakes and therefore the entire concept is bunk? Because you've seen a concept done badly, you can't imagine it done well?

To be clear, I'm not criticising those designers, they probably did a great job with what they had, but technology has moved on a long way from then... The theoretical foundations for memory models, etc. are much more advanced. We've figured out how to design languages to be memory safe without significantly compromising on performance or usability. We have decades of tooling for running and debugging programs on GPUs and we've figured out how to securely isolate "users" of the same GPU from each other. Programmers are as abstracted from the hardware as they've ever been with emulation of different architectures so fast that it's practical on most consumer hardware.

None of the things you mentioned are inherently at odds with more parallel computation. Whether something is a good idea can change. At one point in time electric cars were a bad idea. Decades of incremental improvements to battery and motor technology means they're now pretty practical. At one point landing and reusing a rocket was a bad idea. Then we had improvements to materials science, control systems, etc. that collectively changed the equation. You can't just apply the same old equation and come to the same conclusion.

hulitu|11 months ago

> and we've figured out how to securely isolate "users" of the same GPU from each other

That's the problem, isn't it.

I don't want my programs to act independently; they need to exchange data with each other (copy-paste, drag and drop). Also, I cannot do many things in parallel. Some things must be done sequentially.

0xbadcafebee|11 months ago

> There's a whole generation of engineers that don't seem to realize why we architected things this way in the first place.

Nobody teaches it, and nobody writes books about it (not that anyone reads anymore)

deviantbit|11 months ago

There are books out there. I use Computer Architecture: A Quantitative Approach by Hennessy and Patterson. Recent revisions have removed historical information, and I understand why they removed it. I wanted to use Stallings's book, but the department had already made arrangements with the publisher.

The biggest reason we don't write books is that people don't buy them. They take the PDF and stick it on GitHub. Publishers don't respond to authors' takedown requests, and GitHub doesn't care about authors, so why spend the time publishing a book? We could chase grant money instead; I'm fortunate enough not to have to.

aleph_minus_one|11 months ago

> What amazes me more is the comment section—full of people waxing nostalgic for architectures they clearly never had to ship stable software on.

Isn't it much more plausible that the people who love to play with exotic (or retro), complicated architectures (which in this case offer high performance opportunities) are different people from those who love to "set up, or work in, an assembly line for shipping stable software"?

> I really believe I need to spend 2-weeks requiring students write code on an Amiga, and the programs have to run at the same time. If anyone of them crashes, they all will fail my course. A new found appreciation may flourish.

I rather believe that among those who love this kind of programming, hatred for the incompetent fellow students will develop (including wishes that they be weeded out by brutal exams).

wmf|11 months ago

The problem is that the exotic complexity enthusiasts cluster in places like HN and sometimes they overwhelm the voices of reason.

sitkack|11 months ago

Those students would all drop out and start meditating. That would be a fun course. Speed run developing for all the prickly architectures of the 80s and 90s.

deviantbit|11 months ago

I see what you did there.

nicoburns|11 months ago

> They forget why we moved on. Modern systems are built with constraints like memory protection, isolation, and stability in mind. You can’t just “flatten address spaces” and ignore the consequences.

Is there any reason why GPU-style parallelism couldn't have memory protection?

monocasa|11 months ago

It does. GPUs have full MMUs.

mabster|11 months ago

I loved and really miss the Cell. It did take quite a bit of work to shuffle things in and out of the SPUs correctly (so yes, code took longer to write and needed greater care), but it really churned through data.

We had a generic job mechanism with the same restrictions on all platforms. This usually meant that if it ran at all on Cell, it would run great on PC, because the data would generally be cache-friendly. But it was tough getting the PowerPC to perform.
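The explicit shuffling described here can be sketched roughly as a double-buffered job: process one chunk in local memory while the next is streamed in. This is a hypothetical C sketch, not real SPU code; `memcpy` stands in for the SPU's asynchronous DMA, and `run_job`, `process_chunk`, and `CHUNK` are made-up names.

```c
#include <stddef.h>
#include <string.h>

#define CHUNK 256  /* elements that fit in "local store" at once */

/* Placeholder workload run on a chunk held in local memory. */
static void process_chunk(float *buf, size_t n) {
    for (size_t i = 0; i < n; i++)
        buf[i] *= 2.0f;
}

/* Double-buffered job: while chunk k is processed in local[cur],
 * chunk k+1 is being "DMA'd" into local[cur ^ 1]. */
static void run_job(float *data, size_t n) {
    float local[2][CHUNK];
    size_t done = 0;
    int cur = 0;
    size_t len = n < CHUNK ? n : CHUNK;
    memcpy(local[cur], data, len * sizeof(float));      /* "DMA in" first chunk */
    while (done < n) {
        size_t next_off = done + len;
        size_t next_len = 0;
        int nxt = cur ^ 1;
        if (next_off < n) {                             /* start fetching next chunk */
            next_len = n - next_off < CHUNK ? n - next_off : CHUNK;
            memcpy(local[nxt], data + next_off, next_len * sizeof(float));
        }
        process_chunk(local[cur], len);
        memcpy(data + done, local[cur], len * sizeof(float));  /* "DMA out" result */
        done += len;
        len = next_len;
        cur = nxt;
    }
}
```

On real hardware the payoff comes from the DMA being asynchronous, so the transfer overlaps the compute; the same chunked access pattern is also what makes the code cache-friendly on a PC.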

I understand why the PS4 was basically a PC after that: it's easier. But I wish there were still SPUs off to the side to take advantage of. I'd be happy to have them off-die, like GPUs are.

api|11 months ago

On flattening address spaces: the road not taken here is to run everything in something akin to the JVM, CLR, or WASM. Do that stuff in software not hardware.

You could also do things like having the JIT optimize the entire running system dynamically like one program, eliminating syscall and context switch overhead not to mention most MMU overhead.

Would it be faster? Maybe. The JIT would have to generate its own safety and bounds checking stuff. I’m sure some work loads would benefit a lot and others not so much.

What it would do is allow CPUs to be simpler, potentially resulting in cheaper, lower-power chips or more cores on a die with the same transistor budget. It would also make portability trivial: port the core kernel and the JIT, and the software doesn't care.
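To make the "safety and bounds checking in software" idea concrete, here is a minimal hypothetical sketch of the kind of check a WASM-style JIT inlines around every guest memory access in place of hardware protection. `linear_mem` and `guest_load32` are illustrative names, not a real VM API.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* A guest's sandbox: one contiguous "linear memory" region inside the
 * host's flat address space. Isolation comes from the emitted checks,
 * not from the MMU. */
typedef struct {
    uint8_t *base;  /* start of this guest's linear memory */
    size_t   size;  /* number of bytes the guest may touch */
} linear_mem;

/* Load a 32-bit value from guest offset `off`; on an out-of-bounds
 * access, set the trap flag instead of touching host memory. */
static uint32_t guest_load32(const linear_mem *m, size_t off, int *trapped) {
    if (m->size < 4 || off > m->size - 4) {   /* the check the JIT inlines */
        *trapped = 1;   /* a real VM would unwind to a trap handler here */
        return 0;
    }
    *trapped = 0;
    uint32_t v;
    memcpy(&v, m->base + off, sizeof v);      /* alignment-safe load */
    return v;
}
```

The cost is a compare and branch per access, which is exactly the overhead a JIT can then try to optimize away (hoisting checks out of loops, or eliding them when the offset is provably in range).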

zozbot234|11 months ago

> On flattening address spaces: the road not taken here is to run everything in something akin to the JVM, CLR, or WASM.

GPU drivers take SPIR-V code (either "kernels" for OpenCL/SYCL drivers, or "shaders" for Vulkan Compute), which is not that different, at least in principle. There is also an LLVM-based software implementation that will just compile your SPIR-V code to run directly on the CPU.

graemep|11 months ago

We end up relying on software for this so much anyway. Your examples plus the use of containers and the like at OS level.

deviantbit|11 months ago

[deleted]

Yoric|11 months ago

Don't worry, with LLMs, we're moving away from anything that remotely looks like "stable software" :)

Also, yeah, I recall the dreaded days of cooperative multitasking between apps. Moving from Windows 3.x to Linux was a revelation.

hulitu|11 months ago

With LLMs it is just more visible. When the age of "updates" began, the age of stable software died.

musicale|11 months ago

> I really believe I need to spend 2-weeks requiring students write code on an Amiga, and the programs have to run at the same time. If anyone of them crashes, they all will fail my course.

Fortran is memory-safe, right? ;-)