item 43821155

TickleSteve | 10 months ago

There is a long history of CPUs tailored to specific languages:

- Lisp/lispm

- Ada/iAPX

- C/ARM

- Java/Jazelle

Most never really take off, or they drift in other directions as the language goes out of fashion.

pjmlp | 10 months ago

Well, one could argue that modern CPUs are designed as C machines, even more so now that everyone is adding hardware memory tagging as a means of fixing C memory-corruption issues.

johnnyjeans | 10 months ago

Only if you don't understand the history of C. B was a lowest-common-denominator grouping of assembler macros for a typical register machine; C just added a type system and a couple of extra bits of syntax. C isn't novel in the slightest: you structure and think about your code much as you would in a certain style of assembly programming on a register machine. And yes, that type of register machine is still the most popular way to design an architecture, because it has qualities that make it fertile middle ground between electrical engineers and programmers.

Also, there are no languages that reflect what modern CPUs are like, because modern CPUs obfuscate and hide much of how they work. Not even assembly is that close to the metal anymore, and it even has undefined behavior these days. There was an attempt to expose the hardware more explicitly with Itanium, and it was a failure for much the same reasons the iAPX 432 was a failure. So we kept the simpler scalar register machine around, because both compilers and programmers are mostly too stupid to work with that much complexity. C didn't do shit; human mental capacity just failed to evolve fast enough to keep up with our technology. Things like Rust are far more the descendants of C than modern CPU designs are.

jonathaneunice | 10 months ago

Also: UCSD p-System, Symbolics Lisp-on-custom hardware, ...

Historically their performance is underwhelming: sometimes competitive on the first iteration, sometimes just mid. But generally they can't iterate quickly (insufficient resources, insufficient product demand), so they are quickly eclipsed by pure software implementations atop COTS hardware.

This particular Valley of Disappointment is so routine as to make "let's implement this in hardware!" an evergreen tarpit idea. There are a few stunning exceptions like GPU offload—but they are unicorns.

noosphr | 10 months ago

They were a tar pit in the 1980s and 1990s, when Moore's law meant roughly a 16x increase in processor speed every six years (performance doubling about every 18 months).

Right now, the only reason we don't have new generations of these eating the lunch of general-purpose CPUs is that you'd need to organize a few billion transistors into something useful. That's a bit beyond what just about anyone (including Intel now, apparently) can manage.