top | item 39115765

_chris_ | 2 years ago

> I’d be interested in understanding why the compilers never panned out but have never seen a good writeup on that. Or why people thought the compilers would be able to succeed in the first place at the mission.

It's a fundamentally impossible ask.

Compilers are being asked to look at a program (or perhaps watch it run on a sample input set), guess the bias of each branch, construct a most-likely 'trace' path through the program, and then generate STATIC code for that path.

But programs (and their branches) are not statically biased! So it simply doesn't work out for general-purpose codes.

However, programs are fairly predictable, which means a branch predictor can dynamically learn the program path and regurgitate it on command. And if the program changes phases, the branch predictor can re-learn the new program path very quickly.
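That re-learning can be sketched with the textbook 2-bit saturating-counter predictor (a generic scheme chosen for illustration, not any particular CPU's design). A static compile-time guess of "taken" would be wrong for the entire second half of this trace; the dynamic counter mispredicts only a couple of times at the phase boundary:

```python
# Minimal sketch of a 2-bit saturating-counter branch predictor.
# The trace below is synthetic: the branch flips bias halfway through,
# modeling a program phase change.

def predict(counter):
    """Predict 'taken' when the 2-bit counter is in state 2 or 3."""
    return counter >= 2

def update(counter, taken):
    """Saturating increment on taken, decrement on not-taken."""
    return min(3, counter + 1) if taken else max(0, counter - 1)

# Phase 1: branch almost always taken; phase 2: never taken.
trace = [True] * 1000 + [False] * 1000

counter = 0
correct = 0
mispredicts_after_phase_change = 0
for i, taken in enumerate(trace):
    if predict(counter) == taken:
        correct += 1
    elif i >= 1000:
        mispredicts_after_phase_change += 1
    counter = update(counter, taken)

print(correct / len(trace))            # → 0.998
print(mispredicts_after_phase_change)  # → 2 (re-learns within two branches)
```

A static "always taken" guess over the same trace would be right only 50% of the time, which is the gap the parent comment is pointing at.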

Now if you wanted to couple a VLIW design with a dynamically re-executing compiler (dynamic binary translation), then sure, that can be made to work.
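The dynamic-translation idea can be sketched as a toy in Python (an invented mini "guest ISA" and hot-threshold for illustration, not Transmeta's actual Code Morphing Software): interpret at first, count executions, and once a block runs hot, emit host code specialized for the observed block and run that instead.

```python
# Toy dynamic binary translation sketch: guest instructions are tuples,
# and "native code" is a generated Python function.

HOT_THRESHOLD = 3

def interpret_block(block, env):
    """Slow path: interpret one guest instruction at a time."""
    for op, dst, src in block:
        if op == "add":
            env[dst] = env[dst] + env[src]
        elif op == "mul":
            env[dst] = env[dst] * env[src]
    return env

def translate_block(block):
    """Hot path: compile the whole block into one host-level function."""
    lines = ["def compiled(env):"]
    for op, dst, src in block:
        sym = "+" if op == "add" else "*"
        lines.append(f"    env['{dst}'] = env['{dst}'] {sym} env['{src}']")
    lines.append("    return env")
    ns = {}
    exec("\n".join(lines), ns)
    return ns["compiled"]

def run(block, env, iterations):
    executions, compiled = 0, None
    for _ in range(iterations):
        if compiled is not None:
            env = compiled(env)          # fast translated path
        else:
            env = interpret_block(block, env)
            executions += 1
            if executions >= HOT_THRESHOLD:
                compiled = translate_block(block)
    return env

block = [("add", "x", "y"), ("mul", "x", "y")]
env = run(block, {"x": 1, "y": 2}, 20)
print(env["x"])  # → 5242876 (same result either path: x_n = 5·2^n − 4)
```

The point of the sketch: the translator sees the *actual* executed block, so it can specialize aggressively, and if behavior changes it can simply retranslate, which is what a static VLIW compiler cannot do.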

yvdriess | 2 years ago

> Now if you wanted to couple a VLIW design with a dynamically re-executing compiler (dynamic binary translation), then sure, that can be made to work.

RIP Transmeta

andromeduck | 2 years ago

Transmeta's approach lived on in Nvidia's Project Denver, but Denver was originally optimized to translate x86, and the Intel settlement precluded that. It ended up being too buggy/inefficient to compete in the market and was effectively abandoned after the second generation.

gregw2 | 2 years ago

This makes a lot of sense to me, thanks for boiling it down. Compilers can predict the upcoming code instructions decently, but not really the upcoming data. So on branch-heavy commercial/database server workloads, VLIW doesn't work as well as the branch prediction, speculation, and out-of-order execution complexities it tried to simplify away. Does that sound right?

actionfromafar | 2 years ago

I think it could have worked if the IDE had performance instrumentation (some kind of tracing) that fed back into the next build. (And perhaps several iterations of this.)

Another way to leverage the Itanium power would have been to make a Java Virtual Machine go really fast via dynamic binary translation. That way you'd sidestep all the C undefined-behavior optimization caveats.