top | item 19068876


gaius | 7 years ago

> Intel has its own compiler, surely they could ask their compiler gurus to help with ISA design.

Well, that kinda was the concept - but they greatly underestimated the difficulty of producing a "sufficiently smart compiler". If VLIW had worked, it would have greatly simplified processor design: no (or at least much less) need to worry about cleverly handling out-of-order execution, for example. That in turn would have freed up die space to play with - bigger L1 caches, more execution units, or whatever.
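The work a VLIW design pushes onto the compiler can be sketched as a toy greedy list scheduler: given a dependence graph, pack independent operations into fixed-width bundles so the hardware can issue them in parallel with no reordering logic. (The op names, dependence graph, and three-wide bundle are all invented for illustration - this is nothing like real IA-64 bundling.)

```python
# Toy greedy list scheduler: pack independent ops into fixed-width
# "VLIW bundles". Ops and dependencies are hypothetical, not IA-64.
BUNDLE_WIDTH = 3

# op name -> set of ops it must wait for
deps = {
    "load_a": set(), "load_b": set(), "load_c": set(),
    "mul": {"load_a", "load_b"},
    "add": {"mul", "load_c"},
    "store": {"add"},
}

def schedule(deps, width=BUNDLE_WIDTH):
    done, bundles = set(), []
    remaining = dict(deps)
    while remaining:
        # ops whose dependencies all completed in *earlier* bundles
        ready = [op for op, d in remaining.items() if d <= done]
        bundle = sorted(ready)[:width]
        bundles.append(bundle)
        done |= set(bundle)
        for op in bundle:
            del remaining[op]
    return bundles

for i, bundle in enumerate(schedule(deps)):
    print(f"cycle {i}: {bundle}")
```

The three loads issue together in cycle 0 because nothing depends between them; everything after is serialized by the dependence chain. The "sufficiently smart compiler" problem is that real code rarely exposes this much independence statically - loads with unknown latency and unpredictable branches defeat a fixed schedule that an out-of-order core would handle dynamically.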

It didn't work out, and Intel made some stupid decisions along the way (they also underestimated the importance of backwards compatibility with existing code - not only binaries, but being able to compile existing source well). Still, it was worth a punt, and maybe will be again someday - maybe a DL-based compiler could produce good VLIW code? Maybe a new (or old) language will be more amenable to VLIW compilation than C?



bpye | 7 years ago

It is also problematic to need to recompile code for every microarchitecture. Maybe it could work if the OS shipped with some sort of JIT/AOT compiler and binaries were distributed as something like LLVM IR.
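Short of shipping IR and compiling on the target machine, a common workaround is the "fat binary" approach: precompile a few variants and pick the best one at load time from the CPU's reported features. A toy sketch of that dispatch (the feature names and variant table are made up):

```python
# Toy "fat binary" dispatch: choose the most capable precompiled
# variant whose required CPU features are all present. The features
# and variants here are invented for illustration.
variants = {
    frozenset(): "scalar",
    frozenset({"simd128"}): "simd128",
    frozenset({"simd128", "fma"}): "simd128+fma",
}

def select_variant(cpu_features):
    # keep variants whose requirements the CPU satisfies,
    # then take the one with the most features used
    usable = [req for req in variants if req <= cpu_features]
    best = max(usable, key=len)
    return variants[best]

print(select_variant(set()))                 # falls back to "scalar"
print(select_variant({"simd128", "fma"}))    # picks "simd128+fma"
```

The pain point the comment raises is visible here: every new microarchitecture means another precompiled entry in the table, whereas IR-plus-AOT-at-install moves that work to the target machine.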

pjmlp | 7 years ago

> Maybe a new (or old) language will be more amenable to VLIW compilation than C?

I would say certainly, given Fran Allen's point of view on C compilers.

gaius | 7 years ago

A workflow I'd like to see: you write your code and ensure it is functionally correct - all the tests pass. Then you go home, and overnight some ML/DL/NN tool works on your code to find the best way to compile it (the fastest binary that still passes all the tests). Repeat this every day for the duration of the project. At the end your artifacts are the source code, the shippable binary, and a model perfectly trained to produce the latter from the former. It's a shame that Itanic was too soon to take advantage of ML going mainstream.
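The non-ML core of that loop - search over compilation configurations, keep the fastest artifact that still passes the tests - can be sketched as a tiny autotuner. Everything here is a stand-in: the flag names, the runtime model, and the "correctness" rule are all invented; a real tuner would invoke the compiler and the actual test suite.

```python
import itertools

# Toy overnight autotuner: try flag combinations, keep the fastest
# "binary" that still passes the tests. Flags, timings, and the
# correctness rule are all made up for illustration.
FLAGS = ["unroll", "vectorize", "fast_math"]

def runtime(flags):
    # pretend cost model: each flag shaves time off a 10.0s baseline
    savings = {"unroll": 1.0, "vectorize": 3.0, "fast_math": 2.0}
    return 10.0 - sum(savings[f] for f in flags)

def passes_tests(flags):
    # pretend some combinations are invalid: fast_math alone breaks
    # a numerical test in this toy model
    return not ("fast_math" in flags and "vectorize" not in flags)

def tune():
    best = None
    for r in range(len(FLAGS) + 1):
        for combo in itertools.combinations(FLAGS, r):
            if passes_tests(combo) and (
                best is None or runtime(combo) < runtime(best)
            ):
                best = combo
    return best

best = tune()
print(best, runtime(best))
```

A real search space is far too large to enumerate, which is exactly where the comment's ML angle comes in: a learned model prunes the space instead of brute-forcing it, and the trained model itself becomes a project artifact alongside the binary.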