batterseapower | 1 year ago

I don't think that's naive - it could definitely help ameliorate the issues. To my knowledge no one has tried this approach.

The other idea I had when I was working on this stuff was to do JIT supercompilation. Classically, supercompilers expand the whole graph at compile time, but it would be straightforward to delay the supercompilation process until the first time the code is actually executed, which helps tame the problem of generating lots of specializations that may never actually be executed in practice.

(Choosing a generalization heuristic becomes more important in the JIT setting because you always have access to full information about all parameters to the function you are specializing, and naively you would end up e.g. specializing sigmoid(x)=1/(1+exp(-x)) for each value of x observed at runtime.)
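A minimal sketch of the generalization heuristic described above (all names and the threshold are invented for illustration, not from any real JIT): specialize on concrete argument values at first, but once too many distinct values have been seen, generalize the key back to the argument's type so sigmoid is not compiled once per value of x.

```python
import math

SPECIALIZATION_LIMIT = 4  # assumed threshold; a real JIT would tune this


class JitSpecializer:
    """Hypothetical value-specializing JIT wrapper with a generalization cutoff."""

    def __init__(self, fn):
        self.fn = fn
        self.value_specializations = {}  # concrete value -> compiled code
        self.type_specializations = {}   # type -> generalized compiled code

    def __call__(self, x):
        # Generalization heuristic: after seeing too many distinct values,
        # stop specializing per value and key on type(x) instead.
        if len(self.value_specializations) >= SPECIALIZATION_LIMIT:
            key = type(x)
            if key not in self.type_specializations:
                self.type_specializations[key] = self._compile()
            return self.type_specializations[key](x)
        if x not in self.value_specializations:
            self.value_specializations[x] = self._compile()
        return self.value_specializations[x](x)

    def _compile(self):
        # Stand-in for real code generation; a supercompiler would
        # residualize the expanded graph for this specialization here.
        return self.fn


def sigmoid(x):
    # Standard logistic sigmoid.
    return 1.0 / (1.0 + math.exp(-x))


jit_sigmoid = JitSpecializer(sigmoid)
```

Without the cutoff, each new x would mint a fresh specialization; with it, the table of per-value versions stays bounded and later calls share one generalized version per type.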

rowanG077 | 1 year ago

Do current JIT implementations already have something to control specialization? I have heard many times, for example, that JavaScript engines will specialize hot paths.

moomin | 1 year ago

To the best of my knowledge (which isn't great), .NET has multiple compilation levels for functions, spending more compile time on a method after it has been called a few times, but starting with a pretty fast but naive compilation. It also, I believe, specialises everything all the time, but (probably because of the multi-level compilation) that doesn't seem to be a practical problem.
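The tiering mechanism described in the two comments above can be sketched as a call-count trigger (all names and the threshold here are hypothetical, not the actual .NET or V8 machinery): run a cheap baseline version first, and switch to an optimized version only once the method has proven hot.

```python
HOT_THRESHOLD = 3  # assumed; real runtimes tune this per tier


class TieredFunction:
    """Hypothetical two-tier function: baseline first, optimized when hot."""

    def __init__(self, baseline, optimized):
        self.baseline = baseline    # fast to produce, slower to run
        self.optimized = optimized  # would be compiled lazily once hot
        self.calls = 0
        self.tier = "baseline"

    def __call__(self, *args):
        self.calls += 1
        if self.tier == "baseline" and self.calls > HOT_THRESHOLD:
            # Promote the hot path to the optimizing tier.
            self.tier = "optimized"
        impl = self.optimized if self.tier == "optimized" else self.baseline
        return impl(*args)
```

The point of this design is that startup pays only for the naive compile, while the expensive optimizer runs only on code the profile counters show is actually hot.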