top | item 37552865

boomanaiden154 | 2 years ago

What's the benefit of having an LLM do those things in a way that guesstimates? There are big wins to be had in code size and some wins in performance related to inlining [1][2], but I think the implementations in those references, which tie directly into the compiler's inlining heuristic, are a much better way to do that because they guarantee correctness. In addition, there's a reason compilers basically ignore the `inline` keyword these days: their heuristics usually make a better call than the annotation.
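To make the heuristic angle concrete, here is a toy sketch of the cost/benefit reasoning compilers apply at each call site. The weights, bonus, and threshold are invented for illustration and not taken from any real compiler:

```python
# Hypothetical cost/benefit inlining heuristic. A callee's size is the
# cost; frequent or hot call sites earn a benefit that offsets it.
# All numbers here are illustrative, not from LLVM/GCC.

def should_inline(callee_size, call_count, is_hot, threshold=225):
    """Return True if inlining is estimated to be profitable."""
    cost = callee_size
    bonus = 100 if is_hot else 0
    # Frequently executed call sites amortize the code-size cost.
    benefit = min(call_count, 10) * 10 + bonus
    return cost - benefit < threshold

print(should_inline(callee_size=50, call_count=8, is_hot=True))    # small hot callee -> True
print(should_inline(callee_size=500, call_count=1, is_hot=False))  # large cold callee -> False
```

The point of a model like this is that the decision stays inside the compiler, so whatever it decides, the emitted code is still correct; an ML model can at most tune the weights.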

For branch reordering, techniques like BOLT [5] can reorder code layout at the binary level quite effectively, yielding big performance gains from profile information. ML models can sometimes synthesize that profile information [3], but if I recall correctly, the performance of those models wasn't as good.
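The core idea behind profile-guided layout can be sketched in a few lines: greedily place each block's hottest unplaced successor next, so the hot path falls through without taken branches. The block names and edge counts below are made up, and this is far simpler than what BOLT actually does:

```python
# Toy profile-guided basic-block layout: follow the hottest outgoing
# edge from each block so the hot path becomes straight-line code.

def layout(entry, edges):
    """edges: {block: [(successor, execution_count), ...]} from a profile."""
    order, placed = [], set()
    block = entry
    while block is not None:
        order.append(block)
        placed.add(block)
        # Follow the hottest edge to a block we haven't placed yet.
        succs = [(s, c) for s, c in edges.get(block, []) if s not in placed]
        block = max(succs, key=lambda sc: sc[1])[0] if succs else None
    # Append any cold blocks the hot chain never reached.
    for b in edges:
        if b not in placed:
            order.append(b)
    return order

profile = {
    "entry":    [("hot_loop", 9000), ("error", 3)],
    "hot_loop": [("hot_loop", 8990), ("exit", 10)],
    "error":    [("exit", 3)],
}
print(layout("entry", profile))  # ['entry', 'hot_loop', 'exit', 'error']
```

Note the whole transform is layout-only: blocks are never rewritten, so a bad profile can only cost performance, never correctness.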

Neural compilation (like what you're describing) has been tried with LLMs [4], but it currently has a lot of correctness problems, and I don't think it's going to be feasible anytime soon to do reinforcement learning for performance/code-size improvements.
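One standard way to surface those correctness problems is differential testing: run a trusted reference and the model-emitted candidate on random inputs and flag any divergence. The function pair below is invented for illustration; this catches miscompilations probabilistically but proves nothing:

```python
import random

# Differential-testing sketch: compare a reference implementation against
# a candidate (standing in for model-generated code) on random inputs.

def reference_popcount(x):
    return bin(x).count("1")

def candidate_popcount(x):  # stands in for model-emitted code
    n = 0
    while x:
        x &= x - 1  # clear the lowest set bit
        n += 1
    return n

def find_divergence(ref, cand, trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.getrandbits(32)
        if ref(x) != cand(x):
            return x  # counterexample input
    return None  # no divergence found in `trials` samples

print(find_divergence(reference_popcount, candidate_popcount))
```

Passing random tests is a much weaker guarantee than the by-construction correctness you get when the model only steers decisions inside a conventional compiler.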

1. https://arxiv.org/abs/2101.04808
2. https://arxiv.org/abs/2207.08389
3. https://arxiv.org/abs/2112.14679
4. https://ieeexplore.ieee.org/document/9926313
5. https://arxiv.org/abs/1807.06735
