top | item 40138710

hawkice | 1 year ago

I think compact, verification-aware languages will be of particular interest to anyone wanting to make machine learning models write code at scale. Verification, and test generation, seem critical to the process of getting feedback from the program itself and improving the automatic code generation process. Not quite as powerful as self-play was for two-player game playing, but it's got to be close.

mofosyne | 1 year ago

Doesn't Ada have some way of checking whether the input/output of a function is within bounds? I can't remember whether it's a runtime or a compile-time check, however.

But the concept of function contract verification would be interesting.
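(It does: Ada 2012 added `Pre` and `Post` contract aspects, checked at run time when assertions are enabled, and SPARK can discharge some of them statically at compile time. Rust has no built-in equivalent, but the same idea can be sketched with runtime assertions; the `isqrt` function below is a made-up example, not anything from Ada or Rust itself.)

```rust
// Contract-style checks emulated in Rust (illustrative sketch; Rust has
// no built-in Pre/Post aspects, so we mimic Ada-style contracts with
// runtime assertions).

/// Integer square root with an Ada-style postcondition.
pub fn isqrt(x: u32) -> u32 {
    let r = (x as f64).sqrt() as u32;
    // Postcondition (Ada would write roughly:
    //   with Post => Isqrt'Result ** 2 <= X
    //                and then (Isqrt'Result + 1) ** 2 > X),
    // checked at run time, widened to u64 to avoid overflow.
    assert!((r as u64) * (r as u64) <= x as u64);
    assert!((r as u64 + 1) * (r as u64 + 1) > x as u64);
    r
}

fn main() {
    println!("{}", isqrt(10)); // prints 3
}
```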

littlestymaar | 1 year ago

Yeah, “reinforcement learning with compiler feedback” in addition to RLHF.

For that reason I suspect that Rust could do pretty well in terms of its writability by an LLM, but there's the problem of the relatively small training codebase in the wild, which makes LLMs perform quite poorly in Rust at the moment (Llama 3 70B is far worse than even ChatGPT 3.5 at this, and even ChatGPT 4 can't write nontrivial Rust code right now). So it seems hard to even bootstrap the process.
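A toy version of that feedback loop can be sketched in a few lines: compile a candidate snippet with `rustc` (assumed to be on `PATH`) and turn the diagnostics into a scalar reward an RL setup could consume. The candidate snippets and the reward shaping here are made up for illustration; this is not an actual RLHF pipeline.

```rust
// Minimal "compiler feedback" loop sketch: compile a candidate snippet
// and score it from the compiler's diagnostics.
use std::fs;
use std::process::Command;

/// Compile `src` as a library crate; return (success, stderr diagnostics).
fn compile_feedback(src: &str) -> (bool, String) {
    let path = std::env::temp_dir().join("hn_candidate.rs");
    fs::write(&path, src).expect("failed to write candidate source");
    let out = Command::new("rustc")
        .args(["--edition=2021", "--crate-type=lib"])
        .arg("--out-dir")
        .arg(std::env::temp_dir())
        .arg(&path)
        .output()
        .expect("rustc not found on PATH");
    (
        out.status.success(),
        String::from_utf8_lossy(&out.stderr).into_owned(),
    )
}

/// Toy reward shaping: success scores 1.0, otherwise penalize one point
/// per reported `error[...]` diagnostic.
fn reward(ok: bool, stderr: &str) -> f64 {
    if ok {
        1.0
    } else {
        -(stderr.matches("error[").count() as f64)
    }
}

fn main() {
    let good = "pub fn add(a: i32, b: i32) -> i32 { a + b }";
    let bad = "pub fn add(a: i32, b: i32) -> i32 { a + c }"; // `c` undefined
    for src in [good, bad] {
        let (ok, err) = compile_feedback(src);
        println!("ok={} reward={}", ok, reward(ok, &err));
    }
}
```

The point of the sketch is that the reward signal is free and objective: the compiler either accepts the program or it doesn't, which is what makes the self-play analogy in the parent comment plausible.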

yosefk | 1 year ago

OTOH, when it writes buggy code, you tend to get decent error messages when trying to use it. In this sense Rust + LLM is way better than C++ + LLM.