Regardless of the current SOTA, this is a task I'm fairly confident future LLMs can get very good at, unlike the "code in natural language" nonsense.
qsort|2 years ago

I'm thinking of something like a linter with access to the AST that can produce warnings like "you forgot this corner case" or "you're not freeing these resources; is this intended?"

Probabilistic linters do seem like a fruitful realm for them, yeah. At small scales, a lot of code shares strong similarities with other code, thanks to Stack Overflow and countless other "help me solve X" -> "try Y" answer pairs.

Groxx|2 years ago

I do wonder how to reliably tell it to ignore noisy, incorrect warnings, though. The output is potentially sensitive to any change in input, weights, or random seed, so it seems like every LLM upgrade runs the risk of invalidating existing suppressions (or you say "ignore this whole line" and miss useful warnings) due to small perturbations...
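For concreteness, here's a rough sketch of what the deterministic half of such an AST-aware linter could look like, in Python. The hard-coded rule (flag `open()` calls that aren't managed by a `with` block) is a stand-in for the kind of "you are not freeing resources; is this intended?" warning the thread describes — a real tool would presumably hand the AST context to a model instead of hard-coding the check. All names here are illustrative, not anyone's actual implementation.

```python
import ast

def resource_warnings(source: str) -> list[str]:
    """Flag open() calls that are not the context expression of a 'with'.

    Deterministic stand-in for an LLM-backed check: a model would be
    given the surrounding AST/source and asked whether the resource
    leak is intentional, instead of this fixed rule.
    """
    tree = ast.parse(source)

    # First pass: remember which Call nodes appear directly as the
    # context expression of a 'with' statement (those are fine).
    managed_calls = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.With, ast.AsyncWith)):
            for item in node.items:
                if isinstance(item.context_expr, ast.Call):
                    managed_calls.add(id(item.context_expr))

    # Second pass: warn about every bare open() call not seen above.
    warnings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "open"
                and id(node) not in managed_calls):
            warnings.append(
                f"line {node.lineno}: open() without 'with'; "
                "are you freeing this resource?")
    return warnings
```

Calling `resource_warnings("f = open('x')\ndata = f.read()")` yields one warning, while the `with open('x') as f:` form yields none — the point being that the AST gives the linter precise anchors (node type, line number) to attach probabilistic judgments to.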
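On the suppression-stability worry: one hypothetical mitigation (borrowed from how some conventional static analyzers match results across runs, not something the thread proposes) is to key suppressions on a fingerprint of the rule plus the normalized code it points at, rather than on line numbers or the model's message text. Then a reworded warning or a shifted line still matches, while a genuinely new warning does not. A minimal sketch, with made-up names:

```python
import hashlib

def fingerprint(rule: str, snippet: str) -> str:
    """Stable id for a warning: rule name plus the source it targets,
    with whitespace normalized so formatting churn and line moves
    don't break the match. Hypothetical scheme, not a real tool's."""
    normalized = " ".join(snippet.split())
    digest = hashlib.sha256(f"{rule}\n{normalized}".encode()).hexdigest()
    return digest[:16]

def filter_warnings(warnings: list[dict], suppressed: set[str]) -> list[dict]:
    """Drop warnings whose fingerprint appears in the suppression set."""
    return [w for w in warnings
            if fingerprint(w["rule"], w["snippet"]) not in suppressed]
```

This only helps with perturbations in the warning's *presentation*; if an upgraded model stops attaching the same rule to the same code entirely, the suppression silently becomes dead weight, which is arguably the harder half of the problem.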