Since models are very good at writing very short computer programs, and computer programs are very good at mathematical calculations, would it not be considerably more efficient to train them to recognise a "what is x + y" type of problem, and to respond by writing and executing a small JavaScript program that calculates x + y, then sharing the result?
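A minimal sketch of what that pattern might look like, assuming a hypothetical tool-use loop: detect the simple-arithmetic question, emit a tiny program for it, execute it, and return the result instead of letting the model guess at the sum. All function names here are invented for illustration.

```javascript
// Recognise a "what is x + y" question and extract the operands.
function parseAdditionQuestion(prompt) {
  const match = prompt.match(/what is\s+(-?\d+)\s*\+\s*(-?\d+)/i);
  if (!match) return null;
  return { x: Number(match[1]), y: Number(match[2]) };
}

// Generate the small program the model would be trained to emit.
function generateProgram({ x, y }) {
  return `(${x}) + (${y})`;
}

// Execute the generated program and share the result. A real tool-use
// loop would sandbox this; eval is used here only for illustration.
function answerArithmetic(prompt) {
  const operands = parseAdditionQuestion(prompt);
  if (operands === null) return null; // fall back to the plain model
  return eval(generateProgram(operands));
}
```

Usage: `answerArithmetic("What is 12 + 30?")` evaluates the generated expression rather than predicting tokens, and returns `null` for prompts the recogniser doesn't match.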
simiones|1 year ago
This question is of course relevant only in a research sense, in seeking to understand to what extent and in what ways the LLM is acting as a stochastic parrot vs gaining a type of "understanding", for lack of a better word.
ADeerAppeared|1 year ago
The problem is that it's not particularly useful: as problem complexity increases, the user has to be increasingly specific in the prompt, rapidly approaching full exactness. There's simply no point if your prompt has to (basically) spell out the entire program.
And at that point, the user might as well use the backing system directly, and we should just write a convenient input DSL for that.
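For a sense of what "a convenient input DSL" could mean here, a toy sketch: the user writes a tiny exact expression and a conventional evaluator runs it directly, with no model in the loop. The prefix-notation syntax is invented purely for this example.

```javascript
// Grammar: expr := number | "(" op expr expr ")"  with op in { + - * / }

// Split the source into parens and atoms.
function tokenize(src) {
  return src.replace(/[()]/g, m => ` ${m} `).trim().split(/\s+/);
}

// Build a small expression tree from the token stream.
function parse(tokens) {
  const tok = tokens.shift();
  if (tok === "(") {
    const op = tokens.shift();
    const a = parse(tokens);
    const b = parse(tokens);
    if (tokens.shift() !== ")") throw new Error("expected )");
    return { op, a, b };
  }
  return Number(tok);
}

// Walk the tree and compute the exact answer.
function evaluate(node) {
  if (typeof node === "number") return node;
  const a = evaluate(node.a), b = evaluate(node.b);
  switch (node.op) {
    case "+": return a + b;
    case "-": return a - b;
    case "*": return a * b;
    case "/": return a / b;
    default: throw new Error(`unknown op ${node.op}`);
  }
}

function run(src) {
  return evaluate(parse(tokenize(src)));
}
```

The point of the sketch: `run("(+ 2 (* 3 4))")` is as short as a natural-language prompt but fully exact, which is the objection above in code form.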
andrepd|1 year ago