tomekowal | 2 months ago
Yes, it can consistently generate code that works and seems on top of it all thanks to a lot of training data. But wouldn't that use up more tokens and computational resources than, e.g., the same program in Python?
If using more complex languages requires more resources from the LLM, the same principles apply. One-off scripts are better in high-level languages. Hot paths executed millions of times a second are better in lower-level languages with high optimisation potential.
LLMs might slightly shift the correct choice towards lower-level languages. E.g. a small one-off script in C is much more viable with an LLM's help. But the moment one needs to reuse it, grow it, or modify it, one might regret not using a higher-level language.