petekoomen|1 month ago
My point is not that LLMs are inherently trustworthy. It is that a prompt can make a programmer's intentions clear in a way that is difficult to do with code, because code is hard to read, especially in large volumes.

akomtu|1 month ago
The solution, then, is to add comments to every difficult line of code and have an LLM check that the comments match the code. Then you get the precision and reliability of machine language plus the readability of human language.

catlifeonmars|1 month ago
I'm not sure I agree that code is hard to read. I tend to go straight to the source code, as it communicates precisely how something will behave. Well-written code, like well-written prose, can also communicate intent effectively.
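The comment-checking idea in this thread can be sketched mechanically: pair each comment with the code it annotates, then ask an LLM whether they agree. A minimal sketch in Python is below; the `ask_llm` call is a hypothetical stand-in for whatever LLM client you use, and the pairing heuristic (a comment describes the next non-blank line, or the code on its own line) is an assumption, not a standard.

```python
# Minimal sketch of "have an LLM check that comments match the code".
# The pairing heuristic and the `ask_llm` stand-in are assumptions.
import io
import tokenize


def comment_code_pairs(source: str):
    """Pair each '#' comment with the code line it annotates."""
    lines = source.splitlines()
    pairs = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            row, col = tok.start
            # Trailing comment: pair with the code on the same line.
            code = lines[row - 1][:col].strip()
            if not code:
                # Standalone comment: pair with the next non-blank line.
                for nxt in lines[row:]:
                    if nxt.strip():
                        code = nxt.strip()
                        break
            pairs.append((tok.string.lstrip("# "), code))
    return pairs


def build_check_prompt(comment: str, code: str) -> str:
    return (
        "Does this comment accurately describe this code? "
        f"Answer MATCH or MISMATCH.\nComment: {comment}\nCode: {code}"
    )


src = "# add sales tax at 8%\ntotal = subtotal * 1.08\n"
for comment, code in comment_code_pairs(src):
    prompt = build_check_prompt(comment, code)
    # verdict = ask_llm(prompt)  # hypothetical LLM call goes here
    print(prompt)
```

The LLM only has to answer a narrow yes/no question per pair, which is a much easier task to trust than open-ended generation; mismatches can then be surfaced in code review.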