All LLM output is non-deterministically wrong. Without a human in the loop who understands the code, you are stochastically releasing broken, insecure, unmaintainable software.
Any software engineer who puts a stamp of approval on software they have not read and understood is committing professional malpractice.
roguecoder|10 months ago