It's not the same; LLMs are qualitatively different due to the stochastic and non-reproducible nature of their output. From the LLM's point of view, non-functional or incorrect code is exactly the same as correct code, because it doesn't understand anything it's generating. When a human does it, you can say they did a good or bad job, but there was a thought process, actual "intelligence", and reasoning behind the decisions.

I think this insight was really what made me understand the limitations of LLMs much better. Some people say that when it produces something incorrect or fabricated it is "hallucinating", but the truth is that everything it produces is a hallucination; the fact that it's sometimes correct is incidental.
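To illustrate the stochastic part, here's a minimal sketch (the vocabulary and logits are made up; a real model scores ~100k tokens) of how decoding samples from the model's next-token distribution, so the same prompt can produce different output on every run:

    import numpy as np

    # Toy next-token distribution; vocabulary and logits are invented
    # purely for illustration.
    vocab = ["return", "print", "raise", "pass"]
    logits = np.array([2.0, 1.5, 0.3, 0.1])

    def sample_next_token(temperature=1.0, rng=np.random.default_rng()):
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()                    # softmax with temperature
        return vocab[rng.choice(len(vocab), p=probs)]

    # Same "prompt" (same logits), different output across runs:
    print([sample_next_token() for _ in range(5)])

Lowering the temperature concentrates the distribution and makes the output more repeatable, but the mechanism is still sampling over plausible tokens, not reasoning about which one is correct.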
leshow|2 months ago
The parent comment was making the case that humans are just as non-deterministic as LLMs, and I was explaining why that isn't true.
sod22|2 months ago
Therefore it cannot necessarily discern between two statements that are practically identical in the eyes of humans. This doesn't make the technology useless, but it's clearly not some AGI nonsense.
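For a concrete, if crude, way to see this, here's a sketch assuming the HuggingFace transformers library; the model choice ("gpt2") and the two statements are arbitrary. It scores two sentences differing by one word, and the computation only measures how plausible each token sequence is, never whether it's true:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative only: model and statements are arbitrary choices.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def avg_logprob(text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(input_ids=ids, labels=ids)
        return -out.loss.item()  # mean per-token log-probability

    print(avg_logprob("The Eiffel Tower is located in Paris."))
    print(avg_logprob("The Eiffel Tower is located in Rome."))
    # Both sentences get a plausibility score under the training
    # distribution; nothing in the computation checks truth.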