eevilspock | 1 year ago
One of the counterarguments to "LLMs aren't really AI" is: "Well, maybe the human brain works much like an LLM. So we are stupid in the same ways LLMs are; we just have more sophisticated LLMs in our heads, or better training data. In other words, if LLMs aren't intelligent, then neither are we."
The counter to this counter is: can one build an LLM that can identify its own hallucinations the way we do? One that can classify its own output as good or shitty?
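For concreteness, a minimal sketch of that self-classification loop: answer first, then ask the same model to grade its own answer. Everything here is hypothetical; generate is a stand-in for whatever LLM call you'd actually use, with canned replies so the control flow runs as-is.

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM call (API or local model).
        # Returns canned text so this sketch runs without any model at all.
        if "GOOD or HALLUCINATION" in prompt:
            return "GOOD"
        return "Paris is the capital of France."

    def answer_and_self_grade(question: str) -> tuple[str, str]:
        # Pass 1: answer the question.
        answer = generate(question)
        # Pass 2: ask the same model to classify its own output.
        critique = (
            "Reply with exactly one word, GOOD or HALLUCINATION.\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        verdict = generate(critique).strip().upper()
        return answer, verdict

    answer, verdict = answer_and_self_grade("What is the capital of France?")
    print(answer, "->", verdict)  # Paris is the capital of France. -> GOOD

The design catch: pass 2 runs on the same weights as pass 1, so any blind spot is shared by both passes, which is exactly what the question above is probing.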