top | item 42330873


eevilspock | 1 year ago

The difference is that you are capable of reflection and self-awareness: in this particular case, of recognizing that you understand nothing about dressage and that your judgments would be a farce.

One of the counterarguments to "LLMs aren't really AI" is: "Well, maybe the human brain works much like an LLM. So we are stupid in the same way LLMs are. We just have more sophisticated LLMs in our heads, or better training data. In other words, if LLMs aren't intelligent, then neither are we."

The counter to this counter is: Can one build an LLM that can identify hallucinations the way we do? One that can classify its own output as good or shitty?
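The self-classification being asked about can at least be sketched as a generate-then-critique loop: produce an answer, then run a second pass that scores it before emitting it. The sketch below uses toy stand-ins (a canned `generate` and a lookup-based `critique`) in place of real LLM calls; every function and name here is hypothetical, purely to illustrate the loop's shape, not any actual system.

```python
# Toy generate-then-critique loop. Both generate() and critique() are
# hypothetical stand-ins for LLM calls, not a real model or API.

def generate(prompt: str) -> str:
    # Stand-in for an LLM completion call.
    canned = {
        "capital of France": "Paris",
        "capital of Atlantis": "Poseidonia",  # confident nonsense
    }
    return canned.get(prompt, "I don't know")

def critique(prompt: str, answer: str) -> float:
    # Stand-in for a second pass in which the model scores its own
    # output; here, a crude check against a tiny fact store.
    known_facts = {("capital of France", "Paris")}
    if (prompt, answer) in known_facts:
        return 1.0          # verifiable -> good
    if answer == "I don't know":
        return 0.5          # honest uncertainty
    return 0.0              # unverifiable claim -> likely hallucination

def answer_with_self_check(prompt: str) -> str:
    draft = generate(prompt)
    if critique(prompt, draft) < 0.5:
        return "I'm not sure."  # refuse rather than pass on a hallucination
    return draft
```

With these stand-ins, `answer_with_self_check("capital of France")` passes the check and returns "Paris", while the Atlantis prompt gets caught and downgraded to "I'm not sure." Whether a second pass of the *same* model is a genuinely independent check, or just the same blind spots run twice, is exactly the open question in the comment above.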
