top | item 41988968

zeknife | 1 year ago

You said it: those tests are designed to measure human intelligence, because we know there is a correspondence between test results and performance on other, more general tasks, in humans. We do not know that such a correspondence exists for language models. I would actually argue that it demonstrably does not, since even an LLM that passes every IQ test you put in front of it can still trip up on trivial exceptions that wouldn't fool a child.


esafak | 1 year ago

So they fail in their own way? They're not humans; that's to be expected.