
mofeien | 1 year ago

It also means that you should update your belief about the reasoning capabilities of LLMs at least slightly. If disconfirming evidence doesn't shake your beliefs at all, you don't really have beliefs, you have an ideology.


zahlman | 1 year ago

The observation here is far too easily explained in other ways to be considered particularly strong evidence.

Memorizing a solution to a classic brainteaser is not the same as having the reasoning skills needed to solve it. Learning separate solutions to related problems might let someone pattern-match, but not understand. This is about as true for humans as for LLMs. Lots of people ace their courses, even at university level, yet answer follow-up questions in ways that demonstrate a stunning lack of comprehension.

isx726552 | 1 year ago

Or it just means anything shared on the internet gets RLHF'd / special-cased.

It’s been clear for a long time that the major vendors have been watching online chatter and tidying up well-known edge cases by hand. If you have a test that works, it will keep working as long as you don’t share it widely enough to get their attention.

irunmyownemail | 1 year ago

"It also means that you should update your belief about the reasoning capabilities of LLMs at least slightly."

AI, LLMs, ML - none of these has reasoning ability. They're not human; they are software machines, not people. People reason; machines calculate and imitate, but they do not reason.

atemerev | 1 year ago

People are just analog neural circuits. They are not magic. The human brain is one physical implementation of intelligence; it is not the only one possible.