item 43501755

jaakl | 11 months ago

My main takeaway here is that the models cannot tell how they really work, and asking them just returns whatever the training dataset would suggest: how a human would explain it. So it does not have self-consciousness, which is of course obvious, and we get fooled just like the crowd running away from the arriving train in Lumière's screening. LLMs just fail the famous old test "cogito ergo sum". They have no cognition, ergo they are not agents in more than a metaphorical sense. Ergo we are pretty safe from the AI singularity.


famouswaffles | 11 months ago

Nearly everything we know about the human body and brain is the result of centuries of trial, error, and experimentation, not any 'intuitive understanding' of our inner workings. Humans cannot tell how they really work either.

Philpax | 11 months ago

Do you know how you work?