jaakl | 11 months ago
My main takeaway here is that the models cannot know how they really work, and asking them just returns whatever the training dataset would suggest: how a human would explain it. So they have no self-consciousness, which is of course obvious, and we get fooled just like the crowd running away from the arriving train at the Lumière screening.
LLMs just fail the famous old test "cogito ergo sum". They have no cognition, ergo they are not agents in more than a metaphorical sense. Ergo we are pretty safe from an AI singularity.