(no title)
kelseyfrog | 5 days ago
My pet theory is one of ontological consciousness pareidolia. Just as face pareidolia is a heightened sensitivity to seeing faces in inanimate objects, we perceive consciousness through behavior, including language, with varying sensitivity. While our face-detection circuit might be triggered by knots on a tree, we have other inputs that negate it, so we ultimately conclude that it is not in fact a face.
The same principle applies to consciousness. The consciousness detector is triggered, but for some people the negating input can't overcome it, and they conclude that consciousness really is in there.
I've observed a number of negating reasons, like a disbelief in substrate independence and knowledge of failure modes, but I'm curious what an exhaustive list would look like. Does your consciousness circuit get triggered? I know mine does. What beliefs override it, preventing you from concluding AI is conscious?
alpaca128 | 5 days ago
In the short term, but over time the patterns get more obvious and the illusion breaks down. Generative AI is incredible at first impressions.
giantrobot | 5 days ago
When previous-generation LLMs spit out absurdist slop, I think it was much easier for people to avoid the fluency trap.