srslack | 2 years ago
Human or animal consciousness is an emergent phenomenon that entails the ability to experience subjective states: emotions, self-awareness, etc. It is not just information processing; it involves qualitative experience, the "what it is like" aspect of being.
When humans or animals feel pain, there is a subjective experience of suffering that is inherently tied to consciousness. The importance we assign to events, objects, or experiences is based on how they impact our conscious experience. The worth of things, big or small, is contingent upon the emotions or feelings they evoke in us.
In contrast, a regression-based function approximator does not have preferences, emotions, or experiences.
When you decide to lift your hand, there is a conscious experience involved. You have an intention and a subjective experience associated with that action. A regression-based function approximator, on the other hand, does not "decide" anything in the experiential sense. It simply produces outputs based on its inputs and the weights set by pre-training (and possibly RLHF). There is no intention, no subjective experience, and no consciousness involved.
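To make "function approximator" concrete, here is a minimal toy sketch (values and names are illustrative, not from any real model): a one-parameter linear regression whose weight is fit by least squares, after which prediction is a pure input-to-output mapping.

```python
# Toy regression-based function approximator: fit y ~ w * x by least squares.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]  # roughly y = 2x

# Closed-form least-squares slope for a no-intercept model: w = sum(x*y) / sum(x*x)
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(x):
    # A pure input -> output mapping: no intention, no subjective state,
    # just a weight adjusted by the training data.
    return w * x
```

An LLM is vastly larger, but the relationship is the same in kind: the "decision" is nothing more than arithmetic over inputs and learned weights.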
There are no qualia. To put it simply: an LLM could output text that makes you "believe" it has preferences and subjective experiences. But there's nothing there; just cognitive artifacts of the human beings who wrote its training corpus. Does an LLM have recursive self-improvement? Does it have self-directed goals? Does it have any of that? No. It's a predictor. LLMs are not sentient. They have no agency. They are not conscious.
If all of that is not convincing to you, consider the following (audio-visual) perspective: https://www.youtube.com/watch?v=FBpPjjhJGhk