The problem I have with qualia is that the argument assumes that qualia are a transcendent, non-physical thing. Why can't a quale, e.g. the experience of the redness of red, simply be what that conscious thing experiences when a set of neurons is activated in a particular way?
I know that sounds circular, so I'll expand. We can't know exactly how one person experiences a particular shade of red vs another person. But we can know that one person can experience the exact same shade of red in different ways in different contexts: what thing is red (a rose vs a welt), what the current lighting is, or their mood on different days. From that we can conclude that the red qualia isn't some transcendent fixed property.
We also know from experiments on conscious human subjects undergoing brain surgery that stimulating certain networks can result in complex phenomena: e.g., the subject smells fall leaves with a hint of maple syrup, or suddenly feels the impression of being in the presence of a long-dead aunt.
The brain builds up associations, and each "node" that is activated nudges associated nodes into activation, and so on. I assume we've all had the experience of trying to remember someone's name: we picture them in our mind and all we can come up with is the feeling that it has two syllables and starts with a vowel. In those cases the associations are manifest, but they weren't enough to trigger the primary node (the name) we were seeking.
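As a toy illustration of that spreading-activation picture (all node names, weights, and thresholds here are invented for the example, not a model of any real brain):

    import collections

    # Invented toy semantic network: edges carry association strengths.
    edges = {
        "face": [("name", 0.4), ("two_syllables", 0.8), ("starts_with_vowel", 0.7)],
        "two_syllables": [("name", 0.3)],
        "starts_with_vowel": [("name", 0.3)],
    }

    def spread(seed, threshold, decay=0.5, steps=3):
        # Push activation outward from the seed node along weighted edges.
        activation = collections.defaultdict(float)
        activation[seed] = 1.0
        frontier = [seed]
        for _ in range(steps):
            nxt = []
            for node in frontier:
                for neighbor, weight in edges.get(node, []):
                    activation[neighbor] += activation[node] * weight * decay
                    nxt.append(neighbor)
            frontier = nxt
        # Nodes that cross the threshold "come to mind"; the rest stay
        # tip-of-the-tongue, like the name that won't surface.
        return {n: a for n, a in activation.items() if a >= threshold}

    print(spread("face", threshold=0.35))
    # {'face': 1.0, 'two_syllables': 0.4, 'starts_with_vowel': 0.35}
    # "name" only accumulates ~0.31: associated, but never triggered.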
Why can't qualia simply be the state of consciousness when a particular set of nodes is in a particular state? So many other experiences seem to be exactly this; why assign a mystical, non-physical property in the case of subjective experiences?
I'm in the camp that the most likely outcome is that artificial neural networks can be conscious and have real experiences, but the toy networks we have today don't have nearly the right topology to achieve such states.
> I'm in the camp that the most likely outcome is that artificial neural networks can be conscious and have real experiences, but the toy networks we have today don't have nearly the right topology to achieve such states.
Agree with all you say, but just suspect that when we do come up with a network that can achieve such states, we will look at it and shake our heads and say,
“So… is that all there is to it?”
> I'm in the camp that the most likely outcome is that artificial neural networks can be conscious and have real experiences, but the toy networks we have today don't have nearly the right topology to achieve such states.
I'm in the camp that says this question is unanswerable. We know we individually are conscious because we experience it. We accept that other people are conscious because they are so similar to us and they say they are, so by Occam's Razor they aren't zombies and they aren't lying. We haven't proven they are conscious, but we accept it. It seems reasonable. But if our test is just that it seems reasonable, there will be no convincing someone that a very dissimilar thing isn't just lying or faking it.
We say some things are not conscious not because we have evidence of this, but because the test is whether we ourselves say "Yes, this is conscious."
Furthermore, we hold onto this distinction as important because it has ethical consequence. We can do what we want to things that aren't conscious, so it's important that as many things as possible not be conscious. Is that thing conscious? It depends. Does it taste good?
One more thing: we say we are unconscious under anesthesia or when we are asleep. Why? Because we don't have any memory of what it was like to be in those states. But this is a test of memory, not consciousness. I don't have memories from when I was one year old, but I'm fairly certain I was conscious then.
> simply be what that conscious thing experiences when a set of neurons is activated in a particular way?
That is what it is, but that's totally independent of whether qualia are physical. One is a sign pointing to the thing, the other is a claim about the characteristics of the thing.
"This is a question mark: ?"
versus
"A question mark is a punctuation mark that indicates an interrogative phrase."
versus
"A question mark is half to three quarters of a roughly circular shape, open at the lower left, with a small line segment at the bottom followed by an open space and then a dot."
> but the toy networks we have today don't have nearly the right topology to achieve such states.
But why do you think it needs some "right topology"? Why can't "qualia-generating" computation be simple?
If you mean it needs recurrence (i.e. not being simply feed-forward) — then a network continuously fed with its previous output (which is the way you run LLMs!) does have this property.
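For what it's worth, here's a minimal sketch of that loop, assuming nothing about any particular model (next_token is a made-up stand-in for a full forward pass):

    # Autoregressive loop: each output is appended to the input for the
    # next step, so a feed-forward network effectively recurs over its
    # own history. `next_token` is a placeholder, not a real model.
    def next_token(context):
        return "tok%d" % len(context)  # stand-in for one forward pass

    context = ["<prompt>"]
    for _ in range(5):
        out = next_token(context)  # feed-forward over current context
        context.append(out)        # previous output becomes next input
    print(context)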
People said the same thing about computers playing chess. "They will never play creatively, like humans, because they simply calculate by rote". Then along comes Alpha Zero and produces amazing, "creative" games that redefine the nature of computer chess. All of a sudden, such moves don't require human ingenuity any longer.
Humans always place themselves at the center of the universe, until the final moment when reality absolutely proves the lie of such self-importance. It's impressive how inventive we are in constructing such arguments, until that inevitable moment is upon us.
This is also, I believe, why so many are cavalier about the difficulty of avoiding catastrophic outcomes after inventing general artificial intelligence. We are really good at lying to ourselves. I fear this will be the last time.
> There is a difference between the nature of a phenomenon and the nature of a phenomenon’s existence, and the existence of intentionality and qualia is self-evident.
I would argue that good faith rational inquiry should not begin by having a desired final conclusion in mind and declaring it to be self-evident.
Would you be able to argue in favor of you not having intentionality or qualia?
Considering qualia self-evident is not equivalent to starting an inquiry with a predetermined conclusion. Rather, it is acknowledging a foundational aspect of existence that is necessary for any further inquiry to take place.
Although I agree that intentionality could be inquired into, qualia are the only thing that is literally self-evident. Qualia, by definition, refer to individual instances of subjective, conscious experience. This experience is immediately known to the experiencer and is, therefore, self-evident in a very basic sense.
Unless you're doing math (and are willing to take first order logic as a priori true) you need to start with something. Learning about the world requires data, data requires identifying a data source, and identifying a data source requires knowing at least one thing about the world.
As foundations go, it's hard to see how you could go any deeper than "I am having an experience".
I find it amazing that I can write an original story or a poem, give it to ChatGPT and talk about what it might mean, what the characters' motivations might be, how they might be viewed by others, and have meaningful conversations and explorations. Idk man. I don't know how it works, but it's still amazing to me even now.
I find it mind-boggling too. But it isn't intelligence as you might find in a child; it's transactionally applying a corpus of knowledge and deciding what is best to say next, throwing in some randomness, which makes it appear more human.
In many ways it simulates the human brain, shockingly well, but a dead human brain.
I'm really concerned about AGI now, when I see how much more ChatGPT really "knows" than I ever will.
If a system with actual sentient feelings, needs, curiosity, and eventually self-doubt were to go online, and never needed to eat or sleep, it's only a matter of time until we become its pets.
To say that artificial consciousness is (or "remains") impossible is to imply that there is a ghost in the machine in conscious lifeforms. Maybe there is, I don't know. But if there isn't, there should be no reason for artificial consciousness to be impossible.
As with all of these discussions, the definition of 'consciousness' is the crux of the argument – and since nobody can agree on that term, we're doomed to talk past each other!
This isn't a bad thing. It's actually the core human question – that of our existence. Stare into a mirror for a few minutes and ask yourself "Why am I me? Why am I here?" and you'll come to the exact same conclusion: unknown and probably unknowable.
But I believe that’s definable. It’s how much you control your own thoughts. That’s what consciousness is. And it’s a vast, vast scale of opposites with so much variety that a better picture than a spectrum is needed, but I can’t imagine what. And because we have so many senses to become conscious in, we get a fusion of consciousnesses that is probably unique to us.
That's not the same question though. I've answered those questions for myself, but I couldn't tell you what consciousness is or isn't, because that requires language, and language cannot suffice to communicate our personal experiences.
This is the correct answer. But left unsaid is the fact that we don't have any objective definition of consciousness yet. I posit that until we do, artificial consciousness is by definition impossible.
Searle's Chinese Room thought experiment is about mastering the Chinese language.
There are two issues I have with it as it was originally presented:
1. Mastering a language is not the same as having consciousness.
2. Who "knows Chinese" in the Chinese Room thought experiment? I would say neither Searle nor Searle in/as part of the Chinese Room "speak" Chinese. But it is
fair to say that the book that the fictional Searle follows can be
seen as a model of the Chinese language; or at least the combination of
the book and Searle as its "processor" collectively are an implemented
operational model of the Chinese language. And a model of Chinese is
NOT the same as being skilled at conversing in Chinese (executing the
model in a particular way). Other posters here have drawn analogies from
music evoking certain subjective emotions, and again a semantic network
that has concept nodes labelled with the names of these emotions is not
the same as experiencing these emotions, although the semantic network
can be said to constitute a model of sorts of the music's effects. But
again, model(x) != qualia(x).
Perhaps… but perhaps first-person point of view is merely control over one’s thoughts and extensions. The more you control yourself, the more conscious you are. So then consciousness has been achieved, it’s in there, and we’ve all fallen into a horrible trap where the AI is destined to take over the world with some sort of digital government. Digital as opposed to analog, not as in a device like a smartphone.
Conversely, fMRI data shows that apparently conscious action is preceded by significant activation of the relevant regions of the brain before areas associated with consciousness are engaged.
We see supporting behaviour with experiments in split brain patients.
So it's wholly unclear, IMO, that the experience of being conscious is necessary for complex planning and action.
I wonder how this author will feel about that question when they're busy running from their home because some "non-conscious" ... "entity" has ... "decided" that they're a nuisance - that it's tired of hearing about its "lack of consciousness."
Ah, that put a smile back on my face that was lacking earlier in the day. TERRific!
While I intuitively reject the idea of mechanical consciousness, I also admit that it's very hard to refute it logically. If your gpt5-powered autonomous vacuum cleaner is also your best buddy, because it's objectively better at supporting any conversation and seems to really understand you, is unplugging it murder? I'm struggling to say no.
“Conscious machines are impossible!” wrote the conscious entity, to the other conscious entities. This is a bit odd.
I would meekly suggest that if you use some human sound, like “artificial” or “natural,” to label something, it makes no difference to the Universe. It’s energy waves all the way down. In other words, we play by the rules of the universe. Unless the author of this article can prove he is a zombie, there is such a thing as a “conscious atomic system” which can be replicated.
As far as I can tell, the author seems to be trying to simultaneously use two separate definitions of "consciousness":
1. Something which results in "qualia"
2. Something which arises by a non-mechanistic process
and then is just expecting readers to accept that these two definitions are equivalent. It's the same fatal flaw as the original Chinese Room argument.
By all means human neurons (and animal neurons) are -- for lack of a better word -- magic, as they achieve a thing that no other thing does. Namely, they give rise to actual feeling, which has no mechanistic explanation. That does not mean it's magic; it's just that we have no way of ascertaining whether computers are capable of this feeling, and all evidence points to the contrary.
A feeling in your mind is more than just a signal. Feeling a pit in one's stomach is not just registering serotonin or GABA or whatever (I don't know which one). You really, truly feel a literal pit in your stomach. Why does one feel this? Even if you claim it's due to a physical pit in my stomach (perhaps due to muscle clenching or whatever), why do I sense a pit? Not in the 'oh, my brain neurons fire indicating the presence of something' sense, but in the 'why does it feel like that?' sense. Why, why, why? No one can explain the qualia of the sensation, and the only way to claim this thing can be experienced by non-biological objects is by blind faith.
EDIT: there should be a rule on HN against downvotes without responses. Unless you can explain qualia, stop downvoting. It's a major problem.
The argument is essentially that artificial consciousness is impossible by definition. Because consciousness is only a subjective experience, there is nothing in the physical realm that can prove that something is conscious or not.
Discussing the relative merits of neurons and transistors will get you nowhere. It is a philosophical, possibly religious question, outside the realm of natural science.
The argument is necessarily either that human neurons are magic and that artificial consciousness is impossible, or that some version of solipsism is true (the author asserts their own consciousness as self-evident and that no test can possibly show whether something else is conscious).
Well, you don't have evidence either way; maybe we do have magic neurons. Until a computer displays the same level of consciousness as other living creatures, we don't really know, because we don't fully understand how a human brain works and we cannot build one.
So I feel a bit similar about your argument to be honest.
The Chinese room thought experiment proved nothing, except that people are willing to latch on to bad philosophy. This author hasn't nailed down consciousness, and so is teetering on the edge of a dualist, vitalist cliff.
Consciousness is just a log file. It's our mind's representation of itself. Suppose we decide to eat something because we're hungry. What actually happens is a vast array of calculations in our meat computer considering many inputs and possibilities. However, once the "eat" action is selected, it summarizes all the calculations into "I was hungry so I decided to eat" and feeds it back into the meat computer as input for the next round of calculation.
Therefore, the only "proof" we have of no consciousness in the current LLM zoo is that they're all once-through.
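If I'm reading this right, the claim amounts to something like the following loop (a made-up toy with invented names, not anyone's actual architecture):

    # Toy version of the "consciousness as a log file" claim: each
    # decision gets summarized and fed back in for the next round.
    def decide(inputs):
        # stand-in for the "vast array of calculations"
        return "eat" if "hungry" in inputs else "wait"

    log = []                # the mind's running self-narrative
    state = {"hungry"}
    for _ in range(2):
        action = decide(state | set(log))
        log.append("I was %s so I decided to %s"
                   % ("hungry" if "hungry" in state else "fine", action))
        state = {"sated"} if action == "eat" else state
    print(log)
    # ['I was hungry so I decided to eat', 'I was fine so I decided to wait']

On that reading, "once-through" just means the loop body runs a single time, with no persistent log feeding back.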
I'd also point out that even if you accepted the Chinese room thought experiment, it doesn't apply.
Modern AI systems have tons of non-deterministic input. They have cameras, and they will do completely different things if a photon hits one pixel vs another. That input has true quantum randomness, and the computer will have a completely different response depending on it. Modern AI systems are absolutely not deterministic, and determinism is required for the Chinese room thought experiment. The argument may apply to a black-box computer without real-world input, but as soon as you add non-determinism, which modern real-world AI systems absolutely do have, you break any comparison to the Chinese room thought experiment.
So the entire "herp derp it's like a catapult" line is wrong straight off the bat. Modern AI systems act based on non-deterministic input, and since non-determinism propagates and makes the output of the entire system non-deterministic, you'd better have a better explanation than "it's deterministic" for why a computer can't be conscious.
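A trivial sketch of the input-nondeterminism point (respond and the threshold are invented stand-ins; os.urandom is just a convenient entropy source):

    import os

    # The same program diverges when its input comes from an
    # unpredictable physical source. `respond` stands in for a whole
    # perception-action system.
    def respond(pixel_value):
        return "turn left" if pixel_value >= 128 else "turn right"

    pixel = os.urandom(1)[0]   # stand-in for a photon hitting a sensor
    print(respond(pixel))      # varies from run to run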
That has nothing to do with this SPECIFIC argument; I didn't read a word of it. I.e., I'm not singling out this particular set of essays / arguments / etc. The problem is much more fundamental: this boils down to claiming that "the universe is 'solved'".
That one species that got to just enough of a base level of mental ability to be able to think, when looking in the mirror, "hello, gorgeous"... and, on one dinky planet in one non-descript arm of some random spiral galaxy in a universe so absurdly vast that it makes our lack of understanding of ourselves, and the one planet we live on, a 'ING FOOTNOTE, ... can come up with some sort of rigorous argument to "uphold our magnificence" and "preeminence" ... as, apparently (to some), both "God's [special] children", and also "God" ourselves in deciding such "cases".
The universe isn't solved, and I'll take any odds on universe vs. humans anyone wants to offer.
> Yes, we do, because there isn’t any assurance that consciousness is produced otherwise.
So the logic of the article is that not having any assurance that consciousness is produced means that artificial consciousness is impossible?
It would have been more appropriate for the article to argue that consciousness cannot be measured.
> Humans always place themselves at the center of the universe, until the final moment when reality absolutely proves the lie of such self-importance.
So what if that happens sometimes? What are we supposed to do? According to science, we came out of a swamp and are working things out as we go.
The default experience is to be at the center of the universe, so why would it be strange people assume they are?
> This isn't a bad thing.
It's a bad thing when someone can't explain what the word means but also declares that its existence is self-evident and impossible to deny.