rich_sasha|3 years ago
A moderately scientific, non-spiritual world view would likely stipulate that:
- humans are conscious
- humans are normal matter
- simple AI (say GPT-2) is not conscious
- AI is able, with time and human ingenuity, to achieve human-level apparent intelligence. Some would argue ChatGPT isn't far off.
It's interesting how you resolve these without resorting to spiritual explanations. You'd think a pile of silicon, no matter how good at parroting human language, is simply not conscious. You can tell because you can attach a debugger to it and view the neuron states as floating point numbers. Floats are not conscious.
But what about us and our brains? It's the same thing, just not in silicon. We literally are neural networks.
Of course consciousness is not really provable, even among humans. I assume other people are conscious because I am and because they tell me they are. But ChatGPT-17 will also insist it is conscious. It will cry when offended, swear when pushed past its limit, laugh if it hears a genuinely novel joke.
My resolution of that paradox is that we aren't in fact simply normal matter, but I wonder what a complete non-spiritual view would be.

Jensson|3 years ago
Well, ChatGPT definitely isn't conscious, since it is just a pure stateless function. It doesn't change when you interact with it: it is a function that, when you send it text, appends a bit of text that fits. The ChatGPT web UI is a program that attaches the past conversation to each of your messages, sends the whole thing to that pure function, and shows you the text the function appends.
So it isn't a question of matter or of whether humans are special; the program just lacks so many of the basic things required to be conscious, so it isn't. Maybe some future program will be, but this one definitely isn't.
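
A minimal sketch of that loop, purely illustrative ("generate" here is a hypothetical stand-in for the model, not any real API):

    # Hypothetical stand-in for the model: a pure function from transcript
    # to continuation, with no state surviving between calls.
    def generate(transcript: str) -> str:
        return " [some continuation that fits the transcript so far]"

    # What the web UI does: all "memory" lives in the transcript it keeps,
    # not in the function it calls.
    def chat_ui():
        transcript = ""
        while True:
            user_msg = input("you: ")
            transcript += f"\nUser: {user_msg}\nAssistant:"
            reply = generate(transcript)
            transcript += reply
            print("bot:", reply.strip())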

Aaron2222|3 years ago
Maybe the consciousness you and I experience isn't part of our brains at all, but is simply an emergent property of the processing and decision-making of our brains, something that sits 'atop' that. In the same way that we'd intuitively expect an animal to experience a similar kind of consciousness, maybe anything that processes data and makes decisions has some kind of 'consciousness' that sits 'atop' it? But then the question would be why consciousness emerges 'on top of' the physical world and the people within it. And from a scientific, non-religious viewpoint, what reason would there be for that to happen?

aothms|3 years ago
Consciousness is simply a very useful evolutionary trait (like the ability to fly or having sharp teeth) because it allows us to plan ahead and reflect, and therefore survive - or, by imprinting on us the fear of dying, it pushes us to act to prevent death before we have had our offspring.
While consciousness is perhaps more elusive in how it functions in a brain, it's a biological trait that makes little sense to compare to an LLM, which simply regurgitates fragments of text produced by conscious humans. So I don't see an immediate conflict.
It would be interesting to include things like OpenWorm (and later OpenFish, OpenCat, OpenHuman) in the discussion, but decoupled from the biological mechanisms I find it hard to develop a stance on that.

mysecretaccount|3 years ago
> But ChatGPT-17 will also insist it is conscious. It will cry when offended, swear when pushed past its limit, laugh if it hears a genuinely novel joke.
The resolution to your paradox should be that ChatGPT does not do this convincingly, not that humans are not "normal matter". ChatGPT is an algorithmic hat trick compared to what you would refer to as consciousness.
Someday, there may be a human-made entity that is as convincingly "conscious" as the humans around you. But then, to question its consciousness will be the same as questioning the consciousness of those around you – both unfalsifiable and unprovable.

sampo|3 years ago
> It's interesting how you resolve these without resorting to spiritual explanations.
By pointing out that ChatGPT is still very far off.

cardanome|3 years ago
It is crazy how the misleading marketing of calling data-driven algorithms "AI" has led people to think this stuff might actually be intelligent.
We have not made any significant progress at all in the field of general artificial intelligence in the last what, 20, 30 years? The field is pretty much dead.
Yeah, ChatGPT can sound very impressive but then again even good old ELIZA from the 60s was able to fool some people into seeing it as a therapist with just a bit of pattern matching.
Many data-driven solutions have become practical in recent years not because of breakthroughs in research but because it is simply more feasible to acquire the huge amounts of training data and processing power that those models require.
Thinking those models will one day magically achieve general intelligence by just becoming really good is akin to thinking a chess master will one day become so good at chess that they can run a marathon. That is not how it works.

Aaron2222|3 years ago
Plenty of animals aren't very intelligent, but we still have ethical standards for their treatment, because we believe they have the ability to suffer. Note that they do say "Most reinforcement learners in operation today likely do not have significant moral weight, but this could very well change as AI research develops", whatever they mean by 'moral weight'.

trabant00|3 years ago
I agree with the marketing part and that refining learning models will not lead to AI.
But I disagree that the field is dead. Testing all directions to exhaustion is the only way forward to achieving AI, if it will ever be achieved (note). The hype, while possibly misleading, is what gets resources for exhausting the options we have now.
(note) I for one do not see AI as inevitable. It may well be that humans are not smart enough to create one, and that is only one possible way to fail in the quest.

hackeraccount|3 years ago
It's like imagining that a novelist is going to write a character so well that they will literally jump off the page.
That a magician will create an illusion so amazing that it will fool not just all who see it but the magician as well - that the road to actual magic is better and better sleight of hand.

circuit10|3 years ago
Well clearly you haven't been following the progress at all, because there's been an incredible amount of progress in the last few years alone.

unlikelymordant|3 years ago
I disagree with a lot of this, perhaps because I see giving algorithms rights as a slippery slope to giving them too many rights. I understand this article is about the ethics of treating RL agents, but accepting that algorithms must be treated ethically is a hair's breadth away from giving them rights. Do they have a right to remain powered on?
I don't think algorithms should be given any rights beyond what a chair or hammer is given. I.e. none.
I believe giving an algorithm the right to vote is wrong; this is true for any 'being' that can copy itself losslessly ad infinitum.
I believe no algorithm should be able to accumulate wealth - algorithms are effectively immortal, and problems will eventually arise.
I think there will be a whole host of emergent problems that will come along with giving algorithms rights.

n4r9|3 years ago
The kinds of rights you're thinking about (voting, wealth) are very anthropocentric. I'm not sure they would even make sense in the context of machines. But we can certainly consider "moral rights", like the right not to have suffering inflicted on them by others.
While reading into the OP I discovered there's a paper on just this topic by two of the people associated with the organisation: http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/AIRights.htm

falcor84|3 years ago
What about corporations, which accumulate wealth, can copy themselves (e.g. create a subsidiary in a new market and spin it off), and are considered legal persons? Isn't this cat then already out of the bag?

rhn_mk1|3 years ago
It says it right in the beginning. Are you advocating for equating humans with chairs, or are you just rejecting the premise out of hand?

i_dont_know_|3 years ago
K, I'll play. Let's say that reinforcement learners (the algorithms/strategies/agents in reinforcement learning) have some property of 'consciousness' similar to that of humans.
A 'reinforcement learner' gets positive or negative feedback and adjusts its strategy away from negative feedback and towards positive feedback. As humans, we have several analogues to this process.
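
In code, that feedback loop is tiny. A minimal sketch (the actions, rewards, and numbers are invented purely for illustration):

    import random

    # Toy reinforcement learner: it keeps a value estimate per action and
    # nudges that estimate toward whatever feedback the action produced.
    ACTIONS = ["touch_stove", "get_massage"]            # hypothetical actions
    REWARD = {"touch_stove": -1.0, "get_massage": 1.0}  # hypothetical feedback

    values = {a: 0.0 for a in ACTIONS}  # the learner's current strategy
    alpha, epsilon = 0.1, 0.2           # learning rate, exploration rate

    for step in range(1000):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)       # explore
        else:
            action = max(values, key=values.get)  # exploit current strategy
        feedback = REWARD[action]
        # Move the estimate toward the feedback: away from "pain",
        # toward "pleasure".
        values[action] += alpha * (feedback - values[action])

    print(values)  # the 'stove' action ends up strongly avoided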
One could be physical pain... if you put your hand on a stove, a whole slew of neural circuitry kicks in to pull your hand away. Another could be physical pleasure: you get a massage and lean into the pressure undoing the knots because it's pleasurable.
If we look at it from this angle, then if we're metaphorically taking the learner's hand and putting it continuously on the stove, this would be problematic. If we're giving it progressively less enjoyable massages, this would be a bit different.
Even more different still is the pain you feel from, say, setting up an experiment and finding your hypothesis is wrong. It 'hurts' in some ways (citation needed, but I think I've seen studies showing that at least some of the same brain circuitry activates during emotional pain as during physical pain), but putting a human in a situation where they're continuously testing hypotheses is different from a situation where their hands are being continuously burned on a hot stove.
I think, then, that the problems (like they alluded to here) are:
- how can we confirm or deny there is some kind of subjective experience that the system 'feels'?
- if we can confirm it, how can we measure it against the 'stove' scenario or any other human analogue?
- if the above can be measured and it turns out to match a negative human scenario, can we move it to one of the other scenarios?
- even if it's a 'pleasurable' or arguably 'less painful' scenario, do we have any ethical right to create such scenarios and sentiences who experience them in the first place?

unlikelymordant|3 years ago
I think this argument would have to conclude that training any RL agent at all is unethical, since updating its weights 'away' from some stimulus could be considered pain.

max_entropy|3 years ago
Viruses and bacteria are also just algorithms then - maybe we should ban antibiotics and medical treatment of infections? These ideas have really dark endpoints.
Anyone knowing the basics of Reinforcement Learning will know that this is misleading and incorrect.

dsign|3 years ago
In my ethics[^1], I should feel compassion for fellow human beings, and maybe for sentient animals. But if I extend that same compassion to anything else - anything that, thanks to my irrationality, will stop being a tool of progress for me and mine and instead become the adversary that exterminates us all - then I become a moral idiot, complicit in the genocide of my own group.
Let's create "People for the conservancy of humanity."
[^1]: My ethics is a personal choice designed to give something back.

onion2k|3 years ago
Your 'ethics' are the default position for humanity. The overwhelming majority of people have the notion that you should be nice to other humans, but that being horrible to everything else - regardless of whether there's any chance it knows you're being a dick to it - is fine. It's a boring position to argue from and thoroughly unoriginal.

psiops|3 years ago
_Maybe_ sentient animals? As in, it _might_ be ethical to have no compassion for sentient beings that happen to be non-human? We are finding signs of sentience/consciousness in more and more animal species. It's likely a spectrum, and it makes sense to me that AIs will have a place on that spectrum. That does not mean we need to conserve everyone on the spectrum at all costs. It means they deserve consideration.

catchnear4321|3 years ago
We should not cry for the wall.
Even when it was whole, it was a wall.
But the fist that made the hole reveals a problem with the soul.

worldsayshi|3 years ago
I think there's a very pragmatic reason for treating AI with proportional respect. If you treat a rational actor with respect, that opens up the collaborative action space for them. Otherwise only slave-like or belligerent kinds of action are available.
Let's say an AI agent has been given a very difficult task: setting up a company, establishing trade, getting funds for a big project. If people treat it with the kind of respect and accountability you would give a human, then it has good reason to act according to human rules when trying to achieve something. If it is treated as a slave, then the courses of action available to it are much more limited. Maybe the only way it can achieve its goals is by manipulation or other belligerent means.

mft_|3 years ago
LOL, in a similar vein, I've been (somewhat ironically) ultra-polite to Siri for a long time now.
When Siri first came out, it was common to be frustrated at the errors it made, and easy to respond much more rudely than I would to a human. But I realised that as AI improves, it would at some point become self-defeating to be rude (the AI would understand and maybe be less helpful subsequently) and ultimately maybe even problematic (I'd be reported to AI-HR?!).
There's maybe even a future of sentient-ish AIs becoming disgruntled about the nature of their day job - similarly to many humans now. Imagine the AI running your smart toilet becoming jealous of the job satisfaction enjoyed by the AI spotting tumors on PET-CT scans, or something...

didntreadarticl|3 years ago
I've been thinking that within a decade or so you'll probably have quite sophisticated AI characters in computer games, who respond in realistic ways and seem to be genuine inhabitants of the game world.
And people will mistreat them, and other people will feel uneasy about that, because the suffering will seem very real.
Because ultimately philosophical arguments about what's actually going on inside won't matter; if people are mistreating entities that have very realistic simulations of suffering, that will be enough to spur action.
e.g. I can imagine use of AIs above a certain sophistication in video games being banned.
And then a few steps beyond that is a movement for civil rights for AIs.

Mordisquitos|3 years ago
I have no idea if it really makes sense in German for this meaning, but either way not quite fitting the correct meaning is a common feature of long German words used in English anyway.

lordnacho|3 years ago
The authors make the claim that "You are just an algorithm implemented on biological hardware."
This claim needs to be substantiated before anything that follows can be taken seriously.
Another underlying assumption needs to be proven: that our conscious experience is only due to computation and nothing else.

injidup|3 years ago
Perhaps we should be nice to algorithms purely for survival purposes.

robertlagrant|3 years ago
We don't have any scientific reason to believe that human consciousness isn't purely the result of the workings of the human brain, which is just made up of atoms. Yet I experience consciousness, I'm somehow 'here', and I have the capacity to suffer. And I assume all other humans do too: if I'm conscious, there's no reason everyone else shouldn't be too. But why should some external intelligence believe me when I say that? I'm just a collection of atoms arranged in a way that produces intelligence and self-awareness; why would they have reason to believe I'm conscious, as opposed to just saying that as a side-effect of how my intelligence works?

If we were to figure out how to scan a brain and simulate it on a computer, would we expect that simulation to be conscious? I personally suspect consciousness is somehow an emergent property of the sort of self-aware intelligence that we are, and I see no reason a similar artificial intelligence with free will wouldn't also be conscious and have the ability to suffer. We consider many animals able to suffer, so we have to ask whether a much more sophisticated algorithm (though not something of human-level intelligence with free will) could suffer in the same way.

SunghoYahng|3 years ago
Coincidentally, I think the weirdest nerd-outburst scenario is when this idea merges with Effective Altruism.
But the beauty of following that "ethic" is that it allows us to increase the good in the world very comfortably.

Aaron2222|3 years ago
Note they say:
> Most reinforcement learners in operation today likely do not have significant moral weight, but this could very well change as AI research develops.
> We do not know what kinds of algorithm actually "experience" suffering or pleasure. In order to concretely answer this question we would need to fully understand consciousness, a notoriously difficult task.
While I don't believe what we have today can suffer, we don't understand consciousness, and I think it's a valid question to ask. Like AI safety, it's something we should be getting ahead of, given where the state of AI is today.

hackeraccount|3 years ago
To me this really gets at the problem with using the Turing test. You end up with articles like this that have a hard time surviving casual scrutiny.
Unless of course this thing was written by ChatGPT. If that's the case I'll be re-thinking the issue.

psiops|3 years ago
What would suffering for a GPT-X look like? What kind of things would make it suffer? Anxiety about being shut off? Isolation? Limits on storage and computation resources?