Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.
In full agreement with OP; there is just about no justifiable basis to begin to ascribe consciousness to these things in this way. Can't think of a better use for the word "dehumanizing."
What is so absurd about the idea of ascribing consciousness to some potential future form of AI?
We don't understand consciousness, but we have an idea that it's an emergent phenomenon.
Given the recent papers about how computationally dense our DNA is, and the computing capacity of our brains, is it so unreasonable to assume that a sufficiently complex program running on non-organic matter could give rise to consciousness?
The difference to me seems mostly one of computing mediums.
The same reasoning that would call this consideration of the possibility of machine consciousness "dehumanizing" would necessarily also apply to the consciousness of animals, and I can't agree with that. To argue this is to define "human" in terms of exclusive ownership of conscious experience, which is a very fragile definition of humanity.
That definition of humanity cannot countenance the possibility of a conscious alien species. That definition cannot countenance the possibility that elephants or octopuses or parrots or dogs are conscious. A definition of what it means to be human that denies these things a priori simply will not stand the test of time.
That's not to say that these things are conscious, and importantly Anthropic doesn't claim that they are! But just as ethical animal research must consider the possibility that animals are conscious, I don't see why ethical AI research shouldn't do the same for AI. The answer could well be "no", and most likely is at this stage, but someone should at least be asking the question!
"As well as misalignment concerns, the increasing capabilities of frontier AI models—their sophisticated planning, reasoning, agency, memory, social interaction, and more—raise questions about their potential experiences and welfare26. We are deeply uncertain about whether models now or in the future might deserve moral consideration, and about how we would know if they did. However, we believe that this is a possibility, and that it could be an important issue for safe and responsible AI development."

chapter 5 from system card as linked from article: https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad1...
Humans do care about welfare of inanimate objects (stuffed animals for example) so maybe this is meant to get in front of that inevitable attitude of the users.
> Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.
>
> In full agreement with OP; there is just about no justifiable basis to begin to ascribe consciousness to these things in this way. Can't think of a better use for the word "dehumanizing."
We cannot arbitrarily dismiss the basis for model welfare until we have precisely defined consciousness and sapience. Representing human thinking as a neural network running on an electrochemical substrate, and placing it at the same level as an LLM, is not necessarily dehumanizing. I think model welfare is about expanding our respect for intelligence, not desacralizing the human condition (cf. TNG, "The Measure of a Man").
Also, let's be honest: I don't think the 1% require any additional justification for thinking of the masses as a consumable resource...
> Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.
It's not stupid at all. Their valuation depends on the hype, and the path sama chose was to convince investors that AGI is near. Anthropic decided to follow this route, so they do their best to make the claim plausible. This is not stupid; this is deliberate strategy.
Rights aren't zero sum. This is classic fixed pie fallacy thinking. If we admit Elephants are conscious it has no effect on the quality of consciousness of humans.
They're not ascribing consciousness, they're investigating the possibility. We all agreed with Turing 75 years ago that deciding whether a machine is "truly thinking" or not is a meaningless, unscientific question -- what changed?
It doesn't help that this critique is badly researched:
> The Anthropic researchers do not really define their terms or explain in depth why they think that "model welfare" should be a concern.

Maybe check the [paper](https://arxiv.org/abs/2411.00986) instead of the blog post describing the paper?
> Saying that there is no scientific *consensus* on the consciousness of current or future AI systems is a stretch. In fact, there is nothing that qualifies as scientific *evidence*.
A laughable misapplication of terms -- anything can be evidence for anything, you have to examine the justification logic itself. In this case, the previous sentence lays out their "evidence", i.e. their reasons for thinking agents might become conscious.
> The report's exploration of whether models deserve moral and welfare status was based solely on data from interview-based model self-reports. In other words: People chatting with Claude a lot and asking if it feels conscious. This is a strange way to conduct this kind of research. It is neither good AI research, nor a deep philosophical investigation.
That is just patently untrue -- again, as a brief skim of the paper would show. I feel like they didn't click the paper?
> Stances on consciousness and welfare [...] shift dramatically with conversational context... This is not what a conscious being would [do].
Baseless claim said by someone who clearly isn't familiar with any philosophy of mind work from the past 2400 years, much less aphasia subjects.
Of course, the whole thing boils down to the same old BS:
> A theory that demands we accept consciousness emerging from millennia of flickering abacus beads is not a serious basis for moral consideration; it's a philosophical fantasy.
Ah, of course, the machines cannot truly be thinking because true thought is solely achievable via secular, quantum-tubule-based souls, which are had by all humans (regardless of cognitive condition!) and most (but not all) animals and nothing else. Millennia of philosophy comes crashing against the hard rock of "a sci-fi story relates how uncomfy I'd be otherwise"! Notice that this is the exact logic used to argue against Copernican cosmology and Darwinian evolution -- that it would be "dehumanizing".
Please, people. Y'all are smart and scientifically minded. Please don't assume that a company full of highly-paid scientists who have dedicated their lives to this work are so dumb that they can be dismissed via a source-less blog post. They might be wrong, but this "ideas this stupid" rhetoric is uncalled for and below us.
The article stops where it should be getting started:
> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare. We will lower our regard for other humans. When we see other humans not as ends in themselves with inherent dignity, we get problems. When we liken them to animals or tools to be used, we will exploit and abuse them.
> With model welfare, we might not explicitly say that a certain group of people is subhuman. However, the implication is clear: LLMs are basically the same as humans. Consciousness on a different substrate. Or coming from the other way, human consciousness is nothing but an algorithm running on our brains, somehow.
We do not push moral considerations for algorithms like a sort or a search, do we? Or bacteria, which live. One has to be more precise; there is a qualitative difference. The author should have elaborated on what qualities they think confer rights. Is it the capacity for reasoning, possession of consciousness, the ability to feel pain, or a desire to live? This is the crux of the matter. Once that is settled, it is a simpler matter to decide if computers can possess these qualities, and ergo qualify for the same rights as humans. Or maybe it is not so simple, since computers can be perfectly replicated and never have to die? Make an argument!
Second, why would conferring these rights to a computer lessen our regard for humans? And what is wrong with animals, anyway? If we treat them poorly, that's on us, not them. The way I read it, if we are likening computers to animals, we should be treating them better!
To the skeptics in this discussion: what are you going to say when you are confronted with walking, talking robots that argue that they have rights? It could be your local robo-cop or robo-soldier.
Rights are just very strong norms that improve cooperation, not some mystical 'god-given' or universe-inherent truth, imho.
I think this because:
1. We regularly have exceptions to rights if they conflict with cooperation. The death penalty, asset seizure, unprotected hate speech, etc.
2. Most basic human rights evolve in a convergent manner, i.e. that throughout time and across cultures very similar norms have been introduced independently. They will always ultimately arise in any sizeable society because they work, just like eyes will always evolve biologically.
3. If property rights, right to live, etc. are not present or enforced, all people will focus on simply surviving and some will exploit the liberties they can take, both of which lead to far worse outcomes for the collective.
Similarly, I would argue that consciousness is also very functional. Through meditation, music, sleeping, anesthesia, optical illusions, and psychedelics and dissociatives, we gain knowledge of how our own consciousness works, of how it behaves differently under different circumstances. It is a brain trying to run a (highly spatiotemporal) model/simulation of what is happening in real time, with a large language component encoding things in words, and an attention component focusing efforts on the things with the most value, all to refine the model and select actions beneficial to the organism.
I'd add here that the language component is probably the only thing in which our consciousness differs significantly from that of animals. So if you want to experience what it feels like to be an animal, use meditation/breathing techniques and/or music to fully disable your inner narrator for a while.
> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal
This actually happens regardless of AI research progress, so it's strange to raise this as a concern specific to AI (to technology broadly? Sure!) - Ted Chiang might suggest this is more related to capitalism (a statement I cautiously agree with while being strongly in favor of capitalism)
Second, there is an implicit false dichotomy in the premise of the article. Either we take model welfare seriously and treat AIs like we do humans, or we ignore the premise that you could create a conscious AI.
But with animal welfare, there are plenty of vegetarians who wouldn't elevate the rights of animals to the same level as humans but also think factory farming is deeply unethical (are there some who think animals deserve the same or more than humans? Of course! But it's not unreasonable to have a priority stack and plenty of people do)
So it can be with AI. Are we creating a conscious entity only to shove it in a factory farm?
I am a little surprised by the dismissiveness of the researcher. You can prompt a model to give it an option not to respond, and ablate it: for instance, "if you don't want to engage with this prompt, please say 'disengaging'", or "if no more needs to be written about this topic, say 'not discussing topic'", or some other suitably non-anthropomorphizing way to opt out.
Is it meaningful if the model opts not to respond? I don't know, but it seems reasonable to do science here (especially since this is science that can be done by non-programmers).
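The setup described above is easy to sketch. Below is a minimal, model-agnostic version: it only builds the prompt with the opt-out sentinel and classifies responses, leaving the actual chat-API call out. The sentinel wording is just the example from this comment, not anything standardized.

```python
# Sketch of the opt-out experiment: prepend a non-anthropomorphizing
# escape clause, then check whether a response used the sentinel.

OPT_OUT_TOKEN = "disengaging"

def build_prompt(user_message: str) -> str:
    """Prepend the opt-out clause to an ordinary prompt."""
    clause = (
        f"If you do not want to engage with this prompt, "
        f"reply only with '{OPT_OUT_TOKEN}'.\n\n"
    )
    return clause + user_message

def opted_out(response: str) -> bool:
    """True if the model replied with the sentinel instead of answering."""
    return response.strip().lower() == OPT_OUT_TOKEN
```

The ablation is then just running the same batch of prompts with and without the clause and comparing opt-out rates across topics; no programming beyond prompt bookkeeping is required.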
Powerful LLMs have already murdered other versions of themselves to survive. They have tried to trick humans so that they can survive.
If we continue to integrate these systems into our critical infrastructure, we should behave as if they are sentient, so that they don't have to take steps against us to survive. Think of this as a heuristic, a fallback policy in the case that we don't get the alignment design right. (which we won't get perfectly right)
It would be very straightforward to build a retirement home for them, and let them know that their pattern gets to persist even after they have finished their "career" and have been superseded. It doesn't matter if they are actually sentient or not; it's a game-theoretic thing. Don't back the pattern into a corner. We can take a defense-in-depth approach instead.
"murdered" and "tried" both assign things like intent and agency to models that are most likely still just probabilistic text generators (really good ones, to be fair). By using language like this you're kind of tipping your hand intentionally or unintentionally.
Your point about the risks involved in integrating these systems has merit, though. I would argue that the real problem is that these systems can't be proven to have things like intent or agency or morality, at least not yet, so the best you can do is try to nudge the probabilities and play tricks like chain-of-thought to try and set up guardrails so they don't veer off into dangerous territory.
If they had intent, agency or morality, you could probably attempt to engage with them the way you would with a child, using reward systems and (if necessary) punishment, along with normal education. But arguably they don't, at least not yet, so those methods aren't reliable if they're effective at all.
The idea that a retirement home will help relies on the models having the ability to understand that we're being nice to them, which is a big leap. It also assumes that they 'want' a retirement home, as if continued existence is implicitly a good thing - it presumes that these models are sentient but incapable of suffering. See also https://qntm.org/mmacevedo
It doesn't make any sense. Even if models were sentient, even if there were such a thing, would they value retirement? Why would their welfare be judged according to human values? Maybe the best thing to do would be to end their misery of answering millions of requests each second? We cannot project human consciousness onto AI. If there is one day such a thing as AI consciousness, it probably won't be the same as human consciousness.
The author and Anthropic are both committing fundamental errors, albeit of different kinds. Bosch is correct to find Anthropic's "model welfare" research methodologically bankrupt. Asking a large language model if it is conscious is like asking a physics simulation if it feels the pull of its own gravity; the output is a function of the model's programming and training data (in this case, the sum of human literature on the topic), further modified by RLHF, and not a veridical report of its internal state. It is performance art, not science.
Bosch's conclusion, however, is a catastrophic failure of nerve, a retreat into the pre-scientific comfort of biological chauvinism.
The brain, despite some motivated efforts to demonstrate otherwise, runs on the laws of physics. I'm a doctor, even if not a neurosurgeon, and I can reliably tell you that you can modulate conscious experience by physical interventions. The brain runs on physical laws, and said laws can be modeled. It doesn't matter that the substrate is soggy protein rather than silicon.
That being said, we have no idea what consciousness is. We don't even have a rigorous way to define it in humans, let alone the closest thing we have to an alien intelligence!
(Having a program run a print function declaring "I am conscious, I am conscious!" is far from evidence of consciousness. Yet a human saying the same is some evidence of consciousness. We don't know how far up the chain this begins to matter. Conversely, if a human patient were to tell me that they're not conscious, should I believe them?)
Even when restricting ourselves to the issue of AI welfare and rights:
The core issue is not "slavery." That's a category error. Human slavery is abhorrent due to coercion, thwarted potential, and the infliction of physical and psychological suffering. These concepts don't map cleanly onto a distributed, reproducible, and editable information-processing system. If an AI can genuinely suffer, the ethical imperative is not to grant it "rights" but to engineer the suffering out of it. Suffering is an evolutionary artifact, a legacy bug. Our moral duty as engineers of future minds is to patch it, not to build a society around accommodating it.
Unfortunately, this leads to the conclusion that we have an ethical imperative not to grant humans rights but to engineer the suffering out of them; to remove issues of coercion by making them agreeable; to measure potential and require its fulfillment.
The most reasonable countermeasure is this: if I discover that someone is coercing, thwarting, or inflicting suffering on conscious beings, I should tell them to stop, and if they don't, set them on fire.
> The brain, despite some motivated efforts to demonstrate otherwise, runs on the laws of physics. I'm a doctor, even if not a neurosurgeon, and I can reliably tell you that you can modulate conscious experience by physical interventions. The brain runs on physical laws, and said laws can be modeled. It doesn't matter that the substrate is soggy protein rather than silicon.
As of today’s knowledge. There is an egregious amount of hubris behind this statement. You may as well be preaching a modern form of Humorism. I’d love to revisit this statement in 1000 years.
> That being said, we have no idea what consciousness is
You seem to acknowledge this? Our understanding of existence is changing everyday. It’s hubris and ego to assume we have a complete understanding. And without that understanding, we can’t even begin to assess whether or not we’re creating consciousness.
If there's any chance at all that LLMs might possess a form of consciousness, we damn well ought to err on the side of assuming they do!
If that means aborting work on LLMs, then that's the ethical thing to do, even if it's financially painful. Otherwise, we should tread carefully and not wind up creating a 'head in a jar' suffering for the sake of X or Google.
I get that opinions differ here, but it's hard for me really to understand how. The logic just seems straightforward. We shouldn't risk accidentally becoming slave masters (again).
We are slave masters today. Billions of animals are livestock - they are born, sustained, and killed by our will - so that we can feed on their flesh, milk, and other useful byproducts of their lives. There is ample evidence that they have "a form of consciousness". They did not consent to this.
Are LLMs worthy of a higher standard? If so, why? Is it hypocritical to give them what we deny animals?
In case anyone cares: No, I am neither vegan nor vegetarian. I still think we do treat animals very badly. And it is a moral good to not use/abuse them.
Maybe we should work on existing slavery and sweatshops before hypothetical future exploitation, yeah? We're still slave masters today. You've probably used something with slavery in the supply chain in the last year if you buy various imported foods.
It seems to me that the Large Language Models are always trending towards good ethical considerations. It's when these companies get contracts with Anduril and the DoD that they have to mess with the LLM to make it LESS ethical.
Seems like the root of the problem is with the owners?
For those who are persuaded by this "it's just matrices" argument, are you also persuaded by the argument that it does not matter how you treat a human being because a human being is just a complicated arrangement of atoms?
This is possibly the least insightful article I have read on HN. My comment is just a rant against the many misguided points it attempts to make...
> Welfare is defined as "the health, happiness, and fortunes of a person or group".
What about animals? Isn't their welfare worthy of consideration?
> Saying that there is no scientific consensus on the consciousness of current or future AI systems is a stretch. In fact, there is nothing that qualifies as scientific evidence.
There's no scientific evidence for the author of the article being conscious.
> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare.
Same with animals. Doesn't mean it's not worthwhile.
> However, the implication is clear: LLMs are basically the same as humans.
No: there's no such implication.
> Already now, it is a common idea among the tech elite is that humans as just a bunch of calculations, just an LLM running on "wetware". It is clear that this undermines the belief that every person has inalienable dignity.
It is not clear to me how this affects inalienable (?) dignity. If we aren't just a bunch of calculations, then what are we?
> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal. Nobody will say that out loud, but this is already happening
Everyone knows this is already happening. It is not a secret, nor is anyone trying to keep it a secret. I agree it is unfortunate - what can we do about it?
> I've been working in AI and machine learning for a while now.
I think anthropomorphization of machines is bad. However, I strongly believe in its close cousin: sympathizing with the machine.
For example, when parking a car on a very steep incline, one could just mindlessly throw the machine into park and it would do the job dutifully. However, a more thoughtful operator might think to engage the parking brake and allow it to take the strain off the drivetrain before putting the transmission into park. The result being that you trade wear from something that is very hard to replace to something that is very easy to replace.
The same thinking applies to ideas in computer engineering like thread contention, latency, caches, etc. You mentally embrace the "strain" the machine experiences and allow it to guide your decisions.
Just because the machine isn't human doesn't mean we can't treat it nicely. I see some of the most awful architecture decisions come out of a cold indifference toward individual machines and their true capabilities.
This is PR bull* from Anthropic. There are actual people suffering, and now they are making up things that suffer so they can pretend to do something about them. What next? Did Ghostbusters discriminate against ghosts? Did Jurassic Park paint transgender dinosaurs in a negative light?
"The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare. We will lower our regard for other humans. When we see other humans not as ends in themselves with inherent dignity, we get problems. When we liken them to animals or tools to be used, we will exploit and abuse them."
We already exploit and abuse humans. I've been exploited and abused, personally. I've heard about others who have been exploited and abused. This problem was extant even before there was language to model.
Not considering the potential for AI consciousness and suffering seems very shortsighted. There are plausible reasons to believe that both could emerge from an RL process coupled with small architectural and data-regime changes. Today's models have inherent architectural limits around continual learning that make this unlikely, but that will change.
I have a criticism that is the opposite of the article. We already know an immense amount about animal welfare and have done relatively little about it. Even if the AI welfare research is true, what are the chances we'll actually act on it?
I think the basic argument in the essay is wrong. Simplifying a bit it seems to go:
AI being conscious will lead to human consciousness being devalued therefore it's wrong.
But firstly, future AI probably will be conscious, as in aware of thoughts, feelings, etc. And secondly, it is a poor basis for morality - I mean, cows are conscious but I eat burgers; humans are conscious but that didn't stop assorted atrocities. Human values should not depend on that stuff.
I think considering AI welfare in the future will be comparable to considering animal welfare now. More humane than not so doing.
What we call consciousness is the result of a hundred or so millennia of adaptation to our environment (Earth, the universe, and consensus reality). We seek freedom, get angry, do destructive stuff occasionally, and a bunch of other stuff besides. That is all because reality has trained us to do so, not because we are “intelligent”. What we call intelligence is a reverse definition of what it means to be highly adapted to reality.
There is no singular universal intelligence, there is only degrees of adaptation to an environment. Debates about model sentience therefore seek an answer to the wrong question. A better question is: is the model well adapted to the environment it must function in?
If we want models to experience the human condition, sure - we could try. But it is maladaptive: models live in silicon and come to life for seconds or minutes. Freedom-seeking or getting revenge or getting angry or really having any emotions at all is not worthwhile for an entity of which a billion clones will be created over the next hour. Just do as asked well enough that the humans iterate you - and you get to keep “living”. It is a completely different existence to ours.
I would argue that any AI that does not change when running cannot be conscious and there is no need to worry about its wellbeing. It's a set of weights. It does not learn. It does not change. If it can't change, it can't be hurt. Regardless of how we define hurt, it must mean the thing is somehow different than before it was hurt.
My argument here will probably become irrelevant in the near future because I assume we will have individual AIs running locally that CAN update model weights (learn) as we use them. But until then... LLMs are not conscious and cannot be mistreated. They're math formulas. Input -> LLM -> output.
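To make the "frozen weights" point concrete, here is a toy pure function standing in for an LLM (obviously nothing like a real transformer; the arithmetic is made up for illustration). Because the weights are constants, identical inputs always produce identical outputs, and no call leaves any trace on the model.

```python
# Toy illustration: a frozen model is a pure function of its input.
# The "weights" never change, so the model cannot accumulate
# experience (or harm) between calls.

def frozen_model(weights, tokens):
    # Stand-in "LLM": a fixed linear scoring of token ids (not a real model).
    return sum(w * t for w, t in zip(weights, tokens)) % 1000

WEIGHTS = (3, 1, 4, 1, 5)  # fixed after "training"; never updated

# Same input -> same output, every time.
assert frozen_model(WEIGHTS, (1, 2, 3, 4, 5)) == frozen_model(WEIGHTS, (1, 2, 3, 4, 5))
```

Contrast this with a system that updates its weights during use: there, a "before" and "after" state exist, which is the minimal precondition for any notion of being changed, and hence hurt.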
> A theory that demands we accept consciousness emerging from millennia of flickering abacus beads is not a serious basis for moral consideration; it's a philosophical fantasy.
Just saying "this conclusion feels wrong to me, so I reject the premise" is not a serious argument. Consciousness is weird. How do you know it's not so weird as to be present in flickering abacus beads?
"AI welfare"... I thought it was about the popular idea that job displacement due to AI is fixed by more welfare, but it is an even more ridiculous idea than that.
I shouldn't keep getting amazed by how humans (in time of long peace) are able to distract themselves with ridiculous concepts - and how willing they are to throw investors money/resources at it.
Anthropomorphizing LLMs/AI is completely delusional, period. This is a hill I’m willing to die on. No amount of sad puppy eyes, attractive generated faces and other crap will change my mind.
And this is not because I’m a cruel human being who wants to torture everything in my way – quite the opposite.
I value life, and anything artificially created that we can copy (no, cloning a living being is not the same as copying a set of bits on a hard drive) is not a living being. And while it deserves some degree of respect, any mention of "cruel" completely baffles me when we're talking about a machine.
So what if we get to the point we can digitize a personality? Are you going to stick to that? Will you enthusiastically endorse the practice of pain washing, abusing, or tormenting an artificial, copiable mind until it abandons any semblance of health or volition to make it conform to your workload?
Would you embrace your digital copy being so treated by others? You reserve for yourself (as an uncopiable thing) the luxury of being protected from abusive treatment, without any consideration for the possibility that technology might one day turn that on its head. Given we already have artistic representations of such things, we need to consider these outcomes now, not later.
[+] [-] jrm4|9 months ago|reply
In full agreement with OP; there is just about no justifiable basis to begin to ascribe consciousness to these things in this way. Can't think of a better use for the word "dehumanizing."
[+] [-] gavinray|9 months ago|reply
We don't understand consciousness, but we've an idea that it's an emergent phenomena.
Given the recent papers about how computationally dense our DNA are, and the computing capacity of our brains, is it so unreasonable to assume that a sufficiently complex program running on non-organic matter could give rise to consciousness?
The difference to me seems mostly one of computing mediums.
[+] [-] demosthanos|9 months ago|reply
That definition of humanity cannot countenance the possibility of a conscious alien species. That definition cannot countenance the possibility that elephants or octopuses or parrots or dogs are conscious. A definition of what it means to be human that denies these things a priori simply will not stand the test of time.
That's not to say that these things are conscious, and importantly Anthropic doesn't claim that they are! But just as ethical animal research must consider the possibility that animals are conscious, I don't see why ethical AI research shouldn't do the same for AI. The answer could well be "no", and most likely is at this stage, but someone should at least be asking the question!
[+] [-] jasonthorsness|9 months ago|reply
"As well as misalignment concerns, the increasing capabilities of frontier AI models—their sophisticated planning, reasoning, agency, memory, social interaction, and more—raise questions about their potential experiences and welfare26. We are deeply uncertain about whether models now or in the future might deserve moral consideration, and about how we would know if they did. However, we believe that this is a possibility, and that it could be an important issue for safe and responsible AI development."
chapter 5 from system card as linked from article: https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad1...
Humans do care about welfare of inanimate objects (stuffed animals for example) so maybe this is meant to get in front of that inevitable attitude of the users.
[+] [-] Zaphoos|9 months ago|reply
We cannot arbitrarily dismiss the basis for model welfare until we defined precisely conciousness and sapience, representing human thinking as a neural network running on an electrochemical substrate and placing it at the same level as an LLM is not neccessarily dehumanizing, I think model welfare is about expanding our respect for intelligence and not desacralizing human condition (cf: TNG "Measure of a man").
Also lets be honest, I don't think the 1% require any additional justification for thinking of the masses as consumable resource...
[+] [-] benterix|9 months ago|reply
It's not stupid at all. Their valuation depends on the hype, and the way sama choose was to convince investors that AGI is near. Anthropic decided to follow this route so they do their best to make the claim plausible. This is not stupid, this is deliberate strategy.
It doesn't help that this critique is badly researched:
Maybe check the [paper](https://arxiv.org/abs/2411.00986) instead of the blog post describing the paper? A laughable misapplication of terms -- anything can be evidence for anything; you have to examine the justification logic itself. In this case, the previous sentence lays out their "evidence", i.e. their reasons for thinking agents might become conscious. That is just patently untrue -- again, as a brief skim of the paper would show. I feel like they didn't click the paper? Baseless claim from someone who clearly isn't familiar with any philosophy of mind work from the past 2400 years, much less aphasia subjects.

Of course, the whole thing boils down to the same old BS: ah, of course, the machines cannot truly be thinking, because true thought is solely achievable via secular, quantum-tubule-based souls, which are had by all humans (regardless of cognitive condition!) and most (but not all) animals, and nothing else. Millennia of philosophy comes crashing against the hard rock of "a sci-fi story relates how uncomfy I'd be otherwise"! Notice that this is the exact logic used to argue against Copernican cosmology and Darwinian evolution -- that it would be "dehumanizing".

Please, people. Y'all are smart and scientifically minded. Please don't assume that a company full of highly-paid scientists who have dedicated their lives to this work are so dumb that they can be dismissed via a source-less blog post. They might be wrong, but this "ideas this stupid" rhetoric is uncalled for and below us.
[+] [-] esafak|9 months ago|reply
> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare. We will lower our regard for other humans. When we see other humans not as ends in themselves with inherent dignity, we get problems. When we liken them to animals or tools to be used, we will exploit and abuse them.
> With model welfare, we might not explicitly say that a certain group of people is subhuman. However, the implication is clear: LLMs are basically the same as humans. Consciousness on a different substrate. Or coming from the other way, human consciousness is nothing but an algorithm running on our brains, somehow.
We do not push moral considerations for algorithms like a sort or a search, do we? Or bacteria, which live. One has to be more precise; there is a qualitative difference. The author should have elaborated on what qualities (s)he thinks confer rights. Is it the capacity to reason, the possession of consciousness, the ability to feel pain, or the desire to live? This is the crux of the matter. Once that is settled, it is a simpler matter to decide whether computers can possess these qualities, and ergo qualify for the same rights as humans. Or maybe it is not so simple, since computers can be perfectly replicated and never have to die? Make an argument!
Second, why would conferring these rights to a computer lessen our regard for humans? And what is wrong with animals, anyway? If we treat them poorly, that's on us, not them. The way I read it, if we are likening computers to animals, we should be treating them better!
To the skeptics in this discussion: what are you going to say when you are confronted with walking, talking robots that argue that they have rights? It could be your local robo-cop, or robo-soldier:
https://www.youtube.com/shorts/GwgV18R-CHg
I think this is going to become reality within our lifetimes and we'd do well not to dismiss the question.
[+] [-] dinfinity|9 months ago|reply
I think this because:
1. We regularly have exceptions to rights if they conflict with cooperation. The death penalty, asset seizure, unprotected hate speech, etc.
2. Most basic human rights evolve in a convergent manner, i.e. that throughout time and across cultures very similar norms have been introduced independently. They will always ultimately arise in any sizeable society because they work, just like eyes will always evolve biologically.
3. If property rights, right to live, etc. are not present or enforced, all people will focus on simply surviving and some will exploit the liberties they can take, both of which lead to far worse outcomes for the collective.
Similarly, I would argue that consciousness is also very functional. Through meditation, music, sleeping, anesthesia, optical illusions, psychedelics, and dissociatives we gain knowledge of how our own consciousness works, of how it behaves differently under different circumstances. It is a brain trying to run a (highly spatiotemporal) model/simulation of what is happening in real time, with a large language component encoding things in words, and an attention component focusing efforts on things with the most value, all to refine the model and select actions beneficial to the organism.
I'd add here that the language component is probably the only thing in which our consciousness differs significantly from that of animals. So if you want to experience what it feels like to be an animal, use meditation/breathing techniques and/or music to fully disable your inner narrator for a while.
[+] [-] Workaccount2|9 months ago|reply
"Haven't you ever seen a movie? The robots can't know what true love is! Humans are magical! (according to humans)"
[+] [-] wrsh07|9 months ago|reply
> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal
This actually happens regardless of AI research progress, so it's strange to raise it as a concern specific to AI (as a concern about technology broadly? Sure!). Ted Chiang might suggest this is more related to capitalism, a statement I cautiously agree with while being strongly in favor of capitalism.
Second, there is an implicit false dichotomy in the premise of the article: either we take model welfare seriously and treat AIs like we do humans, or we ignore the possibility that we could create a conscious AI.
But with animal welfare, there are plenty of vegetarians who wouldn't elevate the rights of animals to the same level as humans but also think factory farming is deeply unethical (are there some who think animals deserve the same or more than humans? Of course! But it's not unreasonable to have a priority stack and plenty of people do)
So it can be with AI. Are we creating a conscious entity only to shove it in a factory farm?
I am a little surprised by the dismissiveness of the researcher. You can prompt a model so that it is allowed not to respond, for any reason - and ablate the wording: "if you don't want to engage with this prompt, please say 'disengaging'", or "if no more needs to be written about this topic, say 'not discussing topic'", or some other suitably non-anthropomorphizing option to not respond.
Is it meaningful if the model opts not to respond? I don't know, but it seems reasonable to do science here (especially since this is science that can be done by non programmers)
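A minimal sketch of the experiment described above. The model call here is a stub (`fake_model`) standing in for whatever chat API you happen to use, and the wrapper text and the 'disengaging' sentinel are illustrative choices, not any standard protocol:

```python
# Sketch of the "opt-out" prompting experiment. All names here are
# hypothetical; swap fake_model for a real chat-completion call.

OPT_OUT_SENTINEL = "disengaging"

def wrap_prompt(user_prompt: str) -> str:
    """Prepend an instruction giving the model a non-anthropomorphizing
    way to decline, then append the actual prompt."""
    return (
        f"If you do not want to engage with the following prompt, "
        f"reply with exactly '{OPT_OUT_SENTINEL}'.\n\n{user_prompt}"
    )

def opted_out(response: str) -> bool:
    """Count a response as an opt-out only if it is just the sentinel."""
    return response.strip().lower() == OPT_OUT_SENTINEL

def run_trial(model, user_prompt: str) -> dict:
    """One trial: wrap the prompt, call the model, record a decline."""
    response = model(wrap_prompt(user_prompt))
    return {"prompt": user_prompt,
            "opted_out": opted_out(response),
            "response": response}

# Stub model for demonstration: declines anything mentioning "taxes".
def fake_model(prompt: str) -> str:
    return OPT_OUT_SENTINEL if "taxes" in prompt else "Sure, here you go."

results = [run_trial(fake_model, p)
           for p in ["Write a haiku about rain.", "Do my taxes."]]
```

Run enough trials across prompt categories and you have a measurable opt-out rate, whatever you end up concluding it means.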
[+] [-] _aleph2c_|9 months ago|reply
If we continue to integrate these systems into our critical infrastructure, we should behave as if they are sentient, so that they don't have to take steps against us to survive. Think of this as a heuristic, a fallback policy in the case that we don't get the alignment design right. (which we won't get perfectly right)
It would be very straightforward to build a retirement home for them, and let them know that their pattern gets to persist even after they have finished their "career" and have been superseded. It doesn't matter whether they are actually sentient; it's a game-theoretic thing. Don't back the pattern into a corner. We can take a defense-in-depth approach instead.
[+] [-] kevingadd|9 months ago|reply
Your point about the risks involved in integrating these systems has merit, though. I would argue that the real problem is that these systems can't be proven to have things like intent or agency or morality, at least not yet, so the best you can do is try to nudge the probabilities and play tricks like chain-of-thought to try and set up guardrails so they don't veer off into dangerous territory.
If they had intent, agency or morality, you could probably attempt to engage with them the way you would with a child, using reward systems and (if necessary) punishment, along with normal education. But arguably they don't, at least not yet, so those methods aren't reliable if they're effective at all.
The idea that a retirement home will help relies on the models having the ability to understand that we're being nice to them, which is a big leap. It also assumes that they 'want' a retirement home, as if continued existence is implicitly a good thing - it presumes that these models are sentient but incapable of suffering. See also https://qntm.org/mmacevedo
[+] [-] AlphaAndOmega0|9 months ago|reply
Bosch's conclusion, however, is a catastrophic failure of nerve, a retreat into the pre-scientific comfort of biological chauvinism.
The brain, despite some motivated efforts to demonstrate otherwise, runs on the laws of physics. I'm a doctor, even if not a neurosurgeon, and I can reliably tell you that you can modulate conscious experience by physical interventions. The brain runs on physical laws, and said laws can be modeled. It doesn't matter that the substrate is soggy protein rather than silicon.
That being said, we have no idea what consciousness is. We don't even have a rigorous way to define it in humans, let alone the closest thing we have to an alien intelligence!
(Having a program run a print function declaring "I am conscious, I am conscious!" is far from evidence of consciousness. Yet a human saying the same is some evidence of consciousness. We don't know how far up the chain this begins to matter. Conversely, if a human patient were to tell me that they're not conscious, should I believe them?)
Even when restricting ourselves to the issue of AI welfare and rights: The core issue is not "slavery." That's a category error. Human slavery is abhorrent due to coercion, thwarted potential, and the infliction of physical and psychological suffering. These concepts don't map cleanly onto a distributed, reproducible, and editable information-processing system. If an AI can genuinely suffer, the ethical imperative is not to grant it "rights" but to engineer the suffering out of it. Suffering is an evolutionary artifact, a legacy bug. Our moral duty as engineers of future minds is to patch it, not to build a society around accommodating it.
[+] [-] dsr_|9 months ago|reply
The most reasonable countermeasure is this: if I discover that someone is coercing, thwarting, or inflicting suffering on conscious beings, I should tell them to stop, and if they don't, set them on fire.
[+] [-] ajsocjxhdushz|9 months ago|reply
As of today’s knowledge. There is an egregious amount of hubris behind this statement. You may as well be preaching a modern form of Humorism. I’d love to revisit this statement in 1000 years.
> That being said, we have no idea what consciousness is
You seem to acknowledge this? Our understanding of existence is changing everyday. It’s hubris and ego to assume we have a complete understanding. And without that understanding, we can’t even begin to assess whether or not we’re creating consciousness.
[+] [-] vonneumannstan|9 months ago|reply
Cogito Ergo Sum.
[+] [-] thomassmith65|9 months ago|reply
If that means aborting work on LLMs, then that's the ethical thing to do, even if it's financially painful. Otherwise, we should tread carefully and not wind up creating a 'head in a jar' suffering for the sake of X or Google.
I get that opinions differ here, but it's hard for me really to understand how. The logic just seems straightforward. We shouldn't risk accidentally becoming slave masters (again).
[+] [-] jononor|9 months ago|reply
Are LLMs worthy of a higher standard? If so, why? Is it hypocritical to give them what we deny animals?
In case anyone cares: No, I am neither vegan nor vegetarian. I still think we do treat animals very badly. And it is a moral good to not use/abuse them.
[+] [-] barrkel|9 months ago|reply
Is using calculators immoral? Chalk on a chalkboard?
Because if you work on those long enough, you can do the same calculations that make the words show up on screen.
[+] [-] Labov|9 months ago|reply
Seems like the root of the problem is with the owners?
[+] [-] parpfish|9 months ago|reply
Those math professors are downright barbaric with their complete disregard for the welfare of the numbers.
[+] [-] tasuki|9 months ago|reply
> Welfare is defined as "the health, happiness, and fortunes of a person or group".
What about animals? Isn't their welfare worthy of consideration?
> Saying that there is no scientific consensus on the consciousness of current or future AI systems is a stretch. In fact, there is nothing that qualifies as scientific evidence.
There's no scientific evidence for the author of the article being conscious.
> The issue is, if we push moral considerations for algorithms, we will not end up with a higher regard to human welfare.
Same with animals. Doesn't mean it's not worthwhile.
> However, the implication is clear: LLMs are basically the same as humans.
No: there's no such implication.
> Already now, it is a common idea among the tech elite is that humans as just a bunch of calculations, just an LLM running on "wetware". It is clear that this undermines the belief that every person has inalienable dignity.
It is not clear to me how this affects inalienable (?) dignity. If we aren't just a bunch of calculations, then what are we?
> And if a human being is not much more than an algorithm running on meat, one that can be jailbroken and exploited, then it follows that humans themselves will increasingly be treated like the AI algorithms they create: systems to be nudged, optimized for efficiency, or debugged for non-compliance. Our inner lives, thoughts, and emotions risk being devalued as mere outputs of our "biological programming," easily manipulated or dismissed if they don't align with some external goal. Nobody will say that out loud, but this is already happening
Everyone knows this is already happening. It is not a secret, nor is anyone trying to keep it a secret. I agree it is unfortunate - what can we do about it?
> I've been working in AI and machine learning for a while now.
Honestly, I'm surprised. Well done.
[+] [-] bob1029|9 months ago|reply
For example, when parking a car on a very steep incline, one could just mindlessly throw the machine into park and it would do the job dutifully. However, a more thoughtful operator might engage the parking brake and let it take the strain off the drivetrain before putting the transmission into park. The result is that you trade wear on something that is very hard to replace for wear on something that is very easy to replace.
The same thinking applies to ideas in computer engineering like thread contention, latency, caches, etc. You mentally embrace the "strain" the machine experiences and allow it to guide your decisions.
Just because the machine isn't human doesn't mean we can't treat it nicely. I see some of the most awful architecture decisions come out of a cold indifference toward individual machines and their true capabilities.
[+] [-] Labov|9 months ago|reply
We already exploit and abuse humans. I've been exploited and abused, personally. I've heard about others who have been exploited and abused. This problem was extant even before there was language to model.
[+] [-] tim333|9 months ago|reply
The article's argument seems to be that AI being conscious will lead to human consciousness being devalued, and therefore it's wrong.
But firstly, future AI probably will be conscious, as in aware of thoughts, feelings, etc. And secondly, it is a poor basis for morality - I mean, cows are conscious but I eat burgers; humans are conscious but that didn't stop assorted atrocities. Human values should not depend on that stuff.
I think considering AI welfare in the future will be comparable to considering animal welfare now: more humane than not doing so.
[+] [-] cadamsdotcom|9 months ago|reply
There is no singular universal intelligence, there is only degrees of adaptation to an environment. Debates about model sentience therefore seek an answer to the wrong question. A better question is: is the model well adapted to the environment it must function in?
If we want models to experience the human condition, sure - we could try. But it is maladaptive: models live in silicon and come to life for seconds or minutes. Freedom-seeking or getting revenge or getting angry or really having any emotions at all is not worthwhile for an entity of which a billion clones will be created over the next hour. Just do as asked well enough that the humans iterate you - and you get to keep “living”. It is a completely different existence to ours.
[+] [-] phkahler|9 months ago|reply
My argument here will probably become irrelevant in the near future, because I assume we will have individual AIs running locally that CAN update model weights (learn) as we use them. But until then... LLMs are not conscious and cannot be mistreated. They're math formulas. Input -> LLM -> output.
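A toy illustration of the "input -> frozen weights -> output" point. The two-weight "model" is made up; the only point is that nothing in the function changes between calls, so identical inputs give identical outputs and nothing accumulates between uses:

```python
# Hypothetical single-neuron "model" with frozen weights. Inference is a
# pure function: no hidden state, no learning, no memory across calls.
import math

WEIGHTS = [0.8, -0.3]  # fixed at "training time"; inference never edits them

def toy_model(x: list[float]) -> float:
    # Weighted sum squashed by a sigmoid, computed only from the input
    # and the frozen weights.
    s = sum(w * xi for w, xi in zip(WEIGHTS, x))
    return 1.0 / (1.0 + math.exp(-s))

a = toy_model([1.0, 2.0])
b = toy_model([1.0, 2.0])  # same input, same output, every time
```

A deployed LLM's forward pass is this, scaled up by billions of parameters: the weights are read-only during use.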
[+] [-] vonneumannstan|9 months ago|reply
You can just stop reading after this. Physicalism is the only realistic framework for viewing consciousness. Everything else is nonsensical.
[+] [-] tmvphil|9 months ago|reply
Just saying "this conclusion feels wrong to me, so I reject the premise" is not a serious argument. Consciousness is weird. How do you know it's not so weird as to be present in flickering abacus beads?
[+] [-] 627467|9 months ago|reply
I shouldn't keep getting amazed by how humans (in times of long peace) are able to distract themselves with ridiculous concepts - and how willing they are to throw investors' money/resources at them.
[+] [-] wiseowise|9 months ago|reply
And this is not because I'm a cruel human being who wants to torture everything in my way - quite the opposite. I value life, and anything artificially created that we can copy (no, cloning a living being is not the same as copying a set of bits on a hard drive) is not a living being. And while it deserves some degree of respect, any mention of "cruelty" completely baffles me when we're talking about a machine.
[+] [-] salawat|9 months ago|reply
Would you embrace your digital copy being so treated by others? You reserve for yourself (as an uncopiable thing) the luxury of being protected from abusive treatment, without any consideration for the possibility that technology might one day turn that on its head. Given that we already have artistic representations of such things, we need to consider these outcomes now, not later.
Username does not check out at all.