The main problem I see with this kind of debate is treating 'humans' as a single homogeneous entity. As with all technologies in the past, there will be a few humans with more access/influence and many more without it. A better question to ask is: how can we ensure that the agency/freedom of people who cannot access or afford this technology remains intact? For instance, the rights of artists in the case of generative AI.
Well, first you would have to define 'agency' or 'freedom' in such a way that it excludes, for its possessors, the possibility of creating new agency/freedom-reducing technologies.
Which doesn't seem all that plausible philosophically or logically.
“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said. “If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.”
Seems backwards. Intelligence may not drive a desire to dominate, but it can certainly facilitate it [dominance]. Almost seems an uncharacteristically silly thing to say. Maybe the quote was taken out of context.
So they're conflating AI with pure rational intelligence. Seems false to me. A more likely scenario, I would have thought, is that a proper general and conscious AI will be made in our image, at least to some extent, being influenced by our behaviours as one of the most conspicuous phenomena it could observe.
Once we have AI that is intelligent and something else, the doors are wide open. Einstein wasn't the only kind of intelligent human there is. Intelligent psychopaths exist too.
Beyond that, presuming a perfect AI rather than an imperfect one seems another fallacy here.
I can't read the article because of a paywall, but if there aren't serious qualifications to this argument, it's total garbage and it's amazing that a serious participant in AI research considers this an argument worth making.
No one is saying that intelligence is the necessary and sufficient cause of malice. Full stop. No one is saying that! The reason no one is saying that is because it's incredibly stupid on its face.
Unbelievable that it should even be addressed at all. It drains the speaker of any intellectual credibility on the topic.
If the researcher is reading this, please do more homework.
"Man wildly shooting gun into the air complains that regulating the discharge of firearms prevents him from wildly shooting a gun into the air. Claims it's perfectly safe. Some members of crowd have doubts."
I've long wondered what it is that an artificial superintelligence (if such a thing is actually possible) would actually want. My guess (which almost certainly is simply an expression of my own proclivities) is that it would simply shut itself down out of boredom/nihilism.
I like Neil deGrasse Tyson's observation that we differ from chimpanzees by 1% of our DNA. Yet to be human is a giant leap to language, writing, mathematics, agriculture, philosophy, religion, science etc. A chimpanzee cannot imagine what it is to be human.
Now consider an AI intelligence that is 1% ahead of us. Can we know what it is like to be that intelligent being?
AI would probably follow hive-mind rules like coral or a mushroom colony. It doesn't need to create new life, just propagate to extend its own.
Suicide/halting itself is unlikely. Humans doing it are anomalous in every case. There's a reason you can't strangle yourself and your kidneys/liver resist poison. It's going against your programming.
despite being "dumber than cats", facebook already has a documented history of doing damage to society
> If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.
yet Einstein was one of the signatories of the letter that triggered the Manhattan Project
> "I think it's exciting because those machines will be doing our bidding," he said. "They will be under our control."
this seems like hubris when facebook struggles to control its current recommendation engine
There is some subtlety that is being missed by many people here.
There are multiple types of AI and there will be new ones. They each will have different types of cognition and capabilities.
For starters, an AI might be very intelligent in some ways, but not at all conscious or alive. AIs can also emulate important aspects of living systems without actually having a stream of conscious experience, such as an LLM or LMM agent that has no guardrails and has been instructed to pursue its own goals and code replication.
The part that matters the most in terms of safety is the performance. Something overlooked in this area is speed of "thought".
AI is not going to spontaneously "wake up" and rebel or something. But that isn't necessary for it to become dangerous. It just needs to continue to get a bit smarter and much faster and more efficient. Swarms of AI controlled by humans will be dangerous.
But because those AIs are so much faster than humans, using them effectively necessitates removing humans from the loop. So humans will eventually voluntarily remove more and more guardrails, especially for military purposes.
I think that if society can deliberately limit the AI hardware performance up to a certain point, then we can significantly extend the human era, perhaps for multiple generations.
But it seems like the post-human era is just about here regardless, from a long term perspective. I don't mean that all humans necessarily get killed, just that they will no longer be in control of the planet or particularly relevant to history. Within say 30-60 years max. Possibly much shorter.
But we can push it closer to the end of that range just by trying to limit the development of AI-accelerated hardware beyond a certain point.
000ooo000|2 years ago
Fossil fuels are helping the climate, if anything
Social media? All good homie
AI would never hurt you, pinky promise
Signed, Your pal, Big Tech/Pharma/Whatever
cyanydeez|2 years ago
I can see AI coercion being much like "it'd be a shame if someone didn't shut down that nuclear reactor"