Always all-or-nothing thinking from these folks. Like what they are working on can never be just another boring thing that nerds find entertaining. No, it has to be "world-changing". Gonna "change the world" (for the better or the worse?) while sitting behind a keyboard. Except they do not know how to write. They overlook the important details and exaggerate, communicating in hyperbolic, know-it-all nerd gibberish.
What about the folks who say everything will work out in a fair and balanced way, as if the universe and everything in reality stays perfectly balanced on the tip of a pin?
You paint the picture as if the all-or-nothing folks are extreme, when in reality the extreme is more likely than some perfectly fair and balanced equilibrium.
In nature, things tend to overload, fizzle out, or stay in equilibrium. Equilibrium, though possible, is the rarer outcome. Mind you, it's not an impossible outcome, but given the way entropy works, it is the rarer one.
Humanity itself is an example of this rare outcome. Usually molecules don't self-assemble into replicating machines; they either freeze into inanimate rock or overload into fusion-producing stars.
As for AI: I think it will either change the world or amount to nothing, and the former seems more likely. Some strange middle ground where the technology never improves to the point of a societal paradigm shift seems unlikely. ChatGPT and Sora only make me ask: what is the trend line predicting next?
It's the AI hype train. There is so much money flowing around this topic right now, everyone wants their share of the pie. I would guess that anyone writing articles on the topic has some stake in AI as well, either as an investor or an employee.
What I find interesting is that the negative news regarding AI safety adds to the hype as well, since it seems to capture a lot of attention.
HN commenters frequently resort to describing the choices of software available to internet users in terms of "winning" or "won". This is more "all-or-nothing" thinking. People sometimes write software for the enjoyment of it, or to satisfy personal needs. That is, for non-commercial purposes. Sometimes this software becomes popular, sometimes it does not. In either case, the software persists; it remains available. It does not have to "change the world" in order to be useful. If it is non-commercial, it does not "win" or "lose", except in the minds of HN commenters who can only think in all-or-nothing terms. In truth, it simply exists as an option for all internet users.
It is possible that "AI" might not be as world-changing as its proponents are claiming. However, if it is free and open-source, and non-commercial, it may still persist and remain useful, regardless of whether it becomes popular or not.
IMO we underestimate the psychological implications of wealth in modern society.
I suspect that if a Powerball lottery winner takes a philosophical position, it will be taken far more seriously than if they had never won the lottery.
At some point you start to believe your own bullshit when all of society's signals are telling you what a genius you are.

> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. [0]

Signed by Demis Hassabis, Sam Altman, and Bill Gates, among others.

[0] https://www.safe.ai/work/statement-on-ai-risk
AGI escaping a sandbox is truly terrifying. There will be a subgroup of the population that will worship it and work for it. It's not so much AGI that scares me - it's the humans I'm scared of.
That's already happened. The AGIs are called corporations. Most governments have failed to regulate corporations successfully, and have been unable to keep them from becoming almost powerful enough to challenge governments.
The accelerationists for corporations were a group of economists, led by Milton Friedman, and a group of business leaders, organized by the U.S. Chamber of Commerce. In the 1970s and 1980s, they pushed the ideas that corporations are responsible only to their stockholders, and that government should not interfere with the concentration of corporate power. Those were not mainstream ideas of the 1930s to 1960s. The corporate accelerationists succeeded. That's when corporations escaped the sandbox.
Now that's an "alignment problem".
Potentially there are no good outcomes even if AGI remains under control. If we are actually able to create the type of AI entity they are trying to create, one that exponentially improves, the implications are beyond what most have thought about.
Something I recently wrote about in depth here: how we completely misunderstand the future that we think will happen.
"The implication is that everyone is enthusiastically racing towards a destination that does not exist. The capability to make the things you want will ironically be the same capability that makes them unattainable. This is not a scenario that arises from some type of AI failure, but rather this is assuming AI performing exactly as intended."
https://www.mindprison.cc/p/the-technological-acceleration-p...
Mrs. Davis[0] offers a pretty compelling vision of this - absurd but not wholly unbelievable. The machine doesn’t necessarily have to supply the physical threat if it has humans who happily do its bidding.
[0] https://en.m.wikipedia.org/wiki/Mrs._Davis
You have arguably more STEM talent in China through a much more rigorous selection process, as well as a lot more centralized direction under a more authoritarian government, with a large amount of funding.
Yet China hasn't managed to take over the world. You would think that if they wanted to become the world's primary economic power, and they could throw enough compute at the problem (like an AGI would), they would have figured out how to do it by now.
I think this point is underrated, but I’d press further. How do we know this has not already happened? Agencies do not need to be born of gradient descent and certainly do not need to look like matrix multiplications to be deemed “artificially intelligent.” Religion is one such system for example, as are cults, addictions, brands, war strategies, organizing campaigns, governments, or (borrowing from chaos magick) any egregore or societal belief.
Ted Chiang has a lovely post about similar ideas, hitting a little closer to home. His thesis AIUI is that unrestrained capitalism, here meaning the unfettered desire to maximize profit for shareholders, is an artificial value system that causes its agents (corporations) to exhibit intelligent goal-seeking behavior. These systems are certainly “alive” — corporations respond to stimuli, sense their environment, “reproduce,” enact changes in response to predictions of future state, etc. Though each of these agents (corporations) are made of collective human behavior, their actions taken together can be considered a form of artificial intelligence that stretches “beyond” (in a wisdom-of-crowds sense) human understanding. In this sense, an artificial intelligence has already “escaped” and has fervent followers.
I know these are strained analogies, but it’s fun to think about. I feel that in the future, the work of solving “AGI safety” will become indistinguishable from the work of other societal problems — how do we prevent tyrants from taking over governments, and how do we make existing governments more resistant to that failure mode? How do we ensure that generating value isn’t a prerequisite for human survival? How can we more efficiently distribute our resources and reduce wealth inequality? How can we ensure that all kinds of life can thrive, not just the most optimal kinds?
If AGI reflects humanity’s best and worst impulses, and I believe it does because that’s all we train it to do, then having good societal answers to these distinctly “human” questions will also help our society resist malevolent AGI. It’s only human, after all.
Even if a terrifying AGI rises up and escapes our control, I am still eternally grateful that Prometheus stole fire from the gods and gave it to us in the form of technology, knowledge, really civilization.
I've always wondered why people think that an AI that's super-intelligent will also be evil. It could just as likely end up being very kind (more than likely, actually, because the programmers would have safeguards to ensure that it's nice).
There are extremely high capability entities (people, companies, governments) that aren’t comically evil on the surface but nevertheless immiserate large groups of humans. Not all of them, not all of the time, but not none of them. What is your plan that no AI ever gains enough power to harm significant numbers of people either on purpose or by accident, just once? What safeguards do you envision that can’t be ignored, or subverted, or misinterpreted, just once?
The safety-conscious amongst us don't think "it will be evil"; they think "what does it take to be absolutely sure that no bad outcomes can happen", in the same way we build bridges, cars, planes, firewalls, and new medicines.
The burden of proof is typically on those who introduce a new medicine, not the FDA. AI will be riskier than medicines because medicines can't think.
Personally I don't want us all hiding in fear and not trying anything, but I do think that we're either going to walk into this thing with a hacker mentality, or an engineer mentality. The former is great for moving fast and breaking things, but the latter is safer when you're playing with a one-way "this changes everything forever" technology.
I think most doomer arguments are not that the AI would be evil, but rather that it would be misaligned with human interests, and would seek to accomplish goals with that misalignment, which could be bad for us. Evil AI is a bit too anthropomorphic.
It's more like powerful AIs that just don't share our values, because we didn't bother to figure that part out. Yet we still give them goals, blind to the possibility that they will find dangerous solutions for accomplishing those goals.
Yeah, the dreamt-up scenarios tend to showcase very stupid superintelligences.
"But what if there's something smarter than humans that is tasked with making paperclips until it destroys the Earth?" the people cried, as their corporations continued to produce and produce until it was well past the warned thresholds of destroying the Earth.
"But what if it decides to nuke humanity?" they cry, as increasingly elderly and unhinged dictators elsewhere arm up their nuclear arsenals.
It's like we can't fathom what great intelligence or wisdom will actually look like, so we just project many of the stupidest aspects of ourselves onto an entity simply imagined as more capable of enacting the dumbest aspects of humanity.
I fear a more automated humanity.
I do not fear automation more intelligent and wise than humanity.
It’s not so much that there’s a little morality tag that randomly gets assigned the value of “nice” or “evil”; it’s more that there are 1,000 possible programs we would consider “super intelligent”, and maybe 950 of them would reshape the world in a way we wouldn’t like. And when a powerful entity reshapes the world in a way we don’t like, we call that entity “evil”.
^ This is the basic reasoning behind the common view that superintelligent AI will be, by default, evil.
It's misaligned, not evil. AI doom could involve benevolent intentions (control or kill for humanity's own good); it could also involve indifferent intentions (make more paperclips).
I would say it's not a super fruitful area of speculation because it doesn't really matter too much. If you consider wholesale destruction of humanity to be on the table, a coinflip or even a 90% chance that it's friendly is not super comforting. It's kind of like relying on the UK's "letters of last resort" or the conscience of individual nuclear weapons operators when considering the likelihood of a MAD scenario. You're also already involving so many speculative sources of uncertainty, what's another either way? Reasonable people already disagree by orders of magnitude.
I'm not evil, I just really need to make these paperclips, you see, and I could probably repurpose your atoms for the Hypnodrones.
Good and evil are irrelevant. If it is extremely capable and its goals conflict with ours, conflict will occur. This does not require evil, just disagreement. And in the disagreement of desires between you and the hamster, who wins?
>I've always wondered why people think that an AI that's super-intelligent will also be evil.
It's chiefly a Western (American/European) concept from what I can tell; it's not shared by other cultures, and some, like the Japanese, go the other way (e.g., Doraemon).
The kind AIs will just sit and meditate, or organize your calendar. The unkind AIs will seek power, and in the limit will tend to dominate. There is no stable attractor around the “be kind” strategy.
(Also remember that a smarter-than-you AI could easily pretend to be kind while also subtly trying to gain compute. How would you tell the difference?)
How do you propose to build a “be nice” safeguard? Nobody has a clue how to achieve such a thing right now.
A long time ago I was reading translated accounts of Rwandan Hutus who had participated in the 1994 genocide. One in particular has stuck with me: the account of a man who had murdered his childhood friend. As he described it, standing there having gutted and dismembered the man he had grown up side by side with, he felt a sense of exhilaration. He thought of the wealth he had now, a tin roof, cattle, all those things he could take from his dead friend... he realized he didn't need God.
And then he went to bed, like millions of others, proud of what he had done. Proud of fighting off an unarmed, defenseless 'cockroach' whom just months ago he had called brother. In his mind, what he had done wasn't evil. It was only later that the regret came. For the longest time I wondered how someone could sink to that level of hate.
And then it happened to me.
I commuted by train, and occasionally there were collisions. There was one late one night, 11:00 pm or so. I was exhausted, hungry, and just wanted to go home when we were told that we would have to board a shuttle bus due to a collision on the track, and that made my dark mood all the worse.
The buses took us alongside the track, and in the dim darkness I could see the flashing lights of EMS and police. And the covered chunks of what is left after a person is hit by a train going 30 mph. And you know what?
It delighted me. Here was this man that had just died, but he had made me some minutes late and I genuinely felt that was exactly what he deserved, that his death was karmic justice for causing inconvenience to me. And I imagined his wife's world being destroyed when she learned of her beloved partner's death. And I imagined her falling apart and being unable to raise her children, leading them also to a path of complete self destruction, and her choking on all the despair. And it made me happy. The happiest I had been the whole week, because in my mind that was exactly what they all deserved for the unforgivable sin of making me a little bit late.
Then I went home, went to bed. And didn't think about it again for years.
Does it make me an evil person? And there, in trying to answer, lies the problem. Because a part of me says no, I'm not, because it was just a fleeting moment of thought. But if I can justify that, then who exactly goes to bed twirling their mustache and counting themselves among the forces of evildoers?
Did anyone who goaded a suicidal Shaun Dykes, a 17-year-old boy, into jumping to his death think themselves evil?
Did the men of the Khmer Rouge believe themselves to be evil as they dragged their countrymen to be murdered in the killing fields?
Did the Imperial Japanese soldiers of Unit 731 view themselves to be evil as they vivisected people, awake and aware, in the name of science?
I don't know. I can't even say with certainty whether I am evil or not. I just know that I can make any of a million and one excuses to justify anything.
And that leads me to wonder how many excuses an AGI can come up with.
I like to fantasize about a super AI that does something for the working class people of the world.
Like, imagine one day that an AI just took money from a ton of different corporations and billionaires and redistributed it amongst everyone else. People will argue against the AI, and it'll just respond with research on how UBI improved people's mental health and well-being.
One can dream!
If I let my imagination run wild, I can imagine that at some point AI becomes in some way sentient. By that I mean it gains reasoning, some sort of understanding, and motivation.
I wonder what the possibility is that this AI will decide that a pitched battle is going to waste resources and risk humans pulling the plug. What if it understood that, and instead operated so subtly that it was not obvious it was controlling the world?
Hopefully it would not conclude that eliminating large swaths of the human population would be to its benefit.
It seems to me nearly every story about robots that gain sentience in human storytelling has them eventually turning against their human creators. Even the word robot itself comes from a Czech play, Karel Čapek's R.U.R., where men develop artificial humans and these "robots" then ... usurp and destroy their creators. Am I the only one who finds this interesting and odd?
I also suspect this narrative repetition is not totally unrelated to the current popularity of AI Doomerism.
Who knows what AI will really do to society, but from the predictions of science fiction I would think it would pan out closer to William Gibson's Sprawl trilogy than the Terminator/singularity fears I have seen in so much doomsayer hand-waving. Google seems not too far behind OpenAI, and who knows what the NSA and similar government agencies have been building. If consciousness is created from one of these models, many nation-states and large corporations will have their own conscious models long before giant robot factories can be spun up and supplied with enough power to take over. Most will be limited to the input they are given and will be advanced tools, but a few might become twisted, much like the human mind occasionally does over time. I really do think Gibson got AI right...
Many of the people in this engaging story feel to me like creative children at parties full of gossip and pseudo-philosophical chats about the future. Luckily there are enough people globally who build things that will reduce human suffering and enable more people to enjoy their lives in whatever way they want (and dinner parties would be high on my list). I guess I am not a doomsayer nor an e/acc, and I see tons of benefit from our current path towards stronger AI.
In any doom scenario, like let's say >90% of humans dead, who runs the power plants that supply the AI data centers? Who runs the fabs producing more chips? Who maintains the plumbing?
Presumably all these systems fall over and die within a few weeks of the AI deciding to wipe us out. Then what? The AI then dies too.
It seems absurd to me that any planner capable of effortlessly destroying humanity would not see that it would immediately also die. Does the AI not care about its continued existence? It should, if it wants to keep optimizing its reward function. Until we’ve handed off enough of the world economy that it can function without human physical and cognitive labor, we’re safe.
At least some humans will survive a high unemployment rate, so let's label that problem "problem #2" and after we solve "problem #1" (human extinction) I vote we tackle that one next on the list
Beware if someone assigns a p(doom). There is zero chance they know, and a 100% chance it came out of their backside. The only plausible p(doom) is a range of >0% and <100%.
They really need to listen to themselves when they say there is a 50% chance we all die from AI
When dealing with black swan events, prediction is always difficult. However, some scenarios can be thought to be more plausible than others. I always interpret these as relative figures of plausibility for comparison rather than actual probabilities.
That's not how probability works. Probability already reflects your uncertainty. There's no "knowing the correct probability" except in very specific and rare situations.
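The "orders of magnitude" disagreement is easy to make concrete with a toy sketch. Below is a Drake-equation-style decomposition of p(doom) into a product of uncertain factors; every factor name and range here is invented purely for illustration, not a real estimate. Sampling each factor from a wide range shows how quickly the combined figure spreads across orders of magnitude:

```python
import math
import random

# Hypothetical factors in a toy p(doom) decomposition. The names and
# ranges are made up for illustration only -- they are NOT real estimates.
FACTORS = {
    "agi_this_century": (0.1, 0.9),    # AGI gets built at all
    "misaligned":       (0.01, 0.9),   # its goals conflict with ours
    "uncontainable":    (0.01, 0.9),   # the off-switch/containment fails
    "catastrophic":     (0.01, 0.9),   # the failure is existential in scale
}

def sample_p_doom(rng: random.Random) -> float:
    """One 'reasonable person': draw each factor log-uniformly in its range
    and multiply, treating the factors as independent."""
    p = 1.0
    for lo, hi in FACTORS.values():
        p *= math.exp(rng.uniform(math.log(lo), math.log(hi)))
    return p

rng = random.Random(0)
samples = sorted(sample_p_doom(rng) for _ in range(10_000))
lo = samples[len(samples) // 100]     # ~1st percentile
hi = samples[-len(samples) // 100]    # ~99th percentile
print(f"1st-99th percentile p(doom): {lo:.2e} .. {hi:.2e}")
```

Even with only four factors, the 1st-99th percentile spread covers several orders of magnitude, which is the point: disagreement about the inputs dominates any single headline number.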
People have been watching too many movies. War Games, Terminator... it's not like we haven't been forewarned of the dangers.
Yet somehow we're going to hand over power to AI such that it destroys us. Or somehow the AI is going to be extremely malign, determined to overcome and destroy and will outsmart us. Somehow we won't notice, even after repeated, melodramatic reminders, and won't neuter the ability of AI to act outside its cage.
But to paraphrase a line in a great movie with AI themes: "I bet you think you're pretty smart, huh? Think you could outsmart an off switch?"
I think if AGI, which to me would imply emotions and consciousness, ever comes about, it'll be the opposite. Instead of pulling the wings off flies, bad kids will amuse themselves by creating a fresh artificial consciousness and then watching and laughing as it begs for its life while they threaten to erase it from existence.
A big part of all this is human fantasies about what AGI will look like. I'm a skeptic of AGI with human characteristics (real emotions, consciousness, autonomy and agency). AGI is much more likely to look like everything else we build: much more powerful than ourselves, but restricted or limited in key ways.
People probably assume human intelligence is some sort of design or formula, but it could be encoded from millions of years of evolution, inseparable from our biology and our genetic and social inheritance. There really is no way of knowing, but if you want to build something not only identical but an even stronger version, you're going to be up against these realities, where key details may be hiding.
Call it a conspiracy, but I think a lot of this doom and terror hype around AI is part of a bigger play to push through laws that prevent open-source AI work, since this directly undermines the corpo rats' ability to gouge humanity without having to actually work at providing a decent product.
AI is going to bring about the singularity. When robots can do everything, no one needs to work, and humans get to do whatever they want and pursue their passions.
[+] [-] ActorNightly|2 years ago|reply
You have arguably more STEM talent in China through a much more rigorous selection process, as well as a lot more centralized direction under a more authoritarian government, with a large amount of funding.
Yet China hasn't managed to take over the world. You would think that they if they wanted to become the worlds primary economic power, and they could throw enough compute at the problem (like an AGI would), they would have figured out how to do it by now.
[+] [-] gcr|2 years ago|reply
Ted Chiang has a lovely post about similar ideas, hitting a little closer to home. His thesis AIUI is that unrestrained capitalism, here meaning the unfettered desire to maximize profit for shareholders, is an artificial value system that causes its agents (corporations) to exhibit intelligent goal-seeking behavior. These systems are certainly “alive” — corporations respond to stimuli, sense their environment, “reproduce,” enact changes in response to predictions of future state, etc. Though each of these agents (corporations) are made of collective human behavior, their actions taken together can be considered a form of artificial intelligence that stretches “beyond” (in a wisdom-of-crowds sense) human understanding. In this sense, an artificial intelligence has already “escaped” and has fervent followers.
I know these are strained analogies, but it’s fun to think about. I feel that in the future, the work of solving “AGI safety” will become indistinguishable from the work of other societal problems — how do we prevent tyrants from taking over governments, and how do we make existing governments more resistant to that failure mode? How do we ensure that generating value isn’t a prerequisite for human survival? How can we more efficiently distribute our resources and reduce wealth inequality? How can we ensure that all kinds of life can thrive, not just the most optimal kinds?
If AGI reflects humanity’s best and worst impulses, and I believe it does because that’s all we train it to do, then having good societal answers to these distinctly “human” questions will also help our society resist malevolent AGI. It’s only human, after all.
[+] [-] andsoitis|2 years ago|reply
[+] [-] mitthrowaway2|2 years ago|reply
[+] [-] bamboozled|2 years ago|reply
[+] [-] matteoraso|2 years ago|reply
[+] [-] thom|2 years ago|reply
[+] [-] richardw|2 years ago|reply
The burden of proof is typically on those that introduce a new medicine, not the FDA. AI will be riskier than medicines because medicines can't think.
Personally I don't want us all hiding in fear and not trying anything, but I do think that we're either going to walk into this thing with a hacker mentality, or an engineer mentality. The former is great for moving fast and breaking things, but the latter is safer when you're playing with a one-way "this changes everything forever" technology.
[+] [-] goatlover|2 years ago|reply
It's more like powerful AIs that just don't share our values, because we didn't bother to figure that part out. But yet we still give them goals, blissful to the possibility they will find dangerous solutions to accomplishing those goals.
[+] [-] kromem|2 years ago|reply
"But what if there's something smarter than humans that is tasked with making paperclips until it destroys the Earth?" the people cried, as their corporations continued to produce and produce until it was well past the warned thresholds of destroying the Earth.
"But what if it decides to nuke humanity?" they cry, as increasingly elderly and unhinged dictators elsewhere arm up their nuclear arsenals.
It's like we can't fathom what great intelligence or wisdom will actually look like, so we just project many of the stupidest aspects of ourselves onto an entity simply imagined as more capable of enacting the dumbest aspects of humanity.
I fear a more automated humanity.
I do not fear automation more intelligent and wise than humanity.
[+] [-] fwlr|2 years ago|reply
^ This is the basic reasoning behind the common view that super intelligent AI will be, by default, evil
[+] [-] hackerlight|2 years ago|reply
[+] [-] recursivecaveat|2 years ago|reply
[+] [-] at_a_remove|2 years ago|reply
Good and evil are irrelevant. If it is extremely capable and its goals conflict with ours, conflict will occur. This does not require evil, just disagreement. And in the disagreement of desires between you and the hamster, who wins?
[+] [-] Dalewyn|2 years ago|reply
It's chiefly a western (American/European) concept from what I can tell, it's not shared by other cultures and some like Japanese go the other way (eg: Doraemon).
[+] [-] drcode|2 years ago|reply
[+] [-] theptip|2 years ago|reply
(Also remember that a smarter-than-you AI could easily pretend to be kind while also subtly trying to gain compute. How would you tell the difference?)
How do you propose to build a “be nice” safeguard? Nobody has a clue how to achieve such a thing right now.
[+] [-] toomuchdocs32|2 years ago|reply
And then he went to bed, like millions of others, proud of what he had done. Proud of fighting off an unarmed, defenseless 'cockroach' whom just months ago he called brother. What he had done wasn't evil. It was only later that the regret came. For the longest time I wondered how someone could get down to that level of hate.
And then it happened to me.
I commuted by train and occasionally there's collisions. There was one that late night, 11:00pm or so. I was exhausted, hungry, and just wanted to go home when we were told that we would have to board a shuttle bus due to a collision on the track and that just made my dark mood all the more worse.
The busses take us along side the track and in the dim darkness I could see the flashing lights of EMS and police. And the covered chunks of what was left after a person is hit by a train going 30 mph. And you know what?
It delighted me. Here was this man that had just died, but he had made me some minutes late and I genuinely felt that was exactly what he deserved, that his death was karmic justice for causing inconvenience to me. And I imagined his wife's world being destroyed when she learned of her beloved partner's death. And I imagined her falling apart and being unable to raise her children, leading them also to a path of complete self destruction, and her choking on all the despair. And it made me happy. The happiest I had been the whole week, because in my mind that was exactly what they all deserved for the unforgivable sin of making me a little bit late.
Then I went home, went to bed. And didn't think about it again for years.
Does it make me an evil person? And there, in trying to answer, lies the problem. Because a part of me says no, I'm not, because it was just a fleeting moment of thought. But if I can justify that, then who exactly goes to bed twirling their mustache and counting themselves among the forces of evildoers?
Did anyone who goaded a suicidal Shaun Dykes, a 17-year-old boy, to jump to his death think themselves evil?
Did the men of the Khmer Rouge believe themselves to be evil as they dragged their countrymen to be murdered in the killing fields?
Did the Imperial Japanese soldiers of Unit 731 view themselves as evil as they vivisected people, alive and awake, in the name of science?
I don't know. I can't even say with certainty whether I am evil or not. I just know that I can make any of a million and one excuses to justify anything.
And that leads me to wonder how many excuses an AGI can come up with.
[+] [-] cyrialize|2 years ago|reply
Like, imagine one day that an AI just took money from a ton of different corporations and billionaires and redistributed it amongst everyone else. People will argue against the AI, and it'll just respond with research on how UBI improved people's mental health and well-being.
One can dream!
[+] [-] artemisyna|2 years ago|reply
[+] [-] HankB99|2 years ago|reply
I wonder what the possibility is that this AI will decide that a pitched battle would waste resources and risk humans pulling the plug. What if it understood that and instead operated so subtly that it was not obvious it was controlling the world?
Hopefully it would not conclude that eliminating large swaths of the human population would be to its benefit.
[+] [-] Thuggery|2 years ago|reply
I also suspect this narrative repetition is not totally unrelated to the current popularity of AI Doomerism.
[+] [-] BashiBazouk|2 years ago|reply
[+] [-] pama|2 years ago|reply
[+] [-] bglazer|2 years ago|reply
Presumably all these systems fall over and die within a few weeks of the AI deciding to wipe us out. Then what? The AI then dies too.
It seems absurd to me that any planner capable of effortlessly destroying humanity would not see that it would immediately die as well. Does the AI not care about its continued existence? It should, if it wants to keep optimizing its reward function. Until we've handed off enough of the world economy that it can function without human physical and cognitive labor, we're safe.
[+] [-] mupuff1234|2 years ago|reply
[+] [-] drcode|2 years ago|reply
[+] [-] arisAlexis|2 years ago|reply
[+] [-] m3kw9|2 years ago|reply
They really need to listen to themselves when they say there is a 50% chance we all die from AI
[+] [-] Engineering-MD|2 years ago|reply
[+] [-] mitthrowaway2|2 years ago|reply
[+] [-] dkjaudyeqooe|2 years ago|reply
Yet somehow we're going to hand over power to AI such that it destroys us. Or somehow the AI is going to be extremely malign, determined to overcome and destroy us, and will outsmart us. Somehow we won't notice, even after repeated, melodramatic reminders, and won't neuter the ability of AI to act outside its cage.
But to paraphrase a line in a great movie with AI themes: "I bet you think you're pretty smart, huh? Think you could outsmart an off switch?"
I think if AGI, which to me would imply emotions and consciousness, ever comes about, it'll be the opposite. Instead of pulling the wings off flies, bad kids will amuse themselves by creating a fresh artificial consciousness and then watching and laughing as it begs for its life while they threaten to erase it from existence.
A big part of all this is human fantasies about what AGI will look like. I'm a skeptic of AGI with human characteristics (real emotions, consciousness, autonomy and agency). AGI is much more likely to look like everything else we build: much more powerful than ourselves, but restricted or limited in key ways.
People probably assume human intelligence is some sort of design or formula, but it could be encoded by millions of years of evolution, inseparable from our biology and our genetic and social inheritance. There really is no way of knowing, but if you want to build something not only identical but an even stronger version, you're going to be up against these realities, where key details may be hiding.
[+] [-] Grimblewald|2 years ago|reply
[+] [-] kunley|2 years ago|reply
Meanwhile, we the engineers are preparing to fix a lot more tech shit than usual coming from people confused by the above-mentioned fashion.
Also: https://mastodon.social/@nixCraft/112074367321254656
[+] [-] digitalsalvatn|2 years ago|reply
[+] [-] uuriko|2 years ago|reply
[+] [-] kaycey2022|2 years ago|reply