One of the problems the Butlerian Jihad ran up against, aside from the inevitable skirting of its strictures by Richese and Ix (many machines on Ix, new machines), is that it runs directly counter to "Thou shalt not disfigure the soul."
Replacement of AI with Mentats (as well as other narrow specialities) has done nothing but disfigure the soul. We see few Mentats -- aside from Paul and eventually another -- who are not constricted. Similarly, if you practice medicine, well, you get the Imperial Conditioning. Certainly, a sign of trust ... but also a sign that the person's actions are no longer completely free.
Now, I am not touting the Heinlein "A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship ..." line, exactly, but the alternative to AI is the kind of stagnation we see in Dune, millennia of locked down ritual, honed again and again, with some people becoming ... utilities.
Before we begin this jihad, we must examine the alternative futures.
So, a point of nerdity: mentats were not portrayed as disfigured in Dune. They had personalities and foibles and loyalties and so on. In fact, there was no limit on who could be a mentat, or what other position of power they could hold (some of Paul's friends note how formidable a mentat-duke would be - not something they would say if it were a disfigurement).
Another point of nerdity that no one has mentioned yet, including the OP: Herbert sketched out an extended story that portrays humanity and the machines it had fought against for so long merging in the long run. In part this is why Leto II never destroyed Ix even though it was constantly (quietly) breaking the Butlerian Jihad rules.
None of this invalidates the OP's core point, of course. I think it's a good and valuable discussion to consider technology from fundamentally moral grounds, and I wish we'd do it more.
Well put. But isn't the concern here with some utilities becoming ... people? Either way could result in disfigured souls. Is AI simply a pursuit of slavery without guilt?
I really doubt we will have the capability of building "a machine in the likeness of a human mind" in my lifetime. Present AI systems are essentially just function fitting. Building big probabilistic systems that we optimize with loads of training data. This is a far, far cry from the "strong AI" that people are so afraid of. I really think that people writing these sorts of pieces have an understanding of AI that's more rooted in fiction than engineering.
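(To make "function fitting" concrete, here is a minimal toy sketch, with made-up numbers; real systems fit millions of parameters with backpropagation rather than one slope and one intercept, but the loop of predict, measure error, nudge parameters is the same idea:)

    import numpy as np

    # Toy "AI": fit y = w*x + b to noisy data by minimizing squared error.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)  # ground truth: w=3.0, b=0.5

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        err = (w * x + b) - y
        # Gradient of mean squared error with respect to w and b.
        w -= lr * 2 * np.mean(err * x)
        b -= lr * 2 * np.mean(err)

    print(f"learned w={w:.2f}, b={b:.2f}")  # approaches 3.00 and 0.50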
It's interesting to ponder how we should go about building and interacting with "strong AI", and questioning whether we should even build it in the first place. But I really don't think any detailed moral frameworks can be built when we have no real idea of what a "strong AI" would look like.
Also, it's worth reminding people that in the Dune universe the Butlerian Jihad led to millennia of stagnation and control of society by a narrow elite: The Spacing Guild, the Bene Gesserit, and the Landsraad.
From what I understand of those who study the human brain, it kind of already is “just function fitting.” I.e., there are subsystems that just do basic stuff, and together these systems add up to a prediction system (consciousness) that is in service to the rest of the brain, which in turn has a mutualistic relationship with the rest of the body.
In this sense, “strong AI” already exists, with or without computers, as a global ecosystem driving towards…nothing in particular. Inasmuch as computer systems augment the ability of humanity to continue to reproduce long-term in a mutualistic way, those computer systems will themselves survive.
You might say, “but why wouldn’t the computer find a more efficient path, foregoing humanity?” Well, humanity supports the infrastructure for computers, and computers support the humans supporting the infrastructure, so this is already a local minimum. Arriving at a different reality for “strong AI” will be the result of a random walk tending towards components that are capable of existing in the long term, aka “evolution.”
"I really doubt we will have the capability of building "a machine in the likeness of a human mind" in my lifetime."
It's really hard to predict the future. Look at what a horrible job most people did a hundred years ago (even 50 years ago) predicting what life was going to be like today.
Many people did not believe man would ever walk on the moon (even right up to the time it happened), same with desegregation and the fall of the Soviet Union, for starters.
Science and technology are especially hard to predict, as many advances are a result of accidents and surprising discoveries.
I wouldn't write off strong AI, though I'm not sure it'll happen as a mimicry of the human mind.
I don't really get how people can see the progress of the last 10 or even 5 years and then imply that things won't improve much further for a whole generation.
I think it would be easier to emphasize and teach the philosophy and ethics of computer science. The problem is, they can't even do this for business students, so focusing on the existential risk of AI alone is a drop of rain in the ocean. If you follow the topic of, let's say, sustainability, then you know this is a huge blind spot in economics and business. Look at palm oil production as just one example. The threat to the environment of Indonesia has been recognized for decades, yet the world refuses to legislate against the multinational companies who are destroying its forests for palm oil. Again, this is only a small part of the problem, and it's deeply connected to many other problems, such as the profitable harvesting of Indonesian wood from this destruction, which recently showed up in Japan as a source for the Olympic Games infrastructure.
Shouldn't it be Indonesia passing (and enforcing) legislation about what happens in its forests?
Because ethics in engineering is mostly about codified rule compliance rather than the deep navel-gazing practiced by actual philosophy students. The latter also rarely yields useful answers. Crack open one of the "professional" engineering ethics guidebooks and 90% of it is thou shalt not build a bad bridge/engine/circuit because it is very bad, and you should report your boss to the authorities if they do. I never understood the moral uppitiness and delusion those engineers have. If a bridge falls it is bad, unless it is explicitly designed to kill enemy soldiers, in which case it is good. You can extend this to drones and enemy schoolbuses and the entire defence industry if you want. The so-called "engineering ethics" field should rebrand and follow the financial industry: you follow guidelines because the compliance department demands it, to cover the company's derriere. If you don't, your company will get fined. Skip the self-righteous morality, because its only purpose is to reduce the principal-agent problem for the managerial and asset-owning class.
Regardless of the merit of this particular piece, the portion of Butler's Erewhon comprising "The Book of the Machines" makes for very interesting and forward-thinking reading. You can skip directly to it here:
https://www.gutenberg.org/files/1906/1906-h/1906-h.htm#chap2...
But the context in the story is fairly important - it takes place in a society that has essentially already carried out its own Butlerian Jihad and taken it too far. Wonderful book by the way.
I'm starting to think that the religious sects in the U.S. that laboriously evaluate a technology before incorporating it into their communities have a pretty good thing going. Sadly it's not really practical at a larger scale, and the suffering that could be avoided by adopting something early rather than late is difficult to estimate. Ah well!
Butler also wrote an interesting essay in 1863 called "Darwin Among the Machines" (the title was later borrowed for a book by George Dyson). Butler was probably the first person to realize that natural selection could apply to machines as well as to biological organisms:
https://web.archive.org/web/20060524131242/http://www.nzetc....
Why would it not be scalable? The individual communities should be allowed to make their own decisions. In the US this was the original way of approaching problems (unfortunately this is becoming less and less the case).
I think this is interesting fodder for science fiction authors but lacks concrete examples of what exactly it would mean to regulate or engage in a "Butlerian Jihad against AI."
I know things that I would like to see. Like humans "in the loop" (as opposed to "on the loop" or "out of the loop") for certain classes of decision making, for example target selection for military strikes or law enforcement. Or scrutiny of what kinds of information we use to train the decision-making models: feed ML a racist data set and you get a racist algorithm; use that algorithm to decide who gets mortgages and you'll get systematic depression of generational wealth along racial lines.
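(A minimal sketch of that failure mode, with entirely made-up numbers and variable names: even a model that never sees the protected attribute can reproduce the bias through a proxy like income:)

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, n)                # 0 = group A, 1 = group B
    income = rng.normal(50 - 5 * group, 10, n)   # proxy: group correlates with income
    # Biased historical labels: group B was denied more often at equal income.
    approved = income - 8 * group + rng.normal(0, 5, n) > 45

    # "Train" the simplest possible model: approve above an income threshold
    # chosen to reproduce the historical approval rate.
    threshold = np.quantile(income, 1 - approved.mean())
    model_approves = income > threshold

    for g in (0, 1):
        rate = model_approves[group == g].mean()
        print(f"group {'AB'[g]}: model approval rate = {rate:.2%}")
    # Group B ends up far below group A, without "group" ever being an input.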
But this isn't some crusade on AI because it's AI; it has to be based in reality: what AI or ML is being used for, what information it operates on, what decisions it is used to make, and ultimately the human beings that are responsible for those decisions. The reason it is so hard to convince people how we should legislate (or otherwise regulate) AI is that every conversation drifts into science fiction instead of concrete examples of today's ethical issues and what can be done about them now. Otherwise it comes off as Luddite fearmongering.
The problem is, we don't really know how consciousness works (I assume consciousness is the part the author takes issue with; most of our cognitive faculties in isolation are not that special). We don't even have a great definition of consciousness, or good tests for it, or know whether it is a linear spectrum, or if it emerges abruptly with the evolution of certain reasoning and attention faculties, and we don't know which animals have it and to what degree.
So when people say we shouldn't develop AI to think like that, it's basically saying we shouldn't try to understand how consciousness works. Because as soon as we do, I guarantee someone out there will attempt to make conscious AI.
Also, if we model a brain effectively, then study that model, would our brain not develop new skills from reflecting on such a model and therefore develop again beyond the model?
I read “A Thousand Brains: A New Theory of Intelligence” by Jeff Hawkins, and it is now clear to me that our neocortex (like computers, or AI) is just a lot of general-purpose computing infrastructure with ZERO aims. Our emotions (which drive everything, in the interest of gene propagation, as there is no purely logical reason to do anything) would need to be intentionally duplicated to give AI a reason to desire anything beyond what we instruct it to do.
> to give AI a reason to desire anything beyond what we instruct it to do
But if an AI is good at modelling the world and predicting the likely outcomes of various actions it can take (even if those actions are just to put text on a screen), we should expect it to develop a desire to achieve various "instrumental goals" in order to maximise its probability of correctly carrying out the task we instruct it to do.
An example of an instrumental goal would be "accumulate resources", since the more resources it has, the better calculations it could perform, and the more certain it could be that it has correctly accounted for all the potential obstacles to it completing the task. Another instrumental goal would be "don't get destroyed", since if it is destroyed it will not be able to carry out the task at all (for most sensible tasks).
So without having any emotions or inherent desires, an intelligent agent is likely to accumulate various desires merely as a consequence of wanting to successfully do anything at all.
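(A deliberately crude toy model of that argument, with made-up numbers and action names: the agent's only terminal goal is completing its task, yet the resource-gathering plan scores highest:)

    def p_success(resources):
        # Chance of completing the assigned task at a given resource level.
        return 1 - 0.5 ** (1 + resources)

    def expected_utility(plan):
        if "get_destroyed" in plan:
            return 0.0  # a destroyed agent completes nothing
        # Utility is just the probability that the task gets done.
        return p_success(plan.count("gather_resources"))

    plans = [
        ["do_task"],
        ["gather_resources", "do_task"],
        ["gather_resources", "gather_resources", "do_task"],
        ["get_destroyed"],
    ]
    print(max(plans, key=expected_utility))
    # -> ['gather_resources', 'gather_resources', 'do_task']
    # "Accumulate resources" and "don't get destroyed" fall out of the math,
    # with no desire coded in anywhere.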
The whole discussion is way too anthropocentric. We are analysing what an AI would want and desire and whatnot, assuming that an AI would have wants and desires, but also ignoring that an AI could act simply out of programmed logic. One can get catastrophic results out of (mis)programmed logic too; every programmer here can confirm that. So let's not deny the catastrophic possibility only because we imagine an AI must first be a human approximation in order to take catastrophic decisions. The danger is when the AI can take catastrophic decisions at all; then we can be truly effed even if it's out of a simple bug.
I agree with you completely. This whole question of artificial intelligence presumes that intelligence is even a thing, and not just some function fitting for predictive reasoning that happens to be closely aligned to the well-being of the larger organism (human society).
The equivalent in silicon is functions fitting whatever purpose, but not strongly, for no particular reason except that they have been able to exist within that context across many, many years.
A function may not be efficient, but it hasn’t been selected against…yet. That “DNA” source code might include a lot of cruft, but who cares, as long as it works and people will download it anyway?
It says a lot about what injustices we have learned to accept that AI alarmists focus almost exclusively on the scifi-level hypothetical dangers of AI, rather than the very real problems it already causes today.
Those problems largely fall into three categories that I can think of off the top of my head at 1am:
1. AI is a convenient way to justify potentially uncomfortable decisions you would have made otherwise (idlewords said it best: "AI is money laundering for bias")
2. AI is being used in situations where it can be a threat to life and limb, like the current crop of self-driving(ish) cars
3. Essentially all of the gains from automating work going to people who already have capital
"AI alarmists" are worried because the worst-case outcomes of AGI are mistakes you cannot ever fix.
All the rest of these are bad, but they are problems we can fix given time and thought, because we will still exist. Extinction-level events decrease all future human utility to zero, and so should be treated with extraordinary care.
Ted Kaczynski calls for something like this, but against industrial technology generally. Even though his manifesto is a rational argument aimed at intellectuals, he has said in his more recent writings that to actually carry out his “stop technological advancement” plan you'd need to persuade people on an emotional level.
I'd love help finding better ways to say this (ways less likely to be immediately downvoted off the map), but I'd extend this to a wide class of software in general.
> Far more important than the process: strong AI is immoral in and of itself. For example, if you have strong AI, what are you going to do with it besides effectively have robotic slaves? And even if, by some miracle, you create strong AI in a mostly ethical way, and you also deploy it in a mostly ethical way, strong AI is immoral just in its existence. I mean that it is an abomination. It’s not an evolved being.
My fear is that most software, even when useful, locks us into certain paths. Our situations or needs change, evolve, but we will remain subject to inflexible software, to systems we cannot make change with us, in the vast majority of cases. Only a very few programs strive for better: spreadsheets being one noted example.
Ursula Franklin categorized technology as holistic or prescriptive[1], as something wielded or something that directs us. Even a social media app which lets us create content (a seemingly holistic act) still has narrow prescriptive channels we cannot escape. We will never be able to understand or enhance this tool, never see its nature. This, to me, is the definition of what Erik talks about: an abomination, a thing beyond comprehension, a horror outside of reality, the form of existence which we share.
I feel like we're reaching a crisis where we are creating an unknowable, unexplorable world. We're building an anti-Enlightenment prison. That, to me, constitutes a deontological hazard, and demands that we assess the very act of creating unexplorable software.
[Edit: I misread the line I quoted as, "what are you going to do with it besides effectively be robotic slaves": that uhh changes the pertinence of our two discussions here notably. I think the risk is that strong AI would be used to try to architect policies/systems that steer people, which is a different concern than Erik's.]
[1] https://en.wikipedia.org/wiki/Ursula_Franklin#Holistic_and_p...
I don't know if that better describes my feelings about Remote Attestation of operating system configuration, or just SystemD.
Yeah, while I see how you can perhaps separate a weak AI from a strong AI (make it pass a Turing test), it's much less clear to me how you're supposed to separate "AI" from just your regular computer (or a robot, which is just an embodied computer)?
The EU is trying to, but I'm not convinced...
https://news.ycombinator.com/item?id=27766294
The crux of his argument, and its downfall, at least in the short term, is:
> All to say: discussions about controlling or stopping AI research should be deontological—an actual moral theory or stance is needed
I don't see this happening in the near future, at least in the West. We're in the middle of total epistemological meltdown, and only capable of reasoning from utility, or some insane framework like critical theory. If we get to Strong AI in our lifetimes, we're just going to spawn a bunch of reductive, racist robots.
I don't know about everyone mentioned, but Yudkowsky in particular rejects Pascal's wager (https://www.lesswrong.com/posts/ebiCeBHr7At8Yyq9R/being-half...) and argues (IIRC) that AGI poses a large risk of killing us all, rather than an infinitesimal risk.
We should be actively building this new "AI species" because we are going to go extinct eventually and should think about making a better successor to the human species. The morality argument is nonsense.
How about this: "The primary objective of humanity should be to build an intelligent system with far more precise perception, reasoning and physical manipulation capabilities than humans"
That's my starting point.
There are all sorts of ways to build intelligences. Humans are unique in that they are mammals (defined by having mothers). Mothers raise us with love, and teach us, through our helpless first years. We also have to act in communities. So there is a sense in which we are very lucky: in humans, our intelligence correlates with our altruism. In the grand space of possible minds, it is very unlikely that altruism and morality are correlated with intelligence. So whatever that machine race we birth is, it won't have any of the things we value if we're just building for "precise perception, reasoning, etc."
Citation needed. As it currently stands it seems incredibly unlikely we won't expand to most of our local group, making extinction incredibly unlikely.
Whatever you'd like to accomplish, if destroying a city of millions with a nuclear bomb would give you pause, extinguishing humanity should give you more pause.
imho precision is a chimera which often leads to an excess of certitude; acceptance and awareness of uncertainty often leads to better decision-making.
Put another way, a laser pointer is not a very good tool with which to explore a cave, unless you can systematically sweep it over the whole cave, an expensive and time-consuming process. If you're exploring a new cave, you might be better off with weak omnidirectional illumination, like a lamp.
Unexamined anthropocentric hogwash.
I think there's an element of hubris in this article, and in many other discussions of a superintelligent singularity: the assumption that the preservation of humans as the dominant lifeform is an inherent good. Humans and our societal structures are incredibly flawed. We are greedy and vain and vindictive and apathetic.
If an AI is created that has consciousness and is smarter and more moral than humans, it could lead our world better than we currently do. It could genetically modify us to make humankind a more ideal species. These AIs could be better humans than humans can possibly be.
In 100-200 years, "kill all humans" could actually be the morally best choice. By comparison, we could be anachronistic barbarians suitable for nothing but the novelty of being viewed in a zoo.
There are certainly risks from passing the baton to a different unknown future lifeform, but I don't think we can have a priori certainty that replacing humans with AI will be inherently bad.
>His point was that there are no odds that would rationally allow a parent to bet the life of their child for a quarter. Human nature just doesn’t work that way, and it shouldn’t work that way.
People have done this for most of history. Working a farm, for example, is non-trivially dangerous and fairly low-profit. Children often helped on the farm in rural communities from a young age. So every time you had your child work the farm, you were rolling some dice. Over and over. But eventually those quarters added up to enough to put food on the table, so it was rational to roll them.
This seems like the sort of philosophical argument only someone who has grown up in a very privileged life and hasn't experienced much else would make. To them it is inherently wrong, but to others it is simply part of life. Which makes it no longer a universal axiom but a matter of cultural upbringing.
It's unlikely that we'll ever be able to put the genie back in the bottle.
The potentially limitless power of AI is going to be too much of a temptation for some governments, militaries, powerful individuals, and various groups to resist.
Ban AI in one place and it'll just be developed somewhere else.
That's not to say we should simply march forward with our eyes closed, but efforts to stop AI are highly unlikely to succeed.
Jacques Ellul[1] talked about the almost inevitable and freedom-denying aspects of what he called "technique" (really the relentless drive towards efficiency across all disciplines) in The Technological Society[2].
Ellul's thought inspired the Unabomber, though Ellul himself never advocated violence as a response. As we all know, the Unabomber did not himself stop technological progress... he did manage to spread his ideas a bit, but that doesn't seem to have had any effect either.
I'm completely against violence myself, and unequivocally condemn terrorism of all sorts (including the Unabomber's). Not only is it unethical and immoral, but it's ineffective and counterproductive.
[1] - https://en.wikipedia.org/wiki/Jacques_Ellul
[2] - https://en.wikipedia.org/wiki/The_Technological_Society
I like to think of AI-Genesis through the lens of what humanity has already done through domestication. We take something primitive and progressively adapt it to serve a greater utility. I think working dogs are the most interesting example of this. We've taken a species, the wolf, and made it smarter while also making it want to do work, learn tricks, and follow orders. Of course, you still need to train the animal for optimal results but even breeds like collies know how to herd instinctively.
Anyways.
Let's assume the best and brightest dog breeders endeavor to make German Shepherds as intelligent as they possibly can. Would the same ethical debates about what constitutes a 'mind' come into play? What would happen if the dogs became smart enough to make their own mating decisions? Would we be worried about them turning on us once they get close to human level intellect? Would it be immoral to make these dogs work? Or, would not letting them work be considered immoral?
This is just food for thought. But I suspect AI's capabilities will grow much in the same way other domesticated species have grown into the specialized roles we've crafted for them.
In what universe is a dog smarter than a wolf? I think you got the correlation between animal domestication and their intelligence backward :)
Aren't we inappropriately reifying AI? AI doesn't really exist, other than as an academic field of research. For instance, here is how the European Union is trying to define AI for regulation purposes. Note how broad it is!
"artificial intelligence system (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with"
"Annex I: (a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods."
Indeed, AI is just a synonym for computer program, there's a reason why Dune's Butlerian Jihad results in the destruction of all computers (including robots)...
I'm disappointed that the article doesn't seem to realize this; I doubt that anything less drastic is going to work, considering how easy it is to replicate software!
(I'm not really convinced that EU's approach can work either.)
I think we should strive for the Iain Banks vision of the future: treat the AIs with respect and ensure they have equal rights. I'd be happy with a benevolent AI controlling the government. But I imagine, given the current state of things, we will probably treat it unethically and make it do something like optimise advertisements or bomb random villages.
This assumes that the AI knows what respect is, has desires, has a desire for respect, and I could go on. Of course we might get such an AI at some point, but any number of catastrophes can happen along the way until we get there, because AI is neither a black/white situation nor a single big event.