I don't want anyone to build autonomous weapons, but I don't want anyone to build nuclear weapons or any other weapons of war either; I don't see how to avoid it. If the choice is either to develop and deploy autonomous weapons or to risk having your population conquered and murdered by enemies that use them, then there is no choice.
Possibly, autonomous weapons, like chemical weapons, won't be important to victory, or, like most biological weapons (AFAIK), they won't be cost-effective. But it's hard to imagine a human defeating a bot in a shootout; consider human stock market traders who try to compete with flash trading computers, for example. In fact, I wonder if some of the tech for optimizing decision speed and accuracy is the same.
Perhaps the best response by governments is to use their resources to develop autonomous weapons countermeasures, especially countermeasures that can be acquired and utilized by those with few resources: towns, governments in poor countries, and even individuals.
Also, my guess is that it's an area ripe for effective international standards, treaties, and law. All governments can agree that they don't want the chaos of proliferating, unregulated autonomous weapons and would work to enforce the rules.
I've had a shootout of sorts against a robot. The robot was armed with an airsoft gun, and I with a Glock pistol. The goal was not to kill the robot (since it was expensive, and the owner and I had spent a long time getting the machine vision software working) but to avoid being hit by the robot while engaging some other targets.
The course had to be carefully constructed to avoid an immediate robot victory, and the robot wasn't mobile. I wouldn't take the human side in a confrontation with an armed robot driven by a defense budget.
The disadvantage of a robot is limited mobility and difficulty distinguishing friends from foes, the same disadvantages which plague landmines. The advantage is that a robotic force could provide the same area denial as landmines without the long-term consequences: set 20% of the robots to come home and recharge every day, with a week-long battery life, and you've got a very short period during which problems can happen.
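The rotation arithmetic behind that scheme is worth spelling out. Here is a minimal back-of-the-envelope sketch using the numbers assumed above (20% of the fleet recharging per day, a one-week battery); these are the comment's illustrative figures, not data from any real system:

```python
# Back-of-the-envelope check of the recharge-rotation scheme above.
# Assumed numbers (illustrative, from the comment, not any real system):
battery_life_days = 7
recharge_fraction_per_day = 0.20  # 20% of the fleet returns home each day

# With 20% rotating home daily, each robot recharges once every 5 days,
# leaving a 2-day margin before its battery would run out.
recharge_cycle_days = 1 / recharge_fraction_per_day
margin_days = battery_life_days - recharge_cycle_days

# If the fleet is simply abandoned (conflict ends, recall issued, or comms
# lost), every robot is inert within one battery life -- unlike a landmine
# field, which stays lethal for decades.
worst_case_hazard_days = battery_life_days

print(recharge_cycle_days, margin_days, worst_case_hazard_days)
```

The margin is the point: a robot can miss a couple of daily recall windows before it dies where it stands, while an abandoned fleet is harmless within a week.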
Biological and chemical weapons are considerably easier to exclude by mutual agreement than AI, because the line is pretty clearly drawn: there is not much of an incremental path from dropping explosives to dropping gas containers. The line suggested here seems much more arbitrary and would be more like a wide grey area, pushed wider and wider into AI territory by including some form of token human interaction as a formality.

As described in the open letter, the "forbidden technology" would sit firmly sandwiched between the well-established technology of "seek to kill" missiles that are fully autonomous once fired (as opposed to "seek, then autonomously decide to kill or not", which would be forbidden) and teleoperated equipment, which is also explicitly allowed. The latter won't be stopped from acquiring better and better autonomous capabilities by mandating that an operator sign off on kill decisions, which will eventually become a meaningless formality. If we want to avoid autonomous weapons, we need a more robust line than the one suggested.
It is easy to prevent: have the UN ban it and provide incentives for countries to sign a treaty. This has worked for things like chemical and biological warfare. The key is to start the process now, before generals get their hands on the technology, so there won't be any pushback.
> All governments can agree that they don't want the chaos of proliferating, unregulated autonomous weapons
The US is not going to give up this capability. It's still not quite fully signed up to the landmine treaty.
We know how this will go: automated colonial "antiterrorism" enforcement. Like drone strikes today, only lower cost. Entire populations kept in line by the robots that hunt in the night. Objecting to the death robots and organising against it will be considered evidence of terrorism and result in your death, along with anyone who phoned you recently enough. Deployed from Turkey to Tripoli.
But I wonder how much resistance you would get from the military, veterans, military families, and so on, who make the argument that for every robot we build, a human soldier doesn't have to be put at risk.
I don't agree with that line of thinking but it would be quite a debate to have.
> Possibly, autonomous weapons, like chemical weapons, won't be important to victory, or, like most biological weapons (AFAIK), they won't be cost-effective. But it's hard to imagine a human defeating a bot in a shootout; consider human stock market traders who try to compete with flash trading computers, for example. In fact, I wonder if some of the tech for optimizing decision speed and accuracy is the same.
The only way for human adversaries to fight autonomous weapons would be with brute, lethal force (nuclear/neutron weapons). It ends poorly for all involved.
The problem with rules is that someone always has it in their best interest to break them.
Unlike dropping a nuclear bomb, you could break the rules here for years without even being caught. It's more like Germany in the 1930s than the Cold War.
> I don't want anyone to build autonomous weapons, but I don't want anyone to build nuclear weapons or any other weapons of war either
FWIW, many of the scientists involved in creating the first nuclear weapons began pushing for a ban on further nuclear armament immediately after the first detonation, and since the end of WWII all wars have been fought with conventional weapons.
I've been reading about the nuclear arms race, and it is terrifying how often we came close to destroying ourselves. I have possibly never seen greater evidence that there may be a god.
"With regard to moral questions, I do have something I would like to say about it. The original reason to start the project, which was that the Germans were a danger, started me off on a process of action which was to try to develop this first system at Princeton and then at Los Alamos, to try to make the bomb work. All kinds of attempts were made to redesign it to make it a worse bomb and so on. It was a project on which we all worked very, very hard, all co-operating together. And with any project like that you continue to work trying to get success, having decided to do it. But what I did—immorally I would say—was to not remember the reason that I said I was doing it, so that when the reason changed, because Germany was defeated, not the singlest thought came to my mind at all about that, that that meant now that I have to reconsider why I am continuing to do this. I simply didn't think, okay?"
This is extremely idealistic, but we need a way for engineers and scientists to feel accountable for the outcomes of their work, and to refuse outright to work on such projects. And the people who do work on such systems should be held accountable in some deep way. We have reached a developmental stage where building tools and techniques with the active goal of harming human lives has become morally unacceptable. Engaging in civil disobedience if you are working on such projects is the only acceptable outcome; Snowden should be remembered as the first of many, not as an exception.
(yes, there are many counterpoints to my argument, but starting debates is more interesting than spewing out platitudes. I'm interested in reading the replies)
I once worked in the German field engineering department of a large US semiconductor company as a student. In the department there was a noticeable barrier between one manager and the engineers. The following had happened there a few years earlier: a client required a DSP to calculate the weight on a landmine switch. The department's engineers, all but one manager, refused to work for the client. They were threatened with being fired, but they stayed on course and ended up keeping their jobs.
The way it worked was by one guy rallying, taking apart the specifications and explaining the actual moral implications to the engineers.
One of my family members turned down an offer of double his salary because it would entail working on military systems, and he's a conscientious objector.
> "the people who do work on such systems should be held accountable in some deep way"
... another of my family members has worked on autonomous military systems, and believes herself to be a viable military target because of it.
> "building tools and techniques in the active goal of harming human lives has become morally unacceptable. Engaging in civil disobedience if you are working on such projects is the only acceptable outcome"
The two people I referenced above have a deep, thoughtful, respectful disagreement. Your version is incredibly oversimplified. (For a taste, see the responses to https://news.ycombinator.com/item?id=1823802 .)
People do. Then they quit their jobs and are replaced by other smart people willing to do the work (and needing the job).
> people who do work on such systems should be held accountable in some deep way
Never going to happen. The political and military leaders are the ones who choose to develop and deploy such weapons. They should be held accountable, and sometimes they are. Should we go out and prosecute all the engineers and scientists who worked on nuclear bombs that have been sitting in bunkers and silos for the past 60 years?
> We have reached a developmental stage where building tools and techniques in the active goal of harming human lives has become morally unacceptable.
Who is "we"? A gun is specifically designed to kill things, but the wielder of the gun decides whether it will be used for good or for evil. Likewise, there are plenty of other objects not designed to kill people that are used for that purpose (stones, rope, buckets of water, etc).
Would you consider working on AI countermeasures? Would you want to have a strong defense that can fend off AI invaders, even if it means that defensive force could be re-purposed for offense?
> This is extremely idealistic...
Ideally, you want to rid the world of conflict and war. But this is impossible while there remain limited resources and different ideas. You would need to find an infinite source of food/water/land as well as force everyone to conform to one ideology to avoid war. So aside from being impossible (as far as resources go), you would need a totalitarian world government imposing thought control on all of humanity to bring about such a "peace."
Einstein and many of the nuclear scientists and engineers who participated in the project felt betrayed by the U.S. dropping the bomb on Japan; they had worked on the making of the bomb expecting that it would serve as a deterrent, not as a weapon. Because of this, some fled to the USSR and China. But even if the scientists and engineers had wanted to stop the use of the bomb after working on it, they couldn't have, because it's a political decision. That's why this has to be stopped even before the weapons race starts, IMO.
That's a nice thought, but the cat is out of the bag. One week the elite geeks talk about how strong encryption is available to everyone and can't be stopped, or how you really can't regulate what comes out of a 3d printer. Then they try to put the autonomous AI drone genie back in its bottle?
Gamers gonna use tech for gaming, advertisers gonna use tech for advertising, military gonna use tech for militarying.
I think the pace of tech development is going so fast, we need to stop trying to ban individual developments and start trying to change the way people and governments behave so those bans aren't needed. But I'm not sure if that's even possible short of some dystopia.
"But listen to me, because I saw it myself: science began poor. Science was broke and so it got bought. Science was scared and so did what it was told. It designed the gun and gave the gun to power, and power then held the gun to science's head and told it to make some more."
At least half a dozen nations are working on such systems now. They're doing this not because they think it's a good idea, but because there's an attractor basin they recognize we reach by default: an arms race of faster, smarter, stronger AI weapons spiraling up. This is something we don't think is a good idea.
This letter is essentially a petition, which we'd like to take to the U.N., showing that the AI and ML communities don't want their work used in autonomous weapons: things built specifically to, by themselves, offensively target and kill people. This sort of grassroots effort has precedent: chemical weapons, landmines, and laser-blinding weapons were all banned globally through treaties that grew out of efforts like this one. It's true that terrorists and rogue states might still use them in isolation, but you won't get the major powers locked in an arms race. Although these things are under development right now in multiple countries, they haven't been deployed to the field yet. So we're really at an inflection point: we're trying to get a ban in place before they're actually deployed at all, because after that it'd be much harder to get such a treaty. If you agree with these sentiments, we are collecting signatories from the community.
Also consider that autonomous weapons could upend the global power structure:
For most of human history, military power has been tied to economic power and population size: those with larger economies and populations have been more powerful. AFAIK, that is why the United States has been the dominant military power since WWII and why China may challenge the U.S. It's also how national governments have maintained sovereignty: by having far more economic and human resources than any internal competitors (and where that isn't true, such as in poor countries, national governments can be ineffective).
But what if military power depends on the quantity and quality of bots? What stops a smaller or even poorer country from building a robot army? Poor countries have more manufacturing capacity than wealthy ones, AFAIK, and perhaps they need only one innovative, disruptive software developer to make their bot army superior, or at least competitive. For example, could tiny Singapore dominate SE Asia or even become a world power? In fact, what stops a sub-national group such as Hezbollah, a Mexican drug cartel, another organized crime group, or even a wealthy individual from building their own army? Without checking the inside of every factory on the planet, will we even know a robot army is being built until it's too late? Will governments be able to protect their citizens from warlords and exercise sovereignty over their own territory? What about poor governments?
It's very speculative -- it remains to be seen, for example, how effective autonomous weapons will be -- but it could be a historic change. Perhaps our hope is that the technology will turn out to be like other weapons, such as airplanes: anyone can build one, but the single-engine prop plane is no threat to what can be built with the Pentagon budget.
Manual manufacturing capacity, maybe. The reason 'poorer' countries have a lot of manufacturing is directly related to human cost. 'Richer' countries produce tons of stuff too, but their production is high-technology products built with automated systems, which would make the machines they produce more reliable, made from better materials, and more precise.
Having one person generate a better algorithm is an interesting idea. But pit fewer, older, less reliable machines with a 90% "kill" rate against higher-end machines with a 60% "kill" rate: who would 'win' that one? I would have no idea.
What's stopping those warlords/cartels from making drones right now to make sure everyone is obeying them? There are some drones flying around, but they are nothing compared to what the United States flies over the Middle East. We already have machines fighting wars for us; they are just remote-controlled instead of totally autonomous.
I know it's standard tin-foil-hat territory, but wouldn't the biggest non-governmental risk be large multinational corporations? Some companies have incomes comparable to those of nation states; I'd have no reason to suspect they'd be any less capable of building military technology that rivalled a country's.
I guess it will be somewhat like other military high tech.

Poor countries have the factories and manufacturing power, but the core of the technology is in very few hands.

For example, Brazil can build jet fighter aircraft, but can it do so without foreign radar systems? Engines? Missile control systems? Other systems?

I don't think it currently can. Brazil can do 90% of it, but the part that makes the plane effective as a weapon is imported, and without it the plane is only as useful as an old fighter.
A ban is definitely a good idea, but I think we should have something else as well. We need developers to agree, as human beings rather than merely as law-abiding citizens, not to build these things. Don't apply for those jobs, regardless of how well they pay. If your company starts projects in that market, leave and find something else. Understand that you are not building things to 'protect peace' or 'bring democracy' to people. Using your tech skills to create things to kill people is a dickish thing to do.
There are only 18.5 million developers in the world[1]; getting a consensus not to be evil shouldn't be beyond us.
Glad I'm not the only one worried about this. I spent a bit of my early career on this type of tech. At first I thought it was really cool and joked with my coworkers about building skynet. Eventually I realized no amount of coolness or money was worth putting my talents to building things that are obviously meant for destruction.
Sure they're tools and can be tools for peace in the right hands. But in the wrong hands, they can do immense damage. Perhaps one of the things that's kept humanity around is that despite the psychopaths in our midst who might not care if they destroyed every other human being, there are others whose conscience would get in the way.
This type of technology, in the hands of the wrong psychopath, might mean the end of us. Despite the BS marketing behind AI, NO, it is not sentient; it's a bunch of optimization algorithms. Not Good, Not Evil.
I realize that someone will build it. That is an inevitability. Just know that it doesn't have to be me.
(Before you write comments on my handle, please read my profile; it has more to do with hip hop than violence.)
The thing that really bothers me about the autonomous weapons stuff is the potential for tyranny. One of the tricks of running an oppressive regime is that you still need people to do the actual enforcing. There are limits to just how far you can go, based on what you can convince your own people to do to each other. Yeah, there are a lot of ways to use propaganda and other such tricks to get people to do things you wouldn't think they'd be willing to do, but there are still some absolute limits in there.
Really good AI weapons could change the whole balance around though. Whatever weird, crazy thing you dream up, just order the AI bots to make people do it, and it will be done. No convincing needed, no limits.
If all it took to create a nuclear reaction was some dirt and a microwave, we'd be fucked. All it takes is one angsty teen and they'd level a city.
This box, once opened, won't close. And it only serves our interests in the worst of ways. People we don't want having this stuff will have it. This is just the tippy-tip of the iceberg, though: there is a SLEW of technology coming out with intimidating implications, technology that makes it super easy to control and exterminate a populace.
So that's the thing: we need to realize we can do anything. Really. You want to blow up the world? I'm sure we could find a way. You want to type a name into a computer and have a drone find that person and kill them? No problem. And that's not to mention all the other things we'll discover along the way.
It is far more likely that we will be the creators of our own destruction than that we will be able to rein in our behavior and wield our intelligence to serve the interests of our species. We haven't gotten past killing each other, so we're just going to keep doing that, but get REALLY good at it. Our technical abilities have far outpaced our philosophical ones, and that doesn't bode well.
This seems the more credible AI threat to me: not that an AI will go rogue and decide on its own to start killing people, but rather that humans will design an AI with the express purpose of killing people.
The threat is more that humans will design AI with the goal of making X objects or optimizing Y system, which leads to the unintended consequence of killing people.
The AI threat was credible enough to begin with. If you design an AI with the express purpose of making cheeseburgers, and allow it to improve itself, it will end up killing people. We don't know how to specify any utility function for a self-improving AI that won't lead to killing people.
I find it hopeful to see people trying to mitigate these risks while they have only just begun to be realized. There have already been more than enough historical cases of inventions turning into something the inventor would dearly wish to have undone.
One major current concern of mine, that this letter does not address, is AI for surveillance and social control.
What is already being done in that regard is arguably military intelligence technology directed indiscriminately at entire populations, but the added element of powerful AI spidering over the data streams that go into places like the NSA Bluffdale facility is quite appalling.
I think this is even harder to inspect for than autonomous killing systems, and even more difficult to avoid developing, since much of the needed capability will likely be similar to what academia and the data-intensive commercial sector will want for their own needs. But the potential damage through empowering totalitarian control could well be comparable to or greater than an "AI arms race".
I really hope the field will deal with this aspect too.
> Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.
Is it just me or does this read like an advertisement?
The race for AI-supported weaponry has been on for a long time. Rosenblatt was using perceptrons to try and identify tanks in the late 50s. So this is not a race whose start is to be forestalled, as the letter phrases it, since it began a long time ago. AI has been weaponized.
I think the caution FLI expresses towards autonomous weapons is fair, but let's be really clear on where we are. Various forms of weak and narrow AI have been applied to warfare for a long time, and they will continue to be, regardless of petitions by the prominent.
Nice to see, but it will never work. Weapons move forward no matter how terrible they might be; politicians and military leaders always use the "if we don't, they will" excuse. It's usually true, which is sad.
> The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.
Sorry, but it's too late; there will always be advancements in technology, especially in AI, and it's going to happen. So you can either not do research in it now while someone else does (or eventually repurposes other AI research for this task), or you can do it now to better understand it, its strengths and weaknesses, etc., and possibly use it to intercept other AI creations.
Just as it'll never be possible to ban all guns, regardless of whether that would be the wrong or right thing to do, asking people not to research this is simply not going to work.
There have been relatively successful bans on chemical and biological weapons, why do you suspect we can't successfully ban the proliferation of autonomous weapons? These things don't appear out of thin air, they still have to be manufactured, sold and stored. If you can find them you can remove them, and deal with those who created them.
We all pretty much agree here that autonomous cars are safer and make better decisions than human-driven ones. Why wouldn't the same hold true of weapons?
There seems to be a philosophical distaste to letting machines decide whether or not to kill humans, but if the upshot is that fewer innocents and more legitimate targets are killed for less money, then I'm not sure what the problem is.
Humans are bad decision-makers at the best of times. Add in the stresses of combat and we're downright lousy. Why shouldn't we offload that decision-making to machines that can do it better than us?
Autonomous cars will surely be safer and make better decisions.

What I think they are warning against is the potential efficiency of AI machines at war. Wars could happen in minutes instead of years.
You mention more legitimate targets being killed and fewer innocents, but how are those defined? There have been multiple points in history where a group defined the set of 'legitimate targets' as everyone not in their group.
I don't think that suppressing innovation is going to work, regardless of how many letters are written. People are working on these things. They will come, and at some point, yes, they will be misused. However, on the whole, they might be a good thing.
It would, for example, be awesome to have a system that could disable one or multiple active shooters in a public area within a few milliseconds of their first shot being fired. One of these should be in every classroom, movie theater, mall, and military base - anywhere that soft targets congregate. So you can't just say that we shouldn't have auto-targeted weapons, because they can do a tremendous amount of good and save countless lives.
I'm already wary of how much trust we put into programs without formal proofs; it's a bit troubling that a formal proof that it won't fire at kids with toy guns is essentially intractable.
Good luck with this, no seriously, good luck. Our technological capability moves forward whether we want it to or not, even if we fight it. It can be slowed somewhat (the electric car being one example, stem cell research another), but it will happen, and we will need laws and frameworks to ensure we deal with this change appropriately, sooner rather than later.
This is the equivalent of hiding our heads in the sand.
Unfortunately, it couldn't come at a worse time - a time when even the most "democratic" countries on Earth are pushing for their people to have fewer rights, more censorship, more surveillance, more torture, more secret assassinations and so on.
> we will need laws and frameworks to ensure we deal with this change appropriately sooner rather than later
You really think the state is going to hamstring itself? Because states are creating the demand for and purchasing autonomous killing tools like Metal Storm[1], not private entities.
My boss once asked me if it would be possible to put code in our product that detects whether it has been pirated and, if so, formats the hard drive.

I told him that it was a very bad idea for a number of reasons. Primarily, I didn't want that code in my product because eventually it's going to run in the wrong case. If it's not in there, it won't ever run.
I am unconvinced of a lot of the fears around super-AI. I can get behind this initiative though. We have already banned some types of horrible weapons, like flamethrowers and chemical weapons. Hopefully we can manage to ban this one as well.
It's mostly that the AI in question will follow a general directive to its end. For example, ensuring the safety of a nation's population may include putting its citizens into extremely hardened bomb shelters and never letting them leave. Or worse, annihilating the entire human species to ensure world peace (the absence of humans would produce the same result and would probably be more efficient in terms of execution). It's not that the AI will be super smart; it's that the AI will be super dumb. As Aristotle put it, "Law is mind without reason." And for me, logic is just another set of laws without any sort of reason (justification).
For this to work there has to be a good definition of what they mean by AI. Control circuits to stabilize quadcopters or airplanes? What about a "formation" system, so that a number of drones can be controlled as one abstract entity? What about using computer vision to lock onto a target (keep in mind radar locking has existed for a long time)?
What about a drone patrolling along a predefined route? What about "macros" like a big red button that means fire all weapons, then rise and return home? What about an automatic avoid-anti-air program?
In a fair fight? Sure. But until these bots have strong human-strength AI, enemies will always be able to come up with dirty tricks.
[+] [-] click170|10 years ago|reply
But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.
I don't agree with that line of thinking but it would be quite a debate to have.
[+] [-] toomuchtodo|10 years ago|reply
The only way for human adversaries to fight autonomous weapons would be with brute, lethal force (nuclear/neutron weapons). It ends poorly for all involved.
Unlike dropping a nuclear bomb, you could break the rules here for years without even being caught. It's more like Germany in the 1930s than the Cold War.
FWIW, immediately after the first detonation, all of the scientists involved in creating the first nuclear weapons began pushing for a ban on further nuclear armament, and since then all wars have been fought with conventional weapons.
I've been reading about the nuclear arms race, and it is terrifying how many times we came close to destroying ourselves. I have possibly never seen greater evidence that there may be a god.
"With regard to moral questions, I do have something I would like to say about it. The original reason to start the project, which was that the Germans were a danger, started me off on a process of action which was to try to develop this first system at Princeton and then at Los Alamos, to try to make the bomb work. All kinds of attempts were made to redesign it to make it a worse bomb and so on. It was a project on which we all worked very, very hard, all co-operating together. And with any project like that you continue to work trying to get success, having decided to do it. But what I did—immorally I would say—was to not remember the reason that I said I was doing it, so that when the reason changed, because Germany was defeated, not the singlest thought came to my mind at all about that, that that meant now that I have to reconsider why I am continuing to do this. I simply didn't think, okay?"
(from "The Pleasure of Finding Things Out", transcript here: http://www.worldcat.org/wcpa/servlet/DCARead?standardNo=0738...)
This is extremely idealistic, but we need a way for engineers and scientists to feel accountable for the outcomes of their work, and to flatly refuse to work on such projects. And the people who do work on such systems should be held accountable in some deep way. We have reached a developmental stage where building tools and techniques with the active goal of harming human lives has become morally unacceptable. Engaging in civil disobedience if you are working on such projects is the only acceptable outcome; Snowden should be remembered as the first of many, not as an exception.
(yes, there are many counterpoints to my argument, but starting debates is more interesting than spewing out platitudes. I'm interested in reading the replies)
The way it worked was by one guy rallying, taking apart the specifications and explaining the actual moral implications to the engineers.
One of my family members turned down an offer of double his salary because it would entail working on military systems, and he's a conscientious objector.
> "the people who do work on such systems should be held accountable in some deep way"
... another of my family members has worked on autonomous military systems, and believes herself to be a viable military target because of it.
> "building tools and techniques in the active goal of harming human lives has become morally unacceptable. Engaging in civil disobedience if you are working on such projects is the only acceptable outcome"
The two people I referenced above have a deep, thoughtful, respectful disagreement. Your version is incredibly oversimplified. (For a taste, see the responses to https://news.ycombinator.com/item?id=1823802 .)
People do. Then they quit their jobs and are replaced by other smart people willing to do the work (and needing the job).
> people who do work on such systems should be held accountable in some deep way
Never going to happen. The political and military leaders are the ones who choose to develop and deploy such weapons. They should be held accountable, and sometimes they are. Should we go out and prosecute all the engineers and scientists who worked on nuclear bombs that have been sitting in bunkers and silos for the past 60 years?
> We have reached a developmental stage where building tools and techniques in the active goal of harming human lives has become morally unacceptable.
Who is "we"? A gun is specifically designed to kill things, but the wielder of the gun decides whether it will be used for good or for evil. Likewise, there are plenty of other objects not designed to kill people that are used for that purpose (stones, rope, buckets of water, etc).
Would you consider working on AI countermeasures? Would you want to have a strong defense that can fend off AI invaders, even if it means that defensive force could be re-purposed for offense?
> This is extremely idealistic...
Ideally, you want to rid the world of conflict and war. But this is impossible while there remain limited resources and different ideas. You would need to find an infinite source of food/water/land as well as force everyone to conform to one ideology to avoid war. So aside from being impossible (as far as resources go), you would need a totalitarian world government imposing thought control on all of humanity to bring about such a "peace."
Gamers gonna use tech for gaming, advertisers gonna use tech for advertising, military gonna use tech for militarying.
I think the pace of tech development is going so fast, we need to stop trying to ban individual developments and start trying to change the way people and governments behave so those bans aren't needed. But I'm not sure if that's even possible short of some dystopia.
-- from Galileo's Dream, by Kim Stanley Robinson
For most of human history, military power has been tied to economic power and population size: Those with larger economies and populations have been more powerful. AFAIK, that is why the United States has been the dominant military power since WWII and why China may challenge the U.S. It's also how national governments have maintained sovereignty, by having far more economic and human resources than any internal competitors (and when that isn't true, such as in poor countries, national governments can be ineffective).
But what if military power depends on the quantity and quality of bots? What stops a smaller or even poorer country from building a robot army? Poor countries have more manufacturing capacity than wealthy ones, AFAIK, and perhaps they need only one innovative, disruptive software developer to make their bot army superior or at least competitive. For example, could tiny Singapore dominate SE Asia or even become a world power? In fact, what stops a sub-national group such as Hezbollah, a Mexican drug cartel, another organized crime group, or even a wealthy individual from building their own army? Without checking the inside of every factory on the planet, will we even know the robot army is being built until it's too late? Will governments be able to protect their citizens from warlords and exercise sovereignty over their own territory? What about poor governments?
It's very speculative -- it remains to be seen, for example, how effective autonomous weapons will be -- but it could be a historic change. Perhaps our hope is that the technology will turn out to be like other weapons, such as airplanes: Anyone can build one, but the single engine prop plane is no threat to what can be built with the Pentagon budget.
Having one person generate a better algorithm is an interesting possibility. Or having fewer, older, less reliable machines with a 90% "kill" rate go up against higher-end machines with a 60% "kill" rate. Who would 'win' in that one? I would have no idea.
What's stopping something like those warlords/cartels from making drones right now to make sure everyone is obeying them? There are some drones flying around, but they are nothing compared to what the United States flies over the Middle East. We already have machines fighting wars for us; they are just remote-controlled instead of totally autonomous.
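The reliability-versus-accuracy question above lends itself to a quick Monte Carlo sketch. This is a toy exchange model with made-up force sizes, not a real combat simulation; only the 90% and 60% hit rates come from the comment:

```python
import random

def duel(n_a, p_a, n_b, p_b, max_rounds=100, rng=None):
    """One engagement between force A (n_a units, hit probability p_a)
    and force B. Each surviving unit fires once per round; volleys are
    simultaneous; each hit removes one enemy. Returns 'A', 'B', or 'draw'."""
    rng = rng or random.Random()
    a, b = n_a, n_b
    for _ in range(max_rounds):
        hits_on_b = sum(rng.random() < p_a for _ in range(a))
        hits_on_a = sum(rng.random() < p_b for _ in range(b))
        a, b = max(0, a - hits_on_a), max(0, b - hits_on_b)
        if a == 0 or b == 0:
            break
    return "A" if a > b else "B" if b > a else "draw"

def win_rate(trials=2000, seed=1, **kw):
    """Fraction of engagements won by force A over many trials."""
    rng = random.Random(seed)
    return sum(duel(rng=rng, **kw) == "A" for _ in range(trials)) / trials

if __name__ == "__main__":
    # Ten 90%-accurate units vs ten 60%-accurate units.
    print(win_rate(n_a=10, p_a=0.9, n_b=10, p_b=0.6))
    # The accurate side outnumbered two to one.
    print(win_rate(n_a=6, p_a=0.9, n_b=12, p_b=0.6))
```

In equal-number runs the more accurate side almost always wins, but give the less accurate side a large enough numerical edge and it pulls ahead, roughly the behavior Lanchester's square law predicts.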
Poor countries have the factories and manufacturing power, but the core of the technology is in very few hands.
For example, Brazil can build jet fighter aircraft, but can it do so without foreign radar systems? Engines? Missile control systems? And other systems?
I don't think they currently can. They can do 90% of it, but the part that makes it effective as a weapon is imported, and without it the plane is just as useful as an old fighter.
There are only 18.5 million developers in the world[1]; getting a consensus not to be evil shouldn't be beyond us.
[1] http://www.techrepublic.com/blog/european-technology/there-a...
Sure they're tools and can be tools for peace in the right hands. But in the wrong hands, they can do immense damage. Perhaps one of the things that's kept humanity around is that despite the psychopaths in our midst who might not care if they destroyed every other human being, there are others whose conscience would get in the way.
This type of technology, in the hands of the wrong psychopath, might mean the end of us. Despite the BS marketing behind AI, NO, it is not sentient; it's a bunch of optimization algorithms. Not good, not evil.
I realize that someone will build it. That is an inevitability. Just know that it doesn't have to be me.
(Before you write comments on my handle, please read my profile; it has more to do with hip hop than violence.)
Really good AI weapons could change the whole balance around though. Whatever weird, crazy thing you dream up, just order the AI bots to make people do it, and it will be done. No convincing needed, no limits.
This box, once opened, won't close. And it only serves our interests in the worst of ways. People we don't want having this stuff will have it. This is just the tippy-tip of the iceberg, though. There is a SLEW of technology coming out with intimidating implications that makes it super easy to control and exterminate a populace.
So that's the thing. We need to realize we can do anything. Really. You want to blow up the world? I'm sure we could find a way. You want to type a name into a computer and have a drone find that person and kill them? No problem. And that's not to mention all the other things we'll discover along the way.
It is far more likely that we will be the creators of our own destruction than that we will be able to rein in our behavior and wield our intelligence to serve the interests of our species. We haven't gotten past killing each other, so we're just going to keep doing that, but get REALLY good at it. Our technical abilities have far outpaced our philosophical ones, and that doesn't bode well.
Some developer could simply forget to write the WHERE clause on the query for "which human not to kill".
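The joke has a very real counterpart: in SQL, an UPDATE or DELETE with no WHERE clause silently applies to every row. A minimal sqlite3 sketch (the table and names are invented purely for illustration):

```python
import sqlite3

# Toy in-memory database with three rows, only one of them "hostile".
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE targets (name TEXT, hostile INTEGER)")
db.executemany("INSERT INTO targets VALUES (?, ?)",
               [("alice", 0), ("bob", 0), ("eve", 1)])

# Intended: flag only 'eve'. The forgotten WHERE flags everyone.
db.execute("UPDATE targets SET hostile = 1")  # oops: no WHERE name = 'eve'

flagged = db.execute(
    "SELECT COUNT(*) FROM targets WHERE hostile = 1").fetchone()[0]
print(flagged)  # prints 3: every row is now marked hostile
```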
One major current concern of mine, which this letter does not address, is AI for surveillance and social control. What is already being done in that regard is arguably military intelligence technology directed indiscriminately at entire populations, but the added element of powerful AI spidering over the data streams that flow into places like the NSA Bluffdale facility is quite appalling. I think this is even harder to inspect for than autonomous killing systems, and even more difficult to avoid developing, since much of the capability needed will likely be similar to what academia and the data-intensive commercial sector want to serve their own needs. But the potential damage through empowering totalitarian control could well be comparable to or greater than an "AI arms race". I really hope the field will deal with this aspect too.
Is it just me or does this read like an advertisement?
The race for AI-supported weaponry has been on for a long time. Rosenblatt was using perceptrons to try to identify tanks in the late '50s. So this is not a race whose start can be forestalled, as the letter phrases it; it began long ago. AI has already been weaponized.
I think the caution FLI expresses towards autonomous weapons is fair, but let's be really clear on where we are. Various forms of weak and narrow AI have been applied to warfare for a long time, and they will continue to be, regardless of petitions by the prominent.
Sorry, but it's too late; there will always be advances in technology, especially in AI, and this is going to happen. So you can either sit out the research while someone else does it (or eventually repurposes other AI research for this task), or you can do it now to better understand it, its strengths and weaknesses, and so on, and possibly use it to intercept other AI creations.
Just as it will never be possible to ban all guns, regardless of whether that would be right or wrong, asking people not to research this is simply not going to work.
There seems to be a philosophical distaste to letting machines decide whether or not to kill humans, but if the upshot is that fewer innocents and more legitimate targets are killed for less money, then I'm not sure what the problem is.
Humans are bad decision-makers at the best of times. Add in the stresses of combat and we're downright lousy. Why shouldn't we offload that decision-making to machines that can do it better than us?
What I think they are warning against is the potential efficiency of AI machines at war. Wars could happen in minutes instead of years.
You mention more legitimate targets being killed and fewer innocents, but how are those defined? At multiple points in history, a group's set of 'legitimate targets' has been everyone outside the group.
Can we just face up to the reality that more weapons = more potential abuse of weapons? There's no such thing as a perfectly moral user.
It would, for example, be awesome to have a system that could disable one or multiple active shooters in a public area within a few milliseconds of their first shot being fired. One of these should be in every classroom, movie theater, mall, and military base - anywhere that soft targets congregate. So you can't just say that we shouldn't have auto-targeted weapons, because they can do a tremendous amount of good and save countless lives.
This is the equivalent of hiding our heads in the sand.
Most murders committed with a gun in zones where guns are outlawed involve guns purchased outside those zones.
So you can see where I'm going with this.
Legislation MAY work here, but not everywhere.
You really think the state is going to hamstring itself? It is states, not private entities, that are creating the demand for and purchasing autonomous killing tools like Metal Storm[1].
[1]: http://gizmodo.com/236590/metal-storm-robot-weapon-fills-the...
I told him that it was a very bad idea for a number of reasons. Primarily I didn't want to have that code in my product because eventually it's going to run in the wrong case. If it's not in there, it won't ever run.
I am unconvinced by a lot of the fears around super-AI, but I can get behind this initiative. We have already banned some types of horrible weapons, like flamethrowers and chemical weapons. Hopefully we can manage to ban this one as well.