This is either satire or early-stage schizophrenia. When you see phrases like "Although we can’t predict the technology of the future on the basis of what we know at present" and "I do not want to give the impression I know how we can deal with the nuctroid threat" without a shred of irony, you know something (beyond the simple logical errors) is awry.
Sadly it's not unheard of for scientists and mathematicians to dabble in quackery later in their careers.
Edit: I'm not trying to dismiss his claim that technology has moral implications but he's trying to turn a well-worn social issue that's been around since pointy sticks into a technological one by waxing paranoid about the implications of the (by-definition nebulous) idea of "strong AI".
People can have peculiar opinions without having an identifiable psychiatric affliction. I doubt you are a psychiatrist and even if you were, you wouldn't be capable of diagnosing someone over the internet on the basis of a single text.
I am thoroughly dissatisfied with this being the highest voted comment at the moment. It does not make any argument at all. It just casts some aspersions on the author and his writing, judging it as 'schizophrenic' and 'awry'.
The comment can be summarized as 'the argument does not make any sense', without actually explaining why it doesn't make any sense. That summary would be a better comment, because we wouldn't get into this stupid irrelevant discussion about the author's mental health.
Not to mention the mind-blowing arrogance and ignorance he displays in believing that his research (on Bongard problems, for God's sake! Conceptions of human cognition that are 50 years behind!) could possibly lead to believable artificial humans.
It sounds to me as though you interpret taking seriously what might happen in the far future as early-stage schizophrenia. Does that sound about right?
If so, that's disappointing news for me and others who try to take what might happen in the far future seriously.
I'm not sure what to make of your edit. It looks as though you wish to compartmentalize objections to technological development in such a way that they can't actually prevent any technology from getting developed. If we're just going to ignore the outcomes of such discussions, I don't see any point in having them; we might as well just charge blindly forward.
If I read it right, he has stopped working on a form of artificial intelligence because it could potentially (or inevitably) be used to create androids, indistinguishable from humans, carrying nuclear or biological payloads inside them, presumably to be detonated in a densely populated area.
Taking as a given, as he does, that the advancement and spread of technology are inevitable, wouldn't it still be many times more likely that people would just detonate suitcase nukes themselves before they decide to hide them in expensive and potentially problematic robots? There's surely no shortage of people willing to die to do that, and even if there were, it's unlikely that setting a bomb on a half-hour timer and getting out of Dodge would affect the success rate.
That frankly ridiculous scenario aside, I can imagine much more likely ways that computers capable of solving Bongard problems (which sound pretty cool) could be used in war, like automated drones that can independently identify targets.
The whole posting appears to lie somewhere between an immature rant and a publicity stunt. A suitcase would both elicit less suspicion and always (?) be easier/cheaper to design/develop/purchase/deploy than a humanoid robot.
A hundred years from now, it will probably be trivial to set up X-ray scanners (or whatever they will be using by then) to keep humanoid robots out of most urban areas; it is much harder to do that with the suitcases, cars, and trucks that people have to bring with them for business, without which urban centers would not exist in the first place.
You're misinterpreting the premise of the article. The article is talking about ethics, not about some sort of danger. From an ethical perspective, if using a nuclear bomb is justified, then the delivery mechanism is irrelevant.
However, in the case of the sentient atomic bomb (and I think talking about it as an android obscures this question), we get into stickier terrain. A sentient atomic bomb is a sentient being whose sole purpose in life is essentially genocide. That is definitely morally repugnant to me, and by extension I'd say it's problematic in any sane ethical system.
Creating a sentient atomic bomb would be no different, in my ethical system, from raising a child from birth to serve as the guidance system for an airplane carrying an atomic bomb.
Humans don't carry nukes because fissile material is _heavy_. Add to that the shielding needed so the courier can even carry it without becoming gravely ill shortly into the delivery, and the fact that a suitcase nuke or dirty bomb has never been personally delivered is not very surprising.
The ridiculous part is that making an android carry this payload doesn't change the nature of the payload. Heavy, fissile material will still give off signatures that will trip all manner of alarms.
That gives away the ending, so if you like Asimov and might want to read the story, beware.
Quoted from just above the heading "What can be done?":
It appears that he continues to work on the simulation of human cognition.
> surely no shortage of people willing to die to do that
I think there is a major shortage of people willing to be suicide bombers. There is a grand fallacy out there that the world is full of suicidal terrorists. It is not.
Suicide bombers from Palestine were generally tricked or extorted. Those who were acting of their own volition generally could not bring themselves to detonate the bomb, which is why bystanders held the detonators.
There is strong evidence that most of the 9/11 hijackers did not know it was a suicide mission.
That said, under orders, extortion, or trickery, a human could definitely sneak a bomb into a city.
> “So where does the air vehicle called the Predator [i.e., a flying robot] fit? It is unmanned, and impressive. In 2002, in Yemen, one run by the CIA came up behind an SUV full of al-Qaeda leaders and successfully fired a Hellfire missile, leaving a large smoking crater where the vehicle used to be.”
> Yes, just as you read it: a number of human beings were turned to smoke and smithereens, and this pathetic journalist, whoever he is, speaking with the mentality of a 10-year-old who blows up his toy soldiers, reports in cold blood how people were turned to ashes by his favorite (“impressive”, yeah) military toys. Of course, for overgrown pre-teens like him, the SUV was not full of human beings, but of “al-Qaeda leaders” (as if he knew their ranks), of terrorists, sub-humans who aren’t worthy of living, who don’t have mothers to be devastated by their loss. Thinking of the enemy as subhuman scum to be obliterated without second thoughts was a typical attitude displayed by Nazis against Jews (and others) in World War II.
That's... quite a string of logic. He seems to know an awful lot about the mental process of that journalist.
As a critique of his general point: good general AI is dangerous (and useful) in so many ways that I don't see why he focuses so narrowly on humanoid carriers of weapons of mass destruction - hell, we already have those.
You are very right to question the interpretation of the subhuman argument. If someone has not been shot at or genuinely feared for their life at the hands of another human being, it is very idealistic to call the conversion of humans to subhumans by combatants petty. As someone who has been shot at and shot back, the reduction of an unquestionably hostile enemy to subhuman is very normal, if not necessary, for most members of a military, on both sides of a conflict. People who judge the hatred of religiously motivated enemies are both naive and living in walled gardens.
The fact that the OP can morally object to participating in the research is the perfect definition of ideology inside a protected environment. If he had ever needed a gun, for example, to save his life, he would not question the morality of its creator until he was once again safe from those who threatened him. I say "until" because people who question the need for violence have never experienced true hatred of violence. IMEO.
Pardon me, but... I think there are far worse dangers than "humanoid bombs". One of the main reasons is that to achieve a nuclear explosion you need a critical mass, and that is hard to conceal for a lot of reasons (radiation, etc.).
What's the difference with a car that could have a bomb in its trunk? Or a bag? A lot of scientists have wondered about these ethical questions, but I believe that the benefits of high-performance AI outweigh the downsides of its research.
BUT I definitely agree with this:
"Americans should grow up and abandon their juvenile-minded treatment of weapons, high technology, and the value of “non-American human life” (which, sadly, to many of them is synonymous with “lowlife”). This is the hardest part of my proposal."
*edit: And what about an android to dismantle the atomic bomb instead of humans? Sounds good to me!
Or, for that matter, a personal firearm?
> I think that there are way worse dangers than "humanoid bombs"
Yes, I am much more concerned about the scope for ubiquitous surveillance and systematic domination that even fairly modest gains in AI will allow. Something along the lines of the Emergency society in A Deepness in the Sky.
Maybe I'm just naive, but it seems to me that in a world where the power of the atom bomb can fit in a briefcase, there's no need for androids to get bombs within striking range of their intended targets.
Why would I waste time making an AI robot to carry my bomb when, for a lot less money and complexity, I could just control it remotely?
Does he realize how crazy he sounds? Some people become obsessed with an idea and start thinking that everything in the world is about them.
Have you ever been approached by someone on the street with a super important message to tell you, someone utterly obsessed with it? That's how he sounds - only more articulate.
I don't intend to be insulting when I say he should see a mental health professional.
"They’re in the remote possibility of building intelligent machines that act, and even appear, as humans. If this is achieved, eventually intelligent weapons of mass destruction will be built, without doubt."
Worrying about this strikes me as a bit daft when you can already convince actual humans to be your weapons delivery system.
It also shows some significant shortsightedness regarding scaling laws, which an AI researcher ought to have more experience with. A more legitimate worry would be basement-grade Predator drones: grenade-bearing quadcopters that use computer vision to track and target dense crowds are something technology can do now, rather than something that might optimistically happen in a few hundred years.
Definitely this. A terrorist with an engineering/chemistry/biology degree (or equivalent knowledge) could do a lot of damage in today's society. It's not hard to imagine if you let your mind wander.
(An explosive homemade UAV flown into a stadium would be pretty bad; it could come in from anywhere.)
I don't think he gets that security is probability-based (consequence * _likelihood_), and that you then concentrate on the factors you can control, like monitoring for people with intent, looking for known patterns, developing response plans, etc.
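To make that concrete, here's a toy consequence * likelihood comparison in Python (every number below is invented purely for illustration):

    # Toy expected-cost comparison; all figures are made up for illustration.
    scenarios = {
        "android-delivered nuke": (1e-9, 1e12),   # (rough yearly likelihood, cost in $)
        "conventional truck bomb": (1e-3, 1e8),
    }
    for name, (likelihood, cost) in scenarios.items():
        print("%s: expected yearly cost $%.0f" % (name, likelihood * cost))

On numbers like these the mundane scenario dominates the exotic one by two orders of magnitude, which is why you spend your budget on the threats you can actually monitor.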
Limiting the technology available is an exercise in futility, and has negative impacts on society to boot
What I find really ridiculous about this article is that the author is worried about just a single possible use of a world-changing technology. He is concerned that creating real artificial intelligence will allow for the possibility of someone building androids with nuclear bombs inside them, masquerading as humans - a very specific and frankly ridiculous idea, taken straight out of the movie Impostor or from Philip K. Dick's story of the same name.
In reality, the effects of building truly intelligent machines would be so vast, so utterly unpredictable, that worrying about one single possible use of the technology is absurd. Nothing has prepared us to deal with another fundamentally different intelligence on this planet, especially one that would soon outstrip our own. We don't know whether we would keep the AIs as our slaves, become their slaves, merge with them, or go extinct like the dinosaurs while they represent a new phase of evolution.
For more about the risks related to the rise of true AI, read this: http://yudkowsky.net/singularity/ai-risk
Excessively specific adjective: The average human has no particular regard for the life of the Other. An open-eyed view of both history and the world around you reveals that in spades. Calling out what we usually call the civilized world for not caring about the life of the Other is a major, major lamppost argument. The idea that one should care about someone else 10,000 miles away of another color and completely different culture is a striking and unusual attitude in human affairs.
(Since we ourselves are human, it can be easy to blip over the historical manifestations of these facts as just part of the natural order of things. So, as one exercise, if you have trouble understanding what I mean on a gut level, consider the stocks [1]. Consider what it means that in the middle of what was at the time the height of civilization, and the genesis of our own in the western world, these things not only existed, but stood in public places. And were used. I cannot truly internalize this, only observe it. And consider how often you've seen these and never thought about what they actually say about the culture they appear in, if you never have before. For those not of western civilizational descent, you can find your own examples; they are abundant in all cultures.)
Of course, actual examination and comprehension of this state of affairs won't necessarily leave you more confident about the likely outcomes... but it may make you reconsider the validity of letting someone else beat you to the research anyhow. Your influence towards humane usage is maximized by being on the cutting edge, not by being some guy over there yelling.
[1]: http://en.wikipedia.org/wiki/Stocks
Like many other posters, I find his specific worries a bit misplaced. However, I have had some reluctance to continue working on some of my own machine learning projects because I'm worried about the potential abuses of the technology.
I'm sure the field will get along just fine without me, of course, but I just felt like I was very likely to be asked to use ML skills to do things I felt weren't entirely ethical.
I think we're largely missing the point here. He's worried that his fundamentally harmless research will end up powering horrific weapons of mass destruction, enabling them to attack even more precisely and with more devastation. And quite frankly, I share his concern that if those weapons were developed, we would use them without thought or care. Apologies to my fellow American hackers, but America's got the rep for it, what with that one time they dropped a couple of nukes on unarmed men, women, and children, killing hundreds of thousands and levelling a couple of cities.
But, I digress: he's talking about androids sneezing us to death. I'm not going near a shop mannequin ever again.
The author's attitude that very few Americans are "intelligent, mature," and "[respect] life deeply" impeaches his opinions on both logic and geopolitical topics, as far as I'm concerned:
> It is typically Americans who display this attitude regarding hi-tech weapons. (If you are an American and are reading this, what I wrote doesn’t imply that you necessarily display this attitude; note the word “typically”, please.) The American culture has an eerily childish approach toward weapons, and also some outlandish (but also child-like) disregard for human life. (Once again, you might be an intelligent, mature American, respecting life deeply; it is your average compatriot I am talking about.)
I realise the internet is full of this, but I try my best to avoid it. I don't want to become immune to the shock.
The thought of this little guy's pain and suffering, and the idea that he was casually being used to back up an online essay, is really sad.
As others have mentioned, this specific concern may not be much of a problem. It might be that it's easier to deliver a nuclear bomb the old-fashioned way than to put it in a fake person.
However, I agree that development of AI should be done with caution. The work of the Singularity Institute is worth looking into; see http://commonsenseatheism.com/wp-content/uploads/2012/02/Mue... for a more academic summary and http://facingthesingularity.com/ for a longer popular summary of their positions.
Another "this is why I quit" + name_of_company doomsday letter. Instead of a company, he's quitting his research and university. We know why this starts: seeking fame. We know how this ends: forgotten.
A lot of people get tired of their dissertation research, and I've heard others contemplate contrived reasons not to finish their PhD.
This one happens to be especially far-fetched... but it takes a "big" reason to justify to yourself abandoning so much work.
I hope the author realizes that this particular scenario isn't one of the 1,000,000 biggest concerns for humankind... that he continues his research program, and that he finds an application of his research that has a positive impact in a much more likely scenario.
I think the most credible concern this post mentions is the general disregard in the United States (especially among those in charge of the military) for the long term implications of the indiscriminate use of A.I.-based warfare. Drones seem great for the U.S. now: they make it easier to kill enemies and don't directly endanger American lives. But in a decade or two when "enemy" nations start to develop them too, things get a whole lot more complicated.
Nonetheless, I think the general stance of the article is severely flawed. We cannot halt research in computer cognition because it has the potential to be weaponized (and dangerously so). As the author himself mentions, it would be akin to halting the development of the knife because people can use it to stab each other, or the development of the Internet because it makes it easier for criminals to communicate and organize.
Avoiding a potential advance in technology by doing things like cutting its funding, and hoping it will go away as a result, is never the solution to potentially dangerous development. One cannot stop the inexorable march of progress by "making a statement." The more valuable approach is to call out the dangers that the potential advance poses (as the post has done), and then work to develop an ethical framework within which the new technology can more safely exist.
The Singularity Institute has raised awareness of this broader issue in the past, as have several others, and is promoting the creation of "Friendly A.I." [1] to help address the problem.
[1]: http://en.wikipedia.org/wiki/Friendly_AI
See also this recent article: http://www.economist.com/node/21556234
There is no A.I.-based warfare - the drones are controlled by human pilots.
If you're upset about the topless happy healthy woman and not about the scenes of disfigured war victims above it, there's something wrong with you as a human.
> They’re in the remote possibility of building intelligent machines that act, and even appear, as humans. If this is achieved, eventually intelligent weapons of mass destruction will be built, without doubt.
We already have those. There are plenty of people willing to blow themselves up and take a bunch of others with them: http://en.wikipedia.org/wiki/Explosive_belt
As a non-American from a constitutionally neutral country, I think this is the equivalent of having people walk in front of trains with red flags. There are any number of ways to disguise a devastating weapon, or to deliver it undisguised, and evil is not a mere by-product of technical incapacity.