Reading the comments on this article makes me realize how unseriously most engineers and software developers are taking this.
These threats are real.
The weaponization of everything is actually happening. Since I wrote about it a month ago (Self-Crashing Cars), a number of people have reached out, including people with actual insight into the military aspect of it. Militaries around the world are preparing for true AI-enabled weapon systems, and they're building deterrence strategies for mass-casualty cyberattacks (including nuclear weapons response); whether it's hacked industrial plants or cars doesn't matter. They're actually talking about the weaponization of cars at the Munich Security Conference.
We need to stop burying our heads in the sand and write to our politicians about this threat. I know it sounds crazy, but it's real.
As an aside, my main complaint about the people who truly understand this is their inability / unwillingness to accept that subverting systems capable of mass destruction via cyberattack amounts to cyber weapons of mass destruction. We need to bring all the work / treaties / regulations / research we put into securing nukes to securing AI / robotics, and for the identical reasons. We know how this ends otherwise. We need widespread government funding, and we need to communicate what these things are in language that our governments understand. Refusing to say something that is true just because it sounds weird is counterproductive.
You can trust that people in the US military are taking this very seriously. I know because I, alongside LtGen Shanahan and Eric Schmidt, briefed Secretary Mattis personally about these issues; this is part of my ongoing work within the IC/DoD.
You're correct, though, that most line DL engineers don't have these issues top of mind. I don't know any ML researcher worth their salt who hasn't thought about them, but the tendency is to brush the concerns off until we're closer to more generalized systems.
That is not a wholly unreasonable position for many reasons, especially given the history of hype around AI, but I'd like to see more discussion happening among junior and mid-level engineers about these things - and especially more work being done on human-machine interfaces in the AI context, into which very little design thought has been put.
Here's the crux of the issue with respect to nuclear WMDs:
In order to prevent citizens and other countries from creating nuclear bombs, our government severely limits access to the relevant tools and materials, and actively seeks to keep the knowledge of how to build modern nuclear devices out of other parties' hands.
In order to prevent citizens and other countries from creating dangerous rogue AIs, our government _____________
What do you think goes in that blank?
> We need to assemble all the work / treaties / regulations / research we put into securing nukes into securing AI / robotics and for the identical reasons.
I disagree. I think we need a fundamentally different approach to AI than to nuclear weapons. The proliferation of nuclear weapons was controllable because developing a nuclear weapon required specialized, controllable physical goods and fairly recognizable industrial installations.
I do agree that we need international treaties prohibiting the development and use of AI weapons technology to avoid encouraging an arms race.
However, I think that trying to prevent the spread of AI tools and technology will face problems similar to the US's attempts to prevent the spread of encryption tools and technology. It is fundamentally harder to control the spread of information than the spread of physical goods.
I think you are absolutely correct. I think that people who are more entrenched in the field are even subconsciously unwilling to accept what is to come knowing that the unavoidable regulation will limit their creativity and just make their lives more miserable. But they know...
The disturbing thing about this paper to me - flashy though it may be - is what they left out rather than what they kept in.
OpenAI appears to only be thinking of the crimes-against-individuals segment of malicious AI, rather than the crimes-against-humanity type of malicious AI that the surveillance advertising corporations who are supporting OpenAI are building.
I am far, far less worried about an assassin's drones using AI to find a politician in a crowd than I am about Facebook using pictures of me that other people have posted and tagged me in, so that my face is used to track my movements, and the movements of every other human on the planet, everywhere we go, and selling that information to everybody who wants a copy, and giving it away at the request of the local police.
I'm more concerned about Google using AI to mine every conversation I've ever had or my browsing history to classify me as a dissident before I apply for a visa to travel to China or the United States, or as a deadbeat before I apply for a bank loan, or sick before I apply for insurance, or as unrehabilitatable before I apply for parole.
The hackers-on-steroids narrative is a smokescreen for fully automated corporate fascism.
> I'm more concerned about Google using AI to mine every conversation I've ever had or my browsing history to classify me as a dissident before I apply for a visa to travel to China or the United States, or as a deadbeat before I apply for a bank loan, or sick before I apply for insurance, or as unrehabilitatable before I apply for parole.
I'm quite paranoid about this, yet whenever I speak to people about it, they either don't care or already accept that it's happening and inevitable.
I think part of the problem is many of us already feel we've lost the battle for privacy. Although, I'm not sure we ever seriously attempted to fight for it. Every street in cities in the UK is full of CCTV cameras. The underground and buses track where you travel. Our internet is monitored and logged. This isn't a future problem that will manifest from greed and advances in AI; it's something we all accept and deal with today.
In fact, a lot of people will say to this, "if you don't do anything wrong you've got nothing to hide". They welcome it.
How is your concern different from what they are calling political security (~1/3 of the paper)?
> Political security. The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation. We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.
I took a sociology class in college called "Killing". Not once did we talk about "authoritative" or "legitimate" killings of any sort. It turned out that this class was specifically meant for Criminal Justice majors, people who were going on to be police officers and prosecutors. We never talked about war, the death penalty, genocide, eugenics, abortion, differential reproductive success, or famine - none of it. The whole class was about homicide.
I'm more concerned that AI will become a modern deity, in that it's considered science and ought not to be questioned. Things like predictions of academic success, criminal behavior, etc. that impact the rights of individuals.
> I am far, far less worried about an assassin's drones using AI to find a politician in a crowd than I am about Facebook using pictures of me that other people have posted and tagged me in, so that my face is used to track my movements, and the movements of every other human on the planet, everywhere we go, and selling that information to everybody who wants a copy, and giving it away at the request of the local police.
Definitely not a lawyer, but as I understand it, depending on jurisdiction and context, some social media postings may be considered "public" information volunteered without a "reasonable expectation of privacy" - in which case, awfully enough, anything goes...
Again, not a lawyer, but I wonder if there should be a right to clear-language, mandatory warnings - like on cigarette packs - whenever you are about to post something that will not enjoy a "reasonable expectation of privacy" (and hence could be sold or used against you in the future, etc.).
I found out Google has been logging my location since I got an Android in 2014. You can see what Google has on you here: https://www.google.com/maps/timeline
My guess is normal people don't know or care enough to turn it off.
>> The disturbing thing about this paper to me - flashy though it may be - is what they left out rather than what they kept in.
I agree 100%
I would feel a lot more comfortable if we had an objective, think-tank-like organization doing the research, instead of the actual companies who are developing AI and have a vested financial interest in steering the public in the direction they want in order to lower people's concerns.
I've had concerns about this for a long time. Many people simply discounted me as a conspiracy buff when I brought up the dangers of AI. Now? Not so much.
At least Google and FB are big enough targets, operating out in the open, that they can be crushed by governments if they throw their weight around too much.
The things that scare me are actors that don't have flashy names. The invisible marketing companies funded by lobbyists. Astroturfers. Russian propagandists.
I've always found the group's name amusing. "Open" AI has been disconnected from observed reality for ages. Intelligence always has information asymmetry of some kind.
I don't share your concerns at all. My daily feelings and activities wouldn't change whether Facebook and Google know about them or not. Contrast that to being killed by a drone.
Privacy is an interesting intellectual problem rather than a real one. If it were a real problem, people would change their use of those tools, but they don't.
You can argue that loss of privacy allows corporations to manipulate us, but I see only trivial effects. Look at how people vote and buy: I see more diversity and conclude less manipulation.
It may be superfluous to point this out but this seems to be talking about malicious uses of narrow AI, rather than malicious strong AI. The only defense against a malicious strong AI is, of course, a friendly strong AI.
I find it quite funny, rather intriguing, that we seem to have come full circle on trusted sources of information. Historically, a face-to-face meeting was considered the ultimate legitimate and trustworthy channel - not stories or rumors or witness accounts, since the courts say people can be "deceived", "traumatised", etc. Then came microphones, cameras, and CCTV in the 20th century, and they became the ultimate trusted sources of information.
And due to AI and its rapidly increasing misuse by enormous conglomerates, it will very soon be the case that videos are never trusted, but rather treated as comedic rumor and folklore, and we will go back again to how it always was.
...until replicants come.
I'm saddened that there are actual "smart" people who waste their days working on these malicious forms of AI - Google's almost entire arsenal, for example. I'm not surprised they do, but it is still sad.
Your comment reminds me of a silly little graph I saw posted to reddit once, basically stating that the prevalence of things like miracles and witchcraft was high throughout human history until the development of the camera, where it stayed low until the development of Photoshop.
> I'm saddened that there are actual "smart" people who waste their days to work on these malicious forms of AI, be it Google's almost entire arsenal, or anything. However, i'm not surprised they do, but it is still sad.
I am sure the usual justification, applied as salve to one's conscience for this sort of activity, is the trope that the 'bad guys' will do it anyway, so we need to do it before them to counter them and be the torch-bearer of liberty.
The atom bomb was developed upon that fear and pretext. Compared to that AI is a fairly mild thing.
I would suggest that we shouldn't ever rely solely on digital archiving of important information. There should always be a copy of the information in analog form that can be dated and verified with analog methods.
This is one feature a blockchain excels at. People have stored the Bitcoin white paper in the blockchain. Anyone can then download it, and verify it is the untouched original.
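A minimal sketch of the verification step described above, assuming the reader has downloaded the document and obtained the published digest from an independent, tamper-evident source (the bytes and digest here are stand-ins, not the real white paper):

```python
import hashlib

def verify_document(data: bytes, expected_sha256: str) -> bool:
    """Return True if the document's SHA-256 digest matches the published one."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Stand-in for the downloaded PDF; in practice `expected` would be the digest
# recovered from the blockchain record (or another independent source).
doc = b"example document bytes"
expected = hashlib.sha256(doc).hexdigest()

assert verify_document(doc, expected)              # untouched original
assert not verify_document(doc + b"x", expected)   # any tampering is detected
```

The blockchain only adds a timestamped, hard-to-alter place to publish the digest; the comparison itself is ordinary hashing.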
You can have a thousand companies that act fairly and don't use AI for malicious purposes, but there is that one company or community that doesn't... and then someone sends you gay porn with your face in it.
I'm not too worried about that. The moment that type of technology becomes widely available is the moment this type of blackmail loses all edge. You might even actually start doing gay porn IRL and people will assume that it's been "deepfaked".
The corollary is a little more worrying: any kind of incriminating document about a politician or public figure will be dismissed as a fake immediately. I mean, they already do that, but it'll become even harder to figure out what's real and what's not.
That "grab them by the pussy" tape? Obviously fake. I mean, you don't even see the guy talking, just the audio, how gullible can you be?
That girl running away from the napalm bombing? Obviously fake. I mean you're going to tell me that all of her clothes burned but she's still fit enough to run? Everybody around her wears clothes. Come on man, are you new here?
That Chinese guy standing in front of a military tank with groceries? Come on, I can do a more convincing fake in 10 seconds on my smartphone. There, look, I just did.
We have a brave new world ahead of us where you won't be able to trust anything you see or hear through any media, no matter how convincing it seems. That's pretty terrifying IMO.
I remember a while ago stumbling upon a conspiracy theory forum where people were claiming that a video of an interview with Julian Assange was a fake because there were a few strange visual artifacts around his face sometimes. Given that the quality of the video was very good and the oddities were rather minor (possibly encoding artifacts) I dismissed it as the usual tinfoil hattery.
I think in the future I won't be so sure anymore. I'm not sure if the technology to make such a good quality fake already exists but it's probably a matter of years before we get there. If some people with too much time on their hands manage to make somewhat convincing porn montages for free on the internet what can big three letter agencies do? What does the state of the art look like? What will it look like 10 years from now?
I really enjoy the example set by ClarifAI; the ability to search terabytes of video with the help of tags is going to be a very nice boon for any totalitarian regime in the future.
Evil startup idea gleaned from paper: use AI/ML to scour a sales prospect's online persona (social media), build a 'vulnerability profile', and generate targeted, personalized cold emails (or eventually even phone calls). It could also identify 'levers' for a particular person that can be used to influence a buying decision.
May even pre-qualify leads for you and tell you when not to waste your time :)
I mean, a good sales person already does a lot of this, but it's time-consuming. Imagine if you could automate this process.
Still reading the paper and forming an opinion, but my initial thought is: what exactly is new here that couldn't be done through some other means? I'm sure there will be interesting implications, but right now nothing seems particularly novel.
I found this to be a very fascinating read. I have heard of the use of ML to detect, for instance, spam or phishing emails, but I've never heard of attackers using ML models to generate phishing emails. How do you differentiate such a message from any other phishing attempt?
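For what it's worth, a defender generally can't (and doesn't need to) tell who or what wrote a phishing email: content filters score the message itself, regardless of its author. A toy bag-of-words Naive Bayes sketch of that idea, with an invented two-message corpus standing in for real training data:

```python
import math
from collections import Counter

# Invented toy corpus; a real filter trains on millions of labeled messages.
spam = ["verify your account now", "urgent wire transfer request"]
ham = ["meeting notes attached", "lunch on friday"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_odds(msg):
    """Naive Bayes log-odds of spam vs. ham; > 0 leans phishing/spam."""
    score = 0.0
    for w in msg.split():
        # Laplace smoothing so unseen words don't zero out the product
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

assert log_odds("urgent account verify") > 0   # leans spam
assert log_odds("meeting on friday") < 0       # leans ham
```

The open question the parent raises is whether ML-generated messages evade such content features better than human-written ones, not whether they can be singled out as machine-made.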
Thinking out loud, in the US, we have seen breaches of OPM, travel, healthcare and insurance companies where seemingly the only motive was to exfil data. Many of these attempts are attributed to state sponsored APT groups. Now that someone has all this data, the next potential move seems to be to train models over this data to understand habits and patterns, frequent locations and friends, and predict social and political leanings...
I only have limited knowledge of this subject, but all this sounds plausible, right?
It's nice to see a discussion of AI risk that addresses concrete scenarios. A lot of the forecasted doom (in other reports) resorts to handwavy arguments, but rarely goes into specifics. The examples they've given[0] seem plausible enough to me (except the persuasive ads one).
[0]: Persuasive ads, vulnerability discovery and exploitation, hacking robots (this one is only tangentially AI related), and AI-augmented surveillance.
I am wondering, does anyone have a survey or list of AI exploits or malicious actions carried out against production services or systems? For example, a misclassified image targeting an image-recognition system (such as ClarifAI)? I have only seen papers with theoretical attacks so far.
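Most published cases of the misclassified-image attacks mentioned here are lab demonstrations of "adversarial examples". The core trick is easy to show on a toy model; the fast-gradient-sign idea below uses an invented linear classifier and made-up numbers, not any real production system:

```python
import math

# Invented weights for a toy linear classifier: score > 0 means class "cat".
w = [0.5, -1.2, 0.8, 0.3]
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [1.0, 0.2, 0.5, 0.9]  # a benign input the model classifies as "cat"
assert score(x) > 0

# Fast-gradient-sign step: nudge every feature against the gradient's sign.
# For a linear model, the gradient of the score w.r.t. x is simply w.
eps = 0.9
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

assert score(x_adv) < 0  # a bounded per-feature change flips the label
```

Against deep networks the attacker computes the gradient by backpropagation instead, but the per-feature sign step is the same.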
It's out of the bag now. We just have to hope the blue team can defend against regimes where the best maths talent more likely ends up building military apps than doggy photo filters.
When AI becomes practically indistinguishable from a human, it will get really interesting finding ways and means to stop it from being used for conning.
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
https://arxiv.org/abs/1802.07228
- User profiling
- De-anonymization
- Mining and correlating data from purchased databases of user info
- "Pre-crime" predictions that influence real decisions
- Changing insurance rates, credit scores, and so on based on decisions of completely opaque AI systems that use data from unknown sources