The moral panic around deepfakes is hilarious to me.
Especially on a platform like Twitter, where a tweet of a screenshot of a headline with no source (that possibly has no source at all) will get thousands of upvotes and angry responses. That is much more alarming, and it's something that already exists today.
For example, every once in a while on r/PoliticalHumor you'll see a screenshot of a tweet that Trump didn't even write, yet everyone responding will take it at face value. Deepfakes are a red herring and a distraction from a ubiquitous phenomenon we might never solve.
Requiring deepfakes to have a disclaimer is, to me, like training people that it's safe to enter their credit card info on a website as long as they see the HTTPS lock icon in the address bar. Instead, people should be trained to be eternally vigilant and skeptical even if there is no "this is fake" disclaimer.
We're long past screwed.
You're right on the money here. It's a distraction from the underlying issue. Instilling critical thinking in the population might not be desirable from a politician's perspective, but it has reached the point of being a national security issue.
Plenty of handy tools for the aspiring fake tweeter too:
https://www.tweetgen.com/create/tweet.html
Very well said. We outsource our critical thinking at our own peril.
People are very good at believing what they want to, and the more we encourage them to turn off their skepticism because something has a little badge on it, the worse off we all are.
> The issue of what to do about them hit the spotlight in May when a video of House Speaker Nancy Pelosi (D-Calif.) that was heavily modified
From what I recall, one video was slightly slowed down, the other was just a montage of various clips joined together. I am not sure either constitutes serious modification or doctoring.
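For context on how small that edit is, here's a minimal sketch of the same kind of speed change (Python with the moviepy library; the 0.75 factor and the filenames are illustrative assumptions, not the actual edit anyone made):

    # Slow a clip to 75% speed; the audio slows down along with the video.
    # Factor and filenames are assumptions for illustration only.
    from moviepy.editor import VideoFileClip
    import moviepy.video.fx.all as vfx

    clip = VideoFileClip("original.mp4")
    slowed = clip.fx(vfx.speedx, 0.75)    # one parameter is the whole "edit"
    slowed.write_videofile("slowed.mp4")

That's the entire intervention, which is part of why "heavily modified" feels like a stretch.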
On the subject of the dangers of deepfakes, the most recent episode of The Blacklist addressed deepfakes in a storyline I found quite interesting.
Basically (spoilers ahead), this researcher creates a sentient AI and the AI promptly decides that sentient AI is a danger to humanity and tries to kill a few of the top AI researchers. Ok, kinda unrealistic.
The more realistic part? To get one of the AI researchers killed, a deepfake video is created of that researcher saying something along the lines of "over the years at X corp I've seen the worst of humanity, too much evil, it's time to end it all" accompanied by him strapping on a bomb vest. The video is released and everyone freaks out. He doesn't notice, goes to work, gets surrounded by cops while holding a small black device (his phone), and the police shoot and kill him thinking it's a detonator.
I'd always considered deepfakes in the context of making false political statements which could eventually be disproved. Worst case, a bunch of people think the wrong thing for a while. This use case of forcing a rapid response without time for validation or rebuttal is quite a bit scarier, and one I personally hadn't considered before.
>To get one of the AI researchers killed, a deepfake video is created of that researcher saying something along the lines of "over the years at X corp I've seen the worst of humanity, too much evil, it's time to end it all" accompanied by him strapping on a bomb vest. The video is released and everyone freaks out. He doesn't notice, goes to work, gets surrounded by cops while holding a small black device (his phone), and the police shoot and kill him thinking it's a detonator.
To be fair, I can imagine this happening in the US without a deepfake video. Just look at all the instances where the cops are called because of a "suspicious person" and end up shooting an unarmed civilian. Random example: https://en.wikipedia.org/wiki/Shooting_of_Charles_Kinsey
By attempting to police this, aren't they lending credence to the instances they fail to detect? "See, it's [not marked as fake / it's on Twitter]! It must be real."
Seems better that we all just adjust to the fact that we can’t trust what we see (we never could anyway).
I like the idea of not necessarily removing content just because an algorithm or group of people say it's a deepfake. Apply a label and let people make up their own minds.
Of course if there's no transparency to the process or a known way to contest being classified as a deepfake, this could lead to other problems. And is a work of performance art -- an actor who can do a spot-on impression of someone -- a deepfake if meant as art?
I worry that the moderation of deepfakes will only lead to deeper and deeper fakes, until no content on the internet can be trusted and nothing is left believable.
You seem to be under the impression that this is not already the case. Why is that?
They must fully specify how they categorize content as misleading.
Deceit predates computers: lies of omission and half-truths, misleading presentations of statistics (e.g. the ubiquitous pie chart of US federal spending that shows only discretionary spending). If they're setting themselves up as guardians, they should cover non-digital methods as well as deepfakes.
> Misleading altered media does NOT include photos and videos that are edited to remove blemishes or physical imperfections.
Yep. People use filters and image retouching a lot.
If Twitter wants users to pay attention to items of content which are particularly misleading, they need to avoid alert fatigue - i.e. these notices need to be rare and reliable.
I'd imagine that if any video of that guy who didn't kill himself's alleged clients somehow leaked, anyone powerful who might be identified in it would claim it was a deepfake.
I run a network of social sites and we've had this functionality for a couple of years, and the thing is people - especially Americans - just don't care.
They love being outraged, even when what they post is clearly labelled as fake they ignore it, and people commenting ignore it.
From the site's perspective there's not much more we can do without driving people away; if you try to police content too much, people will just go to another site.
But it does get pretty frustrating.
I find this too, and I'm fascinated by this aspect of our new online world. I wonder why and how. I'm guessing outrage is a way of feeling superior to other people: "How can they be so stupid? I'm so happy I'm smarter than them and know the truth!" (Trying to think whether this would apply to online public shamers -- I guess so.)
I'm guessing the loneliness, insecurities and FOMO created by online social networks have led to this. Although it was TV before that -- there are reports of Bhutan's society suffering negative effects after the introduction of TV in 1999 (e.g. http://news.bbc.co.uk/2/hi/entertainment/3812275.stm ).
The thing I find most worrying about deepfakes is the moral panic aspect. Although flagging them as shopped, along with the reasons for thinking so, would be unobjectionable and respectful "good citizenship".
Even the false positives would be good for both laughs and insights -- if, say, an art museum exhibit that screws with perspective or scale ends up flagged as fake.
There's more to worry about with screenshots made by modifying a bit of DOM and spreading 'deepfakery' like that, but then again no one cares anyway. Worrying about deepfakes is truly a pseudo-problem.
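In case it's not obvious how low that bar is, here's a minimal sketch of "modify a bit of DOM, then screenshot it" (Python with Playwright; the page and the h1 selector are placeholders standing in for a real tweet's text node):

    # Edit the visible text of a page in the DOM, then screenshot the result.
    # The stand-in page and selector are assumptions for illustration only.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/")   # stand-in for any tweet page
        page.evaluate("document.querySelector('h1').textContent = 'words they never said'")
        page.screenshot(path="fake.png")    # the PNG carries no trace of the edit
        browser.close()

The resulting image is indistinguishable from an honest screenshot.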
> we propose defining synthetic and manipulated media as any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.
'Meme' is pretty broad, so if it's literally just a screenshot of a faked tweet then yes, it would fall under this rule based on the broad language they are using. BUT I think that's still a good thing, as long as the label/warning is unobtrusive; at the same time, it needs to not be used on every meme/image, since then it would lose its effect.
I'm very curious how they are going to differentiate between normal image mashups (memes, etc.) and false-information alterations, at least without human oversight.
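For what it's worth, here's a naive sketch of one automated approach -- compare a perceptual hash of an upload against a known original and flag "near misses" (Python with the imagehash and Pillow libraries; this is speculation on my part, not anything the blog post describes):

    # Flag media that is close to, but not identical to, a known original.
    import imagehash
    from PIL import Image

    def looks_altered(upload_path, original_path, threshold=6):
        uploaded = imagehash.phash(Image.open(upload_path))
        original = imagehash.phash(Image.open(original_path))
        distance = uploaded - original      # Hamming distance between 64-bit hashes
        return 0 < distance <= threshold    # 0 = plain repost, large = unrelated image

The catch is that a harmless meme caption and a deceptive edit can both land in that "slightly different" band, which is exactly the differentiation problem.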
The blog post has a survey for feedback so I encourage everyone to leave some since this most likely impacts everyone at least indirectly.