Twitter wants feedback on its proposed deepfakes policy

60 points | mikece | 6 years ago | arstechnica.com | reply

41 comments

[+] hombre_fatal|6 years ago|reply
The moral panic around deepfakes is hilarious to me.

Especially on a platform like Twitter, where a tweet of a screenshot of a headline with no source (and possibly no source itself) can get thousands of upvotes and angry responses, which is much more alarming and something that already exists today.

For example, every once in a while on r/PoliticalHumor you'll see a screenshot of a tweet that Trump didn't even write, yet everyone responding will take it at face value. Deepfakes are a red herring and a distraction from a ubiquitous phenomenon we might never solve.

Requiring deepfakes to have a disclaimer is, to me, like training people that it's safe to enter their credit card info on a website as long as they see the https lock icon in the navbar. Instead, people should be trained to be eternally vigilant and to stay skeptical even when there is no "this is fake" disclaimer.

We're long past screwed.

[+] ShorsHammer|6 years ago|reply
You're right on the money here. It's a distraction from the underlying issue. Instilling critical thinking in the population might not be desirable from a politician's perspective, but it's reached the point of being a national security issue.

Plenty of handy tools for the aspiring fake tweeter too:

https://www.tweetgen.com/create/tweet.html

[+] cheald|6 years ago|reply
Very well said. We outsource our critical thinking at our own peril.

People are very good at believing what they want to, and the more we encourage them to turn off their skepticism because something has a little badge on it, the worse off we all are.

[+] behringer|6 years ago|reply
To be fair, as a general rule, I assume all Trump quotes are real until proven otherwise.
[+] JohnJamesRambo|6 years ago|reply
Isn't that why it is even more important that deepfakes aren't on there?
[+] hart_russell|6 years ago|reply
Agreed. In general, it feels like modern America lacks critical thinking skills when it comes to discerning obviously fake information from real.
[+] tus88|6 years ago|reply
> The issue of what to do about them hit the spotlight in May when a video of House Speaker Nancy Pelosi (D-Calif.) that was heavily modified

From what I recall, one video was slightly slowed down, the other was just a montage of various clips joined together. I am not sure either constitutes serious modification or doctoring.

[+] nwalker85|6 years ago|reply
Yeah, calling that video a "deepfake" sounds like someone has no idea what they are talking about.
[+] rococode|6 years ago|reply
On the subject of the dangers of deepfakes, the most recent episode of The Blacklist addressed deepfakes in a storyline I found quite interesting.

Basically (spoilers ahead), this researcher creates a sentient AI and the AI promptly decides that sentient AI is a danger to humanity and tries to kill a few of the top AI researchers. Ok, kinda unrealistic.

The more realistic part? To get one of the AI researchers killed, a deepfake video is created of that researcher saying something along the lines of "over the years at X corp I've seen the worst of humanity, too much evil, it's time to end it all" accompanied by him strapping on a bomb vest. The video is released and everyone freaks out. He doesn't notice, goes to work, gets surrounded by cops while holding a small black device (his phone), and the police shoot and kill him thinking it's a detonator.

I'd always considered deepfakes in the context of making false political statements, which could eventually be disproved. Worst case, a bunch of people think the wrong thing for a while. This use case of forcing a rapid response without time for validation or rebuttal is quite a bit scarier, and one I personally hadn't considered before.

[+] gruez|6 years ago|reply
>To get one of the AI researchers killed, a deepfake video is created of that researcher saying something along the lines of "over the years at X corp I've seen the worst of humanity, too much evil, it's time to end it all" accompanied by him strapping on a bomb vest. The video is released and everyone freaks out. He doesn't notice, goes to work, gets surrounded by cops while holding a small black device (his phone), and the police shoot and kill him thinking it's a detonator.

To be fair, I can imagine this happening in the US without a deepfake video. Just look at all the instances where the cops are called because of a "suspicious person" and end up shooting an unarmed civilian. Random example: https://en.wikipedia.org/wiki/Shooting_of_Charles_Kinsey

[+] enneff|6 years ago|reply
By attempting to police this, aren't they lending credence to the instances they failed to detect? "See, it's [not marked as fake / it's on Twitter]! It must be real."

Seems better that we all just adjust to the fact that we can’t trust what we see (we never could anyway).

[+] sundvor|6 years ago|reply
That's all well and good for people on HN, but I fully believe the general public needs a lot more help.
[+] mikece|6 years ago|reply
I like the idea of not necessarily removing content just because an algorithm or group of people say it's a deepfake. Apply a label and let people make up their own minds.

Of course, if there's no transparency to the process, or no known way to contest being classified as a deepfake, this could lead to other problems. And is a work of performance art -- an actor who can do a spot-on impression of someone -- a deepfake if meant as art?

[+] xwdv|6 years ago|reply
I worry that the moderation of deep fakes will only lead to deeper and deeper fakes, until no content on the internet can be trusted, and nothing left believable.
[+] bgun|6 years ago|reply
> until no content on the internet can be trusted, and nothing left believable.

You seem to be under the impression that this is not already the case. Why is that?

[+] jejones3141|6 years ago|reply
Two things:

They must fully specify how they categorize content as misleading.

Deceit predates computers: lies of omission, half-truths, and misleading presentations of statistics (e.g. the ubiquitous pie chart of US federal spending that only shows discretionary spending). If they're setting themselves up as guardians, they should cover non-digital methods as well as deepfakes.

[+] Miner49er|6 years ago|reply
From the survey:

> Misleading altered media does NOT include photos and videos that are edited to remove blemishes or physical imperfections.

[+] jka|6 years ago|reply
Yep. People use filters and image retouching a lot.

If Twitter wants users to pay attention to items of content which are particularly misleading, they need to avoid alert fatigue - i.e. these notices need to be rare and reliable.

[+] rpmisms|6 years ago|reply
I can see a lot of trolling based on this rule; e.g., replacing Amy Schumer's face would fall under this exception.
[+] zitterbewegung|6 years ago|reply
What if the tweets themselves are synthetic? Would those be deleted or is that already covered somewhere?
[+] narrator|6 years ago|reply
I'd imagine if any video of that guy who didn't kill himself's alleged clients leaked somehow, anyone powerful who might be identified in a video would claim it was a deepfake.
[+] new_guy|6 years ago|reply
I run a network of social sites and we've had this functionality for a couple of years, and the thing is people - especially Americans - just don't care.

They love being outraged. Even when what they post is clearly labelled as fake, they ignore it, and the people commenting ignore it too.

From the site's perspective, there's not much more we can do without driving people away; if you try to police content too much, people will just go to another site.

But it does get pretty frustrating.

[+] netsharc|6 years ago|reply
> They love being outraged

I find this too, and I'm fascinated by this aspect of our new online world. I wonder why and how. I'm guessing outrage is a way of feeling superior to other people: "How can they be so stupid? I'm so happy I'm smarter than them and know the truth!" (Trying to think whether this would apply to online public shamers; I guess so.)

I'm guessing the loneliness, insecurities, and FOMO created by online social networks have led to this. Although it was TV before that: there are reports of Bhutan's society suffering negative effects after the introduction of TV in 1999 (e.g. http://news.bbc.co.uk/2/hi/entertainment/3812275.stm ).

[+] Geee|6 years ago|reply
Maybe it's even more dangerous that a photo or a video can no longer be used as proof in court. Deepfake technology gives plausible deniability.
[+] est|6 years ago|reply
I think deepfakes will force us to think deeper into words and meanings, not just familiar faces. It's an inevitable invention.
[+] Nasrudith|6 years ago|reply
The thing I find most worrying about deepfakes is the moral panic aspect. Although flagging them as shopped, and explaining why they think so, would be unobjectionable and respectful "good citizenship".

Even the false positives would be good for both laughs and insights if, say, an art museum exhibit that screws with perspective or scale ends up being flagged as fake.

[+] 9HZZRfNlpR|6 years ago|reply
There's more to worry about with screenshots made by modifying a bit of DOM and spreading "deepfakery" like that, but then again, no one cares anyway. Worrying about deepfakes is truly a pseudo-problem.
[+] crb002|6 years ago|reply
Every meme photoshop now needs a label?
[+] KingMachiavelli|6 years ago|reply
From the original twitter blogpost:

> we propose defining synthetic and manipulated media as any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.

"Meme" is pretty broad, so if it's literally just a screenshot of a faked tweet, then yes, it would fall under this rule based on the broad language they are using. But I think that's still a good thing as long as the label/warning is unobtrusive; at the same time, it must not be applied to every meme/image, since then it would lose its effect.

I'm very curious how they are going to differentiate between normal image mashups (memes, etc.) and false-information alterations, at least without human oversight.

The blog post has a survey for feedback so I encourage everyone to leave some since this most likely impacts everyone at least indirectly.