
Here Come the Fake Videos, Too

84 points | ezhil | 8 years ago | nytimes.com

100 comments

[+] tbabb|8 years ago|reply
The era of verifiability-- when photographic and video evidence could plausibly be trusted-- is over.

Photos have been dubious since Photoshop, but faking them still required expertise and artistry. Fake moving pictures required considerably more extensive expertise, lots of time, and usually specialized equipment.

None of that is true anymore. We are back to a pre-industrial age of discourse, where rumor and hearsay and anecdote dominate again. Buckle up.

[+] sh33mp|8 years ago|reply
Is this true though?

Sure, for a casual observer, these new methods for generating videos appear convincing, but is that the right bar to judge "ability to fake evidence"?

As far as I know, there have always been more sophisticated techniques and forensics to determine if an image is doctored, and likewise for video. I've not seen any research tackling fooling those methods yet, and I would bet that naive implementations of neural networks for generating videos would leave very obvious "neural network" artifacts. Of course, this is still new technology, so it will obviously get better at fooling our other tools over time too, but as of right now, I don't think the clamoring for "all evidence can now be faked" is all that justified.

[+] golergka|8 years ago|reply
FYI, the Russian propaganda machine routinely uses footage from video games and movies as "documentary evidence" in TV broadcasts. [1] Despite reports debunking these "proofs" appearing within a day or so, there's no backlash and no retractions. Hell, even the Russian defense ministry used insanely crude photoshopped images in its reports, and still, no backlash at all. [2]

My point is, if you just want to brainwash the population, these new technologies are overkill. People aren't generally smart enough to require this sophistication from a fake.

[1]: https://gizmodo.com/russian-state-tv-airs-video-game-clip-as...

[2]: https://www.thedailybeast.com/kremlin-falls-for-its-own-fake...

[+] Steve44|8 years ago|reply
Nikon used to provide "Image Authentication Software" which would authenticate images and show if they had been altered since taken by the camera. This was useful not only in news but also court cases.

This, obviously, could not be used on tweaked published images but would work if you had access to the original frame. Most photojournalists work to a code whereby they don't amend the images at all, even tidying up, but that's only viable if they and their editors are trustworthy.

The Nikon process was cracked a few years ago. This allowed edited images to pass the validation.

[+] gnbfulbvgjbvv|8 years ago|reply
No, this is an opportunity for new technology that adds additional data stream(s) to audio/video to ensure authenticity. Perhaps something like a one way hash of physical properties related to time/position and the rasterized data, in a way that cannot be faked.. or maybe a move away from rasterized data. Idk. Just brainstorming.

I’m confident there will be an innovation in this space since there is clearly a very urgent need. In fact, this could be a start-up opportunity. Maybe there is relevant academic research already on which a startup idea could be built.
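
The "one-way hash of time/position plus the rasterized data" idea sketched above can be illustrated in a few lines. This is a toy sketch under stated assumptions: the metadata field names and the single-frame scope are illustrative, not any real camera protocol, and it presumes the capture hardware itself is trusted.

```python
import hashlib
import json

def frame_digest(pixels: bytes, timestamp: str, gps: tuple) -> str:
    """Bind raw frame bytes to capture metadata in a single SHA-256 digest.
    Editing either the pixels or the metadata yields a different digest."""
    metadata = json.dumps({"time": timestamp, "gps": gps}, sort_keys=True).encode()
    return hashlib.sha256(metadata + pixels).hexdigest()

# Hypothetical frames: same metadata, one pixel buffer altered after capture.
original = frame_digest(b"\x00\x01\x02", "2018-03-04T12:00:00Z", (52.52, 13.40))
edited = frame_digest(b"\xff\x01\x02", "2018-03-04T12:00:00Z", (52.52, 13.40))
assert original != edited  # edited pixels no longer match the digest made at capture
```

Note the scheme only helps if the digest is published somewhere tamper-evident at capture time; binding the hash to physical reality, rather than to whatever bytes the device was fed, is exactly the hard part.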

[+] sdrothrock|8 years ago|reply
> The era of verifiability-- when photographic and video evidence could plausibly be trusted-- is over.

If ease of producing convincing fakes of X means that the era of X as convincing proof is over, then why are signatures and written documents still taken as proof?

I think it just means that things will be scrutinized more carefully from now on, and that there will need to be a stronger document trail.

> We are back to a pre-industrial age of discourse, where rumor and hearsay and anecdote dominate again.

And I definitely don't think it's as grim as this.

[+] krapp|8 years ago|reply
It's not as if people were fact checking CNN and Fox or running forensic analysis to scrupulously determine the plausibility of the stories in their newsfeed to begin with.

As far as society and its trust (or lack thereof) in digital media is concerned this won't change much. Most people believe what they want to believe, and doubt what they don't, trust the in group and doubt the out group.

[+] x2398dh1|8 years ago|reply
Plausibly be trusted in what context? In a court of law? Or in the court of public opinion?

I think the general hypothesis among mass communication scholars is that as soon as a technology is created and adopted by a mass audience, humans immediately start using it to manipulate public opinion in some way. For example, as soon as telegrams became a means of swaying public opinion (like the Zimmermann telegram), fake telegrams became a thing. As soon as online video became a thing, fake online videos became a thing (like Lonelygirl15).

Isn't what you're really saying, "humans like to create fake things, and then a lot of people fall for those fake things"? Why should we buckle up for that? Isn't it already a foregone conclusion?

[+] hinkley|8 years ago|reply
There are companies that make cameras that sign the images. I believe they are only used for legal proceedings now but I’m sure we could expand their use easily enough.

Any 'shopping would destroy the signature, but that might be a good thing for dragging us back to reality.

[+] toomanybeersies|8 years ago|reply
On cursory inspection, deepfakes may pass muster. But on close observation it's still relatively easy to tell they are fake.
[+] justonepost|8 years ago|reply
Yeah, this is an interesting issue, but CCTVs are kind of invasive anyways privacy wise so maybe that's a good thing.

They're still useful for realtime monitoring though, which is good for security.

[+] justonepost|8 years ago|reply
I've always felt that deepfakes are a net positive. Think of all the revenge videos that will be rendered moot. The Jennifer Lawrences of the world can stop stressing because nobody will take any of it seriously anymore.

It's kind of like those pornographic clones of popular kids cartoons. People freaked out at first, but really you don't hear much about it these days.

Sort of like an anti-virus inoculation when you think about it.

[+] toomanybeersies|8 years ago|reply
I think you're looking at it the wrong way. The issue of leaks and revenge porn isn't the nudity. It's the breach of privacy and trust.

Trying to claim that leaked photos are fake doesn't change the fact that they aren't. It doesn't make you feel any better knowing that there's intimate photos of you out there, whether leaked by a jealous ex or stolen by a hacker.

A lot of these people that have had intimate photos or videos of them leaked have actually posed naked for publications, and they've almost certainly been naked around strangers as part of their job. It's not the nudity that's the issue here.

This is especially true of revenge porn. I don't really care if someone sees pictures of my dick, plenty of people have seen it. I'd be more upset that someone I trusted broke that trust in just about the worst possible way.

[+] bambax|8 years ago|reply
Yes, I don't quite understand the fear / excitement / fuss. We've had Photoshop for decades; and for instance, nobody believes models actually look like how they appear on the cover of magazines.

We know photos can be doctored. Now it's coming to video: what's the big deal?

A document, a quote, a rumor, any piece of information really, has to be evaluated in context; is it plausible that Michelle Obama would strip on camera? It's so far out of the realm of possibilities that there really isn't anything to debate.

Some people like to believe in conspiracies and unfortunately there's little we can do about it; the videos of 9/11 were real but conspiracists will insist they were fake and there's no convincing them otherwise...

[+] pjc50|8 years ago|reply
No, it means all the weaponised shaming can be applied to people who've never even taken nude photos.
[+] IkmoIkmo|8 years ago|reply
We're already getting close to the point where it's technically feasible to enter a Facebook profile and press a button to have a sex tape created from a mix of FB photos and porn websites, for any of a billion people. That's a problem. The fact that you can tell your parents, colleagues, or school friends how easy it is to do, and that it's not really you, is not a net positive if you ask me.
[+] nukeop|8 years ago|reply

[deleted]

[+] runeks|8 years ago|reply
Anyone else rather unimpressed by the realism of these fake videos? I mean, technologically it’s cool and all, but it seems more like a proof of concept than anything that can be used to fool humans.

All the videos I’ve watched — out of technical curiosity, naturally — have had some sort of glitch that made it obvious they were fake. I think this technology will have serious problems just mapping the facial expressions of one person onto another, since many people have their own distinct facial expressions.

The linked YouTube channel with the Putin video[1] is a good example: it looks completely unrealistic because the actor in the source video makes facial expressions Putin would never make.

In my personal opinion, I think it will take decades before this technology becomes good enough to fool humans, and probably longer before it can fool humans closely related to the subjects of the fake videos — if this ever becomes possible at all. The fundamental challenge is mapping the emotions of one person to another’s, which isn’t easily solvable. Just mapping the facial features of Putin onto SNL’s Beck Bennett isn’t going to convince anyone familiar with how Putin looks and acts.

[1] https://m.youtube.com/watch?v=hKxFqxCaQcM

[+] adamnemecek|8 years ago|reply
Here’s the thing: they don’t need to convince you. Even five percent of people believing this is enough. Bombard them with it nonstop and legitimately warp their reality so that they believe it. This vocal minority can influence quite a bit of the rest of society.
[+] mattmanser|8 years ago|reply
1. The tech is only going to get better.

2. Hire an actor with a similar build/face to Putin and have him act like him. Film what you want, deepfake it, done.

[+] jcims|8 years ago|reply
Eh - https://vimeo.com/257360045

Definitely not perfect, but this is hobbyist grade work. With the recent work around parallelized WaveNet synthesizing 10 seconds of audio for every second of wall clock time, a live fake that fools 50% of regular people is probably a couple of years away at most. Particularly if you can control the setting to ensure lighting/angles/etc match up reasonably well.

[+] justonepost|8 years ago|reply
Some of them are pretty incredible — you cannot tell at all that they are fakes. I think it depends on how closely the face matches the person being faked and how generic the lighting is. If you scroll down in the OP article you'll see some impressive examples.
[+] iopuy|8 years ago|reply
Since reddit banned deepfakes, including SFW content, does anyone know where the community is now congregating? The appeal of the technology from online avatars to cheaper cgi is undeniable. Is this a case of throwing out the baby with the bathwater?
[+] skc|8 years ago|reply
I live in and work in a small African country with very corrupt leadership. This stuff disturbs me ever so deeply. I can easily see a future where despotic governments use this technology to wipe out their detractors.
[+] Choco31415|8 years ago|reply
10+ years ago we started getting the technology for facial landmark detection, and now we have face swapping. Currently we have the beginnings of good full-body pose detection, and I imagine we’ll eventually have full-body swapping (maybe along with clothes).

That raises interesting questions. As legitimate looking sources become harder to trust, what other ways can we verify them? One idea that was floating around is key signing each datafile. That raises the question though of how to manage keys. [ Maybe have each key tied to a digital id, the id similar to Estonia e-residency? ]

At low levels of risk, like a recorded automobile accident, is such scrutiny useful?

Thoughts everyone?

[+] andreascmj|8 years ago|reply
Couldn't blockchain be a great tool here? The original hash is uploaded to the blockchain at the time of the recording, then you can verify that nothing has changed since then.
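
Whatever ledger stores the hash, the verification step itself is just a digest comparison. A minimal sketch, assuming the digest was published somewhere append-only at recording time (the blockchain part is out of scope here):

```python
import hashlib

def record(video: bytes) -> str:
    """At recording time: compute the digest to publish to an append-only log."""
    return hashlib.sha256(video).hexdigest()

def verify(video: bytes, published: str) -> bool:
    """Any time later: recompute and compare against the published digest."""
    return hashlib.sha256(video).hexdigest() == published

digest = record(b"original footage")
assert verify(b"original footage", digest)   # untouched bytes check out
assert not verify(b"edited footage", digest) # any edit breaks the match
```

This proves the bytes haven't changed since the digest was published; it says nothing about whether those original bytes were authentic in the first place.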
[+] gnbfulbvgjbvv|8 years ago|reply
Posted in a sibling thread already, but what about supplemental data streams related to environmental data (to verify time and position) that cannot be faked, somehow combined with the rasterized/sampled a/v in an irreversible way, or a new technology that doesn’t use traditional pixelized video or sampled audio that can’t be faked?

Surely there will be an innovation since there clearly is a need for this.

[+] dtech|8 years ago|reply
I don't believe a signing solution is feasible. It's currently too complicated for even a lot of IT experts, and the number of actors involved is too large to ever converge on a single unified solution.

Given that as of now authenticity can still be verified by experts, it's back to trusting journalistic outlets to properly verify news and sources.

[+] robgurley|8 years ago|reply
I think the effects of "fake" journalism would be mitigated somewhat if we didn't have laws against slander, libel, and false advertising.

The media-consuming public in the United States still believes that "if it is in print, it must be true" - they haven't been inoculated against falsehood like they would have been otherwise. Presumably, if there were no expectations of truth in print/media to be enforced by some magical (and actually sort-of non existent) federal authority, media outside the "trusted" sources would be automatically suspect unless reviewed by some other trusted third party.

[+] thesehands|8 years ago|reply
There have been more of these fake video stories recently. Without wanting to get bogged down in politics, I have wondered if these stories are being ramped up to provide some plausible defence to possible 'tapes' mentioned in the Steele dossier? Not necessarily a legal defence, but enough to cast some doubt as to the legitimacy in the media
[+] EGreg|8 years ago|reply
What about just having videos be watermarked and signed by their authors as well as the equipment used?

Then you could trust their authenticity, no?

[+] yorwba|8 years ago|reply
You could trust the authenticity of the fact that someone used the author's key to sign it at some point in time.

You could not trust any claims the author makes about the circumstances under which the signed object was produced. They could have put their signature on a deepfake. They could have put their signature on the work of someone else. The author could have lost their signing key. It could have been created much earlier than it was signed.

A signature tells you very little about the thing being signed besides the fact that it was signed.
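
The point holds for any signature scheme: signing is a function of the key and the bytes alone, so a deepfake signs exactly as cleanly as genuine footage. A toy illustration, using HMAC from the Python standard library as a stand-in for a real public-key signature (assumption: the "camera key" here is just a shared secret for brevity):

```python
import hashlib
import hmac

CAMERA_KEY = b"hypothetical-key-baked-into-the-camera"

def sign(data: bytes) -> bytes:
    """Produce a tag over whatever bytes the key holder chooses to sign."""
    return hmac.new(CAMERA_KEY, data, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the data."""
    return hmac.compare_digest(sign(data), tag)

# Both verify identically: the signature proves the key was used, nothing more.
assert verify(b"genuine footage", sign(b"genuine footage"))
assert verify(b"deepfake footage", sign(b"deepfake footage"))
```

In other words, a valid signature narrows the question from "is this real?" to "do I trust whoever holds this key?", which is still a trust problem, not a cryptography problem.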

[+] justifier|8 years ago|reply
Ghost in the Shell addressed this issue in 2005(o)

The Tachikoma units are debating how to stop a nuclear strike and one suggests that broadcasting a live feed of the nuclear sub would help, but the idea is reasoned against due to the technological capabilities to fake such a feed:

"Pictures don't prove anything anymore. It would just end up as a source of amusement for the uninvolved masses, an image from an unknown source that showed up at an all too convenient time."

Beyond this inflection point, one must now trust both the content and the source.

(o) https://youtu.be/yAoj3AskFMI

[+] zabana|8 years ago|reply
Fake videos are nothing new. For those interested, look into how the BBC was (and still is) using footage from the first war in Afghanistan to illustrate its coverage of the second one, or how CNN used footage from an Indian porn movie to accuse Pakistani soldiers of rape. I'm sure there are many more examples of such misuse of videos...
[+] 32409280428|8 years ago|reply
Except now it can be used to target specific individuals for blackmail...
[+] foxhedgehog|8 years ago|reply
The Kodak blockchain announcement might seem less ridiculous in light of this kind of thing.
[+] fooker|8 years ago|reply
Why are we not using digital signatures for everything?
[+] asdfaf13123|8 years ago|reply
This is deeply frightening. I could easily imagine this being used as a political tool to incite hate and racism online. I fear it will be used as "proof" that an event happened.
[+] anoplus|8 years ago|reply
We will probably need AI based fake news detection.
[+] EGreg|8 years ago|reply
That won't work. It's like saying to beat AlphaGo we just need AlphaGo.

It would already be factored in. The arms race would just make the videos MORE indistinguishable from the real thing.

[+] HeyWolfey|8 years ago|reply
Step 1.) Work in Silicon Valley

Step 2.) Desire an advantage over your peers and competition

Step 3.) Notice political hysteria and how it affects the behavior of your rivals.

Step 4.) Deepfake a rival's face into a Nazi rally and send it to his professional network.

Step 5.) Rinse and repeat for every rival you encounter. It's not like denying being a Nazi supporter or crying foul play ever assuages paranoid suspicion.

Congratulations, you can now destroy the career of any worker in Silicon Valley with impunity.

[+] PeterisP|8 years ago|reply
Where's the impunity? The described actions can and will be prosecuted as libel; US libel laws are comparatively permissive in order to facilitate free speech, but what's described above is not protected.
[+] salvar|8 years ago|reply
This sounds serious. How many people have been accused of being Nazi supporters in this way?