Deepfake Offensive Toolkit (real-time deepfakes for virtual cameras)

556 points | draugadrotten | 3 years ago | github.com

320 comments

[+] Peritract|3 years ago|reply
> Authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.

Anything that can be created, will be created. However, that doesn't free you from all moral culpability. If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

I'm not saying that they shouldn't have created this, or that they don't have the right to release it. But to create it, release it, and then pretend that any misuse is entirely separate from you is at best naive.

[+] barnabee|3 years ago|reply
I’d argue there is a moral imperative to create and release tools like this as free and open source software so that anyone has access to them and can choose whether to use them, rather than only sophisticated and well resourced adversaries.

IMO the creators should feel good about their actions, even if they feel bad or apprehensive about the direction of the world because this technology exists at all.

[+] krageon|3 years ago|reply
This has a clear benign use. Of course it sucks that you can also use it in a hostile manner - but the fact that this tool is publicly available rather than hidden in the pocket of some unscrupulous blackhat means that every space that uses verification with these methods can now incorporate this type of testing. That's a net benefit for society.

I do think disclaimers like this are a little juvenile (it reeks of a US-ian litigation mentality), but you can easily imagine why they put it there. Perhaps instead of the author being less naive, you need to be more empathetic.

[+] RektBoy|3 years ago|reply
What do you do? IMHO, the only person being naive here is you.

People should be happy that there are still white hats who report exploits, even for no profit. I personally switched to the gray market; I can't take the shit we get from companies anymore.

This tool was released publicly only to bring awareness to the topic. Everybody else who needed to exploit this already has these tools, developed in private.

[+] ramblerman|3 years ago|reply
> If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

That's a dangerous precedent. Would you apply the same logic to a kitchen knife? Or, if for some reason only free products count (not sure why), to a pentesting tool?

I understand the underlying point you are trying to make, but what exactly are you proposing as an alternative? Who gets to decide which products fall into a gray zone while others are deemed only for bad use? We already see this kind of shoddy thinking leading to keeping DALL-E 2 out of the public's hands (or at least that is their claim).

[+] solsane|3 years ago|reply
I disagree. The moral responsibility really rests on the person who uses the tool.

For instance, I once cheated by using gcc/godbolt to generate assembly output for a class from C code. By this logic, Richard Stallman should be blamed for my misconduct.

There are any number of reasons for superimposing another face onto your own, many of which are simply good fun. If you choose to use this for scamming or perverted reasons, so be it.

Moral posturing aside, perhaps there could be an invisible watermark or something included by default to easily identify less technically inclined actors as users of this tool.
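The watermark idea above can be sketched in a few lines. This is purely illustrative: a toy least-significant-bit scheme in pure Python with a made-up `DOT` identifier. The tool itself embeds nothing like this, and an LSB mark would not survive video compression, so a real scheme would need to be far more robust:

```python
# Illustrative only: a toy least-significant-bit watermark, NOT anything
# the tool actually does. The identifier "DOT" is hypothetical.

WATERMARK = b"DOT"

def embed(pixels: bytes, mark: bytes = WATERMARK) -> bytes:
    """Hide `mark` in the low bit of successive pixel bytes (MSB first)."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)

def extract(pixels: bytes, length: int = len(WATERMARK)) -> bytes:
    """Read `length` bytes back out of the low bits."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for pixel in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (pixel & 1)
        out.append(byte)
    return bytes(out)

frame = bytes(range(32))  # stand-in for raw frame data
assert extract(embed(frame)) == WATERMARK
```

Changing the low bit perturbs each pixel value by at most 1, which is why such marks are invisible to the eye; it is also why any lossy re-encode destroys them.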

[+] Hendrikto|3 years ago|reply
If I stab someone with a knife, who is responsible?

The inventor of knives? The knife's manufacturer? The store that sold me the knife? I would say the responsibility lies 100% with myself.

I do not think it makes sense to pursue inventors for what happens to their creations, unless they actively encourage misuse.

[+] yieldcrv|3 years ago|reply
All I learned from this response is to just release tools like this on darknet marketplaces and maybe Telegram, and to forget the disclaimer.
[+] ospray|3 years ago|reply
The main value in releasing tools like this is to demonstrate weaknesses in our current security controls. A key weakness of biometrics is that there is no secret data. Open source tooling like this helps people understand that.
[+] wdroz|3 years ago|reply
> If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

I don't agree, but people will never reach consensus on this moral topic. So please don't call the other side "at best naive".

[+] ls15|3 years ago|reply
Unless we cut off the chain of responsibility somewhere, the creators of the programming language, the designers of the computer and its components, and the people who mined the required minerals are responsible as well.
[+] stef25|3 years ago|reply
What's the difference between this and Metasploit, sqlmap, ... ? Not saying you're wrong, it's just that while these tools have valid and legal uses (pentesting) they're also used in black hat scenarios and one could say the same about DOT.
[+] carnitine|3 years ago|reply
That’s a legal disclaimer, not a moral one.
[+] forgingahead|3 years ago|reply
We're actually super fortunate that the development of deepfake technologies has happened relatively out in the open, with source code, concepts, and pre-trained models often being readily shared. This allows for a broader understanding of what is possible, and then hopefully lets us develop ways for folks to inoculate themselves, or at least build some societal-level resistance to being hoodwinked. If this tech were only developed in secret, and used in a targeted manner, who knows what kind of large-scale manipulations would be underway.
[+] throw9871928|3 years ago|reply
I work at Axis, which makes surveillance cameras.[0] This comment is my own, and is not on behalf of the company. I'm using a throwaway account because I'd rather not be identified (and because surveillance is quite controversial here).

Axis has developed a way to cryptographically sign video, using TPMs (Trusted Platform Module) built into the cameras and embedding the signature into the h.264 stream.[1] The video can be verified on playback using an open-source video player.[2]

I hope this sort of video signing will be mainstream in all cameras in the future (i.e. cellphones etc), as it will pretty much solve the trust issues deep fakes are causing.

[0] https://www.axis.com/ [1] https://www.axis.com/newsroom/article/trust-signed-video [2] https://www.axis.com/en-gb/newsroom/press-release/axis-commu...
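The verify-on-playback flow described above can be sketched roughly as follows. This is a simplification, not Axis's actual format: a real camera signs with an asymmetric key held in the TPM and embeds the signature in the h.264 stream, whereas a keyed hash (HMAC) with a made-up shared secret stands in here just to show the shape of the idea:

```python
# Simplified sketch of signed video, NOT Axis's real scheme. A real camera
# uses an asymmetric TPM-backed key; HMAC with a hypothetical shared secret
# is used here only to illustrate the sign-then-verify flow.
import hashlib
import hmac

DEVICE_KEY = b"per-device secret provisioned at manufacture"  # hypothetical

def sign_frame(frame: bytes) -> bytes:
    """Camera side: tag each frame (or group of frames) so edits are detectable."""
    digest = hashlib.sha256(frame).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_frame(frame: bytes, tag: bytes) -> bool:
    """Player side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_frame(frame), tag)

frame = b"\x00\x01 raw h.264 access unit bytes..."
tag = sign_frame(frame)
assert verify_frame(frame, tag)
assert not verify_frame(frame + b" tampered", tag)
```

The reason real deployments use asymmetric keys is exactly the weakness of this sketch: with HMAC, anyone able to verify also holds the secret and could forge tags, whereas with a TPM-held private key the player only needs the public key.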

[+] yosito|3 years ago|reply
A few months ago, the IRS made me verify my identity using some janky video conferencing software where I had to hold up a copy of my passport. The software was so hard to use that I can't believe average people manage to do it. Now, real-time deepfakes are literally easier to create than the video verification software is to use. This will have interesting societal implications.
[+] mustyoshi|3 years ago|reply
I'm glad they released this.

I'm sick and tired of seeing big companies and orgs (Google is the most recent) publish an amazing application of ML but refuse to release the trained model because the model is biased and may be used in a bad way.

[+] sva_|3 years ago|reply
To those who ask about the ethics of releasing something like this, I'd say that this technology already exists, and bad actors probably already can get access if they really want to and are sophisticated enough. Making this available to the general public will spread awareness of the existence of such tools, and can then possibly have a preventive effect.
[+] blagie|3 years ago|reply
As someone with a stalker, I can't emphasize this enough. A stalker will go to all sorts of lengths to do bizarre shit. People don't believe it. I would guess governments will do some equivalent thereof.

Democratizing access to things -- including bad things -- has a preventative effect:

1) I can guard against things I know about

2) People take me seriously if something has been democratized

The worst-case scenario would be my stalker getting her hands on something like deepfake technology before the police / prosecutor / jury knew it existed. I'd probably be in jail by now if that had happened. She's tried to frame me twice before. Fortunately, her attempts were transparent. She'll try again.

Best-case scenario is that no one has access to this stuff.

Worst-case scenario is that only a select group has access, and most people don't know about it.

Universal access is somewhere in between.

[+] bko|3 years ago|reply
I agree with everything you said, but we shouldn't deny that opportunistic bad actors exist. This might get on their radar and be exploited. Open source tools also tend to be better maintained, documented, and more reliable, so the bad guys will have a better tool.

That being said, bringing it to light also has benefits like you said. If the tool is out in the open and state of the art techniques are used, technology to detect its use will also benefit.

[+] belter|3 years ago|reply
You are right. I saw some guy that looked like Matt Damon trying to sell me some crypto coins...
[+] roughly|3 years ago|reply
I'm reminded of Firesheep - https://en.wikipedia.org/wiki/Firesheep - which came out in 2010. It wrapped session hijacking on WiFi in an easily usable interface. The technique and the vulnerability weren't anything new, but the extension raised awareness in a big way and really sparked a push for getting SSL deployed, enabled, and defaulted everywhere.
[+] avivo|3 years ago|reply
Ease of access does matter.

It only buys time, but that can provide the time needed to create countermeasures and ideally make those very accessible—somewhat similar to responsible vulnerability disclosure.

This piece goes into more detail: https://aviv.medium.com/the-path-to-deepfake-harm-da4effb541... (excerpt from a working paper, part of which was presented at NeurIPs).

[+] gambler|3 years ago|reply
The entirety of deep fake technology was developed mostly in mainstream academia using "raising awareness" as an excuse. Paper after paper, model after model, repository after repository. Every single time the excuse was "if we don't do it, someone else will". This was going on for years and the explanation is absolutely laughable. Without countless human-hours put into this by academia, it's pretty obvious that this technology would be nowhere near its current state. Maybe some select military research agencies could develop something analogous. Currently this is accessible to literally every crook and prankster with internet access.

Also, the notion that "raising awareness" is going to prevent deepfakes from being used in practice shows complete and utter disconnect from reality. Most people who are skeptical are already aware of how eminently fakeable all the media really is. Most people who are still unaware will remain so, no matter how many GitHub repositories some dipshits publish.

[+] oliver910|3 years ago|reply
I agree that raising awareness that tools like this are possible is important, and that sufficiently advanced actors can do this anyway; however, I don't think releasing pre-trained weights to the general public is responsible in this case. This could probably be used to help bypass crypto exchange KYC for money laundering purposes. I'm not sure what the best access model is - email us with a good reason to get access to the weights, perhaps - but what alarms me is that there seems to be no consideration of misuse or responsible release at all.
[+] Karliss|3 years ago|reply
Even without deepfakes, any kind of system relying on a person (or computer) not being tricked by webcam video seems quite questionable. People could still be tricked with spliced video fragments of the real person, or with makeup, especially if the set of facial expressions used during the "liveness check" is known ahead of time.
[+] light_hue_1|3 years ago|reply
That's like saying that nuclear weapons exist, and bad actors can potentially get them, so let's lower the bar so that anyone can.

Making such tools accessible is reprehensible. It will lead to more bad actors, to less trust in media and in any objective reality, and more erosion of our institutions and society.

There is absolutely no reason whatsoever for this. It's unethical and frankly downright evil.

[+] wafriedemann|3 years ago|reply
Well, genius move by this guy. Create the threat, then sell the cure. The old-school business model we know from anti-virus software.

"I am Cofounder and CEO at Sensity, formerly called Deeptrace, an AI security startup detecting and monitoring online visual threats such as “deepfakes”." (one of the contributors of this repo)

[+] registeredcorn|3 years ago|reply
I'm really excited to see what could be done with this! I think the primary benefits of this being released are twofold:

1) It will give security researchers more freely available technology to work with in order to try and fight the malicious use of deepfakes. (I saw some interesting comments in this thread about TPM. It'd be interesting to see what other solutions are out there.)

2) It would raise the overall awareness of the general population about the existence and advancement of deepfake technology. I would argue that only a small subset of the overall population knows what the term "deepfake" means, and even fewer are aware of how far it has progressed in only a few short years. (I'm not super well versed in the topic myself, I just know that I've heard a lot of progress has been made.)

I think that since this tech is already actively being used by bad actors, the best course of action that we can take until at least a somewhat good counter to it has been adopted (and then quickly defeated) is to make as many people aware that this is something that could affect them, or their families. That this is something that could be used to get someone fired, or hurt, or killed. I think that the more that people are aware of its existence, the less impactful the overall effect of deepfakes becomes. People learn to look twice before making a call on something, because of how easy it has become to fake audio and video.

[+] SXX|3 years ago|reply
To all the people asking about the moral side of this tech and how it can be abused: what do you think about tools like Kali Linux? Nmap?

Basically any pentesting software can be abused.

[+] jacquesm|3 years ago|reply
It's not what we think about this, it is what the creators think about it that matters.
[+] giorgiop|3 years ago|reply
Hi there, one of the maintainers here. Thanks for sharing & AMA!
[+] madrox|3 years ago|reply
The moral lens people apply to deepfakes feels myopic to me. There are lots of tools that have been built that have arguably done more harm that no one talks about, like port scanning tools. Perhaps because the negative consequences are so visually obvious and require few intuitive leaps.
[+] momojo|3 years ago|reply
> Perhaps because the negative consequences are so visually obvious and require few intuitive leaps.

Exactly this. At some point it's not feasible to calculate the impact of a technology's trickle down. Unless the impact is large enough.

I'm not surprised people have such a visceral moral reaction to deepfakes.

> There are lots of tools that have been built that have arguably done more harm that no one talks about, like port scanning tools

I don't think this is a fair argument against current tools.

If we had some way of accurately measuring the secondary effects of port-scanners, should we start caring? Should we retroactively remove any tools that we later regret? (I don't have good answers to these either)

[+] xiphias2|3 years ago|reply
Governments and bank branches should offer physical identification as a (potentially paid) service globally, and provide a digital signature attesting that a person owns a physical device (which should be valid until the device is reset or invalidated by the person).
[+] Nowado|3 years ago|reply
I'm absolutely loving the idea of a deepfake-detection security SaaS developing an easy to use live deepfake system. The Crassus tradition is truly immortal.
[+] isaacfrond|3 years ago|reply
from the article:

> Real-time, controllable deepfakes ready for virtual camera injection. Created for performing penetration testing against e.g. identity verification and video conferencing systems, for the use by security analysts, Red Team members, and biometrics researchers.

Reminds one of the vulnerable world hypothesis. It's only a matter of time before a technology comes along that is both destructive and so simple any fool can use it.

[+] jl6|3 years ago|reply
Which requires more computation: a real-time deepfake or real-time deepfake detection? That will determine the future balance of power.
[+] throw457|3 years ago|reply
Deepfakes are for deceiving humans. There are still way too many artifacts in the outputs that are easy to detect in code.
[+] tgv|3 years ago|reply
You don't need to detect a deepfake in the same time as it takes to generate one, do you?
[+] scoutt|3 years ago|reply
I can imagine more scenarios other than "bad actors" (scammers and such) for this software...

For example: what about using this in an interview for a remote job (possibly intercontinental)? It could tip the balance in some situations (due to biases). For example, beating ageism by projecting a 25-year-old version of ourselves. Or removing a tattoo or birth defect (and possibly other changes too, you can imagine).

[+] lelag|3 years ago|reply
I predict Kitboga will have a lot of fun with this one...
[+] Lapsa|3 years ago|reply
oh my... happy porcupines