rnimmer | 1 year ago
"Built-in checks prevent processing of inappropriate content, ensuring legal and ethical use."
I see it claims to not process content with nudity, but all of the examples on the website demo impersonation of famous people, including at least one politician (JD Vance). I'm struggling to understand what the authors consider 'ethical' deepfaking? What is the intended 'ethical' use case here? Of all the things you can build with AI, why this?
instagraham|1 year ago
FT had a fantastic podcast on the porn industry and the guy behind MindGeek. Like many stories about multinational entities, you constantly hear the usual refrains: no one can regulate this, the entities keep changing their name and face, there is no accountability, etc. But when Visa and Mastercard threaten to pull their payments, the companies have to listen.
Visa and Mastercard are the de facto regulators of porn today, and mostly act to prevent nonconsensual and extreme fetish material from being displayed on mainstream platforms.
From what I gathered from the podcast, they're not super keen on being the regulator - but it's a dirty job and somebody has to do it.
reaperducer|1 year ago
That can be asked of 90% of what's come out of the latest AI bubble so far.
Like a lot of technology, AI has so much potential for good. And we use it for things like games that simulate killing one another, or making fake news web sites, or pushing people to riot over lies, or making 12-year-olds addicted to apps, or eliminating the jobs of people who need those jobs the most, or, yes, pornography.
We can do better.
parineum|1 year ago
Like what?
tdeck|1 year ago
ithkuil|1 year ago
If the technology is actually made widely available, that just reveals that Pandora's box was already open.
godelski|1 year ago
If you're unwilling to recognize the benefits of something, it becomes easier to dismiss your argument. Instead, the truth lies in balancing trade-offs and benefits. Certainly there is a clear and harmful downside to this tech. But there are benefits. It saves a lot of money for the entertainment industry when you need to edit or do retakes. The most famous example might be Superman[0].
The issue is that when the downsides become easy to dismiss, it becomes easy to get lost in the upsides. It'll get worse because few people consider themselves unethical. We're all engineers, and we've all fallen for this trap in some way or another. But we also need to remember that the road to hell isn't paved with malicious intent...
[0] https://www.youtube.com/watch?v=2nxanN85O84
godelski|1 year ago
I too have ethical concerns. There are upsides, though. It is a powerful tool for image and video editing (for swapping, you still need a generator on the backbone)[0]. It is a powerful tool for compression and upsampling (your generative model __is__ a compression of (a subset of) human faces, so you don't need to transmit the same data across the wire). It is easy to focus on the upsides and see the benefits. It is easy not to spend as much time and creative thinking on malicious usages (you're not intending to use or develop something for malicious acts, right?!). But there are two ways to determine malicious usages of a technology: 1) you emulate the thinking of a malicious actor, contemplating how they would use your tool, and 2) time.
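The compression point above can be made concrete with a toy sketch. The idea: when both ends of a video call share the same face model, the sender only transmits a small latent code per frame and the receiver decodes it locally. The "decoder" below is a stand-in (a fixed random linear map, not a trained generator), and all the sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_SHAPE = (64, 64, 3)   # raw frame: 12,288 values
LATENT_DIM = 128            # transmitted code: 128 values

# Shared decoder weights: both ends hold these. They cross the wire
# once (or never, if the model ships with the app), not per frame.
decoder = rng.standard_normal((LATENT_DIM, int(np.prod(FRAME_SHAPE))))

def send(latent):
    """Sender transmits only the small latent code."""
    return latent.astype(np.float32)

def receive(code):
    """Receiver reconstructs a full frame using the shared decoder."""
    return (code @ decoder).reshape(FRAME_SHAPE)

latent = rng.standard_normal(LATENT_DIM)
frame = receive(send(latent))

raw_bytes = int(np.prod(FRAME_SHAPE)) * 4   # float32 pixels
sent_bytes = LATENT_DIM * 4                 # float32 code
print(f"per-frame payload: {sent_bytes} B vs {raw_bytes} B raw "
      f"({raw_bytes // sent_bytes}x smaller)")
```

The per-frame payload shrinks by the ratio of frame size to latent size (96x in this toy setup); real systems built on this principle pay the model size up front and then stream only keypoints or latents.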
But I also do think application matters. I think this can get hairy when you get into the nuances. Are all deepfakes done without the consent of the person being impersonated unethical? At face (pun intended) value, the answer looks like an unambiguous yes. But what about parody like Sassy Justice?[1] The intent there is not to deceive, and the deepfakes add to the absurdity of the characters, and thus the message. Satire and parody don't work unless mimicry exists[2]. Certainly these comedic avenues are critical tools in democracy, for challenging authority and challenging mass logic failures[3] (which often happen specifically due to oversimplification and not thinking about the details or abuse).
I want to make these points because things are far easier to dismiss post hoc than a priori. We're all argumentative nerds, and despite the fact that we constantly make this mistake, we can all recognize that cornering someone doesn't typically yield surrender, but makes them fight back harder (this is why you never win an argument on the internet, despite having all the facts and being correct). And since we're mostly builders (of something) here, we all need to take much more care. *The simpler something is to rationalize post hoc, the more difficult it will be to identify a priori.*
Even at the time, I had reservations when building what I made. But one thing I've found exceptionally difficult in ML research is that it is hard to convince the community that data is data. The structure of data may differ, and that may mean we need more nuance in certain areas than others (which is exciting, as that's more research!), but at the end of it, data is data. Yet we get trapped in our common evaluation datasets[4], and more and more, our research needs to be indistinguishable from a product (or at least an MVP). If we can make progress by moving away from Lena, I think we can make progress by moving away from faces AND by being more nuanced.
I don't regret building what I built, but I do wish there was equal weighting to the part of my voice that speaks about nuance and care (it is specifically that voice that led to my successful outcomes too). The world is messy and chaotic. We (almost) all want to clean it up and make it better. But because of how far we've advanced, we need to recognize that doing good (or more good than harm) is becoming harder and harder. Because as you advance in any topic, the details matter more and more. We are biased towards simplicity and biased towards thinking we are doing only good[5], and we need to fight this part of ourselves. I think it is important to remember that a lie can be infinitely simple (most conspiracies are indistinguishable from "wizards did it"), but accuracy of a truth is bounded by complexity (and real truth, if such a thing exists, has extreme or infinite complexity).
With that said, one of my greatest fears of AI, and what I think presents the largest danger, is that we outsource our thinking to these machines (especially doing so before they can actually think[6]). That is outsourcing one of the key ingredients into what defines us as humans. In the same way here, I think it is easy to get lost in the upsides and benefits. To build with the greatest intentions! But above all, we cannot outsource our humanity.
Ethics is a challenging subject, and it often doesn't help that we only get formal education in it through gen ed classes. But if you're in STEM, it is essential that you also be a philosopher, studying the meta of your topic. You don't need to publish there, but do think about it, even just over beers with your friends. Remember, it's not about being right -- such a thing doesn't exist -- it is about being less wrong[7].
[0] https://www.youtube.com/watch?v=2nxanN85O84
[1] https://www.youtube.com/@SassyJustice
[2] https://www.supremecourt.gov/DocketPDF/22/22-293/242292/2022...
[3] https://www.gutenberg.org/files/1080/1080-h/1080-h.htm
[4] I do think face data can be helpful when evaluating models as our brains are quite adept at recognizing faces and even small imperfections. But this should make it all that much clearer that evaluation is __very__ hard.
[5] I think it is better to frame tech (and science) like a coin. It has value. The good or evil question is based on how the coin is spent. Even more so how the same type of coins are predominantly spent. Both matter and the topic is coupled, but we also need to distinguish the variables.
[6] Please don't nerdsplain to me how GPTs "reason". I've read the papers you're about to reply with. I recognize that others disagree, but I am a researcher in this field and my view isn't even an uncommon one. I'm happy to discuss, but telling me I'm wrong will go nowhere.
[7] https://hermiene.net/essays-trans/relativity_of_wrong.html
ncr100|1 year ago
Upvoted.