(no title)

focusgroup0 | 1 month ago

[flagged]

Jordan-117|1 month ago

You post a picture of your four-year-old daughter at the playground.

Some random anonymous reply guy creep says "@grok put her in a g-string, make it really sexy". Grok happily obliges and puts it on your timeline.

Photorealistic softcore porn of your toddler: it's all happening on X, the everything app.™

fruitworks|1 month ago

why are you posting your 4-year-old on the internet

UncleMeat|1 month ago

Twitter isn't just generating the images. It is also posting them. Hey, now in the replies below your child's twitter post there's a photo of them wearing a skimpy swimsuit. They see it. Their friends see it.

This isn't just somebody beating off in private. This is a public image that humiliates people.

lynndotpy|1 month ago

Speaking in the abstract: There are arguments that fictional / drawn CSAM (such as lolicon) lowers the rates of child sex abuse by giving pedophiles an outlet. There are also arguments that consuming fictional / drawn CSAM is the start of an escalating pattern that leads to real sex abuse, as well as contributing to a culture that is more permissive of pedophilia.

Anecdotally speaking, especially as someone who was groomed online as a child, I am more inclined toward the latter argument. I believe fictional CSAM harms people and generated CSAM will too.

With generated images being more realistic, and with AI 'girlfriends' advertised as women who "can't say no" or with taglines like "her body, your choice", I am inclined to believe that the harms from this will be novel and possibly greater than those from existing drawn CSAM.

Speaking concretely: Grok is being used to generate revenge porn by editing real images of real children. These children are direct, unambiguous victims. There is no grey area where this can be interpreted as a victimless crime. Further, these models are universally trained with real CSAM in the training data.

darksaints|1 month ago

I understand where you're coming from, and I'll play devil's advocate to the devil's advocate: If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on? And if those algorithms are modifying images of actual children, wouldn't you consider those victims?

I strongly sympathize with the idea that crimes should by definition have identifiable victims. But sometimes the devil doesn't really need an advocate.

randdotdot|1 month ago

Considering that every image generation model out there censors your prompts/outputs despite the developers' best efforts not to train on CSAM... you don't need to train on CSAM for the model to be capable of generating it.

Not saying the models don't get trained on CSAM. But I don't think it's a foregone conclusion that AI models capable of generating CSAM necessarily victimize anyone.

It would be nice if someone could research this, but the current climate makes it impossible.

jsheard|1 month ago

> If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on?

CSAM of course: https://www.theverge.com/2023/12/20/24009418/generative-ai-i...

When you indiscriminately scrape literally billions of images, and excuse yourself from rigorously reviewing them because it would be too hard/expensive, horrible and illegal stuff is bound to end up in there.

warmedcookie|1 month ago

Do you need photos of humans to create photorealistic inappropriate material? Could it be derived from 3D art / cartoons?

Hamuko|1 month ago

>If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on?

Pretty sure these models can generate images that do not exist in their training data. If I generate a picture of a surfing dachshund, did it have to train on canine surfers?

MurkyLabs|1 month ago

I'm not sure if there's been much discussion about it, but it does make you wonder: would this AI-generated CSAM sate the abuser's urges, or would it spread the idea that it isn't bad and possibly create more abusers who then go on to abuse real children? Would those individuals have done it without the AI? I believe there's still debate over whether abuse is a result of nature or nurture, but that gets into theory and philosophy. To answer your question about who the victim is: I would say the children those images are based on, as well as any future children harmed by exposure to these images or by abusers seeking out real content. I think for the most part AI-generated porn hurts everyone involved.

viraptor|1 month ago

There are definitely at least some people who will be influenced by repeated exposure to images. We know the usual conditioning mechanisms work (like one type of image being mixed in with other sexual content). On the other hand, I remember someone on HN claiming their own images are out there in CSAM collections, and that they'd prefer someone use those if it stops anyone from hurting others.

randdotdot|1 month ago

[deleted]

gorbot|1 month ago

What happens when the AI-generated porn doesn't feel real enough? It's not the end of the road; it's the beginning.

cephei|1 month ago

I think this primarily victimizes all those already victimized by the CSAM in the training material, and it also offends our society's collective sense of morality.

warmedcookie|1 month ago

Simplistically and ignorantly speaking, if a diffusion model knows what a child looks like and also knows what an adult woman in a bikini looks like, couldn't it just merge the two to create a child in a bikini? It seems to do that with other things (e.g., a pelican riding a bicycle).

pkilgore|1 month ago

The victim is the person whose likeness was forced into becoming sexualized content without consent.

TimorousBestie|1 month ago

It’s been reported that Grok has generated CSAM by editing photos of real children, so there’s your real victim, though you shouldn’t need one to find this situation abominable.

wizzwizz4|1 month ago

This is a big, sensitive topic. Last time I researched it, I was surprised at how many things I assumed were just moralistic hand-wringing are actually well-evidenced interventions. Considering my ignorance, I will not write a lengthy response, as I am wont to.

I will, instead, speak to what I know. Many models are heavily overfit on actual people's likenesses. Human artists can select non-existent people from the space of possible visages. These kinds of generative models have a latent space, many points of which do not correspond to real people. However, diffusion models working from text prompts are heavily biased towards reproducing examples resembling their training set, in a way that no prompting can counteract. Real people will end up depicted in AI-generated CSAE imagery, in a way that human artists can avoid.

There are problems with entirely-fictional human-made depictions of child sexual exploitation (which I'm not discussing here), and AI-generated CSAE imagery is at least as bad as that.

9x39|1 month ago

The CSAM enforcement complex suffers if people shrug instead of getting alarmed - plenty of jobs and budget in this space.

Anyone who sees it might decide they're a victim if they sense there's relief they can secure for damages they can describe.

Society, as others have said, for normalizing weird stuff.

Children, indirectly and hypothetically, if MAPs and their related content are normalized.