darksaints | 1 month ago

I understand where you're coming from, and I'll play devil's advocate to the devil's advocate: If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on? And if those algorithms are modifying images of actual children, wouldn't you consider those victims?

I strongly sympathize with the idea that crimes should by definition have identifiable victims. But sometimes the devil doesn't really need an advocate.

randdotdot | 1 month ago

Considering that every image generation model out there censors your prompts/outputs, despite its creators trying their best not to train on CSAM... you don't need to train on CSAM for a model to be capable of generating it.

Not saying the models don't get trained on CSAM. But I don't think it's a foregone conclusion that AI models capable of generating CSAM necessarily victimize anyone.

It would be nice if someone could research this, but the current climate makes it impossible.

jsheard | 1 month ago

> If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on?

CSAM of course: https://www.theverge.com/2023/12/20/24009418/generative-ai-i...

When you indiscriminately scrape literally billions of images, and excuse yourself from rigorously reviewing them because it would be too hard/expensive, horrible and illegal stuff is bound to end up in there.

Jordan-117 | 1 month ago

That's probably incidental, horrible as it is. Models don't need training data for everything imaginable, just enough constituent concepts in combination: there's enough imagery of children's bodies (including non-sexual nudity) and enough porn to generate a combination of the two, the same way a model can produce a hybrid giraffe-shark-clown on a tricycle despite never having seen one in its training data.

The biggest issue here is not that models can generate this imagery, but that Musk's Twitter is enabling it at scale with no guardrails, including spamming them on other people's photos.

warmedcookie | 1 month ago

Do you need photos of humans to create photorealistic inappropriate material? Could it be derived from 3D art / cartoons?

Hamuko | 1 month ago

> If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on?

Pretty sure these models can generate images that do not exist in their training data. If I generate a picture of a surfing dachshund, did the model have to train on canine surfers?