darksaints | 1 month ago
I strongly sympathize with the idea that crimes should by definition have identifiable victims. But sometimes the devil doesn't really need an advocate.
randdotdot | 1 month ago
I'm not saying the models don't get trained on CSAM. But I don't think it's a foregone conclusion that AI models capable of generating CSAM necessarily victimize anyone.
It would be nice if someone could research this, but the current climate makes that impossible.
jsheard | 1 month ago
CSAM of course: https://www.theverge.com/2023/12/20/24009418/generative-ai-i...
When you indiscriminately scrape literally billions of images, and excuse yourself from rigorously reviewing them because it would be too hard/expensive, horrible and illegal material is bound to end up in there.
Jordan-117 | 1 month ago
The biggest issue here is not that models can generate this imagery, but that Musk's Twitter is enabling it at scale with no guardrails, including letting users spam it onto other people's photos.
Hamuko | 1 month ago
Pretty sure these models can generate images that do not exist in their training data. If I generate a picture of a surfing dachshund, did the model have to be trained on images of surfing dogs?