top | item 43429658

sparky_ | 11 months ago

Not a lawyer, but this sure seems to open a legal and ethical can of worms.

Image generation models capable of generating this type of content would necessarily need to be trained on the real thing, the possession of which is inarguably illegal and immoral.

So how could the model be legally or ethically trained? And if they _cannot_ be legally or ethically trained, then how can the _use_ of those models be okay?

What will be the implications of this in cases where _real_ CSAM was produced or possessed? Certainly this opens the door to a whole plethora of new "it's AI art, I swear!" defenses. After all, how can one definitively prove whether CSAM is authentic, unless the chain of production is verified?

From the article:

> ...If purely private possession of AI-CSAM is constitutionally protected under current caselaw but production is not, then using AI models (even locally-hosted ones) to generate child obscenity in one’s own home is not wholly insulated from criminal prosecution. Subsequently transmitting it to someone else, especially someone underage, is also grounds for liability...

Can of worms, ye be released!

braiamp|11 months ago

This is a common misunderstanding. The model knows how a naked woman looks, and it also knows how a child looks; it puts two and two together and voila. It doesn't need to be trained on the real thing to be able to generate it.

wavemode|11 months ago

Well, maybe. Maybe not. See "Why can't ChatGPT draw a full glass of wine?"

https://youtu.be/160F8F8mXlo

Diffusion models do possess some capability to synthesize ideas, but that capability does not necessarily generalize to every possible use case. So it's impossible to say for certain that that is what is happening.

delecti|11 months ago

Does that necessarily follow? Wouldn't that be prone to outputting small naked adult women, and/or naked children with boobs?

LinuxBender|11 months ago

I don't believe that is true. A woman and a child have distinct characteristics that are not interchangeable. A child, for example, can be detected by the shape of the nose and nostrils, as just one data point. There are many more data points that psychoanalysts use to determine if a person is attracted to children. AI would have to understand quite a bit of biology and how humans develop to get this right.

anigbrowl|11 months ago

Statistically, you'd expect this to result in depictions of children with pubic hair - some adults opt to get rid of theirs, but most have it. Are you sure you're not projecting your prior knowledge about human biology onto an image transformer model?

roenxi|11 months ago

> Image generation models capable of generating this type of content would necessarily need to be trained on the real thing,

I doubt that is so. In practice they might be trained on the real thing, but models generalize pretty well. It is going to be technically possible to train a model on other material (children, nudity, and non-CSAM abuse scenes - or maybe not even that) and have it generate CSAM.

But even if it were true, that would only make training the model illegal and ethically dubious. We use a tonne of technologies whose creators were legally and morally dubious. It's never been an ongoing issue before. So once the model is created, there isn't a good reason to encumber it by how it was created.

davisp|11 months ago

I’m gonna give this a very charitable read by saying that while I find the means by which the treatment of burn victims was advanced abhorrent, we as a society have still benefited from those means.

> So once the model is created there isn't a good reason to encumber it by how it was created.

I am trying to be very specific here. I assume no untoward motivations from the parent commenter. I am not intending to cast aspersions. Whoever wrote this, I feel no ill will for you and this is not meant as a personal slight.

And I will be very clear, this statement as written could probably be defended because of the “by how it was created” clause.

However, “So once the model is created there isn’t a good reason to encumber it” is so… fucking I don’t even know, because what the actual fuck?

I apologize for the profanity, I really do. But, really? Are you fucking kidding me?

These models should not exist. Ever. By any means. Do not pass Go. Go directly to jail.

I understand the engineering brain enough to contemplate abstract concepts with detachment. That’s all I think happened here. But holy fuck, please pause and consider things a bit.

RajT88|11 months ago

> Certainly this opens the door to a whole plethora of new "it's AI art, I swear!" defenses

You are probably right, given what we saw with all the porn popup adware back in the 90's and 2000's. A friend of mine was a malware analyst for the FBI for a while.

In all the CSAM possession cases she heard about, the defense was "malware did it". In nearly all of those cases the jury convicted - 100% of her cases, for sure.

At some point, using the defense that everyone else uses and fails with is probably going to become a liability. Shit, I am sure people are already trying this defense and failing!

grepfru_it|11 months ago

It only went to court because they had enough evidence to prove it was not malware. You have excluded all of the possible cases that used the malware defense and pled out or never went to trial.

Similarly, I think the AI art excuse may be an uphill battle, but not an impossible one to mount.

JKCalhoun|11 months ago

While disgusting, if the courts insist on allowing this, I can try to comfort myself with this thought: I don't doubt that if you find one "AI CSAM" image on someone's drive and keep digging, you'll find illegal stuff too.

Sick people will still go to jail.

dragonwriter|11 months ago

> Image generation models capable of generating this type of content would necessarily need to be trained on the real thing

This is absolutely not true. Generalization is a key capability of image generation models.

> Certainly this opens the door to a whole plethora of new "it's AI art, I swear!" defenses.

The worst justification for a criminal prohibition that I can think of is that it provides a convenient out for the difficulty of proving another, more clearly warranted, crime.

> After all, how can one definitely prove that CSAM is authentic or not, unless the chain of production is verified?

"Beyond a reasonable doubt" is not, and never had been, "definite".