There are multiple valid reasons to fight realistic computer-generated CSAM content.
myrmidon|25 days ago
Uncontrolled proliferation of AI-CSAM makes detection of "genuine" material much harder and prosecution of perpetrators more difficult, and, specifically in many of the Grok cases, it harms young victims who were used as templates for the material.
Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.
Eisenstein|25 days ago
I don't follow. If the prosecutor can't find evidence of a crime and a person is not charged, that is considered harmful? By that logic, the Fifth Amendment would fall under the same category, and so would encryption. Making law enforcement work harder to find evidence of a crime cannot be criminalized unless you can come up with a reason why the actions themselves deserve to be criminalized.
> specifically in many of the Grok cases, it harms young victims who were used as templates for the material.
What are the criteria for this? If something is suitably transformed such that the original model for it is not discernible or identifiable, how can it harm them?
Do not take these as arguments against the idea you are arguing for, but as rebuttals of arguments that are not convincing, or that, even if they were, would be terrible if applied generally.
pjc50|25 days ago
Movie ratings are a good example of a system for restricting who sees unacceptable content, yes.
ascagnel_|25 days ago
There's basically no consent with what Grok is doing.
KaiserPro|25 days ago
The "oh, it's Photoshop" defence was an early one, and it required the law in the UK to change to cover "depictions" of children, so that people who style themselves "ephebophiles" don't have an out for creating/distributing illegal content.
master-lincoln|25 days ago
unknown|25 days ago
[deleted]
mnewme|25 days ago
As a father, I believe there shouldn't be any CSAM content anywhere.
And consider that it has apparently already been shown that these models had CSAM in their training data.
Also, what about the nudes of actual people? That is an invasion of privacy.
I am shocked that we are even discussing this.
thrance|25 days ago
[deleted]
master-lincoln|25 days ago
I don't see it...
mnewme|25 days ago
cess11|25 days ago
[deleted]
SecretDreams|25 days ago
exodust|25 days ago
Speaking of freedom, I lost posting permission yesterday after my earlier post. Even though replies were insinuating vile things about me, my right of reply was taken away. We should never diminish the "name of freedom" as you've just done.
It should go without saying that CSAM is revolting. Who wants to see that stuff? Not me, not most people. Grok can't make that content; maybe someone got around its restrictions temporarily. I've always found Grok heavily censored: it refuses to analyse an image of the statue of David because of "naughty bits". The ironic and sad thing is that a fig leaf gets around that restriction. So the accusation that Grok has no guardrails and can generate CSAM seems, at best, an anomaly, or a lie.