Obvious disclaimer first: I do not condone imagery of child abuse; I believe this guy has some kind of psychological problem and needs professional help; and AI-generated deepfakes can cause real harm, so proper regulation is needed.
With that out of the way:
> the generative AI wrinkle in this particular arrest shows how technology is generating new avenues for crime and child abuse.
Is it really child abuse if no children were involved? Does that mean that AI-generated imagery of some protected group being harmed causes actual harm to that protected group? Not saying it doesn't, but it may be worth thinking about.
> The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified, [...] that is a much harder problem to fix.
Sarcastic: people downloading and modifying open source software is a major problem indeed; hopefully a solution can be found.
> A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography
I'd bet distribution of CSAM is the crime that got him prosecuted, not modifying OSS.
> Is it really child abuse if no children were involved?
I think there is an empirical question one step beyond this. Does a pedophile who sees AI child porn get inured to it and then go on to try to act out fantasies with real victims? Or does this give pedophiles a way to satiate their desires indefinitely with only AI-based content, and lead to a smaller proportion abusing actual kids?
A second order issue is that distributing child porn is claimed to create demand for child porn which leads to more abuse. If there were no criminal penalties for purely AI-generated CSAM and the normal criminal penalties remained for CSAM that in any way derived from images of actual kids, would the cost-benefit difference push most consumers to demand only AI-generated stuff?
I'm not saying this is definitely the case, but I think it's at least plausible that more AI-generated CSAM would reduce actual sexual abuse of children, and from a harm-reduction standpoint, it should be beneficial to have more of it ... and it's also plausible that pedophiles being able to generate more and more extreme material at will would make things worse ... and it's also likely legally and institutionally impossible to do the studies to determine which of these is actually true.
I think the path starts with asking, "How do things get normalized? What are the tools of normalization?"
I can, will, and do create whatever imagery I want, no matter how I create it, and no matter what anyone else's opinion about that is. I will look at or think about that potentially depraved shit as often, hard, and long as I want, I will feel about that however I want, and no one in this big world of control freaks can or will do anything about any of that.
In Australia, someone was jailed for possessing a Simpsons cartoon in 2008:
https://www.computerworld.com/article/1510365
That seemed like an absurd overreach at the time. But Australia has, until recently at least, always suffered from overreach in its censorship laws, so as an Australian I didn't find it surprising.
I always assumed the reason we didn't see the same thing in the US was the 1st Amendment. I guess it's too soon to know if this is just an aberration that will be fixed.
If the model was trained on CSAM then it's definitely still CSAM, like the comment in this thread comparing it to cut cocaine; but if it wasn't, it sure seems like a thought crime.
> Is it really child abuse if no children were involved?
I hope that there are people out there who know the true answer.
It takes a special kind of person to seek the truth about something that people feel so strongly about. The truth doesn't care about us or how we feel. Finding it is a thankless task full of distractions and dead ends. If a perspective makes sense and rings true to me, that says more about me than it does about whether the perspective describes reality.
Doesn't matter if an actual child was abused or not. Under the PROTECT Act, obscene child porn is illegal to produce or distribute whether actual children were involved or not. The Supreme Court has ruled that material deemed obscene is NOT subject to First Amendment protection.
How can you prove that zero children were involved at any point?
How does the model generate CSAM without it either being in the training material or fed to the model as an input?
> Is it really child abuse if no children were involved? Does that mean that AI-generated imagery of some protected group being harmed causes actual harm to that protected group? Not saying it doesn't, but it may be worth thinking about.
Deepfakes of real children are real children being involved.
Outside of that, my response is, “how would you create a compelling CSAM image with nothing resembling the target in the training data?”
> Is it really child abuse if no children were involved?
The "harm" principle which is very popular these days to see if things are bad or good or punishable or not, I think it has a lot of problems. There are a lot of things that don't map onto this principle well. Social cohesion is a huge one. Something may not directly harm someone, but it can harm the fabric of society and many other things.
It's no different than the logic behind medication regulation. Some substances are known to induce bad/undesirable/harmful behavior and need to have access restricted. There may be a scientific argument about whether that logic holds in this particular case, but the legal paradigm is sound.
Yes, basically. You can absolutely criminalize fake images.
On the opposite end of this spectrum, there is interest among digital forensics examiners in some kind of automated capability for detecting child porn. Such a capability would speed up the process and reduce the mental/emotional load on examiners having to deal with this sensitive type of content. Automated processing could also reduce the risks of this material being mishandled as evidence. In a report presented at the 2019 Digital Forensics Research Conference, a survey of forensics examiners showed heightened interest in AI/ML models for this application. They discuss some prior work, recent attempts, and challenges in reaching this goal.
https://dfrws.org/wp-content/uploads/2019/11/2019_USA_pres-a...
https://dfrws.org/wp-content/uploads/2019/06/2019_USA_paper-...
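For context on how the automated side typically works today: deployed systems mostly rely on perceptual-hash matching against databases of known material (e.g. Microsoft's PhotoDNA, Meta's PDQ, the NCMEC hash lists), while the AI/ML models the survey asks about aim to flag previously unseen content. A minimal sketch of the hash-matching idea in Python, using a simple average hash as a stand-in for the robust proprietary hashes (all names here are illustrative, not from the report):

    # Illustrative sketch only: hash-based triage that flags files matching
    # known-bad hashes so examiners need not view the material directly.
    from PIL import Image

    def average_hash(path, size=8):
        """Downscale to size x size grayscale; one bit per pixel above the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a, b):
        """Count of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    def flag_for_review(path, known_hashes, max_distance=5):
        """True if the image is a near-duplicate of any catalogued image."""
        h = average_hash(path)
        return any(hamming(h, k) <= max_distance for k in known_hashes)

The catch, and presumably a driver of the interest in ML classifiers, is that hash matching only finds images already catalogued somewhere; it says nothing about novel or generated material.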
It's typical for authoritarians, not specific to AI. There are laws hiding all over the place that make drawing pictures illegal. Another Florida Man, Mike Diana, was convicted and sentenced for his (absurdly far from pornography or realism) zine-comic book Boiled Angel, which had a circulation of 300.
For 3 years of probation, he was banned from drawing at all, even for personal use.
https://cbldf.org/2016/09/mike-diana-case-still-resonates-in...
Florida defines child porn as depicting anyone under the age of 18, so all they need to do is say the AI image depicts a 17-year-old. Vaguely written laws are very expensive to fight.
Important callout beyond the headline: "Last year, the National Center for Missing & Exploited Children received 4,700 reports of generated AI child porn, with some criminals even using generative AI to make deepfakes of real children to extort them."
The original rationale for criminalizing the possession of child porn was that a crime was inherently committed in its creation, and possession is participation in that crime. I think this is a correct conclusion.
I can't see this rationale extending to generated imagery (deepfakes aside). No victim exists.
Vaguely gesturing at social harm is not principled enough, in my opinion. One can point to actual crime and actual harm for filmed and photographed child pornography. For generated imagery, one can only point to one's own personal revulsion as "harm".
Florida law defines anything depicting someone under the age of 18 as "child porn" - so if your catgirl doesn't look at least 30, you are probably heading to prison or an expensive court battle, or both.
People are not talking about how revolutionary this is. This isn't AI generating content to compete with artists. This is software running on standard hardware that turns electricity into material more illegal than cocaine. Just think of how that would impact the market for cocaine if suddenly everyone could easily make it at home from common ingredients.
We are spiraling very quickly towards a "media creation box", standalone software that will generate whatever content a person might want without any external connections. There will be huge societal ramifications. We think media bubbles are bad now, but just wait until everyone can live inside their own bubble filled with locally-generated content to match their increasingly warped world views.
i have not followed the topic since it's not my cup of beer, but what is the legal stance on hentai with characters that are depicted as under 18 (or whatever age of consent is relevant)?
edit: https://en.m.wikipedia.org/wiki/Legal_status_of_fictional_po...
crazy how the same people who say fake pictures can cause enough harm to warrant jail time are also fine with dissemination of religious materials and violence porn on Netflix, all of which is causing way more harm. by a mile. it's not the harm, it's just you picking and choosing.
So now it's not downloading pics, but downloading AI models that can generate them. Are there people fine-tuning models with these illegal images? Or is it just a jailbroken model? In the former case the fine-tuner needs to be traced; the latter case is new law territory.
I suspect that most open image models are capable of creating illegal images without fine tuning (though perhaps with significant effort). Models are capable of generating images containing subject matter that has never been depicted by any human ever. It's not hard to imagine that a model that can produce images of nude bodies could adapt the apparent age of those bodies.
The legal challenges here will be important to follow.
No new law is necessary. The PROTECT Act makes it illegal to produce, distribute, or possess any sexually explicit imagery of a minor that does not have artistic or literary merit. It closes the "no actual children were involved, it's just fiction/cartoons" loophole.
In the US if I get caught selling you a gram of pure cocaine I get the same punishment as I would if I sold you a gram that’s only 20% pure. If I sold you a gram of some random powder and told you it is cocaine I am likely to be prosecuted all the same whether I knew it was fake or not.
That aside, the “fully synthetic CSAM with no children involved at all” idea relies very, very heavily on taking the word of the guy who you just busted with a hard drive full of CSAM.
His defense would essentially have to be “Your honor I pinky swear that I used the txt2img tab of automatic1111 instead of the img2img tab” or “I did start with real CSAM but the img2img tab acts as an algorithmic magic wand imbued with the power to retroactively erase the previous harm caused by the source material”
There is no coherent defense to this activity that boils down to anything other than the idea that the existence of image generators should — and does — constitute an acceptable means of laundering CSAM and/or providing plausible deniability for anyone caught with it.
The idea that there would be any pushback to arresting or investigating people for distributing this stuff boggles the mind. Inventing a new type of armor to specifically protect child abusers from scrutiny is a choice, not some sort of emergent moral ground truth caused by the popularization of diffusion models.
Generally in the US, you must be proven guilty, not proven innocent. The prosecution must prove that the defendant actually committed the crime. The defendant absolutely can claim plausible deniability.
Although currently it is not entirely clear whether mere possession should continue to constitute a crime by itself, or whether it should require actual child abuse (because the former used to imply the latter).