top | item 46470317

Grok Sexual Images Draw Rebuke, France Flags Content as Illegal

58 points | akutlay | 1 month ago | finance.yahoo.com

94 comments


akutlay|1 month ago

It seems X's Grok has become the first large LLM provider to weaken its content moderation rules. If people don't react strongly enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly because the barrier to entry is practically zero.

wolvoleo|1 month ago

True, CSAM should be blocked by all means. That's clear as day.

However I think for Europe the regular sexual content moderation (even in text chat) is way over the top. I know the US is very prudish but here most people aren't.

If you mention something erotic to a mainstream AI it will immediately shut down, which is super annoying because it blocks those discussion topics entirely. It feels a bit like foreign morals are being forced upon us.

Limits on topics that aren't illegal should be selectable by the user, not hard-coded to the most restrictive standard. Similar to the way I can switch off safe search in Google.

However CSAM generation should obviously be blocked and it's very illegal here too.

zajio1am|1 month ago

This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more absurd that people on Hacker News advocate for that.

NedF|1 month ago

[deleted]

TheAlchemist|1 month ago

What's amazing to me is that this is silenced by HN. It should be a major topic of discussion here.

chrisjj|1 month ago

What makes you say it is silenced by HN?

thrance|1 month ago

Grok breaks France's hate-speech laws all the time but they're only going after it because it can create images of naked people? Musk's propaganda nexus should have been banned years ago here, but not for this stupid reason.

johneth|1 month ago

It makes sexual images of real people without their consent. That's what's breaking the law.

josefritzishere|1 month ago

It would be Musk automating CSAM. This is how we're starting 2026?

chrisjj|1 month ago

The article doesn't mention CSAM. It is about "created sexualized images of people including minors" and CSAM is not that.

chrisjj|1 month ago

“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”

Not possible.

ben_w|1 month ago

> Not possible.

To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."

Also… I think they probably could solve this. AI image analysis is a thing. AI that estimates age from an image has been a thing for ages. It's not like the idea of throwing the entire internet's worth of images at a training run just to make a single "allowed/forbidden" filter is even that ridiculous compared to the scale of all the other things going on right now.

SpicyLemonZest|1 month ago

It's extremely possible! As the source article notes, the Grok developers specifically chose to make their AI more permissive of sexual content than their competitors, which won't produce such images. This isn't a scenario where someone developed a complex jailbreak to circumvent Grok's built-in protections.

dragonwriter|1 month ago

> “AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”

> Not possible.

Note that the description of the accusation earlier in the article is:

> The French government accused Grok on Friday of generating “clearly illegal” sexual content on X without people’s consent, flagging the matter as potentially violating the European Union’s Digital Services Act.

It may be impossible to perfectly regulate what content the model can create, but it is quite practical for the Grok product to enforce the consent of the user whose content is being operated on before content can be generated from it and, after the content is generated, before it can be viewed by or distributed to anyone else.
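The consent enforcement described above could be sketched roughly like this. Everything here is hypothetical (the registry, the function names, the status strings); it is only meant to show that gating generation on recorded consent is an ordinary product-level check, not a model-level one:

```python
# Hypothetical sketch of consent gating at the product layer: a request to
# generate content derived from someone else's images is held until that
# person has recorded consent. This reflects no real X/Grok API.

class ConsentRegistry:
    """Tracks which subjects have consented to derived content per requester."""

    def __init__(self) -> None:
        self._consents: set[tuple[str, str]] = set()

    def record_consent(self, subject_id: str, requester_id: str) -> None:
        self._consents.add((subject_id, requester_id))

    def has_consent(self, subject_id: str, requester_id: str) -> bool:
        return (subject_id, requester_id) in self._consents


def gate_generation(registry: ConsentRegistry,
                    subject_id: str,
                    requester_id: str) -> str:
    """Allow generation only if the subject has consented; otherwise hold."""
    if registry.has_consent(subject_id, requester_id):
        return "allowed"
    return "held_pending_consent"
```

The same check would run a second time before distribution, per the comment's point that both generation and viewing can be gated independently of the model itself.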

BigTTYGothGF|1 month ago

Then maybe they shouldn't go to market.

lokar|1 month ago

Then your business can fairly be ruled illegal.

You don't have the right to act in violation of the law merely because it's the only way to make a buck.

maplethorpe|1 month ago

Sure it is. Forbid training models on images of humans, humanoids, or living creatures, and they won't be able to generate images of those things. It's not like AI is some uncontrollable magic force that hatched out of an egg. It can only output what you put in.

xenospn|1 month ago

If it's possible to create a model that generates photorealistic images based on a single line of text, it is 100% possible to restrict the output.

wolvoleo|1 month ago

I'm sure it's possible. If anything they can just run an AI check after generation, similar to the way Google makes sure it doesn't return CSAM in its results. If Google can filter that, the AI providers can check their own output too.
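The post-generation check described here could be sketched as a simple gate: every generated image is scored by a safety classifier before release, and anything above a threshold is blocked. The scoring function below is a hypothetical stand-in (a real system would run a trained vision model), and the threshold value is arbitrary:

```python
# Hypothetical sketch of a post-generation safety gate: score each output
# with a classifier and only release images below a block threshold.
# The classifier here is a stub, not any provider's actual filter.

BLOCK_THRESHOLD = 0.5

def unsafe_score(image_bytes: bytes) -> float:
    """Stand-in for an ML safety classifier returning P(unsafe)."""
    # A real implementation would run a trained image-safety model here.
    return 0.9 if b"unsafe" in image_bytes else 0.1

def release_or_block(image_bytes: bytes) -> str:
    """Gate the generated image: block anything the classifier flags."""
    if unsafe_score(image_bytes) >= BLOCK_THRESHOLD:
        return "blocked"
    return "released"
```

The point the comment makes is that this gate sits entirely outside the generative model, so "the model can't be constrained" is not an argument against deploying it.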

belter|1 month ago

Possible or not, how about starting with a criminal investigation to force disclosure and find out whether Musk's company had child porn in its training data?