It seems X's Grok has become the first large LLM provider to weaken its content moderation rules. If people don't react strongly enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.
True, CSAM should be blocked by all means. That's clear as day.
However, I think for Europe the regular sexual-content moderation (even in text chat) is way over the top. I know the US is very prudish, but most people here aren't.
If you mention something erotic to a mainstream AI, it immediately shuts down, which is super annoying because it blocks using it for such discussion topics. It feels a bit like foreign morals are being forced upon us.
Limits on topics that aren't illegal should be selectable by the user, not hard-baked to the most restrictive standards, similar to the way I can switch off SafeSearch in Google (rough sketch below).
However, CSAM generation should obviously be blocked, and it's very illegal here too.
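Rough sketch of what I mean by user-selectable limits, with illegal categories hard-blocked regardless of settings (a made-up example, not any provider's real API):

    # Hypothetical sketch: per-user content settings, with illegal categories
    # blocked no matter what the user toggles.
    ALWAYS_BLOCKED = {"csam", "nonconsensual_imagery"}         # never user-configurable
    USER_TOGGLEABLE = {"erotic_text", "nudity", "graphic_violence"}

    def is_allowed(content_categories: set[str], user_settings: dict[str, bool]) -> bool:
        """True if this content may be shown to this user under their own settings."""
        if content_categories & ALWAYS_BLOCKED:
            return False                                       # hard block, full stop
        return all(user_settings.get(cat, False)               # off by default, like SafeSearch
                   for cat in content_categories & USER_TOGGLEABLE)

    settings = {"erotic_text": True, "nudity": False}          # an adult opting in to erotic text only
    print(is_allowed({"erotic_text"}, settings))               # True
    print(is_allowed({"nudity"}, settings))                    # False
    print(is_allowed({"csam"}, settings))                      # False, regardless of settings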
This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more so that people on Hacker News advocate for that.
Grok breaks France's hate-speech laws all the time, but they're only going after it because it can create images of naked people? Musk's propaganda nexus should have been banned here years ago, but not for this stupid reason.
To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."
Also… I think they probably could solve this. AI image analysis is a thing. AI that estimates age from an image has been a thing for ages. It's not as if the idea of throwing the entire internet's worth of images at a training run just to make a single "allowed/forbidden" filter is even that ridiculous compared to the scale of all the other things going on right now.
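A rough sketch of the kind of post-generation gate I mean; the classifier, categories, and thresholds here are placeholders I made up, not anyone's actual model or API:

    # Hypothetical sketch: run every generated image through a separate
    # "allowed/forbidden" safety classifier before it is ever returned.
    def classify_image(image_bytes: bytes) -> dict[str, float]:
        """Stand-in for a trained image-safety model returning per-category scores;
        a real system would call an actual classifier here."""
        return {"nudity_nonconsensual": 0.0, "estimated_minor": 0.0}

    BLOCK_THRESHOLDS = {"nudity_nonconsensual": 0.5, "estimated_minor": 0.1}

    def may_release(image_bytes: bytes) -> bool:
        """True only if no blocked category crosses its threshold."""
        scores = classify_image(image_bytes)
        return all(scores.get(cat, 0.0) < thr for cat, thr in BLOCK_THRESHOLDS.items())

    # Usage: the product withholds anything the filter flags instead of posting it.
    if not may_release(b"...generated image bytes..."):
        print("blocked by safety filter")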
It's entirely possible! As the source article notes, the Grok developers specifically chose to make their AI more permissive of sexual content than their competitors, whose models won't produce such images. This isn't a scenario where someone developed a complex jailbreak to circumvent Grok's built-in protections.
> “AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”
> Not possible.
Note that the description of the accusation earlier in the article is:
> The French government accused Grok on Friday of generating “clearly illegal” sexual content on X without people’s consent, flagging the matter as potentially violating the European Union’s Digital Services Act.
While it may be impossible to perfectly regulate what content the model can create, it is quite practical for the Grok product to enforce consent of the user whose content is being operated on, both before content can be generated based on it and, after the content is generated, before it can be viewed by or distributed to anyone else.
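A minimal sketch of what that consent gating could look like (names and storage here are hypothetical, not Grok's actual implementation):

    # Hypothetical sketch: require consent from the person whose content is used,
    # once before generation and again before anyone else can view the result.
    consent_records: set[tuple[str, str]] = set()    # (subject_user_id, requesting_user_id)

    def has_consent(subject_id: str, requester_id: str) -> bool:
        return subject_id == requester_id or (subject_id, requester_id) in consent_records

    def run_model(prompt: str) -> str:
        return f"<generated content for: {prompt}>"  # stand-in for the real generation call

    def generate_from_user_content(subject_id: str, requester_id: str, prompt: str) -> str | None:
        if not has_consent(subject_id, requester_id):
            return None                              # first gate: refuse before generating anything
        return run_model(prompt)

    def may_view(subject_id: str, viewer_id: str) -> bool:
        # second gate: content derived from someone's likeness isn't shown
        # or distributed to others without that person's consent.
        return has_consent(subject_id, viewer_id)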
Sure it is. Forbid training models on images of humans, humanoids, or living creatures, and they won't be able to generate images of those things. It's not like AI is some uncontrollable magic force that hatched out of an egg. It can only output what you put in.
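Taken literally, that amounts to filtering the training set with a detector before training ever starts; a toy sketch (the detector here is a placeholder, not a real model):

    # Hypothetical sketch: drop any candidate training image where a detector
    # thinks a person (or other excluded subject) is present.
    def contains_excluded_subject(image_bytes: bytes) -> bool:
        """Stand-in for a person/creature detector; a real pipeline would call one here."""
        return False

    def build_training_set(candidates: list[bytes]) -> list[bytes]:
        return [img for img in candidates if not contains_excluded_subject(img)]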
I'm sure it's possible. If anything, they can just run an AI check after generation, similar to the way Google makes sure it doesn't return CSAM in its results. If they can filter that, the AI providers can check their own output too.
You don't have the right to act in violation of the law merely because it's the only way to make a buck.