It seems X's Grok has become the first large LLM provider to weaken its content moderation rules. If people don't push back enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.
wolvoleo|1 month ago
However, I think that for Europe the standard sexual content moderation (even in text chat) is way over the top. I know the US is very prudish, but most people here aren't.
If you mention anything erotic to a mainstream AI, it immediately shuts down, which is super annoying because it blocks using it for those discussion topics. It feels a bit like foreign morals are being forced upon us.
Limits on topics that aren't illegal should be user-selectable, not hard-baked to the most restrictive standard, similar to the way I can switch off SafeSearch in Google.
However, CSAM generation should obviously be blocked; it's very illegal here too.
sam_lowry_|1 month ago
You can search Hugging Face for role-playing models that allow a decent level of erotic content, but even that doesn't guarantee a pleasant experience.
johneth|1 month ago
> It feels a bit like foreign morals are being forced upon us.
Welcome to the rest of the world, where US morals have been forced upon us for decades. You should probably get used to it.
nutjob2|1 month ago
If you think people here believe that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
More broadly, if you don't reasonably regulate your own models and related work, you attract government regulation.
NedF|1 month ago
[deleted]