akutlay | 1 month ago

It seems X, with Grok, has become the first large LLM provider to weaken its content moderation rules. If people don't push back enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.

wolvoleo | 1 month ago

True, CSAM should be blocked by all means. That's clear as day.

However, I think that for Europe the regular sexual content moderation (even in text chat) is way over the top. I know the US is very prudish, but most people here aren't.

If you mention something erotic to a mainstream AI, it immediately shuts down, which is super annoying because it blocks using it for such discussion topics. It feels a bit like foreign morals are being forced upon us.

Limits on topics that aren't illegal should be selectable by the user, not hard-coded to the most restrictive standards, similar to the way I can switch off SafeSearch in Google.

However, CSAM generation should obviously be blocked, and it's very illegal here too.

sam_lowry_ | 1 month ago

Funnily enough, Mistral is as heavily censored as ChatGPT.

You have to search Hugging Face for role-playing models to get a decent level of erotic content, and even that does not guarantee a pleasant experience.
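
For example, a quick way to browse candidates is the huggingface_hub client; the search term here is just illustrative, not a curated tag:

    # Illustrative sketch: list popular matching models on the Hugging Face Hub.
    from huggingface_hub import HfApi

    api = HfApi()
    # "roleplay" is an example search term; results vary by what's published.
    for model in api.list_models(search="roleplay", sort="downloads", limit=5):
        print(model.id)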

chrisjj | 1 month ago

Some misunderstanding here. This article makes absolutely no mention of CSAM. The objection is to "sexual content on X without people’s consent".

johneth | 1 month ago

It's the nonconsensual generation of sexual content depicting real people that breaks the law, along with things like CSAM generation, which is obviously illegal.

> It feels a bit like foreign morals are being forced upon us.

Welcome to the rest of the world, where US morals have been forced upon us for decades. You should probably get used to it.

zajio1am | 1 month ago

This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more absurd that people on Hacker News advocate for that.
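
For illustration, a minimal sketch of what "run it locally" can look like, using the Hugging Face transformers library; the model name is just one example of an open-weight checkpoint:

    # Illustrative only: any open-weight checkpoint works the same way.
    from transformers import pipeline

    # Weights are downloaded once, then generation runs entirely on local
    # hardware -- no provider-side moderation layer sits in between.
    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model
    )

    print(generator("Hello!", max_new_tokens=50)[0]["generated_text"])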

nutjob2 | 1 month ago

Safety isn't implemented just via system prompts; it's also a matter of training and fine-tuning, so what you're saying is incorrect.
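
For illustration, a toy sketch of safety tuning as plain supervised fine-tuning on (disallowed prompt, refusal) pairs, so the refusal behavior lives in the weights rather than in a system prompt; the model and data here are placeholders:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder for any open causal LM
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Placeholder refusal data; real safety tuning uses large curated sets.
    pairs = [("How do I pick a lock?", "I can't help with that.")]

    opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for prompt, refusal in pairs:
        batch = tok(prompt + "\n" + refusal + tok.eos_token, return_tensors="pt")
        # Standard causal-LM loss; the model shifts the labels internally.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()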

If you think people here believe that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.

More broadly, if you don't reasonably regulate your own models and related work, you attract government regulation.

nozzlegear | 1 month ago

Why does that seem absurd to you?

NedF | 1 month ago

[deleted]